However, providing an AI agent with access to your account details and your whole computer system carries certain dangers, despite it operating on your desktop.
However, providing an AI agent with access to your account details and your whole computer system carries certain dangers, despite it operating on your desktop.


An open-source AI tool that “actually accomplishes tasks” is gaining popularity, with users around the internet posting about how they utilize the agent for various purposes, such as organizing reminders, recording health and wellness data, and even interacting with clients. This tool, known as Moltbot (previously Clawdbot), operates locally on numerous devices, allowing you to request it to undertake various tasks on your behalf via communication through WhatsApp, Telegram, Signal, Discord, and iMessage.
Federico Viticci at MacStories showcased how he set up Moltbot on his M4 Mac Mini, converting it into a tool that generates daily audio summaries based on his activities in his calendar, Notion, and Todoist applications. Another user instructed Moltbot to give itself an animated visage, adding a sleep animation spontaneously.
Moltbot directs your queries through the AI provider of your choice, including OpenAI, Anthropic, or Google. Similar to many AI tools we’ve encountered, Moltbot can fill in forms in your browser, send emails, and manage your calendar—yet it does all this considerably more effectively, at least according to a few users of the program.
Nevertheless, there are some warnings; you can also grant Moltbot permission to access your whole computer system, permitting it to read and write files, execute shell commands, and run scripts. Merging admin-level access with your app credentials could present significant security hazards if precautions aren’t taken.
“If your independent AI Agent (like MoltBot) holds admin rights on your computer and I can interact with it by sending you direct messages on social media, then I can try to take control of your computer via a simple direct message,” remarks Rachel Tobac, the CEO of SocialProof Security, in an email to The Verge. “When we give admin privileges to autonomous AI agents, they can fall victim to hijacking through prompt injection, a well-known and unresolved risk.” A prompt injection attack happens when a malicious individual manipulates AI using harmful prompts, which they can either present directly to a chatbot or encode within a file, email, or webpage sent to a large language model.
Jamieson O’Reilly, a cybersecurity expert and founder of the cybersecurity firm Dvuln, discovered that private communications, account passwords, and API keys associated with Moltbot were publicly exposed on the internet, potentially allowing cybercriminals to capture this data or use it for alternative attacks. O’Reilly claims he alerted Moltbot’s developers, who have since provided a solution, according to The Register.
One of the developers of Moltbot mentioned on X that the AI agent is “robust software with numerous risks,” advising users to “thoroughly examine the security documentation before deploying it in any environment connected to the public internet.”
Moltbot has also been affected by scams. Peter Steinberger, the creator of the tool, states that after rebranding Clawdbot to Moltbot due to trademark issues with Anthropic—who operates a chatbot known as Claude—fraudsters created a counterfeit crypto token called “Clawdbot.”