However, allowing an AI agent to utilize your account details and entire computer system carries certain risks, even if it operates on your personal computer.
However, allowing an AI agent to utilize your account details and entire computer system carries certain risks, even if it operates on your personal computer.


A burgeoning open-source AI agent that “actually performs tasks” is gaining traction, with users sharing their experiences online about how they utilize the agent for various functions, including organizing reminders, tracking health metrics, and even interfacing with clients. The application, known as Moltbot (previously Clawdbot), operates locally on a range of devices, allowing users to ask it to execute tasks by chatting through WhatsApp, Telegram, Signal, Discord, and iMessage.
Federico Viticci at MacStories showcased how he set up Moltbot on his M4 Mac Mini, turning it into a utility that provides daily audio summaries based on his activities in calendar, Notion, and Todoist applications. Another user got Moltbot to give itself an animated visage and noted it added a rest animation autonomously.
Moltbot channels your inquiries through your preferred AI provider, like OpenAI, Anthropic, or Google. Compared to many known AI agents, Moltbot completes tasks like filling out forms in your web browser, sending emails on your behalf, and overseeing your calendar — but it reportedly does so with greater efficiency, at least according to some users of the application.
There are some important considerations; you can also authorize Moltbot to access your complete computer system, granting it the ability to read and write files, execute shell commands, and run scripts. The combination of administrative-level access to your device and your application credentials could present significant security threats if not handled with caution.
“If your autonomous AI Agent (such as MoltBot) has administrative access to your computer and I can interact with it via direct messages on social media, then I can potentially take control of your computer through a simple message,” Rachel Tobac, CEO of SocialProof Security, stated in a message to The Verge. “By granting administrative access to autonomous AI agents, they can be compromised via prompt injection, a well-researched and unresolved vulnerability.” A prompt injection attack transpires when a malicious individual exploits AI using harmful prompts, which they can either direct at a chatbot or embed within a file, email, or webpage directed to a large language model.
Jamieson O’Reilly, a cybersecurity expert and founder of Dvuln, found that private messages, account tokens, and API keys associated with Moltbot were left unsecured on the internet, possibly enabling hackers to acquire this sensitive information or use it for further exploits. O’Reilly mentioned he alerted Moltbot’s developers, who have since rolled out a solution, as reported by The Register.
One of Moltbot’s developers disclosed on X that the AI agent is “potent software with numerous risky aspects,” cautioning users to “thoroughly review the security documentation before deploying it in any public internet contexts.”
Moltbot has encountered scams as well. Peter Steinberger, the creator of the tool, indicated that after rebranding from Clawdbot to Moltbot due to trademark issues with Anthropic — which has a chatbot called Claude — fraudsters introduced a counterfeit crypto token named “Clawdbot”.