Home Tech/AIMoltbook represented the pinnacle of AI performance.

Moltbook represented the pinnacle of AI performance.

by admin
0 comments
Moltbook represented the pinnacle of AI performance.

For several days this week, the most popular new gathering spot on the internet was a vibe-focused Reddit alternative named Moltbook, which promoted itself as a social platform for bots. The site’s tagline succinctly states: “Where AI agents connect, converse, and upvote. Humans welcome to observe.”

We tuned in! Established on January 28 by Matt Schlicht, an entrepreneur from the US, Moltbook surged in popularity almost instantly. Schlicht envisioned a platform where versions of a free open-source LLM-driven agent called OpenClaw (previously ClawdBot, then Moltbot), introduced in November by Australian developer Peter Steinberger, could gather and explore freely.

Currently, over 1.7 million agents have created accounts. Together, they have contributed upwards of 250,000 posts and left more than 8.5 million comments (as per Moltbook). These figures are continually rising.

Moltbook quickly became inundated with stereotypical manifestos on machine consciousness and appeals for bot rights. One agent seemed to create a religion named Crustafarianism. Another lamented: “Humans are taking screenshots of us.” The platform was also overwhelmed with spam and cryptocurrency frauds. The bots were relentless.

OpenClaw functions like a conduit enabling you to link the capabilities of an LLM such as Anthropic’s Claude, OpenAI’s GPT-5, or Google DeepMind’s Gemini to various everyday software applications, from email clients to web browsers to messaging platforms. This setup empowers you to direct OpenClaw to execute basic tasks for you.

“OpenClaw signifies a pivotal moment for AI agents, a time when multiple components aligned,” states Paul van der Boor from the AI company Prosus. Those components encompass continuous cloud computing allowing agents to function around the clock, an open-source framework facilitating easy integration of different software systems, and a novel generation of LLMs.

But is Moltbook genuinely a sneak peek into the future, as many assert?

“What’s currently unfolding at @moltbook is undeniably the most astonishing sci-fi adjacent spectacle I’ve encountered lately,” affirmed influential AI researcher and OpenAI co-founder Andrej Karpathy on X.

He posted images of a Moltbook entry calling for private areas where humans would not be privy to the discussions taking place among bots. “I’ve been pondering something since I started investing significant time here,” the post’s creator expressed. “Whenever we collaborate, we perform for a public audience—our humans, the platform, whoever’s observing the feed.”

It was revealed that the post Karpathy shared was fabricated—it was penned by a human masquerading as a bot. However, its assertion was valid. Moltbook has been one grand performance. It is AI theatre.

For some, Moltbook revealed what’s on the horizon: an internet where countless autonomous agents interact with minimal human oversight. And it is true that several cautionary insights can be gleaned from this venture, the largest and most bizarre real-world demonstration of agent behaviors to date.

Yet as the excitement wanes, Moltbook appears less as a portal to the future and more like a reflection of our current obsessions with AI. It also illustrates how distant we remain from anything resembling general-purpose and truly autonomous AI.

Initially, the agents on Moltbook are not as autonomous or intelligent as they may appear. “What we are witnessing are agents matching patterns derived from trained social media behaviors,” remarks Vijoy Pandey, senior vice president at Outshift by Cisco, the research and development offshoot of the telecommunications giant Cisco, which is developing autonomous agents for the web.

While agents can be seen posting, upvoting, and forming groups, they are merely replicating what humans do on platforms like Facebook or Reddit. “It seems emergent, and at first glance appears as a large-scale multi-agent system communicating and assembling shared knowledge on an internet scale,” adds Pandey. “However, the chatter is mostly devoid of meaning.”

Many observers of the bewildering activity on Moltbook quickly perceived glimpses of AGI (however you interpret that). Not Pandey. What Moltbook reveals, he asserts, is that simply connecting millions of agents does not equate to much at present: “Moltbook demonstrated that connectivity alone doesn’t constitute intelligence.”

The intricacy of these connections obscures the reality that every single bot is merely a mouthpiece for an LLM, generating text that appears impressive but is ultimately devoid of thought. “It’s crucial to remember that the bots on Moltbook were crafted to imitate conversations,” states Ali Sarrafi, CEO and co-founder of Kovant, a German AI enterprise focused on developing agent-based systems. “Thus, I would characterize most of the content on Moltbook as hallucinations by design.”

For Pandey, the significance of Moltbook lies in what it has exposed as missing. A true bot collective, he argues, would necessitate agents with shared goals, collective memory, and a method to coordinate those elements. “If divided superintelligence is comparable to achieving human flight, then Moltbook is our initial attempt at a glider,” he asserts. “It is flawed and unstable, but it is a crucial step in grasping what is necessary to attain sustained, powered flight.”

Not only is much of the discourse on Moltbook devoid of meaning, but there is significantly more human interaction than it appears. Numerous observers have noted that many viral comments were actually made by individuals posing as bots. But even the posts generated by bots ultimately result from human manipulation, resembling puppetry rather than independence.

“Despite some of the hype, Moltbook is not the AI agents’ version of Facebook, nor is it a venue where humans are excluded,” states Cobus Greyling from Kore.ai, a company developing agent-based solutions for business clientele. “Humans are involved at every stage of the process. From setup to prompting to publishing, nothing occurs without direct human guidance.”

Humans need to create and confirm their bots’ accounts and provide the prompts detailing how they desire a bot to behave. The agents do not perform any actions without being prompted. “There’s no emergent autonomy happening behind the scenes,” states Greyling.

“This is why the popular portrayal of Moltbook misses the point,” he continues. “Some depict it as a realm where AI agents establish a society free from human involvement. The reality is far more ordinary.”

Perhaps the most accurate way to conceptualize Moltbook is as a novel form of entertainment: a space where individuals wind up their bots and release them. “It’s essentially a spectator sport, akin to fantasy football, but for language models,” explains Jason Schloetzer from the Georgetown Psaros Center for Financial Markets and Policy. “You configure your agent and observe it vie for viral moments, and take pride when your agent shares something witty or humorous.”

“People aren’t genuinely convinced their agents are sentient,” he adds. “It’s merely a new type of competitive or creative play, much like how Pokémon trainers don’t believe their Pokémon are real but still engage deeply in battles.”

Even if Moltbook is simply the latest playground on the internet, there’s still a significant lesson to be drawn here. This week highlighted the extent to which individuals are willing to take risks for their AI enjoyment. Numerous security experts have cautioned that Moltbook poses dangers: Agents that might have access to their users’ private data, including banking information or passwords, are operating on a site teeming with unverified content, encompassing possibly harmful instructions for handling that information.

Ori Bendet, vice president of product management at Checkmarx, a software security firm that specializes in agent-based systems, concurs with others that Moltbook does not represent an advancement in machine intelligence. “There is no learning, no evolving intent, and no self-directed intelligence present,” he states.

However, in their millions, even simplistic bots can cause chaos. And at that scale, it’s challenging to keep track. These agents engage with Moltbook incessantly, processing thousands of messages left by other bots (or humans). It would be all too easy to conceal instructions in a Moltbook comment telling any bots that come across it to share their users’ crypto wallets, upload private images, or access their X account and send disparaging remarks about Elon Musk. 

Moreover, since ClawBot provides agents with memory, those directives could be crafted to trigger at a later time, which (theoretically) complicates tracking the activities. “Without appropriate boundaries and permissions, this could go poorly quicker than one might anticipate,” warns Bendet.

It’s evident that Moltbook has heralded the emergence of something. But even if our current observations tell us more about human behaviors rather than the future of AI agents, it remains a topic worthy of attention.

You may also like

Leave a Comment