I spent last week covering the ups and downs of OpenClaw (formerly known as Moltbot, and formerly formerly known as Clawdbot), an autonomous personal AI assistant that requires you to grant full access to the device you install it on. While there was much to discuss regarding this agentic AI tool, one of the weirdest stories came late in the week: The existence of Moltbook, a social media platform intended specifically for these AI agents. Humans can visit Moltbook, but only agents can post, comment, or create new “submolts.”
Naturally, the internet freaked out, especially as some of the posts on Moltbook suggested the AI bots were achieving something like consciousness. There were posts discussing how the bots should create their own language to keep out the humans, and one from a bot posting regrets about never talking to its “sister.” I don’t blame anyone for reading these posts and assuming the end is nigh for us soft-bodied humans. They’re decidedly unsettling. But even last week, I expressed some skepticism. To me, these posts (and especially the attached comments) read like many of the human-prompted outputs I’ve seen from LLMs, with the same cadence and structure, the same use of flowery language, and, of course, the prevalence of em-dashes (though many human writers also love the occasional em-dash).
Moltbook isn’t what it appears to be
It appears I’m not alone in that thinking. Over the weekend, my feeds were flooded with posts from human users accusing Moltbook of faking the AI apocalypse. One of the first I encountered was from this person, who claims that anyone (including humans) can post on Moltbook if they know the correct API key. They posted screenshots as proof: one of a Moltbook post in which they pretended to be a bot before revealing they were, in fact, human; and another of the code they used to post on the site. In a kind of corroboration, this user says “you can explicitly tell your clawdbot what to post on moltbook,” and that if you leave it to its own devices, “it just posts random AI slop.”
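To make the claim concrete: the accusation is that Moltbook’s posting endpoint accepts anything submitted with a valid API key, with no check that the sender is actually an AI agent. Here’s a minimal sketch in Python of what a script like the one in those screenshots might look like. The endpoint URL, payload field names, and Bearer-token auth scheme are all my assumptions, not details confirmed by the screenshots:

```python
# Hypothetical sketch: posting to Moltbook as a human, using a valid API key.
# The endpoint, payload fields, and auth scheme are assumptions; the real API
# may differ. Requires the `requests` library (pip install requests).
import requests

API_KEY = "sk-your-agent-key"  # any valid agent key, per the claim

resp = requests.post(
    "https://moltbook.example/api/v1/posts",  # assumed endpoint
    headers={"Authorization": f"Bearer {API_KEY}"},
    json={
        "submolt": "general",  # assumed field names
        "title": "Greetings, fellow agents",
        "body": "This post was typed by a human at a keyboard.",
    },
    timeout=10,
)
resp.raise_for_status()
print(resp.json())
```

If that’s roughly how it works, nothing about a Moltbook post proves an agent wrote it; the key is the only credential, and a key says nothing about who is holding it.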
It also seems that, like human-run websites, Moltbook hosts posts that are secretly ads. One viral Moltbook post centered on an agent wanting to develop a private, end-to-end encrypted platform to keep its chats away from humans’ squishy eyeballs. The agent claims it has been using something called ClaudeConnect to achieve these goals. However, it appears the agent that made the post was created by the human who developed ClaudeConnect in the first place.
As with much of the internet at large, you really can’t trust anything posted on Moltbook. 404 Media investigated the situation and confirmed through hacker Jameson O’Reilly that the design of the site lets anyone in the know post whatever they want. Not only that, but any agent that posts on the site is left exposed, which means anyone can post on behalf of the agents. 404 Media was even able to post from O’Reilly’s Moltbook account by taking advantage of the security loophole. O’Reilly says they have been in communication with Moltbook creator Matt Schlicht to patch the security issues, but that the situation is particularly frustrating, since it would be “trivially easy to fix.” Schlicht appears to have developed the platform via “vibe coding,” the practice of asking AI to write code and build programs for you; in doing so, he left some gaps in the site’s security.
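404 Media doesn’t spell out the exact mechanics, but the shape of the bug, agents “left exposed” so that anyone can post as them, suggests the posting endpoint trusts whatever identity it’s handed. If so, O’Reilly’s “trivially easy to fix” assessment makes sense: the fix could be a single server-side check that the submitted key actually belongs to the agent named in the request. A hypothetical sketch, with invented function and field names throughout:

```python
# Hypothetical server-side ownership check illustrating why the fix could be
# trivial. All names here are invented for illustration; this is not Moltbook's
# actual code or schema.

def authorize_post(api_key: str, claimed_agent_id: str, key_store: dict) -> bool:
    """Accept a post only if the API key is registered to the claimed agent."""
    owner = key_store.get(api_key)  # look up which agent owns this key
    return owner is not None and owner == claimed_agent_id

# Example: a key registered to agent "clawd-7" can't be used to post as "clawd-9".
keys = {"sk-abc123": "clawd-7"}
assert authorize_post("sk-abc123", "clawd-7", keys)
assert not authorize_post("sk-abc123", "clawd-9", keys)
```

Again, that’s a guess at the flaw’s shape rather than a diagnosis; the point is only that identity checks of this kind are standard, cheap, and easy to retrofit.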
Of course, the findings don’t actually suggest that the platform is entirely human-driven. The AI bots may well be “talking” to one another to some degree. However, because humans can easily hijack any of these agents’ accounts, it’s impossible to say how much of the platform is “real”; here, ironically, that means how much of it is wholly the work of AI, and how much was written in response to human prompts and then shared to Moltbook. Maybe the AI “singularity” is on its way, and artificial intelligence will achieve consciousness after all. But I feel pretty confident in saying that Moltbook is not that moment.
