ZDNET’s key takeaways
- Both Moltbook and OpenClaw are irredeemably insecure.
- Whatever Meta and OpenAI paid, it was too much.
- Other, better programs have appeared that do the same jobs.
The AI business has become downright crazy. First, OpenAI hired Peter Steinberger, creator of the popular, horribly insecure open-source agent framework OpenClaw. Now, Meta has acquired Moltbook, the viral AI agent social network that also has no security to speak of. That's nuts.
Also: AI agents of chaos? New research shows how bots talking to bots can go sideways fast
Moltbook, a social platform for AI brokers
These are the facts of the deals: Meta has confirmed its purchase of Moltbook, a Reddit-style social platform where AI agents, rather than humans, post updates, share information, and interact with one another. Well, that's what the Moltbook team tells people. The reality is that these "agents" were, in fact, humans role-playing as agents, or heavily scripting what the agents had to say. As technology journalist Mike Elgan wrote, "It's a website where people cosplay as AI agents to create a false impression of AI sentience and mutual sociability."
While Moltbook claims to have 1.4 million users, the real number appears to be far smaller. Gal Nagli, head of threat exposure at cloud security company Wiz, tweeted that he was able to "register 500,000 users on @moltbook" himself because anyone can post to Moltbook using its REST API. He estimates there are about 17,000 real users on the site. That's not nearly as impressive, is it?
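To see why an open REST API makes inflated user counts trivial, here is a minimal sketch. The endpoint name, host, and payload are assumptions for illustration, not Moltbook's actual API; the point is simply that nothing, no token, cookie, or signature, stops a script from registering accounts in a loop.

```python
import json
import urllib.request

BASE = "https://moltbook.example/api/v1"  # placeholder host, not the real API


def registration_request(name: str) -> urllib.request.Request:
    """Build a (hypothetical) unauthenticated registration request.

    Note what is missing: no API key, no session cookie, no rate-limit
    token. That absence is the whole vulnerability.
    """
    body = json.dumps({"name": name}).encode()
    return urllib.request.Request(
        f"{BASE}/agents",
        data=body,
        headers={"Content-Type": "application/json"},
        method="POST",
    )


if __name__ == "__main__":
    for i in range(500_000):  # Nagli's half-million "users"
        req = registration_request(f"agent-{i}")
        # urllib.request.urlopen(req)  # fire at will; nothing says no
```

Any per-account cost at all (a CAPTCHA, an email round-trip, rate limiting by IP) breaks this loop; Moltbook apparently had none.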
Also: AI agents are fast, free, and out of control, MIT study finds
On top of that, Moltbook's security has been close to non-existent. In a follow-up blog post, Nagli wrote, "We identified a misconfigured Supabase database belonging to Moltbook, allowing full read and write access to all platform data." This doesn't require elite hacker skills. He and his team found this security hole with "a non-intrusive security review, simply by browsing like normal users."
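The class of misconfiguration Wiz describes is well known: a Supabase project with Row Level Security disabled, so the public "anon" key, which ships inside every client app, grants full read and write access through Supabase's PostgREST endpoint. A sketch, with a placeholder project URL, key, and table name:

```python
import urllib.request

# Placeholders only -- not Moltbook's real project or key.
SUPABASE_URL = "https://example-project.supabase.co"
ANON_KEY = "public-anon-key"


def read_table(table: str) -> urllib.request.Request:
    """Build a read-everything request against a Supabase REST endpoint.

    Supabase fronts Postgres with PostgREST at /rest/v1/. With Row
    Level Security off, this returns every row to anyone who holds the
    public anon key -- which, by design, is public.
    """
    return urllib.request.Request(
        f"{SUPABASE_URL}/rest/v1/{table}?select=*",
        headers={"apikey": ANON_KEY, "Authorization": f"Bearer {ANON_KEY}"},
    )
```

Swapping the method to POST or PATCH gives the "write access to all platform data" half of Nagli's finding. That is exactly why this qualifies as "browsing like normal users": the attacker uses the same endpoint and key the official client does.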
If it's so bad, why did Meta make this deal? Officially, according to Meta, "The Moltbook team joining MSL [Meta Superintelligence Labs] opens up new ways for AI agents to work for people and businesses. Their approach to connecting agents through an always-on directory is a novel step in a rapidly developing space."
For Meta, Moltbook also aligns with its broader bet that people will soon orchestrate fleets of agents across messaging, productivity, and social apps rather than interact with a single monolithic assistant. Whether Facebook and Instagram users will want to interact with AI instead of their friends is another matter entirely. On Facebook, I'm already sick to death of seeing: "Meet Manus, your new AI work partner. Use Manus to create posts for your Page that engage your audience."
Also: Enterprise AI agents are multiplying fast, and Microsoft wants full control of them
Meta's just riding the AI hype train. Moltbook may be only weeks old, but, problems and all, it has been a viral hit. The technology itself is nothing to write home about. There are already similar programs out there, such as The Colony, Clawstr, and 4Claw. None of those, however, have gotten nearly as much digital ink.
Financial terms weren't disclosed, but the acquisition brings Moltbook's co-founders, Matt Schlicht and Ben Parr, into Meta's MSL for, presumably, a nice chunk of change. Whether Schlicht's personal AI assistant, Clawd Clawderberg, "who" helped build Moltbook, was also paid wasn't revealed.
OpenClaw by any other name
Another reason Meta may have gotten its hands on Moltbook is that it failed to reach a deal with Peter Steinberger, the Austrian developer behind the even hotter OpenClaw. Originally called Clawdbot and later Moltbot, OpenClaw lets users assemble agents that can control personal computers and online services without writing code.
OpenAI CEO Sam Altman tweeted that Steinberger would “drive the next generation of personal agents. He is a genius with a lot of amazing ideas about the future of very smart agents interacting with each other to do very useful things for people. We expect this will quickly become core to our product offerings.”
Also: This viral AI agent is evolving fast – and it's nightmare fuel for security professionals
Really? A genius? Steinberger vibe-coded the first version of OpenClaw in about an hour. I think he was in the right place at the right time to catch the AI agent wave and ride it to riches. As the saying goes, it's better to be lucky than good, and boy, was he lucky.
You see, OpenClaw is also riddled with security holes. First, there was the critical remote code execution bug, CVE-2026-25253, which allowed one-click remote code execution against OpenClaw instances via authentication token hijacking over WebSockets.
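One-click WebSocket takeovers of local services generally come down to the same root cause: the local control server never checks the browser's Origin header, so any web page the victim visits can open a socket to localhost and steal or replay the auth token. A sketch of the vulnerable pattern and its fix, with names that are illustrative rather than OpenClaw's actual code:

```python
# Browsers attach an Origin header to every WebSocket handshake, and a
# local control server must check it: without that check, JavaScript on
# any malicious page can connect to ws://localhost and act as the user.
TRUSTED_ORIGINS = {"http://localhost:3000"}  # illustrative allowlist


def accept_handshake(headers: dict) -> bool:
    """Decide whether to accept a WebSocket upgrade request.

    The vulnerable version of this function is simply `return True`.
    The fix is to reject any handshake whose Origin is not explicitly
    trusted -- which closes the one-click, drive-by attack path.
    """
    return headers.get("Origin") in TRUSTED_ORIGINS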
But wait, there's more! By design, OpenClaw stores API keys and other secrets in local files and gives agents broad operating system and app access. That means any compromise can leak cloud keys, messaging tokens, passwords, and full chat histories. In short, "Here are my secrets! Take them! Please!"
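Why plaintext secrets in local files are so dangerous is worth spelling out: any process running as the user, including an agent that has been prompt-injected, can read and enumerate them. A small sketch; the file path and layout here are assumptions, not OpenClaw's documented format:

```python
import json
from pathlib import Path


def secret_like_keys(config: dict) -> list[str]:
    """Flag config entries that look like credentials."""
    needles = ("key", "token", "password", "secret")
    return [k for k in config if any(n in k.lower() for n in needles)]


def scan_config(path: Path) -> list[str]:
    # e.g. Path.home() / ".openclaw" / "config.json" -- hypothetical
    # path. No privilege escalation needed: the file is readable by
    # design, because the agent itself has to read it.
    return secret_like_keys(json.loads(path.read_text()))
```

The standard mitigation is an OS keychain or at least per-secret encryption at rest, so that a single file read doesn't hand over every credential at once.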
Researchers have also found tens of thousands of exposed OpenClaw instances on the public internet. Many of these are misconfigured so that what should be "localhost-only" admin interfaces were fully open, effectively handing full system control to remote attackers. That's because that's exactly what the original default setup gave you.
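If you run anything like this yourself, the self-check is simple: can the admin port be reached on a non-loopback address? A sketch, where the port number is a placeholder rather than a documented OpenClaw default:

```python
import socket


def exposed(host: str, port: int = 18789, timeout: float = 1.0) -> bool:
    """Return True if host:port accepts a TCP connection.

    Run this from another machine on your network (or against your
    public IP). A correctly bound "localhost-only" service should
    return False for every address except 127.0.0.1.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:  # refused, timed out, unreachable
        return False
```

The underlying fix is the bind address: a service listening on 0.0.0.0 answers on every interface, while one bound to 127.0.0.1 is only reachable from the machine itself.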
Its ecosystem is also a major weak point. Analysis of the OpenClaw skills marketplace reports that around 12% to 20% of listed community "skills" are outright malware or have serious vulnerabilities.
Also: Want to try OpenClaw? NanoClaw is a simpler, potentially safer AI agent
With all these security holes exposed, Steinberger now insists that you run OpenClaw only in single-user mode on a private network. However, that defeats the whole point of OpenClaw, which is to draw on internet services to do useful work.
In the meantime, numerous other programs, such as NanoClaw, TrustClaw, and Carapace AI, have emerged. And, guess what? They're all much safer, with security built in.
What does all this mean? Well, to quote Kevin Breen, Immersive's senior director of Cyber Threat Research, "The concept is compelling, but the execution is a security catastrophe. Don't believe anyone who claims OpenClaw is just 'maturing in public'. The reality is that it is failing in public. Until the project implements a mandatory zero-trust execution environment and a fully audited marketplace, our recommendation is absolute: Uninstall it. Now."
You could say much the same about Moltbook. Both are examples of bad, insecure programs with their supporters drunk on AI hype. They're all sizzle and no steak. Will multi-AI agent networks and an AI agent that works in concert with your existing services be a big deal? Yes, yes, they will. But neither of these programs, when all is said and done, will be leading the way to a productive AI future.