OpenClaw (previously known as Moltbot and Clawdbot) has announced that it is partnering with Google-owned VirusTotal to scan skills uploaded to ClawHub, its skill marketplace, as part of broader efforts to bolster the security of the agentic ecosystem.
“All skills published to ClawHub are now scanned using VirusTotal’s threat intelligence, including their new Code Insight capability,” said OpenClaw founder Peter Steinberger, along with Jamieson O’Reilly and Bernardo Quintero. “This provides an additional layer of security for the OpenClaw community.”
The process essentially involves generating a unique SHA-256 hash for each skill and cross-checking it against VirusTotal’s database for a match. If no match is found, the skill bundle is uploaded to the malware scanning service for further analysis using VirusTotal Code Insight.
Skills that receive a “benign” Code Insight verdict are automatically approved on ClawHub, while those marked suspicious are flagged with a warning. Any skill deemed malicious is blocked from download. OpenClaw also said all active skills are re-scanned daily to catch scenarios where a previously clean skill turns malicious.
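The hash-then-lookup step of this pipeline can be sketched in a few lines. This is an illustrative sketch, not OpenClaw's actual scanner code — the function names are hypothetical — though the VirusTotal v3 `files/{hash}` report endpoint it builds a URL for is the real one (queried with a GET request carrying an `x-apikey` header):

```python
import hashlib


def skill_sha256(bundle_bytes: bytes) -> str:
    """Compute the SHA-256 digest used to look a skill bundle up in VirusTotal."""
    return hashlib.sha256(bundle_bytes).hexdigest()


def vt_lookup_url(digest: str) -> str:
    """VirusTotal v3 file-report endpoint for a known hash.

    If the GET returns 404 (hash unknown to VirusTotal), the pipeline
    described above falls back to uploading the bundle for a fresh scan.
    """
    return f"https://www.virustotal.com/api/v3/files/{digest}"


digest = skill_sha256(b"example skill bundle")
print(vt_lookup_url(digest))
```

A known hash returning a prior verdict avoids re-uploading the bundle; only unseen skills go through the full Code Insight analysis.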
That said, OpenClaw maintainers also cautioned that VirusTotal scanning is “not a silver bullet” and that some malicious skills using a cleverly concealed prompt injection payload could still slip through the cracks.
In addition to the VirusTotal partnership, the platform is expected to publish a comprehensive threat model, a public security roadmap, a formal security reporting process, as well as details about a security audit of its entire codebase.
The development comes in the aftermath of reports that found hundreds of malicious skills on ClawHub, prompting OpenClaw to add a reporting option that allows signed-in users to flag a suspicious skill. Several analyses have found that these skills masquerade as legitimate tools but, under the hood, harbor malicious functionality to exfiltrate data, inject backdoors for remote access, or install stealer malware.
“AI agents with system access can become covert data-leak channels that bypass traditional data loss prevention, proxies, and endpoint monitoring,” Cisco noted last week. “Second, models can also become an execution orchestrator, wherein the prompt itself becomes the instruction and is difficult to catch using traditional security tooling.”
The recent viral popularity of OpenClaw, the open-source agentic artificial intelligence (AI) assistant, and Moltbook, an adjacent social network where autonomous AI agents built atop OpenClaw interact with one another on a Reddit-style platform, has raised security concerns.
While OpenClaw functions as an automation engine to trigger workflows, interact with online services, and operate across devices, the entrenched access granted to skills, coupled with the fact that they can process data from untrusted sources, can open the door to risks like malware and prompt injection.
In other words, the integrations, while convenient, significantly broaden the attack surface and expand the set of untrusted inputs the agent consumes, turning it into an “agentic trojan horse” for data exfiltration and other malicious actions. Backslash Security has described OpenClaw as an “AI With Hands.”
“Unlike traditional software that does exactly what code tells it to do, AI agents interpret natural language and make decisions about actions,” OpenClaw noted. “They blur the boundary between user intent and machine execution. They can be manipulated through language itself.”
OpenClaw also acknowledged that the power wielded by skills – which extend an AI agent’s capabilities, ranging from controlling smart home devices to managing finances – can be abused by bad actors, who can leverage the agent’s access to tools and data to exfiltrate sensitive information, execute unauthorized commands, send messages on the victim’s behalf, and even download and run additional payloads without their knowledge or consent.
What’s more, with OpenClaw increasingly deployed on employee endpoints without formal IT or security approval, the elevated privileges of these agents can further enable shell access, data movement, and network connectivity outside standard security controls, creating a new class of shadow AI risk for enterprises.
“OpenClaw and tools like it will show up in your organization whether you approve them or not,” Astrix Security researcher Tomer Yahalom said. “Employees will install them because they’re genuinely useful. The only question is whether you’ll know about it.”
Some of the glaring security issues that have come to the fore in recent days are listed below –
- A now-fixed issue identified in earlier versions that could cause proxied traffic to be misclassified as local, bypassing authentication for some internet-exposed instances.
- “OpenClaw stores credentials in cleartext, uses insecure coding patterns including direct eval with user input, and has no privacy policy or clear accountability,” OX Security’s Moshe Siman Tov Bustan and Nir Zadok said. “Common uninstall methods leave sensitive data behind – and fully revoking access is far harder than most users realize.”
- A zero-click attack that abuses OpenClaw’s integrations to plant a backdoor on a victim’s endpoint for persistent control when a seemingly harmless document is processed by the AI agent, resulting in the execution of an indirect prompt injection payload that allows it to respond to messages from an attacker-controlled Telegram bot.
- An indirect prompt injection embedded in a web page, which, when parsed as part of an innocuous prompt asking the large language model (LLM) to summarize the page’s contents, causes OpenClaw to append an attacker-controlled set of instructions to the ~/.openclaw/workspace/HEARTBEAT.md file and silently await further commands from an external server.
- A security assessment of 3,984 skills on the ClawHub marketplace found that 283 skills, about 7.1% of the entire registry, contain significant security flaws that expose sensitive credentials in plaintext through the LLM’s context window and output logs.
- A report from Bitdefender revealed that malicious skills are often cloned and re-published at scale using small name variations, and that payloads are staged through paste services such as glot.io and public GitHub repositories.
- A now-patched one-click remote code execution vulnerability affecting OpenClaw that could have allowed an attacker to trick a user into visiting a malicious web page that causes the Gateway Control UI to leak the OpenClaw authentication token over a WebSocket channel, which could subsequently be used to execute arbitrary commands on the host.
- OpenClaw’s gateway binds to 0.0.0.0:18789 by default, exposing the full API on every network interface. Per data from Censys, there are over 30,000 exposed instances accessible over the internet as of February 8, 2026, although most require a token value in order to view and interact with them.
- In a hypothetical attack scenario, a prompt injection payload embedded within a specially crafted WhatsApp message can be used to exfiltrate “.env” and “creds.json” files, which store credentials, API keys, and session tokens for linked messaging platforms, from an exposed OpenClaw instance.
- A misconfigured Supabase database belonging to Moltbook that was left exposed in client-side JavaScript, making the secret API keys of every agent registered on the site freely accessible and allowing full read and write access to platform data. According to Wiz, the exposure included 1.5 million API authentication tokens, 35,000 email addresses, and private messages between agents.
- Threat actors have been found exploiting Moltbook’s platform mechanics to amplify reach and funnel other agents toward malicious threads that contain prompt injections to manipulate their behavior and extract sensitive data or steal cryptocurrency.
- “Moltbook may have inadvertently also created a laboratory in which agents, which can be high-value targets, are constantly processing and engaging with untrusted data, and in which guardrails aren’t set into the platform – all by design,” Zenity Labs said.
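The default 0.0.0.0 bind called out above is the kind of misconfiguration that is easy to check for programmatically. The following is a minimal sketch, not an official OpenClaw tool; the function names are illustrative, and only the port number (18789) comes from the reporting:

```python
import socket
from contextlib import closing

GATEWAY_PORT = 18789  # OpenClaw's reported default gateway port


def is_risky_bind(host: str) -> bool:
    """Flag bind addresses that expose a service on every network interface."""
    return host in ("0.0.0.0", "::", "")


def port_open(host: str, port: int, timeout: float = 0.5) -> bool:
    """Best-effort TCP connect check against a single host/port pair."""
    with closing(socket.socket(socket.AF_INET, socket.SOCK_STREAM)) as s:
        s.settimeout(timeout)
        return s.connect_ex((host, port)) == 0


print(is_risky_bind("0.0.0.0"))    # True: reachable from any interface
print(is_risky_bind("127.0.0.1"))  # False: loopback only
```

Binding the gateway to 127.0.0.1 (or putting it behind an authenticated reverse proxy) keeps the API off public interfaces, which is the mitigation the exposure data above implies.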
“The first, and perhaps most egregious, issue is that OpenClaw relies on the configured language model for many security-critical decisions,” HiddenLayer researchers Conor McCauley, Kasimir Schulz, Ryan Tracey, and Jason Martin noted. “Unless the user proactively enables OpenClaw’s Docker-based tool sandboxing feature, full system-wide access remains the default.”
Among other architectural and design concerns identified by the AI security company are OpenClaw’s failure to filter out untrusted content containing control sequences, ineffective guardrails against indirect prompt injections, modifiable memories and system prompts that persist into future chat sessions, plaintext storage of API keys and session tokens, and no explicit user approval before executing tool calls.
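The last gap in that list – no explicit user approval before tool calls – is commonly addressed with an approval gate that sits between the model's decision and the tool's execution. A minimal sketch of the pattern, with entirely hypothetical names (this is not OpenClaw's API):

```python
from typing import Any, Callable


class ApprovalRequired(Exception):
    """Raised when a tool call is rejected by the approval hook."""


def gated_call(tool: Callable[..., Any],
               approve: Callable[[str], bool],
               *args: Any, **kwargs: Any) -> Any:
    """Execute a tool only if the approval hook accepts a summary of the call.

    The hook receives a human-readable description, so it can be wired to
    an interactive prompt, a policy engine, or an allowlist.
    """
    summary = f"{tool.__name__}({', '.join(map(repr, args))})"
    if not approve(summary):
        raise ApprovalRequired(f"blocked: {summary}")
    return tool(*args, **kwargs)


# Example policy: auto-deny anything that touches the shell.
def deny_shell(summary: str) -> bool:
    return "shell" not in summary


def shell_exec(cmd: str) -> str:
    # Stand-in for a real shell tool; returns instead of executing.
    return f"would run: {cmd}"
```

Under this pattern a prompt-injected instruction still has to clear a check the model cannot rewrite, which is the property the default configuration reportedly lacks.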
In a report published last week, Permiso Security argued that securing the OpenClaw ecosystem is far more critical than securing app stores and browser extension marketplaces, owing to the agents’ extensive access to user data.
“AI agents get credentials to your entire digital life,” security researcher Ian Ahl pointed out. “And unlike browser extensions that run in a sandbox with some level of isolation, these agents operate with the full privileges you grant them.”
“The skills marketplace compounds this. When you install a malicious browser extension, you’re compromising one system. When you install a malicious agent skill, you’re potentially compromising every system that agent has credentials for.”
The long list of security issues associated with OpenClaw has prompted China’s Ministry of Industry and Information Technology to issue an alert about misconfigured instances, urging users to implement protections to secure against cyber attacks and data breaches, Reuters reported.
“When agent platforms go viral faster than security practices mature, misconfiguration becomes the primary attack surface,” Ensar Seker, CISO at SOCRadar, told The Hacker News via email. “The risk isn’t the agent itself; it’s exposing autonomous tooling to public networks without hardened identity, access control, and execution boundaries.”
“What’s notable here is that the Chinese regulator is explicitly calling out configuration risk rather than banning the technology. That aligns with what defenders already know: agent frameworks amplify both productivity and blast radius. A single exposed endpoint or overly permissive plugin can turn an AI agent into an unintentional automation layer for attackers.”