In September 2025, Anthropic disclosed that a state-sponsored threat actor used an AI coding agent to execute an autonomous cyber espionage campaign against roughly 30 global targets. The AI handled 80-90% of tactical operations on its own, performing reconnaissance, writing exploit code, and attempting lateral movement at machine speed.
This incident is alarming, but there is a scenario that should concern security teams even more: an attacker who doesn't have to run through the kill chain at all, because they've compromised an AI agent that already lives inside your environment. One that already has the access, the permissions, and a legitimate reason to move across your systems every day.
A Framework Built for Human Threats
The traditional cyber kill chain assumes attackers must earn every inch of access. It's a model developed by Lockheed Martin in 2011 to describe how adversaries move from initial compromise to their ultimate objective, and it has shaped how security teams think about detection ever since.
The logic is simple: attackers need to complete a sequence of steps, and defenders can interrupt the chain at any point. Every stage an attacker has to pass through is another opportunity to catch them.
A typical intrusion moves through distinct stages:
- Initial access (exploiting a vulnerability, etc.)
- Persistence without triggering alerts
- Reconnaissance to understand the environment
- Lateral movement to reach valuable data
- Privilege escalation when access isn't sufficient
- Exfiltration while avoiding DLP controls
Each stage creates detection opportunities: endpoint protection might catch the initial payload, network monitoring might spot unusual lateral movement, identity systems might flag a privilege escalation, and SIEM correlations might tie together anomalous behaviors across systems. The more steps an attacker takes, the more chances there are to trip a wire.
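The stage-by-stage logic above can be sketched in a few lines. This is a toy model, not any product's taxonomy: each stage an attacker must pass through is paired with the control from this article's list that gets a chance to catch them.

```python
# Toy model of the kill-chain detection logic: every stage an attacker
# traverses gives one control a detection opportunity. Stage and control
# names follow the list in this article, not a formal taxonomy.

KILL_CHAIN = [
    ("initial_access",       "endpoint protection"),
    ("persistence",          "alerting / EDR"),
    ("reconnaissance",       "network monitoring"),
    ("lateral_movement",     "network monitoring"),
    ("privilege_escalation", "identity systems"),
    ("exfiltration",         "DLP / SIEM correlation"),
]

def detection_chances(stages_attempted: list[str]) -> list[str]:
    """Return the controls that get a detection opportunity on a given intrusion path."""
    controls = dict(KILL_CHAIN)
    return [controls[s] for s in stages_attempted if s in controls]

# A human attacker walks the full chain: six stages, six chances to trip a wire.
human_path = [stage for stage, _ in KILL_CHAIN]
print(len(detection_chances(human_path)))  # 6

# An attacker inside an already-trusted agent jumps straight to exfiltration: one chance.
agent_path = ["exfiltration"]
print(len(detection_chances(agent_path)))  # 1
```

The asymmetry in the two counts is the whole argument of the next sections: fewer stages traversed means fewer wires to trip.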
That's why advanced threat actors like LUCR-3 and APT29 invest heavily in stealth, spending weeks living off the land and blending into normal traffic. Even then, they leave artifacts: unusual login locations, odd access patterns, slight deviations from baseline behavior. These artifacts are exactly what modern detection systems are engineered to find.
The problem, though, is that AI agents don't follow this playbook.
What an AI Agent Already Has
AI agents operate fundamentally differently from human users. They work across systems, move data between applications, and run continuously. If compromised, an attacker bypasses the entire kill chain – the agent itself becomes the kill chain.
Think about what an AI agent typically has access to. Its activity history is a perfect map of what data exists and where it resides. It probably pulls from Salesforce, pushes to Slack, syncs with Google Drive, and updates ServiceNow as part of its normal workflow. It was granted broad permissions at deployment, often admin-level access across multiple applications, and it already moves data between systems as part of its job.
An attacker who compromises that agent inherits all of it instantly. They get the map, the access, the permissions, and a legitimate reason to move data around. Every stage of the kill chain that security teams have spent years learning to detect? The agent skips all of them by default.
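To make "inherits all of it" concrete, here is a hypothetical inventory of one agent's OAuth grants and a blast-radius summary of what an attacker would get on day one. The app names, scope strings, and the `blast_radius` helper are all illustrative assumptions, not taken from any real integration.

```python
# Hypothetical OAuth grants for a single SaaS agent. Everything below is
# illustrative: real scope names vary by vendor.
AGENT_GRANTS = {
    "salesforce":   {"scopes": ["api", "refresh_token"],       "admin": False},
    "slack":        {"scopes": ["chat:write", "files:read"],   "admin": False},
    "google_drive": {"scopes": ["drive"],                      "admin": True},
    "servicenow":   {"scopes": ["table_api"],                  "admin": True},
}

def blast_radius(grants: dict) -> dict:
    """Summarize what an attacker inherits: reachable apps and admin footholds."""
    return {
        "apps_reachable": sorted(grants),
        "admin_apps":     sorted(app for app, g in grants.items() if g["admin"]),
        "total_scopes":   sum(len(g["scopes"]) for g in grants.values()),
    }

summary = blast_radius(AGENT_GRANTS)
print(summary["apps_reachable"])  # ['google_drive', 'salesforce', 'servicenow', 'slack']
print(summary["admin_apps"])      # ['google_drive', 'servicenow']
```

Four reachable applications and two admin footholds, with zero exploitation required: that is the inheritance the kill chain never sees.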
The Threat Is Already Playing Out
The OpenClaw crisis showed us what this looks like in practice:
Roughly 12% of skills in its public marketplace were malicious. A critical RCE vulnerability allowed one-click compromise. Over 21,000 instances were publicly exposed. But the scarier part was what a compromised agent could access once it was connected to Slack and Google Workspace: messages, files, emails, and documents, with persistent memory across sessions.
The main problem is that security tools are designed to detect abnormal behavior. When an attacker rides an AI agent’s existing workflow, everything looks normal. The agent is accessing the systems it always accesses, moving the data it always moves, operating at the times it always operates.
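A toy baseline check makes this detection gap concrete. Events are scored against the (system, action) pairs an identity normally performs; an attacker riding the agent's own workflow produces events already in the baseline, so nothing scores as anomalous. The event fields and baseline contents are illustrative assumptions.

```python
# Toy baseline-anomaly check: an event is anomalous only if its
# (system, action) pair falls outside the identity's observed baseline.
baseline = {
    ("google_drive", "download"),
    ("slack", "post_file"),
    ("salesforce", "query"),
}

def is_anomalous(event: tuple[str, str]) -> bool:
    """Flag events outside the identity's normal (system, action) behavior."""
    return event not in baseline

# Human attacker behavior: new system, new action -> flagged.
print(is_anomalous(("domain_controller", "dcsync")))  # True

# Attacker steering the agent: same systems, same actions -> invisible.
print(is_anomalous(("google_drive", "download")))     # False
print(is_anomalous(("slack", "post_file")))           # False
```

The exfiltration in the last two calls is indistinguishable from Tuesday's workflow, which is exactly why behavior-based tools stay quiet.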
This is the detection gap security teams are facing.
How Reco Closes the Visibility Gap
Defending against compromised AI agents starts with knowing which agents are operating in your environment, what they connect to, and what permissions they hold. Most organizations have no inventory of the AI agents touching their SaaS ecosystem. This is exactly the kind of problem Reco was built to solve.
Discover Every AI Agent in Play
Reco’s Agentic AI Security discovers every AI agent, embedded AI feature, and third-party AI integration across your SaaS environment, including shadow AI tools connected without IT approval.
*Figure 1: Reco’s AI Agents Inventory, showing discovered agents and their connections to GitHub.*
Map Access Scope and Blast Radius
For each agent, Reco maps which SaaS apps it connects to, what permissions it holds, and what data it can access. Reco’s SaaS-to-SaaS visualization shows exactly how agents integrate across your application ecosystem, surfacing toxic combinations where AI agents bridge systems together through MCP, OAuth, or API integrations, creating permission breakdowns that no single application owner would authorize.
*Figure 2: Reco’s Knowledge Graph surfacing a toxic combination between Slack and Cursor via MCP.*
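The "toxic combination" idea can be sketched as a small graph problem: connect agents to the SaaS apps they reach, then flag agents that bridge a sensitive-data source to an outbound-messaging sink. The pairing rules, agent names, and app sets below are illustrative assumptions, not Reco's actual policy model.

```python
# Sketch of toxic-combination detection: flag agents that bridge a
# sensitive-data app to an outbound app. All names and rules are illustrative.
from itertools import product

connections = {
    "cursor_agent":   {"cursor", "slack"},           # e.g. connected via MCP
    "crm_bot":        {"salesforce", "servicenow"},
    "docs_assistant": {"google_drive", "slack"},
}

SENSITIVE_SOURCES = {"salesforce", "google_drive"}
OUTBOUND_SINKS = {"slack"}

def toxic_combinations(conns: dict) -> list[tuple[str, str, str]]:
    """Return (agent, source_app, sink_app) triples no single app owner would approve."""
    findings = []
    for agent, apps in conns.items():
        for src, sink in product(apps & SENSITIVE_SOURCES, apps & OUTBOUND_SINKS):
            findings.append((agent, src, sink))
    return findings

print(toxic_combinations(connections))
# [('docs_assistant', 'google_drive', 'slack')]
```

Note that neither Google Drive's owner nor Slack's owner sees anything wrong in isolation; the risk only appears when the agent's cross-app reach is viewed as one graph.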
Flag Targets, Enforce Least Privilege
Reco identifies which agents represent your biggest exposure by evaluating permission scope, cross-system access, and data sensitivity. Agents associated with emerging risks are automatically labeled. From there, Reco helps you right-size access through identity and access governance, directly limiting what an attacker can do if an agent is compromised.
*Figure 3: Reco’s AI Posture Checks with security scores and IAM compliance findings.*
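Right-sizing access is, at its core, a diff between what was granted and what the workflow actually uses. A minimal sketch, assuming we have 90 days of scope-usage telemetry for one agent (the scope names and the 90-day window are illustrative assumptions):

```python
# Minimal least-privilege check: any scope granted but unused over the
# observation window is excess privilege an attacker would inherit for free.
# Scope names and the 90-day window are illustrative.
granted = {"drive", "drive.admin", "chat:write", "files:read", "files:write"}
used_last_90d = {"drive", "chat:write"}

excess = sorted(granted - used_last_90d)
print(excess)  # ['drive.admin', 'files:read', 'files:write']
```

Revoking those three scopes changes nothing for the agent's daily job, but shrinks what a compromise is worth.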
Detect Anomalous Agent Activity
Reco’s threat detection engine applies identity-centric behavioral analysis to AI agents the same way it does to human identities, distinguishing normal automation from suspicious deviations in real time.
*Figure 4: A Reco alert flagging an unsanctioned ChatGPT connection to SharePoint.*
What This Means for Your Team
The traditional kill chain assumed that attackers had to fight for every inch of access. AI agents upend that assumption entirely.
One compromised agent can give an attacker legitimate access, a perfect map of the environment, broad permissions, and built-in cover for data movement, without a single step that looks like an intrusion.
Security teams that are still focused exclusively on detecting human attacker behavior are going to miss this. The attackers will be riding your AI agents’ existing workflows, invisible in the noise of normal operations.
Sooner or later, an AI agent in your environment will be targeted. Visibility is the difference between catching it early and finding out during incident response. Reco gives you that visibility, across your entire SaaS ecosystem, in minutes.
Learn more: Request a Demo and Get Started With Reco.