Written by: Itamar Apelblat, CEO and Co-Founder, Token Security
Not long ago, AI deployments inside the enterprise meant copilots drafting emails or summarizing documents. Today, AI agents are provisioning infrastructure, answering customer support tickets, triaging alerts, approving transactions, writing production code, and much more. They are no longer passive assistants. They are operators within the enterprise.
For CISOs, this shift creates a familiar but amplified problem: access.
Every AI agent authenticates to systems and services. It uses API keys, OAuth tokens, cloud roles, or service accounts. It reads data, writes configurations, and calls downstream tools. In other words, it behaves exactly like an identity, because it is one.
Yet in many organizations, AI agents are not governed as first-class identities. They inherit the privileges of their creators. They operate under over-scoped service accounts. They are granted broad access just to make sure things work. Once deployed, they often evolve faster than the controls around them.
This is the growing blind spot in AI security.
The first step toward closing it is what we call identity-first security for AI: recognizing that every autonomous agent must be governed, audited, and attested just like a human user or machine workload. That means unique identities, defined roles, clear ownership, lifecycle management, access control, and auditability.
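As a rough sketch of what such a first-class agent identity record might carry, here is a minimal illustration in Python. The field names and structure are assumptions for the example, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import datetime, timezone
from typing import Optional

@dataclass(frozen=True)
class AgentIdentity:
    """Illustrative first-class identity record for an AI agent."""
    agent_id: str        # unique identity, never a borrowed human credential
    owner: str           # accountable human or team (clear ownership)
    mission: str         # documented purpose the agent was deployed for
    roles: tuple         # least-privilege roles, not inherited ones
    expires_at: Optional[datetime] = None  # lifecycle management: identities expire

    def is_active(self) -> bool:
        """An expired agent identity must no longer authenticate."""
        return self.expires_at is None or datetime.now(timezone.utc) < self.expires_at
```

An identity past its `expires_at` simply stops authenticating, which is the lifecycle-management property the list above calls for.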
But here's the hard truth: identity alone is no longer sufficient.
Traditional identity and access management (IAM) answers a simple question: Who is requesting access? In a human-driven world, that was often enough. Users had roles and job functions. Services had defined scopes. Workflows were relatively predictable.
AI agents change that equation.
They are dynamic by design. They interpret inputs, plan actions, and call tools based on context. An AI agent that starts with the mission of generating a quarterly report might, if prompted or misdirected, attempt to access systems unrelated to reporting. An infrastructure agent designed to remediate vulnerabilities might pivot to modifying configurations in ways that exceed its original scope.
When that happens, identity-based controls don't necessarily stop it.
Traditional IAM assumes determinism. A role is granted because a user or service performs a defined function. The scope of action is predictable.
AI agents break that assumption. Their goal may be fixed, but the path they take to achieve it is fluid. They reason, chain tools together, and explore alternative actions.
Static roles were never designed for actors that decide how to act in real time. If the agent's role allows the action, access is granted, even when the action no longer aligns with the reason the agent was deployed in the first place.
This is where intent-based permissioning becomes essential.
If identity answers who, intent answers why.
Intent-based permissions evaluate whether an agent's declared mission and runtime context justify activating its privileges at that moment. Access is no longer just a static mapping between identity and role. It becomes conditional on purpose.
Consider an AI agent responsible for deploying code. In a traditional model, it would have standing permissions to modify infrastructure. In an intent-aware model, those privileges activate only when the deployment is tied to an approved pipeline event and change request. If the same agent attempts to modify production systems outside that context, the privileges simply do not activate.
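The deployment scenario above can be sketched as a simple authorization check. The policy structure, agent names, and context keys are illustrative assumptions for this example, not any particular product's API:

```python
# Hypothetical policy: privileges activate only when identity, declared
# intent, and runtime context all align.
POLICY = {
    "deploy-agent": {
        "allowed_actions": {"modify_infrastructure"},
        "approved_intent": "deploy_approved_release",
        "required_context": {"pipeline_event": "approved", "change_request": True},
    }
}

def authorize(agent_id: str, action: str, intent: str, context: dict) -> bool:
    """Grant access only if role, declared intent, and context all align."""
    profile = POLICY.get(agent_id)
    if profile is None or action not in profile["allowed_actions"]:
        return False  # traditional identity/role check fails
    if intent != profile["approved_intent"]:
        return False  # same identity, different intent: privileges stay dormant
    # Context check: every required runtime condition must hold.
    return all(context.get(key) == value
               for key, value in profile["required_context"].items())
```

With this sketch, a deployment tied to an approved pipeline event is authorized, while the same agent invoking the same action under a different declared intent, or outside an approved change request, is denied.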
The identity hasn't changed, but the intent, and therefore the authorization, has.
This combination addresses two of the most common failure modes we're seeing in AI deployments.
First, privilege inheritance. Developers often test agents using their own elevated credentials. Those privileges persist in production environments, creating unnecessary exposure. Treating agents as distinct identities helps eliminate this bleed-through.
Second, mission drift. AI agents can pivot mid-run based on prompts, integrations, or adversarial input. Intent-based controls prevent that pivot from becoming unauthorized access.
For CISOs, the value isn't just tighter control. It's governance that scales.
AI agents interact with thousands of APIs, SaaS platforms, and cloud resources. Attempting to manage risk by enumerating every permissible action quickly becomes unmanageable. Policy sprawl increases complexity, and complexity erodes assurance.
An intent-based model simplifies oversight. Governance shifts from managing thousands of discrete action rules to managing defined identity profiles and approved intent boundaries.
Policy reviews focus on whether an agent's mission is appropriate, not whether every individual API call is accounted for in isolation.
Audit trails become more meaningful as well. When an incident occurs, security teams can determine not only which agent performed an action, but which intent profile was active and whether the action aligned with its approved mission.
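A minimal sketch of what such an intent-aware audit record might look like, with illustrative field names chosen for this example:

```python
import json
from datetime import datetime, timezone

def audit_entry(agent_id: str, intent_profile: str, action: str,
                mission_aligned: bool) -> str:
    """One structured audit record: which agent acted, under which intent
    profile, and whether the action matched its approved mission."""
    return json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "agent_id": agent_id,
        "intent_profile": intent_profile,
        "action": action,
        "mission_aligned": mission_aligned,
    })
```

Recording the active intent profile alongside the agent identity is what lets investigators answer "was this action in scope?" rather than only "who did it?".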
That level of traceability is increasingly critical for regulatory scrutiny and board-level accountability.
The broader concern is this: AI agents are accelerating faster than traditional access control models were designed to handle. They operate at machine speed, adapt to context, and orchestrate across systems in ways that blur the lines between application, user, and automation.
CISOs can't afford to treat them as just another workload.
The shift to agentic AI systems requires a shift in security thinking. Every AI agent must be treated as an accountable identity. And that identity must be constrained not only by static roles, but by declared purpose and operational context.
The path forward is clear. Inventory your AI agents. Assign them unique, lifecycle-managed identities. Define and document their approved missions. And enforce controls that activate privileges only when identity, intent, and context align.
Autonomy without governance is a major risk. Identity without intent is incomplete.
In the agentic era, knowing who is acting is essential. Ensuring they are acting for the right reason is what makes agentic AI secure.
If you're securing agentic AI, we'd love to show you a technical demo of Token and hear more about what you're working on.
Sponsored and written by Token Security.



