In March 2026, San Francisco once again became the epicenter of the cybersecurity world. Thousands of practitioners, vendors, and investors gathered at the Moscone Center for the RSA Conference, where one theme dominated every keynote, panel, and booth conversation: agentic AI. Not just AI as a tool, but AI as an actor.
From autonomous code generation to decision-making systems that initiate actions without human intervention, the industry is entering a new phase. Developments like Mythos, a next-generation AI framework capable of orchestrating complex, multi-step cyber operations, highlight both the promise and the risk of this shift.
The Cloud Security Alliance predicts a surge in simultaneous AI-powered attacks and urges defenders to fight AI with AI. OpenAI has responded by scaling its Trusted Access for Cyber program to support thousands of verified defenders and hundreds of security teams. Gartner reinforces this trend, forecasting AI spending to grow by 44 percent in 2026 and reach $47 trillion by 2029, far exceeding its projected $238 billion for information security and risk management solutions in 2026.
The Dual-Use Reality of Agentic AI
Technologies like Mythos reveal a fundamental truth: the same capabilities that benefit defenders also empower attackers. Adversaries are already using AI to enable:
- Autonomous reconnaissance and lateral movement
- Real-time adaptation to defenses
- Scalable, low-cost attacks with minimal human involvement
This isn't theoretical. Early rogue AI agents are probing environments, exploiting misconfigurations, and mimicking legitimate users. Attackers no longer need to control every step. They can deploy agents that behave like identities.
Every major shift in cybersecurity has led to a wave of point solutions. The result is predictable: tool sprawl, siloed visibility, and operational complexity. These gaps often benefit attackers. Agentic AI risks are following the same path. Early signs are already visible:
- AI security posture management tools
- AI runtime protection platforms
- AI-specific anomaly detection engines
- AI governance solutions
Each may provide value, but adding more tools increases friction. Organizations don't need more dashboards. They need better context and control over the entities operating in their environments, whether human or machine.
At the parallel AGC Cybersecurity Investor Conference, AI specialists and industry leaders reached a more pragmatic conclusion: organizations should treat AI like an identity. This perspective cuts through the hype. Rather than viewing AI as a new tool category that requires entirely separate security stacks, it places AI within the established and critical domain of identity security.
Because fundamentally, agentic AI behaves like an identity:
- It authenticates (via APIs, tokens, or credentials)
- It accesses systems and data
- It performs actions within an environment
- It can be compromised, misused, or go rogue
Once you accept this, the path forward becomes clearer, and far less fragmented.
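To make the "AI as an identity" idea concrete, here is a minimal sketch of registering an AI agent in the same directory as human principals and putting both through identical authentication and scope checks. All names here (`Identity`, `Directory`, `triage-bot`) are illustrative assumptions, not any real product's API:

```python
from dataclasses import dataclass, field

@dataclass
class Identity:
    """A principal in the directory -- human or AI agent alike."""
    name: str
    kind: str                       # "human" or "agent"
    credential: str                 # token, API key, or password hash
    scopes: set = field(default_factory=set)

class Directory:
    """Hypothetical identity store that treats agents like any other principal."""
    def __init__(self):
        self._identities = {}

    def register(self, identity: Identity) -> None:
        self._identities[identity.name] = identity

    def authorize(self, name: str, credential: str, scope: str) -> bool:
        # Same path for humans and agents: authenticate, then check scope.
        ident = self._identities.get(name)
        return (ident is not None
                and ident.credential == credential
                and scope in ident.scopes)

directory = Directory()
directory.register(Identity("alice", "human", "pw-hash", {"read:reports"}))
directory.register(Identity("triage-bot", "agent", "tok-123", {"read:alerts"}))
```

The point of the sketch is the symmetry: the agent gets no separate security stack, just an entry in the existing control plane with least-privilege scopes.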
Identity Threat Detection as the Foundation
If AI is treated as an identity, identity threat detection and risk mitigation solutions become the logical control plane. This approach focuses on analyzing behavior across credentials and systems. It combines adaptive verification, behavioral analytics, device intelligence, and risk scoring in a unified platform.
Applied to AI, this enables:
- Behavioral visibility to detect anomalies such as unusual access, privilege escalation, or data exfiltration
- Risk-based controls to adjust access, enforce additional verification, or isolate suspicious agents
- Unified policy enforcement across human and machine identities
- Lifecycle management to prevent orphaned or unmanaged agents
As rogue AI agents emerge, whether compromised or malicious, identity-driven security provides a practical defense. It enforces least privilege, continuously validates access, detects abnormal behavior, and automates response actions. These capabilities already exist in modern identity security frameworks and can be extended to AI without introducing new silos.
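As a sketch of how risk-based controls and automated response might fit together, the following hypothetical function scores an agent's observed behavioral signals and maps the total to a graduated action. The signal names, weights, and thresholds are illustrative assumptions, not any vendor's detection logic:

```python
# Illustrative per-signal risk weights; real platforms derive these
# from learned behavioral baselines rather than fixed constants.
RISK_WEIGHTS = {
    "unusual_access": 30,        # resource outside the agent's normal set
    "privilege_escalation": 50,  # request for rights beyond granted scopes
    "bulk_data_read": 40,        # possible exfiltration precursor
}

def risk_score(signals: list[str]) -> int:
    """Sum the weights of the behavioral signals observed for an agent."""
    return sum(RISK_WEIGHTS.get(s, 0) for s in signals)

def respond(signals: list[str]) -> str:
    """Map the score to a graduated, automated response."""
    score = risk_score(signals)
    if score >= 80:
        return "isolate"   # revoke sessions and credentials for the agent
    if score >= 30:
        return "step_up"   # require additional verification before acting
    return "allow"
```

The design choice worth noting is the graduated middle tier: a single anomaly triggers step-up verification rather than a hard block, which keeps benign-but-unusual agent behavior from halting automation outright.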
Conclusion
The conversations in San Francisco this March made one thing clear: the future of cybersecurity will be shaped by entities that can act independently. Some will be human. Many will not.
As technologies like Mythos continue to push the boundaries of what AI can do, the industry must evolve its defensive mindset accordingly. The most effective strategy may also be the simplest: if it can act, it should be treated like an identity.
By anchoring AI security within identity threat detection and risk mitigation frameworks, organizations can defend against rogue agents without adding yet another fragmented tool to an already complex defense arsenal.
Learn More at the AI Risk Summit | Ritz-Carlton, Half Moon Bay
Related: AI Can Autonomously Hack Cloud Systems With Minimal Oversight: Researchers
Related: 'Mythos-Ready' Security: CSA Urges CISOs to Prepare for Accelerated AI Threats