The 2026 RSA circus is over. The tents are packed and the elephants have been loaded onto the train.
Still, it was an eventful week. There were fleets of vehicles (Escalades, Rivians, but curiously, no Teslas) emblazoned with vendor names and taglines, and you couldn't walk anywhere near Howard Street in San Francisco without seeing, "AI-[insert word here like enabled, enhanced, native, powered, etc., etc., etc.]"
I spent the week speaking with CISOs, cybersecurity professionals, technology vendors, and service providers. Here are a few of my takeaways.
The CISO AI hierarchy is real
While every vendor's messaging went gaga over the AI opportunity, cybersecurity professionals' mood was one of trepidation. In fact, I came away with a profile of three distinct CISO archetypes:
The proactive CISO (roughly 20%): These security leaders were well aware of the AI-driven business and technology changes afoot and came armed with a list of questions tailored to their specific business requirements. Many of these executives brought along security engineers and architects, an action-oriented team. These CISOs had a decent understanding of their organization's AI business initiatives, as well as their own security needs. The goal? Develop a shopping list that aligns with their organization's strategy and supports their governance models, policy enforcement controls, and security technology stacks.
The curious and confused CISO (roughly 40%): These executives know something is happening with AI in their organization, but they aren't sure what, where, or how much is going on. Their goal was education: what risks they face, what risk mitigation steps they should take, and what's available from the industry to help them stop the bleeding. CISOs in this category are somewhat desperate for help.
The blissfully ignorant CISO (roughly 40%): Okay, this one is a bit unfair to CISOs, since it's more about their organizations. There's likely AI development and usage that the CISO, and probably some other executives, are unaware of. They approached RSA believing time was on their side, so they probably skimmed past the AI rhetoric, schmoozed with vendors, and looked for the best cocktail parties.
In my humble opinion, CISOs will cycle through this hierarchy quickly over the next year. Blissfully ignorant CISOs will get wind of AI initiatives at their organization and move on to curiosity and confusion. This won't take long. Proceeding from curious and confused to proactive will be the harder transition. These CISOs must assess business objectives, active initiatives, and user activities, then work with executives to develop a governance framework, create policies, implement guardrails, monitor activities, and manage a flexible model that keeps up with current and future business and technical requirements. A common analogy heard at RSA is that companies must be able to fix the airplane while it's in flight.
Legacy security vendors have the inside track on AI, for now
As far as AI technology consumption for cybersecurity goes, most CISOs I spoke with were open-minded while leaning toward their existing vendors, at least in the short term. This may buy legacy security vendors some, but not much, time.
Remember what happened in the cloud as we progressed from a lack of cloud trust, to "lift and shift," to cloud-native? The same thing is happening with AI, only even faster than the cloud. Bolting AI onto existing tools won't work for long, a year at most.
You've got to get the AI foundations right
I was encouraged to hear vendors describe how they started their AI transition by building an infrastructural foundation (data foundation/context engine, intelligent control plane, execution layer, services, guardrails, etc.) and then adding functional agents on top of this foundation. Cisco/Splunk impressed me with its development approach and roadmap, while AI-based startups such as Abstract, Crogl, and Sidekick are betting the farm on this strategy.
AI code is making an impact
Vendors are also all-in on using AI development tools and seeing strong results. I heard about project acceleration along with staff reduction. Building connectors is a good example. Axonius and Tenable, both known for broad technology integration, are using AI to offload a lot of this tedious but critical work, freeing developers to work on functionality rather than plumbing.
AI pricing remains a mess
While AI capabilities appear to be baked into many tools, I found that no one knows how to price their AI services. Some are doing so by the token, some by the number of users, and some are charging by the agent. The market will flush this out over the rest of the year.
Application security is getting its AI makeover
We all know the impact of AI on software development. It's clear to me after RSA that the same thing is happening to application security. Anthropic's Claude Code Security is one example, but I also got a view of the AWS Security Agent, which provides software testing capabilities across the software development lifecycle: from design, to development, to runtime, to red teaming.
Likewise, I met with a company named XBow that focuses on autonomous offensive security based on AI agents. Based on these developments, we'll see a very different application security market at RSA 2027.
Few may be prepared for what comes next from cyber-adversaries
There's active debate in the industry about the impact of AI across the threat landscape: Are existing cybersecurity defenses adequate, or will AI tilt the battlefield toward adversaries?
After RSA, I believe both premises are true. Sophisticated firms with strong governance, risk management, asset visibility, modern training, and sound hygiene and posture management should be okay. Alarmingly, this is a small percentage of organizations. Most others lack advanced security skills and adequate resources. Adversaries armed with AI tools and automated workflows will have a field day here.
Managed providers are advancing the AI SOC
Managed security service providers (MSSPs) and managed detection and response (MDR) vendors are pushing the envelope on the AI-enabled security operations center (SOC).
Arctic Wolf unveiled its Aurora Superintelligence Platform and the Aurora Agentic SOC, which includes agents for triage, alerting, investigations, and more. I also met with Ontinue, an MSSP that provides services on top of Microsoft security tools such as Defender for Endpoint, Defender for Azure, and MS Sentinel. It's using AI to establish what it calls "hyper-contextualization" to understand all it can about its customers' business processes and technology infrastructure so it can improve decision-making.
Microsoft cements its position
Speaking of Microsoft, it's hard to point to any other vendor that can match its cybersecurity coverage.
Unlike others, Microsoft came to RSA armed with AI metrics and proof points. For example, Microsoft provided specific metrics from several customers that turned on its Defender agents and saved hundreds of hours of work while improving accuracy and productivity. I'm sure Microsoft has many examples to share.
Beware the cyber category killers
We've always viewed cybersecurity through the lens of security product categories (EDR, firewalls, SIEM, CSPM, etc.). But multi-agent AI products could take on many of these tasks simultaneously, breaking down traditional product buckets and acting as category killers.
CISOs must anticipate this and be open to organizational, process, and budgetary changes. Also, will multi-agent cybersecurity products mean the death of the Gartner Magic Quadrant and all other me-too vendor mapping products?
Awareness training gradually transforms
Training is in transition, and I'm pleased with this development. Awareness training is being replaced by behavior monitoring and change. Human risk management (HRM) tools from Fable Security, KnowBe4, and Mimecast, among others, watch over users and provide a nudge when they go astray.
Beyond synthetic phishing, some tools even provide synthetic deepfake training. HRM sales are limited today to progressive organizations, but I believe they will become a de facto standard as regulators and cyber-insurance companies see the light and support this training renaissance.
Security claims ownership of identities
Well, partial ownership, but this is a step in the right direction. I'm seeing interesting developments in areas such as passwordless authentication (I can't believe it's 2026 and we're still using passwords), browser security, non-human identity (NHI) security, and privileged account management.
RSA also pushed discussions about AI-agent access and action control: detection, monitoring, control of shadow agents, zero-standing privilege, etc. AI will be a big player, helping to ease the painful identity modernization process.
As a cryptographer might say, with this article, I've tried to hash the entire RSA event into a single key. I really enjoyed RSA 2026 (my 20th) and look forward to next year. See you at the Moscone Center from April 5 through April 8, 2027.



