Anthropic restricted its Mythos Preview model last week after it autonomously discovered and exploited zero-day vulnerabilities in every major operating system and browser. Palo Alto Networks’ Wendi Whitmore warned that similar capabilities are weeks or months from proliferation. CrowdStrike’s 2026 Global Threat Report puts average eCrime breakout time at 29 minutes. Mandiant’s M-Trends 2026 shows adversary hand-off times have collapsed to 22 seconds.
Offense is getting faster. The question is where exactly defenders are slow, because it isn’t where most SOC dashboards suggest.
Detection tooling has gotten materially better. EDR, cloud security, email security, identity, and SIEM platforms ship with built-in detection logic that pushes MTTD close to zero for known techniques. That is real progress, and it is the result of years of investment in detection engineering across the industry.
But when adversaries are operating on timelines measured in seconds and minutes, the question isn’t whether your detections fire fast enough. It’s what happens between the alert firing and someone actually picking it up.
The Post-Alert Gap
After the alert fires, the clock keeps running. An analyst has to see it, pick it up, assemble context from across the stack, investigate, make a determination, and initiate a response. In most SOC environments, that sequence is where the majority of the attacker’s operating window actually lives.
The analyst is mid-investigation on something else. The alert enters a queue. Context is spread across four or five tools. The investigation itself requires querying the SIEM, checking identity logs, pulling endpoint telemetry, and correlating timelines. For a thorough investigation, one that ends in a defensible determination rather than a gut-feel close, that’s 20 to 40 minutes of hands-on work, assuming the analyst starts immediately, which they rarely do.
Against a 29-minute breakout window, the investigation hasn’t started by the time the attacker has moved laterally. Against a 22-second hand-off, the alert might still be in the queue.
MTTD doesn’t capture any of this. It measures how quickly the detection fires, and on that front the industry has made genuine progress. But that metric stops at the alert. It says nothing about how long the post-alert window actually was, how many alerts received a real investigation versus a quick skim, or how many were bulk-closed without meaningful analysis. MTTD reports on the part of the problem the industry has already made real headway on. The downstream exposure, the post-alert investigation gap, isn’t reflected anywhere.
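The distinction is easy to make concrete. A minimal sketch in Python, using invented timestamps and field names (not drawn from any real SIEM schema), shows that MTTD and the post-alert window are computed from different slices of the same alert record:

```python
from datetime import datetime, timedelta

# Hypothetical alert records: when the malicious activity began, when the
# detection fired, and when the analyst reached a determination.
# All field names and times are illustrative.
alerts = [
    {
        "activity_start": datetime(2026, 3, 1, 9, 0, 0),
        "alert_fired":    datetime(2026, 3, 1, 9, 0, 30),
        "determination":  datetime(2026, 3, 1, 10, 10, 0),
    },
]

def mean(deltas):
    """Average a list of timedeltas."""
    return sum(deltas, timedelta()) / len(deltas)

# MTTD stops at the alert: activity start -> detection firing.
mttd = mean([a["alert_fired"] - a["activity_start"] for a in alerts])

# The post-alert window is everything MTTD ignores:
# detection firing -> defensible determination.
post_alert = mean([a["determination"] - a["alert_fired"] for a in alerts])

print(f"MTTD: {mttd}, post-alert window: {post_alert}")
```

With these sample times, MTTD is seconds while the post-alert window is over an hour: a dashboard reporting only the first number hides nearly all of the attacker’s usable time.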
What Changes When AI Handles Investigation
An AI-driven investigation doesn’t improve detection speed. MTTD is a detection engineering metric, and it stays the same. What AI compresses is the post-alert timeline, which is exactly where the real exposure lives.
The queue disappears. Every alert is investigated as it arrives, regardless of severity or time of day. Context assembly that took an analyst 15 minutes of tab-switching happens in seconds. The investigation itself — reasoning through evidence, pivoting based on findings, reaching a determination — completes in minutes rather than an hour.
This is what we built Prophet AI to do. It investigates every alert with the depth and reasoning of a senior analyst, at machine speed: planning the investigation dynamically, querying the relevant data sources, and producing a transparent, evidence-backed conclusion. The post-alert gap doesn’t exist in this model because there is no queue and no wait time. For teams working toward this benchmark, we’ve published practical steps to compress investigation time below two minutes.
The same structural constraint applies to MDR. MDR analysts face the same post-alert bottleneck because they’re still bound by human investigation capacity. The shift from outsourced human investigation to AI investigation removes that ceiling entirely, changing what becomes measurable about your SOC’s actual performance.
The Metrics That Matter Now
Once the post-alert window collapses, the traditional speed metrics stop being the most informative indicators. An MTTI of two minutes is meaningful in the first quarter you report it. After that, it’s table stakes. The question shifts from “how fast are we?” to “how much stronger is our security posture getting over time?”
Four metrics capture this:
- Investigation coverage rate. What percentage of total alerts receive a full investigation consisting of a complete line of questioning with evidence? In a traditional SOC, this number is typically 5 to 15 percent. The rest get skimmed, bulk-closed, or ignored. In an AI-driven SOC, it should be 100 percent. This is the single most important metric for understanding whether your SOC is actually seeing what’s happening in your environment.
- Detection surface coverage. MITRE ATT&CK technique coverage mapped against your detection library, with gaps identified and tracked over time. This means continuously mapping the detection surface, identifying techniques with weak or no coverage, and flagging single points of failure or scenarios where a single detection rule is the only thing between the organization and complete blindness to a technique. Detection engineering in an AI-driven SOC requires rethinking how this surface is maintained.
- False positive feedback velocity. How quickly do investigation outcomes feed back into detection tuning? In most SOCs, this loop runs on human memory and quarterly review cycles. The target state is continuous: investigation outcomes should flow directly into detection optimization, suppressing noise and improving signal without waiting for a scheduled review.
- Hunt-driven detection creation rate. How many permanent detections were created from proactive hunting findings versus from incident response? This measures whether your hunting program is expanding your detection surface or just generating reports. The strongest implementations tie hunting directly to detection gaps where you run hypothesis-driven hunts against the techniques with the weakest coverage, then convert confirmed findings into permanent detection rules.
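Each of the four is a simple ratio once investigation outcomes are captured as structured data. A hedged sketch, with record shapes and numbers invented purely for illustration:

```python
from datetime import timedelta

# Hypothetical SOC records; every field name and value is illustrative.
alerts = [
    {"id": 1, "investigated": True},
    {"id": 2, "investigated": False},
    {"id": 3, "investigated": True},
    {"id": 4, "investigated": False},
]

# Detection library keyed by MITRE ATT&CK technique ID.
detections = {
    "T1059": ["rule-001", "rule-002"],  # two independent detections
    "T1021": ["rule-003"],              # single point of failure
    "T1567": [],                        # coverage gap
}

# Time from a false-positive determination to the corresponding tuning change.
fp_feedback = [timedelta(hours=2), timedelta(days=1)]

hunts = {"confirmed_findings": 5, "converted_to_detections": 3}

# 1. Investigation coverage rate: fully investigated alerts / total alerts.
coverage_rate = sum(a["investigated"] for a in alerts) / len(alerts)

# 2. Detection surface coverage, plus single points of failure.
covered = [t for t, rules in detections.items() if rules]
surface_coverage = len(covered) / len(detections)
single_points_of_failure = [t for t, r in detections.items() if len(r) == 1]

# 3. False positive feedback velocity: average determination-to-tuning lag.
feedback_velocity = sum(fp_feedback, timedelta()) / len(fp_feedback)

# 4. Hunt-driven detection creation rate: findings converted to rules.
hunt_conversion = hunts["converted_to_detections"] / hunts["confirmed_findings"]

print(f"investigation coverage:     {coverage_rate:.0%}")
print(f"detection surface coverage: {surface_coverage:.0%}")
print(f"SPOF techniques:            {single_points_of_failure}")
print(f"feedback velocity:          {feedback_velocity}")
print(f"hunt conversion rate:       {hunt_conversion:.0%}")
```

The point of the sketch is that none of these require exotic telemetry: they fall out of recording, per alert and per hunt, what was investigated, what rule fired, and when tuning happened.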
These measurements only matter once AI is doing real investigation work, but they represent a fundamentally different view of SOC performance that’s oriented around security outcomes rather than operational throughput.
The Mythos disclosure crystallized something the security industry already knew but hadn’t fully internalized: AI is accelerating offense at a pace that makes human-speed investigation untenable. The response isn’t to panic about AI-generated exploits. It’s to close the gap where defenders are actually slow — the post-alert investigation window — and to start measuring whether that gap is shrinking.
The teams that shift from reporting detection speed to reporting investigation coverage and detection improvement will have a clearer picture of their actual risk posture. When attackers have AI working for them, that clarity matters.
Prophet Security’s Agentic AI SOC Platform investigates every alert with senior analyst depth, continuously optimizes detections, and runs directed threat hunts against coverage gaps. Visit Prophet Security to see how it works.