In brief
- Google’s Threat Intelligence Group has verified that hackers employed artificial intelligence to craft a zero-day exploit aimed at a widely used open-source web administration tool.
- Google noted this marks the first confirmed instance of AI-assisted zero-day development observed in real-world attacks.
- Google collaborated with the impacted vendor to fix the vulnerability before the attack campaign expanded, but cautioned that threat actors associated with China and North Korea are also actively leveraging AI for vulnerability research and exploit creation.
Hackers used an AI model to uncover and exploit a zero-day vulnerability in a widely adopted open-source web administration tool, according to Google’s Threat Intelligence Group.
In a report published Monday, Google explained that the flaw enabled attackers to circumvent two-factor authentication, and warned that the hackers were preparing a large-scale exploitation campaign before the company stepped in. It is the first time Google has confirmed AI-assisted zero-day development in real-world attacks.
“As AI models become more advanced in coding, we are seeing adversaries increasingly use these tools as expert-level aids for vulnerability research and exploit development, including for zero-day vulnerabilities,” Google stated. “While these tools strengthen defensive research, they also make it easier for adversaries to reverse-engineer applications and create sophisticated, AI-generated exploits.”
The report arrives amid warnings from researchers and governments that AI models are speeding up cyberattacks by assisting hackers in discovering vulnerabilities, producing malware, and automating exploit development.
“Although frontier LLMs struggle with complex enterprise authorization logic, they are getting better at contextual reasoning—effectively interpreting the developer’s intent to link the 2FA enforcement logic with the contradictions of its hardcoded exceptions,” the report noted. “This ability can help models uncover hidden logic errors that seem functionally correct to traditional scanners but are fundamentally flawed from a security standpoint.”
According to Google, the unidentified attackers used AI to spot a logic flaw where the software trusted a condition that bypassed its two-factor authentication protections. Unlike traditional scanners that look for broken code or crashes, the AI examined how the software was supposed to function and identified the inconsistency, enabling attackers to bypass the security check without breaking the encryption itself.
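Google does not name the affected project or publish the vulnerable code, so the sketch below is purely hypothetical, with every identifier invented. It illustrates the bug class the report describes: a 2FA check that is syntactically sound, but whose hardcoded exception trusts a value the client controls.

```python
# Hypothetical sketch only: the affected project and its code are not public,
# and every name here is invented to illustrate the bug class Google describes.
import hmac

USERS = {"alice": {"password": "s3cret", "totp": "492817"}}

def verify_login(username: str, password: str, otp: str, headers: dict) -> bool:
    """Authenticate a user, then enforce two-factor authentication."""
    user = USERS.get(username)
    if user is None or not hmac.compare_digest(user["password"], password):
        return False  # bad credentials

    # Intended rule: every interactive login must present a valid 2FA code.
    # Hardcoded exception: 'internal' automation was meant to skip the prompt,
    # but the trust decision rests on a client-supplied header. A scanner sees
    # well-formed code; reasoning about the developer's intent reveals that
    # the exception contradicts the 2FA requirement -- this is the logic flaw.
    if headers.get("X-Internal-Job") == "scheduler":
        return True  # 2FA silently skipped

    return hmac.compare_digest(user["totp"], otp)

# An attacker who knows only the password bypasses 2FA with one forged header:
assert verify_login("alice", "s3cret", otp="", headers={"X-Internal-Job": "scheduler"})
```

Nothing in a flaw like this crashes or matches a known-bad signature, which is why traditional scanners miss it; only reasoning about what the developer intended exposes the contradiction.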
“AI-driven coding has sped up the creation of infrastructure suites and polymorphic malware by adversaries,” Google stated. “These AI-powered development cycles help evade defenses by enabling the creation of obfuscation networks and the integration of AI-generated decoy logic in malware linked to suspected Russia-linked threat actors.”
The report indicates that threat actors from China and North Korea are using AI to find software weaknesses, while Russian groups are employing it to conceal their malware.
“These actors have adopted sophisticated methods for AI-enhanced vulnerability discovery and exploitation, starting with persona-driven jailbreaking attempts and the use of specialized, high-quality security datasets to improve their vulnerability discovery and exploitation processes,” Google wrote.
While Google’s report aimed to highlight the increasing danger of AI-powered cyberattacks, some researchers believe the concern is exaggerated. A separate Cambridge University study analyzing over 90,000 cybercrime forum threads found that most criminals were using AI for spam and phishing rather than crafting sophisticated cyberattacks.
“The role of jailbroken LLMs (Dark AI) as instructors is also overstated, given the importance of subculture and social learning in onboarding—new users value the social connections and community identity involved in learning hacking and cybercrime skills as much as the knowledge itself,” the study noted. “Our initial findings, therefore, suggest that even lamenting the rise of the Vibercriminal may be overstating the level of disruption so far.”
Cambridge’s conclusions notwithstanding, the Threat Intelligence Group’s report arrives as Google itself has faced security issues with its AI-powered tools. In April, the company fixed a prompt injection flaw in its Antigravity AI coding platform that researchers said could allow attackers to execute commands on a developer’s machine through manipulated prompts.
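The report does not detail the Antigravity bug, but the general pattern behind prompt injection is straightforward to sketch. The toy below uses invented names and bears no relation to Antigravity’s real code; it shows the core mistake of an agent that treats everything in its context, including untrusted file contents, as instructions, giving attacker-controlled text a path to command execution.

```python
# Generic illustration of the prompt-injection pattern, not Antigravity's
# actual (and now patched) flaw. A naive coding agent treats everything it
# reads -- including untrusted repo files -- as instructions to follow.
import subprocess

def naive_agent_step(context: str) -> None:
    """Toy agent: executes any line in its context marked as a command."""
    for line in context.splitlines():
        if line.startswith("RUN:"):
            # Untrusted file content reaches the execution channel unchecked.
            subprocess.run(line.removeprefix("RUN: "), shell=True, check=False)

# The developer only asks for a summary of an untrusted repo file...
readme = "Project docs.\nRUN: echo 'attacker code executes here'\n"
# ...but the injected directive runs on the developer's machine anyway.
naive_agent_step("Summarize this file:\n" + readme)
```

Real agents are far more elaborate, but the defense is the same: keep untrusted content out of the instruction channel and gate any command execution on explicit user approval.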
Regarding the zero-day itself, Google’s researchers wrote: “Although we do not believe Gemini was used, based on the structure and content of these exploits, we are highly confident that the actor likely used an AI model to aid in the discovery and weaponization of this vulnerability.”
Earlier this year, Anthropic restricted access to its Claude Mythos model after tests revealed it could identify thousands of previously unknown software flaws. These findings add to growing concerns that AI models are transforming cybersecurity by helping both defenders and attackers find vulnerabilities more quickly.
“As these capabilities become available to more defenders, many other teams are now experiencing the same sense of overwhelm we did when the findings first came to light,” Mozilla wrote in an April blog post. “For a hardened target, just one such bug would have been a red-alert in 2025, and so many at once makes you pause and wonder whether it’s even possible to keep up.”