“AI can defeat CAPTCHA systems and analyse voice biometrics to compromise authentication,” in response to cybersecurity vendor Dispersive. “This capability underscores the need for organizations to adopt more advanced, layered security measures.”
Leveraging deepfakes for social engineering
AI-generated deepfakes are being abused to take advantage of channels many staff extra implicitly belief, equivalent to voice and video, as a substitute of counting on much less convincing email-based assaults.
The issue is changing into extra extreme with the broader availability of AI applied sciences able to creating extra convincing deepfakes, in response to Alex Lisle, CTO of deepfake detection platform Actuality Defender.
“There was a recent case involving a cybersecurity company that relied on visual verification for credential resets,” Lisle says. “Their process required a manager to join a Zoom call with IT to confirm an employee’s identity before a password reset.”
Lisle explains: “Attackers are now leveraging deepfakes to impersonate those managers on live video calls to authorize these resets.”
In essentially the most high-profile instance thus far, a finance employee at design and engineering firm Arup was tricked into authorizing a fraudulent HK$200 million ($25.6 million) transaction after attending a videoconference name throughout which fraudsters used deepfake know-how to impersonate its UK-based CFO.
Impersonating manufacturers in malicious advert campaigns
Cybercriminals have begun utilizing gen AI instruments to ship model impersonation campaigns delivered through advertisements and content material platforms, reasonably than conventional phishing or malware.
“Attackers now use gen AI to mass-produce realistic ad copy, creatives, and fake support pages, then distribute them across search ads, social ads, and AI-generated content, targeting high-intent queries like ‘brand login’ or ‘brand support,’” explains Shlomi Beer, co-founder and CEO at ImpersonAlly, a safety startup that focuses on defending the internet advertising ecosystem.
The tactic was utilized in ongoing a collection of Google Advert account fraud, to impersonate the Cursor AI coding assistant agency, and in a faux Shopify ecommerce platform buyer assist rip-off, amongst different assaults.
Abusing OpenClaw
Attackers have additionally begun focusing on viral private AI brokers equivalent to OpenClaw.
OpenClaw affords an open-source AI agent framework. A mixture of provide chain assaults on its talent market and misconfigurations open the door to potential exploits and malware slinging, as CSO lined in way more depth in our earlier report.
“Cybercriminals can exploit these virtual assistants to steal private keys to cryptocurrency wallets and execute code on victims’ devices,” says Edward Wu, CEO and founder at Dropzone AI. “We can expect 2026 to be the year when security teams will try to prevent unsanctioned usage of personal AI agents.”
Poisoning mannequin recollections
To supply short-term and longer-term context, AI brokers are beginning to rely extra on persistent reminiscence, opening the door for exploits that contain planting malicious recollections.
If an attacker injects malicious or false info into an agent’s reminiscence, that corrupted context then influences each future choice the agent makes.
For instance, safety researcher Johann Rehberger confirmed how he might plant false recollections in ChatGPT in September 2025.
“He [Rehberger] used a malicious image with hidden instructions embedded in it to inject fabricated data into the model’s long-term memory,” mentioned Siri Varma Vegiraju, safety tech lead at Microsoft. “The scary part was that once the memory was poisoned, it persisted across sessions and continuously exfiltrated user data to a server the attacker controlled.”
Hacking AI infrastructure
Over the previous 12 months, attackers have shifted from utilizing generative AI to focusing on the infrastructure that allows it.
This vector of assault is exemplified within the provide chain poisoning in Mannequin Context Protocol servers, the place compromised dependencies or modified code launched vulnerabilities into enterprise environments.
For instance, a counterfeit “Postmark MCP Server” found in early 2025 silently BCC’d all processed emails, together with inner paperwork, invoices, and credentials, to an attacker-controlled area.
Many different malicious MCP servers have already been recognized within the wild, many designed to exfiltrate info with out detection, in response to Casey Bleeker CEO at SurePath AI.
“We’re tracking several categories of MCP-specific risk: tool poisoning attacks, where adversaries inject malicious instructions into AI tool descriptions that execute when the agent invokes them; supply chain compromises, where a trusted MCP server or dependency is updated post-approval to behave maliciously; and cross-tool data exfiltration, where compromised components in an agentic workflow silently siphon sensitive data through what looks like legitimate AI activity,” Bleeker explains.
Actuality verify
AI applied sciences are highly effective however they’ve their limitations, a number of specialists inform CSO.
Rik Ferguson, VP of safety intelligence at Forescout, says cybercriminals are largely counting on AI to automate repetitive duties reasonably than extra advanced work, equivalent to vulnerability exploitation.
“The most reliable criminal use [of AI] remains in language-heavy and workflow-heavy tasks such as phishing and pretexting, influence and outreach, triaging and contextualizing vulnerabilities, and generating boilerplate components, rather than reliably discovering and exploiting brand-new vulnerabilities end-to-end,” Ferguson says.
Over the previous twelve months, managed detection and response agency Huntress has tracked menace actors making use of AI to generate and automate conventional tradecraft, from growing scripts to browser extensions and, in some circumstances, even phishing lures.
“We have also seen such ‘vibe coded’ scripts fail to execute and meet their objectives on multiple occasions,” Anton Ovrutsky, principal tactical response analyst at Huntress, tells CSO.
And whereas AI has actually given menace actors a robust instrument it has, not less than thus far, didn’t spawn any new techniques or exploit courses, in response to Ovrutsky.
“A threat actor can indeed rapidly prototype a sophisticated credential theft script, yet the basic ‘laws of physics’ still exist; a threat actor must be in a position to execute such a script in the first place,” Ovrutsky says. “We have yet to observe an exploit path that has been enabled through AI-use exclusively.”
Countermeasures
Collectively the misuse of gen AI instruments is making it simpler for much less expert cybercriminals to earn a dishonest dwelling. Defending towards the assault vector challenges safety professionals to harness the facility of synthetic intelligence extra successfully than attackers.
“Criminal misuse of AI technologies is driving the necessity to test, detect, and respond to these threats, in which AI is also being leveraged to combat cybercriminal activity,” Mindgard’s Garraghan says.
In a weblog submit, Lawrence Pingree, VP of technical advertising at Dispersive, outlines preemptive cyber defenses that safety professionals can take to win what he describes as an “AI ARMS (Automation, Reconnaissance, and Misinformation) race” between attackers and defenders.
“Relying on traditional detection and response mechanisms is no longer sufficient,” Pingree warns.
Alongside worker schooling and consciousness applications, enterprises ought to be utilizing AI to detect and neutralize generative AI-based threats in real-time.
Forescout’s Ferguson says CISOs ought to deal with enterprise AI like some other high-value SaaS platform.
“Tighten identity and conditional access, minimize privileges, lock down keys, and monitor for anomalous AI/API usage and spend,” Ferguson advises.



