By Brian Long, CEO and Co-founder, Adaptive Security
In March 2025, a finance director at a multinational firm in Singapore joined what appeared to be a routine Zoom call with her senior leadership team. The CFO was there. Other executives appeared on screen. Everyone looked right. Everyone sounded right.
She authorized a $499,000 transfer before anyone flagged the fraud. Every face on that call was AI-generated.
This attack has a template. In early 2024, the same technique was used to steal $25.6 million from Arup, one of the world's largest engineering firms, in a single afternoon. The method has spread widely, and the tools behind it have grown cheaper and easier to use every month since.
The organizations that have stopped these attacks all arrived at the same answer: train your people to pause and verify before they act.
The Tools to Run This Attack Cost Almost Nothing
Cloning someone's voice takes three seconds of audio and a free download.
Three seconds from a voicemail, a podcast appearance, an earnings call, or a LinkedIn video is all a current AI model needs to generate a fully interactive voice replica in real time. The model runs offline, requires no technical background, and costs nothing.
Voice deepfake incidents rose 680% year over year in 2025. More than 100,000 attacks were recorded in the United States in a single year. The tools behind them are available on public repositories, carry no moderation, and run on standard consumer hardware.
What makes these attacks so effective is the preparation behind them. Before placing a single call, attackers map the target organization's org chart, identify who holds financial authority, and study the standard approval workflow for wire transfers.
By the time the phone rings, the script is already written.
Protect your organization from deepfakes, AI voice phishing, and spear phishing attacks with next-generation security awareness training. Companies like Bose, PayPal, and Xerox trust Adaptive to defend against deepfakes, voice phishing, and AI-powered attacks.
See exactly how Adaptive trains your team to spot them.
Tour the #1 AI security platform now
Your Security Stack Was Built for a Different Attack
A deepfake attack targets people directly. It arrives as a conversation: a familiar face on a Zoom screen, a voice that matches, an urgent request that sounds like any other.
Phone calls, video conferences, and voice requests sit outside everything your security stack was built to inspect.
The most sophisticated security stack in the world will not stop this attack if the employee fielding the call has never been trained to recognize it.
Finance Teams Are the Primary Target. Most Have Never Trained for This.
The targets in these attacks are the controller, the accounts payable specialist, and the HR coordinator handling payroll. Deepfake attackers also call IT help desks with urgent credential reset requests, delivered in a voice that sounds exactly like the CTO. These employees have the authority to move money and change account data.
The attack surface goes further than most security leaders account for. AI personas are now appearing in hiring pipelines, built from stolen LinkedIn profiles and designed to pass video interviews. Once hired, they gain access to internal systems, source code, and company data.
When I started speaking with CISOs about this threat eighteen months ago, about one in ten had seen a successful deepfake attack at their organization.
Today, that number is over half. Most of what I hear never makes the news. Companies have little incentive to disclose that a voice clone just cost them $500,000.
The Financial Scale of This Problem Is Growing Fast
Deepfake fraud losses exceeded $200 million in the first four months of 2025 alone. The full year of 2024 saw $359 million in total losses. Global deepfake fraud has now crossed $2.19 billion in documented losses, with the United States accounting for the largest share.
Among organizations that lost money to a deepfake attack, 61% reported losses above $100,000. Nearly 19% reported losses above $500,000.
These are only the losses that were reported. The actual total is far higher.
Running this attack at scale requires three things: a name, a three-second audio sample, and one employee with no verification protocol. That combination exists at almost every organization right now.
Building the Reflex Before the Call Comes
The companies that stop these attacks before money moves all do one thing: they train their employees to verify before they act, no matter how familiar or urgent the request sounds.
Three controls cost nothing to put in place: a verbal passcode for any high-value financial request, a callback requirement on a pre-stored number before approving any wire transfer, and a standing policy that urgency in any financial request is a reason to slow down. Most organizations have none of these in place today.
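For illustration only, here is a minimal sketch of how those three controls might be written down as a single pre-approval gate. Every name, field, and threshold in it is hypothetical rather than part of any particular payment system; the point is simply that each check must pass before money moves.

```python
# Illustrative sketch only: the three verification controls expressed as a gate.
# All names and the dollar threshold are hypothetical, not a real payment API.
from dataclasses import dataclass

HIGH_VALUE_THRESHOLD_USD = 10_000  # hypothetical cutoff for "high-value"

@dataclass
class WireRequest:
    requester: str            # who the caller claims to be
    amount_usd: float
    passcode_given: str       # verbal passcode spoken on the call, if any
    callback_confirmed: bool  # True only after calling back a pre-stored number
    marked_urgent: bool       # "do it now" pressure from the caller

def may_approve(req: WireRequest, stored_passcode: str) -> tuple[bool, str]:
    """Return (approved, reason). Approval requires every control to pass."""
    if req.amount_usd >= HIGH_VALUE_THRESHOLD_USD and req.passcode_given != stored_passcode:
        return False, "verbal passcode missing or wrong: verify identity out of band"
    if not req.callback_confirmed:
        return False, "no callback on the pre-stored number: do not approve yet"
    if req.marked_urgent:
        return False, "urgency is a reason to slow down: escalate for a second approver"
    return True, "all verification steps passed"
```

In practice these checks live in people and process rather than software, but writing them out this way makes the order explicit: identity first, callback second, and urgency treated as a reason to stop rather than hurry.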
In July 2025, an attacker used an AI-generated voice to impersonate Secretary of State Marco Rubio, sending voice messages via Signal to foreign ministers, a sitting senator, and a governor. None of the recipients acted on the messages.
The requests had arrived through an unofficial consumer messaging app, and that inconsistency alone was enough to trigger scrutiny. The incident was reported to the State Department before anyone responded. The attack failed because the recipients paused before acting.
A once-a-year compliance module will not build that kind of instinct. Deepfake audio is designed to sound exactly right. An employee who has never experienced a voice clone attack has nothing to draw on when their CFO calls requesting an immediate transfer. The reflex has to be built before that call comes.
At Adaptive Security, we simulate AI-powered deepfake attacks across voice, SMS, email, and video. When an employee receives a call from a cloned version of their CFO requesting an urgent wire transfer, it's a test.
If they fail, the platform adjusts their risk score and delivers personalized training tied directly to that scenario. Security teams get a clear, real-time view of where they are most exposed and can act before an attacker does.
The gap between a synthetic voice and a human one is closing faster than most organizations are preparing for. The teams running simulations and building verification habits today are the ones that will catch the call before the transfer clears.
Three seconds of your CEO's voice is already on the internet. Make sure your team knows what to do when it calls.
To learn how Adaptive Security helps organizations prevent AI-powered social engineering attacks, visit adaptivesecurity.com.
Sponsored and written by Adaptive Security.



