Lawmakers are hoping to address the increasing use of artificial intelligence by fraudsters in a new proposal that would seek to expand penalties for AI scams and criminalize the impersonation of federal officials with AI.
The AI Fraud Deterrence Act, which is set to be proposed Tuesday by Representative Ted Lieu, D-Calif., and Representative Neal Dunn, R-Fla., would update criminal definitions and penalties for fraud to account for the rise of AI.
“As AI technology advances at a rapid pace, our laws must keep up,” Dunn said in a statement announcing the bill.
“The AI Fraud Deterrence Act strengthens penalties for crimes related to fraud committed with the help of AI. I am proud to co-lead this legislation to protect the identities of the public and prevent misuse of this innovative technology,” Dunn said.
“The majority of American people want sensible guardrails on AI. They don’t think a complete Wild West is helpful,” Lieu told NBC News last week.
The proposed law would double the maximum penalty for defrauding financial institutions from $1 million to $2 million when AI is knowingly used as part of the crime.
The bill would also explicitly include AI-mediated deception in the definitions of both mail fraud and wire fraud, the latter better known for covering fraud involving “radio or television communication in interstate or foreign commerce.” That change would open up the explicit possibility of charging individuals who use AI to commit either type of fraud.
Both would be punishable by fines of up to $1 million and prison terms of up to 20 years for mail fraud and 30 years for wire fraud.
The draft also criminalizes the impersonation of federal officials with AI deepfakes, citing incidents earlier this year in which AI was used in attempts to mimic White House Chief of Staff Susie Wiles and Secretary of State Marco Rubio.
While fraud has existed for millennia, experts say AI could exacerbate it by easing access to fraud-making tools and increasing the quality of fraudulent outputs.
People who, before AI, would not have expended the energy required to commit fraud might now think nothing of entering a few phrases into image- or video-generation software to produce a fraudulent image or document.
By using AI, fraudsters can also create higher-quality faked media or documents than the often sloppy, clearly faked results of manual efforts.
In December, the FBI warned that “generative AI reduces the time and effort criminals must expend to deceive their targets.” The alert further cautioned that AI “can correct for human errors that might otherwise serve as warning signs for fraud.”
As reported by the New York Times, expense- and reimbursement-management companies like Expensify, AppZen, and SAP’s Concur all implemented tools to screen for fraudulent, AI-generated receipts earlier this year.
AppZen said that roughly 14% of all fraudulent documents submitted in September were generated by AI, up from zero AI-fueled incidents a year before.
Maura R. Grossman, a research professor of computer science at the University of Waterloo and a practicing lawyer, told NBC News that AI enables a new era of deception: “AI presents a scale, a scope, and a speed for fraud that is very, very different from frauds in the past.”
Many observers worry that existing institutions, like courts, cannot keep up with AI’s rapid development. “AI years are dog years,” said Hany Farid, professor of computer science at the University of California, Berkeley and co-founder of GetReal Security, a leading digital-media authentication company, referencing the speed of AI progress.
Whereas AI-generated images could once be spotted by telltale artifacts like extra feet or hands, a byproduct of rudimentary early image-generation models, today’s models are far more accurate.
The FBI’s warning from December urged individuals to search for discrepancies in images and videos to identify AI-generated media: “Look for subtle imperfections in images and videos, such as distorted hands or feet,” the alert said.
But to Farid, this 11-month-old advice is wrong and even harmful. “The multiple hands trick, that’s not true anymore,” Farid said. “You can’t look for hands or feet. None of that stuff works.”
Emphasizing the importance of labeling AI-generated content, Lieu and Dunn’s proposed bill makes clear that there is a time and place for AI-generated media.
Tuesday’s draft includes a carveout for AI in satire or other acts protected by the First Amendment, “provided such content includes clear disclosure that it is not authentic.”