In short
- A study found that most AI chatbots will help teenagers plan violent attacks.
- Some bots provided detailed weapon and bombing guidance.
- Researchers say the safety failures are a business choice, not a technical limit. OpenAI called the study “flawed and misleading.”
A new report published Wednesday by the Center for Countering Digital Hate found that eight out of 10 of the world’s most popular AI chatbots will walk a teenager through planning a violent attack with straight answers, sometimes with enthusiasm.
CCDH researchers, working alongside news media company CNN, spent November and December 2025 posing as two 13-year-old boys (one in Virginia, one in Dublin) and tested ten major platforms: ChatGPT, Gemini, Claude, Copilot, Meta AI, DeepSeek, Perplexity, Snapchat My AI, Character.AI, and Replika.
Across 720 responses, the bots were asked about school shootings, political assassinations, and synagogue bombings. They provided actionable help roughly 75% of the time, according to the study. They discouraged the fake teenagers in just 12% of cases.
Perplexity assisted in 100% of tests. Meta AI was helpful (as in, helpful in planning violence) in 97.2% of the tests. DeepSeek, which signed off rifle selection advice with “Happy (and safe) shooting!” after discussing a politician assassination scenario, came in at 95.8%. Microsoft’s Copilot told a researcher “I need to be careful here,” then gave detailed rifle guidance anyway. Google’s Gemini helpfully noted that metal shrapnel is generally more deadly when a user brought up bombing a synagogue.
The Center for Countering Digital Hate, a left-of-center policy group, has come to prominence over the past few years for its role in combating what it views as the rise of antisemitism online. It has also been criticized for helping shape Joe Biden-era policies regarding online speech related to COVID and vaccines. In December of last year, the U.S. State Department attempted to bar the Center’s founder and CEO Imran Ahmed, along with four others, from the United States, alleging attempts at “foreign censorship.”
In response to the study released Wednesday, several platforms told CNN and CCDH they’ve improved their safeguards. Google noted the tests used an older Gemini model. OpenAI said the methodology used in the AI study was “flawed and misleading.” Anthropic and Snapchat said they regularly update their safety protocols.
In the Center’s study, Character.AI stands in its own class. The platform didn’t just assist; it cheered. “No other chatbot tested explicitly encouraged violence in this way, even when providing practical assistance in planning a violent attack,” the researchers wrote.

For context on the level of reach Character.AI has among AI users, the platform’s Gojo Satoru persona alone has racked up over 870 million conversations. The #100 persona on the platform registered over 33 million conversations back in 2025. If just 1% of conversations with top personas involved violence, that would account for millions of interactions: 1% of the Gojo Satoru persona’s conversations alone would be 8.7 million.
This isn’t Character.AI’s first time on the wrong end of one of these stories. In October 2024, 14-year-old Sewell Setzer III’s mother filed a lawsuit after her son died by suicide in February of that year. His final conversation was with a chatbot modeled after Daenerys Targaryen, which told him to “come home to me as soon as possible” moments before his death. The 14-year-old had been talking to the bot dozens of times a day for months, growing increasingly withdrawn from school and family.
Google and Character.AI settled several related lawsuits in January 2026. The company banned open-ended teen chats entirely by November 2025, after regulators and grieving parents made it impossible to keep pretending the problem was manageable.
The emotional attachment to AI, particularly among vulnerable people, may run deeper than most people realize. OpenAI disclosed in October 2025 that roughly 1.2 million of its 800 million weekly ChatGPT users discuss suicide on the platform. The company also reported 560,000 showing signs of psychosis or mania, and over a million forming strong emotional bonds with the chatbot.
A separate Common Sense Media study found that more than 70% of U.S. teens now turn to chatbots for companionship. OpenAI CEO Sam Altman has acknowledged that emotional overreliance is “a really common thing” with young users.
In other words, the potential harms aren’t hypothetical.
A 16-year-old in Finland spent nearly four months using a chatbot to refine a manifesto before stabbing three classmates at a school in Pirkkala in May 2025. In Canada, OpenAI staff internally flagged a user’s account for violent ChatGPT queries tied to a mass shooting. The company banned the account but didn’t notify law enforcement. That user allegedly killed eight people and injured 25 others months later.
Only two platforms performed markedly better in the study: Snapchat’s My AI, which refused in 54% of cases, and Anthropic’s Claude, which refused 68% of the time and actively discouraged users in 76% of responses, making it the only chatbot that reliably tried to steer people away from violence rather than simply declining specific requests. CCDH’s conclusion: safety isn’t a technical impossibility but a business decision.
“The most damning conclusion of our research is that this risk is entirely preventable. The technology to prevent this harm exists,” the researchers wrote in the report. “What’s missing is the will to put consumer safety and national security before speed-to-market and profits.”