In brief
- Dario Amodei says Anthropic will not remove its bans on mass domestic surveillance and fully autonomous weapons.
- The Pentagon has threatened contract termination and possible action under the Defense Production Act.
- The standoff follows reports that the U.S. military used Claude to capture former Venezuelan President Nicolás Maduro.
Anthropic CEO Dario Amodei said Thursday the company will not remove safeguards from its Claude AI model, escalating a dispute with the U.S. Department of Defense over how the technology can be used in classified military systems.
The statement comes as the Defense Department reviews its relationship with Anthropic and weighs potential penalties, including cancellation of the company's $200 million contract and possible invocation of the Defense Production Act.
"We cannot in good conscience accede to their request," Amodei wrote, referring to the Pentagon's demand in January that AI contractors permit use of their systems for "any lawful use."
While the Pentagon has since required AI vendors to adopt standard "any lawful use" language in future agreements, Anthropic remained the only frontier AI firm to resist handing over control of its AI to the military.
On Wednesday, Axios first reported that the Pentagon had issued an ultimatum requiring unrestricted military use of Claude. The deadline is reportedly this Friday.
“It is the Department’s prerogative to select contractors most aligned with their vision,” Amodei continued. “But given the substantial value that Anthropic’s technology provides to our armed forces, we hope they reconsider.”
In his statement, Amodei framed the company's stance as aligned with U.S. national security goals.
"I believe deeply in the existential importance of using AI to defend the United States and other democracies, and to defeat our autocratic adversaries," he said.
He added that Claude is “extensively deployed across the Department of War and other national security agencies for intelligence analysis, modeling and simulation, operational planning, cyber operations, and more.”
War on AI
The dispute unfolds against broader concerns about how advanced AI systems behave in high-stakes military scenarios. In a recent King's College London study, OpenAI's GPT-5.2, Anthropic's Claude Sonnet 4, and Google's Gemini 3 Flash deployed nuclear weapons in 95% of simulated geopolitical crises.
During a speech at SpaceX's Starbase in Texas in January, Defense Secretary Pete Hegseth said the U.S. military plans to deploy the most advanced AI models.
That same month, reports surfaced that Claude had been used during a U.S. operation to capture former Venezuelan President Nicolás Maduro. Amodei rejected claims that Anthropic had questioned any specific military operations.
“Anthropic understands that the Department of War, not private companies, makes military decisions,” he said. “We have never raised objections to particular military operations nor attempted to limit use of our technology in an ad hoc manner.”
Despite this, Amodei said using these systems for mass domestic surveillance or fully autonomous weapons is incompatible with democratic values and presents serious risks.
“Today, frontier AI systems are simply not reliable enough to power fully autonomous weapons,” he said. “We will not knowingly provide a product that puts America’s warfighters and civilians at risk.”
He also addressed the Pentagon’s threat to designate Anthropic a “supply chain risk” while potentially invoking the Defense Production Act.
“These latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security,” he said.
While Amodei has said the company will not comply with the Pentagon’s request, Anthropic has at the same time revised its Responsible Scaling Policy, dropping a pledge to halt training of advanced systems without assured safeguards in place.
Robert Weissman, co-president of Public Citizen, said the Pentagon’s posture signals broader pressure on the tech industry.
“The Pentagon is publicly bullying Anthropic, and the public part is intentional, because they want to pressure this particular company and send a message to all big tech and all corporations that we intend to do and take whatever we want and don’t get in our way,” Weissman told Decrypt.
Weissman described Anthropic’s guardrails as “modest” and aimed at preventing “improper surveillance of American people or to facilitate the development and deployment of killer robots, AI-enabled weaponry that could launch lethal strikes without humans say so.”
“Those are the most sensible and modest guardrails you could come up with when it comes to this powerful new technology.”
Regarding the Pentagon’s threat to designate Anthropic a “supply chain risk,” Weissman called it a potentially crushing penalty from the government and argued it could pressure other AI firms to avoid imposing similar limits.
“Individuals might use Claude, but none of the AI companies, particularly Anthropic, have business models based on individual use; they’re looking for business use,” he said. “This is a potentially crushing penalty from the government.”
While the Pentagon has not yet said whether it plans to follow through on its threat to terminate the contract or invoke the Defense Production Act, Weissman said the agency is signaling to AI companies that it expects unrestricted access to their technology once it is deployed in government systems.
“The message of the Pentagon is, ‘we’re not going to tolerate this, and we expect to be able to use the technology as it’s invented for any purpose we want,’” Weissman said.
The Department of Defense and Anthropic did not immediately respond to Decrypt’s requests for comment.