WASHINGTON (AP) — President Donald Trump said Friday he was ordering all federal agencies to phase out use of Anthropic technology after the company’s unusually public dispute with the Pentagon over artificial intelligence safety.
Trump’s comments came just over an hour before the Pentagon’s deadline for Anthropic to allow unrestricted military use of its AI technology or face penalties, and nearly 24 hours after CEO Dario Amodei said his company “cannot in good conscience accede” to the Defense Department’s demands.
Anthropic did not immediately respond to a request for comment on Trump’s remarks.
At issue in the defense contract was a clash over AI’s role in national security and concerns about how increasingly capable machines could be used in high-stakes situations involving lethal force, sensitive information or government surveillance.
Anthropic, maker of the chatbot Claude, could afford to lose the contract. But the ultimatum this week from Defense Secretary Pete Hegseth posed broader risks at the peak of the company’s meteoric rise from a little-known computer science research lab in San Francisco to one of the world’s most valuable startups.
Anthropic spurns Pentagon’s latest proposal over its safeguards
If Amodei didn’t budge, military officials said they would not just pull Anthropic’s contract but also “deem them a supply chain risk,” a designation typically stamped on foreign adversaries that could derail the company’s critical partnerships with other businesses.
And if Amodei were to cave, he could lose trust within the booming AI industry, particularly from top talent drawn to the company for its promises of responsibly building better-than-human AI that, without safeguards, could pose catastrophic dangers.
Anthropic said it sought narrow assurances from the Pentagon that Claude won’t be used for mass surveillance of Americans or in fully autonomous weapons. But after months of private talks exploded into public debate, it said in a Thursday statement that new contract language “framed as compromise was paired with legalese that would allow those safeguards to be disregarded at will.”
That was after Sean Parnell, the Pentagon’s top spokesman, posted on social media that the military “has no interest in using AI to conduct mass surveillance of Americans (which is illegal) nor do we want to use AI to develop autonomous weapons that operate without human involvement.” He emphasized that the Pentagon wants to “use Anthropic’s model for all lawful purposes,” but he and other officials haven’t detailed how they want to use the technology.
Dispute further polarizes the tech industry
Emil Michael, the defense undersecretary for research and engineering, later lashed out at Amodei, alleging on X that he “has a God-complex” and “wants nothing more than to try to personally control the US Military and is ok putting our nation’s safety at risk.”
That message hasn’t resonated in much of Silicon Valley, where a growing number of tech workers from Anthropic’s top rivals, OpenAI and Google, voiced support for Amodei’s stand late Thursday in an open letter.
OpenAI and Google, along with Elon Musk’s xAI, also have contracts to supply their AI models to the military.
Musk sided with Trump’s Republican administration on Friday, saying on his social media platform X that “Anthropic hates Western Civilization” after Michael drew attention to a previous version of Claude’s guiding principles that encouraged “consideration of non-Western perspectives.” All of the major AI models, including Musk’s Grok and OpenAI’s ChatGPT, are programmed with a set of instructions that guide a chatbot’s values and behavior. Anthropic calls that guidance a constitution.
While some Trump-allied tech leaders, including Musk and Palmer Luckey, co-founder of defense contractor Anduril, have joined the fray, the polarizing debate over “woke AI” has put others in a difficult position.
“The Pentagon is negotiating with Google and OpenAI to try to get them to agree to what Anthropic has refused,” the open letter from some OpenAI and Google employees says. “They’re trying to divide each company with fear that the other will give in.”
But in a surprise move from one of Amodei’s fiercest rivals, OpenAI CEO Sam Altman on Friday sided with Anthropic and questioned the Pentagon’s “threatening” move in a CNBC interview, suggesting that OpenAI and most of the AI field share the same red lines. Amodei once worked for OpenAI before he and other OpenAI leaders quit to form Anthropic in 2021.
“For all the differences I have with Anthropic, I mostly trust them as a company, and I think they really do care about safety,” Altman told CNBC. “I’ve been happy that they’ve been supporting our warfighters. I’m not sure where this is going to go.”
Also raising concerns about the Pentagon’s approach were Republican and Democratic lawmakers and a former leader of the Defense Department’s AI initiatives.
“Painting a bullseye on Anthropic garners spicy headlines, but everyone loses in the end,” retired Air Force Gen. Jack Shanahan wrote in a social media post.
Shanahan faced a different wave of tech worker opposition during the first Trump administration when he led Project Maven, an effort to use AI technology to analyze drone footage and target weapons. So many Google employees protested its participation in Project Maven at the time that the tech giant declined to renew the contract and then pledged not to use AI in weaponry.
“Since I was square in the middle of Project Maven & Google, it’s reasonable to assume I would take the Pentagon’s side here,” Shanahan wrote Thursday on social media. “Yet I’m sympathetic to Anthropic’s position. More so than I was to Google’s in 2018.”
He said Claude is already being widely used across the federal government, including in classified settings, and Anthropic’s red lines are “reasonable.” He said the AI large language models that power chatbots like Claude are also “not ready for prime time in national security settings,” particularly not for fully autonomous weapons.
“They’re not trying to play cute here,” he wrote.
Pentagon ready to punish Anthropic if it doesn’t compromise
Parnell asserted Thursday that opening up use of the technology would prevent the company from “jeopardizing critical military operations.”
“We will not let ANY company dictate the terms regarding how we make operational decisions,” Parnell wrote. Anthropic has “until 5:01 p.m. ET on Friday to decide” whether it will meet the demands or face penalties.
When Hegseth and Amodei met on Tuesday, military officials warned that they could designate Anthropic as a supply chain risk, cancel its contract or invoke a Cold War-era law called the Defense Production Act to give the military more sweeping authority to use its products, even if the company doesn’t approve.
Amodei said Thursday that “those latter two threats are inherently contradictory: one labels us a security risk; the other labels Claude as essential to national security.” He said he hopes the Pentagon will reconsider given Claude’s value to the military, but, if not, Anthropic “will work to enable a smooth transition to another provider.”
___
O’Brien reported from Providence, R.I.
Copyright © 2026 The Associated Press. All rights reserved.