In short
- A federal judge has blocked the Pentagon from labeling Anthropic a supply chain risk, finding the move likely violated the company's First Amendment and due process rights.
- The dispute stemmed from a $200 million Defense Department AI contract that collapsed after Anthropic refused to permit use of its model for mass surveillance or lethal autonomous warfare.
- The ruling temporarily restores Anthropic's standing with federal contractors and could shape how AI companies set usage limits in government deals.
A federal judge has blocked the Pentagon from labeling Anthropic as a supply chain risk, ruling Thursday that the government's campaign against the AI company violated its First Amendment and due process rights.
U.S. District Judge Rita Lin issued a preliminary injunction from the Northern District of California two days after hearing oral arguments from both sides, in a case observers say was made inevitable by the government's own documents.
"Nothing in the governing statute supports the Orwellian notion that an American company may be branded a potential adversary and saboteur of the U.S. for expressing disagreement with the government," Judge Lin wrote.
The internal report was fatal to the government's case, according to Andrew Rossow, public affairs attorney and CEO of AR Media Consulting, who told Decrypt that the designation was "triggered by press conduct, not a security analysis."
"The government essentially wrote down its own motive, and it was retaliation," Rossow said.
The dispute centers on a two-year, $200 million contract awarded to Anthropic in July 2025 by the Department of War’s Chief Digital and Artificial Intelligence Office.
Negotiations to deploy Claude to the department's GenAI.Mil platform broke down after the two sides failed to agree on usage restrictions.
Anthropic insisted on two conditions: that Claude not be used for mass surveillance of Americans or for lethal use in autonomous warfare, arguing the model was not yet safe for either purpose.
At a February 24 meeting, Secretary of War Pete Hegseth told Anthropic’s representatives that if the company did not drop its restrictions by February 27, the department would immediately designate it a supply chain risk.
Anthropic refused to comply.
On the same day, President Trump posted a directive on Truth Social ordering every federal agency to "immediately stop" using the company's technology, calling Anthropic a "radical left, woke company."
A little over an hour later, Hegseth described Anthropic's stance as a "master class in arrogance and betrayal," ordering that no contractor doing business with the military could conduct commercial activity with the firm. The formal supply chain designation followed in a letter on March 3.
Anthropic sued the government on March 9, alleging violations of the First Amendment, due process, and the Administrative Procedure Act.
"Punishing Anthropic for bringing public scrutiny to the government's contracting position is classic illegal First Amendment retaliation," Judge Lin wrote in Thursday's order.
The order, which was stayed for seven days, blocks all three government actions, requires a compliance report by April 6, and restores the status quo that existed before the events of February 27.
Weaponizing the law
The "supply chain risk" designation has traditionally been reserved for foreign intelligence agencies, terrorists, and other hostile actors.
It had never been applied to a domestic company before Anthropic. Defense contractors began assessing, and in many cases terminating, their reliance on Anthropic in the weeks that followed, Judge Lin's order noted.
And the government's posturing could have unforeseen consequences, experts argue.
Indeed, Thursday's ruling could push AI companies "to formalize ethical guardrails when working with governments," Pichapen Prateepavanich, policy strategist and founder of infrastructure firm Collect Past, told Decrypt.
To some extent, the ruling also suggests that companies "can set clear usage limits without automatically triggering punitive regulatory action," she said.
But this "does not remove the tension," she added. What the ruling limits is "the ability to escalate that disagreement into broader exclusion or labeling that looks retaliatory."
Still, applying existing statutory authority to designate a company as a supply chain risk "because it refused to remove safety guardrails" is not an extension of the supply chain risk statute, Rossow explained. Instead, it operates as a "weaponization" of the law.
“This is part of an ongoing pattern of behavior by the White House whenever they’re challenged, resulting in disproportional, emotionally-driven and biased threats and government extortion,” he added.
If the government's "theory" is accepted, it could create a "dangerous" precedent in which AI companies could be blacklisted for safety policies the government dislikes, "before any harm occurs," without due process, under the banner of national security, Rossow said.