A court setback for Washington’s crackdown on a major AI supplier
A federal judge has delivered a major early win to Anthropic in its escalating conflict with the US government, granting a preliminary injunction that blocks agencies from implementing orders designating the company a supply chain risk while the case proceeds. But within hours, Pentagon leadership pushed back publicly, arguing that one part of the government’s ban on Anthropic still stands.
The dispute centers on a clash between federal procurement power and an AI company’s attempt to impose limits on how its models can be used. Breaking Defense reports that after Anthropic refused to accept contract language allowing “all lawful use” of its Claude AI by the military, President Donald Trump directed federal agencies on February 27 to immediately cease use of Anthropic’s technology. Defense Secretary Pete Hegseth then posted that no contractor, supplier, or partner doing business with the US military could conduct commercial activity with Anthropic.
The ruling: likely retaliation, not a neutral risk decision
On March 4, the administration followed with two formal letters designating Anthropic as a supply chain risk under separate statutes: one applying across the federal government and one tailored to the Department of Defense. Anthropic responded with two lawsuits, challenging both the broader federal action and the defense-specific designation.
Judge Rita Lin’s preliminary injunction now pauses enforcement of the orders for the 17 federal agencies named as defendants in the California case until the litigation is resolved. In a striking passage cited by Breaking Defense, Lin wrote that the record “strongly suggests” the reasons given for Anthropic’s designation were pretextual and that the government’s real motive was unlawful retaliation. By granting the injunction, she concluded Anthropic was likely to succeed in its lawsuit, a high bar for this stage of litigation.
That language matters. Preliminary injunctions are not final judgments, but they signal that a court sees a serious likelihood that the challenged action was improper. In practical terms, the order disrupts a high-profile government effort to isolate a leading AI vendor from federal business and sends a warning about how far agencies can go when commercial disputes overlap with national security rhetoric.
The Pentagon’s response shows the fight is far from over
The injunction did not settle the core political and legal conflict. Breaking Defense reports that Undersecretary of Defense and Chief Technology Officer Emil Michael argued on social media that the order contained “dozens of factual errors” and said the supply chain risk designation remained “in full force and effect” under the government-wide statute that he claimed was not subject to Judge Lin’s jurisdiction.
That response reveals a deeper fragmentation in how the government may try to defend its posture. Even with one court order in place, officials appear prepared to argue that separate legal authorities preserve at least part of the practical effect of the blacklist. That sets up the possibility of overlapping court fights, conflicting agency interpretations, and continued uncertainty for contractors that need to know whether Anthropic tools are permissible in federal work.
The case is unusually significant because it is not just about one vendor’s business interests. It is about whether an AI company can resist military contract language it views as too broad without being frozen out through procurement authorities usually associated with more traditional supply chain concerns. If courts conclude the government used those authorities as punishment for a policy disagreement, the ruling could influence how AI governance disputes play out across federal contracting.
Why this matters for the AI sector
For AI companies, the Anthropic fight is becoming an early test of how much leverage the US government expects to have over commercial model providers that want federal business but also want to define limits on deployment. That tension is especially sharp in national security settings, where agencies may seek maximal flexibility and vendors may seek to preserve guardrails tied to surveillance, weapons use, or reputational risk.
The litigation also puts pressure on the language of “supply chain risk.” Traditionally, that phrase evokes concerns about reliability, compromise, foreign influence, or hidden vulnerabilities in critical systems. Here, the judge’s initial view suggests the designation may have been used for something else entirely: retaliation after a disagreement over contract terms. If that interpretation holds, the case could narrow how aggressively procurement tools can be used against AI providers that refuse certain government demands.
For now, the outcome is mixed but unmistakably consequential. Anthropic has won a meaningful early ruling, and the government has been told to pause key actions while the case unfolds. At the same time, Pentagon officials are signaling that they do not accept the practical implications of the injunction as broadly as Anthropic likely does.
That leaves the industry with a clear message. The legal architecture governing AI in government is still being written in real time, and some of its most important rules may emerge not from legislation or agency guidance, but from hard-fought courtroom battles over contract language, retaliation, and the limits of executive power.
This article is based on reporting by Breaking Defense.
Originally published on breakingdefense.com