
On Thursday, a judge in California temporarily blocked the Pentagon from designating Anthropic a supply chain risk and from ordering government agencies to stop using its AI. It’s the latest turn in a dispute that has dragged on for a month, and it’s far from over: the government has a week to appeal, and a separate lawsuit Anthropic filed over the designation is still pending. Until the matter is resolved, the company remains unwelcome in the government.
The stakes of the case, namely the government’s power to punish a company for refusing to fall in line, were clear from the start. Anthropic attracted a wide range of senior supporters, including unlikely allies such as people who helped write President Trump’s AI policy.
But Judge Rita Lin’s 43-page ruling suggests that what should have been a contract dispute never needed to escalate this far. It did so because the government sidestepped the procedures that govern such disputes and stoked tensions with social media posts from officials that would later undercut their own arguments in court. In essence, the Pentagon wanted to fight a culture war (on top of the military campaign in Iran that began days later).
For much of 2025, the government used Anthropic’s Claude without incident, according to court filings, while Anthropic walked the fine line of being a safety-focused AI company that also holds defense contracts. Defense personnel accessing the model through Palantir had to agree to a government-specific usage agreement, which Anthropic cofounder Jared Kaplan said “prohibited any mass surveillance of Americans and lethal autonomous combat” (Kaplan’s statement to the court did not elaborate on the agreement). Disagreements arose only when the government sought to contract with Anthropic directly.
The judge was troubled that when these disagreements spilled into public view, they looked less like an effort to simply cut ties with Anthropic than like punishment. They followed a now-familiar pattern: public pronouncements first, legal justification later.
On February 27, President Trump posted on Truth Social condemning the “Leftwing nutjobs” at Anthropic and ordering all federal agencies to stop using its AI. Defense Secretary Pete Hegseth quickly echoed him, announcing plans to designate Anthropic a supply chain risk.
Making that designation requires the secretary to follow a defined set of procedures, which the judge found Hegseth did not. Letters sent to congressional committees claimed that less drastic alternatives had been considered and ruled out, but offered no further explanation. The government argued the designation was necessary because Anthropic could flip a “kill switch,” but its attorneys eventually admitted there was no evidence to support that claim, according to the judge’s account.
Hegseth’s statement also declared, “No contractor, supplier, or partner engaged with the United States military may partake in any business activities with Anthropic.” But the government’s lawyers admitted on Tuesday that the secretary has no authority to make such a demand, agreeing with the judge that the declaration had “no legal impact whatsoever.”
That aggressive commentary led the judge to conclude that Anthropic was likely right that its First Amendment rights had been violated. Citing the posts, she wrote that the government “sought to publicly discipline Anthropic for its ‘ideology’ and ‘rhetoric,’ alongside its ‘arrogance’ for refusing to compromise those principles.”
Designating Anthropic a supply chain risk would essentially brand it a “saboteur” of the government, a finding the judge did not consider adequately supported. On Thursday she issued an order halting the designation, barring the Pentagon from applying it and blocking the government from carrying out the commitments Hegseth and Trump had made. Dean Ball, who worked on AI policy in the Trump administration but wrote a brief supporting Anthropic, called Thursday’s ruling “a significant defeat for the government, indicating Anthropic’s strong likelihood of success on virtually all arguments asserting the government’s actions were illegal and unconstitutional.”
The government is expected to appeal. But Anthropic’s separate lawsuit, filed in DC, makes similar claims under a different provision of the law governing supply chain risks.
The court documents reveal a clear pattern: public remarks from officials and the president bore little relation to what the law requires in a contract dispute of this kind, leaving the government’s lawyers repeatedly forced to construct after-the-fact justifications for social media attacks on the company.
Top leaders at the Pentagon and the White House knew that pursuing the most drastic measure would invite legal challenges; on February 27, days before the government formally applied the designation on March 3, Anthropic vowed to fight it. Pressing ahead anyway meant that senior officials spent the first five days of the Iran conflict, at best, splitting their attention between carrying out strikes and assembling evidence to paint Anthropic as a saboteur of the government, rather than taking simpler routes to cut ties with the company.
Still, even if Anthropic prevails, the government has plenty of other ways to freeze the company out of federal contracts. Defense contractors eager to stay in the Pentagon’s good graces, for example, may now have little incentive to partner with Anthropic, supply chain risk designation or not.
“It’s reasonable to conclude that there are avenues the government can explore to impose some form of pressure without contravening the law,” says Charlie Bullock, a senior research fellow at the Institute for Law and AI. “It ultimately hinges on how determined the government is in sanctioning Anthropic.”
By all available evidence, the administration is devoting high-level attention and resources to winning an AI culture war. At the same time, Claude is clearly vital to its operations: even President Trump said the Pentagon would need six months to wind down its use. The White House expects political loyalty and ideological conformity from leading AI companies, but the case against Anthropic, at least for now, shows the limits of its power.
If you have insight into the military’s use of AI, you can share tips confidentially via Signal (username jamesodonnell.22).