
Anthropic has announced plans to contest the classification in court.


Shortly after President Donald Trump announced on Truth Social that he was banning Anthropic products from federal use, Secretary of Defense Pete Hegseth escalated the standoff by declaring the AI firm a “supply-chain risk,” a designation Anthropic says it is prepared to fight in court.
The move could immediately affect a number of major tech companies that use Claude in their Pentagon contracts, including Palantir and AWS. Anthropic maintains that the classification applies only to the use of Claude in Department of Defense contract work, but it remains unclear how broadly the Pentagon might restrict companies that provide Claude-based services for purposes outside national security.
Following a week of heated negotiations over the company’s acceptable use policies, the Pentagon issued Anthropic an ultimatum: agree by 5:30PM EST on Friday to let the Pentagon use Claude for “all lawful purposes,” including autonomous lethal weapons without human oversight and mass surveillance, or be designated a supply-chain risk. That designation is typically reserved for companies with ties to foreign governments that pose national security threats, and it would bar any organization using Anthropic products from doing business with the Department of Defense.
In a tweet posted just after 5PM ET, Hegseth extended the designation to cover companies engaging in “any business activity with Anthropic,” echoing Trump’s directive giving firms six months to divest from Anthropic products.
“Our stance has always remained firm and will continue to be: the Department of War requires complete, unrestricted access to Anthropic’s models for every LAWFUL purpose in defending the Republic,” he asserted. “Instead, @AnthropicAI and its CEO @DarioAmodei have opted for deceit. Masquerading in the pious language of ‘effective altruism,’ they have sought to coerce the United States military into compliance – a cowardly display of corporate virtue-signaling that prioritizes Silicon Valley values over the lives of Americans.”
Hegseth, as Secretary of Defense, possesses the discretion to designate a company as a “supply-chain risk.” Nevertheless, this choice follows multiple other efforts by the Pentagon to pressure Anthropic into allowing the use of Claude as they wished, including a warning to invoke the Defense Production Act.
On Friday evening, Anthropic responded to both the Department of Defense and the White House, saying in a blog post that it had not heard directly from either about the breakdown in negotiations. Anthropic says it is telling customers that, contrary to what Hegseth’s tweet suggested, “The Secretary lacks the statutory authority to substantiate this claim.”
The company maintains that the classification can only apply “to the deployment of Claude as part of Department of War contracts—it cannot influence how contractors utilize Claude for other clients,” and that merely holding a contract with Anthropic does not trigger it.
Regarding the classification itself and the DoD policy advocated by Hegseth, Anthropic has expressed its willingness to litigate:
Labeling Anthropic as a supply chain risk would be an unprecedented measure—one historically reserved for US adversaries and never publicly applied to an American company. We are profoundly disheartened by these events. As the first AI company to integrate models into the US government’s classified systems, Anthropic has supported American military personnel since June 2024 and is committed to continuing that support.
We believe this classification would be legally questionable and set a perilous precedent for any American company engaging with the government.
No level of intimidation or punishment from the Department of War will alter our position on mass domestic surveillance or completely autonomous weapons. We will contest any supply chain risk classification in court.
Secretary of Defense Pete Hegseth:
This week, Anthropic put on a master class in arrogance and betrayal, and a classic example of how not to conduct business with the United States Government or the Pentagon.
Our stance has never shifted and will not shift: the Department of War must have absolute, unrestricted access to Anthropic’s models for all LAWFUL purposes in defense of the Republic.
Instead, @AnthropicAI and its CEO @DarioAmodei, have opted for duplicity. Hiding behind the self-righteous language of “effective altruism,” they have attempted to coerce the US military into compliance – a cowardly display of corporate virtue-signaling that prioritizes Silicon Valley ideals over American lives.
Anthropic’s flawed altruism and its Terms of Service will never take precedence over the safety, readiness, or lives of American troops on the battlefield.
Their true aim is crystal clear: to obtain veto authority over the operational decisions of the United States military. That is unacceptable.
As President Trump articulated on Truth Social, it is the Commander-in-Chief and the American people who will decide the fate of our armed forces, not unelected tech executives.
Anthropic’s position is fundamentally at odds with American values. Consequently, their relationship with the United States Armed Forces and the Federal Government has been irreversibly changed.
In line with the President’s directive for the Federal Government to halt all use of Anthropic’s technology, I am instructing the Department of War to classify Anthropic as a Supply-Chain Risk to National Security. Effective immediately, no contractor, supplier, or partner that conducts business with the US military may engage in any commercial activity with Anthropic. Anthropic will continue to supply its services to the Department of War for a maximum of six months to facilitate a smooth transition to a more suitable and patriotic service.
America’s military personnel will never be held captive by the ideological whims of Big Tech. This determination is final.
Update, February 27th: Added reaction from Anthropic.