

Late Friday, OpenAI CEO Sam Altman announced that his company had finalized an agreement with the Department of Defense over the use of its artificial intelligence models, shortly after President Donald Trump said the government would not work with AI rival Anthropic.
“This evening, we came to an agreement with the Department of War to deploy our models on their classified networks,” Altman wrote in a post on X. “Throughout our discussions, the DoW showed a deep respect for safety and a commitment to partner for the best outcomes.”
Altman’s post caps a tumultuous week for the AI industry, which has found itself at the center of a political fight over how its models can be used. Earlier Friday, Defense Secretary Pete Hegseth designated Anthropic a “Supply-Chain Risk to National Security” after weeks of strained negotiations. The label, typically reserved for foreign adversaries, requires DoD vendors and contractors to certify that they do not use Anthropic’s models.
Trump also ordered every U.S. federal agency to “immediately cease” all use of Anthropic’s technology.
Anthropic was the first lab to deploy its models across the DoD’s classified network, and it had been negotiating the terms of its contract with the agency before talks broke down. The company sought assurances that its models would not be used for fully autonomous weapons or mass surveillance of Americans, while the DoD insisted the military be able to use the models for all lawful purposes.
In a memo to staff on Thursday, Altman said OpenAI held the same “red lines” as Anthropic. In his Friday post, he said the DoD had agreed to those limits.
“Two of our most significant safety principles are prohibitions on domestic mass surveillance and human accountability for the use of force, including for autonomous weapon systems,” Altman wrote. “The DoW endorses these principles, reflects them in regulations and policies, and we incorporated them into our agreement.”
It remains unclear why the DoD chose OpenAI over Anthropic, though government officials have recently criticized Anthropic as overly focused on AI safety.
Altman said OpenAI plans to build “technical safeguards to ensure its models perform as intended,” and that the company will dedicate staff to “aid with our models and ensure their safety.”
“We are urging the DoW to extend these same conditions to all AI companies, which we believe should be readily accepted by everyone,” Altman wrote. “We have shown our strong intent to move from legal and governmental confrontations toward reasonable agreements.”
In a statement Friday, Anthropic said it was “deeply saddened” by the Pentagon’s decision to designate the company a supply-chain risk, and said it would challenge the designation in court.