

On February 28, OpenAI announced that it had signed a contract allowing the US military to use its technology in sensitive settings. CEO Sam Altman said the negotiations, which the company began only after the Pentagon publicly rebuked Anthropic, were “clearly expedited.”
In its statements, OpenAI was careful to signal that it had not caved and handed the Pentagon unrestricted access to its technology. The company published a blog post explaining that its deal protects against uses involving autonomous weapons and mass domestic surveillance, and Altman said the company did not simply accept the same terms Anthropic had refused.
You might read this as OpenAI winning both the contract and the moral high ground, but a closer look tells a different story: Anthropic took a principled stand that won it many supporters but ultimately failed, while OpenAI chose a pragmatic, legalistic path that is ultimately more permissive toward the Pentagon.
It’s unclear whether OpenAI can enforce the safeguards it claims as the military rushes ahead with a politically charged AI strategy amid operations in Iran, or whether the deal will satisfy employees who hoped for a firmer stance. It will be a hard balance to strike. (OpenAI did not immediately respond to questions about the details of its agreement.)
But the details matter. What allowed OpenAI to reach a deal where Anthropic could not, Altman said, was not just the restrictions but the approach. “Anthropic appeared more focused on specific prohibitions in the agreement, instead of referencing relevant laws, which we felt at ease with,” he said.
OpenAI says one basis for its willingness to work with the Pentagon is an assumption that the government won’t break the law. The company, which has released only a short excerpt of its contract, points to existing laws and policies on autonomous weapons and surveillance, ranging from a 2023 Pentagon directive on autonomous weapons (which doesn’t ban them but sets guidelines for how they’re designed and tested) to the Fourth Amendment’s broader protections against mass surveillance.
Still, the publicly released excerpt “does not grant OpenAI an Anthropic-like, independent right to prevent legal government usage,” wrote Jessica Tillipman, associate dean for government procurement law studies at George Washington University’s law school. It says only that the Pentagon cannot use OpenAI’s technology to violate those laws and policies as they are currently written.
A big reason Anthropic won so many supporters in its fight, including some of OpenAI’s own employees, is that they doubt these rules are strong enough to prevent AI-powered autonomous weapons or mass surveillance. And trusting federal agencies to follow the law is cold comfort to anyone who remembers that the surveillance programs exposed by Edward Snowden were deemed lawful by internal agency reviews and were ruled illegal only after years of litigation (to say nothing of the many surveillance tactics existing law does permit, which AI could supercharge). On this front, we are more or less back where we started: the Pentagon can use OpenAI’s AI for any lawful purpose.
OpenAI might say, as its head of national security partnerships did yesterday, that if you don’t trust the government to follow the law, you shouldn’t trust it to honor the restrictions Anthropic proposed either. But that doesn’t settle whether such limits are worth setting in the first place. Imperfect enforcement does not make constraints meaningless, and contract terms still shape behavior, oversight, and political consequences.
OpenAI offers a second line of defense. The company says it retains control over the safety rules governing its models and will not give the military a version of its AI with those safeguards stripped out. “We can incorporate our red lines—no mass surveillance and no guiding weapon systems without human oversight—directly into model operation,” wrote Boaz Barak, an OpenAI employee whom Altman tapped to address the issue on X.
But the company does not explain how its safety rules for the military differ from those for ordinary users. Enforcement is also far from airtight, especially since OpenAI will be deploying these safeguards in a classified environment for the first time and is expected to do so within just six months.
There is another question lurking in all of this: Should it be up to technology companies to restrict activities that are legal but that they find morally objectionable? The government clearly found Anthropic’s willingness to play that role unacceptable. On Friday evening, eight hours before the US began airstrikes in Tehran, Defense Secretary Pete Hegseth posted sharp words on X. “Anthropic delivered a master class in hubris and treachery,” he wrote, echoing President Trump’s order for the government to sever ties with the AI company after Anthropic tried to keep its model Claude from being used for autonomous weapons or mass domestic surveillance. “The Department of War must possess complete, unrestricted access to Anthropic’s models for any LAWFUL purpose,” Hegseth said.
Unless the full text of OpenAI’s contract reveals more, though, it is hard not to see the company as walking an ideological tightrope: insisting it has leverage it will happily use to act on its principles, while leaning on the law as the main check on what the Pentagon can do with its technology.
Three things are worth watching here. One is whether this position will be enough for OpenAI’s most important employees. With AI companies locked in a fierce competition for talent, it is plausible that some at OpenAI will read Altman’s reasoning as an unforgivable capitulation.
The second is the scorched-earth campaign Hegseth has promised against Anthropic. Going beyond merely canceling the government’s contract with the company, he announced that it would be designated a supply chain threat and that “no contractor, supplier, or partner that collaborates with the United States military may engage in any commercial activities with Anthropic.” Whether that kill shot is legally possible is hotly debated, and Anthropic has said it will sue if the threat is carried out. OpenAI has publicly opposed the move as well.
The third is how the Pentagon will move off Claude, the only AI model it currently uses in classified work, including some operations in Venezuela, even as it ramps up operations against Iran. Hegseth gave the agency six months to make the switch, during which the military will bring in OpenAI’s models alongside those from Elon Musk’s xAI.
But Claude was reportedly used in the strikes on Iran just hours after the ban took effect, a sign that the transition will be anything but clean. Even if the fight between Anthropic and the Pentagon were settled (and I doubt it is), we are now watching the Pentagon’s AI acceleration strategy pressure companies into giving up restrictions they had previously set, with new tensions in the Middle East as the main proving ground.
If you have information about how this situation is unfolding, please reach out via Signal (username: jamesodonnell.22).