OpenAI CEO Sam Altman said Monday that the company “shouldn’t have hurried” its recent agreement with the U.S. Department of Defense, and detailed changes to the contract.
In what he described as a repost of an internal memo on X, Altman said the company would amend the contract to include new language around its principles on issues such as surveillance.
That includes language making clear that “the AI system shall not be purposely employed for domestic surveillance of U.S. persons and nationals.”
The memo further stated that “the Department recognizes the limitation to prevent intentional tracking, surveillance, or monitoring of U.S. persons or nationals, including through the acquisition or application of commercially obtained personal or identifiable information.”
The changes come after the ChatGPT maker announced a new agreement with the Defense Department on Friday, just hours after U.S. President Donald Trump ordered federal agencies to stop using tools from rival AI firm Anthropic, and hours before the U.S. was set to carry out strikes on Iran.
Altman said the Defense Department also confirmed that OpenAI’s tools would not be used by intelligence agencies such as the NSA.
“There are numerous aspects that the technology simply isn’t prepared for, and many fields where we don’t yet grasp the tradeoffs necessary for safety,” Altman wrote, adding that the company would work with the Pentagon on technical safeguards.
The CEO also acknowledged that he had made a mistake and “shouldn’t have hurried” to close the deal on Friday.
“We were honestly attempting to de-escalate situations and avoid a significantly worse result, but I think it just appeared opportunistic and careless,” he wrote in the post.
The admission follows a public standoff between Anthropic and Washington over safeguards for its Claude AI models, which ended without a deal. Defense Secretary Pete Hegseth said Friday that the company would be designated a supply-chain risk.
After an initial agreement last year, Anthropic became the first AI lab to roll out its models across the Defense Department’s classified network.
The company later sought assurances that its tools would not be used for purposes such as domestic surveillance in the U.S. or to operate and develop autonomous weapons without human oversight.
The dispute began after it was revealed that the U.S. military had used Anthropic’s Claude during its January operation to capture Venezuelan President Nicolás Maduro, though the company did not publicly object to that use.
OpenAI’s deal with the Pentagon came immediately after talks between Anthropic and the Defense Department broke down, even though Altman had told employees in a Thursday memo that OpenAI shared the same “red lines” as Anthropic. The company also said in a Friday post that the Defense Department had agreed to its restrictions.
It remains unclear why the Defense Department agreed to accommodate OpenAI but not Anthropic, though government officials have for months criticized Anthropic as being overly focused on AI safety.
The timing of OpenAI’s deal with the Defense Department sparked online backlash, with many users reportedly uninstalling ChatGPT in favor of Claude on app stores.
Altman addressed the backlash in his post, writing: “In my discussions over the weekend, I emphasized that Anthropic should not be classified as a [supply chain risk], and that we hope the [Department of Defense] grants them the same terms we’ve negotiated.”
Anthropic was founded in 2021 by a group of former OpenAI employees and researchers who left the company over disagreements about its direction. The firm has positioned itself as a “safety-first” alternative.
— CNBC’s Ashley Capoot contributed to this report