The confidential agreement reportedly does not grant Google the power to prohibit how the government utilizes its AI models.


Google has entered into a classified agreement allowing the US Department of Defense to use its AI models for “any lawful government purpose,” The Information reports. The deal surfaced shortly after Google employees asked CEO Sundar Pichai to bar the Pentagon from using the company’s AI, citing concerns that it could be used in “inhumane or extremely detrimental ways.”
If confirmed, the agreement would put Google alongside OpenAI and xAI, which have also struck classified AI deals with the US government. Anthropic was on that list as well, until the Pentagon blacklisted it for refusing requests to remove weapon- and surveillance-related safeguards from its AI systems.
Citing a single unnamed source “familiar with the situation,” The Information reports that the deal states both parties have agreed the search giant’s AI systems should not be used for domestic mass surveillance or autonomous weapons “without suitable human supervision and control.” However, the contract also says it does not give Google “any authority to oversee or deny lawful governmental operational choices,” suggesting the agreed limits are more an informal understanding than enforceable commitments.
In a statement to Reuters, a Google spokesperson said the company maintains that AI should not be used for domestic mass surveillance or autonomous weapons without appropriate human oversight. “We consider that providing API access to our commercial models, including on Google infrastructure, with industry-standard practices and terms, signifies a responsible approach to supporting national security,” Google told the outlet.