
Is the Pentagon permitted to monitor Americans using AI?


The ongoing dispute between the Department of Defense and the AI firm Anthropic has raised a profound and still unresolved question: Does current law permit the US government to perform widespread surveillance on its citizens?

Surprisingly, the answer is complex. More than a decade after Edward Snowden revealed the NSA’s bulk collection of metadata from Americans’ phones, the US is still grappling with a gap between public expectations and what the law actually permits.

The central issue in the conflict between Anthropic and the Pentagon was the military’s interest in using Anthropic’s AI, Claude, to analyze large-scale commercial data on Americans. Anthropic insisted that its AI not be used for large-scale domestic surveillance (or for autonomous weapons: machines capable of killing targets without human intervention). A week after negotiations collapsed, the Pentagon classified Anthropic as a supply chain risk, a designation usually reserved for foreign entities that pose a national security threat.

Meanwhile, OpenAI, the competing AI company behind ChatGPT, secured an agreement allowing the Pentagon to use its AI for “all lawful purposes”—a phrasing that critics argue leaves room for domestic surveillance. Over the subsequent weekend, users uninstalled ChatGPT in large numbers. Activists chalked messages around OpenAI’s office in San Francisco: “What are your redlines?”

OpenAI announced on Monday that it had revised its agreement to ensure that its AI will not be utilized for domestic surveillance purposes. The company stated that its services would not be employed by intelligence agencies, such as the NSA.

CEO Sam Altman indicated that current law inhibits domestic surveillance by the Department of Defense (now occasionally referred to as the Department of War) and asserted that OpenAI’s contract merely needed to acknowledge this legislation. “The DoW aligns with these principles, embodies them in law and policy, and we incorporated them into our agreement,” he wrote on X. Anthropic CEO Dario Amodei countered this perspective. “To the degree that such surveillance is currently permissible, it is only because the law has yet to adapt to the rapidly evolving capabilities of AI,” he wrote in a policy statement.

So, who is accurate? Does the law permit the Pentagon to surveil Americans using AI?

Enhanced surveillance

The answer depends on how surveillance is defined. “Many things that ordinary people might categorize as a search or surveillance … are not actually recognized as such by the law,” says Alan Rozenshtein, a law professor at the University of Minnesota Law School. This means that publicly accessible information—such as social media posts, surveillance camera footage, and voter registration details—is available for collection. Information about Americans incidentally obtained while monitoring foreign nationals is also fair game.

Of particular note, the government can buy commercial data from firms, which may include sensitive personal details such as mobile location and web browsing histories. In recent years, agencies ranging from ICE and the IRS to the FBI and NSA have increasingly exploited this data marketplace, spurred by an internet economy that gathers user information for advertising purposes. These data sets can give the government access to sensitive personal information that would otherwise generally require a warrant or subpoena to obtain.

“There’s an enormous amount of information that the government can gather on Americans that is not sufficiently regulated by the Constitution, which includes the Fourth Amendment, or by statute,” notes Rozenshtein. Furthermore, there are no significant limitations on what the government can do with this plethora of data.

This is largely due to the fact that until the last few decades, individuals weren’t creating massive amounts of data that presented new opportunities for surveillance. The Fourth Amendment, which safeguards against unreasonable searches and seizures, was framed when gathering information involved physically entering homes.

Later laws, including the Foreign Intelligence Surveillance Act of 1978 and the Electronic Communications Privacy Act of 1986, were enacted at times when surveillance was limited to wiretapping telephone calls and intercepting emails. Most of the legislation regulating surveillance was implemented before the internet gained traction. We weren’t creating extensive trails of online data, and the government lacked the advanced tools to analyze such data.

Today, however, we possess such capabilities, and AI amplifies the level of surveillance that can be performed. “What AI enables is the ability to process a significant amount of information, none of which is individually sensitive, and hence not specifically regulated, providing the government with considerable powers it previously lacked,” says Rozenshtein.

AI can consolidate individual data points to identify trends, infer conclusions, and craft detailed profiles of individuals—on a massive scale. Additionally, as long as the government gathers the data legally, it can utilize that information in any manner it sees fit, including processing it through AI systems. “The law has not aligned with technological advancements,” observes Rozenshtein.

While surveillance can provoke serious privacy issues, the Pentagon may have valid national security motivations for collecting and analyzing data on Americans. “To collect information on Americans, it must be for a highly specific subset of missions,” states Loren Voss, a former military intelligence officer at the Pentagon.

For instance, a counterintelligence objective might necessitate intelligence about an American collaborating with a foreign power or intending to engage in international terrorism. Yet targeted intelligence efforts can sometimes expand into gathering more extensive data. “This kind of collection does raise concerns,” remarks Voss.

Permissible use

OpenAI has modified its agreement to state that the company’s AI system “shall not be intentionally deployed for domestic surveillance of U.S. persons and nationals,” in accordance with applicable laws. The amendment specifies that this precludes “deliberate tracking, surveillance, or monitoring of U.S. persons or nationals, including through the procurement or use of commercially acquired personal or identifiable data.”

However, the newly added phrasing may not significantly counteract the provision allowing the Pentagon to use the company’s AI system for all lawful objectives, potentially encompassing the collection and analysis of sensitive personal data. “OpenAI can assert whatever it wants in its agreement … but the Pentagon is going to utilize the technology for what it interprets as lawful,” explains Jessica Tillipman, a law professor at George Washington University Law School. This could very likely include domestic surveillance. “In many instances, companies will be unable to prevent the Pentagon from acting in any capacity,” she adds.

The phrasing also raises lingering concerns about “inadvertent” surveillance, as well as the monitoring of foreign nationals or undocumented immigrants residing in the US. “What occurs when there’s a divergence regarding the interpretation of the law, or when the law evolves?” inquires Tillipman.

OpenAI did not respond to a request for comment. The company has not disclosed the complete text of its new contract.

Beyond the terms of the contract, OpenAI asserts that it will implement technical safeguards to uphold its prohibition against surveillance, including a “safety stack” to oversee and impede forbidden uses. The company also claims it will assign its own personnel to collaborate with the Pentagon and remain informed. However, it remains unclear how a safety stack would limit the Pentagon’s application of the AI and the extent to which OpenAI’s staff would be informed about the usage of its AI systems. More critically, it is uncertain whether the agreement grants OpenAI the authority to prohibit a legal application of the technology.

Yet that may not be a bad thing. Granting an AI company the power to disable its technology in the middle of government operations carries its own risks. “You wouldn’t want the US military to find itself in a position where it genuinely needed to take actions to safeguard this country’s national security, only to have a private enterprise deactivate technology,” says Voss. That does not mean, however, that Congress should not set definitive boundaries, she argues.

All these issues are complex. They entail excruciating balances between privacy and national security. That is why perhaps they should be resolved by the public—not through clandestine negotiations between the executive branch and a select few AI companies. For the present, military AI is governed by contracts rather than legislation.

Some lawmakers are beginning to weigh in. On Monday, Senator Ron Wyden of Oregon will pursue bipartisan backing for legislation targeting mass surveillance. He has long advocated for bills limiting the government’s purchase of commercial data, including the Fourth Amendment Is Not For Sale Act, first proposed in 2021 but not yet enacted into law. “Creating AI profiles of Americans based on that data represents a troubling expansion of mass surveillance that should be prohibited,” he said in a recent statement.
