
In January 2024, phones across New Hampshire began to ring. On the line was a voice that sounded like Joe Biden’s, urging Democrats to “preserve your vote” by skipping the primary. It sounded genuine, but it was not. The call had been fabricated with artificial intelligence.
Today, the technology behind that deception looks primitive. Tools like OpenAI’s Sora now make it possible to create seamless, realistic synthetic video. AI can fabricate messages from politicians and celebrities, even entire news segments, in minutes. The fear that elections could be flooded with convincing fake media has drawn widespread attention, and rightly so.
But that is only part of the story. The bigger risk is not just that AI can imitate people; it is that AI can actively change their minds. New research released this week shows how far that influence can reach. In two large peer-reviewed studies, AI chatbots shifted voters’ opinions substantially, far more than traditional political ads typically manage.
In the years ahead, we will see AI that can tailor arguments, test what works, and quietly reshape political views at scale. That shift, from imitation to active influence, should worry us deeply.
The problem is that modern AI does not just replicate voices or images; it holds conversations, reads emotions, and adjusts its tone to persuade. It can also direct other AIs, steering image, video, and voice models to produce the most compelling content for each audience. Put those pieces together and it is easy to imagine a coordinated persuasion machine: one AI drafts the message, another generates the visuals, and a third distributes it across platforms and tracks which approaches work. No humans required.
A decade ago, running a successful online influence campaign typically meant employing armies of people to manage fake profiles and meme farms. Now that work can be automated, cheaply and quietly. The same technology that powers customer service bots and tutoring apps can be repurposed to shift political beliefs or amplify a government’s preferred narrative. And the influence need not be confined to ads or robocalls. It can be woven into the everyday tools people already use: social media feeds, language learning apps, dating services, even voice assistants built and sold by parties intent on shaping American public opinion. It could come from bad actors using the APIs of popular AI tools people already rely on, or from entirely new applications designed with persuasion built in.
And it is cheap. For less than a million dollars, anyone can generate tailored, conversational messages for every registered voter in the United States. The math is simple. Assume 10 short interactions per person, roughly 2,700 tokens of text, priced at current rates for ChatGPT’s API. Even across a pool of 174 million registered voters, the total stays under $1 million. The roughly 80,000 swing voters who decided the 2016 election could be targeted for less than $3,000.
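To make that arithmetic concrete, here is a minimal back-of-the-envelope sketch in Python. The voter counts and the 2,700-tokens-per-voter figure come from the paragraph above; the blended price per million tokens is an illustrative assumption roughly in line with current low-cost API tiers, not a quote of any particular model’s rate.

```python
# Back-of-the-envelope cost of AI-personalized voter outreach.
# All figures are illustrative assumptions, not vendor quotes.

REGISTERED_VOTERS = 174_000_000   # approximate US registered-voter pool
SWING_VOTERS = 80_000             # voters often cited as having decided 2016

INTERACTIONS_PER_VOTER = 10       # short back-and-forth exchanges per voter
TOKENS_PER_INTERACTION = 270      # 10 interactions -> ~2,700 tokens per voter

# Assumed blended price per 1M tokens (input and output averaged);
# chosen to be consistent with the sub-$1 million total cited above.
PRICE_PER_MILLION_TOKENS = 2.00

def campaign_cost(voters: int) -> float:
    """Total API cost to hold INTERACTIONS_PER_VOTER exchanges with each voter."""
    tokens = voters * INTERACTIONS_PER_VOTER * TOKENS_PER_INTERACTION
    return tokens / 1_000_000 * PRICE_PER_MILLION_TOKENS

if __name__ == "__main__":
    print(f"All registered voters: ${campaign_cost(REGISTERED_VOTERS):,.0f}")
    print(f"80,000 swing voters:   ${campaign_cost(SWING_VOTERS):,.2f}")
```

At that assumed rate, the totals work out to roughly $940,000 for every registered voter and a few hundred dollars for the 80,000-voter slice, consistent with the figures above.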
While this challenge is emerging in elections around the world, the stakes are especially high for the United States, given the scale of its elections and the attention they draw from foreign actors. If the US does not move quickly, the next presidential race in 2028, or even the 2026 midterms, could be shaped by whoever automates persuasion first.
The 2028 risk
Although there is some evidence that fears about AI’s impact on elections have been overblown, a growing body of research suggests the picture may be changing. Recent studies have found that GPT-4 can out-persuade communication professionals when writing statements on contentious US political issues, and that it is more convincing than non-expert humans two-thirds of the time in debates with real voters.
Two major studies released yesterday extend these findings to real election settings in the United States, Canada, Poland, and the United Kingdom. They show that brief conversations with chatbots can shift voters’ attitudes by as much as 10 percentage points, with US respondents’ views moving nearly four times more than they did in response to tested political ads from 2016 and 2020. When models were explicitly optimized for persuasion, the shift climbed to 25 percentage points, an almost unheard-of swing.
Modern large language models, once the preserve of well-funded companies, are becoming increasingly accessible. Leading AI firms such as OpenAI, Anthropic, and Google wrap their state-of-the-art models in usage policies, automated safety filters, and account monitoring, and they sometimes suspend users who violate those policies. But those guardrails apply only to traffic that passes through their platforms; they do not extend to the rapidly growing ecosystem of open-source and open-weight models available to anyone with an internet connection. These models are generally smaller and less capable than their commercial counterparts, but research has shown that with careful prompting and tuning they can now approach the performance of top commercial systems.
All of this means that actors of every size, from well-funded organizations to grassroots groups, have a clear path to deploying politically persuasive AI at scale. Early demonstrations have already appeared elsewhere in the world. During India’s 2024 general elections, tens of millions of dollars were reportedly spent on AI for voter segmentation, identifying swing voters, and delivering personalized messages through robocalls and chatbots, among other uses. In Taiwan, officials and researchers have documented China-linked operations using generative AI to produce increasingly subtle disinformation, including deepfakes and language model outputs slanted toward messaging favored by the Chinese Communist Party.
It’s only a matter of time before this technology reaches US elections, if it has not already. Foreign adversaries are well positioned to lead the way. China, Russia, Iran, and others already run networks of troll farms, bot accounts, and covert influence operatives. Paired with open-source language models that can generate fluent, localized political content, those operations can be dramatically scaled up. There is no longer any need for human operators who understand the language or the context. With light fine-tuning, a model can impersonate a local organizer, a union representative, or an angry parent without anyone setting foot in the country. Political campaigns are likely to follow. Every major campaign already segments voters, tests messages, and optimizes distribution. AI drives down the cost of doing all of it. Instead of polling a slogan, a campaign can generate hundreds of arguments, deliver them one-on-one, and watch in real time which ones change minds.
The underlying reality is simple: Persuasion has become effective and cheap. Campaigns, PACs, foreign actors, advocacy groups, and opportunists are all playing on the same field, and the rules are thin.
The regulatory void
Most policymakers have not caught up. In recent years, US lawmakers have focused on deepfakes while overlooking the broader threat of persuasion.
Foreign governments have begun to take the issue more seriously. The European Union’s 2024 AI Act classifies election-related persuasion as a “high-risk” use case. Any system intended to sway voting behavior now faces stringent requirements. Administrative tools, such as AI systems used to organize campaign events or optimize logistics, are exempt; tools that aim to shape political beliefs or voting decisions are not.
The United States, by contrast, has yet to draw any meaningful lines. There are no enforceable rules defining what counts as a political influence operation, no outside standards to guide enforcement, and no shared infrastructure for tracking AI-generated persuasion across platforms. Federal and state authorities have gestured at regulation: the Federal Election Commission is invoking old fraud statutes, the Federal Communications Commission has proposed narrow disclosure requirements for broadcast ads, and a few states have passed deepfake laws. But these efforts are piecemeal and leave most digital campaigning untouched.
In practice, the job of identifying and dismantling covert campaigns has fallen almost entirely to private companies, each with its own rules, incentives, and blind spots. Google and Meta require disclosure when political ads are generated with AI. X has said little on the issue, and TikTok bans paid political advertising altogether. But these rules, modest as they are, cover only the small slice of content that is paid for and publicly displayed. They say nothing about the unpaid, covert persuasion campaigns that may matter far more.
To their credit, some firms have begun publishing periodic threat reports identifying covert influence operations. Anthropic, OpenAI, Meta, and Google have all reported takedowns of inauthentic accounts. But these efforts are voluntary and not independently verified. Most important, none of this stops determined actors from bypassing platform controls entirely with open-source models and off-platform infrastructure.
What a genuine strategy would entail
The United States does not need a blanket ban on AI in politics. Some uses could even strengthen democracy. A well-built candidate chatbot could help voters understand a candidate’s positions on the issues that matter to them, answer questions directly, or translate dense policy into plain language. Studies have even shown that AI can reduce belief in conspiracy theories.
Still, there are several steps the United States should take to guard against AI’s persuasive threats. First, it must defend against politically motivated foreign technology with persuasion built in. Adversarial political technology could take the form of a foreign-developed video game whose in-game characters echo political narratives, a social media site whose recommendation algorithm tilts toward particular viewpoints, or a language learning app that quietly weaves messages into daily lessons. Assessments such as the Center for AI Standards and Innovation’s recent evaluation of DeepSeek should focus on identifying and evaluating AI products, particularly those from countries like China, Russia, and Iran, before they are widely deployed. That effort will require coordination among intelligence agencies, regulators, and platforms to spot and mitigate risks.
Second, the United States should take the lead in setting the rules for AI-driven persuasion. That means restricting access to the computing power behind large-scale foreign persuasion efforts, since many operators will either rent existing models or buy GPU capacity to build their own. It also means establishing clear technical standards, through government bodies, standards organizations, and voluntary industry commitments, for how AI systems capable of generating political content should behave, especially during sensitive election periods. Domestically, the US must also decide what disclosures should be required for AI-generated political messaging, with First Amendment constraints in mind.
Finally, adversaries will try to route around these protections, using offshore servers, open-source models, or intermediaries in third countries. That is why the United States needs a foreign policy response. Multilateral agreements on election integrity should enshrine a basic principle: states that deploy AI systems to manipulate another nation’s electorate risk collective penalties and public exposure.
Getting there will likely require building shared monitoring frameworks, aligning disclosure and provenance standards, and being prepared to carry out coordinated takedowns of cross-border persuasion operations, because much of this activity is already moving into gray zones where current detection capabilities fall short. The US should also push to put election manipulation on the agenda at forums like the G7 and OECD, so that AI persuasion is treated not as an isolated technical problem but as a collective security threat.
Of course, the burden of protecting election integrity cannot fall on the United States alone. A working radar system for AI persuasion will require collaboration with allies and partners. Influence operations rarely respect borders, and open-source models and offshore servers are here to stay. The goal is not to eliminate them but to raise the cost of misuse and shrink the window in which they can operate undetected across jurisdictions.
The era of AI persuasion is arriving, and America’s adversaries are ready. In the US, by contrast, the rules are outdated, the guardrails too narrow, and the oversight largely voluntary. If the past decade was defined by viral misinformation and doctored videos, the next will be shaped by a subtler force: messages that sound reasonable, feel familiar, and are just persuasive enough to change minds.
For China, Russia, Iran, and others, exploiting America’s open information environment is a strategic advantage. We need a strategy that treats AI persuasion not as a distant concern but as a present reality. That means soberly assessing the risks to democratic discourse, setting concrete standards, and building a technical and legal framework around them. Because if we wait until we can see it happening, it will already be too late.
Tal Feldman is a JD candidate at Yale Law School focused on technology and national security. Before law school, he developed AI models within the federal government and was a Schwarzman and Truman scholar. Aneesh Pappu is a PhD student and Knight-Hennessy scholar at Stanford University concentrating on agentic AI and technology policy. Before Stanford, he worked as a privacy and security researcher at Google DeepMind and was a Marshall scholar.