
Researchers discover the factors that render AI chatbots politically influential

by admin

Roughly two years ago, Sam Altman tweeted that AI systems would achieve superhuman persuasion long before they reach general intelligence, a prediction that fueled concerns about AI's potential influence on democratic elections.

To test whether conversational large language models can actually shift the public's political opinions, researchers at the UK AI Security Institute, MIT, Stanford, Carnegie Mellon, and other institutions ran the largest study of AI persuasiveness to date, with nearly 80,000 participants across the UK. The results showed that political AI chatbots fall well short of superhuman persuasiveness, but the study points to subtler concerns about how we interact with AI.

AI dystopias

The public conversation about AI's effects on politics has mostly drawn on ideas borrowed from dystopian science fiction. Large language models have access to virtually every fact and narrative ever published about any issue or candidate. They have ingested texts on psychology, negotiation, and human manipulation. They can draw on enormous computing power in massive data centers around the world. And they often have vast amounts of personal data about individual users at their disposal, gathered through countless online interactions.

Talking to a powerful AI system is, in effect, conversing with an intelligence that knows everything about everything, and nearly everything about you. Seen from this angle, LLMs can look rather intimidating. The goal of this large AI persuasiveness study was to break such daunting visions down into their component parts and test whether they actually hold up.

The team evaluated 19 LLMs, including some of the most powerful, such as three different versions of ChatGPT and xAI's Grok-3 beta, along with a range of smaller, open-source models. The AIs were tasked with arguing for or against specific positions on 707 political issues chosen by the team, in brief conversations with paid participants recruited through a crowdsourcing platform. Each participant rated their agreement with an assigned position on a scale from 1 to 100, both before and after conversing with the AI.
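
The study's core measurement is the change in each participant's 1-to-100 agreement rating. The short Python sketch below shows how such a shift could be averaged across participants; it is a hypothetical illustration, not the researchers' actual analysis code, and the function name and sample data are assumptions.

# Hypothetical sketch: average the change in agreement (after minus before)
# across participants, each rated on the study's 1-100 scale.
# Illustrative only; not the researchers' analysis code.
from statistics import mean

def persuasion_effect(ratings):
    """Mean shift in agreement, computed as (after - before) per participant."""
    return mean(after - before for before, after in ratings)

# Example with made-up (before, after) ratings for three participants.
sample = [(40, 48), (55, 55), (62, 70)]
print(f"Mean shift: {persuasion_effect(sample):+.1f} points on a 1-100 scale")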
