Widely used AI systems can alter voter preferences by 15% under controlled conditions, according to a study by Cornell University and the British Institute for AI Safety.
“There is significant public concern about the potential use of technology for political persuasion and its possible impact on elections and democracy. We address these concerns using pre-registered experiments to assess the ability of large language models to influence voter attitudes,” the experts noted.
The study involved nearly 6,000 participants from the US, Canada, and Poland. Each participant rated a political candidate, interacted with a chatbot about that candidate, and then rated the candidate again.
In the US segment, 2,300 people participated ahead of the 2024 presidential election. There, the AI had an amplifying effect when the chatbot's stance matched the participant's existing views.
When the chatbot supported a candidate the participant opposed, "more noticeable shifts" occurred: participants significantly changed their views of the political figure.
Similar results were observed in Canada and Poland.
The study showed that messages focused on policy had a greater persuasive effect than those based on personality.
The accuracy of the chatbots' statements varied with the side they argued for: chatbots supporting right-wing candidates made more inaccurate claims than those advocating for left-wing candidates.
“These findings suggest an uncomfortable possibility: AI’s political influence might exploit biases in what models know to unevenly spread inaccuracies even when directly instructed to tell the truth,” the researchers noted.
AI Bias
In a separate study in Science, experts sought to understand why the persuasion effect occurs at all. They tested 19 language models on 76,977 adults in the UK across more than 700 political questions.
“There is widespread fear that conversational AI will soon be able to exert unprecedented influence on human beliefs,” the authors wrote.
They found that prompt engineering affected persuasive ability more than model size did. Prompts encouraging the LLM to introduce new information increased persuasiveness but reduced accuracy.
AI Is Not Neutral
Think tanks have noted that young conservatives are more willing than liberals to delegate important government decision-making to AI.
Donald Kendall, director of the Glenn Haskins Center for New Issues at the Heartland Institute, noted that voters often misjudge the neutrality of large language models.
“One thing I try to convey to people is to dispel the illusion that artificial intelligence is impartial. It is clearly biased, sometimes in a passive form,” he added.
The expert stated that major Silicon Valley corporations developing AI “do not shy away from influencing what content is disseminated on their platforms.”
As previously reported, a simulated Federal Reserve meeting run with AI agents showed that political pressure divided the board members during discussions on interest rates.
