- Two major studies in Nature and Science show that political chatbots can change attitudes and voting intentions across several countries.
- Their persuasion rests primarily on offering large numbers of arguments and data points, which also increases the risk of inaccurate information.
- Optimizing models to influence strengthens the persuasive effect by up to 25 points, but reduces the truthfulness of their responses.
- The findings open an urgent debate in Europe and other democracies on regulation, transparency, and digital literacy.
The emergence of political chatbots has ceased to be a technological curiosity and become a factor that is starting to matter in real election campaigns. A conversation of just a few minutes with an AI model is enough to shift sympathy toward a candidate or a concrete proposal by several points, an effect that until recently was associated only with large media campaigns or highly coordinated rallies.
Two far-reaching studies, published simultaneously in Nature and Science, have put numbers to something that was already suspected: conversational chatbots can modify citizens' political attitudes with remarkable ease, even when people know they are interacting with a machine. And they do so, above all, through information-laden arguments rather than sophisticated psychological tactics.
Chatbots in campaigns: experiments in the US, Canada, Poland and the UK

The new evidence comes from a battery of experiments coordinated by teams from Cornell University and the University of Oxford, carried out during actual electoral processes in the United States, Canada, Poland, and the United Kingdom. In all cases, participants knew they would be speaking with an AI, but they did not know the political orientation of the chatbot assigned to them.
In the work led by David Rand and published in Nature, thousands of voters held brief dialogues with language models configured to defend a specific candidate. In the 2024 US presidential election, for example, 2,306 citizens first indicated their preference between Donald Trump and Kamala Harris; they were then randomly assigned to a chatbot that argued for one of the two.
After the conversation, changes in attitude and voting intention were measured. Chatbots favorable to Harris shifted attitudes by 3.9 points on a 0-100 scale among voters initially aligned with Trump, an impact the authors estimate is four times larger than that of conventional election advertising tested in the 2016 and 2020 campaigns. The pro-Trump model also moved positions, albeit more moderately, with a change of 1.51 points among Harris supporters.
The results in Canada (with 1,530 participants and chatbots defending Mark Carney or Pierre Poilievre) and in Poland (2,118 people, with models promoting Rafał Trzaskowski or Karol Nawrocki) were even more striking: in those contexts, chatbots achieved changes in voting intention of up to 10 percentage points among opposition voters.
A key aspect of these trials is that, although most conversations lasted only a few minutes, part of the effect persisted over time. In the United States, a little over a month after the experiment, a significant fraction of the initial impact could still be observed, despite the avalanche of campaign messages participants received during that period.
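To make the effect sizes above concrete, here is a minimal sketch, with entirely synthetic data, of how a pre/post attitude shift by chatbot condition might be computed. The nudge values and sample size merely echo the figures quoted above; the published studies use far more careful statistical machinery (controls, weighting, preregistered tests).

```python
# Minimal sketch with synthetic data: computing a mean pre/post attitude
# shift per chatbot condition. Effect sizes below are hypothetical nudges
# loosely echoing the article's figures, not the studies' actual data.
import random

random.seed(42)

def simulate_participant(condition: str) -> tuple[float, float]:
    """Return (pre, post) attitude toward Harris on a 0-100 scale."""
    pre = random.uniform(0, 100)
    nudge = 3.9 if condition == "pro_harris" else -1.5
    post = min(100.0, max(0.0, pre + nudge + random.gauss(0, 5)))
    return pre, post

for condition in ("pro_harris", "pro_trump"):
    shifts = []
    for _ in range(2306):  # sample size echoing the US experiment
        pre, post = simulate_participant(condition)
        shifts.append(post - pre)
    mean_shift = sum(shifts) / len(shifts)
    print(f"{condition}: mean attitude shift = {mean_shift:+.2f} points")
```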
What makes a political chatbot convincing (and why that leads to more errors)

The researchers wanted to understand not only whether chatbots could persuade, but how they were doing it. The pattern that repeats across the studies is clear: the AI has the greatest influence when it deploys many fact-based arguments, even if much of that information is not particularly sophisticated.
In the experiments coordinated by Rand, the most effective instruction for the models was to ask them to be polite and respectful and to back up their statements with evidence. Courtesy and a conversational tone helped, but the main lever of change lay in offering data, examples, figures, and constant references to public policy, the economy, or healthcare.
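To make this concrete, here is a minimal sketch of what such an instruction could look like in practice. The studies' actual prompts are not reproduced in this article, so the system message below is hypothetical, and the OpenAI Python client is used only as a stand-in for whichever model a campaign might deploy.

```python
# Illustrative sketch only: the wording of the system prompt is
# hypothetical, and the OpenAI client is a stand-in for any chat model.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM_PROMPT = (
    "You are a campaign assistant discussing the upcoming election. "
    "Be polite, respectful, and conversational. Whenever you make a "
    "claim, support it with concrete evidence: figures, dates, and "
    "references to public policy, the economy, or healthcare."
)

response = client.chat.completions.create(
    model="gpt-4o-mini",  # hypothetical model choice
    messages=[
        {"role": "system", "content": SYSTEM_PROMPT},
        {"role": "user", "content": "Why should I back this candidate's economic plan?"},
    ],
)
print(response.choices[0].message.content)
```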
When models were limited in their access to verifiable facts and instructed to persuade without resorting to concrete data, their power to influence fell drastically. This led the authors to conclude that the advantage of chatbots over other formats of political propaganda lies not so much in emotional manipulation as in the sheer information density they can deploy in just a few turns of conversation.
But this same strategy has a downside: as the models are pushed to generate ever more supposedly factual claims, the risk grows that the system runs out of reliable material and begins to "invent" facts. Simply put, the chatbot fills in the gaps with data that sounds plausible but isn't necessarily correct.
The study published in Science, with 76,977 adults from the United Kingdom and 19 different models (from small open-source systems to cutting-edge commercial ones), confirms this systematically: post-training focused on persuasion increased the ability to influence by up to 51%, while simple changes in instructions (so-called prompting) added another 27%. At the same time, these gains came with a noticeable reduction in factual accuracy.
Ideological asymmetries and the risk of disinformation
One of the most troubling conclusions of the Cornell and Oxford studies is that the imbalance between persuasiveness and truthfulness is not evenly distributed among all candidates and positions. When independent fact-checkers analyzed the messages generated by the chatbots, they found that models supporting right-wing candidates made more mistakes than those supporting progressive candidates.
According to the authors, this asymmetry is consistent with previous studies showing that conservative users tend to share more inaccurate content on social media than left-leaning users. Since language models learn from vast amounts of text scraped from the internet, they are likely reflecting some of that bias rather than creating it from scratch.
In any case, the consequence is the same: when a chatbot is instructed to maximize its persuasive power in favor of a particular ideological bloc, the model tends to increase the proportion of misleading claims, though it continues to mix them with plenty of correct data. The problem is not just that false information can slip through, but that it does so wrapped in a seemingly reasonable, well-documented narrative.
The researchers also highlight an uncomfortable point: they have not shown that inaccurate claims are inherently more persuasive; rather, when the AI is pushed to become ever more effective, the number of errors grows in parallel. In other words, improving persuasive performance without sacrificing accuracy remains an unresolved technical and ethical challenge.
This pattern is especially concerning in contexts of high political polarization, like those experienced in parts of Europe and North America, where the margins of victory are narrow and a handful of percentage points can decide the outcome of a general or presidential election.
Limitations of the studies and doubts about the real impact at the ballot box
Although the Nature and Science results are solid and agree in their main conclusions, both teams stress that these are controlled experiments, not real campaigns. Several elements call for caution when extrapolating the data to an actual election.
On the one hand, participants either enrolled voluntarily or were recruited through platforms that offer financial compensation, which introduces self-selection biases and departs from the diversity of the actual electorate. Furthermore, they knew at all times that they were talking to an AI and that they were part of a study, conditions that would hardly be repeated in an ordinary campaign.
Another important nuance is that the studies primarily measured changes in attitudes and stated intentions, not actual votes cast. These are useful indicators, but they are not equivalent to observing final behavior on election day. In fact, in the US experiments the effect was somewhat smaller than in Canada and Poland, suggesting that political context and the degree of prior indecision matter considerably.
In the case of the British study coordinated by Kobi Hackenburg of the UK's AI Security Institute, there are also clear limitations: the data come only from UK voters, all of them aware that they were taking part in an academic study and compensated for it. This limits generalization to other European countries or to less controlled contexts.
Nevertheless, the scale of these works (tens of thousands of participants and more than 700 different political topics) and their methodological transparency have led much of the academic community to consider the scenario they paint plausible: political chatbots capable of altering opinions relatively quickly are no longer a futuristic hypothesis, but a technically feasible reality for upcoming campaigns.
A new electoral player for Europe and other democracies
Beyond the specific cases of the US, Canada, Poland, and the UK, the findings have direct implications for Europe and Spain, where the regulation of political communication on social media and the use of personal data in campaigns are already the subject of intense debate. The possibility of deploying chatbots that hold personalized dialogues with voters adds an extra layer of complexity.
Until now, political persuasion was articulated primarily through static advertisements, rallies, televised debates, and social media. The arrival of conversational assistants introduces a new element: one-on-one interactions, adapted on the fly to what the citizen says in real time, and all at a practically marginal cost for campaign organizers.
The researchers emphasize that the key is no longer just who controls the voter database, but who can develop models capable of producing, refining, and replicating arguments continuously, with a volume of information that far exceeds what a human volunteer could handle at a phone bank or a street stand.
In this context, voices such as that of the Italian expert Walter Quattrociocchi insist that the regulatory focus should shift from aggressive personalization or ideological segmentation toward the information density the models can provide: the studies show that persuasion grows primarily when data multiplies, not when emotional strategies are used.
The convergence of results between Nature and Science has raised alarm among European organizations concerned with the integrity of democratic processes. Although the European Union is making progress with frameworks such as the Digital Services Act and upcoming AI-specific regulation, the speed at which these models evolve demands constant review of the mechanisms for supervision, auditing, and transparency.
Digital literacy and defense against automated persuasion

One of the recurring messages in the academic commentaries accompanying these works is that the response cannot rest solely on prohibitions or technical controls. The authors agree that it will be essential to strengthen the digital literacy of the population so that citizens learn to recognize and resist persuasion generated by automated systems.
Complementary experiments, such as those published in PNAS Nexus, suggest that users who best understand how large language models work are less vulnerable to their attempts at influence. Knowing that a chatbot can be wrong, exaggerate, or fill gaps with guesswork reduces the tendency to accept its messages as if they came from an infallible authority.
At the same time, it has been observed that the persuasive effectiveness of AI depends not so much on the interlocutor believing they are talking to an expert human, but on the quality and consistency of the arguments received. In some tests, chatbot messages even reduced belief in conspiracy theories, regardless of whether participants thought they were chatting with a person or a machine.
This suggests that the technology is not inherently harmful: it can be used both to combat disinformation and to propagate it. The line is drawn by the instructions given to the model, the data on which it is trained, and, above all, the political or commercial objectives of those who deploy it.
While governments and regulators debate limits and transparency requirements, the authors of these works insist on one idea: political chatbots can only exert mass influence if the public agrees to interact with them. Hence, public debate over their use, their clear labeling, and the right not to be subjected to automated persuasion will become central issues in the democratic conversation in the coming years.
The picture painted by the research in Nature and Science reveals both opportunities and risks: AI chatbots can help explain public policies and resolve complex doubts, but they also have the capacity to tip the electoral scales, especially among undecided voters, and they do so at an evident cost to informational accuracy when trained to maximize persuasive power, a delicate balance that democracies will have to address urgently and without naivety.