AI impacted US elections more than UK, French, and EU elections, reports new research – Euractiv



AI-generated disinformation and deepfakes did not significantly affect the results of the UK, French or European elections. The same cannot be convincingly said of the United States.

Half of the world’s population had, or still has, the chance to go to the polls in 72 countries, from South Africa to the South Pacific, to Europe and the Americas, in the biggest election year in history.

Faced with ever-growing concern that disinformation could harm the integrity of democratic systems, and with new risks posed by various artificial intelligence (AI) tools, citizens around the world made their choices, more affected by these threats in some places than in others.

This is the conclusion of recent research by the Centre for Emerging Technology and Security (CETaS) at The Alan Turing Institute, which analysed the UK, French and European elections.

Limited impact in Europe

The research identified 16 viral cases of AI disinformation or deepfakes during the UK general election, while only 11 viral cases were identified in the EU and French elections combined.

The paper states that AI-enabled misinformation did not meaningfully affect these election processes, but it raises emerging concerns about realistic parody or satire deepfakes that may mislead voters.

Another concern highlighted by the research relates to the use of deepfake pornography smears targeting politicians, especially women, which poses reputational and well-being risks.

Researchers also noticed instances of voter confusion between legitimate political content and AI-generated material, potentially undermining public confidence in online information.

Despite these concerns, the research also points to potential benefits of AI: it can amplify environmental campaigns, foster engagement between voters and politicians, and speed up fact-checking.

Researchers highlighted instances of disinformation linked to domestic and state-sponsored actors, including interference associated with Russia, though without significant impact on election outcomes.

US election impact

CETaS researchers also analysed the US election, where several examples of viral AI disinformation were observed. These included AI-generated content used to undermine candidates and AI bot farms mimicking US voters or spreading conspiracy theories.

While there is no conclusive evidence that such content manipulated the results – the data are insufficient – AI-generated material still influenced election discourse, amplified harmful narratives and entrenched political polarisation.

AI-driven content mainly resonated with those whose beliefs already aligned with disinformation, reinforcing existing ideological divisions. Traditional forms of misinformation also played a major role.

Separate research published by The Brookings Institution reaches the same conclusion: disinformation played a significant role in shaping public views of candidates and of the issues discussed during the campaigns.

Its authors cite false stories about immigrants, fabricated videos and images targeting the Democratic candidate, and misleading claims about crime and border security as examples of misinformation.

These narratives, they argue, were amplified by social media, memes and mainstream media, and exacerbated by generative AI tools that made it easier to create realistic fakes. Although independent fact-checkers debunked many false claims, the narratives still affected voter perceptions, especially on immigration and the economy.

Taking measures

To prevent harmful narratives from thriving, temper the hype surrounding AI-generated threats and combat disinformation, both studies call for action in several key areas.

CETaS researchers suggest strengthening laws on defamation, privacy and elections, and embedding provenance records in government content to ensure authenticity. They also advocate developing benchmarks for deepfake detection and expanding guidance for political parties on AI use.

Another suggestion is to counteract engagement with disinformation by adjusting press guidelines on how it is covered and by involving journalists and fact-checkers in refining response strategies.

CETaS also calls for empowering society by closing regulatory gaps, giving academic and civil society groups access to data on harmful campaigns, and establishing nationwide digital literacy programmes.

The Brookings Institution research advocates for stronger content moderation by social media platforms, digital literacy programmes to help people identify false information, and an understanding of the financial incentives driving the spread of lies.

It also highlights the dangers of political polarisation, where people are more inclined to believe and spread negative information about their opponents, warning that if these trends continue, they could harm governance and trust in democracy.

Although the overall scale of AI's influence on these elections cannot yet be fully assessed, it seems that, for now, the world has avoided the full-blown chaos AI can sow in such processes. However, the use of AI tools by political actors is only set to grow.

[Edited by Brian Maguire | Euractiv’s Advocacy Lab]






