Combating disinformation needs human-led, AI-enabled disruption

With more newsrooms incorporating artificial intelligence into their daily operations, a hybrid approach combining human oversight and AI automation has emerged as a promising tool to combat the rising tide of disinformation.

AI in journalism is far from straightforward. While it has made routine tasks more efficient, it has also exposed the complex challenges newsrooms face amid rapid technological advances.

At a time when AI-generated content is reshaping public sentiment and trust within the digital media landscape, its dual impact cannot be ignored – AI enhances efficiency and creative possibilities, but also raises significant ethical and authenticity concerns.

The solution appears to lie within the problem itself: AI can help counter the disinformation it enables, but only under rigorous ethical frameworks and oversight, backed by coordinated action from researchers, policymakers, industry, and media stakeholders.

The human element

Nearly all research points to the human touch as essential in producing trustworthy and reliable content with AI’s assistance. This also applies to creating tools that verify and fact-check information to combat misinformation.

While disinformation is not new, technology has accelerated its spread and expanded its reach, making a single solution practically impossible. This is why a multi-faceted approach is crucial.

However, media professionals need greater AI literacy and training. Only by acquiring these skills can journalists effectively meet today’s challenges and use AI responsibly in their work.

Breaking the misinformation loop

In an article for The Bulletin, mathematician and writer Susan D’Agostino explores the “misinformation feedback loop”, where algorithms exploit engagement to perpetuate and amplify falsehoods.

She stresses that the problem extends beyond the supply of false content into the demand side. Citing Professor Marshall Van Alstyne, she argues that interventions must tackle both supply and demand.

To disrupt this cycle, D’Agostino proposes a dual-front strategy: curbing the supply of AI-enabled falsehoods while transforming the psychological, technological, and cultural structures that drive their consumption and replication.

Mitigating AI bias

Ramaa Sharma’s detailed examination of AI bias in journalism, featuring insights from multiple experts, offers journalists a practical starting point to mitigate bias through diverse, interdisciplinary teams and fairness practices embedded from the outset.

Key solutions include proactive monitoring and metadata tracking, understanding different types of bias, improving dataset diversity and quality, employing AI tools to detect bias, encouraging transparent AI outputs, and fostering awareness and collaboration.

These approaches combine technical rigour, ethical vigilance, and a collaborative newsroom culture to tackle AI bias. In an era of fragile trust, the critical question is not how challenging this work is, but rather: “What is the cost of not doing so?”

Internal AI chatbots

Some newsrooms have found their own answers to these challenges. Rowan Philip’s article explores how leading news organisations are experimenting with internal AI chatbots that answer reader queries using only their own vetted journalism.

By avoiding the unreliable breadth of the internet – where tools such as ChatGPT draw from vast, unverified sources – these in-house chatbots provide responses grounded in curated archives, with editorial safeguards to limit misinformation.

This cautious optimism suggests that, by anchoring AI chatbots in carefully vetted archives, newsrooms can build reader trust and engagement, reinforcing the enduring value of rigorous journalism.

Adapting to change

AI is already embedded in newsroom workflows across Europe, transforming how news is produced and consumed. As threats become more pervasive, understanding and addressing these risks is essential.

The AI revolution in journalism offers unprecedented opportunities but also presents significant challenges. If left unchecked, AI-driven bias and misinformation could erode public trust at a time when reliable information is more important than ever.

European newsrooms and policymakers must act decisively to embed transparency, fairness, and oversight in AI tools. Only through coordinated, interdisciplinary efforts can journalism continue to uphold its vital democratic role in an age shaped by AI.

[Edited by Brian Maguire | Euractiv's Advocacy Lab]


