Beyond Disinformation: AI Misinformation and Mal-information Threats

Date: 02 March 2026

1. Introduction

During a recent EXIN AI Compliance Professional Certification Course, participants discussed the growing concern over disinformation, one of the most frequently cited risks of Artificial Intelligence (AI). Yet when I asked about two related phenomena, misinformation and mal-information, participants were far less familiar with them; both are often underestimated in their ethical and regulatory implications.

In a world where AI systems curate, generate, and amplify content at scale, distinguishing between these forms of information disorder is crucial. Each has distinct motivations, consequences, and implications for human rights, public trust, and democratic integrity.

This white paper aims to clarify the key differences between the three, explore real-world examples, and outline how they intersect with the UNESCO Recommendation on the Ethics of Artificial Intelligence (2021) and core international human rights frameworks.

2. Understanding the Terminology

Disinformation
  Definition: False information deliberately created and shared to deceive or manipulate.
  Intent: Deliberate
  Example: State-sponsored fake news campaigns or deepfake propaganda.

Misinformation
  Definition: False or inaccurate information shared without intent to harm.
  Intent: Unintentional
  Example: Sharing an inaccurate statistic or health tip generated by an AI chatbot.

Mal-information
  Definition: Genuine information shared with intent to cause harm.
  Intent: Harmful
  Example: Publishing private or truthful data to damage reputation or safety.

(Source: Wardle, C., & Derakhshan, H., Information Disorder, Council of Europe, 2017)

3. The Role of AI in Amplifying Information Disorders

AI’s speed, reach, and personalization capabilities make it both a force multiplier and a risk amplifier, shaping public opinion and intruding on personal privacy. It plays a dual role:

  • Propagating misinformation accidentally via generative model “hallucinations.”
  • Facilitating disinformation and mal-information when exploited to scale harmful content.

4. Case Studies

4.1 Misinformation: COVID-19 Health Claims

Case: During the pandemic, social platforms were flooded with AI-amplified false health information, including claims linking 5G technology to the virus and promoting dangerous supposed preventatives.

Impact: Confusion around health guidance; undermined trust in scientific authorities.

4.2 Disinformation: 2016 Election Interference

Case: Coordinated campaigns used AI-driven bots to manipulate voter opinions through targeted deceptive advertisements.

Impact: Deepened political polarisation and undermined democratic integrity.

4.3 Mal-information: Doxxing

Case: Publication of real private photos or data to shame individuals, often facilitated by AI-enabled data scraping.

Impact: Exposure to harassment and blackmail; direct violation of dignity and privacy.

5. Comparative Risk Analysis

Dimension    Misinformation    Disinformation    Mal-information
Intent       Unintentional     Deliberate        Harmful
Accuracy     False             False             True
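The two dimensions above fully determine the category. As a minimal illustrative sketch (the `classify` helper and its labels are hypothetical, not part of any standard or framework), the taxonomy can be expressed as a simple decision rule:

```python
def classify(is_accurate: bool, intent: str) -> str:
    """Map the two dimensions (accuracy, intent) to an information-disorder
    category, following the Wardle & Derakhshan taxonomy cited above.

    intent is one of: "unintentional", "deliberate", "harmful".
    """
    if is_accurate and intent == "harmful":
        return "mal-information"          # true content weaponised to harm
    if not is_accurate and intent in ("deliberate", "harmful"):
        return "disinformation"           # false content spread on purpose
    if not is_accurate and intent == "unintentional":
        return "misinformation"           # false content spread innocently
    return "not an information disorder"  # accurate, benign sharing


print(classify(False, "unintentional"))  # misinformation
print(classify(False, "deliberate"))     # disinformation
print(classify(True, "harmful"))         # mal-information
```

The rule makes the key governance point explicit: mal-information is the only category where accuracy is not the problem, so fact-checking alone cannot address it.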

6. Ethical and Governance Implications

AI governance frameworks should mandate transparency, strengthen accountability, integrate human rights impact assessments, and promote international cooperation to counter disinformation campaigns that cross borders.

7. Conclusion

Protecting the public from information disorder is about preserving the integrity of knowledge and the conditions for human freedom in the digital age. AI Governance professionals must embrace ethical AI governance rooted in human rights principles.


Deepinder Singh Chhabra

Authored by EXIN Ambassador

GRC Professional Services (EMEA) lead at Verizon Business with 20+ years in cybersecurity, risk management, and compliance. Certified in CISA, CISM, CGEIT, CCISO, CISSP, and CRISC—passionate about cyber resilience, innovation, and industry leadership.