‘Fake news’ is not an adequate term to describe a complex issue. This report of the Council of Europe, authored by Claire Wardle (Executive Director of First Draft) and Iranian blogger Hossein Derakhshan, first underlines the absence of definitional rigour around what is better called ‘information pollution’ or ‘information disorder’. The first part offers a conceptual framework that distinguishes:
- mis-information, when false news is spread but no harm is meant
- dis-information, when false news is shared to cause harm
- mal-information, when genuine information is spread to cause harm
The authors adapt communication theory to the current digital landscape, also identifying three distinct phases (creation, production, distribution) and three elements (agents with different motivations, messages, interpreters).
These distinctions are drawn to stress the ritualistic function of communication, as stated in the second part of the publication on the challenges of filter bubbles and echo chambers: “If we recognise that people seek out and consume content for many reasons beyond simply becoming informed – like feeling connected to similar people or affiliating with a specific identity – it means that pricking the filter bubble requires more than simply providing diverse information.”
In fact, studies have found that corrected news does not spread as virally as false news and often does not even reach the audience that shared the falsehood. The third part of the publication focuses on possible solutions, among which strategic silence is deemed essential because “without amplification, dis-information goes nowhere”.
Nudge technology might provide newsrooms with tools for source-checking and contextualising mis-information, although the authors highlight that information pollution is mostly a visual problem. A news literacy task force is recommended to build a curriculum which includes (i) traditional news literacy skills; (ii) forensic social media verification skills; (iii) information about the power of algorithms to shape what is presented to us; (iv) the possibilities but also the ethical implications offered by artificial intelligence; (v) techniques for developing emotional scepticism to override our brain’s tendency to be less critical of content that provokes an emotional response; and (vi) statistical numeracy.
The fourth part focuses on future trends and primarily warns against mis- or dis-information spread through messaging apps.
Thirty-four recommendations are finally targeted at technology companies, national governments, media organisations, civil society, education ministries and funding bodies to constructively address the issue.
The content of this article can be used according to the terms of Creative Commons: Attribution-NonCommercial 4.0 International (CC BY-NC 4.0). To do so, use the wording "this article was originally published on the Resource Centre on Media Freedom in Europe" including a direct active link to the original article page.