Glossary of Platform Law and Policy Terms

Disinformation

Cite this article as:
Giovanni de Gregorio (17/12/2021). Disinformation. In Belli, L.; Zingales, N. & Curzi, Y. (Eds.), Glossary of Platform Law and Policy Terms (online). FGV Direito Rio. https://platformglossary.info/disinformation/.

Author: Giovanni de Gregorio

Defining this phenomenon has proven to be far from simple. Scholars from different fields have proposed definitions of it (Tandoc Jr. et al., 2018)1. The information disorder has been defined as the mix of ‘misinformation’, ‘disinformation’ and ‘malinformation’, which respectively reflect increasing levels of harm and involve different content (Wardle; Derakhshan, 2017)2. False information would include news that is intentionally and verifiably false, disseminated to mislead the public (Allcott; Gentzkow, 2017)3. Adopting this definition would imply that only news disseminated with the intention to mislead readers falls within the field of disinformation. Other (false) information outside this framework of intent could therefore be considered free expression of one’s thoughts. This could cover, for instance, information shared by mistake or as satire, as well as investigative journalism whose findings rest not on entirely verified facts but on reconstructions of the truth. According to the European Commission’s High-Level Expert Group on Fake News and Online Disinformation (HLEG), disinformation is “false, inaccurate, or misleading information designed, presented and promoted to intentionally cause public harm or for profit. The risk of harm includes threats to democratic political processes and values, which can specifically target a variety of sectors, such as health, science, education, finance and more” (2018)4. The expression ‘disinformation’ has been considered a more adequate way than ‘fake news’ to describe the spread of false content, since the phenomenon concerns not only (fake) news but also other false or misleading content, such as fake accounts, manipulated videos, and other fabricated media (Chesney; Citron, 2019)5. Moreover, the HLEG distinguishes the notion of disinformation from that of ‘misinformation’, i.e., “misleading or inaccurate information shared by people who do not recognize it as such”, and underlines that disinformation does not include illegal speech (e.g., hate speech).

Disinformation is not a phenomenon of the digital age; what is new is its digital dissemination. The digital dimension entails the worldwide reach of online content, beyond territorial boundaries and beyond the traditional media environment (Sunstein, 2017)6. The spread of false information online during the 2016 Brexit referendum and the US presidential election can be considered two paradigmatic examples of how disinformation influences internal politics, interfering with electoral processes and undermining trust in public institutions and the media. Besides, the pandemic has shown how disinformation can affect information quality and health on a global scale (Majó-Vázquez et al., 2020)7.

The spread of false content can be understood by looking at the new media framework (Martens et al., 2018)8. The characteristics of media manipulation highlight how the media sector tends to gravitate toward sensationalism, constant novelty, and profit rather than professional ethical standards and civic responsibility (Marwick; Lewis, 2017)9. Digital spaces are ideal venues for disseminating information at minimal or no cost.

Within this framework, the role of online platforms, including social media, becomes critical to understanding how false and misleading information spreads online. The monetization of expressions that capture users’ attention and go viral depends heavily on algorithmic systems that push certain messages to the top and promote further engagement. Unlike traditional media outlets, social media usually perform content moderation through automated systems that define how information is organized online. Beyond media strategies for disseminating disinformation, scholars have also emphasized the role of the political context, noting that the role of technology platforms, bots, and foreign agents has tended to be overemphasized (Benkler, 2018)10. Political parties and, in particular, populist movements have relied on strategies of disinformation to support their political ideas (Bayer et al., 2019)11, and political micro-targeting contributes to this purpose (Dobber et al., 2019)12.

This framework shows why, before focusing on the challenges of addressing disinformation, it is worth defining the boundaries of false content, considering the digital environment as its primary context. These definitions allow us to understand the multifaceted character of disinformation, which requires public actors to face the complexities of regulating freedom of expression online in order to tackle this phenomenon. This is why dealing with disinformation means addressing the boundaries of the right to free speech and thus involves democratic values (Pitruzzella; Pollicino, 2020)13. Indeed, tackling disinformation requires public actors to decide to what extent speech is protected and balanced against other constitutional rights and liberties, and how other (legitimate) interests are pursued. Nonetheless, the regulation of speech no longer involves just the State and the speaker, but also multiple players outside the control of the State, such as social media companies. In the information society, freedom of expression resembles a triangle whose vertices are states, speakers, and the private infrastructure, notably online platforms, that mediates between them (Balkin, 2018)14. Therefore, due to the role of online platforms in this field, regulation should also take into account the effects of regulatory choices on the role and responsibilities of these actors.

From a policy perspective, different regulatory solutions have been adopted worldwide (Robinson et al., 2019; De Gregorio & Perotti, 2019)15 16. While the US has not adopted a precise strategy to deal with this phenomenon, the European Union has focused on soft-law commitments by platforms, notably the Code of Practice on Disinformation, and on ad hoc measures targeting the context of the European elections (Pollicino et al., 2020)17. Domestic legislation of European states provides a highly fragmented regulatory picture (e.g., Germany, France). From a global perspective, other regulatory experiences have shown a tendency towards the criminalization of disinformation (e.g., Singapore, Russia). This fragmentation not only challenges the protection of the right to freedom of expression online but also undermines the principle of the rule of law, rather than promoting a clear regulatory framework to address this global phenomenon. For instance, vagueness about definitions and about thresholds of harm or illegality would negatively impact the right to freedom of expression. Within this framework, judicial scrutiny of these regulatory measures could ensure a fair assessment of each case and mediation by an independent authority. The role of judicial authorities in scrutinizing measures to remove false content could contribute to safeguarding the right to freedom of expression against discretionary decisions taken by platforms or non-independent public bodies. Besides, promoting fact-checking activities, supporting professional media outlets, and investing in digital literacy campaigns could play a critical role (Ireton; Posetti, 2018)18. Relying on these measures would entail a lower impact on freedom of expression while building the instruments to fight disinformation on a global scale.

References 

  1. Tandoc Jr., Edson C. et al. (2018). Defining “Fake News”: A Typology of Scholarly Definitions. Digital Journalism, 6(2), 137.
  2. Wardle C., Derakhshan H. (2017). Information Disorder: Towards an Interdisciplinary Framework for Research and Policy Making. Council of Europe report. DGI 2017 09. Available at: https://rm.coe.int/information-disorder-toward-an-interdisciplinary-framework-for-researc/168076277c.
  3. Allcott, H., Gentzkow, M. (2017). Social media and fake news in the 2016 election. Journal of Economic Perspectives, 31(2), 211-236.
  4. European Commission. (2018). A multi-dimensional approach to disinformation. Report of the independent High-Level Group on fake news and online disinformation.
  5. Chesney, B., Citron, D. (2019). Deep fakes: A looming challenge for privacy, democracy, and national security. Calif. L. Rev., 107, 1753.
  6. Sunstein, Cass R. (2017). #Republic: Divided Democracy in the Age of Social Media. Princeton University Press.
  7. Majó-Vázquez, Silvia et al. (2020). Volume and Patterns of Toxicity in Social Media Conversations during the COVID-19 Pandemic. Reuters Institute. Available at: https://reutersinstitute.politics.ox.ac.uk/volume-and-patterns-toxicity-social-media-conversations-during-covid-19-pandemic.
  8. Martens, Bertin et al. (2018). The Digital Transformation of News Media and the Rise of Disinformation and Fake News. JRC Digital Economy Working Paper, 2.
  9. Marwick, Alice, Lewis, Rebecca. (2017). Media Manipulation and Disinformation Online. Data & Society.
  10. Benkler, Y., Faris, R., Roberts, H. (2018). Network Propaganda: Manipulation, Disinformation, and Radicalization in American Politics. Oxford University Press.
  11. Bayer, J. et al. (2019). Disinformation and propaganda – impact on the functioning of the rule of law in the EU and its Member States. European Parliament, LIBE Committee, Policy Department for Citizens’ Rights and Constitutional Affairs.
  12. Dobber, Tom, Ó Fathaigh, Ronan, Zuiderveen Borgesius, Frederik J. (2019). The regulation of online political micro-targeting in Europe. Internet Policy Review, 8(4).
  13. Pitruzzella, Giovanni and Pollicino, Oreste. (2020). Disinformation and Hate Speech. A European Constitutional Perspective. Bocconi University Press.
  14. Balkin, J. M. (2018). Free speech is a triangle. Colum. L. Rev., 118, 2011.
  15. Robinson, Olga et al. (2019). A Report of Anti-Disinformation Initiatives. Oxford Internet Institute. Available at: https://comprop.oii.ox.ac.uk/wp-content/uploads/sites/93/2019/08/A-Report-of-Anti-Disinformation-Initiatives.
  16. De Gregorio, Giovanni, Perotti, Elena. (2019). Tackling Disinformation around the World: A New Policy Report. World News Publishing Focus, World Association of News Publishers.
  17. Pollicino, O., De Gregorio, G., Somaini, L. (2020). Europe at the Crossroad: The Regulatory Conundrum to Face the Rise and Amplification of False Content on the Internet.
  18. Ireton, C., Posetti, J. (2018). Journalism, fake news & disinformation: handbook for journalism education and training. UNESCO Publishing. Available at: https://unesdoc.unesco.org/ark:/48223/pf0000265552.

By Giovanni de Gregorio

Giovanni De Gregorio is a postdoctoral researcher working with the Programme in Comparative Media Law and Policy at the Centre for Socio-Legal Studies at the University of Oxford. His research focuses on digital constitutionalism, platform governance and digital policy.
