Glossary of Platform Law and Policy Terms

Moderation

Cite this article as:
Giovanni de Gregorio (17/12/2021). Moderation. In Belli, L.; Zingales, N. & Curzi, Y. (Eds.), Glossary of Platform Law and Policy Terms (online). FGV Direito Rio. https://platformglossary.info/moderation/.

Author: Giovanni de Gregorio

Content moderation can be described as the result of editorial decisions made by the actor that governs the space where information is published. Moderation has also been defined as “the governance mechanisms that structure participation in a community to facilitate cooperation and prevent abuse” (Grimmelmann, 2015).

Content moderation is not a novelty in the media sector. As content providers, traditional media outlets such as television broadcasters and newspapers have always selected the information to broadcast or publish. This activity has also extended to the digital environment. Since the earliest online fora, communities have moderated digital spaces to decide which content reflects the values or interests of the group, without commercial purposes. In recent years, the commercial side of content moderation has evolved with online platforms, particularly social media, which have built a bureaucracy to moderate content (Klonick, 2019). This activity has been defined (Flew; Martin; Suzor, 2019) as

the screening, evaluation, categorization, approval, or removal/hiding of online content according to relevant communications and publishing policies (…) to support and enforce positive communications behavior online, and to minimize aggression and anti-social behavior.

The content flowing through social media spaces is not free but subject to a wide range of practices applied by platforms to manage content posted by their users (Elkin-Koren; Perel, 2019a). In the case of Facebook alone, the number of posts moderated across different areas of the world runs into the billions each week.

While some practices intend to optimize the matching of content with the users who view it and would potentially engage with it, other practices intend to ensure that content complies with appropriate norms (Elkin-Koren; Perel, 2019b). Social media decide how to organize users’ news feeds or set their recommendation systems to target certain categories of users (i.e., soft moderation). Together with such activities, social media make editorial decisions which can also lead to the removal of online content in order to enforce community rules (i.e., hard moderation). Content moderation decisions can be entirely automated, made by humans, or a mix of the two (Gorwa; Binns; Katzenbach, 2020). While pre-moderation activities like prioritization, delisting and geo-blocking are usually automated, post-moderation is usually the result of both automated and human moderation. The massive amount of content to moderate explains why content moderation is usually performed by a mix of machines and human moderators that decide whether to maintain or delete the vast amount of content flowing every day on social media (Roberts, 2018).
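By way of illustration only, the following minimal sketch (in Python) shows how such a hybrid pipeline could be structured: an automated screening step resolves clear-cut cases and applies soft moderation, while borderline items are queued for human post-moderation. All names, thresholds and rules here are hypothetical and do not describe any platform’s actual system.

from dataclasses import dataclass
from enum import Enum


class Decision(Enum):
    ALLOW = "allow"                # content stays up untouched
    DOWNRANK = "downrank"          # soft moderation: reduced visibility
    HUMAN_REVIEW = "human_review"  # escalated to a human moderator
    REMOVE = "remove"              # hard moderation: content taken down


@dataclass
class Post:
    post_id: str
    text: str


def automated_score(post: Post) -> float:
    """Stand-in for automated screening (e.g., hash matching or machine-learning
    classifiers) returning a hypothetical policy-violation probability."""
    banned_terms = {"example-banned-term"}  # purely illustrative rule set
    return 0.95 if any(term in post.text.lower() for term in banned_terms) else 0.1


def moderate(post: Post) -> Decision:
    """Hybrid pipeline: automation resolves clear cases, humans review the rest."""
    score = automated_score(post)
    if score >= 0.9:   # near-certain violation: automated removal
        return Decision.REMOVE
    if score >= 0.6:   # uncertain case: queue for human post-moderation
        return Decision.HUMAN_REVIEW
    if score >= 0.3:   # low-level concern: soft moderation only
        return Decision.DOWNRANK
    return Decision.ALLOW


if __name__ == "__main__":
    print(moderate(Post("1", "A harmless holiday picture")))     # Decision.ALLOW
    print(moderate(Post("2", "text with example-banned-term")))  # Decision.REMOVE

In practice, platforms combine many more signals and route flagged items to large human review workforces; the sketch only makes concrete the division of labour between automated pre-moderation and human post-moderation described above.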

Within this framework, social media platforms facilitate the global exchange of user-generated content at a gigantic scale while governing the flow of information online (Kaye, 2019). However, these characteristics are just one part of the jigsaw explaining platforms’ ability, and their reasons, to establish at their discretion how to carry out content moderation. Content moderation is the constitutional activity of social media (Gillespie, 2018). The moderation of online content is an almost obligatory step for social media, not only to manage removal requests but also to prevent their digital spaces from turning into hostile environments for users due to the spread, for example, of incitement to hatred. Indeed, the interest of platforms lies not just in facilitating the spread of opinions and ideas across the globe but in establishing a digital environment where users feel free to share information and data that can feed commercial networks and channels and, especially, attract profits coming from advertising. In other words, the activity of content moderation is performed to attract revenues by ensuring a healthy online community, protecting the corporate image, and showing commitment to ethical values. Within this business framework, users’ data are the central product of online platforms under a logic of accumulation (Zuboff, 2019).

In this scenario, content moderation produces positive effects for freedom of expression and democratic values. The organization, filtering, and removal of content increases the possibilities for users to experience a safe digital environment without the interference of objectionable or harmful content. At the same time, content moderation negatively impacts the right to freedom of expression, since social media can select which information is maintained or deleted according to standards based on the interest in avoiding any monetary penalty or reputational damage. Such a situation is usually defined as collateral censorship (Balkin, 2008). Scholars have observed that online platforms try to avoid regulatory burdens by relying on the protection recognized by the First Amendment while, at the same time, claiming immunities as passive conduits for third-party content (Pasquale, 2016). As underlined, immunity allows Internet intermediaries “to have their free speech and everyone else’s too” (Tushnet, 2008). Moreover, an extensive activity of content moderation affects even the rights to privacy and data protection. Indeed, users could fear being subject to a regime of private surveillance over their information and data. It is worth observing that, in the latter case, even the right to free speech is involved due to users’ concern about being monitored through the information they publish.

More broadly, content moderation also challenges democratic values, such as the principle of the rule of law, since social media autonomously determine how freedom of expression online is protected on a global scale without any public safeguard (Suzor, 2019). The immunity granted by intermediary liability laws leads online platforms to freely choose which values they want to protect and promote, no matter whether democratic, anti-democratic or authoritarian. Since online platforms are private businesses, they naturally tend to focus on minimizing economic risks rather than ensuring a fair balance between fundamental rights when moderating content (De Gregorio, 2018). The international relevance of content moderation can also be grasped by looking at how failures of this activity have contributed to escalating violent conflict in countries like Myanmar and Sri Lanka, leading some states to decide to shut down social media, as is increasingly happening in African countries.

Addressing the challenges of content moderation without undermining its social relevance for the digital environment is one of the primary concerns from a policy perspective. In making decisions on online content, social media platforms apply a complex system of norms, driven by consumption, commercial interests, social norms, liability rules, and regulatory duties, where each set of norms may interact with the others (Belli; Zingales, 2017). Scholars have mostly proposed to preserve the system of immunity (Keller, 2018) or reinterpret its characteristics (Bridy, 2018), to build an administrative monitoring-and-compliance regime (Langvardt, 2017), or to introduce more safeguards in the process of moderation (De Gregorio, 2020; Bloch-Wehba, 2020). In other words, the focus would move from liability to responsibility. To achieve this purpose, transparency and accountability safeguards could help to understand how speech is governed behind the scenes without overwhelming platforms with disproportionate monitoring obligations.

References

  1. Grimmelmann, J. (2015). The virtues of moderation. Yale JL & Tech., 17, 42.
  2. Klonick, K. (2019). The Facebook Oversight Board: Creating an independent institution to adjudicate online free expression. Yale LJ, 129, 2418.
  3. Flew, T., Martin, F., Suzor, N. (2019). Internet Regulation as Media Policy: Rethinking the question of digital communication platform governance. Journal of Digital Media & Policy, 10(1), 33-50.
  4. Elkin-Koren, N., & Perel, M. (2019a). Algorithmic Governance by Online Intermediaries. In Brousseau, E., et al. (Eds.), Oxford Handbook of Institutions of International Economic Governance and Market Regulation. Oxford University Press.
  5. Elkin-Koren, N., & Perel, M. (2019b). Separation of functions for AI: Restraining speech regulation by online platforms. Lewis & Clark L. Rev., 24, 857. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3439261.
  6. Gorwa, R., Binns, R., Katzenbach, C. (2020). Algorithmic content moderation: Technical and political challenges in the automation of platform governance. Big Data & Society, 7(1).
  7. Roberts, S. T. (2018). Behind the Screen: Content Moderation in the Shadows of Social Media. Yale University Press.
  8. Gillespie, T. (2018). Custodians of the Internet: Platforms, Content Moderation, and the Hidden Decisions that Shape Social Media. Yale University Press.
  9. Zuboff, S. (2019). The Age of Surveillance Capitalism: The Fight for a Human Future at the New Frontier of Power. PublicAffairs.
  10. Balkin, J. M. (2008). The future of free expression in a digital age. Pepp. L. Rev., 36, 427.
  11. Pasquale, F. (2016). Platform neutrality: Enhancing freedom of expression in spheres of private power. Theoretical Inquiries in Law, 17(2), 487-513.
  12. Tushnet, R. (2008). Power without responsibility: Intermediaries and the First Amendment. Geo. Wash. L. Rev., 76, 986.
  13. Suzor, N. (2019). Lawless: The Secret Rules That Govern Our Digital Lives. Cambridge University Press.
  14. De Gregorio, G. (2018). From constitutional freedoms to the power of the platforms: protecting fundamental rights online in the algorithmic society. Eur. J. Legal Stud., 11, 65.
  15. Belli, L., & Zingales, N. (Eds.) (2017). Platform regulations: how platforms are regulated and how they regulate us. FGV Direito Rio. Available at: https://bibliotecadigital.fgv.br/dspace/handle/10438/19402.
  16. Keller, D. (2018). Internet Platforms: Observations on Speech, Danger, and Money. Hoover Institution. https://www.hoover.org/sites/default/files/research/docs/keller_webreadypdf_final.pdf.
  17. Bridy, A. (2018). Remediating Social Media: A Layer-Conscious Approach. BUJ Sci. & Tech. L., 24, 193.
  18. Langvardt, K. (2017). Regulating online content moderation. Geo. LJ, 106, 1353.
  19. De Gregorio, G. (2020). Democratising online content moderation: A constitutional framework. Computer Law & Security Review, 36.
  20. Bloch-Wehba, H. (2020). Automation in moderation. Cornell Int’l LJ, 53, 41. https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3521619.

By Giovanni de Gregorio

Giovanni De Gregorio is a postdoctoral researcher working with the Programme in Comparative Media Law and Policy at the Centre for Socio-Legal Studies at the University of Oxford. His research focuses on digital constitutionalism, platform governance and digital policy.
