Cynthia Khoo (17/12/2021). Human Review. In Belli, L.; Zingales, N. & Curzi, Y. (Eds.), Glossary of Platform Law and Policy Terms (online). FGV Direito Rio. https://platformglossary.info/human-review/.
Author: Cynthia Khoo
Human review is a term specific to the field of content moderation, or how digital platforms manage, curate, regulate, police, or otherwise make and enforce decisions regarding what kind of content is permitted to remain on the platform, and with what degree of reach or prominence, and what content must be removed, taken down, filtered, banned, blocked, or otherwise suppressed. Human review refers to the part of the content moderation process at which content that users have flagged (see ‘flagging’ and ‘coordinated flagging’) as offensive is reviewed by human eyes, as opposed to assessed by an algorithmic detection or takedown tool, or other forms of automated content moderation. These human reviewers, or their labour, are a crucial component of commercial content moderation systems (Roberts, 2016)1. They may be the platform company’s in-house staff, more frequently the case at smaller platform companies; the platform’s own users who have voluntarily stepped into a moderator role, such as is the case with Reddit’s subreddit moderators and Wikipedia’s editors; or low-paid third-party, external contractors that number up to the thousands or tens of thousands, operating under poor working conditions in content moderation “factories”, as relied on by larger platforms such as Facebook and Google’s YouTube (Caplan, 2018)2. These three roles of human reviewers correspond to Robyn Caplan’s categorization of content moderation models, namely, the artisanal, community-reliant, and industrial approaches, respectively (Caplan, 2018)3.
Human review is also used to check lower-level moderators’ decisions and to verify that automated or algorithmic content moderation tools are making the correct decisions (Keller, 2018), such as to “correct for the limitations of filtering technology” (Keller, 2018). As Daphne Keller points out, one danger of combining human review with algorithmic filters, though also a reason to ensure the continued involvement of human review in content moderation, is that “once human errors feed into a filter’s algorithm, they will be amplified, turning a one-time mistake into an every-time mistake and making it literally impossible for users to share certain images or words” (Keller, 2018). Conversely, the same feedback loop can ensure the continued, systematized approval and circulation of erroneously under-moderated content, such as technology-facilitated gender-based violence, abuse, and harassment, as well as abuse aimed at other and intersecting historically marginalized groups.
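The feedback loop Keller describes can be made concrete with a minimal, hypothetical sketch: assume a filter that adds a fingerprint of any content a human reviewer removes to a blocklist, so that a single erroneous removal is then repeated automatically on every future upload. The `ContentFilter` class and its methods below are invented for illustration only and do not correspond to any real filtering system.

```python
# Toy, hypothetical illustration of the feedback loop described above:
# once a human reviewer's erroneous removal is fed into a filter's
# blocklist, the same content is blocked automatically every time.

import hashlib


class ContentFilter:
    def __init__(self) -> None:
        # Fingerprints of content a human reviewer decided to remove.
        self.blocked_hashes = set()

    @staticmethod
    def _fingerprint(content: bytes) -> str:
        # Simplified stand-in for a perceptual hash or fingerprint.
        return hashlib.sha256(content).hexdigest()

    def learn_from_human_decision(self, content: bytes, removed: bool) -> None:
        """Feed a human reviewer's decision back into the filter. If the
        decision was a mistake, the mistake now repeats automatically."""
        if removed:
            self.blocked_hashes.add(self._fingerprint(content))

    def allows(self, content: bytes) -> bool:
        return self._fingerprint(content) not in self.blocked_hashes


if __name__ == "__main__":
    filter_ = ContentFilter()
    image = b"bytes of a lawful image that was mistakenly removed once"

    # A single erroneous human removal...
    filter_.learn_from_human_decision(image, removed=True)

    # ...now blocks every future upload of the same content.
    print(filter_.allows(image))  # False
```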
References
- Roberts, S. T. (2016). Commercial Content Moderation: Digital Laborers’ Dirty Work. In Noble, S. U. & Tynes, B. (Eds.), The intersectional internet: Race, sex, class and culture online (pp. 147-159). New York: Peter Lang. https://ir.lib.uwo.ca/cgi/viewcontent.cgi?article=1012&context=commpub
- Caplan, R. (2018). Content or Context Moderation? Artisanal, Community-Reliant, and Industrial Approaches. Data & Society. https://datasociety.net/wp-content/uploads/2018/11/DS_Content_or_Context_Moderation.pdf.
- Keller, D. (2018). Internet Platforms: Observations on Speech, Danger, and Money. Hoover Institution. https://www.hoover.org/sites/default/files/research/docs/keller_webreadypdf_final.pdf.