Glossary of Platform Law and Policy Terms

Human Review

Cite this article as:
Cynthia Khoo (17/12/2021). Human Review. In Belli, L.; Zingales, N. & Curzi, Y. (Eds.), Glossary of Platform Law and Policy Terms (online). FGV Direito Rio.

Author: Cynthia Khoo

Human review is a term specific to the field of content moderation: how digital platforms manage, curate, regulate, police, or otherwise make and enforce decisions about what content is permitted to remain on the platform, with what degree of reach or prominence, and what content must be removed, taken down, filtered, banned, blocked, or otherwise suppressed. Human review refers to the stage of the content moderation process at which content that users have flagged (see ‘flagging’ and ‘coordinated flagging’) as offensive is reviewed by human eyes, rather than assessed by an algorithmic detection or takedown tool or another form of automated content moderation. These human reviewers, and their labour, are a crucial component of commercial content moderation systems (Roberts, 2016)1. They may be the platform company’s in-house staff, which is more common at smaller platform companies; the platform’s own users who have voluntarily taken on a moderator role, as with Reddit’s subreddit moderators and Wikipedia’s editors; or low-paid, external third-party contractors, numbering in the thousands or tens of thousands and often working under poor conditions in content moderation “factories”, as relied on by larger platforms such as Facebook and Google’s YouTube (Caplan, 2018)2. These three types of human reviewer correspond to Robyn Caplan’s categorization of content moderation models: the artisanal, community-reliant, and industrial approaches, respectively (Caplan, 2018)3.

Human review is also used to check lower-level moderators’ decisions and to verify that automated or algorithmic content moderation tools are making correct decisions (Keller, 2018)4, such as to “correct for the limitations of filtering technology” (Keller, 2018)5. As Daphne Keller points out, one danger of combining human review with algorithmic filters, though also a reason to ensure the continued involvement of human review in content moderation, is that “once human errors feed into a filter’s algorithm, they will be amplified, turning a one-time mistake into an every-time mistake and making it literally impossible for users to share certain images or words” (Keller, 2018)6. The converse risk is the continued, systematized approval and circulation of erroneously under-moderated content, such as technology-facilitated gender-based violence, abuse, and harassment, including that aimed at other and intersecting historically marginalized groups.


  1. Roberts, S. T. (2016). Commercial Content Moderation: Digital Laborers’ Dirty Work. In Noble, S. U. and Tynes, B. (Eds.), The intersectional internet: Race, sex, class and culture online (pp. 147-159). New York: Peter Lang.
  2. Caplan, R. (2018). Content or Context Moderation? Artisanal, Community-Reliant, and Industrial Approaches. Data & Society.
  3. Ibid.
  4. Keller, D. (2018). Internet Platforms: Observations on Speech, Danger, and Money. Hoover Institution.
  5. Ibid.
  6. Ibid.


Cynthia Khoo is an Associate at the Center on Privacy and Technology at Georgetown Law, where she leads on worker surveillance and the civil rights implications of commercial data practices, including algorithmic discrimination. She is a Canadian technology and human rights lawyer who joined the Center after accumulating years of experience in technology law, policy, research, and advocacy with various digital rights NGOs and through her sole practice law firm. Cynthia is also a fellow at the Citizen Lab (University of Toronto). She holds a J.D. from the University of Victoria and an LL.M. from the University of Ottawa.
