Paddy Leerssen (16/12/2021). Amplification. In Belli, L.; Zingales, N. & Curzi, Y. (Eds.), Glossary of Platform Law and Policy Terms (online). FGV Direito Rio. https://platformglossary.info/amplification/.
Author: Paddy Leerssen
In the context of platform governance, ‘amplification’ refers to actions (typically by platforms) that magnify the visibility or reach of certain information. The phrase is most commonly used to refer to platform content recommendations and other algorithmic rankings, in cases where particular content is seen to be given an unfair or otherwise unwarranted ranking. Other instances of ‘amplification’ include the use of bots or astroturfing to disseminate content, and the use of platform advertising services to similar ends. The concept has gained traction due to growing attention to non-illegal forms of online harm, such as disinformation, where the speech as such is unlikely to be prohibited and removed, but its rapid spread is nonetheless seen as a source of concern and a target for regulation.
Amplification is not a legal term, but it is increasingly used in associated policy debates. Perhaps most notably, the European Commission’s Communication “Tackling Online Disinformation: A European Approach” discusses amplification at length (European Commission, 2018). It identifies three different kinds of amplification: (1) ‘algorithm-based’ amplification, which relates to recommender systems; (2) ‘advertising-driven’ amplification, which relates to platform advertising services; and (3) ‘technology-enabled’ amplification, which refers to the use of bots and fake accounts. In the Commission’s diagnosis of disinformation, the problem is thus not merely that disinformation is created and disseminated, but that it is amplified by various factors to reach a disproportionate audience.
The UN Special Rapporteur on Freedom of Expression, David Kaye, displays a similar understanding of the term in an open letter to Mark Zuckerberg regarding the Facebook Oversight Board, dated 1 May 2019 (Kaye, 2019a). He proposes that the Board should have access to information about “factors that may amplify the content at issue (e.g., recommendation algorithms, bot accounts, ad policies)”. In a note to the UN General Assembly, the Special Rapporteur also suggested that tools be developed to combat hate speech inter alia through ‘de-amplification’ (Kaye, 2019b).
The recent White House Executive Order on Preventing Online Censorship does not address amplification at the same length, but it does allege that online platforms have “amplified China’s propaganda” and offers this as a ground for further regulation, although it does not define amplification or further elaborate on the claim (The White House, 2020).
A key challenge in identifying ‘amplification’ in online platforms is that it implies a baseline of non-amplified treatment, which may not be available. For recommender systems, it is often alleged that hate speech or disinformation is amplified by algorithms that prioritize attention and engagement, but this raises the question of what an appropriate (i.e., non-amplified) ranking for this content would be instead. For instance, a hateful website may be considered ‘amplified’ if it is ranked as the first result on Google Search, but what if it is the second? The tenth? The 100th? Thus, although claims of ‘amplification’ can seem objective and analytical, they may conceal an ultimately subjective and political assessment about the appropriate configuration of recommender systems, and about media diversity in general.
A narrower conception of ‘amplification’ is possible, in which it refers only to direct positive discrimination of content, such as Facebook’s prioritization of trusted news sources and YouTube’s prioritization of authoritative coronavirus information. But this narrower conception does not correspond with current usage, as outlined above, which tends also to include attention-optimizing systems that benefit hate speech and disinformation indirectly. In this light, amplification should be understood as a broad term that can refer to a wide range of factors in the online media environment that facilitate the spread of certain content, whether as an intentional design feature or as an unintentional by-product.
- European Commission. (2018). Communication on Tackling Online Disinformation: A European Approach (COM(2018) 236 final).
- Kaye, David. (2019a). Mandate of the Special Rapporteur on the promotion and protection of the right to freedom of opinion and expression. Available at: https://www.ohchr.org/Documents/Issues/Opinion/Legislation/OL_OTH_01_05_19.pdf.
- Kaye, D. (2019b). Promotion and protection of the right to freedom of opinion and expression. Available at: https://www.ohchr.org/Documents/Issues/Opinion/A_74_486.pdf.
- The White House. (2020). US Executive Order on Preventing Online Censorship. Available at: https://www.whitehouse.gov/presidential-actions/executive-order-preventing-online-censorship/.