Glossary of Platform Law and Policy Terms

Deplatforming/De-platforming

Cite this article as:
Courtney Radsch (17/12/2021). Deplatforming/De-platforming. In Belli, L.; Zingales, N. & Curzi, Y. (Eds.), Glossary of Platform Law and Policy Terms (online). FGV Direito Rio. https://platformglossary.info/deplatforming-de-platforming/.

Author: Courtney Radsch

‘Deplatforming’, or ‘de-platforming’, refers to the ejection of a user from a specific technology platform by closing their accounts, banning them, or blocking them from using the platform or its services. Deplatforming may be permanent or temporary: temporary suspensions and the loss of access to one’s account can also be considered deplatforming. Deplatforming is an extreme form of content moderation and a form of punishment for violations of acceptable behavior as determined by a platform’s terms of service or community guidelines. Platforms justify the removal or banning of a user and/or their content on the basis of violations of their terms of service, thereby denying the user access to the community or service they offer. Deplatforming can and does occur across a range of platforms, including:

  • Social media companies, like Facebook, YouTube or Twitter;
  • Commerce platforms, such as Amazon or the Apple Store;
  • Payment platforms, like PayPal or Visa;
  • Service platforms, like Spotify or Stitcher;
  • Internet infrastructure services, like Cloudflare or web hosting providers.

Deplatforming can be a form of content moderation by tech platforms that find certain content objectionable or that face public pressure to restrict a user’s access, often as a result of the content of that person’s speech or ideas, or in response to harassing behavior. It has also been deployed to reduce online harassment, hate speech, and coordinated inauthentic behavior, such as propaganda campaigns. Deplatforming can also occur because of pressure from other platforms, at the same or different levels of the stack.

Because deplatforming spans this range of platforms, and is often implemented as a form of content moderation, its use by platforms that do not host content, or that sit further down the internet stack, raises concerns about the expansion of content-based censorship beyond content-hosting services. The review of accounts and content can be automated, the result of human review, or a combination of both.

The term is explicitly political because it often refers to banning a user from a platform because of the content of their speech and ideas. Deplatforming has been used as a response to hate speech, terrorist content, and disinformation/propaganda. For example, the major social media firms have removed hundreds of ISIS accounts since 2015 in an effort to reduce the UN-designated terrorist group’s reach online; this forced the group onto less public and more closed platforms, reducing its visibility and public outreach but also making it more difficult to monitor its activities. In 2018, Facebook and Instagram deplatformed the Myanmar military (Facebook, 2018)1, 2 after it was involved in the genocide of the Rohingya, closing hundreds of pages and accounts related to the military and banning several affiliated users and organizations from its services.

Deplatforming by dominant platforms has pushed extremists to less popular or less public platforms that offer an alternative set of rules or have not yet grappled with what their rules should be. It has also fueled the growth of alternative platforms, such as the social media site Gab, the crowdfunding site Patreon, and the messaging service Telegram. Several major platforms banned Alex Jones and Infowars in mid-2018 in response to their support for white supremacy and involvement in disinformation campaigns, which helped politicize and publicize the concept of deplatforming.

Deplatforming can reduce a speaker’s ability to inject a message into public discourse and recruit followers, but it can also push supporters to obscure and opaque platforms where it is substantially more difficult for law enforcement to monitor their activities. One rationale for deplatforming controversial people or organizations is to prevent them from negatively influencing others. Critics argue that this is an ineffective tactic because the affected person will simply move to another platform, but a Georgia Tech study (Chandrasekharan et al., 2017)3 found that deplatforming was an effective moderation strategy that reduced the unwanted speech or behavior and created a demonstration effect that helped enforce norms among other users. In response to takedowns on major platforms, extremists often migrate to lesser-known or protected online forums. Research has also shown that people who are deplatformed often fail to transfer their audiences from major to minor platforms. Researchers refer to the “online extremists’ dilemma” (Clifford & Powell, 2019)4, which describes how online extremists must balance public outreach against operational security in choosing which digital tools to use.

References

  1. Facebook. (2018). Removing Myanmar Military Officials from Facebook. Available at: https://about.fb.com/news/2018/08/removing-myanmar-officials/.
  2. Facebook. (2018). Update on Myanmar. Available at: https://about.fb.com/news/2018/08/update-on-myanmar/.
  3. Chandrasekharan, E., et al. (2017). You can’t stay here: The efficacy of Reddit’s 2015 ban examined through hate speech. Proceedings of the ACM on Human-Computer Interaction, 1–2.
  4. Clifford, B., & Powell, H. C. (2019). De-platforming and the online extremists’ dilemma. Lawfare Blog. Available at: https://www.lawfareblog.com/de-platforming-and-online-extremists-dilemma.

By Courtney Radsch

Courtney Radsch is an American journalist. She holds a Ph.D. in international relations and is the author of Cyberactivism and Citizen Journalism in Egypt: Digital Dissidence and Political Change. She served as advocacy director of the Committee to Protect Journalists until 2021.
