Glossary of Platform Law and Policy Terms

Terrorist Content/Cyberterrorism

Cite this article as:
Ivar Hartmann (17/12/2021). Terrorist Content/Cyberterrorism. In Belli, L.; Zingales, N. & Curzi, Y. (Eds.), Glossary of Platform Law and Policy Terms (online). FGV Direito Rio.

Author: Ivar Hartmann

The concept of ‘cyberterrorism’ initially referred to virtual attacks on infrastructure aimed at disrupting a nation or region in the same way that traditional terrorist attacks do, with the same visibility and the same clear intent of spreading fear. ‘Cyberterrorism’ now also includes the use of social media by terrorist organizations, a phenomenon that researchers documented more than a decade ago (Weimann, 2010). ‘Terrorist content’ on social media is content produced and disseminated by terrorist groups or organizations with four main goals: recruitment, training, action and public terror (Goodman, 2018). The last goal is the one in which terrorist content features most prominently, through the public dissemination of a terrorist narrative that glamorizes terrorist groups and uses social media as a magnet for attention (Awan, 2017).

Much as virtual terrorist attacks need to be disentangled from politically motivated computer crime or hacktivism (Anderson, 2008), terrorist content needs to be differentiated from online activism. ‘Cyberactivism’ bears some procedural similarities to the dissemination of terrorist content on social media, in the sense that both involve groups that instrumentalize online tools for political purposes with wide visibility: ‘cyberactivism’ is composed of a triggering event, a media response, viral organization and a physical response (Sandoval-Almazan & Gil-Garcia, 2014). From a substantive point of view, however, a distinctive element of cyberterrorism and terrorist content is that they are “typically driven by an ideology with the goal of causing shock, alarm and panic” (Veerasamy, 2020).

Relevant characteristics of terrorist content on online platforms that help to identify and profile it are its specific communication flow, its influence on users, its use of propaganda and its radicalization language (Fernandez & Alani, 2021). Social media companies employ distinct manual and automated content moderation practices to curb terrorist content (Conway et al., 2018) and therefore usually attempt to define it in their terms of service, an exercise that is useful in developing a concept of terrorist content.

More recently, the lines between hate speech content and terrorist content have become increasingly blurred, as many of their traits bear similarities and can be described as belonging to digital hate culture, which includes not only terrorist groups formally recognized as such, but also alt-right, white supremacist and fascist groups, all belonging to “the complex swarm of users that form contingent alliances to contest contemporary political culture and inject their ideology into new spaces (…) united by ‘a shared politics of negation: against liberalism, egalitarianism, “political correctness”, and the like,’ more so than any consistent ideology” (Ganesh, 2018).


  1. Weimann, G. (2010). Terror on Facebook, Twitter, and YouTube. The Brown Journal of World Affairs, 16(2), 45-54.
  2. Goodman, A. E. J. (2018). When you give a terrorist a Twitter: Holding social media companies liable for their support of terrorism. Pepperdine Law Review, 46, 147.
  3. Awan, I. (2017). Cyber-extremism: Isis and the power of social media. Society, 54.
  4. Anderson, K. (2008). Hacktivism and politically motivated computer crime. Encurve LLC.
  5. Sandoval-Almazan, R., & Gil-Garcia, J. R. (2014). Towards cyberactivism 2.0? Understanding the use of social media and other information technologies for political activism and social movements. Government Information Quarterly, 31(3), 365-378.
  6. Veerasamy, N. (2020). Cyberterrorism – the spectre that is the convergence of the physical and virtual worlds. In Emerging Cyber Threats and Cognitive Vulnerabilities. Academic Press, 27-52.
  7. Fernandez, M., & Alani, H. (2021). Artificial intelligence and online extremism: Challenges and opportunities. In McDaniel, J., & Pease, K. (Eds.), Predictive Policing and Artificial Intelligence. Routledge.
  8. Conway, M., Khawaja, M., Lakhani, S., Reffin, J., Robertson, A., & Weir, D. (2018). Disrupting Daesh: Measuring takedown of online terrorist material and its impacts. Studies in Conflict & Terrorism, 42(1-2), 141-160.
  9. Ganesh, B. (2018). The ungovernability of digital hate culture. Journal of International Affairs, 71(2), 30-49.

By Ivar Hartmann

Ivar Hartmann is an Associate Professor at Insper Learning Institution in São Paulo, Brazil. His research and teaching areas comprise cyberlaw, legal data science and constitutional law. He was previously an Assistant (2013-2018) and Associate (2018-2020) Professor at FGV Law School in Rio de Janeiro, where he coordinated the Center for Technology and Society (CTS-FGV) and the Legal Data Science Nucleus. Ivar holds an M.Sc. from the Catholic University of Rio Grande do Sul, Brazil, an LL.M. from Harvard Law School, and an S.J.D. from Rio de Janeiro State University.
