Glossary of Platform Law and Policy Terms

Proactive Measures

Cite this article as:
Daphne Keller and Nicolo Zingales (17/12/2021). Proactive Measures. In Belli, L.; Zingales, N. & Curzi, Y. (Eds.), Glossary of Platform Law and Policy Terms (online). FGV Direito Rio. https://platformglossary.info/proactive-measures/.

Authors: Daphne Keller and Nicolo Zingales

This entry provides an overview of the concept of proactive measures, where ‘measures’ is a term of art encompassing a range of steps that can be taken as a form of governance or regulation, usually in relation to specific kinds of content or conduct. ‘Proactive’ is a term frequently used to qualify the nature of such measures taken by platforms or other intermediaries with regard to third-party content. The two most common meanings are: (1) as an operational matter, acting on the platform’s own initiative, rather than in response to a notice or other external source of information; (2) as a legal matter, acting voluntarily and without legal compulsion.

Naturally, there is some overlap between (1) and (2), as the external source of information under (1) may be a judicial order or another form of notification that triggers a legal obligation for the platform to take the measures in question. In addition, legal obligations may arise independently of any specific notification, as platforms might be subject to a duty of care to prevent the dissemination of certain content in the first place: an example is the recently proposed Eliminating Abusive and Rampant Neglect of Interactive Technologies Act (EARN IT Act) of 2020, which would create an exception to the immunity of platforms under Section 230 of the Communications Decency Act by allowing civil and state criminal suits against companies that do not adhere to certain recommended “best practices” with regard to Child Sexual Abuse Material (CSAM). The EU legislation on this matter is the Audiovisual Media Services Directive (2018/1808), which, among other things, requires Member States in its Article 28b to ensure that video-sharing platform providers under their jurisdiction take appropriate measures to protect:

(a) minors from programmes, user-generated videos and audiovisual commercial communications which may impair their physical, mental or moral development in accordance with Article 6a(1);

(b) the general public from programmes, user-generated videos and audiovisual commercial communications containing incitement to violence or hatred directed against a group of persons or a member of a group based on any of the grounds referred to in Article 21 of the Charter of Fundamental Rights of the European Union, or containing content the dissemination of which constitutes a criminal offence in the EU (namely ‘child pornography’ or xenophobia);

(ba) the general public from programmes, user-generated videos and audiovisual commercial communications containing content the dissemination of which constitutes an activity which is a criminal offence under Union law, namely public provocation to commit a terrorist offence within the meaning of Article 5 of Dir. (EU) 2017/541, offences concerning child pornography within the meaning of Article 5(4) of Dir. 2011/93/EU and offences concerning racism and xenophobia.

Similar language can be found in the proposal for a Terrorism Regulation, which establishes duties of care and proactive measures for Hosting Service Providers (HSPs) to remove, where appropriate, terrorist material from their services, including by deploying automated detection tools, acting in a “diligent, proportionate and non-discriminatory manner, and with due regard for due process”; and in the Christchurch Call, made by several governments and online service providers to address terrorist and other violent extremist content online, which includes a commitment by providers to adopt:

specific measures seeking to prevent the upload of terrorist and violent extremist content and to prevent its dissemination on social media and similar content-sharing services, including its immediate and permanent removal, without prejudice to law enforcement and user appeals requirements, in a manner consistent with human rights and fundamental freedoms.

Platforms’ general concern with the adoption of proactive or “voluntary” measures is that such measures may establish knowledge of illegal content, which in turn triggers an obligation to remove or disable access to that content; failing to do so, the platforms might lose the benefit of the safe harbor. For this reason, scholars have argued for the introduction of a general ‘Good Samaritan’ provision, modeled upon Section 230 of the US Communications Decency Act, which would preserve the application of the safe harbor as long as the measures are taken in “good faith” against certain types of objectionable content (Kuczerawy, 2018¹; Barata, 2020²).

References

  1. Kuczerawy, Aleksandra. (2018). The EU Commission on voluntary monitoring: Good Samaritan 2.0 or Good Samaritan 0.5? KU Leuven. Available at: https://www.law.kuleuven.be/citip/blog/the-eu-commission-on-voluntary-monitoring-good-samaritan-2-0-or-good-samaritan-0-5/.
  2. Barata Mir, J. (2020). Positive Intent Protections: Incorporating a Good Samaritan principle in the EU Digital Services Act. Center for Democracy and Technology.
