Rossana Ducato (17/12/2021). Automated Decision Making. In Belli, L.; Zingales, N. & Curzi, Y. (Eds.), Glossary of Platform Law and Policy Terms (online). FGV Direito Rio. https://platformglossary.info/automated-decision-making/.
Author: Rossana Ducato
Automated decision-making (ADM) generally refers to a process or a system where the human decision is supported by or handed over to an algorithm. ADM is increasingly used in several sectors of our society and by different actors (both private and public). For instance, ADM can be embedded in standalone software that produces a medical recommendation for a patient, an online behavioral advertising system that shows particular content to a specific target, a credit scoring system that determines whether one can get a loan, an algorithm that selects the most promising CVs for a position, a recognition filter that scans and bans user-generated content from a platform, an automated ticketing system that fines drivers exceeding speed limits, an algorithm assessing the risk of recidivism, smart contracts (Finck, 2019), etc.
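To make the notion concrete, consider a deliberately simplified sketch of a solely automated credit decision of the kind mentioned above. The scoring formula, weights, and figures are all hypothetical; the point is only to show how applicant data can be turned into a decision with no human involvement:

```python
# Hypothetical, deliberately simplified ADM: a credit-scoring rule that turns
# applicant data into an automated accept/reject decision. All weights and
# thresholds are invented for illustration only.

def credit_decision(income: float, debts: float, late_payments: int) -> str:
    score = 0.5 * (income / 1000) - 0.8 * (debts / 1000) - 15 * late_payments
    return "loan approved" if score >= 10 else "loan refused"

print(credit_decision(income=45000, debts=5000, late_payments=0))   # loan approved
print(credit_decision(income=20000, debts=15000, late_payments=3))  # loan refused
```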
Given the spread of ADM and its potential impact on individuals, the Council of Europe has issued Recommendation CM/Rec(2020)1 on the human rights impacts of algorithmic systems, promoting a lawful and human-centric design of ADM. Beyond that, ADM is generally not regulated as such, but its deployment can be captured by a broad spectrum of laws.
In Europe, for instance, when an ADM system processes personal data, the General Data Protection Regulation (GDPR) applies. The GDPR does not expressly define the concept of ADM, but it provides additional rules where a solely automated process produces legal effects concerning the data subject (i.e., the person to whom the data refer) or similarly affects them (Article 22 GDPR). In the literature, several doubts have been raised regarding the exact meaning of ‘decision’, ‘solely automated’, ‘legal effects’, and ‘similarly affect’ (Mendoza; Bygrave, 2017; Bygrave, 2020). The Article 29 Working Party (now the European Data Protection Board) has provided an interpretation of these concepts, suggesting that: 1) the ADM can be fed with any kind of data (whether provided directly by the individual, observed, or otherwise inferred); 2) a ‘solely’ automated decision means there is no human involvement at any stage of the processing; 3) ‘legal effects’ means that the decision must affect the legal rights and freedoms of individuals (e.g., a system that automatically refuses admission to a country); 4) ‘similarly affects’ is intended to cover other possible adverse effects which may seriously impact the behavior of individuals, potentially leading to discrimination, such as a system denying someone an employment opportunity (WP29, 2018). However, the provision does not seem to entail an evaluation of the negative impact on groups (Veale; Edwards, 2018). Still, several authors have argued in favor of expanding the data protection framework from the individual level to the collective one (Taylor; Floridi; van der Sloot, 2016; Mantelero, 2018; Brkan, 2019).
As a general rule, the GDPR prohibits this kind of processing unless 1) it is necessary for entering into, or performing, a contract between the data subject and the controller (i.e., the entity leading the processing); 2) it is authorized by law, which must lay down appropriate safeguards for the rights and legitimate interests of the data subject; or 3) the data subject explicitly consents to it. When exceptions 1 and 3 apply, the data controller can carry out the ADM, but it must implement suitable measures to protect individuals’ rights and freedoms (Article 22(3) GDPR). Among them, the GDPR lists three main rights that have to be guaranteed to individuals: 1) to obtain human intervention on the part of the controller; 2) to express their point of view; 3) to contest the decision. Member States have implemented such measures in different ways (Malgieri, 2019).
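The structure of the provision just described can be rendered schematically as follows. The sketch is merely a reading aid; the function, parameter names, and data structure are illustrative shorthand, not an official compliance test:

```python
# A schematic rendering of the Article 22 GDPR structure described above.
# Names and types are illustrative, not drawn from any official source.

def solely_automated_decision_allowed(necessary_for_contract: bool,
                                      authorised_by_law_with_safeguards: bool,
                                      explicit_consent: bool) -> bool:
    """Article 22(1)-(2): prohibition, unless one of three exceptions applies."""
    return (necessary_for_contract
            or authorised_by_law_with_safeguards
            or explicit_consent)

# Article 22(3): where the contract or consent exceptions apply, the controller
# must still guarantee, at a minimum, these three rights to the data subject.
REQUIRED_SAFEGUARDS = (
    "obtain human intervention",
    "express one's point of view",
    "contest the decision",
)
```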
Finally, if the ADM involves the processing of special categories of data (defined in Article 9 GDPR), such as health data or data revealing racial or ethnic origin or political opinions, the GDPR provides specific rules. In particular, the ADM cannot be performed unless the data subject has given explicit consent or the processing is necessary for reasons of substantial public interest. In both cases, the controller must adopt suitable measures to protect data subjects’ rights, freedoms, and legitimate interests.
Another important legal issue concerning ADM in the framework of the GDPR relates to the transparency of the system, i.e., the possibility of understanding the logic involved in the algorithm performing a decision according to Article 22. There has been a lively debate in the literature about the existence of a so-called right to explanation in the GDPR (Goodman; Flaxman, 2016; Malgieri; Comandé, 2017; contra Wachter; Mittelstadt; Floridi, 2017). Whether or not such a right can be found directly or indirectly in the black letter of the GDPR, there is a convergence toward the elaboration of solutions that can promote the transparency of ADM and “XAI”, i.e., explainable AI (Wachter; Mittelstadt; Russell, 2017; Edwards; Veale, 2017; Kaminski; Malgieri, 2020; Brkan; Bonnet, 2020). The High-Level Expert Group on Artificial Intelligence has stressed the importance of explainability among the requirements for trustworthy AI (High-Level Expert Group on AI, 2019).
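One influential proposal in this debate, the ‘counterfactual explanation’ of Wachter, Mittelstadt and Russell (2017), can be illustrated with a toy example: rather than disclosing the model itself, the system tells the person the smallest change to their situation that would have reversed the outcome. The decision rule and figures below are invented for illustration only:

```python
# Toy illustration of a counterfactual explanation: find the nearest input
# at which the (hidden) decision flips, without opening the black box.
# The rule, step size, and figures are invented for illustration.

def approved(income: float) -> bool:
    return income >= 30000  # hidden decision rule, opaque to the applicant

def counterfactual(income: float, step: float = 500, max_iter: int = 1000) -> str:
    """Search upward for the closest income at which the loan is approved."""
    candidate = income
    for _ in range(max_iter):
        if approved(candidate):
            return (f"Had your income been {candidate:.0f} instead of "
                    f"{income:.0f}, the loan would have been approved.")
        candidate += step
    return "No counterfactual found within the search range."

print(counterfactual(27000))
# -> Had your income been 30000 instead of 27000, the loan would have been approved.
```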
A similar, although not identical, provision on ADM is included in Article 11 of Directive (EU) 2016/680 (the Law Enforcement Directive). Being a Directive, it is not self-executing: Member States have to implement it in their national law. The Law Enforcement Directive explicitly forbids the use of ADM in criminal matters where the decision produces an adverse legal effect concerning the data subject or significantly affects them. Such a prohibition can be overcome only by Union or Member State law to which the controller is subject and which provides appropriate safeguards for the rights and freedoms of the data subject, at least the right to obtain human intervention on the part of the controller.
Data protection law is probably the most comprehensive framework tackling the phenomenon of ADM. However, ADM is also regulated by other branches of law. For instance, when an ADM system is likely to produce discriminatory results, the protection granted by anti-discrimination law kicks in. Both direct and indirect discrimination are prohibited by the European Convention on Human Rights and EU law. For example, an ADM leading to the exclusion of a member from an online platform would be deemed illegal if based on race or proxies for it, such as distinctively African-American names (direct discrimination, see Edelman; Luca; Svirsky, 2017). Similarly, it would be considered indirect discrimination if a supposedly neutral measure is likely to have a significantly more negative impact on a protected category than on others in a comparable situation. For example, linking the earnings of a platform’s drivers to the distance and time they travel appears to be a neutral decision. However, studies show that women drive at a lower average speed; they are therefore likely to complete fewer rides and, consequently, their pay is substantially lower than that of their male colleagues (Cook et al., 2018).
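The mechanism behind this indirect-discrimination example can be reduced to a few lines. In the sketch below, the pay rule is facially neutral (everyone is paid the same per-kilometre rate), yet a difference in average driving speed translates into a systematic pay gap; all figures are hypothetical:

```python
# Minimal simulation of the mechanism described by Cook et al. (2018): a
# facially neutral pay rule plus a gender difference in average driving
# speed yields unequal earnings. All numbers are invented for illustration.

HOURS_WORKED = 40    # identical labour supply for both drivers
RATE_PER_KM = 0.9    # neutral pay parameter, applied to everyone equally

def weekly_pay(avg_speed_kmh: float) -> float:
    # Distance covered (hours * speed) times the uniform per-km rate.
    return HOURS_WORKED * avg_speed_kmh * RATE_PER_KM

pay_faster_driver = weekly_pay(avg_speed_kmh=32)  # e.g. average male driver
pay_slower_driver = weekly_pay(avg_speed_kmh=30)  # e.g. average female driver
print(pay_faster_driver, pay_slower_driver)  # 1152.0 1080.0: same rule, unequal outcome
```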
Nevertheless, the anti-discrimination legal framework suffers from important limitations, since it covers only specific sectors and certain protected grounds. This situation is particularly critical for the digital discrimination brought about by ADM, which often transcends the traditional protected attributes (Zuiderveen Borgesius, 2018; Xenidis; Senden, 2020). In online behavioral advertising, for example, people might be discriminated against because inferential analytics draw correlations among apparently neutral data, thus exposing individuals to price discrimination or exclusion from lucrative job ads without them even being aware of how, and on the basis of which criteria, they have been profiled (Wachter, 2020). In the field of consumer protection, ADM has recently been taken into account in relation to the transparency of online marketplaces. The “Omnibus Directive”, amending the Consumer Rights Directive (Directive 2011/83/EU), established that when a price is personalized on the basis of an ADM, the consumer must be informed about it. However, the provision requires traders to disclose the ‘whether’ but not the ‘how’ of the ADM (Jabłonowska, 2019). It must be said, though, that if the system processes personal data and the price personalization falls within the scope of Article 22 GDPR, the explainability requirements and corresponding remedies (Article 22(3) GDPR) will apply. Such a transparency requirement, in any case, does not extend to ‘dynamic’ pricing, which depends on real-time market demand. By contrast, in the case of rankings, the Omnibus Directive requires traders to inform consumers about the main parameters, and their relative weighting, behind the “relative prominence given to products, as presented, organized or communicated by the trader”. The concept of ranking is constructed in a technologically neutral way; it might therefore consist of an ADM.
Another sector in which ADM is regulated is that of medical devices. When standalone software can be used for medical purposes, i.e., to provide information supporting diagnostic or therapeutic decisions or to monitor vital physiological parameters, it must comply with Regulation (EU) 2017/745, which establishes the steps for placing a medical device for human use on the market.
ADM is also addressed in the field of content recognition technologies. For instance, the new Copyright in the Digital Single Market Directive (Directive 2019/790) provides a new form of direct liability for online platforms (more specifically, online content-sharing service providers, such as YouTube) for their users’ uploads. To avoid this form of liability, platforms have two options. The golden road traced by the Directive is to negotiate a license with the rightsholders in order to make the content uploaded by users available. As an alternative, online platforms have to demonstrate, among other things, that they have proactively ensured the unavailability of the (infringing) content. This latter option has attracted criticism from copyright scholars and civil society representatives, as it is a provision that will lead to the establishment of upload filters and limit freedoms on the Internet (Cerf et al., 2018; Kretschmer et al., 2019; Reda, 2019). The Directive establishes some guarantees: users’ rights, such as quotation, criticism, and pastiche, shall be preserved (Article 17(7)); the proactive measures cannot lead to any general monitoring obligation (Article 17(8)); and the platform must provide an adequate complaint and redress mechanism allowing users to contest decisions about access denial and removal of content (Article 17(9)). However, several doubts remain as to the impact of these provisions on fundamental rights such as freedom of expression and data protection (Quintais et al., 2019; Quintais, 2020; Romero Moreno, 2020; Samuelson, 2020; Schmon, 2020).
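To see why upload filters sit uneasily with the guarantees of Article 17(7), consider the crudest possible content-recognition filter: matching the fingerprint (hash) of each upload against a database of protected works. The ‘works’ below are stand-in byte strings invented for illustration; the point is only that such a filter sees bytes, not context:

```python
import hashlib

# Reference database: SHA-256 fingerprints of protected works.
# The 'work' here is an invented stand-in byte string.
PROTECTED_WORKS = {hashlib.sha256(b"full text of a protected song").hexdigest()}

def upload_allowed(content: bytes) -> bool:
    """Block any upload whose fingerprint matches the reference database."""
    return hashlib.sha256(content).hexdigest() not in PROTECTED_WORKS

print(upload_allowed(b"full text of a protected song"))
# False: the verbatim copy is blocked, even if it was uploaded as part of a
# lawful quotation or criticism; the hash carries no context (Article 17(7)).

print(upload_allowed(b"FULL TEXT of a protected song"))
# True: a trivially altered copy slips through, which is why real systems use
# fuzzier fingerprinting and, in turn, risk more over-blocking, making the
# complaint and redress mechanism of Article 17(9) all the more important.
```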
References
- Finck, M. (2019). Smart Contracts as Automated Decision-Making under Article 22 GDPR. International Data Privacy Law, 9, 1-17.
- Mendoza, I., Bygrave, L. A. (2017). The right not to be subject to automated decisions based on profiling. In: EU Internet Law. Springer, Cham, 77-98.
- Bygrave, L. A. (2020). Article 22. In: Kuner, C., Bygrave, L. A., Docksey, C. (eds), The EU General Data Protection Regulation (GDPR): A Commentary. Oxford University Press.
- Article 29 Data Protection Working Party. (2018). Guidelines on Automated individual decision-making and Profiling for the purposes of Regulation 2016/679, WP251rev.01. Available at: https://ec.europa.eu/newsroom/article29/item-detail.cfm?item_id=612053.
- Veale, M., Edwards, L. (2018). Clarity, surprises, and further questions in the Article 29 Working Party draft guidance on automated decision-making and profiling. Computer Law & Security Review, 34(2), 398-404.
- Taylor, L., Floridi, L., van der Sloot, B. (eds). (2016). Group Privacy: New Challenges of Data Technologies. Springer.
- Mantelero, A. (2018). AI and Big Data: A blueprint for a human rights, social and ethical impact assessment. Computer Law & Security Review, 34(4). 754-772.
- Brkan, M. (2019). Do algorithms rule the world? Algorithmic decision-making and data protection in the framework of the GDPR and beyond. International Journal of Law and Information Technology, 27(2), 91-121.
- Malgieri, G. (2019). Automated decision-making in the EU Member States: The right to explanation and other “suitable safeguards” in the national legislations. Computer Law & Security Review, 35(5).
- Goodman, B., Flaxman, S. (2016). EU regulations on algorithmic decision-making and a ‘right to explanation’. Preprint.
- Malgieri, G., Comandé, G. (2017). Why a right to legibility of automated decision-making exists in the general data protection regulation. International Data Privacy Law.
- Wachter, S., Mittelstadt, B., Floridi, L. (2017). Why a right to explanation of automated decision-making does not exist in the general data protection regulation. International Data Privacy Law, 7(2), 76-99.
- Wachter, S., Mittelstadt, B., Russell, C. (2017). Counterfactual explanations without opening the black box: Automated decisions and the GDPR. Harv. JL & Tech., 31, 841.
- Edwards, L., Veale, M. (2017). Slave to the algorithm: Why a right to an explanation is probably not the remedy you are looking for. Duke L. & Tech. Rev., 16, 18.
- Kaminski, M. E., Malgieri, G. (2020). Algorithmic impact assessments under the GDPR: producing multi-layered explanations. International Data Privacy Law. 19-28.
- Brkan, M., Bonnet, G. (2020). Legal and technical feasibility of the GDPR’s quest for explanation of algorithmic decisions: of black boxes, white boxes and Fata Morganas. European Journal of Risk Regulation, 11(1), 18-50.
- High-Level Expert Group on AI. (2019). Policy and Investment Recommendations for Trustworthy AI. Available at: https://www.europarl.europa.eu/italy/resource/static/files/import/intelligenza_artificiale_30_aprile/ai-hleg_policy-and-investment-recommendations.pdf.
- Edelman, B., Luca, M., Svirsky, D. (2017). Racial discrimination in the sharing economy: Evidence from a field experiment. American Economic Journal: Applied Economics, 9(2), 1-22.
- Cook, C. et al. (2018). The gender earnings gap in the gig economy: Evidence from over a million rideshare drivers. National Bureau of Economic Research.
- Zuiderveen Borgesius, F. (2018). Discrimination, artificial intelligence, and algorithmic decision-making. Study for the Council of Europe. Available at: https://rm.coe.int/discrimination-artificial-intelligence-and-algorithmic-decision-making/1680925d73.
- Xenidis, R., Senden, L. (2020). EU non-discrimination law in the era of artificial intelligence: Mapping the challenges of algorithmic discrimination. In: Bernitz, U. et al. (eds), General Principles of EU Law and the EU Digital Order. Kluwer Law International, 151-182.
- Wachter, S. (2020). Affinity Profiling and Discrimination by Association in Online Behavioral Advertising. Berkeley Tech. LJ, 35, 367.
- Jabłonowska, A. (2019). Regulation of online platforms in the digital single market. Studia Prawnoustrojowe, (45), 63-79.
- Regulation (EU) 2017/745 of the European Parliament and of the Council of 5 April 2017 on medical devices, amending Directive 2001/83/EC, Regulation (EC) No 178/2002 and Regulation (EC) No 1223/2009 and repealing Council Directives 90/385/EEC and 93/42/EEC.
- Directive (EU) 2019/790 of the European Parliament and of the Council of 17 April 2019 on copyright and related rights in the Digital Single Market and amending Directives 96/9/EC and 2001/29/EC. https://eur-lex.europa.eu/eli/dir/2019/790/oj.
- Cerf, V. et al. (2018). Joint Letter to the European Parliament. Electronic Frontier Foundation. Available at: https://www.eff.org/files/2018/06/13/article13letter.pdf.
- Kretschmer, M. et al. (2019). The Copyright Directive: Articles 11 and 13 must go. Statement from European Academics in advance of the Plenary Vote on 26 March 2019.
- Reda, J. (2019). EU copyright reform: Our fight was not in vain. Available at: https://juliareda.eu/2019/04/not-in-vain.
- Quintais, J. et al. (2019). Safeguarding user freedoms in implementing Article 17 of the copyright in the Digital Single Market Directive: recommendations from European Academics. Available at: https://www.dekuzu.com/en/docs/European-Academics-article-17-DSMD-SSRN-id3484968.pdf.
- Quintais, J. (2020). The new copyright in the digital single market directive: A critical look. European Intellectual Property Review. Available at: https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3424770.
- Romero Moreno, F. (2020). Upload filters and human rights: implementing Article 17 of the Directive on Copyright in the Digital Single Market. International Review of Law, Computers & Technology, 34(2), 153-182.
- Samuelson, P. (2020). Pushing Back on Stricter Copyright ISP Liability Rules. Michigan Technology Law Review, Forthcoming.
- Schmon, C. (2020). Copyright Filters are on a Collision Course with EU Data Privacy Rules. Available at: https://www.eff.org/deeplinks/2020/02/upload-filters-are-odds-gdpr.