Glossary of Platform Law and Policy Terms

Recommender Systems

Cite this article as:
Rossana Ducato (17/12/2021). Recommender Systems. In Belli, L.; Zingales, N. & Curzi, Y. (Eds.), Glossary of Platform Law and Policy Terms (online). FGV Direito Rio. https://platformglossary.info/reommender-systems/.

Author: Rossana Ducato

‘Recommender systems’ are algorithms aimed at supporting users in their online decision making. More specifically, in the computer science literature, a recommender system is defined as: 

a specific type of advice-giving or decision support system that guides users in a personalized way to interesting or useful objects in a large space of possible options or that produces such objects as output (Felfernig et al., 2018)1.

Examples of such systems are Amazon’s product recommendation tool, the Netflix algorithm that suggests movies, and the Facebook feature that suggests ‘friends’ we might know.

A key element of recommender systems is that their suggestions are personalized, i.e., based on users’ preferences. Such information can be obtained directly from users (e.g., by explicitly asking for their preferences) or can be inferred from the observation of their behavior (Jannach et al., 2010)2. Most recommender systems rely on machine learning techniques, including deep neural networks (Goanta; Spanakis, 2020)3.

From a technical point of view, four main models of recommender systems have been identified (Aggarwal, 2016, p. 1)4: 1) collaborative filtering systems; 2) content-based recommender systems; 3) knowledge-based recommender systems; 4) hybrid systems.

Collaborative filtering systems perform the recommendation process based on the user-item interactions (e.g., ratings) provided by multiple users. Let us assume A and B have similar tastes and that the algorithm has recorded such a similarity. If A rates the movie Titanic highly, the recommender system infers that B’s rating for Titanic is likely to be similar. Hence, the algorithm recommends Titanic to B.
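
As a rough illustration of the collaborative filtering logic described above, the following Python sketch predicts a user’s rating for an item as a similarity-weighted average of other users’ ratings. The rating matrix and similarity measure are invented for the example; production systems rely on far more sophisticated models.

    # Toy user-based collaborative filtering (all data invented for illustration).
    import math

    ratings = {
        "A": {"Titanic": 5, "Se7en": 2, "Up": 4},
        "B": {"Se7en": 2, "Up": 5},
        "C": {"Titanic": 1, "Se7en": 5, "Up": 1},
    }

    def cosine_similarity(u, v):
        """Similarity between two users, computed over the items both have rated."""
        common = set(u) & set(v)
        if not common:
            return 0.0
        dot = sum(u[i] * v[i] for i in common)
        norm_u = math.sqrt(sum(u[i] ** 2 for i in common))
        norm_v = math.sqrt(sum(v[i] ** 2 for i in common))
        return dot / (norm_u * norm_v)

    def predict(user, item):
        """Predict a rating as the similarity-weighted average of other users' ratings."""
        num = den = 0.0
        for other, their_ratings in ratings.items():
            if other == user or item not in their_ratings:
                continue
            sim = cosine_similarity(ratings[user], their_ratings)
            num += sim * their_ratings[item]
            den += abs(sim)
        return num / den if den else None

    # B has not rated Titanic; the ratings of users similar to B drive the prediction.
    print(predict("B", "Titanic"))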

Content-based recommender systems construct a predictive model from the attributes (descriptive features) of users or items. Continuing with the movie example: A rated Titanic highly. Titanic is described by keywords like “drama” and “love affair”. Therefore, movies classified in the same way (Romeo+Juliet or Pearl Harbour) would be recommended to A.
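
A minimal content-based sketch along the same lines, with invented keyword data: items are described by keyword sets, and items sharing keywords with what the user liked are ranked by their overlap (Jaccard similarity). Real systems typically use much richer feature representations.

    # Toy content-based recommendation (keyword data invented for illustration).
    movies = {
        "Titanic": {"drama", "love affair"},
        "Romeo+Juliet": {"drama", "love affair"},
        "Pearl Harbour": {"drama", "love affair", "war"},
        "Alien": {"sci-fi", "horror"},
    }

    def jaccard(a, b):
        """Overlap between two keyword sets."""
        return len(a & b) / len(a | b)

    def recommend(liked_movie, top_n=2):
        """Rank the other movies by keyword similarity to the movie the user liked."""
        profile = movies[liked_movie]
        scores = {
            title: jaccard(profile, keywords)
            for title, keywords in movies.items()
            if title != liked_movie
        }
        return sorted(scores.items(), key=lambda kv: kv[1], reverse=True)[:top_n]

    # A rated Titanic highly, so similarly described movies come first.
    print(recommend("Titanic"))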

Knowledge-based recommender systems formulate recommendations based on the constraints specified by users, the item attributes, and the domain knowledge. Such systems are common where items are not purchased very often (so it is not efficient to rely on user-item interactions). Examples include tools for searching real estate, cars, tourist accommodation, etc.
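
A constraint-based variant of knowledge-based recommending can be sketched as follows: the user’s explicit requirements are matched against item attributes. The attribute names, data, and constraints are invented for illustration.

    # Toy constraint-based (knowledge-based) recommendation (invented data).
    flats = [
        {"city": "Brussels", "price": 900, "bedrooms": 2},
        {"city": "Brussels", "price": 1400, "bedrooms": 3},
        {"city": "Ghent", "price": 800, "bedrooms": 2},
    ]

    def recommend(items, constraints):
        """Return the items that satisfy every constraint specified by the user."""
        return [item for item in items if all(check(item) for check in constraints)]

    # The user states explicit requirements instead of relying on past interactions.
    constraints = [
        lambda flat: flat["city"] == "Brussels",
        lambda flat: flat["price"] <= 1000,
        lambda flat: flat["bedrooms"] >= 2,
    ]
    print(recommend(flats, constraints))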

Finally, hybrid systems combine two or more of the previous approaches.
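
One simple hybrid strategy is a weighted combination of the scores produced by two different recommenders, as in the sketch below. The scores and weights are invented and assumed to be already normalized to a comparable scale.

    # Toy weighted hybrid: blend the (normalized) scores of two recommenders.
    collaborative_scores = {"Titanic": 0.72, "Pearl Harbour": 0.41}  # invented outputs
    content_scores = {"Titanic": 0.90, "Pearl Harbour": 0.67}        # invented outputs

    def hybrid(w_collaborative=0.7, w_content=0.3):
        """Combine both score lists with fixed weights and rank the result."""
        combined = {
            item: w_collaborative * collaborative_scores[item]
                  + w_content * content_scores[item]
            for item in collaborative_scores
        }
        return sorted(combined.items(), key=lambda kv: kv[1], reverse=True)

    print(hybrid())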

Another classification proposed in the literature distinguishes three types of recommender systems, based on the role played by the platform in the sourcing of the recommended content (Cobbe; Singh, 2019)5. In so-called “open recommending” systems, such as YouTube, the platform does not perform editorial control and the recommendation is elaborated from user-generated content. By contrast, “curated recommending” denotes a system where the platform selects, curates or approves the content. Finally, in “closed recommending” systems the platform itself creates the content to be recommended. Such a classification can be relevant when intermediary liability is at stake. While for “curated” and “closed” recommending systems the safe harbour immunity regime will be out of the picture, questions remain for “open” recommenders. A case is currently pending before the Court of Justice of the EU to ascertain whether YouTube plays an active role by recommending videos and performing other ancillary activities (Case C-500/19).

The organization of the recommender system and its intelligibility can also give rise to direct liability of the platform vis-à-vis the content creator, such as a social media influencer. Goanta and Spanakis (2020)6 argue that the rules against unfair commercial practices and competition law, both in Europe (the Unfair Commercial Practices Directive) and in the US (the Federal Trade Commission Act), can offer a first line of defense against the opacity of algorithmic decision-making and the discretionary power exercised by platforms to the detriment of content creators. Such a framework, however, does not provide fully-fledged protection to the emerging actors involved in social media transactions and needs to be strengthened (Goanta; Spanakis, 2020)7.

Recent legislative initiatives in Europe

Ranking

Knowledge-based recommender systems essentially work in response to a search query launched by a user. The output of such a model is likely to overlap with the legal definition of ranking. In Europe, the latter is defined as: “the relative prominence of the offers of traders or the relevance given to search results as presented, organized or communicated by providers of online search functionality, including resulting from the use of algorithmic sequencing, rating or review mechanisms, visual highlights, or other saliency tools, or combinations thereof” (recital 19, Directive (EU) 2019/2161; see also Art. 3(1)(b), Directive (EU) 2019/2161 and Art. 2(8), Regulation (EU) 2019/1150). To increase transparency in online marketplaces, newly introduced provisions in the B2C and the P2B context impose an obligation to provide clear information about the main parameters, and their relative weighting, used to rank products, and to disclose any paid advertising or payment specifically made to achieve a higher ranking within the search results. It remains to be seen how these transparency requirements will develop, considering not only the complexity and dynamism that ranking algorithms may reach through machine learning, but also the possible limitations imposed by trade secrets (Twigg-Flesner, 2018)8. The Commission is currently working on transparency guidelines (European Commission, 2020)9 to facilitate platforms’ compliance.
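
Purely as an illustration of what “main parameters and parameter weighting” can mean in practice, a ranking score is often a weighted combination of signals such as relevance, price, and seller rating. The parameter names, weights, and data below are invented; disclosing the main parameters would mean explaining such signals and their relative importance, not necessarily the exact formula.

    # Invented example of a ranking score built from weighted parameters
    # (all signals normalized to a 0-1 scale for the sake of the example).
    offers = [
        {"name": "Offer 1", "relevance": 0.9, "price_score": 0.4,
         "seller_rating": 0.9, "paid_boost": 0.0},
        {"name": "Offer 2", "relevance": 0.6, "price_score": 0.8,
         "seller_rating": 0.8, "paid_boost": 0.2},
    ]

    WEIGHTS = {"relevance": 0.5, "price_score": 0.2, "seller_rating": 0.1, "paid_boost": 0.2}

    def ranking_score(offer):
        """Weighted sum of the ranking parameters."""
        return sum(WEIGHTS[param] * offer[param] for param in WEIGHTS)

    for offer in sorted(offers, key=ranking_score, reverse=True):
        print(offer["name"], round(ranking_score(offer), 3))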

Ratings and reviews

Both in collaborative filtering and in content-based recommender systems, the first input is given by users’ ratings and reviews. These can be defined respectively as scores (in numerical form) and feedback (in textual form) generated by the platform’s users to report their experience with a product, a buyer, or a service provider in a supposedly impartial manner. Some platforms provide aggregate or consolidated ratings, which sum up the individual ratings or reviews in an overall assessment. Consolidated ratings can play an essential role in supporting the users’ decision-making process, addressing some cognitive difficulties and the problem of information overload, i.e., the ‘wall’ of reviews (Busch, 2016)10. Ratings and reviews are not only input for recommender systems. They also represent a private ordering mechanism widely used by online platforms, such as eBay, Amazon, Uber, or Airbnb, to build and maintain trust within their community and to preserve the attractiveness of their services.
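
As a simple illustration of consolidation, an aggregate score can be a plain average of the individual ratings, or a weighted one (for example, favouring more recent ratings). The figures and the recency weighting below are invented.

    # Invented example: consolidating individual ratings into an aggregate score.
    ratings = [5, 4, 5, 2, 4]          # individual scores left by users
    recency_weights = [1, 1, 2, 3, 4]  # hypothetical weights favouring recent ratings

    simple_average = sum(ratings) / len(ratings)
    weighted_average = (
        sum(r * w for r, w in zip(ratings, recency_weights)) / sum(recency_weights)
    )

    print(round(simple_average, 2), round(weighted_average, 2))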

Ratings and reviews can perform two main functions: (1) informative and (2) self-regulatory.

(1) First of all, they constitute a reputational mechanism that can help reduce information asymmetry between the parties and promote the overall transparency of the transaction (Smorto, 201611; Busch, 201612; Ranchordás, 201813). They represent a source of information which, before the advent of e-commerce, could have been obtained through channels such as advertising, direct experience or recommendations from friends or acquaintances. In this sense, ratings and reviews have codified ‘word of mouth’ into the business models, contracts and digital architectures of such platforms (Dellarocas, 2003)14.

Recent legislative interventions in Europe have been directed at ensuring the transparency of rating and review mechanisms. In the B2C context, Directive (EU) 2019/2161 introduced an explicit prohibition on submitting or commissioning false consumer reviews or endorsements, as well as on manipulating them, in order to promote products. Furthermore, traders (including platforms) have to declare whether and how they ensure that reviews of a product are genuine, i.e., that they are submitted by consumers who have actually used or purchased the product.

(2) The second function of ratings and reviews can be the platform’s self-regulation. On many platforms, users (both service providers and end-users) assess each other. This bi-directional evaluation is an incentive for users to behave according to the rules of the community and maintain a high online reputation. A series of private sanctions usually completes the rating and review systems: if the user’s overall score falls below the threshold set by the platform, the personal account can be suspended or deactivated. In some cases, self-regulation is the only function pursued by the platform via the rating (Ducato, 2020). Since ratings and reviews can be considered personal data, relating both to the individual who receives the score and to the one who gives it to the other user, the data protection framework will apply to this form of automated decision-making (Ducato, 2020)15.
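
A deliberately simplified sketch of the kind of threshold rule described above, with an invented threshold, which also shows why such rules can amount to a purely automated decision with significant effects on the user.

    # Invented example of a rating-threshold rule used as a private sanction.
    DEACTIVATION_THRESHOLD = 4.6  # hypothetical platform-defined threshold

    def account_status(average_rating):
        """Purely automated decision based on the user's aggregate score."""
        return "deactivated" if average_rating < DEACTIVATION_THRESHOLD else "active"

    print(account_status(4.4))  # -> deactivated
    print(account_status(4.8))  # -> active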

References

  1. Felfernig, A., Boratto, L., Stettinger, M., Tkalčič, M. (2018). Group recommender systems: An introduction. Springer.
  2. Jannach, D. et al. (2010). Recommender systems: an introduction. Cambridge University Press.
  3. Goanta, C., Spanakis, G. (2020). Influencers and Social Media Recommender Systems: Unfair Commercial Practices in EU and US Law. Available at:  https://ssrn.com/abstract=3592000.
  4. Aggarwal, C. C. (2016). Recommender systems (Vol. 1). Cham: Springer International Publishing.
  5. Cobbe, J., Singh, J. (2019). Regulating recommending: motivations, considerations, and principles.
  6. Goanta, C., Spanakis, G. (2020). Influencers and Social Media Recommender Systems: Unfair Commercial Practices in EU and US Law. Available at:  https://ssrn.com/abstract=3592000.
  7. Goanta, C., Spanakis, G. (2020). Influencers and Social Media Recommender Systems: Unfair Commercial Practices in EU and US Law. Available at:  https://ssrn.com/abstract=3592000.
  8. Twigg-Flesner, C. (2018). The EU’s Proposals for Regulating B2B Relationships on Online Platforms – Transparency, Fairness and Beyond. EuCML.
  9. European Commission. (2020). Consultation Results. Ranking transparency guidelines in the framework of the EU regulation on platform-to-business relations – an explainer. Available at: https://ec.europa.eu/digital-single-market/en/news/ranking-transparency-guidelines-framework-eu-regulation-platform-business-relations-explainer.
  10. Busch, C. (2016). Crowdsourcing consumer confidence: How to regulate online rating and review systems in the collaborative economy. European Contract Law and the Digital Single Market. Intersentia, Cambridge.
  11. Smorto, G. (2016). Reputazione, Fiducia e Mercati [Reputation, Trust and Markets]. Europa e diritto privato.
  12. Busch, C. (2016). Crowdsourcing consumer confidence: How to regulate online rating and review systems in the collaborative economy. European Contract Law and the Digital Single Market. Intersentia, Cambridge.
  13. Ranchordás, S. (2018). Online reputation and the regulation of information asymmetries in the platform economy. Critical Analysis of Law.
  14. Dellarocas, C. (2003). The Digitisation of Word of Mouth: Promise and Challenges of Online Feedback Mechanisms. Management Science, 49(10), 1407–1424.
  15. Ducato, R. (2020). Private Ordering of Online Platforms in Smart Urban Mobility: The Case of Uber’s Rating System. In: Smart Urban Mobility. Springer, Berlin, Heidelberg. 301-323.
