
At the intersection of AI and data protection law: automated decision-making rules, a global perspective (CPDP LatAm Panel)

By Katerina Demetzou

On Thursday, 15 July 2021, the Future of Privacy Forum (FPF) organised a panel titled ‘At the Intersection of AI and Data Protection Law: Automated Decision Making Rules, a Global Perspective’ during the CPDP LatAm Conference. The aim of the panel was to explore how existing data protection laws around the world apply to profiling and automated decision-making practices. In light of the European Commission’s recent AI Regulation proposal, it is important to explore how, and to what extent, existing laws already protect individuals’ fundamental rights and freedoms against automated processing activities driven by AI technologies.

The panel consisted of Katerina Demetzou, Policy Fellow for Global Privacy at the Future of Privacy Forum; Simon Hania, Senior Director and Data Protection Officer at Uber; Prof. Laura Schertel Mendes, Law Professor at the University of Brasilia; and Eduardo Bertoni, Representative for the Regional Office for South America, Inter-American Institute of Human Rights. The panel discussion was moderated by Dr. Gabriela Zanfir-Fortuna, Director for Global Privacy at the Future of Privacy Forum.

Data Protection laws apply to ADM practices in light of specific provisions and/or their broad material scope

To kick off the conversation, we presented preliminary results of an ongoing project led by the Global Privacy Team at FPF on Automated Decision Making (ADM) around the world. Seven jurisdictions were compared, of which five already have a general data protection law in force (EU, Brazil, Japan, South Korea, South Africa), while two have data protection bills expected to become laws in 2021 (China and India).

For the purposes of this analysis, the following provisions are examined: the definitions of ‘processing operation’ and ‘personal data’, given that these two concepts are essential for defining the material scope of data protection law; the principles of fairness and transparency, together with the legal obligations and rights that relate to these two principles (e.g., the right of access, the right to an explanation, the right to meaningful information, etc.); and provisions that specifically refer to ADM and profiling (e.g., Article 22 GDPR).

The preliminary findings are summarized in the following points:

  • All seven jurisdictions have very broad definitions of both “processing operations” and “personal data”. Therefore, processing operations that use automated means (ADM included) fall under the protective scope of these laws. 
  • Not all laws and draft laws analyzed contain ADM-specific provisions. 4 out of 7 jurisdictions have a specific ADM provision: EU (GDPR), Brazil (LGPD), South Africa (POPIA), China (draft PIPL). However, any automated processing, including ADM, is regulated by the examined laws and bills in light of their very broad material scope. 
  • 3 out of 7 laws and bills have a specific provision defining “profiling”: EU (GDPR), Brazil (LGPD), India (draft PDPB).
  • 2 out of 7 laws and bills analyzed have a specific provision on Facial Recognition: South Korea (PIPA), China (draft PIPL). 
  • All laws and bills analyzed have rights and obligations related to the principle of transparency (e.g., right of access) even if they do not have an explicit principle of transparency.
  • A principle of fairness appears in the EU (GDPR), South Korea (PIPA) and India (draft PDPB). The LGPD (Brazil), however, has a ‘non-discrimination’ principle, and the draft PIPL (China) provides for a ‘principle of sincerity’.

Uber, Ola and Foodinho Cases: National Courts and DPAs decide on ADM cases on the basis of existing laws

In recent months, Dutch national courts and the Italian Data Protection Authority have ruled on complaints brought by employees of the ride-hailing companies Uber and Ola and the food delivery company Foodinho, challenging decisions the companies reached with the use of algorithms. Simon Hania summarised the key points of these decisions. It is important to mention that all cases arose in the employment context and were all submitted back in 2019, which means that more outcomes of ADM cases may be expected in the near future.

The first Uber case concerned the matching of drivers and riders which, as the Court judged, qualifies as a decision based solely on automated means, but one that does not lead to any ‘legal or similarly significant effect’. Therefore, Article 22 GDPR is not applicable. The second Uber case concerned the deactivation of drivers’ accounts due to signals of potentially fraudulent behaviour or misconduct by the drivers. There, the Court judged that Article 22 is not applicable because, as the company demonstrated, there is always human intervention before an account is deactivated and the actual final decision is made by a human.

The third example presented was the Ola case, in which the Court decided that the company’s decision to withhold drivers’ earnings as a penalty for misconduct qualifies as a decision based solely on automated means producing a ‘legal or similarly significant effect’, and that Article 22 GDPR therefore applies.

In the last example, Foodinho, the decision-making on how well couriers perform was indeed deemed by the Italian DPA to be based solely on automated means and to produce a significant effect on the data subjects (the couriers). The problem highlighted was the way the performance metrics were established, and specifically the accuracy of the profiles created: they were not sufficiently accurate given the significance of the effect they would produce.

This last point spurs discussion on the importance of the often-overlooked principle of data accuracy. Having accurate data as the basis for decision making is crucial in order to avoid discriminatory practices and achieve fairer AI systems. As Simon Hania emphasised, we should have information available that is fit for purpose in order to reach accurate decisions. This suggests that the data minimisation principle should be understood as data rightsizing, rather than as a requirement to simply minimise the information processed for a decision to be reached.

LGPD: Brazil’s Data Protection Law and its application to ADM practices

The LGPD, Brazil’s recently passed data protection law, is heavily influenced by the EU GDPR in general, but also specifically on the topic of ADM. Article 20 of the LGPD protects individuals against decisions that are made solely on the basis of automated processing of personal data, when these decisions “affect their interests”. The wording of this provision seems to suggest a wider protection than the corresponding Article 22 GDPR, which requires that the decision “has a legal effect or significantly affects the data subject”. Additionally, Article 20 LGPD provides individuals with a right to an explanation and with the right to request a review of the decision.

In her presentation, Laura Mendes highlighted two points that require further clarification: first, it is still unclear how “solely automated” is to be defined; second, it is not clear what the degree of review of the decision should be, nor whether the review must be performed by a human. Two further provisions are core to the discussion on ADM practices:

(a) Article 6, IX of the LGPD, which introduces non-discrimination as a separate data protection principle. According to this principle, data shall not be processed for “discriminatory, unlawful or abusive purposes”.

(b) Article 21 LGPD, which reads: “The personal data relating to the regular exercise of rights by the data subjects cannot be used against them.” As Laura Mendes suggested, Article 21 LGPD is a provision with great potential regarding non-discrimination in ADM.

Latin America & ADM Regulation: there is no homogeneity in Latin American laws but the Ibero-American Network seems to be setting a common tone

In the last part of the panel discussion, a wider picture of the situation in Latin America was presented. It should be clear that Latin America does not have a common, homogeneous approach towards data protection. For example, Argentina has had a data protection law since 2000, for which it obtained an adequacy decision from the EU; Chile is in the process of adopting a data protection law but still has a long way to go; while Peru, Ecuador and Colombia are trying to modernise their laws.

The American Convention on Human Rights recognises a right to privacy and a right to intimacy, but there is still no interpretation by the Inter-American Court of Human Rights either on the right to data protection or specifically on the topic of ADM practices. However, it should be kept in mind that, as was the case with Brazil’s LGPD, the GDPR has strongly influenced Latin America’s approach to data protection. Another common reference for Latin American countries is the Ibero-American Network which, as Eduardo Bertoni explained in his talk, does not produce hard law but publishes recommendations that are followed by the respective jurisdictions. Regarding specifically the discussion on ADM, Eduardo Bertoni mentioned the following initiatives taken in the Ibero-American space:

  • In 2017, the Ibero-American Standards for data protection were released. Article 29 of these Standards is a provision specific to ADM (the “right not to be the subject of automated decisions”) whose content is very similar to Article 22 GDPR. Although these Standards are only guidelines and not hard law, it is crucial to mention that the various jurisdictions in Latin America take them into account.
  • Another document published by the Ibero-American Network is the ‘General recommendations for processing of personal data in AI’. An obligation to perform Privacy Impact Assessments and the principle of accountability appear to be important aspects of these recommendations.
  • The Argentinian DPA published a Resolution to interpret the Argentinian Data Protection Law. 
  • The Inter-American Juridical Committee passed a set of principles for personal data protection, without including anything specific on ADM. However, Eduardo Bertoni highlighted that, again, the idea is to give data subjects the possibility to request an explanation and to oppose the processing when it could significantly affect or harm them.

Main Takeaways

While there is an ongoing debate around the regulation of AI systems and automated processing in light of the recently proposed EU AI Act, this panel brought attention to existing data protection laws which are equipped with provisions that protect individuals against automated processing operations. The main takeaways of this panel are the following:

  • Data protection laws around the world either contain provisions that specifically regulate ADM practices (such as the EU, Brazil, South Africa) or apply to such practices by virtue of their broad material scope, which protects individuals against processing operations performed by automated means as well (such as South Korea, Japan, India).
  • In Europe, national courts and Data Protection Authorities have already started ruling on cases with a specific focus on ADM practices. It is expected that more rulings will appear in the months and years to come.
  • Brazil provides a robust network of provisions for protection against ADM practices, not only in light of Article 20 LGPD (which closely resembles Article 22 GDPR), but also because the law contains a specific non-discrimination principle. However, there are elements that need to be clarified, such as whether the review of an automated decision is required to be performed by a human.
  • The principles of accuracy and data minimisation should form part of the discussion around fair decision making, whether by humans or by algorithms. The data upon which a decision is based need to be accurate, and need to be sufficient for the purposes of the specific decision. In that sense, the data minimisation principle should be understood as a “data rightsizing” principle.
  • Jurisdictions in Latin America have neither a common approach towards data protection nor a common pace in adopting or modernising their national data protection laws. 
  • The Ibero-American Network can be seen as setting the common denominator in data protection standards for Latin American jurisdictions. While the Network does not produce hard law and its decisions are not binding, it publishes Standards and recommendations which are followed by the respective jurisdictions. It is thus important to keep an eye on the initiatives taken by this Network given that they will most probably be adopted by the LatAm jurisdictions.

Looking ahead, the debate around the regulation of AI systems will continue to be heated, and the protection of fundamental rights and freedoms in light of automated processing operations will remain a top priority. In this debate we should keep in mind that the proposed AI Regulation is being introduced into an already existing system of laws, such as data protection law, consumer law and labour law. It is important to be clear about the reach and the nature of these laws in order to identify the gap that the AI Regulation, or any other future proposal, comes to fill. This panel highlighted that ADM and automated processing are not unregulated. On the contrary, current laws protect individuals by putting in place binding overarching principles, legal obligations and rights. At the same time, courts and national authorities have already started enforcing these laws.

Watch a recording of the panel HERE.

Originally published on 3 August 2021

Source: Future of Privacy Forum
