Artificial Intelligence in Brazil still needs a strategy

By Walter B. Gaspar and Yasmin Curzi de Mendonça, Researchers at CTS/FGV Law School

This is an abridged version. Download a full version of this analysis report here.

Anyone who takes the time to read the recently published Brazilian Artificial Intelligence Strategy (EBIA) will struggle to form a concrete idea of what the strategy actually is. In about fifty pages, the document offers general considerations on the implementation of AI across several sectors, but never delves into the planning questions that would be basic to a successful strategy. Too many questions remain unanswered, leaving the document looking more like a letter of intent than a pragmatic planning effort.

Below we address some of these issues, showing how EBIA i) fails to identify the actors responsible for governance, departing from the example of other strategic documents already produced by the Executive; ii) does not specify measurable benchmarks; iii) is excessively generic; iv) does not sufficiently incorporate the expertise offered in contributions during the public consultation; v) does not explore the methods available to provide transparency and explainability in AI systems; and vi) uncritically incorporates research on the use of AI in public security.

Governance uncertainty

A first essential point that remains undefined is the governance structure responsible for managing the strategy. Many contributions to the public consultation on EBIA suggested creating regulatory bodies or specific authorities, or relying on existing structures. We at FGV Law School’s Center for Technology and Society (CTS) did so in our contribution (p. 18), as did the Rio de Janeiro Institute of Technology and Society (ITS Rio) (p. 16). The creation of a specific authority is also a guideline shared by several charters of principles mapped in the Principled Artificial Intelligence study by Harvard University’s Berkman Klein Center, which surveyed multiple approaches to artificial intelligence regulation to identify the most frequent recommendations.

Many of the actions in the first axis of the strategy – “Legislation, Regulations and Ethical Use” – as well as in the others, would benefit from a clearer definition of who is responsible for carrying them out. Knowing who will create, implement, encourage or promote the listed actions defines the extent of what can actually be accomplished. In short, defining the governance structure – the responsible actors and their respective competencies – would bring clarity to the implementation of these actions.

EBIA is silent on this point. Although it occasionally mentions existing governance structures, none of the “strategic actions” listed at the end of each axis commits to a body or bodies responsible for monitoring the execution of the strategy as a whole. Ordinance (“Portaria”) No. 4,617/21 of the Ministry of Science, Technology and Innovations (MCTI), which establishes the strategy, says only that it will be up to the Ministry “to create governance instances and practices to prioritize, implement, monitor and update strategic actions established in the Brazilian Artificial Intelligence Strategy”.

Given this, it is impossible to know what form AI governance in Brazil will take. Comparing EBIA with related documents produced by the Federal Executive itself makes its insufficiency evident: both the National System for Digital Transformation and the National Internet of Things Plan have specific governance bodies created in their regulatory decrees.

It is also worth noting that broad participation by civil society, academia and the productive sector in these governance structures is essential, given the complexity of the topic. As repeatedly noted in the contributions to the strategy’s public consultation, different applications of artificial intelligence in different sectors carry radically different potential risks and benefits, so overly homogeneous and univocal groups monitoring the strategy can create blind spots that jeopardize the stated objectives.

Measuring progress

Governance of the artificial intelligence strategy is thus a matter the MCTI will need to define in order to make EBIA concrete. This is crucial, since it opens onto several other questions the document leaves unanswered. How often will the actions be reviewed and monitored? What are the success indicators for each one? At what point will the strategy need to be reformulated – an important question, given the speed at which the technological landscape changes?

Taking the Brazilian Strategy for Digital Transformation as a reference, each of its nine axes includes not only strategic actions but also measurable benchmarks for verifying implementation. It would be important to develop indicators and a schedule of periodic reviews for EBIA, with published targets for monitoring. This would strengthen accountability for the stated objectives and serve as a stepping stone for execution, facilitating the work of the MCTI and the other government agencies involved.

Inaccuracy of actions

Some of EBIA’s “strategic actions” read more like objectives than actions. For example, still in the first axis, the action of “Stimulating actions of transparency and responsible disclosure regarding the use of AI systems, and promoting the observance, by such systems, of human rights, democratic values and diversity” sounds more like the preamble to a charter of principles than a concrete action. Indeed, it seems to merely reorganize points already elaborated during the public consultation phase.

The public consultation page on this topic holds relevant contributions that could have made the statement more concrete. BRASSCOM, for example, highlights the Singapore AI Framework, which lists principles for applying AI systems and extracts good practices from them; and the Data Privacy Brasil association points to the 2018 Toronto Declaration, which also lists actions in greater detail than EBIA provides. Given the level of detail found in some of civil society’s contributions to the public consultation, the strategy looks like a missed opportunity.

In addition, some key terms need better definition to be operationalized. When the document speaks of “Facilitating access to open government data” (“AI Governance” axis), it should specify what “facilitating” means, since for AI applications not only access to open data but also the quality and structuring of those data are crucial. When it announces the action of “Stimulating the retention of specialized ICT talent in Brazil” (“Workforce and training” axis), it should indicate how this is to be achieved, at the risk of merely restating an obvious goal without pointing to a real path.

The problem of public security

Beyond these general considerations, it is important to note at least one substantial, specific problem in EBIA’s treatment of the use of AI in public security.

EBIA cites statistics from the Carnegie Endowment for International Peace’s research on the use of AI in surveillance systems. That study sought to raise awareness of, and alarm about, the use of AI by public authorities in public security, yet EBIA invokes it only superficially, to note the expansion of these systems.

Regarding specifically the spread of facial recognition systems in Brazil, EBIA cites a study published by the Igarapé Institute on the implementation of facial recognition systems (SRF, in the Portuguese acronym). A central conclusion of that study is that the adoption of these systems entailed the collection of detailed data on individuals, even before the general data protection law (LGPD) came into force, and that the effect of video surveillance on crime reduction is limited. None of this is clearly reflected in the explanatory memorandum or in the actions listed in that section.

EBIA acknowledges the potential perverse effects of implementing SRFs, such as algorithmic discrimination and inefficient applications. In its strategic actions, however, it outlines few effective ways of addressing these problems and instead endorses their implementation, delegating standardization initiatives and the planning needed to structure safe systems to other “regulatory bodies”, without ever indicating what these would be (pp. 49-50).


The generality with which EBIA’s strategic actions address their themes, combined with the lack of a clear outline of the intended governance structure, deadlines and goals, gives the document the appearance of a timid first step on the path to AI regulation.

A series of questions remains open: what are the technical and organizational guidelines for addressing the problems linked to the implementation of AI systems? Who will define and review them, and how often? How will society participate? What incentive instruments will be applied? What are the priority sectors?

These are questions that EBIA hints at but does not answer. Given the long public consultation process, the existing body of knowledge and the experience of other countries, a more significant advance was to be expected.