CyberBRICS

Information Integrity in G20

Article by Yasmin Curzi and Luca Belli

Constructing a concept of a program agenda

Combating disinformation, hate speech, synthetic content produced by generative artificial intelligence (Gen-AI), and harmful algorithmic practices by private actors has emerged as a fundamental theme for the protection of democracy and human rights.

In this context, the term “information integrity” (or “informational integrity”) has been actively used in official communications and documents, both domestically and internationally, as a way of shifting the war-like narrative around addressing such harmful practices and of promoting a transition from a negative lens to a positive one through a proactive agenda.

Against this backdrop, on March 22 CTS-FGV held a webinar[1] with UNESCO representatives, academics, and European Union experts (Meta Oversight Board; Forum on Information and Democracy; Desinformante; and IT for Change) to debate the topic.

The debate is part of a series of webinars on Digital Politics in the G20, organised as parallel events of the T20 in partnership with the UN University E-Governance Unit (UNU-EGOV), Consumers International, and the Institute of Consumer Defence (IDEC). The first webinar of the series, dedicated to AI Governance in the G20[2], had already highlighted the necessity of regulating artificial intelligence systems in order to limit – or ideally avoid – systemic risks.

The emergence of new terms, created and presented by different actors without a defined conceptualisation, was listed by the speakers as one of the main challenges. As Nina Santos noted, the terminology “information integrity” itself does not yet have a settled meaning or theoretical outline.

Its construction, grounded in the specific realities of the different socio-cultural and political contexts in which communication arises, is fundamental to establishing a common meaning – one capable of transcending linguistic and cultural barriers and enabling effective, adaptive responses to global needs.

Beyond the lack of a common meaning, the term “information integrity” risks steering the regulatory debate toward protecting only the content layer. The necessary parameters and stricter, more adequate responsibilities for private actors and users who enable illegal content must instead be developed with a close eye on the socio-technical layer of platforms: the abusive collection and use of personal data for microtargeting, and content recommendation based on engagement metrics that favour the viralisation of extremist content and fake news[3].

To construct a truly positive agenda, we need to consider digital platforms as socio-technical systems that not only reflect but are also capable of shaping users’ behaviour in their environments through certain incentives. Their design elements are deliberately constructed to capture and retain users’ attention[4].

A platform’s success is frequently measured by the time users spend on it, which leads to the creation of features that maximise engagement – at the cost of harming users’ psychological well-being[5]. Once the role of design and user experience is recognised, it becomes clear that these environments need to be structured to promote healthier and more productive interactions. As Rafael Evangelista points out, individuals are recurrently “‘instrumented’ to seek the maximum attention and engagement for themselves – and, as a result, for the platform – in order to maximise their well-being, whether financial or psychological”[6].

This is why the concept of “information integrity” must also encompass the integrity of the systems through which communication flows. Requirements for meaningful transparency, independent algorithm audits, bias mitigation, and more robust accountability mechanisms for platforms that amplify harmful content are among the most vital elements for the effective protection of informational and communicative spaces.

Finally, we need to remember the elephant in the room. Zero rating[7] practices, in which telecommunications operators offer access to certain applications – typically social media – without counting the traffic against users’ data allowances, have been criticised for years for violating net neutrality principles[8], as well as for having particularly harmful effects on the circulation and impact of misinformation.

Zero rating remains extremely prevalent in the Global South, where the great majority of users are low-income and therefore easily drawn to the apps promoted by their data plans. By artificially concentrating users’ attention – and data collection – on a very limited selection of sponsored platforms, such practices restrict users’ exposure to a more diverse range of information and can even make fact-checking impossible, since access to the open internet remains expensive while access to specific platforms is subsidised.

In countries of the Global South where zero rating is prevalent, users’ access is often restricted to the content made available by these favoured platforms, influencing public perception and democratic discourse – including through the active promotion of disinformation, as evidenced by the research of Evangelista and Bruno (2019).

The digital communication environment must seek alternatives that promote fairer and more open access to the internet, avoiding a concentration around certain applications that can interfere with the flow of information without competition or viable alternatives in the market. The impact of such practices will be the theme of the next webinar in the series, dedicated to Meaningful Connectivity in the G20[9], on 27 May at 11am.


[1] The webinar was organised in partnership with Consumers International (CI) and the Institute of Consumer Defence (IDEC), as part of the activities of the Media and Democracy project – carried out in partnership with FGV Communication, Democracy Reporting International, and the fact-checking agency Lupa – and of CTS-FGV’s participation in the T20, the G20’s think-tank engagement group.

[2] Available at:  https://cyberbrics.info/webinar-ai-governance-in-the-g20/

[3] BRADY, W. J.; CROCKETT, M. J.; VAN BAVEL, J. J. The MAD model of moral contagion: the role of motivation, attention, and design in the spread of moralized content online. Perspectives on Psychological Science, 15(4), 978-1010, 2020.

[4] BRUNO, F. G., BENTES, A. C. F., & FALTAY, P. Economia psíquica dos algoritmos e laboratório de plataforma: mercado, ciência e modulação do comportamento. Revista Famecos, 26(3), 2019.

[5] MILLER, Caroline. “Does Social Media Use Cause Depression? How heavy Instagram and Facebook use may be affecting kids negatively”. 2023. Available at: https://childmind.org/article/is-social-media-use-causing-depression/

[6] EVANGELISTA, Rafael. “Instrumentação maquínica: como as plataformas sociais produzem nossa desmobilização política cotidiana” [Machinic instrumentation: how social platforms produce our everyday political demobilisation], 2020. Available at: https://www.comciencia.br/instrumentacao-maquinica-como-as-plataformas-sociais-produzem-nossa-desmobilizacao-politica-cotidiana/

[7] Available at: http://www.zerorating.info.

[8] Available at: “Neutralidade de rede e ordem econômica”, Observatório do Marco Civil da Internet (omci.org.br).

[9] Register at: “Webinar | Meaningful Connectivity in the G20”, Portal FGV.