By Torsha Sarkar, Policy Officer, Centre for Internet and Society, and CyberBRICS Associated Scholar
The author would like to thank Elonnai Hickok and Arindrajit Basu for their feedback.
Introduction
In December 2018, the Ministry of Electronics and Information Technology (“MeitY”) released the Intermediary Liability Guidelines (Amendment) Rules (“the Guidelines”), which would significantly alter the intermediary liability regime in the country. While the Guidelines have drawn a considerable amount of attention and criticism, from the government’s perspective the change has been overdue.
The Indian government has been determined to overhaul the pre-existing safe harbour regime since last year. The draft version of the e-commerce policy, which was leaked last year, also hinted at similar plans. As the effects of the mass dissemination of disinformation, propaganda and hate speech around the world spill over into offline harm, governments have been increasingly looking to enact interventionist laws that place more responsibility on intermediaries. India has not been an exception.
A major source of such harmful and illegal content in India is the popular communications app WhatsApp, despite the company’s enactment of several anti-spam measures over the past few years. Last year, rumours circulating on WhatsApp prompted a series of lynchings. In May, Reuters reported that clones and software tools were available in the market at minimal cost, allowing politicians and other interested parties to bypass these measures and continue the trend of bulk messaging.
This series of incidents has made it clear that disinformation is a very real problem, and that the current regulatory framework is not enough to address it. The government’s response, accordingly, has been to introduce the Guidelines. This rationale also finds a place in its preliminary statement of reasons.
While the enactment of such interventionist laws has triggered fresh rounds of debate on free speech and censorship, it would be wrong to say that such laws are completely one-sided or uncalled for.
On one hand, the automated amplification and mass online circulation of purposeful disinformation, propaganda, terrorist attack videos, or plain graphic content are all problems that a government would concern itself with. On the other hand, several online companies (including Google) also seem to be in uneasy agreement that simple self-regulation of content will not cut it. For better oversight, more engagement with both government and civil society is needed.
In March this year, Mark Zuckerberg wrote an op-ed for the Washington Post, calling for more government involvement in the process of content regulation on his platform. While it would be interesting to consider how Zuckerberg’s view aligns with those of similarly placed companies, it would nevertheless be correct to say that online intermediaries are under more pressure than ever to keep their platforms clean of content that is ‘illegal, harmful, obscene’. And this list only grows.
That being said, the criticism from several stakeholders is sharp and clear whenever such laws are enacted – be it the ambitious NetzDG, aimed at combating Nazi propaganda, hate speech and fake news, or the controversial new European Copyright Directive, which has been welcomed by journalists but severely critiqued by online content creators and platforms as detrimental to user-generated content.
Against the backdrop of such conflicting interests in online content moderation, it is useful to examine the Guidelines released by MeitY. In the first portion, we look at certain specific concerns within the rules; in the second, we push the narrative further to see what an alternative regulatory framework might look like.
Before we jump to the crux of this discussion, one important disclosure must be made about the underlying ideology of this piece. It would be unrealistic to claim that the internet should be absolutely free from regulation. Swathes of content on child sexual abuse, or terrorist propaganda, or even the hordes of death and rape threats faced by women online are and should be concerns of civil society. While that is certainly a strong driving force for regulation, this concern should not override basic considerations of human rights (including freedom of expression). These ideas are expanded upon in the upcoming sections.
Broad, thematic concerns with the Rules
A uniform mechanism of compliance
Timelines
Rule 3(8) of the Guidelines mandates intermediaries, prompted by a court order or a government notification, to take down content relating to unlawful acts within 24 hours of such notification. If they fail to do so, the safe harbour applicable to them under section 79 of the Information Technology Act (“the Act”) would cease to apply, and they would be liable. Prior to the amendment, this timeframe was 36 hours.
There is a visible lack of research to show that a 24-hour compliance timeline is the optimal framework for all intermediaries, irrespective of the kind of services they provide or the size and resources available to them. As the Mozilla Foundation has commented, regulation of illegal content online simply cannot be done through a one-size-fits-all approach, nor can regulation be made with only the tech incumbents in mind. While platforms like YouTube can comfortably remove prohibited content within a span of 24 hours, the same requirement can place a large burden on smaller companies, which may not have the necessary resources to comply within this timeframe. A few unintended consequences would arise out of this situation.
One, sanctions under the Act, which would include both organisational ramifications like website blocking (under section 69A of the Act) as well as individual liability, would affect smaller intermediaries more than they would affect the bigger ones. A bigger intermediary like Facebook may be able to withstand a large fine for its failure to control, say, hate speech on its platform. That may not be true for a smaller online marketplace, or even a smaller social media site targeted at a very specific community. This compliance mechanism, accordingly, may simply strengthen the larger companies and eliminate competition from the smaller ones.
Two, intermediaries, fearing heavy criminal sanctions, would err on the side of caution and over-remove. This would mean that decisions on whether a piece of content is illegal would be quicker and less nuanced. It would also mean that legitimate speech would be at risk of censorship, and that intermediaries would pay less heed to the technical requirements or the correct legal procedures for content takedown.
Utilization of ‘automated technology’
Another place where the Guidelines assume that all intermediaries operating in India are on the same footing is Rule 3(9), which mandates these entities to proactively monitor for ‘unlawful content’ on their platforms. Aside from the unconstitutionality of this provision, it also assumes that all intermediaries have the requisite resources to actually set up such a tool and operate it successfully. YouTube’s Content ID, which began in 2007, had already seen a whopping 100 million dollars of investment by 2018.
Notably, Content ID is a tool dedicated exclusively to finding copyright violations on behalf of rights-holders, and even then it has proven to be far from infallible. The Guidelines’ sweeping net of ‘unlawful’ content includes far more categories than mere violations of IP rights, and the framework assumes that intermediaries would be able to set up and run an automated tool that filters through all these categories of ‘unlawful content’ at one go.
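To illustrate the mismatch, below is a minimal, purely hypothetical Python sketch of exact-match fingerprinting, the broad family of techniques that copyright-matching tools draw on (real systems such as Content ID use far more sophisticated perceptual fingerprints, not simple hashes). The point it makes is narrow: such a tool can only recognise content already registered in a reference database, which says nothing about whether a novel post is defamatory, hateful or otherwise ‘unlawful’.

```python
# A toy illustration (purely hypothetical, not how Content ID is implemented)
# of exact-match fingerprinting: content is flagged only if its fingerprint
# already exists in a database supplied by rights-holders.
import hashlib

# Hypothetical database of fingerprints of known copyrighted files
# (the entry below is a placeholder, not a real work's hash).
known_fingerprints = {
    "3a7bd3e2360a3d29eea436fcfb7e44c735d117c42d1c1835420b6b9942dd4f1b",
}

def fingerprint(data: bytes) -> str:
    """Return a stable fingerprint (here, simply a SHA-256 hash) of an upload."""
    return hashlib.sha256(data).hexdigest()

def is_known_copyrighted(upload: bytes) -> bool:
    """Flag the upload only if it matches a previously registered work."""
    return fingerprint(upload) in known_fingerprints

# Novel 'unlawful content' -- defamation, hate speech, disinformation --
# has no fingerprint to match against, so this approach cannot find it.
print(is_known_copyrighted(b"some never-before-seen post"))  # False
```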
The problems of AI
Aside from the implementation-related concerns, there are also technical challenges associated with Rule 3(9). Supervised learning systems (like the one envisaged under the Guidelines) use training data sets for proactive filtering. This means that if the system is taught that for ten instances the input A produces the output B, then the eleventh time it sees A, it will give the output B. In the lingo of content filtering, the system would be taught, for example, that nudity is bad. The next time the system encounters nudity in a picture, it would automatically flag it as ‘bad’ and in violation of community standards.
Except that this is not how it should work. For every post under the scrutiny of platform operators, numerous nuances and contextual cues act as mitigating factors, none of which, at this point, a machine can reliably understand.
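As a concrete illustration, the sketch below uses Python and scikit-learn to build a toy supervised text classifier of the kind described above. The training posts, labels and library choice are all illustrative assumptions, not a description of any platform’s actual system; the point is simply that a model which has only learnt word–label associations has no way of recognising context.

```python
# A minimal, illustrative sketch of a supervised text classifier that learns
# an input -> label mapping from hand-labelled examples. Requires scikit-learn.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

# Hypothetical training data: posts hand-labelled as 'violating' or 'fine'.
posts = [
    "graphic video of the attack, share widely",   # violating
    "join us, death to the traitors",               # violating
    "pictures from my beach holiday",               # fine
    "recipe for grandma's lentil soup",             # fine
]
labels = ["violating", "violating", "fine", "fine"]

model = make_pipeline(CountVectorizer(), MultinomialNB())
model.fit(posts, labels)  # the system 'learns' that certain words imply a label

# A news report *about* the attack shares vocabulary with the violating posts.
# A bag-of-words model has no notion of intent or context, so it is likely to
# flag this legitimate piece of reporting as well.
print(model.predict(["news report: graphic video of the attack circulated online"]))
```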
Additionally, the training data used to feed the system can be biased. A self-driving car that is fed training data from only one region of the country would learn the customs and driving norms of that particular region, not the patterns needed for its intended purpose of driving throughout the country.
Lastly, this is not to suggest that bias would be completely eliminated if content moderation were undertaken by a human. However, the difference between a human moderator and an automated one is that there is a measure of accountability with the former. The decision of a human moderator can be disputed, and the moderator has a chance to explain the reasons for the removal. Artificial intelligence (“AI”), by contrast, is characterised by the algorithmic ‘black box’ that processes inputs and generates usable outputs. Implementing workable accountability standards for such a system, including figuring out appeal and grievance redressal mechanisms in cases of dispute, are all problems that the regulator must concern itself with.
In the absence of any clarity or revision, it seems unlikely that the provision would ever see full implementation. Neither would intermediaries know what kind of ‘automated technology’ they are supposed to use for filtering ‘unlawful content’, nor would there be any incentive for them to actually deploy such systems effectively on their platforms.
What can be done?
First, more research is needed to understand the effect of compliance timeframes on the accuracy of content takedown. Several jurisdictions now operate on different compliance timeframes, and the regulation would be far more holistic if the government considered the dialogue around each of them and what it means for India.
Second, it might be useful to consider the concept of an independent regulator as an alternative, and as a compromise between pure governmental regulation (which is more or less what the current system is) and self-regulation (which the Guidelines, albeit problematically, also espouse through Rule 3(9)).
The UK White Paper on Online Harms, an important document in the conversation around liability overhaul, proposes an arm’s-length regulator who would be responsible for drafting codes of conduct for online companies and for enforcing them. While the exact merits of that system are still up for debate, the concept of having a separate body to oversee, formulate and possibly also arbitrate disputes regarding content removal is finding traction in several parallel developments.
One of the Transatlantic Working Group sessions seems to discuss this idea in terms of having an ‘internet court’ for the regulation of illegal content. This would have the noted advantages of a) formulating norms for online content in a transparent, public fashion, something previously done behind the closed doors of either the government or the tech incumbents, and b) having specially trained professionals who would be able to dispose of matters expeditiously.
India is not unfamiliar with the idea of specialised tribunals or quasi-judicial bodies for dealing with specific challenges. In 2015, for example, the Government of India passed the Commercial Courts Act, under which specific courts were tasked with dealing with matters of very large value. This is neither an isolated instance of the government choosing to create new bodies to deal with a specific problem, nor would it be impossible to replicate in the future.
There is no silver bullet when it comes to the moderation of content on the web. However, in light of this parallel convergence of ideas, the appeal of an independent regulatory system, as a sane compromise between complete government control and laissez-faire autonomy, is worth considering.