The Interface between Artificial Intelligence and Free Speech: Implications for India (Part I)


Introduction

A recent report titled Privacy and Freedom of Expression in the Age of Artificial Intelligence, released by Article 19, a global human rights organisation, articulates various threats that AI poses to freedom of expression, measured against the standards of international human rights law. With the advent of the technological revolution in India, algorithms have become ubiquitous in our lives today, across diverse sectors such as healthcare, education and public security. There has been significant academic debate in the United States about bringing AI speech under the ambit of the First Amendment, i.e. the free speech clause of the US Constitution. However, the contours of such protection remain ambiguous in the absence of clear standards.

AI has tremendous potential to enrich our society. However, it can also have a disruptive influence on free speech. Algorithms can be a potent instrument for the state to expand restrictions on free speech, and the regulation of speech by tech giants using AI-based moderation presents another conundrum. In this article, the author offers an appraisal of these critical questions.

The AI speech debate in the US: Acknowledgement of the chilling effect

Scholars in the US are particularly optimistic that a machine could satisfy the Turing test and exhibit a level of intelligence that is “indistinguishable from that of humans.” When AI initiates communication akin to that of humans, it could challenge our perception of free speech altogether and affect legal regulation. In First National Bank of Boston v. Bellotti, the US Supreme Court laid down that determining what constitutes protected speech does not require an enquiry into the identity of the source, which could be a “corporation, association, union or individual.” Justice Powell opined that instead of enquiring into the source of speech and “whether they are co-extensive with those of natural persons”, the court should merely scrutinise its impact on First Amendment protections. Citizens United v. Federal Election Commission affirmed such a reading of the First Amendment: Justice Scalia observed that the First Amendment is defined in terms of the limits placed on speech, not the identity of the speakers. Such an interpretation enables a very diverse category of non-traditional ‘speakers’, including algorithms, to claim the benefit of the clause.

The recognition of non-traditional speakers has opened a new avenue for AI-generated speech, including the output of machine learning systems. Scholars have scrutinised the instrumental value that AI holds for enriching public discourse in order to formulate a standard of protection. The attribution of legal personhood to AI has also been evaluated in this context, and concerns surrounding the autonomy and dignity of humans have been raised. Consequently, the motivation of the state to censor must also be taken into account when determining the validity of restrictions. The regulation of non-state actors, especially in the era of new-age information warfare, presents another critical challenge.

International standards: India’s prospect

The Universal Declaration of Human Rights emphatically recognises freedom of expression as one of its core principles. Article 19 of the ICCPR defines free speech in terms of the unhindered exchange of information through any medium. It permits interference with free speech only where the rights or reputation of others are involved, or where national security, public order, or public health or morals would be compromised. Likewise, Article 10 of the European Convention on Human Rights restricts freedom of expression “in the interests of national security, territorial integrity or public safety.”

In 2018, the European Commission constituted the High-Level Expert Group on Artificial Intelligence, which developed a set of ethical guidelines for “trustworthy AI”. Concerns about the chilling effect on free speech prompted the European Union to conceptualise these guidelines as a set of inviolable normative standards. The guidelines recognise that individual freedom encompasses “mitigation of (in)direct illegitimate coercion, threats to mental autonomy and mental health, unjustified surveillance, deception and unfair manipulation.” The UN’s International Telecommunication Union has also taken cognizance of these concerns at its annual AI for Good Global Summit in Geneva. The emergence of AI has opened a Pandora’s box in the sphere of free speech around the world, and it poses a much bigger threat in jurisdictions where the constitutional protection extended to free speech may be diluted with comparative ease.

The Constitution of India enshrines the freedom of speech and expression for citizens in Article 19(1)(a). However, much like the global standards, it is not a right without limitations. The state can impose reasonable restrictions under Article 19(2), with the maintenance of ‘public order’ being one of the most widely misused grounds. In response to state excesses, however, landmark judicial precedents in India have reinvigorated free speech by incorporating tests of proximity, proportionality and arbitrariness.

The Supreme Court’s judgement in Shreya Singhal v. Union of India remains the most influential ruling in this regard, as it gave a measure of determinacy to the public order conundrum. The apex court adopted the incitement standard of Brandenburg v. Ohio and held that incitement to violence is a sine qua non for a public order restriction. However, the rise of big tech, coupled with an intrusive majoritarian government, can insidiously undermine such precedent. While a plethora of stringent national security laws exist despite sustained criticism, misinformation and hate speech have become rampant and prompted calls for legal intervention.

Existing state policy for AI in India: Free speech not on the agenda

The NITI Aayog, the foremost policy think-tank of the Government of India, has formulated a National Strategy Paper on Artificial Intelligence to chart a roadmap for AI regulation. It emphasises harnessing India’s AI capability for economic development. The government envisages a collaborative role for private players and aims to foster innovation in AI. The paper identifies relevant sectors where AI could potentially be deployed. However, given the regulatory vacuum in which AI is likely to operate, many of these interventions could restrict free speech if their pitfalls are ignored.

Interestingly, the National Strategy Paper recognises that ostensibly neutral data can carry biases. It also throws light on the distortions in AI performance caused by the “Black Box Phenomenon”: the difficulty of identifying the linkage between input data and the final results generated, owing to the opacity of algorithms. As a consequence, AI decisions often cannot be rationalised. The limited understanding of such opaque processes is likely to create loopholes in free speech regulation, particularly given the conspicuous absence of an unequivocal commitment to free speech in the policy paper.
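The “Black Box” problem can be caricatured in a few lines of code. This is an illustrative sketch only (the weights and features are invented, and real systems have millions of learned parameters), but it shows why an AI decision may resist any human-readable rationale:

```python
import math

# Invented, "learned-looking" parameters; in a deployed model these
# emerge from training and are far too numerous to inspect by hand.
WEIGHTS = [0.73, -1.42, 0.05, 2.10]

def opaque_classifier(features):
    """Return the model's probability that a post is 'objectionable'."""
    z = sum(w * x for w, x in zip(WEIGHTS, features))
    return 1 / (1 + math.exp(-z))  # logistic output between 0 and 1

score = opaque_classifier([0.2, 0.9, 0.4, 0.1])
print(round(score, 3))
# The output is a bare number; the linkage between the input and the
# decision is buried in the weights. That opacity is the "Black Box":
# neither a regulator nor a court can audit *why* a post was flagged.
```

The point is not the arithmetic but the asymmetry: the computation is trivial to run and nearly impossible to explain in the reason-giving terms that free speech adjudication demands.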

Hate speech regulation: The limits of AI

Recently, social media platforms such as Facebook and Twitter have been under the scanner around the globe for their failure to respond to the proliferation of hate speech on their platforms. Media reports have revealed that human moderators are unable to efficaciously flag objectionable content, which can be attributed to a lack of training and the absence of a consistent regulatory standard. Human moderators are often unable to decide which content deserves to be taken down, and the job itself can be dehumanising or traumatic.[1]

In response, tech companies such as Facebook have consistently employed hate speech detection algorithms despite criticism. The classification of content has, however, proven to be a herculean task, as human culture is a fluid notion that cannot be boxed into a set of formulae for machine reading. A set of uniform standards is also unlikely to be viable given the divergence in cultural norms across the globe. The notion of culture is itself transient, and a dynamic society is likely to have subaltern counter-cultures that challenge hegemonic norms. It is unlikely that the input datasets supplied to an AI system can capture such complexities. In the absence of context, artificial intelligence may fail to distinguish between content that provokes violence and posts that condemn violence or report instances of it.
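The context problem described above can be made concrete with a deliberately naive sketch (this is a hypothetical keyword filter, not any platform’s actual system): a flagger that matches trigger words without context will treat incitement, condemnation and reportage identically.

```python
# Hypothetical trigger list for illustration only.
TRIGGER_WORDS = {"attack", "destroy", "burn"}

def flag(post: str) -> bool:
    """Flag a post if it contains any trigger word, ignoring all context."""
    words = {w.strip(".,!?").lower() for w in post.split()}
    return bool(words & TRIGGER_WORDS)

incitement = "Go out and attack their shops tonight!"
condemnation = "It is shameful that a mob chose to attack innocent people."
news_report = "Rioters attempted to burn several homes, police said."

# All three posts are flagged identically, even though only the first
# incites violence; the other two condemn or report it.
print(flag(incitement), flag(condemnation), flag(news_report))
```

Production classifiers are of course statistical rather than keyword-based, but the underlying failure mode, scoring surface features without grasping the speaker’s intent, is the same one the paragraph above describes.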

In 2018, Facebook CEO Mark Zuckerberg notably conceded before a US Senate committee that Facebook’s AI requires an immediate upgrade, as the line between legitimate democratic discourse and objectionable content is becoming increasingly blurred. The inability of Facebook to accurately flag hate speech can have alarming consequences, as it did in Myanmar, where anti-Rohingya hate speech flooded the platform. Facebook’s inability to tackle hate speech against minority communities has also come to the fore in the Indian state of Assam.

The flip side of content monitoring through AI is the prevalence of bias, which is not limited to racial prejudice. Perspective, a machine learning tool developed by Google, attempted to classify posts on a scale ranging from ‘healthy’ to ‘toxic’. The algorithm was trained on offensive comments from social media and then asked to evaluate Twitter posts. However, it demonstrated a marked tendency to target tweets written in African-American Vernacular English. The ‘neutrality’ of AI may therefore be compromised.
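How such dialect bias arises can be sketched in miniature (the training posts, labels and scoring rule below are invented for illustration and do not reflect Perspective’s actual model or data): if dialect vocabulary happens to co-occur with ‘toxic’ labels in the training sample, a frequency-based scorer learns to penalise the dialect itself.

```python
from collections import Counter

# Invented, deliberately biased training sample: the dialect words
# ("fr", "y'all") appear only in posts labelled toxic.
toxic_posts = ["u trippin fr", "y'all wildin fr", "shut up idiot"]
healthy_posts = ["have a good day", "great point thank you"]

toxic_counts = Counter(w for p in toxic_posts for w in p.split())
healthy_counts = Counter(w for p in healthy_posts for w in p.split())

def toxicity_score(post: str) -> float:
    """Fraction of words seen more often in the 'toxic' sample."""
    words = post.split()
    hits = sum(toxic_counts[w] > healthy_counts[w] for w in words)
    return hits / len(words)

# A harmless dialect post scores high purely because of its vocabulary,
# while the same sentiment in standard English scores zero.
print(toxicity_score("y'all good fr"))        # dialect phrasing
print(toxicity_score("are you doing well"))   # standard phrasing
```

The bias here lives entirely in the training data, not in any explicit rule, which is precisely why it is hard to detect and harder to litigate.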

In the absence of advanced AI that can identify patterns from datasets and recognise the context of speech, human intervention, despite its limitations, remains indispensable to classifying hate speech in the first place. As recently revealed, Facebook’s commercial interests appear to have shaped its hate speech detection policy in India. The predilection of tech companies to collude with a ruling dispensation thus underscores the urgent need for transparent regulation.

(This post has been authored by Rongeet Poddar. Rongeet recently graduated from the West Bengal National University of Juridical Sciences, Kolkata and is set to join a top tier law firm in India. He also serves as an Editor at TCLF)

LINK TO PART II

References

  1. S. T. Roberts, Behind the Screen: Content Moderation in the Shadows of Social Media, 79 (2019)

Cite As: Rongeet Poddar, ‘The Interface Between Artificial Intelligence and Free Speech: Implications for India’ (The Contemporary Law Forum, 25 August 2020) <https://tclf.in/2020/08/25/the-interface-between-artificial-intelligence-and-free-speech-implications-for-india-part-i> date of access.
