The Interface between Artificial Intelligence and Free Speech: Implications for India (Part II)

LINK TO PART I


Fake news and deep fakes: Weaponizing AI systems

Misinformation has been a potent instrument for regimes to manufacture electoral victories and sway public opinion in referendums such as Brexit. India has not been insulated from such manufactured propaganda either. Media reports have highlighted the use of deep fakes in the recent Delhi election campaign, where a political party fabricated campaign videos to extend its outreach to more linguistic groups. Fake news on social media platforms has been linked to mob violence and lynchings. Such precedents make India fertile ground for a barrage of AI-aided propaganda. Fact-checking websites have identified individual instances of fake news in many cases, but they are ill-equipped to flag objectionable content at scale.

Scholars have attempted to analyse the misinformation ecosystem following the previous presidential election in the United States. One study revealed how an AI system curated content from multiple sources to generate fake videos on YouTube. AI bots have also flooded Twitter and targeted users with manufactured content. Detecting these bots has become difficult, as technological advances allow them to mimic human behaviour ever more closely. Deep fakes, likewise, are created by seamlessly combining multiple high-quality images; machine learning then manipulates the accompanying audio, much as bots generate fake text on social media platforms, to depict individuals fraudulently and without their consent. Recent developments reveal how AI can damage public discourse by exploiting computational psychology models that build personality profiles around human weaknesses. Such profiles enable misinformation campaigns targeted at the preferences of individual users. Language then becomes merely a tool to appeal to human emotion rather than a means of conveying accurate information that enriches the marketplace of ideas.[1] The very fabric of a democratic order could therefore be compromised if tech giants are not exhorted to mitigate the threat of fake news.
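To see why bot detection is so brittle, consider a minimal, hypothetical sketch of the kind of heuristic scoring platforms have historically relied on. Every field and threshold below is invented for illustration; the point is that a bot which posts at human-like rates from an aged account sails straight through.

```python
# Hypothetical sketch: a naive heuristic bot score of the kind that
# sophisticated bots now evade by mimicking human posting behaviour.
from dataclasses import dataclass

@dataclass
class Account:
    posts_per_day: float
    followers: int
    following: int
    account_age_days: int

def naive_bot_score(acc: Account) -> float:
    """Return a 0-1 score; higher suggests automation. Thresholds are illustrative."""
    score = 0.0
    if acc.posts_per_day > 50:                       # inhuman posting frequency
        score += 0.4
    if acc.following > 10 * max(acc.followers, 1):   # mass-follow behaviour
        score += 0.3
    if acc.account_age_days < 30:                    # newly created account
        score += 0.3
    return min(score, 1.0)

# A bot with human-like activity on an aged account scores 0.0,
# which is precisely why heuristic detection struggles.
print(naive_bot_score(Account(posts_per_day=12, followers=800,
                              following=900, account_age_days=400)))
```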

An Indian start-up is also employing AI tools to detect misleading content. By contrast, Facebook’s practice of relying on users to identify manipulated content has had limited success, despite the UN Human Rights Council’s endorsement of such a self-regulation framework. In a majoritarian democracy, coordinated fake profiles can game user-driven reporting by sheer strength of numbers and amplify a misinformation campaign.

The School of Informatics, Computing and Engineering at Indiana University has recently developed an AI-enabled tool called BotSlayer. BotSlayer operates by creating a network map to identify surges in trending topics. Twitter has also acquired Fabula AI to tackle misinformation; the model utilizes data from fact-checking sources and compares it with information circulating on the platform. Dependence on third-party sources may, however, limit the effectiveness of the set-up. Moreover, training AI to track fake news is itself a tall order.
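The surge-detection logic at the heart of tools like BotSlayer can be conveyed in a few lines. This is a hedged illustration of the general idea, an anomaly score computed over a trailing activity baseline, and not BotSlayer’s actual implementation; the data and threshold are invented.

```python
# Hedged sketch (not BotSlayer's actual code): flag a hashtag whose current
# activity is an anomalous spike relative to its recent hourly baseline.
from statistics import mean, stdev

def is_anomalous_surge(hourly_counts: list[int], current: int,
                       z_threshold: float = 3.0) -> bool:
    """hourly_counts: mention counts in the trailing window; current: this hour."""
    if len(hourly_counts) < 2:
        return False
    mu, sigma = mean(hourly_counts), stdev(hourly_counts)
    if sigma == 0:
        return current > mu            # any rise from a flat baseline counts
    return (current - mu) / sigma > z_threshold

baseline = [4, 6, 5, 7, 5, 6, 4, 5]    # ordinary chatter
print(is_anomalous_surge(baseline, 90))  # True: a coordinated-looking spike
```

A real system would additionally map which accounts drive the spike, since a surge amplified by a dense cluster of recently created accounts is the signature of coordination rather than organic interest.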

A European Union study evaluated the use of automated content recognition technologies to identify ‘bot’ fake accounts and stem the flow of misinformation such as deep fakes. The study acknowledges that AI is no panacea and that human audits remain inevitable. Notably, it asserts that data-driven systems piggyback on the expertise of human experts, and an element of bias creeps in along the way. AI therefore cannot counter disinformation on its own merits, as it is not yet equipped to grapple with the complexities of careful manipulation. Human oversight must guide efficient AI tools such as BotSlayer for the foreseeable future, until the computational capacity of AI improves. Furthermore, as the EU study rightly observes, human intervention can curb the over-censorship that results from incorrect machine readings of content.
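The human-in-the-loop arrangement the EU study points towards can be expressed as a simple routing rule: automated action only at high machine confidence, human review in the grey zone, and no action below it. The thresholds here are invented purely for illustration.

```python
# Illustrative sketch of a human-in-the-loop moderation pipeline:
# the machine acts alone only when it is very confident, which is
# how human oversight curbs over-censorship from misreadings.
def route_content(misinfo_probability: float) -> str:
    if misinfo_probability >= 0.95:
        return "auto-flag"        # machine confident enough to act
    if misinfo_probability >= 0.60:
        return "human-review"     # grey zone: a moderator decides
    return "leave-up"             # acting here would over-censor

for p in (0.99, 0.75, 0.30):
    print(p, "->", route_content(p))
```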

State censorship: Predictive policing and facial recognition

Researchers have developed an AI system that analyses ‘hostile’ social media patterns to predict impending violence. The system rests on the rationale that moral convergence is the underlying trigger of violence, and was developed by studying the 2015 Baltimore protests in the US, which erupted after sustained social media mobilization. In India, the recent NITI Aayog Strategy Paper refers to predictive policing in the form of “intelligent safety systems” that would collect information from social media platforms to deter crime. A working draft on artificial intelligence, prepared by the Centre for Internet and Society, has also highlighted the use of predictive analytics by the police in India. Such software can map crime patterns and help the police identify geographical crime hotspots, enabling them to anticipate crime before it is committed and respond more efficiently to potential law and order disruptions.
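A bare-bones sketch conveys the underlying mechanic of hotspot mapping. Deployed predictive policing systems are far more elaborate, but the logic of “learn from past incident locations, then direct patrols there” is the same; all coordinates and thresholds below are hypothetical.

```python
# Hypothetical sketch of hotspot mapping: bin past incident coordinates
# into a grid and flag cells whose counts exceed a threshold.
from collections import Counter

def hotspots(incidents: list[tuple[float, float]],
             cell: float = 0.01, threshold: int = 3):
    """incidents: (lat, lon) pairs; cell: grid size in degrees (~1 km)."""
    grid = Counter((round(lat / cell), round(lon / cell))
                   for lat, lon in incidents)
    return [key for key, count in grid.items() if count >= threshold]

past_incidents = [(28.651, 77.231)] * 4 + [(28.702, 77.102)]  # toy data
print(hotspots(past_incidents))  # the repeated location surfaces as a hotspot
```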

The deployment of a predictive policing system in the US revealed that its data mining process malfunctioned because programmers fed it corrupted input data. The anomaly reflected the stereotypes that the police department held against racial minorities and the LGBTQ community. Predictive policing can consequently result in the selective targeting of minorities, and such labelling is likely to set off a spiral of compounding prejudice. In a political climate where minorities are demonized, predictive policing is liable to become a tool of marginalization. Over-reliance on automated technologies can also supplant the discretion of the police altogether and lead to arbitrary crackdowns.
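The spiral of prejudice that critics describe can be illustrated with a toy simulation: if patrols chase recorded crime, and patrol presence itself generates more records, an initial artefact in the data locks itself in. This is an illustrative model only, not a description of any deployed system.

```python
# Toy simulation of the predictive-policing feedback loop: patrols follow
# recorded crime, patrolling produces more records, and a biased starting
# point compounds even though actual crime is identical everywhere.
true_crime = [10.0, 10.0]     # two areas with identical actual crime rates
recorded   = [12.0, 8.0]      # historical records biased towards area 0

for step in range(5):
    top = 0 if recorded[0] >= recorded[1] else 1
    patrol = [0.7 if i == top else 0.3 for i in range(2)]   # patrols chase the data
    # observed crime scales with patrol presence, not with actual crime alone
    recorded = [r + c * p for r, c, p in zip(recorded, true_crime, patrol)]
    print(f"step {step}: recorded = {recorded[0]:.0f} vs {recorded[1]:.0f}")
# Despite identical true crime, area 0's recorded lead keeps growing.
```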

In the absence of safeguards, therefore, predictive models can have a chilling effect on free speech if the state does not utilize them judiciously. This is especially problematic in jurisdictions such as India, where the police would often not hesitate to invoke preventive detention laws on the pretext of incitement and crack down on individuals who merely criticize the actions of the government or vent their anger on social media without inciting violence.

China’s totalitarian regime has been one of the leading exponents of AI-driven censorship. It has used its facial recognition capabilities intensively to identify demonstrators for arrest. The practice has also enhanced the efficacy of its draconian social credit system, which punishes individuals for an anti-government stance. The Delhi Police was also recently in the news for utilizing facial recognition to screen crowds at public demonstrations and identify “habitual protestors” and “rowdy elements”. Furthermore, an Indian artificial intelligence start-up is helping the police use AI to instantly retrieve personal data by identifying faces in a crowd. This is a cause for major concern in India, which maintains a sophisticated biometric database in Aadhaar, and where advanced AI systems are likely to function at the absolute discretion of the executive.
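At a technical level, crowd screening of this sort typically works by embedding each detected face as a numerical vector and comparing it against a watchlist. The sketch below is a hedged illustration of that matching step; the names, four-dimensional embeddings, and threshold are all invented, and production systems use learned embeddings of far higher dimension.

```python
# Hedged sketch of watchlist matching in facial recognition: faces are
# embedded as vectors and compared by cosine similarity to stored entries.
import math

def cosine(a: list[float], b: list[float]) -> float:
    dot = sum(x * y for x, y in zip(a, b))
    return dot / (math.sqrt(sum(x * x for x in a)) *
                  math.sqrt(sum(x * x for x in b)))

# Invented watchlist: identity -> reference embedding.
watchlist = {"person_A": [0.1, 0.9, 0.2, 0.4],
             "person_B": [0.8, 0.1, 0.5, 0.2]}

def screen(face_embedding: list[float], threshold: float = 0.9):
    """Return watchlist identities whose similarity exceeds the threshold."""
    return [name for name, ref in watchlist.items()
            if cosine(face_embedding, ref) >= threshold]

print(screen([0.12, 0.88, 0.22, 0.41]))   # close to person_A -> flagged
```

The threshold choice is the whole civil-liberties question in miniature: set low, it sweeps innocent bystanders into the match list; set high, it still operates without any external check on whose faces populate the watchlist.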

Conclusion

AI could transform the contours of free speech in India altogether, even as its efficacy remains constrained by the factors highlighted above. The operation of AI tools in a legal vacuum could drastically undermine the existing constitutional protection of free speech. Remodelling the current NITI Aayog National Strategy Paper on Artificial Intelligence would thus be a welcome step towards gradually streamlining the law and incorporating reasonable safeguards. As the recent hate speech row involving Facebook in India demonstrates, big tech and the modern-day nation-state have all the more potential to subvert democracy with sophisticated AI tools at their disposal. There is therefore an urgent need to adopt comprehensive legal regulations to govern AI and negate any chilling effect on free speech. Artificial intelligence must not become the weapon of choice for ushering in an Orwellian dystopia.

(This post has been authored by Rongeet Poddar. Rongeet recently graduated from the West Bengal National University of Juridical Sciences, Kolkata and is set to join a top tier law firm in India. He also serves as an Editor at TCLF)


References

  1. J. Stanley, How Fascism Works: The Politics of Us and Them, 86 (2018)

Cite As: Rongeet Poddar, ‘The Interface Between Artificial Intelligence and Free Speech: Implications for India (Part II)’ (The Contemporary Law Forum, 25 August 2020) <https://tclf.in/2020/08/25/the-interface-between-artificial-intelligence-and-free-speech-implications-for-india-part-ii> date of access.
