Intersectionality, Moral Policing and International Law: An AI Perspective (Part II)

LINK TO PART I

The Special Role of Artificial Intelligence amidst Moral Policing

It would be appropriate to ask specific and consistent questions about the role of artificial intelligence in cyberspace and in the metaphysics of information. Two different considerations affect how we assess the special role of AI in the realm of intersectionality. The binary in the existing narratives on AI Ethics stems from cultural biases in understanding what AI stands for; it has nothing to do with the technological characteristics of AI in general. The mathematical fundamentals on which AI is built remain, for now, untouched by the narrative wars over AI Ethics. However, unlike the scientific version of AI, the political (and even legal-industrial) version of AI has to be assessed with a sense of reasonability. The author's piece on algorithmic journalism explained how wrong narratives are spread and how the task of handling misinformation is weaponized. In continuation with that, this section examines how AI Ethics, which asks us to decide whether AI is ethical, useful or meaningful to us for the reasons we put forward, has often been misused over the years. This is owed to those companies and so-called experts on AI Ethics who focus on spreading merely binary narratives about technology diplomacy.

For example, the rift between the US and China is not just the result of the Trade War initiated by the Trump Administration. Remember that the United States had similar rifts when it was newly independent of the British Empire, and again decades after the White House was burned by the British Army. Scholars and journalists in the West thought their narrative of an ‘interconnected world’ was coming true until 1914; in the end, it was American assistance in the First and Second World Wars that saved the UK. The era of American international law and internationalism was followed by the Cold War, in which, without the US, bipolarity could not have been achieved, nor could the world have changed as much as it did. Now that the US, along with European countries, largely owns the market of narrative over diplomacy and human rights, it largely owns the narrative of tech ethics as well. There is no doubt that the US is well ahead of China in some of its technological achievements, and despite the tremendous ‘peaceful’ yet undiplomatic rise of China, the US still stands as a ‘rogue superpower’ in the eyes of Europe and Asian countries. However, intersectionality, which stemmed from both French and American values, is now dominated by its American side rather than any European (or even French) side. To understand this, let us look at how these narratives are built.

Despite the immense research already in place, narratives about the use of AI are being built on the misconception that capitalism and American values are systemically untenable. Factually, American corporatism is messy, imperfect and certainly unsustainable for the world, but that has nothing, in principle, to do with capitalism. American values, on the other hand, rest on the Judeo-Christian values of the Bible and the idea of American exceptionalism, which is certainly not perfect. However, America’s idea of liberty is more expansive than the British and European models of liberalism (and even secularism). This is one reason why our judges and advocates, such as DY Chandrachud J and Gautam Bhatia respectively, draw heavily on precedents from US courts and rely on stalwarts like Ruth Bader Ginsburg and John Roberts. This is, in fact, one of the most interesting facets of Indo-US ties: we try to embody American constitutionalism without fully understanding its real and aesthetic elements and peripheries.

Now that we understand that such narratives are built in the name of civilizing people, let us turn directly to the narratives that drive the idea of AI Ethics. The first kind of narrative on AI Ethics, which rightfully began with Alan Turing, was about a mechanistic world, and it was furthered by science fiction and much other content meant to lure people. When neoliberalism became inevitable in 1992, and then with the formation of the World Trade Organization, the democratization of technology was driven mostly from the US. After the 2008-09 financial crisis, China emerged as a special player, but consolidated its significance properly only after 2011. Since neoliberalism is very much American by nature, it led to economic prosperity, but to inequality too, especially in the US. That also had much to do with outsourcing jobs that American talent could have inherited to China and other countries in order to expand global supply chains, a course endorsed both by George W Bush and Barack Obama, unfortunately. The notion of a global community was then endorsed, in which issues like poverty, terrorism, minority rights and climate change were accepted as tailor-made narratives. However, where it led us was not what was expected. Manipulating data to justify non-facts became a habit for American and European thinkers, scholars, entrepreneurs and even bureaucrats. That manipulation was exposed when the American people realized the ineffectiveness of the Kyoto Protocol and the 2015 Paris Agreement.

AI as an industry was democratized for good reasons, and we have examples like Singapore and the UAE, where democratization is happening. However, information warfare has led to the spread of much misinformation about AI, which should in any case be a matter of real concern. People should also read Yasha Levine’s exposé of the fake narrative spread by the Netflix film ‘The Social Dilemma’, which clearly explains how the narrative-making about concerns in cyberspace, in the name of surveillance and perception politics, has more to do with keeping people uninformed. There are now competing narratives to the American version of AI Ethics. We see them in China and Russia, where social credit systems are on course (in Russia, going by news reports, sooner rather than later). The idea of surveillance does not need to be viewed through an obsession with colonialism and the Western ‘white guilt’ bias all the time. Even South Korea, a liberal democracy, is good at surveillance and yet is less reported on. India is being blamed for an intent to surveil, but the laughable part is that even the Union Government and the Delhi Police do not have sufficiently effective facial recognition systems, for example. India does not have an ideological understanding of surveillance like China or the US, because the bureaucracy here has different anthropological and educational problems, which contribute to its ineffectiveness in policing and other areas. Nevertheless, cultural biases differ around the world, and it is therefore important to understand that the anthropological use of AI and its assessment in technology ethics differ as well, which is why we need more plurilateral approaches, and room for disagreement among countries, on the primary and secondary aspects of how AI Ethics principles are made. If people read Peter Cihon’s co-authored paper on AI governance, they will realize that the mere desire for, or desirability of, some ‘globalist’ rules or infrastructure for AI Ethics would not be of any use.
The recommendations are as follows:

  • Legal concepts like privacy, surveillance and even transparency must be treated under the methodology of strategic cultures, so as to understand how their vertical and horizontal hierarchies play out;
  • Notions of sovereignty in international law harden when the notions of power and competence are drawn too close together. If we believe in democracy and the rule of law, we must focus on models where ideology is treated not as a culture but, at best, as a mere school of thought, and we must ensure that perspectives are not binary but multi-directional, because catalysing change and deciphering policy paralysis would then resolve many of our problems;
  • Intersectionality is good only in policy-making, because there we can use the Hayekian method of libertarianism in economics and politics to understand how such complex adaptive systems affect our lives, and what strategic or tactical solutions would help us;
  • The idea of AI Ethics cannot be dominated by a monopsony over its cultural ‘roots’. Democracies like India and Singapore, and even member states of the African Union, must revisit their own cultural roots and develop culture-centric AI Ethics models better than those of Russia, China and the US, in order to help the international community;
  • It seems highly unlikely that a shift towards codifying AI Ethics principles into international law can be put forward, given the multiplicity of issues that AI can raise (as far as we know for now).

 

(This post is authored by Abhivardhan, Chairperson & Managing Trustee, Indian Society of Artificial Intelligence and Law)

Cite As: Abhivardhan, ‘Intersectionality, Moral Policing and International Law: An AI Perspective (Part II)’ (The Contemporary Law Forum, 22 January 2021) <https://tclf.in/2021/01/22/intersectionality-moral-policing-and-international-law-an-ai-perspective-part-ii> date of access.
