Can Your Honour prompt thy query? Analyzing the usage of AI by the Indian Judiciary

Introduction

Artificial Intelligence has drawn every possible sector into its fold, and the Indian judiciary is no exception. The Indian judiciary has turned to AI in three notable instances, the latest being the Manipur High Court using Google and ChatGPT 3.5 to research the service rules of Village Defense Force (VDF) personnel.[1] The Court observed that it was “compelled” to do so owing to the failure of an affidavit to explain “VDF”. The first such instance was in 2023, when the Punjab & Haryana High Court used ChatGPT to supplement its reasoning for denying bail to one Jaswinder Singh on the grounds of cruelty.[2] Similarly, the Delhi High Court witnessed the use of ChatGPT by lawyers to establish the reputation of the luxury brand Christian Louboutin for its red-soled spiked shoes.[3]

While turning to AI in a sector governed by the human mental faculty since time immemorial may seem an ‘easy way out’, it raises a question: are we morphing our dream of a ‘techade’ into the nightmare of ‘tech-degrade’? It is crucial to assess the ramifications of such use on the foundational principles of law and justice, and this blog undertakes that assessment.

AI in the Courtroom: A Case under Scrutiny

The savvy approach of the Manipur High Court has been widely praised as “a landmark verdict”. However, it must be borne in mind that not all things modern are necessarily progressive. The adoption of AI for judicial purposes comes with its own set of complications. Let us delve into a few pertinent to our discussion:

Credibility Concerns in the Courtroom

The biggest deterrent to integrating AI into the legal system is its questionable credibility. In the Christian Louboutin case, the Delhi High Court emphasized that “the reliability and accuracy of AI-generated data are still in the grey area.” AI systems typically source their training data from public datasets, crowdsourcing, user-generated content (UGC), synthetic data, and web scraping, methods that often lack accuracy and reflect public opinion rather than the factual basis crucial for legal proceedings. Generative AI systems have further fueled these concerns, with Google’s AI Overviews, for example, suggesting that a user “jump off the Golden Gate Bridge” and incorrectly asserting that Barack Hussein Obama is the only Muslim president of the USA. A judiciary relying on such outputs could invite chaos, as seen in Mata v. Avianca Inc. (2023),[4] where a New York lawyer was fined $5,000 for citing fake cases hallucinated by ChatGPT.

Currently, the prerequisite of reliable judicial data is unmet, with many key documents inaccessible online or lacking critical information. Moreover, such mechanical analysis often fails to capture the uniqueness of each case’s facts, and over-reliance on precedents threatens the evolution of legal practice, risking stagnation where consistent legal evolution is needed.

Evidentiary Value of AI

The basis on which the High Court admitted the AI-generated response is unclear in both its substantive and procedural aspects:

a. Substantive: The judicial system depends on the authenticity and accuracy of evidence to ensure fair outcomes. Misleading the court or presenting fabricated evidence is illegal, undermines the integrity of the judicial process, and attracts severe penalties under Section 193 of the Indian Penal Code. While AI may create an illusion of realism, its tendency to produce misleading or false output contradicts the core principles of the judiciary. Moreover, the current legal framework does not recognize AI as a legal entity, so AI cannot be held accountable for its actions; this leaves a gap in accountability and risks eroding trust in the judicial process.

b. Procedural: The Bharatiya Sakshya Adhiniyam, 2023 (BSA), which has replaced the Indian Evidence Act, 1872 (IEA), governs the rules of evidence. Under both Acts, AI outputs might be classified as “electronic evidence,” but their admissibility faces more challenges than traditional electronic evidence. Generative AI models often operate as “black boxes,” making it impossible to identify their sources and assess their accuracy, while “opaque algorithms” prevent understanding of the AI’s reasoning process. These issues, along with hallucinations and biased databases, render AI unsuitable as primary evidence.

Secondary evidence is also difficult to establish because of the requirement of a certificate of authentication[5] under Section 63 of the BSA and Section 65B(4) of the IEA, which must be signed by the ‘person in charge’ of the computer device, a task complicated by AI’s multiple contributors and self-learning features. Although electronic records are now classified as primary evidence under Section 57 of the BSA, a certificate is still required. The inability of the current laws to address these complexities undermines the basis for accepting ChatGPT’s assistance in this case.

Compromise with Natural Justice

Principles of natural justice form the bedrock of the entire edifice of law. Natural justice comprises three key rules:[6] the Hearing Rule (the right to present one’s case), the Bias Rule (the requirement of impartiality), and the Reasoned Decision Rule (the need for decisions based on valid reasons). In State Bank of India & Ors. v. Rajesh Agarwal, Chief Justice Dr. D.Y. Chandrachud emphasized that these principles must be strictly followed by adjudicating authorities. The Supreme Court has affirmed[7] that all stakeholders have a fundamental right to fair treatment, which includes an unbiased judge and a trial free from any prejudice. The use of AI technologies should therefore be assessed against the following crucial principles of a fair trial:

a. Reasoned Decision: Reasons are the heartbeat of an order; they ensure transparency, fairness, and accountability. Providing reasons for decisions allows affected parties to understand the rationale and seek judicial review. The “black box” phenomenon in AI, where the decision-making process is opaque, directly challenges this principle: AI systems often cannot explain the variables that led to a particular outcome, as illustrated in State v. Loomis, which highlighted the dangers of relying on opaque algorithmic risk assessments in judicial decisions. Without a clear understanding of how an AI system arrived at its conclusion, affected parties cannot meaningfully challenge or appeal the decision, a fundamental right in judicial processes. This lack of transparency risks violating due process and eroding trust in judicial outcomes. Judicial reliance on AI could therefore undermine the principle of the reasoned decision, a cornerstone of fair trial rights.

b. Judicial Impartiality/Bias: Impartiality is another cornerstone of natural justice. Judicial decisions must rest solely on relevant facts and evidence, free from bias or external influence.[8] AI raises impartiality concerns because of potential biases in its systems: the individuals who develop and oversee these algorithms can affect outcomes by instilling personal or systemic biases, and AI systems built by executives of profit-driven companies can open the door to unauthorized interference in the judicial process.

The anchoring effect is a cognitive bias whereby people rely heavily on the first piece of information presented (the ‘anchor’) when making decisions, colouring their subsequent judgements. Such bias has been documented in judicial officers evaluating algorithmic outputs and deciding whether to accept or reject them.

Additionally, the data used to train AI often reflects societal biases, leading to biased decisions. The “garbage-in, garbage-out” principle captures the risk of poor-quality data producing flawed outcomes. For instance, the Mathura rape case is acknowledged to be a judicial blunder in India. While human judges can disregard such precedents, an AI tool may not draw that distinction and may rely upon them to produce an output. This is particularly dangerous in a common law system, as it degrades the quality of the legal literature, which itself serves as law.
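To make the “garbage-in, garbage-out” risk concrete, here is a minimal Python sketch with entirely hypothetical data and logic (nothing here models any real legal AI tool): a naive precedent-retrieval tool that recommends outcomes by majority vote over past rulings will faithfully reproduce whatever bias those rulings contain.

```python
# Hypothetical sketch of "garbage-in, garbage-out": a naive
# precedent-retrieval tool with no notion of which precedents
# have since been discredited. Standard library only.
from collections import Counter

# Invented historical rulings: (case features, outcome). The first two
# stand in for discredited precedents of the Mathura-case kind.
historical_rulings = [
    ({"victim_testimony": "uncorroborated"}, "acquit"),  # discredited
    ({"victim_testimony": "uncorroborated"}, "acquit"),  # discredited
    ({"victim_testimony": "corroborated"}, "convict"),
]

def recommend(query, rulings):
    """Recommend the majority outcome among matching precedents."""
    matches = [outcome for features, outcome in rulings
               if features["victim_testimony"] == query["victim_testimony"]]
    return Counter(matches).most_common(1)[0][0] if matches else None

# A human judge can disregard discredited precedents; this tool cannot.
print(recommend({"victim_testimony": "uncorroborated"}, historical_rulings))
# -> "acquit", purely because the biased entries outvote everything else
```

The point is not that any real legal AI works this way, but that a system with no explicit mechanism for discounting discredited inputs lets biased data mechanically dominate its output.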

Further, there have been documented incidents of racial and gender bias being embedded in and magnified by AI algorithms. The racial bias embedded in the COMPAS system, which was judicially approved[9] despite being biased against black defendants, underscores the dangers of integrating biased AI into judicial processes. In a country like India, which still has a long way to go against institutionalized bias around caste and gender, integrating such AI algorithms into the judicial process is a net detriment.

Jeopardizing Confidentiality

The courts deal with a humongous amount of personal and non-personal data. Despite the principle of open court, there are certain exceptions and considerations that merit attention from the perspective of privacy. Not all judicial data is publicly accessible, and certain information is mandatorily kept confidential. For instance, trials involving sexual crimes, juveniles, matrimonial matters, etc., are statutorily mandated to be held in camera, having regard to the nature of such proceedings.
Most AI models train on user input data, which opens a Pandora’s box of privacy concerns; multiple class actions have been filed against AI giants over the infringement this training entails. OpenAI, for instance, openly states that ChatGPT trains on user prompts. Data collected by AI can be reused in responses to other users or reviewed by humans, making it effectively public. Integrating such systems into the judiciary risks exposing sensitive and restricted information, potentially harming the social and professional lives of those involved, particularly in India’s socio-cultural context.
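To illustrate the mechanism of concern, here is a deliberately simplified Python sketch (hypothetical throughout; no vendor’s system works this crudely) of how a model that logs user prompts for training can later surface a confidential filing verbatim to an unrelated user.

```python
# Hypothetical sketch: an assistant that logs every prompt into a
# training corpus, plus a crude "completion" that regurgitates logged
# text -- a stand-in for memorization in large models.
training_corpus = []

def chat(user_id, prompt):
    """Toy assistant that, like many hosted models, retains prompts."""
    training_corpus.append({"user": user_id, "text": prompt})
    # ... model response elided ...

def complete(prefix):
    """Toy completion: return any logged text matching the prefix."""
    return [d["text"] for d in training_corpus if d["text"].startswith(prefix)]

# Counsel pastes details from an in-camera matrimonial proceeding
# (the case number and text are invented).
chat("counsel_A", "In-camera petition HMA 24/2024: the parties allege ...")

# Later, an unrelated user probes the system and recovers the filing.
print(complete("In-camera petition"))
# -> ['In-camera petition HMA 24/2024: the parties allege ...']
```

Real large language models memorize far less literally than this, but published training-data extraction attacks have shown that verbatim leakage of training data is possible in practice.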

The Path Forward: Circumscribing AI’s Role

The tendency of AI to ‘replace’ traditional ways stands in stark contrast to the needs of any judicial system. As Prof. W. D. Ji notes, technological means like AI must be limited to an auxiliary role in judicial practice. To borrow his words, “Do not place the cart before the horse; otherwise, the judicial power will be led astray.” At its current, infant stage of development, AI is incapable of comprehending the complexities and consequences of human experience and the unique factors involved. As eloquently put by Justice Hima Kohli, “AI cannot substitute the wisdom and experience of a judge nor can it replace the human element required of a lawyer to conduct a case”. At most, such tools could be utilized for preliminary understanding or research purposes, not as a substitute for human judgment.[10] By circumscribing the role of AI to mechanical yet arduous jobs, such as translation, generating legal briefs, preparing cause lists, transcribing proceedings, and streamlining the functioning of the Supreme Court’s Registry, its potential may be harnessed without breaching the core principles of law and justice. Such confinement to clerical work also aligns with the vision of the second edition of the Supreme Court hackathon.

Given the lack of AI regulation in India and the government’s apprehension about introducing any law on the subject, reliance on AI by courts seems a premature attempt to hop on a trend. The judicial process demands a deep understanding of human circumstances and rests on prerequisites of credibility, evidentiary value, and natural justice that AI cannot satisfy. While AI may be assigned an auxiliary role, humans retain an irreplaceable grip on the gavel.

References

  1. Md Zakir Hussain v. State of Manipur, WP(C) No. 70/2023.

  2. Jaswinder Singh @ Jassi v. State of Punjab and Another, [2023] PHHC 44541.

  3. Christian Louboutin SAS & Anr. v. M/S The Shoe Boutique – Shutiq, CS(COMM) 583/2023 (Delhi High Court, 2023).

  4. Mata v. Avianca, Inc., No. 22-cv-1461 (PKC) (S.D.N.Y. 2023).

  5. Arjun Panditrao Khotkar v. Kailash Kushanrao Gorantyal, 2020 SCC OnLine SC 571 (Supreme Court, 14 July 2020).

  6. Mohinder Singh Gill v. Chief Election Commissioner, (1978) 1 SCC 405.

  7. Zahira Habibullah Sheikh and Ors. v. State of Gujarat and Ors., (2006) 3 SCC 374.

  8. C.K. Takwani, Lectures on Administrative Law (8th edn, Eastern Book Company 2024).

  9. State v. Loomis, 881 N.W.2d 749 (Wis. 2016).

  10. Christian Louboutin SAS & Anr. v. M/S The Shoe Boutique – Shutiq, CS(COMM) 583/2023 (Delhi High Court, 2023).

(This post has been co-authored by Stuti Singh and Aviral Pathak, fourth year law students at Rajiv Gandhi National University of Law, Punjab.)

 

CITE AS: Stuti Singh and Aviral Pathak, “Can Your Honour prompt thy query? Analyzing the usage of AI by the Indian Judiciary” (The Contemporary Law Forum, 13 September 2024) <https://tclf.in/2024/09/13/can-your-honour-prompt-thy-query-analyzing-the-usage-of-ai-by-the-indian-judiciary/> [date of access].

 
