Unveiling Promise and Perils: Opportunities, Risks, and Ethical Issues in the Role of Artificial Intelligence in Criminal Investigations

Introduction

Artificial Intelligence (“AI”) has lately come to influence practically every facet of our lives, including how we make decisions. The extensive use of AI by individuals, organisations, and governments has had a profound impact on society. India’s judicial system faces a number of issues, viz. the backlog of cases, the shortage of judges and other court personnel, lengthy court processes, and repeated adjournments. All of these problems delay justice, which in many cases amounts to its denial. In order to combat some of these issues, and to improve the efficiency and accuracy of law enforcement systems, AI needs to be introduced into the justice administration system. This opportunity, however, comes with certain inherent hazards and ethical dilemmas that need to be considered carefully. Accordingly, this article examines the potential applications, dangers, and ethical concerns surrounding the use and integration of AI in criminal investigations.

Laws related to AI in India

Currently, India has no specific laws governing the use of AI. However, certain provisions of existing laws suggest that a person who commits a crime using AI would be held liable under the Information Technology Act, 2000 (“IT Act”) and other cyber laws. Section 43A of the IT Act makes a body corporate handling sensitive personal data in a computer resource liable to pay damages if it fails to implement reasonable security practices and procedures, thereby causing wrongful loss or gain. Section 72A penalises a person, including an intermediary providing services under a lawful contract, who obtains access to personal information without consent, or in breach of that contract, knowing that it is likely to cause wrongful loss or gain; the punishment may extend to imprisonment of up to three years, a fine of up to five lakh rupees, or both. AI is a rapidly evolving technology, and India therefore needs rules governing it so as to minimise data mismanagement, invasion of privacy, unfair advantage, and other actions that may jeopardise the interests of others. In R Rajagopal v. State of Tamil Nadu, the Supreme Court of India held that, while the right to privacy is not expressly mentioned in the Constitution, its essence is, and it must therefore be examined and protected. Further, in the landmark decision of K.S. Puttaswamy v. Union of India, the Supreme Court stressed the importance of a comprehensive legislative framework for data privacy, capable of governing new challenges such as the use of AI in India. To give effect to that judgment, the government introduced the Personal Data Protection Bill, 2019, governing the processing of personal data of Indian residents by public and commercial organisations situated both inside and outside India. The Bill underwent a comprehensive review and was ultimately superseded by the Digital Personal Data Protection Bill, 2022, which has now been passed as the Digital Personal Data Protection Act, 2023; it places heavy emphasis on consent before data fiduciaries may process such data, subject to specific exceptions.

Section 43A of the IT Act is particularly significant for establishing a company’s responsibility when it uses AI to store and analyse sensitive personal data: as noted above, a body corporate that fails to follow reasonable security practices must pay compensation for the resulting loss.

The use of AI in the court system

AI-based technology can be used to record statements given in court without human intervention and to promote transparency in the conduct of trials. The manual execution of procedural steps, such as the issuance of summons and notices, securing the attendance of witnesses, and fixing the next date of hearing, causes unreasonable delay. AI-based solutions can be employed to minimise these inefficiencies and to make the trial process easier. AI may also be used to summarise or refine the contents of legal documents, allowing courts to issue interim orders swiftly. The Chief Justice of India, DY Chandrachud, has urged judges to embrace technology for the greater good of litigants, noting that litigants should not be made to suffer because judges are uncomfortable with technology. Quite recently, during the live proceedings of the Constitution Bench hearing on the Maharashtra political crisis, the Supreme Court deployed AI technology developed by a Bangalore-based start-up to transcribe arguments into text. The Delhi Police is also working with the Indraprastha Institute of Information Technology, Delhi to use AI, social media analytics, and image processing to track down criminals, manage traffic, and prevent terrorist activity.

China has already incorporated artificial intelligence into its court system. It has announced that Internet courts will handle millions of legal cases that do not require citizens to appear in person. These smart courts are powered by AI and presided over by non-human judges, allowing participants to file their cases online and resolve disputes through a digital court session. The machine learning (ML) system can scan court cases for references, propose relevant rules and regulations to the judge, draft legal documents, and correct human errors in a verdict. AI in the judicial system is said to have lowered a judge’s average workload by more than a third and saved Chinese people 1.7 billion working hours between 2019 and 2021. Given the sheer number of cases pending in Indian courts, a similar approach could be a game changer, greatly cutting the workload of judges and reducing the backlog.

Applications of Artificial Intelligence in Financial Crime Investigations

AI can support anti-money laundering (“AML”) efforts by detecting suspicious transactions that may signal money laundering. ML models can be trained to recognise common money laundering patterns, such as circular transactions, layering, and smurfing. By automating the examination of these transactions, financial institutions can comply with regulatory obligations more efficiently and lower the risk of being abused by criminals. Insider trading, the practice of trading stocks or other securities on the basis of material non-public information, is another form of financial crime that AI-driven analytics can help detect by monitoring communication channels and spotting anomalous patterns in trading behaviour. With the help of The Fin Lab, Tookitaki, a Singapore-based regulatory technology start-up, partnered with UOB to co-create a machine learning solution that allows the bank’s compliance team to undertake deeper and broader analyses as part of its AML efforts. UOB, aiming to improve its AML monitoring, identified a significant opportunity in leveraging machine learning to complement and improve its existing systems for detecting and preventing illegal money flows. A simplified illustration of the kind of rule such systems encode appears in the sketch below.
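To make this concrete, the following Python sketch shows one highly simplified rule of the kind AML systems encode: flagging accounts that repeatedly make deposits just below a reporting threshold within a short window, a pattern associated with smurfing or “structuring”. The threshold, window, minimum number of hits, and record fields are illustrative assumptions for demonstration only; they do not describe the Tookitaki/UOB solution or any regulator’s actual criteria, and a production system would combine many such signals with trained ML models and human review.

```python
# Illustrative sketch only: flag possible "structuring"/smurfing, i.e. several
# deposits kept just under a reporting threshold within a short window.
# The threshold, window and fields are assumptions, not real AML rules.
from collections import defaultdict
from dataclasses import dataclass
from datetime import datetime, timedelta

REPORTING_THRESHOLD = 10_00_000        # assumed threshold (e.g. INR 10 lakh)
NEAR_THRESHOLD = 0.9 * REPORTING_THRESHOLD
WINDOW = timedelta(days=7)             # look-back window
MIN_HITS = 3                           # near-threshold deposits needed to flag

@dataclass
class Txn:
    account: str
    amount: float
    timestamp: datetime

def flag_possible_structuring(txns: list[Txn]) -> set[str]:
    """Return accounts with MIN_HITS or more near-threshold deposits inside WINDOW."""
    near_threshold_times: dict[str, list[datetime]] = defaultdict(list)
    for t in sorted(txns, key=lambda x: x.timestamp):
        if NEAR_THRESHOLD <= t.amount < REPORTING_THRESHOLD:
            near_threshold_times[t.account].append(t.timestamp)

    flagged: set[str] = set()
    for account, times in near_threshold_times.items():
        start = 0
        for end in range(len(times)):            # sliding window over timestamps
            while times[end] - times[start] > WINDOW:
                start += 1
            if end - start + 1 >= MIN_HITS:
                flagged.add(account)
                break
    return flagged

if __name__ == "__main__":
    day0 = datetime(2023, 8, 1)
    sample = [Txn("A-001", 9_50_000, day0 + timedelta(days=i)) for i in range(3)]
    sample.append(Txn("B-002", 2_00_000, day0))
    print(flag_possible_structuring(sample))     # {'A-001'}
```

In practice, such hand-written rules serve only as a baseline; the ML models described above learn richer combinations of features (counterparties, geography, transaction velocity) and are periodically retrained, which is precisely why the accuracy and bias concerns discussed later in this article matter.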

Concerns regarding the use of Artificial Intelligence

At the same time, there are profound concerns about the use of artificial intelligence in the legal system. Chief among them is the possibility of bias. AI systems are only as accurate as the data on which they are trained; if the training data is biased or reflects social prejudices, AI systems may unwittingly perpetuate and exacerbate those biases, producing unjust outcomes for certain groups of people, weakening the fairness foundation of the judicial system, and reducing public faith in the courts. In criminal investigations, this can lead to unjust profiling, prejudice, and targeting. The use of AI technology in criminal investigations also presents serious privacy and surveillance concerns. Mass surveillance systems, such as facial recognition systems, can infringe on individuals’ privacy rights if they are not adequately controlled and backed by sufficient safeguards; balancing security requirements with privacy protection is a key difficulty that must be overcome. Facial recognition systems are biometric technologies that record a person’s facial features to authenticate their identity or locate them within a group, location, or database. San Francisco, long at the forefront of the technological revolution, took a stand against possible misuse by prohibiting the use of facial recognition software by law enforcement and other agencies, becoming the first major American city to ban a tool that numerous law enforcement agencies use to track down both minor criminal suspects and mass murderers.

Civil liberties organisations have raised concerns about the technology’s possible abuse by the government, fearing that it may push the United States into an unduly repressive surveillance state. According to an internal document detailing China’s artificial-intelligence surveillance regime, the Chinese tech giant Huawei has tested facial recognition software that could send automated “Uighur alarms” to government authorities when its surveillance systems identify members of the oppressed minority group. According to the test report, the technology could trigger such an alarm upon recognising the face of a member of the largely Muslim minority, potentially alerting authorities in China, where members of the group have been jailed en masse as part of a harsh government crackdown.

Artificial intelligence systems may also produce false positives (flagging benign transactions as suspicious) or false negatives (failing to detect genuine criminal activity). These mistakes can result in unnecessary investigations, reputational harm, or missed opportunities to catch criminals. Continuously improving and updating AI models, together with human monitoring, can help reduce these errors and increase the accuracy of AI-driven predictions. Biases in facial recognition technologies have already resulted in injustices in the United States. The case of Robert Williams, an African American man arrested after facial recognition technology wrongly matched his photo to that of a shoplifting suspect, is the first documented incident of this kind. Williams was arrested and jailed overnight, and his mugshot, fingerprints, and DNA were collected.
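The scale of the false-positive problem is easier to grasp with a short worked example. The Python sketch below computes false-positive and false-negative rates from a hypothetical confusion matrix; all counts are invented for illustration and do not describe any real screening system.

```python
# Illustrative sketch: error rates of a hypothetical screening model.
# All counts are invented for demonstration; they describe no real system.

def error_rates(tp: int, fp: int, tn: int, fn: int) -> dict[str, float]:
    return {
        # Share of genuinely benign cases wrongly flagged as suspicious.
        "false_positive_rate": fp / (fp + tn),
        # Share of genuinely criminal cases the system failed to detect.
        "false_negative_rate": fn / (fn + tp),
        # Of everything flagged, the share that was actually criminal.
        "precision": tp / (tp + fp),
    }

if __name__ == "__main__":
    # Hypothetical run over 100,000 transactions, 100 of them truly suspicious.
    for name, value in error_rates(tp=80, fp=1_990, tn=97_910, fn=20).items():
        print(f"{name}: {value:.2%}")
    # A ~2% false-positive rate still means ~1,990 innocent transactions
    # flagged, and a precision of only about 3.9% -- which is why human
    # review and continuous retraining matter.
```

Even a seemingly low error rate, applied at population scale, therefore translates into a large number of innocent people being flagged, which is the statistical backdrop to wrongful identifications such as the Williams case.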

The application of AI in criminal investigations also presents ethical and moral concerns, especially in terms of the delivery of justice and due process. Using AI to make decisions raises due process issues, particularly when the algorithm is used to make critical choices such as determining guilt or innocence: individuals may be denied the opportunity to challenge the algorithm’s conclusion, or the algorithm’s output may be given disproportionate weight in the decision-making process. AI makes choices based on massive volumes of data, which raises data privacy issues, especially if sensitive information is gathered and stored in a way that unauthorised parties may access or use. It can also be difficult to understand how AI algorithms arrive at a given decision, which makes accountability and transparency in the decision-making process problematic and poses a challenge to defending civil rights and maintaining fairness in judgments. To address these problems, it is critical that AI systems be created and deployed in an accountable and ethical manner. This might include defining criteria for data gathering and usage, establishing ways to ensure that algorithms are transparent and accountable, and introducing human oversight into the decision-making process to guarantee fair and accurate judgments. Ultimately, the objective should be to develop a system that harnesses the benefits of AI while safeguarding privacy and civil rights.

Way Forward

Since the application of AI has become quite common in India, the Ministry of Commerce and Industry, Government of India, formed a task force in August 2017 to explore the possibilities of using AI to ensure development across diverse sectors. The task force recommended establishing an Inter-Ministerial National Artificial Intelligence Mission for a five-year term, projected to cost around INR 1,200 crore. This Mission would serve as a nodal agency to oversee and coordinate all AI-related technological development in India.

The recently passed Digital Personal Data Protection Act, 2023 applies to the processing of digital personal data within India, whether collected online or collected offline and subsequently digitised. It also applies to such processing outside India if it is used to offer goods or services in India or to profile persons there. Even so, existing intellectual property rules do not specifically address AI-based concerns and remain anchored in traditional concepts such as books, creative writing, and discoveries. The scope of AI is far broader and must be dealt with differently from the current regime. The definitions of ‘patentee’ in Section 2(p) and ‘person interested’ in Section 2(t) of the Patents Act, 1970 pose a barrier to bringing AI within its scope. The current system and rules are incompatible with impending, and even existing, technological dynamics. In a country with one of the world’s largest populations, where a majority of people use social media and online commerce, it is critical that the laws be updated to reflect the new landscape.

Conclusion

The application of AI in criminal investigations has the potential to bring about substantial change. While AI can increase the efficiency, precision, and consistency of decision-making, it also raises questions about justice, transparency, and prejudice, as well as concerns regarding criminal culpability and the adequacy of existing statutes and regulations. It is critical for stakeholders to engage in ongoing discussions about the use of AI in criminal investigations, focused on identifying the risks and benefits involved and on developing suitable rules and regulations to ensure that its use is consistent with legal and ethical principles. Developing comprehensive data governance regulations, supporting diversity and inclusion in AI research, performing frequent audits to uncover and mitigate biases, and fostering collaboration among technology experts, legal professionals, and law enforcement organisations are some of the possible answers. Ultimately, AI has the potential to be a significant tool in assisting criminal investigations, but it must be used responsibly and ethically to guarantee justice, transparency, and respect for individual rights. By addressing the opportunities, dangers, and ethical challenges involved, society can reap the advantages of the new technology while adhering to core ideals of justice and accountability.

(This post has been authored by Arjun Chaprana, a student at Jindal Global Law School. Arjun had interned with TCLF in the month of June 2023)

Cite as: Arjun Chaprana, “Unveiling Promise and Perils: Opportunities, Risks, and Ethical Issues in the Role of Artificial Intelligence in Criminal Investigations” (The Contemporary Law Forum, 17 August 2023) <https://tclf.in/2023/08/17/unveiling-promise-and-perils-opportunities-risks-and-ethical-issues-in-the-role-of-artificial-intelligence-in-criminal-investigations/> date of access. 
