The Proposed AI Act of the EU: Counting the Crucial Lessons for India

Introduction

With contemporary developments in Artificial Intelligence (AI) and several related controversies cropping up, India must prepare to formulate a robust AI regulatory regime. In some quarters, awe at AI being extremely powerful is transforming into palpable trepidation that it is ‘a bit too powerful’, calling for the legal and policy regime to step up. If the recent FTC investigation of the groundbreaking ChatGPT is anything to go by, “better safe than sorry” is the best way forward. The Digital India Act, 2023, proposed by the Ministry of Electronics and Information Technology (MeitY), is a prospective legislation that aims to revolutionise India’s digital landscape and align it with global developments. The salient features of the proposal include addressing the open internet, accountability, and quality of service, all buttressed by a robust, accessible, and adaptable adjudicatory mechanism.

A good reference point for analysing the proposal is a parallel proposal, the Artificial Intelligence Act (AIA) of the European Union, which spearheads digital optimisation and the safe, transparent, and effective regulation of AI development, in concurrence with the values of the Union and the fundamental rights of its people. With the EU being a paragon in terms of technology laws, the Act undoubtedly has far-reaching implications beyond the borders of the EU. The European approach is one of capacity-building while ensuring trust and maximising stakeholder benefits.

In this context, we must attempt to juxtapose the salient features of this Regulation in the Indian setting. This piece aims to highlight the key features of the proposed Act, spell out their implications, and suggest their incorporation into the proposed Digital India Act, with the aim of enhancing the nation’s digital market and safeguarding innovation to enable technologies like AI.

Juxtaposing the salient features of the Proposed AI Act in the Indian scenario

Safeguard against social scoring leading to detrimental treatment

Article 5 of the AIA provides an unqualified prohibition on using AI to evaluate or classify the trustworthiness of natural persons through social scoring based on their predicted behaviour and discriminating against them accordingly. Assessed carefully, such a practice could have wide ramifications for the fundamental rights of the people, as it may lead to unjustified detrimental or unfavourable treatment of these persons or groups; the prohibition therefore acts as a necessary safeguard.

Here, the findings of the February 2021 report ‘Towards Responsible AI for All’, prepared by the NITI Aayog in collaboration with the World Economic Forum, which discusses the ethical implications of AI and addresses issues of discrimination, prejudice, and similar harms caused by AI systems, must be regarded in any future legislative development. The report highlights the capacity of AI to overcome social challenges in India. To achieve a sustainable and safe AI ecosystem, and to prevent the potential misuse of highly developed AI, the adoption of responsible AI must be encouraged.

The focus is on the need to respect the value chain, protect individuals and communities, and follow the principles of equality, inclusivity, and non-discrimination, in accordance with Articles 14, 15, and 16 of the Constitution. The High Court of Kerala, in Kadathanad Labour Contract Co-operative Society Limited v State of Kerala, flagged the issue of digital discrimination while allowing a writ petition, directing the competent authorities to add an option to their portal so that the petitioners could log in. Thus, such detrimental social scoring must be absolutely prohibited in the Indian legislation, and socially sensitive and perceptive technology should be placed in the public domain.

Obligations for high-risk AI systems

According to the proposed Act of the EU, a high-risk AI system is one which is intended to be used as a safety component of a product, or which is a product itself, and which, as per Chapter 5, is required to undergo a third-party conformity assessment before being placed on the market.

1. Developing risk management systems

With the trend of AI adoption dominating the globe, risk management assumes more importance than ever. AI risks are the potential harms caused by AI systems to people, which may or may not be predictable. The requirements for high-risk AI systems include the establishment of a risk management system for identifying and analysing foreseeable risks and adopting suitable countermeasures for their reduction. Additionally, standard documentation of such systems in turn evidences due compliance with these requirements.
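By way of illustration only, the minimal Python sketch below shows what a documentation-backed risk register of this kind might look like: identified risks are scored by likelihood and severity, and those above a threshold must carry a recorded countermeasure. The categories, scoring scale, and threshold are hypothetical assumptions for the example and are not drawn from the AIA text.

```python
# Illustrative sketch only: a minimal risk register of the kind a provider
# might keep to document identification, analysis and mitigation of
# foreseeable risks. The scoring scale and threshold are hypothetical.
from dataclasses import dataclass, field

@dataclass
class Risk:
    description: str
    likelihood: int      # 1 (rare) to 5 (almost certain)
    severity: int        # 1 (negligible) to 5 (critical)
    mitigation: str = "none recorded"

    @property
    def score(self) -> int:
        return self.likelihood * self.severity

@dataclass
class RiskRegister:
    risks: list[Risk] = field(default_factory=list)

    def add(self, risk: Risk) -> None:
        self.risks.append(risk)

    def high_priority(self, threshold: int = 12) -> list[Risk]:
        # Risks whose combined score meets the threshold need a documented
        # countermeasure before the system is placed on the market.
        return [r for r in self.risks if r.score >= threshold]

register = RiskRegister()
register.add(Risk("Biased outputs against a protected group", 3, 5,
                  "Re-balance training data; independent fairness audit"))
register.add(Risk("Leak of personal data in model logs", 2, 4,
                  "Pseudonymise logged inputs"))
for r in register.high_priority():
    print(f"[{r.score}] {r.description} -> {r.mitigation}")
```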

In times of recent news like the CoWIN data breach, Google cautioning people about chatbots, including its own AI Bard, and the never-ending saga around data privacy, countries like India need robust regulatory regimes. Incorporating a framework for risk management into Indian law can lead to better AI governance by setting guiding principles, developing processes to manage fallouts, and integrating an ethical code of conduct. Such a law would enable us to continuously improve technological infrastructure and organisational policies, as well as make systems well equipped to handle risks of bias, privacy breaches, or inefficacy.

2. Introducing elements of human oversight

The AI systems envisioned by the EU have elements of human oversight, intended to prevent foreseeable risks to health, safety, or fundamental rights. This is to be undertaken by monitoring the operation and limitations of the AI systems and addressing anomalies upon detection. Further, per Article 14 of the AIA, caution against automation bias is essential while interpreting the outputs generated. Human oversight undeniably reduces risks to sensitive data and could thereby also prevent a breach of the right to privacy under Article 21 of the Indian Constitution, as recognised by the nine-judge bench in the landmark Puttaswamy case.
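To make the idea of human oversight concrete, the following short Python sketch shows a human-in-the-loop gate in which a system's recommendation is never acted upon without explicit reviewer approval, and the model's confidence is surfaced so the reviewer is not nudged into automation bias. The function names and decision flow are purely hypothetical.

```python
# Illustrative sketch only: a human-in-the-loop approval gate. Nothing is
# executed on the model's say-so alone; a human reviewer must confirm.
from dataclasses import dataclass

@dataclass
class Recommendation:
    subject: str
    decision: str
    confidence: float

def review(rec: Recommendation, reviewer_approves: bool) -> str:
    # Surfacing the confidence value reminds the reviewer that the output
    # is a suggestion, not a verdict, which helps counter automation bias.
    print(f"Model suggests '{rec.decision}' for {rec.subject} "
          f"(confidence {rec.confidence:.0%}); human review required.")
    if reviewer_approves:
        return f"APPROVED: {rec.decision}"
    return "REJECTED: returned for manual assessment"

print(review(Recommendation("loan application #1042", "deny", 0.71),
             reviewer_approves=False))
```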

Many companies, such as Samsung, have restricted the use of generative AI tools by employees after successive episodes of leaks of confidential information through ChatGPT. To mitigate these security concerns, a provision for responsible human oversight in Indian legislation is necessary. Protecting personal data from unwanted use by third parties and setting up guardrails in the interest of public safety are vital. In this light, the concept of automated logging of events in high-risk AI systems for the purposes of traceability and monitoring is another promising feature that inspires confidence in AI systems.
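A minimal sketch of such automated event logging appears below: each interaction with the system is appended to an audit trail with a timestamp, so that outputs can later be traced back and reviewed. The log format, file name, and hashing choice are assumptions made for the illustration, not requirements taken from the AIA.

```python
# Illustrative sketch only: append-only logging of an AI system's inputs
# and outputs so that decisions can later be traced and audited.
import hashlib
import json
import time

def log_event(logfile: str, user_id: str, prompt: str, output: str) -> None:
    record = {
        "timestamp": time.time(),
        "user": user_id,
        # Store a digest rather than the raw prompt so sensitive content
        # is not duplicated into the audit trail.
        "prompt_sha256": hashlib.sha256(prompt.encode()).hexdigest(),
        "output": output,
    }
    with open(logfile, "a", encoding="utf-8") as fh:
        fh.write(json.dumps(record) + "\n")

log_event("ai_audit.log", "analyst-07",
          "Summarise the quarterly confidential report",
          "Summary withheld: document flagged as confidential")
```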

3. Post-market monitoring and market surveillance

The establishment of post-market monitoring systems is mandated for providers of high-risk AI systems to consistently ensure adherence to regulatory requirements, and providers are obliged to report incidents involving breaches of fundamental rights to the authorities. Article 61 of the AIA provides for the systematic collection and analysis of relevant data for the evaluation of compliance. Another requirement is the establishment of a market surveillance system for enforcement of the Act; Article 63 of the AIA obliges the National Supervisory Authority to regularly report the outcomes of surveillance to the Commission.

A sound Indian legal framework must propose strict security standards for market monitoring using AI tools. It is advisable that we opt for a model where regulators can probe and detect possible abuse of dominance and other anti-competitive practices, so that appropriate precautions can be taken in time. The Competition Commission of India (CCI) making efficient use of AI tools to identify anomalies in the markets stands as an inspiring example, one that can be emulated by TRAI, SEBI, and other bodies dealing with data-intensive operations, thus enhancing the opportunities for AI in the field of commerce. A market assessment system may be proposed which is operated through AI oversight and human supervision. With the potential such innovation holds, India’s transitioning e-market could prosper from the outcomes of modern technology.
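For flavour, the toy Python sketch below shows the simplest kind of statistical screen a regulator's analytics team might run to flag unusual price movements for human review. The data, z-score method, and threshold are hypothetical and chosen only for illustration; real surveillance tooling used by bodies like the CCI is far more sophisticated.

```python
# Illustrative sketch only: a z-score screen that flags price observations
# deviating sharply from the mean, for closer regulatory review.
from statistics import mean, stdev

def flag_anomalies(prices: list[float], threshold: float = 2.0) -> list[int]:
    mu, sigma = mean(prices), stdev(prices)
    # Guard against a zero standard deviation before dividing.
    return [i for i, p in enumerate(prices)
            if sigma and abs(p - mu) / sigma > threshold]

daily_prices = [101.2, 100.8, 101.5, 100.9, 101.1, 140.0, 101.3, 100.7]
for day in flag_anomalies(daily_prices):
    print(f"Day {day}: price {daily_prices[day]} flagged for review")
```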

Confidentiality of information and privacy risks

AIA’s Article 70 provides for consultation with the originating national competent authority, prior to the disclosure of any confidential information, when such disclosure would potentially endanger public or national security. Since AI relies on the data fed into its training, there is a risk of leaks of sensitive data; hence, access to sensitive information must be authorised with adequate safeguards in place. In this context, risk and impact assessments are crucial for evaluating potential privacy risks and the impact of data breaches on systems. Many banks and technology companies have banned the use of AI chatbots like ChatGPT by their employees, fearing leaks of sensitive data and data breaches. Incorporating best practices such as human oversight has thus become imperative.
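One practical safeguard of the kind such policies imply is a pre-submission guardrail that blocks prompts containing obviously sensitive identifiers before they ever reach an external chatbot. The Python sketch below is a hypothetical example of that idea; the patterns, labels, and function names are assumptions for illustration, and production systems would rely on proper data-classification tooling rather than a handful of regular expressions.

```python
# Illustrative sketch only: block prompts containing obviously sensitive
# identifiers before they are sent to an external AI service.
import re

SENSITIVE_PATTERNS = {
    "PAN-like number": re.compile(r"\b[A-Z]{5}\d{4}[A-Z]\b"),
    "Aadhaar-like number": re.compile(r"\b\d{4}\s?\d{4}\s?\d{4}\b"),
    "internal marker": re.compile(r"confidential", re.IGNORECASE),
}

def safe_to_send(prompt: str) -> tuple[bool, list[str]]:
    hits = [label for label, pattern in SENSITIVE_PATTERNS.items()
            if pattern.search(prompt)]
    return (not hits, hits)

ok, reasons = safe_to_send("Draft a mail quoting PAN ABCDE1234F")
print("blocked:", reasons if not ok else "none")
```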

The proposed Digital India Act envisions a secure cyberspace, making the nation digitally resilient through the empowerment of CERT-In. Mechanisms holding developers accountable for the actions of AI can aid in creating a responsible regime in which users have access to grievance resolution and trust the technology. The Digital Personal Data Protection Bill, 2022 does not differentiate between personal data and sensitive personal data, which currently stands as a significant drawback. Changes like this are critical for ensuring effective AI enforcement and inspiring market trust among people.

Conclusion

The AI Act is at the core of the EU’s digital single market strategy, which aims at the integration of a single homogenous market for AI. India can follow suit and pursue a regime that harnesses its maximum potential. With the traditional market undergoing a dynamic digital transformation, India is at the cusp of becoming a digital giant. This piece highlights those features of the EU’s AI Act that seem the most practically implementable in the near future, given the trajectory of India’s digital growth. Legislation that focuses on upholding fundamental rights and constitutional values, and provides additional safeguards for high-risk AI, with human oversight and market supervision, is the ideal one for a democracy like India.

We have already witnessed numerous ambitious prospects, such as the National Data Governance Framework Policy announced in the 2023-24 Union Budget, the AI for All report of the NITI Aayog, and the proposed Digital India Act, being tabled by the government and its various institutions. It is hoped that their implementation heralds a secure digital India. The author believes that the prospective Digital India Act will be an excellent opportunity for the country to channel its resources towards capacity building, enhancing trust, fulfilling transparency requirements, and building infrastructure to optimise outputs, envisioning the future of India as a leading digital superpower.

(This article has been authored by Himanshi Srivastava, a law student at Dharmashastra National Law University, Jabalpur.)

Himanshi Srivastava, ‘The Proposed AI Act of the EU- Counting the Crucial Lessons for India’ (The Contemporary Law Forum, 26 July 2023) <https://tclf.in/2023/07/26/the-proposed-ai-act-of-the-eu-counting-the-crucial-lessons-for-india/> date of access.
