The Need for an ‘India-centric’ Artificial Intelligence (Development & Regulation) Bill

Introduction

India’s swiftly evolving artificial intelligence (AI) landscape presents a mix of opportunities and challenges, necessitating a careful reassessment of the nation’s regulatory capacity. The spread of AI applications across industries underscores the need for clear, secure, and standardized regulation, and it accentuates concerns about transparency, safety, data processing, privacy, and consent. These challenges demand close attention and sector-specific standardization to drive policy interventions and innovation on a global scale.

The exponential growth of artificial intelligence (AI) technologies in India has prompted the government to contemplate comprehensive regulatory measures. While existing frameworks, such as the Digital Personal Data Protection Act, 2023, and the Information Technology Act, 2000, establish a foundation for data governance, they prove inadequate in addressing the intricate challenges posed by AI. This article critically analyses the need for a dedicated Artificial Intelligence (Development & Regulation) Bill in India and deconstructs the contents of the Bill, which was proposed as a prototype by Indic Pacific Legal Research to the Ministry of Electronics & Information Technology, Government of India. The analysis also covers key inferences drawn from the IndiaAI Expert Group Report, First Edition (2023), and from several global and national-level AI instruments and policies.

The Indian AI Regulatory Landscape at a Glance

India stands at a juncture where a reinvigoration of its regulatory capacity and intelligence channels is imperative to enable technology-agnostic regulation and governance of AI technologies. Transparency and safety in AI applications are primary concerns across emerging markets. Many AI deployments lack transparency on matters of commercial viability and safety, particularly in data processing, privacy, consent, and dark patterns. The dearth of sector-specific standardization for algorithmic activities and operations obstructs regulatory interventions and innovation at the global level. Judicious enforcement of existing sector-specific regulations, with an initial emphasis on data protection and processing, offers an effective path towards AI regulation.

India is an active participant in the global discourse on AI ethics. Nonetheless, the prevailing AI ethics advocates and think tanks concentrated in Western Europe and North America espouse Responsible AI principles with a confined geographical focus, championing an ‘Asia-Pacific’ approach to AI ethics. This narrow paradigm tends to serve the interests of Chinese technology policies while marginalizing India within Asia in crucial technology policy engagements. India’s insightful contributions to AI ethics, often overlooked in discussions about AI ethics and policy in South East Asia and Japan, now find a unique platform for collaboration through India’s commitment to the Quad partnership in the Indo-Pacific. This offers an unparalleled opportunity to synergize with AI ethics industry leaders in South East Asia, fostering shared objectives within the Quad framework.

In the midst of legislative strides in areas such as digital sovereignty, digital connectivity, drones, and data protection, the discourse on AI and law in India has evolved little. Conversations predominantly revolve around data protection rights and the civil and criminal liabilities of digital intermediaries. Although the government has put forward frameworks to regulate the use and processing of personal and non-personal data, including the Digital Personal Data Protection Act, 2023, and the proposed Digital India Act, the focus on AI regulation remains constrained. This constraint extends even to the frameworks envisioned for the National Data Management Office (NDMO) in the IndiaAI Expert Group Report, First Edition (2023).

Decoding the IndiaAI Expert Group Report, First Edition (2023)

The NDMO, as proposed in the IndiaAI Expert Group Report, First Edition (2023), assumes a pivotal role in championing data quality standards that feed into the ethical standards of AI systems, ensuring their industrial viability and safety. However, the NDMO’s purview is principally confined to data quality, processing, and management. While implementing the outlined measures would be a positive stride, it is noteworthy that data processing agreements can embody a company’s AI ethics practices, thereby offering a technology-neutral means of regulating AI, especially with respect to non-personal data.

The conspicuous absence of self-regulatory frameworks such as Explainable AI or Responsible AI guidelines among eminent AI and tech market players underscores the need for a comprehensive and distinctive approach to AI regulation, aligned with India’s requirements and standards. Importantly, reliance on Western or Anglophone standards is an unsuitable trajectory for India’s burgeoning AI landscape. The potential ramifications of international legal treaties for India’s national data repositories, domiciled on cloud servers in foreign jurisdictions, are a further area of concern. A robust and enforceable AI regulatory framework emerges as an imperative safeguard against foreign interference with Indian data, necessitating a rigorous examination of the compatibility of recent free trade agreements (FTAs) with this goal.

Intellectual Property Management Dilemmas

The IndiaAI Expert Group Report identifies critical gaps in addressing intellectual property (IP) issues pertaining to AI. Ambiguities surrounding IP ownership in joint ventures, potential exploitation of student entrepreneurs, and the absence of a transparent IPR determination process underscore the need for a focused AI regulation. A spatial approach, anchored in technical features and commercial viability, should guide the Governing Council in crafting a definitive IP model.

To address these IP concerns, a technology-neutral AI regulation for the Indian economy should provide clear guidelines for the Centre of Excellence (CoE) on IP ownership in joint ventures, the protection of student entrepreneurs, and a transparent IPR determination process. The regulation must consider technical features and commercial viability, providing a definitive framework for IP models.

Tangible Quantifiable Yearly Outputs of the NDMO

The proposed NDMO focuses on data quality standards, revealing a lacuna in regulating AI directly. While its measures are commendable, the NDMO’s 18-month targets fail to address AI-specific concerns. A separate AI Regulation is crucial to ensure that the development, deployment, and proliferation of AI adhere to ethical and legal standards.

Concerning NDMO outputs, the AI Regulation should bridge the gap by incorporating AI-specific measures. This includes addressing issues related to open-source solutions, ensuring human oversight in AI systems, and fostering collaboration between the NDMO and AI regulatory authorities. It is imperative to avoid a disjointed approach and establish a cohesive legal foundation.

Qualification Packs (QPs) and National Occupational Standards (NOS) for AI & Big Data

The IndiaAI Expert Group Report’s reference to QPs and NOS underscores the importance of skill development. However, it also highlights the need for an AI Regulation to complement these standards. Detailed standards for Levels 6, 7, and 8 must be integrated into an overarching regulatory framework to ensure the responsible use of AI.

For QPs and NOS, an AI Regulation should build upon existing skill development standards. It must provide detailed guidance for Levels 6, 7, and 8, incorporating AI best practices, risk assessments, and transparent performance standards. This ensures a skilled workforce aligned with ethical and legal considerations.

Some Global Inferences for an Indian AI Regulation

The global landscape of artificial intelligence (AI) regulations is evolving rapidly, with various countries adopting distinct approaches to ensure responsible AI development and deployment. This section delves into the contrasting regulatory frameworks of China, the United States, and the European Union, analyzing their key features and implications for the responsible use of AI technologies.

Chinese Approach to AI Regulation

China’s approach to AI regulation is characterized by a strong emphasis on government control and data ownership. The Chinese government pursues a maximalist strategy, aiming to regulate all facets of AI development and deployment, as is evident in the comprehensive scope of its regulations, which extend from data privacy to algorithm ethics. The government also adopts a micromanagement strategy, dictating specific requirements for AI service providers, including the necessity of obtaining licenses and implementing precise technical measures. The assertion of public ownership over data and algorithms is a distinctive feature, treating them as government-managed public resources; requirements for AI service providers to register with the government reinforce its authority over AI data.

Biden Administration’s Executive Order on AI

The United States, in contrast, charts its course through the Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. The order prioritizes robust evaluations of AI systems, emphasizing post-deployment performance monitoring to ensure safety and efficacy in real-world scenarios. Transparency and accountability are underscored through the call for effective labeling and content provenance mechanisms, addressing concerns related to the misuse of AI-generated content. The executive order adopts a flexible and technology-conscious definition of AI, acknowledging the dynamic nature of AI technologies. Clear and concise definitions provided within the order, such as “synthetic content,” “testbed,” and “watermarking,” contribute to a common understanding of key terms in the AI context.

European Union’s Artificial Intelligence Act

The European Union introduces a comprehensive pan-European regulatory framework for AI systems through the Artificial Intelligence Act. A risk-based classification of AI systems is a pivotal feature, with stringent regulations applied to high-risk AI applications. The EU’s horizontal approach, opting for Option 3+ over Option 4, aims to strike a balance between robust regulation and fostering AI innovation. The Act prioritizes human oversight, emphasizing the importance of quality risk assessments for AI companies and robust data governance practices. Detailed criteria and procedures for compliance are established, ensuring ethical and safe AI development while safeguarding fundamental rights. The Act aligns with the unique requirements of the European AI landscape.

The Bletchley Declaration on AI Safety

The Bletchley Declaration, a collaborative effort involving several countries, including India, sets out to establish a framework for recognizing and addressing AI risks at both multilateral and domestic levels. The declaration, agreed at the UK’s AI Safety Summit under the leadership of Prime Minister Rishi Sunak, underscores the international nature of AI risks, acknowledging that many challenges are inherently globalized. Notably, the focus is on highly capable general-purpose AI models, especially in cybersecurity and biotechnology.

A key feature of the Bletchley Declaration is the emphasis on safety throughout the AI development process. The document recognizes the special responsibility of developers of cutting-edge AI technologies, referred to as Frontier AI, to ensure the safety of their creations. This involves implementing safety testing procedures, conducting thorough evaluations, and employing suitable safeguards. Transparency and accountability are highlighted, urging stakeholders to provide clear explanations of their strategies for assessing, monitoring, and mitigating potential risks associated with their AI systems.

The agenda outlined in the Bletchley Declaration includes the importance of identifying shared AI safety risks globally, the need for risk-based policies at the national level, collaboration among nations with flexibility based on their contexts, and the stress on transparency by private entities developing advanced AI technologies. Additionally, the declaration suggests the development of evaluation metrics, safety testing tools, and the enhancement of public sector capabilities and scientific research in the AI domain.

UNESCO Recommendation on the Ethics of AI

The UNESCO Recommendation on the Ethics of Artificial Intelligence (2021), as interpreted by the Indian National Commission for Cooperation with UNESCO, offers a set of principles, values, and policy intervention areas. Unlike the Bletchley Declaration, this recommendation places a strong emphasis on the ethical dimensions of AI and its impact on human rights.

One significant aspect of the UNESCO Recommendation is its call to recognize the right to development as a means to fully realize fundamental freedoms and human rights through AI technologies. It extends the scope beyond mandated domains, encompassing emerging technologies like the Internet of Things (IoT), machine learning, and deep learning. The recommendation introduces a checklist matrix covering AI readiness, environmental impact assessment, stakeholder assessments, and the effectiveness of AI ethics policies.

The UNESCO Recommendation addresses the need for ethical governance, advocating for internationally implemented checklists on explainability and transparency requirements. It suggests providing access to datasets to maintain value realization and parity among dataset providers. The recommendation highlights the importance of accountability, transparency, responsibility, efficiency, and effectiveness across algorithms and datasets, as well as affordable AI solutions. Additionally, it emphasizes equitable access to hardware, AI solutions in local languages, and a global treaty to prevent the harmful use of AI in subversion activities.

Hiroshima AI Process Comprehensive Policy Framework

The Hiroshima AI Process Comprehensive Policy Framework, a product of the G7 nations, focuses on comprehensive risk management throughout the AI lifecycle. It mandates AI developers to implement risk management frameworks subject to sector-specific, sector-neutral, and strategic sector-related requirements.

This framework emphasizes traceability and documentation, encouraging developers to maintain records of datasets, processes, and decisions during AI development. It addresses specific risks, including chemical, biological, radiological, and nuclear (CBRN) risks, offensive cyber capabilities, health and safety risks, self-replication, societal risks, and systemic risks. Continuous post-deployment monitoring is promoted to identify and address vulnerabilities, incidents, and patterns of misuse.
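
To make the traceability requirement concrete, below is a minimal sketch (in Python) of what a lifecycle audit record could look like. The schema, field names, and stage labels are assumptions for illustration only; the framework prescribes outcomes such as traceability and documentation, not any particular format.

```python
import json
from dataclasses import asdict, dataclass, field
from datetime import datetime, timezone

@dataclass
class LifecycleRecord:
    """Illustrative audit entry for one event in an AI system's lifecycle."""
    stage: str                      # e.g. "data-collection", "training", "deployment"
    description: str
    datasets: list = field(default_factory=list)
    decided_by: str = "unassigned"
    timestamp: str = field(default_factory=lambda: datetime.now(timezone.utc).isoformat())

# A running log of who did what, with which data, and when.
audit_log = [
    LifecycleRecord("data-collection", "Ingested licensed text corpus", datasets=["corpus-v1"]),
    LifecycleRecord("training", "Fine-tuned base model on corpus-v1", decided_by="ml-team-lead"),
]
print(json.dumps([asdict(r) for r in audit_log], indent=2))  # exportable for audits
```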

Encouraging third-party and user involvement in vulnerability reporting is a notable feature, promoting collaboration for incident reporting and mitigation. The framework stresses responsible information sharing, shared standards and best practices, AI-generated content identification, content provenance, research prioritization, and investment in mitigation tools.

Inferences Drawn

Comparing these regulatory frameworks reveals distinct approaches shaped by the socio-political contexts of each region. China’s emphasis on government control aligns with its broader regulatory philosophy, reflecting a desire for centralized oversight. In contrast, the United States prioritizes flexibility, embracing a dynamic definition of AI and focusing on post-deployment evaluations to enhance real-world safety. The European Union’s risk-based classification and horizontal approach exemplify a nuanced strategy, balancing stringent regulations with an innovation-friendly environment. Of course, China’s approach raises questions about individual privacy and autonomy, as heightened government control over AI data may impact user freedoms. In the United States, the focus on transparency and post-deployment monitoring aligns with democratic values, emphasizing accountability without stifling innovation. The EU’s risk-based methodology acknowledges the diverse AI landscape while ensuring a common regulatory framework. However, the EU AI Act is overly elaborate in its underpinnings of AI liability and accountability, which could make it impractical to implement at a pan-European level. For example, Annex I, which offers a glossary of definitions of “artificial intelligence”, caters to a restrictive approach to regulating artificial intelligence compared with the Biden Administration’s Executive Order, which takes a more reasonable approach.

A comparison of the international declarations and “legal instruments” agreed so far (for example, the AI declarations and frameworks agreed at Bletchley and Hiroshima) reveals diverse approaches influenced by regional priorities and values. The Bletchley Declaration focuses on safety in AI development, with an emphasis on transparency and accountability. The UNESCO Recommendation centres on ethics and human rights, extending the scope beyond mandated domains and suggesting a checklist matrix for comprehensive assessments. The Hiroshima AI Process Framework mandates comprehensive risk management throughout the AI lifecycle, with a focus on traceability and documentation.

The implications of these regulatory frameworks are multifaceted. The Bletchley Declaration highlights the importance of global collaboration and transparency in addressing AI risks, whereas the UNESCO Recommendation underscores the ethical dimensions of AI, advocating for equitable access, language inclusivity, and a global treaty. The Hiroshima AI Process Framework, for its part, emphasizes mandatory risk management and documentation, promoting shared standards and best practices.

The Artificial Intelligence (Development & Regulation) Bill, 2023

The necessity of having an AI regulation in India could be argued on multiple grounds. However, the specific intent behind drafting the Artificial Intelligence (Development & Regulation) Bill, 2023 could be summarised in the following points:

  • A technology-neutral regulation on AI Safety is necessary to develop contours of AI regulation in India, with an outcome-based risk perspective;
  • An AI regulation could strengthen India’s Digital Public Infrastructure, i.e., the India Stack system, by legitimizing and framing inclusive and practical ways to govern data and algorithms, taking its cue from the IndiaAI Expert Group Report;
  • The Information Technology Act, 2000 is inadequate to address the challenges of artificial intelligence proliferation, innovation, use and development in India;
  • India is very much a part of the Global South and faces the kinds of policy and legal issues that surround AI-related products, services, and systems (Infrastructure as a Service (IaaS), for example); an AI regulation could therefore help New Delhi set precedents for soft yet flexible ways of regulating complex technologies like AI, serving as a model of sorts for underdeveloped and emerging Global South economies.

The Contents of the Bill

The extensive coverage of the Bill is evident in its detailed provisions, covering risk stratification, standards, certifications, ethics, governance, and legal considerations. Let’s delve into the key areas covered by the Bill.

Key Definitions

Artificial Intelligence (AI)

  • The Bill defines AI as an information system employing computational, statistical, or machine-learning techniques to generate outputs based on given inputs.
  • The conceptual classification outlines various lenses for evaluating AI, including technical concepts, issue-specific considerations, ethical perspectives, phenomena-based assessments, and anthropomorphism-based evaluations.
  • Technical and commercial classification recognizes AI as a product, service, or system, highlighting the multifaceted nature of AI technologies.

AI-Generated Content

  • This definition encompasses content, physical or digital, significantly modified by an AI system.
  • It includes text, images, audio, and video created through various AI techniques.
  • Subject to the test or use case, this definition is pivotal in determining the scope of AI’s creative outputs.

Content Provenance

  • Refers to the identification, tracking, and watermarking of AI-generated content to establish its origin and authenticity.
  • This is crucial for addressing concerns related to the integrity and reliability of AI-generated materials.

Data

  • Defined as a representation of information, facts, concepts, opinions, or instructions suitable for human or automated processing.
  • The definition recognizes the diverse forms of data that AI systems may process.

Data Fiduciary and Data Principal

  • Data Fiduciary is a person determining the purpose and means of processing personal data.
  • Data Principal refers to the individual to whom the personal data relates.
  • In cases of a child or a person with a disability, the definition includes parents or guardians, emphasizing protection for vulnerable groups.

IDRC (IndiaAI Development & Regulation Council)

  • Established as a statutory and regulatory body to oversee AI development and regulation across government bodies.
  • Aims for coordination and a whole-of-government approach in shaping AI governance.

Employment and Skill Security Standards

  • Address risks related to the deployment and utilization of AI systems concerning employment and skills.
  • Reflects the bill’s recognition of the potential impact of AI on the workforce.

Ethics Code

  • Governs the development, procurement, and commercialization of AI technologies.
  • Emphasizes a pro-innovation and technology-neutral approach to AI governance.

High, Medium, and Narrow Risk AI Systems

  • Categorizes AI systems based on potential risks.
  • Provides a risk-stratification framework to tailor regulations according to the risk profile of AI systems.

Insurance Policy

  • Encompasses measures and requirements concerning insurance for research and development, production, and implementation of AI technologies.
  • Introduces a financial dimension to risk management in AI.

Risk Stratification (Sections 3 & 4)

  • The Bill categorizes AI systems based on risk into narrow, medium, high, and unintended risk categories (a toy illustration of this tiering appears after this list).
  • Prohibition of unintended risk AI systems reflects a proactive approach to prevent potential harm.
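
As a purely illustrative aid, the sketch below (in Python) shows one way such a risk tiering could be expressed in software. The tier names mirror the Bill’s categories, but the sector list, inputs, and scoring logic are hypothetical assumptions, not the statutory test.

```python
from enum import Enum

class RiskTier(Enum):
    """Risk categories mirroring Sections 3 & 4 of the draft Bill."""
    NARROW = "narrow"
    MEDIUM = "medium"
    HIGH = "high"
    UNINTENDED = "unintended"  # prohibited outright under the Bill

# Hypothetical list of strategic sectors; the Bill's own schedule would control.
STRATEGIC_SECTORS = {
    "telecom", "space", "health", "digital public infrastructure",
    "energy", "biotechnology",
}

def classify(sector: str, autonomous: bool, foreseeable_purpose: bool) -> RiskTier:
    """Toy classifier illustrating an outcome-based tiering, not the statutory test."""
    if not foreseeable_purpose:
        return RiskTier.UNINTENDED   # no foreseeable purpose: barred outright
    if sector.lower() in STRATEGIC_SECTORS:
        return RiskTier.HIGH         # strategic sectors attract sector-specific standards
    return RiskTier.MEDIUM if autonomous else RiskTier.NARROW

print(classify("health", autonomous=True, foreseeable_purpose=True))  # RiskTier.HIGH
```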

Sector-Specific Standards (Sections 5 & 8)

  • High-risk AI systems associated with strategic and non-strategic sectors, including telecom, space, health, digital public infrastructure, energy, and biotechnology, are subject to sector-specific standards.
  • This sector-specific approach recognizes the diverse applications of AI and tailors regulations accordingly.

Quality and Risk Assessment (Sections 6 & 7)

  • A framework for Quality Assessment and Risk & Vulnerability Assessment is established for high-risk AI systems, ensuring a thorough evaluation process.
  • Certification of AI systems, encompassing ethical, technical, and commercial practices, adds a layer of scrutiny to the development and commercialization of AI.

Ethics Code (Section 7)

  • An ethics code for the development, procurement, and commercialization of AI technologies reflects a pro-innovation, pro-development, and technology-neutral approach to AI governance.
  • This code aims to balance ethical considerations with fostering innovation in the AI landscape.

IndiaAI Development & Regulation Council (IDRC) (Section 9)

  • The establishment of IDRC as a statutory and regulatory body with a whole-of-government approach highlights the need for coordination across government bodies, ministries, and departments.
  • This intra-governmental operability ensures a unified and coordinated effort in regulating AI.

Post-Deployment Monitoring (Sections 11 & 12)

  • High-risk AI systems undergo post-deployment monitoring, emphasizing the importance of continuous evaluation and risk management in real-world scenarios (a minimal monitoring sketch follows this list).
  • This feature aligns with the dynamic nature of AI applications and their potential impacts.
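
A minimal sketch of what post-deployment monitoring could look like in practice is given below, assuming a simple error-rate drift check; the window size and tolerance threshold are illustrative values, not requirements drawn from the Bill.

```python
import statistics
from collections import deque

class PostDeploymentMonitor:
    """Hypothetical drift monitor: compare a live error rate against a baseline.

    The window size and tolerance are illustrative assumptions, not figures
    prescribed by the Bill.
    """

    def __init__(self, baseline_error: float, window: int = 100, tolerance: float = 0.05):
        self.baseline_error = baseline_error
        self.outcomes = deque(maxlen=window)  # 1 = error, 0 = correct
        self.tolerance = tolerance

    def record(self, was_error: bool) -> None:
        self.outcomes.append(1 if was_error else 0)

    def drifted(self) -> bool:
        if len(self.outcomes) < self.outcomes.maxlen:
            return False  # not enough post-deployment evidence yet
        live_error = statistics.mean(self.outcomes)
        return live_error - self.baseline_error > self.tolerance

monitor = PostDeploymentMonitor(baseline_error=0.02)
for _ in range(100):
    monitor.record(was_error=True)  # simulate a sustained run of failures
print(monitor.drifted())  # True: the point at which an alert or review would be triggered
```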

Legal Frameworks

  • The Bill acknowledges the concurrent provisions related to existing legal frameworks such as the Digital Personal Data Protection Act, 2023, and the proposed Digital India Act.
  • This recognition ensures coherence and compatibility with existing legislation, avoiding overlaps and conflicts.

Standards and Best Practices (Sections 15 & 16)

  • Shared sector-neutral standards and best practices apply to all AI systems, promoting a consistent and collaborative approach across sectors.
  • This helps create a standardized framework for the development and deployment of AI technologies.

Content Provenance and Identification (Section 18)

  • Standards for content provenance, identification, and watermarking of AI-generated content contribute to transparency and accountability (a toy provenance sketch follows this list).
  • These measures address concerns related to the authenticity and origin of AI-generated materials.
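
For illustration, the sketch below shows the core idea behind content provenance: binding AI-generated content to a tamper-evident identifier plus origin metadata. Real provenance and watermarking schemes (for example, C2PA-style manifests) add cryptographic signatures and edit histories; the function and field names here are hypothetical.

```python
import hashlib
import json
from datetime import datetime, timezone

def provenance_manifest(content: bytes, generator: str) -> dict:
    """Toy provenance record: a content hash plus origin metadata.

    Production schemes add cryptographic signatures and edit histories;
    this only shows binding AI-generated content to a verifiable identifier.
    """
    return {
        "sha256": hashlib.sha256(content).hexdigest(),  # changes if the content changes
        "generator": generator,                         # which AI system produced it
        "ai_generated": True,
        "created_utc": datetime.now(timezone.utc).isoformat(),
    }

artwork = b"...AI-generated image bytes..."
manifest = provenance_manifest(artwork, generator="hypothetical-model-v1")
print(json.dumps(manifest, indent=2))
# Any later edit to `artwork` breaks the hash match, exposing the modification.
```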

Employment and Skill Security (Section 19)

  • The Bill introduces standards addressing the risks associated with AI systems, emphasizing employment and skill security.
  • This recognizes the potential impact of AI on the workforce and aims to mitigate associated risks.

Insurance Policies (Section 20)

  • The inclusion of an insurance policy requirement for research & development, production, and implementation of AI technologies adds a financial dimension to risk management.
  • This ensures accountability and provides a safety net for potential liabilities.

Certain provisions are inspired by the Digital Personal Data Protection Act, 2023, such as those providing that appeals in civil matters under the proposed Bill lie before the Appellate Tribunal, i.e., the Telecom Disputes Settlement and Appellate Tribunal (TDSAT) established under section 14 of the Telecom Regulatory Authority of India Act, 1997. Since India does not yet see AI-related disputes in volumes comparable to established fields of law, such as criminal jurisprudence or property law, it is sensible to subject AI-related disputes to the TDSAT.

Conclusion

The draft Bill’s first version was submitted to the Ministry of Electronics and Information Technology on November 7, 2023. Subject to public consultations and streams of feedback (both public and private) from the law, policy, economics, technology, and other domain ecosystems in India, future versions of the draft Bill will be proposed and submitted to the Government of India. This move by Indic Pacific Legal Research was motivated by a private effort to draft a Privacy Bill in 2017-2018 for public discourse, around the time the landmark Puttaswamy I and Puttaswamy II judgments, on privacy jurisprudence and the Aadhaar Act respectively, were delivered. It is hoped that the draft Bill promotes a democratic, reasonable, open-ended, and conscious policy discourse on AI regulation in India, so that people can anticipate and understand how AI regulation could be achieved through informed steps.

(This article has been authored by Abhivardhan, who is the Managing Partner, Indic Pacific Legal Research and Chairperson & Managing Trustee of Indian Society of Artificial Intelligence and Law)

CITE AS: Abhivardhan, “The Need for an ‘India-centric’ Artificial Intelligence (Development & Regulation) Bill” (The Contemporary Law Forum, 10 December 2023) <tclf.in/2023/12/10/the-need-for-an-india-centric-artificial-intelligence-development-regulation-bill/> date of access
