By far the greatest danger of Artificial Intelligence is that people conclude too early that they understand it. – Eliezer Yudkowsky
Introduction
In a recent incident, a Global Positioning System (GPS)-based navigation application, an artificial intelligence (AI)-assisted tool commonly used for directions, led to the death of three people when their car plunged into the Ramganga River from an under-construction bridge. The navigation software had malfunctioned and misled the travellers by showing the bridge as operational, without displaying even the slightest warning. The fact that this incident was not an isolated one raises broader concerns about the reliability of AI systems. The very next week, owing to a similar GPS malfunction, three car passengers narrowly escaped death when their car plunged into a canal near Barkapur village in Uttar Pradesh. The central legal issue in these instances is identifying the person or entity on whom liability can be imposed, an area with little settled jurisprudence. Against this backdrop, this article seeks to analyse the issue comprehensively. In doing so, it develops certain models that can serve as guiding frameworks for courts confronting this uncertainty in such cases.
Unpacking AI Liability: Who Bears the Legal Responsibility for AI-Related Harm?
When an AI system causes harm, the question of who is responsible and how liability is allocated among the developer, the user, and the victim is particularly important, and it sets these cases apart from the established jurisprudence on the allocation of liability in non-AI contexts. The answer depends largely on whether AI is considered a separate legal entity, since liability can only fall upon legal or juristic persons (as seen in cases of corporate criminal liability). In India, AI systems are not recognised as legal persons. This means that they cannot be held liable for harm in the same way a human or a corporation can be. Legal personhood is tied to individual autonomy, which AI lacks, because its activities are driven primarily by the inputs provided through its programming. Therefore, the general consensus is that liability must be assigned to the entities behind AI, such as developers, users, or owners.
No legal system in the world has recognised artificial intelligence as a legal entity, with the exception of Saudi Arabia, where Sophia, an artificially intelligent humanoid robot, has been recognised as a citizen with rights and obligations equivalent to those of human beings. The reason AI is not treated as a separate legal entity in almost all jurisdictions is its inability to function independently like a human: it cannot act contrary to the specific programs inserted by a human and thus operates according to those directions. Furthermore, it is not possible to punish an AI system, so liability inevitably has to fall upon the person responsible for the acts of the AI.
Liability can be broadly classified into two types: criminal and civil. Criminal liability mandates both mens rea and actus reus. In cases involving AI, mens rea is generally attributed to the developer, with the AI acting as the agent; thus, the developer should be held liable. Moreover, developers can also be held liable for harm caused by AI systems if it is proven that the harm was a foreseeable consequence of the AI’s programming or that there was a failure to implement adequate safeguards. The tort law principle of strict liability could also apply, especially where the AI system operates autonomously and the developer retains primary control over its design and functionality.
Users of AI systems may also bear liability, particularly if they misuse the AI’s features or fail to adhere to operational guidelines. This includes scenarios where users knowingly employ AI systems for harmful purposes, thus establishing a direct causal link between the user’s actions and the resulting harm. Victims of harm caused by AI systems may seek redress through consumer protection laws, under which they can file complaints against manufacturers or service providers. This legal framework provides mechanisms for compensation for injuries caused by defective AI products. Section 83 of the Consumer Protection Act, 2019 allows product liability actions against manufacturers, service providers, or sellers for harm caused by defective products. This can extend to AI systems, holding developers or manufacturers accountable for damages caused by their AI products. The Act thus incorporates principles of strict and vicarious liability to establish accountability, which can be extended to developers of AI products and systems as well. Thus, alongside criminal punishment, civil liability in the form of damages or compensation can also be imposed on the defaulter.
In certain scenarios, under the doctrine of contributory negligence, even victims (who in most scenarios may themselves be users, especially if they are ultimate consumers) can bear a share of the liability if they fail to use AI devices in the prescribed manner or deviate from the general instructions of use. Therefore, the allocation of liability ultimately turns on the specific facts and circumstances of each case, and as of now, there is no legislation in India concrete enough to impose definite liability on a particular person or entity.
Models as Determining Factors: A Useful Guiding Light
In determining liability in cases involving AI, three distinct models emerge. The first, known as the AI as a Tool Model, positions AI solely as an instrument, devoid of legal agency, much like a tool or an animal. Here, primary liability is attributed to the human actors responsible for the AI’s creation, operation, or supervision rather than to the AI itself, which lacks the mental capacity for criminal intent. The Liability for Foreseeable Crimes Model, on the other hand, holds developers or users accountable for harm caused by AI as a foreseeable consequence of its programming or use, even in the absence of criminal intent. For instance, if AI software designed for cybersecurity is negligently programmed and subsequently engages in cybercrimes, those responsible for its development could be held liable for negligence, mainly under tort law. Generally, all subsequent acts of the AI are considered foreseeable, given that AI is by nature not completely reliable and its risks are known to the developer. Lastly, the Direct Liability Model introduces the notion that highly advanced AI systems capable of independent decision-making may be regarded as direct perpetrators if they meet the criteria of actus reus and mens rea. In this model, AI is treated as a legal entity and can face direct liability for its actions, alongside any humans involved, particularly when its actions are autonomous and detached from human control. This model is the least used owing to its administrative and practical implications, such as the problem of punishment and the comparatively nascent stage of AI in India, where such advanced systems do not yet exist.
In the absence of specific legal principles, the general consensus is that developers or programmers have to be held liable, with users or victims being liable only in exceptional scenarios, as described previously. Apart from the three models outlined above, another guiding framework is NITI Aayog’s #AIForAll strategy, which classifies AI into sub-categories such as weak and strong AI, narrow and general AI, and superintelligent AI. Weak AI refers to a system which appears to behave intelligently but has no consciousness of what it is doing. By contrast, strong AI indicates actual thinking with a conscious, subjective mind. Narrow AI describes a system programmed to carry out a specific set of tasks, while general AI grants the system a wider realm of operation by programming it to carry out a broad variety of activities. Superintelligent AI refers to the stage at which general and strong AI surpasses human intelligence. The first four categories are primarily controlled by human inputs, thus indicating human liability. However, for cases falling within the ambit of superintelligent AI, liability for harms can potentially be attributed to the AI system independently of its creator. The existence of such a system today is questionable, but its emergence in the future appears inevitable. This analysis again leads back to the conclusion that there cannot be a rigid set of norms governing the liability question; it ultimately depends on the specific facts and circumstances of each case.
Conclusion
In an increasingly globalised twenty-first-century world, settling the question of how liability for AI-induced harms is allocated is the need of the hour. With advanced robotic devices such as flying cars expected in the future, the possibility of harm is high. This is evident from the recent catastrophic consequences of GPS malfunctions, alongside a plethora of other examples, including the recent accident involving a self-driving Tesla Cybertruck. Although the advancement of technology is a boon, it has to be adequately regulated. This necessitates clarifying the question explored in this article, so that it serves as an adequate warning and imposes liability for these fatal deviations. Therefore, as the future of AI-driven innovation is embraced, it must be ensured that accountability evolves alongside it, because technology without responsibility is a risk too great to bear.
(This post has been authored by Manav Pamnani, a fourth-year B.A. LL.B. (Hons.) student from the NALSAR University of Law, Hyderabad.)
CITE AS: Manav Pamnani ‘Liability in the Age of AI: Examining Legal Accountability for AI-Induced Harm’ (The Contemporary Law Forum, 11 September 2025) <https://tclf.in/2025/09/11/liability-in-the-age-of-ai-examining-legal-accountability-for-ai-induced-harm/> date of access.