Introduction
Recently, approximately 1.5 million regular users of the mental health chatbot ‘Woebot’ were confronted with the platform’s termination. Founder Alison Darcy cited regulatory hurdles, coupled with the widening gap between the pace at which Artificial Intelligence (“AI”) evolves and the pace at which legal frameworks do, as reasons for Woebot’s discontinuation.
While accessibility and timely responses are hailed as the drivers of success for AI mental health tools, the integration of AI into conventionally human-centric cognitive behavioural therapy models forms a key part of the broader discourse on ethical mental healthcare. Dubbed the Eliza Effect, the phenomenon of an individual’s over-reliance on and emotional attachment to AI platforms traces back to the 1960s and Joseph Weizenbaum’s invention of the chatbot ‘Eliza’. The risks of this dependency have outpaced the ability of legal frameworks to govern them.
In light of the Woebot shutdown and the surging adoption of AI for mental healthcare advice, this piece explores the regulatory grey zone within which such systems operate. Firstly, it analyses concerns surrounding privacy and consent under the Digital Personal Data Protection Act, 2023 (“DPDPA”) and the Mental Healthcare Act, 2017 (“MHA”). Secondly, it evaluates the existing accountability mechanism for AI-based mental healthcare. Finally, it proposes pragmatic reforms, drawing on the framework of the landmark EU AI Act, 2024 (“EU AI Act”).
A Brief Overview of the Existing Legal Framework
Patient confidentiality and trust share an inseparable nexus in the field of mental healthcare, demanding accountability and responsibility from professionals. The quandary arises when AI dons the role of a therapist, a role that has been intrinsically human since the inception of the discipline. The current legislative framework leaves much to be desired in striking a balance between accountability, privacy and the shift towards AI-centric therapy.
Section 2(r) of the MHA restricts the scope of a ‘mental health professional’ to an individual possessing the qualifications listed under that Section. Apart from adding individuals with a degree in psychology, the 2022 MHA (Amendment) Bill retains the same definition. Although the MHA was an immensely progressive statute for its recognition of mental health rights in 2017, it has failed to keep pace with the advent of AI in the realm of therapy. A parallel examination of the DPDPA, successor to the data protection regime under the Information Technology Act, 2000 (“IT Act”), reflects a similar regulatory lag, calling for a deeper examination of its provisions. While the IT Act is backed by well-established practice and jurisprudence, the yet-to-be-enforced DPDPA gives rise to a plethora of open-ended questions of interpretation.
Automation and AI are commonly used interchangeably; however, there remains a fine line between the two. Adaptivity, inferential logic and varying degrees of autonomy in operation are the key elements of the definition of AI systems under the EU AI Act. By contrast, the DPDPA posits a restrictive definition of ‘automated’ under Section 2(b), describing a processing mechanism that is inherently instruction-bound. Automation is confined to a pre-defined set of instructions, best suited to repetitive tasks, while AI goes a step further and contextually adapts to situations it was not explicitly programmed for. The definition under the DPDPA thus fails to capture the broader, dynamic dimensions of AI recognised by the EU. While all AI may contain elements of automation, automation cannot encompass the various facets of AI within its ambit. As a result of this statutory non-recognition, AI is neither classified as an ‘individual’ under the MHA nor acknowledged under the DPDPA, allowing it to escape liability under existing provisions. Nonetheless, it is being engaged to perform functions analogous to those of a conventional therapist, creating regulatory expectations that India’s present laws fail to acknowledge.
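To make the distinction concrete, the following is a minimal, purely illustrative Python sketch; the class names and rules are hypothetical and not drawn from any actual chatbot. An instruction-bound responder can only reproduce what it has been programmed with, whereas a learned model generates replies to inputs it was never explicitly given.

```python
# Illustrative contrast between instruction-bound automation and an adaptive, learned system.
# Both classes and their rules are hypothetical; they exist only to make the definitional point.

class RuleBasedResponder:
    """'Automated' in the DPDPA sense: follows a fixed, pre-defined set of instructions."""
    RULES = {
        "hello": "Hello! How can I help you today?",
        "i feel sad": "I'm sorry to hear that. Would you like some resources?",
    }

    def respond(self, message: str) -> str:
        # Input outside the pre-programmed rules simply falls through to a default.
        return self.RULES.get(message.lower().strip(), "I don't understand that yet.")


class LearnedResponder:
    """Closer to an 'AI system' in the EU AI Act sense: it infers a reply from patterns
    learned during training, adapting to inputs it was never explicitly programmed for."""

    def __init__(self, model):
        self.model = model  # placeholder for a trained generative model

    def respond(self, message: str) -> str:
        # The output is generated from learned parameters, not looked up from a rule,
        # so it cannot be traced to any single pre-defined instruction.
        return self.model.generate(message)
```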
The Illusion of Patient Autonomy and Consent
Under Section 6 of the DPDPA, lawful data processing hinges on the existence of free, specific, informed, unconditional and unambiguous consent. Drawing from the European Data Protection Board Guidelines on Consent, 2020, the parameters ‘specific’ and ‘informed’ are largely built on five key elements:
- Purpose specification for intended processing;
- Granularity in consent requirements, i.e. seeking distinct consent for each purpose of processing;
- Identity disclosure of the entity receiving the data;
- Existence of the right to withdraw consent; and
- Explicit information on the type of data collected and its use.
Most deep learning systems operate in two stages: a learning stage and a deployment stage. In the former, the model is trained on large datasets; in the latter, the patterns learnt from that training are applied to user inputs to generate personalised responses. In effect, the process produces swift replies drawn from a web of learnt patterns rather than deterministic programming. This creates an obscure and largely untraceable response-generating mechanism, widely known as the ‘black box phenomenon’, which sits uneasily with the five stipulated elements. Effectively, patients are largely kept in the dark about how the information they divulge is processed. The consent given therefore does not truly reflect patient autonomy, but is a mere superficial acquiescence to opaque data processing.
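The following toy sketch illustrates the two stages using scikit-learn; the training phrases and labels are invented for demonstration, and real therapy chatbots rely on far larger generative models. The point is that the deployed decision emerges from learned numerical weights rather than any inspectable instruction.

```python
# Minimal sketch of the learning and deployment stages described above (illustrative only).
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# --- Learning stage: the model is fitted on a toy, hypothetical dataset ---
training_texts = ["I feel hopeless", "I am doing great", "Nothing matters anymore", "Had a lovely day"]
training_labels = ["distress", "ok", "distress", "ok"]

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(training_texts, training_labels)

# --- Deployment stage: learned patterns, not explicit rules, drive the output ---
print(model.predict(["I can't cope with work lately"]))

# The prediction is the product of numerical weights learned from data; no individual,
# human-readable rule explains it -- the 'black box' quality referred to above.
```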
A combined reading of Sections 5 and 7 of the DPDPA restricts the scope of data processing and requires data fiduciaries to issue detailed, purpose-oriented notices regarding the use of data inputs. Since an AI model itself does not fall within the ambit of a data fiduciary, the black box phenomenon is reinforced, and AI ‘therapists’ escape the obligation to make such disclosures.
Additionally, persistent ambiguities remain in harmonising ‘informed consent’ with ‘capacity’ under the present statutes. Section 4 of the MHA treats demonstrable mental capacity as the basis of a person’s autonomy in their mental healthcare decisions. While consent under the DPDPA is presumed to remain static until voluntarily revoked, a patient’s capacity tends to fluctuate over the course of therapy. This creates a mismatch between the authorisation a patient has given and their actual comprehension: consent may remain formally valid under the DPDPA even while the patient concurrently lacks the requisite capacity under the MHA. For instance, an individual with bipolar disorder may initially consent to receiving mental health advice from an AI chatbot, yet their capacity may be vitiated during a manic episode.
General-purpose AI models currently adopt an opt-out mechanism for user consent and data processing. Presently, to prevent an AI system such as ChatGPT from saving and utilising user data, one must disable the ‘Improve the model for everyone’ option under Data Controls. Users tend to overlook such a mechanism, often unaware of the onus placed on them to manually withdraw consent. An audit by the EU in 2024 revealed that a staggering 63% of ChatGPT user data contained personally identifiable information (PII), while only 22% of users were aware of the settings available to disable such use. The DPDPA reflects the principle that withdrawing consent must be as seamless as granting it, and consent under the statute is valid only upon a clear affirmative action by the user. The current opt-out mechanism subverts both these principles, rendering them ineffective against AI algorithms.
The Confidentiality Crisis: The Gap in Patient Privacy and Privilege
OpenAI CEO Sam Altman recently flagged grave concerns about privacy and legal privilege in the mental health advice offered by ChatGPT. This particular use of AI has evidently caught both the AI industry and policymakers off guard, with Altman himself underscoring the lack of any steadfast mechanism to safeguard such sensitive data.
The Hippocratic Oath affirms the significance of patient confidentiality and privacy in the course of treatment, principles that find statutory expression under Section 23 of the MHA. The lack of a clear licensing structure for AI-based therapy tools leaves room for the circumvention of significant legal safeguards such as psychotherapist–patient privilege and confidentiality.
Since generative AI repurposes and trains on large volumes of user data, it is almost impossible to trace whose personal data is embedded within a model’s learned outputs. In the context of therapy, AI adopts a one-size-fits-all approach, generating standardised responses from prior training on patient data, and fails to assess contextual factors such as the patient’s history, social position and behavioural patterns in real time. This model therefore not only compromises the quality of mental healthcare, but also exacerbates privacy risks through the large-scale repurposing of patient data.
The Question of Accountability
A conventional therapist has both an identity and a stake in the therapeutic relationship, fostering a sense of accountability. A breach of their duty of care, coupled with a deviation from accepted norms of therapy and harm to the patient, gives rise to a cause of action for malpractice. By contrast, fleeting interactions with an AI chatbot do not presently hold the same value in the eyes of either the medical or the legal fraternity. AI models employ a mirroring approach, generating responses that appeal to and validate the patient, often serving as a ‘yes-man’ in emotionally vulnerable situations. The probability of misdiagnosis and misguidance therefore runs high, warranting scrutiny of the accountability mechanism currently surrounding AI.
The 2019 and 2021 draft versions of the DPDPA framed a broader definition of the harms posed to individuals providing their data, including psychological harm and mental injury. That inclusion is missing from the present DPDPA, diluting an already weak accountability mechanism for AI used in therapy.
Concerns are further exacerbated in the context of adolescent users, with over 85% reportedly turning to AI for counselling in some form. The lack of a robust framework to verify and assess minors’ age and maturity places their engagement with AI in murky waters and exploits their vulnerability. For instance, Nomi, an AI companion chatbot designed specifically for adults, has repeatedly been observed offering advice to, and continuing to engage with, individuals who explicitly disclose that they are minors.
The evolving ‘deployer accountability’ approach, pioneered by the EU AI Act, is increasingly being adopted. The Act explicitly distinguishes between deployer and provider through clearly stated definitions, streamlining the process of affixing liability. India, on the other hand, remains silent on where liability lies, and the absence of explicit definitions and demarcations creates a legal vacuum open to exploitation.
A Roadmap for Responsible AI in Mental Healthcare
Given that the DPDPA and the MHA currently take a rather parochial stand on the prevalence of AI in mental healthcare, it becomes imperative to tailor existing legislation to fill the gaps that persist. The profound implications for patients demand urgent action, and drafting entirely new legislation would not, at present, offer a pragmatic solution.
The EU AI Act has taken noteworthy strides in AI governance, prioritising stricter oversight and accountability. Drawing from its classification of certain AI systems as ‘high-risk’, it is suggested that the DPDPA explicitly recognise AI models deployed for mental healthcare as Significant Data Fiduciaries (“SDF”). While such models currently lie outside the purview of the statute, recognising them as SDFs on account of the sensitivity and volume of data processed and the magnitude of risks posed to patient rights is legally justifiable under the conditions set out in Section 10(1) of the DPDPA. Once so designated, additional obligations, such as the appointment of a Data Protection Officer and regular Data Protection Impact Assessments, offer a potential resolution to the regulatory blind spot in which AI therapy currently operates. These duties reinforce the principles of transparency and human oversight outlined under Articles 13 and 14 of the EU AI Act, aligning India with the prevailing global standard on AI governance. Further, recognising AI-based therapy as a subset of conventional mental healthcare under the MHA would harmonise the two statutes. While the Indian Council of Medical Research Guidelines, 2023 only broadly outline the use of AI in healthcare, statutory recognition would lend enforceability to their ethical and legal obligations.
Explainable AI (“XAI”) techniques are built on the principles of explainability and interpretability, allowing a system’s outputs to be traced to intelligible reasons. Offering a dual remedy, such systems would strengthen informed consent as well as mitigate the ‘black box’ opacity of AI. Extending the existing notice obligations imposed on data fiduciaries by the DPDPA to mandate explainability would foster patient autonomy and safety, particularly in mental healthcare. Parallel to the rights afforded to patients under the MHA, a right to algorithmic transparency holds specific relevance given the automation of therapeutic interactions. Incorporating an explicit opt-in mechanism for all types of data processing, thereby placing the choice squarely in the patient’s hands, would ensure genuinely voluntary consent rather than the mere presumption of approval.
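As a rough illustration of what such an explainability obligation might look like in practice, the toy sketch below serves a model’s output together with the input words that most influenced it. The training phrases are invented, and production systems would use dedicated XAI tooling (such as feature-attribution libraries) rather than raw model coefficients.

```python
# Toy illustration of 'explainable' output: each automated decision is accompanied
# by the factors that most influenced it. Illustrative only; not production XAI.
import numpy as np
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression

texts = ["I feel hopeless", "I am doing great", "Nothing matters anymore", "Had a lovely day"]
labels = [1, 0, 1, 0]  # 1 = possible distress, 0 = ok (toy labels)

vectoriser = TfidfVectorizer()
X = vectoriser.fit_transform(texts)
clf = LogisticRegression().fit(X, labels)

def predict_with_explanation(message: str, top_k: int = 3):
    """Return the model's label along with the words that weighed most heavily towards it."""
    x = vectoriser.transform([message])
    label = int(clf.predict(x)[0])
    # Contribution of each word in the message = its tf-idf weight * its learned coefficient.
    contributions = x.toarray()[0] * clf.coef_[0]
    order = np.argsort(contributions)[::-1] if label == 1 else np.argsort(contributions)
    words = vectoriser.get_feature_names_out()
    reasons = [words[i] for i in order[:top_k] if contributions[i] != 0]
    return label, reasons

# A patient-facing notice could then disclose *why* a particular response was generated.
print(predict_with_explanation("Lately nothing matters and I feel hopeless"))
```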
To bridge the gap between continued consent and fluctuating capacity, a dynamic consent model involving periodic reassessment of consent is proposed under the DPDPA for AI-run platforms offering mental healthcare services. Such reaffirmation of consent could be mandated at relevant intervals, for instance upon model updates to the AI tool, the collection of sensitive personal data, or episodes in which the patient evidences an extreme behavioural shift.
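A minimal sketch of how such a dynamic consent check might operate is set out below; the class, its fields and the ninety-day reassessment interval are assumptions made purely for illustration.

```python
# Sketch of a 'dynamic consent' check: consent is recorded as an explicit opt-in and
# must be re-affirmed whenever any of the triggers proposed above occurs. Illustrative only.
from dataclasses import dataclass
from datetime import datetime, timedelta

@dataclass
class ConsentRecord:
    granted_at: datetime
    model_version: str                        # version of the AI tool consented to
    covers_sensitive_data: bool               # whether sensitive categories were consented to
    validity: timedelta = timedelta(days=90)  # assumed periodic reassessment interval

def requires_reaffirmation(record: ConsentRecord, *, current_model_version: str,
                           collecting_sensitive_data: bool,
                           extreme_behavioural_shift: bool) -> bool:
    """True if any trigger is met and consent must be sought afresh before further processing."""
    expired = datetime.now() - record.granted_at > record.validity
    model_updated = current_model_version != record.model_version
    new_sensitive_use = collecting_sensitive_data and not record.covers_sensitive_data
    return expired or model_updated or new_sensitive_use or extreme_behavioural_shift

# Example: consent given for version "v1" lapses once the underlying model is updated to "v2".
record = ConsentRecord(granted_at=datetime(2025, 1, 1), model_version="v1",
                       covers_sensitive_data=False)
print(requires_reaffirmation(record, current_model_version="v2",
                             collecting_sensitive_data=False,
                             extreme_behavioural_shift=False))  # -> True
```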
Similar to the CE marking requirement under Article 48 of the EU AI Act, India could consider adopting a mandatory compliance-based certification regime specifically for AI systems that offer mental health support. By withholding certification from chatbots that cannot demonstrably meet confidentiality and explainability obligations, this approach is well suited to domains where opaque systems can cause immeasurable harm to users.
Conclusion
The dangers of unregulated AI deployment are no longer speculative: Meta AI’s recent internal policy document alarmingly deemed false medical advice and ‘sensual’ conversations with children acceptable responses for its systems. Paradoxically, even though the DPDPA is yet to be enforced, it already bears the marks of a statutory framework outdated in the face of technological developments. The 2022 MHA (Amendment) Bill, too, envisions a ‘person-centric’ mental healthcare ecosystem in its Statement of Objects and Reasons, yet it lags behind the very realities it seeks to regulate.
While this piece does not aim to establish whether AI should assume the role of a therapist, it acknowledges that integration with AI is inevitable. A framework well-equipped to address patient confidentiality, privacy and system accountability would thus ensure that innovation in mental healthcare does not compromise the rights of the very individuals it seeks to serve.
(This post has been co-authored by Amrita Nair and Soumya Yadav, second-year students at Hidayatullah National Law University, Raipur.)
CITE AS: Amrita Nair and Soumya Yadav, ‘Artificial Empathy, Tangible Risks: Consent, Confidentiality and Compliance in AI-Enabled Therapy’ (The Contemporary Law Forum, 9 November 2025) <https://tclf.in/2025/11/09/artificial-empathy-tangible-risks-consent-confidentiality-and-compliance-in-ai-enabled-therapy/> date of access.