Introduction
India is facing a surge in AI-generated content, including deepfakes, that fuels reputational abuse, financial fraud, and electoral and political misinformation. To counter this, the Ministry of Electronics and Information Technology (“MeitY”) released draft amendments to the Information Technology (Intermediary Guidelines and Digital Media Ethics Code) Rules, 2021 (“IT Rules, 2021”), which propose a due diligence mechanism for intermediaries and significant social media intermediaries (“SSMIs”) to regulate synthetically generated information (“SGI”). SGI is defined as information that is artificially or algorithmically created, generated, modified, or altered using a computer resource in a manner that reasonably appears authentic. Such a clear definition marks a leap from the Government’s earlier advisory of 1 March 2024 (withdrawn within 15 days), which sought to straitjacket due diligence obligations onto intermediaries without clarifying the definition, procedure, or reasonable safeguards, reflecting the State’s limited maturity at the time in addressing the rising menace of AI-generated content.
This article argues that the draft rules strike a fair balance between user protection and intermediary obligations while requiring certain refinements to ensure clarity and certainty in implementation.
Zooming into the Specifics
Draft rule 3(3)(a) mandates that intermediaries embed a permanent, unique identifier in the content and display a prominent label covering at least 10% of the visual surface or, for audio, playing during the first 10% of its duration. This marks a clear shift from the 2024 Advisory, which, although it required a permanent, unique metadata tag or identifier, did not specify how it should be implemented.
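To make the thresholds concrete, the short sketch below works out what the 10% requirements would translate into for a typical video frame and audio clip. The dimensions, function names, and figures are our own illustrative assumptions, not prescriptions from the draft rule.

```python
# Illustrative only: the minimum label footprint implied by draft rule 3(3)(a).
# Frame size, clip length, and function names are hypothetical assumptions.

def min_visual_label_area(frame_width_px: int, frame_height_px: int) -> float:
    """Label must cover at least 10% of the visual surface."""
    return 0.10 * frame_width_px * frame_height_px

def audio_label_window(duration_seconds: float) -> tuple[float, float]:
    """Audio disclosure must play within the first 10% of the clip's duration."""
    return (0.0, 0.10 * duration_seconds)

# Example: a 1920x1080 video frame and a 60-second audio clip
print(min_visual_label_area(1920, 1080))   # 207360.0 px^2, i.e. 10% of the frame
print(audio_label_window(60.0))            # (0.0, 6.0) -> within the first 6 seconds
```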
However, the rule faces challenges that stem not so much from the law itself as from the technology underpinning it. First, multiple studies have noted a constant tussle between deepfake detection and deepfake generation algorithms, with detection accuracy hovering at merely 60-65%. Second, once content is detected, the rules require permanent labelling, yet the 10% rule does not automatically guarantee protection: a label confined to 10% of a video’s surface area, or to the initial 10% of an audio clip, can be cropped, trimmed, or re-encoded, and the edited clip can then be reposted publicly or circulated over encrypted messengers such as WhatsApp or Telegram. This strips provenance and allows misinformation to spread without downstream checks.
Rule 3(3)(b), meanwhile, imposes a duty on intermediaries to ensure that such a label cannot be modified, suppressed, or removed. Given the uncertainty in the underlying technology, this imposes a high compliance burden, particularly on intermediaries other than SSMIs, which operate with small teams and rely on AI to deliver cost-effective solutions to businesses and consumers alike. Such a position also strains the principle of purpose limitation in legislation, leaving loopholes the government could exploit in the future. Instead, the sub-rule could be rephrased to require intermediaries to adopt “appropriate and reasonable technical measures” for both pre- and post-harm circumstances, thereby affording platforms the flexibility to, first, prevent unauthorised modifications and, second, identify the source of alterations when they do occur.
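By way of illustration only, the sketch below shows one possible “appropriate and reasonable technical measure” in the post-harm sense suggested above: binding the SGI label to the exact content bytes with a keyed hash so that later tampering with either the label or the content can at least be detected. The key handling, field names, and workflow are assumptions on our part and are not drawn from the draft rules.

```python
# A minimal sketch, assuming a platform-held signing key and simple label fields.
import hmac, hashlib, json

def sign_label(content_bytes: bytes, label: dict, key: bytes) -> str:
    """Return a tag binding the label metadata to the exact content bytes."""
    payload = hashlib.sha256(content_bytes).digest() + json.dumps(label, sort_keys=True).encode()
    return hmac.new(key, payload, hashlib.sha256).hexdigest()

def verify_label(content_bytes: bytes, label: dict, tag: str, key: bytes) -> bool:
    """False if either the content or its label was altered after signing."""
    return hmac.compare_digest(sign_label(content_bytes, label, key), tag)

key = b"platform-held-secret"                    # hypothetical key management
label = {"sgi": True, "identifier": "abc-123"}   # hypothetical label fields
media = b"<raw media bytes>"                     # stands in for the uploaded file
tag = sign_label(media, label, key)
assert verify_label(media, label, tag, key)      # fails once content or label change
```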
The Two-Step Procedure to Implement the Rule
The final part of the draft rules imposes a special duty on SSMIs to adopt a two-step verification procedure for identifying and labelling AI-generated content on their platforms. First, it mandates self-declaration by users regarding the originality of the content; second, it requires SSMIs to deploy measures to verify such declarations and to display prominently on their platform whether the posted content is SGI.
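A purely illustrative sketch of how this two-step flow could be wired together on a platform is set out below; the data fields, detection stub, and display strings are hypothetical and do not reflect any prescribed implementation.

```python
# Step 1: capture the user's self-declaration at upload.
# Step 2: run a platform-side check and attach a display label.
from dataclasses import dataclass

@dataclass
class Upload:
    content_id: str
    user_declared_sgi: bool          # step 1: self-declaration by the user

def platform_detects_sgi(content_id: str) -> bool:
    """Placeholder for the SSMI's own detection tooling (step 2)."""
    return False                     # stub result, for illustration only

def label_for_display(upload: Upload) -> str:
    # If either the user declares SGI or the platform's check flags it,
    # the content is shown with a prominent SGI notice.
    if upload.user_declared_sgi or platform_detects_sgi(upload.content_id):
        return "Synthetically Generated Information"
    return "No SGI declaration"

print(label_for_display(Upload("post-001", user_declared_sgi=True)))
```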
Although the rules do not specify whether intermediaries must verify only the content declared to be SGI, the provision would, by implication, require analysing all content for originality, whether self-declared or not. Such broad wording renders the self-declaration process infructuous and risks a chilling effect on users. The language of the rule, however, suggests that the government intends to simplify the verification process and protect users from unnecessary privacy intrusions. The revised rules should therefore expressly clarify that SSMIs may rely on users’ self-declarations and limit their due diligence to verifying label integrity, while content self-declared as SGI should be exempt from intrusive checks.
Moreover, given the potential implications of this verification method, the rules could identify certain categories of content that warrant a higher threshold of due diligence. Among others, we emphasise social media advertisements as one such category. Advertisements powered by SSMIs’ algorithms have a quick, pervasive, and targeted impact on diverse users and frequently feature celebrities and public figures. They also lend a stamp of legitimacy for users who cannot distinguish deepfakes from real content, leading them to trust such promoted material (as the RBI has noted in the context of financial frauds). Therefore, to prevent SSMIs’ algorithms from indirectly amplifying harmful AI-generated content, it is essential to introduce an India-specific risk assessment framework that reflects the realities of their advertising-driven business model.
Overall, the draft rules reflect a balanced and pragmatic approach by the government. In the long run, such a framework can encourage global companies operating in India to design India-specific solutions that take into account the country’s cultural, linguistic, religious, and political diversity. Given their extensive experience in content moderation and substantial investment in AI-driven tools, intermediaries, particularly SSMIs, are well-positioned to develop such mechanisms. If effectively implemented, these amendments could serve as a catalyst for social media platforms to build robust, India-centric filtering systems capable of mitigating the growing risks posed by deepfakes.