BEYOND THE ‘META’VERSE: REWRITING THE RULES OF FACT-CHECKING

In the digital era of cross-border information exchange and social interaction, the world runs on an information expressway where content travels faster than ever. Social media platforms play a crucial role in expanding the scope of speech and expression. However, they also carry the critical responsibility of deploying mechanisms to mitigate the spread of misinformation.

As scholars across the globe explore methods to combat the proliferation of fake news, two recent developments warrant particular attention: first, Meta’s decision to discontinue the use of Fact Check Units (“FCUs”), and second, the Bombay High Court’s decision in Kunal Kamra v. Union of India declaring the law regulating government-appointed FCUs unconstitutional. This ruling not only challenges the legal foundation of government-established FCUs but also leaves the future of fact-checking in India uncertain.

In this piece, we argue that even though FCUs fall short of idealized expectations, they remain critical tools for dispelling fake information. The authors direct attention to the algorithmic challenges, offering insights into the limitations of FCUs that influenced Meta’s decision, and suggest ways to enhance their efficiency as an alternative to discarding them. Further, we examine the lacuna in government policy on fact-checking mechanisms and offer suggestions for crafting a comprehensive legal framework to regulate them.

REVISITING THE ROLE OF FACT CHECKERS IN COMBATING FAKE NEWS

In an era of rampant misinformation, FCUs have proven indispensable in separating fact from fiction. FCUs played a crucial role during the COVID-19 pandemic, which the WHO termed both a health crisis and an infodemic due to rampant misinformation, as reflected in National Crime Records Bureau reports indicating a 214% surge in fake news cases during this period. Even during the Citizenship Amendment Act protests, the Supreme Court duly acknowledged the spread of unregulated content online and grappled with the complexity of preventing protestors from believing fake news. In this chaotic environment, FCUs helped curb the deliberate and rapid spread of misinformation designed to distort reality, fuel majoritarian biases, and incite panic. A similar pattern emerged during Operation Sindoor, where government-affiliated FCUs were reportedly involved in issuing takedown requests against academic commentary and verified on-ground reporting, despite the Kunal Kamra judgment having declared the legal foundation of such units unconstitutional.

Despite FCUs’ proven significance, both Meta and the Bombay HC have moved to curb their use. What led to this shift? Meta has taken this decision to combat censorship. As Meta’s founder himself stated, the fact-checkers “have been too politically biased and have destroyed more trust than they created.” Simultaneously, the Bombay HC struck down Rule 3(1)(b)(v) of the IT Rules, introduced by the 2023 amendment, because this provision gave the Central Government authority to take down any information regarding the ‘business of the Central Government’ that its FCUs identify as ‘fake, false or misleading’. The court held that the amendment violated the principle of ‘nemo judex in causa sua’ and that its vague, broad language and lack of guidelines risked a chilling effect on freedom of speech.

The authors, in honest belief and with due regard to the judgment, are of the opinion that the argument advanced by the Ld. Solicitor General in the case holds merit. He argued that access to genuine information is fundamental for informed decision-making, while misinformation inevitably leads to misled decisions. Thus, a total absence of FCUs is surely not the solution.

Although FCUs have demonstrated their mettle on multiple occasions, they still face challenges on two fronts: first, platform-based FCUs struggle with algorithmic inefficiencies and scalability; second, government-appointed FCUs are criticized for political bias and lack of objectivity, and for prima facie violating fundamental rights by stretching the reasonable restrictions under Article 19(2).

Meta’s recent decision reflects growing frustration with traditional fact-checking systems. Without transparent criteria or robust end-to-end review processes, these systems lack precision and accountability, functioning as black boxes. Fact-checking organizations like the International Fact-Checking Network (IFCN) and the European Fact-Checking Standards Network (EFCSN) have laid down codes of principles, yet platform-based actors and independent FCUs avoid disclosing their algorithmic processes. This is what could lead to “real censorship”, as FCU algorithms continue to pick issues that are biased or simply one-sided opinions.

On the other hand, government-appointed FCUs have also been under scrutiny. In the Kunal Kamra judgment, Justice Patel noted that the government, through its FCU, becomes the final arbiter not just of what is or is not fake but, more importantly, of the right to place an opposing point of view. Moreover, as the government assumes the authority to determine what content remains online, it becomes particularly sensitive to dissent. This not only poses the threat of a ‘chilling effect’ on speech, as highlighted in Shreya Singhal, but also puts indirect pressure on social media platforms, which would lose protections against civil and criminal liability, raising concerns that they will take down even news content merely disputed by the government.

A major issue with FCUs is their tendency to act only after misinformation has gone viral, tackling false claims once they have already flooded the public sphere. By the time they step in, the damage is done, and the truth struggles to catch up with the wildfire of misinformation. The operational scale of FCUs is often constrained by their algorithms, which function in an automated manner, focusing merely on central claims and prioritizing those perceived as easiest to verify. Statements of individual opinion or prejudice that rest on untrue facts, however, go unaddressed. Solutions to these algorithmic challenges may be difficult to implement, but they would make FCUs markedly more efficient.

SETTING THE RULES: FACT-CHECKING, BUT MAKING IT WORK

The efficacy of fact-checking depends a great deal on contextual factors such as wording, presentation, and sources. The solution lies in developing an algorithm that evaluates content across these contextual factors. By implementing a scale that labels content on distinct metrics, ranging from ‘research in progress’ to ‘miscaptioned’, ‘satire’, or ‘legitimate’, we can ensure a more nuanced and accurate fact-checking process. Furthermore, most FCUs are primarily human-driven, operating on a manual model that invites bias and arbitrariness.
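A minimal sketch of such a labeling scale, in Python, might look as follows. The labels are the ones named above; every field name (`is_satire`, `caption_matches_media`, `source_verified`) and the decision order are hypothetical illustrations by the authors, not part of any deployed FCU system.

```python
from enum import Enum


class Verdict(Enum):
    # Labels drawn from the scale proposed in the text.
    RESEARCH_IN_PROGRESS = "research in progress"
    MISCAPTIONED = "miscaptioned"
    SATIRE = "satire"
    LEGITIMATE = "legitimate"


def label_content(claim: dict) -> Verdict:
    """Toy scoring over contextual factors (wording, presentation, source).

    All input fields are hypothetical placeholders for whatever contextual
    signals a real FCU pipeline would extract.
    """
    if claim.get("is_satire"):
        return Verdict.SATIRE
    if claim.get("caption_matches_media") is False:
        return Verdict.MISCAPTIONED
    if claim.get("source_verified"):
        return Verdict.LEGITIMATE
    # Default: the claim needs further verification rather than a hard verdict.
    return Verdict.RESEARCH_IN_PROGRESS
```

The point of the graded scale is that content which cannot yet be settled receives a provisional label (‘research in progress’) instead of a binary true/false tag.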

Adopting a two-stage model could enhance the accuracy of the determinations FCUs make about false information. In the first stage, an algorithm examines all claims; those it identifies as inconclusive, ambiguous, or involving subjective elements requiring human judgment are forwarded to the second stage, where a manual fact-checking unit reviews and further verifies them.
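The triage step of this two-stage model can be sketched as follows. The `Claim` fields, the confidence score, and the 0.8 threshold are all illustrative assumptions; a real system would derive these from its own verification pipeline.

```python
from dataclasses import dataclass


@dataclass
class Claim:
    text: str
    confidence: float       # hypothetical automated-verifier confidence, 0.0-1.0
    subjective: bool = False  # hypothetical flag for opinion-laden claims


def triage(claims, threshold=0.8):
    """Stage 1: resolve clear-cut claims automatically; forward inconclusive,
    ambiguous, or subjective ones to the Stage 2 manual fact-checking unit."""
    auto_resolved, human_queue = [], []
    for c in claims:
        if c.subjective or c.confidence < threshold:
            human_queue.append(c)    # needs human judgment (Stage 2)
        else:
            auto_resolved.append(c)  # settled algorithmically (Stage 1)
    return auto_resolved, human_queue
```

For example, a high-confidence factual claim stays in the automated stage, while a low-confidence or opinion-laden claim is queued for manual review.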

Further, suggestions from media and tech scholars can offer invaluable insights into crafting fair guidelines that ensure the robust overall functioning of the platforms, and inputs from legal academicians can ensure that these regulations do not curb speech arbitrarily. Penalties for non-compliance should take the form of proportionate financial penalties rather than revocation of safe-harbour protections, ensuring accountability without prompting excessive content filtering by social media platforms. Non-punitive measures must emphasize correction and flagging of content rather than its removal, unless the content satisfies one of the conditions enumerated under Article 19(2) of the Constitution. Grievance redressal cells could be established at all federal levels to hear appeals against the flagging or removal of content by social media intermediaries.

Meta’s position as one of the leading social media platforms in India makes it imperative for Indian policymakers to adopt new guidelines to counter the threat of misinformation. The alternative Meta has suggested to FCUs is a ‘Community Notes’ model, wherein users are empowered to comment on and assess the accuracy of posts. In the authors’ view, relying on systems that reflect the opinions or preferences of the majority could lead to unfair outcomes in which the voices of smaller groups are ignored. When such systems are powered by algorithms designed to maximize profit, they tend to promote content that generates the most clicks, likes, or sales.

EPILOGUE

Navigating the role of FCUs is extremely important to protect the integrity of information, especially in a digital age of such easy access to content. Meta’s move to discontinue the use of FCUs is not without reason. However, simply removing FCUs, as Meta has chosen to do, is not the solution, as it risks leaving an already fragile ecosystem vulnerable to unchecked misinformation. Algorithmic inefficiencies hinder FCUs’ ability to assess intricate claims, while their reactive approach undermines their effectiveness. Similarly, government oversight introduces the potential for bias, censorship, and the stifling of dissent.

Thus, a more effective approach lies in addressing these limitations through innovative solutions. Introducing a multi-metric scale for categorizing content provides a more refined and accurate method for fact-checking, reducing the potential for bias. When integrated with a two-tiered process, where algorithms manage straightforward claims and more intricate issues are referred for human evaluation, it ensures a balanced approach that combines efficiency with thoughtful judgment.

The current gaps in the legal framework governing FCUs leave room for inconsistency. A possible solution is to develop policy guidelines for platform-operated FCUs, ensuring that they adhere to clear standards of fairness while maintaining strict neutrality to avoid political interference. Fact-checking can thus be fortified to counter the growing complexity of misinformation while safeguarding democratic principles and the integrity of public trust.

(This post has been authored by Survepalli Prithvika and Kanishk Goyal, second-year students at NLIU, Bhopal)

CITE AS: Survepalli Prithvika and Kanishk Goyal, ‘Beyond The ‘Meta’verse: Rewriting The Rules Of Fact-Checking’ (The Contemporary Law Forum, 25 July 2025) <https://tclf.in/2025/07/25/beyond-the-metaverse-rewriting-the-rules-of-fact-checking/>date of access.
