To Err Is Robot: Liability When AI Gets Taxes Wrong

Introduction

The rise of Artificial Intelligence (AI) has raised novel questions of accountability and liability that demand attention. One such area is taxation. Numbers, data, filing, and compliance dominate any discussion of taxation, and most of these overlap with what AI specialises in: numbers, data, analytics and algorithms. The adoption of AI in taxation systems is on the rise, particularly within tax compliance, where it is credited with greater efficiency and efficacy on the part of tax administrations. Recent OECD reports on Tax Administration demonstrate rising investment in digital infrastructure; notably, many administrations discussed in the report have almost doubled their use of virtual assistants and AI since 2018. Tax administrations have also proactively enhanced their measures, evidenced by a 17–23% rise in e-filing and a 10% rise in e-payments since 2014. According to another report in 2024 by PWS, 80% of tax administrations reported that they are either using or in the process of implementing techniques that allow for data analysis without human intervention.

This piece attempts a comprehensive examination of the incorporation of AI into tax compliance procedures: if and when it is incorporated, the changes the system would face, possible solutions that could be accommodated, observations on the legal side, and other attendant complexities.

AI models, which rely on training data and probabilistic decision-making, are susceptible to errors, biases, and interpretative discrepancies. Unlike human tax professionals, who can navigate nuanced statutory interpretations with discretion, AI introduces challenges in accountability, highlighting the need to explore the implications of liability when it makes mistakes.

AI in Tax Calculations

AI can be defined as a system that interprets external data and internalises it through sophisticated algorithms to produce outputs based on empirical analysis. Traditional tax calculation and reporting are labour-intensive and demand significant human effort, especially in large corporations dealing with intricate tax requirements. AI minimises the need for manual involvement, thereby cutting labour costs. Additionally, its ability to reduce errors and ensure timely tax filings helps businesses avoid hefty fines and penalties, further contributing to overall cost efficiency.

The author proposes a framework for integrating AI in tax calculations consisting of five levels. At the foundational level, the system ensures updates for current tax regulations and includes mechanisms to double-check these updates, minimising human error. The second level involves analysing structured invoices, where the system extracts essential data points to enhance efficiency and accuracy in tax reporting. The third level focuses on processing unstructured invoices, utilising advanced image recognition and data extraction techniques to handle formats such as images. The fourth level addresses biases within the systems that can affect tax decisions. Finally, the fifth level encompasses auditable automated systems, where compliance checks are ensured and the outputs of the other AI systems are validated, resulting in an audit trail. This framework provides a clear understanding of machine intelligence’s evolving role in tax compliance, from basic updates to complex analysis and bias mitigation. The structure discussed, however, is subject to future technological developments and is based on the maturity of AI at its current levels. All of this can be brought closer to implementation through reasoned policy and legal justifications, some of which are discussed hereunder.
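The five levels described above can be restated as a short sketch. This is purely illustrative: the class, member, and function names are the author of this edit's labels for the framework, not terms drawn from any statute or standard.

```python
from enum import IntEnum

class AutomationLevel(IntEnum):
    """The five proposed levels of AI automation in tax calculation (illustrative names)."""
    REGULATION_UPDATES = 1     # cross-validated updates of current tax regulations
    STRUCTURED_INVOICES = 2    # data extraction from structured invoices
    UNSTRUCTURED_INVOICES = 3  # image recognition on unstructured invoice formats
    BIAS_DETECTION = 4         # detecting biases that affect tax decisions
    AUDITABLE_AUTOMATION = 5   # audit agents validate outputs, leaving an audit trail

def requires_audit_trail(level: AutomationLevel) -> bool:
    """In this sketch, only the top level mandates a validated audit trail."""
    return level is AutomationLevel.AUDITABLE_AUTOMATION
```

Representing the levels as an ordered enumeration makes explicit that each level builds on the maturity of the one below it, which is the spirit of the framework as described.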

Determining The Liability

To effectively evaluate compliance with tax mechanisms, it is advantageous, in the author’s view, to categorise entities into tiers based on their scale (small, medium, and large enterprises), specifically in B2B scenarios. This allows for a more nuanced understanding of liability and mirrors established stratifications such as those in section 2(85) of the Companies Act and the MSME Development Act, 2006.

The proposed penalty framework for enterprises in a B2B context corresponds to the level of AI automation employed and the size of the enterprise in question. At the initial stage of automation (level 1), where tax codes are cross-validated against tax percentages, the penalties remain low. For small and medium enterprises, liability rests primarily with the provider, whereas for large enterprises, responsibility is shared between the provider and the taxpayer.

At the second stage (level 2), where AI systems process structured invoices and credit notes, the penalties diverge: small enterprises continue to face low penalties, but for medium and large enterprises, the penalties rise to a high level, with liability shared between both the provider and the taxpayer.

At level 3, the processing of unstructured invoices and credit notes, penalties remain comparatively low across enterprises of all sizes; however, liability is distributed between the provider (the AI enabler) and the taxpayer. More significant consequences arise at level 4, which concerns the detection of biases in tax determination. Here, small and medium enterprises are subject to high penalties, with the provider bearing the burden, while large enterprises face very high penalties, also with the provider solely liable. At the most advanced stage of automation (level 5), involving audit agents layered on top of the AI engine, penalties reach their peak: very high penalties apply uniformly to small, medium, and large enterprises, with liability jointly imposed on the AI agent and the provider.
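The penalty and liability matrix set out across the last three paragraphs can be restated as a lookup table. This is a sketch of the author's proposal, not a statutory scheme; the keys and labels are illustrative, and the liability allocation for small enterprises at level 2, which the text leaves open, is assumed here to carry over from level 1 (provider alone).

```python
# (automation level, enterprise size) -> (penalty band, liable parties)
PENALTY_MATRIX = {
    (1, "small"):  ("low", {"provider"}),
    (1, "medium"): ("low", {"provider"}),
    (1, "large"):  ("low", {"provider", "taxpayer"}),
    (2, "small"):  ("low", {"provider"}),              # assumed: carried over from level 1
    (2, "medium"): ("high", {"provider", "taxpayer"}),
    (2, "large"):  ("high", {"provider", "taxpayer"}),
    (3, "small"):  ("low", {"provider", "taxpayer"}),
    (3, "medium"): ("low", {"provider", "taxpayer"}),
    (3, "large"):  ("low", {"provider", "taxpayer"}),
    (4, "small"):  ("high", {"provider"}),
    (4, "medium"): ("high", {"provider"}),
    (4, "large"):  ("very high", {"provider"}),
    (5, "small"):  ("very high", {"ai_agent", "provider"}),
    (5, "medium"): ("very high", {"ai_agent", "provider"}),
    (5, "large"):  ("very high", {"ai_agent", "provider"}),
}

def assess(level: int, size: str) -> tuple:
    """Return (penalty band, liable parties) for a given level and enterprise size."""
    return PENALTY_MATRIX[(level, size)]
```

Laying the proposal out this way also exposes its internal logic: penalties scale with the autonomy of the system rather than strictly with enterprise size, and the taxpayer drops out of the liability set precisely where the provider's design choices (bias detection, audit agents) dominate the outcome.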

The justifications for this system come from many sources and are discussed hereunder. The need to regulate collaborations between persons filing taxes, AI-based tax calculation providers, and AI agents (audit-specific AI algorithms: systems capable of autonomously performing tasks on behalf of a user or another system by designing their own workflows and utilising available tools) rests on important precedents such as the Satyam Price Waterhouse scandal, where complicity by external auditors attracted regulatory sanctions.

Further, in standardising these evaluations, many existing frameworks must be considered as well. For instance, Sections 8 and 28 of the DPDP Act already establish foundational points on data regulation and controls where data fiduciaries are involved, providing for key requirements such as accuracy, data minimisation (collecting only the data required), and transparency. This acts as an existing statutory basis for accountability. Implementing mandatory manual audits for certain entities within defined brackets would also improve compliance and accountability; a parallel already exists in Section 65 of the CGST Act, under which an authorised officer has the power to conduct an audit to verify records. This aligns with the principles held in cases such as HGIEPL Joint Venture v. Union of India, where the court emphasised due process, verification and transparency even in automated systems. While it might be argued that tax statutes contain no clauses expressly demanding reasoned decisions, the point is no longer res integra: the requirement of reasons stands settled through many decisions of the courts.

The penalty structure could draw inspiration from Sections 122–125 of the CGST Act, which outline how tax evasion and fraud are penalised. Additionally, if the AI is found to be biased (for instance, a recent case involving China’s DeepSeek AI, which evades questions on controversial topics relating to China), the company or firm involved in preparing it must be made strictly liable, as such bias may lead to significant distortions in tax matters too.

General Issues Associated with AI and Perspectives

Beyond technical challenges, data privacy and security are among the most pressing concerns surrounding AI integration. AI raises important issues regarding the confidentiality and protection of taxpayer information. Ensuring compliance with strict data protection laws and maintaining robust security measures are essential to safeguarding sensitive financial data.

Another major challenge lies in overcoming resistance to change and barriers to adoption. As highlighted by Fatz, Hake, and Fettke, implementing AI and blockchain technologies for decentralised tax validation faces scepticism from both tax authorities and taxpayers. Learnings from international reports, such as Communication and Engagement with SMEs, show that gradual approaches and open dialogue help in furthering this objective. Additionally, addressing the skills gap is vital for the effective application of AI. Integration with existing tax infrastructures also presents a significant hurdle. Ensuring compatibility with legacy systems is essential to prevent disruptions in tax administration.

The financial implications of adopting AI remain important considerations. The costs associated with acquiring hardware, software, and skilled personnel can be substantial, particularly for tax administrations with limited resources. Weighing these expenses against the long-term advantages is a key factor in making informed decisions about their implementation.

Small-scale experiments would offer valuable insights into tax administrations’ use of AI. Since they tend to be controlled, such pilots could examine ethical considerations, AI scalability, integration with existing IT infrastructure, and general governance practices. Decisions about wider deployment could then be made depending on the quality of the data obtained; if data quality is too poor for meaningful conclusions, for instance, the AI project may need reconsideration.

Transparency in algorithmic decision-making is another critical consideration. For instance, under Article L-311.3 of the French Code of Relations between the Public and the Administration, authorities must inform individuals of decisions based on algorithmic processing, and individuals have the right to demand information about such processing. The National Strategy for Artificial Intelligence, published by the NITI Aayog, also provides valuable insights on how AI could be made explainable, thereby preventing the black box phenomenon: a scenario where only the inputs and the results are known, with little to no understanding of anything more.

Various factors must be carefully evaluated, including defining what constitutes “automatic” processes and determining the extent of human involvement, balancing manual versus automated actions, ensuring system interoperability, fulfilling reporting obligations, facilitating effective communication, and regularly assessing the efficiency of procedures.

Conclusion

Hence, one could conclude that AI brings significant efficiency to tax administration. However, throughout this blog, the author has tried to emphasise that there is also a growing need to administer the AI itself. Many countries stand as examples of successful implementation, which could be comparatively explored. Further, in the author’s opinion, given the extent of the consequences that unchecked systems could cause, it should not be a case of ‘hurt first, fix later’. Phenomena such as the black box, discussed in the section above, prove highly harmful to the public at large, and examples from around the world highlight this. The proposed 5-level framework helps, to a certain extent, in structuring the assessment of such systems, even at the most advanced levels of automation (advanced, at least for now!).

(This post has been authored by Ashna Kamuni, fourth-year student at NALSAR, Hyderabad)

CITE AS: Ashna Kamuni ‘To Err Is Robot: Liability When AI Gets Taxes Wrong’ (The Contemporary Law Forum, 12 November 2025) <https://tclf.in/2025/11/12/to-err-is-robot-liability-when-ai-gets-taxes-wrong/>date of access.
