Taxpayer Rights and AI: Patient Information Leaflet for Good Administration and Due Process of Law
Artificial Intelligence (AI) is everywhere — and in the world of taxation, it’s increasingly prescribed as a remedy for the chronic conditions of tax administrations and courts — often overwhelmed, slow, and under-resourced. From automated assessments to AI-driven audits, this powerful medicine promises greater speed and efficiency.
Yet taxpayers are entitled to fundamental rights — including Good Administration and Due Process of Law. Like any treatment, AI requires a proper diagnosis and dosage. It may alleviate some of the tax system’s symptoms, but it can also provoke adverse reactions — from opacity to legal inflammation. The key is finding the right therapeutic balance: innovation must not come at the cost of fundamental rights.
This text is structured as a patient information leaflet: it asks whether AI in taxation can be a digital remedy for fairness and legal certainty — or whether, if misused, it risks becoming a prescription with serious normative side effects.
1. What Is AI in Taxation — and Who Is It For?
AI refers to systems capable of processing large volumes of data to support — or fully automate — decision-making (ADM)[1]. According to the EU AI Act, AI systems operate with varying degrees of autonomy and can generate outputs such as recommendations or decisions that influence the environment[2].
In the tax ecosystem, AI is designed for:
- Tax Authorities: to streamline procedures, automate routine acts, detect fraud, and allocate resources more efficiently — all in the public interest.
- Tax Courts: to improve judicial capacity through tools supporting legal research, document review, or case triage — albeit at a later stage in the tax process.
- Taxpayers: to assist with compliance and tax planning. Still, this leaflet focuses on institutional use, where AI can quietly shift the tax balance and affect taxpayers’ rights[3].
In taxation, AI is increasingly administered as a remedy for long-standing institutional diseases. Its use targets chronic inefficiencies by addressing structural challenges: resource constraints that demand cost reduction and selective allocation of audit and enforcement efforts[4]; information asymmetry (since only taxpayers hold full knowledge of taxable facts); and the impracticality of comprehensive pre-assessment in large-scale self-assessment systems[5]. ADM tools, whether used in enforcement, compliance, or adjudication, are becoming integral to the full tax cycle — streamlining operations, but also reshaping how power is exercised and rights are safeguarded.
Common use cases include:
| Use Case | Function / Benefit |
| --- | --- |
| Automated Audits & Inspections | Uses risk-scoring to flag high-risk files, enhancing fraud detection. |
| Automatic Acts & Notices | Sends tax assessments, reminders, and penalties automatically. |
| Fraud Detection | Identifies evasive behaviour via data correlation and anomaly detection. |
| Pre-Filled Returns & Forms | Helps compliance by proposing draft returns using prior information. |
| Personalised Taxpayer Assistance | Uses chatbots or virtual agents[6], tailored to individual queries and obligations. |
| Document & Data Processing | Enables paperless workflows and cross-checks across databases. |
| Deadline Monitoring | Tracks legal deadlines to prevent prescription or caducity. |
| Legal Research & Decision Support | Finds case law and doctrine for consistent reasoning in administrative/judicial acts. |
| Case Triage & Resource Allocation | Sorts files by complexity or urgency to guide resource deployment. |
| Consistency Checks | Compares outcomes to detect discrepancies and enhance coherence. |
| Cross-Border Cooperation | Facilitates international data exchange for coordinated action. |
These tools can undoubtedly improve procedural hygiene, but their use must be carefully dosed. This isn’t just automation — it’s a recalibration of tax power. The principles of legality, proportionality, transparency, and human oversight must remain in every dose.
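To make the risk-scoring and triage use cases above concrete, the following is a minimal, purely illustrative sketch. All field names, weights, and thresholds are hypothetical assumptions, not drawn from any real tax authority's system; real deployments are far more complex and legally constrained. Note that the routine only refers files to a human auditor — it never decides an audit itself.

```python
# Illustrative sketch only: a toy rule-based risk-scoring routine for tax
# file triage. Field names, weights, and thresholds are hypothetical.

def risk_score(tax_file: dict) -> float:
    """Combine hypothetical risk indicators into a score between 0 and 1."""
    score = 0.0
    if tax_file.get("late_filings", 0) > 2:
        score += 0.3  # repeated late filing as a (hypothetical) indicator
    if tax_file.get("declared_income", 0) < tax_file.get("third_party_reports", 0):
        score += 0.5  # mismatch with third-party data as a red flag
    if tax_file.get("cash_intensive_sector", False):
        score += 0.2  # sector-level risk factor
    return min(score, 1.0)

def triage(tax_file: dict, threshold: float = 0.5) -> str:
    """Flag files for HUMAN review; the tool itself never decides an audit."""
    return "refer_to_human_auditor" if risk_score(tax_file) >= threshold else "routine"
```

The design choice worth noting is that the output is a referral, not a determination: the legal qualification of the flagged facts remains a human task, in line with the oversight principles discussed below.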
2. Legal Ingredients
AI in tax procedures must follow a regulated prescription. A blend of European, international, and national hard and soft law instruments provides the active ingredients for a digital transition grounded in the rule of law. These instruments ensure that automation — especially by public authorities — respects transparency, accountability, and human dignity. The table below lists the main legal substances, grouped by origin and legal scope:
| Origin & Scope | Instrument | Key Principles / Provisions |
| --- | --- | --- |
| European Frameworks on AI and Digital Governance | EU AI Act | Risk-based regime (Art. 6, Annex III); Human oversight (Art. 14); Transparency (Arts. 13, 15); Fundamental Rights Impact Assessment (Art. 27); Human-centricity (Art. 3(1), Recital 5). |
| | European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems[7] | Five principles: Fundamental rights; Non-discrimination; Data quality & security; Transparency; Human oversight. |
| | European Declaration on Digital Rights and Principles for the Digital Decade[8] | Human-centric digital transformation; Fairness, inclusion, and dignity (Chs. I–IV). |
| Examples of National Frameworks and Guidelines on AI and Digital Governance | Portuguese Charter of Human Rights in the Digital Age[9] | Arts. 9 & 19: AI must respect dignity, non-discrimination, and due process. |
| | Spanish Tax Agency Artificial Intelligence Strategy[10] | Human oversight, anti-fraud focus, ethical use, multidisciplinary development, secure & high-quality data. |
| | Preliminary version of the Portuguese Ethical Charter for the Use of Artificial Intelligence in Administrative and Tax Courts[11] | Emphasises digital humanism, transparency, and judicial autonomy in AI-assisted adjudication. |
| General European Fundamental Rights Instruments | Charter of Fundamental Rights of the EU (CFREU)[13] | Dignity (Art. 1); Good administration (Art. 41); Fair trial (Art. 47). |
| | European Convention on Human Rights (ECHR) | Fair trial (Art. 6); Private life protection (Art. 8) — critical in AI’s data use. |
| | General Data Protection Regulation (GDPR)[12] | Lawful, fair, transparent processing (Arts. 5, 13–15); Protection from fully automated decisions (Art. 22). |
| Examples of National Fundamental Rights Instruments | National constitutions[14] | Typically guarantee dignity, due process, and good administration — core safeguards in digital public law. |
| | Portuguese General Tax Law | Art. 60-A(1): Authorises ICT (incl. AI) use, within procedural law. |
| | Spanish General Tax Law | Art. 96(2): Requires digital tools (incl. AI) to respect legal procedure and taxpayer rights. |
When applied in the right dose — and under effective oversight — these legal frameworks ensure that AI boosts procedural efficiency without compromising taxpayer rights.
3. Potential Side Effects & Warnings
Like any active substance, AI in taxation may produce side effects. Most are manageable, some tolerable, and a few — especially those affecting taxpayer rights — can trigger serious legal reactions.
As Luís Pica observes, intelligent tax systems mark a shift: traditional, discretionary procedures give way to “anthropomorphised” logic, where machines simulate human judgment through coded inference[15]. In such systems, input data becomes output decisions, often bypassing the interpretative steps where legal meaning and proportionality are formed — a shortcut that risks undermining core procedural guarantees.
Therefore, the main risks and required precautions are:
- Interpretation Deficiency: AI detects correlations, not causalities[16]. Tax law depends on open-textured legal concepts — “abuse,” “reasonableness,” “economic substance” — which require human interpretation[17]. Replacing legal judgment with statistical inference risks breaching the principle of legality. Legal qualification demands contextual reasoning, not just pattern recognition.
- Opacity & the Black Box Problem: Many AI systems operate as black boxes, producing decisions whose reasoning cannot be explained. But the duty to give reasons is foundational to both the right to good administration and the rights of the defence. A decision that does not present its factual and legal reasons cannot be meaningfully contested.
- Bias, Representation, and Equal Treatment: Incomplete or skewed datasets may replicate historical inequalities or overlook representativeness, leading to unfair treatment. Independent audits and risk-based oversight are essential to avoid disproportionate effects and uphold equality before the law[18]. An example of this side effect is the Dutch childcare benefits scandal, in which a fraud-detection system wrongfully flagged over 20,000 families — many with dual citizenship or immigrant backgrounds — producing systemic discrimination and legal harm to vulnerable people[19].
- Data Protection and Dignity: AI feeds on large volumes of personal data. Without compliance with the GDPR and national charters, the dignity and privacy of taxpayers are at risk. Lawful, transparent processing is not optional — it is a precondition for digital legitimacy.
- Errors and Injustice: Overreliance on automated outputs without verification can propagate structural errors[20]. For instance, in Portugal, a Lisbon Court of Appeal (non-tax) ruling drafted using generative AI included fictitious citations and unreliable legal reasoning[21]. Reputational damage aside, it highlights a deeper threat: undermining trust in the legal system. Taxpayers are entitled to accountable, human-reviewed decisions.
- Loss of Individualisation: Tax adjudication cannot be fully standardised. In Cityland (C‑164/24)[22], the ECJ ruled that automatic VAT deregistration, without contextual analysis, violated the principle of good administration. Even non-AI automation can threaten fundamental rights when it overrides proportionality and human discretion. AI should support — not erase — contextual justice.
- AI Literacy and Institutional Readiness: For AI to be lawful and effective, users — especially decision-makers — must have proper AI training and digital literacy[23]. Digital competence is not a bonus: it is a procedural safeguard, ensuring that AI is applied with full awareness of its limitations and legal implications.
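The bias risk described above is one side effect that independent audits can at least partially measure. The sketch below is a minimal, hypothetical illustration of a disparate-impact check on a fraud-flagging system: it compares flag rates across groups. The group labels and the 0.8 ("four-fifths") ratio threshold are assumptions chosen for illustration, not a legal standard drawn from this text.

```python
# Illustrative sketch only: a minimal disparate-impact check of the kind an
# independent audit might run on a fraud-flagging system. Group labels and
# the 0.8 ratio threshold are hypothetical assumptions.

def flag_rates(records: list) -> dict:
    """Compute the share of flagged files per group."""
    totals, flagged = {}, {}
    for r in records:
        g = r["group"]
        totals[g] = totals.get(g, 0) + 1
        flagged[g] = flagged.get(g, 0) + (1 if r["flagged"] else 0)
    return {g: flagged[g] / totals[g] for g in totals}

def disparate_impact(records: list, ratio_threshold: float = 0.8) -> bool:
    """True if the lowest group flag rate falls below
    ratio_threshold x the highest group flag rate."""
    rates = flag_rates(records)
    return min(rates.values()) < ratio_threshold * max(rates.values())
```

A check like this cannot prove a system fair — it only surfaces statistical disparities that then require human, contextual, and legal assessment of the kind the Dutch benefits scandal showed to be indispensable.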
4. Recommended Use & Dosage
AI is not a miracle cure. But when administered responsibly — with human judgment as the active ingredient — it can support a healthier, more efficient, and more transparent tax system. For AI to align with a legitimate, rights-based model of tax governance, it must be robust, ethical, and trustworthy.
The legal prescription is clear: AI must serve good administration, due process, and human dignity — never replace them. As Sánchez López suggests, the path forward requires a robust concept of digital good administration, ensuring that AI serves the public interest without undermining constitutional guarantees[24]. Nathwani reinforces that automation must be paired with justiciable safeguards. AI must operate under a balanced model, enhancing but never substituting human legal reasoning.
To ensure proper use, follow these dosage guidelines:
| Dosage Guideline | Purpose & Legal Rationale |
| --- | --- |
| Governance Frameworks | AI must operate under robust legal and ethical frameworks, with independent supervision mechanisms ensuring procedural fairness and accountability. |
| Impact Assessments | Fundamental Rights Impact Assessments (cf. Art. 27 EU AI Act) should precede deployment, identifying and mitigating risks to taxpayer rights. |
| Data Protection | Personal and sensitive taxpayer data must be processed lawfully, transparently, and securely, in accordance with the GDPR and digital rights charters. |
| Transparency & Contestability | AI-generated decisions must be explainable and reviewable. If a decision affects a taxpayer, it must be traceable to legal standards, not just statistical models. |
| Human-in-the-Loop Safeguards | There must always be a path to human appeal. Whether AI suggests or decides, human oversight must ensure fairness, proportionality, and legality. |
| AI Literacy & Training | All institutional users — judges, tax officials, auditors — must be trained in AI’s capabilities and limits. Taxpayers should also understand how AI affects them. |
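The human-in-the-loop and contestability safeguards can be pictured as a simple procedural gate. The following sketch is purely illustrative, written in the spirit of Art. 22 GDPR and Art. 14 of the EU AI Act; the data structure and function names are hypothetical, not a real system's design.

```python
# Illustrative sketch only: a human-in-the-loop gate in the spirit of
# Art. 22 GDPR and Art. 14 EU AI Act. All names are hypothetical.

from dataclasses import dataclass, field

@dataclass
class DraftDecision:
    taxpayer_id: str
    adverse: bool                       # does the draft negatively affect the taxpayer?
    reasons: list = field(default_factory=list)  # factual and legal grounds

def finalise(draft: DraftDecision, human_approved: bool = False) -> str:
    """An AI draft becomes final only if it is reasoned and, when adverse,
    expressly approved by a human official."""
    if not draft.reasons:
        # unreasoned decisions cannot be meaningfully contested
        return "rejected: missing statement of reasons"
    if draft.adverse and not human_approved:
        return "escalated: adverse decision requires human review"
    return "final"
```

The point of the sketch is the ordering of the checks: reason-giving comes first (a precondition of contestability), and no adverse outcome can bypass the human gate, however the draft was produced.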
5. Contact for Assistance
If you’re unsure whether AI is properly “treating” your taxpayer rights — or if your tax file shows signs of legal fever — consult your Digital Rights Legal Internist. Like a family doctor for algorithmic conditions, they’re here to diagnose ambiguities, prescribe transparency, and monitor the vital signs of legality in your tax record.
And for those administering AI in tax matters: remember — this tool is no miracle cure. It supports, but never replaces, critical legal judgment. When in doubt, always request a second (human) opinion.
Potential side effects include procedural opacity, constitutional inflammation, or a sudden onset of legal uncertainty. If symptoms persist, seek immediate interpretative attention.
[1] ADM refers to decisions or processes made — in whole or in part — without human intervention. It includes both traditional rules-based systems and advanced AI models (e.g., machine learning or neural networks). See Kunal Nathwani, Artificial Intelligence in Automated Decision-Making in Tax Administration: The Case for Legal, Justiciable and Enforceable Safeguards (London: Institute for Fiscal Studies, 2024), ISBN 978-1-80103-201-8, p. 5.
[2] Article 3(1) of the Regulation (EU) 2024/1689 of the European Parliament and of the Council of 13 June 2024 laying down harmonised rules on artificial intelligence and amending Regulations (EC) No 300/2008, (EU) No 167/2013, (EU) No 168/2013, (EU) 2018/858, (EU) 2018/1139 and (EU) 2019/2144 and Directives 2014/90/EU, (EU) 2016/797 and (EU) 2020/1828 (Artificial Intelligence Act).
[3] As Pica argues, in a digitally transformed tax system based on collaborative compliance, taxpayers risk being marginalised in the core interpretative and decision-making processes — reduced to data providers, while “Fiscal Artificial Intelligence” applies the law. See Luís Manuel Pica, “Breves notas sobre a antropomorfização da administração tributária artificialmente inteligente e o (novo) modelo de sistema de gestão fiscal,” Revista Jurídica Portucalense 34 (2023): https://doi.org/10.34625/issn.2183-2705(34)2023.ic-07, p. 134.
[4] Kunal Nathwani, op. cit., pp. 6–7. AI systems act as enhanced statisticians: they process larger datasets, operate faster, access more diverse and interconnected information, and efficiently cross-check data with minimal delay — often surpassing human capabilities in technical performance. See Tina Ehrke-Rabel and Barbara Gunacker-Slawitsch, “Tax Administration AI: The Holy Grail to Overcome Information Asymmetry in Tax Enforcement?,” Intertax 53, no. 2 (2025): 128–140, p. 135.
[5] See Tina Ehrke-Rabel and Barbara Gunacker-Slawitsch, op. cit., pp. 132–133.
[6] I.e., conversational AI systems — such as chatbots — that simulate human dialogue to provide tailored responses. For example, the Portuguese Tax Authority launched “cATia” in 2020 to assist individual taxpayers with basic queries via its official website.
[7] Council of Europe. European Ethical Charter on the Use of Artificial Intelligence in Judicial Systems and Their Environment. Strasbourg: European Commission for the Efficiency of Justice (CEPEJ), 2018. https://rm.coe.int/ethical-charter-en-for-publication-4-december-2018/16808f699c.
[8] European Declaration on Digital Rights and Principles for the Digital Decade, 2023/C 23/01.
[9] Law No. 27/2021, of 17 May.
[10] See Agencia Estatal de Administración Tributaria. Estrategia de Inteligencia Artificial. Madrid: Agencia Tributaria, May 27, 2024. https://sede.agenciatributaria.gob.es/static_files/AEAT_Intranet/Gabinete/Estrategia_IA.pdf.
[11] Conselho Superior dos Tribunais Administrativos e Fiscais. Carta Ética para a Utilização da Inteligência Artificial nos Tribunais Administrativos e Fiscais (Preliminary Version for Public Consultation). Deliberation of 25 March 2025, published at https://cstaf.info/wp-content/uploads/2025/03/T006_Carta_Etica_Draft-8.pdf.
[12] Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation).
[13] Charterpedia, hosted by the EU Agency for Fundamental Rights, offers article-by-article analysis of the EU Charter of Fundamental Rights, including legal explanations, case law, and national and international parallels — useful for exploring how rights such as good administration and due process are protected across jurisdictions. Available at: https://fra.europa.eu/en.
[14] For instance, in the Portuguese Constitution human dignity is enshrined in Article 1, due process in Article 20, and good administration in Articles 266 and 268.
[15] Luís Manuel Pica, op. cit., p. 132.
[16] Tina Ehrke-Rabel and Barbara Gunacker-Slawitsch, op. cit., pp. 132–136.
[17] And as Miguel Correia reminds us: where law is open-ended, human discretion isn’t optional — it’s the engine of legality – Miguel Gonçalves Correia, “Inteligência Artificial e Administração Tributária – A Perspetiva Portuguesa,” in Memorias de las XXXII Jornadas Latinoamericanas de Derecho Tributario: Víctor Manuel Rojas Amandi, coord. Ana María Streicher, Pablo José Areco e Enrique J. Sánchez Romero (Valência: Tirant lo Blanch, 2024), pp. 2217–2218.
[18] For instance, the US Treasury Inspector General for Tax Administration has warned that AI may amplify existing biases, posing risks to civil liberties, ethics, and social equity. The report calls for stronger oversight structures to ensure ethical, secure, and accountable deployment of AI in tax administration. See Treasury Inspector General for Tax Administration. Governance Efforts Should Be Accelerated To Ensure the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence. Report Number 2025-IE-R003. November 12, 2024. https://www.tigta.gov/reports/inspection-evaluation/governance-efforts-should-be-accelerated-ensure-safe-secure-and
[19] See Björn ten Seldam and Alex Brenninkmeijer. The Dutch Benefits Scandal: A Cautionary Tale for Algorithmic Enforcement. EU Law Enforcement, April 30, 2021. https://eulawenforcement.com/?p=7941
[20] As recognised in Recital 75 of the EU AI Act, technical robustness is essential to prevent harmful or erroneous outputs in high-risk AI systems.
[21] See João Paulo Godinho. “Conselho Superior da Magistratura abre averiguação a suposto uso de IA num acórdão da Relação de Lisboa.” Observador, February 11, 2025. https://observador.pt/2025/02/11/conselho-superior-da-magistratura-abre-averiguacao-a-suposto-uso-de-ia-num-acordao-da-relacao-de-lisboa/ (in Portuguese).
[22] Court of Justice of the European Union, Cityland EOOD v. Direktor na Direktsia ‘Obzhalvane i danachno-osiguritelna praktika’ – Veliko Tarnovo, Case C‑164/24, ECLI:EU:C:2025:XXX, Judgment of 3 April 2025.
[23] Article 4 of the EU AI Act affirms this requirement.
[24] María Esther Sánchez López, “Hacia la construcción del principio de buena administración digital. Retos y oportunidades,” Quincena Fiscal, no. 6 (April 2025), Editorial Aranzadi, p. 14.
Marta Carmo
June 2025