The OECD released a report titled “Governing with Artificial Intelligence: The State of Play and Way Forward in Core Government Functions” on 18 September 2025, highlighting key trends, challenges, and policy initiatives in public-sector Artificial Intelligence (AI) use, with case studies across 11 core government areas—including a special focus on tax administration.
AI offers tremendous potential for governments. It can help them automate and tailor public services, improve decision making, detect fraud, and enrich civil servants’ work and learning. These benefits, however, hinge on managing risks: skewed data in AI systems can cause harmful decisions; lack of transparency erodes accountability; and overreliance can widen digital divides and propagate errors, reducing citizen trust. These trade-offs must also account for governments’ specific constraints: public-sector adoption trails much of the private sector, slowed by skill gaps, legacy IT systems, limited data, tight budgets, and stricter requirements for privacy, transparency, and representation.
AI is one of the most transformative forces of the 21st century, and it is becoming an integral part of digital government worldwide. Governments’ use of AI can facilitate automated and tailored internal processes and public services, foster better decision making and forecasting, strengthen fraud detection, and improve public servants’ job quality and learning – all with tangible impacts. For example, the Alan Turing Institute estimates that AI could automate 84% of repetitive public service transactions in the United Kingdom, saving the equivalent of 1,200 person-years of work annually. Despite this promise, government AI use trails the private sector.
Key findings: How AI can serve citizens
The OECD conducted in-depth research on AI in 11 core functions of government, drawing on 200 use cases. The results suggest that AI is most prevalent, by total use cases, in public services, justice, and civic participation, and least prevalent in policy evaluation, tax administration, and civil service reform. In between are public procurement, financial management, fighting corruption and promoting public integrity, and regulatory design and delivery. One possible explanation for this distribution is that some functions encompass a wider variety of uses (public services) while others are narrower (civil service reform, tax administration).
In addition, some functions face more regulatory constraints (e.g. tax administration, given rules on the use of tax data), while others face fewer implementation challenges and can mature faster (civic participation). In some functions, such as justice administration, public demands and growing transaction backlogs may spur AI adoption as an opportunity to tackle urgent challenges.
AI use is more prevalent in internal operations and public service delivery, and less prominent in government oversight. Less use is also seen in policymaking, consistent with previous OECD analysis. Use cases often rely on classic rules-based approaches or established machine learning (ML) techniques, with generative AI (GenAI), including large language models (LLMs), being less common.
In terms of benefits, the largest share of cases seeks to promote automated, streamlined and tailored processes and services; followed by better decision making and forecasting; and enhanced accountability and anomaly detection. A few cases seek to unlock new opportunities for external stakeholders (e.g. citizens, businesses) through access to government-provided AI systems, but further efforts may be warranted.
Risks of AI use in government
There is no such thing as risk-free AI adoption: unlocking AI’s benefits requires mitigating its risks. Biased algorithms can produce adverse outcomes; AI misuse can infringe on or prevent the free exercise of human rights; insufficient transparency, explainability and public understanding of AI can erode accountability and provoke public resistance; and over-reliance on AI can widen digital divides and allow systemic errors to propagate, weakening citizen trust in government. Such risks may be amplified in countries lacking mechanisms to guarantee the exercise, protection and promotion of human rights, or may result from AI misuse by individual public servants. Public service workforce displacement could also occur if governments seek to replace rather than augment civil servants’ capabilities.