Public Governance in the Age of Artificial Intelligence
Carlos Santiso, Head of Digital, Innovative, and Open Government at the Organisation for Economic Co-operation and Development (OECD), explains why artificial intelligence is like nothing governments have faced before.
Governance Matters: How is artificial intelligence (AI) different from other digital technologies that governments have deployed in recent years?
Carlos Santiso: AI has huge disruptive power and the potential to greatly impact the public sector. First, what sets AI apart from other government technologies is the speed at which it evolves. It feels as though there is a new revolution in the AI space almost daily.
This is partly because the technology can augment itself, improving its own capabilities. AI also feeds on ever-expanding sources of new data, and we have been living through what the United Nations (UN) called a “data revolution”.1 As more and more data is generated – through the growing use of technologies such as the internet of things and through unstructured big data – and as internet speeds increase, the pace of AI’s development only accelerates.
The second aspect that sets AI apart is that it is a general-purpose technology. The scope of its applications is limitless. Most people are likely familiar with ChatGPT or text-to-image generators, but that is only the tip of the iceberg. Microsoft has already warned that we will need proper guardrails for how AI is used in critical infrastructure, such as energy systems or water supplies. That gives us some sense of how broad these applications will be.
The prospects for AI are very exciting, but they can also be quite frightening. AI therefore demands responsible management, particularly by the public sector which has a greater duty of care for the public good. The inherent traits of AI have important implications for governments as users of these innovations.
This is something that governments which are part of the Organisation for Economic Co-operation and Development (OECD) Working Party of Senior Digital Government Officials have long recognised.
Given the rate of evolution and its pervasiveness, governments must learn from past mistakes in rolling out new digital technologies. While governments have to allow for experimentation in applying AI to policy issues, there also needs to be more steering and oversight from the centre of government, with common principles and guidelines.
Are there any cases where AI is already changing the way that governments think and operate, especially in the design and delivery of policies and services?
Until recently, AI’s main role has been in supporting how governments manage their internal processes and deliver public services. For example, it is helping governments become more efficient at targeting welfare programmes and social benefits. The more data that is generated, the more governments can see how well a programme is working for different constituencies. During the COVID-19 pandemic, the Government of Colombia used AI to target welfare payments at those who needed them most. This shows that AI has tremendous potential for making policy delivery more efficient and effective.
There are also interesting lessons to be learnt from the use of AI in the policy areas where it was adopted early, such as anti-fraud and anti-corruption. In the past decade, practitioners in these fields have started to use AI to detect suspicious anomalies in government procurement. The U.K. has been a pioneer with its Connect programme, which searches for cases of tax evasion and avoidance. At its most basic, AI is about pattern recognition. This can be used to look for anomalies not only in people’s income and declarations but also in their way of life or their social media activity. Many of these platforms have been very effective in raising red flags for oversight agencies and law enforcement to follow up on.
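To make the pattern-recognition point concrete, here is a minimal sketch of anomaly detection on synthetic procurement records. It is illustrative only, not a description of Connect or any real government platform, and the features and thresholds (contract value, bidder count, award window) are assumptions for the example:

```python
# Illustrative only: generic anomaly detection on synthetic data, not the
# Connect programme or any real government platform. The features and
# thresholds are assumptions for this example.
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(seed=42)

# Synthetic procurement records: [contract value, number of bidders,
# days between tender notice and award].
typical = rng.normal(loc=[100_000, 5, 30], scale=[20_000, 1.5, 5], size=(500, 3))
unusual = rng.normal(loc=[100_000, 1, 2], scale=[20_000, 0.3, 1], size=(5, 3))
records = np.vstack([typical, unusual])

# An isolation forest flags points that are easy to separate from the rest,
# here contracts with very few bidders awarded unusually quickly.
model = IsolationForest(contamination=0.01, random_state=0).fit(records)
flags = model.predict(records)  # -1 = anomaly, 1 = typical

for i in np.flatnonzero(flags == -1):
    value, bidders, days = records[i]
    print(f"Flag for review: value={value:,.0f}, bidders={bidders:.1f}, "
          f"award window={days:.1f} days")
```

The point is that such a system raises red flags for human follow-up rather than making decisions itself; oversight agencies still investigate each flagged case.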
Operationally, AI means that a lot of time-consuming tasks can be automated, and the use of scarce resources – including employee time – can be optimised and better targeted. This allows budget-constrained governments to gradually shift resources from back-office administrative functions to frontline functions interacting with the public. I think we will see that AI – in improving efficiency and automating back-office work – will give civil servants more time to work directly with members of the public. This will create more meaningful human interactions and engagement, which can help build trust.
However, AI’s growing capabilities mean it can move beyond improving the way governments operate – enhancing the efficiency of service delivery – to changing how governments think about designing more user-centric services that are more tailored to people’s needs.
One key area is how governments design and deliver integrated services around the typical “life events” of a person. AI can anticipate and link up the individual and their family to services they might need at different stages, from cradle to grave. It can support them in identifying and accessing the services they are entitled to in relation to a specific experience, such as having a child, recovering from a disaster, or approaching retirement. In essence, it is about finding ways to make people’s lives easier.
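As a sketch of what such life-event-driven, proactive service matching could look like in its simplest form, the following toy example uses entirely hypothetical events, services, and enrolment data:

```python
# Illustrative only: a toy "life events" matcher. The events, services,
# and enrolment data are hypothetical; a real system would rest on
# verified registry data and the actual legal rules of entitlement.
LIFE_EVENT_SERVICES = {
    "birth_of_child": ["birth registration", "child benefit", "parental leave"],
    "recovering_from_disaster": ["emergency housing", "recovery grant"],
    "approaching_retirement": ["pension estimate", "concession card"],
}

def suggest_services(event: str, already_enrolled: set[str]) -> list[str]:
    """Proactively list entitled services the person has not yet taken up."""
    return [s for s in LIFE_EVENT_SERVICES.get(event, []) if s not in already_enrolled]

# A new parent who has already registered the birth is nudged towards the
# remaining entitlements, rather than having to discover each one alone.
print(suggest_services("birth_of_child", already_enrolled={"birth registration"}))
```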
Bringing solutions like these to life requires more than digital tools – it entails a radical change in bureaucratic cultures and putting the public back at the heart of public services. In fact, it is more about transforming government than simply digitalising it. This structural transformation of government towards citizen-centred services is something we are looking into in the OECD Public Governance Committee with a view to developing a global standard for the design and delivery of user-centred government services.
This is a growing area of interest. In December 2021, U.S. President Joe Biden signed an executive order on a radical transformation of government service delivery to rebuild trust in government, improve lives, and centre federal services around people’s lived experiences. The initiative includes 36 customer experience improvement commitments across 17 federal agencies. “Government must be held accountable for designing and delivering services with a focus on the actual experience of the people whom it is meant to serve,” said Biden.
Are there any examples of current AI initiatives that are not working so well?
The main challenge is that AI programmes are only ever as good as the underlying algorithms and data on which they operate. If governments do not have finely tuned and ethically reviewed algorithms that draw on reliable registries with accurate and representative data, the results will suffer. This is particularly an issue for developing countries, which may lack high-quality data that represents their citizens. This heightens the risk of bias, and there is a danger that these biases then become embedded in the design of the algorithms.
We know that the quality of an algorithm depends heavily on the quality of the data on which it is trained. AI will not be able to capture the inequalities suffered by those who are digitally disenfranchised or who work in the informal economy, for example. These are the new “data poor”. Automated decision-making processes driven by AI therefore risk creating new sources of exclusion and discrimination, and informal and unequal emerging economies are particularly vulnerable.
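One way to surface such exclusion before deployment is to audit an automated screen’s decisions by group. The following is a minimal sketch on synthetic decisions, applying the common “four-fifths” screening heuristic; the groups and numbers are invented for illustration, and real audits would use richer fairness metrics and actual outcome data:

```python
# Illustrative only: auditing an automated screen's decisions by group.
# The decisions and group labels are synthetic; real audits use richer
# fairness metrics and actual outcome data.
from collections import defaultdict

# (group, approved) pairs from a hypothetical automated benefits screen,
# where applicants in the informal economy have thin digital records.
decisions = (
    [("formal_economy", True)] * 80 + [("formal_economy", False)] * 20
    + [("informal_economy", True)] * 40 + [("informal_economy", False)] * 60
)

totals, approvals = defaultdict(int), defaultdict(int)
for group, approved in decisions:
    totals[group] += 1
    approvals[group] += approved

rates = {g: approvals[g] / totals[g] for g in totals}
print("Approval rates:", rates)

# A common screening heuristic (the "four-fifths rule"): flag the system
# if any group's approval rate falls below 80% of the highest rate.
highest = max(rates.values())
for group, rate in rates.items():
    if rate < 0.8 * highest:
        print(f"Potential disparate impact: {group} ({rate:.0%} vs {highest:.0%})")
```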
We have seen cases where the government did not have the required guardrails in place. Six years ago, there was a huge scandal in the Netherlands because an algorithm was found to be biased, wrongly excluding certain people from receiving welfare benefits. When it was discovered, the Government was forced to resign and the tax administration was later fined by the courts. Similarly, in Australia, from 2016 to 2019, around 400,000 welfare recipients were wrongly accused by the Government of misreporting their income and pursued for debts they did not owe. The scandal, which became known as “Robodebt”, highlighted the dangers of governments using poorly designed or trained algorithms and the need for transparency, accountability, and human oversight.
These examples illustrate how essential it is to build guardrails during the development, deployment, and oversight of AI by public entities, especially in sensitive policy domains such as welfare benefits, social protection, immigration policy, or law enforcement.
In 2019, we released the OECD AI Principles to guide good AI governance.2 These include the importance of adhering to human-centred, rights-based values as well as ensuring the robustness, security, and safety of personal data being used. The principle that algorithms must be transparent and explainable enables greater accountability. This could be achieved through, for example, the establishment of open registers of public algorithms and the traceability of decisions informed by automated AI systems. In its Digital Republic law of 2016, France established mandatory open registries of public algorithms and more recently the U.K. adopted a government-wide standard for algorithmic transparency.
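As a sketch of what an open register of public algorithms can contain in practice, here is one hypothetical, machine-readable entry. The fields are assumptions for illustration, loosely inspired by published transparency standards rather than reproducing the French registries or the U.K. standard:

```python
# Illustrative only: one possible shape for an entry in an open register
# of public algorithms. The fields are hypothetical and merely inspired
# by published transparency standards; they reproduce none of them.
import json
from dataclasses import asdict, dataclass

@dataclass
class AlgorithmRegisterEntry:
    name: str
    owning_agency: str
    purpose: str
    decision_role: str            # e.g. "advisory" vs "fully automated"
    training_data_sources: list[str]
    human_oversight: str          # who can review or overturn an output
    impact_assessment_url: str    # link to the published ex-ante assessment
    contact: str

entry = AlgorithmRegisterEntry(
    name="Benefits eligibility pre-screen",
    owning_agency="Example Welfare Agency",
    purpose="Prioritise applications for manual review",
    decision_role="advisory",
    training_data_sources=["national income registry", "benefits history"],
    human_oversight="A caseworker reviews every flagged application",
    impact_assessment_url="https://example.org/assessments/pre-screen",
    contact="transparency@example.org",
)

# Registers are typically published as open, machine-readable data.
print(json.dumps(asdict(entry), indent=2))
```

Publishing entries like this is what makes decisions traceable: anyone affected by an AI-informed decision can see which system was involved, what data fed it, and who is accountable for it.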
The latest report on global trends in government innovation3 from the OECD Observatory of Public Sector Innovation delved further into some of the recent developments in algorithmic transparency and accountability in the public sector. Cities have often been at the forefront of this movement: Amsterdam, Helsinki, and more recently Barcelona have established such open registries. Accountability, and human determination in particular, ultimately rests with the humans who design the algorithms on which AI-informed decisions are built. Human accountability is the bedrock of an ethical AI approach.
How can governments step in to address some of the problems with the use of AI?
One of the challenges for policymakers is that this space is evolving faster than we are able to gather evidence and use it to inform policies and regulations. Regulation is often playing catch-up. One reason is a lack of anticipation of the disruptive effects that emerging technologies will have. Consider ChatGPT: a year ago, few policymakers were thinking about it. Even in the EU, where discussions on a risk-based approach to AI regulation are the most advanced, ChatGPT was not covered until recently.
In this space more than any other, we therefore need agile and forward-looking regulatory approaches. Many countries, and we at the OECD, are thinking about how to design policies that are flexible enough to adapt as the technology evolves without hindering innovation. In 2021, we adopted the OECD recommendation for agile regulatory governance4 to help achieve regulation that can harness innovation whilst managing risks.
Governments also need to devise the appropriate institutional mechanisms to anticipate the risks and opportunities arising from AI, such as the Regulatory Horizons Council in the U.K., and step up international regulatory cooperation. In Europe, the proposal for an AI Act is advancing and, when adopted, will likely generate an important demonstration effect for other countries around the world – akin to the so-called Brussels effect that the European data protection legislation has had in the past. Interestingly, even some of the world’s leading AI companies are now asking for better regulation.
Transparency is a major issue. When the algorithm is a “black box” it is very difficult to oversee it effectively. Many policymakers have drawn parallels with how the pharmaceutical industry has come to be regulated and the need for ex-ante quality control in both the design and delivery of the product. We need robust ethical oversight and mechanisms for conducting both upstream and downstream impact assessments of every algorithm, for example through mandatory ex-ante social and ethical impact assessments, an approach that has been recommended by the United Nations Educational, Scientific, and Cultural Organization (UNESCO).
There are some very interesting policy developments in that regard. For example, the U.K. has adopted standards of algorithmic transparency and is advancing with a proposal for decentralised regulatory oversight. Spain created Europe’s first AI supervisory agency, due to begin operations late this year. The agency will monitor projects within Spain’s National AI Strategy and the development of regulatory sandboxes for AI applications. More recently, the European Commission is setting up a European Centre for Algorithmic Transparency following the adoption of the Digital Services Act.
The debate on the good governance of AI has moved into the political sphere, as part of a broader debate on the protection of human rights in the digital era. Often framed in terms of “digital rights”, this agenda is being advanced through instruments such as Spain’s Charter of Digital Rights, Europe’s Declaration on Digital Rights and Principles, and, more recently, the Ibero-American Charter of Principles and Rights in Digital Environments, adopted last March. At the OECD, we recently launched a global initiative on building trust and reinforcing democracy,5 which includes a focus on digital democracy and the corrosive effects of mis- and disinformation.
AI has become a geopolitical concern. Because it is a general-purpose technology, it can be used for good and bad. AI could be misused by authoritarian governments to control people or to help malign actors spread disinformation. These fears feed into those wider discussions about the future of digital human rights, which incorporate elements such as privacy and freedom of expression. These questions are political more than they are technical, and governments have a responsibility to foster a purposeful dialogue around these issues.
In the next couple of years, we could make decisions that will define our future in very significant ways. How we handle this challenge is also likely to shape the world’s approach to regulating other emerging technologies, such as neurotech and biotech.
What do governance practitioners need to consider in their own application of this technology?
The duty to ensure that AI is being used effectively and responsibly within governments has too often been overlooked. Governments have huge influence over people’s lives and that brings with it a duty of care – one that goes above that of companies. In the U.S., for instance, courts have used AI algorithms during the sentencing process to help assess the risk an offender poses to society. This is a hugely consequential decision and we know that AI can absorb the biases of its designers and data sources. It goes to show how important it is that governments are role models.
To ensure that they are setting the right example, governments need two things: a clear definition of the standards they want to uphold and the correct monitoring and enforcement mechanisms. These are important issues that governments are addressing through mechanisms such as the OECD Working Party on Digital Government. Many governments around the world are developing rights-based standards and governance arrangements to better oversee the deployment of AI across the public sector, often adopting AI strategies specifically intended for the public sector.
Our own work on AI in the public sector6 at the OECD Observatory of Public Sector Innovation shows that there is a growing trend in governments’ AI strategies for thinking about their role as both regulators and users of AI. We recently published a report with CAF, the Development Bank of Latin America, on the strategic and responsible use of AI in the public sector which revealed a similar trend in Latin America. In fact, Colombia has been one of the first OECD countries to adopt an ethical framework for its AI strategy.7
Some countries are talking about having an AI agency run on similar lines to their data protection agency, a route that the EU’s draft AI Act proposes. One of the roles of these agencies might be to lead detailed impact assessments, test potential solutions, and undertake research on the positive and negative effects before anything is rolled out.
Another key strategy is to build up the internal capabilities within government so that not all AI-based tech development is being outsourced to private sector partners. For example, Singapore’s GovTech agency model allows for more effective work with agile startups.8 Governments should have the capacity to “insource” so that they can better understand, develop, and govern these technologies. Indeed, at the OECD, we are exploring how to strengthen new public-private partnerships between government bureaucracies and agile govtech startups developing AI-based solutions, especially for city governments.9 In Argentina’s city of Córdoba, for instance, the CorLab and its govtech fund have been particularly innovative in fostering such partnerships.10
Governments can also influence the future development of AI solutions through their public procurement might, embedding core principles in their guidelines for acquiring this technology, as has been done in the U.K. and the U.S. By insisting on certain standards from contractors they can set an example that will influence the behaviour of the wider marketplace.
How can we ensure that the lessons from negative experiences with AI are being shared between governments across the world?
At the national level, one challenge is that the development and deployment of AI within governments is currently quite decentralised, often with little oversight from the national government. At the OECD, we have a working party on digital government, which includes a working group on emerging technology. This has provided a useful space for sharing approaches to the governance arrangements for AI.
At the international level, there are no global bodies for creating regulations or setting standards. Different countries come into this with very different backgrounds and perspectives, leading to regulatory fragmentation. Yet, as we know, the technology space operates across borders, so co-ordination is very important. Questions of data sharing and transfer have a particularly international dimension.
In essence, there is currently a gap in the global governance of this critical technology. At the OECD, we have sought to address this with our AI Principles. We have also established the AI Policy Observatory as a platform for information, dialogue, and collaboration.11 In 2020, 15 governments created the Global Partnership on Artificial Intelligence as a multi-stakeholder initiative to foster international co-operation.12 More recently, in May 2023, we also saw the first G7 meeting on AI which was an exciting step towards more global co-ordination. We need to keep fostering dialogue and building the evidence base that is required to create good policies and regulations.
What are the hallmarks of a good AI strategy?
More than 60 countries now have a dedicated AI strategy, some of which include a focus on its use within the public sector. Often, considerations for AI form part of broader strategies governing digital transformation or data governance, though it is important that these approaches explicitly address the specifics of deploying AI within government.
When it comes to AI, we are witnessing a bit of a pendulum effect. Attitudes towards AI swing between seeing it as the answer to everything – which was particularly the case a couple of years ago before the pandemic – to regarding it as a terrifying and even an existential threat to humanity. Approaches to AI governance need to find the right balance between these two extremes. There are risks, but we also want to embrace change in terms of improving knowledge, services, and the ability of governments to serve the public.
One issue that is central to a good AI strategy is the acquisition of digital skills. There are a lot of discussions about the future of work in the digital era and concerns about the net effect of AI. What is certain, however, is that it is imperative to upgrade digital and data skills and to think more about working in the public sector in a world of AI. A key lesson of the past decade and the many failures of digital transformation projects is that governments cannot outsource their ability to set up and execute these programmes. Digital skills have become essential government assets. This is why we at the OECD have developed a framework to guide governments’ digital skills strategies.13
Essentially, it is important that we identify and manage risks, particularly around unethical uses. There are major issues around cyber security, particularly in relation to people’s data and to the involvement of AI in managing critical infrastructure. We need the right level of oversight of these systems, mechanisms to ensure they are designed properly and transparently so they function as intended, and a series of safeguards around privacy and other rights. Ultimately, we need to ensure that AI is something that works for us, not something that happens to us.
Endnotes
1. https://www.undatarevolution.org/report/
2. https://oecd.ai/en/ai-principles
3. https://oecd-opsi.org/publications/trends-2023/
4. https://legalinstruments.oecd.org/en/instruments/OECD-LEGAL-0464
5. https://www.oecd.org/governance/reinforcing-democracy/
6. https://oecd-opsi.org/work-areas/ai/
7. https://inteligenciaartificial.gov.co/static/img/MARCO_ETICO.pdf
8. https://www.tech.gov.sg/
9. https://oecd-opsi.org/blog/public-admins-digital-startups/
10. https://oecd-opsi.org/innovations/cordoba-smart-city-fund/
11. https://oecd.ai/en/
12. https://gpai.ai/
13. https://www.oecd.org/employment/the-oecd-framework-for-digital-talent-and-skills-in-the-public-sector-4e7c3f58-en.htm
Carlos Santiso is Head of Division for Digital, Innovative, and Open Government at the Organisation for Economic Co-operation and Development (OECD). He is also a member of the advisory group on public governance of the United Nations and of the World Economic Forum’s advisory council on anti-corruption. He has previously held managerial roles at the Development Bank of Latin America, the Inter-American Development Bank, and the African Development Bank, served in the British Department for International Development, and was an advisor to the Cabinet of the French Prime Minister.