Putting ethical AI at the heart of public service

The UK public sector is integrating AI tools to turbocharge its stalling productivity. Government departments are trialling the AI assistant Humphrey and expanding digital skills training for civil servants, while long-term strategies like the NHS 10-Year Plan put AI and technology at the heart of transformation. Yet while adoption is moving fast, regulation is lagging.

The UK has yet to create its own AI regulations, but with controversy around data use, copyright and creators’ rights, a UK AI Act is expected within the year. The EU’s AI Act and Code of Practice offer a glimpse of what regulation could look like: a framework that prioritises accountability and transparency.

Regulation is also a human capability challenge. Without the skills and culture to apply it, even the most carefully written laws will fail. As the UK moves towards enhancing digital skills in the public sector and developing its own AI regulation, it needs to ensure that ethics are ingrained in the public sector’s efforts to integrate AI responsibly.

Why the public sector must lead on AI ethics

As the provider of public services that have a direct impact on citizens’ lives, often in high-stakes areas like healthcare, policing and social benefits, it is essential that the public sector leads in ethical AI practices and, eventually, regulatory compliance. Unlike in the private sector, mistakes or biases in public sector AI can erode trust in government, create inequities or even cause harm to vulnerable populations.

Without ethical AI literacy among civil servants, AI risks amplifying bias. In healthcare, AI trained on skewed datasets could misdiagnose patients from minority backgrounds. In policing, predictive algorithms can disproportionately target certain communities. And in welfare, automated decision-making can deny support to eligible claimants.

Taking inspiration from the EU’s regulation

The EU AI Act is the world’s most comprehensive legal framework on AI, including strict obligations for companies providing AI systems within the EU. The accompanying Code of Practice is a set of non-legally binding guidelines designed to help companies demonstrate compliance in areas like transparency, copyright and safety.

The development of this Code was contentious, with tech companies warning it would stifle innovation. Meta refused to sign, its Chief Global Affairs Officer, Joel Kaplan, saying the Code’s ‘over-reach will throttle the development and deployment of frontier AI models in Europe’. Google signed the Code, but similarly warned that it went too far.

Despite the controversy, the Code is likely to shape global standards. The UK, which has yet to set out its own AI regulations, could adopt many of its principles to maintain public trust while fostering innovation. The EU’s approach demonstrates that governments can convene industry around common standards that put responsible use at the centre.

On the other hand, the US’s recent AI Action Plan sets out a starkly different approach to regulation. It calls for cutting red tape, makes no mention of copyright law and prioritises innovation and speed over responsibility.

The UK should avoid both extremes. Over-regulation risks stagnation and wasted opportunity, especially in a public sector already struggling with productivity. Under-regulation risks bias, eroded trust and ethical failures that could set back adoption for years.

The opportunity lies in a middle ground: regulation that enables innovation while embedding trust, ethics, and transparency from the outset.

AI ethical literacy matters

In the absence of formal regulation, the UK can start building the right culture by embedding ethics into every digital upskilling programme. Civil servants shouldn’t just learn how AI works, but also when to use it, how to govern it and why responsibility matters. This will prepare the public sector to comply when regulation does arrive, because even the best frameworks will fail if staff lack the skills to implement them.

The government has recognised the urgency of digital skills, introducing an upskilling plan for 7,000 Senior Civil Servants and launching the NHS Digital Academy to educate NHS staff in basic digital and data competence.

However, progress is uneven: only 21% of Senior Civil Servants feel confident in digital and data essentials. The public sector also relies heavily on outside contractors for digital skills, with 55% of digital and data spending in 2023 going to external providers, which limits the build-up of institutional knowledge.

As AI becomes embedded in essential services, we need a workforce capable of spotting AI errors or ethical risks. Without that capability, we could embed discrimination and erode trust at the very moment AI adoption accelerates. Integrating ethical literacy now will allow the public sector to adapt quickly when regulation arrives, rather than rush to retrofit new behaviours later.

The way forward

AI regulation in the UK must be paired with investment in public sector capability, in both technical proficiency and ethical literacy. The public sector should aim to be a model user of AI: transparent in its processes, accountable in its decisions and proactive in identifying risks.

The UK has an opportunity to lead globally by showing that AI can be adopted quickly and responsibly by setting a standard for trust-based innovation that other nations follow. But to seize it, ethics and capability must be treated as core infrastructure, not optional extras.

If the UK can combine the EU’s accountability-first approach with serious skills development, it could lead the world in public sector AI adoption, not only in speed but in trust.

Tony Holmes, Practice Lead for Solutions Architects in the Public Sector, Pluralsight

Tony Holmes is Practice Lead for Solutions Architects in the Public Sector at Pluralsight. A seasoned technology professional, Tony’s journey in the tech world spans over two decades, during which he has worked with some of the world’s most forward-thinking technology organisations, including the BBC, Digital Equipment Corporation, Microsoft Research, Opsware, Hewlett Packard, and Oracle.
