
Catch 22 - The Key Challenges Ahead for the UK’s AI Policy

Written by Dhruv Banerjee (General Course)


The King’s Speech last year contained a moment that seems particularly compelling in hindsight: a suggestion about AI in the UK, raising the possibility of “appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence”. This was notable because, after years of light-touch AI regulation under Conservative governments, it seemed at the time that the Starmer administration would take a different approach: more restrictive regulation, perhaps inspired by the 2024 EU AI Act, which classifies AI uses by risk level. The possibility of the UK’s and the EU’s AI policies beginning to converge seemed a real one.

 

As the months have gone by, however, this has proven not to be the case. Although the finer details have differed, the Johnson, Sunak, and Starmer administrations have adopted a broadly similar overarching motto for AI regulation: an approach that is self-described as “pro-innovation.” The recent AI Opportunities Action Plan, for instance, seeks to deploy AI further across a range of public services and to foster AI “growth zones”. This marks, once again, a clear divergence of the UK’s strategy from ever-growing Brussels red tape. On one hand, this gives the UK the advantage of greater flexibility in dealing with emerging risks. On the other hand, it also comes with a bag full of challenges.


The Four Key Challenges Ahead

Firstly, ensuring strategic coherence on AI regulation across different sectoral regulators is important. One of the most distinctive aspects of AI regulation in the UK is that sectoral regulators are responsible for implementing the guardrails. Yes, there is a set of principles provided by the White Paper, which include safety, transparency, fairness, accountability, and contestability. However, the various sectoral regulators ultimately retain considerable autonomy in determining how best to implement those principles. The Medicines and Healthcare products Regulatory Agency (MHRA), for instance, is aiming for “risk proportionate regulation of AI” in medicine. The Financial Conduct Authority (FCA) has prioritised building an “in-depth understanding” of AI deployment in financial markets. The Information Commissioner’s Office (ICO) focuses in particular on the risks associated with the data used in AI. These are just some of the agencies responsible.

 

In April 2024, all of these regulators released documents detailing their respective strategic approaches. The question that needs to be posed is a simple one: can these different regulators, with their differing approaches towards AI, produce a cohesive regulatory structure for the UK? I spoke to numerous policy researchers at the LSE, and the broad consensus seems to be that “only time will tell.” Ultimately, the Government must find a way to ensure that approaches to AI regulation across sectors do not become disjointed.


Secondly, the public sector must actually be equipped to deploy AI well. Several studies have sought to demonstrate that AI could be a huge technological enabler for the “bureaucratic machine”. A report by the Alan Turing Institute estimates that approximately 143 million tasks undertaken by the bureaucracy are completely automatable, meaning that AI could increase productivity within these services. Notably, in January 2025 the Government rolled out a set of AI tools known as “Humphrey”, intended to help civil servants optimise the functioning of public services, speed things up, and reduce costs. On paper, all of this sounds great, and it may even prove quite beneficial. But it is also important to note that things are not so straightforward.

 

For AI tools to genuinely have a positive impact on public services, the people using these tools need to know how to use them properly. I remember a professor of mine quipping that “you cannot just build the roof without a solid foundation”. A report released by the National Audit Office in 2024 found that, of all the government bodies that had deployed AI in their workflows, only 8% had a proper AI strategy. The absence of a sufficient understanding of deployed AI carries some pretty glaring risks. Research suggests that humans are more likely to believe AI, including disinformation generated by AI, possibly because AI-structured responses give the illusion of confidence. Departments lacking AI training might therefore not be able to provide the quality of human oversight that these tools require.

 

Thirdly, the evaluation of policies incorporating AI in public services is critical. Here it is important to take lessons from prior attempts at modernising the Government. A great example to consider is New Public Management (NPM). Over the 1980s and 1990s, there was an increasing push to modernise the government and the bureaucracy in the UK. This involved streamlining the public sector by essentially running it like a private sector undertaking. It also involved reducing the number of people working in these services and instead outsourcing work to management consultants who could provide recommendations on efficient cost-cutting.

 

While on paper this sounds like a good approach from the perspective of efficiency, in reality it was far from successful. With the benefit of hindsight, study after study has shown that the NPM approach actually ended up decreasing efficiency and increasing costs across various sectors. More proactive evaluation, and genuine attention to the evidence, could have averted the issues that arose. These are key lessons to keep in mind when incorporating AI into public services. After all, however sound a policy might seem in theory, being prepared to adapt if it does not perform well is essential.

 

Finally, the Government and regulators must ensure that they incorporate public opinion on AI. It is essential to keep a temperature check on how comfortable citizens are with different AI models and their specific use cases in public services. According to polls, around 87% of people in the UK would back stricter laws on AI, specifically ones that would ensure AI models are safe before deployment. A December 2024 survey on public perception of AI in the UK noted that “public anxiety around AI remains high”, with the words people most commonly associate with AI being negative ones. People are also largely uncomfortable with the idea of AI replacing human decision-making in important or high-stakes areas, and many believe that AI is likely to affect different groups disproportionately.

 

There are two things to consider here. One, building public trust in AI is critical for any AI policy to succeed. In this regard, it is important to be more transparent about precisely what kinds of decisions AI is being used for within public services, and to disseminate this information in a manner that is easily accessible to those affected by these decisions. Two, making a genuine effort to understand the public’s key concerns about AI, and considering them when updating legislation, is also essential. For example, the Tracker survey reports published by the Department for Science, Innovation and Technology should be referred to when drafting policy. Best practices in other countries should also be studied, including Finland’s active efforts to engage its citizens on the topic of AI: the Finnish government has for several years actively encouraged both citizens and companies, including those working in government, to educate themselves on using and applying AI.

 

What Next?

It would be unfair to portray the UK as being completely devoid of safeguards. If anything, I would currently describe the UK’s approach as having the potential to strike a healthy balance between tackling AI risks and ensuring room for growth. In terms of regulation, the UK still appears more inclined to think about safety than the USA.

 

However, as mentioned previously, the Government needs to get a few things right to ensure this. For starters, more specific guidelines need to be provided for sectoral regulators, to ensure broad coherence between their strategies. Those working in the bureaucracy (or really anyone in the public sector) also need to be given both clear AI use strategies and training, to ensure adequate human oversight of AI deployment. This must be complemented by rigorous evaluation of how well the process is going. Most importantly, any good AI policy must take into account the opinions of the public.
