Opinion

What is in store for UK AI: The long-awaited government response is here

Published: 15 February 2024
Emma Keeling, Senior Tech and Data Knowledge Lawyer, and Jane Finlayson-Brown, Tech and Data Partner, take a closer look at the latest on the UK’s approach to AI as the Government aims for agility.

After months of anticipation, the long-awaited government response to the UK’s March 2023 AI White Paper (White Paper) was published on 6 February 2024 (the Response). Although it runs to over 140 pages, there are few real surprises. So what are the headlines as the UK looks to be “a global leader in safe AI development and deployment”?

Consistent principles 

The UK will continue to look to five key principles to guide its approach to AI, namely:

  • Safety, security and robustness; 
  • Transparency and explainability; 
  • Fairness; 
  • Accountability and governance; and
  • Contestability and redress.

The principles are intended to provide a robust basis for existing regulators to follow and set the Government’s expectations for AI development. They reflect the OECD AI principles, thereby paving the way for international interoperability.

No general legislation… just yet

In contrast to the EU and other countries looking to regulate AI through legislation (e.g., Canada), for now the Government does not propose to introduce new general legislation. It aims to foster innovation and protect safety by approaching AI from a context- and sector-based position. The Government wants to avoid “unnecessary blanket rules that apply to all AI technologies” and will rely on existing legislative and regulatory frameworks.

Each regulator will be expected to take steps to manage AI development and deployment in its own sector or area of activity, using existing powers. On 6 February 2024, the Government published initial guidance to support regulators in implementing the principles, suggesting mechanisms directed at developers and deployers of AI, such as issuing guidance, creating tools, encouraging information sharing and transparency, and applying voluntary measures. Phase 2 guidance will follow in Summer 2024, before Phase 3 addresses collaboration and joint solutions.

Readiness of the regulators – investment and information gathering

Whilst the Information Commissioner’s Office (ICO) and the Competition and Markets Authority (CMA) are singled out for particular praise (the Government highlights the ICO’s update on AI and data protection guidance – read more in our blog here – and the CMA’s review of foundation models), the Government recognises the variety of AI experience across the UK regulators. As such, it has committed £10 million and expects regulators to use the investment to fund the development of research, practical tools and the day-to-day management of AI risks.

The Government has requested that a number of UK regulators provide an update on their AI strategy by 30 April 2024. The regulators will be expected to outline: AI-related risks specific to their sector or regulated activities; the steps being taken to address AI; detail of their current AI capacity, expertise and structures; and their AI plans for the coming 12 months. This should act as a level-set and presumably help the Government to target support where it is needed.

The Government expects AI regulatory action to be taken under existing powers. To ensure this is feasible, it will review current regulatory powers and consider any gaps in regulatory coverage.

Coherence and collaboration are key

Business needs certainty, particularly if a pro-innovation outcome is to be realised. A distributed, multi-regulator model may create challenges if different regulators take different approaches to AI, especially where their remit overlaps. As such, the Government is establishing a central function to ensure collaboration and consistency. A steering committee will be established by Spring 2024 to support the exchange of knowledge and the formalisation of regulatory coordination.

In addition, a newly recruited multidisciplinary team will carry out cross-sector risk assessments and monitor existing and emerging AI risks, the effectiveness of government and regulatory intervention and the powers of the regulators themselves. A cross-sector risk register, a single source of “truth”, will be consulted upon later in the year.

Despite the clear need for a central function to support coordination, the Response indicates that regulators are keen to ensure that their regulatory independence and remit are not compromised.

Keeping options open, allowing for a change of course

This multi-regulator approach has long been the preferred route for the Government. However, it is clear that the door to legislation has not been closed and indeed the Government may be laying the groundwork to pivot when it considers it necessary.

The Response states that “the challenges posed by AI technologies will ultimately require legislative action in every country once understanding of risk has matured”. It is unclear when the relevant maturity level will have been reached, but in January 2024 press reports indicated that the Government will publish a series of tests that it will use to support just this sort of decision-making. Objective tests will certainly give business something to work with or to keep in mind.

The Response further clarifies that the Government will keep its approach to AI under review, looking, amongst other things, to the regulators’ plans, the Government’s own review of regulator powers (both noted above) and its wider approach to AI legislation to inform its thinking. 

Not all AI is equal: highly capable general purpose AI is a special case

Consistent with the AI Safety Summit in November 2023, the Government recognises frontier AI as presenting particular challenges and risks. In the Response, the Government focuses on “highly capable general purpose AI” defined as “foundation models that can perform a wide variety of tasks and match or exceed the capabilities present in today’s most advanced models. Generally, such models will span from novice through to expert capabilities with some even showing superhuman performance across a range of tasks.”

The Response articulates why developers of such AI systems may need to be subject to binding and targeted requirements and why voluntary commitments alone (such as those established at the AI Safety Summit) will not suffice. Amongst other things, the Government considers that these highly capable systems may present risks because they can be used across different sectors, such that their impact (and potential for harm) is broad-ranging and existing context-based regulation may not be as effective. The Government also notes that regulators often focus on the deployment layer when enforcing requirements, but that organisations using these highly capable systems may not be best placed to understand and mitigate risk.

The Government sets out a series of questions to consider when determining how best to manage highly capable general purpose AI systems, touching on design, development and deployment. Whilst speculating on the potential for more formal regulation, the Government anticipates that it would be targeted only at a small group of developers of the most powerful systems (based on compute and capability). It might, for instance, require compliance with the five key principles, pre-market permits or model licensing, transparency, risk management and corporate governance obligations, or action to address specific harms.

This type of AI is clearly first in the sights of the Government as it looks to the next stage of its regulatory approach. The Government will use the summer to work with industry, academia and civil society, including the open source community and developers, to “refine” its approach to regulating these highly capable general purpose AI systems. We can expect an update on its work on new responsibilities for these developers by the end of 2024.

Key gaps to resolve 

Beyond the ongoing review of regulatory strategy, powers, AI risk and ultimately the need for legislation, the Government flags two areas where specific engagement is required.

Intellectual property

In 2023 the Government established a specific working group with the UK creative industry, copyright holders and AI developers. The intention was to agree a voluntary code that addressed the need for copyright protection alongside AI developers’ need for access to training data. However, no voluntary code could be agreed, so the Government has decided to step in. It will look to support innovation and AI development without undermining creativity, aiming for more transparency from AI developers and attribution of outputs. No time frame is given for further information on this workstream beyond “soon”, but it is clearly a hot topic, with infringement cases making their way through courts across the globe.

Supply chain and liability 

The Government recognises the potential complexity of AI supply chains and the challenge of fairly allocating liability to the “right” actor. Efforts to date have focused on highly capable general purpose AI, but the Government has yet to reach a conclusion on how to address this challenge. It continues to explore the issue and will “consider introducing measures to effectively allocate accountability and fairly distribute legal responsibility to those in the life cycle best able to mitigate AI-related risks”.

Digital regulators act as a hub and the AI Safety Institute tests the systems

The Digital Regulation Cooperation Forum, comprising the ICO, FCA, CMA and Ofcom, has been tasked with operating the AI and Digital Hub. This hub is intended to support innovation and act as an advisory service where there is a cross-regulatory angle, particularly considering legal and regulatory requirements prior to the launch of an AI system. The (fairly basic) eligibility criteria for this scheme were also published on 6 February 2024 and the service is due to launch in pilot form in Spring 2024.

Alongside the AI and Digital Hub, the AI Safety Institute was established to evaluate advanced AI systems and conduct safety research in partnership with international players such as the US and Singapore. On 5 February 2024, the AI Safety Institute published its third progress report, highlighting the start of pre-deployment testing of AI models provided by leading AI developers. This testing will focus on misuse of AI systems, societal impacts, autonomous systems and the efficacy of safeguards.

AI initiatives to innovate and tackle specific harms 

The Government announced, or reminded us of, additional specific initiatives with a view to addressing societal harms, misuse and autonomy risk. By way of example:

  • investment in nine research hubs;
  • a £9 million partnership with the US regarding responsible AI;
  • reform of automated decision-making requirements under data protection law to allow for a more flexible approach;
  • the launch of an AI Management Essentials scheme later in 2024, setting minimum good practice standards for companies selling AI products and services (directed at public procurement but with potential learnings for the private sector);
  • NCSC AI and cyber guidance;
  • a call for views (expected Spring 2024) regarding an AI and cyber security Code of Practice; and
  • Government guidance on the fair, responsible and safe use of AI in HR and recruitment (expected Spring 2024).

The Government trailed an “Introduction to AI Assurance”, subsequently published on 12 February 2024, as the first in a series of guidance documents to help organisations upskill on topics around AI assurance and governance. The Introduction to AI Assurance sits alongside the existing UK AI Standards Hub and the Portfolio of AI Assurance Techniques. It introduces key AI assurance concepts and stakeholders; outlines different assurance techniques and how to implement AI assurance within organisations; and summarises key actions that organisations can take to embed assurance.

Rearranging and renaming in the Government 

As part of upskilling and building its AI capacity, the Government has also restructured the teams addressing AI across departments. Individuals in the Government supporting work in this area are part of the AI Policy Directorate, and each department has its own AI lead Minister. There is no longer an Office for AI, and the Centre for Data Ethics and Innovation has been rebranded as the Responsible Technology Adoption Unit. This last change is intended to better reflect the role of the group.

The UK’s international position 

The Government remains keen to highlight the UK’s pedigree in AI and its central place in the global AI landscape. The Response identifies an extensive list of international organisations and bodies that feature UK engagement.

The extent to which the UK’s influence continues will be somewhat determined by the success of its balancing act and whether AI is promoted effectively and safely in the UK.

It is interesting to note that, whilst the recently agreed EU AI Act was not mentioned at all in the Response, the US was called out as a particular partner or inspiration. For example, the Government flagged an intention to consider a risk management framework not dissimilar to that issued by the US National Institute of Standards and Technology. The UK’s approach to AI regulation looks likely to track the US more closely than its nearest neighbours.

New government, all change? 

With a general election on the horizon, it is certainly worth keeping abreast of opposition party policy on AI. Whilst unclear on all specifics, a Labour government would likely take a more statutory route than the current regime. Earlier this month it was reported, for example, that a Labour government would replace the current voluntary testing arrangement (flagged above in relation to the AI Safety Institute) with a regime under which developers of highly capable AI would be required to test their systems under independent oversight and to share their test data with officials. As such, initiatives and approaches set in train now may well evolve.

Content Disclaimer

This content was originally published by Allen & Overy before the A&O Shearman merger
