Opinion

UK AI Regulation: latest developments and future direction

Published Date
Sep 27 2023
Steve Wood, Special Advisor to Allen & Overy’s data team, and Emma Keeling, Senior Tech and Data Knowledge Lawyer, take a look at the latest developments in UK AI regulation: where are we now, and what are the next steps in the UK’s approach?

Back in April of this year we published a blog analysing the UK Government’s white paper on AI regulation (the AI White Paper). The Government took a pro-innovation, principles-based approach to regulation, focused on supporting and joining up the work of existing regulators.

The Government sought to distinguish its approach from the EU and the more prescriptive, top-down approach of the AI Act, which enters its final trilogue this autumn. The Government indicated that future AI legislation, if any, was most likely to focus on a statutory duty for regulators, requiring them to have due regard to the AI principles.

So where are we now, and what are the next steps in the UK’s approach to AI regulation?

The Science and Technology Select Committee report on the governance of AI

In light of the Government’s AI governance proposals (including those pre-dating the White Paper), in October 2022 the House of Commons Science and Technology Select Committee opened an inquiry into the governance of artificial intelligence. The Committee published an interim report on 31 August 2023 (the Committee Report).

The Committee Report contains a useful and evidenced overview of the benefits and challenges of AI across a range of sectors, from healthcare to education, automotive to climate change. It sets out “twelve challenges of AI governance” that it considers any policy or framework must be designed to address. For example:
  • the introduction or perpetuation of biases that society finds unacceptable (eg racial bias in insurance pricing);
  • the risk of identification of personal data, which is then used in ways that the public perceives to be inappropriate;
  • the risk to effective enforcement of intellectual property laws (including in relation to open-source usage and copyright);
  • the risk that AI can generate material that misrepresents an individual’s behaviour, opinions or character;
  • the “black box” risk, referring to the lack of explainability of the results of AI models;
  • liability risk, referring to uncertainty regarding the rules for establishing liability for harm caused by AI models or tools and how to allocate liability across a supply chain; and
  • the fact that the most powerful AI needs very large datasets, which are held by relatively few organisations, and access to significant computing power.

The evidence provided to the Committee does not reveal a clear consensus on the precise nature of the most serious risks presented by AI, or on when they may emerge, but the Committee Report recognises the need to balance benefits and risks.

Its conclusions also take a balanced view of how generative AI should be regarded: “the technology should not be viewed as a form of magic or as something that creates sentient machines capable of self-improvement and independent decisions. It is akin to other technologies: humans instruct a model or tool and use the outputs to inform, assist or augment a range of activities”.

The Committee accepts that the UK’s existing foundations of regulation are a good starting point to build on, but raises a concern that the UK is falling behind international counterparts in establishing formal regulation. It makes the following recommendation: “We urge the Government to accelerate, not to pause, the establishment of a governance regime for AI, including whatever statutory measures as may be needed”. The Committee Report considers that a “tightly-focused” AI Bill in the next King’s Speech to Parliament (in November) would help, not hinder, the UK’s ambition. It also highlights the importance of international engagement and cooperation.

There is recognition of the AI regulation already underway by existing regulators, such as the Information Commissioner’s Office, the Competition and Markets Authority, the Financial Conduct Authority and Ofcom, including their joint work under the umbrella of the Digital Regulation Cooperation Forum (DRCF). The Committee concludes that, whilst regulators are already engaged with the implications of AI, including through the DRCF, more is required: “it is already clear that the resolution of all of the Challenges set out in this report may require a more well-developed central coordinating function.” This appears to indicate a need for further resources for the UK’s regulators, and the Committee considers that the Government should carry out a gap analysis to assess resourcing and capacity, and also whether regulators require new powers to enable effective enforcement of any AI principles (something the DRCF has not called for itself).

Can we expect a UK Government announcement soon?

In her statement to Parliament on 19 September, Michelle Donelan, Secretary of State for Science, Innovation and Technology, summarised what she saw as the latest developments in the UK’s AI policy. The Government received over 400 responses to its AI White Paper, and we can expect it to announce its formal response to the consultation this autumn, at some point after the AI Safety Summit (more on this below). Whilst the Secretary of State was clear that the Government remains committed to an approach that enables evolution and iteration in the regulation of AI, it will be interesting to see to what extent the Government takes account of the Committee Report and whether that aligns with wider feedback on the White Paper. One option for legislation would be to add the AI principles to the Data Protection and Digital Information Bill currently progressing through Parliament. Otherwise, timing may be tight to pass standalone AI legislation before a general election in late 2024. The King’s Speech to Parliament will provide an opportunity to clarify the Government’s intentions, though whether it will have consolidated its thinking from the AI Safety Summit by then remains to be seen. Waiting until after the election would mean AI legislation could not come into effect until 2025 at the earliest (a position the Committee said would be concerning).

UK AI Safety Summit this autumn

This autumn the spotlight will fall on the UK’s ambitions towards regulating AI as the UK hosts the AI Safety Summit, an event initiated by UK Prime Minister Rishi Sunak. A wide range of international participants are expected from governments, business and academia. The Summit will take place on 1-2 November at Bletchley Park, and on 4 September the UK Government set out its objectives for the Summit:

  • a shared understanding of the risks posed by frontier AI and the need for action;
  • a forward process for international collaboration on frontier AI safety, including how best to support national and international frameworks;
  • appropriate measures which individual organisations should take to increase frontier AI safety;
  • areas for potential collaboration on AI safety research, including evaluating model capabilities and the development of new standards to support governance; and
  • a showcase of how ensuring the safe development of AI will enable AI to be used for good globally.

In her September statement, the Secretary of State emphasised that the Summit will build on initiatives of the UN, OECD, G7 and G20, with a view to agreeing practical next steps to address risk. It will be interesting to see to what extent those next steps can indeed be implemented in practice. The Government’s focus here is clearly on ‘frontier AI’: a forward-looking focus on what has been variously described as general-purpose, often foundation, models that exceed current capabilities and can carry out a wide range of tasks, including those which may pose serious risks to the public. This appears to indicate that there will be less focus on regulating risks in the ‘here and now’ (for example, the risk of bias from using AI in recruitment, which is already in mainstream use), though the Committee Report encourages the Government to cast the net wide, both in terms of using the Committee Report as a basis for discussion and in terms of the range of attendees.

The Frontier AI Taskforce

In the same vein, the Government has recently renamed its Foundation Model Taskforce to reflect, the Secretary of State explained, its role in evaluating risks at the frontier of AI. The Frontier AI Taskforce has now issued its first progress report, setting out its actions in appointing members to its Expert Advisory Board, recruiting expert AI researchers, partnering with leading technical organisations and building the technical foundations for AI research inside Government. We can expect further announcements from the Taskforce over the coming months and into 2024. It will also be an influential voice in shaping future regulation, and appears to be aligned with the Committee Report in flagging that “moving fast matters” and “time is of the essence” when it comes to AI development and regulation.

Central functions and regulator engagement

The Frontier AI Taskforce is just one source of expertise that the Government expects to feed into its newly established central AI risk function, part of the Department for Science, Innovation and Technology (DSIT). This central function was identified in the White Paper as a necessary feature to ensure coherence, and the Secretary of State reiterated the expectation that it will enable the Government to monitor risks holistically and identify gaps in approach. The scope of this role may well be influenced by feedback on the White Paper and the extent to which respondents perceive potential for an inconsistent and disjointed approach to regulation.

Alongside the DSIT central AI risk function, the Secretary of State acknowledged that the Government continues to look at ways to improve coordination and clarity across the multi-regulator landscape. Again, as trailed in the White Paper and the Secretary of State’s letter to DRCF regulators in the spring, her September statement highlights the Government’s engagement with the DRCF and, in particular, a pilot scheme for an advisory service.

In a separate press release on 19 September, the Government set out details of the pilot multi-agency advisory scheme. The pilot will launch in 2024, run for a year and be backed by Government funding. The advisory scheme, following up on the White Paper pledge to establish a sandbox, is intended to provide tailored support to help businesses meet regulatory requirements for digital technologies, not just AI. It remains to be seen how effective and scalable the service will be, given the time frames and funding involved and the need for businesses to apply to use it. No doubt there will also be scope for broader, more generalised multi-agency advice and guidance in relation to AI.

Continued regulatory focus on AI

In the meantime, UK regulators continue to focus on AI. For example, on 18 September, the Competition and Markets Authority published its initial report (the CMA Initial Report) following a review of competition and consumer protection considerations in the development and use of AI foundation models.

The CMA review focused on three levels of the value chain: the development of foundation models, their use in other markets and applications, and the consumer experience. As recognised by the Committee Report described above, the CMA Initial Report notes the benefits of AI foundation models, such as the potential for new and easier access to improved products, services and information, socially beneficial developments (eg in the fields of health and science) and lower prices. However, it also acknowledges potential risks such as misleading information and fraud, before flagging competition-specific concerns, for instance the risk that foundation models may be used by a small number of businesses to entrench market power, leading to reduced choice and higher prices.

In light of those perceived risks, the CMA Initial Report sets out a series of seven principles, intended to guide businesses in the development and use of foundation models so that they produce positive outcomes for businesses, people and the economy. Whilst the CMA anticipates developing these principles further, they are currently framed as:

  • Accountability – foundation model developers and deployers are accountable for outputs provided to consumers.
  • Access – ongoing ready access to key inputs such as data (with proprietary data of increasing importance), compute and expertise, without unnecessary restrictions. The CMA Initial Report flags the need for continuing effective challenge to early movers from new entrants to the market.
  • Diversity – sustained diversity of business models, including both open and closed models, with the CMA Initial Report noting that open-source models should help to reduce barriers to entry and expansion.
  • Choice – sufficient choice for businesses so they can decide how to use foundation models, including a variety of deployment options (in-house development, partnerships, APIs, plug-ins etc).
  • Flexibility – having the flexibility to switch and/or use multiple foundation models according to need. The CMA Initial Report notes the need for interoperability.
  • Fair dealing – no anti-competitive conduct including anti-competitive self-preferencing, tying or bundling.
  • Transparency – consumers and businesses are given information about the risks and limitations of foundation model-generated content so they can make informed choices.

The CMA Initial Report promises further insight on the principles, including feedback and information on their adoption, in early 2024, and it will be interesting to see how these regulator-specific principles dovetail with the UK’s wider principles-based approach to AI regulation. The CMA Initial Report considers that there will be an important role for regulation as AI develops, but that it will need to be proportionate and targeted at identified risks: burdensome regulation has the potential to stifle competition and innovation. As such, the CMA Initial Report identifies continued investment in resources and institutions as an interim step, so that a wider range of people and organisations can study and scrutinise foundation models.

Other UK developments

Developments regarding AI are not limited to regulatory engagement: the Government advisory body, the Centre for Data Ethics and Innovation, has released a portfolio of AI assurance techniques.

Meanwhile, the pressure for new AI legislation continues to build. The Trades Union Congress (TUC) has launched an AI taskforce as it calls for “urgent” new legislation to safeguard workers’ rights and ensure AI benefits all. The taskforce aims to publish an expert-drafted AI and Employment Bill in early 2024 and will lobby for it to be incorporated into UK law.

A future direction for the UK?

As we approach the AI Safety Summit, the UK Government seems increasingly focused on frontier AI, including the risks of misuse (eg biotech weapons) and of loss of control over AI. Addressing those risks across borders may require an international framework, together with prohibitions or conditions on certain AI systems. In the meantime, the UK’s detailed approach to the ‘nearer’ AI risks (eg generative AI applications akin to what the EU would term “high risk”) still appears unclear, and questions remain about whether there will be gaps in the UK regulatory system compared to the more comprehensive EU AI Act, though the UK may have more agility to adapt its regulation and is rapidly ramping up regulatory cooperation.

Content Disclaimer

This content was originally published by Allen & Overy before the A&O Shearman merger.