Opinion

UK AI: existing regulators take the lead

Published: 21 November 2024
As further initiatives come into play and legislation is on the horizon (described further in our blog here), existing regulators (such as the ICO, CMA, Ofcom and FCA) continue to press on with their approach to AI regulation, including through the Digital Regulation Cooperation Forum (DRCF).

Collaboration and consistency

In October 2024 the DRCF published a consolidated perspective on AI and Transparency. Whilst the principle of transparency is a familiar one, this article (based on a workshop earlier in the year) demonstrates how a common concept cuts across the spectrum of regulators, with multiple requirements for AI transparency and enforcement under the differing regulatory regimes. It is a clear example of how organisations need to take a holistic approach to regulatory requirements and guidance when developing and implementing AI. Helpfully, direct engagement and support is also on the cards. The DRCF AI and Digital Hub launched as a 12-month trial in April 2024, allowing innovators to raise a specific query that spans the regulatory remits of the DRCF member regulators. It is open for submissions now, and the DRCF has also published case examples.

The Government has also followed through on its pre-election commitment to set up the Regulatory Innovation Office (RIO), led by the Department for Science, Innovation and Technology (DSIT). It won’t be an independent statutory oversight body but will focus on cross-cutting work to improve regulatory performance and accountability, setting regulatory priorities that align with the Government’s broader policy aims. Whilst the RIO will support the regulators, it will also inform the Government of regulatory barriers to innovation. Initially the RIO has four focus areas, including AI in healthcare. How the RIO will work with the DRCF is currently unclear.

In response to the UK’s sector-driven approach to AI regulation, the four DRCF regulators, amongst others, have each published individual plans setting out their approach to AI.

Engagement with industry

Whilst regulator-to-regulator collaboration is clear, engagement with industry also continues. For example, in September 2024 the Bank of England launched a call for applications to join its new AI Consortium. This platform is intended to facilitate public-private interaction and input, specifically regarding the development, deployment and use of AI in the financial services sector, addressing capabilities and opportunities, benefits and risks, as well as the Bank of England’s approach to promoting the safe adoption of AI.

Similarly, the FCA has established an AI Lab intended to support it “in deepening our understanding of the risks and opportunities AI presents to UK consumers and markets, and help inform our regulatory approach in a practical, collaborative way”.

The AI Lab includes the AI Spotlight, where accepted projects will be featured on a dedicated webpage to offer practical insight into solutions and AI applications across a range of topics in financial services. Other projects will be demonstrated at an upcoming AI Spotlight Showcase on 28 January 2025.

To be held on 29-30 January 2025, the AI Sprint is intended to inform the FCA’s regulatory approach to AI through engagement with industry, academics, regulators, technologists and consumer representatives, sharing practical experiences and expertise. 

An AI Input Zone provides a route for the FCA to gain further insight and, as part of that, it launched a questionnaire on 4 November 2024. Views are sought on: (i) what AI use cases firms are considering and what barriers are preventing current or future adoption; (ii) whether current regulation is sufficient to support firms in embracing the benefits of AI in a safe and responsible way; and (iii) whether there are any specific changes to the regulatory regime or additional guidance that would be useful.

Details of how to engage with the AI Lab initiatives can be found here.

The intention is also to expand the computing power of the FCA’s existing digital sandbox and to enhance its data sets and AI testing capabilities so as to support AI innovation.

A focus on generative AI

Regulators have also focused specifically on generative AI. For example, in April 2024 the CMA outlined three key risks to effective competition in relation to AI foundation models (FMs) and set out plans for further action in the market. The three risks are:

  • firms controlling critical inputs for developing FMs may restrict access to shield themselves from competition; 
  • powerful incumbents could exploit their positions in consumer- or business-facing markets to distort choice in FM services and restrict competition in deployment; and
  • partnerships involving key players could exacerbate existing positions of market power through the value chain.

The CMA’s approach will also feed into its priorities for investigation under the Digital Markets, Competition and Consumers Act (passed in 2024, before the election).

In July 2024, Ofcom issued a discussion paper on Red Teaming for GenAI Harms, considering how red teaming can help address risks from misuse, including assessing for vulnerabilities related to the generation of child sexual abuse material, low-cost deepfake adverts and synthetic terrorist content. July also saw an Ofcom discussion paper specifically addressing deepfakes, exploring, amongst other things, the impact of generative AI on deepfake proliferation and analysing the measures that organisations in the technology supply chain can take in response. In November 2024, Ofcom reminded organisations about the application of the Online Safety Act to generative AI and AI chatbots.

The ICO has issued a series of consultations on the application of the GDPR to generative AI, covering data scraping and lawful basis, accuracy, data subject rights, purpose limitation, and controllership across the supply chain. We can expect the finalised guidance later this year.

The ICO also continues to monitor and engage with technology companies as they develop and train generative AI models, particularly where there is an intention or desire to use UK user data. Whilst not providing specific regulatory approval, the ICO has been clear that it expects transparency about how people’s data is being used and expects organisations to put effective safeguards in place, including opt-outs, where legitimate interests is relied on as the GDPR lawful basis for processing.

Scenario specific guidance

Beyond the use of generative AI, certain AI use cases have received particular engagement. For example, the ICO carried out a series of consensual audit engagements with developers and providers of AI-powered recruitment tools, reporting in November 2024 on areas of good practice as well as areas where data protection compliance requires improvement. Whilst it provided over 300 recommendations as part of the audit process, the report identifies seven key recommendations, sets out a checklist of questions to ask before using AI recruitment tools, and flags the Government’s Responsible AI in recruitment guide. The report demonstrates how an existing regulator is looking to address AI within its sphere of interest. You can read more on this AI and recruitment report in our blog here.

Keeping track

Regulatory guidance and initiatives continue apace. As we have already seen, the regulators are keen to engage with industry and business. So, besides tracking their recommendations and guidance to support AI implementation, it is worth keeping note of upcoming opportunities to offer insights, where your views and experiences could help to shape the regulatory direction of travel.