Opinion

UK AI policy developments and where next?

Published Date
Nov 21 2024

As the EU presses ahead with implementation of the AI Act, the UK continues to develop its evolutionary approach to AI policy and regulation. As the new Labour Government begins to set out its own agenda, and ahead of a new UK AI Bill likely to be published within the year, this blog rounds up the latest developments in UK AI policy.

A new regulatory outlook?

The previous Conservative Government signalled its intentions on AI regulation in its response to the AI White Paper in February 2024. As previously discussed, the response focused on an agile, sector-based approach designed to empower existing regulators. The Government recognised the long-term need for binding requirements on developers of highly capable general-purpose AI models but did not commit to introducing legislation. It also emphasised the role of the new UK AI Safety Institute in setting standards and reviewing AI models.

The new Labour Government took office in July 2024 and its messaging has focused on supporting innovation while addressing the most serious risks from frontier AI. While the new Government has signalled its intention to introduce a new AI Bill, there appears to be significant continuity with the previous Conservative Government, and we are unlikely to see a major shift in the scope of AI regulation.

Focused regulation for the most powerful AI models – coming soon

So what do we know about the prospects of new AI legislation? At this stage, the detail is unclear. In the July 2024 King’s Speech, the Government said that it would “seek to harness the power of artificial intelligence as we look to strengthen safety frameworks” and would “establish the appropriate legislation to place requirements on those working to develop the most powerful artificial intelligence models.”

Similarly, in a House of Lords debate on July 30, 2024, Baroness Jones confirmed that the Government will “establish legislation to ensure the safe development of AI models by introducing targeted requirements on a handful of companies developing the most powerful AI systems.” The headline scope looks likely to be much narrower than the likes of the EU AI Act, but no date has been given for the introduction of an AI Bill, and the detail remains to be seen.

The Government has clearly prioritised two other technology-focused Bills for this session of Parliament – the Data (Use and Access) Bill (an evolution of the previous government’s Digital Information and Smart Data Bill) and the Cyber Security and Resilience Bill. This prioritisation may simply be because those Bills were “ready to go” based on previous work. In contrast, any AI Bill will likely require further discussion and consultation. However, it is possible that a draft Bill could be published before introduction to Parliament (as we saw for the Online Safety Bill), or that further consultations prior to draft publication could shed further light (as we expect for the Cyber Security and Resilience Bill, due in 2025). Watch this space.

And in the meantime?

With the future AI Bill focusing on a specific slice of the AI ecosystem, some had hoped to see helpful provisions make their way into the likes of the Data (Use and Access) Bill (the DUA Bill). As we flagged in our blog, the DUA Bill does, amongst other things, retain proposals to ease the regulatory burden on automated decision-making. However, the stated intention to include “targeted reforms to some data laws” where there is currently a “lack of clarity impeding the safe development and deployment of some new technologies” has not materialised in a way that significantly targets AI. In that context, and as further explained in our blog, the UK remains reliant on regulatory guidance. In the absence of wider-ranging legislative requirements, an obligation for the ICO to consult on and issue a statutory code on how the UK GDPR is to be interpreted in the context of AI would be a step in the right direction. The ICO would then be required to take the statutory code into account when interpreting the UK GDPR in respect of AI, giving industry sensible direction and confidence in how the rules will be applied in practice.

It is worth noting that Lord Clement-Jones, a Liberal Democrat Peer, has introduced a Private Members’ Bill – the Public Authority Algorithmic and Automated Decision-Making Systems Bill. Its stated aims are to regulate the use of automated and algorithmic tools in public sector decision-making, to require public authorities to complete impact assessments of automated and algorithmic decision-making systems, and to ensure the adoption of transparency standards for such systems. It may also have an impact on private companies working with the public sector. Private Members’ Bills generally have only a slim chance of making the statute book, as the government of the day will often not provide enough Parliamentary time. Before the election, for example, Lord Holmes introduced the Artificial Intelligence (Regulation) Bill, which failed to progress through Parliament. That said, the Public Authority Algorithmic and Automated Decision-Making Systems Bill could still influence the Government’s own AI Bill.

Wider government policy initiatives - spotting opportunities and funding engagement

Since taking office, the Government has been quick to engage with the potential of AI to support economic growth and productivity, as well as to strengthen the UK’s competitive global position. It has appointed the Chair of the Advanced Research and Invention Agency, Matt Clifford, to deliver a new AI Opportunities Action Plan. The Action Plan is framed as a means “to identify ways to accelerate the use of AI to improve people’s lives by making services better and developing new products” as well as a route to consider infrastructure and talent requirements. By speaking to a breadth of stakeholders (industry, academia, regulators, civil society), it has the potential to be a useful way to gather current thinking, but it remains to be seen whether the recommendations provided to the Secretary of State translate into tangible actions. That said, a new “AI Opportunities Unit” at the Department for Science, Innovation and Technology (DSIT) will be set up to pool expertise, deliver the benefits of AI and implement proposals. We expect to have sight of the Action Plan imminently.

As a further example of continuity, and of direct engagement with industry, the Government has announced funding for 98 AI projects, which will share GBP 2 million. These projects were successful in a pitch for funding unveiled in October 2023, and include projects focused on tasks such as improving the efficiency of prescription deliveries, reducing train delays and developing a skilled construction workforce.

Innovate and implement, but do so responsibly

As we await specific legislation, and whilst encouraging AI development, the focus remains on innovating responsibly. To that end, this month the Government published its Assuring a Responsible Future for AI report. The report acknowledges the need for AI assurance to help measure, assess and demonstrate the trustworthiness of AI and so mitigate risks. But it also recognises the potential of the market for assurance products and services – a business viewed both as a necessity if AI innovation is to succeed and as an opportunity to generate growth. Amongst other things, DSIT plans to work with industry to develop a ‘Roadmap to trusted third-party AI assurance’, setting out, by the end of the year, its vision and the actions needed to create a quality market of AI assurance service providers.

Perhaps more directly applicable to most businesses is the plan for an AI Assurance Platform. The UK already has the UK AI Standards Hub, which provides training and information on assurance, and the Responsible Technology Adoption Unit’s Responsible AI Toolkit (itself updated earlier this month with a Model for Responsible Innovation, described as a practical tool for the public sector and beyond). However, to support the demand for AI assurance and stimulate supply, the AI Assurance Platform is intended as a one-stop-shop for information on the actions businesses can take to identify and mitigate the potential risks and harms posed by AI. Given the volume and complexity of guidance and standards increasingly landing on desks, this single port of call may be particularly welcome for SMEs trying to gain clarity on best practice.

The platform will host existing DSIT assurance content but will also include a new AI Essentials toolkit, intended to make AI assurance best practice accessible to industry. Initially this will include an AI Management Essentials tool, a self-assessment tool based on existing principles and standards such as those in ISO/IEC 42001 (Artificial Intelligence – Management System), the EU AI Act and the NIST AI Risk Management Framework. The tool is open for consultation until 29 January 2025, with feedback particularly requested from SMEs.

UK AI Safety Institute (AISI) delivers its first outputs

A core pillar of the UK’s approach to AI safety is the AISI. Currently, the AISI is a research organisation within government, with no statutory footing or independence from government. Again indicating a focus on safety, the Government has signalled its ongoing support for the AISI and suggested it may be placed on a statutory footing. It therefore seems likely that the AISI will be a long-term part of the UK AI framework and will play a crucial role in the standard setting and testing of the most powerful AI models.

Work delivered by the AISI in 2024 includes: a research programme for frontier AI safety cases; insights into question-answer evaluations for frontier AI; initial advanced AI evaluation methods and results; the development of Inspect, an open-source framework for large language model evaluations; and the launch of a Systemic AI Safety Grants programme, funding researchers across academia, industry and civil society.
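By way of illustration, the short sketch below shows what a minimal evaluation looks like when written against Inspect’s published quick-start pattern. The task name, sample content and model identifier are our own illustrative assumptions, not AISI test material.

    # A minimal Inspect evaluation task (illustrative only; follows
    # the framework's documented quick-start pattern).
    from inspect_ai import Task, task
    from inspect_ai.dataset import Sample
    from inspect_ai.scorer import match
    from inspect_ai.solver import generate

    @task
    def capital_cities():
        return Task(
            # One hand-written sample; real evaluations load
            # datasets containing many samples.
            dataset=[
                Sample(
                    input="What is the capital of the United Kingdom? "
                          "Reply with the city name only.",
                    target="London",
                )
            ],
            # Ask the model under test to generate an answer.
            solver=[generate()],
            # Score by matching the answer against the target string.
            scorer=match(),
        )

A task defined in this way can then be run against a chosen model from the command line (for example, inspect eval capital_cities.py --model openai/gpt-4o).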

The AISI is also at the forefront of international engagement on AI, this month signing a bilateral agreement with Singapore to work on AI safety and taking part in the first meeting of the members of the International Network of AI Safety Institutes in San Francisco.

UK signs first international treaty on AI – human rights on the agenda

In a signal that the new Government recognises the importance of human rights to AI safety, the UK signed the Council of Europe Framework Convention on Artificial Intelligence and Human Rights, Democracy and the Rule of Law (the Convention). The principles and obligations in the Convention apply to activities within the life cycle of an AI system undertaken by public bodies (or private actors acting on their behalf), with discretion for signatories to decide how to apply them to other private organisations. The forthcoming UK AI Bill will need to align with the Convention, as signatories must “adopt or maintain appropriate legislative, administrative or other measures to give effect to the provisions set out in this Convention.”

The principles are in step with those we see in the likes of the EU AI Act and cover dignity and autonomy, transparency and oversight, accountability, equality and non-discrimination, privacy, reliability and safe innovation. The obligations address the need for procedural safeguards, the assessment and mitigation of risks, and remedies for human rights violations arising from AI system activities.

The Convention was also signed by the U.S., the EU, Andorra, Georgia, Iceland, Israel, Norway, the Republic of Moldova and San Marino. The involvement of the U.S. and the EU is an important step towards agreement on global principles (while allowing countries some room for specific implementation).

AI and Cyber

The Government has also maintained a focus on the cyber risks to AI, with a call for views on a Voluntary Code of Practice (the Code) closing in August. The Code is based on the National Cyber Security Centre (NCSC) Guidelines for secure AI system development, as well as commissioned research. It addresses the whole AI life cycle and focuses on practical steps that stakeholders across the AI supply chain, particularly Developers and System Operators, can take to protect end-users. The Government also anticipates submitting the Code to the European Telecommunications Standards Institute (ETSI) so that it can form the basis of a global standard.

Meanwhile, the NCSC continues to develop a range of guidance and resources to address cyber risks related to AI. As referenced above, in 2023 it published the Guidelines for secure AI system development, and in May 2024 it updated its Machine learning security principles.

The AISI’s work also includes tests of advanced AI models for cyber security risks. For example, it has found that publicly available models were able to solve simple Capture The Flag (CTF) challenges, of the sort aimed at high school students, but struggled with university-level problems.

Existing regulators take the lead

As further initiatives come into play and legislation appears on the horizon, existing regulators (such as the ICO, CMA, Ofcom and FCA) continue to press on with their approaches to AI regulation, including through the Digital Regulation Cooperation Forum (DRCF). You can read more about recent regulator activities in our blog here.

Conclusion

While the UK has not gone down the same path as the EU in terms of comprehensive AI legislation, the activity above indicates the important role the UK will continue to play in relation to standards, regulation and policy, while taking a progressive, evolutionary approach. Beyond the potential for binding AI legislation, many and varied initiatives and proposals will influence the UK’s direction of travel when it comes to engagement with AI. Although the initial approach of the Government does not differ substantially from that of the previous administration, companies should remain mindful of the evolving nature of AI regulation and Government policy, as well as the potential for divergence across jurisdictions.

The EU is currently wrestling with the implementation of the EU AI Act, and organisations are trying to navigate the wider, complex data and digital regulatory environment in play. We are already seeing practical consequences, with tech companies warning that overregulation, a fragmented landscape and unpredictable enforcement will impede AI innovation. In an open letter signed by dozens of tech companies in September 2024, organisations particularly criticised the approach of data protection authorities to the use of EU personal data to train AI systems, and some have delayed their AI applications in the EU as a result. This all makes for an unsettling backdrop for the UK, whose regulatory regime is so closely aligned. Before showing its hand, the UK Government would be wise to consider whether there are lessons to be learned from its neighbouring region.

For a wider jurisdictional view of AI developments readers may also like to see our “Zooming In on AI” series of blogs here.
