Opinion

Your 2023 Wrapped: UK AI and data protection edition

Published: 9 January 2024
2023 saw a surge in interest in the application of generative AI within business models. So, if AI and data protection was your favourite genre of 2023, or if you found it to be a broken record, this post consolidates and reflects on the key UK updates in one place.

New technologies can make it difficult to keep up to date with evolving regulatory guidance, and risks accompany opportunities. In the UK, the recent UK AI Safety Summit focused on the risks presented by ‘frontier AI’, such as the large language models (‘LLMs’) underpinning ChatGPT and Google’s Bard. The UK Information Commissioner’s Office (the ICO) has been active in addressing data protection risks, whilst attempting to support the UK Government’s pro-innovation, principles-based approach to the regulation of AI in the UK (discussed in our September blog post).

A&O’s AI Working Group regularly advises clients across the spectrum of issues (including data protection) associated with the adoption of AI. We will follow up with a more detailed analysis of the fairness and transparency issues trailed in this post.

A review of ICO guidance for AI in 2023

Considering data protection from the outset

In June 2023, the ICO warned businesses that they must consider and mitigate data protection risks before adopting generative AI technology and signalled that failure to do so would result in ICO intervention. 

A consistent message from the ICO throughout 2023 was that the opportunities AI presents to businesses will not relieve them of the responsibility to address associated data protection risks. The ICO has already started to back this message up with enforcement action: in October 2023 it issued a preliminary enforcement notice against a social media company for failing to address the privacy risks associated with its generative AI chatbot, which was used by teenagers.

In 2023, the ICO released eight suggested questions that developers and users should ask at the outset, before developing or using generative AI:

1. What is the lawful basis for processing personal data? 
2. Is the business a controller, joint controller or a processor? 
3. Has the business prepared a Data Protection Impact Assessment (DPIA)? 
4. How will the business ensure transparency? 
5. How will the business mitigate security risks? 
6. How will the business limit unnecessary processing? 
7. How will the business comply with individual rights requests? 
8. Will the business use generative AI to make solely automated decisions? 

Updated guidance

The ICO also made a number of updates to its AI guidance in 2023. In particular, the ICO further clarified how to address the key GDPR principles of lawfulness, fairness and transparency in the context of AI and introduced in-depth guidance on privacy-enhancing technologies.

1 – Lawfulness

The ICO emphasises the importance of separating each distinct data processing operation used in AI systems and ensuring that a business can identify a lawful basis for each one. The ICO reiterates that, for most of the lawful bases, the processing must be “necessary” to achieve a specific purpose, meaning that the processing must be more than merely “useful” and cannot be achieved by less intrusive means. The ICO also distinguishes the research and development phase of an AI system from its deployment phase, noting that each phase will involve different purposes and therefore requires a separate assessment of the lawful basis.

These are important points for businesses engaging in large-scale AI development or deployment. A single business may be considering a variety of generative AI deployments at any one time, ranging, for example, from recruitment tools to customer-facing chatbots. Each business will therefore need to consider the legal bases for its processing activities in each use case, at each stage in the AI lifecycle, and ensure that the processing is necessary to achieve the specific purpose.

2 – Fairness

The ICO notes that data must be processed in a fair way, in accordance with Article 5 of the UK GDPR. This means that personal data must be processed in a way that people would reasonably expect and must not have unjustified or adverse effects. AI systems should also be statistically accurate and not lead to unjust discrimination. These principles extend to the output of the processing as well as the way that the processing is carried out. 

Businesses using generative AI should also consider fairness in relation to the additional rules under Article 22 of the UK GDPR on automated decision-making. Solely automated decision-making that produces legal effects concerning a data subject, or has similarly significant effects on the individual, is only permitted in limited circumstances. In such cases, additional information about the processing must be provided to the relevant data subject, who must also be given the right to obtain human intervention or to challenge a decision made by the AI. Notably, section 14 of the draft Data Protection and Digital Information Bill would amend Article 22 of the UK GDPR, including by clarifying what constitutes an automated decision: “a decision based solely on automated processing” where there is no meaningful human involvement in the taking of the decision. The Bill also sets out the safeguards a controller must adopt to protect the data subject’s rights and permits the Secretary of State to issue regulations to further govern automated decision-making. The ICO provided its response to the Bill in May 2023.

Fairness considerations vary across the AI lifecycle. For example, at the project initiation and design stage, it is important to consider to whom the AI system will apply and be able to explain why it applies to certain groups and not others; at the data collection stage, collection should be focused on the clearly defined purpose; and at the decommissioning stage, a business should consider how to anonymise or erase personal data. 

3 – Transparency and Explainability 

The ICO states that a business must be transparent about how the relevant AI system processes personal data. Businesses are obliged to provide privacy information to data subjects, including if their personal data is processed in the AI model’s training stage. AI often presents transparency challenges due to its complexity, its adaptive nature and potential trade-offs in disclosing how an AI system operates.

At a high level, the ICO notes that organisations should provide data subjects with information on the purposes for processing their personal data, the retention periods applicable for that personal data and details of which parties will receive the personal data.  

The ICO has helpfully developed joint guidance with the Alan Turing Institute to help businesses explain AI and its impact to relevant stakeholders. The guidance provides organisations with practical advice on how to structure, build and present an explanation of an AI system, with worked examples. It suggests that organisations collect information in a way that allows for a range of explanation types, and use policies, procedures and documentation to provide meaningful explanations of AI systems to affected individuals.

4 – Privacy-enhancing technologies

The ICO’s guidance on privacy-enhancing technologies (PETs) (issued in June 2023) indicates that the ICO expects PETs to play an important role in helping organisations to comply with the principle of "data protection by design and by default". The ICO notes that PETs may enable organisations to minimise the use of personal data, maximise information security and/or empower data subjects. 

This applies in the context of AI inputs and outputs, including when training AI models. The ICO distinguishes between PETs that provide “input privacy” (that reduce the number of parties with access to the personal data processed by the AI system) and “output privacy” (that reduce the risk that individuals can derive information from the result of the processing activity).   

The ICO notes that the use of PETs is not mandatory and PETs should not be treated as a "silver bullet" by organisations – processing by AI systems must still be lawful, fair and transparent.

Other ICO initiatives addressing AI

In addition to these pieces of guidance, the ICO has been active in supporting organisations in its sandbox and preparing reports in collaboration with other regulatory bodies and industry players. The ICO’s sandbox supports organisations that plan on using personal data in novel, complex or potentially high-risk ways by enabling them to test their innovations and get feedback from the ICO on the data protection risks. The reports generally aim to provide guidance on innovative projects which operate in challenging areas of data protection, and so can provide a useful resource for companies whose activities give rise to uncertainty around what data protection compliance should look like.

The ICO is also part of the Digital Regulation Cooperation Forum (DRCF) (along with the CMA, Ofcom and the FCA), which aims to harmonise the regulation of digital services. The DRCF will provide a newly announced AI advisory service, which businesses across the UK can use to check whether their AI and digital innovations comply with regulatory requirements. The aim is to provide businesses with tailored advice on such requirements and allow them to bring their innovations to market more quickly than at present, whilst still ensuring they are safe for consumers.

The ICO also co-sponsored the Global Privacy Assembly Resolution on Generative AI Systems that was adopted in October 2023. The resolution set out in further detail what data protection authorities will expect from companies in the generative AI context and acknowledged the ongoing global discussion in policy making and enforcement of data protection rules.

Clearview decision – territorial scope of the GDPR, facial recognition and image scraping

The ICO, amongst other data protection authorities, initiated enforcement action against Clearview AI Inc by fining it £7.5m in 2022. In October 2023, Clearview successfully appealed against this decision before the First-tier Tribunal. In November 2023, the ICO sought permission to appeal against that ruling. 

Clearview scraped images of people in the UK from websites and social media to create an online database that allows customers, including the police, to check images against those in the database using Clearview’s AI application. Clearview has no UK or EU establishment and does not offer services to UK or EU customers. The ICO determined that Clearview had breached several data protection requirements, including by failing to inform individuals that their personal data was being processed in this way.

The Tribunal concluded that Clearview’s activities fell within the territorial scope of the UK GDPR under Article 3.2(b), since Clearview was processing personal data related to the monitoring of individuals’ behaviour in the UK. However, the Tribunal considered that the activities fell outside the material scope of the UK GDPR as the services were only provided to non-UK criminal law enforcement and national security entities. 

The ICO is seeking permission to challenge this finding on the basis that Clearview itself was not processing for these foreign law enforcement purposes. The outcome of this appeal may provide further precedent on the extraterritorial scope of the UK GDPR, with implications for companies involved in image scraping or operating through third parties.

Final thoughts – expected developments in data protection

Following the UK AI Safety Summit, we expect that further guidance from the ICO and more detailed plans for AI regulation from the UK Government will follow. 

The DRCF has also signalled that its strategic priorities for 2023/2024 include building consensus around the key principles to regulate AI, investigating how third party providers could audit algorithms including AI to further regulatory compliance and supporting the UK Government in developing the regulatory framework to address AI. 

Look out for our upcoming post, which will explore the issues of fairness and transparency in more detail. 

Content Disclaimer

This content was originally published by Allen & Overy before the A&O Shearman merger