Opinion

Zooming in on AI #15: Regulatory spaghetti and AI – how to make sense of the EU GDPR and the EU AI Act

Published: Feb 3, 2025

Given the rapid pace of development in the field of AI, it is increasingly important that businesses develop effective governance to address the regulatory framework governing the development, training, use and deployment of AI.

As Regulation (EU) 2016/679 (the General Data Protection Regulation) (the EU GDPR) has been in effect since 2018, businesses now have the opportunity to consider where they can leverage their existing data protection governance to support compliance with Regulation (EU) 2024/1689, also known as the Artificial Intelligence Act (the EU AI Act). They will also need to understand where important differences exist and how to approach new or updated compliance measures. In this blog we set out the key issues to consider and how to navigate the two regimes together effectively.

EU GDPR v EU AI Act – how do they differ? 

The EU GDPR has become a fact of modern-day business and has been widely copied into legislation around the world. By contrast, the EU AI Act has generally not inspired lookalike laws, but it nonetheless resonates with the high-level principles articulated in international instruments such as the OECD AI Principles and the Council of Europe’s Framework Convention on Artificial Intelligence.

The fundamental difference between the EU GDPR and the EU AI Act is that the EU AI Act is a product safety law specifically concerned with the safe development, deployment and use of AI systems, whereas the EU GDPR is a much broader fundamental rights law that enshrines individuals’ rights regarding the processing of their personal data.

Before we turn to consider the areas of overlap and where existing compliance mechanisms can be leveraged, it is perhaps worth zooming out to take a bird’s eye view of the overall landscape.

Both the EU GDPR and the EU AI Act aim in general to be technologically neutral. As a starting point, the EU GDPR applies to all personal data processing within both its jurisdictional and material scope, irrespective of the risk profile of that data. As such, personal data that may be thought relatively low risk (for instance, employees’ work contact details) is subject to the same protections as sensitive health data, but the application of those protections will differ. The underlying ethos of the EU GDPR is very much to consider the context, to apply tests of necessity, and to consider what is appropriate. More stringent security measures will, for instance, inevitably be more appropriate to higher risk data than to anodyne data that is unlikely to cause harm.

While the principles behind the EU AI Act are similar to those of the EU GDPR, in that it has always been presented as risk-oriented, the legislative approach is different: the rules it enshrines apply only to certain activities and certain use cases. As is well known, the EU AI Act considers four different risk groups: (1) prohibited AI practices; (2) high risk; (3) limited risk; and (4) minimal risk AI systems. It also contains provisions on General Purpose AI models (GPAI). You can read further about the obligations for high risk AI systems on our blog here, limited risk AI systems here, and GPAI here.

Given that many AI use cases involve the processing of data, a company may be caught in different ways under the two regimes. For example:

  • A company may be subject to the EU GDPR in respect of its operation of an AI system as a controller, where that AI system involves the processing of personal data in its EU establishment and the company is directing the key elements of the processing activities. The full panoply of the EU GDPR will apply (proportionately), but the system may be classed as only a limited risk AI system under the EU AI Act (e.g. an AI chatbot assistant on a retail site).
  • If a company develops an AI system which does not process personal data (e.g. road traffic system monitoring), this use of data will not be subject to the EU GDPR at all, but the company will still be a provider of a high-risk AI system under the EU AI Act.
  • Where a company provides a recruitment system service that uses AI to make decisions, it is likely to involve the processing of personal data and therefore be subject to the EU GDPR, perhaps as a processor rather than a controller, if it is doing so on behalf of another company.  It may also qualify as a provider of a high-risk system under the EU AI Act.

  • A company which makes the key decisions in respect of a biometric identification system that it operates is likely to be a controller for that AI system under the EU GDPR and the deployer of a high-risk system under the EU AI Act.

Businesses will therefore need to consider the overlap and differences in the context of their particular uses of AI.

Leveraging compliance frameworks

Despite their differences, both the EU AI Act and the EU GDPR are focused on ensuring the responsible and ethical use of technology, and to some extent there are compliance duties which could be read across from one piece of law to the other. In particular, businesses can look to harmonise their compliance efforts under both frameworks in areas such as transparency, technical and operational measures and governance.   This requires careful mapping, but we have set out a snapshot below of some areas of overlap that companies can consider tackling together.

Transparency

EU GDPR – Articles 13 and 14: Right to be informed

Controllers must inform individuals about the collection and use of their personal data, including by:

  • Providing details such as the identity and contact information of the controller
  • Explaining the purposes and legal basis for data processing
  • Disclosing any recipients of the personal data
  • Informing individuals about the data retention period
  • Notifying individuals if automated decision-making, including profiling, is involved in how their data will be processed

EU GDPR – Article 22: Right against automated decision-making

  • Individuals have the right not to be subject to decisions based solely on automated processing, including profiling, unless specific conditions are met.
  • If automated decision-making is used, controllers must:
    • provide meaningful information about the logic involved, and
    • explain the significance and consequences of such processing to the individual.
EU AI Act – Article 50: Transparency obligations

The obligations (additional to those under national or Union law) require providers and deployers of certain AI systems to make various disclosures, including:

  • Informing users at the first interaction that they are engaging with an AI system.
  • Clearly labelling text which is published for the purpose of informing the public on matters of public interest if it is artificially created or modified.
  • Identifying outputs from AI systems generating synthetic content as artificially generated or manipulated.
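
By way of illustration only, the sketch below shows one way a deployer might surface these disclosures in a chat interface. The function names, field names and disclosure wording are our own assumptions, not anything prescribed by the Act.

```python
# Illustrative sketch only: surfacing Article 50-style disclosures in a chat
# interface. Names and wording are hypothetical, not prescribed by the Act.

AI_DISCLOSURE = "You are interacting with an AI system, not a human agent."

def start_session() -> list[dict]:
    """Disclose at the first interaction that the user is engaging with AI."""
    return [{"role": "notice", "content": AI_DISCLOSURE}]

def wrap_output(text: str, synthetic: bool = True) -> dict:
    """Mark synthetic content as artificially generated before display."""
    return {"content": text, "label": "AI-generated" if synthetic else None}
```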
 

EU AI Act – Article 13: User information requirements

  • Transparent information is to be provided not just to individuals but also to entities using AI systems, e.g. a company that uses a system made available by a developer.
  • Greater focus on the technical information that should be provided to users (individuals and entities).
EU AI Act – Article 86: Decision-making transparency

  • Any person affected by a decision made by a deployer using a high-risk AI system can request a clear explanation of the decision-making process and the key elements of the decision made.
  • Applicable to the extent not already covered under Union law, and reflects the EU GDPR’s emphasis on transparency in decision-making.

Steps to leverage compliance frameworks

Update GDPR information notices to include the further information required under the EU AI Act, noting the distinct requirements for certain limited risk AI systems, and also address these matters for corporate users in contractual notices and terms.

Technical and operational measures

EU GDPR – Article 25: Data protection by design and by default

  • Controllers must implement technical and organisational measures that build in privacy-by-design protections.
  • Measures may include data pseudonymisation, minimisation, and storage limitation.
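
As a minimal sketch of one such measure, assuming a keyed hash held separately from the dataset (key handling is simplified here for illustration):

```python
# Minimal pseudonymisation sketch for an Article 25-style measure: a keyed
# hash replaces direct identifiers before records enter an AI training
# pipeline. Key management is simplified for illustration.
import hashlib
import hmac

PSEUDONYM_KEY = b"hold-this-secret-outside-the-dataset"  # e.g. in a KMS

def pseudonymise(identifier: str) -> str:
    """Deterministically replace a direct identifier with a pseudonym."""
    return hmac.new(PSEUDONYM_KEY, identifier.encode(), hashlib.sha256).hexdigest()

record = {"email": "jane@example.com", "purchase_total": 42.50}
record["email"] = pseudonymise(record["email"])  # minimise identifying data
```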

EU GDPR – Article 32: Security of processing

  • Controllers and processors must adopt appropriate technical and organisational measures for security.
  • Measures include data encryption and regular testing of adopted measures.
  • Ensures the availability and integrity of processing systems.

EU GDPR – Article 42: Certification mechanisms

  • Can be used to demonstrate compliance with Articles 25 and 32.
  • Both the EU GDPR and the EU AI Act require companies to consider the risks of bias arising from the use of an AI system – the EU GDPR under the fairness principle (Article 5) and the EU AI Act under its data governance provisions (Article 10).

EU AI Act – Article 9: Risk management system

  • Foreseeable risks related to health, safety, or fundamental rights must be identified and analysed in any high-risk AI system.
  • Providers are required to implement appropriate and targeted measures to address identified risks.
  • To ensure risk measures are effective, ongoing testing of the high-risk AI system should occur throughout the development process and before the product is placed on the market.

Steps to leverage compliance frameworks

When assessing technical and operational measures from an EU GDPR perspective, also implement a risk assessment under the EU AI Act addressing high-risk AI systems.

Consolidate the requirements to conduct a data protection impact assessment (Article 35 of the EU GDPR) and a fundamental rights impact assessment (FRIA) (applicable to deployers of certain high risk AI systems), or the data governance and management requirements (including assessment of potential biases) for high-risk AI systems (Articles 27 and 10 of the EU AI Act). You can read more about FRIAs in our blog post here.
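
To illustrate what consolidation could look like in practice, the hypothetical record below combines DPIA and FRIA elements in a single artefact; the field names are our own and are not prescribed by either regulation.

```python
# Hypothetical consolidated assessment record combining GDPR DPIA (Art. 35)
# and AI Act FRIA (Art. 27) elements. Field names are illustrative only.
from dataclasses import dataclass, field

@dataclass
class CombinedImpactAssessment:
    system_name: str
    processing_purposes: list[str]        # DPIA: purposes and necessity
    data_categories: list[str]            # DPIA: personal data in scope
    affected_groups: list[str]            # FRIA: categories of affected persons
    fundamental_rights_risks: list[str]   # FRIA: identified risks
    bias_checks: list[str] = field(default_factory=list)   # AI Act Art. 10(2)
    mitigations: list[str] = field(default_factory=list)
    human_oversight: str = ""             # AI Act Art. 14 arrangements
```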

Governance

EU GDPR – Article 30: Record of processing activities

  • Controllers are required to maintain a record of their data processing activities describing certain information relating to that processing. This is not a contemporaneous log of processing activity, but it will need to be kept up to date as the categories of data, recipients, purposes of processing and so on change over time.

EU GDPR – Articles 44 to 49: International personal data transfer protections

  • Necessary to comply with long-standing EU and UK policy requirements that data is adequately protected when transferred to a third country.

EU AI Act – Article 12(1): Activity logging

  • High-risk AI systems must be technically capable of automatically logging their activity over their lifetime.
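
As a rough sketch of what such automatic logging might look like (the event schema below is our illustration; the Act does not prescribe one):

```python
# Rough sketch of Article 12-style automatic event logging for a high-risk
# AI system. The event schema is our illustration, not prescribed by the Act.
import json
import logging
from datetime import datetime, timezone

audit_log = logging.getLogger("ai_system.audit")
audit_log.addHandler(logging.FileHandler("ai_audit.log"))
audit_log.setLevel(logging.INFO)

def log_inference(system_id: str, input_ref: str, output_ref: str) -> None:
    """Automatically record each use of the system over its lifetime."""
    audit_log.info(json.dumps({
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "input_ref": input_ref,    # references, not raw personal data
        "output_ref": output_ref,
    }))
```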

EU AI Act – Article 10: AI-specific data governance measures

Additional to the requirements under Articles 44 to 49 of the EU GDPR, AI-specific measures for high-risk AI systems should be adopted, including:

  • data governance and management practices for training, validation and testing data (Article 10);
  • use of data that is high quality, relevant, sufficiently representative, free of errors and complete in view of the intended purpose (Articles 10(3) and 10(4)); and
  • measures to prevent and mitigate potential biases (Article 10(2)).
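
By way of a toy illustration of the kind of bias screening Article 10(2) contemplates, the sketch below compares a simple outcome rate across groups in training data; a real programme would use a dedicated fairness toolkit and proper statistical testing.

```python
# Toy bias screen: compare positive-outcome rates across a protected
# attribute in training data. Illustrative only.
from collections import defaultdict

def outcome_rates(rows: list[dict], group_key: str, outcome_key: str) -> dict:
    """Positive-outcome rate per group, e.g. shortlisting rate by gender."""
    totals: dict = defaultdict(int)
    positives: dict = defaultdict(int)
    for row in rows:
        totals[row[group_key]] += 1
        positives[row[group_key]] += row[outcome_key]
    return {group: positives[group] / totals[group] for group in totals}

sample = [
    {"gender": "f", "shortlisted": 1}, {"gender": "f", "shortlisted": 0},
    {"gender": "m", "shortlisted": 1}, {"gender": "m", "shortlisted": 1},
]
print(outcome_rates(sample, "gender", "shortlisted"))
# Material gaps between groups would be flagged for human review.
```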

Steps to leverage compliance frameworks

  • Adapt existing (EU GDPR-compliant) data protection policies and principles to include the necessary AI-specific obligations
  • Adopt a wider approach to AI governance and risk management
  • Reflect key themes (transparency, bias, risk assessment and documentation) throughout the training and deployment phases, and implement an element of human review and oversight as appropriate

You can read more about how governance can foster responsible AI in our blog post here. There is now a significant trend towards companies integrating and aligning their compliance functions for areas such as data protection and AI. The International Association of Privacy Professionals (IAPP) also researched this trend in their 2024 report on Organizational Digital Governance.

Companies should consider adopting relevant standards for AI system conformity. In the EU this will involve standards issued by CEN, the European Committee for Standardization. Globally, it will also include standards issued by ISO and NIST on AI risk management. Companies may decide to adopt one standard as their overall approach or to integrate relevant components into a bespoke governance framework. The choice will depend on which markets are key to their AI development and deployment and how they are using the technology.

Oversight and enforcement of EU GDPR and the EU AI Act

It is likely, and in certain cases imperative as per Article 74(8) of the EU AI Act, that some Data Protection Authorities under the EU GDPR will play a role in the oversight and enforcement of the EU AI Act. The EU AI Act gives EU Member States discretion as to the structure and design of these three types of authorities: Market Surveillance Authorities, Notifying Authorities and National Public Authorities. Read more about the types of authorities here, and access a table outlining the authorities that Member States have designated so far.

Member States are still deciding how to implement these structures, but different models are emerging. For example, Spain will establish an Artificial Intelligence Supervisory Agency (AESIA), acting as a Market Surveillance Authority under the Department of Digital Transformation. In contrast, Luxembourg has proposed that the National Commission for Data Protection (CNPD), its national data protection authority, act as its Market Surveillance Authority.

Therefore, companies will need to be prepared for engagement that spans multiple regulators in some Member States. Given the regulatory overlap explained above, it is of course possible that companies will encounter sanctions (and monetary penalties) under both regimes, although the EU AI Act requires that regard be given to any fines already applied in respect of the same activity or failure to act (Article 99(7)).

What are the tensions when applying EU GDPR to AI systems?

Although the EU GDPR is not intended to conflict with the EU AI Act, its robust and comprehensive approach to data protection presents certain challenges for the rollout of AI technologies. In the last year the European Data Protection Board (EDPB) and some national Data Protection Authorities (DPAs) have issued guidance that addresses some of these areas of challenge, but we discuss below some of the issues that remain.

Below we contrast the requirements under the EU GDPR with the safeguards and challenges they present for AI systems.

Data minimisation and purpose limitation

Requirements under the EU GDPR:

  • Mandates collection and processing of only the minimum necessary data, to be used only for specified, explicit and legitimate purposes.

Safeguards and challenges for AI systems:

  • Poses challenges for AI systems that need large data sets for training.
  • AI systems need flexibility to repurpose data for different applications, which conflicts with the EU GDPR’s strict data use requirements.
  • Companies must document and justify the necessity of using personal data for AI model development, especially when sourced from web scraping.
Lawful basis

Requirements under the EU GDPR:

  • A fundamental aspect of data protection compliance is the need to identify a lawful basis under Article 6.

Safeguards and challenges for AI systems:

  • Reliance on legitimate interests is crucial for GDPR compliance in AI model development that uses publicly available data.
  • The EDPB’s December 2024 Opinion on AI models recognises legitimate interests as a valid option.
  • Rigorous assessment and evidence are required to support reliance on legitimate interests and the balancing test conclusion.
Data subject rights

Requirements under the EU GDPR:

  • Individuals have enshrined data rights, including the right to:
    • access their data and request rectifications; and
    • be forgotten, meaning their data can be permanently erased.

Safeguards and challenges for AI systems:

  • AI systems must be designed to accommodate EU GDPR rights, which can be challenging due to the tokenised structure of data in large language models.
  • Implementing the right to deletion in AI systems is particularly challenging.
  • Tools are emerging to apply data protection rights to AI system inputs and outputs through “unlearning”, but they are not yet fully proven.
  • The recent EDPB Opinion states that the anonymity of AI models should be assessed on a case-by-case basis, focusing on the model’s outputs.
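
The sketch below illustrates the tractable part of an erasure request, on our own assumed data structures: removing a data subject’s records before the next training run and suppressing erased identifiers at the output layer. It deliberately does not solve the hard problem of removing what trained model weights have already learned, which is what the “unlearning” tools mentioned above target.

```python
# Illustrative only: the tractable part of honouring an erasure request.
# This does not undo what trained model weights have learned; that harder
# problem is what "unlearning" techniques aim to address.

def erase_subject(corpus: list[dict], subject_id: str) -> list[dict]:
    """Drop a data subject's records before the next training run."""
    return [row for row in corpus if row.get("subject_id") != subject_id]

ERASED_NAMES: set[str] = {"Jane Example"}  # hypothetical erasure blocklist

def filter_output(text: str) -> str:
    """Suppress erased identifiers in generated output as a stop-gap."""
    for name in ERASED_NAMES:
        text = text.replace(name, "[removed]")
    return text
```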
Automated decision-making

Requirements under the EU GDPR:

  • Restricts decisions made solely by automated processes that have legal or similarly significant effects on individuals.

Safeguards and challenges for AI systems:

  • Challenges the use of AI systems designed for automated decision-making.
  • Additional safeguards may be required for EU GDPR compliance.
  • High-risk AI systems must include human-machine interface tools such that human oversight is effective while they are in use (EU AI Act, Article 14).
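
As a final sketch, the hypothetical gate below shows one way a deployer might keep a human decision-maker in the loop, so that a decision with legal or similarly significant effect is not based solely on automated processing; the structure and field names are ours.

```python
# Hypothetical human-in-the-loop gate supporting GDPR Article 22 and AI Act
# Article 14: the system proposes, a named human reviewer decides.
from dataclasses import dataclass

@dataclass
class ProposedDecision:
    subject_id: str
    outcome: str          # e.g. "reject_application"
    model_rationale: str  # surfaced so the reviewer can meaningfully assess it

def finalise(decision: ProposedDecision, reviewer: str, approved: bool) -> dict:
    """Record that a human, not the system alone, made the final decision."""
    return {
        "subject_id": decision.subject_id,
        "outcome": decision.outcome if approved else "escalated_for_review",
        "reviewed_by": reviewer,
        "solely_automated": False,
    }
```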

What impact will EU GDPR have on innovation and AI?

Guidance issued in 2024 by the EDPB and national DPAs (such as the CNIL in France) marked important milestones in developing a deeper understanding of how the EU GDPR will apply to AI. In December 2024, the UK Information Commissioner’s Office (ICO) also issued its response to its consultations on generative AI under the United Kingdom General Data Protection Regulation (the UK GDPR).

The resulting guidance has set out compliance approaches for companies to follow when developing and deploying AI models and systems, but the interpretation of the EU GDPR/UK GDPR has shown little flexibility or pragmatism. For example, the EDPB’s approach to anonymity and AI models indicates a regulatory posture that assumes the EU GDPR is likely to apply equally to all parts of the AI supply chain, including the models themselves. Furthermore, a high bar has been set for companies seeking to rely on legitimate interests and to meet transparency requirements under the EU GDPR.

Additionally, the ICO has sent a message that it expects companies to consider carefully whether they are acting as joint controllers when one company deploys an AI model developed by another, in contrast to a processor-controller or independent controller-to-controller model for the developer and deployer. Joint controller models are likely to be complex to implement at scale, when there can be thousands of deployments of an AI model.

Concerns have been expressed that overly narrow interpretations of data protection laws may reduce innovation and result in a far more cautious approach than in the rest of the world, especially given the attention from privacy activists such as Max Schrems/noyb. We can expect a wider policy debate in 2025 about whether the interpretation of the EU GDPR strikes a fair balance between protection against risks and harms, and supporting innovation.

In 2024 an open letter was written by a number of technology companies about the risks of regulatory uncertainty in the EU. The letter noted: “If companies and institutions are going to invest tens of billions of euros to build Generative AI for European citizens, they require clear rules, consistently applied, enabling the use of European data. But in recent times, regulatory decision making has become fragmented and unpredictable, while interventions by the European Data Protection Authorities have created huge uncertainty about what kinds of data can be used to train AI models”.

In the meantime, effective and joined-up governance for the EU GDPR and the EU AI Act remains critically important to successfully implementing AI systems.
