
AI Act published in EU Official Journal, to enter into force on August 1, 2024

Published: July 25, 2024
On July 12, 2024, Regulation (EU) 2024/1689 laying down harmonised rules on artificial intelligence (the ‘EU AI Act’) was published in the Official Journal of the European Union. The EU AI Act aims to establish a regulatory framework for the use of artificial intelligence (‘AI’) in the EU.

It will enter into force on August 1, 2024, with staggered implementation of its provisions: from February 2, 2025, prohibited AI systems must be withdrawn from the market; from August 2, 2026, the majority of provisions will become applicable, including those for high-risk systems; and from August 2, 2027, certain AI systems that are safety components of products, or are themselves regulated products, will be covered by the AI Act, and providers of general-purpose AI models placed on the market before August 2, 2025 will need to comply. There are further exemptions and deviating transition periods, which we address in this blog.

In the meantime, the European Commission has launched the AI Pact, which encourages early compliance with the requirements of the EU AI Act, and is conducting a targeted consultation on the use of AI in the financial services sector.

Scope of application

The EU AI Act applies to AI systems that are intended to be used in the EU. Organisations outside the EU will fall under the EU AI Act if (i) they supply AI systems to the EU market or (ii) the output produced by their AI systems is used in the EU.

The EU AI Act defines an AI system as a machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the inputs it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments. 

Pursuant to Article 2, the EU AI Act applies to:  

  • providers placing AI systems or general-purpose AI models on the market or putting them into service in the EU, deployers established or located in the EU, and providers or deployers of AI systems where the output produced is used in the EU; 
  • importers and distributors of AI systems; 
  • product manufacturers who place on the market or put into service AI systems together with their product under their own name or mark;  
  • authorised representatives of providers not established in the EU; and  
  • any affected persons in the EU. 

The EU AI Act sets out exemptions from its scope, such as for AI systems used solely for military, defence and national security purposes and for scientific research and development. 

Risk-based approach

The EU AI Act regulates different types of AI systems according to the level of risk involved and is based on the following four categories (a schematic summary follows their descriptions below): 

Minimal risk

AI systems that do not fall within one of the three risk classes below are classified as minimal risk; examples include AI-enabled video games and email spam filters. The EU AI Act allows free use of minimal-risk systems, with no mandatory requirements for them, but encourages adherence to voluntary codes of conduct. 

Limited risk

Limited-risk AI systems are subject to lighter transparency obligations than high-risk systems. Providers and deployers of such systems must ensure that individuals are aware when they are interacting with AI, with clear labelling of synthetic content and disclosure of AI involvement in content generation, except where this is obvious from the context or authorised by law for criminal investigation purposes. They are also required to inform users about the use of emotion recognition and biometric categorisation systems, subject to exemptions for certain artistic or editorial contexts and legal authorisations. 

Limited-risk AI systems include systems, other than those in the unacceptable or high-risk categories, that interact with individuals, perform emotion recognition or biometric categorisation, or generate synthetic content (such as ‘deep fakes’). 

High risk

High-risk AI systems are permitted under the EU AI Act but will be required to comply with obligations relating to operational transparency, training, risk-mitigation and quality management systems, high-quality data sets, activity logging, detailed documentation, clear user information, human oversight and a high level of robustness, accuracy and cybersecurity.  

Article 6 of the EU AI Act categorises certain AI systems as ‘high-risk’ based on the sectors in which they are used and their intended purpose, such as systems used in recruitment processes or in applications integral to the functioning of critical infrastructure (e.g. power or water supply, transportation). High-risk AI systems fall into two categories: (i) AI systems that are safety components of regulated products and subject to a third-party conformity assessment (such as toys or medical devices), and (ii) AI systems specifically designated by the European Commission as high-risk (such as systems that evaluate the creditworthiness of individuals or their access to other essential private or public services, or that can be used in law enforcement to assess the risk or likelihood of an individual becoming an offender or victim of criminal offences).

Requirements for high-risk systems include:

  • adequate risk assessment and mitigation (Article 9);
  • compliance with data and data governance requirements, including for the protection of personal data (Article 10);
  • transparency measures, including providing information to users (Article 13);
  • human oversight to minimise risk and ensure the system's operation aligns with intended purposes (Article 14);
  • high levels of accuracy, robustness, and cybersecurity (Article 15); and
  • for certain deployers of high-risk AI systems, a fundamental rights impact assessment (Article 27).

Unacceptable risk

Unacceptable-risk AI systems are prohibited under the EU AI Act due to their potential for significant harm. Prohibited AI systems include, for example, those that:

  • use subliminal techniques beyond a person’s consciousness or purposefully manipulative or deceptive techniques;
  • exploit vulnerabilities of a person or group, caused by age, disability, or specific social or economic situations;
  • evaluate or classify persons based on their social behaviour or known, inferred or predicted personal or personality characteristics, leading to detrimental or unfavourable treatment (social scoring);
  • use biometric categorisation systems that classify persons based on their biometric data to deduce or infer, among other things, their race, political opinions, trade union membership or sexual orientation; or
  • predict the risk of a person committing a criminal offence, based solely on the profiling or assessing of personality traits and characteristics. This does not apply where the AI system is only used to support a human assessment which is already based on objective and verifiable facts directly linked to criminal activity.
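
For readers who prefer a schematic view, the following is a minimal sketch (in Python) of the four risk tiers described above and the broad regulatory consequence the EU AI Act attaches to each. The enum, mapping and example entries are our own illustrative shorthand, not terminology or tooling from the Act itself:

    # Illustrative shorthand only: the EU AI Act's four risk tiers and the
    # broad consequence attached to each, as summarised in this blog post.
    from enum import Enum

    class RiskTier(Enum):
        MINIMAL = "minimal"            # e.g. email spam filters, AI-enabled video games
        LIMITED = "limited"            # e.g. systems interacting with individuals, deep-fake generators
        HIGH = "high"                  # e.g. recruitment or creditworthiness systems
        UNACCEPTABLE = "unacceptable"  # e.g. social scoring, subliminal manipulation

    # Broad regulatory consequence per tier (our own paraphrase of the Act).
    CONSEQUENCE = {
        RiskTier.MINIMAL: "no mandatory requirements; voluntary codes of conduct encouraged",
        RiskTier.LIMITED: "transparency obligations (disclosure and content labelling)",
        RiskTier.HIGH: "permitted, subject to the full compliance regime (e.g. Articles 9-15)",
        RiskTier.UNACCEPTABLE: "prohibited outright",
    }

    print(CONSEQUENCE[RiskTier.HIGH])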

Enforcement and penalties

The EU AI Act provides for the creation of new offices and bodies to implement and enforce it, including:

  • an AI Office, a new body within the European Commission, that will implement and enforce the EU AI Act;
  • a scientific panel of independent experts to support the enforcement activities and issue alerts on systemic risks;
  • an EU AI Board, which will be composed of representatives of Member States and responsible for advisory tasks such as issuing opinions and recommendations; and
  • an advisory forum, consisting of stakeholders (from industry, start-ups, SMEs, civil society and academia), to provide technical expertise to the AI Board and the European Commission.

Additionally, Member States are required to establish independent and impartial national competent authorities (including notifying and market surveillance authorities) to ensure the application and implementation of the Regulation, and must provide these authorities with adequate resources (including personnel with expertise in AI and related fields). The authorities are also responsible for ensuring cybersecurity and confidentiality and must report to the European Commission every two years on their resources.

The EU AI Act also sets out penalties for non-compliance, with each ceiling set at the higher of a fixed amount and a percentage of worldwide annual turnover (illustrated in the sketch after this list):

  • for non-compliance with the prohibition of AI practices carrying unacceptable risk, fines of up to 7% of total worldwide annual turnover or EUR 35 million, whichever is higher;
  • for breach of certain provisions in respect of high-risk AI systems, fines of up to 3% of total worldwide annual turnover or EUR 15 million, whichever is higher;
  • for the supply of incorrect, incomplete or misleading information to the relevant authorities, fines of up to 1% of total worldwide annual turnover or EUR 7.5 million, whichever is higher; and
  • for providers of general-purpose AI models that have intentionally or negligently infringed the EU AI Act, or have failed to comply with requests from regulators, fines of up to 3% of total worldwide annual turnover or EUR 15 million, whichever is higher.
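
As a purely illustrative aid, the sketch below (in Python) computes these fine ceilings under the ‘whichever is higher’ rule. The tier figures come from the Act as summarised above, while the helper function and category names are our own hypothetical shorthand:

    # Illustrative only: maximum fine ceilings under the EU AI Act's
    # "whichever is higher" rule. Figures are from the Act; the helper
    # and category names are our own shorthand, not official tooling.

    # category -> (fixed cap in EUR, share of total worldwide annual turnover)
    FINE_TIERS = {
        "prohibited_practices": (35_000_000, 0.07),
        "high_risk_obligations": (15_000_000, 0.03),
        "misleading_information": (7_500_000, 0.01),
        "gpai_provider": (15_000_000, 0.03),
    }

    def fine_ceiling(category: str, worldwide_turnover_eur: float) -> float:
        """Return the higher of the fixed cap and the turnover-based cap."""
        fixed_cap, turnover_share = FINE_TIERS[category]
        return max(fixed_cap, turnover_share * worldwide_turnover_eur)

    # Example: EUR 2 billion turnover and a prohibited-practice breach gives a
    # ceiling of EUR 140 million, since 7% of turnover exceeds EUR 35 million.
    print(fine_ceiling("prohibited_practices", 2_000_000_000))  # 140000000.0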

The full text of the EU AI Act is available in the Official Journal of the European Union.
