Opinion

Zooming in on AI – #1: When will the AI Act apply?

EU Regulation 2024/1689, also known as the Artificial Intelligence Act (AI Act), enters into force on 1 August 2024. But when will it become applicable?

Some background

The AI Act sets out a harmonized legal framework for the development, supply, and use of AI systems in the EU. As a regulation, the AI Act is directly applicable in all EU Member States, without the need for transposition into national law. The nature of a regulation also prevents Member States from adopting additional restrictions on the same subject matter, unless explicitly authorized by the AI Act.

The AI Act introduces a risk-based approach to the regulation of AI, distinguishing between four categories of AI systems: prohibited AI practices, high-risk AI systems, general-purpose AI models, and other AI systems. Each category is subject to different levels of compliance. 

Not all the AI Act’s provisions will apply immediately. The AI Act sets out different transitional periods and dates of application depending on the type and impact of the AI systems concerned.

When will the AI Act apply?  

In this first publication of our “Zooming in on AI” series, we will provide an overview of the timeline for the application of the main provisions of the AI Act and highlight some of the key implications for AI providers and deployers.

2 February 2025: prohibited AI practices

Certain AI practices will be prohibited from 2 February 2025. This concerns AI systems that have been designed or that are used in a manner considered to pose an unacceptable risk to humans, such as AI systems that create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage. This prohibition largely applies to both the use and the supply of such AI systems.

2 May 2025: codes of practice for general-purpose AI models

The AI Office, which is part of the European Commission and was created at the beginning of this year, will issue codes of practice for providers of general-purpose AI models by 2 May 2025. These voluntary codes will enable providers of general-purpose AI models to demonstrate compliance with the relevant obligations under the AI Act, similar to the codes of conduct under the GDPR. The AI Office has opened a call for expressions of interest to participate in the drawing-up of the first code of practice and has also launched a multi-stakeholder consultation, allowing stakeholders to express their views until 10 September 2024.

2 August 2025: governance and enforcement framework and fines

The governance and enforcement framework of the AI Act will become effective on 2 August 2025. This means that by then, the EU Member States must designate their national competent authorities, comprising at least one notifying authority and at least one market surveillance authority:

  • The notifying authority will be responsible for the supervision of conformity assessment bodies. Providers of high-risk AI systems will have to undergo a conformity assessment procedure, which for certain high-risk AI systems involving biometrics entails the involvement of an external conformity assessment body.
  • The market surveillance authority is the national authority responsible for enforcement of the AI Act. It has the investigative and corrective powers set out under Article 14 of Regulation (EU) 2019/1020.

The EU Member States may designate multiple competent authorities. For example, the data protection authority as well as sector-specific supervisory authorities, such as those for the financial, insurance, or telecom sectors, may be appointed as market surveillance authorities. At a national level, it will be very important to coordinate the approaches of these authorities to achieve effective and consistent enforcement; an undesired outcome would be authorities adopting diverging or even conflicting interpretations or guidance. It will be up to each Member State to adopt legislation addressing this specific issue. If EU Member States do not get this right, the result may be inconsistency and uncertainty for supervised entities.

In addition to the national authorities, the governance and enforcement framework will also include the establishment of a European Artificial Intelligence Board, composed of one representative per Member State, whose role will be to advise and assist the Commission and the EU Member States on AI matters.

The AI Office will have exclusive powers to supervise and enforce the obligations of providers of general-purpose AI models.

General-purpose AI models

The rules regarding general-purpose AI models will begin to apply as of 2 August 2025 (by exception, providers of general-purpose AI models that have been placed on the market before 2 August 2025 must comply with the AI Act by 2 August 2027). 

The rules for providers of general-purpose AI models include, among other things, maintaining adequate technical documentation, information, and policies for their models, and appointing an authorized representative in the Union if they are established in third countries. Providers of general-purpose AI models with systemic risk – models with high-impact capabilities or models trained using a cumulative amount of computation power above the threshold set in the AI Act (10^25 floating-point operations) – have additional obligations, including the obligation for the relevant provider to notify the Commission within two weeks of meeting the criteria.

2 August 2026: general application date

The AI Act applies in its entirety from 2 August 2026, unless otherwise specified. 

High-risk AI systems

The provisions that apply from 2 August 2026 include the obligations of providers of high-risk AI systems. Providers must, among other things:

  • establish a risk management system;
  • adhere to data quality and governance standards for training, validating, and testing the system;
  • maintain technical documentation to facilitate the competent authorities' assessment of the system's compliance;
  • log events;
  • meet transparency requirements towards deployers;
  • ensure human oversight, robustness, security, and accuracy; and
  • undergo conformity assessment procedures and adhere to registration requirements.

Deployers must also fulfill various obligations, such as conducting a fundamental rights impact assessment prior to the deployment of certain high-risk systems – for instance, using an AI system to determine a natural person's credit score – and informing the market surveillance authority of the assessment's findings.

Limited-risk AI systems

The provisions for AI systems with limited risk will take effect from 2 August 2026 as well. For instance, providers of AI systems designed for direct interaction with individuals must ensure that those individuals are aware that they are interacting with an AI system.

Fines for general-purpose AI models

The Commission will be able to impose fines on providers of general-purpose AI models of up to 3% of their total worldwide annual turnover in the preceding financial year or EUR 15 million, whichever is higher.
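For illustration only (not legal advice), the "whichever is higher" cap can be expressed as simple arithmetic; the function name and turnover figures below are hypothetical:

```python
def gpai_fine_cap(annual_worldwide_turnover_eur: float) -> float:
    """Upper bound of a fine for a provider of a general-purpose AI model:
    the higher of 3% of total worldwide annual turnover in the preceding
    financial year or EUR 15 million."""
    return max(annual_worldwide_turnover_eur * 3 / 100, 15_000_000)

# Hypothetical turnover of EUR 1 billion: 3% (EUR 30 million) exceeds the floor
print(gpai_fine_cap(1_000_000_000))  # 30000000.0
# Hypothetical turnover of EUR 200 million: 3% would be only EUR 6 million,
# so the EUR 15 million floor applies
print(gpai_fine_cap(200_000_000))  # 15000000
```

Note that this is a ceiling, not a fixed amount: the actual fine in a given case would be set by the Commission within this limit.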

2 August 2027: AI systems as safety components

AI systems that are safety components of products, or that are themselves products, falling within the scope of certain Union harmonization legislation will be classified as high-risk as of 2 August 2027 if the product concerned undergoes a conformity assessment procedure involving a third-party conformity assessment body pursuant to that legislation. This concerns, for example, products such as machinery, toys, lifts, or medical devices.

General-purpose AI models placed on the market before 2 August 2025

Providers of general-purpose AI models that have been placed on the market before 2 August 2025 must comply with the AI Act by 2 August 2027.

31 December 2030: AI systems as components of certain large-scale IT systems

AI systems which are components of certain large-scale IT systems established by Union law, such as the Schengen Information System or the Visa Information System, and that have been placed on the market or put into service before 2 August 2027, must be brought into compliance with the AI Act by 31 December 2030.

Exemption for high-risk AI systems that have been placed on the market or put into service before 2 August 2026

Operators of high-risk AI systems that have been placed on the market or put into service before 2 August 2026 are exempt from the AI Act, on the condition that the systems concerned do not undergo significant changes in their designs. By exception, if such systems are intended to be used by public authorities, their providers and deployers must comply by 2 August 2030 at the latest.

The AI Act represents a major milestone in the EU's digital agenda, and a significant challenge for providers and deployers of AI systems in the EU. It is therefore crucial for companies to understand the scope and implications of the AI regulation, and to prepare for the upcoming deadlines and obligations. It is not yet clear whether authorities will enforce the AI Act immediately once each obligation becomes applicable.

A&O Shearman will continue to monitor the developments and provide further insights on the AI Act and related topics in our “Zooming in on AI” series.

Timeline of the AI Act
