Zooming in on AI - #14: Enforcement of the AI Act

The European Union Artificial Intelligence Act (AI Act) entered into force on August 1, 2024. The AI Act establishes a risk-based approach to AI, prohibiting certain practices deemed unacceptable, such as social scoring or manipulating human behavior, and imposing specific requirements on high-risk AI systems, such as those used in health, employment or law enforcement. However, as the ink dries, one challenge lies ahead: how will the AI Act be enforced in practice?

Summary

The European Union's Artificial Intelligence Act (AI Act), effective from August 1, 2024, introduces a risk-based framework for AI regulation, prohibiting certain practices and imposing requirements on high-risk AI systems.

The enforcement of the AI Act involves a combination of centralized and decentralized structures, with key roles played by national market surveillance authorities, the European Commission (via the AI Office), and the European Data Protection Supervisor (EDPS).

Operators will face administrative burdens in dealing with multiple authorities, the AI Office's dual role could affect its impartiality, and varying expertise among national authorities may lead to inconsistent enforcement across Member States.

The AI Act establishes both a centralized and decentralized structure for the enforcement of its provisions. The main actors are the national market surveillance authorities for AI systems, the European Commission, acting through the AI Office, for general-purpose AI models and the European Data Protection Supervisor (EDPS) for Union institutions, agencies and bodies. (Read more about the AI systems and AI models under the AI Act in our Zooming in on AI – #2: AI system v AI models.) This article explores the roles and responsibilities of these authorities, detailing the actions they may take when operators (i.e. developers, deployers, importers, distributors, etc.) fail to comply with the established rules.

1. Market Surveillance Authorities

The Member States will play a key role in enforcing the AI Act. Each Member State must designate at least one notifying authority and at least one market surveillance authority, which together constitute its national competent authorities:

  • Notifying Authorities: These authorities will intervene in the pre-implementation phase of AI systems. They are responsible for establishing and applying the framework for conformity assessment bodies, developed in cooperation with the notifying authorities of all Member States. Conformity assessment bodies certify the compliance of high-risk AI systems with the AI Act. If they meet the applicable requirements, conformity assessment bodies may be notified by the competent notifying authority and become notified bodies.
     
  • Market Surveillance Authorities: These authorities will supervise AI systems after implementation, once they are placed on the market, put into service, or used within the territory of their Member State.

Unlike notifying authorities, market surveillance authorities have the power to impose sanctions for non-compliance. They are equipped with investigative and corrective powers as outlined in Regulation (EU) 2019/1020, including the authority to impose penalties as per Member State law. Additionally, the AI Act grants them the power to impose administrative fines for various infringements, including:

  • Non-compliance with the prohibited AI practices, subject to administrative fines of up to EUR 35 million or, if the offender is an undertaking, up to 7% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
  • Non-compliance with the provisions listed in Article 99(4) of the AI Act, which include the main obligations of operators and notified bodies, subject to administrative fines of up to EUR 15 million or, if the offender is an undertaking, up to 3% of its total worldwide annual turnover for the preceding financial year, whichever is higher.
  • Providing incorrect, incomplete, or misleading information to notified bodies or national competent authorities, subject to administrative fines of up to EUR 7 million or, if the offender is an undertaking, up to 1% of its total worldwide annual turnover for the preceding financial year, whichever is higher.

Market surveillance authorities can act on their own initiative or upon receiving a complaint. Anyone having grounds to consider that there has been an infringement of the AI Act may submit a complaint to the relevant market surveillance authority. The scope of this right of complaint is notably broad, as it is not limited to individuals personally affected by an AI system. Instead, it extends to any person who suspects an infringement, which could open the door to abusive complaints.

Member States must designate their national competent authorities by August 2, 2025. 

2. European Commission and AI Office

The Commission has exclusive powers to supervise and enforce obligations for general-purpose AI models. The Commission will entrust these tasks to the AI Office, a dedicated function within the Commission.

The Commission, through the AI Office, may act on its own initiative, upon receiving a complaint from downstream providers of a general-purpose model, or upon receiving a qualified alert from the scientific panel. This panel, composed of independent experts, is established by the Commission to support the enforcement of the AI Act by providing advice, developing evaluation tools, and assisting market surveillance authorities. Additionally, the AI Office may act at the request of market surveillance authorities when necessary and proportionate to assist with their tasks under the AI Act.

The Commission is equipped with investigative and corrective powers, including requesting documentation and information from providers of general-purpose AI models, conducting evaluations to assess compliance and investigate systemic risks, and requesting providers to take corrective measures.

The AI Act grants the Commission the power to impose fines on providers of general-purpose AI models, not exceeding 3% of their annual total worldwide turnover or EUR 15 million, whichever is higher, for non-compliance. These fines are subject to full judicial review by the Court of Justice of the European Union. This sanctioning power is significant for the Commission, which generally does not have direct enforcement powers, except in areas like competition law and recently under the Digital Services Act (DSA) and the Digital Markets Act (DMA).

In addition to its enforcement role regarding general-purpose AI models, the AI Office also monitors and supervises compliance of AI systems based on a general-purpose AI model where the same provider develops both the AI system and the general-purpose AI model (e.g. an AI-powered chatbot developed by the same provider that created the underlying general-purpose language model). For this purpose, the AI Office has the same investigative and corrective powers as market surveillance authorities. Market surveillance authorities are required to cooperate with the AI Office to carry out compliance evaluations regarding general-purpose AI systems that can be used directly by deployers for at least one high-risk purpose. (Read more about high-risk systems and general-purpose AI models in our blogs: Zooming in on AI – #10: EU AI Act – What are the obligations for “high-risk AI systems”? and Zooming in on AI – #12: spotlight on GPAI Models).

3. European Data Protection Supervisor

The AI Act designates the European Data Protection Supervisor (EDPS) as the competent market surveillance authority for Union institutions, agencies and bodies, except in relation to the CJEU acting in its judicial capacity. This includes agencies such as Europol, Frontex, and Eurojust, among others. The EDPS is equipped with the same investigative and corrective powers as national market surveillance authorities, including the power to impose administrative fines of up to EUR 1.5 million for non-compliance with the prohibited AI practices and up to EUR 750,000 for other infringements, caps that are significantly lower than those applicable to other operators.

4. Cooperation and coordination

The AI Act includes several mechanisms to ensure cooperation and coordination among national competent authorities and the Commission. In particular:

  • If a market surveillance authority finds that non-compliance extends beyond its national territory, it must inform the Commission and other Member States without delay. This includes sharing the results of its evaluation and the actions it has required the operator to take.
  • If the operator of an AI system does not take adequate corrective action within the specified period, the market surveillance authority must take appropriate provisional measures to stop or limit the system's availability or use. The authority must also inform the Commission and other Member States.
  • If no market surveillance authority from other Member States or the Commission objects to the provisional measures within three months (or 30 days for prohibited AI practices), the measures are deemed justified.

The AI Act also provides for a Union safeguard procedure, where the Commission intervenes in case of disagreements or objections among Member States or operators regarding national measures taken. The Commission will also contribute to harmonizing the applicable rules by developing guidelines on the practical implementation of the AI Act and adopting delegated acts to amend various provisions of the regulation (e.g., the list of high-risk AI systems, technical documentation requirements, and criteria for general-purpose AI models with systemic risk).

The European Artificial Intelligence Board (AI Board) will facilitate cooperation and ensure consistent application of the AI Act across Member States. Composed of one representative per Member State, with the EDPS and the AI Office as observers, the AI Board will provide a platform for cooperation and exchange among market surveillance and notifying authorities. It will offer advice on implementing the AI Act and assist enforcement authorities, including by supporting joint investigations.

However, the applicable procedure will depend on whether the AI system relates to products covered by the Union harmonization legislation listed in Section A of Annex I of the AI Act (such as machinery, toys, radio equipment, or medical devices). If so, the relevant sectoral procedures will apply instead of those provided for in the AI Act.

5. Challenges

The AI Act's enforcement framework is complex and raises questions about its practical implementation. Some of the main challenges include:

  • The lack of a one-stop shop mechanism. Operators will have to deal with multiple authorities in different Member States, depending on where their AI systems are placed on the market, put into service, or used. While the AI Act provides mechanisms for cooperation and coordination among national competent authorities and the Commission, these may not be sufficient to address the administrative burden of dealing with multiple authorities.
  • The lack of harmonization for the procedural aspects. The AI Act does not specify applicable deadlines for authorities to act, limitation periods, the right to be heard of the complainant, access by the complainant to the investigation file, or the publication of decisions. These aspects are largely subject to national law, which may vary significantly among Member States, increasing fragmentation. Similar issues arose regarding the enforcement of the GDPR, which faces difficulties due to differences in national administrative procedures and interpretations of the cooperation mechanism between data protection authorities. The Commission has recently proposed a Regulation to lay down additional procedural rules for the enforcement of the GDPR, which is currently under negotiation between the EU institutions.
  • The double role of the AI Office. The AI Office, a function within the Commission, is in charge of supervising and enforcing the obligations for general-purpose AI models, including by imposing sanctions, while at the same time it is tasked with developing Union expertise and capabilities in the field of AI. This may pose challenges for the impartiality of the AI Office, as well as for the trust and cooperation of operators.
  • Different expertise of market surveillance authorities. The expertise of market surveillance authorities will vary across Member States. Some Member States will designate their existing data protection authority to oversee AI systems, while others will establish new authorities specifically for this purpose. This variation in expertise and institutional background is likely to result in differing interpretations and enforcement actions, potentially leading to inconsistencies in the application of the AI Act across Member States.