Opinion

Council approves EU AI Act

Published: 11 June 2024
On 21 May 2024, the Council of the European Union (the 'Council') issued a press release announcing it had approved the Proposal for a Regulation of the European Parliament and of the Council Laying Down Harmonised Rules on Artificial Intelligence (the 'EU AI Act').

Purpose and scope

The EU AI Act aims to provide a legal framework for the use of AI as part of the EU’s new legislative framework, addressing risks to health and safety as well as to fundamental rights.

The EU AI Act applies to AI systems (and general-purpose AI ('GPAI') models) placed on the EU market or used in the EU, regardless of the provider's location. It covers deployers based in the EU and also captures providers located outside the EU where their AI system is placed on the EU market or its output produces an effect in the EU. Manufacturers, importers and distributors are also regulated by the EU AI Act.

The EU AI Act broadly defines an 'AI system' as a “machine-based system designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment, and that, for explicit or implicit objectives, infers, from the inputs it receives, how to generate outputs such as predictions, content, recommendations or decisions that can influence physical or virtual environments”.

The EU AI Act sets out exemptions from its scope, such as for AI systems used solely for military, defence and national security purposes and for scientific research and development.

Classifying risk in AI systems

The EU AI Act follows a ‘risk-based approach’, imposing stricter rules where there is a higher risk of harm to society. The approach distinguishes four categories: unacceptable, high, limited and minimal risk.

Unacceptable risk

The EU AI Act prohibits certain AI systems on the basis of their potential detrimental effect on the persons concerned. Examples include AI systems that:

  • materially distort human behaviour and cause adverse impacts on financial interests or physical or psychological health;
  • carry out biometric categorisation using sensitive characteristics (e.g. political, religious or philosophical beliefs, race, sex life or sexual orientation);
  • conduct social scoring that may lead to discriminatory outcomes and the exclusion of certain groups;
  • conduct risk assessments, based solely on the profiling of a person or on their personality traits, to predict the risk of that person committing a criminal offence;
  • create or expand facial recognition databases by untargeted scraping of facial images from the internet or CCTV footage; or
  • carry out emotion recognition in the workplace and education, except where strictly needed for medical or safety reasons.

High-risk

AI systems identified as high-risk are permitted but will be required to comply with obligations relating to operational transparency, training, risk-mitigation and quality management systems, high-quality data sets, activity logging, detailed documentation, clear user information, human oversight and a high level of robustness, accuracy and cybersecurity. They will also be subject to a conformity assessment, be placed on the market with a declaration of conformity and bear a CE marking. Deployers will have to carry out a mandatory fundamental rights impact assessment for high-risk AI systems.

High-risk AI systems fall into two categories: (i) AI systems that are safety components of products subject to third-party conformity assessment (such as toys or medical devices), and (ii) AI systems specifically designated by the Commission as high-risk, including AI systems that:

  • utilise certain biometric identification, categorisation and emotion recognition systems but are not categorised as unacceptable risk systems;
  • can be used in the management and operation of critical infrastructure;
  • can be used in employment, worker management and recruitment;
  • evaluate the creditworthiness of individuals or their access to other essential private or public services;
  • can be used in law enforcement to assess the risk or likelihood of an individual becoming an offender or victim of criminal offences;
  • can be used by public authorities in the context of migration, asylum seeking and border control management;
  • can be used by judicial authorities to research and interpret facts and laws; or
  • can influence the outcome of an election, referendum or individuals’ voting behaviour.

Limited risk

AI systems presenting only limited risk are subject to lighter transparency obligations than high-risk systems. Providers and deployers of such AI systems must ensure that individuals are aware when they are interacting with AI, with clear labelling of synthetic content and disclosure of AI involvement in content generation, except where this is obvious or authorised by law for the purposes of criminal investigation. They are also required to inform users about the use of emotion recognition and biometric categorisation systems, with exemptions for certain artistic or editorial contexts and legal authorisations.

Limited risk AI systems include systems, other than those in the unacceptable or high-risk categories, that interact with individuals, perform emotion recognition or biometric categorisation, or generate synthetic content (such as ‘deep fakes’).

Minimal risk

AI systems that do not fall within one of the three risk classes above can be classified as minimal risk; examples include AI-enabled video games and email spam filters. The EU AI Act allows the free use of minimal risk systems and imposes no mandatory requirements on them, although it encourages adherence to voluntary codes of conduct.

General-purpose AI models

The EU AI Act also regulates GPAI models, which are AI models that are trained with a large amount of data, display significant generality and are capable of competently performing a wide range of distinct tasks.

GPAI models that do not pose systemic risks will be subject to limited requirements, primarily relating to transparency: providing information and documentation to providers who intend to integrate the GPAI model into their own AI systems, maintaining a policy to comply with EU copyright legislation and publishing a detailed summary of the content used to train the model.

The EU AI Act imposes additional obligations on GPAI models with systemic risk (i.e. models that have high impact capabilities evaluated on the basis of appropriate technical tools and methodologies, or those that have been determined to have such capabilities by the European Commission). Key obligations for these GPAI models include conducting model evaluations, assessing and mitigating systemic risks, tracking and reporting serious incidents to the AI Office (as defined below) and ensuring adequate cybersecurity protection.

Enforcement and penalties

The EU AI Act establishes new offices and bodies to implement and enforce it, such as:

  • an AI office, a new body within the European Commission, that will implement and enforce the EU AI Act (the 'AI Office');
  • a scientific panel of independent experts to support the enforcement activities and issue alerts on systemic risks;
  • an EU AI Board, which will be composed of representatives of EU Member States and responsible for advisory tasks such as issuing opinions and recommendations; and
  • an advisory forum, consisting of stakeholders (from industry, start-ups, SMEs, civil society and academia), to provide technical expertise to the AI Board and the Commission.

Additionally, Member States are required to establish independent and impartial national competent authorities, including notifying authorities and market surveillance authorities, to ensure the application and implementation of the EU AI Act. They must provide these authorities with adequate resources, including personnel with expertise in AI and related fields. The authorities are also responsible for ensuring cybersecurity and confidentiality and must report to the Commission every two years on their resources.

 

The EU AI Act also sets out penalties for non-compliance (see the illustrative sketch after this list for how the fine ceilings operate):

  • for non-compliance with the prohibition on AI systems carrying unacceptable risk, fines of up to 7% of total worldwide annual turnover or EUR 35 million, whichever is higher;
  • for breach of certain provisions in respect of high-risk AI systems, fines of up to 3% of total worldwide annual turnover or EUR 15 million, whichever is higher;
  • for the supply of incorrect, incomplete or misleading information to the relevant authorities, fines of up to 1% of total worldwide annual turnover or EUR 7.5 million, whichever is higher; and
  • for providers of GPAI models that have intentionally or negligently infringed the EU AI Act or failed to comply with requests from regulators, fines of up to 3% of total worldwide annual turnover or EUR 15 million, whichever is higher.
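
The fine ceilings above are each expressed as the higher of a fixed amount and a percentage of total worldwide annual turnover. As a purely illustrative sketch (not legal advice; the tier labels and function name below are our own and are not terms used in the EU AI Act), the calculation can be expressed as follows:

    # Illustrative sketch only: maps the fine tiers summarised above to
    # (fixed amount in EUR, share of total worldwide annual turnover).
    # Tier keys and the function name are our own labels, not terms from the Act.
    FINE_TIERS = {
        "prohibited_practices": (35_000_000, 0.07),   # unacceptable-risk violations
        "high_risk_breaches": (15_000_000, 0.03),     # certain high-risk provisions
        "misleading_information": (7_500_000, 0.01),  # incorrect or incomplete information
        "gpai_provider": (15_000_000, 0.03),          # GPAI provider infringements
    }

    def maximum_fine(tier: str, worldwide_annual_turnover_eur: float) -> float:
        """Return the ceiling for a tier: the higher of the fixed amount
        and the turnover-based percentage."""
        fixed_amount, turnover_share = FINE_TIERS[tier]
        return max(fixed_amount, turnover_share * worldwide_annual_turnover_eur)

    # Example: a provider with EUR 2 billion turnover breaching the prohibition on
    # unacceptable-risk practices faces a ceiling of max(EUR 35m, 7% of EUR 2bn) = EUR 140 million.
    print(maximum_fine("prohibited_practices", 2_000_000_000))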

 

Next steps

The EU AI Act will be signed by the presidents of the European Parliament and the Council and then published in the EU’s Official Journal. It will enter into force 20 days after this publication. Most of its provisions will apply two years after entry into force, subject to the exceptions noted below.

Certain requirements will come into effect sooner, such as the provisions on prohibited practices for “unacceptable risk” AI systems and the GPAI rules, which take effect six and 12 months after the EU AI Act’s entry into force, respectively. Provisions relating to obligations for high-risk systems that are safety components are set to come into effect 36 months after the EU AI Act’s entry into force.
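
As a rough, non-authoritative illustration of this staggered timetable (the publication date below is purely hypothetical, since the actual date of publication in the Official Journal was not yet known at the time of writing), the key application dates can be derived from the entry-into-force date as follows:

    # Illustrative sketch of the staggered application timetable summarised above.
    # The publication date is hypothetical; actual dates depend on publication in
    # the Official Journal, with entry into force following 20 days later.
    from datetime import date, timedelta

    def add_months(d: date, months: int) -> date:
        """Add whole calendar months to a date, keeping the day of the month."""
        month_index = d.month - 1 + months
        return date(d.year + month_index // 12, month_index % 12 + 1, d.day)

    publication = date(2024, 7, 12)                      # hypothetical publication date
    entry_into_force = publication + timedelta(days=20)

    milestones = {
        "prohibited practices (unacceptable risk)": add_months(entry_into_force, 6),
        "GPAI rules": add_months(entry_into_force, 12),
        "most provisions": add_months(entry_into_force, 24),
        "high-risk safety components": add_months(entry_into_force, 36),
    }

    for label, applies_from in milestones.items():
        print(f"{label}: applies from {applies_from.isoformat()}")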

The press release is available here and the EU AI Act here.
