Opinion

Zooming in on AI – #10: EU AI Act – What are the obligations for “high-risk AI systems”?

Companies providing or deploying high-risk artificial intelligence (AI) systems must prepare to navigate a complex landscape of new obligations by August 2, 2026. In this post we explain the key obligations for providers and deployers of high-risk AI systems.

 

Some background

The EU AI Act (“AI Act”)[1] takes a risk-based approach and divides AI systems into four risk groups: (1) prohibited AI practices, (2) high-risk, (3) limited-risk and (4) minimal-risk AI systems. Different legal requirements apply to each category of AI system and to each role in the AI value chain. The specific requirements for high-risk AI systems are set out in Chapter III of the AI Act and depend on the role of the stakeholder. When dealing with high-risk AI systems, providers face a variety of obligations, which are generally more stringent than those imposed on deployers. It is therefore crucial to determine the purpose and objectives of any AI-related project from the outset, so as to clearly establish the roles of the stakeholders and identify the obligations that will apply throughout the AI value chain.

In this post we focus on the key obligations for high-risk AI systems. We set out the differences between providers and deployers of AI systems, and how actors can shift from one qualification to another, in the fourth post of our “Zooming in on AI” series. Read more on when the obligations under the AI Act will apply in our first “Zooming in on AI” post and on the difference between an AI system and an AI model in our second post.

 

What is a “high-risk” AI system?

High-risk AI systems are those that pose a significant risk to health, safety, or fundamental rights and are specifically classified as high-risk in the AI Act. There are two groups of high-risk AI systems:

1. AI systems that are (i) safety components of products covered by sectoral EU product safety law and (ii) required to undergo a third-party conformity assessment. The list of relevant product safety laws contains more than 30 Directives and Regulations, covering among others the safety of toys, vehicles, lifts, civil aviation and medical devices. A safety component is a component of a product whose failure endangers the health and safety of persons or property.[2]

2. AI systems set out in the specific list in Annex III[3] in the following eight areas:
(1) Biometrics
(2) Critical infrastructure
(3) Education and vocational training
(4) Employment, workers management and access to self-employment
(5) Access to and enjoyment of essential private services and essential public services and benefits (including creditworthiness evaluation and credit scoring, and risk assessment and pricing for health and life insurance)
(6) Law enforcement
(7) Migration, asylum and border control management
(8) Administration of justice and democratic processes

There are a number of exemptions to this classification.[4] Additionally, the European Commission is empowered to add or modify use cases of high-risk AI systems[5] and has to publish practical guidance with examples of high-risk AI systems by February 2026.[6]

 

Key obligations for providers of high-risk AI systems

The AI Act introduces a set of obligations for providers of high-risk AI systems, including:


1. Registration Obligations: Providers must register themselves and their system in the EU database before placing the high-risk AI system on the market or putting it into service.[7]

2. Quality Management System[8]: Providers of high-risk AI systems must implement, document, and maintain a quality management system. The objective is to ensure that high-risk AI systems are designed, developed, and deployed in compliance with the AI Act. This includes conformity assessment procedures, technical standards and the establishment of a risk management system.

3. Risk Management System[9]: Providers must operate a risk management system as an iterative process running throughout the entire lifecycle of the AI system. The risk management system must, on a case-by-case basis: (i) identify known and reasonably foreseeable risks when the AI system is used in accordance with its intended purpose and under reasonably foreseeable misuse; (ii) ensure the adoption of appropriate and targeted measures to address those risks; and (iii) consider additional risks based on post-market monitoring data.

4. Report Serious Incidents[10]: Providers must report any serious incident to the market surveillance authorities of the European country where the incident occurred immediately after having established a causal link between the high-risk AI system and the incident (or the reasonable likelihood of such a link). Following the incident, the provider must perform a risk assessment and adopt corrective measures.

5. Data Governance & Quality[11]: A key concern with AI is the amplification of biases. Providers of high-risk AI systems are therefore obliged to take appropriate measures to detect, prevent and mitigate possible biases. The AI Act requires providers of high-risk AI systems to use high-quality data sets for training, validation, and testing, as the output of the AI system depends largely on the quality of the training data. The data must be relevant, sufficiently representative and, to the best extent possible, free of errors and complete. Ensuring high-quality training data will require providers to invest substantial resources in preparing and continually reviewing the data used for AI system training. For AI systems that are not developed on the basis of AI model training, these requirements apply only to the testing data sets.
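
By way of illustration only, basic data-quality checks of this kind could be made repeatable and documentable along the following lines. This is a minimal sketch in Python using pandas; the file name, column names, protected attribute and threshold are hypothetical assumptions, and real representativeness criteria would need to be defined per use case.

    # Illustrative sketch only: basic completeness and representativeness checks
    # a provider might run on a training data set. File name, column names and
    # the 10% threshold are hypothetical assumptions.
    import pandas as pd

    def check_dataset(df: pd.DataFrame, protected_attr: str, min_share: float = 0.10) -> list:
        findings = []
        # Completeness: flag columns containing missing values.
        incomplete = df.columns[df.isna().any()].tolist()
        if incomplete:
            findings.append("Missing values in columns: %s" % incomplete)
        # Errors (crude proxy): flag exact duplicate records.
        duplicates = int(df.duplicated().sum())
        if duplicates:
            findings.append("%d duplicate rows found" % duplicates)
        # Representativeness (crude proxy): each group of the protected attribute
        # should account for at least min_share of the records.
        for group, share in df[protected_attr].value_counts(normalize=True).items():
            if share < min_share:
                findings.append("Group '%s' underrepresented: %.1f%% of records" % (group, share * 100))
        return findings

    for finding in check_dataset(pd.read_csv("training_data.csv"), protected_attr="gender"):
        print("REVIEW:", finding)

Checks like these do not establish compliance by themselves; they merely make the review of the data sets traceable and auditable.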

6. Documentation & Recordkeeping: Providers must draw up technical documentation before a high-risk AI system is placed on the market and implement automatic recording of events (logs) over the system’s lifetime, to ensure a level of traceability of the AI system’s functioning that is appropriate to its intended purpose (an illustrative logging sketch follows below). The AI Act provides a list of the minimum information that the technical documentation must include, such as a description of the system, its elements and the process for its development, as well as a description of the risk management system.

SMEs, including startups, may provide the elements of the technical documentation in a simplified manner. The European Commission will publish a simplified form.
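
To illustrate what automatic event recording might look like in practice, here is a minimal sketch using Python’s standard logging module; the event fields and file name are assumptions made for the example, not the minimum log content prescribed by the AI Act.

    # Illustrative sketch only: structured, timestamped event records to support
    # the traceability of a high-risk AI system. Field names are assumptions.
    import json
    import logging
    from datetime import datetime, timezone

    logger = logging.getLogger("ai_system_events")
    handler = logging.FileHandler("ai_system_events.log")
    handler.setFormatter(logging.Formatter("%(message)s"))
    logger.addHandler(handler)
    logger.setLevel(logging.INFO)

    def log_event(event_type, system_version, input_ref, output_ref):
        """Append one timestamped event record for each use of the system."""
        logger.info(json.dumps({
            "timestamp": datetime.now(timezone.utc).isoformat(),
            "event_type": event_type,          # e.g. "inference", "override", "shutdown"
            "system_version": system_version,  # ties the event to a specific release
            "input_ref": input_ref,            # a reference to the input, not the data itself
            "output_ref": output_ref,          # a reference to the produced output
        }))

    log_event("inference", system_version="1.4.2", input_ref="case-0815", output_ref="score-0815")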

7. Transparency and Human Oversight[12]: ‘Having a human in the loop’ is a core principle of responsible and trustworthy AI. The AI Act provides for extensive transparency obligations for providers to enable effective human control and oversight, including instructions for safe use and information about the level of accuracy, robustness, and cybersecurity of the high-risk AI system. 
The transparency obligations include (where applicable) further information to enable deployers to interpret the AI system’s output. In practice, it will be up to providers to determine how to provide this information in a relevant, accessible and comprehensible manner. Providers can draw on synergies with existing information obligations (e.g. under the GDPR).

In addition, the system must be designed to allow for effective human oversight, e.g. through a “stop” button or a similar procedure to safely shut down the system. Individuals to whom oversight is assigned must be able to: (i) properly understand the relevant capacities and limitations of the system and monitor its operation, including with a view to detecting and addressing anomalies, dysfunctions and unexpected performance; (ii) remain aware of the possible tendency to automatically rely or over-rely on the output produced by the system; (iii) correctly interpret the system’s output; and (iv) decide not to use the system or otherwise disregard, override or reverse the system’s output.
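
Purely as an illustration of the “stop” and override requirements, a deployment wrapper enforcing human oversight could be sketched as follows; the predict callable, the Decision record and the review workflow are hypothetical assumptions, not prescribed by the AI Act.

    # Illustrative sketch only: a wrapper that keeps a human in the loop and
    # provides a "stop" mechanism. The underlying predict callable and the
    # review workflow are hypothetical assumptions.
    from dataclasses import dataclass

    @dataclass
    class Decision:
        output: str
        accepted: bool
        reviewer: str

    class OverseenSystem:
        def __init__(self, predict):
            self._predict = predict   # the underlying AI system (any callable)
            self._stopped = False

        def stop(self):
            """'Stop button': take the system out of operation immediately."""
            self._stopped = True

        def run(self, case, reviewer):
            if self._stopped:
                raise RuntimeError("System has been stopped by a human overseer")
            output = self._predict(case)
            # The overseer may disregard, override or reverse the output.
            accepted = input("Output '%s' - accept? [y/n] " % output).strip().lower() == "y"
            return Decision(output=output, accepted=accepted, reviewer=reviewer)

    system = OverseenSystem(predict=lambda case: "low risk")
    print(system.run({"id": "case-0815"}, reviewer="j.doe"))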

8. Cybersecurity[13]: Providers must design and develop the AI system in such a way that it achieves an appropriate level of accuracy, robustness and cybersecurity. The European Commission will encourage the development of benchmarks and measurement methodologies.

Upon reasoned request of the competent authority, providers must demonstrate compliance with these requirements. It is therefore important to have robust AI governance and adequate documentation in place.

Key obligations for deployers of high-risk AI systems

Deployers must also fulfil various obligations when using high-risk AI systems[14], including:

1. Instructions for Use: Deployers must take appropriate technical and organizational measures to ensure that they use high-risk AI systems in accordance with the instructions for use, and must monitor the operation of the AI system on that basis.[15] EU or national law may impose additional obligations in this respect.

2. Fundamental Rights Impact Assessment (“FRIA”)[16]: Certain deployers of high-risk AI systems must carry out a FRIA before putting the high-risk AI system into use. A FRIA is required for two types of deployers: (i) bodies governed by public law or private entities providing public services; and (ii) deployers of AI systems used to evaluate creditworthiness, establish a credit score, or carry out risk assessment and pricing for life and health insurance.[17] The AI Office will publish a template FRIA questionnaire.

3. Human Oversight[18]: Deployers of high-risk AI systems must assign human oversight to a person with the necessary competence, training, authority and support.

4. Data Quality[19]: To the extent deployers exercise control over the input data, they must ensure that the input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system.

5. Documentation[20]: Deployers must retain the logs automatically generated by the high-risk AI system, to the extent such logs are under their control, for a period of at least six months.
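
As a purely illustrative sketch, a deployer could enforce the six-month minimum with a simple housekeeping job such as the one below; the log directory, one-file-per-period layout and the 183-day cut-off are assumptions, and other laws may require longer retention.

    # Illustrative sketch only: delete AI system log files only once they are
    # older than the minimum retention period. Directory and naming are assumptions.
    import time
    from pathlib import Path

    RETENTION_DAYS = 183  # at least six months; other laws may require longer retention

    def purge_expired_logs(log_dir="ai_system_logs"):
        cutoff = time.time() - RETENTION_DAYS * 24 * 3600
        for path in Path(log_dir).glob("*.log"):
            # Only files last modified before the cut-off are removed.
            if path.stat().st_mtime < cutoff:
                path.unlink()

    purge_expired_logs()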

6. Incident Reporting[21]: If deployers have reason to consider that the use of the AI system in accordance with the instructions presents a risk, or if they have identified a serious incident, they must immediately inform the provider and the relevant market surveillance authority and suspend the use of the system.

7. Risk Management System[22]: Deployers must implement and document a risk management system as a continuous, iterative process running throughout the lifecycle of the AI system, with regular systematic review and updating.

8. Information Obligation for Employers[23]: Where employers use high-risk AI systems in the workplace, they must inform (i) the employees’ representatives, and (ii) the affected employees, that they will be subject to the use of a high-risk AI system.

Key obligations of importers and distributors

As under other product safety laws, importers and distributors face downstream compliance obligations under the AI Act.

Before placing a high-risk AI system on the market, importers must ensure that the system is in conformity with the AI Act. They must verify that the provider has carried out the conformity assessment procedure and drawn up the technical documentation, and that the system bears the required CE marking and is accompanied by the EU declaration of conformity and instructions for use. Additionally, importers must verify that the provider has appointed an authorised representative.

Distributors are required to perform certain verifications before making a high-risk AI system available on the market and must not make the system available if they consider that it is not in conformity with the requirements of the AI Act.

Both importers and distributors are required to notify the provider and the relevant authorities if they detect that a high-risk AI system poses a risk to the health, safety or fundamental rights of individuals.

 

When will obligations come into effect and be enforceable?

The provisions relating to the obligations of high-risk AI system providers will apply from August 2, 2026 (except for high-risk AI systems that have been placed on the market or put into service before August 2, 2026).

 

Conclusion

Providers and deployers of high-risk AI systems are subject to a broad range of obligations under the AI Act. Companies providing or deploying high-risk AI systems should start incorporating the AI Act’s requirements into their AI strategy and governance programs. They should leverage existing compliance frameworks, such as documentation, transparency and risk assessments under the GDPR. Leveraging synergies between frameworks and taking a global, holistic approach to AI governance will be critical to ensuring consistent risk management while maintaining innovation and entrepreneurship.

 

Footnotes

[1] Regulation (EU) 2024/1689. This regulation came into force on August 1, 2024.
[2] Article 6(1), AI Act.
[3] Article 6(2) and Annex III, AI Act.
[4] Article 6(3) and Annex III, AI Act.
[5] Article 6(6), (7) and Annex III, AI Act.
[6] Article 6(5) and Annex III, AI Act.
[7] Article 16(g), AI Act.
[8] Article 17, AI Act.
[9] Article 9, AI Act.
[10] Article 73, AI Act.
[11] Article 10, AI Act.
[12] Articles 13, 14, AI Act.
[13] Article 15, AI Act.
[14] Article 26, AI Act.
[15] Article 26(1), (5), AI Act.
[16] Article 27, AI Act.
[17] Referred to in no. 5(b) and (c) of Annex III, AI Act.
[18] Article 26(2), AI Act.
[19] Article 26(4), AI Act.
[20] Article 26(6), AI Act.
[21] Article 26(5), AI Act.
[22] Article 26, AI Act.
[23] Article 26(7), AI Act.