With over 5,300 exhibitors from almost 70 countries and 83,000 visitors in November 2023, MEDICA in Düsseldorf is one of the largest medical technology B2B trade fairs in the world. Allen & Overy was again present with its own booth. The focus this year was on the digital transformation of the healthcare system in the context of the growing “outpatientisation” of treatment, and on solutions based on Artificial Intelligence (AI).
On the legal side, the European Union seeks to address these developments by creating legal certainty for providers and users of AI, by mitigating the risks which might result from the use of AI and by adapting non-contractual civil liability rules to AI. Against this background, the European Commission issued a proposal for a regulation laying down harmonised rules on AI in 2021 (the draft AI Act) and a proposal for a directive on adapting non-contractual civil liability rules to artificial intelligence in 2022 (the AI Liability Directive).
What kinds of AI systems are covered by the draft AI Act?
Taking into account the rapid technological developments related to AI, the definition of AI in the AI Act aims to be as technology-neutral and future-proof as possible. According to the definition (in the latest proposal as amended by the European Parliament in June 2023), an AI system means software that can, for a given set of human-defined objectives, generate outputs such as content, predictions, recommendations or decisions influencing the environment it interacts with (Article 3(1) AI Act). The AI Act takes a risk-based approach and classifies AI systems into four categories: (1) certain unacceptable and particularly harmful AI practices are prohibited; (2) high-risk AI systems that pose significant risks to the health and safety or fundamental rights of persons have to comply with a set of mandatory requirements for trustworthy AI and follow conformity assessment procedures; (3) low-risk AI systems are subject to certain proposed transparency requirements; and (4) there are no particular obligations for AI systems posing minimal or no risk.
Who is subject to the provisions of the AI Act?
The AI Act applies to providers and users within the EU, as well as to providers located outside the EU if their AI systems are used within the EU. Providers are persons or entities that develop AI systems with a view to placing them on the market or putting them into service under their own name or trademark, irrespective of whether this is done for payment or free of charge. Users include any person or entity using an AI system, except where the AI system is used in the course of a personal, non-professional activity.
Further, the AI Act establishes obligations for certain importers and distributors of AI systems.
Does AI used in medical devices qualify as high-risk AI?
AI systems are considered high risk if:
(i) they are intended to be used as a safety component of a product, or are themselves a product, covered by the Medical Device Regulation (EU) 2017/745 (MDR) or by the In Vitro Diagnostic Regulation (EU) 2017/746 (IVDR); and
(ii) the relevant product is required to undergo a third-party conformity assessment pursuant to the MDR or the IVDR. This also applies if the AI system is placed on the market or put into service independently of the relevant product.
As the vast majority of AI systems used in medical devices fulfil these conditions, most of them would be categorised as high-risk under the AI Act.
What kinds of obligations related to high-risk AI systems are imposed by the draft AI Act?
The AI Act places various obligations on providers, importers, distributors and users of high-risk AI systems. Below is a summary of the most important obligations:
- Providers have to establish, implement, document and maintain risk management and quality management systems. If the AI system involves the training of models with data, providers must ensure that training, validation and testing data meet certain quality criteria, including a prior assessment of the availability, quantity and suitability of the data sets and an examination in view of possible biases. Furthermore, training, validation and testing data must be relevant, representative, free of errors and complete. Technical documentation needs to be drawn up and kept up to date. The AI system needs to be designed and developed in a way that allows the automatic recording of events (‘logs’) while the AI system is operating, as well as effective oversight by natural persons. Providers of high-risk AI systems must conduct a conformity assessment procedure involving a notified body to prove compliance with the AI Act.
- Importers of AI systems in the European Union need to ensure that the appropriate conformity assessment procedure has been carried out by the provider of the AI system, that the provider has drawn up the required technical documentation and that the AI system bears the required CE conformity marking and is accompanied by the required documentation and instructions for use.
- Distributors of AI systems in the European Union have to verify that the AI system bears the required CE conformity marking, that it is accompanied by the required documentation and instructions for use and that the provider and the importer of the AI system have complied with their respective obligations.
- Users of high-risk AI systems shall use such systems in accordance with the instructions for use and shall organise their own resources and activities to implement the human oversight measures indicated by the provider. Users shall furthermore ensure that input data over which they exercise control is relevant in view of the intended purpose of the system and shall keep the logs automatically generated by the system to the extent those logs are under their control.
What are the potential fines in case of a breach of the obligations set forth in the AI Act?
The draft AI Act provides for fines of up to EUR 40 million or 7% of a company’s total worldwide annual turnover, whichever is higher, if prohibited AI practices are used or if high-risk AI systems do not comply with the requirements of the AI Act.
How is liability related to AI systems impacted by the AI Liability Directive?
The specific characteristics of AI systems, including complexity, autonomy and opacity (the so-called “black box” effect), can make it difficult for victims to prove fault and causality, and there may be uncertainty as to how courts will interpret and apply existing national liability rules in cases involving AI. Against this backdrop, the AI Liability Directive eases the burden of proof through disclosure obligations and rebuttable presumptions. It applies to non-contractual civil law claims for damages caused by an AI system, where such claims are brought under fault-based liability regimes.
The AI Liability Directive provides that EU Member States shall ensure that national courts are empowered to order the disclosure of relevant evidence about specific high-risk AI systems that are suspected of having caused damage, provided that the claimant has unsuccessfully undertaken all proportionate attempts to gather the evidence from the defendant. The AI Liability Directive also introduces a rebuttable presumption of non-compliance with a duty of care if the defendant does not comply with an order to disclose or preserve evidence.
Since it can be challenging for claimants to establish a causal link between (i) non-compliance with the AI Act or with other national laws applicable to AI systems and (ii) the output produced by the AI system, or the failure of the AI system to produce an output, that gave rise to the relevant damage, the AI Liability Directive establishes a rebuttable presumption of causality regarding this link. However, for high-risk AI systems, the AI Liability Directive provides an exception from the presumption of causality where the defendant demonstrates that the claimant has reasonable access to sufficient evidence and expertise to prove the causal link. This exception is intended to incentivise defendants to comply with their documentation, recording and disclosure obligations under the AI Act and to ensure a high level of transparency of AI systems.
Outlook
The Council of the European Union adopted its common position on the AI Act in December 2022. The European Parliament adopted its negotiating position in June 2023, with substantial amendments to the European Commission’s text. EU lawmakers have started negotiations to finalise the new legislation, and trilogue meetings took place in June, July, September and October 2023. While the AI Act was expected to complete the European legislative procedure by the end of 2023 and to come into force in 2024, this process might be derailed by disagreement among key EU Member States: Germany, France and Italy are pushing for a code of conduct without an initial sanction regime. In any case, there will be a two-year transition period before the AI Act becomes applicable.