Opinion

European Parliament committees adopt their vision on the AI Act proposal

Published: 17 May 2023

On 11 May 2023, the European Parliament’s committees for Civil Liberties, Justice and Home Affairs (LIBE) and for Internal Market and Consumer Protection (IMCO) adopted a report setting out the Parliament’s vision for the proposed EU Regulation on artificial intelligence (the AI Act).

The proposed AI Act has already been recognised as the first of its kind and as a landmark piece of legislation that would tackle the risks arising from the development and use of artificial intelligence (AI) systems. However, it has also been criticised as potentially stifling innovation in the EU and putting the EU technologically behind other world economies. The European Parliament aims to address these concerns and ensure that AI is trustworthy, human-centric and respects fundamental rights and values.

Allen & Overy's summary of the European Commission's original proposal is available here.

Key definitions

The European Parliament proposes stepping away from defining AI as “software”, as in the original text of the AI Act. It instead proposes adopting a universal, technology-neutral and future-proof definition. It defines an “AI system” as “a machine-based system that is designed to operate with varying levels of autonomy and that can, for explicit or implicit objectives, generate outputs such as predictions, recommendations, or decisions that influence physical or virtual environments”. This definition better aligns with the OECD approach, but deviates from the definition proposed by the Council of the European Union. 

The European Parliament also proposes replacing the term “user” (which, counter-intuitively, means an entity or person under whose authority the AI system is operated) with “deployer”, which should create clarity in interpreting the provisions of the AI Act.

Article 3 of the AI Act includes other new definitions, for instance:

  • “general purpose AI system” means an AI system that can be used in or adapted to a wide range of applications for which it was not intentionally and specifically designed;
  • “foundation model” means an AI model trained on broad data at scale, designed for its generality of output and that can be adapted to a wide range of distinctive tasks;
  • “deep fake” means “manipulated or synthetic audio, image or video content that would falsely appear to be authentic or truthful, and which features depictions of persons appearing to say or do things they did not say or do, produced using AI techniques, including machine learning and deep learning”; and
  • many other terms, including “biometric-based data”, “biometric identification” and “biometric verification”, “significant risk”, “social scoring”, “social behaviour”, “state of the art” and “testing in real world conditions”.

General principles

Under new Article 4a, operators (including developers and deployers) in scope of the AI Act are required to use best efforts to develop and use AI systems or foundation models in accordance with a number of general principles to promote ethical and trustworthy AI, such as:

  • respect for human dignity and autonomy, and functioning subject to human oversight;
  • technical robustness and safety to minimise unintended and unexpected harms or unlawful use by malicious third parties;
  • privacy and data governance;
  • transparency, including appropriate traceability and explainability, as well as informing users about their rights and about the capabilities and limitations of an AI system;
  • diversity, non-discrimination and fairness;
  • social and environmental sustainability and benefit to all human beings, while monitoring the long-term impacts on society.

Article 4a explains that these general principles are in turn translated into specific requirements set out in the Act as applicable to AI systems and foundation models.

Risk-based approach to regulating AI

The report maintains the European Commission’s risk-based approach, under which AI systems are classified according to the level of risk they can generate, but it significantly expands the list of prohibited AI practices and makes substantial changes to the categorisation of high-risk AI. New prohibitions concern, among others, the placing on the market, putting into service or use of an AI system for:

  • crime prediction based on the profiling of individuals, their location or past criminal behaviour;
  • ‘real-time’ remote biometric identification systems in publicly accessible spaces and ‘post’ remote biometric identification systems (with narrow exceptions for law enforcement authorities for prosecution of serious crimes);
  • creation of facial recognition technology databases through the scraping of facial images from the internet or CCTV footage; or
  • inferring the emotions of individuals in certain areas, such as the workplace or education institutions, or in the context of law enforcement or border management.

In relation to high-risk AI, Annex III, which lists high-risk AI systems, has been fine-tuned and expanded. For instance, biometric-based systems, including systems for biometric identification or for inferences about the personal characteristics of individuals based on biometric and biometric-based data (eg emotion recognition systems), are designated as high-risk, with some narrow exceptions. One exception from the high-risk list exists for AI systems used for detecting financial fraud. Annex III has also been expanded to include new instances of high-risk AI, for instance, AI systems used for public elections or AI-based content recommender systems of very large online platforms (VLOPs) under the Digital Services Act.

In addition, an AI system will no longer automatically be high risk if listed in Annex III; the AI system will also have to pose a significant risk to people’s safety, health or fundamental rights or in some cases a significant risk of harm to the environment. 

The new Article 29a provides for a fundamental rights impact assessment prior to putting a high-risk AI system into use for the first time. It includes requirements for what the impact assessment must cover and, other than where the deployer is an SME, requires the deployer to notify national supervisory authorities and “relevant stakeholders” (for example, equality bodies, consumer protection agencies, social partners and data protection “agencies”) and, to the extent possible, obtain their input (a six-week period for response is to be allowed). Certain entities deploying AI will be required to publish a summary of the results of this impact assessment. The Article also provides that, if a data protection impact assessment is required under the GDPR, it should be conducted in parallel with the fundamental rights impact assessment and attached as an addendum.

AI foundation models

The European Parliament included new obligations on the providers of AI foundation models (in essence, pre-trained AI models that can be used as a basis for developing other AI systems). The new Article 28b requires the providers of foundation models to register these models in an EU database; to ensure that the models comply with comprehensive requirements for their design and development; to produce, and keep for 10 years, certain documentation; to draw up extensive technical documentation and intelligible instructions for downstream providers; and to provide information on the characteristics, limitations, assumptions and risks of the model or its use.

AI literacy

A new Article 4d will require the providers and deployers of AI systems to ensure a sufficient level of AI literacy among their staff and other persons dealing with AI systems on their behalf, taking into account their technical knowledge, experience, education and training and the context in which the AI systems are to be used.

The proposal explains that ‘AI literacy’ refers to the skills, knowledge and understanding that allow various stakeholders to make an informed deployment of AI systems and to gain awareness of the opportunities and risks of AI and its possible harms. Such literacy measures could include learning the notions and skills required to ensure compliance with the AI Act.

EU AI Office

The report amends the provisions on the proposed European Artificial Intelligence Board, which was modelled on the example of the European Data Protection Board under the GDPR, to introduce an EU AI Office instead. The EU AI Office would still have a wide range of powers to monitor the implementation of the AI Act, provide guidance and coordinate on cross-border issues.

Next steps

The European Parliament is expected to vote on the report in a plenary session (currently scheduled for 12-16 June 2023), after which trilogue negotiations between the European Parliament, the Council of the EU and the European Commission can commence. The Council of the EU adopted its position in December 2022. These negotiations usually take several months.

The press release is available here and the preliminary consolidated version of the adopted report here.

Content Disclaimer

This content was originally published by Allen & Overy before the A&O Shearman merger.
