1. Some background
The EU AI Act (AI Act) specifies various obligations for deployers of high-risk AI systems. While often argued to be less stringent than the obligations of providers, they must not be overlooked. In this post we focus on the obligation under Article 27 of the AI Act to conduct a fundamental rights impact assessment (FRIA) prior to the deployment of high-risk AI systems.1 Not included in the Commission's initial draft,2 the FRIA was introduced by the European Parliament in June 2023.3
In the tenth post of our “Zooming in on AI” series, we dove into the different obligations related to high-risk AI systems. To read more on the distinctions between deployers and providers of AI systems and the possibility for companies to shift from one qualification to another, please refer to the fourth post of our “Zooming in on AI” series.
2. Who must conduct a FRIA?
A FRIA must be conducted by specific deployers of high-risk AI systems - further elaborated on under the tenth post of our “Zooming in on AI” series - before putting the system into use. The AI Act defines two categories of deployers subject to the FRIA requirement:
- The first category includes bodies governed by public law, as well as private entities providing public services. According to Recital 96 of the AI Act, such public services are linked to tasks in the public interest, for instance in the areas of education, healthcare, social services, housing, and the administration of justice. These two types of entities must conduct a FRIA when deploying high-risk AI systems listed under Annex III, with the exception of AI systems intended to be used as safety components in the management of critical digital infrastructure, road traffic, or in the supply of water, gas, heating or electricity.4
- The second category consists of deployers using high-risk AI systems to evaluate the creditworthiness of natural persons or establish their credit score (with the exception of AI systems used for the purpose of detecting financial fraud), as well as deployers using high-risk AI systems for risk assessment and pricing in relation to natural persons in the case of life and health insurance.5
3. What is a FRIA?
The FRIA aims to mitigate the harms that high-risk AI systems may cause to individuals’ fundamental rights. Article 27 is one of the few provisions in the AI Act that diverge from the predominantly technical compliance requirements, requiring deployers to reflect on why, where, and how the high-risk AI system will be deployed.6
The deployers concerned must conduct a FRIA before the first use of any high-risk AI system and update it if the underlying fact pattern changes.7 In addition, they must notify the competent market surveillance authority - designated in each EU Member State - of the assessment results, unless exempted from this requirement by the authority in specific circumstances (e.g. for exceptional reasons of public security).8
Article 27(1) specifies the criteria to be assessed when conducting a FRIA. These criteria can be grouped into three sections:
- A descriptive section: The deployer must describe the system’s intended purposes and the processes it will be used in, the period of time and frequency with which it will be used, and the individuals and groups it may affect.9
- An assessment section: The deployer must assess the specific risks of harm likely to impact the individuals and groups identified. This assessment must also take into account the instructions for use provided by the provider of the high-risk AI system.10
- A mitigation section: This section focuses on risk mitigation, including the implementation of human oversight measures, as well as measures to be taken if the risks materialize, such as internal governance arrangements and complaint mechanisms.11
Additionally, it is recommended that stakeholders be consulted during the process, including representatives of groups likely to be affected by the AI system, independent experts, and possibly civil society organizations.12 When first introduced by the European Parliament in June 2023, the FRIA required even further information, such as a detailed plan of the mitigation measures or the specific risks of harm likely to impact marginalized persons or vulnerable groups.13
The AI Office will publish a template to facilitate the conduct of FRIAs.14 However, as of the date of this blog post, it has not yet been published.
4. Can a DPIA be leveraged for a FRIA?
Article 27(4) of the AI Act allows deployers who have already performed a data protection impact assessment (DPIA) under the GDPR in connection with the high-risk AI system to leverage that DPIA for the FRIA as well. In practice, this means that the DPIA and the FRIA will often be conducted concurrently and may even be consolidated into a single integrated report.15 Article 26(9) of the AI Act further emphasizes the connection between DPIAs and FRIAs by requiring deployers to use the instructions provided by the AI system provider to comply with their DPIA obligations under the GDPR.
The scope of a DPIA differs theoretically from that of a FRIA. A DPIA focuses on the processing of personal data and the risks to the rights and freedoms of data subjects, while a FRIA considers the risks to the fundamental rights of all individuals affected by the high-risk AI system, not limited to personal data. However, in practice, the scopes largely overlap because high-risk AI systems that affect individuals typically process personal data.
Deployers of high-risk AI systems that require a FRIA are also likely to require a DPIA under the GDPR. According to the Article 29 Data Protection Working Party’s Guidelines on Data Protection Impact Assessment (DPIA) and determining whether processing is “likely to result in a high risk” for the purposes of Regulation 2016/679, several criteria indicate when a DPIA is required. These include the use of evaluation or scoring techniques, the handling of sensitive or highly personal data such as health or financial information, the matching or combining of data sets, and the use of new technology. In most instances, if a data controller identifies that the processing meets at least two of these criteria, a DPIA should be conducted. While AI technology may eventually cease to be considered “new”, the other criteria are likely to remain relevant for high-risk AI systems that require a FRIA. Credit scoring is a case in point: it involves both sensitive personal data and evaluation techniques.
Methodologically, both DPIAs and FRIAs are similar, focusing on assessing risks to individuals and determining mitigation measures. However, there are notable differences in their consequences:
- Failure to carry out a required DPIA can result in significant fines under the GDPR, while the AI Act does not specify sanctions for failing to conduct a FRIA.
- Additionally, if a DPIA identifies high residual risks even after mitigation measures have been applied, the data controller must seek prior consultation with the supervisory authority before proceeding with the data processing. The FRIA, on the other hand, is primarily a documentation requirement: it cannot prevent the deployer from using a high-risk AI system, regardless of the risks identified.
The EDPB recently announced, in its letter to the EU AI Office on the role of DPAs in the AI Act framework, that it has started working on guidelines on the interplay between the GDPR and the AI Act, and invited the AI Office to join the discussions. These guidelines may clarify the connection between the DPIA and the FRIA.
Footnotes
1. Article 27, AI Act.
2. Available at: <https://eur-lex.europa.eu/legal-content/EN/TXT/?uri=CELEX:52021PC0206>.
3. See Amendment 413, Proposal for a regulation, Article 29a.
4. No. 2 Annex III, AI Act.
5. No. 5(b) and (c), Annex III, AI Act.
6. Recital 96, AI Act.
7. Article 27(2), AI Act.
8. Article 27(3), AI Act.
9. Article 27(1)(a), (b), and (c), AI Act.
10. Article 27(1)(d), AI Act.
11. Article 27(1)(e) and (f), AI Act.
12. Recital 96, AI Act.
13. See Amendment 413, Proposal for a regulation, Article 29a(1)(f) and (h).
14. Article 27(5), AI Act.
15. This approach may have downsides: for instance, where the competent supervisory authority is not a data protection authority, providing the DPIA in addition to the FRIA may generate additional questions from the authority.