However, cybersecurity is not only relevant to high-risk AI systems, but to all AI systems that process data, interact with users or influence physical or virtual environments. In this article, we explain what the AI Act requires of high-risk AI systems in terms of cybersecurity, and why cybersecurity should be considered across all AI systems, irrespective of their risk level.
Cybersecurity requirements for high-risk AI systems under the AI Act
According to Article 15 of the AI Act, high-risk AI systems must be designed and developed so that they achieve an appropriate level of accuracy, robustness and cybersecurity, and so that they perform consistently in those respects throughout their lifecycle. This means that high-risk AI systems must be resilient to errors, faults or inconsistencies that may occur within the system or the environment in which it operates, and must be protected against attempts by unauthorised third parties to exploit system vulnerabilities. The AI Act provides some guidance on the technical aspects of how to measure and ensure the appropriate levels of accuracy and robustness, and encourages the development of benchmarks and measurement methodologies in cooperation with relevant stakeholders and organisations, such as metrology and benchmarking authorities. The levels of accuracy, and the relevant accuracy metrics, of a high-risk AI system must be declared in its accompanying instructions for use.
The technical measures aimed at ensuring the cybersecurity of high-risk AI systems should be appropriate to the relevant circumstances and risks. Suitable technical solutions include redundancy measures, such as backup or fail-safe plans, as well as measures to prevent, detect, respond to, resolve and control attacks, including those carried out using:
- data poisoning, where the threat actor manipulates the training data;
- model poisoning, where the threat actor manipulates pre-trained components used in training; or
- model evasion (also known as adversarial examples), where the threat actor manipulates input data to trick the model into producing unintended outputs (a simplified sketch of this technique follows below).
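To make the last of these concrete, the sketch below illustrates, in highly simplified form, how a model-evasion attack manipulates an input using the Fast Gradient Sign Method (FGSM). The toy model, data shapes and epsilon value are illustrative assumptions only, not drawn from the AI Act or any technical standard.

```python
# A minimal, hypothetical sketch of model evasion via FGSM: the attacker
# nudges a benign input in the direction that increases the model's loss,
# keeping the perturbation small enough to go unnoticed.
import torch
import torch.nn as nn

def fgsm_attack(model: nn.Module, x: torch.Tensor, y: torch.Tensor,
                epsilon: float = 0.03) -> torch.Tensor:
    """Return a copy of x perturbed to encourage misclassification."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(x_adv), y)
    loss.backward()
    # Step in the sign of the gradient, bounded by epsilon per element.
    return (x_adv + epsilon * x_adv.grad.sign()).clamp(0.0, 1.0).detach()

# Toy classifier standing in for any deployed model (an assumption, not a real system).
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))
x = torch.rand(1, 1, 28, 28)      # a benign input
y = torch.tensor([3])             # its true label
x_adv = fgsm_attack(model, x, y)  # the manipulated input
print((x_adv - x).abs().max())    # perturbation never exceeds epsilon
```

Defences of the kind contemplated by Article 15 (for example, adversarial training or input sanitisation) aim to detect or blunt exactly this sort of manipulation.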
Risk assessments for high-risk AI systems
The AI Act requires providers of high-risk AI systems to conduct a risk assessment before placing the system on the market or putting it into service, and to document the results of that assessment in the technical documentation. The risk assessment must identify and analyse the potential risks posed by the AI system to health, safety and fundamental rights, and the measures taken to prevent or mitigate those risks. It must also consider the cybersecurity risks associated with the AI system and the measures taken to ensure its resilience against malicious attacks. The risk assessment must be updated regularly throughout the lifecycle of the AI system, and the technical documentation must be made available to the competent authorities upon request. Providers must ensure that their quality control and assurance processes for high-risk AI systems create and record this documentation appropriately, and on the assumption that it will be disclosed in any subsequent enforcement action.
The Cyber Resilience Act (the CRA)
The cybersecurity of AI systems is also affected by the CRA, which imposes a number of cybersecurity requirements on “products with digital elements” (i.e. connected products such as Wi-Fi routers and IoT devices, as well as certain forms of software). These requirements include protecting against unauthorised access through tools such as authentication and identity management, and minimising data collection to what is adequate and relevant for the device or system’s intended use. The CRA also contains specific provisions relating to high-risk AI systems, as defined under the AI Act: connected devices containing AI systems that are (i) in scope of the CRA and (ii) fulfil the CRA’s security-by-design requirements will be deemed to comply with the cybersecurity requirements of the AI Act.
Why cybersecurity should be considered across all AI systems
The AI Act sets out specific cybersecurity requirements for high-risk AI systems in the EU, but, importantly, it will also provide influential guidance on the standard for reasonable cybersecurity in higher-risk AI solutions elsewhere. As with all digital solutions, this does not mean that other AI systems are exempt from cybersecurity considerations, in the EU or elsewhere. On the contrary, all AI systems that process data, interact with users or influence physical or virtual environments are potentially exposed to cybersecurity threats, and should be designed and developed with security in mind.
Cybersecurity is not only a matter of compliance, but also of trust, reputation and competitiveness. Cyberattacks against AI systems can have serious consequences, such as compromising the confidentiality, integrity or availability of data, causing harm or damage to users or third parties, undermining the performance or reliability of the AI system, or violating fundamental rights or ethical principles. They can also erode the trust and confidence of users, customers and stakeholders in the AI system and its provider, and damage its reputation and market position. Moreover, cybersecurity is not a static concept but a dynamic and evolving one, requiring constant monitoring, updating and improvement in response to the changing threat landscape and the advancement of technology.
Best practices for AI system providers
So, what should AI system providers do? First, they should adopt a risk-based security-by-design and security-by-default approach. This means running early risk assessments to identify potential security risks, integrating security into the AI system's design and development process, and ensuring that the default settings provide the level of security appropriate to the risks posed. Further, because the safety and security of AI products is largely determined in the design and development phase, deploying appropriate quality control and assurance processes, and creating and retaining documentation, is essential for providers seeking to demonstrate adequate cybersecurity risk management in light of the risks posed by the AI product.
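By way of illustration only, the sketch below shows one way security-by-default might look in code for a hypothetical AI inference service: protective settings ship in their most restrictive state, and any relaxation must be an explicit, auditable decision. All names and values are assumptions for illustration, not requirements taken from the AI Act or the CRA.

```python
# Hypothetical secure-by-default configuration for an AI inference service.
from dataclasses import dataclass

@dataclass(frozen=True)
class InferenceServiceConfig:
    require_authentication: bool = True    # no anonymous access by default
    tls_only: bool = True                  # reject plaintext connections
    max_input_bytes: int = 1_048_576       # bound input size to limit abuse
    rate_limit_per_minute: int = 60        # throttle automated probing
    log_security_events: bool = True       # retain evidence for audits
    allowed_origins: tuple[str, ...] = ()  # deny cross-origin calls unless listed

# A deployer must opt *out* of a protection deliberately, leaving a record,
# rather than opting in after an incident.
config = InferenceServiceConfig()
print(config.require_authentication)  # True: secure unless explicitly relaxed
```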
Providers should also conduct regular risk assessments, implement appropriate technical and organisational measures, and follow best practices and standards to ensure their AI systems' cybersecurity. They must comply with existing laws and regulations relating to cybersecurity, such as the General Data Protection Regulation (GDPR), which protects personal data against unauthorised or unlawful processing, and the Digital Operational Resilience Act (DORA), which applies to the financial sector and sets requirements for ICT risk management, security, operational resilience and third-party risk management.
Additionally, providers should cooperate with relevant authorities and stakeholders, such as the European Union Agency for Cybersecurity (ENISA), which offers guidance and support on cybersecurity policy and AI system-related issues.
Conclusion
Cybersecurity is a key requirement for high-risk AI systems under the AI Act, but it is also a relevant and important consideration for all AI systems that process data, interact with users or influence physical or virtual environments. Cybersecurity is not only a matter of compliance, but also of trust, reputation and competitiveness. Providers of AI systems should adopt a security-by-design and security-by-default approach, conduct regular risk assessments, implement appropriate technical and organisational measures, and follow best practices and standards to ensure the cybersecurity of their AI systems. They should also comply with the existing laws and regulations that apply to cybersecurity, and cooperate with the relevant authorities and stakeholders. By doing so, providers of AI systems can contribute to the development of trustworthy, safe and rights-respecting AI in the EU and beyond.