The open-source model accelerates innovation by enabling developers to build on existing work and learn from each other through contributions to open-source projects. This approach enhances the productivity and competitiveness of organisations by reducing development time and costs.1
It is therefore no surprise to see that open-source AI has become a priority topic in the AI ecosystem, as recently reflected in the “Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet”, resulting from discussions that took place during the AI Action Summit in Paris.2
In this edition of our “Zooming in on AI” series, we delve into the concept of open-source AI. We explore its definition, the regulatory framework established by the EU AI Act (AI Act), and the legal considerations for organisations that develop or use open-source AI technologies.
1. Open-source in the context of AI
In the world of software, open source essentially means making source code publicly available and allowing third parties to build upon it.
However, unlike traditional software, which runs entirely on its source code, an AI model requires three key components to be fully functional: the code, the parameters of the model, and the training data. When it comes to AI, this added complexity therefore changes the paradigm of how openness applies and what its implications are.
Openness in AI vs. Open-source AI
While the terms “openness in AI [development]” and “open-source AI” are often used interchangeably, they are different concepts.
Per Recital 102 of the AI Act, openness refers to the broader idea of making AI systems transparent and accessible, which can include open-source AI, but also encompasses other practices such as providing APIs or public libraries for model access. Open-source AI, on the other hand, specifically involves releasing AI systems under open-source licenses that allow for unrestricted (or lightly restricted) use, modification, and/or distribution.
Recital 103 of the AI Act further explains that free and open-source AI components “cover the software and data, including models and general-purpose AI models, tools, services or processes of an AI system”.
So, what qualifies as open-source?
In essence, open-source in AI does not only refer to the publication of the source code related to the use or development of a model, but rather to the publication of the model itself and the parameters that constitute it.3
This is what the Open Source Initiative (OSI) sought to capture in its proposed definition of “open-source AI”4, which states that “An Open-Source AI is an AI system that grants the freedoms to:
- Use the system for any purpose and without having to ask for permission.
- Study how the system works and inspect its components.
- Modify the system for any purpose, including to change its output.
- Share the system for others to use with or without modifications, for any purpose."
According to the OSI, to qualify as open-source, the license must cover all necessary components of the AI system, such as data, code, and model parameters (e.g. weights).
Achieving a standardized and harmonized definition across jurisdictions would be highly beneficial, given that open-source AI can be exempt from burdensome regulatory requirements under the AI Act.
2. Open-source in the context of the AI Act
The European Union's AI Act, which came into force on August 1, 2024, establishes a comprehensive regulatory framework for AI systems.5
Recognising the potential of open-source AI for research and innovation, the AI Act provides certain exemptions, which vary depending on whether the open-source AI is an AI system or an AI model.6
On one hand, according to Article 2(12) of the AI Act, AI systems released under free and open-source licenses are exempt from the AI Act's requirements unless they are placed on the market or put into service as:
- high-risk AI systems; or
- an AI system that falls under prohibited AI practices7; or
- an AI system which falls under one of the following AI practices8:
- is intended to interact directly with natural persons; or
- generates synthetic audio, image, video or text content;
- includes an emotion recognition system or a biometric categorisation system;
- generates or manipulates image, audio or video content constituting a deep fake;
- generates or manipulates text which is published with the purpose of informing the public on matters of public interest.
Given this long list of exclusions, the exemption for open-source AI systems may have little practical effect, and organisations should be mindful of this when running their assessments.
On the other hand, according to Recital 104 of the AI Act, general-purpose AI (GPAI) models released under a free and open-source licence should benefit from exceptions, provided that their parameters, including the weights, the information on the model architecture, and the information on model usage, are made publicly available, and unless they can be considered as presenting a systemic risk.
However, the scope of this exemption appears in any case limited to a narrow subset of the AI Act's obligations, as it:
- only applies to the transparency-related requirements imposed on GPAI models; and
- does not extend to (a) the obligation to produce a “summary about the content used for model training”; and (b) the obligation to put in place a policy to comply with European Union copyright law.
Therefore, the open-source exemptions in the AI Act appear limited. Even if an AI system or a GPAI model is licensed under a free and open-source license, it may still need to comply with the AI Act and require a careful assessment.
3. Legal considerations in the development of open-source AI
For organisations developing or deploying AI, understanding the legal implications of open-source AI is essential. Here are some key considerations:
- Compliance with regulatory framework
- AI Act. Organisations must ensure that their open-source AI models comply with the AI Act's requirements, particularly if they fall into the high-risk category. This compliance involves conducting thorough risk assessments, implementing necessary safeguards, and maintaining detailed technical documentation.
- Data protection laws (such as the General Data Protection Regulation (GDPR)). Organisations must implement robust data protection measures to safeguard personal data used in training and deploying open-source AI models. This includes adhering to data protection regulations and implementing technical and organisational measures to prevent data breaches.
- Vigilance on licensing and intellectual property. Clear and enforceable licensing agreements are critical for protecting the rights of developers and users of open-source AI. Organisations should carefully review and select appropriate open-source licenses for their AI models, ensuring that they align with their purposes and legal obligations.
Conclusion
Open-source AI presents a unique opportunity for innovation and collaboration in the tech industry. By understanding the legal landscape and addressing the associated risks, organisations can leverage the benefits of open-source AI while ensuring compliance with regulatory requirements. Organisations should stay informed and proactive in navigating this ever-evolving landscape.
- European Commission – The impact of open source software and hardware on technological independence, competitiveness and innovation in the EU economy, September 2021 (here)
- Statement on Inclusive and Sustainable Artificial Intelligence for People and the Planet, 11 February 2025 “(…) we have affirmed the following main priorities: Ensuring AI is open, inclusive, transparent, ethical, safe, secure and trustworthy, taking into account international frameworks for all” (here)
- Commission Nationale de l’Informatique et des Libertés (CNIL) – In depth analysis – Open source practices in artificial intelligence (here)
- Open Source Initiative – The Open Source AI Definition (here)
- See our first post Zooming in on AI - #1: When will the AI Act apply? for further details on the entry into force of the AI Act
- See our second post Zooming in on AI - #2: AI system vs AI models for further description of the AI Act key definitions of AI system and AI models
- AI Act, Article 5 – Prohibited AI practices
- AI Act, Article 50 – Transparency obligations for providers and deployers of certain AI systems