Author: Sophia Antonopoulou – Associate
The 2nd of August 2025 marks a key milestone for the AI Act (Regulation (EU) 2024/1689), as the obligations for providers of general-purpose AI (GPAI) models enter into application. In preparation, the European Commission published three key tools in July 2025: a voluntary Code of Practice, official guidelines clarifying who must comply and how, and an explanatory notice and template for the public summary of training content for GPAI models. These developments are particularly important because GPAI models play a significant role in the market, can be used for a variety of tasks and can be integrated into a wide array of downstream AI systems.
The AI Act defines a GPAI model as “an AI model, including where such an AI model is trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks regardless of the way the model is placed on the market and that can be integrated into a variety of downstream systems or applications, except AI models that are used for research, development or prototyping activities before they are placed on the market”. Large generative AI models are a typical example of a GPAI model, given that they allow for flexible generation of content, such as text, audio, images or video, which can readily accommodate a wide range of distinct tasks.
The AI Act lays down specific obligations for providers of GPAI models throughout the model’s entire lifecycle. These obligations include:
- transparency measures
- obligations to draw up and keep up-to-date documentation
- obligations to put in place a policy to comply with Union law on copyright and related rights
- obligations to draw up and make publicly available a sufficiently detailed summary about the content used for training of the GPAI model
It is worth noting that providers of GPAI models released under a free and open-source licence are exempted from these transparency-related requirements under certain conditions, unless the models qualify as GPAI models with systemic risk.
Providers of GPAI models with systemic risk are subject, in addition to the obligations outlined above, to obligations aimed at assessing and mitigating those risks throughout the model’s entire lifecycle; such models must also be notified to the AI Office. These obligations include conducting model evaluations, reporting serious incidents and ensuring an adequate level of cybersecurity protection. The AI Act defines a “systemic risk” as “a risk that is specific to the high-impact capabilities of GPAI models, having a significant impact on the Union market due to their reach, or due to actual or reasonably foreseeable negative effects on public health, safety, public security, fundamental rights, or the society as a whole, that can be propagated at scale across the value chain”. Which models are considered GPAI models with systemic risk may change over time, reflecting the evolving state of the art and potential societal adaptation to increasingly advanced models.
A useful tool for providers of GPAI models to demonstrate compliance with the AI Act is the Code of Practice, published on the 10th of July 2025. The GPAI Code of Practice is a voluntary instrument, prepared by independent experts in a multi-stakeholder process, designed to help industry comply with the AI Act’s legal obligations on the safety, transparency and copyright of GPAI models.
Therefore, providers and developers of GPAI models are now required to:
- Review their obligations and assess model risks
- Ensure transparency about how those models work and what data trained them
- Decide whether to adopt the Code of Practice to minimise compliance risk
Taking these steps before the 2nd of August 2025 builds trust, ensures compliance and promotes secure and responsible AI use.