General-Purpose AI (GPAI) Models
Plain-Language Explanation
Articles 51-56 regulate general-purpose AI models—models trained on broad data that can perform many different tasks. All GPAI providers must maintain technical documentation, publish training data summaries, and implement copyright compliance. Models trained with more than 10^25 FLOPs are classified as having systemic risk and face additional requirements: adversarial testing, incident reporting, and enhanced cybersecurity.
Relevant Articles
Classification of GPAI Models with Systemic Risk
A GPAI model is classified as having systemic risk if it was trained using more than 10^25 FLOPs of cumulative compute, or if the Commission designates it as having high-impact capabilities based on the criteria in Annex XIII. This classification triggers the additional obligations in Article 55.
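The 10^25 FLOPs threshold can be sanity-checked with the widely used approximation C ≈ 6·N·D (roughly 6 FLOPs per parameter per training token). The sketch below is illustrative only: the approximation and both example model sizes are assumptions for demonstration, not figures from the Act or from any real system.

```python
# Illustrative check of estimated training compute against the
# EU AI Act's 10^25 FLOPs systemic-risk presumption (Article 51).
# Uses the common approximation C ~= 6 * N * D, where N = parameter
# count and D = training tokens. All model figures are hypothetical.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25


def estimate_training_flops(n_params: float, n_tokens: float) -> float:
    """Approximate total training compute: ~6 FLOPs per parameter per token."""
    return 6.0 * n_params * n_tokens


def exceeds_threshold(n_params: float, n_tokens: float) -> bool:
    """True if the estimated compute crosses the Article 51 presumption."""
    return estimate_training_flops(n_params, n_tokens) > SYSTEMIC_RISK_THRESHOLD_FLOPS


# Hypothetical 70B-parameter model trained on 2 trillion tokens:
print(exceeds_threshold(70e9, 2e12))    # ~8.4e23 FLOPs -> below threshold
# Hypothetical 1.8T-parameter model trained on 13 trillion tokens:
print(exceeds_threshold(1.8e12, 13e12)) # ~1.4e26 FLOPs -> above threshold
```

Note that the Act counts actual cumulative training compute, not an estimate; the approximation is only useful for rough planning of whether a training run may approach the threshold.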
Obligations for Providers of General-Purpose AI Models
All GPAI providers must: maintain technical documentation, provide information to downstream providers, implement copyright compliance, and publish a training data summary. Open-source models are exempt from documentation obligations unless they have systemic risk.
Obligations for Providers of GPAI Models with Systemic Risk
GPAI models with systemic risk face additional obligations: adversarial testing (red-teaming), systemic risk assessment and mitigation, serious-incident reporting to the AI Office, and adequate cybersecurity measures.
Transparency Obligations for Providers of General-Purpose AI Models
All providers of general-purpose AI models placed on the EU market must provide certain transparency information to downstream providers and make a publicly available summary of the model's capabilities and intended uses.
Authorised Representatives of Providers of General-Purpose AI Models
Providers of general-purpose AI models not established in the EU must designate an EU-established authorised representative who acts as contact point for the AI Office and national authorities.
Codes of Practice
Mandates the AI Office to facilitate the preparation of voluntary codes of practice for GPAI providers. Codes of practice may cover documentation, copyright, data transparency, and safety evaluations, and can serve as a means of demonstrating compliance.
Relevant Annexes
Annex XI: Technical Documentation for General-Purpose AI Models
Lists technical information GPAI model providers must document and maintain under Article 53(1)(a), covering model architecture, training data, evaluation results and capabilities.
Annex XI — Technical Documentation referred to in Article 53(1)(a) and (b)(i)

Section 1 — Information to be provided by all providers of general-purpose AI models:

1. General description of the general-purpose AI model, including:
(a) the tasks the model is intended to perform;
(b) the type and nature of the model, including number of parameters, architecture and training approach;
(c) information about training compute (total FLOPs used during training);
(d) the modalities of the model (text, image, video, audio, code, etc.).

2. Description of the data used for training, fine-tuning and alignment, including:
(a) information about the data sources and data curation methodology;
(b) information about the scope and nature of web crawls;
(c) data filtering techniques used.

3. Information on training of the model:
(a) information on the computational infrastructure used;
(b) information on techniques used for training.

4. Evaluation results, including safety and security benchmarks.

5. Detailed description of the policies to identify and implement copyright compliance.
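As an illustration only, a provider might track the Annex XI Section 1 items internally as a structured record. The sketch below is a hypothetical schema: the Act prescribes what information must be documented, not any field names or data format, so every identifier here is an assumption.

```python
from dataclasses import dataclass

# Hypothetical internal record mirroring Annex XI, Section 1.
# Field names are illustrative, not prescribed by the Act.
@dataclass
class GPAIModelDocumentation:
    intended_tasks: list[str]             # 1(a) tasks the model is intended to perform
    architecture: str                     # 1(b) type/nature, incl. training approach
    parameter_count: int                  # 1(b) number of parameters
    training_flops: float                 # 1(c) total training compute
    modalities: list[str]                 # 1(d) text, image, video, audio, code, ...
    data_sources: list[str]               # 2(a) sources and curation methodology
    web_crawl_scope: str                  # 2(b) scope and nature of web crawls
    data_filtering: list[str]             # 2(c) filtering techniques used
    compute_infrastructure: str           # 3(a) computational infrastructure
    training_techniques: list[str]        # 3(b) techniques used for training
    evaluation_results: dict[str, float]  # 4   safety/security benchmark scores
    copyright_policy: str                 # 5   copyright-compliance policies


# Example record for a hypothetical model (all values invented):
doc = GPAIModelDocumentation(
    intended_tasks=["text generation", "summarisation"],
    architecture="decoder-only transformer, pretraining + alignment",
    parameter_count=70_000_000_000,
    training_flops=8.4e23,
    modalities=["text"],
    data_sources=["licensed corpora", "filtered public web crawl"],
    web_crawl_scope="snapshots of public web pages, deduplicated",
    data_filtering=["deduplication", "toxicity filtering"],
    compute_infrastructure="GPU cluster",
    training_techniques=["self-supervised pretraining", "instruction tuning"],
    evaluation_results={"safety_benchmark": 0.92},
    copyright_policy="policy honouring rights reservations under Article 53(1)(c)",
)
```

A structured record like this makes it straightforward to verify that no required item is missing before the documentation is shared with the AI Office or downstream providers.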
Annex XII: Information for Downstream Providers of GPAI Models
Lists the information GPAI model providers must make available to downstream AI system providers who integrate the GPAI model into their products.
Annex XII — Transparency Information referred to in Article 53(1)(b)(ii) for General-Purpose AI Models to be Provided to Downstream Providers

Information to be provided by providers of general-purpose AI models to downstream providers that integrate the model into their AI systems:

1. A general description of the general-purpose AI model.
2. The intended use of the general-purpose AI model.
3. Restrictions on the use of the model (e.g. use cases where the model should not be used without additional fine-tuning or safety measures).
4. Known or reasonably foreseeable risks.
5. Technical measures that downstream providers need to implement to enable safe integration.
6. Contact details of the general-purpose AI model provider.
Annex XIII: Criteria for the Classification of GPAI Models with Systemic Risk
Lists the criteria the European Commission uses when assessing, under Article 51(2), whether a GPAI model has high-impact capabilities leading to systemic risk, independently of the 10^25 FLOPs presumption.
Annex XIII — Criteria for the Classification of General-Purpose AI Models with Systemic Risk referred to in Article 51(2)

For the purposes of Article 51(2), the Commission shall take into account the following criteria when assessing whether a general-purpose AI model has high impact capabilities leading to systemic risks at Union level:

1. The number of parameters of the model.
2. The quality and size of the training data set, including whether the training data set is multimodal.
3. The amount of compute used for training the model, measured in FLOPs.
4. The input and output modalities of the model, such as text, image, audio, video, code, other modalities.
5. The benchmarks and evaluations of the model's capabilities, including on standardised benchmarks.
6. The number of registered end-users.
7. Whether the model has been found to have caused, contributed to, or is significantly exposed to serious incidents in the Union.
8. Whether it has been designated as critical infrastructure under Directive (EU) 2022/2555.