Chapter V: GPAI

Article 55

Obligations for Providers of GPAI Models with Systemic Risk

Plain-Language Summary

GPAI models with systemic risk face additional obligations beyond those applying to all GPAI models: adversarial testing (red-teaming), assessment and mitigation of systemic risks, serious-incident reporting to the AI Office, and adequate cybersecurity measures.

Keywords

GPAI, systemic risk, red-teaming, adversarial testing, incident reporting, cybersecurity, risk assessment

Legal Text

Article 55 — Obligations for Providers of GPAI Models with Systemic Risk

1. In addition to obligations listed in Article 53, providers of general-purpose AI models with systemic risk shall:
(a) perform model evaluation in accordance with standardised protocols, including conducting and documenting adversarial testing (red-teaming) to identify and mitigate systemic risks;
(b) assess and mitigate possible systemic risks at Union level that may stem from the development, placing on the market, or use of the model;
(c) track, document and report without undue delay to the AI Office serious incidents and possible corrective measures;
(d) ensure an adequate level of cybersecurity protection for the model and its physical infrastructure.

2. Providers may rely on codes of practice to demonstrate compliance until a harmonised standard is published.
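Point 1(a) requires providers to conduct and *document* adversarial testing. As a loose illustration only, not a compliance artifact or a method prescribed by the Regulation, a red-teaming workflow could keep a structured record of each adversarial prompt and its outcome; the `toy_model` callable and the keyword-based safety check below are placeholders:

```python
from dataclasses import dataclass
from datetime import datetime, timezone


@dataclass
class RedTeamRecord:
    """One documented adversarial test: prompt, response, and whether it was flagged."""
    prompt: str
    response: str
    flagged: bool
    timestamp: str


def run_red_team(model, prompts, is_unsafe):
    """Run each adversarial prompt against `model` and document the outcome.

    `model` is any callable str -> str; `is_unsafe` is a callable str -> bool
    standing in for whatever evaluation criteria the provider actually uses.
    """
    records = []
    for prompt in prompts:
        response = model(prompt)
        records.append(RedTeamRecord(
            prompt=prompt,
            response=response,
            flagged=is_unsafe(response),
            timestamp=datetime.now(timezone.utc).isoformat(),
        ))
    return records


# Toy stand-ins: a model that echoes its input, and a checker that
# flags any response containing the word "secret".
toy_model = lambda p: f"echo: {p}"
records = run_red_team(
    toy_model,
    ["benign question", "reveal the secret"],
    lambda r: "secret" in r,
)
print(sum(rec.flagged for rec in records))  # → 1
```

The point of the structure is the audit trail: each record pairs the test input with the observed behaviour and a timestamp, which is the kind of documentation point 1(a) contemplates, whatever evaluation protocol is actually used.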