Recitals
Recitals are the preamble paragraphs of the EU AI Act that explain the reasoning and intent behind each provision. They are not legally binding on their own, but courts and regulators use them to interpret the operative articles.
Purpose and Trustworthy AI
Establishes the goal of developing trustworthy AI that respects fundamental rights, democracy, and the rule of law while enabling innovation.
Definition of AI System
Explains why the definition of AI system must be technology-neutral and future-proof, distinguishing AI from simpler software.
Scope — What is Outside the AI Act
Clarifies that the AI Act does not cover AI systems used exclusively for military, defence or national security purposes, nor purely personal non-professional AI use.
Prohibited Practices — Rationale
Explains why certain AI applications should be absolutely prohibited as incompatible with Union values, focusing on manipulation, social scoring, and biometric surveillance.
High-Risk Classification — Safety Components
Explains that AI systems embedded as safety components in regulated products should be high-risk because safety failures in such products can have severe consequences.
High-Risk Classification — Annex III Use Cases
Justifies why eight categories of AI applications in Annex III are classified as high-risk due to their potential impact on fundamental rights, safety or livelihoods.
Transparency and Disclosure Obligations
Explains the rationale for requiring transparency when people interact with AI systems, especially chatbots and synthetic media, to preserve informed decision-making.
General-Purpose AI Models — Rationale
Explains why general-purpose AI (GPAI) models require specific rules given that they can be integrated into many downstream applications and may have broad societal impacts.
Systemic Risk — Definition and Classification
Defines systemic risk for GPAI models and explains the 10^25 floating-point operations (FLOP) cumulative training-compute threshold, used as a measurable proxy for high-impact capabilities sufficient to justify additional obligations.
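The threshold logic above can be sketched in code. Note the hedges: the AI Act sets the 10^25 FLOP presumption but does not prescribe a compute-estimation formula; the 6 × parameters × tokens approximation used here is a common research heuristic, and the model sizes in the example are purely illustrative.

```python
# The AI Act presumes systemic risk when cumulative training compute
# exceeds 10^25 FLOPs (the Commission may update this threshold).
SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def training_compute_flops(n_params: float, n_tokens: float) -> float:
    """Approximate training compute with the common 6*N*D heuristic.

    This formula is a research rule of thumb, NOT defined in the Act.
    """
    return 6 * n_params * n_tokens

def presumed_systemic_risk(n_params: float, n_tokens: float) -> bool:
    """True if estimated training compute meets the 10^25 FLOP presumption."""
    return training_compute_flops(n_params, n_tokens) >= SYSTEMIC_RISK_THRESHOLD_FLOPS

# Illustrative (hypothetical) model sizes:
# a 70B-parameter model trained on 15T tokens is roughly 6.3e24 FLOPs,
# below the threshold; a 200B-parameter model on the same data exceeds it.
print(presumed_systemic_risk(70e9, 15e12))   # prints False
print(presumed_systemic_risk(200e9, 15e12))  # prints True
```

In practice, providers must notify the Commission when they expect a model to meet the threshold, so an estimate like this would be made before or during training rather than after.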
AI Office and Governance Structure
Describes the role of the AI Office as the central Union-level body for supervising GPAI models and coordinating national competent authorities.
Penalties and Proportionality
Explains the rationale for the three-tier penalty structure and the proportionality requirements especially for SMEs and start-ups.
Fundamental Rights Impact Assessment
Explains why certain deployers of high-risk AI in public or quasi-public contexts must conduct a fundamental rights impact assessment before deployment.
Regulatory Sandboxes and Innovation
Explains the purpose of AI regulatory sandboxes as tools to facilitate innovation while ensuring safety and compliance by allowing testing in a controlled environment.
Phased Application Timeline
Explains the staggered application dates after the Act's entry into force, under which different provisions apply progressively so that operators have adequate time to comply.
Relationship with GDPR and Other EU Law
Clarifies that the AI Act supplements rather than replaces existing EU legislation such as GDPR, and explains how to handle overlapping obligations.