Recitals

Recitals are the preamble paragraphs of the EU AI Act that explain the reasoning and intent behind each provision. They are not directly binding but guide interpretation.

(1)

Purpose and Trustworthy AI

Establishes the goal of developing trustworthy AI that respects fundamental rights, democracy, and the rule of law while enabling innovation.

(2)

Definition of AI System

Explains why the definition of AI system must be technology-neutral and future-proof, distinguishing AI from simpler software.

(5)

Scope — What is Outside the AI Act

Clarifies that the AI Act does not apply to AI systems used exclusively for military, defence or national security purposes, nor to natural persons using AI in a purely personal, non-professional capacity.

(12)

Prohibited Practices — Rationale

Explains why certain AI applications should be absolutely prohibited as incompatible with Union values, focusing on manipulation, social scoring, and biometric surveillance.

(47)

High-Risk Classification — Safety Components

Explains that AI systems embedded as safety components in regulated products should be high-risk because safety failures in such products can have severe consequences.

(48)

High-Risk Classification — Annex III Use Cases

Justifies why eight categories of AI applications in Annex III are classified as high-risk due to their potential impact on fundamental rights, safety or livelihoods.

(58)

Transparency and Disclosure Obligations

Explains the rationale for requiring transparency when people interact with AI systems, especially chatbots and synthetic media, to preserve informed decision-making.

(97)

General-Purpose AI Models — Rationale

Explains why general-purpose AI (GPAI) models require specific rules given that they can be integrated into many downstream applications and may have broad societal impacts.

(99)

Systemic Risk — Definition and Classification

Defines systemic risk for GPAI models and explains the training-compute threshold of 10^25 floating-point operations (FLOP) as a proxy for high-impact capabilities sufficient to justify additional obligations.

(101)

AI Office and Governance Structure

Describes the role of the AI Office as the central Union-level body for supervising GPAI models and coordinating national competent authorities.

(106)

Penalties and Proportionality

Explains the rationale for the three-tier penalty structure and the proportionality requirements especially for SMEs and start-ups.

(110)

Fundamental Rights Impact Assessment

Explains why certain deployers of high-risk AI in public or quasi-public contexts must conduct a fundamental rights impact assessment before deployment.

(116)

Regulatory Sandboxes and Innovation

Explains the purpose of AI regulatory sandboxes as tools to facilitate innovation while ensuring safety and compliance by allowing testing in a controlled environment.

(121)

Phased Application Timeline

Explains the staggered application timeline: although the Act entered into force as a whole, its provisions become applicable progressively on different dates so that operators have adequate time to comply.

(130)

Relationship with GDPR and Other EU Law

Clarifies that the AI Act supplements rather than replaces existing EU legislation such as GDPR, and explains how to handle overlapping obligations.