High-Risk AI Systems

Plain-Language Explanation

High-risk AI systems are those used in safety-critical products or in sensitive sectors listed in Annex III: biometrics, critical infrastructure, education, employment, essential services (credit scoring, healthcare), law enforcement, migration, and justice. These systems must meet strict requirements for data quality, transparency, human oversight, accuracy, and robustness before being placed on the market.

Relevant Articles

Art. 6: Classification Rules for High-Risk AI Systems

Defines when an AI system qualifies as high-risk: either as a safety component of products requiring EU conformity assessment (Annex I), or if explicitly listed in Annex III. Provides a self-assessment pathway for systems that pose no significant risk despite being listed in Annex III.

Art. 7: Amendments to Annex III

Empowers the European Commission to update Annex III (the list of high-risk AI use cases) by delegated act, based on defined criteria. Ensures the list remains current as AI capabilities and risks evolve.

Art. 8: Compliance with the Requirements

Providers of high-risk AI systems must ensure their systems comply with the requirements set out in Chapter III Section 2 (Articles 9–15). This obligation applies throughout the entire lifecycle of the system.

Art. 9: Risk Management System

Requires providers of high-risk AI systems to establish, implement, document, and maintain a risk management system throughout the entire lifecycle. The system must identify, analyse, evaluate, and mitigate known and foreseeable risks.

Art. 10: Data and Data Governance

Sets data governance requirements for high-risk AI systems trained on data. Training, validation, and testing datasets must be relevant, representative, sufficiently free of errors, and complete. Sensitive personal data may only be used under specific conditions.
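
One of the Article 10 criteria, completeness, lends itself to an automated check. The sketch below is a minimal illustration under assumed inputs: the record format, field names, and the `max_missing_rate` threshold are hypothetical choices, not values prescribed by the Act.

```python
def check_dataset(records: list[dict], required_fields: set[str],
                  max_missing_rate: float = 0.01) -> list[str]:
    """Return a list of data-governance findings for a dataset.

    An empty return value means the (illustrative) completeness
    check passed; it does not imply legal compliance.
    """
    if not records:
        return ["dataset is empty"]
    # Count records where any required field is missing.
    missing = sum(
        1 for r in records
        if any(r.get(f) is None for f in required_fields)
    )
    rate = missing / len(records)
    findings = []
    if rate > max_missing_rate:
        findings.append(
            f"incomplete records: {rate:.1%} exceed allowed {max_missing_rate:.1%}"
        )
    return findings
```

Checks for relevance and representativeness would need domain-specific statistics and are deliberately out of scope here.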

Art. 11: Technical Documentation

Requires providers of high-risk AI systems to draw up and maintain comprehensive technical documentation before placing the system on the market. The documentation must contain the information set out in Annex IV and be kept up to date throughout the lifecycle.

Art. 12: Record-keeping

High-risk AI systems must be designed and developed to automatically record events (logs) relevant to identifying risks and ensuring human oversight. The logging capabilities must be sufficient to trace the system's operation over its lifetime.
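
The automatic event recording described above could take many forms; the sketch below shows one minimal shape. The event schema (the `timestamp`, `event_type`, and `detail` fields) is an assumption made for illustration; Article 12 specifies logging capabilities, not a format.

```python
import time

def log_event(log: list, event_type: str, detail: dict) -> dict:
    """Append a timestamped event record to the system log for traceability."""
    record = {
        "timestamp": time.time(),  # when the event occurred
        "event_type": event_type,  # e.g. "inference", "human_override"
        "detail": detail,          # inputs/outputs needed to trace operation
    }
    log.append(record)
    return record

# Usage: a deployer overriding an output is itself a loggable event,
# which supports the human-oversight aim of the logging requirement.
system_log: list = []
log_event(system_log, "inference", {"input_id": "x1", "score": 0.82})
log_event(system_log, "human_override", {"input_id": "x1", "reason": "manual review"})
```

In practice such logs would go to durable, append-only storage rather than an in-memory list.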

Art. 13: Transparency and Provision of Information to Deployers

High-risk AI systems must be sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. Providers must supply instructions for use that describe the system's characteristics, limitations, performance, and any residual risks.

Art. 14: Human Oversight

High-risk AI systems must be designed and developed to enable effective oversight by natural persons during the period of use. Oversight measures must allow humans to monitor the system, understand its outputs, intervene in its operation, and override its decisions, and operators must be assigned appropriate oversight responsibilities.
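
One common design pattern for this kind of oversight is a human-in-the-loop gate. The sketch below is a simplified assumption, not a compliance recipe: the confidence threshold and the `reviewer` callback are hypothetical, and a real deployment would also let a human override high-confidence decisions after the fact.

```python
from typing import Callable

def decide_with_oversight(model_score: float,
                          auto_threshold: float,
                          reviewer: Callable[[float], bool]) -> bool:
    """Return the final decision; low-confidence cases are deferred to a human.

    The reviewer callback is the point where a natural person can
    monitor, intervene, and substitute their own judgment.
    """
    if model_score >= auto_threshold:
        return True  # system decision stands (and should still be logged)
    return reviewer(model_score)  # human intervenes and decides
```

The threshold effectively sets how much of the decision volume a human sees, which makes it an oversight parameter worth documenting in its own right.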

Art. 15: Accuracy, Robustness and Cybersecurity

High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle. Systems must be resilient to errors, inconsistencies, and adversarial attacks, and relevant metrics must be declared in accompanying documentation.

Relevant Annexes

Annex III: High-Risk AI Systems (Article 6(2))

Lists 8 categories of high-risk AI use cases: biometrics, critical infrastructure, education, employment, essential services (credit scoring, healthcare), law enforcement, migration, and justice/democratic processes.

Annex III — High-Risk AI Systems referred to in Article 6(2)

1. BIOMETRICS: Remote biometric identification; biometric categorisation by sensitive attributes; emotion recognition.

2. CRITICAL INFRASTRUCTURE: Safety components in management of critical digital infrastructure, road traffic, water, gas, heating or electricity supply.

3. EDUCATION AND VOCATIONAL TRAINING: Access/admission decisions; assessment of learning outcomes; level of education assessment; monitoring exam conduct.

4. EMPLOYMENT AND HR: Recruitment and selection; employment conditions, promotion, termination; task allocation; performance monitoring.

5. ESSENTIAL PRIVATE/PUBLIC SERVICES: Eligibility for public benefits/healthcare; credit scoring and creditworthiness (not fraud detection); life and health insurance risk pricing; emergency call classification and dispatch prioritisation.

6. LAW ENFORCEMENT: Victim risk assessment; polygraphs; evidence reliability assessment; recidivism risk assessment; criminal profiling.

7. MIGRATION, ASYLUM AND BORDER CONTROL: Polygraphs; risk assessment of persons entering; travel document verification; asylum/visa application review; detection of persons at borders.

8. ADMINISTRATION OF JUSTICE AND DEMOCRATIC PROCESSES: AI assisting courts in fact/law determination; AI influencing election outcomes or voting behaviour.

Annex IV: Technical Documentation for High-Risk AI Systems

Specifies the content required in the technical documentation that providers of high-risk AI systems must draw up before placing them on the market (Article 11).

Annex IV — Technical Documentation referred to in Article 11(1)

The technical documentation referred to in Article 11(1) shall contain at least the following information:

1. A general description of the AI system including:
   (a) its intended purpose;
   (b) the version of the software;
   (c) how it interacts with hardware/software;
   (d) descriptions of each system version and updates.

2. A detailed description of the elements of the AI system including:
   (a) training methods and techniques;
   (b) model architecture and design choices;
   (c) training data and datasets used;
   (d) data governance and management practices;
   (e) assessment of available training, validation and testing data.

3. Detailed information about the monitoring, functioning and control of the AI system.

4. A description of the risk management system in accordance with Article 9.

5. A description of changes made to the system throughout its lifecycle.

6. A list of the harmonised standards applied.

7. A copy of the EU declaration of conformity.
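
Since Annex IV is effectively a checklist, a provider can represent it as data and verify that no section is missing before market placement. The section keys below mirror the list above, but the check itself is an illustrative assumption, not a conformity-assessment procedure.

```python
# Machine-checkable mirror of the Annex IV headings (labels are ours).
ANNEX_IV_SECTIONS = [
    "general_description",        # 1. intended purpose, versions, interactions
    "detailed_elements",          # 2. training methods, architecture, data
    "monitoring_and_control",     # 3. monitoring, functioning and control
    "risk_management_system",     # 4. per Article 9
    "lifecycle_changes",          # 5. changes through the lifecycle
    "harmonised_standards",       # 6. standards applied
    "declaration_of_conformity",  # 7. copy of the EU declaration
]

def missing_sections(docs: dict[str, str]) -> list[str]:
    """Return the Annex IV sections that are absent or empty in the documentation."""
    return [s for s in ANNEX_IV_SECTIONS if not docs.get(s)]
```

A check like this only confirms that each section exists, not that its content satisfies Annex IV; that assessment remains a human task.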