Chapter III: High-Risk AI Systems
Article 6
Classification Rules for High-Risk AI Systems
Plain-Language Summary
Defines when an AI system qualifies as high-risk: either as a safety component of products requiring EU conformity assessment (Annex I), or if explicitly listed in Annex III. Provides a self-assessment pathway for systems that pose no significant risk despite being listed in Annex III.
Keywords
high-risk, classification, conformity assessment, Annex I, Annex III, safety component, profiling
Legal Text
1. Irrespective of whether an AI system is placed on the market or put into service independently from the products referred to in this paragraph, that AI system shall be considered to be high-risk where both of the following conditions are fulfilled:

(a) the AI system is intended to be used as a safety component of a product, or the AI system is itself a product, covered by the Union harmonisation legislation listed in Annex I;

(b) the product whose safety component pursuant to point (a) is the AI system, or the AI system itself as a product, is required to undergo a third-party conformity assessment pursuant to the Union harmonisation legislation listed in Annex I.

2. In addition to the high-risk AI systems referred to in paragraph 1, AI systems referred to in Annex III shall also be considered to be high-risk.

3. By derogation from paragraph 2, an AI system referred to in Annex III shall not be considered to be high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons, including by not materially influencing the outcome of decision making. An AI system referred to in Annex III shall not be considered to be high-risk where it fulfils any of the following conditions:

(a) the AI system is intended to perform a narrow procedural task;

(b) the AI system is intended to improve the result of a previously completed human activity;

(c) the AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment, without proper human review;

(d) the AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III.

Notwithstanding the above, an AI system shall always be considered high-risk where it performs profiling of natural persons.
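The classification rules above form a decision procedure: check the Annex I path (paragraph 1), then the Annex III path (paragraph 2), then the derogation and its profiling carve-out (paragraph 3). A minimal sketch in Python of that decision flow (all class and field names are hypothetical, not from the Act; in practice each condition requires legal assessment, not a boolean flag):

```python
from dataclasses import dataclass


@dataclass
class AISystem:
    """Hypothetical record of the facts relevant to Article 6 classification."""
    annex_i_safety_component: bool      # Art. 6(1)(a): safety component of, or is, an Annex I product
    third_party_assessment_required: bool  # Art. 6(1)(b): third-party conformity assessment required
    listed_in_annex_iii: bool           # Art. 6(2): use case listed in Annex III
    performs_profiling: bool            # Art. 6(3), final sentence
    # Art. 6(3)(a)-(d) derogation conditions
    narrow_procedural_task: bool = False
    improves_completed_human_activity: bool = False
    detects_decision_patterns_only: bool = False
    preparatory_task_only: bool = False


def is_high_risk(s: AISystem) -> bool:
    # Paragraph 1: Annex I safety component/product requiring
    # third-party conformity assessment -> high-risk.
    if s.annex_i_safety_component and s.third_party_assessment_required:
        return True
    # Paragraph 2: Annex III listing -> high-risk, subject to paragraph 3.
    if s.listed_in_annex_iii:
        # Paragraph 3, final sentence: profiling of natural persons
        # is always high-risk, derogation notwithstanding.
        if s.performs_profiling:
            return True
        # Paragraph 3(a)-(d): any one condition removes high-risk status.
        derogation = (s.narrow_procedural_task
                      or s.improves_completed_human_activity
                      or s.detects_decision_patterns_only
                      or s.preparatory_task_only)
        return not derogation
    return False
```

For example, an Annex III system that performs profiling stays high-risk even if it only does a narrow procedural task, while the same system without profiling falls under the derogation.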
Referenced by other articles
Art. 64: AI Office and Governance
Art. 60: Testing of High-Risk AI Systems in Real World Conditions Outside AI Sandboxes
Art. 61: Further Processing of Personal Data for Developing AI of Public Interest
Art. 62: Measures for Providers and Deployers that are SMEs, Including Start-Ups
Art. 63: Derogations for Specific Operators
Art. 65: Advisory Forum
Art. 83: Supervision of Testing in Real World Conditions by Market Surveillance Authorities
Art. 66: Scientific Panel of Independent Experts
(+4 more)