Chapter III: High-Risk AI Systems
Article 27
Fundamental Rights Impact Assessment
Plain-Language Summary
Requires deployers that are public bodies or private entities providing public services to conduct a fundamental rights impact assessment (FRIA) before deploying a high-risk AI system. The FRIA must identify the processes in which the system will be used, the persons affected, and the measures needed to minimise risks to fundamental rights.
Keywords
FRIA, fundamental rights impact assessment, public bodies, deployer, risk assessment, affected persons, governance
Legal Text
Article 27 — Fundamental Rights Impact Assessment for Certain High-Risk AI Systems

1. Prior to deploying a high-risk AI system referred to in points 2 to 8 of Annex III, deployers that are bodies governed by public law, or private entities providing public services, and deployers of high-risk AI systems referred to in point 1 of Annex III shall perform a fundamental rights impact assessment.

2. The assessment shall cover the following:
(a) a description of the deployer's processes in which the high-risk AI system will be used in line with its intended purpose;
(b) a description of the period of time within which, and the frequency with which, each high-risk AI system is intended to be used;
(c) the categories of natural persons and groups likely to be affected in the specific context of use;
(d) the specific risks of harm likely to impact the categories of persons or groups of persons identified pursuant to point (c);
(e) a description of the implementation of human oversight measures, in accordance with the instructions for use;
(f) the measures to be taken in case of the materialisation of those risks, including the arrangements for internal governance and complaint mechanisms.

3. Deployers shall register the fundamental rights impact assessment in the EU database referred to in Article 71 prior to deploying the high-risk AI system in question.
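For deployers building internal compliance tooling, the elements of Article 27(2)(a) to (f) map naturally onto a structured record. The sketch below is purely illustrative, assuming a hypothetical in-house schema: the Regulation prescribes the content of the assessment, not any data format, and all field and class names here are invented for the example.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class FundamentalRightsImpactAssessment:
    """Hypothetical record of the elements a FRIA must cover under Article 27(2).

    This is a compliance-tooling sketch, not an official schema.
    """
    deployer_processes: str            # (a) processes in which the system is used
    period_and_frequency: str          # (b) intended period and frequency of use
    affected_categories: List[str] = field(default_factory=list)  # (c) persons/groups affected
    specific_risks: List[str] = field(default_factory=list)       # (d) risks of harm to (c)
    human_oversight_measures: str = ""  # (e) oversight per the instructions for use
    mitigation_and_governance: str = "" # (f) measures, governance, complaint mechanisms

    def is_complete(self) -> bool:
        """True only if every element of Article 27(2)(a)-(f) is addressed."""
        return all([
            self.deployer_processes,
            self.period_and_frequency,
            self.affected_categories,
            self.specific_risks,
            self.human_oversight_measures,
            self.mitigation_and_governance,
        ])

# Example: a draft FRIA that has not yet addressed points (e) and (f)
draft = FundamentalRightsImpactAssessment(
    deployer_processes="Automated triage of benefit applications",
    period_and_frequency="Continuous use during office hours",
    affected_categories=["benefit applicants", "appeals officers"],
    specific_risks=["discriminatory rejection rates", "lack of explanation"],
)
```

Here `draft.is_complete()` would return `False` until the oversight and mitigation fields are filled in, which is one simple way to gate deployment until every required element has been documented.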