Chapter III: High-Risk AI Systems

Article 15

Accuracy, Robustness and Cybersecurity

Plain-Language Summary

High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle. Systems must be resilient to errors, inconsistencies, and adversarial attacks, and relevant metrics must be declared in accompanying documentation.

Keywords

accuracy, robustness, cybersecurity, resilience, adversarial attacks, feedback loops, performance metrics

Legal Text

Article 15 — Accuracy, Robustness and Cybersecurity

1. High-risk AI systems shall be designed and developed in such a way that they achieve an appropriate level of accuracy, robustness, and cybersecurity, and perform consistently in those respects throughout their lifecycle.

2. The levels of accuracy and the relevant accuracy metrics of high-risk AI systems shall be declared in the accompanying instructions for use.
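To make paragraph 2 concrete, here is a minimal sketch of how a provider might compute the accuracy figures to be declared in the instructions for use. The metric choices, function name, and label encoding are illustrative assumptions, not anything prescribed by the Regulation.

```python
# Hypothetical sketch: computing classification metrics that a provider
# could declare in the instructions for use. The specific metrics and
# the binary label encoding (1 = positive) are illustrative only.

def declare_accuracy_metrics(y_true, y_pred):
    """Return a small dict of metrics for the accompanying documentation."""
    assert len(y_true) == len(y_pred)
    n = len(y_true)
    correct = sum(t == p for t, p in zip(y_true, y_pred))
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))
    precision = tp / (tp + fp) if (tp + fp) else 0.0
    recall = tp / (tp + fn) if (tp + fn) else 0.0
    return {
        "accuracy": correct / n,
        "precision": precision,
        "recall": recall,
        "evaluation_set_size": n,
    }

# Toy evaluation set, for illustration only.
metrics = declare_accuracy_metrics([1, 0, 1, 1, 0], [1, 0, 0, 1, 0])
```

In practice the declared metrics, the evaluation protocol, and the conditions under which the figures hold would all need to be documented alongside the numbers themselves.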

3. High-risk AI systems shall be resilient as regards errors, faults or inconsistencies that may occur within the system or the environment in which the system operates, in particular where such errors, faults or inconsistencies may lead to death or personal injury, or to significant harm to property or the environment.

4. The technical robustness of high-risk AI systems shall be achieved through technical redundancy solutions, which may include backup or fail-safe plans.
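One common shape such a redundancy solution can take is a layered fail-safe: try the primary model, fall back to a backup, and return a conservative default if both fail. The sketch below assumes hypothetical model callables and a "refer to a human" default; none of these names come from the Regulation.

```python
# Hypothetical sketch of a fail-safe wrapper: a backup path and a safe
# default sit behind the primary model. All names are illustrative.

def predict_with_failsafe(primary, backup, features,
                          safe_default="refer_to_human"):
    """Try the primary model, then the backup, then a safe default."""
    for model in (primary, backup):
        try:
            return model(features)
        except Exception:
            continue  # in a real system, log the failure before falling through
    return safe_default

def broken_model(features):
    # Stands in for a primary model that is currently failing.
    raise RuntimeError("primary model unavailable")

def backup_model(features):
    # A deliberately simple, conservative backup rule.
    return "low_risk" if sum(features) < 1.0 else "review"

result = predict_with_failsafe(broken_model, backup_model, [0.2, 0.3])
```

The design point is that degradation is graceful and deliberate: each fallback layer is simpler and more conservative than the one above it.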

5. High-risk AI systems that continue to learn after being placed on the market or put into service shall be developed in such a way as to eliminate or reduce as far as possible the risk of possibly biased outputs influencing input for future operations (feedback loops) and ensure that any such feedback loops are duly addressed with appropriate mitigation measures.
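One simple mitigation for the feedback loops described in paragraph 5 is to exclude records whose labels were produced by the system itself from future training data. The sketch below assumes a hypothetical `label_source` field on each record; the field name and values are illustrative, not drawn from the Regulation.

```python
# Hypothetical sketch of a feedback-loop mitigation: drop records whose
# labels came from the model's own outputs before retraining, so the
# system does not learn from its own (possibly biased) decisions.

def filter_training_records(records):
    """Keep only records labelled by a source independent of the model."""
    return [r for r in records if r.get("label_source") != "model_output"]

records = [
    {"id": 1, "label_source": "human_annotator"},
    {"id": 2, "label_source": "model_output"},
    {"id": 3, "label_source": "human_annotator"},
]
clean = filter_training_records(records)  # keeps ids 1 and 3
```

Provenance tracking of this kind is only one of several possible mitigation measures; others include periodic retraining from independently collected data and monitoring output distributions for drift.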

6. High-risk AI systems shall be resilient as regards attempts by unauthorised third parties to alter their use, outputs or performance by exploiting system vulnerabilities.
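A small part of the resilience paragraph 6 calls for is hardening the system's input surface, so that malformed or out-of-range inputs are rejected before they reach the model. The following sketch shows one such validation layer; the expected feature count and value bounds are illustrative assumptions.

```python
# Hypothetical sketch of basic input hardening: reject malformed or
# out-of-range feature vectors as one layer of defence against attempts
# to manipulate outputs via crafted inputs. Bounds are illustrative.

def validate_input(features, expected_len=4, lo=0.0, hi=1.0):
    """Return True only for a well-formed feature vector within bounds."""
    if not isinstance(features, (list, tuple)) or len(features) != expected_len:
        return False
    return all(isinstance(x, (int, float)) and lo <= x <= hi
               for x in features)

validate_input([0.1, 0.5, 0.9, 0.0])  # well-formed: accepted
validate_input([0.1, 5.0, 0.9, 0.0])  # out of range: rejected
```

Input validation alone does not address model-level attacks such as adversarial examples or data poisoning; it is one layer in a broader set of organisational and technical cybersecurity measures.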