Chapter VI
Article 60
Testing of High-Risk AI Systems in Real World Conditions Outside AI Sandboxes
Plain-Language Summary
Permits providers or prospective providers to test high-risk AI systems in real-world conditions outside AI regulatory sandboxes, under specific conditions. Such testing must follow a real-world testing plan approved by the market surveillance authority, subjects must be informed, and the testing must not last longer than necessary to achieve its objectives.
Keywords
real world testing, high-risk AI, testing plan, market surveillance, conditions, subjects
Legal Text
Article 60 — Testing of High-Risk AI Systems in Real World Conditions Outside AI Sandboxes

1. Testing of high-risk AI systems in real world conditions outside AI regulatory sandboxes may be conducted by providers or prospective providers of high-risk AI systems listed in Annex III, in accordance with this Article and the real world testing plan referred to in paragraph 4, without prejudice to the prohibitions and restrictions under Chapter II.

2. The testing in real world conditions may be conducted only where all of the following conditions are fulfilled:

(a) the provider or prospective provider has drawn up a real world testing plan and submitted it to the market surveillance authority;

(b) the market surveillance authority has approved the real world testing plan;

(c) the testing in real world conditions is conducted after the prospective provider or provider has informed the subjects;

(d) the testing does not last longer than necessary to achieve its objectives.