Article Browser
Browse the EU AI Act structured by chapters and articles. The Act contains 113 articles across 13 chapters.
General Provisions
States the purpose of the EU AI Act: to improve the functioning of the internal market and promote the uptake of trustworthy, human-centric AI, while ensuring a high level of protection of health, safety, and fundamental rights.
Defines the personal, territorial, and material scope of the EU AI Act. Applies to providers, deployers, importers, distributors, and manufacturers involved with AI systems in or affecting the EU. Key exclusions include AI for military and national security, scientific research, and personal non-professional use.
Provides key definitions including: AI system, general-purpose AI model, provider, deployer, operator, risk, and many other terms used throughout the regulation.
Requires providers and deployers to ensure their staff and others operating AI systems on their behalf have sufficient AI literacy, taking into account technical knowledge, experience, education, and training as well as the context of use.
Prohibited AI Practices
Bans AI practices posing unacceptable risk: subliminal or purposefully manipulative techniques, exploitation of vulnerabilities due to age, disability, or social or economic situation, social scoring, predictive policing based solely on profiling, untargeted scraping of facial images, emotion recognition in workplaces and schools, biometric categorisation to infer sensitive attributes, and real-time remote biometric identification in publicly accessible spaces for law enforcement, subject to narrow exceptions.
High-Risk AI Systems
Defines when an AI system qualifies as high-risk: either as a safety component of products requiring EU conformity assessment (Annex I), or if explicitly listed in Annex III. Provides a self-assessment pathway for systems that pose no significant risk despite being listed in Annex III.
Empowers the European Commission to update Annex III (the list of high-risk AI use cases) by delegated act, based on defined criteria. Ensures the list remains current as AI capabilities and risks evolve.
Providers of high-risk AI systems must ensure their systems comply with the requirements set out in Chapter III Section 2 (Articles 9–15). This obligation applies throughout the entire lifecycle of the system.
Requires providers of high-risk AI systems to establish, implement, document, and maintain a risk management system throughout the entire lifecycle. The system must identify, analyse, evaluate, and mitigate known and foreseeable risks.
Sets data governance requirements for high-risk AI systems trained on data. Training, validation, and testing datasets must be relevant, representative, sufficiently free of errors, and complete. Sensitive personal data may only be used under specific conditions.
Requires providers of high-risk AI systems to draw up and maintain comprehensive technical documentation before placing the system on the market. The documentation must contain the information set out in Annex IV and be kept up to date throughout the lifecycle.
High-risk AI systems must be designed and developed to automatically record events (logs) relevant to identifying risks and ensuring human oversight. The logging capabilities must be sufficient to trace the system's operation over its lifetime.
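The Act mandates traceable event logging but prescribes no schema. As a minimal sketch of what such a record might capture, assuming a JSON-lines sink: the AuditEvent fields and record_event helper are illustrative inventions, not terms from the Act or any official API.

```python
# Illustrative sketch only: the Act requires traceability, not this schema.
import io
import json
from dataclasses import asdict, dataclass
from datetime import datetime, timezone

@dataclass
class AuditEvent:
    timestamp: str        # when the event occurred (UTC, ISO 8601)
    system_id: str        # which AI system produced it
    input_ref: str        # reference to the input, not the data itself
    output_summary: str   # what the system decided or recommended
    human_override: bool  # whether a human overrode the output (cf. oversight)

def record_event(event: AuditEvent, sink) -> None:
    """Append one event as a JSON line to an append-only log sink."""
    sink.write(json.dumps(asdict(event)) + "\n")

log = io.StringIO()  # stands in for durable, tamper-evident storage
record_event(AuditEvent(datetime.now(timezone.utc).isoformat(),
                        "hr-screening-v2", "application-1042",
                        "shortlisted", False), log)
```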
High-risk AI systems must be sufficiently transparent to enable deployers to interpret the system's output and use it appropriately. Providers must supply instructions for use that describe the system's characteristics, limitations, performance, and any residual risks.
High-risk AI systems must be designed and developed to enable effective oversight by natural persons during the period of use. Oversight must allow humans to monitor, intervene, understand, and override the system, and operators must be assigned appropriate oversight responsibilities.
High-risk AI systems must achieve appropriate levels of accuracy, robustness, and cybersecurity throughout their lifecycle. Systems must be resilient to errors, inconsistencies, and adversarial attacks, and relevant metrics must be declared in accompanying documentation.
Sets out the full list of obligations that providers of high-risk AI systems must fulfil before and after placing the system on the market, including technical documentation, conformity assessment, CE marking, registration, post-market monitoring, and incident reporting.
Requires providers of high-risk AI systems to put in place a quality management system (QMS) covering all aspects of their AI development and deployment. The QMS must be documented in a systematic and orderly manner in the form of written policies, procedures, and instructions.
Providers must keep technical documentation, quality management system documentation, and related records for a period of 10 years after the high-risk AI system has been placed on the market or put into service.
Providers must retain automatically generated logs of high-risk AI systems for a defined period. Providers who remain in control of the system keep logs for at least 6 months, unless other Union or national law prescribes a longer period.
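As a worked illustration of the retention minimums in the two entries above (10 years for documentation, at least 6 months for logs), here is a minimal sketch; the constant and function names are ours, and the six-month floor is simplified to 182 calendar days.

```python
# Illustrative retention arithmetic; figures from the Act, names ours.
from datetime import date, timedelta

DOCS_RETENTION_YEARS = 10                  # technical documentation and QMS records
LOGS_RETENTION_MIN = timedelta(days=182)   # "at least 6 months" of logs, simplified

def docs_retention_until(market_placement: date) -> date:
    """Keep documentation for 10 years after market placement (naive: ignores 29 February)."""
    return market_placement.replace(year=market_placement.year + DOCS_RETENTION_YEARS)

def logs_retention_until(event_date: date) -> date:
    """Earliest date a log entry may be deleted, absent longer national rules."""
    return event_date + LOGS_RETENTION_MIN

print(docs_retention_until(date(2026, 8, 2)))  # 2036-08-02
print(logs_retention_until(date(2026, 8, 2)))  # 2027-01-31
```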
Providers who have reason to believe their high-risk AI system is not in conformity must immediately take corrective actions to bring it into conformity, withdraw it, or recall it. They must inform distributors, deployers, authorised representatives, importers, and relevant authorities.
Providers of high-risk AI systems must cooperate with national competent authorities and supply all necessary information, access, and assistance to verify compliance and enable proper market surveillance investigations.
Providers not established in the EU must appoint an EU-based authorised representative before placing a high-risk AI system on the EU market. The representative acts on behalf of the provider and is the contact point for national authorities.
Importers bringing high-risk AI systems into the EU must verify conformity before market placement: check that the provider has performed the conformity assessment, the system bears CE marking, has the required documentation, and an authorised representative is in place.
Distributors making high-risk AI systems available must verify that the system bears CE marking and has the required instructions for use. They must inform providers or importers of any identified risk and must not make non-conforming systems available.
Governs how obligations flow along the AI value chain. Distributors, importers, deployers, or other third parties become providers if they put their name or trademark on a high-risk AI system already on the market, substantially modify it, or change its intended purpose so that it becomes high-risk. Suppliers of AI systems, tools, or components must cooperate with providers, including through written agreements specifying the information to be shared.
Deployers of high-risk AI systems must use systems according to providers' instructions, implement human oversight, inform and train staff, retain logs for at least 6 months, and notify providers of risks. Public bodies must also conduct fundamental rights impact assessments.
Requires public bodies and operators providing public services to conduct a fundamental rights impact assessment (FRIA) before deploying a high-risk AI system. The FRIA must identify the processes, persons affected, and measures to minimise risks to fundamental rights.
Member States must notify the Commission and other Member States of conformity assessment bodies authorised to carry out third-party conformity assessments for high-risk AI systems. Notified bodies must meet the requirements set out in Article 31.
Specifies the requirements that notified bodies must meet: independence, technical competence, financial standing, staff qualifications, confidentiality obligations, and liability insurance. Bodies must not engage in activities that could compromise their independence.
Notified bodies may subcontract specific conformity assessment activities or use subsidiaries, but must obtain the prior written agreement of the client and take full responsibility for the work. Subcontractors must meet the same requirements as notified bodies.
Conformity assessment bodies wishing to become notified bodies must apply for notification to the notifying authority of their Member State. The application must include a description of conformity assessment activities and an accreditation certificate.
Sets out the procedure by which Member States notify the Commission and the other Member States of conformity assessment bodies, using the Commission's electronic notification tool (NANDO). Where the notification is based on an accreditation certificate, the body may act as a notified body two weeks after validation unless an objection is raised; otherwise a two-month objection period applies.
Specifies the operations and conduct of notified bodies: they must carry out conformity assessments in a proportionate manner, set up an appeals procedure, and keep records of conformity assessment activities for at least 10 years.
Notified bodies must carry out conformity assessments impartially, set fees that do not depend on the outcome of assessment, protect confidential information, and participate in standardisation and coordination activities.
Where a notified body's competence changes, it must inform the notifying authority. The notifying authority may suspend, restrict, or withdraw the notification if the notified body no longer meets the requirements.
Where concerns arise about a notified body's competence, the Commission, in consultation with Member States, may investigate. If the body is found non-compliant, the Member State must take corrective action.
The Commission must ensure appropriate coordination and cooperation between notified bodies operating in the field of high-risk AI systems through a group for notified bodies. Bodies must participate in this coordination work.
Addresses conformity assessment bodies established in third countries. Where the EU has concluded an agreement with a third country, conformity assessment bodies established under its law may be authorised to carry out the activities of notified bodies under this Regulation, provided they meet the requirements applicable to notified bodies or ensure an equivalent level of compliance.
Mandates the development of harmonised standards for high-risk AI. Where harmonised standards are published in the Official Journal, providers complying with them are presumed to meet the corresponding requirements of the AI Act.
Where harmonised standards do not exist or are insufficient, the Commission may adopt implementing acts establishing common specifications for high-risk AI systems. Providers complying with these common specifications are presumed to be in conformity.
Creates rebuttable presumptions of conformity for specific scenarios: AI systems trained and tested on data reflecting the intended geographic, behavioural or functional setting are presumed to meet data governance requirements; systems compliant with voluntary cybersecurity schemes are presumed to meet cybersecurity requirements.
Defines the conformity assessment procedures applicable to high-risk AI systems. Most Annex III systems follow the internal control procedure (Annex VI); biometric systems may instead require assessment involving a notified body (Annex VII), and AI systems covered by the product legislation in Annex I follow the sectoral conformity assessment procedures.
Notified bodies must issue EU technical documentation assessment certificates and quality management system approvals where conformity has been established. Certificates are valid for the period they indicate, not exceeding five years for AI systems covered by Annex I and four years for those covered by Annex III, and may be extended, suspended, or withdrawn.
Notified bodies must inform their national notifying authority of any certificates issued, refused, restricted, suspended, or withdrawn, and of any circumstances affecting the scope or conditions of notification.
Market surveillance authorities may authorise specific high-risk AI systems to be placed on the market or put into service before conformity assessment is completed, for exceptional reasons of public security, the protection of life and health of persons, environmental protection, or the protection of key industrial and infrastructural assets.
Providers must draw up a written EU declaration of conformity for each high-risk AI system, certifying compliance with the regulation. The declaration must contain the information set out in Annex V and be kept available for 10 years.
Requires providers to affix the CE marking to high-risk AI systems before placing them on the EU market. The CE marking must follow the general principles in Regulation (EC) No 765/2008 and must be visible, legible, and indelible.
Before placing a high-risk AI system on the EU market, providers must register themselves and the system in the EU database. Deployers who are public bodies must also register before first using the system. The registration must contain the information set out in Annex VIII.
General-Purpose AI Models
A GPAI model is classified as having systemic risk if trained with more than 10^25 FLOPs, or if the Commission decides it has high-impact capabilities. This classification triggers additional obligations under Article 55.
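The compute threshold lends itself to a one-line check. A minimal sketch, assuming the 10^25 FLOP figure from the entry above; the function name and the designated_by_commission flag are illustrative, not official terminology.

```python
# Illustrative classification check; threshold from the Act, names ours.
SYSTEMIC_RISK_FLOP_THRESHOLD = 1e25  # cumulative training compute

def has_systemic_risk(training_flops: float,
                      designated_by_commission: bool = False) -> bool:
    """Presumed systemic risk above the compute threshold, or on designation."""
    return training_flops > SYSTEMIC_RISK_FLOP_THRESHOLD or designated_by_commission

print(has_systemic_risk(3e25))   # True: crosses the compute threshold
print(has_systemic_risk(5e24))   # False, absent a Commission designation
```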
Sets out the procedure for classifying general-purpose AI models as having systemic risk. Providers must notify the Commission without delay, and at the latest within two weeks, once the training compute threshold is met or anticipated; they may present arguments that the model nevertheless does not present systemic risk. The Commission maintains and publishes a list of GPAI models with systemic risk.
All GPAI providers must: maintain technical documentation, provide information to downstream providers, implement copyright compliance, and publish a training data summary. Open-source models are exempt from documentation obligations unless they have systemic risk.
Providers of general-purpose AI models not established in the EU must designate an EU-established authorised representative who acts as contact point for the AI Office and national authorities.
GPAI models with systemic risk face additional obligations: adversarial testing (red-teaming), systemic risk assessment, incident reporting to AI Office, and cybersecurity measures.
Mandates the AI Office to facilitate the preparation of voluntary codes of practice for GPAI providers. Codes of practice may cover documentation, copyright, data transparency, and safety evaluations, and can serve as a means of demonstrating compliance.
Innovation Support Measures
Requires each Member State to establish at least one AI regulatory sandbox, operational by 2 August 2026. Sandboxes provide controlled environments where AI systems can be developed, trained, tested, and validated before market placement, under the supervision of national competent authorities.
Establishes the conditions and rules for operating AI regulatory sandboxes. Participation does not exempt participants from the applicable legal framework. Authorities must publish annual reports on sandbox activities.
Allows personal data lawfully collected for other purposes to be processed in AI regulatory sandboxes for developing AI systems in the public interest, subject to strict safeguards. This constitutes a limited derogation from GDPR purpose limitation.
Allows real-world testing of high-risk AI systems outside sandboxes by providers or prospective providers under specific conditions. Such testing must be conducted with a plan approved by the relevant authority and must not pose unacceptable risks.
Requires the informed consent of subjects participating in real-world testing outside sandboxes. Consent must be freely given and obtained before participation, subjects must be informed about the nature and objectives of the testing and their rights, and they may withdraw at any time without detriment.
Requires Member States and the Commission to take specific measures to support SMEs and start-ups in complying with the AI Act, including reduced fees for conformity assessments, dedicated guidance, and access to regulatory sandboxes.
Allows microenterprises to comply with certain elements of the quality management system required by Article 17 in a simplified manner, without lowering the level of protection or the other requirements applicable to high-risk AI systems.
Governance
Establishes the AI Office within the European Commission as the body responsible for supervising GPAI models, developing Union AI expertise, and coordinating enforcement with national authorities.
Establishes an Advisory Forum to provide technical expertise and advise the AI Office and Member States on AI Act implementation. Membership includes representatives from industry, civil society, academia, and standardisation bodies, balanced between large and small operators.
Establishes a Scientific Panel of Independent Experts to support the enforcement of the AI Act, particularly regarding general-purpose AI models with systemic risk. The Panel assists the AI Office with technical assessments and evaluations.
Requires each Member State to establish or designate at least one notifying authority and at least one market surveillance authority as national competent authorities responsible for applying and implementing the AI Act, and to ensure they have adequate technical, financial, and human resources.
Requires national competent authorities, notified bodies, and the AI Office to ensure the confidentiality of information and data obtained during the application of the AI Act. Trade secrets and commercially sensitive information must be protected.
EU Database
Defines the structure and content of the EU database for high-risk AI systems. The database must be publicly accessible and contain specified information about registered AI systems, providers, and deployers.
Post-Market Monitoring
Providers of high-risk AI systems must establish and document a post-market monitoring system and prepare a post-market monitoring plan as part of the technical documentation. The system must actively and systematically collect and analyse data on the system's performance throughout its lifetime, including data provided by deployers, to identify any need for corrective action.
Requires providers of high-risk AI systems to report serious incidents to the market surveillance authorities without undue delay, and no later than 15 days after becoming aware of the incident. Widespread infringements and serious incidents involving critical infrastructure must be reported within 2 days, and incidents causing death within 10 days. Providers of GPAI models with systemic risk report serious incidents to the AI Office.
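The reporting windows reduce to simple deadline arithmetic. A minimal sketch, assuming the day counts above; the category labels and helper function are illustrative simplifications of the incident taxonomy.

```python
# Illustrative deadline calculation; day counts from the Act, names ours.
from datetime import date, timedelta

REPORTING_DAYS = {
    "serious_incident": 15,        # general rule
    "critical_infrastructure": 2,  # widespread infringement or critical infrastructure
    "death": 10,                   # incident causing death
}

def reporting_deadline(awareness_date: date, category: str) -> date:
    """Latest notification date after the provider becomes aware (calendar days)."""
    return awareness_date + timedelta(days=REPORTING_DAYS[category])

print(reporting_deadline(date(2026, 9, 1), "death"))  # 2026-09-11
```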
Market Surveillance
Designates market surveillance authorities and grants them investigative powers for AI systems. Authorities may request access to AI systems, data, source code, and documentation, and may conduct audits and on-site inspections.
Sets out the procedure for national authorities to follow when a high-risk AI system presenting a serious risk is identified. Authorities must notify the operator, take proportionate restrictive measures, and report to the Commission.
Establishes the Union safeguard procedure where the Commission reviews national restrictive measures against high-risk AI systems. Ensures a uniform approach to AI systems presenting risks across the EU single market.
Addresses situations where a technically compliant high-risk AI system nevertheless presents an unacceptable risk to health, safety or fundamental rights. Authorities may require corrective action or withdrawal even for CE-marked systems.
Covers procedural non-compliance situations: missing CE marking, incorrectly affixed CE marking, missing EU declaration of conformity, or missing registration. Authorities must require the operator to rectify the non-compliance.
Mandates the Commission to establish Union AI testing support structures. These structures provide testing expertise to market surveillance authorities and other bodies, and develop common testing methodologies and tools.
Grants market surveillance authorities broad powers to investigate AI system compliance: requesting information, accessing premises, obtaining source code under controlled conditions, ordering systemic risk assessments, and imposing interim measures.
Establishes mutual assistance obligations between national authorities and the AI Office for market surveillance of GPAI models. National authorities may refer investigations of GPAI models to the AI Office.
Grants market surveillance authorities the power to supervise testing of high-risk AI systems in real world conditions conducted under Article 60. Authorities may suspend, prohibit, or impose conditions on such testing where risks are identified.
Grants authorities and the AI Office the right to access source code, training data, and documentation of AI systems under defined conditions and with strict confidentiality safeguards, for market surveillance and enforcement purposes.
Codes of Conduct
Encourages providers of non-high-risk AI systems to voluntarily apply requirements from Chapter III Section 2, such as risk management, documentation, and human oversight. The Commission and Member States shall promote the drawing up of codes of conduct.
Requires the Commission to develop guidelines on the practical implementation of the AI Act, including on the requirements for high-risk AI systems, the prohibited practices, the transparency obligations, and the Act's relationship with other Union law, paying particular attention to the needs of SMEs and start-ups.
Establishes whistleblower protections for persons reporting infringements of the AI Act to national competent authorities or the AI Office. Reporting persons must be protected from retaliation in accordance with Directive (EU) 2019/1937.
Encourages providers to commit voluntarily to the AI Act obligations ahead of its application date, through the AI Pact. The AI Office coordinates the AI Pact and facilitates sharing of good practices.
Clarifies that the AI Act is without prejudice to EU competition law. Information sharing for compliance purposes does not constitute an infringement of competition rules.
Requires the Commission to regularly assess the need to update the AI Act, including the list of prohibited practices, the high-risk use cases in Annex III, and the governance provisions. A first evaluation shall be submitted to Parliament and Council by August 2029.
Empowers the AI Office and the Scientific Panel to evaluate general-purpose AI models, including conducting assessments of models suspected of presenting systemic risk.
Grants the AI Office the power to enforce GPAI model obligations. The AI Office may request information, conduct investigations, and impose measures. The Commission decides on non-compliance findings.
Specifies the range of measures the Commission may impose on non-compliant GPAI model providers: requiring compliance, restricting access to the model, and imposing fines.
Delegated Acts
Sets out the conditions under which the Commission may exercise its delegated powers under the AI Act: 5-year mandate, ordinary objection procedure, right of revocation for Parliament or Council.
Establishes the committee procedure applicable to implementing acts under the AI Act. The Commission is assisted by a committee composed of Member State representatives.
Penalties
Three-tier penalty structure: (1) up to €35 million or 7% of global annual turnover, whichever is higher, for prohibited practice violations; (2) up to €15 million or 3% for violations of most other obligations; (3) up to €7.5 million or 1% for supplying incorrect, incomplete, or misleading information. For SMEs and start-ups, the lower of the two amounts applies.
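The tier logic reduces to a max/min over a fixed amount and a turnover percentage. A minimal sketch, assuming the figures above; the function and tier numbering are illustrative, and none of this is legal advice.

```python
# Illustrative fine-ceiling arithmetic; amounts from the Act, names ours.
def max_fine_eur(tier: int, global_turnover_eur: float, is_sme: bool = False) -> float:
    """Ceiling for tier 1 (prohibited practices), 2 (other obligations),
    or 3 (incorrect information)."""
    tiers = {1: (35_000_000, 0.07), 2: (15_000_000, 0.03), 3: (7_500_000, 0.01)}
    fixed, pct = tiers[tier]
    turnover_based = pct * global_turnover_eur
    # Non-SMEs face whichever amount is higher; SMEs and start-ups the lower.
    return min(fixed, turnover_based) if is_sme else max(fixed, turnover_based)

print(max_fine_eur(1, 2_000_000_000))  # 140000000.0: 7% of turnover exceeds €35M
```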
Sets out fines applicable to providers of general-purpose AI models who intentionally or negligently violate their obligations, fail to comply with a request for information or for access to the model, or supply incorrect, incomplete, or misleading information. Fines may reach €15 million or 3% of global annual turnover, whichever is higher.
Empowers the European Data Protection Supervisor to impose administrative fines on EU institutions, bodies, and agencies that violate the AI Act obligations in their capacity as deployers.
Amends Regulation (EC) No 300/2008 on common rules in the field of civil aviation security, requiring that AI systems used in security equipment take account of the AI Act's requirements for high-risk AI systems.
Amends Regulation (EU) No 167/2013 on agricultural and forestry vehicles to align with AI Act requirements for AI systems used as safety components in agricultural vehicles.
Amends Regulation (EU) No 168/2013 on the approval and market surveillance of two- or three-wheel vehicles and quadricycles to align with AI Act requirements.
Amends Directive 2014/90/EU on marine equipment to align with AI Act requirements for AI-based marine equipment.
Amends Directive (EU) 2016/797 on the interoperability of the rail system to align with AI Act requirements for AI systems in railway applications.
Amends Regulation (EU) 2018/858 on the approval and market surveillance of motor vehicles to align with AI Act requirements for AI systems in motor vehicles.
Amends Regulation (EU) 2018/1139 on common rules in the field of civil aviation to align with AI Act requirements for AI systems in aviation.
Amends Regulation (EU) 2019/2144 on type-approval requirements for motor vehicles and trailers to align with AI Act requirements.
Amends Directive (EU) 2020/1828 on representative actions for the protection of the collective interests of consumers, adding the AI Act to the list of Union law whose infringement may be the subject of representative actions.
Sets out transitional provisions for AI systems and GPAI models already on the market before the relevant application dates. High-risk AI systems placed on the market before 2 August 2026 fall within scope only if they are subsequently subject to significant changes in design; GPAI models placed on the market before 2 August 2025 must comply by 2 August 2027; and AI systems that are components of the large-scale IT systems listed in Annex X and placed on the market before 2 August 2027 must be brought into compliance by 31 December 2030.
The EU AI Act entered into force on 1 August 2024. Application is phased: prohibited practices apply from 2 February 2025; GPAI obligations from 2 August 2025; most high-risk AI requirements from 2 August 2026; Annex I product AI from 2 August 2027.
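The phased timeline is effectively a date table. A minimal sketch, assuming the dates above; the grouping keys are our own labels, not terms from the Act.

```python
# Illustrative applicability lookup; dates from the Act, labels ours.
from datetime import date

APPLICATION_DATES = {
    "prohibited_practices": date(2025, 2, 2),       # plus AI literacy duties
    "gpai_obligations": date(2025, 8, 2),           # plus governance and penalties
    "high_risk_annex_iii": date(2026, 8, 2),        # general application date
    "high_risk_annex_i_products": date(2027, 8, 2),
}

def applies_on(obligation: str, today: date) -> bool:
    """True once the given group of obligations has started to apply."""
    return today >= APPLICATION_DATES[obligation]

print(applies_on("gpai_obligations", date(2026, 1, 1)))  # True
```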