EU AI Act Compliance Checklist
Last updated: February 2026
If your AI system is classified as high-risk under the EU AI Act (Regulation 2024/1689), you must meet a comprehensive set of obligations before placing it on the market or putting it into service. Below is a checklist of the 12 core requirements, referenced to the specific Articles that define them.
Risk Management System (Article 9)
Establish and maintain a continuous, iterative risk management system throughout the entire lifecycle of the AI system. This includes identifying and analysing known and foreseeable risks, estimating and evaluating risks that may emerge during intended use and reasonably foreseeable misuse, and adopting appropriate risk mitigation measures.
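In practice, many teams implement the Article 9 process as a living risk register that is reviewed at each lifecycle stage. Below is a minimal sketch in Python; the field names and the severity scale are illustrative assumptions, not terms prescribed by the Act.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3


@dataclass
class RiskEntry:
    """One identified risk, tracked across the Article 9 lifecycle process."""
    description: str            # known or foreseeable risk
    context: str                # intended use or reasonably foreseeable misuse
    severity: Level
    likelihood: Level
    mitigation: str             # adopted risk management measure
    residual_risk_accepted: bool = False
    last_reviewed: date = field(default_factory=date.today)


# Illustrative entry; the register is revisited iteratively, not written once.
register = [
    RiskEntry(
        description="Lower accuracy for under-represented demographic groups",
        context="Intended use: automated CV screening",
        severity=Level.HIGH,
        likelihood=Level.MEDIUM,
        mitigation="Rebalance training data; human review of borderline scores",
    )
]
```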
Data and Data Governance (Article 10)
Training, validation, and testing datasets must be subject to appropriate data governance and management practices. These cover data collection processes, relevance, representativeness, completeness, freedom from errors (to the best extent possible), and appropriate statistical properties. Possible biases that are likely to lead to discrimination must be identified, and measures to detect, prevent, and mitigate them must be taken.
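As one illustration of what such governance practices can look like when automated, the sketch below runs basic completeness and representativeness checks on a training table with pandas. The thresholds, column names, and the notion of a single protected attribute are assumptions made for the example, not values taken from the Act.

```python
import pandas as pd


def governance_checks(df: pd.DataFrame, protected_attr: str,
                      max_missing: float = 0.01,
                      min_group_share: float = 0.05) -> list[str]:
    """Flag basic Article 10 concerns: missing values and group imbalance."""
    findings = []

    # Completeness: flag columns with more missing values than tolerated.
    missing = df.isna().mean()
    for col, share in missing.items():
        if share > max_missing:
            findings.append(f"{col}: {share:.1%} missing (limit {max_missing:.0%})")

    # Representativeness: flag protected groups that are badly under-represented.
    shares = df[protected_attr].value_counts(normalize=True)
    for group, share in shares.items():
        if share < min_group_share:
            findings.append(f"group '{group}' is only {share:.1%} of the data")

    return findings
```

A run that returns findings would feed back into the Article 9 risk register rather than silently passing.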
Technical Documentation (Article 11)
Draw up technical documentation before the system is placed on the market or put into service, and keep it up to date. The documentation must contain, at a minimum, the elements set out in Annex IV, demonstrate compliance with all high-risk requirements, and give national competent authorities and notified bodies the information they need to assess that compliance.
Record-Keeping and Logging (Article 12)
High-risk AI systems must be designed to automatically record events (logs) over their lifetime. Logs must enable traceability of the system's functioning, including the identification of situations that may result in the system posing a risk; for remote biometric identification systems, they must also record the period of each use, the reference database against which input data has been checked, and the input data itself.
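A minimal logging wrapper might look like the sketch below. It records a timestamp, the model version, a hash of the input (so events are traceable without storing raw personal data), and a risk flag. The exact fields your system must log depend on its Annex III category, so treat this as an illustrative starting point only.

```python
import hashlib
import json
import logging
from datetime import datetime, timezone

logger = logging.getLogger("ai_system.audit")
logging.basicConfig(level=logging.INFO)


def log_inference_event(model_version: str, input_payload: bytes,
                        output_summary: str, risk_flag: bool = False) -> None:
    """Append one traceable event per Article 12: what, when, and with which model."""
    event = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": model_version,
        # Hash the input so the event is traceable without retaining raw data.
        "input_sha256": hashlib.sha256(input_payload).hexdigest(),
        "output_summary": output_summary,
        "risk_flag": risk_flag,  # mark situations where the system may pose a risk
    }
    logger.info(json.dumps(event))


log_inference_event("cv-screener-1.4.2", b"<serialized input>", "score=0.83")
```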
Transparency and Information to Deployers (Article 13)
Provide deployers with clear and adequate information, including the provider's identity and contact details, the system's characteristics, capabilities, and limitations, its intended purpose, performance metrics, known risks, and instructions for use. The information must be concise, complete, correct, and clear, and accessible and comprehensible to deployers.
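Some providers maintain the Article 13 information as a machine-readable "system card" that is rendered into the instructions for use. Below is a sketch of such a structure; the schema and all values are our own illustration, not a format mandated by the Act.

```python
import json

system_card = {
    "provider": {"name": "Example AI GmbH", "contact": "compliance@example.eu"},
    "intended_purpose": "Ranking job applications for human review",
    "capabilities": ["multilingual CV parsing", "skill extraction"],
    "limitations": [
        "not validated for applicants under 18",
        "degraded accuracy on scanned handwritten documents",
    ],
    "performance": {"metric": "top-10 recall on held-out set", "value": 0.91},
    "known_risks": ["possible bias against non-standard career paths"],
    "human_oversight": "All automated rejections require reviewer sign-off",
}

# Render the card into the human-readable instructions for use.
print(json.dumps(system_card, indent=2))
```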
Human Oversight (Article 14)
Design the system so it can be effectively overseen by natural persons during use. Human oversight measures must enable the overseer to fully understand the system's capabilities and limitations, monitor operation, detect anomalies, and be able to intervene or interrupt the system.
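One common design pattern for meeting these requirements is a confidence gate combined with an operator stop control: low-confidence outputs are routed to a human, and a person can halt automated decisions entirely. The threshold and interfaces in the sketch below are illustrative assumptions.

```python
import threading

# A stop flag an operator can set at any time (the Article 14 "interrupt" measure).
stop_requested = threading.Event()


def decide(prediction: str, confidence: float,
           review_threshold: float = 0.8) -> str:
    """Route low-confidence outputs to a human instead of acting automatically."""
    if stop_requested.is_set():
        return "HALTED: operator interrupted the system"
    if confidence < review_threshold:
        return f"ESCALATED to human review (confidence {confidence:.2f})"
    return f"AUTOMATED decision: {prediction}"


print(decide("approve", 0.93))   # acted on automatically
print(decide("reject", 0.55))    # escalated to a human
stop_requested.set()             # operator presses the stop control
print(decide("approve", 0.99))   # halted regardless of confidence
```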
Accuracy, Robustness, and Cybersecurity (Article 15)
Achieve appropriate levels of accuracy, robustness, and cybersecurity throughout the lifecycle. The system must be resilient to errors, faults, and attempts at manipulation by unauthorised third parties. Accuracy levels must be declared in instructions for use.
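Declared accuracy and robustness levels are easier to defend when backed by repeatable tests. The sketch below checks that accuracy does not collapse under small input perturbations; the noise scale and tolerated drop are illustrative, and model.predict stands in for whatever inference API your system actually exposes.

```python
import numpy as np


def robustness_check(model, X: np.ndarray, y: np.ndarray,
                     noise_scale: float = 0.01,
                     max_accuracy_drop: float = 0.05) -> bool:
    """Fail if accuracy under Gaussian input noise drops more than tolerated."""
    rng = np.random.default_rng(seed=0)  # fixed seed keeps the test repeatable

    baseline = np.mean(model.predict(X) == y)
    noisy = X + rng.normal(scale=noise_scale, size=X.shape)
    perturbed = np.mean(model.predict(noisy) == y)

    return (baseline - perturbed) <= max_accuracy_drop
```

Running such checks in CI, and archiving the results with the technical documentation, also supports the Article 11 and Article 17 obligations.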
Quality Management System (Article 17)
Put in place a quality management system that ensures compliance in a systematic and documented manner. This includes strategies for regulatory compliance, design and development procedures, testing and validation procedures, data management practices, and a post-market monitoring system.
Conformity Assessment (Articles 43–49)
Before placing a high-risk AI system on the market, undergo a conformity assessment to verify compliance with all requirements. Depending on the use case, this involves either internal control (Annex VI) or assessment of the quality management system and technical documentation by a notified body (Annex VII). Biometric systems require notified-body assessment unless the provider has fully applied harmonised standards or common specifications.
EU Declaration of Conformity (Article 47)
Draw up a written EU declaration of conformity for each high-risk AI system, identifying the provider, the system, the applicable requirements, and the conformity assessment procedures followed. Keep the declaration available for national authorities for 10 years after the system is placed on the market.
EU Database Registration (Articles 49 and 71)
Register the high-risk AI system in the EU database established under Article 71 before placing it on the market or putting it into service. The registration must include the provider's name, the system's intended purpose, its risk classification, and the status of the conformity assessment.
Post-Market Monitoring (Article 72)
Establish a post-market monitoring system proportionate to the nature and risks of the AI system. Actively and systematically collect, document, and analyse relevant data on performance throughout the system's lifetime, and report serious incidents to the market surveillance authorities of the Member States where the incident occurred (Article 73).
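Post-market monitoring plans commonly include automated drift checks on live traffic. The sketch below compares live prediction scores against a reference window using the population stability index (PSI); the 0.2 alert threshold is a widely used rule of thumb, not a figure from the Act.

```python
import numpy as np


def psi(reference: np.ndarray, live: np.ndarray, bins: int = 10) -> float:
    """Population stability index between reference and live score samples."""
    edges = np.histogram_bin_edges(reference, bins=bins)
    ref_pct = np.histogram(reference, bins=edges)[0] / len(reference)
    live_pct = np.histogram(live, bins=edges)[0] / len(live)
    # Clip to avoid division by zero and log(0) in sparse bins.
    ref_pct = np.clip(ref_pct, 1e-6, None)
    live_pct = np.clip(live_pct, 1e-6, None)
    return float(np.sum((live_pct - ref_pct) * np.log(live_pct / ref_pct)))


rng = np.random.default_rng(1)
reference = rng.normal(0.6, 0.10, 5000)  # scores at validation time
live = rng.normal(0.5, 0.15, 5000)       # scores observed in production
if psi(reference, live) > 0.2:           # common rule-of-thumb alert level
    print("Drift detected: investigate and document per the monitoring plan")
```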
Getting Started
Compliance is not a single event — it is a continuous process that begins with understanding your risk classification and extends through design, deployment, and post-market monitoring. The first step is determining whether your system qualifies as high-risk.
Start with classification
Determine your AI system's risk level and understand which obligations apply.