EU AI Act Timeline 2026
Last updated: February 2026
The EU AI Act (Regulation 2024/1689) follows a phased implementation schedule, with different provisions becoming applicable at different times. Understanding these deadlines is critical for planning your compliance strategy. Here is the complete timeline.
1 August 2024 — Entry into Force
The AI Act was published in the Official Journal of the EU on 12 July 2024 and entered into force on 1 August 2024. This started the clock on all transitional periods.
2 February 2025 — Prohibited Practices Apply
Six months after entry into force, the ban on prohibited AI practices (Article 5) became enforceable. Organisations must have ceased all banned practices by this date, including social scoring, subliminal manipulation, and real-time remote biometric identification in publicly accessible spaces for law enforcement purposes (subject to narrow exceptions).
2 August 2025 — GPAI Rules and Governance
Twelve months after entry into force, obligations for general-purpose AI (GPAI) models took effect (Articles 51–56). Providers of GPAI models must comply with transparency requirements, and providers of GPAI models with systemic risk must conduct model evaluations and report serious incidents. Governance structures, including the AI Office, AI Board, and advisory forum, also became operational.
2 August 2026 — High-Risk Obligations Apply
Twenty-four months after entry into force, the core provisions for high-risk AI systems become applicable. This includes all obligations under Articles 9–15 (risk management, data governance, technical documentation, record-keeping, transparency, human oversight, and accuracy, robustness and cybersecurity), conformity assessments, EU database registration, and post-market monitoring. This is the most significant compliance deadline for most organisations.
2 August 2027 — Annex I High-Risk Systems
Thirty-six months after entry into force, obligations apply to high-risk AI systems that are safety components of products covered by existing EU harmonisation legislation listed in Annex I (e.g., medical devices, machinery, aviation). These systems receive an additional year because they are already subject to existing sectoral conformity assessment procedures that need to be adapted.
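The milestone dates above all follow the same pattern: each provision becomes applicable the day after the 6-, 12-, 24-, or 36-month anniversary of entry into force. A minimal Python sketch of that arithmetic (the milestone labels are our own shorthand, not statutory text):

```python
from datetime import date, timedelta

# Entry into force of Regulation 2024/1689.
ENTRY_INTO_FORCE = date(2024, 8, 1)

def applicability_date(months: int) -> date:
    """Date a provision becomes applicable: the day after the
    N-month anniversary of entry into force."""
    total = ENTRY_INTO_FORCE.month - 1 + months
    anniversary = ENTRY_INTO_FORCE.replace(
        year=ENTRY_INTO_FORCE.year + total // 12,
        month=total % 12 + 1,
    )
    return anniversary + timedelta(days=1)

MILESTONES = {
    6: "Prohibited practices (Article 5)",
    12: "GPAI rules and governance (Articles 51-56)",
    24: "High-risk obligations (Annex III systems)",
    36: "Annex I high-risk systems",
}

for months, label in sorted(MILESTONES.items()):
    print(f"{applicability_date(months).isoformat()}  {label}")
```

Running this reproduces the four deadlines in the timeline: 2025-02-02, 2025-08-02, 2026-08-02, and 2027-08-02.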
What You Should Be Doing Now
With the August 2026 deadline approaching, organisations deploying or providing high-risk AI systems should be actively working on compliance. Key steps include: classifying your AI systems, conducting gap analyses against the requirements, building out risk management and quality management systems, preparing technical documentation, and planning for conformity assessment.
The first step is understanding which risk category your system falls into.
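The Act's risk taxonomy can be summarised as four tiers. A small illustrative sketch (the tier names follow the Act's structure; the descriptions are our own shorthand, not statutory text):

```python
from enum import Enum

class RiskTier(Enum):
    """The AI Act's four risk tiers, highest to lowest."""
    PROHIBITED = "banned outright under Article 5"
    HIGH = "full obligations: risk management, conformity assessment, registration"
    LIMITED = "transparency duties, e.g. disclosing AI-generated content"
    MINIMAL = "no mandatory obligations; voluntary codes of conduct"

for tier in RiskTier:
    print(f"{tier.name}: {tier.value}")
```

Which tier applies determines everything that follows, so classification is the step to get right first.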
Don't wait until August
Classify your AI system now and start planning for compliance.
Check Your AI System's Risk Level