The EU AI Act Explained: Everything You Need to Know
Last updated: February 2026
The EU AI Act (Regulation (EU) 2024/1689) is the world's first comprehensive legal framework for artificial intelligence. Adopted by the European Parliament in March 2024 and entering into force on 1 August 2024, it introduces binding rules for anyone who develops, deploys, or distributes AI systems that affect people in the European Union — regardless of where the company is based.
This guide breaks down the Act's core concepts: the risk-based classification system, what's banned, what's regulated, who's responsible, and the deadlines you need to meet.
The Core Idea: Risk-Based Regulation
Instead of regulating all AI the same way, the EU AI Act takes a risk-based approach. The higher the potential harm an AI system can cause, the stricter the rules. Systems are classified into four tiers:
- Prohibited — Certain AI practices are banned entirely.
- High risk — Subject to extensive compliance requirements.
- Limited risk — Must meet specific transparency obligations.
- Minimal risk — Largely unregulated; voluntary codes of conduct encouraged.
This tiered structure allows innovation in low-risk areas while ensuring strong safeguards where AI can directly affect people's lives, rights, and safety.
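The tier structure can be pictured as a simple lookup from risk class to headline obligation. This is an illustrative sketch only, not a compliance tool; the tier names come from the Act, but the `RiskTier` enum and the `OBLIGATIONS` mapping are our own shorthand:

```python
from enum import Enum

class RiskTier(Enum):
    """The four risk tiers of the EU AI Act (illustrative shorthand)."""
    PROHIBITED = "prohibited"  # banned outright (Article 5)
    HIGH = "high"              # extensive compliance duties (Chapter III)
    LIMITED = "limited"        # transparency obligations (Article 50)
    MINIMAL = "minimal"        # voluntary codes of conduct only

# Hypothetical one-line summary of each tier's headline obligation
OBLIGATIONS = {
    RiskTier.PROHIBITED: "may not be placed on the EU market",
    RiskTier.HIGH: "conformity assessment, CE marking, registration",
    RiskTier.LIMITED: "disclose AI use to affected people",
    RiskTier.MINIMAL: "no mandatory requirements",
}

print(OBLIGATIONS[RiskTier.HIGH])
```

The real classification, of course, depends on the detailed criteria in Article 5, Chapter III, and Annex III, covered in the sections below.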
Prohibited AI Practices (Article 5)
The Act outright bans AI systems that pose an unacceptable risk to fundamental rights. These prohibitions apply from 2 February 2025. Banned practices include:
- Subliminal or deceptive manipulation — AI that uses techniques below conscious awareness or deliberately deceptive methods to distort someone's behaviour, causing them significant harm.
- Exploiting vulnerable groups — Systems that target people based on their age, disability, or socio-economic circumstances to manipulate their behaviour harmfully.
- Social scoring — Evaluating or classifying people based on their social behaviour or personal characteristics, leading to unjustified or disproportionate treatment in unrelated contexts.
- Predictive policing based on profiling — Assessing someone's risk of committing a crime based solely on profiling or personality analysis, without objective, verifiable facts.
- Untargeted facial image scraping — Building or expanding facial recognition databases by indiscriminately scraping images from the internet or CCTV footage.
- Emotion recognition at work and school — Inferring people's emotional states in workplaces or educational institutions, except where the system is intended for medical or safety reasons.
- Biometric categorisation by sensitive attributes — Classifying people by race, political opinions, religion, sexual orientation, or other protected characteristics using biometric data.
Real-time remote biometric identification in public spaces by law enforcement is also prohibited, with narrow exceptions for finding missing persons, preventing imminent threats, or identifying suspects in serious crimes — each requiring prior authorisation by a judicial or independent administrative authority and a fundamental rights impact assessment.
High-Risk AI Systems (Chapter III)
The Act identifies two pathways through which an AI system becomes "high-risk":
Pathway 1: Safety components (Annex I)
If your AI system is a safety component of a product already regulated under EU product safety legislation — medical devices, machinery, toys, lifts, vehicles, aviation systems — and that product requires a third-party conformity assessment, your AI system is automatically classified as high-risk.
Pathway 2: Sensitive use cases (Annex III)
AI systems deployed in the following domains are considered high-risk:
- Biometrics — Remote identification, categorisation, emotion recognition.
- Critical infrastructure — Road traffic, water, gas, heating, electricity management.
- Education — Admissions, learning assessment, test proctoring, education-level evaluation.
- Employment — Recruitment, CV screening, interview evaluation, performance monitoring, promotion and termination decisions.
- Essential services — Credit scoring, insurance risk assessment, public benefits eligibility, emergency call prioritisation.
- Law enforcement — Risk assessment, polygraph analysis, evidence evaluation, crime analytics.
- Migration & borders — Visa and asylum processing, risk assessment, document verification.
- Justice & democracy — Legal research assistance, alternative dispute resolution, election influence tools.
Exemptions from high-risk (Article 6(3))
Even if an AI system falls under Annex III, it may qualify for an exemption if it:
- Performs a narrow, well-defined procedural task;
- Improves the result of a previously completed human activity;
- Detects patterns without replacing human judgement; or
- Performs preparatory work for a human decision-maker.
However, if the system profiles individuals (automated assessment of personal characteristics like work performance, health, or behaviour), these exemptions do not apply — the system remains high-risk.
Providers claiming an exemption must document their reasoning before placing the system on the market and provide this documentation to authorities upon request.
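The exemption logic in Article 6(3) amounts to a short decision rule: meeting any one of the four conditions can exempt a system, but profiling overrides them all. A minimal sketch of that rule — the function name and flag names are illustrative, not terms from the Act:

```python
def annex_iii_exempt(
    narrow_procedural_task: bool,
    improves_prior_human_activity: bool,
    detects_patterns_only: bool,
    preparatory_work_only: bool,
    profiles_individuals: bool,
) -> bool:
    """Illustrative check of the Article 6(3) exemption conditions.

    An Annex III system may escape the high-risk class if it meets at
    least one of the four conditions -- unless it profiles individuals,
    in which case it remains high-risk regardless.
    """
    if profiles_individuals:
        return False  # profiling removes the exemption entirely
    return any([
        narrow_procedural_task,
        improves_prior_human_activity,
        detects_patterns_only,
        preparatory_work_only,
    ])

# A CV pattern-detection tool that also profiles candidates stays high-risk:
print(annex_iii_exempt(False, False, True, False, profiles_individuals=True))  # False
```

In practice the assessment is a documented legal judgement, not a boolean check — which is exactly why the Act requires providers to record their reasoning.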
Obligations for High-Risk Systems (Articles 8–17 and related provisions)
Providers of high-risk AI systems must comply with twelve core requirements:
- Risk management system (Art. 9) — Continuous identification, analysis, and mitigation of risks throughout the system's lifecycle.
- Data governance (Art. 10) — Training data must be relevant, representative, and as free of errors and bias as possible.
- Technical documentation (Art. 11) — Detailed documentation of design, development, and performance to demonstrate compliance.
- Record-keeping (Art. 12) — Automatic logging of events to enable traceability and auditing.
- Transparency & instructions for use (Art. 13) — Clear information for deployers about capabilities, limitations, and oversight measures.
- Human oversight (Art. 14) — Design must allow effective human supervision and the ability to override or stop the system.
- Accuracy, robustness & cybersecurity (Art. 15) — Appropriate performance levels and resilience against errors and attacks.
- Quality management system (Art. 17) — Organisational procedures covering compliance, testing, risk management, and incident reporting.
- Conformity assessment (Art. 43) — Internal self-assessment for most systems; third-party assessment for certain biometric systems.
- CE marking (Art. 48) — Affixed to indicate conformity with the Act.
- EU database registration (Art. 49) — Public registration before market placement.
- Post-market monitoring (Art. 72) — Ongoing performance surveillance and serious incident reporting within 15 days.
Limited Risk: Transparency Obligations (Article 50)
Some AI systems don't qualify as high-risk but still require transparency measures:
- Chatbots — Users must be told they are interacting with AI, unless it's obvious from context.
- Content generation — AI-generated text, images, audio, and video must carry machine-readable labels.
- Deepfakes — Synthetically generated or manipulated content depicting real people or events must be disclosed.
- Emotion recognition & biometric categorisation — Individuals must be informed when these systems are applied to them.
General-Purpose AI Models (GPAI)
The Act introduces specific rules for general-purpose AI models — foundation models like large language models that can be adapted for many downstream applications.
All GPAI model providers must:
- Prepare and maintain technical documentation;
- Provide instructions and information to downstream system integrators;
- Comply with the EU Copyright Directive;
- Publish a summary of training data content.
GPAI models with systemic risk (those trained with more than 10²⁵ floating-point operations of compute) face additional obligations:
- Conduct and document model evaluations and adversarial testing;
- Assess and mitigate systemic risks;
- Track and report serious incidents to the AI Office;
- Ensure adequate cybersecurity protection.
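The 10²⁵ FLOP threshold can be compared against a rough training-compute estimate. A common heuristic — not part of the Act — puts training compute at about 6 FLOP per parameter per training token ("6 × N × D"); both the heuristic and the model sizes below are illustrative:

```python
# Rough training-compute estimate via the common "6 * N * D" heuristic
# (6 FLOP per parameter per training token). The heuristic and the
# example figures are illustrative -- they are NOT part of the Act.
THRESHOLD_FLOP = 1e25  # Article 51 presumption of systemic risk

def estimated_training_flop(n_params: float, n_tokens: float) -> float:
    return 6.0 * n_params * n_tokens

# Hypothetical 405B-parameter model trained on 15 trillion tokens:
compute = estimated_training_flop(405e9, 15e12)
print(f"~{compute:.1e} FLOP; systemic-risk presumption: {compute > THRESHOLD_FLOP}")
```

By this rough estimate, today's largest frontier models sit above the threshold, while most smaller open models fall well below it; the AI Office can also designate models as systemic-risk on other grounds.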
Open-source GPAI models with publicly available weights only need to comply with copyright and training data transparency, unless they present systemic risk.
Who Does the Act Apply To?
The AI Act has broad extraterritorial reach, similar to the GDPR:
- Providers (developers) who place AI systems on the EU market, whether based in the EU or elsewhere.
- Deployers (users) who use AI systems in a professional capacity within the EU.
- Third-country operators whose AI system's output is used within the EU.
- Importers and distributors who bring AI systems into the EU market.
The final text of the Act uses "deployer" where earlier drafts said "user"; in both cases this means an organisation deploying AI professionally, not an end-consumer.
Enforcement & Penalties
The penalty structure scales with the severity of the violation. In each tier, the applicable maximum is the higher of the two figures:
- Prohibited practices — Up to €35 million or 7% of global annual turnover.
- High-risk non-compliance — Up to €15 million or 3% of global annual turnover.
- Providing incorrect or misleading information to authorities — Up to €7.5 million or 1.5% of global annual turnover.
For SMEs and startups, fines are capped at the lower of the two figures, providing some proportionality. Enforcement is handled by national authorities in each EU member state, while the EU AI Office oversees GPAI model compliance.
Key Dates & Timeline
- 1 August 2024 — AI Act enters into force.
- 2 February 2025 — Prohibited practices apply; AI literacy requirements take effect.
- 2 August 2025 — GPAI model obligations apply; governance structure operational.
- 2 August 2026 — Most high-risk obligations apply (Annex III systems).
- 2 August 2027 — High-risk obligations for Annex I products (safety components in regulated products).
What Should You Do Now?
With the 2 August 2026 deadline approaching for most high-risk obligations, companies should:
- Classify your AI systems — Determine which risk category each system falls into.
- Assess gaps — Compare current practices against the Act's requirements.
- Build compliance infrastructure — Establish risk management, documentation, and monitoring processes.
- Engage legal counsel — The Act is complex; professional guidance ensures you don't miss critical obligations.
Find out your AI system's risk level
Our free tool classifies your system in under 3 minutes using the official EU AI Act decision tree.
Start Free Assessment