EU AI Act Risk Classification Guide
Last updated: February 2026
The EU AI Act (Regulation (EU) 2024/1689) establishes a risk-based framework for regulating artificial intelligence systems within the European Union. Rather than applying a one-size-fits-all approach, the regulation classifies AI systems into four distinct risk tiers, each carrying different obligations for providers and deployers.
Unacceptable Risk — Prohibited Practices (Article 5)
At the top of the pyramid are AI practices that pose a clear threat to fundamental rights and are outright banned. These include:
- Subliminal manipulation — AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, to materially distort behaviour in a way that causes or is likely to cause significant harm.
- Exploitation of vulnerabilities — Systems exploiting vulnerabilities due to age, disability, or a specific social or economic situation to materially distort a person's behaviour in a way that causes or is likely to cause significant harm.
- Social scoring — Systems that evaluate or classify people based on social behaviour or personal characteristics, where the resulting score leads to detrimental treatment in contexts unrelated to the one in which the data was collected, or treatment that is unjustified or disproportionate. Unlike earlier drafts, the final Act applies this prohibition to private actors as well as public authorities.
- Real-time remote biometric identification in publicly accessible spaces for law enforcement, except in narrowly defined circumstances.
- Emotion recognition in the workplace and educational institutions, except for medical or safety reasons.
- Untargeted facial image scraping from the internet or CCTV to build or expand facial recognition databases.
Violations of prohibited practices carry the highest fines under the Act — up to €35 million or 7% of total worldwide annual turnover for the preceding financial year, whichever is higher (Article 99(3)).
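To make that ceiling concrete, here is a minimal Python sketch of the calculation. The "whichever is higher" rule follows Article 99(3); the function name and the example turnover figure are made up for illustration.

```python
def max_article5_fine(annual_turnover_eur: float) -> float:
    """Upper bound on fines for prohibited-practice violations (Article 99(3)).

    The cap is the HIGHER of a fixed EUR 35 million or 7% of total
    worldwide annual turnover for the preceding financial year.
    """
    return max(35_000_000.0, 0.07 * annual_turnover_eur)

# Hypothetical undertaking with EUR 2 billion turnover:
# 7% = EUR 140 million, which exceeds the fixed EUR 35 million floor.
print(f"{max_article5_fine(2_000_000_000):,.0f}")  # 140,000,000
```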
High Risk (Annex III & Article 6)
High-risk AI systems are those used in sensitive domains where failures could significantly impact health, safety, or fundamental rights. Article 6 defines two pathways for a system to be classified as high-risk:
- Safety components — AI systems that are safety components of products (or are themselves products) covered by the EU harmonisation legislation listed in Annex I (e.g., machinery, medical devices, toys, lifts) and that are subject to third-party conformity assessment.
- Annex III use cases — AI systems deployed in specific domains including biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, and administration of justice.
Note that Article 6(3) provides a derogation: an Annex III system that does not pose a significant risk of harm (for example, because it only performs a narrow procedural task) can escape the high-risk classification, unless it performs profiling of natural persons.
High-risk systems face the most comprehensive set of obligations, including risk management systems, data governance, technical documentation, transparency, human oversight, and accuracy, robustness, and cybersecurity requirements (Articles 9–15).
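To make the two pathways concrete, here is a rough Python sketch of the Article 6 decision logic, including the Article 6(3) derogation described above. Every boolean flag is a hypothetical stand-in for a legal test that requires careful analysis in practice; this is a sketch, not a compliance determination.

```python
def is_high_risk(
    is_annex_i_safety_component: bool,    # Article 6(1): safety component of an Annex I product
    matches_annex_iii_use_case: bool,     # Article 6(2): falls within an Annex III domain
    poses_significant_risk: bool = True,  # input to the Article 6(3) derogation
    performs_profiling: bool = False,     # profiling systems stay high-risk regardless
) -> bool:
    if is_annex_i_safety_component:
        return True
    if matches_annex_iii_use_case:
        # Article 6(3): an Annex III system escapes classification only if it
        # does not pose a significant risk of harm and does not profile people.
        return poses_significant_risk or performs_profiling
    return False
```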
Limited Risk — Transparency Obligations (Article 50)
Limited-risk AI systems are subject to specific transparency requirements. These include:
- Chatbots and conversational AI — Users must be informed they are interacting with an AI system, unless this is obvious from the circumstances.
- Deepfakes and synthetic content — AI-generated or manipulated images, audio, or video must be clearly labelled as artificially generated or manipulated.
- Emotion recognition and biometric categorisation — Individuals must be informed when such systems are applied to them.
- AI-generated text published on matters of public interest — Must be labelled as artificially generated, unless the text has undergone human review or editorial control and a natural or legal person holds editorial responsibility for its publication.
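One way to keep track of these duties in an internal compliance checklist is a simple lookup table, sketched below in Python. The category keys and the paraphrased duties are illustrative shorthand, not the Act's own terms.

```python
# Simplified mapping of Article 50 triggers to disclosure duties.
# Category names are illustrative shorthand, not terms from the Act.
TRANSPARENCY_DUTIES: dict[str, str] = {
    "chatbot": "Inform users they are interacting with an AI system, "
               "unless obvious from the circumstances.",
    "deepfake": "Label images, audio, or video as artificially "
                "generated or manipulated.",
    "emotion_recognition": "Inform exposed individuals that the system "
                           "is being applied to them.",
    "public_interest_text": "Label the text as AI-generated, unless it "
                            "has undergone human editorial review.",
}

def disclosure_for(category: str) -> str:
    return TRANSPARENCY_DUTIES.get(category, "No Article 50 duty identified.")
```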
Minimal Risk
The vast majority of AI systems currently in use fall into this category — AI-enabled video games, spam filters, inventory management systems, and similar applications. These systems are not subject to specific regulatory obligations under the AI Act, though providers are encouraged to voluntarily adopt codes of conduct (Article 95).
Even minimal-risk systems must still comply with existing EU law, including GDPR, product safety directives, and sector-specific regulations.
How to Determine Your Risk Level
Classifying your AI system correctly is the critical first step toward compliance. The risk level determines your obligations, timeline, and potential penalties. Key factors include the system's intended purpose, the domain in which it operates, whether it interacts directly with natural persons, and the severity of potential harm.
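As a rough sketch of how such a stepwise assessment is ordered (prohibited practices first, then high risk, then transparency duties, then minimal risk by default), consider the following Python outline. Each boolean input is a placeholder for a full questionnaire against the regulatory text, not a substitute for legal analysis.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "prohibited (Article 5)"
    HIGH = "high-risk (Article 6 / Annex III)"
    LIMITED = "limited risk (Article 50)"
    MINIMAL = "minimal risk"

def classify(uses_prohibited_practice: bool,
             is_high_risk_system: bool,
             triggers_transparency_duty: bool) -> RiskTier:
    # Tiers are checked in order of severity; the first match wins.
    if uses_prohibited_practice:
        return RiskTier.UNACCEPTABLE
    if is_high_risk_system:
        return RiskTier.HIGH
    if triggers_transparency_duty:
        return RiskTier.LIMITED
    return RiskTier.MINIMAL
```

Note that the tiers are not mutually exclusive in practice: a high-risk system can also carry Article 50 transparency duties, which is why the checks run in order of severity.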
Our free assessment tool walks you through the classification process step by step, asking targeted questions based on the actual regulatory text to determine where your system falls.
Not sure where your AI system falls?
Answer a few questions and get your risk classification in under 3 minutes.
Check Your AI System's Risk Level