High-Risk AI Systems Explained

Last updated: February 2026

Annex III of the EU AI Act (Regulation 2024/1689) defines eight specific domains in which AI systems are classified as high-risk. Systems in these categories must comply with the full set of obligations under Articles 9–15, including risk management, data governance, transparency, and human oversight. Below is a breakdown of each domain.

1. Biometrics (Annex III, Point 1)

AI systems used for remote biometric identification (other than those prohibited under Article 5, and excluding systems used solely for biometric verification, i.e. confirming that a person is who they claim to be), biometric categorisation based on sensitive attributes, and emotion recognition systems. This includes facial recognition systems that identify individuals by matching them against a database, such as scanning people in public spaces against a watchlist, as well as categorisation systems that infer race, political opinions, or health status from biometric data.

2. Critical Infrastructure (Annex III, Point 2)

AI systems used as safety components in the management and operation of critical infrastructure, including road traffic, water supply, gas, heating, and electricity. For example, an AI system that controls traffic signal timing at intersections or manages load balancing on an electrical grid falls within this category. The key criterion is whether a failure of the AI system could endanger life, health, or cause significant property or environmental damage.

3. Education and Vocational Training (Annex III, Point 3)

AI systems that determine access to or admission into educational and vocational training institutions, evaluate learning outcomes, assess the appropriate level of education for an individual, or monitor and detect prohibited behaviour of students during tests. An AI-powered admissions screening tool, automated essay grading system, or AI proctoring software would all qualify as high-risk under this domain.

4. Employment, Workers Management, and Access to Self-Employment (Annex III, Point 4)

AI systems used for recruitment and selection (CV screening, interview scoring), decisions on promotion and termination, task allocation based on individual behaviour or traits, and monitoring and evaluating the performance and behaviour of workers. This is one of the broadest high-risk categories, covering everything from automated resume screening tools to workforce analytics platforms that track employee productivity.

5. Access to Essential Private and Public Services (Annex III, Point 5)

AI systems used to evaluate eligibility for public assistance benefits and services, assess creditworthiness (except for detecting financial fraud), evaluate and classify emergency calls, perform risk assessment and pricing for life and health insurance, and carry out patient triage in emergency healthcare. A credit scoring algorithm, an AI system that prioritises emergency dispatch, or an insurance underwriting model powered by machine learning would all be classified as high-risk.

6. Law Enforcement (Annex III, Point 6)

AI systems used by law enforcement as polygraphs or similar tools to detect deception, to assess the risk of a natural person offending or reoffending, to evaluate the reliability of evidence in criminal investigations, to predict the occurrence or reoccurrence of criminal offences on the basis of profiling, and for crime analytics regarding natural persons. Predictive policing tools and AI-driven risk assessment instruments used in bail or sentencing decisions fall under this domain.

7. Migration, Asylum, and Border Control (Annex III, Point 7)

AI systems used as polygraphs or similar tools, to assess risks posed by persons entering the territory of a Member State, to assist in the examination of applications for asylum, visa, and residence permits, and for detecting, recognising, or identifying persons in the context of migration. Automated document verification systems at border checkpoints and AI tools that assist in processing asylum applications are examples.

8. Administration of Justice and Democratic Processes (Annex III, Point 8)

AI systems used to assist judicial authorities in researching and interpreting facts and law and in applying the law to a concrete set of facts, as well as AI systems used to influence the outcome of elections or referendums or the voting behaviour of natural persons. This covers AI legal research tools that go beyond information retrieval to recommend outcomes, as well as political micro-targeting systems.

Key Obligations for High-Risk Systems

Providers of high-risk AI systems must:

- establish a risk management system (Article 9);
- meet data and data governance requirements (Article 10);
- produce technical documentation (Article 11);
- implement record-keeping and logging (Article 12);
- ensure transparency and provision of information to deployers (Article 13);
- enable human oversight (Article 14); and
- meet accuracy, robustness, and cybersecurity requirements (Article 15).
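Because these obligations map one-to-one onto Articles 9–15, a compliance tracker can represent them as a simple checklist. The sketch below is a hypothetical illustration (the `PROVIDER_OBLIGATIONS` table and `outstanding` helper are our own shorthand, not terms from the regulation):

```python
# Provider obligations for high-risk AI systems, keyed by Article number.
# Labels are abbreviations of the Article titles, for illustration only.
PROVIDER_OBLIGATIONS = {
    9: "Risk management system",
    10: "Data and data governance",
    11: "Technical documentation",
    12: "Record-keeping and logging",
    13: "Transparency and information to deployers",
    14: "Human oversight",
    15: "Accuracy, robustness and cybersecurity",
}

def outstanding(completed: set[int]) -> list[str]:
    """List obligations not yet evidenced, in Article order."""
    return [
        f"Article {article}: {label}"
        for article, label in sorted(PROVIDER_OBLIGATIONS.items())
        if article not in completed
    ]
```

A provider that has evidenced Articles 9–13 would see Articles 14 and 15 remaining; an empty result means every listed obligation has been addressed.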

Deployers (organisations using high-risk AI systems) also have specific obligations under Article 26, including implementing appropriate human oversight measures and monitoring the system in operation.

Does your AI system fall into a high-risk category?

Our free assessment walks you through the classification logic from the regulation itself.

Check Your AI System's Risk Level