High-Risk AI: Does Your System Qualify?
The EU AI Act’s compliance burden scales with risk. Minimal-risk systems face almost no obligations. High-risk systems face extensive requirements: risk management, technical documentation, conformity assessments, human oversight, monitoring, and more. The difference between “high-risk” and “not high-risk” can mean the difference between a few transparency disclosures and a six-figure compliance programme.
So the classification matters. And it’s less clear-cut than the Act’s structure might suggest.
How the Act defines high-risk
Article 6 establishes two pathways to high-risk classification:
Pathway 1: Safety components of regulated products (Article 6(1))
If your AI system is a safety component of a product that falls under EU harmonised legislation listed in Annex I (medical devices, machinery, toys, lifts, vehicles, aviation systems) and that product is subject to third-party conformity assessment, the AI system is high-risk.
This pathway is relatively straightforward. If you’re building AI into a medical device or an aircraft component, you already know you’re in a heavily regulated space.
Pathway 2: Standalone high-risk use cases (Article 6(2), Annex III)
This is where most businesses need to pay attention. Annex III lists specific use cases that the Act deems high-risk, grouped into eight categories:
- Biometrics. Remote biometric identification, biometric categorisation by sensitive attributes (race, political opinions, trade union membership, religious beliefs, sex life, sexual orientation), and emotion recognition
- Critical infrastructure. AI systems used as safety components in the management and operation of critical digital infrastructure, road traffic, and the supply of water, gas, heating, or electricity
- Education and vocational training. Systems that determine access to education, assess learning outcomes, determine the appropriate level of education for an individual, or monitor prohibited behaviour during tests
- Employment. Systems used to recruit, screen, filter, or evaluate candidates, or to make decisions affecting employment relationships (promotion, termination, task allocation, performance monitoring)
- Access to essential services. Systems used to evaluate creditworthiness or establish credit scores, assess risk and set premiums for life and health insurance, evaluate eligibility for public benefits or social services, or dispatch or prioritise emergency services
- Law enforcement. Polygraph or similar tools, assessments of the risk of a person becoming a victim of crime, evaluation of evidence reliability, criminal profiling, and crime analytics
- Migration, asylum, and border control. Polygraphs, risk assessments, document authentication, and the examination of asylum, visa, and residence permit applications
- Administration of justice and democratic processes. Systems used to research and interpret facts and law or to apply the law to facts, and systems intended to influence the outcome of elections or referendums
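For teams keeping an AI inventory, it can help to encode these categories as tags on each system record. The sketch below is illustrative only; the enum names and the AISystem record are assumptions for this example, not terminology from the Act.

```python
# Illustrative sketch: tagging inventory entries against the Annex III
# categories summarised above. Names here are assumptions, not Act terms.
from dataclasses import dataclass, field
from enum import Enum


class AnnexIIICategory(Enum):
    BIOMETRICS = "biometrics"
    CRITICAL_INFRASTRUCTURE = "critical infrastructure"
    EDUCATION = "education and vocational training"
    EMPLOYMENT = "employment"
    ESSENTIAL_SERVICES = "access to essential services"
    LAW_ENFORCEMENT = "law enforcement"
    MIGRATION_BORDER = "migration, asylum, and border control"
    JUSTICE = "administration of justice and democratic processes"


@dataclass
class AISystem:
    name: str
    purpose: str
    # Tag every category the system plausibly touches, including secondary
    # effects (e.g. a scheduling tool that also produces performance scores).
    annex_iii_tags: set[AnnexIIICategory] = field(default_factory=set)


shift_planner = AISystem(
    name="shift-planner",
    purpose="Optimise rotas; also flags low-productivity staff to managers",
    annex_iii_tags={AnnexIIICategory.EMPLOYMENT},
)
```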
The grey areas
The Annex III categories sound precise, but the boundaries are arguable in practice. Here are the common areas of uncertainty:
Customer service chatbots
A chatbot that answers product questions is not high-risk. But what if that chatbot handles complaints that affect whether a customer receives a refund? What if it triages support requests in a way that determines service priority? The question is whether the AI system’s outputs materially influence a decision that the Act considers high-risk.
In most cases, a customer service chatbot is limited-risk (transparency obligations only). But if it’s making consequential decisions about people — determining eligibility, allocating resources, or gatekeeping access to services — look more carefully.
Recommendation engines
A recommendation engine that suggests products in an online shop is minimal-risk. A recommendation engine that determines which financial products a customer is shown, effectively pre-filtering their access to credit or insurance, could cross into high-risk territory under the “essential services” category.
The distinction often comes down to whether the system is influencing preference (low-risk) or restricting access (potentially high-risk).
People analytics and workforce management
AI-powered workforce management tools are a particularly grey area. An AI system that optimises shift scheduling is probably not high-risk. An AI system that evaluates employee performance, flags underperformers, or influences promotion decisions almost certainly is.
The Annex III employment category is broad: it covers systems “intended to be used to make decisions affecting terms of work-related contractual relationships, promotion and termination of work-related contractual relationships, to allocate tasks based on individual behaviour or personal traits or characteristics, or to monitor and evaluate the performance and behaviour of persons in such relationships.”
If your workforce management tool touches any of these areas, treat it as high-risk until you can demonstrate otherwise.
AI in marketing personalisation
Targeted advertising and marketing personalisation generally fall outside the high-risk categories. But Article 5's prohibition on AI that manipulates behaviour through subliminal, deceptive, or exploitative techniques means aggressive personalisation engines warrant a review: not for high-risk classification, but to ensure they don't fall into prohibited territory.
The Article 6(3) exception
Article 6(3) provides an important carve-out. Even if an AI system falls within an Annex III category, it is not considered high-risk if it does not pose a significant risk of harm to the health, safety, or fundamental rights of natural persons. Specifically, the system is exempt if it:
- Performs a narrow procedural task
- Improves the result of a previously completed human activity
- Detects decision-making patterns without replacing or influencing the human assessment
- Performs a preparatory task for an assessment relevant to the use cases in Annex III
This is a meaningful exception, but it requires careful analysis, and it never applies where the system performs profiling of natural persons. The provider must document why the exception applies, and the national competent authority can challenge that determination. If you intend to rely on this carve-out, document your reasoning thoroughly.
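One way to keep that analysis auditable is to record each Article 6(3) ground, the profiling question, and the written rationale in a single structure. The sketch below is illustrative only; the field names and helper function are assumptions, and the legal test remains the Act's own text.

```python
# Illustrative record of an Article 6(3) assessment. Field names and the
# helper are assumptions for this sketch, not terminology from the Act.
from dataclasses import dataclass


@dataclass
class Article63Assessment:
    narrow_procedural_task: bool
    improves_prior_human_activity: bool
    detects_patterns_without_influencing_review: bool
    preparatory_task_only: bool
    performs_profiling: bool   # profiling of natural persons stays high-risk
    rationale: str             # written justification a regulator may request


def exception_available(a: Article63Assessment) -> bool:
    """True only if a documented ground applies and the system does not
    perform profiling of natural persons."""
    if a.performs_profiling:
        return False
    if not a.rationale.strip():
        raise ValueError("Write down the reasoning before relying on the carve-out")
    return any([
        a.narrow_procedural_task,
        a.improves_prior_human_activity,
        a.detects_patterns_without_influencing_review,
        a.preparatory_task_only,
    ])
```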
How to assess your systems
For each AI system in your inventory, work through these questions:
1. Is it a safety component of a regulated product? If yes → high-risk under Article 6(1). Check Annex I for the relevant legislation.
2. Does it fall within an Annex III category? Map the system’s function against the eight categories. Consider not just its primary purpose but also its secondary effects. A system designed for scheduling that also generates performance scores touches the employment category.
3. Does the Article 6(3) exception apply? If the system performs a narrow procedural task, prepares information for human assessment without influencing it, or detects patterns without replacing human judgement, you may be able to claim the exception. Document your reasoning.
4. Who is affected, and how? High-risk classification is about impact on people. The more consequential the AI system’s outputs are for the individuals affected (their employment, access to services, education, safety), the more likely it’s high-risk.
5. What happens if the system is wrong? Consider the failure modes. If the system makes an error, what are the consequences? Incorrect product recommendations are a minor inconvenience. Incorrect creditworthiness assessments can prevent someone from getting a mortgage. The severity of potential errors is a strong indicator of risk level.
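Tied together, the walk-through can be expressed as a simple decision helper. Again, this is a sketch under the assumptions above, not a substitute for legal analysis; every name here is illustrative.

```python
# Illustrative decision helper mirroring the five questions above.
# All names are assumptions; keep the written answers with the result.
from dataclasses import dataclass
from enum import Enum


class RiskClass(Enum):
    HIGH_RISK_SAFETY_COMPONENT = "high-risk under Article 6(1)"
    HIGH_RISK_ANNEX_III = "high-risk under Article 6(2) / Annex III"
    NOT_HIGH_RISK = "not high-risk (check transparency and other duties separately)"


@dataclass
class ClassificationInput:
    safety_component_of_annex_i_product: bool   # question 1
    matches_annex_iii_category: bool             # question 2, incl. secondary effects
    article_6_3_exception_documented: bool       # question 3
    affected_persons_notes: str                  # question 4
    failure_mode_notes: str                      # question 5


def classify(c: ClassificationInput) -> RiskClass:
    if c.safety_component_of_annex_i_product:
        return RiskClass.HIGH_RISK_SAFETY_COMPONENT
    if c.matches_annex_iii_category and not c.article_6_3_exception_documented:
        return RiskClass.HIGH_RISK_ANNEX_III
    return RiskClass.NOT_HIGH_RISK
```

Questions 4 and 5 do not change the branch logic here; they belong in the record because they are what a regulator, or your own counsel, will want to see when reviewing the call.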
When you’re genuinely unsure
If your analysis leaves you uncertain, consider these approaches:
Err towards high-risk classification. The cost of wrongly classifying a system as high-risk is extra compliance work. The cost of wrongly classifying a high-risk system as not high-risk is regulatory exposure and potential fines of up to €15 million or 3% of global annual turnover, whichever is higher.
Seek a legal opinion. For systems where the classification has significant business implications, especially if high-risk obligations would require substantial changes to operations, invest in a legal analysis. The Act is new, and interpretive guidance from the EU AI Office and national authorities is still developing.
Watch for standards and guidance. The European Commission and national competent authorities will publish guidance documents. The EU AI Office has already started issuing clarifications. These will narrow the grey areas over time.
Document your analysis regardless. Whether you conclude a system is high-risk or not, document the reasoning. If a regulator disagrees with your classification, showing a thorough, good-faith analysis is far better than having no documentation at all.
The practical impact
High-risk classification triggers a substantial set of obligations. For providers: a risk management system, data governance, technical documentation, conformity assessment, quality management, and registration in the EU database. For deployers: human oversight, monitoring, logging, incident reporting, and, in some cases, a Fundamental Rights Impact Assessment.
These aren’t trivial. They require organisational investment, process changes, and ongoing maintenance. But they’re proportionate to the risk. Systems that can significantly affect people’s lives, livelihoods, and rights deserve more rigorous governance.
The businesses that will handle this best are the ones that classify carefully now, rather than discovering their obligations after a complaint or audit. The classification decision is the first real compliance step. Everything else follows from it.