EU AI Act Article 6(3): The High-Risk Exemption That Could Get You Fined
If your AI system falls within one of the Annex III high-risk use cases, there is a way out. Article 6(3) of the EU AI Act says an Annex III system “shall not be considered to be high-risk where it does not pose a significant risk of harm to the health, safety or fundamental rights of natural persons.”
On first reading, this looks like a generous escape hatch. Read it again. The exemption is narrow, the assessment sits with the provider, and the consequences of getting it wrong are measured in millions of euros and years of compliance work to unpick. The phrase “significant risk” is undefined. Article 6(3) is likely to be one of the most litigated provisions in the Act.
What Article 6(3) actually says
The provision disapplies high-risk classification for Annex III systems that meet one or more of four conditions:
(a) A narrow procedural task. The system performs a limited, well-defined procedural step — for example, converting unstructured data into a structured format, or detecting duplicates in a dataset.
(b) Improving a previously completed human activity. The system’s role is to enhance or refine output that a human has already produced. The human did the work; the AI polishes it.
(c) Detecting decision-making patterns or deviations. The system identifies patterns or deviations in prior decision-making but is not meant to replace or influence the previously completed human assessment without proper human review.
(d) A preparatory task. The system performs a preparatory step to an assessment relevant to an Annex III use case — gathering information, formatting inputs, pre-sorting — but not making the assessment itself.
If any one of these fits — and the system genuinely does not pose a significant risk of harm — the Annex III system escapes high-risk classification.
The profiling carve-out
Here is the part most summaries of Article 6(3) bury. The final subparagraph states that an AI system referred to in Annex III shall always be considered high-risk where it performs profiling of natural persons.
There is no exemption. There is no careful reading of the four conditions. If your system profiles people — in the GDPR sense of automated processing to evaluate personal aspects, predict behaviour, or analyse characteristics — and it operates in an Annex III area, it is high-risk. Full stop.
This carve-out cuts through a lot of the exemption arguments businesses are tempted to make. Employment screening tools, creditworthiness models, insurance risk scoring, and most people-analytics systems involve profiling almost by definition. The exemption is not available to them.
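The classification logic above can be sketched as a small predicate. This is an illustrative sketch only, not legal advice; the field names are hypothetical and do not come from the Act. It shows the two structural features that matter: the profiling carve-out overrides everything, and the four conditions sit on top of the residual significant-risk judgement rather than replacing it.

```python
from dataclasses import dataclass

# Hypothetical model of the Article 6(3) decision logic.
# All identifiers are illustrative, not drawn from the Act.
@dataclass
class Annex3System:
    performs_profiling: bool              # final subparagraph carve-out
    narrow_procedural_task: bool          # condition (a)
    improves_completed_human_work: bool   # condition (b)
    detects_decision_patterns: bool       # condition (c)
    preparatory_task_only: bool           # condition (d)
    poses_significant_risk: bool          # the undefined hinge

def is_high_risk(s: Annex3System) -> bool:
    # Profiling of natural persons: always high-risk, no further analysis.
    if s.performs_profiling:
        return True
    meets_condition = any([
        s.narrow_procedural_task,
        s.improves_completed_human_work,
        s.detects_decision_patterns,
        s.preparatory_task_only,
    ])
    # The exemption needs BOTH a fulfilled condition AND the absence
    # of significant risk. Anything else defaults to high-risk.
    return not (meets_condition and not s.poses_significant_risk)
```

Note the default: a system that fulfils none of the conditions, or that poses a significant risk despite fulfilling one, stays high-risk.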
The documentation trap
A common misreading of Article 6(3) is that claiming the exemption means you have no obligations. This is wrong.
Under Article 6(4), a provider who considers that its Annex III system is not high-risk must document its assessment before the system is placed on the market or put into service. The documentation must be provided to national competent authorities upon request.
Under Article 49(2), providers that apply the Article 6(3) exemption are still required to register the system in the EU database before placing it on the market. The registration identifies the system, the provider, and the grounds for the exemption.
In other words: you cannot quietly decide you are exempt. You must produce a written assessment, file it, and expose it to regulator review. If the assessment is thin, wishful, or unsigned, the exemption will not survive contact with a market surveillance authority.
Why “significant risk” is the real problem
The phrase “significant risk of harm to the health, safety or fundamental rights of natural persons” is undefined in the Act. It is the hinge on which the entire exemption turns, and there is no bright-line test.
The European Commission is empowered under Article 6(5) to publish guidelines specifying the practical implementation of Article 6, including a comprehensive list of practical examples of use cases that are, and are not, high-risk. The Act set a hard deadline of 2 February 2026 for those guidelines. The Commission missed that deadline, with publication now expected in the second quarter of 2026. Until the guidelines land — and even after, given they will be illustrative rather than definitive — there is no authoritative test for what counts as “significant risk.”
Without that bright line, providers are effectively self-certifying that their system “does not pose a significant risk” — a judgement they are not neutral about, because the alternative is a six-figure compliance programme.
This is where most Article 6(3) claims will fail in practice. Not because the four conditions were wrong, but because the underlying “significant risk” judgement was optimistic.
The blast radius of being wrong
Misclassifying a high-risk system as exempt does not just mean you skipped some paperwork. It means every obligation attached to high-risk systems — starting with the Chapter III, Section 2 requirements and running through everything that hangs off them — was breached from the moment the system was placed on the market:
- No risk management system under Article 9.
- No data governance under Article 10.
- No technical documentation under Annex IV.
- No conformity assessment under Article 43.
- No CE marking under Article 48.
- No quality management system under Article 17.
- No post-market monitoring under Article 72.
- No serious incident reporting framework under Article 73.
Every one of these is a separate breach. Penalties under Article 99 for non-compliance with the high-risk obligations run to €15 million or 3% of global turnover, whichever is higher.
Market surveillance authorities can also reclassify the system retrospectively under Article 80 — the procedure designed specifically for systems a provider has classified as not high-risk in application of Annex III — and require withdrawal, recall, or corrective action. If the system has been on the market for a year when this happens, the remediation bill is not small.
The Commission can change the goalposts
Article 6(6) and (7) authorise the Commission to adopt delegated acts amending the conditions of the exemption — adding new conditions, modifying them, or deleting them — where there is concrete and reliable evidence that systems falling under Annex III are being incorrectly considered not high-risk.
The practical effect: an exemption that is defensible today may not be defensible in two years’ time. If you build a compliance posture on the assumption that Article 6(3) applies forever, you are building on sand.
How to think about Article 6(3)
The exemption is real. It exists for a reason — the Act is not trying to sweep narrow procedural tools and preparatory pipelines into the same regime as hiring algorithms and credit scoring models. If your system is genuinely a narrow procedural tool, claim the exemption.
But treat it as a last resort, not a first filter:
1. Default to high-risk. If the system touches an Annex III use case, assume high-risk compliance is required. Only deviate from that default with explicit analysis.
2. Rule out the profiling carve-out first. Before assessing the four conditions, confirm the system does not profile natural persons. If it does, the analysis stops there.
3. Apply the four conditions narrowly. “Improves a previously completed human activity” does not mean “is used somewhere in a process where humans also work.” The human must have completed the activity; the AI must refine the output.
4. Document the significant-risk assessment. The four conditions are a filter on top of the “not a significant risk” judgement. Your documentation should address both — why the system fits one of the four conditions, and why, in light of its actual deployment context, it does not pose a significant risk to health, safety, or fundamental rights.
5. Register, even if exempt. Article 49(2) requires database registration. This is often overlooked and is an easy breach to identify on audit.
6. Get legal sign-off. For any Annex III system where the business consequence of high-risk classification is significant, the Article 6(3) decision warrants a written legal opinion. A thorough, signed assessment survives regulator challenge. A product manager’s judgement call does not.
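The six steps above form an ordered gate: each must be cleared before the next one matters, and the first open gate tells you what to do next. A hypothetical checklist runner, under the assumption that the workflow is tracked as simple booleans (the key names and wording are illustrative, not drawn from the Act), might look like:

```python
# Illustrative sketch of the ordered Article 6(3) workflow.
# The default posture (step 1) is implicit: until every gate is
# cleared, the system is treated as high-risk.
ORDERED_GATES = [
    ("profiling_ruled_out",
     "confirm in writing that the system does not profile natural persons"),
    ("conditions_analysed",
     "apply the four Article 6(3) conditions narrowly, in writing"),
    ("risk_assessment_documented",
     "document the significant-risk assessment before placing on the market"),
    ("registered_in_eu_database",
     "register the system in the EU database under Article 49(2)"),
    ("legal_signoff_obtained",
     "obtain a written legal opinion on the exemption decision"),
]

def next_compliance_step(state: dict) -> str:
    """Return the first open gate in the exemption workflow.

    `state` maps gate names to booleans; missing keys count as open.
    """
    for key, action in ORDERED_GATES:
        if not state.get(key, False):
            return action
    return "exemption position complete; monitor for delegated acts"
```

The design choice worth noting is the ordering itself: the profiling check comes first because, as above, a positive answer ends the analysis regardless of the four conditions.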
The honest summary
Article 6(3) is not a loophole. It is a narrow, documented, reviewable, and contingent exemption that trades some compliance work today for the risk of catastrophic non-compliance tomorrow. Organisations that rely on it casually — because compliance is expensive and the conditions are vague — are the ones most likely to end up as the case law that tightens it.
The safe posture is to treat the exemption as available only when you can prove, in writing, that it applies. Everything else is high-risk until a lawyer says otherwise.