# EU AI Act High-Risk Deadline Delayed to December 2027
On 26 March 2026, the European Parliament voted 569–45 to delay the application of high-risk AI system obligations under the EU AI Act. The original deadline of 2 August 2026 has been pushed back, but not for everything, and not by as much as some headlines suggest.
If you’ve been working towards August 2026 compliance, you have more time on the heaviest obligations. But several requirements are already enforceable, and the delay doesn’t change that.
Here’s what actually changed.
## The new dates
The Parliament’s vote creates a tiered delay:
- **High-risk AI systems listed in Annex III**, covering biometrics, critical infrastructure, education, employment, essential services, law enforcement, justice, and border management, now have until 2 December 2027.
- **AI systems regulated under EU sectoral legislation**, those embedded in products like medical devices, radio equipment, and toys, have until 2 August 2028.
- **Watermarking requirements** for AI-generated audio, images, video, and text move to 2 November 2026, a three-month delay from the original August date.
| Obligation | Original deadline | New deadline |
|---|---|---|
| High-risk AI systems (Annex III) | 2 Aug 2026 | 2 Dec 2027 |
| AI in products under sectoral legislation | 2 Aug 2027 | 2 Aug 2028 |
| Watermarking for AI-generated content | 2 Aug 2026 | 2 Nov 2026 |
## What hasn’t changed
The delay applies specifically to high-risk system obligations. Several significant parts of the Act are already in force and are unaffected by this vote:
**Prohibited practices (Article 5):** in force since 2 February 2025. The AI systems that are outright banned remain banned. This includes manipulative AI, social scoring, most real-time biometric identification in public spaces, and emotion recognition in workplaces.

**AI literacy (Article 4):** in force since 2 February 2025. Organisations must ensure staff working with AI systems have a sufficient level of AI literacy.

**GPAI model obligations (Articles 51–56):** in force since 2 August 2025. Providers of general-purpose AI models already carry transparency, documentation, and copyright obligations.

**Transparency obligations (Article 50):** beyond the three-month watermarking extension, these haven’t been pushed back. If your chatbot interacts with people and doesn’t disclose it’s AI, fix that now, regardless of the high-risk timeline.
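The disclosure itself is trivial to implement; the point is actually doing it. As a minimal illustrative sketch of one way to satisfy the spirit of Article 50 (the function names, message wording, and structure here are my own assumptions, not text from the Act or any official guidance):

```python
# Illustrative sketch only: a hypothetical wrapper that ensures every
# chatbot session opens with an AI disclosure. The wording and names
# are assumptions for illustration, not prescribed by the AI Act.

DISCLOSURE = "You are chatting with an AI assistant, not a human."

def start_session(greeting: str) -> str:
    """Prepend the AI disclosure to the first message of a session."""
    return f"{DISCLOSURE}\n\n{greeting}"

print(start_session("How can I help you today?"))
```

Whether a one-line banner is sufficient in your context is a legal question, not an engineering one; the engineering part, as shown, is small enough that there is no reason to defer it.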
## New: nudifier AI ban
The Parliament introduced a new prohibition targeting AI systems that create or manipulate sexually explicit images resembling identifiable real persons without their consent. This covers so-called “nudifier” or “undressing” apps.
There’s a narrow exception for systems with “effective safety measures preventing users from creating such images,” but the ban is broad. If you operate anywhere near AI-generated imagery, check your product against this provision.
## More breathing room for SMEs
The amendments extend existing SME flexibility measures to small mid-cap enterprises. The Act also now permits personal data processing by service providers specifically for detecting and correcting biases in AI systems, subject to strict necessity safeguards.
For smaller organisations, this is welcome. The compliance burden for a 50-person company building an AI hiring tool is materially different from what’s reasonable for a large enterprise, and the Act now acknowledges this more broadly.
## This isn’t final yet
The Parliament’s vote is one step in the legislative process. These amendments still need to be negotiated with the EU Council before they become law. The official text of the AI Act, as published, still lists 2 August 2026 as the application date for high-risk obligations.
The Council will likely agree. The vote was overwhelming at 569–45. But until the trilogue process completes and the amended text is published in the Official Journal, the original dates technically stand.
## What this means for your compliance planning
The temptation is to relax. Don’t.
**If you’re building a high-risk AI system:** You’ve gained roughly 16 months. Use them. The compliance requirements haven’t changed, only the deadline. A risk management system (Article 9), technical documentation (Annex IV), conformity assessment (Article 43), and a Declaration of Conformity (Article 47) still take months to prepare properly. December 2027 sounds far away. It isn’t, once you account for the actual work involved.

**If you’re a deployer of high-risk AI:** Your Fundamental Rights Impact Assessment (Article 27), monitoring obligations, and human oversight arrangements now have a later deadline. But your providers need to be working on their documentation in parallel, and that conversation should happen sooner rather than later.

**If you’re not high-risk:** Nothing has changed for you. Transparency obligations, prohibited practices, and AI literacy requirements are either already in force or arriving on roughly the original timeline. A customer service chatbot that doesn’t disclose it’s AI is non-compliant regardless of the high-risk delay.

**If you’ve already started compliance work:** You’re ahead. The organisations that began early will have the smoothest transition, and the extra time means you can be more thorough rather than rushing to meet August 2026. Don’t stop. Refine.
The worst outcome from this delay would be treating it as permission to do nothing until late 2027. GDPR had a two-year implementation period. Many companies waited until the final months and paid for it, both in fines and in the scramble itself.
The high-risk deadline moved. The amount of work didn’t.
This article is part of the ComplyDrive resource library. ComplyDrive publishes an EU AI Act Compliance Checklist and Sample Documentation — 47 checklist items across 5 phases, with 9 complete example compliance documents. Details at complydrive.ai.