The August 2026 Deadline: What Actually Needs to Be Done By When
The EU AI Act doesn’t land all at once. It entered into force on 1 August 2024, but its obligations phase in over a three-year period. Some are already enforceable. Others don’t apply until 2027. The date that matters most for the majority of businesses is 2 August 2026, when the bulk of the Act’s obligations become applicable.
But treating that as a single deadline is a mistake. Different requirements apply at different times, and several have already passed. Here’s what’s actually happening, and when.
What’s already in force
Prohibited practices — since 2 February 2025
The Act’s outright bans took effect first. Article 5 prohibits AI systems that:
- Manipulate behaviour through subliminal, deceptive, or exploitative techniques that cause significant harm
- Exploit vulnerabilities of specific groups (age, disability, social or economic situation)
- Score people socially — evaluating individuals based on social behaviour or personal characteristics, where that score leads to detrimental treatment in unrelated contexts
- Assess or predict criminal offence risk based solely on profiling or personality traits (with narrow exceptions for law enforcement augmenting human assessments based on objective facts)
- Scrape facial images from the internet or CCTV to build facial recognition databases
- Infer emotions in workplaces or educational institutions (except for medical or safety purposes)
- Use real-time remote biometric identification in public spaces for law enforcement (with tightly defined exceptions)
If any of your AI systems do these things, you’re already in breach. These prohibitions carry the highest penalties under the Act: up to €35 million or 7% of global annual turnover, whichever is greater.
Most mainstream business AI systems won’t fall foul of these bans. But they’re worth checking against, particularly the manipulation and vulnerability exploitation clauses, which are broad enough to catch aggressive personalisation engines or dark-pattern-adjacent AI features.
AI literacy — since 2 February 2025
Article 4 requires that organisations ensure their staff and other persons dealing with AI systems on their behalf have a sufficient level of AI literacy. This is already applicable.
The Act doesn’t prescribe a specific training programme. It requires that people working with AI understand enough about how it works, what it can and can’t do, and what the risks are. The appropriate level of literacy depends on the context. An AI engineer needs different knowledge from a customer service manager who oversees a chatbot.
In practice, this means you should already have some form of AI awareness training in place, documented and proportionate to how your organisation uses AI. If you don’t, this is low-hanging fruit to address immediately.
The main deadline: 2 August 2026
This is when the majority of the Act’s substantive obligations become enforceable. For most businesses, this is the date that matters.
High-risk AI system obligations
The full set of obligations for high-risk AI systems under Articles 6–27 and Articles 40–49 applies from this date. This includes:
For providers:
- Risk management systems (Article 9)
- Data governance (Article 10)
- Technical documentation (Article 11, Annex IV)
- Record-keeping and automatic logging (Article 12)
- Transparency and information to deployers (Article 13)
- Human oversight capabilities (Article 14)
- Accuracy, robustness, and cybersecurity (Article 15)
- Quality management systems (Article 17)
- Conformity assessments (Articles 40–49)
- EU Declaration of Conformity (Article 47, Annex V)
- CE marking (Article 48)
- Registration in the EU database (Article 49)
For deployers:
- Use according to instructions (Article 26)
- Human oversight implementation (Article 26)
- Input data relevance (Article 26)
- Monitoring in operation (Article 26)
- Log retention (Article 26)
- Fundamental Rights Impact Assessment (Article 27 — required for public bodies and certain private deployers, such as those assessing creditworthiness or pricing life and health insurance)
- Serious incident reporting (Article 73)
If you’re a deployer of high-risk AI, August 2026 is when regulators can start asking to see your FRIA, your monitoring procedures, your human oversight arrangements, and your incident reporting processes. If you haven’t built these yet, you have roughly four months.
Transparency obligations (Article 50)
These apply to all AI systems that interact with people, regardless of risk classification:
- AI interaction disclosure: If your AI system interacts directly with people (chatbots, virtual assistants, AI phone agents), you must inform them they are interacting with AI. The disclosure must happen at first contact, before or at the beginning of the interaction.
- Synthetic content labelling: AI-generated or manipulated images, audio, video, and text must be marked as artificially generated or manipulated, in a machine-readable format where technically feasible.
- Emotion recognition and biometric categorisation: If your system performs these functions, you must inform the people it’s applied to.
These obligations are straightforward to implement but easy to overlook. A chatbot without an AI disclosure label is a compliance violation from August 2026.
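To make the first of these concrete, here is a minimal sketch of an Article 50-style disclosure delivered at first contact. The session class and handler names are illustrative, not from any real chatbot framework; the point is simply that the disclosure is emitted before the first response, not buried in a footer.

```python
# Minimal sketch: an AI chatbot session that discloses its nature at first
# contact. All names here are hypothetical, for illustration only.

DISCLOSURE = (
    "You are chatting with an AI assistant. "
    "Responses are generated automatically."
)

class ChatSession:
    def __init__(self):
        self.messages = []
        self.disclosed = False

    def reply(self, user_message: str) -> list[str]:
        out = []
        if not self.disclosed:
            # Disclose before or at the beginning of the interaction.
            out.append(DISCLOSURE)
            self.disclosed = True
        self.messages.append(("user", user_message))
        # Placeholder for the actual model call.
        answer = f"(model answer to: {user_message})"
        self.messages.append(("assistant", answer))
        out.append(answer)
        return out

session = ChatSession()
first = session.reply("Hello")
second = session.reply("What are your opening hours?")
print(first[0])  # the disclosure, shown once, at first contact
```

The design choice worth copying is the `disclosed` flag: the label appears exactly once, at the start of the interaction, rather than being repeated on every turn or omitted entirely.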
General-purpose AI models (Articles 51–56)
Providers of general-purpose AI (GPAI) models, the foundation models that power many downstream applications, have their own set of obligations. These took effect earlier, on 2 August 2025, although models placed on the market before that date have until 2 August 2027 to comply:
- Technical documentation
- Copyright policy and compliance with the EU Copyright Directive
- A detailed summary of training data content
- Additional obligations for models with “systemic risk” (those trained using more than 10²⁵ floating-point operations of cumulative compute, or designated by the Commission): model evaluation, adversarial testing, incident tracking and reporting, and adequate cybersecurity protections
If you’re building applications on top of GPAI models (using the OpenAI API, Claude API, Gemini API, or similar), the model provider carries these obligations. But you still carry deployer obligations for the application you’ve built on top.
Governance and enforcement
Member states were required to designate their national competent authorities by 2 August 2025, and the EU AI Office, which oversees GPAI model compliance, is already operational. From 2 August 2026, national market surveillance authorities begin active enforcement.
Penalties for non-compliance with the obligations applicable from this date:
- Up to €15 million or 3% of global turnover for violations of AI system obligations
- Up to €7.5 million or 1% of global turnover for supplying incorrect or misleading information to authorities
What comes later
Extended timeline for certain high-risk systems — 2 August 2027
High-risk AI systems that are regulated as safety components of products covered by existing EU harmonised legislation (listed in Annex I, Section A) get an additional year. This applies to AI embedded in:
- Machinery and equipment
- Toys
- Lifts
- Radio equipment
- Pressure equipment
- Medical devices and in vitro diagnostics
- Civil aviation systems
- Motor vehicles
- Agricultural and forestry vehicles
- Marine equipment
If your AI system is embedded in one of these product categories and is subject to third-party conformity assessment under the relevant sectoral legislation, the high-risk obligations apply from August 2027 rather than August 2026.
This extension doesn’t apply to standalone AI systems in these sectors — only to those that function as safety components of the physical products.
Codes of practice for GPAI — published July 2025
The EU AI Office coordinated a code of practice for general-purpose AI model providers, finalised in July 2025. It gives more specific guidance on complying with Articles 53 and 55, particularly around training data transparency and systemic risk mitigation.
What this means for your planning
The phased rollout creates a false sense of security for some businesses. The logic runs: “The main deadline is August 2026, we have time.” But several factors compress the timeline:
You’re already behind on some things. AI literacy training (Article 4) and prohibited practices checks (Article 5) should already be in place. If they’re not, address them now.
Compliance infrastructure takes time to build. A Fundamental Rights Impact Assessment isn’t something you write in an afternoon. Human oversight processes need to be designed, staffed, and tested. Monitoring systems need to be instrumented. Incident reporting workflows need to be defined and rehearsed. If you start building these in July 2026, you won’t be ready by August.
Your providers need time too. If you’re a deployer of high-risk AI, you need documentation from your providers: instructions for use, conformity declarations, technical details about the system’s capabilities and limitations. If your provider hasn’t prepared these yet, that conversation needs to happen now, not the week before the deadline.
Regulators may not enforce on day one, but they will enforce. The GDPR took effect in May 2018. Early enforcement was slow, but the fines that followed were substantial. The AI Act will likely follow a similar pattern: an initial grace period followed by increasingly active enforcement. Being compliant from day one is cheaper than scrambling after a complaint or audit.
A practical timeline
Working backwards from 2 August 2026:
Now (April 2026):
- Complete your AI system inventory
- Classify each system by risk level
- Determine your role (provider or deployer) for each system
- Verify AI literacy training is in place
- Confirm no prohibited practices are in use
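The inventory and classification steps above lend themselves to a simple structured record per system. The fields and category names below are a working assumption for illustration, not a format prescribed by the Act; adapt them to your own register.

```python
# Illustrative AI inventory record. Field names and risk categories are
# assumptions for this sketch, not terminology mandated by the AI Act.
from dataclasses import dataclass

RISK_LEVELS = ("prohibited", "high", "limited", "minimal")
ROLES = ("provider", "deployer")

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    purpose: str
    risk_level: str   # one of RISK_LEVELS
    role: str         # one of ROLES
    literacy_training: bool = False
    notes: str = ""

    def __post_init__(self):
        # Reject typos early: every system must have a recognised
        # classification and role before it enters the register.
        if self.risk_level not in RISK_LEVELS:
            raise ValueError(f"unknown risk level: {self.risk_level}")
        if self.role not in ROLES:
            raise ValueError(f"unknown role: {self.role}")

inventory = [
    AISystemRecord("support-chatbot", "Acme AI", "customer support",
                   "limited", "deployer"),
    AISystemRecord("cv-screener", "HireCo", "candidate shortlisting",
                   "high", "deployer"),
]

# Systems that trigger the full August 2026 high-risk obligations:
high_risk = [s.name for s in inventory if s.risk_level == "high"]
print(high_risk)  # ['cv-screener']
```

Even a spreadsheet works for this, but a validated record like the one above makes it harder for a system to sit in the inventory with no classification at all, which is exactly the gap regulators will ask about.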
May 2026:
- Begin Fundamental Rights Impact Assessments for high-risk deployments
- Request documentation from providers of high-risk AI systems
- Design human oversight processes
- Set up monitoring and logging infrastructure
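For the monitoring and logging item above, one lightweight pattern is to wrap each AI system call so that every invocation leaves a timestamped audit record. This is a sketch in the spirit of the record-keeping duties discussed earlier; the log sink, record fields, and retention policy are assumptions you would adapt to your own infrastructure.

```python
# Minimal sketch of automatic logging around an AI system call.
# The record format and logger configuration are illustrative assumptions.
import json
import logging
from datetime import datetime, timezone
from functools import wraps

logging.basicConfig(level=logging.INFO)
log = logging.getLogger("ai-audit")

def audited(system_name: str):
    """Wrap an AI call so each invocation emits a timestamped audit record."""
    def decorator(fn):
        @wraps(fn)
        def wrapper(*args, **kwargs):
            record = {
                "system": system_name,
                "timestamp": datetime.now(timezone.utc).isoformat(),
                "input": {"args": args, "kwargs": kwargs},
            }
            try:
                result = fn(*args, **kwargs)
                record["output"] = result
                return result
            except Exception as exc:
                record["error"] = repr(exc)  # failures are auditable too
                raise
            finally:
                log.info(json.dumps(record, default=str))
        return wrapper
    return decorator

@audited("support-chatbot")
def classify_ticket(text: str) -> str:
    # Stand-in for a real model call.
    return "billing" if "invoice" in text.lower() else "general"

print(classify_ticket("Where is my invoice?"))  # billing
```

The useful property is that logging happens in `finally`, so errors are recorded as faithfully as successes; in production you would route these records to durable storage with a defined retention period rather than standard output.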
June 2026:
- Complete FRIAs
- Implement transparency disclosures (chatbot labels, synthetic content marking)
- Test incident reporting workflows
- Document everything: processes, assessments, decisions
July 2026:
- Review and stress-test all compliance measures
- Conduct dry runs of incident reporting
- Brief leadership on remaining gaps and risks
- Ensure all documentation is accessible for potential audits
August 2026:
- Obligations become enforceable
- Continue monitoring, logging, and refining processes
The businesses that will have the smoothest transition are the ones that started early enough to discover the hard problems (the AI system nobody inventoried, the provider who can’t supply adequate documentation, the high-risk classification that triggers obligations nobody budgeted for) while there was still time to solve them.
Four months isn’t a lot of time. But it’s enough, if you start now.