Penalties Under the EU AI Act: How Fines Are Calculated
The EU AI Act’s penalty framework follows the GDPR playbook: headline-grabbing maximum fines designed to ensure that non-compliance is more expensive than compliance. But like GDPR, the actual mechanics of how fines are calculated and applied matter more than the maximums. Understanding the penalty structure helps you prioritise your compliance efforts where the risk is greatest.
The three penalty tiers
The Act establishes three tiers of administrative fines, each tied to different categories of violation:
Tier 1: Prohibited practices — up to €35 million or 7% of global annual turnover
The highest penalties apply to violations of Article 5, the outright bans. This covers:
- Deploying AI systems that manipulate behaviour through subliminal or deceptive techniques
- Exploiting vulnerabilities of specific groups
- Social scoring by public or private actors where the score leads to detrimental or unfavourable treatment in unrelated contexts
- Untargeted facial image scraping for facial recognition databases
- Emotion recognition in workplaces or educational institutions (with limited exceptions)
- Real-time remote biometric identification in public spaces (with limited exceptions)
The percentage applies to total worldwide annual turnover for the preceding financial year, and the applicable maximum is whichever of the two figures is higher. For a company with €1 billion in annual revenue, the maximum fine for a prohibited-practice violation is therefore €70 million (7% of turnover, which exceeds the €35 million fixed amount).
Tier 2: High-risk and other obligations — up to €15 million or 3% of global annual turnover
This tier covers most of the Act’s substantive requirements:
- Failure to comply with high-risk AI system obligations (Articles 6–27)
- Non-compliance with GPAI model obligations (Articles 51–56)
- Failure to meet transparency requirements (Article 50)
- Non-compliance with obligations for providers, deployers, importers, or distributors
For a company with €1 billion in revenue, the maximum here is €30 million.
Tier 3: Incorrect information — up to €7.5 million or 1% of global annual turnover
The lowest tier applies to supplying incorrect, incomplete, or misleading information to national competent authorities or notified bodies. This might seem minor, but it covers situations where an organisation misrepresents its compliance status during an investigation or audit.
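The "fixed amount or percentage of turnover, whichever is higher" rule across the three tiers is simple enough to sketch. The tier figures below come from Article 99; the function itself is an illustrative calculation aid, not legal advice (rounding to whole euros is my own simplification).

```python
# Maximum-fine calculation under the EU AI Act's three penalty tiers.
# Each tier pairs a fixed amount (euros) with a percentage of worldwide
# annual turnover; the cap is whichever figure is HIGHER.

TIERS = {
    "prohibited_practice": (35_000_000, 0.07),    # Tier 1: Article 5 bans
    "other_obligations": (15_000_000, 0.03),      # Tier 2: high-risk / GPAI / transparency
    "incorrect_information": (7_500_000, 0.01),   # Tier 3: misleading regulators
}

def max_fine(tier: str, turnover: float) -> int:
    """Maximum administrative fine: the higher of the fixed amount and
    the percentage of global annual turnover, rounded to whole euros."""
    fixed, pct = TIERS[tier]
    return max(fixed, round(pct * turnover))

# A company with €1 billion in turnover:
# max_fine("prohibited_practice", 1_000_000_000) -> 70_000_000 (7% > €35m)
# max_fine("other_obligations", 1_000_000_000)   -> 30_000_000 (3% > €15m)
# For smaller firms the fixed amount dominates:
# max_fine("incorrect_information", 100_000_000) -> 7_500_000 (€7.5m > 1%)
```

Note how the percentage cap only bites above a turnover threshold (€500 million for Tier 1): below it, the fixed amount is the operative maximum.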
How fines are actually determined
The Act doesn’t require regulators to impose maximum fines for every violation. Article 99(7) sets out the factors that national competent authorities must consider when deciding whether to impose a fine and how large it should be:
Nature, gravity, and duration of the infringement. A systematic, deliberate violation sustained over months will attract a higher fine than a one-off technical oversight quickly corrected.
Whether the infringement was intentional or negligent. Deliberate non-compliance is treated more severely than an honest mistake. But negligence — failing to take reasonable steps to comply — is still punishable.
Actions taken to mitigate harm. If you detected the issue, took corrective action, and mitigated the impact on affected people, that works in your favour.
Degree of responsibility. The Act considers what technical and organisational measures were in place. Demonstrating a genuine compliance programme — even one with gaps — is better than demonstrating no effort at all.
Previous infringements. Repeat offenders face higher penalties, as under GDPR.
Cooperation with authorities. Cooperating with the investigation, providing requested information promptly, and engaging constructively with regulators reduces the fine.
The manner in which the infringement became known. If the authority discovered the violation through a complaint or its own investigation, that’s different from the organisation self-reporting.
Financial strength of the entity. Fines must be “effective, proportionate and dissuasive.” A €1 million fine is dissuasive for a startup but negligible for a global technology company.
Economic benefit gained. If the organisation profited from the non-compliance, the fine should at least exceed that benefit — otherwise non-compliance is rational.
SME and startup provisions
The Act treats small and medium-sized enterprises differently. Article 99(1) requires that the interests of SMEs, including startups, and their economic viability be taken into account, and Article 99(6) inverts the usual rule: for SMEs, each fine is capped at the lower of the percentage or the fixed amount, not the higher.
This is an acknowledgement that a €15 million fine would bankrupt most SMEs, which wouldn’t serve the Act’s objectives. But it’s not a blanket exemption: SMEs are still subject to fines, just calibrated to their circumstances.
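Article 99(6) caps SME fines at the lower of the percentage or the fixed amount in the relevant tier, the inverse of the general rule. A minimal sketch of the difference (tier figures from the Act; the functions are illustrative only, with whole-euro rounding as my own simplification):

```python
def standard_cap(fixed: float, pct: float, turnover: float) -> int:
    """General rule: the cap is whichever figure is HIGHER (Art. 99(3)-(5))."""
    return max(fixed, round(pct * turnover))

def sme_cap(fixed: float, pct: float, turnover: float) -> int:
    """SME rule: the cap is whichever figure is LOWER (Art. 99(6))."""
    return min(fixed, round(pct * turnover))

# A startup with €2 million turnover facing a Tier 2 violation:
# standard_cap(15_000_000, 0.03, 2_000_000) -> 15_000_000 (fixed amount wins)
# sme_cap(15_000_000, 0.03, 2_000_000)      -> 60_000     (3% of turnover wins)
```

The effect is dramatic for small firms: the same violation that exposes a large company to €15 million caps out at €60,000 for a €2 million-turnover startup.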
What GDPR enforcement tells us
The AI Act is enforced by national competent authorities, much as GDPR is enforced by national data protection authorities. GDPR’s enforcement history offers useful signals for what to expect:
Enforcement will be uneven across member states. Some GDPR authorities (France’s CNIL, Ireland’s DPC, Italy’s Garante) have been far more active than others. Expect similar variation under the AI Act. Organisations with significant operations in enforcement-active jurisdictions face higher practical risk.
Early enforcement focuses on visible violations. The first GDPR fines targeted the most visible failures: inadequate consent mechanisms, data breach notification failures, large-scale processing without legal basis. For the AI Act, expect early enforcement to target easy-to-detect violations: undisclosed chatbots, prohibited practices, missing transparency labels.
Large fines come later. GDPR’s record fines (€1.2 billion against Meta, €746 million against Amazon) came years after the regulation took effect, as enforcement agencies built expertise and case law developed. The AI Act’s largest fines will likely follow a similar trajectory.
Complaints drive enforcement. Many GDPR investigations started with individual complaints. Under the AI Act, complaints from people affected by AI systems (job applicants screened by AI, customers who interacted with undisclosed chatbots, individuals subject to biometric processing) will likely trigger investigations.
Sectoral focus matters. GDPR enforcement has concentrated on specific sectors: adtech, social media, financial services. AI Act enforcement will likely focus on sectors with visible AI use: recruitment, financial services, healthcare, and customer-facing AI in large consumer platforms.
Beyond fines: other enforcement powers
Administrative fines aren’t the only consequence of non-compliance. National competent authorities have additional powers:
Corrective measures. Authorities can order organisations to bring their AI systems into compliance, modify or withdraw non-compliant systems from the market, or take other corrective action.
Market withdrawal. For high-risk AI systems that don’t meet the requirements, authorities can order the system to be withdrawn from the market entirely. For a provider whose revenue depends on that product, this is potentially more damaging than a fine.
Temporary bans. Authorities can prohibit the use of an AI system on a temporary basis while they investigate.
Public disclosure. Enforcement actions and decisions may be published, creating reputational damage that can exceed the financial penalty.
Prioritising your compliance investment
Given the tiered penalty structure, the rational approach to compliance investment follows the risk hierarchy:
Highest priority: prohibited practices. The penalties are the highest (up to 7% of turnover), the violations are the easiest to detect, and there’s no grace period: the Article 5 bans have applied since 2 February 2025. Review your AI systems against Article 5 immediately.
Second priority: transparency obligations. These are enforceable from August 2026, easy for regulators to detect, and relatively cheap to implement. There’s no excuse for non-compliance on chatbot disclosure or content labelling.
Third priority: high-risk compliance. The obligations are extensive, but they apply only to systems classified as high-risk. The investment is significant but justified by the penalty exposure (3% of turnover) and the reputational risk of a compliance failure involving a system that affects people’s employment, credit, or safety.
Ongoing: cooperation and documentation. Whatever the state of your compliance, maintain good documentation and cooperate fully with any regulatory inquiry. The factors that reduce fines — corrective action, cooperation, genuine compliance effort — are entirely within your control.
The penalty framework is designed to make compliance the rational economic choice. For most businesses, the cost of compliance is a fraction of the potential penalties. The organisations that will face the largest fines are those that made a deliberate choice to ignore the Act — or failed to give it any attention at all.