GDPR and the AI Act: Where They Overlap and Where They Don't
If you’ve been through GDPR compliance, you might expect the EU AI Act to feel familiar. Both are EU regulations. Both have extraterritorial scope. Both carry substantial fines. And both affect how organisations handle data.
But the AI Act isn’t “GDPR for AI.” It regulates different things, with different obligations, and is enforced by different authorities. Understanding where they overlap, and where they don’t, helps you build a compliance programme that handles both efficiently without duplicating effort or leaving gaps.
What each regulation covers
GDPR regulates the processing of personal data. Its focus is on data protection rights: consent, purpose limitation, data minimisation, accuracy, storage limitation, and the rights of data subjects (access, rectification, erasure, portability, objection).
The AI Act regulates AI systems. Its focus is on the safety and fundamental rights risks posed by AI: risk classification, transparency, human oversight, accuracy, robustness, and governance. It applies to AI systems regardless of whether they process personal data.
The overlap occurs when AI systems process personal data — which most do. In those cases, both regulations apply simultaneously.
Where they overlap
Data governance
GDPR requires that personal data be processed lawfully, fairly, and transparently; collected for specified purposes; adequate, relevant, and limited to what's necessary; accurate; and kept no longer than necessary.
The AI Act’s Article 10 requires that training, validation, and testing data for high-risk AI systems meets specific quality criteria: relevance, representativeness, accuracy, completeness, and freedom from errors. It also requires appropriate data governance and management practices.
The overlap: Both require data quality and appropriate governance. A well-designed data governance programme can serve both regulations. The AI Act adds requirements specific to AI (representativeness of training data, bias testing, and statistical properties), but the foundational data management practices are shared.
The gap: GDPR focuses on lawful processing of personal data. The AI Act’s data requirements extend beyond personal data to all data used in AI systems, including non-personal data. An AI system trained on anonymised data still has AI Act data governance obligations, even if GDPR is less relevant.
Impact assessments
GDPR requires a Data Protection Impact Assessment (DPIA) under Article 35 when processing is “likely to result in a high risk to the rights and freedoms of natural persons.” This includes systematic, automated processing that produces legal or similarly significant effects.
The AI Act requires a Fundamental Rights Impact Assessment (FRIA) under Article 27 from certain deployers of high-risk AI systems: bodies governed by public law, private entities providing public services, and deployers of high-risk systems used for credit scoring or for life and health insurance risk assessment and pricing.
The overlap: Both assessments evaluate risks to individuals. A DPIA for an AI system already covers data protection risks, and a FRIA covers broader fundamental rights. Many of the inputs are the same: system description, data flows, affected populations, and risk mitigations.
Practical approach: Conduct them together. Use a combined assessment framework that covers both DPIA requirements (data processing risks, necessity, proportionality, data subject rights) and FRIA requirements (broader fundamental rights, non-discrimination, human dignity, effective remedy). This avoids duplicating the system analysis while ensuring both assessments are complete.
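To make the combined approach concrete, here is a minimal sketch in Python of a shared assessment record: the common inputs are captured once, and the DPIA- and FRIA-specific questions hang off them as separate checklists. The field names are illustrative, not a prescribed template.

```python
from dataclasses import dataclass, field

@dataclass
class CombinedImpactAssessment:
    """One record feeding both a GDPR DPIA and an AI Act FRIA.

    Shared inputs (system description, data flows, affected populations,
    mitigations) are captured once; the DPIA- and FRIA-specific questions
    are tracked as separate checklists. Field names are illustrative.
    """
    # Shared inputs used by both assessments
    system_description: str
    data_flows: list[str]
    affected_populations: list[str]
    mitigations: list[str] = field(default_factory=list)

    # DPIA-specific (GDPR Article 35): processing risks, necessity, rights
    dpia_questions: dict[str, str] = field(default_factory=lambda: {
        "lawful_basis": "",
        "necessity_and_proportionality": "",
        "data_subject_rights_impact": "",
    })

    # FRIA-specific (AI Act Article 27): broader fundamental rights
    fria_questions: dict[str, str] = field(default_factory=lambda: {
        "non_discrimination": "",
        "human_dignity": "",
        "effective_remedy": "",
    })

    def incomplete_items(self) -> list[str]:
        """Return any unanswered questions across both assessments."""
        return [name for questions in (self.dpia_questions, self.fria_questions)
                for name, answer in questions.items() if not answer.strip()]
```

The point is the shared core: the system analysis is done once, and each regulation's questions are answered against it rather than in two parallel documents.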
Automated decision-making
GDPR Article 22 gives individuals the right not to be subject to decisions based solely on automated processing that produce legal or similarly significant effects. Organisations relying on automated decision-making must provide meaningful information about the logic involved, safeguards (including human intervention), and the right to contest the decision.
The AI Act’s human oversight requirements (Articles 14 and 26) require that high-risk AI systems allow for human oversight, including the ability to override or disregard AI outputs.
The overlap: Both require human involvement in automated decisions and the ability for individuals to challenge AI-driven outcomes. If your AI system makes decisions about people that produce legal or significant effects (hiring, credit, benefits), you need human involvement under both GDPR and the AI Act.
The gap: GDPR’s Article 22 applies to decisions based “solely” on automated processing. If a human is involved in the decision chain, Article 22 may not apply (though other GDPR provisions still do). The AI Act’s human oversight requirement applies to all high-risk AI systems, regardless of whether the decision is “solely” automated. The AI Act’s bar is higher — it requires meaningful human oversight, not just token human involvement.
Transparency
GDPR requires transparency about how personal data is processed: what data is collected, why, how it’s used, and individuals’ rights. For automated decision-making, meaningful information about the logic involved must be provided.
The AI Act’s Article 50 requires transparency about AI system use: disclosure when people are interacting with an AI system, labelling of synthetic (AI-generated or manipulated) content, and notification when emotion recognition or biometric categorisation systems are used.
The overlap: Both require informing people about what’s happening with their data and with AI systems they encounter. A combined transparency approach — telling people both what data you collect and that AI is involved — satisfies both.
The gap: GDPR transparency is about data processing. AI Act transparency is about AI system use. You could have a non-personal-data AI system (image generation using only public domain images) that has AI Act transparency obligations but minimal GDPR obligations. Conversely, you could have personal data processing without AI (manual database queries) that has GDPR obligations but no AI Act obligations.
Rights of individuals
GDPR provides data subjects with specific rights: access, rectification, erasure, restriction, portability, and objection.
The AI Act provides affected individuals with the right to an explanation of individual decision-making based on high-risk AI (Article 86), and the right to lodge a complaint with a market surveillance authority (Article 85).
The overlap: Both give individuals the right to understand and challenge decisions that affect them. A person denied credit by an AI system can exercise GDPR rights, such as accessing their data and understanding the logic, alongside AI Act rights like requesting an explanation of the AI decision or complaining to the authority.
Practical approach: Design your complaints and subject access processes to handle both GDPR and AI Act requests through a single intake. When someone asks “why was my application denied?”, the response should address both the data processing aspects (GDPR) and the AI system’s role (AI Act).
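The intake itself can be simple; what matters is that triage records every regime a request touches, so nothing is answered under one regulation and silently dropped under the other. A rough sketch, with hypothetical request types and workstream names:

```python
# Hypothetical triage for a single rights-and-complaints intake.
# Request types and workstream names are illustrative; real categories
# will depend on your own systems and processes.

GDPR_WORKSTREAM = "GDPR data subject rights"
AI_ACT_WORKSTREAM = "AI Act explanation / complaint"

ROUTING = {
    "access_my_data":        {GDPR_WORKSTREAM},
    "erase_my_data":         {GDPR_WORKSTREAM},
    "explain_ai_decision":   {GDPR_WORKSTREAM, AI_ACT_WORKSTREAM},
    "contest_ai_decision":   {GDPR_WORKSTREAM, AI_ACT_WORKSTREAM},
    "complain_about_ai_use": {AI_ACT_WORKSTREAM},
}

def triage(request_type: str) -> set[str]:
    """Return every workstream a request must be handled under.

    Unknown request types default to both workstreams, so an ambiguous
    request is reviewed under both regimes rather than neither.
    """
    return ROUTING.get(request_type, {GDPR_WORKSTREAM, AI_ACT_WORKSTREAM})

# "Why was my application denied?" touches both regimes: the data
# processing (GDPR) and the AI system's role in the decision (AI Act).
print(triage("explain_ai_decision"))
```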
Where they diverge
Scope
GDPR applies to personal data processing. The AI Act applies to AI systems. These are different things, and neither fully contains the other:
- An AI system processing non-personal data (quality control in manufacturing, environmental monitoring) is in scope for the AI Act but may have minimal GDPR relevance.
- Personal data processing without AI (manual database lookups, paper filing systems) is in scope for GDPR but not the AI Act.
Enforcement
GDPR is enforced by national data protection authorities (DPAs). The AI Act is enforced by national market surveillance authorities and the EU AI Office.
These are different bodies with different mandates, procedures, and institutional cultures. In some member states, the DPA may also be designated as the AI Act authority, but in many cases they will be separate entities. Organisations may need to engage with both for the same AI system.
Risk framework
GDPR doesn’t classify processing activities by risk level in the same way the AI Act does. GDPR applies uniformly, with the DPIA requirement triggered by specific processing characteristics (large-scale, systematic, sensitive data).
The AI Act’s entire structure is risk-based: prohibited, high-risk, limited risk, and minimal risk, each with different obligations. The compliance burden varies dramatically based on classification.
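One way to picture the difference: AI Act obligations attach to a risk tier assigned up front, and the tier determines the obligation set. An illustrative, deliberately non-exhaustive mapping:

```python
# Illustrative only: each AI Act tier carries a different obligation set,
# which is what makes the classification exercise so consequential.
AI_ACT_RISK_TIERS = {
    "prohibited":   ["may not be placed on the market or used"],
    "high_risk":    ["conformity assessment", "technical documentation",
                     "human oversight", "post-market monitoring",
                     "EU database registration"],
    "limited_risk": ["transparency obligations (Article 50 disclosures)"],
    "minimal_risk": ["no specific AI Act obligations"],
}

def obligations(tier: str) -> list[str]:
    """Look up the example obligations for a given risk tier."""
    return AI_ACT_RISK_TIERS[tier]
```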
Product safety vs. data protection
The AI Act is fundamentally a product safety regulation. It’s about ensuring AI systems are safe, accurate, and governed appropriately. GDPR is about protecting personal data and individuals’ data rights. In its mechanics (CE marking, conformity assessment, market surveillance), the AI Act has more in common with EU product safety legislation than with GDPR.
This difference in orientation matters for how you staff your compliance programme. GDPR compliance typically sits with legal, privacy, or data protection functions. AI Act compliance may need to involve product management, engineering, quality assurance, and risk management: teams more accustomed to product safety frameworks than to data protection.
Building an integrated compliance programme
The most efficient approach is to build a compliance programme that addresses both regulations through shared processes where they overlap, while maintaining separate workstreams for their unique requirements (a sketch of a per-system record tying the three groups together follows these lists):
Shared processes:
- Data governance and quality management
- Impact assessments (combined DPIA/FRIA)
- Transparency and disclosure mechanisms
- Individual rights and complaints handling
- Incident and breach reporting coordination
GDPR-specific:
- Lawful basis for processing
- Data subject rights (access, erasure, portability)
- Data processing agreements with controllers/processors
- Cross-border transfer mechanisms
- Data Protection Officer designation
AI Act-specific:
- AI system inventory and risk classification
- High-risk system conformity assessment
- Technical documentation and instructions for use
- Post-market monitoring
- Human oversight implementation
- EU database registration
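As referenced above, one way to keep the shared and regulation-specific workstreams connected is a per-system compliance record that tracks all three groups in one place. A minimal sketch; the fields are illustrative rather than a mandated structure:

```python
from dataclasses import dataclass, field

@dataclass
class AISystemComplianceRecord:
    """Per-system record tying shared, GDPR-specific, and AI Act-specific
    workstreams together. Field and control names are illustrative only."""
    system_name: str
    ai_act_risk_tier: str          # e.g. "high_risk"
    processes_personal_data: bool  # determines whether GDPR controls apply

    # Shared processes serving both regulations
    shared_controls: dict[str, bool] = field(default_factory=lambda: {
        "data_governance": False,
        "combined_dpia_fria": False,
        "transparency_notices": False,
        "rights_and_complaints_intake": False,
    })

    # GDPR-specific workstream
    gdpr_controls: dict[str, bool] = field(default_factory=lambda: {
        "lawful_basis_documented": False,
        "data_subject_rights_process": False,
        "processing_agreements": False,
        "transfer_mechanisms": False,
    })

    # AI Act-specific workstream
    ai_act_controls: dict[str, bool] = field(default_factory=lambda: {
        "risk_classification": False,
        "conformity_assessment": False,
        "technical_documentation": False,
        "post_market_monitoring": False,
        "human_oversight": False,
        "eu_database_registration": False,
    })

    def open_items(self) -> list[str]:
        """Controls not yet marked complete across the applicable groups."""
        groups = [self.shared_controls, self.ai_act_controls]
        if self.processes_personal_data:
            groups.append(self.gdpr_controls)
        return [name for group in groups
                for name, done in group.items() if not done]
```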
The organisations that will manage both regulations most effectively are those that recognise the overlap without treating the two as interchangeable. The regulations address different concerns about related problems: treating them as identical creates gaps, while treating them as entirely separate creates duplication. An integrated compliance programme addresses both efficiently.