⚡ TL;DR — For Data Protection Officers:
- The EU AI Act is not GDPR for AI — but the two laws overlap extensively wherever AI systems process personal data. As DPO, you are now the primary bridge between them.
- Your existing GDPR expertise is directly relevant to AI Act compliance — data governance, impact assessments, risk management, and individual rights are concepts that translate directly.
- But the AI Act introduces new obligations that go beyond data protection: Technical File documentation, conformity assessments, human oversight requirements, and post-market monitoring. These require new skills and new cross-functional relationships.
- This post maps every GDPR–AI Act overlap, explains the DPO’s expanded mandate, and gives you the practical framework for managing both frameworks simultaneously.
Few roles in an organisation are more directly transformed by the EU AI Act than the DPO's. You already own the intersection of data, risk, and individual rights. The AI Act lands squarely in that intersection — sometimes aligning with your existing expertise, sometimes pulling in entirely new directions.
The risk for DPOs who don’t proactively engage with the AI Act is being handed responsibility for AI governance without the authority, budget, or cross-functional mandate to execute it. The opportunity — for those who engage early — is to position the DPO function as the organisation’s central AI governance authority, with the credibility that comes from being the person who understood both frameworks from the start.
This post gives you the complete picture: where GDPR and the AI Act overlap, where they diverge, and how to build a DPO practice that handles both coherently. For the broader AI governance framework, see our AI Governance Framework pillar guide.
AI Governance vs Data Protection: Understanding the Relationship
The first question DPOs need to settle internally is: what is the relationship between GDPR and the EU AI Act? They are not the same law, they are not redundant, and compliance with one does not imply compliance with the other. But they are deeply intertwined wherever AI systems process personal data — which covers the large majority of commercially significant AI applications.
| Dimension | GDPR | EU AI Act | Overlap / Interaction |
|---|---|---|---|
| Primary focus | Protection of personal data and privacy | Safety and fundamental rights of AI systems | Both protect fundamental rights; AI Act explicitly references Charter rights GDPR also protects |
| Trigger | Processing of personal data | Development/deployment of AI systems in specific risk categories | Most high-risk AI systems also process personal data — both laws apply simultaneously |
| Impact assessment | Data Protection Impact Assessment (DPIA) — Article 35 GDPR | Fundamental Rights Impact Assessment (FRIA) — Article 27 AI Act; Risk Management System — Article 9 | FRIA and DPIA can be integrated; Article 27(4) explicitly provides for this. DPO is the natural owner of the combined assessment. |
| Individual rights | Access, rectification, erasure, objection, portability, explanation of automated decisions (Article 22) | Transparency about AI interaction (Article 50); right to explanation of individual decision-making (Article 86) | GDPR Article 22 (automated decision-making) and AI Act Article 14 (human oversight) address the same decisions from different angles |
| Supervisory authority | National Data Protection Authority (DPA) | National Market Surveillance Authority (MSA); European AI Office for GPAI | In several member states, the DPA has been designated as the AI Act MSA — one enforcement conversation with multiple regulatory hats |
| Documentation | Records of Processing Activities (ROPA) | Technical File (Annex IV); Quality Management System (Article 17) | ROPA entries for AI systems must be consistent with Technical File descriptions; DPO owns both |
The GDPR–AI Act Overlap: Where Your Existing Expertise Applies Directly
Data Governance: Your ROPA Becomes Your Technical File Foundation
GDPR Article 30 requires you to maintain Records of Processing Activities (ROPA) for every processing operation involving personal data. The AI Act’s Article 11 Technical File requires documentation of training data, validation data, and the data governance practices applied to them.
For AI systems that process personal data in training or operation, these two records need to be internally consistent — and ideally, the GDPR data governance work you’ve already done should feed directly into the AI Act Technical File. The DPO who has maintained detailed ROPA entries for AI processing activities has already documented much of what Article 11 and Annex IV Section 3 require.
Practically, this means adding AI Act-specific fields to your ROPA for AI processing activities: dataset provenance and version information, bias evaluation results, demographic representativeness assessment, and the data quality measures applied. These additions turn your GDPR documentation into dual-purpose compliance artefacts.
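As a concrete sketch, an extended ROPA entry for an AI processing activity might look like the structure below. The field names are illustrative, not mandated by either regulation, and real ROPA tooling will have its own schema:

```python
from dataclasses import dataclass, field

@dataclass
class RopaEntryAI:
    """GDPR Article 30 ROPA entry, extended with illustrative AI Act fields."""
    # Standard GDPR Article 30 fields (abbreviated for the sketch)
    processing_activity: str
    purposes: list
    data_categories: list
    # Illustrative AI Act extensions for dual-purpose documentation
    dataset_provenance: str = ""       # source and version of training data
    bias_evaluation: str = ""          # reference to bias evaluation results
    representativeness: str = ""       # demographic representativeness notes
    data_quality_measures: list = field(default_factory=list)
    technical_file_ref: str = ""       # pointer to the Annex IV Technical File

# Hypothetical entry for a high-risk recruitment system
entry = RopaEntryAI(
    processing_activity="CV screening model",
    purposes=["recruitment shortlisting"],
    data_categories=["employment history", "education"],
    dataset_provenance="applicant-db v2.3, snapshot 2024-11",
    technical_file_ref="TF-2024-007",
)
```

The point of the shared record is that the same dataset provenance and quality fields can be surfaced in both the ROPA export and the Technical File, so the two documents cannot silently diverge.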
Impact Assessments: Integrating DPIA and FRIA
This is the most significant practical overlap — and the clearest area where the DPO’s role expands under the AI Act.
GDPR Article 35 requires a Data Protection Impact Assessment (DPIA) for processing that is likely to result in high risk to the rights and freedoms of natural persons — including systematic evaluation of individuals, processing of special categories of data, and automated decision-making. AI Act Article 27 requires a Fundamental Rights Impact Assessment (FRIA) before deploying certain high-risk AI systems.
Article 27(4) of the AI Act explicitly provides that where FRIA obligations are already met through a DPIA, the FRIA complements that DPIA — the two assessments can be combined. In practice, this integration should be your default approach for any high-risk AI system that processes personal data:
- Scope determination: The combined assessment covers both data protection risks (GDPR) and broader fundamental rights risks (AI Act) — including fairness, non-discrimination, freedom of expression, and access to justice
- Risk assessment methodology: Your existing DPIA methodology can be extended with AI-specific risk categories from the Article 9 risk management framework
- Consultation: GDPR Article 36 requires prior consultation with the supervisory authority where a DPIA indicates residual high risk — in member states where the DPA is also the AI Act MSA, this single consultation covers both frameworks
- Ownership: You — the DPO — are the natural owner of the combined assessment. It’s a natural extension of your existing DPIA mandate.
For the full FRIA methodology and when it’s required, see our post on Fundamental Rights Impact Assessments (FRIA): Who Needs Them?
Automated Decision-Making: GDPR Article 22 and AI Act Article 14
GDPR Article 22 gives individuals the right not to be subject to solely automated decisions with significant effects — and requires human review on request. AI Act Article 14 requires that high-risk AI systems are designed to enable meaningful human oversight.
These provisions address the same decisions from different angles. Article 22 gives individuals a right to contest automated decisions and request human review. Article 14 requires that the system is designed to make that human review genuinely possible — not just theoretically available. As DPO, you are already the internal owner of Article 22 compliance. Extending that ownership to Article 14 human oversight design is a natural scope expansion.
Practically: ensure that for every AI system covered by Article 22, the human review process required by Article 22 is enabled by the technical human oversight controls required by Article 14. If the controls aren’t there to make human review meaningful, Article 22 rights can’t be exercised effectively — creating simultaneous GDPR and AI Act exposure.
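That dependency can be expressed as a simple precondition test. The control names below are hypothetical shorthand for the kinds of Article 14 design features the text describes, not AI Act terminology:

```python
# Illustrative Article 14 oversight controls that make Article 22
# human review exercisable in practice (names are hypothetical)
REQUIRED_CONTROLS = (
    "logs_inputs_outputs",   # decisions can be reconstructed after the fact
    "override_mechanism",    # a human can change or reverse the outcome
    "operator_explanation",  # operators can understand what the system did
)

def article22_exercisable(system_controls: dict) -> bool:
    """Return True only if every oversight control is in place.

    A missing control means Article 22 review exists on paper but
    cannot be exercised meaningfully, creating dual exposure.
    """
    return all(system_controls.get(c, False) for c in REQUIRED_CONTROLS)
```

Running such a check across the AI system inventory gives the DPO a quick view of where Article 22 rights are, in practice, unenforceable.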
Where AI Governance Goes Beyond Data Protection: New DPO Territory
The GDPR–AI Act overlap is substantial — but so is the territory the AI Act covers that GDPR doesn’t. As DPO, you need to either build competence in these areas or build relationships with the colleagues who own them.
Technical File and Conformity Assessment: New Cross-Functional Relationships
The AI Act Technical File (Article 11, Annex IV) covers areas well outside traditional data protection scope: AI system architecture, model performance metrics, adversarial robustness testing, cybersecurity measures, and conformity assessment procedures. The DPO’s role here is primarily coordinative — ensuring the data governance sections of the Technical File are accurate and consistent with GDPR records, and ensuring the overall Technical File is managed as a living document.
Build formal relationships with your engineering and product teams around the Technical File. The DPO should be in the review chain for Technical File updates — not as the primary author of technical sections, but as the person who verifies the data protection and individual rights dimensions are correctly represented. For the full Technical File blueprint, see our Article 11 and Annex IV pillar guide.
Post-Market Monitoring: Extending Your Incident Response Framework
GDPR requires you to manage personal data breaches and report them to the supervisory authority without undue delay — where feasible, within 72 hours. The AI Act requires serious incidents — where an AI system caused or contributed to significant harm — to be reported within 15 days of awareness, shortened to 10 days in the event of death and 2 days for widespread infringements or serious disruption of critical infrastructure (Article 73).
In practice, many serious AI incidents will also involve personal data breaches — they are the same incident viewed through two regulatory lenses. Extend your GDPR incident response procedure to cover AI Act incident classification and reporting alongside GDPR breach notification. Build a single incident triage process that determines which notification obligations are triggered — GDPR breach notification, AI Act serious incident reporting, or both.
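A minimal sketch of that triage step follows. The booleans stand in for legal classifications that in reality require case-by-case assessment, and the deadlines are simplified summaries of GDPR Article 33 and AI Act Article 73 (shorter AI Act deadlines apply to the most severe incident categories):

```python
def triage_incident(personal_data_breach: bool,
                    risk_to_individuals: bool,
                    serious_ai_incident: bool) -> dict:
    """Decide which notification duties a single incident triggers.

    Simplified: real classification of a GDPR breach (Articles 33/34)
    and an AI Act 'serious incident' (Article 73) needs legal review,
    not three booleans.
    """
    duties = {}
    if personal_data_breach and risk_to_individuals:
        # GDPR Article 33: notify the DPA without undue delay,
        # where feasible within 72 hours
        duties["gdpr_breach_notification"] = "DPA within 72 hours"
    if serious_ai_incident:
        # AI Act Article 73: report within 15 days of awareness
        # (shorter deadlines for the most severe categories)
        duties["ai_act_serious_incident_report"] = "MSA within 15 days"
    return duties

# Same incident, two regulatory lenses: both duties can trigger at once
both = triage_incident(True, True, True)
```

The value of a single triage function, even informally documented, is that one incident record drives every downstream notification clock instead of two teams classifying the same event separately.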
AI Literacy: Adding AI Competence to Your Training Portfolio
Article 4 of the AI Act requires providers and deployers to ensure a sufficient level of AI literacy among staff and other persons dealing with the operation and use of AI systems on their behalf. As the DPO, you likely already own or influence the GDPR training programme. Extending this to cover AI literacy is a natural scope expansion — and one that positions you as the organisation’s go-to for both data and AI competence standards.
Specifically, AI literacy training for staff who work with high-risk AI systems must be tailored to their role: engineers need technical AI literacy; business users need AI output interpretation skills; oversight personnel need to understand the limits of human oversight and the signs of automation bias. See our post on meeting mandatory AI literacy training requirements for the curriculum framework.
Building the DPO’s AI Governance Practice: A Practical Framework
Here is the practical framework for integrating AI governance into your existing DPO practice without creating parallel bureaucracies.
- Extend your AI processing ROPA entries to include AI Act-specific fields: system classification (high-risk or not), Technical File reference, conformity assessment status, and human oversight mechanism description.
- Create a combined DPIA/FRIA template for all new AI systems that process personal data. Run it before deployment of any new high-risk AI system, in parallel with the Article 9 risk management assessment.
- Join the Technical File review chain for all AI systems your organisation operates. Your sign-off should be required for the data governance sections of every Technical File before conformity assessment.
- Extend your incident response procedure to include AI Act serious incident classification and reporting timelines alongside GDPR breach notification requirements.
- Add AI literacy to your training portfolio — start with the compliance, legal, and product teams who most urgently need the Article 4 obligation met, then expand to all AI-adjacent roles.
- Build the supervisory authority relationship — if your national DPA is also designated as the AI Act MSA, proactively brief them on your combined GDPR/AI Act compliance programme. A regulator who knows you are engaged treats a first-time issue very differently from one who discovers non-compliance before you disclose it.
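The combined DPIA/FRIA template mentioned above can be scaffolded as a simple checklist structure. The section names here are illustrative, drawn loosely from GDPR Article 35 and AI Act Article 27 rather than from any official template:

```python
# Illustrative combined DPIA/FRIA outline (section names are not official)
COMBINED_DPIA_FRIA_TEMPLATE = {
    # GDPR Article 35 (DPIA) sections
    "processing_description": None,    # nature, scope, context, purposes
    "necessity_proportionality": None,
    "data_protection_risks": None,     # risks to rights and freedoms
    "mitigations": None,
    # AI Act Article 27 (FRIA) sections, integrated per Article 27
    "deployment_context": None,        # processes the system is used in
    "affected_persons": None,          # categories of persons likely affected
    "fundamental_rights_risks": None,  # e.g. non-discrimination, fair trial
    "human_oversight_measures": None,
    "governance_and_complaints": None,
}

def missing_sections(assessment: dict) -> list:
    """Return template sections not yet completed in a draft assessment."""
    return [k for k in COMBINED_DPIA_FRIA_TEMPLATE if assessment.get(k) is None]
```

A completeness check like this is a useful gate before sign-off: the combined assessment only goes to the supervisory authority, or into the Technical File, once every section has content.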
Frequently Asked Questions
What is the DPO’s role under the EU AI Act?
The EU AI Act does not create a specific “AI DPO” role, but it creates obligations that naturally fall within or adjacent to the DPO’s existing mandate: data governance documentation, impact assessments (FRIA, which can be integrated with DPIA), individual rights at the intersection of GDPR Article 22 and AI Act Article 14, incident reporting, and AI literacy training. In most organisations, the DPO is best positioned to be the internal owner of these obligations — or at minimum the coordinator who ensures GDPR and AI Act compliance are consistent with each other.
Is a DPIA sufficient for high-risk AI systems, or do we also need a FRIA?
For public sector deployers of high-risk AI systems, Article 27 mandates a FRIA before deployment. For private sector deployers, a FRIA is not universally mandatory — but it is required in specific circumstances: private entities providing public services, and deployers of the Annex III point 5 systems that evaluate creditworthiness or perform risk assessment and pricing for life and health insurance. Regardless of whether a FRIA is mandatory for your organisation, conducting one is strong governance practice — and Article 27(4) explicitly provides that the FRIA complements an existing DPIA, reducing the total assessment burden. A combined DPIA/FRIA is better governance than doing only one.
If the national DPA is also the AI Act MSA in our country, does that mean one enforcement authority covers both frameworks?
Yes, in those member states — and this has significant practical implications. It means the regulator examining your GDPR compliance is the same one who can audit your AI Act compliance. Regulatory relationships you’ve built through GDPR engagement are directly relevant to AI Act enforcement. It also means that a failure in one area may prompt examination in the other — a GDPR breach involving an AI system is likely to trigger questions about AI Act compliance for that same system.
Can the DPO serve on the AI Ethics Committee?
Yes, and in many organisations the DPO should. The DPO brings three things that AI Ethics Committees often lack: an independently protected role (which supports the committee’s independence), deep expertise in individual rights and impact assessment methodology, and established relationships with supervisory authorities. The DPO’s GDPR-mandated independence from the business lines whose AI systems are being reviewed makes them a natural candidate for a committee role that requires impartiality. For the full ethics committee structure, see our AI Governance Framework guide.
How does GDPR’s right to explanation interact with AI Act human oversight requirements?
GDPR Article 22 gives individuals the right to request human review of automated decisions and to receive a meaningful explanation of the logic involved. AI Act Article 14 requires that high-risk AI systems are designed to make meaningful human oversight technically possible. The interaction is this: Article 14 creates the technical precondition for Article 22 to be exercisable in practice. If the AI system doesn’t log its inputs and outputs, if there’s no override mechanism, if human operators can’t understand what the system did — then Article 22 rights exist on paper but can’t be exercised in reality. Ensuring Article 14 technical controls are in place is therefore part of your GDPR Article 22 compliance responsibility.
Managing both GDPR and EU AI Act compliance?
Unorma’s compliance platform integrates AI Act Technical File management, FRIA workflows, and audit simulation in one place — giving DPOs a single view of AI governance and data protection obligations across every system. Explore Unorma for DPOs →

Jasper Claes is a Compliance Manager and consultant specializing in AI governance for high-scale technology companies operating in regulated markets. He advises product and legal teams on implementing practical compliance frameworks aligned with evolving regulations such as the EU AI Act. Through his writing, Jasper focuses on translating complex regulatory requirements into clear, actionable guidance for teams building and deploying AI systems.