If you're a DPO or compliance officer preparing for the EU AI Act, you've likely asked: how does the new FRIA relate to the DPIA I already do under GDPR? This article breaks down the differences, overlaps, and the critical Article 27(4) bridge provision.
FRIA vs DPIA — Scope Comparison
The FRIA encompasses all EU Charter rights. The DPIA focuses on data protection. Article 27(4) bridges them.
At a Glance: FRIA vs DPIA
| Dimension | DPIA (GDPR Art. 35) | FRIA (AI Act Art. 27) |
|---|---|---|
| Legal basis | Regulation (EU) 2016/679 (GDPR) | Regulation (EU) 2024/1689 (AI Act) |
| Trigger | High-risk processing of personal data | Deployment of high-risk AI by specific deployer types |
| Scope | Data protection and privacy rights | All fundamental rights in the EU Charter |
| Who must conduct | Data controller | Deployer (public bodies, private providers of public services, credit scoring and insurance deployers) |
| When | Before processing begins | Before AI system is put into use |
| Notify authority | DPA if high risk remains (Art. 36) | Market surveillance authority (mandatory, Art. 27(3)) |
| Focus | Privacy risks to data subjects | Broader rights: dignity, freedoms, equality, solidarity, justice |
The Core Difference: Scope of Rights
The most fundamental difference is scope. A DPIA focuses specifically on data protection and privacy — the right to privacy under Article 7 of the Charter and the right to data protection under Article 8. A FRIA must assess impact across the entire Charter of Fundamental Rights.
This means a FRIA considers rights that a DPIA typically doesn't address: human dignity, non-discrimination, freedom of expression, workers' rights, the right to an effective remedy, and children's rights, among others.
In practice, an AI system used for recruitment (Annex III, category 4) needs a FRIA that assesses discrimination risks (Article 21 of the Charter), workers' rights (Articles 27-31), and equality before the law (Article 20) — none of which a standard DPIA would cover.
The Overlap: What DPIAs and FRIAs Share
Despite the scope difference, there is significant overlap:
- System description — both require documenting what the system does, how it processes data, and its purpose
- Affected persons — both identify who is impacted and what groups are vulnerable
- Risk assessment methodology — both use systematic risk identification and evaluation
- Mitigation measures — both document controls to reduce identified risks
- Governance structures — both address oversight, review processes, and accountability
This overlap is exactly why Article 27(4) exists: to prevent unnecessary duplication of effort.
Article 27(4): The Bridge Provision
Article 27(4) explicitly allows deployers to build on their existing DPIA work. Paraphrasing the provision:

> Where the FRIA obligations are already met through a DPIA under GDPR Article 35 or Article 27 of the Law Enforcement Directive, the FRIA shall complement, not duplicate, that assessment.
In practice, this means:
- If you have a comprehensive DPIA for the same AI system, you can reuse the system description, data flow mapping, affected person analysis, and privacy risk assessment sections
- The FRIA then becomes a complementary assessment that adds the non-privacy fundamental rights dimensions
- You save significant effort on sections S1 (system description), S2 (affected persons), and parts of S4 (privacy/freedom rights)
- Estimated effort reduction: 30-40% if the DPIA is comprehensive and current
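The reuse logic above can be sketched as a simple section mapping. The S1/S2/S4 labels follow the article; the DPIA section names and the helper function are illustrative assumptions, not a prescribed template:

```python
# Sketch: mapping reusable DPIA material onto FRIA sections per Article 27(4).
# Section labels (S1, S2, S4) follow the article; DPIA section names are
# illustrative assumptions, not an official template.

DPIA_TO_FRIA = {
    "S1_system_description": "DPIA: processing description and data flows",
    "S2_affected_persons": "DPIA: data subjects and vulnerable groups",
    "S4_privacy_freedom_rights": "DPIA: privacy risk assessment (partial reuse)",
}

FRIA_SECTIONS = [
    "S1_system_description",
    "S2_affected_persons",
    "S3_non_privacy_rights",  # dignity, non-discrimination, workers' rights
    "S4_privacy_freedom_rights",
    "S5_mitigation_and_governance",
]

def fresh_work_needed(fria_sections, reuse_map):
    """Return the FRIA sections with no reusable DPIA counterpart."""
    return [s for s in fria_sections if s not in reuse_map]
```

Running `fresh_work_needed(FRIA_SECTIONS, DPIA_TO_FRIA)` isolates the genuinely new work — the non-privacy rights analysis and the FRIA-specific governance layer — which is where the remaining 60-70% of effort sits.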
A Unified Assessment Architecture
For organisations with multiple AI systems, the most efficient approach is a unified assessment architecture. Instead of treating the FRIA and DPIA as separate exercises, create a shared 'processing activity nucleus' that captures common data once:
- Entity context (organisation details, sector, jurisdictions) — used by both DPIA and FRIA
- Data context (data categories, subjects, volume) — used by DPIA, reusable in FRIA
- Processing context (purposes, legal basis, automation level) — used by both
- Transfer context (recipients, locations, mechanisms) — drives TIA, referenced in both
- Technology context (AI system details, risk classification) — FRIA-specific but shared with AI inventory
This approach eliminates duplicate data entry and ensures consistency across assessments. When the processing activity changes, all linked assessments are flagged for review.
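The nucleus described above can be modelled as a shared record that every assessment references. This is a minimal sketch, assuming a Python implementation; the field and class names are illustrative, not a mandated schema:

```python
from dataclasses import dataclass, field

@dataclass
class ProcessingActivityNucleus:
    """Shared 'processing activity nucleus': common context captured once
    and referenced by every linked assessment (DPIA, FRIA, TIA, ...)."""
    entity_context: dict      # organisation, sector, jurisdictions
    data_context: dict        # data categories, subjects, volume
    processing_context: dict  # purposes, legal basis, automation level
    transfer_context: dict    # recipients, locations, mechanisms
    technology_context: dict  # AI system details, risk classification
    linked_assessments: list = field(default_factory=list)
    needs_review: set = field(default_factory=set)

    def update(self, section: str, new_value: dict) -> None:
        """Change one context block and flag all linked assessments for review."""
        setattr(self, section, new_value)
        self.needs_review.update(self.linked_assessments)
```

With this structure, a single change — say, adding a new data category — automatically flags every linked DPIA and FRIA for review, which is the consistency guarantee the unified architecture is meant to deliver.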
When Do You Need Both?
Most high-risk AI systems that require a FRIA will also require a DPIA. This is because high-risk AI systems almost always involve processing personal data of individuals whose rights are affected.
The main scenario where you'd have a FRIA without a DPIA is an AI system that doesn't process personal data but still impacts fundamental rights — for example, an AI used in critical infrastructure management that affects safety rights without processing individual data. In practice, this is rare.
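The decision logic in this section can be expressed as a short helper. This is an illustrative sketch of the triggers summarised above, not legal advice, and the parameter names are assumptions:

```python
def assessments_required(high_risk_ai: bool,
                         deployer_in_scope: bool,
                         processes_personal_data: bool,
                         high_risk_processing: bool) -> set:
    """Illustrative sketch (not legal advice): which assessments a given
    AI deployment may trigger under the rules summarised in this article."""
    required = set()
    if high_risk_ai and deployer_in_scope:
        required.add("FRIA")  # AI Act Art. 27: high-risk AI + in-scope deployer
    if processes_personal_data and high_risk_processing:
        required.add("DPIA")  # GDPR Art. 35: high-risk personal data processing
    return required
```

The common case returns both; the rare FRIA-only case corresponds to `processes_personal_data=False`, as in the critical infrastructure example above.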
Practical Recommendations
- Don't duplicate effort — audit your existing DPIAs first and map reusable sections
- Build integrated teams — your FRIA team should include your DPO from the start
- Use a shared data model — capture AI system information once, reference it in both assessments
- Align your review cycles — schedule DPIA and FRIA reviews together for each AI system
- Document the connection — explicitly note which FRIA sections reuse DPIA findings
- Prepare for regulatory scrutiny — market surveillance authorities will expect to see how your FRIA and DPIA relate
Key Takeaways
- The DPIA focuses on data protection; the FRIA covers all Charter fundamental rights
- Article 27(4) explicitly allows DPIA reuse within the FRIA, potentially saving 30-40% of effort
- A unified assessment architecture is the most efficient approach for multiple AI systems
- Most high-risk AI systems will require both a DPIA and a FRIA
- Start with your existing DPIAs and build the FRIA as a complementary layer