Article 27 of the EU AI Act introduces the fundamental rights impact assessment (FRIA) — a mandatory obligation for certain deployers of high-risk AI systems. With the 2 August 2026 deadline approaching, this article explains exactly who must comply, what a FRIA involves, and how to prepare.
FRIA Decision Flow — Am I Required to Do a FRIA?
Simplified decision tree based on Article 27(1) and Annex III. Use the FRIA Screening Tool for a full interactive assessment.
Who Must Conduct a FRIA?
Not every organisation that uses AI is obliged to conduct a FRIA. Article 27(1) specifies three categories of deployers that must perform this assessment before putting a high-risk AI system into use:
- Public authorities and bodies (government departments, agencies, local authorities)
- Private entities providing public services (healthcare providers, utility companies, public transport operators)
- Deployers using AI for creditworthiness assessment or life and health insurance risk/pricing (Annex III, point 5(b) and (c))
Crucially, the obligation applies to deployers — not providers. A deployer is the organisation using the AI system under its own authority, while the provider is the entity that developed it. This distinction matters because the deployer is closest to the actual impact on individuals' fundamental rights.
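The screening logic above can be sketched in code. This is a simplified illustration of the Article 27(1) test, not legal advice; the enum values and function name are our own, and a real assessment also depends on the Annex III classification and the exemptions the Act provides:

```python
from enum import Enum, auto

class DeployerType(Enum):
    PUBLIC_BODY = auto()              # government departments, agencies, local authorities
    PUBLIC_SERVICE_PROVIDER = auto()  # private entities providing public services
    CREDIT_OR_INSURANCE = auto()      # Annex III, point 5(b)/(c) deployers
    OTHER = auto()

def fria_required(is_high_risk: bool, deployer_type: DeployerType) -> bool:
    """Simplified Article 27(1) screening: a FRIA is required only when the
    system is high-risk (Annex III) AND the deployer falls into one of the
    three listed categories. Illustrative only -- not legal advice."""
    if not is_high_risk:
        return False
    return deployer_type in {
        DeployerType.PUBLIC_BODY,
        DeployerType.PUBLIC_SERVICE_PROVIDER,
        DeployerType.CREDIT_OR_INSURANCE,
    }

# Example: a utility company deploying a high-risk Annex III system
print(fria_required(True, DeployerType.PUBLIC_SERVICE_PROVIDER))  # True
```

Note that the deployer type is evaluated independently of who built the system, mirroring the deployer/provider distinction above.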
What Does a FRIA Cover?
The FRIA must assess the impact of the high-risk AI system on the fundamental rights enshrined in the EU Charter of Fundamental Rights. Article 27(1) specifically lists the following elements:
- A description of the deployer's processes using the high-risk AI system
- The period of time and frequency for which the system will be used
- Categories of persons and groups likely to be affected
- Specific risks of harm likely to impact the identified categories
- Description of human oversight measures
- Measures to be taken if risks materialise, including governance and complaint mechanisms
The assessment must be performed before the high-risk AI system is put into use. It is not a one-time exercise — deployers should update it when circumstances change materially.
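The required content items lend themselves to a structured record. The sketch below mirrors the Article 27(1) list as a data structure; the class and field names are our own invention, not official terminology:

```python
from dataclasses import dataclass, field

@dataclass
class FriaRecord:
    """Illustrative structure mirroring the Article 27(1) content items;
    field names are hypothetical, not official terminology."""
    process_description: str                  # deployer's processes using the system
    period_and_frequency_of_use: str          # how long and how often it will be used
    affected_groups: list = field(default_factory=list)     # categories of persons/groups affected
    specific_risks: list = field(default_factory=list)      # risks of harm to those groups
    human_oversight_measures: list = field(default_factory=list)
    mitigation_and_governance: list = field(default_factory=list)  # incl. complaint mechanisms

    def is_complete(self) -> bool:
        # Every item must be filled in before the system is put into use.
        return all([
            self.process_description,
            self.period_and_frequency_of_use,
            self.affected_groups,
            self.specific_risks,
            self.human_oversight_measures,
            self.mitigation_and_governance,
        ])
```

A record like this also supports the "not a one-time exercise" point: re-running `is_complete()` after a material change flags which items need reassessment.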
The Article 27(4) DPIA Bridge
One of the most important provisions for practitioners is Article 27(4), which creates a bridge between the FRIA and the data protection impact assessment (DPIA) under Article 35 GDPR:
"Where the obligations set out in paragraph 1 of this Article are already met through the data protection impact assessment carried out pursuant to Article 35 of Regulation (EU) 2016/679 [...] the fundamental rights impact assessment [...] shall complement that data protection impact assessment."
This means that if you have already conducted a DPIA for the same AI system, you can reuse relevant sections of that assessment. In practice, a comprehensive DPIA may cover 30-40% of what a FRIA requires, particularly around data processing, privacy risks, and data subject impacts.
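The Article 27(4) reuse logic can be made concrete as a coverage check. The mapping below is purely illustrative — both the DPIA section names and the FRIA field names are hypothetical, and which sections genuinely carry over must be assessed case by case:

```python
# Hypothetical mapping of existing DPIA sections to FRIA content items,
# reflecting the Article 27(4) reuse logic. All names are invented.
DPIA_TO_FRIA = {
    "processing_description":  "process_description",
    "data_subject_categories": "affected_groups",
    "risk_assessment":         "specific_risks",
}

def reusable_fields(dpia_sections: set) -> set:
    """FRIA items an existing DPIA may already cover."""
    return {fria for dpia, fria in DPIA_TO_FRIA.items() if dpia in dpia_sections}

def remaining_fields(dpia_sections: set, all_fria_fields: set) -> set:
    """FRIA items that still need fresh assessment work."""
    return all_fria_fields - reusable_fields(dpia_sections)
```

Running a gap analysis like this early shows exactly where the FRIA must go beyond the DPIA — typically the non-privacy Charter rights, human oversight, and complaint mechanisms.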
Timeline and Deadline
The FRIA obligation applies from 2 August 2026 — the date when the full provisions on high-risk AI systems become applicable. Deployers who are already using high-risk AI systems on that date must conduct their FRIA as soon as possible and no later than the application deadline.
The EU AI Office is expected to publish a FRIA template questionnaire under Article 27(5), but this has not yet been released. Organisations should not wait for this template before starting their preparation — the substantive requirements in Article 27(1) are already clear enough to begin.
Market Surveillance Notification
Article 27(3) also requires deployers to notify the relevant market surveillance authority of the results of the FRIA. This is a significant obligation that goes beyond internal documentation — it creates an external accountability mechanism.
The notification must be submitted to the market surveillance authority designated by the relevant member state. In most EU member states, this is the national data protection authority or a dedicated AI authority.
Practical Preparation Steps
Inventory your AI systems
Identify all AI systems in use and classify them against Annex III categories to determine which are high-risk.
Determine your deployer type
Confirm whether you fall within the Article 27(1) categories as a public body, public service provider, or credit/insurance deployer.
Audit existing DPIAs
Map which existing DPIAs can be leveraged under Article 27(4) to reduce the effort required for your FRIA.
Identify affected rights
Map each AI system to the EU Charter rights it could impact — covering all six substantive titles (dignity, freedoms, equality, solidarity, citizens' rights, justice).
Build your assessment team
Involve legal, technical, DPO, and affected stakeholder representatives from the outset.
Start documenting
Begin recording processes, human oversight measures, and risk mitigation strategies before the template is published.
Monitor for the official template
Track the EU AI Office for the template publication — but don't wait for it to begin preparation.
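The preparation steps above can be tracked against the application date with a minimal readiness sketch. The step identifiers simply paraphrase the list above and are not official terms:

```python
from datetime import date

# Hypothetical tracker; step names paraphrase the preparation list above.
PREP_STEPS = [
    "inventory_ai_systems",
    "determine_deployer_type",
    "audit_existing_dpias",
    "identify_affected_rights",
    "build_assessment_team",
    "start_documenting",
    "monitor_official_template",
]

APPLICATION_DATE = date(2026, 8, 2)  # Article 27 obligations apply from this date

def readiness(completed: set, today: date) -> tuple:
    """Return (share of steps done, days remaining until the application date)."""
    done = sum(1 for step in PREP_STEPS if step in completed)
    return done / len(PREP_STEPS), (APPLICATION_DATE - today).days
```

Even a rough tracker like this makes the "start now" point measurable: the fraction complete versus the days remaining.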
Key Takeaways
The FRIA is mandatory for public bodies, public service providers, and credit/insurance deployers of high-risk AI
It must be conducted before the AI system is put into use
Existing DPIAs can be reused under Article 27(4) to reduce effort
The August 2026 deadline is approaching — preparation should begin now
Market surveillance authorities must be notified of FRIA results
The EU AI Office template is not yet published but the requirements are clear