Fundamental Rights Impact Assessments (FRIA): Who Needs Them?

Anna Lisowska

⚡ TL;DR

  • Article 27 of the EU AI Act requires certain deployers of high-risk AI systems to conduct a Fundamental Rights Impact Assessment (FRIA) before deployment. “Certain deployers” is defined more broadly than most organisations expect.
  • The FRIA is distinct from — but can be integrated with — a GDPR Data Protection Impact Assessment (DPIA). Article 27(4) provides that the FRIA complements any DPIA that already covers part of the same ground, so running both as a single combined assessment is the most efficient approach.
  • Even for deployers not explicitly required to conduct a FRIA, doing one is strong governance practice that demonstrates due diligence and substantially reduces liability exposure when something goes wrong.

Of all the EU AI Act obligations, the Fundamental Rights Impact Assessment is among the most misunderstood. Some organisations assume it applies only to government bodies. Others conflate it with the GDPR’s Data Protection Impact Assessment and assume one document covers both. Still others — particularly in financial services, insurance, and critical infrastructure — assume it doesn’t apply to private sector entities at all.

All three assumptions are incorrect. Article 27’s FRIA obligation applies to a specific set of deployers whose activities touch on fundamental rights in predictable and significant ways — and that set includes major private sector actors. Understanding precisely who must conduct a FRIA, what it must contain, and how it integrates with existing GDPR processes is the starting point for any deployer’s compliance programme.

For the broader deployer obligation framework, see our post on provider vs. deployer responsibilities. For how the DPO should manage the FRIA alongside GDPR obligations, see our post on the DPO’s new role in AI governance.

Who Must Conduct a FRIA Under Article 27?

Article 27(1) defines the deployer categories required to conduct a FRIA before deploying a high-risk AI system. The list is more extensive than most compliance teams initially recognise:

| Deployer Category | Article 27 Trigger | Typical Sectors | FRIA Mandatory? |
| --- | --- | --- | --- |
| Public bodies and public authorities | Any use of a high-risk AI system in the exercise of public powers or in the provision of public services | Government agencies, municipalities, law enforcement, judicial bodies, tax authorities | Yes — always |
| Entities providing essential private services to the public | Use of AI in decisions affecting access to essential services with broad public impact | Banks, insurers, credit institutions, critical infrastructure operators, healthcare providers | Yes — for Annex III §5 systems |
| Employers conducting large-scale profiling | Systematic use of AI for employment decisions affecting large numbers of workers or applicants | Large employers using AI for hiring, performance management, workforce analytics at scale | Yes — where systematic and large-scale |
| All other private deployers | General use of high-risk AI systems | Other private sector companies | Not mandatory — but strongly recommended |

The practical significance: financial services companies (banks, insurers, payment processors) using AI for credit scoring, insurance underwriting, or fraud detection are covered under the “essential private services” category. Large technology companies using AI for employment decisions at scale are covered under the “large-scale profiling” category. These are not edge cases — they are among the most commercially significant AI deployments in the EU economy, and their deployers are required to conduct FRIAs before deployment.
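To make the triage concrete, here is a minimal sketch of how a compliance intake workflow might encode the applicability check from the table above. The field names, category flags, and return values are our own illustrative simplifications of that table, not statutory language; real applicability calls belong with legal counsel.

```python
from dataclasses import dataclass

@dataclass
class Deployment:
    """Intake facts about a planned high-risk AI deployment (illustrative fields)."""
    is_public_body: bool                    # body governed by public law / public authority
    provides_essential_services: bool       # banking, insurance, healthcare, infrastructure
    uses_annex_iii_section_5_system: bool   # e.g. credit scoring, insurance risk pricing
    large_scale_employment_profiling: bool  # systematic AI employment decisions at scale

def fria_required(d: Deployment) -> tuple[bool, str]:
    """Rough triage mirroring the category table above; not legal advice."""
    if d.is_public_body:
        return True, "public body deploying a high-risk AI system"
    if d.provides_essential_services and d.uses_annex_iii_section_5_system:
        return True, "essential private services using an Annex III section 5 system"
    if d.large_scale_employment_profiling:
        return True, "systematic large-scale employment profiling"
    return False, "not mandatory, but a voluntary FRIA is strongly recommended"

# Example: a bank deploying an AI credit-scoring model
bank = Deployment(is_public_body=False, provides_essential_services=True,
                  uses_annex_iii_section_5_system=True,
                  large_scale_employment_profiling=False)
print(fria_required(bank))
```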

What a FRIA Must Cover: The Article 27 Requirements

Article 27(1) specifies the minimum content of a FRIA. The assessment must document:

  1. A description of the deployer’s processes in which the high-risk AI system will be used — including the specific decision types it supports, the affected population, and the deployment context.
  2. The period and frequency of the system’s use.
  3. Categories of natural persons and groups likely to be affected — with particular attention to vulnerable groups and populations with heightened fundamental rights exposure.
  4. The specific fundamental rights at risk, given the system’s use case and the affected population. This is the core assessment: which rights from the EU Charter of Fundamental Rights are potentially impacted, and what is the nature and severity of that impact?
  5. The safeguards and mitigations the deployer will implement to address each identified fundamental rights risk — including technical controls, operational procedures, and oversight mechanisms.
  6. The measures to be taken where those risks materialise — including the arrangements for internal governance and the complaint mechanisms through which affected persons can raise concerns.

Once the assessment has been performed, the deployer must notify the market surveillance authority of its results, submitting the filled-out template questionnaire that the AI Office is required to develop (Article 27(3) and (5)).
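Teams that track assessments in a structured system may find it useful to capture these six items as a record with one field per requirement. A minimal sketch in Python, with field names of our own invention (only the substance of each item comes from Article 27):

```python
from dataclasses import dataclass

@dataclass
class FriaRecord:
    """One field per Article 27 content item; names are our own convention."""
    process_description: str        # item 1: process, decision types, deployment context
    period_and_frequency: str       # item 2: how long and how often the system is used
    affected_groups: list[str]      # item 3: categories of persons, incl. vulnerable groups
    rights_at_risk: dict[str, str]  # item 4: Charter right -> nature and severity of impact
    safeguards: dict[str, str]      # item 5: identified risk -> planned mitigation
    incident_measures: str          # item 6: governance and complaint mechanisms
```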

The Fundamental Rights Framework: What Rights the FRIA Covers

The fundamental rights examined in a FRIA are those protected by the EU Charter of Fundamental Rights. For high-risk AI systems, the rights most commonly implicated are:

| Charter Right | Article | Typical AI Risk Scenario | FRIA Assessment Focus |
| --- | --- | --- | --- |
| Human Dignity | Art. 1 | Dehumanising treatment in automated systems; systems that treat individuals as data points rather than persons | Does the system’s design respect the inherent worth of affected individuals? |
| Non-Discrimination & Equality | Art. 21–23 | Disparate impact on protected groups in hiring, credit, housing, education, or criminal justice | Are there demographic disparities in AI outputs? Are mitigations adequate? |
| Privacy & Data Protection | Art. 7–8 | Excessive data collection; use of data beyond original purpose; re-identification risks | Does the system process personal data proportionately and with appropriate safeguards? |
| Freedom of Expression & Information | Art. 11 | AI content moderation, recommendation systems that suppress or amplify certain viewpoints | Does the system affect individuals’ access to information or ability to express views? |
| Right to Work & Fair Employment | Art. 15, 31 | AI-driven hiring and termination decisions; automated performance management affecting working conditions | Are AI employment decisions fair, transparent, and contestable by affected workers? |
| Access to Justice & Effective Remedy | Art. 47 | AI decisions that lack explainability or that individuals cannot challenge effectively | Can affected individuals understand and challenge AI-assisted decisions through accessible processes? |
| Rights of the Child | Art. 24 | AI in education; content recommendation algorithms; parental decision AI | Are the best interests of children protected as a primary consideration in the system’s design? |
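The table can double as a working checklist during the assessment itself. A small sketch of that idea, with the rights condensed into a dictionary of focus questions (the phrasing is abbreviated from the table above, not quoted from the Charter):

```python
# Checklist keyed by Charter article; questions condensed from the table above.
CHARTER_CHECKLIST = {
    "Art. 1 (human dignity)": "Does the design respect the inherent worth of affected individuals?",
    "Art. 7-8 (privacy & data protection)": "Is personal data processed proportionately, with safeguards?",
    "Art. 11 (expression & information)": "Does the system affect access to information or expression?",
    "Art. 15/31 (work & fair conditions)": "Are employment decisions fair, transparent, contestable?",
    "Art. 21-23 (non-discrimination)": "Are there demographic disparities in outputs?",
    "Art. 24 (rights of the child)": "Are children's best interests a primary consideration?",
    "Art. 47 (effective remedy)": "Can individuals understand and challenge AI-assisted decisions?",
}

def open_items(confirmed: set[str]) -> list[str]:
    """Return the Charter rights not yet assessed and signed off."""
    return [right for right in CHARTER_CHECKLIST if right not in confirmed]

print(open_items({"Art. 1 (human dignity)"}))  # six rights still to assess
```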

The FRIA Template: A Practical Section-by-Section Guide

The following template structure satisfies the Article 27(1) content requirements and integrates with the GDPR DPIA structure to support a combined assessment where relevant.

Section 1: Deployer and System Identification

  • Deployer organisation name, legal form, and Article 27 category (public body / essential services / large-scale employer)
  • AI system name, version, and provider identity
  • Provider’s EU database registration reference and Declaration of Conformity reference
  • Intended deployment context and business process the system will support
  • Assessment date, lead assessor, and approval authority

Section 2: Process Description and Scope

  • Detailed description of the process in which the AI system will be used — workflow steps, decision types, human involvement, and the AI’s specific role in each step
  • Deployment frequency and volume — how often the system makes or informs decisions, and the approximate number of individuals affected
  • Geographic scope — member states where the system will be deployed
  • Duration — planned deployment period

Section 3: Affected Population Analysis

  • Categories of natural persons subject to AI-assisted decisions — with specific attention to vulnerable or protected groups
  • For each affected group: the nature of the decision affecting them, the potential consequences (adverse and beneficial), and the group’s particular vulnerability to AI-related harm
  • Groups warranting heightened scrutiny: persons with disabilities, minors, ethnic or religious minorities, persons with mental health conditions, economically vulnerable populations

Section 4: Fundamental Rights Risk Assessment

  • For each relevant Charter right: the specific risk scenario; likelihood assessment (certain / probable / possible / unlikely); severity assessment (severe / significant / moderate / minor); and whether the risk is mitigated or unmitigated (a scoring sketch follows this list)
  • Cross-reference to the Technical File’s risk management system (Article 9) — the FRIA should be consistent with the provider’s documented risk assessment, not a separate document reaching different conclusions
  • Particular attention to cumulative risks — where multiple moderate risks across multiple rights may have a severe aggregate impact on specific affected groups
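Section 4’s ordinal scales lend themselves to a simple scoring matrix. A minimal sketch, assuming a four-level matrix with illustrative cut-offs (the Act prescribes no particular scoring scheme, so treat these thresholds as a policy choice to calibrate):

```python
# Ordinal scales from Section 4; index 0 is the worst case on each scale.
LIKELIHOOD = ["certain", "probable", "possible", "unlikely"]
SEVERITY = ["severe", "significant", "moderate", "minor"]

def risk_level(likelihood: str, severity: str) -> str:
    """Combine the two scales into a coarse level; cut-offs are illustrative."""
    score = LIKELIHOOD.index(likelihood) + SEVERITY.index(severity)
    if score <= 1:
        return "high"    # e.g. certain/severe or probable/severe
    if score <= 3:
        return "medium"  # e.g. possible/significant
    return "low"         # e.g. unlikely/minor

print(risk_level("probable", "severe"))       # high
print(risk_level("possible", "significant"))  # medium
print(risk_level("unlikely", "minor"))        # low
```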

Section 5: Safeguards and Mitigations

  • For each identified risk: the specific safeguard addressing it; whether the safeguard is technical (built into the system), operational (process-based), or legal (contractual or policy-based); and the named person responsible for maintaining the safeguard
  • Human oversight measures — how affected individuals can request human review of AI-assisted decisions, and the timeline and procedure for such review (integrating GDPR Article 22 rights where applicable)
  • Redress mechanisms — how individuals who believe they have been harmed by an AI-assisted decision can seek remedy
  • Monitoring commitments — how the deployer will monitor whether safeguards remain effective over the system’s deployment lifetime

Section 6: Residual Risk Assessment and Conclusion

  • Residual fundamental rights risk after mitigations: assessment of whether any rights are still at significant risk after all safeguards are in place
  • Deployment decision: proceed / proceed with additional conditions / do not deploy (a decision-rule sketch follows this list)
  • If residual risk is significant: prior consultation commitment — notification to the relevant supervisory authority before deployment
  • Review schedule: when the FRIA will be reviewed and by whom
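The Section 6 conclusion can likewise be expressed as a small decision rule over the residual per-right risk levels produced in Section 4. The thresholds below are illustrative governance policy, not a statutory test:

```python
def deployment_decision(residual_levels: list[str]) -> str:
    """Map residual per-right risk levels ('high'/'medium'/'low') to the
    Section 6 outcomes. Thresholds are illustrative governance policy."""
    if "high" in residual_levels:
        # Significant residual risk: Section 6 calls for escalation, not deployment
        return "do not deploy"
    if residual_levels.count("medium") >= 2:
        # Cumulative moderate risks (see Section 4) warrant extra conditions
        return "proceed with additional conditions"
    return "proceed"

print(deployment_decision(["low", "medium", "low"]))     # proceed
print(deployment_decision(["medium", "medium", "low"]))  # proceed with additional conditions
print(deployment_decision(["high", "low", "low"]))       # do not deploy
```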

Integrating the FRIA with Your GDPR DPIA

Article 27(4) provides that where any of the FRIA obligations are already met through a data protection impact assessment conducted under Article 35 of Regulation (EU) 2016/679, the FRIA complements that DPIA. In practical terms, the most efficient approach for AI systems that process personal data is a combined DPIA/FRIA — a single structured assessment document that satisfies both sets of requirements.

The combined document structure: lead with the FRIA structure (which is broader in scope, covering all fundamental rights); include the GDPR DPIA’s specific data protection risk assessment as a sub-section within Section 4 (Fundamental Rights Risk Assessment); and place the data subject rights analysis (GDPR Articles 13–22) within Section 5 (Safeguards), alongside the FRIA’s broader redress mechanism analysis. The European Data Protection Board’s published guidance on data subject rights provides useful reference material for the GDPR sections of the combined assessment.
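One way to keep the combined structure consistent across assessments is to encode the outline itself. A sketch using the template’s section names, with the DPIA material nested where the text above suggests (the nesting is our suggested convention, not mandated by either regulation):

```python
# Combined DPIA/FRIA outline: FRIA sections lead; DPIA material is nested.
COMBINED_OUTLINE = {
    "1. Deployer and System Identification": [],
    "2. Process Description and Scope": [],
    "3. Affected Population Analysis": [],
    "4. Fundamental Rights Risk Assessment": [
        "4.x GDPR DPIA: data protection risk assessment (GDPR Art. 35)",
    ],
    "5. Safeguards and Mitigations": [
        "5.x Data subject rights analysis (GDPR Arts. 13-22)",
    ],
    "6. Residual Risk Assessment and Conclusion": [],
}

for section, subsections in COMBINED_OUTLINE.items():
    print(section)
    for sub in subsections:
        print("   " + sub)
```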

For the DPO’s role in managing this integration, see our full post on the DPO’s new role in AI governance alongside GDPR.

Frequently Asked Questions

What is a FRIA under the EU AI Act and who must conduct one?

A Fundamental Rights Impact Assessment (FRIA) is a structured pre-deployment assessment that evaluates the impact of a high-risk AI system on fundamental rights protected by the EU Charter of Fundamental Rights. Article 27 requires FRIAs to be conducted by: all public bodies and public authorities deploying high-risk AI systems in the exercise of public powers; private entities providing essential services (banks, insurers, credit institutions, critical infrastructure operators, healthcare providers) using high-risk AI in service delivery; and private employers conducting systematic large-scale profiling of employees or job applicants using AI. All other private sector deployers are not explicitly required to conduct a FRIA but are strongly advised to do so as a matter of due diligence and liability management.

How is a FRIA different from a GDPR Data Protection Impact Assessment (DPIA)?

A DPIA (required under GDPR Article 35) specifically assesses risks to individuals’ privacy and data protection rights arising from personal data processing. A FRIA assesses impacts across the full spectrum of fundamental rights protected by the EU Charter — including non-discrimination, human dignity, freedom of expression, access to justice, and the rights of the child — not just privacy. The FRIA is therefore broader in scope than the DPIA. In practice, for AI systems that process personal data, Article 27(4) allows the FRIA to complement an existing DPIA, so the two can be conducted as a single combined assessment — the recommended approach for efficiency and consistency.

Does a private company need to conduct a FRIA?

It depends on the company’s sector and the nature of its AI deployment. Private companies in financial services (banks, insurers, payment processors), healthcare, energy, and transport infrastructure that use high-risk AI systems in service delivery are covered under Article 27’s “essential services” category and must conduct FRIAs. Large private employers using AI for systematic employment-related decisions at scale are covered under the “large-scale profiling” category. Other private sector deployers are not explicitly required to conduct a FRIA — but conducting one is strong governance practice that demonstrates due diligence, may be required by enterprise customers in procurement due diligence, and substantially reduces liability exposure if an AI-related harm later occurs.

When must a FRIA be completed — before or after deployment?

Before deployment — specifically, before the high-risk AI system is put into service in the context covered by the FRIA. Article 27(1) requires the assessment to be performed prior to deploying the system. The FRIA must be completed and its conclusion documented before the first affected individual is subject to the system, and the market surveillance authority must be notified of its results. A FRIA completed after deployment is not compliant with Article 27, even if it reaches the same conclusions a pre-deployment FRIA would have reached.

Does a FRIA need to be repeated if the AI system or its deployment context changes?

Yes. A FRIA is specific to a particular deployment context — the deployer’s process, the affected population, the decision types, and the safeguards in place at the time of assessment. If any of these change materially — the system is updated with new capabilities, deployed to a new population, or used for a significantly different decision type — a new or updated FRIA should be conducted. Additionally, the FRIA should be reviewed periodically (annually is a reasonable default) even without material changes, to assess whether the fundamental rights risk assessment remains accurate given changes in the social, legal, or operational environment. Build a FRIA review schedule into your AI governance calendar. See our post on AI system inventory management for how to track review schedules across your AI portfolio.
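These review triggers can be encoded in an AI inventory tool so that re-assessment is flagged automatically. A minimal sketch, assuming a material-change flag per trigger and an annual default interval (both are illustrative policy choices):

```python
from datetime import date, timedelta

def fria_review_due(last_review: date,
                    system_updated: bool,
                    new_population: bool,
                    new_decision_type: bool,
                    review_interval_days: int = 365) -> bool:
    """Flag a FRIA for review on any material change, or once the periodic
    interval (annual by default) has elapsed even without changes."""
    material_change = system_updated or new_population or new_decision_type
    overdue = date.today() - last_review > timedelta(days=review_interval_days)
    return material_change or overdue

# Example: no material change, last reviewed 15 Jan 2024
print(fria_review_due(date(2024, 1, 15), False, False, False))
```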

Need to conduct a FRIA for a high-risk AI deployment?

Unorma’s compliance platform includes a structured FRIA workflow aligned to Article 27 requirements — guiding your assessment through all six sections, integrating DPIA data where relevant, and storing completed assessments in your evidence vault with tamper-evident records.

Go to Full EU AI Act Compliance Guide →
