Fundamental Rights Impact Assessments: Who Needs One and What Goes In

The Fundamental Rights Impact Assessment (FRIA) is one of the EU AI Act’s most distinctive requirements. It applies to certain deployers of high-risk AI systems, it must be completed before the system is put into use, and it focuses on something most compliance frameworks ignore: the impact on fundamental rights.

If you deploy high-risk AI, you may need an FRIA. Here’s who’s in scope, what the assessment must cover, and how to approach it practically.

Who needs to conduct an FRIA

Article 27 requires an FRIA from three categories of deployers:

  1. Bodies governed by public law — government agencies, public authorities, and entities performing public functions
  2. Private entities providing public services — private companies operating in areas like healthcare, education, housing, and other essential services
  3. Deployers of certain credit and insurance systems — entities using AI to evaluate creditworthiness or establish credit scores, or for risk assessment and pricing in life and health insurance (Annex III, points 5(b) and (c)), whether or not they provide a public service

The second and third categories are broader than they first appear. If your company provides financial services, insurance products, educational services, or other services that the Act considers “essential,” you likely fall within scope. A private bank using AI for credit decisioning, an insurer using AI to price life or health insurance, or a private educational institution using AI for admissions decisions all need to conduct FRIAs.

For deployers outside these categories, an FRIA is not required under Article 27. However, conducting one voluntarily is good practice — it demonstrates due diligence, supports your GDPR obligations (where a Data Protection Impact Assessment may already be required), and provides evidence of responsible AI governance if questions arise.

What the FRIA must contain

Article 27(1) sets out the required content. The assessment must include:

A description of the deployer’s processes

Describe the specific business processes where the AI system will be used. Not a general description of the system, but how your organisation uses it. What decisions does it inform? What workflows does it sit within? Who interacts with it and in what capacity?

For example, if you’re deploying an AI credit scoring system, describe: when in the customer journey the system is consulted, what data inputs it receives, how its output is used in the credit decision, and who makes the final decision.

The period and frequency of use

How often is the system used? Is it always active, or activated for specific tasks? Is it seasonal? This context helps assess the scale of potential impact.

Categories of people affected

Identify who is affected by the AI system’s operation. This includes:

  • Direct subjects: the people the AI system makes decisions about (job applicants, loan applicants, students)
  • Indirect subjects: people affected by decisions about others (dependents of someone denied credit, colleagues of a terminated employee)
  • Vulnerable groups: are any of the affected people in vulnerable categories (children, elderly, people with disabilities, economically disadvantaged)?

Be specific. “Customers” is insufficient. “Retail banking customers applying for personal loans in the UK and EU markets, including potentially vulnerable customers in financial hardship” is more appropriate.

Specific risks to fundamental rights

This is the core of the FRIA. Assess how the AI system might affect specific fundamental rights, as set out in the EU Charter of Fundamental Rights. The most commonly relevant rights include:

Non-discrimination (Article 21 of the Charter). Could the AI system treat people differently based on protected characteristics? AI systems trained on historical data frequently embed and amplify existing biases. A hiring tool trained on past hiring decisions may discriminate against candidates from underrepresented groups. A credit scoring model may disadvantage applicants from certain postcodes or demographic profiles.

Privacy and data protection (Articles 7 and 8). What personal data does the system process? How is it collected, stored, and used? Does the system process special category data (biometric, health, political opinions)? Is the data processing proportionate to the purpose?

Human dignity (Article 1). Does the AI system treat people with dignity? Systems that reduce people to scores, categories, or automated decisions without recourse can undermine dignity, particularly when the decisions have significant life impact.

Freedom of expression (Article 11). Could the system restrict or chill free expression? Content moderation AI, for example, can suppress legitimate speech.

Right to an effective remedy (Article 47). Can people affected by the AI system challenge its decisions? Is there a meaningful complaints and redress mechanism?

Rights of the child (Article 24). If the system affects children, are their best interests considered?

For each right, assess (a structured record sketch follows this list):

  • Whether the system could negatively affect it
  • The likelihood of such an impact
  • The severity if it occurs
  • The number of people potentially affected
  • Whether the impact is reversible
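These criteria lend themselves to a consistent structure, so every right is assessed against the same fields. Here is a minimal sketch in Python; the class, field names, and example values are illustrative, not a format prescribed by the Act:

```python
from dataclasses import dataclass
from enum import Enum

class Level(Enum):
    LOW = 1
    MEDIUM = 2
    HIGH = 3

@dataclass
class RightsImpact:
    """One entry per Charter right assessed for a deployed AI system."""
    charter_right: str       # e.g. "Non-discrimination (Art. 21)"
    could_be_affected: bool  # could the system negatively affect it?
    likelihood: Level        # how likely is the impact?
    severity: Level          # how severe if it occurs?
    people_affected: int     # estimated number of people exposed
    reversible: bool         # can the harm be undone?
    scenario: str            # concrete scenario, not an abstract label

# Illustrative entry for a credit scoring deployment
entry = RightsImpact(
    charter_right="Non-discrimination (Art. 21)",
    could_be_affected=True,
    likelihood=Level.MEDIUM,
    severity=Level.HIGH,
    people_affected=50_000,
    reversible=True,
    scenario=("Training data reflecting historical lending bias could "
              "systematically lower scores for some applicant groups."),
)
```

One record per right keeps the assessment comparable across systems and makes gaps obvious when a field is left empty.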

Human oversight measures

Describe the measures you have in place to ensure human oversight of the AI system. Who is responsible? What is their authority? How can they intervene? What training have they received?

The FRIA should demonstrate that human oversight is genuine, not nominal. If the oversight person reviews 500 AI decisions per hour (roughly seven seconds per decision), they’re not providing meaningful oversight.

Measures for redress

Describe how people affected by the AI system’s decisions can seek redress. This includes:

  • How they can find out an AI system was used in a decision affecting them
  • How they can challenge or appeal the decision
  • How complaints are handled and within what timeframe
  • Whether alternative, non-AI decision processes are available

Notification to the market surveillance authority

Under Article 27(3), deployers must notify the relevant market surveillance authority of the results of the FRIA, submitting the filled-out template that the AI Office is to develop. For the assessment to be useful to the authority, it needs to be substantive, specific, and honest about identified risks and mitigations.

How to conduct an FRIA in practice

Step 1: Assemble the right team

An FRIA shouldn’t be written by a single compliance officer in isolation. You need input from:

  • Legal/compliance — to interpret the fundamental rights framework and regulatory requirements
  • Technical/engineering — to explain how the system works, its limitations, and its failure modes
  • Operations — to describe how the system is actually used in practice
  • Ethics/diversity (if available) — to identify risks that technical and legal teams may miss
  • Affected stakeholders (where feasible) — representatives of the groups affected by the system can provide perspectives that internal teams lack

Step 2: Map the system’s operation

Before assessing rights impacts, you need a clear picture of how the system operates within your organisation. Document the following (a template sketch follows this list):

  • Data inputs and their sources
  • The system’s decision logic (at whatever level of detail the provider can share)
  • How outputs are used — automatically acted on, or reviewed by humans?
  • Feedback loops — do decisions affect future system behaviour?
  • Integration points with other systems
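To keep this mapping consistent across deployments, it can help to fill in a fixed template per system. A hypothetical sketch; the field names and values are ours, not mandated by the Act or any regulator:

```python
# Hypothetical system map for a credit scoring deployment.
# Field names and values are illustrative only.
system_map = {
    "system": "credit-scoring-v2",
    "data_inputs": {
        "credit_bureau_file": "external bureau API",
        "transaction_history": "internal core banking platform",
    },
    "decision_logic": "provider shares feature list and model card only",
    "output_use": "score reviewed by a credit officer before any rejection",
    "feedback_loops": "approved loans feed future retraining datasets",
    "integrations": ["loan origination system", "CRM"],
}
```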

Step 3: Identify affected rights

Work through the Charter rights systematically. For each right, ask: could this system, in normal operation or in failure, negatively affect this right? Don’t limit yourself to the obvious cases. A credit scoring system obviously engages non-discrimination. But does it also affect human dignity (reducing people to a score)? Privacy (processing extensive financial data)? Freedom of movement (if denial of credit affects someone’s ability to rent housing in another country)?

Step 4: Assess likelihood and severity

For each identified risk, assess how likely it is to materialise and how severe the consequences would be. Use concrete scenarios rather than abstract ratings. “If the system’s training data reflects historical lending discrimination, applicants from [specific demographic] may receive systematically lower scores, resulting in higher rejection rates” is more useful than “medium risk of discrimination.”
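Ratings can still help you triage which scenarios to mitigate first, provided each rating is anchored to a concrete scenario like the one above. A minimal sketch of likelihood-times-severity triage; the 3×3 scale and band thresholds are illustrative choices, not anything the Act prescribes:

```python
# Simple triage helper: combine likelihood and severity into a
# priority band. Scale and thresholds are illustrative only.
LEVELS = {"low": 1, "medium": 2, "high": 3}

def risk_priority(likelihood: str, severity: str) -> str:
    """Map a (likelihood, severity) pair to a review-priority band."""
    score = LEVELS[likelihood] * LEVELS[severity]
    if score >= 6:
        return "mitigate before deployment"
    if score >= 3:
        return "mitigate and monitor"
    return "monitor"

print(risk_priority("medium", "high"))  # -> "mitigate before deployment"
```

The band tells you where to spend mitigation effort; the written scenario remains what the assessment is actually about.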

Step 5: Document mitigations

For each identified risk, describe what measures you have in place or plan to implement. These might include:

  • Bias testing and monitoring (see the sketch after this list)
  • Human review of high-impact decisions
  • Appeal and redress mechanisms
  • Regular audits of system performance across demographic groups
  • Training for oversight staff
  • Limits on the system’s decision authority
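As a concrete starting point for the bias testing and demographic audit items, you can compare outcome rates across groups. A minimal sketch using the selection-rate ratio; the groups, counts, and the 0.8 threshold (the common “four-fifths” screening heuristic) are illustrative, and the right metrics depend on your system:

```python
# Minimal demographic audit: compare approval rates across groups.
# Groups, counts, and the 0.8 threshold are illustrative only.
approvals = {"group_a": 840, "group_b": 560}    # approved applications
totals    = {"group_a": 1000, "group_b": 1000}  # total applications

rates = {g: approvals[g] / totals[g] for g in totals}
ratio = min(rates.values()) / max(rates.values())

print(f"approval rates: {rates}")
print(f"selection-rate ratio: {ratio:.2f}")   # 0.56 / 0.84 = 0.67 here
if ratio < 0.8:
    print("flag for review: disparity exceeds the four-fifths heuristic")
```

A failed check is a prompt for investigation, not proof of discrimination; the point is to detect disparities early and document what you did about them.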

Step 6: Review and update

The FRIA is not a one-off document. It must be updated when:

  • The AI system is materially changed
  • The use case or affected population changes
  • Monitoring reveals new risks
  • Relevant case law or regulatory guidance is published

Establish a review schedule — annually at minimum, and triggered by material changes.

FRIA vs. DPIA

If you already conduct Data Protection Impact Assessments under GDPR, you may be wondering how the FRIA relates. They overlap but serve different purposes:

  • DPIA focuses on risks to personal data and privacy. It’s required when processing is “likely to result in a high risk to the rights and freedoms of natural persons.”
  • FRIA focuses on a broader set of fundamental rights, including non-discrimination, dignity, effective remedy, and others that extend beyond data protection.

A DPIA may inform parts of the FRIA, particularly around data processing and privacy risks, and Article 27(4) makes the relationship explicit: where a DPIA is already required, the FRIA complements it. But the FRIA’s scope is wider. You can’t substitute one for the other, though conducting them together for the same AI system is efficient and reduces duplication.

Getting started

If you haven’t started your FRIA process, begin with your highest-risk deployment — the AI system that affects the most people in the most consequential way. Complete that assessment thoroughly, then use it as a template for subsequent assessments.

The FRIA is a governance exercise, not a box-ticking exercise. Done well, it helps you understand and manage the actual risks your AI systems pose to people. Done poorly, it satisfies nobody: not the regulator, not the people affected, and not your organisation’s leadership when something goes wrong.
