
Risk Management for AI Systems: What Article 9 Actually Requires

Article 9 of the EU AI Act requires providers of high-risk AI systems to establish, implement, document, and maintain a risk management system. This is not a risk assessment in the familiar sense: a one-time exercise you complete and file. It’s a continuous, iterative process that runs throughout the AI system’s entire lifecycle.

The distinction matters. Many organisations approach risk management as a project: identify risks, document mitigations, get sign-off, and move on. Article 9 explicitly rejects this approach. The risk management system must be “regularly and systematically updated” to reflect new information, changes in the system, and lessons from operational experience.

What Article 9 requires

The risk management system must include the following elements:

Identification and analysis of known and reasonably foreseeable risks

You need to identify the risks that your high-risk AI system poses to health, safety, and fundamental rights. These risks should be assessed both during normal operation and during conditions of reasonably foreseeable misuse.

“Reasonably foreseeable” is key. You’re not expected to anticipate every possible misuse, but you are expected to think beyond the intended use case. If you provide a recruitment screening tool, it’s reasonably foreseeable that a deployer might apply it to roles it wasn’t validated for. If you provide a credit scoring model, it’s reasonably foreseeable that edge cases in the input data could produce discriminatory outputs.

The analysis must consider risks both to individual users and to broader groups of affected people. A system that performs well on average but fails systematically for a specific demographic group presents a risk that an aggregate assessment would miss.

Estimation and evaluation of risks

Once identified, risks must be assessed for likelihood and severity. Article 9(2)(b) requires this assessment to consider the risks “that may emerge when the high-risk AI system is used in accordance with its intended purpose and under conditions of reasonably foreseeable misuse.”

This is where quantification matters. “There is some risk of bias” is an observation, not an evaluation. “Testing on [specific dataset] showed a 12-percentage-point accuracy gap between [demographic group A] and [demographic group B], affecting approximately [X%] of expected inputs” is an evaluation that supports informed decision-making about mitigations.

Where quantification isn’t possible, qualitative assessment with clear reasoning is the next best approach. But avoid vague risk ratings (“medium risk”) without supporting analysis.
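To make the contrast concrete, here is a minimal sketch of what a quantified evaluation looks like in practice. The group labels, test outputs, and the resulting gap are illustrative assumptions, not real measurements; the point is that the rating is derived from data, not asserted.

```python
# Hypothetical sketch: turning "some risk of bias" into a measurable evaluation.
# Group names and test results below are illustrative assumptions.

def accuracy(predictions, labels):
    """Fraction of predictions that match the ground-truth labels."""
    correct = sum(p == t for p, t in zip(predictions, labels))
    return correct / len(labels)

def accuracy_gap(results_by_group):
    """Largest pairwise accuracy difference across demographic groups."""
    accuracies = {
        group: accuracy(preds, labels)
        for group, (preds, labels) in results_by_group.items()
    }
    return max(accuracies.values()) - min(accuracies.values()), accuracies

# Illustrative per-group test outputs: (model predictions, true labels)
results = {
    "group_a": ([1, 1, 0, 1, 0, 1, 1, 0, 1, 1], [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]),
    "group_b": ([1, 0, 0, 1, 1, 1, 0, 0, 1, 0], [1, 1, 0, 1, 0, 1, 1, 0, 1, 1]),
}

gap, per_group = accuracy_gap(results)
print(f"Accuracy gap: {gap:.0%}")  # a figure an assessor can interrogate
```

A number like this, together with the dataset and methodology behind it, is what lets you defend a risk rating rather than merely assign one.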

Adoption of appropriate and targeted risk management measures

Identified risks must be mitigated. Article 9(2)(c) specifies that risk management measures should aim to eliminate or reduce risks “as far as possible through adequate design and development.” Where residual risks remain, mitigation measures should include “the provision of adequate information” (especially to deployers) and, where appropriate, training.

The Act establishes a hierarchy of controls:

  1. Design out the risk — modify the system to eliminate the risk source
  2. Reduce through engineering — add technical safeguards (input validation, output filtering, confidence thresholds)
  3. Inform and train — provide documentation and guidance to deployers and users
  4. Monitor and respond — detect when residual risks materialise and respond

This hierarchy mirrors established safety engineering principles. The most effective mitigations are baked into the system design. The least effective are instructions that rely on humans consistently following procedures.
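A level-2 engineering safeguard from the hierarchy above might look like the following sketch: a confidence threshold that routes low-confidence outputs to human review instead of acting on them automatically. The 0.85 threshold is an illustrative assumption that would in practice be justified by validation testing and documented in the risk file.

```python
# Hypothetical sketch of a level-2 engineering safeguard: a confidence
# threshold below which outputs are routed to human review.
# The 0.85 value is an illustrative assumption, not a prescribed figure.

AUTO_THRESHOLD = 0.85  # assumed to be set from validation data and documented

def route_decision(score: float) -> str:
    """Return how a model output should be handled given its confidence."""
    if score >= AUTO_THRESHOLD:
        return "automated"     # confidence high enough for automated handling
    return "human_review"      # residual risk managed by a human in the loop

print(route_decision(0.92))  # automated
print(route_decision(0.40))  # human_review
```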

Residual risk management

Article 9(4) acknowledges that some residual risk will remain after mitigations. The residual risk must be:

  • Assessed and judged acceptable in view of the system’s intended purpose
  • Communicated to deployers through the instructions for use
  • Considered in the context of the overall risk-benefit balance

If the residual risk is unacceptable, the system shouldn’t be deployed — or its use case should be narrowed until the residual risk falls within acceptable bounds.

Testing

Article 9(5) requires testing of the high-risk AI system “in order to identify the most appropriate and targeted risk management measures.” Testing must be performed “against prior defined metrics and probabilistic thresholds that are appropriate to the intended purpose.”

This means:

  • Define your performance metrics and acceptable thresholds before testing
  • Test against those metrics systematically
  • Use the results to inform your risk management measures
  • Document the test methodology, results, and how they influenced your risk decisions

Testing must cover the conditions of intended use and, as far as possible, conditions of reasonably foreseeable misuse.
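The "define metrics and thresholds first, then test" discipline can be sketched as follows. The metric names and threshold values here are illustrative assumptions; the Act requires that such thresholds exist and are appropriate to the intended purpose, not these particular figures.

```python
# Hypothetical sketch: acceptance thresholds defined *before* testing,
# then checked systematically. Metric names and values are assumptions.

THRESHOLDS = {
    "accuracy": 0.90,              # minimum acceptable
    "false_positive_rate": 0.05,   # maximum acceptable
}

def evaluate(measured: dict) -> dict:
    """Compare measured results against the pre-defined thresholds."""
    return {
        "accuracy": measured["accuracy"] >= THRESHOLDS["accuracy"],
        "false_positive_rate": (
            measured["false_positive_rate"] <= THRESHOLDS["false_positive_rate"]
        ),
    }

measured = {"accuracy": 0.93, "false_positive_rate": 0.07}
print(evaluate(measured))  # accuracy passes, FPR fails: a finding that
                           # should feed directly into mitigation decisions
```

The failing metric is the output that matters: it identifies where a risk management measure is needed, which is precisely the purpose Article 9(5) assigns to testing.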

What auditors will expect to see

Whether your system undergoes internal conformity assessment or third-party assessment by a notified body, the assessor will evaluate your risk management system. Here’s what they’ll look for:

A living document, not a snapshot

The risk management system must show evidence of iteration. A document dated 18 months ago with no updates suggests a one-off exercise, not a continuous process. Expect questions about:

  • When was the risk assessment last updated?
  • What triggered the last update?
  • What changed between versions?
  • What new risks have been identified since the initial assessment?

Traceability from risks to mitigations

For each identified risk, the assessor should be able to trace a clear path to the mitigation measure and the evidence that it works. Risk → Mitigation → Validation → Residual Risk Assessment. Gaps in this chain (risks without mitigations, mitigations without evidence of effectiveness) will be flagged.

Evidence-based risk evaluation

Risk ratings need to be supported by analysis, not just assigned. If you rate a bias risk as “low,” the assessor will ask what testing you conducted to support that rating. If you rate an accuracy risk as “acceptable,” they’ll want to see the performance data.

Coverage of the full lifecycle

The risk management system must cover:

  • Design and development: risks from architecture, data, and algorithm choices
  • Testing and validation: risks identified through testing
  • Deployment: risks from the operational environment
  • Post-market operation: risks that emerge in production
  • Decommissioning: risks from system withdrawal or data disposal

A risk management system that only covers the development phase is incomplete.

Stakeholder input

Article 9(9) requires that risk management measures consider “the generally acknowledged state of the art” including from relevant harmonised standards and codes of good practice. Assessors will check whether you’ve referenced applicable standards and industry guidance.

They’ll also look for evidence that affected stakeholders were considered. Stakeholders need not have been consulted directly, but the risk assessment should account for the perspectives of the people the AI system affects.

Building your risk management system

Start with the system description

Before assessing risks, document what the system does, how it works, what data it processes, who it affects, and in what context. This system description is the foundation of the risk assessment — you can’t identify risks without understanding the system.

Use a structured risk identification method

Don’t rely on brainstorming alone. Use structured methods:

  • Failure mode analysis: For each component and decision point, what could go wrong?
  • Misuse scenarios: How might the system be used outside its intended purpose?
  • Stakeholder perspective: For each category of affected person, what risks do they face?
  • Data-related risks: What if the data quality changes, distributions shift, or biases exist?
  • Interaction risks: How does the system interact with other systems, and what risks arise from those interactions?

Document decisions, not just outcomes

For each risk and mitigation decision, document the reasoning. Why was this risk rated at this level? Why was this mitigation approach chosen over alternatives? Why is the residual risk considered acceptable?

This reasoning is valuable both for auditors (who need to understand your thought process) and for future iterations (when you revisit decisions with new information).

Connect to monitoring

Your risk management system should specify what monitoring is needed to detect when risks materialise in production. Each significant risk should have an associated monitoring metric or trigger. This connects Article 9 (risk management) to Article 72 (post-market monitoring) — they’re not separate activities but parts of a continuous cycle.
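A minimal sketch of that risk-to-monitoring mapping, with assumed risk IDs, metric names, and alert thresholds:

```python
# Hypothetical sketch: each significant risk mapped to a production metric
# and an alert trigger, connecting the Article 9 risk file to Article 72
# post-market monitoring. All names and thresholds are assumptions.

MONITORING_PLAN = {
    "R-012 input quality drift": {
        "metric": "rejected_low_resolution_share",
        "trigger": lambda value: value > 0.10,  # alert above 10%
    },
    "R-031 demographic accuracy gap": {
        "metric": "weekly_accuracy_gap",
        "trigger": lambda value: value > 0.05,  # alert above 5 points
    },
}

def check(observed: dict) -> list:
    """Return the risks whose monitoring triggers have fired."""
    return [
        risk for risk, plan in MONITORING_PLAN.items()
        if plan["trigger"](observed.get(plan["metric"], 0.0))
    ]

print(check({"weekly_accuracy_gap": 0.08, "rejected_low_resolution_share": 0.03}))
```

A fired trigger is exactly the kind of new information that should feed back into the next iteration of the risk assessment.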

Plan for updates

Define the triggers for updating your risk management system:

  • Scheduled reviews (at least annually, more frequently for newer or higher-risk systems)
  • New information about system performance from post-market monitoring
  • Feedback from deployers or affected individuals
  • Changes to the system (model updates, data changes, use case expansion)
  • Relevant new standards, guidance, or regulatory interpretations
  • Incidents or near-misses

Each update should be documented with the date, trigger, changes made, and the person responsible.
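The record-keeping itself can be lightweight. This sketch shows one possible shape for an update log, with assumed field names and illustrative entries:

```python
# Hypothetical sketch of an update log capturing date, trigger, changes,
# and responsible person for each revision of the risk management system.
# Field names and example entries are illustrative assumptions.

from dataclasses import dataclass

@dataclass(frozen=True)
class RiskFileUpdate:
    date: str         # ISO date of the revision
    trigger: str      # what prompted it (scheduled review, incident, ...)
    changes: str      # summary of what changed between versions
    responsible: str  # person accountable for the update

log = [
    RiskFileUpdate("2025-03-01", "scheduled annual review",
                   "Re-ran bias testing on refreshed dataset", "J. Doe"),
    RiskFileUpdate("2025-06-14", "deployer feedback",
                   "Added misuse scenario for unvalidated job roles", "J. Doe"),
]

# An auditor's first questions map directly onto the latest entry.
latest = max(log, key=lambda u: u.date)
print(latest.date, "-", latest.trigger)
```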

The mindset shift

For organisations accustomed to project-based risk assessments, Article 9 requires a shift in thinking. The risk management system is not something you build once and maintain passively. It’s an operational process — like financial auditing or quality management — that requires ongoing attention, resources, and governance.

The organisations that will handle this best are those that integrate risk management into their AI development and deployment workflows rather than bolting it on as a separate compliance activity. When risk assessment is part of every design decision, every deployment decision, and every model update, it becomes part of normal operations rather than a periodic burden.
