
Serious Incident Reporting Under Article 73

When an AI system causes serious harm, the EU AI Act doesn’t just expect you to fix it. It requires you to report it. Article 73 establishes a mandatory incident reporting framework for providers and deployers of high-risk AI systems, with timescales tight enough that you can’t afford to figure out the process after the incident occurs.

What counts as a serious incident

Article 3(49) defines a “serious incident” as an incident or malfunctioning of an AI system that directly or indirectly leads to any of the following:

Death of a person. Any death that is causally linked to the AI system’s operation or malfunction.

Serious damage to the health of a person. This includes physical injury and, importantly, serious psychological harm. If an AI system’s malfunction causes a person significant psychological distress (for example, through wrongful denial of critical services, discriminatory treatment with severe personal consequences, or exposure to harmful content), this may qualify.

Serious and irreversible disruption of the management or operation of critical infrastructure. If your AI system manages or supports critical infrastructure (energy, transport, water, digital infrastructure) and a malfunction seriously disrupts that infrastructure in a way that’s difficult to reverse, that’s a reportable incident.

Breach of obligations under Union law intended to protect fundamental rights. This is broader than physical harm. If the AI system’s operation results in a serious breach of fundamental rights — significant discrimination, violation of privacy at scale, denial of due process — it’s reportable.

Serious damage to property or the environment. Large-scale property damage or environmental harm caused by or contributed to by the AI system.

The threshold is “serious,” not “any.” A chatbot giving an incorrect answer is not a serious incident. A credit scoring system systematically denying loans to an entire ethnic group is. The line between the two isn’t always obvious, which is why you need clear internal criteria before something happens.

Who has reporting obligations

Providers

Providers of high-risk AI systems that are placed on the EU market or put into service in the EU must report serious incidents to the market surveillance authorities of the member states where the incident occurred.

If the incident occurred in multiple member states, the provider must report to all relevant authorities. In practice, this means the provider needs to know where their systems are deployed — a requirement that connects to the broader traceability obligations under the Act.

Deployers

Deployers who identify a serious incident must immediately inform first the provider, and then the importer or distributor and the relevant market surveillance authorities. If the deployer cannot reach the provider, the reporting obligations of Article 73 fall on the deployer directly.

This creates a dual reporting flow: the deployer reports to the authority and to the provider. The provider may then have their own reporting obligations based on the information received.

Reporting timescales

The timescales are demanding:

Immediately upon establishing a causal link between the AI system and the serious incident (or the reasonable likelihood of such a link), and in any case no later than 15 days after the provider or deployer becomes aware of the serious incident.

“Becomes aware” is the trigger. If a customer complaint arrives on Monday describing an incident that occurred the previous week, the clock starts on Monday, when the organisation learned of it, not when the incident occurred.

The 15-day outer limit is the general case. The Act sets shorter deadlines for the most severe incidents: no later than 10 days where the incident involves the death of a person, and no later than 2 days in the event of a widespread infringement or a serious and irreversible disruption of critical infrastructure. In every case the Act also expects immediate reporting once you’ve established that your AI system caused or contributed to the incident. If the causal link is clear on day 2, reporting on day 14 is too late.

In short: if your AI system is actively causing harm, you don’t have 15 days to think about it.
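
The windows themselves are simple date arithmetic once the trigger date is pinned down. A minimal sketch, assuming the clock runs from the awareness date; the category names are our own shorthand, not terms from the Act:

  from datetime import date, timedelta

  # Outer reporting deadlines under Article 73, keyed by incident category.
  # Category names are our own shorthand, not terms from the Act.
  DEADLINE_DAYS = {
      "default": 15,                       # Article 73(2)
      "death": 10,                         # Article 73(4)
      "widespread_or_infrastructure": 2,   # Article 73(3)
  }

  def reporting_deadline(awareness: date, category: str = "default") -> date:
      """Latest permissible reporting date. The real trigger is earlier:
      report immediately once a causal link (or its reasonable likelihood)
      is established."""
      return awareness + timedelta(days=DEADLINE_DAYS[category])

  # A complaint received on Monday 10 March starts the clock on 10 March,
  # even if the incident occurred the previous week.
  print(reporting_deadline(date(2025, 3, 10)))            # 2025-03-25
  print(reporting_deadline(date(2025, 3, 10), "death"))   # 2025-03-20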

What to report

The report must include sufficient information for the authority to assess the incident. While the exact format may vary by member state, your report should cover the following (a structured sketch of these fields comes after the list):

Identification of the AI system. Name, version, provider details, any unique identifier or registration number in the EU database.

Description of the incident. What happened, when, where, and who was affected. Be factual and specific.

Causal analysis. What is the link between the AI system and the harm? Was it a malfunction, an error in outputs, a misuse, or an interaction with other systems?

Severity and scope. How many people were affected? What was the nature and extent of the harm?

Immediate actions taken. What did you do when you became aware? Suspend the system? Notify deployers? Implement a workaround?

Corrective measures. What steps are you taking or planning to prevent recurrence?

Contact information. A designated person or team the authority can contact for follow-up.
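
One way to keep these fields complete and consistent under pressure is to encode them as a structured record. A minimal sketch in Python; the field names are illustrative, not a prescribed format:

  from dataclasses import dataclass, field

  @dataclass
  class SeriousIncidentReport:
      """Fields a serious-incident report should cover. Names are
      illustrative; member states may prescribe their own formats."""
      system_name: str
      system_version: str
      provider: str
      eu_database_id: str           # registration number in the EU database, if any
      incident_description: str     # what happened, when, where, who was affected
      causal_analysis: str          # the link between the AI system and the harm
      people_affected: int
      harm_description: str         # nature and extent of the harm
      immediate_actions: list[str] = field(default_factory=list)
      corrective_measures: list[str] = field(default_factory=list)
      contact: str = ""             # designated person or team for follow-up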

Building your incident response process

You need this process designed, documented, and tested before an incident occurs. The 15-day window doesn’t leave time for creating a process from scratch.

Define internal severity criteria

Create a classification framework that maps internal incidents to the Article 73 thresholds. Not every AI system error is a serious incident, and your teams need clear criteria to escalate appropriately. Undertriage means missing reporting obligations. Overtriage means overwhelming your compliance team with non-reportable events.

Your criteria should address the following (one possible encoding is sketched after the list):

  • What constitutes “serious damage to health” in the context of your AI system
  • What would constitute a “serious breach of fundamental rights”
  • What level of property or infrastructure damage triggers reporting
  • How to assess whether the AI system caused or contributed to the harm (vs. coincidental involvement)
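
The sketch below shows one way such criteria might be encoded. Every check and threshold here is a placeholder to be replaced with your own definitions of “serious”:

  from enum import Enum

  class Article73Category(Enum):
      NOT_REPORTABLE = "not reportable"
      HEALTH = "death or serious damage to health"
      INFRASTRUCTURE = "serious and irreversible disruption of critical infrastructure"
      FUNDAMENTAL_RIGHTS = "serious breach of fundamental rights obligations"
      PROPERTY_ENVIRONMENT = "serious damage to property or the environment"

  def classify(incident: dict) -> Article73Category:
      """Map an internal incident record to an Article 73 threshold.
      The checks and thresholds are placeholders: each team must define
      what 'serious' means for its own system."""
      if not incident.get("causally_linked", False):
          return Article73Category.NOT_REPORTABLE    # coincidental involvement
      if incident.get("death") or incident.get("serious_health_harm"):
          return Article73Category.HEALTH
      if incident.get("infrastructure_disrupted") and incident.get("irreversible"):
          return Article73Category.INFRASTRUCTURE
      if incident.get("fundamental_rights_breach"):
          return Article73Category.FUNDAMENTAL_RIGHTS
      if incident.get("property_damage_eur", 0) > 1_000_000:   # placeholder threshold
          return Article73Category.PROPERTY_ENVIRONMENT
      return Article73Category.NOT_REPORTABLE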

Establish detection mechanisms

You can’t report what you don’t detect. Build systems to identify potential serious incidents; a toy monitoring sketch follows the list:

  • Automated monitoring that flags anomalous outputs, error patterns, or outcomes that match your severity criteria
  • Deployer reporting channels that make it easy for deployers to escalate incidents quickly
  • Customer complaint analysis that identifies patterns suggesting systematic harm
  • Media and social monitoring for public reports of problems with your AI system
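
To illustrate the first item, here is a toy monitor that flags a denial-rate disparity across groups, the kind of pattern that could indicate systematic harm. The 20% threshold is an illustrative placeholder, not a legal standard:

  from collections import defaultdict

  def denial_rate_disparity(decisions: list[dict]) -> float:
      """decisions: [{'group': 'A', 'denied': True}, ...]
      Returns the spread between the highest and lowest group denial rates."""
      totals, denials = defaultdict(int), defaultdict(int)
      for d in decisions:
          totals[d["group"]] += 1
          denials[d["group"]] += d["denied"]
      rates = [denials[g] / totals[g] for g in totals]
      return max(rates) - min(rates)

  def flag_if_anomalous(decisions: list[dict]) -> bool:
      """Escalate for human assessment when the disparity exceeds the
      placeholder threshold; flagging is not yet a reporting decision."""
      disparity = denial_rate_disparity(decisions)
      if disparity > 0.2:
          print(f"Potential serious incident: denial disparity {disparity:.0%}")
          return True
      return False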

Design the escalation path

When a potential serious incident is detected, it needs to reach the right people quickly. The steps below can also be encoded as a machine-readable runbook (see the sketch after the list):

  1. Detection — front-line team or automated system identifies a potential incident
  2. Initial assessment — within hours, a designated person assesses whether the incident meets the Article 73 threshold
  3. Causal investigation — technical team investigates the AI system’s role
  4. Reporting decision — compliance or legal team decides whether reporting is required
  5. Report submission — designated person submits the report to the relevant authority
  6. Provider/deployer notification — if you’re a deployer, notify the provider (and vice versa)
  7. Corrective action — implement measures to prevent recurrence

Each step needs an owner, a timeframe, and a fallback if the primary owner is unavailable. Serious incidents don’t wait for business hours.
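
One way to make owners, timeframes, and fallbacks explicit is to encode the path as data that tooling and tabletop exercises can run against. The roles and timeframes below are illustrative placeholders:

  from dataclasses import dataclass

  @dataclass
  class EscalationStep:
      name: str
      owner: str       # primary role responsible
      fallback: str    # who acts if the owner is unavailable
      max_hours: int   # target time to complete the step

  # Roles and timeframes are illustrative placeholders.
  ESCALATION_PATH = [
      EscalationStep("detection", "on-call engineer", "engineering lead", 1),
      EscalationStep("initial assessment", "incident manager", "compliance lead", 4),
      EscalationStep("causal investigation", "ML engineering lead", "CTO", 48),
      EscalationStep("reporting decision", "legal counsel", "compliance lead", 24),
      EscalationStep("report submission", "compliance officer", "legal counsel", 24),
      EscalationStep("provider/deployer notification", "incident manager", "compliance lead", 24),
      EscalationStep("corrective action", "engineering lead", "CTO", 72),
  ]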

Prepare template reports

Draft template incident reports that can be quickly adapted when an incident occurs. Having a template reduces the time between decision and submission, which matters when you’re working within 15 days.
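
Reusing the illustrative SeriousIncidentReport structure sketched earlier, a template function can pre-fill the identification fields that never change, so that only incident-specific facts need writing against the clock:

  def draft_report(incident: dict) -> SeriousIncidentReport:
      """Pre-fill static system identification so only incident-specific
      facts need to be written under time pressure. Metadata values are
      placeholders."""
      return SeriousIncidentReport(
          system_name="ExampleScorer",    # placeholder metadata
          system_version="2.4.1",
          provider="Example Ltd",
          eu_database_id="EU-DB-0000",
          incident_description=incident.get("description", "TBD"),
          causal_analysis=incident.get("causal_analysis", "under investigation"),
          people_affected=incident.get("people_affected", 0),
          harm_description=incident.get("harm", "TBD"),
      )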

Test the process

Run tabletop exercises simulating serious incidents. Walk through the full process, from detection to report submission, and identify bottlenecks, unclear responsibilities, or gaps in information flow.

Exercises are particularly valuable for testing cross-functional coordination. Incident reporting involves engineering (causal analysis), legal (regulatory assessment), compliance (report preparation), and leadership (decision authority). These teams don’t always communicate smoothly under pressure unless they’ve practised.

Coordination with other reporting obligations

The AI Act’s incident reporting obligation doesn’t exist in isolation. You may have parallel reporting obligations under:

GDPR. If the incident involves a personal data breach, you may need to notify the supervisory authority within 72 hours under Article 33 of the GDPR, and affected individuals under Article 34.

NIS2 Directive. If your organisation is covered by the NIS2 Directive (essential or important entities), a significant incident triggers an early warning to the relevant CSIRT or competent authority within 24 hours, followed by a fuller incident notification within 72 hours.

Sector-specific regulations. Financial services, healthcare, aviation, and other regulated sectors have their own incident reporting frameworks.

Coordinate these obligations to avoid conflicting timescales, duplicated effort, or inconsistent reporting. Ideally, your incident response process should have a single intake that triggers all relevant reporting workflows simultaneously.
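
A sketch of what such a single intake might look like, assuming the outer deadlines described above; the boolean flags and regime keys are our own shorthand, and each instrument should be checked for its exact trigger:

  from datetime import datetime, timedelta

  # Illustrative outer deadlines per regime. Check each instrument for its
  # exact trigger: GDPR runs from breach awareness, NIS2 from detection of
  # a significant incident, the AI Act from awareness of the incident.
  REGIME_DEADLINES = {
      "ai_act_article_73": timedelta(days=15),
      "gdpr_article_33": timedelta(hours=72),
      "nis2_early_warning": timedelta(hours=24),
  }

  def reporting_workflows(incident: dict, aware: datetime) -> dict:
      """Single intake: decide which regimes an incident triggers and the
      outer deadline for each. The boolean flags are our own shorthand."""
      deadlines = {}
      if incident.get("serious_incident"):
          deadlines["ai_act_article_73"] = aware + REGIME_DEADLINES["ai_act_article_73"]
      if incident.get("personal_data_breach"):
          deadlines["gdpr_article_33"] = aware + REGIME_DEADLINES["gdpr_article_33"]
      if incident.get("nis2_significant"):
          deadlines["nis2_early_warning"] = aware + REGIME_DEADLINES["nis2_early_warning"]
      return deadlines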

After the report

Submitting the report isn’t the end. Expect:

Follow-up requests. The authority may ask for additional information, technical analysis, or evidence.

Corrective action requirements. The authority may require you to take specific corrective measures, including modifying, withdrawing, or recalling the AI system.

Investigation. The authority may conduct its own investigation, including requesting access to the AI system, its documentation, and its logs.

Public disclosure. Depending on the severity and member state procedures, the incident and enforcement action may be publicly disclosed.

Your internal process should include provisions for post-report engagement with authorities, including designating a point of contact and ensuring that relevant documentation and data are preserved and accessible.

Incident reporting is reactive, but the preparation for it is proactive. The organisations that will handle serious incidents best are those that built the detection, assessment, and reporting infrastructure before they needed it.
