What CTOs Need to Know About Article 26

If your company deploys high-risk AI systems, Article 26 of the EU AI Act is the article your engineering team needs to read. It’s the core set of deployer obligations, and most of them require technical implementation, not just policy documents.

Your compliance team will produce the policies. Legal will interpret the requirements. But as CTO, the actual implementation (logging, monitoring, oversight tooling, incident detection) lands on your team. Here’s what Article 26 requires in concrete, buildable terms.

Use according to instructions for use

Article 26(1) requires deployers to “take appropriate technical and organisational measures to ensure they use [high-risk AI] systems in accordance with the instructions for use accompanying the systems.”

In practice, this means:

Get the documentation. Your provider must supply instructions for use under Article 13 and Annex IV, section 9. These should describe the system’s intended purpose, capabilities, limitations, known risks, and required operational conditions. If your provider hasn’t supplied these, request them now. You can’t comply with instructions you haven’t received.

Review the constraints. The instructions will specify conditions for use: input data requirements, populations the system has been validated for, performance boundaries, scenarios where the system shouldn’t be used. Your engineering team needs to verify that your deployment falls within these parameters.

Enforce the boundaries in code. If the instructions say the system shouldn’t be used for a particular demographic or beyond a certain confidence threshold, don’t rely on operators remembering. Build guardrails into your integration: input validation, output filtering, scope checks. Make the constraints architectural, not procedural.
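As a minimal sketch of what "architectural, not procedural" can look like: a wrapper that checks scope and confidence before an output reaches an operator. The role list, threshold, and field names here are hypothetical stand-ins for whatever your provider's instructions for use actually specify.

```python
from dataclasses import dataclass

# Hypothetical constraints drawn from a provider's instructions for use.
MIN_CONFIDENCE = 0.70                     # below this, route to manual review
ALLOWED_ROLES = {"engineering", "sales"}  # roles the system was validated for

@dataclass
class ScreeningResult:
    decision: str      # "accept" or "reject"
    confidence: float

def enforce_constraints(role: str, result: ScreeningResult) -> str:
    """Return the action to take, enforcing scope and confidence guardrails."""
    if role not in ALLOWED_ROLES:
        return "out_of_scope"     # block: system not validated for this use
    if result.confidence < MIN_CONFIDENCE:
        return "manual_review"    # route low-confidence outputs to a human
    return result.decision        # within the documented boundaries
```

The point is that an out-of-scope request never produces an AI decision at all, regardless of what any individual operator remembers.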

Human oversight

Article 26(2) requires deployers to “assign human oversight to natural persons who have the necessary competence, training and authority.” Article 14 defines what human oversight means technically. The system must be designed to allow humans to:

  • Understand the system’s capabilities and limitations
  • Monitor its operation
  • Interpret its outputs correctly
  • Decide not to use the system, override outputs, or reverse decisions
  • Intervene or stop the system

For a CTO, this translates to specific engineering requirements:

Build oversight interfaces. The people responsible for overseeing the AI system need tooling. Dashboards that show what the system is doing, what inputs it’s receiving, what outputs it’s generating, and how those outputs are being used. If your AI system runs in a black box that nobody can inspect in real time, you don’t have human oversight.

Expose confidence and reasoning. Where technically feasible, surface the system’s confidence scores, contributing factors, or reasoning alongside its outputs. An operator who sees “candidate rejected” can’t exercise meaningful oversight. An operator who sees “candidate rejected, 73% confidence, primary factors: employment gap, non-standard qualification” can.
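One way to structure this, assuming your provider's API exposes a confidence score and contributing factors (not all do): pair every output with its context in a single operator-facing payload. The review threshold here is a hypothetical example.

```python
def explain_output(decision: str, confidence: float, factors: list) -> dict:
    """Build an operator-facing payload that pairs an output with its context."""
    return {
        "decision": decision,
        "confidence": round(confidence, 2),
        "primary_factors": factors[:3],        # top factors, if the model exposes them
        "requires_review": confidence < 0.80,  # hypothetical review threshold
    }
```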

Enable override and reversal. Every decision the AI system makes or influences must be reversible. Build workflows that allow the oversight person to override the AI’s output, escalate to a senior reviewer, or flag a case for manual processing. These workflows need to be practical and low-friction. If overriding the AI takes 20 minutes of paperwork, nobody will do it.
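A sketch of the record-keeping side of an override workflow: the AI output is never silently replaced, and every override leaves an auditable trail. Field names are illustrative.

```python
import datetime

def record_override(log: list, case_id: str, reviewer: str,
                    original: str, new_decision: str, reason: str) -> dict:
    """Append an auditable override entry; the original AI output is preserved."""
    entry = {
        "case_id": case_id,
        "reviewer": reviewer,
        "original_decision": original,
        "override_decision": new_decision,
        "reason": reason,
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
    }
    log.append(entry)
    return entry
```

In a real system the log would be an append-only store rather than a Python list, but the shape of the record is the part that matters: original decision, override, who, why, when.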

Assign and document the oversight roles. This isn’t purely technical, but your team will likely need to build the role-based access controls. Identify who has oversight authority for each AI system, ensure they have the right access in your systems, and log their oversight activities.

Input data relevance

Article 26(4) requires deployers to ensure that “input data is relevant and sufficiently representative in view of the intended purpose of the high-risk AI system.”

This is a data quality obligation. Your engineering team needs to:

Validate inputs before they reach the AI system. If the system was trained and validated on a specific type of data, ensure your pipeline only sends data that matches. If the system expects structured applicant profiles, don’t feed it unstructured free-text scraped from social media.

Monitor for data drift. The data your system processes today may not match the data it was validated against. If your customer demographics shift, if your data sources change, if the format of incoming data evolves, the system’s performance may degrade in ways the provider didn’t anticipate. Build monitoring to detect distribution shifts in input data.
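A common way to quantify distribution shift is the population stability index (PSI) between a baseline (validation-time) distribution and the current one, computed over matching bins. This is a minimal sketch; the 0.25 alerting threshold is a widely used rule of thumb, not a regulatory requirement.

```python
import math

def population_stability_index(expected: list, actual: list) -> float:
    """PSI between two binned distributions (each a list of bin proportions).
    Values above ~0.25 are commonly treated as significant drift."""
    psi = 0.0
    for e, a in zip(expected, actual):
        e = max(e, 1e-6)  # avoid log(0) for empty bins
        a = max(a, 1e-6)
        psi += (a - e) * math.log(a / e)
    return psi
```

Run this per feature (or per score bucket) on a schedule, and alert when the index crosses your chosen threshold.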

Document your data pipeline. From data source to AI input, document the transformations, filters, and enrichments your data passes through. If a regulator asks where the AI system’s inputs come from and how they’re prepared, you need an answer.

Monitoring in operation

Article 26(5) requires deployers to “monitor the operation of the high-risk AI system on the basis of the instructions for use.” If you identify risks, you must “inform the provider or distributor” and “suspend the use of the system” if there’s a risk to health, safety, or fundamental rights.

For engineering teams, this means building production monitoring that goes beyond uptime:

Output quality monitoring. Track the distribution of the AI system’s outputs over time. If a hiring tool starts rejecting a disproportionate number of candidates from a particular demographic, you need to detect that. If a credit scoring model’s approval rate drops by 30% with no change in applicant population, something is wrong.
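A rolling-window rate monitor is a simple starting point for this kind of tracking. The window size and tolerance below are hypothetical; in practice you would also slice the rate by demographic group, not just overall.

```python
from collections import deque

class ApprovalRateMonitor:
    """Track a rolling approval rate and flag large deviations from a baseline."""

    def __init__(self, baseline_rate: float, window: int = 1000,
                 tolerance: float = 0.30):
        self.baseline = baseline_rate
        self.tolerance = tolerance        # relative change that triggers an alert
        self.recent = deque(maxlen=window)

    def record(self, approved: bool) -> bool:
        """Record one decision; return True if the rate has drifted past tolerance."""
        self.recent.append(1 if approved else 0)
        if len(self.recent) < self.recent.maxlen:
            return False                  # not enough data yet
        rate = sum(self.recent) / len(self.recent)
        return abs(rate - self.baseline) / self.baseline > self.tolerance
```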

Anomaly detection on AI behaviour. Set up alerts for unusual patterns: sudden shifts in output distributions, unexpected error rates, significant changes in confidence scores, or increased rates of outputs that human reviewers override.

Performance against documented benchmarks. The provider’s documentation should include performance metrics. Monitor your system’s real-world performance against these benchmarks. If accuracy in production is significantly lower than the documented benchmarks, that’s a signal to investigate and potentially escalate.
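The benchmark comparison can be as simple as a scheduled check of observed accuracy against the provider's documented figure. The tolerance here is a hypothetical escalation rule your team would set with compliance.

```python
def benchmark_gap(documented_accuracy: float, observed_accuracy: float,
                  tolerance: float = 0.05) -> bool:
    """True if production accuracy has fallen more than `tolerance` below
    the provider's documented benchmark (hypothetical escalation rule)."""
    return (documented_accuracy - observed_accuracy) > tolerance
```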

Incident triggers. Define what constitutes an event that requires action — provider notification, use suspension, or incident reporting under Article 73. Build these triggers into your monitoring, not into your manual review processes.
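One way to keep the triggers in code rather than in a manual runbook is a declarative mapping from detected conditions to required actions. The condition names and action lists below are illustrative placeholders; the real ones come from your risk assessment and the instructions for use.

```python
# Hypothetical mapping from monitoring conditions to escalation actions.
TRIGGERS = {
    "demographic_disparity":    ["notify_provider", "open_incident_review"],
    "rights_risk_detected":     ["suspend_system", "notify_provider",
                                 "report_article_73"],
    "accuracy_below_benchmark": ["notify_provider"],
}

def actions_for(condition: str) -> list:
    """Resolve a monitoring condition to its escalation actions (empty if unknown)."""
    return TRIGGERS.get(condition, [])
```

Because the mapping is data, compliance can review and sign off on it without reading the monitoring code.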

Automatic logging and log retention

Article 26(6) requires deployers to “keep the logs automatically generated by that high-risk AI system to the extent such logs are under their control” for at least six months, unless a longer period is required by other law.

Engineering implications:

Capture the logs. Ensure your integration captures whatever logs the AI system generates. For API-based systems, this means logging requests and responses, including timestamps, input data hashes, outputs, confidence scores, and any metadata the API returns. For embedded systems, work with the provider to understand what logging is available.
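For an API-based integration, a retention-ready log record might look like the sketch below. Storing the input as a hash (an assumption of this design, not a legal requirement) lets you correlate records without duplicating raw personal data in the log store; whether you also need the raw input depends on your retrieval obligations.

```python
import datetime
import hashlib
import json

def build_log_record(request_payload: dict, response_payload: dict) -> dict:
    """Create a retention-ready log record for one AI API call."""
    raw = json.dumps(request_payload, sort_keys=True).encode()
    return {
        "timestamp": datetime.datetime.now(datetime.timezone.utc).isoformat(),
        "input_hash": hashlib.sha256(raw).hexdigest(),
        "output": response_payload.get("decision"),
        "confidence": response_payload.get("confidence"),
        "model_version": response_payload.get("model_version"),  # if returned
    }
```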

Store them durably. Six months minimum, with appropriate retention policies. These logs need to be retrievable. If a regulator requests them, you can’t point at a data lake with no indexing. Structure the storage so you can query by time period, by individual affected, and by outcome.

Protect them. AI system logs likely contain personal data. Apply appropriate access controls, encryption, and data protection measures. The logs themselves need to comply with GDPR. You’re retaining personal data for a legitimate purpose (legal compliance), but you still need to handle it properly.

Plan for volume. High-throughput AI systems can generate substantial log volumes. If your AI system processes thousands of inputs per day, six months of detailed logs adds up. Plan your storage architecture accordingly.
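A back-of-envelope sizing helps here. Assuming (hypothetically) 10,000 records per day at roughly 2 KB each:

```python
def log_storage_estimate_gb(records_per_day: int, avg_record_bytes: int,
                            retention_days: int = 183) -> float:
    """Back-of-envelope storage for the six-month minimum retention period."""
    total_bytes = records_per_day * avg_record_bytes * retention_days
    return round(total_bytes / 1e9, 2)

# 10,000 records/day at ~2 KB each over six months is a few gigabytes --
# manageable, but multiply the throughput by 100x and it shapes your
# storage architecture.
```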

Fundamental Rights Impact Assessment

Article 27 requires deployers of high-risk AI systems to conduct a Fundamental Rights Impact Assessment (FRIA) before putting the system into use. While this is primarily a governance and legal exercise, the CTO’s team provides the technical inputs:

  • How the system works technically (architecture, data flows, decision logic)
  • What data it processes and how
  • Performance characteristics and known failure modes
  • Technical measures in place for oversight, bias mitigation, and data protection
  • Monitoring capabilities and what they can detect

The FRIA needs to be completed before deployment and updated when the system or its use changes materially. If your team ships a major update to the AI integration, flag it for FRIA review.

Serious incident reporting

Article 73 requires deployers to report serious incidents to the relevant market surveillance authority. A serious incident includes death, serious health damage, serious disruption to critical infrastructure, or serious breach of fundamental rights.

The engineering requirement: build detection mechanisms. If your AI system could potentially cause serious harm (and if it’s high-risk, by definition it could), you need automated detection for scenarios that could constitute serious incidents. This connects to your monitoring infrastructure, but with specific escalation paths that route to the compliance and legal teams, not just the engineering on-call rotation.

What to build, in priority order

If you’re starting from nothing, here’s a practical sequencing:

  1. Logging first. Start capturing AI system inputs, outputs, and metadata now. You can’t backfill logs, and the retention clock starts when you generate them.

  2. Oversight interfaces second. Give the designated oversight people the ability to see what the AI system is doing and intervene. Even a basic dashboard is better than no visibility.

  3. Monitoring third. Layer on output distribution tracking, anomaly detection, and performance benchmarking. Connect these to alerting.

  4. Guardrails fourth. Implement input validation, output filtering, and scope enforcement based on the provider’s instructions for use.

  5. Incident detection last. Build the automated triggers and escalation paths for potential serious incidents.

This order prioritises the obligations that are hardest to retrofit (logging), then builds towards the more sophisticated capabilities. Start now. August 2026 is close, and these systems need to be operational, tested, and documented by then.