Who Counts as a Deployer Under the EU AI Act?
If your business uses AI but didn’t build it, you might assume the EU AI Act is someone else’s problem. The developer made it. The vendor sold it. Surely the compliance burden sits with them?
It doesn’t. Or at least, not all of it.
The EU AI Act creates two primary roles: providers and deployers. Providers develop AI systems or commission their development and place them on the market. Deployers use AI systems under their own authority. If you’ve purchased, licensed, or integrated a third-party AI tool into your operations, you are almost certainly a deployer, and the Act assigns you specific obligations.
What the Act actually says
Article 3(4) defines a deployer as:
“a natural or legal person, public authority, agency or other body using an AI system under its authority except where the AI system is used in the course of a personal non-professional activity.”
The critical phrase is “under its authority.” If your organisation decides to use an AI system, selects it, configures it, and points it at your customers or employees, you are using it under your authority. It doesn’t matter that someone else built it.
This distinction catches a lot of businesses off guard. A recruitment firm using an AI screening tool is a deployer. A bank using an AI credit scoring model is a deployer. A retailer using an AI chatbot for customer service is a deployer. In each case, the provider who built the system has their own obligations, but that doesn’t remove yours.
When a deployer becomes a provider
There’s an important boundary to understand. Article 25 states that a deployer is reclassified as a provider if they:
- Put their own name or trademark on a high-risk AI system already on the market
- Make a substantial modification to a high-risk AI system already on the market, such that it remains high-risk
- Modify the intended purpose of an AI system so that it becomes high-risk
This matters more than it might seem. If you’ve taken a general-purpose AI model, fine-tuned it on your data, integrated it into your product, and launched it under your brand, you may have crossed the line from deployer to provider. Providers face significantly heavier obligations: conformity assessments, technical documentation, risk management systems, and registration in the EU database.
The line between “configuration” and “substantial modification” isn’t always clear-cut. Customising a system prompt is probably configuration. Fine-tuning a model on proprietary data and deploying it as a core product feature is more likely a substantial modification. If you’re unsure, err on the side of assuming provider status — the penalties for getting it wrong are serious.
What deployers are actually required to do
The Act doesn’t let deployers off lightly. Article 26 sets out a series of concrete obligations. Here’s what they require in practice:
Use the system according to its instructions
Providers of high-risk AI systems must supply instructions for use (Article 13 sets out what those instructions must contain). As a deployer, you are required to follow them. This is a legal obligation, not a suggestion. If the provider says the system shouldn’t be used for a particular purpose, or that outputs need human review before acting on them, you need to comply.
In practice, this means someone in your organisation needs to actually read the provider’s documentation. If the instructions are unclear or missing, that’s a conversation to have with your provider before the deadline, not after an audit.
Ensure human oversight
Article 14 requires that high-risk AI systems are designed to allow human oversight. Article 26 makes it the deployer’s job to actually implement it. You need competent people assigned to oversee the AI system, with the authority and ability to override or disregard its outputs.
This is more than a tick-box exercise. “Human oversight” means the person reviewing AI outputs understands what the system does, knows its limitations, and has the practical ability to intervene. A customer service agent who rubber-stamps every AI suggestion because they’re measured on speed is not providing meaningful oversight.
Monitor the system in operation
You must monitor the AI system while it’s running. If something looks wrong (unexpected outputs, performance degradation, potential risks to health, safety, or fundamental rights), you need to catch it and act. Article 26(5) requires deployers to suspend use and inform the provider if they believe the system presents a risk.
This implies logging, monitoring dashboards, and clear escalation procedures. You can’t monitor what you don’t measure.
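Even a basic implementation helps: write every AI interaction to a structured log and flag anything that warrants a second look. A minimal sketch, assuming a generic prediction call that returns a confidence score; the function names and threshold are illustrative, not any particular vendor’s API:

```python
# Sketch of deployer-side output logging with a simple escalation flag.
# The confidence threshold is illustrative only, not taken from the Act.
import json
import logging
from datetime import datetime, timezone

logging.basicConfig(filename="ai_outputs.log", level=logging.INFO)
logger = logging.getLogger("ai_monitoring")

CONFIDENCE_FLOOR = 0.6  # illustrative escalation threshold


def log_and_check(system_id: str, request: dict, response: dict) -> None:
    """Record an AI output and flag anything that needs human review."""
    record = {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "system_id": system_id,
        "request": request,
        "response": response,
    }
    logger.info(json.dumps(record))

    # Escalate low-confidence outputs so a human actually looks at them.
    if response.get("confidence", 1.0) < CONFIDENCE_FLOOR:
        logger.warning("ESCALATE: low-confidence output from %s", system_id)
```

Logs like these also double as the evidence trail the Act expects you to keep (see below).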
Conduct a Fundamental Rights Impact Assessment
Article 27 requires a Fundamental Rights Impact Assessment (FRIA) before certain high-risk AI systems are put into use. It applies to deployers that are public bodies or private entities providing public services, and to deployers of high-risk systems used for credit scoring or for pricing life and health insurance. The FRIA isn’t the same as a Data Protection Impact Assessment under GDPR, though they overlap. It specifically examines how the AI system might affect fundamental rights: non-discrimination, privacy, freedom of expression, human dignity.
The assessment must describe the deployer’s processes where the AI system will be used, the period and frequency of use, the categories of people affected, and the specific risks to their rights. It must also describe the human oversight measures and the procedures for complaints and redress.
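There is no prescribed template, but the required content maps cleanly onto a structured record. A minimal sketch, assuming you keep assessments in a register rather than free-form documents; the field names below are our own shorthand, not the Act’s:

```python
# Sketch of a FRIA record capturing the Article 27 content in one place.
from dataclasses import dataclass


@dataclass
class FundamentalRightsImpactAssessment:
    system_name: str
    deployer_process: str          # where and how the system is used
    period_and_frequency: str      # how long and how often it runs
    affected_groups: list[str]     # categories of people affected
    identified_risks: list[str]    # specific risks to fundamental rights
    oversight_measures: list[str]  # human oversight arrangements
    complaint_and_redress: str     # procedures for complaints and redress


# Hypothetical example entry for illustration.
fria = FundamentalRightsImpactAssessment(
    system_name="credit-scoring-v2",
    deployer_process="Consumer loan approvals in the retail banking unit",
    period_and_frequency="Continuous; every application is scored",
    affected_groups=["loan applicants"],
    identified_risks=["indirect discrimination on protected characteristics"],
    oversight_measures=["credit officer reviews every automated rejection"],
    complaint_and_redress="Applicants can request a manual re-assessment",
)
```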
Keep logs
If the high-risk AI system generates logs automatically, the deployer must keep those logs for at least six months (unless a longer period is required by other law). These logs are your evidence trail. If a regulator asks how your AI system performed over the past quarter, or if a customer files a complaint, the logs are what you’ll point to.
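In practice that means your log rotation and deletion jobs need a guard so nothing inside the minimum retention window gets purged. A small sketch, treating six months as 183 days purely for illustration:

```python
# Sketch of a retention guard: never purge logs younger than six months.
# 183 days is our approximation; extend it if other law requires longer.
from datetime import datetime, timedelta, timezone
from pathlib import Path

MIN_RETENTION = timedelta(days=183)


def safe_to_purge(log_file: Path, now: datetime | None = None) -> bool:
    """Return True only if the log file is older than the minimum retention."""
    now = now or datetime.now(timezone.utc)
    modified = datetime.fromtimestamp(log_file.stat().st_mtime, tz=timezone.utc)
    return (now - modified) > MIN_RETENTION
```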
Report serious incidents
Under Article 73, deployers must report serious incidents to the relevant market surveillance authority. A “serious incident” is one that leads to death or serious harm to a person’s health, a serious and irreversible disruption of critical infrastructure, an infringement of Union law obligations protecting fundamental rights, or serious harm to property or the environment. The reporting timescale is tight: immediately upon establishing a causal link, and in any case within 15 days of becoming aware of the incident.
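If it helps to make the clock concrete, the outside deadline is simple arithmetic on the date you became aware of the incident (a sketch, not legal advice on when awareness starts):

```python
# Sketch of the Article 73 reporting clock: report immediately once a causal
# link is established, and never later than 15 days after becoming aware.
from datetime import date, timedelta


def reporting_deadline(aware_on: date) -> date:
    """Latest date a serious incident report may be filed."""
    return aware_on + timedelta(days=15)


print(reporting_deadline(date(2026, 9, 1)))  # 2026-09-16
```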
Transparency obligations
Article 50 imposes transparency obligations that apply regardless of risk level. If your AI system interacts directly with people (chatbots, virtual assistants), you must tell them they’re dealing with AI. If your system generates deepfakes or other synthetic content, you must label it. If you deploy emotion recognition or biometric categorisation systems, you must inform the people exposed to them.
These aren’t limited to high-risk systems. Even a low-risk customer service chatbot triggers transparency obligations.
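For a chatbot, the disclosure can be as simple as prepending a notice to the first response in a conversation. A minimal sketch; generate_reply is a stand-in for whatever model or vendor API you actually call:

```python
# Sketch of an Article 50-style disclosure wrapper around a chatbot backend.
AI_DISCLOSURE = "You are chatting with an automated assistant, not a human agent."


def generate_reply(message: str) -> str:
    # Placeholder for the real model call.
    return f"Echo: {message}"


def reply_with_disclosure(message: str, first_turn: bool) -> str:
    """Prefix the AI disclosure on the first turn of a conversation."""
    reply = generate_reply(message)
    return f"{AI_DISCLOSURE}\n\n{reply}" if first_turn else reply


print(reply_with_disclosure("Where is my order?", first_turn=True))
```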
The practical gap
The challenge for most deployers isn’t understanding what the Act requires. It’s operationalising it. Many businesses have adopted AI tools incrementally, often without central coordination. Marketing bought an AI copywriting tool. Support deployed a chatbot. HR started using an AI screening service. Each decision made sense in isolation, but nobody mapped the full inventory of AI systems or assessed which ones fall under the Act.
That inventory is where compliance starts. You cannot assess your obligations if you don’t know what AI systems you’re running, who selected them, what data they process, and who they affect.
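A spreadsheet works, but if you want the inventory to feed the later compliance steps, a structured record per system is worth the small effort. A sketch with illustrative fields and a hypothetical entry; the field names and risk labels are our own shorthand:

```python
# Sketch of a per-system inventory entry covering the questions that later
# compliance work depends on.
from dataclasses import dataclass


@dataclass
class AISystemRecord:
    name: str
    vendor: str
    business_owner: str             # who selected and runs it
    purpose: str
    data_processed: list[str]
    people_affected: list[str]
    annex_iii_category: str | None  # None if no Annex III match
    role: str                       # "deployer" or "provider"


inventory = [
    AISystemRecord(
        name="cv-screening-tool",
        vendor="ExampleVendor",      # hypothetical
        business_owner="HR",
        purpose="Shortlisting job applicants",
        data_processed=["CVs", "application forms"],
        people_affected=["job applicants"],
        annex_iii_category="4 - employment",
        role="deployer",
    ),
]
```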
What to do now
If you’re a deployer (and most businesses using AI are), here’s the minimum you should be working on before August 2026:
- Inventory your AI systems. Every tool, API, model, and integration. Include the ones marketing bought without telling IT.
- Classify each system’s risk level. Map them against Annex III to determine which are high-risk.
- Identify your role for each system. Are you a deployer, or have you crossed into provider territory?
- Request documentation from providers. You need their instructions for use, risk assessments, and conformity declarations for any high-risk system.
- Assign human oversight. Name the people responsible for overseeing each high-risk system and ensure they have the competence and authority to do the job.
- Set up monitoring and logging. You can’t comply with monitoring obligations if you have no monitoring infrastructure.
- Prepare your FRIA process. Build the template, identify who conducts the assessments, and start with your highest-risk deployments.
The Act gives deployers real responsibilities. They’re less onerous than provider obligations, but they’re not trivial, and they’re enforceable. The businesses that treat “we didn’t build it” as a compliance strategy will find out the hard way that the Act disagrees.