
AI Act Compliance for Non-EU Companies

If your company is headquartered outside the European Union, you might assume the EU AI Act doesn’t apply to you. Many non-EU companies made the same assumption about GDPR in 2018. They were wrong then, and the AI Act takes the same extraterritorial approach.

The EU AI Act has explicit extraterritorial scope. Where your company is based matters far less than where your AI system has its effects.

How extraterritorial scope works

Article 2 defines who falls under the Act:

Article 2(1)(a): Providers who place AI systems on the market or put them into service in the EU, regardless of whether those providers are established in the EU or in a third country.

Article 2(1)(b): Deployers of AI systems that have their place of establishment in the EU or are located within the EU.

Article 2(1)(c): Providers and deployers of AI systems established in a third country, where the output produced by the AI system is used in the EU.

That last clause is the critical one. If your AI system produces outputs that are used within the EU — even if your company, your servers, and your employees are entirely outside the EU — the Act applies to you.

What “output used in the EU” means in practice

Consider these scenarios:

A US SaaS company with EU customers. You’ve built an AI-powered customer service chatbot. It serves customers globally, including customers in EU member states. The chatbot’s outputs (responses to customer queries) are used in the EU. The Act applies.

An Australian fintech using AI credit scoring. Your AI system assesses loan applications. Some applicants are EU residents applying through your EU-licensed subsidiary. The AI system’s outputs (credit decisions) affect people in the EU. The Act applies.

A Canadian recruitment platform. Your AI system screens CVs and ranks candidates for employers. Some of those employers are EU companies hiring in the EU. The AI system’s outputs (candidate rankings) are used by EU deployers. The Act applies to you as a provider.

A Japanese company with no EU operations. You’ve built an AI translation tool. A German company licenses it for internal use. The tool’s outputs are used in the EU. The Act applies to you as a provider. The German company is the deployer with its own obligations.

The pattern is consistent: if your AI system’s outputs touch the EU, whether through direct customer interaction, decisions about EU residents, or use by EU-based organisations, you’re in scope.

The authorised representative requirement

If you’re a non-EU provider of high-risk AI systems, Article 22 requires you to appoint an authorised representative established in the EU before placing those systems on the EU market. This representative acts as your contact point for national competent authorities and must be empowered to:

  • Verify that the EU Declaration of Conformity and technical documentation have been drawn up
  • Provide the national competent authority with all information necessary to demonstrate compliance
  • Cooperate with authorities on any corrective action

The authorised representative must be appointed by written mandate. They don’t replace your obligations (you’re still responsible for compliance), but they serve as the EU’s point of contact for enforcement purposes.

This is directly analogous to GDPR’s requirement for a representative under Article 27 of that regulation. If you already have a GDPR representative in the EU, they may be able to take on the AI Act representative role as well, though the competencies required are different.

Practical implications for non-EU providers

You need to know the Act as well as EU competitors do

The compliance requirements for non-EU providers are identical to those for EU providers. If your AI system is high-risk, you need the same risk management system, technical documentation, conformity assessment, quality management system, and EU database registration as an EU-based provider.

The fact that you’re based outside the EU doesn’t reduce the standard. It does, however, create practical challenges:

Legal complexity. You’re subject to your home country’s laws and the EU AI Act simultaneously. If these conflict (for example, if your home jurisdiction’s data laws restrict the information you can share with EU authorities), you need legal advice on how to navigate both.

Regulatory interaction. Dealing with EU regulators from a different timezone and legal culture adds friction. Your authorised representative helps, but complex compliance questions will still require direct engagement.

Documentation in EU languages. Certain documentation, such as instructions for use and declarations of conformity, may need to be provided in the official language of the member state where the system is placed on the market. If you’re selling across multiple EU countries, that means multiple language versions.

You need to know which member states you’re operating in

The EU AI Act is a regulation (directly applicable in all member states), but enforcement is handled by national competent authorities designated by each member state. Different authorities may have different enforcement priorities, procedures, and interpretations.

If your AI system is used across multiple EU member states, you may need to engage with multiple national authorities. In practice, your primary point of contact will typically be the authority in the member state where your authorised representative is established, but other authorities can investigate violations affecting people in their jurisdiction.

You should plan for conformity assessment

For high-risk AI systems, the Act requires conformity assessment before the system can be placed on the EU market. For most high-risk categories, this can be done through internal assessment (self-certification). But for some categories — notably biometric identification systems — a third-party conformity assessment by a notified body is required.

If you need a third-party assessment, you’ll need to engage an EU-based notified body. These assessments take time. If your product launch timeline depends on EU market access, factor the conformity assessment into your planning now.

Practical implications for non-EU deployers

If you’re a non-EU company that deploys AI systems whose outputs are used in the EU, the Act’s deployer obligations apply to that EU-facing use:

Human oversight for EU-affecting decisions. If your AI system makes decisions about EU residents — screening EU job applicants, assessing EU credit applications, serving EU customers — you need human oversight for those decisions that meets the Act’s standards.
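
One way to make that concrete is a routing gate in your decision pipeline: EU-affecting decisions in high-risk categories are held for human review instead of taking effect automatically. The sketch below is a minimal illustration under that assumption; the names (`Decision`, `review_queue`, the category set) are hypothetical, not terms from the Act.

```python
# Hypothetical sketch: routing EU-affecting high-risk decisions to a human.
from dataclasses import dataclass

HIGH_RISK_CATEGORIES = {"credit_scoring", "recruitment"}  # illustrative set

@dataclass
class Decision:
    subject_region: str  # where the affected person is, e.g. "EU" or "US"
    category: str        # e.g. "credit_scoring", "recruitment"
    ai_outcome: str      # the system's proposed result

def requires_human_review(d: Decision) -> bool:
    """EU-affecting decisions in high-risk categories get a human gate."""
    return d.subject_region == "EU" and d.category in HIGH_RISK_CATEGORIES

def handle(d: Decision, review_queue: list[Decision]) -> str:
    if requires_human_review(d):
        review_queue.append(d)   # a human reviewer makes the final call
        return "pending_human_review"
    return d.ai_outcome          # automated path for out-of-scope decisions

queue: list[Decision] = []
print(handle(Decision("EU", "credit_scoring", "declined"), queue))  # pending_human_review
print(handle(Decision("US", "credit_scoring", "declined"), queue))  # declined
```

Whether a gate like this satisfies the Act’s oversight standard depends on the system and context; the point is that EU-affecting decisions need a deliberate human checkpoint, not an afterthought.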

Transparency for EU interactions. If your chatbot talks to EU customers, the Article 50 transparency disclosures must be in place for those interactions. You might choose to implement them globally (simpler) or geo-target them (more complex but avoids changes for other markets).
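
As a concrete illustration of that choice, here is a minimal sketch of geo-targeted disclosure in a chatbot response path. Everything here is an assumption for illustration: `generate_reply` stands in for your existing model call, the disclosure wording is illustrative rather than the Act’s prescribed text, and the country-code set is abbreviated.

```python
# Hypothetical sketch: geo-targeted AI transparency disclosure.
EU_COUNTRY_CODES = {"AT", "BE", "DE", "ES", "FR", "IE", "IT", "NL", "PL", "SE"}  # abbreviated; the EU has 27 member states

AI_DISCLOSURE = "You are chatting with an AI system."  # illustrative wording

def generate_reply(message: str) -> str:
    """Stand-in for your existing chatbot model call."""
    return f"Echo: {message}"

def chatbot_reply(message: str, user_country: str, disclose_globally: bool = False) -> str:
    reply = generate_reply(message)
    # Global disclosure is the simpler option; geo-targeting limits the
    # change to EU users at the cost of needing reliable geolocation.
    if disclose_globally or user_country in EU_COUNTRY_CODES:
        reply = f"{AI_DISCLOSURE}\n\n{reply}"
    return reply

print(chatbot_reply("Where is my order?", user_country="DE"))
```

The global option is usually the safer default: it removes the dependency on accurate geolocation, which is itself a failure point.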

Incident reporting for EU harms. If your AI system causes a serious incident affecting someone in the EU, you have reporting obligations to the relevant EU market surveillance authority.

Fundamental Rights Impact Assessment. If you deploy high-risk AI in a context that affects EU residents and you fall within Article 27’s scope (bodies governed by public law, private entities providing public services, and certain credit scoring and insurance use cases), you need an FRIA.

The enforcement question

Non-EU companies sometimes ask: “How would the EU actually enforce this against us?”

The mechanisms are similar to GDPR enforcement against non-EU companies:

Authorised representative liability. Your EU representative can be held accountable, which creates practical enforcement leverage even if the parent company is outside EU jurisdiction.

Market access restrictions. Non-compliant AI systems can be prohibited from the EU market. If EU revenue matters to your business, this is a powerful incentive.

Cooperation agreements. The EU is building regulatory cooperation frameworks with other jurisdictions. Enforcement actions can be coordinated internationally.

Customer and partner pressure. EU-based companies that deploy your AI system have their own compliance obligations. If your product doesn’t meet the Act’s requirements, your EU customers can’t use it compliantly. This creates commercial pressure independent of direct enforcement.

Reputational impact. A public enforcement action by an EU authority affects your global reputation, not just your EU business.

What to do now

If you’re a non-EU company with AI systems that affect people in the EU:

  1. Assess your exposure. Map which of your AI systems produce outputs used in the EU. Consider both direct customer-facing systems and systems used by EU-based partners or customers. A simple inventory sketch follows this list.

  2. Classify your role. For each in-scope system, determine whether you’re a provider, a deployer, or both.

  3. Appoint an authorised representative. If you’re a provider placing AI systems on the EU market, this is mandatory. Start the process early — finding the right representative and executing the mandate takes time.

  4. Align your compliance programme. The requirements are the same regardless of where you’re based. If you’re already building compliance for your home market’s AI regulations (if any), look for synergies with the EU requirements.

  5. Engage with your EU customers. If you’re a provider and your EU customers are deployers, they’ll need documentation from you — instructions for use, conformity declarations, risk assessment details. Start those conversations now.
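
For steps 1 and 2, a structured inventory goes a long way. The sketch below assumes a minimal record per system; the field names and example entries are hypothetical, not a prescribed format.

```python
# Hypothetical sketch: an AI system inventory covering steps 1 and 2.
from dataclasses import dataclass

@dataclass
class AISystemRecord:
    name: str
    outputs_used_in_eu: bool  # step 1: exposure under Article 2(1)(c)
    role: str                 # step 2: "provider", "deployer", or "both"
    high_risk: bool           # drives conformity assessment and documentation duties

inventory = [
    AISystemRecord("support-chatbot", outputs_used_in_eu=True, role="provider", high_risk=False),
    AISystemRecord("cv-screener", outputs_used_in_eu=True, role="provider", high_risk=True),
    AISystemRecord("internal-forecasting", outputs_used_in_eu=False, role="deployer", high_risk=False),
]

# Systems whose outputs are used in the EU are the ones in scope of the Act.
for s in (s for s in inventory if s.outputs_used_in_eu):
    print(f"{s.name}: role={s.role}, high_risk={s.high_risk}")
```

Even a spreadsheet with these four columns is enough to start; the value is in forcing a system-by-system answer to the scope question.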

The EU AI Act’s extraterritorial scope is not ambiguous. If your AI affects people in the EU, the Act applies to you. The sooner you engage with that reality, the better positioned you’ll be when enforcement begins.
