Building a Compliance Team When You Don't Have One

Most businesses affected by the EU AI Act don’t have a dedicated AI governance team. They don’t have a Chief AI Officer, an AI Ethics Board, or a compliance function that knows anything about AI regulation. They have a legal team that handles contracts, an engineering team that builds products, and a product team that decides what gets built. The AI Act’s compliance obligations need to land somewhere in this existing structure.

Hiring a dedicated AI compliance team is the ideal approach, but it’s unrealistic for most mid-sized companies. What’s realistic is distributing responsibilities across existing roles, supported by clear ownership and practical processes.

What needs to be covered

Before distributing responsibilities, understand what the Act requires operationally:

Inventory and classification. Someone needs to map every AI system in the organisation and determine its risk classification. This starts as a one-off project and becomes an ongoing process as new systems are adopted.

Risk management. For high-risk systems, someone needs to maintain a risk management system: identifying risks, implementing mitigations, and reviewing them regularly.

Technical documentation. Providers must maintain detailed technical documentation for high-risk systems. This includes system descriptions, performance metrics, data governance details, and instructions for use.

Human oversight. Designated individuals must oversee each high-risk AI system with the competence and authority to intervene.

Monitoring. AI systems in production need continuous monitoring: performance tracking, drift detection, anomaly alerting.

Impact assessments. For deployers of high-risk AI, Fundamental Rights Impact Assessments must be conducted before deployment.

Incident reporting. Serious incidents must be detected, assessed, and reported to authorities within 15 days.

Transparency. AI systems interacting with people must be disclosed. Synthetic content must be labelled.

AI literacy. Staff working with AI must have appropriate training.
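The inventory and classification step above can be kept as a simple record per system. A minimal sketch in Python; the field names, risk categories, and example systems are illustrative assumptions, not terms prescribed by the Act, and the actual classification of any system is a legal judgement:

```python
from dataclasses import dataclass
from enum import Enum

class RiskClass(Enum):
    # Broad risk tiers from the Act; mapping a system to a tier
    # is a legal judgement, not a purely technical one.
    PROHIBITED = "prohibited"
    HIGH = "high"
    LIMITED = "limited"    # transparency obligations apply
    MINIMAL = "minimal"

@dataclass
class AISystemRecord:
    """One row in the AI system inventory (illustrative fields)."""
    name: str
    owner: str             # accountable person or team
    intended_purpose: str
    risk_class: RiskClass
    is_provider: bool      # provider vs. deployer role for this system

# Hypothetical example inventory
inventory = [
    AISystemRecord("cv-screening", "HR", "Rank job applications",
                   RiskClass.HIGH, False),
    AISystemRecord("support-chatbot", "Product", "Answer customer queries",
                   RiskClass.LIMITED, True),
]

# The one-off audit becomes an ongoing process: for example,
# listing every high-risk system that needs a risk management file.
high_risk = [s.name for s in inventory if s.risk_class is RiskClass.HIGH]
print(high_risk)
```

Keeping the inventory as structured data rather than a spreadsheet tab makes it easy to generate the compliance owner's monthly review list automatically.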

A practical distribution model

Here’s how these responsibilities can map to roles that most mid-sized companies already have:

Compliance owner

Every compliance programme needs a single point of accountability. Someone must own the overall compliance status and be answerable to leadership and, if necessary, regulators.

Responsibilities:

  • Overall AI Act compliance ownership
  • Relationship with national competent authorities
  • Final decision on risk classifications where they’re contested
  • Sign-off on Fundamental Rights Impact Assessments
  • Incident reporting decisions (whether an event meets the Article 73 threshold)
  • Ensuring AI literacy training is in place

This person doesn’t need to be an AI expert. They need to understand the regulatory framework, have authority to direct resources, and coordinate across functions. In many mid-sized companies, this will be the General Counsel or COO, possibly with support from an external AI Act specialist for the initial setup.

Engineering / CTO function

Most of the Act’s operational requirements need technical implementation. The engineering team builds the infrastructure that makes compliance possible.

Responsibilities:

  • AI system inventory (technical audit of APIs, models, integrations)
  • Logging infrastructure for AI system inputs and outputs
  • Monitoring dashboards and alerting
  • Input validation and output filtering guardrails
  • Human oversight tooling
  • Technical sections of documentation (system architecture, data flows, performance metrics)
  • Model version management
  • Incident detection mechanisms

The CTO or engineering lead should designate a specific engineer or team as the AI compliance technical lead. This person bridges the compliance owner’s requirements and the engineering team’s implementation. They don’t need to be full-time on compliance, but they need allocated time and clear authority to prioritise compliance work.
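As one concrete example of the logging responsibility above, a thin wrapper that records every model call as a structured audit event might look like the following. This is a hedged sketch: `call_model`, the log field names, and the model version string are placeholders for whatever your stack actually uses:

```python
import json
import logging
import time
import uuid

logging.basicConfig(level=logging.INFO, format="%(message)s")
log = logging.getLogger("ai_audit")

def call_model(prompt: str) -> str:
    # Placeholder for the real model call (API or local inference).
    return f"echo: {prompt}"

def logged_call(prompt: str) -> str:
    """Invoke the model and emit a structured audit record."""
    request_id = str(uuid.uuid4())
    start = time.time()
    output = call_model(prompt)
    log.info(json.dumps({
        "request_id": request_id,
        "timestamp": start,
        "input": prompt,
        "output": output,
        "latency_s": round(time.time() - start, 3),
        "model_version": "v1.0",  # model version management hooks in here
    }))
    return output

result = logged_call("hello")
print(result)
```

Routing these records to durable storage gives the monitoring dashboards and incident detection mechanisms something to work from, and the same log doubles as evidence for technical documentation.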

Product management

Product managers typically own the decisions about how AI is used in the product: what features to build, what use cases to pursue, and how they’re presented to users.

Responsibilities:

  • Intended purpose documentation for each AI system
  • Use case scoping (ensuring AI systems stay within their classified risk boundaries)
  • Transparency implementation in user interfaces (AI disclosure labels, content attribution)
  • User-facing documentation (how the AI system works, its limitations)
  • User feedback collection and escalation
  • Coordinating with engineering on human oversight workflows

Product managers are also well-positioned to conduct the deployer survey, identifying AI tools used across the business, because they typically have the cross-functional relationships to get honest answers.

Data / analytics function

If your company has a data team, data engineers, or data scientists, they contribute to several compliance areas.

Responsibilities:

  • Data governance for AI training, validation, and testing data
  • Bias assessment and fairness testing
  • Data quality monitoring
  • Performance metrics calculation and tracking
  • Statistical analysis for impact assessments

HR / People function

HR has a specific role because of the AI literacy obligation and because many high-risk AI use cases involve employment.

Responsibilities:

  • AI literacy training programme design and delivery
  • Assessment of AI tools used in recruitment and HR processes
  • Human oversight for any AI-assisted hiring or performance decisions
  • Supporting Fundamental Rights Impact Assessments for employment-related AI

External support

Even with good internal distribution, most mid-sized companies will benefit from external support in specific areas:

  • Legal counsel with AI Act expertise for initial risk classification, FRIA methodology, and regulatory interpretation
  • Technical consultants for conformity assessment support (if you’re a provider of high-risk AI)
  • Training providers for AI literacy programmes
  • Auditors for periodic compliance reviews

The goal is to use external expertise for setup and periodic review, not as an ongoing crutch. Compliance must be embedded in internal operations to be sustainable.

Making it work

Create a RACI matrix

For each compliance obligation, document who is Responsible (does the work), Accountable (owns the outcome), Consulted (provides input), and Informed (kept updated). This prevents obligations from falling between teams.

Obligation | Responsible | Accountable | Consulted | Informed
AI inventory | Engineering + Product | Compliance owner | All departments | Leadership
Risk classification | Compliance owner | Compliance owner | Engineering, Legal | Product
Technical documentation | Engineering | CTO | Product, Data | Compliance owner
Human oversight | Designated individuals | Compliance owner | Engineering | Product
Monitoring | Engineering | CTO | Data | Compliance owner
FRIA | Compliance owner | Compliance owner | Engineering, Product, HR | Leadership
Incident reporting | Compliance owner | Compliance owner | Engineering | Leadership
Transparency | Product | Product lead | Engineering | Compliance owner
AI literacy | HR | HR lead | All departments | Compliance owner
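A RACI matrix can also be kept in machine-readable form so it is easy to check that nothing falls between teams. A minimal sketch (the obligation and role names are illustrative) that flags any obligation lacking exactly one Accountable owner or at least one Responsible party:

```python
# RACI matrix as data: obligation -> {role: "R" | "A" | "C" | "I"}
raci = {
    "AI inventory": {"Engineering": "R", "Product": "R",
                     "Compliance owner": "A", "Leadership": "I"},
    # Deliberately incomplete entry, to show the check firing:
    "Incident reporting": {"Compliance owner": "A",
                           "Engineering": "C", "Leadership": "I"},
}

def check_raci(matrix):
    """Return (obligation, problem) pairs for gaps in the matrix."""
    problems = []
    for obligation, roles in matrix.items():
        codes = list(roles.values())
        if codes.count("A") != 1:
            problems.append((obligation, "needs exactly one Accountable"))
        if "R" not in codes:
            problems.append((obligation, "needs at least one Responsible"))
    return problems

print(check_raci(raci))
# [('Incident reporting', 'needs at least one Responsible')]
```

Running a check like this whenever the matrix changes turns "prevents obligations from falling between teams" from an aspiration into a test.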

Set a regular cadence

Compliance isn’t a project. It’s an ongoing process. Establish regular touchpoints:

  • Monthly: Compliance owner reviews monitoring dashboards, incident log, and any new AI system adoptions
  • Quarterly: Cross-functional compliance review — engineering, product, legal, and data review compliance status, address issues, update documentation
  • Annually: Full compliance audit, FRIA review, training refresh, documentation update

Budget for it

Distributed compliance still costs time and money. The time your engineering team spends building monitoring infrastructure, the time your product managers spend on documentation, the external legal fees for regulatory interpretation: these all need to be budgeted.

The cost of distributed compliance is lower than hiring a dedicated team, but it’s not zero. Make it visible in budgets so it’s resourced consistently rather than squeezed out by competing priorities.

Start with the highest risk

You don’t need to do everything at once. Prioritise:

  1. AI system inventory and risk classification (you need to know what you have)
  2. Prohibited practices review (these are already enforceable)
  3. AI literacy training (already required under Article 4)
  4. Transparency disclosures (straightforward to implement)
  5. High-risk compliance (for your most consequential AI systems first)

This sequencing gives you quick wins while building towards the more complex obligations.

When to hire

At some point, the distributed model may not be enough. Consider hiring dedicated AI compliance staff when:

  • You have more than five high-risk AI systems
  • You are a provider (not just a deployer) of high-risk AI systems
  • Your AI systems affect large numbers of people
  • You answer to regulators in multiple member states
  • The compliance owner is spending more than a quarter of their time on AI Act compliance

Until then, the distributed model works, provided roles are clear, time is allocated, and the compliance owner has the authority to hold people accountable.
