
Your First AI System Inventory: A Practical Walkthrough

Before you can comply with the EU AI Act, you need to know what the Act applies to in your business. That means building an inventory of every AI system your organisation uses, develops, or distributes. It sounds straightforward. In practice, most organisations discover AI systems they didn’t know they had.

The inventory is the foundation of everything that follows: risk classification, obligation mapping, documentation, and oversight. Get it wrong, and every downstream compliance activity is built on incomplete information.

Why this is harder than it looks

AI systems don’t always announce themselves. Enterprise software increasingly embeds AI features without making them prominent. Your CRM might use AI for lead scoring. Your customer support platform might route tickets using machine learning. Your email security tool almost certainly uses AI for threat detection. Your HR platform might rank candidates using an algorithm that qualifies as an AI system under the Act.

The problem compounds across departments. Marketing adopted an AI copywriting tool. Sales uses an AI-powered prospecting platform. Finance relies on automated fraud detection. Each team made reasonable procurement decisions, but nobody mapped the full landscape.

Article 3(1) of the Act defines an AI system broadly:

“a machine-based system that is designed to operate with varying levels of autonomy and that may exhibit adaptiveness after deployment and that, for explicit or implicit objectives, infers, from the input it receives, how to generate outputs such as predictions, content, recommendations, or decisions that can influence physical or virtual environments.”

That definition catches more than you might expect. Rule-based systems with no learning capability are generally excluded, but anything that infers outputs from inputs, including many analytics tools, recommendation engines, and automated decision systems, potentially falls within scope.

Step 1: Define what you’re looking for

Before you start surveying teams, establish clear criteria. You need a working definition that non-technical people can apply. Something like:

Include any tool, feature, or system that:

  • Makes predictions, recommendations, or decisions based on data
  • Generates content (text, images, code, audio)
  • Classifies, scores, or ranks people or things
  • Automates decisions that previously required human judgement
  • Uses terms like “AI,” “ML,” “machine learning,” “neural network,” or “intelligent” in its marketing materials

Exclude:

  • Simple rule-based automation (if X then Y, with no inference)
  • Standard statistical reporting and dashboards
  • Deterministic algorithms with no learning component

This isn’t a legally precise boundary, but it’s a practical starting point. You can refine classifications later. The goal at this stage is to cast a wide net.
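To apply those criteria at scale, a rough keyword triage over tool descriptions and vendor marketing copy can flag candidates for closer review. The sketch below is illustrative only: the signal lists are assumptions you would tune to your own vocabulary, and nothing it outputs is a legal determination — anything flagged simply goes on the candidate list for a human to look at.

```python
import re

# Rough first-pass triage for candidate AI systems, based on the working
# criteria above. A screening heuristic only -- not a legal classification.
INCLUDE_SIGNALS = [
    r"\bpredict\w*", r"\brecommend\w*", r"\bscor(?:e|es|ing)\b", r"\brank\w*",
    r"\bclassif\w*", r"\bgenerat\w*", r"\bmachine learning\b", r"\bml\b",
    r"\bneural\b", r"\bai\b", r"\bintelligent\b",
]
EXCLUDE_SIGNALS = [r"\brule-based\b", r"\bdeterministic\b", r"\bdashboard\b"]

def flag_for_review(description: str) -> bool:
    """Return True if a tool description belongs on the candidate list."""
    text = description.lower()
    if any(re.search(p, text) for p in EXCLUDE_SIGNALS):
        return False
    return any(re.search(p, text) for p in INCLUDE_SIGNALS)

print(flag_for_review("CRM add-on that scores inbound leads with machine learning"))  # True
print(flag_for_review("Deterministic invoice numbering macro (if X then Y)"))         # False
```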

Step 2: Survey every department

Send a structured survey to every department head and team lead. Don’t rely on IT or procurement records alone. Shadow IT is real, and many AI tools are adopted at the team level without centralised approval.

Your survey should capture:

  • System name and vendor — What is it called, and who provides it?
  • What it does — Describe its function in plain language
  • How it’s used — What business process does it support?
  • Who uses it — Which roles interact with it?
  • Who it affects — Does it interact with or make decisions about customers, employees, candidates, or other people?
  • Data inputs — What data does it process? Personal data? Biometric data?
  • Decision authority — Are its outputs acted on automatically, or does a human review them?
  • Procurement route — Was it procured centrally, or adopted at the team level?
  • Contract details — Is there a contract? Who signed it? When does it expire?

Expect follow-up conversations. Many respondents won’t initially identify AI-powered features in their tools. “We use Salesforce” doesn’t capture that Salesforce Einstein is running AI predictions on their pipeline.
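It also helps to fix the survey’s field names before responses start arriving, so answers can be merged into one sheet later. A minimal template, assuming responses are collected as CSV, might look like the sketch below; the column names are suggestions, not terms prescribed by the Act.

```python
import csv

# Suggested columns for the department survey, mirroring the questions above.
SURVEY_FIELDS = [
    "system_name", "vendor", "what_it_does", "business_process", "users",
    "people_affected", "data_inputs", "personal_data", "decision_authority",
    "procurement_route", "contract_owner", "contract_expiry",
]

# Write an empty response sheet that each department head can fill in.
with open("ai_survey_template.csv", "w", newline="") as f:
    csv.DictWriter(f, fieldnames=SURVEY_FIELDS).writeheader()
```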

Step 3: Check your technology stack

Complement the survey with a technical audit. Review:

  • API integrations — Which external APIs does your infrastructure call? Any AI/ML services? Check for OpenAI, Anthropic, Google AI, Azure Cognitive Services, AWS AI services, and similar.
  • SaaS subscriptions — Review your SaaS inventory for AI-powered tools. Check billing records, SSO configurations, and browser extension usage.
  • Custom-built systems — Does your engineering team maintain any in-house AI or ML models? Check code repositories for ML frameworks (TensorFlow, PyTorch, scikit-learn, Hugging Face); a quick scan is sketched after this list.
  • Embedded AI features — Review feature announcements and documentation for your core platforms. Many enterprise tools have added AI features that activate automatically or are enabled by default.
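For the custom-built systems check, a quick scan of dependency files catches most in-house ML work. The sketch below is a starting point under simple assumptions: Python repositories, dependencies declared in standard files, and a non-exhaustive framework list you would extend for your own stack.

```python
from pathlib import Path

# Non-exhaustive scan of a repository tree for common ML framework dependencies.
ML_FRAMEWORKS = {"tensorflow", "torch", "scikit-learn", "sklearn",
                 "transformers", "xgboost", "lightgbm", "keras"}
DEPENDENCY_FILES = ("requirements*.txt", "pyproject.toml", "Pipfile", "environment.yml")

def scan_repo(repo_root: str) -> set[str]:
    """Return the ML frameworks referenced in a repo's dependency files."""
    found: set[str] = set()
    for pattern in DEPENDENCY_FILES:
        for dep_file in Path(repo_root).rglob(pattern):
            text = dep_file.read_text(errors="ignore").lower()
            found.update(fw for fw in ML_FRAMEWORKS if fw in text)
    return found

if __name__ == "__main__":
    print(scan_repo("."))  # e.g. {'torch', 'transformers'}
```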

Step 4: Classify each system by risk

Once you have your inventory, map each system against the Act’s risk tiers:

Unacceptable risk (prohibited — Article 5)

Systems that are banned outright. Social scoring, manipulative AI, certain biometric systems. If anything in your inventory matches these, stop using it immediately.

High-risk (heavy obligations — Article 6, Annex III)

Systems used in areas the Act considers high-risk:

  • Biometric identification and categorisation
  • Critical infrastructure management
  • Education and vocational training (access, assessment)
  • Employment (recruitment, screening, evaluation, promotion, termination)
  • Access to essential services (credit scoring, insurance, social benefits)
  • Law enforcement
  • Migration and border control
  • Administration of justice

If your AI system is used in any of these domains and makes or materially influences decisions about people, it’s likely high-risk.

Limited risk (transparency obligations — Article 50)

Systems that interact directly with people (chatbots, virtual assistants) or generate synthetic content. These trigger transparency obligations but not the full high-risk compliance framework.

Minimal risk (no specific obligations)

Everything else. Most AI systems fall here. No specific regulatory obligations, though general product safety and consumer protection laws still apply.
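If you want to record a provisional tier against each inventory entry, the decision order above can be encoded as a first-pass helper. The domain labels and boolean flags below are simplifications for illustration; actual classification needs the Annex III wording and, often, legal review.

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "unacceptable"
    HIGH = "high"
    LIMITED = "limited"
    MINIMAL = "minimal"

# Simplified stand-ins for the Annex III areas listed above.
ANNEX_III_DOMAINS = {
    "biometrics", "critical infrastructure", "education", "employment",
    "essential services", "law enforcement", "migration", "justice",
}

def first_pass_tier(domain: str, affects_people: bool,
                    prohibited_practice: bool, interacts_or_generates: bool) -> RiskTier:
    """Provisional tier following the decision order described above."""
    if prohibited_practice:                                 # Article 5 ban
        return RiskTier.UNACCEPTABLE
    if domain in ANNEX_III_DOMAINS and affects_people:      # Article 6 / Annex III
        return RiskTier.HIGH
    if interacts_or_generates:                              # Article 50 transparency
        return RiskTier.LIMITED
    return RiskTier.MINIMAL

# Example: a CV-screening tool used in recruitment.
print(first_pass_tier("employment", affects_people=True,
                      prohibited_practice=False, interacts_or_generates=False))
# RiskTier.HIGH
```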

Step 5: Determine your role

For each system, establish whether your organisation is a provider or a deployer:

  • Provider: You developed the AI system, or commissioned its development, or put your name on it, or substantially modified it
  • Deployer: You use the AI system under your authority but didn’t develop it

The distinction matters because providers and deployers have different obligations. A deployer of a high-risk system has serious responsibilities (human oversight, monitoring, incident reporting, and in some cases a fundamental rights impact assessment, or FRIA), but a provider’s obligations are heavier still (conformity assessment, technical documentation, a risk management system, a quality management system).

Some organisations are both: a provider of their own AI product and a deployer of third-party AI tools used internally.
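Expressed as a rule of thumb, and applied per system, the split looks roughly like the sketch below; the flag names are illustrative, and the same organisation will often end up on both sides across different entries.

```python
# Per-system rule of thumb for the provider/deployer split described above.
def classify_role(developed_or_commissioned: bool,
                  own_brand_or_substantial_modification: bool) -> str:
    if developed_or_commissioned or own_brand_or_substantial_modification:
        return "provider"
    return "deployer"

print(classify_role(True, False))    # provider: your own AI product
print(classify_role(False, False))   # deployer: a third-party tool used internally
```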

Step 6: Document and maintain

Your inventory isn’t a one-off exercise. It needs to be maintained as a living document. AI systems get adopted, retired, upgraded, and replaced. New features get enabled. Vendors add AI capabilities to existing products.

Assign ownership of the inventory to a specific role, whether that’s a Chief AI Officer, a compliance lead, or a technology governance function. Establish a review cadence (quarterly is reasonable) and a process for capturing new AI systems at the point of procurement.

Your inventory should capture, at minimum:

  • System name – Identification
  • Vendor / developer – Provider relationship
  • Business function – What it’s used for
  • Risk classification – Unacceptable / High / Limited / Minimal
  • Your role – Provider / Deployer
  • People affected – Customers, employees, candidates, etc.
  • Data processed – Categories of personal data
  • Human oversight – Who reviews outputs, and how
  • Documentation status – Instructions for use, FRIA, risk assessment
  • Contract expiry – For procurement planning
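As a concrete starting point, each row of the inventory can be held as a simple record like the sketch below. The field names and types mirror the list above but are illustrative; adapt them to whatever register, spreadsheet, or GRC tool you actually maintain.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import Optional

@dataclass
class AISystemRecord:
    """One inventory entry per AI system, mirroring the minimum fields above."""
    system_name: str
    vendor: str
    business_function: str
    risk_classification: str              # unacceptable / high / limited / minimal
    role: str                             # provider / deployer
    people_affected: list[str] = field(default_factory=list)
    data_processed: list[str] = field(default_factory=list)
    human_oversight: str = ""
    documentation_status: str = ""
    contract_expiry: Optional[date] = None

# Illustrative entry for an embedded AI feature discovered in the survey.
record = AISystemRecord(
    system_name="Einstein lead scoring",
    vendor="Salesforce",
    business_function="Sales pipeline prioritisation",
    risk_classification="minimal",
    role="deployer",
    people_affected=["prospects"],
    data_processed=["contact data", "engagement history"],
    human_oversight="Sales manager reviews scores weekly",
    documentation_status="Vendor instructions for use on file",
)
```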

Common pitfalls

Overlooking embedded AI. The most commonly missed systems are AI features embedded in platforms you already use: Salesforce Einstein, Microsoft Copilot, Google Workspace AI features, Zoom’s AI Companion. These are AI systems under the Act, even if you think of them as “just features.”

Assuming IT knows everything. Department-level tool adoption is widespread. Marketing teams subscribe to AI tools with a credit card. Sales reps use AI browser extensions. If you only survey IT, you’ll miss a significant portion of your AI footprint.

Classifying too conservatively. Some organisations, anxious about compliance, classify everything as high-risk. This creates unnecessary work and dilutes focus. Be rigorous about the Annex III categories — not every AI system that affects people is high-risk.

Classifying too liberally. The opposite problem. Assuming your chatbot is “minimal risk” because it only answers FAQs ignores the transparency obligations that apply regardless of risk level.

Treating the inventory as a project, not a process. If you build the inventory once and never update it, it will be outdated within months. AI adoption is accelerating, not slowing down.

What comes next

With a complete inventory, you can start the compliance work that matters: conducting Fundamental Rights Impact Assessments for high-risk deployments, requesting documentation from providers, establishing human oversight processes, and building monitoring infrastructure.

The inventory itself isn’t a compliance requirement in the way that a FRIA or incident reporting process is. But without it, you’re trying to comply with obligations you can’t see. Every compliance framework starts with knowing what you have. The AI Act is no different.
