
General-Purpose AI Models: What the EU AI Act Means for GPT Wrappers

A significant number of businesses have built products on top of general-purpose AI models: OpenAI’s GPT, Anthropic’s Claude, Google’s Gemini, Meta’s Llama, and others. You send prompts via an API, receive outputs, and present them through your own interface. Your product might be a customer service chatbot, a document analysis tool, a coding assistant, or a content generation platform.

The EU AI Act has specific provisions for these products, and they don’t always land where people expect. The obligations are split between the foundation model provider and you, the downstream application builder. The split isn’t equal.

The GPAI model framework

Articles 51–56 create a separate regulatory framework for general-purpose AI (GPAI) models. A GPAI model is defined as an AI model that is “trained with a large amount of data using self-supervision at scale, that displays significant generality and is capable of competently performing a wide range of distinct tasks.”

This covers the major foundation models: GPT-4, Claude, Gemini, Llama, Mistral, and similar. The obligations under Articles 51–56 fall on the providers of the GPAI model — that’s OpenAI, Anthropic, Google, Meta, etc., not you.

What GPAI model providers must do

All GPAI model providers must:

  • Maintain technical documentation about the model
  • Provide information and documentation to downstream providers who integrate the model into their own AI systems
  • Establish a policy to comply with EU copyright law
  • Publish a sufficiently detailed summary of the training data content

GPAI models with systemic risk (those trained with more than 10²⁵ FLOPs of compute, or designated by the European Commission) have additional obligations: model evaluation and adversarial testing, assessment and mitigation of systemic risks, tracking and reporting of serious incidents, and cybersecurity protections.

What this means for you

If you’re building on top of a GPAI model via its API, the Articles 51–56 obligations are the model provider’s problem. You don’t need to document the model’s training data, conduct model-level evaluations, or publish compute statistics.

But this is where many people stop reading, and it’s where they get caught out. The GPAI model obligations are separate from your obligations as the builder of an AI system that uses that model.

Your obligations as an AI system provider

When you take a GPAI model and build a product around it (adding a system prompt, connecting it to your data, creating a user interface, deploying it to customers), you are not just using a model. You are the provider of an AI system under the Act.

Article 25(1) makes this explicit: “any person who integrates a general-purpose AI model into an AI system… is considered the provider of that AI system.”

This matters enormously. As the provider of an AI system, your obligations depend on the risk classification of your system, not on the nature of the underlying model.

If your system is high-risk

If the AI system you’ve built falls into a high-risk category under Annex III — for example, a recruitment screening tool, a credit assessment system, or an educational evaluation tool — you carry the full high-risk provider obligations:

  • Risk management system (Article 9)
  • Data governance for any additional training or fine-tuning data (Article 10)
  • Technical documentation (Article 11, Annex IV)
  • Record-keeping and logging (Article 12)
  • Transparency and instructions for use (Article 13)
  • Human oversight capabilities (Article 14)
  • Accuracy, robustness, and cybersecurity (Article 15)
  • Quality management system (Article 17)
  • Conformity assessment (Articles 40–49)
  • EU Declaration of Conformity (Article 47)
  • Registration in the EU database (Article 49)

The fact that the underlying model is provided by someone else doesn’t reduce these obligations. You are the provider of the system, and you are responsible for ensuring it meets the requirements.

If your system is limited-risk

If your system interacts directly with people (as most chatbots do), Article 50’s transparency obligations apply regardless of risk classification:

  • Tell users they’re interacting with AI
  • Label any synthetic content generated by the system
  • Disclose any emotion recognition or biometric categorisation

If your system is minimal-risk

If your system doesn’t fall into high-risk categories and doesn’t directly interact with people, you have minimal obligations under the Act. But you’re still the provider of an AI system, and the Act’s general provisions still apply, including the prohibited practices in Article 5 and the AI literacy obligation in Article 4.

The practical challenges

You don’t control the model

The core tension for GPAI wrappers is that you’re responsible for an AI system whose behaviour is substantially determined by a model you don’t control. This creates several practical difficulties:

Performance documentation. You need to document your system’s accuracy, robustness, and limitations. But these properties depend heavily on the underlying model, which the model provider may update without notice. An OpenAI model update that changes output characteristics can affect your system’s documented performance overnight.

Risk management. You need to identify and mitigate risks from your AI system. Many of these risks originate in the model (hallucinations, biases, failure modes) and you can’t directly fix them. Your risk management should focus on what you can control: input validation, output filtering, prompt engineering, human oversight, and use-case restrictions.

Conformity assessment. You must demonstrate that your system meets the Act’s requirements. For properties that depend on the underlying model, you’ll need to reference the model provider’s documentation and conduct your own testing to verify that the system, as you’ve configured and deployed it, meets the requirements.

Model updates change your system

GPAI models are updated regularly. OpenAI, Anthropic, and Google all release model updates that can change output characteristics. If you’re calling the latest model version via API, your AI system’s behaviour changes when the model updates, potentially affecting your documented performance, risk profile, and compliance status.

Mitigation options:

  • Pin to a specific model version where the API supports it
  • Test against model updates before allowing them into production
  • Build monitoring that detects behavioural changes after model updates
  • Include model version in your logging and documentation
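The pinning and logging points above can be sketched in code. This is an illustrative sketch only: the model identifier is a hypothetical dated snapshot name (version pinning conventions vary by provider), and the request and log field names are assumptions, not any provider’s actual API schema.

```python
from datetime import datetime, timezone

# Hypothetical example: pin a dated model snapshot instead of a floating
# alias, so the model cannot change underneath you without a deliberate
# decision. The model name below is illustrative.
PINNED_MODEL = "gpt-4o-2024-08-06"   # dated snapshot, not a floating alias


def build_request(prompt: str, model: str = PINNED_MODEL) -> dict:
    """Assemble a chat-style API request with an explicitly pinned model."""
    return {
        "model": model,  # stays fixed even when the provider updates defaults
        "messages": [{"role": "user", "content": prompt}],
        "temperature": 0.2,
    }


def log_entry(request: dict, output: str) -> dict:
    """Audit-log record capturing exactly which model version produced an output."""
    return {
        "timestamp": datetime.now(timezone.utc).isoformat(),
        "model_version": request["model"],
        "output_chars": len(output),
    }
```

Recording the pinned version in every log entry is what later lets you correlate a behavioural change or incident with a specific model update.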

Downstream data and fine-tuning

If you fine-tune the base model on your own data, your obligations increase. You become responsible for the quality and governance of the fine-tuning data under Article 10. If you’ve fine-tuned a model for a high-risk use case, the data governance requirements are significant: representativeness, bias assessment, quality management, and documentation.

Even without fine-tuning, if you use retrieval-augmented generation (RAG) to inject your own data into the model’s context, the quality and accuracy of that data affect your system’s compliance. Garbage in, garbage out. If your RAG pipeline feeds the model inaccurate information that leads to harmful outputs, that’s your responsibility.
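One way to act on this is to gate retrieved documents before they reach the model’s context. The sketch below is a minimal illustration under assumed metadata fields (`source`, `last_reviewed`) and thresholds; a real RAG pipeline would tailor these to its own data governance policy.

```python
from datetime import date

# Illustrative admission gate for a RAG pipeline. Source names and the
# freshness threshold are assumptions for the example.
TRUSTED_SOURCES = {"internal_kb", "policy_docs"}
MAX_AGE_DAYS = 365


def admit_to_context(doc: dict, today: date) -> bool:
    """Admit a retrieved document only if its source is trusted and it is fresh."""
    age_days = (today - doc["last_reviewed"]).days
    return doc["source"] in TRUSTED_SOURCES and age_days <= MAX_AGE_DAYS
```

Rejected documents can be logged rather than silently dropped, which also feeds your record-keeping obligations.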

What you need from the model provider

Article 53(1)(b) requires GPAI model providers to give downstream providers (that’s you) information and documentation that enables them to comply with their own obligations. This includes:

  • The model’s capabilities and limitations
  • Technical characteristics relevant to your use case
  • Information about training data (the published summary)
  • Known risks and recommended mitigations

If your model provider isn’t supplying this information, request it explicitly. You need it for your own technical documentation and risk management. Major providers are increasingly publishing model cards and system-level documentation, but the detail and format may not match what the Act requires. Push for specifics.

Building a compliant GPAI wrapper

Treat your system as a product, not just an API call

The biggest mindset shift is recognising that your product isn’t “just a wrapper.” In the eyes of the Act, you’re the provider of an AI system. That system happens to use someone else’s model, much as a car manufacturer uses someone else’s engine. The car manufacturer is still responsible for the car’s safety.

Invest in what you control

Focus your compliance effort on the elements you control:

  • Input validation and guardrails. Filter, validate, and constrain inputs to keep the system within its intended use case
  • Output filtering and post-processing. Check outputs for accuracy, appropriateness, and safety before presenting them to users
  • System prompt and configuration. Your system prompt and model parameters shape the system’s behaviour — document and version them
  • User interface and experience. How users interact with the system, what disclosures they see, and how they can override or challenge outputs
  • Monitoring and feedback loops. Track system performance, user feedback, and edge cases in production
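The first two items, input validation and output filtering, can be sketched as a thin wrapper around the model call. The patterns below are placeholder examples, not production filters; real guardrails would be tuned to your use case and likely combine rules with classifier-based checks.

```python
import re

# Minimal guardrail sketch. The regex patterns are illustrative
# placeholders, not a complete or recommended filter set.
BLOCKED_INPUT = re.compile(r"(?i)\bignore previous instructions\b")
BLOCKED_OUTPUT = re.compile(r"(?i)\bguaranteed (profit|diagnosis)\b")
MAX_PROMPT_CHARS = 4000


def validate_input(prompt: str) -> bool:
    """Reject over-long prompts and obvious injection attempts."""
    return len(prompt) <= MAX_PROMPT_CHARS and not BLOCKED_INPUT.search(prompt)


def filter_output(text: str) -> str:
    """Withhold outputs that make claims the system must not present to users."""
    if BLOCKED_OUTPUT.search(text):
        return "This response was withheld pending human review."
    return text
```

Routing withheld outputs to human review, rather than discarding them, doubles as a human oversight mechanism and a source of monitoring data.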

Document the boundary

Your technical documentation should clearly delineate where your system ends and the model provider’s responsibility begins. Document:

  • Which GPAI model you use and which version
  • What documentation the model provider has supplied
  • What additional testing and validation you’ve conducted
  • How you manage model updates
  • What risks originate from the model vs. from your system’s configuration and deployment
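The boundary items above can be kept as a structured, versionable record rather than scattered prose. The field names below are illustrative, not mandated by the Act; the point is to separate what the model provider supplied from what you validated yourself.

```python
from dataclasses import dataclass, field

# Hypothetical boundary-documentation record. Field names are
# illustrative choices, not terms from the Act.
@dataclass
class BoundaryRecord:
    model_name: str
    model_version: str                                   # pinned snapshot in use
    provider_docs: list[str] = field(default_factory=list)   # model cards, usage docs
    own_tests: list[str] = field(default_factory=list)       # validation you conducted
    update_policy: str = "pin-and-review"                    # how model updates are adopted
    model_level_risks: list[str] = field(default_factory=list)
    system_level_risks: list[str] = field(default_factory=list)
```

Keeping this record under version control gives you a dated trail of which model version, documentation, and test results backed each release of your system.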

This boundary documentation serves both compliance purposes and practical risk management. When something goes wrong, knowing where the responsibility lies helps you respond appropriately — escalating to the model provider for model-level issues while addressing system-level issues directly.

The GPAI wrapper landscape is large and growing. The Act’s message to this market is clear: building on someone else’s model doesn’t delegate your compliance obligations. You are the provider of your AI system, and you own its compliance.
