🗺 TL;DR — Read This First:
- General-Purpose AI (GPAI) models have been subject to transparency and copyright obligations since August 2, 2025.
- When a GPAI model is integrated into a product that serves a high-risk use case, the entire compliance framework — not just transparency — applies to that deployment.
- This post maps the transition journey for product teams using GPAI models, including what changes as you move from basic compliance to full high-risk governance.
Here’s the compliance scenario I see playing out repeatedly in 2026: a product team has been using a large language model API for 18 months. They know they have basic GPAI transparency obligations. They’ve added a disclosure to their terms of service. They think they’re done.
What they haven’t checked is whether their specific application — the way they’ve built the product, the decisions it influences, the people it affects — has transformed their “GPAI transparency” situation into a full “high-risk AI governance” obligation. And in many cases, it has.
This post is the roadmap for navigating that transition: understanding where GPAI obligations end, where high-risk obligations begin, and how to build a compliance architecture that works across both. For the full regulatory foundation, read our Ultimate Guide to EU AI Act Compliance (2026 Edition).
What Are GPAI Models, and Why Do They Have Their Own Rules?
The EU AI Act introduced the concept of “General-Purpose AI Models” (GPAI models) to address a category of AI that didn’t fit neatly into the existing high-risk framework: large foundation models like GPT-4, Claude, Gemini, and Llama that are trained on vast datasets and can perform a wide range of tasks across different domains.
The challenge for regulators is that a GPAI model doesn’t have a single, fixed use case at the point of training. Its risk profile is largely determined by how it’s deployed. The Act therefore creates a separate, dedicated compliance track for GPAI model providers (the organisations that train and release these models), while treating organisations that build products on top of GPAI models as either deployers or — if they substantially modify the model for a specific use case — providers of a downstream AI system.
The GPAI Compliance Stack: From Basic to Systemic Risk
GPAI obligations operate on two levels, defined in Chapter V of the Act:
| Obligation Level | Who It Applies To | Key Requirements | In Force Since |
|---|---|---|---|
| Standard GPAI | All GPAI model providers placing models on the EU market | Technical documentation (Annex XI), a copyright-law compliance policy, a public training-data summary, known or estimated energy consumption, and information and documentation for downstream providers (Annex XII) | August 2, 2025 |
| Systemic Risk GPAI | GPAI models with training compute exceeding 10²⁵ FLOPs | All standard obligations PLUS: model evaluation and adversarial testing, systemic risk assessment and mitigation, serious incident reporting, cybersecurity measures | August 2, 2025 |
The 10²⁵ FLOPs threshold for the systemic-risk presumption is set in Article 51(2) of the Act itself, and the Commission is empowered to amend it by delegated act, informed by the European AI Office's monitoring. It currently captures only the largest frontier models, but as compute costs fall and model capabilities evolve, the systemic-risk category is likely to expand over time.
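For a rough sense of where that threshold sits, here is a back-of-envelope sketch using the widely cited approximation that training compute ≈ 6 × parameters × training tokens. The model figures are illustrative assumptions, not any real model's specification, and the heuristic is not the Act's measurement methodology:

```python
# Back-of-envelope check against the 10**25 FLOPs systemic-risk threshold.
# Uses the common heuristic: training FLOPs ~= 6 * parameters * training tokens.
# The parameter and token counts below are illustrative assumptions.

SYSTEMIC_RISK_THRESHOLD_FLOPS = 1e25

def estimated_training_flops(parameters: float, training_tokens: float) -> float:
    """Rough training-compute estimate via the 6*N*D approximation."""
    return 6 * parameters * training_tokens

# A hypothetical 70B-parameter model trained on 15 trillion tokens:
flops = estimated_training_flops(parameters=70e9, training_tokens=15e12)
print(f"Estimated compute: {flops:.2e} FLOPs")                               # ~6.30e+24
print("Systemic-risk presumption:", flops > SYSTEMIC_RISK_THRESHOLD_FLOPS)  # False
```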
The Critical Distinction: GPAI Transparency vs. High-Risk Governance
This is the section most product teams need to read carefully.
GPAI obligations as described above apply to the model provider — the organisation that trains and releases the foundation model. If you’re building a product that uses the OpenAI API, or Anthropic’s API, or any other third-party GPAI model, you are a downstream deployer, not a GPAI provider. Your GPAI obligations as a deployer are primarily to:
- Use the model in accordance with the provider’s usage policies
- Maintain appropriate technical and organisational measures
- Provide transparency to end users where required
But here is where the transition happens: the moment your application of that GPAI model serves a high-risk use case under Annex III, you are no longer just a GPAI deployer — you are a provider of a high-risk AI system.
The GPAI model’s compliance with its own obligations does not transfer to your product. You need your own full high-risk compliance programme.
Is Your GPAI Deployment Triggering High-Risk Status?
Work through this decision logic for every GPAI-integrated product you operate (a minimal code sketch of the same triage follows the list):
- What task is the GPAI performing in your product? Not what the model can do in general — what specific function does it serve in your product?
- Does that task’s output influence a consequential decision about a person? See the Annex III categories: employment, education, credit and access to essential services, critical infrastructure and safety components, law enforcement, migration.
- Is the AI output presented to the human decision-maker in a way that materially shapes their decision? Even a “just a recommendation” framing doesn’t automatically exclude you from Annex III — it depends on how prominent, how structured, and how acted-upon the recommendation is in practice.
- Do you have the ability to monitor, audit, and modify the model’s behaviour in your deployment context? If yes, you have the technical control expected of a provider. If no, your supplier relationship may need restructuring.
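To make the triage concrete, here is a minimal sketch of the same logic as code. The field names, questions, and output messages are our own framing of the steps above, not statutory text; treat the result as a prompt for legal review, not a legal determination:

```python
# Minimal sketch of the high-risk triage logic above, expressed as code.
# Field names and messages are our own framing, not statutory language.

from dataclasses import dataclass

@dataclass
class GPAIDeployment:
    task_in_product: str                      # what the model actually does here
    influences_consequential_decision: bool   # Annex III-style decision about a person?
    output_shapes_human_decision: bool        # does the output materially steer the human?
    provider_level_control: bool              # can you monitor, audit, modify behaviour?

def triage(d: GPAIDeployment) -> str:
    if not d.influences_consequential_decision:
        return "Likely outside Annex III - keep GPAI deployer obligations under review."
    if d.output_shapes_human_decision:
        if d.provider_level_control:
            return "Likely high-risk provider status - start the transition roadmap."
        return "Likely high-risk, and the supplier relationship may need restructuring."
    return "Borderline - document how the output is actually used and seek legal review."

print(triage(GPAIDeployment(
    task_in_product="rank job applicants",
    influences_consequential_decision=True,
    output_shapes_human_decision=True,
    provider_level_control=True,
)))
```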
To understand which Annex III categories your use case may fall into, see our detailed guide: Is Your AI High-Risk? A Guide to Annex III Classifications.
The 2026 Transition Roadmap: GPAI to High-Risk Governance
For product teams that have determined their GPAI deployment triggers high-risk status, here is the practical transition roadmap.
Phase 1: Documentation Infrastructure (Months 1–3)
The most significant operational change when moving from GPAI transparency to high-risk governance is the documentation requirement. Under Article 11 and Annex IV, your Technical File must now document not just what the GPAI model does in general, but what your specific application does — how you’ve configured it, what data it receives, what outputs it produces, and how those outputs are used.
Key actions in this phase:
- Document the system design: inputs, outputs, decision logic, integration points
- Record the training and fine-tuning approach (if you’ve customised the model)
- Create a dataset provenance record for any data you’ve used in fine-tuning or retrieval-augmented generation
- Establish a version control system for your model configuration — every change needs to be tracked (a sketch of such a record follows this list)
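As an illustration of that last point, here is a minimal sketch of a versioned, content-hashed configuration record. The field names and hashing approach are assumptions for illustration, not an official Annex IV schema:

```python
# Minimal sketch of a versioned model-configuration record for the Technical File.
# Field names are illustrative assumptions, not an official Annex IV schema.

import hashlib
import json
from datetime import datetime, timezone

def config_record(model_id: str, api_version: str, system_prompt: str,
                  parameters: dict) -> dict:
    """Build a timestamped, content-hashed snapshot of the deployed configuration."""
    payload = {
        "model_id": model_id,            # the upstream provider's model identifier
        "api_version": api_version,      # pin the version you actually tested against
        "system_prompt": system_prompt,  # the prompt is part of the system design
        "parameters": parameters,        # temperature, max tokens, tool config, ...
    }
    canonical = json.dumps(payload, sort_keys=True).encode()
    payload["config_hash"] = hashlib.sha256(canonical).hexdigest()
    payload["recorded_at"] = datetime.now(timezone.utc).isoformat()
    return payload

record = config_record(
    model_id="example-llm-v2",
    api_version="2026-01-15",
    system_prompt="You assist loan officers. Never output a final decision.",
    parameters={"temperature": 0.2, "max_tokens": 512},
)
print(record["config_hash"][:12], record["recorded_at"])
```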
Unorma’s Document Generator (F06) automates the majority of Annex IV field population directly from your system’s technical metadata. For a walkthrough of what the full Article 11 Technical File requires, see our guide to Article 11 automation.
Phase 2: Risk Management System (Months 2–4)
Article 9 requires a risk management system that is not a one-time assessment but a continuous process across the entire lifecycle of the AI system. For GPAI-based products, this is more complex than for traditional ML systems because the underlying model’s behaviour can change with API updates, prompt engineering changes, or context shifts.
Your risk management system needs to capture (a minimal risk-register sketch follows the list):
- Known and foreseeable risks associated with the use case
- Residual risks after mitigation measures are applied
- Specific risks arising from the use of a GPAI model (hallucination risk, context limitation risk, prompt injection risk)
- Testing results across reasonably foreseeable conditions, including edge cases and adversarial inputs
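A minimal sketch of a risk-register entry covering those four elements might look like the following; the scoring scale and field names are illustrative assumptions, not an Article 9 template:

```python
# Minimal sketch of a risk-register entry covering the Article 9 elements above.
# The category labels and 1-5 scoring scale are illustrative assumptions.

from dataclasses import dataclass, field

@dataclass
class RiskEntry:
    risk_id: str
    description: str               # known or foreseeable risk for this use case
    category: str                  # e.g. "hallucination", "prompt_injection", "context_limit"
    likelihood: int                # 1 (rare) .. 5 (frequent)
    severity: int                  # 1 (negligible) .. 5 (critical)
    mitigations: list[str] = field(default_factory=list)
    residual_likelihood: int = 0   # re-scored after mitigations are applied
    residual_severity: int = 0
    test_evidence: list[str] = field(default_factory=list)  # links to test runs

entry = RiskEntry(
    risk_id="R-014",
    description="Model fabricates a policy clause when the retrieved context is empty",
    category="hallucination",
    likelihood=4, severity=4,
    mitigations=["refuse-if-no-context guardrail", "citation check on output"],
    residual_likelihood=2, residual_severity=3,
    test_evidence=["adversarial-suite run 2026-03-02"],
)
print(f"{entry.risk_id}: residual score {entry.residual_likelihood * entry.residual_severity}")
```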
Phase 3: Human Oversight Implementation (Months 3–5)
Article 14 requires that high-risk AI systems enable human operators to understand and monitor the system, intervene or override when necessary, and not blindly defer to AI outputs. For GPAI-powered products, this often requires explicit product redesign — building in override mechanisms, confidence indicators, and audit trails that weren’t necessary under the transparency-only GPAI regime.
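To make that concrete, here is a minimal sketch of a human-in-the-loop gate that surfaces a confidence indicator, forces an explicit operator decision, and writes an audit-trail entry. The function name and flow are our own illustration, not an Article 14 specification:

```python
# Minimal sketch of a human-in-the-loop gate for Article 14-style oversight.
# Names and flow are illustrative assumptions, not a prescribed design.

import time

def present_for_review(recommendation: str, confidence: float, audit_log: list) -> str:
    """Show the AI recommendation to an operator and record an explicit decision.

    There is no default path that silently adopts the model output: the operator
    must actively accept, override, or escalate, and the choice is audit-logged.
    """
    entry = {
        "timestamp": time.time(),
        "recommendation": recommendation,
        "confidence": confidence,  # surfaced so the operator can calibrate trust
    }
    choice = input(f"AI suggests: {recommendation} (confidence {confidence:.0%}). "
                   "[a]ccept / [o]verride / [e]scalate: ").strip().lower()
    # Unrecognised input escalates rather than accepts: fail safe, not open.
    entry["operator_action"] = {"a": "accepted", "o": "overridden"}.get(choice, "escalated")
    audit_log.append(entry)
    return entry["operator_action"]

# Example (interactive):
# audit_log: list = []
# present_for_review("flag application for manual review", 0.74, audit_log)
```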
Our post on designing human oversight to meet Article 14 standards covers the specific technical requirements in detail.
Phase 4: Post-Market Monitoring (Months 4–6)
Under Article 72, high-risk AI providers must have a post-market monitoring plan in operation from the day the system is placed on the market. For GPAI-based systems, this is particularly important because the underlying model’s behaviour may shift between API versions, and your compliance documentation needs to reflect the system as it actually operates — not as it was designed at a single point in time.
Post-market monitoring needs to capture: performance metrics across defined accuracy and robustness thresholds, any serious incidents or near-misses, and changes in user behaviour that may indicate the system is being used in ways outside its intended scope. See our dedicated post on managing model drift and post-market monitoring requirements for the methodology.
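A minimal sketch of the threshold-checking part of such a plan might look like this; the metric names and threshold values are illustrative assumptions, to be replaced by whatever your Technical File actually declares:

```python
# Minimal sketch of a post-market monitoring check: compare live metrics to the
# accuracy/robustness thresholds declared in your documentation and flag drift.
# Metric names and threshold values are illustrative assumptions.

DECLARED_THRESHOLDS = {
    "groundedness_rate": 0.95,   # share of outputs supported by retrieved sources
    "override_rate_max": 0.20,   # operator overrides above this suggest degraded outputs
}

def check_monitoring_window(metrics: dict) -> list[str]:
    """Return an alert for any metric breaching its declared threshold."""
    alerts = []
    if metrics["groundedness_rate"] < DECLARED_THRESHOLDS["groundedness_rate"]:
        alerts.append("Groundedness below declared threshold - investigate model/API change.")
    if metrics["override_rate"] > DECLARED_THRESHOLDS["override_rate_max"]:
        alerts.append("Override rate elevated - possible drift or out-of-scope use pattern.")
    return alerts

weekly = {"groundedness_rate": 0.91, "override_rate": 0.27}
for alert in check_monitoring_window(weekly):
    print("ALERT:", alert)
```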
The Open-Source GPAI Model Question
A significant number of product teams are building on open-source GPAI models — Llama, Mistral, and similar releases. The EU AI Act provides a limited exception for open-source GPAI models under Article 53(2): providers of open-source GPAI models that make their weights publicly available are generally exempt from the standard GPAI documentation and transparency obligations, unless the model qualifies as systemic risk.
However — and this is critical — this exception applies to the open-source model provider, not to you. When you take an open-source GPAI model and build a product on it, particularly for a high-risk use case, you become the provider of that application and all the associated obligations fall to you. The open-source exception is not a compliance shield for downstream product companies.
Supplier Agreements and Contractual Due Diligence
One practical implication of the GPAI compliance framework that many product teams overlook: your contractual relationship with your GPAI model provider needs to be structured to support your compliance obligations.
Under Article 53(1)(b), GPAI providers must draw up and make available information and documentation to downstream providers who integrate the model into their own AI systems, including sufficient technical information for those providers to conduct their own compliance assessments. If your API agreement doesn’t include access to model cards, technical documentation, or usage policies, you may find yourself unable to complete your own Annex IV Technical File.
Review your supplier agreements against this checklist:
- Does the provider supply a model card or technical disclosure document?
- Are you permitted to conduct bias testing and adversarial testing on the model’s outputs?
- Does the provider notify you of significant model updates that could affect your compliance documentation?
- Do you have data processing agreements in place that cover the use of EU personal data in API calls?
For the full vendor due diligence framework, read our post on AI vendor due diligence in the 2026 procurement landscape.
Frequently Asked Questions
Will the GPAI rules change after 2026?
The European AI Office is mandated to review the GPAI provisions — including the systemic risk threshold and the open-source exemption — on an ongoing basis. The compute threshold for systemic risk designation is expected to be revisited as frontier model training becomes more accessible. Monitor the European AI Office’s published guidance at the European Commission’s AI policy hub for updates.
How does GPAI compliance interact with GDPR when personal data is sent to the model via API?
Sending personal data to a GPAI model API involves data processing that is independently regulated under GDPR. You need a valid legal basis for that processing, appropriate data processing agreements with the API provider, and data minimisation measures (e.g., anonymisation or pseudonymisation before API calls where possible). The DPO’s role in managing AI governance alongside GDPR is explored in our post on the DPO’s expanding remit in the AI era.
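As a minimal illustration of pseudonymisation before an API call, here is a sketch that replaces obvious identifiers with stable placeholders. A regex pass like this is illustrative only; production systems should use a dedicated PII-detection step and keep the mapping table under strict access control:

```python
# Minimal sketch of pseudonymising obvious identifiers before a GPAI API call.
# Regex-only detection is illustrative; real deployments need proper PII tooling.

import re

def pseudonymise(text: str, mapping: dict) -> str:
    """Replace e-mail addresses and phone-like numbers with stable placeholders."""
    def _swap(match: re.Match, prefix: str) -> str:
        value = match.group(0)
        # Reuse the same token for repeated identifiers, so context is preserved.
        return mapping.setdefault(value, f"<{prefix}_{len(mapping)}>")
    text = re.sub(r"[\w.+-]+@[\w-]+\.[\w.]+", lambda m: _swap(m, "EMAIL"), text)
    text = re.sub(r"\+?\d[\d\s()-]{7,}\d", lambda m: _swap(m, "PHONE"), text)
    return text

mapping: dict = {}
prompt = pseudonymise("Contact jan@example.com or +32 478 12 34 56 about the claim.",
                      mapping)
print(prompt)  # identifiers replaced before the text leaves your infrastructure
# 'mapping' lets you re-identify placeholders in the response, locally only.
```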
What is the compliance obligation for fine-tuned GPAI models?
If you substantially modify a GPAI model through fine-tuning — changing its capabilities, its domain specialisation, or its safety properties — you may be considered a provider of a new or modified AI system rather than purely a deployer. “Substantial modification” is a key threshold concept in the Act (Article 3(23)), and it is subject to ongoing European AI Office guidance. For safety-critical use cases, err on the side of treating fine-tuning as a substantial modification until clearer guidance is issued.
We use retrieval-augmented generation (RAG) with our own proprietary data. Does that make us a GPAI model provider?
Almost certainly not. Adding RAG over a foundation model does not constitute “training” a GPAI model — you are configuring and deploying an existing model, not creating a new one. You remain a deployer of the underlying model (with the deployer’s obligations) and a provider of the overall system (with any applicable high-risk obligations, depending on your use case).
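A minimal sketch makes the point: in a RAG pipeline the model’s weights are never touched, because your documents are retrieved and passed as inference-time context. The `retrieve` and `call_gpai_api` functions below are hypothetical stand-ins, not any vendor’s SDK:

```python
# Minimal RAG sketch: the foundation model's weights are never modified.
# 'retrieve' and 'call_gpai_api' are hypothetical stand-ins, not a vendor SDK.

def retrieve(query: str, documents: list[str], top_k: int = 2) -> list[str]:
    """Toy retrieval: rank documents by naive keyword overlap with the query."""
    terms = set(query.lower().split())
    scored = sorted(documents, key=lambda d: -len(terms & set(d.lower().split())))
    return scored[:top_k]

def answer_with_rag(query: str, documents: list[str]) -> str:
    context = "\n".join(retrieve(query, documents))
    prompt = f"Answer using only this context:\n{context}\n\nQuestion: {query}"
    # The model is configured via the prompt at inference time: no training step,
    # no weight update, so no new GPAI model is created.
    return call_gpai_api(prompt)

def call_gpai_api(prompt: str) -> str:
    return f"[model response to {len(prompt)} chars of prompt]"  # stub for the sketch

print(answer_with_rag("What is our refund window?",
                      ["Refunds are accepted within 30 days.", "Shipping takes 5 days."]))
```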
If the GPAI model provider is compliant, do I still need my own compliance programme?
Yes, absolutely. GPAI model provider compliance is necessary but not sufficient for downstream product compliance. The GPAI provider’s obligations cover the model itself — how it was trained, what its capabilities and limitations are, and what the intellectual property situation is. Your obligations as a downstream provider or deployer cover the specific application: the use case, the affected individuals, the human oversight mechanisms, and the post-market monitoring. One does not substitute for the other.
What exactly counts as a “General-Purpose AI Model” under the EU AI Act?
Article 3(63) defines a GPAI model as an AI model trained on large amounts of data using self-supervision at scale, displaying significant generality, and capable of competently performing a wide range of distinct tasks. In practice, this captures large foundation models like GPT-4, Claude, Gemini, Llama, and Mistral. Traditional ML models trained for specific tasks — even sophisticated ones — are not generally GPAI models under this definition.

Jasper Claes is a Compliance Manager and consultant specializing in AI governance for high-scale technology companies operating in regulated markets. He advises product and legal teams on implementing practical compliance frameworks aligned with evolving regulations such as the EU AI Act. Through his writing, Jasper focuses on translating complex regulatory requirements into clear, actionable guidance for teams building and deploying AI systems.
