⚡ TL;DR — Critical Points for LLM Users:
- Using a third-party LLM (GPT-4, Claude, Gemini, Llama) in your product does not transfer the EU AI Act compliance burden to the model provider. You carry your own obligations as a deployer — and potentially as a provider.
- If your LLM-powered application influences high-risk decisions (employment, credit, education, safety), you are not just a deployer — you are a provider of a high-risk AI system with full Article 8–15 obligations.
- The compliance gap between “I’m just using an API” and “I understand my actual legal obligations” is where most enforcement exposure sits in 2026.
The phrase I hear most from product teams using LLM APIs: “The compliance is OpenAI’s problem, not ours — we’re just using their API.” This is one of the most consequential misunderstandings in the 2026 AI compliance landscape. It is both legally incorrect and commercially dangerous.
The EU AI Act creates a carefully layered obligation structure that assigns compliance responsibilities at each level of the AI supply chain. The LLM provider has obligations for the model. You — the team that builds a product on top of that model — have entirely separate obligations for the system you’ve built. Neither set of obligations substitutes for the other.
This post explains exactly what you owe as a deployer of a third-party LLM, when that deployer status escalates to provider status, and what you need to do about it. For the full provider vs. deployer framework, see our post on which EU AI Act obligations apply to you.
Understanding Your Position in the LLM Supply Chain
When you use a third-party LLM API, you’re operating within a three-tier supply chain that the EU AI Act explicitly addresses:
| Tier | Entity | Example | Primary Obligations |
|---|---|---|---|
| Tier 1 | GPAI Model Provider | OpenAI, Anthropic, Google, Meta | Chapter V GPAI obligations: technical documentation, copyright compliance, training data transparency, downstream operator policies |
| Tier 2 | Application Provider / Deployer | You — the company building on the LLM API | Article 26 deployer obligations plus full Article 8–15 provider obligations if the application is high-risk |
| Tier 3 | End Users / Customers | Your customers using your product | May have deployer obligations if they are professional deployers of your system in a high-risk context |
You sit at Tier 2 in this structure. The GPAI provider’s obligations — their model documentation, copyright compliance, and downstream policies — do not flow through to you. They are that provider’s obligations to meet. What you receive from the GPAI provider is: a compliant model (you hope), documentation about the model’s capabilities and limitations, and usage policies that define how you can and cannot use the model.
Your obligations start with what you build on top of the model.
Your Core EU AI Act Deployer Obligations When Using Third-Party LLMs
If your LLM application serves a use case that is not classified as high-risk under Annex III, your obligations are relatively light — but not zero.
Obligation 1: Use the Model Within Its Documented Intended Use
Article 26(1) requires deployers to use systems in accordance with the provider’s instructions and intended use documentation. For LLM APIs, this means operating within the provider’s published usage policies — and not deploying the model for use cases the provider has explicitly prohibited.
OpenAI’s usage policies, Anthropic’s acceptable use policy, and Google’s Gemini API terms all contain specific prohibited use cases. If your application involves those use cases, you are in breach of Article 26(1) — and potentially in breach of Article 5’s prohibited AI practices if the use case crosses an absolute prohibition line. Read our post on prohibited AI practices in 2026 for the Article 5 breakdown.
Obligation 2: Transparency to End Users
Article 50 requires that users interacting with an AI system — particularly a conversational AI or content-generation tool — are informed that they are interacting with AI, unless the AI nature is obvious from context. If your application uses an LLM to interact with users (chatbot, virtual assistant, AI-generated content), you must display clear disclosure.
This requirement applies to you as the application operator — not to the LLM provider. The LLM provider discloses at the model level; you must disclose at the user interface level.
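One practical way to meet the interface-level disclosure duty is to make the disclosure a structural part of every AI response, so no code path can render an AI message without it. The sketch below is a minimal, hypothetical illustration (the class and label text are assumptions, not prescribed by the Act):

```python
# Minimal sketch (hypothetical names): every AI-generated chat response
# carries a user-facing disclosure, so the UI layer cannot render a
# message without it.
from dataclasses import dataclass

AI_DISCLOSURE = "You are chatting with an AI assistant."

@dataclass
class ChatResponse:
    text: str
    ai_disclosure: str  # rendered in the UI alongside the message

def wrap_llm_output(raw_text: str) -> ChatResponse:
    """Attach the Article 50 disclosure to the response payload."""
    return ChatResponse(text=raw_text, ai_disclosure=AI_DISCLOSURE)

resp = wrap_llm_output("Here is your summary...")
```

The design point is that disclosure lives in the data model, not in a template someone can forget to include.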
Obligation 3: AI Literacy for Your Team
Article 4 requires that all staff who work with or are affected by AI systems receive appropriate AI literacy training. For teams using LLMs to build products, this means: product managers who design the prompts, engineers who implement the API integration, customer success teams who troubleshoot AI outputs, and any team member who makes decisions informed by LLM outputs all need training appropriate to their role and the AI system’s risk level.
Obligation 4: Data Protection Compliance for LLM API Calls
Sending personal data to a third-party LLM API is a data processing activity governed by GDPR independently of the AI Act. Before sending any personal data to an external LLM:
- Establish a valid legal basis for the processing
- Execute a Data Processing Agreement (DPA) with the LLM provider
- Assess whether the processing transfers personal data outside the EU (most major LLM providers have EU data processing options — verify this for your specific configuration)
- Implement data minimisation — only send data necessary for the task, ideally with PII anonymised or pseudonymised before the API call
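The minimisation step can be enforced in code before any API call leaves your infrastructure. The sketch below is illustrative only: the regex patterns and placeholder scheme are assumptions, and a production deployment should use a vetted PII-detection library rather than hand-rolled patterns.

```python
# Illustrative sketch: pseudonymise obvious PII patterns before an LLM
# API call. The mapping stays server-side so outputs can be
# re-identified locally, never by the LLM provider.
import re

PII_PATTERNS = {
    "EMAIL": re.compile(r"[\w.+-]+@[\w-]+\.[\w.]+"),
    "PHONE": re.compile(r"\+?\d[\d \-]{7,}\d"),
    "IBAN": re.compile(r"\b[A-Z]{2}\d{2}[A-Z0-9]{11,30}\b"),
}

def pseudonymise(text: str) -> tuple[str, dict[str, str]]:
    """Replace matches with stable placeholders; return the mapping
    for local re-identification of the model's response."""
    mapping: dict[str, str] = {}
    for label, pattern in PII_PATTERNS.items():
        for i, match in enumerate(pattern.findall(text)):
            placeholder = f"[{label}_{i}]"
            mapping[placeholder] = match
            text = text.replace(match, placeholder)
    return text, mapping

safe_text, mapping = pseudonymise(
    "Contact jane.doe@example.com on +32 470 12 34 56"
)
# safe_text now contains placeholders; 'mapping' never leaves your servers
```

The same pattern works symmetrically on the way back: substitute the placeholders in the model's output before showing it to the user.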
For more on the intersection of AI governance and data protection, see our post on the DPO’s new role in managing AI governance alongside GDPR.
When LLM Deployer Status Escalates to High-Risk Provider Status
Here is where most teams have the most significant compliance gap. Using an LLM API doesn’t make you a high-risk AI provider. But using that API to build a specific product might — depending entirely on what the product does.
The Escalation Test: Does Your LLM Application Influence High-Risk Decisions?
Ask this question about your LLM application: does its output directly or materially influence a consequential decision about a person in any of the Annex III categories?
- An LLM that summarises internal meeting notes: Not high-risk.
- An LLM that analyses CVs and ranks candidates for a hiring manager: High-risk (Annex III Category 4).
- An LLM that helps a customer service agent respond to queries: Likely not high-risk.
- An LLM that analyses customer financial behaviour and recommends credit limits: High-risk (Annex III Category 5).
- An LLM that generates personalised learning content: Likely not high-risk.
- An LLM that assesses student writing and assigns scores used for academic progression: High-risk (Annex III Category 3).
The key variable is not the technology (LLM) — it’s the use case (what decision does the output inform, and what are the consequences for the person affected).
What High-Risk LLM Application Providers Must Do
If your LLM application crosses into high-risk territory, you acquire the full set of provider obligations under Articles 8–15, plus the provider duties in Articles 16–17. The challenges specific to LLM-based systems, as distinct from traditional ML models, are:
Technical Documentation Challenges for LLM-Based Systems
Article 11 and Annex IV require a Technical File that describes your system’s design, training data, testing results, and risk management. For LLM-based systems, this creates specific documentation challenges:
- The underlying model is a black box. You don’t have full access to the training data or architecture of the model you’re using. Your Technical File must document your system’s design and your use of the third-party model — and must reference the model provider’s published documentation for the aspects you don’t control. This requires the model provider to publish adequate technical information — which is now an obligation under their GPAI Chapter V duties.
- Prompt engineering is part of your system design. Your system prompts, few-shot examples, and prompt chaining logic are core components of your AI system’s design and must be documented. Changes to prompts that materially affect the system’s risk profile may constitute substantial modifications requiring Technical File updates.
- Output variability. LLMs produce non-deterministic outputs — the same input can produce different results across runs. Your technical documentation must address how this variability is managed, what bounds you’ve established for acceptable output variation, and how the system handles unexpected outputs.
Risk Management for LLM-Specific Risks
Article 9’s risk management system must address the specific risk profile of LLM-based applications. LLM-specific risks that must be documented include:
- Hallucination risk: The model produces confident-sounding but factually incorrect outputs. Document your hallucination rate assessment methodology and the safeguards in place to prevent hallucinated outputs from influencing high-stakes decisions.
- Prompt injection risk: Malicious content in user inputs or retrieved documents manipulates the model’s behaviour. Document your prompt architecture, input sanitisation approach, and prompt injection testing results. The OWASP Top 10 for LLM Applications is the standard reference for LLM-specific security risks and provides a useful taxonomy for your risk documentation.
- Model update risk: The LLM provider updates the model without notice, changing your system’s behaviour without you changing anything. Document your model version management approach and the monitoring you have in place to detect performance changes following upstream model updates. For the methodology, see our post on managing model drift and post-market monitoring.
- Context window limitations: Long documents or conversations that exceed the model’s context window may produce degraded or inconsistent outputs. Document the context management approach and how edge cases are handled.
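For the model update risk in particular, a baseline regression check over a fixed evaluation set is a practical monitoring mechanism. The sketch below uses exact string comparison for simplicity; with non-deterministic outputs you would more realistically compare embeddings, scores, or structured fields. All names here are assumptions.

```python
# Sketch of upstream-change detection: pin the model version in every
# call, and compare current outputs against a stored baseline on a
# fixed evaluation set. Exact-match comparison is a simplification.
PINNED_MODEL = "example-model-2026-01-15"  # hypothetical version string

def detect_drift(eval_prompts: list[str],
                 baseline: dict[str, str],
                 run_model) -> list[str]:
    """Return prompts whose output no longer matches the recorded
    baseline; any hit should trigger a compliance review."""
    changed = []
    for prompt in eval_prompts:
        current = run_model(prompt, model=PINNED_MODEL)
        if current != baseline.get(prompt):
            changed.append(prompt)
    return changed
```

Running this after every announced (or suspected) upstream change gives you dated evidence for post-market monitoring and for deciding whether a substantial modification assessment is needed.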
Human Oversight Requirements for LLM Applications
Article 14’s human oversight requirements are particularly important for LLM-based systems because of the hallucination and output variability risks. Your product design must include:
- Mechanisms that prevent users from treating LLM outputs as authoritative without critical evaluation — confidence indicators, uncertainty disclosures, or mandatory human review steps for high-stakes outputs
- Override and correction capabilities — users must be able to reject or modify AI outputs and record their alternative decision
- Clear instructions to deployers about what oversight they must maintain and what outputs require human review before acting
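The override-and-record requirement can be made concrete with a review record that captures the AI suggestion and the human decision together. This is a hypothetical sketch of one possible data model, not a prescribed format:

```python
# Hypothetical sketch of an Article 14-style review record: the AI
# suggestion, the human decision, and whether the reviewer overrode it
# are logged together so oversight is auditable.
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class ReviewRecord:
    ai_output: str
    reviewer_id: str
    final_decision: str
    overridden: bool = field(init=False)
    reviewed_at: str = field(init=False)

    def __post_init__(self):
        # Derived, not supplied: an override is any divergence between
        # the AI suggestion and the human's final decision.
        self.overridden = self.final_decision != self.ai_output
        self.reviewed_at = datetime.now(timezone.utc).isoformat()

record = ReviewRecord(ai_output="reject", reviewer_id="r-17",
                      final_decision="approve")
```

Deriving the `overridden` flag rather than asking the reviewer to set it keeps the audit trail honest: the log reflects what actually happened, not what someone remembered to tick.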
For the implementation guide, see our post on designing Article 14 compliant human oversight.
Reviewing Your LLM Provider Contracts for Compliance Support
Your supplier agreement with your LLM provider needs to support your compliance obligations. Check your current agreement for:
| Contract Element | Why You Need It | If It’s Missing |
|---|---|---|
| Technical documentation / model card access | You need to reference the model’s capabilities and limitations in your Technical File | You cannot complete Annex IV Section 2 without it |
| Model version notification | Upstream model changes may constitute substantial modifications to your system | Silent model updates can invalidate your conformity assessment |
| Data Processing Agreement (DPA) | Mandatory for any personal data sent in API calls under GDPR | Active GDPR violation; data protection authority exposure |
| Bias testing access | You need to conduct your own bias testing on the model’s outputs for your specific use case | Cannot complete Article 10 bias evaluation without testing access |
| Incident notification | If the model produces outputs that cause a serious incident, you need to know | Article 73 incident reporting obligations may be triggered without your knowledge |
For enterprise SaaS buyers assessing LLM API vendors, our post on AI vendor due diligence in 2026 provides the full procurement checklist.
Frequently Asked Questions
What are an AI deployer’s obligations under the EU AI Act when using third-party LLMs?
Under Article 26, deployers of high-risk AI systems must: use the system within its documented intended use; ensure human oversight by appropriately qualified personnel; monitor the system’s operation and report serious incidents; and retain the automatically generated logs for at least six months. Article 4 separately requires AI literacy training for relevant staff. For non-high-risk applications, Article 50 transparency requirements apply where the LLM interacts directly with users. Additionally, GDPR data processing obligations apply independently for any personal data sent to the LLM provider.
If OpenAI or Anthropic are GPAI-compliant, does that cover my compliance?
No. GPAI model provider compliance covers the model — the training data documentation, copyright compliance, and systemic risk measures (if applicable). It does not cover your application — the specific use case, the system you’ve built on top of the model, the decisions your application influences, the users affected, or the human oversight mechanisms in your product. These are your obligations to meet regardless of the upstream model’s compliance status.
We use RAG (retrieval-augmented generation) with our own data. Does this change our obligations?
RAG doesn’t change your classification as a deployer or provider — that’s determined by your use case and whether you’re placing the system on the market. However, RAG does affect your Technical File requirements: you must document the retrieval system, the knowledge base contents and provenance, the chunking and embedding methodology, and the retrieval quality measures. If your RAG knowledge base contains personal data, that creates additional GDPR obligations around data subject rights that interact with the AI system’s outputs.
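The RAG documentation duty is easiest to meet if provenance metadata is stored with every chunk, so the knowledge-base section of the Technical File can be generated from the index itself. The field names below are assumptions for illustration:

```python
# Illustrative sketch: store provenance metadata alongside every chunk
# so the knowledge-base documentation can be generated from the index.
from dataclasses import dataclass

@dataclass
class Chunk:
    text: str
    source_doc: str               # document of origin
    ingested_at: str              # when it entered the knowledge base
    contains_personal_data: bool  # flags GDPR-relevant chunks

def kb_summary(chunks: list[Chunk]) -> dict:
    """Aggregate provenance facts for the Technical File."""
    return {
        "documents": sorted({c.source_doc for c in chunks}),
        "chunks": len(chunks),
        "personal_data_chunks": sum(c.contains_personal_data for c in chunks),
    }
```

The personal-data flag also supports data subject rights: when a deletion request arrives, you can locate and remove the affected chunks rather than rebuilding the whole index.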
What happens if the LLM provider makes an update that breaks our compliance?
Model updates that materially change the underlying model’s behaviour may constitute substantial modifications to your high-risk AI system — requiring a reassessment of your Technical File and potentially a new conformity assessment. This is why model version pinning and proactive monitoring for upstream changes are essential architectural practices. Your supplier agreement should include notification obligations for model updates, and your post-market monitoring should include comparison runs against a baseline version after any upstream change.
We use an open-source LLM that we host ourselves. Does this change anything?
Self-hosting an open-source LLM changes your data protection profile (no third-party data processor for your API calls) and your supplier dependency risk (no upstream model updates you don’t control). It doesn’t change your AI Act obligations — if the system serves a high-risk use case, you are a provider with full Article 8–15 obligations. In fact, self-hosting may increase your documentation burden: you cannot reference the original model provider’s published documentation for aspects you’ve now assumed full control over, and you must document the model’s characteristics based on your own evaluation.
Using LLMs in a high-risk context and unsure of your obligations?
Unorma’s system classification workflow maps your specific LLM deployment against both GPAI and high-risk obligations, and generates the compliance checklist your team needs to act on immediately.

Map Your LLM Compliance Obligations →

Jasper Claes is a Compliance Manager and consultant specializing in AI governance for high-scale technology companies operating in regulated markets. He advises product and legal teams on implementing practical compliance frameworks aligned with evolving regulations such as the EU AI Act. Through his writing, Jasper focuses on translating complex regulatory requirements into clear, actionable guidance for teams building and deploying AI systems.