⚡ TL;DR — The Answer in 30 Seconds:
- Provider = you developed the AI system or substantially modified it and placed it on the market. You carry the heaviest obligations: Technical File, risk management system, conformity assessment, CE marking, EU database registration.
- Deployer = you use someone else’s AI system in a professional context. You have lighter but still significant duties: human oversight, monitoring, incident reporting, staff training.
- Many organisations are both simultaneously — provider of their own AI product to customers, deployer of third-party AI tools internally. Each role carries separate obligations.
Of all the questions I field from legal and product teams, this one comes up most consistently: “Are we a provider or a deployer?” And the answer almost always requires more than a yes or no, because the distinction isn’t always as clean as the law makes it sound.
Getting this right matters enormously. Misclassify yourself as a deployer when you’re actually a provider and you’re operating without the compliance programme the law requires. Misclassify yourself as a provider when you’re a deployer and you may be investing heavily in obligations that don’t apply to you. Either way, the consequences are real.
This post gives you the complete framework for determining which category you fall into — and what exactly you owe under each. For the broader regulatory context, start with our Ultimate Guide to EU AI Act Compliance.
How the EU AI Act Defines Provider vs. Deployer
The definitions sit in Article 3 of the EU AI Act and are precise — even if their application to real-world scenarios requires careful analysis.
A Provider (Article 3(3)) is “a natural or legal person, public authority, agency or other body that develops an AI system or a general-purpose AI model or that has an AI system or a general-purpose AI model developed and places it on the market or puts it into service under its own name or trademark, whether for payment or free of charge.”
A Deployer (Article 3(4)) is “a natural or legal person, public authority, agency or other body that uses an AI system under its own authority except where the AI system is used in the course of a personal non-professional activity.”
The decisive phrase in the Provider definition is "places it on the market or puts it into service." If you develop an AI system and offer it to third parties (customers, clients, users) under your own name, or put it into service within your own organisation, you are a provider of that system, regardless of whether you built the underlying model yourself or assembled it from third-party components. The decisive word in the Deployer definition is "uses": you are consuming an AI capability built by someone else and applying it in your operations or products.
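At a first pass, the two definitions reduce to a per-system decision rule. Here is a minimal sketch in Python; the class and function names are my own illustrations, not terms from the Act, and the real test in Articles 3(3) and 3(4) is a legal analysis, not a boolean check:

```python
from dataclasses import dataclass


@dataclass
class AISystemFacts:
    """Facts about one AI system, as a compliance analyst might record them.

    Hypothetical schema for illustration; not statutory language.
    """
    developed_or_substantially_modified: bool  # did you build or substantially modify it?
    under_own_name: bool                       # placed on the market / put into service under your brand?
    used_professionally: bool                  # used under your authority in a professional context?


def classify_role(facts: AISystemFacts) -> str:
    """Return the role an organisation holds for this one system.

    Note the classification is per system: an organisation that is a
    provider of system A can simultaneously be a deployer of system B.
    """
    if facts.developed_or_substantially_modified and facts.under_own_name:
        return "provider"   # Article 3(3)
    if facts.used_professionally:
        return "deployer"   # Article 3(4)
    return "out of scope"   # e.g. personal, non-professional use
```

The point the sketch makes is structural: the question is asked once per AI system, never once per organisation, which is why dual roles are so common.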
What Are AI Provider Obligations Under the EU AI Act?
Providers of high-risk AI systems carry the full weight of Chapter III compliance. Article 16 lists their obligations, and they are substantial:
| Provider Obligation | Governing Article | What It Requires | Deadline |
|---|---|---|---|
| Risk Management System | Article 9 | Continuous, iterative process identifying and mitigating risks across the system lifecycle | Before market placement |
| Data Governance | Article 10 | Training data quality, representativeness, bias evaluation, personal data protection | Before market placement |
| Technical Documentation (Annex IV) | Article 11 | Complete Technical File covering every Annex IV section; kept current; retained for 10 years | Before market placement |
| Logging Capability | Article 12 | Automatic logging of system operation; 6-month minimum retention | Before market placement |
| Transparency to Deployers | Article 13 | Instructions for use, capability/limitation disclosure, human oversight guidance | Before market placement |
| Human Oversight Design | Article 14 | Technical controls enabling human operators to monitor, understand, and override the system | Before market placement |
| Accuracy & Cybersecurity | Article 15 | Documented performance metrics; adversarial robustness testing; cybersecurity architecture | Before market placement |
| Quality Management System | Article 17 | Documented QMS covering design, data, risk management, monitoring, and incident response | Before market placement |
| Conformity Assessment | Article 43 | Internal self-assessment or Notified Body assessment; Declaration of Conformity; CE marking | Before market placement |
| EU AI Database Registration | Article 71 | Register system in publicly accessible EU AI database | Before market placement |
| Post-Market Monitoring | Article 72 | Systematic monitoring of in-production performance; incident reporting; Technical File updates | Ongoing from market placement |
For the step-by-step process for meeting these obligations before August 2026, use our 6-Month Readiness Checklist. For the technical documentation specifics, read our Article 11 and Annex IV pillar guide.
What Are AI Deployer Obligations Under the EU AI Act?
Deployers of high-risk AI systems have significantly lighter obligations than providers — but “lighter” does not mean trivial. Article 26 sets out deployer duties, and several of them require genuine operational investment.
As a deployer of a high-risk AI system, you must:
- Use the system in accordance with the provider’s instructions (Article 26(1)). If you deploy a system outside its documented intended use, you may acquire provider-level obligations for the out-of-scope use case.
- Assign appropriately trained human oversight personnel (Article 26(2)). You are responsible for ensuring that the people who oversee the system in your organisation are competent to do so — this requires genuine assessment of their skills, not just a box-tick.
- Monitor the system’s operation and report serious incidents or malfunctions to the provider and to national authorities (Article 26(5)).
- Maintain logs generated by the system for at least six months where you control the logging infrastructure (Article 26(6)).
- Conduct a Fundamental Rights Impact Assessment (FRIA) before deploying the system if you are a public body, or if you operate in certain sensitive sectors (Article 27). Private sector deployers in most sectors are not directly required to conduct a FRIA, but doing so is strong governance practice.
- Inform affected individuals that they are subject to AI-assisted decisions where required by Article 50.
- Ensure AI literacy for all staff working with the system under Article 4.
The Grey Zone: When a Deployer Becomes a Provider
Article 25 creates the most important, and most frequently misunderstood, boundary in the entire provider/deployer framework. It establishes that a deployer (or distributor, importer, or other third party) becomes a provider, and acquires all provider obligations, when it substantially modifies a high-risk AI system, changes the system's intended purpose such that it becomes high-risk, or places its own name or trademark on the system.
What Counts as a Substantial Modification That Changes Your Legal Status?
Under Article 3(23), a substantial modification is “a change to the AI system after its placing on the market or putting into service which affects the compliance of the AI system with the requirements set out in this Regulation or results in a modification to the intended purpose for which the AI system has been assessed.”
In practice, these actions almost always constitute substantial modifications that convert a deployer into a provider:
- Fine-tuning a third-party model on your own proprietary data
- Changing the intended use case of the system beyond what the provider documented — using a document summarisation AI for employment decision support, for example
- Removing or bypassing safety controls built into the system by the provider
- Integrating the system into a product architecture that materially changes its risk profile relative to its documented intended use
- Building a Retrieval-Augmented Generation (RAG) layer over a GPAI model with proprietary data, where the RAG output is used for high-risk decisions
These actions, when taken by a company that started as a deployer, trigger provider-level compliance obligations — including the full Technical File, risk management system, conformity assessment, and EU database registration. Companies that make these modifications without recognising the change in legal status have acquired a provider’s compliance obligations without a provider’s compliance programme.
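Teams often encode these triggers as a pre-deployment review gate. A hedged sketch follows; the flag names are illustrative labels of my own, and the actual Article 25 determination is a legal judgment, not a boolean lookup:

```python
# Illustrative Article 25 trigger checklist. A non-empty result means the
# change set should be escalated for legal review of provider status.
SUBSTANTIAL_MODIFICATION_TRIGGERS = {
    "fine_tuned_on_own_data": "Fine-tuned a third-party model on proprietary data",
    "changed_intended_purpose": "Used the system beyond its documented intended use",
    "removed_safety_controls": "Removed or bypassed provider safety controls",
    "changed_risk_profile": "Integration materially changed the documented risk profile",
}


def triggered_provider_status(change_flags: dict[str, bool]) -> list[str]:
    """Return descriptions of the substantial-modification triggers present.

    change_flags maps trigger keys to whether the change set includes them.
    """
    return [
        description
        for key, description in SUBSTANTIAL_MODIFICATION_TRIGGERS.items()
        if change_flags.get(key, False)
    ]
```

A gate like this does not decide legal status; it ensures no modification that could flip the organisation into provider territory ships without anyone asking the question.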
Real-World Scenarios: Provider, Deployer, or Both?
Scenario 1: The SaaS HR Platform
A company builds an AI-powered candidate screening tool and sells it as a SaaS product to employers. The AI is trained on the company’s own data and operated under the company’s brand.
Classification: Provider. The company developed the AI system and places it on the market under its own name. Full Article 16 provider obligations apply to the screening tool. If the company also uses a third-party AI tool internally for its own employee performance management, it is simultaneously a Deployer of that separate system.
Scenario 2: The Bank Using a Credit Scoring API
A bank integrates a third-party credit scoring API into its loan application workflow. The bank uses the API output as one input into loan decisions made by human underwriters.
Classification: Deployer. The bank is using a system developed and placed on the market by the API provider. Article 26 deployer obligations apply. However, if the bank substantially modifies the scoring logic — adding proprietary features, re-weighting outputs, building a wrapper model — it moves toward provider status for the modified system.
Scenario 3: The Enterprise Using OpenAI for Internal Operations
A manufacturing company uses the OpenAI API to build an internal tool that analyses HR performance reviews and flags employees for promotion consideration.
Classification: Provider of a high-risk AI system. This is the scenario most companies get wrong. Even though they're "just using an API," they have built a system that uses AI for employment decisions (Annex III, Category 4) and put it into service within their own organisation. The fact that it's internal doesn't change the classification: it's still a high-risk AI system that requires a Technical File, risk management system, and conformity assessment. See our dedicated guide on using third-party LLMs and your AI Act obligations.
Scenario 4: The Reseller
A consultancy purchases an AI compliance tool from a vendor and provides it to clients as part of their service offering, under the consultancy’s own branding.
Classification: Provider. By placing the system on the market under their own brand — even though they didn't build it — the consultancy takes on provider status (Article 25(1)(a), read with the Article 3(3) definition). They must ensure the system meets all applicable requirements, which in practice requires a contractual arrangement with the original vendor to provide the underlying compliance documentation.
The Dual-Role Reality: How to Manage Both Sets of Obligations
Most technology companies are both providers and deployers — providers of their own AI product, deployers of the AI tools they use internally (HR systems, productivity tools, analytics platforms). Managing both roles requires:
- A complete AI system inventory that distinguishes provider systems (your own products) from deployer systems (third-party tools you use). See our post on why you need a centralised AI system inventory.
- Separate compliance tracks for each role — the provider track covering Articles 8–17, 43, 71, 72; the deployer track covering Article 26, Article 27 (if applicable), and Article 4.
- A vendor management programme that verifies the compliance status of third-party AI providers whose systems you deploy. See our post on AI vendor due diligence in 2026.
- Clear internal ownership — the compliance team owns both tracks but needs named owners in product/engineering (for provider obligations) and in operations/procurement (for deployer obligations).
Frequently Asked Questions
What are the fines for deployers who violate their obligations?
Deployers who fail to meet their Article 26 obligations face Tier 2 fines under Article 99: up to €15 million or 3% of global annual turnover, whichever is higher. Deployers who supply false or misleading information to market surveillance authorities face Tier 3 fines: up to €7.5 million or 1% of global annual turnover, again whichever is higher. The same fine structure applies to deployers and providers for equivalent violations.
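The "whichever is higher" mechanic matters more than the headline figures suggest, because for large companies the percentage cap dominates. A trivial sketch of the Article 99 arithmetic (the turnover figure is an invented example):

```python
def max_fine(cap_eur: float, pct_of_turnover: float, global_turnover_eur: float) -> float:
    """Article 99 cap: the fixed amount or the turnover percentage, whichever is higher."""
    return max(cap_eur, pct_of_turnover * global_turnover_eur)


# Hypothetical example: a deployer with EUR 2 billion global annual turnover.
turnover = 2_000_000_000
tier2 = max_fine(15_000_000, 0.03, turnover)  # Article 26 violations
tier3 = max_fine(7_500_000, 0.01, turnover)   # false/misleading information
# tier2 -> 60,000,000 (3% of turnover exceeds the EUR 15m fixed cap)
# tier3 -> 20,000,000
```

For any firm with turnover above €500 million, the 3% prong exceeds the €15 million fixed cap, so the exposure scales with the business, not with the statute's headline number.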
We use a third-party AI API. Does the provider’s compliance cover us?
The provider’s compliance covers their obligations for the model or system they supply. It does not cover your obligations as a deployer, nor does it cover provider-level obligations you may acquire if you substantially modify the system. You need to verify that your provider is compliant — and you need to maintain your own deployer compliance programme independently. A compliant upstream provider is a necessary condition for your compliance, not a sufficient one.
Does the EU AI Act apply to internal AI tools that we don’t sell to anyone?
Yes. The Act’s scope covers AI systems that are “put into service” — which includes internal deployment. If your organisation builds or substantially modifies an AI system used internally for high-risk purposes (employment decisions, access to services, safety-critical operations), you are a provider of that system with full compliance obligations. The fact that the system is not sold to external customers does not remove the compliance requirement.
When does a deployer become a provider under Article 25?
Article 25 converts a deployer into a provider when they make a “substantial modification” to the AI system — defined as a change that affects the system’s compliance with the regulation or changes its intended purpose. In practice, this covers fine-tuning a third-party model, changing the system’s use case beyond its documented scope, removing safety controls, or building a material new layer of functionality that changes the system’s risk profile. When this threshold is crossed, the former deployer acquires full provider obligations for the modified system.
Can a company be both a provider and a deployer at the same time?
Yes, and this is extremely common. A company that sells an AI product to customers is a provider of that product. The same company likely uses third-party AI tools internally — making it a deployer of those systems. Each role carries its own obligations, and the compliance programme must address both tracks separately. The starting point is always a complete inventory of every AI system the organisation builds, sells, or uses.
What is the difference between an AI provider and an AI deployer under the EU AI Act?
A provider develops an AI system and places it on the market or puts it into service under their own name — carrying the full set of compliance obligations including Technical File, risk management, conformity assessment, and EU database registration. A deployer uses a system developed by someone else in a professional context — carrying lighter duties centred on human oversight, monitoring, incident reporting, and staff training. The key distinction is whether you are making the system available to others (provider) or using it yourself (deployer).

Jasper Claes is a Compliance Manager and consultant specializing in AI governance for high-scale technology companies operating in regulated markets. He advises product and legal teams on implementing practical compliance frameworks aligned with evolving regulations such as the EU AI Act. Through his writing, Jasper focuses on translating complex regulatory requirements into clear, actionable guidance for teams building and deploying AI systems.
