📋 TL;DR — For Procurement, Legal, and IT Leaders:
- When you buy or deploy a third-party AI system in a high-risk context, you acquire deployer obligations under the EU AI Act — regardless of what the vendor’s contract says about compliance responsibility.
- A vendor’s claim of “EU AI Act compliance” is not self-certifying. You need to verify it with specific documentation before contract signature.
- The AI vendor due diligence process in 2026 is not a box-tick exercise. It is a risk management activity that determines your regulatory exposure, your liability architecture, and your ability to fulfil your own obligations as a deployer.
Enterprise procurement of AI software has fundamentally changed in 2026. Two years ago, buying an AI tool meant evaluating accuracy benchmarks, integration capabilities, pricing models, and support SLAs. Today, all of those factors still matter — but they’re joined by a compliance dimension that can determine whether the tool you buy creates regulatory exposure the moment you deploy it.
I’ve seen procurement processes that skipped AI compliance vetting, signed contracts with AI vendors who had no EU AI Act compliance programme, and deployed high-risk AI systems without understanding the deployer obligations being acquired. The cost of those decisions is becoming visible in 2026 — both in regulatory enforcement activity and in the legal liability conversations those contracts are now generating.
This post gives you the complete AI vendor due diligence framework: what to ask before you sign, what documentation to require, how to structure your contracts, and the red flags that should walk you away from a vendor regardless of how good the product looks. For the governance framework that this procurement process sits within, see our AI Governance Framework pillar guide.
Why AI Vendor Due Diligence Is Now a Legal Obligation, Not Just Good Practice
As a deployer of a third-party AI system, Article 26 of the EU AI Act requires you to use the system in accordance with the provider’s instructions and to take appropriate technical and organisational measures to ensure human oversight and monitoring. You cannot meet these obligations if you don’t understand the system’s compliance status.
More directly: Article 26(1) states that deployers “shall take appropriate technical and organisational measures to ensure they use such systems in accordance with the instructions for use accompanying the systems.” If you have no documented instructions for use because you never requested them from the vendor, you cannot demonstrate Article 26 compliance.
Furthermore, if the AI system you buy is non-compliant — if it lacks the required Technical File, was never registered in the EU AI database, or has no post-market monitoring — you as the deployer may face enforcement action for using a non-compliant system. Your commercial relationship with the vendor does not shield you from regulatory scrutiny of the system you’re operating.
AI vendor due diligence is therefore not a procurement best practice — it’s the mechanism through which you verify that the system you’re buying enables your own compliance, rather than undermining it.
The AI Vendor Due Diligence Framework: Five Evaluation Dimensions
Dimension 1: Scope and Classification — Is This System High-Risk?
The starting point for any AI vendor assessment is determining whether the system you’re evaluating is a high-risk AI system under Annex III of the EU AI Act. This determines the level of compliance documentation you need to require from the vendor.
Ask the vendor directly: “Have you assessed your system against the EU AI Act’s Annex III high-risk categories? What was your conclusion, and can you share the classification assessment documentation?”
Vendors who have not conducted a classification assessment — or who cannot articulate their classification reasoning — have not begun their compliance programme. That’s a significant red flag for any AI system touching consequential decisions. For the classification framework, see our post on Annex III high-risk AI classifications.
Remember that your use case also matters for classification. A system the vendor classified as non-high-risk for its primary use case may become high-risk in your specific deployment context. If your intended use touches employment, credit, education, or safety, verify the classification against your actual use case — not just the vendor’s default use case description.
Dimension 2: Compliance Documentation — What Can They Provide?
For high-risk AI systems, the vendor should be able to provide specific documentation before contract signature. This is not negotiable — these are the documents the law requires them to have, and if they don’t have them, they are not compliant.
| Document | Legal Basis | What to Verify | Red Flag if… |
|---|---|---|---|
| EU Declaration of Conformity | Article 47 | Is it signed? Does it reference the correct conformity assessment procedure? Is it current (covers the version you’re buying)? | Missing; draft/unsigned; doesn’t cover the current system version |
| Technical File Summary / Annex IV Abstract | Article 11 | Does it cover all nine Annex IV sections? Is the system description specific enough to match your intended use? | Generic / boilerplate; missing sections; intended purpose doesn’t match your use case |
| EU AI Database Registration Reference | Article 71 | Can you verify the registration in the public EU AI database? Does the registration match the product being sold? | Cannot find the registration; registration describes a different system version |
| Instructions for Use (Article 13) | Article 13 | Does it include capability/limitation disclosure? Human oversight requirements? Conditions of use and maintenance requirements? | Generic product documentation without AI Act-specific sections; no limitation disclosure |
| Bias Testing Results | Article 10 | Does it include subgroup performance data for relevant protected characteristics? Is the methodology documented? | Aggregate-only metrics; no demographic breakdown; methodology not disclosed |
| Post-Market Monitoring Plan | Article 72 | Is there a specific plan with metrics, thresholds, and responsible parties? How will you be notified of issues? | Generic monitoring statement; no customer notification process; no specific metrics |
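For teams running this across many vendors at once, the documentation table above can be kept as a simple internal tracking structure. The following Python sketch is purely illustrative — the class and field names are our own invention, not anything defined by the Act — but it shows the core discipline: every outstanding document is a named gap that blocks contract signature.

```python
from dataclasses import dataclass, field

# Hypothetical internal tracker for the provider documents listed above.
# Document names and legal bases follow the table; everything else is illustrative.

@dataclass
class RequiredDocument:
    name: str
    legal_basis: str
    received: bool = False

@dataclass
class VendorDossier:
    vendor: str
    system: str
    documents: list = field(default_factory=lambda: [
        RequiredDocument("EU Declaration of Conformity", "Article 47"),
        RequiredDocument("Technical File summary (Annex IV abstract)", "Article 11"),
        RequiredDocument("EU AI database registration reference", "Article 71"),
        RequiredDocument("Instructions for use", "Article 13"),
        RequiredDocument("Bias testing results", "Article 10"),
        RequiredDocument("Post-market monitoring plan", "Article 72"),
    ])

    def missing(self):
        """Documents still outstanding — each one blocks contract signature."""
        return [d.name for d in self.documents if not d.received]

dossier = VendorDossier(vendor="ExampleVendor", system="ScreeningTool v3")
dossier.documents[0].received = True  # signed Declaration of Conformity received
print(dossier.missing())  # five documents still outstanding
```

The point of structuring it this way is that “compliance documentation received” becomes a binary, auditable state per document, rather than a general impression formed during vendor calls.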
Dimension 3: Data Protection — The GDPR Layer
Every AI vendor relationship that involves your data requires a data protection assessment, independent of the AI Act. For vendors whose AI processes your personal data (customer data, employee data, user data), you need:
- A Data Processing Agreement (DPA) that meets GDPR Article 28 requirements — covering processing instructions, sub-processor management, data subject rights support, breach notification, and deletion/return of data
- An EU data residency confirmation or, where data is processed outside the EU, adequate transfer safeguards (Standard Contractual Clauses or equivalent)
- Clarity on whether AI training uses your data — some AI vendors use customer data to train or fine-tune their models. This requires explicit contractual prohibition or explicit consent, depending on the data involved
- Data retention and deletion procedures — particularly important for AI systems that log interactions, as Article 12 AI Act logging requirements and GDPR data minimisation principles need to be reconciled
For the full intersection of data protection and AI governance, see our post on the DPO’s role in managing AI governance alongside GDPR.
Dimension 4: Contractual Liability Allocation
AI vendor contracts in 2026 need specific provisions that didn’t exist in standard software contracts three years ago. Your legal team should insist on:
- Compliance warranties: The vendor warrants that the system is compliant with the EU AI Act as of the contract date and will remain compliant throughout the contract term, with notification obligations if compliance status changes
- Substantial modification notification: The vendor commits to notify you before making any substantial modification to the system — allowing you to assess whether your own conformity assessment needs to be revisited
- Regulatory cooperation: The vendor agrees to cooperate with any market surveillance authority investigation of the system, including providing access to the Technical File, and to keep you informed of any enforcement activity
- Incident notification: The vendor commits to notify you within 24 hours of any serious incident involving the system — allowing you to meet your own Article 73 reporting obligations
- Indemnification: The vendor indemnifies you for losses arising from the system’s non-compliance with the EU AI Act — at minimum for the period before you could reasonably have discovered the non-compliance
- Termination right: You have the right to terminate the contract without penalty if the vendor fails to remediate a compliance gap within a defined period after notification
Dimension 5: Ongoing Monitoring — Compliance Doesn’t End at Contract Signature
AI vendor compliance is not a point-in-time determination. The vendor’s compliance status can change — through system modifications, regulatory findings, or market surveillance authority action. Build ongoing monitoring into your vendor management programme:
- Annual compliance review: Require vendors of high-risk AI systems to provide updated compliance documentation annually — at minimum a confirmation that the Declaration of Conformity remains current and the EU database registration is up to date
- EU AI database monitoring: Monitor the vendor’s EU AI database registration for changes or withdrawals that might indicate a compliance issue
- Regulatory watch: Monitor EU AI Office and national market surveillance authority (MSA) publications for enforcement actions in your vendor’s sector
- Version update review: Every major system version update should trigger a request for updated compliance documentation — confirming the new version has been assessed and the Declaration of Conformity updated if required
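The version update review above lends itself to a simple automated trigger. As a minimal sketch — assuming the vendor uses semantic versioning, and treating a major-version bump as a potential substantial modification — you can flag any release where the deployed version has drifted from the version covered by the Declaration of Conformity on file:

```python
# Hypothetical check run on each vendor release: does the deployed version
# still match the version covered by the Declaration of Conformity on file?

def review_needed(deployed_version: str, doc_covered_version: str) -> bool:
    """Flag a compliance documentation review when the major version changes.

    Assumes semantic versioning ("MAJOR.MINOR.PATCH"). A major-version bump
    is treated here as a potential substantial modification that warrants
    requesting updated compliance documentation from the vendor.
    """
    deployed_major = int(deployed_version.split(".")[0])
    covered_major = int(doc_covered_version.split(".")[0])
    return deployed_major != covered_major

print(review_needed("3.1.0", "3.0.2"))  # False — same major version
print(review_needed("4.0.0", "3.0.2"))  # True — request an updated Declaration
```

In practice the threshold is a judgment call — a minor release that changes the model or its intended purpose can also be a substantial modification — so treat automation like this as a floor, not a ceiling, on review triggers.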
The AI Vendor Due Diligence Checklist
Use this checklist in your vendor assessment process. For high-risk AI systems, all items in Sections A and B are mandatory. For non-high-risk AI systems subject to limited-risk transparency obligations, Section C applies.
Section A — Scope and Classification (All AI Vendors)
| ✓ | Checklist Item |
|---|---|
| ☐ | Vendor has conducted a formal Annex III classification assessment for the system you are buying |
| ☐ | Classification conclusion matches your intended use case — not just the vendor’s default use case |
| ☐ | If classified as non-high-risk, reasoning is documented and plausible for your use case |
| ☐ | Vendor can confirm system does not involve Article 5 prohibited practices |
Section B — High-Risk AI System Compliance Documentation
| ✓ | Checklist Item |
|---|---|
| ☐ | Signed EU Declaration of Conformity provided — current version, covers the product being purchased |
| ☐ | EU AI database registration verified — registration is current and matches product |
| ☐ | Technical File summary or Annex IV abstract provided — covers all nine sections, not generic |
| ☐ | Bias testing results provided with demographic subgroup performance data for relevant protected characteristics |
| ☐ | Instructions for use (Article 13) provided — includes capability/limitation disclosure and human oversight requirements |
| ☐ | Post-market monitoring plan described — with specific metrics, thresholds, and customer notification process |
| ☐ | Data Processing Agreement provided — covers GDPR Article 28 requirements including sub-processors and breach notification |
| ☐ | EU data residency confirmed or adequate transfer safeguards documented for non-EU processing |
| ☐ | Contractual compliance warranties, incident notification, substantial modification notification, and indemnification clauses agreed |
| ☐ | Annual compliance review cadence agreed in contract |
Section C — Limited Risk and Minimal Risk AI Systems
| ✓ | Checklist Item |
|---|---|
| ☐ | Article 50 transparency obligations confirmed — user disclosure of AI interaction is implemented |
| ☐ | Data Processing Agreement in place if personal data is processed |
| ☐ | Use case review completed — confirm classification is correct for your specific deployment context |
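The gating rule stated at the top of the checklist — Sections A and B mandatory for high-risk systems, Section C for limited-risk systems — can be encoded directly, which is useful if your procurement tooling enforces sign-off gates. This is an illustrative sketch; the section labels mirror the checklist above, and the function names are our own:

```python
# Hypothetical gating logic for the checklist above. Which sections are
# mandatory depends on the system's classification under the EU AI Act:
# Section A applies to all AI vendors; Section B to high-risk systems;
# Section C to limited-risk systems with transparency obligations.

def mandatory_sections(classification: str) -> list:
    if classification == "high-risk":
        return ["A", "B"]   # full compliance documentation stack required
    if classification == "limited-risk":
        return ["A", "C"]   # transparency obligations plus scope check
    return ["A"]            # minimal risk: scope and classification only

def procurement_can_proceed(classification: str, completed_sections: list) -> bool:
    """Contract signature is gated on every mandatory section being complete."""
    return set(mandatory_sections(classification)) <= set(completed_sections)

print(procurement_can_proceed("high-risk", ["A"]))       # False — Section B open
print(procurement_can_proceed("high-risk", ["A", "B"]))  # True
```

Note that Section A is never optional: the classification assessment is what determines which other sections apply, so it has to come first for every vendor.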
Red Flags That Should Stop a Procurement Process
Not every compliance gap is equal. Some gaps can be remediated with contractual protections or vendor commitments. Others should stop the procurement process entirely:
- No classification assessment conducted. A vendor selling a system that plausibly falls in Annex III without having conducted a classification assessment has not started their compliance programme. Walk away or make contract signature conditional on assessment completion with a hard deadline.
- Cannot produce a Declaration of Conformity. For a high-risk AI system, this document is mandatory before market placement. A vendor who doesn’t have one is currently operating outside the law.
- “Our legal team is working on compliance.” In 2026, “working on” is not acceptable for a system already on the market. The law required compliance before market placement — a system that has been on the market for months without a Declaration of Conformity is in ongoing violation.
- Aggregate-only bias metrics with no subgroup data. This is not a documentation gap — it is evidence that the bias assessment required by Article 10 was not properly conducted. For employment, credit, or education AI, this creates direct discrimination risk that your organisation would absorb by deploying the system.
- Resistance to DPA or compliance warranty clauses. A vendor who resists standard data protection agreements or refuses to warrant their AI Act compliance has something to hide. Treat this as a serious signal about both their compliance status and their commercial trustworthiness.
Frequently Asked Questions
What is an AI vendor due diligence checklist and why do I need one in 2026?
An AI vendor due diligence checklist is a structured assessment framework that verifies whether an AI system you’re considering purchasing meets EU AI Act compliance requirements — and whether your deployment of it will be legally sound. You need one because as a deployer of a third-party AI system, you acquire obligations under Article 26 of the EU AI Act. If the vendor’s system is non-compliant, you may be operating an illegal AI system. The checklist ensures you verify compliance before contract signature, not after deployment.
Can we rely on a vendor’s self-declaration of EU AI Act compliance?
No. For high-risk AI systems, self-assessment is a legally defined process under Annex VI of the Act — but it produces specific documents (a Declaration of Conformity and a Technical File) that you can review. A vendor saying “we’re compliant” without producing these documents is making an unverifiable claim. Request the specific documentation the law requires them to have, and verify what you can independently — including the EU AI database registration, which is publicly accessible.
What contractual protections should we require from AI vendors?
At minimum: EU AI Act compliance warranties; substantial modification notification obligations; serious incident notification within 24 hours; regulatory cooperation commitments; indemnification for losses from vendor non-compliance; and a right to terminate without penalty for unremediated compliance failures. These protections don’t eliminate your deployer obligations — but they ensure you have recourse if the vendor’s non-compliance creates enforcement exposure for you.
How does AI vendor due diligence differ from standard IT vendor assessment?
Standard IT vendor assessment covers security posture, data protection (DPA), uptime SLAs, and integration capabilities. AI vendor due diligence adds: EU AI Act classification and conformity documentation; bias testing results and demographic performance data; human oversight design assessment; post-market monitoring methodology; and AI-specific contractual provisions (substantial modification notification, incident reporting). The GDPR layer (DPA, data residency) remains necessary but is not sufficient on its own.
We’re a public sector organisation. Do we have additional obligations when buying AI?
Yes. Public sector deployers of high-risk AI systems are among the entities explicitly required to conduct a Fundamental Rights Impact Assessment (FRIA) under Article 27 before deploying the system. This is a significant additional obligation beyond the standard deployer duties. Public sector procurement processes should build the FRIA into the pre-deployment timeline — which means starting it before the procurement decision is finalised, not after. Your DPO should own this process, ideally integrated with the DPIA you’re conducting for the same system.
Are you an AI vendor being asked to demonstrate compliance?
Unorma gives AI providers the complete compliance documentation stack — from automated Technical File generation to EU database registration readiness — so you can answer procurement due diligence questions with documentation, not promises. Build Your Compliance Documentation →

Jasper Claes is a Compliance Manager and consultant specializing in AI governance for high-scale technology companies operating in regulated markets. He advises product and legal teams on implementing practical compliance frameworks aligned with evolving regulations such as the EU AI Act. Through his writing, Jasper focuses on translating complex regulatory requirements into clear, actionable guidance for teams building and deploying AI systems.