Beyond Spreadsheets: Why You Need a Centralized AI System Inventory

Jasper Claes

⚡ TL;DR

  • You cannot comply with the EU AI Act for systems you do not know you have. A complete AI system inventory is the first and most foundational step in any compliance programme — before risk assessment, before Technical Files, before conformity assessment.
  • Most organisations that have attempted an AI inventory using spreadsheets have found it decaying within 90 days: new systems appear without being registered, old systems are updated without the inventory reflecting the change, and classification decisions are inconsistent across teams.
  • A centralised AI system inventory is not a spreadsheet with a governance label. It is a structured, versioned, automatically maintained register that serves as the authoritative source of truth for every AI system your organisation builds, buys, or deploys.

The conversation I have most often with compliance leads who have been asked to deliver EU AI Act compliance starts with the same question: “Where do we begin?” My answer is always the same: with the inventory. Before you can assess risk, before you can decide what documentation is required, before you can run conformity assessments — you need to know what AI systems exist in your organisation. All of them.

This turns out to be harder than it sounds. Most organisations are surprised by the number of AI systems they have when they actually look. Shadow AI deployments made by individual teams using self-service tools. GPAI API integrations built by product engineers who did not go through IT procurement. Third-party SaaS products with AI features that were enabled by default. Internal tools built on open-source models that are technically high-risk but have never been reviewed by anyone with AI Act knowledge. A thorough inventory typically surfaces 2–4 times as many AI systems as the initial estimate.

This post explains what a complete AI system inventory must contain, why spreadsheets structurally cannot deliver it at scale, and how to build an inventory infrastructure that actually serves as the foundation for your compliance programme. For the broader governance framework this inventory sits within, see our AI Governance Framework pillar guide.

What Must an AI System Inventory Contain?

The EU AI Act does not define a specific inventory format, but the compliance obligations it creates — risk-tiered documentation, role-specific duties, operator/deployer distinction, registration requirements — collectively define what information the inventory must hold to be operationally useful. An inventory that does not contain the right fields cannot serve as the foundation for an Act-compliant governance programme.

| Field Category | Required Fields | Why It Matters for Compliance |
| --- | --- | --- |
| System Identity | Name; version; description; vendor (if third-party); internal owner; business unit | Identifies the system and assigns organisational accountability — prerequisite for all other compliance activities |
| Organisational Role | Provider / Deployer / Both; if provider: placed on market or internal only; if deployer: provider identity and compliance status | Determines which Article obligations apply — provider vs. deployer carry fundamentally different compliance tracks |
| Risk Classification | Risk tier (prohibited / high-risk / limited risk / minimal risk); Annex III category if high-risk; classification reasoning; classification date and reviewer | Determines documentation and conformity assessment requirements; classification decision must be auditable |
| GPAI Status | Does the system use a GPAI model? Which model(s)? GPAI provider compliance status; whether GPAI deployment triggers high-risk obligations | GPAI deployments may carry separate obligations and require provider-level compliance review for the system built on top |
| Compliance Status | Technical File status; conformity assessment status; EU database registration reference; Declaration of Conformity link; open non-conformities | Provides at-a-glance compliance posture across the AI portfolio; surfaces systems at risk of enforcement |
| Deployment Context | Geographic deployment (EU / non-EU); affected population (customers / employees / public); approximate volume (decisions/month) | EU deployment is the jurisdictional trigger; affected population and volume inform FRIA and human oversight requirements |
| Data Protection Intersection | GDPR processing activity reference; DPIA status; ROPA entry link; DPO review date | Most high-risk AI systems also process personal data; GDPR and AI Act compliance must be coordinated for the same system |
| Change History | Last modified date; modification type; substantial modification flag; re-assessment required flag | Substantial modifications may convert a deployer to a provider and require a new conformity assessment — the inventory must capture this |
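The field categories above translate naturally into a typed record. Here is a minimal Python sketch — the class, the enum values, and the `needs_review` rule are illustrative assumptions, not a prescribed schema:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum
from typing import Optional

class RiskTier(Enum):
    PROHIBITED = "prohibited"
    HIGH_RISK = "high-risk"
    LIMITED = "limited"
    MINIMAL = "minimal"

class Role(Enum):
    PROVIDER = "provider"
    DEPLOYER = "deployer"
    BOTH = "both"

@dataclass
class InventoryRecord:
    # System identity
    name: str
    version: str
    internal_owner: str
    business_unit: str
    vendor: Optional[str] = None  # None for internally built systems
    # Organisational role and risk classification
    role: Role = Role.DEPLOYER
    risk_tier: RiskTier = RiskTier.MINIMAL
    annex_iii_category: Optional[str] = None
    classification_reasoning: str = ""
    classification_date: Optional[date] = None
    classification_reviewer: Optional[str] = None
    # Change history
    last_modified: Optional[date] = None
    substantial_modification: bool = False
    reassessment_required: bool = False

    def needs_review(self) -> bool:
        """Stale if a substantial modification was flagged but no
        re-assessment has been scheduled yet."""
        return self.substantial_modification and not self.reassessment_required
```

Even this skeletal version makes one point the table only implies: every classification carries its reasoning, reviewer, and date, so the audit trail lives in the record itself rather than in someone's inbox.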

Why Spreadsheets Fail at Scale: The Four Structural Problems

Spreadsheet-based AI inventories are the most common approach and the most consistently inadequate. They fail not because they are poorly designed but because spreadsheets are structurally unsuited to the four requirements that make an AI inventory challenging: discovery, currency, consistency, and linkage.

Problem 1: Discovery — You Can’t Inventory What You Don’t Know Exists

A spreadsheet is populated by people who know to add entries. It has no mechanism for discovering AI systems that were deployed without going through a centralised process. Shadow AI deployments — teams using self-service tools, engineers integrating APIs without procurement oversight, business users adopting AI-enhanced SaaS features — are precisely the systems that never get added to a spreadsheet-based inventory.

A centralised inventory system addresses this through integration with your technology infrastructure: scanning SaaS procurement records, monitoring API usage logs for known AI provider endpoints, reviewing cloud deployment configurations for AI-specific patterns. The goal is continuous discovery rather than point-in-time manual population. The NIST Risk Management Framework’s IDENTIFY function provides the conceptual model here — you cannot manage risk for assets you have not identified.
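One of those discovery signals — API traffic to known AI provider endpoints — is straightforward to sketch. The endpoint patterns and log format below are assumptions; adapt them to whatever your gateway actually records:

```python
import re

# Hypothetical endpoint patterns for well-known AI providers;
# extend with whatever your gateway logs actually show.
AI_ENDPOINT_PATTERNS = [
    r"api\.openai\.com",
    r"api\.anthropic\.com",
    r"generativelanguage\.googleapis\.com",
    r"bedrock[\w.-]*\.amazonaws\.com",
    r"[\w-]+\.openai\.azure\.com",
]

def scan_gateway_log(lines):
    """Return (source_service, endpoint) pairs that hit a known AI
    provider — candidates for inventory triage, not confirmed systems."""
    pattern = re.compile("|".join(AI_ENDPOINT_PATTERNS))
    hits = set()
    for line in lines:
        m = pattern.search(line)
        if m:
            source = line.split()[0]  # assumes caller id is the first field
            hits.add((source, m.group(0)))
    return hits

log = [
    "billing-svc GET https://api.openai.com/v1/chat/completions 200",
    "web-frontend GET https://example.com/health 200",
    "hr-tool POST https://api.anthropic.com/v1/messages 200",
]
print(sorted(scan_gateway_log(log)))
# → [('billing-svc', 'api.openai.com'), ('hr-tool', 'api.anthropic.com')]
```

A scan like this only surfaces candidates; each hit still needs a human to confirm what the calling service does and register it properly.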

Problem 2: Currency — Inventories Decay Faster Than They Are Updated

AI systems change constantly — model updates, new features, expanded use cases, new deployment regions. A spreadsheet has no mechanism for detecting that a system changed and triggering an inventory update. Within 90 days of a manual inventory exercise, the gap between the spreadsheet and reality typically exceeds the organisation’s risk tolerance.

A centralised inventory system addresses this through integration with your change management process: every deployment pipeline that touches an inventoried AI system automatically updates the relevant inventory record or flags it for review. The inventory becomes an active participant in the change management process, not a passive document that hopes to be updated.
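The shape of that pipeline integration can be sketched as a post-deployment hook. Everything here is illustrative — the change-type names, the dict-based inventory, and the flag values are assumptions, not an API:

```python
from datetime import date

# Change types that may constitute a substantial modification
# under the Act — illustrative, not an exhaustive legal list.
SUBSTANTIAL_CHANGE_TYPES = {"model-update", "new-use-case", "new-region"}

def on_deployment(system_id, change_type, inventory):
    """Hypothetical hook called after every deployment that touches
    an AI system. Flags records for human review rather than
    deciding substantiality silently."""
    record = inventory.get(system_id)
    if record is None:
        # A deployment the inventory has never seen is a discovery
        # gap, not just a stale record.
        inventory[system_id] = {"flag": "register-and-classify"}
        return
    record["last_modified"] = date.today().isoformat()
    record["change_type"] = change_type
    if change_type in SUBSTANTIAL_CHANGE_TYPES:
        record["flag"] = "review-substantial-modification"
```

The design point is the asymmetry: routine changes update the record automatically, but anything that might be a substantial modification raises a flag for the governance function instead of being absorbed quietly.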

Problem 3: Consistency — Classification Decisions Are Made Independently Across Teams

When individual teams self-classify their AI systems without a centralised methodology, the same factual situation gets classified differently depending on who does the assessment, how well they understand the Annex III categories, and how risk-averse they are. One team classifies an employment AI tool as “minimal risk” because they believe their use of it is trivial. Another team classifies an identical tool as “high-risk” because they read the Annex III language carefully.

A centralised inventory enforces consistent classification methodology by routing classification decisions through a structured workflow with defined criteria, challenge mechanisms, and review by a central AI governance function. Every classification decision is documented with reasoning, reviewer identity, and date — creating the auditable record that a market surveillance authority may request.

Problem 4: Linkage — Compliance Artefacts Are Scattered Across Disconnected Systems

A system’s compliance status is only as useful as its links to the underlying compliance artefacts — the Technical File, the conformity assessment record, the DPIA, the training records for oversight personnel. In a spreadsheet-based system, these links are typically absent or broken. The inventory says “Technical File: complete” without linking to the actual file, its version, or when it was last updated.

A centralised inventory system serves as the master index that links every compliance artefact to the system it covers — with version references, status indicators, and last-update timestamps for each artefact. When a market surveillance authority asks to see the complete compliance record for a specific system, the inventory produces a structured package rather than requiring hours of manual assembly from scattered sources.
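That "structured package" can be sketched as a simple export over the artefact links the inventory holds. The record shape and field names below are assumptions for illustration:

```python
def compliance_package(record):
    """Assemble a structured response to a market surveillance request
    from the artefact links held in an inventory record."""
    artefacts = record["artefacts"]
    gaps = sorted(name for name, link in artefacts.items() if not link)
    return {
        "system": record["name"],
        "risk_tier": record["risk_tier"],
        "artefacts": {n: l for n, l in artefacts.items() if l},
        "gaps": gaps,  # broken links surface now, not during the audit
    }

record = {
    "name": "CV screening assistant",
    "risk_tier": "high-risk",
    "artefacts": {
        "technical_file": "docs/tf-v4.2.pdf",
        "conformity_assessment": "audits/ca-2026-01.pdf",
        "dpia": None,  # link was never filled in
    },
}
```

The useful side effect is the `gaps` list: a claim like "Technical File: complete" with no link behind it becomes visible the moment anyone generates a package, not the day a regulator asks for one.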

The AI System Discovery Process: Finding What You Don’t Know You Have

Building a complete inventory starts with a discovery exercise that goes beyond asking business units what AI they use. Systematic discovery should cover four source categories:

Source 1: Procurement and IT Records

Review SaaS procurement records for vendors with known AI capabilities. Check software licence registrations for AI tools. Audit cloud cost records for spending on known AI provider services (OpenAI, Anthropic, Google AI APIs, AWS Bedrock, Azure OpenAI). The ENISA AI Threat Landscape report identifies the most common AI tool categories deployed in enterprise environments — use it as a reference list for procurement scanning.

Source 2: Engineering Infrastructure

Scan API gateway logs for calls to known AI provider endpoints. Review Kubernetes and container registries for AI-related images. Check CI/CD pipeline configurations for ML framework dependencies (TensorFlow, PyTorch, Hugging Face Transformers, LangChain). Query your internal model registry for all registered models.
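The dependency check is easy to automate against requirements-style files. The package list and parsing rules below are simplified assumptions (real pins use more operators than `==` and `>=`):

```python
# Package names that signal an ML framework; extend for your stack.
ML_FRAMEWORKS = {"tensorflow", "torch", "transformers", "langchain"}

def flag_ml_dependencies(requirements_text: str) -> set:
    """Return ML-framework packages found in a requirements-style file —
    candidates for inventory triage, not proof an AI system exists."""
    found = set()
    for line in requirements_text.splitlines():
        line = line.split("#")[0].strip()  # drop comments and blanks
        if not line:
            continue
        pkg = line.split("==")[0].split(">=")[0].strip().lower()
        if pkg in ML_FRAMEWORKS:
            found.add(pkg)
    return found
```

Run across every repository's dependency manifests in CI, a check like this catches AI usage at the moment it enters the codebase rather than months later in an annual survey.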

Source 3: Business Unit Self-Reporting

Structured interviews with business unit leads using a consistent questionnaire covering: automated decision-making in any process; AI-assisted writing, analysis, or recommendation tools; third-party SaaS products with AI features; any tool that uses machine learning, natural language processing, or computer vision. Provide examples to counteract the common misconception that “AI” means only large language models.

Source 4: Data and Analytics Functions

Review outputs from data science and analytics teams for any model or algorithm that produces outputs used in business decisions. Statistical models, rules engines, and scoring algorithms may qualify as AI systems under the broad EU AI Act definition in Article 3(1), which covers any “machine-based system designed to operate with varying levels of autonomy” — not just neural networks or LLMs.

Classification at Scale: From Discovery to Compliance Obligation

Once discovered, every AI system must be classified against the EU AI Act’s risk tiers. For large portfolios, classification at scale requires a structured workflow that is both thorough enough to be reliable and efficient enough to be executable:

  1. Initial triage: Screen every discovered system against the Article 5 prohibited list and the Annex III high-risk categories. Systems that clearly do not touch any Annex III use case can be classified as minimal risk with a brief documented rationale. Systems that touch any Annex III use case proceed to detailed classification.
  2. Detailed classification: For systems that may be high-risk, conduct a structured classification assessment using the three-part test: (a) does the system fall within an Annex III category based on its intended purpose? (b) is the system used in the EU or affecting EU residents? (c) is the system used in a professional context (not purely personal non-professional use)? All three must be true for high-risk classification to apply.
  3. Edge-case review: Systems where the classification is uncertain are escalated to the central AI governance function for expert review, with the reasoning documented. This creates the auditable decision trail that market surveillance authorities expect to see.
  4. Compliance obligation mapping: For each classified system, the inventory automatically generates a compliance obligation checklist — the specific Articles that apply given the system’s risk tier, organisational role, and deployment context.
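The triage and three-part test above reduce to a small decision function. This is a sketch of the control flow only — real classification needs the documented reasoning, reviewer, and date described in step 3, and edge cases always go to human review:

```python
def triage(touches_annex_iii: bool, in_eu: bool, professional_use: bool):
    """Minimal sketch of the triage + three-part test.
    Returns (tier, next_step); 'uncertain' means the Annex III match
    exists but not all conditions hold, so a human must document why."""
    if not touches_annex_iii:
        return ("minimal-risk", "document brief rationale")
    if in_eu and professional_use:
        return ("high-risk", "map Article obligations; start Technical File")
    return ("uncertain", "escalate to central AI governance function")
```

Encoding the test this way also makes the workflow auditable: the inputs to the function are exactly the facts a reviewer must attest to, and the output maps directly to the next compliance action.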

For the detailed Annex III classification criteria and worked examples, see our post on Annex III high-risk AI classifications. For the provider vs. deployer determination that affects the obligation checklist, see our post on provider and deployer responsibilities.

Integrating the Inventory with Your Governance Processes

A centralised AI system inventory reaches its full value when it is integrated into the governance processes that consume it — not maintained as a separate compliance exercise.

Procurement integration: New AI system procurement must trigger an inventory registration and classification before purchase approval. The inventory becomes a procurement gate, not a post-hoc record. For the full vendor due diligence framework this enables, see our post on AI vendor due diligence in 2026.

Change management integration: Every change management request that touches an inventoried AI system must include an impact assessment — does this change constitute a substantial modification? Does it require a Technical File update? Does it require a new conformity assessment? The inventory triggers these questions automatically based on the system’s classification and role.

Risk management integration: The inventory feeds your enterprise risk management framework with AI-specific risk signals — high-risk systems with open non-conformities, systems approaching Technical File review deadlines, systems with anomalous override rates in their post-market monitoring data.

Regulatory reporting integration: When a market surveillance authority requests information about your AI systems, the inventory is the first place you look. A well-maintained inventory should allow you to respond to a regulatory request with a complete, structured compliance package for any system within 48 hours.

Frequently Asked Questions

What is an AI system inventory and why does the EU AI Act require one?

An AI system inventory (also called an AI register or AI asset register) is a structured, maintained record of every AI system an organisation builds, buys, or deploys. The EU AI Act does not use the word “inventory,” but its obligations — risk-tiered documentation requirements, provider vs. deployer role determination, conformity assessment before market placement, and post-market monitoring — collectively require organisations to know what AI systems they have, how they are classified, and what their compliance status is. An organisation without a complete inventory cannot demonstrate compliance with these obligations. Most enterprises that conduct a systematic AI discovery exercise find significantly more systems than they expected — typically 2–4 times their initial estimate.

What is the difference between an AI system inventory and a Data Protection impact assessment register?

A DPIA register (or Records of Processing Activities under GDPR Article 30) documents data processing activities from a privacy and data protection perspective. An AI system inventory documents AI systems from an AI Act compliance perspective — including systems that process no personal data and therefore don’t appear in the DPIA register. The two registers should be cross-linked: for AI systems that also process personal data, the DPIA register entry and the AI system inventory entry should reference each other. The DPO typically owns the DPIA register; the AI governance function owns the AI system inventory; they should be coordinated rather than siloed. See our post on the DPO’s role in AI governance.

How do we handle AI features embedded in third-party SaaS tools we use?

Third-party SaaS tools with AI features must be in your inventory if those features are used in a high-risk context. For each such tool: determine whether the AI feature use case falls in Annex III; determine your organisational role (you are almost certainly a deployer, not a provider); and verify the vendor’s compliance status for the AI features you use. Add the tool to your inventory with the vendor’s compliance documentation references — Declaration of Conformity link, EU database registration reference — and your deployer obligation checklist. If the vendor cannot provide compliance documentation for high-risk AI features, that is a procurement risk requiring escalation to your vendor management process.

Our organisation has dozens of AI systems. How do we prioritise the inventory and compliance work?

Prioritise by risk tier and enforcement timeline. The August 2026 deadline for high-risk AI compliance means: immediately identify all systems potentially falling in Annex III; prioritise those for rapid classification and Technical File development; systems clearly in the minimal risk category can be processed more slowly. Within the high-risk category, prioritise by volume (higher-volume systems affect more people), by deployment maturity (systems already in production have less time for remediation), and by documentation status (systems with no existing compliance documentation need the most work). Use the inventory to generate a prioritised compliance workplan — the systems at the top of that plan should be receiving the majority of your compliance resources right now.

How often should the AI system inventory be reviewed and updated?

The inventory should be updated continuously as systems change — not reviewed periodically. Every system deployment, model update, or use-case expansion should trigger an inventory update as part of the change management process. Beyond continuous updates, conduct a full discovery exercise at least annually to surface shadow AI deployments and unofficial tool adoptions that bypassed the standard process. Additionally, conduct a full portfolio review whenever the regulatory landscape changes materially — the Commission’s Annex III expansion mechanism means the list of high-risk categories can be extended, potentially reclassifying systems that were previously minimal risk.

Replace your spreadsheet with a compliance-grade AI system inventory.

Unorma’s AI System Inventory provides automated discovery integration, structured classification workflows, compliance obligation mapping, and full artefact linkage — turning your inventory from a decaying spreadsheet into the live foundation of your governance programme.

Build Your AI System Inventory →
