🛠 Key Takeaways — For Engineers and Product Managers:
- Article 11 of the EU AI Act requires every high-risk AI system to have a Technical File — a living document that proves your system is safe, transparent, and under control. It must exist before you place the system on the market.
- Annex IV defines exactly what goes inside: 8 mandatory content areas covering system design, training data, testing results, risk management, and cybersecurity.
- The Technical File is not a one-time deliverable. It must be updated throughout the system’s entire operational life and retained for 10 years after market placement.
- This guide gives engineers and PMs the complete blueprint — what to write, how to structure it, what regulators actually look for, and how to automate the parts that don’t need to be manual.
If you’ve ever shipped software to an FDA-regulated market, you’ve encountered the concept of a Design History File. If you’ve worked in automotive, you know the Product Technical File. The EU AI Act’s Technical File under Article 11 is the AI industry’s equivalent — and if you’ve never built one before, the specification in Annex IV can look daunting on first read.
It doesn’t need to be. The underlying logic is straightforward: regulators want to be able to pick up your documentation, understand what your AI system does, verify that you’ve thought carefully about its risks, and confirm that you’ve built in the controls necessary to keep it operating safely. The challenge is translating that goal into a concrete set of documents that an engineering team can actually produce.
That’s what this guide does. By the end, you’ll know exactly what goes in each section of your Technical File, how to structure the documents for auditability, what common gaps regulators flag in early assessments, and how to build a documentation workflow that doesn’t collapse under the weight of a constantly evolving AI system.
For the executive governance layer that sits above this technical work, see our AI Governance Framework pillar guide. For the full regulatory context, start with our Ultimate Guide to EU AI Act Compliance.
What Is an AI Technical File Under Article 11?
Article 11 of the EU AI Act states that providers of high-risk AI systems must draw up technical documentation before placing the system on the market or putting it into service. That documentation must be kept up to date and must be sufficient to allow competent authorities to assess the system’s compliance with all applicable requirements.
Three things in that definition deserve emphasis:
“Before placing the system on the market.” The Technical File is a prerequisite for market placement — not an artefact you create after the product ships. If your system is live and your Technical File doesn’t exist yet, you’re already in breach of Article 11.
“Kept up to date.” The Technical File is a living document. Every substantial modification to the system — change in training data, model architecture update, new deployment context, change in intended use — requires a corresponding update to the Technical File. This has significant implications for how you manage documentation alongside your engineering workflow.
“Sufficient to allow competent authorities to assess compliance.” The Technical File is written for a regulator, not for your internal team. It needs to be self-contained — a market surveillance authority that has never seen your product should be able to read it and verify compliance without needing to interview your engineers.
The specific content requirements are defined in Annex IV, which lists eight mandatory content areas. We’ll work through each in detail below.
Who Is Responsible for the AI Technical Documentation?
Under Article 16, the obligation to create and maintain the Technical File rests with the provider — the organisation that develops the AI system and places it on the market. This is typically the company that trains or substantially configures the AI model and sells or deploys it as a product.
In practice, the work sits at the intersection of several teams:
| Team | Technical File Responsibilities | Annex IV Sections |
|---|---|---|
| ML / AI Engineering | System architecture description; training methodology; model performance metrics; validation results; algorithmic logic | Sections 1, 2, 4, 6 |
| Data Engineering | Dataset provenance documentation; data quality assessment; bias evaluation; data governance records | Section 3 |
| Product Management | System purpose and intended use; deployer instructions; use case scope; change management documentation | Sections 1, 7 |
| Security Engineering | Cybersecurity architecture; adversarial robustness testing; access controls; incident response procedures | Section 5 |
| Compliance / Legal | Risk management system documentation; conformity assessment outputs; Declaration of Conformity; version control and retention | Sections 5, 8 |
The cross-functional nature of the Technical File is one of the primary reasons it’s harder to produce than most engineering teams expect. It requires coordination across teams that don’t normally produce shared written deliverables — and it requires those deliverables to be maintained in sync with each other as the system evolves.
The 8 Annex IV Sections: A Complete Technical Documentation Blueprint
Annex IV of the EU AI Act defines eight mandatory content sections for every Technical File. Below is a detailed breakdown of what each section requires, what good documentation looks like, and the common gaps that compliance reviewers find most frequently.
Section 1: General System Description and Intended Purpose
This is the Technical File’s executive summary — the section that orients a reader who has never seen your system before. It must cover:
- The intended purpose of the AI system: what task it performs, in what context, and for what users
- The version information and the date on which the documentation was prepared or updated
- How the system interacts with hardware and software it is used with
- The categories of natural persons and groups the system is intended to be used on or with
- The specific contexts and conditions of use for which the system is designed
- The functionality the system is not designed or intended to perform (negative scope)
What good looks like: A clear, jargon-free description that an intelligent non-specialist can read and understand. Specify the use case precisely — “automated candidate screening for entry-level software engineering roles at companies with under 500 employees using resumes submitted via an ATS” is better than “AI for HR.” Precision protects you: a vague intended purpose creates scope ambiguity that regulators will probe, and that ambiguity widens the obligations you must evidence.
Common gap: Teams write the intended purpose to reflect what the system can do rather than what it is designed and intended to do. If the system is being sold for a specific use case, document that use case — and explicitly exclude the use cases you haven’t validated the system for. Use cases outside the documented scope become the deployer’s responsibility under Article 25.
Section 2: System Design and Development Process
This section is where engineers need to document the “how” of the system’s creation. It includes:
- The general design of the system including architectural diagrams
- The methods and steps used in development, including design choices and their rationale
- The key design specifications, including the algorithms used
- Where the system uses components developed by third parties, how those components integrate and what their own documentation status is
How to Write an AI Architectural Diagram for Regulatory Purposes
Most engineering teams already produce architectural diagrams — but the diagrams that satisfy an ML review board are not the same as the diagrams that satisfy a market surveillance authority. Regulatory architectural documentation needs to show:
- Data flow from input to output: How does raw input (text, image, structured data) travel through your system to produce a decision or recommendation? Every transformation step needs to be visible.
- Model components and their roles: If your system uses multiple models (e.g., a retrieval model plus a generation model plus a classification model), each needs to be identified with its function described.
- Human decision points: Where in the pipeline does human review or override occur? These need to be explicitly shown — they are the Article 14 human oversight controls made visible in diagram form.
- External system dependencies: Third-party APIs, external databases, and integrated services all need to appear, with a note on what compliance status each has.
- Data storage and logging points: Where is data retained? For how long? These are Article 12 logging requirements translated into your architecture diagram.
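The diagram requirements above can also be captured as a machine-checkable structure, which makes it easier to keep the documented architecture in sync with the real one. The sketch below is a minimal illustration: the pipeline, stage names, and roles are hypothetical, and the check that every pipeline contains a human decision point is one possible way to enforce the Article 14 point programmatically.

```python
from dataclasses import dataclass, field

@dataclass
class PipelineStage:
    """One node in the regulatory data-flow diagram."""
    name: str
    role: str                      # e.g. "ranking model", "human review"
    human_oversight: bool = False  # Article 14 decision point?
    logs_data: bool = False        # Article 12 logging point?
    downstream: list = field(default_factory=list)

# Hypothetical candidate-screening pipeline, input to output.
stages = [
    PipelineStage("ingest", "resume parser", logs_data=True, downstream=["score"]),
    PipelineStage("score", "ranking model", logs_data=True, downstream=["review"]),
    PipelineStage("review", "recruiter override", human_oversight=True,
                  logs_data=True, downstream=["decision"]),
    PipelineStage("decision", "shortlist output", logs_data=True, downstream=[]),
]

# Sanity check: at least one human decision point must exist in the pipeline.
oversight_points = [s.name for s in stages if s.human_oversight]
assert oversight_points, "no Article 14 human decision point in the pipeline"

for s in stages:
    flags = ("[human]" if s.human_oversight else "") + ("[logged]" if s.logs_data else "")
    print(f"{s.name} ({s.role}) {flags} -> {', '.join(s.downstream) or 'END'}")
```

A structure like this can be rendered into the actual diagram, so the documented data flow and the reviewed data flow never drift apart.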
The MLOps Community publishes practical templates for ML system architecture documentation that are useful as a starting baseline — though they need to be extended for regulatory purposes as described above.
Common gap: Diagrams that show the “happy path” of system operation without documenting error states, fallback behaviours, or the handling of out-of-distribution inputs. Regulators want to see that you’ve designed for failure modes, not just for normal operation.
Section 3: AI Data Governance Requirements — Training, Validation, and Testing Data
Section 3 of Annex IV is where Article 10’s data governance requirements translate into Technical File content. This is consistently the section with the most significant gaps in early compliance assessments — and the one where data engineering teams most frequently underestimate the documentation effort involved.
What AI Data Governance Documentation Must Cover
Your data documentation must address:
- Dataset provenance: Where did each dataset come from? What were the terms of use? For purchased or licensed datasets, include the agreement reference. For web-scraped data, document the collection methodology and any filtering applied.
- Dataset characteristics: The size, scope, and content of each training, validation, and test dataset. Include the time period covered, the geographic distribution, and any known gaps or underrepresentation.
- Data collection and processing methodology: How was the data collected, cleaned, labelled, and processed? What quality control measures were applied?
- Representativeness assessment: Is the training dataset representative of the real-world population the system will operate on? Document any identified gaps and the steps taken to address them.
- Bias evaluation: What bias detection methodology was used? What biases were found, and what mitigation steps were taken? What residual biases remain, and why are they acceptable?
- Personal data handling: If personal data was used in training, what was the legal basis? What data protection measures were applied? This section will be reviewed by both AI Act market surveillance authorities and data protection authorities — ensure it is consistent with your GDPR records of processing.
The Hugging Face Dataset Cards framework is a well-established industry standard for structured dataset documentation. It covers provenance, composition, collection methodology, and known limitations in a structured format that maps well onto the Annex IV requirements. Use it as your data documentation template, extended with the regulatory fields not covered in the standard card.
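One lightweight way to enforce the extended regulatory fields is to define the dataset record as a typed structure and reject incomplete entries. The sketch below is illustrative only — the field names are assumptions, not an official Annex IV or Dataset Card schema, and the example values are invented.

```python
from dataclasses import dataclass, asdict

@dataclass
class DatasetRecord:
    """One training/validation/test dataset entry for Annex IV Section 3.
    Field names are illustrative, not an official schema."""
    name: str
    split: str                 # "training" | "validation" | "test"
    provenance: str            # source and terms of use / licence reference
    size_records: int
    time_period: str
    known_gaps: list           # documented under-representation
    bias_evaluation: str       # methodology and report reference
    personal_data_basis: str   # GDPR legal basis, or "none"

record = DatasetRecord(
    name="resumes-2024-q3",
    split="training",
    provenance="licensed from VendorX, agreement #A-1042",
    size_records=182_000,
    time_period="2022-01 to 2024-06",
    known_gaps=["under-representation of applicants over 55"],
    bias_evaluation="disparate impact analysis, see report BR-7",
    personal_data_basis="Art. 6(1)(f) GDPR, legitimate interest",
)

# A record is only documentation-complete when every regulatory field is populated.
missing = [k for k, v in asdict(record).items() if v in ("", None)]
assert not missing, f"incomplete dataset documentation: {missing}"
print(f"{record.name}: {len(missing)} missing fields")
```

Running a completeness check like this in CI means a new dataset cannot enter the training pipeline without its Section 3 documentation.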
For organisations using synthetic data to augment training sets, document the generation methodology, the model used to generate the data, and the validation process used to confirm the synthetic data’s quality and representativeness. Synthetic data does not eliminate the documentation obligation — it changes what needs to be documented.
Common gap: Teams document the final training dataset but not the intermediate processing steps. Regulators may ask to trace a data lineage from raw collection through filtering, cleaning, and augmentation to the final training set. If you don’t have that provenance documented, you cannot reconstruct it after the fact.
Section 4: Monitoring, Functioning, and Control Mechanisms
This section documents how your system operates in production — the controls, monitoring, and logging mechanisms that keep it operating as intended after deployment.
Required content includes:
- Description of the system’s technical capabilities and limitations, including performance across different conditions and demographic groups
- The level of accuracy achieved during testing, with specific performance metrics
- Known or foreseeable circumstances that may lead to system failure or reduced performance
- The human oversight measures built into the system, including the controls available to human operators
- The technical measures to support post-market monitoring, including logging specifications
Model Cards as Technical File Documentation
Model cards — first proposed by Google researchers in 2018 and now widely used across the ML industry — are structured documents that describe a model’s intended uses, performance characteristics, limitations, and ethical considerations. They are the natural home for much of Section 4’s content.
The Google Model Cards framework provides a well-tested structure that includes: model details, intended use, factors (relevant demographic and environmental variables), metrics, evaluation data, training data, quantitative analyses, ethical considerations, and caveats and recommendations. The EU AI Act extends this with specific requirements for performance across protected characteristic groups and documentation of residual known risks.
The NIST AI Risk Management Framework (AI RMF) — while a US-origin document — provides complementary methodology for the performance evaluation and monitoring content that Article 15 and Section 4 of Annex IV require. NIST’s GOVERN, MAP, MEASURE, and MANAGE functions map well onto the EU AI Act’s risk lifecycle requirements.
What a complete model card for EU AI Act purposes looks like:
| Model Card Section | Required Content | AI Act Reference |
|---|---|---|
| Model Identity | Model name, version, release date, architecture type, training compute | Annex IV Section 1 |
| Intended Use | Primary use cases; intended user populations; out-of-scope uses | Annex IV Section 1; Article 13 |
| Performance Metrics | Accuracy, precision, recall, F1 by subgroup; performance across protected characteristic groups; confidence calibration | Article 15; Annex IV Section 4 |
| Evaluation Dataset | Test dataset composition, size, demographic coverage, collection period | Article 10; Annex IV Section 3 |
| Bias Analysis | Bias detection methodology; disparate impact results across protected groups; mitigation measures applied; residual bias disclosure | Article 10(2)(f); Annex IV Section 3 |
| Known Limitations | Conditions of degraded performance; out-of-distribution failure modes; adversarial vulnerabilities identified | Article 9; Annex IV Section 4 |
| Ethical Considerations | Fundamental rights implications; privacy considerations; fairness framework applied | Article 27; EU Charter |
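The subgroup performance row in the table above is the one most often missing from model cards. A minimal sketch of the underlying computation — with toy data, not real evaluation results — shows why aggregate accuracy alone is insufficient: two groups can share an aggregate while performing very differently.

```python
from collections import defaultdict

def subgroup_accuracy(y_true, y_pred, groups):
    """Accuracy broken down by protected-characteristic group.
    Aggregate metrics alone can hide disparate impact (Article 10(2)(f))."""
    hits, totals = defaultdict(int), defaultdict(int)
    for t, p, g in zip(y_true, y_pred, groups):
        totals[g] += 1
        hits[g] += int(t == p)
    return {g: hits[g] / totals[g] for g in totals}

# Toy evaluation data; a real system uses its documented test dataset.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 1, 1, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

per_group = subgroup_accuracy(y_true, y_pred, groups)
worst_gap = max(per_group.values()) - min(per_group.values())
print(per_group, f"gap={worst_gap:.2f}")  # A: 0.75, B: 0.50 — a 0.25 gap
```

The per-group table and the worst-case gap both belong in the model card, alongside the mitigation applied and the residual gap accepted.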
Section 5: Cybersecurity Requirements Under Article 15
Article 15 requires high-risk AI systems to be resilient against attempts by unauthorised third parties to alter their use, outputs, or performance — through either technical means (adversarial attacks) or data manipulation (data poisoning). Section 5 of Annex IV requires this security architecture to be documented.
AI Cybersecurity Documentation: What Article 15 Actually Requires
Your cybersecurity documentation must address four distinct threat vectors:
1. Data Poisoning. Attacks that manipulate training data to degrade model performance or introduce backdoors. Document your data validation pipeline, the integrity checks applied to training data, and the procedures for detecting and responding to suspected data poisoning incidents. For organisations using data from public sources or external vendors, document the supply chain security controls.
2. Adversarial Inputs. Carefully crafted inputs designed to cause the model to produce incorrect outputs. Document the adversarial testing methodology used during development, the attack types tested, the robustness results, and any hardening measures implemented in response. The OWASP Machine Learning Security Top 10 provides a widely referenced taxonomy of ML-specific attack vectors that is a useful reference for structuring your adversarial testing programme and documenting its coverage.
3. Model Extraction and Inversion. Attacks aimed at reconstructing the model or its training data from query access. Document the access controls, rate limiting, and output perturbation measures that protect against model extraction, and the data minimisation practices that reduce the risk of training data inversion.
4. Prompt Injection (for LLM-based systems). For systems using large language models, prompt injection — where malicious content in user inputs or retrieved context manipulates the model’s behaviour — is an AI-specific threat that needs explicit documentation. Document your prompt architecture, the input sanitisation measures applied, and the testing methodology used to assess injection resistance.
Beyond these AI-specific threats, the cybersecurity documentation should also reference your general security architecture: access controls, encryption in transit and at rest, vulnerability management, and incident response procedures. The ENISA Threat Landscape for AI is the EU’s own reference document for AI-specific cybersecurity threats — citing alignment with ENISA’s taxonomy in your Technical File is both technically sound and demonstrates awareness of the European regulatory context.
Common gap: Teams document general IT security controls but treat AI-specific threats as out of scope for the Technical File. Article 15 explicitly requires documentation of resilience against attacks targeting the AI system specifically — not just the infrastructure it runs on. Adversarial robustness testing results must be included even if they reveal vulnerabilities — omitting them because the results are imperfect is worse than disclosing imperfect results with a documented mitigation plan.
Section 6: Testing, Validation, and Performance Evidence
This section is your evidence vault for the claim that the system works as intended. It must include:
- Testing procedures applied and the results obtained
- Validation methodology and the validation dataset characteristics
- Performance metrics used and the results across the full test population and relevant subgroups
- Pre-determined performance levels — the pass/fail thresholds set before testing, not after
- Testing results for specific use cases or subpopulations where performance may differ from the overall metric
The framing of pre-determined performance thresholds is important. Regulators look for evidence that you set your performance bar before you ran the tests — not that you set it at whatever your system happened to achieve. A test report that says “we achieved 94% accuracy, which meets our target” is stronger when you can show that the 94% target was established in your test plan before execution.
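One practical way to make the "thresholds before results" ordering demonstrable is to freeze and fingerprint the test plan before any evaluation runs. The sketch below is one possible approach, not a prescribed method; the metric names and values are invented.

```python
import json
import hashlib
from datetime import date

# Thresholds are fixed and fingerprinted BEFORE evaluation runs; recording
# the hash in the dated test plan evidences that the bar was not moved later.
test_plan = {
    "plan_date": str(date(2026, 1, 15)),
    "thresholds": {"accuracy": 0.92, "recall_worst_subgroup": 0.85},
}
plan_fingerprint = hashlib.sha256(
    json.dumps(test_plan, sort_keys=True).encode()
).hexdigest()

# Later, evaluation results are compared against the frozen plan.
results = {"accuracy": 0.94, "recall_worst_subgroup": 0.87}
verdict = {
    metric: results[metric] >= bar
    for metric, bar in test_plan["thresholds"].items()
}
print(plan_fingerprint[:12], verdict)
```

Committing the fingerprint to version control (or a timestamped ticket) before the test run gives you exactly the evidence trail described above.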
For statistical testing methodology, the Google PAIR Guidebook provides practical guidance on fairness evaluation methodologies that translate well into Annex IV documentation — particularly for systems making decisions about people across demographic groups.
Section 7: Logs, Monitoring, and Instructions for Deployers
This section covers the operational documentation that deployers need to use the system responsibly under Article 13. It must include:
- The information and instructions that the provider supplies to deployers — capabilities, limitations, maintenance requirements, and instructions for human oversight
- The automatic logging specifications: what the system logs, at what granularity, and for how long
- The post-market monitoring plan: how performance will be tracked in production and what triggers a system review
What AI Compliance Logging Must Capture
Article 12 requires high-risk AI systems to have logging capabilities that enable post-hoc review of system operation. At minimum, your logs must capture:
- The time period of each use of the system (start and end date/time)
- The reference database against which the system checked (relevant for biometric systems)
- The input data that led to the system’s output (subject to data minimisation requirements)
- The identity of the natural persons involved in verification of the outputs (for human oversight audit trail)
Logs must be retained for at least six months, or longer where sector-specific law requires. The logging architecture and retention policy must be documented in the Technical File. For a deep dive into building an immutable audit trail that satisfies these requirements, see our post on creating an immutable audit trail and evidence vault.
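The minimum fields listed above translate naturally into a structured, append-only log record. The sketch below is a minimal illustration of that shape — the schema and field names are assumptions, not a mandated format, and the input reference is a pointer rather than raw personal data to respect data minimisation.

```python
import json
from datetime import datetime, timezone, timedelta

def make_log_record(session_start, session_end, input_ref, output, reviewer):
    """One Article 12 log entry: usage period, data-minimised input reference,
    system output, and the human verifier's identity. Schema is illustrative."""
    return {
        "period_start": session_start.isoformat(),
        "period_end": session_end.isoformat(),
        "input_ref": input_ref,    # a pointer, not the raw personal data
        "output": output,
        "verified_by": reviewer,   # Article 14 human oversight audit trail
    }

start = datetime(2026, 3, 2, 9, 15, tzinfo=timezone.utc)
record = make_log_record(start, start + timedelta(minutes=4),
                         "candidate/8841", "shortlisted", "recruiter-17")
line = json.dumps(record)  # append-only JSON lines, retained >= 6 months
print(line)
```

Writing each record as an append-only JSON line makes the six-month (or longer) retention and post-hoc review requirements straightforward to implement and to document in Section 7.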
Section 8: EU Declaration of Conformity and Conformity Assessment Reference
The final section of the Technical File references the conformity assessment procedure that was followed and attaches the EU Declaration of Conformity — the formal legal statement that the provider makes confirming the system meets all applicable EU AI Act requirements.
The Declaration of Conformity is a specific document format required by Article 47. It must include:
- The provider’s name and address
- The AI system’s name, type, and version
- A statement that the Declaration of Conformity is issued under the sole responsibility of the provider
- Identification of the conformity assessment procedure followed (internal assessment or Notified Body)
- If applicable, the Notified Body’s identification number and the certificate reference
- Reference to any harmonised standards or technical specifications applied
- The place and date of issue, the name and function of the signatory, and their signature
The Declaration of Conformity must be kept updated — if substantial modifications are made to the system after initial conformity assessment, the assessment must be repeated and the Declaration reissued.
How to Structure Your Technical File for Audit Readiness
Knowing what goes in the Technical File is half the challenge. Structuring it so that a regulator can navigate it efficiently — and so your team can maintain it without a documentation crisis every time the model is updated — is the other half.
The Master Document Architecture
Structure your Technical File as a master index document that links to subordinate documents for each Annex IV section. This architecture has three advantages:
Maintainability: When your model is updated, you update the relevant subordinate document (e.g., the Model Card or the performance test report) and update the version reference in the master index. You don’t need to rewrite the entire Technical File for every system change.
Team ownership: Each subordinate document can be owned by the appropriate team. ML engineering owns the model architecture document. Data engineering owns the dataset documentation. Security owns the cybersecurity assessment. The master index is the compliance team’s document.
Regulatory navigation: A market surveillance authority that opens your Technical File should be able to find any specific piece of information — the bias testing results, the cybersecurity measures, the Declaration of Conformity — within two clicks of the master index. If they have to read the entire document to find a specific answer, the documentation is too dense.
Version Control and Change Management
Every version of your Technical File — and every subordinate document within it — needs to be version controlled. This means:
- A version number and effective date on every document
- A change log that records what changed, why, and who approved it
- Retention of previous versions for the full 10-year retention period (Article 18)
- A defined change management process that specifies what types of system changes trigger a Technical File update and what level of sign-off is required
A “substantial modification” to the system — defined in Article 3(23) as a change that affects the system’s compliance with requirements or changes its intended purpose — requires a new conformity assessment, not just a Technical File update. Build a change classification process into your engineering workflow so that the compliance team is notified of changes that cross this threshold. For guidance on what constitutes a substantial modification in practice, see the European AI Office’s published guidance at the Commission’s AI policy hub.
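A change classification process like the one described above can be a simple triage function in the release pipeline. The sketch below is one possible shape — the change-type labels are hypothetical, and the safe default routes anything unrecognised to compliance review rather than guessing.

```python
# Illustrative change triage: which engineering changes trigger a full
# conformity reassessment (Article 3(23)) vs. a Technical File update only.
SUBSTANTIAL_TRIGGERS = {
    "training_data_major_change",
    "model_architecture_change",
    "new_use_case",
    "oversight_mechanism_change",
}
FILE_UPDATE_ONLY = {"performance_optimisation", "bug_fix", "ui_change"}

def classify_change(change_type: str) -> str:
    if change_type in SUBSTANTIAL_TRIGGERS:
        return "substantial modification: new conformity assessment required"
    if change_type in FILE_UPDATE_ONLY:
        return "technical file update only"
    # Safe default: never silently ship an unclassified change.
    return "unclassified: route to compliance review"

print(classify_change("new_use_case"))
print(classify_change("bug_fix"))
print(classify_change("prompt_template_tweak"))
```

Wiring this into the release checklist is what guarantees the compliance team is notified when a change crosses the Article 3(23) threshold.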
Common Technical File Gaps That Regulators Find First
Based on early conformity assessment experience and published guidance from EU member state supervisory authorities, these are the most common Technical File deficiencies in 2026:
| Gap Category | What’s Missing | Why It Matters | Annex IV Section |
|---|---|---|---|
| Subgroup performance data | Performance metrics reported only at aggregate level — no breakdown by protected characteristic | Article 10(2)(f) requires bias evaluation; aggregate metrics hide disparate impact | Sections 3, 4, 6 |
| Data provenance gaps | Training data sources listed but not traced to specific datasets with terms of use | Creates copyright and data governance compliance gaps; regulator cannot verify lawful data use | Section 3 |
| Missing AI-specific security testing | General IT security documentation present; no adversarial robustness testing documented | Article 15 explicitly requires AI-specific security resilience; IT security alone does not satisfy it | Section 5 |
| Post-dated test plans | Performance thresholds documented after test results are known | Undermines credibility of testing regime; regulators check metadata timestamps | Section 6 |
| No negative scope statement | Intended purpose documented but out-of-scope uses not explicitly excluded | Creates liability exposure for misuse scenarios; explicit exclusions protect the provider | Section 1 |
| Outdated documentation | Technical File reflects an earlier version of the system; updates not incorporated | Article 11 requires documentation to be kept up to date; stale documentation is itself a violation | All sections |
Automating AI Technical Documentation: What Can and Cannot Be Automated
One of the most common questions from engineering teams is: how much of this can we automate? The honest answer is: more than you might think, but not all of it.
What Can Be Automated
- System metadata extraction: Model name, version, architecture type, parameter count, training compute, and API endpoint information can all be extracted programmatically from your ML infrastructure.
- Performance metric population: Accuracy, precision, recall, F1, AUC, and fairness metrics generated during your evaluation pipeline can be automatically piped into your Technical File’s performance section.
- Dataset statistics: Record counts, feature distributions, demographic breakdowns, and data quality metrics generated by your data pipeline can populate Section 3 automatically.
- Log specification generation: The logging configuration of your deployment infrastructure can be extracted and formatted into the Section 7 logging documentation.
- Version control and change detection: Changes to model configurations, dataset versions, or system architecture can trigger automated alerts that flag a Technical File update requirement.
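The automatable items above all reduce to the same pattern: pull structured metadata from your ML infrastructure and render it into the Technical File section. A minimal sketch, assuming a hypothetical metadata dictionary (the field names are invented, not a real registry API):

```python
from string import Template

# Hypothetical metadata pulled programmatically from a model registry and
# evaluation pipeline; field names here are assumptions for illustration.
system_metadata = {
    "model_name": "screening-ranker",
    "model_version": "2.4.1",
    "architecture": "gradient-boosted trees",
    "accuracy": 0.94,
    "training_records": 182_000,
}

# Render the extracted values into a Technical File section template.
section_template = Template(
    "Model: $model_name v$model_version ($architecture)\n"
    "Overall accuracy: $accuracy\n"
    "Training dataset size: $training_records records"
)
draft = section_template.substitute(system_metadata)
print(draft)
```

The same pattern scales to the full Annex IV template: each automatable field maps to a registry or pipeline source, and anything without a source is flagged for human authorship.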
What Requires Human Judgment
- Intended purpose description: The specific use case scope, the negative scope, and the population description require human authorship — they are legal commitments, not technical metrics.
- Bias assessment interpretation: The detection of disparate impact in performance metrics is automatable; the assessment of whether that impact is acceptable, explainable, and mitigated requires human judgment and sign-off.
- Risk management documentation: The Article 9 risk management process produces outputs that must be documented — but the risk identification and evaluation itself is a structured human process.
- Cybersecurity threat assessment: Adversarial testing can be partially automated, but the threat model — what adversaries are assumed, what attack vectors are in scope — requires human expert input.
- Declaration of Conformity: This is a legally binding document signed by a responsible officer. It cannot be automated.
Unorma’s Document Generator (F06) automates the extractable sections — pulling system metadata, performance metrics, and dataset statistics directly into a structured Annex IV template — while surfacing the sections that require human input as a guided workflow. This typically reduces Technical File creation time by 60–70% compared to building from scratch in a document editor. For the audit readiness check that confirms your completed Technical File is defensible, see our Audit Simulation tool (F08).
The Technical File Across the AI System Lifecycle
Understanding how the Technical File evolves across your system’s lifecycle prevents the most expensive compliance mistake: treating it as a one-time deliverable.
| Lifecycle Stage | Technical File Activity | Trigger |
|---|---|---|
| Design Phase | Section 1 (intended purpose) and Section 2 (architecture) drafts begin; risk management system initiated | Classification as high-risk AI confirmed |
| Development Phase | Sections 3 (data), 5 (security), 6 (testing) populated as development proceeds; model card maintained | Each training run; each dataset change; each security review |
| Pre-Market | All sections completed and reviewed; conformity assessment conducted; Declaration of Conformity issued; EU database registration | Before market placement |
| Post-Market Operations | Section 7 (monitoring) updated with operational data; performance metrics updated; incident records added | Continuous; formal review at least annually |
| Substantial Modification | Affected sections updated; new conformity assessment; new Declaration of Conformity; EU database entry updated | Any change meeting Article 3(23) substantial modification definition |
| End of Market Life | Final version archived; retention period begins (10 years from Article 18) | System withdrawn from market |
For the practical application of post-market monitoring in the Technical File context — particularly managing model drift and its documentation implications — see our dedicated post: Managing Model Drift: Post-Market Monitoring Requirements for 2026.
Frequently Asked Questions
What happens if the regulator finds gaps in our Technical File?
Under Article 79, national market surveillance authorities can issue corrective action orders requiring gaps to be remediated within a specified timeframe. If the gaps represent non-compliance with mandatory requirements (rather than merely incomplete documentation of compliant practices), fines under Article 99 may apply. In practice, early enforcement is likely to focus on directing remediation rather than immediate fines for good-faith operators with otherwise compliant systems. The most important factor is demonstrating that you are actively working to close identified gaps — not that your Technical File was perfect from day one.
Can the same Technical File cover multiple versions of our AI system?
Yes, with proper version control. The recommended approach is a master Technical File that references version-specific subordinate documents for the sections that change between versions (model card, performance results, security assessment). Each version’s complete Technical File state — the master index plus all referenced documents at their version-appropriate state — must be reconstructable from your archive for the full ten-year retention period. A change log at the master document level, recording what changed in each version and when, is essential for demonstrating this to a regulator.
What is a “substantial modification” that requires a new conformity assessment?
Article 3(23) defines a substantial modification as a change that affects the AI system’s compliance with requirements or alters its intended purpose. In practice, the following typically trigger a reassessment: significant changes to the training dataset; changes to the model architecture that affect performance characteristics; expansion into a new use case or user population not covered by the original intended purpose; changes to the system’s inputs or outputs that affect its risk profile; and changes to the human oversight mechanisms. Minor bug fixes, performance optimisations that don’t change risk characteristics, and UI changes that don’t affect system functionality are generally not substantial modifications — but document your classification of any significant change.
Does a third-party GPAI model provider’s documentation count toward my Technical File?
Partially. If you’re building on a third-party GPAI model, the model provider’s technical documentation — model cards, training data summaries, performance benchmarks — can be referenced in your Technical File as evidence of the underlying model’s characteristics. However, it does not substitute for your own documentation of the system you’ve built on top of it: your intended use case, your data governance for fine-tuning or RAG data, your cybersecurity architecture, and your conformity assessment. The provider’s documentation covers the model; your Technical File must cover the system.
How long must AI technical documentation be retained?
Under Article 18, technical documentation and the EU Declaration of Conformity must be kept available for national competent authorities for ten years after the AI system has been placed on the market or put into service. If the system is withdrawn from the market before that period ends, the ten-year clock still runs from the date of market placement — not from the date of withdrawal. Build your document retention system to handle this automatically.
What are the Article 11 requirements for AI technical documentation?
Article 11 requires providers of high-risk AI systems to draw up comprehensive technical documentation before market placement and keep it updated throughout the system’s operational life. The documentation must be sufficient for competent authorities to assess compliance with all applicable requirements. The specific content is defined in Annex IV across eight sections: general system description, design and development process, training and test data information, monitoring and control mechanisms, cybersecurity measures, testing results, operational instructions for deployers, and the EU Declaration of Conformity.
Ready to build your Article 11 Technical File without starting from a blank page?
Unorma’s Document Generator auto-populates your Annex IV Technical File from your system’s metadata, performance metrics, and dataset records — cutting documentation time by up to 70%. The Audit Simulation then checks every section against the Article 11 requirements before a regulator does.
Start Your Technical File with Unorma →
Download Our Developer Obligation Whitepaper →
← Back to the Ultimate Guide to EU AI Act Compliance

Jasper Claes is a Compliance Manager and consultant specializing in AI governance for high-scale technology companies operating in regulated markets. He advises product and legal teams on implementing practical compliance frameworks aligned with evolving regulations such as the EU AI Act. Through his writing, Jasper focuses on translating complex regulatory requirements into clear, actionable guidance for teams building and deploying AI systems.
