AI Compliance Glossary

A glossary of AI compliance management and EU AI Act terminology

A

AI Act (European Union AI Act)

The world’s first comprehensive horizontal legal framework for Artificial Intelligence, designed to address the risks of AI while positioning Europe as a leader in trustworthy AI.

AI Literacy (Art. 4)

The skills, knowledge, and understanding that allow providers, deployers, and affected persons to make an informed deployment of AI systems and to gain awareness of the opportunities and risks of AI. Unorma’s Train module automates this requirement.

Annex III

The specific section of the Act that lists "High-Risk" AI use cases, such as biometric identification, critical infrastructure, and recruitment.

Anonymized Data

Data that has been processed so that the data subject is no longer identifiable. Unlike pseudonymized data, truly anonymized data falls outside the scope of the GDPR.

Authorized Representative (Art. 22)

A natural or legal person established in the EU who has received a written mandate from a non-EU AI provider to perform the provider’s obligations under the Act on its behalf.

B

Biometric Categorization

The practice of assigning natural persons to specific categories based on their biometric data. Systems that infer sensitive attributes such as race, political opinions, or religious beliefs are prohibited under Article 5 of the Act.

C

CE Marking

A mandatory conformity marking for products sold within the European Economic Area. High-risk AI systems must obtain a CE mark to prove they meet the requirements of the AI Act.

Conformity Assessment

The process of demonstrating whether the requirements of the AI Act relating to a high-risk AI system have been fulfilled. This can be internal or involve a "Notified Body."

Common Specifications (Art. 41)

A set of technical requirements providing a means to comply with the AI Act when harmonized standards do not exist or are insufficient.

D

Dashboard

A visual interface that displays key metrics, analytics, and performance indicators in real time for easy monitoring and decision-making.

Deployer (Art. 3)

Any natural or legal person using an AI system under its authority in a professional capacity (formerly referred to as a "User").

Deepfake

AI-generated content (images, audio, or video) that falsely appears authentic. The AI Act mandates transparency labels for such content.

Distributor

Any natural or legal person in the supply chain, other than the provider or importer, that makes an AI system available on the Union market.

Data Poisoning

A type of attack where malicious data is introduced into training datasets to compromise an AI system's performance or behavior.

E

Evidence Vault

A centralized digital repository for storing technical documentation, logs, and certificates. See Unorma’s Evidence Vault (F07).

Error Log

A record of issues, bugs, or failures that occur within a system, used by developers to diagnose and resolve problems.

ETL (Extract, Transform, Load)

A process for collecting data from various sources, transforming it into a usable format, and loading it into a database or data warehouse.
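As a minimal sketch of the three ETL stages, assuming illustrative data and an in-memory SQLite database as the target (the `users` table and the sample records are made up for the example):

```python
import sqlite3

def extract(source_rows):
    """Extract: pull raw records from a source (here, an in-memory list
    standing in for an API, CSV file, or production database)."""
    return source_rows

def transform(raw):
    """Transform: normalize names and drop incomplete records."""
    return [(name.strip().title(), age)
            for name, age in raw
            if name.strip() and age is not None]

def load(records, conn):
    """Load: write the cleaned records into the target table."""
    conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT, age INTEGER)")
    conn.executemany("INSERT INTO users VALUES (?, ?)", records)
    return conn.execute("SELECT COUNT(*) FROM users").fetchone()[0]

raw = [(" alice ", 31), ("BOB", 42), ("  ", 7), ("carol", None)]
conn = sqlite3.connect(":memory:")
count = load(transform(extract(raw)), conn)  # two rows survive cleaning
```

In practice each stage would talk to real systems (APIs, files, a warehouse), but the pipeline shape stays the same.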

Event tracking

Monitoring specific user actions within a product (such as clicks or form submissions) to analyze behavior and optimize experiences.

F

Fundamental Rights Impact Assessment (FRIA)

A mandatory assessment for certain high-risk AI systems (primarily in the public sector) to evaluate how the system affects the basic rights of citizens.

Funnel

A marketing or sales model that illustrates the stages users go through before converting into paying customers.
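A funnel's step-to-step conversion rates can be computed directly from stage counts; the stage names and numbers below are illustrative:

```python
def funnel_conversion(stages):
    """Compute the step-to-step conversion rate between consecutive
    funnel stages, given ordered (stage_name, user_count) pairs."""
    return {name: count / prev_count
            for (_, prev_count), (name, count) in zip(stages, stages[1:])}

# Illustrative numbers: 1000 visitors, 200 sign-ups, 50 paying customers.
stages = [("visit", 1000), ("signup", 200), ("paid", 50)]
rates = funnel_conversion(stages)  # {"signup": 0.2, "paid": 0.25}
```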

Functional requirements

Specific behaviors or functions a system must perform, defining how software should operate from the user's perspective.

Feedback loop

A process of gathering user feedback, analyzing it, and applying insights to improve the product or customer experience.

G

General Purpose AI (GPAI)

An AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks (e.g., Large Language Models like GPT-4).

H

High-Risk AI System

AI systems that pose significant threats to health, safety, or fundamental rights. These are subject to the strictest compliance obligations under the Act.

Human Oversight (Art. 14)

The requirement that AI systems must be designed so that natural persons can oversee their functioning, prevent "automation bias," and intervene or shut down the system if necessary.

I

Incident Reporting

The mandatory obligation for providers to report any "serious incident" involving an AI system to the national supervisory authorities.

Importer

Any natural or legal person established in the EU who places on the market an AI system that bears the name or trademark of a person established outside the EU.

Intended Purpose

The use for which an AI system is intended by the provider, as specified in the instructions for use or technical documentation.

Individual Predictive Policing

AI systems used to assess the risk of an individual committing a crime based solely on profiling or personality traits. This is a Prohibited Practice under Article 5.

Input Data

Data provided to or acquired by an AI system on the basis of which the system produces an output.

Infrastructure as a Service (IaaS)

A cloud computing model that provides virtualized computing resources like servers, storage, and networking on demand.

J

Journey mapping

A visual representation of the steps users take when interacting with a product, used to identify pain points and opportunities for improvement.

K

KPI (Key Performance Indicator)

A measurable value that shows how effectively a company or team is achieving its business objectives.

Knowledge base

A centralized collection of articles, guides, and FAQs designed to help users find answers and learn independently.

L

Large Language Model (LLM)

A type of GPAI trained on vast amounts of text data, capable of generating human-like responses.

Limited Risk AI

AI systems that are subject only to light transparency obligations, such as chatbots or systems that generate synthetic content.

Low-Code Platform

A development environment that enables users to create software with minimal manual coding through drag-and-drop tools and visual interfaces.

M

Model Card

A standardized document providing a brief summary of a model’s trained parameters, intended use, and limitations. Unorma’s Document Generator (F06) creates these automatically.

Market Surveillance Authority

The national authority responsible for monitoring whether AI systems on the market comply with the requirements of the Act.

Machine Learning Model

A mathematical construct that generates an inference or prediction based on input data. The AI Act regulates the system built around these models.

Marketing automation

Software that automates repetitive marketing tasks such as email campaigns, lead nurturing, and social media management.

Microservices

An architectural approach where software is built as a collection of small, independent services that communicate through APIs.

Model Drift

The phenomenon where an AI model's performance degrades over time due to changes in data or environment. Tracking drift is essential for Post-Market Monitoring (Art. 72).
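One simple way to watch for drift is to compare the distribution of live inputs against a reference sample collected at deployment. This is a crude, illustrative heuristic (real monitoring typically uses richer statistics such as PSI or KS tests); the data here is made up:

```python
import statistics

def drift_score(reference, live):
    """Crude drift heuristic: how many reference standard deviations
    the live input mean has shifted away from the reference mean."""
    mu = statistics.mean(reference)
    sigma = statistics.stdev(reference)
    return abs(statistics.mean(live) - mu) / sigma

reference = [10, 11, 9, 10, 12, 10, 11, 9]   # inputs sampled at deployment
stable = [10, 11, 10, 9]                     # recent inputs, no drift
drifted = [18, 19, 20, 18]                   # input distribution has moved
# A score above ~2.0 suggests the inputs no longer match the reference.
```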

N

Notified Body

An independent third-party conformity assessment body designated by an EU member state to check the compliance of certain high-risk AI systems.

No-code platform

A development environment that allows users to build apps and workflows without writing code, using visual interfaces instead.

Notification system

A feature that sends alerts or updates to users about important events, changes, or actions within a product.

O

Onboarding flow

A step-by-step process designed to help new users get started and experience the product's core value quickly.

Opt-In

A user's consent to receive communication, notifications, or marketing materials, typically via email or in-app prompts.

P

Post-Market Monitoring

The process where providers collect and review experience gained from the use of AI systems they have placed on the market to identify the need for corrective actions.

Persona (User persona)

A fictional profile representing a key segment of your audience, used to guide product design and marketing strategies.

Provider (Art. 3)

An entity that develops an AI system (or has it developed) with a view to placing it on the market or putting it into service under its own name.

Prototype

An early, simplified version of a product used to test concepts and gather feedback before full development.

Q

Quality assurance (QA)

The process of systematically testing software to ensure it meets quality standards and is free from defects.

Query

A request made to a database to retrieve or manipulate specific information.
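A minimal example using Python's built-in sqlite3 module; the table and rows are invented for illustration:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE systems (name TEXT, risk TEXT)")
conn.executemany("INSERT INTO systems VALUES (?, ?)",
                 [("chatbot", "limited"), ("cv-screener", "high")])

# A query: a structured request asking the database for specific rows.
high_risk = conn.execute(
    "SELECT name FROM systems WHERE risk = ?", ("high",)
).fetchall()  # [("cv-screener",)]
```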

Quota

A set limit or target for performance, usage, or sales within a defined period.

R

Readiness Score

A visual metric used to track how close a system is to full compliance. Featured in the Unorma Dashboard.
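One way such a metric could be computed is as the share of compliance requirements currently satisfied. This sketch is purely hypothetical and is not Unorma's actual scoring method; the requirement names are invented:

```python
def readiness_score(requirements):
    """Hypothetical readiness score: the percentage of compliance
    requirements marked as satisfied (illustrative only)."""
    if not requirements:
        return 0.0
    done = sum(1 for satisfied in requirements.values() if satisfied)
    return round(100 * done / len(requirements), 1)

checks = {"risk_management": True, "technical_docs": True,
          "human_oversight": False, "logging": True}
score = readiness_score(checks)  # 75.0
```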

Risk Management System (Art. 9)

A continuous, iterative process run throughout the entire lifecycle of a high-risk AI system to identify, estimate, and evaluate risks.

Rate limiting

A technique that controls the number of requests a user or application can make within a specific time period to prevent abuse and ensure system stability.
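A common implementation is the token bucket: each request spends one token, and tokens refill at a fixed rate up to a capacity. A minimal sketch (the capacity and refill rate are arbitrary example values):

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: each request spends one token;
    tokens refill at a fixed rate up to a maximum capacity."""
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.tokens = capacity
        self.refill = refill_per_sec
        self.updated = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill tokens proportionally to the time elapsed since last call.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.updated) * self.refill)
        self.updated = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, refill_per_sec=1)
results = [bucket.allow() for _ in range(5)]  # first 3 pass, rest rejected
```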

Real-time processing

The ability to process data and deliver results immediately as events occur, enabling instant updates and interactions.

Real-time Remote Biometric Identification

The automated recognition of physical features in public spaces for law enforcement purposes. This is strictly restricted and usually requires prior judicial authorization.

Regression testing

Testing performed to ensure that new changes or updates don't break existing functionality in previously working features.

Regulatory Sandbox

A controlled environment set up by authorities to test innovative AI systems under supervision for a limited time.

Reasonably Foreseeable Misuse

Use of an AI system in a way not intended by the provider, but which may result from predictable human behavior or interaction with other systems.

S

Sandboxes (Regulatory Sandboxes)

See Regulatory Sandbox.

Substantial Modification

Any change to an AI system after it has been placed on the market that was not foreseen in the initial assessment and affects its compliance. This often requires a new conformity assessment.

Social Scoring

Evaluating or classifying individuals based on social behavior or personality traits over time, leading to unfavorable treatment. This practice is banned in the EU.

T

Technical Documentation (Art. 11)

The "living document" that contains all information necessary for authorities to assess the compliance of an AI system. It must be kept for 10 years after the system is placed on the market.

Tech stack

The combination of software technologies and tools a company uses to build and run its applications.

Testing in Real-World Conditions

The evaluation of an AI system in its actual environment outside of a laboratory, subject to specific transparency and consent rules.

U

Untargeted Scraping

The practice of bulk-harvesting facial images from the internet or CCTV to build facial recognition databases. This is a prohibited practice under the Act.

Usage-based pricing

A pricing model where customers pay based on the amount they use a service.

V

Validation Data

Data used to evaluate a trained AI system and tune its non-learnable parameters (hyperparameters) to prevent "overfitting."

W

Webhooks

Automated messages sent from an app when a specific event occurs, enabling integrations between software.
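Because webhooks arrive over the open internet, senders commonly attach an HMAC signature so the receiver can verify authenticity. A minimal sketch using Python's standard hmac module (the secret and payload are invented for the example):

```python
import hmac
import hashlib

SECRET = b"shared-webhook-secret"  # illustrative; agreed between both apps

def sign(payload: bytes) -> str:
    """Sender attaches an HMAC-SHA256 signature so the receiver can
    verify the webhook really came from the sending app."""
    return hmac.new(SECRET, payload, hashlib.sha256).hexdigest()

def verify(payload: bytes, signature: str) -> bool:
    """Receiver recomputes the signature and compares in constant time."""
    return hmac.compare_digest(sign(payload), signature)

payload = b'{"event": "assessment.completed", "system_id": 42}'
sig = sign(payload)  # sent alongside the payload, e.g. in an HTTP header
```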

Workflow automation

The use of software to automate repetitive tasks and processes within an organization.

White-label SaaS

A SaaS product that can be rebranded and sold by another company as its own.

Y

Year-over-year (YoY)

A growth metric comparing performance for the same period in consecutive years.
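The arithmetic is simply the change versus the same period last year, expressed as a percentage (the revenue figures below are illustrative):

```python
def yoy_growth(current, prior):
    """Year-over-year growth: change versus the same period in the
    prior year, as a percentage of the prior-year value."""
    return (current - prior) / prior * 100

growth = yoy_growth(current=130_000, prior=100_000)  # 30.0 (% growth)
```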

Z

Zero downtime deployment

Deploying software updates without disrupting service availability for users.