Glossary of AI Compliance Management and the EU AI Act
The world’s first comprehensive horizontal legal framework for Artificial Intelligence, designed to address the risks of AI while positioning Europe as a leader in trustworthy AI.
The skills, knowledge, and understanding that allow providers, deployers, and affected persons to make an informed deployment of AI systems and to gain awareness of the opportunities and risks of AI. Unorma’s Train module automates this requirement.
The specific section of the Act that lists "High-Risk" AI use cases, such as biometric identification, critical infrastructure, and recruitment.
Data that has been processed so that the data subject is no longer identifiable. Unlike pseudonymized data, truly anonymized data falls outside the scope of the GDPR.
A natural or legal person established in the EU who has received a mandate from a non-EU AI provider to perform their obligations under the Act.
The practice of assigning natural persons to specific categories on the basis of their biometric data. Categorization that infers sensitive attributes (e.g., race, political opinions, or religious beliefs) is prohibited under the Act.
A mandatory conformity marking for products sold within the European Economic Area. High-risk AI systems must obtain a CE mark to prove they meet the requirements of the AI Act.
The process of demonstrating whether the requirements of the AI Act relating to a high-risk AI system have been fulfilled. This can be internal or involve a "Notified Body."
A set of technical requirements providing a means to comply with the AI Act when harmonized standards do not exist or are insufficient.
A visual interface that displays key metrics, analytics, and performance indicators in real time for easy monitoring and decision-making.
Any natural or legal person using an AI system under its authority in a professional capacity (formerly referred to as a "User").
AI-generated content (images, audio, or video) that falsely appears authentic. The AI Act mandates transparency labels for such content.
Any natural or legal person in the supply chain, other than the provider or importer, that makes an AI system available on the Union market.
A type of attack where malicious data is introduced into training datasets to compromise an AI system's performance or behavior.
A centralized digital repository for storing technical documentation, logs, and certificates. See Unorma’s Evidence Vault (F07).
A record of issues, bugs, or failures that occur within a system, used by developers to diagnose and resolve problems.
A process for collecting data from various sources, transforming it into a usable format, and loading it into a database or data warehouse.
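The extract–transform–load pattern can be sketched in a few lines of Python. This is a minimal illustration only: the field names, data, and in-memory SQLite target are all made up for the example.

```python
# Minimal ETL sketch: extract rows from raw comma-separated lines,
# transform them into a normalized shape, and load them into an
# in-memory SQLite database. All names here are illustrative.
import sqlite3

def extract(raw_lines):
    """Extract: split raw comma-separated lines into fields."""
    return [line.strip().split(",") for line in raw_lines]

def transform(rows):
    """Transform: normalize names to lowercase, cast ages to int."""
    return [(name.strip().lower(), int(age)) for name, age in rows]

def load(rows, conn):
    """Load: insert the transformed rows into a database table."""
    conn.execute("CREATE TABLE IF NOT EXISTS users (name TEXT, age INTEGER)")
    conn.executemany("INSERT INTO users VALUES (?, ?)", rows)

raw = ["Alice, 30", "Bob, 25"]
conn = sqlite3.connect(":memory:")
load(transform(extract(raw)), conn)
print(conn.execute("SELECT name, age FROM users ORDER BY age").fetchall())
# → [('bob', 25), ('alice', 30)]
```

In practice the same three stages run against real sources (APIs, files, event streams) and a persistent warehouse, but the shape of the pipeline is the same.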
Monitoring specific user actions within a product (such as clicks or form submissions) to analyze behavior and optimize experiences.
A mandatory assessment for certain high-risk AI systems (primarily in the public sector) to evaluate how the system affects the basic rights of citizens.
A marketing or sales model that illustrates the stages users go through before converting into paying customers.
Specific behaviors or functions a system must perform, defining how software should operate from the user's perspective.
A process of gathering user feedback, analyzing it, and applying insights to improve the product or customer experience.
An AI model that displays significant generality and is capable of competently performing a wide range of distinct tasks (e.g., Large Language Models like GPT-4).
AI systems that pose significant threats to health, safety, or fundamental rights. These are subject to the strictest compliance obligations under the Act.
The requirement that AI systems must be designed so that natural persons can oversee their functioning, prevent "automation bias," and intervene or shut down the system if necessary.
The mandatory obligation for providers to report any "serious incident" involving an AI system to the national supervisory authorities.
Any person established in the EU who places an AI system on the market that bears the name or trademark of a person established outside the EU.
The use for which an AI system is intended by the provider, as specified in the instructions for use or technical documentation.
AI systems used to assess the risk of an individual committing a crime based solely on profiling or personality traits. This is a Prohibited Practice under Article 5.
Data provided to or acquired by an AI system on the basis of which the system produces an output.
A cloud computing model that provides virtualized computing resources like servers, storage, and networking on demand.
A visual representation of the steps users take when interacting with a product, used to identify pain points and opportunities for improvement.
A measurable value that shows how effectively a company or team is achieving its business objectives.
A centralized collection of articles, guides, and FAQs designed to help users find answers and learn independently.
A type of GPAI trained on vast amounts of text data, capable of generating human-like responses.
AI systems that are subject only to light transparency obligations, such as chatbots or AI-generated content.
A development environment that enables users to create software with minimal manual coding through drag-and-drop tools and visual interfaces.
A standardized document providing a brief summary of a model’s training data, performance, intended use, and limitations. Unorma’s Document Generator (F06) creates these automatically.
The national authority responsible for monitoring whether AI systems on the market comply with the requirements of the Act.
A mathematical construct that generates an inference or prediction based on input data. The AI Act regulates the system built around these models.
Software that automates repetitive marketing tasks such as email campaigns, lead nurturing, and social media management.
An architectural approach where software is built as a collection of small, independent services that communicate through APIs.
The phenomenon where an AI model's performance degrades over time due to changes in data or environment. Tracking drift is essential for Post-Market Monitoring (Art. 72).
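A simple way to make drift concrete: compare live input data against a reference window and flag when the distribution shifts. The check below is an illustrative sketch only; real monitoring pipelines typically use statistical tests (e.g., a KS test or population stability index) across many features.

```python
# Illustrative drift check: flag drift when the mean of live data
# shifts by more than `threshold` reference standard deviations.
def mean(xs):
    return sum(xs) / len(xs)

def drifted(reference, live, threshold=0.5):
    """Return True if the live window's mean has moved too far
    from the reference window's mean (in reference std devs)."""
    mu = mean(reference)
    var = mean([(x - mu) ** 2 for x in reference])
    std = var ** 0.5 or 1.0  # guard against zero variance
    return abs(mean(live) - mu) / std > threshold

reference = [0.9, 1.0, 1.1, 1.0, 0.95, 1.05]
print(drifted(reference, [1.0, 1.02, 0.98]))  # → False (stable)
print(drifted(reference, [2.0, 2.1, 1.9]))    # → True (drifted)
```

When a check like this fires, the post-market monitoring process should investigate whether the model still meets its documented performance.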
An independent third-party conformity assessment body designated by an EU member state to check the compliance of certain high-risk AI systems.
A development environment that allows users to build apps and workflows without writing code, using visual interfaces instead.
A feature that sends alerts or updates to users about important events, changes, or actions within a product.
A step-by-step process designed to help new users get started and experience the product's core value quickly.
A user's consent to receive communication, notifications, or marketing materials, typically via email or in-app prompts.
The process where providers collect and review experience gained from the use of AI systems they have placed on the market to identify the need for corrective actions.
A fictional profile representing a key segment of your audience, used to guide product design and marketing strategies.
An entity that develops an AI system (or has it developed) with a view to placing it on the market or putting it into service under its own name.
An early, simplified version of a product used to test concepts and gather feedback before full development.
The process of systematically testing software to ensure it meets quality standards and is free from defects.
A request made to a database to retrieve or manipulate specific information.
A set limit or target for performance, usage, or sales within a defined period.
A visual metric used to track how close a system is to full compliance. Featured in the Unorma Dashboard.
A continuous, iterative process run throughout the entire lifecycle of a high-risk AI system to identify, estimate, and evaluate risks.
A technique that controls the number of requests a user or application can make within a specific time period to prevent abuse and ensure system stability.
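One common rate-limiting strategy is the token bucket: each request spends a token, and tokens refill at a fixed rate, allowing short bursts up to the bucket's capacity. The sketch below is illustrative, not a production implementation (which would also need thread safety and per-client buckets).

```python
# Token-bucket rate limiter sketch (single client, illustrative).
import time

class TokenBucket:
    def __init__(self, capacity, refill_per_sec):
        self.capacity = capacity
        self.refill_per_sec = refill_per_sec
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self):
        now = time.monotonic()
        # Refill proportionally to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.refill_per_sec)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False

bucket = TokenBucket(capacity=3, refill_per_sec=1)
results = [bucket.allow() for _ in range(5)]
print(results)  # → [True, True, True, False, False]
```

The first three calls succeed as a burst; subsequent calls are rejected until enough time passes for the bucket to refill.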
The ability to process data and deliver results immediately as events occur, enabling instant updates and interactions.
The automated identification of natural persons at a distance, on the basis of biometric data, in publicly accessible spaces. Its real-time use for law enforcement purposes is strictly restricted and usually requires prior judicial authorization.
Testing performed to ensure that new changes or updates don't break existing functionality in previously working features.
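A regression test pins down behavior that already works, so any later change that breaks it fails immediately. The `slugify` function and its cases below are purely illustrative.

```python
# Minimal regression-test sketch with Python's unittest.
import unittest

def slugify(title):
    """Existing, working behavior: turn a title into a URL slug."""
    return "-".join(title.lower().split())

class TestSlugifyRegression(unittest.TestCase):
    def test_existing_behavior_still_works(self):
        # These cases worked before; any update must keep them passing.
        self.assertEqual(slugify("Hello World"), "hello-world")
        self.assertEqual(slugify("EU AI Act"), "eu-ai-act")

suite = unittest.TestLoader().loadTestsFromTestCase(TestSlugifyRegression)
result = unittest.TextTestRunner().run(suite)
print(result.wasSuccessful())  # → True
```

Running such a suite on every change is what turns "don't break existing features" from a hope into a checked guarantee.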
A controlled environment set up by authorities to test innovative AI systems under supervision for a limited time.
Use of an AI system in a way not intended by the provider, but which may result from predictable human behavior or interaction with other systems.
Any change to an AI system after it has been placed on the market that was not foreseen in the initial assessment and affects its compliance. This often requires a new conformity assessment.
Evaluating or classifying individuals based on social behavior or personality traits over time, leading to unfavorable treatment. This practice is banned in the EU.
The "living document" that contains all information necessary for authorities to assess the compliance of an AI system. It must be kept for 10 years after the system is placed on the market.
The combination of software technologies and tools a company uses to build and run its applications.
The evaluation of an AI system in its actual environment outside of a laboratory, subject to specific transparency and consent rules.
The practice of bulk-harvesting facial images from the internet or CCTV to build facial recognition databases. This is a prohibited practice under the Act.
A pricing model where customers pay based on the amount they use a service.
Data used to evaluate a trained AI system and tune its non-learnable parameters (hyperparameters) to prevent "overfitting."
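Hyperparameter tuning on a held-out validation set can be shown in a few lines. The sketch below picks a regularization strength for a one-dimensional ridge regression; the data and candidate values are invented for illustration.

```python
# Illustrative use of a validation set to tune a hyperparameter (λ).
def ridge_fit(xs, ys, lam):
    """Closed-form 1-D ridge regression slope: w = Σxy / (Σx² + λ)."""
    return sum(x * y for x, y in zip(xs, ys)) / (sum(x * x for x in xs) + lam)

def mse(xs, ys, w):
    """Mean squared error of predictions w·x against targets y."""
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

# Training data overshoots the true trend (slope ≈ 2.2 vs. 2.0),
# mimicking overfitting; the validation set reflects the true trend.
train_x, train_y = [1, 2, 3, 4], [2.2, 4.4, 6.6, 8.8]
val_x, val_y = [5, 6], [10.0, 12.0]

# Pick λ by validation error, never by training error.
best = min([0.0, 1.0, 3.0, 10.0],
           key=lambda lam: mse(val_x, val_y, ridge_fit(train_x, train_y, lam)))
print(best)  # → 3.0
```

Tuning against the training data itself would always favor λ = 0; the validation set is what reveals that some regularization generalizes better.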
Automated messages sent from an app when a specific event occurs, enabling integrations between software.
The use of software to automate repetitive tasks and processes within an organization.
A SaaS product that can be rebranded and sold by another company as its own.
A growth metric comparing performance for the same period in consecutive years.
Deploying software updates without disrupting service availability for users.