Prohibited AI Practices 2026: What Features You Must Remove Now

Jasper Claes


🚫 TL;DR — Critical Alert:

  • Six categories of AI practices became fully prohibited across the EU from February 2, 2025. If you’re still operating any of these, you are in active violation right now.
  • Fines for prohibited practices reach €35 million or 7% of global turnover — the highest penalty tier in the entire Act.
  • This post lists every prohibited practice, with practical examples of what must be removed or redesigned immediately.

The EU AI Act’s prohibition rules aren’t a future compliance problem. They have been in force since February 2, 2025, more than a year before the broader high-risk obligations kick in for most operators. If your AI product contains any of the prohibited practices described in this post, you are not ahead of the deadline. You are already in breach.

I want to be clear about why this matters beyond the fines: the prohibited practices list represents the EU’s absolute red lines — AI applications that the legislature decided, after significant debate, pose such a fundamental threat to human dignity and safety that they cannot be permitted under any commercial justification. The political will to enforce these provisions is high.

Let’s go through every prohibited category under Article 5 of the EU AI Act, with practical examples and the design changes required. For the broader regulatory context, read our Ultimate Guide to EU AI Act Compliance (2026 Edition).

The Six Prohibited AI Practice Categories Under Article 5

Prohibition 1: Subliminal Manipulation

Article 5(1)(a) prohibits AI systems that deploy subliminal techniques beyond a person’s consciousness, or purposefully manipulative or deceptive techniques, to materially distort a person’s behaviour in a way that causes or is likely to cause significant harm.

What this looks like in products:

  • Recommendation algorithms deliberately designed to exploit compulsive behaviour (addiction loops) to maximise engagement at the cost of user wellbeing
  • AI-driven dark patterns that use microtargeting based on psychological profiling to push users into decisions against their interests
  • AI that serves personalised content calibrated to exploit identified cognitive biases — not to inform, but to coerce

The design test: Is your system designed to bypass rational decision-making rather than inform it? If your A/B testing framework optimises for engagement at the expense of user awareness or wellbeing, that is exactly the kind of pattern regulators will be looking for.

Prohibition 2: Exploitation of Vulnerability

Article 5(1)(b) prohibits AI systems that exploit specific vulnerabilities of particular groups — defined by age, disability, or social or economic circumstances — in a way that causes or is likely to cause significant harm.

What this looks like in products:

  • An AI-powered lending product that uses vulnerability signals (indicators of financial desperation, job loss, housing insecurity) to target high-interest products at people who would be damaged by them
  • An AI chatbot in a children’s app designed to exploit developmental psychology to drive in-app purchases
  • Targeted advertising AI that specifically identifies and exploits signals of mental health vulnerability to market services to those individuals

The design test: Does your system’s value proposition depend on users being in a compromised or vulnerable state? Does it perform better commercially when it identifies vulnerability signals and acts on them?

Prohibition 3: Social Scoring

Article 5(1)(c) prohibits AI systems that evaluate or classify individuals or groups over time based on their social behaviour or known, inferred or predicted personal characteristics, where the resulting social score leads to detrimental treatment in contexts unrelated to those in which the data was originally collected, or treatment that is unjustified or disproportionate to the social behaviour itself.

What this means: This is the “social credit system” prohibition. The Commission’s original proposal limited it to public authorities, but the final text applies to social scoring by public and private actors alike. Government-style citizen scoring remains the paradigm case, and any software company selling AI tools to government or public sector clients must ensure their products cannot be used to implement social scoring systems.

Procurement implication: If you’re selling to public sector clients, your contracts and product terms need explicit restrictions on social scoring use cases. See our post on AI vendor due diligence in the 2026 procurement landscape for the buyer’s perspective.

Prohibition 4: Real-Time Remote Biometric Identification in Public Spaces (Law Enforcement)

Article 5(1)(h) prohibits the use of real-time remote biometric identification systems in publicly accessible spaces for law enforcement purposes, subject to narrow, strictly regulated exceptions.

The exceptions (which are very limited): The exceptions cover targeted searches for specific victims of serious crimes, prevention of specific and imminent terrorist threats, and identification of suspects in serious criminal offences. Even these exceptions require prior judicial or administrative authorisation.

What this means for commercial operators: If you supply facial recognition or biometric identification technology, your terms of use must explicitly prohibit real-time law enforcement use in public spaces. This isn’t just good practice — it’s a legal requirement for providers.

Prohibition 5: Emotion Recognition in Workplaces and Educational Institutions

Article 5(1)(f) prohibits the use of emotion recognition AI systems in the workplace and in educational institutions — with limited exceptions for safety or medical purposes.

This is the prohibition that catches the most companies off guard, because “wellbeing monitoring” and “engagement analytics” products have proliferated significantly in recent years.

Clearly prohibited:

  • AI tools that analyse facial expressions during video calls to assess employee engagement, stress levels, or mood
  • Software that infers emotional state from voice analysis during meetings or performance reviews
  • Student monitoring tools that use facial recognition to track attention levels or emotional engagement in class

Grey zone: Safety monitoring applications in genuinely dangerous work environments (where detecting fatigue or extreme stress prevents accidents) sit in a more complex position. But the burden of proof for invoking the safety exception is high, and it requires explicit legal basis.

What to remove: If your HR or EdTech product has any feature that uses camera or microphone input to infer emotional state about workers or students, that feature must be disabled or removed immediately.
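
To make “disabled or removed” concrete at the code level, here is a minimal sketch of a fail-closed gate around an emotion-inference feature. The deployment contexts, function names, and pipeline hook are all hypothetical, and a code gate alone is not a substitute for removing the feature where removal is what compliance requires.

```python
# Hypothetical sketch: the contexts, function names, and pipeline hook are
# illustrative, not a real product API.

PROHIBITED_CONTEXTS = {"workplace", "education"}

def emotion_inference_allowed(deployment_context: str) -> bool:
    """Fail closed: emotion inference never runs in workplace or education deployments.

    Note: the Art. 5(1)(f) medical/safety exception needs a documented legal
    basis reviewed by counsel; it is not something to toggle in code.
    """
    return deployment_context not in PROHIBITED_CONTEXTS

def analyse_meeting_video(frames, deployment_context: str):
    if not emotion_inference_allowed(deployment_context):
        raise PermissionError(
            "Emotion inference is disabled for workplace/education deployments "
            "(EU AI Act, Art. 5(1)(f))."
        )
    # ...the existing (non-emotion) analysis pipeline would continue here...
```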

Prohibition 6: Biometric Categorisation Systems Inferring Sensitive Characteristics

Article 5(1)(g) prohibits AI systems that use biometric data to categorise individuals by race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation.

What this prohibits:

  • Facial analysis systems that claim to infer sexual orientation or political views from appearance
  • Voice analysis tools that categorise individuals by ethnicity or religion
  • Any system that uses biometric proxies to assign protected characteristic labels to individuals

The scientific basis for many of these claims is deeply contested — regulators don’t need to disprove the technology’s accuracy claim to enforce this prohibition. The act of categorising by protected characteristics via biometric data is itself prohibited.

The EU AI Act Prohibited Practices Audit: What to Check

| Article 5 Prohibition | In Force Since | Key Question for Your Product | Maximum Fine |
| --- | --- | --- | --- |
| Subliminal manipulation (Art. 5(1)(a)) | Feb 2025 | Does the system bypass rational decision-making? | €35M / 7% turnover |
| Vulnerability exploitation (Art. 5(1)(b)) | Feb 2025 | Does it target individuals in compromised states? | €35M / 7% turnover |
| Social scoring (Art. 5(1)(c)) | Feb 2025 | Does it score people on social behaviour, with detrimental effects in unrelated contexts? | €35M / 7% turnover |
| Real-time biometric ID for law enforcement (Art. 5(1)(h)) | Feb 2025 | Can it be used for public-space surveillance? | €35M / 7% turnover |
| Workplace/education emotion recognition (Art. 5(1)(f)) | Feb 2025 | Does it infer the emotional state of employees or students? | €35M / 7% turnover |
| Biometric categorisation by protected characteristics (Art. 5(1)(g)) | Feb 2025 | Does it infer race, religion, or sexuality from biometric data? | €35M / 7% turnover |
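
To make the audit above easier to run systematically, here is an illustrative sketch of the same six checks expressed as a per-system checklist in Python. The structure and field names are my own invention, not anything prescribed by the Act or by a particular tool.

```python
from dataclasses import dataclass

@dataclass
class ProhibitionCheck:
    article: str
    question: str
    potential_violation: bool | None = None  # None = not yet assessed

# One entry per row of the table above.
ARTICLE_5_CHECKS = [
    ProhibitionCheck("Art. 5(1)(a)", "Does the system bypass rational decision-making?"),
    ProhibitionCheck("Art. 5(1)(b)", "Does it target individuals in compromised or vulnerable states?"),
    ProhibitionCheck("Art. 5(1)(c)", "Does it score people on social behaviour, with unrelated detrimental effects?"),
    ProhibitionCheck("Art. 5(1)(h)", "Can it perform real-time biometric ID in public spaces for law enforcement?"),
    ProhibitionCheck("Art. 5(1)(f)", "Does it infer the emotional state of employees or students?"),
    ProhibitionCheck("Art. 5(1)(g)", "Does it infer protected characteristics from biometric data?"),
]

def flag_potential_violations(checks: list[ProhibitionCheck]) -> list[ProhibitionCheck]:
    """Return every check answered 'yes'; each flagged item needs legal review before launch."""
    return [c for c in checks if c.potential_violation is True]
```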

What Does “Remove” Actually Mean?

When I say “remove” a prohibited feature, I mean one of three things depending on the architecture:

  1. Feature removal: The functionality is deleted from the product and cannot be accessed by any user via any means (including API).
  2. Use-case restriction: The functionality exists but is technically restricted from the prohibited use case, with contractual and technical safeguards that can be demonstrated to a regulator (see the sketch after this list).
  3. Complete product discontinuation: For products whose entire value proposition depends on a prohibited practice, discontinuation of that product line is the only compliant path.
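
To illustrate option 2, here is a minimal sketch of what a purpose gate at the API layer could look like. The purpose labels, function names, and logging setup are hypothetical; a technical gate like this only counts as a safeguard when it is paired with matching contractual restrictions and records you can actually show a regulator.

```python
# Hypothetical sketch of a use-case restriction: every customer declares a
# purpose, prohibited purposes are rejected, and the decision is logged as
# evidence that the safeguard operates in practice.
import logging

logger = logging.getLogger("use_case_gate")

PROHIBITED_PURPOSES = {
    "realtime_law_enforcement_biometric_id",  # Art. 5(1)(h)
    "workplace_emotion_recognition",          # Art. 5(1)(f)
    "biometric_categorisation_sensitive",     # Art. 5(1)(g)
}

def authorise_request(customer_id: str, declared_purpose: str) -> bool:
    """Allow a request only if the customer's declared purpose is not prohibited."""
    if declared_purpose in PROHIBITED_PURPOSES:
        logger.warning("Blocked %s: prohibited purpose %r", customer_id, declared_purpose)
        return False
    logger.info("Allowed %s for purpose %r", customer_id, declared_purpose)
    return True
```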

“We added a warning in the terms of service” is not a compliant approach. Article 5 prohibitions are not waivable by user consent. They are absolute.

The Intersection With Your Broader Compliance Programme

Prohibited practices checks should be the first step in any AI compliance programme — before you invest in Technical File preparation or risk management systems for high-risk AI. There’s no point building a compliant Article 11 Technical File for a system that is prohibited under Article 5.

After you’ve cleared the prohibited practices check, the next step is determining whether any of your systems are high-risk under Annex III. See our guide: Is Your AI High-Risk? A Guide to Annex III Classifications.

And for the full picture of what non-compliance costs — including the fine calculation methodology — read our post on breaking down the €35M EU AI Act fines.

Frequently Asked Questions

Can we sell our prohibited AI product outside the EU and use it there?

The EU AI Act applies when the output of an AI system is used within the EU, or when EU-based individuals are affected — not purely based on where the system is physically hosted or operated. Selling a product specifically to markets outside the EU where it has no EU-facing use case may remove EU Act exposure, but get specific legal advice before relying on this as a compliance strategy.

We built a feature before the prohibition was in force. Are we liable for past use?

The prohibition applies from February 2, 2025 forward. There is no liability for pre-prohibition operation in principle, though continued operation after the prohibition date creates exposure. The critical question is whether the feature was disabled by February 2, 2025. If the feature remained active after that date, enforcement authorities can treat it as an ongoing violation regardless of when it was built.

Does the emotion recognition prohibition apply to voluntary wellness apps?

Article 5(1)(f) specifically covers workplace and educational institution contexts. A consumer wellness app used voluntarily by an individual outside of an employment or educational relationship is not directly prohibited under this provision. However, emotion inference in other contexts may still face scrutiny under GDPR’s special category data rules and under the broader manipulation prohibitions.

What is the difference between “subliminal manipulation” and normal personalisation?

This is one of the most contested questions in AI Act interpretation. The prohibition targets techniques that operate “beyond a person’s consciousness” — meaning the person cannot reasonably detect that they are being manipulated. Normal personalisation that adapts content to user preferences, where the user is aware of and can control the personalisation, is generally not prohibited. The key tests are: Is the technique hidden? Is it designed to bypass rational evaluation rather than inform it? Does it cause or risk causing significant harm?

When did the EU AI Act’s prohibited practices rules come into force?

February 2, 2025 — six months after the Act entered into force on August 1, 2024. This was the first major enforcement milestone under the Act’s phased implementation timeline. Unlike the high-risk AI obligations (which apply from August 2026), prohibited practices have been enforceable for over a year.

Not sure if a feature in your product crosses an Article 5 line?

Unorma’s Audit Simulation checks your documented system design against every Article 5 prohibition and flags potential violations before a regulator does.

Run a Prohibited Practices Check →
