
Prohibited AI Practices That Are Already Banned

While most of the EU AI Act’s obligations don’t take effect until August 2026, one category has been enforceable since 2 February 2025: prohibited AI practices. These are AI uses that the EU considers so fundamentally incompatible with human rights and values that they are banned outright.

The penalties for violation are the Act’s highest — up to €35 million or 7% of global annual turnover, whichever is greater. And unlike the high-risk obligations, there is no grace period, no transition window, and no reduced regime for SMEs. If you’re operating a prohibited AI system, you’re already in breach.

The prohibited practices

Article 5 bans the following AI practices:

Subliminal, manipulative, or deceptive techniques

Article 5(1)(a) prohibits AI systems that deploy “subliminal techniques beyond a person’s consciousness” or “purposefully manipulative or deceptive techniques” with the objective or effect of materially distorting a person’s behaviour, causing them to take a decision they would not have otherwise taken, in a manner that causes or is reasonably likely to cause significant harm.

What this catches: AI systems designed to manipulate people into decisions that harm them. Think dark patterns on steroids: AI-powered persuasion engines that exploit psychological vulnerabilities, sophisticated nudging systems designed to override rational decision-making, or AI that uses personalisation to create artificial urgency or false scarcity at a level that goes beyond normal marketing.

What this doesn’t catch: Standard recommendation engines, personalised marketing, A/B testing, and persuasive design are not prohibited per se. The prohibition requires the AI to use techniques that are subliminal, deceptive, or manipulative AND that cause significant harm. A recommendation engine that suggests products you might like is fine. An AI system that dynamically adjusts pricing, urgency messaging, and product presentation to exploit a detected vulnerability (financial stress, addictive behaviour, cognitive impairment) is not.

The grey area: The boundary between “persuasive” and “manipulative” is debated. Aggressive personalisation engines that optimise for conversion using psychological profiling are worth reviewing against this provision. If your AI system’s effectiveness depends on users not understanding how it influences them, that’s a signal to look more carefully.

Exploitation of vulnerabilities

Article 5(1)(b) prohibits AI systems that exploit the vulnerabilities of specific groups — due to age, disability, or social or economic situation — in a way that materially distorts their behaviour and causes or is likely to cause significant harm.

What this catches: AI systems that target vulnerable people with harmful outcomes. An AI lending system that targets elderly customers with unfavourable terms. A gaming platform that uses AI to identify and exploit users showing signs of gambling addiction. A subscription service that uses AI to make cancellation harder for users it identifies as less technologically literate.

What this doesn’t catch: Serving vulnerable populations isn’t prohibited. Targeting their vulnerabilities for harmful purposes is. An AI system that helps elderly users navigate a complex service is fine. An AI system that identifies elderly users and presents them with more confusing interfaces to increase accidental purchases is prohibited.

Social scoring

Article 5(1)(c) prohibits AI systems that evaluate or classify people over a period of time based on their social behaviour or known, inferred, or predicted personal characteristics, where the resulting social score leads to detrimental treatment in social contexts unrelated to the contexts in which the data was originally collected, or to treatment that is unjustified or disproportionate. Unlike earlier drafts, the final text is not limited to public authorities: it applies to private operators as well.

What this catches: Social credit systems in the style of government-run schemes, whether operated by public authorities or by private entities. The prohibition specifically targets the linking of behaviour in one context to consequences in an unrelated context, such as restricting someone’s access to public services based on their social media activity.

What this doesn’t catch: Credit scoring based on financial behaviour, loyalty programmes, reputation systems within specific platforms, and employee performance ratings are not social scoring under this provision, provided the scoring stays within its original context and remains proportionate. The prohibition is about cross-context scoring that creates unjustified or disproportionate detrimental treatment.

Predictive policing based solely on profiling

Article 5(1)(d) prohibits AI systems that assess or predict the risk that a person will commit a criminal offence based solely on profiling or on assessing personality traits and characteristics. The prohibition does not apply to AI used to support a human assessment that is already based on objective, verifiable facts directly linked to criminal activity.

Untargeted facial image scraping

Article 5(1)(e) prohibits AI systems that create or expand facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage. This is a direct response to companies like Clearview AI, which built facial recognition databases by scraping billions of images from social media and public websites.

Workplace and educational emotion recognition

Article 5(1)(f) prohibits AI systems that infer emotions of people in the workplace or in educational institutions. There are exceptions for medical and safety purposes, such as a system that detects driver fatigue in a workplace vehicle or monitors a patient’s emotional state for therapeutic purposes.

What this catches: Call centre analytics tools that assess employee emotional states during calls. Educational platforms that monitor student engagement through facial expression analysis. Workplace surveillance systems that track employee mood or stress levels.

What this doesn’t catch: Emotion detection for medical purposes (monitoring patients for signs of distress), for safety purposes (detecting operator fatigue), or in contexts outside workplaces and educational institutions.

Biometric categorisation based on sensitive characteristics

Article 5(1)(g) prohibits biometric categorisation systems that use biometric data to deduce or infer a person’s race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. Narrow exceptions cover the lawful labelling or filtering of biometric datasets and certain law enforcement uses.

Real-time remote biometric identification in public spaces

Article 5(1)(h) prohibits the use of real-time remote biometric identification (primarily facial recognition) in publicly accessible spaces for law enforcement, with tightly defined exceptions for:

  • Searching for specific victims of abduction, trafficking, or sexual exploitation, and searching for missing persons
  • Preventing a specific, substantial, and imminent threat to life or physical safety, or a genuine and foreseeable threat of a terrorist attack
  • Identifying suspects of serious criminal offences

Even these exceptions require prior authorisation from a judicial authority or an independent administrative authority (except in duly justified cases of urgency, where authorisation must still be requested within 24 hours) and are subject to strict necessity and proportionality requirements.

Why you should check now

Most businesses assume none of these prohibitions apply to them. “We don’t do social scoring” is a common and often correct response. But the prohibitions are drafted broadly, and edge cases exist:

Review your personalisation engines. If your AI system uses detailed psychological profiling to optimise conversion, review it against the manipulation and vulnerability exploitation provisions. The question isn’t whether your system personalises — it’s whether it uses techniques that are deceptive or subliminal to drive decisions that harm the user.

Check your workforce analytics. If you use any AI tools that analyse employee behaviour, sentiment, engagement, or emotional state, review them against the workplace emotion recognition ban. Many HR technology vendors have added AI features that may fall within this prohibition.

Audit your data sources. If you use facial recognition or biometric identification in any capacity, verify that the training data wasn’t built from untargeted scraping. Your vendor should be able to confirm the provenance of their training data.

Consider your AI’s impact on vulnerable groups. If your AI system serves or affects elderly users, children, people with disabilities, or economically disadvantaged populations, review whether it could be considered to exploit their vulnerabilities.
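
If your organisation already keeps an inventory of its AI systems, the four checks above can be captured in a structured record, which also gives the documented review described in the next section something concrete to point to. The sketch below is purely illustrative: the field names, check categories, and escalation logic are assumptions for this example, not terminology or requirements taken from the Act.

```python
from dataclasses import dataclass, field
from datetime import date

# Illustrative Article 5 screening checklist for one AI system.
# Check names map loosely to the prohibitions discussed above;
# they are assumptions for this sketch, not wording from the Act.
ARTICLE_5_CHECKS = [
    "manipulative_or_subliminal_techniques",    # Art. 5(1)(a)
    "exploits_vulnerable_groups",               # Art. 5(1)(b)
    "cross_context_social_scoring",             # Art. 5(1)(c)
    "untargeted_facial_scraping_in_training",   # Art. 5(1)(e)
    "workplace_or_education_emotion_inference", # Art. 5(1)(f)
]

@dataclass
class Article5Screening:
    system_name: str
    reviewed_on: date
    reviewer: str
    # Per check: "no_concern", "needs_legal_review", or "likely_prohibited"
    findings: dict = field(default_factory=dict)
    notes: str = ""

    def requires_escalation(self) -> bool:
        """True if any check was not explicitly cleared; unassessed checks count."""
        return any(
            self.findings.get(check, "not_assessed") != "no_concern"
            for check in ARTICLE_5_CHECKS
        )

# Example with made-up data for a hypothetical personalisation engine.
screening = Article5Screening(
    system_name="checkout-personalisation-engine",
    reviewed_on=date(2025, 3, 1),
    reviewer="compliance@example.com",
    findings={
        "manipulative_or_subliminal_techniques": "needs_legal_review",
        "exploits_vulnerable_groups": "no_concern",
    },
    notes="Urgency messaging is tuned per user; escalating for assessment.",
)
print(screening.requires_escalation())  # True -> suspend pending legal review
```

The conservative choice here is that anything not explicitly cleared triggers escalation, which mirrors the advice below: if there is a reasonable possibility a system is prohibited, it goes to legal review rather than staying in production by default.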

What to do if you find a problem

If your review identifies an AI system that might fall within a prohibited category:

  1. Get a legal opinion. The boundaries of some prohibitions are open to interpretation. Before shutting down a system or making material business changes, get a qualified legal assessment.

  2. Stop using it while you assess. If there’s a reasonable possibility the system is prohibited, suspend its use pending the legal review. The penalties for continued use of a prohibited system are severe.

  3. Document your analysis. Whether you conclude the system is prohibited or not, document your reasoning. If a regulator investigates, demonstrating a thorough, good-faith analysis of Article 5 compliance is far better than having no documentation.

  4. Notify your provider. If the potentially prohibited system is provided by a third party, inform them of your concerns. They may not be aware that their system is being used in a way that triggers the prohibition.

The prohibited practices provisions are the sharpest teeth in the EU AI Act. They carry the highest penalties, they’re already enforceable, and they target AI uses that the EU considers fundamentally harmful. Even if you’re confident none of your systems are affected, a documented review is cheap insurance against a very expensive problem.
