Article 5 is the sharpest edge of the EU AI Act. It bans eight categories of AI practice outright: no risk mitigation and no conformity assessment can make them lawful, and only real-time biometric identification carries narrow, tightly policed exceptions, discussed below. These prohibitions have been in force since 2 February 2025, making them the first provisions of the AI Act to apply. If your AI system falls into any of these categories, it must be discontinued immediately.
Why Article 5 Matters Now
While most of the EU AI Act phases in over time — high-risk obligations in August 2026, product safety rules in 2027 — the prohibited practices took effect on 2 February 2025. This means enforcement is already live. Any organisation deploying AI in the EU should have already audited its systems against these categories.
Penalties for violating Article 5 are the highest in the Act: up to EUR 35 million or 7% of global annual turnover, whichever is higher. For context, this exceeds the GDPR's maximum fines of EUR 20 million or 4% of global turnover.
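To make that ceiling concrete, here is a minimal sketch of the penalty arithmetic; the function name and the turnover figure are illustrative, not anything taken from the Act:

```python
def article_5_max_fine(global_annual_turnover_eur: float) -> float:
    """Upper bound for an Article 5 infringement: EUR 35 million
    or 7% of worldwide annual turnover, whichever is higher."""
    return max(35_000_000.0, 0.07 * global_annual_turnover_eur)

# For a firm with EUR 1 billion in global annual turnover, the 7% limb
# (EUR 70 million) exceeds the EUR 35 million floor and so applies.
print(article_5_max_fine(1_000_000_000))  # 70000000.0
```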
The 8 Prohibited Practices
1. Subliminal Manipulation
Article 5(1)(a) prohibits AI systems that deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, to materially distort behaviour in a way that causes or is reasonably likely to cause significant harm.
The key elements: the technique must operate below conscious awareness or be deliberately deceptive, and the distortion must be material — meaning it genuinely changes a person's decision-making in a way that causes real harm. A recommendation algorithm that nudges preferences is not automatically caught; one that systematically exploits cognitive vulnerabilities to cause financial or physical harm likely is.
2. Exploitation of Vulnerabilities
Article 5(1)(b) bans AI systems that exploit vulnerabilities of specific groups due to age, disability, or social or economic situation. This targets systems designed to take advantage of people who are less able to resist manipulation, for example AI-driven marketing that targets elderly people with misleading financial products, or systems that exploit children's developmental vulnerabilities.
3. Social Scoring
Article 5(1)(c) prohibits AI systems used for general-purpose social scoring: evaluating or classifying people based on social behaviour or known, inferred or predicted personality characteristics, where the resulting score leads to detrimental or unfavourable treatment that is unjustified or disproportionate, or that arises in social contexts unrelated to those in which the data was originally generated.
The archetype is a government-run social credit system, but note that the final text, unlike the Commission's original proposal, is not limited to public authorities and covers private actors too. Ordinary credit scoring carried out for a specific, lawful purpose is not automatically caught by this provision, though such systems may fall under other obligations as high-risk systems under Annex III.
4. Individual Criminal Offence Risk Assessment
Article 5(1)(d) bans AI systems that assess the risk of a natural person committing criminal offences solely based on profiling or personality traits. The ban does not extend to AI systems that support human assessment based on objective, verifiable facts directly linked to criminal activity; it specifically targets pure predictive profiling.
5. Untargeted Facial Image Scraping
Article 5(1)(e) prohibits creating or expanding facial recognition databases through untargeted scraping of facial images from the internet or CCTV footage. This directly addresses practices like those used by Clearview AI and similar services that build biometric databases without consent.
6. Emotion Recognition in Workplace and Education
Article 5(1)(f) bans AI systems that infer emotions in workplace and educational settings, except where the system is intended for medical or safety purposes. An employer cannot use AI to monitor whether employees are 'engaged' or 'stressed' through facial analysis, but a system detecting driver fatigue in safety-critical transport is permitted.
7. Biometric Categorisation for Sensitive Attributes
Article 5(1)(g) prohibits biometric categorisation systems that individually categorise people based on biometric data to deduce or infer race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation. The ban does not cover the labelling or filtering of lawfully acquired biometric datasets in the area of law enforcement.
8. Real-Time Remote Biometric Identification in Public Spaces
Article 5(1)(h) prohibits real-time remote biometric identification (RBI) in publicly accessible spaces for law enforcement purposes, with three narrow exceptions: targeted search for victims of specific crimes (abduction, trafficking, sexual exploitation), prevention of a specific and imminent threat to life or a genuine terrorist threat, and identification of suspects of serious criminal offences listed in Annex II.
Even where exceptions apply, use requires prior authorisation by a judicial authority or an independent administrative authority (in duly justified cases of urgency, deployment may begin first, but authorisation must be requested within 24 hours) and must comply with strict necessity and proportionality requirements.
How to Audit Against Article 5
Inventory all AI systems
List every AI system deployed in your organisation, including third-party tools and embedded AI features in software you use.
Map to the 8 categories
For each system, assess whether it could fall within any of the eight prohibited categories. Pay particular attention to systems involving biometrics, behavioural analysis, or vulnerability targeting. A minimal sketch of this mapping follows the list below.
Document your assessment
Record why each system does or does not fall within scope. This documentation protects you in the event of a regulatory inquiry.
Discontinue or modify
Any system that falls within a prohibited category must be discontinued. If the system can be modified to remove the prohibited element, document the changes and reassess.
Monitor ongoing compliance
New AI features or use cases may inadvertently cross into prohibited territory. Build checks into your AI procurement and deployment processes.
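To tie these steps together, here is a minimal sketch of an inventory-and-mapping structure in Python, assuming a simple in-house register. The category labels, the AISystemRecord class, and the example systems (FocusTrack, InvoiceOCR) are hypothetical illustrations, not terminology from the Act:

```python
from dataclasses import dataclass

# Hypothetical shorthand labels for the eight Article 5 categories.
ARTICLE_5_CATEGORIES = [
    "subliminal_manipulation",
    "vulnerability_exploitation",
    "social_scoring",
    "criminal_risk_profiling",
    "facial_image_scraping",
    "emotion_recognition_work_education",
    "biometric_categorisation_sensitive",
    "realtime_remote_biometric_id",
]

@dataclass
class AISystemRecord:
    name: str
    vendor: str
    flagged_categories: list[str]  # categories the reviewer believes may apply
    rationale: str                 # documented reasoning for the assessment

def audit(records: list[AISystemRecord]) -> list[AISystemRecord]:
    """Return systems flagged against any Article 5 category,
    i.e. candidates for discontinuation or modification."""
    for record in records:
        unknown = set(record.flagged_categories) - set(ARTICLE_5_CATEGORIES)
        if unknown:
            raise ValueError(f"{record.name}: unrecognised categories {unknown}")
    return [record for record in records if record.flagged_categories]

# Example: a third-party 'engagement monitoring' tool flagged under
# the workplace emotion-recognition ban; an OCR tool cleared.
inventory = [
    AISystemRecord(
        name="FocusTrack",
        vendor="ExampleVendor",
        flagged_categories=["emotion_recognition_work_education"],
        rationale="Infers employee emotional state from webcam footage.",
    ),
    AISystemRecord(
        name="InvoiceOCR",
        vendor="ExampleVendor",
        flagged_categories=[],
        rationale="Document text extraction; no Article 5 category applies.",
    ),
]
for system in audit(inventory):
    print(f"Review required: {system.name} -> {system.flagged_categories}")
```

In practice the rationale field matters most: it is the documentation that protects you in a regulatory inquiry, so keep it specific to each system's actual technique and deployment context.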
Key Takeaways
Article 5 prohibitions have been in force since 2 February 2025; enforcement is already live
Eight categories of AI practice are banned outright, covering manipulation, exploitation of vulnerabilities, social scoring, predictive policing, facial scraping, emotion recognition, biometric categorisation, and real-time biometric identification
Penalties are the highest in the AI Act: up to EUR 35 million or 7% of global turnover
Real-time biometric identification has three narrow law enforcement exceptions with strict safeguards
Every organisation using AI should audit existing systems against these categories now