The AI Act classifies AI systems into four risk tiers. The wizard walks you through the classification in three steps.
Step 1: Prohibited Practices Screening (Art. 5)
The wizard screens the system against the eight practices prohibited by Article 5 (see the sketch after this list). If any applies, the system is classified as Unacceptable Risk and cannot be deployed in the EU. The eight practices are:
- Subliminal or purposefully manipulative techniques that distort behaviour
- Exploitation of vulnerabilities due to age, disability, or social/economic situation
- Social scoring leading to detrimental or unjustified treatment
- Predicting criminal offences based solely on profiling or personality traits
- Untargeted scraping of facial images to build facial recognition databases
- Emotion inference in workplaces and educational institutions
- Biometric categorisation to infer sensitive attributes
- Real-time remote biometric identification in publicly accessible spaces for law enforcement (subject to narrow exceptions)
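A minimal sketch of how this screening step could be modelled, assuming one yes/no answer per practice; the question wording, `ART5_PRACTICES`, and `screen_prohibited` are illustrative names, not the wizard's actual API:

```python
# Illustrative Art. 5 screening questions (assumed wording, not the wizard's copy).
ART5_PRACTICES = [
    "subliminal or purposefully manipulative techniques",
    "exploitation of vulnerabilities (age, disability, social/economic situation)",
    "social scoring leading to detrimental or unjustified treatment",
    "predicting criminal offences based solely on profiling or personality traits",
    "untargeted scraping of facial images for recognition databases",
    "emotion inference in workplaces or educational institutions",
    "biometric categorisation inferring sensitive attributes",
    "real-time remote biometric identification in public spaces for law enforcement",
]

def screen_prohibited(answers: dict[str, bool]) -> bool:
    """True if any Art. 5 practice applies -> Unacceptable Risk, wizard stops here."""
    return any(answers.get(practice, False) for practice in ART5_PRACTICES)
```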
Step 2: High-Risk Classification (Annex III)
If no prohibited practice applies, the wizard checks the eight Annex III high-risk categories (biometrics, critical infrastructure, education, employment, essential services, law enforcement, migration, justice). Select all that apply.
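The category selection could be modelled as a set of enum members, as in the sketch below; `AnnexIIICategory` and `annex_iii_match` are assumed names for illustration:

```python
from enum import Enum, auto

class AnnexIIICategory(Enum):
    """The eight Annex III areas, abbreviated as in the wizard's summary."""
    BIOMETRICS = auto()
    CRITICAL_INFRASTRUCTURE = auto()
    EDUCATION = auto()
    EMPLOYMENT = auto()
    ESSENTIAL_SERVICES = auto()
    LAW_ENFORCEMENT = auto()
    MIGRATION = auto()
    JUSTICE = auto()

def annex_iii_match(selected: set[AnnexIIICategory]) -> bool:
    """One or more selected categories makes the system a high-risk candidate."""
    return len(selected) > 0
```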
Step 3: Exceptions (Art. 6(3))
If the system falls into an Annex III category, the wizard checks the Art. 6(3) exceptions: the system performs only a narrow procedural task, only improves the result of a previously completed human activity, or performs purely preparatory analysis. If any one exception applies, and the system does not profile natural persons (profiling systems are always high-risk under Art. 6(3)), it is downgraded to Limited Risk. A sketch of this check follows.
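This sketch encodes the disjunctive logic of the exception check; the parameter names mirror the three checks above and are illustrative, not the wizard's actual API:

```python
def art_6_3_exception(
    narrow_procedural_task: bool,
    quality_improvement_only: bool,
    purely_preparatory: bool,
    profiles_natural_persons: bool,
) -> bool:
    """True if the high-risk presumption is rebutted under Art. 6(3).

    Any single condition suffices, but the exception never applies to
    systems that profile natural persons (Art. 6(3), last subparagraph).
    """
    if profiles_natural_persons:
        return False
    return narrow_procedural_task or quality_improvement_only or purely_preparatory
```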
Result
| Classification | Compliance Impact |
|---|---|
| Unacceptable | System must NOT be deployed — prohibited under Art. 5 |
| High Risk | Full compliance with Art. 8–15 required (risk management, data governance, technical documentation, record-keeping, human oversight, accuracy and robustness) |
| Limited Risk | Transparency obligations only (inform users of AI interaction) |
| Minimal Risk | No mandatory requirements — voluntary codes of conduct |
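Putting the three steps together, the wizard's decision logic reduces to a short function. This is a minimal sketch assuming the boolean step outputs above; `RiskTier` and `classify` are illustrative names:

```python
from enum import Enum

class RiskTier(Enum):
    UNACCEPTABLE = "Unacceptable"
    HIGH = "High Risk"
    LIMITED = "Limited Risk"
    MINIMAL = "Minimal Risk"

def classify(prohibited: bool, annex_iii: bool, exception: bool) -> RiskTier:
    """Combine the three wizard steps into a single risk tier."""
    if prohibited:                      # Step 1: Art. 5 screening
        return RiskTier.UNACCEPTABLE
    if annex_iii:                       # Step 2: Annex III match
        # Step 3: an Art. 6(3) exception downgrades to Limited Risk
        return RiskTier.LIMITED if exception else RiskTier.HIGH
    return RiskTier.MINIMAL             # no Annex III match

# Example: an Annex III system with no applicable exception
print(classify(prohibited=False, annex_iii=True, exception=False).value)  # High Risk
```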