Risk Classification Wizard — Classify AI Systems by Risk Tier
The Risk Classification Wizard guides you through a structured, five-step process to determine the risk tier of an AI system under the EU AI Act. The classification result determines which compliance obligations apply to the system. The wizard implements the logic of Articles 5 and 6 of, and Annex III to, Regulation (EU) 2024/1689 as an interactive questionnaire. Every question includes explanatory text drawn directly from the regulation to help you make accurate classifications.
Launching the Wizard
The first step asks you to select the AI system you want to classify. If you launched the wizard from an AI system's detail page, the system is pre-selected. Otherwise, choose from a dropdown of all AI systems in your inventory that have a status of Draft or Active. Systems already classified will show their current classification with an option to reclassify. Retired systems cannot be classified.
The system selection step also displays a summary of the selected system's key attributes (name, description, intended purpose, role type, and environment) so you have the relevant context before answering classification questions. Review this information carefully — the accuracy of your classification depends on having a clear understanding of what the system does.
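If you manage a large inventory, it can help to think of the selection rules above as a simple eligibility filter. The sketch below is illustrative only; the AISystem fields and status values are assumptions drawn from this page, not a documented API or schema.

```python
from dataclasses import dataclass
from typing import Optional

# Hypothetical inventory record; field names are assumptions, not a documented schema.
@dataclass
class AISystem:
    name: str
    status: str                      # "Draft", "Active", or "Retired"
    risk_tier: Optional[str] = None  # set once the system has been classified

def is_selectable(system: AISystem) -> bool:
    """Only Draft and Active systems appear in the wizard's dropdown; Retired systems do not."""
    return system.status in {"Draft", "Active"}

def dropdown_label(system: AISystem) -> str:
    """Already-classified systems show their current tier with a reclassify option."""
    if system.risk_tier is not None:
        return f"{system.name} (currently {system.risk_tier}, reclassify)"
    return system.name
```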
Step 2: Prohibited Practices Screening (Art. 5)
This critical step screens the AI system against the eight prohibited AI practices defined in Art. 5 of the EU AI Act. You must answer each question honestly and accurately. If any question is answered "Yes", the system is classified as Unacceptable Risk and the wizard proceeds directly to the results step with a recommendation to discontinue the system.
The eight screening questions are:
| # | Prohibited Practice | Question | Art. 5 Reference |
|---|---|---|---|
| 1 | Subliminal Manipulation | Does this AI system deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting the behaviour of a person or group in a manner that causes or is reasonably likely to cause significant harm? | Art. 5(1)(a) |
| 2 | Vulnerable Exploitation | Does this AI system exploit any vulnerabilities of a specific group of persons due to their age, disability, or a specific social or economic situation, with the objective or effect of materially distorting their behaviour in a manner that causes or is reasonably likely to cause significant harm? | Art. 5(1)(b) |
| 3 | Social Scoring | Does this AI system evaluate or classify natural persons or groups based on their social behaviour or known, inferred, or predicted personal or personality characteristics, where the resulting social score leads to detrimental or unfavourable treatment in social contexts unrelated to the context in which the data was originally generated or collected, or treatment that is unjustified or disproportionate to their social behaviour or its gravity? | Art. 5(1)(c) |
| 4 | Real-Time Remote Biometric Identification | Is this AI system used for real-time remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement, except in narrowly defined exceptions (targeted search for victims, prevention of an imminent threat, identification of suspects of serious criminal offences)? | Art. 5(1)(h) |
| 5 | Predictive Criminal-Risk Profiling | Does this AI system make risk assessments of natural persons in order to assess or predict the risk of a person committing a criminal offence, based solely on profiling or on assessing their personality traits and characteristics? (Exception: AI systems used to support the human assessment of a person's involvement in criminal activity, where that assessment is already based on objective and verifiable facts directly linked to the activity.) | Art. 5(1)(d) |
| 6 | Untargeted Facial Recognition Scraping | Does this AI system create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage? | Art. 5(1)(e) |
| 7 | Workplace/Education Emotion Inference | Does this AI system infer emotions of natural persons in the areas of workplace or education institutions, except where the AI system is intended to be put into service or placed on the market for medical or safety reasons? | Art. 5(1)(f) |
| 8 | Biometric Categorisation (Sensitive Attributes) | Does this AI system categorise natural persons individually based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation? (Exception: labelling or filtering of lawfully acquired biometric datasets in the area of law enforcement.) | Art. 5(1)(g) |
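The outcome of this step reduces to a single rule: any "Yes" answer short-circuits the wizard to an Unacceptable Risk result. A minimal sketch of that check, keyed to the row numbers in the table above, is shown below; the dictionary and function names are illustrative, not the product's internal API.

```python
# Art. 5 screening questions, keyed by the row numbers in the table above (illustrative).
ART5_QUESTIONS = {
    1: "Art. 5(1)(a) - subliminal or manipulative techniques",
    2: "Art. 5(1)(b) - exploitation of vulnerabilities",
    3: "Art. 5(1)(c) - social scoring",
    4: "Art. 5(1)(h) - real-time remote biometric identification",
    5: "Art. 5(1)(d) - predictive criminal-risk profiling",
    6: "Art. 5(1)(e) - untargeted facial recognition scraping",
    7: "Art. 5(1)(f) - workplace/education emotion inference",
    8: "Art. 5(1)(g) - biometric categorisation on sensitive attributes",
}

def screen_prohibited_practices(answers: dict[int, bool]) -> list[str]:
    """Return the Art. 5 provisions triggered by 'Yes' answers.

    An empty list means the step is passed; any entry means Unacceptable Risk.
    """
    return [ART5_QUESTIONS[q] for q, yes in answers.items() if yes]
```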
Step 3: High-Risk Categories (Annex III)
If no prohibited practices were identified in Step 2, the wizard proceeds to assess whether the AI system falls into one of the high-risk categories defined in Annex III of the EU AI Act. You are presented with eight category groups, each containing specific use cases. Select all categories that apply to your AI system:
| Category | Description & Examples | Annex III Ref |
|---|---|---|
| Biometric Identification & Categorisation | Remote biometric identification, categorisation by sensitive attributes, emotion recognition. Examples: facial recognition access control, biometric onboarding, voice ID. | Annex III, point 1 |
| Critical Infrastructure | Safety components of critical digital/physical infrastructure. Examples: water treatment AI, electricity grid management, traffic systems. | Annex III, point 2 |
| Education & Vocational Training | Admission, assignment, learning evaluation, exam monitoring. Examples: automated essay grading, admission screening, exam proctoring. | Annex III, point 3 |
| Employment & Workers Management | Recruitment, CV screening, performance monitoring, promotion/termination decisions. Examples: automated CV screening, AI performance reviews. | Annex III, point 4 |
| Essential Private & Public Services | Credit scoring, insurance risk pricing, emergency call classification, public benefits eligibility. Examples: credit models, triage systems. | Annex III, point 5 |
| Law Enforcement | Risk profiling, polygraphs, evidence evaluation, predictive policing, crime analytics. Examples: recidivism prediction, suspect profiling. | Annex III, point 6 |
| Migration, Asylum & Border Control | Immigration polygraphs, migrant risk assessment, asylum examination, border surveillance. Examples: document verification, asylum analysis. | Annex III, point 7 |
| Administration of Justice & Democratic Processes | Judicial research, law application, election influence. Examples: legal research AI, sentencing tools, campaign targeting. | Annex III, point 8 |
If one or more categories are selected, the system is provisionally classified as High Risk, subject to the exceptions assessed in Step 4.
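Programmatically, the provisional outcome of this step is just a non-empty check on the selected categories, as the fragment below illustrates. The shorthand keys map to the Annex III points in the table and are not product identifiers.

```python
# Shorthand keys for the eight Annex III category groups listed above (illustrative).
ANNEX_III_CATEGORIES = {
    "biometrics": "Annex III, point 1",
    "critical_infrastructure": "Annex III, point 2",
    "education": "Annex III, point 3",
    "employment": "Annex III, point 4",
    "essential_services": "Annex III, point 5",
    "law_enforcement": "Annex III, point 6",
    "migration_border": "Annex III, point 7",
    "justice_democracy": "Annex III, point 8",
}

def provisional_tier(selected: set[str]) -> str:
    """Any selected Annex III category makes the system provisionally High Risk,
    pending the Art. 6(3) exception assessment in Step 4."""
    unknown = selected - ANNEX_III_CATEGORIES.keys()
    if unknown:
        raise ValueError(f"Unknown Annex III categories: {unknown}")
    return "High Risk (provisional)" if selected else "Not High Risk"
```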
Step 4: High-Risk Exceptions (Art. 6(3))
If the system was provisionally classified as High Risk in Step 3, this step assesses whether any of the Art. 6(3) exceptions apply. These exceptions allow a system that would otherwise be High Risk to be downgraded if it meets specific criteria. The four exception conditions are:
| Exception | Description |
|---|---|
| Narrow Procedural Task | The AI system is intended to perform a narrow procedural task — i.e., it performs a routine, well-defined operation with limited discretion. For example, an AI system that simply converts unstructured data into structured data (e.g., OCR on forms) without making substantive decisions may qualify for this exception. |
| Improving Human Activity Result | The AI system is intended to improve the result of a previously completed human activity — i.e., it reviews or refines a decision already made by a human rather than making an independent decision. For example, a grammar checker that suggests improvements to human-written text. |
| Detecting Decision-Making Patterns | The AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review. For example, an analytics tool that flags anomalies in judicial sentencing patterns for human review. |
| Preparatory Task | The AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III, i.e., it only prepares inputs for a later assessment rather than carrying out that assessment itself. For example, a tool that indexes and searches case files ahead of a human-led review. |
If no exception applies, or if you choose not to claim one, the system remains classified as High Risk. If you claim an exception, the system is downgraded to Limited or Minimal Risk, depending on its transparency obligations. If the system was not classified as High Risk in Step 3 (no Annex III categories selected), this step is skipped and the system is classified as either Limited Risk or Minimal Risk based on its transparency obligations.
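Under those rules, the step boils down to intersecting the claimed exceptions with the four Art. 6(3) conditions, as in the sketch below; the exception keys are illustrative names for the table rows above, not the product's identifiers.

```python
# Illustrative keys for the four Art. 6(3) exception conditions assessed in this step.
ART_6_3_EXCEPTIONS = {
    "narrow_procedural_task",
    "improves_human_result",
    "detects_decision_patterns",
    "preparatory_task",
}

def remains_high_risk(provisionally_high_risk: bool, claimed: set[str]) -> bool:
    """Return True if the system stays High Risk after Step 4.

    If Step 3 produced no provisional High Risk result, Step 4 is skipped and the
    Limited/Minimal decision is made on transparency obligations instead.
    """
    if not provisionally_high_risk:
        return False
    return not (claimed & ART_6_3_EXCEPTIONS)
```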
Step 5: Classification Result
The final step presents the classification result with a detailed rationale. The result screen includes:
- Risk Tier Badge — A large, colour-coded badge showing the classification: Unacceptable (red), High (orange), Limited (amber), or Minimal (green).
- Rationale Summary — A generated explanation of how the classification was determined, referencing the specific articles and questions that drove the result. For Unacceptable classifications, the specific prohibited practice is cited. For High Risk, the Annex III categories and any claimed exceptions are listed. For Limited and Minimal, the absence of high-risk indicators is noted.
- Compliance Obligations Summary — A bullet-point list of the obligations that apply to this risk tier (e.g., for High Risk: risk management system, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy/robustness/cybersecurity, conformity assessment, registration in EU database).
- Next Steps — Recommended actions based on the classification result, such as "Initiate Gap Assessment" for High-Risk systems or "Ensure Transparency Notice" for Limited-Risk systems.
Click Save Classification to persist the result. The classification is linked to the AI system record and visible on the system's detail page and on the dashboard risk distribution chart. Previous classifications are archived and accessible via the audit log.
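If you consume saved classifications downstream (for example in reporting), the persisted record can be pictured roughly as follows. The field names and example values are hypothetical, not the product's documented schema.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

# Hypothetical shape of a persisted classification; not the product's documented schema.
@dataclass
class ClassificationResult:
    system_name: str
    risk_tier: str                      # "Unacceptable", "High", "Limited", or "Minimal"
    rationale: str                      # generated explanation citing Art. 5 / Annex III / Art. 6(3)
    annex_iii_categories: list[str] = field(default_factory=list)
    claimed_exceptions: list[str] = field(default_factory=list)
    obligations: list[str] = field(default_factory=list)  # e.g. "risk management system"
    classified_at: datetime = field(default_factory=lambda: datetime.now(timezone.utc))

# Example: a High Risk result as it might appear on the system's detail page.
example = ClassificationResult(
    system_name="CV Screening Assistant",
    risk_tier="High",
    rationale="Annex III, point 4 (employment) selected; no Art. 6(3) exception claimed.",
    annex_iii_categories=["Annex III, point 4"],
    obligations=["risk management system", "technical documentation", "human oversight"],
)
```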
Classification Logic Summary
The wizard follows this decision tree (summarised in the sketch after this list):
- If any Art. 5 prohibited practice is triggered → Unacceptable Risk
- Else if any Annex III category is selected and no Art. 6(3) exception applies → High Risk
- Else if an Annex III category is selected but an Art. 6(3) exception is claimed → Limited or Minimal Risk (depending on transparency obligations)
- Else if the system has transparency obligations (e.g., it generates deepfakes, interacts with users as a chatbot, or performs emotion recognition) → Limited Risk
- Else → Minimal Risk
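The same decision tree can be written as a compact function. The sketch below mirrors the bullets above using the illustrative inputs from the earlier fragments; it is not the wizard's actual implementation.

```python
def classify(prohibited_triggered: bool,
             annex_iii_selected: bool,
             exception_claimed: bool,
             transparency_obligations: bool) -> str:
    """Mirror the wizard's decision tree (illustrative only)."""
    if prohibited_triggered:                      # any Art. 5 prohibited practice
        return "Unacceptable Risk"
    if annex_iii_selected and not exception_claimed:
        return "High Risk"
    # Annex III category with a claimed Art. 6(3) exception, or no category at all:
    if transparency_obligations:                  # e.g. chatbot, deepfake generation, emotion recognition
        return "Limited Risk"
    return "Minimal Risk"
```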