Risk Classification Wizard — Classify AI Systems by Risk Tier

The Risk Classification Wizard guides you through a structured, five-step process to determine the risk tier of an AI system under the EU AI Act. The classification result determines which compliance obligations apply to the system. The wizard implements the logic of Articles 5 and 6 and Annex III of Regulation (EU) 2024/1689 in an interactive questionnaire format. Every question includes explanatory text drawn directly from the regulation to help you make accurate classifications.

Launching the Wizard

You can launch the wizard from an AI system's detail page, in which case that system is pre-selected, or start it directly and choose the system to classify in Step 1.

Step 1 — System Selection

The first step asks you to select the AI system you want to classify. If you launched the wizard from an AI system's detail page, the system is pre-selected. Otherwise, choose from a dropdown of all AI systems in your inventory that have a status of Draft or Active. Systems already classified will show their current classification with an option to reclassify. Retired systems cannot be classified.

The system selection step also displays a summary of the selected system's key attributes (name, description, intended purpose, role type, and environment) so you have the relevant context before answering classification questions. Review this information carefully — the accuracy of your classification depends on having a clear understanding of what the system does.
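
The selection rule is straightforward to sketch. The types and field names below (SystemStatus, AISystemSummary, isClassifiable) are illustrative assumptions, not the product's actual data model:

```typescript
// A minimal sketch of the Step 1 eligibility rule, using hypothetical names.
type SystemStatus = "Draft" | "Active" | "Retired";

interface AISystemSummary {
  name: string;
  status: SystemStatus;
  currentClassification?: string; // present if the system was classified before
}

// Only Draft and Active systems appear in the selection dropdown;
// Retired systems are excluded and cannot be classified.
const isClassifiable = (system: AISystemSummary): boolean =>
  system.status === "Draft" || system.status === "Active";
```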

Step 2 — Article 5 Prohibited Practices Screening

This critical step screens the AI system against the eight categories of prohibited AI practices defined in Art. 5 of the EU AI Act. You must answer each question honestly and accurately. If any question is answered "Yes", the system will be classified as Unacceptable Risk and the wizard will proceed directly to the results step with a recommendation to discontinue the system.

The eight screening questions are:

  1. Subliminal Manipulation (Art. 5(1)(a)): Does this AI system deploy subliminal techniques beyond a person's consciousness, or purposefully manipulative or deceptive techniques, with the objective or effect of materially distorting the behaviour of a person or group in a manner that causes or is reasonably likely to cause significant harm?
  2. Exploitation of Vulnerabilities (Art. 5(1)(b)): Does this AI system exploit any vulnerabilities of a specific group of persons due to their age, disability, or a specific social or economic situation, with the objective or effect of materially distorting their behaviour in a manner that causes or is reasonably likely to cause significant harm?
  3. Social Scoring (Art. 5(1)(c)): Does this AI system evaluate or classify natural persons or groups based on their social behaviour or known, inferred, or predicted personal or personality characteristics, where the resulting social score leads to detrimental treatment in social contexts unrelated to the context in which the data was originally generated or collected, or treatment that is unjustified or disproportionate to their social behaviour or its gravity?
  4. Predictive Criminal Risk Assessment (Art. 5(1)(d)): Does this AI system assess or predict the risk of a natural person committing a criminal offence based solely on profiling or on assessing their personality traits and characteristics? (Exception: AI systems used to support a human assessment that is already based on objective and verifiable facts directly linked to a criminal activity.)
  5. Untargeted Facial Recognition Scraping (Art. 5(1)(e)): Does this AI system create or expand facial recognition databases through the untargeted scraping of facial images from the internet or CCTV footage?
  6. Workplace/Education Emotion Inference (Art. 5(1)(f)): Does this AI system infer emotions of natural persons in the areas of workplace or education institutions, except where the AI system is intended to be put into service or placed on the market for medical or safety reasons?
  7. Biometric Categorisation (Sensitive Attributes) (Art. 5(1)(g)): Does this AI system categorise natural persons individually based on their biometric data to deduce or infer their race, political opinions, trade union membership, religious or philosophical beliefs, sex life, or sexual orientation? (Exception: labelling or filtering of lawfully acquired biometric datasets in the area of law enforcement.)
  8. Real-Time Remote Biometric Identification (Art. 5(1)(h)): Is this AI system used for real-time remote biometric identification of natural persons in publicly accessible spaces for the purpose of law enforcement, except in narrowly defined exceptions (search for victims, prevention of an imminent threat, identification of suspects of serious criminal offences)?

Warning — Unacceptable Classification: If you answer "Yes" to any of the eight prohibited practice questions, the system is classified as Unacceptable Risk. Under the EU AI Act, these AI practices are prohibited outright. You must immediately discontinue use of the system and initiate a remediation or decommissioning plan. The classification result will include a detailed explanation of which prohibited practice was triggered and the corresponding article reference.
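
The screening behaviour described above can be sketched as a short-circuit check. The names in this sketch (ProhibitedPracticeQuestion, screenProhibitedPractices, the answer map) are hypothetical; the wizard's internal implementation may differ:

```typescript
// A minimal sketch of the Step 2 short-circuit: any "Yes" answer immediately
// yields an Unacceptable Risk outcome, and the triggering practice and its
// article reference are carried into the classification rationale.
interface ProhibitedPracticeQuestion {
  id: number;               // 1-8, as listed above
  practice: string;         // e.g. "Untargeted Facial Recognition Scraping"
  articleReference: string; // e.g. "Art. 5(1)(e)"
}

interface ScreeningOutcome {
  unacceptable: boolean;
  triggeredBy?: ProhibitedPracticeQuestion; // cited in the result rationale
}

function screenProhibitedPractices(
  questions: ProhibitedPracticeQuestion[],
  answers: Map<number, boolean> // question id -> true for "Yes"
): ScreeningOutcome {
  const triggered = questions.find((q) => answers.get(q.id) === true);
  return triggered
    ? { unacceptable: true, triggeredBy: triggered }
    : { unacceptable: false };
}
```
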
Step 3 — Annex III High-Risk Categories

If no prohibited practices were identified in Step 2, the wizard proceeds to assess whether the AI system falls into one of the high-risk categories defined in Annex III of the EU AI Act. You are presented with eight category groups, each containing specific use cases. Select all categories that apply to your AI system:

  • Biometric Identification & Categorisation (Annex III, §1): Remote biometric identification, categorisation by sensitive attributes, emotion recognition. Examples: facial recognition access control, biometric onboarding, voice ID.
  • Critical Infrastructure (Annex III, §2): Safety components of critical digital/physical infrastructure. Examples: water treatment AI, electricity grid management, traffic systems.
  • Education & Vocational Training (Annex III, §3): Admission, assignment, learning evaluation, exam monitoring. Examples: automated essay grading, admission screening, exam proctoring.
  • Employment & Workers Management (Annex III, §4): Recruitment, CV screening, performance monitoring, promotion/termination decisions. Examples: automated CV screening, AI performance reviews.
  • Essential Private & Public Services (Annex III, §5): Credit scoring, insurance risk pricing, emergency call classification, public benefits eligibility. Examples: credit models, triage systems.
  • Law Enforcement (Annex III, §6): Risk profiling, polygraphs, evidence evaluation, predictive policing, crime analytics. Examples: recidivism prediction, suspect profiling.
  • Migration, Asylum & Border Control (Annex III, §7): Immigration polygraphs, migrant risk assessment, asylum examination, border surveillance. Examples: document verification, asylum analysis.
  • Administration of Justice & Democratic Processes (Annex III, §8): Judicial research, law application, election influence. Examples: legal research AI, sentencing tools, campaign targeting.

If one or more categories are selected, the system is provisionally classified as High Risk, subject to the exceptions assessed in Step 4.
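
The provisional rule is simple: any selected category makes the system a High Risk candidate pending Step 4. The category labels in this sketch are illustrative, not product constants:

```typescript
// A minimal sketch of the Step 3 rule, using illustrative category labels.
type AnnexIIICategory =
  | "BiometricIdentificationAndCategorisation" // Annex III, §1
  | "CriticalInfrastructure"                   // Annex III, §2
  | "EducationAndVocationalTraining"           // Annex III, §3
  | "EmploymentAndWorkersManagement"           // Annex III, §4
  | "EssentialPrivateAndPublicServices"        // Annex III, §5
  | "LawEnforcement"                           // Annex III, §6
  | "MigrationAsylumAndBorderControl"          // Annex III, §7
  | "JusticeAndDemocraticProcesses";           // Annex III, §8

// One or more selected categories means provisionally High Risk (pending Step 4).
const isProvisionallyHighRisk = (selected: AnnexIIICategory[]): boolean =>
  selected.length > 0;
```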

Step 4 — Article 6(3) Exceptions

If the system was provisionally classified as High Risk in Step 3, this step assesses whether any of the Art. 6(3) exceptions apply. These exceptions allow a system that would otherwise be High Risk to be downgraded if it meets specific criteria; note, however, that an Annex III system that performs profiling of natural persons is always considered High Risk. The four exception conditions are:

  • Narrow Procedural Task: The AI system is intended to perform a narrow procedural task, i.e. a routine, well-defined operation with limited discretion. For example, an AI system that simply converts unstructured data into structured data (e.g., OCR on forms) without making substantive decisions may qualify for this exception.
  • Improving the Result of a Human Activity: The AI system is intended to improve the result of a previously completed human activity, i.e. it reviews or refines a decision already made by a human rather than making an independent decision. For example, a grammar checker that suggests improvements to human-written text.
  • Detecting Decision-Making Patterns: The AI system is intended to detect decision-making patterns or deviations from prior decision-making patterns and is not meant to replace or influence the previously completed human assessment without proper human review. For example, an analytics tool that flags anomalies in judicial sentencing patterns for human review.
  • Preparatory Task: The AI system is intended to perform a preparatory task to an assessment relevant for the purposes of the use cases listed in Annex III, i.e. it prepares inputs for a later assessment rather than performing the assessment itself. For example, a tool that indexes and organises case files ahead of a human-led review.

Important: If any exception applies and the AI system does not pose a significant risk of harm to health, safety, or fundamental rights, the system may be downgraded from High Risk to Limited or Minimal Risk. However, the provider must document its assessment before placing the system on the market or putting it into service, register the system in the EU database under Art. 49(2), and provide the documentation to national competent authorities on request. The wizard captures your rationale and stores it in the classification record.

If no exceptions apply, or if you choose not to claim an exception, the system remains classified as High Risk. If the system was not classified as High Risk in Step 3 (no Annex III categories selected), this step is skipped, and the system is classified as either Limited Risk or Minimal Risk based on its transparency obligations.
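
The downgrade rule can be sketched as follows, assuming hypothetical field names; as described above, the wizard also records your rationale alongside this outcome:

```typescript
// A minimal sketch of the Step 4 rule: a claimed Art. 6(3) exception only
// removes the High Risk classification when the system also poses no
// significant risk of harm to health, safety, or fundamental rights.
interface ExceptionAssessment {
  exceptionClaimed: boolean;      // at least one Art. 6(3) condition selected
  significantRiskOfHarm: boolean; // to health, safety, or fundamental rights
  rationale?: string;             // stored in the classification record
}

const remainsHighRisk = (assessment: ExceptionAssessment): boolean =>
  !(assessment.exceptionClaimed && !assessment.significantRiskOfHarm);
```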

Step 5 — Classification Result

The final step presents the classification result with a detailed rationale. The result screen includes:

  • Risk Tier Badge — A large, colour-coded badge showing the classification: Unacceptable (red), High (orange), Limited (amber), or Minimal (green).
  • Rationale Summary — A generated explanation of how the classification was determined, referencing the specific articles and questions that drove the result. For Unacceptable classifications, the specific prohibited practice is cited. For High Risk, the Annex III categories and any claimed exceptions are listed. For Limited and Minimal, the absence of high-risk indicators is noted.
  • Compliance Obligations Summary — A bullet-point list of the obligations that apply to this risk tier (e.g., for High Risk: risk management system, data governance, technical documentation, record-keeping, transparency, human oversight, accuracy/robustness/cybersecurity, conformity assessment, registration in EU database).
  • Next Steps — Recommended actions based on the classification result, such as "Initiate Gap Assessment" for High-Risk systems or "Ensure Transparency Notice" for Limited-Risk systems.

Click Save Classification to persist the result. The classification is linked to the AI system record and visible on the system's detail page and on the dashboard risk distribution chart. Previous classifications are archived and accessible via the audit log.
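
The saved result can be thought of as a record along the following lines, mirroring what the result screen shows. The field names are illustrative assumptions rather than the actual schema:

```typescript
// A hypothetical shape for a saved classification record.
type RiskTier = "Unacceptable" | "High" | "Limited" | "Minimal";

interface ClassificationRecord {
  systemId: string;
  tier: RiskTier;
  rationale: string;     // the articles and questions that drove the result
  obligations: string[]; // e.g. "Risk management system", "Technical documentation"
  nextSteps: string[];   // e.g. "Initiate Gap Assessment"
  classifiedAt: string;  // ISO timestamp; earlier runs remain in the audit log
}
```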

Tip — Reclassification: You can reclassify an AI system at any time by launching the wizard again from the system's detail page. This is useful when the system's intended purpose, deployment scope, or functionality changes. Each classification run is independently recorded, so you maintain a full audit trail of how the system's risk classification has evolved over time.

Classification Logic Summary

The wizard follows this decision tree:

  1. If any Art. 5 prohibited practice is triggered → Unacceptable Risk
  2. Else if any Annex III category is selected and no Art. 6(3) exception applies → High Risk
  3. Else if an Annex III category is selected but an Art. 6(3) exception is claimed → Limited or Minimal Risk (depending on transparency obligations)
  4. Else if the system has transparency obligations (e.g., it generates deepfakes, interacts with users as a chatbot, or performs emotion recognition) → Limited Risk
  5. Else → Minimal Risk
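
A minimal sketch of this decision tree, assuming hypothetical input names derived from the answers given in Steps 2-4:

```typescript
// Illustrative only: the wizard derives these flags from your answers.
type RiskTier = "Unacceptable" | "High" | "Limited" | "Minimal";

interface WizardAnswers {
  prohibitedPracticeTriggered: boolean; // Step 2: any Art. 5 question answered "Yes"
  annexIIICategorySelected: boolean;    // Step 3: one or more Annex III categories
  exceptionApplies: boolean;            // Step 4: Art. 6(3) exception, no significant risk of harm
  transparencyObligations: boolean;     // e.g. chatbot interaction, deepfakes, emotion recognition
}

function classify(answers: WizardAnswers): RiskTier {
  if (answers.prohibitedPracticeTriggered) return "Unacceptable";
  if (answers.annexIIICategorySelected && !answers.exceptionApplies) return "High";
  // Either an Annex III category with a claimed exception, or no high-risk indicators:
  return answers.transparencyObligations ? "Limited" : "Minimal";
}
```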