AI Act Gap Assessment — Evaluate Compliance Maturity

The AI Act Gap Assessment is a comprehensive, 50-question evaluation that measures your organisation's compliance maturity against the key requirements of the EU AI Act. The assessment is structured into eight chapters, each corresponding to a major compliance domain. Completing this assessment generates a compliance score, a chapter-by-chapter breakdown, and an automated remediation plan targeting the areas where your organisation falls short. This article explains every aspect of the assessment: its structure, scoring methodology, question weights, and how results translate into actionable remediation items.

Starting an Assessment

Step 1 — Navigate to Gap Assessment

From the sidebar, go to EU AI Act → Gap Assessment. If you have previously completed assessments, they are listed with their dates, overall scores, and statuses. Click + New Assessment to begin a fresh assessment.

Step 2 — Progress Through Chapters

The assessment is presented one chapter at a time. A progress bar at the top shows how many chapters (and questions) you have completed. You can navigate between chapters using the sidebar chapter list or the Next/Previous buttons. All questions in a chapter must be answered before moving to the next chapter, but you can save progress and return later — partial assessments are preserved.

Step 3 — Answer Each Question

Each question presents a maturity scale from 0 to 4, a weight indicator (W1, W2, or W3), and an optional notes field. Select the score that best reflects your current state, and optionally add notes to provide context or evidence for your answer. The notes field is particularly valuable during audits, as it documents the reasoning behind each score.
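
For illustration, an answered question can be thought of as a small record combining these three elements. The following Python sketch is hypothetical (the product's internal data model is not documented here); it simply encodes the constraints stated in this article, and all names and types are assumptions:

```python
# Hypothetical model of one answered question: a 0-4 maturity score,
# a W1-W3 weight label, and an optional notes field of up to 2,000
# characters. Names and types are assumptions for illustration only.
from dataclasses import dataclass

@dataclass
class Answer:
    score: int       # 0 (Non-existent) .. 4 (Optimised)
    weight: str      # "W1", "W2", or "W3"
    notes: str = ""  # optional evidence or context

    def __post_init__(self):
        if not 0 <= self.score <= 4:
            raise ValueError("score must be between 0 and 4")
        if self.weight not in ("W1", "W2", "W3"):
            raise ValueError("weight must be W1, W2, or W3")
        if len(self.notes) > 2000:
            raise ValueError("notes are limited to 2,000 characters")
```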

Step 4 — Review & Submit

After completing all eight chapters, a review screen shows a summary of all your answers. You can revisit any chapter to make changes before final submission. Click Submit Assessment to finalise. Once submitted, the assessment is locked and cannot be edited (but you can start a new assessment at any time for comparison).

Assessment Chapters & Questions

The 50 questions are distributed across eight chapters. Each chapter maps directly to one or more articles of the EU AI Act:

Chapter 1 — Risk Management (Art. 9)

This chapter evaluates whether your organisation has established and maintains a risk management system for high-risk AI systems throughout their lifecycle. Questions cover:

  • Existence of a documented risk management system that identifies, analyses, and evaluates known and foreseeable risks
  • Regular testing and validation of the risk management system with appropriate metrics
  • Integration of risk management into the overall AI development and deployment lifecycle
  • Identification and mitigation of risks related to reasonably foreseeable misuse
  • Communication of residual risks to deployers and affected persons
  • Testing of the AI system against preliminarily defined metrics and probabilistic thresholds

Chapter 2 — Data & Data Governance (Art. 10)

This chapter assesses data management practices for training, validation, and testing datasets. Questions cover:

  • Data governance and management practices (collection, labelling, cleaning, enrichment)
  • Statistical properties examination (biases, gaps, inadequacies)
  • Relevance, representativeness, and completeness of training data
  • Appropriate use of personal data, supported by a lawful basis for processing
  • Bias detection and mitigation measures in datasets
  • Data quality metrics and monitoring procedures
  • Documentation of data provenance and lineage

Chapter 3 — Technical Documentation (Art. 11)

This chapter evaluates whether your technical documentation meets the requirements of Art. 11 and Annex IV. Questions cover:

  • Completeness of technical documentation per Annex IV requirements
  • Documentation of system description, design specifications, and development process
  • Documentation of monitoring, testing, and validation procedures
  • Documentation kept up to date throughout the AI system lifecycle
  • Accessibility of documentation to competent authorities upon request
  • Version control and change management for technical documents

Chapter 4 — Record-Keeping & Logging (Art. 12)

This chapter assesses the logging capabilities of your high-risk AI systems. Questions cover:

  • Automatic logging of events (operations, inputs, outputs, decisions)
  • Traceability of the AI system's operation throughout its lifecycle
  • Log retention periods appropriate to the intended purpose
  • Protection of logs against tampering and unauthorised access
  • Logging sufficient for post-market monitoring and incident investigation
  • Compliance with the specific record-keeping requirements of Art. 12 of Regulation (EU) 2024/1689

Chapter 5 — Transparency & User Information (Art. 13)

This chapter evaluates transparency measures and the information provided to deployers and users. Questions cover:

  • Design for sufficient transparency to enable deployers to interpret and use outputs appropriately
  • Instructions for use accompanying the AI system (intended purpose, limitations, known risks)
  • Information about the level of accuracy, robustness, and cybersecurity expected
  • Clear disclosure to persons subject to AI system decisions
  • Transparency about the AI nature of the system where required (chatbots, deepfakes, emotion recognition)
  • Disclosure of automated decision-making and the right to obtain human intervention
  • Accessibility and understandability of provided information for target audiences

Chapter 6 — Human Oversight (Art. 14)

This chapter assesses whether appropriate human oversight measures are in place. Questions cover:

  • Design enabling effective oversight by natural persons during the period of use
  • Measures to enable human-in-the-loop, human-on-the-loop, or human-in-command approaches
  • Ability for overseers to fully understand the AI system's capabilities and limitations
  • Ability to correctly interpret the AI system's output
  • Ability to decide not to use the AI system or to override, disregard, or reverse the output
  • Ability to intervene in the operation or interrupt the system via a "stop" mechanism
  • Training and competency of persons performing oversight

Chapter 7 — Accuracy, Robustness & Cybersecurity (Art. 15)

This chapter evaluates the technical resilience of your AI systems. Questions cover:

  • Design and development of the system to achieve an appropriate level of accuracy for the intended purpose
  • Accuracy levels declared in accompanying instructions for use
  • Resilience to errors, faults, and inconsistencies within the system or its environment
  • Technical redundancy solutions (backup systems, fail-safe plans)
  • Resilience against attempts by unauthorised third parties to alter use or performance (adversarial attacks)
  • Cybersecurity measures appropriate to the circumstances and risks
  • Protection against data poisoning, model manipulation, and adversarial inputs

Chapter 8 — Conformity Assessment & Registration (Art. 16/43/49)

This chapter assesses whether your organisation is prepared for conformity assessment and registration obligations. Questions cover:

  • Quality management system in place covering all aspects of Art. 17
  • Conformity assessment procedures identified (internal or third-party based on risk and domain)
  • Preparation for EU database registration per Art. 49
  • CE marking procedures understood and ready to implement
  • Declaration of conformity documentation prepared
  • Post-market monitoring system integrated with conformity requirements

Scoring Methodology

Each question is scored on the following 0–4 maturity scale:

  • 0 — Non-existent: No measures, processes, or documentation exist for this requirement. The organisation has not yet begun to address this area.
  • 1 — Initial / Ad Hoc: Some awareness exists, but measures are ad hoc, undocumented, and inconsistently applied. Individual efforts may exist but are not formalised.
  • 2 — Developing: Processes and measures are partially documented and implemented. Gaps remain, consistency across the organisation is limited, and a formal programme is under development.
  • 3 — Defined & Implemented: Processes are fully documented, approved, and consistently implemented across the organisation. Evidence of implementation is available and verifiable.
  • 4 — Optimised & Continuous Improvement: Processes are fully implemented and subject to regular review, continuous improvement, and benchmarking. The organisation exceeds minimum requirements and demonstrates best-in-class practices.

Question Weights

Each question is assigned a weight that reflects its relative importance to overall compliance:

  • W3 — Critical: Essential for regulatory compliance. Failure to meet this requirement poses a significant risk of enforcement action, penalties, or prohibition of the AI system; these are typically requirements with explicit deadlines or mandatory obligations in the regulation text. If scored below 3, a Critical priority remediation action is generated.
  • W2 — Important: Important for a robust compliance programme. While not the most critical, failure to meet this requirement leaves significant gaps that auditors and supervisory authorities will identify; these requirements are often necessary to demonstrate the effectiveness of the overall compliance system. If scored below 3, a High priority remediation action is generated.
  • W1 — Supporting: Supports overall compliance maturity but is less likely to be the focus of enforcement action. Meeting this requirement demonstrates a mature, comprehensive approach; these are often best-practice recommendations or supporting activities. If scored below 3, a Medium priority remediation action is generated.

Score Calculation: The overall compliance score is a weighted average:

Score = (Σ question_score × question_weight) / (Σ max_score × question_weight) × 100

where max_score is 4, the top of the maturity scale. Each chapter also receives a sub-score, calculated the same way but limited to the questions within that chapter. The overall score is displayed as a percentage ring on the results page, colour-coded green (≥80%), amber (50–79%), or red (<50%).
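
As a concrete illustration, here is a minimal Python sketch of the formula and the colour bands. The numeric multipliers behind the W1/W2/W3 labels are an assumption (taken here as 1, 2, and 3); this article defines the labels but not their values, and all names in the sketch are illustrative.

```python
# Sketch of the published scoring formula. ASSUMPTION: the numeric
# multipliers behind W1/W2/W3 are taken as 1, 2 and 3; the article
# names the labels but does not state their values.

MAX_SCORE = 4  # top of the 0-4 maturity scale
WEIGHT_VALUES = {"W1": 1, "W2": 2, "W3": 3}  # assumed mapping

def compliance_score(answers):
    """answers: list of (score, weight_label) pairs, e.g. (3, "W2")."""
    weighted = sum(s * WEIGHT_VALUES[w] for s, w in answers)
    maximum = sum(MAX_SCORE * WEIGHT_VALUES[w] for _, w in answers)
    return weighted / maximum * 100

def score_band(pct):
    """Colour band used by the results ring."""
    return "green" if pct >= 80 else "amber" if pct >= 50 else "red"

# Example: a mature supporting control, a developing important one,
# and an unaddressed critical one.
pct = compliance_score([(4, "W1"), (2, "W2"), (0, "W3")])
print(round(pct, 1), score_band(pct))  # 33.3 red
```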

Results Page

After submitting the assessment, the results page presents:

  • Overall Score Ring — A circular progress indicator showing the overall weighted compliance score as a percentage. The ring is colour-coded: green for scores at or above 80%, amber for 50–79%, and red for below 50%. The centre of the ring displays the numeric score.
  • Chapter Breakdown — A bar chart or table showing the score for each of the eight chapters. Chapters scoring below 50% are highlighted in red, those between 50% and 79% in amber, and those at or above 80% in green. This breakdown quickly identifies which compliance domains need the most attention.
  • Remediation Plan Auto-Generation — Based on the assessment results, the system automatically generates a remediation roadmap. Every question scored below 3 (Defined & Implemented) generates a remediation action. The priority of each action is determined by the question's weight (W3 → Critical, W2 → High, W1 → Medium). The remediation plan includes the specific article reference, a description of the gap, and a suggested remediation action. See the Remediation Roadmap help article for details on managing the generated plan.
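
The generation rule in the last bullet can be sketched in a few lines. This is an illustrative reconstruction under assumed field names (score, weight, article, gap), not the product's actual implementation:

```python
# Illustrative sketch of the auto-generation rule: every question
# scored below 3 ("Defined & Implemented") yields one remediation
# action, with priority derived from the question's weight label.
# All field names here are hypothetical.

PRIORITY_BY_WEIGHT = {"W3": "Critical", "W2": "High", "W1": "Medium"}

def generate_remediation_plan(questions):
    """questions: iterable of dicts with 'score', 'weight', 'article', 'gap'."""
    return [
        {
            "priority": PRIORITY_BY_WEIGHT[q["weight"]],
            "article": q["article"],  # e.g. "Art. 10"
            "gap": q["gap"],          # description of the shortfall
        }
        for q in questions
        if q["score"] < 3
    ]
```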

Tip — Periodic Reassessment: Conduct gap assessments quarterly or after significant changes to your AI portfolio. Each assessment is saved independently, allowing you to track compliance maturity over time. The dashboard compliance score always reflects the most recent assessment, but you can compare historical assessments from the assessment list view.

Warning — Assessment Integrity: Once submitted, an assessment cannot be edited. This is an intentional design decision that preserves audit trail integrity. If you discover an error after submission, start a new assessment and note the correction in the notes fields. Auditors will appreciate the transparency of an unaltered record.

Notes Field

Each question includes an optional notes field (up to 2,000 characters). Use this field to document:

  • Evidence supporting your score (e.g., "See internal policy document AI-GOV-003, approved 2025-11-15")
  • Planned improvements (e.g., "Data governance framework draft in review, expected completion Q1 2026")
  • Contextual explanations (e.g., "N/A — this system does not process personal data")
  • References to external certifications or audit reports

These notes are included in the assessment export and are invaluable during regulatory audits and management reviews.