Post-Market Monitoring — Art. 72 Compliance

Article 72 of the EU AI Act requires providers of high-risk AI systems to establish and document a post-market monitoring system in a manner that is proportionate to the nature of the AI technologies and the risks of the system. This system must actively and systematically collect, document, and analyse relevant data throughout the AI system's lifetime. The Post-Market Monitoring module in Venvera enables you to define, schedule, and track monitoring plans for each AI system, including key performance indicators (KPIs), review frequencies, and review outcomes. This article provides complete documentation of every feature.

Purpose of Post-Market Monitoring

Post-market monitoring serves several critical functions under the EU AI Act:

  • Continuous Compliance Verification — Ensures that the AI system continues to meet the requirements of the regulation throughout its operational life, not just at the time of initial conformity assessment.
  • Performance Tracking — Monitors the AI system's accuracy, robustness, and reliability over time to detect degradation, drift, or emergent issues that were not apparent during testing.
  • Incident Detection — Provides early warning of potential serious incidents by tracking KPIs that may indicate performance problems, bias emergence, or safety concerns before they manifest as actual incidents.
  • Evidence for Authorities — Generates a documented record of ongoing monitoring activities that can be presented to market surveillance authorities upon request, demonstrating proactive compliance.
  • Feedback Loop — Feeds monitoring findings back into the risk management system (Art. 9) and the continuous improvement process, enabling the provider to update the AI system's risk profile and compliance documentation as needed.

List View

Step 1 — Open the Monitoring List

Navigate to EU AI Act → Monitoring from the sidebar. The list view shows all monitoring plans across all AI systems, sorted by next review date (nearest first to highlight upcoming reviews). Each row shows the plan title, linked AI system, frequency, next review date, number of KPIs, and status.

Step 2 — Search Plans

Use the search bar to filter plans by title or linked AI system name. The search is case-insensitive and matches partial strings, allowing you to quickly find monitoring plans for a specific system or topic.

Step 3 — Filter by Status

Use the Status dropdown to filter by plan status. Typical statuses include Active (monitoring is ongoing), Paused (temporarily suspended), and Completed (the AI system has been retired and monitoring is no longer required). Active plans with overdue review dates are highlighted with a warning indicator.
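The overdue-warning logic described above can be sketched as follows. This is an illustrative model, not the platform's actual implementation; the `status` and `next_review_date` field names are assumptions:

```python
from datetime import date

def is_overdue(plan: dict, today: date = None) -> bool:
    """A plan is flagged when it is Active and its next review date has passed."""
    today = today or date.today()
    next_review = plan.get("next_review_date")
    return (
        plan.get("status") == "Active"
        and next_review is not None
        and next_review < today
    )

plans = [
    {"title": "Fraud Detection — Daily Performance Monitoring",
     "status": "Active", "next_review_date": date(2025, 1, 10)},
    {"title": "Credit Risk — Quarterly Compliance Review",
     "status": "Paused", "next_review_date": date(2025, 1, 1)},
]

# Paused plans are not flagged even when their review date is in the past.
overdue = [p["title"] for p in plans if is_overdue(p, today=date(2025, 2, 1))]
```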

Step 4 — Filter by Frequency

Use the Frequency dropdown to filter plans by their review cadence. Available frequencies are:

  • Daily — Review and update monitoring data every day. Appropriate for mission-critical AI systems with high transaction volumes where rapid drift detection is essential (e.g., real-time fraud detection, autonomous safety systems). Daily monitoring generates the most granular data but requires significant resource commitment.
  • Weekly — Review monitoring data weekly. Suitable for high-risk systems with moderate transaction volumes where weekly trend analysis is sufficient to catch emerging issues before they become critical.
  • Monthly — Review monitoring data monthly. Appropriate for most high-risk AI systems as a standard monitoring cadence. Monthly reviews balance thoroughness with resource efficiency.
  • Quarterly — Review monitoring data every three months. Suitable for systems with stable performance characteristics and lower risk profiles, or as a supplementary cadence for detailed deep-dive reviews that complement more frequent operational monitoring.
  • Annually — Review monitoring data once per year. Appropriate for minimal-risk systems where monitoring is voluntary, or as a cadence for comprehensive annual reviews that supplement more frequent operational monitoring. Annual reviews are also useful for documenting year-over-year trends and supporting management reporting.
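The way each cadence advances the next review date can be sketched as below. This is a plausible model, assuming month-based cadences add calendar months with day clamping (so a review completed on 31 January schedules the next monthly review for 28/29 February); the platform's exact rollover behaviour may differ:

```python
from datetime import date, timedelta

def add_months(d: date, months: int) -> date:
    """Add calendar months, clamping the day to the target month's length."""
    total = d.month - 1 + months
    year, month = d.year + total // 12, total % 12 + 1
    leap = year % 4 == 0 and (year % 100 != 0 or year % 400 == 0)
    days_in_month = [31, 29 if leap else 28, 31, 30, 31, 30, 31, 31, 30, 31, 30, 31]
    return date(year, month, min(d.day, days_in_month[month - 1]))

def next_review_date(completed_on: date, frequency: str) -> date:
    """Compute the next scheduled review from the completed review's date."""
    if frequency == "Daily":
        return completed_on + timedelta(days=1)
    if frequency == "Weekly":
        return completed_on + timedelta(weeks=1)
    if frequency == "Monthly":
        return add_months(completed_on, 1)
    if frequency == "Quarterly":
        return add_months(completed_on, 3)
    if frequency == "Annually":
        return add_months(completed_on, 12)
    raise ValueError(f"Unknown frequency: {frequency}")

next_review_date(date(2025, 1, 31), "Monthly")  # 2025-02-28 (day clamped)
```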

Creating a New Monitoring Plan

Click + Add Monitoring Plan to open the creation form. Complete all required fields:

AI System (Dropdown, required) — Select the AI system this monitoring plan covers. The dropdown lists all AI systems in your inventory with their current status. Each monitoring plan must be linked to exactly one AI system. An AI system can have multiple monitoring plans — for example, one plan for performance monitoring (daily), another for bias monitoring (monthly), and a third for comprehensive compliance review (quarterly). This flexibility allows you to tailor monitoring intensity to different risk dimensions of the same system.

Title (Text input, required) — A descriptive title for the monitoring plan (e.g., "Fraud Detection — Daily Performance Monitoring", "HR Screening — Monthly Bias Review", "Credit Risk — Quarterly Compliance Review"). Maximum 300 characters. The title should convey both the AI system and the focus area of the monitoring plan. Titles must be unique within the same AI system to prevent confusion when multiple plans exist for a single system.

Description (Textarea, optional) — A detailed description of the monitoring plan's scope, methodology, and objectives. Document what aspects of the AI system are being monitored, what data sources are used, what analysis techniques are applied, and what the expected outcomes of each review cycle are. Include references to the Art. 9 risk management system if the monitoring plan addresses specific identified risks. Include threshold definitions for each KPI — at what values should an investigation be triggered? This description serves as the operating manual for whoever performs the monitoring activities. Maximum 5,000 characters.

Frequency (Dropdown, optional) — Select the review frequency: Daily, Weekly, Monthly, Quarterly, or Annually. The frequency determines how often the monitoring plan's next review date is recalculated after each review is completed. Choose a frequency that is proportionate to the AI system's risk level, transaction volume, and the criticality of the KPIs being tracked. Art. 72 requires that monitoring be "proportionate to the nature of the AI technologies and the risks" — over-monitoring wastes resources while under-monitoring creates compliance gaps and blind spots.

KPIs (Text input, comma-separated, optional) — Enter the key performance indicators (KPIs) that this monitoring plan tracks, separated by commas. Examples of well-chosen KPIs:

  • Accuracy — Overall prediction accuracy, measured against ground truth labels.
  • False Positive Rate — Proportion of negative cases incorrectly classified as positive.
  • False Negative Rate — Proportion of positive cases incorrectly classified as negative.
  • Demographic Parity — Difference in positive outcome rates across protected groups.
  • Response Latency — Time taken for the AI system to produce an output.
  • Data Drift Score — Statistical measure of how much input data distributions have shifted from training data.
  • Model Confidence Distribution — Distribution of confidence scores in model outputs, detecting calibration drift.
  • Error Rate by Subgroup — Accuracy broken down by demographic or operational subgroups to detect emerging biases.

Each KPI should correspond to a measurable metric with defined thresholds documented in the description field. Well-defined KPIs are the backbone of effective post-market monitoring.

Next Review Date (Date picker, optional) — The date of the next scheduled review. This date is used to calculate overdue status and to generate review reminders on the dashboard. When a review is completed, update this field to the next scheduled date based on the monitoring frequency. For new plans, set the first review date based on when you expect to have sufficient monitoring data to analyse. Overdue review dates (past today's date) are highlighted in red throughout the platform, including on the dashboard's "Needing Review" count and the monitoring list view.

Art. 72 Requirements Summary: The post-market monitoring system must: (a) be established before the AI system is placed on the market or put into service; (b) actively and systematically collect, document, and analyse relevant data provided by deployers or collected through other sources; (c) be proportionate to the nature of the AI technologies and risks; (d) feed relevant data back into the risk management system (Art. 9) to update risk assessments; (e) for high-risk AI systems, be part of the quality management system (Art. 17); and (f) include the post-market monitoring plan as an element of the technical documentation (Annex IV). Ensure that your monitoring plans collectively address all of these requirements.

Tip — Connecting Monitoring to Incidents: When a monitoring review identifies an anomaly or performance issue that exceeds defined thresholds, create an incident record in the AI Act Incidents module to formally document and track the issue. This creates a documented link between your monitoring activities and your incident management process, demonstrating to auditors that your monitoring system is effective at detecting issues early and that identified issues are properly managed through to resolution. Reference the monitoring plan and the specific KPI that triggered the investigation in the incident description.

Warning — Monitoring Plan Before Market Placement: Art. 72(1) requires that the post-market monitoring system be established before the AI system is placed on the market or put into service. Do not wait until after deployment to create monitoring plans. Define your monitoring plans during the development phase and include them in your technical documentation package. A monitoring plan created after deployment may be viewed by supervisory authorities as evidence of non-compliance with Art. 72(1). Create at least a draft monitoring plan at the same time as you register the AI system.
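Several of the KPI examples above can be computed directly from production predictions. A minimal sketch, with purely illustrative labels, predictions, and group assignments:

```python
def error_rates(y_true, y_pred):
    """False positive and false negative rates from binary labels (0/1)."""
    fp = sum(1 for t, p in zip(y_true, y_pred) if t == 0 and p == 1)
    fn = sum(1 for t, p in zip(y_true, y_pred) if t == 1 and p == 0)
    negatives = sum(1 for t in y_true if t == 0)
    positives = sum(1 for t in y_true if t == 1)
    return fp / negatives, fn / positives

def demographic_parity_gap(y_pred, groups):
    """Largest difference in positive-outcome rate between any two groups."""
    by_group = {}
    for p, g in zip(y_pred, groups):
        by_group.setdefault(g, []).append(p)
    positive_rates = [sum(v) / len(v) for v in by_group.values()]
    return max(positive_rates) - min(positive_rates)

# Illustrative data: ground-truth labels, model outputs, protected-group tags.
y_true = [1, 0, 1, 0, 1, 0, 0, 1]
y_pred = [1, 0, 0, 1, 1, 0, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

fpr, fnr = error_rates(y_true, y_pred)          # 0.25, 0.25
gap = demographic_parity_gap(y_pred, groups)    # 0.0 (equal rates here)
```

In practice these values would be computed per review period and compared against the thresholds documented in the plan description.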

Review Workflow

When a monitoring plan's next review date arrives, the responsible person should follow this structured process:

Step 1 — Collect KPI Data

Gather the latest KPI data for the monitored AI system from production logs, analytics dashboards, and any automated monitoring tools. Ensure data covers the full period since the last review.

Step 2 — Compare Against Thresholds

Compare current KPI values against baseline values and the defined thresholds documented in the monitoring plan description. Identify any KPIs that have breached warning or critical thresholds.
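The threshold comparison in this step can be sketched as follows. The threshold values, KPI names, and warning/critical structure are hypothetical; the actual thresholds live in each plan's description field:

```python
# Hypothetical per-KPI thresholds, each with a direction: "above" means
# higher values are worse, "below" means lower values are worse.
THRESHOLDS = {
    "False Positive Rate": {"warning": 0.05, "critical": 0.10, "bad_when": "above"},
    "Accuracy":            {"warning": 0.92, "critical": 0.88, "bad_when": "below"},
}

def classify(kpi: str, value: float) -> str:
    """Return 'ok', 'warning', or 'critical' for one KPI reading."""
    t = THRESHOLDS[kpi]
    if t["bad_when"] == "above":
        if value >= t["critical"]:
            return "critical"
        if value >= t["warning"]:
            return "warning"
    else:
        if value <= t["critical"]:
            return "critical"
        if value <= t["warning"]:
            return "warning"
    return "ok"

# KPI values collected in Step 1 for the current review period.
review = {"False Positive Rate": 0.07, "Accuracy": 0.87}
breaches = {k: classify(k, v) for k, v in review.items() if classify(k, v) != "ok"}
# {'False Positive Rate': 'warning', 'Accuracy': 'critical'}
```

Any entry in `breaches` would then feed Steps 3 to 5: document the finding, investigate the root cause, and escalate if warranted.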

Step 3 — Document Findings

Record the review findings, including all KPI values, trend observations, and any anomalies detected. Update the monitoring plan's notes or create a review record.

Step 4 — Investigate Breaches

If any KPIs are outside acceptable thresholds, investigate the root cause. Document the investigation methodology, findings, and conclusions. Determine whether the issue poses a risk to compliance, safety, or fundamental rights.

Step 5 — Escalate if Needed

If the investigation reveals a significant issue, create an incident record in the AI Act Incidents module and/or update the risk management documentation. For serious issues, escalate to senior management immediately.

Step 6 — Update Next Review Date

After completing the review, update the next review date based on the monitoring frequency. If the monitoring plan itself needs to be updated (new KPIs, changed thresholds, different frequency), make the necessary edits to ensure the plan remains current and effective.

This structured workflow ensures continuous, documented monitoring that satisfies Art. 72 requirements and provides a robust audit trail demonstrating active compliance management throughout the AI system's operational life.