Article 14 of the EU AI Act requires high-risk AI systems to be designed with human oversight measures. These measures must be proportionate to the system's risk level and its context of use.
## Adding an oversight measure
Go to AI Act → Human Oversight and click Add Measure.
| Field | Required | Description |
|---|---|---|
| Measure Title | Required | Name of the oversight measure |
| AI System | Required | Which AI system this applies to |
| Type | Required | Human-in-the-loop, Human-on-the-loop, or Human-in-command |
| Description | Optional | How the oversight is implemented in practice |
| Responsible Person | Optional | Who performs the oversight |
| Status | Required | Planned, Active, or Under Review |
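The fields above can be sketched as a simple record with validation of the required choice fields. This is an illustrative data model only, not the product's actual schema; all class and field names are assumptions.

```python
from dataclasses import dataclass
from typing import Optional

# Allowed values from the Type and Status fields above
VALID_TYPES = {"Human-in-the-loop", "Human-on-the-loop", "Human-in-command"}
VALID_STATUSES = {"Planned", "Active", "Under Review"}

@dataclass
class OversightMeasure:
    # Required fields
    title: str
    ai_system: str
    type: str
    status: str
    # Optional fields
    description: Optional[str] = None
    responsible_person: Optional[str] = None

    def __post_init__(self) -> None:
        # Reject values outside the allowed sets
        if self.type not in VALID_TYPES:
            raise ValueError(f"invalid oversight type: {self.type}")
        if self.status not in VALID_STATUSES:
            raise ValueError(f"invalid status: {self.status}")

# Example measure (hypothetical AI system name)
measure = OversightMeasure(
    title="Manual review of credit decisions",
    ai_system="Credit Scoring Model",
    type="Human-in-the-loop",
    status="Planned",
)
```

Keeping the optional fields defaulted to `None` mirrors the table: only the four required fields must be supplied when a measure is created.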
## Types of oversight
- Human-in-the-loop — A human approves every output before it takes effect
- Human-on-the-loop — The system operates autonomously but a human monitors and can intervene
- Human-in-command — A human has the ability to override or shut down the system at any time
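The key operational difference between the three types is whether each output is blocked until a human approves it. A minimal sketch of that distinction, purely illustrative and not part of the product:

```python
from enum import Enum

class OversightType(Enum):
    HUMAN_IN_THE_LOOP = "Human-in-the-loop"  # human approves every output
    HUMAN_ON_THE_LOOP = "Human-on-the-loop"  # human monitors, can intervene
    HUMAN_IN_COMMAND = "Human-in-command"    # human can override or shut down

def requires_approval_before_effect(oversight: OversightType) -> bool:
    """Only human-in-the-loop blocks each output on explicit approval;
    the other two types let the system act autonomously by default."""
    return oversight is OversightType.HUMAN_IN_THE_LOOP
```

Under human-on-the-loop and human-in-command, intervention is possible but not required per output, which is why only human-in-the-loop returns `True` here.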