Article 14 of the EU AI Act requires high-risk AI systems to be designed so that they can be effectively overseen by natural persons. Oversight measures must be proportionate to the system's risk level and its context of use.

Adding an oversight measure

Go to AI Act → Human Oversight and click Add Measure.

  • Measure Title (required): Name of the oversight measure
  • AI System (required): Which AI system this measure applies to
  • Type (required): Human-in-the-loop, Human-on-the-loop, or Human-in-command
  • Description (optional): How the oversight is implemented in practice
  • Responsible Person (optional): Who performs the oversight
  • Status (required): Planned, Active, or Under Review
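
For teams that mirror these records in their own tooling, the sketch below models a measure as a TypeScript type. The field names and status values follow the form above, but the `OversightMeasure` type and its property names are illustrative assumptions, not the product's actual schema or API.

```typescript
// Illustrative sketch only: the fields mirror the Add Measure form above,
// but this type is an assumption, not the product's actual data model.

type OversightType = "human-in-the-loop" | "human-on-the-loop" | "human-in-command";
type MeasureStatus = "planned" | "active" | "under-review";

interface OversightMeasure {
  measureTitle: string;        // Required: name of the oversight measure
  aiSystem: string;            // Required: which AI system this applies to
  type: OversightType;         // Required: see "Types of oversight" below
  description?: string;        // Optional: how the oversight is implemented in practice
  responsiblePerson?: string;  // Optional: who performs the oversight
  status: MeasureStatus;       // Required: Planned, Active, or Under Review
}
```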

Types of oversight

  • Human-in-the-loop — A human approves every output before it takes effect
  • Human-on-the-loop — The system operates autonomously but a human monitors and can intervene
  • Human-in-command — A human has the ability to override or shut down the system at any time
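
As a concrete illustration, a human-in-the-loop measure might be recorded as follows, using the illustrative `OversightMeasure` type sketched above. The AI system name and responsible role are hypothetical, chosen only for the example.

```typescript
// Example record using the illustrative OversightMeasure type above.
// The system name and responsible role are hypothetical.
const creditDecisionReview: OversightMeasure = {
  measureTitle: "Manual approval of adverse credit decisions",
  aiSystem: "Credit Scoring Model v2",
  type: "human-in-the-loop",   // a human approves every output before it takes effect
  description: "A credit officer reviews and confirms each rejection before it is sent to the applicant.",
  responsiblePerson: "Senior Credit Officer",
  status: "planned",
};
```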