Meaningful Human Control
A regulatory and operational standard ensuring that humans retain the ability to oversee, intervene in, and override AI decision-making processes.
Definition
In the context of Agentic AI, "control" does not necessarily mean a human must approve every single micro-decision, which would negate the speed benefits of automation. Instead, it refers to the design of the system: setting clear operational boundaries, ensuring the AI is transparent about its status as a machine, and implementing a functional "kill-switch" that allows a human operator to immediately halt the system if it deviates from its intended goals or safety parameters.
Concrete example
A logistics company uses an autonomous AI scheduler to route delivery trucks in real-time. "Meaningful human control" is maintained not by a dispatcher approving every turn, but by a dashboard that allows the dispatcher to define "No-Go Zones" and instantly recall all trucks to the depot (the kill-switch) in the event of a severe weather alert or system malfunction.
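The pattern above, a human-defined control layer that gates every autonomous decision, can be sketched in code. This is a minimal illustration, not a production routing system; the `Zone`, `ControlLayer`, and `approve` names are hypothetical, and the no-go zones are simplified to axis-aligned latitude/longitude boxes.

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class Zone:
    """A human-defined 'No-Go Zone', simplified to a lat/lon bounding box."""
    min_lat: float
    min_lon: float
    max_lat: float
    max_lon: float

    def contains(self, lat: float, lon: float) -> bool:
        return (self.min_lat <= lat <= self.max_lat
                and self.min_lon <= lon <= self.max_lon)


class ControlLayer:
    """Operational boundaries the AI scheduler must pass before acting.

    The dispatcher sets the boundaries; the autonomous system only
    proposes waypoints and must get approval from this layer.
    """

    def __init__(self) -> None:
        self.no_go_zones: list[Zone] = []
        self.halted = False  # the kill-switch flag

    def add_no_go_zone(self, zone: Zone) -> None:
        self.no_go_zones.append(zone)

    def kill_switch(self) -> None:
        """Immediately halt the system; no further waypoints are approved."""
        self.halted = True

    def approve(self, lat: float, lon: float) -> bool:
        """Approve a proposed waypoint only if it respects all human-set limits."""
        if self.halted:
            return False
        return not any(z.contains(lat, lon) for z in self.no_go_zones)


control = ControlLayer()
control.add_no_go_zone(Zone(40.0, -75.0, 41.0, -74.0))

print(control.approve(40.5, -74.5))  # inside a No-Go Zone -> False
print(control.approve(42.0, -73.0))  # outside all zones   -> True

control.kill_switch()                # severe weather alert
print(control.approve(42.0, -73.0))  # system halted       -> False
```

The key design point is that control lives outside the AI's decision loop: the scheduler can be arbitrarily fast and autonomous, but every action it proposes is checked against boundaries only a human can set or relax.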