Robustness
The ability of an AI system to maintain reliable performance under a variety of challenging or adversarial conditions.
Definition
The property of withstanding input perturbations, distribution shifts, and attack vectors such as adversarial examples. Robustness is typically achieved through adversarial training, ensemble methods, or robust optimization. Governance requires specifying robustness requirements for each use case, testing under defined stress scenarios, and incorporating robustness checks into validation and monitoring processes so that systems remain dependable in real-world conditions.
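To make the first of these techniques concrete, the sketch below shows one adversarial-training step using the fast gradient sign method (FGSM), a common way to generate adversarial examples. It assumes a PyTorch image classifier; the function name, the 50/50 clean-to-adversarial mix, and the epsilon value are illustrative assumptions, not a prescribed recipe.

```python
# Minimal sketch of one adversarial-training step with FGSM.
# Model, data, and epsilon are illustrative, not from any
# specific system or standard.
import torch
import torch.nn as nn

def fgsm_adversarial_step(model, images, labels, optimizer, epsilon=0.03):
    """One training step that mixes clean and FGSM-perturbed inputs."""
    loss_fn = nn.CrossEntropyLoss()

    # First pass: compute the gradient of the loss w.r.t. the inputs.
    images = images.clone().detach().requires_grad_(True)
    loss = loss_fn(model(images), labels)
    loss.backward()

    # Perturb each pixel in the direction that increases the loss,
    # then clamp back to the valid [0, 1] image range.
    adv_images = (images + epsilon * images.grad.sign()).clamp(0, 1).detach()

    # Second pass: train on a 50/50 mix of clean and adversarial examples
    # (the mixing ratio here is an arbitrary illustrative choice).
    optimizer.zero_grad()
    mixed_loss = (
        0.5 * loss_fn(model(images.detach()), labels)
        + 0.5 * loss_fn(model(adv_images), labels)
    )
    mixed_loss.backward()
    optimizer.step()
    return mixed_loss.item()
```

Robust optimization methods such as projected gradient descent follow the same pattern but take several perturbation steps per batch instead of one.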
Real-World Example
A self-driving car vendor subjects its vision system to simulated fog, glare, and adversarial patch attacks. It incorporates these adversarial examples into the training set and enforces a governance policy that performance under each condition must meet minimum detection rates before highway deployment.
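A policy like this can be encoded as an automated pre-deployment gate. The sketch below is a hypothetical illustration: the condition names and thresholds are assumptions, and in practice the requirements would come from the use-case-specific robustness specification mentioned in the definition.

```python
# Hypothetical pre-deployment robustness gate: every stress
# condition must meet its minimum detection rate before the
# perception system is approved for highway use. Condition
# names and thresholds are illustrative assumptions.
MIN_DETECTION_RATE = {
    "clear": 0.99,
    "simulated_fog": 0.95,
    "glare": 0.95,
    "adversarial_patch": 0.90,
}

def robustness_gate(measured_rates: dict[str, float]) -> bool:
    """Return True only if every condition meets its threshold."""
    failures = {
        cond: rate
        for cond, rate in measured_rates.items()
        if rate < MIN_DETECTION_RATE.get(cond, 1.0)
    }
    for cond, rate in failures.items():
        print(f"FAIL {cond}: {rate:.3f} < {MIN_DETECTION_RATE[cond]:.3f}")
    return not failures

# Example: deployment is blocked because the adversarial-patch
# detection rate falls below its threshold.
approved = robustness_gate({
    "clear": 0.995,
    "simulated_fog": 0.96,
    "glare": 0.97,
    "adversarial_patch": 0.88,
})
assert approved is False
```

Running the gate on every release candidate, rather than once before initial deployment, also turns it into a monitoring check against regressions.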