False Negative

When an AI model predicts the negative class for an instance that is actually positive (a Type II error).

Definition

Occurs when the model misses genuinely positive cases, which is especially dangerous in fraud detection, security screening, and medical diagnosis. False negatives can lead to undetected threats or missed interventions. Governance must monitor recall rates, establish acceptable risk levels, and implement secondary checks or monitoring (e.g., anomaly detection) to catch missed events.
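
A minimal sketch of how a recall-monitoring check might look, assuming binary labels where 1 is the positive class. It uses scikit-learn's confusion_matrix; the example labels and the FNR_LIMIT value are illustrative, not prescribed.

```python
from sklearn.metrics import confusion_matrix

y_true = [1, 0, 1, 1, 0, 1, 0, 0, 1, 1]  # actual outcomes
y_pred = [1, 0, 0, 1, 0, 1, 0, 1, 1, 0]  # model decisions

# For binary labels, ravel() yields tn, fp, fn, tp in that order.
tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()

recall = tp / (tp + fn)  # share of actual positives the model caught
fnr = fn / (fn + tp)     # false-negative rate = 1 - recall

print(f"recall={recall:.2f}, false-negative rate={fnr:.2f}")

# A governance check might alert when the rate exceeds an agreed risk level.
FNR_LIMIT = 0.05  # hypothetical acceptable risk level
if fnr > FNR_LIMIT:
    print("ALERT: false-negative rate above governance threshold")
```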

Real-World Example

In a cancer-screening AI, a 2% false-negative rate means 2 in 100 tumors go undetected. A hospital sets up a manual double-read process for low-confidence scans and lowers the model's decision threshold, reducing the false-negative rate to 1% at the expense of slightly more false positives. It then tracks patient outcomes to validate that the adjustment is effective.
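
An illustrative sketch of the threshold trade-off described above, assuming the model outputs a per-scan probability that a tumor is present. The scores and threshold values are made up for demonstration; in practice they would come from validation data.

```python
import numpy as np

scores = np.array([0.92, 0.48, 0.35, 0.81, 0.12, 0.55, 0.28, 0.67, 0.45])
y_true = np.array([1,    1,    0,    1,    0,    1,    0,    0,    0   ])

def fn_fp_counts(threshold):
    """Count false negatives and false positives at a decision threshold."""
    y_pred = (scores >= threshold).astype(int)
    fn = int(np.sum((y_true == 1) & (y_pred == 0)))
    fp = int(np.sum((y_true == 0) & (y_pred == 1)))
    return fn, fp

# Lowering the threshold converts some false negatives into extra
# false positives: the trade the hospital accepts in the example.
for t in (0.50, 0.40):
    fn, fp = fn_fp_counts(t)
    print(f"threshold={t:.2f}: false negatives={fn}, false positives={fp}")
```

On the illustrative data, dropping the threshold from 0.50 to 0.40 eliminates a false negative while adding a false positive, which is why the example pairs the change with outcome tracking rather than treating it as free.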