False Negative
When an AI model incorrectly predicts the negative class for an instance that is actually positive (a Type II error).
A false negative means the model misses a true case, which is especially dangerous in fraud detection, security, and medical diagnosis: missed events can translate into undetected threats or foregone interventions. Governance should monitor recall rates, establish acceptable risk levels, and implement secondary checks or monitoring (e.g., anomaly detection) to catch events the model misses.
In a cancer-screening AI, a 2% false-negative rate means 2 in 100 tumors go undetected. A hospital sets up a manual double-read process for low-confidence scans and lowers the model’s decision threshold, reducing false negatives to 1% at the expense of slightly increased false positives, then tracks patient outcomes to validate the adjustment’s effectiveness.
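The threshold adjustment described above can be sketched in a few lines: lowering the decision threshold flags more scans as positive, which reduces false negatives (raising recall) at the cost of more false positives. The scores, labels, and thresholds below are illustrative, not from any real screening model.

```python
# Minimal sketch of the false-negative / false-positive trade-off when
# lowering a classifier's decision threshold. All data is hypothetical.

def confusion_counts(scores, labels, threshold):
    """Count TP, FP, FN when flagging scores >= threshold as positive."""
    tp = sum(1 for s, y in zip(scores, labels) if y == 1 and s >= threshold)
    fp = sum(1 for s, y in zip(scores, labels) if y == 0 and s >= threshold)
    fn = sum(1 for s, y in zip(scores, labels) if y == 1 and s < threshold)
    return tp, fp, fn

def recall(tp, fn):
    """Share of actual positives the model catches (1 - false-negative rate)."""
    return tp / (tp + fn) if (tp + fn) else 0.0

# Hypothetical model scores for 10 scans (label 1 = tumor present)
scores = [0.95, 0.62, 0.48, 0.30, 0.85, 0.15, 0.55, 0.40, 0.70, 0.20]
labels = [1,    1,    1,    1,    0,    0,    0,    0,    1,    0]

for t in (0.60, 0.45):  # original vs. lowered threshold
    tp, fp, fn = confusion_counts(scores, labels, t)
    print(f"threshold={t}: recall={recall(tp, fn):.2f}, FN={fn}, FP={fp}")
```

On this toy data, dropping the threshold from 0.60 to 0.45 cuts false negatives from 2 to 1 while false positives rise from 1 to 2, the same trade the hospital accepts in the scenario above; the extra positives are then handled by the manual double-read step.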
