Fairness Metrics
Quantitative measures (e.g., demographic parity, equalized odds) used to evaluate how fair an AI model’s predictions are across groups.
Fairness metrics provide objective criteria for detecting and monitoring group-based outcome disparities. Common metrics include demographic parity (equal positive-prediction rates across groups), equalized odds (equal true-positive and false-positive rates across groups), and calibration (predicted risk matching observed outcomes within each group). Governance frameworks require selecting metrics appropriate to each use case and tracking them continuously to enforce fairness SLAs.
A university’s predictive-admissions model reports demographic-parity differences quarterly. When female applicants’ positive-prediction rate falls below 95% of male applicants’ rate, an alert triggers a fairness review. The team adjusts decision thresholds to stay within the four-fifths (0.8) parity rule and documents the change in the fairness dashboard.
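As a sketch of how two of these metrics might be computed, the snippet below calculates a demographic-parity ratio and equalized-odds gaps over toy arrays (`y_true`, `y_pred`, `group` are illustrative placeholders, not data from any real admissions system):

```python
import numpy as np

def demographic_parity_ratio(y_pred, group):
    """Ratio of the lowest to the highest positive-prediction rate
    across groups. Values below 0.8 breach the four-fifths rule."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return min(rates) / max(rates)

def equalized_odds_gaps(y_true, y_pred, group):
    """Largest between-group gaps in true-positive rate (TPR)
    and false-positive rate (FPR). Both are 0 under perfect
    equalized odds."""
    tprs, fprs = [], []
    for g in np.unique(group):
        m = group == g
        tprs.append(y_pred[m & (y_true == 1)].mean())  # TPR for group g
        fprs.append(y_pred[m & (y_true == 0)].mean())  # FPR for group g
    return max(tprs) - min(tprs), max(fprs) - min(fprs)

# Toy data: group "a" receives positive predictions more often.
y_true = np.array([1, 0, 1, 0, 1, 0, 1, 0])
y_pred = np.array([1, 1, 1, 0, 1, 0, 0, 0])
group  = np.array(["a", "a", "a", "a", "b", "b", "b", "b"])

print(demographic_parity_ratio(y_pred, group))  # → 0.333... (below 0.8)
print(equalized_odds_gaps(y_true, y_pred, group))
```

In practice a governance dashboard would recompute these on each reporting window and raise an alert when a metric crosses its configured threshold, as in the admissions example above.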

We help you find answers
What problem does Enzai solve?
Enzai provides enterprise-grade infrastructure to manage AI risk and compliance. It creates a centralized system of record where AI systems, models, datasets, and governance decisions are documented, assessed, and auditable.