Justice Metrics
Quantitative measures (e.g., disparate impact, equal opportunity) used to assess fairness and nondiscrimination in AI decision‐making.
Statistical metrics that quantify fairness across demographic or protected classes. Disparate impact compares favorable-outcome rates between groups, equal opportunity compares true-positive rates between groups, and calibration checks whether predicted risk scores match observed outcomes within each group. Governance frameworks mandate selecting appropriate justice metrics for each use case, setting acceptable thresholds, and reporting results regularly to oversight bodies to ensure nondiscrimination throughout the AI lifecycle.
A predictive-policing model’s outputs are evaluated monthly: the department calculates disparate-impact ratios for stops by race and finds the ratio falls below the 0.8 threshold set by the four-fifths rule. It pauses automated patrol recommendations, recalibrates the model with fairness constraints, and verifies post-deployment that stop rates meet justice-metric targets.
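As an illustrative sketch, the two metrics described above can be computed from predictions and group labels. The function names and sample data below are hypothetical, not from any standard fairness library:

```python
import numpy as np

def disparate_impact_ratio(y_pred, group):
    """Ratio of favorable-outcome rates: lowest group rate / highest group rate.
    Values below 0.8 fail the four-fifths rule."""
    rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
    return min(rates.values()) / max(rates.values())

def equal_opportunity_gap(y_true, y_pred, group):
    """Largest difference in true-positive rates across groups (0 = equal opportunity)."""
    tprs = []
    for g in np.unique(group):
        positives = (group == g) & (y_true == 1)   # actual positives in this group
        tprs.append(y_pred[positives].mean())       # fraction correctly flagged
    return max(tprs) - min(tprs)

# Hypothetical audit sample: 1 = favorable outcome / actual positive
y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0, 1, 0])
y_true = np.array([1, 0, 1, 0, 0, 1, 1, 0, 1, 1])
group  = np.array(["A", "A", "A", "A", "A", "B", "B", "B", "B", "B"])

di = disparate_impact_ratio(y_pred, group)
gap = equal_opportunity_gap(y_true, y_pred, group)
print(f"disparate impact ratio: {di:.2f}")
print(f"equal opportunity gap:  {gap:.2f}")
if di < 0.8:
    print("Fails the four-fifths rule; review required")
```

In a governance workflow, these values would be computed on each monitoring cycle and compared against the thresholds the oversight body has approved for the use case.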
