Fairness Metrics
Quantitative measures (e.g., demographic parity, equalized odds) used to evaluate whether an AI model’s predictions are equitable across demographic groups.
Definition
Fairness metrics provide objective criteria for detecting and monitoring group-based outcome disparities. Common metrics include: Demographic Parity (equal positive-prediction rates across groups), Equalized Odds (equal true-positive and false-positive rates across groups), and Calibration (predicted risk matching observed outcomes within each group). Governance frameworks mandate selecting metrics appropriate to each use case and tracking them continuously to enforce fairness SLAs.
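A minimal sketch of how two of these metrics might be computed from binary predictions and group labels (the function names and data below are illustrative assumptions, not a specific library's API):

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return max(rates) - min(rates)

def equalized_odds_gaps(y_true, y_pred, group):
    """Per-group TPR and FPR gaps; equalized odds wants both near zero."""
    gaps = {}
    for label, name in [(1, "tpr"), (0, "fpr")]:
        # Rate of positive predictions among examples with this true label
        rates = [y_pred[(group == g) & (y_true == label)].mean()
                 for g in np.unique(group)]
        gaps[name] = max(rates) - min(rates)
    return gaps

# Synthetic data for illustration only
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0])
y_pred = np.array([1, 0, 1, 0, 1, 0, 1, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(demographic_parity_difference(y_pred, group))  # gap in selection rates
print(equalized_odds_gaps(y_true, y_pred, group))    # {'tpr': ..., 'fpr': ...}
```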
Real-World Example
A university’s predictive-admissions model reports demographic parity differences quarterly. When the ratio of female applicants’ positive-prediction rate to male applicants’ falls below 0.95, an early-warning alert triggers a fairness review, well before the ratio nears the 0.8 (four-fifths) parity rule. The team adjusts decision thresholds to keep the ratio above 0.8 and documents the change in the fairness dashboard.
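A hedged sketch of this kind of quarterly monitoring check (the threshold values, group names, and alert logic are assumptions for illustration, not the university’s actual system):

```python
import numpy as np

ALERT_RATIO = 0.95  # assumed early-warning threshold for this sketch
LEGAL_RATIO = 0.80  # the 0.8 (four-fifths) parity rule

def parity_ratio(y_pred, group, protected="female", reference="male"):
    """Ratio of positive-prediction rates: protected group / reference group."""
    return y_pred[group == protected].mean() / y_pred[group == reference].mean()

def quarterly_check(y_pred, group):
    ratio = parity_ratio(y_pred, group)
    if ratio < LEGAL_RATIO:
        return f"VIOLATION: ratio {ratio:.2f} below the 0.8 rule; adjust thresholds"
    if ratio < ALERT_RATIO:
        return f"ALERT: ratio {ratio:.2f} triggers a fairness review"
    return f"OK: ratio {ratio:.2f}"

# Example run with synthetic predictions
y_pred = np.array([1, 1, 0, 0, 1, 1, 1, 0, 1, 1])
group  = np.array(["female"] * 5 + ["male"] * 5)
print(quarterly_check(y_pred, group))  # ratio 0.6/0.8 = 0.75 -> VIOLATION
```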