Fairness
Ensuring AI systems produce unbiased, equitable outcomes across different individuals and groups, and mitigating discriminatory impacts.
Definition
A governance principle and technical objective requiring that AI models deliver comparable outcomes for protected and non-protected groups. Fairness is operationalized via statistical criteria (e.g., demographic parity, equal opportunity), bias-mitigation algorithms, and continuous monitoring. It demands stakeholder engagement to define what “equitable” means in context and to validate that fairness interventions do not unduly sacrifice accuracy.
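The statistical criteria mentioned above can be made concrete. The following is a minimal sketch, using synthetic group labels, ground-truth outcomes, and model decisions (all made up for illustration), of how demographic parity and equal opportunity gaps might be computed:

```python
# Sketch: two common group-fairness metrics on synthetic binary predictions.
# All data below is illustrative, not from any real audit.

def demographic_parity_diff(preds, groups):
    """Difference in positive-decision rates between group 1 and group 0."""
    def rate(g):
        members = [p for p, grp in zip(preds, groups) if grp == g]
        return sum(members) / len(members)
    return rate(1) - rate(0)

def equal_opportunity_diff(preds, labels, groups):
    """Difference in true-positive rates (recall among qualified
    candidates) between group 1 and group 0."""
    def tpr(g):
        hits = [p for p, y, grp in zip(preds, labels, groups)
                if grp == g and y == 1]
        return sum(hits) / len(hits)
    return tpr(1) - tpr(0)

groups = [0, 0, 0, 0, 1, 1, 1, 1]   # two demographic groups
labels = [1, 1, 0, 0, 1, 1, 0, 0]   # ground-truth qualified / not qualified
preds  = [1, 0, 0, 0, 1, 1, 1, 0]   # model's binary decisions

print(demographic_parity_diff(preds, groups))          # 0.5: group 1 favored
print(equal_opportunity_diff(preds, labels, groups))   # 0.5: unequal recall
```

A nonzero value on either metric signals a gap; which metric matters depends on the context negotiated with stakeholders, since the two criteria generally cannot be satisfied simultaneously.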
Real-World Example
A hiring AI is audited for gender balance: before mitigation, male candidates had a 25% higher interview-offer rate. The team applies a fairness constraint during retraining to equalize offer probabilities, and post-deployment reports show interview-offer rates within 2% across genders, improving equity without harming predictive accuracy.
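One simple stand-in for the in-training fairness constraint described in this example is post-hoc threshold adjustment: picking a per-group score cutoff so each group receives offers at the same rate. The sketch below assumes synthetic candidate scores and a hypothetical 40% target offer rate:

```python
# Sketch: per-group score thresholds that equalize offer rates.
# Scores and the 40% target are illustrative assumptions, not real data.

def group_threshold(scores, target_rate):
    """Choose a cutoff so roughly target_rate of this group is offered."""
    ranked = sorted(scores, reverse=True)
    k = max(1, round(target_rate * len(scores)))
    return ranked[k - 1]

scores_a = [0.9, 0.8, 0.6, 0.4, 0.3]    # group A candidate scores
scores_b = [0.7, 0.5, 0.45, 0.35, 0.2]  # group B candidate scores
target = 0.4                            # offer interviews to 40% of each group

t_a = group_threshold(scores_a, target)
t_b = group_threshold(scores_b, target)
offers_a = [s >= t_a for s in scores_a]
offers_b = [s >= t_b for s in scores_b]

print(sum(offers_a) / len(offers_a))  # 0.4
print(sum(offers_b) / len(offers_b))  # 0.4
```

Thresholding after training is only one option; in-processing constraints like the one in the example above adjust the model itself, which can preserve more accuracy but requires retraining.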