Bias Audit
An evaluation process to detect and mitigate biases in AI systems, ensuring fairness and compliance with ethical standards.
Definition
A structured review, often conducted by an independent team or external firm, that examines every stage of the AI lifecycle (data collection, preprocessing, modeling, evaluation) for bias. Auditors apply statistical tests (e.g., the disparate impact ratio), model explainability tools, and analyses of outcomes across user groups. The process concludes with concrete remediation recommendations and updates to governance policies.
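As a concrete illustration of the statistical side: the disparate impact test compares favorable-outcome rates between a protected group and everyone else, and a ratio below 0.8 (the EEOC "four-fifths rule") is a common red flag. Below is a minimal sketch of that calculation in Python; the function name and the toy data are illustrative assumptions, not part of any specific audit toolkit.

```python
import numpy as np

def disparate_impact_ratio(decisions, groups, protected, favorable=1):
    """Ratio of favorable-outcome rates: protected group vs. everyone else.

    The "four-fifths rule" heuristic flags ratios below 0.8 as
    potential adverse impact.
    """
    decisions = np.asarray(decisions)
    groups = np.asarray(groups)
    rate_protected = np.mean(decisions[groups == protected] == favorable)
    rate_others = np.mean(decisions[groups != protected] == favorable)
    return rate_protected / rate_others

# Hypothetical loan decisions: 1 = approved, 0 = denied
decisions = [1, 0, 0, 1, 1, 0, 1, 1, 0, 1]
groups = ["A", "A", "A", "A", "B", "B", "B", "B", "B", "B"]
print(disparate_impact_ratio(decisions, groups, protected="A"))
# 0.75 — below the 0.8 threshold, so an auditor would flag group A
```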
Real-World Example
A bank commissions a bias audit of its credit-scoring AI. Auditors sample loan decisions by demographic group and find that applicants from certain ZIP codes are disproportionately denied. They recommend data augmentation and new fairness constraints in the scoring algorithm, after which the bank monitors denial rates monthly to track progress.
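The monthly monitoring step might look like the following sketch, which tabulates denial rates per group over time using pandas; the column names (month, zip_group, denied) and the sample data are hypothetical.

```python
import pandas as pd

# Hypothetical loan records; in practice these would come from the
# bank's decision logs.
loans = pd.DataFrame({
    "month":     ["2024-01"] * 4 + ["2024-02"] * 4,
    "zip_group": ["urban", "urban", "rural", "rural"] * 2,
    "denied":    [0, 1, 1, 1, 0, 0, 1, 0],
})

# Mean of the 0/1 denied flag = denial rate; one row per month,
# one column per ZIP-code group, so trends are easy to scan.
denial_rates = (
    loans.groupby(["month", "zip_group"])["denied"]
         .mean()
         .unstack("zip_group")
)
print(denial_rates)
```

Tracking the same table each month makes it easy to see whether the remediation (data augmentation plus fairness constraints) actually narrows the gap between groups rather than merely shifting it.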