Discrimination
In AI, refers to unfair treatment of individuals or groups based on biases in data or algorithms, leading to unequal outcomes.
Definition
Occurs when model predictions or decisions systematically disadvantage protected classes (e.g., race, gender, age). Governance requires defining discrimination metrics (e.g., equal opportunity, demographic parity), embedding fairness constraints in training, and auditing deployed models for disparate impact, with remediation plans triggered when thresholds are violated.
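The two metrics named above can be sketched as simple group comparisons. This is a minimal illustration, not a production audit; the toy labels, predictions, and group assignments are assumptions invented for the example.

```python
# Minimal sketch of two discrimination metrics: demographic parity
# (selection rates should match across groups) and equal opportunity
# (true-positive rates should match across groups).
# All data below is illustrative, not from any real system.

def selection_rate(y_pred, groups, g):
    """Fraction of group g receiving the favorable prediction (1)."""
    preds = [p for p, grp in zip(y_pred, groups) if grp == g]
    return sum(preds) / len(preds)

def true_positive_rate(y_true, y_pred, groups, g):
    """P(prediction = 1 | actual = 1) within group g."""
    hits = [p for t, p, grp in zip(y_true, y_pred, groups)
            if grp == g and t == 1]
    return sum(hits) / len(hits)

# Toy audit data: 1 = favorable outcome, groups "a" and "b".
y_true = [1, 1, 0, 1, 0, 1, 1, 0]
y_pred = [1, 0, 0, 1, 1, 1, 1, 0]
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]

# Demographic parity gap: difference in selection rates.
dp_gap = abs(selection_rate(y_pred, groups, "a")
             - selection_rate(y_pred, groups, "b"))

# Equal opportunity gap: difference in true-positive rates.
eo_gap = abs(true_positive_rate(y_true, y_pred, groups, "a")
             - true_positive_rate(y_true, y_pred, groups, "b"))

print(f"demographic parity gap: {dp_gap:.2f}")
print(f"equal opportunity gap:  {eo_gap:.2f}")
```

A governance policy would compare these gaps against pre-agreed thresholds and trigger remediation when they are exceeded.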
Real-World Example
A university’s admissions-prediction tool yields lower admission probabilities for first-generation students. A fairness audit reveals features correlated with legacy applicants driving the disparity. The admissions office removes those proxies and retrains the model to achieve equal true-positive rates across student backgrounds.
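An audit like the one described often reduces to a disparate-impact ratio check. The sketch below is a hypothetical illustration: the admission rates and the 0.8 ("four-fifths") review threshold are assumptions, not figures from the example.

```python
# Illustrative disparate-impact check. The rates and the 0.8
# threshold (the "four-fifths rule" commonly used in US employment
# contexts) are assumptions for demonstration only.

def disparate_impact_ratio(rate_disadvantaged, rate_advantaged):
    """Ratio of favorable-outcome rates between groups."""
    return rate_disadvantaged / rate_advantaged

# Hypothetical pre-remediation admission probabilities by background.
first_gen_rate = 0.30
legacy_correlated_rate = 0.50

ratio = disparate_impact_ratio(first_gen_rate, legacy_correlated_rate)
needs_remediation = ratio < 0.8  # below threshold -> audit flags the model

print(f"disparate impact ratio: {ratio:.2f}")
print(f"remediation required:   {needs_remediation}")
```

After removing the proxy features and retraining, the office would rerun this check (and the true-positive-rate comparison) to confirm the gap has closed.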