AI Bias

Systematic errors in AI outputs resulting from prejudiced training data or flawed algorithms, leading to unfair outcomes.

Definition

Systematic deviations in AI outputs that unfairly favor or disadvantage particular groups. Such bias typically stems from skewed datasets, flawed labeling, or mis-specified objectives, and it must be detected, measured, and mitigated.
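
As a minimal sketch of the measurement step, the snippet below compares a classifier's error rate across groups and reports the largest gap. The group labels, function names, and gap metric are illustrative assumptions, not a standard API.

```python
import numpy as np

def group_error_rates(y_true, y_pred, groups):
    """Return the classifier's error rate separately for each group label."""
    y_true, y_pred, groups = map(np.asarray, (y_true, y_pred, groups))
    return {
        g: float(np.mean(y_true[groups == g] != y_pred[groups == g]))
        for g in np.unique(groups)
    }

def max_error_gap(rates):
    """Largest difference between any two groups' error rates; 0.0 means parity."""
    values = list(rates.values())
    return max(values) - min(values)

# Hypothetical predictions with a per-example group attribute.
y_true = [1, 0, 1, 1, 0, 1, 0, 0]
y_pred = [1, 0, 0, 1, 0, 0, 0, 1]
groups = ["a", "a", "b", "b", "a", "b", "a", "b"]

rates = group_error_rates(y_true, y_pred, groups)
print(rates)                 # per-group error rates
print(max_error_gap(rates))  # disparity to monitor and reduce
```

A gap near zero suggests comparable performance across groups; a large gap is a signal to investigate the data or model before deployment.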

Real-World Example

A facial-recognition system trained mostly on light-skinned faces shows higher error rates for darker-skinned individuals. The vendor rebalances its training dataset and deploys ongoing bias-monitoring dashboards to ensure equitable performance across all skin tones.
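
One way to picture the rebalancing step is simple oversampling of under-represented groups before retraining. The sketch below assumes an in-memory dataset with a hypothetical skin_tone attribute; it is an illustration of the general technique, not the vendor's actual pipeline.

```python
import random
from collections import defaultdict

def oversample_to_balance(examples, group_key, seed=0):
    """Duplicate examples from smaller groups until every group matches the largest."""
    rng = random.Random(seed)
    by_group = defaultdict(list)
    for ex in examples:
        by_group[ex[group_key]].append(ex)
    target = max(len(items) for items in by_group.values())
    balanced = []
    for items in by_group.values():
        balanced.extend(items)
        # Top up with random duplicates until this group reaches the target size.
        balanced.extend(rng.choices(items, k=target - len(items)))
    rng.shuffle(balanced)
    return balanced

# Hypothetical dataset skewed toward one skin-tone bucket.
data = [{"image": f"img_{i}.png", "skin_tone": "light"} for i in range(90)] + \
       [{"image": f"img_{i}.png", "skin_tone": "dark"} for i in range(90, 100)]

balanced = oversample_to_balance(data, group_key="skin_tone")
counts = {g: sum(1 for ex in balanced if ex["skin_tone"] == g) for g in ("light", "dark")}
print(counts)  # {'light': 90, 'dark': 90}
```

In practice, rebalancing is usually combined with collecting genuinely new data and with ongoing monitoring, since duplicated examples cannot add information the original dataset lacks.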