Bias Amplification

The phenomenon in which an AI system exacerbates biases already present in its training data, producing increasingly skewed outcomes over time.

Definition

A feedback loop in which a model trained on biased data makes predictions that reinforce those biases in new data, for example by preferentially selecting or weighting certain outcomes, thereby magnifying the original inequities. Detecting amplification requires longitudinal audits, and mitigation may involve data-augmentation strategies that dampen the feedback cycle.
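
A minimal sketch of such a loop, in Python. The two groups, the 55/45 starting split, and the squared-weight selection rule are illustrative assumptions standing in for "preferential selection"; the point is only that a small initial skew compounds once the model's own picks are fed back as training data.

    import random

    def simulate_feedback(rounds=10, seed=0):
        """Toy feedback loop: a selector retrained on its own past picks drifts
        toward whichever group was slightly over-represented at the start."""
        rng = random.Random(seed)
        # Historical "positive" examples: group A starts with a small 55/45 edge.
        history = ["A"] * 55 + ["B"] * 45
        shares = []
        for _ in range(rounds):
            # "Model": estimate each group's prior from the current training data.
            p_a = history.count("A") / len(history)
            # Preferential selection: squaring the prior over-weights the majority
            # group relative to its true share (this is the amplification step).
            picks = rng.choices(["A", "B"], weights=[p_a ** 2, (1 - p_a) ** 2], k=100)
            # Selected items flow back into the training data (the feedback step).
            history.extend(picks)
            shares.append(round(history.count("A") / len(history), 3))
        return shares

    print(simulate_feedback())  # group A's share creeps upward round after round

Running the sketch shows the share of group A rising each round even though nothing about the underlying population changed, which is the kind of drift a longitudinal audit is meant to catch.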

Real-World Example

A news-recommendation bot promotes stories similar to those users click. If it initially surfaces predominantly political content for a subgroup, those users click more political articles, reinforcing the bot's estimate that politics is their only interest. Over time, the bot amplifies this narrow focus. The team mitigates the loop by adding diversity constraints to the recommendation logic, as sketched below.
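
One way such a diversity constraint might look in code. The candidate schema (item_id, topic, score), the 40% cap, and the greedy re-ranking rule are hypothetical choices for illustration, not the team's actual system.

    def rerank_with_diversity(candidates, k=10, max_share=0.4):
        """Greedy re-ranking that caps how many of the top-k slots any single
        topic may occupy. `candidates` is a list of (item_id, topic, score)
        tuples; the schema and the 40% cap are illustrative assumptions."""
        cap = max(1, int(k * max_share))
        ranked = sorted(candidates, key=lambda c: c[2], reverse=True)
        chosen, per_topic = [], {}
        for item_id, topic, score in ranked:
            if per_topic.get(topic, 0) >= cap:
                continue  # this topic already holds its quota; keep looking
            chosen.append(item_id)
            per_topic[topic] = per_topic.get(topic, 0) + 1
            if len(chosen) == k:
                break
        return chosen

    # Example: politics dominates by raw score, but at most 4 of 10 slots get it.
    feed = [(f"pol-{i}", "politics", 0.9 - i * 0.01) for i in range(8)] + \
           [(f"sci-{i}", "science", 0.5 - i * 0.01) for i in range(5)] + \
           [(f"art-{i}", "arts", 0.4 - i * 0.01) for i in range(5)]
    print(rerank_with_diversity(feed))

The cap forces lower-scoring science and arts items into the list, so clicks on those items can start correcting the bot's skewed picture of the user's interests rather than reinforcing it.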