Overfitting
A modeling failure in which an AI system learns noise or idiosyncrasies in the training data, reducing its ability to generalize to new, unseen data.
Definition
Overfitting occurs when excess model capacity (too many parameters relative to the training data) allows the model to memorize individual training examples rather than learn general patterns. The telltale symptom is high training accuracy paired with low validation/test performance. Governance practices include regular cross-validation, monitoring train/validation loss divergence, applying regularization techniques (dropout, weight decay), and defining an acceptable generalization gap that a model must meet before entering production.
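The "acceptable generalization gap" gate described above can be sketched as a simple pre-production check. This is a minimal illustration in plain Python; the function names, the 0.05 threshold, and the accuracy figures are hypothetical, not a standard API.

```python
def generalization_gap(train_acc: float, val_acc: float) -> float:
    """Difference between training and validation accuracy.

    A large positive gap is the classic overfitting signal:
    the model performs far better on data it has memorized.
    """
    return train_acc - val_acc


def passes_gate(train_acc: float, val_acc: float, max_gap: float = 0.05) -> bool:
    """Allow promotion to production only if the train/validation
    gap stays within the tolerance agreed on before training."""
    return generalization_gap(train_acc, val_acc) <= max_gap


# An overfit model: near-perfect on training data, poor on held-out data.
print(passes_gate(0.99, 0.70))  # gap of 0.29 exceeds 0.05 -> blocked
# A well-generalizing model: small gap -> cleared for production.
print(passes_gate(0.91, 0.90))
```

In practice the threshold would be set per use case and checked continuously, not just once at release time.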
Real-World Example
A self-driving AI shows 99% detection accuracy in simulation but only 70% on real-world test drives. Engineers diagnose overfitting, add dropout layers, augment the training data with varied lighting conditions, and retrain, achieving a balanced 90% accuracy in both simulated and real-world tests before deployment.
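The dropout remediation mentioned in this example can be illustrated with a framework-free sketch of inverted dropout, the variant used in most modern libraries. The function and its signature are illustrative only, assuming a flat list of activations rather than real network tensors.

```python
import random


def dropout(activations, p=0.5, training=True, seed=None):
    """Inverted dropout: during training, zero each activation with
    probability p and rescale the survivors by 1/(1-p) so the
    expected total activation is unchanged. At inference time the
    input passes through untouched, so no rescaling is needed."""
    if not training or p == 0.0:
        return list(activations)
    keep = 1.0 - p
    rng = random.Random(seed)  # seeded here only to make the sketch reproducible
    return [a / keep if rng.random() < keep else 0.0 for a in activations]


acts = [1.0, 2.0, 3.0, 4.0]
print(dropout(acts, p=0.5, training=False))  # inference: unchanged
print(dropout(acts, p=0.5, training=True, seed=0))  # training: some zeroed, rest doubled
```

By randomly silencing units, dropout prevents the network from relying on any single co-adapted feature, which is exactly the memorization behavior that causes the simulation-to-road accuracy gap in the example.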