Underfitting

A modeling issue where an AI system is too simple to capture underlying data patterns, resulting in poor performance on both training and new data.

Definition

Underfitting occurs when insufficient model complexity (e.g., too few parameters, an overly simple architecture, or overly strong regularization) prevents the model from learning the true relationships in the data, producing high error on both the training and validation sets. Governance practices include monitoring both training and validation loss, setting acceptable error thresholds, and iteratively increasing model capacity or feature complexity until underfitting is resolved, while ensuring changes are tracked and reviewed through version control and validation gates.
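The diagnostic signature described above, high error on both training and validation data, can be demonstrated with a minimal sketch. The synthetic sine-wave dataset, the polynomial degrees, and the even/odd split below are all illustrative assumptions, not from the source; the point is only that a model with too little capacity scores poorly on both sets, while a higher-capacity model drives both errors toward the noise floor.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic nonlinear data: y = sin(x) plus Gaussian noise (assumed dataset)
x = np.linspace(0, 6, 200)
y = np.sin(x) + rng.normal(0, 0.1, size=x.shape)

# Simple even/odd split into training and hold-out sets
x_train, x_hold = x[::2], x[1::2]
y_train, y_hold = y[::2], y[1::2]

def fit_and_score(degree):
    """Fit a polynomial of the given degree; return (train RMSE, hold-out RMSE)."""
    coeffs = np.polyfit(x_train, y_train, degree)
    rmse = lambda pred, true: float(np.sqrt(np.mean((pred - true) ** 2)))
    return (rmse(np.polyval(coeffs, x_train), y_train),
            rmse(np.polyval(coeffs, x_hold), y_hold))

# Degree 1 (a straight line) underfits: train and hold-out error are
# both high and roughly equal
print(fit_and_score(1))

# Degree 5 has enough capacity for this curve: both errors drop toward
# the noise level (sigma = 0.1)
print(fit_and_score(5))
```

Because the degree-1 errors are high and nearly identical across the two sets, adding more training data would not help; only increasing capacity does, which is the governance signal to iterate on model complexity.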

Real-World Example

A demand-forecasting model built as a simple linear regression shows roughly 10% error on both the historical training data and the hold-out data; the similarly high error on both sets indicates underfitting rather than overfitting. The team iteratively adds polynomial features and then switches to a random-forest model, reducing hold-out error to 3% before deployment.
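The fix described in this example can be sketched with scikit-learn. The demand data below is synthetic and hypothetical (a nonlinear function of a price feature and a seasonal feature), and the specific models and error values are illustrative assumptions; the sketch only shows the shape of the workflow, comparing a linear baseline against a random forest on the same split.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)

# Hypothetical demand data: demand depends nonlinearly on price and a
# seasonal (week-of-year) feature, both scaled to [0, 1]
n = 1000
X = rng.uniform(0, 1, size=(n, 2))  # columns: price, season
y = (100
     + 50 * np.sin(2 * np.pi * X[:, 1])   # seasonal cycle
     + 30 / (0.5 + X[:, 0])               # price elasticity
     + rng.normal(0, 2, n))               # noise

X_train, X_hold, y_train, y_hold = train_test_split(X, y, random_state=0)

def score(model):
    """Fit the model; return (train RMSE, hold-out RMSE)."""
    model.fit(X_train, y_train)
    rmse = lambda A, t: float(np.sqrt(np.mean((model.predict(A) - t) ** 2)))
    return rmse(X_train, y_train), rmse(X_hold, y_hold)

# Linear regression underfits: it cannot represent the seasonal cycle,
# so error is high on training and hold-out data alike
print(score(LinearRegression()))

# A random forest captures the nonlinearity, lowering both errors
print(score(RandomForestRegressor(n_estimators=200, random_state=0)))
```

In a governed workflow, the model swap above would land as a reviewed, version-controlled change, with the before/after training and hold-out errors recorded against the agreed thresholds.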