Underfitting
A modeling issue where an AI system is too simple to capture underlying data patterns, resulting in poor performance on both training and new data.
Underfitting occurs when insufficient model capacity (e.g., too few parameters, or overly strong regularization) prevents the model from learning the true relationships in the data, producing high error on both the training and validation sets. Governance practices include monitoring training and validation loss together, setting acceptable error thresholds, and iteratively increasing model capacity or feature complexity until underfitting is resolved, while ensuring each change is tracked and reviewed through version control and validation gates.
Example: a demand-forecasting model built as a simple linear regression shows 10% error on both historical training data and hold-out data. Because the errors are high and nearly equal, the problem is underfitting rather than overfitting. The team iteratively adds polynomial features and switches to a random-forest model, reducing error to 3% before deployment.
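The diagnostic pattern described above, high and nearly equal error on training and hold-out data that drops once model capacity is increased, can be sketched with a toy example. Everything here (the synthetic linear data, the constant-baseline model, the hand-rolled least-squares fit) is illustrative and not taken from the demand-forecasting scenario above:

```python
# Sketch: a constant-prediction model underfits simple linear data,
# showing high error on BOTH the training and hold-out sets; a model
# with adequate capacity (a fitted line) drives both errors to ~0.

def mse(preds, ys):
    """Mean squared error between predictions and targets."""
    return sum((p - y) ** 2 for p, y in zip(preds, ys)) / len(ys)

# Synthetic data following y = 3x + 2 (illustrative, noise-free).
train_x = list(range(10))        # training inputs: 0..9
test_x = list(range(10, 15))     # hold-out inputs: 10..14
train_y = [3 * x + 2 for x in train_x]
test_y = [3 * x + 2 for x in test_x]

# Model 1: predict the training mean everywhere (too simple -> underfits).
mean_y = sum(train_y) / len(train_y)
const_train_err = mse([mean_y] * len(train_y), train_y)
const_test_err = mse([mean_y] * len(test_y), test_y)

# Model 2: ordinary least-squares line (enough capacity for this data).
n = len(train_x)
mx = sum(train_x) / n
my = sum(train_y) / n
slope = (sum((x - mx) * (y - my) for x, y in zip(train_x, train_y))
         / sum((x - mx) ** 2 for x in train_x))
intercept = my - slope * mx
line_train_err = mse([slope * x + intercept for x in train_x], train_y)
line_test_err = mse([slope * x + intercept for x in test_x], test_y)

# Underfitting signature: the constant model's train and test errors are
# both large; adding capacity fixes both at once.
print(f"constant model: train MSE={const_train_err:.2f}, test MSE={const_test_err:.2f}")
print(f"linear model:   train MSE={line_train_err:.2f}, test MSE={line_test_err:.2f}")
```

The governance-relevant point is the comparison: when training error is already high, collecting more data or regularizing harder will not help, and the tracked remediation step is to increase capacity, as the constant-to-line switch does here.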
