Confidence Interval
A range of values, derived from sample statistics, that is likely to contain the true value of an unknown population parameter; used in AI to express the uncertainty around reported metrics.
Definition
A confidence interval quantifies the statistical uncertainty around model evaluation metrics (e.g., accuracy, mean error). Reporting confidence intervals rather than bare point estimates gives stakeholders a more realistic view of model reliability and supports risk-based decisions. Governance policies often mandate CI reporting for all key performance indicators used in production.
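As a minimal sketch of how such an interval can be computed for a proportion-style metric like accuracy, the snippet below uses the Wilson score interval with hypothetical evaluation counts (912 correct out of 950); the function name and numbers are illustrative, not from any specific library or policy.

```python
import math

def wilson_interval(successes: int, n: int, confidence: float = 0.95) -> tuple[float, float]:
    """Wilson score interval for a proportion such as accuracy or precision."""
    # Two-sided z-scores for common confidence levels; 1.96 corresponds to 95%.
    z = {0.90: 1.645, 0.95: 1.960, 0.99: 2.576}[confidence]
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half_width = z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half_width, center + half_width

# Hypothetical evaluation: 912 correct predictions out of 950 test cases.
low, high = wilson_interval(912, 950)
print(f"Accuracy: {912/950:.3f}, 95% CI: [{low:.3f}, {high:.3f}]")
```

Reporting the resulting interval alongside the point estimate makes it clear how much the metric could plausibly vary on a different sample of the same size.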
Real-World Example
A credit-card fraud model reports a 99.2% precision with a 95% confidence interval of [98.5%, 99.6%] based on cross-validation. Compliance teams use that CI to set risk thresholds, ensuring decisions account for metric uncertainty and avoid over-reliance on potentially optimistic point estimates.
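One common way to estimate an interval like the one above is the percentile bootstrap: resample the evaluation set with replacement, recompute the metric each time, and take the empirical quantiles. The sketch below applies this to precision with hypothetical labels and predictions; it is an illustration of the bootstrap idea, not the cross-validation procedure the example describes.

```python
import numpy as np

def bootstrap_precision_ci(y_true, y_pred, n_boot=2000, alpha=0.05, seed=0):
    """Percentile bootstrap confidence interval for precision = TP / (TP + FP)."""
    rng = np.random.default_rng(seed)
    y_true, y_pred = np.asarray(y_true), np.asarray(y_pred)
    n = len(y_true)
    stats = []
    for _ in range(n_boot):
        idx = rng.integers(0, n, n)          # resample indices with replacement
        t, p = y_true[idx], y_pred[idx]
        tp = np.sum((p == 1) & (t == 1))
        fp = np.sum((p == 1) & (t == 0))
        if tp + fp == 0:                     # skip resamples with no positive predictions
            continue
        stats.append(tp / (tp + fp))
    lower = np.percentile(stats, 100 * alpha / 2)
    upper = np.percentile(stats, 100 * (1 - alpha / 2))
    return lower, upper

# Hypothetical fraud-detection test labels (1 = fraud) and model predictions.
y_true = np.array([1, 0, 1, 1, 0, 0, 1, 0, 1, 1] * 100)
y_pred = np.array([1, 1, 1, 0, 0, 0, 1, 0, 1, 1] * 100)
low, high = bootstrap_precision_ci(y_true, y_pred)
print(f"95% bootstrap CI for precision: [{low:.3f}, {high:.3f}]")
```

The width of the interval shrinks as the evaluation set grows, which is why a CI computed on a small validation fold can be much wider than one computed on the full test set.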