Variance Monitoring
Tracking fluctuations in AI model outputs or performance metrics over time to detect drift and flag potential degradation or risk.
Definition
Variance monitoring involves calculating the statistical variance of key metrics (prediction distributions, feature importances, performance scores) and comparing it to rolling baselines. Significant deviations trigger alerts for deeper investigation. Governance defines acceptable variance bands, monitoring frequencies, and automated response procedures (data-pipeline checks, model retraining) to maintain model stability and reliability.
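As a minimal sketch of this pattern, the Python snippet below compares the variance of the most recent window of a metric against a rolling baseline and raises an alert when a governance-defined band is exceeded. The function name, window size, and band multiplier are illustrative assumptions, not a standard API.

```python
import numpy as np

def variance_alert(metric_history, window=7, band=2.0):
    """Flag when recent variance drifts outside an acceptable band.

    metric_history : sequence of a key metric (e.g. daily performance scores).
    window         : size of the rolling window treated as "recent".
    band           : allowed ratio of recent variance to baseline variance
                     (the governance-defined acceptable variance band).
    """
    history = np.asarray(metric_history, dtype=float)
    baseline_var = np.var(history[:-window])  # rolling baseline: everything before the latest window
    recent_var = np.var(history[-window:])    # variance of the most recent window
    return recent_var > band * baseline_var, recent_var, baseline_var

# Hypothetical daily accuracy scores where the last week becomes noisy.
scores = [0.91, 0.90, 0.92, 0.91, 0.90, 0.91, 0.92, 0.90,
          0.95, 0.84, 0.97, 0.82, 0.96, 0.85, 0.94]
alerted, recent, base = variance_alert(scores)
if alerted:
    print(f"Variance alert: recent={recent:.5f} vs baseline={base:.5f}")
```

In a production system the alert would feed the automated response procedures mentioned above (data-pipeline checks, a retraining job) rather than a print statement.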
Real-World Example
An online-advertising AI tracks weekly variance in click-through distributions across user segments. When the variance exceeds twice its historical standard deviation, an alert prompts the data team to inspect for recent code or data changes, preventing unnoticed drift from affecting campaign performance.
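A simplified illustration of that setup, using made-up click-through figures and reading the trigger as a two-sigma rule on the history of weekly cross-segment variances:

```python
import numpy as np

# Hypothetical weekly click-through rates (rows = weeks, columns = user
# segments); all numbers are illustrative, not from a real campaign.
weekly_ctr = np.array([
    [0.031, 0.028, 0.030, 0.029],
    [0.030, 0.029, 0.031, 0.030],
    [0.032, 0.030, 0.029, 0.031],
    [0.029, 0.031, 0.030, 0.028],
    [0.045, 0.019, 0.052, 0.015],  # latest week: segments diverge sharply
])

# Cross-segment variance for each week.
weekly_var = weekly_ctr.var(axis=1)
history, latest = weekly_var[:-1], weekly_var[-1]

# Alert when the latest weekly variance exceeds the historical mean by
# more than two historical standard deviations.
threshold = history.mean() + 2 * history.std()
if latest > threshold:
    print(f"Drift alert: weekly CTR variance {latest:.6f} > {threshold:.6f}")
```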