Novelty Detection
Techniques for identifying inputs or scenarios that differ significantly from training data, triggering review or safe-mode operation to prevent unexpected failures.
Definition
Also called out-of-distribution detection, novelty detection uses statistical distance measures, autoencoder reconstruction errors, or uncertainty estimates (e.g., from Bayesian neural networks) to flag anomalous inputs. Governance configures the thresholds that trigger safe-mode fallbacks, logs novelty events for incident analysis, and regularly updates detection models to reflect evolving data distributions.
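A minimal sketch of the statistical-distance approach, using only the standard library: per-feature means and standard deviations are estimated from training data, and an input is flagged as novel when any feature's z-score exceeds a configurable threshold. The function names and the threshold default are illustrative assumptions, not part of any specific library.

```python
import statistics

def fit_novelty_detector(train_samples):
    """Estimate per-feature (mean, std) from training data (hypothetical helper)."""
    cols = list(zip(*train_samples))
    return [(statistics.fmean(c), statistics.stdev(c)) for c in cols]

def is_novel(sample, params, z_threshold=2.0):
    """Flag the sample as novel if any feature's z-score exceeds the threshold."""
    for x, (mu, sigma) in zip(sample, params):
        if sigma > 0 and abs(x - mu) / sigma > z_threshold:
            return True
    return False

# Toy training set with two features; real systems would fit on far more data.
train = [[1.0, 10.0], [1.2, 9.8], [0.9, 10.2], [1.1, 10.1]]
params = fit_novelty_detector(train)
print(is_novel([1.05, 10.0], params))  # close to training data -> False
print(is_novel([5.0, 10.0], params))   # far from training mean -> True
```

In production, the `z_threshold` value would be a governance-configured parameter, and a `True` result would trigger the safe-mode fallback and event logging described above.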
Real-World Example
A medical-imaging AI flags any scan whose pixel distribution diverges from the training set by more than two standard deviations. When novelty is detected, the system routes the scan to a specialist for manual review and logs the event for later analysis, preventing the model from making confident but spurious diagnoses on unusual cases.
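The routing logic in this example can be sketched as follows. For simplicity the scan's pixel distribution is summarized by its mean intensity and compared against assumed training statistics; the function name, the summary statistic, and the reference values are illustrative assumptions, not the actual system.

```python
import logging
import statistics

logging.basicConfig(level=logging.INFO)

def triage_scan(pixels, train_mean, train_std, z_limit=2.0):
    """Route a scan to model inference or specialist review.

    Summarizes the scan by mean pixel intensity -- a simplified stand-in
    for a full pixel-distribution comparison.
    """
    score = abs(statistics.fmean(pixels) - train_mean) / train_std
    if score > z_limit:
        # Novelty event: log for incident analysis and route for manual review.
        logging.info("novelty event: z=%.2f, routed to specialist", score)
        return "specialist_review"
    return "model_inference"

# Assumed training statistics for a toy grayscale-intensity example.
print(triage_scan([118, 122, 120, 121], train_mean=120.0, train_std=5.0))
# -> model_inference
print(triage_scan([240, 250, 245, 248], train_mean=120.0, train_std=5.0))
# -> specialist_review
```

The two-standard-deviation limit corresponds to the `z_limit=2.0` default; tightening or loosening it trades specialist workload against the risk of the model silently handling out-of-distribution scans.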