Novelty Detection
Techniques for identifying inputs or scenarios that differ significantly from training data, triggering review or safe-mode operation to prevent unexpected failures.
Also called out-of-distribution (OOD) detection, this approach flags anomalous inputs using statistical distances, autoencoder reconstruction errors, or uncertainty estimates (e.g., from Bayesian neural networks). Governance configures thresholds for safe-mode fallbacks, logs novelty events for incident analysis, and regularly updates detection models to track evolving data distributions.
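The statistical-distance variant can be sketched in a few lines. This is a minimal illustration, not any particular vendor's implementation: it scores each input by its Mahalanobis distance from the training distribution, triggers a safe-mode fallback above a governance-configured threshold, and logs the novelty event. The threshold value and feature dimensions are assumptions for the example.

```python
import numpy as np

rng = np.random.default_rng(0)
train = rng.normal(0.0, 1.0, size=(1000, 4))  # in-distribution feature vectors

# Fit the reference distribution from training data.
mean = train.mean(axis=0)
cov_inv = np.linalg.inv(np.cov(train, rowvar=False))

def novelty_score(x):
    """Mahalanobis distance of a feature vector from the training set."""
    d = x - mean
    return float(np.sqrt(d @ cov_inv @ d))

THRESHOLD = 4.0  # governance-configured; tuned on held-out data (assumed value)

def handle(x, event_log):
    """Route an input: safe mode if novel, normal prediction otherwise."""
    score = novelty_score(x)
    if score > THRESHOLD:
        event_log.append({"score": score, "action": "safe_mode"})  # novelty event
        return "safe_mode"
    return "predict"
```

In practice the feature vectors would come from a model's embedding layer rather than raw inputs, and the logged events would feed the incident-analysis process described above.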
A medical-imaging AI flags any scan whose pixel distribution diverges from the training set by more than two standard deviations. When novelty is detected, the system routes the scan to a specialist for manual review and logs the event for later analysis, preventing the model from issuing confident but spurious diagnoses on unusual cases.
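The imaging example above reduces to a simple two-sigma check. The sketch below is hypothetical: it uses mean pixel intensity as the summary statistic (real systems compare richer distributional features) and simulates the training-set statistics rather than loading real scans.

```python
import numpy as np

rng = np.random.default_rng(1)
# Mean pixel intensity of each training scan (simulated for illustration).
train_means = rng.normal(120.0, 10.0, size=500)
mu, sigma = train_means.mean(), train_means.std()

def route(scan):
    """Route a scan: manual review if its mean intensity is novel."""
    if abs(scan.mean() - mu) > 2 * sigma:
        return "manual_review"   # specialist reviews; event logged upstream
    return "auto_diagnose"
```

A typical scan falls within two standard deviations and proceeds automatically; an unusually bright or dark one is diverted to a specialist, which is exactly the safe-mode routing the entry describes.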
