Hallucination

When a generative AI system produces incorrect or fabricated information that appears plausible but is not supported by its training data or any factual source.

Definition

A failure mode of large generative models (text, image, audio) in which the system confidently invents details (facts, quotes, references) that sound coherent yet are false. Hallucinations arise from the model's probabilistic sampling and its lack of grounding in verified sources. Governance approaches include grounding outputs in trusted knowledge sources, retrieval-augmented generation, calibrated confidence scores, and post-generation fact-checking layers that catch fabrications before release.
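
A minimal sketch of one such safeguard, a post-generation fact-checking layer that holds unsupported claims for review before release. The sentence-level claim splitting, the TRUSTED_FACTS store, and the check_grounding function are illustrative assumptions rather than any specific vendor's API; a production system would query a vetted retrieval index or database instead of an in-memory set.

```python
import re

# Illustrative trusted knowledge store; a real deployment would query a
# vetted retrieval index or curated database, not an in-memory set.
TRUSTED_FACTS = {
    "water boils at 100 degrees celsius at sea level",
}

def extract_claims(text: str) -> list[str]:
    # Naive claim extraction: treat each sentence as one checkable claim.
    return [s.strip() for s in re.split(r"[.!?]", text) if s.strip()]

def check_grounding(text: str) -> list[dict]:
    # Mark a claim as supported only if it matches the trusted store;
    # everything else is held for human review before release.
    report = []
    for claim in extract_claims(text):
        supported = claim.lower() in TRUSTED_FACTS
        report.append({"claim": claim, "supported": supported})
    return report

if __name__ == "__main__":
    draft = ("Water boils at 100 degrees Celsius at sea level. "
             "The statute was upheld in a 2023 ruling by the World Court of Commerce.")
    for record in check_grounding(draft):
        if record["supported"]:
            print("Supported:", record["claim"])
        else:
            print("UNVERIFIED, hold for review:", record["claim"])
```

In practice the exact-match lookup would be replaced by retrieval plus an entailment or verification model, but the governance pattern is the same: nothing unverified leaves the pipeline without a flag.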

Real-World Example

A legal-tech chatbot invents the case citation “Smith v. United Republic, 2021” when summarizing contract law. The firm integrates a citation-check service: after generation, the chatbot cross-references each cited case against an authoritative database and flags any unverified citations for human review, preventing reliance on fabricated precedents.
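
A minimal sketch of that citation-check step, assuming a simple regex for case citations and a small stand-in for the authoritative database. The names CITATION_PATTERN, VERIFIED_CASES, and flag_unverified_citations are hypothetical; a real service would query a legal research API rather than a hard-coded set.

```python
import re

# Stand-in for the authoritative case database (real citations only);
# the firm's pipeline would query a legal research service instead.
VERIFIED_CASES = {
    "brown v. board of education, 1954",
    "marbury v. madison, 1803",
}

# Naive pattern for "<Party> v. <Party>, <year>" citations; real citation
# formats are far more varied and would need a proper citation parser.
CITATION_PATTERN = re.compile(r"[A-Z][A-Za-z]+ v\. [A-Z][A-Za-z ]+, \d{4}")

def flag_unverified_citations(answer: str) -> list[str]:
    # Return citations in the generated answer that are absent from the
    # authoritative database, so they can be routed to human review.
    cited = CITATION_PATTERN.findall(answer)
    return [c for c in cited if c.lower() not in VERIFIED_CASES]

if __name__ == "__main__":
    answer = ("Under Smith v. United Republic, 2021, ambiguous terms are construed "
              "against the drafter, consistent with Marbury v. Madison, 1803.")
    for citation in flag_unverified_citations(answer):
        print("Unverified citation, hold for human review:", citation)
```

Run against the example answer, the sketch flags “Smith v. United Republic, 2021” while letting the verified Marbury v. Madison citation pass, which is exactly the behavior the firm relies on to keep bogus precedents out of client-facing summaries.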