
Envisioning is an emerging technology research institute and advisory.


Hallucination

When AI models confidently generate plausible but factually incorrect or fabricated outputs.

Year: 2020 · Generality: 794

Hallucination refers to the tendency of generative AI models—particularly large language models and image synthesis systems—to produce outputs that are fluent and confident in appearance but factually incorrect, fabricated, or unsupported by the input or any verifiable reality. A language model might cite a nonexistent research paper with a convincing title and plausible author names, or describe a historical event with invented details. An image model might render anatomically impossible structures that nonetheless look photorealistic. The defining characteristic is the gap between surface plausibility and actual accuracy.

The phenomenon arises from how these models are trained. Rather than storing and retrieving facts, generative models learn statistical patterns over vast datasets and produce outputs by predicting likely continuations or completions. This process optimizes for coherence and fluency, not factual grounding. When a model encounters a query that falls outside its reliable knowledge—or when it must interpolate between learned patterns—it can generate confident-sounding content that has no basis in truth. Retrieval-augmented generation (RAG), fine-tuning on curated data, and reinforcement learning from human feedback (RLHF) are among the techniques researchers use to reduce hallucination rates, though none eliminates the problem entirely.
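The retrieval-grounding idea behind RAG can be illustrated with a minimal sketch. This is a toy, assuming a keyword-overlap retriever, a hand-built knowledge base, and a prompt-construction step in place of an actual model call; production systems use dense vector search and a real language model:

```python
# Toy sketch of retrieval-augmented generation (RAG): ground the prompt in
# retrieved passages so the model paraphrases sources instead of inventing facts.
# KNOWLEDGE_BASE, retrieve, and build_grounded_prompt are illustrative names,
# not part of any real library.

KNOWLEDGE_BASE = [
    "The Eiffel Tower was completed in 1889 for the Paris World's Fair.",
    "GPT-3 was released by OpenAI in 2020 with 175 billion parameters.",
    "Photosynthesis converts light energy into chemical energy in plants.",
]

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank passages by shared-word count with the query (toy term overlap)."""
    q_words = set(query.lower().split())
    scored = sorted(corpus, key=lambda p: -len(q_words & set(p.lower().split())))
    return scored[:k]

def build_grounded_prompt(query: str) -> str:
    """Prepend retrieved evidence and instruct the model to answer from it only."""
    context = "\n".join(f"- {p}" for p in retrieve(query, KNOWLEDGE_BASE))
    return (
        "Answer using ONLY the context below. "
        "If the context is insufficient, say you do not know.\n"
        f"Context:\n{context}\n"
        f"Question: {query}"
    )

print(build_grounded_prompt("When was GPT-3 released?"))
```

The abstention instruction in the prompt matters as much as the retrieval: without it, a model can still hallucinate beyond the supplied context.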

Hallucination became a central concern as large language models like GPT-3 and subsequent systems were deployed in high-stakes domains including medicine, law, journalism, and software development. In these contexts, a confidently stated falsehood can cause real harm—a fabricated legal citation submitted in court, an incorrect drug dosage suggested to a clinician, or a nonexistent software package recommended in generated code. The problem is compounded by the fact that hallucinated outputs are often indistinguishable from accurate ones without independent verification.

Addressing hallucination is now one of the most active research areas in AI alignment and reliability. Approaches range from architectural changes and better training data curation to post-hoc fact-checking pipelines and uncertainty quantification methods that allow models to express when they do not know something. The challenge reflects a deeper tension in generative AI: the same capacity for flexible, creative generation that makes these models powerful also makes them prone to inventing rather than recalling.
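One uncertainty-quantification idea mentioned above—letting a model say "I don't know"—can be sketched via self-consistency sampling: draw several answers and commit only when they agree. This is an illustrative toy; sample_answer here is a hypothetical stand-in for a stochastic model call, not a real API:

```python
# Toy sketch of abstention via self-consistency: disagreement across repeated
# samples is treated as a signal of low confidence, and the system abstains
# rather than asserting a possibly hallucinated answer.

import random
from collections import Counter

def sample_answer(question: str) -> str:
    """Hypothetical stand-in for a model sampled with temperature > 0."""
    if "capital of France" in question:
        return "Paris"  # well-known fact: samples agree
    return random.choice(["1912", "1915", "1921"])  # guesswork: samples disagree

def answer_or_abstain(question: str, n: int = 10, threshold: float = 0.7) -> str:
    """Sample n answers; return the majority answer only if it clears threshold."""
    votes = Counter(sample_answer(question) for _ in range(n))
    top_answer, count = votes.most_common(1)[0]
    return top_answer if count / n >= threshold else "I don't know."
```

Real systems apply the same logic with semantic clustering of sampled answers rather than exact string matching, since a model can phrase the same fact many ways.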

Related

Mirage Effect
When multimodal AI models produce confident visual analysis from images that were never provided.
Generality: 542

AI-Induced Psychosis
Psychotic symptoms temporally linked to immersive or misleading interactions with AI systems.
Generality: 37

Model Collapse (Silent Collapse)
Progressive AI degradation caused by recursive training on AI-generated synthetic data.
Generality: 339

Reasoning Instability
When AI models produce inconsistent or contradictory reasoning across similar inputs.
Generality: 395

Generator-Verifier Gap
The asymmetry between an AI model's ability to generate versus verify outputs.
Generality: 416

Model Collapse
When generative models lose output diversity, repeatedly producing identical or near-identical results.
Generality: 602