
Word Salad

Incoherent, meaningless text output from a language model, lacking semantic structure.

Year: 2019 · Generality: 380

In AI and natural language processing, "word salad" refers to model outputs that are syntactically garbled, semantically incoherent, or so contextually disconnected that they convey no meaningful information. The term is borrowed from clinical psychology, where it describes fragmented, disorganized speech associated with certain psychiatric conditions, but in the ML context it specifically characterizes failure modes of generative language systems. Word salad outputs may superficially resemble natural language — containing real words and partial grammatical structures — yet fail to communicate any coherent idea or intent.

Word salad typically emerges from several underlying causes. Early rule-based NLP systems could produce it when template logic broke down or when input fell outside expected patterns. In neural language models, it can result from insufficient training data, poor sampling strategies (such as very high temperature settings that flatten the probability distribution over tokens), or model collapse during training. Adversarial inputs and prompt injection attacks can also deliberately induce word salad as a way to destabilize model outputs. The phenomenon became a prominent evaluation concern as large language models like GPT-2 and GPT-3 demonstrated that scale alone did not guarantee coherence under all conditions.
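
To make the temperature point concrete, here is a minimal sketch (not from this entry; the logit values are invented for illustration) of how a sampling temperature reshapes a model's next-token distribution:

```python
import numpy as np

def softmax_with_temperature(logits: np.ndarray, temperature: float) -> np.ndarray:
    """Convert raw logits to token probabilities at a given sampling temperature."""
    scaled = logits / temperature
    scaled -= scaled.max()          # subtract max for numerical stability
    exp = np.exp(scaled)
    return exp / exp.sum()

# Hypothetical next-token logits for a five-token vocabulary.
logits = np.array([5.0, 3.0, 1.0, 0.5, 0.1])

for t in (0.5, 1.0, 2.0, 10.0):
    print(f"T={t:>4}: {np.round(softmax_with_temperature(logits, t), 3)}")

# At T=0.5 nearly all probability mass sits on the top token; at T=10 the
# distribution is close to uniform, so sampling picks near-random tokens,
# which is the regime in which word-salad output becomes likely.
```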

Understanding and measuring word salad is important for evaluating language model quality and safety. Metrics such as perplexity, BERTScore, and human coherence ratings are commonly used to detect incoherent outputs, though no single automated metric perfectly captures the full range of failure modes. Reducing word salad has driven advances in decoding strategies, including beam search, nucleus sampling, and repetition penalties, as well as improvements in fine-tuning and reinforcement learning from human feedback (RLHF). As language models are deployed in high-stakes applications like medical documentation or legal drafting, avoiding incoherent output has become a core reliability and trustworthiness requirement.
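
As a sketch of the decoding side, the following is one plausible implementation of the nucleus (top-p) sampling strategy named above; the function name and example probabilities are hypothetical, not drawn from any particular library's API:

```python
import numpy as np

def nucleus_sample(probs: np.ndarray, top_p: float = 0.9,
                   rng: np.random.Generator | None = None) -> int:
    """Sample a token index from the smallest set of tokens whose
    cumulative probability reaches top_p (nucleus / top-p sampling)."""
    if rng is None:
        rng = np.random.default_rng()
    order = np.argsort(probs)[::-1]                       # indices, most probable first
    cumulative = np.cumsum(probs[order])
    cutoff = int(np.searchsorted(cumulative, top_p)) + 1  # size of the nucleus
    nucleus = order[:cutoff]
    renormed = probs[nucleus] / probs[nucleus].sum()      # renormalize within the nucleus
    return int(rng.choice(nucleus, p=renormed))

# Hypothetical next-token probabilities: with top_p=0.9, only the top
# three tokens (cumulative probability 0.9) can ever be chosen, truncating
# the low-probability tail where incoherent continuations live.
probs = np.array([0.5, 0.3, 0.1, 0.07, 0.03])
print(nucleus_sample(probs, top_p=0.9))
```

Lowering top_p trades output diversity for coherence; in practice it is typically combined with temperature scaling and repetition penalties.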

Related

Slop
Low-quality, generic AI-generated content that is verbose, repetitive, or contextually hollow.
Generality: 96

Model Collapse (Silent Collapse)
Progressive AI degradation caused by recursive training on AI-generated synthetic data.
Generality: 339

Reasoning Instability
When AI models produce inconsistent or contradictory reasoning across similar inputs.
Generality: 395

Stochastic Parrot
A critique of language models that produce fluent text without genuine understanding.
Generality: 450

Model Collapse
When generative models lose output diversity, repeatedly producing identical or near-identical results.
Generality: 602

Hallucination
When AI models confidently generate plausible but factually incorrect or fabricated outputs.
Generality: 794