
Envisioning is an emerging technology research institute and advisory.


2011 — 2026


Chain of Thought (CoT) Prompting

A prompting technique that guides language models through explicit intermediate reasoning steps.

Year: 2022
Generality: 694
Back to Vocab

Chain of Thought (CoT) prompting is a technique for eliciting complex reasoning from large language models by encouraging them to produce explicit intermediate steps before arriving at a final answer. Rather than asking a model to jump directly from a question to a conclusion, CoT prompting—either through few-shot examples that demonstrate step-by-step reasoning or through zero-shot instructions like "think step by step"—guides the model to decompose a problem into a sequence of logical sub-steps. This mirrors the scratchpad-style reasoning humans use when working through difficult problems.
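The two elicitation styles can be sketched as prompt templates. This is a minimal illustration; the trigger phrase and the worked example are the commonly cited ones, but the exact problem text is made up here.

```python
# Minimal sketch of the two common ways to elicit chain-of-thought
# reasoning. The question and worked example are illustrative.

question = (
    "A store has 23 apples. It sells 9 and receives 15 more. "
    "How many apples does it have now?"
)

# Zero-shot CoT: append a trigger phrase that nudges the model
# to produce its reasoning before the answer.
zero_shot_prompt = f"Q: {question}\nA: Let's think step by step."

# Few-shot CoT: prepend a worked example whose answer shows explicit
# intermediate steps, so the model imitates the reasoning format.
few_shot_prompt = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
    f"Q: {question}\nA:"
)
```

Either prompt is then sent to the model as-is; the few-shot variant tends to produce reasoning that mirrors the format of the demonstration.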

The mechanism works because large language models are trained to predict plausible continuations of text. When prompted with examples that show reasoning chains, the model learns to generate similar intermediate text, and that generated reasoning in turn conditions the model's subsequent token predictions toward more accurate conclusions. The approach is particularly effective on tasks requiring arithmetic, symbolic manipulation, commonsense inference, and multi-hop question answering—domains where direct answer prediction frequently fails but where a correct reasoning trace reliably leads to a correct answer.
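The conditioning effect can be shown with a toy sketch (illustrative only; a real model predicts tokens rather than executing arithmetic): once each intermediate result is written into the context, the final answer becomes a short local continuation instead of a long-range prediction.

```python
# Toy illustration of the conditioning effect (not a real language model):
# each sub-step is appended to the context, and the final answer only
# has to read off the last computed value. The problem is illustrative.

def reason_step_by_step(start: int, sold: int, received: int) -> list[str]:
    """Build an explicit reasoning trace, one sub-step at a time."""
    context: list[str] = []
    after_sale = start - sold                 # sub-step 1
    context.append(f"{start} - {sold} = {after_sale}")
    total = after_sale + received             # sub-step 2, conditioned on sub-step 1
    context.append(f"{after_sale} + {received} = {total}")
    context.append(f"The answer is {total}")  # answer falls directly out of the trace
    return context

trace = reason_step_by_step(23, 9, 15)
# trace → ["23 - 9 = 14", "14 + 15 = 29", "The answer is 29"]
```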

CoT prompting became a prominent research focus following the 2022 paper by Wei et al. at Google Brain, which demonstrated that the technique emerged as a capability only in sufficiently large models (roughly 100B+ parameters), suggesting it is an emergent property of scale. Subsequent work showed that even smaller models could benefit when fine-tuned on reasoning chain data, and that self-consistency—sampling multiple reasoning paths and taking a majority vote—further improved accuracy. Variants such as Tree of Thoughts and Program of Thoughts extended the paradigm by exploring branching reasoning structures or offloading computation to code interpreters.

The significance of CoT extends beyond benchmark performance. By making a model's reasoning process legible, it offers a degree of interpretability that direct-answer prompting lacks, allowing practitioners to identify where a model's logic goes wrong. This transparency is valuable for debugging, for building user trust, and for constructing more reliable AI pipelines in high-stakes domains such as medicine, law, and scientific research.

Related

Meta Chain-of-Thought

A meta-level approach that generates or selects reasoning templates to guide LLM step-by-step thinking.

Generality: 292
Tree of Thoughts

A prompting framework that guides LLMs to explore multiple reasoning paths simultaneously.

Generality: 520
Visual Chain of Thought

Explicit intermediate visual reasoning steps that expose and structure a model's multi-step problem solving.

Generality: 550
Prompt Chaining

Linking sequential prompts so each output feeds the next, enabling complex multi-step reasoning.

Generality: 463
Chain-of-Thought Monitoring

Observing a model's reasoning steps to detect unsafe or deceptive behavior.

Generality: 322
Chain of Draft

Minimalist reasoning that uses fewer tokens than chain-of-thought for efficient intermediate reasoning.

Generality: 535