
Envisioning is an emerging-technology research institute and advisory firm.


2011 — 2026


Tree of Thoughts

A prompting framework that guides LLMs to explore multiple reasoning paths simultaneously.

Year: 2023 · Generality: 520

Tree of Thoughts (ToT) is a prompting and inference framework for large language models that structures the reasoning process as a search over a tree of intermediate "thoughts" — coherent text fragments representing partial steps toward a solution. Rather than generating a single linear chain of reasoning, ToT allows the model to branch into multiple candidate continuations at each step, evaluate the promise of each branch, and use search strategies such as breadth-first or depth-first traversal to navigate toward a final answer. This mirrors the way humans deliberate by considering several approaches before committing to one.

The mechanics of ToT involve three core components: a thought generator that produces candidate next steps, an evaluator that scores or classifies each partial solution's viability (often using the LLM itself as a judge), and a search algorithm that decides which branches to expand or prune. This design separates the generative and evaluative roles of the model, enabling systematic exploration of the problem space rather than greedy, left-to-right decoding. The framework is particularly effective on tasks requiring multi-step planning, mathematical reasoning, and combinatorial problem-solving, where a single misstep early in a chain can derail the entire solution.
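The generate–evaluate–search loop described above can be sketched in a few lines. This is a minimal illustration, not the framework's reference implementation: the thought generator and evaluator here are stand-in functions for a toy counting puzzle (reach 24 from 1 using +1 or ×2 steps), whereas a real ToT system would call an LLM for both roles. The function name `tree_of_thoughts`, the `beam_width` parameter, and the toy problem are all hypothetical choices for this sketch.

```python
import heapq

def tree_of_thoughts(initial, generate, evaluate, is_solution,
                     beam_width=3, max_depth=10):
    """Breadth-first ToT search: branch each partial 'thought', score the
    candidates, and keep only the most promising branches per level."""
    frontier = [initial]
    for _ in range(max_depth):
        # Thought generator: propose candidate next steps for every branch.
        candidates = [c for state in frontier for c in generate(state)]
        solutions = [c for c in candidates if is_solution(c)]
        if solutions:
            return solutions[0]
        # Evaluator + pruning: keep only the top-scoring branches (beam search).
        frontier = heapq.nlargest(beam_width, candidates, key=evaluate)
        if not frontier:
            return None
    return None

# Toy problem standing in for LLM "thoughts": each state is a path of numbers.
TARGET = 24
generate = lambda path: [path + [path[-1] + 1], path + [path[-1] * 2]]
evaluate = lambda path: -abs(TARGET - path[-1])  # closer to target = better
is_solution = lambda path: path[-1] == TARGET

result = tree_of_thoughts([1], generate, evaluate, is_solution)
```

In an LLM setting, `generate` would sample several candidate continuations from the model and `evaluate` would prompt the model (or a second model) to judge each partial solution, but the control flow — branch, score, prune, repeat — is the same.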

Tree of Thoughts matters because it exposes a significant limitation of standard chain-of-thought prompting: its vulnerability to early errors that compound without correction. By treating inference as a search problem, ToT dramatically improves performance on challenging benchmarks such as Game of 24 and creative writing tasks with structural constraints. It also connects modern LLM reasoning to classical AI search literature, suggesting that deliberate, tree-structured planning is a powerful complement to the pattern-matching strengths of neural language models. The framework has influenced subsequent work on agent architectures, self-refinement, and inference-time compute scaling.

Related

Chain of Thought (CoT) Prompting

A prompting technique that guides language models through explicit intermediate reasoning steps.

Generality: 694
Thought Token

Special tokens that give language models explicit space to reason before answering.

Generality: 450
Meta Chain-of-Thought

A meta-level approach that generates or selects reasoning templates to guide LLM step-by-step thinking.

Generality: 292
Visual Chain of Thought

Explicit intermediate visual reasoning steps that expose and structure a model's multi-step problem solving.

Generality: 550
Chain of Draft

Minimalist reasoning that uses fewer tokens than chain-of-thought for efficient intermediate reasoning.

Generality: 535
Chain-of-Thought Monitoring

Observing a model's reasoning steps to detect unsafe or deceptive behavior.

Generality: 322