
Envisioning is an emerging technology research institute and advisory.




ReAct (Reason+Act)

A prompting framework that interleaves language model reasoning with grounded action execution.

Year: 2023 · Generality: 485

ReAct is a prompting paradigm for large language models (LLMs) that interleaves chain-of-thought reasoning traces with discrete action steps, allowing a model to think through a problem and interact with external tools or environments in alternating fashion. Introduced in a 2022 paper by Yao et al. and gaining widespread adoption through 2023, the framework structures model outputs as sequences of Thought → Action → Observation tuples: the model reasons about what to do, issues an action (such as a search query or API call), receives an observation from the environment, and then reasons again in light of that new information. This loop continues until the model produces a final answer.
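The Thought → Action → Observation loop described above can be sketched in a few lines of Python. This is a minimal illustration, not a production agent: the "model" is a scripted stand-in rather than a real LLM call, and the `search` tool, its `Action: search[...]` syntax, and all function names are hypothetical choices made for this example.

```python
def search(query: str) -> str:
    """Toy stand-in for an external tool such as a search API (hypothetical)."""
    facts = {"capital of France": "Paris"}
    return facts.get(query, "no result")

def scripted_model(transcript: str) -> str:
    """Stand-in for an LLM: emits a Thought + Action, then a final answer
    once an Observation is present in the transcript."""
    if "Observation:" not in transcript:
        return ("Thought: I should look this up rather than guess.\n"
                "Action: search[capital of France]")
    return "Final Answer: Paris"

def react_loop(question: str, max_steps: int = 5) -> str:
    """Alternate reasoning and acting until the model emits a final answer."""
    transcript = f"Question: {question}"
    for _ in range(max_steps):
        output = scripted_model(transcript)
        transcript += "\n" + output
        if output.startswith("Final Answer:"):
            return output.removeprefix("Final Answer:").strip()
        # Parse the Action line, execute the tool, append the Observation
        # so the next reasoning step is grounded in the tool's result.
        action_line = next(line for line in output.splitlines()
                           if line.startswith("Action:"))
        query = action_line[len("Action: search["):-1]
        transcript += f"\nObservation: {search(query)}"
    return "no answer"

print(react_loop("What is the capital of France?"))
```

Swapping `scripted_model` for a real LLM call and `search` for genuine tools yields the pattern that agent frameworks implement; the key design point is that every tool result re-enters the prompt as an Observation before the model reasons again.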

The core motivation behind ReAct is to address two failure modes common in LLM-based agents: hallucination and lack of grounding. Pure chain-of-thought prompting lets models reason but leaves them disconnected from real-world information, while pure action-based approaches (like tool-use pipelines) lack the interpretable reasoning that helps models recover from errors. By tightly coupling reasoning and acting, ReAct enables models to dynamically adjust their plans when an action returns unexpected results, producing more reliable and auditable behavior on tasks like multi-hop question answering (HotpotQA), fact verification (FEVER), and interactive decision-making (ALFWorld).

ReAct has become a foundational building block in the LLM agent ecosystem. Frameworks like LangChain and LlamaIndex implement ReAct-style agents as a default pattern, and it has influenced subsequent agent architectures including Reflexion, Toolformer, and AutoGPT. Its significance lies not just in improved benchmark performance but in establishing a legible, modular structure for agentic behavior: because the reasoning traces are explicit, developers can inspect why an agent took a particular action and intervene when the logic goes wrong. This transparency is increasingly valued as LLM agents are deployed in higher-stakes settings requiring human oversight.

Related

  • Chain of Thought (CoT) Prompting: A prompting technique that guides language models through explicit intermediate reasoning steps. (Generality: 694)
  • Text-to-Action Model: A model that converts natural language instructions into executable real-world or digital actions. (Generality: 620)
  • Prompt Chaining: Linking sequential prompts so each output feeds the next, enabling complex multi-step reasoning. (Generality: 463)
  • ACE (Agentic Context Engineering): Designing inputs and interfaces that enable AI models to act as reliable autonomous agents. (Generality: 293)
  • Adaptive Reasoning: AI capability to flexibly construct and revise multi-step inferences when facing novel problems. (Generality: 701)
  • Meta Prompt: A prompting strategy that structures how AI models reason and orchestrate complex tasks. (Generality: 381)