Envisioning is an emerging technology research institute and advisory.


In-Context Learning

A model learns new tasks from prompt examples alone, without any weight updates.

Year: 2020 · Generality: 717

In-context learning (ICL) is a capability of large language models (LLMs) in which the model adapts its behavior to a new task by conditioning on a handful of input-output examples embedded directly in the prompt, rather than through any update to its parameters. At inference time, the model reads the provided examples, infers the pattern or task structure they imply, and applies that understanding to a novel query — all without gradient descent or fine-tuning. The examples serve as implicit instructions, and the model's ability to exploit them emerges from the statistical regularities absorbed during large-scale pretraining.
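The prompt-level conditioning described above can be sketched as plain string assembly: the demonstrations are concatenated into the prompt and the model's weights are never touched. A minimal illustration (the task, helper name, and `Input:`/`Output:` template are illustrative choices, not any particular API):

```python
# Minimal sketch of few-shot prompt construction for in-context learning.
# The (input, output) demonstrations are embedded directly in the prompt
# text; no parameter update or fine-tuning is involved.

def build_icl_prompt(examples, query, instruction=""):
    """Assemble a few-shot prompt from (input, output) demonstration pairs."""
    parts = [instruction] if instruction else []
    for x, y in examples:
        parts.append(f"Input: {x}\nOutput: {y}")
    parts.append(f"Input: {query}\nOutput:")  # model completes from here
    return "\n\n".join(parts)

# English -> French demonstrations (illustrative task)
demos = [("cheese", "fromage"), ("house", "maison")]
prompt = build_icl_prompt(demos, "dog", instruction="Translate English to French.")
print(prompt)
```

The resulting string would be sent to a frozen LLM as-is; the model infers the translation task from the two demonstrations and completes the final `Output:` line.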

The mechanics of ICL are still an active area of research, but leading hypotheses suggest that transformers implicitly implement a form of gradient-based learning in their forward pass through attention mechanisms. When given demonstrations, the model effectively performs a kind of "meta-learning" — recognizing task structure from the examples and generalizing accordingly. The number and quality of demonstrations matter considerably: zero-shot ICL provides only a task description, one-shot provides a single example, and few-shot provides several, with performance generally improving as more relevant examples are added.
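The zero-/one-/few-shot distinction comes down to how many demonstrations the prompt carries. A side-by-side sketch (sentiment classification is an illustrative task; the wording is not from any specific benchmark):

```python
# Contrast of zero-, one-, and few-shot prompts for the same task.
# Only the number of embedded demonstrations changes; the underlying
# model is identical in all three regimes.

task = "Classify the sentiment of the review as positive or negative."

# Zero-shot: task description only, no demonstrations.
zero_shot = f"{task}\nReview: 'Great value.'\nSentiment:"

# One-shot: a single demonstration precedes the query.
one_shot = (
    f"{task}\n"
    "Review: 'Terrible service.'\nSentiment: negative\n"
    "Review: 'Great value.'\nSentiment:"
)

# Few-shot: several demonstrations precede the query.
few_shot = (
    f"{task}\n"
    "Review: 'Terrible service.'\nSentiment: negative\n"
    "Review: 'Loved every minute.'\nSentiment: positive\n"
    "Review: 'Would not recommend.'\nSentiment: negative\n"
    "Review: 'Great value.'\nSentiment:"
)
```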

ICL became practically significant with the release of GPT-3 in 2020, which demonstrated that a sufficiently large pretrained model could perform competitively on diverse benchmarks — translation, arithmetic, question answering — using only prompt-level conditioning. This was a striking departure from the prevailing paradigm of task-specific fine-tuning, and it catalyzed enormous interest in prompt engineering, chain-of-thought prompting, and retrieval-augmented generation as complementary techniques.

The importance of ICL lies in its flexibility and accessibility: practitioners can adapt a single frozen model to new tasks without expensive retraining, making deployment faster and more cost-effective. However, ICL has notable limitations — it is sensitive to example ordering and phrasing, constrained by context window length, and can be unreliable on tasks that require precise reasoning or domain knowledge not well-represented in pretraining data. Understanding when and why ICL succeeds or fails remains one of the central questions in modern LLM research.
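One practical consequence of the context-window constraint mentioned above: demonstrations often have to be pruned to fit a token budget. A hedged sketch, approximating token count by whitespace word count purely for illustration (real systems would use the model's tokenizer):

```python
# Sketch: keep only as many demonstrations as fit within a crude
# word-count budget, reserving room for the query itself. Word count
# stands in for a real tokenizer here.

def fit_demos(demos, query, budget_words):
    """Return the prefix of demos that fits alongside the query."""
    kept = []
    used = len(query.split())  # reserve budget for the query
    for x, y in demos:
        cost = len(x.split()) + len(y.split())
        if used + cost > budget_words:
            break  # context window exhausted; drop remaining demos
        kept.append((x, y))
        used += cost
    return kept
```

Because ICL performance is also sensitive to example ordering, which demonstrations survive such truncation (and in what order) can materially change the model's answer.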

Related

System Prompt Learning
Automatically optimizing persistent model instructions to steer behavior without full retraining.
Generality: 520

Incremental Learning
A learning paradigm where models continuously update from new data without full retraining.
Generality: 702

CLIP (Contrastive Language–Image Pre-training)
OpenAI model that learns visual concepts by aligning images with natural language descriptions.
Generality: 703

Contrastive Learning
A self-supervised technique that learns representations by comparing similar and dissimilar data pairs.
Generality: 694

Instruction Tuning
Fine-tuning language models on instruction-response pairs to improve task-following behavior.
Generality: 694

Long-Context Modeling
Architectures and techniques enabling AI models to process and reason over very long sequences.
Generality: 694