
Envisioning is an emerging technology research institute and advisory.


One-Shot Learning

A learning paradigm where models generalize effectively from a single training example per class.

Year: 2006
Generality: 694

One-shot learning is a machine learning paradigm in which a model is trained to recognize or classify new concepts from just a single labeled example per class. This stands in sharp contrast to conventional deep learning approaches, which typically require thousands or millions of examples to achieve reliable performance. The challenge is fundamental: how can a model generalize meaningfully when it has seen almost nothing? One-shot learning addresses this by shifting the burden from raw data volume to smarter inductive biases, learned representations, and similarity-based reasoning.

The dominant approaches to one-shot learning fall into a few broad families. Metric-based methods, such as Siamese networks and matching networks, learn an embedding space where examples from the same class cluster together, allowing a new example to be classified by proximity to known prototypes. Model-based approaches use memory-augmented neural networks or recurrent architectures that can rapidly encode and retrieve new information. Optimization-based methods, such as MAML (Model-Agnostic Meta-Learning), train models to have initial parameters that can be fine-tuned to new tasks with minimal gradient steps. All of these strategies share a common thread: they are trained across many tasks so that learning to learn from few examples becomes the core competency.
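The metric-based family described above can be sketched minimally: a shared embedding function maps inputs into a space where classifying a query reduces to a nearest-prototype lookup against one support example per class. The sketch below is a toy illustration, not a trained system; the `embed` function is a stand-in random projection (in practice a Siamese or matching network would be trained to produce it), and the class names and data are invented:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in embedding: a fixed random projection followed by L2 normalization.
# In a real metric-based system this would be a Siamese network trained so
# that same-class inputs land close together in the embedding space.
W = rng.normal(size=(16, 64))

def embed(x):
    """Project a raw input vector into the embedding space (unit norm)."""
    v = W @ x
    return v / np.linalg.norm(v)

# Support set: exactly ONE labeled example per class -- the one-shot regime.
support = {
    "cat": rng.normal(size=64),
    "dog": rng.normal(size=64) + 3.0,  # shifted so the toy classes separate
}
prototypes = {label: embed(x) for label, x in support.items()}

def classify(query):
    """Assign the query to the class whose prototype is most similar
    (cosine similarity, since embeddings are unit-normalized)."""
    q = embed(query)
    return max(prototypes, key=lambda label: float(q @ prototypes[label]))

# A query near the single "dog" support example maps to "dog".
query = support["dog"] + 0.1 * rng.normal(size=64)
print(classify(query))  # → dog
```

The design point the sketch makes is that no per-class training happens at classification time: all the learning effort goes into the embedding, and adding a new class costs exactly one example.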

One-shot learning is closely related to the broader field of few-shot learning, which generalizes the constraint to a small but not necessarily singular number of examples, and meta-learning, which frames the problem as learning how to learn efficiently. The distinction matters in practice: true one-shot settings are extremely demanding and often serve as a benchmark for the upper bound of sample efficiency.

The practical importance of one-shot learning is substantial. In medical imaging, rare conditions may yield only a handful of annotated scans. In robotics, a system may need to recognize a novel object after a single demonstration. In security and biometrics, face verification systems must authenticate individuals from minimal enrollment data. As AI is deployed in increasingly specialized and data-scarce domains, the ability to learn from minimal supervision becomes not just academically interesting but operationally essential.

Related

Few-Shot Learning
Training ML models to generalize accurately from only a handful of labeled examples.
Generality: 759

FSL (Few-Shot Learning)
Training models to generalize accurately from only a handful of labeled examples.
Generality: 710

Zero-Shot Learning (ZSL)
A technique enabling models to recognize concepts never encountered during training.
Generality: 620

Meta-Learning
A paradigm enabling models to learn how to learn across tasks efficiently.
Generality: 756

Zero-shot Capability
An AI model's ability to perform unseen tasks without task-specific training examples.
Generality: 650

Data-Efficient Learning
Machine learning approaches that achieve strong performance with minimal training data.
Generality: 752