Envisioning is an emerging technology research institute and advisory.


Learnability

Whether and how efficiently a model class can generalize from finite training data.

Year: 1984
Generality: 794

Learnability is a foundational concept in machine learning theory that asks a deceptively simple question: under what conditions can an algorithm reliably learn a target function from a finite set of examples? Rather than measuring the performance of any single model, learnability characterizes entire hypothesis classes — the families of functions a learning algorithm is permitted to consider. A class is deemed learnable if there exists an algorithm that, given enough training data, produces a hypothesis that generalizes well to unseen examples with high probability. This framing shifts the focus from empirical performance to principled guarantees about what is possible to learn at all.

The dominant formal framework for studying learnability is Probably Approximately Correct (PAC) learning, introduced by Leslie Valiant in 1984. PAC learning defines learnability in terms of two tolerance parameters: the learner must find a hypothesis with error at most ε, with probability at least 1 − δ, using a number of samples that is polynomial in 1/ε, 1/δ, and the complexity of the hypothesis class. A complementary tool is the Vapnik-Chervonenkis (VC) dimension, which quantifies the expressive capacity of a hypothesis class by measuring the largest set of points it can shatter — that is, label in every possible way. Finite VC dimension is both necessary and sufficient for PAC learnability in the binary classification setting, providing a clean geometric characterization of which model families can be learned from data.
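For small hypothesis classes, shattering can be checked directly by brute force. The sketch below (the threshold classifiers and point sets are illustrative assumptions, not from any particular library) confirms the textbook fact that 1D threshold functions shatter any single point but no pair of points, i.e. they have VC dimension 1:

```python
def shatters(points, classifiers):
    """Brute-force check: does the hypothesis class realize every
    possible labeling of `points`? If so, the set is shattered."""
    labelings = {tuple(h(x) for x in points) for h in classifiers}
    return len(labelings) == 2 ** len(points)

# Hypothetical 1D threshold classifiers: h_t(x) = 1 if x >= t, else 0.
thresholds = [lambda x, t=t: int(x >= t) for t in (-0.5, 0.5, 1.5, 2.5)]

one_point = shatters([1.0], thresholds)        # True: both labelings reachable
two_points = shatters([1.0, 2.0], thresholds)  # False: the labeling (1, 0) is unreachable
```

Because no threshold can label a left point 1 and a right point 0, no two-point set is shattered, which is exactly why this very restricted class is easy to learn from few examples.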

Learnability theory has direct practical implications for machine learning. It explains why overly expressive models require more data to generalize, and why regularization — which effectively restricts the hypothesis class — can improve performance on unseen examples. The tension between model complexity and sample efficiency underlies phenomena like overfitting and underfitting. More recent work has extended classical learnability results to settings involving neural networks, online learning, and distribution shift, where the original PAC assumptions often break down. Understanding learnability helps practitioners make principled decisions about model selection, data requirements, and the fundamental limits of what a given learning system can achieve.
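The cost of expressiveness can be made concrete with the classical sample-complexity bound for a finite hypothesis class: a consistent learner needs roughly m ≥ (1/ε)(ln|H| + ln(1/δ)) examples, a standard textbook result. A minimal sketch, with illustrative class sizes:

```python
import math

def pac_sample_bound(hypothesis_count: int, epsilon: float, delta: float) -> int:
    """Examples sufficient for a consistent learner over a finite
    hypothesis class H to be (epsilon, delta)-PAC:
    m >= (ln|H| + ln(1/delta)) / epsilon."""
    return math.ceil((math.log(hypothesis_count) + math.log(1 / delta)) / epsilon)

# A more expressive (larger) class needs more data for the same guarantee,
# though the dependence on |H| is only logarithmic.
small_class = pac_sample_bound(10**3, epsilon=0.05, delta=0.05)  # 199
large_class = pac_sample_bound(10**9, epsilon=0.05, delta=0.05)  # 475
```

Restricting the hypothesis class, which is what regularization effectively does, shrinks the ln|H| term and thus the data required for the same error and confidence guarantee.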

Related

Generalization
A model's ability to perform accurately on new, previously unseen data.
Generality: 913

Universality Hypothesis
The claim that sufficiently expressive models can approximate any learnable function.
Generality: 720

Sample Efficiency
How well a model learns from limited training data to achieve strong performance.
Generality: 710

Data-Efficient Learning
Machine learning approaches that achieve strong performance with minimal training data.
Generality: 752

Linear Separability
Whether two data classes can be perfectly divided by a single hyperplane.
Generality: 694

Meta-Learning
A paradigm enabling models to learn how to learn across tasks efficiently.
Generality: 756