
Solomonoff Induction

A universal Bayesian framework for prediction grounded in algorithmic information theory.

Year: 1964
Generality: 678

Solomonoff Induction is a theoretical framework for predicting future data from past observations by combining Bayesian inference with algorithmic information theory. At its core, it assigns prior probabilities to hypotheses based on their algorithmic complexity — specifically, the length of the shortest computer program that generates the observed data. Simpler programs receive higher prior probability, formalizing Occam's Razor in a mathematically rigorous way. The framework considers the weighted sum of all computable hypotheses, making it a universal prior over all possible data-generating processes.
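
Formally, the universal prior assigns a finite sequence x the weight M(x) = Σ 2^(-|p|), summed over every program p whose output begins with x, where |p| is the program's length in bits. The Python sketch below illustrates only the weighting idea: the three hand-picked "programs" and their bit lengths are invented for illustration, since the genuine construction enumerates all programs on a universal Turing machine.

from fractions import Fraction

# Toy hypothesis space: (name, program length in bits, next-bit predictor).
# The lengths are illustrative assumptions; in the real construction they
# come from encoding each program on a fixed universal Turing machine.
hypotheses = [
    ("all zeros",   4, lambda h: 0),
    ("all ones",    4, lambda h: 1),
    ("alternating", 7, lambda h: 1 - h[-1] if h else 0),
]

# Occam's Razor, formalized: a length-L program gets prior weight 2^-L,
# so each 4-bit program receives 8x the weight of the 7-bit one.
prior = {name: Fraction(1, 2**L) for name, L, _ in hypotheses}
total = sum(prior.values())
for name, weight in prior.items():
    print(f"{name}: prior = {float(weight / total):.3f}")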

Unlike traditional Bayesian inference, which requires a practitioner to specify a hypothesis space and prior distribution by hand, Solomonoff Induction constructs both automatically. Each program of length L contributes 2^(-L) to the prior probability of the sequence it produces; a sequence's total weight, summed over every program that generates it, is known as the Solomonoff prior or universal prior. As new observations arrive, these probabilities are updated in standard Bayesian fashion. The result is a predictor that converges rapidly to any true computable data-generating distribution, with total expected prediction error bounded by the complexity of the environment, a result known as Solomonoff's completeness theorem.
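
Continuing the toy example, updating is especially transparent for deterministic hypotheses: a program's likelihood is 1 if it reproduces every observed bit and 0 otherwise, so conditioning simply discards inconsistent programs and renormalizes the 2^(-L) weights. A minimal sketch, again under the assumption of a hand-picked hypothesis list:

def posterior_predict(observed, hypotheses):
    """Posterior over deterministic hypotheses after seeing `observed`,
    plus the mixture probability that the next bit is 1."""
    weights = {}
    for name, L, predict in hypotheses:
        # Replay the hypothesis against the data; likelihood is 0 or 1.
        if all(predict(observed[:i]) == bit for i, bit in enumerate(observed)):
            weights[name] = 2.0 ** (-L)
    total = sum(weights.values())
    posterior = {name: w / total for name, w in weights.items()}
    # Mixture prediction: a weighted vote of the surviving hypotheses.
    p_one = sum(
        posterior[name]
        for name, _, predict in hypotheses
        if name in posterior and predict(observed) == 1
    )
    return posterior, p_one

hypotheses = [
    ("all zeros",   4, lambda h: 0),
    ("all ones",    4, lambda h: 1),
    ("alternating", 7, lambda h: 1 - h[-1] if h else 0),
]

# After observing 0, 1, 0 only "alternating" remains consistent, so the
# mixture predicts the next bit is 1 with probability 1.0.
print(posterior_predict([0, 1, 0], hypotheses))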

The practical limitation of Solomonoff Induction is that it is not computable. Evaluating the universal prior exactly would require solving the halting problem, that is, deciding whether arbitrary programs terminate, which is undecidable. The prior is therefore only lower semi-computable: it can be approximated from below by enumerating programs with bounded runtime, but never computed exactly. Despite this, it serves as a powerful theoretical benchmark, establishing what optimal inductive inference would look like if computational constraints were removed.
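
Ever-improving lower bounds are nonetheless available: enumerate every program up to some length, run each for a bounded number of steps, and credit 2^(-L) whenever the (possibly truncated) output extends the target prefix. The sketch below substitutes a tiny hand-rolled two-rule machine for the universal Turing machine, an illustrative assumption; raising the length and step bounds never decreases the estimate, which converges to the true prior mass from below without any finite stage certifying how close it is.

from itertools import product

def run_toy_machine(program, n_steps):
    """Stand-in for a universal machine (illustration only, not universal):
    the first bit selects a rule, the remaining bits are data."""
    if len(program) < 2:
        return None  # too short to encode a rule plus data
    rule, body = program[0], list(program[1:])
    if rule == 0:
        # Rule 0: repeat the body forever, truncated at the step bound.
        return (body * (n_steps // len(body) + 1))[:n_steps]
    # Rule 1: print the body once, then halt.
    return body

def lower_bound_prior(x, max_len, n_steps):
    """Lower bound on the prior mass of prefix x, from enumerating all
    programs of up to max_len bits with runtime capped at n_steps."""
    mass = 0.0
    for L in range(2, max_len + 1):
        for bits in product((0, 1), repeat=L):
            out = run_toy_machine(bits, n_steps)
            if out is not None and len(out) >= len(x) and out[:len(x)] == list(x):
                mass += 2.0 ** (-L)
    return mass

# Monotone approximation: larger bounds can only add program mass.
print(lower_bound_prior([0, 1, 0, 1], max_len=8, n_steps=32))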

Solomonoff Induction is deeply influential in machine learning theory and artificial intelligence research. It underpins the theoretical foundations of the Minimum Description Length (MDL) principle, AIXI (a model of optimal reinforcement learning agents), and discussions of universal intelligence. It offers a formal answer to the philosophical problem of induction (how to justify generalizing from finite observations) and continues to shape thinking about learning, compression, and prediction in both theoretical and applied AI contexts.

Related

Algorithmic Probability

The probability that a random program produces a specific output on a universal Turing machine.

Generality: 657

AIT (Algorithmic Information Theory)

A framework quantifying data complexity via the shortest program that produces it.

Generality: 752

Program Induction

Automatically generating programs from data and desired input-output behavior.

Generality: 579

Kolmogorov Complexity

The length of the shortest program that produces a given string as output.

Generality: 760

Thermodynamic Bayesian Inference

A framework unifying thermodynamic principles with Bayesian inference through energy minimization.

Generality: 450

Inductive Bias

Built-in assumptions that help a learning algorithm generalize beyond its training data.

Generality: 838