Assigns equal probability to all outcomes when no evidence favors any particular one.
The principle of indifference is a foundational concept in probability theory stating that, in the absence of any evidence distinguishing one outcome from another, rational agents should assign equal probability to each possible outcome. In machine learning and AI, this principle most commonly surfaces when constructing prior probability distributions for Bayesian models. When a practitioner has no domain knowledge or data to justify weighting one hypothesis over another, the principle of indifference provides a principled default: a uniform prior. This approach ensures that the model's initial beliefs do not arbitrarily favor any particular outcome before evidence is observed.
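The mechanics can be sketched in a few lines: start from a uniform prior over a set of hypotheses and update it with Bayes' rule as evidence arrives. The hypotheses and likelihood values below are illustrative assumptions, not drawn from any particular model.

```python
def bayes_update(prior, likelihood):
    """Return the posterior proportional to prior * likelihood, normalized."""
    unnormalized = [p * l for p, l in zip(prior, likelihood)]
    total = sum(unnormalized)
    return [u / total for u in unnormalized]

# Three hypothetical hypotheses about a coin; no evidence favors any of them.
hypotheses = ["fair", "heads-biased", "tails-biased"]
prior = [1 / len(hypotheses)] * len(hypotheses)  # principle of indifference

# Assumed likelihood of observing heads under each hypothesis.
likelihood_heads = [0.5, 0.8, 0.2]

posterior = bayes_update(prior, likelihood_heads)
print(posterior)  # → [0.333..., 0.533..., 0.133...]
```

Because the prior is flat, the posterior ranking after one observation is driven entirely by the likelihoods — which is exactly the behavior the uniform prior is meant to guarantee.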
In practice, applying the principle of indifference requires careful definition of the outcome space. The assigned probabilities depend heavily on how outcomes are partitioned and described — a sensitivity famously illustrated by Bertrand's paradox, which shows that different but equally valid descriptions of the same problem can yield conflicting uniform distributions. This dependence on problem framing has led to significant debate about when and how the principle should be applied, and has motivated more sophisticated approaches to prior construction, such as Jeffreys priors, which remain invariant under reparameterization.
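The partition-dependence can be demonstrated numerically with a classic variant of the problem (the "cube factory" example, used here as an assumed illustration): a square has a side length somewhere in (0, 2), equivalently an area somewhere in (0, 4). Applying indifference to the side length and applying it to the area yield different answers to the same question.

```python
import random

random.seed(0)
N = 100_000

# Description 1: side length uniform on (0, 2).
sides = [random.uniform(0, 2) for _ in range(N)]
p_from_side = sum(s < 1 for s in sides) / N  # P(side < 1) ≈ 0.5

# Description 2: area uniform on (0, 4); side = sqrt(area).
areas = [random.uniform(0, 4) for _ in range(N)]
p_from_area = sum(a ** 0.5 < 1 for a in areas) / N  # P(side < 1) ≈ 0.25

print(p_from_side, p_from_area)
```

Both descriptions are "uniform over the unknown quantity", yet they assign probability 1/2 and 1/4 respectively to the same event — the conflict that motivates reparameterization-invariant priors such as Jeffreys priors.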
Despite its limitations, the principle of indifference remains practically important in AI and ML. It underpins maximum entropy methods, where the least informative distribution consistent with known constraints is selected — a generalization of uniform priors to structured settings. It also appears in reinforcement learning, where agents exploring unknown environments often initialize with uniform action-selection policies before accumulating experience. In Naive Bayes classifiers and other probabilistic models, uniform priors serve as a regularization baseline when training data is scarce.
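The Naive Bayes connection can be made concrete with additive (Laplace) smoothing, which is equivalent to placing a uniform Dirichlet prior over word probabilities: unseen words receive nonzero mass, and with no data at all every word gets equal probability. The tiny vocabulary and counts below are hypothetical.

```python
from collections import Counter

def smoothed_probs(counts, vocab, alpha=1.0):
    """Add-alpha smoothing: a uniform Dirichlet prior over the vocabulary."""
    total = sum(counts.values()) + alpha * len(vocab)
    return {w: (counts.get(w, 0) + alpha) / total for w in vocab}

vocab = ["offer", "meeting", "prize"]          # assumed toy vocabulary
counts = Counter({"offer": 3, "prize": 1})     # "meeting" never observed

print(smoothed_probs(counts, vocab))           # unseen word still gets mass
print(smoothed_probs(Counter(), vocab))        # no data → uniform, 1/3 each
```

With scarce data the estimate is pulled toward the uniform baseline; as counts grow, the data dominates and the prior's influence fades — the regularization role described above.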
The principle matters because it addresses a fundamental challenge in probabilistic reasoning: how to act rationally under complete ignorance. By providing a systematic default, it prevents arbitrary or biased initialization of models and supports reproducible, transparent decision-making. While modern Bayesian practice often replaces flat priors with more informative or robust alternatives, the principle of indifference remains a conceptual anchor for understanding what it means to have no prior knowledge — and why that starting point must still be represented explicitly in any probabilistic system.