The logarithm of the odds, linking probabilities to linear model outputs.
Log odds express the logarithm of the ratio between the probability of an event occurring and the probability of it not occurring: log(p / (1 − p)). While raw probabilities are bounded between 0 and 1, this transformation maps them onto the entire real number line, from negative infinity to positive infinity. This unbounded, continuous range makes log odds far more amenable to linear modeling than probabilities themselves, which is why they serve as the foundational link function in logistic regression.
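A minimal sketch of the transformation described above, showing how probabilities near 0 and 1 stretch out toward the extremes of the real line (the function name `log_odds` is illustrative):

```python
import math

def log_odds(p: float) -> float:
    """Map a probability in (0, 1) to the real line via log(p / (1 - p))."""
    return math.log(p / (1 - p))

print(log_odds(0.5))    # 0.0 — even odds sit at the origin
print(log_odds(0.9))    # ≈ 2.197
print(log_odds(0.1))    # ≈ -2.197 — symmetric about p = 0.5
print(log_odds(0.999))  # ≈ 6.907 — grows without bound as p → 1
```

Note the symmetry: complementary probabilities map to log odds of equal magnitude and opposite sign, which is one reason the scale is convenient for linear models.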
In logistic regression — one of the most widely used classification algorithms in machine learning — the model directly predicts log odds as a linear combination of input features. Each coefficient in the model represents the change in log odds associated with a one-unit increase in the corresponding predictor, holding all other variables constant. To recover an interpretable probability from these predictions, practitioners apply the sigmoid (logistic) function, which is simply the inverse of the log-odds transformation. This tight mathematical relationship between log odds, the sigmoid function, and probability is central to how logistic regression operates.
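The round trip between log odds and probability can be sketched with a toy model; the intercept and coefficient values here are hypothetical, chosen only to illustrate the mechanics:

```python
import math

def sigmoid(z: float) -> float:
    """Inverse of the log-odds transform: recovers a probability from log odds."""
    return 1.0 / (1.0 + math.exp(-z))

# Hypothetical fitted model: predicted log odds = intercept + coef * x
intercept, coef = -1.0, 0.8

for x in (0.0, 1.0, 2.0):
    z = intercept + coef * x   # linear prediction lives in log-odds space
    p = sigmoid(z)             # sigmoid converts it back to a probability
    print(f"x={x}: log odds = {z:.2f}, probability = {p:.3f}")

# Round trip: applying sigmoid to log(p / (1 - p)) returns p
p = 0.73
assert abs(sigmoid(math.log(p / (1 - p))) - p) < 1e-12
```

Each one-unit increase in `x` adds `coef` (0.8) to the log odds, regardless of where on the probability curve the prediction falls, which is exactly the constant-effect interpretation the paragraph describes.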
Beyond logistic regression, log odds appear throughout machine learning and probabilistic reasoning. In Naive Bayes classifiers, classification decisions can be framed as comparing log-odds ratios derived from class-conditional likelihoods, making inference efficient and numerically stable. In information theory and model calibration, log odds provide a natural way to update beliefs — adding log-odds contributions from independent pieces of evidence corresponds to multiplying their likelihood ratios, a property that simplifies sequential Bayesian updating.
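The additive-evidence idea can be sketched with a toy binary Naive Bayes classifier; the prior and word likelihoods below are invented for illustration, not drawn from any real dataset:

```python
import math

# Hypothetical spam filter. Starting from the log prior odds, each observed
# word contributes an independent log-likelihood-ratio term, so evidence
# accumulates by simple addition in log-odds space.
log_prior_odds = math.log(0.3 / 0.7)        # log P(spam) / P(ham), assumed values

log_likelihood_ratios = {                    # log P(word|spam) / P(word|ham)
    "free": math.log(0.20 / 0.02),           # strong evidence for spam
    "meeting": math.log(0.01 / 0.10),        # strong evidence for ham
}

def spam_log_odds(words):
    """Posterior log odds of spam given the observed words."""
    return log_prior_odds + sum(
        log_likelihood_ratios[w] for w in words if w in log_likelihood_ratios
    )

print(spam_log_odds(["free"]))      # positive → classify as spam
print(spam_log_odds(["meeting"]))   # negative → classify as ham
```

Working in log space also avoids the numerical underflow that multiplying many small probabilities would cause, which is the stability benefit mentioned above.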
The practical importance of log odds extends to model interpretability and diagnostics. Because coefficients in logistic regression operate in log-odds space, practitioners can reason about feature importance and direction of effect without needing to evaluate the full nonlinear probability curve. Converting log-odds coefficients to odds ratios (by exponentiating them) yields another widely used interpretive quantity in clinical and social science applications. As machine learning increasingly intersects with fields demanding transparent, interpretable models, fluency with log odds remains an essential skill for practitioners.
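The coefficient-to-odds-ratio conversion mentioned above can be verified numerically; the coefficient and baseline log odds here are hypothetical:

```python
import math

def sigmoid(z: float) -> float:
    return 1.0 / (1.0 + math.exp(-z))

def odds(p: float) -> float:
    return p / (1 - p)

coef = 0.69                      # hypothetical log-odds coefficient
odds_ratio = math.exp(coef)      # ≈ 1.99: a one-unit increase roughly doubles the odds
print(round(odds_ratio, 2))

# Sanity check: the ratio of odds before and after a one-unit increase in the
# predictor equals exp(coef), whatever the baseline log odds happen to be.
baseline = -0.5                  # hypothetical baseline log odds
ratio = odds(sigmoid(baseline + coef)) / odds(sigmoid(baseline))
assert abs(ratio - odds_ratio) < 1e-9
```

The check makes the interpretive claim concrete: the odds ratio is independent of the baseline, even though the change in probability is not.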