
Information Bottleneck Theory
A principled information‑theoretic framework for finding compact representations of an input that retain maximal information about a relevant output, trading off compression against predictive power.
The information bottleneck (IB) formulates representation learning as an optimization over mutual information: a learned representation T compresses the input X (minimizing I(X;T)) while preserving information about the target Y (maximizing I(T;Y)). This is typically expressed via the Lagrangian L = I(X;T) − β I(T;Y), minimized over stochastic encoders p(t|x), or equivalently as minimizing I(X;T) subject to a constraint on I(T;Y). Rooted in rate‑distortion theory, the IB defines sufficiency in information terms (T is sufficient for Y if I(T;Y) = I(X;Y)) and characterizes a continuum of optimal encoders along a compression–relevance tradeoff controlled by β.

In machine learning (ML), the IB provides a normative account of feature extraction and of supervised and unsupervised representation learning, and it has been used to analyze and regularize deep neural networks by interpreting hidden layers as progressively compressed predictors. Practical variants, notably the variational information bottleneck, make the approach scalable by replacing exact mutual‑information terms with tractable variational bounds. Empirical application nonetheless faces challenges: mutual information is hard to estimate in high dimensions, I(X;T) is ill‑defined (infinite or vacuous) for deterministic networks with continuous inputs, and noise or stochastic encoders must be introduced to make the information measures meaningful.

The IB connects to broader theoretical frameworks, including minimum description length (MDL), PAC‑Bayes, and renormalization‑group analogies, and it has spurred ongoing debate about its explanatory power for deep learning training dynamics and generalization.
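To make the variational variant concrete, the sketch below shows a minimal variational information bottleneck classifier in PyTorch, in the spirit of Alemi et al. The names (VIB, vib_loss), the architecture, and the hyperparameters are illustrative assumptions rather than a reference implementation. Note that, following the usual VIB convention, β here weights the compression (KL) term, the reverse of its role in the Lagrangian above.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VIB(nn.Module):
    """Minimal variational information bottleneck classifier (illustrative).

    The encoder outputs a Gaussian q(t|x); the decoder predicts y from a
    sample t. The KL term to a standard-normal prior upper-bounds I(X;T),
    and the decoder's cross-entropy lower-bounds I(T;Y) up to a constant.
    """

    def __init__(self, in_dim: int, bottleneck_dim: int, num_classes: int):
        super().__init__()
        self.encoder = nn.Linear(in_dim, 2 * bottleneck_dim)  # -> (mu, logvar)
        self.decoder = nn.Linear(bottleneck_dim, num_classes)

    def forward(self, x):
        mu, logvar = self.encoder(x).chunk(2, dim=-1)
        # Reparameterization trick: t ~ N(mu, sigma^2), differentiable in (mu, logvar)
        t = mu + torch.randn_like(mu) * torch.exp(0.5 * logvar)
        return self.decoder(t), mu, logvar

def vib_loss(logits, y, mu, logvar, beta: float):
    # Cross-entropy: variational surrogate for maximizing I(T;Y) (up to H(Y))
    ce = F.cross_entropy(logits, y)
    # Analytic KL( N(mu, sigma^2) || N(0, I) ): variational upper bound on I(X;T)
    kl = 0.5 * (mu.pow(2) + logvar.exp() - logvar - 1).sum(dim=-1).mean()
    return ce + beta * kl  # beta weights compression (VIB convention)

# Toy usage: dimensions and beta are arbitrary placeholder values.
model = VIB(in_dim=784, bottleneck_dim=32, num_classes=10)
x, y = torch.randn(64, 784), torch.randint(0, 10, (64,))
logits, mu, logvar = model(x)
loss = vib_loss(logits, y, mu, logvar, beta=1e-3)
loss.backward()
```

Sweeping β traces out the compression–relevance curve: larger values force more compression at the cost of predictive accuracy.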
First published in 1999 (Tishby, Pereira & Bialek), the idea attracted renewed and wider attention in the mid‑2010s (roughly 2014–2018) as researchers applied and adapted IB principles to interpret and regularize deep networks, notably via variational methods and information‑theoretic analyses of training dynamics.
Key contributors include Naftali Tishby, Fernando C. Pereira and William Bialek (originators of the IB method, 1999); Alexander A. Alemi and collaborators (prominent for the variational information bottleneck and scalable implementations); Ravid Shwartz‑Ziv and Naftali Tishby (interpreting deep network training through the IB); and a broader community of information‑theory and machine‑learning researchers who have extended, critiqued, and applied IB ideas, including empirical critiques and refinements by Saxe and colleagues and work connecting the IB to PAC‑Bayes and compression‑based generalization.
