A cognitive bias causing humans to systematically underestimate exponential growth trajectories.
Exponential slope blindness is a cognitive bias in which people fail to intuitively grasp the implications of exponential growth, consistently underestimating how large an exponentially growing quantity will become. Because human intuition is calibrated for linear change — where each step adds a roughly constant amount — the early, seemingly modest phase of exponential growth feels unremarkable, while the later explosive acceleration arrives as a surprise. This mismatch between linear intuition and nonlinear reality is not merely an abstract curiosity; it has concrete consequences in domains ranging from pandemic modeling to technology forecasting to financial compounding.
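The gap between additive intuition and multiplicative reality can be made concrete with a small numeric sketch. The starting value, additive step, and doubling behavior below are illustrative assumptions, not figures from any particular domain:

```python
# Contrast linear (additive) and exponential (multiplicative) growth.
# Assumed scenario: a quantity starting at 100 that either adds 100 per
# step (the linear intuition) or doubles each step (actual exponential growth).

def linear(start, step, n):
    """Value after n steps of constant additive growth."""
    return start + step * n

def exponential(start, factor, n):
    """Value after n steps of constant multiplicative growth."""
    return start * factor ** n

for n in (1, 5, 10, 20):
    lin = linear(100, 100, n)
    exp = exponential(100, 2, n)
    print(f"step {n:2d}: linear={lin:>10,}  exponential={exp:>15,}  "
          f"ratio={exp / lin:,.0f}x")
```

After one step the two projections agree, which is exactly why the early phase feels unremarkable; by step 20 the exponential value exceeds the linear one by a factor of tens of thousands.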
In machine learning and AI development, exponential slope blindness is particularly consequential. Compute availability, model parameter counts, and benchmark performance have all followed roughly exponential trajectories over the past decade. Observers anchored to linear expectations repeatedly underestimated how rapidly capabilities would advance, leading to both premature dismissals of AI progress and insufficient preparation for its societal impacts. The bias also affects how practitioners reason about training costs, data requirements, and the scaling laws that govern large model behavior — all of which involve multiplicative rather than additive dynamics.
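One way to see how linear anchoring distorts forecasts of a multiplicative trend is to compare an exponential projection with the "extend last year's gain" heuristic. The one-year doubling time below is a hypothetical parameter chosen for illustration, not an empirical claim about compute or capability growth:

```python
# Compare an exponential projection against the linear heuristic of
# extending the first year's absolute gain. Doubling time is assumed,
# purely for illustration.

def project_exponential(current, doubling_time_years, horizon_years):
    """Value after horizon_years if the quantity doubles every doubling_time_years."""
    return current * 2 ** (horizon_years / doubling_time_years)

def project_linear(current, doubling_time_years, horizon_years):
    """Linear intuition: take the first year's gain and extend it additively."""
    first_year_gain = project_exponential(current, doubling_time_years, 1) - current
    return current + first_year_gain * horizon_years

for horizon in (2, 5, 10):
    e = project_exponential(1.0, 1.0, horizon)
    l = project_linear(1.0, 1.0, horizon)
    print(f"{horizon:2d} yr horizon: exponential={e:>7.0f}x  linear={l:>4.0f}x")
```

At a two-year horizon the two projections are still close; at ten years the linear heuristic undershoots by roughly two orders of magnitude, which is the pattern of underestimation described above.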
The mechanism behind the bias is well-grounded in cognitive science. Humans tend to use additive mental models as a default heuristic, and logarithmic perception of magnitude (Weber-Fechner law) further compresses the apparent difference between large numbers. When asked to extrapolate an exponential curve, most people produce estimates that are orders of magnitude too low after just a few doubling periods. Visualization tools, log-scale plots, and explicit doubling-time framing are among the interventions shown to partially correct for this distortion.
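The doubling-time framing mentioned above can be sketched directly: a compound growth rate converts to a doubling time via logarithms, and the widely used "rule of 70" back-of-envelope approximation gives nearly the same answer for small rates. The sample rates below are arbitrary illustrations:

```python
import math

# Doubling-time framing: convert a per-period compound growth rate into
# the number of periods needed to double.

def doubling_time(growth_rate):
    """Exact periods to double at compound rate growth_rate (e.g. 0.07 for 7%)."""
    return math.log(2) / math.log(1 + growth_rate)

def rule_of_70(growth_rate):
    """Back-of-envelope approximation: 70 divided by the percent growth rate."""
    return 70 / (growth_rate * 100)

for r in (0.01, 0.07, 0.40):
    print(f"{r:.0%} per period: doubles in {doubling_time(r):5.1f} periods "
          f"(rule of 70: {rule_of_70(r):5.1f})")
```

Expressing a trend as "it doubles every N periods" rather than "it grows by X% per period" makes the multiplicative structure explicit, which is why this framing helps counteract the additive default.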
In AI discourse, the term gained currency as researchers and commentators sought language to explain why both the public and domain experts so frequently misjudged the pace of progress in deep learning, language models, and related fields. Recognizing exponential slope blindness has become a practical concern for AI safety researchers, policymakers, and product strategists who must make decisions contingent on where exponential capability curves will be in two, five, or ten years — timescales where linear intuition is most dangerously misleading.