Two cognitive modes: fast and intuitive versus slow and deliberate thinking.
System 1 and System 2 are a conceptual framework from cognitive psychology describing two distinct modes of human thought. System 1 operates automatically, rapidly, and with minimal conscious effort — it handles pattern recognition, intuitive judgments, and routine decisions through heuristics shaped by experience. System 2, by contrast, is slow, deliberate, and effortful, engaging when tasks require logical reasoning, careful analysis, or navigating genuinely novel situations. The two systems are not anatomically distinct brain regions but rather functional descriptions of how cognition shifts between automatic and controlled processing depending on context and cognitive load.
In machine learning research, this framework has become a productive lens for evaluating and designing AI systems. Early deep learning models were often characterized as System 1-like: fast pattern matchers that excel at perceptual tasks but struggle with multi-step reasoning or out-of-distribution problems. This framing motivated significant research into architectures and training paradigms that could exhibit more System 2-like behavior — including chain-of-thought prompting, scratchpad reasoning, and neuro-symbolic approaches that combine learned representations with explicit logical inference. The analogy helps researchers articulate why models that perform impressively on benchmarks can still fail at tasks requiring compositional or causal reasoning.
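The chain-of-thought idea mentioned above can be illustrated with a minimal sketch. The function below merely assembles prompt text; the `build_prompt` name and the "Let's think step by step" cue are illustrative conventions, not a fixed API, and real systems would pass the resulting string to a language model.

```python
def build_prompt(question: str, chain_of_thought: bool = False) -> str:
    """Assemble a prompt, optionally appending a chain-of-thought cue.

    The cue phrase is illustrative; in practice it invites the model to
    emit intermediate reasoning steps (System 2-style deliberation)
    before committing to a final answer.
    """
    prompt = f"Q: {question}\nA:"
    if chain_of_thought:
        prompt += " Let's think step by step."
    return prompt

# Direct, System 1-style prompt: the model answers in one shot.
direct = build_prompt("A bat and a ball cost $1.10 in total. The bat costs "
                      "$1.00 more than the ball. How much does the ball cost?")

# System 2-style prompt: the same question with a reasoning cue appended.
deliberate = build_prompt("A bat and a ball cost $1.10 in total. The bat costs "
                          "$1.00 more than the ball. How much does the ball cost?",
                          chain_of_thought=True)
```

The bat-and-ball question is a classic example where fast, intuitive answers ("10 cents") are wrong and explicit step-by-step reasoning helps.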
The distinction also informs debates about AI safety and reliability. System 1-style failures — confident but wrong outputs driven by spurious correlations — mirror the cognitive biases Kahneman documented in humans, such as the availability heuristic and anchoring effects. Researchers studying large language models have drawn explicit parallels, noting that models can produce fast, fluent responses that mask shallow understanding. Efforts to build more robust AI increasingly focus on enabling systems to "slow down" and verify their own reasoning, analogous to System 2 engagement in humans.
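One simple form of the "slow down and verify" pattern is a draft-then-check loop: the system produces an answer, then prompts itself to critique it before committing. The sketch below is an assumed shape, not a standard API — `model` stands in for any text-in/text-out language model callable, and the prompt wording is illustrative.

```python
from typing import Callable

def answer_with_verification(
    question: str,
    model: Callable[[str], str],
    max_rounds: int = 3,
) -> str:
    """Draft an answer, then ask the model to verify it before returning.

    `model` is any text-in/text-out callable (a stand-in for an LLM API).
    Each verification round either accepts the draft or replaces it with
    a revised answer, loosely mirroring System 2 checking a System 1 guess.
    """
    # Fast first pass: a direct, System 1-style answer.
    draft = model(f"Answer concisely: {question}")
    for _ in range(max_rounds):
        # Deliberate pass: ask the model to audit its own draft.
        verdict = model(
            f"Question: {question}\n"
            f"Proposed answer: {draft}\n"
            "Is this answer correct? Reply CORRECT, or reply with a "
            "corrected answer."
        )
        if verdict.strip().upper().startswith("CORRECT"):
            return draft
        # Treat the critique's revision as the new draft and re-check it.
        draft = verdict
    return draft
```

The extra verification calls trade latency and compute for reliability, which is exactly the System 1/System 2 trade-off the analogy highlights.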
The terminology was popularized by Daniel Kahneman's 2011 book Thinking, Fast and Slow, building on decades of collaborative work with Amos Tversky on heuristics and biases. While the underlying dual-process theory has older roots in psychology, its adoption as an organizing metaphor in AI and ML discourse accelerated substantially in the 2010s as researchers sought richer frameworks for understanding model capabilities and limitations.