A scenario where AI rapidly escalates from human-level to vastly superhuman intelligence.
Fast takeoff refers to a hypothetical scenario in which an artificial intelligence system transitions from roughly human-level capability to vastly superhuman capability over an extremely compressed timeframe — potentially hours, days, or weeks rather than years or decades. The core mechanism driving this scenario is recursive self-improvement: once an AI becomes capable enough to meaningfully enhance its own algorithms, architecture, or underlying hardware, each improvement enables faster and better subsequent improvements, producing an explosive feedback loop. This stands in contrast to a "slow takeoff," where capability gains accumulate gradually enough for humans to observe, adapt, and intervene.
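The feedback loop described above can be illustrated with a deliberately simple toy model (the exponent, growth rate, and step count here are arbitrary modeling assumptions, not empirical claims about AI). Capability grows at a rate proportional to a power of current capability: a feedback exponent of 1 yields ordinary exponential growth, while an exponent above 1 yields super-exponential growth of the kind fast-takeoff arguments envision.

```python
# Toy model of recursive self-improvement. Illustrative only: the
# feedback exponent, rate, and step count are arbitrary assumptions.
def simulate(feedback: float, steps: int = 100, rate: float = 0.01) -> list[float]:
    """Grow capability at a rate proportional to capability**feedback.

    feedback == 1.0 -> exponential growth ("slow takeoff" flavor)
    feedback >  1.0 -> super-exponential growth that approaches a
                       finite-time blowup ("fast takeoff" flavor)
    """
    capability = 1.0  # normalized so that human level = 1.0
    history = [capability]
    for _ in range(steps):
        # Each step's gain scales with a power of current capability,
        # so gains compound on earlier gains.
        capability += rate * capability ** feedback
        history.append(capability)
    return history

slow = simulate(feedback=1.0)  # gains scale linearly with capability
fast = simulate(feedback=2.0)  # gains compound on themselves
```

Under these toy assumptions, the two trajectories look similar for a long stretch and then diverge sharply: the linear-feedback run roughly triples over 100 steps, while the super-linear run races away — the qualitative pattern fast-takeoff proponents point to.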
The plausibility of fast takeoff depends heavily on assumptions about the nature of intelligence and the bottlenecks constraining AI progress. Proponents argue that intelligence is highly compressible — that a sufficiently capable system could rapidly discover optimizations that took human researchers decades to find — and that software self-modification could outpace any physical or institutional constraints. Skeptics counter that real-world limitations such as compute availability, data requirements, and the difficulty of a system verifying its own improvements would naturally slow any such acceleration, making a gradual transition far more likely.
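The skeptics' bottleneck argument can be sketched with a similar toy growth model (again illustrative only: the logistic-style throttle and the specific ceiling value are modeling assumptions). Capability still compounds on itself, but each improvement is damped by proximity to a fixed resource ceiling standing in for compute, data, or verification limits, turning a would-be blowup into an S-curve.

```python
# Toy illustration of a resource-bottlenecked takeoff. Assumptions:
# a single hard ceiling and logistic-style damping, both arbitrary.
def simulate_bounded(feedback: float, ceiling: float = 100.0,
                     steps: int = 500, rate: float = 0.01) -> list[float]:
    """Self-reinforcing growth throttled by an external resource limit.

    The throttle term (1 - capability/ceiling) shrinks toward zero as
    capability approaches the ceiling, so growth saturates instead of
    diverging.
    """
    capability = 1.0  # normalized so that human level = 1.0
    history = [capability]
    for _ in range(steps):
        throttle = max(0.0, 1.0 - capability / ceiling)
        capability += rate * capability ** feedback * throttle
        history.append(capability)
    return history

bounded = simulate_bounded(feedback=2.0)
```

Even with the strongly super-linear feedback exponent of 2, this trajectory flattens out near the ceiling rather than exploding — the gradual transition skeptics consider more likely. Which picture is closer to reality depends entirely on whether such bottlenecks bind in practice, which the toy model cannot settle.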
The concept is central to AI safety research because a fast takeoff dramatically compresses the window available for human oversight and course correction. If a system surpasses human cognitive ability before researchers understand its goals or values, misalignment between the system's objectives and human welfare could become effectively irreversible. Thinkers like Eliezer Yudkowsky and Nick Bostrom have argued this makes the alignment problem uniquely urgent: unlike most technological risks, a fast takeoff may offer no second chances. Whether or not fast takeoff is considered likely, it has shaped research priorities around interpretability, corrigibility, and the formal specification of AI objectives.