
Envisioning is an emerging technology research institute and advisory.



Fast Takeoff

A scenario where AI rapidly escalates from human-level to vastly superhuman intelligence.

Year: 1998 · Generality: 339

Fast takeoff refers to a hypothetical scenario in which an artificial intelligence system transitions from roughly human-level capability to far-exceeding human intelligence over an extremely compressed timeframe — potentially hours, days, or weeks rather than years or decades. The core mechanism driving this scenario is recursive self-improvement: once an AI becomes capable enough to meaningfully enhance its own algorithms, architecture, or underlying hardware, each improvement enables faster and better subsequent improvements, producing an explosive feedback loop. This stands in contrast to a "slow takeoff," where capability gains accumulate gradually enough for humans to observe, adapt, and intervene.
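The feedback loop described above can be sketched as a toy simulation (an illustrative assumption, not a model from the source): in a "slow" regime each step adds a fixed increment of capability, while in a recursive regime the size of each improvement scales with current capability, so gains compound.

```python
# Toy model of takeoff dynamics. Capability is normalized so that
# human level = 1.0. In the recursive regime, the system's current
# capability feeds back into how large each improvement is.

def takeoff(steps: int, rate: float, recursive: bool) -> list[float]:
    c = 1.0          # starting capability (human level)
    history = [c]
    for _ in range(steps):
        # recursive: gain proportional to current capability (compounding)
        # non-recursive: fixed gain per step (steady external progress)
        gain = rate * (c if recursive else 1.0)
        c += gain
        history.append(c)
    return history

slow = takeoff(20, 0.1, recursive=False)  # linear accumulation
fast = takeoff(20, 0.1, recursive=True)   # compounding feedback loop
```

Even with identical per-step improvement rates, the recursive trajectory pulls away geometrically, which is the intuition behind the "explosive" character of the fast-takeoff scenario.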

The plausibility of fast takeoff depends heavily on assumptions about the nature of intelligence and the bottlenecks constraining AI progress. Proponents argue that intelligence is highly compressible — that a sufficiently capable system could rapidly discover optimizations that took human researchers decades to find — and that software self-modification could outpace any physical or institutional constraints. Skeptics counter that real-world limitations such as compute availability, data requirements, and the difficulty of verifying one's own improvements would naturally slow any such acceleration, making a gradual transition far more likely.
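The skeptics' bottleneck argument can be illustrated with the same toy loop (again an illustrative sketch, not from the source): if each improvement is discounted by the headroom remaining under a fixed resource ceiling, such as available compute, the compounding curve flattens into a gradual, logistic-style transition rather than an explosion.

```python
# Toy sketch: compounding self-improvement throttled by a hard
# resource ceiling. "headroom" shrinks as capability approaches the
# ceiling, so gains taper off instead of exploding.

def capped_takeoff(steps: int, rate: float, ceiling: float) -> float:
    c = 1.0  # starting capability (human level)
    for _ in range(steps):
        headroom = max(0.0, 1.0 - c / ceiling)  # fraction of resources left
        c += rate * c * headroom                # gains discounted by headroom
    return c

# Growth saturates below the ceiling rather than running away.
final = capped_takeoff(steps=200, rate=0.1, ceiling=10.0)
```

The design choice here is deliberate: the only difference from the uncapped loop is the `headroom` factor, making the point that a single physical constraint is enough to turn super-exponential growth into a saturating curve.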

The concept is central to AI safety research because a fast takeoff dramatically compresses the window available for human oversight and course correction. If a system surpasses human cognitive ability before researchers understand its goals or values, misalignment between the system's objectives and human welfare could become effectively irreversible. Thinkers like Eliezer Yudkowsky and Nick Bostrom have argued this makes the alignment problem uniquely urgent: unlike most technological risks, a fast takeoff may offer no second chances. Whether or not fast takeoff is considered likely, it has shaped research priorities around interpretability, corrigibility, and the formal specification of AI objectives.

Related

Foom

Hypothetical scenario where an AI recursively self-improves into superintelligence almost instantaneously.

Generality: 96
Intelligence Explosion

A hypothetical runaway process where AI recursively self-improves to rapidly surpass human intelligence.

Generality: 520
Singularity

Hypothetical moment when AI surpasses human intelligence, triggering uncontrollable technological acceleration.

Generality: 611
Superintelligence

A hypothetical AI that surpasses human cognitive ability across every domain.

Generality: 550
Discontinuous Jump

A sudden, dramatic leap in AI capability that defies prior incremental trends.

Generality: 339
Recursive Self-Improvement

An AI system that autonomously and iteratively enhances its own intelligence and capabilities.

Generality: 703