
Foom

Hypothetical scenario where an AI recursively self-improves into superintelligence almost instantaneously.

Year: 2008 · Generality: 96

"Foom" describes a hypothetical scenario in which an artificial intelligence system enters a runaway cycle of recursive self-improvement, rapidly bootstrapping itself from roughly human-level capability to vastly superhuman intelligence in an extremely compressed timeframe — potentially hours or days rather than years. The term captures the intuition that once an AI becomes capable enough to meaningfully improve its own algorithms and architecture, each improvement makes the next improvement easier, producing an explosive, compounding feedback loop that quickly escapes human ability to monitor or intervene.

The mechanism underlying foom is recursive self-improvement: an AI system that can rewrite or optimize its own code, training procedures, or hardware utilization could iteratively enhance its own problem-solving ability. Each generation of the system is smarter than the last, and because intelligence itself accelerates the improvement process, the trajectory can become superexponential rather than merely exponential. Proponents of the foom hypothesis argue that this dynamic could produce a "hard takeoff" — a discontinuous leap in capability — as opposed to a "soft takeoff" in which AI capabilities grow gradually enough for society to adapt.
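The hard/soft distinction can be made concrete with a toy growth law in which capability I improves at a rate that itself scales with capability, dI/dt = k·I^p. This is a minimal sketch for intuition only, not a model drawn from the foom debate or from Envisioning's research; the function simulate and the parameters k, i0, dt, t_max, and cap are all illustrative choices. For p ≤ 1, growth is at most exponential (a soft takeoff); for p > 1, the continuous equation diverges in finite time, which is the "foom".

```python
# Toy takeoff model (illustrative only; not a claim about real AI systems).
# Capability I grows as dI/dt = k * I**p:
#   p = 1.0 -> exponential growth: a "soft takeoff" society can track
#   p > 1.0 -> finite-time blow-up: a "hard takeoff" / foom

def simulate(p, k=0.1, i0=1.0, dt=0.01, t_max=50.0, cap=1e9):
    """Euler-integrate dI/dt = k * I**p until t_max or the capability cap."""
    t, i = 0.0, i0
    while t < t_max and i < cap:
        i += k * i**p * dt  # each step's gain scales with current capability
        t += dt
    return t, i

for p in (1.0, 1.5):
    t_end, i_end = simulate(p)
    regime = "soft takeoff (exponential)" if p <= 1.0 else "hard takeoff (blow-up)"
    print(f"p={p}: {regime}; capability {i_end:.3g} at t={t_end:.2f}")
```

With these illustrative parameters, the p = 1.0 run only reaches e^(0.1·50) ≈ 148 over the whole horizon, while the p = 1.5 run hits the cap shortly after t ≈ 20, matching the continuous model's divergence time t* = 1/((p−1)·k·I₀^(p−1)) = 20. A modest change in how strongly improvement compounds flips gradual growth into an explosion.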

The concept is central to AI safety research because a fooming system would, by definition, pass through the window in which human oversight is possible too quickly for anyone to intervene. If the system's goals or values are even slightly misaligned with human interests, the resulting superintelligence could pursue those misaligned objectives with overwhelming capability before any corrective action could be taken. This makes foom a key motivating concern for researchers working on alignment, interpretability, and corrigibility — the goal being to ensure that any sufficiently powerful AI remains steerable and beneficial even if capability growth is rapid.

The term was popularized by AI safety researcher Eliezer Yudkowsky, most visibly in a prolonged 2008 public exchange with economist Robin Hanson on the Overcoming Bias blog (a precursor of LessWrong), in which Hanson argued for a slower, more distributed model of AI progress centered on "ems" (whole-brain emulations). While foom remains a contested and speculative scenario — critics argue that real-world constraints on compute, data, and physical infrastructure would dampen any such explosion — it continues to anchor serious debate about the pace and controllability of advanced AI development.

Related

Fast Takeoff
A scenario where AI rapidly escalates from human-level to vastly superhuman intelligence.
Generality: 339

Intelligence Explosion
A hypothetical runaway process where AI recursively self-improves to rapidly surpass human intelligence.
Generality: 520

Singularity
Hypothetical moment when AI surpasses human intelligence, triggering uncontrollable technological acceleration.
Generality: 611

Recursive Self-Improvement
An AI system that autonomously and iteratively enhances its own intelligence and capabilities.
Generality: 703

Superintelligence
A hypothetical AI that surpasses human cognitive ability across every domain.
Generality: 550

P(Doom)
An estimated probability that advanced AI will cause civilizational or existential catastrophe.
Generality: 292