
Envisioning is an emerging technology research institute and advisory.


2011 — 2026


P(Doom)

An estimated probability that advanced AI will cause civilizational or existential catastrophe.

Year: 2022 · Generality: 292

P(doom) is an informal but increasingly prominent concept in AI safety discourse, referring to a researcher's subjective probability estimate that advanced artificial intelligence will lead to an existential or civilizational catastrophe: an outcome that either eliminates humanity or permanently and drastically curtails its long-term potential. Rather than a formally derived statistical measure, it functions as shorthand for communicating one's overall risk assessment of transformative AI development. Individual estimates vary enormously across the research community, ranging from fractions of a percent to near-certainty.

The concept draws on expected value reasoning and decision theory: even a small P(doom) estimate, when multiplied against the magnitude of potential harm, can justify substantial investment in AI safety research. Researchers use it to anchor conversations about how seriously to treat tail risks — low-probability but catastrophic outcomes — when designing AI systems, governance frameworks, and alignment strategies. It implicitly encodes assumptions about timelines to powerful AI, the tractability of alignment, and the likelihood that misaligned systems could acquire sufficient capability to cause irreversible harm.
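The expected-value logic above can be made concrete with a minimal sketch. All numbers here are hypothetical, chosen only to illustrate the arithmetic, not to endorse any particular estimate:

```python
# Illustrative sketch of expected-value reasoning with P(doom).
# The harm magnitude and probability figures are hypothetical.

def expected_loss(p_doom: float, harm_magnitude: float) -> float:
    """Expected loss = subjective probability of catastrophe x magnitude of harm."""
    return p_doom * harm_magnitude

# Stand-in units for the value at stake; the point is only that it is enormous.
harm = 1e12

# Even widely divergent P(doom) estimates, multiplied by a very large
# potential harm, all yield large expected losses.
for p in (0.01, 0.10, 0.50):
    print(f"P(doom) = {p:.0%} -> expected loss = {expected_loss(p, harm):.2e}")

# If safety research reduces P(doom) by even 0.1 percentage points,
# the expected benefit is that reduction times the harm magnitude.
reduction = 0.001
print(f"Benefit of a 0.1pp reduction: {expected_loss(reduction, harm):.2e}")
```

This is why even researchers with low P(doom) estimates may still regard safety investment as justified: the expected benefit scales with the magnitude of the harm, not only with the probability.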

P(doom) estimates are inherently personal and epistemically uncertain, reflecting deep disagreements about the nature of intelligence, the difficulty of value alignment, and the robustness of human oversight mechanisms. Prominent AI safety researchers and lab leaders have publicly shared estimates ranging from below 5% to above 50%, and these figures often shift as new capabilities emerge or new alignment techniques are developed. Critics argue the concept can be misleading if treated as a precise probability rather than a rough intuition, while proponents contend that making such estimates explicit forces clearer thinking about assumptions and priorities.

The term gained significant traction in public and technical discourse around 2022–2023, coinciding with rapid advances in large language models and renewed mainstream attention to AI risk. It now serves as a cultural and rhetorical touchstone in debates about AI governance, the pace of capability development, and the urgency of safety research — making it a useful, if imprecise, lens for understanding how seriously different actors take long-term AI risk.

Related

Foom

Hypothetical scenario where an AI recursively self-improves into superintelligence almost instantaneously.

Generality: 96

Catastrophic Risk

The potential for AI systems to cause severe, large-scale harm or societal disruption.

Generality: 745

Shoggoth

A meme depicting advanced AI as a powerful, alien, and unknowable entity.

Generality: 19

Proliferation Problem

Exponential growth in possible states or actions that makes computation infeasibly complex.

Generality: 496

PDS (Psychological Depth Scale)

A psychometric tool measuring the complexity and depth of an individual's inner psychological life.

Generality: 94

Exponential Divergence

When small perturbations amplify exponentially across iterations, destabilizing AI systems.

Generality: 339