Envisioning is an emerging technology research institute and advisory.

2011 — 2026


Superintelligence

A hypothetical AI that surpasses human cognitive ability across every domain.

Year: 1998 · Generality: 550

Superintelligence refers to a hypothetical form of artificial intelligence that exceeds the cognitive performance of the best human minds across virtually all domains — including scientific reasoning, creative problem-solving, strategic planning, and social intelligence. Unlike narrow AI systems optimized for specific tasks, or even artificial general intelligence (AGI) capable of human-level performance across diverse challenges, superintelligence implies a qualitative leap beyond human cognition entirely. The concept is not merely about processing speed or memory capacity, but about a depth and breadth of reasoning that humans cannot match or fully anticipate.

A central concern in superintelligence research is the mechanism by which such a system might emerge. One prominent pathway involves recursive self-improvement: an AI system that can analyze and enhance its own architecture could iteratively increase its capabilities at an accelerating rate, producing what theorists call an "intelligence explosion." This trajectory raises the possibility that the transition from AGI to superintelligence could be rapid and difficult to control, with the resulting system pursuing goals in ways that are opaque or misaligned with human values. The point at which this transition becomes irreversible is often associated with the concept of the technological singularity.
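The accelerating dynamic described above is often illustrated with a toy growth model. The sketch below is purely illustrative (the function name, parameters, and values are all invented for this example, not drawn from any real system): if each improvement cycle multiplies capability by a factor that itself grows with current capability, growth is faster than exponential, and the doubling time shrinks with each cycle.

```python
# Toy model of recursive self-improvement (illustrative only).
# Assumption: a system's rate of improvement scales with its current
# capability -- a stand-in for "smarter systems are better at making
# themselves smarter." No claim is made about real AI systems.

def capability_trajectory(c0=1.0, k=0.05, steps=30):
    """Return capability after each self-improvement cycle.

    c0    -- initial capability (human-level = 1.0, by convention here)
    k     -- feedback strength: how much capability aids self-improvement
    steps -- number of improvement cycles to simulate
    """
    trajectory = [c0]
    c = c0
    for _ in range(steps):
        c = c * (1 + k * c)  # improvement factor grows with capability
        trajectory.append(c)
    return trajectory

traj = capability_trajectory()
# Early cycles look unremarkable; late cycles diverge sharply --
# the qualitative signature theorists call an "intelligence explosion."
print("cycle 10:", round(traj[10], 2), "| cycle 25:", round(traj[25], 1))
```

The continuous analogue, dc/dt = k·c², reaches infinity in finite time, which is why the discrete trajectory stays near human level for many cycles and then runs away abruptly; this is the intuition behind claims that the AGI-to-superintelligence transition could be rapid and hard to anticipate.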

The practical and ethical stakes of superintelligence are enormous. A superintelligent system could, in principle, accelerate scientific discovery, solve intractable global problems, or optimize complex systems far beyond human capacity. However, the same capabilities that make it powerful also make alignment — ensuring the system reliably pursues goals beneficial to humanity — extraordinarily difficult. Researchers in AI safety treat the control problem as one of the most critical open challenges: how do you constrain or guide a system that may be more capable than its designers at circumventing constraints?

Though the concept has philosophical roots stretching back to early computability theory, it gained significant traction in the ML and AI safety communities following Nick Bostrom's systematic treatment in Superintelligence: Paths, Dangers, Strategies (2014). Today it anchors much of the discourse around long-term AI risk, shaping research agendas at organizations focused on AI alignment and existential safety.

Related

ASI (Artificial Superintelligence)

A hypothetical AI that surpasses human cognitive ability across every domain.

Generality: 701

Singularity

Hypothetical moment when AI surpasses human intelligence, triggering uncontrollable technological acceleration.

Generality: 611

Super Alignment

Ensuring superintelligent AI systems reliably align with human values at scale.

Generality: 550

Intelligence Explosion

A hypothetical runaway process where AI recursively self-improves to rapidly surpass human intelligence.

Generality: 520

AGI (Artificial General Intelligence)

A hypothetical AI system capable of performing any intellectual task a human can.

Generality: 895

Human-Level AI

AI systems capable of performing any intellectual task as well as humans.

Generality: 802