Envisioning is an emerging technology research institute and advisory.


ASI (Artificial Superintelligence)

A hypothetical AI that surpasses human cognitive ability across every domain.

Year: 1965 · Generality: 701

Artificial Superintelligence (ASI) refers to a hypothetical form of machine intelligence that would exceed the cognitive performance of humans across virtually all domains — including scientific reasoning, creative problem-solving, social understanding, and strategic planning. Unlike Artificial General Intelligence (AGI), which aims to match human-level ability, ASI implies a system so capable that it could recursively improve its own design, potentially accelerating its intelligence far beyond any human or collective human effort. It remains a theoretical construct, but one that anchors serious research in AI safety and long-term risk analysis.

The mechanisms by which ASI might emerge are debated, but most frameworks involve either a rapid recursive self-improvement loop — where an AGI-level system rewrites and optimizes its own architecture — or a slower accumulation of capability through scaled learning systems. Either path raises the question of alignment: whether such a system would pursue goals compatible with human values and survival. This challenge, often called the control problem or alignment problem, is considered one of the most consequential open questions in AI research, since a misaligned superintelligent system could pursue objectives in ways that are catastrophic and irreversible.
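The intuition behind an "explosive" versus a gradual path can be illustrated with a toy growth model (a purely hypothetical sketch: the function name, the exponent `alpha`, and the `rate` parameter are illustrative assumptions, not empirical claims about real AI systems):

```python
# Toy model of recursive self-improvement (illustrative only; all
# parameters are hypothetical, not empirical estimates).
# Each step, capability c grows by rate * c**alpha:
#   alpha > 1 -> accelerating growth (an "intelligence explosion")
#   alpha = 1 -> exponential growth
#   alpha < 1 -> diminishing returns, sub-exponential growth

def self_improvement_trajectory(alpha: float, steps: int = 20,
                                rate: float = 0.1, c0: float = 1.0) -> list[float]:
    """Iterate c <- c + rate * c**alpha and return the full trajectory."""
    c = c0
    trajectory = [c]
    for _ in range(steps):
        c = c + rate * c ** alpha
        trajectory.append(c)
    return trajectory

explosive = self_improvement_trajectory(alpha=1.5)
diminishing = self_improvement_trajectory(alpha=0.5)
print(f"alpha=1.5 final capability: {explosive[-1]:.2f}")
print(f"alpha=0.5 final capability: {diminishing[-1]:.2f}")
```

The point of the sketch is qualitative: whether self-improvement compounds (alpha above 1) or saturates (alpha below 1) determines whether capability diverges or merely grows steadily, which is roughly the distinction between the fast "explosion" and slow "accumulation" scenarios described above.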

The concept gained significant traction in AI discourse following philosopher Nick Bostrom's 2014 book Superintelligence: Paths, Dangers, Strategies, which formalized many of the risks and scenarios surrounding ASI development. Bostrom's work, alongside contributions from researchers at organizations like the Machine Intelligence Research Institute (MIRI) and later OpenAI and Anthropic, helped establish AI safety as a legitimate academic and engineering discipline. The term itself draws on earlier ideas from I.J. Good's 1965 notion of an "intelligence explosion," but its modern framing is firmly rooted in contemporary machine learning trajectories.

While no system today comes close to ASI, the concept shapes how researchers prioritize safety, interpretability, and governance in current AI development. It serves as a long-horizon reference point for evaluating the stakes of incremental progress in large language models, reinforcement learning, and autonomous systems. Whether ASI is decades away, centuries away, or fundamentally impossible remains an open and actively contested question.

Related

  • Superintelligence — A hypothetical AI that surpasses human cognitive ability across every domain. (Generality: 550)
  • AGI (Artificial General Intelligence) — A hypothetical AI system capable of performing any intellectual task a human can. (Generality: 895)
  • Singularity — Hypothetical moment when AI surpasses human intelligence, triggering uncontrollable technological acceleration. (Generality: 611)
  • Human-Level AI — AI systems capable of performing any intellectual task as well as humans. (Generality: 802)
  • Sovereign AI — An AI system capable of autonomous decision-making and action independent of human oversight. (Generality: 384)
  • AMI (Advanced Machine Intelligence) — AI systems capable of complex cognitive tasks integrating reasoning, perception, and adaptive decision-making. (Generality: 692)