Envisioning is an emerging technology research institute and advisory.


Artificial Superintelligence

AI systems that exceed human intelligence across all cognitive domains and capabilities

Artificial superintelligence (ASI) refers to AI systems that significantly surpass human intelligence across all domains of cognitive ability, including scientific creativity, general wisdom, and social skills. Unlike artificial general intelligence (AGI), which matches human-level intelligence, ASI would exceed human capabilities in every measurable way. Such systems could potentially improve themselves recursively, leading to rapid capability growth that could quickly outpace human comprehension and control.
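The recursive self-improvement dynamic described above can be sketched as a toy growth model. This is purely illustrative: the update rule, parameters, and function name are assumptions for the sketch, not a forecast or an established model of AI capability growth.

```python
# Toy model of recursive self-improvement: each step's capability gain
# is proportional to the system's current capability, so gains compound.
def capability_trajectory(c0: float, rate: float, steps: int) -> list[float]:
    """Return capability levels where each step improves the system
    in proportion to its current capability (a feedback loop)."""
    levels = [c0]
    for _ in range(steps):
        c = levels[-1]
        levels.append(c * (1.0 + rate * c))  # smarter system -> faster gains
    return levels

# With feedback, growth is super-exponential; applying the same rate to a
# fixed baseline (no feedback) yields only linear growth for comparison.
feedback = capability_trajectory(1.0, 0.1, 10)
linear = [1.0 + 0.1 * t for t in range(11)]
print(f"with feedback after 10 steps: {feedback[-1]:.2f}")
print(f"without feedback after 10 steps: {linear[-1]:.2f}")
```

The gap between the two trajectories widens at every step, which is the intuition behind the claim that recursive improvement "could quickly outpace human comprehension and control."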

The emergence of ASI could represent a fundamental transformation of human civilization, potentially solving problems that have eluded humanity for centuries—disease, aging, climate change, resource scarcity—while also posing existential risks if not developed and controlled carefully. ASI could accelerate scientific and technological progress beyond human capacity, potentially making human researchers obsolete in many fields. The technology raises profound questions about control, alignment with human values, and the future role of humanity in a world with superintelligent entities.

At TRL 2, artificial superintelligence remains theoretical, with no clear path to development and active debate about whether it's even possible or desirable. Research in AI safety, alignment, and control is exploring how such systems might be developed safely, though many experts believe we're decades or longer away from ASI, if it's achievable at all. The technology faces fundamental challenges including understanding intelligence itself, ensuring AI systems remain aligned with human values as they become more capable, and developing control mechanisms for systems that may be far more intelligent than their creators. However, given the potential impact—both positive and negative—research into ASI safety and development is considered critical. If ASI is eventually developed, it could be humanity's most significant achievement or greatest challenge, fundamentally reshaping civilization in ways that are difficult to predict.

TRL: 2/9 (Theoretical)
Impact: 5/5
Investment: 5/5
Category: Software
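The assessment fields above (TRL maturity on a 1–9 scale, impact and investment on 1–5 scales) can be represented as a small record type. This is a hypothetical sketch: the class name, field names, and validation rules are illustrative assumptions, not Envisioning's actual data schema.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the assessment fields shown on this page.
@dataclass(frozen=True)
class TechnologySignal:
    name: str
    trl: int         # Technology Readiness Level: 1 (theoretical) .. 9 (deployed)
    impact: int      # projected impact, 1 .. 5
    investment: int  # observed investment intensity, 1 .. 5
    category: str

    def __post_init__(self) -> None:
        # Reject scores outside the scales used on the page.
        if not 1 <= self.trl <= 9:
            raise ValueError("TRL must be between 1 and 9")
        if not (1 <= self.impact <= 5 and 1 <= self.investment <= 5):
            raise ValueError("impact and investment must be between 1 and 5")

asi = TechnologySignal(
    name="Artificial Superintelligence",
    trl=2, impact=5, investment=5, category="Software",
)
print(asi)
```

Making the record frozen keeps an assessment immutable once recorded; a revised score would be a new record rather than an edit in place.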

Related Organizations

Safe Superintelligence Inc. (United States · Startup · Developer · 100%)
Founded by Ilya Sutskever to focus exclusively on building safe superintelligence.

Anthropic (United States · Company · Developer · 95%)
An AI safety and research company developing Constitutional AI to align models with human values.

Google DeepMind (United Kingdom · Research Lab · Developer · 95%)
Developers of the Gemini family of models, which are trained from the start to be multimodal across text, images, video, and audio.

OpenAI (United States · Company · Developer · 95%)
Creator of GPT-4o, a natively multimodal model capable of reasoning across audio, vision, and text in real time.

Alignment Research Center (ARC) (United States · Nonprofit · Researcher · 90%)
Conducts theoretical research and model evaluations to align future advanced AI systems.

Machine Intelligence Research Institute (MIRI) (United States · Nonprofit · Researcher · 90%)
Research organization focused on the mathematical foundations of safe artificial superintelligence.

Center for Human-Compatible AI (CHAI) (United States · Research Lab · Researcher · 85%)
Academic research center at UC Berkeley focused on ensuring AI systems remain beneficial to humans.

Conjecture (United Kingdom · Startup · Researcher · 85%)
AI alignment startup focusing on 'Cognitive Emulation' and making systems bounded and interpretable.

Future of Life Institute (United States · Nonprofit · Standards Body · 80%)
Focuses on existential risks and the long-term future of life, including the ethical treatment of advanced AI systems.

Supporting Evidence

Evidence data is not available for this technology yet.

Connections

Agentic AI (Software)
AI systems that autonomously plan, decide, and adapt to achieve goals without constant human input
TRL: 6/9 · Impact: 5/5 · Investment: 5/5
