Envisioning is an emerging technology research institute and advisory.

2011 — 2026


Exascale Computing

Computing systems capable of performing at least one quintillion floating-point operations per second.

Year: 2022 · Generality: 627

Exascale computing refers to systems capable of executing at least one exaflop — 10¹⁸ floating-point operations per second — representing a thousandfold leap beyond petascale computing. This threshold is not merely a benchmark of raw speed; it marks a qualitative shift in what computational science can attempt. Problems that once required years of simulation time or were simply out of reach become tractable at exascale, including high-fidelity climate modeling, molecular dynamics at biological scales, and training or running the largest AI models without the bottlenecks imposed by distributed computing across slower infrastructure.
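The scale of that thousandfold leap can be made concrete with back-of-envelope arithmetic. In the sketch below, the 10²⁴-FLOP workload is an illustrative assumption, roughly the order of magnitude reported for recent frontier-model training runs, not a figure from this entry:

```python
# Time-to-solution for the same workload at petascale vs. exascale.
# The 1e24-FLOP budget is an illustrative assumption.
PETAFLOP_S = 1e15   # FLOP/s at petascale
EXAFLOP_S = 1e18    # FLOP/s at exascale

workload = 1e24     # hypothetical total compute budget, in FLOPs

days_peta = workload / PETAFLOP_S / 86_400
days_exa = workload / EXAFLOP_S / 86_400

print(f"at 1 PFLOP/s: {days_peta:,.0f} days")  # ~11,574 days (~32 years)
print(f"at 1 EFLOP/s: {days_exa:,.1f} days")   # ~11.6 days
```

A job that would occupy a petascale machine for decades fits into a two-week run at exascale, which is the sense in which previously intractable problems become tractable.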

For machine learning specifically, exascale systems matter because model scale and data volume have become primary drivers of capability. Training frontier large language models, running ensemble simulations for scientific discovery, and performing real-time inference over massive sensor networks all place demands that push even petascale clusters to their limits. Exascale hardware — typically built from hundreds of thousands of accelerators (GPUs or custom AI chips) interconnected with high-bandwidth fabrics — allows researchers to explore model architectures, dataset sizes, and training regimes that are otherwise economically or physically infeasible.
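How a cluster of that kind reaches the exaflop mark in aggregate can be sketched as follows; the accelerator count, per-device throughput, and scaling efficiency are all assumptions, loosely in the range reported for early exascale systems:

```python
# Aggregate throughput of a hypothetical accelerator cluster.
gpus = 37_000            # assumed accelerator count
flops_per_gpu = 50e12    # assumed sustained FLOP/s per device (50 TFLOP/s)
efficiency = 0.85        # assumed parallel-scaling efficiency of the fabric

aggregate_eflops = gpus * flops_per_gpu * efficiency / 1e18
print(f"aggregate: {aggregate_eflops:.2f} EFLOP/s")  # 1.57 EFLOP/s
```

The efficiency term is the crucial one: without high-bandwidth interconnects, the fraction of peak throughput actually delivered across tens of thousands of devices collapses.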

Achieving exascale performance introduces significant engineering challenges beyond raw compute. Memory bandwidth, interconnect latency, power consumption (exascale machines can draw 20–40 megawatts), and fault tolerance all require rethinking system design from the ground up. Software stacks must be parallelized across millions of cores simultaneously, and numerical precision must be carefully managed to maintain stability at scale. These constraints have spurred innovations in mixed-precision training, model parallelism, and energy-efficient chip design that have since propagated into mainstream ML infrastructure.
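The precision problem mentioned above can be illustrated with a toy model in pure Python. The `round_sig` helper is a hypothetical stand-in for a short floating-point mantissa, not real float16 arithmetic: rounding every intermediate sum to three significant digits makes a long accumulation stall, which is the failure mode mixed-precision schemes are designed to avoid.

```python
from math import floor, log10

# Toy illustration of low-precision accumulation (NOT real float16 math).
# round_sig rounds every intermediate result to 3 significant digits,
# loosely mimicking a short mantissa.
def round_sig(x: float, sig: int = 3) -> float:
    if x == 0:
        return 0.0
    return round(x, sig - 1 - floor(log10(abs(x))))

naive = 0.0
for _ in range(10_000):
    naive = round_sig(naive + 0.01)   # low-precision accumulator

exact = sum([0.01] * 10_000)          # full-precision accumulator

print(f"low-precision sum:  {naive:.1f}")  # 10.0 -- stalls far short
print(f"full-precision sum: {exact:.1f}")  # 100.0
```

Once the running sum reaches 10.0, each 0.01 addend is smaller than the coarsened format can register, so the sum stops growing. Mixed-precision training applies the remedy in reverse: values move through the system in a compact format, but accumulations happen in a wider one.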

The first confirmed exascale system, Frontier at Oak Ridge National Laboratory, came online in 2022 and was quickly applied to scientific AI workloads including protein structure prediction and fusion energy research. As exascale systems proliferate globally — with deployments in the United States, Europe, China, and Japan — they are expected to accelerate the frontier of AI research, particularly in scientific domains where simulation and learned models must be tightly integrated.

Related

HPC (High Performance Computing)

Aggregated computing infrastructure delivering processing power far beyond standard workstations.

Generality: 792
Planetary Scale System

AI platforms operating globally to address complex, worldwide challenges using massive data.

Generality: 520
Accelerated Computing

Using specialized hardware to dramatically speed up AI and machine learning workloads.

Generality: 794
Frontier Models

The most capable AI systems available, operating at the edge of known performance.

Generality: 680
Hyperscalers

Massive cloud infrastructure providers that power AI, big data, and enterprise computing at scale.

Generality: 658
Accelerator

Specialized hardware that speeds up AI training and inference beyond CPU capabilities.

Generality: 792