
Envisioning is an emerging technology research institute and advisory.


2011 — 2026


HPC (High Performance Computing)

Aggregated computing infrastructure delivering processing power far beyond standard workstations.

Year: 2012 · Generality: 792

High Performance Computing (HPC) refers to the use of powerful computing clusters, supercomputers, or distributed systems to perform computationally intensive tasks at speeds and scales far beyond what a standard desktop or workstation can achieve. In the context of AI and machine learning, HPC provides the raw computational muscle needed to train large models, process massive datasets, and run complex simulations that would otherwise be prohibitively slow or entirely infeasible.

HPC systems typically combine large numbers of processors or accelerators — such as GPUs or TPUs — connected through high-speed interconnects and backed by fast parallel storage. Workloads are distributed across many nodes simultaneously, allowing tasks like matrix multiplication, gradient computation, and data preprocessing to proceed in parallel. Frameworks like MPI (Message Passing Interface) and tools such as SLURM for job scheduling are commonly used to coordinate these distributed workloads efficiently across hundreds or thousands of compute nodes.
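The scatter/gather pattern described above can be sketched in miniature. This is a toy illustration only: on a real cluster the workers would be MPI ranks (e.g. via mpi4py) launched across nodes by a SLURM job, whereas here Python's `multiprocessing` stands in for the compute nodes, and the matrix multiply stands in for a parallelizable workload.

```python
# Toy sketch of HPC-style data parallelism: split a matrix multiply
# across worker processes, mimicking MPI's scatter/gather pattern.
# multiprocessing stands in for cluster nodes; no real MPI is used.
from multiprocessing import Pool

def matmul_rows(args):
    """Each worker multiplies its slice of A's rows by the full B."""
    rows, B = args
    return [[sum(a * b for a, b in zip(row, col)) for col in zip(*B)]
            for row in rows]

def parallel_matmul(A, B, workers=4):
    chunk = max(1, len(A) // workers)
    slices = [A[i:i + chunk] for i in range(0, len(A), chunk)]  # scatter
    with Pool(workers) as pool:
        parts = pool.map(matmul_rows, [(s, B) for s in slices])
    return [row for part in parts for row in part]              # gather

if __name__ == "__main__":
    A = [[1, 2], [3, 4]]
    B = [[5, 6], [7, 8]]
    print(parallel_matmul(A, B))  # [[19, 22], [43, 50]]
```

The same structure scales to thousands of nodes because each worker needs only its slice of the input and never sees the others' work until the gather step.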

The relationship between HPC and modern AI has become deeply symbiotic. The explosion of deep learning after 2012 was enabled in large part by GPU-based HPC infrastructure, which reduced neural network training times from weeks to hours. Today, training frontier large language models and multimodal systems requires HPC clusters with thousands of accelerators running continuously for weeks, consuming megawatts of power. Cloud providers and national laboratories have constructed purpose-built AI supercomputers, such as NVIDIA's DGX SuperPOD systems and Argonne's Aurora, specifically to meet this demand.

Beyond model training, HPC is critical for inference at scale, scientific AI applications like protein structure prediction and climate modeling, and reinforcement learning environments that require massive simulation throughput. As model sizes and dataset scales continue to grow, HPC infrastructure has become a strategic resource — shaping which organizations can compete at the frontier of AI research and deployment. Efficient use of HPC, including techniques like mixed-precision training and model parallelism, has itself become an important area of ML systems research.
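Model parallelism, one of the efficiency techniques mentioned above, can be illustrated with a minimal sketch: a layer's weight matrix is sharded column-wise across "devices", each device computes its slice of the output, and the slices are concatenated. Plain Python stands in for the accelerators here, and all names are illustrative rather than a real framework API.

```python
# Toy sketch of tensor (model) parallelism: shard a linear layer's
# weight matrix column-wise across "devices"; each computes a slice
# of the output, and concatenation plays the role of the all-gather
# a real system performs over the interconnect.

def linear(x, W):
    """y = x @ W for a single input vector x."""
    return [sum(xi * wij for xi, wij in zip(x, col)) for col in zip(*W)]

def split_columns(W, parts=2):
    """Shard W column-wise, one shard per 'device'."""
    cols = list(zip(*W))
    k = len(cols) // parts
    return [list(zip(*cols[i * k:(i + 1) * k])) for i in range(parts)]

def parallel_linear(x, shards):
    out = []
    for shard in shards:          # each 'device' works independently
        out.extend(linear(x, shard))
    return out                    # all-gather of the output slices

x = [1.0, 2.0]
W = [[1.0, 2.0, 3.0, 4.0],
     [5.0, 6.0, 7.0, 8.0]]
shards = split_columns(W, parts=2)
assert parallel_linear(x, shards) == linear(x, W)  # sharded == dense
```

Because no shard ever holds the full weight matrix, this is how models too large for any single accelerator's memory are trained at all; production systems such as Megatron-style training apply the same decomposition across thousands of GPUs.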

Related

Accelerated Computing
Using specialized hardware to dramatically speed up AI and machine learning workloads.
Generality: 794

Hyperscalers
Massive cloud infrastructure providers that power AI, big data, and enterprise computing at scale.
Generality: 658

Compute
The processing power and hardware resources required to train and run AI models.
Generality: 875

GPU (Graphics Processing Unit)
Massively parallel processor that accelerates deep learning by handling thousands of simultaneous computations.
Generality: 871

Accelerator
Specialized hardware that speeds up AI training and inference beyond CPU capabilities.
Generality: 792

Exascale Computing
Computing systems capable of performing at least one quintillion floating-point operations per second.
Generality: 627