Envisioning is an emerging technology research institute and advisory.


Compute

The processing power and hardware resources required to train and run AI models.

Year: 2012 · Generality: 875

Compute refers to the computational resources—hardware, energy, and time—required to execute AI algorithms, train models, and run inference. In practice, this encompasses CPUs (Central Processing Units), GPUs (Graphics Processing Units), and purpose-built accelerators like Google's TPUs (Tensor Processing Units). These processors perform the massive volumes of floating-point arithmetic that underpin modern machine learning, particularly the matrix multiplications central to neural network training. The amount of available compute directly constrains what kinds of models can be built: larger compute budgets enable deeper networks, larger datasets, and longer training runs.
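The cost of the matrix multiplications mentioned above can be estimated directly: multiplying an (m × k) matrix by a (k × n) matrix takes roughly 2·m·k·n floating-point operations, since each of the m·n outputs requires k multiplies and k adds. A minimal sketch (layer sizes are hypothetical):

```python
def matmul_flops(m: int, k: int, n: int) -> int:
    """Approximate FLOPs for an (m x k) @ (k x n) matrix multiply:
    each of the m*n outputs needs k multiplies and k adds."""
    return 2 * m * k * n

# A single dense layer mapping a batch of 32 activations of width
# 4096 to width 4096 (hypothetical sizes):
flops = matmul_flops(32, 4096, 4096)  # ~1.07e9 FLOPs
```

Summing such terms over every layer and every training step is how compute budgets for full training runs are estimated.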

The relationship between compute and AI capability became a defining theme of modern machine learning around 2012, when researchers demonstrated that training deep convolutional neural networks on GPUs could dramatically outperform prior approaches on image recognition benchmarks. Since then, the compute used to train frontier AI models has scaled at a pace far exceeding Moore's Law, roughly doubling every six months according to some analyses. This scaling has been a primary driver of breakthroughs in language modeling, protein structure prediction, and generative media, giving rise to the empirical observation that model performance often improves predictably as compute increases—a relationship formalized in neural scaling laws.
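A widely used rule of thumb in the scaling-laws literature approximates transformer training compute as C ≈ 6·N·D, where N is the parameter count and D the number of training tokens (about 2 FLOPs per parameter per token for the forward pass and 4 for the backward pass). A sketch, with hypothetical model and dataset sizes:

```python
def training_flops(n_params: float, n_tokens: float) -> float:
    """Rule-of-thumb training compute for a transformer:
    ~6 FLOPs per parameter per training token (2 forward, 4 backward)."""
    return 6 * n_params * n_tokens

# Hypothetical 7-billion-parameter model trained on 1 trillion tokens:
c = training_flops(7e9, 1e12)  # ~4.2e22 FLOPs
```

Estimates like this are what analysts use when tracking how frontier training runs have scaled over time.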

Compute is increasingly treated as a strategic resource. Cloud providers such as AWS, Google Cloud, and Microsoft Azure have made high-performance compute accessible on demand, lowering barriers for researchers and startups. At the same time, the concentration of cutting-edge compute in a small number of large organizations has raised concerns about who can participate in frontier AI development. Hardware efficiency—achieving more useful computation per watt or per dollar—has become a major research and engineering priority, pursued through techniques like mixed-precision training, model quantization, and hardware-aware neural architecture search.
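To make the quantization idea concrete, here is a minimal sketch of symmetric int8 weight quantization: floats are mapped to integers in [-127, 127] with a single stored scale factor, cutting memory per weight from 32 bits to 8 at the cost of some precision. Production schemes (per-channel scales, calibration) are considerably more sophisticated.

```python
def quantize_int8(weights):
    """Symmetric int8 quantization sketch: map floats in [-max, max]
    to integers in [-127, 127], keeping one float scale factor."""
    scale = max(abs(w) for w in weights) / 127 or 1.0  # avoid zero scale
    q = [round(w / scale) for w in weights]
    return q, scale

def dequantize(q, scale):
    """Recover approximate float weights from int8 values."""
    return [v * scale for v in q]

w = [0.5, -1.27, 0.0, 0.8]
q, s = quantize_int8(w)
w_hat = dequantize(q, s)  # approximate reconstruction of w
```

The same trade-off — fewer bits per value in exchange for a small accuracy loss — is what makes quantized inference so much cheaper per query.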

Beyond raw hardware, compute is also a lens for understanding AI progress and risk. Tracking the compute used to train notable models has become a standard method for benchmarking the field's trajectory and informing policy discussions around AI governance. As training runs for large models now cost tens of millions of dollars and consume megawatt-hours of electricity, compute sits at the intersection of technical capability, economic access, and environmental impact.

Related

Training Compute
The total computational resources consumed while training a machine learning model.
Generality: 650

Compute Efficiency
How effectively a system converts computational resources into useful model performance.
Generality: 702

Evaluation-Time Compute
Computational resources consumed when an AI model runs inference on new data.
Generality: 627

Accelerated Computing
Using specialized hardware to dramatically speed up AI and machine learning workloads.
Generality: 794

Training Cost
The total computational, energy, and financial resources required to train an AI model.
Generality: 620

HPC (High Performance Computing)
Aggregated computing infrastructure delivering processing power far beyond standard workstations.
Generality: 792