
Experience Curve

Costs decline predictably as cumulative production or training experience increases.

Year: 2016
Generality: 520

The experience curve is an empirical principle stating that the cost of producing a unit of output decreases by a consistent percentage each time cumulative output doubles. Originally observed in manufacturing industries, the concept has become increasingly central to understanding how AI and machine learning systems scale: as more data is processed, more training compute is consumed, or more inference operations are performed, the effective cost per unit of capability tends to fall in a predictable, log-linear fashion. This relationship is closely related to — but distinct from — learning curves, which track performance improvements with experience rather than cost reductions.
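
This relationship is commonly formalized as Wright's law: unit cost follows C(n) = C1 · n^b, where n is cumulative output and b = log2(progress ratio), so each doubling multiplies cost by the progress ratio. A minimal sketch in Python; the 80% progress ratio and $100 first-unit cost are illustrative assumptions, not figures from this entry:

```python
import math

def unit_cost(cumulative_units: float, first_unit_cost: float,
              progress_ratio: float) -> float:
    """Wright's-law experience curve: each doubling of cumulative
    output multiplies unit cost by the progress ratio (e.g. 0.80
    means a 20% cost decline per doubling)."""
    b = math.log2(progress_ratio)  # negative exponent for ratios < 1
    return first_unit_cost * cumulative_units ** b

# An 80% curve starting at $100/unit: a fixed 20% drop per doubling.
for n in (1, 2, 4, 8, 16):
    print(f"{n:>2} units: ${unit_cost(n, 100.0, 0.80):.2f}")
# 1: $100.00, 2: $80.00, 4: $64.00, 8: $51.20, 16: $40.96
```

Plotted on log-log axes, this curve is a straight line, which is the log-linear signature referred to above.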

In the context of machine learning, the experience curve manifests across several dimensions. Training costs for large models have dropped dramatically as hardware manufacturers, cloud providers, and ML engineers accumulate operational experience. Algorithmic improvements compound these gains: techniques discovered through extensive experimentation — such as better optimizers, mixed-precision training, and efficient attention mechanisms — reduce the compute required to reach a given performance threshold. Researchers have quantified this as an "algorithmic efficiency" trend, finding that the compute needed to match a fixed benchmark halves roughly every 9–16 months, independent of hardware improvements.
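
To make concrete what such halving times imply, the sketch below compounds them over a fixed horizon; the 48-month window and the normalized baseline of 1.0 are assumptions chosen for illustration:

```python
def compute_to_match_benchmark(months_elapsed: float,
                               baseline_compute: float,
                               halving_months: float) -> float:
    """Compute needed to reach a fixed benchmark, assuming it halves
    every `halving_months` (the cited range is roughly 9-16)."""
    return baseline_compute * 0.5 ** (months_elapsed / halving_months)

# Over four years the two ends of the cited range diverge sharply:
for h in (9, 16):
    print(f"{h}-month halving: "
          f"{compute_to_match_benchmark(48, 1.0, h):.4f}x")
# 9-month halving  -> ~0.0248x (about 40x less compute)
# 16-month halving ->  0.1250x (exactly 8x less compute)
```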

The experience curve has significant strategic implications for AI development and deployment. Organizations that scale faster accumulate experience more rapidly, driving down their unit costs and creating competitive moats that are difficult for slower-moving rivals to close. This dynamic helps explain the intense race among AI labs and cloud providers to maximize training runs and inference volume. It also informs investment decisions: if costs are expected to fall along a predictable curve, capital expenditure on current-generation infrastructure can be justified by projections of future cost-competitiveness.
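
One way to make that projection logic concrete is to combine an assumed progress ratio with an assumed growth rate in cumulative output, converting growth into doublings and applying the curve per doubling. Both parameters in this sketch are hypothetical:

```python
import math

def projected_unit_cost(years: float, current_cost: float,
                        annual_output_growth: float,
                        progress_ratio: float) -> float:
    """Project unit cost forward by converting cumulative-output
    growth into doublings, then applying the progress ratio once
    per doubling."""
    doublings = years * math.log2(1.0 + annual_output_growth)
    return current_cost * progress_ratio ** doublings

# If cumulative output doubles yearly (100% growth) on an 80% curve,
# unit cost roughly halves within three years:
print(f"{projected_unit_cost(3, 1.0, 1.0, 0.80):.3f}")  # ~0.512
```

The design point is that faster scaling means more doublings per year, which is exactly the moat dynamic described above.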

Understanding the experience curve is also critical for AI policy and forecasting. Analysts use it to project when AI capabilities will become economically viable for new applications, how quickly frontier model costs will commoditize, and what the long-run equilibrium pricing of AI services might look like. However, the curve is not guaranteed to continue indefinitely — physical limits, data scarcity, and diminishing algorithmic returns can cause the slope to flatten, making it essential to distinguish genuine experience-driven gains from temporary favorable conditions.
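
A simple diagnostic for such flattening, sketched below on synthetic data, is to fit the log-log slope of cost against cumulative output over early and late segments of a cost history; a late slope closer to zero implies a progress ratio closer to 1 and a flattening curve. All numbers here are invented for illustration:

```python
import math

def loglog_slope(outputs, costs):
    """Least-squares slope of log(cost) against log(cumulative
    output), i.e. the experience-curve exponent b, where the
    implied progress ratio is 2 ** b."""
    xs = [math.log(x) for x in outputs]
    ys = [math.log(c) for c in costs]
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    num = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    den = sum((x - mx) ** 2 for x in xs)
    return num / den

# Synthetic history: an 80% curve that flattens toward 95% later on.
early = [(n, 100 * n ** math.log2(0.80)) for n in (1, 2, 4, 8)]
late = [(n, 41 * (n / 16) ** math.log2(0.95)) for n in (16, 32, 64, 128)]
for name, seg in (("early", early), ("late", late)):
    b = loglog_slope([n for n, _ in seg], [c for _, c in seg])
    print(name, f"implied progress ratio ~ {2 ** b:.2f}")
# early -> ~0.80, late -> ~0.95: the slope has flattened.
```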

Related

Exponential Slope Blindness

A cognitive bias causing humans to systematically underestimate exponential growth trajectories.

Generality: 94
Training Cost

The total computational, energy, and financial resources required to train an AI model.

Generality: 620
Compute Efficiency

How effectively a system converts computational resources into useful model performance.

Generality: 702
Algorithmic Gains

Performance improvements from better algorithms rather than more compute, data, or parameters.

Generality: 627
Scaling Laws

Predictable power-law relationships between model size, data, compute, and performance.

Generality: 724
Scaling Hypothesis

Increasing model size, data, and compute reliably improves machine learning performance.

Generality: 753