
OOMs (Orders of Magnitude)

A scale-based framework for comparing quantities using powers of ten.

Year: 2019 · Generality: 650

Orders of magnitude (OOMs) describe the relative size or scale of quantities using powers of ten, providing an intuitive framework for comparing values that differ dramatically in size. In machine learning and AI research, OOMs have become an essential shorthand for reasoning about compute budgets, model sizes, dataset scales, and performance benchmarks. When researchers say two models differ by "two OOMs" in parameter count, they mean one is roughly 100 times larger than the other — a distinction that carries profound implications for training cost, inference speed, and capability.
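
To make the shorthand concrete, here is a minimal Python sketch; the two parameter counts are illustrative (roughly GPT-2- and GPT-3-scale), and oom_diff is just a named wrapper around a base-10 logarithm:

```python
import math

def oom_diff(a: float, b: float) -> float:
    """Difference in orders of magnitude between two positive quantities."""
    return math.log10(a) - math.log10(b)

# Illustrative parameter counts, roughly GPT-2- and GPT-3-scale:
small, large = 1.5e9, 175e9
print(f"{oom_diff(large, small):.2f} OOMs")  # ~2.07, i.e. "two OOMs" apart
```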

The practical importance of OOMs in ML stems from the field's extraordinary range of relevant scales. Model sizes span from thousands to hundreds of billions of parameters. Training compute, measured in total floating-point operations (FLOPs), ranges from millions for toy experiments to beyond a zettaFLOP (10^21) for large training runs. Dataset sizes range from hundreds of examples to trillions of tokens. Without OOM reasoning, comparing these quantities or tracking their growth over time would be unwieldy. Researchers routinely use log-scale plots and OOM language precisely because linear scales collapse meaningful distinctions at the extremes.
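
As a small illustration of the log-scale habit, the sketch below plots a few hypothetical model sizes on a logarithmic axis, where each OOM occupies equal visual space; the year and size pairs are invented for demonstration:

```python
import matplotlib.pyplot as plt

# Hypothetical (year, parameter count) points spanning several OOMs.
sizes = {2018: 1e8, 2019: 1.5e9, 2020: 1.75e11, 2022: 5e11}

fig, ax = plt.subplots()
ax.plot(list(sizes), list(sizes.values()), marker="o")
ax.set_yscale("log")  # on a linear axis the 2018-2019 points would sit near zero
ax.set_xlabel("Year")
ax.set_ylabel("Parameters (log scale)")
plt.show()
```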

OOMs became particularly central to ML discourse with the emergence of scaling laws research around 2019-2020, which demonstrated that model performance improves predictably as compute, data, and parameters each scale by orders of magnitude. This work, associated with researchers at OpenAI and DeepMind, reframed model development as a question of how many OOMs of compute one can afford, rather than which architecture to choose. The phrase entered everyday ML vocabulary as a concise way to express whether a proposed improvement is a "rounding error" or a genuinely transformative leap.
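
The core relationship can be sketched as a power law in the spirit of Kaplan et al. (2020), L(N) = (N_c / N)^alpha; the constants below are illustrative rather than fitted values, but they show how loss falls by a fixed ratio with every OOM of parameters:

```python
import math

N_C = 8.8e13   # illustrative critical scale, in parameters
ALPHA = 0.076  # illustrative power-law exponent

def predicted_loss(n_params: float) -> float:
    # Power law: each OOM of parameters multiplies loss by 10**-ALPHA (~0.84x).
    return (N_C / n_params) ** ALPHA

for n in (1e8, 1e9, 1e10, 1e11):  # each step is exactly one OOM
    print(f"N = 10^{int(math.log10(n))}: loss ≈ {predicted_loss(n):.2f}")
```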

Beyond scaling discussions, OOM thinking shapes how practitioners evaluate hardware, estimate costs, and set research priorities. A technique that improves efficiency by 10x (one OOM) is considered highly significant; one that improves it by 1.5x is often treated as incremental. This framing encourages researchers to ask whether proposed advances are large enough to matter at scale, making OOMs not just a measurement tool but a lens for strategic decision-making in AI development.
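
A quick back-of-the-envelope calculation shows why: expressed in OOMs, a 1.5x gain is small, and it takes roughly six of them, compounded, to match a single 10x improvement:

```python
import math

per_step = math.log10(1.5)         # a 1.5x gain is ~0.18 OOM
steps = math.log10(10) / per_step  # ~5.7 compounded 1.5x gains per OOM
print(f"1.5x ≈ {per_step:.2f} OOM; ~{steps:.1f} such gains equal one OOM (10x)")
```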

Related

  • Scaling Laws: Predictable power-law relationships between model size, data, compute, and performance. (Generality: 724)
  • Foom: Hypothetical scenario where an AI recursively self-improves into superintelligence almost instantaneously. (Generality: 96)
  • Scaling Hypothesis: Increasing model size, data, and compute reliably improves machine learning performance. (Generality: 753)
  • Exascale Computing: Computing systems capable of performing at least one quintillion floating-point operations per second. (Generality: 627)
  • AFMs (Analog Foundation Models): Large pretrained AI models designed to run on analog hardware for dramatic efficiency gains. (Generality: 96)
  • Internet Scale: ML systems designed to train, serve, or process data across billions of users and devices. (Generality: 520)