Envisioning is an emerging technology research institute and advisory.
Fab (Fabrication Facility)

A specialized factory where semiconductor chips are manufactured using photolithography and advanced materials.

Year: 2012 · Generality: 627

A fabrication facility, or fab, is a highly specialized manufacturing plant where semiconductor devices — including microprocessors, memory chips, and custom silicon — are produced at scale. These facilities operate under extraordinarily controlled conditions: cleanrooms classified by the number of airborne particles per cubic meter, where workers wear full-body suits to prevent microscopic contamination that could ruin entire wafer batches. The core manufacturing process involves depositing, patterning, and etching dozens of material layers onto silicon wafers using photolithography, where ultraviolet or extreme ultraviolet (EUV) light projects circuit patterns at nanometer-scale precision.
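The resolution limit of photolithography mentioned above is conventionally described by the Rayleigh criterion, which relates the smallest printable feature to the light's wavelength and the optics' numerical aperture. A minimal sketch follows; the wavelengths and numerical apertures are standard published values for DUV immersion and EUV scanners, while the k1 process factors are illustrative assumptions rather than any specific fab's parameters.

```python
# Rayleigh criterion: minimum printable feature (critical dimension)
#   CD = k1 * wavelength / NA
# The k1 values below are illustrative assumptions, not real process data.

def critical_dimension_nm(wavelength_nm: float, na: float, k1: float) -> float:
    """Estimate the smallest resolvable feature size, in nanometers."""
    return k1 * wavelength_nm / na

# Deep-UV immersion lithography (193 nm ArF laser, water-immersion optics)
duv = critical_dimension_nm(wavelength_nm=193.0, na=1.35, k1=0.30)

# Extreme-UV lithography (13.5 nm source, reflective optics)
euv = critical_dimension_nm(wavelength_nm=13.5, na=0.33, k1=0.40)

print(f"DUV immersion: ~{duv:.0f} nm features")  # roughly 43 nm
print(f"EUV:           ~{euv:.0f} nm features")  # roughly 16 nm
```

The order-of-magnitude jump from 193 nm to 13.5 nm light is why EUV tooling was a prerequisite for the most recent process nodes.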

The relevance of fabs to AI and machine learning has grown dramatically as demand for specialized hardware has surged. Training large neural networks requires enormous computational throughput, driving the design of purpose-built chips — GPUs, TPUs, and custom AI accelerators — that must be physically realized in fabs. The capabilities of these chips are directly constrained by what a given fab's process node can achieve: smaller transistors mean more compute per unit area, lower power consumption, and faster inference. The transition from 28nm to 7nm to 3nm process nodes has been a primary driver of AI hardware performance gains over the past decade.
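To make the "more compute per unit area" claim concrete, here is a back-of-envelope sketch of ideal density scaling, under the simplifying assumption that transistor density grows with the square of the node label. Modern node names are marketing labels rather than literal feature sizes, and real density gains fall well short of this ideal, so treat the numbers as an upper bound.

```python
# Ideal-scaling illustration: if density scaled with the square of the node
# label, shrinking the node multiplies transistors per unit area. This is a
# simplifying assumption; real node-to-node gains are considerably smaller.

def ideal_density_gain(old_node_nm: float, new_node_nm: float) -> float:
    """Ideal-scaling density multiplier when moving between process nodes."""
    return (old_node_nm / new_node_nm) ** 2

print(f"28nm -> 7nm: {ideal_density_gain(28, 7):.0f}x density")  # 16x
print(f"7nm  -> 3nm: {ideal_density_gain(7, 3):.1f}x density")   # ~5.4x
```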

Fabs are among the most capital-intensive industrial facilities ever built, with leading-edge plants costing $10–20 billion or more to construct and equip. This has led to significant consolidation in the industry, with only a handful of companies — most notably TSMC, Samsung, and Intel Foundry — capable of manufacturing at the frontier process nodes. This concentration creates strategic dependencies: AI chip designers like NVIDIA, Google, and AMD rely almost entirely on TSMC for their most advanced silicon.

For the AI field, fab capacity and process node availability are not merely engineering footnotes — they are strategic constraints. Supply chain disruptions, geopolitical tensions around semiconductor manufacturing, and the physical limits of lithography all directly shape which AI systems can be built, at what cost, and at what scale. Understanding fabs is therefore essential context for anyone reasoning about the trajectory of AI hardware.

Related

Fabless

A semiconductor company that designs chips but outsources all physical manufacturing to foundries.

Generality: 383

FPGA (Field-Programmable Gate Array)

Reconfigurable hardware chips that accelerate AI workloads with low latency and power.

Generality: 627

Accelerator Chip

Specialized hardware that dramatically speeds up AI training and inference workloads.

Generality: 781

ASIC (Application-Specific Integrated Circuit)

Custom silicon chips designed to accelerate specific computational workloads with maximum efficiency.

Generality: 700

Accelerator

Specialized hardware that speeds up AI training and inference beyond CPU capabilities.

Generality: 792

Accelerated Computing

Using specialized hardware to dramatically speed up AI and machine learning workloads.

Generality: 794