
Envisioning is an emerging technology research institute and advisory.




Foundation Model

A large pre-trained model adaptable to many tasks without retraining from scratch.

Year: 2021 · Generality: 838

A foundation model is a large-scale AI system trained on broad, diverse data that can be adapted to a wide range of downstream tasks. Rather than building specialized models from scratch for each application, practitioners fine-tune or prompt a single pre-trained base model, dramatically reducing the cost and data requirements of deploying AI in new domains. The term was formally introduced by Stanford's Center for Research on Foundation Models in 2021, though the underlying paradigm had been building for years through models like BERT and GPT-3.

These models work by learning rich, general-purpose representations during a computationally intensive pre-training phase, typically using self-supervised objectives on massive text, image, or multimodal corpora. The resulting model encodes broad world knowledge and transferable patterns that can be unlocked for specific tasks through fine-tuning on labeled data, retrieval augmentation, or prompt engineering. Scale is central to the paradigm: as model size and training data grow, emergent capabilities appear that were not explicitly trained for, such as in-context learning, chain-of-thought reasoning, and cross-modal understanding.
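The frozen-backbone adaptation pattern described above can be sketched in a toy numerical example. Everything here is illustrative rather than a real foundation model: a fixed random projection stands in for the pre-trained backbone, and only a small task-specific head is trained on a few labeled examples.

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for pre-training: a fixed projection producing general-purpose
# features. In practice this would come from self-supervised training on
# massive corpora; here it is simply a random matrix.
W_pretrained = rng.normal(size=(4, 16))

def extract_features(x):
    """Frozen backbone: never updated during downstream adaptation."""
    return np.tanh(x @ W_pretrained)

# A small downstream task with only a handful of labeled examples.
X = rng.normal(size=(40, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

H = extract_features(X)
w = np.zeros(16)  # task-specific head: the only trainable parameters
for _ in range(1000):
    p = 1.0 / (1.0 + np.exp(-(H @ w)))   # logistic head
    w -= 0.5 * (H.T @ (p - y)) / len(y)  # gradient step on the head only

accuracy = ((H @ w > 0) == (y > 0)).mean()
print(f"head-only adaptation accuracy: {accuracy:.2f}")
```

The point of the sketch is economic rather than algorithmic: the expensive representation (here, `W_pretrained`) is produced once and reused, so each new task only pays for the small head.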

Foundation models matter because they fundamentally shift the economics and accessibility of AI development. Organizations that lack the resources to train billion-parameter models from scratch can still build capable applications by adapting publicly available or API-accessible foundation models. This has accelerated progress across fields including medicine, law, software engineering, and scientific research, where domain-specific labeled data is scarce but general language or vision understanding is highly valuable.

The paradigm also raises important concerns. Because a single foundation model may underlie thousands of downstream applications, any biases, factual errors, or safety failures baked into pre-training can propagate at scale — a phenomenon sometimes called homogenization risk. Researchers are actively studying how to audit, align, and robustly adapt foundation models to ensure that their broad deployment remains safe and beneficial across the diverse contexts in which they are used.

Related

Base Model

A pre-trained model used as a starting point for task-specific adaptation.

Generality: 794
Pretrained Model

A model trained on large data, reused or fine-tuned for new tasks.

Generality: 838
RFM (Robotics Foundation Model)

A large-scale pretrained model providing general-purpose capabilities across diverse robotic tasks.

Generality: 322
LFMs (Liquid Foundation Models)

Efficient generative AI models using dynamical systems principles to handle diverse data types.

Generality: 102
Dual Use Foundational Model

Powerful general-purpose AI systems adaptable for both beneficial and harmful applications.

Generality: 646
AFMs (Analog Foundation Models)

Large pretrained AI models designed to run on analog hardware for dramatic efficiency gains.

Generality: 96