
Envisioning is an emerging technology research institute and advisory.


2011 — 2026


Pretrained Model

A model trained on a large dataset whose learned parameters are reused or fine-tuned for new tasks.

Year: 2013 · Generality: 838

A pretrained model is a neural network that has already been trained on a large dataset and whose learned parameters — weights and biases — can be reused as a starting point for new tasks. Rather than initializing a model with random weights and training from scratch, practitioners leverage the representations a pretrained model has already internalized, which often encode rich, generalizable features of the data domain. This approach is foundational to modern machine learning workflows across natural language processing, computer vision, speech recognition, and beyond.

The mechanism behind pretrained models is closely tied to transfer learning. During pretraining, a model learns broad, reusable representations — such as syntactic patterns in text or edge and texture features in images — from massive datasets. When applied to a downstream task, these representations can either be used directly (zero-shot or few-shot inference) or refined through fine-tuning on a smaller, task-specific dataset. Fine-tuning adjusts the pretrained weights to better suit the target domain while preserving the general knowledge acquired during pretraining, dramatically reducing the data and compute required to achieve strong performance.
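The pretraining-then-fine-tuning workflow can be illustrated with a deliberately tiny sketch: a one-parameter linear model is first fit to a large "pretraining" dataset, then adapted to a nearby downstream task starting either from the pretrained weight or from scratch. The datasets, learning rate, and step counts below are illustrative assumptions, not drawn from any real system.

```python
def train(w, data, lr=0.05, steps=100):
    # Full-batch gradient descent on mean squared error for the model y = w * x.
    n = len(data)
    for _ in range(steps):
        grad = sum(2 * (w * x - y) * x for x, y in data) / n
        w -= lr * grad
    return w

def loss(w, data):
    return sum((w * x - y) ** 2 for x, y in data) / len(data)

# "Pretraining": a large dataset generated by the related task y = 3.0 * x.
pretrain = [(i / 10, 3.0 * i / 10) for i in range(1, 51)]
w_pre = train(0.0, pretrain)  # converges to roughly 3.0

# Downstream task: only two labeled examples of y = 3.2 * x.
finetune = [(0.5, 1.6), (1.0, 3.2)]

# Fine-tuning reuses the pretrained weight; the baseline starts from zero.
# Both get the same tiny budget of 5 gradient steps.
w_ft = train(w_pre, finetune, steps=5)
w_scratch = train(0.0, finetune, steps=5)

# The fine-tuned model ends up far closer to the target with identical
# downstream data and compute, which is the core payoff of pretraining.
print(loss(w_ft, finetune) < loss(w_scratch, finetune))  # True
```

The same logic scales up to real systems: the pretrained weights encode most of what the task needs, so fine-tuning only has to close a small gap with a small dataset.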

Pretrained models became especially prominent in computer vision with the release of deep CNNs like AlexNet and VGGNet trained on ImageNet, and later in NLP with the introduction of word embeddings and, more transformatively, large Transformer-based models such as BERT and GPT. These models demonstrated that a single pretrained backbone could be adapted to dozens of downstream tasks with minimal additional training, setting new benchmarks across entire fields and democratizing access to high-performance AI.

The practical significance of pretrained models is enormous. They lower the barrier to entry for organizations without massive compute budgets or proprietary datasets, enable rapid prototyping, and concentrate research investment into shared, reusable artifacts. However, they also raise concerns around bias propagation — flaws embedded during pretraining can persist through fine-tuning — and the environmental cost of training ever-larger foundation models. Understanding pretrained models is now considered essential knowledge for any practitioner working in modern machine learning.

Related

Base Model
A pretrained model used as a starting point for task-specific adaptation.
Generality: 794

Self-Supervised Pretraining
A technique where models learn rich representations from unlabeled data before fine-tuning on specific tasks.
Generality: 794

Transfer Learning
Reusing a model trained on one task to accelerate learning on another.
Generality: 820

Foundation Model
A large pretrained model adaptable to many tasks without retraining from scratch.
Generality: 838

Fine-Tuning
Adapting a pretrained model to a specific task by continuing training on new data.
Generality: 796

Continual Pre-Training
Incrementally updating a pretrained model on new data while preserving prior knowledge.
Generality: 575