Envisioning is an emerging technology research institute and advisory.


Transfer Learning

Reusing a model trained on one task to accelerate learning on another.

Year: 1995 · Generality: 820

Transfer learning is a machine learning paradigm in which knowledge acquired while solving one problem is deliberately applied to a different but related problem. Rather than training a model from scratch, a practitioner begins with a model already trained on a large dataset — often called a foundation or pre-trained model — and adapts it to a new target task. This approach is especially valuable when the target task has limited labeled data, since the pre-trained model has already learned general-purpose representations, such as edges and textures in images or syntactic patterns in text, that transfer usefully across domains.

In practice, transfer learning typically takes one of two forms: feature extraction or fine-tuning. In feature extraction, the pre-trained model's weights are frozen and its internal representations are used as fixed inputs to a new, smaller model trained on the target task. In fine-tuning, the pre-trained weights serve as an initialization point, and some or all layers are updated through continued training on the target dataset. Fine-tuning tends to yield better performance when sufficient target data is available, while feature extraction is preferred when data is scarce or computational resources are limited. The choice of which layers to freeze or update often depends on how similar the source and target domains are.
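The two forms described above can be sketched in a toy NumPy example. Everything here is hypothetical: the "pretrained" backbone is just a fixed random projection standing in for the frozen early layers of a real pretrained network, and the data is synthetic. The point is only to show the mechanical difference between training a new head on frozen features and continuing to update the backbone's weights.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical "pretrained" backbone: a fixed linear map plus tanh,
# standing in for the frozen early layers of a real pretrained model.
W_pre = rng.normal(size=(4, 8)) / 2.0   # 4-dim inputs -> 8-dim features

# Tiny synthetic target task with limited labels.
X = rng.normal(size=(200, 4))
y = (X[:, 0] + X[:, 1] > 0).astype(float)

def features(X, W):
    return np.tanh(X @ W)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# --- Form 1: feature extraction ---
# Backbone weights stay frozen; only a new linear head is trained
# (logistic regression on the fixed features).
w_head = np.zeros(8)
for _ in range(500):
    F = features(X, W_pre)
    err = (sigmoid(F @ w_head) - y) / len(y)   # logistic-loss gradient
    w_head -= 0.5 * F.T @ err

acc_frozen = np.mean((features(X, W_pre) @ w_head > 0) == (y > 0.5))

# --- Form 2: fine-tuning ---
# The pretrained weights serve as initialization; both the backbone
# and the head are updated, the backbone with a smaller learning rate.
W_ft = W_pre.copy()
for _ in range(500):
    F = features(X, W_ft)
    err = (sigmoid(F @ w_head) - y) / len(y)
    w_head -= 0.5 * F.T @ err
    # Backpropagate through tanh: d tanh(z)/dz = 1 - tanh(z)^2
    W_ft -= 0.05 * X.T @ (np.outer(err, w_head) * (1 - F**2))

acc_finetuned = np.mean((features(X, W_ft) @ w_head > 0) == (y > 0.5))
```

Note the two knobs the paragraph mentions: which parameters receive gradient updates at all (freezing), and how large those updates are (fine-tuning the backbone typically uses a smaller learning rate than the freshly initialized head).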

Transfer learning became central to modern deep learning after researchers demonstrated that convolutional neural networks trained on ImageNet could be repurposed for a wide range of vision tasks with minimal additional training. The paradigm later transformed natural language processing with the introduction of large pre-trained language models such as BERT and GPT, which could be fine-tuned on downstream tasks like sentiment analysis, question answering, and named entity recognition with remarkable efficiency.

The practical impact of transfer learning is difficult to overstate. It dramatically lowers the data and compute requirements for building high-performing models, democratizing access to state-of-the-art AI capabilities. Organizations without the resources to train billion-parameter models from scratch can still achieve competitive results by fine-tuning publicly available pre-trained models. This has accelerated progress across virtually every applied domain of machine learning, from medical imaging to robotics to code generation.

Related

Transfer Capability

An AI system's ability to apply knowledge learned in one domain to another.

Generality: 650
Pretrained Model

A model trained on large data, reused or fine-tuned for new tasks.

Generality: 838
Fine-Tuning

Adapting a pre-trained model to a specific task by continuing training on new data.

Generality: 796
Transfer Reinforcement Learning (TRL)

Using knowledge from prior tasks to accelerate reinforcement learning in new, related environments.

Generality: 620
Self-Supervised Pretraining

A technique where models learn rich representations from unlabeled data before fine-tuning on specific tasks.

Generality: 794
MTL (Multi-Task Learning)

Training a single model simultaneously on multiple related tasks to improve generalization.

Generality: 796