
Envisioning is an emerging technology research institute and advisory.



Training

The iterative process of optimizing a model's parameters using data.

Year: 1956 · Generality: 950

Training is the core process by which a machine learning model learns from data. During training, the model is repeatedly exposed to examples and adjusts its internal parameters — such as the weights in a neural network — to minimize the discrepancy between its predictions and the correct outputs. This discrepancy is quantified by a loss function, and the adjustments are guided by an optimization algorithm such as stochastic gradient descent. The result is a model whose parameters encode statistical patterns extracted from the training data, enabling it to make useful predictions on new inputs.
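The loop described above can be sketched in a few lines. This is a minimal, illustrative example, not any particular library's API: a one-parameter model fit by plain gradient descent on a squared-error loss, with hypothetical data generated from a known slope.

```python
# Minimal training loop: fit y = w * x to data generated with w_true = 3.0,
# minimizing mean squared error by plain gradient descent.
data = [(x, 3.0 * x) for x in [0.5, 1.0, 1.5, 2.0]]

w = 0.0                  # the single model parameter, initialized arbitrarily
learning_rate = 0.1

for step in range(100):
    # Gradient of mean squared error with respect to w.
    grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
    w -= learning_rate * grad   # each update nudges w to reduce the loss

print(round(w, 3))  # converges toward the true slope, 3.0
```

After enough iterations the parameter encodes the statistical pattern in the data (here, the slope), which is exactly what "minimizing the discrepancy between predictions and correct outputs" means in miniature.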

The mechanics of training vary by model type, but in deep learning the dominant approach combines forward passes — where inputs flow through the network to produce predictions — with backpropagation, which computes gradients of the loss with respect to each parameter. These gradients indicate how each weight should change to reduce the loss, and the optimizer applies those changes incrementally over many iterations. Hyperparameters such as learning rate, batch size, and regularization strength shape how efficiently and stably this process converges.
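The forward/backward mechanics can be made concrete with a toy two-parameter "network" and the chain rule written out by hand. The architecture and values below are illustrative assumptions, chosen only to show how gradients flow backward through each layer.

```python
# Forward pass and backpropagation for a tiny model:
# prediction = w2 * relu(w1 * x), trained with squared-error loss.
def forward(w1, w2, x):
    h = max(0.0, w1 * x)         # hidden activation (ReLU)
    return h, w2 * h             # keep the intermediate for the backward pass

def backward(w1, w2, x, y):
    h, pred = forward(w1, w2, x)
    d_pred = 2 * (pred - y)      # dL/d_pred for squared error
    d_w2 = d_pred * h            # chain rule through the output weight
    d_h = d_pred * w2            # gradient flowing back into the hidden unit
    d_w1 = d_h * (x if w1 * x > 0 else 0.0)  # ReLU gates the gradient
    return d_w1, d_w2

w1, w2, lr = 0.5, 0.5, 0.01      # lr is the learning-rate hyperparameter
for _ in range(200):
    d_w1, d_w2 = backward(w1, w2, x=1.0, y=2.0)
    w1 -= lr * d_w1              # optimizer step: plain SGD
    w2 -= lr * d_w2
```

With a learning rate this small, the updates are incremental and the loss shrinks smoothly over the 200 iterations; a much larger rate would make the same loop diverge, which is why such hyperparameters matter for stable convergence.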

A critical concern during training is generalization: the model must learn the underlying structure of the data rather than memorizing the specific examples it was shown. To monitor this, practitioners typically hold out a validation set that is never used for parameter updates. If performance on the validation set degrades while training loss continues to fall, the model is overfitting. Techniques such as dropout, weight decay, early stopping, and data augmentation are commonly employed to keep overfitting in check and improve generalization to unseen data.
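Early stopping, one of the techniques named above, reduces to a simple rule: halt when the held-out validation loss stops improving. The loss curve below is synthetic, shaped to mimic the overfitting pattern just described (validation loss bottoms out and then rises while training loss would keep falling).

```python
# Early stopping sketch: stop once validation loss has not improved
# for `patience` consecutive epochs. The curve is synthetic: it falls
# until epoch 10, then rises — the signature of overfitting.
val_loss = [0.5 + (epoch - 10) ** 2 / 100 for epoch in range(30)]

patience, best, best_epoch = 3, float("inf"), 0
for epoch, loss in enumerate(val_loss):
    if loss < best:
        best, best_epoch = loss, epoch   # checkpoint the best model here
    elif epoch - best_epoch >= patience:
        break                            # no improvement for `patience` epochs

print(best_epoch)  # the epoch where validation loss bottomed out
```

The parameters saved at `best_epoch` are the ones kept; everything trained after that point was fitting noise in the training set rather than structure that generalizes.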

Training can take many forms depending on the learning paradigm. Supervised training relies on labeled input-output pairs; unsupervised training finds structure in unlabeled data; reinforcement learning trains agents through reward signals from environmental interaction; and self-supervised training constructs supervisory signals from the data itself, as in large language model pretraining. Regardless of paradigm, the computational cost of training scales with model size and dataset volume, making efficient training infrastructure — including GPUs, distributed computing, and mixed-precision arithmetic — an essential part of modern machine learning practice.
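The self-supervised case is worth making concrete, since it is the least obvious of the paradigms. A toy sketch of next-token pretraining data, using an illustrative sentence rather than any real corpus: the supervisory signal is constructed entirely from the raw text, with no human labeling.

```python
# Self-supervised targets need no human labels: for next-token
# prediction, the "label" at each position is simply the token
# that follows in the raw text itself.
tokens = "the model learns from raw text".split()

# Each training pair is (context so far, next token).
pairs = [(tokens[:i], tokens[i]) for i in range(1, len(tokens))]

for context, target in pairs[:2]:
    print(context, "->", target)
```

Supervised training would instead require an external annotator to supply each target; here the dataset and the labels are the same artifact, which is what lets pretraining scale to web-sized corpora.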

Related

Training Data

The labeled examples used to teach a machine learning model.

Generality: 920
Training Objective

The criterion a machine learning model optimizes to learn from data.

Generality: 820
Training Cost

The total computational, energy, and financial resources required to train an AI model.

Generality: 620
Supervised Learning

Training models on labeled input-output pairs to predict or classify new data.

Generality: 900
Training Compute

The total computational resources consumed while training a machine learning model.

Generality: 650
Supervision

Training ML models using labeled input-output pairs to guide learning.

Generality: 820