
Envisioning is an emerging technology research institute and advisory.


Weight

A learnable parameter that scales the influence of inputs within a model.

Year: 1986 · Generality: 850

In machine learning, a weight is a learnable numerical parameter that determines how strongly an input signal or feature influences a model's output. In linear models, weights are the coefficients multiplied by input features before summing to produce a prediction. In neural networks, weights are associated with the connections between neurons across layers — each connection carries a scalar weight that scales the activation signal passing through it. The collection of all weights in a model defines its learned representation of the underlying data distribution.
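The linear case described above can be sketched in a few lines of NumPy. The weight and feature values here are illustrative, not taken from any real model:

```python
import numpy as np

# Weights are the coefficients applied to each input feature.
weights = np.array([0.5, -1.2, 2.0])
bias = 0.1
features = np.array([1.0, 0.5, 2.0])

# The prediction is the weighted sum of the inputs plus a bias:
# 1.0*0.5 + 0.5*(-1.2) + 2.0*2.0 + 0.1 = 4.0
prediction = features @ weights + bias
```

In a neural network the same operation is applied at every layer, with a weight matrix in place of the vector and a nonlinearity applied to the result.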

Weights are adjusted during training through an optimization process designed to minimize a loss function. The most common approach is gradient descent, where the gradient of the loss with respect to each weight is computed via backpropagation and used to nudge weights in the direction that reduces error. This iterative update process — repeated over many batches of training data — allows a model to progressively encode useful patterns. The learning rate controls the step size of each update, making it one of the most important hyperparameters governing how weights evolve.
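A minimal sketch of this update loop, using plain gradient descent on a one-example squared-error loss (the data, learning rate, and iteration count are arbitrary choices for illustration):

```python
import numpy as np

x = np.array([1.0, 2.0])   # input features
y = 3.0                    # target output
w = np.array([0.0, 0.0])   # weights start at zero
lr = 0.05                  # learning rate: step size of each update

# Loss L = (w·x - y)^2, so dL/dw = 2 * (w·x - y) * x.
for _ in range(100):
    error = w @ x - y
    grad = 2 * error * x
    w -= lr * grad  # nudge weights in the direction that reduces error

# After repeated updates, the prediction w @ x approaches the target y.
```

Real training computes these gradients over batches via backpropagation, but the per-weight update rule is the same.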

The initial values of weights matter considerably. Poorly initialized weights can lead to vanishing or exploding gradients, stalling or destabilizing training. Techniques such as Xavier and He initialization were developed specifically to set weights at scales appropriate for the depth and activation functions of a given network, enabling stable gradient flow from the outset.
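The two schemes differ only in how the variance of the initial weights is scaled by the layer's fan-in and fan-out. A sketch, with arbitrary layer sizes:

```python
import numpy as np

rng = np.random.default_rng(0)
fan_in, fan_out = 256, 128  # illustrative layer dimensions

# Xavier (Glorot) initialization: variance 2 / (fan_in + fan_out),
# derived for tanh/sigmoid-style activations.
w_xavier = rng.normal(0.0, np.sqrt(2.0 / (fan_in + fan_out)),
                      size=(fan_in, fan_out))

# He initialization: variance 2 / fan_in, derived for ReLU activations,
# which zero out roughly half of the signal.
w_he = rng.normal(0.0, np.sqrt(2.0 / fan_in),
                  size=(fan_in, fan_out))
```

Both keep activation and gradient magnitudes roughly constant from layer to layer, which is what prevents the vanishing and exploding behavior described above.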

Weights are central to nearly every aspect of model behavior: they encode what a model has learned, determine its capacity to generalize, and are the primary target of regularization techniques like L1 and L2 penalties, which discourage excessively large weight values to reduce overfitting. In the era of large pretrained models, the sheer count of weights — often in the billions — has become a key metric of model scale and capability. Transferring pretrained weights to new tasks via fine-tuning has also become a dominant paradigm, underscoring how much learned information is stored within a model's weight values.
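The effect of an L2 penalty on the weights can be seen directly in the update rule. A minimal sketch, assuming the data-loss gradient is zero so only the penalty acts (the penalty strength and weight values are arbitrary):

```python
import numpy as np

lam = 0.01                       # L2 penalty strength (illustrative)
lr = 0.1                         # learning rate
w = np.array([3.0, -2.0, 0.5])   # illustrative weights

# L2 regularization adds lam * sum(w^2) to the loss; its gradient
# contribution is 2 * lam * w, which shrinks every weight toward zero.
data_grad = np.zeros_like(w)     # assume the data loss contributes nothing
w_new = w - lr * (data_grad + 2 * lam * w)

# Each weight is multiplied by (1 - 2 * lr * lam) = 0.998 per step.
```

This multiplicative shrinkage is why the L2 penalty is also called weight decay: large weights are pulled down at every update unless the data gradient pushes back.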

Related

Weighted Sum

A linear combination of inputs scaled by learned weights, fundamental to neural networks.

Generality: 820

Weight Initialization

Setting neural network weights before training to ensure stable, effective learning.

Generality: 650

Parameter

A model-internal variable whose value is learned directly from training data.

Generality: 928

Weight Decay

A regularization method that penalizes large weights to prevent overfitting.

Generality: 750

Open Weights

Publicly released model parameters that enable transparency, reproducibility, and collaborative AI development.

Generality: 694

Upweighting

Increasing the influence of selected data points or features during model training.

Generality: 620