
Envisioning is an emerging technology research institute and advisory.



Tunable Parameters

Model variables adjusted during training to optimize performance on a given task.

Year: 1986 · Generality: 720

Tunable parameters are the adjustable variables within a machine learning model that are modified during training to minimize error and improve predictive performance. They fall into two distinct categories: model parameters, such as the weights and biases in a neural network, which are learned directly from training data through optimization algorithms; and hyperparameters, such as learning rate, batch size, number of layers, and regularization strength, which are set before training and govern how that learning process unfolds. Together, these variables define both the structure and the behavior of a model.
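The distinction can be made concrete with a minimal sketch: a one-variable linear model trained by stochastic gradient descent on toy data generated from y = 2x + 1. The constant names and toy data are illustrative, not from any particular library.

```python
# Hyperparameters: chosen before training and held fixed throughout.
LEARNING_RATE = 0.1   # step size of each parameter update
EPOCHS = 100          # number of passes over the data

# Model parameters: initialized arbitrarily, then learned from data.
w, b = 0.0, 0.0       # weight and bias of the model y = w*x + b

# Toy data generated by the rule y = 2x + 1.
data = [(x, 2 * x + 1) for x in [0.0, 1.0, 2.0, 3.0]]

for _ in range(EPOCHS):
    for x, y in data:
        pred = w * x + b
        err = pred - y                # gradient of 0.5*(pred - y)**2 w.r.t. pred
        w -= LEARNING_RATE * err * x  # gradient step on each model parameter
        b -= LEARNING_RATE * err

print(round(w, 2), round(b, 2))       # parameters converge near w=2, b=1
```

Only `w` and `b` are updated by the training loop; `LEARNING_RATE` and `EPOCHS` shape how that loop behaves but are never touched by it.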

During training, model parameters are updated iteratively using techniques like gradient descent and backpropagation, where the model computes the gradient of a loss function with respect to each parameter and adjusts values in the direction that reduces error. Hyperparameters, by contrast, require a separate optimization loop — common strategies include grid search, random search, and more sophisticated methods like Bayesian optimization, which builds a probabilistic model of the hyperparameter space to identify promising configurations more efficiently than exhaustive approaches.
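That outer optimization loop can be sketched in a few lines. Here random search samples candidate learning rates on a log scale and keeps the one whose training run ends with the lowest loss; `train` is a hypothetical stand-in that minimizes the quadratic loss f(w) = (w - 3)^2 for a fixed budget of gradient steps.

```python
import random

def train(lr, steps=20):
    """Run gradient descent with the given learning rate; return final loss."""
    w = 0.0
    for _ in range(steps):
        grad = 2 * (w - 3)   # d/dw of (w - 3)**2
        w -= lr * grad
    return (w - 3) ** 2      # lower is better

random.seed(0)
# Outer loop: sample learning rates log-uniformly in [1e-3, 1], keep the best.
candidates = [10 ** random.uniform(-3, 0) for _ in range(10)]
best_lr = min(candidates, key=train)
print(best_lr, train(best_lr))
```

Grid search would replace the sampled `candidates` with a fixed lattice of values; Bayesian optimization would replace the blind sampling with a surrogate model that proposes the next candidate based on results so far.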

The importance of tunable parameters extends beyond raw accuracy. Poorly chosen values can lead to underfitting, where the model is too simple to capture meaningful patterns, or overfitting, where it memorizes training data and fails to generalize to new inputs. Regularization hyperparameters such as dropout rates or L2 penalty coefficients directly control this tradeoff, making their careful selection essential to building models that perform reliably in deployment.
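The shrinkage effect of an L2 penalty can be seen directly in a one-parameter sketch: minimizing (w·x − y)² + λw² by gradient descent, where λ is the tunable regularization strength. The setup and values below are illustrative assumptions.

```python
def fit(lam, lr=0.05, steps=200):
    """Minimize (w*x - y)**2 + lam * w**2 for one data point."""
    w = 0.0
    x, y = 1.0, 4.0
    for _ in range(steps):
        grad = 2 * (w * x - y) * x + 2 * lam * w  # data term + penalty term
        w -= lr * grad
    return w

w_free = fit(lam=0.0)  # unregularized: w approaches y/x = 4
w_reg = fit(lam=1.0)   # penalized: optimum is y*x / (x**2 + lam) = 2
print(round(w_free, 2), round(w_reg, 2))  # prints 4.0 2.0
```

Raising λ pulls the learned weight toward zero, trading a worse fit on the training point for a simpler model, which is exactly the underfitting/overfitting dial described above.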

As models have grown in scale and complexity — from shallow networks to deep architectures with billions of parameters — the challenge of effective tuning has intensified. Automated machine learning (AutoML) frameworks and neural architecture search (NAS) have emerged to address this, treating the selection of both architecture and hyperparameters as an optimization problem in its own right. Proper parameter tuning remains one of the most practically consequential skills in applied machine learning, directly determining whether a model succeeds or fails on real-world tasks.

Related

Hyperparameter Tuning

Optimizing model configuration settings that are set before training begins.

Generality: 794

Hyperparameter

Pre-training configuration settings that govern how a machine learning model learns.

Generality: 801

Parameter

A model-internal variable whose value is learned directly from training data.

Generality: 928

Parameterized Model

A model whose behavior is governed by learnable numerical values called parameters.

Generality: 875

Parameter Space

The multidimensional space of all possible values a model's parameters can take.

Generality: 794

Parameter Size

The total count of learnable weights and biases in a machine learning model.

Generality: 694