
Envisioning is an emerging technology research institute and advisory.


2011 — 2026


Objective Function

A mathematical function that quantifies what a machine learning model is optimizing.

Year: 1947 · Generality: 908

An objective function is a mathematical expression that defines the goal of an optimization problem by assigning a scalar value to any candidate solution or set of model parameters. In machine learning, it serves as the formal specification of what the algorithm is trying to achieve — whether that means minimizing prediction error, maximizing likelihood, or balancing competing objectives. Every training procedure implicitly or explicitly optimizes some objective function, making it one of the most foundational concepts in the field.
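As a minimal sketch of this idea (the data and model here are illustrative, not from the text), an objective function takes candidate parameters and returns a single scalar score for them:

```python
# Minimal sketch: an objective function maps candidate parameters to a
# scalar value. Here, mean squared error for a 1-D linear model y ≈ w*x + b.
# The dataset and parameter values below are hypothetical.

def mse_objective(w, b, xs, ys):
    """Return the mean squared error of the model y = w*x + b over the data."""
    n = len(xs)
    return sum((w * x + b - y) ** 2 for x, y in zip(xs, ys)) / n

xs = [0.0, 1.0, 2.0, 3.0]
ys = [1.0, 3.0, 5.0, 7.0]   # generated by y = 2x + 1

# The generating parameters score lower (better) than a worse candidate.
assert mse_objective(2.0, 1.0, xs, ys) == 0.0
assert mse_objective(1.0, 0.0, xs, ys) > 0.0
```

Training is then just a search over `(w, b)` for the value this function rates best.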

Objective functions generally fall into two categories based on the direction of optimization. Loss functions (also called cost functions) are minimized — common examples include mean squared error for regression, cross-entropy for classification, and hinge loss for support vector machines. Fitness or reward functions are maximized, as seen in evolutionary algorithms and reinforcement learning. In practice, the distinction is superficial since minimizing a function is equivalent to maximizing its negation, but the framing often reflects the problem domain and the algorithm's design.
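The min/max equivalence mentioned above can be shown in a few lines (the loss and candidate grid are illustrative): the parameter that minimizes a loss is exactly the one that maximizes its negation.

```python
# Sketch of the min/max duality: argmin f(w) == argmax -f(w).
# The quadratic loss and the candidate grid are hypothetical examples.

def loss(w):
    return (w - 3.0) ** 2          # uniquely minimized at w = 3

candidates = [i * 0.5 for i in range(13)]   # 0.0, 0.5, ..., 6.0

best_by_min = min(candidates, key=loss)
best_by_max = max(candidates, key=lambda w: -loss(w))

assert best_by_min == best_by_max == 3.0
```

Whether a framework phrases the search as minimizing "loss" or maximizing "reward" is therefore a matter of convention, not mathematics.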

The choice of objective function profoundly shapes model behavior. A poorly chosen objective can lead to models that technically minimize the stated criterion while failing at the actual task — a phenomenon sometimes called Goodhart's Law. For example, optimizing purely for accuracy on imbalanced datasets can produce models that ignore minority classes entirely. Regularization terms are frequently added to the primary objective to penalize model complexity, effectively turning the problem into a multi-term optimization that balances fit against generalization.
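A composite objective of the kind described above can be sketched as a data-fit term plus an L2 complexity penalty (the dataset, grid search, and trade-off coefficient `lam` are all illustrative choices):

```python
# Sketch of a regularized objective: data loss plus an L2 (weight decay)
# penalty. The data and the coefficient `lam` are hypothetical.

def data_loss(w, xs, ys):
    return sum((w * x - y) ** 2 for x, y in zip(xs, ys)) / len(xs)

def regularized_objective(w, xs, ys, lam):
    return data_loss(w, xs, ys) + lam * w ** 2   # fit + complexity penalty

xs, ys = [1.0, 2.0, 3.0], [2.0, 4.0, 6.0]   # exactly fit by w = 2

# Crude grid search: with no penalty the best w is 2;
# a nonzero penalty pulls the optimum toward 0.
grid = [i * 0.01 for i in range(301)]
w_plain = min(grid, key=lambda w: regularized_objective(w, xs, ys, 0.0))
w_reg = min(grid, key=lambda w: regularized_objective(w, xs, ys, 5.0))

assert abs(w_plain - 2.0) < 1e-9
assert w_reg < w_plain
```

The penalty deliberately worsens training fit in exchange for smaller weights, which is the fit-versus-generalization balance the paragraph describes.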

Modern deep learning has expanded the design space for objective functions considerably. Techniques like contrastive loss, triplet loss, and adversarial objectives (as in GANs) encode complex geometric or game-theoretic goals that go far beyond simple error minimization. The objective function also interacts tightly with the optimization algorithm used to minimize it — properties like convexity, smoothness, and the landscape of local minima all influence whether gradient descent or its variants will converge to useful solutions. Selecting and designing the right objective remains as much an art as a science.
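As one concrete instance of these geometric objectives, a triplet loss can be sketched in a few lines (the 2-D "embeddings" and margin below are toy values, not learned representations):

```python
# Sketch of a triplet objective: pull an anchor toward a positive example
# and push it away from a negative one by at least a margin. The points
# and margin are illustrative.

def sq_dist(a, b):
    return sum((x - y) ** 2 for x, y in zip(a, b))

def triplet_loss(anchor, positive, negative, margin=1.0):
    # Zero once the negative is at least `margin` farther than the positive.
    return max(0.0, sq_dist(anchor, positive) - sq_dist(anchor, negative) + margin)

anchor   = (0.0, 0.0)
positive = (0.1, 0.0)    # close to the anchor: satisfied, loss is zero
negative = (3.0, 0.0)    # far from the anchor

assert triplet_loss(anchor, positive, negative) == 0.0

# A positive that drifted far away while a negative sits nearby
# violates the margin and incurs a positive loss.
assert triplet_loss(anchor, (2.0, 0.0), (0.5, 0.0)) > 0.0
```

The objective says nothing about absolute positions, only about relative distances, which is what makes it a geometric rather than an error-minimization goal.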

Related

Training Objective

The criterion a machine learning model optimizes to learn from data.

Generality: 820
Loss Function

A mathematical measure of error that guides model training toward better predictions.

Generality: 909
Optimization Problem

Finding the best solution from all feasible options given an objective and constraints.

Generality: 962
Loss Optimization

Iteratively adjusting model parameters to minimize prediction error measured by a loss function.

Generality: 875
Surrogate Objective

A tractable proxy function used to approximate an intractable or expensive primary objective.

Generality: 720
Auxiliary Loss

An extra training objective that improves learning by optimizing secondary tasks alongside the primary goal.

Generality: 563