Envisioning is an emerging technology research institute and advisory.



Prediction Error

The gap between a model's predicted values and the actual observed outcomes.

Year: 1986
Generality: 875

Prediction error quantifies how far a machine learning model's outputs deviate from the true values it was trained or evaluated against. It serves as the fundamental signal that drives learning itself — during training, algorithms adjust their internal parameters specifically to reduce this discrepancy. Prediction error is not a single metric but a family of related measures, including mean squared error (MSE), mean absolute error (MAE), cross-entropy loss, and others, each suited to different problem types. Regression tasks typically use squared or absolute error, while classification problems rely on probabilistic losses like cross-entropy that penalize confident wrong predictions more heavily.
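The metric families named above can be sketched in a few lines of Python. This is a minimal illustrative implementation (the example values are made up), not a production library:

```python
import math

def mse(y_true, y_pred):
    # Mean squared error: penalizes large deviations quadratically.
    return sum((t - p) ** 2 for t, p in zip(y_true, y_pred)) / len(y_true)

def mae(y_true, y_pred):
    # Mean absolute error: treats every deviation linearly.
    return sum(abs(t - p) for t, p in zip(y_true, y_pred)) / len(y_true)

def binary_cross_entropy(y_true, y_prob):
    # Cross-entropy for binary labels: a confident wrong probability
    # (e.g. p = 0.1 when the label is 1) incurs a very large loss.
    eps = 1e-12  # guard against log(0)
    return -sum(t * math.log(p + eps) + (1 - t) * math.log(1 - p + eps)
                for t, p in zip(y_true, y_prob)) / len(y_true)

y_true = [3.0, 5.0, 2.0]
y_pred = [2.5, 5.5, 4.0]
print(mse(y_true, y_pred))  # 1.5
print(mae(y_true, y_pred))  # 1.0
```

Note how the single large miss (4.0 vs 2.0) dominates MSE but contributes only linearly to MAE, which is exactly the sensitivity difference described above.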

The mechanics of how prediction error is used depend on the learning paradigm. In supervised learning, error is computed by comparing model outputs to labeled ground truth; its gradients are then propagated backward through the network via backpropagation, and the weights are updated by gradient descent. This iterative error-reduction process is the engine behind nearly all modern deep learning. The choice of error metric is consequential: MSE penalizes large errors disproportionately due to squaring, making it sensitive to outliers, while MAE treats all deviations linearly. Selecting the wrong loss function can lead a model to optimize for the wrong objective entirely.
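The error-reduction loop can be sketched for the simplest possible model, a single weight fit by gradient descent on MSE. The data and learning rate below are illustrative assumptions:

```python
# Toy supervised-learning loop: fit y = w * x by gradient descent on MSE.
xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]  # generated by the true relationship y = 2x

w = 0.0    # initial weight
lr = 0.05  # learning rate

for _ in range(200):
    # Gradient of MSE with respect to w: mean of 2 * (w*x - y) * x.
    grad = sum(2 * (w * x - y) * x for x, y in zip(xs, ys)) / len(xs)
    # Step against the gradient, i.e. in the error-reducing direction.
    w -= lr * grad

print(round(w, 3))  # converges to the true slope, ≈ 2.0
```

Each iteration measures the current prediction error, asks which direction of weight change would reduce it, and takes a small step that way; deep learning scales this same loop to billions of parameters.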

Prediction error also plays a diagnostic role beyond training. Evaluating error on held-out test data reveals whether a model has generalized or merely memorized its training examples. A large gap between training error and test error signals overfitting — the model has learned noise rather than signal. Conversely, high error on both sets indicates underfitting, where the model lacks the capacity to capture underlying patterns. This bias-variance tradeoff, formalized through the decomposition of expected prediction error into bias, variance, and irreducible noise components, remains one of the most important frameworks for understanding model behavior.
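A toy sketch of the train/test gap: a model that memorizes its training labels, noise included, achieves zero training error but fares worse on held-out data than a simpler model that captures only the signal. All values below are made-up illustrative numbers:

```python
# Hypothetical setup: underlying signal y = 2x, with fixed "noise" values
# chosen for reproducibility instead of random draws.
xs = [1, 2, 3, 4, 5]
train_noise = [0.8, -0.5, 1.1, -0.9, 0.4]
test_noise = [-0.3, 0.7, -1.0, 0.6, -0.2]
train = [(x, 2 * x + e) for x, e in zip(xs, train_noise)]
test = [(x, 2 * x + e) for x, e in zip(xs, test_noise)]

def mse(pairs, predict):
    return sum((y - predict(x)) ** 2 for x, y in pairs) / len(pairs)

# Overfit "model": memorizes every training label, noise and all.
memory = dict(train)
def memorizer(x):
    return memory[x]

# Simpler model: captures only the underlying signal.
def linear(x):
    return 2 * x

print(mse(train, memorizer))                    # 0.0: zero training error
print(mse(test, memorizer), mse(test, linear))  # memorizer is worse on test
```

The memorizer's zero training error alongside its inflated test error is the overfitting signature described above: it has learned the noise, which does not repeat in new data.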

Prediction error matters because it connects abstract model performance to real-world consequences. In medical diagnosis, a model's error rate translates directly to missed conditions or false alarms. In financial forecasting, it determines the reliability of risk estimates. Minimizing prediction error while maintaining generalization is the central challenge of applied machine learning, making it one of the field's most foundational and practically significant concepts.

Related

  • Prediction: Using learned patterns from data to estimate unknown or future outcomes. (Generality: 964)
  • Mean Squared Error: A loss function measuring average squared differences between predicted and actual values. (Generality: 871)
  • Loss Function: A mathematical measure of error that guides model training toward better predictions. (Generality: 909)
  • RMSE (Root Mean Squared Error): A regression metric that penalizes large prediction errors by squaring residuals before averaging. (Generality: 796)
  • MAE (Mean Absolute Error): A regression metric measuring the average absolute difference between predicted and actual values. (Generality: 796)
  • Accuracy: The fraction of correct predictions a classification model makes overall. (Generality: 875)