Envisioning is an emerging technology research institute and advisory.

MAE (Mean Absolute Error)

A regression metric measuring the average absolute difference between predicted and actual values.

Year: 1990 · Generality: 796

Mean Absolute Error (MAE) is a widely used evaluation metric in regression tasks that quantifies the average magnitude of prediction errors made by a model. It is computed by taking the absolute difference between each predicted value and its corresponding true value, then averaging those differences across all samples. Mathematically, MAE = (1/n) × Σ|yᵢ − ŷᵢ|, where n is the number of observations, yᵢ is the true value, and ŷᵢ is the predicted value. The result is expressed in the same units as the target variable, making it highly interpretable.
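The formula above can be sketched directly in a few lines of Python (the values here are illustrative; libraries such as scikit-learn provide an equivalent `mean_absolute_error` function):

```python
# Minimal sketch: MAE computed straight from its definition,
# MAE = (1/n) * Σ|yᵢ − ŷᵢ|.

def mean_absolute_error(y_true, y_pred):
    """Average absolute difference between true and predicted values."""
    assert len(y_true) == len(y_pred)
    return sum(abs(y - yhat) for y, yhat in zip(y_true, y_pred)) / len(y_true)

y_true = [3.0, -0.5, 2.0, 7.0]  # illustrative observations
y_pred = [2.5, 0.0, 2.0, 8.0]   # illustrative model outputs

print(mean_absolute_error(y_true, y_pred))  # → 0.5
```

Because the result (0.5 here) is in the same units as the target, it can be read directly as "the model is off by 0.5 on average."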

One of MAE's defining characteristics is its linear treatment of errors: each unit of deviation contributes equally to the total, no matter how large the individual error. Unlike Mean Squared Error (MSE), which squares the residuals and therefore penalizes large errors disproportionately, MAE applies a linear penalty. This makes MAE more robust to outliers — a single extreme prediction will not dominate the metric the way it would in MSE. For datasets where outliers are common or where large errors are not considered especially catastrophic, MAE is often the preferred choice.
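A small numerical sketch (with made-up values) makes the contrast concrete — one outlier barely moves MAE but explodes MSE:

```python
# Sketch: how a single outlier affects MAE vs. MSE. Data is illustrative.

def mae(y_true, y_pred):
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

y_true  = [10.0, 10.0, 10.0, 10.0]
uniform = [11.0, 11.0, 11.0, 11.0]  # every prediction off by 1
outlier = [10.0, 10.0, 10.0, 30.0]  # three perfect, one off by 20

print(mae(y_true, uniform), mse(y_true, uniform))  # → 1.0 1.0
print(mae(y_true, outlier), mse(y_true, outlier))  # → 5.0 100.0
```

Both prediction sets have the same total absolute error (4 units), yet MSE rates the outlier set 100× worse while MAE rates it only 5× worse.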

In the context of model training, MAE can also serve as a loss function, sometimes called L1 loss. When used for optimization, it introduces a non-differentiability at zero (since the absolute value function has no defined gradient there), which can complicate gradient-based optimization. In practice, this is handled using subgradients or smooth approximations such as the Huber loss, which blends MAE and MSE behavior depending on the error magnitude. Despite this nuance, L1 loss is valued for its tendency to produce sparse solutions and its resilience to noisy labels.
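The Huber blend mentioned above can be sketched as a piecewise function (the threshold `delta` is a tunable assumption, commonly defaulted to 1.0):

```python
# Sketch: Huber loss is quadratic (MSE-like) for small errors and
# linear (MAE-like) for large ones, so it stays differentiable at zero
# while keeping MAE's robustness in the tails.

def huber(error, delta=1.0):
    a = abs(error)
    if a <= delta:
        return 0.5 * error ** 2           # MSE-like region near zero
    return delta * (a - 0.5 * delta)      # MAE-like region in the tails

print(huber(0.5))  # → 0.125  (quadratic region: 0.5 * 0.25)
print(huber(3.0))  # → 2.5    (linear region: 1.0 * (3.0 - 0.5))
```

The two pieces meet smoothly at `|error| = delta`, which is what restores a well-defined gradient everywhere.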

MAE is a foundational metric across many applied ML domains, including demand forecasting, financial modeling, weather prediction, and any task where predictions are continuous numeric quantities. When communicating model performance to non-technical stakeholders, MAE is particularly useful because its value has a direct, intuitive interpretation: an MAE of 5.2, for instance, means the model's predictions are off by 5.2 units on average. This clarity makes it one of the most commonly reported metrics in regression benchmarks and production monitoring systems.

Related

Mean Squared Error
A loss function measuring average squared differences between predicted and actual values.
Generality: 871

RMSE (Root Mean Squared Error)
A regression metric that penalizes large prediction errors by squaring residuals before averaging.
Generality: 796

Prediction Error
The gap between a model's predicted values and the actual observed outcomes.
Generality: 875

Average Precision
A single-score metric summarizing model performance across all precision-recall thresholds.
Generality: 700

Loss Function
A mathematical measure of error that guides model training toward better predictions.
Generality: 909

Validation Metric
A quantitative measure used to evaluate model performance on held-out data.
Generality: 780