A regression metric measuring the average absolute difference between predicted and actual values.
Mean Absolute Error (MAE) is a widely used evaluation metric in regression tasks that quantifies the average magnitude of prediction errors made by a model. It is computed by taking the absolute difference between each predicted value and its corresponding true value, then averaging those differences across all samples. Mathematically, MAE = (1/n) × Σ|yᵢ − ŷᵢ|, where n is the number of observations, yᵢ is the true value, and ŷᵢ is the predicted value. The result is expressed in the same units as the target variable, making it highly interpretable.
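The formula above translates directly into code. A minimal sketch (function name is illustrative, not from any particular library):

```python
def mean_absolute_error(y_true, y_pred):
    """MAE = (1/n) * sum(|y_i - yhat_i|)."""
    if len(y_true) != len(y_pred):
        raise ValueError("y_true and y_pred must have the same length")
    n = len(y_true)
    # Average the absolute differences across all samples.
    return sum(abs(y - yhat) for y, yhat in zip(y_true, y_pred)) / n

# Errors are |3-4|=1, |5-5|=0, |2-0|=2, so MAE = (1+0+2)/3 = 1.0
print(mean_absolute_error([3.0, 5.0, 2.0], [4.0, 5.0, 0.0]))
```

Because the absolute differences are in the target's units, the returned value is too, which is what makes the metric directly interpretable.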
One of MAE's defining characteristics is its linear treatment of errors: each unit of deviation contributes equally to the total, whether it comes from a small miss or a large one. Unlike Mean Squared Error (MSE), which squares the residuals and therefore penalizes large errors disproportionately, MAE applies a linear penalty. This makes MAE more robust to outliers — a single extreme prediction will not dominate the metric the way it would in MSE. For datasets where outliers are common, or where large errors are not considered especially catastrophic, MAE is often the preferred choice.
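The difference in outlier sensitivity is easy to demonstrate with a small, made-up dataset in which the model misses one extreme point by 90 units:

```python
def mae(y_true, y_pred):
    # Linear penalty: the outlier contributes 90 to the sum.
    return sum(abs(a - b) for a, b in zip(y_true, y_pred)) / len(y_true)

def mse(y_true, y_pred):
    # Quadratic penalty: the same outlier contributes 90**2 = 8100.
    return sum((a - b) ** 2 for a, b in zip(y_true, y_pred)) / len(y_true)

y_true = [10.0, 12.0, 11.0, 13.0, 100.0]  # last observation is an outlier
y_pred = [10.0, 12.0, 11.0, 13.0, 10.0]   # model misses it by 90

print(mae(y_true, y_pred))  # 18.0
print(mse(y_true, y_pred))  # 1620.0
```

Four of the five predictions are perfect, yet the single outlier drives MSE to 1620 while MAE stays at 18 — the same error, weighted linearly rather than quadratically.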
In the context of model training, MAE can also serve as a loss function, sometimes called L1 loss. When used for optimization, it introduces a non-differentiability at zero (the absolute value function has no defined gradient there), which can complicate gradient-based optimization. In practice, this is handled using subgradients or smooth approximations such as the Huber loss, which blends MAE and MSE behavior depending on the error magnitude. Despite this nuance, L1 loss is valued for its resilience to noisy labels and because minimizing it drives predictions toward the conditional median of the target rather than the mean (sparsity, by contrast, is a property of L1 regularization on model weights, a distinct technique).
MAE is a foundational metric across many applied ML domains, including demand forecasting, financial modeling, weather prediction, and any task where predictions are continuous numeric quantities. When communicating model performance to non-technical stakeholders, MAE is particularly useful because its value has a direct, intuitive interpretation: an MAE of 5.2, for instance, means the model's predictions are off by 5.2 units on average. This clarity makes it one of the most commonly reported metrics in regression benchmarks and production monitoring systems.