A measure that quantifies the average magnitude of errors in a set of predictions, it computes the average of the absolute differences between predicted and actual values. For instance, if predicted sales figures were $100, $120, and $140, while actual sales were $90, $110, and $160, it would average the absolute differences |100-90|, |120-110|, and |140-160|, giving (10 + 10 + 20) / 3 ≈ $13.33 of error per prediction.
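The calculation above can be sketched as a short function (the name `mean_absolute_error` is illustrative, not from the original text):

```python
def mean_absolute_error(actual, predicted):
    """Average of the absolute differences between paired values."""
    if len(actual) != len(predicted):
        raise ValueError("actual and predicted must have the same length")
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

# Sales example: absolute errors are 10, 10, and 20.
mae = mean_absolute_error([90, 110, 160], [100, 120, 140])
print(round(mae, 2))  # → 13.33
```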
This calculation serves as a critical measure of forecast accuracy, enabling objective comparisons between different predictive models. Its widespread use stems from its interpretability (it is expressed in the same units as the data) and its robustness to outliers, offering a straightforward metric for evaluating performance without the distortion that squaring errors introduces (as in mean squared error, which weights large errors disproportionately). Early adoption emerged in statistical analysis and forecasting, solidifying its place as a standard metric across diverse fields requiring reliable prediction.
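The contrast with mean squared error can be illustrated with a minimal sketch: two prediction sets with the same total absolute error, one containing a single large outlier. The function names and data below are hypothetical examples, not from the original text.

```python
def mae(actual, predicted):
    """Mean absolute error: average of |actual - predicted|."""
    return sum(abs(a - p) for a, p in zip(actual, predicted)) / len(actual)

def mse(actual, predicted):
    """Mean squared error: average of (actual - predicted)^2."""
    return sum((a - p) ** 2 for a, p in zip(actual, predicted)) / len(actual)

actual   = [100, 100, 100, 100]
moderate = [105, 95, 105, 95]     # four moderate errors of 5 each
outlier  = [100, 100, 100, 120]   # one large error of 20

# Both sets have the same MAE (5.0), but squaring makes MSE
# four times larger for the outlier case (100.0 vs 25.0).
print(mae(actual, moderate), mse(actual, moderate))  # → 5.0 25.0
print(mae(actual, outlier), mse(actual, outlier))    # → 5.0 100.0
```

Because both error sets sum to 20 in absolute terms, MAE rates them identically, while MSE penalizes the concentrated error far more heavily.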