Forecast Accuracy Measure

Often we are asked, "Why do you measure accuracy/error as (Forecast - Actual) / Actual and not over the Forecast?"
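
For concreteness, here is a minimal sketch of the two measures being contrasted; the function names are ours for illustration, not a standard API:

```python
def error_over_actual(forecast: float, actual: float) -> float:
    """Absolute percentage error with actual sales as the denominator."""
    return abs(forecast - actual) / actual

def error_over_forecast(forecast: float, actual: float) -> float:
    """Absolute percentage error with the forecast as the denominator."""
    return abs(forecast - actual) / forecast
```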

Historically, Sales groups have been comfortable using the forecast as the denominator, given their culture of beating the sales plan. Since most demand planning evolved from the Sales function, MAPE was also measured this way, so the choice was mostly cultural. In that world, Sales/Forecast measures sales attainment. For example, sales of 120 against a forecast of 100 mean 120% attainment, and the 20-unit miss, expressed as a proportion of the forecast, is a 20% error (see the worked numbers below). So it was more of a convenience for Sales Management.
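
A quick check of the arithmetic above, assuming a forecast of 100 units and actual sales of 120:

```python
forecast, actual = 100.0, 120.0
miss = actual - forecast                    # 20 units over plan

attainment = actual / forecast              # 1.20 -> reported as 120% attainment
error_over_forecast = abs(miss) / forecast  # 20 / 100 = 20.0%
error_over_actual = abs(miss) / actual      # 20 / 120 ~= 16.7%

print(f"Attainment:          {attainment:.0%}")
print(f"Error over forecast: {error_over_forecast:.1%}")
print(f"Error over actual:   {error_over_actual:.1%}")
```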

More scientifically, however, the denominator is chosen to control functional bias in the forecasting process. Supply Chain is the customer of the forecast and is directly affected by error performance, so an upward bias in the Sales forecast drives high inventories. If Demand Planning reports into the Sales function, with its implicit upward bias, it is appropriate to divide by Actual Sales to counter that bias. Actuals are also the better denominator because they are not under the forecaster's control: if we use the forecast as the denominator, the forecaster can improve reported accuracy marginally by consistently over-forecasting, since a larger forecast inflates the denominator.
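
To make that incentive concrete, here is a small simulation sketch under a hypothetical demand pattern (uniform between 50 and 150 units, so the unbiased forecast is 100; all numbers are illustrative only). With the forecast in the denominator, a consistently padded forecast reports a slightly lower percentage error than the unbiased one:

```python
import random

random.seed(1)
# Hypothetical demand: uniform between 50 and 150 units, mean 100 (illustrative only).
actuals = [random.uniform(50, 150) for _ in range(100_000)]

def mape_over_forecast(forecast, actuals):
    """Mean absolute percentage error with the (constant) forecast as the denominator."""
    return sum(abs(forecast - a) / forecast for a in actuals) / len(actuals)

print(f"Unbiased forecast of 100: {mape_over_forecast(100.0, actuals):.1%}")  # ~25.0%
print(f"Padded forecast of 110:   {mape_over_forecast(110.0, actuals):.1%}")  # ~23.6%
```
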
But there is now a trend in the industry to move Demand Planning into the Supply Chain function. If Supply Chain is held responsible for inventories alone, that creates a new bias to under-forecast true sales. When MAPE is measured over Actuals, the planner can improve reported forecast accuracy by under-forecasting while keeping inventories below target, because the percentage error from forecasting low is capped at 100% of actuals, whereas over-forecasting errors are not.
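
The mirror image, under the same hypothetical uniform demand as in the previous sketch: with actuals in the denominator, a consistently low-balled forecast reports a lower MAPE than the unbiased one.

```python
import random

random.seed(1)
# Same hypothetical demand as in the previous sketch: uniform 50-150 units, mean 100.
actuals = [random.uniform(50, 150) for _ in range(100_000)]

def mape_over_actuals(forecast, actuals):
    """Mean absolute percentage error with actual sales as the denominator."""
    return sum(abs(forecast - a) / a for a in actuals) / len(actuals)

print(f"Unbiased forecast of 100:  {mape_over_actuals(100.0, actuals):.1%}")  # ~28.8%
print(f"Low-balled forecast of 85: {mape_over_actuals(85.0, actuals):.1%}")   # ~26.8%
```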