10 What is a good forecast error?

When teaching students or training professionals on forecasting KPIs, I like to ask the same question over and over: “How do you know if a forecast is good enough?” Take a minute to think about this question (you can relate it directly to your own demand planning process).

Usually, students and professionals reply along these lines:

  • “We compare this year’s accuracy against what we achieved last year.” But what if last year was especially good or bad? For example, the accuracy you achieved forecasting March 2021 is likely much better than the one you achieved in March 2020, which was probably dramatically low due to Covid lockdowns.
  • “Anything lower than our accuracy target is acceptable.” This is a chicken-and-egg problem: how do we set the accuracy target in the first place?
  • “We compare our forecast to another forecast and see if we can beat it.” This is not a bad idea, but against which forecast (tool or model) should you compare yourself?
  • “This forecast looks correct (as it follows trend and seasonality), so it should be good.” Visually assessing forecasts might provide some insight, but it is not a standardized way to evaluate their quality, and it does not scale: you cannot visually assess thousands of forecasts.
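The third idea, comparing your forecast against another one, is the essence of benchmarking. As a minimal sketch of how such a comparison could work, the snippet below measures a forecast's mean absolute error (MAE) against a naïve (last-period) forecast; the demand values, forecast values, and helper names are all hypothetical, chosen only for illustration.

```python
def mae(actuals, forecasts):
    """Mean absolute error between two equal-length sequences."""
    return sum(abs(a - f) for a, f in zip(actuals, forecasts)) / len(actuals)

def naive_forecast(history):
    """Naive benchmark: forecast each period as the previous period's demand."""
    return history[:-1]  # forecasts for periods 1..n are the demand at 0..n-1

# Hypothetical demand history and model output for periods 1..5
demand = [100, 120, 90, 110, 105, 95]
model_forecast = [105, 110, 100, 108, 100]

actuals = demand[1:]
benchmark = naive_forecast(demand)

model_mae = mae(actuals, model_forecast)
naive_mae = mae(actuals, benchmark)
print(f"Model MAE: {model_mae:.1f}, naive MAE: {naive_mae:.1f}")
print("Model beats naive benchmark:", model_mae < naive_mae)
```

A forecast that cannot beat such a trivial benchmark adds no value, which is why simple models like this one make useful yardsticks regardless of year-over-year swings or arbitrary targets.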

10.1 Benchmarking

10.1.1 Naïve forecasts

10.1.2 Moving average

10.1.3 Seasonal benchmarks

10.2 Why tracking demand coefficient of variation is not recommended

10.2.1 COV and simple demand patterns

10.2.2 COV and realistic demand patterns

Summary