Because the word ‘forecast’ is in everyday use, it’s easy to think that we understand what it means when the context moves from weather to business. But forecasting in business can be easily derailed by one or more myths that can frustrate our efforts to improve forecast quality - or even make matters worse.
Here are seven of the most common:
In practice, up to 50% of forecasts fail to beat the simplest benchmark - a naïve forecast, where last period’s actual is used as the forecast for this period. This can be made even harder to bear by the expense of forecasting software and processes.
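The naïve benchmark is simple enough to sketch in a few lines. The figures below are invented for illustration, and the "model" forecasts stand in for whatever your forecasting system produces; a forecast only adds value when it beats this benchmark.

```python
# A minimal sketch of the naive benchmark: last period's actual
# becomes this period's forecast. All numbers are illustrative.

def naive_forecast(actuals):
    """Shift actuals by one period; there is no forecast for period 1."""
    return actuals[:-1]

def mae(forecasts, actuals):
    """Mean absolute error between paired forecasts and actuals."""
    return sum(abs(f - a) for f, a in zip(forecasts, actuals)) / len(actuals)

actuals = [100, 104, 98, 110, 107, 115]
model = [102, 101, 103, 104, 112, 111]  # hypothetical system forecasts

naive = naive_forecast(actuals)          # forecasts for periods 2..6
naive_error = mae(naive, actuals[1:])
model_error = mae(model[1:], actuals[1:])

print(f"naive MAE: {naive_error:.1f}, model MAE: {model_error:.1f}")
print("adds value" if model_error < naive_error else "destroys value")
```

The same comparison works with any error measure; the essential point is that the benchmark costs nothing to produce, so anything more expensive must do better.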
Many businesses focus exclusively on forecast accuracy rather than measuring and managing the value added by forecasting.
While the level of error on stable, easy-to-forecast products may be low, it is often these forecasts that fail to add value, because their error is still higher than that of the simple naïve benchmark. And if your stable products also have the highest demand, even small failures to beat the benchmark can have a disproportionately large impact on the performance of forecasting as a whole.
In this case, focussing exclusively on simple forecast accuracy measures actually makes matters worse.
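To see how this plays out, value added can be weighted by volume rather than averaged across products. The product names, volumes and error figures below are invented purely to illustrate the effect: a stable bestseller with apparently good accuracy can quietly destroy more value than an erratic niche line creates.

```python
# Illustrative only: per-product accuracy can look respectable while
# volume-weighted value added is negative. All numbers are invented.

products = {
    # name: (volume, model MAE, naive MAE)
    "stable-bestseller": (10_000, 5.0, 4.0),    # model loses to naive
    "erratic-niche":     (500,    30.0, 38.0),  # model beats naive
}

total_value = 0.0
for name, (volume, model_mae, naive_mae) in products.items():
    value_added = (naive_mae - model_mae) * volume  # error avoided, in units
    total_value += value_added
    print(f"{name}: value added {value_added:+,.0f} units")

print(f"portfolio value added: {total_value:+,.0f} units")
```

Here the bestseller's small shortfall against the benchmark, multiplied by its volume, outweighs the genuine gains on the harder product, so the portfolio as a whole is worse off.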
The academic evidence is that there is little correlation between increasing sophistication and improved forecast quality. It also confirms that so-called optimising functions in software are no ‘silver bullet’. The ability to fit a mathematical model to history is a poor predictor of forecasting performance.
Most forecasting packages use a similarly small set of forecasting algorithms – each of which will work well in some circumstances, but fail in others. In practice, the best way to improve performance is to identify where the chosen approaches destroy value and stop this, rather than pursuing the unattainable ideal of ‘optimisation’.
All mathematical approaches work from the assumption that the future is more or less like the past, which is often not the case. So it is easy to see why manual intervention, often in the form of consensus forecasting, where collective judgement is used to hone the statistical forecasts, is a common feature of forecast processes.
Again, research shows that, while many interventions do add value, the majority of judgemental adjustments make forecasts worse, not better. The best way to improve forecasts is often to do less; the trick is to work out which types of intervention should be stopped and which encouraged.
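Working out which adjustments to stop starts with a simple question for each one: did the judgemental change land closer to the actual than the statistical forecast it overrode? A sketch of that check, with hypothetical records, might look like this:

```python
# A sketch of reviewing judgemental adjustments: for each record,
# did the adjustment beat the statistical forecast it replaced?
# The records here are hypothetical examples.

records = [
    # (statistical forecast, adjusted forecast, actual)
    (100, 120, 105),  # large upward adjustment overshoots
    (80,  75,  74),   # small downward adjustment helps
    (60,  90,  62),   # optimistic adjustment destroys value
]

helped = sum(1 for stat, adj, actual in records
             if abs(adj - actual) < abs(stat - actual))

print(f"{helped} of {len(records)} adjustments improved the forecast")
```

Classifying adjustments this way, for example by size, direction or author, is what reveals which kinds of intervention systematically add value and which should simply stop.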
Forecasting can be made complicated, often unnecessarily so. To avoid this trap, decisions should be driven by evidence – in this case, evidence of whether and where forecasting has added value to the business, expressed in a language understood by all rather than an obscure statistic.
It is unlikely that one person or function holds a monopoly on the knowledge and expertise needed to forecast well; it is inevitably a collective effort. And, until now, there has been no objective way to set comparative targets for forecast performance, since "good" and "bad" depend on the forecastability of demand, which varies between products, geographies and over time.
But when measured correctly, forecast performance can be compared and this, allied to a philosophy of continuous improvement, provides the foundation of a process for managing forecast performance.
Forecasts don’t need to be perfect predictions to add value to the business; any reduction in error will provide better information for decision making, and both statistical methods and management judgement have roles to play.
If we accept that it is always possible to improve forecasts, and that doing so can often involve less work and expense, the trick is to work out which method is best to use and where, and to continuously monitor and manage the performance of the process.