‘…of course it is not possible to have zero forecast errors’
So went a conversation with a potential client some years ago. As so often happens when somebody says something that, from your world view, is so obviously wrong that you have never thought about how you might counter it, I was left open-mouthed and speechless.
It got me thinking about why I don’t think it is possible to predict the future perfectly, or more generally ‘optimise’ forecasting.
I started by itemising everything you need to assume in order to believe it is possible to predict the future, or to optimise a forecast process.
Assumption Number 1: The future is like the past.
This means that you will do exactly the same things in the future that you did in the past and that, if you do the same things, they will have the same results. Furthermore, the same would have to apply to anything that influences your business in any way – competitors, the market, the economy.
Assumption Number 2: It is possible to model the past perfectly.
This means that you will be able to distinguish noise – random or one-off effects – from the recurring signal and describe that signal perfectly mathematically. It also assumes that you will be able to distinguish between mathematical models that actually do this and those that merely look as though they do. Beware! Having a good mathematical fit to the past (a high R squared value) is a poor predictor of forecasting performance – it is called ‘overfitting’ in the trade – because there is a good chance that you have mathematically captured a big chunk of the noise, so obscuring the signal.
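The overfitting trap is easy to reproduce. The sketch below is illustrative only – made-up data, not client history – and compares two models fitted to the same noisy series: a straight line and a high-order polynomial. The polynomial gets a near-perfect fit to the past, yet forecasts the held-out periods far worse.

```python
import numpy as np

rng = np.random.default_rng(42)

# Made-up history: a simple linear signal buried in random noise.
t = np.arange(24, dtype=float)                 # 24 periods
signal = 100 + 2 * t                           # the recurring signal
sales = signal + rng.normal(0, 15, t.size)     # plus noise

train, test = slice(0, 18), slice(18, 24)      # hold out the last 6 periods

def r_squared(actual, fitted):
    ss_res = np.sum((actual - fitted) ** 2)
    ss_tot = np.sum((actual - actual.mean()) ** 2)
    return 1 - ss_res / ss_tot

results = {}
for degree in (1, 8):                          # straight line vs wiggly polynomial
    coeffs = np.polyfit(t[train], sales[train], degree)
    r2 = r_squared(sales[train], np.polyval(coeffs, t[train]))
    mae = float(np.mean(np.abs(sales[test] - np.polyval(coeffs, t[test]))))
    results[degree] = (r2, mae)
    print(f"degree {degree}: in-sample R² = {r2:.3f}, out-of-sample MAE = {mae:.1f}")
```

The high-order fit ‘explains’ the history better precisely because it has swallowed the noise, and it pays for that the moment it has to extrapolate.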
Assumption Number 3: Any adjustments made to a forecast perfectly capture the impact of one-off events.
75% of statistically generated forecasts are judgementally adjusted. It is a reasonable assumption that there will be ‘one-off events’ occurring in the future that cannot be predicted from the historic record, but is it safe to assume that the size and timing of every one of these is capable of being predicted perfectly?
Assumption Number 4: Any failure of these assumptions will be detected and corrected immediately.
Any failure in the forecast process will be detected and corrected immediately. This means that it is possible to analyse every forecast error, every period, and determine the probability that it is evidence of a problem rather than simply random noise.
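One classic way to separate a real problem from noise is a Trigg-style tracking signal: exponentially smooth the forecast errors and their absolute values, and flag the forecast when the ratio drifts persistently away from zero. The sketch below uses made-up numbers, and the smoothing constant is an illustrative choice, not a universal one.

```python
import numpy as np

def tracking_signal(errors, alpha=0.2):
    """Smoothed-error tracking signal (Trigg-style).

    Ratio of exponentially smoothed error to exponentially smoothed
    absolute error. It lies in roughly [-1, 1]; values near zero look
    like random noise, values pushed towards +/-1 suggest a persistent
    bias - evidence of a real problem in the forecast process."""
    e_smooth, mad, out = 0.0, 1e-9, []
    for e in errors:
        e_smooth = alpha * e + (1 - alpha) * e_smooth
        mad = alpha * abs(e) + (1 - alpha) * mad
        out.append(e_smooth / mad)
    return np.array(out)

rng = np.random.default_rng(0)
noise = rng.normal(0, 10, 24)            # unbiased errors: pure noise
ts_noise = tracking_signal(noise)
ts_biased = tracking_signal(noise + 12)  # a step change the model missed

print(f"final signal, noise only: {ts_noise[-1]:+.2f}")
print(f"final signal, biased:     {ts_biased[-1]:+.2f}")
```

Monitoring every series this way every period is exactly what Assumption 4 quietly demands – and what few processes actually do.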
I don’t know about you, but I think it is rather improbable that all four of these assumptions are realistic descriptions of the world in which we live. The only one I have any confidence in is the last, and yet it is the very area that many companies fail to address.
My sense is that the future is approximately like the past most of the time, except when it isn’t; that we ought to be able to model it tolerably well; and that we can do a decent job of spotting the big exceptions and allowing for them in the forecast. The end result then ought to be something that is ‘good enough’ most of the time.
But should that be the limit of our ambition? Let’s take a look at the evidence.
In our work we have discovered that many companies struggle to beat an approach that simply uses this period’s actual as the forecast for the next period – a ‘naïve forecast’. Typically, around 50% of low-level, granular forecasts fail to beat this very modest benchmark.
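Checking a forecast against this benchmark takes a few lines of arithmetic. A minimal sketch, using made-up actuals and a hypothetical system forecast rather than real client data:

```python
import numpy as np

def mae(actual, forecast):
    """Mean absolute error."""
    return float(np.mean(np.abs(np.asarray(actual) - np.asarray(forecast))))

# Made-up history for 8 periods, and a hypothetical system forecast
# for periods 2..8 (nothing can forecast period 1 from history alone).
actuals = np.array([102.0, 98, 110, 95, 103, 99, 107, 101])
system = np.array([100.0, 105, 103, 104, 98, 106, 100])

# Naive benchmark: this period's actual is the forecast for next period.
naive = actuals[:-1]
target = actuals[1:]

naive_mae = mae(target, naive)
system_mae = mae(target, system)
print(f"naive MAE:  {naive_mae:.2f}")
print(f"system MAE: {system_mae:.2f}")
print(f"value added vs naive: {naive_mae - system_mae:+.2f}")
```

Run per item and per location, this is the kind of granular value-added measurement that exposes the half of forecasts doing worse than doing nothing clever at all.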
So while the geeks struggle to work out how to create a perfect forecast, there is a veritable glut of low-hanging fruit rotting on the vine… out of sight, because of under-investment in the measurement and monitoring process.
I recommend that practitioners aim to install ‘good enough’ forecasting tools and processes with the minimum of fuss and quickly switch their attention to evidence-based, continuous improvement, making sure that they have the right measurement techniques and tools to track value added at a very granular level (as described in Assumption 4 and elsewhere on this site).
So I say, start with ‘good enough’ and strive to improve. And the best way to improve is to weed out the dumb stuff.
Perfection can wait.