In previous blog posts I have discussed how to go about the process of identifying forecasting bias: systematic under- or over-forecasting.
Unfortunately, there are more ways of getting this wrong than there are of getting it right, and I would like to share one that I came across recently at a major blue-chip company.
Like many botched attempts at measuring and managing forecast quality, it has the peculiar characteristic of hiding stupidity under a superficial cloak of common sense, which I find particularly irritating.
It is best explained using a simple example. Consider this pattern of forecast errors:
            Period 1   Period 2   Period 3   Period 4   Total
Product A        +4         +6         +5         +5     +20
Product B       +10        -10         +9         -9       0
Product C       -10         +8        -11         +9       0
Total            +4         +4         +3         +5     +20
It is pretty clear what is going on in this example.
At the total level there is a clear pattern of bias – consistent over-forecasting – which is entirely down to Product A. The forecasts for products B and C are unbiased – their average error is zero – but unlike A, their pattern of errors is highly variable. This may be because they are more difficult to forecast, or simply because they are much larger products than Product A.
But now imagine that you are managing this process and you have just received the data for the first period. You suspect that you have a bias problem because you have over-forecast by 4 units, and you are now trying to find out why. When you drill into the detail, you see that the biggest contributor is Product B (+10), so that is obviously the source of the bias, isn't it?
What about Period 2? Now the biggest contributor is Product C. And Periods 3 and 4? Products B and C again. In fact, 'the problem' is never Product A!
So what's going on? The mistake is to try to detect a systematic problem – which is a property of the sequence of errors – by looking only at individual errors. As a result, the only systematic error (Product A) has been completely obscured by unsystematic variation (products B and C).
Of course, this is not quite so obvious in a large company when you are dealing with thousands of forecasts, but nevertheless, it is not exactly rocket science, and I think we have the right to expect better from intelligent people. What made it worse is that the flawed logic of the process infected the language used in the business, so that a single forecast error came to be described as forecast bias – an obvious misnomer.
'So what?' you might say. Why get wound up over a word?
The reason is the consequence of this failure of logic. Not only does it mean that the real cause of the problem is missed; it can easily lead to a deterioration in the overall quality of the forecast process.
So, going back to our original example, suppose that after Period 1 we decide to correct Product B's 'forecast bias' by reducing the forecast for Period 2 by 10 units. Instead of Period 2 being under-forecast by 10 units, it is now under-forecast by 20 units. And if Period 2's under-forecasting is in turn treated as bias rather than variation, the following period's error becomes +29 rather than +9, and so on.
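This runaway effect is easy to simulate. Here is a hypothetical sketch of the naive policy applied to Product B's errors from the example: after each period, the last observed error is treated as bias and subtracted in full from the next forecast (assuming the convention error = forecast minus actual).

```python
# Product B's underlying errors from the example (+ means over-forecast).
base_errors = [10, -10, 9, -9]

adjustment = 0        # cumulative correction applied to the forecast
adjusted_errors = []
for err in base_errors:
    observed = err + adjustment   # error actually seen after the correction
    adjusted_errors.append(observed)
    adjustment = -observed        # naively 'fix' next period by the full error

print(adjusted_errors)  # [10, -20, 29, -38]
```

Because the variation is not bias, each correction overshoots in the opposite direction and the errors grow from 10 to 20 to 29 to 38 units – the forecast gets steadily worse, exactly as described above.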
They say a problem shared is a problem halved. Well, writing this has helped me, but that doesn't really count. What does matter is that many companies have invested many millions in forecasting technology and processes, and then proceed to waste that investment through sloppy logic and flawed measurement and management practices.
It is about time that we realised that forecasting does not stop at the production of numbers; the quality of the performance management process is often the difference between worthy endeavour and adding value.