For those still keen to obtain some of the benefits of the causal approach, "leading edge" techniques offer a kind of halfway house. The idea is to find a variable that, although not really a causal factor, always moves in the same direction as one - and ideally a little earlier. Leading edge techniques are popular in the United States, where data on, say, housing starts gives a fairly good indication of future demand through the building industry supply chain - right back as far as the logging companies. Published surveys of companies' investment intentions - and of the size of their order books - are other useful leading edge indicators. The big drawback is that they will not provide useful information on an individual product basis. They can only give "early warning" signals.
Instead of tracking the factors that cause orders, the company can look at the trends which underlie those causes. Time series methods rely on the fact that trends can be discovered in the lumped-together purchasing decisions of individual buyers. This is where the sales manager's graph comes into its own, for time series techniques take us back to the sales history. It is nice to find that a product's history can yield some useful information about its future.
Time series techniques are essentially averaging calculations. Take the past six months' sales, calculate the average and call it next month's sales. Next month take the past six months' sales again - the last five from the previous time, plus the new month - calculate the average, and there is another month's forecast.
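The rolling calculation described above can be sketched in a few lines of Python; the six-month window and the sales figures are illustrative assumptions, not anything from a real system:

```python
def moving_average_forecast(sales, window=6):
    """Forecast next month's sales as the mean of the last `window` months."""
    if len(sales) < window:
        raise ValueError(f"need at least {window} months of history")
    recent = sales[-window:]        # the most recent six months' sales
    return sum(recent) / window     # their average becomes next month's forecast

# Twelve months of (hypothetical) sales history
history = [100, 110, 95, 105, 120, 100, 115, 90, 105, 110, 100, 120]
forecast = moving_average_forecast(history)
```

Next month, the oldest of the six values drops out of the window, the new month's actual sales enter it, and the same call produces the following month's forecast.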
Like all averages, however, the moving average is susceptible to extreme values. Consider the impact of a large, unexpected, one-off order. The average goes up, and so does the forecast for next month. And what about the recent promotional offer? That also artificially inflates the average, and therefore the forecast.
Dirty data can be cleaned, to some extent, by deliberately setting flags within the forecasting system, instructing it to disregard known one-off spikes. It is equally possible to disregard the peaks and troughs altogether: simply compute the average and discard any value outside a certain range either side of it. But sometimes they cannot be ignored. They may indicate seasonal variances.
Unfortunately, it is generally easier to detect seasonality than to quantify it. In the case of individual products there is simply too much "noise". And if you look at the total business, it may be that conflicting seasonal cycles just cancel each other out. There is quite a big German company, for example, which sells huge quantities of ice cream in the summer and of ginger cake in the winter. It could, unlikely as it may seem, find that sales were flat throughout the year.
The answer is to try to measure seasonality at product group level. The more that products are classified according to their end use, the more clearly will seasonality show through. Naturally, judgement is required here, and there might have to be some reclassifying of products between groups. Product groups based on, say, common manufacturing technology are usually less than ideal for both forecasting and sales analysis purposes.
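One common way to quantify seasonality at group level is a set of monthly seasonal indices: each month's average across the available years, divided by the overall average. The text does not prescribe a method, so this is a sketch of that standard approach with invented figures:

```python
def seasonal_indices(monthly_sales):
    """Seasonal index per calendar month: that month's average across
    all complete years, divided by the overall average (1.0 = typical)."""
    months = 12
    years = len(monthly_sales) // months          # use complete years only
    overall = sum(monthly_sales[: years * months]) / (years * months)
    indices = []
    for m in range(months):
        month_avg = sum(monthly_sales[y * months + m] for y in range(years)) / years
        indices.append(month_avg / overall)
    return indices

# Two years of hypothetical group sales with a consistent January peak
group_sales = ([200] + [100] * 11) * 2
indices = seasonal_indices(group_sales)   # January's index well above 1.0
```

Dividing a raw monthly figure by its index gives the seasonally adjusted value; multiplying a deseasonalised forecast by the index puts the seasonality back in.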
Cleaned of aberrant peaks and troughs, and with its underlying seasonality understood, even the dirtiest sales data begins to become meaningful. The cleaning-up operation is not difficult, yet forecasts are not seasonally adjusted as often as they should be. There is often an element of suspicion: "seasonally adjusted" numbers, like the Government's unemployment figures, tend to bring to mind Disraeli's comment about lies, damned lies and statistics.