Forecast Shenanigans

I started with Hewlett-Packard Co. in the Santa Clara Division, one of the venerable early divisions of the company. I entered as a product marketing engineer, which meant I was responsible for a family or two of products. That responsibility included product promotion, data sheets, training our sales force during new product introductions, product pricing, and product forecasting.

Forecasts come in many forms, including the annual aggregate sales forecast used in the quota-setting process, new product introduction ramp-up and life cycle forecasts, and the more mundane monthly detail of forecasting the sales volume of each individual product. That monthly forecast drives production planning, including parts procurement and manpower and capacity planning, and it affects product lead time for order fulfillment.

In my two years with HP, I had shown more interest in some of the data analytics than most of my colleagues, and order detail forecasts involve groveling in the data more than most folks want to do.  I ended up as the designated liaison to attend and represent our marketing department at the monthly forecast/production planning meeting.  Attendees included the division manager, marketing manager, manufacturing manager, materials manager, and me.

I would review the forecasts for my products, and the forecasts prepared by other product marketing engineers for theirs. Then I would go to the meeting, where the materials manager always had a long list of products where the forecast was wrong. Naturally, the forecast errors were the reason for everything that was out of control in production: high raw materials inventory, late deliveries, you name it. I’d patiently explain what I could and commit to investigating things I didn’t know.

I knew that marketing people can play games with the forecast and inflate their expectations.  The idea is that if you forecast high, then there will be plenty of product available, deliveries are good, and it’s a competitive advantage.  The other factor that lurks in the background is the marketing/R&D bond.  The team that designs a product likes an enthusiastic marketing colleague who is going to sell a lot of it.  So, the marketing engineer gets points for being “aggressive”.  Fortunately, we were quickly able to put our product forecasts on a sound basis, with only an occasional need to nudge someone back to reality.

Yet somewhere in my mind, I was sure that there were some questions I needed to answer better. Just how wrong is the forecast, can it be better, and what is the impact of being wrong?

My nemesis in this process was the Materials Manager.  I’ll call him Fred.  He had just finished a rotation in marketing.  Dave Packard had recently given his speech chastising divisions for letting inventories zoom out of control and creating a cash crisis for the company.  Fred, identified as an up-and-comer in the company, was selected to move to the Materials Manager role to slay this dragon for our division.  He apparently set himself on a strategy to make sure there was always someone else to blame.  And I wasn’t seeing a hint of collaborative spirit in his eye when he was blaming marketing, even though we had worked together recently.

That sent me into project mode to understand the question of just how good a forecast can be. The goal was simple. If the statistics of actual results versus forecast say we are doing about as well as anyone can for a product, let’s not waste time complaining about the forecast. Let’s spend the time adapting to the reality. Yet if a product forecast is consistently out of range, let’s fix the forecast.

A deep dive into some sales data paid off.  Keep in mind that the era was well before Excel spreadsheets and readily available online databases.  Sales data was hard to get.  Fortunately, there was an expert consultant working in our division and he showed me how to get into the sales database.  I had already been working with him to add some graphic plotting capability to our sales reports, and that was going to come in handy.

I looked for a time period where our business was very stable and found a 30-month window where sales were steady with modest growth. The only discernible variations from month to month were the slight seasonal undulations. That’s a period allowing the best possible forecast. So, I plotted those sales, both in total and for each of our 70 individual products.

The next step was to look at the month-by-month deviation in actual results compared to a best statistical average for each of the products.  Then with a table of percent deviation from average in one column and average monthly volume in another, I could plot deviation as a percent of monthly volume for each of the products.  The result was just as expected.  The left end of the plot, the low volume products in the range of a few to 20 units a month, showed very high deviation percentages.  The deviation could be as high as 100%.   At the far right, around 500 units per month, the average sales deviation was much smaller.  Here the forecast could routinely be much more accurate, around 5% above or below.
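For readers who want to see the arithmetic, here is a rough sketch in Python of the kind of calculation involved. The product names and monthly figures below are invented for illustration; the real analysis ran against the division’s sales database.

    import statistics

    # Hypothetical monthly unit sales over a stable window.
    # These numbers are invented; the real data came from the sales database.
    sales_history = {
        "low_volume_product":  [4, 7, 2, 9, 5, 6, 3, 8, 5, 4, 7, 6],
        "mid_volume_product":  [110, 95, 102, 120, 98, 105, 112, 99, 108, 103, 97, 115],
        "high_volume_product": [480, 510, 495, 505, 490, 520, 500, 515, 485, 505, 498, 512],
    }

    for product, monthly_units in sales_history.items():
        avg = statistics.mean(monthly_units)
        # Average absolute deviation from the long-run monthly average,
        # expressed as a percentage of that average volume.
        deviation_pct = statistics.mean(
            abs(units - avg) / avg * 100 for units in monthly_units
        )
        print(f"{product:>20}: {avg:6.1f} units/month, typical deviation {deviation_pct:5.1f}%")

Run against numbers like these, the low-volume product shows deviations of 30% or more while the high-volume product stays within a couple of percent, which is the same shape the real plot showed.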

In addition to the 30-month sales chart and the plot of deviation versus volume, I added a third chart: a recommended table of “forecast goodness” thresholds, by volume. There were three ranges: low, medium, and high volume. The idea was that if actual sales deviation fell under the threshold level, we shouldn’t spend time on the forecast error; only when actual deviation exceeded its threshold would we trigger special attention. The thresholds were proposed to trigger attention on around 10% of the products each month.
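The threshold rule itself was nothing more than a lookup by volume band. A minimal sketch of that logic looks like the following; the band boundaries and threshold percentages here are placeholders for illustration, not the values we actually adopted.

    def forecast_attention_needed(avg_monthly_volume, forecast_units, actual_units):
        # Band boundaries and threshold percentages are illustrative placeholders,
        # not the original division values.
        if avg_monthly_volume < 20:        # low volume: big percentage swings are normal
            threshold_pct = 100
        elif avg_monthly_volume < 200:     # medium volume
            threshold_pct = 25
        else:                              # high volume: the forecast should be tight
            threshold_pct = 5
        deviation_pct = abs(actual_units - forecast_units) / forecast_units * 100
        return deviation_pct > threshold_pct

    # A low-volume product missing its forecast by 50% is not flagged;
    # a high-volume product missing by 10% is.
    print(forecast_attention_needed(8, forecast_units=8, actual_units=12))       # False
    print(forecast_attention_needed(500, forecast_units=500, actual_units=550))  # True

Anything that tripped its threshold went on the short list for special attention; everything else was, by agreement, noise we would no longer argue about.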

I showed the graphs to our marketing manager, and they made sense to him. We alerted the division manager that we’d like to make a short presentation at the next planning meeting. And as a courtesy, I previewed the concepts with the production participants.

At the production planning meeting, the presentation went well.  No one had any argument with the proposed thresholds, with the promise that if we needed to adjust them, we would.  Unsaid, yet hanging in the air, was the clear implication in the data – you just can’t do any better than this.  Everyone in the room knew there wasn’t much to argue about.

As I observed in subsequent meetings, the division manager seemed relieved that we had stood down from the persistent attacks on forecast items and were able to focus much better on things that really mattered. The “Kampe Criteria” held up for many years, even after I had moved on from that division.

There was an epilogue on this topic. One of those things that really mattered was production lead time. That’s how long it takes from the start of production until a product is ready to ship; it includes ordering or manufacturing key component parts. Our division manager frequently mentioned lead time, wondering why we couldn’t shorten it. Shortening it would help keep inventory under control and improve responsiveness to customers. It’s also hard to do.

I realized that Fred, the Materials Manager, was beginning to inject the traditional complaints about forecast accuracy when the topic of production lead time came up, as if improving forecast accuracy was a substitute for reducing lead time.  That’s simply a distraction tactic.  So back to the computer for another simple model.  This topic was very much the heart of the thesis I had done in grad school.  Better yet, production had conducted a class on production scheduling and planning.  From that I learned the models they used for scheduling production runs and inventory purchases.

The model showed the effect of forecast accuracy on inventory levels at current lead times. It was negligible. The distraction tactic had only worked because the effect of forecast accuracy had never been quantified before.

The model also showed the results of improving lead times.  The results were just as expected.  Reducing production lead time yields vastly more improvement on inventory than any improvement from a better forecast.
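A toy version of that comparison, using a standard textbook decomposition of inventory (pipeline stock proportional to lead time, plus safety stock proportional to forecast error and the square root of lead time), shows the same lopsided sensitivity. The model and numbers below are only a stand-in for illustration, not the division’s model or data.

    import math

    def inventory_units(demand_per_month, forecast_error_units, lead_time_months, z=2.0):
        # Simple textbook-style model, not the original division model:
        # pipeline stock covers demand over the lead time; safety stock scales
        # with forecast error and the square root of lead time.
        pipeline = demand_per_month * lead_time_months
        safety = z * forecast_error_units * math.sqrt(lead_time_months)
        return pipeline + safety

    baseline        = inventory_units(500, 50, 3.0)   # 500/month, 10% error, 3-month lead time
    better_forecast = inventory_units(500, 25, 3.0)   # forecast error cut in half
    shorter_lead    = inventory_units(500, 50, 1.5)   # lead time cut in half

    print(f"baseline:              {baseline:6.0f} units")
    print(f"halve forecast error:  {better_forecast:6.0f} units")
    print(f"halve lead time:       {shorter_lead:6.0f} units")

With these made-up numbers, halving the forecast error trims inventory by roughly five percent, while halving the lead time cuts it nearly in half. The real model was more detailed, but the conclusion was the same.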

I asked Fred to look at the results, which he patiently did, not seeming very happy about it. At the end, I suggested, with some eagerness, that we set up some time with the division manager to show him the results. Fred replied, rather frostily, “I don’t think that will be necessary.” Apparently, it wasn’t. After that, the forecast was off the radar as a villain for production planning issues.