Driving Efficiency through Forecast Accuracy

All companies, big or small, do some sort of business forecasting: they estimate what their future will look like and use this information to make decisions that shape that future the way they want. However, the quality of the forecast, its usability, level of detail and timing all determine whether it will yield efficiencies in the organization or not.

In this blog post, I will explain how each of these variables affects efficiency and how measuring forecast accuracy can help you drive it. If you want to understand how forecast accuracy can be calculated, please check out my blog posts "How clean is your crystal ball: measuring forecasting accuracy - Part 1" and "Part 2".

Assessing the quality of a forecast

When you start reading about forecast quality you will come across a wide variety of KPIs and metrics. This can be overwhelming and confusing, and you will probably settle on a KPI without assessing its impact on the efficiency of the organization. So before you decide to measure a new KPI or change an existing one, ask yourself: what qualities should your forecast have so that it drives efficiency in the business?

    1. Forecast Bias
      Forecast bias is a measure of deviation: it tells you how far above or below your forecast you performed at an aggregated level. It is generally a good metric for organizations that do not have product complexity, be it in manufacturing process, lead times, profitability or shelf life, and is ideal for products that follow "delayed differentiation" methodologies, because if such organizations are close to zero forecast bias, they are most probably efficient.

      [Image: Bias]

      [Image: biasgraph]

      Forecast bias is also a good measure for companies that are new to demand planning and S&OP, as it shows them where they stand as far as their process output is concerned and generally reveals whether the organizational behavior exhibits cautious pessimism or unrealistic optimism.
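
      To make this concrete, here is a minimal sketch of how forecast bias can be calculated, assuming the common convention of aggregated forecast minus actuals expressed as a percentage of actuals (the exact definition in the original image may differ):

        # Minimal sketch of an aggregated forecast bias calculation.
        # Assumes bias = (total forecast - total actuals) / total actuals;
        # positive = over-forecasting, negative = under-forecasting.
        def forecast_bias(forecasts, actuals):
            return (sum(forecasts) - sum(actuals)) / sum(actuals)

        # Example: four months of forecast vs. actual sales (units)
        forecast = [120, 100, 130, 110]
        actual = [100, 105, 125, 118]
        print(f"Bias: {forecast_bias(forecast, actual):+.1%}")  # +2.7%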

    2. Mean Absolute Error
      Mean absolute error is probably the most common KPI associated with forecasting. However, it is a very unforgiving KPI, and it makes a lot of companies succumb to the belief that "the forecast is never accurate". Which is true, in a way. But if you have already decided that something cannot be fixed, you will not see it as an opportunity for improvement.

      [Image: MAD]

      The reason for its popularity is that it is calculated at a very granular level and can pick up inaccuracy at all levels. And that is exactly why I recommend it with caution. Not every company and scenario requires us to be extremely accurate with forecasting, and there is no need to measure a "Key Performance Indicator" if it is in fact not a key performance indicator. Let me give you an example: if my customer service levels are high (my customers are happy) and my inventory is low (given that I work on a make-to-stock strategy), the input I am giving my supply chain to act on is fairly accurate. If during the same period my forecast accuracy is poor, the KPI is not a reflection of the true business situation, and most probably it has not been set up to measure my supply chain's reactiveness.
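
      As a rough sketch, assuming error is measured as the average absolute deviation between forecast and actuals at the item level (the exact definition in use may differ), the calculation looks like this:

        # Rough sketch of mean absolute error / deviation at a granular (item) level.
        # Assumes one forecast and one actual per item; your organization may use a
        # weighted or percentage-based variant instead.
        def mean_absolute_error(forecasts, actuals):
            errors = [abs(f - a) for f, a in zip(forecasts, actuals)]
            return sum(errors) / len(errors)

        # Example: item-level forecast vs. actuals for one period
        forecast = [120, 100, 130, 110]
        actual = [100, 105, 125, 118]
        print(f"MAD: {mean_absolute_error(forecast, actual):.1f} units")  # 9.5 units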

      A lot of factors can impact mean absolute error:

      1. Masterdata
        Bad masterdata can lead to a mismatch between forecast and actuals, which causes error to accumulate. Unless masterdata housekeeping is done frequently, this can be the biggest cause of high error.
      2. Lead times (actuals vs. demonstrated)
        Generally, forecast accuracy calculations assume a certain lag (I talk more about this later). If this lag is not demonstrated in the actual supply chain lead times, it induces an error in the forecast. This error is not visible in forecast bias, as it nets off in preceding or following periods. Therefore, if your supply chain has lead times that differ by product, or if you are assuming a lag that is different from the demonstrated one, mean absolute error is not the right KPI to measure.
      3. Inventory levels
        Inventory has a big role to play in servicing lead times: higher inventory generally means more flexibility in servicing orders. Though this might be a good approach for businesses with relatively low product cost, it ruins your forecast accuracy. The only case where inventory does not induce error in the forecast is when inventory levels are a function of reaction lead times and correspond to the lag at which forecast accuracy is measured.
      4. Agreements with customers on service levels
        Many a time I have seen situations where service levels and forecast accuracy do not complement each other, which ideally they should. One cause of such a scenario is when the supplier and customer agree on a service level measurement in which orders placed beyond an agreed lead time are not counted. Though some may argue that this is not the correct way to measure service levels, it is a common practice in the industry. Having such an agreement renders a forecast accuracy measurement based on mean absolute error useless.
      5. Type of products
        For a simple product portfolio, like that of a fertilizer or industrial chemicals manufacturer, calculating mean absolute error seems futile. Generally such supply chains are highly flexible and practice delayed differentiation techniques, which means orders can be served with a different product (e.g. a different pack size) at very short lead time. In this case, mean absolute error should either be calculated at a product family or category level, or not measured at all.
      6. Others
        The list given above is not exhaustive, and care should be taken when mean absolute error is used as a KPI. The overarching principle remains the same: if the KPI does not tie in with the way business is conducted, and an improvement in the KPI does not result in better business delivery, it should be done away with. Having said that, in complex supply chains like those of packaged food or pharmaceuticals, mean absolute error is a lifesaver and must be measured.
    3. Volatility
      Recently I came across a new measure for forecast accuracy; interestingly, forecast data is not used to calculate it. Volatility uses historical sales data to check whether a product has an erratic sales pattern, indirectly suggesting that the forecast for that product will be more wrong than for a less volatile product. I like this measure because it addresses the problem directly: volatile products contribute more cost to the supply chain. You would either need to keep higher inventory levels, invest more in capacity, or take a hit on service levels to keep selling highly volatile products. Measuring this KPI and actively reducing it can go a long way in improving supply chain efficiency.

      [Image: volatility]
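
      As an illustration, here is a minimal sketch assuming volatility is measured as the coefficient of variation (standard deviation divided by mean) of historical sales; the measure shown in the original image may be defined differently:

        # Minimal sketch of a volatility measure, assuming the coefficient of variation
        # (standard deviation / mean) of historical sales; higher = more erratic.
        from statistics import mean, pstdev

        def sales_volatility(history):
            return pstdev(history) / mean(history)

        steady = [100, 105, 98, 102, 101, 99]
        erratic = [20, 180, 60, 0, 240, 105]
        print(f"Steady item:  {sales_volatility(steady):.2f}")   # ~0.02
        print(f"Erratic item: {sales_volatility(erratic):.2f}")  # ~0.85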

      However, certain business models are built entirely on capitalizing on short-term opportunities; in that case measuring volatility is useless. Take commodity trading, where the entire business model is to sell when you get a good price. Unless you have complex econometric models generating your forecasts and you are able to forecast highly volatile products with reasonable accuracy, there is probably no point measuring any KPI associated with forecast accuracy.

    4. Forecasting Lag considerations
      I briefly wrote about forecast lag earlier in this post. Generally, the forecasting lag is agreed between the supply and demand teams within the organization, and you usually end up with a lag that accounts for the maximum lead time for supply chain reaction. This may not hold true for your entire product portfolio or your entire customer landscape.

      Unless your KPI takes this into account, you will end up with a measurement that is not aligned with business operations. A general rule of thumb is that the longer the lag, the poorer the accuracy results. In certain cases, however, a longer lag is necessary: take an aircraft manufacturer, where lead times are extremely long. In such a case a longer lag better depicts business operations, and a bad long-term forecast will result in poor business performance. All the KPIs I mentioned above are impacted by forecast lag, and care should be taken when selecting it.
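
      To illustrate what the lag means in practice, here is a small sketch assuming monthly buckets and a forecast snapshot stored for each month in which it was generated (the data layout is hypothetical):

        # Small sketch of lag-based accuracy measurement (hypothetical data layout).
        # snapshots[m][t] = forecast made in month m for month t.
        def lagged_pairs(snapshots, actuals, lag):
            # Pair each actual with the forecast generated `lag` months earlier.
            pairs = []
            for t, actual in actuals.items():
                made_in = t - lag
                if made_in in snapshots and t in snapshots[made_in]:
                    pairs.append((snapshots[made_in][t], actual))
            return pairs

        snapshots = {1: {3: 120, 4: 110}, 2: {4: 115, 5: 100}}
        actuals = {3: 100, 4: 118, 5: 95}

        pairs = lagged_pairs(snapshots, actuals, lag=2)   # forecasts made 2 months ahead
        mad = sum(abs(f - a) for f, a in pairs) / len(pairs)
        print(pairs, f"MAD at lag 2: {mad:.1f}")          # [(120, 100), (115, 118)] MAD at lag 2: 11.5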

Usability of forecasts generated

You can forecast almost anything. However, supply chain experts suggest one golden rule: NEVER FORECAST SOMETHING THAT CAN BE CALCULATED. This is the concept of dependent and independent demand, which can be extended to all measures in use within a business. Taking financial forecasting as an example, I would forecast my revenue and fixed costs, but not my profit or variable costs, simply because they can be calculated.
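
As a toy illustration of the principle (all figures hypothetical), only the independent measures are forecast and everything else is derived from them:

    # Toy illustration of independent vs. dependent measures (all figures hypothetical).
    # Revenue, volume and fixed costs are forecast; variable costs and profit are calculated.
    forecast_revenue = 50_000.0       # forecast (independent)
    forecast_volume = 10_000          # forecast (independent)
    fixed_costs = 12_000.0            # forecast (independent)
    variable_cost_per_unit = 2.5      # known cost rate, not a forecast

    variable_costs = forecast_volume * variable_cost_per_unit  # calculated (dependent)
    profit = forecast_revenue - variable_costs - fixed_costs   # calculated (dependent)
    print(variable_costs, profit)                              # 25000.0 13000.0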

What is important, however, is the usability of the generated forecast. If the metric being forecasted cannot be used by the business directly, it should not be forecasted. Such metrics are a big source of confusion for the decision makers in a company.

To ensure a forecasted metric drives business efficiency two questions should be asked:

    1. How will this metric drive efficiency?
      Forecasting product sales or input costs helps drive improvements in service levels, inventory and cost. However, if the majority of your cost components are fixed, or if you have capacity bottlenecks, it is better not to spend resources on forecasting these metrics.
    2. Can this metric be calculated from another metric that has already been forecasted?
      This brings us to the age-old conundrum of double-guessing, which ends up with a roll-up of future business performance that does not reconcile.

      If a business keeps the above considerations in mind while choosing what to forecast, efficiency is a direct outcome.

    3. How detailed is your forecast?
      Continuing from the point above, once you have the usability of your forecasted metric sorted out, the next question is: what questions should this forecast answer? For example, if you are forecasting sales to improve supply chain efficiency, should you be forecasting in dollars? The answer is NO! A lot of organizations make this mistake and develop forecasts with a lower level of detail than needed. The forecast then has to be broken down using averages and percentages to make it usable, which not only destroys forecast accuracy but also induces large amounts of inefficiency in the business.
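
      As a hypothetical illustration, splitting an aggregate dollar forecast down to SKU volumes using last year's mix and prices quietly bakes last year's assumptions into this year's plan:

        # Hypothetical illustration: breaking a dollar-level forecast down to SKU units
        # using last year's revenue mix and prices. Any shift in mix or price shows up
        # as SKU-level error, even if the dollar total turns out to be spot on.
        dollar_forecast = 100_000.0
        last_year_mix = {"SKU_A": 0.60, "SKU_B": 0.40}   # last year's revenue mix
        price = {"SKU_A": 10.0, "SKU_B": 20.0}

        sku_unit_forecast = {
            sku: dollar_forecast * share / price[sku]    # derived, never actually forecast
            for sku, share in last_year_mix.items()
        }
        print(sku_unit_forecast)  # {'SKU_A': 6000.0, 'SKU_B': 2000.0}
        # If this year's mix shifts to 40/60, actual demand is 4000 and 3000 units:
        # the SKU-level plan is badly wrong even though the dollar total may be correct.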

      I always recommend forecasting at a level that captures all intrinsic patterns and trends in the product/customer mix and holds a fair degree of accuracy.

      At the other extreme, I have also seen forecasts blindly generated at the lowest possible level, turning the forecast into a data management problem rather than an outlook on business performance.

      Forecast detail is a classic trade-off situation, and therefore a critical business decision. The only guiding principle in deciding the level of detail is the end objective of efficiency and how it can be achieved without generating exorbitant amounts of data.

    4. How frequently do you update your forecast?
      Business reality is always changing. Agility and adaptability are clichéd words in supply chain development, but with due merit. Deciding how frequently your forecasts should be updated is key to driving efficient operations, and that decision hinges on understanding market dynamics and internal reaction times. This understanding yields a forecast that is accurate and drives business efficiency.

Conclusion

Forecast quality is of paramount importance to business performance. However, care should be taken in how forecasts are generated and how their quality is measured; otherwise the primary input to business operations is either unusable or too different from reality, and in either case efficiency is not the outcome.
