Communicating Forecast Uncertainty


The inherent uncertainty in forecasting is rarely communicated well. People don’t understand it, and they don’t incorporate it into their discussions.

Because the majority of demand forecasts today are presented as point forecasts, they convey a sense of certainty that does not actually exist. Point forecasts can be misleading, misunderstood, and can even undermine trust in forecasters and their process.

Additional information about a forecast’s uncertainty informs decisions and leads to better outcomes.

This additional information can be as simple as prediction intervals. Rooted in the historical performance of the forecast, prediction intervals provide a sense of the range of possible values at some degree of certainty. They allow planners to say things like: “With 90% certainty, we will sell somewhere between 1,000 and 1,500 units next period.”
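As a sketch of how such an interval can be derived: assuming past forecast errors are roughly normally distributed, a 90% interval is the point forecast plus or minus about 1.645 standard deviations of those errors. The error values below are made up purely for illustration.

```python
import statistics

def prediction_interval(point_forecast, past_errors, z=1.645):
    """Prediction interval from historical forecast errors.

    Assumes errors (actual - forecast) are roughly normal;
    z = 1.645 gives ~90% two-sided coverage.
    """
    spread = z * statistics.stdev(past_errors)
    return (point_forecast - spread, point_forecast + spread)

# Hypothetical errors from the last 8 periods (illustrative only).
errors = [-120, 80, 40, -60, 150, -90, 30, -10]
low, high = prediction_interval(1250, errors)
```

The planner can then report `low` and `high` alongside the point forecast of 1,250 rather than the single number alone.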


Challenges in Conveying Uncertainty 


Forecasts are used by a number of stakeholders for a variety of reasons: senior management, supply chain managers, other demand planners, sales and marketing teams, and more. Every user has unique needs and a different understanding of the forecast, and may need more (or less) context around the numbers.

Does the end user fully understand, and can they intelligently use, the information being provided to them? Often they understand the forecast, but not the level or source of its uncertainty.

If the prediction interval, or the range around the forecast, is too wide, people may not be able to act on the information effectively. When uncertainty is presented in a fan chart, extremely wide bands may do more harm than good. For example, if the point forecast is 100 units with 90% certainty that actuals will land within plus or minus 100 units, the interval spans everything from zero to 200. Uncertainty that broad is difficult to use effectively.



Without getting deep into the underlying math, uncertainty can be quantified with prediction intervals, whose width represents the degree of uncertainty. One key cautionary note: do not simply guess, calculate it. Calculating uncertainty from the errors of past statistical forecasts is one thing; it’s a whole different ball game if it’s being ‘calculated’ solely from management input.

As much as we hate to admit it, people generally do a poor job of understanding and conveying uncertainty. They usually underestimate it, and this absolutely needs to be taken into account.

Fig. 1 embodies the human factor that creeps into forecasting. With Daniel’s permission, his post in the official IBF LinkedIn group is included here because it is a prime example of how ‘politics’ can undermine the numbers. The comments go on to discuss how the influence of individual bias diminishes as a process matures, yet it still creeps in occasionally.


When Done Properly 

When done properly, communicating the uncertainty around a forecast can and will lead to more effective decision making. With a more complete information set, stakeholders are better informed about the potential outcomes and can act accordingly. People will act according to the information they’re given and, just as importantly, how they interpret that information.



To acknowledge that uncertainty exists is to acknowledge that there are factors driving it. Having conversations about what those factors are leads to a better examination of the assumptions that underlie the point forecast, and ultimately a more robust S&OP process.

For example, let’s say the forecast for next period is 100 units. Current inventory and safety stocks will be taken into account, but ultimately right around 100 units will be produced. Now let’s say the forecast is 100 units with a 95% prediction interval of plus or minus 50 units. That is to say: “We’re 95% certain that we will sell between 50 and 150 units next period, but the most likely value is 100 units.”

More diligent thought must now be put into the forecast calculations and the drivers being considered. If key accounts order this product, perhaps the planner decides to err on the side of overproducing rather than run the risk of shorting them.
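That hedge toward key accounts can be made explicit. The sketch below uses the illustrative numbers from the example above (forecast 100, 95% interval 50–150) and blends the point forecast toward the interval’s upper bound; the `hedge` parameter is a made-up knob for illustration, not a standard formula.

```python
def production_target(forecast, upper, hedge=0.5):
    """Blend the point forecast toward the interval's upper bound.

    hedge = 0.0 trusts the point forecast outright;
    hedge = 1.0 produces enough to cover the interval's upper bound.
    """
    return forecast + hedge * (upper - forecast)

# Illustrative numbers from the text: forecast 100, 95% upper bound 150.
target = production_target(100, 150, hedge=0.5)
```

A planner serving key accounts might push `hedge` higher; one with expensive inventory might keep it near zero.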



If done correctly, communicating forecast uncertainty can increase people’s trust in both the forecasting process and the forecaster. Point forecasts are almost always wrong by their very nature, which degrades trust in the planner. Conveying the range of possible outcomes paints a much more realistic picture and, by extension, will more often be correct. By demonstrating to other stakeholders that there isn’t 100% certainty, the forecaster gives them a sense of the reliability of the information being provided.



With the right level of education and understanding, the forecast user can weigh uncertainty alongside their knowledge of other factors, such as how safety stocks are set. They can make better decisions on a case-by-case basis, with all relevant factors considered. The key is that the people receiving the forecast understand the measures of uncertainty being shared with them, and how they were derived.


3 Ways to Convey Uncertainty 


With the correct information included, fan charts provide a wealth of context for a forecast. Of course, the more certain you want to be, the wider the fan will be. Fan width also depends on the amount of history used to generate the forecast, as well as the statistical forecasting technique being used.

To show the various levels of certainty, include multiple bands on the fan, each labeled with its confidence level. Depending on how much variance there is in the item being forecasted, the gaps between the bands of the fan will be wider or narrower.

Fig. 2 Fan chart with confidence bands
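A minimal sketch of how the bands of such a fan might be computed, assuming roughly normal errors whose standard deviation grows with the square root of the horizon (true for random-walk-style models; other techniques widen the fan differently). The z multipliers and numbers are illustrative.

```python
import math

# Two-sided normal z multipliers per confidence band (an assumption;
# use empirical quantiles instead if errors are not roughly normal).
Z = {50: 0.674, 80: 1.282, 95: 1.960}

def fan_chart_bands(point_forecasts, sigma_one_step):
    """Band edges per horizon. The error std dev is scaled by
    sqrt(horizon), which is what makes the fan widen over time."""
    bands = []
    for h, f in enumerate(point_forecasts, start=1):
        sigma_h = sigma_one_step * math.sqrt(h)
        bands.append({lvl: (f - z * sigma_h, f + z * sigma_h)
                      for lvl, z in Z.items()})
    return bands

# Flat 100-unit forecast over three periods, one-step error std of 10.
fan = fan_chart_bands([100, 100, 100], sigma_one_step=10)
```

Plotting each level’s lower and upper edges as shaded regions, darkest for 50% and lightest for 95%, produces the familiar fan shape.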



To draw from the world of weather forecasting, Table 1 lays out the uncertainty scale used by the Intergovernmental Panel on Climate Change (IPCC). Alongside the quantitative likelihood percentages are corresponding qualitative descriptions. Having both addresses the issue of humans being poor interpreters of percentage-based uncertainty.


Term                      Likelihood of the occurrence/outcome
Virtually certain         > 99% probability
Very likely               > 90% probability
Likely                    > 66% probability
About as likely as not    33% – 66% probability
Unlikely                  < 33% probability
Very unlikely             < 10% probability
Exceptionally unlikely    < 1% probability

Table 1. IPCC uncertainty scale 
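A small helper can translate a probability into the IPCC’s language from Table 1. Note that the scale’s bands deliberately overlap (e.g. “very likely” outcomes are also “likely”), so the cutoffs below are one reasonable reading, not the only one.

```python
def ipcc_likelihood(p):
    """Map a probability in [0, 1] to the IPCC qualitative term,
    using one consistent set of cutoffs from Table 1."""
    if p > 0.99:
        return "virtually certain"
    if p > 0.90:
        return "very likely"
    if p > 0.66:
        return "likely"
    if p >= 0.33:
        return "about as likely as not"
    if p >= 0.10:
        return "unlikely"
    if p >= 0.01:
        return "very unlikely"
    return "exceptionally unlikely"
```

A planner could append `ipcc_likelihood(0.75)` to a report so stakeholders read “likely” instead of a bare 75%.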



For cases where there are varying degrees of uncertainty, consider using colour. Looking to the world of weather forecasting once again, the Weather Prediction Center’s forecast in Fig. 3 does a wonderful job of this. Instead of making a blanket statement that the forecast for the East Coast is 6” of snow, it can be stated that areas like Washington have a 0–1% chance of accumulating 6” of snow while Boston has a 70–80% chance of getting half a foot.

Fig. 3 Weather Prediction Center’s forecast for at least 6” of snow accumulation, valid 00z 1/4 to 00z 1/5.

Putting it All Together 

Pair the colour coding with the IPCC’s language as well, and you can say that it is exceptionally unlikely that Washington will have 6” of snow accumulation, while it is likely that Boston will. Compare this description to the point forecast of 6” of snow accumulation, and it is easy to tell which is more informative.  

Of all the above recommendations, the common thread is using more than one descriptor. Rather than providing a single number, like a point forecast, make use of probabilities, variance, language, and colour. Additional qualitative and quantitative descriptors add context and build trust throughout the S&OP team. Take the time to educate forecast end users on the meaning and derivation of the metrics, and they will make more informed decisions that lead to better outcomes.


Author: Marcus Rogers


