In my last blog I mentioned a ‘fact’ about data that came up during a recent public training course (Decision-Making and Quantitative Risk Analysis). This fact stuns me every time I think about it, and certainly floored me the first time I encountered it. So many companies just don’t have it.
Data, that is. Historical data from completed projects, sometimes billion-dollar projects, is simply not collected, especially in resources and infrastructure cost estimation. Instead, every risk is re-estimated from scratch in every new project, based entirely on an estimator’s recollections or guesses. This is not to suggest that estimators don’t know what they’re talking about; rather, the benefits of adding historical data to the analysis far outweigh the cost of gathering that data in the first place.
I first worked in the banking sector, hence my surprise at this lack of data collection in certain areas of risk analysis. Project cost estimation, especially in resources and infrastructure – I’m talking to you. In financial circles there are literally millions of data points collected daily across the entire organisation. Gathering data (and then analysing it for some benefit) is simply ‘what we do’, and the process goes unchallenged. Some of the data points are quite ‘small’, such as the number of seconds a particular caller was kept on hold before being answered; others are quite ‘big’, such as multi-million-dollar losses due to fraudulent activity. Regardless, it is all kept in the knowledge that information is power – in this case, the power to make intelligent decisions in the future.
How can you judge the efficacy of an estimation process (workshops and so on) if you don’t track the final observed outcomes specifically to make that judgment? You can’t. And that leaves your company’s risk and decision assessment process in limbo. Without measurement there can be no process improvement or corporate learning. Are you ‘passing’ or ‘failing’ in your use of Monte Carlo simulation within your risk analysis software?
Generally the observed outcomes for risks in models will be near the estimated values, and this is to be expected. However, the main role of risk analysis is to assess exposure to the unexpected, and far too many cost estimation models have very little volatility in their line items. I am very curious to know just how often the realised value of a given line item falls outside the range of ‘possible values’ defined in the model. And what about total project costs overall? This leads to the big question: what could, and should, be done with such data if it were recorded?
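To make the bookkeeping concrete, here is a minimal sketch in Python of the kind of check I have in mind: comparing realised line-item costs against the estimated ranges recorded in the model. All line-item names and figures below are hypothetical, purely for illustration.

```python
# Each entry: (line item, estimated low, estimated high, realised cost)
# These figures are invented for illustration only.
line_items = [
    ("earthworks", 1.0, 1.4, 1.3),
    ("concrete",   2.0, 2.5, 2.9),   # realised above the estimated range
    ("steelwork",  3.0, 3.8, 3.5),
    ("electrical", 0.8, 1.1, 0.7),   # realised below the estimated range
]

# Items whose realised cost fell outside the estimated "possible" range
outside = [
    name for name, low, high, actual in line_items
    if not (low <= actual <= high)
]

# Fraction of items whose range actually covered the realised value
coverage = 1 - len(outside) / len(line_items)

print(f"Items outside their estimated range: {outside}")
print(f"Range coverage: {coverage:.0%}")
```

Tracked across many completed projects, a coverage figure like this is exactly the sort of evidence that would tell you whether your estimated ranges are wide enough, or whether your models systematically understate volatility.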
I shall address these questions in the next blog. I know you’re excited to find out!