Social media companies have to do a great deal of decision making under uncertainty, a whole lot of uncertainty, because every risk assessment focuses on a brand-new kind of product with no history in its marketplace. They have to bet fearlessly on a whole lot of unknowns taking place on the immaterial Internet. But as one who has enjoyed a brief two years' experience working in the blogosphere, I'll bet that, as ephemeral as that realm may seem, there likely is some there there: some value, once we figure out how to calculate it.
I know what he means, I think, but I myself don’t tend to look past sensors.
In my last blog I introduced the idea of a customised risk analysis solution to problems commonly faced in project risk management, especially cost estimation. Of course this idea is not uniquely applicable to project costs, but this paradigm is the simplest to explore, and that’s what I’m about to do.
Picture a risk register in a worksheet, created at a macro level to encapsulate most (all?) of the risks your projects may face. For any given project only a subset of these will be relevant – what is the best way to get those risks into a risk model on the next worksheet? By pressing a button, of course! It is almost trivial to write code that picks up all selected risks and places them, with the relevant data fields, in the model worksheet. It sure beats manually copying and pasting individual line items and the transcription errors that follow.
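In an @RISK workbook that button would be backed by VBA reading the register worksheet; the sketch below shows the same filtering logic in Python. The field names (`selected`, `likelihood`, and the example risks) are purely illustrative, not @RISK or register conventions:

```python
# Sketch of the "pick up the selected risks" macro logic, with invented data.
risk_register = [
    {"id": "R1", "name": "Ground conditions", "selected": True,  "likelihood": 0.30},
    {"id": "R2", "name": "Weather delays",    "selected": False, "likelihood": 0.50},
    {"id": "R3", "name": "Supplier failure",  "selected": True,  "likelihood": 0.10},
]

def build_model_rows(register):
    """Copy only the ticked risks (and the fields the model needs) across."""
    return [
        {"id": r["id"], "name": r["name"], "likelihood": r["likelihood"]}
        for r in register
        if r["selected"]
    ]

model_rows = build_model_rows(risk_register)
print([r["id"] for r in model_rows])  # → ['R1', 'R3']
```

The point is that the selection rule lives in one place: a user ticks risks in the register and presses the button, and no line item is mistyped on its way into the model.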
The next problem is utilising the workshopped parameters (likelihood of event, three-point estimates for severity, etc.) in a logical way, to be referenced by appropriate @RISK functions. Once a model structure has been agreed upon, a macro button can place @RISK distributions where they ought to go, either dictated by the paradigm (RiskBinomial for event occurrence, for example) or via a drop-down selection for dollar impact (RiskPert or RiskGamma, say). My clients have been especially thankful when I limit their choice of distributions and provide a simple flow chart to follow in making this very decision. Reducing the propensity for arguments in risk workshops is worth its weight in gold – assuming that reducing this risk 'weighs' plenty!
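The occurrence-times-severity pattern those functions implement can be sketched outside @RISK as well. The snippet below is a minimal stand-in, not the @RISK implementation: a Bernoulli draw plays the role of RiskBinomial(1, p), and a hand-rolled PERT (a Beta distribution rescaled to the three-point estimate) plays the role of RiskPert. All numbers are illustrative workshop outputs:

```python
import numpy as np

rng = np.random.default_rng(42)

def sample_pert(minimum, mode, maximum, size, rng):
    """Standard PERT: a Beta distribution reshaped onto [minimum, maximum]."""
    span = maximum - minimum
    alpha = 1 + 4 * (mode - minimum) / span
    beta = 1 + 4 * (maximum - mode) / span
    return minimum + span * rng.beta(alpha, beta, size)

n = 10_000
likelihood = 0.3                             # workshopped probability of the event
occurs = rng.random(n) < likelihood          # stands in for RiskBinomial(1, p)
impact = sample_pert(50_000, 80_000, 200_000, n, rng)  # stands in for RiskPert
cost = np.where(occurs, impact, 0.0)         # the risk contributes only when it fires

# Mean cost should sit near likelihood * PERT mean = 0.3 * 95,000 = 28,500.
print(round(cost.mean()))
```

Wiring exactly this structure behind a macro button is what keeps workshop participants arguing about the three-point estimates rather than the mathematics.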
Similarly, one or two configurations of the simulation settings are likely to satisfy all requirements, so these too can be activated by macro buttons. In this way a user can't run a 'poor' simulation and thereby create spurious results. The required simulation output can be placed into a report template attached to the model template and generated using yet another simply-labelled macro button. The result is consistent reporting across the organisation, allowing decision makers to become familiar and comfortable with simulation results they might otherwise ignore or be unaware of.
A risk model created by this process may not be the theoretically optimal one, but it will be valid and fit for its intended use. It will certainly be easy to use! The results will be consistent and should satisfy management's expectations as well as regulatory requirements.
Project cost estimation is but one example, and the possibilities above are far from the only ones imaginable. Additional complexity or alternative needs can be met just as easily with different code, essentially without practical limit. You don't need to be an expert in Monte Carlo techniques and software to run robust, credible risk analyses. All you need is a risk analysis consultant who puts the cumbersome probabilistic elements, appropriate simulation options and reporting procedures under macro control. Ask for me by name!
I have recently spoken to several clients who have all come to the same conclusion about the risk analysis solution they think is most appropriate. They don't want to do it themselves, and I have no problem with that!
Of course that’s not precisely true. The benefits of Monte Carlo techniques in risk analysis are quite well understood and there is plenty of buy-in from businesses in the Australasian region. The trouble these businesses face (particularly in the realm of project cost estimation) is that the specific process of quantifying their risks for stochastic analysis, and the ensuing simulation, is not well understood, and the means to ameliorate this appear to be beyond their reach. The modelling and simulation components of the project risk management process are not given adequate resources to be performed well, and certainly not to the extent that they provide the most useful information.
It is the case that many companies do not employ dedicated quantitative analysts. This means they have to rely upon some (maybe one) person in the team who has a non-zero quantity of experience, and possibly training, with risk simulation software to create a valid and credible stochastic model. This person is also unlikely to be given enough time to do said task, so the model inevitably suffers. It is my experience that most models – and all project cost estimation models – can be improved, or actually need to be fixed.
So the corporate mind is willing, but the flesh is weak. How can this be addressed? No amount of additional training will suddenly allow you to overcome your time and resource constraints. Perhaps you can’t get the budget for training anyway or don’t want to master risk analysis software when it’s not really core to your role? The solution is one that I personally endorse (and provide!) as a risk analysis consultant – custom Excel programming.
VBA for Excel is a fairly simple language to learn, yet a very powerful tool for automating repetitive or complex spreadsheet tasks. A customised solution involves writing VBA code to perform the tasks we'd rather not do ourselves in the risk analysis model. The "we" here refers to companies that find themselves in the situations previously described, whereby they are incapable of creating and operating these models, not necessarily through any fault of their own. In my next blog I'll examine some modelling problems and requirements and how they might be dealt with effectively using customisation.
The other night, I had the opportunity to watch a free webcast titled “Use of @RISK for Probabilistic Decision Analysis of a Manufacturing Forecast in an Environment of High Uncertainty”. This presentation was extremely timely, since many companies are struggling to survive in these challenging economic times. Dr. Jose Briones did an excellent job discussing and illustrating how profitability projections in a manufacturing environment are directly tied to how the sales forecast fits with the capability of the operation, and how different manufacturing capacities and production rates impact the output of the plant and the allocation of the fixed cost of production.
In the example he presents, a company is trying to decide how best to balance the sales of certain families of products to maximize revenue, maintain a diverse product line, and properly price each individual product based on the impact to the manufacturing schedule and fixed cost allocation.
He spends an appropriate amount of time discussing different input distributions such as the Triangular, Normal, Pert and Gamma distributions as well as sharing his recommendations on when to use them. He also shares his expertise on fixed cost allocation by product and the dangers in using the common method of dividing the fixed cost by the total production, and recommends doing so by allocating the fixed costs based on the projected run time of each product family. Lastly, he spends some time discussing the interpretation of the results, which I feel does a great job wrapping up the information presented in the webcast.
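The difference between the two allocation methods is easy to see with toy numbers (the figures below are invented for illustration and are not from the webcast). A slow-running specialty product consumes far more line time per unit, so allocating by run time shifts fixed cost toward it, where dividing by total production hides that cost:

```python
# Toy comparison of two fixed-cost allocation methods (illustrative numbers only).
fixed_cost = 1_200_000.0  # annual fixed cost of the plant

# Hypothetical product families: units produced and hours of line time consumed.
products = {
    "A": {"units": 90_000, "hours": 1_000},  # fast-running commodity product
    "B": {"units": 10_000, "hours": 3_000},  # slow, specialty product
}

total_units = sum(p["units"] for p in products.values())
total_hours = sum(p["hours"] for p in products.values())

for name, p in products.items():
    by_units = fixed_cost * p["units"] / total_units  # the common method
    by_hours = fixed_cost * p["hours"] / total_hours  # allocation by run time
    print(f"{name}: by units {by_units:>11,.0f}   by run time {by_hours:>11,.0f}")
```

Here product B carries $120,000 of fixed cost under the common method but $900,000 when allocated by run time – the kind of gap that leads to badly underpriced specialty products.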
Dr. Jose A. Briones is currently the Director of Operations for SpyroTek Performance Solutions, a diversified supplier of specialty materials, BPM software and innovation consulting services. Dr. Briones has a PhD in Chemical Engineering from Clemson University and is a graduate of the Business Administration Program of Wharton Business School. If you have any questions about the webcast, you can contact Jose at Brioneja@SpyroTek.com or through Jameson Romeo-Hall at Palisade Corporation.
My problem with this discussion, as you’ve probably gathered, is not the efforts of smart people to grapple with the opportunities and operations management issues raised by Internet-based computing. It’s the FUD that folks in computing seem to experience when it comes to clear, plain labels. They flee into the land of buzz in order to assure TO–Total Ownership–of the terms.
For starters, take the term Cloud for Internet. It all gets just a little too . . . well, vaporous. It makes me feel like the grandmother of a man being ceremoniously installed as a dean at Cornell University a while back. Having survived into her nineties and through the morning’s pomp and circumstance, she asked her grandson what exactly he would be doing in this new job, and as he started to explain, she looked as if something tasted bad. Finally she broke in. "Honey," she said, "if you can’t say it in one sentence, it has got to be illegal."
Modeling from empirical data takes observed information and attempts to replicate it in a set of calculations. There are a number of relationships to account for when incorporating those data in a model, including dependencies and/or correlations. Correlations are often omitted for a variety of reasons, which can lead to critical errors in your results. Some knowledge of the situation leads to a more credible representation of the relationships in the data, and added knowledge, perhaps from subject matter experts or other sources, aids the refinement of the conclusions one can draw from it. Whether the correlations are direct or aggregate, involving simple mathematics or greater complexities, ultimately the model is likely to be used in some form of analysis for projecting future outcomes. The knowledge brought to the model, and an analysis with embedded correlations, improve our understanding of the inherent uncertainty in a given problem.
Correlation is a principal relational element which describes relationships between variables in datasets. There may be general tendencies and patterns which drive the input risks to move together or differently from each other. It is these relationships between variables which need to be expressed in a model to bolster its usefulness, which is accomplished with correlation. It is important to remember there may be observed correlation between variables but it is not necessarily a causal relationship; it may be only a general tendency of paired behavior.
One significant aspect to note: positive correlations appear to increase uncertainty. Wait, you say, how is that possible? Knowledge is supposed to reduce uncertainty. Doesn’t knowing counteract unknowing? Think about it for a moment. In effect, the correlations included in the model reduce our uncertainty about reality while increasing the range of predicted values, which adds uncertainty. What may seem illogical at the outset really is quite logical: if two (or more) risks are positively correlated, their aggregation will produce a larger range as a consequence of Monte Carlo sampling. In fact, failing to account for correlations that really are there reduces the validity of the analysis.
Correlations are easily incorporated in models set up for Monte Carlo simulation. MCS, as a technique, generates many ‘random’ samples allowing the modeler to study a variety of scenarios and their impact on decisions. A correlation matrix defines the sampling relationship between any pair of input variables in the model. Using a tool such as Palisade’s @RISK facilitates matrix construction. Once the correlations are in place, running the MCS will produce results and scenarios that are more credible. We want decisions to be based on the best information available and the correlations lend a hand to the knowledge we already incorporate into the process.
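The effect of such a correlation matrix can be sketched outside @RISK. The snippet below is a simplified stand-in, assuming normal marginals and a Cholesky factorisation to induce the correlation (a Gaussian-copula-style trick, not @RISK's rank-order method); it shows the aggregate of two positively correlated risks spreading wider than the independent case:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 50_000

# A correlation matrix for two cost risks, as one would define it in @RISK.
corr = np.array([[1.0, 0.8],
                 [0.8, 1.0]])

# Draw correlated standard normals via the Cholesky factor of the matrix,
# then turn them into two cost risks with mean 100 and sd 20 each.
L = np.linalg.cholesky(corr)
z = rng.standard_normal((n, 2)) @ L.T
costs_corr = 100 + 20 * z
costs_ind = 100 + 20 * rng.standard_normal((n, 2))  # same risks, no correlation

total_corr = costs_corr.sum(axis=1)
total_ind = costs_ind.sum(axis=1)

# Positive correlation widens the aggregate: sd grows from sqrt(2)*20 ≈ 28.3
# to sqrt(2 * (1 + 0.8)) * 20 ≈ 37.9.
print(round(total_ind.std(), 1), round(total_corr.std(), 1))
```

The means of the two totals are the same; only the spread changes, which is exactly why leaving a real positive correlation out of a model understates the downside.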
Senior Training Consultant
I understood the use of neural network software to counter nonlinear events like market turbulence, and I understood the continual classification and reclassification. But I was intrigued that nowhere in the article was there a mention of risk, risk analysis, or even risk assessment. Maybe it was there all the time, incorporated in the proprietary software, and maybe it just wasn’t mentioned. Certainly the asset managers who developed the program were aware they were at risk–they were chewing their nails as their fund slid down right beside all the other funds that were dropping in value. But assessing risk doesn’t seem to have been a factor in the firm’s new defense against mayhem in the markets.
So. Is it time to shut down your Monte Carlo software? I don’t think so. . . .
Palisade Training services show you how to apply @RISK and the DecisionTools Suite to real-life problems, maximizing your software investment. All seminars include free step-by-step books and multimedia training CDs that include dozens of example models.
- 6-7 April 2010, Washington, DC
Decision-Making and Quantitative Risk Analysis using @RISK
- 6-8 April 2010, Washington, DC
Decision-Making & Quantitative Risk Analysis using the DecisionTools Suite
- 20-22 Apr 2010, London (near Heathrow)
Risk Modelling, Uncertainty Analysis and Decision-Making using @RISK and The DecisionTools Suite
- 19-20 April 2010, Vitória, Brazil
Project Risk Assessment for Beginner/Intermediate Users (in Portuguese)
- 26-28 April 2010, Belo Horizonte
Risk Assessment for Beginner/Intermediate Users (in Portuguese)
- 21-23 April 2010, Guayaquil, Ecuador
Risk and Decision Analysis for Finance and Banking Applications using @RISK and the DecisionTools Suite
- 26-28 April 2010, Buenos Aires, Argentina
Risk Assessment for Beginner/Intermediate Users (in Spanish)