Month: April 2010

Calculating the There There

Gertrude Stein once famously criticized Oakland, California, by observing that "there is no there there." I think of this often as I watch a fascinating and rapidly rising business trend: the conglomeration of social media enterprises via acquisitions and mergers. This week the European company Wikio, a specialist in the "blogosphere and social media," acquired the French company Neotia, which purveys a "buzz monitoring and online reputation management platform."
 
Wikio, which purportedly indexes a million websites, wanted to buy Neotia because that company is good at analyzing the influence of brands and buzz campaigns, and because of its CEO's expertise in decision analysis. Neotia wanted to be bought because Wikio will give potential clients a ready means of finding Neotia.
 
What is remarkable to me is that here is an industry in which the functions of the products are so new that their originators also have to originate names for what they do–buzz monitoring, measurement of online influence, and so forth–and yet, with only a couple of years' experience in most cases, these companies are buying and selling each other. How do they create strategies for these products? How do they calculate prices? And then, when they want to swallow up a competitor, how do they calculate an offering share price and figure out their value-at-risk? And how do they project how long any of these values will hold?
 

Social media companies have to do a lot of decision making under uncertainty, a whole lot of uncertainty, because every risk assessment focuses on a brand-new kind of product with no history in its marketplace. They have to bet fearlessly on a whole lot of unknowns playing out on the immaterial Internet. But as one who has enjoyed a brief two years' experience working in the blogosphere, I'll bet that, as ephemeral as that realm may seem, there likely is some there there, some value, once we figure out how to calculate it.

20 Questions in a New Orbit

An Ottawa toy developer is trying to make a jet-propelled leap from an online game to space travel. His vehicle? A neural network designed as the back-end system for a game of 20 Questions. Twelve years ago Robin Burgener wrote a neural net program to train on the sequences of player responses to questions–beginning with Animal? Vegetable? Mineral?–posed by the neural network.
 
 
The game does more than pose simple yes-or-no questions to lead you to a conclusion. The neural network algorithm is able to pose different questions in different orders, and it gets the right answer about 80 percent of the time.
 
Now, apparently, the sky's the limit for Burgener's neural network. He was scheduled to make a presentation late last month at the Goddard Space Flight Center explaining the potential uses for a neural-networked 20 Questions on board a spacecraft. These uses center broadly on troubleshooting technical and equipment problems and subsequently anticipating future problems.
 
If, as he claims, his neural net guessing program can work around responses that are misleading or downright lies, then what that would mean for space travelers, he concludes, is that "if a sensor fails, you're able to see past it."
 

I know what he means, I think, but I myself don’t tend to look past sensors.        

Robust Risk Analysis for the Time/Expertise Poor – Part 2

In my last blog I introduced the idea of a customised risk analysis solution to problems commonly faced in project risk management, especially cost estimation. Of course this idea is not uniquely applicable to project costs, but this paradigm is the simplest to explore, and that’s what I’m about to do.

Picture a risk register in a worksheet that has been created at a macro level to encapsulate most (all?) of the risks your projects may face. For any given project only a subset of these will be relevant – what is the best way to get these risks into a risk model on the next worksheet? By pressing a button, of course! It is almost trivial to write code that picks up all selected risks and places them and the relevant data fields in the model worksheet. It sure beats manually copying and pasting individual line items, and the transcription errors that follow.
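
As a taste of what that button can do, here is a minimal VBA sketch. The sheet names ("Risk Register", "Risk Model"), the Yes/No flag in column A, and the data columns B:F are hypothetical placeholders, not a prescription:

    Sub LoadSelectedRisks()
        ' Copy every risk flagged "Yes" in the register across to the model sheet.
        Dim wsReg As Worksheet, wsMod As Worksheet
        Dim lastRow As Long, r As Long, target As Long

        Set wsReg = ThisWorkbook.Worksheets("Risk Register")
        Set wsMod = ThisWorkbook.Worksheets("Risk Model")

        lastRow = wsReg.Cells(wsReg.Rows.Count, "B").End(xlUp).Row
        target = 3                                   ' first data row of the model sheet

        For r = 2 To lastRow                         ' row 1 holds the register headers
            If UCase$(wsReg.Cells(r, "A").Value) = "YES" Then
                wsReg.Range(wsReg.Cells(r, "B"), wsReg.Cells(r, "F")).Copy _
                    Destination:=wsMod.Cells(target, "B")
                target = target + 1
            End If
        Next r
    End Sub

Assign a routine like this to a button on the register sheet and the selected risks land in the model in one click, identically every time.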

The next problem is utilising the workshopped parameters (likelihood of event, three-point estimates for severity, etc.) in a logical way so they can be referenced by appropriate @RISK functions. Once a model structure has been agreed upon, a macro button can place @RISK distributions where they ought to go, either dictated by the modelling paradigm (using RiskBinomial, for example) or via a drop-down selection for the dollar impact (RiskPert or RiskGamma, say). My clients have been especially thankful when I limit their choice of distribution and provide a simple flow chart to follow in making this very decision. Reducing the propensity for arguments in risk workshops is worth its weight in gold – if we can assume that reducing this risk 'weighs' plenty!
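
To make that concrete, here is a hedged sketch of the formula-placing step, carrying on with the hypothetical layout above: likelihood of occurrence in column C, the three-point cost estimate in columns D:F, and the risked cost formula written to column H. RiskBinomial and RiskPert are standard @RISK worksheet functions; a Select Case on a drop-down column could swap in RiskGamma or another distribution in the same way:

    Sub PlaceRiskFormulas()
        ' Write an event-times-impact @RISK formula against each risk in the model sheet.
        Dim wsMod As Worksheet
        Dim lastRow As Long, r As Long

        Set wsMod = ThisWorkbook.Worksheets("Risk Model")
        lastRow = wsMod.Cells(wsMod.Rows.Count, "B").End(xlUp).Row

        For r = 3 To lastRow
            ' Occurrence (RiskBinomial with n = 1) multiplied by a PERT dollar impact
            wsMod.Cells(r, "H").Formula = _
                "=RiskBinomial(1,C" & r & ")*RiskPert(D" & r & ",E" & r & ",F" & r & ")"
        Next r
    End Sub

Because the code, not the user, decides where the distributions go and how they are parameterised, every project model in the organisation ends up structured the same way.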

Similarly, one or two instances of the simulation settings are likely to satisfy all requirements, so these too can be activated by macro buttons. In this way a user can't run a 'poor' simulation and thus create spurious results. The simulation output that is required can be placed into a report template attached to the model template and generated using yet another simply labelled macro button. In this way there will be consistent reporting across the organisation, allowing decision makers to become familiar and comfortable with simulation results they might otherwise ignore or be unaware of.

A risk model created by this process may not be the theoretically optimal one, but it will be valid and in context with its intended use. It will certainly be easy to use! The results will be consistent and should satisfy management’s desires as well as regulatory requirements.

Project cost estimation is but one example, and the above possibilities are far from the only ones imaginable. Additional complexity or alternative needs can be met just as easily with different code, essentially without practical limits. You don't need to be an expert in Monte Carlo techniques and software to run robust, credible risk analyses. All you need is a risk analysis consultant who puts the cumbersome and probabilistic elements under macro control, along with appropriate simulation settings and reporting procedures. Ask for me by name!


Rishi Prabhakar
Trainer/Consultant

Robust Risk Analysis for the Time/Expertise Poor – Part 1

I have recently spoken to several clients who have all come to the same conclusion about the risk analysis solution they think is most appropriate. They don't want to do it, and I have no problem with that!

Of course that's not precisely true. The benefits of Monte Carlo techniques in risk analysis are quite well understood, and there is plenty of buy-in from businesses in the Australasian region. The trouble these businesses face (particularly in the realm of project cost estimation) is that the specific process of quantifying their risks for stochastic analysis and the ensuing simulation is not well understood, and the means to ameliorate this appear to be beyond their reach. The modelling and simulation components of the project risk management process are not given adequate resources to be performed well, and certainly not to the extent that they provide the most useful information.

It is the case that many companies do not employ dedicated quantitative analysts. This means they have to rely upon some (maybe one) person in the team who has a non-zero quantity of experience and possibly training with risk simulation software to create a valid and credible stochastic model. This person is also not likely to be given enough time to do said task, thus the model inevitably suffers. It is my experience that most models – and all project cost estimation models – can be improved or actually need to be fixed.

So the corporate mind is willing, but the flesh is weak. How can this be addressed? No amount of additional training will suddenly allow you to overcome your time and resource constraints. Perhaps you can’t get the budget for training anyway or don’t want to master risk analysis software when it’s not really core to your role? The solution is one that I personally endorse (and provide!) as a risk analysis consultant – custom Excel programming.

VBA for Excel is a fairly simple language to learn, yet a very powerful tool for automating repetitive or sometimes complex spreadsheet tasks. A customised solution involves writing VBA code to perform the tasks we'd rather not do ourselves in the risk analysis model. The "we" here refers to companies that find themselves in the situations previously described, whereby they are unable to create and operate these models, not necessarily through any fault of their own. In my next blog I'll examine some modelling problems/requirements and how they might be dealt with effectively using customisation.
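
As a small taste before then, here is the sort of housekeeping chore VBA takes off your hands: flagging any workshop row whose three-point estimate is out of order before it ever reaches the model. The sheet name and column letters are assumptions for illustration only:

    Sub FlagBadEstimates()
        ' Highlight rows where min > most likely or most likely > max (columns D:F).
        Dim ws As Worksheet, r As Long, lastRow As Long
        Set ws = ThisWorkbook.Worksheets("Workshop Inputs")
        lastRow = ws.Cells(ws.Rows.Count, "B").End(xlUp).Row
        For r = 2 To lastRow
            If ws.Cells(r, "D").Value > ws.Cells(r, "E").Value _
               Or ws.Cells(r, "E").Value > ws.Cells(r, "F").Value Then
                ws.Range("D" & r & ":F" & r).Interior.Color = vbYellow   ' needs a second look
            End If
        Next r
    End Sub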

Rishi Prabhakar
Trainer/Consultant

Profitability Projections in a Manufacturing Environment of High Uncertainty

The other night, I had the opportunity to watch a free webcast titled "Use of @RISK for Probabilistic Decision Analysis of a Manufacturing Forecast in an Environment of High Uncertainty". This presentation was extremely timely, since many companies are struggling to survive in these challenging economic times. Dr. Jose Briones did an excellent job discussing and illustrating how profitability projections in a manufacturing environment are directly tied to how the sales forecast fits with the capability of the operation, and how different manufacturing capacities and production rates impact the output of the plant and the allocation of the fixed cost of production.

In the example he presents, a company is trying to decide how best to balance the sales of certain families of products to maximize revenue, maintain a diverse product line, and properly price each individual product based on the impact to the manufacturing schedule and fixed cost allocation.

He spends an appropriate amount of time discussing different input distributions such as the Triangular, Normal, Pert, and Gamma distributions, as well as sharing his recommendations on when to use them. He also shares his expertise on fixed cost allocation by product, warning of the dangers of the common method of dividing the fixed cost by the total production and recommending instead that fixed costs be allocated based on the projected run time of each product family. Lastly, he spends some time discussing the interpretation of the results, which I feel does a great job of wrapping up the information presented in the webcast.
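
As a minimal illustration of that allocation rule (the function and argument names below are my own, not the webcast's), each product family carries a share of the fixed cost proportional to its projected run time rather than to its share of units produced:

    ' Fixed cost allocated to one product family, in proportion to projected run time.
    Function FixedCostByRunTime(ByVal totalFixedCost As Double, _
                                ByVal familyRunTimeHrs As Double, _
                                ByVal totalRunTimeHrs As Double) As Double
        FixedCostByRunTime = totalFixedCost * familyRunTimeHrs / totalRunTimeHrs
    End Function

So a family occupying 30 percent of the plant's projected run time carries 30 percent of the fixed cost, even if it represents a much smaller share of the units produced.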
 

Dr. Jose A. Briones is currently the Director of Operations for SpyroTek Performance Solutions, a diversified supplier of specialty materials, BPM software and innovation consulting services. Dr. Briones has a PhD in Chemical Engineering from Clemson University and is a graduate of the Business Administration Program of Wharton Business School. If you have any questions about the webcast, you can contact Jose at Brioneja@SpyroTek.com or through Jameson Romeo-Hall at Palisade Corporation.
 

 

Cost-Benefit Analysis in the Land of Buzz

For the past couple of years, I've been following the advance of cloud computing into the marketplace. Recently, as the Cloud has begun to–I can't say materialize, as that might convey some notion of definable substance, which in this line of business is to be avoided at all costs–become a presence, information officers have been increasingly interested in matters of costs and benefits. Those who are considering migrating their current computing operations to the Cloud would like to make risk assessments that weigh CAPEX–capital expense–against OPEX–operating expense–and for that they will need help calculating the TCO–the Total Cost of Ownership. To forecast the TCO, they will need to get out the Monte Carlo software to predict their potential flow of data out through the Cloud, and depending on a company's familiarity with risk analysis, that could translate to "hire a consultant who understands the meaning of all this."
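
For the curious, a back-of-the-envelope version of that calculation might look like the VBA sketch below. Every figure in it is invented purely for illustration; the point is only that an uncertain data-transfer bill turns the CAPEX-versus-OPEX question into a probability rather than a single number:

    Sub CloudTcoSketch()
        ' Compare a three-year on-premise TCO with a cloud TCO whose monthly
        ' data-egress bill is uncertain (all figures are made up for illustration).
        Const MONTHS As Long = 36, N As Long = 10000
        Const CAPEX As Double = 250000            ' on-premise hardware, up front
        Const ONPREM_OPEX As Double = 4000        ' on-premise running cost per month
        Const CLOUD_BASE As Double = 6000         ' cloud subscription per month
        Const EGRESS_RATE As Double = 0.09        ' dollars per GB transferred out
        Dim i As Long, cheaperCount As Long
        Dim onPremTCO As Double, cloudTCO As Double

        onPremTCO = CAPEX + ONPREM_OPEX * MONTHS
        Randomize
        For i = 1 To N
            ' Monthly egress in GB: a triangular guess of 5,000 / 20,000 / 80,000
            cloudTCO = (CLOUD_BASE + EGRESS_RATE * Triangular(5000, 20000, 80000)) * MONTHS
            If cloudTCO < onPremTCO Then cheaperCount = cheaperCount + 1
        Next i
        Debug.Print "P(cloud TCO < on-premise TCO) is roughly "; cheaperCount / N
    End Sub

    ' Inverse-CDF sample from a Triangular(min, mode, max) distribution.
    Private Function Triangular(ByVal a As Double, ByVal m As Double, ByVal b As Double) As Double
        Dim u As Double
        u = Rnd
        If u < (m - a) / (b - a) Then
            Triangular = a + Sqr(u * (b - a) * (m - a))
        Else
            Triangular = b - Sqr((1 - u) * (b - a) * (b - m))
        End If
    End Function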
 
Recently, to help clarify matters, a Computerworld blogger declared, "The fact that people are so interested in cloud TCO indicates that the general value proposition of cloud computing has been accepted and absorbed."  The need for this incisive commentary he blames on the fact that "there’s been an amazing amount of FUD"–Fear, Uncertainty, and Doubt–"strewn about on the topic of cloud TCO." 

My problem with this discussion, as you’ve probably gathered, is not the efforts of smart people to grapple with the opportunities and operations management issues raised by Internet-based computing.  It’s the FUD that folks in computing seem to experience when it comes to clear, plain labels.  They flee into the land of buzz in order to assure TO–Total Ownership–of the terms.  

 

For starters, take the term Cloud for Internet. It all gets just a little too . . . well, vaporous. It makes me feel like the grandmother of a man being ceremoniously installed as a dean at Cornell University a while back. Having survived into her nineties and through the morning's pomp and circumstance, she asked her grandson what exactly he would be doing in this new job, and as he started to explain, she looked as if something tasted bad. Finally she broke in. "Honey," she said, "if you can't say it in one sentence, it has got to be illegal."

The Paradox of Knowledge

Modeling from empirical data takes observed information and attempts to replicate that information in a set of calculations. There are a number of relationships to account for when incorporating those data in a model, including dependencies and/or correlations. Correlations are often omitted for a variety of reasons, which can lead to critical errors in your results. Some knowledge of the situation leads to a more credible representation of the relationships in the data, and added knowledge, perhaps from subject matter experts or other sources, aids the refinement of the conclusions one can draw from the data. Whether the correlations are direct or aggregate, involving simple mathematics or greater complexities, ultimately the model is likely to be used in some form of analysis for projecting future outcomes. The knowledge brought to the model, and an analysis with correlations embedded in it, improve our understanding of the uncertainty inherent in a given problem.

Correlation is a principal relational element describing relationships between variables in datasets. There may be general tendencies and patterns which drive the input risks to move together or differently from each other. It is these relationships between variables which need to be expressed in a model to bolster its usefulness, and that is accomplished with correlation. It is important to remember that there may be observed correlation between variables without a causal relationship; it may be only a general tendency of paired behavior.

One significant aspect to note: positive correlations appear to increase uncertainty. Wait, you say, how is that possible? Knowledge is supposed to reduce uncertainty. Doesn't knowing counteract unknowing? Think about it for a moment. In effect, the correlations included in the model reduce the uncertainty about reality while increasing the range of predicted values, adding apparent uncertainty. What may seem illogical at the outset really is quite logical. If two (or more) risks are positively correlated, their aggregation will produce a larger range as a consequence of Monte Carlo sampling. In fact, failing to account for correlations that really are there reduces the validity of the analysis.
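
To see this in miniature, here is a small, self-contained VBA experiment (not @RISK itself, and the 0.8 correlation is simply an illustrative choice): it simulates the total of two standard normal risks first as independent draws and then with a positive correlation imposed, and prints the spread of each total:

    Sub CompareCorrelatedAggregation()
        ' Compare the spread of a two-risk total with and without positive correlation.
        Const N As Long = 100000
        Const RHO As Double = 0.8
        Dim i As Long
        Dim z1 As Double, z2 As Double, tInd As Double, tCor As Double
        Dim sumInd As Double, sumSqInd As Double
        Dim sumCor As Double, sumSqCor As Double

        Randomize
        For i = 1 To N
            z1 = StdNormal()
            z2 = StdNormal()

            tInd = z1 + z2                                  ' independent total
            tCor = z1 + (RHO * z1 + Sqr(1 - RHO ^ 2) * z2)  ' second risk correlated with the first

            sumInd = sumInd + tInd: sumSqInd = sumSqInd + tInd ^ 2
            sumCor = sumCor + tCor: sumSqCor = sumSqCor + tCor ^ 2
        Next i

        Debug.Print "Std dev of total, independent: "; Sqr(sumSqInd / N - (sumInd / N) ^ 2)  ' about 1.41
        Debug.Print "Std dev of total, rho = 0.8:   "; Sqr(sumSqCor / N - (sumCor / N) ^ 2)  ' about 1.90
    End Sub

    ' Box-Muller transform for a standard normal draw.
    Private Function StdNormal() As Double
        Dim u1 As Double, u2 As Double
        Do
            u1 = Rnd
        Loop While u1 = 0                  ' avoid Log(0)
        u2 = Rnd
        StdNormal = Sqr(-2 * Log(u1)) * Cos(6.28318530717959 * u2)
    End Function

The correlated total spans a noticeably wider range, which is exactly the 'paradox': the model is more faithful to reality, and the price of that fidelity is a wider, more honest spread of outcomes.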

Correlations are easily incorporated in models set up for Monte Carlo simulation. MCS, as a technique, generates many ‘random’ samples allowing the modeler to study a variety of scenarios and their impact on decisions. A correlation matrix defines the sampling relationship between any pair of input variables in the model. Using a tool such as Palisade’s @RISK facilitates matrix construction. Once the correlations are in place, running the MCS will produce results and scenarios that are more credible. We want decisions to be based on the best information available and the correlations lend a hand to the knowledge we already incorporate into the process.

Thompson Terry
Senior Training Consultant

Neural Nets vs. the Ripple Effect

About a week ago the Financial Times ran an article about a "new" investment analysis technique that could cut through turbulence in the financial markets: neural network analysis. I thought, okay, this isn't new, but maybe the application is innovative. Besides, I liked the metaphor the reporter used: a metal ball dropped in a vat of oil and the ensuing ripples that disturb the oil.
 
The article is about software developed by a Danish investment firm that turned its back on "linear" models to adopt a neural network approach that continually reclassifies investments in a portfolio and then makes suggestions about which equities to buy and which to sell. The proprietary software chews through a heap of data–prices, price-earnings ratios, and interest rates, for starters–and its performance benchmark is the Russell 1000 index.
 
The test portfolio used to prove the method was acquired in 2007, just before the ball dropped into the oil. For a time it seemed to hold up but then got caught in the turbulence and its undertow. It has now recovered nicely, ahead of the Russell 1000 in fact, and the asset managers are looking for more investors. This is a sweet success story, especially given the demon turbulence looming over the project and the fact that the assets are apparently owned by the Danish state pension plan.

I understood the use of neural network software to counter nonlinear events like market turbulence, and I understood the continual classification and reclassification.  But I was intrigued that nowhere in the article was there a mention of risk, risk analysis, or even risk assessment.  Maybe it was there all the time, incorporated in the proprietary software, and maybe it just wasn’t mentioned.  Certainly the asset managers who developed the program were aware they were at risk–they were chewing their nails as their fund slid down right beside all the other funds that were dropping in value.  But assessing risk doesn’t seem to have been a factor in the firm’s new defense against mayhem in the markets.  

 

So.  Is it time to shut down your Monte Carlo software?  I don’t think so. . . .   

April 2010 – Worldwide Training Schedule

Palisade Training services show you how to apply @RISK and the DecisionTools Suite to real-life problems, maximizing your software investment. All seminars include free step-by-step books and multimedia training CDs that include dozens of example models.

North America

London

Brasil

Latin America

Asia-Pacific