Some Uses of Decision Support Software

Risk and decision analysis software: The DecisionTools Suite

When considering decision-making under uncertainty, one may need to evaluate which of several decision possibilities to select, and then conduct a detailed risk analysis of the chosen option. Palisade Corporation's decision support software (such as the PrecisionTree decision tree software and the @RISK Monte Carlo simulation software) can be used in a wide range of decision analysis contexts to support both the selection and the detailed analysis. In addition, one may need to calibrate models or explore and analyse existing data sets, and the statistics software StatTools can facilitate certain forms of statistical analysis that are not possible with Excel's built-in statistics functionality. (Palisade's DecisionTools Suite also contains other software products, including RISKOptimizer and Evolver to deal with optimisation problems, the neural network software NeuralTools, and TopRank to support model auditing and sensitivity analysis.)

Frequent
applications of the DecisionTools Suite include cost estimation
(project cost estimation, construction cost estimation, cost budgeting
and contingency planning), discounted cash flow analysis and financial
forecasting, risk registers (event risk modelling and operational
risk), options valuation and real options analysis, Six Sigma analysis,
product strategy, environmental risk analysis, veterinary risk
assessments, operations management, retirement planning and so on.
Indeed, the range and flexibility of the DecisionTools Suite means that
the number of applications is vast, and really only limited by a user’s
ability to appropriately formulate their own situation in a way that is
suitable for quantitative analysis.

Whilst the software can be used in essentially all industries and functions, the oil and gas sector is a particularly active one. The increasing cost of discovering and recovering oil from more remote and hostile environments means that effective resource allocation and rigorous decision evaluation are key to business success in these contexts. Decision tree software and Monte Carlo simulation software are therefore widely used in exploration and production (e.g. seismic testing decisions, reserves estimation, and production forecasting using exponential decline curves or other methods) and for other aspects of risk assessment for large projects.
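
As a simple illustration of the production-forecasting use mentioned above, here is a minimal sketch (plain Python with NumPy rather than @RISK, and with purely illustrative parameter values) of a Monte Carlo simulation over an exponential decline curve q(t) = q0 * exp(-D*t), with uncertain initial rate q0 and decline rate D:

    import numpy as np

    # Minimal sketch: Monte Carlo over an exponential decline curve.
    # All parameter values are illustrative assumptions, not field data.
    rng = np.random.default_rng(42)
    n_iterations = 10_000
    years = np.arange(1, 11)                 # 10-year forecast horizon

    # Uncertain inputs (assumed distributions)
    q0 = rng.lognormal(mean=np.log(1000.0), sigma=0.2, size=n_iterations)    # initial rate, bbl/day
    D = rng.triangular(left=0.10, mode=0.15, right=0.25, size=n_iterations)  # annual decline fraction

    # Exponential decline: daily rate in each forecast year, converted to annual volume
    rates = q0[:, None] * np.exp(-D[:, None] * years[None, :])   # bbl/day
    annual_volumes = rates * 365.0                                # bbl/year
    cumulative = annual_volumes.sum(axis=1)                       # 10-year cumulative per iteration

    print(f"P10 cumulative production: {np.percentile(cumulative, 10):,.0f} bbl")
    print(f"P50 cumulative production: {np.percentile(cumulative, 50):,.0f} bbl")
    print(f"P90 cumulative production: {np.percentile(cumulative, 90):,.0f} bbl")

The distribution of the 10-year cumulative then plays the same role as a simulated output cell in an @RISK spreadsheet model.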

Monte Carlo Meets Simulation

Latest to be touched by a greening of consciousness is the Formula One race crowd.  Race fan and Bleacher Report columnist Long John Silver has set himself the ambitious goal of specifying the carbon footprint of an F1 car on a single race day.  He is going to include data from all the major European races.  Although he doesn't specify all the statistical analysis techniques he will use, he does mention "sensitivity analysis," and I assume, given that this is a fairly straightforward operations research problem, he will make his projections using some kind of (pardon the pun) Monte Carlo software.

Okay, so he projects the carbon dioxide emissions of one car in a typical race.  This single-car footprint, multiplied by 20 or 22, the customary number of starters, can be extrapolated to form a little stampede–forgive the mixed metaphor–of carbon clouds that hover over an entire race day.  These pile up on a pretty gloomy horizon because, as Long John reveals, while even the most common gas guzzler gets only 13 miles per gallon, the typical F1 car gets 1.5 miles per gallon.

It’s enough to chill the thrills.  But that doesn’t seem to trouble Long John Silver as much as one discovery made in the course of his analysis: the Monte Carlo–the race of races–is an outlier.
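
The column doesn't spell out the calculation, but a back-of-the-envelope Monte Carlo version of it might look like the hypothetical sketch below. The 1.5 mpg figure and the 20-22 starters come from the post; the race distance and the CO2-per-gallon factor are assumptions added purely for illustration.

    import numpy as np

    # Hypothetical sketch of the race-day extrapolation described above.
    rng = np.random.default_rng(1)
    n = 10_000

    race_miles = rng.uniform(185.0, 192.0, size=n)    # typical F1 race distance (assumed range)
    mpg = rng.normal(loc=1.5, scale=0.2, size=n)      # F1 fuel economy, miles per gallon
    cars = rng.integers(20, 23, size=n)               # 20-22 starters
    kg_co2_per_gallon = 8.9                           # approx. CO2 per gallon of gasoline (assumption)

    race_day_tonnes = race_miles / mpg * kg_co2_per_gallon * cars / 1000.0

    print(f"Median race-day CO2: {np.median(race_day_tonnes):.0f} tonnes")
    print(f"90% interval: {np.percentile(race_day_tonnes, 5):.0f}-{np.percentile(race_day_tonnes, 95):.0f} tonnes")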

Latin Hypercube and Monte Carlo Sampling: When is the distinction important?

When using @RISK (Monte Carlo software for risk analysis and risk assessment), a user may choose between the Monte Carlo (MC) and Latin Hypercube (LH) sampling types.  LH sampling involves a stratification of the input distribution, i.e. the cumulative curve is divided into equal intervals on the cumulative probability scale (0 to 1.0) and one sample is drawn from each interval.  In theory, LH sampling creates a more representative sampling of the distribution.  It avoids potential non-representative clustering of sampled values (particularly when small sample sizes are drawn, i.e. a small number of iterations is used), and also ensures that tail samples of the input distributions are drawn (e.g. for 1,000 iterations, exactly one value above the P99.9 would be drawn, whereas with MC sampling none, one or several such samples may be drawn).  In general, LH sampling is favourable when testing and developing a model (running only a small number of iterations), or when running a model that is so large that only a small number of iterations can be conducted.  It can also be used to force the sampling of the tails of distributions, although it should be remembered that LH stratification applies to each individual input variable, and it does not force the simultaneous sampling of tail values for more than one input.
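
To make the distinction concrete, here is a minimal sketch in Python (an illustration of the general technique, not @RISK's implementation) contrasting plain MC sampling with LH sampling for a single input distribution: the cumulative scale is split into n equal strata and exactly one value is drawn from each.

    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    n = 100                                   # a small number of iterations
    dist = stats.norm(loc=100, scale=15)      # example input distribution

    # Monte Carlo: independent uniform draws on the cumulative (0-1) scale
    u_mc = rng.uniform(0.0, 1.0, size=n)
    mc_samples = dist.ppf(u_mc)

    # Latin Hypercube: divide the cumulative scale into n equal strata,
    # draw exactly one uniform value from each stratum, then shuffle the order
    strata = (np.arange(n) + rng.uniform(0.0, 1.0, size=n)) / n
    rng.shuffle(strata)
    lh_samples = dist.ppf(strata)

    # At small n, LH reproduces the distribution's statistics more closely
    print(f"True mean 100.00 | MC mean {mc_samples.mean():6.2f} | LH mean {lh_samples.mean():6.2f}")
    print(f"True sd    15.00 | MC sd   {mc_samples.std(ddof=1):6.2f} | LH sd   {lh_samples.std(ddof=1):6.2f}")

Note that the shuffle is what matters when several inputs are sampled this way: each input is stratified on its own scale, and the random pairing of strata across inputs is why LH does not force simultaneous tail values for more than one input.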

Dr. Michael Rees
Director of Training and Consulting

Manufacturing jobs back to the US?

Recently, a furniture manufacturing company that had slated 8 of its manufacturing facilities to be moved to China changed its mind and will continue producing furniture in the US. The reasons cited for the change were the weak US dollar and high shipping costs from China to the US; both of these factors now make it more cost effective to produce in the States. Decisions like this give the US a second chance to reduce operating costs, increase quality, and save jobs by implementing proven quality programs like Lean Six Sigma.

But will US manufacturers take this opportunity to re-enter the global scene ready to play at a higher level? Or will they choose to take the business that comes back across the ocean for now but watch it go back overseas as the situation normalizes?  This is a special ‘gift’ that we are given, where we have an opportunity to win back some of the losses of prior years.  I am a firm believer in the American “Can Do” spirit.  I know we can do it.  But will we?  And how should we proceed?

The state of Six Sigma Conventions and Summits

I do believe these conventions are extremely important venues for idea sharing, networking, brainstorming and continuing education.  It's my impression that there are hundreds of thousands of Six Sigma practitioners in the United States alone. With this said, why are these "tradeshows" so sparsely attended? One may argue that the number of attendees has been increasing, but I would expect these conferences to attract thousands of delegates and hundreds of exhibitors, which isn't the case. Organizations such as ASQ, ISSSP and IQPC that coordinate these conventions are experts at hosting excellent events and do an excellent job, so why the low turnouts? My thoughts:

Competing technologies
As technologies around the internet grow, we are developing more tools to stay connected to the outside world without ever leaving our desks. Technologies like web conferencing and VoIP allow us to have meetings and attend classes and free seminars. Couple this capability with professional networking sites like LinkedIn, and one could conceivably never leave one's desk and still learn the latest trends and developments in the industry, as well as network.

Frequency
It seems that other industries have far fewer focused tradeshows per year. Could it be the sheer volume of events that is diluting the community's will to participate? All told, I would guess that there are at least 15-20 Six Sigma events per year, ranging in focus from Department of Defense Lean Six Sigma to healthcare, lean enterprise and Design for Six Sigma, plus a host of Six Sigma summits and award ceremonies.

Cost
The cost of attendance often exceeds $2,000 per attendee, and the cost of exhibiting and speaking for 45 minutes is more than some full-time employees make per year.  American businesses may be spending these dollars on their process improvement and product development projects, or perhaps they just don't have the money and time to participate.

Recommendation
How about each organization holds a maximum of 2 events per year, perhaps one on the east coast and one on the west? Invite all the different communities and offer special education and speaking tracks based on industry. This model would allow larger meetings where true idea sharing and networking can happen across industries, allowing automobile Six Sigma practitioners to meet DOD and healthcare practitioners and share ideas and information.

When building a cost estimation risk analysis model, how can we ensure validity when the data source is an expert’s opinion?

In a risk analysis model, the use of alternate parameters when your data 'source' is an expert's opinion makes perfect sense. There is no need to justify the estimation of obscure parameters in the absence of data; simply estimate one or more percentiles instead (no more than three are needed). This is easily done using Alternate Parameters in @RISK's Define Distribution window. I have found this method especially appealing to clients in the cost estimation field, where there is usually little to no data (relevant or otherwise) for parameter estimation.

However, by not explicitly defining the mean, standard deviation, minimum, maximum and so on, how do you know what they are? Are they still logical, or can the distribution now be sampled in an infeasible region? Essentially, are you aware of the implications of your choice of percentile parameters?

A tedious solution to this problem is to check each input distribution in your model one by one in the Define Distribution window and see for yourself what the min, mean, max and so on are. For only one or two distributions this may seem reasonable. But how would you like to do that for a 1,000-line risk register? I'd flatly refuse! Instead, I'd add a couple of columns that calculate some useful theoretical statistics (the RiskTheo functions) that I can use to sanity-check my distributions.

An obvious example in cost estimation is the use of @RISK's RiskTheoMin. When a variable cost is given a "Low" parameter (P10, say) rather than a strict "Minimum" (such as that used by the ever-popular Triangle and PERT distributions), the theoretical minimum is not immediately obvious. A column of RiskTheoMin functions with conditional formatting will quickly highlight which of your costs now have a negative theoretical minimum and need to be truncated. The same applies to maximums (RiskTheoMax). Further, checks such as "the probability that the distribution will sample above/below X should be roughly Y%" can be run with @RISK's RiskTheoTarget functions.
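
For readers who want to see the logic outside Excel, the sketch below is a rough Python analogue of that kind of check, not @RISK's RiskTheo functions themselves. The cost items, the choice of a normal distribution fitted to the P10/P90 alternate parameters, and the 1% threshold are all assumptions made purely for illustration.

    from scipy import stats

    # Hypothetical risk-register rows: (name, P10 estimate, P90 estimate), in $k
    items = [
        ("Site preparation", 80.0, 200.0),
        ("Steelwork",        10.0, 400.0),   # wide spread relative to its level
        ("Commissioning",    80.0, 120.0),
    ]

    z10, z90 = stats.norm.ppf(0.10), stats.norm.ppf(0.90)

    for name, p10, p90 in items:
        # Back out the normal whose 10th/90th percentiles match the expert's estimates
        sigma = (p90 - p10) / (z90 - z10)
        mu = p10 - z10 * sigma
        prob_negative = stats.norm(mu, sigma).cdf(0.0)   # analogue of a RiskTheoTarget-style check
        flag = "TRUNCATE?" if prob_negative > 0.01 else "ok"
        print(f"{name:<18} mean={mu:7.1f}  P(cost<0)={prob_negative:6.2%}  {flag}")

Any line flagged here is one where the percentile parameterisation has quietly put meaningful probability on negative costs, exactly the situation the conditional-formatting column of RiskTheoMin values is meant to catch.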

Happy automatic model validity checking!

» Watch a short movie about Defining Distributions in @RISK

Rishi Prabhakar
Palisade Training Team

The Next Big Thing?

During my days in Product Development, we were always looking for the Next Big Thing. Now that I am immersed in the world of Six Sigma, I continuously ask myself what the next big thing will be and when it will happen. I think we can all agree that the progress in Quality during the last century has been more of an evolutionary process than a revolutionary event. With this said, are we in the midst of the Next Big Thing with the advent of Lean Six Sigma (LSS) and Design for Lean Sigma (DFLS)? Or are these trends minor refinements on the theme? Could it be the implementation of the lessons on human capital learned from Western Electric's Hawthorne Studies, some 80 years later, with a better workplace just around the bend?  Will it be a new statistical analysis, or possibly the recognition of proven techniques such as genetic algorithm optimization, Monte Carlo simulation or the use of neural networks that help to transform how we analyze data to make better decisions for the future?


No matter what the next big thing is, the strong will evolve with the change to capitalize on the opportunities it opens for us.

Risk & Decision Analysis Conference in NYC

2008 Palisade Risk & Decision Analysis Conference, New York City

The 2008 Palisade Risk & Decision Analysis Conference is quickly approaching. Reserve your spot now and get a $200 discount on registration (only $595).

The conference takes place at the Hyatt Regency Jersey City, which is 15 minutes from Newark airport and a 10-minute ride to Manhattan via the PATH train station located at the hotel. Call 201 469 4750 or 800 233 1234 and mention code “PALI” to book a special
room rate.

This year's conference focuses on the recent releases of the all-new DecisionTools Suite 5.0 and @RISK 5.0, which mark new ways of approaching risk and making tough decisions. The DecisionTools Suite is the only truly integrated analytical toolset for Excel, bringing together risk analysis, decision analysis, optimization, prediction, and data analysis in a more cohesive way than ever before. With the DecisionTools Suite, you can gain better insights than with any single product. This variety of analytical approaches to real-life problems is the theme of this year's conferences in New York, London, and Medellín.

Said past participant Lina Cheung of CP Risk Solutions, LLC: “The reception was wonderful. I got to meet with different people. It’s very interesting to see how people use @RISK and quantitative analysis skills in different industries, and within the same industry in a different environment.”

» Join us November 13 & 14, 2008 at the Hyatt Regency Jersey City

» Watch video testimonials from last year’s conference

Risk Analysis and Evacuating for Gustav

Henry Yennie, a program manager with the Louisiana State Office of Mental Health, began using Palisade Corporation's Monte Carlo software @RISK after Hurricane Katrina made landfall. He was serving in the disaster response command center in Baton Rouge, Louisiana, and wanted to use data from the family assistance call center to help managers of future disaster response teams predict staffing for call centers.  His risk analysis work didn't end with his study of call center use; in fact, he has become one of the state's experts on decision making under uncertainty.  We know because we heard from Henry a couple of months ago, when he had launched into a new risk assessment study: in case another big storm blew up and New Orleans had to be evacuated, how many school buses should the city have access to?  And where and when should the buses be available?

Of course, we haven’t heard from Henry since Hurricane Gustav made landfall, so we don’t know yet how accurate his study was.  But all reports from New Orleans are that there were enough school buses in the right places at the right time to make for a smooth evacuation. 

More on disaster decision evaluation after Henry checks in with his results.

Transformation Partners Company Now Featuring @RISK in Black Belt Certification Courses

I earned my Lean Six Sigma Black Belt Certification from Transformation Partners Company (TPC), a full-service Lean Six Sigma consultancy firm based in Fairport, NY. They have chosen Palisade's @RISK to be used exclusively for the Monte Carlo simulation modules taught during their Black Belt, Master Black Belt, Design for Six Sigma (DFSS), and Product Development programs.

TPC will be starting a new wave of certification courses this month featuring @RISK in 4 different locations across New York and Pennsylvania. Check out TPC's website for more information on their programs and to find links to sign up for upcoming classes.