Month: May 2010

Another take on the BP Oil Spill

We are pleased to introduce you to consultant and trainer Sandi Claudell, today’s featured guest blogger. Sandi is CEO of MindSpring Coaching and has been a valued Palisade Six Sigma Partner for quite some time. Among other notable achievements, she is a Six Sigma Master Black Belt (Motorola) and a Lean Master (Toyota Motors, Japan).

–Steve Hunt


Part 1: The Platform Disaster

Much has been said about the disastrous BP oil spill in the Gulf of Mexico. If we apply the theory of probability and reliability, then having so many different companies responsible for a very complex construction and operation added to the chance of failure.

 

There is probably a cultural issue at work, where each entity wanted to give the others what they wanted to hear rather than the truth (for historic and recent examples, consider the NASA Challenger disaster and the recent Toyota Prius problems). When we lose sight of the quality and reliability of parts, construction, maintenance, and testing under ALL conditions rather than the obvious few, we run a high risk of failure. When you build 100+ wells and avoid disaster . . . perhaps people fool themselves into thinking there never WILL be a disaster. They don’t look at a model demonstrating that the longer you go without such an event (given the input factors of how each element can and will fail), the closer you come to the event we all want to avoid.

 

They may or may not have used an integrated Systems Design . . . not simply an engineering system, but a system for how individuals work together, communicate with each other, and act either as a conforming unit or as a more self-directed, autonomous unit that looks for and generates solutions outside the box: a team that is innovative, willing to consider all the possibilities, and able to create a breakthrough design that is more mistake-proof.

 

If they had used DFSS (Design for Six Sigma), their designs would have been more robust, taking into consideration all the necessary safety precautions for human life as well as an immediate response to a potential failure. As part of DFSS we use a statistical tool called Design of Experiments (Strategy of Formulations, Central Composites, etc.), with which we can test very complex interactions (factors) with minimal effort and cost and maximum statistical accuracy. DoE produces prediction equations that allow us to model the system and ask what would happen under different conditions. More importantly, we can look at many different quality metrics (responses, outcomes, etc.) within the same experimental trial. If we replicate the test, we can even forecast which elements cause variation (very hard to detect in highly complex systems without the use of statistics).
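
To make the idea concrete, here is a minimal sketch (not BP’s analysis; the factor names and response values are invented) of how a small designed experiment yields a prediction equation that can then be queried for untested conditions. It uses a replicated 2x2 full factorial fitted by ordinary least squares in Python:

    import numpy as np

    # Coded factor levels (-1 / +1) for a 2^2 full factorial, run twice (replicated).
    # Factors and responses are hypothetical, for illustration only.
    pressure = np.array([-1, -1, +1, +1, -1, -1, +1, +1])   # factor A
    cement   = np.array([-1, +1, -1, +1, -1, +1, -1, +1])   # factor B
    strength = np.array([52, 61, 47, 74, 50, 63, 45, 76])   # measured response

    # Model: y = b0 + b1*A + b2*B + b12*A*B  (main effects plus interaction)
    X = np.column_stack([np.ones_like(pressure), pressure, cement, pressure * cement])
    coef, *_ = np.linalg.lstsq(X, strength, rcond=None)
    b0, b1, b2, b12 = coef
    print(f"prediction equation: y = {b0:.1f} + {b1:.1f}*A + {b2:.1f}*B + {b12:.1f}*A*B")

    # Ask "what if?" at factor settings never actually run, e.g. A = 0, B = +1
    print("predicted response at A=0, B=+1:", b0 + b2)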

 

If they had used an FMEA (Failure Mode and Effects Analysis, a tool used in Six Sigma), they could have anticipated failures and put error-proofing devices in place to detect and/or respond to potential faults BEFORE they become irreversible. If we add a Monte Carlo simulation of potential working conditions, the model forecasts probability plots and identifies the key factors that will be critical to success or failure.
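
As a hedged illustration of that pairing (this is not BP’s model; the failure modes and distributions are invented), a Monte Carlo simulation in Python might propagate uncertain failure probabilities from an FMEA through a simple logic model and report which factors drive the outcome:

    import numpy as np

    rng = np.random.default_rng(42)
    n = 100_000  # simulation trials

    # Assumed (invented) uncertainty in per-demand failure probabilities
    p_cement = rng.beta(3, 100, n)   # cement barrier fails
    p_crew   = rng.beta(1, 50, n)    # warning signs are missed
    p_bop    = rng.beta(2, 200, n)   # blowout preventer fails when called on

    # Well control is lost only if all three barriers fail in sequence
    p_blowout = p_cement * p_crew * p_bop

    print("mean probability of losing well control:", p_blowout.mean())
    print("95th percentile:", np.quantile(p_blowout, 0.95))

    # Rank the drivers by correlation with the outcome (a crude sensitivity check)
    for name, p in [("cement", p_cement), ("crew", p_crew), ("BOP", p_bop)]:
        print(name, "correlation with blowout risk:", round(np.corrcoef(p, p_blowout)[0, 1], 2))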

 

Perhaps they did indeed run a Monte Carlo simulation using Crystal Ball. It is a good product, but if they had used Palisade’s @RISK and added some of the other tools Palisade provides, such as RISKOptimizer and NeuralTools, they could have analyzed the system in dimensions beyond a simple Monte Carlo simulation, uncovering weaknesses BEFORE designing and/or building the platform and well.

 

Part 2: Capping the well head

 

In Lean there is a whole discipline called “Error Proofing Devices.” As part of the design effort we need to create, first and foremost, safety and other devices that prevent the error from occurring in the first place. If that line of defense fails, there should be devices built into the process designed to cap the well. If that line of defense fails as well, there should be a disaster response plan, created, practiced, and tested, to ensure that the spill is dealt with immediately.

 

Part 3: Treating the resulting spill

 

Again, Design of Experiments could test different materials, chemicals, and methods to find the right combination to contain or otherwise manage the resulting oil spill. Trying only one chemical may be the age-old definition of madness: trying the same thing over and over again and expecting different results. A robust design of experiments could aid in finding the solution that is most effective and, with multiple tests on the same samples, ensure that it is the safest for the environment and for the population most directly in the path of the spill. Ideally these tests are run years before such a spill; however, doing something now is better than simply standing by and watching it happen.

 

Last but not least:

 

Management, from executives down to line managers, should have coaches: coaches who can speak to the culture, the systems design, and the tools and methods used in Lean Six Sigma, and who can verify data analysis and help with accurate interpretation of the data. These coaches should be independent, not full-time employees of the corporation, because independent coaches are more likely to speak the truth and to highlight risks as well as opportunities.

 

Now, BP and all the other entities may have done some of what I mentioned above. But I have to assume they left out one or more of the listed items, or we wouldn’t be watching the oil travel into the wetlands around New Orleans right now. Hindsight is always brilliant, but we can learn from our mistakes. We can create better cultures, systems, error-proofing devices, experimental designs, and more.

 

 

BIO:  

 

Sandi Claudell is CEO of MindSpring Coaching. She is a Master Black Belt in Six Sigma, a Lean Master and has worked as a consultant for many companies to initiate worldwide improvements. For more information or to contact Sandi please visit http://www.mindspringcoaching.com/.

Statistical Gizmos and the UK Election

The recent elections in the United Kingdom provided a really fun opportunity to see how extensively statistical decision evaluation and predictive modeling have penetrated popular culture.  The British press outdid themselves with online graphical gizmos that allowed readers to set the terms for outcome scenarios and let those spin out in true operations management style.
 
While the BBC offered an election seat calculator that really only translated voting percentages into numbers of Parliament seats won, the Guardian put up a Three-Way Swingometer.  With about eight parties in the fray (depending on which you count), the Swingometer allowed readers to twiddle a dial to anticipate the effect of hypothetical party-shifting on election results.
 
Next, Nate Silver, the election forecasting guru behind the FiveThirtyEight.com website, produced what he calls the Advanced Swingometer to offset the statistical disarray introduced by the original version’s assumption of a uniform rate of "swing."  He backed this up with a demonstration of how elegant the statistical analysis  behind his model was. 
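
For readers unfamiliar with the term, “uniform swing” simply shifts every constituency’s vote shares by the same national amount and re-counts the winners. A toy sketch in Python, with three invented seats and an invented national swing, shows how the calculation works (and hints at why it can mislead when the swing is not actually uniform):

    # Vote shares for three hypothetical constituencies (fractions of the vote)
    seats = {
        "Seat A": {"Con": 0.42, "Lab": 0.40, "LD": 0.18},
        "Seat B": {"Con": 0.31, "Lab": 0.45, "LD": 0.24},
        "Seat C": {"Con": 0.36, "Lab": 0.33, "LD": 0.31},
    }
    swing = {"Con": +0.03, "Lab": -0.05, "LD": +0.02}  # assumed national swing

    def winner(shares):
        return max(shares, key=shares.get)

    for name, shares in seats.items():
        after = {party: share + swing[party] for party, share in shares.items()}
        print(f"{name}: {winner(shares)} -> {winner(after)} under uniform swing")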
 
The Times came forward with a predictive map based on the predictions of gamblers in the UK’s lively betting-shop scene.  Who knows where those risk assessments came from.
 

None of the online descriptions of the methods behind the gizmos were very detailed.  There were no mentions of named statistical analysis procedures, and this turns out to have been a good thing, because none of the gambits proved up to foreseeing the muddle that resulted from the actual voting.  If you want to come to a clear view of that, you will need to consult the decision tree posted by the BBC.

Oops! Didn’t see that coming!

We are pleased to introduce you to consultant and trainer David Roy, the first guest blogger on my blog. Dave comes to us from Six Sigma Professionals, Inc. (SSPI), and taught Jack Welch and his entire staff their Six Sigma Green Belt training. David’s post will be the first in a series, and this initial entry also has a quick survey at the end for your input on structuring DFSS training.

–Steve Hunt

 
 

Oops! Didn’t see that coming!

 

How often do we hear these words after we have made a change to a product, service, or process?

 

We frequently solve one problem only to discover a new one, or find that the solution we selected didn’t really resolve the problem.

 

There are many reasons for these surprises. Problem solving sometimes addresses the symptoms and not the root cause. Useful solutions often have harmful side effects that we did not consider.

 

You may now be thinking, “Wow, if everything we do is going to turn out badly, let’s not change anything.”   The reality is that change is inevitable. Whether driven by rising customer expectations, innovative new technologies, or even variation in inputs over time, change will occur.

 

Managing the design and implementation of these changes requires a more formal methodology than the prominent “Launch and Learn” method.

 

The sophistication of the methodology will vary depending on the magnitude of the risks associated with the change. If we are problem solving for variation in a standard process and trying to regain control, simple tools such as a Cause and Effect diagram, Failure Mode Effects Analysis, and Standard Work may be all that is required.
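
For readers new to FMEA, the worksheet arithmetic is simple enough to sketch in a few lines of Python. The failure modes and 1-10 ratings below are invented; the point is only that severity times occurrence times detection gives a Risk Priority Number (RPN) that ranks what to address first:

    # (description, severity, occurrence, detection) -- ratings on a 1-10 scale, invented
    failure_modes = [
        ("Valve sticks open",          9, 3, 4),
        ("Sensor drifts out of spec",  6, 5, 7),
        ("Operator skips checklist",   7, 4, 2),
    ]

    # RPN = severity * occurrence * detection; the highest RPN gets attention first
    ranked = sorted(failure_modes, key=lambda m: m[1] * m[2] * m[3], reverse=True)
    for desc, s, o, d in ranked:
        print(f"RPN {s * o * d:3d}  {desc}")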

 

When we start to explore reducing variation or introducing new technologies or processes, we need to bring in a Design for Six Sigma (DFSS) methodology, which incorporates elements such as Change Management, Robust Design, Reliability, Modeling & Simulation, and Piloting & Prototyping.

 

Over the next four blog posts we will cover the four phases of a DFSS project under the framework of I-dentify, C-onceptualize, O-ptimize, and V-erify, or ICOV for short.

We will give a high-level look at the steps within these phases and the tools used to reduce the risk of the change and its unintended consequences.

 

On another note, if you are able, we’d like to ask for your guidance: please complete a short marketing survey to help SSPI structure our training in the way that is most useful to our community. This 8-question survey should take less than 5 minutes and is anonymous. Your opinions are greatly appreciated.

http://www.surveymonkey.com/s.aspx?sm=2aQk8QF1eLB5MFQJC1pUXA_3d_3d

 

BIO:

 

David Roy is an integral part of the Six Sigma community. He taught Six Sigma to GE’s Jack Welch and his entire staff, and served as Senior Vice President of Textron Six Sigma. He is a certified GE Master Black Belt, was instrumental in developing GE’s DMADV (DFSS) methodology, and has taught three waves of DFSS Black Belts. Dave’s experience spans both product and transactional applications, so his examples are of interest to all. David holds a BS in Mechanical Engineering from the University of New Hampshire. He is also co-author of “Service Design for Six Sigma: A Roadmap for Excellence.”

 

June 2010 – Worldwide Training Schedule

Palisade Training services show you how to apply @RISK and the DecisionTools Suite to real-life problems, maximizing your software investment. All seminars include free step-by-step books and multimedia training CDs that include dozens of example models.

North America

London

Brasil

Latin-America

Asia-Pacific

Neural Nets Writ Small

Of all the statistical analysis techniques I receive news alerts for, the neural network flashes up on my screen most often.  While I, like many of you, really enjoy the big-screen futuristic applications of neural nets (prediction of sun storms is a splendid recent example), there is a quieter trend ramping up at a more down-to-earth level. The nano level, that is: the itsy-bitsy, teeny-weeny, molecular level.
 
For at least the past five years, the nanotechnology industry has been predicting and prototyping ways to incorporate neural networks into nano-machines.  This innovation has proved to be very handy for sensing devices.  The nano-sensor combines receptor particles with electronics controlled by a neural network algorithm.  The neural net sorts through the sensor responses to uncover patterns that trigger alerts.
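
To give a feel for that pattern-recognition step (the sensor array, its “signature” channels, and the network size below are all invented for illustration), a small neural network in Python can be trained to flag which readings from a simulated 8-receptor array look like a target compound:

    import numpy as np
    from sklearn.neural_network import MLPClassifier

    rng = np.random.default_rng(0)
    n = 2000

    # Simulated receptor responses: the target compound lifts a few channels
    background = rng.normal(0.0, 1.0, size=(n, 8))
    target = rng.normal(0.0, 1.0, size=(n, 8))
    target[:, [1, 4, 6]] += 1.5                    # the compound's "signature"
    X = np.vstack([background, target])
    y = np.array([0] * n + [1] * n)                # 0 = clean sample, 1 = alert

    net = MLPClassifier(hidden_layer_sizes=(16,), max_iter=500, random_state=0)
    net.fit(X, y)
    print("training accuracy:", net.score(X, y))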
 
This year there was a flurry of media attention focused on one of these sensing technologies, the nano-nose, which uses an array of nano-receptors coordinated by a neural network.  These sensors are being promoted to sniff out everything from explosives to disease.  
 

One indication of the expected adoption of applications that combine nano with neural is the advertising for neural network algorithms that can downsize to the nano scale. But more than one of the nano-machine innovators has commented on the need for more robust statistical analysis techniques to improve the accuracy of the sensors. That means there will be more neural network to shrink, which means the algorithms advertised today may already be outdated.

Whatever the commercial considerations and no matter how blasé we become about technological possibility, there is still a big wow factor in packing a high-powered computing technique into such infinitesimal space, and you can be certain the nano people will be harnessing neural networks to many new kinds of more-mini-than-micro machines.

Put More Science into Cost Risk Analysis

At the 2010 Palisade Risk Conference in London, John Zhao of Statoil used a mock cost estimate contingency model to demonstrate how @RISK simulation functions can yield a more realistic project contingency through integrated qualitative risk assessment and quantitative risk analysis.

While future oil prices may be hard to predict due to their low manageability, it is absolutely possible to scientifically forecast the sizes of the risks that companies are willing to take. Such risks may include the probabilistic volumes of newly discovered reserves, the probability of meeting a project development schedule, the chance of project cost overruns, and the likelihood of eroding the entire project’s profitability. For these tasks, @RISK lends business analysts a helping hand by making complicated mathematical modelling easier to operate.

Statoil, an international oil company, takes risk management seriously and has applied Monte Carlo simulation techniques in its core and support businesses using @RISK. These applications include not only the solo use of individual models but also integrated combinations, from drilling, reserve estimation, and well completion to cost and schedule controls during project execution. Beyond these widespread uses of the software, Zhao discussed in detail a specific application of @RISK: convincingly simulating the required capital project contingency.

A simplistic line-item ranging exercise using @RISK Monte Carlo simulation is no longer adequate to derive contingency for large capital projects; empirical data confirm that many disastrous cost-overrun projects lacked the contingency to cover covert risks. In order to show management a complete risk picture of a project, both systemic risks (which empirical history indicates are likely to occur) and specific risks (which have discrete probabilistic characteristics) should be included in the overall project risk analysis. Therefore a combination of continuous PDFs for project cost estimates and discrete PDFs for the project risk register may prevail and provide management with a more convincing project cost contingency.
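
The following is a hedged, simplified sketch of that combination (all numbers invented, and written in Python rather than @RISK): continuous distributions for line-item ranging plus discrete risk-register events, simulated together so that contingency can be read off the total-cost distribution, for example at the P80 level:

    import numpy as np

    rng = np.random.default_rng(7)
    n = 50_000  # simulation iterations

    # Systemic risk: each line item ranged with a triangular distribution (min, mode, max in $M)
    line_items = [(45, 50, 70), (18, 20, 30), (9, 10, 16)]
    base_estimate = sum(mode for _, mode, _ in line_items)
    total = sum(rng.triangular(lo, mode, hi, n) for lo, mode, hi in line_items)

    # Specific risks: risk-register events with a probability of occurring and an impact range ($M)
    risk_register = [(0.30, 5, 15),    # e.g. late delivery of long-lead equipment
                     (0.10, 20, 60)]   # e.g. a major weather or regulatory event
    for p, lo, hi in risk_register:
        occurs = rng.random(n) < p
        total += occurs * rng.uniform(lo, hi, n)

    p80 = np.quantile(total, 0.80)
    print(f"base {base_estimate:.0f}  P80 {p80:.0f}  contingency at P80 {p80 - base_estimate:.0f}")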

John Zhao is Quality and Risk Manager at StatoilHydro Canada Limited. He has 22 years of project management experience in the petrochemical industry and has authored many papers and made numerous presentations worldwide on the subject of risk and contingency management. In the past 10 years, John has developed his expertise in cost engineering and risk analysis for large downstream and oilsands upstream projects across Canada. His extensive knowledge of the qualitative risk assessment process for construction projects has made him an expert on the subject in North America; his proprietary Monte Carlo model using @RISK is a popular tool for project contingency and escalation simulation. The quantitative model John has built integrates @RISK with PrecisionTree to help corporations conduct risk-based strategic decision-making.

» View the complete abstract and PDF presentation of "Put More Science into Cost Risk Analysis"
» Read Zhao’s whitepaper, "Put More Science into Quantitative Risk Analysis"

The Economics of Supply Chain Risk Management using @RISK

At the 2010 Palisade Risk Conference in London, David Inbar of Minet Technologies presented a talk on supply chain risk management.

Supply chain risk management is an emerging field that has been growing significantly in importance because of modern management concepts such as lean, globalization, and outsourcing. The mutual dependencies and close collaboration in modern supply chains create unique risks and challenges. Supply chain risk management is an economic process, and choosing which risk mitigations to adopt, and how much of them, should be based on economic measures.

Inbar’s talk gave an overview of the concepts and process of supply chain risk management, and demonstrated how using Monte Carlo simulation techniques with @RISK risk analysis software adds value to decision making under uncertainty and enables managers to purchase the most cost-effective mitigations. Says Inbar, "An organization with the right risk management process can assure peace of mind to customers and supply chain partners."
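
To illustrate the kind of economic comparison Inbar describes, here is a minimal sketch (invented numbers, written in Python rather than @RISK): simulate annual disruption losses with and without a candidate mitigation, and compare the expected benefit against the mitigation’s annual cost:

    import numpy as np

    rng = np.random.default_rng(1)
    n = 100_000  # simulated years

    def annual_loss(p_disruption, n_trials):
        """At most one disruption per year (a simplification), with lognormal severity."""
        occurs = rng.random(n_trials) < p_disruption
        severity = rng.lognormal(mean=12.0, sigma=1.0, size=n_trials)  # loss in $ if disrupted
        return occurs * severity

    loss_single_source = annual_loss(0.40, n)   # assumed disruption probability today
    loss_dual_source   = annual_loss(0.15, n)   # assumed probability with dual sourcing
    mitigation_cost = 50_000                    # assumed annual cost of the second source

    expected_benefit = loss_single_source.mean() - loss_dual_source.mean()
    print(f"expected annual benefit of dual sourcing: ${expected_benefit:,.0f}")
    print("economically justified:", expected_benefit > mitigation_cost)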

David Inbar is the founder and managing director of Minet Technologies, a provider of professional services and technologies in supply chain and purchasing. Minet is active in the interfaces between business, processes and technologies in the world of supply chain and purchasing, creating methodologies and delivering projects and solutions.

» View a PDF of the presentation here
» Abstracts and presentations from the 2010 Palisade Risk Conference in London

Free Webcast this Thursday: Targeted Analyses and Compelling Communication: A Formula for Successful Value Creation in Management Science

The value of quantitative science projects too often goes unrealized by their would-be consumers. Despite flawless analyses, sophisticated reports, and dazzling presentations, the message goes unheeded by those who could most benefit, if only they understood how to operationalize the results. The clarity with which quantitative scientists view the practical application of results is often paralleled only by their inability to generate that same clarity in their customers. The result is that good management science is at best ignored and at worst misunderstood (and misapplied). This free live webcast describes steps we as quantitative scientists can take to foster understanding, generate novel insights, and stimulate actionable results with our clients, and demonstrates some of the tools we use, including @RISK.

» Register now (FREE) 
» View archived webcasts