Month: June 2010

@RISK Quick Tips: Asset Price Random Walks and Options Valuation.

@RISK risk modeling software is used for a wide variety of applications in financial risk analysis, forecasting, investments, and banking. This model is one example of how @RISK can help in risk analysis and decision making.

Models of the prices of assets (stocks, property, commodities) very often assume a random walk over time, in which the periodic price changes are random, and in the simplest models are independent of each other. The future price level of the asset may result in some contract or payoff becoming valuable, such as in the case of financial market options. In these cases, the value of the contract (contingent payment or option) is calculated as the average discounted value of the future payoff. In the special case of European options on a traded underlying asset, the value calculated from the simulation may be compared with mathematical formulas that analytically provide the valuation, such as the Black-Scholes equation. In many more complex cases, the pertinent analytic formulas may be unknown or very complex to derive, and one may wish to rely on simulation techniques. This particular model compares the average simulated payoff for European Call and Put options with the Black-Scholes valuation.
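
For readers who want to see the logic outside Excel, here is a minimal Python sketch of the same idea — not the downloadable @RISK workbook itself: simulate the terminal price of a geometric Brownian motion random walk, average the discounted European call and put payoffs, and compare them with the Black-Scholes values. All parameter values below are illustrative assumptions, not figures from the model.

# Minimal sketch (not the Palisade workbook): Monte Carlo valuation of European
# call/put options under geometric Brownian motion, compared with Black-Scholes.
import math
import numpy as np

def norm_cdf(x):
    # Standard normal cumulative distribution function via the error function.
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def black_scholes(S0, K, r, sigma, T):
    d1 = (math.log(S0 / K) + (r + 0.5 * sigma**2) * T) / (sigma * math.sqrt(T))
    d2 = d1 - sigma * math.sqrt(T)
    call = S0 * norm_cdf(d1) - K * math.exp(-r * T) * norm_cdf(d2)
    put = K * math.exp(-r * T) * norm_cdf(-d2) - S0 * norm_cdf(-d1)
    return call, put

def mc_option_values(S0, K, r, sigma, T, n_paths=100_000, seed=42):
    rng = np.random.default_rng(seed)
    z = rng.standard_normal(n_paths)
    # Terminal price of a risk-neutral geometric Brownian motion random walk.
    ST = S0 * np.exp((r - 0.5 * sigma**2) * T + sigma * math.sqrt(T) * z)
    disc = math.exp(-r * T)
    call = disc * np.maximum(ST - K, 0).mean()   # average discounted call payoff
    put = disc * np.maximum(K - ST, 0).mean()    # average discounted put payoff
    return call, put

params = dict(S0=100.0, K=105.0, r=0.05, sigma=0.2, T=1.0)  # hypothetical inputs
print("Black-Scholes:", black_scholes(**params))
print("Monte Carlo:  ", mc_option_values(**params))

With enough paths, the simulated values converge on the analytic Black-Scholes figures, which is exactly the comparison the example workbook makes.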

» Download the example: AssetPrices.Options.BS.Multi.xls

@RISK Quick Tips: Discounted Cash Flow (DCF)

@RISK risk modeling software is used for a wide variety of applications in financial risk analysis, forecasting, investments, and banking. Below is an application of discounted cash flow analysis.

Discounted cash flow (DCF) calculations are a frequent application of @RISK. In the example model, the sources of risk are the revenue growth rate and the variable costs as a percentage of sales. After taking into account the assumed investment and applying a discount factor, the DCF is derived. Following the simulation, the average (mean) of the DCF is taken as the net present value (NPV).

In this example, the results show that the average DCF is positive (about 40), whereas the probability of a negative DCF is about 15%. The decision as to whether to proceed or not with this project will therefore depend on the risk perspective or tolerance of the decision-maker.

This example has also been extended to calculate the distribution of bonus payments on the assumption that a bonus is paid whenever the net DCF is larger than a fixed amount (such as 50). It also uses the @RISK statistics functions RiskMean, RiskTarget, and RiskTargetD to work out the average net DCF, the probability that the net DCF is negative, and the probability that a bonus is paid.
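
The workbook builds this logic with @RISK distribution and statistics functions in Excel. As a hedged illustration of the same structure, the Python sketch below draws an uncertain revenue growth rate and variable-cost percentage, discounts the resulting cash flows, and reports the mean net DCF, the probability it is negative, and the probability a bonus is paid. None of the figures are taken from CashFlow.xls; they are hypothetical placeholders.

# Illustrative sketch only (not the CashFlow.xls model): Monte Carlo DCF with
# uncertain revenue growth and variable-cost percentage, plus a bonus rule.
import numpy as np

def simulate_dcf(n_trials=10_000, seed=1):
    rng = np.random.default_rng(seed)
    years = 5
    base_revenue = 100.0          # hypothetical base-year revenue
    investment = 150.0            # hypothetical up-front investment
    discount_rate = 0.10
    bonus_threshold = 50.0        # bonus paid when net DCF exceeds this amount

    dcf = np.empty(n_trials)
    for i in range(n_trials):
        growth = rng.normal(0.05, 0.02)          # uncertain revenue growth rate
        var_cost_pct = rng.uniform(0.55, 0.65)   # uncertain variable costs as % of sales
        revenue = base_revenue
        pv = 0.0
        for t in range(1, years + 1):
            revenue *= (1 + growth)
            cash_flow = revenue * (1 - var_cost_pct)
            pv += cash_flow / (1 + discount_rate) ** t
        dcf[i] = pv - investment

    print("Mean net DCF (NPV):       ", dcf.mean())                    # ~RiskMean
    print("P(net DCF < 0):           ", (dcf < 0).mean())              # ~RiskTarget
    print("P(bonus paid, DCF > 50):  ", (dcf > bonus_threshold).mean())

simulate_dcf()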

» Example model: CashFlow.xls

@RISK Six Sigma calculator models the performance of a process with uncertain elements

Developed using the Six Sigma features of @RISK,
software for risk analysis using Monte Carlo simulation

Palisade’s Six Sigma Calculator allows you to create a function that models the performance of a process with uncertain elements. It allows you to include uncertainty around design factors through the use of probability distributions. It was built by Palisade Custom Development using the @RISK Developer’s Kit (RDK) to perform a Monte Carlo simulation so the following process capability metrics can be calculated: Cpk, Cpk Upper, Cpk Lower, Sigma Level, DPM, Cp, Ppk, Pp.
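
The calculator itself is built on the RDK, but the underlying arithmetic for several of those metrics is standard. The sketch below is a rough stand-in rather than the Palisade implementation: it simulates a process output from two uncertain design factors and estimates a few of the listed metrics (Cp, Cpk Upper, Cpk Lower, Cpk, DPM) from the simulated sample. The process model, distributions, and spec limits are made-up assumptions.

# Hedged sketch (not the RDK-based calculator): estimate basic process
# capability metrics from simulated output of a process with uncertain inputs.
import numpy as np

def capability_metrics(samples, lsl, usl):
    mean, std = samples.mean(), samples.std(ddof=1)
    cp = (usl - lsl) / (6 * std)
    cpk_upper = (usl - mean) / (3 * std)
    cpk_lower = (mean - lsl) / (3 * std)
    cpk = min(cpk_upper, cpk_lower)
    dpm = ((samples < lsl) | (samples > usl)).mean() * 1e6  # defects per million
    return dict(Cp=cp, CpkUpper=cpk_upper, CpkLower=cpk_lower, Cpk=cpk, DPM=dpm)

rng = np.random.default_rng(0)
# Hypothetical process: output is a function of two uncertain design factors.
x1 = rng.normal(10.0, 0.3, 100_000)
x2 = rng.normal(5.0, 0.1, 100_000)
output = x1 + 2 * x2
print(capability_metrics(output, lsl=18.5, usl=21.5))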

The RDK is Palisade’s widely-used risk analysis programming toolkit. It uses the features and functions of @RISK for Excel – the industry-leading risk analysis tool for spreadsheets. The RDK allows you to build Monte Carlo simulation models into your own applications using Windows and .NET programming languages such as C, C#, C++, Visual Basic, or Visual Basic .NET, and example programs in these languages are provided.

Palisade Custom Development services are used to build tailored applications for individual client needs using @RISK and other technology.

» Six Sigma Calculator
» More about using @RISK for Six Sigma
» More about using @RISK
» Palisade Custom Development

Oops! Didn’t see that coming! Part 2

Guest blogger David Roy brings us the second installment of his four-part blog. Dave comes to us from Six Sigma Professionals, Inc. (SSPI), and taught Jack Welch and his entire staff their Six Sigma Green Belt training. Dave also has a quick survey for your input on structuring DFSS training.

 

–Steve Hunt

 
Oops! Didn’t see that coming! Part 2

We’d like to ask for your guidance by completing a short marketing survey to help SSPI structure our training in a way that is most useful to our community. This 8-question survey should take less than 5 minutes and is anonymous. Your opinions are greatly appreciated.

As a continuation of the May blog, we are now covering the “Identify” phase of the ICOV framework for a rigorous new design process.

This phase is important because it establishes the framework for the concept and the level of rigor required for the project management process, estimates the development cost, and collects the Customer and Business requirements and the criteria for success.

 

The level of project management needs to be flexible and scalable depending on the Level of Effort (cost) and the Level of Innovation (risk) of the new concept.

 

Surely a project that will take a month to develop and has been done elsewhere requires less rigor than a concept that will take 3 years to develop and represents a brand-new invention that has never been attempted before.

 

The I phase consists of two Tollgates, at each of which an objective steering committee will decide whether to refine the work in the current phase, proceed, or cancel the project.

 

Tollgate 1 Exit Criteria are:

o     Decision to collect the Voice of the Customer to define customer needs, wants, and delights

o     Verification that adequate funding is available to define customer needs

o     Identification of the Tollgate Keepers leader and the appropriate staff

Tollgate 2 Exit Criteria are successful demonstration of:

o     Assessment of the market opportunity

o     Ability to command a reasonable price or be affordable

o     Commitment to development of the Conceptual Designs

o     Verification that adequate funding is available to develop the Conceptual Design

o     Identification of the Gate Keepers leader (gate approver) and the appropriate staff

o     Continued flow-down of CTSs to Functional Requirements


Formal tools that can be used in this phase include market/customer research tools, Product Roadmaps, Process Roadmaps, Technology Roadmaps, Multigenerational Plans, and Quality Function Deployment (House of Quality).

 

Market/Customer research tools may include Customer Relationship Management (CRM) Data, Surveys, Focus Groups, Conjoint Analysis and Kano Model Analysis.

 

The next blog will cover the Conceptualize phase.

 

 

 

BIO:

 

David Roy is an integral part of the Six Sigma community. He taught GE’s Jack Welch and his entire staff Six Sigma, and he served as Senior Vice President of Textron Six Sigma. He is a certified GE Master Black Belt, was instrumental in developing GE’s DMADV (DFSS) methodology, and has taught three waves of DFSS Black Belts. Dave’s experience spans both product and transactional applications, so his examples are of interest to all. David holds a BS in Mechanical Engineering from the University of New Hampshire. He is also co-author of “Service Design for Six Sigma – A Roadmap for Excellence.”

» Part 1

Clear Legal Precedent for Dealing with Uncertainty

A recent U.S. Court of Appeals case is timely not only because it involves corporate liability for ocean pollution at a time when everybody in this country is morbidly tracking the BP spill in the Gulf, but also because it is a case in which the judge highlights and corrects some common misconceptions about Monte Carlo simulation.
 
In a consolidated case involving hazardous waste dumping in the Houston Ship Channel, the codefendants, Tenneco and Occidental, acknowledged liability for the pollution cleanup, but they appealed a lower court’s decision partly on the basis of the court’s method of allocating costs. The court had called an environmental engineer as an expert witness and statistical analyst. The engineer used Monte Carlo software and court-established inputs for his model. The defendants challenged the court’s inputs in the risk analysis model, and the Circuit Court decision rebutted their objections in clear terms.
 
Writing for the Fifth Circuit Court of Appeals, Judge Patrick Higginbotham said, "Monte Carlo measures the probability of various outcomes, within the bounds of input variables; to calculate Occidental’s waste volume,. . .  Instead of simply averaging the input values, Monte Carlo analysis uses randomly-generated data points to increase accuracy, and then looks to the results that those data points generate. The methodology is particularly useful when reaching an exact numerical result is impossible or infeasible and the data provide a known range—a minimum and a maximum, for example—but leave the exact answer uncertain."
 
Responding to the charge that this method of statistical analysis is unreliable and untestable, Higginbotham responded,". . .the cited cases at most stand for the proposition that Monte Carlo analysis is unreliable when injected with faulty inputs, but nothing more. . . .  Monte Carlo simulation is not inherently untestable. . . . If anything, Monte Carlo provides greater certainty than the basic alternatives: using one of the three data or using the arithmetic average of all three."
 
Countering the challenge that the model results were "equivocal," the judge continued, "The Monte Carlo analysis—though it produced a statistical range of likely outcomes and not one determinative answer—supports choosing one result over another, and certainly assisted the district court in its decisionmaking."
 

The decisions-by-the-numbers guys certainly had their day in court.  The free advertising wasn’t bad either.

Value-Based Management Compensation

Full disclosure: I am, like so many of my friends, an investor (a small-time one), and recently I have joined in the public outrage about bankers’ bonuses and executive compensation in general. Compensation is one of the hot buttons in the debate over financial reform. I keep wondering why compensation practices are what they are and how they could be adjusted to calm the turmoil on Wall Street.

Enter Marwaan Karame, and his version of risk analysis.

 
Karame heads the New York consultancy Economic Value Advisors, which coaches major corporations on Value Based Management. Value (long-term rather than right-now profit) is the foundation of the firm’s philosophy. Its central principle is that any activity a business undertakes should increase the wealth of its shareholders; in the case of a privately held company, the number of shareholders may equal 1.
 
Karame has developed what he calls Value Based Compensation, whose goal is to align the self-interest of management with the self-interest of shareholders. He believes the shareholders, and with them the company, come first. And this means a lot of decision-making under uncertainty. But Marwaan has a method for his management-shareholder balancing act, and it involves performance targets, statistical analysis, and risk assessment (in this case, managing probabilities of performance). His strategy involves maintaining a reserve of bonus funds and timing the payout of these rewards.
 
Monte Carlo simulation and Monte Carlo software come in at the point where the variance around performance targets and the level and timing of reward converge. He shows his clients how to bring Monte Carlo into the Excel spreadsheet and use the software to locate the tipping point between wealth for management and wealth for shareholders.
 

As a small (very small) shareholder, I find it reassuring to discover that there is such a tipping point and that Karame knows how to locate it. It makes me feel there’s someone on my side.

@RISK Quick Tips: Running multiple risk analysis simulations to see how changes in model variables affect simulation results

Example Model: SENSIM.XLS

Sensitivity analysis in @RISK (risk analysis software using Monte Carlo simulation) lets you see the impact of uncertain risk analysis model parameters on your results. But what if some of the uncertain model parameters are under your control? In this case the value a variable will take is not random, but can be set by you. For example, you might need to choose between some possible prices you could charge, different possible raw materials you could use, or a set of possible bids or bets. To properly analyze your model, you need to run a simulation at each possible value for the "user-controlled" variables and compare the results. A Sensitivity Simulation in @RISK allows you to do this quickly and easily – offering a powerful analysis technique for selecting between available alternatives.

In @RISK any number of simulations can be included in a single Sensitivity Simulation. The RiskSimtable function is used to enter lists of values, which will be used in the individual simulations, into your worksheet cells and formulas. @RISK will automatically process and display the results from each of the individual simulations together, allowing easy comparison.
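
For readers who want to see the idea in code rather than in the RiskSimtable worksheet function, here is a rough Python analogue (not @RISK itself): run a full simulation at each candidate value of a user-controlled variable — here, a price — and compare the summary statistics side by side. The demand and cost assumptions are invented purely for illustration.

# Rough analogue of a Sensitivity Simulation (not @RISK): one full simulation
# per candidate price, with results compared across the candidates.
import numpy as np

def simulate_profit(price, n_trials=50_000, seed=7):
    rng = np.random.default_rng(seed)
    # Hypothetical demand model: a higher price lowers expected demand.
    demand = rng.normal(1000 - 40 * price, 100, n_trials).clip(min=0)
    unit_cost = rng.uniform(4.0, 6.0, n_trials)
    fixed_cost = 2000.0
    return (price - unit_cost) * demand - fixed_cost

candidate_prices = [8.0, 10.0, 12.0]   # the "RiskSimtable" list of values
for price in candidate_prices:
    profit = simulate_profit(price)
    print(f"price={price:5.2f}  mean profit={profit.mean():9.1f}  "
          f"P(loss)={(profit < 0).mean():.3f}")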

» Click here to see how to run a Sensitivity Simulation
» Click here to download the example file SENSIM.XLS

(Data) Cleanliness Is Next To Godliness

I’m pleased to welcome Palisade Six Sigma Partner Edward Biernat of Consulting with Impact as featured guest blogger. As well as running a successful consultancy, Ed is a noted Six Sigma educator and author.

 

–Steve Hunt

 

 

(Data) Cleanliness Is Next To Godliness

 

I recently had dinner with Eric Alden, a Master Black Belt for Xerox Corporation.  Eric had just gotten back from the American Society for Quality (ASQ) headquarters in Milwaukee, where he was one of 200 Master Black Belts worldwide who generated the questions for the upcoming ASQ Master Black Belt certification examination (more on that in an upcoming post).  Eric had also recently completed a mini-course for the local ASQ chapter on data integrity.  We shared some war stories and came up with some common threads regarding data integrity.

 

1.       Just because it is a number doesn’t mean it is worth anything.  People get enamored with tons of data from process instrumentation, shop floor collection sources, or Excel spreadsheets.  There seems to be a false sense of security in this pile of data, and managers often look to the Black Belt to ‘sort it out’, because with all that data the answer must be in there somewhere.  Many a Belt has crashed on the rocky reefs of bad data, often after tons of time and effort (and credibility) were wasted generating false answers.

2.       GIGO.  The Garbage In – Garbage Out philosophy of computing applies especially to existing corporate databases.  Here are a few recent examples of GIGO.

a.       A Belt wanted to analyze the specific timing of events in a shop floor process and had tons of data from the process instrumentation, with times recorded down to the fraction of a second.  After lengthy analysis, the team found a significant difference between two shifts and forced the lesser shift to adopt the sequence of the more uniform shift.  Only after the change had introduced costly production problems and actually hurt the overall process were the sensors found to be faulty, and the process itself turned out to be subject to human manipulation to generate the ‘pretty charts’ that everyone expected.

b.      Office areas are not immune.  Something as simple as a checksheet to gather data to analyze when a particular computer error occurred can be in question, especially when the clerk fills in the times at the end of the shift from memory rather than logging the event as it occurs.

3.       Good data in bad spreadsheets.  Even if you get good data, having an inexperienced person set up the spreadsheet can cause problems.  It is analogous to a person using word processing software and making a table with spaces and tabs.  It looks like a great table until you have to manipulate it.  Then it falls apart.  Problems like merged cells, subtotals, and random formulas inserted in cells can make a Belt weep and cause significant errors in the resulting analyses.

4.       Useless manipulation.  Often a big issue is that management wants data sliced a certain way for no good reason.  This sometimes leads to the proliferation of additional spreadsheets or databases that needlessly add to complexity.  (Note: If you have an ERP system like Oracle or SAP, USE IT!  These systems are designed to house data and protect its integrity, and their data entry screens typically allow for better and more accurate entry.  Few things are more wasteful than entering everything in the ERP system and then re-entering it into a spreadsheet to appease a manager’s inability to adapt and change.)

 

What are some tactics for resolving these issues?

1.       On a macro level, start ensuring that the data your company is collecting is sound, either as part of the preparation for a Six Sigma launch or as part of plain old good business.  Bad data slows down or stops a Six Sigma project dead in its tracks, changing it from getting something done to fixing the data.

a.       Know and catalog your databases, including the extra ones (Excel, Access) that are usually relied upon but undocumented.

b.      Prioritize the data sources by synchronizing them with your Six Sigma launch sequencing. 

c.       Sample the data to ensure its usefulness.  If it is bad, fix it.  This will give teams better data to start off with and will allow time for that data to accumulate for analysis.

2.       For specific projects, conduct a Measurement System Analysis (MSA) on your data sources (this tool is often used in the Measure phase of the DMAIC model).  We often think of MSAs when it comes to physical measurements, but they are just as critical for ‘softer’ data.

a.       Pull the correct sample size.  In StatTools, under Statistical Inference there is a Sample Size Selection tool that can be used to determine the amount of data needed for the analysis (a minimal sketch of the underlying arithmetic appears after this list).

b.      Pull your data randomly and follow the trail to the actual entry point.  That may mean watching how individuals enter data, probing for special circumstances, etc.

c.       In your analysis, look for random factors such as vacation fill-ins.  Eric and I both had several experiences where one person was filling in for someone who was out sick or on vacation and, usually due to inadequate training, varied from the expected process.

3.       Pivot Tables are our friends.  Start today upgrading the skill sets of the people who do the actual data entry and first-level analysis.  Train them in how to use tools like Pivot Tables that slice the data but leave the actual spreadsheet intact.  The fewer merged cells and the like we have to fight with, the better.

4.       Managers – Trust your Belt.  If they say the data is bad, it probably is.  No matter how much you want an answer today, you may not be able to get one.  The good news is that some processes can be modeled using @RISK to begin improvement that is directionally correct while waiting for the data to compile.  Then the better data can be used to either update or replace the early model.

5.       Go hunting.  Find extraneous datasets and merge them / kill them.  The fewer that are out there, the more likely you will be able to ensure the integrity of those that remain.
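
As a companion to point 2a above, here is a minimal sketch of the arithmetic behind a sample-size calculation for estimating a process mean to within a stated margin of error. It assumes a simple normal-approximation scenario and does not reproduce the StatTools Sample Size Selection dialog; the inputs are illustrative.

# Minimal sketch, assuming a normal approximation: observations needed to
# estimate a process mean to within a stated margin of error.
import math

def sample_size_for_mean(sigma, margin_of_error, confidence=0.95):
    # Two-sided z critical values for common confidence levels.
    z_table = {0.90: 1.645, 0.95: 1.96, 0.99: 2.576}
    z = z_table[confidence]
    return math.ceil((z * sigma / margin_of_error) ** 2)

# Example: historical std dev of 12 units, mean wanted within +/- 2 units.
print(sample_size_for_mean(sigma=12, margin_of_error=2))  # -> 139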

 

Remember that data analysis is a funnel.  Tons of data leads to bunches of information which then can help us make some decisions.  Throwing bad data into the system is similar to throwing bad tomatoes into the food distribution system.  The end results can be pretty messy and difficult to clean up. 

 

Also, don’t miss Ed Biernat’s free live webcast, "DMAIC and Using a Non-Intuition Approach," this Thursday at 11 AM Eastern Time.

 

Sign up here:

https://palisade.webex.com/palisade/onstage/g.php?d=719996370&t=a

 

 

BIO:

 

Edward Biernat is the president of Consulting With Impact, Ltd., a training, coaching, and consulting firm located in Canandaigua, NY, that he founded in 1998.

Free Webcast This Thursday: “DMAIC and Using a Non-Intuition Approach”

On Thursday, June 10, 2010, Ed Biernat will present a free live webcast entitled "DMAIC and Using a Non-Intuition Approach."

Experience is often critical to good decision making.  It helps us see patterns and react quickly.  In that sense it is a strength.  However, if the environment changes radically and we use the old paradigms to see the new world, bad things can happen.  The Six Sigma DMAIC process is a great tool set for helping us see the world through data, and thus it helps us adapt to the change.  What is needed is the addition of other tools and insights to help us interpret the analyses correctly.

In this free live webcast, we will review some of the latest research in cognitive psychology and related fields and discuss how to apply these insights to the realm of Lean Six Sigma transformation.  We will challenge the role of intuition as a primary factor in decision-making and, while not removing it entirely from the framework, look to put it into its proper place.  We will also examine some of the biases, both in the data and in our heads, that may lead good people to make bad decisions when the old rules fail to apply in the face of radical change.

» Register now (FREE)
» View archived webcasts

Health Care Management: Decision Making at Two Levels

Reading recent reviews of two books on healthcare caused me to realize that in spite of the rapidly increasing number of clinical studies that use risk analysis and neural networks to sort out the best treatment choices, there has been very little published on how to use quantitative tools like decision trees and Monte Carlo software to manage health care better. Given the recent national debates on health care reform, this is actually quite surprising. 
 
There’s health care management, and then there’s health care management.  On the macro level, decision evaluation focuses on the organization. Marian C. Jennings’s Health Care Strategy for Uncertain Times (2000) prescribes ways for corporate health care managers to reshape the ways their organizations deal with uncertainty by adopting the same quantitative techniques used in the commercial realm by enterprises like investment firms and utility companies.  On the micro level, health care management focuses on you, your body. Thomas Goetz’s The Decision Tree (2010) prescribes how to apply a number of these same decision analysis techniques to your own health. 
 
Essentially, what both books are saying is, "Look, the only certainty is uncertainty.  But you have some numbers.  Here are the tools to turn those numbers into plans you can reasonably rely on." These tools shouldn’t be news to you as a reader of this blog, but apparently, if the popularity of Goetz’s book and renewed attention to Jennings’s are any indication at all, the health care management arena is plenty ripe for quantitative decision support tools.