Day: January 8, 2009

Read This While You Can Still Access It

The best-laid plans are . . . subject to change. An article by Joe Nocera in this week’s New York Times Magazine has caused me to put my plans to blog about the concept of probability and its various expressions on the back burner. I can do that later, but right now I want to persuade you to read Nocera.

Offering a good capsule history of Value-at-Risk modeling for the uninitiated, Nocera delves into a theme that has pervaded my recent blogs for Palisade Corporation: it may not be the model, but more likely the person managing the modeling, who introduces slop into risk analysis. He has talked to a good many risk management experts, and he presents a balanced view of both the limitations of VaR techniques and the shortcomings of the people who relied on those techniques so heavily that they helped bring about the collapse of the sectors of the financial markets that depend on hedging and mortgages.

One thing that will be a relief to any of you doing quantitative risk assessment: Nocera never points a finger at Monte Carlo software or any other category of quantitative analysis software. So the problem isn’t the tools. It may be the—

I don’t want to spoil this excellent article for you.
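
(For anyone curious what the quantitative side actually looks like, a one-day Monte Carlo VaR estimate can be sketched in a few lines of Python. The numbers below, the portfolio value, return assumptions, and confidence level, are purely illustrative and not anything from Nocera’s article.)

    import numpy as np

    # Minimal Monte Carlo Value-at-Risk sketch (illustrative only).
    # Assumes one-day portfolio returns are normally distributed --
    # a simplification; real models use richer return distributions.
    rng = np.random.default_rng(seed=42)

    portfolio_value = 1_000_000     # current portfolio value in dollars (assumed)
    mu, sigma = 0.0005, 0.02        # assumed daily mean return and volatility
    n_trials = 100_000              # number of simulated scenarios

    # Simulate one-day returns and convert to profit/loss in dollars.
    simulated_returns = rng.normal(mu, sigma, n_trials)
    pnl = portfolio_value * simulated_returns

    # 1-day 95% VaR: the loss exceeded in only 5% of scenarios.
    var_95 = -np.percentile(pnl, 5)
    print(f"1-day 95% VaR: ${var_95:,.0f}")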

Six Sigma Terminologies

In a recent article called Six Sigma Speak in iSixSigma Magazine, Craig Gygi does a very nice job articulating one of the barriers to Six Sigma: the terminology we use. We are laden with terms from other languages such as poka-yoke, with “same letter lists,” and with synonyms and acronyms galore. As he describes, this happens because each organization that adopts Six Sigma then adapts the words to reflect the specifics of its own culture and priorities.
 

To name a few of the more common synonyms in Six Sigma:

  • Output = response = effect = key metric = CTQ (critical to quality) = CTC (critical to customer) = KPOV (key process output variable) = Y
  • Input = variable = factor = KPIV (key process input variable) = X
  • Average = mean = central tendency = expected value
  • Spread = variation = dispersion = range = variance = scatter
  • Variable data = continuous data
  • Attribute data = categorical data = discrete data

The one thing he doesn’t address head on in the article is the sheer number of acronyms we use. Below I have named just a few:

  • DMAIC = Define, Measure, Analyze, Improve and Control
  • DFSS = Design for Six Sigma
  • GB, BB, MBB, LSSBB, etc., which represent some of the levels of certification one can achieve, from GB (Green Belt) and BB (Black Belt) to MBB (Master Black Belt) and LSSBB (Lean Six Sigma Black Belt). And please don’t forget some of the others, such as LSSMBB, DFLSSBB, etc.
  • VOC = Voice of the Customer
  • DOE = Design of Experiments
  • DFMEA = Design Failure Mode and Effects Analysis
  • MSA = Measurement System Analysis

To add to the list are the process capability and statistical acronyms such as Cp, Cpk, Pp, Ppk, ANOVA, ANOM, DPPM, etc. As we all know, this is a very small sample of the special terminology that we have either created or adopted over time. It’s no wonder that people outside of the Six Sigma world are intimidated and bewildered before they even get to learning the tools to develop or improve a process.
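
Two of those capability acronyms at least boil down to short formulas: Cp compares the spec width to six standard deviations of the process, and Cpk also penalizes a process that is off center. Here is a minimal sketch in Python; the spec limits and measurements are made up purely for illustration.

    import numpy as np

    # Process capability indices computed from sample data.
    # The spec limits and measurements below are made-up illustration values.
    lsl, usl = 9.5, 10.5            # lower and upper specification limits
    measurements = np.array([9.9, 10.1, 10.0, 9.8, 10.2, 10.0, 9.9, 10.1])

    mu = measurements.mean()
    sigma = measurements.std(ddof=1)    # sample standard deviation

    cp = (usl - lsl) / (6 * sigma)                  # potential capability
    cpk = min(usl - mu, mu - lsl) / (3 * sigma)     # capability, penalizing off-center processes
    print(f"Cp = {cp:.2f}, Cpk = {cpk:.2f}")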
 

Craig Gygi is also the co-author of Six Sigma for Dummies, which I have to admit was one of the first Six Sigma books I purchased when trying to make sense of it all.
 

Lastly, I learned a new acronym on iSixSigma.com while writing this post: BAU (business as usual). I don’t know if it’s truly an LSS term, but what the heck!