Ed Biernat of Consulting with Impact has been in touch to respond to my recent question about analysis paralysis: How do you know when you’ve done enough decision analysis, no more and no less than will benefit you?
Here’s Ed’s take on the issue: "Goldilocks had it easy. She eventually got it right the third time. This issue is one that we wrestle with in Lean Six Sigma overall, because it is easy to become enamored with the analysis of data. Analysis paralysis kills the speed of an implementation and must be vanquished at all costs. Inertia is the biggest foe that we face in implementing Lean Six Sigma. It was one of the big problems with the old model of statisticians in businesses (and why it is hard to find a pure statistician around now in anything but actuarial endeavors). What the issue really comes down to is the basic question: What problem are you solving?
Golf offers a quick analogy. Let’s take the greatest 7-iron player in the world. This person can play the 7-iron like nobody’s business. In fact, they use it more than any other club in their bag, and the crowd really appreciates this virtuoso of the 7-iron. But what is the purpose of the game? To use the 7-iron, or to get the lowest score on the course? For risk-analysis geniuses, we can substitute the risk-analysis tool for the 7-iron. It is a great tool, a powerful tool. But only if it helps us solve the problem we are facing. And that problem is probably not to build the world’s best model.
If you have addressed the question that you started with when you built the model, then you have done enough analysis. In our consultancy, our bias is to get close and move forward unless we are dealing with a mission-critical decision. We fully admit that we are not modeling experts, and we are OK with that. That is not why our clients engage our services. We solve problems and help them change their culture. Modeling helps with that by getting the team familiar with issues and sensitivities before we do a full deployment. Once they can see the impact of variation and of their assumptions, and once they have a framework for going forward, we put the model away because it’s done its job."
Thanks, Ed, for giving this some thought!