In his blog yesterday for Smart Data Collective, Dean Abbott makes a worthy, commonsense observation: no matter how accurate a predictive model is, it is of no use to the enterprise unless it is presented in such a way that all the decision makers understand what factors and techniques went into the analysis and why.
The reason the ‘best understood’ model is more effective than the ‘best’ model is that when the people with authority over a particular decision are presented with a statistical analysis that is beyond their ken, they may or may not pretend to understand it. Either way, they are not likely to buy into the results if they cannot retell the story the model tells.
Take, for instance, a Monte Carlo simulation used for credit risk analysis on a particular loan. Everyone in the line of authority will be held responsible for the real-world outcome of what the Monte Carlo software describes in an Excel spreadsheet. And if you are one of those decision makers, how can you take responsibility for something you may not quite understand?
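To make the scenario concrete, here is a minimal sketch of the kind of Monte Carlo credit-risk calculation the spreadsheet would be wrapping. All the names and parameters (`n_loans`, `default_prob`, and so on) are illustrative assumptions, not anything from Abbott's post, and the model is deliberately naive: it treats defaults as independent, which real credit models do not.

```python
import random

def simulate_portfolio_loss(n_loans=500, default_prob=0.03,
                            loss_given_default=0.6, exposure=10_000,
                            n_trials=2_000, seed=42):
    """Monte Carlo estimate of credit losses on a loan portfolio.

    Each trial draws defaults independently per loan (a simplifying
    assumption) and records that trial's total loss. Returns the mean
    loss across trials and the 95th-percentile loss.
    """
    rng = random.Random(seed)
    losses = []
    for _ in range(n_trials):
        defaults = sum(1 for _ in range(n_loans)
                       if rng.random() < default_prob)
        losses.append(defaults * exposure * loss_given_default)
    losses.sort()
    expected_loss = sum(losses) / n_trials
    var_95 = losses[int(0.95 * n_trials)]  # 95th-percentile loss
    return expected_loss, var_95
```

Even this toy version illustrates the communication problem: the single "expected loss" number a decision maker sees compresses thousands of simulated scenarios and several debatable assumptions.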
The problem of getting a predictive model accepted presents the analyst with a tough question: do I present the model I know to be statistically sound and accurate? Or do I present a ruder, cruder analysis that tells a story the audience can immediately understand?
Abbott suggests a compromise: streamline your plot by masking (Abbott says "removing") fields that contribute to the robustness of the analysis but involve statistical twists and turns that distract decision makers who are not fascinated by technique and just want to see how the story turns out. This, he explains, lets you work from a model that both you and the decision makers can believe in.
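One way to read that advice: keep every field in the fitted model, but fold the technical, low-contribution ones into a single bucket when reporting. The sketch below assumes a made-up mapping of field names to importance shares; nothing here comes from Abbott's post.

```python
def presentable_summary(importances, mask_below=0.05):
    """Fold minor fields into one bucket for presentation.

    `importances` maps field name -> share of the model's predictive
    power (assumed to sum to roughly 1). Fields below `mask_below`
    still drive the model under the hood; they are simply combined
    into "other factors" so the audience sees a clean story.
    """
    shown = {k: v for k, v in importances.items() if v >= mask_below}
    other = sum(v for v in importances.values() if v < mask_below)
    if other:
        shown["other factors (combined)"] = round(other, 4)
    return shown

# Hypothetical importance shares for a credit model
importances = {
    "payment_history": 0.35,
    "debt_ratio": 0.25,
    "income_stability": 0.20,
    "utilization_spline_3": 0.08,   # technical term most audiences skip
    "seasonality_term": 0.05,
    "zip_x_age_interaction": 0.04,
    "residual_adjustment": 0.03,
}
```

The masked fields still do their statistical work; only the telling of the story changes, which is the heart of Abbott's compromise.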