The August issue of McKinsey Quarterly is devoted to the year's ten most significant "tech-enabled" business trends. Of the ten, the one that caught my eye was Experimentation and Big Data. Like many of us in IT-based businesses, I had been wide awake to Big Data (really nano-data in vast quantity) but only peripherally aware of entrepreneurial experimentation using them.
Big Data, of course, are the billions of bytes of information that come to a business through its use of the Internet: clicks, click-throughs, patterns of browser use. The more successful your e-commerce operation, the heavier the flow of Big Data. But even the smallest e-enterprise can slurp an astonishing amount of it. What's big about Big Data are the patterns in which they arrive, and these patterns are where the Experimentation comes in. They tempt the enterprise to employ new stratagems to improve the effectiveness of its e-operations, and because of the fluidity of the Internet medium, a great deal of tinkering results. If one ploy doesn't prove beneficial, it's pretty easy to try another one.
In effect, Big Data provide a focus group, or a series of focus groups. This, it seems to me, is where risk analysis, especially financial risk analysis, has a Big Role to play. With all the historical data in your lap, any standard operational risk simulation can tell you a lot about the potential results of Ploy A versus the results of Ploy B. Train a neural network on the data. Then bring up Excel and the Monte Carlo software, and let them chomp through the known patterns. This should yield fair predictions of potential results. The ideal implementation, of course, is one in which the analytics play in real time as Big Data roll in.
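To make the Ploy A versus Ploy B comparison concrete, here is a minimal Monte Carlo sketch in plain Python. Everything specific in it is an assumption for illustration: the conversion rates (which in practice you would estimate from your own historical click data), the visitor counts, and the trial counts are all hypothetical, and the simulation stands in for whatever Excel add-in or dedicated Monte Carlo package you actually run.

```python
import random
import statistics

def simulate_ploy(conversion_rate, visitors, trials=2_000, seed=42):
    """Monte Carlo simulation of one ploy: each trial draws the number of
    conversions produced by `visitors` independent visits, given an
    assumed per-visit conversion rate."""
    rng = random.Random(seed)
    outcomes = []
    for _ in range(trials):
        conversions = sum(1 for _ in range(visitors)
                          if rng.random() < conversion_rate)
        outcomes.append(conversions)
    return outcomes

# Hypothetical conversion rates, as if inferred from historical Big Data
ploy_a = simulate_ploy(conversion_rate=0.031, visitors=1_000, seed=42)
ploy_b = simulate_ploy(conversion_rate=0.034, visitors=1_000, seed=7)

print("Ploy A mean conversions per 1,000 visits:", statistics.mean(ploy_a))
print("Ploy B mean conversions per 1,000 visits:", statistics.mean(ploy_b))

# Fraction of paired trials in which Ploy B out-converted Ploy A
wins = sum(b > a for a, b in zip(ploy_a, ploy_b)) / len(ploy_a)
print("Estimated probability that B beats A:", round(wins, 2))
```

The point of the exercise is not the means, which you could compute directly, but the full distribution of outcomes: the simulation tells you how often the nominally better ploy actually loses, which is exactly the risk-analysis question.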
While the McKinsey authors don't trouble themselves with specific statistical analysis techniques, they make an important point: the fluidity and speed of both data acquisition and quantitative analysis require a new managerial mindset toward experimentation and decision evaluation.