One of the criticisms leveled at the risk assessment models blamed for many of the recent failures in the finance world is that they allowed too much room for error–that the models themselves were inaccurate. My argument is that the fault lay not with the models but with operator error–sloppiness or excessive optimism in dealing with probabilities.
Theoretical probability, the branch of mathematics that attempts to reconcile likelihood with random phenomena, is the basis of statistical analysis and other forms of quantitative analysis of large bodies of data. Most financial models must accomplish the same reconciliation. Most Monte Carlo software, for instance, calls for user input of probability functions. Obviously, the models that result are only as accurate as the probability functions fed into them.
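The point can be made concrete with a small sketch. The function names, parameters, and distributions below are my own illustrative assumptions, not any particular vendor's model: the same Monte Carlo machinery estimates very different risks depending on whether the user supplies a thin-tailed (normal) return distribution or one that admits an occasional high-volatility regime.

```python
import random

def simulate_losses(draw, n=100_000, threshold=-0.20):
    """Monte Carlo estimate of the probability that a one-period
    return drawn from `draw` is worse than `threshold` (a 20% loss)."""
    hits = sum(1 for _ in range(n) if draw() < threshold)
    return hits / n

random.seed(42)

# Assumption 1: returns are normal, 1% mean, 5% standard deviation.
p_normal = simulate_losses(lambda: random.gauss(0.01, 0.05))

# Assumption 2: same distribution most of the time, but a 5% chance
# per period of a high-volatility regime (25% standard deviation).
def fat_tailed():
    if random.random() < 0.05:
        return random.gauss(0.01, 0.25)
    return random.gauss(0.01, 0.05)

p_fat = simulate_losses(fat_tailed)

# The fat-tailed assumption puts far more probability on a 20% loss.
print(p_normal, p_fat)
```

The simulation code is identical in both runs; only the user-supplied probability function changes, and the estimated crash risk differs by orders of magnitude. That is the sense in which the model is only as accurate as its inputs.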
Questions have been raised about how appropriately the Wall Street risk analysis models used by hedge funds and banks accounted for real-world time horizons. According to at least one commentator, most of these models considered very limited time horizons–weeks instead of years. Obviously, the shorter the period a financial organization is exposed to risk, the lower the probability of a cataclysmic event; the probability of that same devastating event occurring at some point over a decade or two is much greater. Unusual events are just that: they don’t occur very often.
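The arithmetic behind this is elementary. If a devastating event has some small probability per week, then over many weeks the chance of it happening at least once compounds. The numbers below are hypothetical, chosen only to illustrate the scaling, and the calculation assumes the event's probability is independent from period to period–itself an optimistic simplification:

```python
def prob_at_least_once(p_per_period, periods):
    """Probability the event occurs at least once over `periods`
    independent periods, each with per-period probability `p_per_period`."""
    return 1 - (1 - p_per_period) ** periods

weekly = 0.001  # hypothetical: a 0.1% chance of catastrophe in any given week

one_week = prob_at_least_once(weekly, 1)     # 0.001
ten_years = prob_at_least_once(weekly, 520)  # roughly 0.41

print(one_week, ten_years)
```

An event that looks safely improbable on a one-week horizon–one chance in a thousand–becomes close to a coin flip over ten years of weeks. A model that only ever asks the one-week question never surfaces that number.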
There is no reason the risk analysis programs that turn out Wall Street’s models could not be tweaked to account for longer time periods–unless the person doing the modeling wants, consciously or unconsciously, to develop an irrationally sunny scenario.