Media reports on the global search for alternative and sustainable energy sources often dwell in the happy realms of possibility and leave me happily clinging to the cheerful bits of information they offer up––"if everyone over the age of 21 replaced one incandescent lightbulb with a fluorescent," and blah, blah. When was the last time you read such an account that came to an unhappy conclusion or raised the prospect of failure? And who knows whether these bits are facts or factoids, the unreliable cousins of fact? More important, where do the calculations in them come from?
I never stopped to think about the sources or pertinence of the peppy facts and factoids I like so much until I came across a brief blog mention of scientist Seth Darling at the U.S. Department of Energy’s Argonne National Laboratory. Darling is a photovoltaics expert who is trying to separate fact from factoid and frame a realistic picture of the costs of solar electrical generation. He is using his Monte Carlo software to "lift up the rug" under which many assumptions about solar energy have been swept.
Darling points out that the photovoltaics industry is expanding rapidly, with the number of its stakeholders growing in parallel: investors and funding agencies, technology developers, regulators, and policymakers. None of these stakeholders can rely on cheerful factoids. They have to make too many decisions under uncertainty, and they need reliable information on which to base statistical analysis, risk assessment, and production predictions. Darling is trying to provide an analytical framework for testing assumptions behind solar electrical production, calculating its lifetime costs, and comparing these with conventional generation methods. He calls this the "levelized cost of energy." It goes beyond immediate financial risk analysis to incorporate, over the lifetime of the production resource, such usually hidden variables as the cost of financing, insurance, maintenance, and depreciation.
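For the quantitatively curious, here is a rough sketch of what a levelized cost calculation looks like once Monte Carlo simulation is handling the uncertainty. Every input (installed cost, maintenance, output, degradation rate) is an illustrative assumption of mine, not a figure from Darling's work:

```python
import random

def simulate_lcoe(n_trials=10_000, lifetime=25, discount=0.07, seed=1):
    """Monte Carlo sketch of levelized cost of energy ($/kWh) for a
    hypothetical small solar installation. All inputs are illustrative."""
    rng = random.Random(seed)
    results = []
    for _ in range(n_trials):
        capital = rng.uniform(25_000, 35_000)    # installed cost, $
        om_per_year = rng.uniform(200, 500)      # O&M + insurance, $/yr
        kwh_year1 = rng.uniform(12_000, 16_000)  # first-year output, kWh
        degradation = rng.uniform(0.003, 0.01)   # annual output decline
        cost = capital
        energy = 0.0
        for t in range(1, lifetime + 1):
            disc = (1 + discount) ** t
            cost += om_per_year / disc
            energy += kwh_year1 * (1 - degradation) ** t / disc
        results.append(cost / energy)            # lifetime cost per kWh
    return results

lcoe = simulate_lcoe()
lcoe.sort()
print(f"median LCOE: ${lcoe[len(lcoe)//2]:.3f}/kWh")
print(f"90% interval: ${lcoe[len(lcoe)//20]:.3f} to ${lcoe[-len(lcoe)//20]:.3f}/kWh")
```

The point of the exercise is the interval, not the median: change any assumption under the rug and you can watch the whole range shift.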
If you’re not a quantitative sissy, a category to which I happily consign myself, you will want to take a look at Darling’s recent paper with co-authors Fengqi You, Thomas Veselka, and Gartner analyst Alfonso Velosa. It’s bound to let the sun shine into some of the darker corners of alternative energy production.
There’s a new game out there for all of you in the game of risk management. "Take Charge: A Risk Simulation Game," developed by the Indian risk management services provider Aujas, debuted at a recent summit for CSOs––chief security officers, as opposed to chief strategic officers––in Bangalore.
"Take Charge" is a team sport that is played within hypothetical business environments and marketplaces, and as the teams interact, their prowess as decision makers is evaluated. The goal for each team is to balance risk analysis with business growth. This requires not only standard fare such as financial risk analysis but also project risk management strategies incorporating such issues as information security and global accessibility. The team that best keeps its eyes on the prize––that best understands the goals of its imaginary business and creates the best frameworks for making decisions under high levels of uncertainty––is the winner.
The point of this high-intensity play is to reveal that calculated risk is essential to growth and profit and to highlight the role of the risk officer as a key player in corporate strategy. But many of you, dear readers, are already very much aware of your importance in strategy making. You’re already out there on the real board playing the game of taking charge.
Financial advisers took a hit from the 2008 meltdown of the markets. Many investors, finding fault with their advisers’ lack of prescience or actual handling of their investments during the crisis, decided they could do just as well managing their own investments––and they ditched their advisory firms.
So far their results probably haven’t been bad. For the past two years stocks have been making steady gains, so these new independents have no reason to second-guess their decisions. But a recent blog on CNBC.com put out the strong opinion that it’s probably time for the investors who cut their advisers loose to swallow their pride and kiss and make up.
The basic rationale behind this opinion goes something like this: in an environment with increasingly complex markets and rapid trading automated by neural networks, the everyday investor does not have the necessary skills in financial risk analysis or access to the essential risk analysis solutions to survive. In addition, new increases in market volatility make it difficult for the amateur, without benefit of Monte Carlo software, to keep pace. And furthermore, this free spirit most likely will not have the time or discipline to absorb and process the deluge of information the markets pour out.
Overall, it’s a pretty good argument, but I find this last bit––the requirement of time and discipline––the most convincing. If most readers are as lazy as I am, financial advisers should see a big uptick in their stock.
Last May 6, the Dow Jones Industrial Average made a rapid series of inexplicable drops, and, in fact, in one five-minute period fell more than 500 points. Then, just as inexplicably, the market recovered. The causes of the so-called Flash Crash remained mysterious until September, when the SEC issued a report on the rapid fluctuation of the market. It found that a single "large fundamental trader" had used an algorithm to hedge its market position aggressively and quickly.
Since then the role of neural networks and algorithms in automated transactions has received a good deal of attention from the media. The online edition of this month’s Wired offers a fascinating perspective on algorithms as investors. It reveals how neural networks and other automated types of statistical analysis can chew through news of the financial markets––essentially a big pile of data––to instantaneously produce a financial risk analysis, assess the results of a prospective trade in portfolio risk management terms, and make the trade. The speed with which a computer can function as an investor is part of the problem. It produces a kind of feedback loop in which each instantaneous trade produces instantaneous responses from other computers trolling the markets.
The trend toward computer control of financial markets, however, does not continue unfettered. The month after the Flash Crash, the SEC instituted some "circuit breakers," rules that stop trading when the feedback loops become too intense and the markets fluctuate too rapidly.
All of this presents an interesting and larger question: How much control can we delegate to computers–not just in the financial realm but in our social and creative lives–before we have to scramble to catch up with them and regain control?
Baby Boomers are coming face to face with the realities of retirement, and their financial advisers are having to dig deep to come up with strategies that will calm their fears of a recurrence of the financial meltdown of 2008. In this climate, one term that comes up repeatedly is fixed income, which usually means bonds. Here, it is interesting to note that even fixed is not as certain as it sounds. Prices and rates of return for bonds vary over time and in opposition to those of equities. Even given this dynamic, the challenge for bond fund managers is essentially the same as for equity fund managers–how to diversify a portfolio’s holdings to minimize risk and optimize return.
In 2008 and early 2009 the credit risk of corporate bonds was painfully in evidence, and since then financial planners have been sharpening their credit risk management tools to stabilize returns on bond portfolios. It has been generally accepted by investment professionals that the greater the number of financial instruments in a portfolio, the broader the spread of credit risk. A recent credit risk analysis by the BondDesk Group found, however, that spreading risk over an increasing number of investments is effective only up to a point, after which further investments offer no further protection against loss.
The BondDesk Group used Monte Carlo simulation software to determine two values, tail risk (loss of 20%) and black swan risk (loss at a catastrophic level of 50% or more), in portfolios that progressively increased in size from 2 to 50 bonds. Taken together, the two measures captured a portfolio’s vulnerability to losses from bond defaults. Interestingly, the simulations revealed that both kinds of risk were reduced by increases in portfolio size up to 10 bonds, and in both cases, these benefits began to diminish with bond number 11.
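To see the shape of this kind of result, here is a minimal Monte Carlo sketch of my own, not BondDesk's model. It assumes equally weighted, independent bonds, each with an illustrative 5 percent chance of defaulting and losing all principal:

```python
import random

def default_risk(n_bonds, p_default=0.05, n_trials=50_000, seed=7):
    """Monte Carlo estimate of tail risk (loss of 20% or more) and black
    swan risk (loss of 50% or more) for an equally weighted bond portfolio.
    Independent defaults and total loss of principal are assumed."""
    rng = random.Random(seed)
    tail = swan = 0
    for _ in range(n_trials):
        defaults = sum(rng.random() < p_default for _ in range(n_bonds))
        loss = defaults / n_bonds
        tail += loss >= 0.20
        swan += loss >= 0.50
    return tail / n_trials, swan / n_trials

# Small portfolios are lumpy: with 5 bonds, a single default is already
# a 20% loss, so the tail-risk column is not perfectly smooth.
for n in (2, 5, 10, 20, 50):
    tail, swan = default_risk(n)
    print(f"{n:2d} bonds: tail risk {tail:.4f}, black swan risk {swan:.4f}")
```

With these toy assumptions both risks shrink as the portfolio grows; the leveling off after 10 bonds is a feature of BondDesk's richer model, not something this sketch reproduces.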
The group’s credit risk analysis brings good news for both investment advisers––it simplifies their work––and investors––it reduces the cost of investing! For the juicy details, go to the BondDesk website.
In elementary school arithmetic, most of us who want to do well struggle to come up with the correct answer to that problem posted on the blackboard. Unfortunately, that’s the way many grown-up decision makers approach risk management. At a recent Palisade Users Conference, v.p. Randy Heffernan offered up some fun and insightful comments about risk analysis and the need many managers seem to feel to boil risk assessment down to a single "correct" answer–the Number.
The Number, he points out, harbors a number of evils that bedevil rational decision making. The reason for this is that risk is, by definition, uncertainty, and uncertainty is often a compound of a number of unknowns. Uncertainty embodies many possible outcomes or answers. Trying to identify a single resolution to uncertainty leads to simplistic and often dead-wrong answers. Randy points out that the way to get the best of something as vaporous as uncertainty is to use probability or–in the case of multiple unknowns–probabilities.
A probability expresses a range of outcomes or numbers, and this, Randy proposes, allows the risk manager a fuller understanding of any particular course of action. "It means thinking in two dimensions," he says, "not just ‘what if’ but ‘how likely.’"
Almost every well-brought-up manager thinks risk analysis, the process of quantifying risk, is important. But there is more than one approach to quantification, Randy counsels. If you want to do really useful risk analysis, forget about coming up with The Number and concentrate on seeing the range of possibilities.
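Here is a toy illustration of those two dimensions. A made-up profit model yields one tidy Number when you plug in best guesses, and a whole distribution, with a "how likely," when you simulate it (all inputs are invented for the example):

```python
import random

# A single "what if": plug in best guesses and get one Number.
best_guess_profit = 120 * 10.0 - 900        # units * margin - fixed cost

# Two dimensions, "what if" AND "how likely," via Monte Carlo.
rng = random.Random(42)
profits = []
for _ in range(50_000):
    units = rng.gauss(120, 30)       # uncertain demand (illustrative)
    margin = rng.uniform(8.0, 12.0)  # uncertain margin per unit
    profits.append(units * margin - 900)

profits.sort()
p_loss = sum(p < 0 for p in profits) / len(profits)
print(f"the Number: {best_guess_profit:.0f}")
print(f"median outcome: {profits[len(profits)//2]:.0f}")
print(f"chance of losing money: {p_loss:.1%}")
```

The Number says nothing about the very real chance of a loss; the distribution puts it right in front of you.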
A recent chat with Palisade customer Vertex Pharmaceuticals reinforced something I learned a few years ago when I was working with a biotechnology start-up: the development of a new drug begins with a bright idea and then enters a long, dark tunnel of uncertainty and risk. The odds that the idea will ever emerge in the marketplace are very long, 10 to 1, and the costs of development are gi-normous––from $60 to $100 million to get a new drug even as far as phase 2 clinical trials. But then . . . the payout can also be gi-normous.
At every step in the development process, pharmaceutical risk assessment is crucial to a development company’s viability. The company has a pool of drug "candidates" in its so-called "pipeline," the pathway that leads a candidate from preclinical development through phases 1, 2, and 3 of clinical trials and, with much luck and funding, into the market. At each stage, the pharmaceutical risk management process must weigh the probabilities and potential benefits of a drug reaching the market, factor into that calculation the optimal timing of investment in development, and decide whether and when to invest in further development.
It’s a big, broad playing field for risk, and the game goes on for a long time. By necessity, the people who create the risk analysis models for pharmaceutical development have brought specialized sophistication to such analytical techniques as Monte Carlo simulation, sensitivity analysis, and decision trees. There are lessons here from the pharmaceutical industry if, in your own game, wanting to play at all means you’re in it for the long haul.
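For a flavor of what those models do, here is a bare-bones, entirely hypothetical pipeline calculation: a decision tree collapsed into a single risk-adjusted expected value. The stage costs, success probabilities, and market payoff are invented for illustration:

```python
# Risk-adjusted value of a hypothetical drug candidate, computed as a
# simple decision tree folded into one expected-value calculation.
# All stage costs, probabilities, and the payoff are illustrative.

stages = [             # (stage, cost in $M, probability of advancing)
    ("preclinical", 10,  0.40),
    ("phase 1",     25,  0.60),
    ("phase 2",     60,  0.35),
    ("phase 3",    150,  0.60),
]
market_payoff = 2_000   # $M if the drug reaches the market

expected_value = 0.0
p_reach = 1.0           # probability of reaching the current stage
for name, cost, p_advance in stages:
    expected_value -= p_reach * cost   # a stage's cost is paid only if reached
    p_reach *= p_advance
expected_value += p_reach * market_payoff

print(f"probability of reaching market: {p_reach:.3f}")
print(f"risk-adjusted expected value: ${expected_value:.0f}M")
```

Even with roughly 20-to-1 odds against reaching the market, the expected value here comes out positive, which is exactly the calculated-risk logic that keeps pipelines funded. Real models replace each fixed number with a distribution and run Monte Carlo simulation over the whole tree.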
This week the BBC’s home editor, Mark Easton, reports on a new study examining the "harm" impacts of drugs in the UK, where there is ongoing debate about government drug policy and the issue of legalization. The study was a statistical analysis produced by the Independent Scientific Committee on Drugs, a group of scientists who believe the government’s current drug classification system is based on dogma rather than facts. It was published in the prestigious medical journal The Lancet.
At the heart of this study is a multivariate statistical analysis that evaluates 16 measures of personal and public "harm" caused by 20 legal and illegal drugs–from heroin and Ecstasy to alcohol and khat. Multivariate analysis is related to Monte Carlo simulation, and, like that statistical technique, it incorporates sensitivity analysis.
You probably won’t be surprised to learn that the study finds the most "harm" comes from alcohol and the least from mushrooms, or that the authors conclude that "aggressively targeting alcohol harms is a valid and necessary public health strategy."
On the way to these final judgments, there are some very interesting findings, and if only for the glitzy graphs representing the statistical analyses–and the current push to legalize marijuana in California–I recommend taking a look at Easton’s blog.
For some amusement and bemusement, scroll down to the comments on the blog–e.g., "I especially like the ‘pink’ bits in the last diagram, ‘drug-specific imparement of mental functions’, which is presumably what you are actually paying for." These opinions aptly demonstrate that–statistical analysis, sensitivity analysis, regression analysis, Monte Carlo simulation–you can lay the facts on the line, but no two people will look at them the same way.
Sooner or later, it had to happen . . . Tweets have been linked to stock market behavior. This was not a case of inside information. Researchers from Indiana University have demonstrated that public mood, as expressed in millions of Tweets, can predict stock market behavior with fair reliability.
Analyzing a collection of 9.8 million Tweets from 2.7 million users in 2008, the team used a "subjectivity analysis" tool called OpinionFinder and a Profile of Mood States (a psychological measure) to create a time series that tracked daily variation in public mood as exhibited by the language of the Tweets. It then compared the fluctuations in mood with those of the closing values of the Dow Jones index.
To make these comparisons, the team trained a neural network on the data. Of course, this was not just any neural network. It was a Self-Organizing Fuzzy Neural Network, one that organizes its own "neurons" during the training process.
The patterns that this neural network identified revealed that Tweeting terms conveying a sense of calmness anticipated upward movement in the stock market. These predictions were 87.6 percent accurate. Although I have been unable to track down the statistical analysis methods behind the mood measures, those odds would seem to be impressive.
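For the curious, here is a toy sketch of the lag-and-compare idea behind such a study. The data are synthetic and this is not the Indiana team's method; only the logic of testing whether a mood score from a few days back predicts market direction is real:

```python
import random

# Toy version of the idea: does a "calm" mood score measured d days
# earlier predict the market's daily up/down direction?
rng = random.Random(0)
calm = [rng.gauss(0, 1) for _ in range(500)]   # synthetic daily mood scores
# Synthetic market: direction partly follows the calm score from 3 days back.
direction = [1 if calm[t - 3] + rng.gauss(0, 0.5) > 0 else 0
             for t in range(3, 500)]

def hit_rate(lag):
    """Accuracy of predicting up-days from the calm score `lag` days back."""
    preds = [1 if calm[t - lag] > 0 else 0 for t in range(3, 500)]
    hits = sum(p == d for p, d in zip(preds, direction))
    return hits / len(direction)

for lag in (1, 2, 3):
    print(f"lag {lag}: {hit_rate(lag):.1%} of directions predicted")
```

Because the synthetic market was built with a three-day lag, only the lag-3 hit rate rises above coin-flipping. The real study's neural network had to discover any such lag structure from noisy mood and market data rather than have it planted.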
Does the relation between Tweeting and the stock market work only one way? Or does this result imply that if we want to avoid another Black Swan dive in the financial markets, we should just think calm thoughts and Twitter slowly?