Month: March 2010

Making Optimal Choices, or Just Making Choices? Part 4

It has taken four entries, but I’ll finish this blog series now with a discussion of optimisation optimisation. That’s not a typo. It’s an art form analogous to elegant modelling, as opposed to ‘just’ modelling. The tipping competition model not only opened my eyes to the world of optimisation, but also showed me that not all models are created equal, even if they are numerically equivalent.

A few rounds into the season we were allowed to buy and sell riders, which was great if you’d bought some duds at the start! But from a modelling point of view the complexity had increased dramatically. There was now a time component to the model structure, as well as different prices for the riders based on their performance to date. My first attempt to model this was quite cumbersome, with dozens of 0/1 decision cells to indicate buying riders at the start of the season and then the buying and selling of riders after four rounds. While mathematically correct, I wasn’t doing Evolver any favours by creating such complex dependencies between so many decision cells. The optimisation was taking far too long to converge; so much so that when I stopped it after what I considered a reasonable length of time, the final solution was still heavily influenced by the initial solution.

Now this, of course, isn’t usually the way things work with Evolver; as a sophisticated genetic algorithm optimiser, it should find the global optimal solution regardless of the initial conditions. However, the time taken to reach such a solution can be extended greatly by an unnecessarily convoluted model. After initially blaming the software I realised I could simplify the model by turning two decisions (“buy” and “sell”) into one (“change status”). This removed one third of the decision variables, and the optimisation immediately converged to a global optimal solution regardless of the initial feasible solution. Evolver can only work with the model you give it!
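
To see why the re-encoding helps, here is a minimal sketch in Python (hypothetical rider counts, not the original spreadsheet). The original structure needed three coupled blocks of 0/1 cells, while the ‘change status’ version needs only two, and every combination of the new cells is automatically ownership-consistent, leaving just the budget for the optimiser to worry about.

```python
# Minimal sketch of the re-encoding (hypothetical numbers, not the competition model).
n_riders = 20

# Original encoding: start-of-season picks plus separate mid-season buy and sell cells,
# tied together by can't-sell-what-you-don't-own / can't-buy-what-you-own rules.
vars_original = {"initial_buy": n_riders, "mid_buy": n_riders, "mid_sell": n_riders}

# Simplified encoding: start-of-season picks plus one "change status" cell per rider.
vars_simplified = {"initial_buy": n_riders, "change_status": n_riders}

def team_after_window(initial_buy, change_status):
    """Post-window team under the simplified encoding: ownership flips wherever status changes."""
    return [own ^ flip for own, flip in zip(initial_buy, change_status)]

print(sum(vars_original.values()), "->", sum(vars_simplified.values()), "decision cells")
# 60 -> 40: one third fewer decision variables, and no ownership-consistency
# constraints left for the genetic algorithm to stumble over.
```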

The act of optimising the optimisation can be a subtly tricky one, but it is necessary if you are to have confidence in your optimised solutions and thus the decisions made. Building a model that ‘works’ is only the first piece of the puzzle. If the solutions aren’t stable, can you really be sure you’re producing the best answer? No. And if the model is being used to decide which multi-million dollar projects you will proceed with (or some other equally critical decision), I’m sure you’d want some certainty around the answer provided by Evolver. Of course, if you’d like some help with an optimisation model from experienced risk analysis consultants, feel free to contact the consulting team at Palisade!

» Making Optimal Choices, Part 1
» Making Optimal Choices, Part 2
» Making Optimal Choices, Part 3

Rishi Prabhakar
Trainer/Consultant

Making Optimal Choices, or Just Making Choices? Part 3

Part 2 of this blog ended with me very quickly stating that the MotoGP tipping comp optimiser was identical in structure to a portfolio optimisation problem, where the portfolio could contain stocks, other assets, or even projects. Let’s look at this in a little more detail, as I’m sure you’re reading this to find out how to optimise your own decisions rather than wondering how I went in the tipping competition!

In my model there was a fixed budget (though less could be spent if desired) to spend on riders, with the aim of maximising their total points haul. In the real world you may have a total budget of, say, $100m to invest in a range of projects, perhaps many hundreds of millions of dollars in total value, each of which has a certain expected return. At its simplest, this decision evaluation will find the most profitable (in expectation) portfolio of projects. This is an inclusion/exclusion grouping model, but it is just as simple to optimise assets at a continuous level, e.g. the amount of money invested in various shares. Another real example I have seen, working with an investment company here in Australia, was a model whose goal was simply to find the portfolio mix that came closest to the total allowable spend without exceeding it.
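
As a rough illustration of the inclusion/exclusion model, here is a minimal sketch (invented project figures, with a brute-force search standing in for Evolver’s genetic algorithm): choose the 0/1 combination that maximises expected return without breaching the budget.

```python
from itertools import product

# Hypothetical project data: name -> (cost $m, expected return $m). Figures are invented.
projects = {"A": (40, 9), "B": (35, 7), "C": (30, 8), "D": (25, 5), "E": (20, 6)}
budget = 100  # total allowable spend, $m

best_value, best_picks = -1, None
for picks in product([0, 1], repeat=len(projects)):          # every include/exclude combination
    cost  = sum(p * c for p, (c, _) in zip(picks, projects.values()))
    value = sum(p * r for p, (_, r) in zip(picks, projects.values()))
    if cost <= budget and value > best_value:                # keep the best feasible portfolio
        best_value, best_picks = value, picks

chosen = [name for name, p in zip(projects, best_picks) if p]
print("Portfolio:", chosen, "expected return ($m):", best_value)
```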

Further realism can be added through constraints should the need arise. A resource constraint may mean there has to be a limit on the number of projects that can be run simultaneously. There may also be a minimum number of projects, determined by management as a risk-mitigation strategy. Such constraints are very simple to employ in Evolver and add value to the decision analysis without the need to provide specific risk analysis/Monte Carlo simulation information for the model.

A slightly more sophisticated way of turning an optimisation into a useful portfolio risk management tool, where uncertainty hasn’t been specifically modelled, is to estimate the possible downside of each asset and include it in the calculation of the portfolio’s ‘score’. The Evolver software comes standard with over twenty example spreadsheets for your educational pleasure, of which “Portfolio Mix.xls” gives one method for doing just this.
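
A stripped-down sketch of that general idea (invented figures, not a reproduction of the example spreadsheet), with the project-count constraints from the previous paragraph included: each project carries an estimated downside alongside its expected return, and the portfolio’s score penalises the combined downside.

```python
from itertools import product

# Hypothetical figures: name -> (cost $m, expected return $m, estimated downside $m).
projects = {"A": (40, 9, 6), "B": (35, 7, 2), "C": (30, 8, 5),
            "D": (25, 5, 1), "E": (20, 6, 3)}
budget, min_projects, max_projects = 100, 2, 4   # management-imposed limits
risk_weight = 0.5                                # how heavily downside counts against return

def score(picks):
    cost = sum(p * c for p, (c, _, _) in zip(picks, projects.values()))
    ret  = sum(p * r for p, (_, r, _) in zip(picks, projects.values()))
    down = sum(p * d for p, (_, _, d) in zip(picks, projects.values()))
    if cost > budget or not (min_projects <= sum(picks) <= max_projects):
        return None                              # infeasible under the constraints
    return ret - risk_weight * down              # downside-penalised portfolio 'score'

feasible = [p for p in product([0, 1], repeat=len(projects)) if score(p) is not None]
best = max(feasible, key=score)
print([n for n, p in zip(projects, best) if p], "score:", round(score(best), 1))
```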

In the next (and final) instalment of the Making Optimal Choices blog I will explore the idea that not all optimisations, no matter how mathematically correct, will produce the same results in good time, and that elegant modelling should always be the goal before firing up Evolver.

And just so you know, I came second in the competition. Next year I’m hoping to go one better!

» Making Optimal Choices, Part 1
» Making Optimal Choices, Part 2

Rishi Prabhakar
Trainer/Consultant

Risk in the financial sector – have we learned any lessons?

As part of his pre-Budget report in December, the UK chancellor, Alistair Darling, announced a one-off super-tax on bankers’ bonuses. This followed ongoing threats by bankers that they will leave the UK if their (bonus) earning potential is curtailed.

Unsurprisingly, this angered the British taxpayer who, thanks to the excessive risks taken by the banks, is now the proud owner of several formerly private national banks.  As a result of this and the severe recession that followed, for a while it seemed as though the financial sector would have to change.  However, the current headlines suggest that, whilst the financial crisis was certainly a sharp shock, it may have been too short for measures to be put in place to ensure it never happens again.

The key factor to understand is the grasp that money has over financial institutions.  In fairness, this is as it should be – after all, their raison d’être is to make money.  However, this has developed into a culture of ‘profit at any cost’ that is inherent throughout almost all financial organisations.  One outcome is inappropriate incentive structures that reward short-term income generation over and above any other activity.  Another repercussion, particularly over the past few boom years, has been an increased tolerance of risk.

Over the past two years or so, many risk departments will have flagged up levels of uncertainty that, in previous times, would have been unacceptable.  For various reasons, much of this advice has been ignored.  Frustrating at the time, this must now, in light of the events of the past few months, seem inexcusable to risk managers both within and outside the financial sector.  Many of these people will know that sophisticated risk analysis tools are available to enable them to ‘measure’ the likelihood of an event occurring and the severity of its effects.

The accuracy of the results depends on the quality of the data input.  It also hinges on the ability of the financial sector to adopt a realistic attitude to risk.  And, to quote City minister Lord Myners, this means that bankers must ‘live in the real world’.

Craig Ferri
EMEA Managing Director of Risk & Decision Analysis

Making Optimal Choices, or Just Making Choices? Part 2

In my last blog entry I introduced the notion that optimal decision making wasn’t ‘on the radar’ for many clients in Australasia, and laid out a couple of ideas why. I too once focussed on Monte Carlo simulation rather than decision evaluation, but last year a rather obscure event changed that.

Call me a nerd if you will, but I like modelling problems in Excel. There is skill involved in setting up a problem such that the model assumptions aren’t too gross, and an art to making the model elegant. This elegance can be very important in optimisation problems, but more on that later. My first homemade optimisation problem was generated by motorcycle racing! MotoGP, to be precise. A tipping competition among friends was formed at the start of the 2009 season with the following structure:

  • Entrants played the role of Team Manager.
  • Team Managers had a fixed budget to spend on riders.
  • Either a few good riders could be purchased, or many lesser riders, or something in between.
  • The team that had accumulated the most points at the end of the season was the winner and received kudos!

The future results could not be known in advance, of course, so I set up and ran the optimisation with Evolver after the event to see what the optimal team selection would have been. Historical data could have been used to discover the type of rider mix that tended to be optimal, and thus to make an informed decision for this competition. The risk in having only a few riders was that any misfortune would have a big negative impact on the points won, whereas a team consisting of many (cheaper) riders was less likely to suffer such a fate. This downside scenario will be modelled into the 2010 MotoGP Team Manager predictive, optimised model (currently in production)!
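
That trade-off shows up clearly in a toy simulation (entirely invented numbers): two teams with the same nominal points total, one concentrated in two stars and one spread across six cheaper riders, where any rider’s season can be wiped out by misfortune.

```python
import random

random.seed(42)

def season_points(riders, p_misfortune=0.2):
    """Total team points, zeroing out any rider whose season is ruined by misfortune."""
    return sum(0 if random.random() < p_misfortune else pts for pts in riders)

stars      = [150, 150]   # two expensive riders, 300 nominal points
journeymen = [50] * 6     # six cheap riders, the same nominal total

def mean_sd(xs):
    mean = sum(xs) / len(xs)
    sd = (sum((x - mean) ** 2 for x in xs) / len(xs)) ** 0.5
    return mean, sd

for label, team in (("two stars", stars), ("six journeymen", journeymen)):
    runs = [season_points(team) for _ in range(10_000)]
    m, s = mean_sd(runs)
    print(f"{label:15s} mean {m:6.1f}  sd {s:5.1f}")
# Same expected haul, but the concentrated team's total swings far more from season to season.
```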

What has this to do with the corporate world? Replace “team” with “portfolio” and “riders” with “assets”, “shares” or “projects” and you have a classic portfolio optimisation model. I hadn’t created this model with business applications in mind, but I realised that was precisely what I was doing. An instant later I realised just how useful Evolver would be in many decision scenarios, even though it doesn’t incorporate uncertainty (RISKOptimizer does).

In the next instalment I will further explore some practical applications for Evolver and you’ll see just how universally appropriate it can be.

» Making Optimal Choices, Part 1

Rishi Prabhakar
Trainer/Consultant

New Approaches to Risk and Decision Analysis

Risk analysis and decision-making tools are relevant to most organisations, in most industries around the world.  This is demonstrated by the speaker line-up at this year’s European User Conference, an event at which we believe it is important to bring together customers from a wide range of market sectors.

We are holding ‘New Approaches to Risk and Decision Analysis‘ at the Institute of Directors in central London on 14th and 15th April 2010.  As with previous years, the programme aims to provide everyone attending with practical advice to enhance the decision-making capabilities of their organisation.  Customer presentations, which offer insight into a wide variety of  business applications of risk and decision analysis, include:

  • CapGemini: Faldo’s folly or Monty’s Carlo – The Ryder Cup and Monte Carlo simulation
  • DTU Transport: New approaches to transport project assessment; reference scenario forecasting and quantitative risk analysis
  • Georg-August University Research: Benefits from weather derivatives in agriculture: a portfolio optimisation using RISKOptimizer
  • Graz University of Technology: Calculation of construction costs for building projects – application of the Monte Carlo method
  • Halcrow: Risk-based water distribution rehabilitation planning – impact modelling and estimation
  • PricewaterhouseCoopers: PricewaterhouseCoopers and Palisade: an overview
  • Noven: Use of Monte Carlo simulations for risk management in pharmaceuticals
  • SLR Consulting: Risk sharing in waste management projects – @RISK and sensitivity analysis
  • Statoil: Put more science into cost risk analysis
  • Unilever: Succeeding in DecisionTools Suite 5 rollout – Unilever’s story

We will also look at the recently-launched language versions of @RISK and DecisionTools Suite, which are now available in French, German, Spanish, Portuguese and Japanese.  Software training sessions will provide delegates with practical knowledge to ensure they can optimise their use of the tools and implement business best practice and methodologies.

With over 100 delegates from around the world attending, the event is also a good opportunity to network and knowledge-share with fellow risk professionals.

» Complete programme schedule, more information on each presentation,
   and registration details

Making Optimal Choices, or Just Making Choices? Part 1

Something has troubled me for some time regarding the choices being made in risk land. I train and work with many clients who have adopted Monte Carlo simulation techniques (via @RISK for Excel) into the day-to-day running of their businesses. By doing so they (hopefully) now have a good understanding of the exposure they are facing, be it in project cost estimation, discounted cash flow analysis or, well, anything really. But this is only one facet of risk and decision assessment, specifically dealing with the descriptive statistical output from a simulation. What of the decision evaluation component? Why aren’t more of my customers analysing the decisions they make, or better yet, actually optimising them? I have a few ideas why.

If you’re in business you have to make decisions. Big ones, little ones, yes/no, multiple-state and continuous-value decisions. Decisions that impact other decisions in simple or complex dependency structures. But are you making the best decisions possible? I’m sure important decisions aren’t being made completely randomly (I hope!), but I see many companies that rely entirely upon qualitative techniques for their decision making (experience, gut feel, etc.), which of course means optimality is no more than a hoped-for outcome rather than something that is actually being worked towards.

Firstly, the decision model must be identified and then quantified, and this can be a difficult task. There is a level of modelling aptitude necessary for effective modelling that goes beyond merely knowing Excel and its functions, and into the construction of logical mathematical descriptions of possibly complicated processes. Relevant decisions need to be identified and the impact of those decisions combined into a formula that can be mathematically optimised. A critical component of all this is the knowledge that spreadsheet models can actually be optimised, and that in cases where Excel’s Solver fails there are Palisade products (Evolver and RISKOptimizer) that can perform optimisations under virtually any circumstance.

I too used to focus on Monte Carlo simulation rather than decision evaluation, and this was mainly a product of the clients I was dealing with almost exclusively when I first worked for Palisade. In my next blog I’ll tell you why that changed and also get a little more into the nuts and bolts of optimisation.

Rishi Prabhakar
Trainer/Consultant

Rumors of Death

Allan Roth, who writes a blog for CBS Money Watch called "The Irrational Investor," recently asked his readers a rhetorical question: Is Financial Monte Carlo Simulation Dead? Since rhetorical questions demand an answer in less time than it takes the questioner to draw breath, Roth obliged. 
 
While expressing sympathy for the investors who were victims of poor risk assessment and forecasting when the financial markets shook themselves down to rubble in 2008, Roth is taking a very politely defensive swing at one of the many critics of risk analysis who have turned up the volume since then–one Jim Otar of Otar Retirement Solutions and the author of Unveiling the Retirement Myth.  

Roth is an experienced user of Monte Carlo software who knows the pitfalls of overoptimistic assumptions.  He says he finds 99 percent of the Monte Carlo models he’s seen over the years to be inadequate because of this flaw.  Jim Otar, for his part, finds other flaws as well: in the generation of randomness and trends and in the sequence of returns. Otar’s modeling method does not rely on randomness but on a century’s worth of historical data.

 
Our two worthy opponents put their models up against one another in a match that crunched identical inputs.  Their models produced very, very similar results, apparently satisfying each analyst as to the superiority of his method.  But while Roth said nice things about Otar and his model, he pointed out the limitations of relying on historical information alone. In other words, he doesn’t concede.
 
For any kind of retirement planning model, he says, the cure for these flaws is conservative input. Then he giddily sends his readers to one of those rudimentary online Monte Carlo calculators that investment firms love to offer their clients.
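
For anyone who has never peered inside one of those calculators, the machinery is roughly this (a deliberately minimal sketch with assumed balance, spending, return and volatility figures, which is exactly where the ‘conservative input’ advice bites):

```python
import random

random.seed(0)

def retirement_success_rate(balance=500_000, spend=30_000, years=30,
                            mean_return=0.05, sd_return=0.12, trials=10_000):
    """Fraction of simulated retirements in which the portfolio never runs out."""
    successes = 0
    for _ in range(trials):
        b = balance
        for _ in range(years):
            b = b * (1 + random.gauss(mean_return, sd_return)) - spend  # one year of returns and spending
            if b <= 0:
                break
        else:
            successes += 1          # survived all years without going broke
    return successes / trials

print(f"Chance the money lasts 30 years: {retirement_success_rate():.0%}")
```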
 

Rumors of this death are greatly exaggerated.  

Quantitative risk assessment under-utilised for infrastructure projects

Why is it that so many of the high-profile projects managed by the government in the UK ultimately become beset by problems? A number of projects jump to mind – the Millennium Dome, Wembley Stadium and, currently, the NHS IT programme. All three have been plagued by developmental delays and financial mismanagement.

Recently, yet another worthy but ambitious project has been announced – the North-South high speed rail line to connect London to Scotland. One wonders whether the government undertakes detailed quantitative project risk analysis for its infrastructure initiatives.

A good example to highlight in this context is ENGCOMP, a Saskatchewan-based engineering consulting firm that has worked with the Canadian Department of National Defence (DND) to help define budgets for the fourth phase of construction of its Fleet Maintenance Facility at Canadian Forces Base Esquimalt in Victoria, British Columbia. Using @RISK, a Monte Carlo simulation tool, ENGCOMP helped the DND define and secure budget approval from the Federal Government’s Treasury Board. The consultancy firm was able to estimate the impact of the variability and uncertainties pertaining to risks, costs and scheduling. This assessment enabled it to estimate the project risk budget or the risk reserve and schedule contingency, which were both factored in when defining the total project cost of the infrastructure project.

The fact is, in the world of business, risk is inherent and unavoidable. Whilst one cannot completely control risk, one can certainly help reduce uncertainty, greatly increasing the chances of project success. For instance, a key finding of the project risk analysis conducted by ENGCOMP was that, taking into account all the risks and uncertainties on the project, there is an 85 per cent chance that the Fleet Maintenance Facility project will be completed in January 2014. A fairly positive result for the DND, given the scale and complexity of the project in question.
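
Figures like that 85 per cent typically come straight out of the simulated distribution of project duration. A stripped-down sketch of the mechanics (invented work packages and durations, not ENGCOMP’s @RISK model) looks something like this:

```python
import random

random.seed(7)

# Hypothetical schedule: three sequential work packages, each with a
# (minimum, most likely, maximum) duration in months.
packages = [(10, 12, 18), (8, 9, 14), (6, 8, 12)]

def simulate_totals(trials=20_000):
    """Simulated total durations, drawing each package from a triangular distribution."""
    return sorted(sum(random.triangular(lo, hi, mode) for lo, mode, hi in packages)
                  for _ in range(trials))

totals = simulate_totals()
p85  = totals[int(0.85 * len(totals))]          # 85th percentile of total duration
base = sum(mode for _, mode, _ in packages)     # deterministic "most likely" estimate
print(f"Most-likely total: {base} months; P85 total: {p85:.1f} months")
print(f"Schedule contingency at 85% confidence: {p85 - base:.1f} months")
```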

Craig Ferri
EMEA Managing Director of Risk & Decision Analysis

New @RISK 5.5.1 and DecisionTools Suite 5.5.1 Now Available!

New DecisionTools Suite 5.5.1 is a maintenance update that has been fully translated into Spanish, German, French, Portuguese and Japanese. It features simulation of password-protected worksheets in @RISK as well as an integrated RISKOptimizer toolbar. In addition, you can now also launch any DecisionTools program from within any other program already running. If you still have @RISK 5.0 or DecisionTools Suite 5.0, version 5.5.1 offers @RISK simulations that run 2 to 20 times faster than before, new scatter plots from scenario analysis, a freehand distribution artist, an Excel-style Insert Function dialog with graphs, and much more.
 
@RISK 5.5.1 and DecisionTools Suite 5.5.1 are free for current maintenance holders. If you don’t have maintenance, contact Palisade to get up to date:

US/Canada
607-277-8000, sales@palisade.com

Europe
+44 1895 425050, sales@palisade-europe.com

Latin America
607-277-8000 x318, ventas@palisade.com

Brasil
607-277-8000 x318, vendas@palisade.com

Asia-Pacific
+61 2 9929 9799, sales@palisade.com.au

» Get your update
» Read What’s New in @RISK 5.5.1 and DecisionTools Suite 5.5.1

Confusion, Consensus, Certainty

A longtime user of Palisade’s Monte Carlo software and other decision analysis tools, Willy Aspinall uses them to beat back some heavy-duty varieties of uncertainty.  How long will it be before a volcano actually blows its top as opposed to gurgling over its rim?  What factors should transportation officials focus on to reduce the likelihood of airline disasters?  What are the acceptable limits of air pollution?  What exactly will the climate be like for our grandchildren?
 
Aspinall is often called upon to provide expert testimony on these kinds of life-and-death questions, and he has recently called attention to one of the problems with expert testimony, including his own:  In which expert should you place your confidence? In an opinion piece in this January’s Nature–a magazine that is an icon of scientific validity–Aspinall describes the benefits of using a method called "expert elicitation" to balance the opinions of a group of experts.  The method, developed by Roger Cooke of Resources for the Future, attempts to quantify and then pool the uncertainties to arrive at what Cooke calls a "rational consensus."
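
Cooke’s classical model is considerably more involved, but the basic move, pooling the experts’ distributions with performance-based weights rather than forcing agreement, can be sketched in a few lines. The weights and ranges below are invented; the real method derives its weights from calibration questions put to the experts.

```python
import random

random.seed(3)

# Toy performance-weighted linear pool, loosely in the spirit of Cooke's method.
# Each expert: (weight, (min, most likely, max)) for some uncertain quantity. All invented.
experts = [
    (0.5, (1.0, 4.0, 10.0)),
    (0.3, (2.0, 6.0, 15.0)),
    (0.2, (0.5, 3.0, 8.0)),
]

def sample_pool(trials=50_000):
    """Linear opinion pool: each draw comes from one expert, chosen with probability = weight."""
    weights = [w for w, _ in experts]
    draws = []
    for _ in range(trials):
        _, (lo, mode, hi) = random.choices(experts, weights=weights)[0]
        draws.append(random.triangular(lo, hi, mode))
    return sorted(draws)

pooled = sample_pool()
p05, p50, p95 = (pooled[int(q * len(pooled))] for q in (0.05, 0.50, 0.95))
print(f"Pooled 5th/50th/95th percentiles: {p05:.1f} / {p50:.1f} / {p95:.1f}")
# Where the experts disagree, the pooled distribution stays wide instead of being
# forced into a single 'agreed' number.
```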
 

When experts disagree, Cooke has pointed out, any attempt to impose agreement will "promote confusion between consensus and certainty."  In order to get around this problem, Aspinall points out in his article, the goal of risk analysis should be to "quantify uncertainty, not to remove it from the decision process."  His ongoing risk assessment of volcanic activity on the island of Montserrat in the West Indies is the longest-running application of Cooke’s "expert elicitation" method.  For details about how the elicitation and the pooling of opinion work, I recommend taking a look at the January 2010 issue of Nature.