# Month: July 2009

# Neural Networks and the Quest for Identity

I’m a pretty fair typist. I’d say my current speed is roughly 60 words per minute, though I was more accurate before this keyboard made it so easy to delete. What I am not good at, and where I do the most deleting per character, is inputting passwords. Password requests make me tense up, and even for those passwords I remember perfectly well, I tend to let the characters ripple off my fingertips. Often my efforts to establish my identity are rejected, which means I have to return to the dialog box, slow down, and pick at the keys like a chicken going for grit.

Sure is, and a word to the wise: Protect your identity. Hunt and peck no more.

# Error-checking Excel Models using TopRank

As one of the products within Palisade’s DecisionTools Suite, TopRank perhaps does not get the attention that it deserves! In many ways, TopRank is a very natural tool for general Excel modellers, perhaps especially so for those modellers who wish to perform sensitivity analysis, but who do not need the full risk analysis with simulation techniques that are available using @RISK.

TopRank is a great tool for auditing models and checking for errors. For example:

- By simply using TopRank to generate a list of all inputs that affect an output, one can quickly get an idea of the basic quality of a model, particularly its robustness when performing a sensitivity analysis. A typical issue that such a scan of all inputs can reveal is an item that should be a single model input being repeated several times within the model (e.g. the same tax rate appearing as a separate variable in each year of a multi-year model); to conduct a valid sensitivity analysis on the tax rate, the model formulae relating to tax would need to be rebuilt first. Many other similar examples arise in practice.
- When using the standard error-checking technique of calculating (what should be) the same value through two different paths in the model, or when comparing two quantities that should be equal (such as forecast assets and liabilities on a balance sheet), the difference between the two can be calculated (and should of course always be zero). By setting this error-check cell as a TopRank output, one can trace and vary all inputs that affect this (ideally zero) cell. Once TopRank has run, the input sources of any errors will be listed, which allows one to check the formulae in the model that depend on each such input (using standard Excel dependency tracing) until the formula containing the error is found.
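The second technique can be sketched outside Excel as well. The following is a minimal, hypothetical Python illustration (the model, input names, and the deliberate bug are all invented for this sketch): compute the same total through two paths, take the difference as a check cell, then vary each input to see which ones move the (ideally zero) cell.

```python
# Hypothetical sketch of the "error-check cell" technique: two formula paths
# that should agree, a check cell holding their difference, and a scan that
# varies each input to find which ones feed the erroneous formula.

def profit_path_a(v):
    # Year-by-year path: (revenue - cost) summed over years.
    return (v["rev1"] - v["cost1"]) + (v["rev2"] - v["cost2"])

def profit_path_b(v):
    # Totals path, with a deliberate bug: cost1 is typed twice, cost2 omitted.
    return (v["rev1"] + v["rev2"]) - (v["cost1"] + v["cost1"])

def check_cell(v):
    # Should be identically zero in a correct model.
    return profit_path_a(v) - profit_path_b(v)

base = {"rev1": 100.0, "rev2": 120.0, "cost1": 60.0, "cost2": 80.0}

# Vary each input by +10% and record which ones change the check cell.
suspects = []
for name in base:
    trial = dict(base, **{name: base[name] * 1.1})
    if abs(check_cell(trial) - check_cell(base)) > 1e-9:
        suspects.append(name)

print(sorted(suspects))  # → ['cost1', 'cost2']
```

Only the cost inputs move the check cell, pointing the modeller straight at the cost formulae; in Excel, TopRank produces the analogous input list automatically.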

These are just a couple of examples of how TopRank can be useful as an error-checking tool in general Excel modelling, irrespective of whether further sensitivity analysis (or even risk analysis with @RISK) is required or not. Get your copy now while stocks last!

Dr. Michael Rees

Director of Training and Consulting

# Risk Analysis, Optimization, and the Special Sauce

Okay, let’s get back to baseball betting investor Clay Graham and his words to live by: "picking a winner is not the same thing as making a smart bet."

To pick a *winner*, "you need to choose the team with the highest probability of winning."

To make a *smart bet*, "you put your money where the expected return on investment, your ROI, is greater than zero."
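A small worked example makes the distinction concrete (the probabilities and odds below are illustrative numbers, not from Clay's site). With decimal odds, a winning bet returns the odds times the stake, so the expected ROI per unit staked is win probability times odds, minus one:

```python
# Expected ROI of a bet quoted in decimal odds (illustrative figures only).

def expected_roi(win_prob, decimal_odds):
    """Expected return per unit staked: win_prob * payout - stake."""
    return win_prob * decimal_odds - 1.0

# A 60% favourite at short odds of 1.55 is the likely winner...
favorite = expected_roi(0.60, 1.55)   # 0.60 * 1.55 - 1 = -0.07
# ...but a 45% underdog at odds of 2.40 is the smart bet.
underdog = expected_roi(0.45, 2.40)   # 0.45 * 2.40 - 1 = 0.08
print(round(favorite, 2), round(underdog, 2))  # → -0.07 0.08
```

The team most likely to win loses money on average at those odds, while the likely loser has positive expected ROI — exactly Clay's point.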

# Speeding up the Convergence of Optimization Problems: Part II

In an earlier blog, I mentioned a few heuristic methods to speed up the convergence of models when using genetic algorithms for optimization modelling, such as those that are in Evolver and RISKOptimizer. In this post, I add a few additional points that are specific to RISKOptimizer. These include:

- Set the number of iterations according to the optimization objectives. For example, the mean of a distribution can often be estimated to reasonable accuracy with only 30 to 50 iterations. An initial run of the model with such a low number of iterations can be a useful way of generating a reasonable first estimate of the optimum input combination. Where there are other optimization objectives (e.g. relating to the P10 or P90 of the output distribution), these statistics also converge fairly quickly (though not as quickly as the mean), so similar methods can still be applied. In all of this, I refer not to formal statistical convergence, but to “decision-making convergence”, that is, to the ultimate use of the model’s output for decision purposes.
- Stop the procedure and increase the number of iterations as more accuracy is desired. For example, small numbers of iterations can be used to generate a reasonable set of trial values quite quickly; as more accuracy is desired, the number of iterations can be increased (say a first sequence with 50 iterations per simulation, stopping the optimization at the best values, increasing to 500 iterations, repeating, and then perhaps increasing to 5,000 iterations once we are very close to the optimal point).
- As mentioned in the earlier blog on Evolver, problems with hard constraints are difficult to solve accurately, and this is even more pronounced with RISKOptimizer: statistical error due to the simulation sampling can often result in truly valid solutions being rejected, something that is, by definition, more likely to happen precisely as we approach the optimum point. The use of soft constraints or of model reformulation approaches (e.g. making a constraint into a model property) can be powerful related techniques.
- Convert to an Evolver problem. A powerful feature of version 5 of @RISK is that risk distributions may be swapped out. This means, for example, that uncertainty could be removed from a RISKOptimizer model, so that the optimization can be done using Evolver (thus removing the need to perform a complete simulation for each set of trial values). The Evolver solution can then be used as a starting point for a RISKOptimizer solution (by swapping the @RISK functions back into the Evolver-optimised model). In many cases the solution to such a static optimization will be similar to the solution of the problem containing uncertainty, or at least provide a good basis for the search for such a solution.
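The first two points — estimate coarsely at first, then escalate the iteration count near the incumbent — can be sketched with a toy example. This is not RISKOptimizer's genetic algorithm; it is a simple random search over a noisy objective (all names and numbers are invented for the illustration), showing how a 50 → 500 → 5,000 iteration schedule spends most of the simulation budget only once the search is already close.

```python
import random

# Toy sketch of the escalating-iterations heuristic: estimate a simulated
# objective's mean with few iterations early on, then re-search near the
# incumbent with progressively larger iteration counts.

def noisy_objective(x, rng):
    # The true mean is (x - 3)^2; simulation adds sampling noise.
    return (x - 3.0) ** 2 + rng.gauss(0.0, 1.0)

def estimated_mean(x, n_iterations, rng):
    return sum(noisy_objective(x, rng) for _ in range(n_iterations)) / n_iterations

rng = random.Random(42)
best_x, radius = 0.0, 5.0
for n_iter in (50, 500, 5000):           # the escalating schedule from the text
    candidates = [best_x] + [best_x + rng.uniform(-radius, radius) for _ in range(20)]
    best_x = min(candidates, key=lambda x: estimated_mean(x, n_iter, rng))
    radius /= 4.0                        # tighten the search around the incumbent

print(abs(best_x - 3.0) < 1.0)  # → True: the staged runs home in on x = 3
```

The 50-iteration stage is noisy (standard error of the mean is roughly 1/√50 ≈ 0.14) but cheap, and it is accurate enough to discard most of the search space before the expensive 5,000-iteration stage runs.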

Dr. Michael Rees

Director of Training and Consulting

# August 2009 – Worldwide Training Schedule

Palisade Training services show you how to apply @RISK and the DecisionTools Suite to real-life problems, maximizing your software investment. All seminars include free step-by-step books and multimedia training CDs that include dozens of example models.

**North America**

- 4-5 August 2009, Saskatoon, SK: Decision-Making and Quantitative Risk Analysis using @RISK
- 4-6 August 2009, Saskatoon, SK: Decision-Making & Quantitative Risk Analysis using the DecisionTools Suite
- 12-13 August 2009, Charlotte, NC: Decision-Making and Quantitative Risk Analysis using @RISK
- 19-20 August 2009, Denver, CO: Decision-Making and Quantitative Risk Analysis using @RISK

**Asia-Pacific**

- 26-27 August 2009, Sydney: Decision-Making and Quantitative Risk Analysis using @RISK
- 28 August 2009, Sydney: Decision-Making and Quantitative Risk Analysis using the DecisionTools Suite

**Latin-America**

- 12-14 August 2009, Santiago, Chile: Risk Assessment for Beginning/Intermediate Users (in Spanish)
- 26-28 August 2009, Guatemala, Guatemala: Risk Assessment for Beginning/Intermediate Users (in Spanish)

# Speeding up the Convergence of Optimization Problems: Part I

The use of genetic algorithms to solve optimization problems has a number of advantages. These include the ability to tackle essentially any problem, such as those which are non-linear, or those with multiple local optima but a single global optimum. Palisade’s Evolver can be used to deal with such situations in static Excel modelling, whereas RISKOptimizer can be applied to models containing uncertainties.

One main drawback of approaches based on genetic algorithms is their slow convergence in some situations. In this blog, I mention a few heuristic methods to speed up convergence when using Evolver (which also apply to RISKOptimizer); another posting deals with some items specific to RISKOptimizer.

Concerning Evolver, a few methods can sometimes help, including:

- In problems without constraints, recognising that the solution surface around an optimal point is often quite flat, so that for many practical purposes an approximate solution is essentially as useful as the true solution.
- In problems with constraints, trial values very close to the optimum set of values are very likely to create situations in which constraints are not satisfied, even though they come very close to being satisfied. As well as accepting an approximate solution as sufficient, it can be more appropriate to replace hard constraints (which must be satisfied) with soft ones (in which a penalty function is applied when constraints are not satisfied).
- Reformulating a continuous optimization problem (e.g. what weights to apply to each asset class in a portfolio situation) as a discrete one (e.g. in which trial weights are all multiples of 10%). Once an approximate solution has been found, the granularity can be increased.
- Stopping the algorithm to adjust the settings, and then restarting, can sometimes also help. A typical adjustment might be a large (but temporary) increase in the mutation rate.
- If all else fails, you may need a faster computer!
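The soft-constraint idea from the list above can be sketched in a few lines. This is not Evolver's genetic algorithm — it is a plain random search over an invented toy problem (maximize x + y subject to x² + y² ≤ 1) — but it shows the mechanics: instead of rejecting infeasible trials outright, a penalty proportional to the violation is subtracted, so near-feasible trials still guide the search toward the constrained optimum.

```python
import random

# Sketch of a soft constraint via a penalty function: trial values that
# violate x^2 + y^2 <= 1 are penalised in proportion to the violation
# rather than discarded.

def penalized_score(x, y, penalty=1000.0):
    violation = max(0.0, x * x + y * y - 1.0)
    return (x + y) - penalty * violation ** 2

rng = random.Random(7)
best = max(
    ((rng.uniform(-2, 2), rng.uniform(-2, 2)) for _ in range(20000)),
    key=lambda p: penalized_score(*p),
)

# True constrained optimum is x = y = 1/sqrt(2), objective about 1.414.
print(1.3 < best[0] + best[1] < 1.45)  # → True
```

The quadratic penalty keeps the selected point essentially on the feasible boundary: any appreciable violation wipes out the small gain in x + y from stepping outside the circle.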

Dr. Michael Rees

Director of Training and Consulting

# Excel Best Practices and Optimization Modelling presented at Munich Risk Congress

A highlight of last month was Palisade’s Summer Risk Congress in Munich, Germany. This one-day event involved case studies presented by customers, presentations of new features in @RISK 5.5, and my own presentations, which covered the use of the DecisionTools Suite for optimization modelling and modelling best practices in Excel and @RISK.

I have previously written several postings on best practices in Excel modelling, so won’t mention this further here. As well as highlighting some key linkages between the topics of risk analysis, decision-making under uncertainty, real options and optimization, my optimization talk tried to show that the theme of optimization runs through many of the DecisionTools Suite products, including TopRank, PrecisionTree, @RISK, Evolver and RISKOptimizer (and is not just a subject relating to the latter two products).

The talk also covered some of the challenges and common errors made when formulating optimization models in practice. These include failure to capture the optimization trade-off within the logic of the model; Excel formulae that are often more complex than in more standard models (the model has to be flexible, and valid, over a wide range of input variable combinations); and incorrect optimization objectives (such as trying to optimize non-controllable inputs).

Finally, I mentioned some techniques to improve the speed of convergence of optimization models using genetic algorithms (such as those that are in Evolver and RISKOptimizer); this topic is covered in two later blog postings.

Dr. Michael Rees

Director of Training and Consulting

# Risk Analysis for Real-Life Bettors

The analogy between gambling and investing has been made so often by folks who do risk analysis that it has become trite. But this week I heard from a guy who actually lives the analogy.

"I’m not a gambler, I’m an investor," Clay Graham protests regularly when he describes his approach to sports betting. Clay is a consultant who specializes in quantitative analysis, and somewhere in his resume is an item that says he was doing mathematical quality control before the term "Six Sigma" was invented. He is also the proprietor of the website BaseballWon, which exists to advise bettors on return on baseball bets.

"Is the *money* real? I mean, do you actually put your own money into this?"

. . . . more from Clay tomorrow.

# Fast-Moving Goods, Risk Analysis, and Spend Control

It seems reasonable that in a recession, cost containment would become a hot topic, and in the world of supply chain management, it’s known as "spend control."

Del Monte looks at the problem from the *supplier’s* standpoint. The risk assessment models will churn out a range of what potential commodity costs could be, so the procurement folks can build caps and locks into their contracts.

All of this requires some analytical knowhow, and the Spendmatters blog promises a primer on the DelMonte approach. It’s worth following because it ain’t easy, it ain’t simple, but it’s how you get to spend control.