Managing Mother Nature: Planning for Extreme Weather Events

By Randy Heffernan

Taken separately, the most severe natural events are unlikely to occur. However, Mother Nature can take many forms, and her wrath is notoriously difficult to predict accurately, even with the best practices and software tools used by meteorologists. It is that unpredictability that makes such events so destructive.
But severe weather is only one part of the risk equation. Industries must manage weather risk on a day-to-day basis. Despite the severity of extreme events and the frequency of lesser events, risk analysis of weather is still rarely given the prominence it deserves. It is crucial to determine what risks emerge when various types of weather conditions strike. Organizations should take a more strategic approach to this type of risk. Increasingly, companies are adopting more sophisticated techniques to specifically account for, rather than ignore, the inherent uncertainty and unpredictability that characterize weather risk.
Techniques such as quantitative risk management (QRM) and decision-making under uncertainty (DMU) are widely used to analyze uncertain outcomes. Both require thinking in terms of numbers and probabilities: recognizing that uncertainty exists in nearly every decision and accounting for it explicitly. Quantitative risk analysis (QRA) is becoming an increasingly important tool in planning for weather risk.
One helpful example of QRA is Monte Carlo simulation. An analytical technique that has been around since World War II, Monte Carlo simulation is a computational method that, in simple terms, repeatedly samples from the range of possible outcomes and estimates the probability of each outcome occurring. It is being applied by a wide range of private companies and government agencies to formulate mitigation strategies.
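As a minimal sketch of the idea (every distribution and parameter value below is hypothetical, chosen only for illustration), a Monte Carlo simulation of annual storm losses might look like this:

```python
import numpy as np

rng = np.random.default_rng(42)
N = 100_000  # number of simulated "years"

# Hypothetical inputs: number of severe storms per year (Poisson),
# and repair cost per storm (lognormal, giving a heavy right tail).
storms_per_year = rng.poisson(lam=3, size=N)
annual_loss = np.array([
    rng.lognormal(mean=11.5, sigma=0.8, size=k).sum() if k > 0 else 0.0
    for k in storms_per_year
])

# Instead of a single "expected loss", read off the whole distribution.
print(f"mean annual loss: ${annual_loss.mean():,.0f}")
print(f"95th percentile:  ${np.percentile(annual_loss, 95):,.0f}")
print(f"P(loss > $1M):    {(annual_loss > 1_000_000).mean():.1%}")
```

Each trial draws one possible year; over many trials the spread of annual_loss approximates the full distribution of outcomes, not just its average.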
In traditional risk-assessment models, actuarial experts confer on what the best, worst and most likely scenarios are for a given variable. But this “three-point estimate” method does not assess how probable each scenario is: one has no idea whether a given case has a one percent, a 50 percent or a 99 percent chance of occurring.
Monte Carlo simulation fixes this problem by allowing decision-makers to use ranges of values, rather than single-point estimates, to represent uncertain quantities. These ranges are known as probability distributions, and include commonly used ones like the normal, or “bell curve,” and the uniform, under which every value in the range has an equal chance of occurring. Depending on the distribution, some parts of the range are more likely to occur than others.
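For instance, a three-point estimate maps naturally onto a triangular distribution, while the uniform and normal encode different beliefs about the same quantity. A brief sketch (the cost figures are made up):

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100_000

# Hypothetical expert estimates for a repair cost: best / most likely / worst.
best, likely, worst = 50_000, 120_000, 400_000

tri = rng.triangular(best, likely, worst, size=N)   # peaks at "most likely"
uni = rng.uniform(best, worst, size=N)              # every value equally likely
nrm = rng.normal(loc=likely, scale=40_000, size=N)  # symmetric bell curve

for name, sample in [("triangular", tri), ("uniform", uni), ("normal", nrm)]:
    share = (sample > 250_000).mean()
    print(f"{name:10s} mean = {sample.mean():9,.0f}  P(cost > 250k) = {share:.1%}")
```

The three choices produce noticeably different tail probabilities from the same expert range, which is exactly the judgment a point estimate hides.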
Monte Carlo simulation also allows analysts to identify the drivers of risk and rank them from most to least important based on the impact each has on the bottom line. Decision-makers can then put resources into mitigating the most important risks rather than guessing.
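One common way to produce such a ranking is a sensitivity analysis: correlate each uncertain input with the simulated bottom line and sort by the strength of the relationship. A sketch, again with hypothetical inputs:

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
N = 50_000

# Hypothetical uncertain inputs feeding a total-cost model.
inputs = {
    "wind_damage":   rng.lognormal(10.0, 1.0, N),
    "flood_damage":  rng.lognormal(11.0, 0.5, N),
    "downtime_days": rng.triangular(1, 5, 30, N),
}
total_cost = (inputs["wind_damage"] + inputs["flood_damage"]
              + 20_000 * inputs["downtime_days"])

# Rank each input by its (rank) correlation with the bottom line.
correlations = {name: spearmanr(x, total_cost)[0] for name, x in inputs.items()}
for name, rho in sorted(correlations.items(), key=lambda kv: -abs(kv[1])):
    print(f"{name:14s} rank correlation = {rho:+.2f}")
```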
In early 2009, critics alleged that the method had failed because the financial crisis happened while big banks were using Monte Carlo simulation. The flaw in that argument is that a model is only as good as its practitioner; the old adage “garbage in, garbage out” still applies. If you model a situation inaccurately, by using the wrong distribution functions or misrepresenting the likelihood of extreme events, you will get the wrong results.
Take the example of modeling stock market behavior. Mathematically, using the normal distribution to model the market implies that crashes like those of 2008 or 1987 should occur only once every 300,000 years. Clearly this is not the case, and yet the normal distribution was – and still is – commonly used for this purpose, mainly for convenience.
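The arithmetic behind claims like this is easy to reproduce. With illustrative parameters (a daily volatility of about 1%; these are not the article's figures), the normal model rates a 1987-scale one-day drop as essentially impossible:

```python
from scipy.stats import norm

# Illustrative assumption: daily returns ~ Normal(mean = 0.03%, std = 1%).
mu, sigma = 0.0003, 0.01

drop = -0.20  # Black Monday, 1987: roughly a 20% one-day fall
p = norm.cdf(drop, loc=mu, scale=sigma)  # probability of a day at least this bad

trading_days_per_year = 252
print(f"P(one-day drop >= 20%) = {p:.2e}")
print(f"expected once every {1 / (p * trading_days_per_year):.2e} years")
```

The exact recurrence figure depends entirely on the assumed volatility and crash size; the point is that under the normal model such events effectively never happen, which history contradicts.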
So what does this have to do with the weather? In the southern tier of New York State, extreme flooding occurred in the 1930s, the 1950s, in 1972, and then again in 2011, when the Susquehanna and Chemung rivers converged and put houses under 10 feet of water. These extreme events, like significant stock market dips, are not that infrequent. People are starting to realize they cannot simply treat the average as the default or expected scenario. Clearly the risks organizations face are not limited to the financial markets.
We have to pay attention to the way we construct risk models of all types. It is important to look at all the outcomes and to re-examine the assumptions underlying the models. In training classes, the questions heard most frequently include: Which probability distribution do I use? How do I properly model my situation? Which analyses are worthwhile? As a result, we spend a significant amount of time helping professionals across industries construct proper risk models. Running a simulation is easy; making sure your model is accurate is the most important step.
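A common first step in answering the distribution question is to fit several candidate distributions to historical data and compare how well each matches, paying particular attention to the tails. A minimal sketch, using synthetic data as a stand-in for real observations:

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(2)
# Synthetic stand-in for historical loss data with a heavy right tail.
data = rng.lognormal(mean=0.0, sigma=0.9, size=1_000)

# Fit candidates and compare with the Kolmogorov-Smirnov statistic.
for label, dist in [("normal", stats.norm),
                    ("lognormal", stats.lognorm),
                    ("gamma", stats.gamma)]:
    params = dist.fit(data)
    ks_stat, _ = stats.kstest(data, dist.name, args=params)
    print(f"{label:10s} KS statistic = {ks_stat:.3f}  (smaller = better fit)")
```

(The KS statistic computed with fitted parameters is optimistic, so treat it as a rough comparison rather than a formal test.)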


Posted on Wednesday, February 15, 2012