Seven Rules for Risk and Uncertainty

Curtis Faith

Saving the planet will not be easy. We know what the most urgent problems are, but there is too much uncertainty to predict any particular outcomes or events with precision.

So what are we to do? How do we formulate strategy and plans to avert or mitigate disasters? How do we plan for problems when we don’t know exactly what form they will take?

Seven rules

As I noted in my previous blog post, my experience as a trader and an entrepreneur, together with what I learned about how emergency room (ER) doctors manage risk and uncertainty, convinced me that those who confront uncertainty as part of their daily lives use a similar strategy for managing it. Further, the way they make decisions and develop plans for the future is very relevant to the Azimuth Project.

My second book, Inside the Mind of the Turtles, described this strategy in detail. In the book, I outlined seven rules for managing risk and uncertainty. They are:

  • Overcome fear,
  • Remain flexible,
  • Take reasoned risks,
  • Prepare to be wrong,
  • Actively seek reality,
  • Respond quickly to change, and
  • Focus on decisions, not outcomes.

Most of you are familiar with many of the aspects of life-or-death emergencies, having experienced them when you or a loved one has been seriously sick or injured. So it may be a little easier to understand these rules if you examine them from the perspective of an ER doctor.

Overcome fear

Risk is everywhere in the ER. You can’t avoid it. Do nothing, and the patient may die. Do the wrong thing, and the patient may die. Do the right thing, and the patient still may die. You can’t avoid risk.

At times, there may be so many patients requiring assistance that it becomes impossible to give them all care. Yet decisions must be made. The time element is critical to emergency care, and this greatly increases the risk associated with delay or making the wrong decisions. The doctor must make decisions quickly when the ER is busy, and these decisions are extremely important. Unlike in trading or in a startup, in the ER, mistakes can kill someone.

To be successful as an ER doctor, you must be able to handle life-or-death decisions every day. You must have confidence in your own abilities and your own judgment to act quickly when there is very little time. No doctor who is afraid to make life-or-death decisions stays in the ER for very long.

Remain flexible

One of the hallmarks of an ER facility is the ability to act very quickly to address virtually any type of critical medical need. A well-equipped ER will have diagnostic and surgical facilities onsite, defibrillators for heart attack victims, and even surgical tools for those times when a patient may not survive the trip up the elevator to a full surgical suite.

Another way that an ER facility organizes for flexibility is by making sure that there are sufficient doctors with a broad range of specialties available. ERs don’t staff for the average workload; they staff for the maximum expected workload. They keep a strategic reserve of doctors and nurses available to assist in case things get extremely busy.

Take reasoned risks

Triage is one way of managing the risks associated with the uncertainty of medical diagnoses and treatments. Triage is a way of sorting patients so that those who require immediate assistance are helped first, those in potentially critical situations next, and those in no imminent danger of further damage last. For example, if you go to the ER with a broken leg, you may or may not be the first person in line for treatment. If a shooting victim comes in, you will be shuffled back in line. Your injury, while serious, can wait because you are in little danger of dying, and a few hours’ delay in setting a bone is unlikely to cause permanent damage.
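In software terms, triage is a priority queue. Here is a minimal sketch in Python; the category names and their ordering are illustrative, not clinical guidance.

```python
import heapq

# Illustrative triage categories: lower number means treated sooner.
PRIORITY = {"immediate": 0, "critical": 1, "stable": 2}

class TriageQueue:
    def __init__(self):
        self._heap = []
        self._count = 0  # tie-breaker: equal priorities keep arrival order

    def admit(self, patient, category):
        heapq.heappush(self._heap, (PRIORITY[category], self._count, patient))
        self._count += 1

    def next_patient(self):
        return heapq.heappop(self._heap)[2]

er = TriageQueue()
er.admit("broken leg", "stable")
er.admit("gunshot wound", "immediate")
print(er.next_patient())  # "gunshot wound" -- the broken leg waits
```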

Diagnosis itself is one of the most important aspects of emergency treatment. The wrong diagnosis can kill a patient. The right diagnosis can save a life. Yet diagnosis is messy. There are no right answers, only probable answers.

Doctors weigh the probability of particular diseases or injuries against the seriousness of outcomes for the likely conditions and the time sensitivity of a given treatment. Some problems require immediate care, whereas some are less urgent. Good doctors can quickly evaluate the symptoms and results of diagnostic tests to deliver the best diagnosis. The diagnosis may be wrong, but a good doctor will evaluate the factors to determine the most likely one and will continue to run tests to eliminate rarer but potentially more serious problems in time to effect relevant treatment.
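One way to picture this weighing is as a rough expected-harm score: the probability of a condition times its severity if untreated, scaled up when treatment is time-sensitive. The sketch below is a toy model; every number in it is invented for illustration.

```python
# Rank candidate diagnoses by a crude expected-harm score:
# probability of the condition x severity if untreated x time pressure.
# All values are invented for illustration, not clinical data.
candidates = [
    # (diagnosis, probability, severity 0-10, urgency 0-1)
    ("influenza",            0.80,  2, 0.1),
    ("bacterial meningitis", 0.05, 10, 0.9),
    ("sinus infection",      0.15,  3, 0.2),
]

def expected_harm(prob, severity, urgency):
    return prob * severity * (1 + urgency)

for name, p, s, u in sorted(candidates, key=lambda c: -expected_harm(*c[1:])):
    print(f"{name:22s} expected harm = {expected_harm(p, s, u):.2f}")

# Meningitis outranks the more probable sinus infection despite its low
# probability, which is why rare but serious conditions get ruled out
# even while the likely diagnosis is being treated.
```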

Prepare to be wrong

A preliminary diagnosis may be wrong; the onset of more serious symptoms may indicate that a problem is more urgent than anticipated initially. Doctors know this. This is why they and their staff continuously monitor the health status of their patients.

Often, while the initial diagnosis is being treated, doctors will order additional tests to verify the correctness of that diagnosis. They know they can be wrong in their assessment, so they allow for this by checking for alternatives even while treating for the current diagnosis.

More than perhaps any other experts in uncertainty, doctors understand the ways that uncertainty can manifest itself. As a profession, doctors have almost completely mapped the current thinking in medicine into a large tree of objective and even subjective tests that can be run to confirm or eliminate a particular diagnosis. So a doctor knows exactly how to tell if she is wrong, and what to do in that event, almost every time she makes a diagnosis. Doctors also know which other, less common medical problems can exhibit the same symptoms as the initial diagnosis.

For example, if a patient comes in with a medium-grade fever, a doctor normally will check the ears, nose, sinuses, lymph nodes, and breathing to eliminate organ-specific issues and then probably issue a diagnosis of a flu infection. If the fever rises above 102 °F (39 °C), the doctor probably will start running some tests to eliminate more serious problems, such as a bacterial infection or viral meningitis.
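That branching logic is a small fragment of the "tree of tests" described above. Here is a toy sketch with illustrative thresholds, not medical advice:

```python
# Each branch is a symptom or test result; each leaf is an action.
# Thresholds and branches are illustrative only.
def fever_workup(temp_f, stiff_neck):
    if temp_f < 100.4:                # common clinical fever threshold
        return "no fever workup needed"
    if stiff_neck:
        return "order lumbar puncture to rule out meningitis"
    if temp_f > 102:
        return "run tests for bacterial infection or viral meningitis"
    return "presume flu: rest and fluids, keep monitoring"

print(fever_workup(101.2, stiff_neck=False))  # presume flu
print(fever_workup(102.5, stiff_neck=True))   # escalate immediately
```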

Actively seek reality

Since doctors are not 100 percent certain that the diagnosis they have made for a given patient is correct, they continue to monitor that patient’s health. If the patient is in serious danger, he will be monitored continuously. Anyone who visits a hospital emergency room will notice all the various monitors and diagnostic machines. There are ECG monitors to check the general health of the heart, pulse monitors, blood oxygenation testers, etc. The ER staff always has up-to-the-second status for their patients. These immediate readings alert doctors and nurses quickly to changes indicating a worsening condition.

Once the monitors have been set up (generally by the nursing staff), ER doctors double-check their diagnosis by running tests to rule out more serious illnesses or injuries that may be less common. The more serious the patient’s condition, the more tests will be run. A small error in diagnosis may cost a patient’s life if she suffers from a serious condition with poor vital signs such as very low blood pressure or an erratic pulse. A large error in diagnosis may not matter for a patient who is relatively healthy. So more time and effort are spent to verify the diagnoses of patients with more serious conditions, and less time and effort are spent verifying the diagnoses of stable patients.

Actively seeking reality is extremely important in emergency medicine because initial diagnoses are likely to be in error to some degree a significant percentage of the time. Since misdiagnoses can kill people, much time and effort are spent to verify and check a diagnosis and to make sure that a patient does not regress.

Respond quickly to change

If caught early, a misdiagnosis or a significant change in a patient’s condition need not be cause for worry. If caught late, it can mean serious complications, extended hospitalization, or even death. For critical illness and injury, time is very important.

The entire point of closely monitoring a patient is to enable the doctor to quickly determine if there is something more serious wrong than was first evident. A doctor’s initial diagnosis comes from the symptoms that are readily apparent. A good doctor knows that there may be a more serious condition causing those symptoms. More serious conditions often warrant different treatment. Sometimes a patient’s condition is serious enough that a few hours can mean the difference between life and death or between full recovery and permanent brain damage.

For example, a mother comes into the ER with her preteen son, who is running a fever of 102 °F (39 °C), has a headache, and is vomiting. These are most likely symptoms of a flu infection that is not particularly emergent. The treatment for the flu is normally just bed rest and drinking lots of fluids. So, if the ER is busy, the flu patient normally will wait as patients with more urgent problems get care.

The addition of one more symptom may change the treatment completely. If the patient who may have been sitting in the ER waiting room starts complaining of a stiff, painful neck in addition to the flu symptoms, this may be indicative of spinal meningitis, which is a life-threatening disease if not treated quickly. The attending physician likely will order an immediate lumbar puncture (also called a spinal tap) to examine the spinal fluid to see if it is infected with the organisms that cause spinal meningitis. If it is a bacterial infection, treatment with antibiotics will begin right away. A few hours’ difference can save a life in the case of bacterial spinal meningitis.

The important thing to remember is that a good doctor knows what to look for that will indicate a more serious condition than was indicated initially. She also will respond very quickly to administer appropriate treatment when the symptoms or tests indicate a more serious condition. A good doctor is not afraid of being wrong. A good doctor is looking for any sign that she might have been wrong so that she can help the patient who has a more serious disease in time to treat it so the patient can recover completely.

Focus on decisions, not outcomes

One of the difficulties facing ER doctors because of the uncertainty of medical diagnoses and treatments is the fact that a doctor can do everything correctly, and the patient still may die or suffer permanent damage. The doctor might perform perfectly and still lose the patient.

At times, a patient may require risky surgery to save his life. The doctor will weigh the risk of the surgery itself against the risk of alternative treatments. If the surgery will increase the chances of the patient surviving, then the doctor will order the surgery or perform it herself in cases of extreme emergency.

A doctor may make the best decision under the circumstances using the very best information available, and still the patient may die. A good doctor will evaluate the decision not on the basis of how it turns out but according to the relative probabilities of the outcomes themselves. An outcome of a dead patient does not mean that surgery was a mistake. Likewise, it may be that the surgery should not have been performed even when it has a successful outcome.

If ER doctors evaluated their decisions on the basis of outcomes, it would lead to bad medicine. For example, if a particular surgery has a 10 percent mortality rate, meaning that 10 percent of the patients who have the surgery die soon after, this is risky surgery. If a patient has an injury that will kill him 60 percent of the time without that surgery, then the correct action is to have the surgery performed, because the patient is six times less likely to die with it than without it (a 90 percent chance of survival versus 40 percent). If an ER doctor orders the surgery and it is performed without error, the patient still may die. This does not change the fact that, absent any new information, the decision to have the surgery was correct.
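The arithmetic behind this example is worth spelling out; here is the comparison using the numbers above.

```python
# Compare the two options by the probabilities known at decision time.
p_death_with_surgery = 0.10   # 10 percent surgical mortality
p_death_without      = 0.60   # injury kills 60 percent if untreated

print(p_death_without / p_death_with_surgery)  # 6.0: death is six times
                                               # more likely without surgery
print(1 - p_death_with_surgery)                # 0.9 survival with surgery
print(1 - p_death_without)                     # 0.4 survival without
# Even with the correct choice, one patient in ten still dies. That
# single outcome says nothing about whether the decision was right.
```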

The inherent uncertainty of diagnosis and treatment means that many times the right treatment will have a bad outcome. A good doctor knows this and will continue prescribing the best possible treatment even when a few rare examples cross her path.

Relevance for Azimuth

Like an ER doctor trying to diagnose a patient in critical condition, we don’t have much time. We need to prepare ourselves so that when problems arise and disaster strikes, we can quickly determine what’s wrong, stabilize the patient, make sure we have found all the problems, monitor progress, and maintain vigilance until the patient has recovered.

The sheer complexity of the issues, and the scope of the problems that endanger the planet and life on it, ensure that there will never be enough information to make a “correct” analysis, or one single foolproof plan of action. Except in the very broadest terms, we can’t know what the future will bring so we need to build plans that acknowledge that very real limitation.

Rather than pretend that we know more than it is possible to know, we should embrace the uncertainty. We need to build flexible organizations and structures so that we are prepared to act no matter what happens. We need to build flexible plans that can accommodate change.

We need to build the capability to acquire and assimilate an understanding of reality as it unfolds. We need to seek the truth about our condition and likely prospects for the future.

And we need to be willing to change our minds when circumstances indicate that our judgments have been wrong.

Being ready for any potential scenario will not be easy. It will require a tremendous effort on the part of a global network of scientists, engineers, and others who are interested in saving the planet.

I hope that you consider joining our effort.

20 Responses to Seven Rules for Risk and Uncertainty

  1. Tim van Beek says:

    There is an interesting book about common mistakes that people make when they try to control complex systems:

    – Dietrich Dörner: “The Logic Of Failure: Recognizing And Avoiding Error In Complex Situations”

    Dietrich Dörner is a professor of cognitive behavior and psychology. In his book he describes among other things an experiment, where the probands had to come up with a plan to improve the situation in a poor country, much like a minister of development aid of some rich western country would today. Behind the curtain was a complex system of coupled differential equations modeling some typical aspects of the target country, with computer operators feeding the model with the input of the probands.

    Of course one could engage in endless discussions about the validity of the computer model, but it is nevertheless possible to identify typical mistakes made by most probands.

    Example: After they had developed a strategy, most probands did not ask about the current development of the target country (they were free to ask for any kind of information they wanted at any time). Instead they waited for the model run to complete (some decades of simulated time) and were shocked when they were told that their decisions had some unwanted long-term consequences (which were obvious long before the run was completed, but the probands simply did not ask).

    • Curtis Faith says:

      Interesting example Tim. It fits well with my experience of people.

      Most people spend all their time trying to figure out the “right” answer and then almost no time after making a decision ensuring that the answer they came up with was indeed “right.”

  2. John F says:

    The mathematical statistics of extreme values has been worked out
    http://en.wikipedia.org/wiki/Extreme_value_theory
    Short version, the Gumbel distribution is to extreme values what the Gaussian distribution is to average values. But even so, expected maxima are difficult to calculate properly. Otherwise gamblers would always know when to quit while they’re ahead.
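    A quick numerical sketch of that correspondence (assuming numpy and scipy are available): the yearly maxima of daily Gaussian observations are approximately Gumbel-distributed.

    ```python
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(0)
    # 10,000 "years", each the maximum of 365 daily Gaussian observations
    maxima = rng.standard_normal((10_000, 365)).max(axis=1)

    loc, scale = stats.gumbel_r.fit(maxima)   # scipy returns (loc, scale)
    q99_emp = np.quantile(maxima, 0.99)       # empirical 1-in-100 annual maximum
    q99_fit = stats.gumbel_r.ppf(0.99, loc, scale)  # same quantile from the fit
    print(f"99th-percentile maximum: empirical {q99_emp:.2f}, Gumbel {q99_fit:.2f}")
    ```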

    The science of decisions has been highly developed, mostly through operations research
    http://en.wikipedia.org/wiki/Operations_research
    The most used concept involves the ROC curve
    http://en.wikipedia.org/wiki/Receiver_operating_characteristic
    originally developed for military reasons.
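    To make the ROC idea concrete, here is a minimal sketch on synthetic test scores (the score distributions are invented): sweep a decision threshold and trace the true-positive rate against the false-positive rate.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    healthy  = rng.normal(0.0, 1.0, 500)   # test scores, disease absent
    diseased = rng.normal(1.0, 1.0, 500)   # test scores, disease present

    # Sweep the decision threshold from high to low to trace the ROC curve.
    thresholds = np.sort(np.concatenate([healthy, diseased]))[::-1]
    tpr = np.array([(diseased >= t).mean() for t in thresholds])
    fpr = np.array([(healthy  >= t).mean() for t in thresholds])

    # Area under the curve via the Mann-Whitney estimator.
    auc = (diseased[:, None] > healthy[None, :]).mean()
    print(f"AUC = {auc:.3f}")  # about 0.76 for unit-variance classes 1 sigma apart
    ```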

    Among the most active people in risk management in environmental contexts is Igor Linkov, e.g. Linkov et al. (eds), “Real-time and deliberative decision making”, Springer, 2008.
    http://www.springer.com/environment/environmental+management/book/978-1-4020-9025-7

    One place to get started for risk in ecosystem restoration is Yoe et al., “Addressing risk and uncertainty in ecological restoration projects.” ERDC, 2010.
    http://www.dtic.mil/cgi-bin/GetTRDoc?Location=U2&doc=GetTRDoc.pdf&AD=ADA529184

    • Curtis Faith says:

      Thanks for the links John.

      Much of the science and mathematics of decisions under uncertainty applies to problems with a known or estimable variance. The type of uncertainty we encountered in trading, and that applies to sustainability issues, is much more problematic. According to Nathan Urban in our forum discussion on the topic, it is sometimes called “deep uncertainty.”

      For example, many of the effects we envision for global warming depend on a certain rate of energy consumption. That consumption depends, at least partially, on energy prices and the psychological response of consumers to those prices. We can’t predict the price, so we can’t really predict the consumption trends. We can extrapolate, we can guess, but we can’t predict with any defined probability.

      There are also cases where extreme weather becomes more likely. The geographical location of the extreme weather is also important. If the U.S. Midwest and Southeast had suffered the same drought and heat wave that Russia just suffered, this might have had a very significant effect on the response of U.S. politicians. It would be harder for people to believe that global warming is a myth or a propaganda tool of the left. The U.S. is the biggest consumer of energy and also one of the key holdouts for a global treaty on carbon emissions, so an adverse event happening in the U.S. will have a greater overall global effect than one in Pakistan or Russia, unfortunately.

      In the future, if a serious adverse weather event that might have a 1 in 20 chance of occurring in any given year happens next year, that will have greater effect on the outcome 50 years from now than if the same adverse event happened 30 years from now. If it is an event that is tied to heat in some obvious way—like the heatwave in Russia, for example—that will have a greater psychological effect than if it isn’t so easily tied to heat.

      So the level of uncertainty for sustainability issues is necessarily much higher at the macro level than we would like because the complexity exceeds our ability to model reliably.

      • Nathan Urban says:

        In the presence of deep uncertainty it’s often preferable to seek policies that are “robust”, in the sense that they work well (if not perfectly) over a range of possible assumptions, without having to explicitly assign correct probabilities to all outcomes. Rob Lempert at RAND, among others, has worked on robust decision-making with respect to climate change. Some examples are this preprint and this paper:

        • David McInerney, Robert Lempert and Klaus Keller, What are robust strategies in the face of uncertain climate threshold responses?.

        • Robert J. Lempert and Myles T. Collins, Managing the risk of uncertain threshold responses: comparison of robust, optimum, and precautionary approaches, Risk Analysis 27 (2007), 1009-1026.

        Abstract: Many commentators have suggested the need for new decision analysis approaches to better manage systems with deeply uncertain, poorly characterized risks. Most notably, policy challenges such as abrupt climate change involve potential nonlinear or threshold responses where both the triggering level and subsequent system response are poorly understood. This study uses a simple computer simulation model to compare several alternative frameworks for decision making under uncertainty—optimal expected utility, the precautionary principle, and three different approaches to robust decision making—for addressing the challenge of adding pollution to a lake without triggering unwanted and potentially irreversible eutrophication. The three robust decision approaches—trading some optimal performance for less sensitivity to assumptions, satisficing over a wide range of futures, and keeping options open—are found to identify similar strategies as the most robust choice. This study also suggests that these robust decision approaches offer a quantitative, decision analytic framework that captures the spirit of the precautionary principle while addressing some of its shortcomings. Finally, this study finds that robust strategies may be preferable to optimum strategies when the uncertainty is sufficiently deep and the set of alternative policy options is sufficiently rich.
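        To make the robustness idea concrete, here is a toy minimax-regret calculation of the kind these approaches formalize; the payoff matrix is invented for illustration.

        ```python
        import numpy as np

        # payoff[policy][scenario]; the scenarios span the deep uncertainty
        payoffs = np.array([
            [9.0, 1.0, 2.0],   # "optimal" policy: great if scenario 0 holds
            [6.0, 5.0, 5.0],   # "robust" policy: decent everywhere
            [3.0, 4.0, 6.0],
        ])

        best_per_scenario = payoffs.max(axis=0)
        regret = best_per_scenario - payoffs   # payoff given up in each scenario
        worst_regret = regret.max(axis=1)      # worst case for each policy
        print("minimax-regret choice: policy", worst_regret.argmin())  # -> 1
        ```

        Note that policy 1 is never the best in any single scenario, yet it is the choice least sensitive to which scenario turns out to be true.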

        • Curtis Faith says:

          Yes, robust policies are exactly what we’d like to see enacted. The papers are interesting.

          Robustness is one of the major qualities we looked for in an algorithmic trading strategy: would it work under a diverse set of market conditions? We watched for what McInerney et al. described as “dancing on the top of a needle,” where peak performance is surrounded by a dropoff in performance for small changes in a given parameter value.
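          One way to operationalize that needle-versus-plateau check (a sketch; the performance curves here are synthetic): score each parameter setting by the worst performance among its neighbors, so fragile peaks rank poorly.

          ```python
          import numpy as np

          params  = np.linspace(0, 1, 101)
          needle  = 1.0 * np.exp(-((params - 0.5) / 0.01) ** 2)  # sharp, fragile peak
          plateau = 0.8 * np.exp(-((params - 0.5) / 0.30) ** 2)  # broad, stable hill

          def robust_score(perf, window=5):
              # worst performance within +/- window grid points of each setting
              return np.array([perf[max(0, i - window):i + window + 1].min()
                               for i in range(len(perf))])

          for name, perf in [("needle", needle), ("plateau", plateau)]:
              print(f"{name}: peak={perf.max():.2f}, "
                    f"robust={robust_score(perf).max():.2f}")
          # The needle has the higher peak but a robust score near zero:
          # exactly the pattern to avoid.
          ```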

          It is interesting that McInerney et al. identified in their conclusion that “increasing near term investment in reducing anthropogenic climate forcing may be a promising avenue for increasing the robustness of climate strategies.”

          This seems almost self-evidently true on an intuitive level as well. Small investments now will have much greater effect than ones made in a few decades, and the risk of worst-case scenarios is greatly reduced.

          Alas, the decisions are not going to be made on the basis of robust decision making strategy. They are going to be made on the basis of political and economic considerations.

        • John Baez says:

          Thanks for the references, Nathan. I took the liberty of adding some details.

          I’d also like to thank Tim van Beek and John F for more references. I’ve put them all in a new page:

          Risk management, Azimuth Library.

          Someday we should create lots of pages on subjects like ‘deep uncertainty’, ‘decision theory’, ‘fat tails’, etcetera. But this is a start!

  3. John Sidles says:

    Over on Lance Fortnow’s weblog, I reviewed the International Roadmap Committee (IRC) “More than Moore” criteria for enterprises: FOM, LEP, WAT, SHR, and ECO.

    How does Azimuth match up against the FOM-LEP-WAT-SHR-ECO metrics?

  4. Phil Henshaw says:

    Curtis, it’s nice to aim to be comprehensive and to think strategically, but all the strategy you point to relies on the assumption in your first sentence:

    “We know what the most urgent problems are,…”

    Isn’t it reasonable to consider whether the reason our problems got so very far out of hand is that we still don’t quite see what’s causing them to multiply? The most urgent problem could then be seen as the need to find out why all our environmental solutions of the past have piled up and become even faster multiplying problems in the present.

    Once you start looking for undiscovered problems, instead of hopeful solutions for assumed problems, the whole quest for understanding changes. For example, lots of people recognize that our money system can’t remain stable unless it keeps growing by a percentage every year, and other people have recognized that our physical systems can’t remain stable unless they stop growing by percentages.

    The odd thing is the two groups of people mostly don’t talk to each other or seem to see that those options are contradictory. In this particular case that logical contradiction points to essential parts of the economy at odds with each other, and would naturally produce escalating strain on each other until something fails. Would you think that sort of rather visible systemic problem, largely “hidden in sight” for everyone, is the kind of thing that should stimulate the mind of a real scientist, and might also be worth attending to?

    • Curtis Faith says:

      In this particular case that logical contradiction points to essential parts of the economy at odds with each other, and would naturally produce escalating strain on each other until something fails. Would you think that sort of rather visible systemic problem, largely “hidden in sight” for everyone, is the kind of thing that should stimulate the mind of a real scientist, and might also be worth attending to?

      Yes. In fact, you have pointed out one of the reasons I am not optimistic that we will avoid big problems. I don’t think we’ll see any changes until “something fails.”

      There are many people looking at new ways to do business, and new ideas for the economy. One of my favorites is Umair Haque at the Havas Media Lab at Harvard. There are certainly many people talking about how to reduce consumption and get away from a growth-at-all-costs economy. But I don’t expect these people or their ideas to be popular until we have some big crises of various sorts.

      I also agree that the meta problem you’ve pointed out is at the source of the climate issues. I believe there are a couple other meta problems that impact the status quo as well.

      Systemic problems of the sort you describe are not going to be fixed absent some large major crisis or two. There is simply too much power aligned behind the status quo to expect any changes in systemic problems of any sort.

      History demonstrates that mankind is not very good at change without crisis. Power doesn’t like change. It fights it tooth and nail.

  5. John Sidles says:

    One common-sense page for Azimuth would be a page with two sections: (1) Roadmaps that have Broadly Succeeded, and (2) Roadmaps that have Broadly Failed.

    Enterprises like NASA and Intel, which are in the business of creating and implementing roadmaps, devote large resources to analyzing both the failure modes and the best practices of past roadmaps.

    This analytic process is slow, and very often the conclusions reached are painfully discomfiting … and yet it is scarcely likely that a project like Azimuth can succeed unless it embraces these best practices.

    Ed Wilson’s recent novel Anthill can be read as a fictional representation of this principle … and is well worth reading for this reason.

    • Curtis Faith says:

      It is my personal opinion that the success of the Azimuth Project will depend almost entirely on the type of people who have already joined and who will join over the next few months. This will set the tone for the work that follows.

      We will learn what worked and what didn’t work. An analysis of past failures and successes will be helpful.

      I’m pretty familiar with failure of various sorts. I’ve had a lot of practice. :-) New independent volunteer projects are different from the large-scale projects that NASA or Intel might undertake. They are inherently more fragile, so in my experience the failure modes are different.

      Right now, the Azimuth Project is a volunteer organization still in its infancy. So the most important factors for success will be making sure we maintain a critical mass of people doing ongoing work and learning how to grow at the right pace so we can continue to improve the information we have in the Azimuth Library and accomplish the other Azimuth Project goals without diffusing our focus.

      To me, this depends critically on getting smart people who share our goals working together with us. If anyone has any suggestions for how we might accomplish this, we’d love to hear them.

      • John Sidles says:

        Curtis Faith asks: To me, this depends critically on getting smart people who share our goals working together with us. If anyone has any suggestions for how we might accomplish this, we’d love to hear them.

        Ed Wilson provides these three concrete suggestions (from Anthill, p. 348):

        “Raff lived by three maxims. Fortune favors the prepared mind. People follow someone who knows where he’s going. And control the middle, because that’s where the extremes eventually have to meet.”

        Wilson’s track record at “making a difference” is outstanding … and this advice follows a career that spans most of the 20th century … and so his suggestions are worth taking seriously.

  6. WebHubTel says:

    That was a fairly lengthy analogy, but what I got out of it was that the longer the chain of evidence you have to support a medical diagnostic decision, the better off you are. In this sense, knowledge is power, and I think that fits the theme of Azimuth.

    • John F says:

      But there are other constraints (time, etc.), so it’s not the case that more evidence is necessarily better. I am related to MDs, and all have at times called for more tests merely so they could lie down and nap – who could argue with “more evidence to better decide”? In fact, Curtis Faith’s main point was that what is needed is a better framework for making decisions without more evidence.

      All Americans of a certain age know “If you choose not to decide, you still have made a choice,” from Rush’s “Freewill.”

      First band I road tripped to see.

  7. Tim Rayner says:

    Excellent post, Curtis. The 24/7 crisis of the ER is perfect for this kind of meditation. I suspect, though, that the training and experience of ER doctors conceals a further quality that is vital for dealing with uncertainty and risk: resilience in the face of failure. The best engagement strategies are for nought if we crumple and fold the moment we fail.

    This is why, in my classes on Philosophy for Change, I underscore the value of taking a Stoic attitude towards uncertain environments. One should accept that, ultimately, all that one can control are one’s judgements and responses to adversity. This is not an advocacy of inwardness – the Stoics were active agents, particularly in the political sphere. But they never allowed themselves to believe that their manifest power guaranteed them positive outcomes. When things failed, they would acknowledge that their ultimate capacity lay in getting a grip on themselves, mastering self-recrimination and despondency, and pushing ahead with a new approach. I expect this is an attitude that ER docs acquire in the course of training. The rest of us have to constantly remind ourselves of the importance of the lesson.

  8. Curtis Faith says:

    Good point Tim. This is especially true when the stakes are high.

    The last rule, Focus on Decisions, Not Outcomes, is intended to keep you from beating yourself up over decisions that end up with bad outcomes. A Stoic attitude is critical at these points.

  9. I would like to support Nathan Urban’s emphasis on robust decisions. In addition to Rob Lempert’s RDM (Robust Decision Making), I’d like to point the community to info-gap theory. Info-gap has been used for strategic planning and decision under severe uncertainty in many disciplines, including climate change, biological conservation, engineering, economics, homeland security, medicine, and more. Examples, background material, and lots of references by workers around the world can be found at http://info-gap.com. In particular, sources on biological conservation and related topics can be found on the sub-page http://info-gap.com/content.php?id=12. I also have recently started a blog on decisions and info-gaps which deals with a range of issues which might interest folks here: http://decisions-and-info-gaps.blogspot.com.
