Seven Rules for Risk and Uncertainty

31 January, 2011

Curtis Faith

Saving the planet will not be easy. We know what the most urgent problems are, but there is too much uncertainty to predict any particular outcomes or events with precision.

So what are we to do? How do we formulate strategy and plans to avert or mitigate disasters? How do we plan for problems when we don’t know exactly what form they will take?

Seven rules

As I noted in my previous blog post, drawing on my experience as a trader and an entrepreneur, and on what I learned about how emergency room (ER) doctors manage risk and uncertainty, those who confront uncertainty as part of their daily lives use similar strategies for managing it. Further, the way they make decisions and develop plans for the future is very relevant to the Azimuth Project.

My second book, Inside the Mind of the Turtles, described this strategy in detail. In the book, I outlined seven rules for managing risk and uncertainty. They are:

  • Overcome fear,
  • Remain flexible,
  • Take reasoned risks,
  • Prepare to be wrong,
  • Actively seek reality,
  • Respond quickly to change, and
  • Focus on decisions, not outcomes.

Most of you are familiar with many of the aspects of life-or-death emergencies, having experienced them when you or a loved one has been seriously sick or injured. So it may be a little easier to understand these rules if you examine them from the perspective of an ER doctor.

Overcome fear

Risk is everywhere in the ER. You can’t avoid it. Do nothing, and the patient may die. Do the wrong thing, and the patient may die. Do the right thing, and the patient still may die. You can’t avoid risk.

At times, there may be so many patients requiring assistance that it becomes impossible to care for them all at once. Yet decisions must be made. Time is critical in emergency care, and this greatly increases the risk associated with delays or wrong decisions. The doctor must decide quickly when the ER is busy, and these decisions are extremely important. Unlike in trading or in a startup, in the ER, mistakes can kill someone.

To be successful as an ER doctor, you must be able to handle life-or-death decisions every day. You must have enough confidence in your own abilities and your own judgment to act quickly when there is very little time. No doctor who is afraid to make life-or-death decisions stays in the ER for very long.

Remain flexible

One of the hallmarks of an ER facility is the ability to act very quickly to address virtually any type of critical medical need. A well-equipped ER will have diagnostic and surgical facilities onsite, defibrillators for heart attack victims, and even surgical tools for those times when a patient may not survive the trip up the elevator to a full surgical suite.

Another way that an ER facility organizes for flexibility is by making sure that there are sufficient doctors with a broad range of specialties available. ERs don’t staff for the average workload; they staff for the maximum expected workload. They keep a strategic reserve of doctors and nurses available to assist in case things get extremely busy.

Take reasoned risks

Triage is one way of managing the risks associated with the uncertainty of medical diagnoses and treatments. Triage is a way of sorting patients so those who require immediate assistance are helped first, those in potentially critical situations next, and those in no imminent danger of further damage are helped last. For example, if you go to the ER with a broken leg, you may or may not be the first person in line for treatment. If a shooting victim comes in, you will be shuffled back in line. Your injury, while serious, can wait because you are in little danger of dying, and a few hours’ delay in setting a bone is unlikely to cause permanent damage.
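Triage as described above is essentially a priority queue: patients are served in order of urgency, and only then by arrival time. Here is a minimal sketch in Python; the three categories and their ordering are illustrative, not an actual triage protocol:

```python
import heapq

# Lower number = higher urgency (illustrative scale, not a real triage protocol)
URGENCY = {"immediate": 0, "critical": 1, "stable": 2}

def triage_order(patients):
    """Return patient names sorted by urgency, breaking ties by arrival order."""
    heap = [(URGENCY[category], arrived, name)
            for arrived, (name, category) in enumerate(patients)]
    heapq.heapify(heap)
    return [heapq.heappop(heap)[2] for _ in range(len(heap))]

# Patients listed in arrival order: the broken leg arrived first
arrivals = [("broken leg", "stable"),
            ("gunshot wound", "immediate"),
            ("chest pain", "critical")]
print(triage_order(arrivals))
# ['gunshot wound', 'chest pain', 'broken leg']
```

The gunshot victim is treated first even though the broken leg arrived earlier, just as in the example above.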

Diagnosis itself is one of the most important aspects of emergency treatment. The wrong diagnosis can kill a patient. The right diagnosis can save a life. Yet diagnosis is messy. There are no right answers, only probable answers.

Doctors weigh the probability of particular diseases or injuries against the seriousness of the likely conditions and the time sensitivity of a given treatment. Some problems require immediate care; others are less urgent. Good doctors can quickly evaluate the symptoms and the results of diagnostic tests to deliver the best diagnosis. That diagnosis may be wrong, but a good doctor will weigh the factors to determine the most likely one and will continue to run tests to rule out rarer but potentially more serious problems in time to treat them.

Prepare to be wrong

A preliminary diagnosis may be wrong; the onset of more serious symptoms may indicate that a problem is more urgent than anticipated initially. Doctors know this. This is why they and their staff continuously monitor the health status of their patients.

Often, while the initial diagnosis is being treated, doctors will order additional tests to verify that diagnosis. They know their assessment can be wrong, so they allow for this by checking for alternatives even while treating the current diagnosis.

More than perhaps any other experts in uncertainty, doctors understand the ways that uncertainty can manifest itself. As a profession, doctors have almost completely mapped the current thinking in medicine into a large tree of objective and even subjective tests that can be run to confirm or eliminate a particular diagnosis. So a doctor knows exactly how to tell if she is wrong, and what to do in that event, almost every time she makes a diagnosis. Doctors also know which other, less common medical problems can exhibit the same symptoms as the initial diagnosis.

For example, if a patient comes in with a medium-grade fever, a doctor normally will check the ears, nose, sinuses, lymph nodes, and breathing to eliminate organ-specific issues and then probably issue a diagnosis of a flu infection. If the fever rises above 102 °F (39 °C), the doctor probably will start running some tests to eliminate more serious problems, such as a bacterial infection or viral meningitis.

Actively seek reality

Since doctors are not 100 percent certain that the diagnosis they have made for a given patient is correct, they continue to monitor that patient’s health. If the patient is in serious danger, he will be monitored continuously. Anyone who visits a hospital emergency room will notice all the various monitors and diagnostic machines. There are ECG monitors to check the general health of the heart, pulse monitors, blood oxygenation testers, etc. The ER staff always has up-to-the-second status for their patients. These immediate readings alert doctors and nurses quickly to changes indicating a worsening condition.

Once the monitors have been set up (generally by the nursing staff), ER doctors double-check their diagnosis by running tests to rule out more serious illnesses or injuries that may be less common. The more serious the patient’s condition, the more tests will be run. A small error in diagnosis may cost a patient’s life if she suffers from a serious condition with poor vital signs such as very low blood pressure or an erratic pulse. A large error in diagnosis may not matter for a patient who is relatively healthy. So more time and effort are spent to verify the diagnoses of patients with more serious conditions, and less time and effort are spent verifying the diagnoses of stable patients.

Actively seeking reality is extremely important in emergency medicine because initial diagnoses are likely to be in error to some degree a significant percentage of the time. Since misdiagnoses can kill people, much time and effort are spent to verify and check a diagnosis and to make sure that a patient does not regress.

Respond quickly to change

If caught early, a misdiagnosis or a significant change in a patient’s condition need not be cause for worry. If caught late, it can mean serious complications, extended hospitalization, or even death. For critical illness and injury, time is very important.

The entire point of closely monitoring a patient is to enable the doctor to quickly determine if there is something more serious wrong than was first evident. A doctor’s initial diagnosis comes from the symptoms that are readily apparent. A good doctor knows that there may be a more serious condition causing those symptoms. More serious conditions often warrant different treatment. Sometimes a patient’s condition is serious enough that a few hours can mean the difference between life and death or between full recovery and permanent brain damage.

For example, a mother comes into the ER with her preteen son, who is running a fever of 102 °F (39 °C), has a headache, and is vomiting. These are most likely symptoms from a flu infection that is not particularly emergent. The treatment for the flu is normally just bed rest and drinking lots of fluids. So, if the ER is busy, the flu patient normally will wait as patients with more urgent problems get care.

The addition of one more symptom may change the treatment completely. If the patient who has been sitting in the ER waiting room starts complaining of a stiff, painful neck in addition to the flu symptoms, this may be indicative of spinal meningitis, which is a life-threatening disease if not treated quickly. The attending physician likely will order an immediate lumbar puncture (also called a spinal tap) to examine the spinal fluid to see if it is infected with the organisms that cause spinal meningitis. If it is a bacterial infection, treatment with antibiotics will begin right away. A few hours' difference can save a life in a case of bacterial spinal meningitis.

The important thing to remember is that a good doctor knows what to look for that will indicate a more serious condition than was indicated initially. She also will respond very quickly to administer appropriate treatment when the symptoms or tests indicate a more serious condition. A good doctor is not afraid of being wrong. A good doctor is looking for any sign that she might have been wrong so that she can help the patient who has a more serious disease in time to treat it so the patient can recover completely.

Focus on decisions, not outcomes

One of the difficulties facing ER doctors is that, because of the uncertainty of medical diagnoses and treatments, a doctor can do everything correctly and the patient still may die or suffer permanent damage. The doctor might perform perfectly and still lose the patient.

At times, a patient may require risky surgery to save his life. The doctor will weigh the risk of the surgery itself against the risk of alternative treatments. If the surgery will increase the chances of the patient surviving, then the doctor will order the surgery or perform it herself in cases of extreme emergency.

A doctor may make the best decision under the circumstances using the very best information available, and still the patient may die. A good doctor will evaluate the decision not on the basis of how it turns out but according to the relative probabilities of the outcomes themselves. An outcome of a dead patient does not mean that surgery was a mistake. Likewise, it may be that the surgery should not have been performed even when it has a successful outcome.

If ER doctors evaluated their decisions on the basis of outcomes, then it would lead to bad medicine. For example, if a particular surgery has a 10 percent mortality rate, meaning that 10 percent of the patients who have the surgery die soon after, this is risky surgery. If a patient has an injury that will kill the patient 60 percent of the time without that surgery, then the correct action is to have the surgery performed, because the patient is six times less likely to die with it than without it (a 10 percent chance of dying versus 60 percent). If an ER doctor orders the surgery and it is performed without error, the patient still may die. This does not change the fact that, absent any new information, the decision to have the surgery was correct.
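The arithmetic behind this decision can be made explicit. A quick sketch, using the illustrative probabilities from the example above:

```python
# Probabilities from the example above (illustrative numbers, not real data)
p_die_without = 0.60   # chance the injury kills the patient without surgery
p_die_with    = 0.10   # mortality rate of the surgery itself

# The decision compares the two risks themselves, not the outcome of any
# single case: the surgery cuts the chance of dying by a factor of six.
print(round(p_die_without / p_die_with))        # 6
print(1 - p_die_with, "vs", 1 - p_die_without)  # survival: 0.9 vs 0.4
```

Whether any one patient lives or dies, the decision that maximized the chance of survival was the right one.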

The inherent uncertainty of diagnosis and treatment means that many times the right treatment will have a bad outcome. A good doctor knows this and will continue prescribing the best possible treatment even when a few rare examples cross her path.

Relevance for Azimuth

Like an ER doctor trying to diagnose a patient in critical condition, we don’t have much time. We need to prepare ourselves so that when problems arise and disaster strikes, we can quickly determine what’s wrong, stabilize the patient, make sure we have found all the problems, monitor progress, and maintain vigilance until the patient has recovered.

The sheer complexity of the issues, and the scope of the problems that endanger the planet and life on it, ensure that there will never be enough information to make a “correct” analysis, or one single foolproof plan of action. Except in the very broadest terms, we can’t know what the future will bring so we need to build plans that acknowledge that very real limitation.

Rather than pretend that we know more than is possible to know, we should embrace the uncertainty. We need to build flexible organizations and structures so that we are prepared to act no matter what happens. We need to build flexible plans that can accommodate change.

We need to build the capability to acquire and assimilate an understanding of reality as it unfolds. We need to seek the truth about our condition and likely prospects for the future.

And we need to be willing to change our minds when circumstances indicate that our judgments have been wrong.

Being ready for any potential scenario will not be easy. It will require a tremendous effort on the part of a global network of scientists, engineers, and others who are interested in saving the planet.

I hope that you consider joining our effort.


Curtis Faith on the Azimuth Project

27 January, 2011

Hi, I’m Curtis Faith. I’m very excited to be helping with the Azimuth Project.

A few weeks ago, I read John’s exhortation for blog readers to join in the discussion on the Azimuth Forum, so I decided to check it out. I was surprised at the amount of work that has been done in the last six months. I was inspired by the project’s goals and decided to commit to helping.

Since we need help, and hope that other blog readers might pitch in too, John and I thought it would be a good idea for me to explain a little about myself, why I think the Azimuth Project is so important, and how I think I can help.

I just turned 47 on Sunday. I am a first-time father with a 9-month-old daughter. She is amazing. I don't want her to grow up and wonder why our generation let things get so bad and I didn't do anything to help make the world better.

I’m a real optimist by nature. But ignoring the very clear trends of the last 30 to 40 years is no longer an option. Our generation must stand up and do something about this.

A few years back I thought that politics might be the answer. I spent a lot of time learning the ins and outs of politics. My wife and I even followed the 2008 U.S. election and filmed the campaigns of Obama and Ron Paul as part of that learning. It is clear to me—having seen the way the last few years have unfolded—that political solutions will not avert the coming crisis.

In the last few years, my wife and I have lived in Southeast Asia for four months and in South America for a few years. I wanted to understand the world from outside the U.S. perspective, and to get to know people in other countries as individuals, as humans. This has made it even clearer what the major problems are, and that the solutions won't be implemented until a major crisis strikes.

So for the last few years I've been learning relevant technology and science as a backup plan, trying to see where I might be able to help in the most effective way possible. I have also spent a lot of time investigating the various other efforts working on the major global problems. None of them appear to me to be facing reality. In contrast, the Azimuth Project fits what I've seen with my own eyes.

But most of all, the reason that I’m excited about the Azimuth Project is that it has the loftiest of goals and the Earth needs saving. We’ve screwed it up and we’re running out of time.

A bit about me

I’m best known for something that started 27 years ago, in the fall of 1983, when I was just 19 years old.

I dropped out of college because I was bored and joined a small group of traders who later became famous in the trading world because of our subsequent success and how we learned to trade. Some of the lessons I learned in that group about managing risk and uncertainty are very relevant to the Azimuth Project goals for saving the world.

A famous Chicago trader, Richard Dennis, took out large ads in the New York Times, Barron's, and the Wall Street Journal announcing trainee positions. After only two weeks of training, we were given money to trade, and at the end of the first month of practice with a small account, I was given a $2 million account to trade. Over the next four-plus years, I turned that $2 million into more than $33 million, more than doubling the money each year. Most of the other trainees were also successful, and the story became legend in trading circles as the group made more than $100 million for Richard over the life of the program. Our group was known as the Turtles, and I wrote a book about this experience, Way of the Turtle, that became a finance bestseller a few years back.

After Rich disbanded the Turtles, I got bored with trading. I was more interested in software and wanted to do something to make the world a better place. I started a few companies, built innovative software, tried to solve challenging problems, and eventually found my way to Silicon Valley in the latter half of the Internet Boom.

Chaos, and risk and uncertainty

The sheer complexity of the issues and the scope of the problems that endanger the planet and life on it ensure that there will never be enough information to make a “correct” analysis. Except in the very broadest terms, we can’t know what the future will bring so we need to build plans that acknowledge that very real limitation.

We could pretend that we know more than is possible to know, or we can embrace the uncertainty and adapt to it. If we do this, we can concentrate on building flexibility and responsiveness along with an ability to assimilate and acquire an understanding of reality as it unfolds.

As a trader and entrepreneur, I learned about managing risk and uncertainty and how to develop flexible plans that will work when you can’t predict the future. Over time I came to see that other professionals who were forced to plan and make decisions under conditions of uncertainty used similar strategies.

But first, some background. While in Silicon Valley, I met a couple of guys who were forming a new hedge fund in the Virgin Islands, Simon Olsen and Bruce Tizes. In early 2001, it was obvious to most people that the Internet party was over in Silicon Valley. Pink slips were flying everywhere. So I thought it might be a good time to do something new for a few years.

I had often thought about getting back into trading to build up enough money to fund my own projects. I didn't like the way all the funding in software was focused on money. Most investors didn't care about building cool software, and certainly not about doing positive things for the world; if those things came, they were secondary to profits. So for a while I thought it best to make my own money, so that I wouldn't be restricted to those strategies that optimized profits for investors.

So I decided to join Bruce and Simon in their hedge fund venture, and Bruce and I subsequently became good friends. Bruce had a very interesting background. He is one of the rare true polymaths that I’ve run into. He is incredibly bright, with a very flexible mind. He graduated high school at 15 years of age, college at 16, and medical school at 20. He later made a lot of money investing in real estate and trading stocks.

For most of the time since becoming a doctor, Bruce had been practicing emergency medicine at Mount Sinai Hospital in Chicago. Mount Sinai is the inspiration for the television series E.R., which is also set in Chicago, and the hospital is a major destination for accident and gunshot victims in the downtown Chicago area.

So in various discussions over lunch or dinner over the few years we worked together, I came to learn a bit about the life of an emergency room doctor. Over time, Bruce showed me that there were similarities in how ER doctors and traders approached risk and uncertainty.

From my experience with software entrepreneurs and venture capitalists, I knew that they too handle risks and uncertainty in similar ways. It seemed like everyone who was forced to deal with uncertainty in the normal course of business followed similar general principles, and that these principles would be very useful even for those who didn’t learn them on the job.

Since my first book sold very well, the publisher was interested in getting me to write another book. I agreed to write one. But this time I wanted to write a book about these important principles for managing risk and uncertainty rather than a trading book.

This became my second book, Inside the Mind of the Turtles. Unfortunately, against my wishes and better judgment, it was marketed as a trading book. The truth is that it is a much more general book, written for times of chaos and uncertainty, even for the emergency room doctors of a planet in peril. It contains ideas that are very relevant to the Azimuth Project.

In my next post here, I’ll outline the Seven Rules for Risk I develop in the book and show how they are relevant for the Azimuth Project because of the tremendous uncertainty inherent in environmental and sustainability issues.

In the meantime, I urge you to join in and help with the Azimuth Project. Read some articles in the Azimuth Library and join the discussions on the related Azimuth Forum.

The Azimuth Project is multidisciplinary so there are opportunities for all different kinds of people to help out. For example, I have been interested in low-energy transportation alternatives. So I plan on doing more research, adding to the Azimuth library of articles for advanced transportation, and finding some of the best experts to see if they will help on the Azimuth Project itself. I am also good at simplifying and explaining complicated problems. So I plan to take some of the more complicated sustainability issues and summarize them for non-experts. This will make it easier for people of diverse talents to grasp the full scope of the problems Azimuth is tackling.

I’ve spent much of the last 10 years trying to figure out how I can best help make the world a better place.

For me, the Azimuth Project is that answer. Come check it out.


Azimuth News (Part 1)

24 January, 2011

The world seems to be heading for tough times. From a recent New York Times article:

Over the next 100 years, many scientists predict, 20 percent to 30 percent of species could be lost if the temperature rises 3.6 degrees to 5.4 degrees Fahrenheit. If the most extreme warming predictions are realized, the loss could be over 50 percent, according to the United Nations climate change panel.

But when the going gets tough, the tough get going! The idea of the Azimuth Project is to create a place where scientists and engineers can meet and work together to help save the planet from global warming and other environmental threats. The first step was to develop a procedure for collecting reliable information and explaining it clearly. That means: not just a wiki, but a wiki with good procedures and a discussion forum to help us criticize and correct the articles.

Thanks to the technical wizardry of Andrew Stacey, and a lot of sage advice and help from Eric Forgy, the wiki and forum officially opened their doors about four months ago.

That seems like ages ago. For months a small band of us worked hard to get things started. With the beginning of the new year, we seem to be entering a phase transition: we’re getting a lot of new members. So, it’s time to give you an update!

There's a lot going on now. If you've been reading this blog and clicking some of the links, you've probably seen some of our pages on sea level rise, coral reefs, El Niño, biochar, photovoltaic solar power, peak oil, energy return on energy invested, and dozens of other topics. If you haven't, check them out!

But that’s just the start of it. If you haven’t been reading the Azimuth Forum, you probably don’t know most of what’s going on. Let me tell you what we’re doing.

I’ll also tell you some things you can do to help.

Azimuth Project Pages

By far the easiest thing is to go to any Azimuth Project page, think of some information or reference that it’s missing, and add it! Go to the home page, click on a category, find an interesting article in that category and give it a try. Or, if you want to start a new page, do that. We desperately need more help from people in the life sciences, to build up our collection of pages on biodiversity.

If you need help, start here:

How to get started.

Plans of Action

We’re working through various plans for dealing with peak oil, global warming, and various environmental problems. You can see our progress here:

Plans of action, Azimuth Project.

So far it goes like this: first we write summaries of these plans; then I blog about them; then Frederik De Roo distills your criticisms and comments and adds them to the Azimuth Project. The idea is to build up a thorough comparison of many different plans.

We’re the furthest along when it comes to Pacala and Socolow’s plan:

Stabilization wedges, Azimuth Project.

You don't need to be an expert in any particular discipline to help here! You just need to be able to read plans of action and write crisp, precise summaries, as above. We also need help finding the most important plans of action.

In addition to plans of action, we’re also summarizing various ‘reports’. The idea is that a report presents facts, while a plan of action advocates a course of action. See:

Reports, Azimuth Project.

In practice the borderline between plans of action and reports is a bit fuzzy, but that’s okay.

Plan C

Analyzing plans of action is just the first step in a more ambitious project: we’d like to start formulating our own plans. Our nickname for this project is Plan C.

Why Plan C? Many other plans, like Lester Brown’s Plan B, are too optimistic. They assume that most people will change their behavior in dramatic ways before problems become very serious. We want a plan that works with actual humans.

In other words: while optimism is a crucial part of any successful endeavor, we also need plans that assume plausibly suboptimal behavior on the part of the human race. It would be best if we did everything right in the first place. It would be second best to catch problems before they get very bad — that’s the idea of Plan B. But realistically, we’ll be lucky if we do the third best thing: muddle through when things get bad.

Azimuth Code Project

Some people on the Azimuth Project, most notably Tim van Beek, are writing software that illustrates ideas from climate physics and quantitative ecology. Full-fledged climate models are big and tough to develop; it's a lot easier to start with simple models, which are good for educational purposes. I'm starting to use these in This Week's Finds.

If you have a background in programming, we need your help! We have people writing programs in R and Sage… but Tim is writing code in Java for a systematic effort he calls the Azimuth Code Project. The idea is that over time, the results will become a repository of open-source modelling software. As a side effect, he’ll try to show that clean, simple, open-source, well-managed and up-to-date code handling is possible at a low cost — and he’ll explain how it can be done.

So far most of our software is connected to stochastic differential equations:

• Software for investigating the Hopf bifurcation and its stochastic version: see week308 of This Week’s Finds.

• Software for studying predator-prey models, including stochastic versions: see the page on quantitative ecology. Ultimately it would be nice to have some software to simulate quite general stochastic Petri nets.

• Software for studying stochastic resonance: see the page on stochastic resonance. We need a lot more on this, leading up to software that takes publicly available data on Milankovitch cycles — cyclic changes in the Earth’s orbit — and uses it to make predictions of the glacial cycles. It’s not clear how good these predictions will be — the graphs I’ve seen so far don’t look terribly convincing — but the Milankovitch cycle theory of the ice ages is pretty popular, so it’ll be fun to see.

• We would like a program that simulates the delayed action oscillator, which is an interesting simple model for the El Niño / Southern Oscillation.
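To give a feel for how small such a simulation can be, here is a minimal sketch of the delayed action oscillator in Python. It uses the standard Suarez–Schopf form dT/dt = T − T³ − α·T(t − δ), integrated with a plain Euler step and a circular delay buffer; the parameter values are illustrative, not tuned to ENSO data:

```python
# Delayed action oscillator (Suarez-Schopf form):
#   dT/dt = T - T^3 - alpha * T(t - delta)
# T is a scaled sea surface temperature anomaly; the delayed term models
# reflected ocean waves that return to damp the original anomaly.
# Parameters are illustrative, not fitted to observations.

def simulate(alpha=0.75, delta=6.0, dt=0.01, t_max=200.0, T0=0.1):
    n_delay = int(delta / dt)        # time steps spanning the delay
    history = [T0] * n_delay         # constant initial history on [-delta, 0]
    T = T0
    out = []
    for step in range(int(t_max / dt)):
        i = step % n_delay
        T_delayed = history[i]       # T(t - delta), stored n_delay steps ago
        history[i] = T               # save current T; read back after the delay
        T += dt * (T - T**3 - alpha * T_delayed)
        out.append(T)
    return out

traj = simulate()
tail = traj[len(traj) // 2:]         # discard the initial transient
print(round(min(tail), 2), round(max(tail), 2))
```

With the delay well above its critical value, the anomaly settles into a sustained oscillation between warm and cold states rather than converging to a fixed point, which is the qualitative ENSO-like behavior the model is meant to capture.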

Graham Jones has proposed some more challenging projects:

• An open source version of FESA, the Future Energy Scenario Assessment. FESA, put out by Orion Innovations, is proprietary software that models energy systems scenarios, including meteorological data, economic analysis and technology performance.

• An automated species-identification system. See the article Time to automate identification in the journal Nature. The authors say that taxonomists should work with specialists in pattern recognition, machine learning and artificial intelligence to increase accuracy and reduce drudgery.

David Tweed, who is writing a lot of our pages on the economics of energy, has suggested some others:

• Modeling advanced strategies for an electrical smart grid.

• Modeling smartphone or website based car- or ride-sharing schemes.

• Modeling supply routing systems for supermarkets that attempt to reduce their ecological footprint.

All these more challenging projects will only take off if we find some energetic people and get access to good data.

This Week’s Finds

I’m interviewing people for This Week’s Finds: especially scientists who have switched from physics to environmental issues, and people with big ideas about how to save the planet. The goal here is to attract people, especially students, into working on these subjects.

Here’s my progress so far:

Nathan Urban — climate change. Done.

Tim Palmer — weather prediction. Done.

Eliezer Yudkowsky — friendly AI. Interviewed.

Thomas Fischbacher — sustainability. Interviewed.

Gregory Benford — geoengineering. Underway.

David Ellerman — helping people, economics. Underway.

Eric Drexler — nanotechnology. Agreed to do it.

Chris Lee — bioinformatics. Agreed to do it.

If you’re a scientist or engineer doing interesting things on the topics we’re interested in at the Azimuth Project, and you’d like me to interview you, let me know! Of course, your ego should be tough enough to handle it if I say no.

Alternatively: if you know somebody like this, and you’re good at interviewing people, this is another place you might help. You could either send them to me, or interview them yourself! I’m already trying to subcontract out one interview to a mathematician friend.

Blog articles

While I’ve been writing most of the articles on this blog so far, I don’t want it to stay that way. If you want to write articles, let me know! I might or might not agree… but if you read this blog, you know what I like, so you can guess ahead of time whether I’ll like your article or not.

In fact, the next two articles here will be written by Curtis Faith, a new member of the Azimuth Forum.

More

There’s also a lot more you can do. For suggestions, try:

Things to do, Azimuth Project.

Open projects, Azimuth Project.


This Week’s Finds (Week 308)

24 December, 2010

Last week we met the El Niño-Southern Oscillation, or ENSO. I like to explain things as I learn about them. So, often I look back and find my explanations naive. But this time it took less than a week!

What did it was reading this:

• J. D. Neelin, D. S. Battisti, A. C. Hirst et al., ENSO theory, J. Geophys. Res. 103 (1998), 14261-14290.

I wouldn’t recommend this to the faint of heart. It’s a bit terrifying. It’s well-written, but it tells the long and tangled tale of how theories of the ENSO phenomenon evolved from 1969 to 1998 — a period that saw much progress, but did not end with a neat, clean understanding of this phenomenon. It’s packed with hundreds of references, and sprinkled with somewhat intimidating remarks like:

The Fourier-decomposed longitude and time dependence of these eigensolutions obey dispersion relations familiar to every physical oceanographer…

Nonetheless I found it fascinating — so, I’ll pick off one small idea and explain it now.

As I’m sure you’ve heard, climate science involves some extremely complicated models: some of the most complex known to science. But it also involves models of lesser complexity, like the "box model" explained by Nathan Urban in "week304". And it also involves some extremely simple models that are designed to isolate some interesting phenomena and display them in their Platonic ideal form, stripped of all distractions.

Because of their simplicity, these models are great for mathematicians to think about: we can even prove theorems about them! And simplicity goes along with generality, so the simplest models of all tend to be applicable — in a rough way — not just to the Earth’s climate, but to a vast number of systems. They are, one might say, general possibilities of behavior.

Of course, we can’t expect simple models to describe complicated real-world situations very accurately. That’s not what they’re good for. So, even calling them "models" could be a bit misleading. It might be better to call them "patterns": patterns that can help organize our thinking about complex systems.

There’s a nice mathematical theory of these patterns… indeed, several such theories. But instead of taking a top-down approach, which gets a bit abstract, I’d rather tell you about some examples, which I can illustrate using pictures. But I didn’t make these pictures. They were created by Tim van Beek as part of the Azimuth Code Project. The Azimuth Code Project is a way for programmers to help save the planet. More about that later, at the end of this article.

As we saw last time, the ENSO cycle relies crucially on interactions between the ocean and atmosphere. In some models, we can artificially adjust the strength of these interactions, and we find something interesting. If we set the interaction strength to less than a certain amount, the Pacific Ocean will settle down to a stable equilibrium state. But when we turn it up past that point, we instead see periodic oscillations! Instead of a stable equilibrium state where nothing happens, we have a stable cycle.

This pattern, or at least one pattern of this sort, is called the "Hopf bifurcation". There are various differential equations that exhibit a Hopf bifurcation, but here’s my favorite:

\frac{d x}{d t} =  -y + \beta  x - x (x^2 + y^2)

\frac{d y}{d t} =  \; x + \beta  y - y (x^2 + y^2)

Here x and y are functions of time, t, so these equations describe a point moving around on the plane. It’s easier to see what’s going on in polar coordinates:

\frac{d r}{d t} = \beta r - r^3

\frac{d \theta}{d t} = 1

The angle \theta goes around at a constant rate while the radius r does something more interesting. When \beta \le 0, you can see that any solution spirals in towards the origin! Or, if it starts at the origin, it stays there. So, we call the origin a "stable equilibrium".

Here’s a typical solution for \beta = -1/4, drawn as a curve in the x y plane. As time passes, the solution spirals in towards the origin:

The equations are more interesting for \beta > 0. Then dr/dt = 0 whenever

\beta r - r^3 = 0

This has two solutions, r = 0 and r = \sqrt{\beta}. Since r = 0 is a solution, the origin is still an equilibrium. But now it’s not stable: if r is between 0 and \sqrt{\beta}, we’ll have \beta r - r^3 > 0, so our solution will spiral out, away from the origin and towards the circle r = \sqrt{\beta}. So, we say the origin is an "unstable equilibrium". On the other hand, if r starts out bigger than \sqrt{\beta}, our solution will spiral in towards that circle.

Here’s a picture of two solutions for \beta = 1:

The red solution starts near the origin and spirals out towards the circle r = \sqrt{\beta}. The green solution starts outside this circle and spirals in towards it, soon becoming indistinguishable from the circle itself. So, this equation describes a system where x and y quickly settle down to a periodic oscillating behavior.

Since solutions that start anywhere near the circle r = \sqrt{\beta} will keep going round and round getting closer to this circle, it’s called a "stable limit cycle".
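If you’d rather not take the pictures on faith, you can check this convergence numerically. Here’s a minimal sketch in Python (Tim’s actual code was in Java, and the step size, run length, and starting points below are arbitrary choices of mine) that integrates the equations with the same forward Euler scheme:

```python
import math

def hopf_step(x, y, beta, dt):
    """One forward-Euler step of the Hopf normal form."""
    r2 = x*x + y*y
    dx = -y + beta*x - x*r2
    dy =  x + beta*y - y*r2
    return x + dt*dx, y + dt*dy

def simulate(x, y, beta, dt=0.001, steps=50_000):
    """Integrate up to time t = dt * steps and return the final point."""
    for _ in range(steps):
        x, y = hopf_step(x, y, beta, dt)
    return x, y

# beta > 0: a solution starting near the origin ends up on the circle
# of radius sqrt(beta)
x, y = simulate(0.1, 0.0, beta=1.0)
print(math.hypot(x, y))   # very close to 1.0 = sqrt(1.0)

# beta < 0: solutions spiral in towards the origin
x, y = simulate(1.0, 0.0, beta=-0.25)
print(math.hypot(x, y))   # very close to 0.0
```

Whatever nonzero point you start from, the final radius lands on \sqrt{\beta} when \beta > 0 and decays exponentially to zero when \beta < 0.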

This is what the Hopf bifurcation is all about! We’ve got a dynamical system that depends on a parameter, and as we change this parameter, a stable fixed point becomes unstable, and a stable limit cycle forms around it.

This isn’t quite a mathematical definition yet, but it’s close enough for now. If you want something a bit more precise, try:

• Yuri A. Kuznetsov, Andronov-Hopf bifurcation, Scholarpedia, 2006.

Now, clearly the Hopf bifurcation idea is too simple for describing real-world weather cycles like the ENSO. In the Hopf bifurcation, our system settles down into an orbit very close to the limit cycle, which is perfectly periodic. The ENSO cycle is only roughly periodic:



The time between El Niños varies between 3 and 7 years, averaging around 4 years. There can also be two El Niños without an intervening La Niña, or vice versa. One can try to explain this in various ways.

One very simple, general idea is to add random noise to whatever differential equation we were using to model the ENSO cycle, obtaining a so-called stochastic differential equation: a differential equation describing a random process. Richard Kleeman discusses this idea in Tim Palmer’s book:

• Richard Kleeman, Stochastic theories for the irregularity of ENSO, in Stochastic Physics and Climate Modelling, eds. Tim Palmer and Paul Williams, Cambridge U. Press, Cambridge, 2010, pp. 248-265.

Kleeman mentions three general theories for the irregularity of the ENSO. They all involve the idea of separating the weather into "modes" — roughly speaking, different ways that things can oscillate. Some modes are slow and some are fast. The ENSO cycle is defined by the behavior of certain slow modes, but of course these interact with the fast modes. So, there are various options:

  1. Perhaps the relevant slow modes interact with each other in a chaotic way.
  2. Perhaps the relevant slow modes interact with each other in a non-chaotic way, but also interact with chaotic fast modes, which inject noise into what would otherwise be simple periodic behavior.
  3. Perhaps the relevant slow modes interact with each other in a chaotic way, and also interact in a significant way with chaotic fast modes.

Kleeman reviews work on the first option but focuses on the second. The third option is the most complicated, so the pessimist in me suspects that’s what’s really going on. Still, it’s good to start by studying simple models!

How can we get a simple model that illustrates the second option? Simple: take the model we just saw, and add some noise! This idea is discussed in detail here:

• H. A. Dijkstra, L. M. Frankcombe and A. S. von der Heydt, The Atlantic Multidecadal Oscillation: a stochastic dynamical systems view, in Stochastic Physics and Climate Modelling, eds. Tim Palmer and Paul Williams, Cambridge U. Press, Cambridge, 2010, pp. 287-306.

This paper is not about the ENSO cycle, but another one, which is often nicknamed the AMO. I would love to talk about it — but not now. Let me just show you the equations for a Hopf bifurcation with noise:

\frac{d x}{d t} =  -y + \beta  x - x (x^2 + y^2) + \lambda \frac{d W_1}{d t}

\frac{d y}{d t} =  \; x + \beta  y - y (x^2 + y^2) + \lambda \frac{d W_2}{d t}

They’re the same as before, but with some new extra terms at the end: that’s the noise.

This could easily get a bit technical, but I don’t want it to. So, I’ll just say some buzzwords and let you click on the links if you want more detail. W_1 and W_2 are two independent Wiener processes, so they describe Brownian motion in the x and y coordinates. When we differentiate a Wiener process we get white noise. So, we’re adding some amount of white noise to the equations we had before, and the number \lambda says precisely how much. That means that x and y are no longer specific functions of time: they’re random functions, also known as stochastic processes.
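In discrete time these equations are easy to simulate: the standard Euler–Maruyama scheme just adds to each Euler step a Gaussian increment with standard deviation \lambda \sqrt{\Delta t}. Here’s a rough sketch in Python, not the Java code that made the pictures; the step size, run length, and random seed are arbitrary choices of mine:

```python
import math, random

def hopf_noise(beta, lam, dt=0.001, steps=200_000, seed=1):
    """Euler-Maruyama integration of the Hopf normal form plus white noise.

    Each step adds lam * sqrt(dt) * N(0,1) to x and y independently,
    the discrete-time analogue of the lambda dW/dt terms.
    Returns the sampled x coordinates."""
    rng = random.Random(seed)
    x, y = 0.0, 0.0
    xs = []
    sqdt = math.sqrt(dt)
    for _ in range(steps):
        r2 = x*x + y*y
        x, y = (x + dt*(-y + beta*x - x*r2) + lam*sqdt*rng.gauss(0, 1),
                y + dt*( x + beta*y - y*r2) + lam*sqdt*rng.gauss(0, 1))
        xs.append(x)
    return xs

# beta <= 0: no limit cycle, yet the noise keeps kicking the solution
# away from the origin, producing irregular oscillations
xs = hopf_noise(beta=-0.25, lam=0.1)
print(max(xs), min(xs))
```

Plotting the returned x values against time reproduces the kind of irregular oscillation shown below, even though the deterministic system would just spiral into the origin.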

If this were a math course, I’d feel obliged to precisely define all the terms I just dropped on you. But it’s not, so I’ll just show you some pictures!

If \beta = 1 and \lambda = 0.1, here are some typical solutions:

They look similar to the solutions we saw before for \beta = 1, but now they have some random wiggles added on.

(You may be wondering what this picture really shows. After all, I said the solutions were random functions of time, not specific functions. But it’s tough to draw a "random function". So, to get one of the curves shown above, what Tim did is randomly choose a function according to some rule for computing probabilities, and draw that.)

If we turn up the noise, our solutions get more wiggly. If \beta = 1 and \lambda = 0.3, they look like this:

In these examples, \beta > 0, so we would have a limit cycle if there weren’t any noise — and you can see that even with noise, the solutions approximately tend towards the limit cycle. So, we can use an equation of this sort to describe systems that oscillate, but in a somewhat random way.

But now comes the really interesting part! Suppose \beta \le 0. Then we’ve seen that without noise, there’s no limit cycle: any solution quickly spirals in towards the origin. But with noise, something a bit different happens. If \beta = -1/4 and \lambda = 0.1 we get a picture like this:

We get irregular oscillations even though there’s no limit cycle! Roughly speaking, the noise keeps knocking the solution away from the stable fixed point at x = y = 0, so it keeps going round and round, but in an irregular way. It may seem to be spiralling in, but if we waited a bit longer it would get kicked out again.

This is a lot easier to see if we plot just x as a function of t. Then we can run our solution for a longer time without the picture becoming a horrible mess:

If you compare this with the ENSO cycle, you’ll see they look roughly similar:



That’s nice. Of course it doesn’t prove that a model based on a Hopf bifurcation plus noise is "right" — indeed, we don’t really have a model until we’ve chosen variables for both x and y. But it suggests that a model of this sort could be worth studying.

If you want to see how the Hopf bifurcation plus noise is applied to climate cycles, I suggest starting with the paper by Dijkstra, Frankcombe and von der Heydt. If you want to see it applied to the El Niño-Southern Oscillation, start with Section 6.3 of the ENSO theory paper, and then dig into the many references. Here it seems a model with \beta > 0 may work best. If so, noise is not required to keep the ENSO cycle going, but it makes the cycle irregular.

To a mathematician like me, what’s really interesting is how the addition of noise "smooths out" the Hopf bifurcation. When there’s no noise, the qualitative behavior of solutions jumps drastically at \beta = 0: for \beta \le 0 we have a stable equilibrium, while for \beta > 0 we have a stable limit cycle. But in the presence of noise, we get irregular cycles not only for \beta > 0 but also for \beta \le 0. This is not really surprising, but it suggests a bunch of questions. Such as: what are some quantities we can use to describe the behavior of "irregular cycles", and how do these quantities change as a function of \lambda and \beta?

You’ll see some answers to this question in Dijkstra, Frankcombe and von der Heydt’s paper. However, if you’re a mathematician, you’ll instantly think of dozens more questions — like, how can I prove what these guys are saying?

If you make any progress, let me know. If you don’t know where to start, you might try the Dijkstra et al. paper, and then learn a bit about the Hopf bifurcation, stochastic processes, and stochastic differential equations:

• John Guckenheimer and Philip Holmes, Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields, Springer, Berlin, 1983.

• Zdzisław Brzeźniak and Tomasz Zastawniak, Basic Stochastic Processes: A Course Through Exercises, Springer, Berlin, 1999.

• Bernt Øksendal, Stochastic Differential Equations: An Introduction with Applications, 6th edition, Springer, Berlin, 2003.

Now, about the Azimuth Code Project. Tim van Beek started it just recently, but the Azimuth Project seems to be attracting people who can program, so I have high hopes for it. Tim wrote:

My main objectives in starting the Azimuth Code Project were:

• to have a central repository for the code used for simulations or data analysis on the Azimuth Project,

• to have a free-access online repository and make all software open source, enabling anyone to use it, for example to reproduce the results on the Azimuth Project, and to show by example that this can and should be done for every scientific publication.

Of less importance is:

• to implement the software with an eye to software engineering principles.

This is less important because the world of numerical high-performance computing differs significantly from the rest of the software industry: it has special requirements, and it is not at all clear which paradigms useful elsewhere will turn out to be useful here. Nevertheless I’m confident that parts of the scientific community will profit from a closer interaction with software engineering.

So, if you like programming, I hope you’ll chat with us and consider joining in! Our next projects involve limit cycles in predator-prey models, stochastic resonance in some theories of the ice ages, and delay differential equations in ENSO models.

And in case you’re wondering, the code used for the pictures above is a simple implementation in Java of the Euler scheme, using random number generating algorithms from Numerical Recipes. Pictures were generated with gnuplot.


There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies. – C.A.R. Hoare


Energy Return on Energy Invested

27 October, 2010

The Azimuth Project wiki has been up and running for exactly one month!

We’ve built up a nice bunch of articles sketching some of the biggest environmental problems we face today — and some ideas for dealing with them. I invite you to look these over and improve them! It’s very easy to do.

I also invite you to join us at the Azimuth Forum, where we are deciding the fate of humanity (or something like that). We need your help!

In the weeks to come I want to tell you what we’ve learned so far. I especially want to talk about various plans of action that people have formulated to tackle global warming. Even sitting here in the comfort of this cozy blog, you can help us compare and criticize these plans.

But I also want to tell you about some interesting concepts. And the first is EROEI, or “Energy Return On Energy Invested”. The Azimuth Project entry on this concept was largely written by David Tweed. Three cheers for David Tweed!

It also had help from Eric Forgy, Graham Jones and David Pollard, and a major contribution from Anonymous Coward. I’ll shorten it and amp it up for the purposes of this blog. I know you’re here to be entertained.

The Idea

You’ve probably heard the saying “it takes money to make money”. Similarly, it takes energy to make energy. More precisely, it takes useful energy to make useful energy.

Energy Returned On Energy Invested or EROEI captures this idea: it’s simply the ratio of “useful energy acquired” to “useful energy expended”. Note that money does not enter into this concept. The difficult and often heated debate arises when we try to decide which inputs and outputs count as “useful”.

There are other names for this concept and closely related concepts. “Energy profit ratio”, “surplus energy”, “energy gain”, “EROI”, and “EROEI” all describe virtually the same idea: how much energy we receive per unit of energy put in. See:

• Nate Hagens, A Net Energy Parable: Why is ERoEI Important?, The Oil Drum, 2006.

The concept of “energy yield ratio” is also very similar, but tends to be used in slightly different ways. See the Azimuth Project article for more.

Details

The definition of EROEI for a process of “extracting energy” is the useful acquired energy divided by the useful energy expended. The “useful” tag denotes energy which is usable by human beings now. For example: a supernova wastes a lot of energy in the process of making uranium and blasting it out into space. But that was done long before we came along, so it makes no sense to include it in the EROEI inputs.

In practice, people include inputs and outputs that aren’t strictly “energies”, but rather “substances from which energy can be extracted”. For example: one could look at the EROEI of growing trees for fuel, where the wood produced is counted as an output according to the energy extractable by burning.

In general, a high EROEI value counts as “good”. Indeed, when the EROEI drops below 1, more energy is being used in the extraction process than is being output at the end! But because it only considers energy issues (and not resource scarcity, scalability, pollution, etc.), EROEI should be only one input into our process of deciding on technologies and actions.

When it comes to computing EROEI, the hard part is deciding which inputs and outputs should be included in the ratio — particularly since this involves considering which other competing processes are genuinely viable.

Another complication is that while various forms of energy can generally be converted to each other, this will incur losses due to conversion inefficiencies. So, you can’t look at two schemes with the same useful energy inputs that produce different kinds of energy — e.g., electricity and heat — and declare the one with the higher EROEI as more suitable.

Examples

To see some of the difficulties in calculating an EROEI, let’s imagine growing a crop of grass and then fermenting it to produce a liquid fuel. The most obvious inputs and outputs are:

“Energy” outputs:

1. The liquid fuel itself. This is unarguably useful output “energy”.

2. There may be excess heat produced by the fermentation process. Whether this is useful is debatable since the energy is of high entropy and produced at plants located away from energy consumers.

3. The remaining biomass may be suitable for burning. Again the usefulness is debatable, since the biomass may be better used for fertilising the fields used to grow the crop. Even if this isn’t the case, the biomass may require yet more energy to collect into a dry, burnable state.

“Energy” inputs:

1. Sunlight. Except in exceptional circumstances, there is no other use for sunlight falling on fields, so this does not count as a useful input.

2. Artificial fertilizer. This requires energy to produce and could be used for growing food or other crops, so it definitely counts as a useful energy input.

3. Energy used by motorized vehicles, both during farming and transportation to the biomass plant. For the same reasons as fertilizer, this counts as a useful energy input.

4. Mechanical energy used to extract liquid fuel after fermentation and clear waste products from the apparatus. Again a useful energy input.

Thus one computation of EROEI would count output 1 and inputs 2, 3, and 4.

However, suppose that the grass crop is genuinely being grown for other reasons — e.g., as part of a crop rotation scheme — and the plant is sufficiently small that the excess heat can be used fully by the plant for staff heating. Then you could argue that the EROEI should count outputs 1 and 2 and count inputs 3 and 4. So, to determine the EROEI you need to decide which alternative uses are genuinely viable.
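To make the bookkeeping concrete, here’s a tiny sketch in Python. The energy figures are invented purely for illustration; the point is that the very same physical process gets a different EROEI depending on which flows you decide to count as useful:

```python
def eroei(useful_outputs, useful_inputs):
    """Ratio of useful energy acquired to useful energy expended.

    Both arguments are lists of energy amounts in the same units."""
    return sum(useful_outputs) / sum(useful_inputs)

# Hypothetical numbers (in megajoules) for the grass-to-fuel example:
fuel, heat = 100.0, 20.0                             # outputs 1 and 2
fertilizer, vehicles, extraction = 15.0, 10.0, 5.0   # inputs 2, 3 and 4

# First accounting: count only output 1, and inputs 2, 3 and 4
print(eroei([fuel], [fertilizer, vehicles, extraction]))   # ≈ 3.33

# Second accounting: crop grown anyway, excess heat fully used,
# so count outputs 1 and 2, but only inputs 3 and 4
print(eroei([fuel, heat], [vehicles, extraction]))          # 8.0
```

The numbers here are made up, but the spread — roughly 3:1 versus 8:1 for the same plant — shows why deciding which alternative uses are genuinely viable matters so much.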

Note also that this EROEI calculation is purely about energy! It does not reflect issues such as whether the land usage is sustainable, possible soil depletion/erosion, scarcity of mineral inputs for artificial fertilizer, etc.

Comparison

Okay, but enough of these nuances and caveats. Important as they are, I know what you really want: a list of different forms of energy and their EROEI’s!

Natural gas: 10:1
Coal: 50:1
Oil (Ghawar supergiant field): 100:1
Oil (global average): 19:1
Tar sands: 5.2:1 to 5.8:1
Oil shale: 1.5:1 to 4:1

Wind: 18:1
Hydro: 11:1 to 267:1
Waves: 15:1
Tides: ~ 6:1
Geothermal power: 2:1 to 13:1
Solar photovoltaic power: 3.75:1 to 10:1
Solar thermal: 1.6:1

Nuclear power: 1.1:1 to 15:1

Biodiesel: 1.9:1 to 9:1
Ethanol: 0.5:1 to 8:1

This list comes from:

• Richard Heinberg, Searching for a Miracle: ‘Net Energy’ Limits & the Fate of Industrial Society.

You can read this report for more details on how he computed these numbers. If you’re like me, you’ll take a perverse interest in forms of energy production with the lowest EROEIs. For example, what idiot would make ethanol in a way that yields only half as much useful energy as it takes to make the stuff?

The US government, that’s who: the powerful corn lobby has been getting subsidies for some highly inefficient forms of biofuel! But things vary a lot from place to place: corn grows better in the heart of the corn belt (like Iowa) than near the edges (like Texas). So, the production of a bushel of corn in Iowa costs 43 megajoules of energy on average, while in Texas it costs 71 megajoules.

Similarly, ethanol from sugar cane in Brazil has an EROEI of 8:1 to 10:1, but when made from Louisiana sugar cane in the United States, the EROEI is closer to 1:1.

“Solar thermal” also comes out looking bad in the table above, with an EROEI of just 1.6:1. But what’s “solar thermal”? Heinberg has a section on “active” or “concentrating” solar thermal power, where you focus sunlight to heat a liquid to drive a turbine. He also has one on “passive” solar, where you heat your house, or water, by sun falling on it. But he doesn’t give EROEI’s in either of these sections — unlike the sections on other forms of energy. So I can’t see where this figure of 1.6 is coming from.

Anyway, there’s a lot to think about here. Each one of the numbers listed above could serve as the starting-point for a fascinating discussion! Let’s start…


Recommended Reading

2 October, 2010

The Azimuth Project is taking off! Today I woke up and found two new articles full of cool stuff I hadn’t known. Check them out:

EROEI, about the idea of Energy Returned on Energy Invested.

Peak phosphorus, about the crucial role of phosphorus as a fertilizer, and how the moment of peak phosphorus production may have already passed.

Both were initiated by David Tweed, but they’ve both already been polished by other people — Eric Forgy and Graham Jones, so far. So, it’s working!

Here’s the easiest way for you to help save the planet today:

1) Think of the most important book or article you’ve read about environmental problems, how to solve them, or some closely related topic.

2) Go to the Recommended reading page on Azimuth.

3) Click the button that says “Edit” at the bottom left.

4) Add your recommended reading! You’ll see items that look sort of like this:

### _The Necessary Revolution_

* Authors: Peter M. Senge, Bryan Smith, Nina Kruschwitz, Joe Laur and Sara Schley
* Publisher: Random House, New York, 2008
* Recommended by: [[Moneesha Mehta]]
* [Link](http://www.google.com/search?q=the+necessary+revolution)

**Summary:** I confess, I haven’t read the book, but I’ve listened to the abridged version on CD many times as I drive between Toronto and Ottawa. It never fails to inspire me. Peter Senge et al discuss how organizations, private, public, and non-profit, can all work together and build on their organizational strengths to create more sustainable operations.

Copy this format and add:

## _the name of your favorite article or book_

* Author(s): the author(s) name(s)
* Publisher: publisher and date
* Recommended by: [[your name]]
* [Link](a URL to help people find more information)

**Summary:** A little summary.

5) Type your name in the little box at the bottom of the page, and hit the Submit button.

Easy!

And if step 4 seems too complicated, don’t worry! Just enter the information in a paragraph of text — we’ll fix up the formatting.

Our ultimate goal is not a huge unsorted list of important articles and books about environmental issues. We’re trying to build a structure where it’s easy to find information — in fact, wisdom — on specific topics!

But right now we’re just getting started. We need, among other things, to rapidly accumulate relevant data. So — take 5 minutes today to help us out. And find out what other people think you’d enjoy reading!


The Azimuth Project

27 September, 2010

Here’s the long-promised wiki:

The Azimuth Project

We’re going to make this into the place where scientists and engineers will go when they’re looking for reliable information on environmental problems, or ideas for projects to work on.

We’ve got our work cut out for us. If you click the link today — September 27th, 2010 — you won’t see much there. But I promise to keep making it better, with bulldog determination. And I hope you join me.

In addition to the wiki there’s a discussion forum:

The Azimuth Forum

where we can discuss work in progress on the Azimuth Project, engage in collaborative research, and decide on Azimuth policies.

Anybody can read the stuff on the Azimuth Forum. But if you want to join the conversation, you need to become a member. To learn how, read this and carefully follow the steps.

You’ll see a few sample articles on the Azimuth Project wiki, but they’re really just stubs. My plan now is to systematically go through some big issues — starting with some we’ve already discussed here — and blog about them. The resulting blog posts, and your responses to them, will then be fed into the wiki. My goal is to generate:

• readable, reliable summaries of environmental challenges we face,

• pointers to scientists and engineers working on these challenges,

and lists of

• ideas these people have had,

• questions that they have,

• questions that they should be thinking about, but aren’t.

Let the games begin! Let’s try to keep this planet a wonderful place!

