This Week’s Finds (Week 310)

28 February, 2011

I first encountered Gregory Benford through his science fiction novels: my favorite is probably In the Ocean of Night.

Later I learned that he’s an astrophysicist at U.C. Irvine, not too far from Riverside where I teach. But I only actually met him through my wife. She sometimes teaches courses on science fiction, and like Benford, she has some involvement with the Eaton Collection at U.C. Riverside—the largest publicly accessible SF library in the world. So, I was bound to eventually bump into him.

When I did, I learned about his work on electromagnetic filaments near the center of our galaxy—see "week252" for more. I also learned he was seriously interested in climate change, and that he was going to the Asilomar International Conference on Climate Intervention Technologies—a controversial get-together designed to hammer out some policies for research on geoengineering.

Benford is a friendly but no-nonsense guy. Recently he sent me an email mentioning my blog, and said: "Your discussions on what to do are good, though general, while what we need is specifics NOW." Since I’d been meaning to interview him for a while, this gave me the perfect opening.

JB: You’ve been thinking about the future for a long time, since that’s part of your job as a science fiction writer.  For example, you’ve written a whole series about the expansion of human life through the galaxy.  From this grand perspective, global warming might seem like an annoying little road-bump before the ride even gets started.  How did you get interested in global warming? 

GB: I liked writing about the far horizons of our human prospect; it’s fun. But to get even above the envelope of our atmosphere in a sustained way, we have to stabilize the planet. Before we take on the galaxy, let’s do a smaller problem.

JB: Good point. We can’t all ship on out of here, and the way it’s going now, maybe none of us will, unless we get our act together.

Can you remember something that made you think "Wow, global warming is a really serious problem"?  As you know, not everyone is convinced yet.

GB: I looked at the migration of animals and then the steadily northward march of trees. They don’t read newspapers—the trees become newspapers—so their opinion matters more. Plus the retreat of the Arctic Sea ice in summer, the region of the world most endangered by the changes coming. I first focused on carbon capture using the CROPS method. I’m the guy who first proposed screening the Arctic with aerosols to cool it in summer.

JB: Let’s talk about each in turn. "CROPS" stands for Crop Residue Oceanic Permanent Sequestration. The idea sounds pretty simple: dump a lot of crop residues—stalks, leaves and stuff—on the deep ocean floor. That way, we’d be letting plants suck CO2 out of the atmosphere for us.

GB: Agriculture is the world’s biggest industry; we should take advantage of it. That’s what gave Bob Metzger and me the idea: collect farm waste and sink it to the bottom of the ocean, whence it shall not return for 1000 years. Cheap, easy, doable right now.

JB: But we have to think about what’ll happen if we dump all that stuff into the ocean, right? After all, the USA alone creates half a gigatonne of crop residues each year, and world-wide it’s ten times that. I’m getting these numbers from your papers:

• Robert A. Metzger and Gregory Benford, Sequestering of atmospheric carbon through permanent disposal of crop residue, Climatic Change 49 (2001), 11-19.

• Stuart E. Strand and Gregory Benford, Ocean sequestration of crop residue carbon: recycling fossil fuel carbon back to deep sediments, Environmental Science and Technology 43 (2009), 1000-1007.

Since we’re burning over 7 gigatonnes of carbon each year, burying 5 gigatonnes of crop waste is just enough to make a serious dent in our carbon footprint. But what’ll that much junk do at the bottom of the ocean?

GB: We’re testing the chemistry of how farm waste interacts with deep ocean sites offshore Monterey Bay right now. Here’s a picture of a bale 3.2 km down:

JB: I’m sure our audience will have more questions about this… but the answers to some are in your papers, and I want to spend a bit more time on your proposal to screen the Arctic. There’s a good summary here:

• Gregory Benford, Climate controls, Reason Magazine, November 1997.

But in brief, it sounds like you want to test the results of spraying a lot of micron-sized dust into the atmosphere above the Arctic Sea during the summer. You suggest diatomaceous earth as an option, because it’s chemically inert: just silica. How would the test work, exactly, and what would you hope to learn?

GB: The US has inflight refueling aircraft such as the KC-10 Extender that with minor changes could spread aerosols at relevant altitudes, and pilots who know how to fly big sausages filled with fluids.



Rather than diatomaceous earth, I now think ordinary SO2 or H2S will work, if there’s enough water at the relevant altitudes. Turns out the pollutant issue is minor, since it would be only a percent or so of the SO2 already in the Arctic troposphere. The point is to spread aerosols to diminish sunlight and look for signals of less sunlight on the ground, changes in sea ice loss rates in summer, etc. It’s hard to do a weak experiment and be sure you see a signal. Doing regional experiments helps, so you can see a signal before the aerosols spread much. It’s a first step, an in-principle experiment.

Simulations show it can stop the sea ice retreat. Many fear if we lose the sea ice in summer ocean currents may alter; nobody really knows. We do know that the tundra is softening as it thaws, making roads impassable and shifting many wildlife patterns, with unforeseen long term effects. Cooling the Arctic back to, say, the 1950 summer temperature range would cost maybe $300 million/year, i.e., nothing. Simulations show that doing this globally, offsetting say CO2 at 500 ppm, might cost a few billion dollars per year. That doesn’t help ocean acidification, but it’s a start on the temperature problem.

JB: There’s an interesting blog on Arctic political, military and business developments:

• Anatoly Karlin, Arctic Progress.

Here’s the overview:

Today, global warming is kick-starting Arctic history. The accelerating melting of Arctic sea ice promises to open up circumpolar shipping routes, halving the time needed for container ships and tankers to travel between Europe and East Asia. As the ice and permafrost retreat, the physical infrastructure of industrial civilization will overspread the region [...]. The four major populated regions encircling the Arctic Ocean—Alaska, Russia, Canada, Scandinavia (ARCS)—are all set for massive economic expansion in the decades ahead. But the flowering of industrial civilization’s fruit in the thawing Far North carries within it the seeds of its perils. The opening of the Arctic is making border disputes more serious and spurring Russian and Canadian military buildups in the region. The warming of the Arctic could also accelerate global warming—and not just through the increased economic activity and hydrocarbons production. One disturbing possibility is that the melting of the Siberian permafrost will release vast amounts of methane, a greenhouse gas that is far more potent than CO2, into the atmosphere, and tip the world into runaway climate change.

But anyway, unlike many people, I’m not mentioning risks associated with geoengineering in order to instantly foreclose discussion of it, because I know there are also risks associated with not doing it. If we rule out doing anything really new because it’s too expensive or too risky, we might wind up locking ourselves in a "business as usual" scenario. And that could be even more risky—and perhaps ultimately more expensive as well.

GB: Yes, no end of problems. Most impressive is how they look like a descending spiral, self-reinforcing.

Certainly countries now scramble for Arctic resources, trade routes opened by thawing—all likely to become hotly contested strategic assets. So too, melting Himalayan glaciers could perhaps trigger "water wars" in Asia—especially India and China, two vast lands of very different cultures. Then, later on, come rising sea levels. Florida starts to go away. The list is endless and therefore uninteresting. We all saturate.

So droughts, floods, desertification, hammering weather events—they draw ever less attention as they grow more common. Maybe Darfur is the first "climate war." It’s plausible.

The Arctic is the canary in the climate coalmine. Cutting CO2 emissions will take far too long to significantly affect the sea ice. Permafrost melts there, giving additional positive feedback. Methane release from the not-so-perma-frost is the most dangerous amplifying feedback in the entire carbon cycle. As John Nissen has repeatedly called attention to, the permafrost permamelt holds a staggering 1.5 trillion tons of frozen carbon, about twice as much carbon as is in the atmosphere. Much would emerge as methane. Methane is 25 times as potent a heat-trapping gas as CO2 over a century, and 72 times as potent over the first 20 years! The carbon is locked in a freezer. Yet that’s the part of the planet warming up the fastest. Really bad news:

• Kevin Schaefer, Tingjun Zhang, Lori Bruhwiler and Andrew P. Barrett, Amount and timing of permafrost carbon release in response to climate warming, Tellus, 15 February 2011.

Abstract: The thaw and release of carbon currently frozen in permafrost will increase atmospheric CO2 concentrations and amplify surface warming to initiate a positive permafrost carbon feedback (PCF) on climate. We use surface weather from three global climate models based on the moderate warming, A1B Intergovernmental Panel on Climate Change emissions scenario and the SiBCASA land surface model to estimate the strength and timing of the PCF and associated uncertainty. By 2200, we predict a 29-59% decrease in permafrost area and a 53-97 cm increase in active layer thickness. By 2200, the PCF strength in terms of cumulative permafrost carbon flux to the atmosphere is 190±64 gigatonnes of carbon. This estimate may be low because it does not account for amplified surface warming due to the PCF itself and excludes some discontinuous permafrost regions where SiBCASA did not simulate permafrost. We predict that the PCF will change the arctic from a carbon sink to a source after the mid-2020s and is strong enough to cancel 42-88% of the total global land sink. The thaw and decay of permafrost carbon is irreversible and accounting for the PCF will require larger reductions in fossil fuel emissions to reach a target atmospheric CO2 concentration.

Particularly interesting is the slowing of thermohaline circulation. In John Nissen’s "two scenarios" work there’s an uncomfortably cool future—if the Gulf Stream were to be diverted by meltwater flowing into the NW Atlantic. There’s also an unbearably hot future, if the methane from the not-so-permafrost causes global warming to spiral out of control. So we have a terrifying menu.

JB: I recently interviewed Nathan Urban here. He explained a paper where he estimated the chance that the Atlantic current you’re talking about could collapse. (Technically, it’s the Atlantic meridional overturning circulation, not quite the same as the Gulf Stream.) They got a 10% chance of it happening in two centuries, assuming a business as usual scenario. But there are a lot of uncertainties in the modeling here.

Back to geoengineering. I want to talk about some ways it could go wrong, how soon we’d find out if it did, and what we could do then.

For example, you say we’ll put sulfur dioxide in the atmosphere below 15 kilometers, and most of the ozone is above 20 kilometers. That’s good, but then I wonder how much sulfur dioxide will diffuse upwards. As the name suggests, the stratosphere is "stratified" —there’s not much turbulence. That’s reassuring. But I guess one reason to do experiments is to see exactly what really happens.

GB: It’s really the only way to go forward. I fear we are now in the Decade of Dithering that will end with the deadly 2020s. Only then will experiments get done and issues engaged. All else, as tempting as ideas and simulations are, spells delay if it does not couple with real field experiments—from nozzle sizes on up to albedo measures—which finally decide.

JB: Okay. But what are some other things that could go wrong with this sulfur dioxide scheme? I know you’re not eager to focus on the dangers, but you must be able to imagine some plausible ones: you’re an SF writer, after all. If you say you can’t think of any, I won’t believe you! And part of good design is looking for possible failure modes.

GB: Plenty can go wrong with so vast an idea. But we can learn from volcanoes, that give us useful experiments, though sloppy and noisy ones, about putting aerosols into the air. Monitoring those can teach us a lot with little expense.

We can fail to get the aerosols to avoid clumping, so they fall out too fast. Or we can somehow trigger a big shift in rainfall patterns—a special danger in a system already loaded with surplus energy, one already displaying anomalies like the bitter winters in Europe, floods in Pakistan and drought in Darfur. Indeed, some of Alan Robock’s simulations of Arctic aerosol use show a several percent decline in monsoon rain—though that may be a plus, since flooding is the #1 cause of death and destruction during the Indian monsoon.

Mostly, it might just plain fail to work. Guessing outcomes is useless, though.  Here’s where experiment rules, not simulations. This is engineering, which learns from mistakes. Consider the early days of aviation. Having more time to develop and test a system gives more time to learn how to avoid unwanted impacts. Of course, having a system ready also increases the probability of premature deployment; life is about choices and dangers.

More important right now than developing capability is understanding the consequences of deploying that capability, by doing field experiments. One thing we know: both science and engineering advance most quickly by using the dance of theory with experiment. Neglecting this, preferring only experiment, is a fundamental mistake.

JB: Switching gears slightly: in March last year you went to the Asilomar Conference on climate intervention technologies. I’ve read the report:

• Asilomar Scientific Organizing Committee, The Asilomar Conference Recommendations on Principles for Research into Climate Engineering Techniques, Climate Institute, Washington DC, 2010.

It seems unobjectionable and a bit bland, no doubt deliberately so, with recommendations like this:

"Public participation and consultation in research planning and oversight, assessments, and development of decision-making mechanisms and processes must be provided."

What were some interesting things that you learned there? And what’ll happen next?

GB: It was the Woodstock of the policy wonks. I found it depressing. Not much actual science got discussed, and most just fearlessly called for more research, forming of panels and committees, etc. This is how bureaucracy digests a problem, turning it quite often into fertilizer.

I’m a physicist who does both theory and experiment. I want to see work that combines those to give us real information and paths to follow. I don’t see that anywhere now. Congress might hand out money for it, but after the GAO report on geoengineering last September there seems to be little movement.

I did see some people pushing their carbon capture companies, to widespread disbelief. The simple things we could do right now like our CROPS carbon capture proposal are neglected, while entrepreneur companies hope for a government scheme to pay for sucking CO2 from the air. That’ll be the day!—far into the crisis, I think, maybe several decades from now. I also saw fine ideas pushed aside in favor of policy wonk initiatives. It was a classic triumph of process over results. As in many areas dominated by social scientists, people seemed to be saying, "Nobody can blame us if we go through the motions."

That Decade of Dithering is upon us now. The great danger is that tipping points may not be obvious, even as we cross them. They may present as small events that nonetheless take us over a horizon from which we can never return.

For example, the loss of Greenland ice. Once the ice sheet melts down to an altitude below that needed to maintain it, we’ve lost it. The melt lubricates the glacier base and starts a slide we cannot stop. There are proposals of how to block that—essentially, draw the water out from the base as fast as it appears—but nobody’s funding such studies.

A reasonable, ongoing climate control program might cost $100 million annually. That includes small field experiments, trials with spraying aerosols, etc. We now spend about $5 billion per year globally studying the problem, so climate control studies would be 1/50 of that.

Even now, we may already be too late for a tipping point—we still barely glimpse the horrors we could be visiting on our children and their grandchildren’s grandchildren.

JB: I think a lot of young people are eager to do something. What would be your advice, especially to future scientists and engineers? What should they do? The problems seem so huge, and most so-called "adults" are shirking their responsibilities—perhaps hoping they’ll be dead before things get too bad.

GB: One reason people are paralyzed is simple: major interests would get hurt—coal, oil, etc. The fossil fuel industry is the second largest in the world; #1 is agriculture. We have ~50 trillion dollars of infrastructure invested in it. That and inertia—we’ve made the crucial fuel of our world a Bad Thing, and prohibition never works with free people. Look at the War on Drugs, now nearing its 40th anniversary.

That’s why I think adaptation—dikes, water conservation, reflecting roofs and blacktop to cool cities and lower their heating costs, etc.—is a smart way to prepare. We should also fund research in mineral weathering as a way to lock up CO2, which not only consumes CO2 but can also generate ocean alkalinity. The acidification of the oceans is undeniable, easily measured, and accelerating. Plus geoengineering, which is probably the only fairly cheap, quick way to damp the coming chaos for a while. A stopgap, but we’re going to need plenty of those.

JB: And finally, what about you? What are you doing these days? Science fiction? Science? A bit of both?

GB: Both, plus. Last year I published a look at how we viewed the future in the 20th Century, The Wonderful Future We Never Had, and have a novel in progress now cowritten with Larry Niven—about a Really Big Object. Plus some short stories and journalism.

My identical twin brother Jim & I published several papers looking at SETI from the perspective of those who would pay the bills for a SETI beacon, and reached conclusions opposite from what the SETI searches of the last half century have sought. Instead of steady, narrowband signals near 1 GHz, it is orders of magnitude cheaper to radiate pulsed, broadband beacon signals nearer 10 GHz. This suggests a new way to look for pulsed signals, which some are trying to find. We may have been looking for the wrong thing all along. The papers are on the arXiv:

• James Benford, Gregory Benford and Dominic Benford, Messaging with cost optimized interstellar beacons.

• Gregory Benford, James Benford and Dominic Benford, Searching for cost optimized interstellar beacons.

For math types, David Wolpert and I have shown that Newcomb’s paradox arises from confusions in the statement, so is not a paradox:

• David H. Wolpert and Gregory Benford, What does Newcomb’s paradox teach us?

JB: The next guest on this show, Eliezer Yudkowsky, has also written about Newcomb’s paradox. I should probably say what it is, just for folks who haven’t heard yet. I’ll quote Yudkowsky’s formulation, since it’s nice and snappy:

A superintelligence from another galaxy, whom we shall call Omega, comes to Earth and sets about playing a strange little game. In this game, Omega selects a human being, sets down two boxes in front of them, and flies away.

Box A is transparent and contains a thousand dollars.
Box B is opaque, and contains either a million dollars, or nothing.

You can take both boxes, or take only box B.

And the twist is that Omega has put a million dollars in box B if and only if Omega has predicted that you will take only box B.

Omega has been correct on each of 100 observed occasions so far—everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars. (We assume that box A vanishes in a puff of smoke if you take only box B; no one else can take box A afterward.)

Before you make your choice, Omega has flown off and moved on to its next game. Box B is already empty or already full.

Omega drops two boxes on the ground in front of you and flies off.

Do you take both boxes, or only box B?

If you say you’d take both boxes, I’ll argue that’s stupid: everyone who did that so far got just a thousand dollars, while the folks who took only box B got a million!

If you say you’d take only box B, I’ll argue that’s stupid: there has got to be more money in both boxes than in just one of them!

So, this puzzle has a kind of demonic attraction. Lots of people have written about it, though personally I’m waiting until a superintelligence from another galaxy actually shows up and performs this stunt.

Hmm—I see your paper uses Bayesian networks! I’ve been starting to think about those lately.

But I know that’s not all you’ve been doing.

GB: I also started several biotech companies 5 years ago, spurred in part by the agonizing experience of watching my wife die of cancer for decades, ending in 2002. They’re genomics companies devoted to extending human longevity by upregulating genes we know confer some defenses against cardio, neurological and other diseases. Our first product just came out, StemCell100, and did well in animal and human trials.

So I’m staying busy. The world gets more interesting all the time. Compared with growing up in the farm country of Alabama, this is a fine way to live.

JB: It’s been great to hear what you’re up to. Best of luck on all these projects, and thanks for answering my questions!


Few doubt that our climate stands in a class by itself in terms of complexity. Though much is made of how wondrous our minds are, perhaps the most complex entity known is our biosphere, in which we are mere mayflies. Absent a remotely useful theory of complexity in systems, we must proceed cautiously. – Gregory Benford


This Week’s Finds (Week 309)

17 February, 2011

In the next issues of This Week’s Finds, I’ll return to interviewing people who are trying to help humanity deal with some of the risks we face.

First I’ll talk to the science fiction author and astrophysicist Gregory Benford. I’ll ask him about his ideas on “geoengineering” — proposed ways of deliberately manipulating the Earth’s climate to counteract the effects of global warming.

After that, I’ll spend a few weeks asking Eliezer Yudkowsky about his ideas on rationality and “friendly artificial intelligence”. Yudkowsky believes that the possibility of dramatic increases in intelligence, perhaps leading to a technological singularity, should command more of our attention than it does.

Needless to say, all these ideas are controversial. They’re exciting to some people — and infuriating, terrifying or laughable to others. But I want to study lots of scenarios and lots of options in a calm, level-headed way without rushing to judgement. I hope you enjoy it.

This week, I want to say a bit more about the Hopf bifurcation!

Last week I talked about applications of this mathematical concept to climate cycles like the El Niño – Southern Oscillation. But over on the Azimuth Project, Graham Jones has explained an application of the same math to a very different subject:

• Quantitative ecology, Azimuth Project.

That’s one thing that’s cool about math: the same patterns show up in different places. So, I’d like to take advantage of his hard work and show you how a Hopf bifurcation shows up in a simple model of predator-prey interactions.

Suppose we have some rabbits that reproduce endlessly, with their numbers growing at a rate proportional to their population. Let x(t) be the number of animals at time t. Then we have:

\frac{d x}{d t} = r x

where r is the growth rate. This gives exponential growth: it has solutions like

x(t) = x_0 e^{r t}

To get a slightly more realistic model, we can add ‘limits to growth’. Instead of a constant growth rate, let’s try a growth rate that decreases as the population increases. Let’s say it decreases in a linear way, and drops to zero when the population hits some value K. Then we have

\frac{d x}{d t} = r (1-x/K) x

This is called the "logistic equation". K is known as the "carrying capacity": the idea is that the environment only has enough resources to support a population of size K. If the population is less, it’ll grow; if it’s more, it’ll shrink.

If you know some calculus you can solve the logistic equation by hand by separating the variables and integrating both sides; it’s a textbook exercise. The solutions are called “logistic functions”, and they look sort of like this:



The above graph shows the simplest solution:

x = \frac{e^t}{e^t + 1}

of the simplest logistic equation in the world:

\frac{ d x}{d t} = (1 - x)x

Here the carrying capacity is 1. Populations less than 1 sound a bit silly, so think of it as 1 million rabbits. You can see how the solution starts out growing almost exponentially and then levels off. There’s a very different-looking solution where the population starts off above the carrying capacity and decreases. There’s also a silly solution involving negative populations. But whenever the population starts out positive, it approaches the carrying capacity.
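In case you want to see that textbook exercise spelled out, here is the separation of variables for this simplest case. Writing the equation as

\frac{d x}{(1-x)x} = d t

and using the partial fraction decomposition 1/((1-x)x) = 1/x + 1/(1-x), integrating both sides gives

\ln \frac{x}{1-x} = t + c

for some constant c. Solving for x:

x = \frac{e^{t+c}}{e^{t+c} + 1}

Choosing c = 0 gives the solution graphed above; other choices of c just shift the graph in time.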

The solution where the population just stays at the carrying capacity:

x = 1

is called a “stable equilibrium”, because it’s constant in time and nearby solutions approach it.

But now let’s introduce another species: some wolves, which eat the rabbits! So, let x be the number of rabbits, and y the number of wolves. Before the rabbits meet the wolves, let’s assume they obey the logistic equation:

\frac{ d x}{d t} = x(1-x/K)

And before the wolves meet the rabbits, let’s assume they obey this equation:

\frac{ d y}{d t} = -y

so that their numbers would decay exponentially to zero if there were nothing to eat.

So far, not very interesting. But now let’s include a term that describes how predators eat prey. Let’s say that on top of the above effect, the predators grow in numbers, and the prey decrease, at a rate proportional to:

x y/(1+x).

For small numbers of prey and predators, this means that predation increases nearly linearly with both x and y. But if you have one wolf surrounded by a million rabbits in a small area, the rate at which it eats rabbits won’t double if you double the number of rabbits! So, this formula includes a limit on predation as the number of prey increases.

Okay, so let’s try these equations:

\frac{ d x}{d t} = x(1-x/K) - 4x y/(x+1)

and

\frac{ d y}{d t} = -y + 2x y/(x+1)

The constants 4 and 2 here have been chosen for simplicity rather than realism.

Before we plunge ahead and get a computer to solve these equations, let’s see what we can do by hand. Setting d x/d t = 0 gives the interesting parabola

y = \frac{1}{4}(1-x/K)(x+1)

together with the boring line x = 0. (If you start with no prey, that’s how it will stay. It takes bunnies to make bunnies.)

Setting d y/d t = 0 gives the interesting line

x=1

together with the boring line y = 0.

The interesting parabola and the interesting line separate the x y plane into four parts, so these curves are called separatrices. They meet at the point

x = 1, \quad y = \frac{1}{2} (1 - 1/K)

which of course is an equilibrium, since d x / d t = d y / d t = 0 there. But when K < 1 this equilibrium occurs at a negative value of y, and negative populations make no sense.

So, if K < 1 there is no equilibrium population, and with a bit more work one can see the problem: the wolves die out. For larger values of K there is an equilibrium population. But the nature of this equilibrium depends on K: that’s the interesting part.

We could figure this out analytically, but let’s look at two of Graham’s plots. Here’s a solution when K = 2.5:

The gray curves are the separatrices. The red curve shows a solution of the equations, with the numbers showing the passage of time. So, you can see that the solution spirals in towards the equilibrium. That’s what you expect of a stable equilibrium.

Here’s a picture when K = 3.5:

The red and blue curves are two solutions, again numbered to show how time passes. The red curve spirals in towards the dotted gray curve. The blue one spirals out towards it. The gray curve is also a solution. It’s called a “stable limit cycle” because it’s periodic, and nearby solutions move closer and closer to it.

With a bit more work, we could show analytically that whenever 1 < K < 3 there is a stable equilibrium. As we increase K, when K passes 3 this stable equilibrium suddenly becomes a tiny stable limit cycle. This is a Hopf bifurcation!
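If you’d like to reproduce plots like Graham’s yourself, here is a minimal sketch of how one might integrate these equations with a simple Euler scheme, in the spirit of the Java code mentioned at the end of the Week 308 article below. It is only an illustration, not Graham’s actual program (his R code is linked near the end of this article): K, the step size, the initial populations and the stopping time are arbitrary choices.

// Euler-scheme sketch of the predator-prey model
//   dx/dt = x(1 - x/K) - 4xy/(x+1)
//   dy/dt = -y + 2xy/(x+1)
// Illustrative only: K, the step size and the initial point are arbitrary.
public class PredatorPrey {
    static final double K = 3.5;   // try 2.5 (stable equilibrium) or 3.5 (limit cycle)

    static double dxdt(double x, double y) { return x * (1 - x / K) - 4 * x * y / (x + 1); }
    static double dydt(double x, double y) { return -y + 2 * x * y / (x + 1); }

    public static void main(String[] args) {
        double x = 1.0, y = 1.0;   // initial rabbit and wolf populations
        double dt = 0.001;         // Euler step size
        for (int i = 0; i <= 100000; i++) {
            if (i % 1000 == 0)     // print t, x, y once per time unit
                System.out.printf("%.1f %.4f %.4f%n", i * dt, x, y);
            double xNew = x + dt * dxdt(x, y);
            double yNew = y + dt * dydt(x, y);
            x = xNew;
            y = yNew;
        }
    }
}

Plotting the (x, y) pairs it prints should show a spiral into the equilibrium for K = 2.5 and convergence onto the limit cycle for K = 3.5.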

Now, what if we add noise? We saw the answer last week: where we before had a stable equilibrium, we now can get irregular cycles — because the noise keeps pushing the solution away from the equilibrium!

Here’s how it looks for K=2.5 with white noise added:

The following graph shows a longer run in the noisy K=2.5 case, with rabbits (x) in black and wolves (y) in gray:



There is irregular periodicity — and as you’d expect, the predators tend to lag behind the prey. A burst in the rabbit population causes a rise in the wolf population; a lot of wolves eat a lot of rabbits; a crash in rabbits causes a crash in wolves.

This sort of phenomenon is actually seen in nature sometimes. The most famous case involves the snowshoe hare and the lynx in Canada. It was first noted by MacLulich:

• D. A. MacLulich, Fluctuations in the Numbers of the Varying Hare (Lepus americanus), University of Toronto Studies Biological Series 43, University of Toronto Press, Toronto, 1937.

The snowshoe hare is also known as the “varying hare”, because its coat varies in color quite dramatically. In the summer it looks like this:



In the winter it looks like this:



The Canada lynx is an impressive creature:



But don’t be too scared: it only weighs 8-11 kilograms, nothing like a tiger or lion.

Down in the United States, the same lynx species went extinct in Colorado around 1973 — but now it’s back!

• Colorado Division of Wildlife, Success of the Lynx Reintroduction Program, 27 September, 2010.

In Canada, at least, the lynx rely on the snowshoe hare for 60% to 97% of their diet. I suppose this is one reason the hare has evolved such magnificent protective coloration. This is also why the hare and lynx populations are tightly coupled. They rise and crash in irregular cycles that look a bit like what we saw in our simplified model:



This cycle looks a bit more strongly periodic than Graham’s graph, so to fit this data, we might want to choose parameters that give a limit cycle rather than a stable equilibrium.

But I should warn you, in case it’s not obvious: everything about population biology is infinitely more complicated than the models I’ve shown you so far! Some obvious complications: snowshoe hare breed in the spring, their diet varies dramatically over the course of the year, and the lynx also eat rodents and birds, carrion when it’s available, and sometimes even deer. Some less obvious ones: the hare will eat dead mice and even dead hare when they’re available, and the lynx can control the size of their litter depending on the abundance of food. And I’m sure all these facts are just the tip of the iceberg. So, it’s best to think of models here as crude caricatures designed to illustrate a few features of a very complex system.

I hope someday to say a bit more and go a bit deeper. Do any of you know good books or papers to read, or fascinating tidbits of information? Graham Jones recommends this book for some mathematical aspects of ecology:

• Michael R. Rose, Quantitative Ecological Theory, Johns Hopkins University Press, Maryland, 1987.

Alas, I haven’t read it yet.

Also: you can get Graham’s R code for predator-prey simulations at the Azimuth Project.


Under carefully controlled experimental circumstances, the organism will behave as it damned well pleases. – the Harvard Law of Animal Behavior


This Week’s Finds (Week 308)

24 December, 2010

Last week we met the El Niño-Southern Oscillation, or ENSO. I like to explain things as I learn about them. So, often I look back and find my explanations naive. But this time it took less than a week!

What did it was reading this:

• J. D. Neelin, D. S. Battisti, A. C. Hirst et al., ENSO theory, J. Geophys. Res. 103 (1998), 14261-14290.

I wouldn’t recommend this to the faint of heart. It’s a bit terrifying. It’s well-written, but it tells the long and tangled tale of how theories of the ENSO phenomenon evolved from 1969 to 1998 — a period that saw much progress, but did not end with a neat, clean understanding of this phenomenon. It’s packed with hundreds of references, and sprinkled with somewhat intimidating remarks like:

The Fourier-decomposed longitude and time dependence of these eigensolutions obey dispersion relations familiar to every physical oceanographer…

Nonetheless I found it fascinating — so, I’ll pick off one small idea and explain it now.

As I’m sure you’ve heard, climate science involves some extremely complicated models: some of the most complex known to science. But it also involves models of lesser complexity, like the "box model" explained by Nathan Urban in "week304". And it also involves some extremely simple models that are designed to isolate some interesting phenomena and display them in their Platonic ideal form, stripped of all distractions.

Because of their simplicity, these models are great for mathematicians to think about: we can even prove theorems about them! And simplicity goes along with generality, so the simplest models of all tend to be applicable — in a rough way — not just to the Earth’s climate, but to a vast number of systems. They are, one might say, general possibilities of behavior.

Of course, we can’t expect simple models to describe complicated real-world situations very accurately. That’s not what they’re good for. So, even calling them "models" could be a bit misleading. It might be better to call them "patterns": patterns that can help organize our thinking about complex systems.

There’s a nice mathematical theory of these patterns… indeed, several such theories. But instead of taking a top-down approach, which gets a bit abstract, I’d rather tell you about some examples, which I can illustrate using pictures. But I didn’t make these pictures. They were created by Tim van Beek as part of the Azimuth Code Project. The Azimuth Code Project is a way for programmers to help save the planet. More about that later, at the end of this article.

As we saw last time, the ENSO cycle relies crucially on interactions between the ocean and atmosphere. In some models, we can artificially adjust the strength of these interactions, and we find something interesting. If we set the interaction strength to less than a certain amount, the Pacific Ocean will settle down to a stable equilibrium state. But when we turn it up past that point, we instead see periodic oscillations! Instead of a stable equilibrium state where nothing happens, we have a stable cycle.

This pattern, or at least one pattern of this sort, is called the "Hopf bifurcation". There are various differential equations that exhibit a Hopf bifurcation, but here’s my favorite:

\frac{d x}{d t} =  -y + \beta  x - x (x^2 + y^2)

\frac{d y}{d t} =  \; x + \beta  y - y (x^2 + y^2)

Here x and y are functions of time, t, so these equations describe a point moving around on the plane. It’s easier to see what’s going on in polar coordinates:

\frac{d r}{d t} = \beta r - r^3

\frac{d \theta}{d t} = 1
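In case you want to check this change of coordinates: since r^2 = x^2 + y^2, differentiating and using the equations above gives

r \frac{d r}{d t} = x \frac{d x}{d t} + y \frac{d y}{d t} = \beta (x^2 + y^2) - (x^2 + y^2)^2 = \beta r^2 - r^4

so dividing by r gives the equation for r. Similarly, differentiating \theta = \arctan(y/x) gives

\frac{d \theta}{d t} = \frac{1}{r^2} \left( x \frac{d y}{d t} - y \frac{d x}{d t} \right) = \frac{x^2 + y^2}{r^2} = 1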

The angle \theta goes around at a constant rate while the radius r does something more interesting. When \beta \le 0, you can see that any solution spirals in towards the origin! Or, if it starts at the origin, it stays there. So, we call the origin a "stable equilibrium".

Here’s a typical solution for \beta = -1/4, drawn as a curve in the x y plane. As time passes, the solution spirals in towards the origin:

The equations are more interesting for \beta > 0. Then dr/dt = 0 whenever

\beta r - r^3 = 0

This has two solutions, r = 0 and r = \sqrt{\beta}. Since r = 0 is a solution, the origin is still an equilibrium. But now it’s not stable: if r is between 0 and \sqrt{\beta}, we’ll have \beta r - r^3 > 0, so our solution will spiral out, away from the origin and towards the circle r = \sqrt{\beta}. So, we say the origin is an "unstable equilibrium". On the other hand, if r starts out bigger than \sqrt{\beta}, our solution will spiral in towards that circle.

Here’s a picture of two solutions for \beta = 1:

The red solution starts near the origin and spirals out towards the circle r = \sqrt{\beta}. The green solution starts outside this circle and spirals in towards it, soon becoming indistinguishable from the circle itself. So, this equation describes a system where x and y quickly settle down to a periodic oscillating behavior.

Since solutions that start anywhere near the circle r = \sqrt{\beta} will keep going round and round getting closer to this circle, it’s called a "stable limit cycle".

This is what the Hopf bifurcation is all about! We’ve got a dynamical system that depends on a parameter, and as we change this parameter, a stable fixed point becomes unstable, and a stable limit cycle forms around it.

This isn’t quite a mathematical definition yet, but it’s close enough for now. If you want something a bit more precise, try:

• Yuri A. Kuznetsov, Andronov-Hopf bifurcation, Scholarpedia, 2006.

Now, clearly the Hopf bifurcation idea is too simple for describing real-world weather cycles like the ENSO. In the Hopf bifurcation, our system settles down into an orbit very close to the limit cycle, which is perfectly periodic. The ENSO cycle is only roughly periodic:



The time between El Niños varies between 3 and 7 years, averaging around 4 years. There can also be two El Niños without an intervening La Niña, or vice versa. One can try to explain this in various ways.

One very simple, general idea is to add random noise to whatever differential equation we were using to model the ENSO cycle, obtaining a so-called stochastic differential equation: a differential equation describing a random process. Richard Kleeman discusses this idea in Tim Palmer’s book:

• Richard Kleeman, Stochastic theories for the irregularity of ENSO, in Stochastic Physics and Climate Modelling, eds. Tim Palmer and Paul Williams, Cambridge U. Press, Cambridge, 2010, pp. 248-265.

Kleeman mentions three general theories for the irregularity of the ENSO. They all involve the idea of separating the weather into "modes" — roughly speaking, different ways that things can oscillate. Some modes are slow and some are fast. The ENSO cycle is defined by the behavior of certain slow modes, but of course these interact with the fast modes. So, there are various options:

  1. Perhaps the relevant slow modes interact with each other in a chaotic way.
  2. Perhaps the relevant slow modes interact with each other in a non-chaotic way, but also interact with chaotic fast modes, which inject noise into what would otherwise be simple periodic behavior.
  3. Perhaps the relevant slow modes interact with each other in a chaotic way, and also interact in a significant way with chaotic fast modes.

Kleeman reviews work on the first option but focuses on the second. The third option is the most complicated, so the pessimist in me suspects that’s what’s really going on. Still, it’s good to start by studying simple models!

How can we get a simple model that illustrates the second option? Simple: take the model we just saw, and add some noise! This idea is discussed in detail here:

• H. A. Dijkstra, L. M. Frankcombe and A. S. von der Heydt, The Atlantic Multidecadal Oscillation: a stochastic dynamical systems view, in Stochastic Physics and Climate Modelling, eds. Tim Palmer and Paul Williams, Cambridge U. Press, Cambridge, 2010, pp. 287-306.

This paper is not about the ENSO cycle, but another one, which is often nicknamed the AMO. I would love to talk about it — but not now. Let me just show you the equations for a Hopf bifurcation with noise:

\frac{d x}{d t} =  -y + \beta  x - x (x^2 + y^2) + \lambda \frac{d W_1}{d t}

\frac{d y}{d t} =  \; x + \beta  y - y (x^2 + y^2) + \lambda \frac{d W_2}{d t}

They’re the same as before, but with some new extra terms at the end: that’s the noise.

This could easily get a bit technical, but I don’t want it to. So, I’ll just say some buzzwords and let you click on the links if you want more detail. W_1 and W_2 are two independent Wiener processes, so they describe Brownian motion in the x and y coordinates. When we differentiate a Wiener process we get white noise. So, we’re adding some amount of white noise to the equations we had before, and the number \lambda says precisely how much. That means that x and y are no longer specific functions of time: they’re random functions, also known as stochastic processes.

If this were a math course, I’d feel obliged to precisely define all the terms I just dropped on you. But it’s not, so I’ll just show you some pictures!

If \beta = 1 and \lambda = 0.1, here are some typical solutions:

They look similar to the solutions we saw before for \beta = 1, but now they have some random wiggles added on.

(You may be wondering what this picture really shows. After all, I said the solutions were random functions of time, not specific functions. But it’s tough to draw a "random function". So, to get one of the curves shown above, what Tim did is randomly choose a function according to some rule for computing probabilities, and draw that.)

If we turn up the noise, our solutions get more wiggly. If \beta = 1 and \lambda = 0.3, they look like this:

In these examples, \beta > 0, so we would have a limit cycle if there weren’t any noise — and you can see that even with noise, the solutions approximately tend towards the limit cycle. So, we can use an equation of this sort to describe systems that oscillate, but in a somewhat random way.

But now comes the really interesting part! Suppose \beta \le 0. Then we’ve seen that without noise, there’s no limit cycle: any solution quickly spirals in towards the origin. But with noise, something a bit different happens. If \beta = -1/4 and \lambda = 0.1 we get a picture like this:

We get irregular oscillations even though there’s no limit cycle! Roughly speaking, the noise keeps knocking the solution away from the stable fixed point at x = y = 0, so it keeps going round and round, but in an irregular way. It may seem to be spiralling in, but if we waited a bit longer it would get kicked out again.

This is a lot easier to see if we plot just x as a function of t. Then we can run our solution for a longer time without the picture becoming a horrible mess:

If you compare this with the ENSO cycle, you’ll see they look roughly similar:



That’s nice. Of course it doesn’t prove that a model based on a Hopf bifurcation plus noise is "right" — indeed, we don’t really have a model until we’ve chosen variables for both x and y. But it suggests that a model of this sort could be worth studying.

If you want to see how the Hopf bifurcation plus noise is applied to climate cycles, I suggest starting with the paper by Dijkstra, Frankcombe and von der Heydt. If you want to see it applied to the El Niño-Southern Oscillation, start with Section 6.3 of the ENSO theory paper, and then dig into the many references. Here it seems a model with \beta > 0 may work best. If so, noise is not required to keep the ENSO cycle going, but it makes the cycle irregular.

To a mathematician like me, what’s really interesting is how the addition of noise "smooths out" the Hopf bifurcation. When there’s no noise, the qualitative behavior of solutions jumps drastically at \beta = 0. For \beta \le 0 we have a stable equilibrium, while for \beta > 0 we have a stable limit cycle. But in the presence of noise, we get irregular cycles not only for \beta > 0 but also \beta \le 0. This is not really surprising, but it suggests a bunch of questions. Such as: what are some quantities we can use to describe the behavior of "irregular cycles", and how do these quantities change as a function of \lambda and \beta?

You’ll see some answers to this question in Dijkstra, Frankcombe and von der Heydt’s paper. However, if you’re a mathematician, you’ll instantly think of dozens more questions — like, how can I prove what these guys are saying?

If you make any progress, let me know. If you don’t know where to start, you might try the Dijkstra et al. paper, and then learn a bit about the Hopf bifurcation, stochastic processes, and stochastic differential equations:

• John Guckenheimer and Philip Holmes, Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields, Springer, Berlin, 1983.

• Zdzisław Brzeźniak and Tomasz Zastawniak, Basic Stochastic Processes: A Course Through Exercises, Springer, Berlin, 1999.

• Bernt Øksendal, Stochastic Differential Equations: An Introduction with Applications, 6th edition, Springer, Berlin, 2003.

Now, about the Azimuth Code Project. Tim van Beek started it just recently, but the Azimuth Project seems to be attracting people who can program, so I have high hopes for it. Tim wrote:

My main objectives to start the Azimuth Code Project were:

• to have a central repository for the code used for simulations or data analysis on the Azimuth Project,

• to have an online free access repository and make all software open source, to enable anyone to use the software, for example to reproduce the results on the Azimuth Project. Also to show by example that this can and should be done for every scientific publication.

Of less importance is:

• to implement the software with an eye to software engineering principles.

This is less important because the world of numerical high performance computing differs significantly from the rest of the software industry: it has special requirements and it is not clear at all which paradigms that are useful for the rest will turn out to be useful here. Nevertheless I’m confident that parts of the scientific community will profit from a closer interaction with software engineering.

So, if you like programming, I hope you’ll chat with us and consider joining in! Our next projects involve limit cycles in predator-prey models, stochastic resonance in some theories of the ice ages, and delay differential equations in ENSO models.

And in case you’re wondering, the code used for the pictures above is a simple implementation in Java of the Euler scheme, using random number generating algorithms from Numerical Recipes. Pictures were generated with gnuplot.
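In case you are curious what such a scheme looks like, here is a stripped-down sketch of one Euler-Maruyama step for the noisy Hopf equations above. It is not Tim’s actual code: it uses java.util.Random for the Gaussian increments instead of the Numerical Recipes routines, and the step size, seed, starting point and parameter values are arbitrary choices for illustration.

import java.util.Random;

// Euler-Maruyama sketch of the Hopf system with white noise:
//   dx = (-y + beta*x - x*(x^2 + y^2)) dt + lambda dW1
//   dy = ( x + beta*y - y*(x^2 + y^2)) dt + lambda dW2
// Illustrative only; not the Azimuth Code Project's program.
public class NoisyHopf {
    public static void main(String[] args) {
        double beta = -0.25, lambda = 0.1;  // try beta = 1 to see the noisy limit cycle
        double x = 0.5, y = 0.0;            // arbitrary starting point
        double dt = 0.001;
        double sqrtDt = Math.sqrt(dt);      // each dW ~ N(0, dt), i.e. sqrt(dt) * N(0, 1)
        Random rng = new Random(42);        // fixed seed so runs are repeatable
        for (int i = 0; i <= 100000; i++) {
            if (i % 100 == 0)               // print t, x, y every 0.1 time units
                System.out.printf("%.2f %.4f %.4f%n", i * dt, x, y);
            double r2 = x * x + y * y;
            double xNew = x + dt * (-y + beta * x - x * r2) + lambda * sqrtDt * rng.nextGaussian();
            double yNew = y + dt * ( x + beta * y - y * r2) + lambda * sqrtDt * rng.nextGaussian();
            x = xNew;
            y = yNew;
        }
    }
}

Plotting x against t for \beta = -1/4 should give irregular oscillations like the ones shown above, and setting \lambda = 0 recovers the deterministic spiral into the origin.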


There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies. – C.A.R. Hoare


This Week’s Finds (Week 307)

14 December, 2010

I’d like to take a break from interviews and explain some stuff I’m learning about. I’m eager to tell you about some papers in the book Tim Palmer helped edit, Stochastic Physics and Climate Modelling. But those papers are highly theoretical, and theories aren’t very interesting until you know what they’re theories of. So today I’ll talk about "El Niño", which is part of a very interesting climate cycle. Next time I’ll get into more of the math.

I hadn’t originally planned to get into so much detail on the El Niño, but this cycle is a big deal in southern California. In the city of Riverside, where I live, it’s very dry. There is a small river, but it’s just a trickle of water most of the time: there’s a lot less "river" than "side". It almost never rains between March and December. Sometimes, during a "La Niña", it doesn’t even rain in the winter! But then sometimes we have an "El Niño" and get huge floods in the winter. At this point, the tiny stream that gives Riverside its name swells to a huge raging torrent. The difference is very dramatic.

So, I’ve always wanted to understand how the El Niño cycle works — but whenever I tried to read an explanation, I couldn’t follow it!

I finally broke that mental block when I read some stuff on William Kessler’s website. He’s an expert on the El Niño phenomenon who works at the Pacific Marine Environmental Laboratory. One thing I like about his explanations is that he says what we do know about the El Niño, and also what we don’t know. We don’t know what triggers it!

In fact, Kessler says the El Niño would make a great research topic for a smart young scientist. In an email to me, which he has allowed me to quote, he said:

We understand lots of details but the big picture remains mysterious. And I enjoyed your interview with Tim Palmer because it brought out a lot of the sources of uncertainty in present-generation climate modeling. However, with El Niño, the mystery is beyond Tim’s discussion of the difficulties of climate modeling. We do not know whether the tropical climate system on El Niño timescales is stable (in which case El Niño needs an external trigger, of which there are many candidates) or unstable. In the 80s and 90s we developed simple "toy" models that convinced the community that the system was unstable and El Niño could be expected to arise naturally within the tropical climate system. Now that is in doubt, and we are faced with a fundamental uncertainty about the very nature of the beast. Since none of us old farts has any new ideas (I just came back from a conference that reviewed this stuff), this is a fruitful field for a smart young person.

So, I hope some smart young person reads this and dives into working on El Niño!

But let’s start at the beginning. Why did I have so much trouble understanding explanations of the El Niño? Well, first of all, I’m an old fart. Second, most people are bad at explaining stuff: they skip steps, use jargon they haven’t defined, and so on. But third, climate cycles are hard to explain. There’s a lot about them we don’t understand — as Kessler’s email points out. And they also involve a kind of "cyclic causality" that’s a bit tough to mentally process.

At least where I come from, people find it easy to understand linear chains of causality, like "A causes B, which causes C". For example: why is the king’s throne made of gold? Because the king told his minister "I want a throne of gold!" And the minister told the servant, "Make a throne of gold!" And the servant made the king a throne of gold.

Now that’s what I call an explanation! It’s incredibly satisfying, at least if you don’t wonder why the king wanted a throne of gold in the first place. It’s easy to remember, because it sounds like a story. We hear a lot of stories like this when we’re children, so we’re used to them. My example sounds like the beginning of a fairy tale, where the action is initiated by a "prime mover": the decree of a king.

There’s something a bit trickier about cyclic causality, like "A causes B, which causes C, which causes A." It may sound like a sneaky trick: we consider "circular reasoning" a bad thing. Sometimes it is a sneaky trick. But sometimes this is how things really work!

Why does big business have such influence in American politics? Because big business hires lots of lobbyists, who talk to the politicians, and even give them money. Why are they allowed to do this? Because big business has such influence in American politics. That’s an example of a "vicious circle". You might like to cut it off — but like a snake holding its tail in its mouth, it’s hard to know where to start.

Of course, not all circles are "vicious". Many are "virtuous".

But the really tricky thing is how a circle can sometimes reverse direction. In academia we worry about this a lot: we say a university can either "ratchet up" or "ratchet down". A good university attracts good students and good professors, who bring in more grant money, and all this makes it even better… while a bad university tends to get even worse, for all the same reasons. But sometimes a good university goes bad, or vice versa. Explaining that transition can be hard.

It’s also hard to explain why a La Niña switches to an El Niño, or vice versa. Indeed, it seems scientists still don’t understand this. They have some models that simulate this process, but there are still lots of mysteries. And even if they get models that work perfectly, they still may not be able to tell a good story about it. Wind and water are ultimately described by partial differential equations, not fairy tales.

But anyway, let me tell you a story about how it works. I’m just learning this stuff, so take it with a grain of salt…

The "El Niño/Southern Oscillation" or "ENSO" is the largest form of variability in the Earth’s climate on times scales greater than a year and less than a decade. It occurs across the tropical Pacific Ocean every 3 to 7 years, and on average every 4 years. It can cause extreme weather such as floods and droughts in many regions of the world. Countries dependent on agriculture and fishing, especially those bordering the Pacific Ocean, are the most affected.

And here’s a cute little animation of it produced by the Australian Bureau of Meteorology:



Let me tell you first about La Niña, and then El Niño. If you keep glancing back at this little animation, I promise you can understand everything I’ll say.

Winds called trade winds blow west across the tropical Pacific. During La Niña years, water at the ocean’s surface moves west with these winds, warming up in the sunlight as it goes. So, warm water collects at the ocean’s surface in the western Pacific. This creates more clouds and rainstorms in Asia. Meanwhile, since surface water is being dragged west by the wind, cold water from below gets pulled up to take its place in the eastern Pacific, off the coast of South America.

I hope this makes sense so far. But there’s another aspect to the story. Because the ocean’s surface is warmer in the western Pacific, it heats the air and makes it rise. So, wind blows west to fill the "gap" left by rising air. This strengthens the westward-blowing trade winds.

So, it’s a kind of feedback loop: the oceans being warmer in the western Pacific helps the trade winds blow west, and that makes the western oceans even warmer.

Get it? This should all make sense so far, except for one thing. There’s one big question, and I hope you’re asking it. Namely:

Why do the trade winds blow west?

If I don’t answer this, my story so far would work just as well if I switched the words "west" and "east". That wouldn’t necessarily mean my story was wrong. It might just mean that there were two equally good options: a La Niña phase where the trade winds blow west, and another phase — say, El Niño — where they blow east! From everything I’ve said so far, the world could be permanently stuck in one of these phases. Or, maybe it could randomly flip between these two phases for some reason.

Something roughly like this last choice is actually true. But it’s not so simple: there’s not a complete symmetry between west and east.

Why not? Mainly because the Earth is turning to the east.

Air near the equator warms up and rises, so new air from more northern or southern regions moves in to take its place. But because the Earth is fatter at the equator, the equator is moving faster to the east. So, the new air from other places is moving less quickly by comparison… so as seen by someone standing on the equator, it blows west. This is an example of the Coriolis effect:



By the way: in case this stuff wasn’t tricky enough already, a wind that blows to the west is called an easterly, because it blows from the east! That’s what happens when you put sailors in charge of scientific terminology. So the westward-blowing trade winds are called "northeasterly trades" and "southeasterly trades" in the picture above. But don’t let that confuse you.

(I also tend to think of Asia as the "Far East" and California as the "West Coast", so I always need to keep reminding myself that Asia is in the west Pacific, while California is in the east Pacific. But don’t let that confuse you either! Just repeat after me until it makes perfect sense: "The easterlies blow west from West Coast to Far East".)

Okay: silly terminology aside, I hope everything makes perfect sense so far. The trade winds have a good intrinsic reason to blow west, but in the La Niña phase they’re also part of a feedback loop where they make the western Pacific warmer… which in turn helps the trade winds blow west.

But then comes an El Niño! Now for some reason the westward winds weaken. This lets the built-up warm water in the western Pacific slosh back east. And with weaker westward winds, less cold water is pulled up to the surface in the east. So, the eastern Pacific warms up. This makes for more clouds and rain in the eastern Pacific — that’s when we get floods in Southern California. And with the ocean warmer in the eastern Pacific, hot air rises there, which tends to counteract the westward winds even more!

In other words, all the feedbacks reverse themselves.

But note: the trade winds never mainly blow east. During an El Niño they still blow west, just a bit less. So, the climate is not flip-flopping between two symmetrical alternatives. It’s flip-flopping between two asymmetrical alternatives.

I hope all this makes sense… except for one thing. There’s another big question, and I hope you’re asking it. Namely:

Why do the westward trade winds weaken?

We could also ask the same question about the start of the La Niña phase: why do the westward trade winds get stronger?

The short answer is that nobody knows. Or at least there’s no one story that everyone agrees on. There are actually several stories… and perhaps more than one of them is true. But now let me just show you the data:



The top graph shows variations in the water temperature of the tropical Eastern Pacific ocean. When it’s hot we have El Niños: those are the red hills in the top graph. The blue valleys are La Niñas. Note that it’s possible to have two El Niños in a row without an intervening La Niña, or vice versa!

The bottom graph shows the "Southern Oscillation Index" or "SOI". This is the air pressure in Tahiti minus the air pressure in Darwin, Australia. You can see those locations here:



So, when the SOI is high, the air pressure is higher in the east Pacific than in the west Pacific. This is what we expect in a La Niña: that’s why the westward trade winds are strong then! Conversely, the SOI is low in the El Niño phase. This variation in the SOI is called the Southern Oscillation.
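If you want to play with this yourself, here is a minimal sketch of how such an index gets computed: just the pressure difference, standardized so the units drop out. The numbers below are made up purely for illustration, and the official SOI involves some extra conventions (monthly climatologies and so on) that I’m glossing over:

    import numpy as np

    # Hypothetical monthly sea-level pressures, in hectopascals.
    tahiti = np.array([1012.1, 1011.4, 1013.0, 1010.2, 1012.8])
    darwin = np.array([1009.5, 1010.9, 1008.7, 1011.6, 1009.1])

    diff = tahiti - darwin
    soi = (diff - diff.mean()) / diff.std()   # standardize the pressure difference

    print(soi)   # positive values look La Niña-ish, negative ones El Niño-ish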

If you look at the graphs above, you’ll see that one looks almost like an upside-down version of the other. So, the El Niño/La Niña cycle is tightly linked to the Southern Oscillation.

Another thing you’ll see from these graphs is that the ENSO cycle is far from perfectly periodic! Here’s a graph of the Southern Oscillation Index going back a lot further:



This graph was made by William Kessler. His explanations of the ENSO cycle are the first ones I really understood:

My own explanation here is a slow-motion, watered-down version of his. Any mistakes are, of course, mine. To conclude, I want to quote his discussion of theories about why an El Niño starts, and why it ends. As you’ll see, this part is a bit more technical. It involves three concepts I haven’t explained yet:

  • The "thermocline" is the border between the warmer surface water in the ocean and the cold deep water, 100 to 200 meters below the surface. During the La Niña phase, warm water is blown to the western Pacific, and cold water is pulled up to the surface of the eastern Pacific. So, the thermocline is deeper in the west than the east:

    When an El Niño occurs, the thermocline flattens out:

  • "Oceanic Rossby waves" are very low-frequency waves in the ocean’s surface and thermocline. At the ocean’s surface they are only 5 centimeters high, but hundreds of kilometers across. They move at about 10 centimeters/second, requiring months to years to cross the ocean (see the quick arithmetic check just after this list)! The surface waves are mirrored by waves in the thermocline, which are much larger, 10-50 meters in height. When the surface goes up, the thermocline goes down.
  • The "Madden-Julian Oscillation" or "MJO" is the largest form of variability in the tropical atmosphere on time scales of 30-90 days. It’s a pulse that moves east across the Indian Ocean and Pacific ocean at 4-8 meters/second. It manifests itself as patches of anomalously high rainfall and also anomalously low rainfall. Strong Madden-Julian Oscillations are often seen 6-12 months before an El Niño starts.
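To get a feel for that "months to years" crossing time, here is the quick arithmetic promised above, taking the tropical Pacific to be very roughly 15,000 kilometers wide (my own round number, just for illustration):

    15,000 km ÷ (10 cm/s) = 1.5 × 10^7 m ÷ 0.1 m/s = 1.5 × 10^8 s ≈ 4.8 years

So a Rossby wave launched on one side of the basin really can take years to reach the other.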

With this bit of background, let’s read what Kessler wrote:

There are two main theories at present. The first is that the event is initiated by the reflection from the western boundary of the Pacific of an oceanic Rossby wave (type of low-frequency planetary wave that moves only west). The reflected wave is supposed to lower the thermocline in the west-central Pacific and thereby warm the SST [sea surface temperature] by reducing the efficiency of upwelling to cool the surface. Then that makes winds blow towards the (slightly) warmer water and really start the event. The nice part about this theory is that the Rossby waves can be observed for months before the reflection, which implies that El Niño is predictable.

The other idea is that the trigger is essentially random. The tropical convection (organized large-scale thunderstorm activity) in the rising air tends to occur in bursts that last for about a month, and these bursts propagate out of the Indian Ocean (known as the Madden-Julian Oscillation). Since the storms are geostrophic (rotating according to the turning of the earth, which means they rotate clockwise in the southern hemisphere and counter-clockwise in the north), storm winds on the equator always blow towards the east. If the storms are strong enough, or last long enough, then those eastward winds may be enough to start the sloshing. But specific Madden-Julian Oscillation events are not predictable much in advance (just as specific weather events are not predictable in advance), and so to the extent that this is the main element, then El Niño will not be predictable.

In my opinion both these processes can be important in different El Niños. Some models that did not have the MJO storms were successful in predicting the events of 1986-87 and 1991-92. That suggests that the Rossby wave part was a main influence at that time. But those same models have failed to predict the events since then, and the westerlies have appeared to come from nowhere. It is also quite possible that these two general sets of ideas are incomplete, and that there are other causes entirely. The fact that we have very intermittent skill at predicting the major turns of the ENSO cycle (as opposed to the very good forecasts that can be made once an event has begun) suggests that there remain important elements that await explanation.

Next time I’ll talk a bit about mathematical models of the ENSO and another climate cycle — but please keep in mind that these cycles are still far from fully understood!


To hate is to study, to study is to understand, to understand is to appreciate, to appreciate is to love. So maybe I’ll end up loving your theory. – John Archibald Wheeler


This Week’s Finds (Week 306)

7 December, 2010

This week I’ll interview another physicist who successfully made the transition from gravity to climate science: Tim Palmer.

JB: I hear you are starting to build a climate science research group at Oxford.  What led you to this point? What are your goals?

TP: I started my research career at Oxford University, doing a PhD in general relativity theory under the cosmologist Dennis Sciama (himself a student of Paul Dirac). Then I switched gear and have spent most of my career working on the dynamics and predictability of weather and climate, mostly working in national and international meteorological and climatological institutes. Now I’m back in Oxford as a Royal Society Research Professor in climate physics. Oxford has a lot of climate-related activities going on, both in basic science and in impact and policy issues. I want to develop activities in climate physics. Oxford has wonderful Physics and Mathematics Departments and I am keen to try to exploit human resources from these areas where possible.

The general area which interests me is in the area of uncertainty in climate prediction; finding ways to estimate uncertainty reliably and, of course, to reduce uncertainty. Over the years I have helped develop new techniques to predict uncertainty in weather forecasts. Because climate is a nonlinear system, the growth of initial uncertainty is flow dependent. Some days when the system is in a relatively stable part of state space, accurate weather predictions can be made a week or more ahead of time. In other more unstable situations, predictability is limited to a couple of days. Ensemble weather forecast techniques help estimate such flow dependent predictability, and this has enormous practical relevance.

How to estimate uncertainty in climate predictions is much more tricky than for weather prediction. There is, of course, the human element: how much we reduce greenhouse gas emissions will impact on future climate. But leaving this aside, there is the difficult issue of how to estimate the accuracy of the underlying computer models we use to predict climate.

To say a bit more about this, the problem is to do with how well climate models simulate the natural processes which amplify the anthropogenic increases in greenhouse gases (notably carbon dioxide). A key aspect of this amplification process is associated with the role of water in climate. For example, water vapour is itself a powerful greenhouse gas. If we were to assume that the relative humidity of the atmosphere (the amount of water vapour in the air, expressed as a percentage of the amount at which the air would be saturated) was constant as the atmosphere warms under anthropogenic climate change, then humidity would amplify the climate change by a factor of two or more. On top of this, clouds — i.e. water in its liquid rather than gaseous form — have the potential to further amplify climate change (or indeed decrease it depending on the type or structure of the clouds). Finally, water in its solid phase can also be a significant amplifier of climate change. For example, sea ice reflects sunlight back to space. However, as sea ice melts, e.g. in the Arctic, the underlying water absorbs more of the sunlight than before, again amplifying the underlying climate change signal.

We can approach these problems in two ways. Firstly we can use simplified mathematical models in which plausible assumptions (like the constant relative humidity one) are made to make the mathematics tractable. Secondly, we can try to simulate climate ab initio using the basic laws of physics (here, mostly, but not exclusively, the laws of classical physics). If we are to have confidence in climate predictions, this ab initio approach has to be pursued. However, unlike, say temperature in the atmosphere, water vapour and cloud liquid water have more of a fractal distribution, with both large and small scales. We cannot simulate accurately the small scales in a global climate model with fixed (say 100km) grid, and this, perhaps more than anything, is the source of uncertainty in climate predictions.

This is not just a theoretical problem (although there is some interesting mathematics involved, e.g. of multifractal distribution theory and so on). In the coming years, governments will be looking to spend billions on new infrastructure for society to adapt to climate change: more reservoirs, better flood defences, bigger storm sewers etc etc. It is obviously important that this money is spent wisely. Hence we need to have some quantitative and reliable estimate of certainty that in regions where more reservoirs are to be built, the climate really will get drier and so on.

There is another reason for developing quantitative methods for estimating uncertainty: climate geoengineering. If we spray aerosols in the stratosphere, or whiten clouds by spraying sea salt into them, we need to be sure we are not doing something terrible to our climate, like shutting off the monsoons, or decreasing rainfall over Amazonia (which might then make the rainforest a source of carbon for the atmosphere rather than a sink). Reliable estimates of uncertainty of regional impacts of geoengineering are going to be essential in the future.

My goals? To bring quantitative methods from physics and maths into climate decision making.  One area that particularly interests me is the application of nonlinear stochastic-dynamic techniques to represent unresolved scales of motion in the ab initio models. If you are interested to learn more about this, please see this book:

• Tim Palmer and Paul Williams, editors, Stochastic Physics and Climate Modelling, Cambridge U. Press, Cambridge, 2010.

JB: Thanks! I’ve been reading that book. I’ll talk about it next time on This Week’s Finds.

Suppose you were advising a college student who wanted to do something that would really make a difference when it comes to the world’s environmental problems.  What would you tell them?

TP: Well although this sounds a bit of a cliché, it’s important first and foremost to enjoy and be excited by what you are doing. If you have a burning ambition to work on some area of science without apparent application or use, but feel guilty because it’s not helping to save the planet, then stop feeling guilty and get on with fulfilling your dreams. If you work in some difficult area of science and achieve something significant, then this will give you a feeling of confidence that is impossible to be taught. Feeling confident in one’s abilities will make any subsequent move into new areas of activity, perhaps related to the environment, that much easier. If you demonstrate that confidence at interview, moving fields, even late in life, won’t be so difficult.

In my own case, I did a PhD in general relativity theory, and having achieved this goal (after a bleak period in the middle where nothing much seemed to be working out), I did sort of think to myself: if I can add to the pool of knowledge in this, traditionally difficult area of theoretical physics, I can pretty much tackle anything in science. I realize that sounds rather arrogant, and of course life is never as easy as that in practice.

JB: What if you were advising a mathematician or physicist who was already well underway in their career?  I know lots of such people who would like to do something "good for the planet", but feel that they’re already specialized in other areas, and find it hard to switch gears.  In fact I might as well admit it — I’m such a person myself!

TP: Talk to the experts in the field. Face to face. As many as possible. Ask them how your expertise can be put to use. Get them to advise you on key meetings you should try to attend.

JB: Okay.  You’re an expert in the field, so I’ll start with you.  How can my expertise be put to use?  What are some meetings that I should try to attend?

TP: The American Geophysical Union and the European Geophysical Union have big multi-session conferences each year which include mathematicians with an interest in climate. On top of this, mathematical science institutes are increasingly holding meetings to engage mathematicians and climate scientists. For example, the Isaac Newton Institute at Cambridge University is holding a six-month programme on climate and mathematics. I will be there for part of this programme. There have been similar programmes in the US and in Germany very recently.

Of course, as well as going to meetings, or perhaps before going to them, there is the small matter of some reading material. Can I strongly recommend the Working Group One report of the latest IPCC climate change assessments? WG1 is tasked with summarizing the physical science underlying climate change. Start with the WG1 Summary for Policymakers from the Fourth Assessment Report:

• Intergovernmental Panel on Climate Change, Climate Change 2007: The Physical Science Basis, Summary for Policymakers.

and, if you are still interested, tackle the main WG1 report:

• Intergovernmental Panel on Climate Change, Climate Change 2007: The Physical Science Basis, Cambridge U. Press, Cambridge, 2007.

There is a feeling that since the various so-called "Climategate" scandals, in which IPCC were implicated, climate scientists need to be more open about uncertainties in climate predictions and climate prediction models. But in truth, these uncertainties have always been openly discussed in the WG1 reports. These reports are absolutely not the alarmist documents many seem to think, and, I would say, give an extremely balanced picture of the science. The latest report dates from 2007.

JB: I’ve been slowly learning what’s in this report, thanks in part to Nathan Urban, whom I interviewed in previous issues of This Week’s Finds. I’ll have to keep at it.



You told me that there’s a big difference between the "butterfly effect" in chaotic systems with a few degrees of freedom, such as the Lorenz attractor shown above, and the "real butterfly effect" in systems with infinitely many degrees of freedom, like the Navier-Stokes equations, the basic equations describing fluid flow. What’s the main difference?

TP: Everyone knows, or at least thinks they know, what the butterfly effect is: the exponential growth of small initial uncertainties in chaotic systems, like the system studied by Ed Lorenz, after whom the butterfly effect was named by James Gleick in his excellent popular book:

• James Gleick, Chaos: Making a New Science, Penguin, London, 1998.

But in truth, this is not the butterfly effect as Lorenz had meant it (I knew Ed Lorenz quite well). If you think about it, the possible effect of a flap of a butterfly’s wings on the weather some days later involves not only an increase in the amplitude of the uncertainty, but also in its scale. If we think of a turbulent system like the atmosphere, comprising a continuum of scales, its evolution is described by partial differential equations, not a low order set of ordinary differential equations. Each scale can be thought of as having its own characteristic dominant Lyapunov exponent, and these scales interact nonlinearly.

If we want to estimate the time for a flap of a butterfly’s wings to influence a large scale weather system, we can imagine summing up all the Lyapunov timescales associated with all the scales from the small scales to the large scales. If this sum diverges, then very good, we can say it will take a very long time for a small scale error or uncertainty to influence a large-scale system. But alas, simple scaling arguments suggest that there may be situations (in 3 dimensional turbulence) where this sum converges. Normally, we think of convergence as a good thing, but in this case it means that the small scale uncertainty, no matter how small scale it is, can affect the accuracy of the large scale prediction… in finite time. This is quite different to the conventional butterfly effect in low order chaos, where arbitrarily long predictions can be made by reducing initial uncertainty to sufficiently small levels.
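To make that remark about convergence a little more concrete, here is a rough version of the standard scaling argument. This is just a sketch, assuming Kolmogorov scaling for three-dimensional turbulence, where the turnover time of an eddy of size ℓ goes like ℓ^(2/3); Palmer doesn’t spell out these details here. If we consider scales ℓ_n = L/2^n, halving the size at each step, then

    τ_n ∝ (L/2^n)^(2/3) = τ_0 · (2^(-2/3))^n

and the total time for an error to work its way up from arbitrarily small scales is roughly

    τ_0 + τ_1 + τ_2 + … ≈ τ_0 / (1 − 2^(-2/3)) ≈ 2.7 τ_0

a convergent geometric series. So no matter how tiny the scale of the initial error, it can reach the largest scales within a few large-eddy turnover times. That finite bound is the "real" butterfly effect.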

JB: What are the practical implications of this difference?

TP: Climate models are finite truncations of the underlying partial differential equations of climate. A crucial question is: how do solutions converge as the truncation gets better and better? More practically, how many floating point operations per second (flops) does my computer need to have, in order to simulate the large-scale components of climate accurately? Teraflops, petaflops, exaflops? Is there an irreducible uncertainty in our ability to simulate climate no matter how many flops we have? Because of the "real" butterfly effect, we simply don’t know. This has real practical implications.

JB: Nobody has proved existence and uniqueness for solutions of the Navier-Stokes equations. Indeed the Clay Mathematics Institute is offering a million-dollar prize for settling this question. But meteorologists use these equations to predict the weather with some success. To mathematicians that might seem a bit strange. What do you think is going on here?

TP: Actually, for certain simplifications of the Navier-Stokes equations, such as making them hydrostatic (which damps acoustic waves), existence and uniqueness can be proven. And for weather forecasting we can get away with the hydrostatic approximation for most applications. But in general existence and uniqueness haven’t been proven. The "real" butterfly effect is linked to this. Well obviously the Intergovernmental Panel on Climate Change can’t wait for the mathematicians to solve this problem, but as I tried to suggest above, I don’t think the problem is just an arcane mathematical conundrum: it may help us understand better what it is possible to predict about climate change and what it is not.

JB:  Of course, meteorologists are really using a cleverly discretized version of the Navier-Stokes equations to predict the weather. Something vaguely similar happens in quantum field theory: we can use "lattice QCD" to compute the mass of the proton to reasonable accuracy, but nobody knows for sure if QCD makes sense in the continuum.  Indeed, there’s another million-dollar Clay Prize waiting for the person who can figure that out.   Could it be that sometimes a discrete approximation to a continuum theory does a pretty good job even if the continuum theory fundamentally doesn’t make sense?

TP: There you are! Spend a few years working on the continuum limit of lattice QCD and you may end up advising government on the likelihood of unexpected consequences on regional climate arising from some geoengineering proposal! The idea that two such apparently different fields could have elements in common is something bureaucrats find hard to get their heads round. We at the sharp end in science need to find ways of making it easier for scientists to move fields (even on a temporary basis) should they want to.

This reminds me of a story. When I was finishing my PhD, my supervisor, Dennis Sciama, announced one day that the process of Hawking radiation from black holes could be understood using the Principle of Maximum Entropy Production in non-equilibrium thermodynamics. I had never heard of this Principle before, no doubt a gap in my physics education. However, a couple of weeks later, I was talking to a colleague of a colleague who was a climatologist, and he was telling me about a recent paper that purported to show that many of the properties of our climate system could be deduced from the Principle of Maximum Entropy Production. That there might be such a link between black hole theory and climate physics was one reason that I thought changing fields might not be so difficult after all.

JB: To what extent is the problem of predicting climate insulated from the problems of predicting weather?  I bet this is a hard question, but it seems important.  What do people know about this?

TP: John von Neumann was an important figure in meteorology (as well, for example, as in quantum theory). He oversaw a project at Princeton just after the Second World War to develop a numerical weather prediction model based on a discretised version of the Navier-Stokes equations. It was one of the early applications of digital computers. Some years later, the first long-term climate models were developed based on these weather prediction models. But then the two areas of work diverged. People doing climate modelling needed to represent lots of physical processes: the oceans, the cryosphere, the biosphere etc, whereas weather prediction tended to focus on getting better and better discretised representations of the Navier-Stokes equations.

One rationale for this separation was that weather forecasting is an initial value problem whereas climate is a "forced" problem (e.g. how does climate change with a specified increase in carbon dioxide?). Hence, for example, climate people didn’t need to agonise over getting ultra accurate estimates of the initial conditions for their climate forecasts.

But the two communities are converging again. We realise there are lots of synergies between short term weather prediction and climate prediction. Let me give you one very simple example. Whether anthropogenic climate change is going to be catastrophic to society, or is something we will be able to adapt to without too many major problems, we need to understand, as mentioned above, how clouds interact with increasing levels of carbon dioxide. Clouds cannot be represented explicitly in climate models because they occur on scales that can’t be resolved due to computational constraints. So they have to be represented by simplified "parametrisations". We can test these parametrisations in weather forecast models. To put it crudely (to be honest too crudely) if the cloud parametrisations (and corresponding representations of water vapour) are systematically wrong, then the forecasts of tomorrow’s daily maximum temperature will also be systematically wrong.

To give another example, I myself for a number of years have been developing stochastic methods to represent truncation uncertainty in weather prediction models. I am now trying to apply these methods in climate prediction. The ability to test the skill of these stochastic schemes in weather prediction mode is crucial to having confidence in them in climate prediction mode. There are lots of other examples of where a synergy between the two areas is important.

JB: When we met recently, you mentioned that there are currently no high-end supercomputers dedicated to climate issues.  That seems a bit odd.  What sort of resources are there?  And how computationally intensive are the simulations people are doing now?

TP: By "high end" I mean very high end: that is, machines in the petaflop range of performance. If one takes the view that climate change is one of the gravest threats to society, then throwing all the resources that science and technology allows, to try to quantify exactly how grave this threat really is, seems quite sensible to me. On top of that, if we are to spend billions (dollars, pounds, euros etc.) on new technology to adapt to climate change, we had better make sure we are spending the money wisely — no point building new reservoirs if climate change will make your region wetter. So the predictions that it will get drier in such and such a place had better be right. Finally, if we are to ever take these geoengineering proposals seriously we’d better be sure we understand the regional consequences. We don’t want to end up shutting off the monsoons! Reliable climate predictions really are essential.

I would say that there is no more computationally complex problem in science than climate prediction. There are two key modes of instability in the atmosphere: the convective instabilities (thunderstorms), with scales of kilometers, and what are called baroclinic instabilities (midlatitude weather systems), with scales of thousands of kilometers. Simulating these two instabilities, and their mutual global interactions, is beyond the capability of current global climate models because of computational constraints. On top of this, climate models try to represent not only the physics of climate (including the oceans and the cryosphere), but the chemistry and biology too. That introduces considerable computational complexity in addition to the complexity caused by the multi-scale nature of climate.

By and large individual countries don’t have the financial resources (or at least they claim they don’t!) to fund such high end machines dedicated to climate. And the current economic crisis is not helping! On top of which, for reasons discussed above in relation to the "real" butterfly effect, I can’t go to government and say: "Give me a 100 petaflop machine and I will absolutely definitely be able to reduce uncertainty in forecasts of climate change by a factor of 10". In my view, the way forward may be to think about internationally funded supercomputing. So, just as we have internationally funded infrastructure in particle physics and astronomy, so too in climate prediction. Why not?

Actually, very recently the NSF in the US gave a consortium of climate scientists from the US, Europe and Japan, a few months of dedicated time on a top-end Cray XT4 computer called Athena. Athena wasn’t quite in the petaflop range, but not too far off, and using this dedicated time, we produced some fantastic results, otherwise unachievable, showing what the international community could achieve, given the computational resources. Results from the Athena project are currently being written up — they demonstrate what can be done where there is a will from the funding agencies.

JB: In a Guardian article on human-caused climate change you were quoted as saying "There might be a 50% risk of widespread problems or possibly only 1%.  Frankly, I would have said a risk of 1% was sufficient for us to take the problem seriously enough to start thinking about reducing emissions."

It’s hard to argue with that, but starting to think about reducing emissions is vastly less costly than actually reducing them.  What would you say to someone who replied, "If the risk is possibly just 1%, it’s premature to take action — we need more research first"?

TP: The implication of your question is that a 1% risk is just too small to worry about or do anything about. But suppose the next time you checked in to fly to Europe, and they said at the desk that there was a 1% chance that volcanic ash would cause the aircraft engines to fail mid flight, leading the plane to crash, killing all on board. Would you fly? I doubt it!

My real point is that in assessing whether emissions cuts are too expensive, given the uncertainty in climate predictions, we need to assess how much we value things like the Amazon rainforest, or preventing the destruction of places like Bangladesh or the African Sahel. If we estimate the damage caused by dangerous climate change — let’s say associated with a 4 °C or greater global warming — to be at least 100 times the cost of taking mitigating action, then it is worth taking this action even if the probability of dangerous climate change was just 1%. But of course, according to the latest predictions, the probability of realizing such dangerous climate changes is much nearer 50%. So in reality, it is worth cutting emissions if the value you place on current climate is comparable to or greater than the cost of cutting emissions.

Summarising, there are two key points here. Firstly, rational decisions can be made in the light of uncertain scientific input. Secondly, whilst we do certainly need more research, that should not itself be used as a reason for inaction.

Thanks, John, for allowing me the opportunity to express some views about climate physics on your web site.

JB: Thank you!


The most important questions of life are, for the most part, really only problems of probability. – Pierre Simon, Marquis de Laplace


This Week’s Finds (Week 305)

5 November, 2010

Nathan Urban has been telling us about a paper where he estimated the probability that global warming will shut down a major current in the Atlantic Ocean:

• Nathan M. Urban and Klaus Keller, Probabilistic hindcasts and projections of the coupled climate, carbon cycle and Atlantic meridional overturning circulation system: a Bayesian fusion of century-scale observations with a simple model, Tellus A, July 16, 2010.

We left off last time with a cliff-hanger: I didn’t let him tell us what the probability is! Since you must have been clutching your chair ever since, you’ll be relieved to hear that the answer is coming now, in the final episode of this interview.

But it’s also very interesting how he and Klaus Keller got their answer. As you’ll see, there’s some beautiful math involved. So let’s get started…

JB: Last time you told us roughly how your climate model works. This time I’d like to ask you about the rest of your paper, leading up to your estimate of the probability that the Atlantic Meridional Overturning Current (or "AMOC") will collapse. But before we get into that, I’d like to ask some very general questions.

For starters, why are scientists worried that the AMOC might collapse?

Last time I mentioned the Younger Dryas event, a time when Europe became drastically colder for about 1300 years, starting around 10,800 BC. Lots of scientists think this event was caused by a collapse of the AMOC. And lots of them believe it was caused by huge amounts of fresh water pouring into the north Atlantic from an enormous glacial lake. But nothing quite like that is happening now! So if the AMOC collapses in the next few centuries, the cause would have to be a bit different.

NU: In order for the AMOC to collapse, the overturning circulation has to weaken. The overturning is driven by the sinking of cold and salty, and therefore dense, water in the north Atlantic. Anything that affects the density structure of the ocean can alter the overturning.

As you say, during the Younger Dryas, it is thought that a lot of fresh water suddenly poured into the Atlantic from the draining of a glacial lake. This lessened the density of the surface waters and reduced the rate at which they sank, shutting down the overturning.

Since there aren’t any large glacial lakes left that could abruptly drain into the ocean, the AMOC won’t shut down in the same way it previously did. But it’s still possible that climate change could cause it to shut down. The surface waters of the north Atlantic can still freshen (and become less dense), either due to the addition of fresh water from melting polar ice and snow, or due to increased precipitation at the northern latitudes. In addition, they can simply become warmer, which also makes them less dense, reducing their sinking rate and weakening the overturning.

In combination, these three factors (warming, increased precipitation, meltwater) can theoretically shut down the AMOC if they are strong enough. This will probably not be as abrupt or extreme an event as the Younger Dryas, but it can still persistently alter the regional climate.

JB: I’m trying to keep our readers in suspense for a bit longer, but I don’t think it’s giving away too much to say that when you run your model, sometimes the AMOC shuts down, or at least slows down. Can you say anything about how this tends to happen, when it does? In your model, that is. Can you tell if it’s mainly warming, or increased precipitation, or meltwater?

NU: The short answer is "mainly warming, probably". The long answer:

I haven’t done experiments with the box model myself to determine this, but I can quote from the Zickfeld et al. paper where this model was published. It says, for their baseline collapse experiment,

In the box model the initial weakening of the overturning circulation is mainly due to thermal forcing [...] This effect is amplified by a negative feedback on salinity, since a weaker circulation implies reduced salt advection towards the northern latitudes.

Even if they turn off all the freshwater input, they find substantial weakening of the AMOC from warming alone.

Freshwater could potentially become the dominant effect on the AMOC if more freshwater is added than in the paper’s baseline experiment. The paper did report computer experiments with different freshwater inputs, but upon skimming it, I can’t immediately tell whether the thermal effect loses its dominance.

These experiments have also been performed using more complex climate models. This paper reports that in all the models they studied, the AMOC weakening is caused more by changes in surface heat flux than by changes in surface water flux:

• J. M. Gregory et al., A model intercomparison of changes in the Atlantic thermohaline circulation in response to increasing atmospheric CO2 concentration, Geophysical Research Letters 32 (2005), L12703.

However, that paper studied "best-estimate" freshwater fluxes, not the fluxes on the high end of what’s possible, so I don’t know whether thermal effects would still dominate if the freshwater input ends up being large. There are papers that suggest freshwater input from Greenland, at least, won’t be a dominant factor any time soon:

• J. H. Jungclaus et al., Will Greenland melting halt the thermohaline circulation?, Geophysical Research Letters 33 (2006), L17708.

• E. Driesschaert et al., Modeling the influence of Greenland ice sheet melting on the Atlantic meridional overturning circulation during the next millennia, Geophysical Research Letters 34 (2007), L10707.

I’m not sure what the situation is for precipitation, but I don’t think that would be much larger than the meltwater flux. In summary, it’s probably the thermal effects that dominate, both in complex and simpler models.

Note that in our version of the box model, the precipitation and meltwater fluxes are combined into one number, the "North Atlantic hydrological sensitivity", so we can’t distinguish between those sources of water. This number is treated as uncertain in our analysis, lying within a range of possible values determined from the hydrologic changes predicted by complex models. The Zickfeld et al. paper experimented with separating them into the two individual contributions, but my version of the model doesn’t do that.

JB: Okay. Now back to what you and Klaus Keller actually did in your paper. You have a climate model with a bunch of adjustable knobs, or parameters. Some of these parameters you take as "known" from previous research. Others are more uncertain, and that’s where the Bayesian reasoning comes in. Very roughly, you use some data to guess the probability that the right settings of these knobs lie within any given range.

How many parameters do you treat as uncertain?

NU: 18 parameters in total. 7 model parameters that control dynamics, 4 initial conditions, and 7 parameters describing error statistics.

JB: What are a few of these parameters? Maybe you can tell us about some of the most important ones — or ones that are easy to understand.

NU: I’ve mentioned these briefly in "week304" in the model description. The AMOC-related parameter is the hydrologic sensitivity I described above, controlling the flux of fresh water into the North Atlantic.

There are three climate related parameters:

• the climate sensitivity (the equilibrium warming expected in response to doubled CO2),

• the ocean heat vertical diffusivity (controlling the rate at which oceans absorb heat from the atmosphere), and

• "aerosol scaling", a factor that multiplies the strength of the aerosol-induced cooling effect, mostly due to uncertainties in aerosol-cloud interactions.

I discussed these in "week302" in the part about total feedback estimates.

There are also three carbon cycle related parameters:

• the heterotrophic respiration sensitivity (describing how quickly dead plants decay when it gets warmer),

• CO2 fertilization (how much faster plants grow in CO2-elevated conditions), and

• the ocean carbon vertical diffusivity (the rate at which the oceans absorb CO2 from the atmosphere).

The initial conditions describe what the global temperature, CO2 level, etc. were at the start of my model simulations, in 1850. The statistical parameters describe the variance and autocorrelation of the residual error between the observations and the model, due to measurement error, natural variability, and model error.

JB: Could you say a bit about the data you use to estimate these uncertain parameters? I see you use a number of data sets.

NU: We use global mean surface temperature and ocean heat content to constrain the three climate parameters. We use atmospheric CO2 concentration and some ocean flux measurements to constrain the carbon parameters. We use measurements of the AMOC strength to constrain the AMOC parameter. These are all time series data, mostly global averages — except the AMOC strength, which is an Atlantic-specific quantity defined at a particular latitude.

The temperature data are taken by surface weather stations and are for the years 1850-2009. The ocean heat data are taken by shipboard sampling, 1953-1996. The atmospheric CO2 concentrations are measured from the Mauna Loa volcano in Hawaii, 1959-2009. There are also some ice core measurements of trapped CO2 at Law Dome, Antarctica, dated to 1854-1953. The air-sea CO2 fluxes, for the 1980s and 1990s, are derived from measurements of dissolved inorganic carbon in the ocean, combined with measurements of manmade chlorofluorocarbon to date the water masses in which the carbon resides. (The dates tell you when the carbon entered the ocean.)

The AMOC strength is reconstructed from station measurements of poleward water circulation over an east-west section of the Atlantic Ocean, near 25 °N latitude. Pairs of stations measure the northward velocity of water, inferred from the ocean bottom pressure differences between northward and southward station pairs. The velocities across the Atlantic are combined with vertical density profiles to determine an overall rate of poleward water mass transport. We use seven AMOC strength estimates measured sparsely between the years 1957 and 2004.

JB: So then you start the Bayesian procedure. You take your model, start it off with your 18 parameters chosen somehow or other, run it from 1850 to now, and see how well it matches all this data you just described. Then you tweak the parameters a bit — last time we called that "turning the knobs" — and run the model again. And then you do this again and again, lots of times. The goal is to calculate the probability that the right settings of these knobs lie within any given range.

Is that about right?

NU: Yes, that’s right.

JB: About how many times did you actually run the model? Is the sort of thing you can do on your laptop overnight, or is it a mammoth task?

NU: I ran the model a million times. This took about two days on a single CPU. Some of my colleagues later ported the model from Matlab to Fortran, and now I can do a million runs in half an hour on my laptop.

JB: Cool! So if I understand correctly, you generated a million lists of 18 numbers: those uncertain parameters you just mentioned.

Or in other words: you created a cloud of points: a million points in an 18-dimensional space. Each point is a choice of those 18 parameters. And the density of this cloud near any point should be proportional to the probability that the parameters have those values.

That’s the goal, anyway: getting this cloud to approximate the right probability density on your 18-dimensional space. To get this to happen, you used the Markov chain Monte Carlo procedure we discussed last time.

Could you say in a bit more detail how you did this, exactly?

NU: There are two steps. One is to write down a formula for the probability of the parameters (the "Bayesian posterior distribution"). The second is to draw random samples from that probability distribution using Markov chain Monte Carlo (MCMC).

Call the parameter vector θ and the data vector y. The Bayesian posterior distribution p(θ|y) is a function of θ which says how probable θ is, given the data y that you’ve observed. The little bar (|) indicates conditional probability: p(θ|y) is the probability of θ, assuming that you know y happened.

The posterior factorizes into two parts, the likelihood and the prior. The prior, p(θ) says how probable you think a particular 18-dimensional vector of parameters is, before you’ve seen the data you’re using. It encodes your "prior knowledge" about the problem, unconditional on the data you’re using.

The likelihood, p(y|θ), says how likely it is for the observed data to arise from a model run using some particular vector of parameters. It describes your data generating process: assuming you know what the parameters are, how likely are you to see data that looks like what you actually measured? (The posterior is the reverse of this: how probable are the parameters, assuming the data you’ve observed?)

Bayes’s theorem simply says that the posterior is proportional to the product of these two pieces:

p(θ|y) ∝ p(y|θ) × p(θ)

If I know the two pieces, I multiply them together and use MCMC to sample from that probability distribution.

Where do the pieces come from? For the prior, we assumed bounded uniform distributions on all but one parameter. Such priors express the belief that each parameter lies within some range we deemed reasonable, but we are agnostic about whether one value within that range is more probable than any other. The exception is the climate sensitivity parameter. We have prior evidence from computer models and paleoclimate data that the climate sensitivity is most likely around 2 or 3 °C, albeit with significant uncertainties. We encoded this belief using a "diffuse" Cauchy distribution peaked in this range, but allowing substantial probability to be outside it, so as to not prematurely exclude too much of the parameter range based on possibly overconfident prior beliefs. We assume the priors on all the parameters are independent of each other, so the prior for all of them is the product of the prior for each of them.
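To make this concrete, here is a minimal sketch of what such a prior might look like in code. This is not the actual setup from the paper: the parameter names, the uniform ranges, and the center and scale of the Cauchy are all invented for illustration.

    import numpy as np
    from scipy import stats

    # Hypothetical bounds for two of the uniform-prior parameters (illustrative only).
    BOUNDS = {"ocean_heat_diffusivity": (0.1, 5.0), "aerosol_scaling": (0.0, 3.0)}

    def log_prior(theta):
        # theta: a dict mapping parameter names to values
        lp = 0.0
        for name, (lo, hi) in BOUNDS.items():
            if not (lo <= theta[name] <= hi):
                return -np.inf          # outside the allowed range: zero prior probability
            lp += -np.log(hi - lo)      # flat density inside the range
        # Heavy-tailed Cauchy prior on climate sensitivity (center and scale are my guesses).
        lp += stats.cauchy.logpdf(theta["climate_sensitivity"], loc=3.0, scale=2.0)
        return lp

Because the priors are assumed independent, the log prior is just the sum of the individual log densities.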

For the likelihood, we assumed a normal (Gaussian) distribution for the residual error (the scatter of the data about the model prediction). The simplest such distribution is the independent and identically distributed ("iid") normal distribution, which says that all the data points have the same error and the errors at each data point are independent of each other. Neither of these assumptions is true. The errors are not identical, since they get bigger farther in the past, when we measured data with less precision than we do today. And they’re not independent, because if one year is warmer than the model predicts, the next year is likely to be warmer than the model predicts too. There are various possible reasons for this: chaotic variability, time lags in the system due to finite heat capacity, and so on.

In this analysis, we kept the identical-error assumption for simplicity, even though it’s not correct. I think this is justifiable, because the strongest constraints on the parameters come from the most recent data, when the largest climate and carbon cycle changes have occurred. That is, the early data are already relatively uninformative, so if their errors get bigger, it doesn’t affect the answer much.

We rejected the independent-error assumption, since there is very strong autocorrelation (serial dependence) in the data, and ignoring autocorrelation is known to lead to overconfidence. When the errors are correlated, it’s harder to distinguish between a short-term random fluctuation and a true trend, so you should be more uncertain about your conclusions. To deal with this, we assumed that the errors obey a correlated autoregressive "red noise" process instead of an uncorrelated "white noise" process. In the likelihood, we converted the red-noise errors to white noise via a "whitening" process, assuming we know how much correlation is present. (We’re allowed to do that in the likelihood, because it gives the probability of the data assuming we know what all the parameters are, and the autocorrelation is one of the parameters.) The equations are given in the paper.

Finally, this gives us the formula for our posterior distribution.
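Here is a rough sketch of the kind of AR(1) "whitening" step described above. This is not the paper’s actual implementation (the real equations are in the paper, and details are glossed over here), but it shows the idea: subtract off the predictable part of each year’s error, then treat what’s left as independent Gaussian noise.

    import numpy as np

    def ar1_whiten(residuals, rho):
        # residuals: observations minus model predictions, as a 1-D array
        # rho: assumed lag-1 autocorrelation (itself one of the uncertain parameters)
        r = np.asarray(residuals, dtype=float)
        w = np.empty_like(r)
        w[0] = r[0] * np.sqrt(1.0 - rho**2)   # scale the first point to match the innovation variance
        w[1:] = r[1:] - rho * r[:-1]          # remove the part predictable from the previous year
        return w

    # The whitened residuals w can then go into an ordinary iid normal log-likelihood,
    # something like  -0.5 * np.sum((w / sigma)**2)  up to constants.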

JB: Great! There’s a lot of technical material here, so I have many questions, but let’s go through the whole story first, and come back to those.

NU: Okay. Next comes step two, which is to draw random samples from the posterior probability distribution via MCMC.

To do this, we use the famous Metropolis algorithm, which was invented by a physicist of that name, along with others, to do computations in statistical physics. It’s a very simple algorithm which takes a "random walk" through parameter space.

You start out with some guess for the parameters. You randomly perturb your guess to a nearby point in parameter space, which you are going to propose to move to. If the new point is more probable than the point you were at (according to the Bayesian posterior distribution), then accept it as a new random sample. If the proposed point is less probable than the point you’re at, then you randomly accept the new point with a certain probability. Otherwise you reject the move, staying where you are, treating the old point as a duplicate random sample.

The acceptance probability is equal to the ratio of the posterior distribution at the new point to the posterior distribution at the old point. If the point you’re proposing to move to is, say, 5 times less probable than the point you are at now, then there’s a 20% chance you should move there, and an 80% chance that you should stay where you are.

If you iterate this method of proposing new "jumps" through parameter space, followed by the Metropolis accept/reject procedure, you can prove that you will eventually end up with a long list of (correlated) random samples from the Bayesian posterior distribution.
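Since the algorithm is so short, here is a minimal sketch of it in code. This is a toy version, not the actual 18-parameter setup from the paper:

    import numpy as np

    def metropolis(log_post, theta0, step, n_samples, rng=None):
        # log_post: function returning the log of the (unnormalized) posterior density
        # theta0:   starting parameter vector
        # step:     size of the Gaussian random-walk proposal
        rng = np.random.default_rng() if rng is None else rng
        theta = np.asarray(theta0, dtype=float)
        lp = log_post(theta)
        samples = []
        for _ in range(n_samples):
            proposal = theta + step * rng.standard_normal(theta.shape)
            lp_new = log_post(proposal)
            # Accept with probability min(1, posterior ratio); working in logs avoids underflow.
            if np.log(rng.random()) < lp_new - lp:
                theta, lp = proposal, lp_new
            samples.append(theta.copy())   # a rejected move duplicates the old point
        return np.array(samples)

    # Toy example: sample from a 1-dimensional standard normal "posterior".
    chain = metropolis(lambda t: -0.5 * np.sum(t**2), np.zeros(1), 0.5, 10000)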

JB: Okay. Now let me ask a few questions, just to help all our readers get up to speed on some jargon.

Lots of people have heard of a "normal distribution" or "Gaussian", because it’s become sort of the default choice for probability distributions. It looks like a bell curve:

When people don’t know the probability distribution of something — like the tail lengths of newts or the IQ’s of politicians — they often assume it’s a Gaussian.

But I bet fewer of our readers have heard of a "Cauchy distribution". What’s the point of that? Why did you choose that for your prior probability distribution of the climate sensitivity?

NU: There is a long-running debate about the "upper tail" of the climate sensitivity distribution. High climate sensitivities correspond to large amounts of warming. As you can imagine, policy decisions depend a lot on how likely we think these extreme outcomes could be, i.e., how quickly the "upper tail" of the probability distribution drops to zero.

A Gaussian distribution has tails that drop off exponentially quickly, so very high sensitivities will never get any significant weight. If we used it for our prior, then we’d almost automatically get a "thin tailed" posterior, no matter what the data say. We didn’t want to put that in by assumption and automatically conclude that high sensitivities should get no weight, regardless of what the data say. So we used a weaker assumption, which is a "heavy tailed" prior distribution. With this prior, the probability of large amounts of warming drops off more slowly, as a power law, instead of exponentially fast. If the data strongly rule out high warming, we can get a thin tailed posterior, but if they don’t, it will be heavy tailed. The Cauchy distribution, a limiting case of the "Student t" distribution that students of statistics may have heard of, is one of the most conservative choices for a heavy-tailed prior. Probability drops off so slowly at its tails that its variance is infinite.

JB: The issue of "fat tails" is also important in the stock market, where big crashes happen more frequently than you might guess with a Gaussian distribution. After the recent economic crisis I saw a lot of financiers walking around with their tails between their legs, wishing their tails had been fatter.

I’d also like to ask about "white noise" versus "red noise". "White noise" is a mathematical description of a situation where some quantity fluctuates randomly with time in such a way that its value at any time is completely uncorrelated with its value at any other time. If you graph an example of white noise, it looks really spiky:



If you play it as a sound, it sounds like hissy static — quite unpleasant. If you could play it in the form of light, it would look white, hence the name.

"Red noise" is less wild. Its value at any time is still random, but it’s correlated to the values at earlier or later times, in a specific way. So it looks less spiky:



and it sounds less high-pitched, more like a steady rainfall. Since it’s stronger at low frequencies, it would look more red if you could play it in the form of light — hence the name "red noise".

If I understand correctly, you’re assuming that some aspects of the climate are noisy, but in a red noise kind of way, when you’re computing p(y|θ): the likelihood that your data takes on the value y, given your climate model with some specific choice of parameters θ.

Is that right? You’re assuming this about all your data: the temperature data from weather stations, the ocean heat data from shipboard samples, the atmospheric CO2 concentrations at Mauna Loa volcano in Hawaii, the ice core measurements of trapped CO2, the air-sea CO2 fluxes, and also the AMOC strength? Red, red, red — all red noise?

NU: I think the red noise you’re talking about refers to a specific type of autocorrelated noise ("Brownian motion"), with a power spectrum that is inversely proportional to the square of frequency. I’m using "red noise" more generically to speak of any autocorrelated process that is stronger at low frequencies. Specifically, the process we use is a first-order autoregressive, or "AR(1)", process. It has a more complicated spectrum than Brownian motion.
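For readers who want to see what an AR(1) process looks like, here is a minimal sketch of how to generate one. The value of ρ below is arbitrary, chosen just to make the "redness" visible:

    import numpy as np

    def ar1_noise(n, rho, sigma=1.0, rng=None):
        # x[t] = rho * x[t-1] + white noise: autocorrelated, with more power at low frequencies
        rng = np.random.default_rng() if rng is None else rng
        x = np.zeros(n)
        for t in range(1, n):
            x[t] = rho * x[t - 1] + sigma * rng.standard_normal()
        return x

    white = ar1_noise(1000, rho=0.0)   # no memory at all: white noise
    red   = ar1_noise(1000, rho=0.9)   # strong memory: a much "redder" spectrum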

JB: Right, I was talking about "red noise" of a specific mathematically nice sort, but that’s probably less convenient for you. AR(1) sounds easier for computers to generate.

NU: It’s not only easier for computers, but closer to the spectrum we see in our analysis.

Note that when I talk about error I mean "residual error", which is the difference between the observations and the model prediction. If the residual error is correlated in time, that doesn’t necessarily reflect true red noise in the climate system. It could also represent correlated errors in measurement over time, or systematic errors in the model. I am not attempting to distinguish between all these sources of error. I’m just lumping them all together into one total error process, and assuming it has a simple statistical form.

We assume the residual errors in the annual surface temperature, ocean heat, and instrumental CO2 time series are AR(1). The ice core CO2, air-sea CO2 flux, and AMOC strength data are sparse, and we can’t really hope to estimate the correlation between them, so we assume their residual errors are uncorrelated.

Speaking of correlation, I’ve been talking about "autocorrelation", which is correlation within one data set between one time and another. It’s also possible for the errors in different data sets to be correlated with each other ("cross correlation"). We assumed there is no cross correlation (and residual analysis suggests only weak correlation between data sets).

JB: I have a few more technical questions, but I bet most of our readers are eager to know: so, what next?

You use all these nifty mathematical methods to work out p(θ|y), the probability that your 18 parameters have any specific value given your data. And now I guess you want to figure out the probability that the Atlantic Meridional Overturning Current, or AMOC, will collapse by some date or other.

How do you do this? I guess most people want to know the answer more than the method, but they’ll just have to wait a few more minutes.

NU: That’s easy. After MCMC, we have a million runs of the model, sampled in proportion to how well the model fits historic data. There will be lots of runs that agree well with the data, and a few that agree less well. All we do now is extend each of those runs into the future, using an assumed scenario for what CO2 emissions and other radiative forcings will do in the future. To find out the probability that the AMOC will collapse by some date, conditional on the assumptions we’ve made, we just count what fraction of the runs have an AMOC strength of zero in whatever year we care about.
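That last counting step is as simple as it sounds. Here is a sketch, with the array names invented for illustration:

    import numpy as np

    # amoc:  hypothetical array of shape (number of runs, number of years), where each row
    #        is the projected AMOC strength for one posterior sample run into the future
    # years: the corresponding calendar years, assumed sorted in increasing order
    def collapse_probability(amoc, years, year, threshold=0.0):
        idx = np.searchsorted(years, year)
        return np.mean(amoc[:, idx] <= threshold)   # fraction of runs collapsed by that year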

JB: Okay, that’s simple enough. What scenario, or scenarios, did you consider?

NU: We considered a worst-case "business as usual" scenario in which we continue to burn fossil fuels at an accelerating rate until we start to run out of them, and eventually burn the maximum amount of fossil fuels we think there might be remaining (about 5000 gigatons worth of carbon, compared to the roughly 500 gigatons we’ve emitted so far). This assumes we get desperate for cheap energy and extract all the hard-to-get fossil resources in oil shales and tar sands, all the remaining coal, etc. It doesn’t necessarily preclude the use of non-fossil energy; it just assumes that our appetite for energy grows so rapidly that there’s no incentive to slow down fossil fuel extraction. We used a simple economic model to estimate how fast we might do this, if the world economy continues to grow at a similar rate to the last few decades.

JB: And now for the big question: what did you find? How likely is it that the AMOC will collapse, according to your model? Of course it depends how far into the future you look.

NU: We find a negligible probability that the AMOC will collapse this century. The odds start to increase around 2150, rising to about a 10% chance by 2200, and a 35% chance by 2300, the last year considered in our scenario.

JB: I guess one can take this as good news or really scary news, depending on how much you care about folks who are alive in 2300. But I have some more questions. First, what’s a "negligible probability"?

NU: In this case, it’s less than 1 in 3000. For computational reasons, we only ran 3000 of the million samples forward into the future. There were no samples in this smaller selection that had the AMOC collapsed in 2100. The probability rises to 1 in 3000 in the year 2130 (the first time I see a collapse in this smaller selection), and 1% in 2152. You should take these numbers with a grain of salt. It’s these rare "tail-area events" that are most sensitive to modeling assumptions.

JB: Okay. And second, don’t the extrapolations become more unreliable as you keep marching further into the future? You need to model not only climate physics but also the world economy. In this calculation, how many gigatons of carbon dioxide per year are you assuming will be emitted in 2300? I’m just curious. In 1998 it was about 27.6 gigatons. By 2008, it was about 30.4.

NU: Yes, the uncertainty grows with time (and this is reflected in our projections). And in considering a fixed emissions scenario, we’ve ignored the economic uncertainty, which, so far out into the future, is even larger than the climate uncertainty. Here we’re concentrating on just the climate uncertainty, and are hoping to get an idea of bounds, so we used something close to a worst-case economic scenario. In this scenario carbon emissions peak around 2150 at about 23 gigatons carbon per year (84 gigatons CO2). By 2300 they’ve tapered off to about 4 GtC (15 GtCO2).

Actual future emissions may be less than this, if we act to reduce them, or there are fewer economically extractable fossil resources than we assume, or the economy takes a prolonged downturn, etc. Actually, it’s not completely an economic worst case; it’s possible that the world economy could grow even faster than we assume. And it’s not the worst case scenario from a climate perspective, either. For example, we don’t model potential carbon emissions from permafrost or methane clathrates. It’s also possible that climate sensitivity could be higher than what we find in our analysis.

JB: Why even bother projecting so far out into the future, if it’s so uncertain?

NU: The main reason is because it takes a while for the AMOC to weaken, so if we’re interested in what it would take to make it collapse, we have to run the projections out a few centuries. But another motivation for writing this paper is policy related, having to do with the concept of "climate commitment" or "triggering". Even if it takes a few centuries for the AMOC to collapse, it may take less time than that to reach a "point of no return", where a future collapse has already been unavoidably "triggered". Again, to investigate this question, we have to run the projections out far enough to get the AMOC to collapse.

We define "the point of no return" to be a point in time such that, if CO2 emissions were immediately reduced to zero and kept there forever, the AMOC would still collapse by the year 2300 (an arbitrary date chosen for illustrative purposes). This is possible because even if we stop emitting new CO2, existing CO2 concentrations, and therefore temperatures, will remain high for a long time (see "week303").

In reality, humans wouldn’t be able to reduce emissions instantly to zero, so the actual "point of no return" would likely be earlier than what we find in our study. We couldn’t economically reduce emissions fast enough to avoid triggering an AMOC collapse. (In this study we ignore the possibility of negative carbon emissions, that is, capturing CO2 directly from the atmosphere and sequestering it for a long period of time. We’re also ignoring the possibility of climate geoengineering, which is global cooling designed to cancel out greenhouse warming.)

So what do we find? Although we calculate a negligible probability that the AMOC will collapse by the end of this century, the probability that, in this century, we will commit later generations to a collapse (by 2300) is almost 5%. The probabilities of "triggering" rise rapidly, to almost 20% by 2150 and about 33% by 2200, even though the probability of experiencing a collapse by those dates is about 1% and 10%, respectively. You can see it in this figure from our paper:



The take-home message is that while most climate projections are currently run out to 2100, we shouldn’t fixate only on what might happen to people this century. We should consider what climate changes our choices in this century, and beyond, are committing future generations to experiencing.

JB: That’s a good point!

I’d like to thank you right now for a wonderful interview that really taught me — and, I hope, our readers — a huge amount about climate change and climate modelling. I think we’ve basically reached the end here, but as the lights dim and the audience files out, I’d like to ask just a few more technical questions.

One of them was raised by David Tweed. He pointed out that while you’re "training" your model on climate data from the last 150 years or so, you’re using it to predict the future in a world that will be different in various ways: a lot more CO2 in the atmosphere, hotter, and so on. So, you’re extrapolating rather than interpolating, and that’s a lot harder. It seems especially hard if the collapse of the AMOC is a kind of "tipping point" — if it suddenly snaps off at some point, instead of linearly decreasing as some parameter changes.

This raises the question: why should we trust your model, or any model of this sort, to make such extrapolations correctly? In the discussion after that comment, I think you said that ultimately it boils down to

1) whether you think you have the physics right,

and

2) whether you think the parameters change over time.

That makes sense. So my question is: what are some of the best ways people could build on the work you’ve done, and make more reliable predictions about the AMOC? There’s a lot at stake here!

NU: Our paper is certainly an early step in making probabilistic AMOC projections, with room for improvement. I view the main points as (1) estimating how large the climate-related uncertainties may be within a given model, and (2) illustrating the difference between experiencing, and committing to, a climate change. It’s certainly not an end-all "prediction" of what will happen 300 years from now, taking into account all possible model limitations, economic uncertainties, etc.

To answer your question, the general ways to improve predictions are to improve the models, and/or improve the data constraints. I’ll discuss both.

Although I’ve argued that our simple box model reasonably reproduces the dynamics of the more complex model it was designed to approximate, that complex model itself isn’t the best model available for the AMOC. The problem with using complex climate models is that it’s computationally impossible to run them millions of times. My solution is to work with "statistical emulators", which are tools for building fast approximations to slow models. The idea is to run the complex model a few times at different points in its parameter space, and then statistically interpolate the resulting outputs to predict what the model would have output at nearby points. This works if the model output is a smooth enough function of the parameters, and there are enough carefully-chosen "training" points.
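As a rough illustration of the emulator idea (not the emulator used in any particular study), one can fit a Gaussian-process interpolator to a handful of model runs. Here `run_complex_model` is a cheap stand-in for the slow simulation, and all numbers are made up:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, ConstantKernel

def run_complex_model(theta):
    # Stand-in for an expensive simulation: a smooth toy function of 2 parameters.
    return theta[0] * np.exp(-theta[1])

# A few carefully chosen "training" points in a 2-parameter space
# (in practice these might come from a Latin hypercube design).
theta_train = np.array([[2.0, 0.5], [3.0, 1.0], [4.5, 0.8], [2.5, 1.5], [5.0, 0.3]])
y_train = np.array([run_complex_model(t) for t in theta_train])

# Fit a Gaussian process: a smooth interpolator with uncertainty estimates
# in between the training points.
gp = GaussianProcessRegressor(
    kernel=ConstantKernel() * RBF(length_scale=[1.0, 1.0]),
    normalize_y=True,
)
gp.fit(theta_train, y_train)

# Predictions at new parameter settings are now nearly instantaneous.
mean, std = gp.predict(np.array([[3.5, 0.9]]), return_std=True)
```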

From an oceanographic standpoint, even current complex models are probably not wholly adequate (see the discussion at the end of "week304"). There is some debate about whether the AMOC becomes more stable as the resolution of the model increases. On the other hand, people still have trouble getting the AMOC in models, and the related climate changes, to behave as abruptly as they apparently did during the Younger Dryas. I think the range of current models is probably in the right ballpark, but there is plenty of room for improvement. Model developers continue to refine their models, and ultimately, the reliability of any projection is constrained by the quality of models available.

Another way to improve predictions is to improve the data constraints. It’s impossible to go back in time and take better historic data, although with things like ice cores, it is possible to dig up new cores to analyze. It’s also possible to improve some historic "data products". For example, the ocean heat data is subject to a lot of interpolation of sparse measurements in the deep ocean, and one could potentially improve the interpolation procedure without going back in time and taking more data. There are also various corrections being applied for known biases in the data-gathering instruments and procedures, and it’s possible those could be improved too.

Alternatively, we can simply wait. Wait for new and more precise data to become available.

But when I say "improve the data constraints", I’m mostly talking about adding constraints that I simply didn’t include in the analysis, or looking at existing data in more detail (like spatial patterns instead of global averages). For example, the ocean heat data mostly serves to constrain the vertical mixing parameter, controlling how quickly heat penetrates into the deep ocean. But we can also look at the penetration of chemicals in the ocean (such as carbon from fossil fuels, or chlorofluorocarbons). This is also informative about how quickly water masses mix down to the ocean depths, and indirectly informative about how fast heat mixes. I can’t do that with my simple model (which doesn’t model the ocean circulation of any of these chemicals), but I can with more complex models.

As another example, I could constrain the climate sensitivity parameter better with paleoclimate data, or more resolved spatial data (to try to, e.g., pick up the spatial fingerprint of industrial aerosols in the temperature data), or by looking at data sets informative about particular feedbacks (such as water vapor), or at satellite radiation budget data.

There is a lot of room for reducing uncertainties by looking at more and more data sets. However, this presents its own problems. Not only is this simply harder to do, but it runs more directly into limitations in the models and data. For example, if I look at what ocean temperature data implies about a model’s vertical mixing parameter, and what ocean chemical data imply, I might find that they imply two inconsistent values for the parameter! Or that those data imply a different mixing than is implied by AMOC strength measurements. This can happen if there are flaws in the model (or in the data). We have some evidence from other work that there are circumstances in which this can happen:

• A. Schmittner, N. M. Urban, K. Keller and D. Matthews, Using tracer observations to reduce the uncertainty of ocean diapycnal mixing and climate-carbon cycle projections, Global Biogeochemical Cycles 23 (2009), GB4009.

• M. Goes, N. M. Urban, R. Tonkonojenkov, M. Haran, and K. Keller, The skill of different ocean tracers in reducing uncertainties about projections of the Atlantic meridional overturning circulation, Journal of Geophysical Research — Oceans, in press (2010).

How to deal with this, if and when it happens, is an open research challenge. To an extent it depends on expert judgment about which model features and data sets are "trustworthy". Some say that expert judgment renders conclusions subjective and unscientific, but as a scientist, I say that such judgments are always applied! You always weigh how much you trust your theories and your data when deciding what to conclude about them.

In my response I’ve so far ignored the part about parameters changing in time. I think the hydrological sensitivity (North Atlantic freshwater input as a function of temperature) can change with time, and this could be improved by using a better climate model that includes ice and precipitation dynamics. Feedbacks can fluctuate in time, but I think it’s okay to treat them as a constant for long term projections. Some of these parameters can also be spatially dependent (e.g., the respiration sensitivity in the carbon cycle). I think treating them all as constant is a decent first approximation for the sorts of generic questions we’re asking in the paper. Also, all the parameter estimation methods I’ve described only work with static parameters. For time varying parameters, you need to get into state estimation methods like Kalman or particle filters.

JB: I also have another technical question, which is about the Markov chain Monte Carlo procedure. You generate your cloud of points in 18-dimensional space by a procedure where you keep either jumping randomly to a nearby point, or staying put, according to that decision procedure you described. Eventually this cloud fills out to a good approximation of the probability distribution you want. But, how long is "eventually"? You said you generated a million points. But how do you know that’s enough?

NU: This is something of an art. Although there is an asymptotic convergence theorem, there is no general way of knowing whether you’ve reached convergence. First you check to see whether your chains "look right". Are they sweeping across the full range of parameter space where you expect significant probability? Are they able to complete many sweeps (thoroughly exploring parameter space)? Is the Metropolis test accepting a reasonable fraction of proposed moves? Do you have enough effective samples in your Markov chain? (MCMC generates correlated random samples, so there are fewer "effectively independent" samples in the chain than there are total samples.) Then you can do consistency checks: start the chains at several different locations in parameter space, and see if they all converge to similar distributions.
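One widely used version of that multi-chain consistency check is the Gelman-Rubin statistic, which compares the variance between chains to the variance within them. A minimal sketch for a single parameter (an illustration, not code from the paper):

```python
import numpy as np

def gelman_rubin(chains):
    """Gelman-Rubin convergence statistic (R-hat) for one parameter.

    `chains` has shape (m, n): m chains of n samples each, started from
    different points in parameter space. Values close to 1 suggest the
    chains are sampling the same distribution.
    """
    m, n = chains.shape
    chain_means = chains.mean(axis=1)
    B = n * chain_means.var(ddof=1)             # variance between chain means
    W = chains.var(axis=1, ddof=1).mean()       # average variance within chains
    var_hat = (n - 1) / n * W + B / n           # pooled posterior variance estimate
    return np.sqrt(var_hat / W)
```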

If the posterior distribution shows, or is expected to show, a lot of correlation between parameters, you have to be more careful to ensure convergence. You want to propose moves that carry you along the "principal components" of the distribution, so you don’t waste time trying to jump away from the high probability directions. (Roughly, if your posterior density is concentrated on some low dimensional manifold, you want to construct your way of moving around parameter space to stay near that manifold.) You also have to be careful if you see, or expect, multimodality (multiple peaks in the probability distribution). It can be hard for MCMC to move from one mode to another through a low-probability "wasteland"; it won’t be inclined to jump across it. There are more advanced algorithms you can use in such situations, if you suspect you have multimodality. Otherwise, you might discover later that you only sampled one peak, and never noticed that there were others.

JB: Did you do some of these things when testing out the model in your paper? Do you have any intuition for the "shape" of the probability distribution in 18-dimensional space that lies at the heart of your model? For example: do you know if it has one peak, or several?

NU: I’m pretty confident that the MCMC in our analysis is correctly sampling the shape of the probability distribution. I ran lots and lots of analyses, starting the chain in different ways, tweaking the proposal distribution (jumping rule), looking at different priors, different model structures, different data, and so on.

It’s hard to "see" what an 18-dimensional function looks like, but we have 1-dimensional and 2-dimensional projections of it in our paper:





I don’t believe that it has multiple peaks, and I don’t expect it to. Multiple peaks usually show up when the model behavior is non-monotonic as a function of the parameters. This can happen in really nonlinear systems (and with threshold systems like the AMOC), but during the historic period I’m calibrating the model to, I see no evidence of this in the model.

There are correlations between parameters, so there are certain "directions" in parameter space that the posterior distribution is oriented along. And the distribution is not Gaussian. There is evidence of skew, and nonlinear correlations between parameters. Such correlations appear when the data are insufficient to completely identify the parameters (i.e., different combinations of parameters can produce similar model output). This is discussed in more detail in another of our papers:

• Nathan M. Urban and Klaus Keller, Complementary observational constraints on climate sensitivity, Geophysical Research Letters 36 (2009), L04708.

In a Gaussian distribution, the distribution of any pair of parameters will look ellipsoidal, but our distribution has some "banana" or "boomerang" shaped pairwise correlations. This is common, for example, when the model output is a function of the product of two parameters.

JB: Okay. It’s great that we got a chance to explore some of the probability theory and statistics underlying your work. It’s exciting for me to see these ideas being used to tackle a big real-life problem. Thanks again for a great interview.


Maturity is the capacity to endure uncertainty. – John Finley


This Week’s Finds (Week 304)

15 October, 2010

About 10,800 BC, something dramatic happened.

The last glacial period seemed to be ending quite nicely: things had warmed up a lot — but then, suddenly, the temperature in Europe dropped about 7 °C! In Greenland, it dropped about twice that much. In England it got so cold that glaciers started forming! In the Netherlands, in winter, temperatures regularly fell below -20 °C. Throughout much of Europe trees retreated, replaced by alpine landscapes and tundra. The climate was affected as far away as Syria, where drought punished the ancient settlement of Abu Hureyra. But it doesn’t seem to have been a world-wide event.

This cold spell lasted for about 1300 years. And then, just as suddenly as it began, it ended! Around 9,500 BC, the temperature in Europe bounced back.

This episode is called the Younger Dryas, after a certain wildflower that enjoys cold weather, whose pollen is common in this period.

What caused the Younger Dryas? Could it happen again? An event like this could wreak havoc, so it’s important to know. Alas, as so often in science, the answer to these questions is "we’re not sure, but…."

We’re not sure, but the most popular theory is that a huge lake in Canada, formed by melting glaciers, broke its icy banks and flooded out into the Saint Lawrence River. This lake is called Lake Agassiz. At its maximum, it held more water than all lakes in the world now put together:



In a massive torrent lasting for years, the water from this lake rushed out to the Labrador Sea. By floating atop the denser salt water, this fresh water blocked a major current that flows in the Atlantic: the Atlantic Meridional Overturning Circulation, or AMOC. This current brings warm water north and helps keep northern Europe warm. So, northern Europe was plunged into a deep freeze!

That’s the theory, anyway.

Could something like this happen again? There are no glacial lakes waiting to burst their banks, but the concentration of fresh water in the northern Atlantic has been increasing, and ocean temperatures are changing too, so some scientists are concerned. The problem is, we don’t really know what it takes to shut down the Atlantic Meridional Overturning Circulation!

To make progress on this kind of question, we need a lot of insight, but we also need some mathematical models. And that’s what Nathan Urban will tell us about now. First we’ll talk in general about climate models, Bayesian reasoning, and Monte Carlo methods. We’ll even talk about the general problem of using simple models to study complex phenomena. And then he’ll walk us step by step through the particular model that he and a coauthor have used to study this question: will the AMOC run amok?

Sorry, I couldn’t resist that. It’s not so much "running amok" that the AMOC might do, it’s more like "fizzling out". But accuracy should never stand in the way of a good pun.

On with the show:

JB: Welcome back! Last time we were talking about the new work you’re starting at Princeton. You said you’re interested in the assessment of climate policy in the presence of uncertainties and "learning" – where new facts come along that revise our understanding of what’s going on. Could you say a bit about your methodology? Or, if you’re not far enough along on this work, maybe you could talk about the methodology of some other paper in this line of research.

NU: To continue the direction of discussion, I’ll respond by talking about the methodology of a few papers along the lines of what I hope to work on here at Princeton, rather than about my past papers on uncertainty quantification. They are Keller and McInerney on learning rates:

• Klaus Keller and David McInerney, The dynamics of learning about a climate threshold, Climate Dynamics 30 (2008), 321-332.

Keller and coauthors on learning and economic policy:

• Klaus Keller, Benjamin M. Bolker and David F. Bradford, Uncertain climate thresholds and optimal economic growth, Journal of Environmental Economics and Management 48 (2004), 723-741.

and Oppenheimer et al. on "negative" learning (what happens when science converges to the wrong answer):

• Michael Oppenheimer, Brian C. O’Neill and Mort Webster, Negative learning, Climatic Change 89 (2008), 155-172.

The general theme of this kind of work is to statistically compare a climate model to observed data in order to understand what model behavior is allowed by existing data constraints. Then, having quantified the range of possibilities, plug this uncertainty analysis into an economic-climate model (or "integrated assessment model"), and have it determine the economically "optimal" course of action.

So: start with a climate model. There is a hierarchy of such models, ranging from simple impulse-response or "box" models to complex atmosphere-ocean general circulation models. I often use the simple models, because they’re computationally efficient and it is therefore feasible to explore their full range of uncertainties. I’m moving toward more complex models, which requires fancier statistics to extract information from a limited set of time-consuming simulations.

Given a model, the next step is to apply a Monte Carlo analysis of its parameter space. Climate models cannot simulate the entire Earth from first principles. They have to make approximations, and those approximations involve free parameters whose values must be fit to data (or calculated from specialized models). For example, a simple model cannot explicitly describe all the possible feedback interactions that are present in the climate system. It might lump them all together into a single, tunable "climate sensitivity" parameter. The Monte Carlo analysis runs the model many thousands of times at different parameter settings, and then compares the model output to past data in order to see which parameter settings are plausible and which are not. I use Bayesian statistical inference, in combination with Markov chain Monte Carlo, to quantify the degree of "plausibility" (i.e., probability) of each parameter setting.

With probability weights for the model’s parameter settings, it is now possible to weight the probability of possible future outcomes predicted by the model. This describes, conditional on the model and data used, the uncertainty about the future climate.

JB: Okay. I think I roughly understand this. But you’re using jargon that may cause some readers’ eyes to glaze over. And that would be unfortunate, because this jargon is necessary to talk about some very cool ideas. So, I’d like to ask what some phrases mean, and beg you to explain them in ways that everyone can understand.

To help out — and maybe give our readers the pleasure of watching me flounder around — I’ll provide my own quick attempts at explanation. Then you can say how close I came to understanding you.

First of all, what’s an "impulse-response model"? When I think of "impulse response" I think of, say, tapping on a wineglass and listening to the ringing sound it makes, or delivering a pulse of voltage to an electrical circuit and watching what it does. And the mathematician in me knows that this kind of situation can be modelled using certain familiar kinds of math. But you might be applying that math to climate change: for example, how the atmosphere responds when you pump some carbon dioxide into it. Is that about right?

NU: Yes. (Physics readers will know "impulse response" as "Green’s functions", by the way).

The idea is that you have a complicated computer model of a physical system whose dynamics you want to represent as a simple model, for computational convenience. In my case, I’m working with a computer model of the carbon cycle which takes CO2 emissions as input and predicts how much CO2 is left in the air after natural sources and sinks operate on what’s there. It’s possible to explicitly model most of the relevant physical and biogeochemical processes, but it takes a long time for such a computer simulation to run. Too long to explore how it behaves under many different conditions, which is what I want to do.

How do you build a simple model that acts like a more complicated one? One way is to study the complex model’s "impulse response" — in this case, how it behaves in response to an instantaneous "pulse" of carbon to the atmosphere. In general, the CO2 in the atmosphere will suddenly jump up, and then gradually relax back toward its original concentration as natural sinks remove some of that carbon from the atmosphere. The curve showing how the concentration decreases over time is the "impulse response". You derive it by telling your complex computer simulation that a big pulse of carbon was added to the air, and recording what it predicts will happen to CO2 over time.

The trick in impulse response theory is to treat an arbitrary CO2 emissions trajectory as the sum of a bunch of impulses of different sizes, one right after another. So, if emissions are 1, 3, and 7 units of carbon in years 1, 2, and 3, then you can think of that as a 1-unit pulse of carbon in year one, plus a 3-unit pulse in year 2, plus a 7-unit pulse in year 3.

The crucial assumption you make at this point is that you can treat the response of the complex model to this series of impulses as the sum of scaled and shifted copies of the "impulse response" curve that you worked out for a single pulse. Therefore, just by running the model in response to a single unit pulse, you can work out what the model would predict for any emissions trajectory, by adding up its response to a bunch of individual pulses. The impulse response model makes its prediction by summing up lots of copies of the impulse response curve, with different sizes and at different times. (Technically, this is a convolution of the impulse response curve, or Green’s function, with the emissions trajectory curve.)
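In code, that summation really is a one-line convolution. Here is a toy sketch with a made-up impulse response curve; the numbers are purely illustrative and not from any real carbon cycle model:

```python
import numpy as np

# Illustrative impulse response: fraction of a 1-unit carbon pulse still in the
# atmosphere t years after emission (a made-up decay curve, not a real model fit).
t = np.arange(200)
impulse_response = 0.25 + 0.75 * np.exp(-t / 50.0)

# Emissions trajectory, treated as a series of yearly pulses (e.g. in GtC/yr).
emissions = np.array([1.0, 3.0, 7.0, 5.0, 2.0])

# The predicted excess atmospheric carbon is the convolution of the emissions
# with the impulse response, truncated to the years of interest.
excess_carbon = np.convolve(emissions, impulse_response)[: len(emissions)]
```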

JB: Okay. Next, what’s a "box model"? I had to look that up, and after some floundering around I bumped into a Wikipedia article that mentioned "black box models" and "white box models".

A black box model is where you’ve got a system, and all you pay attention to is its input and output — in other words, what you do to it, and what it does to you, not what’s going on "inside". A white box model, or "glass box model", lets you see what’s going on inside but not directly tinker with it, except via your input.

Is this at all close? I don’t feel very confident that I’ve understood what a "box model" is.

NU: No, box models are the sorts of things you find in "systems dynamics" theory, where you have "stocks" of a substance and "flows" of it in and out. In the carbon cycle, the "boxes" (or stocks) could be "carbon stored in wood", "carbon stored in soil", "carbon stored in the surface ocean", etc. The flows are the sources and sinks of carbon. In an ocean model, boxes could be "the heat stored in the North Atlantic", "the heat stored in the deep ocean", etc., and flows of heat between them.

Box models are a way of spatially averaging over a lot of processes that are too complicated or time-consuming to treat in detail. They’re another way of producing simplified models from more complex ones, like impulse response theory, but without the linearity assumption. For example, one could replace a three dimensional circulation model of the ocean with a couple of "big boxes of water connected by pipes". Of course, you have to then verify that your simplified model is a "good enough" representation of whatever aspect of the more complex model that you’re interested in.
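To make the "stocks and flows" picture concrete, here is a toy two-box model in code, with purely illustrative constants (nothing like a real carbon-cycle or ocean model):

```python
import numpy as np

def step_boxes(stocks, dt=1.0):
    """Advance a toy two-box system (surface-ocean and deep-ocean carbon) one step.

    Each flow is proportional to the stock it drains; the rate constants and
    stock sizes below are purely illustrative.
    """
    surface, deep = stocks
    flow_down = 0.02 * surface    # mixing of carbon from the surface to the deep ocean
    flow_up = 0.005 * deep        # slow return flow from the deep ocean to the surface
    surface += dt * (flow_up - flow_down)
    deep += dt * (flow_down - flow_up)
    return np.array([surface, deep])

stocks = np.array([900.0, 37000.0])    # illustrative carbon stocks, in GtC
for _ in range(100):                   # integrate forward 100 years
    stocks = step_boxes(stocks)
```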

JB: Okay, sure — I know a bit about these "box models", but not that name. In fact the engineers who use "bond graphs" to depict complex physical systems made of interacting parts like to emphasize the analogy between electrical circuits and hydraulic systems with water flowing through pipes. So I think box models fit into the bond graph formalism pretty nicely. I’ll have to think about that more.

Anyway: next you mentioned taking a model and doing a "Monte Carlo analysis of its parameter space". This time you explained what you meant, but I’ll still go over it.

Any model has a bunch of adjustable parameters in it, for example the "climate sensitivity", which in a simple model just means how much warmer it gets per doubling of atmospheric carbon dioxide. We can think of these adjustable parameters as knobs we’re allowed to turn. The problem is that we don’t know the best settings of these knobs! And even worse, there are lots of allowed settings.

In a Monte Carlo analysis we randomly turn these knobs to some setting, run our model, and see how well it does — presumably by comparing its results to the "right answer" in some situation where we already know the right answer. Then we keep repeating this process. We turn the knobs again and again, and accumulate information, and try to use this to guess what the right knob settings are.

More precisely: we try to guess the probability that the correct knob settings lie within any given range! We don’t try to guess their one "true" setting, because we can’t be sure what that is, and it would be silly to pretend otherwise. So instead, we work out probabilities.

Is this roughly right?

NU: Yes, that’s right.

JB: Okay. That was the rough version of the story. But then you said something a lot more specific. You say you "use Bayesian statistical inference, in combination with Markov chain Monte Carlo, to quantify the degree of "plausibility" (or probability) of each parameter setting."

So, I’ve got a couple more questions. What’s "Markov chain Monte Carlo"? I guess it’s some specific way of turning those knobs over and over again.

NU: Yes. For physicists, it’s a "random walk" way of turning the knobs: you start out at the current knob settings, and tweak each one just a little bit away from where they currently are. In the most common Markov chain Monte Carlo (MCMC) algorithm, if the new setting takes you to a more plausible setting of the knobs, you keep that setting. If the new setting produces an outcome that is less plausible, then you might keep the new setting (with a probability equal to the ratio of the new plausibility to the old, so the less plausible the new setting, the less likely you are to keep it), or you might stay at the existing setting and try again with a new tweaking. The MCMC algorithm is designed so that the sequence of knob settings produced will sample randomly from the probability distribution you’re interested in.
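The core of that random-walk (Metropolis) algorithm fits in a few lines. In this sketch, `log_posterior` is a placeholder for any function returning the log of the plausibility of a given knob setting; it is not the model from the paper:

```python
import numpy as np

def metropolis(log_posterior, theta0, step_sizes, n_samples, seed=0):
    """Random-walk Metropolis sampler: tweak the knobs, then keep or reject each tweak."""
    rng = np.random.default_rng(seed)
    theta = np.array(theta0, dtype=float)
    logp = log_posterior(theta)
    samples = []
    for _ in range(n_samples):
        proposal = theta + step_sizes * rng.normal(size=theta.shape)
        logp_new = log_posterior(proposal)
        # Always accept uphill moves; accept downhill moves with probability
        # equal to the ratio of the new plausibility to the old.
        if np.log(rng.uniform()) < logp_new - logp:
            theta, logp = proposal, logp_new
        samples.append(theta.copy())
    return np.array(samples)
```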

JB: And what’s "Bayesian statistical inference"? I’m sorry, I know this subject deserves a semester-long graduate course. But like a bad science journalist, I will ask you to distill it down to a few sentences! Sometime I’ll do a whole series of This Week’s Finds about statistical inference, but not now.

NU: I can distill it to one sentence: in this context, it’s a branch of statistics which allows you to assign probabilities to different settings of model parameters, based on how well those settings cause the model to reproduce the observed data.

The more common "frequentist" approach to statistics doesn’t allow you to assign probabilities to model parameters. It has a different take on probability. As a Bayesian, you assume the observed data is known and talk about probabilities of hypotheses (here, model parameters). As a frequentist, you assume the hypothesis is known (hypothetically), and talk about probabilities of data that could result from it. They differ fundamentally in what you treat as known (data, or hypothesis) and what probabilities are applied to (hypothesis, or data).

JB: Okay, and one final question: sometimes you say "plausibility" and sometimes you say "probability". Are you trying to distinguish these, or say they’re the same?

NU: I am using "probability" as a technical term which quantifies how "plausible" a hypothesis is. Maybe I should just stick to "probability".

JB: Great. Thanks for suffering through that dissection of what you said.

I think I can summarize, in a sloppy way, as follows. You take a model with a bunch of adjustable knobs, and you use some data to guess the probability that the right settings of these knobs lie within any given range. Then, you can use this model to make predictions. But these predictions are only probabilistic.

Okay, then what?

NU: This is the basic uncertainty analysis. There are several things that one can do with it. One is to look at learning rates. You can generate "hypothetical data" that we might observe in the future, by taking a model prediction and adding some "observation noise" to it. (This presumes that the model is perfect, which is not the case, but it represents a lower bound on uncertainty.) Then feed the hypothetical data back into the uncertainty analysis to calculate how much our uncertainty in the future could be reduced as a result of "observing" this "new" data. See Keller and McInerney for an example.

Another thing to do is decision making under uncertainty. For this, you need an economic integrated assessment model (or some other kind of policy model). Such a model typically has a simple description of the world economy connected to a simple description of the global climate: the world population and the economy grow at a certain rate which is tied to the energy sector, policies to reduce fossil carbon emissions have economic costs, fossil carbon emissions influence the climate, and climate change has economic costs. Different models are more or less explicit about these components (is the economy treated as a global aggregate or broken up into regional economies, how realistic is the climate model, how detailed is the energy sector model, etc.)

If you feed some policy (a course of emissions reductions over time) into such a model, it will calculate the implied emissions pathway and emissions abatement costs, as well as the implied climate change and economic damages. The net costs or benefits of this policy can be compared with a "business as usual" scenario with no emissions reductions. The net benefit is converted from "dollars" to "utility" (accounting for things like the concept that a dollar is worth more to a poor person than a rich one), and some discounting factor is applied (to downweight the value of future utility relative to present). This gives "the (discounted) utility of the proposed policy".

So far this has not taken uncertainty into account. In reality, we’re not sure what kind of climate change will result from a given emissions trajectory. (There is also economic uncertainty, such as how much it really costs to reduce emissions, but I’ll concentrate on the climate uncertainty.) The uncertainty analysis I’ve described can give probability weights to different climate change scenarios. You can then take a weighted average over all these scenarios to compute the "expected" utility of a proposed policy.
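In code, the expected-utility step is just a probability-weighted average. A hedged sketch, in which `utility_of` is a placeholder for whatever integrated assessment model computes the discounted utility of one policy under one setting of the climate parameters:

```python
import numpy as np

def expected_utility(policy, climate_samples, weights, utility_of):
    """Probability-weighted average of discounted utility over climate uncertainty.

    `climate_samples` are parameter settings drawn by the Monte Carlo analysis,
    `weights` their probability weights (uniform if the samples already follow
    the posterior distribution), and `utility_of(policy, params)` a placeholder
    for running the integrated assessment model once.
    """
    utilities = np.array([utility_of(policy, params) for params in climate_samples])
    return np.average(utilities, weights=weights)
```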

Finally, you optimize over all possible abatement policies to find the one that has the maximum expected discounted utility. See Keller et al. for a simple conceptual example of this applied to a learning scenario, and this book for a deeper discussion:

• William Nordhaus, A Question of Balance, Yale U. Press, New Haven, 2008.

It is now possible to start elaborating on this theme. For instance, in the future learning problem, you can modify the "hypothetical data" to deviate from what your climate model predicts, in order to consider what would happen if the model is wrong and we observe something "unexpected". Then you can put that into an integrated assessment model to study how much being wrong would cost us, and how fast we need to learn that we’re wrong in order to change course, policy-wise. See that paper by Oppenheimer et al. for an example.

JB: Thanks for that tour of ideas! It sounds fascinating, important, and complex.

Now I’d like to move on to talking about a specific paper of yours. It’s this one:

• Nathan Urban and Klaus Keller, Probabilistic hindcasts and projections of the coupled climate, carbon cycle, and Atlantic meridional overturning circulation system: A Bayesian fusion of century-scale observations with a simple model, Tellus A 62 (2010), 737-750.

Before I ask you about the paper, let me start with something far more basic: what the heck is the "Atlantic meridional overturning circulation" or "AMOC"?

I know it has something to do with ocean currents, and how warm water moves north near the surface of the Atlantic and then gets cold, plunges down, and goes back south. Isn’t this related to the "Gulf Stream", that warm current that supposedly keeps Europe warmer than it otherwise would be?

NU: Your first sentence pretty much sums up the basic dynamics: the warm water from the tropics cools in the North Atlantic, sinks (because it’s colder and denser), and returns south as deep water. As the water cools, the heat it releases to the atmosphere warms the region.

This is the "overturning circulation". But it’s not synonymous with the Gulf Stream. The Gulf Stream is a mostly wind-driven phenomenon, not a density driven current. The "AMOC" has both wind driven and density driven components; the latter is sometimes referred to as the "thermohaline circulation" (THC), since both heat and salinity are involved. I haven’t gotten into salinity yet, but it also influences the density structure of the ocean, and you can read Stefan Rahmstorf’s review articles for more (read the parts on non-linear behavior):

• Stefan Rahmstorf, The thermohaline ocean circulation: a brief fact sheet.

• Stefan Rahmstorf, Thermohaline ocean circulation, in Encyclopedia of Quaternary Sciences, edited by S. A. Elias, Elsevier, Amsterdam 2006.



JB: Next, why are people worrying about the AMOC? I know some scientists have argued that shortly after the last ice age, the AMOC stalled out due to lots of fresh water from Lake Agassiz, a huge lake that used to exist in what’s now Canada, formed by melting glaciers. The idea, I think, was that this event temporarily killed the Gulf Stream and made temperatures in Europe drop enormously.

Do most people believe that story these days?

NU: You’re speaking of the "Younger Dryas" abrupt cooling event around 11 to 13 thousand years ago. The theory is that a large pulse of fresh water from Lake Agassiz lessened the salinity in the Atlantic and made it harder for water to sink, thus shutting down the overturning circulation and decreasing its release of heat in the North Atlantic. This is still a popular theory, but geologists have had trouble tracing the path of a sufficiently large supply of fresh water, at the right place and the right time, to shut down the AMOC. There was a paper earlier this year claiming to have finally done this:

• Julian B. Murton, Mark D. Bateman, Scott R. Dallimore, James T. Teller and Zhirong Yang, Identification of Younger Dryas outburst flood path from Lake Agassiz to the Arctic Ocean, Nature 464 (2010), 740-743.

but I haven’t read it yet.

The worry is that this could happen again — not because of a giant lake draining into the Atlantic, but because of warming (and the resulting changes in precipitation) altering the thermal and salinity structure of the ocean. It is believed that the resulting shutdown of the AMOC will cause the North Atlantic region to cool, but there is still debate over what it would take to cause it to shut down. It’s also debated whether this is one of the climate "tipping points" that people talk about — whether a certain amount of warming would trigger a shutdown, and whether that shutdown would be "irreversible" (or difficult to reverse) or "abrupt".

Cooling Europe may not be a bad thing in a warming world. In fact, in a warming world, Europe might not actually cool in response to an AMOC shutdown; it might just warm more slowly. The problem is if the cooling is abrupt (and hard to adapt to), or prolonged (permanently shifting climate patterns relative to the rest of the world). Perhaps worse than the direct temperature change could be the impacts on agriculture or ocean ecosystems, resulting from major reorganizations of regional precipitation or ocean circulation patterns.

JB: So, part of your paper consists of modelling the AMOC and how it interacts with the climate and the carbon cycle. Let’s go through this step by step.

First: how do you model the climate? You say you use "the DOECLIM physical climate component of the ACC2 model, which is an energy balance model of the atmosphere coupled to a one-dimensional diffusive ocean model". I guess these are well-known ideas in your world. But I don’t even know what the acronyms stand for! Could you walk us through these ideas in a gentle way?

NU: Don’t worry about the acronyms; they’re just names people have given to particular models.

The ACC2 model is a computer model of both the climate and the carbon cycle. The climate part of our model is called DOECLIM, which I’ve used to replace the original climate component of ACC2. An "energy balance model" is the simplest possible climate model, and is a form of "box model" that I mentioned above. It treats the Earth as a big heat sink that you dump energy into (e.g., by adding greenhouse gases). Given the laws of thermodynamics, you can compute how much temperature change you get from a given amount of heat input.

This energy balance model of the atmosphere is "zero dimensional", which means that it treats the Earth as a featureless sphere, and doesn’t attempt to keep track of how heat flows or temperature changes at different locations. There is no three dimensional circulation of the atmosphere or anything like that. The atmosphere is just a "lump of heat-absorbing material".

The atmospheric "box of heat" is connected to two other boxes, which are land and ocean. In DOECLIM, "land" is just another featureless lump of material, with a different heat capacity than air. The "ocean" is more complicated. Instead of a uniform box of water with a single temperature, the ocean is "one dimensional", meaning that it has depth, and temperature is allowed to vary with depth. Heat penetrates from the surface into the deep ocean by a diffusion process, which is intended to mimic the actual circulation-driven penetration of heat into the ocean. It’s worth treating the ocean in more detail since oceans are the Earth’s major heat sink, and therefore control how quickly the planet can change temperature.
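For concreteness, the very simplest version of such an energy balance model (zero-dimensional, with no ocean, and with illustrative constants only, so not DOECLIM itself) looks like this in code:

```python
import numpy as np

def energy_balance(forcing, climate_sensitivity=3.0, heat_capacity=8.0, dt=1.0):
    """Zero-dimensional energy balance model: C dT/dt = F(t) - lambda * T.

    The feedback parameter lambda is chosen so that a CO2 doubling (a forcing of
    about 3.7 W/m^2) eventually warms the planet by `climate_sensitivity` degrees C.
    `heat_capacity` is in W yr m^-2 K^-1; all numbers here are illustrative.
    """
    lam = 3.7 / climate_sensitivity
    T = 0.0
    temperatures = []
    for F in forcing:                 # forcing in W/m^2, one value per time step
        T += dt * (F - lam * T) / heat_capacity
        temperatures.append(T)
    return np.array(temperatures)

# Example: warming response to a forcing that ramps up linearly over 150 years.
warming = energy_balance(np.linspace(0.0, 3.7, 150))
```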

The three parameters in the DOECLIM model which we treat as uncertain are the climate (temperature) sensitivity to CO2, the vertical mixing rate of heat into the ocean, and the strength of the "aerosol indirect effect" (what kind of cooling effect industrial aerosols in the atmosphere create due to their influence on cloud behavior).

JB: Okay, that’s clear enough. But at this point I have to raise an issue about models in general. As you know, a lot of climate skeptics like to complain about the fallibility of models. They would surely become even more skeptical upon hearing that you’re treating the Earth as a featureless sphere with the same temperature throughout at any given time — and treating the temperature of ocean water as depending only on the depth, not the location. Why are you simplifying things so much? How could your results possibly be relevant to the real world?

Of course, as a mathematical physicist, I know the appeal of simple models. I also know the appeal of reducing the number of dimensions. I spent plenty of time studying quantum gravity in the wholly unrealistic case of a universe with one less dimension than our real world! Reducing the number of dimensions makes the math a lot simpler. And simplified models give us a lot of insight which — with luck — we can draw upon when tackling the really hard real-world problems. But we have to be careful: they can also lead us astray.

How do you think about results obtained from simplified climate models? Are they just mathematical warmup exercises? That would be fine — I have no problem with that, as long as we’re clear about it. Or are you hoping that they give approximately correct answers?

NU: I use simple models because they’re fast and it’s easier to expose and explore their assumptions. My attitude toward simple models is a little of both the points of view you suggest: partly proof of concept, but also hopefully approximately correct, for the questions I’m asking. Let me first argue for the latter perspective.

If you’re using a zero dimensional model, you can really only hope to answer "zero dimensional questions", i.e. about the globally averaged climate. Once you’ve simplified your question by averaging over a lot of the complexity of the data, you can hope that a simple model can reproduce the remaining dynamics. But you shouldn’t just hope. When using simple models, it’s important to test the predictions of their components against more complex models and against observed data.

You can show, for example, that as far as global average surface temperature is concerned, even simpler energy balance models than DOECLIM (e.g., without a 1D ocean) can do a decent job of reproducing the behavior of more complex models. See, e.g.:

• Isaac M. Held, Michael Winton, Ken Takahashi, Thomas Delworth, Fanrong Zeng and Geoffrey K. Vallis, Probing the fast and slow components of global warming by returning abruptly to preindustrial forcing, Journal of Climate 23 (2010), 2418-2427.

for a recent study. The differences between complex models can be captured merely by retuning the "effective parameters" of the simple model. For example, many of the complexities of different feedback effects can be captured by a tunable climate sensitivity parameter in the simple model, representing the total feedback. By turning this sensitivity "knob" in the simple model, you can get it to behave like complex models which have different feedbacks in them.

There is a long history in climate science of using simple models as "mechanistic emulators" of more complex models. The idea is to put just enough physics into the simple model to get it to reproduce some specific averaged behavior of the complex model, but no more. The classic "mechanistic emulator" used by the Intergovernmental Panel on Climate Change is called MAGICC. BERN-CC is another model frequently used by the IPCC for carbon cycle scenario analysis — that is, converting CO2 emissions scenarios to atmospheric CO2 concentrations. A simple model that people can play around with themselves on the Web may be found here:

• Ben Matthews, Chooseclimate.

Obviously a simple model cannot reproduce all the behavior of a more complex model. But if you can provide evidence that it reproduces the behavior you’re interested in for a particular problem, it is arguably at least as "approximately correct" as the more complex model you validate it against, for that specific problem. (Whether the more complex model is an "approximately correct" representation of the real world is a separate question!)

In fact, simple models are arguably more useful than more complex ones for certain applications. The problem with complex models is, well, their complexity. They make a lot of assumptions, and it’s hard to test all of them. Simpler models make fewer assumptions, so you can test more of them, and look at the sensitivity of your conclusions to your assumptions.

If I take all the complex models used by the IPCC, they will have a range of different climate sensitivities. But what if the actual climate sensitivity is above or below that range, because all the complex models have limitations? I can’t easily explore that possibility in a complex model, because "climate sensitivity" isn’t a knob I can turn. It’s an emergent property of many different physical processes. If I want to change the model’s climate sensitivity, I might have to rewrite the cloud physics module to obey different dynamical equations, or something complicated like that — and I still won’t be able to produce a specific sensitivity. But in a simple model, "climate sensitivity" is a "knob", and I can turn it to any desired value above, below, or within the IPCC range to see what happens.

After that defense of simple models, there are obviously large caveats. Even if you can show that a simple model can reproduce the behavior of a more complex one, you can only test it under a limited range of assumptions about model parameters, forcings, etc. It’s possible to push a simple model too far, into a regime where it stops reproducing what a more complex model would do. Simple models can also neglect relevant feedbacks and other processes. For example, in the model I use, global warming can shut down the AMOC, but changes in the AMOC don’t feed back to cool the global temperature. But the cooling from an AMOC weakening should itself slow further AMOC weakening due to global warming. The AMOC model we use is designed to partly compensate for the lack of explicit feedback of ocean heat transport on the temperature forcing, but it’s still an approximation.

In our paper we discuss what we think are the most important caveats of our simple analysis. Ultimately we need to be able to do this sort of analysis with more complex models as well, to see how robust our conclusions are to model complexity and structural assumptions. I am working in that direction now, but the complexities involved might be the subject of another interview!

JB: I’d be very happy to do another interview with you. But you’re probably eager to finish this one first. So we should march on.

But I can’t resist one more comment. You say that models even simpler than DOECLIM can emulate the behavior of more complex models. And then you add, parenthetically, "whether the more complex model is an ‘approximately correct’ representation of the real world is a separate question!" But I think that latter question is the one that ordinary people find most urgent. They won’t be reassured to know that simple models do a good job of mimicking more complicated models. They want to know how well these models mimic reality!

But maybe we’ll get to that when we talk about the Markov chain Monte Carlo procedure and how you use that to estimate the probability that the "knobs" (that is, parameters) in your model are set correctly? Presumably in that process we learn a bit about how well the model matches real-world data?

If so, we can go on talking about the model now, and come back to this point in due time.

NU: The model’s ability to represent the real world is the most important question. But it’s not one I can hope to fully answer with a simple model. In general, you won’t expect a model to exactly reproduce the data. Partly this is due to model imperfections, but partly it’s due to random "natural variability" in the system. (And also, of course, to measurement error.) Natural variability is usually related to chaotic or otherwise unpredictable atmosphere-ocean interactions, e.g. at the scale of weather events, El Niño, etc. Even a perfect model can’t be expected to predict those. With a simple model it’s really hard to tell how much of the discrepancy between model and data is due to model structural flaws, and how much is attributable to expected "random fluctuations", because simple models are too simple to generate their own "natural variability".

To really judge how well models are doing, you have to use a complex model and see how much of the discrepancy can be accounted for by the natural variability it predicts. You also have to get into a lot of detail about the quality of the observations, which means looking at spatial patterns and not just global averages. This is the sort of thing done in model validation studies, "detection and attribution" studies, and observation system papers. But it’s beyond the scope of our paper. That’s why I said the best I can do is to use simple models that perform as well as complex models for limited problems. They will of course suffer any limitations of the complex models to which they’re tuned, and if you want to read about those, you should read those modeling papers.

As far as what I can do with a simple model, yes, the Bayesian probability calculation using MCMC is a form of data-model comparison, in that it gives higher weight to model parameter settings that fit the data better. But it’s not exactly a form of "model checking", because Bayesian probability weighting is a relative procedure. It will be quite happy to assign high probability to parameter settings that fit the data terribly, as long as they still fit better than all the other parameter settings. A Bayesian probability isn’t an absolute measure of model quality, and so it can’t be used to check models. This is where classical statistical measures of "goodness of fit" can be helpful. For a philosophical discussion, see:

• Andrew Gelman and Cosma Rohilla Shalizi, Philosophy and the practice of Bayesian statistics, available as arXiv:1006.3868.

That being said, you do learn about model fit during the MCMC procedure in its attempt to sample highly probable parameter settings. When you get to the best fitting parameters, you look at the difference between the model fit and the observations to get an idea of what the "residual error" is — that is, everything that your model wasn’t able to predict.

I should add that complex models disagree more about the strength of the AMOC than they do about more commonly discussed climate variables, such as surface temperature. This can be seen in Figure 10.15 of the IPCC AR4 WG1 report: there is a cluster of models that all tend to agree with the observed AMOC strength, but there are also some models that don’t. Some of those that don’t are known to have relatively poor physical modeling of the overturning circulation, so this is to be expected (i.e., the figure looks like a worse indictment of the models than it really is). But there is still disagreement between some of the "higher quality" models. Part of the problem is that we have poor historical observations of the AMOC, so it’s sometimes hard to tell what needs fixing in the models.

Since the complex models don’t all agree about the current state of the AMOC, one can (and should) question using a simple AMOC model which has been tuned to a particular complex model. Other complex models will predict something altogether different. (And in fact, the model that our simple model was tuned to is also simpler than the IPCC AR4 models.) In our analysis we try to get around this model uncertainty by including some tunable parameters that control both the initial strength of the AMOC and how quickly it weakens. By altering those parameters, we try to span the range of possible outcomes predicted by complex models, allowing the parameters to take on whatever range of values is compatible with the (noisy) observations. This, at a minimum, leads to significant uncertainty in what the AMOC will do.

I’m okay with the idea of uncertainty — that is, after all, what my research is about. But ultimately, even projections with wide error bars still have to be taken with a grain of salt, if the most advanced models still don’t entirely agree on simple questions like the current strength of the AMOC.

JB: Okay, thanks. Clearly the question of how well your model matches reality is vastly more complicated than what you started out trying to tell me: namely, what your model is. Let’s get back to that.

To recap, your model consists of three interacting parts: a model of the climate, a model of the carbon cycle, and a model of the Atlantic meridional overturning circulation (or "AMOC"). The climate model, called "DOECLIM", itself consists of three interacting parts (a toy numerical sketch of the "box of heat" idea follows the list):

• the "land" (modeled as a "box of heat"),

• the "atmosphere" (modeled as a "box of heat"), and

• the "ocean" (modeled as a one-dimensional object, so that temperature varies with depth).

Next: how do you model the carbon cycle?

NU: We use a model called NICCS (nonlinear impulse-response model of the coupled carbon-cycle climate system). This model started out as an impulse response model, but because of nonlinearities in the carbon cycle, it was augmented by some box model components. NICCS takes fossil carbon emissions to the air as input, and calculates how that carbon ends up being partitioned between the atmosphere, land (vegetation and soil), and ocean.

For the ocean, it has an impulse response model of the vertical advective/diffusive transport of carbon in the ocean. This is supplemented by a differential equation that models nonlinear ocean carbonate buffering chemistry. It doesn’t have any explicit treatment of ocean biology. For the terrestrial biosphere, it has a box model of the carbon cycle. There are four boxes, each containing some amount of carbon. They are "woody vegetation", "leafy vegetation", "detritus" (decomposing organic matter), and "humus" (more stable organic soil carbon). The box model has some equations describing how quickly carbon gets transported between these boxes (or back to the atmosphere).

In addition to carbon emissions, both the land and ocean modules take global temperature as an input. (So, there should be a red arrow pointing to the "ocean" too — this is a mistake in the figure.) This is because there are temperature-dependent feedbacks in the carbon cycle. In the ocean, temperature determines how readily CO2 will dissolve in water. On land, temperature influences how quickly organic matter in soil decays ("heterotrophic respiration"). There are also purely carbon cycle feedbacks, such as the buffering chemistry mentioned above, and also "CO2 fertilization", which quantifies how plants can grow better under elevated levels of atmospheric CO2.

The NICCS model also originally contained an impulse response model of the climate (temperature as a function of CO2), but we removed that and replaced it with DOECLIM. The NICCS model itself is tuned to reproduce the behavior of a more complex Earth system model. The three key uncertain parameters treated in our analysis control the soil respiration temperature feedback, the CO2 fertilization feedback, and the vertical mixing rate of carbon into the ocean.
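
To make the box-model bookkeeping concrete, here is a toy four-box terrestrial carbon cycle in Python. The transfer rates and feedback forms are invented for illustration; they are not the published NICCS parameters:

    import numpy as np

    # Boxes: woody vegetation, leafy vegetation, detritus, humus (GtC); made-up sizes.
    C = np.array([500.0, 100.0, 120.0, 1500.0])

    # Fraction of each box turned over per year (assumed placeholder rates).
    turnover = {"wood": 0.02, "leaf": 1.0, "detritus": 0.10, "humus": 0.005}

    def npp(co2_ppm):
        """CO2 fertilization: primary production grows with CO2 (assumed log form)."""
        return 60.0 * (1.0 + 0.4 * np.log(co2_ppm / 280.0))   # GtC/yr

    def soil_respiration_factor(dT):
        """Q10-style temperature feedback on decay (assumed Q10 = 2)."""
        return 2.0 ** (dT / 10.0)

    def step(co2_ppm, dT, dt=1.0):
        """Advance the boxes one year; return net flux to the atmosphere (GtC/yr)."""
        global C
        growth = npp(co2_ppm)
        f = soil_respiration_factor(dT)
        wood, leaf, det, hum = C
        dwood = 0.6 * growth - turnover["wood"] * wood
        dleaf = 0.4 * growth - turnover["leaf"] * leaf
        ddet  = turnover["wood"] * wood + turnover["leaf"] * leaf \
                - f * turnover["detritus"] * det
        dhum  = 0.3 * f * turnover["detritus"] * det - f * turnover["humus"] * hum
        C += dt * np.array([dwood, dleaf, ddet, dhum])
        to_atmosphere = 0.7 * f * turnover["detritus"] * det + f * turnover["humus"] * hum
        return to_atmosphere - growth     # positive = land is a net carbon source

    # Run a century with fixed (hypothetical) CO2 and warming, just to exercise it.
    for year in range(100):
        net_source = step(co2_ppm=400.0, dT=1.0)
    print(C, net_source)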

JB: Okay. Finally, how do you model the AMOC?

NU: This is another box model. There is a classic 1961 paper by Stommel:

• Henry Stommel, Thermohaline convection with two stable regimes of flow, Tellus 13 (1961), 224-230.

which models the overturning circulation using two boxes of water, one representing water at high latitudes and one at low latitudes. The boxes contain heat and salt. Together, temperature and salinity determine water density, and density differences drive the flow of water between boxes.

It has been shown that such box models can have interesting nonlinear dynamics, exhibiting both hysteresis and threshold behavior. Hysteresis means that if you warm the climate and then cool it back down to its original temperature, the AMOC doesn’t return to its original state. Threshold behavior means that the system exhibits multiple stable states (such as an ocean circulation with or without overturning), and you can pass a "tipping point" beyond which the system flips from one stable equilibrium to another. Ultimately, this kind of dynamics means that it can be hard to return the AMOC to its historic state if it shuts down from anthropogenic climate change.
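
A minimal Stommel-style sketch in Python makes the hysteresis visible: sweep a freshwater forcing up and then back down, and the circulation does not return to where it started. The parameter values are invented for illustration and are not taken from any calibrated model:

    import numpy as np

    # Two-box sketch: the overturning q is driven by the density contrast between
    # a high-latitude and a low-latitude box; freshwater forcing F freshens the
    # high-latitude box.  All parameter values below are assumed placeholders.
    alpha, beta, k = 0.2, 0.8, 1.0   # thermal/haline weights and a flow constant
    dT = 1.0                         # fixed temperature contrast between the boxes

    def dS_dt(dS, F):
        q = k * (alpha * dT - beta * dS)   # overturning strength
        return F - abs(q) * dS             # freshwater input vs. exchange between boxes

    def equilibrate(dS, F, steps=20000, dt=0.01):
        for _ in range(steps):
            dS += dt * dS_dt(dS, F)
        return dS

    # Sweep the forcing up and then back down, starting each step from the previous
    # equilibrium: once the flow has collapsed, undoing the forcing does not restore it.
    forcings = np.linspace(0.0, 0.12, 25)
    dS, up, down = 0.0, [], []
    for F in forcings:
        dS = equilibrate(dS, F)
        up.append(k * (alpha * dT - beta * dS))
    for F in forcings[::-1]:
        dS = equilibrate(dS, F)
        down.append(k * (alpha * dT - beta * dS))
    print(up[0], down[-1])   # same forcing (F = 0), two different circulation states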

The extent to which the real AMOC exhibits hysteresis and threshold behavior remains an open question. The model we use in our paper is a box model that has this kind of nonlinearity in it:

• Kirsten Zickfeld, Thomas Slawig and Stefan Rahmstorf, A low-order model for the response of the Atlantic thermohaline circulation to climate change, Ocean Dynamics 54 (2004), 8-26.

Instead of Stommel’s two boxes, this model uses four.

It has three surface water boxes (north, south, and tropics), and one box for an underlying pool of deep water. Each box has its own temperature and salinity, and flow is driven by density gradients between them. Each box also has its own "relaxation temperature", which it tries to return to after a perturbation; these parameters are set in a way that attempts to compensate for the lack of an explicit feedback on global temperature. The model’s parameters are tuned to match the output of an intermediate-complexity climate model.

The input to the model is a change in global temperature (temperature anomaly). This is rescaled to produce different temperature anomalies over each of the three surface boxes (accounting for the fact that different latitudes are expected to warm at different rates). There are similar scalings to determine how much freshwater input, from both precipitation changes and meltwater, is expected in each of the surface boxes due to a temperature change.

The main uncertain parameter is the "hydrological sensitivity" of the North Atlantic surface box, controlling how much freshwater goes into that region in a warming scenario. This is the main effect by which the AMOC can weaken. Actually, anything that changes the density of water alters the AMOC, so the overturning can weaken due to salinity changes from freshwater input, or from direct temperature changes in the surface waters. However, the former is more uncertain than the latter, so we focus on freshwater in our uncertainty analysis.
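
As a tiny illustration of what that parameter does (with a placeholder value, not our calibrated one), the freshwater forcing is essentially a sensitivity times the warming:

    # Illustrative only: the "hydrological sensitivity" h converts a global
    # warming anomaly into extra freshwater input to the North Atlantic box.
    h_sv_per_K = 0.015          # Sv of extra freshwater per K of warming (assumed)

    def freshwater_forcing(global_anomaly_K):
        return h_sv_per_K * global_anomaly_K

    for dT in (1.0, 2.0, 4.0):
        print(dT, "K of warming ->", freshwater_forcing(dT), "Sv of freshwater")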

JB: Great! I see you’re emphasizing the uncertain parameters; we’ll talk more later about how you estimate these parameters, though you’ve already sort of sketched the idea.

So: you’ve described to me the three components of your model: the climate, the carbon cycle and the Atlantic meridional overturning circulation (AMOC). I guess to complete the description of your model, you should say how these components interact — right?

NU: Right. There is a two-way coupling between the climate module (DOECLIM) and the carbon cycle module (NICCS). The global temperature from the climate module is fed into the carbon cycle module to predict temperature-dependent feedbacks. The atmospheric CO2 predicted by the carbon cycle module is fed into the climate module to predict temperature from its greenhouse effect. There is a one-way coupling between the climate module and the AMOC module. Global temperature alters the overturning circulation, but changes in the AMOC do not themselves alter global temperature.



There is no coupling between the AMOC module and the carbon cycle module, although there technically should be: both the overturning circulation and the uptake of carbon by the oceans depend on ocean vertical mixing processes. Similarly, the climate and carbon cycle modules have their own independent parameters controlling the vertical mixing of heat and carbon, respectively, in the ocean. In reality these mixing rates are related to each other. In this sense, the modules are not fully coupled, insofar as they have independent representations of physical processes that are not really independent of each other. This is discussed in our caveats.
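
Schematically, the coupling can be pictured as a loop like the following toy Python structure. It only shows which module feeds which; none of the numbers or formulas are from our actual code:

    import math

    class ToyClimate:
        def step(self, co2_ppm):
            # crude instantaneous response: ~3 K per CO2 doubling (assumed)
            return 3.0 * math.log2(co2_ppm / 280.0)

    class ToyCarbon:
        def __init__(self):
            self.co2_ppm = 280.0
        def step(self, emissions_gtc, temp_anomaly):
            airborne = 0.45 + 0.01 * temp_anomaly   # sinks weaken as it warms (assumed)
            self.co2_ppm += airborne * emissions_gtc / 2.13   # ~2.13 GtC per ppm
            return self.co2_ppm

    class ToyAMOC:
        def step(self, temp_anomaly):
            return max(0.0, 22.0 - 2.0 * temp_anomaly)   # Sv; made-up weakening law

    climate, carbon, amoc = ToyClimate(), ToyCarbon(), ToyAMOC()
    co2, temp = 280.0, 0.0
    for year in range(100):
        temp = climate.step(co2)          # CO2 -> global temperature
        co2 = carbon.step(10.0, temp)     # emissions + temperature -> CO2 (two-way)
        overturning = amoc.step(temp)     # temperature -> AMOC strength (one-way)
    print(round(temp, 2), round(co2, 1), round(overturning, 1))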

JB: There’s one other thing that’s puzzling me. The climate model treats the "ocean" as a single entity whose temperature varies with depth but not location. The AMOC model involves four "boxes" of water: north, south, tropical, and deep ocean water, each with its own temperature. That seems a bit schizophrenic, if you know what I mean. How are these temperatures related in your model?

You say "there is a one-way coupling between the climate module and the AMOC module." Does the ocean temperature in the climate model affect the temperatures of the four boxes of water in the AMOC model? And if so, how?

NU: The surface temperature in the climate model affects the temperatures of the individual surface boxes in the AMOC model. The climate model works only with globally averaged temperature. To convert a (change in) global temperature to (changes in) the temperatures of the surface boxes of the AMOC model, there is a "pattern scaling" coefficient which converts global temperature (anomaly) to temperature (anomaly) in a particular box.

That is, if the climate model predicts 1 °C of warming globally, the warming in the North Atlantic box, the tropics, and so on might be more or less than 1 °C. For example, we generally expect to see "polar amplification", where the high northern latitudes warm more quickly than the global average. These latitudinal scaling coefficients are derived from the output of a more complex climate model under a particular warming scenario, and are assumed to be constant (independent of warming scenario).

The temperature from the climate model which is fed into the AMOC model is the global (land+ocean) average surface temperature, not the DOECLIM sea surface temperature alone. This is because the pattern scaling coefficients in the AMOC model were derived relative to global temperature, not sea surface temperature.
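
In code, the conversion is just a fixed multiplier per box. The coefficients below are made up purely to illustrate the idea:

    # Hypothetical pattern-scaling coefficients (made-up values; the real ones
    # are derived from a complex model's output).
    pattern = {"north": 1.5, "tropics": 0.8, "south": 0.9}

    def box_anomaly(global_anomaly_C, box):
        """Convert a global-mean temperature anomaly into a per-box anomaly."""
        return pattern[box] * global_anomaly_C

    print(box_anomaly(1.0, "north"))   # 1 °C globally gives 1.5 °C in this box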

JB: Okay. That’s a bit complicated, but I guess some sort of consistency is built in, which prevents the climate model and the AMOC model from disagreeing about the ocean temperature. That’s what I was worrying about.

Thanks for leading us through this model. I think this level of detail is just enough to get a sense for how it works. And now that I know roughly what your model is, I’m eager to see how you used it and what results you got!

But I’m afraid many of our readers may be nearing the saturation point. After all, I’ve been talking with you for days, with plenty of time to mull it over, while they will probably read this interview in one solid blast! So, I think we should quit here and continue in the next episode.

So, everyone: I’m afraid you’ll just have to wait, clutching your chair in suspense, for the answer to the big question: will the AMOC get turned off, or not? Or really: how likely is such an event, according to this simple model?


…we’re entering dangerous territory and provoking an ornery beast. Our climate system has proven that it can do very strange things. – Wallace S. Broecker

