This Week’s Finds (Week 307)

14 December, 2010

I’d like to take a break from interviews and explain some stuff I’m learning about. I’m eager to tell you about some papers in the book Tim Palmer helped edit, Stochastic Physics and Climate Modelling. But those papers are highly theoretical, and theories aren’t very interesting until you know what they’re theories of. So today I’ll talk about "El Niño", which is part of a very interesting climate cycle. Next time I’ll get into more of the math.

I hadn’t originally planned to get into so much detail on the El Niño, but this cycle is a big deal in southern California. In the city of Riverside, where I live, it’s very dry. There is a small river, but it’s just a trickle of water most of the time: there’s a lot less "river" than "side". It almost never rains between March and December. Sometimes, during a "La Niña", it doesn’t even rain in the winter! But then sometimes we have an "El Niño" and get huge floods in the winter. At this point, the tiny stream that gives Riverside its name swells to a huge raging torrent. The difference is very dramatic.

So, I’ve always wanted to understand how the El Niño cycle works — but whenever I tried to read an explanation, I couldn’t follow it!

I finally broke that mental block when I read some stuff on William Kessler‘s website. He’s an expert on the El Niño phenomenon who works at the Pacific Marine Environmental Laboratory. One thing I like about his explanations is that he says what we do know about the El Niño, and also what we don’t know. We don’t know what triggers it!

In fact, Kessler says the El Niño would make a great research topic for a smart young scientist. In an email to me, which he has allowed me to quote, he said:

We understand lots of details but the big picture remains mysterious. And I enjoyed your interview with Tim Palmer because it brought out a lot of the sources of uncertainty in present-generation climate modeling. However, with El Niño, the mystery is beyond Tim’s discussion of the difficulties of climate modeling. We do not know whether the tropical climate system on El Niño timescales is stable (in which case El Niño needs an external trigger, of which there are many candidates) or unstable. In the 80s and 90s we developed simple "toy" models that convinced the community that the system was unstable and El Niño could be expected to arise naturally within the tropical climate system. Now that is in doubt, and we are faced with a fundamental uncertainty about the very nature of the beast. Since none of us old farts has any new ideas (I just came back from a conference that reviewed this stuff), this is a fruitful field for a smart young person.

So, I hope some smart young person reads this and dives into working on El Niño!

But let’s start at the beginning. Why did I have so much trouble understanding explanations of the El Niño? Well, first of all, I’m an old fart. Second, most people are bad at explaining stuff: they skip steps, use jargon they haven’t defined, and so on. But third, climate cycles are hard to explain. There’s a lot about them we don’t understand — as Kessler’s email points out. And they also involve a kind of "cyclic causality" that’s a bit tough to mentally process.

At least where I come from, people find it easy to understand linear chains of causality, like "A causes B, which causes C". For example: why is the king’s throne made of gold? Because the king told his minister "I want a throne of gold!" And the minister told the servant, "Make a throne of gold!" And the servant made the king a throne of gold.

Now that’s what I call an explanation! It’s incredibly satisfying, at least if you don’t wonder why the king wanted a throne of gold in the first place. It’s easy to remember, because it sounds like a story. We hear a lot of stories like this when we’re children, so we’re used to them. My example sounds like the beginning of a fairy tale, where the action is initiated by a "prime mover": the decree of a king.

There’s something a bit trickier about cyclic causality, like "A causes B, which causes C, which causes A." It may sound like a sneaky trick: we consider "circular reasoning" a bad thing. Sometimes it is a sneaky trick. But sometimes this is how things really work!

Why does big business have such influence in American politics? Because big business hires lots of lobbyists, who talk to the politicians, and even give them money. Why are they allowed to do this? Because big business has such influence in American politics. That’s an example of a "vicious circle". You might like to cut it off — but like a snake holding its tail in its mouth, it’s hard to know where to start.

Of course, not all circles are "vicious". Many are "virtuous".

But the really tricky thing is how a circle can sometimes reverse direction. In academia we worry about this a lot: we say a university can either "ratchet up" or "ratchet down". A good university attracts good students and good professors, who bring in more grant money, and all this makes it even better… while a bad university tends to get even worse, for all the same reasons. But sometimes a good university goes bad, or vice versa. Explaining that transition can be hard.

It’s also hard to explain why a La Niña switches to an El Niño, or vice versa. Indeed, it seems scientists still don’t understand this. They have some models that simulate this process, but there are still lots of mysteries. And even if they get models that work perfectly, they still may not be able to tell a good story about it. Wind and water are ultimately described by partial differential equations, not fairy tales.

But anyway, let me tell you a story about how it works. I’m just learning this stuff, so take it with a grain of salt…

The "El Niño/Southern Oscillation" or "ENSO" is the largest form of variability in the Earth’s climate on times scales greater than a year and less than a decade. It occurs across the tropical Pacific Ocean every 3 to 7 years, and on average every 4 years. It can cause extreme weather such as floods and droughts in many regions of the world. Countries dependent on agriculture and fishing, especially those bordering the Pacific Ocean, are the most affected.

And here’s a cute little animation of it produced by the Australian Bureau of Meteorology:



Let me tell you first about La Niña, and then El Niño. If you keep glancing back at this little animation, I promise you can understand everything I’ll say.

Winds called trade winds blow west across the tropical Pacific. During La Niña years, water at the ocean’s surface moves west with these winds, warming up in the sunlight as it goes. So, warm water collects at the ocean’s surface in the western Pacific. This creates more clouds and rainstorms in Asia. Meanwhile, since surface water is being dragged west by the wind, cold water from below gets pulled up to take its place in the eastern Pacific, off the coast of South America.

I hope this makes sense so far. But there’s another aspect to the story. Because the ocean’s surface is warmer in the western Pacific, it heats the air and makes it rise. So, wind blows west to fill the "gap" left by rising air. This strengthens the westward-blowing trade winds.

So, it’s a kind of feedback loop: the oceans being warmer in the western Pacific helps the trade winds blow west, and that makes the western oceans even warmer.

Get it? This should all make sense so far, except for one thing. There’s one big question, and I hope you’re asking it. Namely:

Why do the trade winds blow west?

If I don’t answer this, my story so far would work just as well if I switched the words "west" and "east". That wouldn’t necessarily mean my story was wrong. It might just mean that there were two equally good options: a La Niña phase where the trade winds blow west, and another phase — say, El Niño — where they blow east! From everything I’ve said so far, the world could be permanently stuck in one of these phases. Or, maybe it could randomly flip between these two phases for some reason.

Something roughly like this last choice is actually true. But it’s not so simple: there’s not a complete symmetry between west and east.

Why not? Mainly because the Earth is turning to the east.

Air near the equator warms up and rises, so new air from more northern or southern regions moves in to take its place. But because the Earth is fattest at the equator, the equator is moving east faster than those other regions. So the incoming air is moving east more slowly than the ground beneath it, and as seen by someone standing on the equator, it blows west. This is an example of the Coriolis effect:
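
(Here’s a rough sense of the speeds involved, with my own illustrative numbers rather than anything from the sources above. A point on the rotating Earth moves east at a speed of about ΩR cos(latitude), where Ω ≈ 7.3 × 10⁻⁵ radians per second is the Earth’s rotation rate and R ≈ 6370 kilometers is its radius. At the equator that’s about 465 meters per second, while at 30° latitude it’s only about 400 meters per second. So air drifting toward the equator from 30° arrives moving tens of meters per second slower to the east than the ground beneath it, and relative to the ground that’s a westward wind.)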



By the way: in case this stuff wasn’t tricky enough already, a wind that blows to the west is called an easterly, because it blows from the east! That’s what happens when you put sailors in charge of scientific terminology. So the westward-blowing trade winds are called "northeasterly trades" and "southeasterly trades" in the picture above. But don’t let that confuse you.

(I also tend to think of Asia as the "Far East" and California as the "West Coast", so I always need to keep reminding myself that Asia is in the west Pacific, while California is in the east Pacific. But don’t let that confuse you either! Just repeat after me until it makes perfect sense: "The easterlies blow west from West Coast to Far East".)

Okay: silly terminology aside, I hope everything makes perfect sense so far. The trade winds have a good intrinsic reason to blow west, but in the La Niña phase they’re also part of a feedback loop where they make the western Pacific warmer… which in turn helps the trade winds blow west.

But then comes an El Niño! Now for some reason the westward winds weaken. This lets the built-up warm water in the western Pacific slosh back east. And with weaker westward winds, less cold water is pulled up to the surface in the east. So, the eastern Pacific warms up. This makes for more clouds and rain in the eastern Pacific — that’s when we get floods in Southern California. And with the ocean warmer in the eastern Pacific, hot air rises there, which tends to counteract the westward winds even more!

In other words, all the feedbacks reverse themselves.

But note: the trade winds never mainly blow east. During an El Niño they still blow west, just a bit less. So, the climate is not flip-flopping between two symmetrical alternatives. It’s flip-flopping between two asymmetrical alternatives.

I hope all this makes sense… except for one thing. There’s another big question, and I hope you’re asking it. Namely:

Why do the westward trade winds weaken?

We could also ask the same question about the start of the La Niña phase: why do the westward trade winds get stronger?

The short answer is that nobody knows. Or at least there’s no one story that everyone agrees on. There are actually several stories… and perhaps more than one of them is true. But now let me just show you the data:



The top graph shows variations in the water temperature of the tropical Eastern Pacific ocean. When it’s hot we have El Niños: those are the red hills in the top graph. The blue valleys are La Niñas. Note that it’s possible to have two El Niños in a row without an intervening La Niña, or vice versa!

The bottom graph shows the "Southern Oscillation Index" or "SOI". This is the air pressure in Tahiti minus the air pressure in Darwin, Australia. You can see those locations here:



So, when the SOI is high, the air pressure is higher in the east Pacific than in the west Pacific. This is what we expect in a La Niña: that’s why the westward trade winds are strong then! Conversely, the SOI is low in the El Niño phase. This variation in the SOI is called the Southern Oscillation.

If you look at the graphs above, you’ll see how one looks almost like an upside-down version of the other. So, the El Niño/La Niña cycle is tightly linked to the Southern Oscillation.

Another thing you’ll see is that the ENSO cycle is far from perfectly periodic! Here’s a graph of the Southern Oscillation Index going back a lot further:



This graph was made by William Kessler. His explanations of the ENSO cycle are the first ones I really understood.

My own explanation here is a slow-motion, watered-down version of his. Any mistakes are, of course, mine. To conclude, I want to quote his discussion of theories about why an El Niño starts, and why it ends. As you’ll see, this part is a bit more technical. It involves three concepts I haven’t explained yet:

  • The "thermocline" is the border between the warmer surface water in the ocean and the cold deep water, 100 to 200 meters below the surface. During the La Niña phase, warm water is blown to the western Pacific, and cold water is pulled up to the surface of the eastern Pacific. So, the thermocline is deeper in the west than the east:

    When an El Niño occurs, the thermocline flattens out:

  • "Oceanic Rossby waves" are very low-frequency waves in the ocean’s surface and thermocline. At the ocean’s surface they are only 5 centimeters high, but hundreds of kilometers across. They move at about 10 centimeters/second, requiring months to years to cross the ocean! The surface waves are mirrored by waves in the thermocline, which are much larger, 10-50 meters in height. When the surface goes up, the thermocline goes down.
  • The "Madden-Julian Oscillation" or "MJO" is the largest form of variability in the tropical atmosphere on time scales of 30-90 days. It’s a pulse that moves east across the Indian Ocean and Pacific ocean at 4-8 meters/second. It manifests itself as patches of anomalously high rainfall and also anomalously low rainfall. Strong Madden-Julian Oscillations are often seen 6-12 months before an El Niño starts.

With this bit of background, let’s read what Kessler wrote:

There are two main theories at present. The first is that the event is initiated by the reflection from the western boundary of the Pacific of an oceanic Rossby wave (type of low-frequency planetary wave that moves only west). The reflected wave is supposed to lower the thermocline in the west-central Pacific and thereby warm the SST [sea surface temperature] by reducing the efficiency of upwelling to cool the surface. Then that makes winds blow towards the (slightly) warmer water and really start the event. The nice part about this theory is that the Rossby waves can be observed for months before the reflection, which implies that El Niño is predictable.

The other idea is that the trigger is essentially random. The tropical convection (organized large-scale thunderstorm activity) in the rising air tends to occur in bursts that last for about a month, and these bursts propagate out of the Indian Ocean (known as the Madden-Julian Oscillation). Since the storms are geostrophic (rotating according to the turning of the earth, which means they rotate clockwise in the southern hemisphere and counter-clockwise in the north), storm winds on the equator always blow towards the east. If the storms are strong enough, or last long enough, then those eastward winds may be enough to start the sloshing. But specific Madden-Julian Oscillation events are not predictable much in advance (just as specific weather events are not predictable in advance), and so to the extent that this is the main element, then El Niño will not be predictable.

In my opinion both of these processes can be important in different El Niños. Some models that did not have the MJO storms were successful in predicting the events of 1986-87 and 1991-92. That suggests that the Rossby wave part was a main influence at that time. But those same models have failed to predict the events since then, and the westerlies have appeared to come from nowhere. It is also quite possible that these two general sets of ideas are incomplete, and that there are other causes entirely. The fact that we have very intermittent skill at predicting the major turns of the ENSO cycle (as opposed to the very good forecasts that can be made once an event has begun) suggests that there remain important elements that await explanation.

Next time I’ll talk a bit about mathematical models of the ENSO and another climate cycle — but please keep in mind that these cycles are still far from fully understood!


To hate is to study, to study is to understand, to understand is to appreciate, to appreciate is to love. So maybe I’ll end up loving your theory. – John Archibald Wheeler


The Azolla Event

18 November, 2010

My friend Bruce Smith just pointed out something I’d never heard of:

Azolla event, Wikipedia.

As you may recall, the dinosaurs were wiped out by an asteroid about 65 million years ago. Then came the Cenozoic Era: first the Paleocene, then the Eocene, and so on. Back in those days, the Earth was very warm compared to now:



Paleontologists call the peak of high temperatures the “Eocene optimum”. Back then, it was about 12 °C warmer on average. The polar regions were much warmer than today, perhaps as mild as the modern-day Pacific Northwest. In fact, giant turtles and alligators thrived north of the Arctic circle!

(“Optimum?” Yes: as if the arguments over global warming weren’t confusing enough already, paleontologists use the term “optimum” for any peak of high temperatures. I think that’s a bit silly. If you were a turtle north of the Arctic circle, it was indeed jolly optimal. But what matters now is not that certain temperature levels are inherently good or bad, but that the temperature is increasing too fast for life to easily adapt.)

Why did it get colder? This is a fascinating and important puzzle. And here’s one puzzle piece I’d never heard about. I don’t know how widely accepted this story is, but here’s how it goes:

In the early Eocene, the Arctic Ocean was almost entirely surrounded by land:



A surface layer of less salty water formed from inflowing rivers, and around 49 million years ago, vast blooms of the freshwater fern Azolla began to grow in the Arctic Ocean. Apparently this stuff grows like crazy. And as bits of it died, it sank to the sea floor. This went on for about 800,000 years, and formed a layer up to 8 meters thick. And some scientists speculate that this process sucked up enough carbon dioxide to significantly chill the planet. Some say CO2 concentrations fell from 3500 ppm in the early Eocene to 650 ppm at around the time of this event!

I don’t understand much about this — I just wanted to mention it. After all, right now people are thinking about fertilizing the ocean to artificially create blooms of phytoplankton that’ll soak up CO2 and fall to the ocean floor. But if you want to read a well-informed blog article on this topic, try:

• Ole Nielsen, The Azolla event (dramatic bloom 49 million years ago).

By the way, there’s a nice graph of carbon dioxide concentrations here… inferred from boron isotope measurements:

• P. N. Pearson and M. R. Palmer, Atmospheric carbon dioxide concentrations over the past 60 million years, Nature 406 (2000), 695–699.


This Week’s Finds (Week 305)

5 November, 2010

Nathan Urban has been telling us about a paper where he estimated the probability that global warming will shut down a major current in the Atlantic Ocean:

• Nathan M. Urban and Klaus Keller, Probabilistic hindcasts and projections of the coupled climate, carbon cycle and Atlantic meridional overturning circulation system: a Bayesian fusion of century-scale observations with a simple model, Tellus A, July 16, 2010.

We left off last time with a cliff-hanger: I didn’t let him tell us what the probability is! Since you must have been clutching your chair ever since, you’ll be relieved to hear that the answer is coming now, in the final episode of this interview.

But it’s also very interesting how he and Klaus Keller got their answer. As you’ll see, there’s some beautiful math involved. So let’s get started…

JB: Last time you told us roughly how your climate model works. This time I’d like to ask you about the rest of your paper, leading up to your estimate of the probability that the Atlantic Meridional Overturning Current (or "AMOC") will collapse. But before we get into that, I’d like to ask some very general questions.

For starters, why are scientists worried that the AMOC might collapse?

Last time I mentioned the Younger Dryas event, a time when Europe became drastically colder for about 1300 years, starting around 10,800 BC. Lots of scientists think this event was caused by a collapse of the AMOC. And lots of them believe it was caused by huge amounts of fresh water pouring into the north Atlantic from an enormous glacial lake. But nothing quite like that is happening now! So if the AMOC collapses in the next few centuries, the cause would have to be a bit different.

NU: In order for the AMOC to collapse, the overturning circulation has to weaken. The overturning is driven by the sinking of cold and salty, and therefore dense, water in the north Atlantic. Anything that affects the density structure of the ocean can alter the overturning.

As you say, during the Younger Dryas, it is thought that a lot of fresh water suddenly poured into the Atlantic from the draining of a glacial lake. This lessened the density of the surface waters and reduced the rate at which they sank, shutting down the overturning.

Since there aren’t any large glacial lakes left that could abruptly drain into the ocean, the AMOC won’t shut down in the same way it previously did. But it’s still possible that climate change could cause it to shut down. The surface waters of the north Atlantic can still freshen (and become less dense), either due to the addition of fresh water from melting polar ice and snow, or due to increased precipitation over the northern latitudes. In addition, they can simply become warmer, which also makes them less dense, reducing their sinking rate and weakening the overturning.

In combination, these three factors (warming, increased precipitation, meltwater) can theoretically shut down the AMOC if they are strong enough. This will probably not be as abrupt or extreme an event as the Younger Dryas, but it can still persistently alter the regional climate.

JB: I’m trying to keep our readers in suspense for a bit longer, but I don’t think it’s giving away too much to say that when you run your model, sometimes the AMOC shuts down, or at least slows down. Can you say anything about how this tends to happen, when it does? In your model, that is. Can you tell if it’s mainly warming, or increased precipitation, or meltwater?

NU: The short answer is "mainly warming, probably". The long answer:

I haven’t done experiments with the box model myself to determine this, but I can quote from the Zickfeld et al. paper where this model was published. It says, for their baseline collapse experiment,

In the box model the initial weakening of the overturning circulation is mainly due to thermal forcing […] This effect is amplified by a negative feedback on salinity, since a weaker circulation implies reduced salt advection towards the northern latitudes.

Even if they turn off all the freshwater input, they find substantial weakening of the AMOC from warming alone.

Freshwater could potentially become the dominant effect on the AMOC if more freshwater is added than in the paper’s baseline experiment. The paper did report computer experiments with different freshwater inputs, but upon skimming it, I can’t immediately tell whether the thermal effect loses its dominance.

These experiments have also been performed using more complex climate models. This paper reports that in all the models they studied, the AMOC weakening is caused more by changes in surface heat flux than by changes in surface water flux:

• J. M. Gregory et al., A model intercomparison of changes in the Atlantic thermohaline circulation in response to increasing atmospheric CO2 concentration, Geophysical Research Letters 32 (2005), L12703.

However, that paper studied "best-estimate" freshwater fluxes, not the fluxes on the high end of what’s possible, so I don’t know whether thermal effects would still dominate if the freshwater input ends up being large. There are papers that suggest freshwater input from Greenland, at least, won’t be a dominant factor any time soon:

• J. H. Jungclaus et al., Will Greenland melting halt the thermohaline circulation?, Geophysical Research Letters 33 (2006), L17708.

• E. Driesschaert et al., Modeling the influence of Greenland ice sheet melting on the Atlantic meridional overturning circulation during the next millennia, Geophysical Research Letters 34 (2007), L10707.

I’m not sure what the situation is for precipitation, but I don’t think that would be much larger than the meltwater flux. In summary, it’s probably the thermal effects that dominate, both in complex and simpler models.

Note that in our version of the box model, the precipitation and meltwater fluxes are combined into one number, the "North Atlantic hydrological sensitivity", so we can’t distinguish between those sources of water. This number is treated as uncertain in our analysis, lying within a range of possible values determined from the hydrologic changes predicted by complex models. The Zickfeld et al. paper experimented with separating them into the two individual contributions, but my version of the model doesn’t do that.

JB: Okay. Now back to what you and Klaus Keller actually did in your paper. You have a climate model with a bunch of adjustable knobs, or parameters. Some of these parameters you take as "known" from previous research. Others are more uncertain, and that’s where the Bayesian reasoning comes in. Very roughly, you use some data to guess the probability that the right settings of these knobs lie within any given range.

How many parameters do you treat as uncertain?

NU: 18 parameters in total. 7 model parameters that control dynamics, 4 initial conditions, and 7 parameters describing error statistics.

JB: What are a few of these parameters? Maybe you can tell us about some of the most important ones — or ones that are easy to understand.

NU: I’ve mentioned these briefly in "week304" in the model description. The AMOC-related parameter is the hydrologic sensitivity I described above, controlling the flux of fresh water into the North Atlantic.

There are three climate related parameters:

• the climate sensitivity (the equilibrium warming expected in response to doubled CO2),

• the ocean heat vertical diffusivity (controlling the rate at which oceans absorb heat from the atmosphere), and

• "aerosol scaling", a factor that multiplies the strength of the aerosol-induced cooling effect, mostly due to uncertainties in aerosol-cloud interactions.

I discussed these in "week302" in the part about total feedback estimates.

There are also three carbon cycle related parameters:

• the heterotrophic respiration sensitivity (describing how quickly dead plants decay when it gets warmer),

• CO2 fertilization (how much faster plants grow in CO2-elevated conditions), and

• the ocean carbon vertical diffusivity (the rate at which the oceans absorb CO2 from the atmosphere).

The initial conditions describe what the global temperature, CO2 level, etc. were at the start of my model simulations, in 1850. The statistical parameters describe the variance and autocorrelation of the residual error between the observations and the model, due to measurement error, natural variability, and model error.

JB: Could you say a bit about the data you use to estimate these uncertain parameters? I see you use a number of data sets.

NU: We use global mean surface temperature and ocean heat content to constrain the three climate parameters. We use atmospheric CO2 concentration and some ocean flux measurements to constrain the carbon parameters. We use measurements of the AMOC strength to constrain the AMOC parameter. These are all time series data, mostly global averages — except the AMOC strength, which is an Atlantic-specific quantity defined at a particular latitude.

The temperature data are taken by surface weather stations and are for the years 1850-2009. The ocean heat data are taken by shipboard sampling, 1953-1996. The atmospheric CO2 concentrations are measured from the Mauna Loa volcano in Hawaii, 1959-2009. There are also some ice core measurements of trapped CO2 at Law Dome, Antarctica, dated to 1854-1953. The air-sea CO2 fluxes, for the 1980s and 1990s, are derived from measurements of dissolved inorganic carbon in the ocean, combined with measurements of manmade chlorofluorocarbon to date the water masses in which the carbon resides. (The dates tell you when the carbon entered the ocean.)

The AMOC strength is reconstructed from station measurements of poleward water circulation over an east-west section of the Atlantic Ocean, near 25 °N latitude. Pairs of stations measure the northward velocity of water, inferred from the ocean bottom pressure differences between northward and southward station pairs. The velocities across the Atlantic are combined with vertical density profiles to determine an overall rate of poleward water mass transport. We use seven AMOC strength estimates measured sparsely between the years 1957 and 2004.

JB: So then you start the Bayesian procedure. You take your model, start it off with your 18 parameters chosen somehow or other, run it from 1850 to now, and see how well it matches all this data you just described. Then you tweak the parameters a bit — last time we called that "turning the knobs" — and run the model again. And then you do this again and again, lots of times. The goal is to calculate the probability that the right settings of these knobs lie within any given range.

Is that about right?

NU: Yes, that’s right.

JB: About how many times did you actually run the model? Is the sort of thing you can do on your laptop overnight, or is it a mammoth task?

NU: I ran the model a million times. This took about two days on a single CPU. Some of my colleagues later ported the model from Matlab to Fortran, and now I can do a million runs in half an hour on my laptop.

JB: Cool! So if I understand correctly, you generated a million lists of 18 numbers: those uncertain parameters you just mentioned.

Or in other words: you created a cloud of points: a million points in an 18-dimensional space. Each point is a choice of those 18 parameters. And the density of this cloud near any point should be proportional to the probability that the parameters have those values.

That’s the goal, anyway: getting this cloud to approximate the right probability density on your 18-dimensional space. To get this to happen, you used the Markov chain Monte Carlo procedure we discussed last time.

Could you say in a bit more detail how you did this, exactly?

NU: There are two steps. One is to write down a formula for the probability of the parameters (the "Bayesian posterior distribution"). The second is to draw random samples from that probability distribution using Markov chain Monte Carlo (MCMC).

Call the parameter vector θ and the data vector y. The Bayesian posterior distribution p(θ|y) is a function of θ which says how probable θ is, given the data y that you’ve observed. The little bar (|) indicates conditional probability: p(θ|y) is the probability of θ, assuming that you know y happened.

The posterior factorizes into two parts, the likelihood and the prior. The prior, p(θ), says how probable you think a particular 18-dimensional vector of parameters is, before you’ve seen the data you’re using. It encodes your "prior knowledge" about the problem, unconditional on the data you’re using.

The likelihood, p(y|θ), says how likely it is for the observed data to arise from a model run using some particular vector of parameters. It describes your data generating process: assuming you know what the parameters are, how likely are you to see data that looks like what you actually measured? (The posterior is the reverse of this: how probable are the parameters, assuming the data you’ve observed?)

Bayes’s theorem simply says that the posterior is proportional to the product of these two pieces:

p(θ|y) ∝ p(y|θ) × p(θ)

If I know the two pieces, I multiply them together and use MCMC to sample from that probability distribution.

Where do the pieces come from? For the prior, we assumed bounded uniform distributions on all but one parameter. Such priors express the belief that each parameter lies within some range we deemed reasonable, but we are agnostic about whether one value within that range is more probable than any other. The exception is the climate sensitivity parameter. We have prior evidence from computer models and paleoclimate data that the climate sensitivity is most likely around 2 or 3 °C, albeit with significant uncertainties. We encoded this belief using a "diffuse" Cauchy distribution peaked in this range, but allowing substantial probability to be outside it, so as to not prematurely exclude too much of the parameter range based on possibly overconfident prior beliefs. We assume the priors on all the parameters are independent of each other, so the prior for all of them is the product of the prior for each of them.

For the likelihood, we assumed a normal (Gaussian) distribution for the residual error (the scatter of the data about the model prediction). The simplest such distribution is the independent and identically distributed ("iid") normal distribution, which says that all the data points have the same error and the errors at each data point are independent of each other. Neither of these assumptions is true. The errors are not identical, since they get bigger farther in the past, when we measured data with less precision than we do today. And they’re not independent, because if one year is warmer than the model predicts, the next year is likely to be warmer than the model predicts as well. There are various possible reasons for this: chaotic variability, time lags in the system due to finite heat capacity, and so on.

In this analysis, we kept the identical-error assumption for simplicity, even though it’s not correct. I think this is justifiable, because the strongest constraints on the parameters come from the most recent data, when the largest climate and carbon cycle changes have occurred. That is, the early data are already relatively uninformative, so if their errors get bigger, it doesn’t affect the answer much.

We rejected the independent-error assumption, since there is very strong autocorrelation (serial dependence) in the data, and ignoring autocorrelation is known to lead to overconfidence. When the errors are correlated, it’s harder to distinguish between a short-term random fluctuation and a true trend, so you should be more uncertain about your conclusions. To deal with this, we assumed that the errors obey a correlated autoregressive "red noise" process instead of an uncorrelated "white noise" process. In the likelihood, we converted the red-noise errors to white noise via a "whitening" process, assuming we know how much correlation is present. (We’re allowed to do that in the likelihood, because it gives the probability of the data assuming we know what all the parameters are, and the autocorrelation is one of the parameters.) The equations are given in the paper.

Finally, this gives us the formula for our posterior distribution.
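
To make these pieces a bit more concrete, here is a minimal sketch in Python of how such an unnormalized log-posterior might be assembled. Everything in it is schematic and invented for illustration: the parameter layout, the toy run_model function, and the single AR(1) whitening step are my stand-ins, not the actual model or code behind the paper.

```python
import numpy as np
from scipy import stats

def log_prior(theta, bounds, sens_index=0):
    """Bounded uniform priors on every parameter, plus a diffuse Cauchy prior
    on the climate sensitivity (here assumed to sit at position sens_index)."""
    lo, hi = bounds[:, 0], bounds[:, 1]
    if np.any(theta < lo) or np.any(theta > hi):
        return -np.inf                           # outside the allowed ranges
    # Uniform pieces are constant inside the bounds, so only the Cauchy
    # prior on climate sensitivity contributes a non-constant term.
    return stats.cauchy.logpdf(theta[sens_index], loc=3.0, scale=2.0)

def whiten(residuals, rho):
    """Turn AR(1) ('red noise') residuals into approximately white noise,
    using the autocorrelation rho, which is itself one of the parameters."""
    return residuals[1:] - rho * residuals[:-1]

def log_likelihood(theta, data, run_model):
    """Gaussian likelihood of the whitened residuals."""
    rho, sigma = theta[-2], theta[-1]            # error-statistics parameters
    residuals = data - run_model(theta)          # observations minus model output
    w = whiten(residuals, rho)
    return np.sum(stats.norm.logpdf(w, scale=sigma))

def log_posterior(theta, data, run_model, bounds):
    """Unnormalized log-posterior: log prior + log likelihood."""
    lp = log_prior(theta, bounds)
    if not np.isfinite(lp):
        return -np.inf
    return lp + log_likelihood(theta, data, run_model)
```

The real analysis uses several data sets, each with its own error parameters, and treats the sparse data sets as uncorrelated; this sketch lumps everything into one time series just to show the overall structure.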

JB: Great! There’s a lot of technical material here, so I have many questions, but let’s go through the whole story first, and come back to those.

NU: Okay. Next comes step two, which is to draw random samples from the posterior probability distribution via MCMC.

To do this, we use the famous Metropolis algorithm, which was invented by a physicist of that name, along with others, to do computations in statistical physics. It’s a very simple algorithm which takes a "random walk" through parameter space.

You start out with some guess for the parameters. You randomly perturb your guess to a nearby point in parameter space, which you are going to propose to move to. If the new point is more probable than the point you were at (according to the Bayesian posterior distribution), then accept it as a new random sample. If the proposed point is less probable than the point you’re at, then you randomly accept the new point with a certain probability. Otherwise you reject the move, staying where you are, treating the old point as a duplicate random sample.

The acceptance probability is equal to the ratio of the posterior distribution at the new point to the posterior distribution at the old point. If the point you’re proposing to move to is, say, 5 times less probable than the point you are at now, then there’s a 20% chance you should move there, and an 80% chance that you should stay where you are.

If you iterate this method of proposing new "jumps" through parameter space, followed by the Metropolis accept/reject procedure, you can prove that you will eventually end up with a long list of (correlated) random samples from the Bayesian posterior distribution.
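
Here is a bare-bones sketch of that random walk in Python, just to make the accept/reject step concrete. The Gaussian proposal and its step size are arbitrary choices of mine, and the log_posterior function is assumed to be something like the sketch above.

```python
import numpy as np

def metropolis(log_posterior, theta0, n_steps, step_size, rng=None):
    """Random-walk Metropolis sampler.

    log_posterior: function returning the log of the (unnormalized) posterior
    theta0:        starting guess for the parameter vector
    step_size:     scale of the Gaussian jump in each dimension
    """
    rng = rng or np.random.default_rng()
    theta = np.asarray(theta0, dtype=float)
    logp = log_posterior(theta)
    samples = np.empty((n_steps, theta.size))
    for i in range(n_steps):
        proposal = theta + step_size * rng.standard_normal(theta.size)
        logp_new = log_posterior(proposal)
        # Accept with probability min(1, posterior ratio).  For example, a
        # point 5 times less probable is accepted 20% of the time.
        if np.log(rng.random()) < logp_new - logp:
            theta, logp = proposal, logp_new
        samples[i] = theta        # rejected moves duplicate the current point
    return samples
```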

JB: Okay. Now let me ask a few questions, just to help all our readers get up to speed on some jargon.

Lots of people have heard of a "normal distribution" or "Gaussian", because it’s become sort of the default choice for probability distributions. It looks like a bell curve:

When people don’t know the probability distribution of something — like the tail lengths of newts or the IQ’s of politicians — they often assume it’s a Gaussian.

But I bet fewer of our readers have heard of a "Cauchy distribution". What’s the point of that? Why did you choose that for your prior probability distribution of the climate sensitivity?

NU: There is a long-running debate about the "upper tail" of the climate sensitivity distribution. High climate sensitivities correspond to large amounts of warming. As you can imagine, policy decisions depend a lot on how likely we think these extreme outcomes could be, i.e., how quickly the "upper tail" of the probability distribution drops to zero.

A Gaussian distribution has tails that drop off exponentially quickly, so very high sensitivities will never get any significant weight. If we used it for our prior, then we’d almost automatically get a "thin tailed" posterior, no matter what the data say. We didn’t want to put that in by assumption and automatically conclude that high sensitivities should get no weight, regardless of what the data say. So we used a weaker assumption, which is a "heavy tailed" prior distribution. With this prior, the probability of large amounts of warming drops off more slowly, as a power law, instead of exponentially fast. If the data strongly rule out high warming, we can get a thin tailed posterior, but if they don’t, it will be heavy tailed. The Cauchy distribution, a limiting case of the "Student t" distribution that students of statistics may have heard of, is one of the most conservative choices for a heavy-tailed prior. Probability drops off so slowly at its tails that its variance is infinite.

JB: The issue of "fat tails" is also important in the stock market, where big crashes happen more frequently than you might guess with a Gaussian distribution. After the recent economic crisis I saw a lot of financiers walking around with their tails between their legs, wishing their tails had been fatter.

I’d also like to ask about "white noise" versus "red noise". "White noise" is a mathematical description of a situation where some quantity fluctuates randomly with time in such a way that its value at any time is completely uncorrelated with its value at any other time. If you graph an example of white noise, it looks really spiky:



If you play it as a sound, it sounds like hissy static — quite unpleasant. If you could play it in the form of light, it would look white, hence the name.

"Red noise" is less wild. Its value at any time is still random, but it’s correlated to the values at earlier or later times, in a specific way. So it looks less spiky:



and it sounds less high-pitched, more like a steady rainfall. Since it’s stronger at low frequencies, it would look more red if you could play it in the form of light — hence the name "red noise".

If I understand correctly, you’re assuming that some aspects of the climate are noisy, but in a red noise kind of way, when you’re computing p(y|θ): the likelihood that your data takes on the value y, given your climate model with some specific choice of parameters θ.

Is that right? You’re assuming this about all your data: the temperature data from weather stations, the ocean heat data from shipboard samples, the atmospheric CO2 concentrations at Mauna Loa volcano in Hawaii, the ice core measurements of trapped CO2, the air-sea CO2 fluxes, and also the AMOC strength? Red, red, red — all red noise?

NU: I think the red noise you’re talking about refers to a specific type of autocorrelated noise ("Brownian motion"), with a power spectrum that is inversely proportional to the square of frequency. I’m using "red noise" more generically to speak of any autocorrelated process that is stronger at low frequencies. Specifically, the process we use is a first-order autoregressive, or "AR(1)", process. It has a more complicated spectrum than Brownian motion.
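
For readers who would like to see what an AR(1) process looks like, here is a minimal sketch; the autocorrelation value is just an illustrative choice of mine:

```python
import numpy as np

def ar1(n, rho, sigma=1.0, rng=None):
    """Generate an AR(1) process: x[t] = rho * x[t-1] + white noise."""
    rng = rng or np.random.default_rng()
    x = np.zeros(n)
    for t in range(1, n):
        x[t] = rho * x[t - 1] + sigma * rng.standard_normal()
    return x

red = ar1(1000, rho=0.8)    # strongly autocorrelated: slow wanderings
white = ar1(1000, rho=0.0)  # no autocorrelation: pure spiky static
```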

JB: Right, I was talking about "red noise" of a specific mathematically nice sort, but that’s probably less convenient for you. AR(1) sounds easier for computers to generate.

NU: It’s not only easier for computers, but closer to the spectrum we see in our analysis.

Note that when I talk about error I mean "residual error", which is the difference between the observations and the model prediction. If the residual error is correlated in time, that doesn’t necessarily reflect true red noise in the climate system. It could also represent correlated errors in measurement over time, or systematic errors in the model. I am not attempting to distinguish between all these sources of error. I’m just lumping them all together into one total error process, and assuming it has a simple statistical form.

We assume the residual errors in the annual surface temperature, ocean heat, and instrumental CO2 time series are AR(1). The ice core CO2, air-sea CO2 flux, and AMOC strength data are sparse, and we can’t really hope to estimate the correlation between them, so we assume their residual errors are uncorrelated.

Speaking of correlation, I’ve been talking about "autocorrelation", which is correlation within one data set between one time and another. It’s also possible for the errors in different data sets to be correlated with each other ("cross correlation"). We assumed there is no cross correlation (and residual analysis suggests only weak correlation between data sets).

JB: I have a few more technical questions, but I bet most of our readers are eager to know: so, what next?

You use all these nifty mathematical methods to work out p(θ|y), the probability that your 18 parameters have any specific value given your data. And now I guess you want to figure out the probability that the Atlantic Meridional Overturning Current, or AMOC, will collapse by some date or other.

How do you do this? I guess most people want to know the answer more than the method, but they’ll just have to wait a few more minutes.

NU: That’s easy. After MCMC, we have a million runs of the model, sampled in proportion to how well the model fits historic data. There will be lots of runs that agree well with the data, and a few that agree less well. All we do now is extend each of those runs into the future, using an assumed scenario for what CO2 emissions and other radiative forcings will do in the future. To find out the probability that the AMOC will collapse by some date, conditional on the assumptions we’ve made, we just count what fraction of the runs have an AMOC strength of zero in whatever year we care about.
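
In code, that final counting step is almost trivial. Here is a schematic version, with array names invented purely for illustration:

```python
import numpy as np

# amoc[i, t] = AMOC strength in forward-projected run i at year index t
# years[t]   = the calendar year corresponding to column t
def collapse_probability(amoc, years, year):
    t = np.searchsorted(years, year)      # column for the year we care about
    return np.mean(amoc[:, t] <= 0.0)     # fraction of runs with zero strength
```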

JB: Okay, that’s simple enough. What scenario, or scenarios, did you consider?

NU: We considered a worst-case "business as usual" scenario in which we continue to burn fossil fuels at an accelerating rate until we start to run out of them, and eventually burn the maximum amount of fossil fuels we think there might be remaining (about 5000 gigatons worth of carbon, compared to the roughly 500 gigatons we’ve emitted so far). This assumes we get desperate for cheap energy and extract all the hard-to-get fossil resources in oil shales and tar sands, all the remaining coal, etc. It doesn’t necessarily preclude the use of non-fossil energy; it just assumes that our appetite for energy grows so rapidly that there’s no incentive to slow down fossil fuel extraction. We used a simple economic model to estimate how fast we might do this, if the world economy continues to grow at a similar rate to the last few decades.

JB: And now for the big question: what did you find? How likely is it that the AMOC will collapse, according to your model? Of course it depends how far into the future you look.

NU: We find a negligible probability that the AMOC will collapse this century. The odds start to increase around 2150, rising to about a 10% chance by 2200, and a 35% chance by 2300, the last year considered in our scenario.

JB: I guess one can take this as good news or really scary news, depending on how much you care about folks who are alive in 2300. But I have some more questions. First, what’s a "negligible probability"?

NU: In this case, it’s less than 1 in 3000. For computational reasons, we only ran 3000 of the million samples forward into the future. There were no samples in this smaller selection that had the AMOC collapsed in 2100. The probability rises to 1 in 3000 in the year 2130 (the first time I see a collapse in this smaller selection), and 1% in 2152. You should take these numbers with a grain of salt. It’s these rare "tail-area events" that are most sensitive to modeling assumptions.

JB: Okay. And second, don’t the extrapolations become more unreliable as you keep marching further into the future? You need to model not only climate physics but also the world economy. In this calculation, how many gigatons of carbon dioxide per year are you assuming will be emitted in 2300? I’m just curious. In 1998 it was about 27.6 gigatons. By 2008, it was about 30.4.

NU: Yes, the uncertainty grows with time (and this is reflected in our projections). And in considering a fixed emissions scenario, we’ve ignored the economic uncertainty, which, so far out into the future, is even larger than the climate uncertainty. Here we’re concentrating on just the climate uncertainty, and are hoping to get an idea of bounds, so we used something close to a worst-case economic scenario. In this scenario carbon emissions peak around 2150 at about 23 gigatons carbon per year (84 gigatons CO2). By 2300 they’ve tapered off to about 4 GtC (15 GtCO2).

Actual future emissions may be less than this, if we act to reduce them, or there are fewer economically extractable fossil resources than we assume, or the economy takes a prolonged downturn, etc. Actually, it’s not completely an economic worst case; it’s possible that the world economy could grow even faster than we assume. And it’s not the worst case scenario from a climate perspective, either. For example, we don’t model potential carbon emissions from permafrost or methane clathrates. It’s also possible that climate sensitivity could be higher than what we find in our analysis.
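
(A quick note on the two sets of units appearing here: a gigaton of carbon corresponds to 44/12 ≈ 3.67 gigatons of CO2, since each CO2 molecule weighs 44 atomic mass units, of which 12 come from the carbon atom. So 23 GtC/year ≈ 84 GtCO2/year and 4 GtC/year ≈ 15 GtCO2/year, matching the figures above.)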

JB: Why even bother projecting so far out into the future, if it’s so uncertain?

NU: The main reason is because it takes a while for the AMOC to weaken, so if we’re interested in what it would take to make it collapse, we have to run the projections out a few centuries. But another motivation for writing this paper is policy related, having to do with the concept of "climate commitment" or "triggering". Even if it takes a few centuries for the AMOC to collapse, it may take less time than that to reach a "point of no return", where a future collapse has already been unavoidably "triggered". Again, to investigate this question, we have to run the projections out far enough to get the AMOC to collapse.

We define "the point of no return" to be a point in time which, if CO2 emissions were immediately reduced to zero and kept there forever, the AMOC would still collapse by the year 2300 (an arbitrary date chosen for illustrative purposes). This is possible because even if we stop emitting new CO2, existing CO2 concentrations, and therefore temperatures, will remain high for a long time (see "week303").

In reality, humans wouldn’t be able to reduce emissions instantly to zero, so the actual "point of no return" would likely be earlier than what we find in our study. We couldn’t economically reduce emissions fast enough to avoid triggering an AMOC collapse. (In this study we ignore the possibility of negative carbon emissions, that is, capturing CO2 directly from the atmosphere and sequestering it for a long period of time. We’re also ignoring the possibility of climate geoengineering, which is global cooling designed to cancel out greenhouse warming.)

So what do we find? Although we calculate a negligible probability that the AMOC will collapse by the end of this century, the probability that, in this century, we will commit later generations to a collapse (by 2300) is almost 5%. The probabilities of "triggering" rise rapidly, to almost 20% by 2150 and about 33% by 2200, even though the probability of experiencing a collapse by those dates is about 1% and 10%, respectively. You can see it in this figure from our paper:



The take-home message is that while most climate projections are currently run out to 2100, we shouldn’t fixate only on what might happen to people this century. We should consider what climate changes our choices in this century, and beyond, are committing future generations to experiencing.

JB: That’s a good point!

I’d like to thank you right now for a wonderful interview, that really taught me — and I hope our readers — a huge amount about climate change and climate modelling. I think we’ve basically reached the end here, but as the lights dim and the audience files out, I’d like to ask just a few more technical questions.

One of them was raised by David Tweed. He pointed out that while you’re "training" your model on climate data from the last 150 years or so, you’re using it to predict the future in a world that will be different in various ways: a lot more CO2 in the atmosphere, hotter, and so on. So, you’re extrapolating rather than interpolating, and that’s a lot harder. It seems especially hard if the collapse of the AMOC is a kind of "tipping point" — if it suddenly snaps off at some point, instead of linearly decreasing as some parameter changes.

This raises the question: why should we trust your model, or any model of this sort, to make such extrapolations correctly? In the discussion after that comment, I think you said that ultimately it boils down to

1) whether you think you have the physics right,

and

2) whether you think the parameters change over time.

That makes sense. So my question is: what are some of the best ways people could build on the work you’ve done, and make more reliable predictions about the AMOC? There’s a lot at stake here!

NU: Our paper is certainly an early step in making probabilistic AMOC projections, with room for improvement. I view the main points as (1) estimating how large the climate-related uncertainties may be within a given model, and (2) illustrating the difference between experiencing, and committing to, a climate change. It’s certainly not an end-all "prediction" of what will happen 300 years from now, taking into account all possible model limitations, economic uncertainties, etc.

To answer your question, the general ways to improve predictions are to improve the models, and/or improve the data constraints. I’ll discuss both.

Although I’ve argued that our simple box model reasonably reproduces the dynamics of the more complex model it was designed to approximate, that complex model itself isn’t the best model available for the AMOC. The problem with using complex climate models is that it’s computationally impossible to run them millions of times. My solution is to work with "statistical emulators", which are tools for building fast approximations to slow models. The idea is to run the complex model a few times at different points in its parameter space, and then statistically interpolate the resulting outputs to predict what the model would have output at nearby points. This works if the model output is a smooth enough function of the parameters, and there are enough carefully-chosen "training" points.
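
Here is a very stripped-down sketch of the idea behind such an emulator: a Gaussian-process-style interpolation with a squared-exponential kernel. The kernel, the length scale, and the one-dimensional toy setup are all illustrative choices of mine, just to show the "run the slow model a few times, then interpolate" pattern.

```python
import numpy as np

def rbf_kernel(a, b, length=1.0):
    """Squared-exponential covariance between two sets of 1-d parameter points."""
    d2 = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * d2 / length**2)

def emulate(train_x, train_y, test_x, length=1.0, noise=1e-6):
    """Predict the slow model's output at test_x from a few training runs."""
    K = rbf_kernel(train_x, train_x, length) + noise * np.eye(len(train_x))
    K_star = rbf_kernel(test_x, train_x, length)
    return K_star @ np.linalg.solve(K, train_y)   # GP posterior mean

# Run the "slow model" at 5 parameter settings, then interpolate in between:
train_x = np.linspace(0.0, 4.0, 5)
train_y = np.sin(train_x)                # stand-in for expensive model output
print(emulate(train_x, train_y, np.array([1.5, 2.5])))
```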

From an oceanographic standpoint, even current complex models are probably not wholly adequate (see the discussion at the end of "week304"). There is some debate about whether the AMOC becomes more stable as the resolution of the model increases. On the other hand, people still have trouble getting the AMOC in models, and the related climate changes, to behave as abruptly as they apparently did during the Younger Dryas. I think the range of current models is probably in the right ballpark, but there is plenty of room for improvement. Model developers continue to refine their models, and ultimately, the reliability of any projection is constrained by the quality of models available.

Another way to improve predictions is to improve the data constraints. It’s impossible to go back in time and take better historic data, although with things like ice cores, it is possible to dig up new cores to analyze. It’s also possible to improve some historic "data products". For example, the ocean heat data is subject to a lot of interpolation of sparse measurements in the deep ocean, and one could potentially improve the interpolation procedure without going back in time and taking more data. There are also various corrections being applied for known biases in the data-gathering instruments and procedures, and it’s possible those could be improved too.

Alternatively, we can simply wait. Wait for new and more precise data to become available.

But when I say "improve the data constraints", I’m mostly talking about adding more of them, that I simply didn’t include in the analysis, or looking at existing data in more detail (like spatial patterns instead of global averages). For example, the ocean heat data mostly serves to constrain the vertical mixing parameter, controlling how quickly heat penetrates into the deep ocean. But we can also look at the penetration of chemicals in the ocean (such carbon from fossil fuels, or chlorofluorocarbons). This is also informative about how quickly water masses mix down to the ocean depths, and indirectly informative about how fast heat mixes. I can’t do that with my simple model (which doesn’t have the ocean circulation of any of these chemicals in it), but I can with more complex models.

As another example, I could constrain the climate sensitivity parameter better with paleoclimate data, or more resolved spatial data (to try to, e.g., pick up the spatial fingerprint of industrial aerosols in the temperature data), or by looking at data sets informative about particular feedbacks (such as water vapor), or at satellite radiation budget data.

There is a lot of room for reducing uncertainties by looking at more and more data sets. However, this presents its own problems. Not only is this simply harder to do, but it runs more directly into limitations in the models and data. For example, if I look at what ocean temperature data implies about a model’s vertical mixing parameter, and what ocean chemical data imply, I might find that they imply two inconsistent values for the parameter! Or that those data imply a different mixing than is implied by AMOC strength measurements. This can happen if there are flaws in the model (or in the data). We have some evidence from other work that there are circumstances in which this can happen:

• A. Schmittner, N. M. Urban, K. Keller and D. Matthews, Using tracer observations to reduce the uncertainty of ocean diapycnal mixing and climate-carbon cycle projections, Global Biogeochemical Cycles 23 (2009), GB4009.

• M. Goes, N. M. Urban, R. Tonkonojenkov, M. Haran, and K. Keller, The skill of different ocean tracers in reducing uncertainties about projections of the Atlantic meridional overturning circulation, Journal of Geophysical Research — Oceans, in press (2010).

How to deal with this, if and when it happens, is an open research challenge. To an extent it depends on expert judgment about which model features and data sets are "trustworthy". Some say that expert judgment renders conclusions subjective and unscientific, but as a scientist, I say that such judgments are always applied! You always weigh how much you trust your theories and your data when deciding what to conclude about them.

In my response I’ve so far ignored the part about parameters changing in time. I think the hydrological sensitivity (North Atlantic freshwater input as a function of temperature) can change with time, and this could be improved by using a better climate model that includes ice and precipitation dynamics. Feedbacks can fluctuate in time, but I think it’s okay to treat them as constants for long-term projections. Some of these parameters can also be spatially dependent (e.g., the respiration sensitivity in the carbon cycle). I think treating them all as constant is a decent first approximation for the sorts of generic questions we’re asking in the paper. Also, all the parameter estimation methods I’ve described only work with static parameters. For time-varying parameters, you need to get into state estimation methods like Kalman or particle filters.
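
To make that last idea concrete, here is a minimal sketch of a bootstrap particle filter in Python. The setup is a made-up toy (a single parameter that drifts as a random walk, observed once per step with noise), and every number is invented for illustration; it is not the method or model from the paper.

```python
# Minimal bootstrap particle filter for a slowly drifting parameter.
# Toy setup (not the model from the paper): theta_t does a random walk,
# and each step we observe y_t = theta_t + measurement noise.
import numpy as np

rng = np.random.default_rng(0)

T = 100                                                  # number of time steps
true_theta = np.cumsum(rng.normal(0, 0.05, T)) + 1.0     # slowly drifting "truth"
y = true_theta + rng.normal(0, 0.3, T)                   # noisy observations

N = 2000                                                 # number of particles
particles = rng.normal(1.0, 1.0, N)                      # prior draws for theta_0
estimates = np.empty(T)

for t in range(T):
    # propagate: each particle takes a small random-walk step
    particles = particles + rng.normal(0, 0.05, N)
    # weight each particle by the likelihood of the new observation
    w = np.exp(-0.5 * ((y[t] - particles) / 0.3) ** 2)
    w /= w.sum()
    estimates[t] = np.sum(w * particles)
    # resample to avoid weight degeneracy
    particles = rng.choice(particles, size=N, p=w)

print("final estimate:", estimates[-1], "  truth:", true_theta[-1])
```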

JB: I also have another technical question, which is about the Markov chain Monte Carlo procedure. You generate your cloud of points in 18-dimensional space by a procedure where you keep either jumping randomly to a nearby point, or staying put, according to that decision procedure you described. Eventually this cloud fills out to a good approximation of the probability distribution you want. But, how long is "eventually"? You said you generated a million points. But how do you know that’s enough?

NU: This is something of an art. Although there is an asymptotic convergence theorem, there is no general way of knowing whether you’ve reached convergence. First you check to see whether your chains "look right". Are they sweeping across the full range of parameter space where you expect significant probability? Are they able to complete many sweeps (thoroughly exploring parameter space)? Is the Metropolis test accepting a reasonable fraction of proposed moves? Do you have enough effective samples in your Markov chain? (MCMC generates correlated random samples, so there are fewer "effectively independent" samples in the chain than there are total samples.) Then you can do consistency checks: start the chains at several different locations in parameter space, and see if they all converge to similar distributions.
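
Here is a toy sketch of what some of these checks look like in code. The target is a one-dimensional standard normal chosen purely for illustration (nothing like the real 18-dimensional posterior), and the effective-sample-size estimate is deliberately crude; this is not the diagnostic machinery from the actual analysis.

```python
# Toy Metropolis sampler plus the kinds of convergence checks described above:
# acceptance fraction, a crude effective sample size, and agreement between
# chains started from very different points.
import numpy as np

rng = np.random.default_rng(1)

def log_post(x):
    return -0.5 * x**2            # stand-in for a real log-posterior

def metropolis(x0, n_steps, step=1.0):
    x = x0
    chain = np.empty(n_steps)
    accepted = 0
    for i in range(n_steps):
        prop = x + rng.normal(0, step)
        if np.log(rng.uniform()) < log_post(prop) - log_post(x):
            x = prop
            accepted += 1
        chain[i] = x
    return chain, accepted / n_steps

def effective_sample_size(chain):
    # crude ESS: n / (1 + 2 * sum of initial positive autocorrelations)
    x = chain - chain.mean()
    acf = np.correlate(x, x, mode="full")[len(x) - 1:] / (x.var() * len(x))
    s = 0.0
    for rho in acf[1:]:
        if rho < 0:
            break
        s += rho
    return len(chain) / (1 + 2 * s)

chains = [metropolis(x0, 10000, step=2.4) for x0 in (-5.0, 0.0, 5.0)]
for chain, acc in chains:
    print(f"acceptance {acc:.2f}, ESS {effective_sample_size(chain):.0f}, mean {chain.mean():+.3f}")
# If chains started at -5, 0 and +5 report similar means and variances,
# that is (weak) evidence they have converged to the same distribution.
```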

If the posterior distribution shows, or is expected to show, a lot of correlation between parameters, you have to be more careful to ensure convergence. You want to propose moves that carry you along the "principal components" of the distribution, so you don’t waste time trying to jump away from the high probability directions. (Roughly, if your posterior density is concentrated on some low dimensional manifold, you want to construct your way of moving around parameter space to stay near that manifold.) You also have to be careful if you see, or expect, multimodality (multiple peaks in the probability distribution). It can be hard for MCMC to move from one mode to another through a low-probability "wasteland"; it won’t be inclined to jump across it. There are more advanced algorithms you can use in such situations, if you suspect you have multimodality. Otherwise, you might discover later that you only sampled one peak, and never noticed that there were others.
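
One rough way to "propose moves along the principal components" is the adaptive idea of estimating the posterior covariance from a short pilot run and then using a scaled version of it as the covariance of a multivariate normal proposal. Below is a hedged sketch of that trick on a made-up, strongly correlated two-dimensional Gaussian target; the 2.38²/d scaling is the standard textbook rule of thumb, and none of this is the tuning actually used in the paper.

```python
# Pilot run with an isotropic proposal, then a second run whose proposal
# covariance is aligned with the (estimated) shape of the target.
import numpy as np

rng = np.random.default_rng(2)
cov_true = np.array([[1.0, 0.95], [0.95, 1.0]])   # invented correlated target
prec = np.linalg.inv(cov_true)

def log_post(x):
    return -0.5 * x @ prec @ x

def metropolis(n, prop_cov):
    x = np.zeros(2)
    chain = np.empty((n, 2))
    L = np.linalg.cholesky(prop_cov)
    for i in range(n):
        prop = x + L @ rng.normal(size=2)
        if np.log(rng.uniform()) < log_post(prop) - log_post(x):
            x = prop
        chain[i] = x
    return chain

pilot = metropolis(5000, 0.1 * np.eye(2))        # naive isotropic proposal
tuned_cov = (2.38**2 / 2) * np.cov(pilot.T)      # rule-of-thumb scaling, d = 2
tuned = metropolis(20000, tuned_cov)             # proposal aligned with the target

print("pilot-chain covariance:\n", np.cov(pilot.T))
print("tuned-chain covariance:\n", np.cov(tuned.T))
```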

JB: Did you do some of these things when testing out the model in your paper? Do you have any intuition for the "shape" of the probability distribution in 18-dimensional space that lies at the heart of your model? For example: do you know if it has one peak, or several?

NU: I’m pretty confident that the MCMC in our analysis is correctly sampling the shape of the probability distribution. I ran lots and lots of analyses, starting the chain in different ways, tweaking the proposal distribution (jumping rule), looking at different priors, different model structures, different data, and so on.

It’s hard to "see" what an 18-dimensional function looks like, but we have 1-dimensional and 2-dimensional projections of it in our paper:





I don’t believe that it has multiple peaks, and I don’t expect it to. Multiple peaks usually show up when the model behavior is non-monotonic as a function of the parameters. This can happen in really nonlinear systems (as with threshold systems like the AMOC), but during the historic period I’m calibrating the model to, I see no evidence of this in the model.
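
Here is a tiny, invented illustration of why non-monotonic model behavior produces multiple peaks: if the "model" is f(θ) = θ², then θ = +2 and θ = -2 fit an observation y = 4 equally well, so the posterior is bimodal. This has nothing to do with the climate model itself; it just makes the mechanism visible.

```python
# Bimodal posterior from a non-monotonic toy model f(theta) = theta**2.
import numpy as np

theta = np.linspace(-4, 4, 801)
y_obs, sigma = 4.0, 0.5
log_like = -0.5 * ((y_obs - theta**2) / sigma) ** 2   # flat prior assumed
post = np.exp(log_like - log_like.max())
post /= np.trapz(post, theta)

# locate local maxima of the posterior density
peaks = theta[(post > np.roll(post, 1)) & (post > np.roll(post, -1))]
print("posterior peaks near:", peaks)                 # roughly -2 and +2
```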

There are correlations between parameters, so there are certain "directions" in parameter space that the posterior distribution is oriented along. And the distribution is not Gaussian. There is evidence of skew, and nonlinear correlations between parameters. Such correlations appear when the data are insufficient to completely identify the parameters (i.e., different combinations of parameters can produce similar model output). This is discussed in more detail in another of our papers:

• Nathan M. Urban and Klaus Keller, Complementary observational constraints on climate sensitivity, Geophysical Research Letters 36 (2009), L04708.

In a Gaussian distribution, the distribution of any pair of parameters will look ellipsoidal, but our distribution has some "banana" or "boomerang" shaped pairwise correlations. This is common, for example, when the model output is a function of the product of two parameters.
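
A quick, invented illustration of that last remark: if the data only pin down the product of two parameters, the joint posterior concentrates along the curve a·b ≈ constant, which looks like a banana or boomerang rather than an ellipse. The numbers below are made up; the point is only the shape.

```python
# "Banana"-shaped posterior when the data constrain only the product a*b.
import numpy as np

a = np.linspace(0.1, 5, 400)
b = np.linspace(0.1, 5, 400)
A, B = np.meshgrid(a, b)

y_obs, sigma = 2.0, 0.2
log_post = -0.5 * ((y_obs - A * B) / sigma) ** 2   # flat priors on a and b
post = np.exp(log_post - log_post.max())

# the high-probability ridge follows a * b ~ 2 rather than an ellipse
i, j = np.unravel_index(np.argmax(post), post.shape)
print("one point on the ridge: a =", A[i, j], " b =", B[i, j],
      " a*b =", A[i, j] * B[i, j])
```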

JB: Okay. It’s great that we got a chance to explore some of the probability theory and statistics underlying your work. It’s exciting for me to see these ideas being used to tackle a big real-life problem. Thanks again for a great interview.


Maturity is the capacity to endure uncertainty. – John Finley


This Week’s Finds (Week 301)

27 August, 2010

The first 300 issues of This Week’s Finds were devoted to the beauty of math and physics. Now I want to bite off a bigger chunk of reality. I want to talk about all sorts of things, but especially how scientists can help save the planet. I’ll start by interviewing some scientists with different views on the challenges we face — including some who started out in other fields, because I’m trying to make that transition myself.

By the way: I know “save the planet” sounds pompous. As George Carlin joked: “Save the planet? There’s nothing wrong with the planet. The planet is fine. The people are screwed.” (He actually put it a bit more colorfully.)

But I believe it’s more accurate when he says:

I think, to be fair, the planet probably sees us as a mild threat. Something to be dealt with. And I am sure the planet will defend itself in the manner of a large organism, like a beehive or an ant colony, and muster a defense.

I think we’re annoying the biosphere. I’d like us to become less annoying, both for its sake and our own. I actually considered using the slogan how scientists can help humans be less annoying — but my advertising agency ran a focus group, and they picked how scientists can help save the planet.

Besides interviewing people, I want to talk about where we stand on various issues, and what scientists can do. It’s a very large task, so I’m really hoping lots of you reading this will help out. You can explain stuff, correct mistakes, and point me to good sources of information. With a lot of help from Andrew Stacey, I’m starting a wiki where we can collect these pointers. I’m hoping it will grow into something interesting.

But today I’ll start with a brief overview, just to get things rolling.

In case you haven’t noticed: we’re heading for trouble in a number of ways. Our last two centuries were dominated by rapid technology change and a rapidly soaring population:

The population is still climbing fast, though the percentage increase per year is dropping. Energy consumption per capita is also rising. So, from 1980 to 2007 the world-wide usage of power soared from 10 to 16 terawatts.

More than 80% of this power comes from fossil fuels. So, we’re putting huge amounts of carbon dioxide into the air: 30 billion metric tons in 2007. So, the carbon dioxide concentration of the atmosphere is rising at a rapid clip: from about 290 parts per million before the industrial revolution, to about 370 in the year 2000, to about 390 now:



 

As you’d expect, temperatures are rising:



 

But how much will they go up? The ultimate amount of warming will largely depend on the total amount of carbon dioxide we put into the air. The research branch of the National Academy of Sciences recently put out a report on these issues:

• National Research Council, Climate Stabilization Targets: Emissions, Concentrations, and Impacts over Decades to Millennia, 2010.

Here are their estimates:



 

You’ll note there’s lots of uncertainty, but a rough rule of thumb is that each doubling of carbon dioxide will raise the temperature around 3 degrees Celsius. Of course people love to argue about these things: you can find reasonable people who’ll give a number anywhere between 1.5 and 4.5 °C, and unreasonable people who say practically anything. We’ll get into this later, I’m sure.
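
The rule of thumb can be written as ΔT ≈ S log2(C/C0), where S is the warming per doubling of CO2. Here is a small sanity-check computation using the concentrations quoted above and the range of "reasonable" sensitivities; the logarithmic formula is the usual back-of-the-envelope approximation, not a precise model.

```python
# Rough equilibrium warming from the logarithmic rule of thumb.
from math import log2

C0 = 290.0                           # roughly pre-industrial CO2, ppm (figure quoted above)
for S in (1.5, 3.0, 4.5):            # "reasonable" sensitivities, degrees C per doubling
    for C in (390.0, 580.0):         # roughly today's level, and one full doubling of C0
        print(f"S = {S} C/doubling, CO2 = {C:.0f} ppm  ->  warming ~ {S * log2(C / C0):.1f} C")
```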

But anyway: if we keep up “business as usual”, it’s easy to imagine us doubling the carbon dioxide sometime this century, so we need to ask: what would a world 3 °C warmer be like?

It doesn’t sound like much… until you realize that the Earth was only about 6 °C colder during the last ice age, and the Antarctic had no ice the last time the Earth was about 4 °C warmer. You also need to bear in mind the shocking suddenness of the current rise in carbon dioxide levels:



You can see several ice ages here — or technically, ‘glacial periods’. Carbon dioxide concentration and temperature go hand in hand, probably due to some feedback mechanisms that make each influence the other. But the scary part is the vertical line on the right where the carbon dioxide shoots up from 290 to 390 parts per million — instantaneously from a geological point of view, and to levels not seen for a long time. Species can adapt to slow climate changes, but we’re trying a radical experiment here.

But what, specifically, could be the effects of a world that’s 3 °C warmer? You can get some idea from the National Research Council report. Here are some of their predictions. I think it’s important to read these, to see that bad things will happen, but the world will not end. Psychologically, it’s easy to avoid taking action if you think there’s no problem — but it’s also easy if you think you’re doomed and there’s no point.

Between their predictions (in boldface) I’ve added a few comments of my own. These comments are not supposed to prove anything. They’re just anecdotal examples of the kind of events the report says we should expect.

For 3 °C of global warming, 9 out of 10 northern hemisphere summers will be “exceptionally warm”: warmer in most land areas than all but about 1 of the summers from 1980 to 2000.

This summer has certainly been exceptionally warm: for example, worldwide, it was the hottest June in recorded history, while July was the second hottest, beaten out only by 2003. Temperature records have been falling like dominoes. This is a taste of the kind of thing we might see.

Increases of precipitation at high latitudes and drying of the already semi-arid regions are projected with increasing global warming, with seasonal changes in several regions expected to be about 5-10% per degree of warming. However, patterns of precipitation show much larger variability across models than patterns of temperature.

Back home in southern California we’re in our fourth year of drought, which has led to many wildfires.

Large increases in the area burned by wildfire are expected in parts of Australia, western Canada, Eurasia and the United States.

We are already getting some unusually intense fires: for example, the Black Saturday bushfires that ripped through Victoria in February 2009, the massive fires in Greece in 2007, and the hundreds of wildfires that broke out in Russia this July.

Extreme precipitation events — that is, days with the top 15% of rainfall — are expected to increase by 3-10% per degree of warming.

The extent to which these events cause floods, and the extent to which these floods cause serious damage, will depend on many complex factors. But today it is hard not to think about the floods in Pakistan, which left about 20 million homeless, and ravaged an area equal to that of California.

In many regions the amount of flow in streams and rivers is expected to change by 5-15% per degree of warming, with decreases in some areas and increases in others.

The total number of tropical cyclones should decrease slightly or remain unchanged. Their wind speed is expected to increase by 1-4% per degree of warming.

It’s a bit counterintuitive that warming could decrease the number of cyclones, while making them stronger. I’ll have to learn more about this.

The annual average sea ice area in the Arctic is expected to decrease by 15% per degree of warming, with more decrease in the summertime.

The area of Arctic ice reached a record low in the summer of 2007, and the fabled Northwest Passage opened up for the first time in recorded history. Then the ice area bounced back. This year it was low again… but what matters more is the overall trend:



 

Global sea level has risen by about 0.2 meters since 1870. The sea level rise by 2100 is expected to be at least 0.6 meters due to thermal expansion and loss of ice from glaciers and small ice caps. This could be enough to permanently displace as many as 3 million people — and raise the risk of floods for many millions more. Ice loss is also occurring in parts of Greenland and Antarctica, but the effect on sea level in the next century remains uncertain.

Up to 2 degrees of global warming, studies suggest that crop yield gains and adaptation, especially at high latitudes, could balance losses in tropical and other regions. Beyond 2 degrees, studies suggest a rise in food prices.

The first sentence there is the main piece of good news — though not if you’re a poor farmer in central Africa.

Increased carbon dioxide also makes the ocean more acidic and lowers the ability of many organisms to make shells and skeletons. Seashells, coral, and the like are made of aragonite, one of the two crystal forms of calcium carbonate. North polar surface waters will become undersaturated for aragonite if the level of carbon dioxide in the atmosphere rises to 400-450 parts per million. Then aragonite will tend to dissolve, rather than form from seawater. For south polar surface waters, this effect will occur at 500-660 ppm. Tropical surface waters and deep ocean waters are expected to remain supersaturated for aragonite throughout the 21st century, but coral reefs may be negatively impacted.

Coral reefs are also having trouble due to warming oceans. For example, this summer there was a mass dieoff of corals off the coast of Indonesia due to ocean temperatures that were 4 °C higher than average.

Species are moving toward the poles to keep cool: the average shift over many types of terrestrial species has been 6 kilometers per decade. The rate of extinction of species will be enhanced by climate change.

I have a strong fondness for the diversity of animals and plants that grace this planet, so this particularly perturbs me. The report does not venture a guess for how many species may go extinct due to climate change, probably because it’s hard to estimate. However, it states that the extinction rate is now roughly 500 times what it was before humans showed up. The extinction rate is measured in extinctions per million years per species. For mammals, it’s shot up from roughly 0.1-0.5 to roughly 50-200. That’s what I call annoying the biosphere!

So, that’s a brief summary of the problems that carbon dioxide emissions may cause. There’s just one more thing I want to say about this now.

Once carbon dioxide is put into the atmosphere, about 50% of it will stay there for decades. About 30% of it will stay there for centuries. And about 20% will stay there for thousands of years:



This particular chart is based on some 1993 calculations by Wigley. Later calculations confirm this idea: the carbon we burn will haunt our skies essentially forever:

• Mason Inman, Carbon is forever, Nature Reports Climate Change, 20 November 2008.

This is why we’re in serious trouble. In the above article, James Hansen puts it this way:

Because of this long CO2 lifetime, we cannot solve the climate problem by slowing down emissions by 20% or 50% or even 80%. It does not matter much whether the CO2 is emitted this year, next year, or several years from now. Instead … we must identify a portion of the fossil fuels that will be left in the ground, or captured upon emission and put back into the ground.

But I think it’s important to be more precise. We can put off global warming by reducing carbon dioxide emissions, and that may be a useful thing to do. But to prevent it, we have to cut our usage of fossil fuels to a very small level long before we’ve used them up.
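
Here is a back-of-the-envelope way to see the point, using only the rough figures above (about 30 billion tonnes of CO2 emitted per year, with about 20% of each year’s emissions staying in the air for thousands of years). The constant-emissions assumption is a deliberate simplification, but it shows why even a large percentage cut only slows the permanent accumulation rather than stopping it.

```python
# Back-of-the-envelope: CO2 that stays in the air "essentially forever"
# under various emission cuts, holding yearly emissions constant otherwise.
emission = 30.0       # billion tonnes of CO2 per year, roughly the 2007 figure above
forever_frac = 0.20   # rough share that stays up for thousands of years (from the text)

for cut in (0.0, 0.5, 0.8, 0.99):
    per_year = emission * (1 - cut)
    added = per_year * forever_frac * 200     # Gt left "permanently" after 200 years
    print(f"{int(cut * 100):>3}% cut: roughly {added:,.0f} Gt of CO2 added essentially for good")
```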



Theoretically, another option is to quickly deploy new technologies to suck carbon dioxide out of the air, or cool the planet in other ways. But there’s almost no chance such technologies will be practical soon enough to prevent significant global warming. They may become important later on, after we’ve already screwed things up. We may be miserable enough to try them, even though they may carry significant risks of their own.

So now, some tough questions:

If we decide to cut our usage of fossil fuels dramatically and quickly, how can we do it? How should we do it? What’s the least painful way? Or should we just admit that we’re doomed to global warming and learn to live with it, at least until we develop technologies to reverse it?

And a few more questions, just for completeness:

Could this all be just a bad dream — or more precisely, a delusion of some sort? Could it be that everything is actually fine? Or at least not as bad as you’re saying?

I won’t attempt to answer any of these now. We’ll have to keep coming back to them, over and over.

So far I’ve only talked about carbon dioxide emissions. There are lots of other problems we should tackle, too! But presumably many of these are just symptoms of some deeper underlying problem. What is this deeper problem? I’ve been trying to figure that out for years. Is there any way to summarize what’s going on, or is it just a big complicated mess?

Here’s my attempt at a quick summary: the human race makes big decisions based on an economic model that ignores many negative externalities.

A ‘negative externality’ is, very roughly, a way in which my actions impose a cost on you, for which I don’t pay any price.

For example: suppose I live in a high-rise apartment and my toilet breaks. Instead of fixing it, I realize that I can just use a bucket — and throw its contents out the window! Whee! If society has no mechanism for dealing with people like me, I pay no price for doing this. But you, down there, will be very unhappy.

This isn’t just theoretical. Once upon a time in Europe there were few private toilets, and people would shout “gardyloo!” before throwing their waste down to the streets below. In retrospect that seems disgusting, but many of the big problems that afflict us now can be seen as the result of equally disgusting externalities. For example:

Carbon dioxide pollution caused by burning fossil fuels. If the expected costs of global warming and ocean acidification were included in the price of fossil fuels, other sources of energy would more quickly become competitive. This is the idea behind a carbon tax or a ‘cap-and-trade program’ where companies pay for permits to put carbon dioxide into the atmosphere.

Dead zones. Put too much nitrogen and phosphorus into a river, and lots of algae will grow in the ocean near the river’s mouth. When the algae die and rot, the water runs out of dissolved oxygen, and fish cannot live there. Then we have a ‘dead zone’. Dead zones are expanding and increasing in number. For example, there’s one about 20,000 square kilometers in size near the mouth of the Mississippi River. Hog farming, chicken farming and runoff from fertilized crop lands are largely to blame.

Overfishing. Since there is no ownership of fish, everyone tries to catch as many fish as possible, even though this is depleting fish stocks to the point of near-extinction. There’s evidence that populations of all big predatory ocean fish have dropped 90% since 1950. Populations of cod, bluefin tuna and many other popular fish have plummeted, despite feeble attempts at regulation.

Species extinction due to habitat loss. Since the economic value of intact ecosystems has not been fully reckoned, in many parts of the world there’s little price to pay for destroying them.

Overpopulation. Rising population is a major cause of the stresses on our biosphere, yet it costs less to have your own child than to adopt one. (However, a pilot project in India is offering cash payments to couples who put off having children for two years after marriage.)

One could go on; I haven’t even bothered to mention many well-known forms of air and water pollution. The Acid Rain Program in the United States is an example of how people eliminated an externality: they imposed a cap-and-trade system on sulfur dioxide pollution.

Externalities often arise when we treat some resource as essentially infinite — for example fish, or clean water, or clean air. We thus impose no cost for using it. This is fine at first. But because this resource is free, we use more and more — until it no longer makes sense to act as if we have an infinite amount. As a physicist would say, the approximation breaks down, and we enter a new regime.

This is happening all over the place now. We have reached the point where we need to treat most resources as finite and take this into account in our economic decisions. We can’t afford so many externalities. It is irrational to let them go on.

But what can you do about this? Or what can I do?

We can do the things anyone can do. Educate ourselves. Educate our friends. Vote. Conserve energy. Don’t throw buckets of crap out of apartment windows.

But what can we do that maximizes our effectiveness by taking advantage of our special skills?

Starting now, a large portion of This Week’s Finds will be the continuing story of my attempts to answer this question. I want to answer it for myself. I’m not sure what I should do. But since I’m a scientist, I’ll pose the question a bit more broadly, to make it a bit more interesting.

How scientists can help save the planet — that’s what I want to know.


Addendum: In the new This Week’s Finds, you can often find the source for a claim by clicking on the nearest available link. This includes the figures. Four of the graphs in this issue were produced by Robert A. Rohde and more information about them can be found at Global Warming Art.


During the journey we commonly forget its goal. Almost every profession is chosen as a means to an end but continued as an end in itself. Forgetting our objectives is the most frequent act of stupidity. — Friedrich Nietzsche


Dying Coral Reefs

18 August, 2010


Global warming has been causing the "bleaching" of coral reefs. A bleached coral reef has lost its photosynthesizing symbiotic organisms, called zooxanthellae. It may look white as a ghost — as in the picture above — but it is not yet dead. If the zooxanthellae come back, the reef can recover.

With this year’s record high temperatures, many coral reefs are actually dying:

• Dan Charles, Massive coral die-off reported in Indonesia, Morning Edition, August 17, 2010.

DAN CHARLES: This past spring and early summer, the Andaman Sea, off the coast of Sumatra, was three, five, even seven degrees [Fahrenheit] warmer than normal. That can be dangerous to coral, so scientists from the Wildlife Conservation Society went out to the reefs to take a look. At that time, about 60 percent of the coral had turned white – it was under extreme stress but still alive.

Caleb McClennen from the Wildlife Conservation Society says they just went out to take a look again.

DR. CALEB MCCLENNEN: The shocking situation, now, is that about 80 percent of those that were bleached have now died.

CHARLES: That’s just in the area McClennen’s colleagues were able to survey. They’re asking other scientists to check on coral in other areas of the Andaman Sea.

Similar mass bleaching events have been observed this year in Sri Lanka, Thailand, Malaysia, and other parts of Indonesia.

For more, see:

• Environmental news service, Corals bleached and dying in overheated south Asian waters, August 16, 2010.

It’s interesting to look back at the history of corals — click for a bigger view:



Corals have been around for a long time. But the corals we see now are completely different from those that ruled the seas before the Permian-Triassic extinction event 250 million years ago. Those earlier corals, in turn, are completely different from those that dominated before the Ordovician began around 490 million years ago. A major group of corals called the Heliolitida died out in the Late Devonian extinction. And so on.

Why? Corals live near the surface of the ocean and are thus particularly sensitive not only to temperature changes but also to changes in sea levels and changes in the amount of dissolved CO2, which makes seawater more acidic.

We are now starting to see what the Holocene extinction will do to corals. Not only the warming but also the acidification of the oceans is hurting them. Indeed, seawater is reaching the point where aragonite, the mineral from which corals are made, starts to dissolve rather than form.

This paper reviews the issue:

• O. Hoegh-Guldberg, P. J. Mumby, A. J. Hooten, R. S. Steneck, P. Greenfield, E. Gomez, C. D. Harvell, P. F. Sale, A. J. Edwards, K. Caldeira, N. Knowlton, C. M. Eakin, R. Iglesias-Prieto, N. Muthiga, R. H. Bradbury, A. Dubi and M. E. Hatziolos, Coral reefs under rapid climate change and ocean acidification, Science 318 (14 December 2007), 1737-1742.

Chris Colose has a nice summary of what this paper predicts under three scenarios:

1) If CO2 is stabilized today, at 380 ppm-like conditions, corals will change a bit but areas will remain coral dominated. Hoegh-Guldberg et al. emphasize the importance of solving regional problems such as fishing pressure and air/water quality, which are human-induced but not directly linked to climate change/ocean acidification.

2) Increases of CO2 to 450-500 ppmv, under the current >1 ppmv/yr scenario, will cause significant declines in coral populations. Natural adaptive shifts to symbionts with a +2°C resistance may delay the demise of some reefs, and this will differ by area. Carbonate-ion concentrations will drop below the 200 µmol kg⁻¹ threshold, and coral erosion will outweigh calcification, with significant impacts on marine biodiversity.

3) In the words of the study, a scenario of >500 ppmv and +2°C sea surface temperatures “will reduce coral reef ecosystems to crumbling frameworks with few calcareous corals”. Due to latitudinally decreasing aragonite concentrations and projected atmospheric CO2 increases, adaptation by shifting to higher latitudes with more thermal tolerance is unlikely. Coral reefs exist within a narrow band of temperature, light, and aragonite saturation states, and expected rises in SSTs will produce many changes on timescales of decades to centuries (Hoegh-Guldberg 2005). Rising sea levels may also harm reefs, which require shallow water. Under business-as-usual to higher-range scenarios used by the IPCC, corals will become rare in the tropics, with huge impacts on biodiversity and the ecosystem services they provide.

The chemistry of coral is actually quite subtle. Here’s a nice introduction, at least for people who aren’t scared by section headings like “Why don’t corals simply pump more protons?”:

• Anne L. Cohen and Michael Holcomb, Why corals care about ocean acidification: uncovering the mechanism, Oceanography 22 (2009), 118-127.


Overfishing

28 July, 2010

While climate change is the 800-pound gorilla of ecological issues, I don’t want it to completely dominate the conversation here. There are a lot of other issues to think about. For example, overfishing!

My friend the mathematician John Terilla says that after we had dinner together at a friend’s house, he can’t help thinking about overfishing — especially when he eats fish. I’m afraid I have that effect on people these days.

(In case you’re wondering, we didn’t have fish for dinner.)

Anyway, John just pointed out this book review:

• Elizabeth Kolbert, The scales fall: is there any hope for our overfished oceans?, New Yorker, August 2, 2010.

It’s short and very readable. It starts out talking about tuna. In the last 40 years, the numbers of bluefin tuna have dropped by roughly 80 percent. A big part of the problem is ICCAT, which either means the International Commission for the Conservation of Atlantic Tunas, or else the International Conspiracy to Catch All Tunas, depending on whom you ask. In 2008, ICCAT scientists recommended that the bluefin catch in the eastern Atlantic and the Mediterranean be limited to 8500-15,000 tons. ICCAT went ahead and adopted a quota of 22,000 tons! So it’s no surprise that we’re in trouble now.

But it’s not just tuna. Look at what happened to cod off the east coast of Newfoundland:



In fact, there’s evidence that the population of all kinds of big predatory fish has dropped 90% since 1950:

• Ransom A. Myers and Boris Worm, Rapid worldwide depletion of predatory fish communities, Nature 423 (15 May 2003), 280-283.

Of course you’d expect someone with the name “Worm” to be against fishing, but Myers agrees: “From giant blue marlin to mighty bluefin tuna, and from tropical groupers to Antarctic cod, industrial fishing has scoured the global ocean. There is no blue frontier left. Since 1950, with the onset of industrialized fisheries, we have rapidly reduced the resource base to less than 10 percent—not just in some areas, not just for some stocks, but for entire communities of these large fish species from the tropics to the poles.”

In fact, we’re “fishing down the food chain”: now that the big fish are gone, we’re going after larger and larger numbers of smaller and smaller species, with former “trash fish” now available at your local market. It’s a classic tragedy of the commons: with nobody able to own fish, everyone is motivated to break agreements to limit fishing. Here’s a case where I think some intelligent applications of economics and game theory could work wonders. But who has the muscle to forge and enforce agreements? Clearly ICCAT and other existing bodies do not!
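
The dynamics behind such collapses can be illustrated with a textbook-style toy model, not fitted to any real fishery: a fish stock that grows logistically while a fixed harvest is removed each year. The growth rate, carrying capacity and harvest levels below are all invented; the point is only that pushing the harvest above the maximum sustainable yield sends the stock toward collapse.

```python
# Toy logistic stock model with a fixed annual harvest (Schaefer-style sketch).
def simulate(harvest, r=0.3, K=1.0, x0=0.9, years=100):
    x = x0
    for _ in range(years):
        x = max(x + r * x * (1 - x / K) - harvest, 0.0)
    return x

msy = 0.3 * 1.0 / 4   # maximum sustainable yield for these numbers: r*K/4 = 0.075
for h in (0.05, 0.07, 0.08, 0.10):
    print(f"annual harvest {h:.2f} (MSY ~ {msy:.3f}): stock after 100 years = {simulate(h):.3f}")
```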

But there’s still hope. For starters, learn which fish to avoid eating. And think about this:

It is almost as though we use our military to fight the animals in the ocean. We are gradually winning this war to exterminate them. And to see this destruction happen, for nothing really – for no reason – that is a bit frustrating. Strangely enough, these effects are all reversible, all the animals that have disappeared would reappear, all the animals that were small would grow, all the relationships that you can’t see any more would re-establish themselves, and the system would re-emerge. So that’s one thing to be optimistic about. The oceans, much more so than the land, are reversible… – Daniel Pauly