This Week’s Finds (Week 307)

14 December, 2010

I’d like to take a break from interviews and explain some stuff I’m learning about. I’m eager to tell you about some papers in the book Tim Palmer helped edit, Stochastic Physics and Climate Modelling. But those papers are highly theoretical, and theories aren’t very interesting until you know what they’re theories of. So today I’ll talk about "El Niño", which is part of a very interesting climate cycle. Next time I’ll get into more of the math.

I hadn’t originally planned to get into so much detail on the El Niño, but this cycle is a big deal in southern California. In the city of Riverside, where I live, it’s very dry. There is a small river, but it’s just a trickle of water most of the time: there’s a lot less "river" than "side". It almost never rains between March and December. Sometimes, during a "La Niña", it doesn’t even rain in the winter! But then sometimes we have an "El Niño" and get huge floods in the winter. At this point, the tiny stream that gives Riverside its name swells to a huge raging torrent. The difference is very dramatic.

So, I’ve always wanted to understand how the El Niño cycle works — but whenever I tried to read an explanation, I couldn’t follow it!

I finally broke that mental block when I read some stuff on William Kessler’s website. He’s an expert on the El Niño phenomenon who works at the Pacific Marine Environmental Laboratory. One thing I like about his explanations is that he says what we do know about the El Niño, and also what we don’t know. We don’t know what triggers it!

In fact, Kessler says the El Niño would make a great research topic for a smart young scientist. In an email to me, which he has allowed me to quote, he said:

We understand lots of details but the big picture remains mysterious. And I enjoyed your interview with Tim Palmer because it brought out a lot of the sources of uncertainty in present-generation climate modeling. However, with El Niño, the mystery is beyond Tim’s discussion of the difficulties of climate modeling. We do not know whether the tropical climate system on El Niño timescales is stable (in which case El Niño needs an external trigger, of which there are many candidates) or unstable. In the 80s and 90s we developed simple "toy" models that convinced the community that the system was unstable and El Niño could be expected to arise naturally within the tropical climate system. Now that is in doubt, and we are faced with a fundamental uncertainty about the very nature of the beast. Since none of us old farts has any new ideas (I just came back from a conference that reviewed this stuff), this is a fruitful field for a smart young person.

So, I hope some smart young person reads this and dives into working on El Niño!

But let’s start at the beginning. Why did I have so much trouble understanding explanations of the El Niño? Well, first of all, I’m an old fart. Second, most people are bad at explaining stuff: they skip steps, use jargon they haven’t defined, and so on. But third, climate cycles are hard to explain. There’s a lot about them we don’t understand — as Kessler’s email points out. And they also involve a kind of "cyclic causality" that’s a bit tough to mentally process.

At least where I come from, people find it easy to understand linear chains of causality, like "A causes B, which causes C". For example: why is the king’s throne made of gold? Because the king told his minister "I want a throne of gold!" And the minister told the servant, "Make a throne of gold!" And the servant made the king a throne of gold.

Now that’s what I call an explanation! It’s incredibly satisfying, at least if you don’t wonder why the king wanted a throne of gold in the first place. It’s easy to remember, because it sounds like a story. We hear a lot of stories like this when we’re children, so we’re used to them. My example sounds like the beginning of a fairy tale, where the action is initiated by a "prime mover": the decree of a king.

There’s something a bit trickier about cyclic causality, like "A causes B, which causes C, which causes A." It may sound like a sneaky trick: we consider "circular reasoning" a bad thing. Sometimes it is a sneaky trick. But sometimes this is how things really work!

Why does big business have such influence in American politics? Because big business hires lots of lobbyists, who talk to the politicians, and even give them money. Why are they allowed to do this? Because big business has such influence in American politics. That’s an example of a "vicious circle". You might like to cut it off — but like a snake holding its tail in its mouth, it’s hard to know where to start.

Of course, not all circles are "vicious". Many are "virtuous".

But the really tricky thing is how a circle can sometimes reverse direction. In academia we worry about this a lot: we say a university can either "ratchet up" or "ratchet down". A good university attracts good students and good professors, who bring in more grant money, and all this makes it even better… while a bad university tends to get even worse, for all the same reasons. But sometimes a good university goes bad, or vice versa. Explaining that transition can be hard.

It’s also hard to explain why a La Niña switches to an El Niño, or vice versa. Indeed, it seems scientists still don’t understand this. They have some models that simulate this process, but there are still lots of mysteries. And even if they get models that work perfectly, they still may not be able to tell a good story about it. Wind and water are ultimately described by partial differential equations, not fairy tales.

But anyway, let me tell you a story about how it works. I’m just learning this stuff, so take it with a grain of salt…

The "El Niño/Southern Oscillation" or "ENSO" is the largest form of variability in the Earth’s climate on times scales greater than a year and less than a decade. It occurs across the tropical Pacific Ocean every 3 to 7 years, and on average every 4 years. It can cause extreme weather such as floods and droughts in many regions of the world. Countries dependent on agriculture and fishing, especially those bordering the Pacific Ocean, are the most affected.

And here’s a cute little animation of it produced by the Australian Bureau of Meteorology:



Let me tell you first about La Niña, and then El Niño. If you keep glancing back at this little animation, I promise you can understand everything I’ll say.

Winds called trade winds blow west across the tropical Pacific. During La Niña years, water at the ocean’s surface moves west with these winds, warming up in the sunlight as it goes. So, warm water collects at the ocean’s surface in the western Pacific. This creates more clouds and rainstorms in Asia. Meanwhile, since surface water is being dragged west by the wind, cold water from below gets pulled up to take its place in the eastern Pacific, off the coast of South America.

I hope this makes sense so far. But there’s another aspect to the story. Because the ocean’s surface is warmer in the western Pacific, it heats the air and makes it rise. So, wind blows west to fill the "gap" left by rising air. This strengthens the westward-blowing trade winds.

So, it’s a kind of feedback loop: the oceans being warmer in the western Pacific helps the trade winds blow west, and that makes the western oceans even warmer.

Get it? This should all make sense so far, except for one thing. There’s one big question, and I hope you’re asking it. Namely:

Why do the trade winds blow west?

If I don’t answer this, my story so far would work just as well if I switched the words "west" and "east". That wouldn’t necessarily mean my story was wrong. It might just mean that there were two equally good options: a La Niña phase where the trade winds blow west, and another phase — say, El Niño — where they blow east! From everything I’ve said so far, the world could be permanently stuck in one of these phases. Or, maybe it could randomly flip between these two phases for some reason.

Something roughly like this last choice is actually true. But it’s not so simple: there’s not a complete symmetry between west and east.

Why not? Mainly because the Earth is turning to the east.

Air near the equator warms up and rises, so new air from more northern or southern regions moves in to take its place. But because the Earth is fatter at the equator, the ground at the equator is moving east faster than the ground at higher latitudes. So, the new air arriving from those places is moving east less quickly than the surface beneath it… and as seen by someone standing on the equator, it blows west. This is an example of the Coriolis effect:



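Here's a quick numerical sketch of that argument (my own back-of-envelope numbers, not anything from the text): compare how fast the ground moves east at the equator with how fast it moves at 30° latitude.

```python
import math

omega = 2 * math.pi / 86164     # Earth's rotation rate in rad/s (one sidereal day)
R = 6.371e6                     # Earth's mean radius in meters

def eastward_ground_speed(lat_deg):
    """Eastward speed of the Earth's surface at a given latitude, in m/s."""
    return omega * R * math.cos(math.radians(lat_deg))

print(eastward_ground_speed(0))    # ~465 m/s at the equator
print(eastward_ground_speed(30))   # ~403 m/s at 30 degrees latitude
# Air drifting toward the equator from 30 degrees starts out moving east about
# 60 m/s slower than the ground it arrives over, so relative to that ground it
# blows toward the west.
```
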
By the way: in case this stuff wasn’t tricky enough already, a wind that blows to the west is called an easterly, because it blows from the east! That’s what happens when you put sailors in charge of scientific terminology. So the westward-blowing trade winds are called "northeasterly trades" and "southeasterly trades" in the picture above. But don’t let that confuse you.

(I also tend to think of Asia as the "Far East" and California as the "West Coast", so I always need to keep reminding myself that Asia is in the west Pacific, while California is in the east Pacific. But don’t let that confuse you either! Just repeat after me until it makes perfect sense: "The easterlies blow west from West Coast to Far East".)

Okay: silly terminology aside, I hope everything makes perfect sense so far. The trade winds have a good intrinsic reason to blow west, but in the La Niña phase they’re also part of a feedback loop where they make the western Pacific warmer… which in turn helps the trade winds blow west.

But then comes an El Niño! Now for some reason the westward winds weaken. This lets the built-up warm water in the western Pacific slosh back east. And with weaker westward winds, less cold water is pulled up to the surface in the east. So, the eastern Pacific warms up. This makes for more clouds and rain in the eastern Pacific — that’s when we get floods in Southern California. And with the ocean warmer in the eastern Pacific, hot air rises there, which tends to counteract the westward winds even more!

In other words, all the feedbacks reverse themselves.

But note: the trade winds never mainly blow east. During an El Niño they still blow west, just a bit less. So, the climate is not flip-flopping between two symmetrical alternatives. It’s flip-flopping between two asymmetrical alternatives.
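
The "toy models" Kessler mentioned capture this kind of self-sustained flip-flopping with surprisingly little machinery. Below is a minimal sketch of one classic example, a Suarez-Schopf-style "delayed action oscillator". It is my own simplified, symmetric caricature (so it misses the asymmetry just described), and the parameter values are chosen purely for illustration.

```python
import numpy as np

# Minimal sketch of a Suarez-Schopf-style "delayed action oscillator":
#   dT/dt = T - T**3 - alpha * T(t - delta)
# Here T is a scaled east-Pacific temperature anomaly; the delayed term stands
# for an ocean wave signal returning after reflecting off the western boundary.
# Parameter values are illustrative, not fitted to anything.

alpha, delta = 0.75, 3.0            # delayed-feedback strength and delay (nondimensional)
dt, n_steps = 0.01, 20000
n_delay = int(delta / dt)

T = np.full(n_steps, 0.1)           # constant warm history before t = 0
for i in range(n_delay, n_steps - 1):
    dTdt = T[i] - T[i]**3 - alpha * T[i - n_delay]
    T[i + 1] = T[i] + dt * dTdt

# For suitable alpha and delta the anomaly settles into self-sustained
# oscillations with a period set largely by the delay; try plotting T.
```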

I hope all this makes sense… except for one thing. There’s another big question, and I hope you’re asking it. Namely:

Why do the westward trade winds weaken?

We could also ask the same question about the start of the La Niña phase: why do the westward trade winds get stronger?

The short answer is that nobody knows. Or at least there’s no one story that everyone agrees on. There are actually several stories… and perhaps more than one of them is true. But now let me just show you the data:



The top graph shows variations in the water temperature of the tropical Eastern Pacific ocean. When it’s hot we have El Niños: those are the red hills in the top graph. The blue valleys are La Niñas. Note that it’s possible to have two El Niños in a row without an intervening La Niña, or vice versa!

The bottom graph shows the "Southern Oscillation Index" or "SOI". This is the air pressure in Tahiti minus the air pressure in Darwin, Australia. You can see those locations here:



So, when the SOI is high, the air pressure is higher in the east Pacific than in the west Pacific. This is what we expect in a La Niña: that’s why the westward trade winds are strong then! Conversely, the SOI is low in the El Niño phase. This variation in the SOI is called the Southern Oscillation.
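
If you like seeing definitions as code, here is a minimal sketch of an SOI-like index computed from two monthly pressure records. The official SOI standardizes each station's anomalies against a base period and rescales, so treat this as an illustration rather than the real recipe.

```python
import numpy as np

def soi_like_index(p_tahiti, p_darwin):
    """Standardized Tahiti-minus-Darwin pressure difference (a simplified SOI)."""
    diff = np.asarray(p_tahiti, dtype=float) - np.asarray(p_darwin, dtype=float)
    return (diff - diff.mean()) / diff.std()

# Positive values go with higher pressure in the east than the west (La Niña-like
# conditions); negative values go with the El Niño phase.
```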

If you look at the graphs above, you’ll see how one looks almost like an upside-down version of the other. So, the El Niño/La Niña cycle is tightly linked to the Southern Oscillation.

Another thing you’ll see is that the ENSO cycle is far from perfectly periodic! Here’s a graph of the Southern Oscillation Index going back a lot further:



This graph was made by William Kessler. His explanations of the ENSO cycle are the first ones I really understood:

My own explanation here is a slow-motion, watered-down version of his. Any mistakes are, of course, mine. To conclude, I want to quote his discussion of theories about why an El Niño starts, and why it ends. As you’ll see, this part is a bit more technical. It involves three concepts I haven’t explained yet:

  • The "thermocline" is the border between the warmer surface water in the ocean and the cold deep water, 100 to 200 meters below the surface. During the La Niña phase, warm water is blown to the western Pacific, and cold water is pulled up to the surface of the eastern Pacific. So, the thermocline is deeper in the west than the east:

    When an El Niño occurs, the thermocline flattens out:

  • "Oceanic Rossby waves" are very low-frequency waves in the ocean’s surface and thermocline. At the ocean’s surface they are only 5 centimeters high, but hundreds of kilometers across. They move at about 10 centimeters/second, requiring months to years to cross the ocean! The surface waves are mirrored by waves in the thermocline, which are much larger, 10-50 meters in height. When the surface goes up, the thermocline goes down.
  • The "Madden-Julian Oscillation" or "MJO" is the largest form of variability in the tropical atmosphere on time scales of 30-90 days. It’s a pulse that moves east across the Indian Ocean and Pacific ocean at 4-8 meters/second. It manifests itself as patches of anomalously high rainfall and also anomalously low rainfall. Strong Madden-Julian Oscillations are often seen 6-12 months before an El Niño starts.

With this bit of background, let’s read what Kessler wrote:

There are two main theories at present. The first is that the event is initiated by the reflection from the western boundary of the Pacific of an oceanic Rossby wave (a type of low-frequency planetary wave that moves only west). The reflected wave is supposed to lower the thermocline in the west-central Pacific and thereby warm the SST [sea surface temperature] by reducing the efficiency of upwelling to cool the surface. Then that makes winds blow towards the (slightly) warmer water and really start the event. The nice part about this theory is that the Rossby waves can be observed for months before the reflection, which implies that El Niño is predictable.

The other idea is that the trigger is essentially random. The tropical convection (organized large-scale thunderstorm activity) in the rising air tends to occur in bursts that last for about a month, and these bursts propagate out of the Indian Ocean (known as the Madden-Julian Oscillation). Since the storms are geostrophic (rotating according to the turning of the earth, which means they rotate clockwise in the southern hemisphere and counter-clockwise in the north), storm winds on the equator always blow towards the east. If the storms are strong enough, or last long enough, then those eastward winds may be enough to start the sloshing. But specific Madden-Julian Oscillation events are not predictable much in advance (just as specific weather events are not predictable in advance), and so to the extent that this is the main element, then El Niño will not be predictable.

In my opinion both of these processes can be important in different El Niños. Some models that did not have the MJO storms were successful in predicting the events of 1986-87 and 1991-92. That suggests that the Rossby wave part was a main influence at that time. But those same models have failed to predict the events since then, and the westerlies have appeared to come from nowhere. It is also quite possible that these two general sets of ideas are incomplete, and that there are other causes entirely. The fact that we have very intermittent skill at predicting the major turns of the ENSO cycle (as opposed to the very good forecasts that can be made once an event has begun) suggests that there remain important elements that await explanation.

Next time I’ll talk a bit about mathematical models of the ENSO and another climate cycle — but please keep in mind that these cycles are still far from fully understood!


To hate is to study, to study is to understand, to understand is to appreciate, to appreciate is to love. So maybe I’ll end up loving your theory. – John Archibald Wheeler


This Week’s Finds (Week 306)

7 December, 2010

This week I’ll interview another physicist who successfully made the transition from gravity to climate science: Tim Palmer.

JB: I hear you are starting to build a climate science research group at Oxford.  What led you to this point? What are your goals?

TP: I started my research career at Oxford University, doing a PhD in general relativity theory under the cosmologist Dennis Sciama (himself a student of Paul Dirac). Then I switched gear and have spent most of my career working on the dynamics and predictability of weather and climate, mostly working in national and international meteorological and climatological institutes. Now I’m back in Oxford as a Royal Society Research Professor in climate physics. Oxford has a lot of climate-related activities going on, both in basic science and in impact and policy issues. I want to develop activities in climate physics. Oxford has wonderful Physics and Mathematics Departments and I am keen to try to exploit human resources from these areas where possible.

The general area which interests me is in the area of uncertainty in climate prediction; finding ways to estimate uncertainty reliably and, of course, to reduce uncertainty. Over the years I have helped develop new techniques to predict uncertainty in weather forecasts. Because climate is a nonlinear system, the growth of initial uncertainty is flow dependent. Some days when the system is in a relatively stable part of state space, accurate weather predictions can be made a week or more ahead of time. In other more unstable situations, predictability is limited to a couple of days. Ensemble weather forecast techniques help estimate such flow dependent predictability, and this has enormous practical relevance.

How to estimate uncertainty in climate predictions is much more tricky than for weather prediction. There is, of course, the human element: how much we reduce greenhouse gas emissions will impact on future climate. But leaving this aside, there is the difficult issue of how to estimate the accuracy of the underlying computer models we use to predict climate.

To say a bit more about this, the problem is to do with how well climate models simulate the natural processes which amplify the anthropogenic increases in greenhouse gases (notably carbon dioxide). A key aspect of this amplification process is associated with the role of water in climate. For example, water vapour is itself a powerful greenhouse gas. If we were to assume that the relative humidity of the atmosphere (the percentage of the amount of water vapour at which the air would be saturated) was constant as the atmosphere warms under anthropogenic climate change, then humidity would amplify the climate change by a factor of two or more. On top of this, clouds — i.e. water in its liquid rather than gaseous form — have the potential to further amplify climate change (or indeed decrease it depending on the type or structure of the clouds). Finally, water in its solid phase can also be a significant amplifier of climate change. For example, sea ice reflects sunlight back to space. However as sea ice melts, e.g. in the Arctic, the underlying water absorbs more of the sunlight than before, again amplifying the underlying climate change signal.
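
One standard way to make "amplify by a factor of two" concrete is the feedback-factor formula: the total response equals the no-feedback response divided by (1 - f), where f is the fraction of warming fed back. This is my own illustration, not Palmer's wording, and the numbers below are placeholders.

```python
# Toy feedback arithmetic (illustrative numbers only):
# if a fraction f of the warming is fed back (water vapour, clouds, ice albedo, ...),
# the equilibrium response is the no-feedback response divided by (1 - f).
dT_no_feedback = 1.2                      # deg C per CO2 doubling, rough no-feedback value
for f in (0.0, 0.5, 2.0 / 3.0):
    print(f, dT_no_feedback / (1 - f))    # f = 0.5 doubles the response, f = 2/3 triples it
```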

We can approach these problems in two ways. Firstly we can use simplified mathematical models in which plausible assumptions (like the constant relative humidity one) are made to make the mathematics tractable. Secondly, we can try to simulate climate ab initio using the basic laws of physics (here, mostly, but not exclusively, the laws of classical physics). If we are to have confidence in climate predictions, this ab initio approach has to be pursued. However, unlike, say temperature in the atmosphere, water vapour and cloud liquid water have more of a fractal distribution, with both large and small scales. We cannot simulate accurately the small scales in a global climate model with fixed (say 100km) grid, and this, perhaps more than anything, is the source of uncertainty in climate predictions.

This is not just a theoretical problem (although there is some interesting mathematics involved, e.g. of multifractal distribution theory and so on). In the coming years, governments will be looking to spend billions on new infrastructure for society to adapt to climate change: more reservoirs, better flood defences, bigger storm sewers etc etc. It is obviously important that this money is spent wisely. Hence we need to have some quantitative and reliable estimate of certainty that in regions where more reservoirs are to be built, the climate really will get drier and so on.

There is another reason for developing quantitative methods for estimating uncertainty: climate geoengineering. If we spray aerosols in the stratosphere, or whiten clouds by spraying sea salt into them, we need to be sure we are not doing something terrible to our climate, like shutting off the monsoons, or decreasing rainfall over Amazonia (which might then make the rainforest a source of carbon for the atmosphere rather than a sink). Reliable estimates of uncertainty of regional impacts of geoengineering are going to be essential in the future.

My goals? To bring quantitative methods from physics and maths into climate decision making.  One area that particularly interests me is the application of nonlinear stochastic-dynamic techniques to represent unresolved scales of motion in the ab initio models. If you are interested to learn more about this, please see this book:

• Tim Palmer and Paul Williams, editors, Stochastic Physics and Climate Modelling, Cambridge U. Press, Cambridge, 2010.

JB: Thanks! I’ve been reading that book. I’ll talk about it next time on This Week’s Finds.

Suppose you were advising a college student who wanted to do something that would really make a difference when it comes to the world’s environmental problems.  What would you tell them?

TP: Well although this sounds a bit of a cliché, it’s important first and foremost to enjoy and be excited by what you are doing. If you have a burning ambition to work on some area of science without apparent application or use, but feel guilty because it’s not helping to save the planet, then stop feeling guilty and get on with fulfilling your dreams. If you work in some difficult area of science and achieve something significant, then this will give you a feeling of confidence that is impossible to be taught. Feeling confident in one’s abilities will make any subsequent move into new areas of activity, perhaps related to the environment, that much easier. If you demonstrate that confidence at interview, moving fields, even late in life, won’t be so difficult.

In my own case, I did a PhD in general relativity theory, and having achieved this goal (after a bleak period in the middle where nothing much seemed to be working out), I did sort of think to myself: if I can add to the pool of knowledge in this, traditionally difficult area of theoretical physics, I can pretty much tackle anything in science. I realize that sounds rather arrogant, and of course life is never as easy as that in practice.

JB: What if you were advising a mathematician or physicist who was already well underway in their career?  I know lots of such people who would like to do something "good for the planet", but feel that they’re already specialized in other areas, and find it hard to switch gears.  In fact I might as well admit it — I’m such a person myself!

TP: Talk to the experts in the field. Face to face. As many as possible. Ask them how your expertise can be put to use. Get them to advise you on key meetings you should try to attend.

JB: Okay.  You’re an expert in the field, so I’ll start with you.  How can my expertise be put to use?  What are some meetings that I should try to attend?

TP: The American Geophysical Union and the European Geophysical Union have big multi-session conferences each year which include mathematicians with an interest in climate. On top of this, mathematical science institutes are increasingly holding meetings to engage mathematicians and climate scientists. For example, the Isaac Newton Institute at Cambridge University is holding a six-month programme on climate and mathematics. I will be there for part of this programme. There have been similar programmes in the US and in Germany very recently.

Of course, as well as going to meetings, or perhaps before going to them, there is the small matter of some reading material. Can I strongly recommend the Working Group One report of the latest IPCC climate change assessments? WG1 is tasked with summarizing the physical science underlying climate change. Start with the WG1 Summary for Policymakers from the Fourth Assessment Report:

• Intergovernmental Panel on Climate Change, Climate Change 2007: The Physical Science Basis, Summary for Policymakers.

and, if you are still interested, tackle the main WG1 report:

• Intergovernmental Panel on Climate Change, Climate Change 2007: The Physical Science Basis, Cambridge U. Press, Cambridge, 2007.

There is a feeling that since the various so-called "Climategate" scandals, in which IPCC were implicated, climate scientists need to be more open about uncertainties in climate predictions and climate prediction models. But in truth, these uncertainties have always been openly discussed in the WG1 reports. These reports are absolutely not the alarmist documents many seem to think, and, I would say, give an extremely balanced picture of the science. The latest report dates from 2007.

JB: I’ve been slowly learning what’s in this report, thanks in part to Nathan Urban, whom I interviewed in previous issues of This Week’s Finds. I’ll have to keep at it.



You told me that there’s a big difference between the "butterfly effect" in chaotic systems with a few degrees of freedom, such as the Lorenz attractor shown above, and the "real butterfly effect" in systems with infinitely many degrees of freedom, like the Navier-Stokes equations, the basic equations describing fluid flow. What’s the main difference?

TP: Everyone knows, or at least thinks they know, what the butterfly effect is: the exponential growth of small initial uncertainties in chaotic systems, like the Lorenz system, after which the butterfly effect was named by James Gleick in his excellent popular book:

• James Gleick, Chaos: Making a New Science, Penguin, London, 1998.

But in truth, this is not the butterfly effect as Lorenz had meant it (I knew Ed Lorenz quite well). If you think about it, the possible effect of a flap of a butterfly’s wings on the weather some days later, involves not only an increase in the amplitude of the uncertainty, but also the scale. If we think of a turbulent system like the atmosphere, comprising a continuum of scales, its evolution is described by partial differential equations, not a low order set of ordinary differential equations. Each scale can be thought of as having its own characteristic dominant Lyapunov exponent, and these scales interact nonlinearly.

If we want to estimate the time for a flap of a butterfly’s wings to influence a large scale weather system, we can imagine summing up all the Lyapunov timescales associated with all the scales from the small scales to the large scales. If this sum diverges, then very good, we can say it will take a very long time for a small scale error or uncertainty to influence a large-scale system. But alas, simple scaling arguments suggest that there may be situations (in 3 dimensional turbulence) where this sum converges. Normally, we think of convergence as a good thing, but in this case it means that the small scale uncertainty, no matter how small scale it is, can affect the accuracy of the large scale prediction… in finite time. This is quite different to the conventional butterfly effect in low order chaos, where arbitrarily long predictions can be made by reducing initial uncertainty to sufficiently small levels.
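
Here is a toy version of that argument in code. The Kolmogorov-style scaling, the outer scale and its timescale, and the 1/day Lyapunov exponent are all assumptions of mine for illustration, not anything Palmer stated.

```python
import math

# Toy contrast between the "real" butterfly effect and low-order chaos.
L, tau_L = 1.0e7, 10 * 86400.0      # assumed outer scale (m) and its eddy timescale (s)

def turnover_time(scale):
    # 3D turbulence: eddy timescale shrinks like scale**(2/3) (Kolmogorov scaling)
    return tau_L * (scale / L) ** (2.0 / 3.0)

# Sum the timescales over successively halved scales, down to metre-sized "butterflies".
total, scale = 0.0, L
while scale > 1.0:
    total += turnover_time(scale)
    scale /= 2.0
print(total / 86400.0, "days")      # the sum converges: a finite predictability horizon,
                                    # no matter how small the initial disturbance

# Low-order chaos instead: horizon ~ (1/lambda) * ln(tolerance / initial_error),
# which grows without bound as the initial error shrinks.
lam = 1.0 / 86400.0                 # an assumed Lyapunov exponent of 1 per day
for initial_error in (1e-3, 1e-6, 1e-9):
    print((1.0 / lam) * math.log(1.0 / initial_error) / 86400.0, "days")
```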

JB: What are the practical implications of this difference?

TP: Climate models are finite truncations of the underlying partial differential equations of climate. A crucial question is: how do solutions converge as the truncation gets better and better?  More practically, how many floating point operations per second (flops) does my computer need to have, in order that I can simulate the large-scale components of climate accurately? Teraflops, petaflops, exaflops? Is there an irreducible uncertainty in our ability to simulate climate no matter how many flops we have? Because of the "real" butterfly effect, we simply don’t know. This has real practical implications.

JB: Nobody has proved existence and uniqueness for solutions of the Navier-Stokes equations. Indeed, the Clay Mathematics Institute is offering a million-dollar prize for settling this question. But meteorologists use these equations to predict the weather with some success.  To mathematicians that might seem a bit strange.  What do you think is going on here?

TP: Actually, for certain simplifications to the Navier-Stokes equations, such as making them hydrostatic (which damps acoustic waves), existence and uniqueness can be proven. And for weather forecasting we can get away with the hydrostatic approximation for most applications. But in general existence and uniqueness haven’t been proven. The "real" butterfly effect is linked to this. Well obviously the Intergovernmental Panel on Climate Change can’t wait for the mathematicians to solve this problem, but as I tried to suggest above, I don’t think the problem is just an arcane mathematical conundrum; rather, it may help us understand better what is possible to predict about climate change and what is not.

JB:  Of course, meteorologists are really using a cleverly discretized version of the Navier-Stokes equations to predict the weather. Something vaguely similar happens in quantum field theory: we can use "lattice QCD" to compute the mass of the proton to reasonable accuracy, but nobody knows for sure if QCD makes sense in the continuum.  Indeed, there’s another million-dollar Clay Prize waiting for the person who can figure that out.   Could it be that sometimes a discrete approximation to a continuum theory does a pretty good job even if the continuum theory fundamentally doesn’t make sense?

TP: There you are! Spend a few years working on the continuum limit of lattice QCD and you may end up advising government on the likelihood of unexpected consequences on regional climate arising from some geoengineering proposal! The idea that two so apparently different fields could have elements in common is something bureaucrats find hard to get their heads round.  We at the sharp end in science need to find ways of making it easier for scientists to move fields (even on a temporary basis) should they want to.

This reminds me of a story. When I was finishing my PhD, my supervisor, Dennis Sciama, announced one day that the process of Hawking radiation from black holes could be understood using the Principle of Maximum Entropy Production in non-equilibrium thermodynamics. I had never heard of this Principle before, no doubt a gap in my physics education. However, a couple of weeks later, I was talking to a colleague of a colleague who was a climatologist, and he was telling me about a recent paper that purported to show that many of the properties of our climate system could be deduced from the Principle of Maximum Entropy Production. That there might be such a link between black hole theory and climate physics was one reason that I thought changing fields might not be so difficult after all.

JB: To what extent is the problem of predicting climate insulated from the problems of predicting weather?  I bet this is a hard question, but it seems important.  What do people know about this?

TP: John Von Neumann was an important figure in meteorology (as well, for example, as in quantum theory). He oversaw a project at Princeton just after the Second World War, to develop a numerical weather prediction model based on a discretised version of the Navier-Stokes equations. It was one of the early applications of digital computers. Some years later, the first long-term climate models were developed based on these weather prediction models. But then the two areas of work diverged. People doing climate modelling needed to represent lots of physical processes: the oceans, the cryosphere, the biosphere etc, whereas weather prediction tended to focus on getting better and better discretised representations of the Navier-Stokes equations.

One rationale for this separation was that weather forecasting is an initial value problem whereas climate is a "forced" problem (e.g. how does climate change with a specified increase in carbon dioxide?). Hence, for example, climate people didn’t need to agonise over getting ultra accurate estimates of the initial conditions for their climate forecasts.

But the two communities are converging again. We realise there are lots of synergies between short term weather prediction and climate prediction. Let me give you one very simple example. Whether anthropogenic climate change is going to be catastrophic to society, or is something we will be able to adapt to without too many major problems, we need to understand, as mentioned above, how clouds interact with increasing levels of carbon dioxide. Clouds cannot be represented explicitly in climate models because they occur on scales that can’t be resolved due to computational constraints. So they have to be represented by simplified "parametrisations". We can test these parametrisations in weather forecast models. To put it crudely (to be honest too crudely) if the cloud parametrisations (and corresponding representations of water vapour) are systematically wrong, then the forecasts of tomorrow’s daily maximum temperature will also be systematically wrong.

To give another example, I myself for a number of years have been developing stochastic methods to represent truncation uncertainty in weather prediction models. I am now trying to apply these methods in climate prediction. The ability to test the skill of these stochastic schemes in weather prediction mode is crucial to having confidence in them in climate prediction mode. There are lots of other examples of where a synergy between the two areas is important.

JB: When we met recently, you mentioned that there are currently no high-end supercomputers dedicated to climate issues.  That seems a bit odd.  What sort of resources are there?  And how computationally intensive are the simulations people are doing now?

TP: By "high end" I mean very high end: that is, machines in the petaflop range of performance. If one takes the view that climate change is one of the gravest threats to society, then throwing all the resources that science and technology allows, to try to quantify exactly how grave this threat really is, seems quite sensible to me. On top of that, if we are to spend billions (dollars, pounds, euros etc.) on new technology to adapt to climate change, we had better make sure we are spending the money wisely — no point building new reservoirs if climate change will make your region wetter. So the predictions that it will get drier in such a such a place better be right. Finally, if we are to ever take these geoengineering proposals seriously we’d better be sure we understand the regional consequences. We don’t want to end up shutting off the monsoons! Reliable climate predictions really are essential.

I would say that there is no more computationally complex problem in science than climate prediction. There are two key modes of instability in the atmosphere, the convective instabilities (thunderstorms) with scales of kilometers and what are called baroclinic instabilities (midlatitude weather systems) with scales of thousands of kilometers. Simulating these two instabilities, and their mutual global interactions, is beyond the capability of current global climate models because of computational constraints. On top of this, climate models try to represent not only the physics of climate (including the oceans and the cryosphere), but the chemistry and biology too. That introduces considerable computational complexity in addition to the complexity caused by the multi-scale nature of climate.

By and large individual countries don’t have the financial resources (or at least they claim they don’t!) to fund such high end machines dedicated to climate. And the current economic crisis is not helping! On top of which, for reasons discussed above in relation to the "real" butterfly effect, I can’t go to government and say: "Give me a 100 petaflop machine and I will absolutely definitely be able to reduce uncertainty in forecasts of climate change by a factor of 10". In my view, the way forward may be to think about internationally funded supercomputing. So, just as we have internationally funded infrastructure in particle physics and astronomy, so too we could in climate prediction. Why not?

Actually, very recently the NSF in the US gave a consortium of climate scientists from the US, Europe and Japan, a few months of dedicated time on a top-end Cray XT4 computer called Athena. Athena wasn’t quite in the petaflop range, but not too far off, and using this dedicated time, we produced some fantastic results, otherwise unachievable, showing what the international community could achieve, given the computational resources. Results from the Athena project are currently being written up — they demonstrate what can be done where there is a will from the funding agencies.

JB: In a Guardian article on human-caused climate change you were quoted as saying "There might be a 50% risk of widespread problems or possibly only 1%.  Frankly, I would have said a risk of 1% was sufficient for us to take the problem seriously enough to start thinking about reducing emissions."

It’s hard to argue with that, but starting to think about reducing emissions is vastly less costly than actually reducing them.  What would you say to someone who replied, "If the risk is possibly just 1%, it’s premature to take action — we need more research first"?

TP: The implication of your question is that a 1% risk is just too small to worry about or do anything about. But suppose the next time you checked in to fly to Europe, and they said at the desk that there was a 1% chance that volcanic ash would cause the aircraft engines to fail mid flight, leading the plane to crash, killing all on board. Would you fly? I doubt it!

My real point is that in assessing whether emissions cuts are too expensive, given the uncertainty in climate predictions, we need to assess how much we value things like the Amazon rainforest, or preventing the destruction of places like Bangladesh or the African Sahel. If we estimate the damage caused by dangerous climate change — let’s say associated with a 4 °C or greater global warming — to be at least 100 times the cost of taking mitigating action, then it is worth taking this action even if the probability of dangerous climate change was just 1%. But of course, according to the latest predictions, the probability of realizing such dangerous climate changes is much nearer 50%. So in reality, it is worth cutting emissions if the value you place on current climate is comparable to or greater than the cost of cutting emissions.
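
Spelled out as a toy expected-value calculation (my own numbers, simply restating the ratio in the previous paragraph):

```python
p_dangerous = 0.01                    # low-end probability of dangerous climate change
mitigation_cost = 1.0                 # cost of cutting emissions, arbitrary units
damage_if_dangerous = 100.0           # assumed damage: 100 times the mitigation cost

expected_damage_avoided = p_dangerous * damage_if_dangerous
print(expected_damage_avoided >= mitigation_cost)   # True: worth acting even at 1%
```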

Summarising, there are two key points here. Firstly, rational decisions can be made in the light of uncertain scientific input. Secondly, whilst we do certainly need more research, that should not itself be used as a reason for inaction.

Thanks, John, for allowing me the opportunity to express some views about climate physics on your web site.

JB: Thank you!


The most important questions of life are, for the most part, really only problems of probability. – Pierre Simon, Marquis de Laplace


This Week’s Finds (Week 305)

5 November, 2010

Nathan Urban has been telling us about a paper where he estimated the probability that global warming will shut down a major current in the Atlantic Ocean:

• Nathan M. Urban and Klaus Keller, Probabilistic hindcasts and projections of the coupled climate, carbon cycle and Atlantic meridional overturning circulation system: a Bayesian fusion of century-scale observations with a simple model, Tellus A, July 16, 2010.

We left off last time with a cliff-hanger: I didn’t let him tell us what the probability is! Since you must have been clutching your chair ever since, you’ll be relieved to hear that the answer is coming now, in the final episode of this interview.

But it’s also very interesting how he and Klaus Keller got their answer. As you’ll see, there’s some beautiful math involved. So let’s get started…

JB: Last time you told us roughly how your climate model works. This time I’d like to ask you about the rest of your paper, leading up to your estimate of the probability that the Atlantic Meridional Overturning Current (or "AMOC") will collapse. But before we get into that, I’d like to ask some very general questions.

For starters, why are scientists worried that the AMOC might collapse?

Last time I mentioned the Younger Dryas event, a time when Europe became drastically colder for about 1300 years, starting around 10,800 BC. Lots of scientists think this event was caused by a collapse of the AMOC. And lots of them believe it was caused by huge amounts of fresh water pouring into the north Atlantic from an enormous glacial lake. But nothing quite like that is happening now! So if the AMOC collapses in the next few centuries, the cause would have to be a bit different.

NU: In order for the AMOC to collapse, the overturning circulation has to weaken. The overturning is driven by the sinking of cold and salty, and therefore dense, water in the north Atlantic. Anything that affects the density structure of the ocean can alter the overturning.

As you say, during the Younger Dryas, it is thought that a lot of fresh water suddenly poured into the Atlantic from the draining of a glacial lake. This lessened the density of the surface waters and reduced the rate at which they sank, shutting down the overturning.

Since there aren’t any large glacial lakes left that could abruptly drain into the ocean, the AMOC won’t shut down in the same way it previously did. But it’s still possible that climate change could cause it to shut down. The surface waters of the north Atlantic can still freshen (and become less dense), either due to the addition of fresh water from melting polar ice and snow, or due to increased precipitation at northern latitudes. In addition, they can simply become warmer, which also makes them less dense, reducing their sinking rate and weakening the overturning.

In combination, these three factors (warming, increased precipitation, meltwater) can theoretically shut down the AMOC if they are strong enough. This will probably not be as abrupt or extreme an event as the Younger Dryas, but it can still persistently alter the regional climate.
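
To make the density argument concrete, here is a minimal sketch using a linearized equation of state. The coefficients are rough textbook values I'm assuming, not numbers from the Zickfeld et al. box model or any other particular model.

```python
# Linearized equation of state for seawater (rough, assumed coefficients):
rho0 = 1027.0       # kg/m^3, reference surface density
alpha = 2.0e-4      # 1/degC, thermal expansion coefficient
beta = 8.0e-4       # 1/(g/kg), haline contraction coefficient

def surface_density(T, S, T0=5.0, S0=35.0):
    """Density of North Atlantic surface water near a reference state."""
    return rho0 * (1.0 - alpha * (T - T0) + beta * (S - S0))

print(surface_density(5.0, 35.0))    # reference state
print(surface_density(8.0, 35.0))    # 3 degC warmer: lighter water
print(surface_density(5.0, 34.5))    # 0.5 g/kg fresher: lighter water
# In Stommel-type box models the overturning strength scales roughly with the
# density contrast between the northern and tropical boxes, so lightening the
# northern surface water weakens the circulation.
```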

JB: I’m trying to keep our readers in suspense for a bit longer, but I don’t think it’s giving away too much to say that when you run your model, sometimes the AMOC shuts down, or at least slows down. Can you say anything about how this tends to happen, when it does? In your model, that is. Can you tell if it’s mainly warming, or increased precipitation, or meltwater?

NU: The short answer is "mainly warming, probably". The long answer:

I haven’t done experiments with the box model myself to determine this, but I can quote from the Zickfeld et al. paper where this model was published. It says, for their baseline collapse experiment,

In the box model the initial weakening of the overturning circulation is mainly due to thermal forcing [...] This effect is amplified by a negative feedback on salinity, since a weaker circulation implies reduced salt advection towards the northern latitudes.

Even if they turn off all the freshwater input, they find substantial weakening of the AMOC from warming alone.

Freshwater could potentially become the dominant effect on the AMOC if more freshwater is added than in the paper’s baseline experiment. The paper did report computer experiments with different freshwater inputs, but upon skimming it, I can’t immediately tell whether the thermal effect loses its dominance.

These experiments have also been performed using more complex climate models. This paper reports that in all the models they studied, the AMOC weakening is caused more by changes in surface heat flux than by changes in surface water flux:

• J. M. Gregory et al., A model intercomparison of changes in the Atlantic thermohaline circulation in response to increasing atmospheric CO2 concentration, Geophysical Research Letters 32 (2005), L12703.

However, that paper studied "best-estimate" freshwater fluxes, not the fluxes on the high end of what’s possible, so I don’t know whether thermal effects would still dominate if the freshwater input ends up being large. There are papers that suggest freshwater input from Greenland, at least, won’t be a dominant factor any time soon:

• J. H. Jungclaus et al., Will Greenland melting halt the thermohaline circulation?, Geophysical Research Letters 33 (2006), L17708.

• E. Driesschaert et al., Modeling the influence of Greenland ice sheet melting on the Atlantic meridional overturning circulation during the next millennia, Geophysical Research Letters 34 (2007), L10707.

I’m not sure what the situation is for precipitation, but I don’t think that would be much larger than the meltwater flux. In summary, it’s probably the thermal effects that dominate, both in complex and simpler models.

Note that in our version of the box model, the precipitation and meltwater fluxes are combined into one number, the "North Atlantic hydrological sensitivity", so we can’t distinguish between those sources of water. This number is treated as uncertain in our analysis, lying within a range of possible values determined from the hydrologic changes predicted by complex models. The Zickfeld et al. paper experimented with separating them into the two individual contributions, but my version of the model doesn’t do that.

JB: Okay. Now back to what you and Klaus Keller actually did in your paper. You have a climate model with a bunch of adjustable knobs, or parameters. Some of these parameters you take as "known" from previous research. Others are more uncertain, and that’s where the Bayesian reasoning comes in. Very roughly, you use some data to guess the probability that the right settings of these knobs lie within any given range.

How many parameters do you treat as uncertain?

NU: 18 parameters in total. 7 model parameters that control dynamics, 4 initial conditions, and 7 parameters describing error statistics.

JB: What are a few of these parameters? Maybe you can tell us about some of the most important ones — or ones that are easy to understand.

NU: I’ve mentioned these briefly in "week304" in the model description. The AMOC-related parameter is the hydrologic sensitivity I described above, controlling the flux of fresh water into the North Atlantic.

There are three climate related parameters:

• the climate sensitivity (the equilibrium warming expected in response to doubled CO2),

• the ocean heat vertical diffusivity (controlling the rate at which oceans absorb heat from the atmosphere), and

• "aerosol scaling", a factor that multiplies the strength of the aerosol-induced cooling effect, mostly due to uncertainties in aerosol-cloud interactions.

I discussed these in "week302" in the part about total feedback estimates.

There are also three carbon cycle related parameters:

• the heterotrophic respiration sensitivity (describing how quickly dead plants decay when it gets warmer),

• CO2 fertilization (how much faster plants grow in CO2-elevated conditions), and

• the ocean carbon vertical diffusivity (the rate at which the oceans absorb CO2 from the atmosphere).

The initial conditions describe what the global temperature, CO2 level, etc. were at the start of my model simulations, in 1850. The statistical parameters describe the variance and autocorrelation of the residual error between the observations and the model, due to measurement error, natural variability, and model error.

JB: Could you say a bit about the data you use to estimate these uncertain parameters? I see you use a number of data sets.

NU: We use global mean surface temperature and ocean heat content to constrain the three climate parameters. We use atmospheric CO2 concentration and some ocean flux measurements to constrain the carbon parameters. We use measurements of the AMOC strength to constrain the AMOC parameter. These are all time series data, mostly global averages — except the AMOC strength, which is an Atlantic-specific quantity defined at a particular latitude.

The temperature data are taken by surface weather stations and are for the years 1850-2009. The ocean heat data are taken by shipboard sampling, 1953-1996. The atmospheric CO2 concentrations are measured from the Mauna Loa volcano in Hawaii, 1959-2009. There are also some ice core measurements of trapped CO2 at Law Dome, Antarctica, dated to 1854-1953. The air-sea CO2 fluxes, for the 1980s and 1990s, are derived from measurements of dissolved inorganic carbon in the ocean, combined with measurements of manmade chlorofluorocarbon to date the water masses in which the carbon resides. (The dates tell you when the carbon entered the ocean.)

The AMOC strength is reconstructed from station measurements of poleward water circulation over an east-west section of the Atlantic Ocean, near 25 °N latitude. Pairs of stations measure the northward velocity of water, inferred from the ocean bottom pressure differences between northward and southward station pairs. The velocities across the Atlantic are combined with vertical density profiles to determine an overall rate of poleward water mass transport. We use seven AMOC strength estimates measured sparsely between the years 1957 and 2004.

JB: So then you start the Bayesian procedure. You take your model, start it off with your 18 parameters chosen somehow or other, run it from 1850 to now, and see how well it matches all this data you just described. Then you tweak the parameters a bit — last time we called that "turning the knobs" — and run the model again. And then you do this again and again, lots of times. The goal is to calculate the probability that the right settings of these knobs lie within any given range.

Is that about right?

NU: Yes, that’s right.

JB: About how many times did you actually run the model? Is this the sort of thing you can do on your laptop overnight, or is it a mammoth task?

NU: I ran the model a million times. This took about two days on a single CPU. Some of my colleagues later ported the model from Matlab to Fortran, and now I can do a million runs in half an hour on my laptop.

JB: Cool! So if I understand correctly, you generated a million lists of 18 numbers: those uncertain parameters you just mentioned.

Or in other words: you created a cloud of points: a million points in an 18-dimensional space. Each point is a choice of those 18 parameters. And the density of this cloud near any point should be proportional to the probability that the parameters have those values.

That’s the goal, anyway: getting this cloud to approximate the right probability density on your 18-dimensional space. To get this to happen, you used the Markov chain Monte Carlo procedure we discussed last time.

Could you say in a bit more detail how you did this, exactly?

NU: There are two steps. One is to write down a formula for the probability of the parameters (the "Bayesian posterior distribution"). The second is to draw random samples from that probability distribution using Markov chain Monte Carlo (MCMC).

Call the parameter vector θ and the data vector y. The Bayesian posterior distribution p(θ|y) is a function of θ which says how probable θ is, given the data y that you’ve observed. The little bar (|) indicates conditional probability: p(θ|y) is the probability of θ, assuming that you know y happened.

The posterior factorizes into two parts, the likelihood and the prior. The prior, p(θ), says how probable you think a particular 18-dimensional vector of parameters is, before you’ve seen the data you’re using. It encodes your "prior knowledge" about the problem, unconditional on the data you’re using.

The likelihood, p(y|θ), says how likely it is for the observed data to arise from a model run using some particular vector of parameters. It describes your data generating process: assuming you know what the parameters are, how likely are you to see data that looks like what you actually measured? (The posterior is the reverse of this: how probable are the parameters, assuming the data you’ve observed?)

Bayes’s theorem simply says that the posterior is proportional to the product of these two pieces:

p(θ|y) ∝ p(y|θ) × p(θ)

If I know the two pieces, I multiply them together and use MCMC to sample from that probability distribution.

Where do the pieces come from? For the prior, we assumed bounded uniform distributions on all but one parameter. Such priors express the belief that each parameter lies within some range we deemed reasonable, but we are agnostic about whether one value within that range is more probable than any other. The exception is the climate sensitivity parameter. We have prior evidence from computer models and paleoclimate data that the climate sensitivity is most likely around 2 or 3 °C, albeit with significant uncertainties. We encoded this belief using a "diffuse" Cauchy distribution peaked in this range, but allowing substantial probability to be outside it, so as to not prematurely exclude too much of the parameter range based on possibly overconfident prior beliefs. We assume the priors on all the parameters are independent of each other, so the prior for all of them is the product of the prior for each of them.
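
As a rough sketch of what such a prior looks like in code: the parameter names, bounds, and Cauchy location and scale below are placeholders I've made up, not the values used in the paper.

```python
import numpy as np
from scipy import stats

# Placeholder bounds for two of the uniform-prior parameters; the real analysis
# has many more, with ranges chosen from the literature.
bounds = {"ocean_heat_diffusivity": (0.1, 5.0),
          "aerosol_scaling": (0.0, 3.0)}
# "Diffuse" heavy-tailed prior for climate sensitivity, peaked near 3 degC.
sensitivity_prior = stats.cauchy(loc=3.0, scale=2.0)

def log_prior(theta):
    logp = 0.0
    for name, (lo, hi) in bounds.items():
        if not lo <= theta[name] <= hi:
            return -np.inf                  # outside the assumed range: zero probability
        logp += -np.log(hi - lo)            # bounded uniform density
    logp += sensitivity_prior.logpdf(theta["climate_sensitivity"])
    return logp                             # priors independent, so log-densities add
```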

For the likelihood, we assumed a normal (Gaussian) distribution for the residual error (the scatter of the data about the model prediction). The simplest such distribution is the independent and identically distributed ("iid") normal distribution, which says that all the data points have the same error and the errors at each data point are independent of each other. Neither of these assumptions is true. The errors are not identical, since they get bigger farther in the past, when we measured data with less precision than we do today. And they’re not independent, because if one year is warmer than the model predicts, the next year is likely to be warmer than the model predicts as well. There are various possible reasons for this: chaotic variability, time lags in the system due to finite heat capacity, and so on.

In this analysis, we kept the identical-error assumption for simplicity, even though it’s not correct. I think this is justifiable, because the strongest constraints on the parameters come from the most recent data, when the largest climate and carbon cycle changes have occurred. That is, the early data are already relatively uninformative, so if their errors get bigger, it doesn’t affect the answer much.

We rejected the independent-error assumption, since there is very strong autocorrelation (serial dependence) in the data, and ignoring autocorrelation is known to lead to overconfidence. When the errors are correlated, it’s harder to distinguish between a short-term random fluctuation and a true trend, so you should be more uncertain about your conclusions. To deal with this, we assumed that the errors obey a correlated autoregressive "red noise" process instead of an uncorrelated "white noise" process. In the likelihood, we converted the red-noise errors to white noise via a "whitening" process, assuming we know how much correlation is present. (We’re allowed to do that in the likelihood, because it gives the probability of the data assuming we know what all the parameters are, and the autocorrelation is one of the parameters.) The equations are given in the paper.
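
Here is a minimal sketch of such a "whitened" log-likelihood for AR(1) red-noise residuals. The paper's actual treatment (for instance of the first data point and of the different data sets) is more careful than this.

```python
import numpy as np

def log_likelihood(y_obs, y_model, sigma, rho):
    """Gaussian log-likelihood after whitening AR(1) (red noise) residuals."""
    r = np.asarray(y_obs, dtype=float) - np.asarray(y_model, dtype=float)
    w = r[1:] - rho * r[:-1]            # whitening step: remove the autocorrelated part
    n = len(w)
    return -0.5 * n * np.log(2.0 * np.pi * sigma**2) - 0.5 * np.sum(w**2) / sigma**2
# sigma and rho are themselves uncertain, so they sit among the 18 parameters
# and get estimated along with everything else.
```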

Finally, this gives us the formula for our posterior distribution.

JB: Great! There’s a lot of technical material here, so I have many questions, but let’s go through the whole story first, and come back to those.

NU: Okay. Next comes step two, which is to draw random samples from the posterior probability distribution via MCMC.

To do this, we use the famous Metropolis algorithm, which was invented by a physicist of that name, along with others, to do computations in statistical physics. It’s a very simple algorithm which takes a "random walk" through parameter space.

You start out with some guess for the parameters. You randomly perturb your guess to a nearby point in parameter space, which you are going to propose to move to. If the new point is more probable than the point you were at (according to the Bayesian posterior distribution), then accept it as a new random sample. If the proposed point is less probable than the point you’re at, then you randomly accept the new point with a certain probability. Otherwise you reject the move, staying where you are, treating the old point as a duplicate random sample.

The acceptance probability is equal to the ratio of the posterior distribution at the new point to the posterior distribution at the old point. If the point you’re proposing to move to is, say, 5 times less probable than the point you are at now, then there’s a 20% chance you should move there, and an 80% chance that you should stay where you are.

If you iterate this method of proposing new "jumps" through parameter space, followed by the Metropolis accept/reject procedure, you can prove that you will eventually end up with a long list of (correlated) random samples from the Bayesian posterior distribution.
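For readers who would like to see the algorithm spelled out, here is a bare-bones random-walk Metropolis sampler in Python (a generic sketch, not the code used in the paper). All it needs is a function that returns the unnormalized log posterior:

    import numpy as np

    def metropolis(log_post, theta0, step_sizes, n_samples, rng=None):
        """Random-walk Metropolis sampler.
        log_post   : function returning the (unnormalized) log posterior density
        theta0     : starting parameter vector
        step_sizes : scale of the Gaussian proposal for each parameter
        Returns an array of (correlated) samples from the posterior."""
        rng = np.random.default_rng() if rng is None else rng
        theta = np.asarray(theta0, dtype=float)
        lp = log_post(theta)
        samples = np.empty((n_samples, theta.size))
        for i in range(n_samples):
            proposal = theta + step_sizes * rng.standard_normal(theta.size)
            lp_new = log_post(proposal)
            # Accept with probability min(1, posterior ratio); otherwise keep the old point
            if np.log(rng.random()) < lp_new - lp:
                theta, lp = proposal, lp_new
            samples[i] = theta          # on rejection, the old point is duplicated
        return samples

    # Toy usage: sample a 2-dimensional Gaussian "posterior"
    draws = metropolis(lambda t: -0.5 * np.sum(t**2), np.zeros(2), 0.5 * np.ones(2), 50_000)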

JB: Okay. Now let me ask a few questions, just to help all our readers get up to speed on some jargon.

Lots of people have heard of a "normal distribution" or "Gaussian", because it’s become sort of the default choice for probability distributions. It looks like a bell curve:

When people don’t know the probability distribution of something — like the tail lengths of newts or the IQs of politicians — they often assume it’s a Gaussian.

But I bet fewer of our readers have heard of a "Cauchy distribution". What’s the point of that? Why did you choose that for your prior probability distribution of the climate sensitivity?

NU: There is a long-running debate about the "upper tail" of the climate sensitivity distribution. High climate sensitivities correspond to large amounts of warming. As you can imagine, policy decisions depend a lot on how likely we think these extreme outcomes could be, i.e., how quickly the "upper tail" of the probability distribution drops to zero.

A Gaussian distribution has tails that drop off exponentially quickly, so very high sensitivities will never get any significant weight. If we used it for our prior, then we’d almost automatically get a "thin tailed" posterior, no matter what the data say. We didn’t want to put that in by assumption and automatically conclude that high sensitivities should get no weight, regardless of what the data say. So we used a weaker assumption, which is a "heavy tailed" prior distribution. With this prior, the probability of large amounts of warming drops off more slowly, as a power law, instead of exponentially fast. If the data strongly rule out high warming, we can get a thin tailed posterior, but if they don’t, it will be heavy tailed. The Cauchy distribution, a limiting case of the "Student t" distribution that students of statistics may have heard of, is one of the most conservative choices for a heavy-tailed prior. Probability drops off so slowly at its tails that its variance is infinite.

JB: The issue of "fat tails" is also important in the stock market, where big crashes happen more frequently than you might guess with a Gaussian distribution. After the recent economic crisis I saw a lot of financiers walking around with their tails between their legs, wishing their tails had been fatter.

I’d also like to ask about "white noise" versus "red noise". "White noise" is a mathematical description of a situation where some quantity fluctuates randomly with time, in such a way that its value at any time is completely uncorrelated with its value at any other time. If you graph an example of white noise, it looks really spiky:



If you play it as a sound, it sounds like hissy static — quite unpleasant. If you could play it in the form of light, it would look white, hence the name.

"Red noise" is less wild. Its value at any time is still random, but it’s correlated to the values at earlier or later times, in a specific way. So it looks less spiky:



and it sounds less high-pitched, more like a steady rainfall. Since it’s stronger at low frequencies, it would look more red if you could play it in the form of light — hence the name "red noise".

If I understand correctly, you’re assuming that some aspects of the climate are noisy, but in a red noise kind of way, when you’re computing p(y|θ): the likelihood that your data takes on the value y, given your climate model with some specific choice of parameters θ.

Is that right? You’re assuming this about all your data: the temperature data from weather stations, the ocean heat data from shipboard samples, the atmospheric CO2 concentrations at Mauna Loa volcano in Hawaii, the ice core measurements of trapped CO2, the air-sea CO2 fluxes, and also the AMOC strength? Red, red, red — all red noise?

NU: I think the red noise you’re talking about refers to a specific type of autocorrelated noise ("Brownian motion"), with a power spectrum that is inversely proportional to the square of frequency. I’m using "red noise" more generically to speak of any autocorrelated process that is stronger at low frequencies. Specifically, the process we use is a first-order autoregressive, or "AR(1)", process. It has a more complicated spectrum than Brownian motion.
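For a picture of the difference, here is a quick Python illustration (not from the paper) that generates white noise, AR(1) "red" noise, and Brownian motion, and prints their lag-1 autocorrelations, which come out near 0, near the chosen AR(1) coefficient, and near 1 respectively:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1000
    white = rng.standard_normal(n)             # uncorrelated white noise

    rho = 0.9                                  # AR(1) autocorrelation coefficient
    ar1 = np.zeros(n)
    for t in range(1, n):
        ar1[t] = rho * ar1[t - 1] + white[t]   # AR(1): each value "remembers" the last one

    brownian = np.cumsum(white)                # Brownian motion: a running sum (1/f^2 spectrum)

    for name, x in [("white", white), ("AR(1)", ar1), ("Brownian", brownian)]:
        print(name, np.corrcoef(x[:-1], x[1:])[0, 1])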

JB: Right, I was talking about "red noise" of a specific mathematically nice sort, but that’s probably less convenient for you. AR(1) sounds easier for computers to generate.

NU: It’s not only easier for computers, but closer to the spectrum we see in our analysis.

Note that when I talk about error I mean "residual error", which is the difference between the observations and the model prediction. If the residual error is correlated in time, that doesn’t necessarily reflect true red noise in the climate system. It could also represent correlated errors in measurement over time, or systematic errors in the model. I am not attempting to distinguish between all these sources of error. I’m just lumping them all together into one total error process, and assuming it has a simple statistical form.

We assume the residual errors in the annual surface temperature, ocean heat, and instrumental CO2 time series are AR(1). The ice core CO2, air-sea CO2 flux, and AMOC strength data are sparse, and we can’t really hope to estimate the correlation between them, so we assume their residual errors are uncorrelated.

Speaking of correlation, I’ve been talking about "autocorrelation", which is correlation within one data set between one time and another. It’s also possible for the errors in different data sets to be correlated with each other ("cross correlation"). We assumed there is no cross correlation (and residual analysis suggests only weak correlation between data sets).

JB: I have a few more technical questions, but I bet most of our readers are eager to know: so, what next?

You use all these nifty mathematical methods to work out p(θ|y), the probability that your 18 parameters have any specific value given your data. And now I guess you want to figure out the probability that the Atlantic Meridional Overturning Current, or AMOC, will collapse by some date or other.

How do you do this? I guess most people want to know the answer more than the method, but they’ll just have to wait a few more minutes.

NU: That’s easy. After MCMC, we have a million runs of the model, sampled in proportion to how well the model fits historic data. There will be lots of runs that agree well with the data, and a few that agree less well. All we do now is extend each of those runs into the future, using an assumed scenario for what CO2 emissions and other radiative forcings will do in the future. To find out the probability that the AMOC will collapse by some date, conditional on the assumptions we’ve made, we just count what fraction of the runs have an AMOC strength of zero in whatever year we care about.
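Schematically, that last counting step looks like this (a Python sketch with made-up placeholder numbers; the array of projected AMOC strengths is hypothetical, standing in for the real forward runs):

    import numpy as np

    rng = np.random.default_rng(1)
    years = np.arange(2000, 2301)
    # Placeholder for the real projections: one row per posterior sample run forward
    # under the emissions scenario, one column per year (filled with fake numbers here).
    amoc_strength = rng.normal(loc=15.0, scale=8.0, size=(3000, len(years)))

    collapsed_by_2300 = amoc_strength[:, years == 2300].ravel() <= 0.0
    print("P(collapse by 2300) =", collapsed_by_2300.mean())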

JB: Okay, that’s simple enough. What scenario, or scenarios, did you consider?

NU: We considered a worst-case "business as usual" scenario in which we continue to burn fossil fuels at an accelerating rate until we start to run out of them, and eventually burn the maximum amount of fossil fuels we think there might be remaining (about 5000 gigatons worth of carbon, compared to the roughly 500 gigatons we’ve emitted so far). This assumes we get desperate for cheap energy and extract all the hard-to-get fossil resources in oil shales and tar sands, all the remaining coal, etc. It doesn’t necessarily preclude the use of non-fossil energy; it just assumes that our appetite for energy grows so rapidly that there’s no incentive to slow down fossil fuel extraction. We used a simple economic model to estimate how fast we might do this, if the world economy continues to grow at a similar rate to the last few decades.

JB: And now for the big question: what did you find? How likely is it that the AMOC will collapse, according to your model? Of course it depends how far into the future you look.

NU: We find a negligible probability that the AMOC will collapse this century. The odds start to increase around 2150, rising to about a 10% chance by 2200, and a 35% chance by 2300, the last year considered in our scenario.

JB: I guess one can take this as good news or really scary news, depending on how much you care about folks who are alive in 2300. But I have some more questions. First, what’s a "negligible probability"?

NU: In this case, it’s less than 1 in 3000. For computational reasons, we only ran 3000 of the million samples forward into the future. There were no samples in this smaller selection that had the AMOC collapsed in 2100. The probability rises to 1 in 3000 in the year 2130 (the first time I see a collapse in this smaller selection), and 1% in 2152. You should take these numbers with a grain of salt. It’s these rare "tail-area events" that are most sensitive to modeling assumptions.

JB: Okay. And second, don’t the extrapolations become more unreliable as you keep marching further into the future? You need to model not only climate physics but also the world economy. In this calculation, how many gigatons of carbon dioxide per year are you assuming will be emitted in 2300? I’m just curious. In 1998 it was about 27.6 gigatons. By 2008, it was about 30.4.

NU: Yes, the uncertainty grows with time (and this is reflected in our projections). And in considering a fixed emissions scenario, we’ve ignored the economic uncertainty, which, so far out into the future, is even larger than the climate uncertainty. Here we’re concentrating on just the climate uncertainty, and are hoping to get an idea of bounds, so we used something close to a worst-case economic scenario. In this scenario carbon emissions peak around 2150 at about 23 gigatons carbon per year (84 gigatons CO2). By 2300 they’ve tapered off to about 4 GtC (15 GtCO2).

Actual future emissions may be less than this, if we act to reduce them, or there are fewer economically extractable fossil resources than we assume, or the economy takes a prolonged downturn, etc. Actually, it’s not completely an economic worst case; it’s possible that the world economy could grow even faster than we assume. And it’s not the worst case scenario from a climate perspective, either. For example, we don’t model potential carbon emissions from permafrost or methane clathrates. It’s also possible that climate sensitivity could be higher than what we find in our analysis.

JB: Why even bother projecting so far out into the future, if it’s so uncertain?

NU: The main reason is because it takes a while for the AMOC to weaken, so if we’re interested in what it would take to make it collapse, we have to run the projections out a few centuries. But another motivation for writing this paper is policy related, having to do with the concept of "climate commitment" or "triggering". Even if it takes a few centuries for the AMOC to collapse, it may take less time than that to reach a "point of no return", where a future collapse has already been unavoidably "triggered". Again, to investigate this question, we have to run the projections out far enough to get the AMOC to collapse.

We define "the point of no return" to be a point in time such that, if CO2 emissions were immediately reduced to zero at that point and kept there forever, the AMOC would still collapse by the year 2300 (an arbitrary date chosen for illustrative purposes). This is possible because even if we stop emitting new CO2, existing CO2 concentrations, and therefore temperatures, will remain high for a long time (see "week303").

In reality, humans wouldn’t be able to reduce emissions instantly to zero, so the actual "point of no return" would likely come earlier than what we find in our study: by that point, we couldn’t economically reduce emissions fast enough to avoid triggering an AMOC collapse. (In this study we ignore the possibility of negative carbon emissions, that is, capturing CO2 directly from the atmosphere and sequestering it for a long period of time. We’re also ignoring the possibility of climate geoengineering, which is global cooling designed to cancel out greenhouse warming.)

So what do we find? Although we calculate a negligible probability that the AMOC will collapse by the end of this century, the probability that, in this century, we will commit later generations to a collapse (by 2300) is almost 5%. The probabilities of "triggering" rise rapidly, to almost 20% by 2150 and about 33% by 2200, even though the probability of experiencing a collapse by those dates is about 1% and 10%, respectively. You can see it in this figure from our paper:



The take-home message is that while most climate projections are currently run out to 2100, we shouldn’t fixate only on what might happen to people this century. We should consider what climate changes our choices in this century, and beyond, are committing future generations to experiencing.

JB: That’s a good point!

I’d like to thank you right now for a wonderful interview, that really taught me — and I hope our readers — a huge amount about climate change and climate modelling. I think we’ve basically reached the end here, but as the lights dim and the audience files out, I’d like to ask just a few more technical questions.

One of them was raised by David Tweed. He pointed out that while you’re "training" your model on climate data from the last 150 years or so, you’re using it to predict the future in a world that will be different in various ways: a lot more CO2 in the atmosphere, hotter, and so on. So, you’re extrapolating rather than interpolating, and that’s a lot harder. It seems especially hard if the collapse of the AMOC is a kind of "tipping point" — if it suddenly snaps off at some point, instead of linearly decreasing as some parameter changes.

This raises the question: why should we trust your model, or any model of this sort, to make such extrapolations correctly? In the discussion after that comment, I think you said that ultimately it boils down to

1) whether you think you have the physics right,

and

2) whether you think the parameters change over time.

That makes sense. So my question is: what are some of the best ways people could build on the work you’ve done, and make more reliable predictions about the AMOC? There’s a lot at stake here!

NU: Our paper is certainly an early step in making probabilistic AMOC projections, with room for improvement. I view the main points as (1) estimating how large the climate-related uncertainties may be within a given model, and (2) illustrating the difference between experiencing, and committing to, a climate change. It’s certainly not an end-all "prediction" of what will happen 300 years from now, taking into account all possible model limitations, economic uncertainties, etc.

To answer your question, the general ways to improve predictions are to improve the models, and/or improve the data constraints. I’ll discuss both.

Although I’ve argued that our simple box model reasonably reproduces the dynamics of the more complex model it was designed to approximate, that complex model itself isn’t the best model available for the AMOC. The problem with using complex climate models is that it’s computationally impossible to run them millions of times. My solution is to work with "statistical emulators", which are tools for building fast approximations to slow models. The idea is to run the complex model a few times at different points in its parameter space, and then statistically interpolate the resulting outputs to predict what the model would have output at nearby points. This works if the model output is a smooth enough function of the parameters, and there are enough carefully-chosen "training" points.
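Here is a toy version of that idea in Python (a generic radial-basis-function interpolator in one dimension, written for illustration and much simpler than the emulators used in practice): run the "slow" model at a few training points, then predict its output elsewhere by smooth interpolation.

    import numpy as np

    def slow_model(x):
        """Stand-in for an expensive climate model: one input parameter, one scalar output."""
        return np.sin(3 * x) + 0.5 * x

    def rbf(a, b, length=0.3):
        """Squared-exponential covariance between two sets of 1-d points."""
        return np.exp(-0.5 * (a[:, None] - b[None, :])**2 / length**2)

    # A few carefully chosen training runs of the slow model
    x_train = np.linspace(0.0, 2.0, 8)
    y_train = slow_model(x_train)

    # Gaussian-process-style interpolation: predict the model output at new parameter values
    x_new = np.linspace(0.0, 2.0, 200)
    K = rbf(x_train, x_train) + 1e-8 * np.eye(len(x_train))   # small jitter for numerical stability
    weights = np.linalg.solve(K, y_train)
    y_emulated = rbf(x_new, x_train) @ weights

    print("max emulator error:", np.max(np.abs(y_emulated - slow_model(x_new))))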

From an oceanographic standpoint, even current complex models are probably not wholly adequate (see the discussion at the end of "week304"). There is some debate about whether the AMOC becomes more stable as the resolution of the model increases. On the other hand, people still have trouble getting the AMOC in models, and the related climate changes, to behave as abruptly as they apparently did during the Younger Dryas. I think the range of current models is probably in the right ballpark, but there is plenty of room for improvement. Model developers continue to refine their models, and ultimately, the reliability of any projection is constrained by the quality of models available.

Another way to improve predictions is to improve the data constraints. It’s impossible to go back in time and take better historic data, although with things like ice cores, it is possible to dig up new cores to analyze. It’s also possible to improve some historic "data products". For example, the ocean heat data is subject to a lot of interpolation of sparse measurements in the deep ocean, and one could potentially improve the interpolation procedure without going back in time and taking more data. There are also various corrections being applied for known biases in the data-gathering instruments and procedures, and it’s possible those could be improved too.

Alternatively, we can simply wait. Wait for new and more precise data to become available.

But when I say "improve the data constraints", I’m mostly talking about adding more of them, that I simply didn’t include in the analysis, or looking at existing data in more detail (like spatial patterns instead of global averages). For example, the ocean heat data mostly serves to constrain the vertical mixing parameter, controlling how quickly heat penetrates into the deep ocean. But we can also look at the penetration of chemicals in the ocean (such as carbon from fossil fuels, or chlorofluorocarbons). This is also informative about how quickly water masses mix down to the ocean depths, and indirectly informative about how fast heat mixes. I can’t do that with my simple model (which doesn’t have the ocean circulation of any of these chemicals in it), but I can with more complex models.

As another example, I could constrain the climate sensitivity parameter better with paleoclimate data, or more resolved spatial data (to try to, e.g., pick up the spatial fingerprint of industrial aerosols in the temperature data), or by looking at data sets informative about particular feedbacks (such as water vapor), or at satellite radiation budget data.

There is a lot of room for reducing uncertainties by looking at more and more data sets. However, this presents its own problems. Not only is this simply harder to do, but it runs more directly into limitations in the models and data. For example, if I look at what ocean temperature data implies about a model’s vertical mixing parameter, and what ocean chemical data imply, I might find that they imply two inconsistent values for the parameter! Or that those data imply a different mixing than is implied by AMOC strength measurements. This can happen if there are flaws in the model (or in the data). We have some evidence from other work that there are circumstances in which this can happen:

• A. Schmittner, N. M. Urban, K. Keller and D. Matthews, Using tracer observations to reduce the uncertainty of ocean diapycnal mixing and climate-carbon cycle projections, Global Biogeochemical Cycles 23 (2009), GB4009.

• M. Goes, N. M. Urban, R. Tonkonojenkov, M. Haran, and K. Keller, The skill of different ocean tracers in reducing uncertainties about projections of the Atlantic meridional overturning circulation, Journal of Geophysical Research — Oceans, in press (2010).

How to deal with this, if and when it happens, is an open research challenge. To an extent it depends on expert judgment about which model features and data sets are "trustworthy". Some say that expert judgment renders conclusions subjective and unscientific, but as a scientist, I say that such judgments are always applied! You always weigh how much you trust your theories and your data when deciding what to conclude about them.

In my response I’ve so far ignored the part about parameters changing in time. I think the hydrological sensitivity (North Atlantic freshwater input as a function of temperature) can change with time, and this could be improved by using a better climate model that includes ice and precipitation dynamics. Feedbacks can fluctuate in time, but I think it’s okay to treat them as a constant for long term projections. Some of these parameters can also be spatially dependent (e.g., the respiration sensitivity in the carbon cycle). I think treating them all as constant is a decent first approximation for the sorts of generic questions we’re asking in the paper. Also, all the parameter estimation methods I’ve described only work with static parameters. For time varying parameters, you need to get into state estimation methods like Kalman or particle filters.

JB: I also have another technical question, which is about the Markov chain Monte Carlo procedure. You generate your cloud of points in 18-dimensional space by a procedure where you keep either jumping randomly to a nearby point, or staying put, according to that decision procedure you described. Eventually this cloud fills out to a good approximation of the probability distribution you want. But, how long is "eventually"? You said you generated a million points. But how do you know that’s enough?

NU: This is something of an art. Although there is an asymptotic convergence theorem, there is no general way of knowing whether you’ve reached convergence. First you check to see whether your chains "look right". Are they sweeping across the full range of parameter space where you expect significant probability? Are they able to complete many sweeps (thoroughly exploring parameter space)? Is the Metropolis test accepting a reasonable fraction of proposed moves? Do you have enough effective samples in your Markov chain? (MCMC generates correlated random samples, so there are fewer "effectively independent" samples in the chain than there are total samples.) Then you can do consistency checks: start the chains at several different locations in parameter space, and see if they all converge to similar distributions.

If the posterior distribution shows, or is expected to show, a lot of correlation between parameters, you have to be more careful to ensure convergence. You want to propose moves that carry you along the "principal components" of the distribution, so you don’t waste time trying to jump away from the high probability directions. (Roughly, if your posterior density is concentrated on some low dimensional manifold, you want to construct your way of moving around parameter space to stay near that manifold.) You also have to be careful if you see, or expect, multimodality (multiple peaks in the probability distribution). It can be hard for MCMC to move from one mode to another through a low-probability "wasteland"; it won’t be inclined to jump across it. There are more advanced algorithms you can use in such situations, if you suspect you have multimodality. Otherwise, you might discover later that you only sampled one peak, and never noticed that there were others.
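A couple of those convergence checks are easy to automate. Below is a rough Python sketch (generic diagnostics, not the ones actually used here) that estimates the number of effectively independent samples in a chain from its autocorrelations, and compares summary statistics across chains started at different points:

    import numpy as np

    def effective_sample_size(chain):
        """Crude effective sample size for one parameter's Markov chain,
        using the sum of positive autocorrelations."""
        x = chain - chain.mean()
        n = len(x)
        acf = np.correlate(x, x, mode="full")[n - 1:] / (np.arange(n, 0, -1) * x.var())
        tau = 1.0
        for r in acf[1:]:
            if r < 0:          # simple truncation rule: stop at the first negative value
                break
            tau += 2 * r
        return n / tau

    def compare_chains(chains):
        """Consistency check: chains started at different points should agree."""
        for i, c in enumerate(chains):
            print(f"chain {i}: mean={c.mean():.3f}, sd={c.std():.3f}, "
                  f"effective samples={effective_sample_size(c):.0f} of {len(c)}")

    # Example with two fake "chains" (AR(1) series standing in for real MCMC output)
    rng = np.random.default_rng(2)
    def fake_chain(start, n=5_000):
        x = np.empty(n); x[0] = start
        for t in range(1, n):
            x[t] = 0.95 * x[t - 1] + rng.standard_normal()
        return x
    compare_chains([fake_chain(-5.0), fake_chain(5.0)])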

JB: Did you do some of these things when testing out the model in your paper? Do you have any intuition for the "shape" of the probability distribution in 18-dimensional space that lies at the heart of your model? For example: do you know if it has one peak, or several?

NU: I’m pretty confident that the MCMC in our analysis is correctly sampling the shape of the probability distribution. I ran lots and lots of analyses, starting the chain in different ways, tweaking the proposal distribution (jumping rule), looking at different priors, different model structures, different data, and so on.

It’s hard to "see" what an 18-dimensional function looks like, but we have 1-dimensional and 2-dimensional projections of it in our paper:





I don’t believe that it has multiple peaks, and I don’t expect it to. Multiple peaks usually show up when the model behavior is non-monotonic as a function of the parameters. This can happen in really nonlinear systems (and with threshold systems like the AMOC), but during the historic period I’m calibrating the model to, I see no evidence of this in the model.

There are correlations between parameters, so there are certain "directions" in parameter space that the posterior distribution is oriented along. And the distribution is not Gaussian. There is evidence of skew, and nonlinear correlations between parameters. Such correlations appear when the data are insufficient to completely identify the parameters (i.e., different combinations of parameters can produce similar model output). This is discussed in more detail in another of our papers:

• Nathan M. Urban and Klaus Keller, Complementary observational constraints on climate sensitivity, Geophysical Research Letters 36 (2009), L04708.

In a Gaussian distribution, the distribution of any pair of parameters will look ellipsoidal, but our distribution has some "banana" or "boomerang" shaped pairwise correlations. This is common, for example, when the model output is a function of the product of two parameters.

JB: Okay. It’s great that we got a chance to explore some of the probability theory and statistics underlying your work. It’s exciting for me to see these ideas being used to tackle a big real-life problem. Thanks again for a great interview.


Maturity is the capacity to endure uncertainty. – John Finley


This Week’s Finds (Week 304)

15 October, 2010

About 10,800 BC, something dramatic happened.

The last glacial period seemed to be ending quite nicely, things had warmed up a lot — but then, suddenly, the temperature in Europe dropped about 7 °C! In Greenland, it dropped about twice that much. In England it got so cold that glaciers started forming! In the Netherlands, in winter, temperatures regularly fell below -20 °C. Throughout much of Europe trees retreated, replaced by alpine landscapes and tundra. The climate was affected as far away as Syria, where drought punished the ancient settlement of Abu Hureyra. But it doesn’t seem to have been a world-wide event.

This cold spell lasted for about 1300 years. And then, just as suddenly as it began, it ended! Around 9,500 BC, the temperature in Europe bounced back.

This episode is called the Younger Dryas, after a certain wildflower that enjoys cold weather, whose pollen is common in this period.

What caused the Younger Dryas? Could it happen again? An event like this could wreak havoc, so it’s important to know. Alas, as so often in science, the answer to these questions is "we’re not sure, but…."

We’re not sure, but the most popular theory is that a huge lake in Canada, formed by melting glaciers, broke its icy banks and flooded out into the Saint Lawrence River. This lake is called Lake Agassiz. At its maximum, it held more water than all lakes in the world now put together:



In a massive torrent lasting for years, the water from this lake rushed out to the Labrador Sea. By floating atop the denser salt water, this fresh water blocked a major current that flows in the Atlantic: the Atlantic Meridional Overturning Circulation, or AMOC. This current brings warm water north and helps keep northern Europe warm. So, northern Europe was plunged into a deep freeze!

That’s the theory, anyway.

Could something like this happen again? There are no glacial lakes waiting to burst their banks, but the concentration of fresh water in the northern Atlantic has been increasing, and ocean temperatures are changing too, so some scientists are concerned. The problem is, we don’t really know what it takes to shut down the Atlantic Meridional Overturning Circulation!

To make progress on this kind of question, we need a lot of insight, but we also need some mathematical models. And that’s what Nathan Urban will tell us about now. First we’ll talk in general about climate models, Bayesian reasoning, and Monte Carlo methods. We’ll even talk about the general problem of using simple models to study complex phenomena. And then he’ll walk us step by step through the particular model that he and a coauthor have used to study this question: will the AMOC run amok?

Sorry, I couldn’t resist that. It’s not so much "running amok" that the AMOC might do, it’s more like "fizzling out". But accuracy should never stand in the way of a good pun.

On with the show:

JB: Welcome back! Last time we were talking about the new work you’re starting at Princeton. You said you’re interested in the assessment of climate policy in the presence of uncertainties and "learning" – where new facts come along that revise our understanding of what’s going on. Could you say a bit about your methodology? Or, if you’re not far enough along on this work, maybe you could talk about the methodology of some other paper in this line of research.

NU: To continue the direction of discussion, I’ll respond by talking about the methodology of a few papers along the lines of what I hope to work on here at Princeton, rather than about my past papers on uncertainty quantification. They are Keller and McInerney on learning rates:

• Klaus Keller and David McInerney, The dynamics of learning about a climate threshold, Climate Dynamics 30 (2008), 321-332.

Keller and coauthors on learning and economic policy:

• Klaus Keller, Benjamin M. Bolkerb and David F. Bradford, Uncertain climate thresholds and optimal economic growth, Journal of Environmental Economics and Management 48 (2004), 723-741.

and Oppenheimer et al. on "negative" learning (what happens when science converges to the wrong answer):

• Michael Oppenheimer, Brian C. O’Neill and Mort Webster, Negative learning, Climatic Change 89 (2008), 155-172.

The general theme of this kind of work is to statistically compare a climate model to observed data in order to understand what model behavior is allowed by existing data constraints. Then, having quantified the range of possibilities, plug this uncertainty analysis into an economic-climate model (or "integrated assessment model"), and have it determine the economically "optimal" course of action.

So: start with a climate model. There is a hierarchy of such models, ranging from simple impulse-response or "box" models to complex atmosphere-ocean general circulation models. I often use the simple models, because they’re computationally efficient and it is therefore feasible to explore their full range of uncertainties. I’m moving toward more complex models, which requires fancier statistics to extract information from a limited set of time-consuming simulations.

Given a model, then apply a Monte Carlo analysis of its parameter space. Climate models cannot simulate the entire Earth from first principles. They have to make approximations, and those approximations involve free parameters whose values must be fit to data (or calculated from specialized models). For example, a simple model cannot explicitly describe all the possible feedback interactions that are present in the climate system. It might lump them all together into a single, tunable "climate sensitivity" parameter. The Monte Carlo analysis runs the model many thousands of times at different parameter settings, and then compares the model output to past data in order to see which parameter settings are plausible and which are not. I use Bayesian statistical inference, in combination with Markov chain Monte Carlo, to quantify the degree of "plausibility" (i.e., probability) of each parameter setting.

With probability weights for the model’s parameter settings, it is now possible to weight the probability of possible future outcomes predicted by the model. This describes, conditional on the model and data used, the uncertainty about the future climate.

JB: Okay. I think I roughly understand this. But you’re using jargon that may cause some readers’ eyes to glaze over. And that would be unfortunate, because this jargon is necessary to talk about some very cool ideas. So, I’d like to ask what some phrases mean, and beg you to explain them in ways that everyone can understand.

To help out — and maybe give our readers the pleasure of watching me flounder around — I’ll provide my own quick attempts at explanation. Then you can say how close I came to understanding you.

First of all, what’s an "impulse-response model"? When I think of "impulse response" I think of, say, tapping on a wineglass and listening to the ringing sound it makes, or delivering a pulse of voltage to an electrical circuit and watching what it does. And the mathematician in me knows that this kind of situation can be modelled using certain familiar kinds of math. But you might be applying that math to climate change: for example, how the atmosphere responds when you pump some carbon dioxide into it. Is that about right?

NU: Yes. (Physics readers will know "impulse response" as "Green’s functions", by the way).

The idea is that you have a complicated computer model of a physical system whose dynamics you want to represent as a simple model, for computational convenience. In my case, I’m working with a computer model of the carbon cycle which takes CO2 emissions as input and predicts how much CO2 is left in the air after natural sources and sinks operate on what’s there. It’s possible to explicitly model most of the relevant physical and biogeochemical processes, but it takes a long time for such a computer simulation to run. Too long to explore how it behaves under many different conditions, which is what I want to do.

How do you build a simple model that acts like a more complicated one? One way is to study the complex model’s "impulse response" — in this case, how it behaves in response to an instantaneous "pulse" of carbon to the atmosphere. In general, the CO2 in the atmosphere will suddenly jump up, and then gradually relax back toward its original concentration as natural sinks remove some of that carbon from the atmosphere. The curve showing how the concentration decreases over time is the "impulse response". You derive it by telling your complex computer simulation that a big pulse of carbon was added to the air, and recording what it predicts will happen to CO2 over time.

The trick in impulse response theory is to treat an arbitrary CO2 emissions trajectory as the sum of a bunch of impulses of different sizes, one right after another. So, if emissions are 1, 3, and 7 units of carbon in years 1, 2, and 3, then you can think of that as a 1-unit pulse of carbon in year one, plus a 3-unit pulse in year 2, plus a 7-unit pulse in year 3.

The crucial assumption you make at this point is that you can treat the response of the complex model to this series of impulses as the sum of its responses to each individual pulse, scaled by the size of that pulse. Therefore, just by running the model in response to a single unit pulse, you can work out what the model would predict for any emissions trajectory, by adding up its response to a bunch of individual pulses. The impulse response model makes its prediction by summing up lots of copies of the impulse response curve, with different sizes and at different times. (Technically, this is a convolution of the impulse response curve, or Green’s function, with the emissions trajectory curve.)
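In code, the impulse-response trick is just such a convolution. Here is a minimal Python sketch, with a made-up exponential-decay curve standing in for the complex model’s actual impulse response:

    import numpy as np

    # Impulse response: fraction of a 1-unit carbon pulse remaining in the air after t years.
    # A single decaying exponential is a crude stand-in for the real multi-timescale curve.
    years = np.arange(300)
    impulse_response = 0.3 + 0.7 * np.exp(-years / 50.0)

    # An arbitrary emissions trajectory (GtC per year), rising and then falling
    emissions = np.concatenate([np.linspace(1, 10, 150), np.linspace(10, 0, 150)])

    # The predicted excess atmospheric carbon is the convolution of the two curves
    excess_carbon = np.convolve(emissions, impulse_response)[:len(years)]
    print(excess_carbon[[0, 100, 299]])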

JB: Okay. Next, what’s a "box model"? I had to look that up, and after some floundering around I bumped into a Wikipedia article that mentioned "black box models" and "white box models".

A black box model is where you’ve got a system, and all you pay attention to is its input and output — in other words, what you do to it, and what it does to you, not what’s going on "inside". A white box model, or "glass box model", lets you see what’s going on inside but not directly tinker with it, except via your input.

Is this at all close? I don’t feel very confident that I’ve understood what a "box model" is.

NU: No, box models are the sorts of things you find in "systems dynamics" theory, where you have "stocks" of a substance and "flows" of it in and out. In the carbon cycle, the "boxes" (or stocks) could be "carbon stored in wood", "carbon stored in soil", "carbon stored in the surface ocean", etc. The flows are the sources and sinks of carbon. In an ocean model, boxes could be "the heat stored in the North Atlantic", "the heat stored in the deep ocean", etc., and flows of heat between them.

Box models are a way of spatially averaging over a lot of processes that are too complicated or time-consuming to treat in detail. They’re another way of producing simplified models from more complex ones, like impulse response theory, but without the linearity assumption. For example, one could replace a three dimensional circulation model of the ocean with a couple of "big boxes of water connected by pipes". Of course, you have to then verify that your simplified model is a "good enough" representation of whatever aspect of the more complex model that you’re interested in.
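Here is a tiny "stocks and flows" example in Python (purely illustrative numbers, chosen so the three boxes start in balance): three carbon reservoirs, with fossil fuel emissions added to the atmosphere and carbon flowing between the boxes at rates proportional to their contents.

    # A toy carbon-cycle box model: three "stocks" of carbon and the "flows" between them.
    # All numbers are purely illustrative; the boxes start in equilibrium with each other.
    atmosphere, land, ocean = 600.0, 2000.0, 1000.0   # gigatons of carbon in each box
    k_al, k_la = 0.02, 0.006    # air-to-land and land-to-air exchange rates (per year)
    k_ao, k_oa = 0.05, 0.03     # air-to-ocean and ocean-to-air exchange rates (per year)

    for year in range(200):
        emissions = 10.0                             # GtC/yr of fossil carbon added to the air
        to_land = k_al * atmosphere - k_la * land    # net flow from atmosphere to land
        to_ocean = k_ao * atmosphere - k_oa * ocean  # net flow from atmosphere to ocean
        atmosphere += emissions - to_land - to_ocean
        land += to_land
        ocean += to_ocean

    print(round(atmosphere), round(land), round(ocean))  # the added carbon ends up split among the boxes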

JB: Okay, sure — I know a bit about these "box models", but not that name. In fact the engineers who use "bond graphs" to depict complex physical systems made of interacting parts like to emphasize the analogy between electrical circuits and hydraulic systems with water flowing through pipes. So I think box models fit into the bond graph formalism pretty nicely. I’ll have to think about that more.

Anyway: next you mentioned taking a model and doing a "Monte Carlo analysis of its parameter space". This time you explained what you meant, but I’ll still go over it.

Any model has a bunch of adjustable parameters in it, for example the "climate sensitivity", which in a simple model just means how much warmer it gets per doubling of atmospheric carbon dioxide. We can think of these adjustable parameters as knobs we’re allowed to turn. The problem is that we don’t know the best settings of these knobs! And even worse, there are lots of allowed settings.

In a Monte Carlo analysis we randomly turn these knobs to some setting, run our model, and see how well it does — presumably by comparing its results to the "right answer" in some situation where we already know the right answer. Then we keep repeating this process. We turn the knobs again and again, and accumulate information, and try to use this to guess what the right knob settings are.

More precisely: we try to guess the probability that the correct knob settings lie within any given range! We don’t try to guess their one "true" setting, because we can’t be sure what that is, and it would be silly to pretend otherwise. So instead, we work out probabilities.

Is this roughly right?

NU: Yes, that’s right.

JB: Okay. That was the rough version of the story. But then you said something a lot more specific. You say you "use Bayesian statistical inference, in combination with Markov chain Monte Carlo, to quantify the degree of "plausibility" (or probability) of each parameter setting."

So, I’ve got a couple more questions. What’s "Markov chain Monte Carlo"? I guess it’s some specific way of turning those knobs over and over again.

NU: Yes. For physicists, it’s a "random walk" way of turning the knobs: you start out at the current knob settings, and tweak each one just a little bit away from where they currently are. In the most common Markov chain Monte Carlo (MCMC) algorithm, if the new setting takes you to a more plausible setting of the knobs, you keep that setting. If the new setting produces an outcome that is less plausible, then you might keep the new setting (with a probability equal to the ratio of the new plausibility to the old), or you might stay at the existing setting and try again with a new tweaking. The MCMC algorithm is designed so that the sequence of knob settings produced will sample randomly from the probability distribution you’re interested in.

JB: And what’s "Bayesian statistical inference"? I’m sorry, I know this subject deserves a semester-long graduate course. But like a bad science journalist, I will ask you to distill it down to a few sentences! Sometime I’ll do a whole series of This Week’s Finds about statistical inference, but not now.

NU: I can distill it to one sentence: in this context, it’s a branch of statistics which allows you to assign probabilities to different settings of model parameters, based on how well those settings cause the model to reproduce the observed data.

The more common "frequentist" approach to statistics doesn’t allow you to assign probabilities to model parameters. It has a different take on probability. As a Bayesian, you assume the observed data is known and talk about probabilities of hypotheses (here, model parameters). As a frequentist, you assume the hypothesis is known (hypothetically), and talk about probabilities of data that could result from it. They differ fundamentally in what you treat as known (data, or hypothesis) and what probabilities are applied to (hypothesis, or data).

JB: Okay, and one final question: sometimes you say "plausibility" and sometimes you say "probability". Are you trying to distinguish these, or say they’re the same?

NU: I am using "probability" as a technical term which quantifies how "plausible" a hypothesis is. Maybe I should just stick to "probability".

JB: Great. Thanks for suffering through that dissection of what you said.

I think I can summarize, in a sloppy way, as follows. You take a model with a bunch of adjustable knobs, and you use some data to guess the probability that the right settings of these knobs lie within any given range. Then, you can use this model to make predictions. But these predictions are only probabilistic.

Okay, then what?

NU: This is the basic uncertainty analysis. There are several things that one can do with it. One is to look at learning rates. You can generate "hypothetical data" that we might observe in the future, by taking a model prediction and adding some "observation noise" to it. (This presumes that the model is perfect, which is not the case, but it represents a lower bound on uncertainty.) Then feed the hypothetical data back into the uncertainty analysis to calculate how much our uncertainty in the future could be reduced as a result of "observing" this "new" data. See Keller and McInerney for an example.

Another thing to do is decision making under uncertainty. For this, you need an economic integrated assessment model (or some other kind of policy model). Such a model typically has a simple description of the world economy connected to a simple description of the global climate: the world population and the economy grow at a certain rate which is tied to the energy sector, policies to reduce fossil carbon emissions have economic costs, fossil carbon emissions influence the climate, and climate change has economic costs. Different models are more or less explicit about these components (is the economy treated as a global aggregate or broken up into regional economies, how realistic is the climate model, how detailed is the energy sector model, etc.)

If you feed some policy (a course of emissions reductions over time) into such a model, it will calculate the implied emissions pathway and emissions abatement costs, as well as the implied climate change and economic damages. The net costs or benefits of this policy can be compared with a "business as usual" scenario with no emissions reductions. The net benefit is converted from "dollars" to "utility" (accounting for things like the concept that a dollar is worth more to a poor person than a rich one), and some discounting factor is applied (to downweight the value of future utility relative to present). This gives "the (discounted) utility of the proposed policy".

So far this has not taken uncertainty into account. In reality, we’re not sure what kind of climate change will result from a given emissions trajectory. (There is also economic uncertainty, such as how much it really costs to reduce emissions, but I’ll concentrate on the climate uncertainty.) The uncertainty analysis I’ve described can give probability weights to different climate change scenarios. You can then take a weighted average over all these scenarios to compute the "expected" utility of a proposed policy.

Finally, you optimize over all possible abatement policies to find the one that has the maximum expected discounted utility. See Keller et al. for a simple conceptual example of this applied to a learning scenario, and this book for a deeper discussion:

• William Nordhaus, A Question of Balance, Yale U. Press, New Haven, 2008.
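In miniature, the expected-discounted-utility calculation looks something like the Python sketch below. Everything in it is a made-up stand-in (a toy damage function, toy climate outcomes and weights, a toy discount rate); the point is only the structure: weight scenarios by probability, discount over time, and pick the policy with the highest expected utility.

    import numpy as np

    def discounted_utility(consumption, discount_rate=0.03):
        """Sum of discounted log utilities over time (log: a dollar matters more when poor)."""
        years = np.arange(len(consumption))
        return np.sum(np.log(consumption) / (1 + discount_rate) ** years)

    def consumption_path(abatement, climate_outcome):
        """Toy world: abatement costs money now; warming (scaled by the uncertain
        climate outcome) costs money later. Returns consumption over 100 years."""
        t = np.arange(100)
        growth = 1.02 ** t
        abatement_cost = 0.02 * abatement
        damages = 0.0005 * climate_outcome * (1 - abatement) * t
        return growth * (1 - abatement_cost - damages)

    # Probability-weighted climate outcomes from the uncertainty analysis (hypothetical)
    outcomes = np.array([1.0, 2.0, 4.0])      # e.g. mild, middling, severe warming
    weights  = np.array([0.3, 0.5, 0.2])      # posterior probabilities of each

    def expected_utility(abatement):
        return np.sum(weights * [discounted_utility(consumption_path(abatement, o)) for o in outcomes])

    # "Optimize" over policies by brute force on a grid of abatement fractions
    policies = np.linspace(0, 1, 21)
    best = policies[np.argmax([expected_utility(a) for a in policies])]
    print("best abatement fraction:", best)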

It is now possible to start elaborating on this theme. For instance, in the future learning problem, you can modify the "hypothetical data" to deviate from what your climate model predicts, in order to consider what would happen if the model is wrong and we observe something "unexpected". Then you can put that into an integrated assessment model to study how much being wrong would cost us, and how fast we need to learn that we’re wrong in order to change course, policy-wise. See that paper by Oppenheimer et al. for an example.

JB: Thanks for that tour of ideas! It sounds fascinating, important, and complex.

Now I’d like to move on to talking about a specific paper of yours. It’s this one:

• Nathan Urban and Klaus Keller, Probabilistic hindcasts and projections of the coupled climate, carbon cycle, and Atlantic meridional overturning circulation system: A Bayesian fusion of century-scale observations with a simple model, Tellus A 62 (2010), 737-750.

Before I ask you about the paper, let me start with something far more basic: what the heck is the "Atlantic meridional overturning circulation" or "AMOC"?

I know it has something to do with ocean currents, and how warm water moves north near the surface of the Atlantic and then gets cold, plunges down, and goes back south. Isn’t this related to the "Gulf Stream", that warm current that supposedly keeps Europe warmer than it otherwise would be?

NU: Your first sentence pretty much sums up the basic dynamics: the warm water from the tropics cools in the North Atlantic, sinks (because it’s colder and denser), and returns south as deep water. As the water cools, the heat it releases to the atmosphere warms the region.

This is the "overturning circulation". But it’s not synonymous with the Gulf Stream. The Gulf Stream is a mostly wind-driven phenomenon, not a density driven current. The "AMOC" has both wind driven and density driven components; the latter is sometimes referred to as the "thermohaline circulation" (THC), since both heat and salinity are involved. I haven’t gotten into salinity yet, but it also influences the density structure of the ocean, and you can read Stefan Rahmstorf’s review articles for more (read the parts on non-linear behavior):

• Stefan Rahmstorf, The thermohaline ocean circulation: a brief fact sheet.

• Stefan Rahmstorf, Thermohaline ocean circulation, in Encyclopedia of Quaternary Sciences, edited by S. A. Elias, Elsevier, Amsterdam 2006.



JB: Next, why are people worrying about the AMOC? I know some scientists have argued that shortly after the last ice age, the AMOC stalled out due to lots of fresh water from Lake Agassiz, a huge lake that used to exist in what’s now Canada, formed by melting glaciers. The idea, I think, was that this event temporarily killed the Gulf Stream and made temperatures in Europe drop enormously.

Do most people believe that story these days?

NU: You’re speaking of the "Younger Dryas" abrupt cooling event around 11 to 13 thousand years ago. The theory is that a large pulse of fresh water from Lake Agassiz lessened the salinity in the Atlantic and made it harder for water to sink, thus shutting down the overturning circulation and decreasing its release of heat in the North Atlantic. This is still a popular theory, but geologists have had trouble tracing the path of a sufficiently large supply of fresh water, at the right place, and the right time, to shut down the AMOC. There was a paper earlier this year claiming to have finally done this:

• Julian B. Murton, Mark D. Bateman, Scott R. Dallimore, James T. Teller and Zhirong Yang, Identification of Younger Dryas outburst flood path from Lake Agassiz to the Arctic Ocean, Nature 464 (2010), 740-743.

but I haven’t read it yet.

The worry is that this could happen again — not because of a giant lake draining into the Atlantic, but because of warming (and the resulting changes in precipitation) altering the thermal and salinity structure of the ocean. It is believed that the resulting shutdown of the AMOC will cause the North Atlantic region to cool, but there is still debate over what it would take to cause it to shut down. It’s also debated whether this is one of the climate "tipping points" that people talk about — whether a certain amount of warming would trigger a shutdown, and whether that shutdown would be "irreversible" (or difficult to reverse) or "abrupt".

Cooling Europe may not be a bad thing in a warming world. In fact, in a warming world, Europe might not actually cool in response to an AMOC shutdown; it might just warm more slowly. The problem is if the cooling is abrupt (and hard to adapt to), or prolonged (permanently shifting climate patterns relative to the rest of the world). Perhaps worse than the direct temperature change could be the impacts on agriculture or ocean ecosystems, resulting from major reorganizations of regional precipitation or ocean circulation patterns.

JB: So, part of your paper consists of modelling the AMOC and how it interacts with the climate and the carbon cycle. Let’s go through this step by step.

First: how do you model the climate? You say you use "the DOECLIM physical climate component of the ACC2 model, which is an energy balance model of the atmosphere coupled to a one-dimensional diffusive ocean model". I guess these are well-known ideas in your world. But I don’t even know what the acronyms stand for! Could you walk us through these ideas in a gentle way?

NU: Don’t worry about the acronyms; they’re just names people have given to particular models.

The ACC2 model is a computer model of both the climate and the carbon cycle. The climate part of our model is called DOECLIM, which I’ve used to replace the original climate component of ACC2. An "energy balance model" is the simplest possible climate model, and is a form of "box model" that I mentioned above. It treats the Earth as a big heat sink that you dump energy into (e.g., by adding greenhouse gases). Given the laws of thermodynamics, you can compute how much temperature change you get from a given amount of heat input.

This energy balance model of the atmosphere is "zero dimensional", which means that it treats the Earth as a featureless sphere, and doesn’t attempt to keep track of how heat flows or temperature changes at different locations. There is no three dimensional circulation of the atmosphere or anything like that. The atmosphere is just a "lump of heat-absorbing material".

The atmospheric "box of heat" is connected to two other boxes, which are land and ocean. In DOECLIM, "land" is just another featureless lump of material, with a different heat capacity than air. The "ocean" is more complicated. Instead of a uniform box of water with a single temperature, the ocean is "one dimensional", meaning that it has depth, and temperature is allowed to vary with depth. Heat penetrates from the surface into the deep ocean by a diffusion process, which is intended to mimic the actual circulation-driven penetration of heat into the ocean. It’s worth treating the ocean in more detail since oceans are the Earth’s major heat sink, and therefore control how quickly the planet can change temperature.

The three parameters in the DOECLIM model which we treat as uncertain are the climate (temperature) sensitivity to CO2, the vertical mixing rate of heat into the ocean, and the strength of the "aerosol indirect effect" (what kind of cooling effect industrial aerosols in the atmosphere create due to their influence on cloud behavior).
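Here is roughly what such a model looks like in code: a schematic Python version of a zero-dimensional energy balance atmosphere over a one-dimensional diffusive ocean column, written for illustration. The numbers are order-of-magnitude guesses, not DOECLIM’s, and DOECLIM itself contains more than this.

    import numpy as np

    # Zero-dimensional energy balance atmosphere over a 1-D diffusive ocean column.
    n_layers, dz = 40, 100.0          # ocean layers, each 100 m thick
    dt = 0.05                          # time step in years
    kappa = 3000.0                     # vertical heat diffusivity, m^2/yr (uncertain parameter)
    sensitivity = 3.0                  # warming (deg C) per doubling of CO2 (uncertain parameter)
    lam = 3.7 / sensitivity            # radiative feedback, W/m^2 per deg C
    C_surf = 10.0                      # effective surface heat capacity, W*yr/m^2 per deg C

    T = np.zeros(n_layers)             # temperature anomaly profile; T[0] is the surface
    forcing = 3.7                      # forcing from doubled CO2, W/m^2

    for _ in range(int(500 / dt)):     # run 500 years
        # Diffusion of heat down the column (simple explicit finite difference)
        flux = -kappa * np.diff(T) / dz              # flux between adjacent layers
        dT = np.zeros(n_layers)
        dT[:-1] -= flux / dz
        dT[1:]  += flux / dz
        # The surface layer also gains the radiative imbalance from above
        dT[0] += (forcing - lam * T[0]) / C_surf
        T += dt * dT

    print(f"surface warming after 500 yr: {T[0]:.2f} deg C (equilibrium would be {sensitivity:.1f})")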

JB: Okay, that’s clear enough. But at this point I have to raise an issue about models in general. As you know, a lot of climate skeptics like to complain about the fallibility of models. They would surely become even more skeptical upon hearing that you’re treating the Earth as a featureless sphere with the same temperature throughout at any given time — and treating the temperature of ocean water as depending only on the depth, not the location. Why are you simplifying things so much? How could your results possibly be relevant to the real world?

Of course, as a mathematical physicist, I know the appeal of simple models. I also know the appeal of reducing the number of dimensions. I spent plenty of time studying quantum gravity in the wholly unrealistic case of a universe with one less dimension than our real world! Reducing the number of dimensions makes the math a lot simpler. And simplified models give us a lot of insight which — with luck — we can draw upon when tackling the really hard real-world problems. But we have to be careful: they can also lead us astray.

How do you think about results obtained from simplified climate models? Are they just mathematical warmup exercises? That would be fine — I have no problem with that, as long as we’re clear about it. Or are you hoping that they give approximately correct answers?

NU: I use simple models because they’re fast and it’s easier to expose and explore their assumptions. My attitude toward simple models is a little of both the points of view you suggest: partly proof of concept, but also hopefully approximately correct, for the questions I’m asking. Let me first argue for the latter perspective.

If you’re using a zero dimensional model, you can really only hope to answer "zero dimensional questions", i.e. about the globally averaged climate. Once you’ve simplified your question by averaging over a lot of the complexity of the data, you can hope that a simple model can reproduce the remaining dynamics. But you shouldn’t just hope. When using simple models, it’s important to test the predictions of their components against more complex models and against observed data.

You can show, for example, that as far as global average surface temperature is concerned, even simpler energy balance models than DOECLIM (e.g., without a 1D ocean) can do a decent job of reproducing the behavior of more complex models. See, e.g.:

• Isaac M. Held, Michael Winton, Ken Takahashi, Thomas Delworth, Fanrong Zeng and Geoffrey K. Vallis, Probing the fast and slow components of global warming by returning abruptly to preindustrial forcing, Journal of Climate 23 (2010), 2418-2427.

for a recent study. The differences between complex models can be captured merely by retuning the "effective parameters" of the simple model. For example, many of the complexities of different feedback effects can be captured by a tunable climate sensitivity parameter in the simple model, representing the total feedback. By turning this sensitivity "knob" in the simple model, you can get it to behave like complex models which have different feedbacks in them.

There is a long history in climate science of using simple models as "mechanistic emulators" of more complex models. The idea is to put just enough physics into the simple model to get it to reproduce some specific averaged behavior of the complex model, but no more. The classic "mechanistic emulator" used by the Intergovernmental Panel on Climate Change (IPCC) is called MAGICC. BERN-CC is another model frequently used by the IPCC for carbon cycle scenario analysis — that is, converting CO2 emissions scenarios to atmospheric CO2 concentrations. A simple model that people can play around with themselves on the Web may be found here:

• Ben Matthews, Chooseclimate.

Obviously a simple model cannot reproduce all the behavior of a more complex model. But if you can provide evidence that it reproduces the behavior you’re interested in for a particular problem, it is arguably at least as "approximately correct" as the more complex model you validate it against, for that specific problem. (Whether the more complex model is an "approximately correct" representation of the real world is a separate question!)

In fact, simple models are arguably more useful than more complex ones for certain applications. The problem with complex models is, well, their complexity. They make a lot of assumptions, and it’s hard to test all of them. Simpler models make fewer assumptions, so you can test more of them, and look at the sensitivity of your conclusions to your assumptions.

If I take all the complex models used by the IPCC, they will have a range of different climate sensitivities. But what if the actual climate sensitivity is above or below that range, because all the complex models have limitations? I can’t easily explore that possibility in a complex model, because "climate sensitivity" isn’t a knob I can turn. It’s an emergent property of many different physical processes. If I want to change the model’s climate sensitivity, I might have to rewrite the cloud physics module to obey different dynamical equations, or something complicated like that — and I still won’t be able to produce a specific sensitivity. But in a simple model, "climate sensitivity" is a "knob", and I can turn it to any desired value above, below, or within the IPCC range to see what happens.
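
To illustrate what such a "knob" looks like, here is a minimal zero-dimensional energy balance sketch (even simpler than DOECLIM, with no ocean depth at all), where the equilibrium climate sensitivity appears as an explicit parameter. The 3.7 W/m² forcing per CO2 doubling is a standard round number; the heat capacity and CO2 path are made up for illustration and are not values from the paper.

    import numpy as np

    F2X = 3.7      # radiative forcing per CO2 doubling, W/m^2 (standard round value)
    C = 4.0e8      # effective heat capacity, J/(m^2 K) -- illustrative
    dt = 3.15e7    # one-year time step, s

    def warming(co2, sensitivity):
        """Zero-dimensional energy balance: C dT/dt = F - (F2X / sensitivity) * T,
        where 'sensitivity' is the equilibrium warming per CO2 doubling (the knob)."""
        lam = F2X / sensitivity
        T = 0.0
        for c in co2:
            F = F2X * np.log2(c / co2[0])   # forcing relative to the starting CO2 level
            T += dt * (F - lam * T) / C
        return T

    co2 = 280.0 * 1.01 ** np.arange(70)     # CO2 rising 1%/yr for 70 years (about a doubling)

    for S in (1.5, 3.0, 4.5):               # turn the sensitivity knob across a plausible range
        print(S, "K per doubling ->", round(warming(co2, S), 2), "K of warming after 70 years")

Turning the sensitivity argument across 1.5 to 4.5 K per doubling changes the simulated warming directly, with no need to rewrite any physics modules.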

After that defense of simple models, there are obviously large caveats. Even if you can show that a simple model can reproduce the behavior of a more complex one, you can only test it under a limited range of assumptions about model parameters, forcings, etc. It’s possible to push a simple model too far, into a regime where it stops reproducing what a more complex model would do. Simple models can also neglect relevant feedbacks and other processes. For example, in the model I use, global warming can shut down the AMOC, but changes in the AMOC don’t feed back to cool the global temperature. But the cooling from an AMOC weakening should itself slow further AMOC weakening due to global warming. The AMOC model we use is designed to partly compensate for the lack of explicit feedback of ocean heat transport on the temperature forcing, but it’s still an approximation.

In our paper we discuss what we think are the most important caveats of our simple analysis. Ultimately we need to be able to do this sort of analysis with more complex models as well, to see how robust our conclusions are to model complexity and structural assumptions. I am working in that direction now, but the complexities involved might be the subject of another interview!

JB: I’d be very happy to do another interview with you. But you’re probably eager to finish this one first. So we should march on.

But I can’t resist one more comment. You say that models even simpler than DOECLIM can emulate the behavior of more complex models. And then you add, parenthetically, "whether the more complex model is an ‘approximately correct’ representation of the real world is a separate question!" But I think that latter question is the one that ordinary people find most urgent. They won’t be reassured to know that simple models do a good job of mimicking more complicated models. They want to know how well these models mimic reality!

But maybe we’ll get to that when we talk about the Markov chain Monte Carlo procedure and how you use that to estimate the probability that the "knobs" (that is, parameters) in your model are set correctly? Presumably in that process we learn a bit about how well the model matches real-world data?

If so, we can go on talking about the model now, and come back to this point in due time.

NU: The model’s ability to represent the real world is the most important question. But it’s not one I can hope to fully answer with a simple model. In general, you won’t expect a model to exactly reproduce the data. Partly this is due to model imperfections, but partly it’s due to random "natural variability" in the system. (And also, of course, to measurement error.) Natural variability is usually related to chaotic or otherwise unpredictable atmosphere-ocean interactions, e.g. at the scale of weather events, El Niño, etc. Even a perfect model can’t be expected to predict those. With a simple model it’s really hard to tell how much of the discrepancy between model and data is due to model structural flaws, and how much is attributable to expected "random fluctuations", because simple models are too simple to generate their own "natural variability".

To really judge how well models are doing, you have to use a complex model and see how much of the discrepancy can be accounted for by the natural variability it predicts. You also have to get into a lot of detail about the quality of the observations, which means looking at spatial patterns and not just global averages. This is the sort of thing done in model validation studies, "detection and attribution" studies, and observation system papers. But it’s beyond the scope of our paper. That’s why I said the best I can do is to use simple models that perform as well as complex models for limited problems. They will of course suffer any limitations of the complex models to which they’re tuned, and if you want to read about those, you should read those modeling papers.

As far as what I can do with a simple model, yes, the Bayesian probability calculation using MCMC is a form of data-model comparison, in that it gives higher weight to model parameter settings that fit the data better. But it’s not exactly a form of "model checking", because Bayesian probability weighting is a relative procedure. It will be quite happy to assign high probability to parameter settings that fit the data terribly, as long as they still fit better than all the other parameter settings. A Bayesian probability isn’t an absolute measure of model quality, and so it can’t be used to check models. This is where classical statistical measures of "goodness of fit" can be helpful. For a philosophical discussion, see:

• Andrew Gelman and Cosma Rohilla Shalizi, Philosophy and the practice of Bayesian statistics, available as arXiv:1006.3868.

That being said, you do learn about model fit during the MCMC procedure in its attempt to sample highly probable parameter settings. When you get to the best fitting parameters, you look at the difference between the model fit and the observations to get an idea of what the "residual error" is — that is, everything that your model wasn’t able to predict.
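
Here is a minimal sketch of that kind of procedure: a Metropolis sampler calibrating a single parameter of a toy linear model against synthetic noisy data. It is only meant to show how MCMC weights parameter settings by how well they fit, and where the residual error appears; it is not the model or the code from the paper.

    import numpy as np

    rng = np.random.default_rng(0)

    # Toy "model": warming = sensitivity * forcing_index, observed with noise.
    forcing = np.linspace(0.0, 1.0, 50)
    true_sensitivity = 3.0
    data = true_sensitivity * forcing + rng.normal(0.0, 0.3, forcing.size)

    def log_posterior(s):
        if not (0.0 < s < 10.0):          # flat prior on (0, 10)
            return -np.inf
        resid = data - s * forcing        # residual error: observations minus model
        return -0.5 * np.sum(resid**2) / 0.3**2

    # Metropolis sampler
    samples, s = [], 1.0
    lp = log_posterior(s)
    for _ in range(20000):
        prop = s + rng.normal(0.0, 0.2)   # propose a nearby parameter value
        lp_prop = log_posterior(prop)
        if np.log(rng.random()) < lp_prop - lp:   # accept with Metropolis probability
            s, lp = prop, lp_prop
        samples.append(s)

    samples = np.array(samples[5000:])    # discard burn-in
    print("posterior mean:", round(samples.mean(), 2), "+/-", round(samples.std(), 2))

Note that the sampler concentrates around the best-fitting value whether or not that best fit is actually a good fit, which is exactly the point above about Bayesian weighting being relative rather than an absolute check of model quality.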

I should add that complex models disagree more about the strength of the AMOC than they do about more commonly discussed climate variables, such as surface temperature. This can be seen in Figure 10.15 of the IPCC AR4 WG1 report: there is a cluster of models that all tend to agree with the observed AMOC strength, but there are also some models that don’t. Some of those that don’t are known to have relatively poor physical modeling of the overturning circulation, so this is to be expected (i.e., the figure looks like a worse indictment of the models than it really is). But there is still disagreement between some of the "higher quality" models. Part of the problem is that we have poor historical observations of the AMOC, so it’s sometimes hard to tell what needs fixing in the models.

Since the complex models don’t all agree about the current state of the AMOC, one can (and should) question using a simple AMOC model which has been tuned to a particular complex model. Other complex models will predict something altogether different. (And in fact, the model that our simple model was tuned to is also simpler than the IPCC AR4 models.) In our analysis we try to get around this model uncertainty by including some tunable parameters that control both the initial strength of the AMOC and how quickly it weakens. By altering those parameters, we try to span the range of possible outcomes predicted by complex models, allowing the parameters to take on whatever range of values is compatible with the (noisy) observations. This, at a minimum, leads to significant uncertainty in what the AMOC will do.

I’m okay with the idea of uncertainty — that is, after all, what my research is about. But ultimately, even projections with wide error bars still have to be taken with a grain of salt, if the most advanced models still don’t entirely agree on simple questions like the current strength of the AMOC.

JB: Okay, thanks. Clearly the question of how well your model matches reality is vastly more complicated than what you started out trying to tell me: namely, what your model is. Let’s get back to that.

To recap, your model consists of three interacting parts: a model of the climate, a model of the carbon cycle, and a model of the Atlantic meridional overturning circulation (or "AMOC"). The climate model, called "DOECLIM", itself consists of three interacting parts:

• the "land" (modeled as a "box of heat"),

• the "atmosphere" (modeled as a "box of heat"), and

• the "ocean" (modeled as a one-dimensional object, so that temperature varies with depth).

Next: how do you model the carbon cycle?

NU: We use a model called NICCS (nonlinear impulse-response model of the coupled carbon-cycle climate system). This model started out as an impulse response model, but because of nonlinearities in the carbon cycle, it was augmented by some box model components. NICCS takes fossil carbon emissions to the air as input, and calculates how that carbon ends up being partitioned between the atmosphere, land (vegetation and soil), and ocean.

For the ocean, it has an impulse response model of the vertical advective/diffusive transport of carbon in the ocean. This is supplemented by a differential equation that models nonlinear ocean carbonate buffering chemistry. It doesn’t have any explicit treatment of ocean biology. For the terrestrial biosphere, it has a box model of the carbon cycle. There are four boxes, each containing some amount of carbon. They are "woody vegetation", "leafy vegetation", "detritus" (decomposing organic matter), and "humus" (more stable organic soil carbon). The box model has some equations describing how quickly carbon gets transported between these boxes (or back to the atmosphere).
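
As a rough illustration of what such a terrestrial box model looks like, here is a sketch with four carbon pools and first-order transfers between them. The stocks, turnover times, and allocation fractions below are invented for illustration; they are not the NICCS values.

    import numpy as np

    boxes = ["woody vegetation", "leafy vegetation", "detritus", "humus"]
    carbon = np.array([300.0, 100.0, 120.0, 1400.0])       # carbon stocks (GtC), illustrative

    turnover = np.array([1/40.0, 1/2.0, 1/5.0, 1/100.0])   # fraction leaving each box per year

    # Fraction of each box's outflow that goes to another box (rows: source, columns: destination);
    # whatever is left over is respired back to the atmosphere.
    to_box = np.zeros((4, 4))
    to_box[0, 2] = 0.8    # woody vegetation -> detritus
    to_box[1, 2] = 0.7    # leafy vegetation -> detritus
    to_box[2, 3] = 0.4    # detritus -> humus

    uptake = np.array([1.5, 1.5, 0.0, 0.0])   # photosynthetic uptake from the air, GtC/yr

    for year in range(200):
        outflow = turnover * carbon
        carbon = carbon + uptake + to_box.T @ outflow - outflow

    respired = outflow * (1.0 - to_box.sum(axis=1))
    print(dict(zip(boxes, np.round(carbon, 1))))
    print("respired back to the atmosphere:", round(respired.sum(), 2), "GtC/yr")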

In addition to carbon emissions, both the land and ocean modules take global temperature as an input. (So, there should be a red arrow pointing to the "ocean" too — this is a mistake in the figure.) This is because there are temperature-dependent feedbacks in the carbon cycle. In the ocean, temperature determines how readily CO2 will dissolve in water. On land, temperature influences how quickly organic matter in soil decays ("heterotrophic respiration"). There are also purely carbon cycle feedbacks, such as the buffering chemistry mentioned above, and also "CO2 fertilization", which quantifies how plants can grow better under elevated levels of atmospheric CO2.

The NICCS model also originally contained an impulse response model of the climate (temperature as a function of CO2), but we removed that and replaced it with DOECLIM. The NICCS model itself is tuned to reproduce the behavior of a more complex Earth system model. The three key uncertain parameters treated in our analysis control the soil respiration temperature feedback, the CO2 fertilization feedback, and the vertical mixing rate of carbon into the ocean.

JB: Okay. Finally, how do you model the AMOC?

NU: This is another box model. There is a classic 1961 paper by Stommel:

• Henry Stommel, Thermohaline convection with two stable regimes of flow, Tellus 13 (1961), 224-230.

which models the overturning circulation using two boxes of water, one representing water at high latitudes and one at low latitudes. The boxes contain heat and salt. Together, temperature and salinity determine water density, and density differences drive the flow of water between boxes.

It has been shown that such box models can have interesting nonlinear dynamics, exhibiting both hysteresis and threshold behavior. Hysteresis means that if you warm the climate and then cool it back down to its original temperature, the AMOC doesn’t return to its original state. Threshold behavior means that the system exhibits multiple stable states (such as an ocean circulation with or without overturning), and you can pass a "tipping point" beyond which the system flips from one stable equilibrium to another. Ultimately, this kind of dynamics means that it can be hard to return the AMOC to its historic state if it shuts down from anthropogenic climate change.
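
Stommel’s model is simple enough to sketch in a few lines. In a nondimensional form, the overturning is proportional to the density contrast between the boxes (a fixed thermal part minus an evolving salinity part), and the salinity contrast is built up by freshwater forcing and eroded by the flow itself. The numbers below are illustrative, chosen only so that two stable states exist:

    # Nondimensional sketch in the spirit of Stommel's two-box model (illustrative values).
    # q: overturning strength, proportional to the density contrast between the boxes;
    # S: salinity contrast, forced by freshwater input and mixed away by the flow.

    k, thermal, haline = 1.0, 2.0, 1.0
    F = 0.9                       # freshwater forcing
    dt, nsteps = 0.01, 20000

    def equilibrium_flow(S):
        for _ in range(nsteps):
            q = k * (thermal - haline * S)
            S += dt * (F - abs(q) * S)
        return k * (thermal - haline * S)

    # Same forcing, different initial salinity contrasts -> different stable circulations
    print("strong overturning state:", round(equilibrium_flow(0.2), 2))
    print("weak reversed state:", round(equilibrium_flow(2.0), 2))

With identical forcing, the system ends up in either a strong overturning state or a weak reversed one depending on its history, which is the hysteresis and threshold behavior described above.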

The extent to which the real AMOC exhibits hysteresis and threshold behavior remains an open question. The model we use in our paper is a box model that has this kind of nonlinearity in it:

• Kirsten Zickfeld, Thomas Slawig and Stefan Rahmstorf, A low-order model for the response of the Atlantic thermohaline circulation to climate change, Ocean Dynamics 54 (2004), 8-26.

Instead of Stommel’s two boxes, this model uses four boxes:

It has three surface water boxes (north, south, and tropics), and one box for an underlying pool of deep water. Each box has its own temperature and salinity, and flow is driven by density gradients between them. Each box also has its own "relaxation temperature" toward which it tries to restore itself after a perturbation; these parameters are set in a way that attempts to compensate for a lack of explicit feedback on global temperature. The model’s parameters are tuned to match the output of an intermediate complexity climate model.

The input to the model is a change in global temperature (temperature anomaly). This is rescaled to produce different temperature anomalies over each of the three surface boxes (accounting for the fact that different latitudes are expected to warm at different rates). There are similar scalings to determine how much freshwater input, from both precipitation changes and meltwater, is expected in each of the surface boxes due to a temperature change.

The main uncertain parameter is the "hydrological sensitivity" of the North Atlantic surface box, controlling how much freshwater goes into that region in a warming scenario. This is the main effect by which the AMOC can weaken. Actually, anything that changes the density of water alters the AMOC, so the overturning can weaken due to salinity changes from freshwater input, or from direct temperature changes in the surface waters. However, the former is more uncertain than the latter, so we focus on freshwater in our uncertainty analysis.

JB: Great! I see you’re emphasizing the uncertain parameters; we’ll talk more later about how you estimate these parameters, though you’ve already sort of sketched the idea.

So: you’ve described to me the three components of your model: the climate, the carbon cycle and the Atlantic meridional overturning circulation (AMOC). I guess to complete the description of your model, you should say how these components interact — right?

NU: Right. There is a two-way coupling between the climate module (DOECLIM) and the carbon cycle module (NICCS). The global temperature from the climate module is fed into the carbon cycle module to predict temperature-dependent feedbacks. The atmospheric CO2 predicted by the carbon cycle module is fed into the climate module to predict temperature from its greenhouse effect. There is a one-way coupling between the climate module and the AMOC module. Global temperature alters the overturning circulation, but changes in the AMOC do not themselves alter global temperature:



There is no coupling between the AMOC module and the carbon cycle module, although there technically should be: both the overturning circulation and the uptake of carbon by the oceans depend on ocean vertical mixing processes. Similarly, the climate and carbon cycle modules have their own independent parameters controlling the vertical mixing of heat and carbon, respectively, in the ocean. In reality these mixing rates are related to each other. In this sense, the modules are not fully coupled, insofar as they have independent representations of physical processes that are not really independent of each other. This is discussed in our caveats.
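
Schematically, the coupling can be pictured as a time-stepping loop like the toy sketch below. The dynamics inside each function are invented placeholders, not the actual DOECLIM, NICCS, or Zickfeld equations; the point is only the direction of the arrows between modules.

    import math

    # Toy stand-ins for the three modules; only the coupling structure mirrors the text.

    def carbon_cycle_step(co2, emissions, temperature):
        airborne = 0.5 + 0.05 * temperature       # warming weakens the sinks a little
        return co2 + airborne * emissions

    def climate_step(temperature, co2, sensitivity=3.0):
        equilibrium = sensitivity * math.log2(co2 / 280.0)
        return temperature + 0.05 * (equilibrium - temperature)

    def amoc_step(temperature, initial_strength=20.0, hydro_sensitivity=2.0):
        return max(initial_strength - hydro_sensitivity * temperature, 0.0)

    co2, temperature = 280.0, 0.0
    for year in range(200):
        co2 = carbon_cycle_step(co2, 1.0, temperature)   # two-way: temperature -> carbon cycle
        temperature = climate_step(temperature, co2)     # two-way: CO2 -> climate
        amoc = amoc_step(temperature)                    # one-way: climate -> AMOC, no feedback

    print(round(co2), "ppm,", round(temperature, 2), "K warming, AMOC strength", round(amoc, 1))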

JB: There’s one other thing that’s puzzling me. The climate model treats the "ocean" as a single entity whose temperature varies with depth but not location. The AMOC model involves four "boxes" of water: north, south, tropical, and deep ocean water, each with its own temperature. That seems a bit schizophrenic, if you know what I mean. How are these temperatures related in your model?

You say "there is a one-way coupling between the climate module and the AMOC module." Does the ocean temperature in the climate model affect the temperatures of the four boxes of water in the AMOC model? And if so, how?

NU: The surface temperature in the climate model affects the temperatures of the individual surface boxes in the AMOC model. The climate model works only with globally averaged temperature. To convert a (change in) global temperature to (changes in) the temperatures of the surface boxes of the AMOC model, there is a "pattern scaling" coefficient which converts global temperature (anomaly) to temperature (anomaly) in a particular box.

That is, if the climate model predicts 1 °C of warming globally, that might correspond to more or less than 1 °C of warming in the North Atlantic, tropics, etc. For example, we generally expect to see "polar amplification", where the high northern latitudes warm more quickly than the global average. These latitudinal scaling coefficients are derived from the output of a more complex climate model under a particular warming scenario, and are assumed to be constant (independent of warming scenario).

The temperature from the climate model which is fed into the AMOC model is the global (land+ocean) average surface temperature, not the DOECLIM sea surface temperature alone. This is because the pattern scaling coefficients in the AMOC model were derived relative to global temperature, not sea surface temperature.
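
In code, pattern scaling is just a set of fixed multipliers applied to the global-mean anomaly. The coefficients below are made up for illustration and are not the ones used in the paper.

    # Pattern scaling: regional warming = fixed coefficient x global-mean warming.
    pattern = {"north Atlantic box": 1.4, "tropical box": 0.8, "southern box": 0.6}

    global_warming = 1.0   # K, from the climate module
    for box, coefficient in pattern.items():
        print(box, "warms by", coefficient * global_warming, "K")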

JB: Okay. That’s a bit complicated, but I guess some sort of consistency is built in, which prevents the climate model and the AMOC model from disagreeing about the ocean temperature. That’s what I was worrying about.

Thanks for leading us through this model. I think this level of detail is just enough to get a sense for how it works. And now that I know roughly what your model is, I’m eager to see how you used it and what results you got!

But I’m afraid many of our readers may be nearing the saturation point. After all, I’ve been talking with you for days, with plenty of time to mull it over, while they will probably read this interview in one solid blast! So, I think we should quit here and continue in the next episode.

So, everyone: I’m afraid you’ll just have to wait, clutching your chair in suspense, for the answer to the big question: will the AMOC get turned off, or not? Or really: how likely is such an event, according to this simple model?


…we’re entering dangerous territory and provoking an ornery beast. Our climate system has proven that it can do very strange things. – Wallace S. Broecker


This Week’s Finds (Week 303)

30 September, 2010

Now for the second installment of my interview with Nathan Urban, a colleague who started out in quantum gravity and now works on "global climate change from an Earth system perspective, with an emphasis on Bayesian data-model calibration, probabilistic prediction, risk assessment, and decision analysis".

But first, a word about Bali. One of the great things about living in Singapore is that it’s close to a lot of interesting places. My wife and I just spent a week in Ubud. This town is the cultural capital of Bali — full of dance, music, and crafts. It’s also surrounded by astounding terraced rice paddies:

In his book Whole Earth Discipline, Stewart Brand says "one of the finest examples of beautifully nuanced ecosystem engineering is the thousand-year-old terraced rice irrigation complex in Bali".

Indeed, when we took a long hike with a local guide, Made Dadug, we learned that all the apparent "weeds" growing in luxuriant disarray near the rice paddies were in fact carefully chosen plants: cacao, coffee, taro, ornamental flowers, and so on. "See this bush? It’s citronella — people working on the fields grab a pinch and use it for mosquito repellent." When a paddy loses its nutrients they plant sweet potatoes there instead of rice, to restore the soil.

Irrigation is managed by local farmers’ cooperatives called "subaks", coordinated through a system of water temples. It’s not a top-down hierarchy: instead, each subak makes decisions in a more or less democratic way, while paying attention to what neighboring subaks do. Brand cites the work of Steve Lansing on this subject:

• J. Stephen Lansing, Perfect Order: Recognizing Complexity in Bali, Princeton University Press, Princeton, New Jersey, 2006.

Physicists interested in the spontaneous emergence of order will enjoy this passage:

This book began with a question posed by a colleague. In 1992 I gave a lecture at the Santa Fe Institute, a recently created research center devoted to the study of "complex systems." My talk focused on a simulation model that my colleague James Kremer and I had created to investigate the ecological role of water temples. I need to explain a little about how this model came to be built; if the reader will bear with me, the relevance will soon become clear.

Kremer is a marine scientist, a systems ecologist, and a fellow surfer. One day on a California beach I told him the story of the water temples, and of my struggles to convince the consultants that the temples played a vital role in the ecology of the rice terraces. I asked Jim if a simulation model, like the ones he uses to study coastal ecology, might help to clarify the issue. It was not hard to persuade him to come to Bali to take a look. Jim quickly saw that a model of a single water temple would not be very useful. The whole point about water temples is that they interact. Bali is a steep volcanic island, and the rivers and streams are short and fast. Irrigation systems begin high up on the volcanoes, and follow one after another at short intervals all the way to the seacoast. The amount of water each subak gets depends less on rainfall than on how much water is used by its upstream neighbors. Water temples provide a venue for the farmers to plan their irrigation schedules so as to avoid shortages when the paddies need to be flooded. If pests are a problem, they can synchronize harvests and flood a block of terraces so that there is nothing for the pests to eat. Decisions about water taken by each subak thus inevitably affect its neighbors, altering both the availability of water and potential levels of pest infestations.

Jim proposed that we build a simulation model to capture all of these processes for an entire watershed. Having recently spent the best part of a year studying just one subak, the idea of trying to model nearly two hundred of them at once struck me as rather ambitious. But as Jim pointed out, the question is not whether flooding can control pests, but rather whether the entire collection of temples in a watershed can strike an optimal balance between water sharing and pest control.

We set to work plotting the location of all 172 subaks lying between the Oos and Petanu rivers in central Bali. We mapped the rivers and irrigation systems, and gathered data on rainfall, river flows, irrigation schedules, water uptake by crops such as rice and vegetables, and the population dynamics of the major rice pests. With these data Jim constructed a simulation model. At the beginning of each year the artificial subaks in the model are given a schedule of crops to plant for the next twelve months, which defines their irrigation needs. Then, based on historic rainfall data, we simulate rainfall, river flow, crop growth, and pest damage. The model keeps track of harvest data and also shows where water shortages or pest damage occur. It is possible to simulate differences in rainfall patterns or the growth of different kinds of crops, including both native Balinese rice and the new rice promoted by the Green Revolution planners. We tested the model by simulating conditions for two cropping seasons, and compared its predictions with real data on harvest yields for about half the subaks. The model did surprisingly well, accurately predicting most of the variation in yields between subaks. Once we knew that the model’s predictions were meaningful, we used it to compare different scenarios of water management. In the Green Revolution scenario, every subak tries to plant rice as often as possible and ignores the water temples. This produces large crop losses from pest outbreaks and water shortages, much like those that were happening in the real world. In contrast, the “water temple” scenario generates the best harvests by minimizing pests and water shortages.

Back at the Santa Fe Institute, I concluded this story on a triumphant note: consultants to the Asian Development Bank charged with evaluating their irrigation development project in Bali had written a new report acknowledging our conclusions. There would be no further opposition to management by water temples. When I finished my lecture, a researcher named Walter Fontana asked a question, the one that prompted this book: could the water temple networks self-organize? At first I did not understand what he meant by this. Walter explained that if he understood me correctly, Kremer and I had programmed the water temple system into our model, and shown that it had a functional role. This was not terribly surprising. After all, the farmers had had centuries to experiment with their irrigation systems and find the right scale of coordination. But what kind of solution had they found? Was there a need for a Great Designer or an Occasional Tinkerer to get the whole watershed organized? Or could the temple network emerge spontaneously, as one subak after another came into existence and plugged in to the irrigation systems? As a problem solver, how well could the temple networks do? Should we expect 10 percent of the subaks to be victims of water shortages at any given time because of the way the temple network interacts with the physical hydrology? Thirty percent? Two percent? Would it matter if the physical layout of the rivers were different? Or the locations of the temples?

Answers to most of these questions could only be sought if we could answer Walter’s first large question: could the water temple networks self-organize? In other words, if we let the artificial subaks in our model learn a little about their worlds and make their own decisions about cooperation, would something resembling a water temple network emerge? It turned out that this idea was relatively easy to implement in our computer model. We created the simplest rule we could think of to allow the subaks to learn from experience. At the end of a year of planting and harvesting, each artificial subak compares its aggregate harvests with those of its four closest neighbors. If any of them did better, copy their behavior. Otherwise, make no changes. After every subak has made its decision, simulate another year and compare the next round of harvests. The first time we ran the program with this simple learning algorithm, we expected chaos. It seemed likely that the subaks would keep flipping back and forth, copying first one neighbor and then another as local conditions changed. But instead, within a decade the subaks organized themselves into cooperative networks that closely resembled the real ones.

Lansing describes how attempts to modernize farming in Bali in the 1970s proved problematic:

To a planner trained in the social sciences, management by water temples looks like an arcane relic from the premodern era. But to an ecologist, the bottom-up system of control has some obvious advantages. Rice paddies are artificial aquatic ecosystems, and by adjusting the flow of water farmers can exert control over many ecological processes in their fields. For example, it is possible to reduce rice pests (rodents, insects, and diseases) by synchronizing fallow periods in large contiguous blocks of rice terraces. After harvest, the fields are flooded, depriving pests of their habitat and thus causing their numbers to dwindle. This method depends on a smoothly functioning, cooperative system of water management, physically embodied in proportional irrigation dividers, which make it possible to tell at a glance how much water is flowing into each canal and so verify that the division is in accordance with the agreed-on schedule.

Modernization plans called for the replacement of these proportional dividers with devices called "Romijn gates," which use gears and screws to adjust the height of sliding metal gates inserted across the entrances to canals. The use of such devices makes it impossible to determine how much water is being diverted: a gate that is submerged to half the depth of a canal does not divert half the flow, because the velocity of the water is affected by the obstruction caused by the gate itself. The only way to accurately estimate the proportion of the flow diverted by a Romijn gate is with a calibrated gauge and a table. These were not supplied to the farmers, although $55 million was spent to install Romijn gates in Balinese irrigation canals, and to rebuild some weirs and primary canals.

The farmers coped with the Romijn gates by simply removing them or raising them out of the water and leaving them to rust.

On the other hand, Made said that the people of the village really appreciated this modern dam:

Using gears, it takes a lot less effort to open and close than the old-fashioned kind:

Later in this series of interviews we’ll hear more about sustainable agriculture from Thomas Fischbacher.

But now let’s get back to Nathan!

JB: Okay. Last time we were talking about the things that altered your attitude about climate change when you started working on it. And one of them was how carbon dioxide stays in the atmosphere a long time. Why is that so important? And is it even true? After all, any given molecule of CO2 that’s in the air now will soon get absorbed by the ocean, or taken up by plants.

NU: The longevity of atmospheric carbon dioxide is important because it determines the amount of time over which our actions now (fossil fuel emissions) will continue to have an influence on the climate, through the greenhouse effect.

You have heard correctly that a given molecule of CO2 doesn’t stay in the atmosphere for very long. I think it’s about 5 years. This is known as the residence time or turnover time of atmospheric CO2. Maybe that molecule will go into the surface ocean and come back out into the air; maybe photosynthesis will bind it in a tree, in wood, until the tree dies and decays and the molecule escapes back to the atmosphere. This is a carbon cycle, so it’s important to remember that molecules can come back into the air even after they’ve been removed from it.

But the fate of an individual CO2 molecule is not the same as how long it takes for the CO2 content of the atmosphere to decrease back to its original level after new carbon has been added. The latter is the answer that really matters for climate change. Roughly, the former depends on the magnitude of the gross carbon sink, while the latter depends on the magnitude of the net carbon sink (the gross sink minus the gross source).

As an example, suppose that every year 100 units of CO2 are emitted to the atmosphere from natural sources (organic decay, the ocean, etc.), and each year (say with a 5 year lag), 100 units are taken away by natural sinks (plants, the ocean, etc). The 5 years actually doesn’t matter here; the system is in steady-state equilibrium, and the amount of CO2 in the air is constant. Now suppose that humans add an extra 1 unit of CO2 each year. If nothing else changes, then the amount of carbon in the air will increase every year by 1 unit, indefinitely. Far from the carbon being purged in 5 years, we end up with an arbitrarily large amount of carbon in the air.

Even if you only add carbon to the atmosphere for a finite time (e.g., by running out of fossil fuels), the CO2 concentration will ultimately reach, and then perpetually remain at, a level elevated above the original by the total amount of new carbon added. Individual CO2 molecules may still get absorbed within 5 years of entering the atmosphere, and perhaps fewer of the carbon atoms that were once in fossil fuels will ultimately remain in the atmosphere. But if natural sinks are only removing an amount of carbon equal in magnitude to natural sources, and both are fixed in time, you can see that if you add extra fossil carbon the overall atmospheric CO2 concentration can never decrease, regardless of what individual molecules are doing.
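
The bookkeeping in that example is worth writing out explicitly; here it is as a few lines of Python, using the same made-up units.

    # Natural sources and sinks are balanced at 100 units/yr; humans add 1 unit/yr.
    natural_source, natural_sink, human_emissions = 100.0, 100.0, 1.0

    atmosphere = 800.0     # illustrative starting stock of carbon in the air
    for year in range(100):
        atmosphere += natural_source + human_emissions - natural_sink

    # After a century the air holds 100 extra units, even though any individual
    # molecule only stayed in the atmosphere for a few years.
    print("extra carbon in the air:", atmosphere - 800.0, "units")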

In reality, natural carbon sinks tend to grow in proportion to how much carbon is in the air, so atmospheric CO2 doesn’t remain elevated indefinitely in response to a pulse of carbon into the air. This is kind of the biogeochemical analog to the "Planck feedback" in climate dynamics: it acts to restore the system to equilibrium. To first order, atmospheric CO2 decays or "relaxes" exponentially back to the original concentration over time. But this relaxation time (variously known as a "response time", "adjustment time", "recovery time", or, confusingly, "residence time") isn’t a function of the residence time of a CO2 molecule in the atmosphere. Instead, it depends on how quickly the Earth’s carbon removal processes react to the addition of new carbon. For example, how fast plants grow, die, and decay, or how fast surface water in the ocean mixes to greater depths, where the carbon can no longer exchange freely with the atmosphere. These are slower processes.

There are actually a variety of response times, ranging from years to hundreds of thousands of years. The surface mixed layer of the ocean responds within a year or so; plants respond within decades, growing and taking up carbon or returning it to the atmosphere through rotting or burning. Deep ocean mixing and carbonate chemistry operate on longer time scales, centuries to millennia. And geologic processes like silicate weathering are even slower, tens of thousands of years. The removal dynamics are a superposition of all these processes, with a fair chunk taken out quickly by the fast processes, and slower processes removing the remainder more gradually.
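
A common way to summarize this superposition is an impulse response: the fraction of a CO2 pulse remaining airborne is approximately a weighted sum of exponentials with very different time constants. The weights and time scales below are round illustrative numbers, not taken from any particular model, but they reproduce the qualitative picture of a fast initial drawdown followed by a very long tail:

    import numpy as np

    # Illustrative weights and time scales (years) for fast, medium, and slow removal,
    # plus a fraction that only geologic processes remove.
    weights = [0.30, 0.30, 0.20, 0.20]
    timescales = [5.0, 100.0, 1000.0, 1.0e5]

    def airborne_fraction(t):
        return sum(w * np.exp(-t / tau) for w, tau in zip(weights, timescales))

    for t in (10, 100, 1000, 10000):
        print(f"after {t:>5} years, about {100 * airborne_fraction(t):.0f}% of the pulse remains")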

To summarize, as David Archer put it, "The lifetime of fossil fuel CO2 in the atmosphere is a few centuries, plus 25 percent that lasts essentially forever." By "forever" he means "tens of thousands of years" — longer than the present age of human civilization. This inspired him to write this pop-sci book, taking a geologic view of anthropogenic climate change:

• David Archer, The Long Thaw: How Humans Are Changing the Next 100,000 Years of Earth’s Climate, Princeton University Press, Princeton, New Jersey, 2009.

A clear perspective piece on the lifetime of carbon is:

• Mason Inman, Carbon is forever, Nature Reports Climate Change, 20 November 2008.

which is based largely on this review article:

• David Archer, Michael Eby, Victor Brovkin, Andy Ridgwell, Long Cao, Uwe Mikolajewicz, Ken Caldeira, Katsumi Matsumoto, Guy Munhoven, Alvaro Montenegro, and Kathy Tokos, Atmospheric lifetime of fossil fuel carbon dioxide, Annual Review of Earth and Planetary Sciences 37 (2009), 117-134.

For climate implications, see:

• Susan Solomon, Gian-Kasper Plattner, Reto Knutti and Pierre Friedlingstein, Irreversible climate change due to carbon dioxide emissions, PNAS 106 (2009), 1704-1709.

• M. Eby, K. Zickfeld, A. Montenegro, D. Archer, K. J. Meissner and A. J. Weaver, Lifetime of anthropogenic climate change: millennial time scales of potential CO2 and surface temperature perturbations, Journal of Climate 22 (2009), 2501-2511.

• Long Cao and Ken Caldeira, Atmospheric carbon dioxide removal: long-term consequences and commitment, Environmental Research Letters 5 (2010), 024011.

For the very long term perspective (how CO2 may affect the glacial-interglacial cycle over geologic time), see:

• David Archer and Andrey Ganopolski, A movable trigger: Fossil fuel CO2 and the onset of the next glaciation, Geochemistry Geophysics Geosystems 6 (2005), Q05003.

JB: So, you’re telling me that even if we do something really dramatic like cut fossil fuel consumption by half in the next decade, we’re still screwed. Global warming will keep right on, though at a slower pace. Right? Doesn’t that make you feel sort of hopeless?

NU: Yes, global warming will continue even as we reduce emissions, although more slowly. That’s sobering, but not grounds for total despair. Societies can adapt, and ecosystems can adapt — up to a point. If we slow the rate of change, then there is more hope that adaptation can help. We will have to adapt to climate change, regardless, but the less we have to adapt, and the more gradual the adaptation necessary, the less costly it will be.

What’s even better than slowing the rate of change is to reduce the overall amount of it. To do that, we’d need to not only reduce carbon emissions, but to reduce them to zero before we consume all fossil fuels (or all of them that would otherwise be economically extractable). If we emit the same total amount of carbon, but more slowly, then we will get the same amount of warming, just more slowly. But if we ultimately leave some of that carbon in the ground and never burn it, then we can reduce the amount of final warming. We won’t be able to stop it dead, but even knocking a degree off the extreme scenarios would be helpful, especially if there are "tipping points" that might otherwise be crossed (like a threshold temperature above which a major ice sheet will disintegrate).

So no, I don’t feel hopeless that we can, in principle, do something useful to mitigate the worst effects of climate change, even though we can’t plausibly stop or reverse it on normal societal timescales. But sometimes I do feel hopeless that we lack the public and political will to actually do so. Or at least, that we will procrastinate until we start seeing extreme consequences, by which time it’s too late to prevent them. Well, it may not be too late to prevent future, even more extreme consequences, but the longer we wait, the harder it is to make a dent in the problem.

I suppose here I should mention the possibility of climate geoengineering, which is a proposed attempt to artificially counteract global warming through other means, such as reducing incoming sunlight with reflective particles in the atmosphere, or space mirrors. That doesn’t actually cancel all climate change, but it can negate a lot of the global warming. There are many risks involved, and I regard it as a truly last-ditch effort if we discover that we really are "screwed" and can’t bear the consequences.

There is also an extreme form of carbon cycle geoengineering, known as air capture and sequestration, which extracts CO2 from the atmosphere and sequesters it for long periods of time. There are various proposed technologies for this, but it’s highly uncertain whether this can feasibly be done on the necessary scales.

JB: Personally, I think society will procrastinate until we see extreme climate changes. Recently millions of Pakistanis were displaced by floods: a quarter of their country was covered by water. We can’t say for sure this was caused by global warming — but it’s exactly the sort of thing we should expect.

But you’ll notice, this disaster is nowhere near enough to make politicians talk about cutting fossil fuel usage! It’ll take a lot of disasters like this to really catch people’s attention. And by then we’ll be playing a desperate catch-up game, while people in many countries are struggling to survive. That won’t be easy. Just think how little attention the Pakistanis can spare for global warming right now.

Anyway, this is just my own cheery view. But I’m not hopeless, because I think there’s still a lot we can do to prevent a terrible situation from becoming even worse. Since I don’t think the human race will go extinct anytime soon, it would be silly to "give up".

Now, you’ve just started a position at the Woodrow Wilson School at Princeton. When I was an undergrad there, this school was the place for would-be diplomats. What’s a nice scientist like you doing in a place like this? I see you’re in the Program in Science, Technology and Environmental Policy, or "STEP program". Maybe it’s too early for you to give a really good answer, but could you say a bit about what they do?

NU: Let me pause to say that I don’t know whether the Pakistan floods are "exactly the sort of thing we should expect" to happen to Pakistan, specifically, as a result of climate change. Uncertainty in the attribution of individual events is one reason why people don’t pay attention to them. But it is true that major floods are examples of extreme events which could become more (or less) common in various regions of the world in response to climate change.

Returning to your question, the STEP program includes a number of scientists, but we are all focused on policy issues because the Woodrow Wilson School is for public and international affairs. There are physicists who work on nuclear policy, ecologists who study environmental policy and conservation biology, atmospheric chemists who look at ozone and air pollution, and so on. Obviously, climate change is intimately related to public and international policy. I am mostly doing policy-relevant science but may get involved in actual policy to some extent. The STEP program has ties to other departments such as Geosciences, interdisciplinary umbrella programs like the Atmospheric and Ocean Sciences program and the Princeton Environmental Institute, and NOAA’s nearby Geophysical Fluid Dynamics Laboratory, one of the world’s leading climate modeling centers.

JB: How much do you want to get into public policy issues? Your new boss, Michael Oppenheimer, used to work as chief scientist for the Environmental Defense Fund. I hadn’t known much about them, but I’ve just been reading a book called The Climate War. This book says a lot about the Environmental Defense Fund’s role in getting the US to pass cap-and-trade legislation to reduce sulfur dioxide emissions. That’s quite an inspiring story! Many of the same people then went on to push for legislation to reduce greenhouse gases, and of course that story is less inspiring, so far: no success yet. Can you imagine yourself getting into the thick of these political endeavors?

NU: No, I don’t see myself getting deep into politics. But I am interested in what we should be doing about climate change, specifically, the economic assessment of climate policy in the presence of uncertainties and learning. That is, how hard should we be trying to reduce CO2 emissions, accounting for the fact that we’re unsure what climate the future will bring, but expect to learn more over time. Michael is very interested in this question too, and the harder problem of "negative learning":

• Michael Oppenheimer, Brian C. O’Neill and Mort Webster, Negative learning, Climatic Change 89 (2008), 155-172.

"Negative learning" occurs if what we think we’re learning is actually converging on the wrong answer. How fast could we detect and correct such an error? It’s hard enough to give a solid answer to what we might expect to learn, let alone what we don’t expect to learn, so I think I’ll start with the former.

I am also interested in the value of learning. How will our policy change if we learn more? Can there be any change in near-term policy recommendations, or will we learn slowly enough that new knowledge will only affect later policies? Is it more valuable — in terms of its impact on policy — to learn more about the most likely outcomes, or should we concentrate on understanding better the risks of the worst-case scenarios? What will cause us to learn the fastest? Better surface temperature observations? Better satellites? Better ocean monitoring systems? What observables should we be looking at?

The question "How much should we reduce emissions" is, partially, an economic one. The safest course of action from the perspective of climate impacts is to immediately reduce emissions to a much lower level. But that would be ridiculously expensive. So some kind of cost-benefit approach may be helpful: what should we do, balancing the costs of emissions reductions against their climate benefits, knowing that we’re uncertain about both. I am looking at so-called "economic integrated assessment" models, which combine a simple model of the climate with an even simpler model of the world economy to understand how they influence each other. Some argue these models are too simple. I view them more as a way of getting order-of-magnitude estimates of the relative values of different uncertainty scenarios or policy options under specified assumptions, rather than something that can give us "The Answer" to what our emissions targets should be.
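
To show the structure of such a calculation (and only the structure), here is a deliberately crude cost-benefit sketch: quadratic damages from warming traded off against quadratic abatement costs, discounted over two centuries. Every number in it is an assumption picked for illustration, not a calibrated estimate from any integrated assessment model.

    import numpy as np

    # Every number here is an illustrative assumption, not a calibrated estimate.
    years = np.arange(2020, 2220)
    t = years - years[0]
    discount = 0.97 ** t                      # roughly 3%/yr discounting
    gdp = 100.0 * 1.02 ** t                   # world GDP, trillion $/yr, growing 2%/yr

    def warming(abatement):                   # less abatement -> more end-of-period warming
        return 4.0 * (1.0 - abatement) * t / t[-1]

    def discounted_cost(abatement):
        damages = 0.01 * warming(abatement) ** 2   # fraction of GDP lost to climate damages
        mitigation = 0.02 * abatement ** 2         # fraction of GDP spent on emissions cuts
        return np.sum(discount * gdp * (damages + mitigation))

    for a in (0.0, 0.25, 0.5, 0.75, 1.0):
        print(f"abatement fraction {a:.2f}: total discounted cost {discounted_cost(a):7.1f} trillion $")

Even a toy like this displays the characteristic feature of such models: an interior optimum where some, but not all, emissions are abated, whose location shifts as you change the discount rate or the damage assumptions.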

In a certain sense it may be moot to look at such cost-benefit analyses, since there is a huge difference between "what may be economically optimal for us to do" and "what we will actually do". We’re not yet even coming close to meeting current policy recommendations, so what’s the point of generating new recommendations? That’s certainly a valid argument, but I still think it’s useful to have a sense of the gap between what we are doing and what we "should" be doing.

Economics can only get us so far, however (and maybe not far at all). Traditional approaches to economics have a very narrow way of viewing the world, and tend to ignore questions of ethics. How do you put an economic value on biodiversity loss? If we might wipe out polar bears, or some other species, or a whole lot of species, how much is it "worth" to prevent that? What is the Great Barrier Reef worth? Its value in tourism dollars? Its value in "ecosystem services" (the more nebulous economic activity which indirectly depends on its presence, such as fishing)? Does it have intrinsic value, and is it worth something (what?) to preserve, even if it has no quantifiable impact on the economy whatsoever?

You can continue on with questions like this. Does it make sense to apply standard economic discounting factors, which effectively value the welfare of future generations less than that of the current generation? See for example:

• John Quiggin, Stern and his critics on discounting and climate change: an editorial essay, Climatic Change 89 (2008), 195-205.

Economic models also tend to preserve present economic disparities. Otherwise, their "optimal" policy is to immediately transfer a lot of the wealth of developed countries to developing countries — and this is without any climate change — to maximize the average "well-being" of the global population, on the grounds that a dollar is worth more to a poor person than a rich person. This is not a realistic policy and arguably shouldn’t happen anyway, but you do have to be careful about hard-coding potential inequities into your models:

• Seth D. Baum and William E. Easterling, Space-time discounting in climate change adaptation, Mitigation and Adaptation Strategies for Global Change 15 (2010), 591-609.

More broadly, it’s possible for economics models to allow sea level rise to wipe out Bangladesh, or other extreme scenarios, simply because some countries have so little economic output that it doesn’t "matter" if they disappear, as long as other countries become even more wealthy. As I said, economics is a narrow lens.

After all that, it may seem silly to be thinking about economics at all. The main alternative is the "precautionary principle", which says that we shouldn’t take suspected risks unless we can prove them safe. After all, we have few geologic examples of CO2 levels rising as far and as fast as we are likely to increase them — to paraphrase Wally Broecker, we are conducting an uncontrolled and possibly unprecedented experiment on the Earth. This principle has some merits. The common argument, "We should do nothing unless we can prove the outcome is disastrous", is a strange burden of proof from a decision analytic point of view — it has little to do with the realities of risk management under uncertainty. Nobody’s going to say "You can’t prove the bridge will collapse, so let’s build it". They’re going to say "Prove it’s safe (to within a certain guarantee) before we build it". Actually, a better analogy to the common argument might be: you’re driving in the dark with broken headlights, and insist “You’ll have to prove there are no cliffs in front of me before I’ll consider slowing down.” In reality, people should slow down, even if it makes them late, unless they know there are no cliffs.

But the precautionary principle has its own problems. It can imply arbitrarily expensive actions in order to guard against arbitrarily unlikely hazards, simply because we can’t prove they’re safe, or precisely quantify their exact degree of unlikelihood. That’s why I prefer to look at quantitative cost-benefit analysis in a probabilistic framework. But it can be supplemented with other considerations. For example, you can look at stabilization scenarios: where you "draw a line in the sand" and say we can’t risk crossing that, and apply economics to find the cheapest way to avoid crossing the line. Then you can elaborate that to allow for some small but nonzero probability of crossing it, or to allow for temporary "overshoot", on the grounds that it might be okay to briefly cross the line, as long as we don’t stay on the other side indefinitely. You can tinker with discounting assumptions and the decision framework of expected utility maximization. And so on.

JB: This is fascinating stuff. You’re asking a lot of really important questions — I think I see about 17 question marks up there. Playing the devil’s advocate a bit, I could respond: do you know any answers? Of course I don’t expect "ultimate" answers, especially to profound questions like how much we should allow economics to guide our decisions, versus tempering it with other ethical considerations. But it would be nice to see an example where thinking about these issues turned up new insights that actually changed people’s behavior. Cases where someone said "Oh, I hadn’t thought of that…", and then did something different that had a real effect.

You see, right now the world as it is seems so far removed from the world as it should be that one can even start to doubt the usefulness of pondering the questions you’re raising. As you said yourself, "We’re not yet even coming close to meeting current policy recommendations, so what’s the point of generating new recommendations?"

I think the cap-and-trade idea is a good example, at least as far as sulfur dioxide emissions go: the Clean Air Act Amendments of 1990 managed to reduce SO2 emissions in the US from about 19 million tons in 1980 to about 7.6 million tons in 2007. Of course this idea is actually a bunch of different ideas that need to work together in a certain way… but anyway, some example related to global warming would be a bit more reassuring, given our current problems with that.

NU: Climate change economics has been very influential in generating momentum for putting a price on carbon (through cap-and-trade or otherwise), in Europe and the U.S., by showing that such a policy has the potential to be a net benefit considering the risks of climate change. SO2 emissions markets are one relevant piece of this body of research, although the CO2 problem is much bigger in scope and presents more problems for such approaches. Climate economics has been an important synthesis of decision analysis and scientific uncertainty quantification, which I think we need more of. But to be honest, I’m not sure what immediate impact additional economic work may have on mitigation policy, unless we begin approaching current emissions targets. So from the perspective of immediate applications, I also ponder the usefulness of answering these questions.

That, however, is not the only perspective I think about. I’m also interested in how what we should do is related to what we might learn — if not today, then in the future. There are still important open questions about how well we can see something potentially bad coming, the answers to which could influence policies. For example, if a major ice sheet begins to substantially disintegrate within the next few centuries, would we be able to see that coming soon enough to step up our mitigation efforts in time to prevent it? In reality that’s a probabilistic question, but let’s pretend it’s a binary outcome. If the answer is "yes", that could call for increased investment in "early warning" observation systems, and a closer coupling of policy to the data produced by such systems. (Well, we should be investing more in those anyway, but people might get the point more strongly, especially if research shows that we’d only see it coming if we get those systems in place and tested soon.) If the answer is "no", that could go at least three ways. One way it could go is that the precautionary principle wins: if we think that we could put coastal cities under water, and we wouldn’t see it coming in time to prevent it, that might finally prompt more preemptive mitigation action. Another is that we start looking more seriously at last-ditch geoengineering approaches, or carbon air capture and sequestration. Or, if people give up on modifying the climate altogether, then it could prompt more research and development into adaptation. All of those outcomes raise new policy questions, concerning how much of what policy response we should aim for.

Which brings me to the next policy option. The U.S. presidential science advisor, John Holdren, has said that we have three choices for climate change: mitigate, adapt, or suffer. Regardless of what we do about the first, people will likely be doing some of the other two; the question is how much. If you’re interested in research that has a higher likelihood of influencing policy in the near term, adaptation is probably what you should work on. (That, or technological approaches like climate/carbon geoengineering, energy systems, etc.) People are already looking very seriously at adaptation (and in some cases are already putting plans into place). For example, the Port Authority of Los Angeles needs to know whether, or when, to fortify their docks against sea level rise, and whether a big chunk of their business could disappear if the Northwest Passage through the Arctic Ocean opens permanently. They have to make these investment decisions regardless of what may happen with respect to geopolitical emissions reduction negotiations. The same kinds of learning questions I’m interested in come into play here: what will we know, and when, and how should current decisions be structured knowing that we will be able to periodically adjust those decisions?

So, why am I not working on adaptation? Well, I expect that I will be, in the future. But right now, I’m still interested in a bigger question, which is how well we can bound the large risks and our ability to prevent disasters, rather than just finding the best way to survive them. What is the best and the worst that can happen, in principle? Also, I’m concerned that right now there is too much pressure to develop adaptation policies to a level of detail which we don’t yet have the scientific capability to develop. While global temperature projections are probably reasonable within their stated uncertainty ranges, we have a very limited ability to predict, for example, how precipitation may change over a particular city. But that’s what people want to know. So scientists are trying to give them an answer. But it’s very hard to say whether some of those answers right now are actionably credible. You have to choose your problems carefully when you work in adaptation. Right now I’m opting to look at sea level rise, partly because it is less affected by some of the details of local meteorology.

JB: Interesting. I think I’m going to cut our conversation here, because at this point it took a turn that will really force me to do some reading! And it’s going to take a while. But it should be fun!


The climatic impacts of releasing fossil fuel CO2 to the atmosphere will last longer than Stonehenge, longer than time capsules, longer than nuclear waste, far longer than the age of human civilization so far. – David Archer


This Week’s Finds (Week 302)

9 September, 2010

In "week301" I sketched a huge picture in a very broad brush. Now I’d like to start filling in a few details: not just about the problems we face, but also about what we can do to tackle them. For the reasons I explained last time, I’ll focus on what scientists can do.

As I’m sure you’ve noticed, different people have radically different ideas about the mess we’re in, or if there even is a mess.

Maybe carbon emissions are causing really dangerous global warming. Maybe they’re not — or at least, maybe it’s not as bad as some say. Maybe we need to switch away from fossil fuels to solar power, or wind. Maybe nuclear power is the answer, because solar and wind are intermittent. Maybe nuclear power is horrible! Maybe using less energy is the key. But maybe boosting efficiency isn’t the way to accomplish that.

Maybe the problem is way too big for any of these conventional solutions to work. Maybe we need carbon sequestration: like, pumping carbon dioxide underground. Maybe we need to get serious about geoengineering — you know, something like giant mirrors in space, to cool down the Earth. Maybe geoengineering is absurdly impractical — or maybe it’s hubris on our part to think we could do it right! Maybe some radical new technology is the answer, like nanotech or biotech. Maybe we should build an intelligence greater than our own and let it solve our problems.

Maybe all this talk is absurd. Maybe all we need are some old technologies, like traditional farming practices, or biochar: any third-world peasant can make charcoal and bury it, harnessing the power of nature to do carbon sequestration without fancy machines. In fact, maybe we need to go back to nature and get rid of the modern technological civilization that’s causing our problems. Maybe this would cause massive famines. But maybe they’re bound to come anyway: maybe overpopulation lies at the root of our problems and only a population crash will solve them. Maybe that idea just proves what we’ve known all along: the environmental movement is fundamentally anti-human.

Maybe all this talk is just focusing on symptoms: maybe what we need is a fundamental change in consciousness. Maybe that’s not possible. Maybe we’re just doomed. Or maybe we’ll muddle through the way we always do. Maybe, in fact, things are just fine!

To help sift through this mass of conflicting opinions, I think I’ll start by interviewing some people.



I’ll start with Nathan Urban, for a couple of reasons. First, he can help me understand climate science and the whole business of how we can assess risks due to climate change. Second, like me, he started out working on quantum gravity! Can I be happy switching from pure math and theoretical physics to more practical stuff? Maybe talking to him will help me find out.

So, here is the first of several conversations with Nathan Urban. This time we’ll talk about what it’s like to shift careers, how he got interested in climate change, and the issue of "climate sensitivity": how much the temperature changes if you double the amount of carbon dioxide in the Earth’s atmosphere.

JB: It’s a real pleasure to interview you, since you’ve successfully made a transition that I’m trying to make now — from "science for its own sake" to work that may help save the planet.

I can’t resist telling our readers that when we first met, you had applied to U.C. Riverside because you were interested in working on quantum gravity. You wound up going elsewhere… and now you’re at Princeton, at the Woodrow Wilson School of Public and International Affairs, working on "global climate change from an Earth system perspective, with an emphasis on Bayesian data-model calibration, probabilistic prediction, risk assessment, and decision analysis". That’s quite a shift!

I’m curious about how you got from point A to point B. What was the hardest thing about it?

NU: I went to Penn State because it had a big physics department and one of the leading centers in quantum gravity. A couple years into my degree my nominal advisor, Lee Smolin, moved to the Perimeter Institute in Canada. PI was brand new and didn’t yet have a formal affiliation with a university to support graduate students, so it was difficult to follow him there. I ended up staying at Penn State, but leaving gravity. That was the hardest part of my transition, as I’d been passionately interested in gravity since high school.

I ultimately landed in computational statistical mechanics, partly due to the Monte Carlo computing background I’d acquired studying the dynamical triangulations approach to quantum gravity. My thesis work was interesting, but by the time I graduated, I’d decided it probably wasn’t my long term career.

During graduate school I had become interested in statistics. This was partly from my Monte Carlo simulation background, partly from a Usenet thread on Bayesian statistics (archived on your web page), and partly from my interest in statistical machine learning. I applied to a postdoc position in climate change advertised at Penn State which involved statistics and decision theory. At the time I had no particular plan to remain at Penn State, knew nothing about climate change, had no prior interest in it, and was a little skeptical that the whole subject had been exaggerated in the media … but I was looking for a job and it sounded interesting and challenging, so I accepted.

I had a great time with that job, because it involved a lot of
statistics and mathematical modeling, was very interdisciplinary — incorporating physics, geology, biogeochemistry, economics, public policy, etc. — and tackled big, difficult questions. Eventually it was time to move on, and I accepted a second postdoc at Princeton doing similar things.

JB: It’s interesting that you applied for that Penn State position even though you knew nothing about climate change. I think there are lots of scientists who’d like to work on environmental issues but feel they lack the necessary expertise. Indeed I sometimes feel that way myself! So what did you do to bone up on climate change? Was it important to start by working with a collaborator who knew more about that side of things?

NU: I think a physics background gives people the confidence (or arrogance!) to jump into a new field, trusting their quantitative skills to see them through.

It was very much like starting over as a grad student again — an experience I’d had before, switching from gravity to condensed matter — except faster. I read. A lot. But at the same time, I worked on a narrowly defined project, in collaboration with an excellent mentor, to get my feet wet and gain depth. The best way to learn is probably to just try to answer some specific research question. You can pick up what you need to know as you go along, with help. (One difficulty is in identifying a good and accessible problem!)

I started by reading the papers cited by the paper upon whose work my research was building. The IPCC Fourth Assessment Report came out shortly after that, which cites many more key references. I started following new articles in major journals, whatever seemed interesting or relevant to me. I also sampled some of the blog debates on climate change. Those were useful to understand what the public’s view of the important controversies may be, which is often very different from the actual controversies within the field. Some posters were willing to write useful tutorials on some aspects of the science as well. And of course I learned through research, through attending group meetings with collaborators, and talking to people.

It’s very important to start out working with a knowledgeable collaborator, and I’m lucky to have many. The history of science is littered with very smart people making serious errors when they get out of their depth. The physicist Leo Szilard once told a biologist colleague to "assume infinite intelligence and zero prior knowledge" when explaining things to him. The error some make is in believing that intelligence alone will suffice. You also have to acquire knowledge, and become intimately familiar with the relevant scientific literature. And you will make mistakes in a new field, no matter how smart you are. That’s where a collaborator is crucial: someone who can help you identify flaws in arguments that you may not notice yourself at first. (And it’s not just to start with, either: I still need collaborators to teach me things about specific models, or data sets, that I don’t know.) Collaborators also can help you become familiar with the literature faster.

It’s helpful to have a skill that others need. I’ve built up expertise in statistical data-model comparison. I read as many statistics papers as I do climate papers, have statistician collaborators, and can speak their own language. I can act as an intermediary between scientists and statisticians. This expertise allows me to collaborate with some climate research groups who happen to lack such expertise themselves. As a result I have a lot of people who are willing to teach me what they know, so we can solve problems that neither of us alone could.

JB: You said you began with a bit of skepticism that perhaps the whole climate change thing had been exaggerated in the media. I think a lot of people feel that way. I’m curious how your attitude evolved as you began studying the subject more deeply. That might be a big question, so maybe we can break it down a little: do you remember the first thing you read that made you think "Wow! I didn’t know that!"?

NU: I’m not sure what was the first. It could have been that most of the warming from CO2 is currently thought to come from feedback effects, rather than its direct greenhouse effect. Or that ice ages (technically, glacial periods) were only 5-6 °C cooler than our preindustrial climate, globally speaking. Many people would guess something much colder, like 10 °C. It puts future warming in perspective to think that it could be as large, or even half as large, as the warming between an ice age and today. "A few degrees" doesn’t sound like much (especially in Celsius, to an American), but historically, it can be a big deal — particularly if you care about the parts of the planet that warm faster than the average rate. Also, I was surprised by the atmospheric longevity of CO2 concentrations. If CO2 is a problem, it will be a problem that’s around for a long time.

JB: These points are so important that I don’t want them to whiz past too quickly. So let me back up and ask a few more questions here.

By "feedback effects", I guess you mean things like this: when it gets warmer, ice near the poles tends to melt. But ice is white, so it reflects sunlight. When ice melts, the landscape gets darker, and absorbs more sunlight, so it gets warmer. So the warming effect amplifies itself — like feedback when a rock band has its amplifiers turned up too high.

On the other hand, any sort of cooling effect also amplifies itself. For example, when it gets colder, more ice forms, and that makes the landscape whiter, so more sunlight gets reflected, making it even colder.

Could you maybe explain some of the main feedback effects and give us numbers that say how big they are?

NU: Yes, feedbacks are when a change in temperature causes changes within the climate system that, themselves, cause further changes in temperature. Ice reflectivity, or "albedo", feedback is a good example. Another is water vapor feedback. When it gets warmer — due to, say, the CO2 greenhouse effect — the evaporation-condensation balance shifts in favor of relatively more evaporation, and the water vapor content of the atmosphere increases. But water vapor, like CO2, is a greenhouse gas, which causes additional warming. (The opposite happens in response to cooling.) These feedbacks which amplify the original cause (or "forcing") are known to climatologists as "positive feedbacks".

A somewhat less intuitive example is the "lapse rate feedback". The greenhouse effect causes atmospheric warming. But this warming itself causes the vertical temperature profile of the atmosphere to change. The rate at which air temperature decreases with height, or lapse rate, can itself increase or decrease. This change in lapse rate depends on interactions between radiative transfer, clouds and convection, and water vapor. In the tropics, the lapse rate is expected to decrease in response to the enhanced greenhouse effect, amplifying the warming in the upper troposphere and suppressing it at the surface. This suppression is a "negative feedback" on surface temperature. Toward the poles, the reverse happens (a positive feedback), but the tropics tend to dominate, producing an overall negative feedback.

Clouds create more complex feedbacks. Clouds have both an albedo effect (they are white and reflect sunlight) and a greenhouse effect. Low clouds tend to be thick and warm, with a high albedo and weak greenhouse effect, and so are net cooling agents. High clouds are often thin and cold, with low albedo and strong greenhouse effect, and are net warming agents. Temperature changes in the atmosphere can affect cloud amount, thickness, and location. Depending on the type of cloud and how temperature changes alter its behavior, this can result in either positive or negative feedbacks.

There are other feedbacks, but these are usually thought of as the big four: surface albedo (including ice albedo), water vapor, lapse rate, and clouds.

For the strengths of the feedbacks, I’ll refer to climate model predictions, mostly because they’re neatly summarized in one place:

Section 8.6 of the Intergovernmental Panel on Climate Change Fourth Assessment Report, Working Group 1 (AR4 WG1).

There are also estimates made from observational data. (Well, data plus simple models, because you need some kind of model of how temperatures depend on CO2, even if it’s just a simple linear feedback model.) But observational estimates are more scattered in the literature and harder to summarize, and some feedbacks are very difficult to estimate directly from data. This is a problem when testing the models. For now, I’ll stick to the models — not because they’re necessarily more credible than observational estimates, but just to make my job here easier.

Conventions vary, but the feedbacks I will give are measured in units of watts per square meter per kelvin. That is, they tell you how much of a radiative imbalance, or power flux, the feedback creates in the climate system in response to a given temperature change. The reciprocal of a feedback tells you how much temperature change you’d get in response to a given forcing.

Water vapor is the largest feedback. Referring to this paper cited in the AR4 WG1 report:

• Brian J. Soden and Isaac M. Held, An assessment of climate feedbacks in coupled ocean-atmosphere models, Journal of Climate 19 (2006), 3354-3360.

you can see that climate models predict a range of water vapor feedbacks of 1.48 to 2.14 W/m2/K.

The second largest in magnitude is lapse rate feedback, -0.41 to -1.27 W/m2/K. However, water vapor and lapse rate feedbacks are often combined into a single feedback, because stronger water vapor feedbacks also tend to produce stronger lapse rate feedbacks. The combined water vapor+lapse rate feedback ranges from 0.81 to 1.20 W/m2/K.

Clouds are the next largest feedback, 0.18 to 1.18 W/m2/K. But as you can see, different models can predict very different cloud feedbacks. It is the largest feedback uncertainty.

After that comes the surface albedo feedback. Its range is 0.07 to 0.34 W/m2/K.

People don’t necessarily find feedback values intuitive. Since
everyone wants to know what that means in terms of the climate, I’ll explain how to convert feedbacks into temperatures.

First, you have to assume a given amount of radiative forcing: a stronger greenhouse effect causes more warming. For reference, let’s consider a doubling of atmospheric CO2, which is estimated to create a greenhouse effect forcing of 4±0.3 W/m2. (The error bars represent the range of estimates I’ve seen, and aren’t any kind of statistical bound.) How much greenhouse warming? In the absence of feedbacks, about 1.2±0.1 °C of warming.

How much warming, including feedbacks? To convert a feedback to a temperature, add it to the so-called "Planck feedback" to get a combined feedback which accounts for the fact that hotter bodies radiate more infrared. Then divide it into the forcing and flip the sign to get the warming. Mathematically, this is….

JB: Whoa! Slow down! I’m glad you finally mentioned the "Planck feedback", because this is the mother of all feedbacks, and we should have talked about it first.

While the name "Planck feedback" sounds technical, it’s pathetically simple: hotter things radiate more heat, so they tend to cool down. Cooler things radiate less heat, so they tend to warm up. So this is a negative feedback. And this is what keeps our climate from spiralling out of control.

This is an utterly basic point that amateurs sometimes overlook — I did it myself at one stage, I’m embarrassed to admit. They say things like:

"Well, you listed a lot of feedback effects, and overall they give a positive feedback — so any bit of warming will cause more warming, while any bit of cooling will cause more cooling. But wouldn’t that mean the climate is unstable? Are you saying that the climate just happens to be perched at an unstable equilibrium, so that the slightest nudge would throw us into either an ice age or a spiral of ever-hotter weather? That’s absurdly unlikely! Climate science is a load of baloney!"

(Well, I didn’t actually say the last sentence: I realized I must be confused.)

The answer is that a hot Earth will naturally radiate away more heat, while a cold Earth will radiate away less. And this is enough to make the total feedback negative.

NU: Yes, the negative Planck feedback is crucial. Without this stabilizing feedback, which is always present for any thermodynamic body, any positive feedback would cause the climate to run away unstably. It’s so important that other feedbacks are often defined relative to it: people call the Planck feedback λ0, and they call the sum of the rest λ. Climatologists tend to take it for granted, and talk about just the non-Planck feedbacks, λ.

As a side note, the definition of feedbacks in climate science is
somewhat confused; different papers have used different conventions, some in opposition to conventions used in other fields like engineering. For a discussion of some of the ways feedbacks have been treated in the literature, see:

• J. R. Bates, Some considerations of the concept of climate
feedback, Quarterly Journal of the Royal Meteorological Society 133 (2007), 545-560.

JB: Okay. Sorry to slow you down like that, but we’re talking to a mixed crowd here.

So: you were saying how much it warms up when we apply a radiative forcing F, some number of watts per square meter. We could do this by turning up the dial on the Sun, or, more realistically, by pouring lots of carbon dioxide into the atmosphere to keep infrared radiation from getting out.

And you said: take the Planck feedback λ0, which is negative, and add to it the sum of all other feedbacks, which we call λ. Divide F by the result, and flip the sign to get the warming.

NU: Right. Mathematically, that’s

T = -F/(λ0+λ)

where

λ0 = -3.2 W/m2/K

is the Planck feedback and λ is the sum of other feedbacks. Let’s look at the forcing from doubled CO2:

F = 4.3 W/m2.

Here I’m using values taken from Soden and Held.

If the other feedbacks vanish (λ=0), this gives a "no-feedback" warming of T = 1.3 °C, which is about equal to the 1.2 °C that I mentioned above.

But we can then plug in other feedback values. For example, the water vapor feedbacks 1.48-2.14 W/m2/K will produce warmings of 2.5 to 4.1 °C, compared to only 1.3 °C without water vapor feedback. This is a huge temperature amplification. If you consider the combined water vapor+lapse rate feedback, that’s still a warming of 1.8 to 2.2 °C, almost a doubling of the "bare" CO2 greenhouse warming.
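Here is a minimal Python sketch of the recipe above, added just as an illustration, not as Nathan’s own calculation. The only inputs taken from the interview are the forcing F = 4.3 W/m2 and the Planck feedback λ0 = -3.2 W/m2/K; the rest is the formula T = -F/(λ0+λ):

    # Convert climate feedbacks (W/m^2/K) into equilibrium warming (K),
    # using the linear relation T = -F / (lambda_0 + lambda) quoted above.

    F = 4.3          # forcing from doubled CO2, W/m^2 (Soden & Held value)
    PLANCK = -3.2    # Planck feedback lambda_0, W/m^2/K

    def warming(feedback_sum):
        """Equilibrium warming for a given sum of non-Planck feedbacks."""
        return -F / (PLANCK + feedback_sum)

    print(f"no feedbacks:                 {warming(0.0):.2f} C")
    print(f"water vapor only (1.48-2.14): {warming(1.48):.2f} to {warming(2.14):.2f} C")
    print(f"water vapor + lapse rate:     {warming(0.81):.2f} to {warming(1.20):.2f} C")

The output matches the ranges quoted above: roughly 2.5 to 4.1 °C for water vapor alone, and 1.8 to 2.2 °C for the combined water vapor and lapse rate feedback.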

JB: Thanks for the intro to feedbacks — very clear. So, it seems the "take-home message", as annoying journalists like to put it, is this. When we double the amount of carbon dioxide in the atmosphere, as we’re well on the road to doing, we should expect significantly more than the 1.2 degree Celsius rise in temperature that we’d get without feedbacks.

What are the best estimates for exactly how much?

NU: The IPCC currently estimates a range of 2 to 4.5 °C for the overall climate sensitivity (the warming due to a doubling of CO2), compared to the 1.2 °C warming with no feedbacks. See Section 8.6 of the AR4 WG1 report for model estimates and Section 9.6 for observational estimates. An excellent review article on climate sensitivity is:

• Reto Knutti and Gabriele C. Hegerl, The equilibrium sensitivity of the Earth’s temperature to radiation changes, Nature Geoscience 1 (2008), 735-748.

I also recommend this review article on linear feedback analysis:

• Gerard Roe, Feedbacks, timescales, and seeing red, Annual Review of Earth and Planetary Sciences 37 (2009), 93-115.

But note that there are different feedback conventions; Roe’s λ is the negative of the reciprocal of the Soden & Held λ that I use, i.e. it’s a direct proportionality between forcing and temperature.

JB: Okay, I’ll read those.

Here’s another obvious question. You’ve listed estimates of feedbacks based on theoretical calculations. But what’s the evidence that these theoretical feedbacks are actually right?

NU: As I mentioned, there are also observational estimates of feedbacks. There are two approaches: to estimate the total feedback acting in the climate system, or to estimate all the individual feedbacks (that we know about). The former doesn’t require us to know what all the individual feedbacks are, but the latter allows us to verify our physical understanding of the individual feedback processes. I’m more familiar with the total feedback method, and have published my own simple estimate as a byproduct of an uncertainty analysis about the future ocean circulation:

• Nathan M. Urban and Klaus Keller, Probabilistic hindcasts and projections of the coupled climate, carbon cycle and Atlantic meridional overturning circulation system: a Bayesian fusion of century-scale observations with a simple model, Tellus A, July 16, 2010.

I will stick to discussing this method. To make a long story short, the observational and model estimates generally agree to within their estimated uncertainty bounds. But let me explain a bit more about where the observational estimates come from.

To estimate the total feedback, you first estimate the radiative forcing of the system, based on historic data on greenhouse gases, volcanic and industrial aerosols, black carbon (soot), solar activity, and other factors which can change the Earth’s radiative balance. Then you predict how much warming you should get from that forcing using a climate model, and tune the model’s feedback until it matches the observed warming. The tuned feedback factor is your observational estimate.

As I said earlier, there is no totally model-independent way of estimating feedbacks — you have to use some formula to turn forcings into temperatures. There is a trade-off between using simple formulas with few assumptions and using more realistic models with assumptions that are harder to verify. So far people have mostly used simple models, not only for transparency but also because they’re fast enough, and have few enough free parameters, to undertake a comprehensive uncertainty analysis.

What I’ve described is the "forward model" approach, where you run a climate model forward in time and match its output to data. For a trivial linear model of the climate, you can do something even simpler, which is the closest to a "model independent" calculation you can get: statistically regress forcing against temperature. This is the approach taken by, for example:

• Piers M. de F. Forster and Jonathan M. Gregory, The climate sensitivity and its components diagnosed from Earth radiation budget data, Journal of Climate 19 (2006), 39-52.
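As a toy illustration of that regression idea, here is a little Python sketch. It is not the actual Forster and Gregory calculation: the forcing ramp, the noise level and the "true" feedback value are all made up, and it ignores the ocean heat uptake lag discussed below.

    # Toy "regress temperature against forcing" estimate of the total feedback,
    # assuming a purely linear, instantaneous response T = -F / (lambda_0 + lambda).
    import numpy as np

    rng = np.random.default_rng(0)

    lambda_total = -3.2 + 1.9                      # assumed total feedback, W/m^2/K
    F = 0.04 * np.arange(101)                      # made-up forcing ramp over a century, W/m^2
    T_true = -F / lambda_total                     # the linear response
    T_obs = T_true + rng.normal(0.0, 0.1, F.size)  # add observational noise

    slope, intercept = np.polyfit(F, T_obs, 1)     # least-squares fit T = slope*F + intercept
    print(f"fitted sensitivity parameter: {slope:.2f} K per W/m^2")
    print(f"implied total feedback: {-1/slope:.2f} W/m^2/K  (true value: {lambda_total:.2f})")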

In the "total feedback" forward model approach, there are two major confounding factors which prevent us from making precise feedback estimates. One is that we’re not sure what the forcing is. Although we have good measurements of trace greenhouse gases, there is an important cooling effect produced by air pollution. Industrial emissions create a haze of aerosols in the atmosphere which reflects sunlight and cools the planet. While this can be measured, this direct effect is also supplemented by a far less understood indirect effect: the aerosols can influence cloud formation, which has its own climate effect. Since we’re not sure how strong that is, we’re not sure whether there is a strong or a weak net cooling effect from aerosols. You can explain the observed global warming with a strong feedback whose effects are partially cancelled by a strong aerosol cooling, or with a weak feedback along with weak aerosol cooling. Without precisely knowing one, you can’t precisely determine the other.

The other confounding factor is the rate at which the ocean takes up heat from the atmosphere. The oceans are, by far, the climate system’s major heat sink. The rate at which heat mixes into the ocean determines how quickly the surface temperature responds to a forcing. There is a time lag between applying a forcing and seeing the full response realized. Any comparison of forcing to response needs to take that lag into account. One way to explain the surface warming is with a strong feedback but a lot of heat mixing down into the deeper ocean, so you don’t see all the surface warming at once. Or you can do it with a weak feedback, and most of the heat staying near the surface, so you see the surface warming quickly. For a discussion, see:

• Nathan M. Urban and Klaus Keller, Complementary observational constraints on climate sensitivity, Geophysical Research Letters 36 (2009), L04708.

We don’t know precisely what this rate is, since it’s been hard to
monitor the whole ocean over long time periods (and there isn’t exactly a single "rate", either).

This is getting long enough, so I’m going to skip over a discussion of individual feedback estimates. These have been applied to various specific processes, such as water vapor feedback, and involve comparing, say, how the water vapor content of the atmosphere has changed to how the temperature of the atmosphere has changed. I’m also skipping a discussion of paleoclimate estimates of past feedbacks. It follows the usual formula of "compare the estimated forcing to the reconstructed temperature response", but there are complications because the boundary conditions were different (different surface albedo patterns, variations in the Earth’s orbit, or even continental configurations if you go back far enough) and the temperatures can only be indirectly inferred.

JB: Thanks for the summary of these complex issues. Clearly I’ve got my reading cut out for me.

What do you say to people like Lindzen, who say negative feedbacks due to clouds could save the day?

NU: Climate models tend to predict a positive cloud feedback, but it’s certainly possible that the net cloud feedback could be negative. However, Lindzen seems to think it’s so negative that it makes the total climate feedback negative, outweighing all positive feedbacks. That is, he claims a climate sensitivity even lower than the "bare" no-feedback value of 1.2 °C. I think Lindzen’s work has its own problems (there are published responses to his papers with more details). But generally speaking, independent of Lindzen’s specific arguments, I don’t think such a low climate sensitivity is supportable by data. It would be difficult to reproduce the modern instrumental atmospheric and ocean temperature data with such a low sensitivity. And it would be quite difficult to explain the large changes in the Earth’s climate over its geologic history if there were a stabilizing feedback that strong. The feedbacks I’ve mentioned generally act in response to any warming or cooling, not just from the CO2 greenhouse effect, so a strongly negative feedback would tend to prevent the climate from changing much at all.

JB: Yes, ever since the Antarctic froze over about 12 million years ago, it seems the climate has become increasingly "jittery":



As soon as I saw the incredibly jagged curve at the right end of this graph, I couldn’t help but think that some positive feedback is making it easy for the Earth to flip-flop between warmer and colder states. But then I wondered what "tamed" this positive feedback and kept the temperature between certain limits. I guess that the negative Planck feedback must be involved.

NU: You have to be careful: in the figure you cite, the resolution of the data decreases as you go back in time, so you can’t see all of the variability that could have been present. A lot of the high frequency variability (< 100 ky) is averaged out, so the more recent glacial-interglacial oscillations in temperature would not have been easily visible in the earlier data if they had occurred back then.

That being said, there has been a real change in variability over the time span of that graph. As the climate cooled from a "greenhouse" to an "icehouse" over the Cenozoic era, the glacial-interglacial cycles were able to start. These big swings in climate are a result of ice albedo feedback, when large continental ice sheets form and disintegrate, and weren’t present in earlier greenhouse climates. Also, as you can see from the last 5 million years:



the glacial-interglacial cycles themselves have gotten bigger over time (and the dominant period changed from 41 to 100 ky).

As a side note, the observation that glacial cycles didn’t occur in hot climates highlights the fact that climate sensitivity can be state-dependent. The ice albedo feedback, for example, vanishes when there is no ice. This is a subtle point when using paleoclimate data to constrain the climate sensitivity, because the sensitivity at earlier times might not be the same as the sensitivity now. Of course, they are related to each other, and you can make inferences about one from the other with additional physical reasoning. I do stand by my previous remarks: I don’t think you can explain past climate if the (modern) sensitivity is below 1 °C.

JB: I have one more question about feedbacks. It seems that during the last few glacial cycles, there’s sometimes a rise in
temperature before a rise in CO2 levels. I’ve heard people offer this explanation: warming oceans release CO2. Could that be another important feedback?

NU: Temperature affects both land and ocean carbon sinks, so it is another climate feedback (warming changes the amount of CO2 remaining in the atmosphere, which then changes temperature). The ocean is a very large repository of carbon, and both absorbs CO2 from, and emits CO2 to, the atmosphere. Temperature influences the balance between absorption and emission. One obvious influence is through the "solubility pump": CO2 dissolves less readily in warmer water, so as temperatures rise, the ocean can absorb carbon from the atmosphere less effectively. This is related to Henry’s law in chemistry.

JB: Henry’s law? Hmm, let me look up the Wikipedia article on Henry’s law. Okay, it basically just says that at any fixed temperature, the amount of carbon dioxide that’ll dissolve in water is proportional to the amount of carbon dioxide in the air. But what really matters for us is that when it gets warmer, this constant of proportionality goes down, so the water holds less CO2. Like you said.
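For the curious, here is a rough Python sketch of that temperature dependence. The numbers are commonly quoted textbook values for CO2 in water, roughly 0.034 mol/(L·atm) at 25 °C with a van ’t Hoff temperature coefficient of about 2400 K, so treat them as approximate rather than authoritative:

    # Henry's law with a simple van 't Hoff temperature correction for CO2.
    # Textbook-ish values; treat as approximate.
    import math

    KH_298 = 0.034        # solubility at 298.15 K, mol / (L atm)
    TEMP_COEFF = 2400.0   # K; controls how fast solubility falls as water warms

    def henry_constant(T_kelvin):
        """Approximate CO2 solubility constant at temperature T."""
        return KH_298 * math.exp(TEMP_COEFF * (1.0 / T_kelvin - 1.0 / 298.15))

    pCO2 = 390e-6  # partial pressure of CO2 in the air, atm (about 390 ppm)
    for T_celsius in (15, 17, 19):
        kH = henry_constant(273.15 + T_celsius)
        dissolved = kH * pCO2    # Henry's law: dissolved CO2 proportional to pCO2
        print(f"{T_celsius} C: {dissolved * 1e6:.1f} micromol/L of dissolved CO2")

Warmer water, less dissolved CO2: exactly the point above.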

NU: But this is not the only process going on. Surface warming leads to more stratification of the upper ocean layers and can reduce the vertical mixing of surface waters into the depths. This is important to the carbon cycle because some of the dissolved CO2 which is in the surface layers can return to the atmosphere, as part of an equilibrium exchange cycle. However, some of that carbon is also transported to deep water, where it can no longer exchange with the atmosphere, and can be sequestered there for a long time (about a millennium). If you reduce the rate at which carbon is mixed downward, so that relatively more carbon accumulates in the surface layers, you reduce the immediate ability of the ocean to store atmospheric CO2 in its depths. This is another potential feedback.

Another important process, which is more of a pure carbon cycle feedback than a climate feedback, is carbonate buffering chemistry. The straight Henry’s law calculation doesn’t tell the whole story of how carbon ends up in the ocean, because there are chemical reactions going on. CO2 reacts with carbonate ions and seawater to produce bicarbonate ions. Most of the dissolved carbon in the surface waters (about 90%) exists as bicarbonate; only about 0.5% is dissolved CO2, and the rest is carbonate. This "bicarbonate buffer" greatly enhances the ability of the ocean to absorb CO2 from the atmosphere beyond what simple thermodynamic arguments alone would suggest. A keyword here is the "Revelle factor", which is the relative ratio of CO2 to total carbon in the ocean. (A Revelle factor of 10, which is about the ocean average, means that a 10% increase in CO2 leads to a 1% increase in dissolved inorganic carbon.)

As more CO2 is added to the ocean, chemical reactions consume carbonate and produce hydrogen ions, leading to ocean acidification. You have already discussed this on your blog. In addition to acidification, the chemical buffering effect is lessened (the Revelle factor increased) when there are fewer carbonate ions available to participate in reactions. This weakens the ocean carbon sink. This is a feedback, but it is a purely carbon cycle feedback rather than a climate feedback, since only carbonate chemistry is involved. There can also be an indirect climate feedback, if climate change alters the spatial distribution of the Revelle factor in the ocean by changing the ocean’s circulation.

For more on this, try Section 7.3.4 of the IPCC AR4 WG1 report and Sections 8.3 and 10.2 of:

• J. L. Sarmiento and N. Gruber, Ocean Biogeochemical Dynamics, Princeton U. Press, Princeton, 2006.

JB: I’m also curious about other feedbacks. For example, I’ve heard that methane is an even more potent greenhouse gas than CO2, though it doesn’t hang around as long. And I’ve heard that another big positive feedback mechanism might be the release of methane from melting permafrost. Or maybe even from "methane clathrates" down at the bottom of the ocean! There’s a vast amount of methane down there, locked in cage-shaped ice crystals. As the ocean warms, some of this could be released. Some people even worry that this effect could cause a "tipping point" in the Earth’s climate. But I won’t force you to tell me your opinions on this — you’ve done enough for one week.

Instead, I just want to make a silly remark about hypothetical situations where there’s so much positive feedback that it completely cancels the Planck feedback. You see, as a mathematician, I couldn’t help wondering about this formula:

T = -F/(λ0+λ)

The Planck feedback λ0 is negative. The sum of all the other feedbacks, namely λ, is positive. So what if they add up to zero? Then we’d be dividing by zero! When I last checked, that was a no-no.

Here’s my guess. If λ0+λ becomes zero, the climate loses its stability: it can drift freely. A slight tap can push it arbitrarily far, like a ball rolling on a flat table.

And if λ were actually big enough to make λ0+λ positive, the climate would be downright unstable, like a ball perched on top of a hill!

But all this is only in some linear approximation. In reality, a hot object radiates power proportional to the fourth power of its temperature. So even if the Earth’s climate is unstable in some linear approximation, the Planck feedback due to radiation will eventually step in and keep the Earth from heating up, or cooling down, indefinitely.
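Here is a back-of-the-envelope check on the size of the Planck feedback, just linearizing the Stefan-Boltzmann law around an effective emission temperature of about 255 K. It lands in the same ballpark as the -3.2 W/m2/K quoted earlier; the precise value depends on details of the atmosphere’s vertical structure, so this is only a sanity check:

    # Linearize the Stefan-Boltzmann law P = sigma * T^4 around Earth's
    # effective emission temperature to estimate the Planck feedback.
    SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
    T_EFF = 255.0     # Earth's effective emission temperature, K

    dP_dT = 4 * SIGMA * T_EFF**3
    print(f"d(sigma*T^4)/dT at 255 K = {dP_dT:.2f} W/m^2 per K")
    print(f"so the Planck feedback is roughly -{dP_dT:.1f} W/m^2/K")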

NU: Yes, we do have to be careful to remember that the formula above is obtained from a linear feedback analysis. For a discussion of climate sensitivity in a nonlinear analysis to second order, see:

• I. Zaliapin and M. Ghil, Another look at climate sensitivity, Nonlinear Processes in Geophysics 17 (2010), 113-122.

JB: Hmm, there’s some nice catastrophe theory in there — I see a fold catastrophe in Figure 5, which gives a "tipping point".

Okay. Thanks for everything, and we’ll continue next week!


The significant problems we have cannot be solved at the same level of thinking with which we created them. – Albert Einstein


This Week’s Finds (Week 301)

27 August, 2010

The first 300 issues of This Week’s Finds were devoted to the beauty of math and physics. Now I want to bite off a bigger chunk of reality. I want to talk about all sorts of things, but especially how scientists can help save the planet. I’ll start by interviewing some scientists with different views on the challenges we face — including some who started out in other fields, because I’m trying to make that transition myself.

By the way: I know “save the planet” sounds pompous. As George Carlin joked: “Save the planet? There’s nothing wrong with the planet. The planet is fine. The people are screwed.” (He actually put it a bit more colorfully.)

But I believe it’s more accurate when he says:

I think, to be fair, the planet probably sees us as a mild threat. Something to be dealt with. And I am sure the planet will defend itself in the manner of a large organism, like a beehive or an ant colony, and muster a defense.

I think we’re annoying the biosphere. I’d like us to become less annoying, both for its sake and our own. I actually considered using the slogan how scientists can help humans be less annoying — but my advertising agency ran a focus group, and they picked how scientists can help save the planet.

Besides interviewing people, I want to talk about where we stand on various issues, and what scientists can do. It’s a very large task, so I’m really hoping lots of you reading this will help out. You can explain stuff, correct mistakes, and point me to good sources of information. With a lot of help from Andrew Stacey, I’m starting a wiki where we can collect these pointers. I’m hoping it will grow into something interesting.

But today I’ll start with a brief overview, just to get things rolling.

In case you haven’t noticed: we’re heading for trouble in a number of ways. Our last two centuries were dominated by rapid technology change and a rapidly soaring population:

The population is still climbing fast, though the percentage increase per year is dropping. Energy consumption per capita is also rising. So, from 1980 to 2007 the world-wide usage of power soared from 10 to 16 terawatts.

Most of this power now comes from fossil fuels. So, we’re putting huge amounts of carbon dioxide into the air: 30 billion metric tons in 2007. So, the carbon dioxide concentration of the atmosphere is rising at a rapid clip: from about 290 parts per million before the industrial revolution, to about 370 in the year 2000, to about 390 now:



 

As you’d expect, temperatures are rising:



 

But how much will they go up? The ultimate amount of warming will largely depend on the total amount of carbon dioxide we put into the air. The research branch of the National Academy of Sciences recently put out a report on these issues:

• National Research Council, Climate Stabilization Targets: Emissions, Concentrations, and Impacts over Decades to Millennia, 2010.

Here are their estimates:



 

You’ll note there’s lots of uncertainty, but a rough rule of thumb is that each doubling of carbon dioxide will raise the temperature around 3 degrees Celsius. Of course people love to argue about these things: you can find reasonable people who’ll give a number anywhere between 1.5 and 4.5 °C, and unreasonable people who say practically anything. We’ll get into this later, I’m sure.
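To see what the rule of thumb means in numbers, here is a little Python illustration. It is just a back-of-the-envelope, not the report’s calculation: it assumes the warming scales with the logarithm of the CO2 ratio, uses the preindustrial baseline of about 290 ppm mentioned above, and takes 3 °C per doubling.

    # Rule-of-thumb equilibrium warming: 3 C per doubling of CO2,
    # relative to a preindustrial baseline of about 290 ppm.
    import math

    def warming(c_ppm, c0_ppm=290.0, per_doubling=3.0):
        """Warming in C implied by the logarithmic rule of thumb."""
        return per_doubling * math.log2(c_ppm / c0_ppm)

    for c in (390, 450, 560, 580):
        print(f"{c} ppm: about {warming(c):.1f} C of warming")

On this crude reckoning, today’s 390 ppm already corresponds to something like 1.3 °C of eventual warming, and a full doubling to 580 ppm gives the 3 °C figure.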

But anyway: if we keep up “business as usual”, it’s easy to imagine us doubling the carbon dioxide sometime this century, so we need to ask: what would a world 3 °C warmer be like?

It doesn’t sound like much… until you realize that the Earth was only about 6 °C colder during the last ice age, and the Antarctic had no ice the last time the Earth was about 4 °C warmer. You also need to bear in mind the shocking suddenness of the current rise in carbon dioxide levels:



You can see several ice ages here — or technically, ‘glacial periods’. Carbon dioxide concentration and temperature go hand in hand, probably due to some feedback mechanisms that make each influence the other. But the scary part is the vertical line on the right where the carbon dioxide shoots up from 290 to 390 parts per million — instantaneously from a geological point of view, and to levels not seen for a long time. Species can adapt to slow climate changes, but we’re trying a radical experiment here.

But what, specifically, could be the effects of a world that’s 3 °C warmer? You can get some idea from the National Research Council report. Here are some of their predictions. I think it’s important to read these, to see that bad things will happen, but the world will not end. Psychologically, it’s easy to avoid taking action if you think there’s no problem — but it’s also easy if you think you’re doomed and there’s no point.

Between their predictions (in boldface) I’ve added a few comments of my own. These comments are not supposed to prove anything. They’re just anecdotal examples of the kind of events the report says we should expect.

For 3 °C of global warming, 9 out of 10 northern hemisphere summers will be “exceptionally warm”: warmer in most land areas than all but about 1 of the summers from 1980 to 2000.

This summer has certainly been exceptionally warm: for example, worldwide, it was the hottest June in recorded history, while July was the second hottest, beat out only by 2003. Temperature records have been falling like dominos. This is a taste of the kind of thing we might see.

Increases of precipitation at high latitudes and drying of the already semi-arid regions are projected with increasing global warming, with seasonal changes in several regions expected to be about 5-10% per degree of warming. However, patterns of precipitation show much larger variability across models than patterns of temperature.

Back home in southern California we’re in our fourth year of drought, which has led to many wildfires.

Large increases in the area burned by wildfire are expected in parts of Australia, western Canada, Eurasia and the United States.

We are already getting some unusually intense fires: for example, the Black Saturday bushfires that ripped through Victoria in February 2009, the massive fires in Greece in 2007, and the hundreds of wildfires that broke out in Russia this July.

Extreme precipitation events — that is, days with the top 15% of rainfall — are expected to increase by 3-10% per degree of warming.

The extent to which these events cause floods, and the extent to which these floods cause serious damage, will depend on many complex factors. But today it is hard not to think about the floods in Pakistan, which left about 20 million homeless, and ravaged an area equal to that of California.

In many regions the amount of flow in streams and rivers is expected to change by 5-15% per degree of warming, with decreases in some areas and increases in others.

The total number of tropical cyclones should decrease slightly or remain unchanged. Their wind speed is expected to increase by 1-4% per degree of warming.

It’s a bit counterintuitive that warming could decrease the number of cyclones, while making them stronger. I’ll have to learn more about this.

The annual average sea ice area in the Arctic is expected to decrease by 15% per degree of warming, with more decrease in the summertime.

The area of Arctic ice reached a record low in the summer of 2007, and the fabled Northwest Passage opened up for the first time in recorded history. Then the ice area bounced back. This year it was low again… but what matters more is the overall trend:



 

Global sea level has risen by about 0.2 meters since 1870. The sea level rise by 2100 is expected to be at least 0.6 meters due to thermal expansion and loss of ice from glaciers and small ice caps. This could be enough to permanently displace as many as 3 million people — and raise the risk of floods for many millions more. Ice loss is also occurring in parts of Greenland and Antarctica, but the effect on sea level in the next century remains uncertain.

Up to 2 degrees of global warming, studies suggest that crop yield gains and adaptation, especially at high latitudes, could balance losses in tropical and other regions. Beyond 2 degrees, studies suggest a rise in food prices.

The first sentence there is the main piece of good news — though not if you’re a poor farmer in central Africa.

Increased carbon dioxide also makes the ocean more acidic and lowers the ability of many organisms to make shells and skeleta. Seashells, coral, and the like are made of aragonite, one of the two crystal forms of calcium carbonate. North polar surface waters will become under-saturated for aragonite if the level of carbon dioxide in the atmosphere rises to 400-450 parts per million. Then aragonite will tend to dissolve, rather than form from seawater. For south polar surface waters, this effect will occur at 500-660 ppm. Tropical surface waters and deep ocean waters are expected to remain supersaturated for aragonite throughout the 21st century, but coral reefs may be negatively impacted.

Coral reefs are also having trouble due to warming oceans. For example, this summer there was a mass dieoff of corals off the coast of Indonesia due to ocean temperatures that were 4 °C higher than average.

Species are moving toward the poles to keep cool: the average shift over many types of terrestrial species has been 6 kilometers per decade. The rate of extinction of species will be enhanced by climate change.

I have a strong fondness for the diversity of animals and plants that grace this planet, so this particularly perturbs me. The report does not venture a guess for how many species may go extinct due to climate change, probably because it’s hard to estimate. However, it states that the extinction rate is now roughly 500 times what it was before humans showed up. The extinction rate is measured in extinctions per million years per species. For mammals, it’s shot up from roughly 0.1-0.5 to roughly 50-200. That’s what I call annoying the biosphere!

So, that’s a brief summary of the problems that carbon dioxide emissions may cause. There’s just one more thing I want to say about this now.

Once carbon dioxide is put into the atmosphere, about 50% of it will stay there for decades. About 30% of it will stay there for centuries. And about 20% will stay there for thousands of years:



This particular chart is based on some 1993 calculations by Wigley. Later calculations confirm this idea: the carbon we burn will haunt our skies essentially forever:

• Mason Inman, Carbon is forever, Nature Reports Climate Change, 20 November 2008.

This is why we’re in serious trouble. In the above article, James Hansen puts it this way:

Because of this long CO2 lifetime, we cannot solve the climate problem by slowing down emissions by 20% or 50% or even 80%. It does not matter much whether the CO2 is emitted this year, next year, or several years from now. Instead … we must identify a portion of the fossil fuels that will be left in the ground, or captured upon emission and put back into the ground.

But I think it’s important to be more precise. We can put off global warming by reducing carbon dioxide emissions, and that may be a useful thing to do. But to prevent it, we have to cut our usage of fossil fuels to a very small level long before we’ve used them up.



Theoretically, another option is to quickly deploy new technologies to suck carbon dioxide out of the air, or cool the planet in other ways. But there’s almost no chance such technologies will be practical soon enough to prevent significant global warming. They may become important later on, after we’ve already screwed things up. We may be miserable enough to try them, even though they may carry significant risks of their own.

So now, some tough questions:

If we decide to cut our usage of fossil fuels dramatically and quickly, how can we do it? How should we do it? What’s the least painful way? Or should we just admit that we’re doomed to global warming and learn to live with it, at least until we develop technologies to reverse it?

And a few more questions, just for completeness:

Could this all be just a bad dream — or more precisely, a delusion of some sort? Could it be that everything is actually fine? Or at least not as bad as you’re saying?

I won’t attempt to answer any of these now. We’ll have to keep coming back to them, over and over.

So far I’ve only talked about carbon dioxide emissions. There are lots of other problems we should tackle, too! But presumably many of these are just symptoms of some deeper underlying problem. What is this deeper problem? I’ve been trying to figure that out for years. Is there any way to summarize what’s going on, or it is just a big complicated mess?

Here’s my attempt at a quick summary: the human race makes big decisions based on an economic model that ignores many negative externalities.

A ‘negative externality’ is, very roughly, a way in which my actions impose a cost on you, for which I don’t pay any price.

For example: suppose I live in a high-rise apartment and my toilet breaks. Instead of fixing it, I realize that I can just use a bucket — and throw its contents out the window! Whee! If society has no mechanism for dealing with people like me, I pay no price for doing this. But you, down there, will be very unhappy.

This isn’t just theoretical. Once upon a time in Europe there were few private toilets, and people would shout “gardyloo!” before throwing their waste down to the streets below. In retrospect that seems disgusting, but many of the big problems that afflict us now can be seen as the result of equally disgusting externalities. For example:

Carbon dioxide pollution caused by burning fossil fuels. If the expected costs of global warming and ocean acidification were included in the price of fossil fuels, other sources of energy would more quickly become competitive. This is the idea behind a carbon tax or a ‘cap-and-trade program’ where companies pay for permits to put carbon dioxide into the atmosphere.

Dead zones. Put too much nitrogen and phosphorus in the river, and lots of algae will grow in the ocean near the river’s mouth. When the algae dies and rots, the water runs out of dissolved oxygen, and fish cannot live there. Then we have a ‘dead zone’. Dead zones are expanding and increasing in number. For example, there’s one about 20,000 square kilometers in size near the mouth of the Mississippi River. Hog farming, chicken farming and runoff from fertilized crop lands are largely to blame.

Overfishing. Since there is no ownership of fish, everyone tries to catch as many fish as possible, even though this is depleting fish stocks to the point of near-extinction. There’s evidence that populations of all big predatory ocean fish have dropped 90% since 1950. Populations of cod, bluefin tuna and many other popular fish have plummeted, despite feeble attempts at regulation.

Species extinction due to habitat loss. Since the economic value of intact ecosystems has not been fully reckoned, in many parts of the world there’s little price to pay for destroying them.

Overpopulation. Rising population is a major cause of the stresses on our biosphere, yet it costs less to have your own child than to adopt one. (However, a pilot project in India is offering cash payments to couples who put off having children for two years after marriage.)

One could go on; I haven’t even bothered to mention many well-known forms of air and water pollution. The Acid Rain Program in the United States is an example of how people eliminated an externality: they imposed a cap-and-trade system on sulfur dioxide pollution.

Externalities often arise when we treat some resource as essentially infinite — for example fish, or clean water, or clean air. We thus impose no cost for using it. This is fine at first. But because this resource is free, we use more and more — until it no longer makes sense to act as if we have an infinite amount. As a physicist would say, the approximation breaks down, and we enter a new regime.

This is happening all over the place now. We have reached the point where we need to treat most resources as finite and take this into account in our economic decisions. We can’t afford so many externalities. It is irrational to let them go on.

But what can you do about this? Or what can I do?

We can do the things anyone can do. Educate ourselves. Educate our friends. Vote. Conserve energy. Don’t throw buckets of crap out of apartment windows.

But what can we do that maximizes our effectiveness by taking advantage of our special skills?

Starting now, a large portion of This Week’s Finds will be the continuing story of my attempts to answer this question. I want to answer it for myself. I’m not sure what I should do. But since I’m a scientist, I’ll pose the question a bit more broadly, to make it a bit more interesting.

How scientists can help save the planet — that’s what I want to know.


Addendum: In the new This Week’s Finds, you can often find the source for a claim by clicking on the nearest available link. This includes the figures. Four of the graphs in this issue were produced by Robert A. Rohde and more information about them can be found at Global Warming Art.


During the journey we commonly forget its goal. Almost every profession is chosen as a means to an end but continued as an end in itself. Forgetting our objectives is the most frequent act of stupidity. — Friedrich Nietzsche


This Week’s Finds in Mathematical Physics (Week 300)

11 August, 2010

This is the last of the old series of This Week’s Finds. Soon the new series will start, focused on technology and environmental issues — but still with a hefty helping of math, physics, and other science.

When I decided to do something useful for a change, I realized that the best way to start was by interviewing people who take the future and its challenges seriously, but think about it in very different ways. So far, I’ve done interviews with:

Tim Palmer on climate modeling and predictability.

Thomas Fischbacher on sustainability and permaculture.

Eliezer Yudkowsky on artificial intelligence and the art of rationality.

I hope to do more. I think it’ll be fun having This Week’s Finds be a dialogue instead of a monologue now and then.

Other things are changing too. I started a new blog! If you’re interested in how scientists can help save the planet, I hope you visit:

1) Azimuth, http://johncarlosbaez.wordpress.com

This is where you can find This Week’s Finds, starting now.

Also, instead of teaching math in hot dry Riverside, I’m now doing research at the Centre for Quantum Technologies in hot and steamy Singapore. This too will be reflected in the new This Week’s Finds.

But now… the grand finale of This Week’s Finds in Mathematical Physics!

I’d like to take everything I’ve been discussing so far and wrap it up in a nice neat package. Unfortunately that’s impossible – there are too many loose ends. But I’ll do my best: I’ll tell you how to categorify the Riemann zeta function. This will give us a chance to visit lots of our old friends one last time: the number 24, string theory, zeta functions, torsors, Joyal’s theory of species, groupoidification, and more.

Let me start by telling you how to count.

I’ll assume you already know how to count elements of a set, and move right along to counting objects in a groupoid.

A groupoid is a gadget with a bunch of objects and a bunch of isomorphisms between them. Unlike an element of a set, an object of a groupoid may have symmetries: that is, isomorphisms between it and itself. And unlike an element of a set, an object of a groupoid doesn’t always count as “1 thing”: when it has n symmetries, it counts as “1/nth of a thing”. That may seem strange, but it’s really right. We also need to make sure not to count isomorphic objects as different.

So, to count the objects in our groupoid, we go through it, take one representative of each isomorphism class, and add 1/n to our count when this representative has n symmetries.
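If it helps to see this counting rule as an algorithm, here is a minimal sketch in Python (my own illustration, nothing more): describe a groupoid just by the number of symmetries of one representative per isomorphism class, and add up the reciprocals.

from fractions import Fraction

def groupoid_cardinality(symmetry_counts):
    # One entry per isomorphism class: the number of symmetries of a
    # representative object.  Each class contributes 1/n to the count.
    return sum(Fraction(1, n) for n in symmetry_counts)

# A groupoid with two isomorphism classes, whose representatives have
# 2 and 3 symmetries respectively, has cardinality 1/2 + 1/3 = 5/6.
print(groupoid_cardinality([2, 3]))   # 5/6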

Let’s see how this works. Let’s start by counting all the n-element sets!

Now, you may have thought there were infinitely many sets with n elements, and that’s true. But remember: we’re not counting the set of n-element sets – that’s way too big. So big, in fact, that people call it a “class” rather than a set! Instead, we’re counting the groupoid of n-element sets: the groupoid with n-element sets as objects, and one-to-one and onto functions between these as isomorphisms.

All n-element sets are isomorphic, so we only need to look at one. It has n! symmetries: all the permutations of n elements. So, the answer is 1/n!.

That may seem weird, but remember: in math, you get to make up the rules of the game. The only requirements are that the game be consistent and profoundly fun – so profoundly fun, in fact, that it seems insulting to call it a mere “game”.

Now let’s be more ambitious: let’s count all the finite sets. In other words, let’s work out the cardinality of the groupoid where the objects are all the finite sets, and the isomorphisms are all the one-to-one and onto functions between these.

There’s only one 0-element set, and it has 0! symmetries, so it counts for 1/0!. There are tons of 1-element sets, but they’re all isomorphic, and they each have 1! symmetries, so they count for 1/1!. Similarly the 2-element sets count for 1/2!, and so on. So the total count is

1/0! + 1/1! + 1/2! + … = e

The base of the natural logarithm is the number of finite sets! You learn something new every day.
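Here’s a quick numerical check of that sum, a sketch of my own (the cutoff at 20 terms is arbitrary, but the series converges very fast):

import math

# Partial sum of 1/0! + 1/1! + 1/2! + ..., the cardinality of the
# groupoid of finite sets.
total = sum(1 / math.factorial(n) for n in range(20))
print(total)     # 2.718281828...
print(math.e)    # 2.718281828...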

Spurred on by our success, you might want to find a groupoid whose cardinality is π. It’s not hard to do: you can just find a groupoid whose cardinality is 3, and a groupoid whose cardinality is .1, and a groupoid whose cardinality is .04, and so on, and lump them all together to get a groupoid whose cardinality is 3.14… But this is a silly solution: it doesn’t shed any light on the nature of π.

I don’t want to go into it in detail now, but the previous problem really does shed light on the nature of e: it explains why this number is related to combinatorics, and it gives a purely combinatorial proof that the derivative of e^x is e^x, and lots more. Try these books to see what I mean:

2) Herbert Wilf, Generatingfunctionology, Academic Press, Boston, 1994. Available for free at http://www.cis.upenn.edu/~wilf/.

3) F. Bergeron, G. Labelle, and P. Leroux, Combinatorial Species and Tree-Like Structures, Cambridge, Cambridge U. Press, 1998.

For example: if you take a huge finite set, and randomly pick a permutation of it, the chance every element is mapped to a different element is close to 1/e. It approaches 1/e in the limit where the set gets larger and larger. That’s well-known – but the neat part is how it’s related to the cardinality of the groupoid of finite sets.
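If you want to see that derangement fact with your own eyes, here’s a brute-force sketch of mine (the set size 9 is arbitrary, just big enough to get close to the limit):

import math
from itertools import permutations

n = 9

# Count the permutations of an n-element set that map every element to a
# different element (the "derangements"), and compare the fraction to 1/e.
derangements = sum(1 for f in permutations(range(n))
                   if all(f[i] != i for i in range(n)))

print(derangements / math.factorial(n))   # ≈ 0.367879...
print(1 / math.e)                         # ≈ 0.367879...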

Anyway, I have not succeeded in finding a really illuminating groupoid whose cardinality is π, but recently James Dolan found a nice one whose cardinality is π^2/6, and I want to lead up to that.

Here’s a not-so-nice groupoid whose cardinality is π^2/6. You can build a groupoid as the “disjoint union” of a collection of groups. How? Well, you can think of a group as a groupoid with one object: just one object having that group of symmetries. And you can build more complicated groupoids as disjoint unions of groupoids with one object. So, if you give me a collection of groups, I can take their disjoint union and get a groupoid.

So give me this collection of groups:

Z/1×Z/1, Z/2×Z/2, Z/3×Z/3, …

where Z/n is the integers mod n, also called the “cyclic group” with n elements. Then I’ll take their disjoint union and get a groupoid, and the cardinality of this groupoid is

1/1^2 + 1/2^2 + 1/3^2 + … = π^2/6

This is not as silly as the trick I used to get a groupoid whose cardinality is π, but it’s still not perfectly satisfying, because I haven’t given you a groupoid of “interesting mathematical gadgets and isomorphisms between them”, as I did for e. Later we’ll see Jim’s better answer.
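Numerically this is easy to check, since the one-object groupoid coming from Z/n × Z/n contributes 1/n^2 to the count. Here’s a sketch of mine (a million terms, since the series converges slowly):

import math

# Each group Z/n × Z/n gives a groupoid with one object having n^2
# symmetries, so it contributes 1/n^2 to the total cardinality.
total = sum(1 / n**2 for n in range(1, 10**6 + 1))
print(total)             # ≈ 1.644933...
print(math.pi**2 / 6)    # ≈ 1.644934...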

We might also try taking various groupoids of interesting mathematical gadgets and computing their cardinality. For example, how about the groupoid of all finite groups? I think that’s infinite – there are just “too many”. How about the groupoid of all finite abelian groups? I’m not sure, that could be infinite too.

But suppose we restrict ourselves to abelian groups whose size is some power of a fixed prime p? Then we’re in business! The answer isn’t a famous number like π, but it was computed by Philip Hall here:

4) Philip Hall, A partition formula connected with Abelian groups, Comment. Math. Helv. 11 (1938), 126-129.

We can write the answer using an infinite product:

1/((1-p^{-1})(1-p^{-2})(1-p^{-3}) …)

Or, we can write the answer using an infinite sum:

p(0)/p^0 + p(1)/p^1 + p(2)/p^2 + …

Here p(n) is the number of “partitions” of n: that is, the number of ways to write it as a sum of positive integers in decreasing order. For example, p(4) = 5 since we can write 4 as a sum in 5 ways like this:

4 = 4
4 = 3+1
4 = 2+2
4 = 2+1+1
4 = 1+1+1+1

If you haven’t thought about this before, you can have fun proving that the infinite product equals the infinite sum. It’s a cute fact, and quite famous.
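If you’d rather let a computer have the fun, here’s one way to check the identity as a formal power series in x = 1/p (a sketch of my own, exact up to order x^30): compare the coefficients of the truncated product with the partition numbers p(n).

from functools import lru_cache

N = 30

# p(n): the number of partitions of n, counted recursively as partitions
# of n into parts of size at most m (take m = n to get all of them).
@lru_cache(maxsize=None)
def count_partitions(n, m):
    if n == 0:
        return 1
    return sum(count_partitions(n - k, k) for k in range(1, min(n, m) + 1))

p = [count_partitions(n, n) for n in range(N + 1)]

# Coefficients of x^0, ..., x^N in the product of 1/(1 - x^k) over k,
# where 1/(1 - x^k) = 1 + x^k + x^{2k} + ...  Factors with k > N don't
# affect these coefficients, so truncating the product is harmless.
coeff = [1] + [0] * N
for k in range(1, N + 1):
    geometric = [1 if j % k == 0 else 0 for j in range(N + 1)]
    coeff = [sum(coeff[i] * geometric[j - i] for i in range(j + 1))
             for j in range(N + 1)]

print(p == coeff)   # True
print(p[4])         # 5, matching the five partitions of 4 listed above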

But Hall proved something even cuter. This number

p(0)/p^0 + p(1)/p^1 + p(2)/p^2 + …

is also the cardinality of another, really different groupoid. Remember how I said you can build a groupoid as the “disjoint union” of a collection of groups? To get this other groupoid, we take the disjoint union of all the abelian groups whose size is a power of p.

Hall didn’t know about groupoid cardinality, so here’s how he said it:

The sum of the reciprocals of the orders of all the Abelian groups of order a power of p is equal to the sum of the reciprocals of the orders of their groups of automorphisms.

It’s pretty easy to see that the sum of the reciprocals of the orders of all the abelian groups of order a power of p is

p(0)/p^0 + p(1)/p^1 + p(2)/p^2 + …

To do this, you just need to show that there are p(n) abelian groups with p^n elements. If I show you how it works for n = 4, you can guess how the proof works in general (there’s also a little script after the table that spells out the same correspondence):

4 = 4                 Z/p^4

4 = 3+1           Z/p^3 × Z/p

4 = 2+2           Z/p^2 × Z/p^2

4 = 2+1+1       Z/p^2 × Z/p × Z/p

4 = 1+1+1+1   Z/p × Z/p × Z/p × Z/p
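Here’s the promised little script (my own sketch): it lists one abelian group of order p^n for each partition of n, exactly as in the table, so the count is p(n).

def partitions(n, max_part=None):
    # Yield the partitions of n as tuples of parts in decreasing order.
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

n = 4
for parts in partitions(n):
    group = " × ".join("Z/p^%d" % k if k > 1 else "Z/p" for k in parts)
    print("%d = %s    %s" % (n, "+".join(str(k) for k in parts), group))
# Prints the five lines of the table above; in general there are p(n) lines.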

So, the hard part is showing that

p(0)/p^0 + p(1)/p^1 + p(2)/p^2 + …

is also the sum of the reciprocals of the sizes of the automorphism groups of all abelian groups whose size is a power of p.

I learned of Hall’s result from Aviv Censor, a colleague who is an expert on groupoids. He had instantly realized this result had a nice formulation in terms of groupoid cardinality. We went through several proofs, but we haven’t yet been able to extract any deep inner meaning from them:

5) Avinoam Mann, Philip Hall’s “rather curious” formula for abelian p-groups, Israel J. Math. 96 (1996), part B, 445-448.

6) Francis Clarke, Counting abelian group structures, Proceedings of the AMS, 134 (2006), 2795-2799.

However, I still have hopes, in part because the math is related to zeta functions… and that’s what I want to turn to now.

Let’s do another example: what’s the cardinality of the groupoid of semisimple commutative rings with n elements?

What’s a semisimple commutative ring? Well, since we’re only talking about finite ones, I can avoid giving the general definition and take advantage of a classification theorem. Finite semisimple commutative rings are the same as finite products of finite fields. There’s a finite field with p^n elements whenever p is prime and n is a positive integer. This field is called F_{p^n}, and it has n symmetries. And that’s all the finite fields! In other words, every finite field is isomorphic to one of these.

This is enough to work out the cardinality of the groupoid of semisimple commutative rings with n elements. Let’s do some examples. Let’s try n = 6, for example.

This one is pretty easy. The only way to get a finite product of finite fields with 6 elements is to take the product of F2 and F3:

F2 × F3

This has just one symmetry – the identity – since that’s all the symmetries either factor has, and there’s no symmetry that interchanges the two factors. (Hmm… you may need to check this, but it’s not hard.)

Since we have one object with one symmetry, the groupoid cardinality is

1/1 = 1

Let’s try a more interesting one, say n = 4. Now there are two options:

F4

F2 × F2

The first option has 2 symmetries: remember, F_{p^n} has n symmetries. The second option also has 2 symmetries, namely the identity and the symmetry that switches the two factors. So, the groupoid cardinality is

1/2 + 1/2 = 1

But now let’s try something even more interesting, like n = 16. Now there are 5 options:

F16

F8×F2

F4×F4

F4×F2×F2

F2×F2×F2×F2

The field F16 has 4 symmetries because 16 = 2^4, and any field F_{p^n} has n symmetries. F8×F2 has 3 symmetries, coming from the symmetries of the first factor. F4×F4 has 2 symmetries in each factor and 2 coming from permutations of the factors, for a total of 2×2×2 = 8. F4×F2×F2 has 2 symmetries coming from those of the first factor, and 2 symmetries coming from permutations of the last two factors, for a total of 2×2 = 4 symmetries. And finally, F2×F2×F2×F2 has 4! = 24 symmetries, coming from permutations of the factors. So, the cardinality of this groupoid works out to be

1/4 + 1/3 + 1/8 + 1/4 + 1/24

Hmm, let’s put that on a common denominator:

6/24 + 8/24 + 3/24 + 6/24 + 1/24 = 24/24 = 1

So, we’re getting the same answer again: 1.

Is this just a weird coincidence? No: this is what we always get! For any positive integer n, the groupoid of n-element semisimple commutative rings has cardinality 1. For a proof, see:

7) John Baez and James Dolan, Zeta functions, at http://ncatlab.org/johnbaez/show/Zeta+functions
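Before diving into the proof, here’s a computational sanity check, a sketch of my own. It enumerates the ways of writing n as a product of finite field sizes (prime by prime, a partition of each exponent), counts the symmetries of each product of fields the same way as in the examples above (k Galois symmetries for each factor F_{p^k}, times permutations of equal factors), and adds up the reciprocals.

from fractions import Fraction
from math import factorial
from collections import Counter

def partitions(n, max_part=None):
    if max_part is None:
        max_part = n
    if n == 0:
        yield ()
        return
    for k in range(min(n, max_part), 0, -1):
        for rest in partitions(n - k, k):
            yield (k,) + rest

def prime_factorization(n):
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def semisimple_ring_groupoid_cardinality(n):
    # Sum, over all products of finite fields with n elements in total,
    # of 1/(number of symmetries of that product).
    total = Fraction(1)
    for p, a in prime_factorization(n).items():
        # The factors of characteristic p correspond to a partition of a:
        # a part k means a factor F_{p^k}, which has k symmetries, and
        # equal factors can also be permuted among themselves.
        subtotal = Fraction(0)
        for parts in partitions(a):
            sym = 1
            for k, m in Counter(parts).items():
                sym *= k**m * factorial(m)
            subtotal += Fraction(1, sym)
        total *= subtotal
    return total

print([semisimple_ring_groupoid_cardinality(n) for n in (4, 6, 16, 360)])
# [Fraction(1, 1), Fraction(1, 1), Fraction(1, 1), Fraction(1, 1)]

For n = 16, the five terms it adds up are exactly the 1/4 + 1/3 + 1/8 + 1/4 + 1/24 from the calculation above.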

Now, you might think this fact is just a curiosity, but actually it’s a step towards categorifying the Riemann zeta function. The Riemann zeta function is

ζ(s) = ∑_{n > 0} n^{-s}

It’s an example of a “Dirichlet series”, meaning a series of this form:

∑_{n > 0} a_n n^{-s}

In fact, any reasonable way of equipping finite sets with extra stuff gives a Dirichlet series – and if this extra stuff is “being a semisimple commutative ring”, we get the Riemann zeta function.

To explain this, I need to remind you about “stuff types”, and then explain how they give Dirichlet series.

A stuff type is a groupoid Z where the objects are finite sets equipped with “extra stuff” of some type. More precisely, it’s a groupoid with a functor to the groupoid of finite sets. For example, Z could be the groupoid of finite semisimple commutative rings – that’s the example we care about now. Here the functor forgets that we have a semisimple commutative ring, and only remembers the underlying finite set. In other words, it forgets the “extra stuff”.

In this example, the extra stuff is really just extra structure, namely the structure of being a semisimple commutative ring. But we could also take Z to be the groupoid of pairs of finite sets. A pair of finite sets is a finite set equipped with honest-to-goodness extra stuff, namely another finite set!

Structure is a special case of stuff. If you’re not clear on the difference, try this:

8) John Baez and Mike Shulman, Lectures on n-categories and cohomology, Sec. 2.4: Stuff, structure and properties, in n-Categories: Foundations and Applications, eds. John Baez and Peter May, Springer, Berlin, 2009. Also available as arXiv:math/0608420.

Then you can tell your colleagues: “I finally understand stuff.” And they’ll ask: “What stuff?” And you can answer, rolling your eyes condescendingly: “Not any particular stuff – just stuff, in general!”

But it’s not really necessary to understand stuff in general here. Just think of a stuff type as a groupoid where the objects are finite sets equipped with extra bells and whistles of some particular sort.

Now, if we have a stuff type, say Z, we get a list of groupoids Z(n). How? Simple! Objects of Z are finite sets equipped with some particular type of extra stuff. So, we can take the objects of Z(n) to be the n-element sets equipped with that type of extra stuff. The groupoid Z will be a disjoint union of these groupoids Z(n).

We can encode the cardinalities of all these groupoids into a Dirichlet series:

z(s) = ∑_{n > 0} |Z(n)| n^{-s}

where |Z(n)| is the cardinality of Z(n). In case you’re wondering about the minus sign: it’s just a dumb convention, but I’m too overawed by the authority of tradition to dream of questioning it, even though it makes everything to come vastly more ugly.

Anyway: the point is that a Dirichlet series is like the “cardinality” of a stuff type. To show off, we say stuff types categorify Dirichlet series: they contain more information, and they’re objects in a category (or something even better, like a 2-category) rather than elements of a set.

Let’s look at an example. When Z is the groupoid of finite semisimple commutative rings, then

|Z(n)| = 1

so the corresponding Dirichlet series is the Riemann zeta function:

z(s) = ζ(s)

So, we’ve categorified the Riemann zeta function! Using this, we can construct an interesting groupoid whose cardinality is

ζ(2) = ∑_{n > 0} n^{-2} = π^2/6

How? Well, let’s step back and consider a more general problem. Any stuff type Z gives a Dirichlet series

z(s) = ∑_{n > 0} |Z(n)| n^{-s}

How can we use this to concoct a groupoid whose cardinality is z(s) for some particular value of s? It’s easy when s is a negative integer (here that minus sign raises its ugly head). Suppose S is a set with s elements:

|S| = s

Then we can define a groupoid as follows:

Z(-S) = ∑_{n > 0} Z(n) × n^S

Here we are playing some notational tricks: n^S means “the set of functions from S to our favorite n-element set”, the symbol × stands for the product of groupoids, and ∑ stands for what I’ve been calling the “disjoint union” of groupoids (known more technically as the “coproduct”). So, Z(-S) is a groupoid. But this formula is supposed to remind us of a simpler one, namely

z(-s) = ∑_{n > 0} |Z(n)| n^s

and indeed it’s a categorified version of this simpler formula.

In particular, if we take the cardinality of the groupoid Z(-S), we get the number z(-s). To see this, you just need to check each step in this calculation:

|Z(-S)| = |∑ Z(n) × n^S|

= ∑ |Z(n) × n^S|

= ∑ |Z(n)| × |n^S|

= ∑ |Z(n)| × n^s

= z(-s)

The notation is supposed to make these steps seem plausible.
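To see these steps produce an actual number, here’s a toy check of my own, using a stuff type where the sum converges: let Z(n) be the groupoid of n-element sets themselves, so |Z(n)| = 1/n!, and let S have 2 elements. Then the calculation says |Z(-S)| should be ∑ n^2/n!, which happens to equal 2e.

import math

s = 2   # |S| = 2

# Toy stuff type: Z(n) is the groupoid of n-element sets, so |Z(n)| = 1/n!.
# Then |Z(-S)| = sum over n of |Z(n)| * n^s, since an object is an n-element
# set together with one of the n^s maps from S to it.
total = sum(n**s / math.factorial(n) for n in range(1, 40))
print(total)         # 5.43656...
print(2 * math.e)    # 5.43656...  (the exact value of this sum for s = 2)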

Even better, the groupoid Z(-S) has a nice description in plain English: it’s the groupoid of finite sets equipped with Z-stuff and a map from the set S.

Well, okay – I’m afraid that’s what passes for plain English among mathematicians! We don’t talk to ordinary people very often. But the idea is really simple. Z is some sort of stuff that we can put on a finite set. So, we can do that and also choose a map from S to that set. And there’s a groupoid of finite sets equipped with all this extra baggage, and isomorphisms between those.

If this sounds too abstract, let’s do an example. Say our favorite example, where Z is the groupoid of finite semisimple commutative rings. Then Z(-S) is the groupoid of finite semisimple commutative rings equipped with a map from the set S.

If this still sounds too abstract, let’s do an example. Do I sound repetitious? Well, you see, category theory is the subject where you need examples to explain your examples – and n-category theory is the subject where this process needs to be repeated n times. So, suppose S is a 1-element set – we can just write

S = 1

Then Z(-1) is a groupoid where the objects are finite semisimple commutative rings with a chosen element. The isomorphisms are ring isomorphisms that preserve the chosen element. And the cardinality of this groupoid is

|Z(-1)| = ζ(-1) = 1 + 2 + 3 + …

Whoops – it diverges! Luckily, people who study the Riemann zeta function know that

1 + 2 + 3 + … = -1/12

They get this crazy answer by analytically continuing the Riemann zeta function ζ(s) from values of s with a big positive real part, where it converges, over to values where it doesn’t. And it turns out that this trick is very important in physics. In fact, back in "week124" – "week126", I explained how this formula

ζ(-1) = -1/12

is the reason bosonic string theory works best when our string has 24 extra dimensions to wiggle around in besides the 2 dimensions of the string worldsheet itself.

So, if we’re willing to allow this analytic continuation trick, we can say that

THE GROUPOID OF FINITE SEMISIMPLE COMMUTATIVE RINGS WITH A CHOSEN ELEMENT HAS CARDINALITY -1/12

Someday people will see exactly how this is related to bosonic string theory. Indeed, it should be just a tiny part of a big story connecting number theory to string theory… some of which is explained here:

9) J. M. Luck, P. Moussa, and M. Waldschmidt, eds., Number Theory and Physics, Springer Proceedings in Physics, Vol. 47, Springer-Verlag, Berlin, 1990.

10) C. Itzykson, J. M. Luck, P. Moussa, and M. Waldschmidt, eds, From Number Theory to Physics, Springer, Berlin, 1992.

Indeed, as you’ll see in these books (or in "week126"), the function we saw earlier:

1/((1-p^{-1})(1-p^{-2})(1-p^{-3}) …) = p(0)/p^0 + p(1)/p^1 + p(2)/p^2 + …

is also important in string theory: it shows up as a “partition function”, in the physical sense, where the number p(n) counts the number of ways a string can have energy n if it has one extra dimension to wiggle around in besides the 2 dimensions of its worldsheet.

But it’s the 24th power of this function that really matters in string theory – because bosonic string theory works best when our string has 24 extra dimensions to wiggle around in. For more details, try:

11) John Baez, My favorite numbers: 24. Available at http://math.ucr.edu/home/baez/numbers/24.pdf

But suppose we don’t want to mess with divergent sums: suppose we want a groupoid whose cardinality is, say,

ζ(2) = 1^{-2} + 2^{-2} + 3^{-2} + … = π^2/6

Then we need to categorify the evaluation of Dirichlet series at positive integers. We can only do this for certain stuff types – for example, our favorite one. So, let Z be the groupoid of finite semisimple commutative rings, and let S be a finite set. How can we make sense of

Z(S) = ∑_{n > 0} Z(n) × n^{-S} ?

The hard part is n^{-S}, because this has a minus sign in it. How can we raise an n-element set to the -Sth power? If we could figure out some sort of groupoid that serves as the reciprocal of an n-element set, we’d be done, because then we could take that to the Sth power. Remember, S is a finite set, so to raise something (even a groupoid) to the Sth power, we just multiply a bunch of copies of that something – one copy for each element of S.

So: what’s the reciprocal of an n-element set? There’s no answer in general – but there’s a nice answer when that set is a group, because then that group gives a groupoid with one object, and the cardinality of this groupoid is just 1/n.

Here is where our particular stuff type Z comes to the rescue. Each object of Z(n) is a semisimple commutative ring with n elements, so it has an underlying additive group – which is a group with n elements!

So, we don’t interpret Z(n) × n^{-S} as an ordinary product, but as something a bit sneakier, a “twisted product”. An object in Z(n) × n^{-S} is just an object of Z(n), that is, an n-element semisimple commutative ring. But we define a symmetry of such an object to be a symmetry of this ring together with an S-tuple of elements of its underlying additive group. We compose these symmetries with the help of addition in this group. This ensures that

|Z(n) × n^{-S}| = |Z(n)| × n^{-s}

when |S| = s. And this in turn means that

|Z(S)| = |∑ Z(n) × n^{-S}|

= ∑ |Z(n) × n^{-S}|

= ∑ |Z(n)| × n^{-s}

= ζ(s)

So, in particular, if S is a 2-element set, we can write

S = 2

for short and get

|Z(2)| = ζ(2) = π^2/6

Can we describe the groupoid Z(2) in simple English, suitable for a nice bumper sticker? It’s a bit tricky. One reason is that I haven’t described the objects of Z(2) as mathematical gadgets of an appealing sort, as I did for Z(-1). Another closely related reason is that I only described the symmetries of any object in Z(2) – or more technically, morphisms from that object to itself. It’s much better if we also describe morphisms from one object to another.

For this, it’s best to define Z(n) × n^{-S} with the help of torsors. The idea of a torsor is that you can take the one-object groupoid associated to any group G and find a different groupoid, which is nonetheless equivalent, and which is a groupoid of appealing mathematical gadgets. These gadgets are called “G-torsors”. A “G-torsor” is just a nonempty set on which G acts freely and transitively:

12) John Baez, Torsors made easy, http://math.ucr.edu/home/baez/torsors.html

All G-torsors are isomorphic, and the group of symmetries of any G-torsor is G.
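Here’s a tiny brute-force illustration of that last fact, a sketch of my own: take G = Z/4 acting on itself by addition. The bijections of the underlying 4-element set that commute with this action are exactly the 4 translations, so this G-torsor has 4 symmetries, matching |G|.

from itertools import permutations

n = 4   # G = Z/4, acting on the set X = {0, 1, 2, 3} by addition mod n

def equivariant(f):
    # Does the bijection f (given as a tuple, f[x] = image of x) satisfy
    # f(g + x) = g + f(x) for every g in G and x in X?
    return all(f[(g + x) % n] == (g + f[x]) % n
               for g in range(n) for x in range(n))

symmetries = [f for f in permutations(range(n)) if equivariant(f)]
print(len(symmetries))   # 4 = |Z/4|
print(symmetries)        # the four translations x -> x + c

Replacing 4 by any n gives the n translations.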

Now, any ring R has an underlying additive group, which I will simply call R. So, the concept of “R-torsor” makes sense. This lets us define an object of Z(n) × n-S to be an n-element semisimple commutative ring R together with an S-tuple of R-torsors.

But what about the morphisms between these? We define a morphism between these to be a ring isomorphism together with an S-tuple of torsor isomorphisms. There’s a trick hiding here: a ring isomorphism f: R → R’ lets us take any R-torsor and turn it into an R’-torsor, or vice versa. So, it lets us talk about an isomorphism from an R-torsor to an R’-torsor – a concept that at first might have seemed nonsensical.

Anyway, it’s easy to check that this definition is compatible with our earlier one. So, we see:

THE GROUPOID OF FINITE SEMISIMPLE COMMUTATIVE RINGS EQUIPPED WITH AN n-TUPLE OF TORSORS HAS CARDINALITY ζ(n)

I did a silly change of variables here: I thought this bumper sticker would sell better if I said “n-tuple” instead of “S-tuple”. Here n is any positive integer.

While we’re selling bumper stickers, we might as well include this one:

THE GROUPOID OF FINITE SEMISIMPLE COMMUTATIVE RINGS EQUIPPED WITH A PAIR OF TORSORS HAS CARDINALITY π^2/6

Now, you might think this fact is just a curiosity. But I don’t think so: it’s actually a step towards categorifying the general theory of zeta functions. You see, the Riemann zeta function is just one of many zeta functions. As Hasse and Weil discovered, every sufficiently nice commutative ring R has a zeta function. The Riemann zeta function is just the simplest example: the one where R is the ring of integers. And the cool part is that all these zeta functions come from stuff types using the recipe I described!

How does this work? Well, from any commutative ring R, we can build a stuff type Z_R as follows: an object of Z_R is a finite semisimple commutative ring together with a homomorphism from R to that ring. Then it turns out the Dirichlet series of this stuff type, say

ζ_R(s) = ∑_{n > 0} |Z_R(n)| n^{-s}

is the usual Hasse-Weil zeta function of the ring R!

Of course, that fact is vastly more interesting if you already know and love Hasse-Weil zeta functions. You can find a definition of them either in my paper with Jim, or here:

13) Jean-Pierre Serre, Zeta and L functions, Arithmetical Algebraic Geometry (Proc. Conf. Purdue Univ., 1963), Harper and Row, 1965, pp. 82–92.

But the basic idea is simple. You can think of any commutative ring R as the functions on some space – a funny sort of space called an “affine scheme”. You’re probably used to spaces where all the points look alike – just little black dots. But the points of an affine scheme come in many different colors: for starters, one color for each prime power p^k! The Hasse-Weil zeta function of R is a clever trick for encoding the information about the numbers of points of these different colors in a single function.

Why do we get points of different colors? I explained this back in "week205". The idea is that for any commutative ring k, we can look at the homomorphisms

f: R → k

and these are called the “k-points” of our affine scheme. In particular, we can take k to be a finite field, say F_{p^n}. So, we get a set of points for each prime power p^n. The Hasse-Weil zeta function is a trick for keeping track of how many F_{p^n}-points there are for each prime power p^n.
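As a toy illustration (my own choice of example, not one from the text): take R = Z[x]/(x^2 + 1), the Gaussian integers. A homomorphism from R to F_p is determined by where x goes, and x has to go to a square root of -1, so counting F_p-points just means counting solutions of x^2 + 1 = 0 mod p.

def count_Fp_points(p):
    # Number of ring homomorphisms Z[x]/(x^2 + 1) -> F_p: each one sends
    # x to a solution of x^2 + 1 = 0 in F_p.
    return sum(1 for x in range(p) if (x * x + 1) % p == 0)

for p in (2, 3, 5, 7, 11, 13):
    print(p, count_Fp_points(p))
# p = 2 has 1 point; primes with p = 1 mod 4 have 2; primes with p = 3 mod 4
# have none.  The Hasse-Weil zeta function packages all these counts together.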

Given all this, you shouldn’t be surprised that we can get the Hasse-Weil zeta function of R by taking the Dirichlet series of the stuff type Z_R, where an object is a finite semisimple commutative ring k together with a homomorphism f: R → k. Especially if you remember that finite semisimple commutative rings are built from finite fields!

In fact, this whole theory of Hasse-Weil zeta functions works for gadgets much more general than commutative rings, also known as affine schemes. They can be defined for “schemes of finite type over the integers”, and that’s how Serre and other algebraic geometers usually do it. But Jim and I do it even more generally, in a way that doesn’t require any expertise in algebraic geometry. Which is good, because we don’t have any.

I won’t explain that here – it’s in our paper.

I’ll wrap up by making one more connection explicit – it’s sort of lurking in what I’ve said, but maybe it’s not quite obvious.

First of all, this idea of getting Dirichlet series from stuff types is part of the groupoidification program. Stuff types are a generalization of “structure types”, often called “species”. André Joyal developed the theory of species and showed how any species gives rise to a formal power series called its generating function. I told you about this back in "week185" and "week190". The recipe gets even simpler when we go up to stuff types: the generating function of a stuff type Z is just

∑_{n ≥ 0} |Z(n)| z^n

Since we can also describe states of the quantum harmonic oscillator as power series, with z^n corresponding to the nth energy level, this lets us view stuff types as states of a categorified quantum harmonic oscillator! This explains the combinatorics of Feynman diagrams:

14) Jeffrey Morton, Categorified algebra and quantum mechanics, TAC 16 (2006), 785-854, available at http://www.emis.de/journals/TAC/volumes/16/29/16-29abs.html Also available as arXiv:math/0601458.

And, it’s a nice test case of the groupoidification program, where we categorify lots of algebra by saying “wherever we see a number, let’s try to think of it as the cardinality of a groupoid”:

15) John Baez, Alex Hoffnung and Christopher Walker, Higher-dimensional algebra VII: Groupoidification, available as arXiv:0908.4305

But now I’m telling you something new! I’m saying that any stuff type also gives a Dirichlet series, namely

∑_{n > 0} |Z(n)| n^{-s}

This should make you wonder what’s going on. My paper with Jim explains it – at least for structure types. The point is that the groupoid of finite sets has two monoidal structures: + and ×. This gives the category of structure types two monoidal structures, using a trick called “Day convolution”. The first of these categorifies the usual product of formal power series, while the second categorifies the usual product of Dirichlet series. People in combinatorics love the first one, since they love chopping a set into two disjoint pieces and putting a structure on each piece. People in number theory secretly love the second one, without fully realizing it, because they love taking a number and decomposing it into prime factors. But they both fit into a single picture!
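At the decategorified level, the product that number theorists love is just Dirichlet convolution of coefficients: the nth coefficient of a product of two Dirichlet series is a sum over ways of factoring n. Here’s a quick sketch of mine: convolving the coefficients of ζ(s) with themselves gives the number-of-divisors function, so ζ(s)^2 = ∑_{n > 0} d(n) n^{-s}.

def dirichlet_convolution(a, b, N):
    # Coefficients c_n of the product of two Dirichlet series for n = 1..N:
    # c_n = sum over divisors d of n of a_d * b_{n/d}.
    c = {n: 0 for n in range(1, N + 1)}
    for d in range(1, N + 1):
        for m in range(1, N // d + 1):
            c[d * m] += a[d] * b[m]
    return c

N = 12
ones = {n: 1 for n in range(1, N + 1)}   # the coefficients of ζ(s)
c = dirichlet_convolution(ones, ones, N)
print([c[n] for n in range(1, N + 1)])
# [1, 2, 2, 3, 2, 4, 2, 4, 3, 4, 2, 6]: the number of divisors of each n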

There’s a lot more to say about this, because actually the category of structure types has five monoidal structures, all fitting together in a nice way. You can read a bit about this here:

16) nLab, Schur functors, http://ncatlab.org/nlab/show/Schur+functor

This is something Todd Trimble and I are writing, which will eventually evolve into an actual paper. We consider structure types for which there is a vector space of structures for each finite set instead of a set of structures. But much of the abstract theory is similar. In particular, there are still five monoidal structures.

Someday soon, I hope to show that two of the monoidal structures on the category of species make it into a “ring category”, while the other two – the ones I told you about, in fact! – are better thought of as “comonoidal” structures, making it into a “coring category”. Putting these together, the category of species should become a “biring category”. Then the fifth monoidal structure, called “plethysm”, should make it into a monoid in the monoidal bicategory of biring categories!

This sounds far-out, but it’s all been worked out at a decategorified level: rings, corings, birings, and monoids in the category of birings:

17) D. Tall and Gavin Wraith, Representable functors and operations on rings, Proc. London Math. Soc. (3), 1970, 619-643.

18) James Borger and B. Wieland, Plethystic algebra, Advances in Mathematics 194 (2005), 246-283. Also available at http://wwwmaths.anu.edu.au/~borger/papers/03/paper03.html

19) Andrew Stacey and S. Whitehouse, The hunting of the Hopf ring, Homology, Homotopy and Applications, 11 (2009), 75-132, available at http://intlpress.com/HHA/v11/n2/a6/ Also available as arXiv:0711.3722.

Borger and Wieland call a monoid in the category of birings a “plethory”. The star example is the algebra of symmetric functions. But this is basically just a decategorified version of the category of Vect-valued species. So, the whole story should categorify.

In short: starting from very simple ideas, we can very quickly find a treasure trove of rich structures. Indeed, these structures are already staring us in the face – we just need to open our eyes. They clarify and unify a lot of seemingly esoteric and disconnected things that mathematicians and physicists love.



I think we are just beginning to glimpse the real beauty of math and physics. I bet it will be both simpler and more startling than most people expect.

I would love to spend the rest of my life chasing glimpses of this beauty. I wish we lived in a world where everyone had enough of the bare necessities of life to do the same if they wanted – or at least a world that was safely heading in that direction, a world where politicians were looking ahead and tackling problems before they became desperately serious, a world where the oceans weren’t dying.

But we don’t.


Certainty of death. Small chance of success. What are we waiting for?
– Gimli

