This Week’s Finds (Week 303)

30 September, 2010

Now for the second installment of my interview with Nathan Urban, a colleague who started out in quantum gravity and now works on "global climate change from an Earth system perspective, with an emphasis on Bayesian data-model calibration, probabilistic prediction, risk assessment, and decision analysis".

But first, a word about Bali. One of the great things about living in Singapore is that it’s close to a lot of interesting places. My wife and I just spent a week in Ubud. This town is the cultural capital of Bali — full of dance, music, and crafts. It’s also surrounded by astounding terraced rice paddies:

In his book Whole Earth Discipline, Stewart Brand says "one of the finest examples of beautifully nuanced ecosystem engineering is the thousand-year-old terraced rice irrigation complex in Bali".

Indeed, when we took a long hike with a local guide, Made Dadug, we learned that all the apparent "weeds" growing in luxuriant disarray near the rice paddies were in fact carefully chosen plants: cacao, coffee, taro, ornamental flowers, and so on. "See this bush? It’s citronella — people working in the fields grab a pinch and use it for mosquito repellent." When a paddy loses its nutrients they plant sweet potatoes there instead of rice, to restore the soil.

Irrigation is managed by a system of local water temples, or "subaks". It’s not a top-down hierarchy: instead, each subak makes decisions in a more or less democratic way, while paying attention to what neighboring subaks do. Brand cites the work of Steve Lansing on this subject:

• J. Stephen Lansing, Perfect Order: Recognizing Complexity in Bali, Princeton University Press, Princeton, New Jersey, 2006.

Physicists interested in the spontaneous emergence of order will enjoy this passage:

This book began with a question posed by a colleague. In 1992 I gave a lecture at the Santa Fe Institute, a recently created research center devoted to the study of "complex systems." My talk focused on a simulation model that my colleague James Kremer and I had created to investigate the ecological role of water temples. I need to explain a little about how this model came to be built; if the reader will bear with me, the relevance will soon become clear.

Kremer is a marine scientist, a systems ecologist, and a fellow surfer. One day on a California beach I told him the story of the water temples, and of my struggles to convince the consultants that the temples played a vital role in the ecology of the rice terraces. I asked Jim if a simulation model, like the ones he uses to study coastal ecology, might help to clarify the issue. It was not hard to persuade him to come to Bali to take a look. Jim quickly saw that a model of a single water temple would not be very useful. The whole point about water temples is that they interact. Bali is a steep volcanic island, and the rivers and streams are short and fast. Irrigation systems begin high up on the volcanoes, and follow one after another at short intervals all the way to the seacoast. The amount of water each subak gets depends less on rainfall than on how much water is used by its upstream neighbors. Water temples provide a venue for the farmers to plan their irrigation schedules so as to avoid shortages when the paddies need to be flooded. If pests are a problem, they can synchronize harvests and flood a block of terraces so that there is nothing for the pests to eat. Decisions about water taken by each subak thus inevitably affect its neighbors, altering both the availability of water and potential levels of pest infestations.

Jim proposed that we build a simulation model to capture all of these processes for an entire watershed. Having recently spent the best part of a year studying just one subak, the idea of trying to model nearly two hundred of them at once struck me as rather ambitious. But as Jim pointed out, the question is not whether flooding can control pests, but rather whether the entire collection of temples in a watershed can strike an optimal balance between water sharing and pest control.

We set to work plotting the location of all 172 subaks lying between the Oos and Petanu rivers in central Bali. We mapped the rivers and irrigation systems, and gathered data on rainfall, river flows, irrigation schedules, water uptake by crops such as rice and vegetables, and the population dynamics of the major rice pests. With these data Jim constructed a simulation model. At the beginning of each year the artificial subaks in the model are given a schedule of crops to plant for the next twelve months, which defines their irrigation needs. Then, based on historic rainfall data, we simulate rainfall, river flow, crop growth, and pest damage. The model keeps track of harvest data and also shows where water shortages or pest damage occur. It is possible to simulate differences in rainfall patterns or the growth of different kinds of crops, including both native Balinese rice and the new rice promoted by the Green Revolution planners. We tested the model by simulating conditions for two cropping seasons, and compared its predictions with real data on harvest yields for about half the subaks. The model did surprisingly well, accurately predicting most of the variation in yields between subaks. Once we knew that the model’s predictions were meaningful, we used it to compare different scenarios of water management. In the Green Revolution scenario, every subak tries to plant rice as often as possible and ignores the water temples. This produces large crop losses from pest outbreaks and water shortages, much like those that were happening in the real world. In contrast, the “water temple” scenario generates the best harvests by minimizing pests and water shortages.

Back at the Santa Fe Institute, I concluded this story on a triumphant note: consultants to the Asian Development Bank charged with evaluating their irrigation development project in Bali had written a new report acknowledging our conclusions. There would be no further opposition to management by water temples. When I finished my lecture, a researcher named Walter Fontana asked a question, the one that prompted this book: could the water temple networks self-organize? At first I did not understand what he meant by this. Walter explained that if he understood me correctly, Kremer and I had programmed the water temple system into our model, and shown that it had a functional role. This was not terribly surprising. After all, the farmers had had centuries to experiment with their irrigation systems and find the right scale of coordination. But what kind of solution had they found? Was there a need for a Great Designer or an Occasional Tinkerer to get the whole watershed organized? Or could the temple network emerge spontaneously, as one subak after another came into existence and plugged in to the irrigation systems? As a problem solver, how well could the temple networks do? Should we expect 10 percent of the subaks to be victims of water shortages at any given time because of the way the temple network interacts with the physical hydrology? Thirty percent? Two percent? Would it matter if the physical layout of the rivers were different? Or the locations of the temples?

Answers to most of these questions could only be sought if we could answer Walter’s first large question: could the water temple networks self-organize? In other words, if we let the artificial subaks in our model learn a little about their worlds and make their own decisions about cooperation, would something resembling a water temple network emerge? It turned out that this idea was relatively easy to implement in our computer model. We created the simplest rule we could think of to allow the subaks to learn from experience. At the end of a year of planting and harvesting, each artificial subak compares its aggregate harvests with those of its four closest neighbors. If any of them did better, copy their behavior. Otherwise, make no changes. After every subak has made its decision, simulate another year and compare the next round of harvests. The first time we ran the program with this simple learning algorithm, we expected chaos. It seemed likely that the subaks would keep flipping back and forth, copying first one neighbor and then another as local conditions changed. But instead, within a decade the subaks organized themselves into cooperative networks that closely resembled the real ones.

Lansing describes how attempts to modernize farming in Bali in the 1970s proved problematic:

To a planner trained in the social sciences, management by water temples looks like an arcane relic from the premodern era. But to an ecologist, the bottom-up system of control has some obvious advantages. Rice paddies are artificial aquatic ecosystems, and by adjusting the flow of water farmers can exert control over many ecological processes in their fields. For example, it is possible to reduce rice pests (rodents, insects, and diseases) by synchronizing fallow periods in large contiguous blocks of rice terraces. After harvest, the fields are flooded, depriving pests of their habitat and thus causing their numbers to dwindle. This method depends on a smoothly functioning, cooperative system of water management, physically embodied in proportional irrigation dividers, which make it possible to tell at a glance how much water is flowing into each canal and so verify that the division is in accordance with the agreed-on schedule.

Modernization plans called for the replacement of these proportional dividers with devices called "Romijn gates," which use gears and screws to adjust the height of sliding metal gates inserted across the entrances to canals. The use of such devices makes it impossible to determine how much water is being diverted: a gate that is submerged to half the depth of a canal does not divert half the flow, because the velocity of the water is affected by the obstruction caused by the gate itself. The only way to accurately estimate the proportion of the flow diverted by a Romijn gate is with a calibrated gauge and a table. These were not supplied to the farmers, although $55 million was spent to install Romijn gates in Balinese irrigation canals, and to rebuild some weirs and primary canals.

The farmers coped with the Romijn gates by simply removing them or raising them out of the water and leaving them to rust.

On the other hand, Made said that the people of the village really appreciated this modern dam:

Using gears, it takes a lot less effort to open and close than the old-fashioned kind:

Later in this series of interviews we’ll hear more about sustainable agriculture from Thomas Fischbacher.

But now let’s get back to Nathan!

JB: Okay. Last time we were talking about the things that altered your attitude about climate change when you started working on it. And one of them was how carbon dioxide stays in the atmosphere a long time. Why is that so important? And is it even true? After all, any given molecule of CO2 that’s in the air now will soon get absorbed by the ocean, or taken up by plants.

NU: The longevity of atmospheric carbon dioxide is important because it determines the amount of time over which our actions now (fossil fuel emissions) will continue to have an influence on the climate, through the greenhouse effect.

You have heard correctly that a given molecule of CO2 doesn’t stay in the atmosphere for very long. I think it’s about 5 years. This is known as the residence time or turnover time of atmospheric CO2. Maybe that molecule will go into the surface ocean and come back out into the air; maybe photosynthesis will bind it in a tree, in wood, until the tree dies and decays and the molecule escapes back to the atmosphere. This is a carbon cycle, so it’s important to remember that molecules can come back into the air even after they’ve been removed from it.

But the fate of an individual CO2 molecule is not the same as how long it takes for the CO2 content of the atmosphere to decrease back to its original level after new carbon has been added. The latter is the answer that really matters for climate change. Roughly, the former depends on the magnitude of the gross carbon sink, while the latter depends on the magnitude of the net carbon sink (the gross sink minus the gross source).

As an example, suppose that every year 100 units of CO2 are emitted to the atmosphere from natural sources (organic decay, the ocean, etc.), and each year (say with a 5 year lag), 100 units are taken away by natural sinks (plants, the ocean, etc). The 5 years actually doesn’t matter here; the system is in steady-state equilibrium, and the amount of CO2 in the air is constant. Now suppose that humans add an extra 1 unit of CO2 each year. If nothing else changes, then the amount of carbon in the air will increase every year by 1 unit, indefinitely. Far from the carbon being purged in 5 years, we end up with an arbitrarily large amount of carbon in the air.
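Here is a minimal sketch of that bookkeeping in Python, using the same illustrative numbers (100 units of natural flux, 1 extra unit of human emissions); it just makes explicit that the natural terms cancel and the excess accumulates:

```python
# Toy carbon bookkeeping for the example above (illustrative units only).

natural_source = 100.0   # units of CO2 emitted per year by natural sources
natural_sink   = 100.0   # fixed natural sink, balancing the natural source
human_emission = 1.0     # extra units per year from fossil fuels

excess = 0.0             # CO2 in the air above the original steady state

for year in range(100):
    excess += natural_source + human_emission - natural_sink
    # the natural terms cancel, so the excess grows by 1 unit every year

print(excess)  # 100.0 after a century: the extra carbon is never purged
```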

Even if you only add carbon to the atmosphere for a finite time (e.g., by running out of fossil fuels), the CO2 concentration will ultimately level off at, and then perpetually remain at, a level elevated above the original concentration by the total amount of new carbon added. Individual CO2 molecules may still get absorbed within 5 years of entering the atmosphere, and perhaps few of the carbon atoms that were once in fossil fuels will ultimately remain in the atmosphere. But if natural sinks are only removing an amount of carbon equal in magnitude to natural sources, and both are fixed in time, you can see that if you add extra fossil carbon the overall atmospheric CO2 concentration can never decrease, regardless of what individual molecules are doing.

In reality, natural carbon sinks tend to grow in proportion to how much carbon is in the air, so atmospheric CO2 doesn’t remain elevated indefinitely in response to a pulse of carbon into the air. This is kind of the biogeochemical analog to the "Planck feedback" in climate dynamics: it acts to restore the system to equilibrium. To first order, atmospheric CO2 decays or "relaxes" exponentially back to the original concentration over time. But this relaxation time (variously known as a "response time", "adjustment time", "recovery time", or, confusingly, "residence time") isn’t a function of the residence time of a CO2 molecule in the atmosphere. Instead, it depends on how quickly the Earth’s carbon removal processes react to the addition of new carbon. For example, how fast plants grow, die, and decay, or how fast surface water in the ocean mixes to greater depths, where the carbon can no longer exchange freely with the atmosphere. These are slower processes.

There are actually a variety of response times, ranging from years to hundreds of thousands of years. The surface mixed layer of the ocean responds within a year or so; plants take decades to grow and take up carbon, or to return it to the atmosphere through rotting or burning. Deep ocean mixing and carbonate chemistry operate on longer time scales, centuries to millennia. And geologic processes like silicate weathering are even slower, tens of thousands of years. The removal dynamics are a superposition of all these processes, with a fair chunk taken out quickly by the fast processes, and slower processes removing the remainder more gradually.
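One way to picture this superposition is as an impulse response: the fraction of an emitted pulse of CO2 still in the air after t years, written as a constant term plus a few decaying exponentials. The weights and timescales in the sketch below are invented purely for illustration (real fits, such as those discussed in the Archer et al. review cited below, differ in detail), but the qualitative shape is the point: a fast partial drawdown followed by a very long tail.

```python
import numpy as np

# Illustrative impulse-response sketch: fraction of a CO2 pulse remaining
# in the atmosphere after t years, as a constant "essentially forever" term
# plus a few exponentials. Weights and timescales are made up for illustration.

weights    = [0.25, 0.30, 0.30, 0.15]     # fractions of the pulse; sum to 1
timescales = [np.inf, 400.0, 40.0, 4.0]   # years; inf stands in for the geologic tail

def airborne_fraction(t):
    """Fraction of an emitted CO2 pulse still airborne after t years."""
    return sum(w * (1.0 if np.isinf(tau) else np.exp(-t / tau))
               for w, tau in zip(weights, timescales))

for t in [0, 10, 100, 1000, 10000]:
    print(t, round(airborne_fraction(t), 2))
```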

To summarize, as David Archer put it, "The lifetime of fossil fuel CO2 in the atmosphere is a few centuries, plus 25 percent that lasts essentially forever." By "forever" he means "tens of thousands of years" — longer than the present age of human civilization. This inspired him to write this pop-sci book, taking a geologic view of anthropogenic climate change:

• David Archer, The Long Thaw: How Humans Are Changing the Next 100,000 Years of Earth’s Climate, Princeton University Press, Princeton, New Jersey, 2009.

A clear perspective piece on the lifetime of carbon is:

• Mason Inman, Carbon is forever, Nature Reports Climate Change, 20 November 2008.

which is based largely on this review article:

• David Archer, Michael Eby, Victor Brovkin, Andy Ridgwell, Long Cao, Uwe Mikolajewicz, Ken Caldeira, Katsumi Matsumoto, Guy Munhoven, Alvaro Montenegro, and Kathy Tokos, Atmospheric lifetime of fossil fuel carbon dioxide, Annual Review of Earth and Planetary Sciences 37 (2009), 117-134.

For climate implications, see:

• Susan Solomon, Gian-Kasper Plattner, Reto Knutti and Pierre Friedlingstein, Irreversible climate change due to carbon dioxide emissions, PNAS 106 (2009), 1704-1709.

• M. Eby, K. Zickfeld, A. Montenegro, D. Archer, K. J. Meissner and A. J. Weaver, Lifetime of anthropogenic climate change: millennial time scales of potential CO2 and surface temperature perturbations, Journal of Climate 22 (2009), 2501-2511.

• Long Cao and Ken Caldeira, Atmospheric carbon dioxide removal: long-term consequences and commitment, Environmental Research Letters 5 (2010), 024011.

For the very long term perspective (how CO2 may affect the glacial-interglacial cycle over geologic time), see:

• David Archer and Andrey Ganopolski, A movable trigger: Fossil fuel CO2 and the onset of the next glaciation, Geochemistry Geophysics Geosystems 6 (2005), Q05003.

JB: So, you’re telling me that even if we do something really dramatic like cut fossil fuel consumption by half in the next decade, we’re still screwed. Global warming will keep right on, though at a slower pace. Right? Doesn’t that make you feel sort of hopeless?

NU: Yes, global warming will continue even as we reduce emissions, although more slowly. That’s sobering, but not grounds for total despair. Societies can adapt, and ecosystems can adapt — up to a point. If we slow the rate of change, then there is more hope that adaptation can help. We will have to adapt to climate change, regardless, but the less we have to adapt, and the more gradual the adaptation necessary, the less costly it will be.

What’s even better than slowing the rate of change is to reduce the overall amount of it. To do that, we’d need to not only reduce carbon emissions, but to reduce them to zero before we consume all fossil fuels (or all of them that would otherwise be economically extractable). If we emit the same total amount of carbon, but more slowly, then we will get the same amount of warming, just more slowly. But if we ultimately leave some of that carbon in the ground and never burn it, then we can reduce the amount of final warming. We won’t be able to stop it dead, but even knocking a degree off the extreme scenarios would be helpful, especially if there are "tipping points" that might otherwise be crossed (like a threshold temperature above which a major ice sheet will disintegrate).

So no, I don’t feel hopeless that we can, in principle, do something useful to mitigate the worst effects of climate change, even though we can’t plausibly stop or reverse it on normal societal timescales. But sometimes I do feel hopeless that we lack the public and political will to actually do so. Or at least, that we will procrastinate until we start seeing extreme consequences, by which time it’s too late to prevent them. Well, it may not be too late to prevent future, even more extreme consequences, but the longer we wait, the harder it is to make a dent in the problem.

I suppose here I should mention the possibility of climate geoengineering, which is a proposed attempt to artificially counteract global warming through other means, such as reducing incoming sunlight with reflective particles in the atmosphere, or space mirrors. That doesn’t actually cancel all climate change, but it can negate a lot of the global warming. There are many risks involved, and I regard it as a truly last-ditch effort if we discover that we really are "screwed" and can’t bear the consequences.

There is also an extreme form of carbon cycle geoengineering, known as air capture and sequestration, which extracts CO2 from the atmosphere and sequesters it for long periods of time. There are various proposed technologies for this, but it’s highly uncertain whether this can feasibly be done on the necessary scales.

JB: Personally, I think society will procrastinate until we see extreme climate changes. Recently millions of Pakistanis were displaced by floods: a quarter of their country was covered by water. We can’t say for sure this was caused by global warming — but it’s exactly the sort of thing we should expect.

But you’ll notice, this disaster is nowhere near enough to make politicians talk about cutting fossil fuel usage! It’ll take a lot of disasters like this to really catch people’s attention. And by then we’ll be playing a desperate catch-up game, while people in many countries are struggling to survive. That won’t be easy. Just think how little attention the Pakistanis can spare for global warming right now.

Anyway, this is just my own cheery view. But I’m not hopeless, because I think there’s still a lot we can do to prevent a terrible situation from becoming even worse. Since I don’t think the human race will go extinct anytime soon, it would be silly to "give up".

Now, you’ve just started a position at the Woodrow Wilson School at Princeton. When I was an undergrad there, this school was the place for would-be diplomats. What’s a nice scientist like you doing in a place like this? I see you’re in the Program in Science, Technology and Environmental Policy, or "STEP program". Maybe it’s too early for you to give a really good answer, but could you say a bit about what they do?

NU: Let me pause to say that I don’t know whether the Pakistan floods are "exactly the sort of thing we should expect" to happen to Pakistan, specifically, as a result of climate change. Uncertainty in the attribution of individual events is one reason why people don’t pay attention to them. But it is true that major floods are examples of extreme events which could become more (or less) common in various regions of the world in response to climate change.

Returning to your question, the STEP program includes a number of scientists, but we are all focused on policy issues because the Woodrow Wilson School is for public and international affairs. There are physicists who work on nuclear policy, ecologists who study environmental policy and conservation biology, atmospheric chemists who look at ozone and air pollution, and so on. Obviously, climate change is intimately related to public and international policy. I am mostly doing policy-relevant science but may get involved in actual policy to some extent. The STEP program has ties to other departments such as Geosciences, interdisciplinary umbrella programs like the Atmospheric and Ocean Sciences program and the Princeton Environmental Institute, and NOAA’s nearby Geophysical Fluid Dynamics Laboratory, one of the world’s leading climate modeling centers.

JB: How much do you want to get into public policy issues? Your new boss, Michael Oppenheimer, used to work as chief scientist for the Environmental Defense Fund. I hadn’t known much about them, but I’ve just been reading a book called The Climate War. This book says a lot about the Environmental Defense Fund’s role in getting the US to pass cap-and-trade legislation to reduce sulfur dioxide emissions. That’s quite an inspiring story! Many of the same people then went on to push for legislation to reduce greenhouse gases, and of course that story is less inspiring, so far: no success yet. Can you imagine yourself getting into the thick of these political endeavors?

NU: No, I don’t see myself getting deep into politics. But I am interested in what we should be doing about climate change, specifically, the economic assessment of climate policy in the presence of uncertainties and learning. That is, how hard should we be trying to reduce CO2 emissions, accounting for the fact that we’re unsure what climate the future will bring, but expect to learn more over time. Michael is very interested in this question too, and the harder problem of "negative learning":

• Michael Oppenheimer, Brian C. O’Neill and Mort Webster, Negative learning, Climatic Change 89 (2008), 155-172.

"Negative learning" occurs if what we think we’re learning is actually converging on the wrong answer. How fast could we detect and correct such an error? It’s hard enough to give a solid answer to what we might expect to learn, let alone what we don’t expect to learn, so I think I’ll start with the former.

I am also interested in the value of learning. How will our policy change if we learn more? Can there be any change in near-term policy recommendations, or will we learn slowly enough that new knowledge will only affect later policies? Is it more valuable — in terms of its impact on policy — to learn more about the most likely outcomes, or should we concentrate on understanding better the risks of the worst-case scenarios? What will cause us to learn the fastest? Better surface temperature observations? Better satellites? Better ocean monitoring systems? What observables should we be looking at?

The question "How much should we reduce emissions" is, partially, an economic one. The safest course of action from the perspective of climate impacts is to immediately reduce emissions to a much lower level. But that would be ridiculously expensive. So some kind of cost-benefit approach may be helpful: what should we do, balancing the costs of emissions reductions against their climate benefits, knowing that we’re uncertain about both. I am looking at so-called "economic integrated assessment" models, which combine a simple model of the climate with an even simpler model of the world economy to understand how they influence each other. Some argue these models are too simple. I view them more as a way of getting order-of-magnitude estimates of the relative values of different uncertainty scenarios or policy options under specified assumptions, rather than something that can give us "The Answer" to what our emissions targets should be.

In a certain sense it may be moot to look at such cost-benefit analyses, since there is a huge difference between "what may be economically optimal for us to do" and "what we will actually do". We aren’t even coming close to meeting current policy recommendations, so what’s the point of generating new recommendations? That’s certainly a valid argument, but I still think it’s useful to have a sense of the gap between what we are doing and what we "should" be doing.

Economics can only get us so far, however (and maybe not far at all). Traditional approaches to economics have a very narrow way of viewing the world, and tend to ignore questions of ethics. How do you put an economic value on biodiversity loss? If we might wipe out polar bears, or some other species, or a whole lot of species, how much is it "worth" to prevent that? What is the Great Barrier Reef worth? Its value in tourism dollars? Its value in "ecosystem services" (the more nebulous economic activity which indirectly depends on its presence, such as fishing)? Does it have intrinsic value, and is it worth something (what?) to preserve, even if it has no quantifiable impact on the economy whatsoever?

You can continue on with questions like this. Does it make sense to apply standard economic discounting factors, which effectively value the welfare of future generations less than that of the current generation? See for example:

• John Quiggin, Stern and his critics on discounting and climate change: an editorial essay, Climatic Change 89 (2008), 195-205.
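Just to show how much the choice of discount rate matters, here is a small arithmetic illustration; the rates and the damage figure are generic, not those of any particular study.

```python
# Present value of a far-future climate damage under different discount rates.
# The rates and the damage figure are generic illustrations, nothing more.

damage = 1.0e12   # a $1 trillion damage

for rate in [0.01, 0.03, 0.05]:
    for years in [100, 200]:
        present_value = damage / (1.0 + rate) ** years
        print(f"rate {rate:.0%}, {years} years ahead: ${present_value:,.0f} today")
```

At a 1% rate a trillion-dollar damage a century away is still worth hundreds of billions today; at 5% it shrinks to a few billion, which is why the discounting debate matters so much for climate policy.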

Economic models also tend to preserve present economic disparities. Otherwise, their "optimal" policy is to immediately transfer a lot of the wealth of developed countries to developing countries — and this is without any climate change — to maximize the average "well-being" of the global population, on the grounds that a dollar is worth more to a poor person than a rich person. This is not a realistic policy and arguably shouldn’t happen anyway, but you do have to be careful about hard-coding potential inequities into your models:

• Seth D. Baum and William E. Easterling, Space-time discounting in climate change adaptation, Mitigation and Adaptation Strategies for Global Change 15 (2010), 591-609.

More broadly, it’s possible for economics models to allow sea level rise to wipe out Bangladesh, or other extreme scenarios, simply because some countries have so little economic output that it doesn’t "matter" if they disappear, as long as other countries become even more wealthy. As I said, economics is a narrow lens.

After all that, it may seem silly to be thinking about economics at all. The main alternative is the "precautionary principle", which says that we shouldn’t take actions with suspected risks unless we can prove them safe. After all, we have few geologic examples of CO2 levels rising as far and as fast as we are likely to increase them — to paraphrase Wally Broecker, we are conducting an uncontrolled and possibly unprecedented experiment on the Earth. This principle has some merits. The common argument, "We should do nothing unless we can prove the outcome is disastrous", is a strange burden of proof from a decision analytic point of view — it has little to do with the realities of risk management under uncertainty. Nobody’s going to say "You can’t prove the bridge will collapse, so let’s build it". They’re going to say "Prove it’s safe (to within a certain guarantee) before we build it". Actually, a better analogy to the common argument might be: you’re driving in the dark with broken headlights, and insist "You’ll have to prove there are no cliffs in front of me before I’ll consider slowing down." In reality, people should slow down, even if it makes them late, unless they know there are no cliffs.

But the precautionary principle has its own problems. It can imply arbitrarily expensive actions in order to guard against arbitrarily unlikely hazards, simply because we can’t prove they’re safe, or precisely quantify their exact degree of unlikelihood. That’s why I prefer to look at quantitative cost-benefit analysis in a probabilistic framework. But it can be supplemented with other considerations. For example, you can look at stabilization scenarios: where you "draw a line in the sand" and say we can’t risk crossing that, and apply economics to find the cheapest way to avoid crossing the line. Then you can elaborate that to allow for some small but nonzero probability of crossing it, or to allow for temporary "overshoot", on the grounds that it might be okay to briefly cross the line, as long as we don’t stay on the other side indefinitely. You can tinker with discounting assumptions and the decision framework of expected utility maximization. And so on.

JB: This is fascinating stuff. You’re asking a lot of really important questions — I think I see about 17 question marks up there. Playing the devil’s advocate a bit, I could respond: do you know any answers? Of course I don’t expect "ultimate" answers, especially to profound questions like how much we should allow economics to guide our decisions, versus tempering it with other ethical considerations. But it would be nice to see an example where thinking about these issues turned up new insights that actually changed people’s behavior. Cases where someone said "Oh, I hadn’t thought of that…", and then did something different that had a real effect.

You see, right now the world as it is seems so far removed from the world as it should be that one can even start to doubt the usefulness of pondering the questions you’re raising. As you said yourself, "We’re not yet even coming close to current policy recommendations, so what’s the point of generating new recommendations?"

I think the cap-and-trade idea is a good example, at least as far as sulfur dioxide emissions go: the Clean Air Act Amendments of 1990 managed to reduce SO2 emissions in the US from about 19 million tons in 1980 to about 7.6 million tons in 2007. Of course this idea is actually a bunch of different ideas that need to work together in a certain way… but anyway, some example related to global warming would be a bit more reassuring, given our current problems with that.

NU: Climate change economics has been very influential in generating momentum for putting a price on carbon (through cap-and-trade or otherwise) in Europe and the U.S., by showing that such a policy has the potential to be a net benefit once the risks of climate change are considered. SO2 emissions markets are one relevant piece of this body of research, although the CO2 problem is much bigger in scope and presents more problems for such approaches. Climate economics has been an important synthesis of decision analysis and scientific uncertainty quantification, which I think we need more of. But to be honest, I’m not sure what immediate impact additional economic work may have on mitigation policy, unless we begin approaching current emissions targets. So from the perspective of immediate applications, I also ponder the usefulness of answering these questions.

That, however, is not the only perspective I think about. I’m also interested in how what we should do is related to what we might learn — if not today, then in the future. There are still important open questions about how well we can see something potentially bad coming, the answers to which could influence policies. For example, if a major ice sheet begins to substantially disintegrate within the next few centuries, would we be able to see that coming soon enough to step up our mitigation efforts in time to prevent it? In reality that’s a probabilistic question, but let’s pretend it’s a binary outcome. If the answer is "yes", that could call for increased investment in "early warning" observation systems, and a closer coupling of policy to the data produced by such systems. (Well, we should be investing more in those anyway, but people might get the point more strongly, especially if research shows that we’d only see it coming if we get those systems in place and tested soon.) If the answer is "no", that could go at least three ways. One way it could go is that the precautionary principle wins: if we think that we could put coastal cities under water, and we wouldn’t see it coming in time to prevent it, that might finally prompt more preemptive mitigation action. Another is that we start looking more seriously at last-ditch geoengineering approaches, or carbon air capture and sequestration. Or, if people give up on modifying the climate altogether, then it could prompt more research and development into adaptation. All of those outcomes raise new policy questions, concerning how much of what policy response we should aim for.

Which brings me to the next policy option. The U.S. presidential science advisor, John Holdren, has said that we have three choices for climate change: mitigate, adapt, or suffer. Regardless of what we do about the first, people will likely be doing some of the other two; the question is how much. If you’re interested in research that has a higher likelihood of influencing policy in the near term, adaptation is probably what you should work on. (That, or technological approaches like climate/carbon geoengineering, energy systems, etc.) People are already looking very seriously at adaptation (and in some cases are already putting plans into place). For example, the Port Authority of Los Angeles needs to know whether, or when, to fortify their docks against sea level rise, and whether a big chunk of their business could disappear if the Northwest Passage through the Arctic Ocean opens permanently. They have to make these investment decisions regardless of what may happen with respect to geopolitical emissions reduction negotiations. The same kinds of learning questions I’m interested in come into play here: what will we know, and when, and how should current decisions be structured knowing that we will be able to periodically adjust those decisions?

So, why am I not working on adaptation? Well, I expect that I will be, in the future. But right now, I’m still interested in a bigger question, which is how well we can bound the large risks and our ability to prevent disasters, rather than just finding the best way to survive them. What is the best and the worst that can happen, in principle? Also, I’m concerned that right now there is too much pressure to develop adaptation policies to a level of detail which we don’t yet have the scientific capability to develop. While global temperature projections are probably reasonable within their stated uncertainty ranges, we have a very limited ability to predict, for example, how precipitation may change over a particular city. But that’s what people want to know. So scientists are trying to give them an answer. But it’s very hard to say whether some of those answers right now are actionably credible. You have to choose your problems carefully when you work in adaptation. Right now I’m opting to look at sea level rise, partly because it is less affected by some of the details of local meteorology.

JB: Interesting. I think I’m going to cut our conversation here, because at this point it took a turn that will really force me to do some reading! And it’s going to take a while. But it should be fun!


The climatic impacts of releasing fossil fuel CO2 to the atmosphere will last longer than Stonehenge, longer than time capsules, longer than nuclear waste, far longer than the age of human civilization so far. – David Archer


This Week’s Finds (Week 302)

9 September, 2010

In "week301" I sketched a huge picture in a very broad brush. Now I’d like to start filling in a few details: not just about the problems we face, but also about what we can do to tackle them. For the reasons I explained last time, I’ll focus on what scientists can do.

As I’m sure you’ve noticed, different people have radically different ideas about the mess we’re in, or if there even is a mess.

Maybe carbon emissions are causing really dangerous global warming. Maybe they’re not — or at least, maybe it’s not as bad as some say. Maybe we need to switch away from fossil fuels to solar power, or wind. Maybe nuclear power is the answer, because solar and wind are intermittent. Maybe nuclear power is horrible! Maybe using less energy is the key. But maybe boosting efficiency isn’t the way to accomplish that.

Maybe the problem is way too big for any of these conventional solutions to work. Maybe we need carbon sequestration: like, pumping carbon dioxide underground. Maybe we need to get serious about geoengineering — you know, something like giant mirrors in space, to cool down the Earth. Maybe geoengineering is absurdly impractical — or maybe it’s hubris on our part to think we could do it right! Maybe some radical new technology is the answer, like nanotech or biotech. Maybe we should build an intelligence greater than our own and let it solve our problems.

Maybe all this talk is absurd. Maybe all we need are some old technologies, like traditional farming practices, or biochar: any third-world peasant can make charcoal and bury it, harnessing the power of nature to do carbon sequestration without fancy machines. In fact, maybe we need to go back to nature and get rid of the modern technological civilization that’s causing our problems. Maybe this would cause massive famines. But maybe they’re bound to come anyway: maybe overpopulation lies at the root of our problems and only a population crash will solve them. Maybe that idea just proves what we’ve known all along: the environmental movement is fundamentally anti-human.

Maybe all this talk is just focusing on symptoms: maybe what we need is a fundamental change in consciousness. Maybe that’s not possible. Maybe we’re just doomed. Or maybe we’ll muddle through the way we always do. Maybe, in fact, things are just fine!

To help sift through this mass of conflicting opinions, I think I’ll start by interviewing some people.



I’ll start with Nathan Urban, for a couple of reasons. First, he can help me understand climate science and the whole business of how we can assess risks due to climate change. Second, like me, he started out working on quantum gravity! Can I be happy switching from pure math and theoretical physics to more practical stuff? Maybe talking to him will help me find out.

So, here is the first of several conversations with Nathan Urban. This time we’ll talk about what it’s like to shift careers, how he got interested in climate change, and the issue of "climate sensitivity": how much the temperature changes if you double the amount of carbon dioxide in the Earth’s atmosphere.

JB: It’s a real pleasure to interview you, since you’ve successfully made a transition that I’m trying to make now — from "science for its own sake" to work that may help save the planet.

I can’t resist telling our readers that when we first met, you had applied to U.C. Riverside because you were interested in working on quantum gravity. You wound up going elsewhere… and now you’re at Princeton, at the Woodrow Wilson School of Public and International Affairs, working on "global climate change from an Earth system perspective, with an emphasis on Bayesian data-model calibration, probabilistic prediction, risk assessment, and decision analysis". That’s quite a shift!

I’m curious about how you got from point A to point B. What was the hardest thing about it?

NU: I went to Penn State because it had a big physics department and one of the leading centers in quantum gravity. A couple years into my degree my nominal advisor, Lee Smolin, moved to the Perimeter Institute in Canada. PI was brand new and didn’t yet have a formal affiliation with a university to support graduate students, so it was difficult to follow him there. I ended up staying at Penn State, but leaving gravity. That was the hardest part of my transition, as I’d been passionately interested in gravity since high school.

I ultimately landed in computational statistical mechanics, partly due to the Monte Carlo computing background I’d acquired studying the dynamical triangulations approach to quantum gravity. My thesis work was interesting, but by the time I graduated, I’d decided it probably wasn’t my long term career.

During graduate school I had become interested in statistics. This was partly from my Monte Carlo simulation background, partly from a Usenet thread on Bayesian statistics (archived on your web page), and partly from my interest in statistical machine learning. I applied to a postdoc position in climate change advertised at Penn State which involved statistics and decision theory. At the time I had no particular plan to remain at Penn State, knew nothing about climate change, had no prior interest in it, and was a little skeptical that the whole subject had been exaggerated in the media … but I was looking for a job and it sounded interesting and challenging, so I accepted.

I had a great time with that job, because it involved a lot of statistics and mathematical modeling, was very interdisciplinary — incorporating physics, geology, biogeochemistry, economics, public policy, etc. — and tackled big, difficult questions. Eventually it was time to move on, and I accepted a second postdoc at Princeton doing similar things.

JB: It’s interesting that you applied for that Penn State position even though you knew nothing about climate change. I think there are lots of scientists who’d like to work on environmental issues but feel they lack the necessary expertise. Indeed I sometimes feel that way myself! So what did you do to bone up on climate change? Was it important to start by working with a collaborator who knew more about that side of things?

NU: I think a physics background gives people the confidence (or arrogance!) to jump into a new field, trusting their quantitative skills to see them through.

It was very much like starting over as a grad student again — an experience I’d had before, switching from gravity to condensed matter — except faster. I read. A lot. But at the same time, I worked on a narrowly defined project, in collaboration with an excellent mentor, to get my feet wet and gain depth. The best way to learn is probably to just try to answer some specific research question. You can pick up what you need to know as you go along, with help. (One difficulty is in identifying a good and accessible problem!)

I started by reading the papers cited by the paper upon whose work my research was building. The IPCC Fourth Assessment Report came out shortly after that, which cites many more key references. I started following new articles in major journals, whatever seemed interesting or relevant to me. I also sampled some of the blog debates on climate change. Those were useful to understand what the public’s view of the important controversies may be, which is often very different from the actual controversies within the field. Some posters were willing to write useful tutorials on some aspects of the science as well. And of course I learned through research, through attending group meetings with collaborators, and talking to people.

It’s very important to start out working with a knowledgeable collaborator, and I’m lucky to have many. The history of science is littered with very smart people making serious errors when they get out of their depth. The physicist Leo Szilard once told a biologist colleague to "assume infinite intelligence and zero prior knowledge" when explaining things to him. The error some make is in believing that intelligence alone will suffice. You also have to acquire knowledge, and become intimately familiar with the relevant scientific literature. And you will make mistakes in a new field, no matter how smart you are. That’s where a collaborator is crucial: someone who can help you identify flaws in arguments that you may not notice yourself at first. (And it’s not just to start with, either: I still need collaborators to teach me things about specific models, or data sets, that I don’t know.) Collaborators also can help you become familiar with the literature faster.

It’s helpful to have a skill that others need. I’ve built up expertise in statistical data-model comparison. I read as many statistics papers as I do climate papers, have statistician collaborators, and can speak their own language. I can act as an intermediary between scientists and statisticians. This expertise allows me to collaborate with some climate research groups who happen to lack such expertise themselves. As a result I have a lot of people who are willing to teach me what they know, so we can solve problems that neither of us alone could.

JB: You said you began with a bit of skepticism that perhaps the whole climate change thing had been exaggerated in the media. I think a lot of people feel that way. I’m curious how your attitude evolved as you began studying the subject more deeply. That might be a big question, so maybe we can break it down a little: do you remember the first thing you read that made you think "Wow! I didn’t know that!"?

NU: I’m not sure what was the first. It could have been that most of the warming from CO2 is currently thought to come from feedback effects, rather than its direct greenhouse effect. Or that ice ages (technically, glacial periods) were only 5-6 °C cooler than our preindustrial climate, globally speaking. Many people would guess something much colder, like 10 °C. It puts future warming in perspective to think that it could be as large, or even half as large, as the warming between an ice age and today. "A few degrees" doesn’t sound like much (especially in Celsius, to an American), but historically, it can be a big deal — particularly if you care about the parts of the planet that warm faster than the average rate. Also, I was surprised by the atmospheric longevity of CO2 concentrations. If CO2 is a problem, it will be a problem that’s around for a long time.

JB: These points are so important that I don’t want them to whiz past too quickly. So let me back up and ask a few more questions here.

By "feedback effects", I guess you mean things like this: when it gets warmer, ice near the poles tends to melt. But ice is white, so it reflects sunlight. When ice melts, the landscape gets darker, and absorbs more sunlight, so it gets warmer. So the warming effect amplifies itself — like feedback when a rock band has its amplifiers turned up too high.

On the other hand, any sort of cooling effect also amplifies itself. For example, when it gets colder, more ice forms, and that makes the landscape whiter, so more sunlight gets reflected, making it even colder.

Could you maybe explain some of the main feedback effects and give us numbers that say how big they are?

NU: Yes, feedbacks are when a change in temperature causes changes within the climate system that, themselves, cause further changes in temperature. Ice reflectivity, or "albedo", feedback is a good example. Another is water vapor feedback. When it gets warmer — due to, say, the CO2 greenhouse effect — the evaporation-condensation balance shifts in favor of relatively more evaporation, and the water vapor content of the atmosphere increases. But water vapor, like CO2, is a greenhouse gas, which causes additional warming. (The opposite happens in response to cooling.) These feedbacks which amplify the original cause (or "forcing") are known to climatologists as "positive feedbacks".

A somewhat less intuitive example is the "lapse rate feedback". The greenhouse effect causes atmospheric warming. But this warming itself causes the vertical temperature profile of the atmosphere to change. The rate at which air temperature decreases with height, or lapse rate, can itself increase or decrease. This change in lapse rate depends on interactions between radiative transfer, clouds and convection, and water vapor. In the tropics, the lapse rate is expected to decrease in response to the enhanced greenhouse effect, amplifying the warming in the upper troposphere and suppressing it at the surface. This suppression is a "negative feedback" on surface temperature. Toward the poles, the reverse happens (a positive feedback), but the tropics tend to dominate, producing an overall negative feedback.

Clouds create more complex feedbacks. Clouds have both an albedo effect (they are white and reflect sunlight) and a greenhouse effect. Low clouds tend to be thick and warm, with a high albedo and weak greenhouse effect, and so are net cooling agents. High clouds are often thin and cold, with low albedo and strong greenhouse effect, and are net warming agents. Temperature changes in the atmosphere can affect cloud amount, thickness, and location. Depending on the type of cloud and how temperature changes alter its behavior, this can result in either positive or negative feedbacks.

There are other feedbacks, but these are usually thought of as the big four: surface albedo (including ice albedo), water vapor, lapse rate, and clouds.

For the strengths of the feedbacks, I’ll refer to climate model predictions, mostly because they’re neatly summarized in one place:

• Section 8.6 of the Intergovernmental Panel on Climate Change Fourth Assessment Report, Working Group 1 (AR4 WG1).

There are also estimates made from observational data. (Well, data plus simple models, because you need some kind of model of how temperatures depend on CO2, even if it’s just a simple linear feedback model.) But observational estimates are more scattered in the literature and harder to summarize, and some feedbacks are very difficult to estimate directly from data. This is a problem when testing the models. For now, I’ll stick to the models — not because they’re necessarily more credible than observational estimates, but just to make my job here easier.

Conventions vary, but the feedbacks I will give are measured in units of watts per square meter per kelvin. That is, they tell you how much of a radiative imbalance, or power flux, the feedback creates in the climate system in response to a given temperature change. The reciprocal of a feedback tells you how much temperature change you’d get in response to a given forcing.

Water vapor is the largest feedback. Referring to this paper cited in the AR4 WG1 report:

• Brian J. Soden and Isaac M. Held, An assessment of climate feedbacks in coupled ocean-atmosphere models, Journal of Climate 19 (2006), 3354-3360.

you can see that climate models predict a range of water vapor feedbacks of 1.48 to 2.14 W/m2/K.

The second largest in magnitude is lapse rate feedback, -0.41 to -1.27 W/m2/K. However, water vapor and lapse rate feedbacks are often combined into a single feedback, because stronger water vapor feedbacks also tend to produce stronger lapse rate feedbacks. The combined water vapor+lapse rate feedback ranges from 0.81 to 1.20 W/m2/K.

Clouds are the next largest feedback, 0.18 to 1.18 W/m2/K. But as you can see, different models can predict very different cloud feedbacks. It is the largest feedback uncertainty.

After that comes the surface albedo feedback. Its range is 0.07 to 0.34 W/m2/K.

People don’t necessarily find feedback values intuitive. Since everyone wants to know what that means in terms of the climate, I’ll explain how to convert feedbacks into temperatures.

First, you have to assume a given amount of radiative forcing: a stronger greenhouse effect causes more warming. For reference, let’s consider a doubling of atmospheric CO2, which is estimated to create a greenhouse effect forcing of 4±0.3 W/m2. (The error bars represent the range of estimates I’ve seen, and aren’t any kind of statistical bound.) How much greenhouse warming? In the absence of feedbacks, about 1.2±0.1 °C of warming.

How much warming, including feedbacks? To convert a feedback to a temperature, add it to the so-called "Planck feedback" to get a combined feedback which accounts for the fact that hotter bodies radiate more infrared. Then divide it into the forcing and flip the sign to get the warming. Mathematically, this is….

JB: Whoa! Slow down! I’m glad you finally mentioned the "Planck feedback", because this is the mother of all feedbacks, and we should have talked about it first.

While the name "Planck feedback" sounds technical, it’s pathetically simple: hotter things radiate more heat, so they tend to cool down. Cooler things radiate less heat, so they tend to warm up. So this is a negative feedback. And this is what keeps our climate from spiralling out of control.

This is an utterly basic point that amateurs sometimes overlook — I did it myself at one stage, I’m embarrassed to admit. They say things like:

"Well, you listed a lot of feedback effects, and overall they give a positive feedback — so any bit of warming will cause more warming, while any bit of cooling will cause more cooling. But wouldn’t that mean the climate is unstable? Are you saying that the climate just happens to be perched at an unstable equilibrium, so that the slightest nudge would throw us into either an ice age or a spiral of ever-hotter weather? That’s absurdly unlikely! Climate science is a load of baloney!"

(Well, I didn’t actually say the last sentence: I realized I must be confused.)

The answer is that a hot Earth will naturally radiate away more heat, while a cold Earth will radiate away less. And this is enough to make the total feedback negative.

NU: Yes, the negative Planck feedback is crucial. Without this stabilizing feedback, which is always present for any thermodynamic body, any positive feedback would cause the climate to run away unstably. It’s so important that other feedbacks are often defined relative to it: people call the Planck feedback λ0, and they call the sum of the rest λ. Climatologists tend to take it for granted, and talk about just the non-Planck feedbacks, λ.

As a side note, the definition of feedbacks in climate science is somewhat confused; different papers have used different conventions, some in opposition to conventions used in other fields like engineering. For a discussion of some of the ways feedbacks have been treated in the literature, see:

• J. R. Bates, Some considerations of the concept of climate feedback, Quarterly Journal of the Royal Meteorological Society 133 (2007), 545-560.

JB: Okay. Sorry to slow you down like that, but we’re talking to a mixed crowd here.

So: you were saying how much it warms up when we apply a radiative forcing F, some number of watts per square meter. We could do this by turning up the dial on the Sun, or, more realistically, by pouring lots of carbon dioxide into the atmosphere to keep infrared radiation from getting out.

And you said: take the Planck feedback λ0, which is negative, and add to it the sum of all other feedbacks, which we call λ. Divide F by the result, and flip the sign to get the warming.

NU: Right. Mathematically, that’s

T = -F/(λ0+λ)

where

λ0 = -3.2 W/m^2/K

is the Planck feedback and λ is the sum of other feedbacks. Let’s look at the forcing from doubled CO2:

F = 4.3 W/m^2.

Here I’m using values taken from Soden and Held.

If the other feedbacks vanish (λ=0), this gives a "no-feedback" warming of T = 1.3 °C, which is about equal to the 1.2 °C that I mentioned above.

But we can then plug in other feedback values. For example, a water vapor feedback of 1.48-2.14 W/m^2/K will produce warmings of 2.5 to 4.1 °C, compared to only 1.3 °C without water vapor feedback. This is a huge temperature amplification. If you consider the combined water vapor+lapse rate feedback, that’s still a warming of 1.8 to 2.2 °C, almost a doubling of the "bare" CO2 greenhouse warming.
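If you want to check that arithmetic yourself, here is a tiny Python sketch of it, using only the numbers quoted above (nothing here comes from a climate model, it’s just the formula T = -F/(λ0+λ)):

    # Warming from a forcing F and feedbacks, using T = -F/(λ0+λ)
    F = 4.3            # forcing from doubled CO2, W/m^2 (the Soden & Held value above)
    lambda_0 = -3.2    # Planck feedback, W/m^2/K

    def warming(lam):
        """Equilibrium warming for a given sum of non-Planck feedbacks, in W/m^2/K."""
        return -F / (lambda_0 + lam)

    print(warming(0.0))    # no other feedbacks: about 1.3 °C
    print(warming(1.48))   # low end of the water vapor feedback: about 2.5 °C
    print(warming(2.14))   # high end of the water vapor feedback: about 4.1 °C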

JB: Thanks for the intro to feedbacks — very clear. So, it seems the "take-home message", as annoying journalists like to put it, is this. When we double the amount of carbon dioxide in the atmosphere, as we’re well on the road to doing, we should expect significantly more than the 1.2 degree Celsius rise in temperature that we’d get without feedbacks.

What are the best estimates for exactly how much?

NU: The IPCC currently estimates a range of 2 to 4.5 °C for the overall climate sensitivity (the warming due to a doubling of CO2), compared to the 1.2 °C warming with no feedbacks. See Section 8.6 of the AR4 WG1 report for model estimates and Section 9.6 for observational estimates. An excellent review article on climate sensitivity is:

• Reto Knutti and Gabriele C. Hegerl, The equilibrium sensitivity of the Earth’s temperature to radiation changes, Nature Geoscience 1 (2008), 735-748.

I also recommend this review article on linear feedback analysis:

• Gerard Roe, Feedbacks, timescales, and seeing red, Annual Review of Earth and Planetary Sciences 37 (2009), 93-115.

But note that there are different feedback conventions; Roe’s λ is the negative of the reciprocal of the total feedback λ0+λ in the Soden & Held convention that I use, i.e. it’s a direct proportionality constant between forcing and temperature.

JB: Okay, I’ll read those.

Here’s another obvious question. You’ve listed estimates of feedbacks based on theoretical calculations. But what’s the evidence that these theoretical feedbacks are actually right?

NU: As I mentioned, there are also observational estimates of feedbacks. There are two approaches: to estimate the total feedback acting in the climate system, or to estimate all the individual feedbacks (that we know about). The former doesn’t require us to know what all the individual feedbacks are, but the latter allows us to verify our physical understanding of the individual feedback processes. I’m more familiar with the total feedback method, and have published my own simple estimate as a byproduct of an uncertainty analysis about the future ocean circulation:

• Nathan M. Urban and Klaus Keller, Probabilistic hindcasts and projections of the coupled climate, carbon cycle and Atlantic meridional overturning circulation system: a Bayesian fusion of century-scale observations with a simple model, Tellus A, July 16, 2010.

I will stick to discussing this method. To make a long story short, the observational and model estimates generally agree to within their estimated uncertainty bounds. But let me explain a bit more about where the observational estimates come from.

To estimate the total feedback, you first estimate the radiative forcing of the system, based on historic data on greenhouse gases, volcanic and industrial aerosols, black carbon (soot), solar activity, and other factors which can change the Earth’s radiative balance. Then you predict how much warming you should get from that forcing using a climate model, and tune the model’s feedback until it matches the observed warming. The tuned feedback factor is your observational estimate.

As I said earlier, there is no totally model-independent way of estimating feedbacks — you have to use some formula to turn forcings into temperatures. There is a balance between using simple formulas with few assumptions, or more realistic models with assumptions that are harder to verify. So far people have mostly used simple models, not only for transparency but also because they’re fast enough, and have few enough free parameters, to undertake a comprehensive uncertainty analysis.
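Just to make the procedure concrete, here is a toy sketch of that tuning loop in Python. It is not the model or data from any of the papers discussed here: just a zero-dimensional energy balance model with an invented heat capacity, an invented forcing ramp, and fake "observations", to show the shape of the calculation.

    import numpy as np

    # Toy zero-dimensional energy balance model: C dT/dt = F(t) + (λ0 + λ) T.
    # Every number below except λ0 is invented for illustration.
    lambda_0 = -3.2               # Planck feedback, W/m^2/K
    C = 8.0e8                     # effective heat capacity, J/m^2/K (roughly 200 m of ocean)
    year = 3.156e7                # seconds
    forcing = np.linspace(0.0, 2.5, 150)   # made-up forcing ramp over 150 years, W/m^2

    def simulate(lam):
        """Surface temperature anomaly over time for a given non-Planck feedback."""
        T, history = 0.0, []
        for f in forcing:
            T += (year / C) * (f + (lambda_0 + lam) * T)
            history.append(T)
        return np.array(history)

    # Fake "observations", generated with λ = 1.6 plus noise:
    rng = np.random.default_rng(0)
    observed = simulate(1.6) + rng.normal(0.0, 0.05, len(forcing))

    # "Tune the model's feedback until it matches the observed warming":
    candidates = np.linspace(0.0, 2.5, 251)
    best = min(candidates, key=lambda lam: np.sum((simulate(lam) - observed) ** 2))
    print(best)                   # recovers something close to the 1.6 we put in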

What I’ve described is the "forward model" approach, where you run a climate model forward in time and match its output to data. For a trivial linear model of the climate, you can do something even simpler, which is the closest to a "model independent" calculation you can get: statistically regress forcing against temperature. This is the approach taken by, for example:

• Piers M. de F. Forster and Jonathan M. Gregory, The climate sensitivity and its components diagnosed from Earth radiation budget data, Journal of Climate 19 (2006), 39-52.
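Here is a minimal sketch of that regression idea, again with invented numbers and ignoring ocean heat uptake entirely, so it only illustrates the bookkeeping rather than reproducing that paper’s analysis:

    import numpy as np

    # Fake data obeying the equilibrium relation F = -(λ0 + λ) T, plus noise.
    lambda_total = -1.6                                   # λ0 + λ, W/m^2/K (invented)
    rng = np.random.default_rng(1)
    forcing = np.linspace(0.0, 3.0, 100)                  # W/m^2
    temperature = -forcing / lambda_total + rng.normal(0.0, 0.1, 100)   # K

    slope = np.polyfit(temperature, forcing, 1)[0]   # regress forcing against temperature
    print(slope)          # estimates -(λ0 + λ), about 1.6 W/m^2/K
    print(4.3 / slope)    # implied warming for doubled CO2, about 2.7 °C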

In the "total feedback" forward model approach, there are two major confounding factors which prevent us from making precise feedback estimates. One is that we’re not sure what the forcing is. Although we have good measurements of trace greenhouse gases, there is an important cooling effect produced by air pollution. Industrial emissions create a haze of aerosols in the atmosphere which reflects sunlight and cools the planet. While this can be measured, this direct effect is also supplemented by a far less understood indirect effect: the aerosols can influence cloud formation, which has its own climate effect. Since we’re not sure how strong that is, we’re not sure whether there is a strong or a weak net cooling effect from aerosols. You can explain the observed global warming with a strong feedback whose effects are partially cancelled by a strong aerosol cooling, or with a weak feedback along with weak aerosol cooling. Without precisely knowing one, you can’t precisely determine the other.

The other confounding factor is the rate at which the ocean takes up heat from the atmosphere. The oceans are, by far, the climate system’s major heat sink. The rate at which heat mixes into the ocean determines how quickly the surface temperature responds to a forcing. There is a time lag between applying a forcing and seeing the full response realized. Any comparison of forcing to response needs to take that lag into account. One way to explain the surface warming is with a strong feedback but a lot of heat mixing down into the deeper ocean, so you don’t see all the surface warming at once. Or you can do it with a weak feedback, and most of the heat staying near the surface, so you see the surface warming quickly. For a discussion, see:

• Nathan M. Urban and Klaus Keller, Complementary observational constraints on climate sensitivity, Geophysical Research Letters 36 (2009), L04708.

We don’t know precisely what this rate is, since it’s been hard to monitor the whole ocean over long time periods (and there isn’t exactly a single "rate", either).

This is getting long enough, so I’m going to skip over a discussion of individual feedback estimates. These have been applied to various specific processes, such as water vapor feedback, and involve comparing, say, how the water vapor content of the atmosphere has changed to how the temperature of the atmosphere has changed. I’m also skipping a discussion of paleoclimate estimates of past feedbacks. It follows the usual formula of "compare the estimated forcing to the reconstructed temperature response", but there are complications because the boundary conditions were different (different surface albedo patterns, variations in the Earth’s orbit, or even continental configurations if you go back far enough) and the temperatures can only be indirectly inferred.

JB: Thanks for the summary of these complex issues. Clearly I’ve got my reading cut out for me.

What do you say to people like Lindzen, who say negative feedbacks due to clouds could save the day?

NU: Climate models tend to predict a positive cloud feedback, but it’s certainly possible that the net cloud feedback could be negative. However, Lindzen seems to think it’s so negative that it makes the total climate feedback negative, outweighing all positive feedbacks. That is, he claims a climate sensitivity even lower than the "bare" no-feedback value of 1.2 °C. I think Lindzen’s work has its own problems (there are published responses to his papers with more details). But generally speaking, independent of Lindzen’s specific arguments, I don’t think such a low climate sensitivity is supportable by data. It would be difficult to reproduce the modern instrumental atmospheric and ocean temperature data with such a low sensitivity. And it would be quite difficult to explain the large changes in the Earth’s climate over its geologic history if there were a stabilizing feedback that strong. The feedbacks I’ve mentioned generally act in response to any warming or cooling, not just from the CO2 greenhouse effect, so a strongly negative feedback would tend to prevent the climate from changing much at all.

JB: Yes, ever since the Antarctic froze over about 12 million years ago, it seems the climate has become increasingly "jittery":



As soon as I saw the incredibly jagged curve at the right end of this graph, I couldn’t help but think that some positive feedback is making it easy for the Earth to flip-flop between warmer and colder states. But then I wondered what "tamed" this positive feedback and kept the temperature between certain limits. I guess that the negative Planck feedback must be involved.

NU: You have to be careful: in the figure you cite, the resolution of the data decreases as you go back in time, so you can’t see all of the variability that could have been present. A lot of the high frequency variability (< 100 ky) is averaged out, so the more recent glacial-interglacial oscillations in temperature would not have been easily visible in the earlier data if they had occurred back then.

That being said, there has been a real change in variability over the time span of that graph. As the climate cooled from a "greenhouse" to an "icehouse" over the Cenozoic era, the glacial-interglacial cycles were able to start. These big swings in climate are a result of ice albedo feedback, when large continental ice sheets form and disintegrate, and weren’t present in earlier greenhouse climates. Also, as you can see from the last 5 million years:



the glacial-interglacial cycles themselves have gotten bigger over time (and the dominant period changed from 41 to 100 ky).

As a side note, the observation that glacial cycles didn’t occur in hot climates highlights the fact that climate sensitivity can be state-dependent. The ice albedo feedback, for example, vanishes when there is no ice. This is a subtle point when using paleoclimate data to constrain the climate sensitivity, because the sensitivity at earlier times might not be the same as the sensitivity now. Of course, they are related to each other, and you can make inferences about one from the other with additional physical reasoning. I do stand by my previous remarks: I don’t think you can explain past climate if the (modern) sensitivity is below 1 °C.

JB: I have one more question about feedbacks. It seems that during the last few glacial cycles, there’s sometimes a rise in temperature before a rise in CO2 levels. I’ve heard people offer this explanation: warming oceans release CO2. Could that be another important feedback?

NU: Temperature affects both land and ocean carbon sinks, so it is another climate feedback (warming changes the amount of CO2 remaining in the atmosphere, which then changes temperature). The ocean is a very large repository of carbon, and both absorbs CO2 from, and emits CO2 to, the atmosphere. Temperature influences the balance between absorption and emission. One obvious influence is through the "solubility pump": CO2 dissolves less readily in warmer water, so as temperatures rise, the ocean can absorb carbon from the atmosphere less effectively. This is related to Henry’s law in chemistry.

JB: Henry’s law? Hmm, let me look up the Wikipedia article on Henry’s law. Okay, it basically just says that at any fixed temperature, the amount of carbon dioxide that’ll dissolve in water is proportional to the amount of carbon dioxide in the air. But what really matters for us is that when it gets warmer, this constant of proportionality goes down, so the water holds less CO2. Like you said.
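Let me put rough numbers on that, just for my own benefit. These are generic textbook values for CO2 dissolving in water, not anything tuned to real seawater chemistry:

    from math import exp

    # Henry's law: dissolved CO2 = k_H(T) × partial pressure of CO2.
    # Rough generic values for CO2 in water (assumptions, not ocean chemistry):
    k_H_298 = 0.034        # mol per litre per atm at 298 K
    van_t_hoff = 2400.0    # K; sets how fast k_H falls as the water warms

    def k_H(T):
        return k_H_298 * exp(van_t_hoff * (1.0 / T - 1.0 / 298.0))

    p_co2 = 390e-6         # atm, roughly today's atmospheric level
    for T in (288.0, 290.0, 292.0):
        print(T, k_H(T) * p_co2)   # dissolved CO2 drops a few percent per degree of warming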

NU: But this is not the only process going on. Surface warming leads to more stratification of the upper ocean layers and can reduce the vertical mixing of surface waters into the depths. This is important to the carbon cycle because some of the dissolved CO2 which is in the surface layers can return to the atmosphere, as part of an equilibrium exchange cycle. However, some of that carbon is also transported to deep water, where it can no longer exchange with the atmosphere, and can be sequestered there for a long time (about a millennium). If you reduce the rate at which carbon is mixed downward, so that relatively more carbon accumulates in the surface layers, you reduce the immediate ability of the ocean to store atmospheric CO2 in its depths. This is another potential feedback.

Another important process, which is more of a pure carbon cycle feedback than a climate feedback, is carbonate buffering chemistry. The straight Henry’s law calculation doesn’t tell the whole story of how carbon ends up in the ocean, because there are chemical reactions going on. CO2 reacts with carbonate ions and seawater to produce bicarbonate ions. Most of the dissolved carbon in the surface waters (about 90%) exists as bicarbonate; only about 0.5% is dissolved CO2, and the rest is carbonate. This "bicarbonate buffer" greatly enhances the ability of the ocean to absorb CO2 from the atmosphere beyond what simple thermodynamic arguments alone would suggest. A keyword here is the "Revelle factor", which is the ratio of the fractional change in dissolved CO2 to the fractional change in total dissolved carbon in the ocean. (A Revelle factor of 10, which is about the ocean average, means that a 10% increase in CO2 leads to a 1% increase in dissolved inorganic carbon.)
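In symbols, that parenthetical remark is just ΔDIC/DIC ≈ (1/R) × ΔpCO2/pCO2, where R is the Revelle factor; a two-line illustration:

    R = 10.0                    # a typical open-ocean Revelle factor
    relative_co2_change = 0.10  # a 10% rise in seawater CO2...
    print(relative_co2_change / R)   # ...gives roughly a 1% rise in dissolved inorganic carbon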

As more CO2 is added to the ocean, chemical reactions consume carbonate and produce hydrogen ions, leading to ocean acidification. You have already discussed this on your blog. In addition to acidification, the chemical buffering effect is lessened (the Revelle factor increased) when there are fewer carbonate ions available to participate in reactions. This weakens the ocean carbon sink. This is a feedback, but it is a purely carbon cycle feedback rather than a climate feedback, since only carbonate chemistry is involved. There can also be an indirect climate feedback, if climate change alters the spatial distribution of the Revelle factor in the ocean by changing the ocean’s circulation.

For more on this, try Section 7.3.4 of the IPCC AR4 WG1 report and Sections 8.3 and 10.2 of:

• J. L. Sarmiento and N. Gruber, Ocean Biogeochemical Dynamics, Princeton U. Press, Princeton, 2006.

JB: I’m also curious about other feedbacks. For example, I’ve heard that methane is an even more potent greenhouse gas than CO2, though it doesn’t hang around as long. And I’ve heard that another big positive feedback mechanism might be the release of methane from melting permafrost. Or maybe even from "methane clathrates" down at the bottom of the ocean! There’s a vast amount of methane down there, locked in cage-shaped ice crystals. As the ocean warms, some of this could be released. Some people even worry that this effect could cause a "tipping point" in the Earth’s climate. But I won’t force you to tell me your opinions on this — you’ve done enough for one week.

Instead, I just want to make a silly remark about hypothetical situations where there’s so much positive feedback that it completely cancels the Planck feedback. You see, as a mathematician, I couldn’t help wondering about this formula:

T = -F/(λ0+λ)

The Planck feedback λ0 is negative. The sum of all the other feedbacks, namely λ, is positive. So what if they add up to zero? Then we’d be dividing by zero! When I last checked, that was a no-no.

Here’s my guess. If λ0+λ becomes zero, the climate loses its stability: it can drift freely. A slight tap can push it arbitrarily far, like a ball rolling on a flat table.

And if λ were actually big enough to make λ0+λ positive, the climate would be downright unstable, like a ball perched on top of a hill!

But all this is only in some linear approximation. In reality, a hot object radiates power proportional to the fourth power of its temperature. So even if the Earth’s climate is unstable in some linear approximation, the Planck feedback due to radiation will eventually step in and keep the Earth from heating up, or cooling down, indefinitely.
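By the way, here’s a back-of-envelope check on where a number like λ0 ≈ -3.2 W/m^2/K comes from: differentiate the Stefan–Boltzmann law at the Earth’s effective emission temperature. This treats the Earth as a bare blackbody, so it only lands in the right ballpark:

    sigma = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
    T_eff = 255.0     # Earth's effective emission temperature, K

    # Outgoing radiation is sigma * T^4, so linearizing gives a feedback of -4 * sigma * T^3:
    print(-4.0 * sigma * T_eff**3)   # about -3.8 W/m^2/K, the same ballpark as -3.2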

NU: Yes, we do have to be careful to remember that the formula above is obtained from a linear feedback analysis. For a discussion of climate sensitivity in a nonlinear analysis to second order, see:

• I. Zaliapin and M. Ghil, Another look at climate sensitivity, Nonlinear Processes in Geophysics 17 (2010), 113-122.

JB: Hmm, there’s some nice catastrophe theory in there — I see a fold catastrophe in Figure 5, which gives a "tipping point".

Okay. Thanks for everything, and we’ll continue next week!


The significant problems we have cannot be solved at the same level of thinking with which we created them. – Albert Einstein


This Week’s Finds (Week 301)

27 August, 2010

The first 300 issues of This Week’s Finds were devoted to the beauty of math and physics. Now I want to bite off a bigger chunk of reality. I want to talk about all sorts of things, but especially how scientists can help save the planet. I’ll start by interviewing some scientists with different views on the challenges we face — including some who started out in other fields, because I’m trying to make that transition myself.

By the way: I know “save the planet” sounds pompous. As George Carlin joked: “Save the planet? There’s nothing wrong with the planet. The planet is fine. The people are screwed.” (He actually put it a bit more colorfully.)

But I believe it’s more accurate when he says:

I think, to be fair, the planet probably sees us as a mild threat. Something to be dealt with. And I am sure the planet will defend itself in the manner of a large organism, like a beehive or an ant colony, and muster a defense.

I think we’re annoying the biosphere. I’d like us to become less annoying, both for its sake and our own. I actually considered using the slogan how scientists can help humans be less annoying — but my advertising agency ran a focus group, and they picked how scientists can help save the planet.

Besides interviewing people, I want to talk about where we stand on various issues, and what scientists can do. It’s a very large task, so I’m really hoping lots of you reading this will help out. You can explain stuff, correct mistakes, and point me to good sources of information. With a lot of help from Andrew Stacey, I’m starting a wiki where we can collect these pointers. I’m hoping it will grow into something interesting.

But today I’ll start with a brief overview, just to get things rolling.

In case you haven’t noticed: we’re heading for trouble in a number of ways. Our last two centuries were dominated by rapid technology change and a rapidly soaring population:

The population is still climbing fast, though the percentage increase per year is dropping. Energy consumption per capita is also rising. So, from 1980 to 2007 the world-wide usage of power soared from 10 to 16 terawatts.

Most of this power — roughly 80% of it — comes from fossil fuels. So, we’re putting huge amounts of carbon dioxide into the air: 30 billion metric tons in 2007. So, the carbon dioxide concentration of the atmosphere is rising at a rapid clip: from about 290 parts per million before the industrial revolution, to about 370 in the year 2000, to about 390 now:



 

As you’d expect, temperatures are rising:



 

But how much will they go up? The ultimate amount of warming will largely depend on the total amount of carbon dioxide we put into the air. The research branch of the National Academy of Sciences recently put out a report on these issues:

• National Research Council, Climate Stabilization Targets: Emissions, Concentrations, and Impacts over Decades to Millennia, 2010.

Here are their estimates:



 

You’ll note there’s lots of uncertainty, but a rough rule of thumb is that each doubling of carbon dioxide will raise the temperature around 3 degrees Celsius. Of course people love to argue about these things: you can find reasonable people who’ll give a number anywhere between 1.5 and 4.5 °C, and unreasonable people who say practically anything. We’ll get into this later, I’m sure.
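To turn that rule of thumb into numbers, here’s a standard back-of-envelope, assuming the warming grows with the logarithm of the concentration (a consequence of the roughly logarithmic forcing), and using the 3 °C per doubling and 290 ppm figures above:

    from math import log2

    sensitivity = 3.0        # rough warming per doubling of CO2, in °C
    pre_industrial = 290.0   # ppm, the figure used above

    def eventual_warming(ppm):
        return sensitivity * log2(ppm / pre_industrial)

    print(eventual_warming(390.0))   # today's level: about 1.3 °C, once the climate catches up
    print(eventual_warming(580.0))   # doubled CO2: 3 °C, by construction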

But anyway: if we keep up “business as usual”, it’s easy to imagine us doubling the carbon dioxide sometime this century, so we need to ask: what would a world 3 °C warmer be like?

It doesn’t sound like much… until you realize that the Earth was only about 6 °C colder during the last ice age, and the Antarctic had no ice the last time the Earth was about 4 °C warmer. You also need to bear in mind the shocking suddenness of the current rise in carbon dioxide levels:



You can see several ice ages here — or technically, ‘glacial periods’. Carbon dioxide concentration and temperature go hand in hand, probably due to some feedback mechanisms that make each influence the other. But the scary part is the vertical line on the right where the carbon dioxide shoots up from 290 to 390 parts per million — instantaneously from a geological point of view, and to levels not seen for a long time. Species can adapt to slow climate changes, but we’re trying a radical experiment here.

But what, specifically, could be the effects of a world that’s 3 °C warmer? You can get some idea from the National Research Council report. Here are some of their predictions. I think it’s important to read these, to see that bad things will happen, but the world will not end. Psychologically, it’s easy to avoid taking action if you think there’s no problem — but it’s also easy if you think you’re doomed and there’s no point.

Between their predictions (in boldface) I’ve added a few comments of my own. These comments are not supposed to prove anything. They’re just anecdotal examples of the kind of events the report says we should expect.

For 3 °C of global warming, 9 out of 10 northern hemisphere summers will be “exceptionally warm”: warmer in most land areas than all but about 1 of the summers from 1980 to 2000.

This summer has certainly been exceptionally warm: for example, worldwide, it was the hottest June in recorded history, while July was the second hottest, beat out only by 2003. Temperature records have been falling like dominos. This is a taste of the kind of thing we might see.

Increases of precipitation at high latitudes and drying of the already semi-arid regions are projected with increasing global warming, with seasonal changes in several regions expected to be about 5-10% per degree of warming. However, patterns of precipitation show much larger variability across models than patterns of temperature.

Back home in southern California we’re in our fourth year of drought, which has led to many wildfires.

Large increases in the area burned by wildfire are expected in parts of Australia, western Canada, Eurasia and the United States.

We are already getting some unusually intense fires: for example, the Black Saturday bushfires that ripped through Victoria in February 2009, the massive fires in Greece in the summer of 2007, and the hundreds of wildfires that broke out in Russia this July.

Extreme precipitation events — that is, days with the top 15% of rainfall — are expected to increase by 3-10% per degree of warming.

The extent to which these events cause floods, and the extent to which these floods cause serious damage, will depend on many complex factors. But today it’s hard not to think about the floods in Pakistan, which left about 20 million homeless, and ravaged an area equal to that of California.

In many regions the amount of flow in streams and rivers is expected to change by 5-15% per degree of warming, with decreases in some areas and increases in others.

The total number of tropical cyclones should decrease slightly or remain unchanged. Their wind speed is expected to increase by 1-4% per degree of warming.

It’s a bit counterintuitive that warming could decrease the number of cyclones, while making them stronger. I’ll have to learn more about this.

The annual average sea ice area in the Arctic is expected to decrease by 15% per degree of warming, with more decrease in the summertime.

The area of Arctic ice reached a record low in the summer of 2007, and the fabled Northwest Passage opened up for the first time in recorded history. Then the ice area bounced back. This year it was low again… but what matters more is the overall trend:



 

Global sea level has risen by about 0.2 meters since 1870. The sea level rise by 2100 is expected to be at least 0.6 meters due to thermal expansion and loss of ice from glaciers and small ice caps. This could be enough to permanently displace as many as 3 million people — and raise the risk of floods for many millions more. Ice loss is also occurring in parts of Greenland and Antarctica, but the effect on sea level in the next century remains uncertain.

Up to 2 degrees of global warming, studies suggest that crop yield gains and adaptation, especially at high latitudes, could balance losses in tropical and other regions. Beyond 2 degrees, studies suggest a rise in food prices.

The first sentence there is the main piece of good news — though not if you’re a poor farmer in central Africa.

Increased carbon dioxide also makes the ocean more acidic and lowers the ability of many organisms to make shells and skeleta. Seashells, coral, and the like are made of aragonite, one of the two crystal forms of calcium carbonate. North polar surface waters will become under-saturated for aragonite if the level of carbon dioxide in the atmosphere rises to 400-450 parts per million. Then aragonite will tend to dissolve, rather than form from seawater. For south polar surface waters, this effect will occur at 500-660 ppm. Tropical surface waters and deep ocean waters are expected to remain supersaturated for aragonite throughout the 21st century, but coral reefs may be negatively impacted.

Coral reefs are also having trouble due to warming oceans. For example, this summer there was a mass dieoff of corals off the coast of Indonesia due to ocean temperatures that were 4 °C higher than average.

Species are moving toward the poles to keep cool: the average shift over many types of terrestrial species has been 6 kilometers per decade. The rate of extinction of species will be enhanced by climate change.

I have a strong fondness for the diversity of animals and plants that grace this planet, so this particularly perturbs me. The report does not venture a guess for how many species may go extinct due to climate change, probably because it’s hard to estimate. However, it states that the extinction rate is now roughly 500 times what it was before humans showed up. The extinction rate is measured in extinctions per million years per species. For mammals, it’s shot up from roughly 0.1-0.5 to roughly 50-200. That’s what I call annoying the biosphere!

So, that’s a brief summary of the problems that carbon dioxide emissions may cause. There’s just one more thing I want to say about this now.

Once carbon dioxide is put into the atmosphere, about 50% of it will stay there for decades. About 30% of it will stay there for centuries. And about 20% will stay there for thousands of years:



This particular chart is based on some 1993 calculations by Wigley. Later calculations confirm this idea: the carbon we burn will haunt our skies essentially forever:

• Mason Inman, Carbon is forever, Nature Reports Climate Change, 20 November 2008.

This is why we’re in serious trouble. In the above article, James Hansen puts it this way:

Because of this long CO2 lifetime, we cannot solve the climate problem by slowing down emissions by 20% or 50% or even 80%. It does not matter much whether the CO2 is emitted this year, next year, or several years from now. Instead … we must identify a portion of the fossil fuels that will be left in the ground, or captured upon emission and put back into the ground.

But I think it’s important to be more precise. We can put off global warming by reducing carbon dioxide emissions, and that may be a useful thing to do. But to prevent it, we have to cut our usage of fossil fuels to a very small level long before we’ve used them up.



Theoretically, another option is to quickly deploy new technologies to suck carbon dioxide out of the air, or cool the planet in other ways. But there’s almost no chance such technologies will be practical soon enough to prevent significant global warming. They may become important later on, after we’ve already screwed things up. We may be miserable enough to try them, even though they may carry significant risks of their own.

So now, some tough questions:

If we decide to cut our usage of fossil fuels dramatically and quickly, how can we do it? How should we do it? What’s the least painful way? Or should we just admit that we’re doomed to global warming and learn to live with it, at least until we develop technologies to reverse it?

And a few more questions, just for completeness:

Could this all be just a bad dream — or more precisely, a delusion of some sort? Could it be that everything is actually fine? Or at least not as bad as you’re saying?

I won’t attempt to answer any of these now. We’ll have to keep coming back to them, over and over.

So far I’ve only talked about carbon dioxide emissions. There are lots of other problems we should tackle, too! But presumably many of these are just symptoms of some deeper underlying problem. What is this deeper problem? I’ve been trying to figure that out for years. Is there any way to summarize what’s going on, or is it just a big complicated mess?

Here’s my attempt at a quick summary: the human race makes big decisions based on an economic model that ignores many negative externalities.

A ‘negative externality’ is, very roughly, a way in which my actions impose a cost on you, for which I don’t pay any price.

For example: suppose I live in a high-rise apartment and my toilet breaks. Instead of fixing it, I realize that I can just use a bucket — and throw its contents out the window! Whee! If society has no mechanism for dealing with people like me, I pay no price for doing this. But you, down there, will be very unhappy.

This isn’t just theoretical. Once upon a time in Europe there were few private toilets, and people would shout “gardyloo!” before throwing their waste down to the streets below. In retrospect that seems disgusting, but many of the big problems that afflict us now can be seen as the result of equally disgusting externalities. For example:

Carbon dioxide pollution caused by burning fossil fuels. If the expected costs of global warming and ocean acidification were included in the price of fossil fuels, other sources of energy would more quickly become competitive. This is the idea behind a carbon tax or a ‘cap-and-trade program’ where companies pay for permits to put carbon dioxide into the atmosphere.

Dead zones. Put too much nitrogen and phosphorus in the river, and lots of algae will grow in the ocean near the river’s mouth. When the algae dies and rots, the water runs out of dissolved oxygen, and fish cannot live there. Then we have a ‘dead zone’. Dead zones are expanding and increasing in number. For example, there’s one about 20,000 square kilometers in size near the mouth of the Mississippi River. Hog farming, chicken farming and runoff from fertilized crop lands are largely to blame.

Overfishing. Since there is no ownership of fish, everyone tries to catch as many fish as possible, even though this is depleting fish stocks to the point of near-extinction. There’s evidence that populations of all big predatory ocean fish have dropped 90% since 1950. Populations of cod, bluefin tuna and many other popular fish have plummeted, despite feeble attempts at regulation.

Species extinction due to habitat loss. Since the economic value of intact ecosystems has not been fully reckoned, in many parts of the world there’s little price to pay for destroying them.

Overpopulation. Rising population is a major cause of the stresses on our biosphere, yet it costs less to have your own child than to adopt one. (However, a pilot project in India is offering cash payments to couples who put off having children for two years after marriage.)

One could go on; I haven’t even bothered to mention many well-known forms of air and water pollution. The Acid Rain Program in the United States is an example of how people eliminated an externality: they imposed a cap-and-trade system on sulfur dioxide pollution.

Externalities often arise when we treat some resource as essentially infinite — for example fish, or clean water, or clean air. We thus impose no cost for using it. This is fine at first. But because this resource is free, we use more and more — until it no longer makes sense to act as if we have an infinite amount. As a physicist would say, the approximation breaks down, and we enter a new regime.

This is happening all over the place now. We have reached the point where we need to treat most resources as finite and take this into account in our economic decisions. We can’t afford so many externalities. It is irrational to let them go on.

But what can you do about this? Or what can I do?

We can do the things anyone can do. Educate ourselves. Educate our friends. Vote. Conserve energy. Don’t throw buckets of crap out of apartment windows.

But what can we do that maximizes our effectiveness by taking advantage of our special skills?

Starting now, a large portion of This Week’s Finds will be the continuing story of my attempts to answer this question. I want to answer it for myself. I’m not sure what I should do. But since I’m a scientist, I’ll pose the question a bit more broadly, to make it a bit more interesting.

How scientists can help save the planet — that’s what I want to know.


Addendum: In the new This Week’s Finds, you can often find the source for a claim by clicking on the nearest available link. This includes the figures. Four of the graphs in this issue were produced by Robert A. Rohde and more information about them can be found at Global Warming Art.


During the journey we commonly forget its goal. Almost every profession is chosen as a means to an end but continued as an end in itself. Forgetting our objectives is the most frequent act of stupidity. — Friedrich Nietzsche


This Week’s Finds in Mathematical Physics (Week 300)

11 August, 2010

This is the last of the old series of This Week’s Finds. Soon the new series will start, focused on technology and environmental issues — but still with a hefty helping of math, physics, and other science.

When I decided to do something useful for a change, I realized that the best way to start was by interviewing people who take the future and its challenges seriously, but think about it in very different ways. So far, I’ve done interviews with:

Tim Palmer on climate modeling and predictability.

Thomas Fischbacher on sustainability and permaculture.

Eliezer Yudkowsky on artificial intelligence and the art of rationality.

I hope to do more. I think it’ll be fun having This Week’s Finds be a dialogue instead of a monologue now and then.

Other things are changing too. I started a new blog! If you’re interested in how scientists can help save the planet, I hope you visit:

1) Azimuth, https://johncarlosbaez.wordpress.com

This is where you can find This Week’s Finds, starting now.

Also, instead of teaching math in hot dry Riverside, I’m now doing research at the Centre for Quantum Technologies in hot and steamy Singapore. This too will be reflected in the new This Week’s Finds.

But now… the grand finale of This Week’s Finds in Mathematical Physics!

I’d like to take everything I’ve been discussing so far and wrap it up in a nice neat package. Unfortunately that’s impossible – there are too many loose ends. But I’ll do my best: I’ll tell you how to categorify the Riemann zeta function. This will give us a chance to visit lots of our old friends one last time: the number 24, string theory, zeta functions, torsors, Joyal’s theory of species, groupoidification, and more.

Let me start by telling you how to count.

I’ll assume you already know how to count elements of a set, and move right along to counting objects in a groupoid.

A groupoid is a gadget with a bunch of objects and a bunch of isomorphisms between them. Unlike an element of a set, an object of a groupoid may have symmetries: that is, isomorphisms between it and itself. And unlike an element of a set, an object of a groupoid doesn’t always count as “1 thing”: when it has n symmetries, it counts as “1/nth of a thing”. That may seem strange, but it’s really right. We also need to make sure not to count isomorphic objects as different.

So, to count the objects in our groupoid, we go through it, take one representative of each isomorphism class, and add 1/n to our count when this representative has n symmetries.

Let’s see how this works. Let’s start by counting all the n-element sets!

Now, you may have thought there were infinitely many sets with n elements, and that’s true. But remember: we’re not counting the set of n-element sets – that’s way too big. So big, in fact, that people call it a “class” rather than a set! Instead, we’re counting the groupoid of n-element sets: the groupoid with n-element sets as objects, and one-to-one and onto functions between these as isomorphisms.

All n-element sets are isomorphic, so we only need to look at one. It has n! symmetries: all the permutations of n elements. So, the answer is 1/n!.

That may seem weird, but remember: in math, you get to make up the rules of the game. The only requirements are that the game be consistent and profoundly fun – so profoundly fun, in fact, that it seems insulting to call it a mere “game”.

Now let’s be more ambitious: let’s count all the finite sets. In other words, let’s work out the cardinality of the groupoid where the objects are all the finite sets, and the isomorphisms are all the one-to-one and onto functions between these.

There’s only one 0-element set, and it has 0! symmetries, so it counts for 1/0!. There are tons of 1-element sets, but they’re all isomorphic, and they each have 1! symmetries, so they count for 1/1!. Similarly the 2-element sets count for 1/2!, and so on. So the total count is

1/0! + 1/1! + 1/2! + … = e

The base of the natural logarithm is the number of finite sets! You learn something new every day.
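If you like checking such things numerically, the sum converges very quickly:

    from math import factorial, e

    # One term per isomorphism class of finite set: an n-element set counts for 1/n!.
    total = sum(1.0 / factorial(n) for n in range(20))
    print(total, e)   # both print 2.718281828...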

Spurred on by our success, you might want to find a groupoid whose cardinality is π. It’s not hard to do: you can just find a groupoid whose cardinality is 3, and a groupoid whose cardinality is .1, and a groupoid whose cardinality is .04, and so on, and lump them all together to get a groupoid whose cardinality is 3.14… But this is a silly solution: it doesn’t shed any light on the nature of π.

I don’t want to go into it in detail now, but the previous problem really does shed light on the nature of e: it explains why this number is related to combinatorics, and it gives a purely combinatorial proof that the derivative of e^x is e^x, and lots more. Try these books to see what I mean:

2) Herbert Wilf, Generatingfunctionology, Academic Press, Boston, 1994. Available for free at http://www.cis.upenn.edu/~wilf/.

3) F. Bergeron, G. Labelle, and P. Leroux, Combinatorial Species and Tree-Like Structures, Cambridge, Cambridge U. Press, 1998.

For example: if you take a huge finite set, and randomly pick a permutation of it, the chance every element is mapped to a different element is close to 1/e. It approaches 1/e in the limit where the set gets larger and larger. That’s well-known – but the neat part is how it’s related to the cardinality of the groupoid of finite sets.
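If you don’t believe it, here’s a thirty-second simulation:

    import random

    def no_fixed_points(n):
        perm = list(range(n))
        random.shuffle(perm)
        return all(perm[i] != i for i in range(n))

    trials = 100000
    hits = sum(no_fixed_points(100) for _ in range(trials))
    print(hits / trials)   # close to 1/e = 0.3678...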

Anyway, I have not succeeded in finding a really illuminating groupoid whose cardinality is π, but recently James Dolan found a nice one whose cardinality is π^2/6, and I want to lead up to that.

Here’s a not-so-nice groupoid whose cardinality is π^2/6. You can build a groupoid as the “disjoint union” of a collection of groups. How? Well, you can think of a group as a groupoid with one object: just one object having that group of symmetries. And you can build more complicated groupoids as disjoint unions of groupoids with one object. So, if you give me a collection of groups, I can take their disjoint union and get a groupoid.

So give me this collection of groups:

Z/1×Z/1, Z/2×Z/2, Z/3×Z/3, …

where Z/n is the integers mod n, also called the “cyclic group” with n elements. Then I’ll take their disjoint union and get a groupoid, and the cardinality of this groupoid is

1/1^2 + 1/2^2 + 1/3^2 + … = π^2/6

This is not as silly as the trick I used to get a groupoid whose cardinality is π, but it’s still not perfectly satisfying, because I haven’t given you a groupoid of “interesting mathematical gadgets and isomorphisms between them”, as I did for e. Later we’ll see Jim’s better answer.

We might also try taking various groupoids of interesting mathematical gadgets and computing their cardinality. For example, how about the groupoid of all finite groups? I think that’s infinite – there are just “too many”. How about the groupoid of all finite abelian groups? I’m not sure, that could be infinite too.

But suppose we restrict ourselves to abelian groups whose size is some power of a fixed prime p? Then we’re in business! The answer isn’t a famous number like π, but it was computed by Philip Hall here:

4) Philip Hall, A partition formula connected with Abelian groups, Comment. Math. Helv. 11 (1938), 126-129.

We can write the answer using an infinite product:

1/[(1-p^{-1})(1-p^{-2})(1-p^{-3}) …]

Or, we can write the answer using an infinite sum:

p(0)/p^0 + p(1)/p^1 + p(2)/p^2 + …

Here p(n) is the number of “partitions” of n: that is, the number of ways to write it as a sum of positive integers in decreasing order. For example, p(4) = 5 since we can write 4 as a sum in 5 ways like this:

4 = 4
4 = 3+1
4 = 2+2
4 = 2+1+1
4 = 1+1+1+1

If you haven’t thought about this before, you can have fun proving that the infinite product equals the infinite sum. It’s a cute fact, and quite famous.
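If you’d like to check it numerically before proving it, here is one way, truncating both sides and taking p = 2 for concreteness (the identity is formal, so any p > 1 will do for a sanity check):

    from functools import lru_cache

    @lru_cache(maxsize=None)
    def partition_count(n, largest=None):
        """p(n): the number of partitions of n (into parts of size at most `largest`)."""
        if largest is None:
            largest = n
        if n == 0:
            return 1
        return sum(partition_count(n - k, k) for k in range(1, min(largest, n) + 1))

    p, N = 2, 40   # take p = 2 and keep 40 terms of each side

    product = 1.0
    for k in range(1, N + 1):
        product /= 1 - p ** (-k)

    series = sum(partition_count(n) / p ** n for n in range(N + 1))
    print(product, series)   # both print about 3.4627...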

But Hall proved something even cuter. This number

p(0)/p^0 + p(1)/p^1 + p(2)/p^2 + …

is also the cardinality of another, really different groupoid. Remember how I said you can build a groupoid as the “disjoint union” of a collection of groups? To get this other groupoid, we take the disjoint union of all the abelian groups whose size is a power of p.

Hall didn’t know about groupoid cardinality, so here’s how he said it:

The sum of the reciprocals of the orders of all the Abelian groups of order a power of p is equal to the sum of the reciprocals of the orders of their groups of automorphisms.

It’s pretty easy to see that sum of the reciprocals of the orders of all the Abelian groups of order a power of p is

p(0)/p^0 + p(1)/p^1 + p(2)/p^2 + …

To do this, you just need to show that there are p(n) abelian groups with p^n elements. If I show you how it works for n = 4, you can guess how the proof works in general:

4 = 4                 Z/p^4

4 = 3+1           Z/p^3 × Z/p

4 = 2+2           Z/p^2 × Z/p^2

4 = 2+1+1       Z/p^2 × Z/p × Z/p

4 = 1+1+1+1   Z/p × Z/p × Z/p × Z/p

So, the hard part is showing that

p(0)/p^0 + p(1)/p^1 + p(2)/p^2 + …

is also the sum of the reciprocals of the sizes of the automorphism groups of all abelian groups whose size is a power of p.

I learned of Hall’s result from Aviv Censor, a colleague who is an expert on groupoids. He had instantly realized this result had a nice formulation in terms of groupoid cardinality. We went through several proofs, but we haven’t yet been able to extract any deep inner meaning from them:

5) Avinoam Mann, Philip Hall’s “rather curious” formula for abelian p-groups, Israel J. Math. 96 (1996), part B, 445-448.

6) Francis Clarke, Counting abelian group structures, Proceedings of the AMS, 134 (2006), 2795-2799.

However, I still have hopes, in part because the math is related to zeta functions… and that’s what I want to turn to now.

Let’s do another example: what’s the cardinality of the groupoid of semisimple commutative rings with n elements?

What’s a semisimple commutative ring? Well, since we’re only talking about finite ones, I can avoid giving the general definition and take advantage of a classification theorem. Finite semisimple commutative rings are the same as finite products of finite fields. There’s a finite field with p^n elements whenever p is prime and n is a positive integer. This field is called F_{p^n}, and it has n symmetries. And that’s all the finite fields! In other words, they’re all isomorphic to these.

This is enough to work out the cardinality of the groupoid of semisimple commutative rings with n elements. Let’s do some examples. Let’s try n = 6, for example.

This one is pretty easy. The only way to get a finite product of finite fields with 6 elements is to take the product of F2 and F3:

F2 × F3

This has just one symmetry – the identity – since that’s all the symmetries either factor has, and there’s no symmetry that interchanges the two factors. (Hmm… you may need to check this, but it’s not hard.)

Since we have one object with one symmetry, the groupoid cardinality is

1/1 = 1

Let’s try a more interesting one, say n = 4. Now there are two options:

F4

F2 × F2

The first option has 2 symmetries: remember, F_{p^n} has n symmetries. The second option also has 2 symmetries, namely the identity and the symmetry that switches the two factors. So, the groupoid cardinality is

1/2 + 1/2 = 1

But now let’s try something even more interesting, like n = 16. Now there are 5 options:

F16

F8×F2

F4×F4

F4×F2×F2

F2×F2×F2×F2

The field F16 has 4 symmetries because 16 = 2^4, and any field F_{p^n} has n symmetries. F8×F2 has 3 symmetries, coming from the symmetries of the first factor. F4×F4 has 2 symmetries in each factor and 2 coming from permutations of the factors, for a total of 2×2×2 = 8. F4×F2×F2 has 2 symmetries coming from those of the first factor, and 2 symmetries coming from permutations of the last two factors, for a total of 2×2 = 4 symmetries. And finally, F2×F2×F2×F2 has 24 symmetries coming from permutations of the factors. So, the cardinality of this groupoid works out to be

1/4 + 1/3 + 1/8 + 1/4 + 1/24

Hmm, let’s put that on a common denominator:

6/24 + 8/24 + 3/24 + 6/24 + 1/24 = 24/24 = 1

So, we’re getting the same answer again: 1.

Is this just a weird coincidence? No: this is what we always get! For any positive integer n, the groupoid of n-element semisimple commutative rings has cardinality 1. For a proof, see:

7) John Baez and James Dolan, Zeta functions, at http://ncatlab.org/johnbaez/show/Zeta+functions
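You can also check it by computer before reading the proof. Using the classification above – a semisimple commutative ring with n elements is a product of finite fields whose sizes multiply to n, with symmetries coming from the Galois groups of the factors and from permuting identical factors – a short sketch in exact rational arithmetic looks like this:

    from fractions import Fraction
    from math import factorial

    def prime_factorization(n):
        """Return a dict {prime: exponent} for n."""
        factors, d = {}, 2
        while d * d <= n:
            while n % d == 0:
                factors[d] = factors.get(d, 0) + 1
                n //= d
            d += 1
        if n > 1:
            factors[n] = factors.get(n, 0) + 1
        return factors

    def partitions(n, largest=None):
        """Yield each partition of n as a tuple of parts."""
        if largest is None:
            largest = n
        if n == 0:
            yield ()
            return
        for k in range(min(largest, n), 0, -1):
            for rest in partitions(n - k, k):
                yield (k,) + rest

    def ring_groupoid_cardinality(n):
        """Sum of 1/|Aut(R)| over iso classes of n-element semisimple commutative rings."""
        total = Fraction(1)
        for p, e in prime_factorization(n).items():
            # the fields of p-power size can be chosen independently for each prime p
            contribution = Fraction(0)
            for lam in partitions(e):
                aut = 1
                for k in set(lam):
                    m = lam.count(k)
                    aut *= k ** m * factorial(m)   # Galois symmetries, and permutations of equal factors
                contribution += Fraction(1, aut)
            total *= contribution
        return total

    for n in (4, 6, 16, 720):
        print(n, ring_groupoid_cardinality(n))   # always 1

Each prime contributes a sum over partitions of its exponent, and each of those sums is 1 – it’s the class equation of the symmetric group in disguise – which is one way to see why the answer never budges.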

Now, you might think this fact is just a curiosity, but actually it’s a step towards categorifying the Riemann zeta function. The Riemann zeta function is

ζ(s) = ∑_{n > 0} n^{-s}

It’s an example of a “Dirichlet series”, meaning a series of this form:

∑_{n > 0} a_n n^{-s}

In fact, any reasonable way of equipping finite sets with extra stuff gives a Dirichlet series – and if this extra stuff is “being a semisimple commutative ring”, we get the Riemann zeta function.

To explain this, I need to remind you about “stuff types”, and then explain how they give Dirichlet series.

A stuff type is a groupoid Z where the objects are finite sets equipped with “extra stuff” of some type. More precisely, it’s a groupoid with a functor to the groupoid of finite sets. For example, Z could be the groupoid of finite semisimple commutative rings – that’s the example we care about now. Here the functor forgets that we have a semisimple commutative ring, and only remembers the underlying finite set. In other words, it forgets the “extra stuff”.

In this example, the extra stuff is really just extra structure, namely the structure of being a semisimple commutative ring. But we could also take Z to be the groupoid of pairs of finite sets. A pair of finite sets is a finite set equipped with honest-to-goodness extra stuff, namely another finite set!

Structure is a special case of stuff. If you’re not clear on the difference, try this:

8) John Baez and Mike Shulman, Lectures on n-categories and cohomology, Sec. 2.4: Stuff, structure and properties, in n-Categories: Foundations and Applications, eds. John Baez and Peter May, Springer, Berlin, 2009. Also available as arXiv:math/0608420.

Then you can tell your colleagues: “I finally understand stuff.” And they’ll ask: “What stuff?” And you can answer, rolling your eyes condescendingly: “Not any particular stuff – just stuff, in general!”

But it’s not really necessary to understand stuff in general here. Just think of a stuff type as a groupoid where the objects are finite sets equipped with extra bells and whistles of some particular sort.

Now, if we have a stuff type, say Z, we get a list of groupoids Z(n). How? Simple! Objects of Z are finite sets equipped with some particular type of extra stuff. So, we can take the objects of Z(n) to be the n-element sets equipped with that type of extra stuff. The groupoid Z will be a disjoint union of these groupoids Z(n).

We can encode the cardinalities of all these groupoids into a Dirichlet series:

z(s) = ∑_{n > 0} |Z(n)| n^{-s}

where |Z(n)| is the cardinality of Z(n). In case you’re wondering about the minus sign: it’s just a dumb convention, but I’m too overawed by the authority of tradition to dream of questioning it, even though it makes everything to come vastly more ugly.

Anyway: the point is that a Dirichlet series is like the “cardinality” of a stuff type. To show off, we say stuff types categorify Dirichlet series: they contain more information, and they’re objects in a category (or something even better, like a 2-category) rather than elements of a set.

Let’s look at an example. When Z is the groupoid of finite semisimple commutative rings, then

|Z(n)| = 1

so the corresponding Dirichlet series is the Riemann zeta function:

z(s) = ζ(s)

So, we’ve categorified the Riemann zeta function! Using this, we can construct an interesting groupoid whose cardinality is

ζ(2) = ∑_{n > 0} n^{-2} = π^2/6

How? Well, let’s step back and consider a more general problem. Any stuff type Z gives a Dirichlet series

z(s) = ∑_{n > 0} |Z(n)| n^{-s}

How can we use this to concoct a groupoid whose cardinality is z(s) for some particular value of s? It’s easy when s is a negative integer (here that minus sign raises its ugly head). Suppose S is a set with s elements:

|S| = s

Then we can define a groupoid as follows:

Z(-S) = ∑_{n > 0} Z(n) × n^S

Here we are playing some notational tricks: n^S means “the set of functions from S to our favorite n-element set”, the symbol × stands for the product of groupoids, and ∑ stands for what I’ve been calling the “disjoint union” of groupoids (known more technically as the “coproduct”). So, Z(-S) is a groupoid. But this formula is supposed to remind us of a simpler one, namely

z(-s) = ∑_{n > 0} |Z(n)| n^s

and indeed it’s a categorified version of this simpler formula.

In particular, if we take the cardinality of the groupoid Z(-S), we get the number z(-s). To see this, you just need to check each step in this calculation:

|Z(-S)| = |∑ Z(n) × n^S|

= ∑ |Z(n) × n^S|

= ∑ |Z(n)| × |n^S|

= ∑ |Z(n)| × n^s

= z(-s)

The notation is supposed to make these steps seem plausible.

Even better, the groupoid Z(-S) has a nice description in plain English: it’s the groupoid of finite sets equipped with Z-stuff and a map from the set S.

Well, okay – I’m afraid that’s what passes for plain English among mathematicians! We don’t talk to ordinary people very often. But the idea is really simple. Z is some sort of stuff that we can put on a finite set. So, we can do that and also choose a map from S to that set. And there’s a groupoid of finite sets equipped with all this extra baggage, and isomorphisms between those.

If this sounds too abstract, let’s do an example. Say our favorite example, where Z is the groupoid of finite semisimple commutative rings. Then Z(-S) is the groupoid of finite semisimple commutative rings equipped with a map from the set S.

If this still sounds too abstract, let’s do an example. Do I sound repetitious? Well, you see, category theory is the subject where you need examples to explain your examples – and n-category theory is the subject where this process needs to be repeated n times. So, suppose S is a 1-element set – we can just write

S = 1

Then Z(-1) is a groupoid where the objects are finite semisimple commutative rings with a chosen element. The isomorphisms are ring isomorphisms that preserve the chosen element. And the cardinality of this groupoid is

|Z(-1)| = ζ(-1) = 1 + 2 + 3 + …

Whoops – it diverges! Luckily, people who study the Riemann zeta function know that

1 + 2 + 3 + … = -1/12

They get this crazy answer by analytically continuing the Riemann zeta function ζ(s) from values of s with a big positive real part, where it converges, over to values where it doesn’t. And it turns out that this trick is very important in physics. In fact, back in "week124" – "week126", I explained how this formula

ζ(-1) = -1/12

is the reason bosonic string theory works best when our string has 24 extra dimensions to wiggle around in besides the 2 dimensions of the string worldsheet itself.

So, if we’re willing to allow this analytic continuation trick, we can say that

THE GROUPOID OF FINITE SEMISIMPLE COMMUTATIVE RINGS WITH A CHOSEN ELEMENT HAS CARDINALITY -1/12

Someday people will see exactly how this is related to bosonic string theory. Indeed, it should be just a tiny part of a big story connecting number theory to string theory… some of which is explained here:

9) J. M. Luck, P. Moussa, and M. Waldschmidt, eds., Number Theory and Physics, Springer Proceedings in Physics, Vol. 47, Springer-Verlag, Berlin, 1990.

10) C. Itzykson, J. M. Luck, P. Moussa, and M. Waldschmidt, eds, From Number Theory to Physics, Springer, Berlin, 1992.

Indeed, as you’ll see in these books (or in "week126"), the function we saw earlier:

1/[(1-p^{-1})(1-p^{-2})(1-p^{-3}) …] = p(0)/p^0 + p(1)/p^1 + p(2)/p^2 + …

is also important in string theory: it shows up as a “partition function”, in the physical sense, where the number p(n) counts the number of ways a string can have energy n if it has one extra dimension to wiggle around in besides the 2 dimensions of its worldsheet.

But it’s the 24th power of this function that really matters in string theory – because bosonic string theory works best when our string has 24 extra dimensions to wiggle around in. For more details, try:

11) John Baez, My favorite numbers: 24. Available at http://math.ucr.edu/home/baez/numbers/24.pdf

But suppose we don’t want to mess with divergent sums: suppose we want a groupoid whose cardinality is, say,

ζ(2) = 1^{-2} + 2^{-2} + 3^{-2} + … = π^2/6

Then we need to categorify the evaluation of Dirichlet series at positive integers. We can only do this for certain stuff types – for example, our favorite one. So, let Z be the groupoid of finite semisimple commutative rings, and let S be a finite set. How can we make sense of

Z(S) = ∑_{n > 0} Z(n) × n^{-S} ?

The hard part is n^{-S}, because this has a minus sign in it. How can we raise an n-element set to the -Sth power? If we could figure out some sort of groupoid that serves as the reciprocal of an n-element set, we’d be done, because then we could take that to the Sth power. Remember, S is a finite set, so to raise something (even a groupoid) to the Sth power, we just multiply a bunch of copies of that something – one copy for each element of S.

So: what’s the reciprocal of an n-element set? There’s no answer in general – but there’s a nice answer when that set is a group, because then that group gives a groupoid with one object, and the cardinality of this groupoid is just 1/n.

Here is where our particular stuff type Z comes to the rescue. Each object of Z(n) is a semisimple commutative ring with n elements, so it has an underlying additive group – which is a group with n elements!

So, we don’t interpret Z(n) × n^{-S} as an ordinary product, but something a bit sneakier, a “twisted product”. An object in Z(n) × n^{-S} is just an object of Z(n), that is, an n-element semisimple commutative ring. But we define a symmetry of such an object to be a symmetry of this ring together with an S-tuple of elements of its underlying additive group. We compose these symmetries with the help of addition in this group. This ensures that

|Z(n) × n^{-S}| = |Z(n)| × n^{-s}

when |S| = s. And this in turn means that

|Z(S)| = |∑ Z(n) × n^{-S}|

= ∑ |Z(n) × n^{-S}|

= ∑ |Z(n)| × n^{-s}

= ζ(s)

So, in particular, if S is a 2-element set, we can write

S = 2

for short and get

|Z(2)| = ζ(2) = π²/6
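
If you'd like to watch this number emerge from honest counting, here's an illustrative Python computation of my own. It leans on the structure theorem, recalled a bit further on, that a finite semisimple commutative ring is a finite product of finite fields: an n-element one amounts to a multiset of fields F_{p^k} whose orders multiply to n, and its automorphisms permute isomorphic factors and apply Galois automorphisms, so a field F_{p^k} occurring m times contributes m!·k^m to |Aut|.

from fractions import Fraction
from math import factorial, pi

def prime_factorization(n):
    """Return the prime factorization of n as a dict {prime: exponent}."""
    factors, p = {}, 2
    while p * p <= n:
        while n % p == 0:
            factors[p] = factors.get(p, 0) + 1
            n //= p
        p += 1
    if n > 1:
        factors[n] = factors.get(n, 0) + 1
    return factors

def all_partitions(e, largest=None):
    """Yield the partitions of e as lists of parts."""
    if largest is None:
        largest = e
    if e == 0:
        yield []
        return
    for k in range(min(e, largest), 0, -1):
        for rest in all_partitions(e - k, k):
            yield [k] + rest

def local_factor(e):
    """Sum of 1/|Aut| over ways of writing a ring with p^e elements as a
    product of finite fields: the partition e = k1 + k2 + ... corresponds to
    F_{p^k1} x F_{p^k2} x ..., and a part k occurring m times contributes
    m! * k^m to |Aut|."""
    total = Fraction(0)
    for lam in all_partitions(e):
        aut = 1
        for k in set(lam):
            m = lam.count(k)
            aut *= factorial(m) * k**m
        total += Fraction(1, aut)
    return total

def Z(n):
    """Groupoid cardinality |Z(n)| of the n-element semisimple commutative rings."""
    result = Fraction(1)
    for p, e in prime_factorization(n).items():
        result *= local_factor(e)
    return result

print(all(Z(n) == 1 for n in range(1, 201)))            # True: each |Z(n)| is exactly 1
print(sum(float(Z(n)) / n**2 for n in range(1, 2001)))  # 1.6444..., creeping toward
print(pi**2 / 6)                                        # zeta(2) = pi^2/6 = 1.6449...

Each |Z(n)| comes out to exactly 1, which is precisely what makes the Dirichlet series above equal to the Riemann zeta function.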

Can we describe the groupoid Z(2) in simple English, suitable for a nice bumper sticker? It’s a bit tricky. One reason is that I haven’t described the objects of Z(2) as mathematical gadgets of an appealing sort, as I did for Z(-1). Another closely related reason is that I only described the symmetries of any object in Z(2) – or more technically, morphisms from that object to itself. It’s much better if we also describe morphisms from one object to another.

For this, it’s best to define Z(n) × n-S with the help of torsors. The idea of a torsor is that you can take the one-object groupoid associated to any group G and find a different groupoid, which is nonetheless equivalent, and which is a groupoid of appealing mathematical gadgets. These gadgets are called “G-torsors”. A “G-torsor” is just a nonempty set on which G acts freely and transitively:

12) John Baez, Torsors made easy, http://math.ucr.edu/home/baez/torsors.html

All G-torsors are isomorphic, and the group of symmetries of any G-torsor is G.
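
As a toy illustration of this definition – my own made-up example, not from the post – here are a few lines of Python testing the “free and transitive” condition for the group Z/5 acting on a five-element set:

def is_torsor(group, X, act):
    """Check that act(g, x) defines a free and transitive action of `group` on `X`."""
    if not X:
        return False    # a torsor must be nonempty
    for x in X:
        # Free and transitive: the orbit map g -> act(g, x) hits each point of X exactly once.
        orbit = [act(g, x) for g in group]
        if sorted(orbit) != sorted(X):
            return False
    return True

G = range(5)                                   # the group Z/5, written additively
X = ['a', 'b', 'c', 'd', 'e']
act = lambda g, x: X[(X.index(x) + g) % 5]     # cyclic shifting of the five letters

print(is_torsor(G, X, act))               # True: X is a Z/5-torsor
print(is_torsor(G, X, lambda g, x: x))    # False: the trivial action is not transitive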

Now, any ring R has an underlying additive group, which I will simply call R. So, the concept of “R-torsor” makes sense. This lets us define an object of Z(n) × n-S to be an n-element semisimple commutative ring R together with an S-tuple of R-torsors.

But what about the morphisms between these? We define a morphism between these to be a ring isomorphism together with an S-tuple of torsor isomorphisms. There’s a trick hiding here: a ring isomorphism f: R → R’ lets us take any R-torsor and turn it into an R’-torsor, or vice versa. So, it lets us talk about an isomorphism from an R-torsor to an R’-torsor – a concept that at first might have seemed nonsensical.

Anyway, it’s easy to check that this definition is compatible with our earlier one. So, we see:

THE GROUPOID OF FINITE SEMISIMPLE COMMUTATIVE RINGS EQUIPPED WITH AN n-TUPLE OF TORSORS HAS CARDINALITY ζ(n)

I did a silly change of variables here: I thought this bumper sticker would sell better if I said “n-tuple” instead of “S-tuple”. Here n is any positive integer.

While we’re selling bumper stickers, we might as well include this one:

THE GROUPOID OF FINITE SEMISIMPLE COMMUTATIVE RINGS EQUIPPED WITH A PAIR OF TORSORS HAS CARDINALITY π²/6

Now, you might think this fact is just a curiosity. But I don’t think so: it’s actually a step towards categorifying the general theory of zeta functions. You see, the Riemann zeta function is just one of many zeta functions. As Hasse and Weil discovered, every sufficiently nice commutative ring R has a zeta function. The Riemann zeta function is just the simplest example: the one where R is the ring of integers. And the cool part is that all these zeta functions come from stuff types using the recipe I described!

How does this work? Well, from any commutative ring R, we can build a stuff type Z_R as follows: an object of Z_R is a finite semisimple commutative ring together with a homomorphism from R to that ring. Then it turns out the Dirichlet series of this stuff type, say

ζ_R(s) = ∑_{n > 0} |Z_R(n)| n^{-s}

is the usual Hasse-Weil zeta function of the ring R! (In particular, when R is the ring of integers, every ring admits exactly one homomorphism from R, so Z_R is just our original Z and we recover the Riemann zeta function.)

Of course, that fact is vastly more interesting if you already know and love Hasse-Weil zeta functions. You can find a definition of them either in my paper with Jim, or here:

13) Jean-Pierre Serre, Zeta and L functions, in Arithmetical Algebraic Geometry (Proc. Conf. Purdue Univ., 1963), Harper and Row, 1965, pp. 82–92.

But the basic idea is simple. You can think of any commutative ring R as the functions on some space – a funny sort of space called an “affine scheme”. You’re probably used to spaces where all the points look alike – just little black dots. But the points of an affine scheme come in many different colors: for starters, one color for each prime power p^k! The Hasse-Weil zeta function of R is a clever trick for encoding the information about the numbers of points of these different colors in a single function.

Why do we get points of different colors? I explained this back in "week205". The idea is that for any commutative ring k, we can look at the homomorphisms

f: R → k

and these are called the “k-points” of our affine scheme. In particular, we can take k to be a finite field, say F_{p^n}. So, we get a set of points for each prime power p^n. The Hasse-Weil zeta function is a trick for keeping track of how many F_{p^n}-points there are for each prime power p^n.
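
For a concrete taste of this point-counting, here's a small example of my own – the ring Z[x]/(x²+1) is an assumed example, and I only look at the prime fields F_p for simplicity. A homomorphism from Z[x]/(x²+1) to F_p is determined by where x goes, and x must go to a square root of -1, so counting F_p-points means counting solutions of a² + 1 = 0 mod p:

def count_Fp_points(p):
    """Number of ring homomorphisms Z[x]/(x^2+1) -> F_p, i.e. roots of x^2 + 1 mod p."""
    return sum(1 for a in range(p) if (a * a + 1) % p == 0)

for p in [2, 3, 5, 7, 11, 13, 17, 19, 23]:
    print(p, count_Fp_points(p))
# One point at p = 2, and otherwise 0 or 2 points depending on the prime:
# this "how many points over each finite field" data is exactly what the
# Hasse-Weil zeta function packages into a single function.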

Given all this, you shouldn’t be surprised that we can get the Hasse-Weil zeta function of R by taking the Dirichlet series of the stuff type Z_R, where an object is a finite semisimple commutative ring k together with a homomorphism f: R → k. Especially if you remember that finite semisimple commutative rings are built from finite fields!

In fact, this whole theory of Hasse-Weil zeta functions works for gadgets much more general than commutative rings, also known as affine schemes. They can be defined for “schemes of finite type over the integers”, and that’s how Serre and other algebraic geometers usually do it. But Jim and I do it even more generally, in a way that doesn’t require any expertise in algebraic geometry. Which is good, because we don’t have any.

I won’t explain that here – it’s in our paper.

I’ll wrap up by making one more connection explicit – it’s sort of lurking in what I’ve said, but maybe it’s not quite obvious.

First of all, this idea of getting Dirichlet series from stuff types is part of the groupoidification program. Stuff types are a generalization of “structure types”, often called “species”. André Joyal developed the theory of species and showed how any species gives rise to a formal power series called its generating function. I told you about this back in "week185" and "week190". The recipe gets even simpler when we go up to stuff types: the generating function of a stuff type Z is just

∑_{n ≥ 0} |Z(n)| z^n

Since we can also describe states of the quantum harmonic oscillator as power series, with z^n corresponding to the nth energy level, this lets us view stuff types as states of a categorified quantum harmonic oscillator! This explains the combinatorics of Feynman diagrams:

14) Jeffrey Morton, Categorified algebra and quantum mechanics, Theory and Applications of Categories 16 (2006), 785-854, available at http://www.emis.de/journals/TAC/volumes/16/29/16-29abs.html Also available as arXiv:math/0601458.
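
Coming back to the generating function formula above: the simplest example I know (my own choice, not one the post dwells on) is the stuff type F of “being a finite set”. Then F(n) is the groupoid of n-element sets, which has one object up to isomorphism with automorphism group S_n, so |F(n)| = 1/n! and the generating function should be e^z. A couple of lines of Python confirm the numerology:

from math import exp, factorial

# F is the stuff type "being a finite set": |F(n)| = 1/n!  (my example).
z = 0.7   # an arbitrary sample point
gen_fn = sum(z**n / factorial(n) for n in range(30))   # sum of |F(n)| z^n, truncated
print(gen_fn, exp(z))   # both 2.0137527...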

And, it’s a nice test case of the groupoidification program, where we categorify lots of algebra by saying “wherever we see a number, let’s try to think of it as the cardinality of a groupoid”:

15) John Baez, Alex Hoffnung and Christopher Walker, Higher-dimensional algebra VII: Groupoidification, available as arXiv:0908.4305

But now I’m telling you something new! I’m saying that any stuff type also gives a Dirichlet series, namely

∑_{n > 0} |Z(n)| n^{-s}

This should make you wonder what’s going on. My paper with Jim explains it – at least for structure types. The point is that the groupoid of finite sets has two monoidal structures: + and ×. This gives the category of structure types two monoidal structures, using a trick called “Day convolution”. The first of these categorifies the usual product of formal power series, while the second categorifies the usual product of Dirichlet series. People in combinatorics love the first one, since they love chopping a set into two disjoint pieces and putting a structure on each piece. People in number theory secretly love the second one, without fully realizing it, because they love taking a number and decomposing it into prime factors. But they both fit into a single picture!
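
To see the decategorified shadow of these two monoidal structures, here's a small Python sketch of my own: the first corresponds to the Cauchy product of power series (splitting n as a sum i + j), the second to Dirichlet convolution (splitting n as a product of two factors).

def cauchy_product(a, b):
    """Coefficients of (sum_i a[i] z^i)(sum_j b[j] z^j), truncated to len(a) terms."""
    N = len(a)
    return [sum(a[i] * b[n - i] for i in range(n + 1)) for n in range(N)]

def dirichlet_product(a, b, N):
    """Coefficients c[n] of (sum_d a[d] d^-s)(sum_e b[e] e^-s) for n = 1..N."""
    c = {n: 0 for n in range(1, N + 1)}
    for d in range(1, N + 1):
        for e in range(1, N // d + 1):
            c[d * e] += a[d] * b[e]
    return c

N = 8
ones_series = [1] * N                              # 1 + z + z^2 + ... (all coefficients 1)
ones_dirichlet = {n: 1 for n in range(1, N + 1)}   # zeta(s) = sum n^-s (all coefficients 1)

print(cauchy_product(ones_series, ones_series))    # [1, 2, 3, ...]: coefficients of 1/(1-z)^2
print(dirichlet_product(ones_dirichlet, ones_dirichlet, N))
# {1: 1, 2: 2, 3: 2, 4: 3, ...}: zeta(s)^2 = sum d(n) n^-s, with d(n) the number of divisors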

There’s a lot more to say about this, because actually the category of structure types has five monoidal structures, all fitting together in a nice way. You can read a bit about this here:

16) nLab, Schur functors, http://ncatlab.org/nlab/show/Schur+functor

This is something Todd Trimble and I are writing, which will eventually evolve into an actual paper. We consider structure types for which there is a vector space of structures for each finite set instead of a set of structures. But much of the abstract theory is similar. In particular, there are still five monoidal structures.

Someday soon, I hope to show that two of the monoidal structures on the category of species make it into a “ring category”, while the other two – the ones I told you about, in fact! – are better thought of as “comonoidal” structures, making it into a “coring category”. Putting these together, the category of species should become a “biring category”. Then the fifth monoidal structure, called “plethysm”, should make it into a monoid in the monoidal bicategory of biring categories!

This sounds far-out, but it’s all been worked out at a decategorified level: rings, corings, birings, and monoids in the category of birings:

17) David Tall and Gavin Wraith, Representable functors and operations on rings, Proc. London Math. Soc. (3), 1970, 619-643.

18) James Borger and Ben Wieland, Plethystic algebra, Advances in Mathematics 194 (2005), 246-283. Also available at http://wwwmaths.anu.edu.au/~borger/papers/03/paper03.html

19) Andrew Stacey and Sarah Whitehouse, The hunting of the Hopf ring, Homology, Homotopy and Applications 11 (2009), 75-132, available at http://intlpress.com/HHA/v11/n2/a6/ Also available as arXiv:0711.3722.

Borger and Wieland call a monoid in the category of birings a “plethory”. The star example is the algebra of symmetric functions. But this is basically just a decategorified version of the category of Vect-valued species. So, the whole story should categorify.

In short: starting from very simple ideas, we can very quickly find a treasure trove of rich structures. Indeed, these structures are already staring us in the face – we just need to open our eyes. They clarify and unify a lot of seemingly esoteric and disconnected things that mathematicians and physicists love.



I think we are just beginning to glimpse the real beauty of math and physics. I bet it will be both simpler and more startling than most people expect.

I would love to spend the rest of my life chasing glimpses of this beauty. I wish we lived in a world where everyone had enough of the bare necessities of life to do the same if they wanted – or at least a world that was safely heading in that direction, a world where politicians were looking ahead and tackling problems before they became desperately serious, a world where the oceans weren’t dying.

But we don’t.


Certainty of death. Small chance of success. What are we waiting for?
– Gimli

