Mathematics of Planet Earth

20 March, 2011

While struggling to prepare my talk on “what mathematicians can do”, I remembered this website pointed out by Tom Leinster:

Mathematics of Planet Earth 2013.

The idea is to get lots of mathematicians involved in programs on these topics:

• Weather, climate, and environment
• Health, human and social services
• Planetary resources
• Population dynamics, ecology and genomics of species
• Energy utilization and efficiency
• Connecting the planet together
• Geophysical processes
• Global economics, safety and stability

There are already a lot of partner societies (including the American Mathematical Society) and partner institutes. I would love to see more details, but this website seems directed mainly at getting more organizations involved, rather than saying what any of them are going to do.

There is a call for proposals, but it’s a bit sketchy. It says:

A call to join is sent to the planet.

which makes me want to ask “From where?”

(That must be why I’m sitting here blogging instead of heading an institute somewhere. I never fully grew up.)

I guess the details will eventually become clearer. Does anyone know of any activities that have been planned?


Energy, the Environment, and What Mathematicians Can Do (Part 1)

18 March, 2011

I’m preparing a talk to give at Hong Kong University next week. It’s only half done, but I could use your feedback on this part while I work on the rest:

Energy, The Environment, and What Mathematicians Can Do.

So far it makes a case for why mathematicians should get involved in these issues… but doesn’t say what they can do to help! That’ll be the second part. So, you’ll just have to bear with the suspense for now.

By the way, all the facts and graphs should have clickable links that lead you to online references. The links aren’t easy to see, but if you hover the cursor over a fact or graph, and click, it should work.


Guess Who Wrote This?

3 March, 2011

Guess who wrote this report. I’ll quote a bunch of it:

The climate change crisis is far from over. The decade 2000-2010 is the hottest ever recorded and data reveals each decade over the last 50 years to be hotter than the previous one. The planet is enduring more and more heat waves and rain levels—high and low—that test the outer bounds of meteorological study.

The failure of the USA, Australia and Japan to implement relevant legislation after the Copenhagen Accord, as well as general global inaction, might lead people to shrug off the climate issue. Many are quick to doubt the science. Amid such ambiguity a discontinuity is building as expert and public opinion diverge.

This divergence is not sustainable!

Society continues to face a dilemma posed here: a failure to reduce emissions now will mean considerably greater cost in the future. But concerted global action is still too far off given the extreme urgency required.

CO2 price transparency needed

Some countries forge ahead with national and local measures but many are moving away from market-based solutions and are punishing traditional energy sources. Cap-and-trade systems risk being discredited. The EU-Emissions Trading System (EU-ETS) has failed to deliver an adequate CO2 price. Industry lobbying for free allowance allocations is driving demands for CO2 taxes to eliminate perceived industry windfalls. In some cases this has led to political stalemate.

The transparency of a CO2 price is central to delivering least-cost emission reductions, but it also contributes to growing political resistance to cap-and-trade
systems. Policy makers are looking to instruments – like mandates – where emissions value is opaque. This includes emission performance standards (EPSs) for electricity plants and other large fixed sources. Unfortunately, policies aimed at building renewable energy capacity are also displacing more natural gas than coal where the CO2 price is low or absent. This is counter-productive when it comes to reducing emissions. Sometimes the scale of renewables capacity also imposes very high system costs. At other times, policy support for specific renewables is maintained even after the technology reaches its efficient scale, as is the case in the US.

The recession has raised a significant issue for the EU-ETS: how to design cap-and-trade systems in the face of economic and technological uncertainty? Phase III of the ETS risks delivering a structurally low CO2 price due to the impact of the recession on EU emissions. A balanced resetting of the cap should be considered. It is more credible to introduce a CO2 price floor ahead of such shocks than engage in the ad hoc recalibration of the cap in response to them. This would signal to investors that unexpected shortfalls in emissions would be used in part to step up reductions and reduce uncertainty in investments associated with the CO2 price. This is an important issue for the design of Phase IV of the ETS.

Climate too low a priority

Structural climate policy problems aside, the global recession has moved climate concerns far down the hierarchy of government objectives. The financial crisis and Gulf of Mexico oil spill have also hurt trust in the private sector, spawning tighter regulation and leading to increased risk aversion. This hits funding and political support for new technologies, in particular Carbon Capture and Sequestration (CCS) where industry needs indemnification from some risk. Recent moves by the EU and the US regarding long-term liabilities show this support is far from secured. Government support for technology development may also be hit as they work to cut deficits.

In this environment of policy drift and increasing challenge to market-based solutions, it is important to remain strongly focused on least-cost solutions today and advances in new technologies for the future. Even if more pragmatic policy choices prevail, it is important that they are consistent with, and facilitate the eventual implementation of market-based solutions.

Interdependent ecosystems approach

Global policy around environmental sustainability focuses almost exclusively on climate change and CO2 emissions reduction. But since 2008, an approach which considers interdependent ecosystems has emerged and gradually gained influence.

This approach argues that targeting climate change and CO2 alone is insufficient. The planet is a system of inextricably inter-related environmental processes and each must be managed in balance with the others to sustain stability.

Research published by the Stockholm Resilience Centre in early 2009 consolidates this thinking and proposes a framework based on ‘biophysical environmental subsystems’. The Nine Planetary Boundaries collectively define a safe operating space for humanity where social and economic development does not create lasting and catastrophic environmental change.

According to the framework, planetary boundaries collectively determine ecological stability. So far, limits have been quantified for seven boundaries which, if surpassed, could result in more ecological volatility and potentially disastrous consequences. As Table 1 shows, three boundaries have already been exceeded. Based on current trends, the limits of others are fast approaching.

For the energy industry, CO2 management and reduction is the chief concern and the focus of much research and investment. But the interdependence of the other systems means that if one limit is reached, others come under intense pressure. The climate-change boundary relies on careful management of freshwater, land use, atmospheric aerosol concentration, nitrogen–phosphorus, ocean and stratospheric boundaries. Continuing to pursue an environmental policy centered on climate change will fail to preserve the planet’s environmental stability unless the other defined boundaries are addressed with equal vigour.


Child Earth

11 February, 2011


Mary Catherine Bateson is a cultural anthropologist, the daughter of Margaret Mead and Gregory Bateson. Here’s a thought-provoking snippet from Stewart Brand’s summary of her talk at the Long Now Foundation:

The birth of a first child is the most intense disruption that most adults experience. Suddenly the new parents have no sleep, no social life, no sex, and they have to keep up with a child that changes from week to week. “Two ignorant adults learn from the newborn how to be decent parents.” Everything now has to be planned ahead, and the realization sinks in that it will go on that way for twenty years.

[…]

Herself reflecting on parenthood, Bateson proposed that the metaphor of “mother Earth” is no longer accurate or helpful. Human impact on nature is now so complete and irreversible that we’re better off thinking of the planet as if it were our first child. It will be here after us. Its future is unknown and uncontrollable. We are forced to plan ahead for it. Our first obligation is to keep it from harm. We are learning from it how to be decent parents.


Stabilization Wedges (Part 3)

17 December, 2010

I bet you thought I’d never get back to this! Sorry, I like to do lots of things.

Remember the idea: in 2004, Stephen Pacala and Robert Socolow wrote a now-famous paper on how we could hold atmospheric carbon dioxide below 500 parts per million. They said that to do this, it would be enough to find 7 ways to reduce carbon emissions, each one ramping up linearly to the point of reducing carbon emissions by 1 gigaton per year by 2054.

They called these stabilization wedges, for the obvious reason:



Their paper listed 15 of these wedges. The idea here is to go through them and critique them. In Part 1 of this series we talked about four wedges involving increased efficiency and conservation. In Part 2 we covered one about shifting from coal to natural gas, and three about carbon capture and storage.
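Since everything in this series turns on wedge arithmetic, here is the triangle-area calculation in a few lines of Python — just a sketch of the geometry described above:

```python
# One wedge ramps linearly from 0 to 1 GtC/yr of avoided emissions
# over the 50 years from 2004 to 2054, so the total carbon it avoids
# is the area of a triangle.

def wedge_savings(years=50, final_rate_gtc=1.0):
    """Total carbon (GtC) avoided by one linearly ramping wedge."""
    return 0.5 * years * final_rate_gtc

print(wedge_savings())      # 25 GtC per wedge
print(7 * wedge_savings())  # 175 GtC for the 7 wedges the plan requires
```

So each wedge is worth 25 gigatons of carbon over the 50-year ramp, and the full plan avoids 175 gigatons.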

Now let’s do nuclear power and renewable energy!

9. Nuclear power. As Pacala and Socolow already argued in wedge 5, replacing 700 gigawatts of efficient coal-fired power plants with some carbon-neutral form of power would save us a gigaton of carbon per year. This would require 700 gigawatts of nuclear power plants running at 90% capacity (just as assumed for the coal plants). That means doubling the world production of nuclear power. The global pace of nuclear power plant construction from 1975 to 1990 could do this! So, this is one of the few wedges that doesn’t seem to require heroic technical feats. But of course, there’s still a downside: we can only substantially boost the use of nuclear power if people become confident about all aspects of its safety.

10. Wind power. Wind power is intermittent: Pacala and Socolow estimate that the ‘peak’ capacity (the amount you get under ideal circumstances) is about 3 times the ‘baseload’ capacity (the amount you can count on). So, to save a gigaton of carbon per year by replacing 700 gigawatts of coal-fired power plants, we need roughly 2000 gigawatts of peak wind power. Wind power was growing at about 30% per year when they wrote their paper, and it had reached a world total of 40 gigawatts. So, getting to 2000 gigawatts would mean multiplying the world production of wind power by a factor of 50. The wind turbines would “occupy” about 30 million hectares, or about 30-45 square meters per person — some on land and some offshore. But because windmills are widely spaced, land with windmills can have multiple uses.

11. Photovoltaic solar power. This too is intermittent, so to save a gigaton of carbon per year we need 2000 gigawatts of peak photovoltaic solar power to replace coal. Like wind, photovoltaic solar was growing at 30% per year when Pacala and Socolow wrote their paper. However, only 3 gigawatts had been installed worldwide. So, getting to 2000 gigawatts would require multiplying the world production of photovoltaic solar power by a factor of 700. See what I mean about ‘heroic feats’? In terms of land, this would take about 2 million hectares, or 2-3 square meters per person.
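To get a feel for what these scale-up factors mean, here is a quick back-of-envelope check, assuming the 30% annual growth rates quoted above could be sustained indefinitely:

```python
import math

def years_to_scale(current_gw, target_gw, growth_rate=0.30):
    """Years of sustained exponential growth to reach a target capacity."""
    return math.log(target_gw / current_gw) / math.log(1 + growth_rate)

wind_years = years_to_scale(40, 2000)   # the factor of 50 for wind
solar_years = years_to_scale(3, 2000)   # the factor of ~700 for photovoltaics
print(round(wind_years, 1), round(solar_years, 1))
```

At 30% per year, the factor of 50 takes about 15 years and the factor of 700 about 25 years — both inside the 50-year wedge window. The heroic part is sustaining exponential growth for that long.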

12. Renewable hydrogen. You’ve probably heard about hydrogen-powered cars. Of course you’ve got to make the hydrogen. Renewable electricity can produce hydrogen for vehicle fuel. 4000 gigawatts of peak wind power, for example, used in high-efficiency fuel-cell cars, could keep us from burning a gigaton of carbon each year in the form of gasoline or diesel fuel. Unfortunately, this is twice as much wind power as we’d need in wedge 10, where we use wind to eliminate the need for burning some coal. Why? Gasoline and diesel have less carbon per unit of energy than coal does.

13. Biofuels. Fossil-carbon fuels can also be replaced by biofuels such as ethanol. To save a gigaton per year of carbon, we could make 5.4 gigaliters per day of ethanol as a replacement for gasoline — provided the process of making this ethanol didn’t burn fossil fuels! Doing this would require multiplying the world production of bioethanol by a factor of 50. It would require 250 million hectares committed to high-yield plantations, or 250-375 square meters per person. That’s an area equal to about one-sixth of the world’s cropland. An even larger area would be required to the extent that the biofuels require fossil-fuel inputs. Clearly this could cut into the land used for growing food.

There you go… let me hear your critique! Which of these measures seem best to you? Which seem worst? But more importantly: why?

Remember: it takes a total of 7 wedges to save the world, according to this paper by Pacala and Socolow.

Next time I’ll tell you about the final two stabilization wedges… and then I’ll give you an update on their idea.


Power Density

5 October, 2010

Today I’ve been thinking about “power density”, and I’ve got some questions for you.

But let’s start at the beginning!

In his 2009 talk at the Long Now Foundation, the engineer Saul Griffith made some claims that fill me with intense dread. Stewart Brand summarized the talk as follows:

The world currently runs on about 16 terawatts (trillion watts) of energy, most of it burning fossil fuels. To level off at 450 ppm of carbon dioxide, we will have to reduce the fossil fuel burning to 3 terawatts and produce all the rest with renewable energy, and we have to do it in 25 years or it’s too late. Currently about half a terawatt comes from clean hydropower and one terawatt from clean nuclear. That leaves 11.5 terawatts to generate from new clean sources.

That would mean the following. (Here I’m drawing on notes and extrapolations I’ve written up previously from discussion with Griffith):

“Two terawatts of photovoltaic would require installing 100 square meters of 15-percent-efficient solar cells every second, second after second, for the next 25 years. (That’s about 1,200 square miles of solar cells a year, times 25 equals 30,000 square miles of photovoltaic cells.) Two terawatts of solar thermal? If it’s 30 percent efficient all told, we’ll need 50 square meters of highly reflective mirrors every second. (Some 600 square miles a year, times 25.) Half a terawatt of biofuels? Something like one Olympic swimming pool of genetically engineered algae, installed every second. (About 15,250 square miles a year, times 25.) Two terawatts of wind? That’s a 300-foot-diameter wind turbine every 5 minutes. (Install 105,000 turbines a year in good wind locations, times 25.) Two terawatts of geothermal? Build 3 100-megawatt steam turbines every day — 1,095 a year, times 25. Three terawatts of new nuclear? That’s a 3-reactor, 3-gigawatt plant every week — 52 a year, times 25”.

In other words, the land area dedicated to renewable energy (“Renewistan”) would occupy a space about the size of Australia to keep the carbon dioxide level at 450 ppm. To get to Hansen’s goal of 350 ppm of carbon dioxide, fossil fuel burning would have to be cut to ZERO, which means another 3 terawatts would have to come from renewables, expanding the size of Renewistan further by 26 percent.
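We can check Brand's arithmetic in a few lines of Python. The only inputs I've added are the number of seconds in a year and the square-mile conversion:

```python
SECONDS_PER_YEAR = 3.156e7
SQ_METERS_PER_SQ_MILE = 2.59e6

# Residual clean power needed, in terawatts, per Griffith's framing:
new_clean_tw = 16 - 3 - 0.5 - 1             # 11.5 TW of new clean sources

# Installing 100 m^2 of solar cells every second, for 25 years:
pv_area_m2 = 100 * SECONDS_PER_YEAR * 25
pv_area_sq_mi = pv_area_m2 / SQ_METERS_PER_SQ_MILE   # about 30,000 sq mi

# Going from 450 ppm to 350 ppm means replacing the last 3 TW of fossil fuel:
renewistan_growth = 3 / new_clean_tw        # about 26 percent

print(new_clean_tw, round(pv_area_sq_mi), round(100 * renewistan_growth))
```

The numbers check out: 11.5 terawatts of new clean power, about 30,000 square miles of photovoltaics, and a 26% expansion of ‘Renewistan’ to reach 350 ppm.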

The main scary part is the astounding magnitude of this project, and how far we are from doing anything remotely close. Griffith describes it as not like the Manhattan Project, but like World War II — only with everyone on the same side.

But another scary part is the amount of land that needs to get devoted to “Renewistan” in this scheme. This is where power density comes in.

The term power density is used in various ways, but in the work of Vaclav Smil it means the number of usable watts that can be produced per square meter of land (or water) by a given technology, and that’s how I’ll use it here.

Smil’s main point is that renewable forms of energy generally have a much lower power density than fossil fuels. As Griffith points out, this could have massive effects. Or consider the plan for England, Scotland and Wales on page 215 of David MacKay‘s book Without the Hot Air:



That’s a lot of land devoted to energy production!

Smil wrote an interesting paper about power density:

• Vaclav Smil, Power density primer: understanding the spatial dimension of the unfolding transition to renewable electricity generation

In it, he writes:

Energy density is easy – power density is confusing.

One look at energy densities of common fuels is enough to understand why we prefer coal over wood and oil over coal: air-dried wood is, at best, 17 MJ/kg, good-quality bituminous coal is 22-25 MJ/kg, and refined oil products are around 42 MJ/kg. And a comparison of volumetric energy densities makes it clear why shipping non-compressed, non-liquefied natural gas would never work while shipping crude oil is cheap: natural gas rates around 35 MJ/m3, crude oil has around 35 GJ/m3 and hence its volumetric energy density is a thousand times (three orders of magnitude) higher. An obvious consequence: without liquefied (or at least compressed) natural gas there can be no intercontinental shipments of that clean fuel.

Power density is a much more complicated variable. Engineers have used power densities as revealing measures of performance for decades – but several specialties have defined them in their own particular ways….

For the past 25 years I have favored a different, and a much broader, measure of power density as perhaps the most universal measure of energy flux: W/m2 of horizontal area of land or water surface rather than per unit of the working surface of a converter.

Here are some of his results:

• No other mode of large-scale electricity generation occupies as little space as gas turbines: besides their compactness they do not need fly ash disposal or flue gas desulfurization. Mobile gas turbines generate electricity with power densities higher than 15,000 W/m2 and large (>100 MW) stationary set-ups can easily deliver 4,000-5,000 W/m2. (What about the area needed for mining?)

• Most large modern coal-fired power plants generate electricity with power densities ranging from 100 to 1,000 W/m2, including the area of the mine, the power plant, etcetera.

• Concentrating solar power (CSP) projects use tracking parabolic mirrors in order to reflect and concentrate solar radiation on a central receiver placed in a high tower, for the purposes of powering a steam engine. All facilities included, these deliver at most 10 W/m2.

• Photovoltaic panels are fixed in an optimal tilted south-facing position and hence receive more radiation than a unit of horizontal surface, but the average power densities of solar parks are low. Additional land is needed for spacing the panels for servicing, access roads, inverter and transformer facilities and service structures — and only 85% of a panel’s DC rating is transmitted from the park to the grid as AC power. All told, they deliver 4-9 W/m2.

• Wind turbines have fairly high power densities when the rate measures the flux of wind’s kinetic energy moving through the working surface: the area swept by blades. This power density is commonly above 400 W/m2 — but power density expressed as electricity generated per land area is much less! At best we can expect a peak power of 6.6 W/m2 and even a relatively high average capacity factor of 30% would bring that down to only about 2 W/m2.

• The energy density of dry wood (18-21 GJ/ton) is close to that of sub-bituminous coal. But if we were to supply a significant share of a nation’s electricity from wood we would have to establish extensive tree plantations. We could not expect harvests surpassing 20 tons/hectare, with 10 tons/hectare being more typical. Harvesting all above-ground tree mass and feeding it into chippers would allow for 95% recovery of the total field production, but even if the fuel’s average energy density were 19 GJ/ton, the plantation would yield no more than 190 GJ/hectare, resulting in harvest power density of 0.6 W/m2.
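To see what these power densities imply, here is a sketch that converts them into the land area needed for Griffith's 11.5 terawatts of new clean power. The density values are my rough midpoints of the ranges quoted above, not Smil's exact figures:

```python
def land_area_km2(power_tw, density_w_m2):
    """Land area (km^2) needed to supply a given average power
    at a given power density."""
    return power_tw * 1e12 / density_w_m2 / 1e6

# Rough midpoints of the W/m^2 figures quoted above (my choices, not Smil's):
densities = {
    "gas turbines":        4500,
    "coal (whole system)":  500,
    "solar PV parks":         6,
    "wind (average)":         2,
    "wood plantations":     0.6,
}

for source, d in densities.items():
    print(f"{source}: {land_area_km2(11.5, d):,.0f} km^2 for 11.5 TW")
```

Wind alone at 2 W/m² comes out to about 5.75 million square kilometers — which is indeed in the neighborhood of Australia's 7.7 million, consistent with Griffith's ‘Renewistan’.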

Of course, power density is of limited value in making decisions regarding power generation, because:

1. The price of a square meter of land or water varies vastly depending on its location.

2. Using land for one purpose does not always prevent its use for others: e.g. solar panels on roofs, crops or solar panels between wind turbines.

Nonetheless, Smil’s basic point, that most renewable forms of energy will require us to devote larger areas of the Earth to energy production, seems fairly robust. (An arguable exception is breeder reactors, which in conjunction with extracting uranium from seawater might be considered a form of renewable energy. This is important.)

On the other hand, fans of solar energy argue that much smaller areas would be needed to supply the world’s power. There are two possible reasons, and I haven’t sorted them out yet:

1) They may be talking about electrical power, which is roughly one order of magnitude less than total power usage.

2) As Smil’s calculations show, solar power allows for significantly greater power density than wind or biofuels. Griffith’s area for ‘Renewistan’ may be large because it includes a significant amount of power from those other sources.

What do you folks think? I’ve got a lot of questions:

• what’s the power density for nuclear power?

• what’s the power density for sea-based wind farms?

and some harder ones, like:

• how useful is the concept of power density?

• how much land area would be devoted to power production in a well-crafted carbon-neutral economy?

and that perennial favorite:

• what am I ignoring that I should be thinking about?

If Saul Griffith’s calculations are wrong, and keeping the world from exceeding 450 ppm of CO2 is easier than he thinks, we need to know!


This Week’s Finds (Week 303)

30 September, 2010

Now for the second installment of my interview with Nathan Urban, a colleague who started out in quantum gravity and now works on "global climate change from an Earth system perspective, with an emphasis on Bayesian data-model calibration, probabilistic prediction, risk assessment, and decision analysis".

But first, a word about Bali. One of the great things about living in Singapore is that it’s close to a lot of interesting places. My wife and I just spent a week in Ubud. This town is the cultural capital of Bali — full of dance, music, and crafts. It’s also surrounded by astounding terraced rice paddies:

In his book Whole Earth Discipline, Stewart Brand says "one of the finest examples of beautifully nuanced ecosystem engineering is the thousand-year-old terraced rice irrigation complex in Bali".

Indeed, when we took a long hike with a local guide, Made Dadug, we learned that all the apparent "weeds" growing in luxuriant disarray near the rice paddies were in fact carefully chosen plants: cacao, coffee, taro, ornamental flowers, and so on. "See this bush? It’s citronella — people working on the fields grab a pinch and use it for mosquito repellent." When a paddy loses its nutrients they plant sweet potatoes there instead of rice, to restore the soil.

Irrigation is managed by a system of local water temples, or "subaks". It’s not a top-down hierarchy: instead, each subak makes decisions in a more or less democratic way, while paying attention to what neighboring subaks do. Brand cites the work of Steve Lansing on this subject:

• J. Stephen Lansing, Perfect Order: Recognizing Complexity in Bali, Princeton U. Press, Princeton, New Jersey, 2006.

Physicists interested in the spontaneous emergence of order will enjoy this passage:

This book began with a question posed by a colleague. In 1992 I gave a lecture at the Santa Fe Institute, a recently created research center devoted to the study of "complex systems." My talk focused on a simulation model that my colleague James Kremer and I had created to investigate the ecological role of water temples. I need to explain a little about how this model came to be built; if the reader will bear with me, the relevance will soon become clear.

Kremer is a marine scientist, a systems ecologist, and a fellow surfer. One day on a California beach I told him the story of the water temples, and of my struggles to convince the consultants that the temples played a vital role in the ecology of the rice terraces. I asked Jim if a simulation model, like the ones he uses to study coastal ecology, might help to clarify the issue. It was not hard to persuade him to come to Bali to take a look. Jim quickly saw that a model of a single water temple would not be very useful. The whole point about water temples is that they interact. Bali is a steep volcanic island, and the rivers and streams are short and fast. Irrigation systems begin high up on the volcanoes, and follow one after another at short intervals all the way to the seacoast. The amount of water each subak gets depends less on rainfall than on how much water is used by its upstream neighbors. Water temples provide a venue for the farmers to plan their irrigation schedules so as to avoid shortages when the paddies need to be flooded. If pests are a problem, they can synchronize harvests and flood a block of terraces so that there is nothing for the pests to eat. Decisions about water taken by each subak thus inevitably affect its neighbors, altering both the availability of water and potential levels of pest infestations.

Jim proposed that we build a simulation model to capture all of these processes for an entire watershed. Having recently spent the best part of a year studying just one subak, the idea of trying to model nearly two hundred of them at once struck me as rather ambitious. But as Jim pointed out, the question is not whether flooding can control pests, but rather whether the entire collection of temples in a watershed can strike an optimal balance between water sharing and pest control.

We set to work plotting the location of all 172 subaks lying between the Oos and Petanu rivers in central Bali. We mapped the rivers and irrigation systems, and gathered data on rainfall, river flows, irrigation schedules, water uptake by crops such as rice and vegetables, and the population dynamics of the major rice pests. With these data Jim constructed a simulation model. At the beginning of each year the artificial subaks in the model are given a schedule of crops to plant for the next twelve months, which defines their irrigation needs. Then, based on historic rainfall data, we simulate rainfall, river flow, crop growth, and pest damage. The model keeps track of harvest data and also shows where water shortages or pest damage occur. It is possible to simulate differences in rainfall patterns or the growth of different kinds of crops, including both native Balinese rice and the new rice promoted by the Green Revolution planners. We tested the model by simulating conditions for two cropping seasons, and compared its predictions with real data on harvest yields for about half the subaks. The model did surprisingly well, accurately predicting most of the variation in yields between subaks. Once we knew that the model’s predictions were meaningful, we used it to compare different scenarios of water management. In the Green Revolution scenario, every subak tries to plant rice as often as possible and ignores the water temples. This produces large crop losses from pest outbreaks and water shortages, much like those that were happening in the real world. In contrast, the “water temple” scenario generates the best harvests by minimizing pests and water shortages.

Back at the Santa Fe Institute, I concluded this story on a triumphant note: consultants to the Asian Development Bank charged with evaluating their irrigation development project in Bali had written a new report acknowledging our conclusions. There would be no further opposition to management by water temples. When I finished my lecture, a researcher named Walter Fontana asked a question, the one that prompted this book: could the water temple networks self-organize? At first I did not understand what he meant by this. Walter explained that if he understood me correctly, Kremer and I had programmed the water temple system into our model, and shown that it had a functional role. This was not terribly surprising. After all, the farmers had had centuries to experiment with their irrigation systems and find the right scale of coordination. But what kind of solution had they found? Was there a need for a Great Designer or an Occasional Tinkerer to get the whole watershed organized? Or could the temple network emerge spontaneously, as one subak after another came into existence and plugged in to the irrigation systems? As a problem solver, how well could the temple networks do? Should we expect 10 percent of the subaks to be victims of water shortages at any given time because of the way the temple network interacts with the physical hydrology? Thirty percent? Two percent? Would it matter if the physical layout of the rivers were different? Or the locations of the temples?

Answers to most of these questions could only be sought if we could answer Walter’s first large question: could the water temple networks self-organize? In other words, if we let the artificial subaks in our model learn a little about their worlds and make their own decisions about cooperation, would something resembling a water temple network emerge? It turned out that this idea was relatively easy to implement in our computer model. We created the simplest rule we could think of to allow the subaks to learn from experience. At the end of a year of planting and harvesting, each artificial subak compares its aggregate harvests with those of its four closest neighbors. If any of them did better, copy their behavior. Otherwise, make no changes. After every subak has made its decision, simulate another year and compare the next round of harvests. The first time we ran the program with this simple learning algorithm, we expected chaos. It seemed likely that the subaks would keep flipping back and forth, copying first one neighbor and then another as local conditions changed. But instead, within a decade the subaks organized themselves into cooperative networks that closely resembled the real ones.
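Lansing's copy-your-best-neighbor rule is easy to caricature in code. Here is a toy model I made up — not Lansing and Kremer's actual simulation — in which subaks along a river gain by synchronizing schedules with their immediate neighbors (pest control) but lose when too many subaks share a schedule (water shortage):

```python
import random

random.seed(42)

N = 100          # subaks along the river
SCHEDULES = 4    # possible planting months

def harvest(schedule, i):
    """Toy yield: synchrony with neighbors suppresses pests, but
    too many subaks on the same schedule causes water shortages."""
    s = schedule[i]
    neighbors = [schedule[j] for j in (i - 1, i + 1) if 0 <= j < N]
    pest_bonus = sum(1 for n in neighbors if n == s)   # local synchrony helps
    water_penalty = 3.0 * schedule.count(s) / N        # global crowding hurts
    return pest_bonus - water_penalty

def step(schedule):
    """Lansing-Kremer rule: copy the best-performing nearby subak."""
    yields = [harvest(schedule, i) for i in range(N)]
    new = schedule[:]
    for i in range(N):
        window = range(max(0, i - 2), min(N, i + 3))
        best = max(window, key=lambda j: yields[j])
        new[i] = schedule[best]
    return new

schedule = [random.randrange(SCHEDULES) for _ in range(N)]
for _ in range(20):
    schedule = step(schedule)

# Fewer boundaries between differing schedules = larger cooperative clusters.
boundaries = sum(1 for i in range(N - 1) if schedule[i] != schedule[i + 1])
print(boundaries)
```

Starting from a random assignment (which has roughly 75 boundaries), the copying rule quickly organizes the line of subaks into synchronized clusters, qualitatively like the emergence Lansing describes — though of course the real model included hydrology, rainfall, and pest dynamics.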

Lansing describes how attempts to modernize farming in Bali in the 1970’s proved problematic:

To a planner trained in the social sciences, management by water temples looks like an arcane relic from the premodern era. But to an ecologist, the bottom-up system of control has some obvious advantages. Rice paddies are artificial aquatic ecosystems, and by adjusting the flow of water farmers can exert control over many ecological processes in their fields. For example, it is possible to reduce rice pests (rodents, insects, and diseases) by synchronizing fallow periods in large contiguous blocks of rice terraces. After harvest, the fields are flooded, depriving pests of their habitat and thus causing their numbers to dwindle. This method depends on a smoothly functioning, cooperative system of water management, physically embodied in proportional irrigation dividers, which make it possible to tell at a glance how much water is flowing into each canal and so verify that the division is in accordance with the agreed-on schedule.

Modernization plans called for the replacement of these proportional dividers with devices called "Romijn gates," which use gears and screws to adjust the height of sliding metal gates inserted across the entrances to canals. The use of such devices makes it impossible to determine how much water is being diverted: a gate that is submerged to half the depth of a canal does not divert half the flow, because the velocity of the water is affected by the obstruction caused by the gate itself. The only way to accurately estimate the proportion of the flow diverted by a Romijn gate is with a calibrated gauge and a table. These were not supplied to the farmers, although $55 million was spent to install Romijn gates in Balinese irrigation canals, and to rebuild some weirs and primary canals.

The farmers coped with the Romijn gates by simply removing them or raising them out of the water and leaving them to rust.

On the other hand, Made said that the people of the village really appreciated this modern dam:

Using gears, it takes a lot less effort to open and close than the old-fashioned kind:

Later in this series of interviews we’ll hear more about sustainable agriculture from Thomas Fischbacher.

But now let’s get back to Nathan!

JB: Okay. Last time we were talking about the things that altered your attitude about climate change when you started working on it. And one of them was how carbon dioxide stays in the atmosphere a long time. Why is that so important? And is it even true? After all, any given molecule of CO2 that’s in the air now will soon get absorbed by the ocean, or taken up by plants.

NU: The longevity of atmospheric carbon dioxide is important because it determines the amount of time over which our actions now (fossil fuel emissions) will continue to have an influence on the climate, through the greenhouse effect.

You have heard correctly that a given molecule of CO2 doesn’t stay in the atmosphere for very long. I think it’s about 5 years. This is known as the residence time or turnover time of atmospheric CO2. Maybe that molecule will go into the surface ocean and come back out into the air; maybe photosynthesis will bind it in a tree, in wood, until the tree dies and decays and the molecule escapes back to the atmosphere. This is a carbon cycle, so it’s important to remember that molecules can come back into the air even after they’ve been removed from it.

But the fate of an individual CO2 molecule is not the same as how long it takes for the CO2 content of the atmosphere to decrease back to its original level after new carbon has been added. The latter is the answer that really matters for climate change. Roughly, the former depends on the magnitude of the gross carbon sink, while the latter depends on the magnitude of the net carbon sink (the gross sink minus the gross source).

As an example, suppose that every year 100 units of CO2 are emitted to the atmosphere from natural sources (organic decay, the ocean, etc.), and each year (say with a 5 year lag), 100 units are taken away by natural sinks (plants, the ocean, etc). The 5 years actually doesn’t matter here; the system is in steady-state equilibrium, and the amount of CO2 in the air is constant. Now suppose that humans add an extra 1 unit of CO2 each year. If nothing else changes, then the amount of carbon in the air will increase every year by 1 unit, indefinitely. Far from the carbon being purged in 5 years, we end up with an arbitrarily large amount of carbon in the air.
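The bookkeeping in this example is worth making explicit. Here's a minimal sketch of it, with the same made-up round numbers as above (all quantities in arbitrary units):

```python
# Natural sources and sinks exchange 100 units/year in balance, and humans add
# an extra 1 unit/year on top. Because the natural fluxes cancel, the
# atmospheric burden grows by exactly the human contribution each year, even
# though individual molecules are cycling in and out the whole time.

def atmospheric_carbon(years, initial=1000.0, natural_flux=100.0, human=1.0):
    """Atmospheric carbon after `years`, with fixed natural sources and sinks."""
    carbon = initial
    for _ in range(years):
        carbon += natural_flux  # natural emissions (organic decay, the ocean)
        carbon -= natural_flux  # natural uptake (plants, the ocean): balanced
        carbon += human         # the extra anthropogenic unit
    return carbon
```

After 50 years the burden sits 50 units above its starting value, despite the ~5 year residence time of any individual molecule.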

Even if you only add carbon to the atmosphere for a finite time (e.g., by running out of fossil fuels), the CO2 concentration will ultimately reach, and then perpetually remain at, a level elevated above its original value by the total amount of new carbon added. Individual CO2 molecules may still get absorbed within 5 years of entering the atmosphere, and perhaps few of the carbon atoms that were once in fossil fuels will ultimately remain in the atmosphere. But if natural sinks are only removing an amount of carbon equal in magnitude to natural sources, and both are fixed in time, you can see that if you add extra fossil carbon the overall atmospheric CO2 concentration can never decrease, regardless of what individual molecules are doing.

In reality, natural carbon sinks tend to grow in proportion to how much carbon is in the air, so atmospheric CO2 doesn’t remain elevated indefinitely in response to a pulse of carbon into the air. This is kind of the biogeochemical analog to the "Planck feedback" in climate dynamics: it acts to restore the system to equilibrium. To first order, atmospheric CO2 decays or "relaxes" exponentially back to the original concentration over time. But this relaxation time (variously known as a "response time", "adjustment time", "recovery time", or, confusingly, "residence time") isn’t a function of the residence time of a CO2 molecule in the atmosphere. Instead, it depends on how quickly the Earth’s carbon removal processes react to the addition of new carbon. For example, how fast plants grow, die, and decay, or how fast surface water in the ocean mixes to greater depths, where the carbon can no longer exchange freely with the atmosphere. These are slower processes.

There are actually a variety of response times, ranging from years to hundreds of thousands of years. The surface mixed layer of the ocean responds within a year or so; plants take decades to grow and take up carbon, or to return it to the atmosphere through rotting or burning. Deep ocean mixing and carbonate chemistry operate on longer time scales, centuries to millennia. And geologic processes like silicate weathering are even slower, acting over tens of thousands of years. The removal dynamics are a superposition of all these processes, with a fair chunk taken out quickly by the fast processes, and slower processes removing the remainder more gradually.
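That superposition is easy to picture as a sum of decaying exponentials, one per process. The fractions and e-folding times below are made-up round numbers chosen to echo the text (fast ocean and land uptake, slower deep-ocean mixing and carbonate chemistry, very slow weathering), not a fitted impulse-response function from the literature:

```python
import math

# (fraction of the pulse, e-folding time in years) -- illustrative values only
COMPONENTS = [
    (0.50, 20.0),     # fast: surface ocean mixed layer, land biosphere
    (0.25, 500.0),    # slower: deep ocean mixing, carbonate chemistry
    (0.25, 50000.0),  # very slow: silicate weathering
]

def airborne_fraction(t):
    """Fraction of an emitted CO2 pulse still in the air after t years."""
    return sum(a * math.exp(-t / tau) for a, tau in COMPONENTS)
```

With these toy numbers, half the pulse is gone within decades, but roughly a quarter is still airborne after a thousand years, which is the qualitative shape of the "a few centuries, plus a stubborn remainder" picture.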

To summarize, as David Archer put it, "The lifetime of fossil fuel CO2 in the atmosphere is a few centuries, plus 25 percent that lasts essentially forever." By "forever" he means "tens of thousands of years" — longer than the present age of human civilization. This inspired him to write this pop-sci book, taking a geologic view of anthropogenic climate change:

• David Archer, The Long Thaw: How Humans Are Changing the Next 100,000 Years of Earth’s Climate, Princeton University Press, Princeton, New Jersey, 2009.

A clear perspective piece on the lifetime of carbon is:

• Mason Inman, Carbon is forever, Nature Reports Climate Change, 20 November 2008.

which is based largely on this review article:

• David Archer, Michael Eby, Victor Brovkin, Andy Ridgwell, Long Cao, Uwe Mikolajewicz, Ken Caldeira, Katsumi Matsumoto, Guy Munhoven, Alvaro Montenegro, and Kathy Tokos, Atmospheric lifetime of fossil fuel carbon dioxide, Annual Review of Earth and Planetary Sciences 37 (2009), 117-134.

For climate implications, see:

• Susan Solomon, Gian-Kasper Plattner, Reto Knutti and Pierre Friedlingstein, Irreversible climate change due to carbon dioxide emissions, PNAS 106 (2009), 1704-1709.

• M. Eby, K. Zickfeld, A. Montenegro, D. Archer, K. J. Meissner and A. J. Weaver, Lifetime of anthropogenic climate change: millennial time scales of potential CO2 and surface temperature perturbations, Journal of Climate 22 (2009), 2501-2511.

• Long Cao and Ken Caldeira, Atmospheric carbon dioxide removal: long-term consequences and commitment, Environmental Research Letters 5 (2010), 024011.

For the very long term perspective (how CO2 may affect the glacial-interglacial cycle over geologic time), see:

• David Archer and Andrey Ganopolski, A movable trigger: Fossil fuel CO2 and the onset of the next glaciation, Geochemistry Geophysics Geosystems 6 (2005), Q05003.

JB: So, you’re telling me that even if we do something really dramatic like cut fossil fuel consumption by half in the next decade, we’re still screwed. Global warming will keep right on, though at a slower pace. Right? Doesn’t that make you feel sort of hopeless?

NU: Yes, global warming will continue even as we reduce emissions, although more slowly. That’s sobering, but not grounds for total despair. Societies can adapt, and ecosystems can adapt — up to a point. If we slow the rate of change, then there is more hope that adaptation can help. We will have to adapt to climate change, regardless, but the less we have to adapt, and the more gradual the adaptation necessary, the less costly it will be.

What’s even better than slowing the rate of change is to reduce the overall amount of it. To do that, we’d need to not only reduce carbon emissions, but to reduce them to zero before we consume all fossil fuels (or all of them that would otherwise be economically extractable). If we emit the same total amount of carbon, but more slowly, then we will get the same amount of warming, just more slowly. But if we ultimately leave some of that carbon in the ground and never burn it, then we can reduce the amount of final warming. We won’t be able to stop it dead, but even knocking a degree off the extreme scenarios would be helpful, especially if there are "tipping points" that might otherwise be crossed (like a threshold temperature above which a major ice sheet will disintegrate).

So no, I don’t feel hopeless that we can, in principle, do something useful to mitigate the worst effects of climate change, even though we can’t plausibly stop or reverse it on normal societal timescales. But sometimes I do feel hopeless that we lack the public and political will to actually do so. Or at least, that we will procrastinate until we start seeing extreme consequences, by which time it’s too late to prevent them. Well, it may not be too late to prevent future, even more extreme consequences, but the longer we wait, the harder it is to make a dent in the problem.

I suppose here I should mention the possibility of climate geoengineering, which is a proposed attempt to artificially counteract global warming through other means, such as reducing incoming sunlight with reflective particles in the atmosphere, or space mirrors. That doesn’t actually cancel all climate change, but it can negate a lot of the global warming. There are many risks involved, and I regard it as a truly last-ditch effort if we discover that we really are "screwed" and can’t bear the consequences.

There is also an extreme form of carbon cycle geoengineering, known as air capture and sequestration, which extracts CO2 from the atmosphere and sequesters it for long periods of time. There are various proposed technologies for this, but it’s highly uncertain whether this can feasibly be done on the necessary scales.

JB: Personally, I think society will procrastinate until we see extreme climate changes. Recently millions of Pakistanis were displaced by floods: a quarter of their country was covered by water. We can’t say for sure this was caused by global warming — but it’s exactly the sort of thing we should expect.

But you’ll notice, this disaster is nowhere near enough to make politicians talk about cutting fossil fuel usage! It’ll take a lot of disasters like this to really catch people’s attention. And by then we’ll be playing a desperate catch-up game, while people in many countries are struggling to survive. That won’t be easy. Just think how little attention the Pakistanis can spare for global warming right now.

Anyway, this is just my own cheery view. But I’m not hopeless, because I think there’s still a lot we can do to prevent a terrible situation from becoming even worse. Since I don’t think the human race will go extinct anytime soon, it would be silly to "give up".

Now, you’ve just started a position at the Woodrow Wilson School at Princeton. When I was an undergrad there, this school was the place for would-be diplomats. What’s a nice scientist like you doing in a place like this? I see you’re in the Program in Science, Technology and Environmental Policy, or "STEP program". Maybe it’s too early for you to give a really good answer, but could you say a bit about what they do?

NU: Let me pause to say that I don’t know whether the Pakistan floods are "exactly the sort of thing we should expect" to happen to Pakistan, specifically, as a result of climate change. Uncertainty in the attribution of individual events is one reason why people don’t pay attention to them. But it is true that major floods are examples of extreme events which could become more (or less) common in various regions of the world in response to climate change.

Returning to your question, the STEP program includes a number of scientists, but we are all focused on policy issues because the Woodrow Wilson School is for public and international affairs. There are physicists who work on nuclear policy, ecologists who study environmental policy and conservation biology, atmospheric chemists who look at ozone and air pollution, and so on. Obviously, climate change is intimately related to public and international policy. I am mostly doing policy-relevant science but may get involved in actual policy to some extent. The STEP program has ties to other departments such as Geosciences, interdisciplinary umbrella programs like the Atmospheric and Ocean Sciences program and the Princeton Environmental Institute, and NOAA’s nearby Geophysical Fluid Dynamics Laboratory, one of the world’s leading climate modeling centers.

JB: How much do you want to get into public policy issues? Your new boss, Michael Oppenheimer, used to work as chief scientist for the Environmental Defense Fund. I hadn’t known much about them, but I’ve just been reading a book called The Climate War. This book says a lot about the Environmental Defense Fund’s role in getting the US to pass cap-and-trade legislation to reduce sulfur dioxide emissions. That’s quite an inspiring story! Many of the same people then went on to push for legislation to reduce greenhouse gases, and of course that story is less inspiring, so far: no success yet. Can you imagine yourself getting into the thick of these political endeavors?

NU: No, I don’t see myself getting deep into politics. But I am interested in what we should be doing about climate change, specifically, the economic assessment of climate policy in the presence of uncertainties and learning. That is, how hard should we be trying to reduce CO2 emissions, accounting for the fact that we’re unsure what climate the future will bring, but expect to learn more over time. Michael is very interested in this question too, and the harder problem of "negative learning":

• Michael Oppenheimer, Brian C. O’Neill and Mort Webster, Negative learning, Climatic Change 89 (2008), 155-172.

"Negative learning" occurs if what we think we’re learning is actually converging on the wrong answer. How fast could we detect and correct such an error? It’s hard enough to give a solid answer to what we might expect to learn, let alone what we don’t expect to learn, so I think I’ll start with the former.

I am also interested in the value of learning. How will our policy change if we learn more? Can there be any change in near-term policy recommendations, or will we learn slowly enough that new knowledge will only affect later policies? Is it more valuable — in terms of its impact on policy — to learn more about the most likely outcomes, or should we concentrate on understanding better the risks of the worst-case scenarios? What will cause us to learn the fastest? Better surface temperature observations? Better satellites? Better ocean monitoring systems? What observables should we be looking at?

The question "How much should we reduce emissions" is, partially, an economic one. The safest course of action from the perspective of climate impacts is to immediately reduce emissions to a much lower level. But that would be ridiculously expensive. So some kind of cost-benefit approach may be helpful: what should we do, balancing the costs of emissions reductions against their climate benefits, knowing that we’re uncertain about both. I am looking at so-called "economic integrated assessment" models, which combine a simple model of the climate with an even simpler model of the world economy to understand how they influence each other. Some argue these models are too simple. I view them more as a way of getting order-of-magnitude estimates of the relative values of different uncertainty scenarios or policy options under specified assumptions, rather than something that can give us "The Answer" to what our emissions targets should be.

In a certain sense it may be moot to look at such cost-benefit analyses, since there is a huge difference between "what may be economically optimal for us to do" and "what we will actually do". We have not yet approached current policy recommendations, so what’s the point of generating new recommendations? That’s certainly a valid argument, but I still think it’s useful to have a sense of the gap between what we are doing and what we "should" be doing.

Economics can only get us so far, however (and maybe not far at all). Traditional approaches to economics have a very narrow way of viewing the world, and tend to ignore questions of ethics. How do you put an economic value on biodiversity loss? If we might wipe out polar bears, or some other species, or a whole lot of species, how much is it "worth" to prevent that? What is the Great Barrier Reef worth? Its value in tourism dollars? Its value in "ecosystem services" (the more nebulous economic activity which indirectly depends on its presence, such as fishing)? Does it have intrinsic value, and is worth something (what?) to preserve, even if it has no quantifiable impact on the economy whatsoever?

You can continue on with questions like this. Does it make sense to apply standard economic discounting factors, which effectively value the welfare of future generations less than that of the current generation? See for example:

• John Quiggin, Stern and his critics on discounting and climate change: an editorial essay, Climatic Change 89 (2008), 195-205.
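The arithmetic behind this debate is simple but dramatic. Here's an illustrative calculation (the damage figure is an arbitrary placeholder, and the two rates are just representative of the low, Stern-style rate and a more conventional market-style rate, not anyone's official numbers):

```python
# Present value of a fixed climate damage suffered far in the future, under
# two discount rates. Which rate you pick swings the answer by orders of
# magnitude, which is why discounting dominates the Stern debate.

def present_value(damage, years, rate):
    """Discounted present value of a damage incurred `years` from now."""
    return damage / (1 + rate) ** years

# Placeholder: $1 trillion of damages 200 years from now.
low = present_value(1e12, 200, 0.014)   # low, Stern-style discount rate
high = present_value(1e12, 200, 0.055)  # more conventional market-style rate
```

At 1.4% the future damage is worth roughly $60 billion today; at 5.5%, only about $20 million. The case for spending on mitigation now turns almost entirely on this one parameter.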

Economic models also tend to preserve present economic disparities. Otherwise, their "optimal" policy is to immediately transfer a lot of the wealth of developed countries to developing countries — and this is without any climate change — to maximize the average "well-being" of the global population, on the grounds that a dollar is worth more to a poor person than a rich person. This is not a realistic policy and arguably shouldn’t happen anyway, but you do have to be careful about hard-coding potential inequities into your models:

• Seth D. Baum and William E. Easterling, Space-time discounting in climate change adaptation, Mitigation and Adaptation Strategies for Global Change 15 (2010), 591-609.

More broadly, it’s possible for economics models to allow sea level rise to wipe out Bangladesh, or other extreme scenarios, simply because some countries have so little economic output that it doesn’t "matter" if they disappear, as long as other countries become even more wealthy. As I said, economics is a narrow lens.

After all that, it may seem silly to be thinking about economics at all. The main alternative is the "precautionary principle", which says that we shouldn’t take suspected risks unless we can prove them safe. After all, we have few geologic examples of CO2 levels rising as far and as fast as we are likely to increase them — to paraphrase Wally Broecker, we are conducting an uncontrolled and possibly unprecedented experiment on the Earth. This principle has some merits. The common argument, "We should do nothing unless we can prove the outcome is disastrous", is a strange burden of proof from a decision analytic point of view — it has little to do with the realities of risk management under uncertainty. Nobody’s going to say "You can’t prove the bridge will collapse, so let’s build it". They’re going to say "Prove it’s safe (to within a certain guarantee) before we build it". Actually, a better analogy to the common argument might be: you’re driving in the dark with broken headlights, and insist “You’ll have to prove there are no cliffs in front of me before I’ll consider slowing down.” In reality, people should slow down, even if it makes them late, unless they know there are no cliffs.

But the precautionary principle has its own problems. It can imply arbitrarily expensive actions in order to guard against arbitrarily unlikely hazards, simply because we can’t prove they’re safe, or precisely quantify their exact degree of unlikelihood. That’s why I prefer to look at quantitative cost-benefit analysis in a probabilistic framework. But it can be supplemented with other considerations. For example, you can look at stabilization scenarios: where you "draw a line in the sand" and say we can’t risk crossing that, and apply economics to find the cheapest way to avoid crossing the line. Then you can elaborate that to allow for some small but nonzero probability of crossing it, or to allow for temporary "overshoot", on the grounds that it might be okay to briefly cross the line, as long as we don’t stay on the other side indefinitely. You can tinker with discounting assumptions and the decision framework of expected utility maximization. And so on.

JB: This is fascinating stuff. You’re asking a lot of really important questions — I think I see about 17 question marks up there. Playing the devil’s advocate a bit, I could respond: do you know any answers? Of course I don’t expect "ultimate" answers, especially to profound questions like how much we should allow economics to guide our decisions, versus tempering it with other ethical considerations. But it would be nice to see an example where thinking about these issues turned up new insights that actually changed people’s behavior. Cases where someone said "Oh, I hadn’t thought of that…", and then did something different that had a real effect.

You see, right now the world as it is seems so far removed from the world as it should be that one can even start to doubt the usefulness of pondering the questions you’re raising. As you said yourself, "We have not yet approached current policy recommendations, so what’s the point of generating new recommendations?"

I think the cap-and-trade idea is a good example, at least as far as sulfur dioxide emissions go: the Clean Air Act Amendments of 1990 managed to reduce SO2 emissions in the US from about 19 million tons in 1980 to about 7.6 million tons in 2007. Of course this idea is actually a bunch of different ideas that need to work together in a certain way… but anyway, some example related to global warming would be a bit more reassuring, given our current problems with that.

NU: Climate change economics has been very influential in generating momentum for putting a price on carbon (through cap-and-trade or otherwise) in Europe and the U.S., by showing that such a policy has the potential to be a net benefit once the risks of climate change are taken into account. SO2 emissions markets are one relevant piece of this body of research, although the CO2 problem is much bigger in scope and presents more problems for such approaches. Climate economics has been an important synthesis of decision analysis and scientific uncertainty quantification, which I think we need more of. But to be honest, I’m not sure what immediate impact additional economic work may have on mitigation policy, unless we begin approaching current emissions targets. So from the perspective of immediate applications, I also ponder the usefulness of answering these questions.

That, however, is not the only perspective I think about. I’m also interested in how what we should do is related to what we might learn — if not today, then in the future. There are still important open questions about how well we can see something potentially bad coming, the answers to which could influence policies. For example, if a major ice sheet begins to substantially disintegrate within the next few centuries, would we be able to see that coming soon enough to step up our mitigation efforts in time to prevent it? In reality that’s a probabilistic question, but let’s pretend it’s a binary outcome. If the answer is "yes", that could call for increased investment in "early warning" observation systems, and a closer coupling of policy to the data produced by such systems. (Well, we should be investing more in those anyway, but people might get the point more strongly, especially if research shows that we’d only see it coming if we get those systems in place and tested soon.) If the answer is "no", that could go at least three ways. One way it could go is that the precautionary principle wins: if we think that we could put coastal cities under water, and we wouldn’t see it coming in time to prevent it, that might finally prompt more preemptive mitigation action. Another is that we start looking more seriously at last-ditch geoengineering approaches, or carbon air capture and sequestration. Or, if people give up on modifying the climate altogether, then it could prompt more research and development into adaptation. All of those outcomes raise new policy questions, concerning how much of what policy response we should aim for.

Which brings me to the next policy option. The U.S. presidential science advisor, John Holdren, has said that we have three choices for climate change: mitigate, adapt, or suffer. Regardless of what we do about the first, people will likely be doing some of the other two; the question is how much. If you’re interested in research that has a higher likelihood of influencing policy in the near term, adaptation is probably what you should work on. (That, or technological approaches like climate/carbon geoengineering, energy systems, etc.) People are already looking very seriously at adaptation (and in some cases are already putting plans into place). For example, the Port Authority of Los Angeles needs to know whether, or when, to fortify their docks against sea level rise, and whether a big chunk of their business could disappear if the Northwest Passage through the Arctic Ocean opens permanently. They have to make these investment decisions regardless of what may happen with respect to geopolitical emissions reduction negotiations. The same kinds of learning questions I’m interested in come into play here: what will we know, and when, and how should current decisions be structured knowing that we will be able to periodically adjust those decisions?

So, why am I not working on adaptation? Well, I expect that I will be, in the future. But right now, I’m still interested in a bigger question, which is how well we can bound the large risks and our ability to prevent disasters, rather than just finding the best way to survive them. What is the best and the worst that can happen, in principle? Also, I’m concerned that right now there is too much pressure to develop adaptation policies to a level of detail which we don’t yet have the scientific capability to develop. While global temperature projections are probably reasonable within their stated uncertainty ranges, we have a very limited ability to predict, for example, how precipitation may change over a particular city. But that’s what people want to know. So scientists are trying to give them an answer. But it’s very hard to say whether some of those answers right now are actionably credible. You have to choose your problems carefully when you work in adaptation. Right now I’m opting to look at sea level rise, partly because it is less affected by some of the details of local meteorology.

JB: Interesting. I think I’m going to cut our conversation here, because at this point it took a turn that will really force me to do some reading! And it’s going to take a while. But it should be fun!


The climatic impacts of releasing fossil fuel CO2 to the atmosphere will last longer than Stonehenge, longer than time capsules, longer than nuclear waste, far longer than the age of human civilization so far. – David Archer

