Monte Carlo Methods in Climate Science

23 July, 2013

joint with David Tweed

One way the Azimuth Project can help save the planet is to get bright young students interested in ecology, climate science, green technology, and stuff like that. So, we are writing an article for Math Horizons, an American magazine for undergraduate math majors. This blog article is a draft of that. You can also see it in PDF form here.

We’d really like to hear your comments! There are severe limits on including more detail, since the article should be easy to read and short. So please don’t ask us to explain more stuff: we’re most interested to know if you sincerely don’t understand something, or feel that students would have trouble understanding something. For comparison, you can see sample Math Horizons articles here.

Introduction

They look placid lapping against the beach on a calm day, but the oceans are actually quite dynamic. The ocean currents act as ‘conveyor belts’, transporting heat both vertically between the water’s surface and the depths and laterally from one area of the globe to another. This effect is so significant that the temperature and precipitation patterns can change dramatically when currents do.

For example: shortly after the last ice age, northern Europe experienced a shocking change in climate from 10,800 to 9,500 BC. At the start of this period temperatures plummeted in a matter of decades. It became 7° Celsius colder, and glaciers started forming in England! The cold spell lasted for over a thousand years, but it ended as suddenly as it had begun.

Why? The most popular theory is that a huge lake in North America, formed by melting glaciers, burst its banks. In a massive torrent lasting for years, the water from this lake rushed out to the northern Atlantic Ocean. By floating atop the denser salt water, this fresh water blocked a major current: the Atlantic Meridional Overturning Circulation. This current brings warm water north and helps keep northern Europe warm. So, when it shut down, northern Europe was plunged into a deep freeze.

Right now global warming is causing ice sheets in Greenland to melt and release fresh water into the North Atlantic. Could this shut down the Atlantic Meridional Overturning Circulation and make the climate of Northern Europe much colder? In 2010, Keller and Urban [KU] tackled this question using a simple climate model, historical data, probability theory, and lots of computing power. Their goal was to understand the spectrum of possible futures compatible with what we know today.

Let us look at some of the ideas underlying their work.

Box models

The earth’s physical behaviour, including the climate, is far too complex to simulate from the bottom up using basic physical principles, at least for now. The most detailed models today can take days to run on very powerful computers. So to make reasonable predictions on a laptop in a tractable time-frame, geophysical modellers use some tricks.

First, it is possible to split geophysical phenomena into ‘boxes’ containing strongly related things. For example: atmospheric gases, particulate levels and clouds all affect each other strongly; likewise the heat content, currents and salinity of the oceans all interact strongly. However, the interactions between the atmosphere and the oceans are weaker, and we can approximately describe them using just a few settings, such as the amount of atmospheric CO2 entering or leaving the oceans. Clearly these interactions must be consistent—for example, the amount of CO2 leaving the atmosphere box must equal the amount entering the ocean box—but breaking a complicated system into parts lets different specialists focus on different aspects; then we can combine these parts and get an approximate model of the entire planet. The box model used by Keller and Urban is shown in Figure 1.



1. The box model used by Keller and Urban.

 
Second, it turns out that simple but effective box models can be distilled from the complicated physics in terms of forcings and feedbacks. Essentially, a forcing is a measured input to the system, such as solar radiation or CO2 released by burning fossil fuels. As an analogy, consider a child on a swing: the adult’s push every so often is a forcing. Similarly, a feedback describes how the current ‘box variables’ influence future ones. In the swing analogy, one feedback is how the velocity will influence the future height. Specifying feedbacks typically uses knowledge of the detailed low-level physics to derive simple, tractable functional relationships between groups of large-scale observables, a bit like how we derive the physics of a gas by thinking about collisions of lots of particles.

However, it is often not feasible to get actual settings for the parameters in our model starting from first principles. In other words, often we can get the general form of the equations in our model, but they contain a lot of constants that we can estimate only by looking at historical data.
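To make the idea of forcings and feedbacks concrete, here is a minimal sketch in Python of a one-box energy balance model. This is not the model Keller and Urban used: the heat capacity, feedback parameter and forcing below are round numbers chosen purely for illustration, and the feedback parameter is exactly the kind of constant we would need to estimate from historical data.

```python
# A one-box energy-balance model (purely illustrative, not Keller and Urban's model).
# The temperature anomaly T responds to a forcing F (for example, extra radiation
# trapped by CO2) through a feedback parameter lam and an effective heat capacity C.
C = 8.0      # heat capacity of the box (W yr / m^2 / K), an assumed round number
lam = 1.2    # feedback parameter (W / m^2 / K): the kind of constant we must estimate from data

def step(T, F, dt=1.0):
    """Advance dT/dt = (F - lam*T)/C by one time step using the Euler method."""
    return T + dt * (F - lam * T) / C

T = 0.0
for year in range(100):
    F = 0.04 * year          # a steadily growing forcing, just for illustration
    T = step(T, F)
print(round(T, 2))           # temperature anomaly after a century of this forcing
```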

Probability modeling

Suppose we have a box model that depends on some settings S. For example, in Keller and Urban’s model, S is a list of 18 numbers. To keep things simple, suppose the settings are elements of some finite set. Suppose we also have a huge hard disc full of historical measurements, and we want to use this to find the best estimate of S. Because our data is full of ‘noise’ from other, unmodeled phenomena, we generally cannot unambiguously deduce a single set of settings. Instead we have to look at things in terms of probabilities. More precisely, we need to study the probability that S takes some value s given that the measurements take some value. Let’s call the measurements M, and again let’s keep things simple by saying M takes values in some finite set of possible measurements.

The probability that S = s given that M takes some value m is called the conditional probability P(S=s | M=m). How can we compute this conditional probability? This is a somewhat tricky problem.

One thing we can more easily do is repeatedly run our model with randomly chosen settings and see what measurements it predicts. By doing this, we can compute the probability that given setting values S = s, the model predicts measurements M=m. This again is a conditional probability, but now it is called P(M=m|S=s).

This is not what we want: it’s backwards! But here Bayes’ rule comes to the rescue, relating what we want to what we can more easily compute:

\displaystyle{ P(S = s | M = m) = P(M = m| S = s) \frac{P(S = s)}{P(M = m)} }

Here P(S = s) is the probability that the settings take a specific value s, and similarly for P(M = m). Bayes’ rule is quite easy to prove, and it is actually a general rule that applies to any random variables, not just the settings and the measurements in our problem [Y]. It underpins most methods of figuring out hidden quantities from observed ones. For this reason, it is widely used in modern statistics and data analysis [K].

How does Bayes’ rule help us here? When we repeatedly run our model with randomly chosen settings, we have control over P(S = s). As mentioned, we can compute P(M=m| S=s). Finally, P(M = m) is independent of our choice of settings. So, we can use Bayes’ rule to compute P(S = s | M = m) up to a constant factor. And since probabilities must sum to 1, we can figure out this constant.
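Here is a tiny numerical sketch of that last step. The candidate settings and the likelihood values below are made up purely for illustration; the point is just that multiplying by the prior and then dividing by the normalizing constant gives the conditional probabilities we want.

```python
import numpy as np

# Bayes' rule up to a constant factor, then normalization.  The candidate
# settings and the likelihoods P(M = m | S = s) are invented for illustration;
# in practice the likelihoods would come from running the model.
settings   = np.array([1.70, 1.75, 1.80, 1.85, 1.90])        # candidate values of s
likelihood = np.array([0.002, 0.010, 0.040, 0.012, 0.003])   # P(M = m | S = s)
prior      = np.full(settings.size, 1.0 / settings.size)     # uniform P(S = s)

unnormalized = likelihood * prior                 # proportional to P(S = s | M = m)
posterior = unnormalized / unnormalized.sum()     # divide by the constant: probabilities sum to 1
print(dict(zip(settings, posterior.round(3))))
```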

This lets us do many things. It lets us find the most likely values of the settings for our model, given our hard disc full of observed data. It also lets us find the probability that the settings lie within some set. This is important: if we’re facing the possibility of a climate disaster, we don’t just want to know the most likely outcome. We would like to know that with 95% probability, the outcome will lie in some range.

An example

Let us look at an example much simpler than that considered by Keller and Urban. Suppose our measurements are real numbers m_0,\dots, m_T related by

m_{t+1} = s m_t - m_{t-1} + N_t

Here s, a real constant, is our ‘setting’, while N_t is some ‘noise’: an independent Gaussian random variable for each time t, each with mean zero and some fixed standard deviation. Then the measurements m_t will have roughly sinusoidal behavior but with irregularity added by the noise at each time step, as illustrated in Figure 2.



2. The example system: red are predicted measurements for a given value of the settings, green is another simulation for the same s value and blue is a simulation for a slightly different s.

 
Note how there is no clear signal from either the curves or the differences that the green curve is at the correct setting value while the blue one has the wrong one: the noise makes it nontrivial to estimate s. This is a baby version of the problem faced by Keller and Urban.
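Here is a short sketch of how one might simulate this system and produce curves like those in Figure 2. The initial conditions and the noise level are arbitrary choices, made only for illustration.

```python
import numpy as np

def simulate(s, T=200, noise_std=0.1, seed=None):
    """Simulate m_{t+1} = s*m_t - m_{t-1} + N_t with Gaussian noise N_t.
    The initial conditions and the noise level are illustrative assumptions."""
    rng = np.random.default_rng(seed)
    m = np.zeros(T + 1)
    m[0], m[1] = 0.0, 1.0
    for t in range(1, T):
        m[t + 1] = s * m[t] - m[t - 1] + rng.normal(0.0, noise_std)
    return m

# three runs as in Figure 2: two at the same setting, one at a slightly different one
red   = simulate(1.8, seed=1)
green = simulate(1.8, seed=2)
blue  = simulate(1.82, seed=3)
```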

Markov Chain Monte Carlo

Having glibly said that we can compute the conditional probability P(M=m | S=s), how do we actually do this? The simplest way would be to run our model many, many times with the settings set at S=s and determine the fraction of times it predicts measurements equal to m. This gives us an estimate of P(M=m | S=s). Then we can use Bayes’ rule to work out P(S=s | M=m), at least up to a constant factor.

Doing all this by hand would be incredibly time consuming and error prone, so computers are used for this task. In our example, we do this in Figure 3. As we keep running our model over and over, the curve showing P(M=m |S=s) as a function of s settles down to the right answer.
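Here is a rough sketch of such a brute-force scan, reusing the simulate function from the sketch above. To keep exact matches possible it coarsely discretizes a very short time series; a real analysis would use much longer records and the probability density of the noise rather than exact matching.

```python
import numpy as np

# Brute-force scan over a grid of settings, reusing `simulate` from the earlier sketch.
# The grid, bin width and trajectory length are illustrative assumptions.
def discretize(m, bin_width=0.5):
    """Round each measurement to a coarse bin, so exact matches are possible."""
    return tuple(np.round(np.asarray(m) / bin_width).astype(int).tolist())

def estimate_likelihood(data, s, runs=480, **sim_kwargs):
    """Estimate P(M = m | S = s) as the fraction of simulations matching the data."""
    target = discretize(data)
    hits = sum(discretize(simulate(s, **sim_kwargs)) == target for _ in range(runs))
    return hits / runs

data = simulate(1.8, T=4, seed=42)             # pretend this short record is the observed data
grid = np.linspace(1.6, 2.0, 21)               # candidate settings to try
estimates = [estimate_likelihood(data, s, T=4) for s in grid]
```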


3. The estimates of P(M=m | S=s) as a function of s using uniform sampling, ending up with 480 samples at each point.

 

However, this is computationally inefficient, as shown in the probability distribution for small numbers of samples. This has quite a few ‘kinks’, which only disappear later. The problem is that there are lots of possible choices of s to try. And this is for a very simple model!

When dealing with the 18 settings involved in the model of Keller and Urban, trying every combination would take far too long. A way to avoid this is Markov Chain Monte Carlo sampling. Monte Carlo is famous for its casinos, so a ‘Monte Carlo’ algorithm is one that uses randomness. A ‘Markov chain’ is a random walk: for example, where you repeatedly flip a coin and take one step right when you get heads, and one step left when you get tails. So, in Markov Chain Monte Carlo, we perform a random walk through the collection of all possible settings, collecting samples.

The key to making this work is that at each step on the walk a proposed modification s' to the current settings s is generated randomly—but it may be rejected if it does not seem to improve the estimates. The essence of the rule is:

The modification s \mapsto s' is randomly accepted with a probability equal to the ratio

\displaystyle{ \frac{P(M=m | S=s')}{ P(M=m | S=s)} }

Otherwise the walk stays at the current position.

If the modification is better, so that the ratio is greater than 1, the new state is always accepted. With some additional tricks—such as discarding the very beginning of the walk—this gives a set of samples which can be used to compute P(M=m | S=s). Then we can compute P(S = s | M = m) using Bayes’ rule.
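Here is a minimal sketch of this random walk for the toy system above, again reusing its simulate function. Rather than estimating P(M=m | S=s) by counting matching simulations, it uses the Gaussian density of the noise as the likelihood, which is a standard shortcut; the noise level, proposal step size, starting point and burn-in fraction are illustrative choices, and this is not the code Keller and Urban used.

```python
import numpy as np

noise_std = 0.1   # assumed to match the noise level used in `simulate`

def log_likelihood(s, m):
    """log P(M = m | S = s): each residual m[t+1] - (s*m[t] - m[t-1]) should just be noise."""
    residuals = m[2:] - (s * m[1:-1] - m[:-2])
    return -0.5 * np.sum((residuals / noise_std) ** 2)

def random_walk(m, steps=50_000, step_size=0.01, s0=1.5, seed=0):
    rng = np.random.default_rng(seed)
    s, logp = s0, log_likelihood(s0, m)
    samples = []
    for _ in range(steps):
        s_new = s + rng.normal(0.0, step_size)      # propose a small random change to s
        logp_new = log_likelihood(s_new, m)
        # accept with probability equal to the likelihood ratio (always, if the ratio > 1)
        if np.log(rng.uniform()) < logp_new - logp:
            s, logp = s_new, logp_new
        samples.append(s)
    return np.array(samples[steps // 10:])          # discard the very beginning of the walk

m = simulate(1.8, seed=1)                           # synthetic "measurements" generated at s = 1.8
samples = random_walk(m)
print(samples.mean(), samples.std())                # the walk should concentrate near s = 1.8
```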

Figure 4 shows the results of using the Markov Chain Monte Carlo procedure to figure out P(S= s| M= m) in our example.


4. The estimates of P(S = s|M = m) curves using Markov Chain Monte Carlo, showing the current distribution estimate at increasing intervals. The red line shows the current position of the random walk. Again the kinks are almost gone in the final distribution.

 

Note that producing the final distribution required only about 66 thousand simulations in total, while the full sampling performed over 1.5 million. The key advantage of Markov Chain Monte Carlo is that it avoids performing many simulations in areas where the probability is low, as we can see from the way the walk path remains under the big peak in the probability density almost all the time. What is more impressive is that it achieves this without any global view of the probability density, just by looking at how P(M=m | S=s) changes when we make small changes in the settings. This becomes even more important as we move to dealing with systems with many more dimensions and settings, where it proves very effective at finding regions of high probability density whatever their shape.

Why is it worth doing so much work to estimate the probability distribution for settings for a climate model? One reason is that we can then estimate probabilities of future events, such as the collapse of the Atlantic Meridional Overturning Circulation. And what’s the answer? According to Keller and Urban’s calculation, this current will likely weaken by about a fifth in the 21st century, but a complete collapse is unlikely before 2300. This claim needs to be checked in many ways—for example, using more detailed models. But the importance of the issue is clear, and we hope we have made the importance of good mathematical ideas for climate science clear as well.

Exploring the topic

The Azimuth Project is a group of scientists, engineers and computer programmers interested in questions like this [A]. If you have questions, or want to help out, just email us. Versions of the computer programs we used in this paper will be made available here in a while.

Here are some projects you can try, perhaps with the help of Kruschke’s textbook [K]:

• There are other ways to do setting estimation using time series: compare some to MCMC in terms of accuracy and robustness.

• We’ve seen a 1-dimensional system with one setting. Simulate some multi-dimensional and multi-setting systems. What new issues arise?

Acknowledgements. We thank Nathan Urban and other
members of the Azimuth Project for many helpful discussions.

References

[A] Azimuth Project, http://www.azimuthproject.org.

[KU] Klaus Keller and Nathan Urban, Probabilistic hindcasts and projections of the coupled climate, carbon cycle and Atlantic meridional overturning circulation system: a Bayesian fusion of century-scale measurements with a simple model, Tellus A 62 (2010), 737–750. Also available free online.

[K] John K. Kruschke, Doing Bayesian Data Analysis: A Tutorial with R and BUGS, Academic Press, New York, 2010.

[Y] Eliezer S. Yudkowsky, An intuitive explanation of Bayes’ theorem.


Bridging the Greenhouse-Gas Emissions Gap

28 April, 2013

I could use some help here, finding organizations that can help cut greenhouse gas emissions. I’ll explain what I mean in a minute. But the big question is:

How can we bridge the gap between what we are doing about global warming and what we should be doing?

That’s what this paper is about:

• Kornelis Blok, Niklas Höhne, Kees van der Leun and Nicholas Harrison, Bridging the greenhouse-gas emissions gap, Nature Climate Change 2 (2012), 471-474.

According to the United Nations Environment Programme, we need to cut CO2 emissions by about 12 gigatonnes/year by 2020 to hold global warming to 2 °C.

After the UN climate conference in Copenhagen, many countries made pledges to reduce CO2 emissions. But by 2020 these pledges will cut emissions by at most 6 gigatonnes/year. Even worse, a lot of these pledges are contingent on other people meeting other pledges, and so on… so the confirmed value of all these pledges is only 3 gigatonnes/year.

The authors list 21 things that cities, large companies and individual citizens can do, which they claim will cut greenhouse gas emissions by the equivalent of 10 gigatonnes/year of CO2 by 2020. For each initiative on their list, they claim:

(1) there is a concrete starting position from which a significant up-scaling until the year 2020 is possible;

(2) there are significant additional benefits besides a reduction of greenhouse-gas emissions, so people can be driven by self-interest or internal motivation, not external pressure;

(3) there is an organization or combination of organizations that can lead the initiative;

(4) the initiative has the potential to reach an emission reduction by about 0.5 Gt CO2e by 2020.

21 Initiatives

Now I want to quote the paper and list the 21 initiatives. And here’s where I could use your help! For each of these, can you point me to one or more organizations that are in a good position to lead the initiative?

Some are already listed, but even for these I bet there are other good answers. I want to compile a list, and then start exploring what’s being done, and what needs to be done.

By the way, even if the UN estimate of the greenhouse-emissions gap is wrong, and even if all the numbers I’m about to quote are wrong, most of them are probably the right order of magnitude—and that’s all we need to get a sense of what needs to be done, and how we can do it.

Companies

1. Top 1,000 companies’ emission reductions. Many of the 1,000 largest greenhouse-gas-emitting companies already have greenhouse-gas emission-reduction goals to decrease their energy use and increase their long-term competitiveness, as well as to demonstrate their corporate social responsibility. An association such as the World Business Council for Sustainable Development could lead 30% of the top 1,000 companies to reduce energy-related emissions 10% below business as usual by 2020 and all companies to reduce their non-carbon dioxide greenhouse-gas emissions by 50%. Impact in 2020: up to 0.7 Gt CO2e.

2. Supply-chain emission reductions. Several companies already have social and environmental requirements for their suppliers, which are driven by increased competitiveness, corporate social responsibility and the ability to be a front-runner. An organization such as the Consumer Goods Forum could stimulate 30% of companies to require their supply chains to reduce emissions 10% below business as usual by 2020. Impact in 2020: up to 0.2 Gt CO2e.

3. Green financial institutions. More than 200 financial organizations are already members of the finance initiative of the United Nations Environment Programme (UNEP-FI). They are committed to environmental goals owing to corporate social responsibility, to gain investor certainty and to be placed well in emerging markets. UNEP-FI could lead the 20 largest banks to reduce the carbon footprint of 10% of their assets by 80%. Impact in 2020: up to 0.4 Gt CO2e.

4. Voluntary-offset companies. Many companies are already offsetting their greenhouse-gas emissions, mostly without explicit external pressure. A coalition between an organization with convening power, for example UNEP, and offset providers could motivate 20% of the companies in the light industry and commercial sector to calculate their greenhouse-gas emissions, apply emission-reduction measures and offset the remaining emissions (retiring the purchased credits). It is ensured that offset projects really reduce emissions by using the ‘gold standard’ for offset projects or another comparable mechanism. Governments could provide incentives by giving tax credits for offsetting, similar to those commonly given for charitable donations. Impact by 2020: up to 2.0 Gt CO2e.

Other actors

5. Voluntary-offset consumers. A growing number of individuals (especially with high income) already offset their greenhouse-gas emissions, mostly for flights, but also through carbon-neutral products. Environmental NGOs could motivate 10% of the 20% of richest individuals to offset their personal emissions from electricity use, heating and transport at cost to them of around US$200 per year. Impact in 2020: up to 1.6 Gt CO2e.

6. Major cities initiative. Major cities are large emitters of greenhouse gases and many have greenhouse-gas reduction targets. Cities are intrinsically highly motivated to act so as to improve local air quality, attractiveness and local job creation. Groups like the C40 Cities Climate Leadership Group and ICLEI — Local Governments for Sustainability could lead the 40 cities in C40 or an equivalent sample to reduce emissions 20% below business as usual by 2020, building on the thousands of emission-reduction activities already implemented by the C40 cities. Impact in 2020: up to 0.7 Gt CO2e.

7. Subnational governments. Several states in the United States and provinces in Canada have already introduced support mechanisms for renewable energy, emission-trading schemes, carbon taxes and industry regulation. As a result, they expect an increase in local competitiveness, jobs and energy security. Following the example set by states such as California, these ambitious US states and Canadian provinces could accept an emission-reduction target of 15–20% below business as usual by 2020, as some states already have. Impact in 2020: up to 0.6 Gt CO2e.

Energy efficiency

8. Building heating and cooling. New buildings, and increasingly existing buildings, are designed to be extremely energy efficient to realize net savings and increase comfort. The UN Secretary General’s Sustainable Energy for All Initiative could bring together the relevant players to realize 30% of the full reduction potential for 2020. Impact in 2020: up to 0.6 Gt CO2e.

9. Ban of incandescent lamps. Many countries already have phase-out schedules for incandescent lamps as it provides net savings in the long term. The en.lighten initiative of UNEP and the Global Environment Facility already has a target to globally ban incandescent lamps by 2016. Impact in 2020: up to 0.2 Gt CO2e.

10. Electric appliances. Many international labelling schemes and standards already exist for energy efficiency of appliances, as efficient appliances usually give net savings in the long term. The Collaborative Labeling and Appliance Standards Program or the Super-efficient Equipment and Appliance Deployment Initiative could drive use of the most energy-efficient appliances on the market. Impact in 2020: up to 0.6 Gt CO2e.

11. Cars and trucks. All car and truck manufacturers put emphasis on developing vehicles that are more efficient. This fosters innovation and increases their long-term competitive position. The emissions of new cars in Europe fell by almost 20% in the past decade. A coalition of manufacturers and NGOs joined by the UNEP Partnership for Clean Fuels and Vehicles could agree to save one additional liter per 100 km globally by 2020 for cars, and equivalent reductions for trucks. Impact in 2020: up to 0.7 Gt CO2e.

Energy supply

12. Boost solar photovoltaic energy. Prices of solar photovoltaic systems have come down rapidly in recent years, and installed capacity has increased much faster than expected. It created a new industry, an export market and local value added through, for example, roof installations. A coalition of progressive governments and producers could remove barriers by introducing good grid access and net metering rules, paving the way to add another 1,600 GW by 2020 (growth consistent with recent years). Impact in 2020: up to 1.4 Gt CO2e.

13. Wind energy. Cost levels for wind energy have come down dramatically, making wind economically competitive with fossil-fuel-based power generation in many cases. The Global Wind Energy Council could foster the global introduction of arrangements that lead to risk reduction for investments in wind energy, with, for example, grid access and guarantees. This could lead to an installation of 1,070 GW by 2020, which is 650 GW over a reference scenario. Impact in 2020: up to 1.2 Gt CO2e.

14. Access to energy through low-emission options. Strong calls and actions are already underway to provide electricity access to the 1.4 billion people who are at present without it, and to fulfill development goals. The UN Secretary General’s Sustainable Energy for All Initiative could ensure that all people without access to electricity get access through low-emission options. Impact in 2020: up to 0.4 Gt CO2e.

15. Phasing out subsidies for fossil fuels. This highly recognized option to reduce emissions would improve investment in clean energy, provide other environmental, health and security benefits, and generate income. The International Energy Agency could work with countries to phase out half of all fossil-fuel subsidies. Impact in 2020: up to 0.9 Gt CO2e.

Special sectors

16. International aviation and maritime transport. The aviation and shipping industries are seriously considering efficiency measures and biofuels to increase their competitive advantage. Leading aircraft and ship manufacturers could agree to design their vehicles to capture half of the technical mitigation potential. Impact in 2020: up to 0.2 Gt CO2e.

17. Fluorinated gases (hydrofluorocarbons, perfluorocarbons, SF6). Recent industry-led initiatives are already underway to reduce emissions of these gases originating from refrigeration, air-conditioning and industrial processes. Industry associations, such as Refrigerants, Naturally!, could work towards meeting half of the technical mitigation potential. Impact in 2020: up to 0.3 Gt CO2e.

18. Reduce deforestation. Some countries have already shown that it is possible to strongly reduce deforestation with an integrated approach that eliminates the drivers of deforestation. This has benefits for local air pollution and biodiversity, and can support the local population. Led by an individual with convening power, for example, the United Kingdom’s Prince of Wales or the UN Secretary General, such approaches could be rolled out to all the major countries with high deforestation emissions, halving global deforestation by 2020. Impact in 2020: up to 1.8 Gt CO2e.

19. Agriculture. Options to reduce emissions from agriculture often increase efficiency. The International Federation of Agricultural Producers could help to realize 30% of the technical mitigation potential. (Well, at least it could before it collapsed, after this paper was written.) Impact in 2020: up to 0.8 Gt CO2e.

Air pollutants

20. Enhanced reduction of air pollutants. Reduction of classic air pollutants including black carbon has been pursued for years owing to positive impacts on health and local air quality. UNEP’s Climate and Clean Air Coalition To Reduce Short-Lived Climate Pollutants already has significant political momentum and could realize half of the technical mitigation potential. Impact in 2020: a reduction in radiative forcing impact equivalent to an emission reduction of greenhouse gases in the order of 1 Gt CO2e, but outside of the definition of the gap.

21. Efficient cook-stoves. Cooking in rural areas is a source of carbon dioxide emissions. Furthermore, there are emissions of black carbon, which also leads to global warming. Replacing these cook-stoves would also significantly increase local air quality and reduce pressure on forests from fuel-wood demand. A global development organization such as the UN Development Programme could take the lead in scaling-up the many already existing programs to eventually replace half of the existing cook-stoves. Impact in 2020: a reduction in radiative forcing impact equivalent to an emission reduction of greenhouse gases of up to 0.6 Gt CO2e, included in the effect of the above initiative and outside of the definition of the gap.

For more

For more, see the supplementary materials to this paper, and also:

• Niklas Höhne, Wedging the gap: 21 initiatives to bridge the greenhouse gas emissions gap.

The size of the emissions gap was calculated here:

• The Emissions Gap Report 2012, United Nations Environment Programme (UNEP).

If you’re in a rush, just read the executive summary.


Energy and the Environment – What Physicists Can Do

25 April, 2013

 

The Perimeter Institute is a futuristic-looking place where over 250 physicists are thinking about quantum gravity, quantum information theory, cosmology and the like. Since I work on some of these things, I was recently invited to give the weekly colloquium there. But I took the opportunity to try to rally them into action:

Energy and the Environment: What Physicists Can Do. Watch the video or read the slides.

Abstract. The global warming crisis is part of a bigger transformation in which humanity realizes that the Earth is a finite system and that our population, energy usage, and the like cannot continue to grow exponentially. While politics and economics pose the biggest challenges, physicists are in a good position to help make this transition a bit easier. After a quick review of the problems, we discuss a few ways physicists can help.

On the video you can hear me say a lot of stuff that’s not on the slides: it’s more of a coherent story. The advantage of the slides is that anything in blue, you can click on to get more information. So for example, when I say that solar power capacity has been growing annually by 75% in recent years, you can see where I got that number.

I was pleased by the response to this talk. Naturally, it was not a case of physicists saying “okay, tomorrow I’ll quit working on the foundations of quantum mechanics and start trying to improve quantum dot solar cells.” It’s more about getting them to see that huge problems are looming ahead of us… and to see the huge opportunities for physicists who are willing to face these problems head-on, starting now. Work on energy technologies, the smart grid, and ‘ecotechnology’ is going to keep growing. I think a bunch of the younger folks, at least, could see this.

However, perhaps the best immediate outcome of this talk was that Lee Smolin introduced me to Manjana Milkoreit. She’s at the school of international affairs at Waterloo University, practically next door to the Perimeter Institute. She works on “climate change governance, cognition and belief systems, international security, complex systems approaches, especially threshold behavior, and the science-policy interface.”

So, she knows a lot about the all-important human and political side of climate change. Right now she’s interviewing diplomats involved in climate treaty negotiations, trying to see what they believe about climate change. And it’s very interesting!

In my next post, I’ll talk about something she pointed me to. Namely: what we can do to hold the temperature increase to 2 °C or less, given that the pledges made by various nations aren’t enough.


Milankovitch Cycles and the Earth’s Climate

13 April, 2013

Here are the slides for a talk I’m giving at the Cal State Northridge Climate Science Seminar:

Milankovitch Cycles and the Earth’s Climate.

It’s a gentle introduction to these ideas, and it presents a lot of what Blake Pollard and I have said about Milankovitch cycles, in a condensed way. Of course when I give the talk, I’ll add more words, especially about the different famous ‘puzzles’.

If you have any corrections, please let me know!

I’m eager to visit Cal State Northridge and especially David Klein in their math department, since I’d like to incorporate some climate science in our math curriculum the way they’ve done there.


Geoengineering Report

11 March, 2013

I think we should start serious research on geoengineering schemes, including actual experiments, not just calculations and simulations. I think we should do this with an open mind about whether we’ll decide that these schemes are good ideas or bad. Either way, we need to learn more about them. Simultaneously, we need an intelligent, well-informed debate about the many ethical, legal and political aspects.

Many express the fear that merely researching geoengineering schemes will automatically legitimate them, however hare-brained they are. There’s some merit to that fear. But I suspect that public opinion on geoengineering will suddenly tip from “unthinkable!” to “let’s do it now!” as soon as global warming becomes perceived as a real and present threat. This is especially true because oil, coal and gas companies have a big interest in finding solutions to global warming that don’t make them stop digging.

So if we don’t learn more about geoengineering schemes, and we start getting heat waves that threaten widespread famine, we should not be surprised if some big government goes it alone and starts doing something cheap and easy like putting tons of sulfur into the upper atmosphere… even if it’s been inadequately researched.

It’s hard to imagine a more controversial topic. But I think there’s one thing most of us should be able to agree on: we should pay attention to what governments are doing about geoengineering! So, let me quote a bit of this report prepared for the US Congress:

• Kelsi Bracmort and Richard K. Lattanzio, Geoengineering: Governance and Technology Policy, CRS Report for Congress, Congressional Research Service, 2 January 2013.

Kelsi Bracmort is a specialist in agricultural conservation and natural resources policy, and Richard K. Lattanzio is an analyst in environmental policy.

I will delete references to footnotes, since they’re huge and I’m too lazy to include them all here. So, go to the original text for those!

Introduction

Climate change has received considerable policy attention in the past several years both internationally and within the United States. A major report released by the Intergovernmental Panel on Climate Change (IPCC) in 2007 found widespread evidence of climate warming, and many are concerned that climate change may be severe and rapid with potentially catastrophic consequences for humans and the functioning of ecosystems. The National Academies maintains that the climate change challenge is unlikely to be solved with any single strategy or by the people of any single country.

Policy efforts to address climate change use a variety of methods, frequently including mitigation and adaptation. Mitigation is the reduction of the principal greenhouse gas (GHG) carbon dioxide (CO2) and other GHGs. Carbon dioxide is the dominant greenhouse gas emitted naturally through the carbon cycle and through human activities like the burning of fossil fuels. Other commonly discussed GHGs include methane, nitrous oxide, hydrofluorocarbons, perfluorocarbons, and sulfur hexafluoride. Adaptation seeks to improve an individual’s or institution’s ability to cope with or avoid harmful impacts of climate change, and to take advantage of potential beneficial ones.

Some observers are concerned that current mitigation and adaptation strategies may not prevent change quickly enough to avoid extreme climate disruptions. Geoengineering has been suggested by some as a timely additional method to mitigation and adaptation that could be included in climate change policy efforts. Geoengineering technologies, applied to the climate, aim to achieve large-scale and deliberate modifications of the Earth’s energy balance in order to reduce temperatures and counteract anthropogenic (i.e., human-made) climate change; these climate modifications would not be limited by country boundaries. As an unproven concept, geoengineering raises substantial environmental and ethical concerns for some observers. Others respond that the uncertainties of geoengineering may only be resolved through further scientific and technical examination.

Proposed geoengineering technologies vary greatly in terms of their technological characteristics and possible consequences. They are generally classified in two main groups:

• Solar radiation management (SRM) method: technologies that would increase the reflectivity, or albedo, of the Earth’s atmosphere or surface, and

• Carbon dioxide removal (CDR) method: technologies or practices that would remove CO2 and other GHGs from the atmosphere.

Much of the geoengineering technology discussion centers on SRM methods (e.g., enhanced albedo, aerosol injection). SRM methods could be deployed relatively quickly if necessary, and their impact on the climate would be more immediate than that of CDR methods. Because SRM methods do not reduce GHG from the atmosphere, global warming could resume at a rapid pace if a deployed SRM method fails or is terminated at any time. At least one relatively simple SRM method is already being deployed with government assistance. [Enhanced albedo is one SRM effort currently being undertaken by the U.S. Environmental Protection Agency. See the Enhanced Albedo section for more information.] Other proposed SRM methods are at the conceptualization stage. CDR methods include afforestation, ocean fertilization, and the use of biomass to capture and store carbon.

The 112th Congress did not take any legislative action on geoengineering. In 2009, the House Science and Technology Committee of the 111th Congress held hearings on geoengineering that examined the “potential environmental risks and benefits of various proposals, associated domestic and international governance issues, evaluation mechanisms and criteria, research and development (R&D) needs, and economic rationales supporting the deployment of geoengineering activities.”

Some foreign governments, including the United Kingdom’s, as well as scientists from Germany and India, have begun considering engaging in the research or deployment of geoengineering technologies because of concern over the slow progress of emissions reductions, the uncertainties of climate sensitivity, the possible existence of climate thresholds (or “tipping points”), and the political, social, and economic impact of pursuing aggressive GHG mitigation strategies.

Congressional interest in geoengineering has focused primarily on whether geoengineering is a realistic, effective, and appropriate tool for the United States to use to address climate change. However, if geoengineering technologies are deployed by the United States, another government, or a private entity, several new concerns are likely to arise related to government support for, and oversight of, geoengineering as well as the transboundary and long-term effects of geoengineering. Such was the case in the summer of 2012, when an American citizen conducted a geoengineering experiment, specifically ocean fertilization, off the west coast of Canada that some say violated two international conventions.

This report is intended as a primer on the policy issues, science, and governance of geoengineering technologies. The report will first set the policy parameters under which geoengineering technologies may be considered. It will then describe selected technologies in detail and discuss their status. The third section provides a discussion of possible approaches to governmental involvement in, and oversight of, geoengineering, including a summary of domestic and international instruments and institutions that may affect geoengineering projects.

Geoengineering governance

Geoengineering technologies aim to modify the Earth’s energy balance in order to reduce temperatures and counteract anthropogenic climate change through large-scale and deliberate modifications. Implementation of some of the technologies may be controlled locally, while other technologies may require global input on implementation. Additionally, whether a technology can be controlled or not once implemented differs by technology type. Little research has been done on most geoengineering methods, and no major directed research programs are in place. Peer reviewed literature is scant, and deployment of the technology—either through controlled field tests or commercial enterprise—has been minimal.

Most interested observers agree that more research would be required to test the feasibility, effectiveness, cost, social and environmental impacts, and the possible unintended consequences of geoengineering before deployment; others reject exploration of the options as too risky. The uncertainties have led some policymakers to consider the need and the role for governmental oversight to guide research in the short term and to oversee potential deployment in the long term. Such governance structures, both domestic and international, could either support or constrain geoengineering activities, depending on the decisions of policymakers. As both technological development and policy considerations for geoengineering are in their early stages, several questions of governance remain in play:

• What risk factors and policy considerations enter into the debate over geoengineering activities and government oversight?

• At what point, if ever, should there be government oversight of geoengineering activities?

• If there is government oversight, what form should it take?

• If there is government oversight, who should be responsible for it?

• If there is publicly funded research and development, what should it cover and which disciplines should be engaged in it?

Risk Factors

As a new and emerging set of technologies potentially able to address climate change, geoengineering possesses many risk factors that must be taken into policy considerations. From a research perspective, the risk of geoengineering activities most often rests in the uncertainties of the new technology (i.e., the risk of failure, accident, or unintended consequences). However, many observers believe that the greater risk in geoengineering activities may lie in the social, ethical, legal, and political uncertainties associated with deployment. Given these risks, there is an argument that appropriate mechanisms for government oversight should be established before the federal government and its agencies take steps to promote geoengineering technologies and before new geoengineering projects are commenced. Yet, the uncertainty behind the technologies makes it unclear which methods, if any, may ever mature to the point of being deemed sufficiently effective, affordable, safe, and timely as to warrant potential deployment.

Some of the more significant risk factors associated with geoengineering are as follows:

Technology Control Dilemma. An analytical impasse inherent in all emerging technologies is that potential risks may be foreseen in the design phase but can only be proven and resolved through actual research, development, and demonstration. Ideally, appropriate safeguards are put in place during the early stages of conceptualization and development, but anticipating the evolution of a new technology can be difficult. By the time a technology is widely deployed, it may be impossible to build desirable oversight and risk management provisions without major disruptions to established interests. Flexibility is often required to both support investigative research and constrain potentially harmful deployment.

Reversibility. Risk mitigation relies on the ability to cease a technology program and terminate its adverse effects in a short period of time. In principle, all geoengineering options could be abandoned on short notice, with either an instant cessation of direct climate effects or a small time lag after abandonment.

However, the issue of reversibility applies to more than just the technologies themselves. Given the importance of internal adjustments and feedbacks in the climate system—still imperfectly understood—it is unlikely that all secondary effects from large-scale deployment would end immediately. Also, choices made regarding geoengineering methods may influence other social, economic, and technological choices regarding climate science. Advancing geoengineering options in lieu of effectively mitigating GHG emissions, for example, could result in a number of adverse effects, including ocean acidification, stresses on biodiversity, climate sensitivity shocks, and other irreversible consequences. Further, investing financially in the physical infrastructure to support geoengineering may create a strong economic resistance to reversing research and deployment activities.

Encapsulation. Risk mitigation also relies on whether a technology program is modular and contained or whether it involves the release of materials into the wider environment. The issue can be framed in the context of pollution (i.e., encapsulated technologies are often viewed as more “ethical” in that they are seen as non-polluting). Several geoengineering technologies are demonstrably non-encapsulated, and their release and deployment into the wider environment may lead to technical uncertainties, impacts on non-participants, and complex policy choices. But encapsulated technologies may still have localized environmental impacts, depending on the nature, size, and location of the application. The need for regulatory action may arise as much from the indirect impacts of activities on agro-forestry, species, and habitat as from the direct impacts of released materials in atmospheric or oceanic ecosystems.

Commercial Involvement. The role of private-sector engagement in the development and promotion of geoengineering may be debated. Commercial involvement, including competition, may be positive in that it mobilizes innovation and capital investment, which could lead to the development of more effective and less costly technologies at a faster rate than in the public sector.

However, commercial involvement could bypass or neglect social, economic, and environmental risk assessments in favor of what one commentator refers to as “irresponsible entrepreneurial behavior.” Private-sector engagement would likely require some form of public subsidies or GHG emission pricing to encourage investment, as well as additional considerations including ownership models, intellectual property rights, and trade and transfer mechanisms for the dissemination of the technologies.

Public Engagement. The consequences of geoengineering—including both benefits and risks discussed above—could affect people and communities across the world. Public attitudes toward geoengineering, and public engagement in the formation, development, and execution of proposed governance, could have a critical bearing on the future of the technologies. Perceptions of risks, levels of trust, transparency of actions, provisions for liabilities and compensation, and economies of investment could play a significant role in the political feasibility of geoengineering. Public acceptance may require a wider dialogue between scientists, policymakers, and the public.


Tipping Points in Climate Systems

4 March, 2013

If you’ve just recently gotten a PhD, you can get paid to spend a week this summer studying tipping points in climate systems!

They’re having a program on this at ICERM: the Institute for Computational and Experimental Research in Mathematics, in Providence, Rhode Island. It’s happening from July 15th to 19th, 2013. But you have to apply soon, by the 15th of March!

For details, see below. But first, a word about tipping points… in case you haven’t thought about them much.

Tipping Points

A tipping point occurs when adjusting some parameter of a system causes it to transition abruptly to a new state. The term refers to a well-known example: as you push more and more on a glass of water, it gradually leans over further until you reach the point where it suddenly falls over. Another familiar example is pushing on a light switch until it ‘flips’ and the light turns on.

In the Earth’s climate, a number of tipping points could cause abrupt climate change:



They include:

• Loss of Arctic sea ice.
• Melting of the Greenland ice sheet.
• Melting of the West Antarctic ice sheet.
• Permafrost and tundra loss, leading to the release of methane.
• Boreal forest dieback.
• Amazon rainforest dieback.
• West African monsoon shift.
• Indian monsoon chaotic multistability.
• Change in El Niño amplitude or frequency.
• Change in formation of Atlantic deep water.
• Change in the formation of Antarctic bottom water.

You can read about them here:

• T. M. Lenton, H. Held, E. Kriegler, J. W. Hall, W. Lucht, S. Rahmstorf, and H. J. Schellnhuber, Tipping elements in the Earth’s climate system, Proceedings of the National Academy of Sciences 105 (2008), 1786–1793.

Mathematicians are getting interested in how to predict when we’ll hit a tipping point:

• Peter Ashwin, Sebastian Wieczorek and Renato Vitolo, Tipping points in open systems: bifurcation, noise-induced and rate-dependent examples in the climate system, Phil. Trans. Roy. Soc. A 370 (2012), 1166–1184.

Abstract: Tipping points associated with bifurcations (B-tipping) or induced by noise (N-tipping) are recognized mechanisms that may potentially lead to sudden climate change. We focus here on a novel class of tipping points, where a sufficiently rapid change to an input or parameter of a system may cause the system to “tip” or move away from a branch of attractors. Such rate-dependent tipping, or R-tipping, need not be associated with either bifurcations or noise. We present an example of all three types of tipping in a simple global energy balance model of the climate system, illustrating the possibility of dangerous rates of change even in the absence of noise and of bifurcations in the underlying quasi-static system.

We can test out these theories using actual data:

• J. Thompson and J. Sieber, Predicting climate tipping points as a noisy bifurcation: a review, International Journal of Bifurcation and Chaos 21 (2011), 399–423.

Abstract: There is currently much interest in examining climatic tipping points, to see if it is feasible to predict them in advance. Using techniques from bifurcation theory, recent work looks for a slowing down of the intrinsic transient responses, which is predicted to occur before an instability is encountered. This is done, for example, by determining the short-term auto-correlation coefficient ARC in a sliding window of the time series: this stability coefficient should increase to unity at tipping. Such studies have been made both on climatic computer models and on real paleoclimate data preceding ancient tipping events. The latter employ re-constituted time-series provided by ice cores, sediments, etc, and seek to establish whether the actual tipping could have been accurately predicted in advance. One such example is the end of the Younger Dryas event, about 11,500 years ago, when the Arctic warmed by 7 C in 50 years. A second gives an excellent prediction for the end of ’greenhouse’ Earth about 34 million years ago when the climate tipped from a tropical state into an icehouse state, using data from tropical Pacific sediment cores. This prediction science is very young, but some encouraging results are already being obtained. Future analyses will clearly need to embrace both real data from improved monitoring instruments, and simulation data generated from increasingly sophisticated predictive models.
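To make the idea concrete, here is a minimal sketch (not the authors’ code) of the indicator described in this abstract: the lag-1 autocorrelation computed in a sliding window, which is expected to creep towards 1 as a tipping point is approached. The window length is an arbitrary choice.

```python
import numpy as np

def sliding_lag1_autocorrelation(x, window=100):
    """Lag-1 autocorrelation of x computed in each sliding window of the given length."""
    x = np.asarray(x, dtype=float)
    arc = []
    for start in range(len(x) - window + 1):
        w = x[start:start + window] - x[start:start + window].mean()
        arc.append(np.dot(w[:-1], w[1:]) / np.dot(w, w))   # simple lag-1 correlation estimate
    return np.array(arc)

# usage on any evenly spaced record (ice-core proxy, sediment data, model output):
#   arc = sliding_lag1_autocorrelation(record, window=100)
#   ...then look for a rising trend towards 1 before the suspected tipping point.
```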

The next paper is interesting because it studies tipping points experimentally by manipulating a lake. Doing this lets us study another important question: when can you push a system back to its original state after it’s already tipped?

• S. R. Carpenter, J. J. Cole, M. L. Pace, R. Batt, W. A. Brock, T. Cline, J. Coloso, J. R. Hodgson, J. F. Kitchell, D. A. Seekell, L. Smith, and B. Weidel, Early warnings of regime shifts: a whole-ecosystem experiment, Science 332 (2011), 1079–1082.

Abstract: Catastrophic ecological regime shifts may be announced in advance by statistical early-warning signals such as slowing return rates from perturbation and rising variance. The theoretical background for these indicators is rich but real-world tests are rare, especially for whole ecosystems. We tested the hypothesis that these statistics would be early-warning signals for an experimentally induced regime shift in an aquatic food web. We gradually added top predators to a lake over three years to destabilize its food web. An adjacent lake was monitored simultaneously as a reference ecosystem. Warning signals of a regime shift were evident in the manipulated lake during reorganization of the food web more than a year before the food web transition was complete, corroborating theory for leading indicators of ecological regime shifts.

IdeaLab program

If you’re seriously interested in this stuff, and you recently got a PhD, you should apply to IdeaLab 2013, which is a program happening at ICERM from the 15th to the 19th of July, 2013. Here’s the deal:

The Idea-Lab invites 20 early career researchers (postdoctoral candidates and assistant professors) to ICERM for a week during the summer. The program will start with brief participant presentations on their research interests in order to build a common understanding of the breadth and depth of expertise. Throughout the week, organizers or visiting researchers will give comprehensive overviews of their research topics. Organizers will create smaller teams of participants who will discuss, in depth, these research questions, obstacles, and possible solutions. At the end of the week, the teams will prepare presentations on the problems at hand and ideas for solutions. These will be shared with a broad audience including invited program officers from funding agencies.

Two Research Project Topics:

• Tipping Points in Climate Systems (MPE2013 program)

• Towards Efficient Homomorphic Encryption

IdeaLab Funding Includes:

• Travel support

• Six nights accommodations

• Meal allowance

The Application Process:

IdeaLab applicants should be at an early stage of their post-PhD career. Applications for the 2013 IdeaLab are being accepted through MathPrograms.org.

Application materials will be reviewed beginning March 15, 2013.


Successful Predictions of Climate Science

5 February, 2013

guest post by Steve Easterbrook

In December I went to the 2012 American Geophysical Union Fall Meeting. I’d like to tell you about the Tyndall lecture given by Ray Pierrehumbert, on “Successful Predictions”. You can watch the whole talk here:

But let me give you a summary, with some references.

Ray’s talk spanned 120 years of research on climate change. The key message is that science is a long, slow process of discovery, in which theories (and their predictions) tend to emerge long before they can be tested. We often learn just as much from the predictions that turned out to be wrong as we do from those that were right. But successful predictions eventually form the body of knowledge that we can be sure about, not just because they were successful, but because they build up into a coherent explanation of multiple lines of evidence.

Here are the successful predictions:

1896: Svante Arrhenius correctly predicts that increases in fossil fuel emissions would cause the earth to warm. At that time, much of the theory of how atmospheric heat transfer works was missing, but nevertheless, he got a lot of the process right. He was right that surface temperature is determined by the balance between incoming solar energy and outgoing infrared radiation, and that the balance that matters is the radiation budget at the top of the atmosphere. He knew that the absorption of infrared radiation was due to CO2 and water vapour, and he also knew that CO2 is a forcing while water vapour is a feedback. He understood the logarithmic relationship between CO2 concentrations in the atmosphere and surface temperature. However, he got a few things wrong too. His attempt to quantify the enhanced greenhouse effect was incorrect, because he worked with a 1-layer model of the atmosphere, which cannot capture the competition between water vapour and CO2, and doesn’t account for the role of convection in determining air temperatures. His calculations were incorrect because he had the wrong absorption characteristics of greenhouse gases. And he thought the problem would be centuries away, because he didn’t imagine an exponential growth in use of fossil fuels.

Arrhenius, as we now know, was way ahead of his time. Nobody really considered his work again for nearly 50 years, a period we might think of as the dark ages of climate science. The story perfectly illustrates Paul Hoffman’s tongue-in-cheek depiction of how scientific discoveries work: someone formulates the theory, other scientists then reject it, ignore it for years, eventually rediscover it, and finally accept it. These “dark ages” weren’t really dark, of course—much good work was done in this period. For example:

• 1900: Frank Very worked out the radiation balance, and hence the temperature, of the moon. His results were confirmed by Pettit and Nicholson in 1930.

• 1902-14: Arthur Schuster and Karl Schwarzschild used a 2-layer radiative-convective model to explain the structure of the sun.

• 1907: Robert Emden realized that a similar radiative-convective model could be applied to planets, and Gerard Kuiper and others applied this to astronomical observations of planetary atmospheres.

This work established the standard radiative-convective model of atmospheric heat transfer. This treats the atmosphere as two layers; in the lower layer, convection is the main heat transport, while in the upper layer, it is radiation. A planet’s outgoing radiation comes from this upper layer. However, up until the early 1930’s, there was no discussion in the literature of the role of carbon dioxide, despite occasional discussion of climate cycles. In 1928, George Simpson published a memoir on atmospheric radiation, which assumed water vapour was the only greenhouse gas, even though, as Richardson pointed out in a comment, there was evidence that even dry air absorbed infrared radiation.

1938: Guy Callendar is the first to link observed rises in CO2 concentrations with observed rises in surface temperatures. But Callendar failed to revive interest in Arrhenius's work, and he made a number of mistakes in things that Arrhenius had gotten right. His calculations focussed on the radiation balance at the surface, whereas Arrhenius had (correctly) focussed on the balance at the top of the atmosphere. He also neglected convective processes, a problem astrophysicists had already solved with the radiative-convective model. In the end, Callendar's work was ignored for another two decades.

1956: Gilbert Plass correctly predicts a depletion of outgoing radiation in the 15 micron band, due to CO2 absorption. This depletion was eventually confirmed by satellite measurements. Plass was one of the first since Callendar to revisit Arrhenius's work; however, his calculations of climate sensitivity to CO2 were also wrong because, like Callendar, he focussed on the surface radiation budget rather than the top of the atmosphere.

1961-2: Carl Sagan correctly predicts a very thick greenhouse atmosphere on Venus, as the only way to explain the very high observed temperatures. His calculations showed that greenhouse gases must absorb around 99.5% of the outgoing surface radiation. The composition of Venus's atmosphere was confirmed by NASA's Venus probes in 1967-70.

1959: Bert Bolin and Erik Eriksson correctly predict the exponential increase in CO2 concentrations in the atmosphere as a result of rising fossil fuel use. At that time they did not have good data for atmospheric concentrations prior to 1958, so their hindcast back to 1900 was wrong; despite this, their projection of changes forward to 2000 was remarkably good.

1967: Suki Manabe and Dick Wetherald correctly predict that warming in the lower atmosphere would be accompanied by stratospheric cooling. They had built the first completely correct radiative-convective implementation of the standard model applied to Earth, and used it to calculate a +2 °C equilibrium warming for doubling CO2, including the water vapour feedback, assuming constant relative humidity. The stratospheric cooling was confirmed in 2011 by Gillett et al.

1975: Suki Manabe and Dick Wetherald correctly predict that the surface warming would be much greater in the polar regions, and that there would be some upper-troposphere amplification in the tropics. This was the first coupled general circulation model (GCM), with an idealized geography. The model computed changes in humidity, rather than assuming them as earlier models had done, and it showed polar amplification along with some vertical amplification in the tropics. The polar amplification was measured and confirmed by Serreze et al. in 2009. However, the height gradient in the tropics hasn't yet been confirmed (nor has it yet been falsified; see Thorne 2008 for an analysis).

1989: Ron Stouffer et al. correctly predict that the land surface will warm more than the ocean surface, and that warming of the southern ocean would be temporarily suppressed due to the slower ocean heat uptake. These predictions have proven correct, although the models failed to predict the strong warming we've seen over the Antarctic Peninsula.

Of course, scientists often get it wrong:

1900: Knut Ångström incorrectly predicts that increasing levels of CO2 would have no effect on climate, because he thought the effect was already saturated. His laboratory experiments weren't accurate enough to detect the actual absorption properties, and even if they had been, the vertical structure of the atmosphere would still allow the greenhouse effect to grow as CO2 is added.

1971: Rasool and Schneider incorrectly predict that atmospheric cooling due to aerosols would outweigh the warming from CO2. However, their model had some important weaknesses, and was shown to be wrong by 1975. Rasool and Schneider fixed their model and moved on. Good scientists acknowledge their mistakes.

1993: Richard Lindzen incorrectly predicts that warming will dry the troposphere, according to his theory that a negative water vapour feedback keeps climate sensitivity to CO2 really low. Lindzen's work attempted to resolve a long-standing conundrum in climate science. In 1981, the CLIMAP project reconstructed temperatures at the Last Glacial Maximum, and showed very little tropical cooling. This was inconsistent with the general circulation models (GCMs), which predicted substantial cooling in the tropics (e.g. see Broccoli & Manabe 1987). So everyone thought the models must be wrong. Lindzen attempted to explain the CLIMAP results via a negative water vapour feedback. But then the CLIMAP results started to unravel, and newer proxies showed that it was the CLIMAP data, and Lindzen's theory, that were wrong, rather than the models. Unfortunately, bad scientists don't acknowledge their mistakes; Lindzen keeps inventing ever more arcane theories to avoid admitting he was wrong.

1995: John Christy and Roy Spencer incorrectly calculate that the lower troposphere is cooling, rather than warming. Again, this turned out to be wrong, once errors in satellite data were corrected.

In science, it’s okay to be wrong, because exploring why something is wrong usually advances the science. But sometimes, theories are published that are so bad, they are not even wrong:

2007: Courtillot et al. predict a connection between cosmic rays and climate change. But they couldn't even get the sign of the effect consistent across the paper. You can't falsify a theory that's incoherent! Scientists label this kind of thing 'not even wrong'.

Finally, there are, of course, some things that scientists didn't predict. The most important of these is probably the multi-decadal fluctuations in the warming signal. If you calculate the radiative effect of all greenhouse gases, and the delay due to ocean heating, you still can't reproduce the flat period in the temperature trend that was observed from 1950 to 1970. While this wasn't predicted, we ought to be able to explain it after the fact. Currently, there are two competing explanations. The first is that ocean heat uptake itself has decadal fluctuations, although models don't show this; if climate sensitivity is at the low end of the likely range (say 2 °C per doubling of CO2), we could be seeing a decadal fluctuation around a slower warming signal. The other explanation is that aerosols took some of the warming away from greenhouse gases. This explanation requires a higher value for climate sensitivity (say around 3 °C), with a significant fraction of the warming counteracted by an aerosol cooling effect. If this explanation is correct, it's a much more frightening world, because it implies much greater warming as CO2 levels continue to increase. The truth is probably somewhere between these two. (See Armour & Roe 2011 for a discussion.)
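A simple energy-balance relation shows why these two explanations trade off against each other (a schematic illustration with a standard value for the doubling forcing; it ignores the lag from ocean heat uptake and is not taken from the talk):

$$ \Delta T \;\approx\; \frac{S}{F_{2\times}} \left(\Delta F_{\mathrm{GHG}} + \Delta F_{\mathrm{aerosol}}\right), \qquad F_{2\times} \approx 3.7\ \mathrm{W/m^2}, $$

where $F_{2\times}$ is the forcing from a doubling of CO2 and the aerosol term is negative. The same observed warming $\Delta T$ can be matched either with a low sensitivity $S$ and a small aerosol offset, or with a higher $S$ and a larger one; the second case implies much more warming still to come once aerosol emissions decline while CO2 keeps rising.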

To conclude, climate scientists have made many predictions about the effect of increasing greenhouse gases that have proven to be correct. They have earned a right to be listened to, but is anyone actually listening? If we fail to act upon the science, will future archaeologists wade through AGU abstracts and try to figure out what went wrong? There are signs of hope—in his re-election acceptance speech, President Obama revived his pledge to take action, saying “We want our children to live in an America that isn’t threatened by the destructive power of a warming planet.”

