Global Climate Change Negotiations

28 October, 2013

 

There were many interesting talks at the Interdisciplinary Climate Change Workshop last week—too many for me to describe them all in detail. But I really must describe the talks by Radoslav Dimitrov. They were full of important things I didn’t know. Some are quite promising.

Radoslav S. Dimitrov is a professor at the Department of Political Science at Western University. What’s interesting is that he’s also been a delegate for the European Union at the UN climate change negotiations since 1990! His work documents the history of climate negotiations from behind closed doors.

Here are some things he said:

• In international diplomacy, there is no questioning the reality and importance of human-caused climate change. The question is just what to do about it.

• Governments go through every line of the IPCC reports twice. They cannot add anything to what the scientists have written, but they can delete things. All governments have veto power. This makes the IPCC reports more conservative than they otherwise would be: “considerably diluted”.

• The climate change negotiations have surprised political scientists in many ways:

1) There is substantial cooperation even without the USA taking the lead.

2) Developing countries are accepting obligations, with many overcomplying.

3) There has been action by many countries and subnational entities without any treaty obligations.

4) There have been repeated failures of negotiation despite policy readiness.

• In 2011, China and Saudi Arabia rejected the final agreement at Durban as inadequate. Only Canada, the United States and Australia had been resisting stronger action on climate change. Canada abandoned the Kyoto Protocol the day after the collapse of negotiations at Durban. They publicly blamed China, India and Brazil, even though Brazil had accepted dramatic emissions cuts and China had, for the first time, accepted limits on emissions. Only India had taken a “hardline” attitude. Publicly blaming some other country for the collapse of negotiations is a no-no in diplomacy, so the Chinese took this move by Canada as a slap in the face. In return, they blamed Canada and “the West” for the collapse of Durban.

• Dimitrov is studying the role of persuasion in diplomacy, recording and analyzing hundreds of hours of discussions. Countries try to change each other’s minds, not just behavior.

• The global elite do not see climate change negotiations as an environmental issue. Instead, they feel they are “negotiating the future economy”. They focus on the negative economic consequences of inaction, and the economic benefits of climate action.

• In particular, the EU has managed to persuade many countries that climate change is worth tackling now. They do this with economic, not environmental arguments. For example, they argue that countries who take the initiative will have an advantage in future employment, getting most of the “green jobs”. Results include China’s latest 5-year plan, which some have called “the most progressive legislation in history”, and also Japan’s plan for a 60-80% reduction of carbon emissions. The EU itself also expects big returns on investment in climate change.

I apologize for any oversimplifications or downright errors in my notes here.

References

You can see some slides for Dimitrov’s talks here:

• Radoslav S. Dimitrov, A climate of change.

For more, try reading this article, which is free online:

• Radoslav S. Dimitrov, Inside Copenhagen: the state of climate governance, Global Environmental Politics 10 (2010), 18–24.

and these more recent book chapters, which are apparently not as easy to get:

• Radoslav S. Dimitrov, Environmental diplomacy, in Handbook of Global Environmental Politics, edited by Paul Harris, Routledge, forthcoming as of 2013.

• Radoslav S. Dimitrov, International negotiations, in Handbook of Global Climate and Environmental Policy, edited by Robert Falkner, Wiley-Blackwell, forthcoming as of 2013.

• Radoslav S. Dimitrov, Persuasion in world politics: The UN climate change negotiations, in Handbook of Global Environmental Politics, edited by Peter Dauvergne, Edward Elgar Publishing, Cheltenham, UK, 2012.

• Radoslav S. Dimitrov, American prosperity and the high politics of climate change, in Prospects for a Post-American World, edited by Sabrina Hoque and Sean Clark, University of Toronto Press, Toronto, 2012.


What To Do About Climate Change?

23 October, 2013

Here are the slides for my second talk in the Interdisciplinary Climate Change Workshop at the Balsillie School of International Affairs:

What To Do About Climate Change?

Like the first, it’s just 15 minutes long, so it’s very terse.

I start by noting that slowing the rate of carbon burning won’t stop global warming: most carbon dioxide stays in the air for over a century, though individual molecules come and go. Global warming is like a ratchet.

So, we will:

1) leave fossil fuels unburnt,

2) sequester carbon,

3) actively cool the Earth, and/or

4) live with a hotter climate.

Of course we may do a mix of these… though we’ll certainly do some of option 4), and we might do only this one. My goal in this short talk is not mainly to argue for a particular mix! I mainly want to present some information about the various options.

I do not say anything about the best ways to do option 4); I merely provide some arguments that we’ll wind up doing a lot of this one… because I’m afraid some of the participants in the workshop may be in denial about that.

I also argue that we should start doing research on option 3), because like it or not, I think people are going to become very interested in geoengineering, and without enough solid information about it, people are likely to make bad mistakes: for example, diving into ambitious projects out of desperation.

As usual, if you click on a phrase in blue in this talk, you can get more information.

I want to really thank everyone associated with Azimuth for helping find and compile the information used in this talk! It’s really been a team effort!


What is Climate Change?

21 October, 2013

Here are the slides for a 15-minute talk I’m giving on Friday for the Interdisciplinary Climate Change Workshop at the Balsillie School of International Affairs:

What is Climate Change?

This will be the first talk of the workshop. Many participants are focused on diplomacy and economics. None are officially biologists or ecologists. So, I want to set the stage with a broad perspective that fits humans into the biosphere as a whole.

I claim that climate change is just one aspect of something bigger: a new geological epoch, the Anthropocene.

I start with evidence that human civilization is having such a big impact on the biosphere that we’re entering a new geological epoch.

Then I point out what this implies. Climate change is not an isolated ‘problem’ of the sort routinely ‘solved’ by existing human institutions. It is part of a shift from the exponential growth phase of human impact on the biosphere to a new, uncharted phase.

In this new phase, institutions and attitudes will change dramatically, like it or not:

Before we could treat ‘nature’ as distinct from ‘civilization’. Now, there is no nature separate from civilization.

Before, we might imagine ‘economic growth’ to be an almost unalloyed good, with many externalities disregarded. Now, many forms of growth have reached the point where they push the biosphere toward tipping points.

In a separate talk I’ll say a bit about ‘what we can do about it’. So, nothing about that here. You can click on words in blue to see sources for the information.


What Is Climate Change and What To Do About It?

13 October, 2013

Soon I’m going to a workshop on Interdisciplinary Perspectives on Climate Change at the Balsillie School of International Affairs, or BSIA, in Waterloo, Canada. It’s organized by Simon Dalby, who has a chair in the political economy of climate change at this school.

The plan is to gather people from many different disciplines to provide views on two questions: what is climate change, and what to do about it?

We’re giving really short talks, leaving time for discussion. But before I get there I need to write a 2000-word paper on my view of climate change—‘as a mathematician’, supposedly. That’s where I want your help. I think I know roughly what I want to say, and I’ll post some drafts here as soon as I write them. But I’d like to get your ideas, too.

For starters, the program looks like this:

Friday 25 October: What is Climate Change?

9:00 – 9:30 Introductory remarks
John Ravenhill, Director, BSIA
Dan Scott, University of Waterloo, Interdisciplinary Centre for Climate Change.
Simon Dalby, BSIA

9:30 – 10:45 Presentation Session 1
Chair: Sara Koopman, BSIA
John Baez, University of California (Mathematics)
Jean Andrey, University of Waterloo (Geography)
Byron Williston, Wilfrid Laurier University (Philosophy)

11:15 – 12:30 Presentation Session 2
Chair: Marisa Beck, BSIA
Chris Russill, Carleton University (Communications)
Mike Hulme, King’s College London (Climate Science)
Radoslav Dimitrov, Western University (Political Science)

1:30 – 2:30 Presentation Session 3
Chair: Matt Gaudreau, BSIA
Jatin Nathwani, University of Waterloo (Engineering)
Sarah Burch, University of Waterloo (New Social Media and Education)

3:00 – 5:00 Roundtable 1 (all presenters)
Chair: Lucie Edwards, BSIA
Discussant: Vanessa Schweizer, University of Waterloo

5:00 – 5:15 Wrap-up
Dan Scott and Simon Dalby

Saturday 26 October: What Should We Do About It?

9:00 – 10:15 Presentation Session 4
Chair: Matt Gaudreau, BSIA
Radoslav Dimitrov, Western University (Political Science)
Mike Hulme, King’s College London (Climate Science)
Jean Andrey, University of Waterloo (Geography)

10:45 – 12:00 Presentation Session 5
Chair: Lucie Edwards, BSIA
Jatin Nathwani, University of Waterloo (Engineering)
Sarah Burch, University of Waterloo (Environmental Education)
Chris Russill, Carleton University (Communications)

1:00 – 2:00 Presentation Session 6
Chair: Marisa Beck, BSIA
Byron Williston, Wilfrid Laurier University (Philosophy)
John Baez, University of California (Mathematics)

2:30 – 4:30 Roundtable 2 (all presenters)
Chair: Sara Koopman, BSIA
Discussant: James Orbinski, CIGI Chair in Global Health

4:30 – 5:00 Wrap-up
Dan Scott and Simon Dalby

Some thoughts

Though I’m playing a designated role in this workshop—the “mathematician”—I don’t think it makes sense to focus on mathematical models of climate change, or the math projects I’m working on now.

I will probably seem strange and “mathematical” enough just saying what I think about climate change! Most of the other people come from fields quite different than mine: they seem much more in tune with the nitty-gritty details of politics and economics. So, perhaps my proper role is to mention some facts and numbers that they probably know already, to remind them of the magnitude, scope and urgency of the problem.

It may also be useful to emphasize that with very high probability, we won’t do enough soon enough, so we need to study a series of fallback positions, not just an ‘optimal’ response to climate change. And these fallback positions should go as far as thinking about what happens if we burn all the available carbon. What to do then?

When I talked about this workshop with the mathematician Sasha Beilinson, he wickedly suggested that the best solution to global warming might be a global economic collapse… and he asked if anyone was looking into this.

Of course this solution comes along with huge problems, and anyone who actually advocates it is branded as a nut and excluded from the ‘serious’ discourse on global warming. But the funny thing is, a global economic collapse could be just as probable as some more optimistic scenarios, for example those that require a massive outbreak of altruism worldwide.

So it’s worth thinking about economic collapse scenarios, and ‘burn carbon until there’s none left’ scenarios, even if we don’t want them. And these are the sort of things that only the mathematician in the room may be brave—or foolish—enough to mention.

What else?


The North Pole Was, Briefly, a Lake

22 August, 2013


It happened over a month ago. The picture above was taken on 22 July 2013. It shows a buoy anchored near a remote webcam at the North Pole, surrounded by a lake of melted ice:

• Becky Oskin, North Pole now a lake, LiveScience, 23 July 2013.

Instead of snow and ice whirling on the wind, a foot-deep aquamarine lake now sloshes around a webcam stationed at the North Pole. The meltwater lake started forming July 13, following two weeks of warm weather in the high Arctic. In early July, temperatures were 2 to 5 degrees Fahrenheit (1 to 3 degrees Celsius) higher than average over much of the Arctic Ocean, according to the National Snow & Ice Data Center.

Meltwater ponds sprout more easily on young, thin ice, which now accounts for more than half of the Arctic’s sea ice. The ponds link up across the smooth surface of the ice, creating a network that traps heat from the sun. Thick and wrinkly multi-year ice, which has survived more than one freeze-thaw season, is less likely to sport a polka-dot network of ponds because of its rough, uneven surface.

This particular meltwater pond was “just over 2 feet deep and a few hundred feet wide”, according to this article:

• Hannah Hickey, Santa’s workshop not flooded—but lots of melting in the Arctic, 30 July 2013.

The pond drained out through cracks in the ice late July 27.

More important is the overall trend in the total sea ice volume as estimated by the Pan-Arctic Ice Ocean Modeling and Assimilation System (PIOMAS).

The trend line from 1979 to 2011 shows that Arctic sea ice is melting at an average rate of roughly 3,000 cubic kilometers per decade.

In 2010, 2011 and 2012, so much ice melted that the volume fell more than 2 standard deviations below the trend line—that’s why the jagged curve falls below the shaded region at the far right of the graph. At the end of July this year, it was just about 2 standard deviations below the trend line. The ice volume seems unlikely to break last year’s record low.

As usual, click the picture for more details.


Monte Carlo Methods in Climate Science

23 July, 2013

joint with David Tweed

One way the Azimuth Project can help save the planet is to get bright young students interested in ecology, climate science, green technology, and stuff like that. So, we are writing an article for Math Horizons, an American magazine for undergraduate math majors. This blog article is a draft of that. You can also see it in PDF form here.

We’d really like to hear your comments! There are severe limits on including more detail, since the article should be easy to read and short. So please don’t ask us to explain more stuff: we’re most interested to know if you sincerely don’t understand something, or feel that students would have trouble understanding something. For comparison, you can see sample Math Horizons articles here.

Introduction

They look placid lapping against the beach on a calm day, but the oceans are actually quite dynamic. The ocean currents act as ‘conveyor belts’, transporting heat both vertically between the water’s surface and the depths and laterally from one area of the globe to another. This effect is so significant that the temperature and precipitation patterns can change dramatically when currents do.

For example: shortly after the last ice age, northern Europe experienced a shocking change in climate from 10,800 to 9,500 BC. At the start of this period temperatures plummeted in a matter of decades. It became 7° Celsius colder, and glaciers started forming in England! The cold spell lasted for over a thousand years, but it ended as suddenly as it had begun.

Why? The most popular theory is that a huge lake in North America formed by melting glaciers burst its banks—and in a massive torrent lasting for years, the water from this lake rushed out to the northern Atlantic ocean. By floating atop the denser salt water, this fresh water blocked a major current: the Atlantic Meridional Overturning Circulation. This current brings warm water north and helps keep northern Europe warm. So, when it shut down, northern Europe was plunged into a deep freeze.

Right now global warming is causing ice sheets in Greenland to melt and release fresh water into the North Atlantic. Could this shut down the Atlantic Meridional Overturning Circulation and make the climate of Northern Europe much colder? In 2010, Keller and Urban [KU] tackled this question using a simple climate model, historical data, probability theory, and lots of computing power. Their goal was to understand the spectrum of possible futures compatible with what we know today.

Let us look at some of the ideas underlying their work.

Box models

The earth’s physical behaviour, including the climate, is far too complex to simulate from the bottom up using basic physical principles, at least for now. The most detailed models today can take days to run on very powerful computers. So to make reasonable predictions on a laptop in a tractable time-frame, geophysical modellers use some tricks.

First, it is possible to split geophysical phenomena into ‘boxes’ containing strongly related things. For example: atmospheric gases, particulate levels and clouds all affect each other strongly; likewise the heat content, currents and salinity of the oceans all interact strongly. However, the interactions between the atmosphere and the oceans are weaker, and we can approximately describe them using just a few settings, such as the amount of atmospheric CO2 entering or leaving the oceans. Clearly these interactions must be consistent—for example, the amount of CO2 leaving the atmosphere box must equal the amount entering the ocean box—but breaking a complicated system into parts lets different specialists focus on different aspects; then we can combine these parts and get an approximate model of the entire planet. The box model used by Keller and Urban is shown in Figure 1.



1. The box model used by Keller and Urban.

 
Second, it turns out that simple but effective box models can be distilled from the complicated physics in terms of forcings and feedbacks. Essentially a forcing is a measured input to the system, such as solar radiation or CO2 released by burning fossil fuels. As an analogy, consider a child on a swing: the adult’s push every so often is a forcing. Similarly a feedback describes how the current ‘box variables’ influence future ones. In the swing analogy, one feedback is how the velocity will influence the future height. Specifying feedbacks typically uses knowledge of the detailed low-level physics to derive simple, tractable functional relationships between groups of large-scale observables, a bit like how we derive the physics of a gas by thinking about collisions of lots of particles.

However, it is often not feasible to get actual settings for the parameters in our model starting from first principles. In other words, often we can get the general form of the equations in our model, but they contain a lot of constants that we can estimate only by looking at historical data.
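To make the forcing-and-feedback idea concrete, here is a cartoon one-box energy-balance model in Python. This is only an illustration, not Keller and Urban’s actual model, and the parameter values (a feedback of 1.2 W/m² per kelvin, a heat capacity of 8 W·yr/m² per kelvin) are invented for the example:

```python
def one_box_step(T, forcing, feedback=1.2, heat_capacity=8.0, dt=1.0):
    """One time step of a toy energy-balance box model: the temperature
    anomaly T (kelvin) is pushed up by the forcing (W/m^2) and damped
    by a linear feedback (W/m^2 per kelvin)."""
    return T + dt * (forcing - feedback * T) / heat_capacity

# Hold the forcing fixed at 3.7 W/m^2 (roughly a doubling of CO2):
# T relaxes toward the equilibrium value forcing/feedback, about 3.1 K.
T = 0.0
for year in range(200):
    T = one_box_step(T, forcing=3.7)
```

The feedback parameter here is exactly the kind of constant that cannot easily be derived from first principles, and must instead be estimated from historical data.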

Probability modeling

Suppose we have a box model that depends on some settings S. For example, in Keller and Urban’s model, S is a list of 18 numbers. To keep things simple, suppose the settings are elements of some finite set. Suppose we also have a huge hard disc full of historical measurements, and we want to use this to find the best estimate of S. Because our data is full of ‘noise’ from other, unmodeled phenomena, we generally cannot unambiguously deduce a single set of settings. Instead we have to look at things in terms of probabilities. More precisely, we need to study the probability that S takes some value s given that the measurements take some value. Let’s call the measurements M, and again let’s keep things simple by saying M takes values in some finite set of possible measurements.

The probability that S = s given that M takes some value m is called the conditional probability P(S=s | M=m). How can we compute this conditional probability? This is a somewhat tricky problem.

One thing we can more easily do is repeatedly run our model with randomly chosen settings and see what measurements it predicts. By doing this, we can compute the probability that given setting values S = s, the model predicts measurements M=m. This again is a conditional probability, but now it is called P(M=m|S=s).

This is not what we want: it’s backwards! But here Bayes’ rule comes to the rescue, relating what we want to what we can more easily compute:

\displaystyle{ P(S = s | M = m) = P(M = m| S = s) \frac{P(S = s)}{P(M = m)} }

Here P(S = s) is the probability that the settings take a specific value s, and similarly for P(M = m). Bayes’ rule is quite easy to prove, and it is actually a general rule that applies to any random variables, not just the settings and the measurements in our problem [Y]. It underpins most methods of figuring out hidden quantities from observed ones. For this reason, it is widely used in modern statistics and data analysis [K].

How does Bayes’ rule help us here? When we repeatedly run our model with randomly chosen settings, we have control over P(S = s). As mentioned, we can compute P(M=m| S=s). Finally, P(M = m) is independent of our choice of settings. So, we can use Bayes’ rule to compute P(S = s | M = m) up to a constant factor. And since probabilities must sum to 1, we can figure out this constant.

This lets us do many things. It lets us find the most likely values of the settings for our model, given our hard disc full of observed data. It also lets us find the probability that the settings lie within some set. This is important: if we’re facing the possibility of a climate disaster, we don’t just want to know the most likely outcome. We would like to know that with 95% probability, the outcome will lie in some range.
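As a toy numerical illustration of this use of Bayes’ rule (the numbers here are made up, not taken from any climate model): suppose there are just two possible settings, and repeated model runs have given us the likelihoods below. We compute the posterior up to a constant factor, then use the fact that probabilities sum to 1 to recover the constant:

```python
prior = {'s1': 0.5, 's2': 0.5}        # P(S = s): our choice of settings
likelihood = {'s1': 0.8, 's2': 0.2}   # P(M = m | S = s) for the observed m

# Bayes' rule up to the constant factor P(M = m):
unnormalized = {s: likelihood[s] * prior[s] for s in prior}

# Probabilities must sum to 1, so we can recover the constant:
Z = sum(unnormalized.values())        # this equals P(M = m)
posterior = {s: p / Z for s, p in unnormalized.items()}   # P(S = s | M = m)
```

With a uniform prior, as here, the posterior is simply the normalized likelihood.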

An example

Let us look at an example much simpler than that considered by Keller and Urban. Suppose our measurements are real numbers m_0,\dots, m_T related by

m_{t+1} = s m_t - m_{t-1} + N_t

Here s, a real constant, is our ‘setting’, while N_t is some ‘noise’: an independent Gaussian random variable for each time t, each with mean zero and some fixed standard deviation. Then the measurements m_t will have roughly sinusoidal behavior but with irregularity added by the noise at each time step, as illustrated in Figure 2.



2. The example system: red are predicted measurements for a given value of the settings, green is another simulation for the same s value and blue is a simulation for a slightly different s.

 
Note how there is no clear signal from either the curves or the differences that the green curve is at the correct setting value while the blue one has the wrong one: the noise makes it nontrivial to estimate s. This is a baby version of the problem faced by Keller and Urban.
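Simulating this example system takes only a few lines. This is a sketch; the initial conditions, noise level and setting value below are our own choices, with |s| < 2 being what makes the noiseless recurrence oscillate:

```python
import random

def simulate(s, T=200, noise_sd=0.1, seed=1):
    """Generate measurements m_0, ..., m_T from
    m_{t+1} = s * m_t - m_{t-1} + N_t, with Gaussian noise N_t."""
    rng = random.Random(seed)
    m = [0.0, 1.0]                      # arbitrary initial conditions
    for t in range(1, T):
        m.append(s * m[t] - m[t - 1] + rng.gauss(0.0, noise_sd))
    return m

measurements = simulate(s=1.8)          # |s| < 2, so the signal is roughly sinusoidal
```

Re-running with a slightly different s gives curves like the blue one in Figure 2: hard to tell apart from the correct setting by eye.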

Markov Chain Monte Carlo

Having glibly said that we can compute the conditional probability P(M=m | S=s), how do we actually do this? The simplest way would be to run our model many, many times with the settings set at S=s and determine the fraction of times it predicts measurements equal to m. This gives us an estimate of P(M=m | S=s). Then we can use Bayes’ rule to work out P(S=s | M=m), at least up to a constant factor.

Doing all this by hand would be incredibly time consuming and error prone, so computers are used for this task. Figure 3 shows the results for our example. As we keep running our model over and over, the curve showing P(M=m | S=s) as a function of s settles down to the right answer.


3. The estimates of P(M=m | S=s) as a function of s using uniform sampling, ending up with 480 samples at each point.

 

However, this is computationally inefficient, as shown by the estimated probability distribution for small numbers of samples. This has quite a few ‘kinks’, which only disappear later. The problem is that there are lots of possible choices of s to try. And this is for a very simple model!

When dealing with the 18 settings involved in the model of Keller and Urban, trying every combination would take far too long. A way to avoid this is Markov Chain Monte Carlo sampling. Monte Carlo is famous for its casinos, so a ‘Monte Carlo’ algorithm is one that uses randomness. A ‘Markov chain’ is a random walk: for example, where you repeatedly flip a coin and take one step right when you get heads, and one step left when you get tails. So, in Markov Chain Monte Carlo, we perform a random walk through the collection of all possible settings, collecting samples.

The key to making this work is that at each step on the walk a proposed modification s' to the current settings s is generated randomly—but it may be rejected if it does not seem to improve the estimates. The essence of the rule is:

The modification s \mapsto s' is randomly accepted with a probability equal to the ratio

\displaystyle{ \frac{P(M=m | S=s')}{ P(M=m | S=s)} }

Otherwise the walk stays at the current position.

If the modification is better, so that the ratio is greater than 1, the new state is always accepted. With some additional tricks—such as discarding the very beginning of the walk—this gives a set of samples that can be used to compute P(M=m | S=s). Then we can compute P(S = s | M = m) using Bayes’ rule.
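The acceptance rule above can be sketched in a few lines of Python. This is a minimal Metropolis-style sampler for our one-setting example, not Keller and Urban’s actual code; in particular, the likelihood function below is a stand-in invented for illustration, where in practice the value would come from repeated model runs as described earlier:

```python
import math
import random

def metropolis(likelihood, s0, n_steps, step_sd=0.05, seed=2):
    """Random walk over settings: propose s' near s, accept with
    probability min(1, likelihood(s') / likelihood(s))."""
    rng = random.Random(seed)
    s, samples = s0, []
    for _ in range(n_steps):
        s_new = s + rng.gauss(0.0, step_sd)
        if rng.random() < likelihood(s_new) / likelihood(s):
            s = s_new               # accept the proposed modification
        samples.append(s)           # otherwise stay at the current position
    return samples

# Stand-in for P(M = m | S = s): sharply peaked around s = 1.8.
def toy_likelihood(s):
    return math.exp(-(s - 1.8) ** 2 / (2 * 0.05 ** 2))

samples = metropolis(toy_likelihood, s0=1.5, n_steps=5000)
kept = samples[1000:]               # discard the very beginning of the walk
```

The kept samples cluster under the peak of the likelihood, which is exactly the efficiency gain discussed below: little time is wasted in low-probability regions.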

Figure 4 shows the results of using the Markov Chain Monte Carlo procedure to figure out P(S= s| M= m) in our example.


4. The estimates of P(S = s|M = m) curves using Markov Chain Monte Carlo, showing the current distribution estimate at increasing intervals. The red line shows the current position of the random walk. Again the kinks are almost gone in the final distribution.

 

Note that obtaining the final distribution required only about 66 thousand simulations in total, while the full sampling performed over 1.5 million. The key advantage of Markov Chain Monte Carlo is that it avoids performing many simulations in areas where the probability is low, as we can see from the way the walk path remains under the big peak in the probability density almost all the time. What is more impressive is that it achieves this without any global view of the probability density, just by looking at how P(M=m | S=s) changes when we make small changes in the settings. This becomes even more important as we move to dealing with systems with many more dimensions and settings, where it proves very effective at finding regions of high probability density whatever their shape.

Why is it worth doing so much work to estimate the probability distribution for settings for a climate model? One reason is that we can then estimate probabilities of future events, such as the collapse of the Atlantic Meridional Overturning Circulation. And what’s the answer? According to Keller and Urban’s calculation, this current will likely weaken by about a fifth in the 21st century, but a complete collapse is unlikely before 2300. This claim needs to be checked in many ways—for example, using more detailed models. But the importance of the issue is clear, and we hope we have made the importance of good mathematical ideas for climate science clear as well.

Exploring the topic

The Azimuth Project is a group of scientists, engineers and computer programmers interested in questions like this [A]. If you have questions, or want to help out, just email us. Versions of the computer programs we used in this paper will be made available here in a while.

Here are some projects you can try, perhaps with the help of Kruschke’s textbook [K]:

• There are other ways to do setting estimation using time series: compare some to MCMC in terms of accuracy and robustness.

• We’ve seen a 1-dimensional system with one setting. Simulate some multi-dimensional and multi-setting systems. What new issues arise?

Acknowledgements. We thank Nathan Urban and other
members of the Azimuth Project for many helpful discussions.

References

[A] Azimuth Project, http://www.azimuthproject.org.

[KU] Klaus Keller and Nathan Urban, Probabilistic hindcasts and projections of the coupled climate, carbon cycle and Atlantic meridional overturning circulation system: a Bayesian fusion of century-scale measurements with a simple model, Tellus A 62 (2010), 737–750. Also available free online.

[K] John K. Kruschke, Doing Bayesian Data Analysis: A Tutorial with R and BUGS, Academic Press, New York, 2010.

[Y] Eliezer S. Yudkowsky, An intuitive explanation of Bayes’ theorem.


Bridging the Greenhouse-Gas Emissions Gap

28 April, 2013

I could use some help here, finding organizations that can help cut greenhouse gas emissions. I’ll explain what I mean in a minute. But the big question is:

How can we bridge the gap between what we are doing about global warming and what we should be doing?

That’s what this paper is about:

• Kornelis Blok, Niklas Höhne, Kees van der Leun and Nicholas Harrison, Bridging the greenhouse-gas emissions gap, Nature Climate Change 2 (2012), 471-474.

According to the United Nations Environment Programme, we need to cut CO2 emissions by about 12 gigatonnes/year by 2020 to hold global warming to 2 °C.

After the UN climate conference in Copenhagen, many countries made pledges to reduce CO2 emissions. But by 2020 these pledges will cut emissions by at most 6 gigatonnes/year. Even worse, a lot of these pledges are contingent on other people meeting other pledges, and so on… so the confirmed value of all these pledges is only 3 gigatonnes/year.

The authors list 21 things that cities, large companies and individual citizens can do, which they claim will cut greenhouse gas emissions by the equivalent of 10 gigatonnes/year of CO2 by 2020. For each initiative on their list, they claim:

(1) there is a concrete starting position from which a significant up-scaling until the year 2020 is possible;

(2) there are significant additional benefits besides a reduction of greenhouse-gas emissions, so people can be driven by self-interest or internal motivation, not external pressure;

(3) there is an organization or combination of organizations that can lead the initiative;

(4) the initiative has the potential to reach an emission reduction by about 0.5 Gt CO2e by 2020.

21 Initiatives

Now I want to quote the paper and list the 21 initiatives. And here’s where I could use your help! For each of these, can you point me to one or more organizations that are in a good position to lead the initiative?

Some are already listed, but even for these I bet there are other good answers. I want to compile a list, and then start exploring what’s being done, and what needs to be done.

By the way, even if the UN estimate of the greenhouse-emissions gap is wrong, and even if all the numbers I’m about to quote are wrong, most of them are probably the right order of magnitude—and that’s all we need to get a sense of what needs to be done, and how we can do it.

Companies

1. Top 1,000 companies’ emission reductions. Many of the 1,000 largest greenhouse-gas-emitting companies already have greenhouse-gas emission-reduction goals to decrease their energy use and increase their long-term competitiveness, as well as to demonstrate their corporate social responsibility. An association such as the World Business Council for Sustainable Development could lead 30% of the top 1,000 companies to reduce energy-related emissions 10% below business as usual by 2020 and all companies to reduce their non-carbon dioxide greenhouse-gas emissions by 50%. Impact in 2020: up to 0.7 Gt CO2e.

2. Supply-chain emission reductions. Several companies already have social and environmental requirements for their suppliers, which are driven by increased competitiveness, corporate social responsibility and the ability to be a front-runner. An organization such as the Consumer Goods Forum could stimulate 30% of companies to require their supply chains to reduce emissions 10% below business as usual by 2020. Impact in 2020: up to 0.2 Gt CO2e.

3. Green financial institutions. More than 200 financial organizations are already members of the finance initiative of the United Nations Environment Programme (UNEP-FI). They are committed to environmental goals owing to corporate social responsibility, to gain investor certainty and to be placed well in emerging markets. UNEP-FI could lead the 20 largest banks to reduce the carbon footprint of 10% of their assets by 80%. Impact in 2020: up to 0.4 Gt CO2e.

4. Voluntary-offset companies. Many companies are already offsetting their greenhouse-gas emissions, mostly without explicit external pressure. A coalition between an organization with convening power, for example UNEP, and offset providers could motivate 20% of the companies in the light industry and commercial sector to calculate their greenhouse-gas emissions, apply emission-reduction measures and offset the remaining emissions (retiring the purchased credits). Using the ‘gold standard’ for offset projects, or another comparable mechanism, would ensure that offset projects really reduce emissions. Governments could provide incentives by giving tax credits for offsetting, similar to those commonly given for charitable donations. Impact by 2020: up to 2.0 Gt CO2e.

Other actors

5. Voluntary-offset consumers. A growing number of individuals (especially with high income) already offset their greenhouse-gas emissions, mostly for flights, but also through carbon-neutral products. Environmental NGOs could motivate 10% of the 20% of richest individuals to offset their personal emissions from electricity use, heating and transport at cost to them of around US$200 per year. Impact in 2020: up to 1.6 Gt CO2e.
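As a quick sanity check on the numbers in this item (my own back-of-envelope arithmetic, not figures from the paper): with a world population of roughly 7 billion, 10% of the richest 20% is about 140 million people, so 1.6 Gt CO2e works out to about 11 tonnes per person per year, and $200 per year then implies an offset price near $18 per tonne, which is in the right ballpark for verified offsets:

```python
# Back-of-envelope check of initiative 5 (voluntary-offset consumers).
# All inputs here are rough assumptions of mine, not from the paper.
world_population = 7.0e9                         # ~2013 world population
participants = 0.10 * 0.20 * world_population    # 10% of the richest 20%
impact_t = 1.6e9                                 # stated impact: 1.6 Gt CO2e, in tonnes

per_person_t = impact_t / participants           # tonnes offset per person per year
price_per_t = 200.0 / per_person_t               # implied offset price, $/tonne

print(f"{participants/1e6:.0f} million people")
print(f"{per_person_t:.1f} t CO2e per person per year")
print(f"${price_per_t:.1f} per tonne implied")
```

So the stated impact and the stated cost hang together, at least to one significant figure.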

6. Major cities initiative. Major cities are large emitters of greenhouse gases and many have greenhouse-gas reduction targets. Cities are intrinsically highly motivated to act so as to improve local air quality, attractiveness and local job creation. Groups like the C40 Cities Climate Leadership Group and ICLEI — Local Governments for Sustainability could lead the 40 cities in C40 or an equivalent sample to reduce emissions 20% below business as usual by 2020, building on the thousands of emission-reduction activities already implemented by the C40 cities. Impact in 2020: up to 0.7 Gt CO2e.

7. Subnational governments. Several states in the United States and provinces in Canada have already introduced support mechanisms for renewable energy, emission-trading schemes, carbon taxes and industry regulation. As a result, they expect an increase in local competitiveness, jobs and energy security. Following the example set by states such as California, these ambitious US states and Canadian provinces could accept an emission-reduction target of 15–20% below business as usual by 2020, as some states already have. Impact in 2020: up to 0.6 Gt CO2e.

Energy efficiency

8. Building heating and cooling. New buildings, and increasingly existing buildings, are designed to be extremely energy efficient to realize net savings and increase comfort. The UN Secretary General’s Sustainable Energy for All Initiative could bring together the relevant players to realize 30% of the full reduction potential for 2020. Impact in 2020: up to 0.6 Gt CO2e.

9. Ban of incandescent lamps. Many countries already have phase-out schedules for incandescent lamps, as the phase-out provides net savings in the long term. The en.lighten initiative of UNEP and the Global Environment Facility already has a target to globally ban incandescent lamps by 2016. Impact in 2020: up to 0.2 Gt CO2e.

10. Electric appliances. Many international labelling schemes and standards already exist for energy efficiency of appliances, as efficient appliances usually give net savings in the long term. The Collaborative Labeling and Appliance Standards Program or the Super-efficient Equipment and Appliance Deployment Initiative could drive use of the most energy-efficient appliances on the market. Impact in 2020: up to 0.6 Gt CO2e.

11. Cars and trucks. All car and truck manufacturers put emphasis on developing vehicles that are more efficient. This fosters innovation and increases their long-term competitive position. The emissions of new cars in Europe fell by almost 20% in the past decade. A coalition of manufacturers and NGOs joined by the UNEP Partnership for Clean Fuels and Vehicles could agree to save one additional liter per 100 km globally by 2020 for cars, and equivalent reductions for trucks. Impact in 2020: up to 0.7 Gt CO2e.

Energy supply

12. Boost solar photovoltaic energy. Prices of solar photovoltaic systems have come down rapidly in recent years, and installed capacity has increased much faster than expected. It created a new industry, an export market and local value added through, for example, roof installations. A coalition of progressive governments and producers could remove barriers by introducing good grid access and net metering rules, paving the way to add another 1,600 GW by 2020 (growth consistent with recent years). Impact in 2020: up to 1.4 Gt CO2e.
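The 1.4 Gt figure for solar is roughly reproducible from first principles (my assumptions, not the paper's): 1,600 GW of added capacity at an assumed global average capacity factor of 15% generates about 2,100 TWh per year, and if that displaces grid power at an assumed 0.6 t CO2 per MWh, the avoided emissions come out near 1.3 Gt, the same order as the quoted 1.4 Gt:

```python
# Rough sanity check of initiative 12 (solar PV). The capacity factor
# and grid emission intensity are my assumptions, not from the paper.
added_gw = 1600.0                 # stated capacity addition by 2020
capacity_factor = 0.15            # assumed global average for PV
hours_per_year = 8760.0
generation_twh = added_gw * capacity_factor * hours_per_year / 1000.0

grid_intensity = 0.6              # assumed t CO2 per MWh of displaced power
avoided_gt = generation_twh * 1e6 * grid_intensity / 1e9

print(f"~{generation_twh:.0f} TWh/yr, ~{avoided_gt:.1f} Gt CO2e avoided")
```

A higher assumed grid intensity (coal-heavy grids run near 1 t CO2/MWh) would close the remaining gap to 1.4 Gt.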

13. Wind energy. Cost levels for wind energy have come down dramatically, making wind economically competitive with fossil-fuel-based power generation in many cases. The Global Wind Energy Council could foster the global introduction of arrangements that lead to risk reduction for investments in wind energy, with, for example, grid access and guarantees. This could lead to an installation of 1,070 GW by 2020, which is 650 GW over a reference scenario. Impact in 2020: up to 1.2 Gt CO2e.

14. Access to energy through low-emission options. Strong calls and actions are already underway to provide electricity access to the 1.4 billion people who at present lack it, and to fulfill development goals. The UN Secretary General’s Sustainable Energy for All Initiative could ensure that all people without access to electricity get access through low-emission options. Impact in 2020: up to 0.4 Gt CO2e.

15. Phasing out subsidies for fossil fuels. This highly recognized option to reduce emissions would improve investment in clean energy, provide other environmental, health and security benefits, and generate income. The International Energy Agency could work with countries to phase out half of all fossil-fuel subsidies. Impact in 2020: up to 0.9 Gt CO2e.

Special sectors

16. International aviation and maritime transport. The aviation and shipping industries are seriously considering efficiency measures and biofuels to increase their competitive advantage. Leading aircraft and ship manufacturers could agree to design their vehicles to capture half of the technical mitigation potential. Impact in 2020: up to 0.2 Gt CO2e.

17. Fluorinated gases (hydrofluorocarbons, perfluorocarbons, SF6). Recent industry-led initiatives are already underway to reduce emissions of these gases originating from refrigeration, air-conditioning and industrial processes. Industry associations, such as Refrigerants, Naturally!, could work towards meeting half of the technical mitigation potential. Impact in 2020: up to 0.3 Gt CO2e.

18. Reduce deforestation. Some countries have already shown that deforestation can be sharply reduced with an integrated approach that eliminates the drivers of deforestation. This has benefits for local air pollution and biodiversity, and can support the local population. Led by an individual with convening power, for example, the United Kingdom’s Prince of Wales or the UN Secretary General, such approaches could be rolled out to all the major countries with high deforestation emissions, halving global deforestation by 2020. Impact in 2020: up to 1.8 Gt CO2e.

19. Agriculture. Options to reduce emissions from agriculture often increase efficiency. The International Federation of Agricultural Producers could help to realize 30% of the technical mitigation potential. (Well, at least it could before it collapsed, after this paper was written.) Impact in 2020: up to 0.8 Gt CO2e.

Air pollutants

20. Enhanced reduction of air pollutants. Reduction of classic air pollutants including black carbon has been pursued for years owing to positive impacts on health and local air quality. UNEP’s Climate and Clean Air Coalition To Reduce Short-Lived Climate Pollutants already has significant political momentum and could realize half of the technical mitigation potential. Impact in 2020: a reduction in radiative forcing impact equivalent to an emission reduction of greenhouse gases in the order of 1 Gt CO2e, but outside of the definition of the gap.

21. Efficient cook-stoves. Cooking in rural areas is a source of carbon dioxide emissions. Furthermore, there are emissions of black carbon, which also leads to global warming. Replacing these cook-stoves would also significantly increase local air quality and reduce pressure on forests from fuel-wood demand. A global development organization such as the UN Development Programme could take the lead in scaling up the many already existing programs to eventually replace half of the existing cook-stoves. Impact in 2020: a reduction in radiative forcing impact equivalent to an emission reduction of greenhouse gases of up to 0.6 Gt CO2e, included in the effect of the above initiative and outside of the definition of the gap.
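The stated maxima overlap (the cook-stove figure, for instance, is included in the air-pollutant one), so they can't simply be added, and the paper itself de-rates for overlap. Still, summing the 19 in-gap maxima gives a crude upper bound on the combined potential, comfortably larger than the gap itself:

```python
# Crude upper bound: sum of the stated maximum impacts of the 19
# initiatives that count toward the emissions gap (items 20 and 21
# are outside the gap definition). Overlaps between initiatives mean
# the true combined potential is lower than this simple sum.
impacts_gt = [0.7, 0.2, 0.4, 2.0,        # companies (1-4)
              1.6, 0.7, 0.6,             # other actors (5-7)
              0.6, 0.2, 0.6, 0.7,        # energy efficiency (8-11)
              1.4, 1.2, 0.4, 0.9,        # energy supply (12-15)
              0.2, 0.3, 1.8, 0.8]        # special sectors (16-19)
total = sum(impacts_gt)
print(f"Sum of stated maxima: {total:.1f} Gt CO2e")
```

The grouping comments match the headings above; the point is only that even heavily discounted for overlap, these initiatives are the right size to matter.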

For more

For more, see the supplementary materials to this paper, and also:

• Niklas Höhne, Wedging the gap: 21 initiatives to bridge the greenhouse gas emissions gap.

The size of the emissions gap was calculated here:

The Emissions Gap Report 2012, United Nations Environment Programme (UNEP).

If you’re in a rush, just read the executive summary.


Energy and the Environment – What Physicists Can Do

25 April, 2013

 

The Perimeter Institute is a futuristic-looking place where over 250 physicists are thinking about quantum gravity, quantum information theory, cosmology and the like. Since I work on some of these things, I was recently invited to give the weekly colloquium there. But I took the opportunity to try to rally them into action:

Energy and the Environment: What Physicists Can Do. Watch the video or read the slides.

Abstract. The global warming crisis is part of a bigger transformation in which humanity realizes that the Earth is a finite system and that our population, energy usage, and the like cannot continue to grow exponentially. While politics and economics pose the biggest challenges, physicists are in a good position to help make this transition a bit easier. After a quick review of the problems, we discuss a few ways physicists can help.

On the video you can hear me say a lot of stuff that’s not on the slides: it’s more of a coherent story. The advantage of the slides is that anything in blue, you can click on to get more information. So for example, when I say that solar power capacity has been growing annually by 75% in recent years, you can see where I got that number.

I was pleased by the response to this talk. Naturally, it was not a case of physicists saying “okay, tomorrow I’ll quit working on the foundations of quantum mechanics and start trying to improve quantum dot solar cells.” It’s more about getting them to see that huge problems are looming ahead of us… and to see the huge opportunities for physicists who are willing to face these problems head-on, starting now. Work on energy technologies, the smart grid, and ‘ecotechnology’ is going to keep growing. I think a bunch of the younger folks, at least, could see this.

However, perhaps the best immediate outcome of this talk was that Lee Smolin introduced me to Manjana Milkoreit. She’s at the school of international affairs at the University of Waterloo, practically next door to the Perimeter Institute. She works on “climate change governance, cognition and belief systems, international security, complex systems approaches, especially threshold behavior, and the science-policy interface.”

So, she knows a lot about the all-important human and political side of climate change. Right now she’s interviewing diplomats involved in climate treaty negotiations, trying to see what they believe about climate change. And it’s very interesting!

In my next post, I’ll talk about something she pointed me to. Namely: what we can do to hold the temperature increase to 2 °C or less, given that the pledges made by various nations aren’t enough.


Milankovitch Cycles and the Earth’s Climate

13 April, 2013

Here are the slides for a talk I’m giving at the Cal State Northridge Climate Science Seminar:

Milankovitch Cycles and the Earth’s Climate.

It’s a gentle introduction to these ideas, and it presents a lot of what Blake Pollard and I have said about Milankovitch cycles, in a condensed way. Of course when I give the talk, I’ll add more words, especially about the different famous ‘puzzles’.

If you have any corrections, please let me know!

I’m eager to visit Cal State Northridge and especially David Klein in their math department, since I’d like to incorporate some climate science in our math curriculum the way they’ve done there.


Geoengineering Report

11 March, 2013

I think we should start serious research on geoengineering schemes, including actual experiments, not just calculations and simulations. I think we should do this with an open mind about whether we’ll decide that these schemes are good ideas or bad. Either way, we need to learn more about them. Simultaneously, we need an intelligent, well-informed debate about the many ethical, legal and political aspects.

Many express the fear that merely researching geoengineering schemes will automatically legitimate them, however hare-brained they are. There’s some merit to that fear. But I suspect that public opinion on geoengineering will suddenly tip from “unthinkable!” to “let’s do it now!” as soon as global warming becomes perceived as a real and present threat. This is especially true because oil, coal and gas companies have a big interest in finding solutions to global warming that don’t make them stop digging.

So if we don’t learn more about geoengineering schemes, and we start getting heat waves that threaten widespread famine, we should not be surprised if some big government goes it alone and starts doing something cheap and easy like putting tons of sulfur into the upper atmosphere… even if it’s been inadequately researched.

It’s hard to imagine a more controversial topic. But I think there’s one thing most of us should be able to agree on: we should pay attention to what governments are doing about geoengineering! So, let me quote a bit of this report prepared for the US Congress:

• Kelsi Bracmort and Richard K. Lattanzio, Geoengineering: Governance and Technology Policy, CRS Report for Congress, Congressional Research Service, 2 January 2013.

Kelsi Bracmort is a specialist in agricultural conservation and natural resources policy, and Richard K. Lattanzio is an analyst in environmental policy.

I will delete references to footnotes, since they’re huge and I’m too lazy to include them all here. So, go to the original text for those!

Introduction

Climate change has received considerable policy attention in the past several years both internationally and within the United States. A major report released by the Intergovernmental Panel on Climate Change (IPCC) in 2007 found widespread evidence of climate warming, and many are concerned that climate change may be severe and rapid with potentially catastrophic consequences for humans and the functioning of ecosystems. The National Academies maintains that the climate change challenge is unlikely to be solved with any single strategy or by the people of any single country.

Policy efforts to address climate change use a variety of methods, frequently including mitigation and adaptation. Mitigation is the reduction of the principal greenhouse gas (GHG) carbon dioxide (CO2) and other GHGs. Carbon dioxide is the dominant greenhouse gas emitted naturally through the carbon cycle and through human activities like the burning of fossil fuels. Other commonly discussed GHGs include methane, nitrous oxide, hydrofluorocarbons, perfluorocarbons, and sulfur hexafluoride. Adaptation seeks to improve an individual’s or institution’s ability to cope with or avoid harmful impacts of climate change, and to take advantage of potential beneficial ones.

Some observers are concerned that current mitigation and adaptation strategies may not prevent change quickly enough to avoid extreme climate disruptions. Geoengineering has been suggested by some as a timely additional method to mitigation and adaptation that could be included in climate change policy efforts. Geoengineering technologies, applied to the climate, aim to achieve large-scale and deliberate modifications of the Earth’s energy balance in order to reduce temperatures and counteract anthropogenic (i.e., human-made) climate change; these climate modifications would not be limited by country boundaries. As an unproven concept, geoengineering raises substantial environmental and ethical concerns for some observers. Others respond that the uncertainties of geoengineering may only be resolved through further scientific and technical examination.

Proposed geoengineering technologies vary greatly in terms of their technological characteristics and possible consequences. They are generally classified in two main groups:

• Solar radiation management (SRM) method: technologies that would increase the reflectivity, or albedo, of the Earth’s atmosphere or surface, and

• Carbon dioxide removal (CDR) method: technologies or practices that would remove CO2 and other GHGs from the atmosphere.

Much of the geoengineering technology discussion centers on SRM methods (e.g., enhanced albedo, aerosol injection). SRM methods could be deployed relatively quickly if necessary, and their impact on the climate would be more immediate than that of CDR methods. Because SRM methods do not reduce GHG from the atmosphere, global warming could resume at a rapid pace if a deployed SRM method fails or is terminated at any time. At least one relatively simple SRM method is already being deployed with government assistance. [Enhanced albedo is one SRM effort currently being undertaken by the U.S. Environmental Protection Agency. See the Enhanced Albedo section for more information.] Other proposed SRM methods are at the conceptualization stage. CDR methods include afforestation, ocean fertilization, and the use of biomass to capture and store carbon.

The 112th Congress did not take any legislative action on geoengineering. In 2009, the House Science and Technology Committee of the 111th Congress held hearings on geoengineering that examined the “potential environmental risks and benefits of various proposals, associated domestic and international governance issues, evaluation mechanisms and criteria, research and development (R&D) needs, and economic rationales supporting the deployment of geoengineering activities.”

Some foreign governments, including the United Kingdom’s, as well as scientists from Germany and India, have begun considering engaging in the research or deployment of geoengineering technologies because of concern over the slow progress of emissions reductions, the uncertainties of climate sensitivity, the possible existence of climate thresholds (or “tipping points”), and the political, social, and economic impact of pursuing aggressive GHG mitigation strategies.

Congressional interest in geoengineering has focused primarily on whether geoengineering is a realistic, effective, and appropriate tool for the United States to use to address climate change. However, if geoengineering technologies are deployed by the United States, another government, or a private entity, several new concerns are likely to arise related to government support for, and oversight of, geoengineering as well as the transboundary and long-term effects of geoengineering. Such was the case in the summer of 2012, when an American citizen conducted a geoengineering experiment, specifically ocean fertilization, off the west coast of Canada that some say violated two international conventions.

This report is intended as a primer on the policy issues, science, and governance of geoengineering technologies. The report will first set the policy parameters under which geoengineering technologies may be considered. It will then describe selected technologies in detail and discuss their status. The third section provides a discussion of possible approaches to governmental involvement in, and oversight of, geoengineering, including a summary of domestic and international instruments and institutions that may affect geoengineering projects.

Geoengineering governance

Geoengineering technologies aim to modify the Earth’s energy balance in order to reduce temperatures and counteract anthropogenic climate change through large-scale and deliberate modifications. Implementation of some of the technologies may be controlled locally, while other technologies may require global input on implementation. Additionally, whether a technology can be controlled or not once implemented differs by technology type. Little research has been done on most geoengineering methods, and no major directed research programs are in place. Peer reviewed literature is scant, and deployment of the technology—either through controlled field tests or commercial enterprise—has been minimal.

Most interested observers agree that more research would be required to test the feasibility, effectiveness, cost, social and environmental impacts, and the possible unintended consequences of geoengineering before deployment; others reject exploration of the options as too risky. The uncertainties have led some policymakers to consider the need and the role for governmental oversight to guide research in the short term and to oversee potential deployment in the long term. Such governance structures, both domestic and international, could either support or constrain geoengineering activities, depending on the decisions of policymakers. As both technological development and policy considerations for geoengineering are in their early stages, several questions of governance remain in play:

• What risk factors and policy considerations enter into the debate over geoengineering activities and government oversight?

• At what point, if ever, should there be government oversight of geoengineering activities?

• If there is government oversight, what form should it take?

• If there is government oversight, who should be responsible for it?

• If there is publicly funded research and development, what should it cover and which disciplines should be engaged in it?

Risk Factors

As a new and emerging set of technologies potentially able to address climate change, geoengineering possesses many risk factors that must be taken into account in policy considerations. From a research perspective, the risk of geoengineering activities most often rests in the uncertainties of the new technology (i.e., the risk of failure, accident, or unintended consequences). However, many observers believe that the greater risk in geoengineering activities may lie in the social, ethical, legal, and political uncertainties associated with deployment. Given these risks, there is an argument that appropriate mechanisms for government oversight should be established before the federal government and its agencies take steps to promote geoengineering technologies and before new geoengineering projects are commenced. Yet, the uncertainty behind the technologies makes it unclear which methods, if any, may ever mature to the point of being deemed sufficiently effective, affordable, safe, and timely as to warrant potential deployment.

Some of the more significant risk factors associated with geoengineering are as follows:

Technology Control Dilemma. An analytical impasse inherent in all emerging technologies is that potential risks may be foreseen in the design phase but can only be proven and resolved through actual research, development, and demonstration. Ideally, appropriate safeguards are put in place during the early stages of conceptualization and development, but anticipating the evolution of a new technology can be difficult. By the time a technology is widely deployed, it may be impossible to build desirable oversight and risk management provisions without major disruptions to established interests. Flexibility is often required to both support investigative research and constrain potentially harmful deployment.

Reversibility. Risk mitigation relies on the ability to cease a technology program and terminate its adverse effects in a short period of time. In principle, all geoengineering options could be abandoned on short notice, with either an instant cessation of direct climate effects or a small time lag after abandonment.

However, the issue of reversibility applies to more than just the technologies themselves. Given the importance of internal adjustments and feedbacks in the climate system—still imperfectly understood—it is unlikely that all secondary effects from large-scale deployment would end immediately. Also, choices made regarding geoengineering methods may influence other social, economic, and technological choices regarding climate science. Advancing geoengineering options in lieu of effectively mitigating GHG emissions, for example, could result in a number of adverse effects, including ocean acidification, stresses on biodiversity, climate sensitivity shocks, and other irreversible consequences. Further, investing financially in the physical infrastructure to support geoengineering may create a strong economic resistance to reversing research and deployment activities.

Encapsulation. Risk mitigation also relies on whether a technology program is modular and contained or whether it involves the release of materials into the wider environment. The issue can be framed in the context of pollution (i.e., encapsulated technologies are often viewed as more “ethical” in that they are seen as non-polluting). Several geoengineering technologies are demonstrably non-encapsulated, and their release and deployment into the wider environment may lead to technical uncertainties, impacts on non-participants, and complex policy choices. But encapsulated technologies may still have localized environmental impacts, depending on the nature, size, and location of the application. The need for regulatory action may arise as much from the indirect impacts of activities on agro-forestry, species, and habitat as from the direct impacts of released materials in atmospheric or oceanic ecosystems.

Commercial Involvement. The role of private-sector engagement in the development and promotion of geoengineering may be debated. Commercial involvement, including competition, may be positive in that it mobilizes innovation and capital investment, which could lead to the development of more effective and less costly technologies at a faster rate than in the public sector.

However, commercial involvement could bypass or neglect social, economic, and environmental risk assessments in favor of what one commentator refers to as “irresponsible entrepreneurial behavior.” Private-sector engagement would likely require some form of public subsidies or GHG emission pricing to encourage investment, as well as additional considerations including ownership models, intellectual property rights, and trade and transfer mechanisms for the dissemination of the technologies.

Public Engagement. The consequences of geoengineering—including both benefits and risks discussed above—could affect people and communities across the world. Public attitudes toward geoengineering, and public engagement in the formation, development, and execution of proposed governance, could have a critical bearing on the future of the technologies. Perceptions of risks, levels of trust, transparency of actions, provisions for liabilities and compensation, and economies of investment could play a significant role in the political feasibility of geoengineering. Public acceptance may require a wider dialogue between scientists, policymakers, and the public.

