Carbon Trading in California

20 December, 2010

It’s for real!

Or — to be more cautious — it might soon be for real!

On Thursday December 16th, 2010, California’s Air Resources Board approved a cap-and-trade system for carbon. This system will implement the state’s law mandating that carbon emissions be reduced back to 1990 levels by 2020.

This will amount to a 15% decrease from current emissions.

The system will let greenhouse gas emitters buy and sell emission allowances. It covers everyone who emits more than 25,000 metric tons of carbon dioxide per year. That’s about 360 businesses, which together emit about 85% of the state’s CO2.

At first these businesses will receive free allowances that cover most of their emissions, but as time passes, they’ll have to buy those allowances through quarterly auctions. According to the plan, there will be two phases. By 2012, all major industrial sources and utilities will be covered. By 2015, distributors of fuels and natural gas will also be included.

The chair of the Air Resources Board, Mary Nichols, gave a speech. Among other things, she said:

This program is the capstone of our climate policy, and will accelerate California’s progress toward a clean energy economy. It rewards efficiency and provides companies with the greatest flexibility to find innovative solutions that drive green jobs, clean our environment, increase our energy security and ensure that California stands ready to compete in the booming global market for clean and renewable energy.

The governor also showed up at this historic board meeting, and gave a speech.

But I can guess what you’re wondering, or at least one of the many things you should be wondering.

“How much can California do by itself?”

Luckily, California is not doing this by itself. By the time the program gets rolling in 2012, California plans to have built a framework for carbon trading with New Mexico, British Columbia, Ontario and Quebec — some of its partners in the Western Climate Initiative.


Western Climate Initiative

The green guys are the ‘partners’; the other guys, blue because they’re watching carefully but sadly not taking part, are the ‘observers’.

Furthermore, ten states of the US — New York, New Jersey, Delaware, Maryland and the New England states — have started up another system, the Regional Greenhouse Gas Initiative, which covers only electric utilities. They are already doing auctions.



So, while in theory it might make sense to institute carbon trading on a national basis, political realities have pushed North America down a different path, where smaller regions take the lead in groupings that may transcend national boundaries! And that is very interesting in itself.


Stabilization Wedges (Part 3)

17 December, 2010

I bet you thought I’d never get back to this! Sorry, I like to do lots of things.

Remember the idea: in 2004, Stephen Pacala and Robert Socolow wrote a now-famous paper on how we could hold atmospheric carbon dioxide below 500 parts per million. They said that to do this, it would be enough to find 7 ways to reduce carbon emissions, each one ramping up linearly until it cuts emissions by 1 gigaton of carbon per year in 2054.

They called these stabilization wedges, for the obvious reason:



Their paper listed 15 of these wedges. The idea here is to go through them and critique them. In Part 1 of this series we talked about four wedges involving increased efficiency and conservation. In Part 2 we covered one about shifting from coal to natural gas, and three about carbon capture and storage.
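
To fix the scale: one wedge that ramps up linearly from zero to 1 gigaton of carbon per year over the 50 years from 2004 to 2054 avoids a triangle’s worth of cumulative emissions, about 25 gigatons of carbon. Here is that arithmetic spelled out (just the triangle formula, nothing deeper):

```python
# One "wedge": emission cuts ramping linearly from 0 to 1 GtC/year over 50 years.
years = 50           # 2004 to 2054
final_rate = 1.0     # gigatons of carbon per year avoided by 2054

carbon_avoided_per_wedge = 0.5 * years * final_rate   # area of the triangle
print(carbon_avoided_per_wedge)                       # 25 GtC per wedge

wedges_needed = 7
print(wedges_needed * carbon_avoided_per_wedge)       # 175 GtC avoided in total
```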

Now let’s do nuclear power and renewable energy!

9. Nuclear power. As Pacala and Socolow already argued in wedge 5, replacing 700 gigawatts of efficient coal-fired power plants with some carbon-neutral form of power would save us a gigaton of carbon per year. This would require 700 gigawatts of nuclear power plants running at 90% capacity (just as assumed for the coal plants). That means doubling the world production of nuclear power. The global pace of nuclear power plant construction from 1975 to 1990 could do this! So, this is one of the few wedges that doesn’t seem to require heroic technical feats. But of course, there’s still a downside: we can only substantially boost the use of nuclear power if people become confident about all aspects of its safety.

10. Wind power. Wind power is intermittent: Pacala and Socolow estimate that the ‘peak’ capacity (the amount you get under ideal circumstances) is about 3 times the ‘baseload’ capacity (the amount you can count on). So, to save a gigaton of carbon per year by replacing 700 gigawatts of coal-fired power plants, we need roughly 2000 gigawatts of peak wind power. Wind power was growing at about 30% per year when they wrote their paper, and it had reached a world total of 40 gigawatts. So, getting to 2000 gigawatts would mean multiplying the world production of wind power by a factor of 50. The wind turbines would “occupy” about 30 million hectares, or about 30-45 square meters per person — some on land and some offshore. But because windmills are widely spaced, land with windmills can have multiple uses.

11. Photovoltaic solar power. This too is intermittent, so to save a gigaton of carbon per year we need 2000 gigawatts of peak photovoltaic solar power to replace coal. Like wind, photovoltaic solar was growing at 30% per year when Pacala and Socolow wrote their paper. However, only 3 gigawatts had been installed worldwide. So, getting to 2000 gigawatts would require multiplying the world production of photovoltaic solar power by a factor of 700. See what I mean about ‘heroic feats’? In terms of land, this would take about 2 million hectares, or 2-3 square meters per person.
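
Here is a quick check of those factors of 50 and 700 for wind and solar, using the same rough rule as above (about 3 watts of intermittent peak capacity for each watt of steady coal-fired capacity displaced). A minimal sketch:

```python
# Rough check of the scale-up factors for the wind and solar wedges.
coal_displaced = 700.0     # GW of baseload coal to displace (one wedge)
peak_per_baseload = 3.0    # rough intermittency factor used above

peak_needed = coal_displaced * peak_per_baseload
print(peak_needed)                           # ~2100 GW, i.e. roughly 2000 GW peak

wind_installed_2004 = 40.0    # GW of wind worldwide when the paper was written
solar_installed_2004 = 3.0    # GW of photovoltaics worldwide at that time

print(peak_needed / wind_installed_2004)     # ~50-fold increase for wind
print(peak_needed / solar_installed_2004)    # ~700-fold increase for solar
```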

12. Renewable hydrogen. You’ve probably heard about hydrogen-powered cars. Of course you’ve got to make the hydrogen. Renewable electricity can produce hydrogen for vehicle fuel. 4000 gigawatts of peak wind power, for example, used in high-efficiency fuel-cell cars, could keep us from burning a gigaton of carbon each year in the form of gasoline or diesel fuel. Unfortunately, this is twice as much wind power as we’d need in wedge 10, where we use wind to eliminate the need for burning some coal. Why? Gasoline and diesel have less carbon per unit of energy than coal does.

13. Biofuels. Fossil-carbon fuels can also be replaced by biofuels such as ethanol. To save a gigaton per year of carbon, we could make 5.4 gigaliters per day of ethanol as a replacement for gasoline — provided the process of making this ethanol didn’t burn fossil fuels! Doing this would require multiplying the world production of bioethanol by a factor of 50. It would require 250 million hectares committed to high-yield plantations, or 250-375 square meters per person. That’s an area equal to about one-sixth of the world’s cropland. An even larger area would be required to the extent that the biofuels require fossil-fuel inputs. Clearly this could cut into the land used for growing food.
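
As a sanity check on the "one-sixth of the world’s cropland" figure, here is a rough comparison. The 1.5 billion hectare figure for world cropland is my own round number, not from the paper:

```python
# How the biofuel wedge's land demand compares to world cropland.
plantation_area = 250e6       # hectares of high-yield plantations (one wedge)
world_cropland = 1.5e9        # hectares: a commonly quoted rough figure

print(plantation_area / world_cropland)       # about 0.17, roughly one-sixth

# Per-person land, for world populations between 6.7 and 10 billion:
for population in (6.7e9, 10e9):
    print(plantation_area * 1e4 / population)   # about 250-375 square meters each
```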

There you go… let me hear your critique! Which of these measures seem best to you? Which seem worst? But more importantly: why?

Remember: it takes a total of 7 wedges to save the world, according to this paper by Pacala and Socolow.

Next time I’ll tell you about the final two stabilization wedges… and then I’ll give you an update on their idea.


This Week’s Finds (Week 307)

14 December, 2010

I’d like to take a break from interviews and explain some stuff I’m learning about. I’m eager to tell you about some papers in the book Tim Palmer helped edit, Stochastic Physics and Climate Modelling. But those papers are highly theoretical, and theories aren’t very interesting until you know what they’re theories of. So today I’ll talk about "El Niño", which is part of a very interesting climate cycle. Next time I’ll get into more of the math.

I hadn’t originally planned to get into so much detail on the El Niño, but this cycle is a big deal in southern California. In the city of Riverside, where I live, it’s very dry. There is a small river, but it’s just a trickle of water most of the time: there’s a lot less "river" than "side". It almost never rains between March and December. Sometimes, during a "La Niña", it doesn’t even rain in the winter! But then sometimes we have an "El Niño" and get huge floods in the winter. At this point, the tiny stream that gives Riverside its name swells to a huge raging torrent. The difference is very dramatic.

So, I’ve always wanted to understand how the El Niño cycle works — but whenever I tried to read an explanation, I couldn’t follow it!

I finally broke that mental block when I read some stuff on William Kessler’s website. He’s an expert on the El Niño phenomenon who works at the Pacific Marine Environmental Laboratory. One thing I like about his explanations is that he says what we do know about the El Niño, and also what we don’t know. We don’t know what triggers it!

In fact, Kessler says the El Niño would make a great research topic for a smart young scientist. In an email to me, which he has allowed me to quote, he said:

We understand lots of details but the big picture remains mysterious. And I enjoyed your interview with Tim Palmer because it brought out a lot of the sources of uncertainty in present-generation climate modeling. However, with El Niño, the mystery is beyond Tim’s discussion of the difficulties of climate modeling. We do not know whether the tropical climate system on El Niño timescales is stable (in which case El Niño needs an external trigger, of which there are many candidates) or unstable. In the 80s and 90s we developed simple "toy" models that convinced the community that the system was unstable and El Niño could be expected to arise naturally within the tropical climate system. Now that is in doubt, and we are faced with a fundamental uncertainty about the very nature of the beast. Since none of us old farts has any new ideas (I just came back from a conference that reviewed this stuff), this is a fruitful field for a smart young person.

So, I hope some smart young person reads this and dives into working on El Niño!

But let’s start at the beginning. Why did I have so much trouble understanding explanations of the El Niño? Well, first of all, I’m an old fart. Second, most people are bad at explaining stuff: they skip steps, use jargon they haven’t defined, and so on. But third, climate cycles are hard to explain. There’s a lot about them we don’t understand — as Kessler’s email points out. And they also involve a kind of "cyclic causality" that’s a bit tough to mentally process.

At least where I come from, people find it easy to understand linear chains of causality, like "A causes B, which causes C". For example: why is the king’s throne made of gold? Because the king told his minister "I want a throne of gold!" And the minister told the servant, "Make a throne of gold!" And the servant made the king a throne of gold.

Now that’s what I call an explanation! It’s incredibly satisfying, at least if you don’t wonder why the king wanted a throne of gold in the first place. It’s easy to remember, because it sounds like a story. We hear a lot of stories like this when we’re children, so we’re used to them. My example sounds like the beginning of a fairy tale, where the action is initiated by a "prime mover": the decree of a king.

There’s something a bit trickier about cyclic causality, like "A causes B, which causes C, which causes A." It may sound like a sneaky trick: we consider "circular reasoning" a bad thing. Sometimes it is a sneaky trick. But sometimes this is how things really work!

Why does big business have such influence in American politics? Because big business hires lots of lobbyists, who talk to the politicians, and even give them money. Why are they allowed to do this? Because big business has such influence in American politics. That’s an example of a "vicious circle". You might like to cut it off — but like a snake holding its tail in its mouth, it’s hard to know where to start.

Of course, not all circles are "vicious". Many are "virtuous".

But the really tricky thing is how a circle can sometimes reverse direction. In academia we worry about this a lot: we say a university can either "ratchet up" or "ratchet down". A good university attracts good students and good professors, who bring in more grant money, and all this makes it even better… while a bad university tends to get even worse, for all the same reasons. But sometimes a good university goes bad, or vice versa. Explaining that transition can be hard.

It’s also hard to explain why a La Niña switches to an El Niño, or vice versa. Indeed, it seems scientists still don’t understand this. They have some models that simulate this process, but there are still lots of mysteries. And even if they get models that work perfectly, they still may not be able to tell a good story about it. Wind and water are ultimately described by partial differential equations, not fairy tales.

But anyway, let me tell you a story about how it works. I’m just learning this stuff, so take it with a grain of salt…

The "El Niño/Southern Oscillation" or "ENSO" is the largest form of variability in the Earth’s climate on times scales greater than a year and less than a decade. It occurs across the tropical Pacific Ocean every 3 to 7 years, and on average every 4 years. It can cause extreme weather such as floods and droughts in many regions of the world. Countries dependent on agriculture and fishing, especially those bordering the Pacific Ocean, are the most affected.

And here’s a cute little animation of it produced by the Australian Bureau of Meteorology:



Let me tell you first about La Niña, and then El Niño. If you keep glancing back at this little animation, I promise you can understand everything I’ll say.

Winds called trade winds blow west across the tropical Pacific. During La Niña years, water at the ocean’s surface moves west with these winds, warming up in the sunlight as it goes. So, warm water collects at the ocean’s surface in the western Pacific. This creates more clouds and rainstorms in Asia. Meanwhile, since surface water is being dragged west by the wind, cold water from below gets pulled up to take its place in the eastern Pacific, off the coast of South America.

I hope this makes sense so far. But there’s another aspect to the story. Because the ocean’s surface is warmer in the western Pacific, it heats the air and makes it rise. So, wind blows west to fill the "gap" left by rising air. This strengthens the westward-blowing trade winds.

So, it’s a kind of feedback loop: the oceans being warmer in the western Pacific helps the trade winds blow west, and that makes the western oceans even warmer.

Get it? This should all make sense so far, except for one thing. There’s one big question, and I hope you’re asking it. Namely:

Why do the trade winds blow west?

If I don’t answer this, my story so far would work just as well if I switched the words "west" and "east". That wouldn’t necessarily mean my story was wrong. It might just mean that there were two equally good options: a La Niña phase where the trade winds blow west, and another phase — say, El Niño — where they blow east! From everything I’ve said so far, the world could be permanently stuck in one of these phases. Or, maybe it could randomly flip between these two phases for some reason.

Something roughly like this last choice is actually true. But it’s not so simple: there’s not a complete symmetry between west and east.

Why not? Mainly because the Earth is turning to the east.

Air near the equator warms up and rises, so new air from more northern or southern regions moves in to take its place. But points on the equator are farthest from the Earth’s axis, so the equator is moving fastest to the east. So, the new air from other places is moving east less quickly by comparison… so as seen by someone standing on the equator, it blows west. This is an example of the Coriolis effect:



By the way: in case this stuff wasn’t tricky enough already, a wind that blows to the west is called an easterly, because it blows from the east! That’s what happens when you put sailors in charge of scientific terminology. So the westward-blowing trade winds are called "northeasterly trades" and "southeasterly trades" in the picture above. But don’t let that confuse you.

(I also tend to think of Asia as the "Far East" and California as the "West Coast", so I always need to keep reminding myself that Asia is in the west Pacific, while California is in the east Pacific. But don’t let that confuse you either! Just repeat after me until it makes perfect sense: "The easterlies blow west from West Coast to Far East".)
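
If you want some rough numbers behind the claim that air moving toward the equator ends up blowing west, here is a toy calculation. It follows a parcel of air that starts at rest relative to the ground at 10° latitude and drifts to the equator while conserving its angular momentum about the Earth’s axis; real air feels friction and pressure gradients, so treat this only as an order-of-magnitude illustration:

```python
from math import cos, pi, radians

R = 6.371e6           # mean radius of the Earth, in meters
T = 86164.0           # length of the sidereal day, in seconds
omega = 2 * pi / T    # Earth's angular velocity

lat = radians(10.0)               # starting latitude of the air parcel
r_start = R * cos(lat)            # its distance from the Earth's axis there
v_start = omega * r_start         # eastward speed of the ground (and the parcel)

# Let the parcel drift to the equator, conserving angular momentum (v * r constant):
v_at_equator = v_start * r_start / R
ground_speed = omega * R          # eastward speed of the ground at the equator

# Negative result: the parcel lags the ground by roughly 14 m/s, i.e. it
# blows west relative to someone standing on the equator.
print(v_at_equator - ground_speed)
```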

Okay: silly terminology aside, I hope everything makes perfect sense so far. The trade winds have a good intrinsic reason to blow west, but in the La Niña phase they’re also part of a feedback loop where they make the western Pacific warmer… which in turn helps the trade winds blow west.

But then comes an El Niño! Now for some reason the westward winds weaken. This lets the built-up warm water in the western Pacific slosh back east. And with weaker westward winds, less cold water is pulled up to the surface in the east. So, the eastern Pacific warms up. This makes for more clouds and rain in the eastern Pacific — that’s when we get floods in Southern California. And with the ocean warmer in the eastern Pacific, hot air rises there, which tends to counteract the westward winds even more!

In other words, all the feedbacks reverse themselves.

But note: the trade winds never mainly blow east. During an El Niño they still blow west, just a bit less. So, the climate is not flip-flopping between two symmetrical alternatives. It’s flip-flopping between two asymmetrical alternatives.

I hope all this makes sense… except for one thing. There’s another big question, and I hope you’re asking it. Namely:

Why do the westward trade winds weaken?

We could also ask the same question about the start of the La Niña phase: why do the westward trade winds get stronger?

The short answer is that nobody knows. Or at least there’s no one story that everyone agrees on. There are actually several stories… and perhaps more than one of them is true. But now let me just show you the data:



The top graph shows variations in the water temperature of the tropical Eastern Pacific ocean. When it’s hot we have El Niños: those are the red hills in the top graph. The blue valleys are La Niñas. Note that it’s possible to have two El Niños in a row without an intervening La Niña, or vice versa!

The bottom graph shows the "Southern Oscillation Index" or "SOI". This is the air pressure in Tahiti minus the air pressure in Darwin, Australia. You can see those locations here:



So, when the SOI is high, the air pressure is higher in the east Pacific than in the west Pacific. This is what we expect in a La Niña: that’s why the westward trade winds are strong then! Conversely, the SOI is low in the El Niño phase. This variation in the SOI is called the Southern Oscillation.
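
In case you’re curious about the exact recipe: the version of the SOI usually plotted (the Troup SOI used by the Australian Bureau of Meteorology) is, as far as I understand it, the Tahiti-minus-Darwin pressure difference standardized against its long-term record and scaled by 10. Here is a sketch of that convention:

```python
import statistics

def troup_soi(tahiti_mslp, darwin_mslp, history_diffs):
    """Standardized Southern Oscillation Index for one month.

    tahiti_mslp, darwin_mslp: this month's mean sea-level pressures (hPa).
    history_diffs: the long-term record of (Tahiti - Darwin) differences for
    the same calendar month, giving the climatological mean and spread.
    """
    diff = tahiti_mslp - darwin_mslp
    mean_diff = statistics.mean(history_diffs)
    sd_diff = statistics.pstdev(history_diffs)
    return 10.0 * (diff - mean_diff) / sd_diff    # the factor of 10 is conventional

# Strongly positive values go with La Niña conditions,
# strongly negative values with El Niño conditions.
```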

If you look at the graphs above, you’ll see how one looks almost like an upside-down version of the other. So, the El Niño/La Niña cycle is tightly linked to the Southern Oscillation.

Another thing you’ll see is that the ENSO cycle is far from perfectly periodic! Here’s a graph of the Southern Oscillation Index going back a lot further:



This graph was made by William Kessler. His explanations of the ENSO cycle are the first ones I really understood.

My own explanation here is a slow-motion, watered-down version of his. Any mistakes are, of course, mine. To conclude, I want to quote his discussion of theories about why an El Niño starts, and why it ends. As you’ll see, this part is a bit more technical. It involves three concepts I haven’t explained yet:

  • The "thermocline" is the border between the warmer surface water in the ocean and the cold deep water, 100 to 200 meters below the surface. During the La Niña phase, warm water is blown to the western Pacific, and cold water is pulled up to the surface of the eastern Pacific. So, the thermocline is deeper in the west than the east:

    When an El Niño occurs, the thermocline flattens out:

  • "Oceanic Rossby waves" are very low-frequency waves in the ocean’s surface and thermocline. At the ocean’s surface they are only 5 centimeters high, but hundreds of kilometers across. They move at about 10 centimeters/second, requiring months to years to cross the ocean! The surface waves are mirrored by waves in the thermocline, which are much larger, 10-50 meters in height. When the surface goes up, the thermocline goes down.
  • The "Madden-Julian Oscillation" or "MJO" is the largest form of variability in the tropical atmosphere on time scales of 30-90 days. It’s a pulse that moves east across the Indian Ocean and Pacific ocean at 4-8 meters/second. It manifests itself as patches of anomalously high rainfall and also anomalously low rainfall. Strong Madden-Julian Oscillations are often seen 6-12 months before an El Niño starts.

With this bit of background, let’s read what Kessler wrote:

There are two main theories at present. The first is that the event is initiated by the reflection from the western boundary of the Pacific of an oceanic Rossby wave (type of low-frequency planetary wave that moves only west). The reflected wave is supposed to lower the thermocline in the west-central Pacific and thereby warm the SST [sea surface temperature] by reducing the efficiency of upwelling to cool the surface. Then that makes winds blow towards the (slightly) warmer water and really start the event. The nice part about this theory is that the Rossby waves can be observed for months before the reflection, which implies that El Niño is predictable.

The other idea is that the trigger is essentially random. The tropical convection (organized large-scale thunderstorm activity) in the rising air tends to occur in bursts that last for about a month, and these bursts propagate out of the Indian Ocean (known as the Madden-Julian Oscillation). Since the storms are geostrophic (rotating according to the turning of the earth, which means they rotate clockwise in the southern hemisphere and counter-clockwise in the north), storm winds on the equator always blow towards the east. If the storms are strong enough, or last long enough, then those eastward winds may be enough to start the sloshing. But specific Madden-Julian Oscillation events are not predictable much in advance (just as specific weather events are not predictable in advance), and so to the extent that this is the main element, then El Niño will not be predictable.

In my opinion both these two processes can be important in different El Niños. Some models that did not have the MJO storms were successful in predicting the events of 1986-87 and 1991-92. That suggests that the Rossby wave part was a main influence at that time. But those same models have failed to predict the events since then, and the westerlies have appeared to come from nowhere. It is also quite possible that these two general sets of ideas are incomplete, and that there are other causes entirely. The fact that we have very intermittent skill at predicting the major turns of the ENSO cycle (as opposed to the very good forecasts that can be made once an event has begun) suggests that there remain important elements that await explanation.

Next time I’ll talk a bit about mathematical models of the ENSO and another climate cycle — but please keep in mind that these cycles are still far from fully understood!


To hate is to study, to study is to understand, to understand is to appreciate, to appreciate is to love. So maybe I’ll end up loving your theory. – John Archibald Wheeler


Cancún

12 December, 2010

What happened at the United Nations Climate Change Conference in Cancún this year? I’m trying to figure that out, and I could use your help.

But if you’re just as confused as I am, this is an easy place to start:

Climate talks wrap with hope for developing nations, Weekend Edition Saturday, National Public Radio.

Here’s what I’ve learned so far.

The good news is, first, that the negotiations didn’t completely collapse. That was a real fear.

Second, 190 countries agreed to start a Green Climate Fund to raise and disburse $100 billion per year to help developing countries deal with climate change… starting in 2020.

A good idea, but maybe too late. The World Bank estimates that the cost of adapting to a world that’s 2 °C warmer by 2050 will be about $75-100 billion per year. The International Energy Agency estimates that the cost of supporting clean energy technology in developing countries is $110 billion per year if we’re going to keep the temperature rise below 2 °C. But these organizations say we need to start now, not a decade from now!

And how to raise the money? The Prime Minister of Norway, Jens Stoltenberg, leads the UN committee that’s supposed to answer this question. He told the BBC that the best approach would be a price on carbon that begins to reflect the damage it does:

Carbon pricing has a double climate effect — it’s a huge source for revenue, but also gives the right incentives for reducing emissions by making it expensive to pollute. The more ambitious we are, the higher the price will be – so there’s a very close link between the ceiling we set for emissions and the price. We estimate that we need a price of about $20/25 per tonne to mobilise the $100bn.
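
A quick bit of arithmetic on Stoltenberg’s numbers; the comparison with total world emissions uses my own rough figure, not his:

```python
# How many tonnes of CO2 have to be priced to raise $100 billion per year?
target_revenue = 100e9        # dollars per year for the Green Climate Fund
for price in (20.0, 25.0):    # dollars per tonne, Stoltenberg's range
    print(target_revenue / price / 1e9)    # 4 to 5 billion tonnes per year

# For scale, global CO2 emissions are on the order of 30 billion tonnes per
# year (my rough figure), so this covers roughly a sixth of them.
```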

Third, our leaders made some steps towards saving the world’s forests. Every year, forests equal to the area of England get cut down. This has got to stop, for all sorts of reasons. For one thing, it causes 20% of the world’s greenhouse gas emissions — about the same as transportation worldwide!

Cancún set up a framework called REDD+, which stands for Reducing Emissions from Deforestation and Forest Degradation, with the cute little + standing for broader ecosystem conservation. This is supposed to create incentives to keep forests standing. But there’s a lot of work left. For example, while a $4.1 billion start-up fund is already in place, there’s no long-term plan for financing REDD+ yet.

The bad news? Well, the main bad news is that there’s still a gap between what countries have pledged to do to reduce carbon emissions, and what they’d need to do to keep the expected rise in temperature below 2 °C — or if you want a clearer goal, keeping CO2 concentrations below 450 parts per million.

But it’s not as bad as you might think… at least if you believe this chart put out by the Center for American Progress. They say:

We found that even prior to the Copenhagen climate summit, if all parties did everything they claimed they would do at the time, the world was only five gigatons of annual emissions shy of the estimated 17 gigatons of carbon dioxide or CO2 equivalent annual reductions needed to put us on a reasonable 2°C pathway. Since three gigatons of the projected reductions came from the economic downturn and improved projections on deforestation and peat emissions, the actual pledges of countries for additional reductions were slightly less than two-thirds of what was needed. But they were still not sufficient for the 2°C target.

and then:

After the Copenhagen Accord was finalized at the December 2009 climate summit, a January 2010 deadline was established for countries to submit pledges for actions by 2020 consistent with the accord’s 2°C goal. Two breakdowns of the pledges in February, and later in March, by Project Catalyst estimated that the five-gigaton gap had shrunk somewhat and more pledges had come in from developing countries. Part of the reason that pledges increased from developing countries was that the Copenhagen Accord had finally made a significant step forward on establishing a system of cooperation between developed and developing countries that had a chance at providing incentives for additional reductions.

And now, they say, the gap is down to 4 gigatons per year. This chart details it… click to make it bigger:



That 4-gigaton gap doesn’t sound so bad. But of course, this estimate assumes that pledges translate into reality!

So, the fingernail-biting saga of our planet continues…


Quantum Foundations Mailing List

9 December, 2010

Bob Coecke and Jamie Vicary have started a mailing list on “quantum foundations”.

They write:

It was agreed by many that the existence of a quantum foundations mailing list, with a wide scope and involving the broad international community, was long overdue. This moderated list (to avoid spam or abuse) will mainly distribute announcements of conferences and other international events in the area, as well as other relevant adverts such as jobs in the area. It is set up at Oxford University, which should provide a guarantee of stability and sustainability. The scope ranges from the mathematical end of quantum foundations research to the purely philosophical issues.

(UN)SUBSCRIBING INSTRUCTIONS:

To subscribe to the list, send a blank email to
quantum-foundations-subscribe@maillist.ox.ac.uk

To unsubscribe from the list, send a blank email to
quantum-foundations-unsubscribe@maillist.ox.ac.uk

Any complaints etc. can be sent to Bob Coecke and Jamie Vicary.

I have deleted their email addresses here, along with the address for posting articles to the list, to lessen the amount of spam these addresses get. But it’s easy enough to find Bob and Jamie’s addresses, and presumably when you subscribe you’ll be told how to post messages!


This Week’s Finds (Week 306)

7 December, 2010

This week I’ll interview another physicist who successfully made the transition from gravity to climate science: Tim Palmer.

JB: I hear you are starting to build a climate science research group at Oxford.  What led you to this point? What are your goals?

TP: I started my research career at Oxford University, doing a PhD in general relativity theory under the cosmologist Dennis Sciama (himself a student of Paul Dirac). Then I switched gear and have spent most of my career working on the dynamics and predictability of weather and climate, mostly working in national and international meteorological and climatological institutes. Now I’m back in Oxford as a Royal Society Research Professor in climate physics. Oxford has a lot of climate-related activities going on, both in basic science and in impact and policy issues. I want to develop activities in climate physics. Oxford has wonderful Physics and Mathematics Departments and I am keen to try to exploit human resources from these areas where possible.

The general area which interests me is in the area of uncertainty in climate prediction; finding ways to estimate uncertainty reliably and, of course, to reduce uncertainty. Over the years I have helped develop new techniques to predict uncertainty in weather forecasts. Because climate is a nonlinear system, the growth of initial uncertainty is flow dependent. Some days when the system is in a relatively stable part of state space, accurate weather predictions can be made a week or more ahead of time. In other more unstable situations, predictability is limited to a couple of days. Ensemble weather forecast techniques help estimate such flow dependent predictability, and this has enormous practical relevance.

How to estimate uncertainty in climate predictions is much more tricky than for weather prediction. There is, of course, the human element: how much we reduce greenhouse gas emissions will impact on future climate. But leaving this aside, there is the difficult issue of how to estimate the accuracy of the underlying computer models we use to predict climate.

To say a bit more about this, the problem is to do with how well climate models simulate the natural processes which amplify the anthropogenic increases in greenhouse gases (notably carbon dioxide). A key aspect of this amplification process is associated with the role of water in climate. For example, water vapour is itself a powerful greenhouse gas. If we were to assume that the relative humidity of the atmosphere (the percentage of the amount of water vapour at which the air would be saturated) was constant as the atmosphere warms under anthropogenic climate change, then humidity would amplify the climate change by a factor of two or more. On top of this, clouds — i.e. water in its liquid rather than gaseous form — have the potential to further amplify climate change (or indeed decrease it depending on the type or structure of the clouds). Finally, water in its solid phase can also be a significant amplifier of climate change. For example, sea ice reflects sunlight back to space. However as sea ice melts, e.g. in the Arctic, the underlying water absorbs more of the sunlight than before, again amplifying the underlying climate change signal.

We can approach these problems in two ways. Firstly we can use simplified mathematical models in which plausible assumptions (like the constant relative humidity one) are made to make the mathematics tractable. Secondly, we can try to simulate climate ab initio using the basic laws of physics (here, mostly, but not exclusively, the laws of classical physics). If we are to have confidence in climate predictions, this ab initio approach has to be pursued. However, unlike, say temperature in the atmosphere, water vapour and cloud liquid water have more of a fractal distribution, with both large and small scales. We cannot simulate accurately the small scales in a global climate model with fixed (say 100km) grid, and this, perhaps more than anything, is the source of uncertainty in climate predictions.

This is not just a theoretical problem (although there is some interesting mathematics involved, e.g. of multifractal distribution theory and so on). In the coming years, governments will be looking to spend billions on new infrastructure for society to adapt to climate change: more reservoirs, better flood defences, bigger storm sewers etc etc. It is obviously important that this money is spent wisely. Hence we need to have some quantitative and reliable estimate of certainty that in regions where more reservoirs are to be built, the climate really will get drier and so on.

There is another reason for developing quantitative methods for estimating uncertainty: climate geoengineering. If we spray aerosols in the stratosphere, or whiten clouds by spraying sea salt into them, we need to be sure we are not doing something terrible to our climate, like shutting off the monsoons, or decreasing rainfall over Amazonia (which might then make the rainforest a source of carbon for the atmosphere rather than a sink). Reliable estimates of uncertainty of regional impacts of geoengineering are going to be essential in the future.

My goals? To bring quantitative methods from physics and maths into climate decision making.  One area that particularly interests me is the application of nonlinear stochastic-dynamic techniques to represent unresolved scales of motion in the ab initio models. If you are interested to learn more about this, please see this book:

• Tim Palmer and Paul Williams, editors, Stochastic Physics and Climate Modelling, Cambridge U. Press, Cambridge, 2010.

JB: Thanks! I’ve been reading that book. I’ll talk about it next time on This Week’s Finds.

Suppose you were advising a college student who wanted to do something that would really make a difference when it comes to the world’s environmental problems.  What would you tell them?

TP: Well although this sounds a bit of a cliché, it’s important first and foremost to enjoy and be excited by what you are doing. If you have a burning ambition to work on some area of science without apparent application or use, but feel guilty because it’s not helping to save the planet, then stop feeling guilty and get on with fulfilling your dreams. If you work in some difficult area of science and achieve something significant, then this will give you a feeling of confidence that is impossible to be taught. Feeling confident in one’s abilities will make any subsequent move into new areas of activity, perhaps related to the environment, that much easier. If you demonstrate that confidence at interview, moving fields, even late in life, won’t be so difficult.

In my own case, I did a PhD in general relativity theory, and having achieved this goal (after a bleak period in the middle where nothing much seemed to be working out), I did sort of think to myself: if I can add to the pool of knowledge in this, traditionally difficult area of theoretical physics, I can pretty much tackle anything in science. I realize that sounds rather arrogant, and of course life is never as easy as that in practice.

JB: What if you were advising a mathematician or physicist who was already well underway in their career?  I know lots of such people who would like to do something "good for the planet", but feel that they’re already specialized in other areas, and find it hard to switch gears.  In fact I might as well admit it — I’m such a person myself!

TP: Talk to the experts in the field. Face to face. As many as possible. Ask them how your expertise can be put to use. Get them to advise you on key meetings you should try to attend.

JB: Okay.  You’re an expert in the field, so I’ll start with you.  How can my expertise be put to use?  What are some meetings that I should try to attend?

TP: The American Geophysical Union and the European Geophysical Union have big multi-session conferences each year which include mathematicians with an interest in climate. On top of this, mathematical science institutes are increasingly holding meetings to engage mathematicians and climate scientists. For example, the Isaac Newton Institute at Cambridge University is holding a six-month programme on climate and mathematics. I will be there for part of this programme. There have been similar programmes in the US and in Germany very recently.

Of course, as well as going to meetings, or perhaps before going to them, there is the small matter of some reading material. Can I strongly recommend the Working Group One report of the latest IPCC climate change assessments? WG1 is tasked with summarizing the physical science underlying climate change. Start with the WG1 Summary for Policymakers from the Fourth Assessment Report:

• Intergovernmental Panel on Climate Change, Climate Change 2007: The Physical Science Basis, Summary for Policymakers.

and, if you are still interested, tackle the main WG1 report:

• Intergovernmental Panel on Climate Change, Climate Change 2007: The Physical Science Basis, Cambridge U. Press, Cambridge, 2007.

There is a feeling that since the various so-called "Climategate" scandals, in which IPCC were implicated, climate scientists need to be more open about uncertainties in climate predictions and climate prediction models. But in truth, these uncertainties have always been openly discussed in the WG1 reports. These reports are absolutely not the alarmist documents many seem to think, and, I would say, give an extremely balanced picture of the science. The latest report dates from 2007.

JB: I’ve been slowly learning what’s in this report, thanks in part to Nathan Urban, whom I interviewed in previous issues of This Week’s Finds. I’ll have to keep at it.



You told me that there’s a big difference between the "butterfly effect" in chaotic systems with a few degrees of freedom, such as the Lorenz attractor shown above, and the "real butterfly effect" in systems with infinitely many degrees of freedom, like the Navier-Stokes equations, the basic equations describing fluid flow. What’s the main difference?

TP: Everyone knows, or at least think they know, what the butterfly effect is: the exponential growth of small initial uncertainties in chaotic systems, like the Lorenz system, after whom the butterfly effect was named by James Gleick in his excellent popular book:

• James Gleick, Chaos: Making a New Science, Penguin, London, 1998.

But in truth, this is not the butterfly effect as Lorenz had meant it (I knew Ed Lorenz quite well). If you think about it, the possible effect of a flap of a butterfly’s wings on the weather some days later, involves not only an increase in the amplitude of the uncertainty, but also the scale. If we think of a turbulent system like the atmosphere, comprising a continuum of scales, its evolution is described by partial differential equations, not a low order set of ordinary differential equations. Each scale can be thought of as having its own characteristic dominant Lyapunov exponent, and these scales interact nonlinearly.

If we want to estimate the time for a flap of a butterfly’s wings to influence a large scale weather system, we can imagine summing up all the Lyapunov timescales associated with all the scales from the small scales to the large scales. If this sum diverges, then very good, we can say it will take a very long time for a small scale error or uncertainty to influence a large-scale system. But alas, simple scaling arguments suggest that there may be situations (in 3 dimensional turbulence) where this sum converges. Normally, we think of convergence as a good thing, but in this case it means that the small scale uncertainty, no matter how small scale it is, can affect the accuracy of the large scale prediction… in finite time. This is quite different to the conventional butterfly effect in low order chaos, where arbitrarily long predictions can be made by reducing initial uncertainty to sufficiently small levels.
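
Here is the scaling argument behind that convergence, in schematic form; this is my paraphrase of the standard estimate, not Tim’s own derivation. In three-dimensional turbulence, Kolmogorov scaling suggests the eddy turnover time at scale $\ell$ goes like $\ell^{2/3}$, so if an error has to climb up through scales $\ell_n = \ell_0 2^{-n}$ with turnover times $\tau_n = \tau_0 2^{-2n/3}$, the total time is bounded by a geometric series:

$$\sum_{n=0}^{\infty} \tau_n \;\sim\; \tau_0 \sum_{n=0}^{\infty} 2^{-2n/3} \;=\; \frac{\tau_0}{1 - 2^{-2/3}} \;\approx\; 2.7\,\tau_0 .$$

Since this sum is finite, an error sitting at arbitrarily small scales can contaminate the largest scales within a few large-eddy turnover times, and shrinking the initial error does not buy an arbitrarily long forecast. (In two-dimensional turbulence the corresponding sum diverges, which is why the situation there is so different.)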

JB: What are the practical implications of this difference?

TP: Climate models are finite truncations of the underlying partial differential equations of climate. A crucial question is: how do solutions converge as the truncation gets better and better?  More practically, how many floating point operations per second (flops) does my computer need to have, in order that I can simulate the large-scale components of climate accurately. Teraflops, petaflops, exaflops? Is there an irreducible uncertainty in our ability to simulate climate no matter how many flops we have? Because of the "real" butterfly effect, we simply don’t know. This has real practical implications.

JB: Nobody has proved existence and uniqueness for solutions of the Navier-Stokes equations. Indeed the Clay Mathematics Institute is offering a million-dollar prize for settling this question. But meteorologists use these equations to predict the weather with some success. To mathematicians that might seem a bit strange. What do you think is going on here?

TP: Actually, for certain simplifications to the Navier-Stokes equations, such as making them hydrostatic (which damps acoustic waves) then existence and uniqueness can be proven. And for weather forecasting we can get away with the hydrostatic approximation for most applications. But in general existence and uniqueness haven’t been proven. The "real" butterfly effect is linked to this. Well obviously the Intergovernmental Panel on Climate Change can’t wait for the mathematicians to solve this problem, but as I tried to suggest above, I don’t think the problem is just an arcane mathematical conundrum, but rather may help us understand better what is possible to predict about climate change and what not.

JB:  Of course, meteorologists are really using a cleverly discretized version of the Navier-Stokes equations to predict the weather. Something vaguely similar happens in quantum field theory: we can use "lattice QCD" to compute the mass of the proton to reasonable accuracy, but nobody knows for sure if QCD makes sense in the continuum.  Indeed, there’s another million-dollar Clay Prize waiting for the person who can figure that out.   Could it be that sometimes a discrete approximation to a continuum theory does a pretty good job even if the continuum theory fundamentally doesn’t make sense?

TP: There you are! Spend a few years working on the continuum limit of lattice QCD and you may end up advising government on the likelihood of unexpected consequences on regional climate arising from some geoengineering proposal! The idea that two so apparently different fields could have elements in common is something bureaucrats find hard to get their heads round. We at the sharp end in science need to find ways of making it easier for scientists to move fields (even on a temporary basis) should they want to.

This reminds me of a story. When I was finishing my PhD, my supervisor, Dennis Sciama announced one day that the process of Hawking radiation, from black holes, could be understood using the Principle of Maximum Entropy Production in non-equilibrium thermodynamics. I had never heard of this Principle before, no doubt a gap in my physics education. However, a couple of weeks later, I was talking to a colleague of a colleague who was a climatologist, and he was telling me about a recent paper that purported to show that many of the properties of our climate system could be deduced from the Principle of Maximum Entropy Production. That there might be such a link between black hole theory and climate physics, was one reason that I thought changing fields might not be so difficult after all.

JB: To what extent is the problem of predicting climate insulated from the problems of predicting weather?  I bet this is a hard question, but it seems important.  What do people know about this?

TP: John Von Neumann was an important figure in meteorology (as well, for example, as in quantum theory). He oversaw a project at Princeton just after the Second World War, to develop a numerical weather prediction model based on a discretised version of the Navier-Stokes equations. It was one of the early applications of digital computers. Some years later, the first long-term climate models were developed based on these weather prediction models. But then the two areas of work diverged. People doing climate modelling needed to represent lots of physical processes: the oceans, the cryosphere, the biosphere etc, whereas weather prediction tended to focus on getting better and better discretised representations of the Navier-Stokes equations.

One rationale for this separation was that weather forecasting is an initial value problem whereas climate is a "forced" problem (e.g. how does climate change with a specified increase in carbon dioxide?). Hence, for example, climate people didn’t need to agonise over getting ultra accurate estimates of the initial conditions for their climate forecasts.

But the two communities are converging again. We realise there are lots of synergies between short term weather prediction and climate prediction. Let me give you one very simple example. Whether anthropogenic climate change is going to be catastrophic to society, or is something we will be able to adapt to without too many major problems, we need to understand, as mentioned above, how clouds interact with increasing levels of carbon dioxide. Clouds cannot be represented explicitly in climate models because they occur on scales that can’t be resolved due to computational constraints. So they have to be represented by simplified "parametrisations". We can test these parametrisations in weather forecast models. To put it crudely (to be honest too crudely) if the cloud parametrisations (and corresponding representations of water vapour) are systematically wrong, then the forecasts of tomorrow’s daily maximum temperature will also be systematically wrong.

To give another example, I myself for a number of years have been developing stochastic methods to represent truncation uncertainty in weather prediction models. I am now trying to apply these methods in climate prediction. The ability to test the skill of these stochastic schemes in weather prediction mode is crucial to having confidence in them in climate prediction mode. There are lots of other examples of where a synergy between the two areas is important.

JB: When we met recently, you mentioned that there are currently no high-end supercomputers dedicated to climate issues.  That seems a bit odd.  What sort of resources are there?  And how computationally intensive are the simulations people are doing now?

TP: By "high end" I mean very high end: that is, machines in the petaflop range of performance. If one takes the view that climate change is one of the gravest threats to society, then throwing all the resources that science and technology allows, to try to quantify exactly how grave this threat really is, seems quite sensible to me. On top of that, if we are to spend billions (dollars, pounds, euros etc.) on new technology to adapt to climate change, we had better make sure we are spending the money wisely — no point building new reservoirs if climate change will make your region wetter. So the predictions that it will get drier in such and such a place better be right. Finally, if we are to ever take these geoengineering proposals seriously we’d better be sure we understand the regional consequences. We don’t want to end up shutting off the monsoons! Reliable climate predictions really are essential.

I would say that there is no more computationally complex problem in science than climate prediction. There are two key modes of instability in the atmosphere, the convective instabilities (thunderstorms) with scales of kilometers and what are called baroclinic instabilities (midlatitude weather systems) with scales of thousands of kilometers. Simulating these two instabilities, and their mutual global interactions, is beyond the capability of current global climate models because of computational constraints. On top of this, climate models try to represent not only the physics of climate (including the oceans and the cryosphere), but the chemistry and biology too. That introduces considerable computational complexity in addition to the complexity caused by the multi-scale nature of climate.

By and large individual countries don’t have the financial resources (or at least they claim they don’t!) to fund such high end machines dedicated to climate. And the current economic crisis is not helping! On top of which, for reasons discussed above in relation to the "real" butterfly effect, I can’t go to government and say: "Give me a 100 petaflop machine and I will absolutely definitely be able to reduce uncertainty in forecasts of climate change by a factor of 10". In my view, the way forward may be to think about internationally funded supercomputing. So, just as we have internationally funded infrastructure in particle physics and astronomy, so too in climate prediction. Why not?

Actually, very recently the NSF in the US gave a consortium of climate scientists from the US, Europe and Japan, a few months of dedicated time on a top-end Cray XT4 computer called Athena. Athena wasn’t quite in the petaflop range, but not too far off, and using this dedicated time, we produced some fantastic results, otherwise unachievable, showing what the international community could achieve, given the computational resources. Results from the Athena project are currently being written up — they demonstrate what can be done where there is a will from the funding agencies.

JB: In a Guardian article on human-caused climate change you were quoted as saying "There might be a 50% risk of widespread problems or possibly only 1%.  Frankly, I would have said a risk of 1% was sufficient for us to take the problem seriously enough to start thinking about reducing emissions."

It’s hard to argue with that, but starting to think about reducing emissions is vastly less costly than actually reducing them.  What would you say to someone who replied, "If the risk is possibly just 1%, it’s premature to take action — we need more research first"?

TP: The implication of your question is that a 1% risk is just too small to worry about or do anything about. But suppose the next time you checked in to fly to Europe, and they said at the desk that there was a 1% chance that volcanic ash would cause the aircraft engines to fail mid flight, leading the plane to crash, killing all on board. Would you fly? I doubt it!

My real point is that in assessing whether emissions cuts are too expensive, given the uncertainty in climate predictions, we need to assess how much we value things like the Amazon rainforest, or of (preventing the destruction of) countries like Bangladesh or the African Sahel. If we estimate the damage caused by dangerous climate change — let’s say associated with a 4 °C or greater global warming — to be at least 100 times the cost of taking mitigating action, then it is worth taking this action even if the probability of dangerous climate change was just 1%. But of course, according to the latest predictions, the probability of realizing such dangerous climate changes is much nearer 50%. So in reality, it is worth cutting emissions if the value you place on current climate is comparable or greater than the cost of cutting emissions.
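
Just to spell out the expected-value arithmetic behind that argument, here is a minimal sketch with arbitrary placeholder numbers (only their ratio matters):

```python
# Expected-loss comparison behind the "even a 1% risk is enough" argument.
# The cost figures are arbitrary placeholders; only their ratio matters.
p_dangerous = 0.01             # assumed probability of dangerous climate change
mitigation_cost = 1.0          # cost of cutting emissions, in arbitrary units
damage_if_dangerous = 100.0    # damage if it happens, 100x the mitigation cost

expected_damage_avoided = p_dangerous * damage_if_dangerous
print(expected_damage_avoided >= mitigation_cost)    # True: mitigation breaks even
```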

Summarising, there are two key points here. Firstly, rational decisions can be made in the light of uncertain scientific input. Secondly, whilst we do certainly need more research, that should not itself be used as a reason for inaction.

Thanks, John, for allowing me the opportunity to express some views about climate physics on your web site.

JB: Thank you!


The most important questions of life are, for the most part, really only problems of probability. – Pierre Simon, Marquis de Laplace


Archimede

3 December, 2010

You may have heard the legend of how in 212 BC, Archimedes defended the port city of Syracuse against the invading Romans by setting their ships afire with the help of mirrors that concentrated the sun’s light. It sounds a bit implausible…

However, maybe you’ve heard of Comte Buffon — the guy who figured out how to compute the number pi by dropping needles on the floor. According to Michael Lahans, Buffon also did an experiment to see if Archimedes’ idea was practical. He got a lot of mirrors, each 8 × 10 inches in size, adjusted to focus their light at a distance of 150 feet. And according to Lahans:

The array turned out to be a formidable weapon. At 66 feet 40 mirrors ignited a creosoted plank and at 150 feet, 128 mirrors ignited a pine plank instantly. In another experiment 45 mirrors melted six pounds of tin at 20 feet.

Should we believe this? I don’t know. Some calculations could probably settle it. Or you could try the experiment yourself. If you do, tell us how it goes.
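
Since some calculations are invited, here is a very crude one. It assumes direct sunlight delivers roughly 1 kilowatt per square meter and that each flat mirror puts its light onto a common patch about the size of one mirror; aiming errors, reflection losses, and beam spreading are all ignored, so this is only an optimistic upper bound:

```python
# Crude upper bound on what Buffon's mirror array could deliver.
inch = 0.0254
mirror_area = (8 * inch) * (10 * inch)    # about 0.052 square meters per mirror
solar_flux = 1000.0                       # W per square meter of direct sunlight

for n_mirrors in (40, 128):
    power = n_mirrors * mirror_area * solar_flux     # watts gathered, at best
    concentration = n_mirrors                        # "suns" on a mirror-sized spot
    print(n_mirrors, "mirrors:", round(power), "W, about", concentration, "suns")

# 128 mirrors give several kilowatts and roughly 128 suns on the target,
# which is comfortably above the flux needed to ignite dry wood, so the
# report is at least not physically absurd.
```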

But there are also non-military uses of concentrated solar power. For example, the new power plant named after Archimedes, located in Sicily, fairly near Syracuse:

Archimede website.

Archimede solar power plant, Wikipedia.

It started operations on July 14th of this year. It produces 5 megawatts of electricity, enough for 4,500 families. That’s not much compared to the 1 gigawatt from a typical coal- or gas-powered plant. But it’s an interesting experiment.

It consists of about 50 parabolic trough mirrors, each 100 meters long, with a total area of around 30,000 square meters. They concentrate sunlight onto 5,400 metres of pipe. This pipe carries molten salts — potassium nitrate and sodium nitrate — at a temperature of up to 550 °C. This goes on to produce steam, which powers an electrical generator.
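
From those figures you can estimate the plant’s overall sunlight-to-electricity efficiency; the 1 kilowatt per square meter figure for strong direct sunlight is my assumption:

```python
# Rough sunlight-to-electricity efficiency of the Archimede plant.
mirror_area = 30000.0     # square meters of parabolic troughs
solar_flux = 1000.0       # W per square meter of strong direct sunlight (assumed)
electric_output = 5e6     # watts of electrical output

collected = mirror_area * solar_flux    # about 30 MW of sunlight at best
print(electric_output / collected)      # about 0.17, i.e. roughly 17% overall
```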

What’s new here is the use of molten salt instead of oil to carry the heat. Molten salt works at higher temperatures than oils, which only go up to about 390 °C. So, the system is more efficient. The higher temperature also lets you use steam turbines of the sort already common in gas-fired power plants. That could make it easier to replace conventional power plants with solar ones.

The project is being run by Enel, Europe’s third-largest energy provider. It was developed with the help of ENEA, an Italian agency that deals with new technologies, energy and sustainable economic development. At the Guardian, Carlo Ombello writes:

So why hasn’t this technology come before? There are both political and technical issues behind this. Let’s start with politics. The concept dates back to 2001, when Italian nuclear physicist and Nobel prize winner Carlo Rubbia, ENEA’s President at the time, first started Research & Development on molten salt technology in Italy. Rubbia has been a preeminent CSP [concentrated solar power] advocate for a long time, and was forced to leave ENEA in 2005 after strong disagreements with the Italian Government and its lack of convincing R&D policies. He then moved to CIEMAT, the Spanish equivalent of ENEA. Under his guidance, Spain has now become world leader in the CSP industry. Luckily for the Italian industry, the Archimede project was not abandoned and ENEA continued its development till completion.

There are also various technical reasons that have prevented an earlier development of this new technology. Salts tend to solidify at temperatures around 220°C, which is a serious issue for the continuous operation of a plant. ENEA and Archimede Solar Energy, a private company focusing on receiver pipes, developed several patents in order to improve the pipes’ ability to absorb heat, and the parabolic mirrors’ reflectivity, therefore maximising the heat transfer to the fluid carrier. The result of these and several other technological improvements is a top-notch world’s first power plant with a price tag of around 60 million euros. It’s a hefty price for a 5 MW power plant, even compared to other CSP plants, but there is overwhelming scope for a massive roll-out of this new technology at utility scale in sunny regions like Northern Africa, the Middle East, Australia, the US.

The last sentence is probably a reference to DESERTEC. We’ll have to talk about that sometime, too.

If you know anything about Archimede, or DESERTEC, or concentrated solar power, or you have any questions, let us know!

