Quantum Optics with Quantum Dots

7 September, 2010

Here at the CQT, Alexia Auffèves from the Institut Néel is talking about “Revisiting cavity quantum electrodynamics with quantum dots and semiconducting cavities (when decoherence becomes a resource)”.

She did her graduate work on experiments with Rydberg atoms — that is, atoms in highly excited states, which can be much larger than normal atoms. But then — according to her collaborator Marcelo Santos, who introduced her — she “went over to the dark side” and became a theorist. She now works in a quantum optics group at the Institut Néel in Grenoble, France.

This group does a lot of quantum optics with quantum dots. If you’ve never heard about quantum optics or quantum dots, I’ve got to tell you about them: they’re really quite cool! So, the next section will be vastly less sophisticated than Auffèves’ actual talk. Experts can hold their noses and skip straight to the section after that.

An elementary digression

Quantum optics is the branch of optics where we take into account the fact that light obeys the rules of quantum theory. So, light energy comes in discrete packets, called “photons”. You shouldn’t visualize a photon as a tiny pellet: light comes in waves, which can be very smeared out, but the strength of any particular wave comes in discrete amounts: no photons, one photon, two photons, etc. To really understand photons, you need to learn a theory called quantum electrodynamics, or QED for short.

A quantum dot is a very tiny piece of semiconductor, often stuck onto a semiconductor made of some different material. Here are a bunch of quantum dots made by an outfit called Essential Research:



 

What’s a semiconductor? It’s a material in which electrons and holes like to run around. Any matter made of atoms has a lot of electrons in it, of course. But a semiconductor can have some extra electrons, not attached to any atom, which roam around freely. It can also have some missing electrons, and these so-called holes can also roam around freely, just as if they were particles.

Now, let quantum optics meet semiconductor! If you hit a semiconductor with a photon, you can create an electron-hole pair: an extra electron here, and a missing one there. If you think about it, this is just a fancy way of talking about knocking an electron off one of the atoms! But it’s a useful way of thinking.

Imagine, for example, a line of kids each holding one apple in each hand. You knock an apple out of one kid’s hand and another kid catches it. Now you’ve got a “hole” in your line of apples, but also a kid with an extra apple further down the line. As the kids try to correct your disturbance by passing their apples around, you will see the extra apple move along, and also perhaps see the hole move along, until they meet and — poof! — annihilate each other.

To strain your powers of visualization: just like a photon, an electron or hole is not really a little pellet. Quantum mechanics applies, so every particle is a wave.

To add to the fun, electrons and holes often attract each other and sort of orbit each other before they annihilate. An electron-hole pair engaged in such a dance is called an exciton — and intriguingly, an exciton can itself roam around like a particle!

But in a quantum dot, it cannot. A quantum dot is too small for an exciton to “roam around”: it can only sit, trapped there, vibrating.

Next, let quantum optics meet quantum dot! If a quantum dot absorbs a photon, an exciton may form. Conversely, when the exciton decays — the electron and hole annihilating each other — the quantum dot may emit a photon.

Put this setup in a very, very tiny box with an open door — a “cavity” — and you can do all sorts of fun things.

Back to business

The quantum optics group at the Institut Néel does both experimental and theoretical work. Four members have come to visit the CQT. The group studies three main topics:

• Cavity QED with quantum dots and optical semiconducting cavities. There are interesting similarities and differences between quantum dots and isolated atoms.

• One-dimensional solid-state atoms. This kind of system can exhibit “giant optical nonlinearity”, and it can be stimulated with single photons.

• “Broad” atomic ensembles coupled to cavities, and their potential for solid-state quantum memories.

She will only talk about the first!

The simplest sort of cavity QED involves a 2-level system — for example, an atom that can hop between two energy levels — coupled to the electromagnetic field in a cavity.

But instead of an atom, Alexia Auffèves will consider a quantum dot made of one semiconducting material sitting on some other semiconducting material. An electron-hole pair created in the dot wants to stay in the dot, since it has less energy there. Like an atom, a quantum dot may be approximated by a 2-level system. But now the two “levels” are the state with nothing there, and the state with an electron-hole pair. The electron-hole pair has an energy of about 1 eV more than the state with nothing there.

Next, let’s put our quantum dot in a cavity. We want an ultrasmall cavity that has a high Q factor. Remember: when you’ve got a damped harmonic oscillator, a high Q factor means not much damping, so you get a tall, sharp resonance. For a cavity to have a high Q factor, we need light bouncing around inside to leak out slowly. That way, the cavity emits photons at quite sharply defined frequencies.

There are various ways to make tiny cavities with a Q factor from 1000 to 100,000. But the trick is getting a quantum dot to sit in the right place in the cavity!
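To put rough numbers on this, here is a little sketch of mine (not from the talk): a cavity’s linewidth is its resonant frequency divided by Q, and the photon lifetime is Q/(2πν). The 240 THz figure below is my own assumption, corresponding to a transition energy of roughly 1 eV.

```python
# Rough cavity figures of merit: linewidth and photon lifetime for a given Q.
# Illustrative numbers only -- not from the talk.

import math

def cavity_figures(freq_hz, q_factor):
    """Return (linewidth in Hz, photon lifetime in s) for a cavity."""
    linewidth = freq_hz / q_factor                 # FWHM of the resonance
    lifetime = q_factor / (2 * math.pi * freq_hz)  # energy decay time
    return linewidth, lifetime

# A transition near 1 eV corresponds to light at about 240 THz.
freq = 2.4e14  # Hz

for q in (1_000, 100_000):
    dv, tau = cavity_figures(freq, q)
    print(f"Q = {q:>7}: linewidth ~ {dv:.2e} Hz, photon lifetime ~ {tau:.2e} s")
```

So raising Q by a factor of 100 narrows the cavity line and stretches the photon’s dwell time by the same factor — which is why high-Q cavities emit at sharply defined frequencies.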

Now, a quantum dot acts differently than an isolated atom: after all, it’s attached to a hunk of semiconductor. So, our quantum dot interacts with electrons and holes and phonons in this stuff. This causes a lowering of its Q factor, hence a broadening of its spectral lines. But we can adjust how this works, so the dot acts like a 2-level system with a tunable environment.

This lets us probe a new regime for cavity QED! The theorists’ game: replace a 2-level atom by a quantum dot, and see what happens to standard cavity QED results.

For example, look at spontaneous emission by a quantum dot in a cavity.

For an atom in a cavity, the atomic spectral lines are usually much narrower than the cavity resonance modes. Then the atom emits light at essentially its natural frequencies, with a strength affected by the cavity resonance modes.

But with quantum dots, we can make the quantum dot spectral lines much wider than the cavity resonance modes! Then the dot seems to emit white light, as far as the cavity is concerned. But, the dot emits more photons at the cavity frequency: this is called “cavity feeding”. People have been working on understanding this since 2007.

I think I’ll stop here, though this is where the real meat of Auffèves’ talk actually starts! You can get a bit more of a sense of it from her abstract:

Abstract: Thanks to technological progress in the field of solid-state physics, a wide range of quantum optics experiments previously restricted to atomic physics can now be implemented using quantum dots (QDs) and semiconducting cavities. Still, a QD is far from being an isolated two-level atom. As a matter of fact, solid-state emitters are intrinsically coupled to the matrix they are embedded in, leading to decoherence processes that unavoidably broaden any transition between the discrete states of these artificial atoms. At the same time, very high quality factors and ultra-small modal volumes are achieved for state-of-the-art cavities. These new conditions open a so-far-unexplored regime for cavity quantum electrodynamics (CQED), where the emitter’s linewidth can be of the same order of magnitude as the cavity mode’s, or even broader. In this kind of exotic regime, unusual phenomena can be observed. In particular, we have shown [1] that photons spontaneously emitted by a QD coupled to a detuned cavity can efficiently be emitted at the cavity frequency, even if the detuning is large; whereas if the QD is continuously pumped, decoherence can induce lasing [2]. These effects clearly show that decoherence, far from being a drawback, is a fundamental resource in solid-state cavity quantum electrodynamics, offering appealing perspectives in the context of advanced nano-photonic devices.

And for more details, read the references:

• [1] Alexia Auffèves, Jean-Michel Gérard, and Jean-Philippe Poizat, Pure emitter’s dephasing: a resource for advanced single photon sources, PRA 79, 053838 (2009).

• [2] A. Auffèves, D. Gerace, J. M. Gérard, M. Franca Santos, L. C. Andreani, and J. P. Poizat, Controlling the dynamics of a coupled atom-cavity system by pure dephasing: basics and applications in nanophotonics, PRB 81, 245419 (2010).


Sustainability in Palo Alto

7 September, 2010

One thing I’d like to do on this blog is announce conferences suitable for mathematicians and physicists who are interested in climate change, sustainability, energy technology, and the like. I’m not really part of that crowd yet, so I don’t automatically hear about such conferences. If you know some coming up, please tell me. Once the Azimuth wiki gets up and running — soon, soon! — it’ll be easy for us to keep a list of them there.

Anyway, my friend Bruce Smith told me about this one:

Sustainability problems, January 10-14, 2011, American Institute of Mathematics, Palo Alto, California. Organized by Ellis Cumberbatch and Wei Kang.

I won’t go, since flying across the Pacific to a 4-day workshop on sustainability seems… well, painfully ironic. But maybe you will, or know someone who will. I’d love to hear how it goes.

I hear the American Institute of Mathematics really means business when they hold workshops. You have to pick a problem to solve, and the staff coaches you and monitors how much progress you’re making! Has anyone here been to one?

This workshop is about four problems:

• Sustaining aquifers

• Reserve requirements for large-scale renewable energy integration

• Optimization of energy harvesting techniques

• Liquid-vapor fluid flow through a Tesla turbine

You can read more details on their webpage. Here’s the plan:

Each problem will be described by an engineer or scientist who represents the industry or public agency and who is well versed in the problem area. Teams of mathematicians and graduate students will work intensively on problem formulation, analysis, and implementation. The style of the workshop will be a blend of the format of the Math-in-Industry Study Group introduced in Oxford and the focused, collaborative style of AIM workshops.


Probability Puzzles (Part 2)

5 September, 2010


Sometimes places become famous not because of what’s there, but because of the good times people have there.

There’s a somewhat historic bar in Singapore called the Colbar. Apparently that’s short for “Colonial Bar”. It’s nothing to look at: pretty primitive, basically a large shed with no air conditioning and a roofed-over patio made of concrete. Its main charm is that it’s “locked in a time warp”. It used to be set in the British army barracks, but it was moved in 2003. According to a food blog:

Thanks to the petitions of Colbar regulars and the subsequent intervention of the Jurong Town Council (JTC), who wanted to preserve its colourful history, Colbar was replicated and relocated just a stone’s throw away from the old site. Built brick by brick and copied to close exact, Colbar reopened its doors last year looking no different from what it used to be.

It’s now in one of the few remaining forested patches of Singapore. The Chinese couple who run it are apparently pretty well-off; they’ve been at it since the place opened in 1953, even before Singapore became a country.

Every Friday, a bunch of philosophers go there to drink beer, play chess, strum guitars and talk. Since my wife teaches in the philosophy department at NUS, we became part of this tradition, and it’s a lot of fun.

Anyway, the last time we went there, one of the philosophers posed this puzzle:

You know a woman who has two children. One day you see her walking by with one. You notice it’s a boy. What’s the probability that both her children are boys?

Of course I instantly thought of the probability puzzles we’ve discussed here. It’s not exactly any of the versions we have already talked about. So I thought you folks might enjoy it.

What’s the answer?
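If you’d rather compute than argue, here’s a Monte Carlo sketch of mine (not part of the puzzle as posed) that explores two natural readings of the setup; the modeling assumptions are spelled out in the comments, and the interesting thing is that they give different answers.

```python
# A Monte Carlo sketch of the puzzle (my own, with assumptions made explicit):
# the answer depends on HOW you came to see a boy, so we simulate two readings.

import random

def simulate(trials=200_000, seed=1):
    random.seed(seed)
    seen = [0, 0]     # [families counted, both-boy families] when a randomly
                      # chosen child turns out to be a boy
    atleast = [0, 0]  # same, when we merely learn "at least one is a boy"

    for _ in range(trials):
        kids = [random.choice("BG") for _ in range(2)]
        # Reading 1: she walks by with one child chosen at random; it's a boy.
        if random.choice(kids) == "B":
            seen[0] += 1
            seen[1] += kids.count("B") == 2
        # Reading 2: we somehow learn only that at least one child is a boy.
        if "B" in kids:
            atleast[0] += 1
            atleast[1] += kids.count("B") == 2

    return seen[1] / seen[0], atleast[1] / atleast[0]

p_seen, p_atleast = simulate()
print(f"P(both boys | saw a random child, a boy) ~ {p_seen:.3f}")
print(f"P(both boys | at least one is a boy)     ~ {p_atleast:.3f}")
```

Run it before reading the comments if you don’t want the numbers spoiled.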


How Long Would Uranium Last?

3 September, 2010

Physicists love a good ‘back-of-the-envelope calculation’: a quick calculation that lets you roughly estimate something. The goal is not precision: you’re doing fine if you come within a factor of 10 of the right answer. Instead, the goal is a kind of rough preliminary insight. After all, if you can’t figure out the answer roughly, you probably shouldn’t charge ahead with calculations that claim to get the answer accurately.

Here’s a guest post by Charlie Clingen. It’s a back-of-the-envelope calculation that tackles this question:

If we kept using electricity at a constant rate, how long would today’s uranium supply last if the world switched overnight to generating all electrical power with today’s nuclear technology?

His answer: 10 years.

Your first reaction may be a howl of indignation. After all, you’ve probably seen drastically longer times mentioned as answers to this question… or… umm… at least similar-sounding questions. For example, read:

• Martin Sevior, Is nuclear power a viable option for our energy needs?, The Oil Drum, March 1, 2007.

He says “unlike conventional oil, uranium resource exhaustion will not be an issue for the foreseeable future”. And he shows a truly heart-warming graph by M. King Hubbert, who is famous for his ‘peak oil’ theory:



So maybe Clingen’s answer is way off. It certainly involves a lot of simplifying assumptions that are clearly unrealistic. But because these assumptions are clearly stated, we can change them and see how the answer changes.

For example, his calculation assumes that the world has about 5 million tonnes of uranium ready and waiting to be mined and refined for a reasonable cost. This comes from the Red Book, put out by the International Atomic Energy Agency. But the Red Book also says that over 35 million tonnes could be lurking around somewhere if we’re clever enough to find it. If you’re willing to go with that higher figure, just multiply Charlie’s answer by 7. The new answer: 70 years. Of course, this neglects the fact that electricity usage may go up.

So, please take this in the right spirit: it’s not supposed to be a definitive answer, just a starting-point for more detailed work. Criticism is cheap: see if you can do better. I would love it if you did some more detailed and realistic calculations!

Indeed, I’m eager to get more ‘guest posts’ of many kinds here on Azimuth. The first, Terry Bollinger’s post on turning renewable energy into fuels, led to a great discussion that taught us a lot about this issue. Greg Egan’s Probability Puzzles were a fun way to sharpen our understanding of probability theory. Keep ’em coming!

But on with the show…


Question

If we kept using electricity at a constant rate, how long would today’s uranium supply last if the world switched overnight to generating all electrical power with today’s nuclear technology?

Objective

The goal is to get a rough estimate of how long currently known reserves of uranium will suffice to provide nuclear power, using today’s technologies, to satisfy all electrical power requirements, worldwide, at today’s level of consumption.

We consider a highly simplified base case using as inputs today’s known uranium reserves, today’s nuclear power technologies, and today’s total world-wide power requirements. This base case, although totally unrealistic, can be refined in a controlled, step-by-step fashion by easing restrictions and revising assumptions in ways that highlight major areas needing further investigation. Because this toy model requires only four inputs and yields a single output, it is easy to test various hypothetical situations quickly with mental or back-of-the-envelope computations, building an intuitive understanding of the critical assumptions, requirements, and issues involved.

The framework used here could easily be refined and extended to build a more useful model with multiple input parameters providing realistic and useful outputs.

There are two kinds of assumptions and restrictions recognized here:

1) Known assumptions and restrictions used to compute this rough estimate. These will be listed.

2) Unknown assumptions that are hidden in data taken from various sources. It is best to assume that all input data are inaccurate. Whenever possible, assumptions used to compute the values of input data should be discovered and stated.

All the details of technology, estimation, science, costing, and so on are hidden in the assumptions, and that is where the real difficulties in achieving a reasonable understanding of this problem are buried.

When possible, the sensitivity of the results upon various assumptions should be made explicit.

Results

Under the assumptions stated below, using the most conservative values and assuming that cited inputs are reasonably accurate, the current uranium supply would be depleted in about ten years (9.6 – 11.2 years).

Using a less conservative estimate for the known world-wide uranium reserve, the current uranium supply would be depleted in about 70 years.

Assumptions

There are many known assumptions underlying the calculation:

1. It is assumed that the changeover to nuclear power, supplying the total world-wide requirements for electrical power, will occur instantaneously — instant power plant construction, instant fuel availability, etc. Nuclear power replaces carbon-based power generation, hydroelectric power, wind power, etcetera. This is the most extreme assumption.

2. All costs are assumed to be unchanging and irrelevant. One exception: the total known uranium reserves estimates of 4.7 – 5.5 million tonnes are those that can now be mined at a price of US$ 130 per kilogram.

3. Total known reserves of uranium are assumed to be fixed. Even the “currently known” values which were used are dependent on numerous assumptions and predictions.

4. Mining and processing (cost, capacity, and time) of uranium is assumed not to be a limiting factor. Processed uranium fuel is assumed to be available as soon as needed.

5. Annual worldwide consumption of electrical power is assumed to be fixed at “today’s” rate. No population growth, no increased power requirements.

6. Power generation technology and efficiency are assumed to be fixed at today’s levels.

7. Note also that the calculation only concerns uranium, not thorium.

There are also unknown assumptions:

1. The estimates for total known uranium reserves world-wide are highly variable and based on assumptions that are not evaluated here.

2. The estimates of power production efficiency are also based on assumptions not evaluated here. Breeder reactor technology, if feasible for wide-scale deployment, might vastly improve efficiency.

3. The estimate of current-day worldwide total electricity consumption is also based on assumptions not evaluated here.

4. There must be further implicit assumptions that we have overlooked.

Analysis

The number of years that available reserves of uranium will support “today’s” worldwide electric power consumption is given by:

T = U × (E/U) / (E/T)

where

T = time for which world-wide supply of uranium will last

U = total known reserves of uranium

E/U = terawatt-hours of electricity generated per (metric) tonne of uranium

E/T = terawatt-hours of electricity consumed per year world-wide

This gives:

T = (4.7 – 5.5 million tonnes) × (38,760 TWh/million tonnes) / (19,000 TWh/year)

   = 9.6 – 11.2 years

Note. To get a less conservative estimate, using the value of 35 million tonnes for the total uranium reserve, as opposed to the 4.7 – 5.5 million tonne value, we can simply multiply our result by 7. Then

T = 10 years × 7 = 70 years

Similarly, if average power generation efficiency were assumed to double (instantaneously) we could multiply the result by 2; if world-wide power demand were to double, we could divide the result by 2. And if we were to ramp up any or all of the factors over a period of time — for example, if power production were to ramp up linearly over a period of 50 years, rather than instantaneously — a simple multiplicative factor can be computed to adjust the final result, T. In short, it is quite easy to do simple sensitivity analyses and to adjust results based on changes to input assumptions.
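The whole toy model fits in a few lines of code. Here is a sketch (function and variable names are mine; the numbers come from the text) that reproduces the base case and the adjustments just described:

```python
# Clingen's toy model in a few lines (names are mine, numbers from the text).

def depletion_time(reserves_mt, twh_per_mt=38_760, twh_per_year=19_000):
    """Years the uranium supply lasts: T = U * (E/U) / (E/T)."""
    return reserves_mt * twh_per_mt / twh_per_year

# Base case: 4.7 - 5.5 million tonnes of conventional reserves.
low, high = depletion_time(4.7), depletion_time(5.5)
print(f"Base case: {low:.1f} - {high:.1f} years")          # about 9.6 - 11.2

# Less conservative reserve estimate: 35 million tonnes.
print(f"35 Mt reserves: {depletion_time(35):.0f} years")   # about 70

# Simple sensitivity tweaks around a 5 Mt reserve figure:
print(f"Double efficiency: {depletion_time(5.0, twh_per_mt=2 * 38_760):.1f} years")
print(f"Double demand:     {depletion_time(5.0, twh_per_year=2 * 19_000):.1f} years")
```

Swapping in your own reserve, efficiency, or demand figures is then a one-line change, which is exactly the point of the exercise.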

Estimates for U, E/U, and E/T

U: total uranium reserves, in tonnes

Here we have two different estimates:

U = 4.7 – 5.5 million tonnes, or 35 million tonnes.

Sources: an International Atomic Energy Agency report from June, 2006:

Uranium 2005: Resources, Production and Demand.

also called the “Red Book”, estimates the total identified amount of conventional uranium stock, which can be mined for less than USD 130 per kg, to be about 4.7 million tonnes. This estimate is for 2005; underlying assumptions unknown.

The 2007 Red Book estimate was 5.5 million tonnes:

Uranium 2007: Resources, Production and Demand.

This book estimates the identified amount of conventional uranium resources which can be mined for less than US$ 130/kg to be about 5.5 million tonnes, up from the 4.7 million tonnes reported in 2005. Undiscovered resources, i.e. uranium deposits that can be expected to be found based on the geological characteristics of already discovered resources, have also risen to 10.5 million tonnes. This is an increase of 0.5 million tonnes compared to the previous edition of the report. The increases are due to both new discoveries and re-evaluations of known resources, encouraged by higher prices.

It’s worth noting that the 2006 Red Book says: “However, world uranium resources in total are considered to be much higher. Based on geological evidence and knowledge of uranium in phosphates the study considers more than 35 million tonnes is available for exploitation.”

E/U: energy per tonne of uranium

Estimate:

E/U = (2,558 TWh/year) / (0.066 million tonnes/year) = 38,760 TWh/million tonnes

Source:

Press release by OECD Nuclear Energy Agency, June 3, 2008.

which says:

At the end of 2006, world uranium production (39 603 tonnes) provided about 60% of world reactor requirements (66 500 tonnes) for the 435 commercial nuclear reactors in operation. The gap between production and requirements was made up by secondary sources drawn from government and commercial inventories (such as the dismantling of over 12 000 nuclear warheads and the re-enrichment of uranium tails). Most secondary resources are now in decline and the gap will increasingly need to be closed by new production.

The 2009 estimate for nuclear power generation is given as 2,558 TWh (terawatt-hours) (see below).

Comparison: an unsourced webpage at the Argonne National Labs says: “One ton of natural uranium can produce more than 40 million kilowatt-hours of electricity.”

This is roughly consistent with the 38,760 TWh/million tonnes used here.

E/T: Worldwide electrical power usage, in terawatt-hours/year

Estimate:

E/T = 19,000 TWh/year

Source:

World Nuclear News, May 5, 2010.

states that last year, nuclear power generated 2,558 TWh of electricity, comprising 13-14% of the world’s electricity demand. This suggests an annual world-wide rate of total electricity consumption in 2009 of around 19,000 TWh.

Thus the total world-wide electrical energy consumption for 2009 was estimated (by this source) at 19,000 terawatt-hours, corresponding to a power consumption rate of 19,000 terawatt-hours/year.

Also, the nuclear power generated in 2009 was estimated at 2,558 TWh (terawatt-hours).

Comparison: Wikipedia lists information from the US Energy Information Administration saying the total electrical power usage in 2007 was 17,100 TWh/year. This is roughly consistent with the above value of 19,000 TWh.


Hi, it’s John again. I don’t like “terawatt-hours per year” since this unit of power is not part of the standard metric system, like “watts” or “terawatts”. So let me express Clingen’s assumptions in metric:

U = 4.7 – 5.5 million tonnes, or 35 million tonnes, depending on assumptions.

E/T = 2.1 terawatts

E/U = 140 terajoules / tonne = 140 gigajoules / kilogram

By the way, because I spent a lot of time doing pure mathematics, you should not trust my ability to multiply numbers correctly. Check my work — and Charlie Clingen’s, too.
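Taking up that invitation, here is a quick numerical check of the conversions (a sketch of mine; I take a year to be 8766 hours, i.e. 365.25 days):

```python
# Checking John's metric conversions of Clingen's figures (my own sketch).

HOURS_PER_YEAR = 365.25 * 24   # 8766 h
J_PER_TWH = 1e12 * 3600        # one terawatt-hour in joules

# E/T: 19,000 TWh/year as a steady power in terawatts.
power_tw = 19_000 / HOURS_PER_YEAR
print(f"E/T ~ {power_tw:.2f} TW")       # close to the 2.1 TW quoted above

# E/U: 38,760 TWh per million tonnes, in gigajoules per kilogram.
joules_per_tonne = 38_760 * J_PER_TWH / 1e6   # J per tonne of uranium
gj_per_kg = joules_per_tonne / 1_000 / 1e9    # 1 tonne = 1000 kg
print(f"E/U ~ {gj_per_kg:.0f} GJ/kg")   # about 140 GJ/kg, as stated
```

Both conversions come out within rounding of the figures above.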


Cap-and-trade in China?

31 August, 2010

I’m a bit slow on the uptake here, but this is potentially a very big deal, so better late than never:

• Li Jing, Carbon trading in pipeline, China Daily, July 22, 2010.

The country is set to begin domestic carbon trading programs during its 12th Five-Year Plan period (2011-2015) to help it meet its 2020 carbon intensity target.

The decision was made at a closed-door meeting chaired by Xie Zhenhua, deputy director of the National Development and Reform Commission (NDRC), and attended by officials from related ministries, enterprises, environmental exchanges and think tanks, a participant told China Daily on Wednesday on condition of anonymity.

“The consensus that a domestic carbon-trading scheme is essential was reached, but a debate is still ongoing among experts and industries regarding what approach should be adopted,” the source said.

The meeting concluded that such efforts are self-imposed and should be strictly separated from ongoing international negotiations for a successor to the Kyoto Protocol to fight global warming, the source said.

As a developing country, China does not shoulder legally binding responsibilities to reduce carbon emissions, according to the basic principle set by the United Nations Framework Convention on Climate Change.

Putting a price on carbon is a crucial step for the country to employ the market to reduce its carbon emissions and genuinely shift to a low-carbon economy, industry analysts said.

China has mostly relied on administrative tools to realize its 20 percent energy intensity reduction target from 2006 to 2010. To that effect, the country’s top 1,000 energy consumers have signed contracts with the central government to improve their energy efficiency.

But with rising domestic energy demand, administrative measures are too expensive for the country to meet its future energy conservation targets — something that was also agreed at the meeting, said Tang Renhu from the low-carbon center at China Datang Corporation who also joined the discussion.

Although China has refuted the International Energy Agency’s label of being the world’s top energy consumer, its energy consumption for 2009 stood at 2.132 billion tons of oil equivalent, according to the National Bureau of Statistics.

“The market-based carbon-trading schemes will be a cost-effective supplement to administrative means,” said Yu Jie, an independent policy observer who previously worked for several international climate-related institutes.

Tang [Tang Renhu from the low-carbon center at the China Datang Corporation] also said the differences are centered on whether the pilot carbon trade projects should start from a selected industry, or a certain area.

Possible sectors for piloting carbon trade projects include carbon-intensive industries such as coal-fired power generation, Tang said.

One of the proposals involves setting an absolute cap on carbon dioxide emissions in a certain area or industry. Others argue that the country’s carbon intensity target can be converted to some carbon-related allowances for trading schemes.

China has pledged to cut its carbon emissions per unit of economic growth by 40 to 45 percent by 2020 from 2005 levels.

Yu said it would be very complicated to work out a trading scheme that allocates the carbon-related emission permits among the enterprises in an open and fair manner.

“My suggestion is that the number of participating enterprises should be limited, as the goal of pilot trading is to try out the rules and establish a mechanism especially suitable for China,” Yu said.

As always, the devil is in the details.


Bjørn Lomborg’s New Book

31 August, 2010

Arguments about the ‘skeptical environmentalist’ Bjørn Lomborg will doubtless take a new turn with the news that he’s written a new book, Smart Solutions to Climate Change: Comparing Costs and Benefits, which calls for tens of billions of dollars a year to be invested in tackling climate change. “Investing $100bn annually would mean that we could essentially resolve the climate change problem by the end of this century,” he now says.

• Juliette Jowit, Bjørn Lomborg: the dissenting climate change voice who changed his tune, The Guardian, August 30, 2010.

• Juliette Jowit, Bjørn Lomborg: $100bn a year needed to fight climate change, The Guardian, August 30, 2010.

A quote from the first article:

… he is still deeply critical of the dominant, cutting-carbon approach, which four of the five economists who were asked to rank the options put at the bottom of their lists. Only Nancy Stokey, of the University of Chicago, ranked lower- and mid-level carbon taxes more highly, around the middle of her list. Instead, the book suggests the best policies would be investment in clean technology research and development, and more climate engineering development work. He suggests this could be funded by a $7-a-tonne tax on carbon emissions, which he says would raise $250bn a year. Of this, $100bn could be spent on clean-tech R&D, about $1bn on climate engineering, $50bn on adapting to changes (building sea defences, for example), and the remaining $99bn or so on “getting virtually everybody on the planet healthcare, basic education, clean drinking water, and so on. It seems a pretty good deal,” he says.

A quote from Lomborg, taken from the first article:

“If we care about the environment and about leaving this planet and its inhabitants with the best possible future, we actually have only one option: we all need to start seriously focusing, right now, on the most effective ways to fix global warming.”

I’m not particularly interested in arguments about a book that hasn’t appeared yet, so I won’t enable comments on this blog entry.

But when it shows up, let’s read it — and then let’s talk about it.


This Week’s Finds (Week 301)

27 August, 2010

The first 300 issues of This Week’s Finds were devoted to the beauty of math and physics. Now I want to bite off a bigger chunk of reality. I want to talk about all sorts of things, but especially how scientists can help save the planet. I’ll start by interviewing some scientists with different views on the challenges we face — including some who started out in other fields, because I’m trying to make that transition myself.

By the way: I know “save the planet” sounds pompous. As George Carlin joked: “Save the planet? There’s nothing wrong with the planet. The planet is fine. The people are screwed.” (He actually put it a bit more colorfully.)

But I believe it’s more accurate when he says:

I think, to be fair, the planet probably sees us as a mild threat. Something to be dealt with. And I am sure the planet will defend itself in the manner of a large organism, like a beehive or an ant colony, and muster a defense.

I think we’re annoying the biosphere. I’d like us to become less annoying, both for its sake and our own. I actually considered using the slogan how scientists can help humans be less annoying — but my advertising agency ran a focus group, and they picked how scientists can help save the planet.

Besides interviewing people, I want to talk about where we stand on various issues, and what scientists can do. It’s a very large task, so I’m really hoping lots of you reading this will help out. You can explain stuff, correct mistakes, and point me to good sources of information. With a lot of help from Andrew Stacey, I’m starting a wiki where we can collect these pointers. I’m hoping it will grow into something interesting.

But today I’ll start with a brief overview, just to get things rolling.

In case you haven’t noticed: we’re heading for trouble in a number of ways. Our last two centuries were dominated by rapid technological change and a soaring population:

The population is still climbing fast, though the percentage increase per year is dropping. Energy consumption per capita is also rising. So, from 1980 to 2007 the world-wide usage of power soared from 10 to 16 terawatts.
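As a rough check on that growth in power usage: going from 10 to 16 terawatts over the 27 years from 1980 to 2007 works out to a bit under 2% per year of compound growth. Here's a minimal sketch of that arithmetic (the compound-growth formula is standard; the terawatt figures are the ones quoted above):

```python
def cagr(start, end, years):
    """Compound annual growth rate: the constant yearly rate that
    takes `start` to `end` over `years` years."""
    return (end / start) ** (1.0 / years) - 1.0

# World power usage went from 10 TW in 1980 to 16 TW in 2007.
rate = cagr(10.0, 16.0, 2007 - 1980)
print(f"{rate:.1%} per year")
```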

96% of this power now comes from fossil fuels. So, we’re putting huge amounts of carbon dioxide into the air: 30 billion metric tons in 2007. So, the carbon dioxide concentration of the atmosphere is rising at a rapid clip: from about 290 parts per million before the industrial revolution, to about 370 in the year 2000, to about 390 now:



 

As you’d expect, temperatures are rising:



 

But how much will they go up? The ultimate amount of warming will largely depend on the total amount of carbon dioxide we put into the air. The research branch of the National Academy of Sciences recently put out a report on these issues:

• National Research Council, Climate Stabilization Targets: Emissions, Concentrations, and Impacts over Decades to Millennia, 2010.

Here are their estimates:



 

You’ll note there’s lots of uncertainty, but a rough rule of thumb is that each doubling of carbon dioxide will raise the temperature around 3 degrees Celsius. Of course people love to argue about these things: you can find reasonable people who’ll give a number anywhere between 1.5 and 4.5 °C, and unreasonable people who say practically anything. We’ll get into this later, I’m sure.
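The rule of thumb above is logarithmic: each doubling adds the same amount of warming, so the expected equilibrium warming is roughly the sensitivity times the base-2 logarithm of the concentration ratio. Here's a minimal sketch of that formula — the 290 ppm preindustrial baseline and 390 ppm current figure come from the text, the 3 °C per doubling is the rule of thumb, and the function name is just for illustration:

```python
import math

def warming(c_now, c_ref=290.0, sensitivity=3.0):
    """Rough equilibrium warming in degrees C for a CO2 concentration
    of c_now ppm, assuming `sensitivity` degrees per doubling relative
    to the preindustrial baseline c_ref."""
    return sensitivity * math.log2(c_now / c_ref)

print(warming(390))        # today's ~390 ppm: about 1.3 degrees C
print(warming(2 * 290))    # one full doubling: 3.0 degrees C
```

Of course this is equilibrium warming for a sustained concentration, not the transient warming we'd see right away.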

But anyway: if we keep up “business as usual”, it’s easy to imagine us doubling the carbon dioxide sometime this century, so we need to ask: what would a world 3 °C warmer be like?

It doesn’t sound like much… until you realize that the Earth was only about 6 °C colder during the last ice age, and the Antarctic had no ice the last time the Earth was about 4 °C warmer. You also need to bear in mind the shocking suddenness of the current rise in carbon dioxide levels:



You can see several ice ages here — or technically, ‘glacial periods’. Carbon dioxide concentration and temperature go hand in hand, probably due to some feedback mechanisms that make each influence the other. But the scary part is the vertical line on the right where the carbon dioxide shoots up from 290 to 390 parts per million — instantaneously from a geological point of view, and to levels not seen for a long time. Species can adapt to slow climate changes, but we’re trying a radical experiment here.

But what, specifically, could be the effects of a world that’s 3 °C warmer? You can get some idea from the National Research Council report. Here are some of their predictions. I think it’s important to read these, to see that bad things will happen, but the world will not end. Psychologically, it’s easy to avoid taking action if you think there’s no problem — but it’s also easy if you think you’re doomed and there’s no point.

Between their predictions (in boldface) I’ve added a few comments of my own. These comments are not supposed to prove anything. They’re just anecdotal examples of the kind of events the report says we should expect.

For 3 °C of global warming, 9 out of 10 northern hemisphere summers will be “exceptionally warm”: warmer in most land areas than all but about 1 of the summers from 1980 to 2000.

This summer has certainly been exceptionally warm: for example, worldwide, it was the hottest June in recorded history, while July was the second hottest, beaten out only by July 2003. Temperature records have been falling like dominoes. This is a taste of the kind of thing we might see.

Increases of precipitation at high latitudes and drying of the already semi-arid regions are projected with increasing global warming, with seasonal changes in several regions expected to be about 5-10% per degree of warming. However, patterns of precipitation show much larger variability across models than patterns of temperature.

Back home in southern California we’re in our fourth year of drought, which has led to many wildfires.

Large increases in the area burned by wildfire are expected in parts of Australia, western Canada, Eurasia and the United States.

We are already getting some unusually intense fires: for example, the Black Saturday bushfires that ripped through Victoria in February 2009, the massive fires in Greece in the summer of 2007, and the hundreds of wildfires that broke out in Russia this July.

Extreme precipitation events — that is, days with the top 15% of rainfall — are expected to increase by 3-10% per degree of warming.

The extent to which these events cause floods, and the extent to which these floods cause serious damage, will depend on many complex factors. But today it is hard not to think about the floods in Pakistan, which left about 20 million homeless, and ravaged an area equal to that of California.

In many regions the amount of flow in streams and rivers is expected to change by 5-15% per degree of warming, with decreases in some areas and increases in others.

The total number of tropical cyclones should decrease slightly or remain unchanged. Their wind speed is expected to increase by 1-4% per degree of warming.

It’s a bit counterintuitive that warming could decrease the number of cyclones, while making them stronger. I’ll have to learn more about this.

The annual average sea ice area in the Arctic is expected to decrease by 15% per degree of warming, with more decrease in the summertime.

The area of Arctic ice reached a record low in the summer of 2007, and the fabled Northwest Passage opened up for the first time in recorded history. Then the ice area bounced back. This year it was low again… but what matters more is the overall trend:



 

Global sea level has risen by about 0.2 meters since 1870. The sea level rise by 2100 is expected to be at least 0.6 meters due to thermal expansion and loss of ice from glaciers and small ice caps. This could be enough to permanently displace as many as 3 million people — and raise the risk of floods for many millions more. Ice loss is also occurring in parts of Greenland and Antarctica, but the effect on sea level in the next century remains uncertain.

Up to 2 degrees of global warming, studies suggest that crop yield gains and adaptation, especially at high latitudes, could balance losses in tropical and other regions. Beyond 2 degrees, studies suggest a rise in food prices.

The first sentence there is the main piece of good news — though not if you’re a poor farmer in central Africa.

Increased carbon dioxide also makes the ocean more acidic and lowers the ability of many organisms to make shells and skeleta. Seashells, coral, and the like are made of aragonite, one of the two crystal forms of calcium carbonate. North polar surface waters will become under-saturated for aragonite if the level of carbon dioxide in the atmosphere rises to 400-450 parts per million. Then aragonite will tend to dissolve, rather than form from seawater. For south polar surface waters, this effect will occur at 500-660 ppm. Tropical surface waters and deep ocean waters are expected to remain supersaturated for aragonite throughout the 21st century, but coral reefs may be negatively impacted.

Coral reefs are also having trouble due to warming oceans. For example, this summer there was a mass dieoff of corals off the coast of Indonesia due to ocean temperatures that were 4 °C higher than average.

Species are moving toward the poles to keep cool: the average shift over many types of terrestrial species has been 6 kilometers per decade. The rate of extinction of species will be enhanced by climate change.

I have a strong fondness for the diversity of animals and plants that grace this planet, so this particularly perturbs me. The report does not venture a guess for how many species may go extinct due to climate change, probably because it’s hard to estimate. However, it states that the extinction rate is now roughly 500 times what it was before humans showed up. The extinction rate is measured in extinctions per million years per species. For mammals, it’s shot up from roughly 0.1-0.5 to roughly 50-200. That’s what I call annoying the biosphere!

So, that’s a brief summary of the problems that carbon dioxide emissions may cause. There’s just one more thing I want to say about this now.

Once carbon dioxide is put into the atmosphere, about 50% of it will stay there for decades. About 30% of it will stay there for centuries. And about 20% will stay there for thousands of years:
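One crude way to capture those three time scales is a sum of decaying exponentials plus a constant. The sketch below is purely illustrative — the coefficients come from the 50/30/20 split above, but the e-folding times are assumptions I've picked to stand in for "decades" and "centuries", not Wigley's actual fit:

```python
import math

def airborne_fraction(t_years):
    """Rough fraction of an emitted CO2 pulse still in the air after
    t_years.  Illustrative three-box model: ~50% is absorbed over
    decades, ~30% over centuries, and ~20% is effectively permanent
    on human time scales."""
    return (0.5 * math.exp(-t_years / 50.0)     # decades-scale uptake
            + 0.3 * math.exp(-t_years / 500.0)  # centuries-scale uptake
            + 0.2)                              # essentially permanent

for t in (0, 100, 1000):
    print(t, round(airborne_fraction(t), 2))
```

The key qualitative point survives any reasonable choice of time constants: the curve never decays to zero, which is what "the carbon we burn will haunt our skies essentially forever" means quantitatively.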



This particular chart is based on some 1993 calculations by Wigley. Later calculations confirm this idea: the carbon we burn will haunt our skies essentially forever:

• Mason Inman, Carbon is forever, Nature Reports Climate Change, 20 November 2008.

This is why we’re in serious trouble. In the above article, James Hansen puts it this way:

Because of this long CO2 lifetime, we cannot solve the climate problem by slowing down emissions by 20% or 50% or even 80%. It does not matter much whether the CO2 is emitted this year, next year, or several years from now. Instead … we must identify a portion of the fossil fuels that will be left in the ground, or captured upon emission and put back into the ground.

But I think it’s important to be more precise. We can put off global warming by reducing carbon dioxide emissions, and that may be a useful thing to do. But to prevent it, we have to cut our usage of fossil fuels to a very small level long before we’ve used them up.



Theoretically, another option is to quickly deploy new technologies to suck carbon dioxide out of the air, or cool the planet in other ways. But there’s almost no chance such technologies will be practical soon enough to prevent significant global warming. They may become important later on, after we’ve already screwed things up. We may be miserable enough to try them, even though they may carry significant risks of their own.

So now, some tough questions:

If we decide to cut our usage of fossil fuels dramatically and quickly, how can we do it? How should we do it? What’s the least painful way? Or should we just admit that we’re doomed to global warming and learn to live with it, at least until we develop technologies to reverse it?

And a few more questions, just for completeness:

Could this all be just a bad dream — or more precisely, a delusion of some sort? Could it be that everything is actually fine? Or at least not as bad as you’re saying?

I won’t attempt to answer any of these now. We’ll have to keep coming back to them, over and over.

So far I’ve only talked about carbon dioxide emissions. There are lots of other problems we should tackle, too! But presumably many of these are just symptoms of some deeper underlying problem. What is this deeper problem? I’ve been trying to figure that out for years. Is there any way to summarize what’s going on, or is it just a big complicated mess?

Here’s my attempt at a quick summary: the human race makes big decisions based on an economic model that ignores many negative externalities.

A ‘negative externality’ is, very roughly, a way in which my actions impose a cost on you, for which I don’t pay any price.

For example: suppose I live in a high-rise apartment and my toilet breaks. Instead of fixing it, I realize that I can just use a bucket — and throw its contents out the window! Whee! If society has no mechanism for dealing with people like me, I pay no price for doing this. But you, down there, will be very unhappy.

This isn’t just theoretical. Once upon a time in Europe there were few private toilets, and people would shout “gardyloo!” before throwing their waste down to the streets below. In retrospect that seems disgusting, but many of the big problems that afflict us now can be seen as the result of equally disgusting externalities. For example:

Carbon dioxide pollution caused by burning fossil fuels. If the expected costs of global warming and ocean acidification were included in the price of fossil fuels, other sources of energy would more quickly become competitive. This is the idea behind a carbon tax or a ‘cap-and-trade program’ where companies pay for permits to put carbon dioxide into the atmosphere.

Dead zones. Put too much nitrogen and phosphorus in a river, and lots of algae will grow in the ocean near the river’s mouth. When the algae die and rot, the water runs out of dissolved oxygen, and fish cannot live there. Then we have a ‘dead zone’. Dead zones are expanding and increasing in number. For example, there’s one about 20,000 square kilometers in size near the mouth of the Mississippi River. Hog farming, chicken farming and runoff from fertilized crop lands are largely to blame.

Overfishing. Since there is no ownership of fish, everyone tries to catch as many fish as possible, even though this is depleting fish stocks to the point of near-extinction. There’s evidence that populations of all big predatory ocean fish have dropped 90% since 1950. Populations of cod, bluefin tuna and many other popular fish have plummeted, despite feeble attempts at regulation.

Species extinction due to habitat loss. Since the economic value of intact ecosystems has not been fully reckoned, in many parts of the world there’s little price to pay for destroying them.

Overpopulation. Rising population is a major cause of the stresses on our biosphere, yet it costs less to have your own child than to adopt one. (However, a pilot project in India is offering cash payments to couples who put off having children for two years after marriage.)

One could go on; I haven’t even bothered to mention many well-known forms of air and water pollution. The Acid Rain Program in the United States is an example of how people eliminated an externality: they imposed a cap-and-trade system on sulfur dioxide pollution.

Externalities often arise when we treat some resource as essentially infinite — for example fish, or clean water, or clean air. We thus impose no cost for using it. This is fine at first. But because this resource is free, we use more and more — until it no longer makes sense to act as if we have an infinite amount. As a physicist would say, the approximation breaks down, and we enter a new regime.

This is happening all over the place now. We have reached the point where we need to treat most resources as finite and take this into account in our economic decisions. We can’t afford so many externalities. It is irrational to let them go on.

But what can you do about this? Or what can I do?

We can do the things anyone can do. Educate ourselves. Educate our friends. Vote. Conserve energy. Don’t throw buckets of crap out of apartment windows.

But what can we do that maximizes our effectiveness by taking advantage of our special skills?

Starting now, a large portion of This Week’s Finds will be the continuing story of my attempts to answer this question. I want to answer it for myself. I’m not sure what I should do. But since I’m a scientist, I’ll pose the question a bit more broadly, to make it a bit more interesting.

How scientists can help save the planet — that’s what I want to know.


Addendum: In the new This Week’s Finds, you can often find the source for a claim by clicking on the nearest available link. This includes the figures. Four of the graphs in this issue were produced by Robert A. Rohde and more information about them can be found at Global Warming Art.


During the journey we commonly forget its goal. Almost every profession is chosen as a means to an end but continued as an end in itself. Forgetting our objectives is the most frequent act of stupidity. — Friedrich Nietzsche

