Eddy Who?

25 August, 2011

Or: A very short introduction to turbulence

guest post by Tim van Beek

Have a look at this picture:

Then look at this one:

Do they look similar?

They should! They are both examples of a Kelvin-Helmholtz instability.

The first graphic is a picture of billow clouds (the fancier name is altostratus undulatus clouds):

The picture is taken from:

• C. Donald Ahrens: Meteorology Today, 9th edition, Brooks/Cole, Kentucky, 2009.

The second graphic:

shows a lab experiment and is taken from:

• G. L. Brown and A. Roshko, On density effects and large structure in turbulent mixing layers, Journal of Fluid Mechanics 64 (1974), 775-816.

Isn’t it strange that clouds in the sky would show the same pattern as some gases in a small laboratory experiment? The reason for this is not fully understood today. In this post, I would like to talk a little bit about what is known.

Matter that tries to get out of its own way

Fluids like water and air can be well described by Newton’s laws of classical mechanics. When you start learning about classical mechanics, you consider discrete masses, most of the time. Billiard balls, for example. But it is possible to formulate Newton’s laws of motion for fluids by treating them as ‘infinitely many infinitesimally small’ billiard balls, all pushing and rubbing against each other and therefore trying to get out of the way of each other.

If we do this, we get some equations describing fluid flow: the Navier-Stokes equations.

The Navier-Stokes equations are a complicated set of nonlinear partial differential equations. A lot of mathematical questions about them are still unanswered, like: under what conditions is there a smooth solution to these equations? If you can answer that question, you will win a million dollars from the Clay Mathematics Institute.

If you completed the standard curriculum of physics as I did, chances are that you never attended a class on fluid dynamics. At least I never did. When you take a first look at the field, you will notice that the literature about the Navier-Stokes equations alone is huge! Not to mention all the special aspects of numerical simulation, of climate science, and so on.

So it is nice to find a pedagogical introduction to the mathematical theory, aimed at people who have some background in partial differential equations:

• C. Foias, R. Rosa, O. Manley and R. Temam, Navier-Stokes Equations and Turbulence, Cambridge U. Press, Cambridge, 2001.

So, there is a lot of fun to be had for the mathematically inclined. But today I would like to talk about an aspect of fluid flows that also has a tremendous practical importance, especially for the climate of the Earth: turbulence!

There is no precise definition of turbulence, but people know it when they see it. A fluid can flow in layers, one above the other, each perhaps slightly slower or faster than its neighbors. Material of one layer hardly mixes with material of another. These flows are called laminar flows. When a laminar flow gets faster and faster, it turns into a turbulent flow at some point:

This is a fluid flow inside a circular pipe, with a layer of some darker fluid in the middle.

As a first guess we could say that a characteristic property of turbulent flow is the presence of circular flows, commonly called eddies.

Tempests in Teapots

A funny aspect of the Navier-Stokes equations is that they don’t come with any recommended length scale. Properties of the fluid flow like velocity and pressure are modelled as smooth functions of continuous time and space. Of course we know that this model does not work on an atomic length scale, where we have to consider individual atoms. Pressure and velocity of a fluid flow don’t make any sense on a length scale that is smaller than the average distance between the electrons and the atomic nucleus.

We know this, but this fact is not present in the model given by the Navier-Stokes equations!

But let us look at bigger length scales. An interesting feature of the solutions of the Navier-Stokes equations is that there are fluid flows that stretch over hundreds of meters that look like fluid flows that stretch over centimeters only. And it is really astonishing that this phenomenon can be observed in nature. This is another example of the unreasonable effectiveness of mathematical models.

You have seen an example of this in the introduction already. That was a boundary layer instability. Here is a full blown turbulent example:

The last two pictures are from the book:

• Arkady Tsinober: An Informal Conceptual Introduction to Turbulence, 2nd edition, Springer, Fluid Mechanics and Its Applications Volume 92, Berlin, 2009.

This is a nice introduction to the subject, especially if you are more interested in phenomenology than mathematical details.

Maybe you noticed the “Reynolds number” in the label text of the last picture. What is that?

People in business administration like management ratios; they throw all the confusing information they have about a company into a big mixer and extract one or two numbers that tell them where they stand, like business volume and earnings. People in hydrodynamics are somewhat similar; they define all kinds of “numbers” that condense a lot of information about fluid flows.

A CEO would want to know if the earnings of his company are positive or negative. We would like to know a number that tells us if a fluid flow is laminar or turbulent. Luckily, such a number already exists: the Reynolds number! A low number indicates a laminar flow, a high number a turbulent flow. Like the calculation of the revenue of a company, the calculation of the Reynolds number of a given fluid flow is not an exact science; some amount of estimation is necessary. The definition involves, for example, a “characteristic length scale”. This is a fuzzy concept that usually refers to some object that interacts with – in our case – the fluid flow: the characteristic length scale is the physical dimension of that object. While there is usually no objectively correct way to assign a “characteristic length” to a three-dimensional object, the concept nevertheless allows us to distinguish the scattering of water waves on an ocean liner (length scale ≈ 10³ meters) from their scattering on a peanut (length scale ≈ 10⁻² meters).
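
To make this a bit more concrete, here is a quick back-of-the-envelope calculation in Python. The formula Re = uL/ν, with ν the kinematic viscosity, is the standard definition of the Reynolds number; the pipe sizes and speeds below are illustrative values I picked, not taken from any particular experiment:

```python
# Reynolds number Re = u * L / nu, where u is a characteristic speed,
# L a characteristic length, and nu the kinematic viscosity.
# For pipe flow, the laminar-turbulent transition is usually quoted
# somewhere around Re ~ 2000-4000.

def reynolds_number(speed, length, kinematic_viscosity):
    """Dimensionless Reynolds number of a flow."""
    return speed * length / kinematic_viscosity

NU_WATER = 1.0e-6  # kinematic viscosity of water in m^2/s (roughly, at 20 °C)

# Slow flow in a thin pipe: laminar.
re_slow = reynolds_number(speed=0.05, length=0.01, kinematic_viscosity=NU_WATER)

# Fast flow in a wide pipe: turbulent.
re_fast = reynolds_number(speed=1.0, length=0.1, kinematic_viscosity=NU_WATER)

print(re_slow)  # 500: laminar regime
print(re_fast)  # 100000: turbulent regime
```

Note how the same fluid (water) can be on either side of the transition, depending only on the speed and the length scale.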

The following graphic shows laminar and turbulent flows and their characteristic Reynolds numbers:

schematic display of laminar and turbulent flow for different Reynolds numbers

This graphic is from the book

• Thomas Bohr, Mogens H. Jensen, Giovanni Paladin and Angelo Vulpiani, Dynamical Systems Approach to Turbulence, Cambridge U. Press, Cambridge, 1998

But let us leave the Reynolds number for now and turn to one of its ingredients: viscosity. Understanding viscosity is important for understanding how eddies in a fluid flow are connected to energy dissipation.

Eddies dissipate kinetic energy

“Eddies,” said Ford, “in the space-time continuum.”

“Ah,” nodded Arthur, “is he? Is he?” He pushed his hands into the pocket of his dressing gown and looked knowledgeably into the distance.

“What?” said Ford.

“Er, who,” said Arthur, “is Eddy, then, exactly, then?”

– from Douglas Adams: Life, the Universe and Everything

A fluid flow can be pictured as consisting of a lot of small fluid packages that move alongside each other. In many situations, there will be some friction between these packages. In the case of fluids, this friction is called viscosity.

It is an empirical fact that at small velocities fluid flows are laminar: there are layers upon layers, with one layer moving at a constant speed, and almost no mixing. At the boundaries, the fluid will attach to the surrounding material, and the relative fluid flow will be zero. If you picture such a flow between a plate that is at rest, and a plate that is moving forward, you will see that due to friction between the layers a force needs to be exerted to keep the moving plate moving:

In the simplest approximation, you will have to exert some force F per unit area A in order to sustain a linear increase of the fluid velocity u along the y-axis, with slope \partial u / \partial y. The constant of proportionality is called the viscosity \mu:

\displaystyle{ \frac{F}{A} = \mu \frac{\partial u}{\partial y}}

More friction means a bigger viscosity: honey has a bigger viscosity than water.
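
Here is a tiny numerical sketch of this formula. The viscosity values are rough room-temperature figures (honey in particular varies a lot), so treat the numbers as order-of-magnitude estimates only:

```python
# Shear stress F/A = mu * du/dy for a linear velocity profile
# between two plates. Viscosities in Pa*s, rough figures.

MU_WATER = 1.0e-3   # ~0.001 Pa*s at room temperature
MU_HONEY = 10.0     # order of magnitude only; honey varies a lot

def shear_stress(mu, plate_speed, gap):
    """Force per unit area needed to drag the upper plate, assuming
    a linear velocity profile so that du/dy = plate_speed / gap."""
    return mu * plate_speed / gap

# Upper plate moving at 1 m/s over a 1 mm fluid layer:
print(shear_stress(MU_WATER, 1.0, 1.0e-3))  # 1.0 Pa
print(shear_stress(MU_HONEY, 1.0, 1.0e-3))  # 10000.0 Pa
```

So dragging a plate over a thin film of honey takes about ten thousand times the force that water requires: the everyday meaning of “honey is more viscous than water”.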

If you stir honey, the fluid flow will come to a halt rather fast. The energy that you put in to start the fluid flow is turned to heat by dissipation. This mechanism is of course related to friction and therefore to viscosity.

It is possible to formulate an exact formula for this dissipation process using the Navier-Stokes equations. It is not hard to prove, but I will only explain the gadgets involved.

A fluid flow in three dimensions can be described by stating the velocity of the fluid at each time t and position \vec{x} \in \mathbb{R}^3 (I’m not specifying the region of the fluid flow or any boundary or initial conditions). Let’s call the velocity \vec{u}(t, \vec{x}).

Let’s assume that the fluid has a constant density \rho. Such a fluid is called incompressible. For convenience let us assume that the density is 1: \rho = 1. Then the kinetic energy E(\vec{u}) of a fluid flow at a fixed time t is given by

\displaystyle{ E(\vec{u}) = \frac{1}{2} \int \| \vec{u}(t, \vec{x}) \|^2 \; d^3 x }

Let’s just assume that this integral is finite for the moment. This is the first gadget we need.

The second gadget is called the enstrophy \epsilon of the fluid flow. It is a measure of how many eddies there are, and how fast they rotate. It is the integral

\displaystyle{\epsilon = \int \| \nabla \times \vec{u} \|^2 \; d^3 x }

where \nabla \times denotes the curl of the fluid velocity. The faster the fluid rotates, the bigger the curl is.

(The math geeks will notice that the vector fields \vec{u} that have finite kinetic energy and finite enstrophy are precisely the elements of the Sobolev space H^1(\mathbb{R}^3).)

Here is the relationship of the decay of the kinetic energy and the enstrophy, which is a consequence of the Navier-Stokes equations (and suitable boundary conditions):

\displaystyle{\frac{d}{d t} E = - \mu \epsilon}

This equation says that the energy decays with time, and it decays faster if there is a higher viscosity, and if there are more and stronger eddies.
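
If you would like to see this relation in action, here is a numerical sketch of my own, using the two-dimensional Taylor-Green vortex, a classic exact solution of the Navier-Stokes equations on a periodic box. I am writing the kinetic energy with the conventional factor of ½ (for which the decay law holds exactly) and taking density 1, as above; this check is illustrative and not taken from the book:

```python
import numpy as np

# Check dE/dt = -mu * enstrophy for the 2D Taylor-Green vortex
#   u = (sin x cos y, -cos x sin y) * exp(-2 mu t),
# an exact Navier-Stokes solution on the periodic box [0, 2*pi]^2.
# Density is 1, so mu is the kinematic viscosity.

mu = 0.01
n = 256
x = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
X, Y = np.meshgrid(x, x, indexing="ij")
cell = (2.0 * np.pi / n) ** 2  # area of one grid cell

def energy(t):
    """Kinetic energy E = (1/2) * integral of |u|^2."""
    f = np.exp(-2.0 * mu * t)
    ux = np.sin(X) * np.cos(Y) * f
    uy = -np.cos(X) * np.sin(Y) * f
    return 0.5 * np.sum(ux**2 + uy**2) * cell

def enstrophy(t):
    """Enstrophy = integral of |curl u|^2 (here curl u is a scalar)."""
    f = np.exp(-2.0 * mu * t)
    omega = 2.0 * np.sin(X) * np.sin(Y) * f
    return np.sum(omega**2) * cell

t, dt = 1.0, 1.0e-4
dEdt = (energy(t + dt) - energy(t - dt)) / (2.0 * dt)  # centered difference
print(dEdt, -mu * enstrophy(t))  # the two numbers agree closely
```

For this flow one can also check by hand that E = π² e^{-4μt} and ε = 4π² e^{-4μt}, so dE/dt = -4μπ² e^{-4μt} = -με exactly.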

If you are interested in the mathematically precise derivation of this equation, you can look it up in the book I already mentioned:

• C. Foias, R. Rosa, O. Manley and R. Temam, Navier-Stokes Equations and Turbulence, Cambridge U. Press, Cambridge, 2001.

This connection of eddies and dissipation could indicate that there is also a connection of eddies and some maximum entropy principle. Since eddies enhance dissipation, natural fluid flows should somehow tend towards the production of eddies. It would be interesting to know more about this!

In this post we have seen eddies at different length scales. There are buzzwords in meteorology for this:

You have seen eddies at the ‘microscale’ (left) and at the ‘mesoscale’ (middle). A blog post about eddies should of course mention the most famous eddy of the last decade, which formed at the ‘synoptic scale’:

Do you recognize it? That was Hurricane Katrina.

It is obviously important to understand disasters like this one on the synoptic scale. This is an active topic of ongoing research, both in meteorology and in climate science.


This Week’s Finds (Week 318)

21 August, 2011

The Earth’s climate is complicated. How well do we understand how it will react to the drastic increase in atmospheric carbon dioxide that we’re imposing on it? One reason it’s hard to be 100% sure is that the truly dramatic changes most scientists expect lie mostly in the future. There’s a lot of important evidence we’ll get only when it’s too late.

Luckily, the Earth’s past also shows signs of dramatic climate change: for example, the glacial cycles I began discussing last time in "week317". These cycles make an interesting test of how well we understand climate change. Of course, their mechanism is very different from that of human-caused global warming, so we might understand one but not the other. Indeed, earlier episodes like the Paleocene-Eocene Thermal Maximum might shed more light on what we’re doing to the Earth now! But still, the glacial cycles are an impressive instance of dramatic climate change, which we’d do well to understand.

As I hinted last week, a lot of scientists believe that the Earth’s glacial cycles are related to cyclic changes in the Earth’s orbit and the tilt of its axis. Since one of the first scientists to carefully study this issue was Milutin Milankovitch, these are called Milankovitch cycles. The three major types of Milankovitch cycle are:

• changes in the eccentricity of the Earth’s orbit – that is, how much the orbit deviates from being a circle:




(changes greatly exaggerated)

• changes in the obliquity, or tilt of the Earth’s axis:



• precession, meaning changes in the direction of the Earth’s axis relative to the fixed stars:



Now, the first important thing to realize is this: it’s not obvious that Milankovitch cycles can cause glacial cycles. During a glacial period, the Earth is about 5°C cooler than it is now. But the Milankovitch cycles barely affect the overall annual amount of solar radiation hitting the Earth!

This fact is clear for precession or changes in obliquity, since these just involve the tilt of the Earth’s axis, and the Earth is nearly a sphere. The amount of sunlight hitting a sphere doesn’t depend on how the sphere is ‘tilted’.

For changes in the eccentricity of the Earth’s orbit, this fact is a bit less obvious. After all, when the orbit is more eccentric, the Earth gets closer to the Sun sometimes, but farther at other times. So you need to actually sit down and do some math to figure out the net effect. Luckily, Greg Egan did this for us—I’ll show you his calculation at the end of this article. It turns out that when the Earth’s orbit is at its most eccentric, it gets very, very slightly more energy from the Sun each year: 0.167% more than when its orbit is at its least eccentric.

So, there are interesting puzzles involved in the Milankovitch cycles. They don’t affect the total amount of radiation that hits the Earth each year—not much, anyway—but they do cause substantial changes in the amount of radiation that hits the Earth at various different latitudes in various different seasons. We need to understand what such changes might do.

James Croll was one of the first to think about this, back around 1875. He decided that what really matters is the amount of sunlight hitting the far northern latitudes in winter. When this was low, he claimed, glaciers would tend to form and an ice age would start. But later, in the 1920s, Milankovitch made the opposite claim: what really matters is the amount of sunlight hitting the far northern latitudes in summer. When this was low, an ice age would start.

If we take a quick look at the data, we see that the truth is not obvious:


I like this graph because it’s pretty… but I wish the vertical axes were labelled. We will see some more precise graphs in future weeks.

Nonetheless, this graph gives some idea of what’s going on. Precession, obliquity and eccentricity vary in complex but still predictable ways. From this you can compute the amount of solar energy that hits the surface of the Earth’s atmosphere on July 1st at a latitude of 65° N. That’s the yellow curve. People believe this quantity has some relation to the Earth’s temperature, as shown by the black curve at bottom. However, the relation is far from clear!

Indeed, if you only look at this graph, you might easily decide that Milankovitch cycles are not important in causing glacial cycles. But people have analyzed temperature proxies over long spans of time, and found evidence for cyclic changes at periods that match those of the Milankovitch cycles. Here’s a classic paper on this subject:

• J. D. Hays, J. Imbrie, and N. J. Shackleton, Variations in the earth’s orbit: pacemaker of the Ice Ages, Science 194 (1976), 1121-1132.

They selected two sediment cores from the Indian Ocean, which contain sediments deposited over the last 450,000 years. They measured:

1) Ts, an estimate of summer sea-surface temperatures at the core site, derived from a statistical analysis of tiny organisms called radiolarians found in the sediments.

2) δ18O, the excess of the heavy isotope of oxygen in tiny organisms called foraminifera also found in the sediments.

3) The percentage of radiolarians that are Cycladophora davisiana—a certain species not used in the estimation of Ts.

Identical samples were analyzed for the three variables at 10-centimeter intervals throughout each core. Then they took a Fourier transform of this data to see at which frequencies these variables wiggle the most! When we take the Fourier transform of a function and then square it, the result is called the power spectrum. So, they actually graphed the power spectra for these three variables:

The top graph shows the power spectra for Ts, δ18O, and the percentage of Cycladophora davisiana. The second one shows the spectra after a bit of extra messing around. Either way, there seem to be peaks at periods of 19, 23, 42 and roughly 100 thousand years. However the last number is quite fuzzy: if you look, you’ll see the three different power spectra have peaks at 94, 106 and 122 thousand years.
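
Just to illustrate the method, not the actual data: here is a sketch that builds a synthetic ‘sediment proxy’ containing periods of 19, 23, 41 and 100 thousand years plus noise, and recovers them from the power spectrum. The record length, amplitudes and noise level are all made up:

```python
import numpy as np

# Illustrative only: put four known periods into a fake signal,
# then recover them as peaks of the power spectrum (the squared
# magnitude of the Fourier transform).

rng = np.random.default_rng(0)
dt = 1.0                         # sample spacing: 1 thousand years
t = np.arange(0.0, 4096.0, dt)   # ~4 million years of fake record
periods = [19.0, 23.0, 41.0, 100.0]  # in thousands of years

signal = sum(np.sin(2.0 * np.pi * t / p) for p in periods)
signal += 0.5 * rng.standard_normal(t.size)  # 'measurement noise'

power = np.abs(np.fft.rfft(signal)) ** 2
freqs = np.fft.rfftfreq(t.size, d=dt)

# Pick the four strongest peaks, ignoring the zero-frequency bin:
top = np.argsort(power[1:])[-4:] + 1
recovered = sorted(float(1.0 / freqs[k]) for k in top)
print(recovered)  # close to [19, 23, 41, 100] thousand years
```

The recovered periods are only as sharp as the frequency resolution of the record allows, which is one reason the ~100,000-year peak in the real data is fuzzy: long periods get few frequency bins.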

So, some sort of cycles seem to be occurring. This is far from the only piece of evidence, but it’s a famous one.

Now let’s go over the three major forms of Milankovitch cycle, and keep our eye out for cycles that take place every 19, 23, 42 or roughly 100 thousand years!

Eccentricity

The Earth’s orbit is an ellipse, and the eccentricity of this ellipse says how far it is from being circular. But the eccentricity of the Earth’s orbit slowly changes: it varies from being nearly circular, with an eccentricity of 0.005, to being more strongly elliptical, with an eccentricity of 0.058. The mean eccentricity is 0.028. There are several periodic components to these variations. The strongest occurs with a period of 413,000 years, and changes the eccentricity by ±0.012. Two other components have periods of 95,000 and 123,000 years.

The eccentricity affects the percentage difference in incoming solar radiation between the perihelion, the point where the Earth is closest to the Sun, and the aphelion, when it is farthest from the Sun. This works as follows. The percentage difference between the Earth’s distance from the Sun at perihelion and aphelion is twice the eccentricity, and the percentage change in incoming solar radiation is about twice that. The first fact follows from the definition of eccentricity, while the second follows from differentiating the inverse-square relationship between brightness and distance.

Right now the eccentricity is 0.0167, or 1.67%. Thus, the distance from the Earth to the Sun varies by 3.34% over the course of a year. This in turn gives an annual variation in incoming solar radiation of about 6.68%. Note that this is not the cause of the seasons: those arise due to the Earth’s tilt, and occur at different times in the northern and southern hemispheres.
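
Here is a quick numerical check of these rules of thumb, using the perihelion and aphelion distances a(1−e) and a(1+e) that follow from the definition of eccentricity:

```python
# Check the rules of thumb for the present eccentricity e = 0.0167:
# the Earth-Sun distance varies by about 2e over a year, and the
# incoming solar flux (inverse-square law) by about 4e.

e = 0.0167
a = 1.0  # semi-major axis, in arbitrary units

r_perihelion = a * (1.0 - e)
r_aphelion = a * (1.0 + e)

distance_variation = (r_aphelion - r_perihelion) / a
flux_variation = (r_perihelion**-2 - r_aphelion**-2) / a**-2

print(distance_variation)  # 2e, i.e. about 3.34%
print(flux_variation)      # about 4e, i.e. about 6.68%
```

The factor of 2 between the two numbers is just the derivative of the inverse-square law, as explained above.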

Obliquity

The angle of the Earth’s axial tilt with respect to the plane of its orbit, called the obliquity, varies between 22.1° and 24.5° in a roughly periodic way, with a period of 41,000 years. When the obliquity is high, seasonal variations are stronger.

Right now the obliquity is 23.44°, roughly halfway between its extreme values. It is decreasing, and will reach its minimum value around the year 10,000 CE.

Precession

The slow turning in the direction of the Earth’s axis of rotation relative to the fixed stars, called precession, has a period of roughly 23,000 years. As precession occurs, the seasons drift in and out of phase with the perihelion and aphelion of the Earth’s orbit.

Right now the perihelion occurs during the southern hemisphere’s summer, while the aphelion is reached during the southern winter. This tends to make the southern hemisphere seasons more extreme than the northern hemisphere seasons.

The gradual precession of the Earth is not due to the same physical mechanism as the wobbling of a spinning top. That sort of wobbling does occur, but it has a period of only 427 days. The 23,000-year precession is due to tidal interactions between the Earth, Sun and Moon. For details, see:

• John Baez, The wobbling of the Earth and other curiosities.

In the real world, most things get more complicated the more carefully you look at them. For example, precession actually has several periodic components. According to André Berger, a top expert on changes in the Earth’s orbit, the four biggest components have these periods:

• 23,700 years

• 22,400 years

• 18,980 years

• 19,160 years

in order of decreasing strength. But in geology, these tend to show up either as a single peak around the mean value of 21,000 years, or as two peaks at periods of 23,000 and 19,000 years.

To add to the fun, the three effects I’ve listed—changes in eccentricity, changes in obliquity, and precession—are not independent. According to Berger, cycles in eccentricity arise from ‘beats’ between different precession cycles:

• The 95,000-year eccentricity cycle arises from a beat between the 23,700-year and 19,000-year precession cycles.

• The 123,000-year eccentricity cycle arises from a beat between the 22,400-year and 18,980-year precession cycles.
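
Assuming the standard beat-frequency formula, 1/T_beat = |1/T₁ − 1/T₂|, we can check these claims numerically. Using the 18,980-year period from Berger’s list above for the second beat reproduces the 123,000-year figure reasonably well:

```python
# Beat period of two cycles: 1/T_beat = |1/T1 - 1/T2|.
# Precession periods in years, from Berger's list.

def beat_period(t1, t2):
    """Period of the beat between two cycles of periods t1 and t2."""
    return 1.0 / abs(1.0 / t1 - 1.0 / t2)

print(beat_period(23_700, 19_000))  # ~95,800 years
print(beat_period(22_400, 18_980))  # ~124,000 years
```

So the two long eccentricity cycles do emerge, at least approximately, as beats of the much shorter precession cycles.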

We should delve into all this stuff more deeply someday. For now, let me just refer you to this classic review paper:

• André Berger, Pleistocene climatic variability at astronomical frequencies, Quaternary International 2 (1989), 1-14.

Later, as I get up to speed, I’ll talk about more modern work.

Paleontology versus astronomy

So now we can compare the data from ocean sediments to the Milankovitch cycles as computed in astronomy:

• The roughly 19,000-year cycle in ocean sediments may come from 18,980-year and 19,160-year precession cycles.

• The roughly 23,000-year cycle in ocean sediments may come from the 23,700-year and 22,400-year precession cycles.

• The roughly 42,000-year cycle in ocean sediments may come from the 41,000-year obliquity cycle.

• The roughly 100,000-year cycle in ocean sediments may come from the 95,000-year and 123,000-year eccentricity cycles.

Again, the last one looks the most fuzzy. As we saw, different kinds of sediments seem to indicate cycles of 94, 106 and 122 thousand years. At least two of these periods match eccentricity cycles fairly well. But a detailed analysis would be required to distinguish between real effects and coincidences in this subject!

The effect of eccentricity

I bet some of you are hungry for some actual math. As I mentioned, it takes some work to see how changes in the eccentricity of the Earth’s orbit affect the annual average of sunlight hitting the top of the Earth’s atmosphere. Luckily Greg Egan has done this work for us. While the result is surely not new, his approach makes nice use of the fact that both gravity and solar radiation obey an inverse-square law. That’s pretty cool.

Here is his calculation:

The angular velocity of a planet is

\displaystyle{\frac{d \theta}{d t} = \frac{J}{m r^2} }

where J is the constant orbital angular momentum of the planet and m is its mass. Thus the radiant energy delivered per unit time to the planet is

\displaystyle{ \frac{d U}{d t} = \frac{C}{r^2}}

for some constant C. It follows that the energy delivered per unit of angular progress around the orbit is

\displaystyle{ \frac{d U}{d \theta} = \frac{C}{r^2} \frac{d t}{d \theta} = \frac{C m}{J} }

So, the total energy delivered in one period will be

\displaystyle{ U=\frac{2\pi C m}{J} }

How can we relate the orbital angular momentum J to the shape of the orbit? Equate the total energy of the planet, kinetic \frac{1}{2}m v^2 plus potential -\frac{G M m}{r}, at its aphelion r_1 and perihelion r_2, and use J to express the velocity at each of these points in terms of the distance, v=\frac{J}{m r}. Solving for J then gives:

\displaystyle{J = m \sqrt{\frac{2 G M r_1 r_2}{r_1+r_2}} = m b \sqrt{\frac{G M}{a}}}

where

\displaystyle{ a=\frac{1}{2} (r_1+r_2)}

is the semi-major axis of the orbit and

\displaystyle{ b=\sqrt{r_1 r_2} }

is the semi-minor axis. But we can also relate J to the period of the orbit, T, by integrating the rate at which orbital area is swept out by the planet:

\displaystyle{\frac{1}{2}  r^2 \frac{d \theta}{d t} = \frac{J}{2 m} }

over one orbit. Since the area of an ellipse is \pi a b, this gives us:

\displaystyle{ J = \frac{2 \pi a b m}{T} }

Equating these two expressions for J shows that the period is:

\displaystyle{ T = 2 \pi \sqrt{\frac{a^3}{G M}}}

So the period depends only on the semi-major axis; for a fixed value of a, it’s independent of the eccentricity.
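
As a sanity check of this formula, we can plug in rough values for the Earth’s semi-major axis and the Sun’s gravitational parameter and see whether we get one year. The constants below are standard approximate astronomical values:

```python
import math

# Kepler's third law: T = 2*pi*sqrt(a^3 / GM).
# Plugging in the Earth's orbit should give about one year.

GM_SUN = 1.32712e20    # gravitational parameter of the Sun, m^3/s^2
A_EARTH = 1.49598e11   # semi-major axis of Earth's orbit, m

period_seconds = 2.0 * math.pi * math.sqrt(A_EARTH**3 / GM_SUN)
period_days = period_seconds / 86400.0
print(period_days)  # ~365.26 days
```

Note that the eccentricity appears nowhere in this calculation, which is exactly the point: the period depends only on a.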

As the eccentricity of the Earth’s orbit changes, the orbital period T, and hence the semi-major axis a, remain almost constant. So, we have:

\displaystyle{ U=\frac{2\pi C m}{J} = \frac{2\pi C}{b} \sqrt{\frac{a}{G M}} }

Expressing the semi-minor axis in terms of the semi-major axis and the eccentricity, b^2 = a^2 (1-e^2), we get:

\displaystyle{U=\frac{2\pi C}{\sqrt{G M a (1-e^2)}}}

So to second order in e, we have:

\displaystyle{U = \frac{\pi C}{\sqrt{G M a}} (2+e^2) }

The expressions simplify if we consider the average rate of energy delivery over an orbit, which makes all the grungy constants related to gravitational dynamics go away:

\displaystyle{\frac{U}{T} = \frac{C}{a^2 \sqrt{1-e^2}} }

or to second order in e:

\displaystyle{ \frac{U}{T} = \frac{C}{a^2} (1+\frac{1}{2} e^2) }

We can now work out how much the actual changes in the Earth’s orbit affect the amount of solar radiation it gets! The eccentricity of the Earth’s orbit varies between 0.005 and 0.058. The total energy the Earth gets each year from solar radiation is proportional to

\displaystyle{ \frac{1}{\sqrt{1-e^2}} }

where e is the eccentricity. When the eccentricity is at its lowest value, e = 0.005, we get

\displaystyle{ \frac{1}{\sqrt{1-e^2}} = 1.0000125 }

When the eccentricity is at its highest value, e = 0.058, we get

\displaystyle{\frac{1}{\sqrt{1-e^2}} = 1.00168626 }

So, the change is about

\displaystyle{1.00168626/1.0000125 = 1.00167373 }

In other words, a change of merely 0.167%.

That’s very small. And the effect on the Earth’s temperature would naively be even less!

Naively, we can treat the Earth as a greybody: an ideal object whose tendency to absorb or emit radiation is the same at all wavelengths and temperatures. Since the temperature of a greybody is proportional to the fourth root of the power it receives, a 0.167% change in solar energy received per year corresponds to a percentage change in temperature roughly one fourth as big. That’s a 0.042% change in temperature. If we imagine starting with an Earth like ours, with an average temperature of roughly 290 kelvin, that’s a change of just 0.12 kelvin!
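
Putting the last few steps together in a few lines of code (the 290 kelvin figure is the rough present-day average temperature mentioned above):

```python
# Greybody estimate: annual energy received ~ 1/sqrt(1 - e^2),
# and temperature ~ (energy)^(1/4). Eccentricity values from the text.

e_min, e_max = 0.005, 0.058

def annual_energy_factor(e):
    """Relative annual solar energy for an orbit of eccentricity e."""
    return 1.0 / (1.0 - e**2) ** 0.5

energy_ratio = annual_energy_factor(e_max) / annual_energy_factor(e_min)
temperature_ratio = energy_ratio ** 0.25

mean_temperature = 290.0  # kelvin, rough present-day average
delta_t = (temperature_ratio - 1.0) * mean_temperature

print(energy_ratio)  # ~1.00167, a 0.167% change in energy
print(delta_t)       # ~0.12 kelvin
```

The fourth root is what makes the effect so small: a fractional change in received power shows up in the temperature at only a quarter of its size.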

The upshot seems to be this: in a naive model without any amplifying effects, changes in the eccentricity of the Earth’s orbit would cause temperature changes of just 0.12 °C!

This is much less than the roughly 5 °C change we see between glacial and interglacial periods. So, if changes in eccentricity are important in glacial cycles, we have some explaining to do. Possible explanations include season-dependent phenomena and climate feedback effects. Probably both are very important!

Next time I’ll start talking about some theories of how Milankovitch cycles might cause the glacial cycles. I thank Frederik De Roo, Martin Gisser and Cameron Smith for suggesting improvements to this issue before its release, over on the Azimuth Forum. Please join us over there.


Little did I suspect, at the time I made this resolution, that it would become a path so entangled that fully twenty years would elapse before I could get out of it. – James Croll, on his decision to study the cause of the glacial cycles


Environmental News From China

13 August, 2011

I was unable to access this blog last week while I was in Changchun—sorry!

But I’m back in Singapore now, so here’s some news, mostly from the 2 August 2011 edition of China Daily, the government’s official English newspaper. As you’ll see, they’re pretty concerned about environmental problems. But to balance the picture, here’s a picture from Changbai Mountain, illustrating the awesome beauty of the parts of China that remain wild:

The Chinese have fallen in love with cars. Though less than 6% of Chinese own cars so far, that’s already 75 million cars, a market exceeded only by the US.

The price of real estate in China is shooting up—but as car ownership soars, you’ll have to pay a lot more if you want to buy a parking space for your apartment. The old apartments don’t have them. In Beijing the average price of a parking space is 140,000 yuan, which is about $22,000. In Shanghai it’s 150,000 yuan. But in fancy neighborhoods the price can be much higher: for example, up to 800,000 yuan in Beijing!

For comparison, the average salary in Beijing was 36,000 yuan in 2007—and the median is probably much lower, since there are lots of poor people and just a few rich ones. On top of that, I bet this figure doesn’t include the many undocumented people who have come from the countryside to work in Beijing. The big cities in China are much richer than the rest of the country: the average salary throughout the country was 11,000 yuan, and the average rural wage was just 3,600 yuan. This disparity is causing young people to flood into the cities, leaving behind villages mostly full of old folks.

Thanks to intensive use of coal, increasing car ownership and often-ignored regulations, air quality is bad in most Chinese cities. In Changchun, a typical summer day resembles the very worst days in Los Angeles, where the air is yellowish-grey except for a small blue region directly overhead.

In a campaign to improve the air quality in Beijing, drivers are getting subsidized to turn in cars made in 1995 or earlier. As usual, it’s the old clunkers that stink the worst: 27% of the cars in Beijing are over 8 years old, but they make 60% of the air pollution. The government is hoping to eliminate 400,000 old cars and cut the emission of nitrogen oxide by more than 10,000 tonnes per year by 2015.

But this policy is also supposed to stoke the market for new automobiles. That’s a bit strange, since Beijing is a huge city with massive traffic jams—some say the worst in the world! As a result, the government has taken strong steps to limit car sales in Beijing.


In Beijing, if you want to buy a car, you have to enter a lottery to get a license plate! Car sales have been capped at 240,000 this year, and for the first lottery people’s chances of winning were just one in ten:

• Louisa Lim, License plate lottery meant to curb Beijing traffic, Morning Edition, 26 January 2011.

Why is the government trying to stoke new car sales in Beijing while simultaneously trying to limit them? Maybe it’s just a rhetorical move to placate the car dealers, who hate the lottery system. Or maybe it’s because the government makes money from selling cars: it’s a state-controlled industry.

On another front, since July there has been a drought in the provinces of Gansu, Guizhou and Hunan, the Inner Mongolia autonomous region, and the Ningxia Hui autonomous region, which is home to many non-Han ethnic groups including the Hui. It’s caused water shortages for 4.3 million people. In some villages all the crops have died. Drought relief agencies are sending out more water pumps and delivering drinking water.

In Gansu province, at least, the current drought is part of a bigger desertification process.

Once they grew rice in Gansu, but then they moved to wheat:

• Tu Xin-Yi, Drought in Gansu, Tzu Chi, 5 January 2011.

China is among the nations that are experiencing severe desertification. One of the hardest hit areas is Gansu Province, deep in the nation’s heartland. The province, which includes parts of the Gobi, Badain Jaran, and Tengger Deserts, is suffering moisture drawdown year after year. As water goes up into the air, so does irrigation and agriculture. People can hardly make a living from the arid land.

But the land was once quite rich and hospitable to agriculture, a far cry from what greets the eye today. Ruoli, in central Gansu, epitomizes the big dry-up. The area used to be verdant farmland where, with abundant rainfall, all kinds of plants grew lush and dense; but now the land is dry and yields next to nothing. All this dramatic change has come about in just 50 years—lightning-fast, a mere blink of an eye in geological terms.

Rapid desertification is forcing many parties, including the government, to take action. Some residents have moved away to seek better livelihoods elsewhere, and the government offers incentives for people to relocate to the lowlands. Tzu Chi built a new village to accommodate some of these migrants.

Tzu Chi is a Buddhist organization with a strong interest in climate change. The dramatic change they speak of seems to be part of a longer-term drying trend in this region. Here is one of a series of watchtowers near Dunhuang, once a thriving city at the eastern end of the Silk Road. I don’t think this area was such a desert back then:

Meanwhile, down in southern China, the Guangxi Zhuang autonomous region is seeing its worst electricity shortage in the last 2 decades, with 30% of the demand for electric power unmet, and rolling blackouts. They blame the situation on a shortage of coal and the fact that the local river isn’t deep enough to provide hydropower.

On the bright side, China is investing a lot in wind power. Their response to the financial crisis of 2009 included a $220 billion investment in renewable energy. The city of Baoding is now one of the world’s centers for producing wind turbines, and by 2020 China plans to have 100 gigawatts of peak wind power online.

That’s pretty good! Remember our discussion of Pacala and Socolow’s stabilization wedges? The world needs to reduce carbon emissions by roughly 10 gigatonnes per year by about 2050 to stay out of trouble. Pacala and Socolow call each 1-gigatonne slice of this carbon pie a ‘wedge’. We could reduce carbon emissions by one ‘wedge’ by switching 700 gigawatts of coal power to 2000 gigawatts of peak wind power. Why 700 of coal for 2000 of wind? Because unfortunately most of the time wind power doesn’t work at peak efficiency!

So, the Chinese plan to do 1/20 of a wedge of wind power by 2020. Multiply that effort by a factor of 200 worldwide by 2050, and we’ll be in okay shape. That’s quite a challenge! Of course we won’t do it all with wind.
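If you like, you can redo this wedge arithmetic yourself. Here it is as a little Python sketch; the 0.35 capacity factor isn’t quoted anywhere above, it’s just what Pacala and Socolow’s 700 : 2000 ratio implies:

```python
# Wedge bookkeeping, using the figures quoted above.
coal_gw_per_wedge = 700        # coal capacity displaced by one wedge
wind_peak_gw_per_wedge = 2000  # peak wind capacity needed for one wedge

# Why 700 of coal for 2000 of wind? The implied average capacity factor:
capacity_factor = coal_gw_per_wedge / wind_peak_gw_per_wedge
print(f"implied wind capacity factor: {capacity_factor:.2f}")   # 0.35

# China's 2020 target, as a fraction of a wedge:
china_peak_gw = 100
china_wedges = china_peak_gw / wind_peak_gw_per_wedge
print(f"China's plan: {china_wedges} wedge")                    # 0.05, i.e. 1/20

# Scale-up needed worldwide to get the roughly 10 wedges required by 2050:
wedges_needed = 10
print(f"scale-up factor: {wedges_needed / china_wedges:.0f}x")  # 200x
```
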

And while the US and Europe are worried about excessive government and private debt, China is struggling to figure out how to manage its vast savings. China has a $3.2 trillion foreign reserve, which is 30% of the world’s total. The fraction invested in US dollars has dropped from 71% in 1999 to 61% in 2010, but that’s still a lot of money, so any talk of the US defaulting, or a drop in the dollar, makes the Chinese government very nervous. This article goes into a bit more detail:

• Zhang Monan, Dollar depreciation dilemma, China Daily, 2 August 2011.

In a move to keep the value of their foreign reserves and improve their ratio of return, an increasing number of countries have set up sovereign wealth funds in recent years, especially since the onset of the global financial crisis. So far, nearly 30 countries or regions have established sovereign wealth funds and the total assets at their disposal amounted to $3.98 trillion in early 2011.

Compared to its mammoth official foreign reserve, China has made much slower progress than many countries in the expansion of its sovereign wealth funds, especially in its stock investments. Currently, China has only three main sovereign wealth funds: One with assets of $347.1 billion is managed by the Hong Kong-based SAFE Investment Co Ltd; the second, with assets of $288.8 billion, is managed by the China Investment Corporation, a wholly State-owned enterprise engaging in foreign assets investment; the third fund of $146.5 billion is managed by the National Social Security Fund.

From the perspective of its investment structure, China’s sovereign wealth funds have long attached excessive importance to mobility and security. For example, the China Investment Corporation has invested 87.4 percent of its funds in cash assets and only 3.2 percent in stocks, in sharp contrast to the global average of 45 percent in stock investments.

What’s interesting to me is that on the one hand we have these big problems, like global warming, and on the other hand these people with tons of money struggling to find good ways to invest it. Is there a way to make each of these problems the solution to the other?


Azimuth on Google Plus (Part 1)

24 July, 2011

Google Plus is here… and it’s pretty cool.

If you’re on Google Plus, and you want to get Azimuth-related news items, please let me know, either here or there. I’ll add you to my Azimuth circle.

Or even better, tell me how to broadcast items on Google Plus so that 1) everyone can see them, but 2) only people who want Azimuth stuff need to see them. I can send stuff to “Public”, but so far I’ve been using that for fun stuff I think everyone would enjoy. Maybe future improvements to Google Plus will help me solve my dilemma.

Here’s a sample of Azimuth items on Google Plus. But note: these look a lot nicer on Google Plus.

Solar panels could reduce heat reaching the rooftop by as much as 38%. So, while making electricity you also spend less energy cooling your building in the summer. Not so nice in the winter, maybe.

Japan has been paying the dues for other countries in the International Whaling Commission — and then these countries vote against banning whaling. Japanese academic Atsushi Ishii said that this form of vote-buying was “very likely,” but added “I would not call it corruption.” Yeah, right. But the good news: now things may change a bit, since the International Whaling Commission has decided to ban this practice!

A plastic bottle filled with water refracts sunlight and acts like a 55-watt bulb — during the day, if you have a hole in your roof. For many that would be a good thing.

Congress may finally kill a $6 billion annual subsidy for turning corn into ethanol. This would be a good thing in many ways. Even some bioethanol producers say they don’t need this subsidy anymore. After all, there are laws requiring the use of ethanol in fuels, and as oil prices continue to rise, ethanol is becoming competitive.

Koch Industries Inc. and Exxon Mobil helped write legislation that’s been introduced in Montana, New Mexico, New Hampshire, Oregon, Washington and other states in the USA. It includes these words: “a tremendous amount of economic growth would be sacrificed for a reduction in carbon emissions that would have no appreciable impact on global concentrations of carbon dioxide.” They did this through an organization called the American Legislative Exchange Council (ALEC).

See all the things that are wrong with scientific publishing today. On Google Plus we’ve been discussing ways to solve these problems.

Paleoecologist Micha Ruhl of Utrecht University has a new paper on the Permian-Triassic extinction. It argues that a fairly small release in CO2 from volcanoes was enough to make the sea floor release methane and cause the world’s worst mass extinction event. How much is “fairly small”, I wonder?

There’s a new Canadian study on “astroturfing”. Students who viewed a fake “grassroots” website with arguments against the existence of manmade global warming not only became less certain about the cause of global warming; they also believed that the issue was less important than before! Worse, the responses of participants who had viewed sites “Funded by Exxon-Mobil” weren’t different from those who had viewed sites funded by the “Conservation Heritage Fund,” by “donations by people like you,” or sites that didn’t list the source of funding at all.

And just for fun, especially for those of you suffering from the US heat wave… here’s what happens when you throw boiling water into the air at -35 °C:

But why isn’t she wearing a hat?


This Week’s Finds (Week 317)

22 July, 2011

Anyone seriously interested in global warming needs to learn about the ‘ice ages’, or more technically ‘glacial periods’. After all, these are some of the most prominent natural variations in the Earth’s temperature. And they’re rather mysterious. They could be caused by changes in the Earth’s orbit called Milankovitch cycles… but the evidence is not completely compelling. I want to talk about that.

But to understand ice ages, the first thing we need to know is that the Earth hasn’t always had them! The Earth’s climate has been cooling and becoming more erratic for the last 35 million years, with full-blown glacial periods kicking in only about 1.8 million years ago.

So, this week let’s start with a little tour of the Earth’s climate history. Somewhat arbitrarily, let’s begin with the extinction of the dinosaurs about 65 million years ago. Here’s a graph of what the temperature has been doing since then:

Of course you should have lots of questions about how this graph was made, and how well we really know these ancient temperatures! But for now I’m just giving a quick overview—click on the graphs for more. In future weeks I should delve into more technical details.

The Paleocene Epoch, 65 – 55 million years ago

The Paleocene began with a bang, as an asteroid 10 kilometers across hit the Gulf of Mexico in an explosion two million times larger than the biggest nuclear weapon ever detonated. A megatsunami thousands of meters high ripped across the Atlantic, and molten quartz hurled high into the atmosphere ignited wildfires over the whole planet. A day to remember, for sure.

The Earth looked like this back then:

The Paleocene started out hot: the ocean was 10° to 15° Celsius warmer than today. Then it got even hotter! Besides a gradual temperature rise, at the very end of this epoch there was a drastic incident called the Paleocene-Eocene Thermal Maximum— that’s the spike labelled "PETM". Ocean surface temperatures worldwide shot up by 5-8°C for a few thousand years—but in the Arctic, it heated up even more, to a balmy 23°C. This caused a severe dieoff of little ocean critters called foraminifera, and a drastic change of the dominant mammal species. What caused it? That’s a good question, but right now I’m just giving you a quick tour.

The Eocene Epoch, 55 – 34 million years ago

During the Eocene, temperatures continued to rise until the so-called ‘Eocene Optimum’, about halfway through. Even at the start, the continents were close to where they are now—but the average annual temperature in arctic Canada and Siberia was a balmy 18 °C. The dominant plants up there were palm trees and cycads. Fossil monitor lizards (sort of like alligators) dating back to this era have been found in Svalbard, an island north of Greenland that’s now covered with ice all year. Antarctica was home to cool temperate forests, including beech trees and ferns. In particular, our Earth had no permanent polar ice caps!

Life back then was very different. The biggest member of the order Carnivora, which now includes dogs, cats, bears, and the like, was merely the size of a housecat. The largest predatory mammals were of another, now extinct order: the creodonts, like this one drawn by Dmitry Bogdanov:


But the biggest predator of all was not a mammal: it was Diatryma, the 8-foot tall "terror bird", with a fearsome beak!


But it’s not as huge as it looks here, because horses were only half a meter high back then!

For more on this strange world and its end as the Earth cooled, see:

• Donald R. Prothero, The Eocene-Oligocene Transition: Paradise Lost, Critical Moments in Paleobiology and Earth History Series, Columbia University Press, New York, 1994.

The Oligocene Epoch, 34 – 24 million years ago

As the Eocene drew to a close, temperatures began to drop. And at the start of the Oligocene, they plummeted! Glaciers started forming in Antarctica. The growth of ice sheets led to a dropping of the sea level. Tropical jungles gave ground to cooler woodlands.

What caused this? That’s another good question. Some seek the answer in plate tectonics. The Oligocene is when India collided with Asia, throwing up the Himalayas and the vast Tibetan plateau. Some argue this led to a significant change in global weather patterns. But this is also the time when Australia and South America finally separated from Antarctica. Some argue that the formation of an ocean completely surrounding Antarctica led to the cooling weather patterns. After all, that lets cold water go round and round Antarctica without ever being driven up towards the equator.

The Miocene Epoch, 24 – 5.3 million years ago

Near the end of the Oligocene temperatures shot up again and the Antarctic thawed. Then it cooled, then it warmed again… but by the middle of the Miocene, temperatures began to drop more seriously, and glaciers again formed on the Antarctic. It’s been frozen ever since. Why all these temperature fluctuations? That’s another good question.

The Miocene is when grasslands first became common. It’s sort of amazing that something we take so much for granted—grass—can be so new! But grasslands, as opposed to thicker forests and jungles, are characteristic of cooler climates. And as Nigel Calder has suggested, grasslands were crucial to the development of humans! Early hominids lived on the border between forests and grasslands. That has a lot to do with why we stand on our hind legs and have hands rather than paws. Much later, the agricultural revolution relied heavily on grasses like wheat, rice, corn, sorghum, rye, and millet. As we ate more of these plants, we drastically transformed them by breeding, and removed forests to grow more grasses. In return, the grasses drastically transformed us: the ability to stockpile surplus grains ended our hunter-gatherer lifestyle and gave rise to cities, kingdoms, and slave labor.

So, you could say we coevolved with grasses!

Indeed, the sequence of developments leading to humans came shortly after the rise of grasslands. Apes split off from monkeys 21 million years ago, in the Miocene. The genus Homo split off from other apes like gorillas and chimpanzees 5 million years ago, near the beginning of the Pliocene. The fully bipedal Homo erectus dates back to 1.9 million years ago, near the end of the Pliocene. But we’re getting ahead of ourselves…

The Pliocene Epoch, 5.3 – 1.8 million years ago

Starting around the Pliocene, the Earth’s temperature has been getting ever more jittery as it cools. Something is making the temperature unstable! And these fluctuations are not just getting more severe—they’re also lasting longer.

These temperature fluctuations are far from being neatly periodic, despite the optimistic labels on the above graph saying “41 kiloyear cycle” and “100 kiloyear cycle”. And beware: the data in the above graph was manipulated so it would synchronize with the Milankovitch cycles! Is that really justified? Do these cycles really cause the changes in the Earth’s climate? More good questions.

Here’s a graph that shows more clearly the noisy nature of the Earth’s climate in the last 7 million years:

You can tell this graph was made by a real paleontologist, because they like to put the present on the left instead of on the right.

And maybe you’re getting curious about this “δ18O benthic carbonate” business? Well, we can’t directly measure the temperatures long ago by sticking a thermometer into an ancient rock! We need to use ‘climate proxies’: things we can measure now, that we believe are correlated to features of the climate long ago. δ18O is the change in the amount of oxygen-18 (a less common, heavier isotope of oxygen) in carbonate deposits dug up from ancient ocean sediments. These deposits were made by foraminifera and other tiny ocean critters. The amount of oxygen-18 in these deposits is used as temperature proxy: the more of it there is, the colder we think it was. Why? That’s another good question.
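In case you’re wondering what the δ notation means quantitatively: δ18O is the relative deviation of a sample’s 18O/16O ratio from a reference standard, expressed in parts per thousand (‘per mil’). Here’s a tiny Python illustration; the VPDB reference ratio in the comment is a commonly quoted value, not a number taken from the graphs above:

```python
def delta_18O(r_sample, r_standard):
    """Per-mil deviation of a sample's 18O/16O ratio from a standard."""
    return (r_sample / r_standard - 1) * 1000

# For carbonates the usual standard is VPDB; its 18O/16O ratio is
# roughly 0.0020672 (a commonly quoted value, used here just for illustration).
r_vpdb = 0.0020672

# The standard itself sits at 0 per mil, by definition:
print(delta_18O(r_vpdb, r_vpdb))            # 0.0

# A sample whose ratio is 0.1% above the standard:
print(f"{delta_18O(r_vpdb * 1.001, r_vpdb):.3f} per mil")
```
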

The Pleistocene Epoch, 1.8 – .01 million years ago

By the beginning of the Pleistocene, the Earth’s jerky temperature variations became full-fledged ‘glacial cycles’. In the last million years there have been about ten glacial cycles, though it’s hard to count them in any precise way—it’s like counting mountains in a mountain range:

Now the present is on the right again—but just to keep you on your toes, here up means cold, or at least more oxygen-18. I copied this graph from:

• Barry Saltzman, Dynamical Paleoclimatology: Generalized Theory of Global Climate Change, Academic Press, New York, 2002, fig. 1-4.

We can get some more detail on the last four glacial periods from the change in the amount of deuterium in Vostok and EPICA ice core samples, and also changes in the amount of oxygen-18 in foraminifera (that’s the graph labelled ‘Ice Volume’):

As you can see here, the third-to-last glacial ended about 380,000 years ago. In the warm period that followed, the first signs of Homo neanderthalensis appear about 350,000 years ago, and the first Homo sapiens about 250,000 years ago.

Then, 200,000 years ago, came the second-to-last glacial period: the Wolstonian. This lasted until about 130,000 years ago. Then came a warm period called the Eemian, which lasted until about 110,000 years ago. During the Eemian, Neanderthalers hunted rhinos in Switzerland! It was a bit warmer then than it is now, and sea levels may have been about 4-6 meters higher—worth thinking about, if you’re interested in the effects of global warming.

The last glacial period started around 110,000 years ago. This is called the Wisconsinan or Würm period, depending on location… but let’s just call it the last glacial period.

A lot happened during the last glacial period. Homo sapiens reached the Middle East 100,000 years ago, and arrived in central Asia 50 thousand years ago. The Neanderthalers died out in Asia around that time. They died out in Europe 35 thousand years ago, about when Homo sapiens got there. Anyone notice a pattern?

The oldest cave paintings are 32 thousand years old, and the oldest known calendars and flutes also date back to about this time. It’s striking how many radical innovations go back to about this time.

The glaciers reached their maximum extent around 26 to 18 thousand years ago. There were ice sheets down to the Great Lakes in America, and covering the British Isles, Scandinavia, and northern Germany. Much of Europe was tundra. And so much water was locked up in ice that the sea level was 120 meters lower than it is today!

Then things started to warm up. About 18 thousand years ago, Homo sapiens arrived in America. In Eurasia, people started cultivating plants and herding animals around this time.

There was, however, a shocking setback 12,700 years ago: the Younger Dryas episode, a cold period lasting about 1,300 years. We talked about this in “week304”, so I won’t go into it again here.

The Younger Dryas ended about 11,500 years ago. The last glacial period, and with it the Pleistocene, officially ended 10,000 years ago. Or more precisely: 10,000 BP. Whenever I’ve been saying ‘years ago’, I really mean ‘Before Present’, where the ‘present’, you’ll be amused to learn, is officially set in 1950. Of course the precise definition of ‘the present’ doesn’t matter much for very ancient events, but it would be annoying if a thousand years from now we had to revise all the textbooks to say the Pleistocene ended 11,000 years ago. It’ll still be 10,000 BP.

(But if 1950 was the present, now it’s the future! This could explain why such weird science-fiction-type stuff is happening.)

The Holocene Epoch, .01 – 0 million years ago

As far as geology goes, the Holocene is a rather silly epoch, not like the rest. It’s just a name for the time since the last ice age ended. In the long run it’ll probably be called the Early Anthropocene, since it marks the start of truly massive impacts of Homo sapiens on the biosphere. We may have started killing off species in the late Pleistocene, but now we’re killing more—and changing the climate, perhaps even postponing the next glacial period.

Here’s what the temperature has been doing since 12000 BC:

Finally, here’s a closeup of a tiny sliver of time: the last 2000 years:

In both these graphs, different colored lines correspond to different studies; click for details. The biggish error bars give people lots to argue about, as you may have noticed. But right now I’m more interested in the big picture, and questions like these:

• Why was it so hot in the early Eocene?

• Why has it generally been cooling down ever since the Eocene?

• Why have temperature fluctuations been growing since the Miocene?

• What causes the glacial cycles?

For More

Next time we’ll get into a bit more detail. For now, here are some fun easy things to read.

This is a very enjoyable overview of climate change during the Holocene, and its effect on human civilization:

• Brian Fagan, The Long Summer, Basic Books, New York, 2005. Summary available at Azimuth Library.

These dig a bit further back:

• Chris Turney, Ice, Mud and Blood: Lessons from Climates Past, Macmillan, New York, 2008.

• Steven Mithen, After the Ice: A Global Human History 20,000-5000 BC, Harvard University Press, Cambridge, 2005.

I couldn’t stomach the style of the second one: it’s written as a narrative, with a character named Lubbock travelling through time. But a lot of people like it, and they say it’s well-researched.

For a history of how people discovered and learned about ice ages, try:

• Doug Macdougall, Frozen Earth: The Once and Future Story of Ice Ages, University of California Press, Berkeley, 2004.

For something a bit more technical, but still introductory, try:

• Richard W. Battarbee and Heather A. Binney, Natural Climate Variability and Global Warming: a Holocene Perspective, Wiley-Blackwell, Chichester, 2008.

To learn how this graph was made:

and read a good overview of the Earth’s climate throughout the Cenozoic, read this:

• James Zachos, Mark Pagani, Lisa Sloan, Ellen Thomas and Katharina Billups, Trends, rhythms, and aberrations in global climate 65 Ma to present, Science 292 (27 April 2001), 686-693.

I got the beautiful maps illustrating continental drift from here:

• Christopher R. Scotese, Paleomap Project.

and I urge you to check out this website for a nice visual tour of the Earth’s history.

Finally, I thank Frederik de Roo and Nathan Urban for suggesting improvements to this issue. You can see what they said on the Azimuth Forum. If you join the forum, you too can help write This Week’s Finds! I could really use help from earth scientists, biologists, paleontologists and folks like that: I’m okay at math and physics, but I’m trying to broaden the scope now.


We are at the very beginning of time for the human race. It is not unreasonable that we grapple with problems. But there are tens of thousands of years in the future. Our responsibility is to do what we can, learn what we can, improve the solutions, and pass them on. – Richard Feynman


Heat Wave in the USA

14 July, 2011

It’s hot in the United States! This picture from the NOAA Environmental Visualization Laboratory shows the temperature at 5 pm Eastern Time on the 12th of July:

Half the population is suffering under ‘heat advisories’. These kick in when the heat index—a measure of perceived temperature that takes humidity into account—surpasses 105°F (about 40 °C), or when the nighttime low exceeds 80°F (about 27 °C) for two consecutive nights.
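By the way, the heat index itself comes from a formula: the US National Weather Service uses a polynomial fit known as the Rothfusz regression. Here’s a Python sketch with the standard published coefficients; note that the NWS applies extra correction terms in certain temperature and humidity ranges, which I’m leaving out:

```python
def heat_index(temp_f, rel_humidity):
    """Rothfusz regression for the heat index, in degrees Fahrenheit.
    Valid roughly when the result is above 80 F; the NWS adds
    correction terms in some regimes, omitted here."""
    T, RH = temp_f, rel_humidity
    return (-42.379 + 2.04901523*T + 10.14333127*RH
            - 0.22475541*T*RH - 6.83783e-3*T*T
            - 5.481717e-2*RH*RH + 1.22874e-3*T*T*RH
            + 8.5282e-4*T*RH*RH - 1.99e-6*T*T*RH*RH)

# 90 F at 70% relative humidity is already past the 105 F advisory threshold:
print(f"{heat_index(90, 70):.0f} F")   # about 106 F
```
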

Here’s what the Capital Weather Gang has to say:

East of the continental divide, it’s difficult to escape today’s searing heat. NOAA reported that as of 1 p.m., heat advisories or excessive heat warnings affected 150 million Americans in 23 states. Washington, D.C. had been under a heat advisory earlier today, but it was canceled when it became clear temperatures would fall just below advisory criteria.

Almost all of the south central and southeast states have seen heat indices exceed 105 degrees Tuesday afternoon. Some sample readings at 3 p.m.: Little Rock 109, St. Louis 109, Raleigh 105, Memphis 111, Charleston 108.

In recent days, the searing heat has set scores of new record high temperatures across the eastern two thirds of the country. Yesterday alone, 41 record highs were set including Ft. Smith, Ar. (107), Indianapolis, In. (96), Louisville, Ky. (97), Watertown, Ny. (90), Altoona, Pa. (94), and Charleston, WV (95).

Record high minimum temperatures have been even more pervasive, offering little nighttime relief from the oppressive afternoon heat. On Monday, 132 record high lows were set.

In Louisville, Kentucky this morning, the low dropped to a mere 84 degrees. Meteorologist Eric Fisher at The Weather Channel tweeted: “That. Is. Filthy. Heat Index was still above 100 at 5am.”

Some of the most remarkable heat occurred in the central Plains on July 9 and 10. Oklahoma City reached 110 degrees on the 9th, tying its all-time high for the month. Wichita, Kansas rose to 111 degrees on the 10th, its hottest temperature in 30 years. See CapitalClimate for more on the records which extended into Arkansas and Missouri.

In both Oklahoma City (13 days) and Dallas (10 days), the mercury has reached 100 or better for at least ten straight days. Hot weather is predicted to persist there through the weekend, at least.

Across the country during the month of July, record highs have outnumbered record lows 349 to 68 (or more than 5:1).

Could any of this be related to, umm, global warming? Joe Romm has a blistering critique of the American media’s failure to mention this possibility:

• Joe Romm, After Story on Monster Heat Wave, NBC Asks “What Explains This?” The Answer: “We’re Stuck in a Summer Pattern”!, Climate Progress, 13 July 2011.

Inference is a tricky business. It’s easy to spot patterns where they don’t exist, especially when the patterns are as subtle as an increase in extreme weather events, also known as ‘global weirding’. If there’s a flood, or a drought, we can easily explain it this way. The human mind, after all, is programmed to seek out patterns: we can see faces in clouds.

But it’s also easy to fail to recognize patterns where they do exist—especially when acknowledging them would require difficult changes in behavior. “Am I an alcoholic? No, I just got really drunk last night… and, okay, the night before…”

Earlier this spring, Bill McKibben had a sarcastic editorial about this:

• Bill McKibben, A link between climate change and Joplin tornadoes? Never!, Washington Post, 24 May 2011.

It starts:

Caution: It is vitally important not to make connections. When you see pictures of rubble like this week’s shots from Joplin, Mo., you should not wonder: Is this somehow related to the tornado outbreak three weeks ago in Tuscaloosa, Ala., or the enormous outbreak a couple of weeks before that (which, together, comprised the most active April for tornadoes in U.S. history). No, that doesn’t mean a thing.

It is far better to think of these as isolated, unpredictable, discrete events. It is not advisable to try to connect them in your mind with, say, the fires burning across Texas — fires that have burned more of America at this point this year than any wildfires have in previous years. Texas, and adjoining parts of Oklahoma and New Mexico, are drier than they’ve ever been — the drought is worse than that of the Dust Bowl. But do not wonder if they’re somehow connected.

If you did wonder, you see, you would also have to wonder about whether this year’s record snowfalls and rainfalls across the Midwest — resulting in record flooding along the Mississippi — could somehow be related. And then you might find your thoughts wandering to, oh, global warming, and to the fact that climatologists have been predicting for years that as we flood the atmosphere with carbon we will also start both drying and flooding the planet, since warm air holds more water vapor than cold air.

It’s far smarter to repeat to yourself the comforting mantra that no single weather event can ever be directly tied to climate change. There have been tornadoes before, and floods — that’s the important thing. Just be careful to make sure you don’t let yourself wonder why all these record-breaking events are happening in such proximity — that is, why there have been unprecedented megafloods in Australia, New Zealand and Pakistan in the past year. Why it’s just now that the Arctic has melted for the first time in thousands of years. No, better to focus on the immediate casualties, watch the videotape from the store cameras as the shelves are blown over. Look at the news anchorman standing in his waders in the rising river as the water approaches his chest.

Luckily, scientists are busy at work on these questions. For example, these papers on floods came out in February:

• Pardeep Pall, Tolu Aina, Dáithí Stone, Peter Stott, Toru Nozawa, Arno Hilberts, Dag Lohmann, and Myles Allen, Anthropogenic greenhouse gas contribution to flood risk in England and Wales in autumn 2000, Nature 470 (17 February 2011), 382–385. Supplementary information available for free online.

• Seung-Ki Min, Xuebin Zhang, Francis W. Zwiers and Gabriele C. Hegerl, Human contribution to more-intense precipitation extremes, Nature 470 (17 February 2011), 378-381.

I believe that someday we will understand whether and how extreme weather events are linked to global warming—if not individually, at least statistically. Whether we’ll understand it soon enough for it to make much difference—I’m less sure about that.

Luckily, I’m back in Singapore now, so I don’t personally have to worry about the heat wave in the USA. No heat advisory here! The weather is quite normal, with the heat index a nice cool 100 °F (or 38 °C).


A Quantum of Warmth

2 July, 2011

guest post by Tim van Beek

The Case of the Missing 33 Kelvin, Continued

Last time, when we talked about putting the Earth in a box, we saw that a simple back-of-the-envelope calculation of the energy balance and resulting black body temperature of the earth comes surprisingly close to the right answer.

But there was a gap: the black body temperature calculated with a zero-dimensional energy balance model is about 33 kelvin lower than the estimated average surface temperature on Earth.

In other words, this simplified model predicts an Earth that’s 33 °C colder than it really is!
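You can reproduce the gap in a few lines. The zero-dimensional balance sets absorbed sunlight equal to black body emission, S(1 − a) πR² = 4πR² σT⁴, which gives T = (S(1 − a)/4σ)^(1/4). Here is that calculation in Python, using standard textbook values for the constants rather than whatever was on my envelope last time:

```python
sigma = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S = 1366.0         # solar constant at Earth's orbit, W m^-2 (approximate)
albedo = 0.3       # Earth's overall average albedo

# Energy balance, absorbed = emitted:
#   S (1 - a) pi R^2 = 4 pi R^2 sigma T^4
T_effective = (S * (1 - albedo) / (4 * sigma)) ** 0.25
print(f"black body temperature: {T_effective:.0f} K")   # about 255 K

T_surface = 288.0  # estimated average surface temperature, K
print(f"missing: {T_surface - T_effective:.0f} K")      # the famous 33 K
```
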

In such a situation, as theoretical physicists, we start by taking a bow, patting ourselves on the back, and congratulating ourselves on a successful first approximation.

Then we look for the next most important effect that we need to include in our model.

This effect needs to:

1) have a steady and continuous influence over thousands of years,

2) have a global impact,

3) be rather strong, because heating the planet Earth by 33 kelvin on the average needs a lot of power.

The simplest explanation would of course be that there is something fundamentally wrong with our back-of-the-envelope calculation.

One possibility, as Itai Bar-Natan mentioned last time, is geothermal energy. It certainly matches point 1, maybe matches point 2, but it is hard to guess if it matches point 3. As John pointed out, we can check the Earth’s energy budget on Wikipedia. This suggests that the geothermal heating is very small. Should we trust Wikipedia? I don’t know. We should check it out!

But I will not do that today. Instead I would like to talk about the most prominent explanation:

Most of you will of course have heard about the effect that climate scientists talk about, which is often—but confusingly—called the ‘greenhouse effect’, or ‘back radiation’. However, the term that is most accurate is downward longwave radiation (DLR), so I would like to use that instead.

In order to assess if this is a viable explanation of the missing 33 kelvin, we will first have to understand the effect better. So this is what I will talk about today.

In order to get a better understanding, we will have to peek into our simple model’s box and figure out what is going on in there in more detail.

Peeking into the Box: Surface and Atmosphere

To get a better approximation, instead of treating the whole earth as a black body, we will have to split up the system into the Earth itself, and its atmosphere. For the surface of the Earth it is still a good approximation to say that it is a black body.

The atmosphere is more complicated. As a next approximation, I would like to pretend that the atmosphere is a body of its own, hovering above the surface of the Earth as a separate system. So we will ignore that there are several different layers in the atmosphere doing different things, including interactions with the surface. Well, we are not going to ignore the interaction with the surface completely, as you will see.

Since one can quickly get lost in details when discussing the atmosphere, I’m going to cheat and look up the overall average effects in an introductory meteorology textbook:

• C. Donald Ahrens: Meteorology Today, 9th edition, Brooks/Cole, Florence, Kentucky, 2009.

Here is what atmosphere and Earth’s surface do to the incoming radiation from the Sun (from page 48):

Of 100 units of inbound solar energy flux, 30 are reflected or scattered back to space without a contribution to the energy balance of the Earth. This corresponds to an overall average albedo of 0.3 for the Earth.

The next graphic shows the most important processes of heat and mass transport caused by the remaining 70 units of energy flux, with their overall average effect (from page 49):

Maybe you have some questions about this graphic; I certainly do.

Conduction and Convection?

Introductory classes on partial differential equations sometimes start with the one-dimensional heat equation. This equation describes the temperature distribution in a rod of metal that is heated on one end and kept cool on the other. The kind of heat transfer occurring here is called conduction: the atoms or molecules stay where they are and transfer energy by interacting with their neighbors.

However, heat transfer by conduction is negligible for gases like the atmosphere. Why is it there in the graphic? The answer may be that conduction is still important for boundary layers. Or maybe the author wanted to include it to avoid the question “why is conduction not in the graphic?” I don’t know. But I’ll trust that the number associated with the “convection and conduction” part is correct, for now.
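The rod example can be sketched numerically with an explicit finite-difference scheme. All the numbers below are illustrative choices of mine, not properties of any real rod:

```python
# One-dimensional heat equation u_t = alpha * u_xx on a rod, with one end
# held hot and the other held cool. Explicit finite differences; the time
# step satisfies the stability limit DT <= DX**2 / (2 * ALPHA).
ALPHA = 1e-4   # thermal diffusivity, m^2/s (illustrative)
N = 51         # number of grid points
DX = 0.01      # grid spacing, m
DT = 0.2       # time step, s

u = [0.0] * N
u[0] = 100.0   # hot end; the cool end stays at 0 (boundaries never updated)

for _ in range(20000):
    new = u[:]
    for i in range(1, N - 1):
        new[i] = u[i] + ALPHA * DT / DX ** 2 * (u[i + 1] - 2 * u[i] + u[i - 1])
    u = new

# After enough steps the temperature profile approaches the linear steady
# state, so the midpoint sits halfway between the two ends:
print(round(u[N // 2]))   # about 50
```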

What is Latent Heat?

There is a label “latent heat” on the left part of the atmosphere: latent heat is energy input that does not result in a temperature increase, or energy output that does not result in a temperature decrease. This can happen when there is a phase change of a component of the system. For example, when liquid water at 0°C freezes, it turns into ice at 0°C while losing energy to its environment. But the temperature of the whole system stays at 0°C.

The human body uses this effect, too, when it cools itself by sweating. This cooling works as long as liquid water turns into water vapor, withdrawing energy from the skin in the process.

The picture above shows a forest with water vapor (invisible), liquid water (dispersed in the air as droplets) and snow. As the Sun sets, part of the water vapor will eventually condense, and liquid water will turn into ice, releasing energy to the environment. During these phase changes there is an energy loss without a temperature decrease of the water.
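To get a feeling for the numbers, here is a quick calculation with standard textbook values for water; the values are assumptions of mine, not taken from the meteorology book:

```python
# Energy released when one kilogram of water changes phase.
L_FUSION = 334e3         # J/kg released when liquid water freezes
L_VAPORIZATION = 2.26e6  # J/kg released when water vapor condenses (at 100 C)
C_WATER = 4186.0         # J/(kg K), specific heat of liquid water

# Condensing 1 kg of vapor releases as much energy as cooling that same
# kilogram of liquid water by several hundred kelvin would:
equivalent_cooling = L_VAPORIZATION / C_WATER
print(round(equivalent_cooling))  # about 540 K
```

This is why the latent heat carried upward by evaporating and condensing water is such a big item in the energy budget graphic.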

Downward Longwave Radiation

When there is a lot of light there are also dark shadows. — main character in Johann Wolfgang von Goethe’s Götz von Berlichingen

Last time we pretended that the Earth as a whole behaves like a black body.

Now that we split up the Earth into surface and atmosphere, you may notice that:

a) a lot of sunlight passes through the atmosphere and reaches the surface, and

b) there is a lot of energy flowing downwards from the atmosphere to the surface in form of infrared radiation. This is called downward longwave radiation.

Observation a) shows that the atmosphere does not act like a black body at all. Instead, it has a nonzero transmittance, which means that not all incoming radiation is absorbed.

Observation b) shows that assuming that the black body temperature of the Earth is equal to the average surface temperature could go wrong, because—from the viewpoint of the surface—there is an additional inbound energy flux from the atmosphere.
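A standard toy model, not taken from the graphic above, shows how observation b) changes the energy balance: pretend the atmosphere is a single layer that lets all sunlight through but absorbs all infrared from the surface, re-emitting half of it upward and half downward. Balancing the fluxes then pushes the surface temperature above the bare black-body value:

```python
# Toy single-layer atmosphere: transparent to sunlight, opaque to infrared.
# Top of atmosphere: the layer's emission sigma*T_a^4 balances the absorbed
# sunlight, so T_a equals the effective temperature T_e of the simple model.
# Surface: receives sunlight plus the downward layer emission, so
# sigma*T_s^4 = 2 * sigma*T_e^4, i.e. T_s = 2**0.25 * T_e.
T_EFFECTIVE = 255.0   # K, black-body temperature from the simple model

T_surface = 2 ** 0.25 * T_EFFECTIVE
print(round(T_surface))   # about 303 K
```

The toy model overshoots the observed 288 K, as it should: a real atmosphere is not completely opaque to infrared. But the sign and the order of magnitude of the effect come out right.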

The reason for both observations is that the atmosphere consists of various gases, like O2, N2, H2O (water vapor) and CO2. Any gas molecule can absorb and emit radiation only at certain frequencies, which are called its emission spectrum. This fact led to the development of quantum mechanics, which can be used to calculate the emission spectrum of any molecule.

Molecules and Degrees of Freedom

When a photon hits a molecule, the molecule can absorb the photon and gain energy in three main ways:

• One of its electrons can climb to a higher energy level.

• The molecule can vibrate more strongly.

• The molecule can rotate more rapidly.

To get a first impression of the energy levels involved in these three processes, let’s have a look at this graphic:

This is taken from the book

• Sune Svanberg, Atomic and Molecular Spectroscopy: Basic Aspects and Practical Applications, 4th edition, Advanced Texts in Physics, Springer, Berlin, 2004.

The y-axis shows the energy difference in ‘eV’, or ‘electron volts’. An electron volt is the amount of energy an electron gains or loses as it moves through an electric potential difference of one volt.

According to quantum mechanics, a molecule can emit and absorb only photons whose energy matches the difference between two of the discrete energy levels in the graphic, for any one of the three processes.

It is possible to use the characteristic absorption and emission properties of molecules of different chemical species to analyze the chemical composition of an unknown sample of gases (and of other materials, too). These methods usually have names involving the word ‘spectroscopy’. For example, infrared spectroscopy comprises methods that examine what happens to infrared radiation when you send it through your sample.

By the way, Wikipedia has a funny animated picture of the different vibrational modes of a molecule on the page about infrared spectroscopy.

But why does so much of the Sun’s radiation pass through the atmosphere, while a lot of the infrared radiation emitted by the Earth bounces back to the surface instead? The answer to this puzzle involves a specific property of certain components of the atmosphere.

Can You See an Electron Hopping?

Here is a nice overview of the spectrum of electromagnetic radiation:

The energy E and the wavelength \lambda of a photon have a very simple relationship:

\displaystyle{ E = \frac{c \; h}{\lambda}}

where h is Planck’s constant and c is the speed of light. In short, photons with longer wavelengths have less energy.

Planck’s constant is

h \approx 4.1 \times 10^{-15} \; eV \times s

while the speed of light is

c \approx 3 \times 10^{8} \; m/s

Plugging these into the formula, we find that a photon with an energy of one electron volt has a wavelength of about 1.2 micrometers, which is just outside the visible range, a bit towards the infrared. The visible range corresponds to about 1.6 to 3.4 electron volts. If you want, you can scroll up to the graphic with the energy levels and calculate which processes will result in which kind of radiation.
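The one-electron-volt example is a one-liner to check, assuming the standard textbook values h ≈ 4.14 × 10⁻¹⁵ eV·s and c ≈ 3 × 10⁸ m/s:

```python
# Wavelength of a photon from its energy, via E = h * c / lambda.
H = 4.136e-15   # Planck's constant in eV s
C = 3.0e8       # speed of light in m/s

def wavelength_micrometers(energy_ev):
    """Wavelength in micrometers of a photon with the given energy in eV."""
    return H * C / energy_ev * 1e6

print(wavelength_micrometers(1.0))   # about 1.24 micrometers
print(wavelength_micrometers(3.4))   # upper edge of the visible range, about 0.36
```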

Electrons that take a step down the orbital ladder in an atom emit a photon. Depending on the atom and the kind of transition, some of those photons will be in the visible range, and some will be in the ultraviolet.

There is no Infrared from the Sun (?)

From the Planck distribution, we can determine that the Sun and Earth, which are approximately black bodies, emit radiation mostly at very different wavelengths:

This graphic is sometimes called ‘twin peak graph’.

Oversimplifying, we could say: the Earth emits infrared radiation; the Sun emits almost no infrared. So, if you find infrared radiation on Earth, you can be sure that it did not come from the Sun.

The problem with this statement is that, strictly speaking, the Sun does emit some radiation at wavelengths in the infrared range. This is why people have come up with the term near-infrared radiation, defined to be the range of 0.85 to 5.0 micrometer wavelengths; radiation with longer wavelengths is called far infrared. With these definitions we can say that the Sun radiates in the near-infrared range, and the Earth does not.
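The location of the two peaks can be estimated from Wien’s displacement law, λ_max = b/T. The temperatures below are rounded standard values for the Sun’s photosphere and the Earth’s surface, chosen by me for illustration:

```python
# Wien's displacement law: peak emission wavelength of a black body.
WIEN_B = 2.898e-3    # m K, Wien displacement constant

def peak_wavelength_um(temperature_k):
    """Peak emission wavelength in micrometers for a black body."""
    return WIEN_B / temperature_k * 1e6

print(round(peak_wavelength_um(5800), 2))   # Sun: about 0.5 micrometers (visible)
print(round(peak_wavelength_um(288), 1))    # Earth: about 10.1 micrometers (far infrared)
```

The two peaks sit a factor of twenty apart in wavelength, which is why the ‘twin peak graph’ shows almost no overlap.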

Only certain components of the atmosphere emit and absorb radiation in the infrared part. These are called—somewhat misleadingly—greenhouse gases. I would like to call them ‘infrared-active gases’ instead, but unfortunately the ‘greenhouse gas’ misnomer is very popular. Two prominent ones are H2O and CO2:

The atmospheric window at 8 to 12 μm is quite transparent, which means that this radiation passes from the surface through the atmosphere into space without much ado. Therefore, this window is used by satellites to estimate the surface temperature.

Since most radiation coming from the Earth is infrared, and only some minor constituents of the atmosphere react to it, a small amount of, say, CO2 could have a lot of influence on the energy balance. Like being the only one in a group of hundreds with a boom box. But we should check that more thoroughly.

Can a Cold Body Warm a Warmer Body?

Downward longwave radiation warms the surface, but the atmosphere is colder than the surface, so how can radiation from the colder atmosphere result in a higher surface temperature? Doesn’t that violate the second law of thermodynamics?

The answer is: no, it does not. It turns out that others have already taken pains to explain this on the blogosphere, so I’d like to point you there instead of trying to do a better job here:

• Roy Spencer, Yes, Virginia, cooler objects can make warmer objects even warmer still, 23 July 2010.

• The Science of Doom, The amazing case of “back-radiation”, 27 July 2010.

It’s the Numbers, Stupid!

Shut up and calculate! — Leitmotiv of several prominent physicists after becoming exhausted by philosophical discussions about the interpretation of quantum mechanics.

Maybe by now we have succeeded in convincing the imaginary advisory board of the zero-dimensional energy balance model project that there really is an effect like ‘downward longwave radiation’. It certainly should be there if quantum mechanics is right. But I have not yet explained how big it is. According to the book Meteorology Today, it is big. But maybe the people who contributed to the graphic got fooled somehow, and there really is a different explanation for the case of the missing 33 kelvin.

What do you think?

When we dip our toes into a new topic, it is important to keep simple yet fundamental questions like this in mind, and keep asking them.

In this case we are lucky: it is possible to measure the amount of downward longwave radiation. There are a lot of field studies, and the results have been incorporated in global climate models. But we will have to defer this story to another day.

