## Tipping Points in Climate Systems

4 March, 2013

If you’ve just recently gotten a PhD, you can get paid to spend a week this summer studying tipping points in climate systems!

They’re having a program on this at ICERM: the Institute for Computational and Experimental Research in Mathematics, in Providence, Rhode Island. It’s happening from July 15th to 19th, 2013. But you have to apply soon, by the 15th of March!

For details, see below. But first, a word about tipping points… in case you haven’t thought about them much.

### Tipping Points

A tipping point occurs when adjusting some parameter of a system causes it to transition abruptly to a new state. The term refers to a well-known example: as you push more and more on a glass of water, it gradually leans over further until you reach the point where it suddenly falls over. Another familiar example is pushing on a light switch until it ‘flips’ and the light turns on.

In the Earth’s climate, a number of tipping points could cause abrupt climate change:

They include:

• Loss of Arctic sea ice.
• Melting of the Greenland ice sheet.
• Melting of the West Antarctic ice sheet.
• Permafrost and tundra loss, leading to the release of methane.
• Boreal forest dieback.
• Amazon rainforest dieback.
• West African monsoon shift.
• Indian monsoon chaotic multistability.
• Change in El Niño amplitude or frequency.
• Change in formation of Atlantic deep water.
• Change in the formation of Antarctic bottom water.

• T. M. Lenton, H. Held, E. Kriegler, J. W. Hall, W. Lucht, S. Rahmstorf, and H. J. Schellnhuber, Tipping elements in the Earth’s climate system, Proceedings of the National Academy of Sciences 105 (2008), 1786–1793.

Mathematicians are getting interested in how to predict when we’ll hit a tipping point:

• Peter Ashwin, Sebastian Wieczorek and Renato Vitolo, Tipping points in open systems: bifurcation, noise-induced and rate-dependent examples in the climate system, Phil. Trans. Roy. Soc. A 370 (2012), 1166–1184.

Abstract: Tipping points associated with bifurcations (B-tipping) or induced by noise (N-tipping) are recognized mechanisms that may potentially lead to sudden climate change. We focus here on a novel class of tipping points, where a sufficiently rapid change to an input or parameter of a system may cause the system to “tip” or move away from a branch of attractors. Such rate-dependent tipping, or R-tipping, need not be associated with either bifurcations or noise. We present an example of all three types of tipping in a simple global energy balance model of the climate system, illustrating the possibility of dangerous rates of change even in the absence of noise and of bifurcations in the underlying quasi-static system.
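The bifurcation-induced tipping (B-tipping) mentioned in this abstract is easy to illustrate with a toy model. Here is a minimal Python sketch (my own construction for illustration, not the energy balance model from the paper): the bistable system $dx/dt = x - x^3 + \lambda$ loses its lower equilibrium at a fold bifurcation when $\lambda$ reaches $2/(3\sqrt{3}) \approx 0.385$, so slowly ramping $\lambda$ past that value makes the state jump abruptly to the upper branch.

```python
import numpy as np

# B-tipping sketch: dx/dt = x - x^3 + lam is bistable for small lam.
# Its lower stable equilibrium disappears in a saddle-node (fold)
# bifurcation at lam_c = 2/(3*sqrt(3)) ~ 0.385; ramping lam slowly
# past lam_c makes the state "tip" abruptly to the upper branch.

def simulate(lam_values, x0=-1.0, dt=0.01, steps_per_lam=200):
    x = x0
    trace = []
    for lam in lam_values:
        for _ in range(steps_per_lam):
            x += dt * (x - x**3 + lam)   # forward Euler step
        trace.append(x)
    return np.array(trace)

lams = np.linspace(0.0, 0.6, 61)    # slow ramp of the control parameter
xs = simulate(lams)

lam_c = 2 / (3 * np.sqrt(3))        # fold bifurcation point
print(f"critical parameter ~ {lam_c:.3f}")
print("state before fold:", xs[lams < 0.3][-1])  # still on the lower branch
print("state after fold: ", xs[-1])              # jumped to the upper branch
```

Note that nothing noisy or fast happens here: the abrupt jump comes purely from the disappearance of an attractor as the parameter drifts.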

We can test out these theories using actual data:

• J. Thompson and J. Sieber, Predicting climate tipping points as a noisy bifurcation: a review, International Journal of Bifurcation and Chaos 21 (2011), 399–423.

Abstract: There is currently much interest in examining climatic tipping points, to see if it is feasible to predict them in advance. Using techniques from bifurcation theory, recent work looks for a slowing down of the intrinsic transient responses, which is predicted to occur before an instability is encountered. This is done, for example, by determining the short-term auto-correlation coefficient ARC in a sliding window of the time series: this stability coefficient should increase to unity at tipping. Such studies have been made both on climatic computer models and on real paleoclimate data preceding ancient tipping events. The latter employ reconstituted time-series provided by ice cores, sediments, etc., and seek to establish whether the actual tipping could have been accurately predicted in advance. One such example is the end of the Younger Dryas event, about 11,500 years ago, when the Arctic warmed by 7 °C in 50 years. A second gives an excellent prediction for the end of ‘greenhouse’ Earth about 34 million years ago when the climate tipped from a tropical state into an icehouse state, using data from tropical Pacific sediment cores. This prediction science is very young, but some encouraging results are already being obtained. Future analyses will clearly need to embrace both real data from improved monitoring instruments, and simulation data generated from increasingly sophisticated predictive models.
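The sliding-window autocorrelation indicator described in this abstract can be sketched in a few lines. The following Python snippet is an illustrative reconstruction, not the authors' code: it computes the lag-1 autocorrelation coefficient (the "ARC") in a sliding window of a synthetic time series that drifts toward instability, and the coefficient indeed rises toward unity as tipping approaches.

```python
import numpy as np

# Early-warning indicator sketch: lag-1 autocorrelation in a sliding
# window should rise toward 1 as a system approaches a tipping point
# (critical slowing down).

def lag1_autocorrelation(x):
    x = x - x.mean()
    return np.dot(x[:-1], x[1:]) / np.dot(x, x)

def sliding_arc(series, window):
    return np.array([lag1_autocorrelation(series[i:i + window])
                     for i in range(len(series) - window + 1)])

# Synthetic example: an AR(1) process x_{t+1} = a*x_t + noise whose
# coefficient a drifts toward 1, mimicking an approach to instability.
rng = np.random.default_rng(0)
n = 2000
a = np.linspace(0.2, 0.97, n)
x = np.zeros(n)
for t in range(n - 1):
    x[t + 1] = a[t] * x[t] + rng.normal()

arc = sliding_arc(x, window=400)
print("ARC in first window:", arc[0])    # low: far from tipping
print("ARC in last window: ", arc[-1])   # near 1: tipping is close
```

On real paleoclimate records the same computation is applied to detrended proxy data, but the idea is exactly this.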

The next paper is interesting because it studies tipping points experimentally by manipulating a lake. Doing this lets us study another important question: when can you push a system back to its original state after it’s already tipped?

• S. R. Carpenter, J. J. Cole, M. L. Pace, R. Batt, W. A. Brock, T. Cline, J. Coloso, J. R. Hodgson, J. F. Kitchell, D. A. Seekell, L. Smith, and B. Weidel, Early warnings of regime shifts: a whole-ecosystem experiment, Science 332 (2011), 1079–1082.

Abstract: Catastrophic ecological regime shifts may be announced in advance by statistical early-warning signals such as slowing return rates from perturbation and rising variance. The theoretical background for these indicators is rich but real-world tests are rare, especially for whole ecosystems. We tested the hypothesis that these statistics would be early-warning signals for an experimentally induced regime shift in an aquatic food web. We gradually added top predators to a lake over three years to destabilize its food web. An adjacent lake was monitored simultaneously as a reference ecosystem. Warning signals of a regime shift were evident in the manipulated lake during reorganization of the food web more than a year before the food web transition was complete, corroborating theory for leading indicators of ecological regime shifts.

### IdeaLab program

If you’re seriously interested in this stuff, and you recently got a PhD, you should apply to IdeaLab 2013, which is a program happening at ICERM from the 15th to the 19th of July, 2013. Here’s the deal:

The Idea-Lab invites 20 early career researchers (postdoctoral candidates and assistant professors) to ICERM for a week during the summer. The program will start with brief participant presentations on their research interests in order to build a common understanding of the breadth and depth of expertise. Throughout the week, organizers or visiting researchers will give comprehensive overviews of their research topics. Organizers will create smaller teams of participants who will discuss, in depth, these research questions, obstacles, and possible solutions. At the end of the week, the teams will prepare presentations on the problems at hand and ideas for solutions. These will be shared with a broad audience including invited program officers from funding agencies.

Two Research Project Topics:

• Tipping Points in Climate Systems (MPE2013 program)

• Towards Efficient Homomorphic Encryption

IdeaLab Funding Includes:

• Travel support

• Six nights accommodations

• Meal allowance

The Application Process:

IdeaLab applicants should be at an early stage of their post-PhD career. Applications for the 2013 IdeaLab are being accepted through MathPrograms.org.

Application materials will be reviewed beginning March 15, 2013.

## Successful Predictions of Climate Science

5 February, 2013

guest post by Steve Easterbrook

In December I went to the 2012 American Geophysical Union Fall Meeting. I’d like to tell you about the Tyndall lecture given by Ray Pierrehumbert, on “Successful Predictions”. You can watch the whole talk here:

But let me give you a summary, with some references.

Ray’s talk spanned 120 years of research on climate change. The key message is that science is a long, slow process of discovery, in which theories (and their predictions) tend to emerge long before they can be tested. We often learn just as much from the predictions that turned out to be wrong as we do from those that were right. But successful predictions eventually form the body of knowledge that we can be sure about, not just because they were successful, but because they build up into a coherent explanation of multiple lines of evidence.

Here are the successful predictions:

1896: Svante Arrhenius correctly predicts that increases in fossil fuel emissions would cause the earth to warm. At that time, much of the theory of how atmospheric heat transfer works was missing, but nevertheless, he got a lot of the process right. He was right that surface temperature is determined by the balance between incoming solar energy and outgoing infrared radiation, and that the balance that matters is the radiation budget at the top of the atmosphere. He knew that the absorption of infrared radiation was due to CO2 and water vapour, and he also knew that CO2 is a forcing while water vapour is a feedback. He understood the logarithmic relationship between CO2 concentrations in the atmosphere and surface temperature. However, he got a few things wrong too. His attempt to quantify the enhanced greenhouse effect was incorrect, because he worked with a 1-layer model of the atmosphere, which cannot capture the competition between water vapour and CO2, and doesn’t account for the role of convection in determining air temperatures. His calculations were incorrect because he had the wrong absorption characteristics of greenhouse gases. And he thought the problem would be centuries away, because he didn’t imagine an exponential growth in use of fossil fuels.
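The logarithmic relationship Arrhenius understood is still in use today. As a rough illustration in Python (note that the 5.35 W/m² forcing coefficient and the 3 °C sensitivity are modern textbook values, added here for concreteness, not Arrhenius's own numbers):

```python
import math

# Modern form of the logarithmic CO2 relationship: radiative forcing
# dF = 5.35 * ln(C/C0) W/m^2, and equilibrium warming dT = S * log2(C/C0)
# for a climate sensitivity S per doubling of CO2. The 5.35 coefficient
# and S = 3 are later textbook values, not Arrhenius's.

def co2_forcing(c, c0=280.0):
    """Radiative forcing (W/m^2) when CO2 rises from c0 to c, in ppm."""
    return 5.35 * math.log(c / c0)

def warming(c, sensitivity=3.0, c0=280.0):
    """Equilibrium warming (deg C) given a sensitivity per doubling."""
    return sensitivity * math.log2(c / c0)

# Each doubling adds the same forcing, regardless of the starting level:
print(co2_forcing(560))                        # 280 -> 560 ppm
print(co2_forcing(1120) - co2_forcing(560))    # 560 -> 1120 ppm: the same
print(warming(560))                            # one doubling: 3.0
```

The logarithm is why climate sensitivity is quoted "per doubling of CO2" rather than per ppm.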

Arrhenius, as we now know, was way ahead of his time. Nobody really considered his work again for nearly 50 years, a period we might think of as the dark ages of climate science. The story perfectly illustrates Paul Hoffman’s tongue-in-cheek depiction of how scientific discoveries work: someone formulates the theory, other scientists then reject it, ignore it for years, eventually rediscover it, and finally accept it. These “dark ages” weren’t really dark, of course—much good work was done in this period. For example:

• 1900: Frank Very worked out the radiation balance, and hence the temperature, of the moon. His results were confirmed by Pettit and Nicholson in 1930.

• 1902-14: Arthur Schuster and Karl Schwarzschild used a 2-layer radiative-convective model to explain the structure of the sun.

• 1907: Robert Emden realized that a similar radiative-convective model could be applied to planets, and Gerard Kuiper and others applied this to astronomical observations of planetary atmospheres.

This work established the standard radiative-convective model of atmospheric heat transfer. This treats the atmosphere as two layers; in the lower layer, convection is the main heat transport, while in the upper layer, it is radiation. A planet’s outgoing radiation comes from this upper layer. However, up until the early 1930s, there was no discussion in the literature of the role of carbon dioxide, despite occasional discussion of climate cycles. In 1928, George Simpson published a memoir on atmospheric radiation, which assumed water vapour was the only greenhouse gas, even though, as Richardson pointed out in a comment, there was evidence that even dry air absorbed infrared radiation.
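The balance underlying this standard model can be sketched with a back-of-the-envelope calculation. Here is a zero-dimensional Python version (the solar constant and albedo are standard textbook values, added here for illustration, not numbers from the post):

```python
# Zero-dimensional radiative balance: the planet's outgoing radiation
# comes from an upper emitting layer, whose temperature is fixed by
# balancing absorbed sunlight against emitted infrared.

SIGMA = 5.670e-8   # Stefan-Boltzmann constant, W m^-2 K^-4
S0 = 1361.0        # solar constant, W/m^2 (textbook value)
ALBEDO = 0.3       # fraction of sunlight reflected (textbook value)

def emitting_temperature(solar_constant=S0, albedo=ALBEDO):
    """Temperature of the layer whose emission balances absorbed sunlight."""
    absorbed = solar_constant * (1 - albedo) / 4   # averaged over the sphere
    return (absorbed / SIGMA) ** 0.25

print(emitting_temperature())   # roughly 255 K
```

This comes out near 255 K, well below the roughly 288 K surface; the gap between the emitting layer and the surface is precisely what the greenhouse effect, with convection below, has to account for.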

1938: Guy Callendar is the first to link observed rises in CO2 concentrations with observed rises in surface temperatures. But Callendar failed to revive interest in Arrhenius’s work, and made a number of mistakes in things that Arrhenius had gotten right. Callendar’s calculations focused on the radiation balance at the surface, whereas Arrhenius had (correctly) focussed on the balance at the top of the atmosphere. Also, he neglected convective processes, which astrophysicists had already resolved using the radiative-convective model. In the end, Callendar’s work was ignored for another two decades.

1956: Gilbert Plass correctly predicts a depletion of outgoing radiation in the 15 micron band, due to CO2 absorption. This depletion was eventually confirmed by satellite measurements. Plass was one of the first to revisit Arrhenius’s work since Callendar, however his calculations of climate sensitivity to CO2 were also wrong, because, like Callendar, he focussed on the surface radiation budget, rather than the top of the atmosphere.

1961-2: Carl Sagan correctly predicts very thick greenhouse gases in the atmosphere of Venus, as the only way to explain the very high observed temperatures. His calculations showed that greenhouse gases must absorb around 99.5% of the outgoing surface radiation. The composition of Venus’s atmosphere was confirmed by NASA’s Venus probes in 1967-70.

1959: Burt Bolin and Erik Eriksson correctly predict the exponential increase in CO2 concentrations in the atmosphere as a result of rising fossil fuel use. At that time they did not have good data for atmospheric concentrations prior to 1958, hence their hindcast back to 1900 was wrong, but despite this, their projection of changes forward to 2000 was remarkably good.

1967: Suki Manabe and Dick Wetherald correctly predict that warming in the lower atmosphere would be accompanied by stratospheric cooling. They had built the first completely correct radiative-convective implementation of the standard model applied to Earth, and used it to calculate a +2 °C equilibrium warming for doubling CO2, including the water vapour feedback, assuming constant relative humidity. The stratospheric cooling was confirmed in 2011 by Gillett et al.

1975: Suki Manabe and Dick Wetherald correctly predict that the surface warming would be much greater in the polar regions, and that there would be some upper troposphere amplification in the tropics. This was the first coupled general circulation model (GCM), with an idealized geography. This model computed changes in humidity, rather than assuming it, as had been the case in earlier models. It showed polar amplification, and some vertical amplification in the tropics. The polar amplification was measured, and confirmed by Serreze et al. in 2009. However, the height gradient in the tropics hasn’t yet been confirmed (nor has it yet been falsified; see Thorne 2008 for an analysis).

1989: Ron Stouffer et al. correctly predict that the land surface will warm more than the ocean surface, and that the southern ocean warming would be temporarily suppressed due to the slower ocean heat uptake. These predictions are correct, although these models failed to predict the strong warming we’ve seen over the Antarctic Peninsula.

Of course, scientists often get it wrong:

1900: Knut Ångström incorrectly predicts that increasing levels of CO2 would have no effect on climate, because he thought the effect was already saturated. His laboratory experiments weren’t accurate enough to detect the actual absorption properties, and even if they were, the vertical structure of the atmosphere would still allow the greenhouse effect to grow as CO2 is added.

1971: Rasool and Schneider incorrectly predict that atmospheric cooling due to aerosols would outweigh the warming from CO2. However, their model had some important weaknesses, and was shown to be wrong by 1975. Rasool and Schneider fixed their model and moved on. Good scientists acknowledge their mistakes.

1993: Richard Lindzen incorrectly predicts that warming will dry the troposphere, according to his theory that a negative water vapour feedback keeps climate sensitivity to CO2 really low. Lindzen’s work attempted to resolve a long-standing conundrum in climate science. In 1981, the CLIMAP project reconstructed temperatures at the last glacial maximum, and showed very little tropical cooling. This was inconsistent with the general circulation models (GCMs), which predicted substantial cooling in the tropics (e.g. see Broccoli & Manabe 1987). So everyone thought the models must be wrong, and Lindzen attempted to explain the CLIMAP results via a negative water vapour feedback. But then the CLIMAP results started to unravel, and newer proxies demonstrated that it was the CLIMAP data and Lindzen’s theory that were wrong, not the models. Unfortunately, bad scientists don’t acknowledge their mistakes; Lindzen keeps inventing ever more arcane theories to avoid admitting he was wrong.

1995: John Christy and Roy Spencer incorrectly calculate that the lower troposphere is cooling, rather than warming. Again, this turned out to be wrong, once errors in satellite data were corrected.

In science, it’s okay to be wrong, because exploring why something is wrong usually advances the science. But sometimes, theories are published that are so bad, they are not even wrong:

2007: Courtillot et al. predicted a connection between cosmic rays and climate change. But they couldn’t even get the sign of the effect consistent across the paper. You can’t falsify a theory that’s incoherent! Scientists label this kind of thing as “Not even wrong”.

Finally, there are, of course, some things that scientists didn’t predict. The most important of these is probably the multi-decadal fluctuations in the warming signal. If you calculate the radiative effect of all greenhouse gases, and the delay due to ocean heating, you still can’t reproduce the flat period in the temperature trend that was observed in 1950–1970. While this wasn’t predicted, we ought to be able to explain it after the fact. Currently, there are two competing explanations. The first is that the ocean heat uptake itself has decadal fluctuations, although models don’t show this. If climate sensitivity is at the low end of the likely range (say 2 °C per doubling of CO2), it’s possible we’re seeing a decadal fluctuation around a warming signal. The other explanation is that aerosols took some of the warming away from greenhouse gases. This explanation requires a higher value for climate sensitivity (say around 3 °C), but with a significant fraction of the warming counteracted by an aerosol cooling effect. If this explanation is correct, it’s a much more frightening world, because it implies much greater warming as CO2 levels continue to increase. The truth is probably somewhere between these two. (See Armour & Roe, 2011 for a discussion.)

To conclude, climate scientists have made many predictions about the effect of increasing greenhouse gases that have proven to be correct. They have earned a right to be listened to, but is anyone actually listening? If we fail to act upon the science, will future archaeologists wade through AGU abstracts and try to figure out what went wrong? There are signs of hope—in his re-election acceptance speech, President Obama revived his pledge to take action, saying “We want our children to live in an America that isn’t threatened by the destructive power of a warming planet.”

## Milankovich vs the Ice Ages

30 January, 2013

guest post by Blake Pollard

Hi! My name is Blake S. Pollard. I am a physics graduate student working under Professor Baez at the University of California, Riverside. I studied Applied Physics as an undergraduate at Columbia University. As an undergraduate my research was more on the environmental side; working as a researcher at the Water Center, a part of the Earth Institute at Columbia University, I developed methods using time-series satellite data to keep track of irrigated agriculture over northwestern India for the past decade.

I am passionate about physics, but have the desire to apply my skills in more terrestrial settings. That is why I decided to come to UC Riverside and work with Professor Baez on some potentially more practical cross-disciplinary problems. Before starting work on my PhD I spent a year surfing in Hawaii, where I also worked in experimental particle physics at the University of Hawaii at Manoa. My current interests (besides passing my classes) lie in exploring potential applications of the analogy between information and entropy, as well as in understanding parallels between statistical, stochastic, and quantum mechanics.

Glacial cycles are one essential feature of Earth’s climate dynamics over timescales on the order of hundreds of kiloyears (kyr). It is often accepted as common knowledge that these glacial cycles are in some way forced by variations in the Earth’s orbit. In particular, many have argued that the approximate 100 kyr period of glacial cycles corresponds to variations in the Earth’s eccentricity. As we saw in Professor Baez’s earlier posts, while the variation of eccentricity does affect the total insolation arriving at Earth, this variation is small. Thus many have proposed the existence of a nonlinear mechanism by which such small variations become amplified enough to drive the glacial cycles. Others have proposed that eccentricity is not primarily responsible for the 100 kyr period of the glacial cycles.

Here is a brief summary of some time series analysis I performed in order to better understand the relationship between the Earth’s Ice Ages and the Milankovich cycles.

I used publicly available data on the Earth’s orbital parameters computed by André Berger (see below for all references). This data includes an estimate of the insolation derived from these parameters, which is plotted below against the Earth’s temperature, as estimated using deuterium concentrations in an ice core from a site in the Antarctic called EPICA Dome C:

As you can see, it’s a complicated mess, even when you click to enlarge it! However, I’m going to focus on the orbital parameters themselves, which behave more simply. Below you can see graphs of three important parameters:

• obliquity (tilt of the Earth’s axis),
• precession (direction the tilted axis is pointing),
• eccentricity (how much the Earth’s orbit deviates from being circular).

You can click on any of the graphs here to enlarge them:

Richard Muller and Gordon MacDonald have argued that another astronomical parameter is important: the angle between the plane of the Earth’s orbit and the ‘invariant plane’ of the solar system. This invariant plane of the solar system depends on the angular momenta of the planets, but roughly coincides with the plane of Jupiter’s orbit, from what I understand. Here is a plot of the orbital plane inclination for the past 800 kyr:

One can see from these plots, or from some spectral analysis, that the main periodicities of the orbital parameters are:

• Obliquity ~ 42 kyr
• Precession ~ 21 kyr
• Eccentricity ~ 100 kyr
• Orbital plane ~ 100 kyr

Of course the curves clearly are not simple sine waves with those frequencies. A Fourier transform tells you the relative power of the different frequencies occurring in a time series, but nothing about how those frequencies change over time: the time dependence is integrated out.

The Gabor transform is a generalization of the Fourier transform, sometimes referred to as the ‘windowed’ Fourier transform. For the Fourier transform:

$\displaystyle{ F(\omega) = \dfrac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(t) e^{-i\omega t} \, dt}$

one may think of $e^{-i\omega t}$, the ‘kernel function’, as the guy acting as your basis element in both spaces. For the Gabor transform, instead of $e^{-i\omega t}$, one defines a family of functions

$g_{(b,\omega)}(t) = e^{i\omega(t-b)}g(t-b)$

where $g \in L^{2}(\mathbb{R})$ is called the window function. Typical windows are square windows and triangular (Bartlett) windows, but the most common is the Gaussian:

$\displaystyle{ g(t)= e^{-kt^2} }$

which is used in the analysis below. The Gabor transform of a function $f(t)$ is then given by

$\displaystyle{ G_{f}(b,\omega) = \int_{-\infty}^\infty f(t) \overline{g(t-b)} e^{-i\omega(t-b)} \, dt }$

Note the output of a Gabor transform, like the Fourier transform, is a complex function. The modulus of this function indicates the strength of a particular frequency in the signal, while the phase carries information about the… well, phase.

For example the modulus of the Gabor transform of

$\displaystyle{ f(t)=\sin\left(\dfrac{2\pi t}{100}\right) }$

is shown below. For these plots I used the R package Rwave, originally written in S by Rene Carmona and Bruno Torresani, and ported to R by Brandon Whitcher.

You can see that the line centered at a frequency of .01 corresponds to the function’s period of 100 time units.
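For readers who don't use R, here is a rough Python reconstruction of the same computation (my own sketch, not the Rwave implementation). It discretizes the Gabor transform above with a Gaussian window, using ordinary rather than angular frequency so the peak lands at .01 as in the plot:

```python
import numpy as np

# Discrete Gabor transform sketch with a Gaussian window g(t) = exp(-k t^2).
# We use ordinary frequency nu (angular frequency w = 2*pi*nu), so for
# f(t) = sin(2*pi*t/100) the modulus should peak near frequency 0.01.

def gabor_modulus(f, t, b, freqs, window_width=100.0):
    """|G_f(b, w)| at one window position b, for an array of frequencies."""
    k = 1.0 / (2 * window_width**2)          # Gaussian decay parameter
    g = np.exp(-k * (t - b)**2)              # real window, so conj(g) = g
    dt = t[1] - t[0]
    # G_f(b,w) = integral of f(t) * conj(g(t-b)) * exp(-i w (t-b)) dt
    kernel = g[None, :] * np.exp(-1j * 2*np.pi*freqs[:, None] * (t - b))
    return np.abs(kernel @ f) * dt

t = np.arange(0.0, 2000.0, 1.0)              # sampling rate: 1 time unit
f = np.sin(2 * np.pi * t / 100)              # period 100 => frequency 0.01
freqs = np.linspace(0.001, 0.05, 200)

modulus = gabor_modulus(f, t, b=1000.0, freqs=freqs)
peak = freqs[np.argmax(modulus)]
print(f"peak frequency ~ {peak:.4f}")        # close to 0.01
```

Sweeping the window position b and stacking the resulting columns gives the two-dimensional modulus plots shown in this post.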

A Fourier transform would do okay for such a function, but consider now a sine wave whose frequency increases linearly. As you can see below, the Gabor transform of such a function shows the linear increase of frequency with time:

The window parameter in both of the above Gabor transforms is 100 time units. Adjusting this parameter affects the vertical blurriness of the Gabor transform. For example, here is the same plot as above, but with window parameters of 300, 200, 100, and 50 time units:

You can see that as you make the window smaller the line gets sharper, but only to a point. When the window becomes smaller than roughly one period of the signal, the line starts to blur again. This makes sense, because you can’t know the frequency of a signal precisely at a precise moment in time… just like you can’t precisely know both the momentum and position of a particle in quantum mechanics! The math is related, in fact.
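This tradeoff is easy to check numerically. The sketch below (an illustration I added, using NumPy's FFT rather than Rwave) windows the same period-100 sine wave with Gaussians of the four widths above and measures how blurred the spectral line is:

```python
import numpy as np

# Time-frequency tradeoff: the spectral line of a windowed sine wave
# broadens as the window narrows. We window f(t) = sin(2*pi*t/100) with
# Gaussians of different widths and count how many frequency bins around
# 0.01 exceed half the peak modulus.

t = np.arange(0.0, 4096.0)
f = np.sin(2 * np.pi * t / 100)

def peak_width(window_width, center=2048.0):
    g = np.exp(-((t - center) / window_width) ** 2)
    spectrum = np.abs(np.fft.rfft(f * g))
    return np.count_nonzero(spectrum > spectrum.max() / 2)

for s in [300, 200, 100, 50]:
    print(f"window {s:4d}: half-maximum line width = {peak_width(s)} bins")
```

The line width grows steadily as the window shrinks: the product of the time blur and the frequency blur is bounded below, exactly as in the position-momentum uncertainty relation.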

Now let’s look at the Earth’s temperature over the past 800 kyr, estimated from the EPICA ice core deuterium concentrations:

When you look at this, first you notice spikes occurring about every 100 kyr. You can also see that the last 5 of these spikes appear to be bigger and more dramatic than the ones occurring before 500 kyr ago. Roughly speaking, each of these spikes corresponds to rapid warming of the Earth, after which occurs slightly less rapid cooling, and then a slow decrease in temperature until the next spike occurs. These are the Earth’s glacial cycles.

At the bottom of the curve, where the temperature is about 4 °C cooler than the mean of this curve, glaciers are forming and extending down across the northern hemisphere. The relatively warm periods at the top of the spikes, about 10 °C hotter than the glacial periods, are called the interglacials. You can see that we are currently in the middle of an interglacial, so the Earth is relatively warm compared to the rest of the glacial cycles.

Now we’ll take a look at the windowed Fourier transform, or the Gabor transform, of this data. The window size for these plots is 300 kyr.

Zooming in a bit, one can see a few interesting features in this plot:

We see one line at a frequency of about .024, which, with a sampling rate of 1 kyr, corresponds to a period of about 42 kyr, close to the period of obliquity. We also see a few things going on around a frequency of .01, corresponding to a 100 kyr period.

The band at .024 appears to be relatively horizontal, indicating an approximately constant frequency. Around the 100 kyr periods there is more going on. At a slightly higher frequency, about .015, there appears to be a band of slowly increasing frequency. Also, around .01 it’s hard to say what is really going on. It is possible that we see a combination of two frequency elements, one increasing, one decreasing, but almost symmetric. This may just be an artifact of the Gabor transform or the window and frequency parameters.

The window size for the plots below is slightly smaller, about 250 kyr. If we put the temperature and obliquity Gabor Transforms side by side, we see this:

It’s clear the lines at .024 line up pretty well.

Doing the same with eccentricity:

Eccentricity does not line up well with temperature in this exercise, though both have bright bands above and below .01.

Now for temperature and orbital inclination:

One sees that the frequencies line up better for this than for eccentricity, but one has to keep in mind that there is a nonlinear transformation performed on the ‘raw’ orbital plane data to project this down into the ‘invariant plane’ of the solar system. While this is physically motivated, it surely nudges the spectrum.

The temperature data clearly has a component with a period of approximately 42 kyr, matching well with obliquity. If you tilt your head a bit you can also see an indication of a fainter response at a frequency a bit above .04, corresponding roughly to a period just below 25 kyr, close to that of precession.

As far as the 100 kyr period goes, which is the periodicity of the glacial cycles, this analysis confirms much of what is already known: namely, that we can’t say for sure. Eccentricity seems to line up well with a periodicity of approximately 100 kyr, but on closer inspection there seem to be some discrepancies if you try to understand the glacial cycles as being forced by variations in eccentricity. The orbital plane inclination has a Gabor transform modulus more similar to that of the temperature data than eccentricity does.

A good next step would be to look at the relative phases of the orbital parameters versus the temperature, but that’s all for now.

If you have any questions or comments or suggestions, please let me know!

### References

The orbital data used above is due to André Berger et al and can be obtained here:

Orbital variations and insolation database, NOAA/NCDC/WDC Paleoclimatology.

The temperature proxy is due to J. Jouzel et al, and it’s based on changes in deuterium concentrations from the EPICA Antarctic ice core dating back over 800 kyr. This data can be found here:

EPICA Dome C – 800 kyr deuterium data and temperature estimates, NOAA Paleoclimatology.

Here are the papers by Muller and MacDonald that I mentioned:

• Richard Muller and Gordon MacDonald, Glacial cycles and astronomical forcing, Science 277 (1997), 215–218.

• Richard Muller and Gordon MacDonald, Spectrum of 100-kyr glacial cycle: orbital inclination, not eccentricity, Proceedings of the National Academy of Sciences 94 (1997), 8329–8334.

They also have a book:

• Richard Muller and Gordon MacDonald, Ice Ages and Astronomical Causes, Springer, Berlin, 2002.

You can also get files of the data I used here:

Berger et al orbital parameter data, with explanatory text here.

Jouzel et al EPICA Dome C temperature data, with explanatory text here.

## Anasazi America (Part 2)

24 January, 2013

Last time I told you a story of the American Southwest, starting with the arrival of small bands of hunters around 10,000 BC. I focused on the Anasazi, or ‘ancient Pueblo people’, and I led up to the Late Basketmaker III Era, from 500 to 750 AD.

The big invention during this time was the bow and arrow. Before then, large animals were killed by darts thrown from slings, which required a lot more skill and luck. But even more important was the continuing growth of agriculture: the cultivation of corn, beans and squash. This fueled a period of dramatic population growth.

But this was just the start!

### The Pueblo I and II Eras

The Pueblo I Era began around 750 AD. At this time people started living in ‘pueblos’: houses with flat roofs held up by wooden poles. Towns became bigger, holding up to 600 people. But these towns typically lasted only 30 years or so. It seems people needed to move when conditions changed.

Starting around 800 AD, the ancient Pueblo people started building ‘great houses’: multi-storied buildings with high ceilings, rooms much larger than those in domestic dwellings, and elaborate subterranean rooms called ‘kivas’. And around 900 AD, people started building houses with stone roofs. We call this the start of the Pueblo II Era.

The center of these developments was the Chaco Canyon area in New Mexico:

Chaco Canyon is 125 kilometers east of Canyon de Chelly.
Unfortunately, I didn’t see it on my trip—I wanted to, but we didn’t have time.

By 950 AD, there were pueblos on every ridge and hilltop of the Chaco Canyon area. Due to the high population density and unpredictable rainfall, this area could no longer provide enough meat to sustain the needs of the local population. Apparently they couldn’t get enough fat, salt and minerals from a purely vegan diet—a shortcoming we have now overcome!

Yet the population continued to grow until 1000 AD. In his book Anasazi America, David Stuart wrote:

Millions of us buy mutual funds, believing the risk is spread among millions of investors and a large “basket” of fund stocks. Millions divert a portion of each hard-earned paycheck to purchase such funds for retirement. “Get in! Get in!” hawk the TV ads. “The market is going up. Historically, it always goes up in the long haul. The average rate of return this century is 9 percent per year!” Every one of us who does that is a Californian at heart, believing in growth, risk, power. It works—until an episode of too-rapid expansion in the market, combined with brutal business competition, threatens to undo it.

That is about what it was like, economically, at Chaco Canyon in the year 1000—rapid agricultural expansion, no more land to be gotten, and deepening competition. Don’t think of it as “romantic” or “primitive”. Think of it as just like 1999 in the United States, when the Dow Jones Industrial Average hit 11,000 and 30 million investors held their breath to see what would happen next.

### The Chaco phenomenon

In 1020 the rainfall became more predictable. There wasn’t more rain, it was simply less erratic. This was good for the ancient Pueblo people. At this point the ‘Chaco phenomenon’ began: an amazing flowering of civilization.

We see this in places like Pueblo Bonito, the largest great house in Chaco Canyon:

Pueblo Bonito was founded in the 800s. But starting in 1020 it grew immensely, and it kept growing until 1120. By this time it had 700 rooms, nearly half devoted to grain storage. It also had 33 kivas, which are the round structures you see here.

But Pueblo Bonito is just one of a dozen great houses built in Chaco Canyon by 1120. About 215 thousand ponderosa pine trees were cut down in this building spree! Stuart estimates that building these houses took over 2 million man-hours of work. They also built about 650 kilometers of roads! Most of these connect one great house to another… but some mysteriously seem to go to ‘nowhere’.

By 1080, however, the summer rainfall had started to decline. And by 1090 there were serious summer droughts lasting for five years. We know this sort of thing from tree rings: there are enough ponderosa logs and the like that archaeologists have built up a detailed year-by-year record.

Thanks to overpopulation and these droughts, Chaco Canyon civilization was in serious trouble at this point, but it charged ahead:

Parts of Chacoan society were already in deep trouble after AD 1050 as health and living conditions progressively eroded in the southern districts’ open farming communities. The small farmers in the south had first created reliable surpluses to be stored in the great houses. Ultimately, it was the increasingly terrible conditions of those farmers, the people who grew the corn, that had made Chacoan society so fatally vulnerable. They simply got back too little from their efforts to carry on.

[....]

Still, the great-house dwellers didn’t merely sit on their hands. As some farms failed, they used farm labor to expand roads, rituals, and great houses. This prehistoric version of a Keynesian growth model apparently alleviated enough of the stresses and strains to sustain growth through the 1070s. Then came the waning rainfall of the 1080s, followed by drought in the 1090s.

Circumstances in farming communities worsened quickly and dramatically with this drought; the very survival of many was at stake. The great-house elites at Chaco Canyon apparently responded with even more roads, rituals, and great houses. This was actually a period of great-house and road infrastructure “in-fill”, both in and near established open communities. In a few years, the rains returned. This could not help but powerfully reinforce the elites’ now well-established, formulaic response to problems.

But roads, rituals, and great houses simply did not do enough for the hungry farmers who produced corn and pottery. As the eleventh century drew to a close, even though the rains had come again, they walked away, further eroding the surpluses that had fueled the system. Imagine it: the elites must have believed the situation was saved, even as more farmers gave up in despair. Inexplicably, they never “exported” the modest irrigation system that had caught and diverted midsummer runoff from the mesa tops at Chaco Canyon and made local fields more productive. Instead, once again the elites responded with the sacred formula—more roads, more rituals, more great houses.

So, Stuart argues that the last of the Chaco Canyon building projects were “the desperate economic reactions of a fragile and frightened society”.

Regardless of whether this is true, we know that starting around 1100 AD, many of the ancient Pueblo people left the Chaco Canyon area. Many moved upland, to places with more rain and snow. Instead of great houses, many returned to building the simpler pit houses of old.

Tribes descending from the ancient Pueblo people still have myths about the decline of the Chaco civilization. While such tales should be taken with a huge grain of salt, they are too fascinating not to repeat. Here are two quotes:

In our history we talk of things that occurred a long time ago, of people who had enormous amounts of power, spiritual power and power over people. I think that those kinds of people lived here in Chaco…. Here at Chaco there were very powerful people who had a lot of spiritual power, and these people probably used their power in ways that caused things to change, and that may have been one of the reasons why the migrations were set to start again, because these people were causing changes that were never meant to occur.

My response to the canyon was that some sensibility other than my Pueblo ancestors had worked on the Chaco great houses. There were the familiar elements such as the nansipu (the symbolic opening into the underworld), kivas, plazas and earth materials, but they were overlain by a strictness and precision of design that was unfamiliar…. It was clear that the purpose of these great villages was not to restate their oneness with the earth but to show the power and specialness of humans… a desire to control human and natural resources… These were men who embraced a social-political-religious hierarchy and envisioned control and power over places, resources and people.

These quotes are from an excellent book on the changing techniques and theories of archaeologists of the American Southwest:

• Stephen H. Lekson, A History of the Ancient Southwest, School for Advanced Research, Santa Fe, New Mexico, 2008.

What these quotes show, I think, is that the sensibility of current-day Pueblo people is very different from that of the people who built the great houses of Chaco Canyon. According to David Stuart, the Chaco civilization was a ‘powerful’ culture, while their descendants became an ‘efficient’ culture:

… a powerful society (or organism) captures more energy and expends (metabolizes) it more rapidly than an efficient one. Such societies tend to be structurally more complex, more wasteful of energy, more competitive, and faster paced than an efficient one. Think of modern urban America as powerful, and you will get the picture. In contrast, an efficient society “metabolizes” its energy more slowly, and so it is structurally less complex, less wasteful, less competitive, and slower. Think of Amish farmers in Pennsylvania or contemporary Pueblo farms in the American Southwest.

In competitive terms, the powerful society has an enormous short-term advantage over the efficient one if enough energy is naturally available to “feed” it, or if its technology and trade can bring in energy rapidly enough to sustain it. But when energy (food, fuel and resources) becomes scarce, or when trade and technology fail, an efficient society is advantageous because its simpler, less wasteful structure is more easily sustained in times of scarcity.

### The Pueblo III Era, and collapse

By 1150 AD, some of the ancient Pueblo people began building cliff dwellings at higher elevations—like Mesa Verde in Colorado, shown above. This marks the start of the Pueblo III Era. But this era lasted a short time. By 1280, Mesa Verde was deserted!

Some of the ruins in Canyon de Chelly also date to the Pueblo III Era. For example, the White House Ruins were built around 1200. Here are some of my pictures of this marvelous place. Click to enlarge:

But again, they were deserted by the end of the Pueblo III Era.

Why did the ancient Pueblo people move to cliff dwellings? And why did they move out so soon?

Nobody is sure. Cliff dwellings are easy to defend against attack. Built into the south face of a cliff, they catch the sun in winter to stay warm—it gets cold here in winter!—but they stay cool when the sun is straight overhead in summer. These are good reasons to build cliff dwellings. But these reasons don’t explain why cliff dwellings were so popular from 1150 to 1280, and then were abandoned!

One important factor seems to be this: there was a series of severe droughts starting around 1275. There were also raids from other tribes: speakers of Na-Dené languages, who eventually became the current-day Navajo inhabitants of this area.

But drought alone may be unable to explain what happened. There have been some fascinating attempts to model the collapse of the Anasazi culture. One is called the Artificial Anasazi Project. It used ‘agent-based modeling’ to study what the ancient Pueblo people did in Long House Valley, Arizona, from 200 to 1300 AD. The Villages Project, a collaboration of Washington State University and the Crow Canyon Archaeological Center, focused on the region near Mesa Verde.

Quoting Stephen Lekson’s book:

Both projects mirrored actual settlement patterns from 800 to 1250 with admirable accuracy. Problems rose, however, with the abandonments of the regions, in both cases after 1250. There were unexplained exceptions, misfits between the models and reality.

Those misfits were not minor. Neither model predicted complete abandonment. Yet it happened. That’s perplexing. In the Scientific American summary of the Long House Valley model, Kohler, Gummerman, and Reynolds write, “We can only conclude that sociopolitical, ideological or environmental factors not included in our model must have contributed to the total depopulation of the valley.” Similar conundrums beset the Villages Project: “None of our simulations terminated with a population decline as dramatic as what actually happened in the Mesa Verde region in the late 1200s.”

These simulation projects look interesting! Of course they leave out many factors, but that’s okay: their failure to predict complete abandonment suggests that one of those omitted factors could be important in understanding the collapse.
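To give a flavor of what ‘agent-based modeling’ means here, here is a toy sketch in Python. It is nothing like the actual Artificial Anasazi or Villages Project models, which use reconstructed year-by-year rainfall and detailed maps of the terrain; every number below is invented purely for illustration:

```python
import random

# A toy agent-based model, only loosely in the spirit of the Artificial
# Anasazi project: each agent is a farming household with a granary, good
# and bad rainfall years drive harvests, and households that run out of
# stored corn leave the valley. All numbers here are invented.

random.seed(42)

def simulate(years=100, households=50):
    granaries = [2.0] * households           # years of stored corn per household
    population = []
    for _ in range(years):
        rainfall = random.gauss(1.0, 0.4)    # 1.0 = a harvest that feeds one household
        survivors = []
        for store in granaries:
            store += max(rainfall, 0.0) - 1.0     # harvest comes in, a year's food goes out
            if store > 0:
                survivors.append(min(store, 3.0))  # a granary holds at most 3 years' corn
        if rainfall > 1.2:                   # in good years some households split off
            survivors.extend([1.0] * (len(survivors) // 10))
        granaries = survivors
        population.append(len(granaries))
    return population

pop = simulate()                             # yearly household counts
```

Even a toy like this shows the characteristic behavior: population tracks the rainfall statistics with a lag, and a run of bad years can empty the valley entirely.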

For more info, click on the links. Also try this short review by the author of a famous book on why civilizations collapse:

• Jared Diamond, Life with the artificial Anasazi, Nature 419 (2002), 567–569.

From this article, here are the simulated versus ‘actual’ populations of the ancient Pueblo people in Long House Valley, Arizona, from 800 to 1350 AD:

The so-called ‘actual’ population is estimated using the number of house sites that were active at a given time, assuming five people per house.
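In code, that estimation rule is a single multiplication (the house count below is a made-up example, not real data from the valley):

```python
# The estimation rule behind the 'actual' curve: active house sites
# times five people per house. The house count is a made-up example.

PEOPLE_PER_HOUSE = 5

def estimated_population(active_house_sites):
    """Population estimate from the count of simultaneously occupied houses."""
    return active_house_sites * PEOPLE_PER_HOUSE

print(estimated_population(120))   # 600
```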

This graph gives a shocking and dramatic ending to our tale! Let’s hope our current-day tale doesn’t end so abruptly, because in abrupt transitions much gets lost. But of course the ancient Pueblo people didn’t disappear. They didn’t all die. They became an ‘efficient’ society: they learned to make do with diminished resources.

## Why It’s Getting Hot

22 January, 2013

The Berkeley Earth Surface Temperature project concludes: carbon dioxide concentration and volcanic activity suffice to explain most of the changes in earth’s surface temperature from 1753 to 2011. Carbon dioxide increase explains most of the warming; volcanic outbursts explain most of the bits of sudden cooling. The fit is not improved by the addition of a term for changes in the behavior of the Sun!

For details, see:

• Robert Rohde, Richard A. Muller, Robert Jacobsen, Elizabeth Muller, Saul Perlmutter, Arthur Rosenfeld, Jonathan Wurtele, Donald Groom and Charlotte Wickham, A new estimate of the average earth surface land temperature spanning 1753 to 2011, Geoinformatics and Geostatics: an Overview 1 (2012).

The downward spikes are explained nicely by volcanic activity. For example, you can see the 1815 eruption of Tambora in Indonesia, which blanketed the atmosphere with ash. 1816 was called The Year Without a Summer: frost and snow were reported in June and July in both New England and Northern Europe! Average global temperatures dropped 0.4–0.7 °C, resulting in major food shortages across the Northern Hemisphere. Similarly, the dip in 1783–1785 seems to be due to the eruption of Grímsvötn in Iceland.

(Carbon dioxide goes up a tiny bit in volcanic eruptions, but that’s mostly irrelevant. It’s the ash and sulfur dioxide, forming sulfuric acid droplets that help block incoming sunlight, that really matter for volcanoes!)

It’s worth noting that they get their best fit if each doubling of carbon dioxide concentration causes a 3.1 ± 0.3°C increase in land temperature. This is consistent with the 2007 IPCC report’s estimate of a 3 ± 1.5°C warming for land plus oceans when carbon dioxide doubles. This quantity is called climate sensitivity, and determining it is very important.

They also get their best fit if each extra 100 gigatonnes of atmospheric sulfates (from volcanoes) cause 1.5 ± 0.5°C of cooling.
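Putting those two best-fit numbers together, the model can be sketched in a few lines of Python. The 280 ppm preindustrial baseline is my assumption for illustration; this reproduces the functional form of the fit, not the BEST team’s actual code:

```python
import math

# A sketch of the two-term fit described above: land temperature anomaly
# as a linear function of log2(CO2) plus a volcanic sulfate term. The
# 280 ppm preindustrial baseline is an assumption for illustration.

SENSITIVITY = 3.1       # °C of warming per doubling of CO2 (their best fit)
SULFATE_COOLING = 1.5   # °C of cooling per 100 gigatonnes of sulfates

def temperature_anomaly(co2_ppm, sulfates_gt, baseline_ppm=280.0):
    """Predicted land temperature anomaly in °C relative to the baseline."""
    warming = SENSITIVITY * math.log2(co2_ppm / baseline_ppm)
    cooling = SULFATE_COOLING * (sulfates_gt / 100.0)
    return warming - cooling

# Doubling CO2 with no volcanic activity gives the full sensitivity:
print(temperature_anomaly(560.0, 0.0))   # 3.1
```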

They also look at the left-over temperature variations that are not explained by this simple model: 3.1°C of warming with each doubling of carbon dioxide, and 1.5°C of cooling for each extra 100 gigatonnes of atmospheric sulfates. Here’s what they get:

The left-over temperature variations, or ‘residuals’, are shown in black, with error bars in gray. On top is the annual data, on bottom you see a 10-year moving average. The red line is an index of the Atlantic Multidecadal Oscillation, a fluctuation in the sea surface temperature in the North Atlantic Ocean with a rough ‘period’ of 70 years.

Apparently the BEST team places more weight on the Atlantic Multidecadal Oscillation than most climate scientists. Most consider the [El Niño Southern Oscillation](http://www.azimuthproject.org/azimuth/show/ENSO) to be more important in explaining global temperature variations! I haven’t seen why the BEST team prefers to focus attention on the Atlantic Multidecadal Oscillation. I’d like to see some more graphs…

## Anasazi America (Part 1)

20 January, 2013

A few weeks ago I visited Canyon de Chelly, which is home to some amazing cliff dwellings. I took a bunch of photos, like this picture of the so-called ‘First Ruin’. You can see them and read about my adventures starting here:

• John Baez, Diary, 21 December 2012.

Here I’d like to talk about what happened to the civilization that built these cliff dwellings! It’s a fascinating tale full of mystery… and it’s full of lessons for the problems we face today, involving climate change, agriculture, energy production, and advances in technology.

First let me set the stage! Canyon de Chelly is in the Navajo Nation, a huge region with its own laws and government, not exactly part of the United States, located at the corners of Arizona, New Mexico, and Utah:

The hole in the middle is the Hopi Reservation. The Hopi are descended from the people who built the cliff dwellings in Canyon de Chelly. Those people are often called the Anasazi, but these days the favored term is ancient Pueblo peoples.

The Hopi speak a Uto-Aztecan language, and so presumably did the Anasazi. Uto-Aztecan speakers were spread out like this shortly before the Europeans invaded:

with a bunch more down in what’s now Mexico. The Navajo are part of a different group, the Na-Dené language group:

So, the Navajo aren’t a big part of the story in this fascinating book:

• David E. Stuart, Anasazi America, U. of New Mexico Press, Albuquerque, New Mexico, 2000.

Let me summarize this story here!

### After the ice

The last Ice Age, called the Wisconsin glaciation, began around 70,000 BC. The glaciers reached their maximum extent about 18,000 BC, with ice sheets down to what are now the Great Lakes. In places the ice was over 1.6 kilometers thick!

Then it started warming up. By 16,000 BC people started cultivating plants and herding animals. Around 12,000 BC, before the land bridge connecting Siberia and Alaska was submerged, people from the so-called Clovis culture came to the Americas.

It seems likely that other people got to America earlier, moving down the Pacific coast before the inland glaciers melted. But even if the Clovis culture didn’t get there first, their arrival was a big deal. They can be traced by their distinctive and elegant spear tips, called Clovis points:

After they arrived, the Clovis people broke into several local cultures, roughly around the time of the Younger Dryas cold spell beginning around 10,800 BC. By 10,000 BC, small bands of hunters roamed the Southwest, first hunting mammoths, huge bison, camels, horses and elk, and later—perhaps because they killed off the really big animals—the more familiar bison, deer, elk and antelopes we see today.

For about 5000 years the population of current-day New Mexico probably fluctuated between 2 and 6 thousand people—a density of just one person per 50 to 150 square kilometers! Changes in culture and climate were slow.

### The Altithermal

Around 5,000 BC, the climate near Canyon de Chelly began to warm up, dry out, and become more strongly seasonal. This epoch is called the ‘Altithermal’. The lush grasslands that once supported huge herds of bison began to disappear in New Mexico, and those bison moved north. By 4,000 BC, the area near Canyon de Chelly became very hot, with summers often reaching 45°C, and sometimes 57°C at the ground’s surface.

The people in this area responded in an interesting way: by focusing much more on gathering, and less on hunting. We know this from their improved tools for processing plants, especially yucca roots. The yucca is now the state flower of New Mexico. Here’s a picture taken by Stan Shebs:

David Stuart writes:

At first this might seem an unlikely response to unremitting heat and aridity. One could argue that the deteriorating climate might first have forced people to reduce their numbers by restricting sex, marriage, and child-bearing so that survivors would have enough game. That might well have been the short-term solution [....] When once-plentiful game becomes scarce, hunter-gatherers typically become extremely conservative about sex and reproduction. [...] But by early Archaic times, the change in focus to plant resources—undoubtedly by necessity—had actually produced a marginally growing population in the San Juan Basin and its margins in spite of climatic adversity.

[....]

Ecologically, these Archaic hunters and gatherers had moved one entire link down the food chain, thereby eliminating the approximately 90-percent loss in food value that occurs when one feeds on an animal that is a plant-eater.

[....]

This is sound ecological behavior—they could not have found a better basic strategy even if they had the advantage of a contemporary university education. Do I attribute this to their genius? No. It is simply that those who stubbornly clung to the traditional big game hunting of their Paleo-Indian ancestors could not prosper, so they left fewer descendants. Those more willing to experiment, or more desperate, fared better, so their behavior eventually became traditional among their more numerous descendants.
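The “approximately 90-percent loss” Stuart mentions is the standard rule of thumb for trophic transfer efficiency. A quick back-of-the-envelope check, with made-up calorie numbers:

```python
# The rough rule of thumb behind Stuart's point: about 90% of food energy
# is lost at each step up the food chain. Calorie numbers are invented.

TRANSFER_EFFICIENCY = 0.10   # fraction of energy passed along per trophic link

def edible_energy(plant_calories, links_up_the_chain):
    """Calories available after the given number of trophic transfers."""
    return plant_calories * TRANSFER_EFFICIENCY ** links_up_the_chain

plants = 100_000.0
print(edible_energy(plants, 0))   # eat the plants directly: 100000.0
print(edible_energy(plants, 1))   # eat the plant-eaters instead: 10000.0
```

So moving one link down the food chain makes roughly ten times as much food energy available from the same landscape.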

### The San Jose Period

By 3,000 BC the Altithermal was ending, big game was returning to the Southwest, yet the people retained their new-found agricultural skills. They also developed a new kind of dart for hunting, the ‘San Jose point’. So, this epoch is called the ‘San Jose period’. Populations rose to maybe about 15 to 30 thousand people in New Mexico, a vast increase over the earlier level of 2-6 thousand. But still, that’s just one person per 10 or 20 square kilometers!

The population increased until around 2,000 BC. At this point population pressures became acute… but two lucky things happened. First, the weather got wetter. Second, corn was introduced from Mexico. The first varieties had very small cobs, but gradually they were improved.

The wet weather lasted until around 500 BC. And at just about this time, beans were introduced, also from Mexico.

Their addition was critical. Corn alone is a costly food to metabolize. Its proteins are incomplete and hard to synthesize. Beans contain large amounts of lysine, the amino acid missing from corn and squash. In reasonable balance, corn, beans and squash together provide complementary amino acids and form the basis of a nearly complete diet. This diet lacks only the salt, fat and mineral nutrients found in most meats to be healthy and complete.

By 500 BC, nearly all the elements for accelerating cultural and economic changes were finally in place—a fairly complete diet that could, if rainfall cooperated, largely replace the traditional foraging one; several additional, modestly larger-cobbed varieties of corn that not only prospered under varying growing conditions but also provided a bigger harvest; a population large enough to invest the labor necessary to plant and harvest; nearly 10 centuries of increasing familiarity with cultigens; and enhanced food-processing and storage techniques. Lacking were compelling reasons to transform an Archaic society accustomed to earning a living with approximately 500 hours of labor a year into one willing to invest the 1,000 to 2,000 hours common to contemporary hand-tool horticulturalists.

Nature then stepped in with one persuasive, though not compelling, reason for people to make the shift.

Namely, droughts! Precipitation became very erratic for about 500 years. People responded in various ways. Some went back to the old foraging techniques. Others improved their agricultural skills, developing better breeds of corn, and tricks for storing water. The latter are the ones whose populations grew.

This led to the Basketmaker culture, where people started living in dugout ‘pit houses’ in small villages. More precisely, the Late Basketmaker II Era lasted from about 50 AD to 500 AD. New technologies included the baskets that gave this culture its name:

Pottery entered the scene around 300 AD. Have you ever thought about how important this is? Before pots, people had to cook corn and beans by putting rocks in fires and then transferring them to holes containing water!

Now, porridge and stews could be put to boil in a pot set directly into a central fire pit. The amount of heat lost and fuel used in the old cooking process—an endless cycle of collecting, heating, transferring, removing and replacing hot stones just to boil a few quarts of water—had always been enormous. By comparison, cooking with pots became quick, easy, and far more efficient. In a world more densely populated, firewood had to be gathered from greater distances. Now, less of it was needed. And there was newer fuel to supplement it—dried corncobs.

Not all the changes were good. Most adult skeletons from this period show damage from long periods spent stooping—either using a stone hoe to tend garden plots, or grinding corn while kneeling. And as they ate more corn and beans and fewer other vegetables, mineral deficiencies became common. Extreme osteoporosis afflicted many of these people: we find skulls that are porous, and broken bones. It reminds me a little of the plague of obesity, with its many side-effects, afflicting modern Americans as we move to a culture where most people work sitting down.

On the other hand, there was a massive growth in population. The number of pit-house villages grew nine-fold from 200 AD to 700 AD!

It must have been an exciting time. In only some 25 generations, these folks had transformed themselves from foragers and hunters with a small economic sideline in corn, beans and squash into semisedentary villagers who farmed and kept up their foraging to fill in the economic gaps.

But this was just the beginning. By 1020, the ancient Pueblo people would begin to build housing complexes that would remain the biggest in North America until the 1880s! This happened in Chaco Canyon, 125 kilometers east of Canyon de Chelly.

Next time I’ll tell you the story of how that happened, and how later, around 1200, these people left Chaco Canyon and started to build cliff dwellings.

For now, I’ll leave you with some pictures I took of the most famous cliff dwelling in Canyon de Chelly: the ‘White House Ruins’. Click to enlarge:

## Our Galactic Environment

27 December, 2012

While I’m focused on the Earth these days, I can’t help looking up and thinking about outer space now and then.

So, let me tell you about the Kuiper Belt, the heliosphere, the Local Bubble—and what may happen when our Solar System hits the next big cloud! Could it affect the climate on Earth?

### New Horizons

We’re going on a big adventure!

New Horizons has already taken great photos of volcanoes on Jupiter’s moon Io. It’s already closer to Pluto than we’ve ever been. And on 14 July 2015 it will fly by Pluto and its moons Charon, Hydra, and Nix!

But that’s just the start: then it will go to see some KBOs!

The Kuiper Belt stretches from the orbit of Neptune to almost twice as far from the Sun. It’s a bit like the asteroid belt, but much bigger: 20 times as wide and 20 – 200 times as massive. But while most asteroids are made of rock and metal, most Kuiper Belt Objects or ‘KBOs’ are composed largely of frozen methane, ammonia and water.

The Earth’s orbit has a radius of one astronomical unit, or AU. The Kuiper Belt goes from 30 AU to 50 AU out. For comparison, the heliosphere, the region dominated by the energetic fast-flowing solar wind, fizzles out around 120 AU. That’s where Voyager 1 is now.

New Horizons will fly through the Kuiper Belt from 2016 to 2020… and, according to plan, its mission will end in 2026. How far out will it be then? I don’t know! Of course it will keep going…

For more see:

### The heliosphere

Here’s a young star zipping through the Orion Nebula. It’s called LL Orionis, and this picture was taken by the Hubble Telescope in February 1995:

The star is moving through the interstellar gas at supersonic speeds. So, when this gas hits the fast wind of particles shooting out from the star, it creates a bow shock half a light-year across. It’s a bit like when a boat moves through the water faster than the speed of water waves.

There’s also a bow shock where the solar wind hits the Earth’s magnetic field. It’s about 17 kilometers thick, and located about 90,000 kilometers from Earth:

For a long time scientists thought there was a bow shock where nearby interstellar gas hit the Sun’s solar wind. But this was called into question this year when a satellite called the Interstellar Boundary Explorer (IBEX) discovered the Solar System is moving slower relative to this gas than we thought!

IBEX isn’t actually going to the edge of the heliosphere—it’s in Earth orbit, looking out. But Voyager 1 seems close to hitting the heliopause, where the Sun’s solar wind comes to a stop. And it’s seeing strange things!

### The Interstellar Boundary Explorer

The Sun shoots out a hot wind of ions moving at 300 to 800 kilometers per second. They form a kind of bubble in space: the heliosphere. These charged particles slow down and stop when they hit the hydrogen and helium atoms in interstellar space. But those atoms can penetrate the heliosphere, at least when they’re neutral—and a near-earth satellite called IBEX, the Interstellar Boundary Explorer, has been watching them! And here’s what IBEX has seen:

In December 2008, IBEX first started detecting energetic neutral atoms penetrating the heliosphere. By October 2009 it had collected enough data to see the ‘IBEX ribbon’: an unexpected arc-shaped region in the sky that has many more energetic neutral atoms than expected. You can see it here!

The color shows how many hundreds of energetic neutral atoms are hitting the heliosphere per second per square centimeter per keV. A keV, or kilo-electron-volt, is a unit of energy. Different atoms are moving with different energies, so it makes sense to count them this way.

You can see how the Voyager spacecraft are close to leaving the heliosphere. You can also see how the interstellar magnetic field lines avoid this bubble. Ever since the IBEX ribbon was detected, the IBEX team has been trying to figure out what causes it. They think it’s related to the interstellar magnetic field. The ribbon has been moving and changing intensity quite a bit in the couple of years they’ve been watching it!

Recently, IBEX announced that our Solar System has no bow shock—a big surprise. Previously, scientists thought the heliosphere created a bow-shaped shock wave in the interstellar gas as it moved along, like that star in the Orion Nebula we just looked at.

### The Local Bubble

Get to know the neighborhood!

I love the names of these nearby stars! Some I knew: Vega, Altair, Fomalhaut, Alpha Centauri, Sirius, Procyon, Denebola, Pollux, Castor, Mizar, Aldebaran, Algol. But many I didn’t: Rasalhague, Skat, Gacrux, Pherkad, Thuban, Phact, Alphard, Wazn, and Algieba! How come none of the science fiction I’ve read uses these great names? Or maybe I just forgot.

The Local Bubble is a bubble of hot interstellar gas 300 light years across, probably blasted out by the supernova that left behind the pulsar Geminga, near the bottom of this picture.

### Geminga

Here’s the sky viewed in gamma rays. A lot come from a blazar 7 billion light years away that erupted in 2005: a supermassive black hole at the center of a galaxy, firing particles in a jet that happens to be aimed straight at us. Some come from nearby pulsars: rapidly rotating neutron stars formed by the collapse of stars that went supernova. The one I want you to think about is Geminga.

Geminga is just 800 light years away from us, and it exploded only 300,000 years ago! That may seem far away and long ago to you, but not to me. The first Neanderthals go back around 350,000 years… and they would have seen this supernova in the daytime, it was so close.

But here’s the reason I want you to think about Geminga. It seems to have blasted out the bubble of hot low-density gas our Solar System finds itself in: the Local Bubble. Astronomers have even detected micrometer-sized interstellar meteor particles coming from its direction!

We may think of interstellar space as all the same—empty and boring—but that’s far from true. The density of interstellar space varies immensely from place to place! The Local Bubble has just 0.05 atoms per cubic centimeter, but the average in our galaxy is about 20 times that, and we’re heading toward some giant clouds that are 2000 to 20,000 times as dense. The fun will start when we hit those…. but more on that later.
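For concreteness, here are those density contrasts worked out. Only the Local Bubble figure and the multiplicative factors come from the text; the resulting numbers are just arithmetic:

```python
# Rough check of the density contrasts mentioned above, in atoms per
# cubic centimeter. Only the Local Bubble figure (0.05) and the
# multiplicative factors come from the text; the rest is arithmetic.

local_bubble = 0.05                      # atoms/cm³ inside the Local Bubble
galactic_average = 20 * local_bubble     # roughly 1 atom/cm³
dense_clouds = (2000 * local_bubble, 20000 * local_bubble)

print(galactic_average)                  # 1.0
print(dense_clouds)                      # roughly 100 to 1000 atoms/cm³
```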

### Nearby clouds

While we live in the Local Bubble, several thousand years ago we entered a small cloud of cooler, denser gas: the Local Fluff. We’ll leave this in at most 4 thousand years. But that’s just the beginning! As we pass the Scorpius-Centaurus Association, we’ll hit bigger, colder and denser clouds—and they’ll squash the heliosphere.

When will this happen? People seem very unsure. I’ve seen different sources saying we entered the Local Fluff sometime between 44,000 and 150,000 years ago, and that we’ll stay within it for between 4,000 and 20,000 years.

We’ll then return to the hotter, less dense gas of the Local Bubble until we hit the next cloud. That may take at least 50,000 years. Two candidates for the first cloud we’ll hit are the G Cloud and the Apex Cloud. The Apex Cloud is just 15 light years away:

• Priscilla C. Frisch, Local interstellar matter: the Apex Cloud.

When we hit a big cloud, it will squash the heliosphere. Right now, remember, this is roughly 120 AU in radius. But before we entered the Local Fluff, it was much bigger. And when we hit thicker clouds, it may shrink down to just 1 or 2 AU!

The heliosphere protects us from galactic cosmic rays. So, when we hit the next cloud, more of these cosmic rays will reach the Earth. Nobody knows for sure what the effects will be… but life on Earth has survived previous incidents like this, and other problems will hit us much sooner, so don’t stay awake at night worrying about it!

Indeed, ice core samples from the Antarctic show spikes in the concentration of the radioactive isotope beryllium-10 in two separate events, one about 60,000 years ago and another about 33,000 years ago. These might have been caused by a sudden increase in cosmic rays. But nobody is really sure.

People have studied the possibility that cosmic rays could influence the Earth’s weather, for example by seeding clouds:

• K. Scherer, H. Fichtner et al, Interstellar-terrestrial relations: variable cosmic environments, the dynamic heliosphere, and their imprints on terrestrial archives and climate, Space Science Reviews 127 (2006), 327–465.

• Benjamin A. Laken, Enric Pallé, Jaša Čalogović and Eimear M. Dunne, A cosmic ray-climate link and cloud observations, J. Space Weather Space Clim. 2 (2012), A18.

Despite the title of the second paper, its conclusion is that “it is clear that there is no robust evidence of a widespread link between the cosmic ray flux and clouds.” That’s clouds on Earth, not clouds of interstellar gas! The first paper is much more optimistic about the existence of such a link, but it doesn’t provide a ‘smoking gun’.

And—in case you’re wondering—variations in cosmic rays this century don’t line up with global warming:

The top curves are the Earth’s temperature as estimated by GISTEMP (the brown curve), and the carbon dioxide concentration in the Earth’s atmosphere as measured by Charles David Keeling (in green). The bottom ones are galactic cosmic rays as measured by CLIMAX (the gray dots), the sunspot cycle as measured by the Solar Influences Data Analysis Center (in red), and total solar irradiance as estimated by Judith Lean (in blue).

But be careful: the galactic cosmic ray curve has been flipped upside down, since when solar activity is high, then fewer galactic cosmic rays make it to Earth! You can see that here:

I’m sorry these graphs aren’t neatly lined up, but you can see that peaks in the sunspot cycle happened near 1980, 1989 and 2002, which is when we had minima in the galactic cosmic rays.

For more on the neighborhood of the Solar System and what to expect as we pass through various interstellar clouds, try this great article:

• Priscilla Frisch, The galactic environment of the Sun, American Scientist 88 (January-February 2000).

I have lots of scientific heroes: whenever I study something, I find impressive people have already been there. This week my hero is Priscilla Frisch. She edited a book called Solar Journey: The Significance of Our Galactic Environment for the Heliosphere and Earth. The book isn’t free, but this chapter is:

• Priscilla C. Frisch and Jonathan D. Slavin, Short-term variations in the galactic environment of the Sun.

For more on what the heliosphere might do when we hit the next big cloud, see:

• Hans-R. Mueller, Priscilla C. Frisch, Vladimir Florinski and Gary P. Zank, Heliospheric response to different possible interstellar environments.

### The Aquila Rift

Just for fun, let’s conclude by leaving our immediate neighborhood and going a bit further out. Here’s a picture of the Aquila Rift, taken by Adam Block of the Mt. Lemmon SkyCenter at the University of Arizona:

The Aquila Rift is a region of molecular clouds about 600 light years away in the direction of the star Altair. Hundreds of stars are being formed in these clouds.

A molecular cloud is a region in space where the interstellar gas gets so dense that hydrogen forms molecules, instead of lone atoms. While the Local Fluff near us has about 0.3 atoms per cubic centimeter, and the Local Bubble is much less dense, a molecular cloud can easily have 100 or 1000 atoms per cubic centimeter. Molecular clouds often contain filaments, sheets, and clumps of submicrometer-sized dust particles, coated with frozen carbon monoxide and nitrogen. That’s the dark stuff here!

I don’t know what will happen to the Earth when our Solar System hits a really dense molecular cloud. It might have already happened once. But it probably won’t happen again for a long time.

## Teaching the Math of Climate Science

18 December, 2012

When you’re just getting started on simulating the weather, it’s good to start with an aqua-planet. That’s a planet like our Earth, but with no land!

Click on this picture to see an aqua-planet created by H. Miura:

Of course, it’s important to include land, because it has huge effects. Click on this to see what I mean:

This simulation is supposed to illustrate a Madden–Julian oscillation: the largest form of variability in the tropical atmosphere on time scales of 30-90 days! It’s a pulse that moves east across the Indian Ocean and Pacific ocean at 4-8 meters/second. It manifests itself as patches of anomalously high rainfall… but also patches of anomalously low rainfall. Strong Madden-Julian Oscillations are often, but not always, seen 6-12 months before an El Niño starts.

Wouldn’t it be cool if math majors could learn to do simulations like these? If not of the full-fledged Earth, at least of an aqua-planet?

Soon they will.

### Climate science at Cal State Northridge

At the huge fall meeting of the American Geophysical Union, I met Helen Steele Cox from the geography department at Cal State Northridge. She was standing in front of a poster describing their new Climate Science Program. They got a ‘NICE’ grant from NASA to develop new courses—where ‘NICE’ means NASA Innovations in Climate Education. This grant also helps them run a seminar every other week where they invite climate scientists and the like from JPL and other nearby places to talk about their work.

What really excited me about this program is that it includes courses designed to teach math majors—and others—the skills needed to go into climate science. Since I’m supposed to be developing the syllabus for an undergraduate ‘Mathematics of the Environment’ course, I’m eager to hear about such things.

She told me to talk to David Klein in the math department there. He used to work on general relativity, but now—like me—he’s gotten interested in climate issues. I emailed him, and he told me what’s going on.

They’ve taught this course twice:

Phys 595 CL. Mathematics and Physics of Climate Change. Atmospheric dynamics and thermodynamics, radiation and radiative transfer, green-house effect, mathematics of remote sounding, introduction to atmospheric and climate modeling. Syllabus here.

They’ve just finished teaching this one:

Math 396 CL. Introduction to Mathematical Climate Science. This course in applied mathematics will introduce students to applications of vector calculus and differential equations to the study of global climate. Fundamental equations governing atmospheric dynamics will be derived and solved for a variety of situations. Topics include: thermodynamics of the atmosphere, potential temperature, parcel concepts, hydrostatic balance, dynamics of air motion and wind flows, energy balance, an introduction to radiative transfer, and elementary mathematical climate models. Syllabus here.

In some ways, the most intriguing is the one they haven’t taught yet:

Math 483 CL. Mathematical Modeling. Possible topics include fundamental principles of atmospheric radiation and convection, two dimensional models, varying parameters within models, numerical simulation of atmospheric fluid flow from both a theoretical and applied setting.

There’s no syllabus for it yet, but they want to focus the course on four projects:

1. Modeling a Lorenz dynamical system, using the trajectories as analogies to weather and the attractor as an analogy to climate.

2. Modeling a land-sea breeze.

3. Creating a 2d model of an aqua-planet: that is, one with no land.

4. Doing some projects with EdGCM, a proprietary ‘educational global climate model’.
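For instance, project 1 could start from something as simple as this sketch: a hand-rolled fourth-order Runge–Kutta integrator for the Lorenz system with its classic chaotic parameter values. The time step and initial condition here are arbitrary illustrative choices, not anything from the actual course.

```python
def lorenz(state, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    # The Lorenz equations with the classic chaotic parameter values
    x, y, z = state
    return (sigma * (y - x), x * (rho - z) - y, x * y - beta * z)

def rk4_step(f, s, dt):
    # One classical 4th-order Runge-Kutta step
    k1 = f(s)
    k2 = f(tuple(a + 0.5 * dt * b for a, b in zip(s, k1)))
    k3 = f(tuple(a + 0.5 * dt * b for a, b in zip(s, k2)))
    k4 = f(tuple(a + dt * b for a, b in zip(s, k3)))
    return tuple(a + dt / 6.0 * (b + 2 * c + 2 * d + e)
                 for a, b, c, d, e in zip(s, k1, k2, k3, k4))

# A single 'weather' trajectory; long runs trace out the 'climate' attractor
traj = [(1.0, 1.0, 1.0)]
for _ in range(10000):
    traj.append(rk4_step(lorenz, traj[-1], 0.01))
```

Plotting two runs from nearby initial conditions shows the trajectories (‘weather’) diverging while both stay on the same butterfly-shaped attractor (‘climate’).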

It would be great to take student-made software and add it to the Azimuth Code Project. If these programs were well documented, future generations of students could go ahead and improve on them. And an open-source GCM would be a wonderful thing.

As more and more schools teach climate science—not just to Earth scientists, but also to math and computer science students—this sort of ‘open-source climate modeling software’ should become more and more common.

Some questions:

Do you know other schools that are teaching climate modeling in the math department?

Do you know of efforts to formalize the sharing of open-source climate software for educational purposes?

## Mathematics of the Environment (Part 10)

4 December, 2012

There’s a lot more to say, but just one more class to say it! Next quarter I’ll be busy teaching an undergraduate course on evolutionary game theory and a grad course on Lagrangian methods in classical mechanics, together with this seminar and weekly meetings with my students. So, to keep from burning out, I’m going to temporarily switch this seminar to a different topic, where I have a textbook all lined up:

• John Baez and Jacob Biamonte, A Course on Quantum Techniques in Stochastic Mechanics.

I will stop putting up online notes. I’ll also teach the classical mechanics using a book I helped write:

• John Baez and Derek Wise, Lectures on Classical Mechanics.

This should make my job a bit easier: explaining climate physics is a lot more work, since I’m just an amateur! But I hope to come back to this topic someday.

In this final class let’s talk a bit about recent work on glacial cycles and changes in the Earth’s orbit. To keep my job manageable, I’ll just talk about one paper.

### The work of Didier Paillard

We’ve seen a few puzzles about how Milankovitch cycles are related to the glacial cycles. There are many more I haven’t even gotten around to explaining:

Milankovitch cycles: problems, Wikipedia.

But let’s dive in and look at a model that tries to solve some:

• Didier Paillard, The timing of Pleistocene glaciations from a simple multiple-state climate model, Nature 391 (1998), 378–381.

Paillard starts by telling us the good news:

The Earth’s climate over the past million years has been characterized by a succession of cold and warm periods, known as glacial–interglacial cycles, with periodicities corresponding to those of the Earth’s main orbital parameters; precession (23 kyr), obliquity (41 kyr) and eccentricity (100 kyr). The astronomical theory of climate, in which the orbital variations are taken to drive the climate changes, has been very successful in explaining many features of the palaeoclimate records.

I’m not including reference numbers, but here he cites a famous paper which we discussed in Part 8:

• J. D. Hays, J. Imbrie, and N. J. Shackleton, Variations in the earth’s orbit: pacemaker of the Ice Ages, Science 194 (1976), 1121–1132.

The main result of this paper was to find peaks in the power spectrum of various temperature proxies that match some of the periods of the Milankovitch cycles. This has repeatedly been confirmed. In fact, one of the students in this course, Blake Pollard, has already checked this. I want to pressure him to write a blog article including the nice graphs he’s generated.

But then comes the bad news:

Nevertheless, the timing of the main glacial and interglacial periods remains puzzling in many respects. In particular, the main glacial–interglacial switches occur approximately every 100 kyr, but the changes in insolation forcing are very small in this frequency band.

Here’s an article on the first problem:

100,000-year problem, Wikipedia.

The basic idea is that during the last million years, the glacial cycles seem to be happening roughly every 100 thousand years:

The Milankovitch cycles that most closely match this are two cycles in the eccentricity of the Earth’s orbit, which have periods of 95 and 123 thousand years. But as we saw last time, these have very tiny effects on the average solar energy hitting the Earth year round. The obliquity and precession cycles have no effect on the average solar energy hitting the Earth, but they have a noticeable effect on how much hits a given latitude in a given season!

Alas, we didn’t get around to calculating that yet. But this gives you a sense of it:

As is common in paleoclimatology, time here goes from right to left. The yellow curve shows the amount of solar power hitting the Earth at a latitude of 65° N at the summer solstice. This quantity is often called simply the insolation, though that term also means other things. The insolation curve most closely resembles the red curve showing precession cycles, which have periods near 20 thousand years. But during this stretch of time, ice ages have been happening roughly once every 100 thousand years! Why? That’s the 100,000 year problem.

Continuing the quotation:

Similarly, an especially warm interglacial episode, about 400,000 years ago, occurred at a time when insolation variations were minimal.

If you look at the graph above, you’ll see what he means.

Next, he sketches what he’ll do:

Here I propose that multiple equilibria in the climate system can provide a resolution of these problems within the framework of astronomical theory. I present two simple models that successfully simulate each glacial–interglacial cycle over the late Pleistocene epoch at the correct time and with approximately the correct amplitude. Moreover, in a simulation over the past 2 million years, the onset of the observed prominent 100-kyr cycles around 0.8 to 1 million years ago is correctly reproduced.

### Paillard’s model

I’ll just talk about his first, simpler model. It assumes the Earth can be in three different states:

i: interglacial

g: mild glacial

G: full glacial

In this model:

• The Earth goes from i to g as soon as the insolation goes below some level $i_0.$

• The Earth then goes from g to G as soon as the volume of ice goes above some level $v_{\mathrm{max}}.$

• The Earth then goes from G to i as soon as the insolation goes above some level $i_1.$

Only the transitions i→g and g→G are allowed! The reverse transitions G→g and g→i are forbidden. Paillard draws a schematic picture of the model, like this:

Of course, he must also specify how the ice volume grows when the Earth is in its mild glacial g state. He says:

I assume that the ice sheet needs some minimal time $t_g$ in order to grow and exceed the volume $v_{\mathrm{max}}$ [...] and that the insolation maxima preceding the gG transition must remain below the level $i_3.$ The gG transition then can occur at the next insolation decrease, when it falls below $i_2$.

Being a mathematician rather than a climate scientist, I can think of more than one way to interpret this. I think it means:

1. If the Earth is in its g state and the insolation stays below some value $i_3$ for a time $t_g,$ then the Earth jumps into the G state.

2. If the Earth is in its g state and the insolation rises above $i_3,$ we wait until it drops below some value $i_2,$ and then the Earth jumps into its G state.

An alternative interpretation is:

2′. If the Earth is in its g state and the insolation rises above $i_3,$ we wait until it drops below some value $i_2.$ Then we ‘reset the clock’ and proceed according to rule 1.

I’ll try to sort this out. Now, the insolation as a function of time is known—you can compute it using the formula and the data here:

Insolation, Azimuth Project.

So, the only thing required to complete Paillard’s model are choices of these numbers:

$i_0, i_1, i_2, i_3, t_g$

He likes to measure insolation as a deviation from its mean value, in units of its standard deviation. With this normalization he takes:

$i_0 = -0.75, \qquad i_1 = i_2 = 0 , \qquad i_3 = 1$

and

$t_g = 33 \; \mathrm{kyr}$
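To make the rules concrete, here is a minimal sketch in Python of the three-state machine, under interpretation 2′ above. The insolation curve here is a made-up stand-in: a sum of 23-kyr and 41-kyr sinusoids imitating the real precession and obliquity forcing, not Paillard’s actual input data.

```python
import math

# Thresholds and growth time from Paillard's paper (normalized insolation units)
i0, i1, i2, i3 = -0.75, 0.0, 0.0, 1.0
t_g = 33.0  # kyr the ice sheet needs to grow

def insolation(t):
    # Hypothetical stand-in forcing: 23-kyr 'precession' + 41-kyr 'obliquity' terms
    return math.sin(2 * math.pi * t / 23) + 0.5 * math.sin(2 * math.pi * t / 41)

state = 'i'     # start in an interglacial
clock = None    # time at which ice-sheet growth started (rule 1's timer)
history = []
for step in range(10000):          # 1000 kyr in 0.1 kyr steps
    t = step * 0.1
    f = insolation(t)
    if state == 'i' and f < i0:
        state, clock = 'g', t      # i -> g when insolation drops below i0
    elif state == 'g':
        if f > i3:
            clock = None           # growth interrupted; wait for a drop below i2
        elif clock is None and f < i2:
            clock = t              # restart the growth timer (interpretation 2')
        elif clock is not None and t - clock > t_g:
            state = 'G'            # g -> G after t_g kyr of uninterrupted growth
    elif state == 'G' and f > i1:
        state = 'i'                # G -> i when insolation rises above i1
    history.append((t, state))
```

Swapping in rule 2 instead of 2′ is a one-line change to how `clock` is reset, which is exactly the ambiguity discussed above.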

Then his model gives these results:

(Click to enlarge.) The bottom graph shows temperature as measured by the extra amount of oxygen-18 in some geological records. So, we can see that the Earth often pops rather suddenly into a warm interglacial state and cools a bit more slowly into a glacial state. In the model, this ‘popping into a warm state’ happens instantaneously in the middle graph. The main thing is to compare this to the bottom graph!

The way the model pops suddenly into the very cold G state does not look quite so good. But still, it’s exciting how such a simple model fits the overall profile of the glacial cycle—at least for the last million years.

Paillard says his model is fairly robust, too:

This model is not very sensitive to parameter changes. Different threshold values will slightly offset the transitions by a few hundred years, but the overall shape will remain the same for a broad range of values. There is no significant changes when $i_0$ is between -0.97 and -0.64, $i_1$ between -0.23 and 0.32, $i_2$ between -0.30 and 0.13, $i_3$ between 0.97 and 1.16, and $t_g$ between 27 kyr and 60 kyr. Even when the parameters are out of these bounds, the changes are minor: when $i_0$ is between -0.63 and -0.09, the succession of regimes remains the same except for present time, which becomes a g regime. When $i_1$ is chosen between 0.33 and 0.87, only the duration of stage 11.3 changes to become more comparable to other interglacial stages.

### Marine isotope stages

There’s a lot more to say. For example, what does the model say about the time more than a million years ago, when the glacial cycles happened roughly every 41 thousand years, instead of every 100? I won’t answer this. Instead, I’ll conclude by explaining something very basic—but worth knowing.

What’s ‘stage 11.3’? This refers to the numbers down at the bottom of Paillard’s chart: these numbers are Marine Isotope Stages. 11.3 is a ‘substage’, not shown on the chart.

Marine Isotope Stages are official periods of time used by people who study glacial cycles. The even-numbered ones roughly correspond to glacial periods, and the odd-numbered ones to interglacials. By now over a hundred stages have been identified, going back 6 million years!

Just to give you a little sense of what’s going on, here are the start dates of the last 11 stages, with hot ones in red and the cold ones in blue:

MIS 1: 11 thousand years ago. This marks the end of the last glacial cycle. More precisely, this is about 500 years after the end of the Younger Dryas event.

MIS 2: 24 thousand years ago. The Last Glacial Maximum occurred between 26.5 and 19 thousand years ago. At that time we had ice sheets down to the Great Lakes, the mouth of the Rhine, and covering the British Isles. Homo sapiens arrived in the Americas later, around 18 thousand years ago.

MIS 3: 60 thousand years ago. For comparison, Homo sapiens arrived in central Asia around 50 thousand years ago. About 35 thousand years ago the calendar was invented, Homo sapiens arrived in Europe, and Homo neanderthalensis went extinct.

MIS 4: 71 (or maybe 74) thousand years ago.

MIS 5: 130 thousand years ago. The Eemian, the last really warm interglacial period before ours, began at this time and ended about 114 thousand years ago. If you look at this chart, you’ll see MIS 3 was a much less warm interglacial:

(Now time is going to the right again. Click for more details.)

MIS 6: 190 thousand years ago.

MIS 7: 244 thousand years ago. The first known Homo sapiens date back to 250 thousand years ago.

MIS 8: 301 thousand years ago.

MIS 9: 334 thousand years ago.

MIS 10: 364 thousand years ago. The first known Homo neanderthalensis date back to about 350 thousand years ago.

MIS 11: 427 thousand years ago. This stage is supposedly the most similar to MIS 1, and looking at the graph above you can see why people say that.

I hope you agree that it’s worth understanding the glacial cycles, not just because we need to understand how the Earth will respond to the big boost of carbon dioxide that we’re dosing it with now, but because it’s a fascinating physics problem—and because glaciation has been a powerful force in Earth’s recent history, and the history of our species.

For your convenience, here are links to all the notes for this course:

• Part 1 – The mathematics of planet Earth.
• Part 2 – Simple estimates of the Earth’s temperature.
• Part 3 – The greenhouse effect.
• Part 4 – History of the Earth’s climate.
• Part 5 – A model showing bistability of the Earth’s climate due to the ice albedo effect: statics.
• Part 6 – A model showing bistability of the Earth’s climate due to the ice albedo effect: dynamics.
• Part 7 – Stochastic differential equations and stochastic resonance.
• Part 8 – A stochastic energy balance model and Milankovitch cycles.
• Part 9 – Changes in insolation due to changes in the eccentricity of the Earth’s orbit.
• Part 10 – Didier Paillard’s model of the glacial cycles.

## Mathematics of the Environment (Part 9)

27 November, 2012

I didn’t manage to cover everything I intended last time, so I’m moving the stuff about the eccentricity of the Earth’s orbit to this week, and expanding it.

### Sunshine and the Earth’s orbit

I bet some of you are hungry for some math. As I mentioned, it takes some work to see how changes in the eccentricity of the Earth’s orbit affect the annual average of sunlight hitting the top of the Earth’s atmosphere. Luckily Greg Egan has done this work for us. While the result is surely not new, his approach makes nice use of the fact that both gravity and solar radiation obey an inverse-square law. That’s pretty cool.

Here is his calculation with some details filled in.

Let’s think of the Earth as moving around an ellipse with one focus at the origin. Its angular momentum is then

$\displaystyle{ J = m r v_\theta }$

where $m$ is its mass, $r$ and $\theta$ are its polar coordinates, and $v_\theta$ is the angular component of its velocity:

$\displaystyle{ v_\theta = r \frac{d \theta}{d t} }$

So,

$\displaystyle{ J = m r^2 \frac{d \theta}{d t} }$

and

$\displaystyle{\frac{d \theta}{d t} = \frac{J}{m r^2} }$

Since the brightness of a distant object goes like $1/r^2$, the solar energy hitting the Earth per unit time is

$\displaystyle{ \frac{d U}{d t} = \frac{C}{r^2}}$

for some constant $C.$ It follows that the energy delivered per unit of angular progress around the orbit is

$\displaystyle{ \frac{d U}{d \theta} = \frac{d U/d t}{d \theta/ dt} = \frac{C m}{J} }$

Thus, the total energy delivered in one period will be

$\begin{array}{ccl} U &=& \displaystyle{ \int_0^{2 \pi} \frac{d U}{d \theta} \, d \theta} \\ \\ &=& \displaystyle{ \frac{2\pi C m}{J} } \end{array}$
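As a sanity check, we can verify this formula numerically: integrate a Kepler orbit for one full period and accumulate the energy received. The units and orbital parameters below (GM = m = C = 1, eccentricity 0.5) are arbitrary illustrative choices, not physical values.

```python
import math

GM, m, C = 1.0, 1.0, 1.0                 # arbitrary units, for illustration only
a, e = 1.0, 0.5                          # semi-major axis and eccentricity
r1 = a * (1 - e)                         # perihelion distance
v1 = math.sqrt(GM * (1 + e) / r1)        # speed at perihelion (vis-viva equation)
J = m * r1 * v1                          # angular momentum
T = 2 * math.pi * math.sqrt(a**3 / GM)   # orbital period (Kepler's third law)

# Velocity Verlet integration, accumulating dU = (C/r^2) dt along the way
x, y, vx, vy = r1, 0.0, 0.0, v1

def acc(x, y):
    r3 = (x * x + y * y) ** 1.5
    return -GM * x / r3, -GM * y / r3

ax, ay = acc(x, y)
dt = 1e-4
U = 0.0
for _ in range(round(T / dt)):
    U += C / (x * x + y * y) * dt        # energy received in this time step
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay
    x += dt * vx; y += dt * vy
    ax, ay = acc(x, y)
    vx += 0.5 * dt * ax; vy += 0.5 * dt * ay

U_pred = 2 * math.pi * C * m / J         # the formula derived above
```

The integrated `U` and the predicted `U_pred` agree to a small fraction of a percent, as they should.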

So far we haven’t used the fact that the Earth’s orbit is elliptical. Next we’ll do that. Our goal will be to show that $U$ depends only very slightly on the eccentricity of the Earth’s orbit. But we need to review a bit of geometry first.

### The geometry of ellipses

If the Earth is moving in an ellipse with one focus at the origin, its equation in polar coordinates is

$\displaystyle{ r = \frac{p}{1 + e \cos \theta} }$

where $e$ is the eccentricity and $p$ is the somewhat dirty-sounding semi-latus rectum. You can think of $p$ as a kind of average radius of the ellipse—more on that in a minute.

Let’s think of the origin in this coordinate system as the Sun—that’s close to true, though the Sun moves a little. Then the Earth gets closest to the Sun when $\cos \theta$ is as big as possible. So, the Earth is closest to the Sun when $\theta = 0$, and then its distance is

$\displaystyle{ r_1 = \frac{p}{1 + e} }$

Similarly, the Earth is farthest from the Sun when $\theta = \pi$, and then its distance is

$\displaystyle{ r_2 = \frac{p}{1 - e} }$

We call $r_1$ the perihelion and $r_2$ the aphelion.

The semi-major axis is half the distance between the opposite points on the Earth’s orbit that are farthest from each other. This is denoted $a.$ These points occur at $\theta = 0$ and $\theta = \pi$, so the distance between these points is $r_1 + r_2$, and

$\displaystyle{ a = \frac{r_1 + r_2}{2} }$

So, the semi-major axis is the arithmetic mean of the perihelion and aphelion.

The semi-minor axis is half the distance between the opposite points on the Earth’s orbit that are closest to each other. This is denoted $b.$

Puzzle 1. Show that the semi-minor axis is the geometric mean of the perihelion and aphelion:

$\displaystyle{ b = \sqrt{r_1 r_2} }$

I said the semi-latus rectum $p$ is also a kind of average radius of the ellipse. Just to make that precise, try this:

Puzzle 2. Show that the semi-latus rectum is the harmonic mean of the perihelion and aphelion:

$\displaystyle{ p = \frac{1}{\frac{1}{2}\left(\frac{1}{r_1} + \frac{1}{r_2}\right) } }$

This puzzle is just for fun: the Greeks loved arithmetic, geometric and harmonic means, and the Greek mathematician Apollonius wrote a book on conic sections, so he must have known these facts and loved them. The conventional wisdom is that the Greeks never realized that the planets move in elliptical orbits. However, the wonderful movie Agora presents a great alternative history in which Hypatia figures it all out shortly before being killed! And the mathematician Sandro Graffi (who incidentally taught a course I took in college on the self-adjointness of quantum-mechanical Hamiltonians) has claimed:

Now an infrequently read work of Plutarch, several parts of the Natural History of Plinius, of the Natural Questions of Seneca, and of the Architecture of Vitruvius, also infrequently read, especially by scientists, clearly show that the cultural elite of the early imperial age (first century A.D.) were fully aware of and convinced of a heliocentric dynamical theory of planetary motions based on the attractions of the planets toward the Sun by a force proportional to the inverse square of the distance between planet and Sun. The inverse square dependence on the distance comes from the assumption that the attraction is propagated along rays emanating from the surfaces of the bodies.

I have no idea if the controversial last part of this claim is true. But it’s fun to imagine!

More importantly for what’s to come, we can express the semi-minor axis in terms of the semi-major axis and the eccentricity. Since

$\displaystyle{ r_1 = \frac{p}{1 + e} , \qquad r_2 = \frac{p}{1 - e} }$

we have

$\displaystyle{ r_1 + r_2 = \frac{p}{1 + e} + \frac{p}{1 - e} = \frac{2 p}{1 - e^2} }$

so the semi-major axis is

$\displaystyle{ a = \frac{p}{1 - e^2} }$

while

$\displaystyle {r_1 r_2 = \frac{p^2}{1 - e^2} }$

so the semi-minor axis is

$\displaystyle { b = \frac{p}{\sqrt{1 - e^2}} }$

and thus they are related by

$b = a \sqrt{1 - e^2}$

Remember this!
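A quick numerical check of these relations, with arbitrary values of $p$ and $e$ chosen purely for illustration:

```python
import math

p, e = 1.7, 0.3                 # arbitrary semi-latus rectum and eccentricity
r1 = p / (1 + e)                # perihelion
r2 = p / (1 - e)                # aphelion

a = (r1 + r2) / 2               # arithmetic mean: semi-major axis
b = math.sqrt(r1 * r2)          # geometric mean: semi-minor axis (puzzle 1)
h = 2 / (1 / r1 + 1 / r2)       # harmonic mean: semi-latus rectum (puzzle 2)

# The relations derived above:
assert abs(a - p / (1 - e**2)) < 1e-12
assert abs(b - p / math.sqrt(1 - e**2)) < 1e-12
assert abs(h - p) < 1e-12
assert abs(b - a * math.sqrt(1 - e**2)) < 1e-12
```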

### How total annual sunshine depends on eccentricity

We saw a nice formula for the total solar energy hitting the Earth in one year in terms of its angular momentum $J$:

$\displaystyle{ U = \frac{2\pi C m}{J} }$

How can we relate the angular momentum $J$ to the shape of the Earth’s orbit? The Earth’s energy, kinetic plus potential, is constant throughout the year. The kinetic energy is

$\frac{1}{2}m v^2$

and the potential energy is

$\displaystyle{ -\frac{G M m}{r} }$

At the aphelion or perihelion the Earth isn’t moving in or out, just around, so by our earlier work

$\displaystyle{v = v_\theta = \frac{J}{m r} }$

and the kinetic energy is

$\displaystyle{ \frac{J^2}{2 m r^2} }$

Equating the Earth’s energy at aphelion and perihelion, we thus get

$\displaystyle{\frac{J^2}{2m r_1^2} -\frac{G M m}{r_1} = \frac{J^2}{2m r_2^2} -\frac{G M m}{r_2} }$

and doing some algebra:

$\displaystyle{\frac{J^2}{2m} \left(\frac{1}{r_1^2} - \frac{1}{r_2^2}\right) = G M m \left( \frac{1}{r_1} - \frac{1}{r_2} \right) }$

$\displaystyle{\frac{J^2}{2m} \left(\frac{r_2^2 - r_1^2}{r_1^2 r_2^2}\right) = G M m \left( \frac{r_2 - r_1}{r_1 r_2} \right) }$

$\displaystyle{\frac{J^2}{2m} \left(\frac{r_1 + r_2}{r_1 r_2}\right) = G M m }$

and solving for $J,$

$\displaystyle{ J = m \sqrt{\frac{2 G M r_1 r_2}{r_1 + r_2}} }$

But remember that the semi-major and semi-minor axis of the Earth’s orbit are given by

$\displaystyle{ a=\frac{1}{2} (r_1+r_2)} , \qquad \displaystyle{ b=\sqrt{r_1 r_2} }$

respectively! So, we have

$\displaystyle{ J = mb \sqrt{\frac{GM}{a}} }$

This lets us rewrite our old formula for the energy $U$ in the form of sunshine that hits the Earth each year:

$\displaystyle{ U=\frac{2\pi C m}{J} = \frac{2\pi C}{b} \sqrt{\frac{a}{G M}} }$

But we’ve also seen that

$b = a \sqrt{1 - e^2}$

so we get the formula we’ve been seeking:

$\displaystyle{U=\frac{2\pi C}{\sqrt{G M a (1-e^2)}}}$

This tells us $U$ as a function of semi-major axis and eccentricity.

As we’ll see later, the semi-major axis $a$ is almost unchanged by small perturbations of the Earth’s orbit. The main thing that changes is the eccentricity $e$. But if $e$ is small, $e^2$ is even smaller, so $U$ doesn’t change much when we change $e.$

We can make this more quantitative. Let’s work out how much the actual changes in the Earth’s orbit affect the amount of solar radiation it gets! As we’ll see, the semi-major axis is almost constant, so we can ignore that. Complicated calculations we can’t redo here show that the eccentricity varies between 0.005 and 0.058. We’ve seen the total energy the Earth gets each year from solar radiation is proportional to

$\displaystyle{ \frac{1}{\sqrt{1-e^2}} }$

When the eccentricity is at its lowest value, $e = 0.005,$ we get

$\displaystyle{ \frac{1}{\sqrt{1-e^2}} = 1.0000125 }$

When the eccentricity is at its highest value, $e = 0.058,$ we get

$\displaystyle{\frac{1}{\sqrt{1-e^2}} = 1.00168626 }$

So, the solar power hitting the Earth each year changes by a factor of

$\displaystyle{1.00168626/1.0000125 = 1.00167373 }$

In other words, it changes by merely 0.167%.

That’s very small. And the effect on the Earth’s temperature would naively be even less!

Naively, we can treat the Earth as a greybody: an ideal object whose tendency to absorb or emit radiation is the same at all wavelengths and temperatures. Since the temperature of a greybody is proportional to the fourth root of the power it receives, a 0.167% change in solar energy received per year corresponds to a percentage change in temperature roughly one fourth as big. That’s a 0.042% change in temperature. If we imagine starting with an Earth like ours, with an average temperature of roughly 290 kelvin, that’s a change of just 0.12 kelvin!

The upshot seems to be this: in a naive model without any amplifying effects, changes in the eccentricity of the Earth’s orbit would cause temperature changes of just 0.12 °C!
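This little calculation is easy to reproduce; the 290 K baseline is the rough figure used above.

```python
import math

def annual_energy(e):
    # Annual insolation energy, up to a constant factor, from the formula above
    return 1 / math.sqrt(1 - e**2)

change = annual_energy(0.058) / annual_energy(0.005) - 1
T0 = 290.0                           # rough average temperature of the Earth, kelvin
dT = T0 * ((1 + change)**0.25 - 1)   # greybody: temperature goes as the 1/4 power

print(f"energy change: {100 * change:.3f}%, temperature change: {dT:.2f} K")
```

This prints an energy change of about 0.167% and a temperature change of about 0.12 K, matching the numbers above.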

This is much less than the roughly 5 °C change we see between glacial and interglacial periods. So, if changes in eccentricity are important in glacial cycles, we have some explaining to do. Possible explanations include season-dependent phenomena and climate feedback effects, like the ice albedo effect we’ve been discussing. Probably both are very important!

Why does the semi-major axis of the Earth’s orbit remain almost unchanged under small perturbations? The reason is that it’s an ‘adiabatic invariant’. This is basically just a fancy way of saying it remains almost unchanged. But the point is, there’s a whole theory of adiabatic invariants… which supposedly explains the near-constancy of the semi-major axis.

According to Wikipedia:

The Earth’s eccentricity varies primarily due to interactions with the gravitational fields of Jupiter and Saturn. As the eccentricity of the orbit evolves, the semi-major axis of the orbital ellipse remains unchanged. From the perspective of the perturbation theory used in celestial mechanics to compute the evolution of the orbit, the semi-major axis is an adiabatic invariant. According to Kepler’s third law the period of the orbit is determined by the semi-major axis. It follows that the Earth’s orbital period, the length of a sidereal year, also remains unchanged as the orbit evolves. As the semi-minor axis is decreased with the eccentricity increase, the seasonal changes increase. But the mean solar irradiation for the planet changes only slightly for small eccentricity, due to Kepler’s second law.

Unfortunately, even though I understand a bit about the general theory of adiabatic invariants, I have not gotten around to convincing myself that the semi-major axis is such a thing, for the perturbations experienced by the Earth.

Here’s something easier: checking that the semi-major axis of the Earth’s orbit determines the period of the Earth’s orbit, say $T$. To do this, first relate the angular momentum to the period by integrating the rate at which orbital area is swept out by the planet:

$\displaystyle{\frac{1}{2} r^2 \frac{d \theta}{d t} = \frac{J}{2 m} }$

over one orbit. Since the area of an ellipse is $\pi a b$, this gives us:

$\displaystyle{ J = \frac{2 \pi a b m}{T} }$

On the other hand, we’ve seen

$\displaystyle{J = m b \sqrt{\frac{G M}{a}}}$

Equating these two expressions for $J$ shows that the period is:

$\displaystyle{ T = 2 \pi \sqrt{\frac{a^3}{G M}}}$

So, the period depends only on the semi-major axis, not the eccentricity. Conversely, we could solve this equation to see that the semi-major axis depends only on the period, not the eccentricity.
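As a sanity check on this formula, we can plug in rough figures for the Sun and the Earth's orbit and see whether the period comes out to a year. (The values of $G$, $M$ and $a$ below are standard approximate constants, not taken from the text above.)

```python
import math

G = 6.674e-11   # gravitational constant, m^3 kg^-1 s^-2
M = 1.989e30    # mass of the Sun, kg
a = 1.496e11    # semi-major axis of the Earth's orbit, m (about 1 AU)

# Kepler's third law: T = 2 * pi * sqrt(a^3 / (G M))
T = 2 * math.pi * math.sqrt(a**3 / (G * M))

print(f"period: {T / 86400:.1f} days")
```

This gives about 365 days, as it should — and note the eccentricity appears nowhere in the calculation.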

I’m treating $G$ and $M$ as constants here. If the mass of the Sun decreases, as it eventually will when it becomes a red giant and puffs out lots of gas, the semi-major axis of the Earth’s orbit will change. It will actually increase! This is one reason people are still arguing about just when the Earth will get swallowed up by the Sun:

• David Appell, The Sun will eventually engulf the Earth—maybe, Scientific American, 8 September 2008.

And, to show just how subtle these things are, if the mass of the Sun slowly changes, while the semi-major axis of the Earth’s orbit will change, the eccentricity will remain almost unchanged. Why? Because for this kind of process, it’s the eccentricity that’s an adiabatic invariant!

Indeed, I got all excited when I started reading a homework problem in Landau and Lifshitz’s book Mechanics, which describes adiabatic invariants for the gravitational 2-body problem. But I was bummed out when they concluded that the eccentricity was an adiabatic invariant for gradual changes in $M$. They didn’t discuss any problems for which the semi-major axis was an adiabatic invariant.

I’ll have to get back to this later sometime, probably with the help of a good book on celestial mechanics. If you’re curious about the concept of adiabatic invariant, start here: