Mathematics of the Environment (Part 8)

20 November, 2012

We’re trying to get a little insight into the Earth’s glacial cycles, using very simple models. We have a long way to go, but let’s review where we are.

In Part 5 and Part 6, we studied a model where the Earth can be bistable. In other words, if the parameters are set right it has two stable states: one where it’s cold and stays cold because it’s icy and reflects lots of sunlight, and another where it’s hot and stays hot because it’s dark and absorbs lots of sunlight.

In Part 7 we saw a bistable system being pushed around by noise and also an oscillating ‘force’. When the parameters are set right, we see stochastic resonance: the noise amplifies the system’s response to the oscillations! This is supposed to remind us of how the Milankovitch cycles in the Earth’s orbit may get amplified to cause glacial cycles. But this system was not based on any model of the Earth’s climate.

Now a student in this course has put the pieces together! Try this program by Michael Knap and see what happens:

A stochastic energy balance model.

Remember, our simple model of the Earth’s climate amounted to this differential equation:

\displaystyle{ C \frac{d T}{d t} = - A - B T + Q(t) c(T(t)) }

Here:

A + B T is a linear approximation to the power per square meter emitted by the Earth at temperature T. Satellite measurements give A = 218 watts/meter2 and B = 1.90 watts/meter2 per degree Celsius.

Q(t) is the insolation: the solar power per square meter hitting the top of the Earth’s atmosphere, averaged over location. In Michael’s software you can choose various functions for Q(t). For example, it can be a step function with

Q(t) = \left\{ \begin{array}{ccl} Q_{\mathrm{base}} + X & \mathrm{for} & 0 \le t \le \tau \\  Q_{\mathrm{base}} & \mathrm{for} & t > \tau  \end{array} \right.

Here Q_{\mathrm{base}} = 341.5 watts/meter2 is the average insolation we actually see now. X is an extra ‘insolation bump’ that lasts for a time \tau. Here we are imagining, just for fun, that we can turn up the brightness of the Sun for a while!

Or, you can choose Q(t) to be a sinusoidal function:

Q(t) = Q_{\mathrm{base}} + X \sin(t/\tau)

where X is the amplitude and 2 \pi \tau is the period of the oscillation. This is better for understanding stochastic resonance.

c(T) is the average coalbedo of the Earth: that is, the fraction of solar power it absorbs. We approximate this by:

c(T) = c_i + \frac{1}{2} (c_f-c_i) (1 + \tanh(\gamma T))

Here c_i = 0.35 is the icy coalbedo, good for very low temperatures when much of the Earth is light in color. c_f = 0.7 is the ice-free coalbedo, good for high temperatures when the Earth is darker. Finally, \gamma is the coalbedo transition rate, which says how rapidly the coalbedo changes as the Earth’s temperature increases.

Now Michael Knap is adding noise, giving this equation:

\displaystyle{ C \frac{d T}{d t} = - A - B T + Q(t) c(T(t)) + \sigma w(t) }

where w(t) is white noise and \sigma is another constant. To look like professionals we can also write this using differentials instead of derivatives:

\displaystyle{ C \, d T = \left(- A - B T + Q(t) c(T(t))\right) \, d t + \sigma \, dW(t) }

where W is the Wiener process and we’re using

\displaystyle{ w(t) = \frac{d W}{d t} }
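
If you want to experiment offline, here is a minimal sketch in Python of the same idea: an Euler-Maruyama discretization of the equation above, using the time-stepping scheme described in Part 7. It uses the parameter values quoted in this post, but the noise strength, time step, run length and insolation bump are illustrative guesses of mine, and this is not Michael Knap’s actual code.

```python
import numpy as np

# Parameter values quoted in the text; sigma, dt, t_max and the insolation bump
# are illustrative choices, not values taken from Michael Knap's program.
A, B = 218.0, 1.90          # outgoing radiation A + B*T, in W/m^2 and W/m^2 per deg C
Q_base = 341.5              # mean insolation, W/m^2
c_i, c_f = 0.35, 0.70       # icy and ice-free coalbedos
gamma = 0.05                # coalbedo transition rate, per deg C
C = 5.0e6                   # heat capacity, J per deg C per m^2
sigma = 2.0e4               # noise strength (illustrative)

def coalbedo(T):
    # smooth interpolation between the icy and ice-free coalbedos
    return c_i + 0.5 * (c_f - c_i) * (1.0 + np.tanh(gamma * T))

def insolation(t, X=30.0, tau=5.0e7):
    # step function: extra insolation X for 0 <= t <= tau, then back to Q_base
    return Q_base + X if t <= tau else Q_base

def simulate(T0, t_max=3.0e8, dt=8.64e4, seed=0):
    """Euler-Maruyama integration of C dT = (-A - B*T + Q(t) c(T)) dt + sigma dW."""
    rng = np.random.default_rng(seed)
    steps = int(t_max / dt)
    T = np.empty(steps + 1)
    T[0] = T0
    for i in range(steps):
        t = i * dt
        drift = (-A - B * T[i] + insolation(t) * coalbedo(T[i])) / C
        T[i + 1] = T[i] + drift * dt + (sigma / C) * rng.normal(0.0, np.sqrt(dt))
    return T

# Start from a cold Earth and a hot Earth and compare where they end up:
cold, hot = simulate(-30.0), simulate(40.0)
print(cold[-1], hot[-1])
```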

Play around with Michael’s model and see what effects you can get!

Puzzle. List some of the main reasons this model is not yet ready for realistic investigations of how the Milankovitch cycles cause glacial cycles.

You can think of some big reasons just knowing what I’ve told you already in this course. But you’ll be able to think of some more once you know more about Milankovitch cycles.

Milankovitch cycles

We’ve been having fun with models, and we’ve been learning something from them, but now it’s time to go back and think harder about what’s really going on.

As I keep hinting, a lot of scientists believe that the Earth’s glacial cycles are related to cyclic changes in the Earth’s orbit and the tilt of its axis. Since one of the first scientists to carefully study this issue was Milutin Milankovitch, these are called Milankovitch cycles. Now let’s look at them in more detail.

The three major types of Milankovitch cycle are:

• changes in the eccentricity of the Earth’s orbit – that is, how much the orbit deviates from being a circle:




(changes greatly exaggerated)

• changes in the obliquity, or tilt of the Earth’s axis:



• precession, meaning changes in the direction of the Earth’s axis relative to the fixed stars:



The first important thing to realize is this: it’s not obvious that Milankovitch cycles can cause glacial cycles. During a glacial period, the Earth is about 5°C cooler than it is now. But the Milankovitch cycles barely affect the overall annual amount of solar radiation hitting the Earth!

This fact is clear for precession or changes in obliquity, since these just involve the tilt of the Earth’s axis, and the Earth is nearly a sphere. The amount of Sun hitting a sphere doesn’t depend on how the sphere is ‘tilted’.

For changes in the eccentricity of the Earth’s orbit, this fact is a bit less obvious. After all, when the orbit is more eccentric, the Earth gets closer to the Sun sometimes, but farther at other times. So you need to actually sit down and do some math to figure out the net effect. Luckily, Greg Egan did this calculation in a very nice way, earlier here on the Azimuth blog. I’ll show you his calculation at the end of this article. It turns out that when the Earth’s orbit is at its most eccentric, it gets very, very slightly more energy from the Sun each year: 0.167% more than when its orbit is at its least eccentric. This is not enough to warm the Earth very much.

So, there are interesting puzzles involved in the Milankovitch cycles. They don’t affect the total amount of radiation that hits the Earth each year—not much, anyway—but they do cause substantial changes in the amount of radiation that hits the Earth at various different latitudes in various different seasons. We need to understand what such changes might do.

James Croll was one of the first to think about this, back around 1875. He decided that what really matters is the amount of sunlight hitting the far northern latitudes in winter. When this was low, he claimed, glaciers would tend to form and an ice age would start. But later, in the 1920s, Milankovitch made the opposite claim: what really matters is the amount of sunlight hitting the far northern latitudes in summer. When this was low, an ice age would start.

If we take a quick look at the data, we see that the truth is not obvious:


I like this graph because it’s pretty… but I wish the vertical axes were labelled. We will see some more precise graphs in future weeks.

Nonetheless, this graph gives some idea of what’s going on. Precession, obliquity and eccentricity vary in complex but still predictable ways. From this you can compute the amount of solar energy that hits the top of the Earth’s atmosphere on July 1st at a latitude of 65° N. That’s the yellow curve. People believe this quantity has some relation to the Earth’s temperature, as shown by the black curve at bottom. However, the relation is far from clear!

Indeed, if you only look at this graph, you might easily decide that Milankovitch cycles are not important in causing glacial cycles. But people have analyzed temperature proxies over long spans of time, and found evidence for cyclic changes at periods that match those of the Milankovitch cycles. Here’s a classic paper on this subject:

• J. D. Hays, J. Imbrie and N. J. Shackleton, Variations in the Earth’s orbit: pacemaker of the ice ages, Science 194 (1976), 1121-1132.

They selected two sediment cores from the Indian Ocean, which contain sediments deposited over the last 450,000 years. They measured:

1) Ts, an estimate of summer sea-surface temperatures at the core site, derived from a statistical analysis of tiny organisms called radiolarians found in the sediments.

2) δ18O, the excess of the heavy isotope of oxygen in tiny organisms called foraminifera also found in the sediments.

3) The percentage of radiolarians that are Cycladophora davisiana—a certain species not used in the estimation of Ts.

Identical samples were analyzed for the three variables at 10-centimeter intervals throughout each core. Then they took a Fourier transform of this data to see at which frequencies these variables wiggle the most! When we take the Fourier transform of a function and then square its absolute value, the result is called the power spectrum. So, they actually graphed the power spectra for these three variables:

The top graph shows the power spectra for Ts, δ18O, and the percentage of Cycladophora davisiana. The second one shows the spectra after a bit of extra messing around. Either way, there seem to be peaks at periods of 19, 23, 42 and roughly 100 thousand years. However, the last number is quite fuzzy: if you look, you’ll see the three different power spectra have peaks at 94, 106 and 122 thousand years.

So, some sort of cycles seem to be occurring. This is far from the only piece of evidence, but it’s a famous one.

Now let’s go over the three major forms of Milankovitch cycle, and keep our eye out for cycles that take place every 19, 23, 42 or roughly 100 thousand years!

Eccentricity

The Earth’s orbit is an ellipse, and the eccentricity of this ellipse says how far it is from being circular. But the eccentricity of the Earth’s orbit slowly changes: it varies from being nearly circular, with an eccentricity of 0.005, to being more strongly elliptical, with an eccentricity of 0.058. The mean eccentricity is 0.028. There are several periodic components to these variations. The strongest occurs with a period of 413,000 years, and changes the eccentricity by ±0.012. Two other components have periods of 95,000 and 123,000 years.

The eccentricity affects the percentage difference in incoming solar radiation between the perihelion, the point where the Earth is closest to the Sun, and the aphelion, when it is farthest from the Sun. This works as follows. The percentage difference between the Earth’s distance from the Sun at perihelion and aphelion is twice the eccentricity, and the percentage change in incoming solar radiation is about twice that. The first fact follows from the definition of eccentricity, while the second follows from differentiating the inverse-square relationship between brightness and distance.
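
Spelled out: if a is the semi-major axis and e the eccentricity, the perihelion and aphelion distances are a(1 - e) and a(1 + e), so their difference is 2ae, or 2e as a fraction of the mean distance. And since the incoming solar power per square meter falls off as the inverse square of the distance,

\displaystyle{ F \propto \frac{1}{r^2} \quad \Rightarrow \quad \frac{\Delta F}{F} \approx -2 \, \frac{\Delta r}{r} }

so a fractional change of 2e in distance gives a fractional change of about 4e in incoming radiation.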

Right now the eccentricity is 0.0167, or 1.67%. Thus, the distance from the Earth to Sun varies 3.34% over the course of a year. This in turn gives an annual variation in incoming solar radiation of about 6.68%. Note that this is not the cause of the seasons: those arise due to the Earth’s tilt, and occur at different times in the northern and southern hemispheres.

Obliquity

The angle of the Earth’s axial tilt with respect to the plane of its orbit, called the obliquity, varies between 22.1° and 24.5° in a roughly periodic way, with a period of 41,000 years. When the obliquity is high, seasonal variations are stronger.

Right now the obliquity is 23.44°, roughly halfway between its extreme values. It is decreasing, and will reach its minimum value around the year 10,000 CE.

Precession

The slow turning in the direction of the Earth’s axis of rotation relative to the fixed stars, called precession, has a period of roughly 23,000 years. As precession occurs, the seasons drift in and out of phase with the perihelion and aphelion of the Earth’s orbit.

Right now the perihelion occurs during the southern hemisphere’s summer, while the aphelion is reached during the southern winter. This tends to make the southern hemisphere seasons more extreme than the northern hemisphere seasons.

The gradual precession of the Earth is not due to the same physical mechanism as the wobbling of a top. That sort of wobbling does occur, but it has a period of only 427 days. The 23,000-year precession is due to tidal interactions between the Earth, Sun and Moon. For details, see:

• John Baez, The wobbling of the Earth and other curiosities.

In the real world, most things get more complicated the more carefully you look at them. For example, precession actually has several periodic components. According to André Berger, a top expert on changes in the Earth’s orbit, the four biggest components have these periods:

• 23,700 years

• 22,400 years

• 18,980 years

• 19,160 years

in order of decreasing strength. But in geology, these tend to show up either as a single peak around the mean value of 21,000 years, or two peaks at periods of 23,000 and 19,000 years.

To add to the fun, the three effects I’ve listed—changes in eccentricity, changes in obliquity, and precession—are not independent. According to Berger, cycles in eccentricity arise from ‘beats’ between different precession cycles:

• The 95,000-year eccentricity cycle arises from a beat between the 23,700-year and 19,000-year precession cycles.

• The 123,000-year eccentricity cycle arises from a beat between the 22,400-year and 18,980-year precession cycles.

I’d like to delve into all this stuff more deeply someday, but I haven’t had time yet. For now, let me just refer you to this classic review paper:

• André Berger, Pleistocene climatic variability at astronomical frequencies, Quaternary International 2 (1989), 1-14.

Later, as I get up to speed, I’ll talk about more modern work.

Paleontology versus astronomy

Now we can compare the data from ocean sediments to the Milankovitch cycles as computed in astronomy:

• The roughly 19,000-year cycle in ocean sediments may come from the 18,980-year and 19,160-year precession cycles.

• The roughly 23,000-year cycle in ocean sediments may come from the 23,700-year precession cycle.

• The roughly 42,000-year cycle in ocean sediments may come from the 41,000-year obliquity cycle.

• The roughly 100,000-year cycle in ocean sediments may come from the 95,000-year and 123,000-year eccentricity cycles.

Again, the last one looks the most fuzzy. As we saw, different kinds of sediments seem to indicate cycles of 94, 106 and 122 thousand years. At least two of these periods match eccentricity cycles fairly well. But a detailed analysis would be required to distinguish between real effects and coincidences in this subject! Such analyses have been done, of course. But until I study them more, I won’t try to discuss them.


Talk at Berkeley

15 November, 2012

This Friday, November 16, 2012, I’ll be giving the annual Lang Lecture at the math department of U. C. Berkeley. I’ll be speaking on The Mathematics of Planet Earth. There will be tea and cookies in 1015 Evans Hall from 3 to 4 pm. The talk itself will be in 105 Northgate Hall from 4 to 5 pm, with questions going on to 5:30 if people are interested.

You’re all invited!


Mathematics of the Environment (Part 7)

13 November, 2012

Last time we saw how the ice albedo effect could, in theory, make the Earth’s climate have two stable states. In a very simple model, we saw that a hot Earth might stay hot since it’s dark and absorbs lots of sunlight, while a cold Earth might stay cold—since it’s icy and white and reflects most of the sunlight.

If you haven’t tried it yet, make sure to play around with this program pioneered by Lesley de Cruz and then implemented in Java by Allan Erskine:

Temperature dynamics.

The explanations were all given in Part 6 so I won’t repeat them here!

This week, we’ll see how noise affects this simple climate model. We’ll borrow lots of material from here:

Glyn Adgie and Tim van Beek, Increasing the signal-to-noise ratio with more noise.

And we’ll use software written by these guys together with Allan Erskine. The power of the Azimuth Project knows no bounds!

Stochastic differential equations

The Milankovitch cycles are periodic changes in how the Earth orbits the Sun. One question is: can these changes be responsible for the ice ages? At first sight it seems impossible, because the changes are simply too small. But it turns out that we can find a dynamical system where a small periodic external force is actually strengthened by random ‘noise’ in the system. This phenomenon has been dubbed ‘stochastic resonance’ and has been proposed as an explanation for the ice ages. It also shows up in many other phenomena:

• Roberto Benzi, Stochastic resonance: from climate to biology.

But to understand it, we need to think a little about stochastic differential equations.

A lot of systems can be described by ordinary differential equations:

\displaystyle{ \frac{d x}{d t} = f(x, t)}

If f is nice, the time evolution of the system will be a nice smooth function x(t), like the trajectory of a thrown stone. But there are situations where we have some kind of noise, a chaotic, fluctuating influence, that we would like to take into account. This could be, for example, turbulence in the air flow around a rocket. Or, in our case, short-term fluctuations of the Earth’s weather. If we take this into account, we get an equation of the form

\displaystyle{ \frac{d x}{d t} = f(x, t) + w(t) }

where w(t) is a ‘random function’ which models the noise. Typically this noise is just a simplified way to take into account rapidly changing fine-grained aspects of the system at hand. This way we do not have to explicitly model these aspects, which is often impossible.

White noise

We’ll look at a model of this sort:

\displaystyle{ \frac{d x}{d t} = f(x, t) + w(t) }

where w(t) is ‘white noise’. But what’s that?

Very naively speaking, white noise is a random function that typically looks very wild, like this:

White noise actually has a sound, too: it sounds like this! The idea is that you can take a random function like the one graphed above, and use it to drive the speakers of your computer, to produce sound waves of an equally wild sort. And it sounds like static.

However, all this is naive. Why? The concept of a ‘random function’ is not terribly hard to define, at least if you’ve taken the usual year-long course on real analysis that we force on our grad students at U.C. Riverside: it’s a probability measure on some space of functions. But white noise is a bit too spiky and singular to be a random function: it’s a random distribution.

Distributions were envisioned by Dirac but formalized later by Laurent Schwartz and others. A distribution D(t) is a bit like a function, but often distributions are too nasty to have well-defined values at points! Instead, all that makes sense are expressions like this:

\displaystyle{ \int_{-\infty}^\infty D(t) f(t) \, d t}

where we multiply the distribution by any compactly supported smooth function f(t) and then integrate it. Indeed, we specify a distribution just by saying what all these integrals equal. For example, the Dirac delta \delta(t) is a distribution defined by

\displaystyle{ \int_{-\infty}^\infty \delta(t) f(t) \, d t = f(0) }

If you try to imagine the Dirac delta as a function, you run into a paradox: it should be zero everywhere except at t = 0, but its integral should equal 1. So, if you try to graph it, the region under the graph should be an infinitely skinny infinitely tall spike whose area is 1:

But that’s impossible, at least in the standard framework of mathematics—so such a function does not really exist!

Similarly, white noise w(t) is too spiky to be an honest random function, but if we multiply it by any compactly supported smooth function f(t) and integrate, we get a random variable

\displaystyle{ \int_{-\infty}^\infty w(t) f(t) \, d t }

whose probability distribution is a Gaussian with mean zero and standard deviation equal to

\displaystyle{ \sqrt{ \int_{-\infty}^\infty f(t)^2 \, d t} }

(Note: the word ‘distribution’ has a completely different meaning when it shows up in the phrase probability distribution! I’m assuming you’re comfortable with that meaning already.)

Indeed, the above formulas make sense and are true, not just when f is compactly supported and smooth, but whenever

\displaystyle{ \sqrt{ \int_{-\infty}^\infty f(t)^2 \, d t} } < \infty

If you know about Gaussians and you know about this sort of integral, which shows up all over math and physics, you’ll realize that white noise is an extremely natural concept!

Brownian motion

While white noise is not a random function, its integral

\displaystyle{ W(s) = \int_0^s w(t) \, dt }

turns out to be a well-defined random function. It has a lot of names, including: the Wiener process, Brownian motion, red noise and—as a kind of off-color joke—brown noise.

The capital W here stands for Wiener: that is, Norbert Wiener, the famously absent-minded MIT professor who studied this random function… and also invented cybernetics:

Brownian noise sounds like this.

Puzzle: How does it sound different from white noise, and why?

It looks like this:

Here we are zooming in closer and closer, while rescaling the vertical axis as well. We see that Brownian noise is self-similar: if W(t) is Brownian noise, so is W(c t)/\sqrt{c} for all c > 0.

More importantly for us, Brownian noise is the solution of this differential equation:

\displaystyle{ \frac{d W}{d t} = w(t) }

where w(t) is white noise. This is essentially true by definition, but making it rigorous takes some work.

Fancier stochastic differential equations

\displaystyle{ \frac{d x}{d t} = f(x, t) + w(t) }

take even more work to rigorously formulate and solve. You can read about them here:

Stochastic differential equation, Azimuth Library.

It’s actually much easier to explain the difference equations we use to approximately solve these stochastic differential equations on the computer. Suppose we discretize time into steps like this:

t_i = i \Delta t

where \Delta t > 0 is some fixed number, our ‘time step’. Then we can define

x(t_{i+1}) = x(t_i) + f(x(t_i), t_i) \; \Delta t + w_i

where w_i are independent Gaussian random variables with mean zero and standard deviation

\sqrt{ \Delta t}

The square root in this formula comes from the definition I gave of white noise.

If we use a random number generator to crank out random numbers w_i distributed in this way, we can write a program to work out the numbers x(t_i) if we are given some initial value x(0). And if the time step \Delta t is small enough, we can hope to get a ‘good approximation to the true solution’. Of course defining what we mean by a ‘good approximation’ is tricky here… but I think it’s more important to just plunge in and see what happens, to get a feel for what’s going on here.
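
Here is a quick numerical check of that square root, a sketch of my own rather than anything from the original post. Summing f(t_i) w_i over all time steps approximates the integral of w(t) f(t), so its standard deviation should come out close to the square root of the integral of f(t)^2:

```python
import numpy as np

# Sanity check (not from the post): the sum of f(t_i) * w_i, with w_i Gaussian
# of mean 0 and standard deviation sqrt(dt), approximates the integral of
# w(t) f(t) dt, whose standard deviation should be sqrt( integral of f(t)^2 dt ).

rng = np.random.default_rng(42)
dt = 0.001
t = np.arange(0.0, 10.0, dt)
f = np.exp(-(t - 5.0) ** 2)          # any reasonably nice test function will do

samples = [np.sum(f * rng.normal(0.0, np.sqrt(dt), size=t.shape))
           for _ in range(2000)]

print("empirical standard deviation:", np.std(samples))
print("sqrt of integral of f^2:     ", np.sqrt(np.sum(f ** 2) * dt))
```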

Stochastic resonance

Let’s do an example of this equation:

\displaystyle{ \frac{d x}{d t} = f(x, t) + w(t) }

which exhibits ‘stochastic resonance’. Namely:

\displaystyle{ \frac{d x}{d t} = x(t) - x(t)^3 + A \;  \sin(t)  + \sqrt{2D} w(t) }

Here A and D are constants we get to choose. If we set them both to zero we get:

\displaystyle{ \frac{d x}{d t} = x(t) - x(t)^3 }

This has stable equilibrium solutions at x = \pm 1 and an unstable equilibrium in between at x = 0. So, this is a bistable model similar to the one we’ve been studying, but mathematically simpler!

Then we can add an oscillating time-dependent term:

\displaystyle{ \frac{d x}{d t} = x(t) - x(t)^3 + A \;  \sin(t) }

which wiggles the system back and forth. This can make it jump from one equilibrium to another.

And then we can add on noise:

\displaystyle{ \frac{d x}{d t} = x(t) - x(t)^3 + A \;  \sin(t)  + \sqrt{2D} w(t) }

Let’s see what the solutions look like!

In the following graphs, the green curve is A \sin(t), while the red curve is x(t). Here is a simulation with a low level of noise:

low noise level

As you can see, within the time of the simulation there is no transition from the stable state at 1 to the one at -1. If we were doing a climate model, this would be like the Earth staying in the warm state.

Here is a simulation with a high noise level:

high noise level

The solution x(t) jumps around wildly. By inspecting the graph with your eyes only, you don’t see any pattern in it, do you?

But finally, here is a simulation where the noise level is not too small, and not too big:

medium noise level

Here we see the noise helping the solution x(t) hop from around x = -1 to x = 1, or vice versa. The solution x(t) is not at all ‘periodic’—it’s quite random. But still, it tends to hop back and forth thanks to the combination of the sinusoidal term A \sin(t) and the noise.
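
If you want to reproduce these three regimes yourself, here is a minimal Python sketch of my own (not the Azimuth team’s code); the forcing amplitude and the three noise levels are illustrative choices:

```python
import numpy as np

# Euler-Maruyama integration of  dx/dt = x - x^3 + A sin(t) + sqrt(2D) w(t),
# run at three noise levels; we count zero crossings to compare the regimes.

def run(A=0.2, D=0.1, dt=0.01, t_max=200.0, x0=1.0, seed=1):
    rng = np.random.default_rng(seed)
    steps = int(t_max / dt)
    x = np.empty(steps + 1)
    x[0] = x0
    for i in range(steps):
        drift = x[i] - x[i] ** 3 + A * np.sin(i * dt)
        x[i + 1] = x[i] + drift * dt + np.sqrt(2 * D) * rng.normal(0.0, np.sqrt(dt))
    return x

for D in (0.05, 0.25, 2.0):     # low, intermediate and high noise (illustrative values)
    x = run(D=D)
    crossings = np.count_nonzero(np.diff(np.sign(x)))
    print(f"D = {D}: {crossings} zero crossings")
```

Counting zero crossings is crude, but you should see the pattern: almost none at low noise, a moderate number at intermediate noise, and a huge number at high noise.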

Glyn Adgie, Allan Erskine, Jim Stuttard and Tim van Beek have created an online model that lets you solve this stochastic differential equation for different values of A and D. You can try it here:

Stochastic resonance example.

You can change the values using the sliders under the graphic and see what happens. You can also choose different ‘random seeds’, which means that the random numbers used in the simulation will be different.

To read more about stochastic resonance, go here:

Stochastic resonance, Azimuth Library.

In future weeks I hope to say more about the actual evidence that stochastic resonance plays a role in our glacial cycles! It would also be great to go back to our climate model from last time and add noise. We’re working on that.


Wind and Water on Mars

11 November, 2012

Frosty dunes

 

I love this photo, because it shows that Mars is a lively place with wind and water. These dunes near the north pole, occupying a region the size of Texas, have been sculpted by wind into long lines with crests 500 meters apart. Their hollows are covered with frost, which appears bluish-white in this infrared photograph. The big white spot near the bottom is a hill 100 meters high.

For more info, go here:

• THEMIS, North polar sand sea.

If you download the full-sized version of this photo, either by clicking on my picture or going to this webpage, you’ll see it’s astoundingly detailed!

THEMIS is the Thermal Emission Imaging System aboard the Mars Odyssey spacecraft, which has been orbiting Mars since 2002. It combines a 5-wavelength visual imaging system with a 9-wavelength infrared imaging system. It’s been taking great pictures—especially of regions that are too rugged for rovers like Opportunity, Spirit and Curiosity.

Because those rovers landed in places that were chosen to be safe, the pictures they take sometimes make Mars look… well, a bit dull. It’s not!

Let me show you what I mean.

Barchans

 

These are barchans on Mars, C-shaped sand dunes that slowly move through the desert like this:

C
  C
    C

And see the dark fuzzy stuff? More on that later!

Barchans are also found on Earth, and surely on many other planets across the Universe. They’re one of several basic dune patterns—an inevitable consequence of the laws of nature under fairly common conditions.

Sand gradually accumulates on the upwind side of a barchan. Then it falls down the other side, called the ‘slip face’. The upwind slope is gentle, while the slope of the slip face is the angle of repose for sand: the maximum angle it can tolerate before it starts slipping down. On Earth that’s between 32 and 34 degrees.

Puzzle: What is the angle of repose of sand on Mars? Does the weaker pull of gravity let sandpiles be steeper? Or are they just as steep as on Earth?

Barchans gradually migrate in the direction of the wind, with small barchans moving faster than big ones. And when barchans collide, the smaller ones pass right through the big ones! So, they’re a bit like what physicists call solitons: waves that maintain their identity like particles. However, they display more complicated behaviors.

This simulation shows what can happen when two collide:

Depending on the parameters, they can:

c: coalesce into one barchan,

b: breed to form more barchans,

bu: bud, with the smaller one splitting in two, or

s: act like solitons, with one going right through the other!

This picture is from here:

• Orencio Durán, Veit Schwámmle and Hans J. Herrmann, Simulations of binary collisions of barchan dunes: the collision dynamics and its influence on the dune size distribution.

In this picture there is no ‘offset’ between the colliding barchans: they hit head-on. With an offset, more complicated things can happen – check out this picture:

It may seem surprising that there’s enough wind on Mars to create dunes. After all, the air pressure there is about 1% what it is here on Earth! But in fact the wind speed on Mars often exceeds 200 kilometers per hour, with gusts up to 600 kilometers per hour. There are dust storms on Mars so big they were first seen from telescopes on Earth long ago. So, wind is a big factor in Martian geology:

• NASA, Mars exploration program: dust storms.

The Mars rover Spirit even got its solar panels cleaned by some dust devils, and it took some movies of them:

Geysers?

 

This picture shows a dune field less than 400 kilometers from the north pole, bordered on both sides by flat regions—but also a big cliff at one end.

Here’s a closeup of those dunes… with stands of trees on top?!?

No, that’s an optical illusion. But whatever it is, it’s something strange. Robert Krulwich put it nicely:

They were first seen in 1998; they don’t look like anything we have here on Earth. To this day, no one is sure what they are, but we now know this: They come, then they go. Every Martian spring, they appear out of nowhere, showing up—70 percent of the time—where they were the year before. They pop up suddenly, sometimes overnight. When winter comes, they vanish.

In 2010, astronomer Candy Hansen tried to explain what’s going on, writing:

There is a vast region of sand dunes at high northern latitudes on Mars. In the winter, a layer of carbon dioxide ice covers the dunes, and in the spring as the sun warms the ice it evaporates. This is a very active process, and sand dislodged from the crests of the dunes cascades down, forming dark streaks.

She focused our attention on this piece of the image:

and she wrote:

In the subimage falling material has kicked up a small cloud of dust. The color of the ice surrounding adjacent streaks of material suggests that dust has settled on the ice at the bottom after similar events.

Also discernible in this subimage are polygonal cracks in the ice on the dunes (the cracks disappear when the ice is gone).

More recently, though, scientists have suggested that geysers are involved in this process, which might make it very active indeed!

Geysers formed as frozen carbon dioxide turns to gas, shooting out clumps of dark, basaltic sand, which slide down the dunes… that’s the most popular explanation. But maybe they’re colonies of photosynthetic Martian microorganisms soaking up the sunlight! Or maybe geysers are shooting up dark stuff that’s organic matter formed by some biological process. A bunch form right around sunrise, so something is being rapidly triggered by the sun.

This has some nice prose and awesome pictures:

• Robert Krulwich, Are those spidery black things on Mars dangerous? (maybe), Krulwich Wonders, National Public Radio, 3 October 2012.

The big picture above, and Candy Hansen’s explanation, can be found here:

• HiRISE, Falling material kicks up cloud of dust on dunes.

HiRISE, which stands for High Resolution Imaging Science Experiment, is a project based in Arizona that’s created an amazing website full of great Mars photos. For more clues, try this:

Martian geyser, Wikipedia.

Cryptic terrain

What’s going on in this region of Mars?

Candy Hansen writes:

There is an enigmatic region near the south pole of Mars known as the “cryptic” terrain. It stays cold in the spring, even as its albedo darkens and the sun rises in the sky.

This region is covered by a layer of translucent seasonal carbon dioxide ice that warms and evaporates from below. As carbon dioxide gas escapes from below the slab of seasonal ice it scours dust from the surface. The gas vents to the surface, where the dust is carried downwind by the prevailing wind.

The channels carved by the escaping gas are often radially organized and are known informally as “spiders.”

This is from:

• HiRISE, Cryptic terrain on Mars.

Vastitas Borealis

Here’s ice in a crater in the northern plains on Mars—the region with the wonderful name Vastitas Borealis:

Many scientists believe this huge plain was an ocean during the Hesperian Epoch, a period of Martian history that stretches from about 3.5 to about 1.8 billion years ago. Later, around the end of the Hesperian, they think about 30% of the water on Mars evaporated and left the atmosphere, drifting off into outer space… part of the danger of life on a planet without much gravity. The oceans then froze. Most of them slowly sublimated, disappearing into water vapor without ever melting. This water vapor was also lost to outer space.

• Linda M. V. Martel, Ancient floodwaters and seas on Mars.

But there’s still a lot of water left, especially in the polar ice caps. The north pole has an ice cap with 820,000 cubic kilometers of ice! That’s equal to 30% of the Earth’s Greenland ice sheet—enough to cover the whole surface of Mars to a depth of 5.6 meters if it melted, if we pretend Mars is flat.

And the south pole is covered by a slab of ice about 3 kilometers thick, a mixture of 85% carbon dioxide ice and 15% water ice, surrounded by steep slopes made almost entirely of water ice. This has enough water that if it melted it would cover the whole surface to a depth of 11 meters!

There’s also lots of permafrost underground, and frost on the surface, and bits of ice like this. The picture above was taken by the Mars Express satellite:

• ESA, Water ice in crater at Martian north pole.

The image is close to natural color, but the vertical relief is exaggerated by a factor of 3. The crater is 35 kilometers wide and 2 kilometers deep. It’s incredible how they can get this kind of picture from satellite photos and lots of clever image processing. I hope they didn’t do too much stuff just to make it look pretty.

Chasma Boreale

Here is the north pole of Mars:

As in Antarctica and Greenland, cold dense air flows downwards off the polar ice cap, creating intense winds called katabatic winds. These pick up and redeposit surface ice to make grooves in the ice. The swirly pattern comes from the Coriolis effect: while the winds are blowing more or less straight, Mars is turning around its pole, so they seem to swerve.

As you can see, the north polar ice cap has a huge canyon running through it, called Chasma Boreale:

Here’s an amazing picture of what it’d be like to stand near the head of this chasm:

Click to enlarge this—it deserves to be bigger! Here’s the story:

Climatic cycles of ice and dust built the Martian polar caps, season by season, year by year—and then whittled down their size when the climate changed. Here we are looking at the head of Chasma Boreale, a canyon that reaches 570 kilometers (350 miles) into the north polar cap. Canyon walls rise about 1,400 meters (4,600 feet) above the floor. Where the edge of the ice cap has retreated, sheets of sand are emerging that accumulated during earlier ice-free climatic cycles. Winds blowing off the ice have pushed loose sand into dunes, then driven them down-canyon in a westward direction, toward our viewpoint.

The above picture was cleverly created using photos from THEMIS. The vertical scale has been exaggerated by a factor of 2.5, I’m sad to say. You can download a 9-megabyte version from here:

• THEMIS, Chasma Boreale and the north polar ice cap.

and you can see an actual photo of this same canyon here:

• THEMIS, Dunes and ice in Chasma Boreale.

It’s beautifully detailed; here’s a miniature version:

and a sub-image that shows the layers of ice and sand:

Scientists are studying these layers in the ice cap to see if they match computer simulations of the climate of Mars. Just as the Earth’s orbit goes through changes called Milankovitch cycles, so does the orbit of Mars. These affect the climate: for example, when the tilt is big the tropics become colder, and polar ice migrates toward the equator. I don’t know much about this, despite my interest in Milankovitch cycles. What’s a good place to start learning more?

Here’s a closer view of icy dunes near the North pole:

Martian Sunset

As we’ve seen, Mars is a beautiful world, but a world in a minor key, a world whose glory days—the Hesperian Epoch—are long gone, whose once grand oceans are now reduced to windy canyons, icy dunes, and the massive ice caps of the poles. Let’s say goodbye to it for now… leaving off with this Martian sunset, photographed by the rover Spirit in Gusev Crater on May 19th, 2005.

• NASA Mars Exploration Rover Mission, A moment frozen in time.

This Panoramic Camera (Pancam) mosaic was taken around 6:07 in the evening of the rover’s 489th martian day, or sol. Spirit was commanded to stay awake briefly after sending that sol’s data to the Mars Odyssey orbiter just before sunset. This small panorama of the western sky was obtained using Pancam’s 750-nanometer, 530-nanometer and 430-nanometer color filters. This filter combination allows false color images to be generated that are similar to what a human would see, but with the colors slightly exaggerated. In this image, the bluish glow in the sky above the Sun would be visible to us if we were there, but an artifact of the Pancam’s infrared imaging capabilities is that with this filter combination the redness of the sky farther from the sunset is exaggerated compared to the daytime colors of the martian sky.

Because Mars is farther from the Sun than the Earth is, the Sun appears only about two-thirds the size that it appears in a sunset seen from the Earth. The terrain in the foreground is the rock outcrop “Jibsheet”, a feature that Spirit has been investigating for several weeks (rover tracks are dimly visible leading up to Jibsheet). The floor of Gusev crater is visible in the distance, and the Sun is setting behind the wall of Gusev some 80 km (50 miles) in the distance.

This mosaic is yet another example from MER of a beautiful, sublime martian scene that also captures some important scientific information. Specifically, sunset and twilight images are occasionally acquired by the science team to determine how high into the atmosphere the martian dust extends, and to look for dust or ice clouds. Other images have shown that the twilight glow remains visible, but increasingly fainter, for up to two hours before sunrise or after sunset. The long martian twilight (compared to Earth’s) is caused by sunlight scattered around to the night side of the planet by abundant high altitude dust. Similar long twilights or extra-colorful sunrises and sunsets sometimes occur on Earth when tiny dust grains that are erupted from powerful volcanoes scatter light high in the atmosphere.


Mathematics of the Environment (Part 6)

10 November, 2012

Last time we saw a ‘bistable’ climate model, where the temperatures compatible with a given amount of sunshine can form an S-shaped curve like this:

The horizontal axis is insolation, the vertical is temperature. Between the green and the red lines the Earth can have 3 temperatures compatible with a given insolation. For example, the black vertical line intersects the S-shaped curve in three points. So we get three possible solutions: a hot Earth, a cold Earth, and an intermediate Earth.

But last time I claimed the intermediate Earth was unstable, so there are just two stable solutions. So, we say this model is bistable. This is like a simple light switch, which has two stable positions but also an intermediate unstable position halfway in between.

(Have you ever enjoyed putting a light switch into this intermediate position? If not, you must not be a physicist.)

Why is the intermediate equilibrium unstable? It seems plausible from the light switch example, but to be sure, we need to go back and study the original equation:

\displaystyle{ C \frac{d T}{d t} = - A - B T + Q c(T(t)) }

We need to see what happens when we push T slightly away from one of its equilibrium values. We could do this analytically or numerically.

Luckily, Allan Erskine has made a wonderful program that lets us study it numerically: check it out!

Temperature dynamics.

Here you’ll see a bunch of graphs of temperature as a function of time, T(t). To spice things up, Allan has made the insolation a function of time, which starts out big for some interval [0,\tau] and then drops to its usual value Q = 341.5. So, these graphs are solutions of

\displaystyle{ C \frac{d T}{d t} = - A - B T + Q(t) c(T(t)) }

where Q(t) is a step function with

Q(t) = \left\{ \begin{array}{ccl} Q + X & \mathrm{for} & 0 \le t \le \tau \\  Q & \mathrm{for} & t > \tau  \end{array} \right.

The different graphs show solutions with different initial conditions, ranging from hot to cold. Using sliders on the bottom, you can adjust:

• the coalbedo transition rate \gamma,

• the amount X of extra insolation,

• the time \tau at which the extra insolation ends.

I urge you to start by setting \tau to its maximum value. That will make the insolation be constant as a function of time. Then if \gamma and X are big enough, you’ll get bistability. For example:

I get this with \gamma about 0.08, X about 28.5. You can see a hot stable equilibrium, a cold one, and a solution that hesitates between the two for quite a while before going up to the hot one. This intermediate solution must be starting out very slightly above the unstable equilibrium.

When X is zero, there’s only one equilibrium solution: the cold Earth.

I can’t make X so big that the hot Earth is the only equilibrium, but it’s possible according to our model: I’ll need to change the software a bit to let us make the insolation bigger.

All sorts of more interesting things happen when we move \tau down from its maximum value. I hope you play with the parameters and see what happens. But essentially, what happens is that the hot Earth is only stable before t = \tau, since we need the extra insolation to make that happen. After that, the Earth is fated to go to a cold state.

Needless to say, these results should not be trusted when it comes to the actual climate of our actual planet! More about that later.

We can also check the bistability in a more analytical way. We get an equilibrium solution of

\displaystyle{ C \frac{d T}{d t} = - A - B T + Q c(T(t)) }

whenever we find a number T obeying this equation:

- A - B T + Q c(T) = 0

We can show that for certain values of \gamma and Q, we get solutions for three different temperatures T. It’s easy to see that - A - B T + Q c(T) is positive for very small T: if the Earth were extremely cold, the Sun would warm it up. Similarly, this quantity is negative for very large T: the Earth would cool down if it were very hot. So, the reason

- A - B T + Q c(T) = 0

has three solutions is that it starts out positive, then goes down below zero, then goes up above zero, and then goes down below zero again. So, for the intermediate point at which it’s zero, we have

\displaystyle{\frac{d}{dT}( -A - B T + Q c(T)) > 0  }

That means that if it starts out slightly warmer than this value of T, the temperature will increase—so this solution is unstable. For the hot and cold solutions, we get

\displaystyle{ \frac{d}{dT}(-A - B T + Q c(T)) < 0  }

so these equilibria are stable.
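
Here is a small numerical version of this argument, a sketch of my own rather than anything from Allan’s program: scan a range of temperatures, find where - A - B T + Q c(T) changes sign, and classify each root by the sign of the slope there. The values \gamma = 0.08 and X = 28.5 are the ones from the example above.

```python
import numpy as np

# Find the equilibria of  g(T) = -A - B*T + Q*c(T)  on a temperature grid and
# classify each one: stable where g'(T) < 0, unstable where g'(T) > 0.

A, B = 218.0, 1.90
Q = 341.5 + 28.5                      # insolation bumped up by X = 28.5
c_i, c_f, gamma = 0.35, 0.70, 0.08

def c(T):
    return c_i + 0.5 * (c_f - c_i) * (1.0 + np.tanh(gamma * T))

def g(T):
    return -A - B * T + Q * c(T)

T = np.linspace(-60.0, 60.0, 120001)
vals = g(T)
crossings = np.where(np.sign(vals[:-1]) != np.sign(vals[1:]))[0]
for i in crossings:
    # linear interpolation for the root, finite difference for the slope
    T_root = T[i] - vals[i] * (T[i + 1] - T[i]) / (vals[i + 1] - vals[i])
    slope = (vals[i + 1] - vals[i]) / (T[i + 1] - T[i])
    kind = "stable" if slope < 0 else "unstable"
    print(f"equilibrium at T = {T_root:7.2f} deg C  ({kind})")
```

With these settings it finds a cold stable equilibrium, an unstable one in between, and a hot stable one, matching the three branches of the S-shaped curve.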

A moral

What morals can we extract from this model?

As far as climate science goes, one moral is that it pays to spend some time making sure we understand simple models before we dive into more complicated ones. Right now we’re looking at a very simple one, but we’re already seeing some interesting phenomena. The kind of model we’re looking at now is called a Budyko-Sellers model. These have been studied since the late 1960’s:

• M. I. Budyko, On the origin of glacial epochs (in Russian), Meteor. Gidrol. 2 (1968), 3-8.

• M. I. Budyko, The effect of solar radiation variations on the climate of the earth, Tellus 21 (1969), 611-619.

• William D. Sellers, A global climatic model based on the energy balance of the earth-atmosphere system, J. Appl. Meteor. 8 (1969), 392-400.

• Carl Crafoord and Erland Källén, A note on the condition for existence of more than one steady state solution in Budyko-Sellers type models, J. Atmos. Sci. 35 (1978), 1123-1125.

• Gerald R. North, David Pollard and Bruce Wielicki, Variational formulation of Budyko-Sellers climate models, J. Atmos. Sci. 36 (1979), 255-259.

I should talk more about some slightly more complex models someday.

It also pays to compare our models to reality! For example, the graphs we’ve seen show some remarkably hot and cold temperatures for the Earth. That’s a bit unnerving. Let’s investigate. Suppose we set \gamma = 0 on our slider. Then the coalbedo of the Earth becomes independent of temperature: it’s 0.525, halfway between its icy and ice-free values. Then, when the insolation takes its actual value of 341.5 watts per square meter, the model says the Earth’s temperature is very chilly: about -20 °C!

Does that mean the model is fundamentally flawed? Maybe not! After all, it’s based on a very light-colored Earth. Suppose we use the actual albedo of the Earth. Of course that’s hard to define, much less determine. But let’s just look up some average value of the Earth’s albedo: supposedly it’s about 0.3. That gives a coalbedo of c = 0.7. If we plug that into our formula:

\displaystyle{ Q = \frac{ A + B T } {c} }

we get 11 °C. That’s not too far from the Earth’s actual average temperature, namely about 15 °C. So the chilly temperature of -20 °C seems to come from an Earth that’s a lot lighter in color than ours.
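
For the record, here is the arithmetic behind both of these numbers, obtained by solving the equilibrium condition A + B T = Q c for T:

\displaystyle{ T = \frac{Q c - A}{B} = \frac{341.5 \times 0.525 - 218}{1.90} \approx -20\,^\circ\mathrm{C}, \qquad T = \frac{341.5 \times 0.7 - 218}{1.90} \approx 11\,^\circ\mathrm{C} }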

Our model includes the greenhouse effect, since the coefficients A and B were determined by satellite measurements of how much radiation actually escapes the Earth’s atmosphere and shoots out into space. As a further check on our model, we can look at an even simpler zero-dimensional energy balance model: a completely black Earth with no greenhouse effect. We discussed that earlier.

As explained there, this model gives the Earth a temperature of 6 °C. It also shows that lowering the albedo to a realistic value of 0.3 lowers the temperature to a chilly -18 °C. To get from that to something like our Earth, we must take the greenhouse effect into account.

This sort of fiddling around is the sort of thing we must do to study the flaws and virtues of a climate model. Of course, any realistic climate model is vastly more sophisticated than the little toy we’ve been looking at, so the ‘fiddling around’ must also be more sophisticated. With a more sophisticated model, we can also be more demanding. For example, when I said 11 °C “is not too far from the Earth’s actual average temperature, namely about 15 °C”, I was being very blasé about what’s actually a big discrepancy. I only took that attitude because the calculations we’re doing now are very preliminary.


Graduate Program in Biostatistics

7 November, 2012

Are you an undergrad who likes math and biology and wants a good grad program? This one sounds really interesting. The ad I bumped into is focused on minority applicants, maybe because U.C. Riverside is packed with students whose skin ain’t pale. But I’d say biostatistics is a good career even if you have the misfortune of needing high-SPF sunscreen:    

The Department of Biostatistics, which administers PhD training at the Harvard School of Public Health, seeks outstanding minority applicants for its graduate programs in Biostatistics.

Biostatistics is an excellent career choice for students interested in mathematics applied to real world problems. The current data explosion is contributing to the rising stature of, and demand for biostatisticians, as noted in the New York Times:

I keep saying that the sexy job in the next 10 years will be statisticians … and I’m not kidding.

To date, Biostatistics has not been successful in attracting qualified minority students, particularly African Americans. Students best suited for careers in Biostatistics are those with strong mathematical abilities, combined with interests in health and biology. Unfortunately, statistics is not widely taught at the undergraduate level, and many potentially excellent candidates simply do not learn about the possibility of a valuable and fulfilling career in Biostatistics. Many minority students who could thrive in a Biostatistics program choose instead to enter medical school. Public health in general, and Biostatistics in particular, are not even considered as options. We would like your help in identifying qualified students before they make their choices regarding graduate school or other career paths.

All doctoral students accepted in our department are guaranteed full tuition and stipend support throughout their program, as long as they are making satisfactory progress towards the PhD degree. Every effort is made to meet the individual needs of each student, and to insure the successful completion of graduate work.

The web site for prospective students is here.

Please note the deadline for submitting applications to the MA and PhD programs for entry in the fall of 2013 is December 15, 2012.

We look forward to answering any questions you may have. Questions about our graduate programs can be directed to Jelena Follweiller, at jtillots@hsph.harvard.edu.


Mathematics of the Environment (Part 5)

6 November, 2012

We saw last time that the Earth’s temperature seems to have been getting colder but also more erratic for the last 30 million years or so. Here’s the last 5 million again:

People think these glacial cycles are due to variations in the Earth’s orbit, but as we’ll see later, those cause quite small changes in ‘insolation’—roughly, the amount of sunshine hitting the Earth (as a function of time and location). So, M. I. Budyko, an expert on the glacial cycles, wrote:

Thus, the present thermal regime and glaciations of the Earth prove to be characterized by high instability. Comparatively small changes of radiation—only by 1.0-1.5%—are sufficient for the development of ice cover on the land and oceans that reaches temperate latitudes.

How can small changes in the amount of sunlight hitting the Earth, or other parameters, create big changes in the Earth’s temperature? The obvious answer is positive feedback: some sort of amplifying effect.

But what could it be? Do we know feedback mechanisms that can amplify small changes in temperature? Yes. Here are a few obvious ones:

Water vapor feedback. When it gets warmer, more water evaporates, and the air becomes more humid. But water vapor is a greenhouse gas, which causes additional warming. Conversely, when the Earth cools down, the air becomes drier, so the greenhouse effect becomes weaker, which tends to cool things down.

Ice albedo feedback. Snow and ice reflect more light than liquid oceans or soil. When the Earth warms up, snow and ice melt, so the Earth becomes darker, absorbs more light, and tends to get even warmer. Conversely, when the Earth cools down, more snow and ice form, so the Earth becomes lighter, absorbs less light, and tends to get even cooler.

Carbon dioxide solubility feedback. Cold water can hold more carbon dioxide than warm water: that’s why opening a warm can of soda can be so explosive. So, when the Earth’s oceans warm up, they release carbon dioxide. But carbon dioxide is a greenhouse gas, which causes additional warming. Conversely, when the oceans cool down, they absorb more carbon dioxide, so the greenhouse effect becomes weaker, which tends to cool things down.

Of course, there are also negative feedbacks: otherwise the climate would be utterly unstable! There are also complicated feedbacks whose overall effect is harder to evaluate:

Planck feedback. A hotter world radiates more heat, which cools it down. This is the big negative feedback that keeps all the positive feedbacks from making the Earth insanely hot or insanely cold.

Cloud feedback. A warmer Earth has more clouds, which reflect more light but also increase the greenhouse effect.

Lapse rate feedback. An increased greenhouse effect changes the vertical temperature profile of the atmosphere, which has effects of its own—but this works differently near the poles and near the equator.

See also “week302” of This Week’s Finds, where Nathan Urban tells us more about feedbacks and how big they’re likely to be.

Understanding all these feedbacks, and which ones are important for the glacial cycles we see, is a complicated business. Instead of diving straight into this, let’s try something much simpler. Let’s just think about how the ice albedo effect could, in theory, make the Earth bistable.

To do this, let’s look at the very simplest model in this great not-yet-published book:

• Gerald R. North, Simple Models of Global Climate.

This is a zero-dimensional energy balance model, meaning that it only involves the average temperature of the earth, the average solar radiation coming in, and the average infrared radiation going out.

The average temperature will be T, measured in Celsius. We’ll assume the Earth radiates power per square meter equal to

\displaystyle{ A + B T }

where A = 218 watts/meter2 and B = 1.90 watts/meter2 per degree Celsius. This is a linear approximation taken from satellite data on our Earth. In reality, the power emitted grows faster than linearly with temperature.

We’ll assume the Earth absorbs solar power per square meter equal to

Q c(T)

Here:

Q is the average insolation: that is, the amount of solar power per square meter hitting the top of the Earth’s atmosphere, averaged over location and time of year. In reality Q is about 341.5 watts/meter2. This is one quarter of the solar constant, meaning the solar power per square meter that would hit a panel hovering in space above the Earth’s atmosphere and facing directly at the Sun. (Why a quarter? We’ve seen why: it’s because the area of a sphere is 4 \pi r^2 while the area of a circle is just \pi r^2.)

c(T) is the coalbedo: the fraction of solar power that gets absorbed. The coalbedo depends on the temperature; we’ll have to say how.

Given all this, we get

\displaystyle{ C \frac{d T}{d t} = - A - B T + Q c(T(t)) }

where C is Earth’s heat capacity in joules per degree per square meter. Of course this is a funny thing, because heat energy is stored not only at the surface but also in the air and/or water, and the details vary a lot depending on where we are. But if we consider a uniform planet with dry air and no ocean, North says we may roughly take C equal to about half the heat capacity at constant pressure of the column of dry air over a square meter, namely 5 million joules per degree Celsius.

The easiest thing to do is find equilibrium solutions, meaning solutions where \frac{d T}{d t} = 0, so that

A + B T = Q c(T)

Now C doesn’t matter anymore! We’d like to solve for T as a function of the insolation Q, but it’s easier to solve for Q as a function of T:

\displaystyle{ Q = \frac{ A + B T } {c(T)} }

To go further, we need to guess some formula for the coalbedo c(T). The coalbedo, remember, is the fraction of sunlight that gets absorbed when it hits the Earth. It’s 1 minus the albedo, which is the fraction that gets reflected. Here’s a little chart of albedos:

If you get mixed up between albedo and coalbedo, just remember: coal has a high coalbedo.

Since we’re trying to keep things very simple right now, not model nature in all its glorious complexity, let’s just say the average albedo of the Earth is 0.65 when it’s very cold and there’s lots of snow. So, let

c_i = 1  - 0.65 =  0.35

be the ‘icy’ coalbedo, good for very low temperatures. Similarly, let’s say the average albedo drops to 0.3 when it’s very hot and the Earth is darker. So, let

c_f = 1 - 0.3 = 0.7

be the ‘ice-free’ coalbedo, good for high temperatures when the Earth is darker.

Then, we need a function of temperature that interpolates between c_i and c_f. Let’s try this:

c(T) = c_i + \frac{1}{2} (c_f-c_i) (1 + \tanh(\gamma T))

If you’re not a fan of the hyperbolic tangent function \tanh, this may seem scary. But don’t be intimidated!

The function \frac{1}{2}(1 + \tanh(\gamma T)) is just a function that goes smoothly from 0 at low temperatures to 1 at high temperatures. This ensures that the coalbedo is near its icy value c_i at low temperatures, and near its ice-free value c_f at high temperatures. But the fun part here is \gamma, a parameter that says how rapidly the coalbedo rises as the Earth gets warmer. Depending on this, we’ll get different effects!

The function c(T) rises fastest at T = 0, since that’s where \tanh (\gamma T) has the biggest slope. We’re just lucky that in Celsius T = 0 is the melting point of ice, so this makes a bit of sense.

Now Allan Erskine‘s programming magic comes into play! I’m very fortunate that the Azimuth Project has attracted some programmers who can make nice software for me to show you. Unfortunately his software doesn’t work on this blog—yet!—so please hop over here to see it in action:

Temperature versus insolation.

You can slide a slider to adjust the parameter \gamma to various values between 0 and 1.

In the little graph at right, you can see how the coalbedo c(T) changes as a function of the temperature T. In this graph the temperature ranges from -50 °C to 50 °C; the graph depends on what value of \gamma you choose with the slider.

In the big graph at left, you can see the insolation Q required to yield a given temperature T between -50 °C and 50 °C. As we’ve seen, it’s easiest to graph Q as a function of T:

\displaystyle{ Q = \frac{ A + B T } {c_i + \frac{1}{2} (c_f-c_i) (1 + \tanh(\gamma T))} }

Solving for T here is hard, but we can just flip the graph over to see what equilibrium temperatures T are allowed for a given insolation Q between 200 and 500 watts per square meter.

The exciting thing is that when \gamma gets big enough, three different temperatures are compatible with the same amount of insolation! This means the Earth can be hot, cold or something intermediate even when the amount of sunlight hitting it is fixed. The intermediate state is unstable, it turns out—we’ll see why later. Only the hot and cold states are stable. So, we say the Earth is bistable in this simplified model.

Can you see how big \gamma needs to be for this bistability to kick in? It’s certainly there when \gamma = 0.05, since then we get a graph like this:

When the insolation is less than about 385 W/m2 there’s only a cold state. When it hits 385 W/m2, as shown by the green line, suddenly there are two possible temperatures: a cold one and a much hotter one. When the insolation is higher, as shown by the black line, there are three possible temperatures: a cold one, an unstable intermediate one, and a hot one. And when the insolation gets above 465 W/m2, as shown by the red line, there’s only a hot state!
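
If you would rather let a computer answer the question, here is a rough numerical sketch of my own (not Allan Erskine’s applet). Bistability appears exactly when Q, viewed as a function of T, stops being monotonic, so we can scan \gamma for the first value where the curve turns back on itself, and also read off the two fold insolations for \gamma = 0.05:

```python
import numpy as np

# Q as a function of T, for the coalbedo used in the text.
A, B = 218.0, 1.90
c_i, c_f = 0.35, 0.70
T = np.linspace(-50.0, 50.0, 20001)

def Q_of_T(T, gamma):
    c = c_i + 0.5 * (c_f - c_i) * (1.0 + np.tanh(gamma * T))
    return (A + B * T) / c

# Scan gamma: bistability appears once Q(T) is no longer monotonically increasing.
for gamma in np.arange(0.0, 0.101, 0.001):
    if np.any(np.diff(Q_of_T(T, gamma)) < 0):
        print(f"bistability first appears near gamma = {gamma:.3f}")
        break

# For gamma = 0.05, the two turning points of Q(T) are the fold insolations
# (the green and red lines described above).
Q = Q_of_T(T, 0.05)
dQ = np.diff(Q)
turning = np.where(np.sign(dQ[:-1]) != np.sign(dQ[1:]))[0]
print("fold insolations for gamma = 0.05:", np.round(Q[turning + 1], 1), "W/m^2")
```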

Mathematically, this model illustrates catastrophe theory. As we slowly turn up \gamma, we get different curves showing how temperature is a function of insolation… until suddenly the curve isn’t the graph of a function anymore: it becomes infinitely steep at one point! After that, we get bistability:


\gamma = 0.00

\gamma = 0.01

\gamma = 0.02

\gamma = 0.03

\gamma = 0.04

\gamma = 0.05

This is called a cusp catastrophe, and you can visualize these curves as slices of a surface in 3d, which looks roughly like this picture:



from here:

• Wolfram Mathworld, Cusp catastrophe. (Includes Mathematica package.)

The cusp catastrophe is ‘structurally stable’, meaning that small perturbations don’t change its qualitative behavior. In other words, whenever you have a smooth graph of a function that gets steeper and steeper until it ‘overhangs’ and ceases to be the graph of a function, it looks like this cusp catastrophe. This statement is quite vague as I’ve just said it—but it’s made 100% precise in catastrophe theory.

Structural stability is a useful concept, because it focuses our attention on robust features of models: features that don’t go away if the model is slightly wrong, as it always is.

There are lots more things to say, but the most urgent question to answer is this: why is the intermediate state unstable when it exists? Why are the other two equilibria stable? We’ll talk about that next time!

