The Mathematics of Planet Earth

31 October, 2012

Here’s a public lecture I gave yesterday, via videoconferencing, at the 55th annual meeting of the South African Mathematical Society:

Abstract: The International Mathematical Union has declared 2013 to be the year of The Mathematics of Planet Earth. The global warming crisis is part of a bigger transformation in which humanity realizes that the Earth is a finite system and that our population, energy usage, and the like cannot continue to grow exponentially. If civilization survives this transformation, it will affect mathematics—and be affected by it—just as dramatically as the agricultural revolution or industrial revolution. We cannot know for sure what the effect will be, but we can already make some guesses.

To watch the talk, click on the video above. To see slides of the talk, click here. To see the source of any piece of information in these slides, just click on it!

My host Bruce Bartlett, an expert on topological quantum field theory, was crucial in planning the event. He was the one who edited the video, and put it on YouTube. He also made this cute poster:



I was planning to fly there using my superpowers to avoid taking a plane and burning a ton of carbon. But it was early in the morning and I was feeling a bit tired, so I used Skype.

By the way: if you’re interested in science, energy and the environment, check out the Azimuth Project, which is a collaboration to create a focal point for scientists and engineers interested in saving the planet. We’ve got some interesting projects going. If you join the Azimuth Forum, you can talk to us, learn more, and help out as much or as little as you want. The only hard part about joining the Azimuth Forum is reading the instructions well enough that you choose your whole real name, with spaces between words, as your username.


John Harte

27 October, 2012

Earlier this week I gave a talk on the Mathematics of Planet Earth at the University of Southern California, and someone there recommended that I look into John Harte’s work on maximum entropy methods in ecology. He works at U.C. Berkeley.

I checked out his website and found that his goals resemble mine: save the planet and understand its ecosystems. He’s a lot further along than I am, since he comes from a long background in ecology while I’ve just recently blundered in from mathematical physics. I can’t really say what I think of his work since I’m just learning about it. But I thought I should point out its existence.

This free book is something a lot of people would find interesting:

• John and Mary Ellen Harte, Cool the Earth, Save the Economy: Solving the Climate Crisis Is EASY, 2008.

EASY? Well, it’s an acronym. Here’s the basic idea of the US-based plan described in this book:

Any proposed energy policy should include these two components:

Technical/Behavioral: What resources and technologies are to be used to supply energy? On the demand side, what technologies and lifestyle changes are being proposed to consumers?

Incentives/Economic Policy: How are the desired supply and demand options to be encouraged or forced? Here the options include taxes, subsidies, regulations, permits, research and development, and education.

And a successful energy policy should satisfy the AAA criteria:

Availability. The climate crisis will rapidly become costly to society if we do not take action expeditiously. We need to adopt now those technologies that are currently available, provided they meet the following two additional criteria:

Affordability. Because of the central role of energy in our society, its cost to consumers should not increase significantly. In fact, a successful energy policy could ultimately save consumers money.

Acceptability. All energy strategies have environmental, land use, and health and safety implications; these must be acceptable to the public. Moreover, while some interest groups will undoubtedly oppose any particular energy policy, political acceptability at a broad scale is necessary.

Our strategy for preventing climate catastrophe and achieving energy independence includes:

Energy Efficient Technology at home and at the workplace. Huge reductions in home energy use can be achieved with available technologies, including more efficient appliances such as refrigerators, water heaters, and light bulbs. Home retrofits and new home design features such as “smart” window coatings, lighter-colored roofs where there are hot summers, better home insulation, and passive solar designs can also reduce energy use. Together, energy efficiency in home and industry can save the U.S. up to approximately half of the energy currently consumed in those sectors, and at no net cost—just by making different choices. Sounds good, doesn’t it?

Automobile Fuel Efficiency. Phase in higher Corporate Average Fuel Economy (CAFE) standards for automobiles, SUVs and light trucks by requiring vehicles to go 35 miles per gallon of gas (mpg) by 2015, 45 mpg by 2020, and 60 mpg by 2030. This would rapidly wipe out our dependence on foreign oil and cut emissions from the vehicle sector by two-thirds. A combination of plug-in hybrid, lighter car body materials, re-design and other innovations could readily achieve these standards. This sounds good, too!

Solar and Wind Energy. Rooftop photovoltaic panels and solar water heating units should be phased in over the next 20 years, with the goal of solar installation on 75% of U.S. homes and commercial buildings by 2030. (Not all roofs receive sufficient sunlight to make solar panels practical for them.) Large wind farms, solar photovoltaic stations, and solar thermal stations should also be phased in so that by 2030, all U.S. electricity demand will be supplied by existing hydroelectric, existing and possibly some new nuclear, and, most importantly, new solar and wind units. This will require investment in expansion of the grid to bring the new supply to the demand, and in research and development to improve overnight storage systems. Achieving this goal would reduce our dependence on coal to practically zero. More good news!

You are part of the answer. Voting wisely for leaders who promote the first three components is one of the most important individual actions one can make. Other actions help, too. Just as molecules make up mountains, individual actions taken collectively have huge impacts. Improved driving skills, automobile maintenance, reusing and recycling, walking and biking, wearing sweaters in winter and light clothing in summer, installing timers on thermostats and insulating houses, carpooling, paying attention to energy efficiency labels on appliances, and many other simple practices and behaviors hugely influence energy consumption. A major education campaign, both in schools for youngsters and by the media for everyone, should be mounted to promote these consumer practices.

No part of EASY can be left out; all parts are closely integrated. Some parts might create much larger changes—for example, more efficient home appliances and automobiles—but all parts are essential. If, for example, we do not achieve the decrease in electricity demand that can be brought about with the E of EASY, then it is extremely doubtful that we could meet our electricity needs with the S of EASY.

It is equally urgent that once we start implementing the plan, we aggressively export it to other major emitting nations. We can reduce our own emissions all we want, but the planet will continue to warm if we can’t convince other major global emitters to reduce their emissions substantially, too.

What EASY will achieve. If no actions are taken to reduce carbon dioxide emissions, in the year 2030 the U.S. will be emitting about 2.2 billion tons of carbon in the form of carbon dioxide. This will be an increase of 25% from today’s emission rate of about 1.75 billion tons per year of carbon. By following the EASY plan, the U.S. share in a global effort to solve the climate crisis (that is, prevent catastrophic warming) will result in U.S. emissions of only about 0.4 billion tons of carbon by 2030, which represents a little less than 25% of 2007 carbon dioxide emissions. Stated differently, the plan provides a way to eliminate 1.8 billion tons per year of carbon by that date.

We must act urgently: in the 14 months it took us to write this book, atmospheric CO2 levels rose by several billion tons of carbon, and more climatic consequences have been observed. Let’s assume that we conserve our forests and other natural carbon reservoirs at our current levels, as well as maintain our current nuclear and hydroelectric plants (or replace them with more solar and wind generators). Here’s what implementing EASY will achieve, as illustrated by Figure 3.1 on the next page.
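
As a quick arithmetical sanity check of the figures quoted above, here’s a minimal sketch in Python. The three numbers are simply the ones from the passage, so this only tests that they are mutually consistent:

```python
# Emission figures quoted from Harte and Harte's EASY plan, in billions of
# tons (gigatons) of carbon per year.
current   = 1.75   # rough present-day (circa 2007) U.S. emissions
bau_2030  = 2.2    # business-as-usual projection for 2030
easy_2030 = 0.4    # projected 2030 emissions under the EASY plan

print(f"business-as-usual growth by 2030: {100 * (bau_2030 - current) / current:.0f}%")   # about 25%, as claimed
print(f"EASY 2030 emissions vs. today:    {100 * easy_2030 / current:.0f}%")              # a bit under 25%, as claimed
print(f"carbon avoided per year in 2030:  {bau_2030 - easy_2030:.1f} billion tons")       # about 1.8, as claimed
```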

Please check out this book and help me figure out if the numbers add up! I could also use help understanding his research, for example:

• John Harte, Maximum Entropy and Ecology: A Theory of Abundance, Distribution, and Energetics, Oxford University Press, Oxford, 2011.

The book is not free but the first chapter is.

This paper looks really interesting too:

• J. Harte, T. Zillio, E. Conlisk and A. B. Smith, Maximum entropy and the state-variable approach to macroecology, Ecology 89 (2008), 2700–2711.

Again, it’s not freely available—tut tut. Ecologists should follow physicists and make their work free online; if you’re serious about saving the planet you should let everyone know what you’re doing! However, the abstract is visible to all, and of course I can use my academic superpowers to get ahold of the paper for myself:

Abstract: The biodiversity scaling metrics widely studied in macroecology include the species-area relationship (SAR), the scale-dependent species-abundance distribution (SAD), the distribution of masses or metabolic energies of individuals within and across species, the abundance-energy or abundance-mass relationship across species, and the species-level occupancy distributions across space. We propose a theoretical framework for predicting the scaling forms of these and other metrics based on the state-variable concept and an analytical method derived from information theory. In statistical physics, a method of inference based on information entropy results in a complete macro-scale description of classical thermodynamic systems in terms of the state variables volume, temperature, and number of molecules. In analogy, we take the state variables of an ecosystem to be its total area, the total number of species within any specified taxonomic group in that area, the total number of individuals across those species, and the summed metabolic energy rate for all those individuals. In terms solely of ratios of those state variables, and without invoking any specific ecological mechanisms, we show that realistic functional forms for the macroecological metrics listed above are inferred based on information entropy. The Fisher log series SAD emerges naturally from the theory. The SAR is predicted to have negative curvature on a log-log plot, but as the ratio of the number of species to the number of individuals decreases, the SAR becomes better and better approximated by a power law, with the predicted slope z in the range of 0.14-0.20. Using the 3/4 power mass-metabolism scaling relation to relate energy requirements and measured body sizes, the Damuth scaling rule relating mass and abundance is also predicted by the theory. We argue that the predicted forms of the macroecological metrics are in reasonable agreement with the patterns observed from plant census data across habitats and spatial scales. While this is encouraging, given the absence of adjustable fitting parameters in the theory, we further argue that even small discrepancies between data and predictions can help identify ecological mechanisms that influence macroecological patterns.


Mathematics of the Environment (Part 4)

22 October, 2012

We’ve been looking at some very simple models of the Earth’s climate. Pretty soon I want to show you one that illustrates the ice albedo effect. This effect says that when it’s colder, there’s more ice and snow, so the Earth gets lighter in color, so it reflects more sunlight and tends to get even colder. In other words, it’s a positive feedback mechanism: a reaction that strengthens the process that caused the reaction.

On the other hand, according to the Planck distribution a warmer Earth radiates more energy and therefore tends to cool down, while a cooler Earth radiates less. So there is always a negative feedback present in the Earth’s climate system. This is dubbed the Planck feedback, and it is what ultimately protects the Earth against getting arbitrarily hot or cold.

However, the ice albedo effect may be important for the ‘ice ages’ or more properly ‘glacial cycles’ that we’ve been having for the last few tens of millions of years… and also for much earlier, much colder Snowball Earth events. In reverse, melting ice now tends to make the Earth darker, and thus even warmer. So, this is an interesting topic for many reasons… including the math, which we’ll get to later.

Now, obviously the dinosaurs did not keep records of the temperature, so how we estimate temperatures on the ancient Earth is an important question, which deserves a long discussion—but not today! Today I’ll be fairly sketchy about that. I just want you to get a feel for the overall story, and some open questions.

The Earth’s temperature since the last glacial period

First, here’s a graph of Greenland temperatures over the last 18,000 years:

(As usual, click to enlarge and/or get more information.) This chart is based on ice cores, taken from:

• Richard B. Alley, The Two-Mile Time Machine: Ice Cores, Abrupt Climate Change, and our Future, Princeton U. Press, Princeton, 2002.

This is a good book for learning how people reconstruct the
history of temperatures in Greenland from looking at a two-mile-long ice core drilled out of the glaciers there.

As you can see, first Greenland was very cold, and then it warmed up at the end of the last ‘ice age’, or glacial period. But there are a lot of other things to see in this graph. For example, there was a severe cold spell between 12.9 and 11.5 thousand years ago: the Younger Dryas event.

I love that name! It comes from the tough little Arctic flower
Dryas octopetala, whose plentiful pollen in certain ice samples gave evidence that this time period was chilly. Was there an Older Dryas? Yes: before the Younger Dryas there was a warm spell called the Allerød, and before that a cold period called the Older Dryas.

The Younger Dryas lasted about 1400 years. Temperatures dropped dramatically in Europe: about 7 °C in only 20 years! In Greenland, it was 15 °C colder during the Younger Dryas than today. In England, the average annual temperature was -5 °C, so glaciers started forming. We can see evidence of this event from oxygen isotope records and many other things.

Why the sudden chill? One popular theory is that the melting of the ice sheet on North America lowered the salinity of North Atlantic waters. This in turn blocked a current called the
Atlantic meridional overturning circulation, or AMOC for short, which normally brings warm water up the coast of Europe. Proponents of this theory argue that this current is what makes London much warmer than, say, Winnipeg in Canada or Irkutsk in Russia. Turn it off and—wham!—you’ll get glaciers forming in England.

Anyway, whatever caused it, the Younger Dryas ended as suddenly as it began, with temperatures jumping 7 °C. After that, the Earth continued warming up until about 6 thousand years ago—the mid-Holocene thermal maximum, when it was about 1° or 2° Celsius warmer than today. Since then, it’s basically been cooling off—not counting various smaller variations, like the global warming we’re experiencing in this century.

However, these smaller variations are very interesting! From 6000 to 2500 years ago things cooled down, with the coolest
stretch occurring between 4000 and 2500 years ago: the Iron Age Cold Epoch.

Then things warmed up for a while, and then they cooled down
from 500 to 1000 AD. Yes, the so-called "Dark Ages" were also chilly!

After this came the Medieval Warm Period, which lasted from about 1000 to 1300 AD:

From 1450 AD to 1890 there was a period of cooling, often called the Little Ice Age. This killed off the Icelandic colonies in Greenland, as described in this gripping book:

• Jane Smiley, The Greenlanders, Ballantine Books, New York, 1996.

However, the term "Little Ice Age" exaggerates the importance of a small blip in the grand scheme of things. It was nowhere near as big as the Younger Dryas: temperatures may have dropped a measly 0.2° Celsius from the Medieval optimum, and it may have happened only in Europe—though this was a subject of debate when I last checked.

Since then, things have been warming up:

The subject has big political implications, and is thus subject to enormous controversy. But, I think it’s quite safe to say that we’ve been seeing a rapid temperature rise since 1900, with the Northern Hemisphere average temperature rising roughly 1 °C since then. Each of the last 11 years, from 2001 to 2011, was one of the 12 warmest years since 1901. (The other one was 1998.)

All these recent variations in the Earth’s climate are very much worth trying to understand. But now let’s back off to longer time periods! We don’t have many Earth-like planets whose climate we can study in detail—at least not yet, since they’re too far away. But we do have one planet, the Earth, that’s gone through many changes. The climate since the end of the last ice age is just a tiny sliver of a long and exciting story!

The Earth’s long-term climate history

Here’s a nice old chart showing estimates of the Earth’s average temperature in the last 150 years, the last 16,000 years, the last 150,000 years and the last million years:


Here “ka” or “kilo-annum” means a thousand years. These temperatures are estimated by various methods; I got this chart from:

• Barry Saltzman, Dynamical Paleoclimatology: Generalized Theory of Global Climate Change, Academic Press, New York, 2002, fig. 3-4.

As we keep zooming in towards the present we keep seeing more detail:

• Over the last million years there have been about ten glacial periods—though trying to count them is a bit like trying to count ‘very deep valleys’ in a hilly landscape!

• From 150 to 120 thousand years ago it warmed up rather rapidly. From 120 thousand years ago to 16 thousand years ago it cooled down—that was the last glacial period. Then it warmed up rather rapidly again.

• Over the last 10 thousand years temperatures have been unusually constant.

• Over the last 150 years it’s been warming up slightly.

If we go back further, say to 5 million years, we see that temperatures have been colder but also more erratic during this period:

This figure is based on this paper:

• L. E. Lisiecki and M. E. Raymo, A Pliocene-Pleistocene stack of 57 globally distributed benthic δ18O records, Paleoceanography 20 (2005), PA1003.

Lisiecki and Raymo combined measurements of oxygen isotopes in the shells of tiny sea creatures called foraminifera from 57 globally distributed deep sea sediment cores. But beware: they constructed this record by first applying a computer-aided process to align the data in each sediment core. Then the resulting stacked record was tuned to make the positions of peaks and valleys match the known Milankovitch cycles in the Earth’s orbit. The temperature scale was chosen to match Vostok ice core data. So, there are a lot of theoretical assumptions built into this graph.

Going back 65 million years, we see how unusual the current glacial cycles are:


Click to make this graph bigger; it’s from:

• Robert Rohde, 65 million years of climate change, at Global Warming Art.

This graph shows the Earth’s temperature since the extinction of the dinosaurs about 65 million years ago—the end of the Mesozoic and beginning of the Cenozoic. At first the Earth warmed up, reaching its warmest 50 million years ago: the "Eocene Optimum". The spike before that labelled "PETM" is a fascinating event called the Paleocene-Eocene Thermal Maximum. At the end of the Eocene the Earth cooled rapidly and the Antarctic acquired year-round ice. After a warming spell near the end of the Oligocene, further cooling and an increasingly jittery climate led ultimately to the current age of rapid glacial cycles.

Why is the Earth’s climate so jittery nowadays? That’s a fascinating puzzle, which I’d like to discuss in the weeks to come.

Why did the Earth suddenly cool at the end of the Eocene 34 million years ago? One theory relies on the fact that this is when Antarctica first became separated from Australia and South America. After the Tasmanian Gateway between Australia and Antarctica opened, the only thing that kept water from swirling endlessly around Antarctica, getting colder and colder, was the connection between this continent and South America. South America seems to have separated from Antarctica around the end of the Eocene.

In the early Eocene, Antarctica was fringed with a warm temperate to sub-tropical rainforest. But as the Eocene progressed it became colder, and by the start of the Oligocene it had deciduous forests and vast stretches of tundra. Eventually it became almost completely covered with ice.

Thanks to the ice albedo effect, an icy Antarctic tends to keep the Earth cooler. But is that the only or even the main explanation of the overall cooling trend over the last 30 million years? Scientists argue about this.

Going back further:

Here "Ma" or "mega-annum" means "million years". This chart was drawn from many sources; I got it from:

• Barry Saltzman, Dynamical Paleoclimatology: Generalized Theory of Global Climate Change, Academic Press, New York, 2002, fig. 1-3.

Among other things on this chart, you can sort of see hints of the Snowball Earth events that may have happened early in the Earth’s history. These are thought to have occurred during the Cryogenian period 850 to 635 million years ago, and also during the Huronian glaciation 2400 to 2100 million years ago. In both these events a large portion of the Earth was frozen—much more, it seems, than in the recent glacial periods! Ice albedo feedback plays a big role in theories of these events… though also, of course, there must be some explanation of why they ended.

As you can see, there are a lot of things a really universal climate model might seek to explain. We don’t necessarily need to understand the whole Earth’s history to model it well now, but thinking about other eras is a good way to check our understanding of the present-day Earth.


Mathematics of the Environment (Part 3)

13 October, 2012

This week I’ll release these notes before my seminar, so my students (and all of you, too) can read them ahead of time. The reason is that I’m pushing into topics I don’t understand as well as I’d like. So, my notes wrestle with some ideas in too much detail to cover in class—and I’m hoping some students will look at these notes ahead of time to prepare. Also, I’d appreciate your comments!

This week I’ll borrow material shamelessly from here:

• Seymour L. Hess, Introduction to Theoretical Meteorology, Henry Holt and Company, New York, 1959.

It’s an old book: for example, it talks about working out the area under a curve using a gadget called a planimeter, which is what people did before computers.

It also talks about how people measured the solar constant (roughly, the brightness of the Sun) before we could easily put satellites up above the Earth’s atmosphere! And it doesn’t mention global warming.

But despite or perhaps even because of these quaint features, it’s simple and clear. In case it’s not obvious yet, I’m teaching this quarter’s seminar in order to learn stuff. So, I’ll sometimes talk about old work… but if you catch me saying things that are seriously wrong (as opposed to merely primitive), please let me know.

The plan

Last time we considered a simple model Earth, a blackbody at uniform constant temperature absorbing sunlight and re-emitting the same amount of power in the form of blackbody radiation. We worked out that its temperature would be 6 °C, which is not bad. But then we took into account the fact that the Earth is not black. We got a temperature of -18 °C, which is too cold. The reason is that we haven’t yet equipped our model Earth with an atmosphere! So today let’s try that.

At this point things get a lot more complicated, even if we try a 1-dimensional model where the temperature, pressure and other features of the atmosphere only depend on altitude. So, I’ll only do what I can easily do. I’ll explain some basic laws governing radiation, and then sketch how people applied them to the Earth.

It’ll be good to start with a comment about what we did last time.

Kirchhoff’s law of radiation

When we admitted the Earth wasn’t black, we said that it absorbed only about 70% of the radiation hitting it… but we still modeled it as emitting radiation just like a blackbody! Isn’t there something fishy about this?

Well, no. The Earth is mainly absorbing sunlight at visible frequencies, and at these frequencies it only absorbs about 70% of the radiation that hits it. But it mainly emits infrared light, and at these frequencies it acts like it’s almost black. These frequencies are almost completely different from those where absorption occurs.

But still, this issue is worth thinking about.

After all, emission and absorption are flip sides of the same coin. There’s a deep principle in physics, called reciprocity, which says that how X affects Y is not a separate question from how Y affects X. In fact, if you know the answer to one of these questions, you can figure out the answer to the other!

The first place most people see this principle is Newton’s third law of classical mechanics, saying that if X exerts a force on Y, Y exerts an equal and opposite force on X.

For example: if I punched your nose, your nose punched my fist just as hard, so you have no right to complain.

This law is still often stated in its old-fashioned form:

For every action there is an equal and opposite reaction.

I found this confusing as a student, because ‘force’ was part of the formal terminology of classical mechanics, but not ‘action’—at least not as used in this sentence!—and certainly not ‘reaction’. But as a statement of the basic intuition behind reciprocity, it’s got a certain charm.

In engineering, the principle of reciprocity is sometimes stated like this:

Reciprocity in linear systems is the principle that the response R_{ab} measured at a location a when the system is excited at a location b is exactly equal to R_{ba}, which is the response at location b when that same excitation is applied at a. This applies for all frequencies of the excitation.

Again this is a bit confusing, at least if you’re a mathematician who would like to know exactly how a ‘response’ or an ‘excitation’ is defined. It’s also disappointing to see the principle stated in a way that limits it to linear systems. Nonetheless it’s tremendously inspiring. What’s really going on here?

I don’t claim to have gotten to the bottom of it. My hunch is that to a large extent it will come down to the fact that mixed partial derivatives commute. If we’ve got a smooth function f of a bunch of variables x_1, \dots, x_n, and we set

\displaystyle{ R_{ab} = \frac{\partial^2 f}{\partial x_a \partial x_b} }

then

R_{ab} = R_{ba}

However, I haven’t gotten around to showing that reciprocity boils down to this in all the examples yet. Yet another unification project to add to my list!
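
Just to make the commuting of mixed partials concrete, here’s a tiny symbolic check in Python, with an arbitrarily chosen smooth function standing in for f. It’s a minimal sketch and has nothing specifically to do with reciprocity:

```python
import sympy as sp

# Mixed partial derivatives of a smooth function commute, so the matrix
# R_ab = d^2 f / (dx_a dx_b) is automatically symmetric: R_ab = R_ba.
x1, x2, x3 = sp.symbols('x1 x2 x3')
f = sp.sin(x1 * x2) + sp.exp(x2 * x3**2) + x1**3 * x3   # an arbitrary smooth function

R = sp.hessian(f, (x1, x2, x3))          # the matrix of second partial derivatives
print(sp.simplify(R - R.T))              # prints the zero matrix
```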

Anyway: reciprocity has lots of interesting applications to electromagnetism. And that’s what we’re really talking about now. After all, light is electromagnetic radiation!

The simplest application is one we learn as children:

If I can see you, then you can see me.

or at least:

If light can go from X to Y in a static environment, it can also go from Y to X.

But we want something that sounds a bit different. Namely:

The tendency of a substance to absorb light at some frequency equals its tendency to emit light at that frequency.

This is too vague. We should make it precise, and in a minute I’ll try, but first let me motivate this idea with a thought experiment. Suppose we have a black rock and a white rock in a sealed mirrored container. Suppose they’re in thermal equilibrium at a very high temperature, so they’re glowing red-hot. So, there’s red light bouncing around the container. The black rock will absorb more of this light. But since they’re in thermal equilibrium, the black rock must also be emitting more of this light, or it would gain energy and get hotter than the white one. That would violate the zeroth law of thermodynamics, which implies that in thermal equilibrium, all the parts of a system must be at the same temperature.

More precisely, we have:

Kirchhoff’s Law of Thermal Radiation. For any body in thermal equilibrium, its emissivity equals its absorptivity.

Let me explain. Suppose we have a surface made of some homogeneous isotropic material in thermal equilibrium at temperature T. If it’s perfectly black, we saw last time that it emits light with a monochromatic energy flux given by the Planck distribution:

\displaystyle{ f_{\lambda}(T) = \frac{2 \pi hc^2}{\lambda^5} \frac{1}{ e^{\frac{hc}{\lambda k T}} - 1 } }

Here \lambda is the wavelength of light and the ‘monochromatic energy flux’ has units of power per area per wavelength.

But if our surface is not perfectly black, we have to multiply this by a fudge factor between 0 and 1 to get the right answer. This factor is called the emissivity of the substance. It can depend on the wavelength of the light quite a lot, and also on the surface’s temperature (since for example ice melts at high temperatures and gets darker). So, let’s call it e_\lambda(T).

We can also talk about the absorptivity of our surface, which is the fraction of light it absorbs. Again this depends on the wavelength of the light and the temperature of our surface. So, let’s call it a_\lambda(T).

Then Kirchhoff’s law of thermal radiation says

e_\lambda(T) = a_\lambda(T)

So, for each frequency the emissivity must equal the absorptivity… but it’s still possible for the Earth to have an average emissivity near 1 at the wavelengths of infrared light and near 0.7 at the wavelengths of visible light. So there’s no paradox.

Puzzle 1. Is this law named after the same guy who discovered Kirchhoff’s laws governing electrical circuits?

Schwarzschild’s equation

Now let’s talk about light shining through the Earth’s atmosphere. Or more generally, light shining through a medium. What happens? It can get absorbed. It can get scattered, bouncing off in different directions. Light can also get emitted, especially if the medium is hot. The air in our atmosphere isn’t hot enough to emit a lot of visible light, but it definitely emits infrared light and microwaves.

It sounds complicated, and it is, but there are things we can say about it. Let me tell you about Schwarzschild’s equation.

Light comes in different wavelengths. So, we can ask how much power per square meter this light carries per wavelength. We call this the monochromatic energy flux I_{\lambda}, since it depends on the wavelength \lambda. As mentioned last time, this has units of W/(m²·μm), where μm stands for micrometers, a unit of wavelength.

However, because light gets absorbed, scattered and emitted the monochromatic energy flux is really a function I_{\lambda}(s), where s is the distance through the medium. Here I’m imagining an essentially one-dimensional situation, like a beam of sunlight coming down through the air when the Sun is directly overhead. We can generalize this later.

Let’s figure out the basic equation describing how I_{\lambda}(s) changes as a function of s. This is called the equation of radiative transfer, or Schwarzschild’s equation. It won’t tell us how different gases absorb different amounts of light of different frequencies—for that we need to do hard calculations, or experiments. But we can feed the results of these calculations into Schwarzschild’s equation.

For starters, let’s assume that light only gets absorbed but not emitted or scattered. Later we’ll include emission, which is very important for what we’re doing: the Earth’s atmosphere is warm enough to emit significant amounts of infrared light (though not hot enough to emit much visible light). Scattering is also very important, but it can be treated as a combination of absorption and emission.

For absorption only, we have the Beer–Lambert law:

\displaystyle{  \frac{d I_\lambda(s)}{d s} = - a_\lambda(s) I_\lambda(s)  }

In other words, the amount of radiation that gets absorbed per distance is proportional to the amount of radiation. However, the constant of proportionality a_\lambda (s) can depend on the frequency and the details of our medium at the position s. I don’t know the standard name for this constant a_\lambda (s), so let’s call it the absorption rate.

Puzzle 2. Assuming the Beer–Lambert law, show that the intensity of light at two positions s_1 and s_2 is related by

\displaystyle{ I_\lambda(s_2) = e^{-\tau} \; I_\lambda(s_1) }

where the optical depth \tau of the intervening medium is defined by

\displaystyle{ \tau = \int_{s_1}^{s_2} a_\lambda(s) \, ds }

So, a layer of stuff has optical depth equal to 1 if light shining through it has its intensity reduced by a factor of 1/e.
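
Here’s a small numerical illustration of the Beer–Lambert law and of Puzzle 2, checking the exponential attenuation formula against a direct integration of the differential equation. It’s a minimal sketch: the absorption-rate profile a_\lambda(s) below is made up purely for illustration.

```python
import numpy as np
from scipy.integrate import quad, solve_ivp

def a(s):
    """A made-up absorption rate a_lambda(s), in 1/m, purely for illustration."""
    return 0.5 + 0.3 * np.sin(s)

s1, s2 = 0.0, 5.0
I1 = 100.0                 # intensity at s1, in W/(m^2 um)

# Integrate the Beer-Lambert law dI/ds = -a(s) I(s) from s1 to s2.
sol = solve_ivp(lambda s, I: -a(s) * I, (s1, s2), [I1], rtol=1e-10, atol=1e-12)
I2_numerical = sol.y[0, -1]

# Compare with I(s2) = exp(-tau) I(s1), where tau is the optical depth.
tau, _ = quad(a, s1, s2)
print(f"optical depth tau:           {tau:.4f}")
print(f"I(s2) by solving the ODE:    {I2_numerical:.4f}")
print(f"I(s2) from exp(-tau) I(s1):  {np.exp(-tau) * I1:.4f}")   # these should agree closely
```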

We can go a bit further if our medium is a rather thin gas like the air in our atmosphere. Then the absorption rate is given by

a_\lambda(s) = k_\lambda(s) \, \rho(s)

where \rho(s) is the density of the air at the position s and k_\lambda(s) is its absorption coefficient.

In other words, air absorbs light at a rate proportional to its density, but also depending on what it’s made of, which may vary with position. For example, both the density and the humidity of the atmosphere can depend on its altitude.

What about emission? Air doesn’t just absorb infrared light, it also emits significant amounts of it! As mentioned, a blackbody at temperature T emits light with a monochromatic energy flux given by the Planck distribution:

\displaystyle{ f_{\lambda}(T) = \frac{2 \pi hc^2}{\lambda^5} \frac{1}{ e^{\frac{hc}{\lambda k T}} - 1 } }

But a gas like air is far from a blackbody, so we have to multiply this by a fudge factor. Luckily, thanks to Kirchhoff’s law of radiation, this factor isn’t so fudgy: it’s just the absorption rate a_\lambda(s).

Here we are generalizing Kirchhoff’s law from a surface to a column of air, but that’s okay because we can treat a column as a stack of surfaces; letting these become very thin, we arrive at a differential formulation of the law that applies to absorption and emission rates instead of absorptivity and emissivity. (If you’re very sharp, you’ll remember that Kirchhoff’s law applies to thermal equilibrium, and wonder about that. Air in the atmosphere isn’t in perfect thermal equilibrium, but it’s close enough for what we’re doing here.)

So, when we take absorption and also emission into account, Beer’s law gets another term:

\displaystyle{  \frac{d I_\lambda(s)}{d s} = - a_\lambda(s) I_\lambda(s) + a_\lambda(s) f_\lambda(T(s)) }

where T is the temperature of our gas at the position s. In other words:

\displaystyle{  \frac{d I_\lambda(s)}{d s} =  a_\lambda(s) ( f_\lambda(T) - I_\lambda(s))}

This is Schwarzschild’s equation.
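
Here’s a minimal numerical sketch of Schwarzschild’s equation at a single wavelength. Everything about the column (the temperature profile, the absorption-rate profile and the chosen wavelength) is made up for illustration; the point is just to watch the intensity relax from the surface Planck value toward the Planck values of the colder air higher up.

```python
import numpy as np
from scipy.integrate import solve_ivp

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # Planck's constant, speed of light, Boltzmann's constant (SI)

def f_planck(lam, T):
    """Blackbody monochromatic energy flux, W/(m^2 m)."""
    return (2 * np.pi * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

lam = 15e-6        # wavelength: 15 micrometers, in a strongly absorbed infrared band

def T(s):
    """Hypothetical temperature profile: 288 K at the surface, 218 K at 10 km."""
    return 288.0 - 70.0 * s / 10e3

def a(s):
    """Hypothetical absorption rate (1/m), falling off with height like the density of air."""
    return 2e-3 * np.exp(-s / 8e3)

# Schwarzschild's equation: dI/ds = a(s) (f_planck(T(s)) - I), integrated upward from the surface.
sol = solve_ivp(lambda s, I: a(s) * (f_planck(lam, T(s)) - I),
                (0.0, 10e3), [f_planck(lam, T(0.0))], rtol=1e-8)

print(f"Planck value at the surface (288 K): {f_planck(lam, 288.0):.3e} W/(m^2 m)")
print(f"intensity emerging at 10 km:         {sol.y[0, -1]:.3e} W/(m^2 m)")
print(f"Planck value at 10 km (218 K):       {f_planck(lam, 218.0):.3e} W/(m^2 m)")
```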

Puzzle 3. Is this equation named after the same guy who discovered the Schwarzschild metric in general relativity, describing a spherically symmetric black hole?

Application to the atmosphere

In principle, we can use Schwarzschild’s equation to help work out how much sunlight of any frequency actually makes it through the atmosphere down to the Earth, and also how much infrared radiation makes it through the atmosphere out to space. But this is not a calculation I can do here today, because it’s very complicated.

If you look at measurements of what fraction of radiation at different frequencies makes it through the atmosphere, you’ll see why:


Everything here is a function of the wavelength, measured in micrometers. The smooth red curve is the Planck distribution for light coming from the Sun at a temperature of 5325 K. Most of it is visible light, with a wavelength between 0.4 and 0.7 micrometers. The jagged red region shows how much of this gets through—on a clear day, I assume—and you can see that most of it gets through. The smooth bluish curves are the Planck distributions for light coming from the Earth at various temperatures between 210 K and 310 K. Most of it is infrared light, and not much of it gets through.

This, in a nutshell, is what keeps the Earth warmer than the chilly -18 °C we got last time for an Earth with no atmosphere!

This is the greenhouse effect. As you can see, the absorption of infrared light is mainly due to water vapor, then carbon dioxide, and then other lesser greenhouse gases, mainly methane and nitrous oxide. Oxygen and ozone also play a minor role, but ozone is more important in blocking ultraviolet light. Rayleigh scattering—the scattering of light by small particles, including molecules and atoms—is also important at short wavelengths, because its strength is proportional to 1/\lambda^4. This is why the sky is blue!

Here the wavelengths are measured in nanometers; there are 1000 nanometers in a micrometer. Rayleigh scattering continues to become more important in the ultraviolet.
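
For example, since the scattering strength goes like 1/\lambda^4, blue light at roughly 450 nm is scattered about

\displaystyle{ \left(\frac{650}{450}\right)^4 \approx 4.3 }

times more strongly than red light at roughly 650 nm, which is why scattered skylight looks blue.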

But right now I want to talk about the infrared. As you can see, the all-important absorption of infrared radiation by water vapor and carbon dioxide is quite complicated. You need quantum mechanics to predict how this works from first principles. Tim van Beek gave a gentle introduction to some of the key ideas here:

• Tim van Beek, A quantum of warmth, Azimuth, 2 July 2011.

Someday it would be fun to get into the details. Not today, though!

You can see what’s going on a bit more clearly here:


The key fact is that infrared is almost completely absorbed for wavelengths between 5.5 and 7 micrometers, or over 14 micrometers. (A ‘micron’ is just an old name for a micrometer.)

The work of Simpson

The first person to give a reasonably successful explanation of how the power of radiation emitted by the Earth balances the power of the sunlight it absorbs was George Simpson. He did it in 1928:

• George C. Simpson, Further studies in terrestrial radiation, Mem. Roy. Meteor. Soc. 3 (1928), 1–26.

One year earlier, he had tried and failed to understand this problem using a ‘gray atmosphere’ model where the fraction of light that gets through was independent of its wavelength. If you’ve been paying attention, I think you can see why that didn’t work.

In 1928, since he didn’t have a computer, he made a simple model that treated emission of infrared radiation as follows.

He treated the atmosphere as made of layers of varying thickness, each layer containing 0.03 grams of water vapor per square centimeter. The Earth’s surface radiates infrared almost as a black body. Part of the power is absorbed by the first layer above the surface, while some makes it through. The first layer then re-radiates at the same wavelengths at a rate determined by its temperature. Half of this goes downward, while half goes up. Of the part going upward, some is absorbed by the next layer… and so on, up to the top layer. He took this top layer to end at the stratosphere, since the atmosphere is much drier in the stratosphere.

He did this all in a way that depends on the wavelength, but using a simplified model of how each of these layers absorbs infrared light. He assumed it was:

• completely opaque from 5.5 to 7 micrometers (due to water vapor),

• partly transparent from 7 to 8.5 micrometers (interpolating between opaque and transparent),

• completely transparent from 8.5 to 11 micrometers,

• partly transparent from 11 to 14 micrometers (interpolating between transparent and opaque),

• completely opaque above 14 micrometers (due to carbon dioxide and water vapor).

He got this result, at the latitude 50° on a clear day:


The upper smooth curve is the Planck distribution for a temperature of 280 K, corresponding to the ground. The lower smooth curve is the Planck distribution at 218 K, corresponding to the stratosphere. The shaded region is his calculation of the monochromatic flux emitted into space by the Earth. As you can see, it matches the Planck distribution for the stratosphere where the lower atmosphere is completely opaque in his model—between 5.5 and 7 micrometers, and over 14 micrometers. It matches the Planck distribution for the ground where the lower atmosphere is completely transparent. Elsewhere, it interpolates between the two.

The area of this shaded region—calculated with a planimeter, perhaps?—is the total flux emitted into space.
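
Just for fun, here’s a rough numerical reconstruction of that shaded region and its area. This is only a sketch under my own reading of the five bands listed above: I have treated wavelengths below 5.5 micrometers as transparent and used linear interpolation in the partly transparent bands, so don’t mistake it for Simpson’s actual calculation, which involved explicit layers and clouds.

```python
import numpy as np
from scipy.integrate import quad

h, c, k = 6.626e-34, 2.998e8, 1.381e-23   # SI values of Planck's constant, c, and Boltzmann's constant
sigma = 5.67e-8                           # Stefan-Boltzmann constant, W/(m^2 K^4)
T_ground, T_strat = 280.0, 218.0          # Simpson's ground and stratosphere temperatures

def f_planck(lam_um, T):
    """Blackbody monochromatic energy flux in W/(m^2 um); wavelength in micrometers."""
    lam = lam_um * 1e-6
    return 1e-6 * (2 * np.pi * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * T))

def outgoing(lam_um):
    """Flux escaping to space at each wavelength, following the five bands above."""
    ground, strat = f_planck(lam_um, T_ground), f_planck(lam_um, T_strat)
    if lam_um < 5.5:
        return ground                     # my assumption: transparent below 5.5 micrometers
    elif lam_um < 7.0:
        return strat                      # completely opaque: we see the top of the moist atmosphere
    elif lam_um < 8.5:
        w = (lam_um - 7.0) / 1.5          # interpolate from opaque to transparent
        return (1 - w) * strat + w * ground
    elif lam_um < 11.0:
        return ground                     # completely transparent: radiation from the ground escapes
    elif lam_um < 14.0:
        w = (lam_um - 11.0) / 3.0         # interpolate from transparent to opaque
        return (1 - w) * ground + w * strat
    else:
        return strat                      # completely opaque again

# The area under this curve is the total flux emitted into space.
# Simpson used a planimeter; we use numerical quadrature.
total, _ = quad(outgoing, 1.0, 200.0, points=[5.5, 7.0, 8.5, 11.0, 14.0], limit=200)
print(f"outgoing flux in this sketch:  {total:6.1f} W/m^2")
print(f"blackbody at 280 K, for scale: {sigma * T_ground**4:6.1f} W/m^2")
print(f"blackbody at 218 K, for scale: {sigma * T_strat**4:6.1f} W/m^2")
```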

This is just part of the story: he also took clouds into account, and he did different calculations at different latitudes. He got a reasonably good balance between the incoming and outgoing power. In short, he showed that an Earth with its observed temperatures is roughly compatible with his model of how the Earth absorbs and emits radiation. Note that this is just another way to tackle the problem of predicting the temperature given a model.

Also note that Simpson didn’t quite use the Schwarzschild equation. But I guess that in some sense he discretized it—right?

And so on

This was just the beginning of a series of more and more sophisticated models. I’m too tired to go on right now.

You’ll note one big thing we’ve omitted: any sort of calculation of how the pressure, temperature and humidity of the air varies with altitude! To the extent we talked about those at all, we treated them as inputs. But for a full-fledged one-dimensional model of the Earth’s atmosphere, we’d want to derive them from some principles. There are, after all, some important puzzles:

Puzzle 4. If hot air rises, why does the atmosphere generally get colder as you go upward, at least until you reach the stratosphere?

Puzzle 5. Why is there a tropopause? In other words, why is there a fairly sudden transition 10 kilometers up between the troposphere, where the air is moist, cooler the higher you go, and turbulent, and the stratosphere, where the air is drier, warmer the higher you go, and not turbulent?

There’s a limit to how much we can understand these puzzles using a 1-dimensional model, but we should at least try to make a model of a thin column of air with pressure, temperature and humidity varying as a function of altitude, with sunlight streaming downward and infrared radiation generally going up. If we can’t do that, we’ll never understand more complicated things, like the actual atmosphere.


Mathematics of the Environment (Part 2)

11 October, 2012

Here are some notes for the second session of my seminar. They are shamelessly borrowed from these sources:

• Tim van Beek, Putting the Earth In a Box, Azimuth, 19 June 2011.

Climate model, Azimuth Library.

Climate models

Though it’s not my central concern in this class, we should talk a little about climate models.

There are many levels of sophistication when it comes to climate models. It is wise to start with simple, not very realistic models before ascending to complicated, supposedly more realistic ones. This is true in every branch of math or physics: working with simple models gives you insights that are crucial for correctly handling more complicated models. You shouldn’t fly a fighter jet if you haven’t tried something simpler yet, like a bicycle: you’ll probably crash and burn.

As I mentioned last time, models in biology, ecology and climate science pose new challenges compared to models of the simpler systems that physicists like best. As Chris Lee emphasizes, biology inherently deals with ‘high data’ systems where the relevant information can rarely be captured in a few variables, or even a few field equations.

(Field theories involve infinitely many variables, but somehow the ones physicists like best allow us to make a small finite number of measurements and extract a prediction from them! It would be nice to understand this more formally. In quantum field theory, the ‘nice’ field theories are called ‘renormalizable’, but a similar issue shows up classically, as we’ll see in a second.)

The climate system is in part a system that feels like ‘physics’: the flow of air in the atmosphere and water in the ocean. But some of the equations here, for example the Navier–Stokes equations, are already ‘nasty’ by the standards of mathematical physics, since the existence of solutions over long periods of time has not been proved. This is related to ‘turbulence’, a process where information at one length scale can significantly affect information at another dramatically different length scale, making precise predictions difficult.

Climate prediction is, we hope and believe, somewhat insulated from the challenges of weather prediction: we can hope to know the average temperature of the Earth within a degree or two in 5 years even though we don’t know whether it will rain in Manhattan on October 8, 2017. But this hope is something that needs to be studied, not something we can take for granted.

On top of this, the climate is, quite crucially, a biological system. Plant and animal life really affects the climate, as well as being affected by it. So, for example, a really detailed climate model may have a portion specially devoted to the behavior of plankton in the Mediterranean. This means that climate models will never be as ‘neat and clean’ as physicists and mathematicians tend to want—at least, not if these models are trying to be truly realistic. And as I suggested last time, this general type of challenge—the challenge posed by biosystems too complex to precisely model—may ultimately push mathematics in very new directions.

I call this green mathematics, without claiming I know what it will be like. The term is mainly an incitement to think big. I wrote a little about it here.

However, being a bit of an old-fashioned mathematician myself, I’ll start by talking about some very simple climate models, gradually leading up to some interesting puzzles about the ‘ice ages’ or, more properly, ‘glacial cycles’ that have been pestering the Earth for the last 20 million years or so. First, though, let’s take a quick look at the hierarchy of different climate models.

Different kinds of climate models

Zero-dimensional models are like theories of classical mechanics instead of classical field theory. In other words, they only deal with globally averaged quantities, like the average temperature of the Earth, or perhaps regionally averaged quantities, like the average temperature of each ocean and each continent. This sounds silly, but it’s a great place to start. It amounts to dealing with finitely many variables depending on time:

(x_1(t), \dots x_n(t))

We might assume these obey a differential equation, which we can always make first-order by introducing extra variables:

\displaystyle{ \frac{d x_i}{d t} = f_i(t, x_1(t), \dots, x_n(t))  }

This kind of model is studied quite generally in the subject of dynamical systems theory.

In particular, energy balance models try to predict the average surface temperature of the Earth depending on the energy flow. Energy comes in from the Sun and is radiated to outer space by the Earth. What happens in between is modeled by averaged feedback equations.

The Earth has various approximately conserved quantities like the total amount of carbon, or oxygen, or nitrogen—radioactive decay creates and destroys these elements, but it’s pretty negligible in climate physics. So, these things move around from one form to another. We can imagine a model where some of our variables x_i(t) are the amounts of carbon in the air, or in the soil, or in the ocean—different ‘boxes’, abstractly speaking. It will flow from one box to another in a way that depends on various other variables in our model. This idea gives a class of models called box models.

Here’s one described by Nathan Urban in “week304” of This Week’s Finds:

I’m interested in box models because they’re a simple example of ‘networked systems’: we’ve got boxes hooked up by wires, or pipes, and we can imagine a big complicated model formed by gluing together smaller models, attaching the wires from one to the wires of another. We can use category theory to formalize this. In category theory we’d call these smaller models ‘morphisms’, and the process of gluing them together is called ‘composing’ them. I’ll talk about this a lot more someday.
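
Here’s a minimal sketch of a box model written as a first-order differential equation of the kind above: three boxes for carbon, with flows between them proportional to the contents of the source box. All the numbers are made up purely for illustration; this is not an attempt at a realistic carbon cycle.

```python
import numpy as np
from scipy.integrate import solve_ivp

boxes = ['atmosphere', 'surface ocean', 'land biosphere']
x0 = np.array([800.0, 900.0, 2000.0])    # hypothetical initial carbon contents, in gigatons of carbon

# K[i, j] = rate constant (1/yr) for the flow from box j to box i; hypothetical values.
K = np.array([
    [0.00, 0.10, 0.04],
    [0.12, 0.00, 0.00],
    [0.08, 0.00, 0.00],
])

def flow(t, x):
    """dx_i/dt = (carbon flowing into box i) - (carbon flowing out of box i)."""
    return K @ x - K.sum(axis=0) * x

sol = solve_ivp(flow, (0.0, 200.0), x0)

for name, amount in zip(boxes, sol.y[:, -1]):
    print(f"{name:15s} {amount:8.1f} GtC")
print(f"{'total':15s} {sol.y[:, -1].sum():8.1f} GtC")
```

Because every unit of carbon leaving one box lands in another, the total is conserved automatically, which is exactly the sort of structure we’d want preserved when gluing smaller box models together into bigger ones.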

One-dimensional models treat temperature and perhaps other quantities as a function of one spatial coordinate (in addition to time): for example, the altitude. This lets us include one dimensional processes of heat transport in the model, like radiation and (a very simplified model of) convection.

Two-dimensional models treat temperature and other quantities as a function of two spatial coordinates (and time): for example, altitude and latitude. Alternatively, we could treat the atmosphere as a thin layer and think of temperature at some fixed altitude as a function of latitude and longitude!

Three-dimensional models treat temperature and other quantities as a function of all three spatial coordinates. At this point we can, if we like, use the full-fledged Navier–Stokes equations to describe the motion of air in the atmosphere and water in the ocean. Needless to say, these models can become very complex and computation-intensive, depending on how many effects we want to take into account and at what resolution we wish to model the atmosphere and ocean.

General circulation models or GCMs try to model the circulation of the atmosphere and/or ocean.

Atmospheric GCMs or AGCMs model the atmosphere and typically contain a land-surface model, while imposing some boundary conditions describing sea surface temperatures. Oceanic GCMs or OGCMs model the ocean (with fluxes from the atmosphere imposed) and may or may not contain a sea ice model. Coupled atmosphere–ocean GCMs or AOGCMs do both atmosphere and ocean. These are the basis for detailed predictions of future climate, such as those discussed by the Intergovernmental Panel on Climate Change, or IPCC.

• Backing down a bit, we can consider Earth models of intermediate complexity or EMICs. These might have a 3-dimensional atmosphere and a 2-dimensional ‘slab ocean’, or a 3d ocean and an energy-moisture balance atmosphere.

• Alternatively, we can consider regional circulation models or RCMs. These are limited-area models that can be run at higher resolution than the GCMs and are thus able to better represent fine-grained phenomena, including processes resulting from finer-scale topographic and land-surface features. Typically the regional atmospheric model is run while receiving lateral boundary condition inputs from a relatively-coarse resolution atmospheric analysis model or from the output of a GCM. As Michael Knap pointed out in class, there’s again something from network theory going on here: we are ‘gluing in’ the RCM into a ‘hole’ cut out of a GCM.

Modern GCMs as used in the 2007 IPCC report tended to run around 100-kilometer resolution. Individual clouds can only start to be resolved at about 10 kilometers or below. One way to deal with this is to take the output of higher resolution regional climate models and use it to adjust parameters, etcetera, in GCMs.

The hierarchy of climate models

The climate scientist Isaac Held has a great article about the hierarchy of climate models:

• Isaac Held, The gap between simulation and understanding in climate modeling, Bulletin of the American Meteorological Society (November 2005), 1609–1614.

In it, he writes:

The importance of such a hierarchy for climate modeling and studies of atmospheric and oceanic dynamics has often been emphasized. See, for example, Schneider and Dickinson (1974), and, especially, Hoskins (1983). But, despite notable exceptions in a few subfields, climate theory has not, in my opinion, been very successful at hierarchy construction. I do not mean to imply that important work has not been performed, of course, but only that the gap between comprehensive climate models and more idealized models has not been successfully closed.

Consider, by analogy, another field that must deal with exceedingly complex systems—molecular biology. How is it that biologists have made such dramatic and steady progress in sorting out the human genome and the interactions of the thousands of proteins of which we are constructed? Without doubt, one key has been that nature has provided us with a hierarchy of biological systems of increasing complexity that are amenable to experimental manipulation, ranging from bacteria to fruit fly to mouse to man. Furthermore, the nature of evolution assures us that much of what we learn from simpler organisms is directly relevant to deciphering the workings of their more complex relatives. What good fortune for biologists to be presented with precisely the kind of hierarchy needed to understand a complex system! Imagine how much progress would have been made if they were limited to studying man alone.

Unfortunately, Nature has not provided us with simpler climate systems that form such a beautiful hierarchy. Planetary atmospheres provide insights into the range of behaviors that are possible, but the known planetary atmospheres are few, and each has its own idiosyncrasies. Their study has connected to terrestrial climate theory on occasion, but the influence has not been systematic. Laboratory simulations of rotating and/or convecting fluids remain valuable and underutilized, but they cannot address our most complex problems. We are left with the necessity of constructing our own hierarchies of climate models.

Because nature has provided the biological hierarchy, it is much easier to focus the attention of biologists on a few representatives of the key evolutionary steps toward greater complexity. And, such a focus is central to success. If every molecular biologist had simply studied his or her own favorite bacterium or insect, rather than focusing so intensively on E. coli or Drosophila melanogaster, it is safe to assume that progress would have been far less rapid.

It is emblematic of our problem that studying the biological hierarchy is experimental science, while constructing and studying climate hierarchies is theoretical science. A biologist need not convince her colleagues that the model organism she is advocating for intensive study is well designed or well posed, but only that it fills an important niche in the hierarchy of complexity and that it is convenient for study. Climate theorists are faced with the difficult task of both constructing a hierarchy of models and somehow focusing the attention of the community on a few of these models so that our efforts accumulate efficiently. Even if one believes that one has defined the E. coli of climate models, it is difficult to energize (and fund) a significant number of researchers to take this model seriously and devote years to its study.

And yet, despite the extra burden of trying to create a consensus as to what the appropriate climate model hierarchies are, the construction of such hierarchies must, I believe, be a central goal of climate theory in the twenty-first century. There are no alternatives if we want to understand the climate system and our
comprehensive climate models. Our understanding will be embedded within these hierarchies.

It is possible that mathematicians, with a lot of training from climate scientists, have the sort of patience and delight in ‘study for study’s sake’ to study this hierarchy of models. Here’s one that Held calls ‘the fruit fly of climate models’:

For more, see:

• Isaac Held, The fruit fly of climate models.

The very simplest model

The very simplest model is a zero-dimensional energy balance model. In this model we treat the Earth as having just one degree of freedom—its temperature—and we treat it as a blackbody in equilibrium with the radiation coming from the Sun.

A black body is an object that perfectly absorbs and therefore also perfectly emits all electromagnetic radiation at all frequencies. Real bodies don’t have this property; instead, they absorb radiation at certain frequencies better than others, and some not at all. But there are materials that do come rather close to a black body. Usually one adds another assumption to the characterization of an ideal black body: namely, that the radiation is independent of the direction.

When the black body has a certain temperature T, it will emit electromagnetic radiation: it sends out a certain amount of energy per second from every square meter of surface area. We will call this the energy flux and denote it by f. The SI unit for f is W/m²: that is, watts per square meter. Here the watt is a unit of power: energy per unit time.

Electromagnetic radiation comes in different wavelengths. So, we can ask how much energy flux our black body emits per unit change in wavelength. This depends on the wavelength. We will call it the monochromatic energy flux f_{\lambda}. The SI unit for f_{\lambda} is W/(m²·μm), where μm stands for micrometer: a millionth of a meter, which is a unit of wavelength. We call f_\lambda the ‘monochromatic’ energy flux because it gives a number for each fixed wavelength \lambda. When we integrate the monochromatic energy flux over all wavelengths, we get the energy flux f.

Max Planck was able to calculate f_{\lambda} for a blackbody at temperature T, but only by inventing a bit of quantum mechanics. His result is called the Planck distribution. Written as an energy flux per unit wavelength, it says:

\displaystyle{ f_{\lambda}(T) = \frac{2 \pi hc^2}{\lambda^5} \frac{1}{ e^{\frac{hc}{\lambda k T}} - 1 } }

where h is Planck’s constant, c is the speed of light, and k is Boltzmann’s constant. Deriving this would be tons of fun, but also a huge digression from the point of this class.

You can integrate f_\lambda over all wavelengths \lambda to get the total energy flux—that is, the total power per square meter emitted by a blackbody. The answer is surprisingly simple: if the total energy flux is defined by

\displaystyle{f = \int_0^\infty f_{\lambda}(T) \, d \lambda }

then in fact we can do the integral and get

f = \sigma \; T^4

for some constant \sigma. This fact is called the Stefan–Boltzmann law, and \sigma is called the Stefan-Boltzmann constant:

\displaystyle{ \sigma=\frac{2\pi^5 k^4}{15c^2h^3} \approx 5.67 \times 10^{-8}\, \frac{\mathrm{W}}{\mathrm{m}^2 \mathrm{K}^4} }

Using this formula, we can assign to every energy flux f a black body temperature T, which is the temperature that an ideal black body would need to have to emit f.
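Just to check, here’s a little Python sketch of my own (not from any of the sources above): it computes \sigma from the fundamental constants and numerically integrates the Planck distribution over wavelength, confirming that the integral really does equal \sigma T^4.

    import numpy as np

    # Physical constants in SI units (rounded CODATA values).
    h = 6.62607015e-34   # Planck's constant, J s
    c = 2.99792458e8     # speed of light, m/s
    k = 1.380649e-23     # Boltzmann's constant, J/K

    def planck_flux(lam, T):
        """Monochromatic energy flux of a blackbody, in W per m^2 per metre of wavelength."""
        x = h * c / (lam * k * T)
        if x > 700:          # exp(x) would overflow, and the contribution here is negligible anyway
            return 0.0
        return (2 * np.pi * h * c**2 / lam**5) / np.expm1(x)

    # Stefan-Boltzmann constant from the formula above.
    sigma = 2 * np.pi**5 * k**4 / (15 * c**2 * h**3)
    print(sigma)                       # about 5.67e-8 W/(m^2 K^4)

    # Integrate f_lambda from 0.1 micron to 1 cm, using a trapezoid rule on a log-spaced grid.
    T = 288.0
    lam = np.logspace(-7, -2, 4000)
    flux = np.array([planck_flux(l, T) for l in lam])
    integral = np.sum(0.5 * (flux[1:] + flux[:-1]) * np.diff(lam))

    print(integral)                    # about 390 W/m^2
    print(sigma * T**4)                # agrees with the numerical integral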

Let’s use this to calculate the temperature of the Earth in this simple model! A planet like Earth gets energy from the Sun and loses energy by radiating to space. Since the Earth sits in empty space, these two processes are the only relevant ones that describe the energy flow.

The sunshine near Earth carries an energy flux of about 1370 watts per square meter. If the temperature of the Earth is constant, as much energy is coming in as going out. So, we might try to balance the incoming energy flux with the outgoing flux of a blackbody at temperature T:

\displaystyle{ 1370 \, \textrm{W}/\textrm{m}^2 = \sigma T^4 }

and then solve for T:

\displaystyle{ T = \left(\frac{1370 \textrm{W}/\textrm{m}^2}{\sigma}\right)^{1/4} }

We’re making a big mistake here. Do you see what it is? But let’s go ahead and see what we get. As mentioned, the Stefan–Boltzmann constant has a value of

\displaystyle{ \sigma \approx 5.67 \times 10^{-8} \, \frac{\mathrm{W}}{\mathrm{m}^2 \mathrm{K}^4}  }

so we get

\displaystyle{ T = \left(\frac{1370}{5.67 \times 10^{-8}} \right)^{1/4} \mathrm{K} }  \approx (2.4 \cdot 10^{10})^{1/4} \mathrm{K} \approx 394 \mathrm{K}

This is much too hot! Remember, this temperature is in kelvin, so we need to subtract 273 to get Celsius. Doing so, we get a temperature of 121 °C. This is above the boiling point of water!

Do you see what we did wrong? We neglected a phenomenon known as night. The Earth emits infrared radiation in all directions, but it only absorbs sunlight on the daytime side. Our calculation would be correct if the Earth were a flat disk of perfectly black stuff facing the Sun and perfectly insulated on the back so that it could only emit infrared radiation over the same area that absorbs sunlight! But in fact emission takes place over a larger area than absorption. This makes the Earth cooler.

To get the right answer, we need to take into account the fact that the Earth is round. But just for fun, let’s see how well a flat Earth theory does. A few climate skeptics may even believe this theory. Suppose the Earth were a flat disk of radius r, made of black stuff facing the Sun but not insulated on back. Then it would absorb power equal to

1370 \cdot \pi r^2

since the area of the disk is \pi r^2, but it would emit power equal to

\sigma T^4 \cdot 2 \pi r^2

since it emits from both the front and back. Setting these equal, we now get

\displaystyle{ \frac{1370}{2} \textrm{W}/\textrm{m}^2 = \sigma T^4 }

or

\displaystyle{ T = \left(\frac{1370 \textrm{W}/\textrm{m}^2}{2 \sigma}\right)^{1/4} }

This reduces the temperature by a factor of 2^{-1/4} \approx 0.84 from our previous estimate. So now the temperature works out to be less:

0.84 \cdot 394 \mathrm{K} \approx 331 \mathrm{K}

But this is still too hot! It’s 58 °C, or 136 °F for you Americans out there who don’t have a good intuition for Celsius.

So, a flat black Earth facing the Sun would be a very hot Earth.

But now let’s stop goofing around and do the calculation with a round Earth. Now it absorbs a beam of sunlight with area equal to its cross-section, a circle of area \pi r^2. But it emits infrared over its whole area of 4 \pi r^2: four times as much. So now we get

\displaystyle{ T = \left(\frac{1370 \textrm{W}/\textrm{m}^2}{4 \sigma}\right)^{1/4} }

so the temperature is reduced by a further factor of 2^{-1/4}. We get

0.84 \cdot 331 \mathrm{K} \approx 279 \mathrm{K}

That’s 6 °C. Not bad for a crude approximation! Amusingly, it’s crucial that the area of a sphere is 4 times the area of a circle of the same radius. The question of whether there is some deeper reason for this simple relation was posed as a geometry puzzle here on Azimuth.

I hope my clowning around hasn’t distracted you from the main point. On average our simplified blackbody Earth absorbs 1370/4 = 342.5 watts of solar power per square meter. So, that’s how much infrared radiation it has to emit. If you can imagine how much heat a 60-watt bulb puts out when it’s surrounded by black paper, we’re saying our simplified Earth emits about 6 times that heat per square meter.
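If you’d like to check the arithmetic, here is a minimal Python sketch of my own, using just the numbers quoted above (the function name is my own invention):

    # Effective blackbody temperature for a given absorbed flux.
    sigma = 5.67e-8      # Stefan-Boltzmann constant, W/(m^2 K^4)
    S = 1370.0           # solar energy flux near the Earth, W/m^2

    def blackbody_temperature(absorbed_flux):
        """Temperature (in kelvin) at which a blackbody emits the given flux."""
        return (absorbed_flux / sigma) ** 0.25

    print(blackbody_temperature(S))        # insulated flat disk facing the Sun: about 394 K
    print(blackbody_temperature(S / 2))    # flat disk radiating from both sides: about 331 K
    print(blackbody_temperature(S / 4))    # sphere:                              about 279 K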

The second simplest climate model

The next step is to take into account the ‘albedo’ of the Earth. The albedo is the fraction of incoming radiation that is instantly reflected without being absorbed. The albedo of a surface depends on the material of the surface, and in particular on the wavelength of the radiation. But as a first approximation for the average albedo of the Earth we can take:

\mathrm{albedo}_{\mathrm{Earth}} = 0.3

This means that 30% of the radiation is instantly reflected and only 70% contributes to heating the Earth. So, instead of getting heated by an average of 342.5 watts per square meter of sunlight, let’s assume it’s heated by

0.7 \times 342.5 \approx 240

watts per square meter. Now we get a temperature of

\displaystyle{ T = \left(\frac{240}{5.67 \times 10^{-8}} \right)^{1/4} \mathrm{K} }  \approx (4.2 \cdot 10^9)^{1/4} \mathrm{K} \approx 255 \mathrm{K}

This is -18 °C. The average temperature of the Earth is actually estimated to be considerably warmer: about +15 °C. That it is above freezing should not be a surprise: after all, 70% of the planet is covered by liquid water, so the average temperature is most probably not below the freezing point of water.
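Continuing the little Python sketch from the previous section, we can include the albedo and compare with the observed average of about 288 K:

    # Include the albedo, then compare with the observed mean surface temperature (~288 K).
    albedo = 0.3
    T_model = blackbody_temperature((1 - albedo) * S / 4)

    print(T_model)             # about 255 K, i.e. about -18 degrees Celsius
    print(288.0 - T_model)     # roughly 33 K colder than observed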

So, our new ‘improved’ calculation gives a worse agreement with reality. The actual Earth is roughly 33 kelvin warmer than our model Earth! What’s wrong?

The main explanation for the discrepancy seems to be: our model Earth doesn’t have an atmosphere yet! Thanks in part to greenhouse gases like water vapor and carbon dioxide, sunlight at visible frequencies can get into the atmosphere more easily than infrared radiation can get out. This warms the Earth. This, in a nutshell, is why dumping a lot of extra carbon dioxide into the air can change our climate. But of course we’ll need to turn to more detailed models, or experimental data, to see how strong this effect is.

Besides the greenhouse effect, there are many other things our ultra-simplified model leaves out: everything associated with the atmosphere and oceans, such as weather, clouds, and the altitude dependence of the temperature of the atmosphere… and also the way the albedo of the Earth depends on location, and even on temperature and other factors. There is much, much more to say about all this… but not today!


Mathematics of the Environment (Part 1)

4 October, 2012

 

I’m running a graduate math seminar called Mathematics of the Environment here at U. C. Riverside, and here are the slides for the first class:

Mathematics of the Environment, 2 October 2012.

I said a lot of things that aren’t on the slides, so they might be a tad cryptic. I began by showing some graphs everyone should know by heart:

• human population and the history of civilization,

• the history of carbon emissions,

• atmospheric CO2 concentration for the last century or so,

• global average temperatures for the last century or so,

• the melting of the Arctic ice, and

• the longer historical perspective of CO2 concentrations.

You can click on these graphs for more details—there are lots of links in the slides.

Then I posed the question of what mathematicians can do about this. I suggested looking at the birth of written mathematics during the agricultural revolution as a good comparison, since we’re at the start of an equally big revolution now. Have you thought about how Babylonian mathematics was intertwined with the agricultural revolution?

Then, I raised the idea of ‘ecotechnology’ as a goal to strive for, assuming our current civilization doesn’t collapse to the point where it becomes pointless to even try. As an example, I described the perfect machine for reversing global warming—and showed a nice picture of it.

Finally, I began sketching how ecotechnology is related to the mathematics of networks, though this will be a much longer story for later on.

Part of the idea here is that mathematics takes time to have an effect, so mathematicians might as well look ahead a little bit, while politicians, economists, business people and engineers should be doing things that have a big effect soon.


Melting Arctic Sea Ice

5 September, 2012

I’ve been quiet about global warming lately because I’ve decided that people won’t pay much attention until I present some ideas for what to do. But I don’t want you to think I’ve simply stopped paying attention. As you’ve probably heard, the area of the Arctic sea ice hit a new record low this year:

This graph was made using data from the National Snow and Ice Data Center. Lots of other data confirm this; you can see it here.

Here is how the minimum area of Arctic sea ice has been dropping, based on data from Cryosphere Today:

The volume is dropping even faster, as estimated by PIOMAS, the Pan-Arctic Ice Ocean Modeling and Assimilation System:

The rapid decline has taken a lot of experts by surprise. Neven Acropolis, who keeps a hawk’s eye on these matters at the Arctic Sea Ice Blog, writes:

Basically, I’m at a loss for words, and not just because my jaw has dropped and won’t go back up as long as I’m looking at the graphs. I’m also at a loss—and I have already said it a couple of times this year—because I just don’t know what to expect any longer. I had a very steep learning curve in the past two years. We all did. But it feels as if everything I’ve learned has become obsolete. As if you’ve learned to play the guitar a bit in two years’ time, and then all of a sudden have to play a xylophone. Will trend lines go even lower, or will the remaining ice pack with its edges so close to the North Pole start to freeze up?

Basically I have nothing to offer right now except short posts when yet another of those record dominoes has fallen. Hopefully I can come up with some useful post-melting season analysis when I return from a two-week holiday.

I’m at a loss at this loss. The 2007 record that stunned everyone gets shattered without 2007 weather conditions. The ice is thin. PIOMAS was/is right.

The big question, of course, is how this should affect what we do. David Spratt put it this way:

The 2007 IPCC report suggested that by 2100 Arctic sea-ice would likely exist in summer, though at a much reduced extent. Because many of the Arctic’s climate system tipping points are significantly related to the loss of sea-ice, the implication was that the world had some reasonable time to eliminate greenhouse emissions, and still be on time to “save the Arctic”. The 2007 IPCC-framed goal of reducing emissions 25 to 40 per cent by 2020 and 80 per cent by 2050 would “do the job” for the Arctic.

But the physical world didn’t agree. By 2006, scientist Richard Alley had observed that the Arctic was already melting “100 years ahead of schedule”. But the Arctic is not melting 100 years ahead of schedule: the climate system appears to be more sensitive to perturbations than anticipated, with observations showing many climate change impacts happening more quickly and at lower temperatures than projected, of which the Arctic is a prime example.

Politically, we are 100 years behind where we need to be on emissions reductions.

Or carbon sequestration. Or geoengineering. Or preparing to live in a hotter world.

