Australian Carbon Tax

13 July, 2011

Australians burn a lot of carbon. Per person, they’re right up there with Americans:

The map here is based on data from 2000. In 2008, Australians spewed out 18.9 tonnes of CO2 per person in the process of burning fossil fuels and making cement. Americans spewed 17.5 tonnes per person. The world average was just 4.4.

Australians also mine a lot of coal. It’s their biggest export! On top of that, coal exports have more than doubled in recent years:

Last Sunday, however, Prime Minister Julia Gillard announced a tax on carbon!

In this scheme, the 500 biggest polluters in Australia will be taxed at AU $23 per tonne of carbon emissions starting in July 2012. The price will increase 2.5% each year until 2015, and then a carbon trading scheme will be introduced. The hope is that by 2020, Australian carbon emissions will drop 5% below 2000 levels.
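To get a feel for the numbers, here is the fixed-price phase as a few lines of Python. This is only an illustration of the announced figures (AU$23 rising 2.5% per year); it ignores details of the actual legislation, such as inflation indexing:

```python
# A minimal sketch of the fixed-price phase: AU$23 per tonne from July
# 2012, rising 2.5% per year until the trading scheme starts in 2015.
price = 23.00                      # AU$ per tonne of CO2
for year in (2012, 2013, 2014):
    print(f"July {year}: AU${price:.2f} per tonne")
    price *= 1.025                 # the announced 2.5% annual increase
# July 2012: AU$23.00 per tonne
# July 2013: AU$23.57 per tonne   (23.575, rounded)
# July 2014: AU$24.16 per tonne
```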

Of course, the further we go into the future, the less sure we can be of anything. What if Gillard’s party gets voted out of power? There’s already considerable dissatisfaction with Gillard’s plan, in part because she had earlier said:

There will be no carbon tax under the Government I lead.

but mainly, of course, because taxes are unpopular and the coal lobby is very strong in Australia. There’s been a lot of talk about how the carbon tax will hurt the economy.

These objections are to be expected, and thus not terribly interesting (even if they’re valid). However, some more interesting objections are posed here:

• Annabel Crabb, Australia’s diabolical carbon pricing scheme, ABC News, 13 July 2011.

First, it seems that Prime Minister Gillard favors continuing to sell lots of coal to other countries. As she recently said:

Tony Abbott was predicting Armageddon for the coal mining industry. But the future of coal mining in Australia is bright.

But coal mining can’t really have a ‘bright future’ in a decarbonized world unless we capture and store the carbon dioxide emitted by coal-burning plants.

Second, in the planned carbon trading scheme beginning in 2015, Australian companies will be allowed to account for half of their emissions reductions by simply buying permits from overseas. I’m not sure this is bad: it could simply be efficient. However, Annabel Crabb points out that it has some seemingly paradoxical effects. She quotes a Treasury document saying:

In a world where other countries pursue more ambitious abatement targets, the carbon price will be higher, and this increases the cost in terms of domestic production and income foregone.

Is this really bad? I’m not sure. I hope, however, that the Australian carbon tax goes forward to the point where we can see its effects instead of merely speculating about them.


Food Price Spike

9 July, 2011

Back in 2007, food prices surged. Millions went hungry, and there were riots from Egypt to Haiti and Cameroon to Bangladesh. In 2008 they dropped, but starting at the beginning of 2009 they’ve been going up, and now they’re staying high:

This graph shows the “food price index”, which is a weighted average of food commodity prices. The exact formula seems to be a carefully guarded secret… well, at least they don’t make it easy to find!
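Since the exact formula isn’t easy to find, here is only a generic Python sketch of how a weighted commodity price index works. The commodity groups, weights and prices below are invented for illustration; they are not the FAO’s actual numbers:

```python
# A generic weighted price index: a weighted average of price relatives,
# scaled so the base period equals 100. All figures below are made up.

base_prices = {"cereals": 100.0, "oils": 80.0, "dairy": 120.0,
               "meat": 150.0, "sugar": 60.0}   # prices in some base period
weights     = {"cereals": 0.27, "oils": 0.17, "dairy": 0.17,
               "meat": 0.35, "sugar": 0.04}    # trade-share weights, sum to 1

def index(current_prices):
    """Weighted average of current/base price ratios, base period = 100."""
    return 100.0 * sum(w * current_prices[c] / base_prices[c]
                       for c, w in weights.items())

print(index(base_prices))                       # 100.0 by construction
print(index({"cereals": 180.0, "oils": 150.0, "dairy": 140.0,
             "meat": 170.0, "sugar": 130.0}))   # a spike: about 149
```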

Here’s a more long-term picture:

taken from here:

• United Nations Environment Programme, The Environmental Food Crisis, 2008.

What’s been happening since 2000? You can blame the rising world population, but that’s not something that suddenly hit us at the turn of the century. People point to many causes, including:

1) A growing middle class in India and China, eating more—including more meat, which pushes up grain prices. For example, according to the Economist, the average Chinese consumer ate 20 kilograms of meat in 1985, but 50 kilos of the stuff in 2007. If you consider the population of China, that’s a lot more meat!

2) The use of grain and other foodstuffs for biofuels, heavily subsidized by some governments like the US, has increased competition for grain and, perhaps worse, created a tighter link between oil prices and food prices. If the price of oil goes up, gasoline costs more, so people can charge more for ethanol, so grain prices go up!

One small piece of good news: the US federal budget crisis is making more people consider cutting grain ethanol subsidies. But it hasn’t happened yet: don’t underestimate the power of the corn lobby.

3) More weather disasters, like the heat wave that caused Russia to halt grain exports last year, or the drought in Brazil that’s pushing up sugar prices now, or the drought in India that set sugar prices soaring in the summer of 2009.

People like to argue about whether these weather disasters are really increasing, and whether they’re really due to climate change. It remains hard to prove. Some people, like Al Gore, have already made up their minds. On 20 June 2011, he said:

Look what’s happened in the last twelve months:

– The twenty million people displaced in Pakistan, a nuclear-armed country, one of the biggest flood events in their history.

– An area of Australia the size of France and Germany combined, flooded.

– The nation of Colombia, they’ve had five to six times the normal rainfall. Two million people are still homeless. Most of the country was underwater for a portion of last year.

– My hometown, my home city of Nashville, a thousand-year flood. Thousands of my neighbors lost their homes and businesses. They had no flood insurance because there had never been a flood in areas that were flooded.

– Drought. Russia, biggest drought in their history, biggest fires in their history, over 50,000 people killed, and then all of their wheat and other food crops, along with that of Ukraine and Kazakhstan, taken off the world markets, leading to an all-time record spike in food prices.

– Texas, right now. The drought raised from “extreme” to “exceptional.” 254 counties in Texas, 252 of them were filed in the major disaster.

– Today, biggest fire in the history of Arizona, spreading to New Mexico.

– Today, biggest flood in the history of the Mississippi River valley underway right now.

At what point is there a moment where we say, ‘Oh, we ought to do something about this?’

A growing world middle class, the rising use of food for fuel, the effects of climate change… when it comes to rising food prices, there are lots of other causes one can point to.

But the big question is whether it’s a matter of many small causes that coincidentally happen to be boosting food prices now, or something more systematic.

In other words, is our world civilization hitting the limits of what the planet can support?


Operads and the Tree of Life

6 July, 2011

This week Lisa and I are visiting her 90-year-old mother in Montréal. Friday I’m giving a talk at the Université du Québec à Montréal. The main person I know there is André Joyal, an expert on category theory and algebraic topology. So, I decided to give a talk explaining how some ideas from these supposedly ‘pure’ branches of math show up in biology.

My talk is called ‘Operads and the Tree of Life’.

Trees

In biology, trees are very important:

So are trees of a more abstract sort: phylogenetic trees describe the history of evolution. The biggest phylogenetic tree is the ‘Tree of Life’. It includes all the organisms on our planet, alive now or anytime in the past. Here’s a rough sketch of this enormous tree:

Its structure is far from fully understood. So, biologists typically study smaller phylogenetic trees, like this tree of dog-like species made by Elaine Ostrander:

Abstracting still further, we can also think of a tree as a kind of purely mathematical structure, like this:

Trees are important in combinatorics, but also in algebraic topology. The reason is that in algebraic topology we get pushed into studying spaces equipped with enormous numbers of operations. We’d get hopelessly lost without a good way of drawing these operations. We can draw an operation f with n inputs and one output as a little tree like this:

We can also draw the various ways of composing these operations. Composing them is just like building a big tree out of little trees!

An operation with n inputs and one output is called an n-ary operation. In the late 1960s, various mathematicians including Boardman and Vogt realized that spaces with tons of n-ary operations were crucial to algebraic topology. To handle all these operations, Peter May invented the concept of an operad. This formalizes the way operations can be drawn as trees. By now operads are a standard tool, not just in topology, but also in algebraic geometry, string theory and many other subjects.

But how do operads show up in biology?

When attending a talk by Susan Holmes, I noticed that her work on phylogenetic trees was closely related to a certain operad. And when I discussed her work here, James Griffin pointed out that this operad can be built using a slight variant of a famous construction due to Boardman and Vogt: their so-called ‘W construction’!

I liked the idea that trees and operads in topology might be related to phylogenetic trees. And thinking further, I found that the relation was real, and far from a coincidence. In fact, phylogenetic trees can be seen as operations in a certain operad… and this operad is closely related to the way computational biologists model DNA evolution as a branching sort of random walk.

That’s what I’d like to explain now.

I’ll be a bit sketchy, because I’d rather get across the basic ideas than the technicalities. I could even be wrong about some fine points, and I’d be glad to talk about those in the comments. But the overall picture is solid.

Phylogenetic trees

Let’s ponder the mathematical structure of a phylogenetic tree. First, it’s a tree: a connected graph with no circuits. Second, it’s a rooted tree, meaning it has one vertex which is designated the root. And third, the leaves are labelled.

I should explain the third part! For any rooted tree, the vertices with just one edge coming out of them are called leaves. If the root is drawn at the bottom of the tree, the leaves are usually drawn at the top. In biology, the leaves are labelled by names of species: these labels matter. In mathematics, we can label the leaves by numbers 1, 2, \dots, n, where n is the number of leaves.

Summarizing all this, we can say a phylogenetic tree should at least be a leaf-labelled rooted tree.

That’s not all there is to it. But first, a comment. When you see a phylogenetic tree drawn by a biologist, it’ll pretty much always be a binary tree, meaning that as we move up any edge, away from the root, it either branches into two new edges or ends in a leaf. The reason is that while species often split into two as they evolve, it is less likely for a species to split into three or more new species all at once.

So, the phylogenetic trees we see in biology are usually leaf-labeled rooted binary trees. However, we often want to guess such a tree from some data. In this game, trees that aren’t binary become important too!

Why? Well, here another fact comes into play. In a phylogenetic tree, typically each edge can be labeled with a number saying how much evolution occurred along that edge. But as this number goes to zero, we get a tree that’s not binary anymore. So, we think of non-binary trees as conceptually useful ‘borderline cases’ between binary trees.

So, it’s good to think about phylogenetic trees that aren’t necessarily binary… and have edges labelled by numbers. Let’s make this into a formal definition:

Definition A phylogenetic tree is a leaf-labeled rooted tree where each edge not touching a leaf is labeled by a positive real number called its length.

By the way, I’m not claiming that biologists actually use this definition.

I’ll write \mathrm{Phyl}_n for the set of phylogenetic trees with n leaves. This becomes a topological space in a fairly obvious way, where we can trace out a continuous path by continuously varying the edge lengths of a tree. But when some edge lengths approach zero, our graph converges to one where the vertices at the ends of these edges ‘fuse into one’, leaving us with a graph with fewer vertices.

Here’s an example for you to check your understanding of what I just said. With the topology I’m talking about, there’s a continuous path in \mathrm{Phyl}_4 that looks like this:

These trees are upside-down, but don’t worry about that. You can imagine this path as a process where biologists slowly change their minds about a phylogenetic tree as new data dribbles in. As they change their minds, the tree changes shape in a continuous way.

For more on the space of phylogenetic trees, see:

• Louis Billera, Susan Holmes and Karen Vogtmann, Geometry of the space of phylogenetic trees, Advances in Applied Mathematics 27 (2001), 733-767.

Operads

How are phylogenetic trees related to operads? I have three things to say about this. First, they are the operations of an operad:

Theorem 1. There is an operad called the phylogenetic operad, or \mathrm{Phyl}, whose space of n-ary operations is \mathrm{Phyl}_n.

If you don’t know what an operad is, I’d better tell you now. They come in different flavors, and technically I’ll be using ‘symmetric topological operads’. But instead of giving the full definition, which you can find on the nLab, I think it’s better if I sketch some of the key points.

For starters, an operad O consists of a topological space O_n for each n = 0,1,2,3, \dots. The points in O_n are called the n-ary operations of O. You can visualize an n-ary operation f \in O_n as a black box with n input wires and one output wire:

Of course, this also looks like a tree.

We can permute the inputs of an n-ary operation and get a new n-ary operation, so we have an action of the permutation group S_n on O_n. You can visualize this as permuting the input wires:

More importantly, we can compose operations! If we have an n-ary operation f, and n more operations, say g_1, \dots, g_n, we can compose f with all the rest and get an operation called

f \circ (g_1, \dots, g_n)

Here’s how you should imagine it:

Composition and permutation must obey some laws, all of which are completely plausible if you draw them as pictures. For example, the associative law makes a composite of composites like this well-defined:

Now, these pictures look a lot like trees. So it shouldn’t come as a shock that phylogenetic trees are the operations of some operad \mathrm{Phyl}. But let’s sketch why it’s true.

First, we can permute the ‘inputs’—meaning the labels on the leaves—of any phylogenetic tree and get a new phylogenetic tree. This is obvious.

Second, and more importantly, we can ‘compose’ phylogenetic trees. How do we do this? Simple: we glue the roots of a bunch of phylogenetic trees to the leaves of another and get a new one!

More precisely, suppose we have a phylogenetic tree with n leaves, say f. And suppose we have n more, say g_1, \dots, g_n. Then we can glue the roots of g_1, \dots, g_n to the leaves of f to get a new phylogenetic tree called

f \circ (g_1, \dots, g_n)

Third and finally, all the operad laws hold. Since these laws all look obvious when you draw them using pictures, this is really easy to show.
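For the concretely minded, here is a small Python sketch of this grafting operation. The data structure is ad hoc (nothing standard), and to keep the code short every edge carries a length—a simplification of the definition above, where only edges not touching leaves are labelled:

```python
# A sketch of composition in the phylogenetic operad: trees as a tiny
# recursive structure, composition as grafting roots onto leaves.
from dataclasses import dataclass, field
import copy

@dataclass
class Tree:
    length: float = 0.0    # length of the edge from this vertex to its parent
    label: int = 0         # leaf label (1, ..., n); 0 for internal vertices
    children: list = field(default_factory=list)

def leaves(t):
    """The leaves of t, in left-to-right order."""
    return [t] if not t.children else [l for c in t.children for l in leaves(c)]

def compose(f, gs):
    """f o (g_1, ..., g_n): graft the root of gs[i-1] onto leaf i of f,
    then renumber all leaves consecutively from left to right."""
    f, gs = copy.deepcopy(f), [copy.deepcopy(g) for g in gs]
    def graft(t):
        if not t.children:             # a leaf of f: fuse it with g's root,
            g = gs[t.label - 1]        # which inherits the leaf's edge
            g.length += t.length
            return g
        t.children = [graft(c) for c in t.children]
        return t
    result = graft(f)
    for i, leaf in enumerate(leaves(result), start=1):
        leaf.label = i
    return result

# f and g1 each have 2 leaves, g2 has 1, so f o (g1, g2) has 3 leaves.
f  = Tree(children=[Tree(1.0, 1), Tree(2.0, 2)])
g1 = Tree(children=[Tree(0.5, 1), Tree(0.5, 2)])
g2 = Tree(0.0, 1)
print(compose(f, [g1, g2]))
```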

If you’ve been paying careful attention, you should be worrying about something now. In operad theory, we think of an operation f \in O_n as having n inputs and one output. For example, this guy has 3 inputs and one output:

But in biology, we think of a phylogenetic tree as having one input and n outputs. We start with one species (or other grouping of organisms) at the bottom of the tree, let it evolve and branch, and wind up with n of them!

In other words, operad theorists read a tree from top to bottom, while biologists read it from bottom to top.

Luckily, this isn’t a serious problem. Mathematicians often use a formal trick where they take an operation with n inputs and one output and think of it as having one input and n outputs. They use the prefix ‘co-’ to indicate this formal trick.

So, we could say that phylogenetic trees stand for ‘co-operations’ rather than operations. Soon this trick will come in handy. But not just yet!

The W construction

Boardman and Vogt had an important construction for getting new operads from old, called the ‘W construction’. Roughly speaking, if you start with an operad O, this gives a new operad \mathrm{W}(O) whose operations are leaf-labelled rooted trees where:

1) all vertices except leaves are labelled by operations of O, and a vertex with n input edges must be labelled by an n-ary operation of O,

and

2) all edges except those touching the leaves are labelled by numbers in (0,1].

If you think about it, the operations of \mathrm{W}(O) are strikingly similar to phylogenetic trees, except that:

1) in phylogenetic trees the vertices don’t seem to be labelled by operations of an operad,

and

2) we use arbitrary positive numbers to label edges, instead of numbers in (0,1].

The second point is a real difference, but it doesn’t matter much: if Boardman and Vogt had used arbitrary positive numbers instead of numbers in (0,1] to label edges in the W construction, it would have worked just as well. Technically, they’d get a ‘weakly equivalent’ operad.

The first point is not a real difference. You see, there’s an operad called \mathrm{Comm} which has exactly one operation of each arity. So, labelling vertices by operations of \mathrm{Comm} is a completely trivial process.

As a result, we conclude:

Theorem 2. The phylogenetic operad is weakly equivalent to \mathrm{W}(\mathrm{Comm}).

If you’re not an expert on operads (experts are sometimes called ‘operadchiks’), you may be wondering what \mathrm{Comm} stands for. The point is that operads have ‘algebras’, where the abstract operations of the operad are realized as actual operations on some topological space. And the algebras of \mathrm{Comm} are precisely commutative topological monoids: that is, topological spaces equipped with a commutative associative product!

Branching Markov processes and evolution

By now, if you haven’t fallen asleep, you should be brimming with questions, such as:

1) What does it mean that phylogenetic trees are the operations of some operad \mathrm{Phyl}? Why should we care?

2) What does it mean to apply the W construction to the operad \mathrm{Comm}? What’s the significance of doing this?

3) What does it mean that \mathrm{Phyl} is weakly equivalent to \mathrm{W}(\mathrm{Comm})? You can see the definition of weak equivalence here, but it’s pretty technical, so it needs some explanation.

The answers to questions 2) and 3) take us quickly into fairly deep waters of category theory and algebraic topology—deep, that is, if you’ve never tried to navigate them. However, these waters are well-trawled by numerous experts, and I have little to say about questions 2) and 3) that they don’t already know. So given how long this talk already is, I’ll instead try to answer question 1). This is where some ideas from biology come into play.

I’ll summarize my answer in a theorem, and then explain what the theorem means:

Theorem 3. Given any continuous-time Markov process on a finite set X, the vector space V whose basis is X naturally becomes a coalgebra of the phylogenetic operad.

Impressive, eh? But this theorem is really just saying that biologists are already secretly using the phylogenetic operad.

Biologists who try to infer phylogenetic trees from present-day genetic data often use simple models where the genotype of each species follows a ‘random walk’. Also, species branch in two at various times. These models are called Markov models.

The simplest Markov model for DNA evolution is the Jukes–Cantor model. Consider a genome of fixed length: that is, one or more pieces of DNA having a total of N base pairs. For example, this tiny genome has N = 4 base pairs, just enough to illustrate the 4 possible choices, which are called A, T, C and G:

Since there are 4 possible choices for each base pair, there are 4^N possible genotypes with N base pairs. In the human genome, N is about 3 \times 10^9. So, there are about

4^{3 \times 10^9} \approx 10^{1,800,000,000}

genotypes of this length. That’s a lot!
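If you don’t believe that exponent, it takes one line to check, since the base-10 logarithm of 4^{3 \times 10^9} is 3 \times 10^9 \log_{10} 4:

```python
import math
print(3e9 * math.log10(4))   # about 1.8e9, so 4^(3 x 10^9) ~ 10^(1,800,000,000)
```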

As time passes, the Jukes–Cantor model says that the human genome randomly walks through this enormous set of possibilities, with each base pair having the same rate of randomly flipping to any other base pair.

Biologists have studied many ways to make this model more realistic, but in a Markov model of DNA evolution we’ll typically have some finite set X of possible genotypes, together with some random walk on this set. But the term ‘random walk’ is a bit imprecise: what I really mean is a ‘continuous-time Markov process’. So let me define that.

Fix a finite set X. For each time t \in [0,\infty) and pair of points i, j in X, a continuous-time Markov process gives a number T_{ij}(t) \in [0,1]: the probability that the random walk, starting at the point j at time zero, will reach the point i at time t. We can think of these numbers as forming an X \times X square matrix T(t) at each time t. We demand that four properties hold:

1) T(t) depends continuously on t.

2) For all s, t we have T(s) T(t) = T(s + t).

3) T(0) is the identity matrix.

4) For all j and t we have:

\sum_{i \in X} T_{i j}(t) = 1.

All these properties make a lot of sense if you think a bit, though condition 2) says that the random walk does not change character with the passage of time, which would be false given external events like, say, ice ages. As far as math jargon goes, conditions 1)-3) say that T is a continuous one-parameter semigroup, while condition 4) together with the fact that T_{ij}(t) \in [0,1] says that at each time, T(t) is a stochastic matrix.
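Here is a small numerical illustration of these conditions for the Jukes–Cantor model, in Python with numpy and scipy. The substitution rate alpha below is a made-up number; any positive rate works:

```python
# The Jukes-Cantor rate matrix Q on the four bases X = {A, T, C, G},
# and T(t) = exp(tQ), which satisfies conditions 1)-4) automatically.
import numpy as np
from scipy.linalg import expm

alpha = 0.3                                     # substitution rate (made up)
Q = alpha * (np.ones((4, 4)) - 4 * np.eye(4))   # q_ij = alpha (i != j), q_ii = -3 alpha

def T(t):
    """The stochastic matrix T(t) = exp(tQ) of the Markov process."""
    return expm(t * Q)

s, t = 0.7, 1.3
print(np.allclose(T(s) @ T(t), T(s + t)))       # condition 2: T(s)T(t) = T(s+t)
print(np.allclose(T(0), np.eye(4)))             # condition 3: T(0) = identity
print(np.allclose(T(t).sum(axis=0), 1.0))       # condition 4: columns sum to 1

# Starting surely at 'A', the distribution T(t) psi spreads out toward
# the uniform distribution as t grows:
psi = np.array([1.0, 0.0, 0.0, 0.0])
print(T(0.1) @ psi, T(10.0) @ psi)
```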

Let V be the vector space whose basis is X. To avoid getting confused, let’s write e_i for the basis vector corresponding to i \in X. Any probability distribution on X gives a vector in V. Why? Because it gives a probability \psi_i for each i \in X, and we can think of these as the components of a vector \psi \in V.

Similarly, for any time t \in [0,\infty), we can think of the matrix T(t) as a linear operator

T(t) : V \to V

So, if we start with some probability distribution \psi of genotypes, and let them evolve for a time t according to our continuous-time Markov process, by the end the probability distribution will be T(t) \psi.

But species do more than evolve this way: they also branch! A phylogenetic tree describes a way for species to evolve and branch.

So, you might hope that any phylogenetic tree f \in \mathrm{Phyl}_n gives a ‘co-operation’ that takes one probability distribution \psi \in V as input and returns n probability distributions as output.

That’s true. But these n probability distributions will be correlated, so it’s better to think of them as a single probability distribution on the set X^n. This can be seen as a vector in the vector space V^{\otimes n}, the tensor product of n copies of V.

So, any phylogenetic tree f \in \mathrm{Phyl}_n gives a linear operator from V to V^{\otimes n}. We’ll call it

T(f) : V \to V^{\otimes n}

because we’ll build it starting from the Markov process T.

Here’s a sketch of how we build it—I’ll give a more precise account in the next and final section. A phylogenetic tree is made of a bunch of vertices and edges. So, I just need to give you an operator for each vertex and each edge, and you can compose them and tensor them to get the operator T(f):

1) For each vertex with one edge coming in and n coming out:

we need an operator

V \to V^{\otimes n}

that describes what happens when one species branches into n species. This operator takes the probability distribution we put in and makes n identical and perfectly correlated copies. To define this operator, we use the fact that the vector space V has a basis e_i labelled by the genotypes i \in X. Here’s how the operator is defined:

e_i \mapsto e_i \otimes \cdots \otimes e_i \in V^{\otimes n}

2) For each edge of length t, we need an operator that describes a random walk of length t. This operator is provided by our continuous-time Markov process: it’s

T(t) : V \to V

And that’s it! By combining these two kinds of operators, one for ‘branching’ and one for ‘random walking’, we get a systematic way to take any phylogenetic tree f \in \mathrm{Phyl}_n and get an operator

T(f) : V \to V^{\otimes n}
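Here is a minimal numpy sketch of this recipe, for one made-up tree with 3 leaves and the Jukes–Cantor process from before. Since V = \mathbb{R}^4, the composite operator maps \mathbb{R}^4 to \mathbb{R}^{64}:

```python
# Building T(f) for a small tree: evolve along the root edge for time 1.0,
# branch in two; the right branch evolves for time 0.5 and branches again.
import numpy as np
from scipy.linalg import expm

alpha = 0.3
Q = alpha * (np.ones((4, 4)) - 4 * np.eye(4))
T = lambda t: expm(t * Q)                  # the 'random walking' operator

def duplicate(n, d=4):
    """The 'branching' operator V -> V^(x)n, sending e_i to e_i (x) ... (x) e_i."""
    D = np.zeros((d ** n, d))
    for i in range(d):
        index = sum(i * d ** k for k in range(n))   # position of e_i (x) ... (x) e_i
        D[index, i] = 1.0
    return D

I = np.eye(4)

# Reading the tree from root to leaves:
#   T(f) = (I (x) D2) (I (x) T(0.5)) D2 T(1.0),   where D2 = duplicate(2)
Tf = np.kron(I, duplicate(2)) @ np.kron(I, T(0.5)) @ duplicate(2) @ T(1.0)

psi = np.array([1.0, 0.0, 0.0, 0.0])       # start surely in genotype 'A'
out = Tf @ psi                             # a joint distribution on X^3
print(out.shape, out.sum())                # (64,) and total probability 1
```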

In fact, these operators T(f) obey just the right axioms to make V into what’s called a ‘coalgebra’ of the phylogenetic operad. But to see this—that is, to prove Theorem 3—it helps to use a bit more operad technology.

The proof

I haven’t even defined coalgebras of operads yet. And I don’t think I’ll bother. Why not? Well, while the proof of Theorem 3 is fundamentally trivial, it’s sufficiently sophisticated that only operadchiks would enjoy it without a lengthy warmup. And you’re probably getting tired by now.

So, to most of you reading this: bye! It was nice seeing you! And I hope you sensed the real point of this talk:

Some of the beautiful structures used in algebraic topology are also lurking in biology. These structures may or may not be useful in biology… but we’ll never know if we don’t notice them and say what they are! So, it makes sense for mathematicians to spend some time looking for them.

Now, let me sketch a proof of Theorem 3. It follows from a more general theorem:

Theorem 4. Suppose V is an object in some symmetric monoidal topological category C. Suppose that V is equipped with an action of the additive monoid [0,\infty). Suppose also that V is a cocommutative coalgebra. Then V naturally becomes a coalgebra of the phylogenetic operad.

How does this imply Theorem 3? In Theorem 3, C is the category of finite-dimensional real vector spaces. The action of [0,\infty) on V is the continuous-time Markov process. And V becomes a cocommutative coalgebra because it’s a vector space with a distinguished basis, namely the finite set X. This makes V into a cocommutative coalgebra in the usual way, where the comultiplication:

\Delta: V \to V \otimes V

‘duplicates’ basis vectors:

\Delta : e_i \mapsto e_i \otimes e_i

while the counit:

\epsilon : V \to \mathbb{R}

‘deletes’ them:

\epsilon : e_i \mapsto 1

These correspond to species splitting in two and species going extinct, respectively. (Biologists trying to infer phylogenetic trees often ignore extinction, but it’s mathematically and biologically natural to include it.) So, all the requirements are met to apply Theorem 4 and make V into a coalgebra of the phylogenetic operad.

But how do we prove Theorem 4? It follows immediately from Theorem 5:

Theorem 5. The phylogenetic operad \mathrm{Phyl} is the coproduct of the operad \mathrm{Comm} and the additive monoid [0,\infty), viewed as an operad with only 1-ary operations.

Given how coproducts work, this means that anything that is an algebra of both \mathrm{Comm} and [0,\infty) is automatically an algebra of \mathrm{Phyl}. In other words, any commutative algebra with an action of [0,\infty) is an algebra of \mathrm{Phyl}. Dualizing, it follows that any cocommutative coalgebra with an action of [0,\infty) is a coalgebra of \mathrm{Phyl}. And that’s Theorem 4!

But why is Theorem 5 true? First of all, I should emphasize that the idea of using it was suggested by Tom Leinster in our last blog conversation on the phylogenetic operad. And in fact, Tom proved a result very similar to Theorem 5 here:

• Tom Leinster, Coproducts of operads, and the W-construction, 14 September 2000.

He gives an explicit description of the coproduct of an operad O and a monoid, viewed as an operad with only unary operations. He works with non-symmetric, non-topological operads, but his ideas also work for symmetric, topological ones. Applying his ideas to the coproduct of \mathrm{Comm} and [0,\infty), we see that we get the phylogenetic operad!

And so, phylogenetic trees turn out to be related to coproducts of operads. Who’d have thought it? But we really don’t have as many fundamentally different ideas as you might think: it’s hard to have new ideas. So if you see biologists and algebraic topologists both drawing pictures of trees, you should expect that they’re related.


A Quantum of Warmth

2 July, 2011

guest post by Tim van Beek

The Case of the Missing 33 Kelvin, Continued

Last time, when we talked about putting the Earth in a box, we saw that a simple back-of-the-envelope calculation of the energy balance and resulting black body temperature of the earth comes surprisingly close to the right answer.

But there was a gap: the black body temperature calculated with a zero-dimensional energy balance model is about 33 kelvin lower than the estimated average surface temperature on Earth.

In other words, this simplified model predicts an Earth that’s 33 °C colder than it really is!
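For those who want the back-of-the-envelope calculation in runnable form, here it is in Python, with the usual rough input values (solar constant about 1370 W/m², albedo about 0.3):

```python
# The zero-dimensional energy balance from last time, in a few lines.
sigma  = 5.67e-8     # Stefan-Boltzmann constant, W m^-2 K^-4
S      = 1370.0      # solar constant at Earth's orbit, W/m^2 (rough value)
albedo = 0.3         # fraction of sunlight reflected straight back to space

# Absorbed sunlight, averaged over the sphere (the factor 4 is the ratio
# of a sphere's surface area to its cross-sectional disk):
absorbed = (1 - albedo) * S / 4

# Black body temperature from: absorbed = sigma * T^4
T = (absorbed / sigma) ** 0.25
print(T, T - 288)    # about 255 K, roughly 33 K below the observed 288 K
```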

In such a situation, as theoretical physicists, we start by taking a bow, patting ourselves on the back, and congratulating ourselves on a successful first approximation.

Then we look for the next most important effect that we need to include in our model.

This effect needs to:

1) have a steady and continuous influence over thousands of years,

2) have a global impact,

3) be rather strong, because heating the planet Earth by 33 kelvin on the average needs a lot of power.

The simplest explanation would of course be that there is something fundamentally wrong with our back-of-the-envelope calculation.

One possibility, as Itai Bar-Natan mentioned last time, is geothermal energy. It certainly matches point 1, maybe matches point 2, but it is hard to guess if it matches point 3. As John pointed out, we can check the Earth’s energy budget on Wikipedia. This suggests that the geothermal heating is very small. Should we trust Wikipedia? I don’t know. We should check it out!

But I will not do that today. Instead I would like to talk about the most prominent explanation:

Most of you will of course have heard about the effect that climate scientists talk about, which is often—but confusingly—called the ‘greenhouse effect’, or ‘back radiation’. However, the term that is most accurate is downward longwave radiation (DLR), so I would like to use that instead.

In order to assess if this is a viable explanation of the missing 33 kelvin, we will first have to understand the effect better. So this is what I will talk about today.

In order to get a better understanding, we will have to peek into our simple model’s box and figure out what is going on in there in more detail.

Peeking into the Box: Surface and Atmosphere

To get a better approximation, instead of treating the whole earth as a black body, we will have to split up the system into the Earth itself, and its atmosphere. For the surface of the Earth it is still a good approximation to say that it is a black body.

The atmosphere is more complicated. As a next approximation step, I would like to pretend that the atmosphere is a body of its own, hovering above the surface of the earth, as a separate system. So we will ignore that there are several different layers in the atmosphere doing different things, including interactions with the surface. Well, we are not going to ignore the interaction with the surface completely, as you will see.

Since one can quickly get lost in details when discussing the atmosphere, I’m going to cheat and look up the overall average effects in an introductory meteorology textbook:

• C. Donald Ahrens: Meteorology Today, 9th edition, Brooks/Cole, Florence, Kentucky, 2009.

Here is what atmosphere and Earth’s surface do to the incoming radiation from the Sun (from page 48):

Of 100 units of inbound solar energy flux, 30 are reflected or scattered back to space without a contribution to the energy balance of the Earth. This corresponds to an overall average albedo of 0.3 for the Earth.

The next graphic shows the most important processes of heat and mass transport caused by the remaining 70 units of energy flux, with their overall average effect (from page 49):

Maybe you have some questions about this graphic; I certainly do.

Conduction and Convection?

Introductory classes on partial differential equations sometimes start with the one-dimensional heat equation. This equation describes the temperature distribution of a rod of metal that is heated on one end and kept cool on the other. The kind of heat transfer occurring here is called conduction. The atoms or molecules stay where they are and transfer energy by interacting with their neighbors.
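As an aside, here is what such a calculation looks like in practice: a minimal explicit finite-difference sketch of that rod in Python, with made-up parameter values.

```python
# The one-dimensional heat equation u_t = k u_xx for a rod heated at one
# end and cooled at the other, via an explicit finite-difference scheme.
import numpy as np

k, L, n = 1.0, 1.0, 51          # diffusivity, rod length, grid points (made up)
dx = L / (n - 1)
dt = 0.4 * dx**2 / k            # small enough for the explicit scheme to be stable

u = np.zeros(n)                  # initial temperature: 0 everywhere
u[0]  = 100.0                    # hot end, held at 100 degrees
u[-1] = 0.0                      # cold end, held at 0 degrees

for _ in range(5000):
    u[1:-1] += k * dt / dx**2 * (u[2:] - 2 * u[1:-1] + u[:-2])

# After many steps u approaches the steady state: a straight line from
# 100 at the hot end down to 0 at the cold end.
print(u[::10].round(1))
```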

However, heat transfer by conduction is negligible for gases like the atmosphere. Why is it there in the graphic? The answer may be that conduction is still important for boundary layers. Or maybe the author wanted to include it to avoid the question “why is conduction not in the graphic?” I don’t know. But I’ll trust that the number associated with the “convection and conduction” part is correct, for now.

What is Latent Heat?

There is a label “latent heat” on the left part of the atmosphere: latent heat is energy input that does not result in a temperature increase, or energy output that does not result in a temperature decrease. This can happen when there is a phase change of a component of the system. For example, when liquid water at 0°C freezes, it turns into ice at 0°C while losing energy to its environment. But the temperature of the whole system stays at 0°C.

The human body uses this effect, too, when it cools itself by sweating. This cooling effect works as long as liquid water turns into water vapor, withdrawing energy from the skin in the process.

The picture above shows a forest with water vapor (invisible), liquid water (dispersed in the air) and snow. As the Sun sets, some of the water vapor will eventually condense, and liquid water will turn into ice, releasing energy to the environment. During these phase changes there will be energy loss without a temperature decrease of the water.

Downward Longwave Radiation

When there is a lot of light there are also dark shadows. — main character in Johann Wolfgang von Goethe’s Götz von Berlichingen

Last time we pretended that the Earth as a whole behaves like a black body.

Now that we split up the Earth into surface and atmosphere, you may notice that:

a) a lot of sunlight passes through the atmosphere and reaches the surface, and

b) there is a lot of energy flowing downwards from the atmosphere to the surface in the form of infrared radiation. This is called downward longwave radiation.

Observation a) shows that the atmosphere does not act like a black body at all. Instead, it has a nonzero transmittance, which means that not all incoming radiation is absorbed.

Observation b) shows that assuming that the black body temperature of the Earth is equal to the average surface temperature could go wrong, because—from the viewpoint of the surface—there is an additional inbound energy flux from the atmosphere.

The reason for both observations is that the atmosphere consists of various gases, like O2, N2, H2O (water vapor) and CO2. Any gas molecule can absorb and emit radiation only at certain frequencies, which are called its emission spectrum. This fact led to the development of quantum mechanics, which can be used to calculate the emission spectrum of any molecule.

Molecules and Degrees of Freedom

When a photon hits a molecule, the molecule can absorb the photon and gain energy in three main ways:

• One of its electrons can climb to a higher energy level.

• The molecule can vibrate more strongly.

• The molecule can rotate more rapidly.

To get a first impression of the energy levels involved in these three processes, let’s have a look at this graphic:

This is taken from the book

• Sune Svanberg, Atomic and Molecular Spectroscopy: Basic Aspects and Practical Applications, 4th edition, Advanced Texts in Physics, Springer, Berlin, 2004.

The y-axis shows the energy difference in ‘eV’, or ‘electron volts’. An electron volt is the amount of energy an electron gains or loses when it moves through an electric potential difference of one volt.

According to quantum mechanics, a molecule can emit and absorb only photons whose energy matches the difference between two of the discrete energy levels in the graphic, for any one of the three processes.

It is possible to use the characteristic absorption and emission properties of molecules of different chemical species to analyze the chemical composition of an unknown sample of gases (and other materials, too). These methods usually have names involving the word ‘spectroscopy’. For example, infrared spectroscopy involves methods that examine what happens to infrared radiation when you send it through your sample.

By the way, Wikipedia has a funny animated picture of the different vibrational modes of a molecule on the page about infrared spectroscopy.

But why does so much of radiation from the Sun pass through the atmosphere, while a lot of infrared radiation emitted by the Earth instead bounces back to the surface? The answer to this puzzle involves a specific property of certain components of the atmosphere.

Can You See an Electron Hopping?

Here is a nice overview of the spectrum of electromagnetic radiation:

The energy E and the wavelength \lambda of a photon have a very simple relationship:

\displaystyle{ E = \frac{c \; h}{\lambda}}

where h is Planck’s constant and c is the speed of light. In short, photons with longer wavelengths have less energy.

Planck’s constant is

h \approx 4.1 \times 10^{-15} \; eV \cdot s

while the speed of light is

c \approx 3 \times 10^{8} \; m/s

Plugging these into the formula, we get that a photon with an energy of one electron volt has a wavelength of about 1.2 micrometers, which is just outside the visible range, a bit towards the infrared direction. The visible range corresponds to 1.6 to 3.4 electron volts. If you want, you can scroll up to the graphic with the energy levels and calculate which processes will result in which kind of radiation.
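These conversions are easy to reproduce:

```python
# Photon wavelength from energy, using E = c h / lambda.
h = 4.136e-15              # Planck's constant in eV s
c = 3.0e8                  # speed of light in m/s

wavelength = lambda E_eV: h * c / E_eV    # in meters
print(wavelength(1.0))                    # ~1.24e-6 m: about 1.24 micrometers
print(wavelength(1.6), wavelength(3.4))   # ~776 nm and ~365 nm: the visible range
```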

Electrons that take a step down the orbital ladder in an atom emit a photon. Depending on the atom and the kind of transition, some of those photons will be in the visible range, and some will be in the ultraviolet.

There is no Infrared from the Sun (?)

From the Planck distribution, we can determine that the Sun and Earth, which are approximately black bodies, emit radiation mostly at very different wavelengths:

This graphic is sometimes called ‘twin peak graph’.

Oversimplifying, we could say: The Earth emits infrared radiation; the Sun emits almost no infrared. So, if you find infrared radiation on earth, you can be sure that it did not come from the Sun.

The problem with this statement is that, strictly speaking, the Sun does emit radiation at wavelengths that are in the infrared range. This is the reason why people have come up with the term near-infrared radiation, which we define to be the range of 0.85 to 5.0 micrometers in wavelength. Radiation with longer wavelengths is called far infrared. With these definitions we can say that the Sun radiates in the near-infrared range, and the Earth does not.
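Where the two peaks sit is easy to estimate from Wien’s displacement law, which says a black body at temperature T radiates most strongly at wavelength λ = b/T, with b ≈ 2.9 × 10⁻³ m·K:

```python
# Wien's displacement law: the peak wavelength of a black body at T kelvin.
b = 2.898e-3                  # Wien's constant, m K

def peak_wavelength_um(T):
    return b / T * 1e6        # in micrometers

print(peak_wavelength_um(5800.0))   # Sun  (~5800 K): about 0.5 micrometers, visible
print(peak_wavelength_um(288.0))    # Earth (~288 K): about 10 micrometers, infrared
```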

Only certain components of the atmosphere emit and absorb radiation in the infrared part. These are called—somewhat misleadingly—greenhouse gases. I would like to call them ‘infrared-active gases’ instead, but unfortunately the ‘greenhouse gas’ misnomer is very popular. Two prominent ones are H2O and CO2:

The atmospheric window at 8 to 12 μm is quite transparent, which means that this radiation passes from the surface through the atmosphere into space without much ado. Therefore, this window is used by satellites to estimate the surface temperature.

Since most radiation coming from the Earth is infrared, and only some constituents of the atmosphere react to it—excluding the major ones—a small amount of, say, CO2 could have a lot of influence on the energy balance. Like being the only one in a group of hundreds with a boom box. But we should check that more thoroughly.

Can a Cold Body Warm a Warmer Body?

Downward longwave radiation warms the surface, but the atmosphere is colder than the surface, so how can radiation from the colder atmosphere result in a higher surface temperature? Doesn’t that violate the second law of thermodynamics?

The answer is: no, it does not. It turns out that others have already taken pains to explain this on the blogosphere, so I’d like to point you there instead of trying to do a better job here:

• Roy Spencer, Yes, Virginia, cooler objects can make warmer objects even warmer still, 23 July 2010.

• The Science of Doom, The amazing case of “back-radiation”, 27 July 2010.

It’s the Numbers, Stupid!

Shut up and calculate! — Leitmotiv of several prominent physicists after becoming exhausted by philosophical discussions about the interpretation of quantum mechanics.

Maybe by now we have succeeded in convincing the imaginary advisory board of the zero-dimensional energy balance model project that there really is an effect like ‘downward longwave radiation’. It certainly should be there if quantum mechanics is right. But I have not explained yet how big it is. According to the book Meteorology Today, it is big. But maybe the people who contributed to the graphic got fooled somehow, and there really is a different explanation for the case of the missing 33 kelvin.

What do you think?

When we dip our toes into a new topic, it is important to keep simple yet fundamental questions like this in mind, and keep asking them.

In this case we are lucky: it is possible to measure the amount of downward longwave radiation. There are a lot of field studies, and the results have been incorporated in global climate models. But we will have to defer this story to another day.


This Week’s Finds (Week 315)

27 June, 2011

This is the second and final part of my interview with Thomas Fischbacher. We’re talking about sustainable agriculture, and he was just about to discuss the role of paying attention to flows.

JB: So, tell us about flows.

TF: For natural systems, some of the most important flows are those of energy, water, mineral nutrients, and biomass. Now, while they are physically real, and keep natural systems going, we should remind ourselves that nature by and large does not make high level decisions to orchestrate them. So, flows arise due to processes in nature, but nature ‘works’ without being consciously aware of them. (Still, there are mechanisms such as evolutionary pressure that ensure that the flow networks of natural ecosystems work—those assemblies that were non-viable in the long term did not make it.)

Hence, flows are above everything else a useful conceptual framework—a mental tool devised by us for us—that helps us to make sense of an otherwise extremely complex and confusing natural world. The nice thing about flows is that they reduce complexity by abstracting away details when we do not want to focus on them—such as which particular species are involved in the calcium ion economy, say. Still, they retain a lot of important information, quite unlike some models used by economists that actually guide—or misguide—our present decision-making. They tell us a lot about key processes and longer term behaviour—in particular, if something needs to be corrected.

Sustainability is a complex subject that links to many different aspects of human experience—and of course the non-human world around us. When confronted with such a subject, my approach is to start by asking ‘what am I most certain about?’, and use these key insights as ‘anchors’ that set the scene. Everything else must respect these insights. Occasionally, some surprising new insight forces me to reevaluate some fundamental assumptions, and repaint part of the picture. But that’s life—that’s how we learn.

Very often, I find that those aspects which are both useful to obtain deeper insights and at the same time accessible to us are related to flows.

JB: Can you give an example?

TF: Okay, here’s another puzzle. What is the largest flow of solids induced by civilization?

JB: Umm… maybe the burning of fossil fuels, passing carbon into the atmosphere?

TF: I am by now fairly sure that the answer is: the unintentional export of topsoil from the land into the sea by wind and water erosion, due to agriculture. According to Brady & Weil, around the year 2000, the U.S. annually ‘exported’ about 4 × 10^12 kilograms of topsoil to the sea. That’s roughly three cubic kilometers, taking a reasonable estimate for the density of humus.

JB: Okay. In 2007, the U.S. burnt 1.6 × 10^12 kilograms of carbon. So, that’s comparable.

TF: Yes. When I cross-check my number combining data from the NRCS on average erosion rates and from the CIA World Factbook on cultivated land area, I get a result that is within the same ballpark, so it seems to make sense. In comparison, total U.S. exports of economic goods in 2005 were 4.89 × 10^11 kilograms: about an order of magnitude less, according to statistics from the Federal Highway Administration.
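A quick check of the volume conversion above, taking a rough topsoil density of 1300 kg/m³ (a plausible round figure, not from Brady & Weil):

```python
mass_kg = 4e12                   # the annual topsoil 'export' quoted above
density = 1300.0                 # kg/m^3, a rough figure for topsoil
print(mass_kg / density / 1e9)   # in cubic kilometers: about 3
```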

If we look at present soil degradation rates alone, it is patently clear that we see major changes ahead. In the long term, we just cannot hope to keep on feeding the population using methods that keep on rapidly destroying fertility. So, we pretty much know that something will happen there. (Sounds obvious, but alas, thinking of a number of discussions I had with some economists, I must say that, sadly, it is far from being so.)

What actually will happen mostly depends on how wisely we act. The possibilities range from nuclear war to a mostly smooth swift transition to fertility-building food production systems that also take large amounts of CO2 out of the atmosphere and convert it to soil humus. I am, of course, much in favour of scenarios close to the latter one, but that won’t happen unless we put in some effort—first and foremost, to educate people about how it can be done.

Flow analysis can be an extremely powerful tool for diagnosis, but its utility goes far beyond this. When we design systems, paying attention to how we design the flow networks of energy, water, materials, nutrients, etc., often makes a world of a difference.

Nature is a powerful teacher here: in a forest, there is no ‘waste’, as one system’s output is another system’s input. What else is ‘waste’ but an accumulation of unused output? So, ‘waste’ is an indication of an output mismatch problem. Likewise, if a system’s input is not in the right form, we have to pre-process it, hence do work, hence use energy. Therefore, if a process or system continually requires excessive amounts of energy (as many of our present designs do), this may well be an indication of a design problem—and could be related to an input mismatch.

Also, the flow networks of natural systems usually show both extremely high recycling rates and a lot of multi-functionality, which provides resilience. Every species provides its own portfolio of services to the assembly, which may include pest population control, creating habitat for other species, food, accumulating important nutrients, ‘waste’ transformation, and so on. No element has a single objective, in contrast to how we humans by and large like to engineer our systems. Each important function is covered by more than one element. Quite unlike many of our past approaches, design along such principles can have long-term viability. Nature works. So, we clearly can learn from studying nature’s networks and adopting some principles for our own designs.

Designing for sustainability with, around, and inspired by natural systems is an interesting intellectual challenge, much like solving a jigsaw puzzle. We cannot simultaneously comprehend the totality of all interactions and relations between adjacent pieces as we build it, but we keep on discovering clues by closely studying different aspects: form, colour, pattern. If we are on the right track, and one clue tells us how something should fit, we will discover that other aspects will fit as well. If we made a mistake, we need to apply force to maintain it and hammer other pieces into place—and unless we correct that mistake, we will need ever more brutal interventions to artificially stabilize the problems which are mere consequences of the original mistake. Think of using nuclear weapons to seal off spilling oil wells, drilled in deep waters because we used up all the easily accessible high-quality fuels. One mistake begets another.

There is a reason why jigsaw puzzles ‘work’: they were created that way. There is also a reason why the dance of natural systems ‘works’: coevolution. What happens when we run out of steam to stabilize poor designs (i.e. in an energy crisis)? We, as a society, will be forced to confront our past arrogance and pay close attention to resolving the design mistakes we so far always tried to talk away. That’s something I’d call ‘true progress’.

Actually, it’s quite evident now: many of our ‘problems’ are rather just symptoms of more fundamental problems. But as we do not track these down to the actual root, we keep on expending ever more energy by stacking palliatives on top of one another. Growing corn as a biofuel in a process that both requires a lot of external energy input and keeps on degrading soil fertility is a nice example. Now, if we look closer, we find numerous further, superficially unrelated, problems that should make us ask the question: "Did we assemble this part of the puzzle correctly? Is this approach really such a good idea? What else could we do instead? What other solutions would suggest themselves if we paid attention to the hints given by nature?" But we don’t do that. It’s almost as if we were proud to be thick.

JB: How would designing with flows in mind work?

TF: First, we have to be clear about the boundaries of our domain of influence. Resources will at some point enter our domain of influence and at some point leave it again. This certainly holds for a piece of land on which we would like to implement sustainable food production where one of the most important flows is that of water. But it also holds for a household or village economy, where an important flow through the system is that of purchase power—i.e. money (but in the wider sense). As resources percolate through a system, their utility generally degrades—entropy at work. Water high up in the landscape has more potential uses than water further down. So, we can derive a guiding principle for design: capture resources as early as possible, release them as late as possible, and see that you guide them in such a way that their natural drive to go downhill makes them perform many useful duties in between. Considering water flowing over a piece of land, this would suggest setting up rainwater catchment systems high up in the landscape. This water then can serve many useful purposes: there certainly are agricultural/silvicultural and domestic uses, maybe even aquaculture, potentially small-scale hydropower (say, in the 10-100 watts range), and possibly fire control.

JB: When I was a kid, I used to break lots of things. I guess lots of kids do. But then I started paying attention to why I broke things, and I discovered there were two main reasons. First, I might be distracted: paying attention to one thing while doing another. Second, I might be trying to overcome a problem by force instead of by slowing down and thinking about it. If I was trying to untangle a complicated knot, I might get frustrated and just pull on it… and rip the string.

I think that as a culture we make both these mistakes quite often. It sounds like part of what you’re saying is: "Pay more attention to what’s going on, and when you encounter problems, slow down and think about their origin a bit—don’t just try to bully your way through them."

But the tool of measuring flows is a nice way to organize this thought process. When you first told me about ‘input mismatch problems’ and ‘output mismatch problems’, it came as a real revelation! And I’ve been thinking about them a lot, and I want to keep doing that.

One thing I noticed is that problems tend to come in pairs. When the output of one system doesn’t fit nicely into the input of the next, we see two problems. First, ‘waste’ on the output side. Second, ‘deficiency’ on the input side. Sometimes it’s obvious that these are two aspects of the same problem. But sometimes we fail to see it.

For example, a while ago some ground squirrels chewed a hole in an irrigation pipe in our yard. Of course that’s our punishment for using too much water in a naturally dry environment, but look at the two problems it created. One: big gushers of water shooting out of the hole whenever that irrigation pipe was used, which caused all sort of further problems. Two: not enough water to the plants that system was supposed to be irrigating. Waste on one side, deficiency on the other.

That’s obvious, easy to see, and easy to fix: first plug the hole, then think carefully about why we’re using so much water in the first place. We’d already replaced our lawn with plants that use less water, but maybe we can do better.

But here’s a bigger problem that’s harder to fix. Huge amounts of fertilizer are being used on the cornfields of the midwestern United States. With the agricultural techniques they’re using, there’s a constant deficiency of nitrogen and phosphorus, so it’s supplied artificially. The figures I’ve seen show that about 30% of the energy used in US agriculture goes into making fertilizers. So, it’s been said that we’re ‘eating oil’—though technically, a lot of nitrogen fertilizer is made using natural gas. Anyway: a huge deficiency problem.

On the other hand, where is all this fertilizer going? In the midwestern United States, a lot of it winds up washing down the Mississippi River. And as a result, there are enormous ‘dead zones’ in the Gulf of Mexico. The fertilizer feeds algae, the algae dies and decays, and the decay process takes oxygen out of the water, killing off any life that needs oxygen. These dead zones range from 15 to 18 thousand square kilometers, and they’re in a place that’s one of the prime fishing spots for the US. So: a huge waste problem.

But they’re the same problem!

It reminds me of the old joke about a guy who was trying to button his shirt. "There are two things wrong with this shirt! First, it has an extra button on top. Second, it has an extra buttonhole on bottom!"

TF: Bill Mollison said it in a quite humorous-yet-sarcastic way in this episode of the Global Gardener movie:

• Bill Mollison, Urban permaculture strategies – part 1, YouTube.

While the potential to grow a large amount of calories in cities may be limited, growing fruit and vegetables nevertheless does make sense for multiple reasons. One of them is that many things that previously went into the garbage bin now have a much more appropriate place to go—such as the compost heap. Many urbanites who take up gardening are quite amazed when they realize how much of their household waste actually always ‘wanted’ to end up in a garden.

JB: Indeed. After I bought a compost bin, the amount of trash I threw out dropped dramatically. And instead of feeling vaguely guilty as I threw orange peels into the trash where they’d be mummified in a plastic bag in a landfill, I could feel vaguely virtuous as I watched them gradually turn into soil. It doesn’t take as long as you might think. And it comes as a bit of a revelation at first: "Oh, so that’s how we get soil."

TF: Perhaps the biggest problem I see with a mostly non-gardening society is that people without even the slightest own experience in growing food are expected to make up their mind about very important food-related questions and contribute to the democratic decision making process. Again, I must emphasize that whoever does not consciously invest some effort into getting at least some minimal first hand experience to improve their judgment capabilities will be easy prey for rat-catchers. And by and large, society is not aware of how badly they are lied to when it comes to food.

But back to flows. Every few years or so, I stumble upon a jaw-dropping idea, or a principle, that makes me realize that it is so general and powerful that, really, the limits of what it can be used for are the limits of my imagination and creativity. I recently had such a revelation with the PSLQ integer relation algorithm. Using flows as a mental tool for analysis and design was another such case. All of a sudden, a lot made sense, and could be analyzed with ease.

There always is, of course, the ‘man with a hammer problem’—if you are very fond of a new and shiny hammer, everything will look like a nail. I’ve also heard that expressed as ‘an idea is a very dangerous thing if it is the only one you have’.

So, while keeping this in mind, now that we got an idea about flows in nature, let us ask: "how can we abuse these concepts?" Mathematicians prefer the term ‘abstraction’, but it’s fun either way. So, let’s talk about the flow of money in economies. What is money? Essentially, it is just a book-keeping device invented to keep track of favours owed by society to individuals and vice versa. What function does it have? It works as ‘grease’, facilitating trade.

So, suppose you are the mayor of a small village. One of your important objectives is of course prosperity for your villagers. Your village trades with and hence is linked to an external economy, and just as goods and services are exchanged, so is money. So, at some point, purchasing power (in the form of money) enters your domain of influence, and at some point, it will leave it again. What you want it to do is to facilitate many different economic activities—so you want to ensure it circulates within the village as long as possible. You should pay some attention to situations where money accumulates—for everything that accumulates without being put to good use is a form of ‘waste’, hence pollution. So, this naturally leads us to two ideas: (a) What incentives can you find to keep money circulating within the village? (There are many answers, limited only by creativity.) And (b) what can you do to constrain the outflow? If the outlet is made smaller, system outflow will match inflow at a higher internal pressure, hence a higher level of resource availability within the system.
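To make that last picture concrete, here is a minimal stock-and-flow sketch in Python—all the numbers are invented for illustration. A constant inflow balanced by an outflow proportional to the internal stock settles at stock = inflow / leak rate, so halving the ‘outlet’ doubles the amount of money circulating inside the village:

    def steady_state_stock(inflow, leak_rate, steps=1000, dt=0.1):
        """Iterate d(stock)/dt = inflow - leak_rate * stock until it settles."""
        stock = 0.0
        for _ in range(steps):
            stock += (inflow - leak_rate * stock) * dt
        return stock

    inflow = 100.0  # money entering the village per month (arbitrary units)
    for leak_rate in (0.5, 0.25):  # fraction of internal money leaving per month
        print(f"leak rate {leak_rate}: stock settles at "
              f"{steady_state_stock(inflow, leak_rate):.0f}")
    # leak rate 0.5:  stock settles at 200
    # leak rate 0.25: stock settles at 400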

This leads us to an idea no school will ever tell you about—for pretty much the same reason why no state-run school will ever teach how to plan and successfully conduct a revolution. The road to prosperity is to systematically reduce your ‘Need To Earn’—i.e. the best way to spend money is to set up systems that allow you to keep more money in your pocket. A frequent misconception that arises when I mention this is that some people think this idea is about austerity. Quite the contrary. You can make as much money as you want—but keep in mind that if you hold the trump card of being able, at any time, to just disconnect from most of the economy and get by with almost no money at all for extended periods, you are in a far better position to take risks and grasp exceptional opportunities as they arise than someone who has committed himself to having to earn a couple of thousand pounds a month.

The problem is not with earning a lot of money. The problem is with being forced to continually make a lot of money. We readily manage to identify this as a key problem of drug addicts, but fail to see the same mechanism at work in mainstream society. A key assumption in economic theory is that exchange is voluntary. But how well is that assumption satisfied in practice if such forces are in place?

Now, what would happen if people started to get serious about investing the money they earn to systematically reduce their need to earn money in the future? Some decisions, such as getting a photovoltaic array, may have ‘payback times’ in the range of one or two decades, but I consider this ‘payback time’ concept a self-propagating flawed idea. If something gives me an advantage in terms of depending on less external input now, this reduction of vulnerability also has to be taken into account—’payback times’ do not do that. So—if most people did such things, i.e. made strategic decisions to set up systems so that their essential needs can be satisfied with minimal effort, and especially minimal money, this would put a lot of political power back into their hands. A number of self-proclaimed ‘leaders’ certainly don’t like the idea of people being in a position to just ignore their orders. Also note that this would have a funny effect on the GDP—ever heard of ‘imputations’?

JB: No, what are those?

TF: It’s a funny thing, perhaps best explained by an example. If you fully own your own house, then you don’t pay rent. But for the purpose of determining the GDP, you are regarded as paying as much rent to yourself (!) as you would get if you rented out the house. See:

• Imputed rent, Wikipedia.

Evidently, if people make a dedicated effort at the household level to become less dependent on the economy by providing their essential needs (housing, food, water, energy, etc.) themselves to a much larger extent, this amounts to investing money in order to need less money in the future. If many people did this systematically, it would superficially have a devastating effect on the GDP—but it would bring about a much more resilient (because less dependent) society.

The problem is that the GDP really is not an appropriate measure for progress. But obviously, those who publish these figures know that as well, hence the need to fudge the result with imputations. So, a simple conclusion is: whenever there is an opportunity to invest money in a way that makes you less dependent on the economy in the future, that might be well worth a closer look. Especially if you get the idea that, if many people did this, the state would likely have to come up with other imputations to make the impact on the GDP disappear!

JB: That’s a nice thought. I tend to worry about how the GDP and other economic indicators warp our view of what’s right to do. But you’re saying that if people can get up the nerve to do what’s right, regardless, the economic indicators may just take care of themselves.

TF: We have to remember that sustainability is about systems that are viable in the long run. Environmental sustainability is just one important aspect. But you won’t go on for long doing what you do unless it also has economic long-term viability. Hence, we are dealing with multi-dimensional design constraints. And just as flow network analysis is useful to get an idea about the environmental context, the same holds for the economic context. It’s just that the resources are slightly different ones—money, labour, raw materials, etc. These thoughts can be carried much further, but I find it quite worthwhile to instead look at an example where someone did indeed design a successful system along such principles. In the UK, the first example that would come to my mind is Hill Holt Wood, because the founding director, Nigel Lowthrop, did do so many things right. I have high admiration for his work.

JB: When it comes to design of sustainable systems, you also seem to be a big fan of Bill Mollison and some of the ‘permaculture’ movement that he started. Could you say a bit about that? Why is it important?

TF: The primary reason why permaculture matters is that it has demonstrated some stunning successes with important issues such as land rehabilitation.

‘Permaculture’ means a lot of different things to a lot of different people. Curiously, where I grew up, the term is somewhat known, but mostly associated with an Austrian farmer, not Bill Mollison. And I’ve seen some physicists who first had come into contact with it through David Holmgren’s book revise their opinions when they later read Mollison. Occasionally, some early adopters did not really understand the scientific aspects of it and tried to link it with some strange personal beliefs of the sort Martin Gardner discussed in Fads and Fallacies in the Name of Science. And so on. So, before we discuss permaculture, I have to point out that one might sometimes have to take a close look to evaluate it. A number of things claiming to be ‘permaculture’ actually are not.

When I started—some time ago—to make a systematic effort to get a useful overview of the structure of our massive sustainability-related problems, a key question to me always was: "what should I do?"—and a key conviction was: "someone must have had some good ideas about all this already." This led me to skip some well-known "environmentalist" books that many people had read but that are devoid of any discussion of our options and potential solutions, and to do a lot of detective work instead.

In doing so, I travelled, talked to a number of people, read a lot of books and manuscripts, did a number of my own experiments, cross-checked things against order-of-magnitude guesstimates, against the research literature, and so on. At one point—I think it was when I took a closer look into the work of the laureates of the ‘Right Livelihood award’ (sometimes called the ‘Alternative Nobel Prize’)—I came across Bill Mollison’s work. And it struck a chord.

Back in the 90s, when mad cow disease was a big topic in Europe, I spent quite some time pondering questions such as: "what’s wrong with the way farming works these days?" When studying Bill Mollison’s work, I immediately recognized a number of insights I had independently arrived at back then—and yet, he went so much further, talking about a whole universe of issues I was still mostly unaware of at that time. So, an inner voice said to me: "if you take a close look at what that guy already did, that might save you a lot of time". Now, Mollison did get some things wrong, but I still think taking a close look at what he has to say is a very effective way to get a big-picture overview of what we can achieve, and what needs urgent attention. I think it greatly helps (at least for me) that he comes from a scientific background. Before he decided to quit academia in 1978 and work full time on developing permaculture, he was a lecturer at the University of Tasmania in Hobart.

JB: But what actually is ‘permaculture’?

TF: That depends a lot on who you ask, but I like to think about permaculture as if it were an animal. The ‘skeleton’ is a framework with cleverly designed ‘static properties’ that holds the ‘flesh’ together in a way so that it can achieve things. The actual ‘flesh’ is provided by solutions to specific problems with long term viability being a key requirement. But it is more than just a mere semi-amorphous collage of solutions, due to its skeleton. The backbone of this animal is a very simple (deliberately so) yet functional (this is important) core ethics which one could regard as being the least common denominator of values considered as essential across pretty much all cultures. This gives it stability. Other bones that make this animal walk and talk are related to key principles. And these principles are mostly just applied common sense.

For example, it is pretty clear that as non-renewable resources keep on becoming more and more scarce, we will have to seriously ponder the question: what can we grow that can replace them? If our design constraints change, so does our engineering—should (for one reason or another) some particular resource such as steel become much more expensive than it is today, we would of course look into the question whether, say, bamboo may be a viable alternative for some applications. And that is not as exotic an idea as it may sound these days.

So, unquestionably, the true solutions to our problems will be a lot about growing things. But growing things in the way that our current-day agriculture mostly does it seems highly suspicious, as this keeps on destroying soil. So, evidently, we will have to think less along the lines of farming and more along the lines of gardening. Also, we must not fool ourselves about a key issue: most people on this planet are poor, hence for an approach to have wide impact, it must be accessible to the poor. Techniques that revolve around gardening often are.

Next, isn’t waiting for the big (hence, capital intensive) ‘technological miracle fix’ conspicuously similar to the concept of a ‘pie in the sky’? If we had any sense, shouldn’t we consider solving today’s problems with today’s solutions?

If one can distinguish between permaculture as it stands and attempts by some people who are interested in it to re-mold it so that it becomes ‘the permaculture part of permaculture plus Anthroposophy/Alchemy/Biodynamics/Dianetics/Emergy/Manifestation/New Age beliefs/whatever’, there is a lot of common sense in permaculture—the sort of ‘a practical gardener’s common sense’. In this framework, there is a place for both modern scientific methods and ancient tribal wisdom. I hence consider it a healthy antidote to both fanatical worship of ‘the almighty goddess of technological progress’—or any sort of fanatical worship for that matter—as well as to funny superstitious beliefs.

There are some things in the permaculture world, however, where I would love to see some change. For example, it would be great if people who know how to get things done paid more attention to closely keeping records of what they do to solve particular problems and to making these widely accessible. Solutions of the ‘it worked great for a friend of a friend’ sort do us a big disservice. Also, there are a number of ideas that easily get represented in overly simplistic form—such as ‘edge is good’—where one better should retain some healthy skepticism.

JB: Well, I’m going to keep on pressing you: what is permaculture… according to you? Can you list some of the key principles?

TF: That question is much easier to answer. The way I see it, permaculture is a design-oriented approach towards systematically reducing the total effort that has to be expended (in particular, in the long run) in order to keep society going and allow people to live satisfying lives. Here, ‘effort’ includes both work that is done by non-renewable resources (in particular fossil fuels), as well as human labour. So, permaculture is not about returning to pre-industrial agricultural drudgery with an extremely low degree of specialization, but rather about combining modern science with traditional wisdom to find low-effort solutions to essential problems. In that sense, it is quite generic and deals with issues ranging from food production to water supply to energy efficient housing and transport solutions.

To give one specific example: Land management practices that reduce the organic matter content of soils and hence soil fertility are bound to increase the effort needed to produce food in the long run and hence considered a step in the wrong direction. So, a permaculture approach would focus on using strategies that manage to build soil fertility while producing food. There are a number of ways to do that, but a key element is a deep understanding of nature’s soil food web and nutrient cycling processes. For example, permaculture pays great attention to ensuring a healthy soil microflora.

When the objective is to minimize the effort needed to sustain us, it is very important to closely observe those situations where we have to expend energy on a continual basis in order to fight natural processes. When this happens, there is a conflict between our views of how things ought to look and a system trying to demonstrate its own evolution. In some situations, we really want it that way and have to pay the corresponding price. But there are others—quite a lot of them—where we would be well advised to spend some thought on whether we could make our life easier by ‘going with the flow’. If thistles keep on being a nuisance on some piece of land, we might consider trying to fill this ecological niche by growing some closely related species, say some artichoke. If a meadow needs to be mowed regularly so that it does not turn into a shrub thicket, we would instead consider planting some useful shrubs in that place.

Naturally, permaculture design favours perennial plants in climatic regions where the most stable vegetation would be a forest. But it does not have to be this way. There are high-yielding low-effort (in particular: no-till, no-pesticide) ways to grow grains as well, mostly going back to Masanobu Fukuoka. They have gained some popularity in India, where they are known as ‘Rishi Kheti’—’agriculture of the sages’. Here’s a photo gallery containing some fairly recent pictures:

• Raju Titus’s Public Gallery, Picasa.



Wheat growing amid fruit trees: no tillage, no pesticides — Hoshangabad, India

An interesting perspective on weeds, one which we usually do not take, is: the reason this plant could establish itself here is that it’s filling an unfilled ecological niche.

JB: Actually I’ve heard someone say: "If you have weeds, it means you don’t have enough plants".

TF: Right. So, when I take that weed out, I’d be well advised to take note of nature’s lesson and fill that particular niche with an ecological analog that is more useful. Otherwise, it will quite likely come back and need another intervention.

I would consider this "letting systems demonstrate their own evolution while closely watching what they want to tell us and providing some guidance" the most important principle of permaculture.

Another important principle is the ‘user pays’ principle. A funny idea that comes up disturbingly often in discussions of sustainability issues (even if it is not articulated explicitly) is that there is only a limited amount of resources, which we keep using up, and once we are done with that, it would be the end of mankind. Actually, that’s not how the world works.

Take an apple tree, for example. It starts out as a tiny seed, and has to accumulate a massive amount of (nutrient) resources to grow into a mature tree. Yet, once it completes its life cycle, dies down and is consumed by fungi, it leaves the world in a more fertile state than before. Fertility tends to keep growing, because natural systems by and large work according to the principle that any agent that takes something from the natural world will return something of equal or even greater ecosystemic value.

Let me come back to an example I briefly mentioned earlier on. At a very coarse level of detail, grazing cows eat grass and return cow dung. Now, in the intestines of the cow, quite a lot of interesting biochemistry has happened that converted nonprotein nitrogen (say, urea) into much more valuable protein:

• W. D. Gallup, Ruminant nutrition, review of utilization of nonprotein nitrogen in the ruminant, Journal of Agricultural and Food Chemistry 4 (1956), 625-627.

A completely different example: nutrient accumulators such as comfrey act as powerful pumps that draw up mineral nutrients from the subsoil, where they would be otherwise inaccessible, and make them available for ecosystemic cycling.



Russian comfrey, Symphytum x uplandicum

It is indeed possible to not only use this concept for garden management, but as a fundamental principle to run a sustainable economy. At the small scale (businesses), its viability has been demonstrated, but unfortunately this aspect of permaculture has not yet received as much attention as it should. Here, the key questions are along the lines of: do you need a washing machine, or is your actual need better matched by the description ‘access to some laundry service’?

Concerning energy and material flows, an important principle is "be aware of the boundaries of your domain of influence, capture flows as early as you can, release them as late as you can, and extract as much beneficial use out of them as possible in between". We already talked about that. In the era of cheap labour from fossil fuels, it is often a very good idea to use big earthworking machinery to slightly adjust the topography of the landscape in order to capture and make better use of rainwater. Done right, such water harvesting earthworks can last many hundreds of years, and pay back the effort needed to create them many times over in terms of enhanced biological productivity. If this were implemented on a broad scale, not just by a small percentage of farmers, it could add significantly to flood protection as well. I am fairly confident that we will be doing this a lot in the 21st century, as the climate gets more erratic and we face both more extreme rainfall events (note that saturation water vapour pressure increases by about 7% for every kelvin of temperature increase) as well as longer droughts. It would be smart to start with this now, rather than when high quality fuels are much more expensive. It would have been even smarter to start with this 20 years ago.
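As an aside, that ‘about 7% per kelvin’ figure is just the Clausius–Clapeyron relation at work: d(ln e_s)/dT = L/(R_v T^2), where e_s is the saturation vapour pressure. A few lines of Python with standard textbook constants make the arithmetic explicit:

    L_v = 2.5e6   # latent heat of vaporization of water, J/kg
    R_v = 461.5   # specific gas constant of water vapour, J/(kg K)

    for T in (273.15, 288.15, 303.15):  # 0, 15 and 30 degrees Celsius
        growth = L_v / (R_v * T**2)     # fractional increase of e_s per kelvin
        print(f"T = {T - 273.15:4.0f} C: saturation pressure "
              f"grows by {100 * growth:.1f}% per K")
    # prints roughly 7.3%, 6.5% and 5.9% per kelvin — ‘about 7%’, as stated above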

A further important principle is to create stability through a high degree of network connectivity. We’ve also briefly talked about that already. In ecosystem design, this means to ensure that every important ecosystemic function is provided by more than one element (read: species), while every species provides multiple functions to the assembly. So, if something goes wrong with one element, there are other stabilizing forces in place. The mental picture which I like to use here is that of a stellar cluster: If we put a small number of stars next to one another, the system will undergo fairly complicated dynamics and eventually separate: in some three-star encounters, two stars will enter a very close orbit, while the third receives enough energy to go over escape velocity. If we lump together a large number of stars, their dynamics will thermalize and make it much more difficult for an individual star to obtain enough energy to leave the cluster—and keep it for a sufficiently long time to actually do so. Of course, individual stars do ‘boil off’, but the entire system does not fall apart as fast as just a few stars would.

There are various philosophies about how best to approach weaving an ecosystemic net, ranging from ‘ecosystem mimicry’—i.e. taking wild nature and substituting some species with ecological analogs that are more useful to us—to ‘total synthesis of a species assembly’, i.e. combining species which in theory should grow well together due to their ecological characteristics, even though they might never have done so in nature.

JB: Cool. You’ve given me quite a lot to think about. Finally, could you also leave me with a few good books to read on permaculture?

TF: It depends on what you want to focus on. Concerning a practical hands-on introduction, this is probably the most evolved text:

• Bill Mollison, Introduction to Permaculture, Tagari Publications, Tasmania, 1997.

If you want more theory but are fine with a less refined piece of work, then this is quite useful:

• Bill Mollison, Permaculture: A Designers’ Manual, Tagari Publications, Tasmania, 1988.

Concerning temperate climates—in particular, Europe—this is a well researched piece of work that almost could be used as a college textbook:

• Patrick Whitefield, The Earth Care Manual: a Permaculture Handbook for Britain and Other Temperate Climates, Permanent Publications, East Meon, 2004.

For Europeans, this would probably be my first recommendation.

JB: Thanks! It’s been a very thought-provoking interview.


Ecologists never apply good ecology to their gardens. Architects never understand the transmission of heat in buildings. And physicists live in houses with demented energy systems. It’s curious that we never apply what we know to how we actually live. — Bill Mollison


Mathematics and the Environment in Iran

24 June, 2011

I’ve been invited to speak on mathematics and environmental issues at this conference:

• Forty-Second Annual Iranian Mathematics Conference (AIMC42), Vali-e-Asr University, Rafsanjan, Iran, 5-8 September 2011.

There is something inherently distasteful about flying around the world to talk about global warming, given the large amount of carbon burnt to fuel air travel. I’ve already turned down a couple of requests to give talks in the US and Europe, since they’re so far from my current home in Singapore. Only later did I think of a better solution, though it was perfectly obvious in retrospect: accept all these invitations to speak, but only on the condition that I give my talk over video-link. That may nudge institutions a bit towards the post-carbon future. They can accept or not, but either way they’ll have to think about these issues.

But Iran is close enough to Singapore, and the opportunity to speak about these issues to Iranian mathematicians is unusual enough, and potentially important enough, that I feel this talk is a good idea.

Do you know anything interesting about what Iranians, especially mathematicians or physicists, are doing about environmental issues?

(Note: I’m not interested in talking about politics here.)


Putting the Earth in a Box

19 June, 2011

guest post by Tim van Beek

Fried on Mercury

Is it possible to fly to Mercury in a spaceship without being fried?

If you think it should be possible to do a simple back-of-the-envelope calculation that answers this question, you’re right! And NASA has already done it:

This is interesting for astronauts—but it’s also interesting for a first estimate of the climate of planets. In particular, for the Earth. In this post, I would like to talk about how this estimate is done and what it means for climate science.

The Simplest Possible Model

How do physicists model a farm? They say: “First, let’s assume that all cows are spherical with homogeneous milk distribution”. – Anonymous

Theoretical physicists have a knack for creating the simplest possible model for very complicated systems and still have some measure of success with it. This is no different when the system is the whole climate of the earth:

The back-of-the-envelope calculation mentioned above has a name; people call it a ‘zero-dimensional energy balance model’.

Surprisingly, the story of energy balance models starts with a prominent figure in physics and one of the most important discoveries of 20th century physics: Max Planck and ‘black body radiation’.

Black Body Radiation

Matter emits electromagnetic radiation—at least the matter that we know best. Physicists have also postulated the existence of matter out in space that does not radiate at all, called ‘dark matter’, but that doesn’t need to concern us here.

Around 1900, the German physicist Max Planck set out to solve an important problem of thermodynamics: to calculate the amount of radiation emitted by matter based on first principles.

To solve the problem, Planck made a couple of simplifying assumptions about the kind of matter he would think of. These assumptions characterize what is known in physics as a perfect ‘black body’.

A black body is an object that perfectly absorbs and therefore also perfectly emits all electromagnetic radiation at all frequencies. Real bodies don’t have this property; instead, they absorb radiation at certain frequencies better than others, and some not at all. But there are materials that do come rather close to a black body. Usually one adds another assumption to the characterization of an ideal black body: namely, that the radiation is independent of the direction.

When the black body has a certain temperature T, it will emit electromagnetic radiation, so it will send out a certain amount of energy per second for every square meter of surface area. We will call this the energy flux and denote it as f. The SI unit for f is W/m^2: that is, watts per square meter. Here the watt is a unit of power: one joule of energy per second.

This electromagnetic radiation comes in different wavelengths. So, we can ask how much energy flux our black body emits per change in wavelength. This depends on the wavelength. We will call this the monochromatic energy flux f_{\lambda}. The SI unit for f_{\lambda} is W/(m^2 \; \mu m), where \mu m stands for micrometer: a millionth of a meter, which is a unit of wavelength. We call f_\lambda the ‘monochromatic’ energy flux because it gives a number for any fixed wavelength \lambda. When we integrate the monochromatic energy flux over all wavelengths, we get the energy flux f.

For the ideal black body, Max Planck was able to calculate the monochromatic energy flux f_{\lambda}—but to his surprise, he had to introduce the additional assumption that energy comes in quanta. This turned out to be the birth of quantum mechanics!

Understanding Thermodynamics: The Planck Distribution

His result is called the Planck distribution:

\displaystyle{ f_{\lambda}(T) = \frac{c_1}{\lambda^5 (e^{c_2/\lambda T} - 1)} }

Here I have written c_1 and c_2 for two constants. These can be calculated in terms of fundamental constants of physics. But for us this does not matter now. What matters is what the function looks like as a function of the wavelength \lambda for the temperature of the Sun and the Earth.

As usual, Wikipedia has a great page about this:

• Black body radiation, Wikipedia.

The following picture shows the energy flux as a function of the wavelength, for different temperatures:

The Earth radiates roughly like the 300 kelvin curve and the Sun like the 5800 kelvin curve. You may notice that the maximum of the Sun’s radiation is at the wavelengths that are visible to human eyes.

Real surfaces are a little different from the ideal black body:

As we can see, the real surface emits less radiation than the ideal black body. This is not a coincidence: the black body is by definition the body that generates the highest energy flux at a fixed temperature.

A simple way to take this into account is to talk about a grey body, which is a body that has the same monochromatic energy flux as the black body, but reduced by a constant factor, the emissivity.

It is possible to integrate the black body radiation over all wavelengths, to get the relation between temperature T and energy flux f. The answer is surprisingly simple:

f = \sigma \; T^4

This is called the Stefan-Boltzmann law, and the constant \sigma is called the Stefan-Boltzmann constant. Using this formula, we can assign to every energy flux f a black body temperature T, which is the temperature that an ideal black body would need to have to emit f.
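If you like to check such claims numerically, here is a short Python sketch. It is only an illustration: for c_1 and c_2 I have filled in the standard radiation constants c_1 = 2 \pi h c^2 and c_2 = h c / k, which the text above left unspecified. Integrating the Planck distribution over all wavelengths does indeed reproduce \sigma T^4, and the peak of the Sun’s curve lands in the visible range:

    import numpy as np

    c1 = 3.741771e-16    # W m^2, first radiation constant (2 pi h c^2)
    c2 = 1.438777e-2     # m K, second radiation constant (h c / k)
    sigma = 5.670400e-8  # W/(m^2 K^4), Stefan-Boltzmann constant

    def planck(lam, T):
        """Monochromatic energy flux f_lambda in W/m^3, wavelength lam in meters."""
        return c1 / (lam**5 * np.expm1(c2 / (lam * T)))

    lam = np.logspace(-7, -3, 20000)  # wavelengths from 0.1 micrometer to 1 mm

    for T in (300.0, 5800.0):  # roughly the Earth and the Sun
        total = np.trapz(planck(lam, T), lam)  # integrate over all wavelengths
        peak = lam[np.argmax(planck(lam, T))]
        print(f"T = {T:6.0f} K: integral = {total:.3e} W/m^2, "
              f"sigma T^4 = {sigma * T**4:.3e} W/m^2, peak at {1e6 * peak:.2f} um")
    # T =    300 K: the two fluxes agree, peak near 9.7 um (thermal infrared)
    # T =   5800 K: the two fluxes agree, peak near 0.5 um (visible light)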

Energy Balance of Planets

A planet like Earth gets energy from the Sun and loses energy by radiating to space. Since the Earth sits in empty space, these two processes are the only relevant ones that describe the energy flow.

The radiation emitted by the Sun results, at the distance of the Earth, in an energy flux of about 1370 watts per square meter. We need to account for the fact, however, that the Earth intercepts sunlight only on the area of a disc with the Earth’s radius, \pi R^2, but radiates from the whole spherical surface, 4 \pi R^2. This means that the average outbound energy flux is \frac{1}{4} of the inbound energy flux. (The question of whether there is some deeper reason for this simple relation was posed as a geometry puzzle here on Azimuth.)

So, now we are in a position to check if NASA got it right!

The Stefan-Boltzmann constant has a value of

\sigma = 5.670 400 \times 10^{-8} \frac{W}{m^2 K^4}

which results in a black body temperature of about 279 kelvin, which is about 6 °C:

\frac{1370}{4} W m^{-2} \;\approx \; 5.67 \,\times \,10^{-8} \frac{W}{m^2 K^4} \, \times \, (279 K)^4

That is not bad for a first approximation! The next step is to take into account the ‘albedo’ of the Earth. The albedo is the fraction of radiation that is instantly reflected without being absorbed. The albedo of a surface depends on the material, and in particular on the wavelength of the radiation, of course. But as a first approximation for the average albedo of the Earth we can take:

\mathrm{albedo}_{\mathrm{Earth}} = 0.3

This means that 30% of the radiation is instantly reflected and only 70% contributes to heating the Earth. When we take this into account by multiplying the left side of the previous equation by 0.7, we get a black body temperature of 255 kelvin, which is -18 °C.
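These two numbers are easy to reproduce. Here is a tiny Python check of the calculation above:

    sigma = 5.670400e-8  # W/(m^2 K^4), Stefan-Boltzmann constant
    S = 1370.0           # solar energy flux at the Earth's distance, W/m^2

    def equilibrium_temperature(absorbed_flux):
        """Temperature at which outgoing sigma*T^4 balances the absorbed flux."""
        return (absorbed_flux / sigma) ** 0.25

    print(f"no albedo:    {equilibrium_temperature(S / 4):.0f} K")        # about 279 K
    print(f"albedo = 0.3: {equilibrium_temperature(0.7 * S / 4):.0f} K")  # about 255 K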

Note that the emissivity factor for grey bodies does not change the equation, because it works both ways: the absorption of the incoming radiation is reduced by the same factor as the emitted radiation.

The average temperature of the Earth is actually estimated to be some 33 kelvin higher, that is, about +15 °C. That the naive estimate comes out too low should not be a surprise: after all, 70% of the planet is covered by liquid water, which is a strong indication that the average temperature is most probably not below the freezing point of water.

The albedo depends a lot on the material: for example, it is almost 1 for fresh snow. This is one reason people wear sunglasses for winter sports, even though the winter sun is considerably dimmer than the summer sun in high latitudes.

Since a higher albedo results in a lower temperature for the Earth, you may wonder what happens when there is more snow and ice. More snow and ice mean a higher albedo, hence lower absorption, hence less heat—which results in even more snow and ice. This is an example of positive feedback: a reaction that strengthens the process that caused it. There is a theory that something like this happened to the Earth about 600 million years ago. The scenario is aptly called Snowball Earth. This theory is based on geological evidence that at that time there was a glaciation that reached the equator! And it works the other way around, too.
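One can turn this feedback loop into a toy calculation. In the following sketch the albedo is an invented ramp in the surface temperature, and the 33 kelvin mentioned above is bolted on as a crude constant ‘greenhouse offset’—so this is a cartoon, not a climate model. Still, it shows the essential point: the same physics admits two stable states.

    sigma = 5.670400e-8  # W/(m^2 K^4), Stefan-Boltzmann constant
    S = 1370.0           # solar energy flux at the Earth's distance, W/m^2

    def albedo(T_surface):
        """Invented ramp: icy planet (0.7) below 240 K, present-day (0.3) above 270 K."""
        if T_surface < 240.0:
            return 0.7
        if T_surface > 270.0:
            return 0.3
        return 0.7 - 0.4 * (T_surface - 240.0) / 30.0

    def equilibrate(T_surface, greenhouse=33.0, steps=100):
        """Iterate the zero-dimensional balance with a crude constant greenhouse offset."""
        for _ in range(steps):
            T_blackbody = ((1.0 - albedo(T_surface)) * S / 4.0 / sigma) ** 0.25
            T_surface = T_blackbody + greenhouse
        return T_surface

    print(f"starting warm: {equilibrate(288.0):.0f} K")  # stays near today's 288 K
    print(f"starting cold: {equilibrate(220.0):.0f} K")  # locks into a ~239 K snowball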

Since a higher temperature leads to more radiation and therefore to more cooling, and a lower temperature leads to less radiation, the Planck distribution ensures that there is always a negative feedback present in the climate system of the Earth. This is dubbed the Planck feedback, and it has already been mentioned in week 302 of “This Week’s Finds” here on Azimuth.

Now, the only variable that a zero-dimensional energy balance model calculates is the average temperature of the Earth. But does it even make sense to talk about the "average" temperature of the whole planet?

The Role of the Atmosphere and Rotation

It is always possible to "put a planet into a box", calculate the inbound energy flux, and compute from this a black body temperature T—given that the inbound energy per second is equal to the outgoing energy per second, which is the condition of thermodynamic equilibrium for this system. We will always be able to calculate this temperature T, but of course there may be very strange things going on inside the box that make it nonsense to talk about an average temperature. For all this calculation tells us, one side of the planet may be burning while the other side is freezing, for example.

For planets with slow rotation and no atmosphere, this actually happens! This applies to Mercury and the Earth’s moon, for example. In the case of the Earth itself, most of the heat energy is stored in the oceans, and the planet spins rather fast. This means that it is not completely implausible to talk about a ‘mean surface air temperature’. But it could be interesting to take into account the different energy input at different latitudes! Models that do that are called ‘one-dimensional’ energy balance models. And we should of course take a closer look at the heat and mass transfer processes of the Earth. But since this post is already rather long, I’ll skip that for now.

The Case of the Missing 33 Kelvins

The simple back-of-the-envelope calculation of the simplest possible climate model shows that there is a difference of roughly 33 kelvin between the black body temperature and the mean surface temperature on earth.

There is an explanation for this difference; I bet that you have already heard of it! But I’ll postpone that one for another post.

If you would like to learn more about climate models, you should check out this book:

• Kendal McGuffie and Ann Henderson-Sellers, A Climate Modelling Primer, 3rd edition, Wiley, New York, 2005.

Whenever I wrote “NASA” I was actually referring to this paper:

• Albert J. Juhasz, An analysis and procedure for determining space environmental sink temperatures with selected computational results, NASA/TM—2001-210063, 2001.

The pictures of black body radiation are taken from this book:

• Frank P. Incropera, David P. DeWitt, Theodore L. Bergman, Adrienne S. Lavine, Fundamentals of Heat and Mass Transfer, 6th edition, Wiley, New York, 2006.

Being Cool on Mercury

I want to paint it black. — The Rolling Stones

Last but not least: you can fly to Mercury without getting fried… but you have to paint your spaceship white in order to get a higher albedo.

Really? Well, it depends on the albedo of the whitest paint you can find: the one that reflects the Sun’s energy flux the most.

So, here’s a puzzle: what’s the whitest paint you can find? What’s its albedo? And how hot would a spaceship with this paint get, if it were in Mercury’s orbit?
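No spoilers here, but if you want to play with the puzzle, the same zero-dimensional balance works at Mercury’s orbit. Mercury sits at about 0.39 AU, so the solar flux there is roughly 1370/0.39^2, or around 9000 W/m^2. The sketch below keeps the 1/4 averaging (imagine a slowly tumbling ship whose whole surface shares the load) and ignores the difference between the paint’s shortwave and longwave behaviour—both assumptions you may want to question:

    sigma = 5.670400e-8           # W/(m^2 K^4), Stefan-Boltzmann constant
    S_mercury = 1370.0 / 0.39**2  # solar energy flux at Mercury's orbit, W/m^2

    def ship_temperature(albedo):
        """Equilibrium temperature of a tumbling grey ship with the given albedo."""
        return ((1.0 - albedo) * S_mercury / 4.0 / sigma) ** 0.25

    for a in (0.0, 0.3, 0.9):  # plug in the albedo of your favourite paint
        print(f"albedo {a:.1f}: about {ship_temperature(a):.0f} K")
    # a black ship sits near 450 K; a very white one gets down to about 250 K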

