This Week’s Finds (Week 317)

22 July, 2011

Anyone seriously interested in global warming needs to learn about the ‘ice ages’, or more technically ‘glacial periods’. After all, these are some of the most prominent natural variations in the Earth’s temperature. And they’re rather mysterious. They could be caused by changes in the Earth’s orbit called Milankovitch cycles… but the evidence is not completely compelling. I want to talk about that.

But to understand ice ages, the first thing we need to know is that the Earth hasn’t always had them! The Earth’s climate has been cooling and becoming more erratic for the last 35 million years, with full-blown glacial periods kicking in only about 1.8 million years ago.

So, this week let’s start with a little tour of the Earth’s climate history. Somewhat arbitrarily, let’s begin with the extinction of the dinosaurs about 65 million years ago. Here’s a graph of what the temperature has been doing since then:

Of course you should have lots of questions about how this graph was made, and how well we really know these ancient temperatures! But for now I’m just giving a quick overview—click on the graphs for more. In future weeks I should delve into more technical details.

The Paleocene Epoch, 65 – 55 million years ago

The Paleocene began with a bang, as an asteroid 10 kilometers across hit the Gulf of Mexico in an explosion two million times larger than the biggest nuclear weapon ever detonated. A megatsunami thousands of meters high ripped across the Atlantic, and molten quartz hurled high into the atmosphere ignited wildfires over the whole planet. A day to remember, for sure.

The Earth looked like this back then:

The Paleocene started out hot: the ocean was 10° to 15° Celsius warmer than today. Then it got even hotter! Besides a gradual temperature rise, at the very end of this epoch there was a drastic incident called the Paleocene-Eocene Thermal Maximum—that’s the spike labelled "PETM". Ocean surface temperatures worldwide shot up by 5-8°C for a few thousand years—but the Arctic heated up even more, to a balmy 23°C. This caused a severe die-off of little ocean critters called foraminifera, and a drastic change in the dominant mammal species. What caused it? That’s a good question, but right now I’m just giving you a quick tour.

The Eocene Epoch, 55 – 34 million years ago

During the Eocene, temperatures continued to rise until the so-called ‘Eocene Optimum’, about halfway through. Even at the start, the continents were close to where they are now—but the average annual temperature in arctic Canada and Siberia was a balmy 18 °C. The dominant plants up there were palm trees and cycads. Fossil monitor lizards (sort of like alligators) dating back to this era have been found in Svalbard, an Arctic archipelago north of Norway that’s now covered with ice all year. Antarctica was home to cool temperate forests, including beech trees and ferns. In particular, our Earth had no permanent polar ice caps!

Life back then was very different. The biggest member of the order Carnivora, which now includes dogs, cats, bears, and the like, was merely the size of a housecat. The largest predatory mammals were of another, now extinct order: the creodonts, like this one drawn by Dmitry Bogdanov:


But the biggest predator of all was not a mammal: it was
Diatryma, the 8-foot tall "terror bird", with a fearsome beak!


But it’s not as huge as it looks here, because horses were only half a meter high back then!

For more on this strange world and its end as the Earth cooled, see:

• Donald R. Prothero, The Eocene-Oligocene Transition: Paradise Lost, Critical Moments in Paleobiology and Earth History Series, Columbia University Press, New York, 1994.

The Oligocene Epoch, 34 – 24 million years ago

As the Eocene drew to a close, temperatures began to drop. And at the start of the Oligocene, they plummeted! Glaciers started forming in Antarctica. The growth of ice sheets led to a dropping of the sea level. Tropical jungles gave ground to cooler woodlands.

What caused this? That’s another good question. Some seek the answer in plate tectonics. The Oligocene is when India collided with Asia, throwing up the Himalayas and the vast Tibetan plateau. Some argue this led to a significant change in global weather patterns. But this is also the time when Australia and South America finally separated from Antarctica. Some argue that the formation of an ocean completely surrounding Antarctica led to the cooling weather patterns. After all, that lets cold water go round and round Antarctica without ever being driven up towards the equator.

The Miocene Epoch, 24 – 5.3 million years ago

Near the end of the Oligocene temperatures shot up again and the Antarctic thawed. Then it cooled, then it warmed again… but by the middle of the Miocene, temperatures began to drop more seriously, and glaciers again formed in Antarctica. It’s been frozen ever since. Why all these temperature fluctuations? That’s another good question.

The Miocene is when grasslands first became common. It’s sort of amazing that something we take so much for granted—grass—can be so new! But grasslands, as opposed to thicker forests and jungles, are characteristic of cooler climates. And as Nigel Calder has suggested, grasslands were crucial to the development of humans! Early hominids lived on the border between forests and grasslands. That has a lot to do with why we stand on our hind legs and have hands rather than paws. Much later, the agricultural revolution relied heavily on grasses like wheat, rice, corn, sorghum, rye, and millet. As we ate more of these plants, we drastically transformed them by breeding, and removed forests to grow more grasses. In return, the grasses drastically transformed us: the ability to stockpile surplus grains ended our hunter-gatherer lifestyle and gave rise to cities, kingdoms, and slave labor.

So, you could say we coevolved with grasses!

Indeed, the sequence of developments leading to humans came shortly after the rise of grasslands. Apes split off from monkeys 21 million years ago, in the Miocene. The genus Homo split off from other apes like gorillas and chimpanzees 5 million years ago, near the beginning of the Pliocene. The fully bipedal Homo erectus dates back to 1.9 million years ago, near the end of the Pliocene. But we’re getting ahead of ourselves…

The Pliocene Epoch, 5.3 – 1.8 million years ago

Starting around the Pliocene, the Earth’s temperature has been getting ever more jittery as it cools. Something is making the temperature unstable! And these fluctuations are not just getting more severe—they’re also lasting longer.

These temperature fluctuations are far from being neatly periodic, despite the optimistic labels on the above graph saying “41 kiloyear cycle” and “100 kiloyear cycle”. And beware: the data in the above graph was manipulated so it would synchronize with the Milankovitch cycles! Is that really justified? Do these cycles really cause the changes in the Earth’s climate? More good questions.

Here’s a graph that shows more clearly the noisy nature of the Earth’s climate in the last 7 million years:

You can tell this graph was made by a real paleontologist, because they like to put the present on the left instead of on the right.

And maybe you’re getting curious about this “δ¹⁸O benthic carbonate” business? Well, we can’t directly measure temperatures long ago by sticking a thermometer into an ancient rock! We need to use ‘climate proxies’: things we can measure now, that we believe are correlated with features of the climate long ago. δ¹⁸O measures the relative abundance of oxygen-18 (a less common, heavier isotope of oxygen) in carbonate deposits dug up from ancient ocean sediments, compared to a reference standard. These deposits were made by foraminifera and other tiny ocean critters. The amount of oxygen-18 in these deposits is used as a temperature proxy: the more of it there is, the colder we think it was. Why? That’s another good question.
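For the record, δ¹⁸O is conventionally defined as the deviation of the sample’s ¹⁸O/¹⁶O ratio from that of a reference standard, expressed in parts per thousand (which particular standard gets used—e.g. one for carbonates, another for ice—is a detail we won’t need here):

```latex
\delta^{18}\mathrm{O} \;=\;
\left(
  \frac{\bigl({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\bigr)_{\text{sample}}}
       {\bigl({}^{18}\mathrm{O}/{}^{16}\mathrm{O}\bigr)_{\text{standard}}}
  \;-\; 1
\right) \times 1000\ \text{‰}
```

So a positive δ¹⁸O means the sample is enriched in the heavy isotope relative to the standard, which—for benthic carbonates—we read as a colder climate.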

The Pleistocene Epoch, 1.8 – .01 million years ago

By the beginning of the Pleistocene, the Earth’s jerky temperature variations became full-fledged ‘glacial cycles’. In the last million years there have been about ten glacial cycles, though it’s hard to count them in any precise way—it’s like counting mountains in a mountain range:

Now the present is on the right again—but just to keep you on your toes, here up means cold, or at least more oxygen-18. I copied this graph from:

• Barry Saltzman, Dynamical Paleoclimatology: Generalized Theory of Global Climate Change, Academic Press, New York, 2002, fig. 1-4.
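The mountain-counting analogy can be made precise with a toy sketch. The numbers below are made up, not the real δ¹⁸O record: the point is just that counting ‘glacial cycles’ amounts to counting local maxima in a noisy series, and the answer depends on how big a bump has to be before we call it a mountain.

```python
# Count local maxima in a series that reach at least `min_height`.
def count_peaks(xs, min_height):
    return sum(
        1
        for i in range(1, len(xs) - 1)
        if xs[i] > xs[i - 1] and xs[i] >= xs[i + 1] and xs[i] >= min_height
    )

# A made-up jittery "climate record":
record = [0, 2, 1, 3, 0, 1, 0, 4, 2, 3, 1]

print(count_peaks(record, 1))  # 5 — counting every wiggle
print(count_peaks(record, 3))  # 3 — counting only the big mountains
```

Same data, different thresholds, different number of ‘cycles’—which is why counting glacial periods in any precise way is hard.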

We can get some more detail on the last four glacial periods from the change in the amount of deuterium in Vostok and EPICA ice core samples, and also changes in the amount of oxygen-18 in foraminifera (that’s the graph labelled ‘Ice Volume’):

As you can see here, the third-to-last glacial ended about 380,000 years ago. In the warm period that followed, the first signs of Homo neanderthalensis appear about 350,000 years ago, and the first Homo sapiens about 250,000 years ago.

Then, 200,000 years ago, came the second-to-last glacial period: the Wolstonian. This lasted until about 130,000 years ago. Then came a warm period called the Eemian, which lasted until about 110,000 years ago. During the Eemian, Neanderthalers hunted rhinos in Switzerland! It was a bit warmer than it is now, and sea levels may have been about 4-6 meters higher—worth thinking about, if you’re interested in the effects of global warming.

The last glacial period started around 110,000 years ago. This is called the Wisconsinan or Würm period, depending on location… but let’s just call it the last glacial period.

A lot happened during the last glacial period. Homo sapiens reached the Middle East 100,000 years ago, and arrived in central Asia 50 thousand years ago. The Neanderthalers died out in Asia around that time. They died out in Europe 35 thousand years ago, about when Homo sapiens got there. Anyone notice a pattern?

The oldest cave paintings are 32 thousand years old, and the oldest known calendars and flutes also date back to about this time. It’s striking how many radical innovations go back to about this time.

The glaciers reached their maximum extent around 26 to 18 thousand years ago. There were ice sheets down to the Great Lakes in America, and covering the British Isles, Scandinavia, and northern Germany. Much of Europe was tundra. And so much water was locked up in ice that the sea level was 120 meters lower than it is today!

Then things started to warm up. About 18 thousand years ago, Homo sapiens arrived in America. In Eurasia, people started cultivating plants and herding animals around this time.

There was, however, a shocking setback 12,700 years ago: the Younger Dryas episode, a cold period lasting about 1,300 years. We talked about this in "week304", so I won’t go into it again here.

The Younger Dryas ended about 11,500 years ago. The last glacial period, and with it the Pleistocene, officially ended 10,000 years ago. Or more precisely: 10,000 BP. Whenever I’ve been saying ‘years ago’, I really mean ‘Before Present’, where the ‘present’, you’ll be amused to learn, is officially set in 1950. Of course the precise definition of ‘the present’ doesn’t matter much for very ancient events, but it would be annoying if a thousand years from now we had to revise all the textbooks to say the Pleistocene ended 11,000 years ago. It’ll still be 10,000 BP.

(But if 1950 was the present, now it’s the future! This could explain why such weird science-fiction-type stuff is happening.)

The Holocene Epoch, .01 – 0 million years ago

As far as geology goes, the Holocene is a rather silly epoch, not like the rest. It’s just a name for the time since the last ice age ended. In the long run it’ll probably be called the Early Anthropocene, since it marks the start of truly massive impacts of Homo sapiens on the biosphere. We may have started killing off species in the late Pleistocene, but now we’re killing more—and changing the climate, perhaps even postponing the next glacial period.

Here’s what the temperature has been doing since 12000 BC:

Finally, here’s a closeup of a tiny sliver of time: the last 2000 years:

In both these graphs, different colored lines correspond to different studies; click for details. The biggish error bars give people lots to argue about, as you may have noticed. But right now I’m more interested in the big picture, and questions like these:

• Why was it so hot in the early Eocene?

• Why has it generally been cooling down ever since the Eocene?

• Why have temperature fluctuations been growing since the Miocene?

• What causes the glacial cycles?

For More

Next time we’ll get into a bit more detail. For now, here are some fun easy things to read.

This is a very enjoyable overview of climate change during the Holocene, and its effect on human civilization:

• Brian Fagan, The Long Summer, Basic Books, New York, 2005. Summary available at Azimuth Library.

These dig a bit further back:

• Chris Turney, Ice, Mud and Blood: Lessons from Climates Past, Macmillan, New York, 2008.

• Steven Mithen, After the Ice: A Global Human History 20,000-5000 BC, Harvard University Press, Cambridge, 2005.

I couldn’t stomach the style of the second one: it’s written as a narrative, with a character named Lubbock travelling through time. But a lot of people like it, and they say it’s well-researched.

For a history of how people discovered and learned about ice ages, try:

• Doug Macdougall, Frozen Earth: The Once and Future Story of Ice Ages, University of California Press, Berkeley, 2004.

For something a bit more technical, but still introductory, try:

• Richard W. Battarbee and Heather A. Binney, Natural Climate Variability and Global Warming: a Holocene Perspective, Wiley-Blackwell, Chichester, 2008.

To learn how that graph was made, and to read a good overview of the Earth’s climate throughout the Cenozoic, see:

• James Zachos, Mark Pagani, Lisa Sloan, Ellen Thomas and Katharina Billups, Trends, rhythms, and aberrations in global climate 65 Ma to present, Science 292 (27 April 2001), 686-693.

I got the beautiful maps illustrating continental drift from here:

• Christopher R. Scotese, Paleomap Project.

and I urge you to check out this website for a nice visual tour of the Earth’s history.

Finally, I thank Frederik de Roo and Nathan Urban for suggesting improvements to this issue. You can see what they said on the Azimuth Forum. If you join the forum, you too can help write This Week’s Finds! I could really use help from earth scientists, biologists, paleontologists and folks like that: I’m okay at math and physics, but I’m trying to broaden the scope now.


We are at the very beginning of time for the human race. It is not unreasonable that we grapple with problems. But there are tens of thousands of years in the future. Our responsibility is to do what we can, learn what we can, improve the solutions, and pass them on. – Richard Feynman


This Week’s Finds (Week 316)

17 July, 2011

Here on This Week’s Finds I’ve been talking about the future and what it might hold. But any vision of the future that ignores biotechnology is radically incomplete. Just look at this week’s news! They’ve ‘hacked the genome’:

• Ed Yong, Hacking the genome with a MAGE and a CAGE, Discover, 14 July 2011.

Or maybe they’ve ‘hijacked the genetic code’:

• Nicholas Wade, Genetic code of E. coli is hijacked by biologists, New York Times, 14 July 2011.

What exactly have they done? These articles explain it quite well… but it’s so cool I can’t resist talking about it.

Basically, some scientists from Harvard and MIT have figured out how to go through the whole genome of a bacterium and change every occurrence of one codon to some other codon. It’s a bit like the ‘global search and replace’ feature of a word processor. You know: that trick where you can take a document and replace one word with another every place it appears.
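The search-and-replace analogy can be made concrete with a toy sketch. To be clear, this is just an illustration of the idea, not the actual MAGE/CAGE procedure, which edits living cells rather than strings; the helper function and the sequence here are made up. Note one subtlety the word-processor analogy hides: a naive string replace could also hit a TAG that straddles two codons, so we have to walk the sequence in steps of three.

```python
# Toy sketch: replace every in-frame amber stop codon (TAG in the DNA)
# with opal (TGA), the way a word processor does search-and-replace.
def replace_codon(seq, old="TAG", new="TGA"):
    """Replace every in-frame occurrence of codon `old` with `new`."""
    codons = [seq[i:i + 3] for i in range(0, len(seq), 3)]
    return "".join(new if codon == old else codon for codon in codons)

gene = "ATGGCCTAG"          # Met, Ala, then the amber stop codon
print(replace_codon(gene))  # ATGGCCTGA — same protein, opal stop instead
```

In the real experiment the replacement has to be carried out on DNA inside living bacteria, fragment by fragment—which is why it’s so much harder than this nine-line sketch.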

To understand this better, it helps to know a tiny bit about the genetic code. You may know this stuff, but let’s quickly review.

DNA is a double-stranded helix bridged by pairs of bases, which come in 4 kinds:

adenine (A)
thymine (T)
cytosine (C)
guanine (G)

Because of how they’re shaped, A can only connect to T:

while C can only connect to G:

So, all the information in the DNA is contained in the list of bases down either side of the helix. You can think of it as a long string of ‘letters’, like this:

ATCATTCAGCTTATGC…

This long string consists of many sections, which are the instructions to make different proteins. In the first step of the protein manufacture process, a section of this string is copied to a molecule called ‘messenger RNA’. In this stage, each T is replaced by uracil, or U. The other three bases stay the same.
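As a piece of string-processing, this step is trivial—here is a sketch of just the letter substitution (not, of course, how the cell actually does it):

```python
# Transcription, viewed purely as a string operation:
# the mRNA keeps A, C and G, but writes U wherever the DNA has T.
def transcribe(dna):
    return dna.replace("T", "U")

print(transcribe("ATCATTCAGCTTATGC"))  # AUCAUUCAGCUUAUGC
```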

Here’s some messenger RNA:


You’ll note that the bases come in groups of three. Each group is called a ‘codon’, because it serves as the code for a specific amino acid. A protein is built as a string of amino acids, which then curls up into a complicated shape.

Here’s how the genetic code works:

The three-letter names like Phe and Leu are abbreviations for amino acids: phenylalanine, leucine and so on.

While there are 4³ = 64 codons, they code for only 20 amino acids. So, typically more than one codon codes for the same amino acid. If you look at the chart, you’ll see one exception is methionine, which is encoded only by AUG. AUG is also the ‘start codon’, which tells the cell where a protein starts. So, methionine shows up at the start of every protein, at least at first. It’s usually removed later in the protein manufacture process.

There are also three ‘stop codons’, which mark the end of a protein. They have cute names:

amber: UAG
ochre: UAA
opal: UGA

UAG was named after Harris Bernstein, whose last name means ‘amber’ in German. The other two names were just a way of continuing the joke.
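Putting the pieces together, here is a minimal sketch of translation as just described. The lookup table is only a fragment of the full 64-entry genetic code, with codons chosen for the toy example:

```python
# A fragment of the genetic code, just enough for the example below.
CODE = {
    "AUG": "Met", "UUU": "Phe", "GCC": "Ala",
    "UAG": "STOP",  # amber
    "UAA": "STOP",  # ochre
    "UGA": "STOP",  # opal
}

def translate(mrna):
    """Read codons three bases at a time until a stop codon appears."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        amino_acid = CODE[mrna[i:i + 3]]
        if amino_acid == "STOP":
            break
        protein.append(amino_acid)
    return protein

# Start codon, two more amino acids, then the amber stop:
print(translate("AUGUUUGCCUAG"))  # ['Met', 'Phe', 'Ala']
```

Since amber and opal both map to STOP, swapping one for the other leaves every protein unchanged—which is exactly why the genome-wide replacement described next has no visible effect.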

And now we’re ready to understand how a team of scientists led by Farren J. Isaacs and George M. Church are ‘hacking the genome’. They’re going through the DNA of the common E. coli bacterium and replacing every instance of amber with opal!

This is a lot more work than the word processor analogy suggests. They need to break the DNA into lots of fragments, change amber to opal in these fragments, and put them back together again. Read Ed Yong’s article for more.

So, they’re not actually done yet.

But when they’re done, they’ll have an E. coli bacterium with no amber codons, just opal. But it’ll act just the same as ever, since amber and opal are both stop codons.

That’s a lot of work for no visible effect. What’s the point?

The point is that they’ll have freed up the codon amber for other purposes! This will let them do various further tricks.

First, with some work, they could make amber code for a new, unnatural amino acid that’s not one of the usual 20. This sounds like a lot of work, since it requires tinkering with the cell’s mechanisms for translating codons into amino acids: specifically, its set of transfer RNA and synthetase molecules. But this has already been done! Back in 1990, Jennifer Normanly found a viable mutant strain of E. coli that ‘reads through’ the amber codon, not stopping the protein there as it should. People have taken advantage of this to create E. coli where amber codes for a new amino acid:

• Nina Mejlhede, Peter E. Nielsen, and Michael Ibba, Adding new meanings to the genetic code, Nature Biotechnology 19 (2001), 532-533.

But I guess getting an E. coli that’s completely free of amber codons would let us put amber codons only where we want them, getting better control of the situation.

Second, tweaking the genetic code this way could yield a strain of E. coli that’s unable to ‘breed’ with the normal kind. This could increase the safety of genetic engineering. Of course bacteria are asexual, so they don’t precisely ‘breed’. But they do something similar: they exchange genes with each other! Three of the most popular ways are:

conjugation: two bacteria come into contact and pass DNA from one to the other.

transformation: a bacterium produces a loop of DNA called a plasmid, which floats around and then enters another bacterium.

transduction: a virus carries DNA from one bacterium to another.

Thanks to these tricks, drug resistance and other traits can hop from one species of bug to another. So, for the sake of safe experiments, it would be nice to have a strain of bacteria whose genetic code was so different from others that it couldn’t share DNA.

And third, a bacterium with a modified genetic code could be resistant to viruses! I hadn’t known it, but the biotech firm Genzyme was shut down for three months and lost millions of dollars when its bacteria were hit by a virus.

This third application reminds me of a really spooky story by Greg Egan, called “The Moat”. In it, a detective discovers evidence that some people have managed to alter their genetic code. The big worry is that they could then set loose a virus that would kill everyone in the world except them.

That’s a scary idea, and one that just became a bit more practical… though so far only for E. coli, not H. sapiens.

So, I’ve got some questions for the biologists out there.

A virus that attacks bacteria is called a bacteriophage—or affectionately, a ‘phage’. Here’s a picture of one:

Isn’t it cute?

Whoops—that wasn’t one of the questions. Here are my questions for biologists:

• To what extent are E. coli populations kept under control by phages, or perhaps somehow by other viruses?

• If we released a strain of virus-resistant E. coli into the wild, could it take over, thanks to this advantage?

• What could the effects be? For example, if the E. coli in my gut became virus-resistant, would their populations grow enough to make me notice?

and more generally:

• What are some of the coolest possible applications of this new MAGE/CAGE technology?

Also, on a more technical note:

• What did people actually do with that strain of E. coli that ‘reads through’ amber?

• How could such a strain be viable, anyway? Does it mostly avoid using the amber codon, or does it somehow survive having a lot of big proteins where a normal E. coli would have smaller ones?

Finally, I can’t resist mentioning something amazing I just read. I said that our body uses 20 amino acids, and that ‘opal’ serves as a stop codon. But neither of these is the whole truth! Sometimes opal codes for a 21st amino acid, called selenocysteine. And this one is different from the rest. Most amino acids contain carbon, hydrogen, oxygen and nitrogen, and cysteine contains sulfur, but selenocysteine contains… you guessed it… selenium!

Selenium is right below sulfur on the periodic table, so it’s sort of similar. If you eat too much selenium, your breath starts smelling like garlic and your hair falls out. Horses have died from the stuff. But it’s also an essential trace element: you have about 15 milligrams in your body. We use it in various proteins, which are called… you guessed it… selenoproteins!

So, a few more questions:

• Do humans use selenoproteins containing selenocysteine?

• How does our body tell when opal is getting used to code for selenocysteine, and when it’s getting used as a stop codon?

• Are there any cool theories about how life evolved to use selenium, and how the opal codon got hijacked for this secondary purpose?

Finally, here’s the new paper that all the fuss is about. It’s not free, but you can read the abstract for free:

• Farren J. Isaacs, Peter A. Carr, Harris H. Wang, Marc J. Lajoie, Bram Sterling, Laurens Kraal, Andrew C. Tolonen, Tara A. Gianoulis, Daniel B. Goodman, Nikos B. Reppas, Christopher J. Emig, Duhee Bang, Samuel J. Hwang, Michael C. Jewett, Joseph M. Jacobson, and George M. Church, Precise manipulation of chromosomes in vivo enables genome-wide codon replacement, Science 333 (15 July 2011), 348-353.


Pessimists should be reminded that part of their pessimism is an inability to imagine the creative ideas of the future. – Brian Eno


This Week’s Finds (Week 315)

27 June, 2011

This is the second and final part of my interview with Thomas Fischbacher. We’re talking about sustainable agriculture, and he was just about to discuss the role of paying attention to flows.

JB: So, tell us about flows.

TF: For natural systems, some of the most important flows are those of energy, water, mineral nutrients, and biomass. Now, while they are physically real, and keep natural systems going, we should remind ourselves that nature by and large does not make high level decisions to orchestrate them. So, flows arise due to processes in nature, but nature ‘works’ without being consciously aware of them. (Still, there are mechanisms such as evolutionary pressure that ensure that the flow networks of natural ecosystems work—those assemblies that were non-viable in the long term did not make it.)

Hence, flows are above everything else a useful conceptual framework—a mental tool devised by us for us—that helps us to make sense of an otherwise extremely complex and confusing natural world. The nice thing about flows is that they reduce complexity by abstracting away details when we do not want to focus on them—such as which particular species are involved in the calcium ion economy, say. Still, they retain a lot of important information, quite unlike some models used by economists that actually guide—or misguide—our present decision-making. They tell us a lot about key processes and longer term behaviour—in particular, if something needs to be corrected.

Sustainability is a complex subject that links to many different aspects of human experience—and of course the non-human world around us. When confronted with such a subject, my approach is to start by asking: ‘what I am most certain about’, and use these key insights as ‘anchors’ that set the scene. Everything else must respect these insights. Occasionally, some surprising new insight forces me to reevaluate some fundamental assumptions, and repaint part of the picture. But that’s life—that’s how we learn.

Very often, I find that those aspects which are both useful to obtain deeper insights and at the same time accessible to us are related to flows.

JB: Can you give an example?

TF: Okay, here’s another puzzle. What is the largest flow of solids induced by civilization?

JB: Umm… maybe the burning of fossil fuels, passing carbon into the atmosphere?

TF: I am by now fairly sure that the answer is: the unintentional export of topsoil from the land into the sea by wind and water erosion, due to agriculture. According to Brady & Weil, around the year 2000, the U.S. annually ‘exported’ about 4×10¹² kilograms of topsoil to the sea. That’s roughly three cubic kilometers, taking a reasonable estimate for the density of humus.

JB: Okay. In 2007, the U.S. burnt 1.6×10¹² kilograms of carbon. So, that’s comparable.

TF: Yes. When I cross check my number combining data from the NRCS on average erosion rates and from the CIA World Factbook on cultivated land area, I get a result that is within the same ballpark, so it seems to make sense. In comparison, total U.S. exports of economic goods in 2005 were 4.89×10¹¹ kilograms: about an order of magnitude less, according to statistics from the Federal Highway Administration.
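As a sanity check on the mass-to-volume conversion above (the density figure here is my assumption; the interview only says ‘a reasonable estimate for the density of humus’):

```python
# Back-of-envelope check: does 4×10¹² kg of topsoil really come to
# roughly three cubic kilometers?
mass_kg = 4e12        # annual U.S. topsoil loss, per Brady & Weil
density = 1.3e3       # kg/m³ — assumed density of humus-rich topsoil

volume_km3 = mass_kg / density / 1e9   # 1 km³ = 10⁹ m³
print(round(volume_km3, 1))            # 3.1 — roughly three cubic kilometers
```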

If we look at present soil degradation rates alone, it is patently clear that we see major changes ahead. In the long term, we just cannot hope to keep on feeding the population using methods that keep on rapidly destroying fertility. So, we pretty much know that something will happen there. (Sounds obvious, but alas, thinking of a number of discussions I had with some economists, I must say that, sadly, it is far from being so.)

What actually will happen mostly depends on how wisely we act. The possibilities range from nuclear war to a mostly smooth swift transition to fertility-building food production systems that also take large amounts of CO2 out of the atmosphere and convert it to soil humus. I am, of course, much in favour of scenarios close to the latter one, but that won’t happen unless we put in some effort—first and foremost, to educate people about how it can be done.

Flow analysis can be an extremely powerful tool for diagnosis, but its utility goes far beyond this. When we design systems, paying attention to how we design the flow networks of energy, water, materials, nutrients, etc., often makes a world of a difference.

Nature is a powerful teacher here: in a forest, there is no ‘waste’, as one system’s output is another system’s input. What else is ‘waste’ but an accumulation of unused output? So, ‘waste’ is an indication of an output mismatch problem. Likewise, if a system’s input is not in the right form, we have to pre-process it, hence do work, hence use energy. Therefore, if a process or system continually requires excessive amounts of energy (as many of our present designs do), this may well be an indication of a design problem—and could be related to an input mismatch.

Also, the flow networks of natural systems usually show both extremely high recycling rates and a lot of multi-functionality, which provides resilience. Every species provides its own portfolio of services to the assembly, which may include pest population control, creating habitat for other species, food, accumulating important nutrients, ‘waste’ transformation, and so on. No element has a single objective, in contrast to how we humans by and large like to engineer our systems. Each important function is covered by more than one element. Quite unlike many of our past approaches, design along such principles can have long-term viability. Nature works. So, we clearly can learn from studying nature’s networks and adopting some principles for our own designs.

Designing for sustainability with, around, and inspired by natural systems is an interesting intellectual challenge, much like solving a jigsaw puzzle. We cannot simultaneously comprehend the totality of all interactions and relations between adjacent pieces as we build it, but we keep on discovering clues by closely studying different aspects: form, colour, pattern. If we are on the right track, and one clue tells us how something should fit, we will discover that other aspects will fit as well. If we made a mistake, we need to apply force to maintain it and hammer other pieces into place—and unless we correct that mistake, we will need ever more brutal interventions to artificially stabilize the problems which are mere consequences of the original mistake. Think of using nuclear weapons to seal off a spilling oil well, drilled in deep water because we had used up all the easily accessible high-quality fuels. One mistake begets another.

There is a reason why jigsaw puzzles ‘work’: they were created that way. There is also a reason why the dance of natural systems ‘works’: coevolution. What happens when we run out of steam to stabilize poor designs (i.e. in an energy crisis)? We, as a society, will be forced to confront our past arrogance and pay close attention to resolving the design mistakes we so far always tried to talk away. That’s something I’d call ‘true progress’.

Actually, it’s quite evident now: many of our ‘problems’ are rather just symptoms of more fundamental problems. But as we do not track these down to the actual root, we keep on expending ever more energy by stacking palliatives on top of one another. Growing corn as a biofuel in a process that both requires a lot of external energy input and keeps on degrading soil fertility is a nice example. Now, if we look closer, we find numerous further, superficially unrelated, problems that should make us ask the question: "Did we assemble this part of the puzzle correctly? Is this approach really such a good idea? What else could we do instead? What other solutions would suggest themselves if we paid attention to the hints given by nature?" But we don’t do that. It’s almost as if we were proud to be thick.

JB: How would designing with flows in mind work?

TF: First, we have to be clear about the boundaries of our domain of influence. Resources will at some point enter our domain of influence and at some point leave it again. This certainly holds for a piece of land on which we would like to implement sustainable food production where one of the most important flows is that of water. But it also holds for a household or village economy, where an important flow through the system is that of purchase power—i.e. money (but in the wider sense). As resources percolate through a system, their utility generally degrades—entropy at work. Water high up in the landscape has more potential uses than water further down. So, we can derive a guiding principle for design: capture resources as early as possible, release them as late as possible, and see that you guide them in such a way that their natural drive to go downhill makes them perform many useful duties in between. Considering water flowing over a piece of land, this would suggest setting up rainwater catchment systems high up in the landscape. This water then can serve many useful purposes: there certainly are agricultural/silvicultural and domestic uses, maybe even aquaculture, potentially small-scale hydropower (say, in the 10-100 watts range), and possibly fire control.
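The "capture early, release late" principle above can be put in the form of a toy calculation. The stage names and their ordering below are a hypothetical illustration, not figures from the interview: water entering the system higher up can serve every use at or below its entry point, so an earlier capture point strictly increases the number of duties each litre performs.

```python
# Toy model of "capture resources as early as possible, release them as late
# as possible": water captured at stage i (0 = highest point in the landscape)
# can serve stages i..n-1 on its way downhill. Stage list is hypothetical.
STAGES = ["domestic use", "aquaculture", "micro-hydro", "irrigation", "fire pond"]

def duties_performed(capture_stage, release_stage=None):
    """Count the useful duties between capture and release (default: bottom)."""
    if release_stage is None:
        release_stage = len(STAGES)
    return release_stage - capture_stage

print(duties_performed(0))  # captured at the top: 5 duties
print(duties_performed(3))  # captured low in the landscape: only 2 duties
```

Trivial as it is, the sketch makes the design consequence visible: moving the catchment one stage uphill never reduces, and usually increases, the work the same water does.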

JB: When I was a kid, I used to break lots of things. I guess lots of kids do. But then I started paying attention to why I broke things, and I discovered there were two main reasons. First, I might be distracted: paying attention to one thing while doing another. Second, I might be trying to overcome a problem by force instead of by slowing down and thinking about it. If I was trying to untangle a complicated knot, I might get frustrated and just pull on it… and rip the string.

I think that as a culture we make both these mistakes quite often. It sounds like part of what you’re saying is: "Pay more attention to what’s going on, and when you encounter problems, slow down and think about their origin a bit—don’t just try to bully your way through them."

But the tool of measuring flows is a nice way to organize this thought process. When you first told me about ‘input mismatch problems’ and ‘output mismatch problems’, it came as a real revelation! And I’ve been thinking about them a lot, and I want to keep doing that.

One thing I noticed is that problems tend to come in pairs. When the output of one system doesn’t fit nicely into the input of the next, we see two problems. First, ‘waste’ on the output side. Second, ‘deficiency’ on the input side. Sometimes it’s obvious that these are two aspects of the same problem. But sometimes we fail to see it.

For example, a while ago some ground squirrels chewed a hole in an irrigation pipe in our yard. Of course that’s our punishment for using too much water in a naturally dry environment, but look at the two problems it created. One: big gushers of water shooting out of the hole whenever that irrigation pipe was used, which caused all sorts of further problems. Two: not enough water reaching the plants that system was supposed to be irrigating. Waste on one side, deficiency on the other.

That’s obvious, easy to see, and easy to fix: first plug the hole, then think carefully about why we’re using so much water in the first place. We’d already replaced our lawn with plants that use less water, but maybe we can do better.

But here’s a bigger problem that’s harder to fix. Huge amounts of fertilizer are being used on the cornfields of the midwestern United States. With the agricultural techniques they’re using, there’s a constant deficiency of nitrogen and phosphorus, so it’s supplied artificially. The figures I’ve seen show that about 30% of the energy used in US agriculture goes into making fertilizers. So, it’s been said that we’re ‘eating oil’—though technically, a lot of nitrogen fertilizer is made using natural gas. Anyway: a huge deficiency problem.

On the other hand, where is all this fertilizer going? In the midwestern United States, a lot of it winds up washing down the Mississippi River. And as a result, there are enormous ‘dead zones’ in the Gulf of Mexico. The fertilizer feeds algae, the algae die and decay, and the decay process takes oxygen out of the water, killing off any life that needs oxygen. These dead zones range from 15 to 18 thousand square kilometers, and they’re in one of the prime fishing spots for the US. So: a huge waste problem.

But they’re the same problem!

It reminds me of the old joke about a guy who was trying to button his shirt. "There are two things wrong with this shirt! First, it has an extra button on top. Second, it has an extra buttonhole on bottom!"

TF: Bill Mollison said it in a quite humorous-yet-sarcastic way in this episode of the Global Gardener movie:

• Bill Mollison, Urban permaculture strategies – part 1, YouTube.

While the potential to grow a large amount of calories in cities may be limited, growing fruit and vegetables nevertheless does make sense for multiple reasons. One of them is that many things that previously went into the garbage bin now have a much more appropriate place to go—such as the compost heap. Many urbanites who take up gardening are quite amazed when they realize how much of their household waste actually always ‘wanted’ to end up in a garden.

JB: Indeed. After I bought a compost bin, the amount of trash I threw out dropped dramatically. And instead of feeling vaguely guilty as I threw orange peels into the trash where they’d be mummified in a plastic bag in a landfill, I could feel vaguely virtuous as I watched them gradually turn into soil. It doesn’t take as long as you might think. And it comes as a bit of a revelation at first: "Oh, so that’s how we get soil."

TF: Perhaps the biggest problem I see with a mostly non-gardening society is that people without even the slightest first-hand experience of growing food are expected to make up their minds about very important food-related questions and contribute to the democratic decision-making process. Again, I must emphasize that whoever does not consciously invest some effort into getting at least some minimal first-hand experience to improve their judgment will be easy prey for pied pipers. And by and large, society is not aware of how badly it is lied to when it comes to food.

But back to flows. Every few years or so, I stumble upon a jaw-dropping idea, or a principle, that makes me realize that it is so general and powerful that, really, the limits of what it can be used for are the limits of my imagination and creativity. I recently had such a revelation with the PSLQ integer relation algorithm. Using flows as a mental tool for analysis and design was another such case. All of a sudden, a lot made sense, and could be analyzed with ease.

There always is, of course, the ‘man with a hammer problem’—if you are very fond of a new and shiny hammer, everything will look like a nail. I’ve also heard that expressed as ‘an idea is a very dangerous thing if it is the only one you have’.

So, while keeping this in mind, now that we got an idea about flows in nature, let us ask: "how can we abuse these concepts?" Mathematicians prefer the term ‘abstraction’, but it’s fun either way. So, let’s talk about the flow of money in economies. What is money? Essentially, it is just a book-keeping device invented to keep track of favours owed by society to individuals and vice versa. What function does it have? It works as ‘grease’, facilitating trade.

So, suppose you are the mayor of a small village. One of your important objectives is of course prosperity for your villagers. Your village trades with and hence is linked to an external economy, and just as goods and services are exchanged, so is money. So, at some point, purchase power (in the form of money) enters your domain of influence, and at some point, it will leave it again. What you want it to do is to facilitate many different economic activities—so you want to ensure it circulates within the village as long as possible. You should pay some attention to situations where money accumulates—for everything that accumulates without being put to good use is a form of ‘waste’, hence pollution. So, this naturally leads us to two ideas: (a) What incentives can you find to keep money circulating within the village? (There are many answers, limited only by creativity.) And (b) what can you do to constrain the outflow? If the outlet is made smaller, system outflow will match inflow at a higher internal pressure, hence a higher level of resource availability within the system.
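The hydraulic image in point (b), where outflow matches inflow at a higher internal "pressure", can be sketched with a minimal linear stock-flow model. This is my own toy formulation with made-up numbers, not something from the interview: if money leaks out of the village at a rate proportional to the stock circulating inside, shrinking the leak coefficient raises the equilibrium stock that the same inflow can sustain.

```python
def steady_state_stock(inflow, leak_rate, steps=10_000, dt=0.1):
    """Evolve dM/dt = inflow - leak_rate * M by Euler steps until equilibrium.

    At the fixed point, outflow (leak_rate * M) exactly matches inflow,
    so M* = inflow / leak_rate.
    """
    m = 0.0
    for _ in range(steps):
        m += dt * (inflow - leak_rate * m)
    return m

# Same inflow of purchase power, two different 'outlet sizes':
leaky = steady_state_stock(inflow=100.0, leak_rate=0.5)  # money leaves quickly
tight = steady_state_stock(inflow=100.0, leak_rate=0.1)  # money circulates longer

print(leaky)  # ~200:  equilibrium M* = 100 / 0.5
print(tight)  # ~1000: same inflow, five times the internal 'pressure'
```

The design lever is the leak coefficient, not the inflow: a village that keeps money circulating internally is, in this picture, richer at the same external income.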

This leads us to an idea no school will ever tell you about—for pretty much the same reason why no state-run school will ever teach how to plan and successfully conduct a revolution. The road to prosperity is to systematically reduce your ‘Need To Earn’—i.e. the best way to spend money is to set up systems that allow you to keep more money in your pocket. A frequent misconception that keeps on arising when I mention this is that some people think this idea is about austerity. Quite to the contrary. You can make as much money as you want—but one thing you should keep in mind is that if you hold the trump card of being able, at any time, to just disconnect from most of the economy and get by with almost no money at all for extended periods, you are in a far better position to take risks and grasp exceptional opportunities as they arise than someone who has committed himself to having to earn a couple of thousand pounds a month.

The problem is not with earning a lot of money. The problem is with being forced to continually make a lot of money. We readily manage to identify this as a key problem of drug addicts, but fail to see the same mechanism at work in mainstream society. A key assumption in economic theory is that exchange is voluntary. But how well is that assumption satisfied in practice if such forces are in place?

Now, what would happen if people started to get serious about investing the money they earn to systematically reduce their need to earn money in the future? Some decisions, such as getting a photovoltaic array, may have ‘payback times’ in the range of one or two decades, but I consider this ‘payback time’ concept a self-propagating flawed idea. If something lets me depend on less external input now, this reduction of vulnerability also has to be taken into account—and ‘payback times’ do not do that. So—if most people did such things, i.e. made strategic decisions to set up systems so that their essential needs can be satisfied with minimal effort, and especially with minimal money, this would put a lot of political power back into their hands. A number of self-proclaimed ‘leaders’ certainly don’t like the idea of people being in a position to just ignore their orders. Also note that this would have a funny effect on the GDP—ever heard of ‘imputations’?

JB: No, what are those?

TF: It’s a funny thing, perhaps best explained by an example. If you fully own your own house, then you don’t pay rent. But for the purpose of determining the GDP, you are regarded as paying as much rent to yourself (!) as you would get if you rented out the house. See:

Imputed rent, Wikipedia.

Evidently, if people make a dedicated effort at the household level to become less dependent on the economy by providing a much larger share of their essential needs (housing, food, water, energy, etc.) themselves, this amounts to investing money in order to need less money in the future. If many people did this systematically, it would superficially have a devastating effect on the GDP—but it would bring about a much more resilient (because less dependent) society.

The problem is that the GDP really is not an appropriate measure for progress. But obviously, those who publish these figures know that as well, hence the need to fudge the result with imputations. So, a simple conclusion is: whenever there is an opportunity to invest money in a way that makes you less dependent on the economy in the future, that might be well worth a closer look. Especially if you get the idea that, if many people did this, the state would likely have to come up with other imputations to make the impact on the GDP disappear!

JB: That’s a nice thought. I tend to worry about how the GDP and other economic indicators warp our view of what’s right to do. But you’re saying that if people can get up the nerve to do what’s right, regardless, the economic indicators may just take care of themselves.

TF: We have to remember that sustainability is about systems that are viable in the long run. Environmental sustainability is just one important aspect. But you won’t go on for long doing what you do unless it also has economic long-term viability. Hence, we are dealing with multi-dimensional design constraints. And just as flow network analysis is useful to get an idea about the environmental context, the same holds for the economic context. It’s just that the resources are slightly different ones—money, labour, raw materials, etc. These thoughts can be carried much further, but I find it quite worthwhile to instead look at an example where someone did indeed design a successful system along such principles. In the UK, the first example that would come to my mind is Hill Holt Wood, because the founding director, Nigel Lowthrop, did do so many things right. I have high admiration for his work.

JB: When it comes to design of sustainable systems, you also seem to be a big fan of Bill Mollison and some of the ‘permaculture’ movement that he started. Could you say a bit about that? Why is it important?

TF: The primary reason why permaculture matters is that it has demonstrated some stunning successes with important issues such as land rehabilitation.

‘Permaculture’ means a lot of different things to a lot of different people. Curiously, where I grew up, the term is somewhat known, but mostly associated with an Austrian farmer, not Bill Mollison. And I’ve seen some physicists who first had come into contact with it through David Holmgren’s book revise their opinions when they later read Mollison. Occasionally, some early adopters did not really understand the scientific aspects of it and tried to link it with some strange personal beliefs of the sort Martin Gardner discussed in Fads and Fallacies in the Name of Science. And so on. So, before we discuss permaculture, I have to point out that one might sometimes have to take a close look to evaluate it. A number of things claiming to be ‘permaculture’ actually are not.

When I started—some time ago—to make a systematic effort to get a useful overview over the structure of our massive sustainability-related problems, a key question to me always was: "what should I do?"—and a key conviction was: "someone must have had some good ideas about all this already." This led me to skip some well-known "environmentalist" books many people had read, which are devoid of any discussion of our options and potential solutions, and to do a lot of detective work instead.

In doing so, I travelled, talked to a number of people, read a lot of books and manuscripts, did a number of my own experiments, cross-checked things against order-of-magnitude guesstimates, against the research literature, and so on. At one point—I think it was when I took a closer look into the work of the laureates of the ‘Right Livelihood award’ (sometimes called the ‘Alternative Nobel Prize’)—I came across Bill Mollison’s work. And it struck a chord.

Back in the 90s, when mad cow disease was a big topic in Europe, I spent quite some time pondering questions such as: "what’s wrong with the way farming works these days?" When I studied Bill Mollison’s work, I immediately recognized a number of insights I had independently arrived at back then—and yet he went so much further, talking about a whole universe of issues I was still mostly unaware of at that time. So, an inner voice said to me: "if you take a close look at what that guy already did, that might save you a lot of time". Now, Mollison did get some things wrong, but I still think taking a close look at what he has to say is a very effective way to get a big-picture overview over what we can achieve and what needs urgent attention. I think it greatly helps (at least for me) that he comes from a scientific background. Before he decided to quit academia in 1978 and work full time on developing permaculture, he was a lecturer at the University of Tasmania, in Hobart.

JB: But what actually is ‘permaculture’?

TF: That depends a lot on who you ask, but I like to think about permaculture as if it were an animal. The ‘skeleton’ is a framework with cleverly designed ‘static properties’ that holds the ‘flesh’ together in a way so that it can achieve things. The actual ‘flesh’ is provided by solutions to specific problems with long term viability being a key requirement. But it is more than just a mere semi-amorphous collage of solutions, due to its skeleton. The backbone of this animal is a very simple (deliberately so) yet functional (this is important) core ethics which one could regard as being the least common denominator of values considered as essential across pretty much all cultures. This gives it stability. Other bones that make this animal walk and talk are related to key principles. And these principles are mostly just applied common sense.

For example, it is pretty clear that as non-renewable resources keep on becoming more and more scarce, we will have to seriously ponder the question: what can we grow that can replace them? If our design constraints change, so does our engineering—should (for one reason or another) some particular resource such as steel become much more expensive than it is today, we would of course look into the question whether, say, bamboo may be a viable alternative for some applications. And that is not as exotic an idea as it may sound these days.

So, unquestionably, the true solutions to our problems will be a lot about growing things. But growing things in the way that our current-day agriculture mostly does it seems highly suspicious, as this keeps on destroying soil. So, evidently, we will have to think less along the lines of farming and more along the lines of gardening. Also, we must not fool ourselves about a key issue: most people on this planet are poor, hence for an approach to have wide impact, it must be accessible to the poor. Techniques that revolve around gardening often are.

Next, isn’t waiting for the big (hence, capital intensive) ‘technological miracle fix’ conspicuously similar to the concept of a ‘pie in the sky’? If we had any sense, shouldn’t we consider solving today’s problems with today’s solutions?

If one can distinguish between permaculture as it stands and attempts by some people who are interested in it to re-mold it so that it becomes ‘the permaculture part of permaculture plus Anthroposophy/Alchemy/Biodynamics/Dianetics/Emergy/Manifestation/New Age beliefs/whatever’, there is a lot of common sense in permaculture—the sort of ‘a practical gardener’s common sense’. In this framework, there is a place for both modern scientific methods and ancient tribal wisdom. I hence consider it a healthy antidote to both fanatical worship of ‘the almighty goddess of technological progress’—or any sort of fanatical worship for that matter—as well as to funny superstitious beliefs.

There are some things in the permaculture world, however, where I would love to see some change. For example, it would be great if people who know how to get things done paid more attention to closely keeping records of what they do to solve particular problems and to making these widely accessible. Solutions of the ‘it worked great for a friend of a friend’ sort do us a big disservice. Also, there are a number of ideas that easily get represented in overly simplistic form—such as ‘edge is good’—where one better should retain some healthy skepticism.

JB: Well, I’m going to keep on pressing you: what is permaculture… according to you? Can you list some of the key principles?

TF: That question is much easier to answer. The way I see it, permaculture is a design-oriented approach towards systematically reducing the total effort that has to be expended (in particular, in the long run) in order to keep society going and allow people to live satisfying lives. Here, ‘effort’ includes both work that is done by non-renewable resources (in particular fossil fuels), as well as human labour. So, permaculture is not about returning to pre-industrial agricultural drudgery with an extremely low degree of specialization, but rather about combining modern science with traditional wisdom to find low-effort solutions to essential problems. In that sense, it is quite generic and deals with issues ranging from food production to water supply to energy efficient housing and transport solutions.

To give one specific example: land management practices that reduce the organic matter content of soils, and hence soil fertility, are bound to increase the effort needed to produce food in the long run, and are hence considered a step in the wrong direction. So, a permaculture approach would focus on using strategies that manage to build soil fertility while producing food. There are a number of ways to do that, but a key element is a deep understanding of nature’s soil food web and nutrient cycling processes. For example, permaculture pays great attention to ensuring a healthy soil microflora.

When the objective is to minimize the effort needed to sustain us, it is very important to closely observe those situations where we have to expend energy on a continual basis in order to fight natural processes. When this happens, there is a conflict between our views of how things ought to look and a system trying to demonstrate its own evolution. In some situations, we really want it that way and have to pay the corresponding price. But there are others—quite a few of them—where we would be well advised to spend some thought on whether we could make our life easier by ‘going with the flow’. If thistles keep on being a nuisance on some piece of land, we might consider trying to fill this ecological niche by growing some closely related species, say some artichoke. If a meadow needs to be mowed regularly so that it does not turn into a shrub thicket, we would instead consider planting some useful shrubs in that place.

Naturally, permaculture design favours perennial plants in climatic regions where the most stable vegetation would be a forest. But it does not have to be this way. There are high-yielding low-effort (in particular: no-till, no-pesticide) ways to grow grains as well, mostly going back to Masanobu Fukuoka. They have gained some popularity in India, where they are known as ‘Rishi Kheti’—’agriculture of the sages’. Here’s a photo gallery containing some fairly recent pictures:

Raju Titus’s Public Gallery, Picasa.



Wheat growing amid fruit trees: no tillage, no pesticides — Hoshangabad, India

An interesting perspective towards weeds which we usually do not take is: the reason this plant could establish itself here is that it’s filling an unfilled ecological niche.

JB: Actually I’ve heard someone say: "If you have weeds, it means you don’t have enough plants".

TF: Right. So, when I take that weed out, I’d be well advised to take note of nature’s lesson and fill that particular niche with an ecological analog that is more useful. Otherwise, it will quite likely come back and need another intervention.

I would consider this "letting systems demonstrate their own evolution while closely watching what they want to tell us and providing some guidance" the most important principle of permaculture.

Another important principle is the ‘user pays’ principle. A funny idea that comes up disturbingly often in discussions of sustainability issues (even if it is not articulated explicitly) is that there is only a limited amount of resources, which we keep on using up, and once we are done with them, that will be the end of mankind. Actually, that’s not how the world works.

Take an apple tree, for example. It starts out as a tiny seed, and has to accumulate a massive amount of (nutrient) resources to grow into a mature tree. Yet, once it completes its life cycle, dies down and is consumed by fungi, it leaves the world in a more fertile state than before. Fertility tends to keep growing, because natural systems by and large work according to the principle that any agent that takes something from the natural world will return something of equal or even greater ecosystemic value.

Let me come back to an example I briefly mentioned earlier on. At a very coarse level of detail, grazing cows eat grass and return cow dung. Now, in the intestines of the cow, quite a lot of interesting biochemistry has happened that converted nonprotein nitrogen (say, urea) into much more valuable protein:

• W. D. Gallup, Ruminant nutrition, review of utilization of nonprotein nitrogen in the ruminant, Journal of Agricultural and Food Chemistry 4 (1956), 625-627.

A completely different example: nutrient accumulators such as comfrey act as powerful pumps that draw up mineral nutrients from the subsoil, where they would be otherwise inaccessible, and make them available for ecosystemic cycling.



Russian comfrey, Symphytum x uplandicum

It is indeed possible to not only use this concept for garden management, but as a fundamental principle to run a sustainable economy. At the small scale (businesses), its viability has been demonstrated, but unfortunately this aspect of permaculture has not received as much attention yet as it should. Here, the key questions are along the lines of: do you need a washing machine, or is your actual need better matched by the description ‘access to some laundry service’?

Concerning energy and material flows, an important principle is "be aware of the boundaries of your domain of influence, capture them as early as you can, release them as late as you can, and extract as much beneficial use out of them as possible in between". We already talked about that. In the era of cheap labour from fossil fuels, it is often a very good idea to use big earthworking machinery to slightly adjust the topography of the landscape in order to capture and make better use of rainwater. Done right, such water harvesting earthworks can last many hundreds of years, and pay back the effort needed to create them many times over in terms of enhanced biological productivity. If this were implemented on a broad scale, not just by a small percentage of farmers, this could add significantly to flood protection as well. I am fairly confident that we will be doing this a lot in the 21st century, as the climate gets more erratic and we face both more extreme rainfall events (note that saturation water vapour pressure increases by about 7% for every Kelvin of temperature increase) as well as longer droughts. It would be smart to start with this now, rather than when high quality fuels are much more expensive. It would have been even smarter to start with this 20 years ago.
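The roughly 7% per Kelvin figure in parentheses follows from the Clausius–Clapeyron relation: the fractional change of saturation vapour pressure with temperature is d(ln e_s)/dT ≈ L/(R_v T²). Here is a quick numerical check, using standard textbook approximations for the constants:

```python
# Fractional increase of saturation vapour pressure per Kelvin, from the
# Clausius-Clapeyron relation: d(ln e_s)/dT = L / (R_v * T^2).
L_VAP = 2.5e6  # latent heat of vaporization of water, J/kg (approximate)
R_V = 461.5    # specific gas constant of water vapour, J/(kg K)

def cc_rate(temperature_kelvin):
    """Fractional change of saturation vapour pressure per Kelvin."""
    return L_VAP / (R_V * temperature_kelvin**2)

for t in (273.15, 288.15):  # freezing point; typical surface temperature
    print(f"T = {t:.2f} K: {100 * cc_rate(t):.1f}% per K")
# Near typical surface temperatures this comes out at roughly 6.5-7.5% per K,
# consistent with the ~7% quoted in the text.
```

The rate falls slightly as temperature rises (it goes as 1/T²), which is why quoted values cluster around 6–7% rather than being a single universal number.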

A further important principle is to create stability through a high degree of network connectivity. We’ve also briefly talked about that already. In ecosystem design, this means to ensure that every important ecosystemic function is provided by more than one element (read: species), while every species provides multiple functions to the assembly. So, if something goes wrong with one element, there are other stabilizing forces in place. The mental picture which I like to use here is that of a stellar cluster: If we put a small number of stars next to one another, the system will undergo fairly complicated dynamics and eventually separate: in some three-star encounters, two stars will enter a very close orbit, while the third receives enough energy to go over escape velocity. If we lump together a large number of stars, their dynamics will thermalize and make it much more difficult for an individual star to obtain enough energy to leave the cluster—and keep it for a sufficiently long time to actually do so. Of course, individual stars do ‘boil off’, but the entire system does not fall apart as fast as just a few stars would.
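The "every function provided by more than one element" idea can also be put in numbers with a toy reliability model. This is my own illustration with hypothetical probabilities, not from the interview: if each of k independent providers of an ecosystem function fails in a given season with probability p, the function is lost entirely only with probability p^k, which shrinks rapidly as providers are added.

```python
def function_failure_prob(p_element, k_providers):
    """Probability an ecosystem function is lost entirely, assuming each of
    k independent providers fails with probability p_element."""
    return p_element ** k_providers

# A hypothetical function (say, nitrogen fixation) with unreliable providers:
for k in (1, 2, 3):
    print(f"{k} provider(s): {function_failure_prob(0.2, k):.3f}")
# 1 provider(s): 0.200
# 2 provider(s): 0.040
# 3 provider(s): 0.008
```

The independence assumption is of course the weak point, just as in the star-cluster picture: correlated failures (a drought hitting all providers at once) degrade redundancy, which is one reason permaculture favours functionally diverse rather than merely numerous elements.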

There are various philosophies on how best to approach weaving an ecosystemic net, ranging from ‘ecosystem mimicry’—i.e. taking wild nature and substituting some species with ecological analogs that are more useful to us—to ‘total synthesis of a species assembly’, i.e. combining species which in theory should grow well together due to their ecological characteristics, even though they might never have done so in nature.

JB: Cool. You’ve given me quite a lot to think about. Finally, could you also leave me with a few good books to read on permaculture?

TF: It depends on what you want to focus on. Concerning a practical hands-on introduction, this is probably the most evolved text:

• Bill Mollison, Introduction to Permaculture, Tagari Publications, Tasmania, 1997.

If you want more theory but are fine with a less refined piece of work, then this is quite useful:

• Bill Mollison, Permaculture – A Designer’s Manual, Tagari Publications, Tasmania, 1988.

Concerning temperate climates—in particular, Europe—this is a well researched piece of work that almost could be used as a college textbook:

• Patrick Whitefield, The Earth Care Manual: a Permaculture Handbook for Britain and Other Temperate Climates, Permanent Publications, East Meon, 2004.

For Europeans, this would probably be my first recommendation.

JB: Thanks! It’s been a very thought-provoking interview.


Ecologists never apply good ecology to their gardens. Architects never understand the transmission of heat in buildings. And physicists live in houses with demented energy systems. It’s curious that we never apply what we know to how we actually live.

– Bill Mollison


This Week’s Finds (Week 314)

6 June, 2011

This week I’d like to start an interview with Thomas Fischbacher, who teaches at the School of Engineering Sciences at the University of Southampton. He’s a man with many interests, but we’ll mainly talk about sustainable agriculture, leading up to an idea called "permaculture".

JB: Your published work is mainly in theoretical physics, and some of it is quite mathematical. You have a bunch of papers on theories of gravity related to string theory, and another bunch on magnetic materials, maybe with some applications to technology. But you’re also interested in sustainable agricultural and building practices! That seems like quite a leap… but I may be trying to make a similar leap myself, so I find it fascinating. How did you get interested in these other topics, which seem so very different in flavor?

TF: I think it’s quite natural that one’s interests are wider than what one actually publishes about—quite likely, the popularity of your blog, which is about all sorts of interesting things, attests to this.

However, if something that seems interesting catches my attention, I often experience a strong drive to come to an advanced level of understanding—at least mastering the key mechanisms. As far back as I can remember, my studies have been predominantly self-directed, often following very unusual and sometimes obscure paths, so I sometimes happen to know a few quite odd things. And actually, considering research, I get a lot of fun out of combining advanced ideas from very different fields. Most of my articles are of that type, e.g. "sparse tensor numerics meets database algorithms and metalinguistics", or "Feynman diagrams meet lazy evaluation and continuation coding", or "exceptional groups meet sensitivity back-propagation". Basically, I like to see myself in the role of a bridge-builder. Very often, powerful ideas that have been developed in one field are completely unknown in another where they actually can be used to great advantage.

Concerning sustainability, it actually was mostly soil physics that initially got me going. When dealing with a highly complex phenomenon such as human civilization, it is sometimes very useful to take a close look at matter and energy flows in order to get an overview of the important processes that determine the structure and long term behaviour of a system. Just doing a few order-of-magnitude guesstimates and looking at typical soil erosion and soil formation rates, I found that, from that perspective, quite a number of fundamental things did not add up and told a story very different from the oh-so-glorious picture of human progress. That’s one of the great things about physical reasoning: it allows one to independently make up one’s mind about things where one otherwise would have little choice but to believe what one is told. And so, I started to look deeper.

JB: So what did you discover? I can’t resist mentioning something I learned from a book Kevin Kelly gave me:

• Neil Roberts, The Holocene: an Environmental History, Blackwell, London, 1998.

It describes how the landscape of Europe has been cycling through glacial and interglacial periods every 100,000 years or so for the last 1.3 million years. It’s a regular sort of pattern!

As a glacial period ends, first comes a phase when birches and pines immigrate from southern refuges into what had been tundra. Then comes a phase when mixed deciduous forest takes over, with oak and elm becoming dominant. During this period, rocky soils turn into brown forest soils. Next, leaching from rocks in glacial deposits leads to a shift from neutral to acid soils, which favor trees like spruce. Then, as spruce take over, fallen needles make the soil even more acid. Together with cooling temperatures as the next glacial approaches, this leads to the replacement of deciduous forest by heathland and pine forests. Finally, glaciers move in and scrape away the soil. And then the cycle repeats!

I thought this was really cool: it’s like seasons, but on a grand scale. And I thought this quote was even cooler:

It was believed by classical authors such as Varro and Seneca that there had once been a "Golden Age", "when man lived on those things which the virgin earth produced spontaneously" and when "the very soil was more fertile and productive." If ever there was such a "Golden Age" then surely it was in the early Holocene, when soils were still unweathered and uneroded, and when Mesolithic people lived off the fruits of the land without the physical toil of grinding labour.

Still unweathered and uneroded! So it takes an ice age to reset the clock and bring soils back to an optimum state?

But your discovery was probably about the effects of humans…

TF: There are a number of different processes, all of them important, that are associated with very different time scales. A general issue here is that, as a society, we have difficulty getting an idea of how our life experience is shaped by our cultural heritage, by our species’ history, and by events that happened tens of thousands of years ago.

Coming to the cycles of glaciation, you are right that these shaped the soils in places such as Europe, by grinding down rock and exposing freshly weathered material. But it is also interesting to look at places where this has not happened—to give us sort of an outside perspective; glaciation was fairly minimal in Australia, for example. Also, the other main player, volcanism, did not have much of an effect in exposing fresh minerals there either. And so, Australian soils are extremely old—millions of years, tens of millions of years even—and very poor in mineral nutrients, as so much has been leached out. This has profound influences on the vegetation, but also on the fauna, and of course on the people who inhabited this land for tens of thousands of years, and their culture: the Aborigines. Now, I don’t want to claim that the Aborigines actually managed to evolve a fully "sustainable" system of land management—but it should be pretty self-evident that they must have developed some fairly interesting biological knowledge over such a long time.

Talking about long time scales and the distant past, it sometimes takes a genius to spot something that is obvious in hindsight but that no one noticed, because what makes the situation unusual is that the really important thing is missing. Have you ever wondered, for example, what animal might eat an avocado and disperse its fairly large seed? Like other fruits (botanically speaking, the avocado is a berry, as is the banana), the avocado co-evolved with animals that would eat its fruit—but there is no animal around that would do so. Basically, the reason is that we are looking at a broken ecosystem: the co-evolutionary partners of the avocado, such as the gomphotheres, became extinct some thousands of years ago.


A blink with respect to the time scales of evolution, but an awfully long time for human civilizations. There is an interesting book on this subject:

• Connie Barlow, The Ghosts of Evolution: Nonsensical Fruit, Missing Partners, and Other Ecological Anachronisms, Basic Books, New York, 2002. (Also listen to this song.)

Considering soils, the cycle of glaciations should already hold an important lesson for us. It is important to note that the plow is basically an invention that (somewhat) suits European agriculture and its geologically young soils. What happens if we take this way of farming to the tropics? While lush and abundant rainforests may seem to suggest otherwise, we have old and nutrient-poor soils there, and most mineral nutrients get stored and cycled by the vegetation. If we clear this, we release a flush of nutrients, but as the annual crops which we normally grow are not that good at holding on to these nutrients, we rapidly destroy the fertility of the land.

There are alternative options for how to produce food in such a situation, but before we look into this, it might be useful to know a few important ballpark figures related to agriculture—plow agriculture in particular.

The most widely used agricultural unit for "mass per area" is "metric tons per hectare", but I will instead use kilograms per square meter (as some people may find that easier to relate to); 1 kilogram per square meter is 10 tons per hectare. Depending on the climate (wind speeds, severity of summer rains, etc.), plow agriculture will typically lead to erosion-driven soil loss rates somewhere in the ballpark of 0.5 to 5 kilograms per square meter per year. In the US, erosion rates in the past have been as high as 4 kilograms per square meter per year and beyond, but have come down markedly. Still, soil loss rates of around 1 kilogram per square meter per year are not uncommon for the US. The problem is that, under good conditions, soil creation rates are in the ballpark of 0.02 to 0.2 kilograms per square meter per year. So, our present agriculture is destroying soil much faster than new soil gets formed. And, quite insidiously, erosion will always carry away the most fertile top layer of soil first.

It is worthwhile to compare this with agricultural yields: in Europe, good wheat yields are in the range of 0.6 kilograms per square meter per year, but yields depend a lot on water availability, and the world average is just 0.3 kilograms per square meter per year. In any case, the plow actually produces much more eroded soil than food. You can see more information here:

• Food and Agriculture Organization of the UN, FAOSTAT.
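The ballpark figures above can be put together in a quick back-of-envelope check. This is just a sketch of the arithmetic; the erosion, formation, and yield rates are the ones quoted in the interview, while the topsoil bulk density and depth are my own illustrative assumptions:

```python
# Back-of-envelope comparison of soil erosion vs. soil formation,
# using the ballpark figures from the text (all in kg per m^2 per year).
EROSION_RATE = 1.0       # typical US plow-agriculture soil loss
FORMATION_RATE = 0.1     # mid-range soil creation under good conditions
WHEAT_YIELD_WORLD = 0.3  # world-average wheat yield

# Unit check: 1 kg/m^2 = 10 t/ha, since 1 ha = 10,000 m^2 and 1 t = 1,000 kg.
assert 1.0 * 10_000 / 1_000 == 10.0

net_loss = EROSION_RATE - FORMATION_RATE   # net soil destroyed per m^2 per year
ratio = EROSION_RATE / WHEAT_YIELD_WORLD   # kg of soil eroded per kg of wheat grown

print(f"Net soil loss: {net_loss:.1f} kg/m^2/yr")
print(f"Soil eroded per kg of wheat (world average): {ratio:.1f} kg")

# Assumed (not from the interview): topsoil bulk density ~1300 kg/m^3
# and 0.3 m of topsoil. Then the net loss exhausts the topsoil in roughly:
DENSITY = 1300.0   # kg/m^3, illustrative assumption
DEPTH = 0.3        # m of topsoil, illustrative assumption
years = DENSITY * DEPTH / net_loss
print(f"Years to deplete {DEPTH} m of topsoil: {years:.0f}")
```

On these illustrative numbers, the plow erodes roughly three kilograms of soil for every kilogram of wheat, and a few centuries suffice to strip topsoil that took far longer to form.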

Concerning ancient reports of a "Golden Age"—I am not so sure about this anymore. By and large, civilizations mostly seem to have had quite a negative long term impact on the soil fertility that sustained them—and a number of them failed due to that. But all things considered, we often find that some particular groups of species have a very positive long term effect on fertility and counteract nutrient leaching—tropical forests bear witness to that.

Now… what single species would be best equipped to make a positive contribution towards long-term fertility building?

JB: Hey, no fair—I thought I was the one asking the questions!

Hmm, I don’t know. Maybe some sort of rhizobium? You know, those bacteria that associate themselves to the roots of plants like clover, alfalfa and beans, and take nitrogen from the air and convert it to a form that’s usable by the plants?

But you said "one single species", so this answer is probably not right: there are lots of species of rhizobia.

TF: The answer is quite astounding—and it lies at the heart of understanding sustainability. The species that could have the largest positive impact on soil fertility is Homo sapiens—us! Now, considering the enormous ecological damage that has been done by that single species, such a proposition may seem quite outrageous. But note that I asked about the potential to make a positive contribution, not actual behaviour as observed so far.

JB: Oh! I should have guessed that. Darn!

TF: When I bring up this point, many people think that I might have some specific technique in mind, a "miracle cure", or "silver bullet" idea such as, say, biochar—which seems to be pretty en vogue now—or genetically engineered miracle plants, or some such thing.

But no—this is about a much more fundamental issue. Nature eventually will heal ecological wounds—but quite often, she is not in a particular hurry. Left to her own devices, she may take thousands of years to rebuild soils and turn devastated land back into fertile ecosystems. Now, this is where we enter the scene. With our outstanding intellectual power we can read landscapes, think about key flows—flows of energy, water, minerals, and living things through a site—and if necessary, provide a little bit of guidance to help nature take the next step. This way, we can often speed up the regeneration clock a hundredfold or more!

Let me give some specific examples. Technologically, these are often embarrassingly simple—yet at the same time highly sophisticated, in the sense that they address issues that are obvious only once one has developed an eye for them.

The first one is imprinting—in arid regions, this can be a mind-blowingly simple yet quite effective technology to kick-start a biological succession pathway.

JB: What’s "imprinting"?

TF: One could say, earthworks for rainwater harvesting, but on the centimeter scale. Basically, it is a simple way to implement a passive resource-concentration system for water and organic matter that "nucleates" the transition back from desert to prairie—kind of like providing ice microcrystals in supercooled water. The Imprinting Foundation has a good website. In particular, take a look at this:

• The Imprinting Foundation, Success Stories.

This video is also well worth watching—part of the "Global Gardener" series:

• Bill Mollison, Dryland permaculture strategies—part 3, YouTube.

Here is another example—getting the restoration of rainforest going in the tropical grasslands of Colombia.

• Zero Emissions Research and Initiatives (ZERI), Reforestation.

Here, the challenge is that the soil originally was so acidic (around pH 4) that aluminium went into the soil solution as toxic Al3+. What eventually did the trick was to plant a nurse crop of Caribbean pines, Pinus caribaea (on 80 square kilometers—no mean feat), provided with the right mycorrhizal symbiont (Pisolithus tinctorius, I think), which enabled the trees to grow in very acidic soil. Fungi are an amazing subject in themselves, by the way.

These were big projects—but similar ideas work on pretty much any scale. Friends of mine have shown me great pictures of the progress of a degraded site in Nepal where they did something very simple a number of years ago—putting up four poles with strings between them on which birds like to gather. And personally, since I started to seriously ponder the issue of soil compaction and started to give double-digging a try in my own garden a few years ago, the results have been so amazing that I wonder why anyone bothers to garden with annuals any other way.

JB: What’s "double-digging"?

TF: A method to relieve soil compaction. As we humans live our lives above the soil, processes below can be rather alien to us—yet, this is where many very important things go on. By and large, most people do not realize how deep plant roots go—and how badly they are affected by compaction.

The term "double-digging" refers to digging out the top foot of topsoil from the bed, and then using a gardening fork to also loosen the next foot of soil (often subsoil) before putting back the topsoil. Now, this method does have its drawbacks, and also, it is not the "silver bullet" single miracle recipe for high gardening yields that some armchair gardeners who have read Jeavons’s book believe it to be. But if your garden soil is badly compacted, as is often the case when starting a new garden, double-digging may be a very good idea.

JB: Interesting!

TF: So, there is no doubt that our species can vastly accelerate natural healing processes. Indeed, we can link our lives with natural processes in a way that satisfies our needs while we benefit the whole species assembly around us—but there are some very non-obvious aspects to this. Hacking a hole into the forest to live "in harmony with nature" most certainly won’t do the trick.

The importance of the key insight—that we have the capacity to act as the most powerful repair species around—cannot be overstated. There is at present a very powerful mental block that shows up in many discussions of sustainability: looking at our past conduct, it is easy to get the idea that Homo sapiens’ modus operandi is to seek out the most valuable/powerful/convenient resource first, use it up, and then, driven by need, find ways to make do with the next most valuable resource, calling this "progress"—actually a downward spiral. I’ve indeed seen adverts for the emerging Liquefied Natural Gas industry that glorified this as "a motor of progress and growth". Now, the only reason we consider LNG is that the more easily accessible, easy-to-handle fuels have been pretty much used up. Same with deep-sea oil drilling. What kind of "progress" is it when the major difference between the recent oil spill in the Gulf of Mexico and the Ixtoc oil spill in 1979 is that this time there’s a mile of water sitting on top of the well—because we used up the more easily accessible oil?

• Rachel Maddow, Ixtoc Deepwater Horizon parallels, YouTube.

Now, there are two dominant attitudes toward this observation that we despoil one resource after another.

One is some form of "denial". This is quite widespread amongst professional economists. Ultimately, the absurdity of their argument becomes clear when it is condensed to "sustainability is just one problem among many, and we are the better at solving problems the stronger our economy—so we need to use up resources fast to get rich fast so that we can afford to address the problems caused by us using up resources fast." Reminds me of a painter who lived in the village I grew up in. He was known to work very swiftly, and when asked why he always was in such a hurry, wittily replied: "but I have to get the job done before I run out of paint!"

The other attitude is some sort of self-hate that regards the key problem not as an issue of very poor management, but inherently linked to human existence. According to that line of thinking, collapse is inevitable and we should just make sure we do not gobble up resources so fast that we leave nothing for our children to despoil so that they can have a chance to live.

It is clear that as long as there is a deadlock between these two attitudes, we will not make much progress towards a noticeably more sustainable society. And waiting just exacerbates the problem. So, the key question is: does it really have to be like this—are we doomed to live by destroying the resources which we depend on? Well—every cow can do better than that. Cow-dung is more valuable in terms of fertility than what the cow has eaten. So, if we are such an amazing species—as we like to claim by calling ourselves "Homo sapiens"—why should we fail so miserably here?

JB: I can see all sorts of economic, political and cultural reasons why we do so badly. But it might be a bit less depressing to talk about how we can do better. For example, you mentioned paying attention to flows through systems.

TF: The important thing about flows is that they are a great concept tool to get some first useful ideas about those processes that really matter for the behaviour of complex systems—both for the purpose of analysis as well as design.

That’s quite an exciting subject, but as you mentioned it, I’d first like to briefly address the issue of depressing topics that frequently arise when taking a deeper look into sustainability—in particular, the present situation. Why? Because I think that our capacity as a society to deal with such emotions will be decisive for how well we will do or how badly we will fail when addressing a set of convergent challenges. On the one hand, it is very important to understand that such emotions are an essential part of human experience. On the other hand, they can have a serious negative impact on our judgment capacity. So, an important part of the sustainability challenge is to build the capacity to act and make sound decisions under emotional stress. That sounds quite obvious, but my impression is that, still, most people either are not yet fully aware of this point, or do not see what this idea might mean in practice.

JB: I’ve been trying to build that capacity myself. I don’t think mathematics or theoretical physics were very good at preparing me. Indeed, I suspect that many people in these fields enjoy not only the feeling of "certainty" they can provide, but also the calming sense that the universe is beautiful and perfect. When it comes to environmental issues there’s a lot more uncertainty, and also frequently the sense that the world is messed up—thanks to us! On top of that there’s a sense of urgency, and frustration. All this can be rather stressful. However, there are ways to deal with that, and I’m busy learning them.

TF: I think there is one particularly important lesson I have learned about the role of emotions, especially fear. Important because it probably is quite a fundamental part of the human condition. Emotions do have the power to veto some conclusions from ever surfacing in one’s conscious mind if they would be painful to bear. They can temporarily suspend sound reasoning and also access to otherwise sound memory.

This is extremely sinister: you are not acting rationally at all; you are in fact driven by one of the most non-rational aspects of your existence, your fear. Yet you yourself have next to no chance of ever discovering this, as your emotions abuse your cognitive abilities to systematically shield you from conscious access to any insight that would stand a chance of making you question your analysis.

JB: I think we can all name other people who suffer from this problem. But of course the challenge is to see it in ourselves, while it’s happening.

TF: Insidiously, having exceptional reasoning abilities will not help in the least here—a person with a powerful mind may be misguided as easily as anybody else by deep inner fears; it’s just that the mind of a person with strong reasoning skills will work harder and spin more sophisticated tales than that of an intellectually average person. So, this essentially is a question of "how fast a runner do you have to be to out-run your own shadow?" How intelligent do you have to be to recognize when your emotions cause your mind to abuse your powerful reasoning abilities to deceive itself? Well, the answer probably is that the capacity to appreciate in oneself the problem of self-deception is not related to intelligence, but to wisdom. I really admire the insight that "it’s hard to fight an enemy who has outposts in your head."

JB: Richard Feynman put it another way: "The first principle is that you must not fool yourself—and you are the easiest person to fool." And if you’re sure you’re not fooling yourself, then you definitely are.

TF: Of course, everything that has an impact on our ability to conduct a sound self-assessment of our own behaviour matters a lot for sustainability related issues.

But enough about the role of the human mind in all this. This certainly is a fascinating and important subject, but at the end of the day, there is a lot of ecosystem rehabilitation to be done, and mapping flows is a powerful approach to getting an idea about what is broken and how to repair it.

JB: Okay, great. But I think our readers need a break. Next time we’ll pick up where we left off, and talk about flows.


Permaculture is a philosophy of working with, rather than
against nature; of protracted and thoughtful observation rather than protracted and thoughtless labour; and of looking at plants and animals in all their functions, rather than treating any area as a single-product system.
– Bill Mollison


This Week’s Finds (Week 313)

25 March, 2011

Here’s the third and final part of my interview with Eliezer Yudkowsky. We’ll talk about three big questions… roughly these:

• How do you get people to work on potentially risky projects in a safe way?

• Do we understand ethics well enough to build “Friendly artificial intelligence”?

• What’s better to work on, artificial intelligence or environmental issues?

So, with no further ado:

JB: There are decent Wikipedia articles on “optimism bias” and “positive illusions”, which suggest that unrealistically optimistic people are more energetic, while more realistic estimates of success go hand-in-hand with mild depression. If this is true, I can easily imagine that most people working on challenging projects like quantum gravity (me, 10 years ago) or artificial intelligence (you) are unrealistically optimistic about our chances of success.

Indeed, I can easily imagine that the first researchers to create a truly powerful artificial intelligence will be people who underestimate its potential dangers. It’s an interesting irony, isn’t it? If most people who are naturally cautious avoid a certain potentially dangerous line of research, the people who pursue that line of research are likely to be less cautious than average.

I’m a bit worried about this when it comes to “geoengineering”, for example—attempts to tackle global warming by large engineering projects. We have people who say “oh no, that’s too dangerous”, and turn their attention to approaches they consider less risky, but that may leave the field to people who underestimate the risks.

So I’m very glad you are thinking hard about how to avoid the potential dangers of artificial intelligence—and even trying to make this problem sound exciting, to attract ambitious and energetic young people to work on it. Is that part of your explicit goal? To make caution and rationality sound sexy?

EY: The really hard part of the problem isn’t getting a few smart people to work on cautious, rational AI. It’s admittedly a harder problem than it should be, because there’s a whole system out there which is set up to funnel smart young people into all sorts of other things besides cautious rational long-term basic AI research. But it isn’t the really hard part of the problem.

The scary thing about AI is that I would guess that the first AI to go over some critical threshold of self-improvement takes all the marbles—first mover advantage, winner take all. The first pile of uranium to have an effective neutron multiplication factor greater than 1, or maybe the first AI smart enough to absorb all the poorly defended processing power on the Internet—there’s actually a number of different thresholds that could provide a critical first-mover advantage.

And it is always going to be fundamentally easier in some sense to go straight all out for AI and not worry about clean designs or stable self-modification or the problem where a near-miss on the value system destroys almost all of the actual value from our perspective. (E.g., imagine aliens who shared every single term in the human utility function but lacked our notion of boredom. Their civilization might consist of a single peak experience repeated over and over, which would make their civilization very boring from our perspective, compared to what it might have been. That is, leaving a single aspect out of the value system can destroy almost all of the value. So there’s a very large gap in the AI problem between trying to get the value system exactly right, versus throwing something at it that sounds vaguely good.)

You want to keep as much of an advantage as possible for the cautious rational AI developers over the crowd that is just gung-ho to solve this super interesting scientific problem and go down in the eternal books of fame. Now there should in fact be some upper bound on the combination of intelligence, methodological rationality, and deep understanding of the problem which you can possess, and still walk directly into the whirling helicopter blades. The problem is that it is probably a rather high upper bound. And you are trying to outrace people who are trying to solve a fundamentally easier wrong problem. So the question is not attracting people to the field in general, but rather getting the really smart competent people to either work for a cautious project or not go into the field at all. You aren’t going to stop people from trying to develop AI. But you can hope to have as many of the really smart people as possible working on cautious projects rather than incautious ones.

So yes, making caution look sexy. But even more than that, trying to make incautious AI projects look merely stupid. Not dangerous. Dangerous is sexy. As the old proverb goes, most of the damage is done by people who wish to feel themselves important. Human psychology seems to be such that many ambitious people find it far less scary to think about destroying the world, than to think about never amounting to much of anything at all. I have met people like this. In fact all the people I have met who think they are going to win eternal fame through their AI projects have been like this. The thought of potentially destroying the world is bearable; it confirms their own importance. The thought of not being able to plow full steam ahead on their incredible amazing AI idea is not bearable; it threatens all their fantasies of wealth and fame.

Now these people of whom I speak are not top-notch minds, not in the class of the top people in mainstream AI, like say Peter Norvig (to name someone I’ve had the honor of meeting personally). And it’s possible that if and when self-improving AI starts to get real top-notch minds working on it, rather than people who were too optimistic about/attached to their amazing bright idea to be scared away by the field of skulls, then these real stars will not fall prey to the same sort of psychological trap. And then again it is also plausible to me that top-notch minds will fall prey to exactly the same trap, because I have yet to learn from reading history that great scientific geniuses are always sane.

So what I would most like to see would be uniform looks of condescending scorn directed at people who claimed their amazing bright AI idea was going to lead to self-improvement and superintelligence, but who couldn’t mount an adequate defense of how their design would have a goal system stable after a billion sequential self-modifications, or how it would get the value system exactly right instead of mostly right. In other words, making destroying the world look unprestigious and low-status, instead of leaving it to the default state of sexiness and importance-confirmingness.

JB: “Get the value system exactly right”—now this phrase touches on another issue I’ve been wanting to talk about. How do we know what it means for a value system to be exactly right? It seems people are even further from agreeing on what it means to be good than on what it means to be rational. Yet you seem to be suggesting we need to solve this problem before it’s safe to build a self-improving artificial intelligence!

When I was younger I worried a lot about the foundations of ethics. I decided that you “can’t derive an ought from an is”—do you believe that? If so, all logical arguments leading up to the conclusion that “you should do X” must involve an assumption of the form “you should do Y”… and attempts to “derive” ethics are all implicitly circular in some way. This really bothered the heck out of me: how was I supposed to know what to do? But of course I kept on doing things while I was worrying about this… and indeed, it was painfully clear that there’s no way out of making decisions: even deciding to “do nothing” or commit suicide counts as a decision.

Later I got more comfortable with the idea that making decisions about what to do needn’t paralyze me any more than making decisions about what is true. But still, it seems that the business of designing ethical beings is going to provoke huge arguments, if and when we get around to that.

Do you spend as much time thinking about these issues as you do thinking about rationality? Of course they’re linked….

EY: Well, I probably spend as much time explaining these issues as I do rationality. There are also an absolutely huge number of pitfalls that people stumble into when they try to think about, as I would put it, Friendly AI. Consider how many pitfalls people run into when they try to think about Artificial Intelligence. Next consider how many pitfalls people run into when they try to think about morality. Next consider how many pitfalls philosophers run into when they try to think about the nature of morality. Next consider how many pitfalls people run into when they try to think about hypothetical extremely powerful agents, especially extremely powerful agents that are supposed to be extremely good. Next consider how many pitfalls people run into when they try to imagine optimal worlds to live in or optimal rules to follow or optimal governments and so on.

Now imagine a subject matter which offers discussants a lovely opportunity to run into all of those pitfalls at the same time.

That’s what happens when you try to talk about Friendly Artificial Intelligence.

And it only takes one error for a chain of reasoning to end up in Outer Mongolia. So one of the great motivating factors behind all the writing I did on rationality and all the sequences I wrote on Less Wrong was to actually make it possible, via two years worth of writing and probably something like a month’s worth of reading at least, to immunize people against all the usual mistakes.

Lest I appear to dodge the question entirely, I’ll try for very quick descriptions and google keywords that professional moral philosophers might recognize.

In terms of what I would advocate programming a very powerful AI to actually do, the keywords are “mature folk morality” and “reflective equilibrium”. This means that you build a sufficiently powerful AI to do, not what people say they want, or even what people actually want, but what people would decide they wanted the AI to do, if they had all of the AI’s information, could think about for as long a subjective time as the AI, knew as much as the AI did about the real factors at work in their own psychology, and had no failures of self-control.

There are a lot of important reasons why you would want to do exactly that and not, say, implement Asimov’s Three Laws of Robotics (a purely fictional device, and if Asimov had depicted them as working well, he would have had no stories to write), or build a superpowerful AI which obeys people’s commands interpreted in literal English, or create a god whose sole prime directive is to make people maximally happy, or any of the above plus a list of six different patches which guarantee that nothing can possibly go wrong—and various other things that seem like incredibly obvious failure scenarios, but which I assure you I have heard seriously advocated over and over and over again.

In a nutshell, you want to use concepts like “mature folk morality” or “reflective equilibrium” because these are as close as moral philosophy has ever gotten to defining in concrete, computable terms what you could be wrong about when you order an AI to do the wrong thing.

For an attempt at nontechnical explanation of what one might want to program an AI to do and why, the best resource I can offer is an old essay of mine which is not written so as to offer good google keywords, but holds up fairly well nonetheless:

• Eliezer Yudkowsky, Coherent extrapolated volition, May 2004.

You also raised some questions about metaethics, where metaethics asks not “Which acts are moral?” but “What is the subject matter of our talk about ‘morality’?” i.e. “What are we talking about here anyway?” In terms of Google keywords, my brand of metaethics is closest to analytic descriptivism or moral functionalism. If I were to try to put that into a very brief nutshell, it would be something like “When we talk about ‘morality’ or ‘goodness’ or ‘right’, the subject matter we’re talking about is a sort of gigantic math question hidden under the simple word ‘right’, a math question that includes all of our emotions and all of what we use to process moral arguments and all the things we might want to change about ourselves if we could see our own source code and know what we were really thinking.”

The complete Less Wrong sequence on metaethics (with many dependencies to earlier ones) is:

• Eliezer Yudkowsky, Metaethics sequence, Less Wrong, 20 June to 22 August 2008.

And one of the better quick summaries is at:

• Eliezer Yudkowsky, Inseparably right; or, joy in the merely good, Less Wrong, 9 August 2008.

And if I am wise I shall not say any more.

JB: I’ll help you be wise. There are a hundred followup questions I’m tempted to ask, but this has been a long and grueling interview, so I won’t. Instead, I’d like to raise one last big question. It’s about time scales.

Self-improving artificial intelligence seems like a real possibility to me. But when? You see, I believe we’re in the midst of a global ecological crisis—a mass extinction event, whose effects will be painfully evident by the end of the century. I want to do something about it. I can’t do much, but I want to do something. Even if we’re doomed to disaster, there are different sizes of disaster. And if we’re going through a kind of bottleneck, where some species make it through and others go extinct, even small actions now can make a difference.

I can imagine some technological optimists—singularitarians, extropians and the like—saying: “Don’t worry, things will get better. Things that seem hard now will only get easier. We’ll be able to suck carbon dioxide from the atmosphere using nanotechnology, and revive species starting from their DNA.” Or maybe even: “Don’t worry: we won’t miss those species. We’ll be having too much fun doing things we can’t even conceive of now.”

But various things make me skeptical of such optimism. One of them is the question of time scales. What if the world goes to hell before our technology saves us? What if artificial intelligence comes along too late to make a big impact on the short-term problems I’m worrying about? In that case, maybe I should focus on short-term solutions.

Just to be clear: this isn’t some veiled attack on your priorities. I’m just trying to decide on my own. One good thing about having billions of people on the planet is that we don’t all have to do the same thing. Indeed, a multi-pronged approach is best. But for my own decisions, I want some rough guess about how long various potentially revolutionary technologies will take to come online.

What do you think about all this?

EY: I’ll try to answer the question about timescales, but first let me explain in some detail why I don’t think the decision should be dominated by that question.

If you look up “Scope Insensitivity” on Less Wrong, you’ll see that when three different groups of subjects were asked how much they would pay in increased taxes to save 2,000 / 20,000 / 200,000 birds from drowning in uncovered oil ponds, the respective average answers were $80 / $78 / $88. People asked questions like this visualize one bird, wings slicked with oil, struggling to escape, and that creates some amount of emotional affect which determines willingness to pay, and the quantity gets tossed out the window since no one can visualize 200,000 of anything. Another hypothesis to explain the data is “purchase of moral satisfaction”, which says that people give enough money to create a “warm glow” inside themselves, and the amount required might have something to do with your personal financial situation, but it has nothing to do with birds. Similarly, residents of four US states were only willing to pay 22% more to protect all 57 wilderness areas in those states than to protect one area. The result I found most horrifying was that subjects were willing to contribute more when a set amount of money was needed to save one child’s life, compared to the same amount of money saving eight lives—because, of course, focusing your attention on a single person makes the feelings stronger, less diffuse.

So while it may make sense to enjoy the warm glow of doing good deeds after we do them, we cannot possibly allow ourselves to choose between altruistic causes based on the relative amounts of warm glow they generate, because our intuitions are quantitatively insane.

And two antidotes that absolutely must be applied in choosing between altruistic causes are conscious appreciation of scope and conscious appreciation of marginal impact.

By its nature, your brain flushes right out the window the all-important distinction between saving one life and saving a million lives. You’ve got to compensate for that using conscious, verbal deliberation. The Society For Curing Rare Diseases in Cute Puppies has got great warm glow, but the fact that these diseases are rare should call a screeching halt right there—which you’re going to have to do consciously, not intuitively. Even before you realize that, contrary to the relative warm glows, it’s really hard to make a moral case for trading off human lives against cute puppies. I suppose if you could save a billion puppies using one dollar I wouldn’t scream at someone who wanted to spend the dollar on that instead of cancer research.

And similarly, if there are a hundred thousand researchers and billions of dollars annually that are already going into saving species from extinction—because it’s a prestigious and popular cause that has an easy time generating warm glow in lots of potential funders—then you have to ask about the marginal value of putting your effort there, where so many other people are already working, compared to a project that isn’t so popular.

I wouldn’t say “Don’t worry, we won’t miss those species”. But consider the future intergalactic civilizations growing out of Earth-originating intelligent life. Consider the whole history of a universe which contains this world of Earth and this present century, and also billions of years of future intergalactic civilization continuing until the universe dies, or maybe forever if we can think of some ingenious way to carry on. Next consider the interval in utility between a universe-history in which Earth-originating intelligence survived and thrived and managed to save 95% of the non-primate biological species now alive, versus a universe-history in which only 80% of those species are alive. That utility interval is not very large compared to the utility interval between a universe in which intelligent life thrived and intelligent life died out. Or the utility interval between a universe-history filled with sentient beings who experience happiness and have empathy for each other and get bored when they do the same thing too many times, versus a universe-history that grew out of various failures of Friendly AI.

(The really scary thing about universes that grow out of a loss of human value is not that they are different, but that they are, from our standpoint, boring. The human utility function says that once you’ve made a piece of art, it’s more fun to make a different piece of art next time. But that’s just us. Most random utility functions will yield instrumental strategies that spend some of their time and resources exploring for the patterns with the highest utility at the beginning of the problem, and then use the rest of their resources to implement the pattern with the highest utility, over and over and over. This sort of thing will surprise a human who expects, on some deep level, that all minds are made out of human parts, and who thinks, “Won’t the AI see that its utility function is boring?” But the AI is not a little spirit that looks over its code and decides whether to obey it; the AI is the code. If the code doesn’t say to get bored, it won’t get bored. A strategy of exploration followed by exploitation is implicit in most utility functions, but boredom is not. If your utility function does not already contain a term for boredom, then you don’t care; it’s not something that emerges as an instrumental value from most terminal values. For more on this see: “In Praise of Boredom” in the Fun Theory Sequence on Less Wrong.)

Anyway: In terms of expected utility maximization, even large probabilities of jumping the interval between a universe-history in which 95% of existing biological species survive Earth’s 21st century, versus a universe-history where 80% of species survive, are just about impossible to trade off against tiny probabilities of jumping the interval between interesting universe-histories, versus boring ones where intelligent life goes extinct, or the wrong sort of AI self-improves.
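The shape of that expected-utility argument can be made concrete with a toy calculation. Every number below is invented purely for illustration (the interview gives no actual figures); the point is only that a huge utility interval can dominate even when its probability of being affected is tiny.

```python
# Toy expected-utility comparison. All numbers are made up for illustration.

# Utility intervals, on an arbitrary common scale:
u_species = 1.0        # saving 95% vs. 80% of existing species
u_existential = 1e6    # thriving intelligent life vs. extinction / bad AI

# Probability that a marginal effort "jumps the interval":
p_species = 0.5        # even a large probability here...
p_existential = 1e-4   # ...versus a tiny probability here

ev_species = p_species * u_species          # 0.5
ev_existential = p_existential * u_existential  # 100.0

print(ev_species, ev_existential)
```

With these (entirely hypothetical) numbers, the tiny-probability intervention still carries two hundred times the expected utility, which is the structure of the argument being made.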

I honestly don’t see how a rationalist can avoid this conclusion: At this absolutely critical hinge in the history of the universe—Earth in the 21st century—rational altruists should devote their marginal attentions to risks that threaten to terminate intelligent life or permanently destroy a part of its potential. Those problems, which Nick Bostrom named “existential risks”, have got all the scope. And when it comes to marginal impact, there are major risks outstanding that practically no one is working on. Once you get the stakes on a gut level it’s hard to see how doing anything else could be sane.

So how do you go about protecting the future of intelligent life? Environmentalism? After all, there are environmental catastrophes that could knock over our civilization… but then if you want to put the whole universe at stake, it’s not enough for one civilization to topple, you have to argue that our civilization is above average in its chances of building a positive galactic future compared to whatever civilization would rise again a century or two later. Maybe if there were ten people working on environmentalism and millions of people working on Friendly AI, I could see sending the next marginal dollar to environmentalism. But with millions of people working on environmentalism, and major existential risks that are completely ignored… if you add a marginal resource that can, rarely, be steered by expected utilities instead of warm glows, devoting that resource to environmentalism does not make sense.

Similarly with other short-term problems. Unless they’re little-known and unpopular problems, the marginal impact is not going to make sense, because millions of other people will already be working on them. And even if you argue that some short-term problem leverages existential risk, it’s not going to be perfect leverage and some quantitative discount will apply, probably a large one. I would be suspicious that the decision to work on a short-term problem was driven by warm glow, status drives, or simple conventionalism.

With that said, there’s also such a thing as comparative advantage—the old puzzle of the lawyer who works an hour in the soup kitchen instead of working an extra hour as a lawyer and donating the money. Personally I’d say you can work an hour in the soup kitchen to keep yourself going if you like, but you should also be working extra lawyer-hours and donating the money to the soup kitchen, or better yet, to something with more scope. (See “Purchase Fuzzies and Utilons Separately” on Less Wrong.) Most people can’t work effectively on Artificial Intelligence (some would question if anyone can, but at the very least it’s not an easy problem). But there’s a variety of existential risks to choose from, plus a general background job of spreading sufficiently high-grade rationality and existential risk awareness. One really should look over those before going into something short-term and conventional. Unless your master plan is just to work the extra hours and donate them to the cause with the highest marginal expected utility per dollar, which is perfectly respectable.

Where should you go in life? I don’t know exactly, but I think I’ll go ahead and say “not environmentalism”. There’s just no way that the product of scope, marginal impact, and John Baez’s comparative advantage is going to end up being maximal at that point.

Which brings me to AI timescales.

If I knew exactly how to make a Friendly AI, and I knew exactly how many people I had available to do it, I still couldn’t tell you how long it would take because of Product Management Chaos.

As it stands, this is a basic research problem—which will always feel very hard, because we don’t understand it, and that means when our brain checks for solutions, we don’t see any solutions available. But this ignorance is not to be confused with the positive knowledge that the problem will take a long time to solve once we know how to solve it. It could be that some fundamental breakthrough will dissolve our confusion and then things will look relatively easy. Or it could be that some fundamental breakthrough will be followed by the realization that, now that we know what to do, it’s going to take at least another 20 years to do it.

I seriously have no idea when AI is going to show up, although I’d be genuinely and deeply shocked if it took another century (barring a collapse of civilization in the meanwhile).

If you were to tell me that as a Bayesian I have to put probability distributions on things on pain of having my behavior be inconsistent and inefficient, well, I would actually suspect that my behavior is inconsistent. But if you were to try and induce from my behavior a median expected time where I spend half my effort planning for less and half my effort planning for more, it would probably look something like 2030.

But that doesn’t really matter to my decisions. Among all existential risks I know about, Friendly AI has the single largest absolute scope—it affects everything, and the problem must be solved at some point for worthwhile intelligence to thrive. It also has the largest product of scope and marginal impact, because practically no one is working on it, even compared to other existential risks. And my abilities seem applicable to it. So I may not like my uncertainty about timescales, but my decisions are not unstable with respect to that uncertainty.

JB: Ably argued! If I think of an interesting reply, I’ll put it in the blog discussion. Thanks for your time.


The best way to predict the future is to invent it. – Alan Kay


This Week’s Finds (Week 312)

14 March, 2011

This is the second part of my interview with Eliezer Yudkowsky. If you click on some technical terms here, you’ll go down to a section where I explain them.

JB: You’ve made a great case for working on artificial intelligence—and more generally, understanding how intelligence works, to figure out how we can improve it. It’s especially hard to argue against studying rationality. Even most people who doubt computers will ever get smarter will admit the possibility that people can improve. And it seems clear that almost every problem we face could benefit from better thinking.

I’m intrigued by the title The Art of Rationality because it suggests that there’s a kind of art to it. We don’t know how to teach someone to be a great artist, but maybe we can teach them to be a better artist. So, what are some of the key principles when it comes to thinking better?

EY: Stars above, what an open-ended question. The idea behind the book is to explain all the drop-dead basic fundamentals that almost no one seems to know about, like what is evidence, what is simplicity, what is truth, the importance of actually changing your mind now and then, the major known cognitive biases that stop people from changing their minds, what it means to live in a universe where things are made of parts, and so on. This is going to be a book primarily aimed at people who are not completely frightened away by complex mathematical concepts such as addition, multiplication, and division (i.e., all you need to understand Bayes’ Theorem if it’s explained properly), albeit with the whole middle of the book being just practical advice based on cognitive biases for the benefit of people who don’t want to deal with multiplication and division. Each chapter is going to address a different aspect of rationality, not in full textbook detail, just enough to convey the sense of a concept, with each chapter being around 5-10,000 words broken into 4-10 bite-size sections of 500-2000 words each. Which of the 27 currently planned book chapters did you want me to summarize?

But if I had to pick just one thing, just one concept that’s most important, I think it would be the difference between rationality and rationalization.

Suppose there’s two boxes, only one of which contains a diamond. And on the two boxes there are various signs and portents which distinguish, imperfectly and probabilistically, between boxes which contain diamonds, and boxes which don’t. I could take a sheet of paper, and I could write down all the signs and portents that I understand, and do my best to add up the evidence, and then on the bottom line I could write, "And therefore, there is a 37% probability that Box A contains the diamond." That’s rationality. Alternatively, I could be the owner of Box A, and I could hire a clever salesman to sell Box A for the highest price he can get; and the clever salesman starts by writing on the bottom line of his sheet of paper, "And therefore, Box A contains the diamond", and then he writes down all the arguments he can think of on the lines above.

But consider: At the moment the salesman wrote down the bottom line on that sheet of paper, the truth or falsity of the statement was fixed. It’s already right or already wrong, and writing down arguments on the lines above isn’t going to change that. Or if you imagine a spread of probable worlds, some of which have different boxes containing the diamond, the correlation between the ink on paper and the diamond’s location became fixed at the moment the ink was written down, and nothing which doesn’t change the ink or the box is going to change that correlation.

That’s "rationalization", which should really be given a name that better distinguishes it from rationality, like "anti-rationality" or something. It’s like calling lying "truthization". You can’t make rational what isn’t rational to start with.

Whatever process your brain uses, in reality, to decide what you’re going to argue for, that’s what determines your real-world effectiveness. Rationality isn’t something you can use to argue for a side you already picked. Your only chance to be rational is while you’re still choosing sides, before you write anything down on the bottom line. If I had to pick one concept to convey, it would be that one.
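The "adding up the evidence" picture in the diamond-box example has a standard mathematical form: convert the prior to log-odds, add a log-likelihood ratio for each sign or portent, and convert back to a probability. Here is a minimal sketch; the particular likelihood ratios are invented so that the total happens to land near the 37% figure in the example.

```python
import math

# "Adding up the evidence" in log-odds form. The evidence values below
# are invented for illustration.

def logit(p):
    """Convert a probability to log-odds."""
    return math.log(p / (1 - p))

def sigmoid(x):
    """Convert log-odds back to a probability."""
    return 1 / (1 + math.exp(-x))

prior = 0.5                        # no reason at first to favor either box
evidence_llr = [0.8, -1.2, -0.13]  # log-likelihood ratio of each sign

posterior = sigmoid(logit(prior) + sum(evidence_llr))
print(round(posterior, 2))  # 0.37
```

The key property this makes visible: the bottom line is fixed by the evidence terms above it, not the other way around.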

JB: Okay. I wasn’t really trying to get you to summarize a whole book. I’ve seen you explain a whole lot of heuristics designed to help us be more rational. So I was secretly wondering if the "art of rationality" is mainly a long list of heuristics, or whether you’ve been able to find a few key principles that somehow spawn all those heuristics.

Either way, it could be a tremendously useful book. And even if you could distill the basic ideas down to something quite terse, in practice people are going to need all those heuristics—especially since many of them take the form "here’s something you tend to do without noticing you’re doing it—so watch out!" If we’re saddled with dozens of cognitive biases that we can only overcome through strenuous effort, then your book has to be long. You can’t just say "apply Bayes’ rule and all will be well."

I can see why you’d single out the principle that "rationality only comes into play before you’ve made up your mind", because so much seemingly rational argument is really just a way of bolstering support for pre-existing positions. But what is rationality? Is it something with a simple essential core, like "updating probability estimates according to Bayes’ rule", or is its very definition inherently long and complicated?

EY: I’d say that there are parts of rationality that we do understand very well in principle. Bayes’ Theorem, the expected utility formula, and Solomonoff induction between them will get you quite a long way. Bayes’ Theorem says how to update based on the evidence, Solomonoff induction tells you how to assign your priors (in principle, it should go as the Kolmogorov complexity aka algorithmic complexity of the hypothesis), and then once you have a function which predicts what will probably happen as the result of different actions, the expected utility formula says how to choose between them.
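As a cartoon of how those three pieces fit together, here is a toy sketch. The `2 ** -bits` prior is a crude, computable stand-in for Solomonoff induction, and every number (hypothesis lengths, likelihoods, utilities) is invented for illustration only.

```python
# Toy pipeline: simplicity prior -> Bayesian update -> expected utility.
# All numbers are illustrative.

hypotheses = {
    # name: (description length in bits, P(observation | hypothesis))
    "simple":  (3, 0.8),
    "complex": (10, 0.9),
}

# Simplicity prior: P(h) proportional to 2^-length (Solomonoff-style).
prior = {h: 2 ** -bits for h, (bits, _) in hypotheses.items()}

# Bayes' Theorem: posterior proportional to prior * likelihood.
unnorm = {h: prior[h] * like for h, (_, like) in hypotheses.items()}
z = sum(unnorm.values())
posterior = {h: w / z for h, w in unnorm.items()}

# Expected utility formula: pick the action with the highest
# posterior-weighted utility.
utility = {"act_a": {"simple": 10, "complex": 0},
           "act_b": {"simple": 4, "complex": 6}}
eu = {a: sum(posterior[h] * u for h, u in us.items())
      for a, us in utility.items()}
best = max(eu, key=eu.get)
print(best)  # act_a
```

The shorter hypothesis starts with almost all the prior mass, so after the update it dominates the expected-utility calculation.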

Marcus Hutter has a formalism called AIXI which combines all three to write out an AI as a single equation which requires infinite computing power plus a halting oracle to run. And Hutter and I have been debating back and forth for quite a while on which AI problems are or aren’t solved by AIXI. For example, I look at the equation as written and I see that AIXI will try the experiment of dropping an anvil on itself to resolve its uncertainty about what happens next, because the formalism as written invokes a sort of Cartesian dualism with AIXI on one side of an impermeable screen and the universe on the other; the equation for AIXI says how to predict sequences of percepts using Solomonoff induction, but it’s too simple to encompass anything as reflective as "dropping an anvil on myself will destroy that which is processing these sequences of percepts". At least that’s what I claim; I can’t actually remember whether Hutter was agreeing with me about that as of our last conversation. Hutter sees AIXI as important because he thinks it’s a theoretical solution to almost all of the important problems; I see AIXI as important because it demarcates the line between things that we understand in a fundamental sense and a whole lot of other things we don’t.

So there are parts of rationality—big, important parts too—which we know how to derive from simple, compact principles in the sense that we could write very simple pieces of code which would behave rationally along that dimension given unlimited computing power.

But as soon as you start asking "How can human beings be more rational?" then things become hugely more complicated because human beings make much more complicated errors that need to be patched on an individual basis, and asking "How can I be rational?" is only one or two orders of magnitude simpler than asking "How does the brain work?", i.e., you can hope to write a single book that will cover many of the major topics, but not quite answer it in an interview question…

On the other hand, the question "What is it that I am trying to do, when I try to be rational?" is a question for which big, important chunks can be answered by saying "Bayes’ Theorem", "expected utility formula" and "simplicity prior" (where Solomonoff induction is the canonical if uncomputable simplicity prior).

At least from a mathematical perspective. From a human perspective, if you asked "What am I trying to do, when I try to be rational?" then the fundamental answers would run more along the lines of "Find the truth without flinching from it and without flushing all the arguments you disagree with out the window", "When you don’t know, try to avoid just making stuff up", "Figure out whether the strength of evidence is great enough to support the weight of every individual detail", "Do what should lead to the best consequences, but not just what looks on the immediate surface like it should lead to the best consequences, you may need to follow extra rules that compensate for known failure modes like shortsightedness and moral rationalizing"…

JB: Fascinating stuff!

Yes, I can see that trying to improve humans is vastly more complicated than designing a system from scratch… but also very exciting, because you can tell a human a high-level principle like "When you don’t know, try to avoid just making stuff up" and have some slight hope that they’ll understand it without it being explained in a mathematically precise way.

I guess AIXI dropping an anvil on itself is a bit like some of the self-destructive experiments that parents fear their children will try, like sticking a pin into an electrical outlet. And it seems impossible to avoid doing such experiments without having a base of knowledge that was either "built in" or acquired by means of previous experiments.

In the latter case, it seems just a matter of luck that none of these previous experiments were fatal. Luckily, people also have "built in" knowledge. More precisely, we have access to our ancestors’ knowledge and habits, which get transmitted to us genetically and culturally. But still, a fair amount of random blundering, suffering, and even death was required to build up that knowledge base.

So when you imagine "seed AIs" that keep on improving themselves and eventually become smarter than us, how can you reasonably hope that they’ll avoid making truly spectacular mistakes? How can they learn really new stuff without a lot of risk?

EY: The best answer I can offer is that they can be conservative externally and deterministic internally.

Human minds are constantly operating on the ragged edge of error, because we have evolved to compete with other humans. If you’re a bit more conservative, if you double-check your calculations, someone else will grab the banana and that conservative gene will not be passed on to descendants. Now this does not mean we couldn’t end up in a bad situation with AI companies competing with each other, but there’s at least the opportunity to do better.

If I recall correctly, the Titanic sank from managerial hubris and cutthroat cost competition, not engineering hubris. The original liners were designed far more conservatively, with triple-redundant compartmentalized modules and so on. But that was before cost competition took off, when the engineers could just add on safety features whenever they wanted. The part about the Titanic being extremely safe was pure marketing literature.

There is also no good reason why any machine mind should be overconfident the way that humans are. There are studies showing that, yes, managers prefer subordinates who make overconfident promises to subordinates who make accurate promises—sometimes I still wonder that people are this silly, but given that people are this silly, the social pressures and evolutionary pressures follow. And we have lots of studies showing that, for whatever reason, humans are hugely overconfident; less than half of students finish their papers by the time they think it 99% probable they’ll get done, etcetera.

And this is a form of stupidity an AI can simply do without. Rationality is not omnipotent; a bounded rationalist cannot do all things. But there is no reason why a bounded rationalist should ever have to overpromise, be systematically overconfident, systematically tend to claim it can do what it can’t. It does not have to systematically underestimate the value of getting more information, or overlook the possibility of unspecified Black Swans and what sort of general behavior helps to compensate. (A bounded rationalist does end up overlooking specific Black Swans because it doesn’t have enough computing power to think of all specific possible catastrophes.)

And contrary to how it works in say Hollywood, even if an AI does manage to accidentally kill a human being, that doesn’t mean it’s going to go “I HAVE KILLED” and dress up in black and start shooting nuns from rooftops. What it ought to do—what you’d want to see happen—would be for the utility function to go on undisturbed, and for the probability distribution to update based on whatever unexpected thing just happened and contradicted its old hypotheses about what does and does not kill humans. In other words, keep the same goals and say “oops” on the world-model; keep the same terminal values and revise its instrumental policies. These sorts of external-world errors are not catastrophic unless they can actually wipe out the planet in one shot, somehow.

The catastrophic sort of error, the sort you can’t recover from, is an error in modifying your own source code. If you accidentally change your utility function you will no longer want to change it back. And in this case you might indeed ask, "How will an AI make millions or billions of code changes to itself without making a mistake like that?" But there are in fact methods powerful enough to do a billion error-free operations. A friend of mine once said something along the lines of "a CPU does a mole of transistor operations, error-free, in a day" though I haven’t checked the numbers. When chip manufacturers are building a machine with hundreds of millions of interlocking pieces and they don’t want to have to change it after it leaves the factory, they may go so far as to prove the machine correct, using human engineers to navigate the proof space and suggest lemmas to prove (which AIs can’t do, they’re defeated by the exponential explosion) and complex theorem-provers to prove the lemmas (which humans would find boring) and simple verifiers to check the generated proof. It takes a combination of human and machine abilities and it’s extremely expensive. But I strongly suspect that an Artificial General Intelligence with a good design would be able to treat all its code that way—that it would combine all those abilities in a single mind, and find it easy and natural to prove theorems about its code changes. It could not, of course, prove theorems about the external world (at least not without highly questionable assumptions). It could not prove external actions correct. The only thing it could write proofs about would be events inside the highly deterministic environment of a CPU—that is, its own thought processes. But it could prove that it was processing probabilities about those actions in a Bayesian way, and prove that it was assessing the probable consequences using a particular utility function. It could prove that it was sanely trying to achieve the same goals.

A self-improving AI that’s unsure about whether to do something ought to just wait and do it later after self-improving some more. It doesn’t have to be overconfident. It doesn’t have to operate on the ragged edge of failure. It doesn’t have to stop gathering information too early, if more information can be productively gathered before acting. It doesn’t have to fail to understand the concept of a Black Swan. It doesn’t have to do all this using a broken error-prone brain like a human one. It doesn’t have to be stupid in ways like overconfidence that humans seem to have specifically evolved to be stupid. It doesn’t have to be poorly calibrated (assign 99% probabilities that come true less than 99 out of 100 times), because bounded rationalists can’t do everything but they don’t have to claim what they can’t do. It can prove that its self-modifications aren’t making itself crazy or changing its goals, at least if the transistors work as specified, or make no more than any possible combination of 2 errors, etc. And if the worst does happen, so long as there’s still a world left afterward, it will say "Oops" and not do it again. This sounds to me like essentially the optimal scenario given any sort of bounded rationalist whatsoever.
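Calibration in that sense is directly measurable: group predictions by stated confidence and compare against the observed frequency of success. A minimal sketch, with invented prediction data:

```python
# Checking calibration: does stated confidence match observed frequency?
# The prediction data below is invented for illustration; each entry is
# (stated probability, whether the prediction came true).

predictions = [(0.99, True)] * 90 + [(0.99, False)] * 10  # overconfident

stated = sum(p for p, _ in predictions) / len(predictions)
observed = sum(1 for _, hit in predictions if hit) / len(predictions)

# Stated confidence 0.99 vs. observed frequency 0.9: the "99%" claims
# came true only 90 times out of 100.
print(round(stated, 2), observed)
```

A well-calibrated predictor, bounded or not, would show these two numbers agreeing within sampling error.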

And finally, if I was building a self-improving AI, I wouldn’t ask it to operate heavy machinery until after it had grown up. Why should it?

JB: Indeed!

Okay—I’d like to take a break here, explain some terms you used, and pick up next week with some less technical questions, like what’s a better use of time: tackling environmental problems, or trying to prepare for a technological singularity?

 
 

Some explanations

Here are some quick explanations. If you click on the links here you’ll get more details:


Cognitive Bias. A cognitive bias is a way in which people’s judgements systematically deviate from some norm—for example, from ideal rational behavior. You can see a long list of cognitive biases on Wikipedia. It’s good to know a lot of these and learn how to spot them in yourself and your friends.

For example, confirmation bias is the tendency to pay more attention to information that confirms our existing beliefs. Another great example is the bias blind spot: the tendency for people to think of themselves as less cognitively biased than average! I’m sure glad I don’t suffer from that.


Bayes’ Theorem. This is a rule for updating our opinions about probabilities when we get new information. Suppose you start out thinking the probability of some event A is P(A), and the probability of some event B is P(B). Suppose P(A|B) is the probability of event A given that B happens. Likewise, suppose P(B|A) is the probability of B given that A happens. Then the probability that both A and B happen is

P(A|B) P(B)

but by the same token it’s also

P(B|A) P(A)

so these are equal. A little algebra gives Bayes’ Theorem:

P(A|B) = P(B|A) P(A) / P(B)

If for some reason we know everything on the right-hand side, we can use this equation to work out P(A|B), and thus update our probability for event A when we see event B happen.

For a longer explanation with examples, see:

• Eliezer Yudkowsky, An intuitive explanation of Bayes’ Theorem.

Some handy jargon: we call P(A) the prior probability of A, and P(A|B) the posterior probability.
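The update is a one-liner to compute. Here’s a minimal sketch in Python; the disease-test numbers are made up purely for illustration, and P(B) is expanded using the law of total probability:

```python
def posterior(p_a, p_b_given_a, p_b_given_not_a):
    """Compute P(A|B) via Bayes' Theorem.

    P(B) is expanded as P(B|A) P(A) + P(B|not A) P(not A).
    """
    p_b = p_b_given_a * p_a + p_b_given_not_a * (1 - p_a)
    return p_b_given_a * p_a / p_b

# Made-up numbers: event A has 1% prior probability, and evidence B
# occurs 90% of the time when A holds but 5% of the time when it doesn't.
p = posterior(0.01, 0.90, 0.05)
print(round(p, 3))  # 0.154 -- the posterior rises, but stays small
```

Note how weak evidence for a rare event leaves the posterior low: the prior P(A) keeps doing a lot of work.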


Solomonoff Induction. Bayes’ Theorem helps us compute posterior probabilities, but where do we get the prior probabilities from? How can we guess probabilities before we’ve observed anything?

This famous puzzle led Ray Solomonoff to invent Solomonoff induction. The key new idea is algorithmic probability theory. This is a way to define a probability for any string of letters in some alphabet, where a string counts as more probable if it’s less complicated. If we think of a string as a "hypothesis"—it could be a sentence in English, or an equation—this becomes a way to formalize Occam’s razor: the idea that given two competing hypotheses, the simpler one is more likely to be true.

So, algorithmic probability lets us define a prior probability distribution on hypotheses, the so-called “simplicity prior”, that implements Occam’s razor.

More precisely, suppose we have a special programming language where:

  1. Computer programs are written as strings of bits.

  2. They contain a special bit string meaning “END” at the end, and nowhere else.

  3. They don’t take an input: they just run and either halt and print out a string of letters, or never halt.

Then to get the algorithmic probability of a string of letters, we take all programs that print out that string and add up

2^(-length of program)

So, you can see that a string counts as more probable if it has more short programs that print it out.
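The real sum ranges over all halting programs of a universal machine and is uncomputable, but the bookkeeping itself is simple. Here’s a toy sketch where the "programming language" is just a made-up table of prefix-free bit-string programs and their outputs (every program and output below is hypothetical):

```python
# Hypothetical toy language: each bit string is a "program" that
# prints the string shown. The set is prefix-free, as required.
programs = {
    "010":   "ab",   # a 3-bit program printing "ab"
    "0110":  "ab",   # a longer program printing the same string
    "0111":  "ba",
    "00110": "ab",
}

def algorithmic_probability(s):
    """Sum 2^(-length) over all programs in the table that print s."""
    return sum(2.0 ** -len(p) for p, out in programs.items() if out == s)

print(algorithmic_probability("ab"))  # 2^-3 + 2^-4 + 2^-5 = 0.21875
print(algorithmic_probability("ba"))  # 2^-4 = 0.0625
```

In this toy world "ab" comes out more probable than "ba" because it has more, and shorter, programs printing it, which is exactly the Occam’s razor behavior we wanted.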


Kolmogorov complexity. The Kolmogorov complexity of a string of letters is the length of the shortest program that prints it out, where programs are written in a special language as described above. This is a way of measuring how complicated a string is. It’s closely related to the algorithmic entropy: the difference between the Kolmogorov complexity of a string and minus the logarithm of its algorithmic probability is bounded by a constant, if we take logarithms using base 2. For more on all this stuff, see:

• M. Li and P. Vitányi, An Introduction to Kolmogorov Complexity Theory and its Applications, Springer, Berlin, 2008.
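Continuing the toy sketch from above (same hypothetical program table, not any real language), we can compute the shortest-program length and see that it closely tracks minus the log of the algorithmic probability, since the shortest program dominates the sum:

```python
import math

# Same made-up table of prefix-free "programs" and their outputs.
programs = {
    "010":   "ab",
    "0110":  "ab",
    "0111":  "ba",
    "00110": "ab",
}

def kolmogorov_complexity(s):
    """Length of the shortest program in the table that prints s."""
    return min(len(p) for p, out in programs.items() if out == s)

def algorithmic_probability(s):
    return sum(2.0 ** -len(p) for p, out in programs.items() if out == s)

print(kolmogorov_complexity("ab"))                 # 3
print(-math.log2(algorithmic_probability("ab")))   # about 2.19
```

Here the gap is about 0.8; the theorem says that over all strings the gap stays below some constant depending only on the language.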


Halting Oracle. Alas, the algorithmic probability of a string is not computable. Why? Because to compute it, you’d need to go through all the programs in your special language that print out that string and add up a contribution from each one. But to do that, you’d need to know which programs halt—and there’s no systematic way to answer that question, which is called the halting problem.

But, we can pretend! We can pretend we have a magic box that will tell us whether any program in our special language halts. Computer scientists call any sort of magic box that answers questions an oracle. So, our particular magic box is called a halting oracle.


AIXI. AIXI is Marcus Hutter’s attempt to define an agent that "behaves optimally in any computable environment". Since AIXI relies on the idea of algorithmic probability, you can’t run AIXI on a computer unless it has infinite computing power and—the really hard part—access to a halting oracle. However, Hutter has also defined computable approximations to AIXI. For a quick intro, see this:

• Marcus Hutter, Universal intelligence: a mathematical top-down approach.

For more, try this:

• Marcus Hutter, Universal Artificial Intelligence: Sequential Decisions Based on Algorithmic Probability, Springer, Berlin, 2005.


Utility. Utility is a hypothetical numerical measure of satisfaction. If you know the probabilities of various outcomes, and you know what your utility will be in each case, you can compute your "expected utility" by taking the probabilities of the different outcomes, multiplying them by the corresponding utilities, and adding them up. In simple terms, this is how happy you’ll be on average. The expected utility hypothesis says that a rational decision-maker has a utility function and will try to maximize its expected utility.
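The expected-utility computation is just a probability-weighted sum. Here’s a minimal sketch; the two gambles and all their numbers are invented for illustration:

```python
def expected_utility(outcomes):
    """outcomes: list of (probability, utility) pairs for one gamble."""
    return sum(p * u for p, u in outcomes)

# Gamble A: a sure 50 utils. Gamble B: risky, but higher on average.
gamble_a = [(1.0, 50.0)]
gamble_b = [(0.6, 100.0), (0.4, 0.0)]

print(expected_utility(gamble_a))  # 50.0
print(expected_utility(gamble_b))  # 60.0
```

An expected-utility maximizer takes gamble B despite the 40% chance of getting nothing, because 60 > 50; a risk-averse agent who prefers A is, on this theory, simply one whose utility function is concave in money.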


Bounded Rationality. In the real world, any decision-maker has limits on its computational power and the time it has to make a decision. The idea that rational decision-makers "maximize expected utility" is oversimplified unless it takes this into account somehow. Theories of bounded rationality try to take these limitations into account. One approach is to think of decision-making as yet another activity whose costs and benefits must be taken into account when making decisions. Roughly: you must decide how much time you want to spend deciding. Of course, there’s an interesting circularity here.


Black Swan. According to Nassim Taleb, human history is dominated by black swans: important events that were unpredicted and indeed unpredictable, but rationalized by hindsight and thus made to seem as if they could have been predicted. He believes that rather than trying to predict such events (which he considers largely futile), we should try to get good at adapting to them. For more see:

• Nassim Taleb, The Black Swan: The Impact of the Highly Improbable, Random House, New York, 2007.


The first principle is that you must not fool yourself—and you are the easiest person to fool. – Richard Feynman


This Week’s Finds (Week 311)

7 March, 2011

This week I’ll start an interview with Eliezer Yudkowsky, who works at an institute he helped found: the Singularity Institute for Artificial Intelligence.

While many believe that global warming or peak oil are the biggest dangers facing humanity, Yudkowsky is more concerned about risks inherent in the accelerating development of technology. There are different scenarios one can imagine, but a bunch tend to get lumped under the general heading of a technological singularity. Instead of trying to explain this idea in all its variations, let me rapidly sketch its history and point you to some reading material. Then, on with the interview!

In 1958, the mathematician Stanislaw Ulam wrote about some talks he had with John von Neumann:

One conversation centered on the ever accelerating progress of technology and changes in the mode of human life, which gives the appearance of approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue.

In 1965, the British mathematician Irving John Good raised the possibility of an "intelligence explosion": if machines could improve themselves to get smarter, perhaps they would quickly become a lot smarter than us.

In 1983 the mathematician and science fiction writer Vernor Vinge brought the singularity idea into public prominence with an article in Omni magazine, in which he wrote:

We will soon create intelligences greater than our own. When this happens, human history will have reached a kind of singularity, an intellectual transition as impenetrable as the knotted space-time at the center of a black hole, and the world will pass far beyond our understanding. This singularity, I believe, already haunts a number of science-fiction writers. It makes realistic extrapolation to an interstellar future impossible. To write a story set more than a century hence, one needs a nuclear war in between … so that the world remains intelligible.

In 1993 Vinge wrote an essay in which he even ventured a prediction as to when the singularity would happen:

Within thirty years, we will have the technological means to create superhuman intelligence. Shortly after, the human era will be ended.

You can read that essay here:

• Vernor Vinge, The coming technological singularity: how to survive in the post-human era, article for the VISION-21 Symposium, 30-31 March, 1993.

With the rise of the internet, the number of people interested in such ideas grew enormously: transhumanists, extropians, singularitarians and the like. In 2005, Ray Kurzweil wrote:

What, then, is the Singularity? It’s a future period during which the pace of technological change will be so rapid, its impact so deep, that human life will be irreversibly transformed. Although neither utopian nor dystopian, this epoch will transform the concepts we rely on to give meaning to our lives, from our business models to the cycle of human life, including death itself. Understanding the Singularity will alter our perspective on the significance of our past and the ramifications for our future. To truly understand it inherently changes one’s view of life in general and one’s particular life. I regard someone who understands the Singularity and who has reflected on its implications for his or her own life as a "singularitarian".

He predicted that the singularity will occur around 2045. For more, see:

• Ray Kurzweil, The Singularity is Near: When Humans Transcend Biology, Viking, 2005.

Yudkowsky distinguishes three major schools of thought regarding the singularity:

Accelerating Change that is nonetheless somewhat predictable (e.g. Ray Kurzweil).

Event Horizon: after the rise of intelligence beyond our own, the future becomes absolutely unpredictable to us (e.g. Vernor Vinge).

Intelligence Explosion: a rapid chain reaction of self-amplifying intelligence until ultimate physical limits are reached (e.g. I. J. Good and Eliezer Yudkowsky).

Yudkowsky believes that an intelligence explosion could threaten everything we hold dear unless the first self-amplifying intelligence is "friendly". The challenge, then, is to design “friendly AI”. And this requires understanding a lot more than we currently do about intelligence, goal-driven behavior, rationality and ethics—and of course what it means to be “friendly”. For more, start here:

• The Singularity Institute for Artificial Intelligence, Publications.

Needless to say, there’s a fourth school of thought on the technological singularity, even more popular than those listed above:

Baloney: it’s all a load of hooey!

Most people in this school have never given the matter serious thought, but a few have taken time to formulate objections. Others think a technological singularity is possible but highly undesirable and avoidable, so they want to prevent it. For various criticisms, start here:

• Technological singularity: Criticism, Wikipedia.

Personally, what I like most about singularitarians is that they care about the future and recognize that it may be very different from the present, just as the present is very different from the pre-human past. I wish there were more dialog between them and other sorts of people—especially people who also care deeply about the future, but have drastically different visions of it. I find it quite distressing how people with different visions of the future do most of their serious thinking within like-minded groups. This leads to groups with drastically different assumptions, with each group feeling a lot more confident about their assumptions than an outsider would deem reasonable. I’m talking here about environmentalists, singularitarians, people who believe global warming is a serious problem, people who don’t, etc. Members of any tribe can easily see the cognitive defects of every other tribe, but not their own. That’s a pity.

And so, this interview:

JB: I’ve been a fan of your work for quite a while. At first I thought your main focus was artificial intelligence (AI) and preparing for a technological singularity by trying to create "friendly AI". But lately I’ve been reading your blog, Less Wrong, and I get the feeling you’re trying to start a community of people interested in boosting their own intelligence—or at least, their own rationality. So, I’m curious: how would you describe your goals these days?

EY: My long-term goals are the same as ever: I’d like human-originating intelligent life in the Solar System to survive, thrive, and not lose its values in the process. And I still think the best means is self-improving AI. But that’s a bit of a large project for one person, and after a few years of beating my head against the wall trying to get other people involved, I realized that I really did have to go back to the beginning, start over, and explain all the basics that people needed to know before they could follow the advanced arguments. Saving the world via AI research simply can’t compete against the Society for Treating Rare Diseases in Cute Kittens unless your audience knows about things like scope insensitivity and the affect heuristic and the concept of marginal expected utility, so they can see why the intuitively more appealing option is the wrong one. So I know it sounds strange, but in point of fact, since I sat down and started explaining all the basics, the Singularity Institute for Artificial Intelligence has been growing at a better clip and attracting more interesting people.

Right now my short-term goal is to write a book on rationality (tentative working title: The Art of Rationality) to explain the drop-dead basic fundamentals that, at present, no one teaches; those who are impatient will find a lot of the core material covered in these Less Wrong sequences:

Map and territory.
How to actually change your mind.
Mysterious answers to mysterious questions.

though I intend to rewrite it all completely for the book so as to make it accessible to a wider audience. Then I probably need to take at least a year to study up on math, and then—though it may be an idealistic dream—I intend to plunge into the decision theory of self-modifying decision systems and never look back. (And finish the decision theory and implement it and run the AI, at which point, if all goes well, we Win.)

JB: I can think of lots of big questions at this point, and I’ll try to get to some of those, but first I can’t resist asking: why do you want to study math?

EY: A sense of inadequacy.

My current sense of the problems of self-modifying decision theory is that it won’t end up being Deep Math, nothing like the proof of Fermat’s Last Theorem—that 95% of the progress-stopping difficulty will be in figuring out which theorem is true and worth proving, not the proof. (Robin Hanson spends a lot of time usefully discussing which activities are most prestigious in academia, and it would be a Hansonian observation, even though he didn’t say it AFAIK, that complicated proofs are prestigious but it’s much more important to figure out which theorem to prove.) Even so, I was a spoiled math prodigy as a child—one who was merely amazingly good at math for someone his age, instead of competing with other math prodigies and training to beat them. My sometime coworker Marcello (he works with me over the summer and attends Stanford at other times) is a non-spoiled math prodigy who trained to compete in math competitions and I have literally seen him prove a result in 30 seconds that I failed to prove in an hour.

I’ve come to accept that to some extent we have different and complementary abilities—now and then he’ll go into a complicated blaze of derivations and I’ll look at his final result and say "That’s not right" and maybe half the time it will actually be wrong. And when I’m feeling inadequate I remind myself that having mysteriously good taste in final results is an empirically verifiable talent, at least when it comes to math. This kind of perceptual sense of truth and falsity does seem to be very much important in figuring out which theorems to prove. But I still get the impression that the next steps in developing a reflective decision theory may require me to go off and do some of the learning and training that I never did as a spoiled math prodigy, first because I could sneak by on my ability to "see things", and second because it was so much harder to try my hand at any sort of math I couldn’t see as obvious. I get the impression that knowing which theorems to prove may require me to be better than I currently am at doing the proofs.

On some gut level I’m also just embarrassed by the number of compliments I get for my math ability (because I’m a good explainer and can make math things that I do understand seem obvious to other people) as compared to the actual amount of advanced math knowledge that I have (practically none by any real mathematician’s standard). But that’s more of an emotion that I’d draw on for motivation to get the job done, than anything that really ought to factor into my long-term planning. For example, I finally looked up the drop-dead basics of category theory because someone else on a transhumanist IRC channel knew about it and I didn’t. I’m happy to accept my ignoble motivations as a legitimate part of myself, so long as they’re motivations to learn math.

JB: Ah, how I wish more of my calculus students took that attitude. Math professors worldwide will frame that last sentence of yours and put it on their office doors.

I’ve recently been trying to switch from pure math to more practical things. So I’ve been reading more about control theory, complex systems made of interacting parts, and the like. Jan Willems has written some very nice articles about this, and your remark about complicated proofs in mathematics reminds me of something he said:

… I have almost always felt fortunate to have been able to do research in a mathematics environment. The average competence level is high, there is a rich history, the subject is stable. All these factors are conducive for science. At the same time, I was never able to feel unequivocally part of the mathematics culture, where, it seems to me, too much value is put on difficulty as a virtue in itself. My appreciation for mathematics has more to do with its clarity of thought, its potential of sharply articulating ideas, its virtues as an unambiguous language. I am more inclined to treasure the beauty and importance of Shannon’s ideas on errorless communication, algorithms such as the Kalman filter or the FFT, constructs such as wavelets and public key cryptography, than the heroics and virtuosity surrounding the four-color problem, Fermat’s last theorem, or the Poincaré and Riemann conjectures.

I tend to agree. Never having been much of a prodigy myself, I’ve always preferred thinking of math as a language for understanding the universe, rather than a list of famous problems to challenge heroes, an intellectual version of the Twelve Labors of Hercules. But for me the universe includes very abstract concepts, so I feel "pure" math such as category theory can be a great addition to the vocabulary of any scientist.

Anyway: back to business. You said:

I’d like human-originating intelligent life in the Solar System to survive, thrive, and not lose its values in the process. And I still think the best means is self-improving AI.

I bet a lot of our readers would happily agree with your first sentence. It sounds warm and fuzzy. But a lot of them might recoil from the next sentence. "So we should build robots that take over the world???" Clearly there’s a long train of thought lurking here. Could you sketch how it goes?

EY: Well, there’s a number of different avenues from which to approach that question. I think I’d like to start off with a quick remark—do feel free to ask me to expand on it—that if you want to bring order to chaos, you have to go where the chaos is.

In the early twenty-first century the chief repository of scientific chaos is Artificial Intelligence. Human beings have this incredibly powerful ability that took us from running over the savanna hitting things with clubs to making spaceships and nuclear weapons, and if you try to make a computer do the same thing, you can’t because modern science does not understand how this ability works.

At the same time, the parts we do understand, such as that human intelligence is almost certainly running on top of neurons firing, suggest very strongly that human intelligence is not the limit of the possible. Neurons fire at, say, 200 hertz top speed; transmit signals at 150 meters/second top speed; and even in the realm of heat dissipation (where neurons still have transistors beat cold) a synaptic firing still dissipates around a million times as much heat as the thermodynamic limit for a one-bit irreversible operation at 300 Kelvin. So without shrinking the brain, cooling the brain, or invoking things like reversible computing, it ought to be physically possible to build a mind that works at least a million times faster than a human one, at which rate a subjective year would pass for every 31 sidereal seconds, and all the time from Ancient Greece up until now would pass in less than a day. This is talking about hardware because the hardware of the brain is a lot easier to understand, but software is probably a lot more important; and in the area of software, we have no reason to believe that evolution came up with the optimal design for a general intelligence, starting from incremental modification of chimpanzees, on its first try.
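The million-to-one speedup arithmetic in that last paragraph is easy to check:

```python
# At a millionfold speedup, how much wall-clock time corresponds
# to one subjective year?
SECONDS_PER_YEAR = 365.25 * 24 * 3600   # about 3.16e7 seconds
speedup = 1_000_000

print(SECONDS_PER_YEAR / speedup)       # about 31.6 seconds
print(2500 * SECONDS_PER_YEAR / speedup / 3600)  # ~2500 years since
                                                 # Ancient Greece: ~22 hours
```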

People say things like "intelligence is no match for a gun" and they’re thinking like guns grew on trees, or they say "intelligence isn’t as important as social skills" like social skills are implemented in the liver instead of the brain. Talking about smarter-than-human intelligence is talking about doing a better version of that stuff humanity has been doing over the last hundred thousand years. If you want to accomplish large amounts of good you have to look at things which can make large differences.

Next lemma: Suppose you offered Gandhi a pill that made him want to kill people. Gandhi starts out not wanting people to die, so if he knows what the pill does, he’ll refuse to take the pill, because that will make him kill people, and right now he doesn’t want to kill people. This is an informal argument that Bayesian expected utility maximizers with sufficient self-modification ability will self-modify in such a way as to preserve their own utility function. You would like me to make that a formal argument. I can’t, because if you take the current formalisms for things like expected utility maximization, they go into infinite loops and explode when you talk about self-modifying the part of yourself that does the self-modifying. And there’s a little thing called Löb’s Theorem which says that no proof system at least as powerful as Peano Arithmetic can consistently assert its own soundness, or rather, if you can prove a theorem of the form

□P ⇒ P

(if I prove P then it is true) then you can use this theorem to prove P. Right now I don’t know how you could even have a self-modifying AI that didn’t look itself over and say, "I can’t trust anything this system proves to actually be true, I had better delete it". This is the class of problems I’m currently working on—reflectively consistent decision theory suitable for self-modifying AI. A solution to this problem would let us build a self-improving AI and know that it was going to keep whatever utility function it started with.

There’s a huge space of possibilities for possible minds; people make the mistake of asking "What will AIs do?" like AIs were the Tribe that Lives Across the Water, foreigners all of one kind from the same country. A better way of looking at it would be to visualize a gigantic space of possible minds and all human minds fitting into one tiny little dot inside the space. We want to understand intelligence well enough to reach into that gigantic space outside and pull out one of the rare possibilities that would be, from our perspective, a good idea to build.

If you want to maximize your marginal expected utility you have to maximize on your choice of problem over the combination of high impact, high variance, possible points of leverage, and few other people working on it. The problem of stable goal systems in self-improving Artificial Intelligence has no realistic competitors under any three of these criteria, let alone all four.

That gives you rather a lot of possible points for followup questions so I’ll stop there.

JB: Sure, there are so many followup questions that this interview should be formatted as a tree with lots of branches instead of in a linear format. But until we can easily spin off copies of ourselves I’m afraid that would be too much work.

So, I’ll start with a quick point of clarification. You say "if you want to bring order to chaos, you have to go where the chaos is." I guess that at one level you’re just saying that if we want to make a lot of progress in understanding the universe, we have to tackle questions that we’re really far from understanding—like how intelligence works.

And we can say this in a fancier way, too. If we want models of reality that reduce the entropy of our probabilistic predictions (there’s a concept of entropy for probability distributions, which is big when the probability distribution is very smeared out), then we have to find subjects where our predictions have a lot of entropy.

Am I on the right track?

EY: Well, if we wanted to torture the metaphor a bit further, we could talk about how what you really want is not high-entropy distributions but highly unstable ones. For example, if I flip a coin, I have no idea whether it’ll come up heads or tails (maximum entropy) but whether I see it come up heads or tails doesn’t change my prediction for the next coinflip. If you zoom out and look at probability distributions over sequences of coinflips, then high-entropy distributions tend not to ever learn anything (seeing heads on one flip doesn’t change your prediction next time), while inductive probability distributions (where your beliefs about probable sequences are such that, say, 11111 is more probable than 11110) tend to be lower-entropy because learning requires structure. But this would be torturing the metaphor, so I should probably go back to the original tangent:
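The coinflip point can be made concrete with a small sketch. I’ll use Laplace’s rule of succession as the example of an inductive distribution—that particular rule is my choice of illustration, not one named above—and compare it with the maximum-entropy fair independent coin, which never learns:

```python
def p_fair(seq):
    """Fair independent coin: every n-flip sequence has probability 2^-n,
    so past flips never change the prediction for the next one."""
    return 0.5 ** len(seq)

def p_laplace(seq):
    """Laplace's rule of succession (uniform prior over the coin's bias):
    P(next = 1 | k ones in n flips) = (k + 1) / (n + 2)."""
    prob, ones = 1.0, 0
    for i, bit in enumerate(seq):
        p_one = (ones + 1) / (i + 2)
        prob *= p_one if bit == 1 else 1 - p_one
        ones += bit
    return prob

print(p_fair([1,1,1,1,1]) == p_fair([1,1,1,1,0]))       # True: no learning
print(p_laplace([1,1,1,1,1]))   # 1/6  -- 11111 is more probable...
print(p_laplace([1,1,1,1,0]))   # 1/30 -- ...than 11110, because
                                # each 1 raises the probability of the next
```

The inductive distribution concentrates probability on structured sequences, so it has lower entropy than the fair coin and, unlike it, actually updates on evidence.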

Richard Hamming used to go around annoying his colleagues at Bell Labs by asking them what were the important problems in their field, and then, after they answered, he would ask why they weren’t working on them. Now, everyone wants to work on "important problems", so why are so few people working on important problems? And the obvious answer is that working on the important problems doesn’t get you an 80% probability of getting one more publication in the next three months. And most decision algorithms will eliminate options like that before they’re even considered. The question will just be phrased as, "Of the things that will reliably keep me on my career track and not embarrass me, which is most important?"

And to be fair, the system is not at all set up to support people who want to work on high-risk problems. It’s not even set up to socially support people who want to work on high-risk problems. In Silicon Valley a failed entrepreneur still gets plenty of respect, which Paul Graham thinks is one of the primary reasons why Silicon Valley produces a lot of entrepreneurs and other places don’t. Robin Hanson is a truly excellent cynical economist and one of his more cynical suggestions is that the function of academia is best regarded as the production of prestige, with the production of knowledge being something of a byproduct. I can’t do justice to his development of that thesis in a few words (keywords: hanson academia prestige) but the key point I want to take away is that if you work on a famous problem that lots of other people are working on, your marginal contribution to human knowledge may be small, but you’ll get to affiliate with all the other prestigious people working on it.

And these are all factors which contribute to academia, metaphorically speaking, looking for its keys under the lamppost where the light is better, rather than near the car where it lost them. Because on a sheer gut level, the really important problems are often scary. There’s a sense of confusion and despair, and if you affiliate yourself with the field, that scent will rub off on you.

But if you try to bring order to an absence of chaos—to some field where things are already in nice, neat order and there is no sense of confusion and despair—well, the results are often well described in a little document you may have heard of called the Crackpot Index. Not that this is the only thing crackpot high-scorers are doing wrong, but the point stands, you can’t revolutionize the atomic theory of chemistry because there isn’t anything wrong with it.

We can’t all be doing basic science, but people who see scary, unknown, confusing problems that no one else seems to want to go near and think "I wouldn’t want to work on that!" have got their priorities exactly backward.

JB: The never-ending quest for prestige indeed has unhappy side-effects in academia. Some of my colleagues seem to reason as follows:

If Prof. A can understand Prof. B’s work, but Prof. B can’t understand Prof. A, then Prof. A must be smarter—so Prof. A wins.

But I’ve figured out a way to game the system. If I write in a way that few people can understand, everyone will think I’m smarter than I actually am! Of course I need someone to understand my work, or I’ll be considered a crackpot. But I’ll shroud my work in jargon and avoid giving away my key insights in plain language, so only very smart, prestigious colleagues can understand it.

On the other hand, tenure offers immense opportunities for risky and exciting pursuits if one is brave enough to seize them. And there are plenty of folks who do. After all, lots of academics are self-motivated, strong-willed rebels.

This has been on my mind lately since I’m trying to switch from pure math to something quite different. I’m not sure what, exactly. And indeed that’s why I’m interviewing you!

(Next week: Yudkowsky on The Art of Rationality, and what it means to be rational.)


Whenever there is a simple error that most laymen fall for, there is always a slightly more sophisticated version of the same problem that experts fall for. – Amos Tversky


This Week’s Finds (Week 310)

28 February, 2011

I first encountered Gregory Benford through his science fiction novels: my favorite is probably In the Ocean of Night.

Later I learned that he’s an astrophysicist at U.C. Irvine, not too far from Riverside where I teach. But I only actually met him through my wife. She sometimes teaches courses on science fiction, and like Benford, she has some involvement with the Eaton Collection at U.C. Riverside—the largest publicly accessible SF library in the world. So, I was bound to eventually bump into him.

When I did, I learned about his work on electromagnetic filaments near the center of our galaxy—see "week252" for more. I also learned he was seriously interested in climate change, and that he was going to the Asilomar International Conference on Climate Intervention Technologies—a controversial get-together designed to hammer out some policies for research on geoengineering.

Benford is a friendly but no-nonsense guy. Recently he sent me an email mentioning my blog, and said: "Your discussions on what to do are good, though general, while what we need is specifics NOW." Since I’d been meaning to interview him for a while, this gave me the perfect opening.

JB: You’ve been thinking about the future for a long time, since that’s part of your job as a science fiction writer.  For example, you’ve written a whole series about the expansion of human life through the galaxy.  From this grand perspective, global warming might seem like an annoying little road-bump before the ride even gets started.  How did you get interested in global warming? 

GB: I liked writing about the far horizons of our human prospect; it’s fun. But to get even above the envelope of our atmosphere in a sustained way, we have to stabilize the planet. Before we take on the galaxy, let’s do a smaller problem.

JB: Good point. We can’t all ship on out of here, and the way it’s going now, maybe none of us will, unless we get our act together.

Can you remember something that made you think "Wow, global warming is a really serious problem"?  As you know, not everyone is convinced yet.

GB: I looked at the migration of animals and then the steadily northward march of trees. They don’t read newspapers—the trees become newspapers—so their opinion matters more. Plus the retreat of the Arctic Sea ice in summer, the region of the world most endangered by the changes coming. I first focused on carbon capture using the CROPS method. I’m the guy who first proposed screening the Arctic with aerosols to cool it in summer.

JB: Let’s talk about each in turn. "CROPS" stands for Crop Residue Oceanic Permanent Sequestration. The idea sounds pretty simple: dump a lot of crop residues—stalks, leaves and stuff—on the deep ocean floor. That way, we’d be letting plants suck CO2 out of the atmosphere for us.

GB: Agriculture is the world’s biggest industry; we should take advantage of it. That’s what gave Bob Metzger and me the idea: collect farm waste and sink it to the bottom of the ocean, whence it shall not return for 1000 years. Cheap, easy, doable right now.

JB: But we have to think about what’ll happen if we dump all that stuff into the ocean, right? After all, the USA alone creates half a gigatonne of crop residues each year, and world-wide it’s ten times that. I’m getting these numbers from your papers:

• Robert A. Metzger and Gregory Benford, Sequestering of atmospheric carbon through permanent disposal of crop residue, Climatic Change 49 (2001), 11-19.

• Stuart E. Strand and Gregory Benford, Ocean sequestration of crop residue carbon: recycling fossil fuel carbon back to deep sediments, Environmental Science and Technology 43 (2009), 1000-1007.

Since we’re burning over 7 gigatonnes of carbon each year, burying 5 gigatonnes of crop waste is just enough to make a serious dent in our carbon footprint. But what’ll that much junk do at the bottom of the ocean?

GB: We’re testing the chemistry of how farm waste interacts with deep ocean sites offshore Monterey Bay right now. Here’s a picture of a bale 3.2 km down:

JB: I’m sure our audience will have more questions about this… but the answers to some are in your papers, and I want to spend a bit more time on your proposal to screen the Arctic. There’s a good summary here:

• Gregory Benford, Climate controls, Reason Magazine, November 1997.

But in brief, it sounds like you want to test the results of spraying a lot of micron-sized dust into the atmosphere above the Arctic Sea during the summer. You suggest diatomaceous earth as an option, because it’s chemically inert: just silica. How would the test work, exactly, and what would you hope to learn?

GB: The US has inflight refueling aircraft such as the KC-10 Extender that, with minor changes, could spread aerosols at relevant altitudes, and pilots who know how to fly big sausages filled with fluids.



Rather than diatomaceous earth, I now think ordinary SO2 or H2S will work, if there’s enough water at the relevant altitudes. Turns out the pollutant issue is minor, since it would be only a percent or so of the SO2 already in the Arctic troposphere. The point is to spread aerosols to diminish sunlight and look for signals of less sunlight on the ground, changes in sea ice loss rates in summer, etc. It’s hard to do a weak experiment and be sure you see a signal. Doing regional experiments helps, so you can see a signal before the aerosols spread much. It’s a first step, an in-principle experiment.

Simulations show it can stop the sea ice retreat. Many fear if we lose the sea ice in summer ocean currents may alter; nobody really knows. We do know that the tundra is softening as it thaws, making roads impassible and shifting many wildlife patterns, with unforeseen long term effects. Cooling the Arctic back to, say, the 1950 summer temperature range would cost maybe $300 million/year, i.e., nothing. Simulations show to do this globally, offsetting say CO2 at 500 ppm, might cost a few billion dollars per year. That doesn’t help ocean acidification, but it’s a start on the temperature problem.

JB: There’s an interesting blog on Arctic political, military and business developments:

• Anatoly Karlin, Arctic Progress.

Here’s the overview:

Today, global warming is kick-starting Arctic history. The accelerating melting of Arctic sea ice promises to open up circumpolar shipping routes, halving the time needed for container ships and tankers to travel between Europe and East Asia. As the ice and permafrost retreat, the physical infrastructure of industrial civilization will overspread the region [...]. The four major populated regions encircling the Arctic Ocean—Alaska, Russia, Canada, Scandinavia (ARCS)—are all set for massive economic expansion in the decades ahead. But the flowering of industrial civilization’s fruit in the thawing Far North carries within it the seeds of its perils. The opening of the Arctic is making border disputes more serious and spurring Russian and Canadian military buildups in the region. The warming of the Arctic could also accelerate global warming—and not just through the increased economic activity and hydrocarbons production. One disturbing possibility is that the melting of the Siberian permafrost will release vast amounts of methane, a greenhouse gas that is far more potent than CO2, into the atmosphere, and tip the world into runaway climate change.

But anyway, unlike many people, I’m not mentioning risks associated with geoengineering in order to instantly foreclose discussion of it, because I know there are also risks associated with not doing it. If we rule out doing anything really new because it’s too expensive or too risky, we might wind up locking ourselves in a "business as usual" scenario. And that could be even more risky—and perhaps ultimately more expensive as well.

GB: Yes, no end of problems. Most impressive is how they look like a descending spiral, self-reinforcing.

Certainly countries now scramble for Arctic resources, trade routes opened by thawing—all likely to become hotly contested strategic assets. So too melting Himalayan glaciers can perhaps trigger "water wars" in Asia—especially India and China, two vast lands of very different cultures. Then, coming on later, come rising sea levels. Florida starts to go away. The list is endless and therefore uninteresting. We all saturate.

So droughts, floods, desertification, hammering weather events—they draw ever less attention as they grow more common. Maybe Darfur is the first "climate war." It’s plausible.

The Arctic is the canary in the climate coalmine. Cutting CO2 emissions will take far too long to significantly affect the sea ice. Permafrost melts there, giving additional positive feedback. Methane release from the not-so-perma-frost is the most dangerous amplifying feedback in the entire carbon cycle. As John Nissen has repeatedly called attention to, the permafrost permamelt holds a staggering 1.5 trillion tons of frozen carbon, about twice as much carbon as is in the atmosphere. Much would emerge as methane. Methane is 25 times as potent a heat-trapping gas as CO2 over a century, and 72 times as potent over the first 20 years! The carbon is locked in a freezer. Yet that’s the part of the planet warming up the fastest. Really bad news:

• Kevin Schaefer, Tingjun Zhang, Lori Bruhwiler and Andrew P. Barrett, Amount and timing of permafrost carbon release in response to climate warming, Tellus, 15 February 2011.

Abstract: The thaw and release of carbon currently frozen in permafrost will increase atmospheric CO2 concentrations and amplify surface warming to initiate a positive permafrost carbon feedback (PCF) on climate. We use surface weather from three global climate models based on the moderate warming, A1B Intergovernmental Panel on Climate Change emissions scenario and the SiBCASA land surface model to estimate the strength and timing of the PCF and associated uncertainty. By 2200, we predict a 29-59% decrease in permafrost area and a 53-97 cm increase in active layer thickness. By 2200, the PCF strength in terms of cumulative permafrost carbon flux to the atmosphere is 190±64 gigatonnes of carbon. This estimate may be low because it does not account for amplified surface warming due to the PCF itself and excludes some discontinuous permafrost regions where SiBCASA did not simulate permafrost. We predict that the PCF will change the arctic from a carbon sink to a source after the mid-2020s and is strong enough to cancel 42-88% of the total global land sink. The thaw and decay of permafrost carbon is irreversible and accounting for the PCF will require larger reductions in fossil fuel emissions to reach a target atmospheric CO2 concentration.

Particularly interesting is the slowing of thermohaline circulation. In John Nissen’s "two scenarios" work there’s an uncomfortably cool future—if the Gulf Stream were to be diverted by meltwater flowing into the NW Atlantic. There’s also an unbearably hot future, if methane from the not-so-permafrost causes global warming to spiral out of control. So we have a terrifying menu.

JB: I recently interviewed Nathan Urban here. He explained a paper where he estimated the chance that the Atlantic current you’re talking about could collapse. (Technically, it’s the Atlantic meridional overturning circulation, not quite the same as the Gulf Stream.) They got a 10% chance of it happening in two centuries, assuming a business as usual scenario. But there are a lot of uncertainties in the modeling here.

Back to geoengineering. I want to talk about some ways it could go wrong, how soon we’d find out if it did, and what we could do then.

For example, you say we’ll put sulfur dioxide in the atmosphere below 15 kilometers, and most of the ozone is above 20 kilometers. That’s good, but then I wonder how much sulfur dioxide will diffuse upwards. As the name suggests, the stratosphere is "stratified" —there’s not much turbulence. That’s reassuring. But I guess one reason to do experiments is to see exactly what really happens.

GB: It’s really the only way to go forward. I fear we are now in the Decade of Dithering that will end with the deadly 2020s. Only then will experiments get done and issues engaged. All else, as tempting as ideas and simulations are, spells delay unless coupled with real field experiments—from nozzle sizes on up to albedo measures—which finally decide.

JB: Okay. But what are some other things that could go wrong with this sulfur dioxide scheme? I know you’re not eager to focus on the dangers, but you must be able to imagine some plausible ones: you’re an SF writer, after all. If you say you can’t think of any, I won’t believe you! And part of good design is looking for possible failure modes.

GB: Plenty can go wrong with so vast an idea. But we can learn from volcanoes, that give us useful experiments, though sloppy and noisy ones, about putting aerosols into the air. Monitoring those can teach us a lot with little expense.

We can fail to get the aerosols to avoid clumping, so they fall out too fast. Or we can somehow trigger a big shift in rainfall patterns—a special danger in a system already loaded with surplus energy, as is already displaying anomalies like the bitter winters in Europe, floods in Pakistan, drought in Darfur. Indeed, some of Alan Robock’s simulations of Arctic aerosol use show a several percent decline in monsoon rain—though that may be a plus, since flooding is the #1 cause of death and destruction during the Indian monsoon.

Mostly, it might just plain fail to work. Guessing outcomes is useless, though.  Here’s where experiment rules, not simulations. This is engineering, which learns from mistakes. Consider the early days of aviation. Having more time to develop and test a system gives more time to learn how to avoid unwanted impacts. Of course, having a system ready also increases the probability of premature deployment; life is about choices and dangers.

More important right now than developing capability, is understanding the consequences of deployment of that capability by doing field experiments. One thing we know: both science and engineering advance most quickly by using the dance of theory with experiment. Neglecting this, preferring only experiment, is a fundamental mistake.

JB: Switching gears slightly: in March last year you went to the Asilomar Conference on climate intervention technologies. I’ve read the report:

• Asilomar Scientific Organizing Committee, The Asilomar Conference Recommendations on Principles for Research into Climate Engineering Techniques, Climate Institute, Washington DC, 2010.

It seems unobjectionable and a bit bland, no doubt deliberately so, with recommendations like this:

"Public participation and consultation in research planning and oversight, assessments, and development of decision-making mechanisms and processes must be provided."

What were some interesting things that you learned there? And what’ll happen next?

GB: It was the Woodstock of the policy wonks. I found it depressing. Not much actual science got discussed, and most just fearlessly called for more research, forming of panels and committees, etc. This is how bureaucracy digests a problem, turning it quite often into fertilizer.

I’m a physicist who does both theory and experiment. I want to see work that combines those to give us real information and paths to follow. I don’t see that anywhere now. Congress might hand out money for it but after the GAO report on geoengineering last September there seems little movement.

I did see some people pushing their carbon capture companies, to widespread disbelief. The simple things we could do right now like our CROPS carbon capture proposal are neglected, while entrepreneur companies hope for a government scheme to pay for sucking CO2 from the air. That’ll be the day!—far into the crisis, I think, maybe several decades from now. I also saw fine ideas pushed aside in favor of policy wonk initiatives. It was a classic triumph of process over results. As in many areas dominated by social scientists, people seemed to be saying, "Nobody can blame us if we go through the motions."

That Decade of Dithering is upon us now. The great danger is that tipping points may not be obvious, even as we cross them. They may present as small events that nonetheless take us over a horizon from which we can never return.

For example, the loss of Greenland ice. Once the ice sheet melts down to an altitude below that needed to maintain it, we’ve lost it. The melt lubricates the glacier base and starts a slide we cannot stop. There are proposals for how to block that—essentially, draw the water out from the base as fast as it appears—but nobody’s funding such studies.

A reasonable, ongoing climate control program might cost $100 million annually. That includes small field experiments, trials with spraying aerosols, etc. We now spend about $5 billion per year globally studying the problem, so climate control studies would be 1/50 of that.

Even now, we may already be too late for a tipping point—we still barely glimpse the horrors we could be visiting on our children and their grandchildren’s grandchildren.

JB: I think a lot of young people are eager to do something. What would be your advice, especially to future scientists and engineers? What should they do? The problems seem so huge, and most so-called "adults" are shirking their responsibilities—perhaps hoping they’ll be dead before things get too bad.

GB: One reason people are paralyzed is simple: major interests would get hurt—coal, oil, etc. The fossil fuel industry is the second largest in the world; #1 is agriculture. We have ~50 trillion dollars of infrastructure invested in it. That and inertia—we’ve made the crucial fuel of our world a Bad Thing, and prohibition never works with free people. Look at the War on Drugs, now nearing its 40th anniversary.

That’s why I think adaptation—dikes, water conservation, reflecting roofs and blacktop to cool cities and lower their heating costs, etc.—is a smart way to prepare. We should also fund research in mineral weathering as a way to lock up CO2, which not only consumes CO2 but can also generate ocean alkalinity. The acidification of the oceans is undeniable, easily measured, and accelerating. Plus geoengineering, which is probably the only fairly cheap, quick way to damp the coming chaos for a while. A stopgap, but we’re going to need plenty of those.

JB: And finally, what about you? What are you doing these days? Science fiction? Science? A bit of both?

GB: Both, plus. Last year I published a look at how we viewed the future in the 20th Century, The Wonderful Future We Never Had, and have a novel in progress now cowritten with Larry Niven—about a Really Big Object. Plus some short stories and journalism.

My identical twin brother Jim & I published several papers looking at SETI from the perspective of those who would pay the bills for a SETI beacon, and reached conclusions opposite from what the SETI searches of the last half century have sought. Instead of steady, narrowband signals near 1 GHz, it is orders of magnitude cheaper to radiate pulsed, broadband beacon signals nearer 10 GHz. This suggests a new way to look for pulsed signals, which some are trying to find. We may have been looking for the wrong thing all along. The papers are on the arXiv:

• James Benford, Gregory Benford and Dominic Benford, Messaging with cost optimized interstellar beacons.

• Gregory Benford, James Benford and Dominic Benford, Searching for cost optimized interstellar beacons.

For math types, David Wolpert and I have shown that Newcomb’s paradox arises from confusions in the statement, so is not a paradox:

• David H. Wolpert and Gregory Benford, What does Newcomb’s paradox teach us?

JB: The next guest on this show, Eliezer Yudkowsky, has also written about Newcomb’s paradox. I should probably say what it is, just for folks who haven’t heard yet. I’ll quote Yudkowsky’s formulation, since it’s nice and snappy:

A superintelligence from another galaxy, whom we shall call Omega, comes to Earth and sets about playing a strange little game. In this game, Omega selects a human being, sets down two boxes in front of them, and flies away.

Box A is transparent and contains a thousand dollars.
Box B is opaque, and contains either a million dollars, or nothing.

You can take both boxes, or take only box B.

And the twist is that Omega has put a million dollars in box B if and only if Omega has predicted that you will take only box B.

Omega has been correct on each of 100 observed occasions so far—everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars. (We assume that box A vanishes in a puff of smoke if you take only box B; no one else can take box A afterward.)

Before you make your choice, Omega has flown off and moved on to its next game. Box B is already empty or already full.

Omega drops two boxes on the ground in front of you and flies off.

Do you take both boxes, or only box B?

If you say you’d take both boxes, I’ll argue that’s stupid: everyone who did that so far got just a thousand dollars, while the folks who took only box B got a million!

If you say you’d take only box B, I’ll argue that’s stupid: there has got to be more money in both boxes than in just one of them!

So, this puzzle has a kind of demonic attraction. Lots of people have written about it, though personally I’m waiting until a superintelligence from another galaxy actually shows up and performs this stunt.

Hmm—I see your paper uses Bayesian networks! I’ve been starting to think about those lately.

But I know that’s not all you’ve been doing.

GB: I also started several biotech companies 5 years ago, spurred in part by the agonizing experience of watching my wife die of cancer for decades, ending in 2002. They’re genomics companies devoted to extending human longevity by upregulating genes we know confer some defenses against cardio, neurological and other diseases. Our first product just came out, StemCell100, and did well in animal and human trials.

So I’m staying busy. The world gets more interesting all the time. Compared with growing up in the farm country of Alabama, this is a fine way to live.

JB: It’s been great to hear what you’re up to. Best of luck on all these projects, and thanks for answering my questions!


Few doubt that our climate stands in a class by itself in terms of complexity. Though much is made of how wondrous our minds are, perhaps the most complex entity known is our biosphere, in which we are mere mayflies. Absent a remotely useful theory of complexity in systems, we must proceed cautiously. – Gregory Benford


This Week’s Finds (Week 309)

17 February, 2011

In the next issues of This Week’s Finds, I’ll return to interviewing people who are trying to help humanity deal with some of the risks we face.

First I’ll talk to the science fiction author and astrophysicist Gregory Benford. I’ll ask him about his ideas on “geoengineering” — proposed ways of deliberately manipulating the Earth’s climate to counteract the effects of global warming.

After that, I’ll spend a few weeks asking Eliezer Yudkowsky about his ideas on rationality and “friendly artificial intelligence”. Yudkowsky believes that the possibility of dramatic increases in intelligence, perhaps leading to a technological singularity, should command more of our attention than it does.

Needless to say, all these ideas are controversial. They’re exciting to some people — and infuriating, terrifying or laughable to others. But I want to study lots of scenarios and lots of options in a calm, level-headed way without rushing to judgement. I hope you enjoy it.

This week, I want to say a bit more about the Hopf bifurcation!

Last week I talked about applications of this mathematical concept to climate cycles like the El Niño – Southern Oscillation. But over on the Azimuth Project, Graham Jones has explained an application of the same math to a very different subject:

Quantitative ecology, Azimuth Project.

That’s one thing that’s cool about math: the same patterns show up in different places. So, I’d like to take advantage of his hard work and show you how a Hopf bifurcation shows up in a simple model of predator-prey interactions.

Suppose we have some rabbits that reproduce endlessly, with their numbers growing at a rate proportional to their population. Let x(t) be the number of animals at time t. Then we have:

\frac{d x}{d t} = r x

where r is the growth rate. This gives exponential growth: it has solutions like

x(t) = x_0 e^{r t}

To get a slightly more realistic model, we can add ‘limits to growth’. Instead of a constant growth rate, let’s try a growth rate that decreases as the population increases. Let’s say it decreases in a linear way, and drops to zero when the population hits some value K. Then we have

\frac{d x}{d t} = r (1-x/K) x

This is called the “logistic equation”. K is known as the “carrying capacity”. The idea is that the environment has enough resources to support this population. If the population is less, it’ll grow; if it’s more, it’ll shrink.

If you know some calculus you can solve the logistic equation by hand by separating the variables and integrating both sides; it’s a textbook exercise. The solutions are called “logistic functions”, and they look sort of like this:



The above graph shows the simplest solution:

x = \frac{e^t}{e^t + 1}

of the simplest logistic equation in the world:

\frac{ d x}{d t} = (1 - x)x

Here the carrying capacity is 1. Populations less than 1 sound a bit silly, so think of it as 1 million rabbits. You can see how the solution starts out growing almost exponentially and then levels off. There’s a very different-looking solution where the population starts off above the carrying capacity and decreases. There’s also a silly solution involving negative populations. But whenever the population starts out positive, it approaches the carrying capacity.
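Both claims are easy to check numerically. Here's a minimal sketch in Python (the step size, tolerances, and starting population are my own choices, just for illustration): the closed-form solution satisfies the differential equation, and an Euler integration starting from a small positive population climbs up to the carrying capacity.

```python
import math

def logistic_rhs(x):
    # right-hand side of dx/dt = (1 - x) x, carrying capacity 1
    return (1 - x) * x

def x_exact(t):
    # the closed-form solution above, with x(0) = 1/2
    return math.exp(t) / (math.exp(t) + 1)

# check dx/dt matches the right-hand side, via central differences
for t in [0.0, 1.0, 3.0]:
    eps = 1e-5
    slope = (x_exact(t + eps) - x_exact(t - eps)) / (2 * eps)
    assert abs(slope - logistic_rhs(x_exact(t))) < 1e-8

# a small positive population approaches the carrying capacity
x, h = 0.01, 0.01
for _ in range(3000):  # Euler steps out to t = 30
    x += h * logistic_rhs(x)
```

By the end of the Euler run, x is within a fraction of a percent of the carrying capacity.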

The solution where the population just stays at the carrying capacity:

x = 1

is called a “stable equilibrium”, because it’s constant in time and nearby solutions approach it.

But now let’s introduce another species: some wolves, which eat the rabbits! So, let x be the number of rabbits, and y the number of wolves. Before the rabbits meet the wolves, let’s assume they obey the logistic equation:

\frac{ d x}{d t} = x(1-x/K)

And before the wolves meet the rabbits, let’s assume they obey this equation:

\frac{ d y}{d t} = -y

so that their numbers would decay exponentially to zero if there were nothing to eat.

So far, not very interesting. But now let’s include a term that describes how predators eat prey. Let’s say that on top of the above effect, the predators grow in numbers, and the prey decrease, at a rate proportional to:

x y/(1+x).

For small numbers of prey and predators, this means that predation increases nearly linearly with both x and y. But if you have one wolf surrounded by a million rabbits in a small area, the rate at which it eats rabbits won’t double if you double the number of rabbits! So, this formula includes a limit on predation as the number of prey increases.

Okay, so let’s try these equations:

\frac{ d x}{d t} = x(1-x/K) - 4x y/(x+1)

and

\frac{ d y}{d t} = -y + 2x y/(x+1)

The constants 4 and 2 here have been chosen for simplicity rather than realism.

Before we plunge ahead and get a computer to solve these equations, let’s see what we can do by hand. Setting d x/d t = 0 gives the interesting parabola

y = \frac{1}{4}(1-x/K)(x+1)

together with the boring line x = 0. (If you start with no prey, that’s how it will stay. It takes bunny to make bunny.)

Setting d y/d t = 0 gives the interesting line

x=1

together with the boring line y = 0.

The interesting parabola and the interesting line separate the x y plane into four parts, so these curves are called separatrices. They meet at the point

x = 1, \quad y = \frac{1}{2} (1 - 1/K)

which of course is an equilibrium, since d x / d t = d y / d t = 0 there. But when K < 1 this equilibrium occurs at a negative value of y, and negative populations make no sense.

So, if K < 1 there is no equilibrium population, and with a bit more work one can see the problem: the wolves die out. For larger values of K there is an equilibrium population. But the nature of this equilibrium depends on K: that’s the interesting part.

We could figure this out analytically, but let’s look at two of Graham’s plots. Here’s a solution when K = 2.5:

The gray curves are the separatrices. The red curve shows a solution of the equations, with the numbers showing the passage of time. So, you can see that the solution spirals in towards the equilibrium. That’s what you expect of a stable equilibrium.
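You can reproduce this spiral yourself. Here's a minimal sketch in Python (the Runge-Kutta integrator, starting point, and step size are my own choices, not Graham's code): at K = 2.5 the trajectory ends up at the equilibrium x = 1, y = (1/2)(1 − 1/K) = 0.3.

```python
def deriv(x, y, K):
    # the predator-prey equations from the text
    dx = x * (1 - x / K) - 4 * x * y / (x + 1)
    dy = -y + 2 * x * y / (x + 1)
    return dx, dy

def rk4_step(x, y, K, h):
    # one classical fourth-order Runge-Kutta step
    k1x, k1y = deriv(x, y, K)
    k2x, k2y = deriv(x + h / 2 * k1x, y + h / 2 * k1y, K)
    k3x, k3y = deriv(x + h / 2 * k2x, y + h / 2 * k2y, K)
    k4x, k4y = deriv(x + h * k3x, y + h * k3y, K)
    x += h / 6 * (k1x + 2 * k2x + 2 * k3x + k4x)
    y += h / 6 * (k1y + 2 * k2y + 2 * k3y + k4y)
    return x, y

K, h = 2.5, 0.01
x, y = 0.5, 0.5                 # arbitrary positive starting populations
for _ in range(int(200 / h)):   # integrate out to t = 200
    x, y = rk4_step(x, y, K, h)
# the trajectory spirals in to x = 1, y = (1/2)(1 - 1/K) = 0.3
```

Recording (x, y) at each step and plotting would trace out the spiral in Graham's figure.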

Here’s a picture when K = 3.5:

The red and blue curves are two solutions, again numbered to show how time passes. The red curve spirals in towards the dotted gray curve. The blue one spirals out towards it. The gray curve is also a solution. It’s called a “stable limit cycle” because it’s periodic, and nearby solutions move closer and closer to it.

With a bit more work, we could show analytically that whenever 1 < K < 3 there is a stable equilibrium. As we increase K, when K passes 3 this stable equilibrium suddenly becomes a tiny stable limit cycle. This is a Hopf bifurcation!
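One way to do that bit of work is to linearize at the equilibrium and watch the real part of the Jacobian's eigenvalues cross zero at K = 3. A sketch in Python, with the Jacobian entries computed by hand from the equations above (my own calculation, worth double-checking):

```python
import cmath

def real_part_at_equilibrium(K):
    """Real part of an eigenvalue of the Jacobian of
       f = x(1 - x/K) - 4xy/(x+1),  g = -y + 2xy/(x+1)
       at the equilibrium x = 1, y = (1/2)(1 - 1/K)."""
    y = 0.5 * (1 - 1 / K)
    a = 1 - 2 / K - y   # df/dx = 1 - 2x/K - 4y/(x+1)^2, at x = 1
    b = -2.0            # df/dy = -4x/(x+1), at x = 1
    c = y / 2           # dg/dx = 2y/(x+1)^2, at x = 1
    d = 0.0             # dg/dy = -1 + 2x/(x+1), at x = 1
    tr, det = a + d, a * d - b * c
    lam = (tr + cmath.sqrt(tr * tr - 4 * det)) / 2
    return lam.real

print(real_part_at_equilibrium(2.5))  # negative: stable spiral
print(real_part_at_equilibrium(3.5))  # positive: unstable, limit cycle appears
```

The trace of the Jacobian works out to (K − 3)/(2K), so the eigenvalues' real part changes sign exactly at K = 3: the signature of a Hopf bifurcation.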

Now, what if we add noise? We saw the answer last week: where we before had a stable equilibrium, we now can get irregular cycles — because the noise keeps pushing the solution away from the equilibrium!

Here’s how it looks for K=2.5 with white noise added:

The following graph shows a longer run in the noisy K=2.5 case, with rabbits (x) in black and wolves (y) in gray. Click on the picture to make it bigger:



There is irregular periodicity—and as you’d expect, the predators tend to lag behind the prey. A burst in the rabbit population causes a rise in the wolf population; a lot of wolves eat a lot of rabbits; a crash in rabbits causes a crash in wolves.
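To reproduce the effect, add a small white-noise kick at each step of the integration (the Euler-Maruyama method). Here's a sketch in Python—the noise strength, random seed, and positivity floor are my own choices, not Graham's: starting exactly at the equilibrium, the noise sustains fluctuations that a deterministic run would leave at zero.

```python
import math
import random

random.seed(1)

def deriv(x, y, K):
    # the predator-prey equations from the text
    dx = x * (1 - x / K) - 4 * x * y / (x + 1)
    dy = -y + 2 * x * y / (x + 1)
    return dx, dy

K, h, sigma = 2.5, 0.01, 0.02   # sigma is an assumed noise strength
x, y = 1.0, 0.3                 # start exactly at the equilibrium
xs = []
for _ in range(int(200 / h)):
    dx, dy = deriv(x, y, K)
    # Euler-Maruyama: deterministic step plus a sqrt(h)-scaled Gaussian kick
    x += h * dx + sigma * math.sqrt(h) * random.gauss(0, 1)
    y += h * dy + sigma * math.sqrt(h) * random.gauss(0, 1)
    x, y = max(x, 1e-6), max(y, 1e-6)  # crude floor: keep populations positive
    xs.append(x)

# without noise the run would sit at the equilibrium forever;
# with noise the rabbit population keeps fluctuating
half = xs[len(xs) // 2:]
mean = sum(half) / len(half)
spread = (sum((v - mean) ** 2 for v in half) / len(half)) ** 0.5
```

Plotting xs against time gives irregular oscillations near the natural frequency of the spiral, much like the noisy runs shown above.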

This sort of phenomenon is actually seen in nature sometimes. The most famous case involves the snowshoe hare and the lynx in Canada. It was first noted by MacLulich:

• D. A. MacLulich, Fluctuations in the Numbers of the Varying Hare (Lepus americanus), University of Toronto Studies Biological Series 43, University of Toronto Press, Toronto, 1937.

The snowshoe hare is also known as the “varying hare”, because its coat varies in color quite dramatically. In the summer it looks like this:



In the winter it looks like this:



The Canada lynx is an impressive creature:



But don’t be too scared: it only weighs 8-11 kilograms, nothing like a tiger or lion.

Down in the United States, the same lynx species went extinct in Colorado around 1973—but now it’s back!

• Colorado Division of Wildlife, Success of the Lynx Reintroduction Program, 27 September, 2010.

In Canada, at least, the lynx rely on the snowshoe hare for 60% to 97% of their diet. I suppose this is one reason the hare has evolved such magnificent protective coloration. This is also why the hare and lynx populations are tightly coupled. They rise and crash in irregular cycles that look a bit like what we saw in our simplified model:



This cycle looks a bit more strongly periodic than Graham’s graph, so to fit this data, we might want to choose parameters that give a limit cycle rather than a stable equilibrium.

But I should warn you, in case it’s not obvious: everything about population biology is infinitely more complicated than the models I’ve shown you so far! Some obvious complications: snowshoe hare breed in the spring, their diet varies dramatically over the course of the year, and the lynx also eat rodents and birds, carrion when it’s available, and sometimes even deer. Some less obvious ones: the hare will eat dead mice and even dead hare when they’re available, and the lynx can control the size of their litter depending on the abundance of food. And I’m sure all these facts are just the tip of the iceberg. So, it’s best to think of models here as crude caricatures designed to illustrate a few features of a very complex system.

I hope someday to say a bit more and go a bit deeper. Do any of you know good books or papers to read, or fascinating tidbits of information? Graham Jones recommends this book for some mathematical aspects of ecology:

• Michael R. Rose, Quantitative Ecological Theory, Johns Hopkins University Press, Maryland, 1987.

Alas, I haven’t read it yet.

Also: you can get Graham’s R code for predator-prey simulations at the Azimuth Project.


Under carefully controlled experimental circumstances, the organism will behave as it damned well pleases. – the Harvard Law of Animal Behavior


This Week’s Finds (Week 308)

24 December, 2010

Last week we met the El Niño-Southern Oscillation, or ENSO. I like to explain things as I learn about them. So, often I look back and find my explanations naive. But this time it took less than a week!

What did it was reading this:

• J. D. Neelin, D. S. Battisti, A. C. Hirst et al., ENSO theory, J. Geophys. Res. 103 (1998), 14261-14290.

I wouldn’t recommend this to the faint of heart. It’s a bit terrifying. It’s well-written, but it tells the long and tangled tale of how theories of the ENSO phenomenon evolved from 1969 to 1998 — a period that saw much progress, but did not end with a neat, clean understanding of this phenomenon. It’s packed with hundreds of references, and sprinkled with somewhat intimidating remarks like:

The Fourier-decomposed longitude and time dependence of these eigensolutions obey dispersion relations familiar to every physical oceanographer…

Nonetheless I found it fascinating — so, I’ll pick off one small idea and explain it now.

As I’m sure you’ve heard, climate science involves some extremely complicated models: some of the most complex known to science. But it also involves models of lesser complexity, like the "box model" explained by Nathan Urban in "week304". And it also involves some extremely simple models that are designed to isolate some interesting phenomena and display them in their Platonic ideal form, stripped of all distractions.

Because of their simplicity, these models are great for mathematicians to think about: we can even prove theorems about them! And simplicity goes along with generality, so the simplest models of all tend to be applicable — in a rough way — not just to the Earth’s climate, but to a vast number of systems. They are, one might say, general possibilities of behavior.

Of course, we can’t expect simple models to describe complicated real-world situations very accurately. That’s not what they’re good for. So, even calling them "models" could be a bit misleading. It might be better to call them "patterns": patterns that can help organize our thinking about complex systems.

There’s a nice mathematical theory of these patterns… indeed, several such theories. But instead of taking a top-down approach, which gets a bit abstract, I’d rather tell you about some examples, which I can illustrate using pictures. But I didn’t make these pictures. They were created by Tim van Beek as part of the Azimuth Code Project. The Azimuth Code Project is a way for programmers to help save the planet. More about that later, at the end of this article.

As we saw last time, the ENSO cycle relies crucially on interactions between the ocean and atmosphere. In some models, we can artificially adjust the strength of these interactions, and we find something interesting. If we set the interaction strength to less than a certain amount, the Pacific Ocean will settle down to a stable equilibrium state. But when we turn it up past that point, we instead see periodic oscillations! Instead of a stable equilibrium state where nothing happens, we have a stable cycle.

This pattern, or at least one pattern of this sort, is called the "Hopf bifurcation". There are various differential equations that exhibit a Hopf bifurcation, but here’s my favorite:

\frac{d x}{d t} =  -y + \beta  x - x (x^2 + y^2)

\frac{d y}{d t} =  \; x + \beta  y - y (x^2 + y^2)

Here x and y are functions of time, t, so these equations describe a point moving around on the plane. It’s easier to see what’s going on in polar coordinates:

\frac{d r}{d t} = \beta r - r^3

\frac{d \theta}{d t} = 1
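To see why, differentiate r^2 = x^2 + y^2 and use the equations above:

r \frac{d r}{d t} = x \frac{d x}{d t} + y \frac{d y}{d t} = \beta (x^2 + y^2) - (x^2 + y^2)^2 = \beta r^2 - r^4

Dividing by r gives the radial equation. Similarly, for the angle:

\frac{d \theta}{d t} = \frac{1}{r^2} \left( x \frac{d y}{d t} - y \frac{d x}{d t} \right) = \frac{x^2 + y^2}{r^2} = 1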

The angle \theta goes around at a constant rate while the radius r does something more interesting. When \beta \le 0, you can see that any solution spirals in towards the origin! Or, if it starts at the origin, it stays there. So, we call the origin a "stable equilibrium".

Here’s a typical solution for \beta = -1/4, drawn as a curve in the x y plane. As time passes, the solution spirals in towards the origin:

The equations are more interesting for \beta > 0. Then dr/dt = 0 whenever

\beta r - r^3 = 0

This has two solutions, r = 0 and r = \sqrt{\beta}. Since r = 0 is a solution, the origin is still an equilibrium. But now it’s not stable: if r is between 0 and \sqrt{\beta}, we’ll have \beta r - r^3 > 0, so our solution will spiral out, away from the origin and towards the circle r = \sqrt{\beta}. So, we say the origin is an "unstable equilibrium". On the other hand, if r starts out bigger than \sqrt{\beta}, our solution will spiral in towards that circle.

Here’s a picture of two solutions for \beta = 1:

The red solution starts near the origin and spirals out towards the circle r = \sqrt{\beta}. The green solution starts outside this circle and spirals in towards it, soon becoming indistinguishable from the circle itself. So, this equation describes a system where x and y quickly settle down to a periodic oscillating behavior.

Since solutions that start anywhere near the circle r = \sqrt{\beta} will keep going round and round getting closer to this circle, it’s called a "stable limit cycle".
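You can check all this numerically. Tim’s pictures were made with a Java implementation of the Euler scheme; here’s a minimal sketch of the same idea in Python, for the radial equation alone. (The function name and parameter choices are mine, not from the Azimuth code.)

```python
def simulate_radius(beta, r0, dt=0.001, steps=20000):
    """Integrate dr/dt = beta*r - r**3 with the forward Euler scheme."""
    r = r0
    for _ in range(steps):
        r += dt * (beta * r - r ** 3)
    return r

# For beta <= 0, every solution decays toward the stable equilibrium r = 0:
print(simulate_radius(-0.25, 1.0))  # close to 0

# For beta > 0, solutions approach the circle r = sqrt(beta) = 1
# from inside and from outside alike:
print(simulate_radius(1.0, 0.1))
print(simulate_radius(1.0, 2.0))
```

Starting inside or outside the circle, the radius settles onto r = \sqrt{\beta}, just as the spiralling pictures show.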

This is what the Hopf bifurcation is all about! We’ve got a dynamical system that depends on a parameter, and as we change this parameter, a stable fixed point becomes unstable, and a stable limit cycle forms around it.

This isn’t quite a mathematical definition yet, but it’s close enough for now. If you want something a bit more precise, try:

• Yuri A. Kuznetsov, Andronov-Hopf bifurcation, Scholarpedia, 2006.

Now, clearly the Hopf bifurcation idea is too simple for describing real-world weather cycles like the ENSO. In the Hopf bifurcation, our system settles down into an orbit very close to the limit cycle, which is perfectly periodic. The ENSO cycle is only roughly periodic:



The time between El Niños varies between 3 and 7 years, averaging around 4 years. There can also be two El Niños without an intervening La Niña, or vice versa. One can try to explain this in various ways.

One very simple, general idea is to add random noise to whatever differential equation we were using to model the ENSO cycle, obtaining a so-called stochastic differential equation: a differential equation describing a random process. Richard Kleeman discusses this idea in Tim Palmer’s book:

• Richard Kleeman, Stochastic theories for the irregularity of ENSO, in Stochastic Physics and Climate Modelling, eds. Tim Palmer and Paul Williams, Cambridge U. Press, Cambridge, 2010, pp. 248-265.

Kleeman mentions three general theories for the irregularity of the ENSO. They all involve the idea of separating the weather into "modes" — roughly speaking, different ways that things can oscillate. Some modes are slow and some are fast. The ENSO cycle is defined by the behavior of certain slow modes, but of course these interact with the fast modes. So, there are various options:

  1. Perhaps the relevant slow modes interact with each other in a chaotic way.
  2. Perhaps the relevant slow modes interact with each other in a non-chaotic way, but also interact with chaotic fast modes, which inject noise into what would otherwise be simple periodic behavior.
  3. Perhaps the relevant slow modes interact with each other in a chaotic way, and also interact in a significant way with chaotic fast modes.

Kleeman reviews work on the first option but focuses on the second. The third option is the most complicated, so the pessimist in me suspects that’s what’s really going on. Still, it’s good to start by studying simple models!

How can we get a simple model that illustrates the second option? Simple: take the model we just saw, and add some noise! This idea is discussed in detail here:

• H. A. Dijkstra, L. M. Frankcombe and A. S. von der Heydt, The Atlantic Multidecadal Oscillation: a stochastic dynamical systems view, in Stochastic Physics and Climate Modelling, eds. Tim Palmer and Paul Williams, Cambridge U. Press, Cambridge, 2010, pp. 287-306.

This paper is not about the ENSO cycle, but another one, which is often nicknamed the AMO. I would love to talk about it — but not now. Let me just show you the equations for a Hopf bifurcation with noise:

\frac{d x}{d t} =  -y + \beta  x - x (x^2 + y^2) + \lambda \frac{d W_1}{d t}

\frac{d y}{d t} =  \; x + \beta  y - y (x^2 + y^2) + \lambda \frac{d W_2}{d t}

They’re the same as before, but with some new extra terms at the end: that’s the noise.

This could easily get a bit technical, but I don’t want it to. So, I’ll just say some buzzwords and let you click on the links if you want more detail. W_1 and W_2 are two independent Wiener processes, so they describe Brownian motion in the x and y coordinates. When we differentiate a Wiener process we get white noise. So, we’re adding some amount of white noise to the equations we had before, and the number \lambda says precisely how much. That means that x and y are no longer specific functions of time: they’re random functions, also known as stochastic processes.
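In practice one simulates such equations with the Euler–Maruyama scheme: at each time step, each white-noise term contributes an independent Gaussian increment with standard deviation \sqrt{dt}. Here is a minimal Python sketch of that idea for the equations above. (The Azimuth pictures themselves came from Tim’s Java code; the names and parameters here are my own.)

```python
import math, random

def hopf_with_noise(beta, lam, steps=50000, dt=0.001, x=1.0, y=0.0, seed=17):
    """Euler-Maruyama scheme for the Hopf system with white noise:
    each step adds a Gaussian increment of standard deviation sqrt(dt),
    scaled by lam, independently to x and y."""
    rng = random.Random(seed)
    sqrt_dt = math.sqrt(dt)
    path = []
    for _ in range(steps):
        r2 = x * x + y * y
        dx = (-y + beta * x - x * r2) * dt + lam * rng.gauss(0.0, sqrt_dt)
        dy = ( x + beta * y - y * r2) * dt + lam * rng.gauss(0.0, sqrt_dt)
        x, y = x + dx, y + dy
        path.append((x, y))
    return path

# With beta > 0 the trajectory hugs the limit cycle r = 1, up to noisy wiggles:
path = hopf_with_noise(beta=1.0, lam=0.1)
radii = [math.sqrt(x * x + y * y) for x, y in path[10000:]]
print(min(radii), max(radii))
```

After an initial transient, the radius stays close to \sqrt{\beta} = 1, wiggling randomly about the limit cycle.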

If this were a math course, I’d feel obliged to precisely define all the terms I just dropped on you. But it’s not, so I’ll just show you some pictures!

If \beta = 1 and \lambda = 0.1, here are some typical solutions:

They look similar to the solutions we saw before for \beta = 1, but now they have some random wiggles added on.

(You may be wondering what this picture really shows. After all, I said the solutions were random functions of time, not specific functions. But it’s tough to draw a "random function". So, to get one of the curves shown above, what Tim did is randomly choose a function according to some rule for computing probabilities, and draw that.)

If we turn up the noise, our solutions get more wiggly. If \beta = 1 and \lambda = 0.3, they look like this:

In these examples, \beta > 0, so we would have a limit cycle if there weren’t any noise — and you can see that even with noise, the solutions approximately tend towards the limit cycle. So, we can use an equation of this sort to describe systems that oscillate, but in a somewhat random way.

But now comes the really interesting part! Suppose \beta \le 0. Then we’ve seen that without noise, there’s no limit cycle: any solution quickly spirals in towards the origin. But with noise, something a bit different happens. If \beta = -1/4 and \lambda = 0.1 we get a picture like this:

We get irregular oscillations even though there’s no limit cycle! Roughly speaking, the noise keeps knocking the solution away from the stable fixed point at x = y = 0, so it keeps going round and round, but in an irregular way. It may seem to be spiralling in, but if we waited a bit longer it would get kicked out again.
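You can see this numerically too. The sketch below (my own illustrative Python, not the Azimuth Java code) compares \beta = -1/4 with and without noise: without noise the average radius collapses toward zero, while with noise it hovers at a small but distinctly nonzero value, because the solution keeps getting kicked away from the origin.

```python
import math, random

def mean_radius(beta, lam, steps=200000, dt=0.001, seed=42):
    """Time-averaged radius of an Euler-Maruyama trajectory of the
    Hopf system with white noise of strength lam, started at (1, 0)."""
    rng = random.Random(seed)
    x, y = 1.0, 0.0
    sqrt_dt = math.sqrt(dt)
    total = 0.0
    for _ in range(steps):
        r2 = x * x + y * y
        x, y = (x + (-y + beta * x - x * r2) * dt + lam * rng.gauss(0.0, sqrt_dt),
                y + ( x + beta * y - y * r2) * dt + lam * rng.gauss(0.0, sqrt_dt))
        total += math.sqrt(x * x + y * y)
    return total / steps

print(mean_radius(-0.25, 0.0))  # no noise: decays toward 0
print(mean_radius(-0.25, 0.1))  # with noise: small but clearly nonzero
```

So even below the bifurcation, noise sustains irregular cycling around the origin.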

This is a lot easier to see if we plot just x as a function of t. Then we can run our solution for a longer time without the picture becoming a horrible mess:

If you compare this with the ENSO cycle, you’ll see they look roughly similar:



That’s nice. Of course it doesn’t prove that a model based on a Hopf bifurcation plus noise is "right" — indeed, we don’t really have a model until we’ve chosen variables for both x and y. But it suggests that a model of this sort could be worth studying.

If you want to see how the Hopf bifurcation plus noise is applied to climate cycles, I suggest starting with the paper by Dijkstra, Frankcombe and von der Heydt. If you want to see it applied to the El Niño-Southern Oscillation, start with Section 6.3 of the ENSO theory paper, and then dig into the many references. Here it seems a model with \beta > 0 may work best. If so, noise is not required to keep the ENSO cycle going, but it makes the cycle irregular.

To a mathematician like me, what’s really interesting is how the addition of noise "smooths out" the Hopf bifurcation. When there’s no noise, the qualitative behavior of solutions jumps drastically at \beta = 0. For \beta \le 0 we have a stable equilibrium, while for \beta > 0 we have a stable limit cycle. But in the presence of noise, we get irregular cycles not only for \beta > 0 but also \beta \le 0. This is not really surprising, but it suggests a bunch of questions. Such as: what are some quantities we can use to describe the behavior of "irregular cycles", and how do these quantities change as a function of \lambda and \beta?

You’ll see some answers to this question in Dijkstra, Frankcombe and von der Heydt’s paper. However, if you’re a mathematician, you’ll instantly think of dozens more questions — like, how can I prove what these guys are saying?

If you make any progress, let me know. If you don’t know where to start, you might try the Dijkstra et al. paper, and then learn a bit about the Hopf bifurcation, stochastic processes, and stochastic differential equations:

• John Guckenheimer and Philip Holmes, Nonlinear Oscillations, Dynamical Systems and Bifurcations of Vector Fields, Springer, Berlin, 1983.

• Zdzisław Brzeźniak and Tomasz Zastawniak, Basic Stochastic Processes: A Course Through Exercises, Springer, Berlin, 1999.

• Bernt Øksendal, Stochastic Differential Equations: An Introduction with Applications, 6th edition, Springer, Berlin, 2003.

Now, about the Azimuth Code Project. Tim van Beek started it just recently, but the Azimuth Project seems to be attracting people who can program, so I have high hopes for it. Tim wrote:

My main objectives to start the Azimuth Code Project were:

• to have a central repository for the code used for simulations or data analysis on the Azimuth Project,

• to have an online free access repository and make all software open source, to enable anyone to use the software, for example to reproduce the results on the Azimuth Project. Also to show by example that this can and should be done for every scientific publication.

Of less importance is:

• to implement the software with an eye to software engineering principles.

This is less important because the world of numerical high performance computing differs significantly from the rest of the software industry: it has special requirements, and it is not at all clear which paradigms useful for the rest will turn out to be useful here. Nevertheless I’m confident that parts of the scientific community will profit from a closer interaction with software engineering.

So, if you like programming, I hope you’ll chat with us and consider joining in! Our next projects involve limit cycles in predator-prey models, stochastic resonance in some theories of the ice ages, and delay differential equations in ENSO models.

And in case you’re wondering, the code used for the pictures above is a simple implementation in Java of the Euler scheme, using random number generating algorithms from Numerical Recipes. Pictures were generated with gnuplot.


There are two ways of constructing a software design. One way is to make it so simple that there are obviously no deficiencies. And the other way is to make it so complicated that there are no obvious deficiencies. – C.A.R. Hoare

