Calculating Catastrophe

14 June, 2011

This book could be interesting. If you read it, could you tell us what you think?

• Gordon Woo, Calculating Catastrophe, World Scientific Press, Singapore, 2011.

Apparently Dr. Gordon Woo was trained in mathematical physics at Cambridge, MIT and Harvard, and has made his career as a ‘calculator of catastrophes’. He has consulted for the IAEA on the seismic safety of nuclear plants and for BP on offshore oil well drilling—it’ll be fun to see what he has to say about his triumphant success in preventing disasters in both those areas. He now works at a company called Risk Management Solutions, where he models catastrophes for insurance purposes, and has designed a model for terrorism risk.

According to the blurb I got:

This book has been written to explain, to a general readership, the underlying philosophical ideas and scientific principles that govern catastrophic events, both natural and man-made. Knowledge of the broad range of catastrophes deepens understanding of individual modes of disaster. This book will be of interest to anyone aspiring to understand catastrophes better, but will be of particular value to those engaged in public and corporate policy, and the financial markets.

The table of contents lists: Natural Hazards; Societal Hazards; A Sense of Scale; A Measure of Uncertainty; A Matter of Time; Catastrophe Complexity; Terrorism; Forecasting; Disaster Warning; Disaster Scenarios; Catastrophe Cover; Catastrophe Risk Securitization; Risk Horizons.

Maybe you know other good books on the same subject?

For a taste of his thinking, you can try this:

• Gordon Woo, Terrorism risk.

Terrorism sounds like a particularly difficult risk to model, since it involves intelligent agents who try to do unexpected things. But maybe there are still some guiding principles. Woo writes:

It turns out that the number of operatives involved in planning and preparing attacks has a tipping point in respect of the ease with which the dots might be joined by counter-terrorism forces. The opportunity for surveillance experts to spot a community of terrorists, and gather sufficient evidence for courtroom convictions, increases nonlinearly with the number of operatives – above a critical number, the opportunity improves dramatically. This nonlinearity emerges from analytical studies of networks, using modern graph theory methods (Derenyi et al. [21]). Below the tipping point, the pattern of terrorist links may not necessarily betray much of a signature to the counter-terrorism services. However, above the tipping point, a far more obvious signature may become apparent in the guise of a large connected network cluster of dots, which reveals the presence of a form of community. The most ambitious terrorist plans, involving numerous operatives, are thus liable to be thwarted. As exemplified by the audacious attempted replay in 2006 of the Bojinka spectacular, too many terrorists spoil the plot (Woo, [22]).

Intelligence surveillance and eavesdropping of terrorist networks thus constrain the pipeline of planned attacks that logistically might otherwise seem almost boundless. Indeed, such is the capability of the Western forces of counterterrorism, that most planned attacks, as many as 80% to 90%, are interdicted. For example, in the three years before the 7/7/05 London attack, eight plots were interdicted. Yet any non-interdicted planned attack is construed as a significant intelligence failure. The public expectation of flawless security is termed the ‘90-10 paradox.’ Even if 90% of plots are foiled, it is by the 10% which succeed that the security services are ultimately remembered.

Of course the reference to “modern graph theory methods” will be less intimidating or impressive to many readers here than to the average, quite possibly innumerate reader of this document. But here’s the actual reference, in case you’re curious:

• I. Derenyi, G. Palla and T. Vicsek, Clique percolation in random networks, Phys. Rev. Lett. 94 (2005), 160202.

Just for fun, let me summarize the main result, so you can think about how relevant it might be to terrorist networks.

A graph is roughly a bunch of dots connected by edges. A clique in a graph is some subset of dots each of which is connected to every other. So, if dots are people and we draw an edge when two people are friends, a clique is a bunch of people who are all friends with each other—hence the name ‘clique’. But we might also use a clique to represent a bunch of people who are all engaged in the same activity, like a terrorist plot.

We’ve talked here before about Erdős–Rényi random graphs. These are graphs formed by taking a bunch of dots and randomly connecting each pair by an edge with some fixed probability p. In the paper above, the authors argue that for an Erdős–Rényi random graph with N vertices, the chance that most of the cliques with k elements touch each other—two cliques counting as touching when they share k−1 dots—and form one big fat ‘giant component’ shoots up suddenly when

p \ge [(k-1) N]^{-1/(k-1)}

This sort of effect is familiar in many different contexts: it’s called a ‘percolation threshold’. I can guess the implications for terrorist networks that Gordon Woo is alluding to. However, I doubt the details of the math are very important here, since social networks are not well modeled by Erdős–Rényi random graphs.

In the real world, if you and I have a mutual friend, that will increase the chance that we’ll be friends. Similarly, if we share a conspirator, that increases the chance that we’re in the same conspiracy. But in a world where friendship was described by an Erdős–Rényi random graph, that would not be the case!
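Here is a quick simulation of that point, using only Python’s standard library (the graph size and edge probability are arbitrary choices of mine): in an Erdős–Rényi graph, knowing that two dots share a neighbor tells you nothing about whether they’re connected.

```python
import itertools
import random

random.seed(42)
N, p = 200, 0.1

# Build an Erdos-Renyi random graph: connect each pair with probability p.
adj = {v: set() for v in range(N)}
for u, v in itertools.combinations(range(N), 2):
    if random.random() < p:
        adj[u].add(v)
        adj[v].add(u)

# Among pairs that share at least one neighbour, how often are they
# themselves connected?  In an ER graph the answer is still just p.
with_common = have_edge = 0
for u, v in itertools.combinations(range(N), 2):
    if adj[u] & adj[v]:            # u and v have a mutual "friend"
        with_common += 1
        if v in adj[u]:            # ...and are also connected themselves
            have_edge += 1

print(have_edge / with_common)     # stays close to p = 0.1
```

In a real social network this conditional frequency would be noticeably higher than p, because friendship exhibits triadic closure.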

So, while I agree that large terrorist networks are easier to catch than small ones, I don’t think the math of Erdős–Rényi random graphs gives any quantitative insight into how much easier it is.


Your Model Is Verified, But Not Valid! Huh?

12 June, 2011

guest post by Tim van Beek

Among the prominent tools in climate science are complicated computer models. For more on those, try this blog:

• Steve Easterbrook, Serendipity, or What has Software Engineering got to do with Climate Change?

After reading Easterbrook’s blog post about “climate model validation”, and some discussions of this topic elsewhere, I noticed that there is some “computer terminology” floating around that disguises itself as plain English! This has led to some confusion, so I’d like to explain some of it here.

Technobabble: The Quest for Cooperation

Climate change may be the first problem in the history of humankind that has to be tackled on a global scale, by people all over the world working together. Of course, a prerequisite of working together is a mutual understanding and a mutual language. Unfortunately, every single one of the many professions that scientists and engineers engage in has created its own dialect. And most experts are proud of it!

When I read about the confusion that “validation” versus “verification” of climate models has caused, I was reminded of the phrase “technobabble”, which screenwriters for the TV series Star Trek used whenever they had to write a dialog involving the engineers on the Starship Enterprise. Something like this:

“Captain, we have to send an inverse tachyon beam through the main deflector dish!”

“Ok, make it so!”

Fortunately, neither Captain Picard nor the audience had to understand what was really going on.

It’s a bit different in the real world, where not everyone may have the luxury of staying on the sidelines while the trustworthy crew members in the Enterprise’s engine room solve all the problems. We can start today by explaining some software engineering technobabble that came up in the context of climate models. But why would software engineers bother in the first place?

Short Review of Climate Models

Climate models come in a hierarchy of complexity. The simplest ones only try to simulate the energy balance of the planet earth. These are called energy balance models. They don’t take into account the spherical shape of the earth, for example.

At the opposite extreme, the most complex ones try to simulate the material and heat flow of the atmosphere and the oceans on a topographical model of the spinning earth. These are called general circulation models, or GCMs for short. GCMs involve a lot of code—sometimes more than a million lines.

A line of code is basically one instruction for the computer to carry out, like:

add 1/2 and 1/6 and store the result in a variable called e

print e on the console
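In a language like Python, those two instructions might look like this:

```python
e = 1/2 + 1/6   # add 1/2 and 1/6 and store the result in a variable called e
print(e)        # print e on the console
```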

In order to understand what a computer program does, one would in principle have to read and understand every single line of code. And most programs use a lot of other programs, so in theory one would have to understand those, too. This is of course not possible for a single person!

We hope that taking into account a lot of effects, which results in a lot of lines of code, makes the models more accurate. But it certainly means that they are complex enough to be interesting for software engineers.

In the case of software used to run an internet shop, a million lines of code isn’t much. But it is already too big for one single person to handle. Basically, this is where all the problems that software engineering seeks to solve begin.

When more than one person works on a software project things often get complicated.

(From the manual of CVS, the “Concurrent Versions System”.)

Software Design Circle

The job of a software engineer is in some ways similar to the work of an architect. The differences are mainly due to the abstract nature of software. Everybody can see whether a building is finished, but that’s not possible with software. Nevertheless every software project does come to an end, and people have to decide whether or not the product, the software, is finished and does what it should. But since software is so abstract, people have come up with special ideas about how the software “production process” should work and how to tell if the software is correct. I would like to explain these a little bit further.

Stakeholders and Shifts in Stakeholder Analysis

There are many different people working in an office building with different interests: cleaning squads, janitors, plant security, and so on. When you design a new office building, you need to identify and take into account all the different interests of all these groups. Most software projects are similar, and the process just mentioned is usually called stakeholder analysis.

Of course, if you take into account only the groups already mentioned, you’ll build an office building without any offices, because that would obviously be the simplest one to monitor and to keep working. Such an office building wouldn’t make much sense, of course! This is because we made a fatal mistake with our stakeholder analysis: we failed to take into account the most important stakeholders, the people who will actually use the offices. These are the key stakeholders of the office building project.

After all, the primary purpose of an office building is to provide offices. And in the end, if we have an office building without offices, we’ll notice that no one will pay us for our efforts.

Gathering Requirements

While it may be obvious what most people want from an office building, the situation is usually much more abstract, hence much more complicated, for software projects.

This is why software people carry out a requirement analysis, where they ask the stakeholders what they would like the software to do. A requirement for an office building might be, for example, “we need a railway station nearby, because most of the people who will work in the building don’t have cars.” A requirement for a software project might be, for example, “we need the system to send email notifications to our clients on a specific schedule”.

In an ideal world, the requirement analysis would result in a document—usually called something like a system specification—that contains both the requirements and descriptions of the test cases needed to check whether the finished system meets those requirements. For example:

“Employee A lives in an apartment 29 km away from the office building and does not have a car. She gets to work within 30 minutes by using public transportation.”

Verification versus Validation

When we have finished the office building (or the software system), we’ll have to do some acceptance testing, in order to convince our customer that she should pay us (or simply to use the system, if it is for free). When you buy a car, your “acceptance test” is driving away with it—if that does not work, you know that there is something wrong with your car! But for complicated software—or office buildings—we need to agree on what we do to test if the system is finished. That’s what we need the test cases for.

If we are lucky, the relevant test cases will already be described in the system specification, as noted above. But that is not the whole story.

Every scientific community that has its own identity invents its own new language, often borrowing words from everyday language and defining new, surprising, special meanings for them. Software engineers are no different. There are, for example, two very different aspects to testing a system:

• Did we do everything according to the system specification?

and:

• Now that the system is there, and our key stakeholders can see it for themselves, did we get the system specification right: is our product useful to them?

The first is called verification, the second validation. As you can see, software engineers took two almost synonymous words from everyday language and gave them quite different meanings!

For example, if you wrote in the specification for an online book seller:

“we calculate the book price by multiplying the ISBN number by pi”

and the final software system does just that, then the system is verified. But if the book seller would like to stay in business, I bet that he won’t say the system has been validated.
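As a sketch of how absurd this gap can be, here is what that verified-but-not-valid system might look like (the ISBN is made up, and the function name is my own invention, not anything from a real bookseller):

```python
import math

def book_price(isbn: int) -> float:
    """Compute the price exactly as the (absurd) specification demands:
    multiply the ISBN number by pi."""
    return isbn * math.pi

# Verification passes: the code does exactly what the spec says...
assert book_price(9781234567890) == 9781234567890 * math.pi

# ...but validation fails: no customer will pay a 13-digit price.
print(book_price(9781234567890))
```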

Stakeholders of Climate Models

So, for business applications, it’s not quite right to ask “is the software correct?” The really important question is: “is the software as useful for the key stakeholders as it should be?”

But in Mathematics Everything is Either True or False!

One may wonder if this “true versus useful” stuff above makes any sense when we think about a piece of software that calculates, for example, a known mathematical function like a “modified Bessel function of the first kind”. After all, it is defined precisely in mathematics what these functions look like.

If we are talking about creating a program that can evaluate these functions, there are a lot of technical choices that need to be specified. Here is a random example (if you don’t understand it, don’t worry, that is not necessary to get the point):

• Current computers know data types with a finite value range and finite precision only, so we need to agree on which such data type we want as a model of the real or complex numbers. For example, we might want to use the “double precision floating-point format”, which is an international standard.

Another aspect is, for example, “how long may the function take to return a value?” This is an example of a non-functional requirement (see Wikipedia). These requirements will play a role in the implementation too, of course.
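To make this concrete, here is a sketch of such a choice in practice: a modified Bessel function of the first kind, I₀(x), summed as a power series in double precision. The decision to truncate after 40 terms is an arbitrary choice of mine, exactly the kind of technical decision a specification would have to pin down:

```python
def bessel_i0(x: float, terms: int = 40) -> float:
    """Modified Bessel function of the first kind, I_0(x),
    via its power series: sum over m of (x^2/4)^m / (m!)^2.
    Truncating at `terms` terms trades precision for speed."""
    total = 0.0
    term = 1.0                              # the m = 0 term
    for m in range(terms):
        total += term
        term *= (x * x / 4.0) / ((m + 1) ** 2)   # ratio of successive terms
    return total

print(bessel_i0(0.0))   # exactly 1.0
print(bessel_i0(1.0))   # about 1.2660658778
```

For small x this converges to the limits of double precision long before 40 terms; for large x a serious implementation would switch to a different expansion—another choice the specification would have to record.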

However, apart from these technical choices, there is no ambiguity as to what the function should do, so there is no need to distinguish verification and validation. Thank god that mathematics is eternal! A Bessel function will always be the same, for all of eternity.

Unfortunately, this is no longer true when a computer program computes something that we would like to compare to the real world—like, for example, a weather forecast. In this case the computer model will, like all models, capture only some aspects of the real world. Or rather, it will be a specific implementation of a mathematical model of a part of the real world.

Verification will still mean the same thing, if we understand it as the stage where we test whether the individual pieces of the program—the parts that do things that can be defined in a mathematically precise way—compute what they are supposed to. But validation becomes a whole different step if understood in the sense of “is the model useful?”

But Everybody Knows What Weather Is!

But still, does this apply to climate models at all? I mean, everybody knows what “climate” is, and “climate models” should simulate just that, right?

As it turns out, it is not so easy, because climate models serve very different purposes:

• Climate scientists want to test their understanding of basic climate processes, just as physicists calculate a lot of solutions to their favorite theories to gain a better understanding of what these theories can and do model.

• Climate models are also used to analyse observational data, to supplement such data and/or to correct them. Climate models have had success in detecting misconfiguration and biases in observational instruments.

• Finally, climate models are also used for global and/or local predictions of climate change.

The question “is my climate model right?” therefore translates to the question “is my climate model useful?” This question has to refer to a specific use of the model, or rather: to the viewpoint of the key stakeholders.

The Shift of Stakeholders

One problem with the discussions of the past seems to be due to a shift of the key stakeholders. For example: some climate models have been developed as tools for climate scientists to play around with certain aspects of the climate. When the scientists published papers including insights gained from these models, they usually did not publish anything about the implementation details. Mostly, they did not publish anything about the model at all.

This is nothing unusual. After all, a physicist or mathematician will routinely publish her results and conclusions—maybe with proofs. But she is not required to publish every single thought she had to think to produce her results.

But after the results of climate science became a topic in international politics, a change of the key stakeholders occurred: a lot of people outside the climate science community developed an interest in the models. This is a good thing. There is a legitimate need of researchers to limit participation in the review process, of course. But when the results of a scientific community become the basis of far-reaching political decisions, there is a legitimate public interest in the details of the ongoing research process, too. The problem in this case is that the requirements of the new key stakeholders, such as interested software engineers outside the climate research community, are quite different from the requirements of the former key stakeholders, climate scientists.

For example, if you write a program for your own eyes only, there is hardly any need to write detailed documentation for it. If you write it for others to understand, then as a rule of thumb you’ll have to produce at least as much documentation as code.

Back to the Start: Farms, Fields and Forests

As an example of a rather prominent critic of climate models, let’s quote the physicist Freeman Dyson:

The models solve the equations of fluid dynamics and do a very good job of describing the fluid motions of the atmosphere and the oceans.

They do a very poor job of describing the clouds, the dust, the chemistry and the biology of fields, farms and forests. They are full of fudge factors so the models more or less agree with the observed data. But there is no reason to believe the same fudge factors would give the right behaviour in a world with different chemistry, for example in a world with increased CO2.

Let’s assume that Dyson is talking here about GCMs, with all their parametrizations of unresolved processes (which he calls “fudge factors”). Then the first question that comes to my mind is “why would a climate model need to describe fields, farms and forests in more detail?”

I’m quite sure that the answer will depend on what aspects of the climate the model should represent, in what regions and over what timescale.

And that certainly depends on the answer to the question “what will we use our model for?” Dyson seems to assume that the answer to this question is obvious, but I don’t think that this is true. So, maybe we should start with “stakeholder analysis” first.


How Sea Level Rise Will Affect New York

9 June, 2011

Let’s try answering this question on Quora:

How will global warming, and particularly sea level rises, affect New York City?

I doubt sea level rise will be the first way we’ll get badly hurt by global warming. I think it’ll be crop losses caused by floods, droughts and heat waves, and property damage caused by storms. But the question focuses on sea level rise, so perhaps we should think about that… along with any other ways that New York City is particularly susceptible to the effects of global warming.

Suppose you know a lot about New York, but you need an estimate of sea level rise to get started. In the Azimuth Project page on sea level rise, you’ll see a lot of discussion of this subject. Naturally, it’s complicated. But say you just want some numbers. Okay: very roughly, by the end of the century we can expect a sea level rise of at least 0.6 meters, not counting any melting from Greenland and Antarctica, and at most 2 meters, including Greenland and Antarctica. That’s roughly between 2 and 6 feet.

On the other hand, there’s at least one report saying sea levels may rise in the Northeast US at twice the average global rate. What’s the latest word on that?

Now, here’s a website that claims to show what various amounts of sea level rise would do to different areas:

• Firetree.net, Flood maps, including New York City.

Details on how these maps were made are here. One problem is that they focus too much on really big sea level rises: the smallest rise shown is 1 meter, then 2 meters… and it goes up to 60 meters!

Anyway, here’s part of New York City now:

Here it is after a 1-meter (3-foot) sea level rise:


And here’s 2 meters, or 6 feet:


It’s a bit hard to spot the effects in Manhattan. They’re much more noticeable in the low-lying areas between Jersey City and Secaucus. What are those: parks, industrial areas, or suburbs? I’ve heard New Yorkers crack jokes about the ‘swamps of Jersey’…

But of course, a lot of the city is underground. What will happen to subways and other infrastructure, like sewage systems? And what about water supplies? On coastlines, saltwater can infiltrate into surface waters and aquifers. Where does freshwater meet saltwater near New York City? How will the effect of floods and storms change?

And of course, there are other parts of New York City these little maps don’t show: for those, go here. But watch out: at first you’ll see the effect of a 7-meter sea level rise… you’ll need to change the settings to see the effects of a more realistic rise.

If you live in a place that will be flooded, let me know!

Luckily, we don’t have to figure everything out ourselves: the state of New York has a task force devoted to this. And as task forces do, they’ve written a report:

• New York Department of Environmental Conservation, Sea Level Rise Task Force, Final Report.

New York City also has an ambitious environmental plan:

• New York City, PlaNYC 2030.

Finally, let me quote part of this:

• Jim O’Grady, Sea level rise could turn New York into Venice, experts warn, WNYC News, 9 February 2011.

Because it looks ahead 200 years, this article paints a more dire picture than my remarks above:

David Bragdon, Director of the Mayor’s Office of Long-Term Planning & Sustainability, is charged with preparing for the dangers of climate change. He said the city is taking precautions like raising the pumps at a wastewater treatment plant in the Rockaways and building the Willets Point development in Queens on six feet of landfill. The goal is to manage the risk from 100-year storms—one of the most severe. The mayor’s report says by the end of this century, 100-year storms could start arriving every 15 to 35 years.

Klaus Jacob, a Columbia University research scientist who specializes in disaster risk management, said that estimate may be too conservative. “What is now the impact of a 100-year storm will be, by the end of this century, roughly a 10-year storm,” he warned.

Back on the waterfront, oceanographer Malcolm Bowman offered what he said is a suitably outsized solution to this existential threat: storm surge barriers.

They would rise from the waters at Throgs Neck, where Long Island Sound and the East River meet, and at the opening to the lower harbor between the Rockaways and Sandy Hook, New Jersey. Like the barriers on the Thames River that protect London, they would stay open most of the time to let ships pass but close to protect the city during hurricanes and severe storms.

The structures at their highest points would be 30 feet above the harbor surface. Preliminary engineering studies put the cost at around $11 billion.

Jacob suggested a different but equally drastic approach. He said sea level rise may force New Yorkers to pull back from vulnerable neighborhoods. “We will have to densify the high-lying areas and use the low-lying areas as parks and buffer zones,” he said.

In this scenario, New York in 200 years looks like Venice. Concentrations of greenhouse gases in the atmosphere have melted ice sheets in Greenland and Antarctica and raised our local sea level by six to eight feet. Inundating storms at certain times of year swell the harbor until it spills into the streets. Dozens of skyscrapers in Lower Manhattan have been sealed at the base and entrances added to higher floors. The streets of the financial district have become canals.

“You may have to build bridges or get Venice gondolas or your little speed boats ferrying yourself up to those buildings,” Jacob said.

David Bragdon is not comfortable with such scenarios. He’d rather talk about the concrete steps he’s taking now, like updating the city’s flood evacuation plan to show more neighborhoods at risk. That would help the people living in them be better prepared to evacuate.

He said it’s too soon to contemplate the “extreme” step of moving “two, three, four hundred thousand people out of areas they’ve occupied for generations,” and disinvesting “literally billions of dollars of infrastructure in those areas.” On the other hand: “Another extreme would be to hide our heads in the sand and say, ‘Nothing’s going to happen.’”

Bragdon said he doesn’t think New Yorkers of the future will have to retreat very far from shore, if at all, but he’s not sure. And he would neither commit to storm surge barriers nor eliminate them as an option. He said what’s needed is more study—and that he’ll have further details in April, when the city updates PlaNYC.

Jacob warned that in preparing for disaster, no matter how far off, there’s a gulf between study and action. “There’s a good intent,” he said of New York’s climate change planning to date. “But, you know, mother nature doesn’t care about intent. Mother nature wants to see resiliency. And that is questionable, whether we have that.”


This Week’s Finds (Week 314)

6 June, 2011

This week I’d like to start an interview with Thomas Fischbacher, who teaches at the School of Engineering Sciences at the University of Southampton. He’s a man with many interests, but we’ll mainly talk about sustainable agriculture, leading up to an idea called "permaculture".

JB: Your published work is mainly in theoretical physics, and some of it is quite mathematical. You have a bunch of papers on theories of gravity related to string theory, and another bunch on magnetic materials, maybe with some applications to technology. But you’re also interested in sustainable agricultural and building practices! That seems like quite a leap… but I may be trying to make a similar leap myself, so I find it fascinating. How did you get interested in these other topics, which seem so very different in flavor?

TF: I think it’s quite natural that one’s interests are wider than what one actually publishes about—quite likely, the popularity of your blog, which is about all sorts of interesting things, attests to this.

However, if something that seems interesting catches my attention, I often experience a strong drive to come to an advanced level of understanding—at least mastering the key mechanisms. As far as I can think back, my studies have been predominantly self-directed, often following very unusual and sometimes obscure paths, so I sometimes happen to know a few quite odd things. And actually, considering research, I get a lot of fun out of combining advanced ideas from very different fields. Most of my articles are of that type, e.g. "sparse tensor numerics meets database algorithms and metalinguistics", or "Feynman diagrams meet lazy evaluation and continuation coding", or "Exceptional groups meet sensitivity back-propagation". Basically, I like to see myself in the role of a bridge-builder. Very often, powerful ideas that have been developed in one field are completely unknown in another where they actually can be used to great advantage.

Concerning sustainability, it actually was mostly soil physics that initially got me going. When dealing with a highly complex phenomenon such as human civilization, it is sometimes very useful to take a close look at matter and energy flows in order to get an overview of the important processes that determine the structure and long term behaviour of a system. Just doing a few order-of-magnitude guesstimates and looking at typical soil erosion and soil formation rates, I found that, from that perspective, quite a number of fundamental things did not add up and told a story very different from the oh-so-glorious picture of human progress. That’s one of the great things about physical reasoning: it allows one to independently make up one’s mind about things where one otherwise would have little other choice than to believe what one is told. And so, I started to look deeper.

JB: So what did you discover? I can’t resist mentioning something I learned from a book Kevin Kelly gave me:

• Neil Roberts, The Holocene: an Environmental History, Blackwell, London, 1998.

It describes how the landscape of Europe has been cycling through glacial and interglacial periods every 100,000 years or so for the last 1.3 million years. It’s a regular sort of pattern!

As a glacial period ends, first comes a phase when birches and pines immigrate from southern refuges into what had been tundra. Then comes a phase when mixed deciduous forest takes over, with oak and elm becoming dominant. During this period, rocky soils turn into brown forest soils. Next, leaching from rocks in glacial deposits leads to a shift from neutral to acid soils, which favor trees like spruce. Then, as spruce take over, fallen needles make the soil even more acid. Together with cooling temperatures as the next glacial approaches, this leads to the replacement of deciduous forest by heathland and pine forests. Finally, glaciers move in and scrape away the soil. And then the cycle repeats!

I thought this was really cool: it’s like seasons, but on a grand scale. And I thought this quote was even cooler:

It was believed by classical authors such as Varro and Seneca that there had once been a "Golden Age", "when man lived on those things which the virgin earth produced spontaneously" and when "the very soil was more fertile and productive." If ever there was such a "Golden Age" then surely it was in the early Holocene, when soils were still unweathered and uneroded, and when Mesolithic people lived off the fruits of the land without the physical toil of grinding labour.

Still unweathered and uneroded! So it takes an ice age to reset the clock and bring soils back to an optimum state?

But your discovery was probably about the effects of humans…

TF: There are a number of different processes, all of them important, that are associated with very different time scales. A general issue here is that, as a society, we have difficulty grasping how our life experience is shaped by our cultural heritage, by our species’ history, and by events that happened tens of thousands of years ago.

Coming to the cycles of glaciation, you are right that these shaped the soils in places such as Europe, by grinding down rock and exposing freshly weathered material. But it is also interesting to look at places where this has not happened—to give us a sort of outside perspective; glaciation was fairly minimal in Australia, for example. Also, the other main player, volcanism, did not have much of an effect in exposing fresh minerals there either. And so, Australian soils are extremely old—millions of years, even tens of millions of years—and very poor in mineral nutrients, as so much has been leached out. This has profound influences not only on the vegetation but also on the fauna, and of course on the people who inhabited this land for tens of thousands of years, and on their culture: the Aborigines. Now, I don’t want to claim that the Aborigines actually managed to evolve a fully "sustainable" system of land management—but it should be pretty self-evident that they must have developed some fairly interesting biological knowledge over such a long time.

Talking about long time scales and the distant past, it sometimes takes a genius to spot something that in hindsight is obvious, but that no one noticed because, unusually, the really important thing is the one that is missing. Have you ever wondered, for example, what animal might eat an avocado and disperse its fairly large seed? Like other fruit (botanically speaking, the avocado is a berry, as is the banana), the avocado co-evolved with animals that would eat its fruits—but no such animal is around today. Basically, the reason is that we are looking at a broken ecosystem: the co-evolutionary partners of the avocado, such as gomphotheres, became extinct some thousands of years ago.


A blink with respect to the time scales of evolution, but an awfully long time for human civilizations. There is an interesting book on this subject:

• Connie Barlow, The Ghosts of Evolution: Nonsensical Fruit, Missing Partners, and Other Ecological Anachronisms, Basic Books, New York, 2002. (Also listen to this song.)

Considering soils, the cycle of glaciations should already hold an important lesson for us. It is important to note that the plow is basically an invention that (somewhat) suits European agriculture and its geologically young soils. What happens if we take this way of farming to the tropics? While lush and abundant rainforests may seem to suggest otherwise, the soils there are old and nutrient-poor, and most mineral nutrients are stored and cycled by the vegetation. If we clear this, we release a flush of nutrients, but as the annual crops we normally grow are not that good at holding on to these nutrients, we rapidly destroy the fertility of the land.

There are alternative options for how to produce food in such a situation, but before we look into this, it might be useful to know a few important ballpark figures related to agriculture—plow agriculture in particular.

The most widely used agricultural unit for "mass per area" is "metric tons per hectare", but I will instead use kilograms per square meter (as some people may find that easier to relate to), 1 kilogram per square meter being 10 tons/ha. Depending on the climate (windspeeds, severity of summer rains, etc.), plow agriculture will typically lead to soil loss rates due to erosion of something in the ballpark of 0.5 to 5 kilograms per square meter per year. In the US, erosion rates in the past have been as high as 4 kilograms per square meter per year and beyond, but have come down markedly. Still, soil loss rates of around 1 kilogram per square meter per year are not uncommon for the US. The problem is that, under good conditions, soil creation rates are in the ballpark of 0.02 to 0.2 kilograms per square meter per year. So, our present agriculture is destroying soil much faster than new soil gets formed. And, quite insidiously, erosion will always carry away the most fertile top layer of soil first.

It is worthwhile to compare this with agricultural yields: in Europe, good wheat yields are in the range of 0.6 kilograms per square meter per year, but yields depend a lot on water availability, and the world average is just 0.3 kilograms per square meter per year. In any case, the plow actually produces much more eroded land than food. You can see more information here:

• Food and Agriculture Organization of the UN, FAOSTAT.
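These ballpark figures can be combined into a quick back-of-envelope check. The sketch below uses illustrative mid-range values taken from the numbers quoted above (rough assumptions for the sake of the calculation, not measurements):

```python
# Back-of-envelope comparison of soil erosion, soil formation, and wheat
# yields, all in kilograms per square meter per year. The numbers are
# illustrative mid-range values from the ballpark figures quoted above.

erosion_rate = 1.0        # typical soil loss under US plow agriculture
formation_rate = 0.1      # midpoint of the 0.02-0.2 soil creation range
wheat_yield_world = 0.3   # world-average wheat yield

net_soil_loss = erosion_rate - formation_rate
soil_per_kg_wheat = net_soil_loss / wheat_yield_world

print(f"Net soil loss: {net_soil_loss:.1f} kg/m^2/yr")
print(f"Net soil lost per kg of wheat: {soil_per_kg_wheat:.1f} kg")
```

With these mid-range values, each kilogram of wheat comes at a net cost of roughly three kilograms of topsoil—which is the sense in which the plow produces more eroded land than food.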

Concerning ancient reports of a "Golden Age"—I am not so sure about this anymore. By and large, civilizations mostly seem to have had quite a negative long term impact on the soil fertility that sustained them—and a number of them failed due to that. But all things considered, we often find that some particular groups of species have a very positive long term effect on fertility and counteract nutrient leaching—tropical forests bear witness to that.

Now… what single species we can think of would be best equipped to make a positive contribution towards long-term fertility building?

JB: Hey, no fair—I thought I was the one asking the questions!

Hmm, I don’t know. Maybe some sort of rhizobium? You know, those bacteria that associate themselves to the roots of plants like clover, alfalfa and beans, and take nitrogen from the air and convert it to a form that’s usable by the plants?

But you said "one single species", so this answer is probably not right: there are lots of species of rhizobia.

TF: The answer is quite astounding—and it lies at the heart of understanding sustainability. The species that could have the largest positive impact on soil fertility is Homo sapiens—us! Now, considering the enormous ecological damage that has been done by that single species, such a proposition may seem quite outrageous. But note that I asked about the potential to make a positive contribution, not actual behaviour as observed so far.

JB: Oh! I should have guessed that. Darn!

TF: When I bring up this point, many people think that I might have some specific technique in mind, a "miracle cure", or "silver bullet" idea such as, say, biochar—which seems to be pretty en vogue now—or genetically engineered miracle plants, or some such thing.

But no—this is about a much more fundamental issue. Nature eventually will heal ecological wounds—but quite often, she is not in a particular hurry. Left to her own devices, she may take thousands of years to rebuild soils and turn devastated land back into fertile ecosystems. Now, this is where we enter the scene. With our outstanding intellectual power we can read landscapes, think about key flows—flows of energy, water, minerals, and living things through a site—and if necessary, provide a little bit of guidance to help nature take the next step. This way, we can often speed up the regeneration clock a hundredfold or more!

Let me give some specific examples. Technologically, these are often embarrassingly simple—yet at the same time highly sophisticated, in the sense that they address issues that are obvious only once one has developed an eye for them.

The first one is imprinting—in arid regions, this can be a mind-blowingly simple yet quite effective technology to kick-start a biological succession pathway.

JB: What’s "imprinting"?

TF: One could say, earthworks for rainwater harvesting, but on the centimeter scale. Basically, it is a simple way to implement a passive resource-concentration system for water and organic matter that "nucleates" the transition back from desert to prairie—kind of like providing ice microcrystals in supercooled water. The Imprinting Foundation has a good website. In particular, take a look at this:

• The Imprinting Foundation, Success Stories.

This video is also well worth watching—part of the "Global Gardener" series:

• Bill Mollison, Dryland permaculture strategies—part 3, YouTube.

Here is another example—getting the restoration of rainforest going in the tropical grasslands of Colombia.

• Zero Emissions Research and Initiatives (ZERI), Reforestation.

Here, the challenge is that the soil originally was so acidic (around pH 4) that aluminium went into the soil solution as toxic Al3+. What eventually did the trick was to plant a nurse crop of Caribbean pines, Pinus caribaea (on 80 square kilometers—no mean feat), that had been provided with the right mycorrhizal symbiont (Pisolithus tinctorius, I think), which enabled the trees to grow in very acidic soil. An amazing subject in themselves, fungi, by the way.

These were big projects—but similar ideas work on pretty much any scale. Friends of mine have shown me great pictures of the progress of a degraded site in Nepal where they did something very simple a number of years ago—putting up four poles with strings between them, on which birds like to gather. And personally, since I started to seriously ponder the issue of soil compaction and gave double-digging a try in my own garden a few years ago, the results have been so amazing that I wonder why anyone bothers to garden with annuals any other way.

JB: What’s "double-digging"?

TF: A method to relieve soil compaction. As we humans live our lives above the soil, processes below can be rather alien to us—yet, this is where many very important things go on. By and large, most people do not realize how deep plant roots go—and how badly they are affected by compaction.

The term "double-digging" refers to digging out the top foot of topsoil from the bed, and then using a gardening fork to also loosen the next foot of soil (often subsoil) before putting back the topsoil. Now, this method does have its drawbacks, and also, it is not the "silver bullet" single miracle recipe for high gardening yields some armchair gardeners who have read Jeavons’s book believe it to be. But if your garden soil is badly compacted, as it often is the case when starting a new garden, double-digging may be a very good idea.

JB: Interesting!

TF: So, there is no doubt that our species can vastly accelerate natural healing processes. Indeed, we can link our lives with natural processes in a way that satisfies our needs while we benefit the whole species assembly around us—but there are some very non-obvious aspects to this. Hacking a hole into the forest to live "in harmony with nature" most certainly won’t do the trick.

The importance of the key insight—we have the capacity to act as the most powerful repair species around—cannot be overstated. There is at present a very powerful mental block that shows up in many discussions of sustainability: looking at our past conduct, it is easy to get the idea that Homo sapiens’ modus operandi is to seek out the most valuable/powerful/convenient resource first, use it up, and then, driven by need, find ways to make do with the next most valuable resource, calling this "progress"—actually a downward spiral. I’ve indeed seen adverts for the emerging Liquefied Natural Gas industry that glorified this as "a motor of progress and growth". Now, the only reason we consider this at all is that the more easily accessible, easy-to-handle fuels have been pretty much used up. Same with deep-sea oil drilling. What kind of "progress" is it when the major difference between the recent oil spill in the Gulf of Mexico and the Ixtoc oil spill in 1979 is that this time there’s a mile of water sitting on top of the well—because we used up the more easily accessible oil?

• Rachel Maddow, Ixtoc Deepwater Horizon parallels, YouTube.

Now, there are two dominant attitudes toward this observation that we despoil one resource after another.

One is some form of "denial". This is quite widespread amongst professional economists. Ultimately, the absurdity of their argument becomes clear when it is condensed to "sustainability is just one problem among many, and we are the better at solving problems the stronger our economy—so we need to use up resources fast to get rich fast so that we can afford to address the problems caused by us using up resources fast." Reminds me of a painter who lived in the village I grew up in. He was known to work very swiftly, and when asked why he always was in such a hurry, wittily replied: "but I have to get the job done before I run out of paint!"

The other attitude is some sort of self-hate that regards the key problem not as an issue of very poor management, but inherently linked to human existence. According to that line of thinking, collapse is inevitable and we should just make sure we do not gobble up resources so fast that we leave nothing for our children to despoil so that they can have a chance to live.

It is clear that as long as there is a deadlock between these two attitudes, we will not make much progress towards a noticeably more sustainable society. And waiting just exacerbates the problem. So, the key question is: does it really have to be like this—are we doomed to live by destroying the resources which we depend on? Well—every cow can do better than that. Cow-dung is more valuable in terms of fertility than what the cow has eaten. So, if we are such an amazing species—as we like to claim by calling ourselves "Homo sapiens"—why should we fail so miserably here?

JB: I can see all sorts of economic, political and cultural reasons why we do so badly. But it might be a bit less depressing to talk about how we can do better. For example, you mentioned paying attention to flows through systems.

TF: The important thing about flows is that they are a great conceptual tool for getting some first useful ideas about the processes that really matter for the behaviour of complex systems—both for analysis and for design.

That’s quite an exciting subject, but as you mentioned it, I’d first like to briefly address the issue of depressing topics that frequently arise when taking a deeper look into sustainability—in particular, the present situation. Why? Because I think that our capacity as a society to deal with such emotions will be decisive for how well we will do or how badly we will fail when addressing a set of convergent challenges. On the one hand, it is very important to understand that such emotions are an essential part of human experience. On the other hand, they can have a serious negative impact on our judgment capacity. So, an important part of the sustainability challenge is to build the capacity to act and make sound decisions under emotional stress. That sounds quite obvious, but my impression is that, still, most people either are not yet fully aware of this point, or do not see what this idea might mean in practice.

JB: I’ve been trying to build that capacity myself. I don’t think mathematics or theoretical physics were very good at preparing me. Indeed, I suspect that many people in these fields enjoy not only the feeling of "certainty" they can provide, but also the calming sense that the universe is beautiful and perfect. When it comes to environmental issues there’s a lot more uncertainty, and also frequently the sense that the world is messed up—thanks to us! On top of that there’s a sense of urgency, and frustration. All this can be rather stressful. However, there are ways to deal with that, and I’m busy learning them.

TF: I think there is one particularly important lesson I have learned about the role of emotions, especially fear. Important because it probably is quite a fundamental part of the human condition. Emotions do have the power to veto some conclusions from ever surfacing in one’s conscious mind if they would be painful to bear. They can temporarily suspend sound reasoning and also access to otherwise sound memory.

This is extremely sinister: you are not acting rationally at all, you are in fact driven by one of the most non-rational aspects of your existence, your fear; yet you yourself have next to no chance of ever discovering this, as your emotions abuse your cognitive abilities to systematically shield you from conscious access to any insight that would stand a chance of making you question your analysis.

JB: I think we can all name other people who suffer from this problem. But of course the challenge is to see it in ourselves, while it’s happening.

TF: Insidiously, having exceptional reasoning abilities will not help the least bit here: a person with a powerful mind may be misguided as easily as anybody else by deep inner fears; it’s just that the mind of a person with strong reasoning skills will work harder and spin more sophisticated tales than that of an intellectually average person. So, this essentially is a question of "how fast a runner do you have to be to out-run your own shadow?" How intelligent do you have to be to recognize when your emotions cause your mind to abuse your powerful reasoning abilities to deceive itself? Well, the answer probably is that the capacity to appreciate in oneself the problem of self-deception is related not to intelligence, but to wisdom. I really admire the insight that "it’s hard to fight an enemy who has outposts in your head."

JB: Richard Feynman put it another way: "The first principle is that you must not fool yourself—and you are the easiest person to fool." And if you’re sure you’re not fooling yourself, then you definitely are.

TF: Of course, everything that has an impact on our ability to conduct a sound self-assessment of our own behaviour matters a lot for sustainability related issues.

But enough about the role of the human mind in all this. This certainly is a fascinating and important subject, but at the end of the day, there is a lot of ecosystem rehabilitation to be done, and mapping flows is a powerful approach to getting an idea about what is broken and how to repair it.

JB: Okay, great. But I think our readers need a break. Next time we’ll pick up where we left off, and talk about flows.


Permaculture is a philosophy of working with, rather than
against nature; of protracted and thoughtful observation rather than protracted and thoughtless labour; and of looking at plants and animals in all their functions, rather than treating any area as a single-product system.
– Bill Mollison


Earth System Research for Global Sustainability

4 June, 2011

Some good news!

The International Mathematical Union or IMU is mainly famous for running the International Congress of Mathematicians every four years. But they do other things, too. The new vice-president of the IMU is Christiane Rousseau. Rousseau was already spearheading the Mathematics of Planet Earth 2013 project. Now she’s trying to get the IMU involved in a ten-year research initiative on sustainability.

As you can see from this editorial, she treats climate change and sustainability with the seriousness they deserve. Let’s hope more mathematicians join in!

I would like to get involved somehow, but I’m not exactly sure how.

Editorial

I had the privilege of being elected Vice-president of the IMU at the last General Assembly, and it is now five months that I am following the activities of the IMU. The subjects discussed at the Executive Committee are quite diverse, from the establishment of the permanent office to the ranking and pricing of journals, to mathematics in developing countries and the future ICM, and the members of the
Executive Committee tend to specialize on one or two dossiers. Although I am a pure mathematician myself, I am becoming more and more interested in the science of sustainability, so let me talk to you of this.

IMU is one of the international unions inside the International Council of Science (ICSU). At the Executive we regularly receive messages from ICSU asking for input from its members. While it is not new that scientists are involved in the study of climate change and sustainability issues, a new feeling of emergency has developed. The warning signs are becoming more numerous that urgent action is needed if we want to save the planet from a disastrous future, since we may not be far from a point of no return: climate change with more extreme weather events, rising of the sea level with the melting of glaciers, shortage of food and water in the near future because of the increase of the world population and the climate change, loss of biodiversity, new epidemics or invasive species, etc. This explains why ICSU is starting a new 10-year research initiative: EARTH SYSTEM RESEARCH FOR GLOBAL SUSTAINABILITY, and a Steering Committee for this initiative is presently nominated. The goals of the Initiative are to:

1. Deliver at global and regional scales the knowledge that societies need to effectively respond to global change while meeting economic and social goals;

2. Coordinate and focus international scientific research to address the Grand Challenges and Belmont Challenge;

3. Engage a new generation of researchers in the social, economic, natural, health, and engineering sciences in global sustainability research.

In the same spirit, ICSU is preparing a strong scientific presence at the next United Nations Conference on Sustainable Development (Rio+20) that will take place on June 4-6, 2012 in Rio de Janeiro. For this, ICSU is organizing a number of preparatory regional and global meetings. It is clear that mathematical sciences have an essential role in the interdisciplinary research that needs to take place in order to achieve significant impact. The other scientific disciplines concerned are numerous from physics, to biology, to economics, etc.

Let me quote Graciela Chichilnisky, the author of the carbon market of the UN Kyoto Protocol: “It is the physicists that study the climate change, but it is the economists who advise the politicians that take the decisions.” Considering the importance of the contribution of mathematical sciences in sustainability issues, IMU has asked to participate actively in these preparatory meetings and be represented at Rio+20. This should be an occasion to build partnerships with the other scientific unions inside ICSU. More and more mathematicians and research institutes around the world become interested in sustainable development as is acknowledged by the large participation in Mathematics of Planet Earth 2013 which was recently endorsed by IMU. But the world needs more than a one year initiative. The science of sustainability is full of challenging problems which are very interesting mathematically. Many of these problems require new mathematical techniques. We could hope that these initiatives will allow training a new generation of researchers in mathematical sciences who will be able to work in interdisciplinary teams to address these issues.

Christiane Rousseau
Vice-President of Executive Committee of IMU


A Characterization of Entropy

2 June, 2011

Over at the n-Category Café some of us have been trying an experiment: writing a math paper in full public view, both on that blog and on its associated wiki, the nLab. One great thing about doing things this way is that people can easily chip in with helpful suggestions. It’s also more fun! Both of these tend to speed up the process.

Like Frankenstein’s monster, our paper’s main result was initially jolted into life by huge blasts of power: in this case, not lightning but category theory. It was awesome to behold, but too scary for this blog.

First Tom Leinster realized that the concept of entropy fell out — unexpectedly, but very naturally — from considerations involving ‘operads’, which are collections of abstract operations. He was looking at a particular operad where the operations are ‘convex linear combinations’, and he discovered that this operad has entropy lurking in its heart. Then Tobias Fritz figured out a nice way to state Tom’s result without mentioning operads. By now we’ve taught the monster table manners, found it shoes that fit, and it’s ready for polite society:

• John Baez, Tobias Fritz and Tom Leinster, A characterization of entropy in terms of information loss.

The idea goes like this. Say you’ve got a finite set X with a probability measure p on it, meaning a number 0 \le p_i \le 1 for each point i \in X, obeying

\sum_{i \in X} p_i = 1

Then the Shannon entropy of p is defined by

S(p) = - \sum_{i \in X} p_i \, \ln(p_i)

This funny-looking formula can be justified in many ways. Our new way involves focusing not on entropy itself, but on changes in entropy. This makes sense for lots of reasons. For example, in physics we don’t usually measure entropy directly. Instead, we measure changes in entropy, using the fact that a system at temperature T absorbing a tiny amount of heat \Delta Q in a reversible way will experience an entropy change of \Delta Q / T. But our real reason for focusing on changes in entropy is that it gives a really slick theorem.
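As a concrete illustration (my own, not from the paper), the formula above is a one-liner in Python:

```python
import math

def shannon_entropy(p):
    """Shannon entropy S(p) = -sum_i p_i ln(p_i), with the convention 0 ln 0 = 0."""
    return -sum(p_i * math.log(p_i) for p_i in p if p_i > 0)

# The uniform distribution on n points has the maximal entropy, ln(n):
print(shannon_entropy([0.25, 0.25, 0.25, 0.25]))  # ln(4) ≈ 1.386
# A deterministic outcome has zero entropy:
print(shannon_entropy([1.0, 0.0]))
```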

Suppose we have two finite sets with probability measures, say (X,p) and (Y,q). Then we define a morphism f: (X,p) \to (Y,q) to be a measure-preserving function: in other words, one for which the probability q_j of any point in Y is the sum of the probabilities p_i of the points in X with f(i) = j.

A morphism of this sort is a deterministic process that carries one random situation to another. For example, if I have a random integer between -10 and 10, chosen according to some probability distribution, and I square it, I get a random integer between 0 and 100. A process of this sort never increases the entropy: given any morphism f: (X,p) \to (Y,q), we have

S(q) \le S(p)

Since the second law of thermodynamics says that entropy always increases, this may seem counterintuitive or even paradoxical! But there’s no paradox here. It makes more intuitive sense if you think of entropy as information, and the function f as some kind of data processing that doesn’t introduce any additional randomness. Such a process can only decrease the amount of information. For example, squaring the number -5 gives the same answer as squaring 5, so if I tell you “this number squared is 25”, I’m giving you less information than if I said “this number is -5”.
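The squaring example can be checked numerically. The sketch below (my own illustration, assuming a uniform distribution for definiteness) pushes a distribution on the integers -10,…,10 forward along f(i) = i² and confirms that the entropy drops:

```python
import math
from collections import defaultdict

def entropy(dist):
    # Shannon entropy of a dict {outcome: probability}, with 0 ln 0 = 0
    return -sum(v * math.log(v) for v in dist.values() if v > 0)

# Uniform distribution p on the integers -10..10
p = {i: 1 / 21 for i in range(-10, 11)}

# Pushforward q along f(i) = i**2: q_j is the sum of p_i over i with i**2 == j
q = defaultdict(float)
for i, p_i in p.items():
    q[i * i] += p_i

loss = entropy(p) - entropy(q)
print(f"S(p) = {entropy(p):.4f}, S(q) = {entropy(q):.4f}, loss = {loss:.4f}")
assert entropy(q) <= entropy(p)  # deterministic processing never gains information
```

Here the loss works out to (20/21) ln 2 ≈ 0.66 nats: each nonzero square has two preimages, so one bit’s worth of sign information is thrown away on 20 of the 21 outcomes.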

For this reason, we call the difference S(p) - S(q) the information loss of the morphism f : (X,p) \to (Y,q). And here’s our characterization of Shannon entropy in terms of information loss:

First, let’s write a morphism f: (X,p) \to (Y,q) as f : p \to q for short. Suppose F is a function that assigns to any such morphism a number F(f) \in [0,\infty), which we think of as its information loss. And suppose that F obeys three axioms:

1. Functoriality. Whenever we can compose morphisms f and g, we demand that

F(f \circ g) = F(f) + F(g)

In other words: when we do a process consisting of two stages, the amount of information lost in the whole process is the sum of the amounts lost in each stage!

2. Convex linearity. Suppose we have two finite sets equipped with probability measures, say p and q, and a real number \lambda \in [0, 1]. Then there is a probability measure \lambda p \oplus (1 - \lambda) q on the disjoint union of the two sets, obtained by weighting the two measures by \lambda and 1 - \lambda, respectively. Similarly, given morphisms f: p \to p' and g: q \to q' there is an obvious morphism from \lambda p \oplus (1 - \lambda) q to \lambda p' \oplus (1 - \lambda) q'. Let’s call this morphism \lambda f \oplus (1 - \lambda) g. We demand that

F(\lambda f \oplus (1 - \lambda) g) = \lambda F(f) + (1 - \lambda) F(g)

In other words: if we flip a probability-λ coin to decide whether to do one process or another, the information lost is λ times the information lost by the first process plus (1 – λ) times the information lost by the second!

3. Continuity. The same function between finite sets can be thought of as a measure-preserving map in different ways, by changing the measures on these sets. In this situation the quantity F(f) should depend continuously on the measures in question.

In other words: if we slightly change the probability measures a process acts on, the information it loses changes only slightly.

Then we conclude that there exists a constant c \ge 0 such that for any morphism f: (X,p) \to (Y,q), we have

F(f) = c(S(p) - S(q))

In other words: the information loss is some multiple of the change in Shannon entropy!
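It is easy to check numerically that F(f) = S(p) - S(q) really does satisfy the axioms. Here is a sketch for convex linearity, using a pair of toy morphisms of my own choosing (each target measure is obtained by merging points of the source, so the maps are measure-preserving):

```python
import math

def S(p):
    # Shannon entropy of a probability list, with 0 ln 0 = 0
    return -sum(x * math.log(x) for x in p if x > 0)

def mix(lam, p, q):
    # The measure lam*p (+) (1-lam)*q on the disjoint union of the two sets
    return [lam * x for x in p] + [(1 - lam) * x for x in q]

# Toy morphisms f: p -> p2 and g: q -> q2, each merging points of its source
p, p2 = [0.5, 0.3, 0.2], [0.8, 0.2]
q, q2 = [0.25, 0.25, 0.25, 0.25], [0.5, 0.5]
lam = 0.3

F_f = S(p) - S(p2)
F_g = S(q) - S(q2)
F_mixed = S(mix(lam, p, q)) - S(mix(lam, p2, q2))

assert abs(F_mixed - (lam * F_f + (1 - lam) * F_g)) < 1e-12
print("convex linearity holds:", F_mixed)
```

The identity holds exactly because the mixing term -λ ln λ - (1-λ) ln(1-λ) appears in both S(mix(λ, p, q)) and S(mix(λ, p2, q2)), and so cancels in the difference.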

What’s pleasing about this theorem is that the three axioms are pretty natural, and it’s hard to see the formula

S(p) = - \sum_{i \in X} p_i \, \ln(p_i)

hiding in them… but it’s actually there.

(We also prove a version of this theorem for Tsallis entropy, in case you care. This obeys a mutant version of axiom 2, namely:

F(\lambda f \oplus (1 - \lambda) g) = \lambda^\alpha F(f) + (1 - \lambda)^\alpha F(g)

where \alpha is a parameter with 0 < \alpha < \infty. Tsallis entropy is a close relative of Rényi entropy, which I discussed here earlier. Just as Rényi entropy is a kind of q-derivative of the free energy, the Tsallis entropy is a q-derivative of the partition function. I’m not sure either of them is really important, but when you’re trying to uniquely characterize Shannon entropy, it’s nice for it to have some competitors to fight against, and these are certainly the main two. Both of them depend on a parameter and reduce to the Shannon entropy at a certain value of that parameter.)
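For the curious: the standard Tsallis formula, which the post doesn’t spell out, is S_\alpha(p) = (1 - \sum_i p_i^\alpha)/(\alpha - 1), and a quick numerical check shows it reducing to the Shannon entropy as \alpha \to 1:

```python
import math

def tsallis(p, alpha):
    # Tsallis entropy S_alpha(p) = (1 - sum_i p_i^alpha) / (alpha - 1)
    return (1 - sum(x ** alpha for x in p)) / (alpha - 1)

def shannon(p):
    # Shannon entropy, the alpha -> 1 limit of the Tsallis entropy
    return -sum(x * math.log(x) for x in p if x > 0)

p = [0.5, 0.25, 0.25]
for alpha in [2.0, 1.1, 1.01, 1.001]:
    print(alpha, tsallis(p, alpha))  # approaches shannon(p) as alpha -> 1
print("Shannon:", shannon(p))
```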


The Stockholm Memorandum

1 June, 2011

In May this year, the 3rd Nobel Laureate Symposium produced a document called The Stockholm Memorandum signed by 17 Nobel laureates, presumably from among these participants. It’s a clear call to action, so I’ll reproduce it all here.

I. Mind-shift for a Great Transformation

The Earth system is complex. There are many aspects that we do not yet understand. Nevertheless, we are the first generation with the insight of the new global risks facing humanity.

We face the evidence that our progress as the dominant species has come at a very high price. Unsustainable patterns of production, consumption, and population growth are challenging the resilience of the planet to support human activity. At the same time, inequalities between and within societies remain high, leaving behind billions with unmet basic human needs and disproportionate vulnerability to global environmental change.

This situation concerns us deeply. As members of the 3rd Nobel Laureate Symposium we call upon all leaders of the 21st century to exercise a collective responsibility of planetary stewardship. This means laying the foundation for a sustainable and equitable global civilization in which the entire Earth community is secure and prosperous.

Science indicates that we are transgressing planetary boundaries that have kept civilization safe for the past 10,000 years. Evidence is growing that human pressures are starting to overwhelm the Earth’s buffering capacity.

Humans are now the most significant driver of global change, propelling the planet into a new geological epoch, the Anthropocene. We can no longer exclude the possibility that our collective actions will trigger tipping points, risking abrupt and irreversible consequences for human communities and ecological systems.

We cannot continue on our current path. The time for procrastination is over. We cannot afford the luxury of denial. We must respond rationally, equipped with scientific evidence.

Our predicament can only be redressed by reconnecting human development and global sustainability, moving away from the false dichotomy that places them in opposition.

In an interconnected and constrained world, in which we have a symbiotic relationship with the planet, environmental sustainability is a precondition for poverty eradication, economic development, and social justice.

Our call is for fundamental transformation and innovation in all spheres and at all scales in order to stop and reverse global environmental change and move toward fair and lasting prosperity for present and future generations.

II. Priorities for Coherent Global Action

We recommend a dual track approach:

a) emergency solutions now, that begin to stop and reverse negative environmental trends and redress inequalities in the inadequate institutional frameworks within which we operate, and

b) long term structural solutions that gradually change values, institutions and policy frameworks. We need to support our ability to innovate, adapt, and learn.

1. Reaching a more equitable world

Unequal distribution of the benefits of economic development is at the root of poverty. Despite efforts to address poverty, more than a third of the world’s population still live on less than $2 per day. This needs our immediate attention. Environment and development must go hand in hand. We need to:

• Achieve the Millennium Development Goals, in the spirit of the Millennium Declaration, recognising that global sustainability is a precondition of success.

• Adopt a global contract between industrialized and developing countries to scale up investment in approaches that integrate poverty reduction, climate stabilization, and ecosystem stewardship.

2. Managing the climate – energy challenge

We urge governments to agree on global emission reductions guided by science and embedded in ethics and justice. At the same time, the energy needs of the three billion people who lack access to reliable sources of energy need to be fulfilled. Global efforts need to:

• Keep global warming below 2°C, implying a peak in global CO2 emissions no later than 2015, and recognise that even a warming of 2°C carries a very high risk of serious impacts and the need for major adaptation efforts.

• Put a sufficiently high price on carbon and deliver the G-20 commitment to phase out fossil fuel subsidies, using these funds to contribute to the several hundred billion US dollars per year needed to scale up investments in renewable energy.

3. Creating an efficiency revolution

We must transform the way we use energy and materials. In practice this means massive efforts to enhance energy efficiency and resource productivity, avoiding unintended secondary consequences. The “throw away concept” must give way to systematic efforts to develop circular material flows. We must:

• Introduce strict resource efficiency standards to enable a decoupling of economic growth from resource use.

• Develop new business models, based on radically improved energy and material efficiency.

4. Ensuring affordable food for all

Current food production systems are often unsustainable, inefficient and wasteful, and increasingly threatened by dwindling oil and phosphorus resources, financial speculation, and climate impacts. This is already causing widespread hunger and malnutrition. We can no longer afford the massive loss of biodiversity and reduction in carbon sinks when ecosystems are converted into cropland. We need to:

• Foster a new agricultural revolution where more food is produced in a sustainable way on current agricultural land and within safe boundaries of water resources.

• Fund appropriate sustainable agricultural technology to deliver significant yield increases on small farms in developing countries.

5. Moving beyond green growth

There are compelling reasons to rethink the conventional model of economic development. Tinkering with the economic system that generated the global crises is not enough. Markets and entrepreneurship will be prime drivers of decision making and economic change, but must be complemented by policy frameworks that promote a new industrial metabolism and resource use. We should:

• Take account of natural capital, ecosystem services and social aspects of progress in all economic decisions and poverty reduction strategies. This requires the development of new welfare indicators that address the shortcomings of GDP as an indicator of growth.

• Reset economic incentives so that innovation is driven by wider societal interests and reaches the large proportion of the global population that is currently not benefitting from these innovations.

6. Reducing human pressures

Consumerism, inefficient resource use and inappropriate technologies are the primary drivers of humanity’s growing impact on the planet. However, population growth also needs attention. We must:

• Raise public awareness about the impacts of unsustainable consumption and shift away from the prevailing culture of consumerism to sustainability.

• Greatly increase access to reproductive health services, education and credit, aiming at empowering women all over the world. Such measures are important in their own right but will also reduce birth rates.

7. Strengthening earth system governance

The multilateral system must be reformed to cope with the defining challenges of our time, namely transforming humanity’s relationship with the planet and rebuilding trust between people and nations. Global governance must be strengthened to respect planetary boundaries and to support regional, national and local approaches. We should:

• Develop and strengthen institutions that can integrate the climate, biodiversity and development agendas.

• Explore new institutions that help to address the legitimate interests of future generations.

8. Enacting a new contract between science and society

Filling gaps in our knowledge and deepening our understanding is necessary to find solutions to the challenges of the Anthropocene, and calls for major investments in science. A dialogue with decision-makers and the general public is also an important part of a new contract between science and society. We need to:

• Launch a major initiative on Earth-system research for global sustainability, at a scale similar to those devoted to areas such as space, defence and health, to tap all sources of ingenuity across disciplines and across the globe.

• Scale up our education efforts to increase scientific literacy especially among the young.

We are the first generation facing the evidence of global change. It therefore falls upon us to change our relationship with the planet, in order to tip the scales towards a sustainable world for future generations.
