**(7) To stay below 2 °C of warming, the world must become carbon negative.**

Only one of the four future scenarios (RCP2.6) shows us staying below the UN’s commitment to no more than 2°C of warming. In RCP2.6, emissions peak soon (within the next decade or so) and then drop fast, under a stronger emissions reduction policy than anyone has ever proposed in international negotiations to date. For example, the post-Kyoto negotiations have looked at targets in the region of 80% reductions in emissions over, say, a 50-year period. In contrast, the chart below shows something far more ambitious: we need more than 100% emissions reductions. We need to become carbon negative:

The graph on the left shows four possible CO2 emissions paths that would all deliver the RCP2.6 scenario, while the graph on the right shows the resulting temperature change for each. They all give similar results for temperature change, but differ in how we go about reducing emissions. For example, the black curve shows CO2 emissions peaking by 2020 at a level barely above today’s, and then dropping steadily until emissions are below zero by about 2070. Two other curves show what happens if emissions peak higher and later: the eventual reduction has to happen much more steeply. The blue dashed curve offers an implausible scenario, so consider it a thought experiment: if we held emissions constant at today’s level, we would have exactly 30 years left before having to reduce emissions instantly to zero forever.
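To make the arithmetic behind this thought experiment concrete, here is a small sketch comparing two ways of spending the same cumulative CO2 budget: holding emissions constant and then stopping dead, versus ramping down linearly. The budget and emissions-rate numbers are invented for illustration, not taken from the IPCC.

```python
# Back-of-envelope check of the "constant emissions" thought experiment.
# A fixed cumulative CO2 budget can be spent by emitting at a constant
# rate and then stopping dead, or by ramping down gradually over a
# longer period. The numbers below are hypothetical, for illustration.

def years_at_constant_rate(budget, rate):
    """Years until the budget is exhausted at a constant emissions rate."""
    return budget / rate

def linear_ramp_duration(budget, rate):
    """A linear ramp from the current rate down to zero spends the same
    budget over twice as long (triangle area: rate * T / 2 = budget)."""
    return 2 * budget / rate

budget = 300.0  # hypothetical remaining carbon budget, GtC
rate = 10.0     # hypothetical current emissions, GtC per year

print(years_at_constant_rate(budget, rate))  # 30.0 -- then stop instantly
print(linear_ramp_duration(budget, rate))    # 60.0 -- gradual decline
```

This is also why the curves that peak later must fall more steeply: the later the decline starts, the less of the budget remains for it.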

Notice where the zero point is on the scale on that left-hand graph. Ignoring the unrealistic blue dashed curve, all of these pathways require the world to go *net carbon negative* sometime soon after mid-century. None of the emissions targets currently being discussed by any government anywhere in the world are sufficient to achieve this. We should be talking about how to become carbon negative.

One further detail. The graph above shows the temperature response staying well under 2°C for all four curves, although the uncertainty band reaches up to 2°C. But note that this analysis deals only with CO2. The other greenhouse gases have to be accounted for too, and together they push the temperature change right up to the 2°C threshold. There’s no margin for error.

You can download all of *Climate Change 2013: The Physical Science Basis* **here**. It’s also available chapter by chapter here:

- Introduction
- Observations: Atmosphere and Surface
- Observations: Ocean
- Observations: Cryosphere
- Information from Paleoclimate Archives
- Carbon and Other Biogeochemical Cycles
- Clouds and Aerosols
- Anthropogenic and Natural Radiative Forcing
- Evaluation of Climate Models
- Detection and Attribution of Climate Change: from Global to Regional
- Near-term Climate Change: Projections and Predictability
- Long-term Climate Change: Projections, Commitments and Irreversibility
- Sea Level Change
- Climate Phenomena and their Relevance for Future Regional Climate Change


**(6) We have to choose which future we want very soon.**

In the previous IPCC reports, projections of future climate change were based on a set of scenarios that mapped out different ways in which human society might develop over the rest of this century, taking account of likely changes in population, economic development and technological innovation. However, none of the old scenarios took into account the impact of strong global efforts at climate mitigation. In other words, they all represented futures in which we don’t take serious action on climate change. For this report, the new ‘RCPs’ have been chosen to allow us to explore the choice we face.

This chart sums it up nicely. If we do nothing about climate change, we’re choosing a path that will look most like RCP8.5. Recall that this is the one where emissions keep rising just as they have done throughout the 20th century. On the other hand, if we get serious about curbing emissions, we’ll end up in a future that’s probably somewhere between RCP2.6 and RCP4.5 (the two blue lines). All of these futures give us a much warmer planet. All of these futures will involve many challenges as we adapt to life on a warmer planet. But by curbing emissions soon, we can minimize this future warming.

Note also that the uncertainty range (the shaded region) is much bigger for RCP8.5 than it is for the other scenarios. The more the climate changes beyond what we’ve experienced in the recent past, the harder it is to predict what will happen. We tend to use the spread across different models as an indication of uncertainty (the coloured numbers show how many different models participated in each experiment). But there’s also the possibility of ‘unknown unknowns’—surprises that aren’t in the models, so the uncertainty range is likely to be even bigger than this graph shows.


**(5) Current rates of ocean acidification are unprecedented.**

Ocean acidification has sometimes been ignored in discussions about climate change, but it is a much simpler process, and is much easier to calculate (notice the uncertainty range on the graph above is much smaller than in most of the other graphs). This graph shows the projected acidification in the best and worst case scenarios (RCP2.6 and RCP8.5). Recall that RCP8.5 is the “business as usual” future.

The IPCC report says:

The pH of seawater has decreased by 0.1 since the beginning of the industrial era, corresponding to a 26% increase in hydrogen ion concentration. [...] It is virtually certain that the increased storage of carbon by the ocean will increase acidification in the future, continuing the observed trends of the past decades. [...] Estimates of future atmospheric and oceanic carbon dioxide concentrations indicate that, by the end of this century, the average surface ocean pH could be lower than it has been for more than 50 million years.

Note that this doesn’t mean the ocean will become acid. The ocean has always been slightly alkaline—well above the neutral value of pH 7. So “acidification” refers to a drop in pH, rather than a drop below pH 7. As this continues, the ocean becomes steadily less alkaline. Unfortunately, as the pH drops, the ocean stops being supersaturated with calcium carbonate. If it’s no longer supersaturated, anything made of calcium carbonate starts dissolving. Corals and shellfish can no longer form their shells and skeletons. If you kill these off, the entire ocean food chain is affected. Here’s what the IPCC report says:
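The quoted 26% figure follows directly from the definition of pH (the negative base-10 logarithm of hydrogen ion concentration). A quick sanity check:

```python
# pH is -log10 of the hydrogen ion concentration, so a pH drop of 0.1
# multiplies [H+] by 10**0.1 -- roughly a 26% increase.

def hydrogen_ion_increase(delta_ph):
    """Fractional increase in [H+] for a given drop in pH."""
    return 10 ** delta_ph - 1

print(f"{hydrogen_ion_increase(0.1):.0%}")  # prints "26%"
```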

Surface waters are projected to become seasonally corrosive to aragonite in parts of the Arctic and in some coastal upwelling systems within a decade, and in parts of the Southern Ocean within 1–3 decades in most scenarios. Undersaturation with respect to aragonite, a less stable form of calcium carbonate, becomes widespread in these regions at atmospheric CO2 levels of 500–600 ppm.


**(4) Most of the heat is going into the oceans**

The oceans have a huge thermal mass compared to the atmosphere and land surface. They act as the planet’s heat storage and transportation system, as the ocean currents redistribute the heat. This is important because if we look at the global surface temperature as an indication of warming, we’re only getting some of the picture. The oceans act as a huge storage heater, and will continue to warm up the lower atmosphere (no matter what changes we make to the atmosphere in the future).

Note the relationship between this figure (which shows where the heat goes) and the figure from Part 2 that showed change in cumulative energy budget from different sources:

Both graphs show zettajoules accumulating over about the same period (1970–2011). But the graph from Part 2 has a cumulative total just short of 800 zettajoules by the end of the period, while today’s new graph shows the earth storing “only” about 300 zettajoules of this. Where did the remaining energy go? Because the earth’s temperature rose during this period, it also radiated increasingly more energy back into space. When greenhouse gases trap heat, the earth’s temperature keeps rising until outgoing and incoming energy are in balance again.
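The gap between those two totals implies an average extra flow of energy back out to space. A rough sketch of the arithmetic (the zettajoule figures are approximate values read off the graphs):

```python
# Rough arithmetic on the energy gap described above: roughly 800 ZJ was
# added to the climate system over 1970-2011 but only ~300 ZJ was stored,
# so ~500 ZJ must have been radiated back to space. Spread over Earth's
# surface and the 41-year period, that is an average extra outgoing flux.

SECONDS_PER_YEAR = 365.25 * 86400
EARTH_SURFACE_M2 = 5.1e14  # total surface area of the Earth

energy_added = 800e21    # joules, cumulative 1970-2011 (approximate)
energy_stored = 300e21   # joules, stored in oceans, land, ice and air

extra_outgoing = energy_added - energy_stored   # ~500 ZJ radiated away
flux = extra_outgoing / (EARTH_SURFACE_M2 * 41 * SECONDS_PER_YEAR)
print(round(flux, 2))  # ~0.76 W/m2 of extra outgoing radiation on average
```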


**(3) The warming is largely irreversible**

The summary for policymakers says:

A large fraction of anthropogenic climate change resulting from CO2 emissions is irreversible on a multi-century to millennial time scale, except in the case of a large net removal of CO2 from the atmosphere over a sustained period. Surface temperatures will remain approximately constant at elevated levels for many centuries after a complete cessation of net anthropogenic CO2 emissions.

The conclusions about the irreversibility of climate change are greatly strengthened from the previous assessment report, as recent research has explored this in much more detail. The problem is that a significant fraction of our greenhouse gas emissions stay in the atmosphere for thousands of years, so even if we stop emitting them altogether, they hang around, contributing to more warming. In simple terms, whatever peak temperature we reach, we’re stuck with for millennia, unless we can figure out a way to artificially *remove* massive amounts of CO2 from the atmosphere.

The graph is the result of an experiment that runs (simplified) models for a thousand years into the future. The major climate models are generally too computationally expensive to be run for such a long simulation, so these experiments use simpler models, so-called EMICs (Earth system Models of Intermediate Complexity).

The four curves in this figure correspond to four “Representative Concentration Pathways”, which map out four ways in which the composition of the atmosphere is likely to change in the future. These four RCPs were picked to capture four possible futures: two in which there is little to no coordinated action on reducing global emissions (worst case—RCP8.5 and best case—RCP6) and two in which there is serious global action on climate change (worst case—RCP4.5 and best case—RCP2.6). A simple way to think about them is as follows. RCP8.5 represents ‘business as usual’—strong economic development for the rest of this century, driven primarily by dependence on fossil fuels. RCP6 represents a world with no global coordinated climate policy, but where lots of localized clean energy initiatives do manage to stabilize emissions by the latter half of the century. RCP4.5 represents a world that implements strong limits on fossil fuel emissions, such that greenhouse gas emissions peak by mid-century and then start to fall. RCP2.6 is a world in which emissions peak in the next few years, and then fall dramatically, so that the world becomes carbon neutral by about mid-century.

Note that in RCP2.6 the temperature does fall, after reaching a peak just below 2°C of warming over pre-industrial levels. That’s because RCP2.6 is a scenario in which concentrations of greenhouse gases in the atmosphere start to fall before the end of the century. This is only possible if we reduce global emissions so fast that we achieve carbon neutrality soon after mid-century, and then go carbon negative. By carbon negative, I mean that globally, each year, we remove more CO2 from the atmosphere than we add. Whether this is possible is an interesting question. But even if it is, the model results show there is no time within the next thousand years when it is anywhere near as cool as it is *today*.


**(2) Humans caused the majority of it**

The summary for policymakers says:

It is extremely likely that human influence has been the dominant cause of the observed warming since the mid-20th century.

This chart summarizes the impact of different drivers of warming and/or cooling, by showing the total cumulative energy added to the earth system since 1970 from each driver. Note that the chart is in zettajoules (10²¹ J). For comparison, one zettajoule is roughly the energy that would be released by 16 million bombs of the size of the one dropped on Hiroshima. The world’s total annual global energy consumption is about 0.5 zettajoule.

Long-lived greenhouse gases, such as CO2, contribute the majority of the warming (the purple line). Aerosols, such as particles of industrial pollution, block out sunlight and cause some cooling (the dark blue line), but nowhere near enough to offset the warming from greenhouse gases. Note that aerosols have the largest uncertainty bar; much of the remaining uncertainty about the likely magnitude of future climate warming is due to uncertainty about how much of the warming might be offset by aerosols. The uncertainty on the aerosols curve is, in turn, responsible for most of the uncertainty on the black line, which shows the total effect if you add up all the individual contributions.
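That the aerosol term dominates the total uncertainty is the usual behaviour when independent uncertainties are combined: the largest one dominates. A small illustration (the uncertainty values here are invented to show the effect, not read from the IPCC figure):

```python
# When independent contributions are summed, their uncertainties add in
# quadrature, so the largest single uncertainty (here, the hypothetical
# aerosol term) dominates the uncertainty of the total.
import math

def total_uncertainty(sigmas):
    """Combined standard uncertainty of a sum of independent terms."""
    return math.sqrt(sum(s * s for s in sigmas))

# Hypothetical uncertainties (zettajoules) for each driver.
sigma_ghg, sigma_aerosol, sigma_solar = 50.0, 200.0, 5.0
total = total_uncertainty([sigma_ghg, sigma_aerosol, sigma_solar])
print(round(total, 1))  # 206.2 -- dominated by the 200 ZJ aerosol term
```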

The graph also puts into perspective some of the other things that people like to blame for climate change, including changes in energy received from the sun (‘solar’) and the impact of volcanoes. Changes in the sun (shown in orange) are tiny compared to greenhouse gases, but do show a very slight warming effect. Volcanoes have a larger (cooling) effect, but it is short-lived. There were two major volcanic eruptions in this period, El Chichón in 1982 and Pinatubo in 1991. Each can be clearly seen in the graph as an immediate cooling effect, which then tapers off after a couple of years.


In October, I trawled through the final draft of this report, which was released at that time:

• Intergovernmental Panel on Climate Change (IPCC), *Climate Change 2013: The Physical Science Basis*.

Here’s what I think are its key messages:

- **The warming is unequivocal.**
- **Humans caused the majority of it.**
- **The warming is largely irreversible.**
- **Most of the heat is going into the oceans.**
- **Current rates of ocean acidification are unprecedented.**
- **We have to choose which future we want very soon.**
- **To stay below 2°C of warming, the world must become carbon negative.**
- **To stay below 2°C of warming, most fossil fuels must stay buried in the ground.**

I’ll talk about the first of these here, and the rest in future parts. But before I start, a little preamble.

The IPCC was set up in 1988 as a UN intergovernmental body to provide an overview of the science. Its job is to assess what the peer-reviewed science says, in order to inform policymaking, but it is not tasked with making specific policy recommendations. The IPCC and its workings seem to be widely misunderstood in the media. The dwindling group of people who are still in denial about climate change particularly like to indulge in IPCC-bashing, which seems like a classic case of ‘blame the messenger’. The IPCC itself has a very small staff (no more than a dozen or so people). However, the assessment reports are written and reviewed by a very large team of scientists (several thousand), all of whom volunteer their time to work on the reports. The scientists are organised into three working groups: WG1 focuses on the physical science basis, WG2 focuses on impacts and climate adaptation, and WG3 focuses on how climate mitigation can be achieved.

In October, the WG1 report was released as a final draft, accompanied by a bigger media event around the approval of the final wording of the WG1 “Summary for Policymakers”. The final version of the full WG1 report, plus the WG2 and WG3 reports, have come out since then.

I wrote about the WG1 draft in October, but John has solicited this post for *Azimuth* only now. By now, the draft I’m talking about here has undergone some minor editing/correcting, and some of the figures might have ended up re-drawn. Even so, most of the text is unlikely to have changed, and the major findings can be considered final.

In this post and the parts to come I’ll give my take on the most important findings, along with a key figure to illustrate each.

**(1) The warming is unequivocal**

The text of the summary for policymakers says:

Warming of the climate system is unequivocal, and since the 1950s, many of the observed changes are unprecedented over decades to millennia. The atmosphere and ocean have warmed, the amounts of snow and ice have diminished, sea level has risen, and the concentrations of greenhouse gases have increased.

Unfortunately, there has been much play in the press around a silly idea that the warming has “paused” in the last decade. If you squint at the last few years of the top graph, you might be able to convince yourself that the temperature has been nearly flat for a few years, but only if you cherry pick your starting date, and use a period that’s too short to count as climate. When you look at it in the context of an entire century and longer, such arguments are clearly just wishful thinking.

The other thing to point out here is that the *rate* of warming is unprecedented:

With very high confidence, the current rates of CO2, CH4 and N2O rise in atmospheric concentrations and the associated radiative forcing are unprecedented with respect to the highest resolution ice core records of the last 22,000 years

and there is

medium confidence that the rate of change of the observed greenhouse gas rise is also unprecedented compared with the lower resolution records of the past 800,000 years.

In other words, there is nothing in any of the ice core records that is comparable to what we have done to the atmosphere over the last century. The earth has warmed and cooled in the past due to natural cycles, but never anywhere near as fast as modern climate change.


• Nafeez Ahmed, NASA-funded study: industrial civilisation headed for ‘irreversible collapse’?, *Earth Insight*, blog on *The Guardian*, 14 March 2014.

Sounds dramatic! But notice the question mark in the title. The article says that “global industrial civilisation could collapse in coming decades due to unsustainable resource exploitation and increasingly unequal wealth distribution.” But with the word “could” in there, who could possibly argue? It’s certainly *possible*. What’s the actual news here?

It’s about a new paper that’s been accepted by the Elsevier journal *Ecological Economics*. Since this paper has not been published, and I don’t even know the title, it’s hard to get details yet. According to Nafeez Ahmed,

The research project is based on a new cross-disciplinary ‘Human And Nature DYnamical’ (HANDY) model, led by applied mathematician Safa Motesharrei of the US National Science Foundation-supported National Socio-Environmental Synthesis Center, in association with a team of natural and social scientists.

So I went to Safa Motesharrei’s webpage. It says he’s a grad student getting his PhD at the Socio-Environmental Synthesis Center, working with a team of people including:

• Eugenia Kalnay (atmospheric science)

• James Yorke (mathematics)

• Matthias Ruth (public policy)

• Victor Yakovenko (econophysics)

• Klaus Hubacek (geography)

• Ning Zeng (meteorology)

• Fernando Miralles-Wilhelm (hydrology).

I was able to find this paper draft:

• Safa Motesharrei, Jorge Rivas and Eugenia Kalnay, A minimal model for human and nature interaction, 13 November 2012.

I’m not sure how this is related to the paper discussed by Nafeez Ahmed, but it includes some (though not all) of the passages quoted by him, and it describes the HANDY model. It’s an extremely simple model, so I’ll explain it to you.

But first let me quote a bit more of the *Guardian* article, so you can see why it’s attracting attention:

By investigating the human-nature dynamics of these past cases of collapse, the project identifies the most salient interrelated factors which explain civilisational decline, and which may help determine the risk of collapse today: namely, Population, Climate, Water, Agriculture, and Energy.

These factors can lead to collapse when they converge to generate two crucial social features: “the stretching of resources due to the strain placed on the ecological carrying capacity”; and “the economic stratification of society into Elites [rich] and Masses (or “Commoners”) [poor].” These social phenomena have played “a central role in the character or in the process of the collapse,” in all such cases over “the last five thousand years.”

Currently, high levels of economic stratification are linked directly to overconsumption of resources, with “Elites” based largely in industrialised countries responsible for both:

“… accumulated surplus is not evenly distributed throughout society, but rather has been controlled by an elite. The mass of the population, while producing the wealth, is only allocated a small portion of it by elites, usually at or just above subsistence levels.”

The study challenges those who argue that technology will resolve these challenges by increasing efficiency:

“Technological change can raise the efficiency of resource use, but it also tends to raise both per capita resource consumption and the scale of resource extraction, so that, absent policy effects, the increases in consumption often compensate for the increased efficiency of resource use.”

Productivity increases in agriculture and industry over the last two centuries have come from “increased (rather than decreased) resource throughput,” despite dramatic efficiency gains over the same period.

Modelling a range of different scenarios, Motesharrei and his colleagues conclude that under conditions “closely reflecting the reality of the world today… we find that collapse is difficult to avoid.” In the first of these scenarios, civilisation:

“…. appears to be on a sustainable path for quite a long time, but even using an optimal depletion rate and starting with a very small number of Elites, the Elites eventually consume too much, resulting in a famine among Commoners that eventually causes the collapse of society. It is important to note that this Type-L collapse is due to an inequality-induced famine that causes a loss of workers, rather than a collapse of Nature.”

Another scenario focuses on the role of continued resource exploitation, finding that “with a larger depletion rate, the decline of the Commoners occurs faster, while the Elites are still thriving, but eventually the Commoners collapse completely, followed by the Elites.”

In both scenarios, Elite wealth monopolies mean that they are buffered from the most “detrimental effects of the environmental collapse until much later than the Commoners”, allowing them to “continue ‘business as usual’ despite the impending catastrophe.” The same mechanism, they argue, could explain how “historical collapses were allowed to occur by elites who appear to be oblivious to the catastrophic trajectory (most clearly apparent in the Roman and Mayan cases).”

Applying this lesson to our contemporary predicament, the study warns that:

“While some members of society might raise the alarm that the system is moving towards an impending collapse and therefore advocate structural changes to society in order to avoid it, Elites and their supporters, who opposed making these changes, could point to the long sustainable trajectory ‘so far’ in support of doing nothing.”

However, the scientists point out that the worst-case scenarios are by no means inevitable, and suggest that appropriate policy and structural changes could avoid collapse, if not pave the way toward a more stable civilisation.

The two key solutions are to reduce economic inequality so as to ensure fairer distribution of resources, and to dramatically reduce resource consumption by relying on less intensive renewable resources and reducing population growth:

“Collapse can be avoided and population can reach equilibrium if the per capita rate of depletion of nature is reduced to a sustainable level, and if resources are distributed in a reasonably equitable fashion.”

So what’s the model?

It’s 4 ordinary differential equations:

$$\dot{x}_C = \beta_C x_C - \alpha_C x_C$$

$$\dot{x}_E = \beta_E x_E - \alpha_E x_E$$

$$\dot{y} = \gamma y (\lambda - y) - \delta x_C y$$

$$\dot{w} = \delta x_C y - C_C - C_E$$

where:

• $x_C$ is the population of the **commoners** or **masses**

• $x_E$ is the population of the **elite**

• $y$ represents **natural resources**

• $w$ represents **wealth**

The authors say that

Natural resources exist in three forms: nonrenewable stocks (fossil fuels, mineral deposits, etc), renewable stocks (forests, soils, aquifers), and flows (wind, solar radiation, rivers). In future versions of HANDY, we plan to disaggregate Nature into these three different forms, but for simplification in this version, we have adopted a single formulation intended to represent an amalgamation of the three forms.

So, it’s possible that the paper to be published in *Ecological Economics* treats natural resources using three variables instead of just one.

Now let’s look at the equations one by one:

$$\dot{x}_C = \beta_C x_C - \alpha_C x_C$$

This looks weird at first, but $\beta_C$ and $\alpha_C$ aren’t both constants, which would be redundant. $\beta_C$ is a constant **birth rate for commoners**, while $\alpha_C$, the **death rate for commoners**, is a function of wealth.

Similarly, in

$$\dot{x}_E = \beta_E x_E - \alpha_E x_E$$

$\beta_E$ is a constant **birth rate for the elite**, while $\alpha_E$, the **death rate for the elite**, is a function of wealth. The death rate is different for the elite and commoners:

For both the elite and commoners, the death rate drops linearly with increasing wealth from its maximum value $\alpha_M$ to its minimum value $\alpha_m$. But it drops faster for the elite, of course! For the commoners it reaches its minimum when the wealth $w$ reaches some value $w_{th}$, but for the elite it reaches its minimum earlier, when $w = w_{th}/\kappa$, where $\kappa$ is some number bigger than 1.

Next, how do natural resources change? Here’s the equation:

$$\dot{y} = \gamma y (\lambda - y) - \delta x_C y$$

The first part of this equation,

$$\dot{y} = \gamma y (\lambda - y),$$

describes how natural resources renew themselves if left alone. This is just the **logistic equation**, famous in models of population growth. Here $\lambda$ is the equilibrium level of natural resources, while $\gamma$ is another number that helps say how fast the resources renew themselves. Solutions of the logistic equation look like this:
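For reference, the logistic equation has a standard closed-form solution. Writing $y_0 = y(0)$, it is:

```latex
y(t) = \frac{\lambda}{1 + \frac{\lambda - y_0}{y_0}\, e^{-\gamma \lambda t}}
```

As $t \to \infty$, $y(t) \to \lambda$, the equilibrium level, which is why the solution curves flatten out there.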

But the whole equation,

$$\dot{y} = \gamma y (\lambda - y) - \delta x_C y,$$

has a term saying that natural resources get used up at a rate proportional to the population of commoners times the amount of natural resources; $\delta$ is just a constant of proportionality.

It’s curious that the population of elites doesn’t affect the depletion of natural resources, and also that doubling the amount of natural resources doubles the rate at which they get used up. Regarding the first issue, the authors offer this explanation:

The depletion term includes a rate of depletion per worker, and is proportional to both Nature and the number of workers. However, the economic activity of Elites is modeled to represent executive, management, and supervisory functions, but not engagement in the direct extraction of resources, which is done by Commoners. Thus, only Commoners produce.

I didn’t notice a discussion of the second issue.

Finally, the change in the amount of wealth is described by this equation:

$$\dot{w} = \delta x_C y - C_C - C_E$$

The first term at right precisely matches the depletion of natural resources in the previous equation, but with the opposite sign: natural resources are getting turned into ‘wealth’. $C_C$ describes **consumption by commoners** and $C_E$ describes **consumption by the elite**. These are both functions of wealth, a bit like the death rates… but as you’d expect, increasing wealth *increases* consumption:

For both the elite and commoners, consumption grows linearly with increasing wealth until wealth reaches the critical level $w_{th}$. But it grows faster for the elites, and reaches a higher level.

So, that’s the model… at least in this preliminary version of the paper.
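Since the model is just four ODEs, it’s easy to integrate numerically. Below is a minimal sketch using a forward Euler step. All parameter values are invented for illustration (the paper explores many settings), the threshold wealth is treated as a constant for simplicity (in the paper it depends on the populations), and the death-rate and consumption functions follow the linear-up-to-threshold shape described above.

```python
# Minimal sketch of the HANDY-style dynamics, integrated with forward
# Euler. All parameter values are invented for illustration; w_th is
# treated as a constant here, a simplification of the paper's version.

def death_rate(w, w_th, alpha_m=0.01, alpha_M=0.07):
    """Death rate falls linearly from alpha_M at w = 0 to alpha_m at w_th."""
    return alpha_m + max(0.0, 1.0 - w / w_th) * (alpha_M - alpha_m)

def handy_step(xC, xE, y, w, dt=0.1,
               beta_C=0.03, beta_E=0.03,  # birth rates
               gamma=0.01, lam=100.0,     # nature regrowth rate, capacity
               delta=0.001,               # depletion per commoner
               s=0.05, kappa=10.0,        # subsistence level, elite multiplier
               w_th=50.0):                # threshold wealth (constant here)
    # Consumption grows linearly with wealth up to w_th; elites consume
    # kappa times more per capita, so their consumption saturates higher.
    C_C = min(1.0, w / w_th) * s * xC
    C_E = min(1.0, w / w_th) * kappa * s * xE
    # Elites reach the minimum death rate earlier, at w = w_th / kappa.
    alpha_C = death_rate(w, w_th)
    alpha_E = death_rate(w, w_th / kappa)
    dxC = (beta_C - alpha_C) * xC                 # commoners
    dxE = (beta_E - alpha_E) * xE                 # elite
    dy = gamma * y * (lam - y) - delta * xC * y   # nature: regrowth - depletion
    dw = delta * xC * y - C_C - C_E               # wealth: production - consumption
    return (xC + dt * dxC, xE + dt * dxE,
            max(y + dt * dy, 0.0), max(w + dt * dw, 0.0))

# Run a short simulation; state is (commoners, elite, nature, wealth).
state = (100.0, 1.0, 100.0, 0.0)
for _ in range(500):
    state = handy_step(*state)
print(state)
```

Varying parameters such as the depletion rate $\delta$ and the initial elite population is how the paper’s qualitatively different scenarios (equilibrium, cycles, collapse) arise.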

There are many parameters in this model, and many different things can happen depending on their values and the initial conditions. The paper investigates many different scenarios. I don’t have the energy to describe them all, so I urge you to skim it and look at the graphs.

I’ll just show you three. Here is one that Nafeez Ahmed mentioned, where civilization

appears to be on a sustainable path for quite a long time, but even using an optimal depletion rate and starting with a very small number of Elites, the Elites eventually consume too much, resulting in a famine among Commoners that eventually causes the collapse of society.

I can see why Ahmed would like to talk about this scenario: he’s written a book called *A User’s Guide to the Crisis of Civilization and How to Save It*. Clearly it’s worth putting some thought into risks of this sort. But how likely is this particular scenario compared to others? For that we’d need to think hard about how well this model matches reality.

It’s obviously a crude simplification of an immensely complex and unknowable system: the whole civilization on this planet. That doesn’t mean it’s fundamentally wrong! Its predictions could still be qualitatively correct. But to gain confidence in this, we’d need material that is not provided in the draft paper I’ve seen. It says:

The scenarios most closely reflecting the reality of our world today are found in the third group of experiments (see section 5.3), where we introduced economic stratification. Under such conditions, we find that collapse is difficult to avoid.

But it would be nice to see a more careful approach to setting model parameters, justifying the simplifications built into the model, exploring what changes when some simplifications are reduced, and so on.

Here’s a happier scenario, where the parameters are chosen differently:

The main difference is that the rate of depletion of resources per commoner is smaller.

And here’s yet another, featuring cycles of prosperity, overshoot and collapse:

I hope you see that I’m neither trying to ‘shoot down’ this model nor defend it. I’m just trying to understand it.

I think it’s very important—and fun—to play around with models like this, keep refining them, comparing them against each other, and using them as tools to help our thinking. But I’m not very happy that Nafeez Ahmed called this piece of work a “highly credible wake-up call” without giving us any details about what was actually done.

I don’t expect blog articles on the *Guardian* to feature differential equations! But it would be great if journalists who write about new scientific results would provide a link to the actual work, so that people who want to dig deeper can do so. Don’t make us scour the internet looking for clues.

And scientists: if your results are potentially important, let everyone actually see them! If you think civilization could be heading for collapse, burying your evidence and your recommendations for avoiding this calamity in a closed-access Elsevier journal is not the optimal strategy to deal with the problem.

There’s been a whole side-battle over whether NASA actually funded this study:

• Keith Kloor, About that popular *Guardian* story on the collapse of industrial civilization, *Collide-A-Scape*, blog on *Discover*, 21 March 2014.

• Nafeez Ahmed, Did NASA fund ‘civilisation collapse’ study, or not?, *Earth Insight*, blog on *The Guardian*, 21 March 2014.

But that’s very boring compared to the fun of thinking about the model used in this study… and the challenging, difficult business of trying to think clearly about the risks of civilizational collapse.

The paper is now freely available here:

• Safa Motesharrei, Jorge Rivas and Eugenia Kalnay, Human and nature dynamics (HANDY): modeling inequality and use of resources in the collapse or sustainability of societies, *Ecological Economics* **101** (2014), 90–102.


There will be a 5-day workshop on Programming with Chemical Reaction Networks: Mathematical Foundation at BIRS from Sunday, June 8 to Friday, June 13, 2014. It’s being organized by

• Anne Condon (University of British Columbia)

• David Doty (California Institute of Technology)

• Chris Thachuk (University of Oxford).

BIRS is the Banff International Research Station, in the mountains west of Calgary, in Alberta, Canada.

Here’s the workshop proposal on the BIRS website. It’s a pretty interesting proposal, especially if you’ve already read Luca Cardelli’s description of computing with chemical reaction networks, at the end of our series of posts on chemical reaction networks. The references include a lot of cool papers, so I’ve created links to those to help you get ahold of them.

This workshop will explore three of the most important research themes concerning stochastic chemical reaction networks (CRNs). Below we motivate each theme and highlight key questions that the workshop will address. Our main objective is to bring together distinct research communities in order to consider new problems that could not be fully appreciated in isolation. It is also our aim to determine commonalities between different disciplines and bodies of research. For example, research into population protocols, vector addition systems, and Petri nets provides a rich body of theoretical results that may already address contemporary problems arising in the study of CRNs.

## Computational power of CRNs

Before designing robust and practical systems, it is useful to know the limits of computing with a chemical soup. Some interesting theoretical results are already known for stochastic chemical reaction networks. The computational power of CRNs depends on a number of factors, including: (i) whether the computation is deterministic or probabilistic, and (ii) whether the CRN has an initial context — certain species, independent of the input, that are initially present in some exact, constant count.

In general, CRNs with a constant number of species (independent of the input length) are capable of Turing universal computation [17], if the input is represented by the exact (unary) count of one molecular species, some small probability of error is permitted and an initial context in the form of a single-copy leader molecule is used.

Could the same result hold in the absence of an initial context? In a surprising result based on the distributed computing model of population protocols, it has been shown that if a computation must be error-free, then deterministic computation with CRNs having an initial context is limited to computing semilinear predicates [1], later extended to functions outputting natural numbers encoded by molecular counts [5].

Furthermore, any semilinear predicate or function can be computed by that class of CRNs in expected time polylogarithmic in the input length. Building on this result, it was recently shown that by incurring an expected time linear in the input length, the same result holds for “leaderless” CRNs [8] — CRNs with no initial context. Can this result be improved to sub-linear expected time? Which class of functions can be computed deterministically by a CRN without an initial context in expected time polylogarithmic in the input length?
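To make the flavor of such computations concrete, here is a toy simulator in Python (a sketch of my own, not code from any of the cited papers). The single reaction X → 2Y computes the semilinear function f(n) = 2n on a unary-encoded input, with no probability of error and no initial context:

```python
import random

def run_crn(counts, reactions, seed=1):
    """Fire a uniformly chosen applicable reaction until none applies.
    (Real stochastic kinetics would weight choices by propensities, but
    for a terminating, rate-independent computation like the one below
    the final counts don't depend on the order of firings.)"""
    rng = random.Random(seed)
    counts = dict(counts)
    while True:
        applicable = [(r, p) for r, p in reactions
                      if all(counts.get(s, 0) >= n for s, n in r.items())]
        if not applicable:
            return counts
        reactants, products = rng.choice(applicable)
        for s, n in reactants.items():
            counts[s] -= n
        for s, n in products.items():
            counts[s] = counts.get(s, 0) + n

# The single reaction X -> 2Y: input n is the initial count of X,
# output f(n) = 2n is the final count of Y.
out = run_crn({'X': 21}, [({'X': 1}, {'Y': 2})])
# out['Y'] == 42, out['X'] == 0
```

Computations like this are error-free precisely because the answer is independent of the (random) order in which reactions fire; the hard theorems cited above say which functions can be computed this way, and how fast.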

While (restricted) CRNs are Turing-universal, current results use space proportional to the computation time. Using a non-uniform construction, where the number of species is proportional to the input length and each initial species is present in some constant count, it is known that any S(n) space-bounded computation can be computed by a logically-reversible tagged CRN, within a reaction volume of size poly(S(n)) [18]. Tagged CRNs were introduced to model explicitly the fuel molecules in physical realizations of CRNs such as DNA strand displacement systems [6] that are necessary to supply matter and energy for implementing reactions such as X → X + Y that violate conservation of mass and/or energy.

Thus, for space-bounded computation, there exist CRNs that are time-efficient and CRNs that are space-efficient. Do there exist CRNs that are both time- and space-efficient, for any space-bounded function?

## Designing and verifying robust CRNs

While CRNs provide a concise model of chemistry, their physical realizations are often more complicated and more granular. How can one be sure they accurately implement the intended network behaviour? Probabilistic model checking has already been employed to find and correct inconsistencies between CRNs and their DNA strand displacement system (DSD) implementations [9]. However, at present, model checking of arbitrary CRNs is only capable of verifying the correctness of very small systems. Indeed, verification of these types of systems is a difficult problem: probabilistic state reachability is undecidable [17, 20] and general state reachability is EXPSPACE-hard [4].
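For intuition about why only very small systems can be checked, here is a brute-force reachability search in Python (a toy of my own, not the probabilistic model checking of [9]): it exhaustively enumerates the configurations reachable from an initial state, which is exact but blows up exponentially with molecular counts:

```python
from collections import deque

def reachable(initial, reactions, cap=30):
    """Breadth-first search over all configurations reachable from
    `initial`, capping each molecular count at `cap` to keep the search
    finite. Exact for tiny systems, and a direct illustration of why
    exhaustive verification stops scaling."""
    seen = {initial}
    queue = deque([initial])
    while queue:
        state = queue.popleft()
        for reactants, products in reactions:
            if all(have >= need for have, need in zip(state, reactants)):
                nxt = tuple(h - r + p
                            for h, r, p in zip(state, reactants, products))
                if max(nxt) <= cap and nxt not in seen:
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

# Species ordered (X, Y), with reactions X -> 2Y and 2Y -> X.
# The quantity 2X + Y is conserved, so from (3, 0) only the four
# configurations with 2X + Y = 6 are reachable.
rs = [((1, 0), (0, 2)), ((0, 2), (1, 0))]
states = reachable((3, 0), rs)
```

Even this toy makes the obstruction visible: the reachable set is cut down only by conservation laws, and with more species and larger counts the enumeration grows out of control.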

How can larger systems be verified? A deeper understanding of CRN behaviour may simplify the process of model checking. As a motivating example, there has been recent progress towards verifying that certain DSD implementations correctly simulate underlying CRNs [16, 7, 10]. This is an important step to ensuring correctness, prior to experiments. However, DSDs can also suffer from other errors when implementing CRNs, such as spurious hybridization or strand displacement. Can DSDs and more generally CRNs be designed to be robust to such predictable errors? Can error correcting codes and redundant circuit designs used in traditional computing be leveraged in these chemical computers? Many other problems arise when implementing CRNs. Currently, unique types of fuel molecules must be designed for every reaction type. This complicates the engineering process significantly. Can a universal type of fuel be designed to smartly implement any reaction?

## Energy efficient computing with CRNs

Rolf Landauer showed that logically irreversible computation — computation as modeled by a standard Turing machine — dissipates an amount of energy proportional to the number of bits of information lost, such as previous state information, and therefore cannot be energy efficient [11]. However, Charles Bennett showed that, in principle, energy efficient computation is possible: he proposed a universal Turing machine that performs logically-reversible computation, and identified nucleic acids (RNA/DNA) as a potential medium for realizing logically-reversible computation in a physical system [2].
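Landauer’s bound itself is a one-line calculation, evaluated here at room temperature:

```python
import math

k_B = 1.380649e-23   # Boltzmann constant, J/K (exact in the 2019 SI)
T = 300.0            # roughly room temperature, K

# Landauer's bound: erasing one bit must dissipate at least kT ln 2.
e_bit = k_B * T * math.log(2)
# about 2.9e-21 joules per erased bit
```

For scale, hydrolysis of a single ATP molecule releases roughly 5 × 10⁻²⁰ J (about 30 kJ/mol), only an order of magnitude above this bound, which is one reason molecular systems are an attractive setting for near-reversible computation.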

There have been examples of logically-reversible DNA strand displacement systems — a physical realization of CRNs — that are, in theory, capable of complex computation [12, 19]. Are these systems energy efficient in a physical sense? How can this argument be made formally to satisfy both the computer science and the physics communities? Is a physical experiment feasible, or are these results merely theoretical footnotes?

## References

[1] D. Angluin, J. Aspnes, and D. Eisenstat. Stably computable predicates are semilinear. In PODC, pages 292–299, 2006.

[2] C. H. Bennett. Logical reversibility of computation. IBM Journal of Research and Development, 17 (6):525–532, 1973.

[3] L. Cardelli and A. Csikasz-Nagy. The cell cycle switch computes approximate majority. Scientific Reports, 2, 2012.

[4] E. Cardoza, R. Lipton, and A. R. Meyer. Exponential space complete problems for Petri nets and commutative semigroups (preliminary report). Proceedings of the Eighth Annual ACM Symposium on Theory of Computing, pages 50–54, 1976.

[5] H. L. Chen, D. Doty, and D. Soloveichik. Deterministic function computation with chemical reaction networks. DNA Computing and Molecular Programming, pages 25–42, 2012.

[6] A. Condon, A. J. Hu, J. Manuch, and C. Thachuk. Less haste, less waste: on recycling and its limits in strand displacement systems. Journal of the Royal Society: Interface Focus, 2 (4):512–521, 2012.

[7] Q. Dong. A bisimulation approach to verification of molecular implementations of formal chemical reaction networks. Master’s thesis. SUNY Stony Brook, 2012.

[8] D. Doty and M. Hajiaghayi. Leaderless deterministic chemical reaction networks. In Proceedings of the 19th International Meeting on DNA Computing and Molecular Programming, 2013.

[9] M. R. Lakin, D. Parker, L. Cardelli, M. Kwiatkowska, and A. Phillips. Design and analysis of DNA strand displacement devices using probabilistic model checking. Journal of The Royal Society Interface, 2012.

[10] M. R. Lakin, D. Stefanovic and A. Phillips. Modular Verification of Two-domain DNA Strand Displacement Networks via Serializability Analysis. In Proceedings of the 19th Annual conference on DNA computing, 2013.

[11] R. Landauer. Irreversibility and heat generation in the computing process. IBM Journal of research and development, 5 (3):183–191, 1961.

[12] L. Qian, D. Soloveichik, and E. Winfree. Efficient Turing-universal computation with DNA polymers (extended abstract). In Proceedings of the 16th Annual conference on DNA computing, pages 123–140, 2010.

[13] L. Qian and E. Winfree. Scaling up digital circuit computation with DNA strand displacement cascades. Science, 332 (6034):1196–1201, 2011.

[14] L. Qian, E. Winfree, and J. Bruck. Neural network computation with DNA strand displacement cascades. Nature, 475 (7356):368–372, 2011.

[15] G. Seelig, D. Soloveichik, D.Y. Zhang, and E. Winfree. Enzyme-free nucleic acid logic circuits. Science, 314 (5805):1585–1588, 2006.

[16] S. W. Shin. Compiling and verifying DNA-based chemical reaction network implementations. Master’s thesis. California Institute of Technology, 2011.

[17] D. Soloveichik, M. Cook, E. Winfree, and J. Bruck. Computation with finite stochastic chemical reaction networks. Natural Computing, 7 (4):615–633, 2008.

[18] C. Thachuk. Space and energy efficient molecular programming. PhD thesis, University of British Columbia, 2012.

[19] C. Thachuk and A. Condon. Space and energy efficient computation with DNA strand displacement systems. In Proceedings of the 18th Annual International Conference on DNA computing and Molecular Programming, 2012.

[20] G. Zavattaro and L. Cardelli. Termination Problems in Chemical Kinetics. In Proceedings of the 2008 Conference on Concurrency Theory, pages 477–491, 2008.


Hi, I’m Eugene Lerman. I met John back in the mid 1980s when John and I were grad students at MIT. John was doing mathematical physics and I was studying symplectic geometry. We never talked about networks. Now I teach in the math department at the University of Illinois at Urbana, and we occasionally talk about networks on his blog.

A few years ago a friend of mine who studies locomotion in humans and other primates asked me if I knew of any math that could be useful to him.

I remember coming across an expository paper on ‘coupled cell networks’:

• Martin Golubitsky and Ian Stewart, Nonlinear dynamics of networks: the groupoid formalism, *Bull. Amer. Math. Soc.* **43** (2006), 305–364.

In this paper, Golubitsky and Stewart used the study of animal gaits and models for the hypothetical neural networks called ‘central pattern generators’ that give rise to these gaits to motivate the study of networks of ordinary differential equations with symmetry. In particular they were interested in ‘synchrony’. When a horse trots, or canters, or gallops, its limbs move in an appropriate pattern, with different pairs of legs moving in synchrony:

They explained that synchrony (and the patterns) could arise when the differential equations have finite group symmetries. They also proposed several systems of symmetric ordinary differential equations that could generate the appropriate patterns.

Later on Golubitsky and Stewart noticed that there are systems of ODEs that have no group symmetries but whose solutions nonetheless exhibit a certain synchrony. They found an explanation: these ODEs were ‘groupoid invariant’. I thought that it would be fun to understand what ‘groupoid invariant’ meant and why such invariance leads to synchrony.

I talked my colleague Lee DeVille into joining me on this adventure. At the time Lee had just arrived at Urbana after a postdoc at NYU. After a few years of thinking about these networks Lee and I realized that strictly speaking one doesn’t really need groupoids for these synchrony results and it’s better to think of the social life of networks instead. Here is what we figured out—a full and much too precise story is here:

• Eugene Lerman and Lee DeVille, Dynamics on networks of manifolds.

Let’s start with an example of a class of ODEs with a mysterious property:

**Example.** Consider this ordinary differential equation for a function

for some function It is easy to see that a function solving

gives a solution of these equations if we set

You can think of the differential equations in this example as describing the dynamics of a complex system built out of three interacting subsystems. Then any solution of the form

may be thought of as a **synchronization**: the three subsystems are evolving in lockstep.

One can also view the result geometrically: the diagonal

is an invariant subsystem of the continuous-time dynamical system defined by the differential equations. Remarkably enough, such a subsystem exists for *any* choice of a function

Where does such a synchronization or invariant subsystem come from? There is no apparent symmetry of that preserves the differential equations and fixes the diagonal and thus could account for this invariant subsystem. It turns out that what matters is the structure of the mutual dependencies of the three subsystems making up the big system. That is, the evolution of depends only on and the evolution of depends only on and and the evolution of depends only on and

These dependencies can be conveniently pictured as a directed graph:

The graph has no symmetries. Nonetheless, the existence of the invariant subsystem living on the diagonal can be deduced from certain properties of the graph The key is the existence of a surjective map of graphs

from our graph to a graph with exactly one node, call it and one arrow. It is also crucial that has the following lifting property: there is a unique way to lift the one and only arrow of to an arrow of once we specify the target node of the lift.

We now formally define the notion of a ‘network of phase spaces’ and of a continuous-time dynamical system on such a network. Equivalently, we define a ‘network of continuous-time dynamical systems’. We start with a directed graph

Here is the set of edges, is the set of nodes, and the two arrows assign to an edge its source and target, respectively. To each node we attach a phase space (or more formally a manifold, perhaps with boundary or corners). Here ‘attach’ means that we choose a function it assigns to each node a phase space

In our running example, to each node of the graph we attach the real line (If we think of the set as a discrete category and as a category of manifolds and smooth maps, then is simply a functor.)

Thus a **network of phase spaces** is a pair where is a directed graph and is an assignment of phase spaces to the nodes of

We think of the collection as the collection of phase spaces of the subsystems constituting the network We refer to as a **phase space function**. Since the state of the network should be determined completely and uniquely by the states of its subsystems, it is reasonable to take the total phase space of the network to be the product

In the example the total phase space of the network is while the phase space of the network is the real line

Finally we need to interpret the arrows. An arrow in a graph should encode the fact that the dynamics of the subsystem associated to the node depends on the states of the subsystem associated to the node To make this precise requires the notion of an ‘open system’, or ‘control system’, which we define below. It also requires a way to associate an open system to the set of arrows coming into a node/vertex

To a first approximation an **open system** (or **control system**; I use the two terms interchangeably) is a system of ODEs depending on parameters. I like to think of a control system geometrically: a control system on a phase space controlled by the phase space is a map

where is the tangent bundle of the space so that for all is a vector tangent to at the point Given phase spaces and the set of all corresponding control systems forms a vector space which we denote by

(More generally one can talk about the space of control systems associated with a surjective submersion For us, submersions of the form are general enough.)

To encode the incoming arrows, we introduce the **input tree** (this is a very short tree, a corolla if you like). The input tree of a node of a graph is a directed graph whose arrows are precisely the arrows of coming into the vertex but any two parallel arrows of with target will have disjoint sources in In the example the input tree of the one node of of is the tree

There is always a map of graphs For instance for the input tree in the example we just discussed, is the map

Consequently if is a network and is an input tree of a node of then is also a network. This allows us to talk about the phase space of an input tree. In our running example,

Given a network there is a vector space of open systems associated with every node of

In our running example, the vector space associated to the one node of is

In the same example, the network has three nodes and we associate the same vector space to each one of them.

We then construct an interconnection map

from the product of spaces of all control systems to the *space of vector fields*

on the total phase space of the network. (We use the standard notation to denote the tangent bundle of a manifold by and the space of vector fields on by ). In our running example the interconnection map for the network is the map

The interconnection map for the network is the map

given by

To summarize: a dynamical system on a network of phase spaces is the data where is a directed graph, is a phase space function and is a collection of open systems compatible with the graph and the phase space function. The corresponding vector field on the total space of the network is obtained by interconnecting the open systems.

Dynamical systems on networks can be made into the objects of a category. Carrying this out gives us a way to associate maps of dynamical systems to combinatorial data.

The first step is to form the category of networks of phase spaces, which we call In this category, by definition, a morphism from a network to a network is a map of directed graphs which is compatible with the phase space functions:

Using the universal properties of products it is easy to show that a map of networks defines a map of total phase spaces in the *opposite* direction:

In the category theory language the total phase space assignment extends to a contravariant functor

We call this functor the **total phase space functor**. In our running example, the map

is given by

Continuous-time dynamical systems also form a category, which we denote by The objects of this category are pairs consisting of a phase space and a vector field on this phase space. A morphism in this category is a smooth map of phase spaces that intertwines the two vector fields. That is:

for any pair of objects in

In general, morphisms in this category are difficult to determine explicitly. For example if then a morphism from to some dynamical system is simply a piece of an integral curve of the vector field defined on an interval And if is the constant vector field on the circle then a morphism from to is a periodic orbit of Proving that a given dynamical system has a periodic orbit is usually hard.

Consequently, given a map of networks

and a collection of open systems

on we expect it to be very difficult if not impossible to find a collection of open systems so that

is a map of dynamical systems.

It is therefore somewhat surprising that there is a class of maps of graphs for which the above problem has an easy solution! The graph maps of this class are known by several different names. Following Boldi and Vigna we refer to them as **graph fibrations**. Note that despite what the name suggests, graph fibrations in general are not required to be surjective on nodes or edges. For example, the inclusion

is a graph fibration. We say that a map of networks

is a **fibration** of networks if is a graph fibration. With some work one can show that a fibration of networks induces a pullback map

on the sets of tuples of the associated open systems. The main result of *Dynamics on networks of manifolds* is a proof that for a fibration of networks the maps and the two interconnection maps and are compatible:

**Theorem.** Let be a fibration of networks of manifolds. Then the pullback map

is compatible with the interconnection maps

and

Namely for any collection of open systems on the network the diagram

commutes. In other words,

is a map of continuous-time dynamical systems, a morphism in

At this point the pure mathematician in me is quite happy: I have a theorem! Better yet, the theorem solves the puzzle at the beginning of this post. But if you have read this far, you may well be wondering: “Ok, you told us about your theorem. Very nice. Now what?”

There is plenty to do. On the practical side of things, continuous-time dynamical systems are much too limited for contemporary engineers. Most of the engineers I know care a lot more about hybrid systems. These kinds of systems exhibit both continuous-time behavior and abrupt transitions, hence the name. For example, the anti-lock brakes on a car are just that kind of system — if a sensor detects that the car is skidding while the brake pedal is pressed, the system starts pulsing the brakes. This is not your father’s continuous-time dynamical system! Hybrid dynamical systems are very hard to understand. They have been all the rage in the engineering literature for the last 10–15 years. Sadly, most pure mathematicians have never heard of them. It would be interesting to extend the theorem I am bragging about to hybrid systems.
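For readers who have not met hybrid systems, here is the classic toy example in Python (my illustration, unrelated to the papers above): a bouncing ball, whose flight is a plain ODE but whose impacts are discrete transitions:

```python
def bouncing_ball(h=1.0, v=0.0, e=0.8, g=9.81, dt=1e-3, t_end=3.0):
    """A textbook toy hybrid system: continuous free fall (an ODE)
    punctuated by discrete impact events that reverse the velocity,
    scaled by the coefficient of restitution e. Naive event handling."""
    bounces = 0
    for _ in range(int(t_end / dt)):
        h += dt * v                # continuous flow: free fall
        v -= dt * g
        if h <= 0.0 and v < 0.0:   # guard: impact detected
            h = 0.0                # reset map: a discrete jump in the state
            v = -e * v
            bounces += 1
    return h, v, bounces

h, v, n = bouncing_ball()
```

Even this toy shows why hybrid systems are delicate: the bounces accumulate (the famous Zeno behavior), and the quality of the simulation hinges on how impacts are detected and resolved, not just on the ODE solver.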

Here is another problem that may be worth thinking about: how well does the theorem hold up under numerical simulation? My feeling is that any explicit numerical integration method will behave well. Implicit methods I am not sure about.

And finally a more general issue. John has been talking about networks quite a bit on this blog. But his networks and my networks look like very different mathematical structures. What do they have in common besides nodes and arrows? What mathematical structure are they glimpses of?
