Network Theory (Part 1)

4 March, 2011

As a mathematician who has gotten interested in the problems facing our planet, I’ve been trying to cook up some new projects to work on. Over the decades I’ve spent a lot of time studying quantum field theory, quantum gravity, n-categories, and numerous pretty topics in pure math. My accumulated knowledge doesn’t seem terribly relevant to my new goals. But I don’t feel like doing a complete ‘brain dump’ and starting from scratch. And my day job still requires that I prove theorems.

Green Mathematics

I wish there were a branch of mathematics—in my dreams I call it green mathematics—that would interact with biology and ecology just as fruitfully as traditional mathematics interacts with physics. If the 20th century was the century of physics, while the 21st is the century of biology, shouldn’t mathematics change too? As we struggle to understand and improve humanity’s interaction with the biosphere, shouldn’t mathematicians have some role to play?

Of course, it’s possible that when you study truly complex systems—from a living cell to the Earth as a whole—mathematics loses the unreasonable effectiveness it so famously has when it comes to simple things like elementary particles. So, maybe there is no ‘green mathematics’.

Or maybe ‘green mathematics’ can only be born after we realize it needs to be fundamentally different than traditional mathematics. For starters, it may require massive use of computers, instead of the paper-and-pencil methods that work so well in traditional math. Simulations might become more important than proofs. That’s okay with me. Mathematicians like things to be elegant—but one can still have elegant definitions and elegant models, even if one needs computer simulations to see how the models behave.

Perhaps ‘green mathematics’ will require a radical shift of viewpoint that we can barely begin to imagine.

It’s also possible that ‘green mathematics’ already exists in preliminary form, scattered throughout many different fields: mathematical biology, quantitative ecology, bioinformatics, artificial life studies, and so on. Maybe we just need more mathematicians to learn these fields and seek to synthesize them.

I’m not sure what I think about this ‘green mathematics’ idea. But I think I’m getting a vague feel for it. This may sound corny, but I feel it should be about structures that are more like this:

than this:

I’ve spent a long time exploring the crystalline beauty of traditional mathematics, but now I’m feeling an urge to study something slightly more earthy.

Network Theory

When dreaming of grand syntheses, it’s easy to get bogged down in vague generalities. Let’s start with something smaller and more manageable.

Network theory, and the use of diagrams, have emerged independently in many fields of science. In particle physics we have Feynman diagrams:


In the humbler but more practical field of electronics we have circuit diagrams:

amplifier with bass boost

Throughout engineering we also have various other styles of diagram, such as bond graphs:


I’ve already told you about Petri nets, which are popular in computer science… but also nice for describing chemical reactions:

petri net

‘Chemical reaction networks’ do a similar job, in a more primitive way:

chemical reaction network

Chemistry shades imperceptibly into biology, and biology uses so many styles of diagram that an organization has tried to standardize them:

• Systems Biology Graphical Notation (SBGN) homepage.

SBGN is made up of 3 different languages, representing different visions of biological systems. Each language involves a comprehensive set of symbols with precise semantics, together with detailed syntactic rules for how maps are to be interpreted:

1) The Process Description language shows the temporal course of biochemical interactions in a network.


PD

2) The Entity Relationship language lets you see all the relationships in which a given entity participates, regardless of the temporal aspects.


ER

3) The Activity Flow language depicts the flow of information between biochemical entities in a network.


AF

Biology shades into ecology, and in the 1950s, Howard T. Odum developed the ‘Energy Systems Language’ while studying tropical forests. Odum is now considered to be the founder of ‘systems ecology’. If you can get ahold of this big fat book, you’ll see it’s packed with interesting diagrams describing the flow of energy through ecosystems:

• Howard T. Odum, Systems Ecology: an Introduction, Wiley-Interscience, New York, 1983.

His language is sometimes called ‘Energese’, for short:

Energy Systems Symbols

The list goes on and on, and I won’t try for completeness… but we shouldn’t skip probability theory, statistics and machine learning! A Bayesian network, also known as a “belief network”, is a way to represent knowledge about some domain: it consists of a graph where the nodes are labelled by random variables and the edges represent probabilistic dependencies between these random variables. Various styles of diagrams have been used for these:


structural equation modeling
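
Just to make that concrete, here's a minimal sketch in Python of the structure such a diagram encodes: a directed graph together with a conditional probability table for each node. The little rain/sprinkler network and all its numbers are made-up textbook-style assumptions, not data from any of the diagrams above.

parents = {
    'Rain': [],
    'Sprinkler': ['Rain'],
    'WetGrass': ['Rain', 'Sprinkler'],
}

# Conditional probability tables: key = tuple of parent values,
# value = P(node is True | those parent values).
cpt = {
    'Rain':      {(): 0.2},
    'Sprinkler': {(True,): 0.01, (False,): 0.4},
    'WetGrass':  {(True, True): 0.99, (True, False): 0.8,
                  (False, True): 0.9,  (False, False): 0.0},
}

def joint_probability(assignment):
    # P(assignment) = product over nodes of P(node = its value | its parents' values).
    prob = 1.0
    for node, pa in parents.items():
        p_true = cpt[node][tuple(assignment[p] for p in pa)]
        prob *= p_true if assignment[node] else 1.0 - p_true
    return prob

print(joint_probability({'Rain': True, 'Sprinkler': False, 'WetGrass': True}))
# 0.2 * 0.99 * 0.8 = 0.1584

The point is just that the diagram is the data structure: once you specify the graph and the tables, the joint distribution of all the variables is determined.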

And don’t forget neural networks!

What Mathematicians Can Do

It’s clear that people from different subjects are reinventing the same kinds of diagrams. It’s also clear that diagrams are being used in a number of fundamentally different ways. So, there’s a lot to sort out.

I already mentioned one attempt to straighten things out: Systems Biology Graphical Notation. But that’s not the only one. For example, in 2001 the International Council on Systems Engineering set up a committee to customize their existing Unified Modeling Language and create something called Systems Modeling Language. This features nine types of diagrams!

So, people are already trying to systematize the use of diagrams. But mathematicians should join the fray.

Why? Because mathematicians are especially good at soaring above the particulars and seeing general patterns. Also, they know ways to think of diagrams, not just as handy tools, but as rigorously defined structures that you can prove theorems about… with the help of category theory.

I’ve written a bit about diagrams already, but not their ‘green’ applications. Instead, I focused on their applications to traditional subjects like topology, physics, logic and computation:

• John Baez and Aaron Lauda, A prehistory of n-categorical physics, to appear in Deep Beauty: Mathematical Innovation and the Search for an Underlying Intelligibility of the Quantum World, ed. Hans Halvorson, Cambridge U. Press.

• John Baez and Mike Stay, A Rosetta stone: topology, physics, logic and computation, in New Structures for Physics, ed. Bob Coecke, Lecture Notes in Physics vol. 813, Springer, Berlin, 2011, pp. 95-174.

It would be good to expand this circle of ideas to include chemistry, biology, ecology, statistics, and so on. There should be a mathematical theory underlying the use of networks in all these disciplines.

I’ve started a project on this with Jacob Biamonte, who works on two other applications of diagrams, namely to quantum computation and condensed matter physics:

• Jacob D. Biamonte, Stephen R. Clark and Dieter Jaksch, Categorical tensor networks.

So far we’ve focused on one aspect: stochastic Petri nets, which are used to describe chemical reactions and also certain predator-prey models in quantitative ecology. In the posts to come, I want to show how ideas from quantum field theory can be used in studying stochastic Petri nets, and how this relates to the ‘categorification’ of Feynman diagram theory.
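
To give a tiny preview of what ‘stochastic Petri net’ means in practice, here is a minimal sketch in Python of Gillespie's algorithm, which is one standard way to simulate such a net. The net itself, with its three transitions (birth, predation, death), its rate constants and its initial token counts, is just an illustrative toy, not anything from our project.

import random

# Token counts for each species: X = prey, Y = predators (made-up starting values).
state = {'X': 50, 'Y': 100}

# Each transition: (rate constant, tokens consumed, tokens produced).
transitions = [
    (1.0,  {'X': 1},         {'X': 2}),   # birth:     X -> X + X
    (0.01, {'X': 1, 'Y': 1}, {'Y': 2}),   # predation: X + Y -> Y + Y
    (1.0,  {'Y': 1},         {}),         # death:     Y -> nothing
]

def propensity(rate, consumed, state):
    # Mass-action propensity: rate constant times falling powers of the token counts.
    a = rate
    for species, n in consumed.items():
        for k in range(n):
            a *= max(state[species] - k, 0)
    return a

t = 0.0
for step in range(5000):
    props = [propensity(rate, consumed, state) for rate, consumed, produced in transitions]
    total = sum(props)
    if total == 0:
        break                          # no transition can fire any more
    t += random.expovariate(total)     # exponential waiting time until the next event
    pick = random.uniform(0.0, total)  # choose a transition with probability proportional to its propensity
    for (rate, consumed, produced), a in zip(transitions, props):
        if pick < a:
            for s, n in consumed.items():
                state[s] -= n
            for s, n in produced.items():
                state[s] = state.get(s, 0) + n
            break
        pick -= a
    if step % 200 == 0:
        print(round(t, 3), state['X'], state['Y'])

Run it a few times and you'll see the prey and predator counts wander noisily around each other: exactly the sort of behavior we'd like to understand more conceptually.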


Summer Program on Climate Software

3 March, 2011

Here’s a great opportunity if you’re a student looking for something to do this summer. The Climate Code Foundation is working on open-source versions of important climate software. If you’re lucky, you could get paid to help!

• Climate Code Foundation, Google Summer of Code.

It seems the window for student applications is March 28-April 8.

They write:

Google have announced their Summer of Code, and we intend to be a mentoring organisation. If you’re a student, this is an opportunity to work on our open source code and earn a bit of money doing so (Google give a stipend of USD 5000 to qualifying students, and an honorarium of USD 500 to the mentoring organisation).

We have an ideas page, most of which revolves around our ccc-gistemp project. Ideas range from improving ccc-gistemp in various ways, through novel reconstructions, to clear implementations of other climate codes. If you have ideas of your own, we’d like to hear about those too.

If you are interested in participating as a student, then please get in touch.

We have not been a Summer of Code mentor before, but we bring many years (decades even!) of experience to the table: experience in computer science, software engineering, project management, and so on. We hope to help students make a success of their projects!

In case you’re wondering, ccc-gistemp is a version of GISTEMP written in Python.

Isn’t it annoying how explaining one mysterious word can require two more? In case you’re still wondering: Python is a really groovy modern programming language, in comparison to older ones like FORTRAN—and GISTEMP is an important computer program, mostly written in FORTRAN, which NASA uses to analyze the historical temperature record. GISTEMP is what gives us graphs like this:



So, it’s very important to update this program and search the existing program for bugs—and that’s what the Climate Code Foundation is doing:

The “all Python” milestone was achieved with ccc-gistemp release 0.2.0 on 2010-01-11. Naturally we have found (minor) bugs while doing this, but nothing else. Since 0.2.0 we have made major simplifications, chiefly by removing dependencies, and generally processing data internally (by avoiding writing it to intermediate files, which was only necessary on computers that would be considered extremely memory constrained by today’s standards).

Work continues on further simplification, clarification, generalisation, and extension.

Hone your programming skills while helping save the planet!


Guess Who Wrote This?

3 March, 2011

Guess who wrote this report. I’ll quote a bunch of it:

The climate change crisis is far from over. The decade 2000-2010 is the hottest ever recorded and data reveals each decade over the last 50 years to be hotter than the previous one. The planet is enduring more and more heat waves and rain levels—high and low—that test the outer bounds of meteorological study.

The failure of the USA, Australia and Japan to implement relevant legislation after the Copenhagen Accord, as well as general global inaction, might lead people to shrug off the climate issue. Many are quick to doubt the science. Amid such ambiguity a discontinuity is building as expert and public opinion diverge.

This divergence is not sustainable!

Society continues to face a dilemma posed here: a failure to reduce emissions now will mean considerably greater cost in the future. But concerted global action is still too far off given the extreme urgency required.

CO2 price transparency needed

Some countries forge ahead with national and local measures but many are moving away from market-based solutions and are punishing traditional energy sources. Cap-and-trade systems risk being discredited. The EU-Emissions Trading System (EU-ETS) has failed to deliver an adequate CO2 price. Industry lobbying for free allowance allocations is driving demands for CO2 taxes to eliminate perceived industry windfalls. In some cases this has led to political stalemate.

The transparency of a CO2 price is central to delivering least-cost emission reductions, but it also contributes to growing political resistance to cap-and-trade systems. Policy makers are looking to instruments – like mandates – where emissions value is opaque. This includes emission performance standards (EPSs) for electricity plants and other large fixed sources. Unfortunately, policies aimed at building renewable energy capacity are also displacing more natural gas than coal where the CO2 price is low or absent. This is counter-productive when it comes to reducing emissions. Sometimes the scale of renewables capacity also imposes very high system costs. At other times, policy support for specific renewables is maintained even after the technology reaches its efficient scale, as is the case in the US.

The recession has raised a significant issue for the EU-ETS: how to design cap-and-trade systems in the face of economic and technological uncertainty? Phase III of the ETS risks delivering a structurally low CO2 price due to the impact of the recession on EU emissions. A balanced resetting of the cap should be considered. It is more credible to introduce a CO2 price floor ahead of such shocks than engage in the ad hoc recalibration of the cap in response to them. This would signal to investors that unexpected shortfalls in emissions would be used in part to step up reductions and reduce uncertainty in investments associated with the CO2 price. This is an important issue for the design of Phase IV of the ETS.

Climate too low a priority

Structural climate policy problems aside, the global recession has moved climate concerns far down the hierarchy of government objectives. The financial crisis and Gulf of Mexico oil spill have also hurt trust in the private sector, spawning tighter regulation and leading to increased risk aversion. This hits funding and political support for new technologies, in particular Carbon Capture and Sequestration (CCS) where industry needs indemnification from some risk. Recent moves by the EU and the US regarding long-term liabilities show this support is far from secured. Government support for technology development may also be hit as they work to cut deficits.

In this environment of policy drift and increasing challenge to market-based solutions, it is important to remain strongly focused on least-cost solutions today and advances in new technologies for the future. Even if more pragmatic policy choices prevail, it is important that they are consistent with, and facilitate the eventual implementation of market-based solutions.

Interdependent ecosystems approach

Global policy around environmental sustainability focuses almost exclusively on climate change and CO2 emissions reduction. But since 2008, an approach which considers interdependent ecosystems has emerged and gradually gained influence.

This approach argues that targeting climate change and CO2 alone is insufficient. The planet is a system of inextricably inter-related environmental processes and each must be managed in balance with the others to sustain stability.

Research published by the Stockholm Resilience Centre in early 2009 consolidates this thinking and proposes a framework based on ‘biophysical environmental subsystems’. The Nine Planetary Boundaries collectively define a safe operating space for humanity where social and economic development does not create lasting and catastrophic environmental change.

According to the framework, planetary boundaries collectively determine ecological stability. So far, limits have been quantified for seven boundaries which, if surpassed, could result in more ecological volatility and potentially disastrous consequences. As Table 1 shows, three boundaries have already been exceeded. Based on current trends, the limits of others are fast approaching.

For the energy industry, CO2 management and reduction is the chief concern and the focus of much research and investment. But the interdependence of the other systems means that if one limit is reached, others come under intense pressure. The climate-change boundary relies on careful management of freshwater, land use, atmospheric aerosol concentration, nitrogen–phosphorus, ocean and stratospheric boundaries. Continuing to pursue an environmental policy centered on climate change will fail to preserve the planet’s environmental stability unless the other defined boundaries are addressed with equal vigour.


Information Geometry (Part 7)

2 March, 2011

Today, I want to describe how the Fisher information metric is related to relative entropy. I’ve explained both these concepts separately (click the links for details); now I want to put them together.

But first, let me explain what this whole series of blog posts is about. Information geometry, obviously! But what’s that?

Information geometry is the geometry of ‘statistical manifolds’. Let me explain that concept twice: first vaguely, and then precisely.

Vaguely speaking, a statistical manifold is a manifold whose points are hypotheses about some situation. For example, suppose you have a coin. You could have various hypotheses about what happens when you flip it. For example: you could hypothesize that the coin will land heads up with probability x, where x is any number between 0 and 1. This makes the interval [0,1] into a statistical manifold. Technically this is a manifold with boundary, but that’s okay.

Or, you could have various hypotheses about the IQ’s of American politicians. For example: you could hypothesize that they’re distributed according to a Gaussian probability distribution with mean x and standard deviation y. This makes the space of pairs (x,y) into a statistical manifold. Of course we require y \ge 0, which gives us a manifold with boundary. We might also want to assume x \ge 0, which would give us a manifold with corners, but that’s okay too. We’re going to be pretty relaxed about what counts as a ‘manifold’ here.

If we have a manifold whose points are hypotheses about some situation, we say the manifold ‘parametrizes’ these hypotheses. So, the concept of statistical manifold is fundamental to the subject known as parametric statistics.

Parametric statistics is a huge subject! You could say that information geometry is the application of geometry to this subject.

But now let me go ahead and make the idea of ‘statistical manifold’ more precise. There’s a classical and a quantum version of this idea. I’m working at the Centre for Quantum Technologies, so I’m being paid to be quantum—but today I’m in a classical mood, so I’ll only describe the classical version. Let’s say a classical statistical manifold is a smooth function p from a manifold M to the space of probability distributions on some measure space \Omega.

We should think of \Omega as a space of events. In our first example, it’s just \{H, T\}: we flip a coin and it lands either heads up or tails up. In our second it’s \mathbb{R}: we measure the IQ of an American politician and get some real number.

We should think of M as a space of hypotheses. For each point x \in M, we have a probability distribution p_x on \Omega. This is a hypothesis about the events in question: for example “when I flip the coin, there’s a 55% chance that it will land heads up”, or “when I measure the IQ of an American politician, the answer will be distributed according to a Gaussian with mean 0 and standard deviation 100.”

Now, suppose someone hands you a classical statistical manifold (M,p). Each point in M is a hypothesis. Apparently some hypotheses are more similar than others. It would be nice to make this precise. So, you might like to define a metric on M that says how ‘far apart’ two hypotheses are. People know lots of ways to do this; the challenge is to find ways that have clear meanings.

Last time I explained the concept of relative entropy. Suppose we have two probability distributions on \Omega, say p and q. Then the entropy of p relative to q is the amount of information you gain when you start with the hypothesis q but then discover that you should switch to the new improved hypothesis p. It equals:

\int_\Omega \; \frac{p}{q} \; \ln(\frac{p}{q}) \; q d \omega

You could try to use this to define a distance between points x and y in our statistical manifold, like this:

S(x,y) =  \int_\Omega \; \frac{p_x}{p_y} \; \ln(\frac{p_x}{p_y}) \; p_y d \omega

This is definitely an important function. Unfortunately, as I explained last time, it doesn’t obey the axioms that a distance function should! Worst of all, it doesn’t obey the triangle inequality.

Can we ‘fix’ it? Yes, we can! And when we do, we get the Fisher information metric, which is actually a Riemannian metric on M. Suppose we put local coordinates on some patch of M containing the point x. Then the Fisher information metric is given by:

g_{ij}(x) = \int_\Omega  \partial_i (\ln p_x) \; \partial_j (\ln p_x) \; p_x d \omega

You can think of my whole series of articles so far as an attempt to understand this funny-looking formula. I’ve shown how to get it from a few different starting-points, most recently back in Part 3. But now let’s get it starting from relative entropy!

Fix any point in our statistical manifold and choose local coordinates for which this point is the origin, 0. The amount of information we gain if we move to some other point x is the relative entropy S(x,0). But what’s this like when x is really close to 0? We can imagine doing a Taylor series expansion of S(x,0) to answer this question.

Surprisingly, to first order the answer is always zero! Mathematically:

\partial_i S(x,0)|_{x = 0} = 0

In plain English: if you change your mind slightly, you learn a negligible amount — not an amount proportional to how much you changed your mind.

This must have some profound significance. I wish I knew what. Could it mean that people are reluctant to change their minds except in big jumps?

Anyway, if you think about it, this fact makes it obvious that S(x,y) can’t obey the triangle inequality. S(x,y) could be pretty big, but if we draw a curve from x to y, and mark n closely spaced points x_i on this curve, then S(x_{i+1}, x_i) is zero to first order, so it must be of order 1/n^2, so if the triangle inequality were true we’d have

S(x,y) \le \sum_i S(x_{i+1},x_i) \le \mathrm{const} \, n \cdot \frac{1}{n^2}

for all n, which is a contradiction.

In plain English: if you change your mind in one big jump, the amount of information you gain is more than the sum of the amounts you’d gain if you change your mind in lots of little steps! This seems pretty darn strange, but the paper I mentioned in part 1 helps:

• Gavin E. Crooks, Measuring thermodynamic length.

You’ll see he takes a curve and chops it into lots of little pieces as I just did, and explains what’s going on.

Okay, so what about second order? What’s

\partial_i \partial_j S(x,0)|_{x = 0} ?

Well, this is the punchline of this blog post: it’s the Fisher information metric:

\partial_i \partial_j S(x,0)|_{x = 0} = g_{ij}

And since the Fisher information metric is a Riemannian metric, we can then apply the usual recipe and define distances in a way that obeys the triangle inequality. Crooks calls this distance thermodynamic length in the special case that he considers, and he explains its physical meaning.
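
If you like checking such claims numerically before believing them, here is a minimal sketch in Python for the coin example, where a hypothesis is just a heads-probability. For this one-parameter family the Fisher information works out to 1/(x(1-x)), and finite differences of the relative entropy should reproduce it; the function names and step size below are my own choices.

import math

def relative_entropy(x, y):
    # S(x,y) for two coins with heads-probabilities x and y: the integral over
    # \Omega = {H, T} becomes a two-term sum.
    return x * math.log(x / y) + (1 - x) * math.log((1 - x) / (1 - y))

def fisher(x):
    # Fisher information of the coin family: 1/(x(1-x)).
    return 1.0 / (x * (1 - x))

x0 = 0.3     # the hypothesis we expand around
h = 1e-4     # finite-difference step

# First derivative of S(x, x0) at x = x0: should be (essentially) zero.
first = (relative_entropy(x0 + h, x0) - relative_entropy(x0 - h, x0)) / (2 * h)

# Second derivative of S(x, x0) at x = x0: should match the Fisher information.
second = (relative_entropy(x0 + h, x0) - 2 * relative_entropy(x0, x0)
          + relative_entropy(x0 - h, x0)) / h**2

print(first)               # ~ 0
print(second, fisher(x0))  # both ~ 4.7619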

Now let me prove that

\partial_i S(x,0)|_{x = 0} = 0

and

\partial_i \partial_j S(x,0)|_{x = 0} = g_{ij}

This can be somewhat tedious if you do it by straightforwardly grinding it out—I know, I did it. So let me show you a better way, which requires more conceptual acrobatics but less brute force.

The trick is to work with the universal statistical manifold for the measure space \Omega. Namely, we take M to be the space of all probability distributions on \Omega! This is typically an infinite-dimensional manifold, but that’s okay: we’re being relaxed about what counts as a manifold here. In this case we don’t need to write p_x for the probability distribution corresponding to the point x \in M: a point of M just is a probability distribution on \Omega, so we’ll simply call it p.

If we can prove the formulas for this universal example, they’ll automatically follow for every other example, by abstract nonsense. Why? Because any statistical manifold with measure space \Omega is the same as a manifold with a smooth map to the universal statistical manifold! So, geometrical structures on the universal one ‘pull back’ to give structures on all the rest. The Fisher information metric and the function S can be defined as pullbacks in this way! So, to study them, we can just study the universal example.

(If you’re familiar with ‘classifying spaces for bundles’ or other sorts of ‘classifying spaces’, all this should seem awfully familiar. It’s a standard math trick.)

So, let’s prove that

\partial_i S(x,0)|_{x = 0} = 0

by proving it in the universal example. Given any probability distribution q, and taking a nearby probability distribution p, we can write

\frac{p}{q} = 1 + f

where f is some small function. We only need to show that S(p,q) is zero to first order in f. And this is pretty easy. By definition:

S(p,q) =  \int_\Omega \; \frac{p}{q} \, \ln(\frac{p}{q}) \; q d \omega

or in other words,

S(p,q) =  \int_\Omega \; (1 + f) \, \ln(1 + f) \; q d \omega

We can calculate this to first order in f and show we get zero. But let’s actually work it out to second order, since we’ll need that later:

\ln (1 + f) = f - \frac{1}{2} f^2 + \cdots

so

(1 + f) \, \ln (1+ f) = f + \frac{1}{2} f^2 + \cdots

so

\begin{aligned} S(p,q) &= \int_\Omega \; (1 + f) \; \ln(1 + f) \; q d \omega \\ &= \int_\Omega f \, q d \omega + \frac{1}{2} \int_\Omega f^2\, q d \omega + \cdots \end{aligned}

Why does this vanish to first order in f? It’s because p and q are both probability distributions and p/q = 1 + f, so

\int_\Omega (1 + f) \, q d\omega = \int_\Omega p d\omega = 1

but also

\int_\Omega q d\omega = 1

so subtracting we see

\int_\Omega f \, q d\omega = 0

So, S(p,q) vanishes to first order in f. Voilà!

Next let’s prove the more interesting formula:

\partial_i \partial_j S(x,0)|_{x = 0} = g_{ij}

which relates relative entropy to the Fisher information metric. Since both sides are symmetric matrices, it suffices to show their diagonal entries agree in any coordinate system:

\partial^2_i S(x,0)|_{x = 0} = g_{ii}

Devoted followers of this series of posts will note that I keep using this trick, which takes advantage of the polarization identity.

To prove

\partial^2_i S(x,0)|_{x = 0} = g_{ii}

it’s enough to consider the universal example. We take the origin to be some probability distribution q and take x to be a nearby probability distribution p which is pushed a tiny bit in the ith coordinate direction. As before we write p/q = 1 + f. We look at the second-order term in our formula for S(p,q):

\frac{1}{2} \int_\Omega f^2\, q d \omega

Using the usual second-order Taylor’s formula, which has a \frac{1}{2} built into it, we can say

\partial^2_i S(x,0)|_{x = 0} = \int_\Omega f^2\, q d \omega

On the other hand, our formula for the Fisher information metric gives

g_{ii} = \left. \int_\Omega  \partial_i \ln p \; \partial_i \ln p \; q d \omega \right|_{p=q}

The right hand sides of the last two formulas look awfully similar! And indeed they agree, because we can show that

\left. \partial_i \ln p \right|_{p = q} = f

How? Well, we assumed that p is what we get by taking q and pushing it a little bit in the ith coordinate direction; we have also written that little change as

p/q = 1 + f

for some small function f. So,

\partial_i (p/q) = f

and thus:

\partial_i p = f q

and thus:

\partial_i \ln p = \frac{\partial_i p}{p} = \frac{fq}{p}

so

\left. \partial_i \ln p \right|_{p=q} = f

as desired.

This argument may seem a little hand-wavy and nonrigorous, with words like ‘a little bit’. If you’re used to taking arguments involving infinitesimal changes and translating them into calculus (or differential geometry), it should make sense. If it doesn’t, I apologize. It’s easy to make it more rigorous, but only at the cost of more annoying notation, which doesn’t seem good in a blog post.

Boring technicalities

If you’re actually the kind of person who reads a section called ‘boring technicalities’, I’ll admit to you that my calculations don’t make sense if the integrals diverge, or we’re dividing by zero in the ratio p/q. To avoid these problems, here’s what we should do. Fix a \sigma-finite measure space (\Omega, d\omega). Then, define the universal statistical manifold to be the space P(\Omega,d \omega) consisting of all probability measures that are equivalent to d\omega, in the usual sense of measure theory. By Radon-Nikodym, we can write any such measure as q d \omega where q \in L^1(\Omega, d\omega). Moreover, given two of these guys, say p d \omega and q d\omega, they are absolutely continuous with respect to each other, so we can write

p d \omega = \frac{p}{q} \; q d \omega

where the ratio p/q is well-defined almost everywhere and lies in L^1(\Omega, q d\omega). This is enough to guarantee that we’re never dividing by zero, and I think it’s enough to make sure all my integrals converge.

We do still need to make P(\Omega,d \omega) into some sort of infinite-dimensional manifold, to justify all the derivatives. There are various ways to approach this issue, all of which start from the fact that L^1(\Omega, d\omega) is a Banach space, which is about the nicest sort of infinite-dimensional manifold one could imagine. Sitting in L^1(\Omega, d\omega) is the hyperplane consisting of functions q with

\int_\Omega q d\omega = 1

and this is a Banach manifold. To get P(\Omega,d \omega) we need to take a subspace of that hyperplane. If this subspace were open then P(\Omega,d \omega) would be a Banach manifold in its own right. I haven’t checked this yet, for various reasons.

For one thing, there’s a nice theory of ‘diffeological spaces’, which generalize manifolds. Every Banach manifold is a diffeological space, and every subset of a diffeological space is again a diffeological space. For many purposes we don’t need our ‘statistical manifolds’ to be manifolds: diffeological spaces will do just fine. This is one reason why I’m being pretty relaxed here about what counts as a ‘manifold’.

For another, I know that people have worked out a lot of this stuff, so I can just look things up when I need to. And so can you! This book is a good place to start:

• Paolo Gibilisco, Eva Riccomagno, Maria Piera Rogantin and Henry P. Wynn, Algebraic and Geometric Methods in Statistics, Cambridge U. Press, Cambridge, 2009.

I find the chapters by Raymond Streater especially congenial. For the technical issue I’m talking about now it’s worth reading section 14.2, “Manifolds modelled by Orlicz spaces”, which tackles the problem of constructing a universal statistical manifold in a more sophisticated way than I’ve just done. And in chapter 15, “The Banach manifold of quantum states”, he tackles the quantum version!


This Week’s Finds (Week 310)

28 February, 2011

I first encountered Gregory Benford through his science fiction novels: my favorite is probably In the Ocean of Night.

Later I learned that he’s an astrophysicist at U.C. Irvine, not too far from Riverside where I teach. But I only actually met him through my wife. She sometimes teaches courses on science fiction, and like Benford, she has some involvement with the Eaton Collection at U.C. Riverside—the largest publicly accessible SF library in the world. So, I was bound to eventually bump into him.

When I did, I learned about his work on electromagnetic filaments near the center of our galaxy—see “week252” for more. I also learned he was seriously interested in climate change, and that he was going to the Asilomar International Conference on Climate Intervention Technologies—a controversial get-together designed to hammer out some policies for research on geoengineering.

Benford is a friendly but no-nonsense guy. Recently he sent me an email mentioning my blog, and said: "Your discussions on what to do are good, though general, while what we need is specifics NOW." Since I’d been meaning to interview him for a while, this gave me the perfect opening.

JB: You’ve been thinking about the future for a long time, since that’s part of your job as a science fiction writer.  For example, you’ve written a whole series about the expansion of human life through the galaxy.  From this grand perspective, global warming might seem like an annoying little road-bump before the ride even gets started.  How did you get interested in global warming? 

GB: I liked writing about the far horizons of our human prospect; it’s fun. But to get even above the envelope of our atmosphere in a sustained way, we have to stabilize the planet. Before we take on the galaxy, let’s do a smaller problem.

JB: Good point. We can’t all ship on out of here, and the way it’s going now, maybe none of us will, unless we get our act together.

Can you remember something that made you think "Wow, global warming is a really serious problem"?  As you know, not everyone is convinced yet.

GB: I looked at the migration of animals and then the steadily northward march of trees. They don’t read newspapers—the trees become newspapers—so their opinion matters more. Plus the retreat of the Arctic Sea ice in summer, the region of the world most endangered by the changes coming. I first focused on carbon capture using the CROPS method. I’m the guy who first proposed screening the Arctic with aerosols to cool it in summer.

JB: Let’s talk about each in turn. "CROPS" stands for Crop Residue Oceanic Permanent Sequestration. The idea sounds pretty simple: dump a lot of crop residues—stalks, leaves and stuff—on the deep ocean floor. That way, we’d be letting plants suck CO2 out of the atmosphere for us.

GB: Agriculture is the world’s biggest industry; we should take advantage of it. That’s what gave Bob Metzger and me the idea: collect farm waste and sink it to the bottom of the ocean, whence it shall not return for 1000 years. Cheap, easy, doable right now.

JB: But we have to think about what’ll happen if we dump all that stuff into the ocean, right? After all, the USA alone creates half a gigatonne of crop residues each year, and world-wide it’s ten times that. I’m getting these numbers from your papers:

• Robert A. Metzger and Gregory Benford, Sequestering of atmospheric carbon through permanent disposal of crop residue, Climatic Change 49 (2001), 11-19.

• Stuart E. Strand and Gregory Benford, Ocean sequestration of crop residue carbon: recycling fossil fuel carbon back to deep sediments, Environmental Science and Technology 43 (2009), 1000-1007.

Since we’re burning over 7 gigatonnes of carbon each year, burying 5 gigatonnes of crop waste is just enough to make a serious dent in our carbon footprint. But what’ll that much junk do at the bottom of the ocean?

GB: We’re testing the chemistry of how farm waste interacts with deep ocean sites offshore Monterey Bay right now. Here’s a picture of a bale 3.2 km down:

JB: I’m sure our audience will have more questions about this… but the answers to some are in your papers, and I want to spend a bit more time on your proposal to screen the Arctic. There’s a good summary here:

• Gregory Benford, Climate controls, Reason Magazine, November 1997.

But in brief, it sounds like you want to test the results of spraying a lot of micron-sized dust into the atmosphere above the Arctic Sea during the summer. You suggest diatomaceous earth as an option, because it’s chemically inert: just silica. How would the test work, exactly, and what would you hope to learn?

GB: The US has inflight refueling aircraft such as the KC-10 Extender that, with minor changes, could spread aerosols at relevant altitudes, and pilots who know how to fly big sausages filled with fluids.



Rather than diatomaceous earth, I now think ordinary SO2 or H2S will work, if there’s enough water at the relevant altitudes. Turns out the pollutant issue is minor, since it would be only a percent or so of the SO2 already in the Arctic troposphere. The point is to spread aerosols to diminish sunlight and look for signals of less sunlight on the ground, changes in sea ice loss rates in summer, etc. It’s hard to do a weak experiment and be sure you see a signal. Doing regional experiments helps, so you can see a signal before the aerosols spread much. It’s a first step, an in-principle experiment.

Simulations show it can stop the sea ice retreat. Many fear if we lose the sea ice in summer ocean currents may alter; nobody really knows. We do know that the tundra is softening as it thaws, making roads impassible and shifting many wildlife patterns, with unforeseen long term effects. Cooling the Arctic back to, say, the 1950 summer temperature range would cost maybe $300 million/year, i.e., nothing. Simulations show to do this globally, offsetting say CO2 at 500 ppm, might cost a few billion dollars per year. That doesn’t help ocean acidification, but it’s a start on the temperature problem.

JB: There’s an interesting blog on Arctic political, military and business developments:

• Anatoly Karlin, Arctic Progress.

Here’s the overview:

Today, global warming is kick-starting Arctic history. The accelerating melting of Arctic sea ice promises to open up circumpolar shipping routes, halving the time needed for container ships and tankers to travel between Europe and East Asia. As the ice and permafrost retreat, the physical infrastructure of industrial civilization will overspread the region […]. The four major populated regions encircling the Arctic Ocean—Alaska, Russia, Canada, Scandinavia (ARCS)—are all set for massive economic expansion in the decades ahead. But the flowering of industrial civilization’s fruit in the thawing Far North carries within it the seeds of its perils. The opening of the Arctic is making border disputes more serious and spurring Russian and Canadian military buildups in the region. The warming of the Arctic could also accelerate global warming—and not just through the increased economic activity and hydrocarbons production. One disturbing possibility is that the melting of the Siberian permafrost will release vast amounts of methane, a greenhouse gas that is far more potent than CO2, into the atmosphere, and tip the world into runaway climate change.

But anyway, unlike many people, I’m not mentioning risks associated with geoengineering in order to instantly foreclose discussion of it, because I know there are also risks associated with not doing it. If we rule out doing anything really new because it’s too expensive or too risky, we might wind up locking ourselves in a "business as usual" scenario. And that could be even more risky—and perhaps ultimately more expensive as well.

GB: Yes, no end of problems. Most impressive is how they look like a descending spiral, self-reinforcing.

Certainly countries now scramble for Arctic resources, trade routes opened by thawing—all likely to become hotly contested strategic assets. So too melting Himalayan glaciers can perhaps trigger "water wars" in Asia—especially India and China, two vast lands of very different cultures. Then, coming on later, come rising sea levels. Florida starts to go away. The list is endless and therefore uninteresting. We all saturate.

So droughts, floods, desertification, hammering weather events—they draw ever less attention as they grow more common. Maybe Darfur is the first "climate war." It’s plausible.

The Arctic is the canary in the climate coalmine. Cutting CO2 emissions will take far too long to significantly affect the sea ice. Permafrost melts there, giving additional positive feedback. Methane release from the not-so-perma-frost is the most dangerous amplifying feedback in the entire carbon cycle. As John Nissen has repeatedly called attention to, the permafrost permamelt holds a staggering 1.5 trillion tons of frozen carbon, about twice as much carbon as is in the atmosphere. Much would emerge as methane. Methane is 25 times as potent a heat-trapping gas as CO2 over a century, and 72 times as potent over the first 20 years! The carbon is locked in a freezer. Yet that’s the part of the planet warming up the fastest. Really bad news:

• Kevin Schaefer, Tingjun Zhang, Lori Bruhwiler and Andrew P. Barrett, Amount and timing of permafrost carbon release in response to climate warming, Tellus, 15 February 2011.

Abstract: The thaw and release of carbon currently frozen in permafrost will increase atmospheric CO2 concentrations and amplify surface warming to initiate a positive permafrost carbon feedback (PCF) on climate. We use surface weather from three global climate models based on the moderate warming, A1B Intergovernmental Panel on Climate Change emissions scenario and the SiBCASA land surface model to estimate the strength and timing of the PCF and associated uncertainty. By 2200, we predict a 29-59% decrease in permafrost area and a 53-97 cm increase in active layer thickness. By 2200, the PCF strength in terms of cumulative permafrost carbon flux to the atmosphere is 190±64 gigatonnes of carbon. This estimate may be low because it does not account for amplified surface warming due to the PCF itself and excludes some discontinuous permafrost regions where SiBCASA did not simulate permafrost. We predict that the PCF will change the arctic from a carbon sink to a source after the mid-2020s and is strong enough to cancel 42-88% of the total global land sink. The thaw and decay of permafrost carbon is irreversible and accounting for the PCF will require larger reductions in fossil fuel emissions to reach a target atmospheric CO2 concentration.

Particularly interesting is the slowing of thermohaline circulation.  In John Nissen’s "two scenarios" work there’s an uncomfortably cool future—if the Gulf Stream were to be diverted by meltwater flowing into NW Atlantic. There’s also an unbearably hot future, if the methane escapes from the not-so-permafrost and causes global warming to spiral out of control. So we have a terrifying menu.

JB: I recently interviewed Nathan Urban here. He explained a paper where he estimated the chance that the Atlantic current you’re talking about could collapse. (Technically, it’s the Atlantic meridional overturning circulation, not quite the same as the Gulf Stream.) They got a 10% chance of it happening in two centuries, assuming a business as usual scenario. But there are a lot of uncertainties in the modeling here.

Back to geoengineering. I want to talk about some ways it could go wrong, how soon we’d find out if it did, and what we could do then.

For example, you say we’ll put sulfur dioxide in the atmosphere below 15 kilometers, and most of the ozone is above 20 kilometers. That’s good, but then I wonder how much sulfur dioxide will diffuse upwards. As the name suggests, the stratosphere is "stratified" —there’s not much turbulence. That’s reassuring. But I guess one reason to do experiments is to see exactly what really happens.

GB: It’s really the only way to go forward. I fear we are now in the Decade of Dithering that will end with the deadly 2020s. Only then will experiments get done and issues engaged. All else, as tempting as ideas and simulations are, spells delay if it does not couple with real field experiments—from nozzle sizes on up to albedo measures—which finally decide.

JB: Okay. But what are some other things that could go wrong with this sulfur dioxide scheme? I know you’re not eager to focus on the dangers, but you must be able to imagine some plausible ones: you’re an SF writer, after all. If you say you can’t think of any, I won’t believe you! And part of good design is looking for possible failure modes.

GB: Plenty can go wrong with so vast an idea. But we can learn from volcanoes, that give us useful experiments, though sloppy and noisy ones, about putting aerosols into the air. Monitoring those can teach us a lot with little expense.

We can fail to get the aerosols to avoid clumping, so they fall out too fast. Or we can somehow trigger a big shift in rainfall patterns—a special danger in a system already loaded with surplus energy, as is already displaying anomalies like the bitter winters in Europe, floods in Pakistan, drought in Darfur. Indeed, some of Alan Robock’s simulations of Arctic aerosol use show a several percent decline in monsoon rain—though that may be a plus, since flooding is the #1 cause of death and destruction during the Indian monsoon.

Mostly, it might just plain fail to work. Guessing outcomes is useless, though.  Here’s where experiment rules, not simulations. This is engineering, which learns from mistakes. Consider the early days of aviation. Having more time to develop and test a system gives more time to learn how to avoid unwanted impacts. Of course, having a system ready also increases the probability of premature deployment; life is about choices and dangers.

More important right now than developing capability, is understanding the consequences of deployment of that capability by doing field experiments. One thing we know: both science and engineering advance most quickly by using the dance of theory with experiment. Neglecting this, preferring only experiment, is a fundamental mistake.

JB: Switching gears slightly: in March last year you went to the Asilomar Conference on climate intervention technologies. I’ve read the report:

• Asilomar Scientific Organizing Committee, The Asilomar Conference Recommendations on Principles for Research into Climate Engineering Techniques, Climate Institute, Washington DC, 2010.

It seems unobjectionable and a bit bland, no doubt deliberately so, with recommendations like this:

"Public participation and consultation in research planning and oversight, assessments, and development of decision-making mechanisms and processes must be provided."

What were some interesting things that you learned there? And what’ll happen next?

GB: It was the Woodstock of the policy wonks. I found it depressing. Not much actual science got discussed, and most just fearlessly called for more research, forming of panels and committees, etc. This is how bureaucracy digests a problem, turning it quite often into fertilizer.

I’m a physicist who does both theory and experiment. I want to see work that combines those to give us real information and paths to follow. I don’t see that anywhere now. Congress might hand out money for it but after the GAO report on geoengineering last September there seems little movement.

I did see some people pushing their carbon capture companies, to widespread disbelief. The simple things we could do right now like our CROPS carbon capture proposal are neglected, while entrepreneur companies hope for a government scheme to pay for sucking CO2 from the air. That’ll be the day!—far into the crisis, I think, maybe several decades from now. I also saw fine ideas pushed aside in favor of policy wonk initiatives. It was a classic triumph of process over results. As in many areas dominated by social scientists, people seemed to be saying, “Nobody can blame us if we go through the motions.”

That Decade of Dithering is upon us now. The great danger is that tipping points may not be obvious, even as we cross them. They may present as small events that nonetheless take us over an horizon from which we can never return.

For example, the loss of Greenland ice. Once the ice sheet melts down to an altitude below that needed to maintain it, we’ve lost it. The melt lubricates the glacier base and starts a slide we cannot stop. There are proposals of how to block that—essentially, draw the water out from the base as fast as it appears—but nobody’s funding such studies.

A reasonable, ongoing climate control program might cost $100 million annually. That includes small field experiments, trials with spraying aerosols, etc. We now spend about $5 billion per year globally studying the problem, so climate control studies would be 1/50 of that.

Even now, we may already be too late for a tipping point—we still barely glimpse the horrors we could be visiting on our children and their grandchildren’s grandchildren.

JB: I think a lot of young people are eager to do something. What would be your advice, especially to future scientists and engineers? What should they do? The problems seem so huge, and most so-called "adults" are shirking their responsibilities—perhaps hoping they’ll be dead before things get too bad.

GB: One reason people are paralyzed is simple: major interests would get hurt—coal, oil, etc. The fossil fuel industry is the second largest in the world; #1 is agriculture. We have ~50 trillion dollars of infrastructure invested in it. That and inertia—we’ve made the crucial fuel of our world a Bad Thing, and prohibition never works with free people. Look at the War on Drugs, now nearing its 40th anniversary.

That’s why I think adaptation—dikes, water conservation, reflecting roofs and blacktop to cool cities and lower their heating costs, etc.— is a smart way to prepare. We should also fund research in mineral weathering as a way to lock up CO2, which not only consumes CO2 but it can also generate ocean alkalinity. The acidification of the oceans is undeniable, easily measured, and accelerating. Plus geoengineering, which is probably the only fairly cheap, quick way to damp the coming chaos for a while. A stopgap, but we’re going to need plenty of those.

JB: And finally, what about you? What are you doing these days? Science fiction? Science? A bit of both?

GB: Both, plus. Last year I published a look at how we viewed the future in the 20th Century, The Wonderful Future We Never Had, and have a novel in progress now cowritten with Larry Niven—about a Really Big Object. Plus some short stories and journalism.

My identical twin brother Jim & I published several papers looking at SETI from the perspective of those who would pay the bills for a SETI beacon, and reached conclusions opposite from what the SETI searches of the last half century have sought. Instead of steady, narrowband signals near 1 GHz, it is orders of magnitude cheaper to radiate pulsed, broadband beacon signals nearer 10 GHz. This suggests a new way to look for pulsed signals, which some are trying to find. We may have been looking for the wrong thing all along. The papers are on the arXiv:

• James Benford, Gregory Benford and Dominic Benford, Messaging with cost optimized interstellar beacons.

• Gregory Benford, James Benford and Dominic Benford, Searching for cost optimized interstellar beacons.

For math types, David Wolpert and I have shown that Newcomb’s paradox arises from confusions in the statement, so is not a paradox:

• David H. Wolpert and Gregory Benford, What does Newcomb’s paradox teach us?

JB: The next guest on this show, Eliezer Yudkowsky, has also written about Newcomb’s paradox. I should probably say what it is, just for folks who haven’t heard yet. I’ll quote Yudkowsky’s formulation, since it’s nice and snappy:

A superintelligence from another galaxy, whom we shall call Omega, comes to Earth and sets about playing a strange little game. In this game, Omega selects a human being, sets down two boxes in front of them, and flies away.

Box A is transparent and contains a thousand dollars.
Box B is opaque, and contains either a million dollars, or nothing.

You can take both boxes, or take only box B.

And the twist is that Omega has put a million dollars in box B if and only if Omega has predicted that you will take only box B.

Omega has been correct on each of 100 observed occasions so far—everyone who took both boxes has found box B empty and received only a thousand dollars; everyone who took only box B has found B containing a million dollars. (We assume that box A vanishes in a puff of smoke if you take only box B; no one else can take box A afterward.)

Before you make your choice, Omega has flown off and moved on to its next game. Box B is already empty or already full.

Omega drops two boxes on the ground in front of you and flies off.

Do you take both boxes, or only box B?

If you say you’d take both boxes, I’ll argue that’s stupid: everyone who did that so far got just a thousand dollars, while the folks who took only box B got a million!

If you say you’d take only box B, I’ll argue that’s stupid: there has got to be more money in both boxes than in just one of them!

So, this puzzle has a kind of demonic attraction. Lots of people have written about it, though personally I’m waiting until a superintelligence from another galaxy actually shows up and performs this stunt.

Hmm—I see your paper uses Bayesian networks! I’ve been starting to think about those lately.

But I know that’s not all you’ve been doing.

GB: I also started several biotech companies 5 years ago, spurred in part by the agonizing experience of watching my wife die of cancer for decades, ending in 2002. They’re genomics companies devoted to extending human longevity by upregulating genes we know confer some defenses against cardio, neurological and other diseases. Our first product just came out, StemCell100, and did well in animal and human trials.

So I’m staying busy. The world gets more interesting all the time. Compared with growing up in the farm country of Alabama, this is a fine way to live.

JB: It’s been great to hear what you’re up to. Best of luck on all these projects, and thanks for answering my questions!


Few doubt that our climate stands in a class by itself in terms of complexity. Though much is made of how wondrous our minds are, perhaps the most complex entity known is our biosphere, in which we are mere mayflies. Absent a remotely useful theory of complexity in systems, we must proceed cautiously. – Gregory Benford


This Week’s Finds (Week 309)

17 February, 2011

In the next issues of This Week’s Finds, I’ll return to interviewing people who are trying to help humanity deal with some of the risks we face.

First I’ll talk to the science fiction author and astrophysicist Gregory Benford. I’ll ask him about his ideas on “geoengineering” — proposed ways of deliberately manipulating the Earth’s climate to counteract the effects of global warming.

After that, I’ll spend a few weeks asking Eliezer Yudkowsky about his ideas on rationality and “friendly artificial intelligence”. Yudkowsky believes that the possibility of dramatic increases in intelligence, perhaps leading to a technological singularity, should command more of our attention than it does.

Needless to say, all these ideas are controversial. They’re exciting to some people — and infuriating, terrifying or laughable to others. But I want to study lots of scenarios and lots of options in a calm, level-headed way without rushing to judgement. I hope you enjoy it.

This week, I want to say a bit more about the Hopf bifurcation!

Last week I talked about applications of this mathematical concept to climate cycles like the El Niño – Southern Oscillation. But over on the Azimuth Project, Graham Jones has explained an application of the same math to a very different subject:

• Quantitative ecology, Azimuth Project.

That’s one thing that’s cool about math: the same patterns show up in different places. So, I’d like to take advantage of his hard work and show you how a Hopf bifurcation shows up in a simple model of predator-prey interactions.

Suppose we have some rabbits that reproduce endlessly, with their numbers growing at a rate proportional to their population. Let x(t) be the number of animals at time t. Then we have:

\frac{d x}{d t} = r x

where r is the growth rate. This gives exponential growth: it has solutions like

x(t) = x_0 e^{r t}

To get a slightly more realistic model, we can add ‘limits to growth’. Instead of a constant growth rate, let’s try a growth rate that decreases as the population increases. Let’s say it decreases in a linear way, and drops to zero when the population hits some value K. Then we have

\frac{d x}{d t} = r (1-x/K) x

This is called the “logistic equation”. K is known as the “carrying capacity”. The idea is that the environment has enough resources to support this population. If the population is less, it’ll grow; if it’s more, it’ll shrink.

If you know some calculus you can solve the logistic equation by hand by separating the variables and integrating both sides; it’s a textbook exercise. The solutions are called “logistic functions”, and they look sort of like this:



The above graph shows the simplest solution:

x = \frac{e^t}{e^t + 1}

of the simplest logistic equation in the world:

\frac{ d x}{d t} = (1 - x)x

Here the carrying capacity is 1. Populations less than 1 sound a bit silly, so think of it as 1 million rabbits. You can see how the solution starts out growing almost exponentially and then levels off. There’s a very different-looking solution where the population starts off above the carrying capacity and decreases. There’s also a silly solution involving negative populations. But whenever the population starts out positive, it approaches the carrying capacity.
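
For the record, here's the separation-of-variables computation behind that formula; the general logistic equation with r and K works the same way. Separate the variables and use partial fractions:

\frac{d x}{(1-x)x} = d t

\left( \frac{1}{x} + \frac{1}{1-x} \right) d x = d t

\ln x - \ln (1-x) = t + c

\frac{x}{1-x} = C e^t

x = \frac{C e^t}{1 + C e^t}

and choosing the constant C = 1 gives the solution graphed above.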

The solution where the population just stays at the carrying capacity:

x = 1

is called a “stable equilibrium”, because it’s constant in time and nearby solutions approach it.

But now let’s introduce another species: some wolves, which eat the rabbits! So, let x be the number of rabbits, and y the number of wolves. Before the rabbits meet the wolves, let’s assume they obey the logistic equation:

\frac{ d x}{d t} = x(1-x/K)

And before the wolves meet the rabbits, let’s assume they obey this equation:

\frac{ d y}{d t} = -y

so that their numbers would decay exponentially to zero if there were nothing to eat.

So far, not very interesting. But now let’s include a term that describes how predators eat prey. Let’s say that on top of the above effect, the predators grow in numbers, and the prey decrease, at a rate proportional to:

x y/(1+x).

For small numbers of prey and predators, this means that predation increases nearly linearly with both x and y. But if you have one wolf surrounded by a million rabbits in a small area, the rate at which it eats rabbits won’t double if you double the number of rabbits! So, this formula includes a limit on predation as the number of prey increases.
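To see this in formulas: when x is much less than 1,

\frac{x y}{1+x} \approx x y

so predation grows in proportion to both populations, while when x is much greater than 1,

\frac{x y}{1+x} \approx y

so each wolf catches prey at a roughly constant maximum rate, no matter how many rabbits there are.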

Okay, so let’s try these equations:

\frac{ d x}{d t} = x(1-x/K) - 4x y/(x+1)

and

\frac{ d y}{d t} = -y + 2x y/(x+1)

The constants 4 and 2 here have been chosen for simplicity rather than realism.

Before we plunge ahead and get a computer to solve these equations, let’s see what we can do by hand. Setting d x/d t = 0 gives the interesting parabola

y = \frac{1}{4}(1-x/K)(x+1)

together with the boring line x = 0. (If you start with no prey, that’s how it will stay. It takes bunnies to make bunnies.)

Setting d y/d t = 0 gives the interesting line

x=1

together with the boring line y = 0.

The interesting parabola and the interesting line separate the x y plane into four parts, so these curves are called separatrices. They meet at the point where x = 1 and

y = \frac{1}{4}(1 - 1/K)(1 + 1) = \frac{1}{2} (1 - 1/K)

which of course is an equilibrium, since d x / d t = d y / d t = 0 there. But when K < 1 this equilibrium occurs at a negative value of y, and negative populations make no sense.

So, if K < 1 there is no equilibrium with both populations positive, and with a bit more work one can see the problem: the wolves die out. For larger values of K there is such an equilibrium. But the nature of this equilibrium depends on K: that’s the interesting part.

We could figure this out analytically, but let’s look at two of Graham’s plots. Here’s a solution when K = 2.5:

The gray curves are the separatrices. The red curve shows a solution of the equations, with the numbers showing the passage of time. So, you can see that the solution spirals in towards the equilibrium. That’s what you expect of a stable equilibrium.

Here’s a picture when K = 3.5:

The red and blue curves are two solutions, again numbered to show how time passes. The red curve spirals in towards the dotted gray curve. The blue one spirals out towards it. The gray curve is also a solution. It’s called a “stable limit cycle” because it’s periodic, and nearby solutions move closer and closer to it.

With a bit more work, we could show analytically that whenever 1 < K < 3 there is a stable equilibrium. As we increase K past 3, this stable equilibrium suddenly becomes a tiny stable limit cycle. This is a Hopf bifurcation!
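If you’d like to see roughly how that analytic check goes, here is a sketch. (It’s my own quick computation, so please treat it as such.) Linearize the equations at the equilibrium x = 1, y = (1/2)(1 - 1/K): the matrix of partial derivatives of the right-hand sides works out to be

\begin{pmatrix} \frac{K-3}{2K} & -2 \\ \frac{K-1}{4K} & 0 \end{pmatrix}

Its determinant, (K-1)/(2K), is positive whenever K > 1, while its trace, (K-3)/(2K), is negative for K < 3 and positive for K > 3. A negative trace and positive determinant give a stable equilibrium, so the equilibrium is stable for 1 < K < 3. Near K = 3 the eigenvalues form a complex conjugate pair, and at K = 3 this pair crosses the imaginary axis. That crossing is precisely the signature of a Hopf bifurcation.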

Now, what if we add noise? We saw the answer last week: where before we had a stable equilibrium, we can now get irregular cycles, because the noise keeps pushing the solution away from the equilibrium!

Here’s how it looks for K=2.5 with white noise added:

The following graph shows a longer run in the noisy K=2.5 case, with rabbits (x) in black and wolves (y) in gray:



There is irregular periodicity, and as you’d expect, the predators tend to lag behind the prey. A burst in the rabbit population causes a rise in the wolf population; a lot of wolves eat a lot of rabbits; a crash in rabbits causes a crash in wolves.
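If you’d like to generate runs like this yourself, here’s a minimal Python sketch using a crude Euler-Maruyama step. It’s my own illustration, not Graham’s R code (that’s available on the Azimuth Project, as mentioned below), and the noise strength, time step and initial populations are just arbitrary choices:

import numpy as np
import matplotlib.pyplot as plt

K = 2.5        # carrying capacity; try 3.5 to get the limit cycle instead
sigma = 0.05   # noise strength; set to 0 for the deterministic equations
dt = 0.01      # time step (an arbitrary but smallish choice)
steps = 20000  # number of steps, i.e. 200 time units

rng = np.random.default_rng(0)
x, y = 1.0, 0.2                        # arbitrary initial rabbits and wolves
xs, ys = np.empty(steps), np.empty(steps)

for i in range(steps):
    # drift terms: logistic growth minus predation; predator gain minus death
    dx = x * (1 - x / K) - 4 * x * y / (x + 1)
    dy = -y + 2 * x * y / (x + 1)
    # Euler-Maruyama step: drift times dt, plus white noise scaled by sqrt(dt);
    # clipping at zero (a crude choice) keeps populations nonnegative
    x = max(x + dx * dt + sigma * np.sqrt(dt) * rng.standard_normal(), 0.0)
    y = max(y + dy * dt + sigma * np.sqrt(dt) * rng.standard_normal(), 0.0)
    xs[i], ys[i] = x, y

t = np.arange(steps) * dt
plt.plot(t, xs, color='black', label='rabbits (x)')
plt.plot(t, ys, color='gray', label='wolves (y)')
plt.xlabel('time')
plt.legend()
plt.show()

Setting sigma to zero should recover the deterministic behavior, and raising K above 3 should show a noisy limit cycle instead of noisy oscillations around a stable equilibrium.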

This sort of phenomenon is actually seen in nature sometimes. The most famous case involves the snowshoe hare and the lynx in Canada. It was first noted by MacLulich:

• D. A. MacLulich, Fluctuations in the Numbers of the Varying Hare (Lepus americanus), University of Toronto Studies Biological Series 43, University of Toronto Press, Toronto, 1937.

The snowshoe hare is also known as the “varying hare”, because its coat varies in color quite dramatically. In the summer it looks like this:



In the winter it looks like this:



The Canada lynx is an impressive creature:



But don’t be too scared: it only weighs 8-11 kilograms, nothing like a tiger or lion.

Down in the United States, the same species of lynx went extinct in Colorado around 1973, but now it’s back!

• Colorado Division of Wildlife, Success of the Lynx Reintroduction Program, 27 September, 2010.

In Canada, at least, the lynx rely on the snowshoe hare for 60% to 97% of their diet. I suppose this is one reason the hare has evolved such magnificent protective coloration. This is also why the hare and lynx populations are tightly coupled. They rise and crash in irregular cycles that look a bit like what we saw in our simplified model:



This cycle looks a bit more strongly periodic than Graham’s graph, so to fit this data, we might want to choose parameters that give a limit cycle rather than a stable equilibrium.

But I should warn you, in case it’s not obvious: everything about population biology is infinitely more complicated than the models I’ve shown you so far! Some obvious complications: snowshoe hare breed in the spring, their diet varies dramatically over the course of the year, and the lynx also eat rodents and birds, carrion when it’s available, and sometimes even deer. Some less obvious ones: the hare will eat dead mice and even dead hare when they’re available, and the lynx can control the size of their litter depending on the abundance of food. And I’m sure all these facts are just the tip of the iceberg. So, it’s best to think of models here as crude caricatures designed to illustrate a few features of a very complex system.

I hope someday to say a bit more and go a bit deeper. Do any of you know good books or papers to read, or fascinating tidbits of information? Graham Jones recommends this book for some mathematical aspects of ecology:

• Michael R. Rose, Quantitative Ecological Theory, Johns Hopkins University Press, Maryland, 1987.

Alas, I haven’t read it yet.

Also: you can get Graham’s R code for predator-prey simulations at the Azimuth Project.


Under carefully controlled experimental circumstances, the organism will behave as it damned well pleases. – the Harvard Law of Animal Behavior


Child Earth

11 February, 2011


Mary Catherine Bateson is a cultural anthropologist, the daughter of Margaret Mead and Gregory Bateson. Here’s a thought-provoking snippet from Stewart Brand’s summary of her talk at the Long Now Foundation:

The birth of a first child is the most intense disruption that most adults experience. Suddenly the new parents have no sleep, no social life, no sex, and they have to keep up with a child that changes from week to week. “Two ignorant adults learn from the newborn how to be decent parents.” Everything now has to be planned ahead, and the realization sinks in that it will go on that way for twenty years.

[…]

Herself reflecting on parenthood, Bateson proposed that the metaphor of “mother Earth” is no longer accurate or helpful. Human impact on nature is now so complete and irreversible that we’re better off thinking of the planet as if it were our first child. It will be here after us. Its future is unknown and uncontrollable. We are forced to plan ahead for it. Our first obligation is to keep it from harm. We are learning from it how to be decent parents.

