Relative Entropy in Evolutionary Dynamics

22 January, 2014

guest post by Marc Harper

In John’s information geometry series, he mentioned some of my work in evolutionary dynamics. Today I’m going to tell you about some exciting extensions!

The replicator equation

First a little refresher. For a population of n replicating types, such as individuals with different eye colors or a gene with n distinct alleles, the ‘replicator equation’ expresses the main idea of natural selection: the relative rate of growth of each type should be proportional to the difference between the fitness of the type and the mean fitness in the population.

To see why this equation should be true, let P_i be the population of individuals of the ith type, which we allow to be any nonnegative real number. We can list all these numbers and get a vector:

P = (P_1, \dots, P_n)

The Lotka–Volterra equation is a very general rule for how these numbers can change with time:

\displaystyle{ \frac{d P_i}{d t} = f_i(P) P_i }

Each population grows at a rate proportional to itself, where the ‘constant of proportionality’, f_i(P), is not necessarily constant: it can be any real-valued function of P. This function is called the fitness of the ith type. Taken all together, these functions f_i are called the fitness landscape.

Let p_i be the fraction of individuals who are of the ith type:

\displaystyle{ p_i = \frac{P_i}{\sum_{j =1}^n P_j } }

These numbers p_i are between 0 and 1, and they add up to 1. So, we can also think of them as probabilities: p_i is the probability that a randomly chosen individual is of the ith type. This is how probability theory, and eventually entropy, gets into the game.

Again, we can bundle these numbers into a vector:

p = (p_1, \dots, p_n)

which we call the population distribution. It turns out that the Lotka–Volterra equation implies the replicator equation:

\displaystyle{ \frac{d p_i}{d t} = \left( f_i(P) - \langle f(P) \rangle \right) \, p_i }

where

\displaystyle{ \langle f(P) \rangle = \sum_{i =1}^n  f_i(P)  p_i  }

is the mean fitness of all the individuals. You can see the proof in Part 9 of the information geometry series.

By the way: if each fitness f_i(P) only depends on the fraction of individuals of each type, not the total numbers, we can write the replicator equation in a simpler way:

\displaystyle{ \frac{d p_i}{d t} = \left( f_i(p) - \langle f(p) \rangle \right) \, p_i }

From now on, when talking about the replicator equation, I’ll assume the fitness depends only on the population distribution p, so we can use this simpler form.

Anyway, the take-home message is this: the replicator equation says the fraction of individuals of any type changes at a rate proportional to fitness of that type minus the mean fitness.
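If you like seeing equations in action, here’s a minimal numerical sketch of the replicator equation in Python. Everything concrete in it is a made-up example: the 3×3 payoff matrix A (a rock-paper-scissors-like game) and the Euler time step are illustrative choices, not anything from the discussion above.

```
# A minimal sketch: Euler integration of the replicator equation
# dp_i/dt = (f_i(p) - <f(p)>) p_i, for a hypothetical linear fitness
# landscape f(p) = A p. The matrix A is an illustrative choice.
import numpy as np

A = np.array([[0., 1., 2.],
              [2., 0., 1.],
              [1., 2., 0.]])          # hypothetical game matrix

def replicator_step(p, dt=0.01):
    f = A @ p                          # fitness of each type
    mean_f = f @ p                     # <f(p)> = sum_i f_i(p) p_i
    p = p + dt * (f - mean_f) * p
    return p / p.sum()                 # guard against numerical drift

p = np.array([0.8, 0.1, 0.1])          # initial population distribution
for _ in range(5000):
    p = replicator_step(p)
print(p)   # should approach the interior equilibrium (1/3, 1/3, 1/3)
```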

Now, it has been known since the late 1970s or early 1980s, thanks to the work of Akin, Bomze, Hofbauer, Shahshahani, and others, that the replicator equation has some very interesting properties. For one thing, it often makes ‘relative entropy’ decrease. For another, it’s often an example of ‘gradient flow’. Let’s look at both of these in turn, and then talk about some new generalizations of these facts.

Relative entropy as a Lyapunov function

I mentioned that we can think of a population distribution as a probability distribution. This lets us take ideas from probability theory and even information theory and apply them to evolutionary dynamics! For example, given two population distributions p and q, the information of q relative to p is

I(q,p) = \displaystyle{ \sum_i q_i \ln \left(\frac{q_i}{p_i }\right)}

This measures how much information you gain if you have a hypothesis about some state of affairs given by the probability distribution p, and then someone tells you “no, the best hypothesis is q!”

It may seem weird to treat a population distribution as a hypothesis, but this turns out to be a good idea. Evolution can then be seen as a learning process: a process of improving the hypothesis.

We can make this precise by seeing how the relative information changes with the passage of time. Suppose we have two population distributions q and p. Suppose q is fixed, while p evolves in time according to the replicator equation. Then

\displaystyle{  \frac{d}{d t} I(q,p)  =  \sum_i f_i(P) (p_i - q_i) }

For the proof, see Part 11 of the information geometry series.

So, the information of q relative to p will decrease as p evolves according to the replicator equation if

\displaystyle{ \sum_i f_i(P) (p_i - q_i) \le 0 }

If q makes this true for all p, we say q is an evolutionarily stable state. For some reasons why, see Part 13.

What matters now is that when q is an evolutionarily stable state, I(q,p) says how much information the population has ‘left to learn’—and we’re seeing that this always decreases. Moreover, it turns out that we always have

I(q,p) \ge 0

and I(q,p) = 0 precisely when p = q.
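Here’s a small numerical check of this, reusing the hypothetical game matrix from the sketch above; its interior equilibrium (1/3, 1/3, 1/3) is evolutionarily stable, so the relative entropy should fall monotonically toward zero, up to discretization error.

```
# A sketch: I(q,p) as a Lyapunov function along a replicator trajectory.
import numpy as np

A = np.array([[0., 1., 2.], [2., 0., 1.], [1., 2., 0.]])   # hypothetical game
q = np.array([1/3, 1/3, 1/3])                               # its stable state

def relative_entropy(q, p):
    return float(np.sum(q * np.log(q / p)))                 # I(q,p)

p, dt, values = np.array([0.8, 0.1, 0.1]), 0.01, []
for _ in range(2000):
    f = A @ p
    p = p + dt * (f - f @ p) * p
    values.append(relative_entropy(q, p))

# Should print True (monotone decrease, up to discretization) and a value near 0:
print(all(a >= b for a, b in zip(values, values[1:])), values[-1])
```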

People summarize all this by saying that relative information is a ‘Lyapunov function’. Very roughly, a Lyapunov function is something that decreases with the passage of time, and is zero only at the unique stable state. To be a bit more precise, suppose we have a differential equation like

\displaystyle{  \frac{d}{d t} x(t) = v(x(t)) }

where x(t) \in \mathbb{R}^n and v is some smooth vector field on \mathbb{R}^n. Then a smooth function

V : \mathbb{R}^n \to \mathbb{R}

is a Lyapunov function if

V(x) \ge 0 for all x

V(x) = 0 iff x is some particular point x_0

and

\displaystyle{ \frac{d}{d t} V(x(t)) \le 0 } for every solution of our differential equation.

In this situation, the point x_0 is a stable equilibrium for our differential equation: this is Lyapunov’s theorem.

The replicator equation as a gradient flow equation

The basic idea of Lyapunov’s theorem is that when a ball likes to roll downhill and the landscape has just one bottom point, that point will be the unique stable equilibrium for the ball.

The idea of gradient flow is similar, but different: sometimes things like to roll downhill as efficiently as possible, moving in exactly the best direction to make some quantity smaller! Under certain conditions, the replicator equation is an example of this phenomenon.

Let’s fill in some details. For starters, suppose we have some function

V : \mathbb{R}^n \to \mathbb{R}

Think of V as ‘height’. Then the gradient flow equation says how a point x(t) \in \mathbb{R}^n will move if it’s always trying its very best to go downhill:

\displaystyle{ \frac{d}{d t} x(t) = - \nabla V(x(t)) }

Here \nabla is the usual gradient in Euclidean space:

\displaystyle{ \nabla V = \left(\partial_1 V, \dots, \partial_n V \right)  }

where \partial_i is short for the partial derivative with respect to the ith coordinate.

The interesting thing is that under certain conditions, the replicator equation is an example of a gradient flow equation… but typically not one where \nabla is the usual gradient in Euclidean space. Instead, it’s the gradient on some other space, the space of all population distributions, which has a non-Euclidean geometry!

The space of all population distributions is a simplex:

\{ p \in \mathbb{R}^n : \; p_i \ge 0, \; \sum_{i = 1}^n p_i = 1 \} .

For example, it’s an equilateral triangle when n = 3. The equilateral triangle looks flat, but if we measure distances another way it becomes round, exactly like a portion of a sphere, and that’s the non-Euclidean geometry we need!

In fact this trick works in any dimension. The idea is to give the simplex a special Riemannian metric, the ‘Fisher information metric’. The usual metric on Euclidean space is

\delta_{i j} = \left\{\begin{array}{ccl} 1 & \mathrm{if} & i = j \\ 0 & \mathrm{if} & i \ne j \end{array} \right.

This simply says that two standard basis vectors like (0,1,0,0) and (0,0,1,0) have dot product zero if the 1's are in different places, and one if they’re in the same place. The Fisher information metric is a bit more complicated:

\displaystyle{ g_{i j} = \frac{\delta_{i j}}{p_i} }

As before, g_{i j} is a formula for the dot product of the ith and jth standard basis vectors, but now it depends on where you are in the simplex of population distributions.

We saw how this formula arises from information theory back in Part 7. I won’t repeat the calculation, but the idea is this. Fix a population distribution p and consider the information of another one, say q, relative to this. We get I(q,p). If q = p this is zero:

\displaystyle{ \left. I(q,p)\right|_{q = p} = 0 }

and this point is a local minimum for the relative information. So, the first derivative of I(q,p) as we change q must be zero:

\displaystyle{ \left. \frac{\partial}{\partial q_i} I(q,p) \right|_{q = p} = 0 }

But the second derivatives are not zero. In fact, since we’re at a local minimum, it should not be surprising that we get a positive definite matrix of second derivatives:

\displaystyle{  g_{i j} = \left. \frac{\partial^2}{\partial q_i \partial q_j} I(q,p) \right|_{q = p} }

And, this is the Fisher information metric! So, the Fisher information metric is a way of taking dot products between vectors in the simplex of population distributions that’s based on the concept of relative information.
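If you want to see this with your own eyes, here’s a quick finite-difference check, ignoring the simplex constraint just as in the calculation sketched above; the base point p is an arbitrary choice.

```
# A sketch: the Hessian of I(q,p) in q, evaluated at q = p, should be the
# Fisher information metric g_ij = delta_ij / p_i.
import numpy as np

p = np.array([0.5, 0.3, 0.2])                   # arbitrary base point
I = lambda q: float(np.sum(q * np.log(q / p)))  # relative information

eps, n = 1e-4, len(p)
H = np.zeros((n, n))
for i in range(n):
    for j in range(n):
        ei, ej = np.eye(n)[i], np.eye(n)[j]
        H[i, j] = (I(p + eps*ei + eps*ej) - I(p + eps*ei - eps*ej)
                   - I(p - eps*ei + eps*ej) + I(p - eps*ei - eps*ej)) / (4*eps**2)

print(np.round(H, 2))   # approximately diag(1/0.5, 1/0.3, 1/0.2)
```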

This is not the place to explain Riemannian geometry, but any metric gives a way to measure angles and distances, and thus a way to define the gradient of a function. After all, the gradient of a function should point at right angles to the level sets of that function, and its length should equal the slope of that function.

So, if we change our way of measuring angles and distances, we get a new concept of gradient! The ith component of this new gradient vector field turns out to be

(\nabla_g V)^i = g^{i j} \partial_j V

where g^{i j} is the inverse of the matrix g_{i j}, and we sum over the repeated index j. As a sanity check, make sure you see why this is the usual Euclidean gradient when g_{i j} = \delta_{i j}.

Now suppose the fitness landscape is the good old Euclidean gradient of some function. Then it turns out that the replicator equation is a special case of gradient flow on the space of population distributions… but where we use the Fisher information metric to define our concept of gradient!

To get a feel for this, it’s good to start with the Lotka–Volterra equation, which describes how the total number of individuals of each type changes. Suppose the fitness landscape is the Euclidean gradient of some function V:

\displaystyle{ f_i(P) = \frac{\partial V}{\partial P_i} }

Then the Lotka–Volterra equation becomes this:

\displaystyle{ \frac{d P_i}{d t} = \frac{\partial V}{\partial P_i} \, P_i }

This doesn’t look like the gradient flow equation, thanks to that annoying P_i on the right-hand side! It certainly ain’t the gradient flow coming from the function V and the usual Euclidean gradient. However, it is gradient flow coming from V and some other metric on the space

\{ P \in \mathbb{R}^n : \; P_i \ge 0 \}

For a proof, and the formula for this other metric, see Section 3.7 in this survey:

• Marc Harper, Information geometry and evolutionary game theory.

Now let’s turn to the replicator equation:

\displaystyle{ \frac{d p_i}{d t} = \left( f_i(p)  - \langle f(p) \rangle \right) \, p_i }

Again, if the fitness landscape is a Euclidean gradient, we can rewrite the replicator equation as a gradient flow equation… but again, not with respect to the Euclidean metric. This time we need to use the Fisher information metric! I sketch a proof in my paper above.

In fact, both these results were first worked out by Shahshahani:

• Siavash Shahshahani, A New Mathematical Framework for the Study of Linkage and Selection, Memoirs of the AMS 17, 1979.
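Here’s a numerical check of the second of these results, with a made-up potential: take V(p) = \frac{1}{2} p^T S p for a symmetric matrix S, so the fitness landscape f = \nabla V = S p is a Euclidean gradient, and compare the replicator vector field with the Fisher-metric gradient of V projected onto the simplex.

```
# A sketch: when f = grad V (Euclidean), the replicator vector field equals
# the gradient of V in the Fisher metric g_ij = delta_ij/p_i, projected onto
# the tangent space of the simplex. S is a hypothetical symmetric matrix.
import numpy as np

S = np.array([[1., 2., 0.], [2., 0., 1.], [0., 1., 3.]])  # symmetric, made up
grad_V = lambda p: S @ p                                   # Euclidean gradient

def replicator_rhs(p):
    f = grad_V(p)
    return p * (f - f @ p)

def fisher_gradient(p):
    v = p * grad_V(p)        # raise the index: g^{ij} d_j V = p_i d_i V
    return v - v.sum() * p   # project g-orthogonally onto vectors summing to zero

p = np.array([0.5, 0.3, 0.2])
print(np.allclose(replicator_rhs(p), fisher_gradient(p)))  # True
```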

New directions

All this is just the beginning! The ideas I just explained are unified in information geometry, where distance-like quantities such as the relative entropy and the Fisher information metric are studied. From here it’s a short walk to a very nice version of Fisher’s fundamental theorem of natural selection, which is familiar to researchers both in evolutionary dynamics and in information geometry.

You can see some very nice versions of this story for maximum likelihood estimators and linear programming here:

• Akio Fujiwara and Shun-ichi Amari, Gradient systems in view of information geometry, Physica D: Nonlinear Phenomena 80 (1995), 317–327.

Indeed, this seems to be the first paper discussing the similarities between evolutionary game theory and information geometry.

Dash Fryer (at Pomona College) and I have generalized this story in several interesting ways.

First, there are two famous ways to generalize the usual formula for entropy: Tsallis entropy and Rényi entropy, both of which involve a parameter q. There are Tsallis and Rényi versions of relative entropy and the Fisher information metric as well. Everything I just explained about:

• conditions under which relative entropy is a Lyapunov function for the replicator equation, and

• conditions under which the replicator equation is a special case of gradient flow

generalize to these cases! However, these generalized entropies give modified versions of the replicator equation. When we set q=1 we get back the usual story. See

• Marc Harper, Escort evolutionary game theory.

My initial interest in these alternate entropies was mostly mathematical—what is so special about the corresponding geometries?—but now researchers are starting to find populations that evolve according to these kinds of modified population dynamics! For example:

• A. Hernando et al, The workings of the Maximum Entropy Principle in collective human behavior.

There’s an interesting special case worth some attention. Lots of people fret about the relative entropy not being a distance function obeying the axioms that mathematicians like: for example, it doesn’t obey the triangle inequality. It is better described as a ‘divergence’: a distance-like quantity that need not obey those axioms. On the other hand, the q=0 relative entropy is one-half the Euclidean distance squared! In this case the modified version of the replicator equation looks like this:

\displaystyle{ \frac{d p_i}{d t} = f_i(p) - \frac{1}{n} \sum_{j = 1}^n f_j(p) }

This equation is called the projection dynamic.
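Here’s a sketch of the projection dynamic in the interior of the simplex, using the same hypothetical game as earlier; the quantity \frac{1}{2}|p - q|^2, the q = 0 ‘relative entropy’, should shrink along the trajectory. Near the boundary of the simplex the projection dynamic needs more care, since the support can change; this sketch ignores that.

```
# A sketch of the projection dynamic dp_i/dt = f_i(p) - (1/n) sum_j f_j(p),
# staying in the interior of the simplex.
import numpy as np

A = np.array([[0., 1., 2.], [2., 0., 1.], [1., 2., 0.]])   # hypothetical game
q = np.array([1/3, 1/3, 1/3])

p, dt, dist = np.array([0.6, 0.3, 0.1]), 0.01, []
for _ in range(2000):
    f = A @ p
    p = p + dt * (f - f.mean())            # subtract the *unweighted* mean fitness
    dist.append(0.5 * np.sum((p - q)**2))  # one-half Euclidean distance squared

print(dist[0], dist[-1])   # should decrease toward zero
```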

Later, I showed that there is a reasonable definition of relative entropy for a much larger family of geometries that satisfies a similar distance minimization property.

In a different direction, Dash showed that you can change the way that selection acts by using a variety of alternative ‘incentives’, extending the story to some other well-known equations describing evolutionary dynamics. By replacing the terms p_i f_i(p) in the replicator equation with a variety of other functions, called incentives, we can generate many commonly studied models of evolutionary dynamics. For instance, if we exponentiate the fitness landscape (to make it always positive), we get what is commonly known as the logit dynamic. This amounts to changing the fitness landscape as follows:

\displaystyle{ f_i \mapsto \frac{p_i e^{\beta f_i}}{\sum_j p_j e^{\beta f_j}} }

where \beta is known as an inverse temperature in statistical thermodynamics and as an intensity of selection in evolutionary dynamics. There are lots of modified versions of the replicator equation, like the best-reply and projection dynamics, more common in economic applications of evolutionary game theory, and they can all be captured in this family. (There are also other ways to simultaneously capture such families, such as Bill Sandholm’s revision protocols, which were introduced earlier in his exploration of the foundations of game dynamics.)
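To make the incentive idea concrete, here’s a sketch. I’m assuming one common form of incentive dynamics, dp_i/dt = \varphi_i(p) - p_i \sum_j \varphi_j(p), where \varphi is the incentive; the papers cited below give the precise general form. Taking \varphi_i(p) = p_i f_i(p) recovers the replicator equation, while the logit incentive follows the formula above.

```
# A sketch of incentive dynamics (assumed form: dp/dt = phi(p) - p * sum(phi)).
import numpy as np

A = np.array([[0., 1., 2.], [2., 0., 1.], [1., 2., 0.]])   # hypothetical game
fitness = lambda p: A @ p

def replicator_incentive(p):
    return p * fitness(p)              # recovers the replicator equation

def logit_incentive(p, beta=1.0):
    w = p * np.exp(beta * fitness(p))  # exponentiated fitness, always positive
    return w / w.sum()

def incentive_step(p, phi, dt=0.01):
    v = phi(p)
    return p + dt * (v - p * v.sum())

p = np.array([0.7, 0.2, 0.1])
for _ in range(4000):
    p = incentive_step(p, logit_incentive)
print(p)   # should settle near the interior rest point (1/3, 1/3, 1/3)
```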

Dash showed that there is a natural generalization of evolutionarily stable states to ‘incentive stable states’, and that for incentive stable states, the relative entropy decreases to zero as the trajectories approach the equilibrium. For the logit and projection dynamics, incentive stable states frequently coincide with evolutionarily stable states, but not always.

The third generalization is to look at different ‘time-scales’—that is, different ways of describing time! We can make up the symbol \mathbb{T} for a general choice of ‘time-scale’. So far I’ve been treating time as a real number, so

\mathbb{T} = \mathbb{R}

But we can also treat time as coming in discrete evenly spaced steps, which amounts to treating time as an integer:

\mathbb{T} = \mathbb{Z}

More generally, we can make the steps have duration h, where h is any positive real number:

\mathbb{T} = h\mathbb{Z}

There is a nice way to simultaneously describe the cases \mathbb{T} = \mathbb{R} and \mathbb{T} = h\mathbb{Z} using the time-scale calculus and time-scale derivatives. For the time-scale \mathbb{T} = \mathbb{R} the time-scale derivative is just the ordinary derivative. For the time-scale \mathbb{T} = h\mathbb{Z}, the time-scale derivative is given by the difference quotient from first year calculus:

\displaystyle{ f^{\Delta}(z) = \frac{f(z+h) - f(z)}{h} }

and using this as a substitute for the derivative gives difference equations like a discrete-time version of the replicator equation. There are many other choices of time-scale, such as the quantum time-scale given by \mathbb{T} = q^{\mathbb{Z}}, in which case the time-scale derivative is called the q-derivative, but that’s a tale for another time. In any case, the fact that the successive relative entropies are decreasing can be simply state by saying they have negative \mathbb{T} = h\mathbb{Z} time-scale derivative. The continuous case we started with corresponds to \mathbb{T} = \mathbb{R}.

Remarkably, Dash and I were able to show that you can combine all three of these generalizations into one theorem, and even allow for multiple interacting populations! This produces some really neat population trajectories, such as the following two populations with three types, with fitness functions corresponding to the rock-paper-scissors game. On top we have the replicator equation, which goes along with the Fisher information metric; on the bottom we have the logit dynamic, which goes along with the Euclidean metric on the simplex:

From our theorem, it follows that the relative entropy (ordinary relative entropy on top, the q = 0 entropy on bottom) converges to zero along the population trajectories.

The final form of the theorem is loosely as follows. Pick a Riemannian geometry given by a metric g (obeying some mild conditions) and an incentive for each population, as well as a time scale (\mathbb{R} or h \mathbb{Z}) for every population. This gives an evolutionary dynamic with a natural generalization of evolutionarily stable states, and a suitable version of the relative entropy. Then, if there is an evolutionarily stable state in the interior of the simplex, the sum of the relative entropies for the populations has negative time-scale derivative: it decreases as the trajectories converge to the stable state!

When there isn’t such a stable state, we still get some interesting population dynamics, like the following:


See this paper for details:

• Marc Harper and Dashiell E. A. Fryer, Stability of evolutionary dynamics on time scales.

Next time we’ll see how to make the main idea work in finite populations, without derivatives or deterministic trajectories!


Life’s Struggle to Survive

19 December, 2013

Here’s the talk I gave at the SETI Institute:

When pondering the number of extraterrestrial civilizations, it is worth noting that even after it got started, the success of life on Earth was not a foregone conclusion. In this talk, I recount some thrilling episodes from the history of our planet, some well-documented but others merely theorized: our collision with the planet Theia, the oxygen catastrophe, the snowball Earth events, the Permian-Triassic mass extinction event, the asteroid that hit Chicxulub, and more, including the massive environmental changes we are causing now. All of these hold lessons for what may happen on other planets!

To watch the talk, click on the video above. To see slides of the talk, click here!

Here’s a mistake in my talk that doesn’t appear in the slides: I suggested that Theia started at the Lagrange point in Earth’s orbit. After my talk, an expert said that at that time, the Solar System had lots of objects with orbits of high eccentricity, and Theia was probably one of these. He said the Lagrange point theory is an idiosyncratic theory, not widely accepted, that somehow found its way onto Wikipedia.

Another issue was brought up in the questions. In a paper in the Proceedings of the National Academy of Sciences, Sherwood and Huber argued that:

Any exceedence of 35 °C for extended periods should induce hyperthermia in humans and other mammals, as dissipation of metabolic heat becomes impossible. While this never happens now, it would begin to occur with global-mean warming of about 7 °C, calling the habitability of some regions into question. With 11-12 °C warming, such regions would spread to encompass the majority of the human population as currently distributed. Eventual warmings of 12 °C are possible from fossil fuel burning.

However, the Paleocene-Eocene Thermal Maximum seems to have been even hotter:

So, the question is: where did mammals live during this period, which mammals went extinct, if any, and does the survival of other mammals call into question Sherwood and Huber’s conclusion?


Entropy and Information in Biological Systems

2 November, 2013

John Harte is an ecologist who uses maximum entropy methods to predict the distribution, abundance and energy usage of species. Marc Harper uses information theory in bioinformatics and evolutionary game theory. Harper, Harte and I are organizing a workshop on entropy and information in biological systems, and I’m really excited about it!

It’ll take place at the National Institute for Mathematical and Biological Synthesis in Knoxville, Tennessee. We are scheduling it for Wednesday-Friday, April 8-10, 2015. When the date gets confirmed, I’ll post an advertisement so you can apply to attend.

Writing the proposal was fun, because we got to pull together lots of interesting people who are applying information theory and entropy to biology in quite different ways. So, here it is!

Proposal

Ever since Shannon initiated research on information theory in 1948, there have been hopes that the concept of information could serve as a tool to help systematize and unify work in biology. The link between information and entropy was noted very early on, and it suggested that a full thermodynamic understanding of biology would naturally involve the information processing and storage that are characteristic of living organisms. However, the subject is full of conceptual pitfalls for the unwary, and progress has been slower than initially expected. Premature attempts at ‘grand syntheses’ have often misfired. But applications of information theory and entropy to specific highly focused topics in biology have been increasingly successful, such as:

• the maximum entropy principle in ecology,
• Shannon and Rényi entropies as measures of biodiversity,
• information theory in evolutionary game theory,
• information and the thermodynamics of individual cells.

Because they work in diverse fields, researchers in these specific topics have had little opportunity to trade insights and take stock of the progress so far. The aim of the workshop is to do just this.

In what follows, participants’ names are in boldface, while the main goals of the workshop are in italics.

Roderick Dewar is a key advocate of the principle of Maximum Entropy Production, which says that biological systems—and indeed all open, non-equilibrium systems—act to produce entropy at the maximum rate. Along with others, he has applied this principle to make testable predictions in a wide range of biological systems, from ATP synthesis [DJZ2006] to respiration and photosynthesis of individual plants [D2010] and plant communities. He has also sought to derive this principle from ideas in statistical mechanics [D2004, D2009], but it remains controversial.

The first goal of this workshop is to study the validity of this principle.

While they may be related, the principle of Maximum Entropy Production should not be confused with the MaxEnt inference procedure, which says that we should choose the probabilistic hypothesis with the highest entropy subject to the constraints provided by our data. MaxEnt was first explicitly advocated by Jaynes. He noted that it is already implicit in the procedures of statistical mechanics, but convincingly argued that it can also be applied to situations where entropy is more ‘informational’ than ‘thermodynamic’ in character.

Recently John Harte has applied MaxEnt in this way to ecology, using it to make specific testable predictions for the distribution, abundance and energy usage of species across spatial scales and across habitats and taxonomic groups [Harte2008, Harte2009, Harte2011]. Annette Ostling is an expert on other theories that attempt to explain the same data, such as the ‘neutral model’ [AOE2008, ODLSG2009, O2005, O2012]. Dewar has also used MaxEnt in ecology [D2008], and he has argued that it underlies the principle of Maximum Entropy Production.

Thus, a second goal of this workshop is to familiarize all the participants with applications of the MaxEnt method to ecology, compare it with competing approaches, and study whether MaxEnt provides a sufficient justification for the principle of Maximum Entropy Production.

Entropy is not merely a predictive tool in ecology: it is also widely used as a measure of biodiversity. Here Shannon’s original concept of entropy naturally generalizes to ‘Rényi entropy’, which depends on a parameter \alpha \ge 0. This equals

\displaystyle{ H_\alpha(p) = \frac{1}{1-\alpha} \log \sum_i p_i^\alpha  }

where p_i is the fraction of organisms of the ith type (which could mean species, some other taxon, etc.). In the limit \alpha \to 1 this reduces to the Shannon entropy:

\displaystyle{  H(p) = - \sum_i p_i \log p_i }

As \alpha increases, we give less weight to rare types of organisms. Christina Cobbold and Tom Leinster have described a systematic and highly flexible system of biodiversity measurement, with Rényi entropy at its heart [CL2012]. They consider both the case where all we have are the numbers p_i, and the more subtle case where we take the distance between different types of organisms into account.
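Here’s a small sketch of these entropies for an illustrative distribution; note how the value decreases as \alpha grows, reflecting the decreasing weight on rare types.

```
# A sketch: Renyi entropy H_alpha(p) and its alpha -> 1 (Shannon) limit.
import numpy as np

def renyi_entropy(p, alpha):
    if np.isclose(alpha, 1.0):                      # Shannon limit
        return float(-np.sum(p * np.log(p)))
    return float(np.log(np.sum(p ** alpha)) / (1.0 - alpha))

p = np.array([0.5, 0.25, 0.125, 0.125])             # illustrative distribution
for alpha in [0.0, 0.5, 1.0, 2.0, 10.0]:
    print(alpha, renyi_entropy(p, alpha))           # decreasing in alpha
```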

John Baez has explained the role of Rényi entropy in thermodynamics [B2011], and together with Tom Leinster and Tobias Fritz he has proved other theorems characterizing entropy which explain its importance for information processing [BFL2011]. However, these ideas have not yet been connected to the widespread use of entropy in biodiversity studies. More importantly, the use of entropy as a measure of biodiversity has not been clearly connected to MaxEnt methods in ecology. Does the success of MaxEnt methods imply a tendency for ecosystems to maximize biodiversity subject to the constraints of resource availability? This seems surprising, but a more nuanced statement along these general lines might be correct.

So, a third goal of this workshop is to clarify relations between known characterizations of entropy, the use of entropy as a measure of biodiversity, and the use of MaxEnt methods in ecology.

As the amount of data to analyze in genomics continues to surpass the ability of humans to analyze it, we can expect automated experiment design to become ever more important. In Chris Lee and Marc Harper’s RoboMendel program [LH2013], a mathematically precise concept of ‘potential information’—how much information is left to learn—plays a crucial role in deciding what experiment to do next, given the data obtained so far. It will be useful for them to interact with William Bialek, who has expertise in estimating entropy from empirical data and using it to constrain properties of models [BBS, BNS2001, BNS2002], and Susanne Still, who applies information theory to automated theory building and biology [CES2010, PS2012].

However, there is another link between biology and potential information. Harper has noted that in an ecosystem where the population of each type of organism grows at a rate proportional to its fitness (which may depend on the fraction of organisms of each type), the quantity

\displaystyle{ I(q||p) = \sum_i q_i \ln(q_i/p_i) }

always decreases if there is an evolutionarily stable state [Harper2009]. Here p_i is the fraction of organisms of the ith genotype at a given time, while q_i is this fraction in the evolutionarily stable state. This quantity is often called the Shannon information of q ‘relative to’ p. But in fact, it is precisely the same as Lee and Harper’s potential information! Indeed, there is a precise mathematical analogy between evolutionary games and processes where a probabilistic hypothesis is refined by repeated experiments.

Thus, a fourth goal of this workshop is to develop the concept of evolutionary games as ‘learning’ processes in which information is gained over time.

We shall try to synthesize this with Carl Bergstrom and Matina Donaldson-Matasci’s work on the ‘fitness value of information’: a measure of how much increase in fitness a population can obtain per bit of extra information [BL2004, DBL2010, DM2013]. Following Harper, we shall consider not only relative Shannon entropy, but also relative Rényi entropy, as a measure of information gain [Harper2011].

A fifth and final goal of this workshop is to study the interplay between information theory and the thermodynamics of individual cells and organelles.

Susanne Still has studied the thermodynamics of prediction in biological systems [BCSS2012]. And in a celebrated related piece of work, Jeremy England used thermodynamic arguments to derive a lower bound for the amount of entropy generated during a process of self-replication of a bacterial cell [England2013]. Interestingly, he showed that E. coli comes within a factor of 3 of this lower bound.

In short, information theory and entropy methods are becoming powerful tools in biology, from the level of individual cells, to whole ecosystems, to experimental design, model-building, and the measurement of biodiversity. The time is ripe for an investigative workshop that brings together experts from different fields and lets them share insights and methods and begin to tackle some of the big remaining questions.

Bibliography

[AOE2008] D. Alonso, A. Ostling and R. Etienne, The assumption of symmetry and species abundance distributions, Ecology Letters 11 (2008), 93–105.

[TMMABB2012] D. Amodei, W. Bialek, M. J. Berry II, O. Marre, T. Mora, and G. Tkacik, The simplest maximum entropy model for collective behavior in a neural network, arXiv:1207.6319 (2012).

[B2011] J. Baez, Rényi entropy and free energy, arXiv:1102.2098 (2011).

[BFL2011] J. Baez, T. Fritz and T. Leinster, A characterization of entropy in terms of information loss, Entropy 13 (2011), 1945–1957.

[BS2012] J. Baez and M. Stay, Algorithmic thermodynamics, Math. Struct. Comp. Sci. 22 (2012), 771–787.

[BCSS2012] A. J. Bell, G. E. Crooks, S. Still and D. A Sivak, The thermodynamics of prediction, Phys. Rev. Lett. 109 (2012), 120604.

[BL2004] C. T. Bergstrom and M. Lachmann, Shannon information and biological fitness, in IEEE Information Theory Workshop 2004, IEEE, 2004, pp. 50–54.

[BBS] M. J. Berry II, W. Bialek and E. Schneidman, An information theoretic approach to the functional classification of neurons, in Advances in Neural Information Processing Systems 15, MIT Press, 2005.

[BNS2001] W. Bialek, I. Nemenman and N. Tishby, Predictability, complexity and learning, Neural Computation 13 (2001), 2409–2463.

[BNS2002] W. Bialek, I. Nemenman and F. Shafee, Entropy and inference, revisited, in Advances in Neural Information Processing Systems 14, MIT Press, 2002.

[CL2012] C. Cobbold and T. Leinster, Measuring diversity: the importance of species similarity, Ecology 93 (2012), 477–489.

[CES2010] J. P. Crutchfield, S. Still and C. Ellison, Optimal causal inference: estimating stored information and approximating causal architecture, Chaos 20 (2010), 037111.

[D2004] R. C. Dewar, Maximum entropy production and non-equilibrium statistical mechanics, in Non-Equilibrium Thermodynamics and Entropy Production: Life, Earth and Beyond, eds. A. Kleidon and R. Lorenz, Springer, New York, 2004, 41–55.

[DJZ2006] R. C. Dewar, D. Juretić, P. Županović, The functional design of the rotary enzyme ATP synthase is consistent with maximum entropy production, Chem. Phys. Lett. 430 (2006), 177–182.

[D2008] R. C. Dewar, A. Porté, Statistical mechanics unifies different ecological patterns, J. Theor. Bio. 251 (2008), 389–403.

[D2009] R. C. Dewar, Maximum entropy production as an inference algorithm that translates physical assumptions into macroscopic predictions: don’t shoot the messenger, Entropy 11 (2009), 931–944.

[D2010] R. C. Dewar, Maximum entropy production and plant optimization theories, Phil. Trans. Roy. Soc. B 365 (2010) 1429–1435.

[DBL2010] M. C. Donaldson-Matasci, C. T. Bergstrom, and M. Lachmann, The fitness value of information, Oikos 119 (2010), 219–230.

[DM2013] M. C. Donaldson-Matasci, G. DeGrandi-Hoffman, and A. Dornhaus, Bigger is better: honey bee colonies as distributed information-gathering systems, Animal Behaviour 85 (2013), 585–592.

[England2013] J. L. England, Statistical physics of self-replication, J. Chem. Phys. 139 (2013), 121923.

[ODLSG2009] J. L. Green, J. K. Lake, J. P. O’Dwyer, A. Ostling and V. M. Savage, An integrative framework for stochastic, size-structured community assembly, PNAS 106 (2009), 6170–6175.

[Harper2009] M. Harper, Information geometry and evolutionary game theory, arXiv:0911.1383 (2009).

[Harper2011] M. Harper, Escort evolutionary game theory, Physica D 240 (2011), 1411–1415.

[Harte2008] J. Harte, T. Zillio, E. Conlisk and A. Smith, Maximum entropy and the state-variable approach to macroecology, Ecology 89 (2008), 2700–2711.

[Harte2009] J. Harte, A. Smith and D. Storch, Biodiversity scales from plots to biomes with a universal species-area curve, Ecology Letters 12 (2009), 789–797.

[Harte2011] J. Harte, Maximum Entropy and Ecology: A Theory of Abundance, Distribution, and Energetics, Oxford U. Press, Oxford, 2011.

[LH2013] M. Harper and C. Lee, Basic experiment planning via information metrics: the RoboMendel problem, arXiv:1210.4808 (2012).

[O2005] A. Ostling, Neutral theory tested by birds, Nature 436 (2005), 635.

[O2012] A. Ostling, Do fitness-equalizing tradeoffs lead to neutral communities?, Theoretical Ecology 5 (2012), 181–194.

[PS2012] D. Precup and S. Still, An information-theoretic approach to curiosity-driven reinforcement learning, Theory in Biosciences 131 (2012), 139–148.


Autocatalysis in Reaction Networks

11 October, 2013

guest post by Manoj Gopalkrishnan

Since this is my first time writing a blog post here, let me start with a word of introduction. I am a computer scientist at the Tata Institute of Fundamental Research, broadly interested in connections between Biology and Computer Science, with a particular interest in reaction networks. I first started thinking about them during my Ph.D. at the Laboratory for Molecular Science. My fascination with them has been predominantly mathematical. As a graduate student, I encountered an area with rich connections between combinatorics and dynamics, and surprisingly easy-to-state and compelling unsolved conjectures, and got hooked.

There is a story about Richard Feynman that he used to take bets with mathematicians. If any mathematician could make Feynman understand a mathematical statement, then Feynman would guess whether or not the statement was true. Of course, Feynman was in the habit of winning these bets, which allowed him to boast that mathematics, especially in its obsession with proof, was essentially irrelevant, since a relative novice like himself could after a moment’s thought guess at the truth of these mathematical statements. I have always felt Feynman’s claim to be unjust, but have often wondered what mathematical statement I would put to him so that his chances of winning were no better than random.

Today I want to tell you of a result about reaction networks that I have recently discovered with Abhishek Deshpande. The statement seems like a fine candidate to throw at Feynman because until we proved it, I would not have bet either way about its truth. Even after we obtained a short and elementary proof, I do not completely ‘see’ why it must be true. I am hoping some of you will be able to demystify it for me. So, I’m just going to introduce enough terms to be able to make the statement of our result, and let you think about how to prove it.

John and his colleagues have been talking about reaction networks as Petri nets in the network theory series on this blog. As discussed in part 2 of that series, a Petri net is a diagram like this:

Following John’s terminology, I will call the aqua squares ‘transitions’ and the yellow circles ‘species’. If we have some number #rabbit of rabbits and some number #wolf of wolves, we draw #rabbit many black dots called ‘tokens’ inside the yellow circle for rabbit, and #wolf tokens inside the yellow circle for wolf, like this:

Here #rabbit = 4 and #wolf = 3. The predation transition consumes one ‘rabbit’ token and one ‘wolf’ token, and produces two ‘wolf’ tokens, taking us here:

John explained in parts 2 and 3 how one can put rates on different transitions. For today I am only going to be concerned with ‘reachability’: what token states are reachable from what other token states. John talked about this idea in part 25.

By a complex I will mean a population vector: a snapshot of the number of tokens in each species. In the example above, (#rabbit, #wolf) is a complex. If y, y' are two complexes, then we write

y \to y'

if we can get from y to y' by a single transition in our Petri net. For example, we just saw that

(4,3)\to (3,4)

via the predation transition.

Reachability, denoted \to^*, is the transitive closure of the relation \to. So y\to^* y' (read y' is reachable from y) iff there are complexes

y=y_0,y_1,y_2,\dots,y_k =y'

such that

y_0\to y_1\to\cdots\to y_{k-1}\to y_k.

For example, here (5,1) \to^* (1, 5) by repeated predation.
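Reachability is easy to explore by brute force. Here’s a sketch in Python; I’m assuming the pictured net has just the birth and predation transitions, and I cap the token counts so the search terminates.

```
# A sketch: breadth-first search over complexes of the rabbit/wolf Petri net.
from collections import deque

transitions = [
    ((1, 0), (2, 0)),   # birth:     rabbit -> 2 rabbits (assumed from the picture)
    ((1, 1), (0, 2)),   # predation: rabbit + wolf -> 2 wolves
]
CAP = 10                 # bound token counts so the search is finite

def reachable(start):
    seen, frontier = {start}, deque([start])
    while frontier:
        y = frontier.popleft()
        for use, make in transitions:
            if all(a >= u for a, u in zip(y, use)):
                z = tuple(a - u + m for a, u, m in zip(y, use, make))
                if z not in seen and all(c <= CAP for c in z):
                    seen.add(z)
                    frontier.append(z)
    return seen

print((1, 5) in reachable((5, 1)))   # True: repeated predation
```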

I am very interested in switches. After all, a computer is essentially a box of switches! You can build computers by connecting switches together. In fact, that’s how early computers like the Z3 were built. The CMOS gates at the heart of modern computers are essentially switches. By analogy, the study of switches in reaction networks may help us understand biochemical circuits.

A siphon is a set of species that is ‘switch-offable’. That is, if there are no tokens in the siphon states, then they will remain absent in future. Equivalently, the only reactions that can produce tokens in the siphon states are those that require tokens from the siphon states before they can fire. Note that no matter how many rabbits there are, if there are no wolves, there will continue to be no wolves. So {wolf} is a siphon. Similarly, {rabbit} is a siphon, as is the union {rabbit, wolf}. However, when Hydrogen and Oxygen form Water, {Water} is not a siphon.

For another example, consider this Petri net:

The set {HCl, NaCl} is a siphon. However, there is a conservation law: whenever an HCl token is destroyed, an NaCl token is created, so that #HCl + #NaCl is invariant. If both HCl and NaCl were present to begin with, the complexes where both are absent are not reachable. In this sense, this siphon is not ‘really’ switch-offable. As a first pass at capturing this idea, we will introduce the notion of ‘critical set’.

A conservation law is a linear expression involving numbers of tokens that is invariant under every transition in the Petri net. A conservation law is positive if all the coefficients are non-negative. A critical set of species is one that does not contain the support of a positive conservation law.

For example, the support of the positive conservation law #HCl + #NaCl is {HCl, NaCl}, and hence no set containing this set is critical. Thus {HCl, NaCl} is a siphon, but not critical. On the other hand, the set {NaCl} is critical but not a siphon. {HCl} is a critical siphon. And in our other example, {rabbit, wolf} is a critical siphon.

Of particular interest to us will be minimal critical siphons, the minimal sets among critical siphons. Consider this example:

Here we have two transitions:

X \to 2Y

and

2X \to Y

The set \{X,Y\} is a critical siphon. But so is the smaller set \{X\}. So, \{X,Y\} is not minimal.

We define a self-replicable set to be a set A of species such that there exist complexes y and y' with y\to^* y' such that for all i \in A we have

y'_i > y_i

So, there are transitions that accomplish the job of creating more tokens for all the species in A. In other words: these species can ‘replicate themselves’.

We define a drainable set by changing the > to a <. So, there are transitions that accomplish the job of reducing the number of tokens for all the species in A. These species can ‘drain away’.

Now here comes the statement:

Every minimal critical siphon is either drainable or self-replicable!

We prove it in this paper:

• Abhishek Deshpande and Manoj Gopalkrishnan, Autocatalysis in reaction networks.

But first note that the statement becomes false if the critical siphon is not minimal. Look at this example again:

The set \{X,Y\} is a critical siphon. However \{X,Y\} is neither self-replicable (since every reaction destroys X) nor drainable (since every reaction produces Y). But we’ve already seen that \{X,Y\} is not minimal. It has a critical subsiphon, namely \{X\}. This one is minimal—and it obeys our theorem, because it is drainable.

Checking these statements is a good way to make sure you understand the concepts! I know I’ve introduced a lot of terminology here, and it takes a while to absorb.
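If you’d rather let a computer do the checking, here’s a sketch for the two-reaction example above. A bounded search can confirm that a set is drainable or self-replicable by exhibiting a witness, though it cannot refute it; for this example the refutations are easy by hand (no reaction produces X, and none consumes Y).

```
# A sketch: testing drainability / self-replicability for X -> 2Y and 2X -> Y
# by bounded breadth-first search from one starting complex.
from collections import deque

reactions = [((1, 0), (0, 2)),   # X -> 2Y
             ((2, 0), (0, 1))]   # 2X -> Y
CAP = 20

def reach(start):
    seen, frontier = {start}, deque([start])
    while frontier:
        y = frontier.popleft()
        for use, make in reactions:
            if all(a >= u for a, u in zip(y, use)):
                z = tuple(a - u + m for a, u, m in zip(y, use, make))
                if z not in seen and all(c <= CAP for c in z):
                    seen.add(z)
                    frontier.append(z)
    return seen

start = (5, 5)
states = reach(start)
print(any(z[0] < start[0] for z in states))                       # {X} drainable: True
print(any(z[0] < start[0] and z[1] < start[1] for z in states))   # {X,Y} drainable: False
print(any(z[0] > start[0] and z[1] > start[1] for z in states))   # {X,Y} self-replicable: False
```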

Anyway: our proof that every minimal critical siphon is either drainable or self-replicable makes use of a fun result about matrices. Consider a real square matrix with a sign pattern like this:

\left( \begin{array}{cccc} <0 & >0 & \cdots & >0 \\ >0 & <0 & \cdots & >0 \\ \vdots & \vdots & \ddots & \vdots \\ >0 & >0 & \cdots & <0 \end{array} \right)

If the matrix is full-rank then there is a positive linear combination of the rows of the matrix so that all the entries are nonzero and have the same sign. In fact, we prove something stronger in Theorem 5.9 of our paper. At first, we thought this statement about matrices should be equivalent to one of the many well-known alternative statements of Farkas’ lemma, like Gordan’s theorem.

However, we could not find a way to make this work, so we ended up proving it by a different technique. Later, my colleague Jaikumar Radhakrishnan came up with a clever proof that uses Farkas’ lemma twice. However, so far we have not obtained the stronger result in Theorem 5.9 with this proof technique.
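The statement itself is easy to probe numerically. Here’s a sketch that builds a random matrix with the sign pattern above and uses linear programming to search for a positive combination of its rows with all entries of one sign; this checks the statement on examples, it is not our proof.

```
# A sketch: find c > 0 with all entries of c^T M sharing one sign, via LP.
import numpy as np
from scipy.optimize import linprog

rng = np.random.default_rng(0)
n = 4
M = rng.uniform(0.1, 1.0, (n, n))                 # positive off-diagonal
np.fill_diagonal(M, -rng.uniform(0.1, 1.0, n))    # negative diagonal

def positive_combination(M, sign):
    # Find c >= 1 (hence strictly positive) with sign * (c^T M) >= 1 entrywise;
    # by homogeneity of the problem this loses no generality.
    res = linprog(c=np.zeros(n), A_ub=-sign * M.T, b_ub=-np.ones(n),
                  bounds=[(1, None)] * n)
    return res.x if res.success else None

comb = positive_combination(M, -1.0)              # try an all-negative combination
if comb is None:
    comb = positive_combination(M, +1.0)          # otherwise all-positive
print(comb @ M)                                   # entries all share one sign
```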

My interest in the result that every minimal critical siphon is either drainable or self-replicable is not purely aesthetic (though aesthetics is a big part of it). There is a research community of folks who are thinking of reaction networks as a programming language, and synthesizing molecular systems that exhibit sophisticated dynamical behavior as per specification:

International Conference on DNA Computing and Molecular Programming.

Foundations of Nanoscience: Self-Assembled Architectures and Devices.

Molecular Programming Architectures, Abstractions, Algorithms and Applications.

Networks that exhibit some kind of catalytic behavior are a recurring theme among such systems, and even more so in biochemical circuits.

Here is an example of catalytic behavior:

A + C \to B + C

The ‘catalyst’ C helps transform A to B. In the absence of C, the reaction is turned off. Hence, catalysts are switches in chemical circuits! From this point of view, it is hardly surprising that they are required for the synthesis of complex behaviors.

In information processing, one needs amplification to make sure that a signal can propagate through a circuit without being overwhelmed by errors. Here is a chemical counterpart to such amplification:

A + C \to 2C

Here the catalyst C catalyzes its own production: it is an ‘autocatalyst’, or a self-replicating species. By analogy, autocatalysis is key for scaling synthetic molecular systems.

Our work deals with these notions on a network level. We generalize the notion of catalysis in two ways. First, we allow a catalyst to be a set of species instead of a single species; second, its absence can turn off a reaction pathway instead of a single reaction. We propose the notion of self-replicable siphons as a generalization of the notion of autocatalysis. In particular, ‘weakly reversible’ networks have critical siphons precisely when they exhibit autocatalytic behavior. I was led to this work when I noticed the manifestation of this last statement in many examples.

Another hope I have is that perhaps one can study the dynamics of each minimal critical siphon of a reaction network separately, and then somehow be able to answer interesting questions about the dynamics of the entire network, by stitching together what we know for each minimal critical siphon. On the synthesis side, perhaps this could lead to a programming language to synthesize a reaction network that will achieve a specified dynamics. If any of this works out, it would be really cool! I think of how abelian group theory (and more broadly, the theory of abelian categories, which includes categories of vector bundles) benefits from a fundamental theorem that lets you break a finite abelian group into parts that are easy to study—or how number theory benefits from a special case, the fundamental theorem of arithmetic. John has also pointed out that reaction networks are really presentations of symmetric monoidal categories, so perhaps this could point the way to a Fundamental Theorem for Symmetric Monoidal Categories.

And then there is the Global Attractor Conjecture, a long-standing open problem concerning the long-term behavior of solutions to the rate equations. Now that is a whole story by itself, and will have to wait for another day.


Maximum Entropy and Ecology

21 February, 2013

I already talked about John Harte’s book on how to stop global warming. Since I’m trying to apply information theory and thermodynamics to ecology, I was also interested in this book of his:

John Harte, Maximum Entropy and Ecology, Oxford U. Press, Oxford, 2011.

There’s a lot in this book, and I haven’t absorbed it all, but let me try to briefly summarize his maximum entropy theory of ecology. This aims to be “a comprehensive, parsimonious, and testable theory of the distribution, abundance, and energetics of species across spatial scales”. One great thing is that he makes quantitative predictions using this theory and compares them to a lot of real-world data. But let me just tell you about the theory.

It’s heavily based on the principle of maximum entropy (MaxEnt for short), and there are two parts:

Two MaxEnt calculations are at the core of the theory: the first yields all the metrics that describe abundance and energy distributions, and the second describes the spatial scaling properties of species’ distributions.

Abundance and energy distributions

The first part of Harte’s theory is all about a conditional probability distribution

R(n,\epsilon | S_0, N_0, E_0)

which he calls the ecosystem structure function. Here:

S_0: the total number of species under consideration in some area.

N_0: the total number of individuals under consideration in that area.

E_0: the total rate of metabolic energy consumption of all these individuals.

Given this,

R(n,\epsilon | S_0, N_0, E_0) \, d \epsilon

is the probability that given S_0, N_0, E_0, if a species is picked from the collection of species, then it has n individuals, and if an individual is picked at random from that species, then its rate of metabolic energy consumption is in the interval (\epsilon, \epsilon + d \epsilon).

Here of course d \epsilon is ‘infinitesimal’, meaning that we take a limit where it goes to zero to make this idea precise (if we’re doing analytical work) or take it to be very small (if we’re estimating R from data).

I believe that when we ‘pick a species’ we’re treating them all as equally probable, not weighting them according to their number of individuals.

Clearly R obeys some constraints. First, since it’s a probability distribution, it obeys the normalization condition:

\displaystyle{ \sum_n \int d \epsilon \; R(n,\epsilon | S_0, N_0, E_0) = 1 }

Second, since the average number of individuals per species is N_0/S_0, we have:

\displaystyle{ \sum_n \int d \epsilon \; n R(n,\epsilon | S_0, N_0, E_0) = N_0 / S_0 }

Third, since the average over species of the total rate of metabolic energy consumption of individuals within the species is E_0/ S_0, we have:

\displaystyle{ \sum_n \int d \epsilon \; n \epsilon R(n,\epsilon | S_0, N_0, E_0) = E_0 / S_0 }

Harte’s theory is that R maximizes entropy subject to these three constraints. Here entropy is defined by

\displaystyle{ - \sum_n \int d \epsilon \; R(n,\epsilon | S_0, N_0, E_0) \ln(R(n,\epsilon | S_0, N_0, E_0)) }
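To see roughly how such a calculation goes, here’s a sketch under a simplifying assumption: I let the metabolic rate \epsilon range over (0, \infty) rather than a bounded interval (Harte treats the bounded case). MaxEnt then gives R(n,\epsilon) \propto e^{-\lambda_1 n - \lambda_2 n \epsilon}; integrating out \epsilon leaves a log-series-like abundance distribution p(n) \propto e^{-\lambda_1 n}/n, with \lambda_2 = S_0/E_0 and \lambda_1 fixed by the constraint \langle n \rangle = N_0/S_0. The state variables below are hypothetical.

```
# A sketch of the first MaxEnt calculation, with eps unbounded above.
import numpy as np
from scipy.optimize import brentq

S0, N0, E0 = 50, 5000, 50000             # hypothetical state variables
n = np.arange(1, N0 + 1)

def mean_n(lam1):
    w = np.exp(-lam1 * n) / n            # abundance distribution, unnormalized
    return np.sum(n * w) / np.sum(w)

lam1 = brentq(lambda l: mean_n(l) - N0 / S0, 1e-6, 5.0)   # solve <n> = N0/S0
lam2 = S0 / E0                           # from <n * eps> = E0/S0 in this version
print(lam1, lam2, mean_n(lam1))          # mean_n(lam1) is 100 = N0/S0
```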

Harte uses this theory to calculate R, and tests the results against data from about 20 ecosystems. For example, he predicts the abundance of species as a function of their rank, with rank 1 being the most abundant, rank 2 being the second most abundant, and so on. And he gets results like this:

The data here are from:

• a serpentine grassland studied by Green, Harte and Ostling,

• the Luquillo forest, a 10.24-hectare tropical forest, and

• Cocoli, a 2-hectare wet tropical forest.

The fit looks good to me… but I should emphasize that I haven’t had time to study these matters in detail. For more, you can read this paper, at least if your institution subscribes to this journal:

• J. Harte, T. Zillio, E. Conlisk and A. Smith, Maximum entropy and the state-variable approach to macroecology, Ecology 89 (2008), 2700–2711.

Spatial abundance distribution

The second part of Harte’s theory is all about a conditional probability distribution

\Pi(n | A, n_0, A_0)

This is the probability that n individuals of a species are found in a region of area A given that it has n_0 individuals in a larger region of area A_0.

\Pi obeys two constraints. First, since it’s a probability distribution, it obeys the normalization condition:

\displaystyle{ \sum_n  \Pi(n | A, n_0, A_0) = 1 }

Second, since the mean value of n across regions of area A equals n_0 A/A_0, we have

\displaystyle{ \sum_n n \Pi(n | A, n_0, A_0) = n_0 A/A_0 }

Harte’s theory is that \Pi maximizes entropy subject to these two constraints. Here entropy is defined by

\displaystyle{- \sum_n  \Pi(n | A, n_0, A_0)\ln(\Pi(n | A, n_0, A_0)) }
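This one is a textbook MaxEnt exercise: with n ranging over 0, …, n_0 and one mean constraint, the answer is a truncated geometric distribution \Pi(n) \propto e^{-\lambda n}. Here’s a sketch, with hypothetical numbers.

```
# A sketch of the second MaxEnt calculation: Pi(n | A, n0, A0).
import numpy as np
from scipy.optimize import brentq

n0, area_ratio = 200, 0.25               # hypothetical: A/A0 = 1/4
n = np.arange(n0 + 1)

def mean(lam):
    w = np.exp(-lam * n)                 # truncated geometric, unnormalized
    return np.sum(n * w) / np.sum(w)

lam = brentq(lambda l: mean(l) - n0 * area_ratio, -1.0, 10.0)
Pi = np.exp(-lam * n)
Pi /= Pi.sum()
print(lam, Pi @ n)                       # mean is n0 * A/A0 = 50
```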

Harte explains two approaches that use this idea to derive ‘scaling laws’ for how the abundance n varies with the area A. And again, he compares his predictions to real-world data, and gets results that look good to my (amateur, hasty) eye!

I hope sometime I can dig deeper into this subject. Do you have any ideas, or knowledge about this stuff?


Prospects for a Green Mathematics

15 February, 2013

contribution to the Mathematics of Planet Earth 2013 blog by John Baez and David Tanzer

It is increasingly clear that we are initiating a sequence of dramatic events across our planet. They include habitat loss, an increased rate of extinction, global warming, the melting of ice caps and permafrost, an increase in extreme weather events, gradually rising sea levels, ocean acidification, the spread of oceanic “dead zones”, a depletion of natural resources, and ensuing social strife.

These events are all connected. They come from a way of life that views the Earth as essentially infinite, human civilization as a negligible perturbation, and exponential economic growth as a permanent condition. Deep changes will occur as these idealizations bring us crashing into the brick wall of reality. If we do not muster the will to act before things get significantly worse, we will need to do so later. While we may plead that it is “too difficult” or “too late”, this doesn’t matter: a transformation is inevitable. All we can do is start where we find ourselves, and begin adapting to life on a finite-sized planet.

Where does math fit into all this? While the problems we face have deep roots, major transformations in society have always caused and been helped along by revolutions in mathematics. Starting near the end of the last ice age, the Agricultural Revolution eventually led to the birth of written numerals and geometry. Centuries later, the Enlightenment and Industrial Revolution brought us calculus and eventually a flowering of mathematics unlike any before. Now, as the 21st century unfolds, mathematics will become increasingly driven by our need to understand the biosphere and our role within it.

We refer to mathematics suitable for understanding the biosphere as green mathematics. Although it is just being born, we can already see some of its outlines.

Since the biosphere is a massive network of interconnected elements, we expect network theory will play an important role in green mathematics. Network theory is a sprawling field, just beginning to become organized, which combines ideas from graph theory, probability theory, biology, ecology, sociology and more. Computation plays an important role here, both because it has a network structure—think of networks of logic gates—and because it provides the means for simulating networks.

One application of network theory is to tipping points, where a system abruptly passes from one regime to another. Scientists need to identify nearby tipping points in the biosphere to help policy makers to head off catastrophic changes. Mathematicians, in turn, are challenged to develop techniques for detecting incipient tipping points. Another application of network theory is the study of shocks and resilience. When can a network recover from a major blow to one of its subsystems?

We claim that network theory is not just another name for biology, ecology, or any other existing science, because in it we can see new mathematical terrains. Here are two examples.

First, consider a leaf. In The Formation of a Tree Leaf by Qinglan Xia, we see a possible key to Nature’s algorithm for the growth of leaf veins. The vein system, which is a transport network for nutrients and other substances, is modeled by Xia as a directed graph with nodes for cells and edges for the “pipes” that connect the cells. Each cell gives a revenue of energy, and incurs a cost for transporting substances to and from it.

The total transport cost depends on the network structure. There are costs for each of the pipes, and costs for turning the fluid around the bends. For each pipe, the cost is proportional to the product of its length, its cross-sectional area raised to a power α, and the number of leaf cells that it feeds. The exponent α captures the savings from using a thicker pipe to transport materials together. Another parameter β expresses the turning cost.
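As a rough sketch of this cost structure (my own simplified reading, with the turning-cost term dropped), here is the transport cost of a vein network given as a rooted tree:

```
# A sketch: transport cost of a vein network, as length * area^alpha * load
# summed over pipes, where load counts the cells a pipe feeds. The turning
# cost (parameter beta) is omitted for brevity; alpha = 0.6 is arbitrary.
import numpy as np

def transport_cost(parent, pos, area, alpha=0.6):
    # parent[i] is the parent cell of cell i (None for the root, listed first);
    # pos[i] is its position; area[i] is the cross-section of the pipe into i.
    n = len(parent)
    load = [1] * n
    for i in reversed(range(1, n)):          # children have larger indices
        load[parent[i]] += load[i]           # a pipe feeds its whole subtree
    cost = 0.0
    for i in range(1, n):
        length = np.linalg.norm(np.subtract(pos[i], pos[parent[i]]))
        cost += length * area[i] ** alpha * load[i]
    return cost

# Tiny made-up example: a root with two children, one grandchild.
parent = [None, 0, 0, 1]
pos = [(0, 0), (1, 0), (0, 1), (2, 0)]
area = [0.0, 1.0, 1.0, 0.5]
print(transport_cost(parent, pos, area))
```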

Development proceeds through cycles of growth and network optimization. During growth, a layer of cells gets added, containing each potential cell with a revenue that would exceed its cost. During optimization, the graph is adjusted to find a local cost minimum. Remarkably, by varying α and β, simulations yield leaves resembling those of specific plants, such as maple or mulberry.


A growing network

Unlike approaches that merely create pretty images resembling leaves, Xia presents an algorithmic model, simplified yet illuminating, of how leaves actually develop. It is a network-theoretic approach to a biological subject, and it is mathematics—replete with lemmas, theorems and algorithms—from start to finish.

A second example comes from stochastic Petri nets, which are a model for networks of reactions. In a stochastic Petri net, entities are designated by “tokens” and entity types by “places” which hold the tokens. “Reactions” remove tokens from their input places and deposit tokens at their output places. The reactions fire probabilistically, in a Markov chain where each reaction rate depends on the number of its input tokens.


A stochastic Petri net

Perhaps surprisingly, many techniques from quantum field theory are transferable to stochastic Petri nets. The key is to represent stochastic states by power series. Monomials represent pure states, which have a definite number of tokens at each place. Each variable in the monomial stands for a place, and its exponent indicates the token count. In a linear combination of monomials, each coefficient represents the probability of being in the associated state.

In quantum field theory, states are representable by power series with complex coefficients. The annihilation and creation of particles are cast as operators on power series. These same operators, when applied to the stochastic states of a Petri net, describe the annihilation and creation of tokens. Remarkably, the commutation relations between annihilation and creation operators, which are often viewed as a hallmark of quantum theory, make perfect sense in this classical probabilistic context.
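To see how this works for a single place, following the conventions of the Azimuth network theory series, write a stochastic state as a power series in one variable z:

\displaystyle{ \Psi = \sum_{k \ge 0} \psi_k \, z^k }

where \psi_k is the probability of finding k tokens at that place. The creation operator multiplies by z, while the annihilation operator differentiates:

\displaystyle{ a^\dagger \Psi = z \Psi, \qquad a \Psi = \frac{d \Psi}{d z} }

and the product rule immediately gives the commutation relation a a^\dagger - a^\dagger a = 1.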

Each stochastic Petri net has a “Hamiltonian” which gives its probabilistic law of motion. It is built from the annihilation and creation operators. Using this, one can prove many theorems about reaction networks, already known to chemists, in a compact and elegant way. See the Azimuth network theory series for details.
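For example, a reaction that destroys a single token at rate r (the reaction and its rate constant are my own illustration) has Hamiltonian

\displaystyle{ H = r \left( a - a^\dagger a \right) }

and the probabilistic law of motion is the master equation

\displaystyle{ \frac{d \Psi}{d t} = H \Psi }

Applying H to \Psi = \sum_k \psi_k z^k and matching coefficients of z^k recovers the familiar pure-death master equation: d\psi_k/dt = r((k+1)\psi_{k+1} - k \psi_k).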

Conclusion: The life of a network, and the networks of life, are brimming with mathematical content.

We are pursuing these subjects in the Azimuth Project, an open collaboration between mathematicians, scientists, engineers and programmers trying to help save the planet. On the Azimuth Wiki and Azimuth Blog we are trying to explain the main environmental and energy problems the world faces today. We are also studying plans of action, network theory, climate cycles, the programming of climate models, and more.

If you would like to help, we need you and your special expertise. You can write articles, contribute information, pose questions, fill in details, write software, help with research, help with writing, and more. Just drop us a line.


This post appeared on the blog for Mathematics of Planet Earth 2013, an international project involving over 100 scientific societies, universities, research institutes, and organizations. They’re trying to have a new blog article every day, and you can submit articles as described here.

Here are a few of their other articles:

The mathematics of extreme climatic events—with links to videos.

From the Joint Mathematics Meetings: Conceptual climate models short course—with links to online course materials.

There will always be a Gulf Stream—an exercise in singular perturbation technique.


Graduate Program in Biostatistics

7 November, 2012

Are you an undergrad who likes math and biology and wants a good grad program? This one sounds really interesting. The ad I bumped into is focused on minority applicants, maybe because U.C. Riverside is packed with students whose skin ain’t pale. But I’d say biostatistics is a good career even if you have the misfortune of needing high-SPF sunscreen:    

The Department of Biostatistics, which administers PhD training at the Harvard School of Public Health, seeks outstanding minority applicants for its graduate programs in Biostatistics.

Biostatistics is an excellent career choice for students interested in mathematics applied to real world problems. The current data explosion is contributing to the rising stature of, and demand for, biostatisticians, as noted in the New York Times:

I keep saying that the sexy job in the next 10 years will be statisticians … and I’m not kidding.

To date, Biostatistics has not been successful in attracting qualified minority students, particularly African Americans. Students best suited for careers in Biostatistics are those with strong mathematical abilities, combined with interests in health and biology. Unfortunately, statistics is not widely taught at the undergraduate level, and many potentially excellent candidates simply do not learn about the possibility of a valuable and fulfilling career in Biostatistics. Many minority students who could thrive in a Biostatistics program choose instead to enter medical school. Public health in general, and Biostatistics in particular, are not even considered as options. We would like your help in identifying qualified students before they make their choices regarding graduate school or other career paths.

All doctoral students accepted in our department are guaranteed full tuition and stipend support throughout their program, as long as they are making satisfactory progress towards the PhD degree. Every effort is made to meet the individual needs of each student, and to insure the successful completion of graduate work.

The web site for prospective students is here.

Please note the deadline for submitting applications to the MA and PhD programs for entry in the fall of 2013 is December 15, 2012.

We look forward to answering any questions you may have. Questions about our graduate programs can be directed to Jelena Follweiller, at jtillots@hsph.harvard.edu.


Azimuth News (Part 2)

28 September, 2012

Last week I finished a draft of a book and left Singapore, returning to my home in Riverside, California. It’s strange and interesting, leaving the humid tropics for the dry chaparral landscape I know so well.

Now I’m back to my former life as a math professor at the University of California. I’ll be going back to the Centre for Quantum Technology next summer, and summers after that, too. But life feels different now: a 2-year period of no teaching allowed me to change my research direction, but now it’s time to teach people what I’ve learned!

It also happens to be a time when the Azimuth Project is about to do a lot of interesting things. So, let me tell you some news!

Programming with Petri nets

The Azimuth Project has a bunch of new members, who are bringing with them new expertise and lots of energy. One of them is David Tanzer, who was an undergraduate math major at U. Penn, and got a Ph.D. in computer science at NYU. Now he’s a software developer, and he lives in Brooklyn, New York.

He writes:

My areas of interest include:

• Queryable encyclopedias

• Machine representation of scientific theories

• Machine representation of conflicts between contending theories

• Social and technical structures to support group problem-solving activities

• Balkan music, Afro-Latin rhythms, and jazz guitar

To me, the most meaningful applications of science are to the myriad of problems that beset the human race. So the Azimuth Project is a good focal point for me.

And on Azimuth, he’s starting to write some articles on ‘programming with Petri nets’. We’ve talked about them a lot in the network theory series:

They’re a very general modelling tool in chemistry, biology and computer science, precisely the sort of tool we need for a deep understanding of the complex systems that keep our living planet going—though, let’s be perfectly clear about this, just one of many such tools, and one of the simplest. But as mathematical physicists, Jacob Biamonte and I have studied Petri nets in a highly theoretical way, somewhat neglecting the all-important problem of how you write programs that simulate Petri nets!

Such programs are commercially available, but it’s good to see how to write them yourself, and that’s what David Tanzer will tell us. He’ll use the language Python to write these programs in a nice modern object-oriented way. So, if you like coding, this is where the rubber meets the road.
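I can’t resist a tiny preview. Here is a bare-bones sketch of that object-oriented flavor. It’s my own toy, with invented class and method names, not David’s code; his articles will do this properly.

import random

class Transition:
    def __init__(self, name, inputs, outputs):
        self.name = name        # label for this transition
        self.inputs = inputs    # dict: place -> tokens consumed
        self.outputs = outputs  # dict: place -> tokens produced

    def enabled(self, marking):
        return all(marking.get(p, 0) >= n for p, n in self.inputs.items())

    def fire(self, marking):
        for p, n in self.inputs.items():
            marking[p] -= n
        for p, n in self.outputs.items():
            marking[p] = marking.get(p, 0) + n

class PetriNet:
    def __init__(self, marking, transitions):
        self.marking = marking            # dict: place -> current token count
        self.transitions = transitions

    def step(self):
        # fire one randomly chosen enabled transition, if any
        enabled = [t for t in self.transitions if t.enabled(self.marking)]
        if not enabled:
            return False
        random.choice(enabled).fire(self.marking)
        return True

net = PetriNet({'H2': 4, 'O2': 2, 'H2O': 0},
               [Transition('burn', {'H2': 2, 'O2': 1}, {'H2O': 2})])
while net.step():
    pass
print(net.marking)   # all the hydrogen and oxygen ends up as water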

I’m no expert on programming, but it seems the modularity of Python code nicely matches the modularity of Petri nets. This is something I’d like to get into more deeply someday, in my own effete theoretical way. I think the category-theoretic foundations of computer languages like Python are worth understanding, perhaps more interesting in fact than purely functional languages like Haskell, which are better understood. And I think they’ll turn out to be nicely related to the category-theoretic foundations of Petri nets and other networks I’m going to tell you about!

And I believe this will be important if we want to develop ‘ecotechnology’, where our machines and even our programming methodologies borrow ingenuity and wisdom from biological processes… and learn to blend with nature instead of fighting it.

Petri nets, systems biology, and beyond

Another new member of the Azimuth Project is Ken Webb. He has a BA in Cognitive Science from Carleton University in Ottawa, and an MSc in Evolutionary and Adaptive Systems from The University of Sussex in Brighton. Since then he’s worked for many years as a software developer and consultant, using many different languages and approaches.

He writes:

Things that I’m interested in include:

• networks of all types, hierarchical organization of network nodes, and practical applications

• climate change, and “saving the planet”

• programming code that anyone can run in their browser, and that anyone can edit and extend in their browser

• approaches to software development that allow independently-developed apps to work together

• the relationship between computer-science object-oriented (OO) concepts and math concepts

• how everything is connected

I’ve been paying attention to the Azimuth Project because it parallels my own interests, but with a more math focus (math is not one of my strong points). As learning exercises, I’ve reimplemented a few of the applications mentioned on Azimuth pages. Some of my online workbooks (blog-like entries that are my way of taking active notes) were based on content at the Azimuth Project.

He’s started building a Petri net modeling and simulation tool called Xholon. It’s written in Java and can be run online using Java Web Start (JNLP). Using this tool you can completely specify Petri net models using XML. You can see more details, and examples, on his Azimuth page. If I were smarter, or had more spare time, I would have already figured out how to include examples that actually run in an interactive way in blog articles here! But more on that later.

Soon I hope Ken will finish a blog entry in which he discusses how Petri nets fit into a bigger setup that can also describe ‘containers’, where molecules are held in ‘membranes’ and these membranes can allow chosen molecules through, and also split or merge—more like biology than inorganic chemistry. His outline is very ambitious:

This tutorial works through one simple example to demonstrate the commonality/continuity between a large number of different ways that people use to understand the structure and behavior of the world around us. These include chemical reaction networks, Petri nets, differential equations, agent-based modeling, mind maps, membrane computing, Unified Modeling Language, Systems Biology Markup Language, and Systems Biology Graphical Notation. The intended audience includes scientists, engineers, programmers, and other technically literate nonexperts. No math knowledge is required.


The Azimuth Server

With help from Glyn Adgie and Allan Erskine, Jim Stuttard has been setting up a server for Azimuth. All these folks are programmers, and Jim Stuttard, in particular, was a systems consultant and software applications programmer in C, C++ and Java until 2001. But he’s really interested in formal methods, and now he programs in Haskell.

I won’t say anything about the Azimuth server, since I’ll get it wrong, it’s not quite ready yet, and Jim wisely prefers to get it working a bit more before he talks about it. But you can get a feeling for what’s coming by going here.

How to find out more

You can follow what we’re doing by visiting the Azimuth Forum. Most of our conversations there are open to the world, but some can only be seen if you become a member. This is easy to do, except for one little thing.

Nobody, nobody, seems capable of reading the directions where I say, in boldface for easy visibility:

Use your whole real name as username. Spaces and capital letters are good. So, for example, a username like ‘Tim van Beek’ is good, ‘timvanbeek’ not so good, and ‘Tim’ or ‘tvb’ won’t be allowed.

The main point is that we want people involved with the Azimuth Project to have clear identities. The second, more minor point is that our software is not braindead, so you can choose a username that’s your actual name, like

Tim van Beek

instead of having to choose something silly like

timvanbeek

or

tim_van_beek

But never mind me: I’m just a crotchety old curmudgeon. Come join the fun and help us save the planet by developing software that explains climate science, biology, and ecology—and, just maybe, speeds up the development of green mathematics and ecotechnology!


Carbon Cycle Box Models

24 July, 2012

guest post by Staffan Liljegren

What?

I think the carbon cycle must be the greatest natural invention, all things considered. It’s been the basis for all organic life on Earth through eons of time. Through evolution, it gradually creates more and more biodiversity. It is important to do more research on the carbon cycle for the earth sciences, biology and in particular global warming—or more generally, climate science and environmental science, which are among the foci of the Azimuth Project.

It is a beautiful and complex nonlinear geochemical cycle, so I decided to give a rough outline of its beauty and complexity. Plants consume water and carbon dioxide with help from the sun (photosynthesis), and while doing so they produce oxygen and sugar for others to metabolize. These plants may in turn be eaten by plant-eating animals (herbivores), and animals may be eaten by other animals, like us humans, that eat meat alone or both meat and plants (carnivores or omnivores).

Here is an overview of the cycle, where yellow arrows show release of carbon dioxide and purple arrows show uptake:

carbon cycle

Say a plant gets eaten by an animal on land. Then the animal can use its carbon while breathing in air and breathing out water and carbon dioxide. Ruminant animals like cows and sheep also produce methane, which is a greenhouse gas like carbon dioxide. When a plant or animal dies it gets eaten by others, and any remains go down into the soil and sediments. A lot of the carbon in the sediments actually transforms into carbonate rock. This happens over millions of years. Some of this carbon makes it back into the air later through volcanoes.

Where?

Carbon is not a very abundant element on this planet: it’s only 0.08% of the total mass of the Earth. Nonetheless, we all know that compounds of this element are found throughout nature: for example in diamonds, marble, oil… and living organisms. If you remember your high school chemistry, you might recall that the organic chemistry experiments were the fun part! The reason is that carbon easily forms compounds with other elements. So there is a tremendous global market that depends on the carbon cycle.

We humans are one fifth carbon. Another example is trees, which we use for many things in our economic growth. But there are also fascinating flows inside trees. I’ve read about these in Colin Tudge’s book The Secret Life of Trees: How They Live and Why They Matter, so I will use this book for examples about forests and trees. You may already be familiar with these, but perhaps not know many details about their part in the carbon cycle.


When I stood in front of a tall monkey-puzzle tree in the genus Araucaria, I was flabbergasted by its age, and by how widespread these trees were back when the dinosaurs were around. But how does it manage to get water to its leaves? Colin Tudge writes that during evolution, trees invented the use of stem cells to grow their new outer layer, and developed microtechnology long before we existed as a species: through osmosis and transpiration, the leaves pull water up from the roots through micron-sized channels in the trunk, typically at around 6 meters per hour. But if needed, they can crank it up to 40 meters per hour and get water to the top in an hour or two!

Why?

Global warming is a fact, and several remote sensing technologies have confirmed this. You can see it nicely by clicking on this—you should see a NASA animation of satellite measurements from 2002 to 2009, superposed on top of Keeling’s famous graph of CO2 measured at Mauna Loa. Here’s more of that graph:

Many of the greenhouse gases that contribute to increasing temperature contain carbon: carbon dioxide, methane and carbon monoxide. I will focus on carbon dioxide. Its behavior is vastly different in air and in water. In air it doesn’t react much with other chemicals, so it stays around longer in the atmosphere. In the ocean and on land, carbon dioxide reacts a lot more, so there’s an uptake of carbon in both. In the ocean, though, the carbon stays far longer, mainly due to ocean buffering. I will have a lot more to say about ocean geochemistry in upcoming blog posts.

The carbon dioxide levels in the atmosphere, as of 2011, are approaching 400 parts per million (ppm), and the annual growth is itself increasing. The parts per million are measured relative to the volume of the atmosphere. David Archer says that if all the carbon dioxide were to fall as frozen carbon dioxide—‘dry ice’—it would form a layer just around 10 centimeters deep. But the important thing to understand is that we have thrown the carbon cycle seriously out of balance with our emissions, so we might be close to some climate tipping points.

Colin Tudge, my fellow ‘tree-hugger’, has looked at global warming and its implications for trees. Intuitively it might seem that warmer temperatures and higher levels of CO2 would be beneficial for their growth. Indeed, the climate predictions of the Intergovernmental Panel on Climate Change assume this will happen. But there is a point where the micro-channels (stomata) start to close, due to too much photosynthesis and carbon dioxide. Taken together with higher temperatures, this can make a tree’s respiration outpace its photosynthesis, so trees end up supplying more carbon dioxide to the atmosphere.

Trees are also excellent at preventing floods, since one tree can divert 500 litres of water per day through transpiration. This easily adds up to 5000 cubic metres per square kilometre, making trees very good at reducing floods and reducing our need for disaster prevention, if they are left alone to do their job.

How?

One way of understanding how the carbon cycle works is to use simple models called box models, where we treat the carbon as contained in various ‘boxes’ and look at how it moves between boxes as time passes. A box can represent the Earth, the ocean, the atmosphere or, depending on what I want to study, any other part of the carbon cycle.

I’ll just mention a few examples of flows in the carbon cycle, to give you a feeling for them: breathing, photosynthesis, erosion, emission and decay. Breathing is easy to grasp—try to stop doing it yourself for a short moment! But how is photosynthesis a flow? This wonderful process was invented by the cyanobacteria 3.5 billion years ago and it has been used by plants ever since. It takes carbon out of the atmosphere and moves it into plant tissues.

The rest of the flows in my list I leave up to you to think about: which are uptakes, which are releases, and where do they occur? In a box model, the average time something stays in a box is called its residence time, e-folding time, or response time by scientists.
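For a box in steady state there’s a simple formula for this: if the box holds a mass m of the substance and the total outflow is F, the residence time is

\displaystyle{ \tau = \frac{m}{F} }

For instance, with rough illustrative numbers (my ballpark figures, not results from this model): an atmospheric stock of about 750 GtC, with a gross outflow of roughly 200 GtC per year, gives a residence time of a few years for a typical carbon atom.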

The basic equation in a box model is called the mass balance equation:

\displaystyle{ \dot{m} = \sum \textrm{sources} - \sum \textrm{sinks} }

Here m is the mass of some substance in some box. The sources are what flows into that box together with any internal sources (production). The sinks are what flows out together with any internal sinks (loss and deposition).

In my initial experiments I looked at a one-box global model of CO2 in the atmosphere for the year 2008, with fossil fuel burning as the only source. I get results similar to this diagram from the Global Carbon Project (in petagrams of carbon per year, which is the same as gigatonnes of carbon per year):

global carbon budget 2000 - 2010

I used the observed value from measurements at Mauna Loa. The atmospheric sink is 3.9 gigatonnes of carbon per year and the fossil fuel emission source is 8.7 GtC per year. The ocean absorbs 2.1 GtC per year, and the land acts as a sink of 2.5 GtC per year.
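Here is a minimal sketch of that mass balance in Python. The fluxes are the numbers quoted above, held constant over the run; that’s my simplifying assumption, not part of Staffan’s model. The small mismatch with the quoted 3.9 GtC per year of atmospheric accumulation comes from fluxes (such as land-use change) that this toy version leaves out.

fossil_fuel = 8.7     # source: GtC per year into the atmosphere box
ocean_uptake = 2.1    # sink: GtC per year absorbed by the ocean
land_uptake = 2.5     # sink: GtC per year absorbed by the land

m = 0.0               # extra carbon accumulated in the atmosphere, GtC
dt = 0.1              # time step in years

for step in range(int(10 / dt)):   # integrate the mass balance for 10 years
    dm_dt = fossil_fuel - (ocean_uptake + land_uptake)   # sources minus sinks
    m += dm_dt * dt

print('carbon added to the atmosphere after 10 years: %.1f GtC' % m)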

I hope this will be the first of a series of posts! Next time I want to talk about a box model for the ocean’s role in the carbon cycle.

References

• Colin Tudge, The Secret Life of Trees: How They Live and Why They Matter, Penguin, London, 2005.

• David Archer, The Global Carbon Cycle, Princeton U. Press, Princeton, NJ, 2011.


Disease-Spreading Zombies

20 July, 2012

Are you a disease-spreading zombie?

You may have read about the fungus that can infect an ant and turn it into a zombie, making it climb up the stem of a plant and hang onto it, then die and release spores from a stalk that grows out of its head.

But this isn’t the only parasite that controls the behavior of its host.

If you ever got sick, had diarrhea, and thought hard about why, you’ll understand what I mean. You were helping spread the disease… especially if you were poor and didn’t have a toilet. This is why improved sanitation actually reduces the virulence of some diseases: it’s no longer such a good strategy for bacteria to cause diarrhea, so they evolve away from it!

There are plenty of other examples. Lots of diseases make you sneeze or cough, spreading the germs to other people. The rabies virus drives dogs crazy and makes them want to bite. There’s a parasitic flatworm that makes ants want to climb to the top of a blade of grass, lock their jaws onto it and wait there until they get eaten by a sheep! But the protozoan Toxoplasma gondii is more mysterious.

It causes a disease called toxoplasmosis. You can get it from cats, you can get it from eating infected meat, and you can even inherit it from your mother.

Lots of people have it: somewhere between 1/3 and 1/2 of everyone in the world!

A while back, the Czech scientist Jaroslav Flegr did some experiments. He found that people who tested positive for this parasite have slower reaction times. But even more interestingly, he claims that men with the parasite are more introverted, suspicious, oblivious to other people’s opinions of them, and inclined to disregard rules… while infected women are more outgoing, trusting, image-conscious, and rule-abiding than uninfected women!

What could explain this?

The disease is carried by both cats and mice. Cats catch it by eating mice. The disease causes behavior changes in mice: they seem to become more anxious and run around more. This may increase their chance of getting eaten by a cat and passing on the disease. But we are genetically similar to mice… so we too may become more anxious when we’re infected with this disease. And men and women may act differently when they’re anxious.

It’s just a theory so far. Nonetheless, I won’t be surprised to hear there are parasites that affect our behavior in subtle ways. I don’t know if viruses or bacteria are sophisticated enough to trigger changes in behavior more subtle than diarrhea… but there are always lots of bacteria in your body, about 10 times as many as actual human cells. Many of these belong to unidentified species. And as long as they don’t cause obvious pathologies, doctors have had little reason to study them.

As for viruses, don’t forget that about 8% of your DNA is made of viruses that once copied themselves into your ancestors’ genome. They’re called endogenous retroviruses, and I find them very spooky and fascinating. Once they get embedded in our DNA, they can’t always get back out: a lot of them are defective, containing deletions or nonsense mutations. But some may still be able to get back out. And there are hints that some are implicated in certain kinds of cancer and autoimmune disease.

Even more intriguingly, a 2004 study reported that antibodies to endogenous retroviruses were more common in people with schizophrenia! And the cerebrospinal fluid of people who had recently developed schizophrenia contained levels of a key enzyme used by retroviruses, reverse transcriptase, four times higher than in control subjects.

So it’s possible—just possible—that some viruses, either free-living or built into our DNA, may change our behavior in subtle ways that increase their chance of spreading.

For more on Jaroslav Flegr’s research, read this fascinating article:

• Kathleen McAuliffe, How your cat is making you crazy, The Atlantic, March 2012.

Among other things you’ll read about the parasitologists Glenn McConkey and Joanne Webster, who have shown that Toxoplasma gondii has two genes that allow it to crank up production of the neurotransmitter dopamine in the host’s brain. It seems this makes rats feel pleasure when they smell a cat!

(Do you like cats? Hmm.)

Of course, in business and politics we see many examples of ‘parasites’ that hijack organizations and change these organizations’ behavior to benefit themselves. It’s not nice. But it’s natural.

So even if you aren’t a disease-spreading zombie, it’s quite possible you’re dealing with them on a regular basis.

