## Network Theory (Part 7)

guest post by Jacob Biamonte

This post is part of a series on what John and I like to call Petri net field theory. Stochastic Petri nets can be used to model everything from vending machines to chemical reactions. Chemists have proven some powerful theorems about when these systems have equilibrium states. We’re trying to fold these old ideas into our fancy framework, in hopes that quantum field theory techniques could also be useful in this deep subject. We’ll describe the general theory later; today we’ll do an example from population biology.

Those of you following this series should know that I’m the calculation bunny for this project, with John playing the role of the wolf. If I don’t work quickly, drawing diagrams and trying to keep up with John’s space-bending quasar of information, I’ll be eaten alive! It’s no joke, so please try to respond and pretend to enjoy anything you read here. This will keep me alive longer. If I hadn’t taken notes during our meetings, lots of this stuff would never have made it here, so I hope you enjoy it.

#### Amoeba reproduction and competition

Here’s a stochastic Petri net:

It shows a world with one state, amoeba, and two transitions:

reproduction, where one amoeba turns into two. Let’s call the rate constant for this transition $\alpha$.

competition, where two amoebas battle for resources and only one survives. Let’s call the rate constant for this transition $\beta$.

We are going to analyse this example in several ways. First we’ll study the deterministic dynamics it describes: we’ll look at its rate equation, which turns out to be the logistic equation, familiar in population biology. Then we’ll study the stochastic dynamics, meaning its master equation. That’s where the ideas from quantum field theory come in.

#### The rate equation

If $P(t)$ is the population of amoebas at time $t$, we can follow the rules explained in Part 3 and crank out this rate equation:

$\displaystyle{ \frac{d P}{d t} = \alpha P - \beta P^2}$

We can rewrite this as

$\displaystyle{\frac{d P }{d t}= k P(1-\frac{P}{Q}) }$

where

$\displaystyle{ Q = \frac{\alpha}{\beta} , \qquad k = \alpha}$

What’s the meaning of $Q$ and $k$?

$Q$ is the carrying capacity, that is, the maximum sustainable population the environment can support.

$k$ is the growth rate describing the approximately exponential growth of population when $P(t)$ is small.

It’s a rare treat to find such an important differential equation that can be solved by analytical methods. Let’s enjoy solving it.

We start by separating variables and integrating both sides:

$\displaystyle{\int \frac{d P}{P (1-P/Q)} = \int k d t}$

We need to use partial fractions on the left side above, resulting in

$\displaystyle{\int \frac{d P}{P}} + \displaystyle{\int \frac{d P}{Q-P} } = \displaystyle{\int k d t}$

Doing the integrals, we pick up a constant of integration $C$:

$\displaystyle{ \ln |P| - \ln |Q-P| = k t + C }$

Setting

$A= \pm e^{-C}$

with the sign chosen to absorb the absolute values, we can exponentiate and rearrange things as

$\displaystyle{\frac{Q-P}{P}=A e^{-k t} }$

so the population as a function of time becomes

$\displaystyle{ P(t) = \frac{Q}{1+A e^{-k t}}}$

Setting $t=0$, we can determine $A$ uniquely from the initial population. Writing $P_0 := P(0)$, we find

$\displaystyle{ A = \frac{Q-P_0}{P_0}}$
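If you don’t trust the algebra, here’s a quick numerical sanity check in Python (my own illustration, not part of the original derivation; the rate constants and initial population are arbitrary choices). It verifies that the closed-form $P(t)$ really satisfies the rate equation, using a centered finite difference:

```python
# Check that P(t) = Q / (1 + A e^{-k t}) solves dP/dt = alpha*P - beta*P^2.
# The rate constants and initial population P0 are arbitrary choices.
import math

alpha, beta = 2.0, 0.5
Q, k = alpha / beta, alpha      # carrying capacity Q = 4, growth rate k = 2
P0 = 1.0
A = (Q - P0) / P0

def P(t):
    return Q / (1.0 + A * math.exp(-k * t))

# Compare a centered finite-difference derivative against the rate equation.
h = 1e-6
for t in [0.0, 0.5, 1.0, 2.0, 5.0]:
    lhs = (P(t + h) - P(t - h)) / (2 * h)
    rhs = alpha * P(t) - beta * P(t) ** 2
    assert abs(lhs - rhs) < 1e-4, (t, lhs, rhs)

print(P(0.0), P(100.0))         # starts at P0 = 1 and approaches Q = 4
```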

The model now becomes very intuitive. Let’s set $Q = k=1$ and make a plot for various values of $A$:

We arrive at three distinct cases:

equilibrium ($A=0$). The horizontal blue line corresponds to the case where the initial population $P_0$ exactly equals the carrying capacity. In this case the population is constant.

dieoff ($A < 0$). The three decaying curves above the horizontal blue line correspond to cases where initial population is higher than the carrying capacity. The population dies off over time and approaches the carrying capacity.

growth ($A > 0$). The four increasing curves below the horizontal blue line represent cases where the initial population is lower than the carrying capacity. Now the population grows over time and approaches the carrying capacity.

#### The master equation

Next, let us follow the rules explained in Part 6 to write down the master equation for our example. Remember, now we write:

$\displaystyle{\Psi(t) = \sum_{n = 0}^\infty \psi_n(t) z^n }$

where $\psi_n(t)$ is the probability of having $n$ amoebas at time $t$, and $z$ is a formal variable. The master equation says

$\displaystyle{\frac{d}{d t} \Psi(t) = H \Psi(t)}$

where $H$ is an operator on formal power series called the Hamiltonian. To get the Hamiltonian we take each transition in our Petri net and build an operator from creation and annihilation operators, as follows. Reproduction works like this:

while competition works like this:

Here $a$ is the annihilation operator, $a^\dagger$ is the creation operator and $N = a^\dagger a$ is the number operator. Last time John explained precisely how the $N$‘s arise. So the theory is already in place, and we arrive at this Hamiltonian:

$H = \alpha (a^\dagger a^\dagger a - N) \;\; + \;\; \beta(a^\dagger a a - N(N-1))$

Remember, $\alpha$ is the rate constant for reproduction, while $\beta$ is the rate constant for competition.

The master equation can be solved: it’s equivalent to

$\frac{d}{d t}(e^{-t H}\Psi(t))=0$

so that $e^{-t H}\Psi(t)$ is constant, and so

$\Psi(t) = e^{t H}\Psi(0)$

and that’s it! We can calculate the time evolution starting from any initial probability distribution of populations. Maybe everyone is already used to this, but I find it rather remarkable.
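To make this concrete, here’s a little Python sketch (my own, not from the theory above): we truncate the state space at a maximum population $n_{\max}$, write $H$ as a matrix in the $z^n$ basis, and apply the matrix exponential. The cutoff, rate constants and initial state are arbitrary choices, and truncation is an approximation that is harmless as long as the probability of reaching $n_{\max}$ stays negligible.

```python
# A truncated matrix version of the master equation (illustrative only).
# States are populations 0..nmax; H is built from the Petri net rules:
#   reproduction: n -> n+1 at rate alpha * n
#   competition:  n -> n-1 at rate beta * n * (n-1)
import numpy as np
from scipy.linalg import expm

nmax = 40
alpha, beta = 2.0, 0.5

H = np.zeros((nmax + 1, nmax + 1))
for n in range(nmax + 1):
    if n + 1 <= nmax:               # reproduction (dropped at the cutoff,
        H[n + 1, n] += alpha * n    # so columns still sum to zero)
        H[n, n] -= alpha * n
    if n >= 1:                      # competition
        H[n - 1, n] += beta * n * (n - 1)
        H[n, n] -= beta * n * (n - 1)

psi0 = np.zeros(nmax + 1)
psi0[1] = 1.0                       # start with one amoeba: Psi(0) = z

psi = expm(10.0 * H) @ psi0         # Psi(10) = exp(10 H) Psi(0)
mean = np.arange(nmax + 1) @ psi

print(psi.sum(), mean)  # total probability stays 1; mean ends up near alpha/beta = 4
```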

Here’s how it works. We pick a population, say $n$ amoebas at $t=0.$ This would mean $\Psi(0) = z^n$. We then evolve this state using $e^{t H}$. We expand this operator as

$\begin{array}{ccl} e^{t H} &=&\displaystyle{ \sum_{m=0}^\infty \frac{t^m H^m}{m!} } \\ \\ &=& \displaystyle{ 1 + t H + \frac{1}{2}t^2 H^2 + \cdots }\end{array}$

This operator contains the full information for the evolution of the system. It contains the histories of all possible amoeba populations—an amoeba mosaic if you will. From this, we can construct amoeba Feynman diagrams.

To do this, we work out each of the $H^n$ terms in the expansion above. The first-order terms correspond to the Hamiltonian acting once. These are proportional to either $\alpha$ or $\beta$. The second-order terms correspond to the Hamiltonian acting twice. These are proportional to either $\alpha^2$, $\alpha\beta$ or $\beta^2$. And so on.

This is where things start to get interesting! To illustrate how it works, we will consider two possibilities for the second-order terms:

1) We start with a lone amoeba, so $\Psi(0) = z$. It reproduces and splits into two. In the battle of the century, the resulting amoebas compete and one dies. At the end we have:

$\frac{\alpha \beta t^2}{2} (a^\dagger a a)(a^\dagger a^\dagger a) z$

We can draw this as a Feynman diagram:

You might find this tale grim, and you may not like the odds either. It’s true, the odds could be better, but people are worse off than amoebas! The great Japanese swordsman Miyamoto Musashi quoted the survival odds of fair sword duels as 1/3, seeing that 1/3 of the time both participants die. A remedy is to cheat, but these amoebas are competing honestly.

2) We start with two amoebas, so the initial state is $\Psi(0) = z^2$. One of these amoebas splits into two. One of these then gets into an argument with the original amoeba over the Azimuth blog. The amoeba who solved all John’s puzzles survives. At the end we have

$\frac{\alpha \beta t^2}{2} (a^\dagger a a)(a^\dagger a^\dagger a) z^2$

with corresponding Feynman diagram:

This should give an idea of how this all works. The exponential of the Hamiltonian gives all possible histories, and each of these can be translated into a Feynman diagram. In a future blog entry, we might explain this theory in detail.

#### An equilibrium state

We’ve seen the equilibrium solution for the rate equation; now let’s look for equilibrium solutions of the master equation. This paper:

• D. F. Anderson, G. Craciun and T.G. Kurtz, Product-form stationary distributions for deficiency zero chemical reaction networks, arXiv:0803.3042.

proves that for a large class of stochastic Petri nets, there exists an equilibrium solution of the master equation where the number of things in each state is distributed according to a Poisson distribution. Even more remarkably, these probability distributions are independent, so knowing how many things are in one state tells you nothing about how many are in another!

Here’s a nice quote from this paper:

The surprising aspect of the deficiency zero theorem is that the assumptions of the theorem are completely related to the network of the system whereas the conclusions of the theorem are related to the dynamical properties of the system.

The ‘deficiency zero theorem’ is a result of Feinberg, which says that for a large class of stochastic Petri nets, the rate equation has a unique equilibrium solution. Anderson showed how to use this fact to get equilibrium solutions of the master equation!

We will consider this in future posts. For now, we need to talk a bit about ‘coherent states’.

These are all over the place in quantum theory. Legend (or at least Wikipedia) has it that Erwin Schrödinger himself discovered coherent states when he was looking for states of a quantum system that look ‘as classical as possible’. Suppose you have a quantum harmonic oscillator. Then the uncertainty principle says that

$\Delta p \Delta q \ge \hbar/2$

where $\Delta p$ is the uncertainty in the momentum and $\Delta q$ is the uncertainty in position. Suppose we want to make $\Delta p \Delta q$ as small as possible, and suppose we also want $\Delta p = \Delta q$. Then we need our particle to be in a ‘coherent state’. That’s the definition. For the quantum harmonic oscillator, there’s a way to write quantum states as formal power series

$\displaystyle{ \Psi = \sum_{n = 0}^\infty \psi_n z^n}$

where $\psi_n$ is the amplitude for having $n$ quanta of energy. A coherent state then looks like this:

$\displaystyle{ \Psi = e^{c z} = \sum_{n = 0}^\infty \frac{c^n}{n!} z^n}$

where $c$ can be any complex number. Here we have omitted a constant factor necessary to normalize the state.

We can also use coherent states in classical stochastic systems like collections of amoebas! Now the coefficient of $z^n$ tells us the probability of having $n$ amoebas, so $c$ had better be real. And probabilities should sum to 1, so we really should normalize $\Psi$ as follows:

$\displaystyle{ \Psi = \frac{e^{c z}}{e^c} = e^{-c} \sum_{n = 0}^\infty \frac{c^n}{n!} z^n }$

Now, the probability distribution

$\displaystyle{\psi_n = e^{-c} \; \frac{c^n}{n!}}$

is called a Poisson distribution. So, for starters you can think of a ‘coherent state’ as an over-educated way of talking about a Poisson distribution.

Let’s work out the expected number of amoebas in this Poisson distribution. In the answers to the puzzles in Part 6, we started using this abbreviation:

$\displaystyle{ \sum \Psi = \sum_{n = 0}^\infty \psi_n }$

We also saw that the expected number of amoebas in the probability distribution $\Psi$ is

$\displaystyle{ \sum N \Psi }$

What does this equal? Remember that $N = a^\dagger a$. The annihilation operator $a$ is just $\frac{d}{d z}$, so

$\displaystyle{ a \Psi = c \Psi}$

and we get

$\displaystyle{ \sum N \Psi = \sum a^\dagger a \Psi = c \sum a^\dagger \Psi }$

But we saw in Part 5 that $a^\dagger$ is stochastic, meaning

$\displaystyle{ \sum a^\dagger \Psi = \sum \Psi }$

for any $\Psi.$ Furthermore, our $\Psi$ here has

$\displaystyle{ \sum \Psi = 1}$

since it’s a probability distribution. So:

$\displaystyle{ \sum N \Psi = c \sum a^\dagger \Psi = c \sum \Psi = c}$

The expected number of amoebas is just $c$.
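Here’s a tiny numerical check of this (my own, with an arbitrary value of $c$ and an arbitrary cutoff on the sum): the normalized coefficients of $e^{c z}$ do sum to 1 and have mean $c$.

```python
# Check that the normalized coherent state e^{-c} * sum_n (c^n/n!) z^n is a
# probability distribution with mean c. The value of c and the cutoff nmax
# are arbitrary illustrative choices.
import math

c = 3.7
nmax = 100                      # the Poisson tail beyond this is negligible

psi, p = [], math.exp(-c)       # psi_n = e^{-c} c^n / n!, via a recurrence
for n in range(nmax + 1):
    psi.append(p)
    p *= c / (n + 1)

total = sum(psi)
mean = sum(n * q for n, q in enumerate(psi))
print(total, mean)              # total ~ 1.0, mean ~ c = 3.7
```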

Puzzle 1. This calculation must be wrong if $c$ is negative: there can’t be a negative number of amoebas. What goes wrong then?

Puzzle 2. Use the same tricks to calculate the standard deviation of the number of amoebas in the Poisson distribution $\Psi$.

Now let’s return to our problem and consider the initial amoeba state

$\displaystyle{ \Psi = e^{c z}}$

Here we aren’t bothering to normalize it, because we’re going to look for equilibrium solutions to the master equation, meaning solutions where $\Psi(t)$ doesn’t change with time. So, we want to solve

$\displaystyle{ H \Psi = 0}$

Since this equation is linear, the normalization of $\Psi$ doesn’t really matter.

Remember,

$\displaystyle{ H\Psi = \alpha (a^\dagger a^\dagger a - N)\Psi + \beta(a^\dagger a a - N(N-1)) \Psi }$

Let’s work this out. First consider the two $\alpha$ terms:

$\displaystyle{ a^\dagger a^\dagger a \Psi = c z^2 \Psi }$

and

$\displaystyle{ -N \Psi = -a^\dagger a\Psi = -c z \Psi}$

Likewise for the $\beta$ terms we find

$\displaystyle{ a^\dagger a a\Psi=c^2 z \Psi}$

and

$\displaystyle{ -N(N-1)\Psi = -a^\dagger a^\dagger a a \Psi = -c^2 z^2\Psi }$

Here I’m using something John showed in Part 6: the product $a^\dagger a^\dagger a a$ equals the ‘falling power’ $N(N-1)$.

The sum of all four terms must vanish. This happens whenever

$\displaystyle{ \alpha(c z^2 - c z)+\beta(c^2 z-c^2 z^2) = 0}$

which is satisfied for

$\displaystyle{ c= \frac{\alpha}{\beta}}$

Yippee! We’ve found an equilibrium solution, since we found a value for $c$ that makes $H \Psi = 0$. Even better, we’ve seen that the expected number of amoebas in this equilibrium state is

$\displaystyle{ \frac{\alpha}{\beta}}$

This is just the same as the equilibrium population we saw for the rate equation—that is, the carrying capacity for the logistic equation! That’s pretty cool, but it’s no coincidence: in fact, Anderson proved it works like this for lots of stochastic Petri nets.
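As a check on all this, we can redo the equilibrium computation numerically (again my own sketch: the rate constants and the truncation $n_{\max}$ are arbitrary choices, and truncating introduces only a negligible boundary error for these values):

```python
# Numerical check that the coherent state with c = alpha/beta satisfies
# H Psi = 0, using a truncated matrix form of H in the z^n basis.
import math
import numpy as np

nmax = 40
alpha, beta = 2.0, 0.5
c = alpha / beta                # candidate equilibrium parameter

H = np.zeros((nmax + 1, nmax + 1))
for n in range(nmax + 1):
    if n + 1 <= nmax:           # reproduction: n -> n+1 at rate alpha*n
        H[n + 1, n] += alpha * n
        H[n, n] -= alpha * n
    if n >= 1:                  # competition: n -> n-1 at rate beta*n*(n-1)
        H[n - 1, n] += beta * n * (n - 1)
        H[n, n] -= beta * n * (n - 1)

# Poisson / coherent state coefficients psi_n = e^{-c} c^n / n!,
# built by the recurrence psi_{n+1} = psi_n * c/(n+1) to avoid overflow.
psi = np.empty(nmax + 1)
psi[0] = math.exp(-c)
for n in range(nmax):
    psi[n + 1] = psi[n] * c / (n + 1)

residual = np.abs(H @ psi).max()
print(residual)                 # essentially zero: H annihilates this state
```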

I’m not sure what’s up next or what’s in store, since I’m blogging at gun point from inside a rabbit cage:

I’d imagine we’re going to work out the theory behind this example and prove the existence of equilibrium solutions for master equations more generally. One idea John had was to have me start a night shift—that way you’ll get Azimuth posts 24 hours a day.

### 37 Responses to Network Theory (Part 7)

1. Blake Stacey says:

You’re missing a “latex” on a $\Psi$ and a $\beta$. I’ll try to check the rest of the ciphering more carefully when I don’t have to be going to bed so I can get up in time for a meeting. . . nice post, though!

• John Baez says:

Fixed!

• Mike Stay says:

As long as you’re fixing up LaTeX, it would be nice if you tossed in some \displaystyle directives to make the fractions like e^{cz}/e^c a legible size and get the limits on the sums in the right place.

2. Now the coefficient of $z^n$ tells us the probability of having $n$ amoebas, so $c$ had better be real.

$c$ had better be non-negative then, preventing the situation described in puzzle 1.

3. Puzzle 2: Variance = $\sum N^2 \Psi - (\sum N \Psi)^2$.

$\begin{array}{ccl} \sum N^2 \Psi &=& \sum a^{\dagger} a a^{\dagger} a \Psi \\ &=& c \sum a^{\dagger} a a^{\dagger} \Psi \\ &=& c \sum a a^{\dagger} \Psi \\ &=& c \sum (a^{\dagger} a + 1) \Psi \\ &=& c(c + 1)\\ \end{array}$

Variance = $c$.
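A quick numerical check of this (with an arbitrary choice of $c$ and an arbitrary cutoff on the sums):

```python
# Numerical check that the Poisson distribution psi_n = e^{-c} c^n / n!
# has variance c. The value of c and the cutoff nmax are arbitrary choices.
import math

c = 2.5
nmax = 80                       # tail beyond this is negligible

psi, p = [], math.exp(-c)
for n in range(nmax + 1):
    psi.append(p)
    p *= c / (n + 1)

mean = sum(n * q for n, q in enumerate(psi))
second = sum(n * n * q for n, q in enumerate(psi))
variance = second - mean ** 2
print(mean, variance)           # both come out equal to c = 2.5
```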

• John Baez says:

Great! I hadn’t actually gotten up the nerve to do this one! I just knew it had to be doable.

4. By the way, the rate equation (first equation of the post) is not displaying:

$latex\displaystyle{ \frac{d P}{d t} = \alpha P – \beta P^2}$

Looks like the space after ‘latex’ is missing.

• John Baez says:

Thanks, fixed! I screwed it up when I inserted \displaystyle according to Mike’s suggestion.

5. Pietro says:

Great post, as usual! I am enjoying this introduction to network theory very much.

An almost certainly very stupid question – sorry, it’s been ages since I touched a differential equation: how do you get that

$\frac{d}{dt}(e^{-tH} \Psi(t)) = 0$

from the master equation? I can see how $e^{tH} \Psi(0)$ is a solution for it – it follows at once from $\frac{d e^{tH}}{dt} = H(e^{tH})$ – but I am getting the feeling that I am missing something obvious but important here…

Thanks!

• Hi. Starting with

$\frac{d}{d t} \Psi = H \Psi$

or

$\frac{d}{d t}\Psi - H \Psi = 0$

We will pull one of John’s rabbits out and multiply this by $e^{-t H}$ and note that $H$ commutes with any power series in $H$ (which is the case of interest here from the expansion of $e^{-t H}$). So

$e^{-t H}\frac{d \Psi}{d t} - H e^{-t H}\Psi = 0$

and this is actually

$\frac{d}{d t}(e^{-t H}\Psi) = 0$

So it’s consistent, at the price of one rabbit. The other method is to tinker with

$e^{t H}\frac{d}{d t}(e^{-t H}\Psi)$

and do the same thing, backwards.

Cheers!

• Pietro says:

Oh, I see – thanks a lot!

• Eric says:

Because $\Psi(0)$ is a constant.

6. Web Hub Tel says:

I looked at the referenced paper by Anderson, Craciun and Kurtz, and they have a section on the multiscale nature of reaction networks. I think this is the important pull-quote “Within a cell, some chemical species may be present in much greater abundance than others. In addition, the rate constants k may vary over several orders of magnitude.” That is disorder in a nutshell and it plays a huge role in many practical network applications, including the ubiquitous TCP/IP networks.

So for the classic Logistic equation derivation, note that the rate constant k is set as a constant. But in a multiscale network, the k can vary widely, and so getting a Logistic sigmoid as a result would be unlikely based on the classical derivation. In other words, what combination of functions could generate a stable Logistic sigmoid? (I mean stable in the sense that the Gaussian, Cauchy, and Levy distributions are considered stable.)

Until we realize that the Logistic equation can be derived not only from the obvious but “brain-dead” separation of variables technique, but also from a truly stochastic approach using ideas such as the Maximum Entropy Principle, we are spinning our wheels. By that I mean that we live in a deterministic world when we try to solve the equations, but in many cases the only practical results come from a stochastic framing.

That’s why I think this “Green Math” work is so incredibly vital. I don’t have the high-powered math skills that you guys have, but being an engineer I hope I can pick out useful ideas to apply to renewable energy strategies. Many of the significant renewable energy sources and practical devices are maximally disordered, including wind, PV cells, geothermal, etc.

7. Ginger Greenfield says:

Hi Jake.

Have the quantum field theory techniques been useful here, with Azimuth?

Ginger

• Hi Ginger,

The techniques appearing in quantum theory are very well developed, so our hope here is two-fold. (i) Cast these methods into new domains in hopes that we could leverage these tools to attack existing problems and (ii) Conversely, build a bridge so these important problems in network science and ecology (etc) can be addressed by others working in quantum theory!

8. […] We have already seen one example of this theorem, back in Part 7. […]

9. Jens says:

I don’t understand how the perturbation approach works. If we set $\alpha = \beta = 1$ and $\Psi(0) = z^2$ then the first order term generates a negative $\psi_2$ for $t>1/4$. The second order term gets a negative $\psi_3$ for $t>2/13$. Is this supposed to get fixed at higher order?

• John Baez says:

I haven’t thought about this particular calculation for a long time, but yes, you have to go to high enough order to get a decent answer for large $t$… and the power series may not actually converge for large $t,$ so you may need to use resummation methods like Borel summation.

In practice, this perturbation theory works best for small $t.$ There are other methods that work better for large $t.$

• Jens says:

I’ve tried for a few $\alpha$’s and $\beta$’s up to fourth order. Generally it seems the region in $t$ for which I have strictly positive coefficients goes down with higher order! That is, the higher-order approximations become invalid (or less accurate, meaningless, or whatever is actually going on) earlier than the lower orders. Do you have an example where this method is actually used? (I didn’t find any in the book and I’m afraid I might have miscalculated something.)

• John Baez says:

I don’t have a concrete calculation to point you to. If you can show me some calculations I could see if they look right.

10. Tobias Fritz says:

Recently I’ve been thinking a bit about this stuff and come across a vexing property of the amoeba system: if you introduce a very large number $M$ as the maximal population size by simply not letting the population grow above that threshold, then the only equilibrium state is the one where all amoebas are dead. In other words: in order to obtain the Poisson distribution as an equilibrium, you *need* to have an infinite state space and allow the population to grow arbitrarily large.

How does this come about? First, note that there’s another solution of your equilibrium equation

$\alpha (c z^2 - c z) + \beta (c^2 z - c^2 z^2) = 0$

that you’ve swept under the rug, namely $c = 0$! The corresponding “coherent” state has probability 1 at a population size of 0. This is the state in which all amoebas are dead with certainty. It is obviously an equilibrium state!

Moreover, for any population that evolves for a time $\delta t$, there is a positive probability for the population to end up in this state by dying out completely, and then no new amoebas will ever be born again. In the terminology of Markov chains, all amoebas being dead is an absorbing state. So one may be tempted to conclude that, if one only waits long enough, then any initial population of amoebas will die out at some point! In particular, there should not be any equilibrium solution other than amoeba extinction: any finite absorbing Markov chain like this ends up in the absorbing state with probability 1.

So what’s going on here? How is it possible that there is an equilibrium distribution other than total extinction? I think the answer is that the actual Markov chain that describes the amoeba population can be arbitrarily large: for a Markov chain with infinite state space, the theorem that an absorbing Markov chain ends up in the absorbing state with probability 1 does not hold anymore.

Now here’s the question: does this mean that any initial population has probability 1 to either die out completely or grow indefinitely? (By “grow indefinitely”, I mean that the lim sup of population size is infinite. So I would allow the population to bounce back to small sizes before shooting up again, even infinitely many times.)

• Graham Jones says:

“does this mean that any initial population has probability 1 to either die out completely or grow indefinitely?”

When the population is 1, competition is impossible, so it can never reach 0.

• Tobias Fritz says:

Graham wrote:

When the population is 1, competition is impossible, so it can never reach 0.

Whoops, thanks for pointing that out! But if you replace “population is 0” by “population is 1” everywhere, then my comment is still valid.

• Graham Jones says:

But population=1 is not an equilibrium. If there is one amoeba, there will be two sometime later.

• Graham Jones says:

Erm, replace ‘equilibrium’ with ‘absorbing state’. It is possible – John even explicitly mentioned it – that the carrying capacity is 1.

• Tobias Fritz says:

True… I have to admit that I have secretly been thinking in terms of a different model, namely one in which competition is replaced by a “death rate” per individual which is proportional to current population size. This results in a slightly different kind of Hamiltonian with which the population can die out completely. (It probably won’t arise from a Petri net, though.)

In any case, I understand now that infinite absorbing Markov chains can behave quite differently from finite absorbing ones: the latter can’t have non-trivial equilibria, while the former can! Introducing a finite population cap and then letting it tend to infinity will not detect such an equilibrium.

(In order to find it, I guess that one will have to look at the second largest eigenvalue of the transition matrix and see whether it converges to 1 as the population cap tends to infinity.)

• Tobias Fritz says:

The model with only the two transitions $A \to 0$ and $0 \to A$ looks like a good toy model. But in order for your statements here about the rate constants to make sense, I assume that you were meaning to write down these two transitions:

$A \to A + A$

$A \to 0$

This is still not what I had in mind with “death rate per individual proportional to population size”, since the probability of an individual to die is a constant rather than proportional to population size. But it’s an interesting model to consider, due to the phenomenon that no new individuals can be born once the whole population has died out. If the reaction rates are $\alpha$ and $\beta$, then the rate equation looks like this:

$\displaystyle{ \frac{dP}{dt} = \alpha P - \beta P}$

Since the deficiency is $3 - 1 - 1 = 1,$ the deficiency zero theorem doesn’t apply here, and so we have to analyze the equilibria by hand. As you pointed out, we need $\alpha = \beta$ in order for an equilibrium to exist, so let’s assume this and put $\alpha = \beta = 1$ for the sake of simplicity. Then any constant $P$ is an equilibrium solution of the rate equation — but only $P=0$ is complex balanced, as you can see by noting that the complex $A+A$ gets produced in the case $P>0$, but never gets destroyed, thereby violating complex balance.

The master equation looks like this:

$\frac{d}{dt}\Psi = (a^\dag a^\dag a - N + a - N)\Psi$

I’m not quite sure what to do with this. The Anderson-Craciun-Kurtz theorem won’t tell us much, since we only have a trivial complex balanced equilibrium of the rate equation. Your sentiment that eventual extinction will occur with probability 1 sounds right to me; do you have any idea of how to prove it? It seems related to the recurrence of a one-dimensional random walk.

Side question: the Hamiltonians that appear in these master equations vaguely resemble Laplace operators on Cayley graphs, with respect to which the equilibrium distributions are harmonic functions. Is there a connection here?

By the way, some friends and I are currently studying mating systems, and we will hopefully be able to put this stuff to good use! I’m concerned about the possibility of the population dropping below 2, which for us implies guaranteed extinction. I wonder whether this means that we can’t expect our master equations to have any non-trivial equilibrium solutions.

• John Baez says:

The order of the comments here has gotten screwed up in a way I seem unable to fix. You have to look at the times comments were posted to read them in the right order! So, I’ve added links to help people navigate the conversations.

Over here, Tobias wrote:

But in order for your statements here about the rate constants to make sense, I assume that you were meaning to write down these two transitions:

$A \to A + A$

$A \to 0$

Right.

This is still not what I had in mind with “death rate per individual proportional to population size.”

Right. I didn’t see the “per individual” part.

But it’s an interesting model to consider, due to the phenomenon that no new individuals can be born once the whole population has died out. If the reaction rates are $\alpha$ and $\beta$, then the rate equation looks like this:

$\displaystyle{ \frac{dP}{dt} = \alpha P - \beta P}$

Since the deficiency is $3 - 1 - 1 = 1,$ the deficiency zero theorem doesn’t apply here, and so we have to analyze the equilibria by hand.

Right. That seems pretty easy in this case. But if you ever get a harder example with deficiency 1 or more, you might look at the ‘deficiency one theorem’. It’s in here:

• Martin Feinberg, Chemical reaction network structure and the stability of complex isothermal reactors: I. The deficiency zero and deficiency one theorems, Chemical Engineering Science 42 (1987), 2229–2268.

and probably also somewhere in these great free online notes:

• Martin Feinberg, Lectures On Reaction Networks, 1979.

The deficiency one theorem is more complicated than the deficiency zero theorem, with weaker implications, but still powerful. Despite its name it applies to some cases of deficiency higher than one, too!

As you pointed out, we need $\alpha = \beta$ in order for an equilibrium to exist, so let’s assume this and put $\alpha = \beta = 1$ for the sake of simplicity. Then any constant $P$ is an equilibrium solution of the rate equation — but only $P=0$ is complex balanced, as you can see by noting that the complex $A+A$ gets produced in the case $P>0$, but never gets destroyed, thereby violating complex balance.

Right.

The master equation looks like this:

$\frac{d}{dt}\Psi = (a^\dag a^\dag a - N + a - N)\Psi$

I’m not quite sure what to do with this. The Anderson–Craciun–Kurtz theorem won’t tell us much, since we only have a trivial complex balanced equilibrium of the rate equation.

Right, that theorem is useless here.

Your sentiment that eventual extinction will occur with probability 1 sounds right to me; do you have any idea of how to prove it?

I don’t know the tricks for solving this sort of problem, but some people do. There’s a whole industry of showing that random walks of various kinds will almost surely hit some set.

Side question: the Hamiltonians that appear in these master equations vaguely resemble Laplace operators on Cayley graphs, with respect to which the equilibrium distributions are harmonic functions. Is there a connection here?

Continuous-time Markov processes where the Hamiltonian is a Dirichlet operator—both infinitesimal stochastic and self-adjoint—are basically generalizations of the heat equation. For example, when there are finitely many states, a Dirichlet operator is the same as the Laplace operator associated to some weighted graph. This is Problem 39 in Section 22.1 of the book Jacob and I are writing.

But now you are talking about similar things for a situation with infinitely many states… and in some examples, for Hamiltonians that are not self-adjoint.

I’m concerned about the possibility of the population dropping below 2, which for us implies guaranteed extinction. I wonder whether this means that we can’t expect our master equations to have any non-trivial equilibrium solutions.

It sounds like in this situation you should be able to prove there aren’t any nontrivial equilibrium solutions, at least if a few reasonable conditions hold. I’d try to argue something like this. Suppose that when the population goes to 1 it can never go back to 2, but has a nonzero probability per time of going to 0. Then in any equilibrium the probability of having population 1 must be zero (since otherwise the probability of having population 0 would grow).

But if there’s a nonzero probability per time of the population going from 2 to 1, this implies that in any equilibrium the probability of having population 2 must be zero (since otherwise the probability of having population 1 would grow).

And so on.

• Tobias Fritz says:

Thanks a lot for that extensive answer!

Concerning the model with transitions $A \to A + A$ and $A \to 0$ occurring at equal rates, I had asked:

Your sentiment that eventual extinction will occur with probability 1 sounds right to me; do you have any idea of how to prove it?

I think I know how to do this now. For any population size $P$, consider the next population size that you get as soon as a fission or death event happens. Since the rates are equal, this next population size is either $P-1$ or $P+1$ with equal probability. In this way, we obtain an old friend, namely the ordinary random walk on the integers, modulo the caveat that the walk never leaves the origin once it has arrived there. Due to the recurrence of this random walk, we end up at the origin eventually with probability one, no matter where we start. In other words, the above model ends up at extinction with probability one. If you like theatrical analogies, think of this as a mathematical proof showing that doomsday will strike us some day!
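Here’s a quick simulation of this embedded random walk (my own sketch; the number of trials and the step cap are arbitrary, and with a finite step cap we can only watch the extinction fraction creep toward 1):

```python
# Simulate the embedded jump chain of the model A -> A+A, A -> 0 with equal
# rates: at each event the population moves up or down by 1 with probability
# 1/2, and 0 is absorbing. Trial count and step cap are arbitrary choices.
import random

random.seed(0)

def extinct_within(start, max_steps):
    n = start
    for _ in range(max_steps):
        if n == 0:              # absorbed: extinction
            return True
        n += random.choice((-1, 1))
    return n == 0

trials = 500
hits = sum(extinct_within(start=1, max_steps=20000) for _ in range(trials))
frac = hits / trials
print(frac)  # close to 1, and tends to 1 as max_steps grows (recurrence)
```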

John wrote:

But now you are talking about similar things for a situation with infinitely many states… and in some examples, for Hamiltonians that are not self-adjoint.

Countably many states shouldn’t be a problem — people study Laplace operators on countable graphs, for example in geometric group theory. About self-adjointness, you’re right, but maybe we just need to be using the “right” scalar product in order for the Hamiltonian to become self-adjoint.

John wrote:

I’d try to argue something like this. Suppose that when the population goes to 1 it can never go back to 2, but has a nonzero probability per unit time of going to 0. Then in any equilibrium the probability of having population 1 must be zero [..]

Beautiful! It feels good to have this nagging problem settled.

• John Baez says:

Tobias wrote:

It feels good to have this nagging problem settled.

Thanks. By the way, one way to show certain Markov processes have no equilibrium state where the probability of having population 1 vanishes is to copy the argument that if

$\nabla^2 \phi = 0$

on some region and $\phi$ is zero on the boundary of the region, then $\phi = 0$ (under certain side conditions). The idea is that the Laplace equation

$\nabla^2 \phi = 0$

says that $\phi$ at any point is the average of its values on a small sphere around that point. So, if it’s zero on the boundary of the shape it should be zero everywhere. In the 1-dimensional discrete setting the Laplace equation becomes

$\phi(n) = \frac{1}{2} \left( \phi(n-1) + \phi(n+1) \right)$

Of course, this doesn’t prohibit the solution $\phi(n) = n$ for $n \ge 0,$ but that solution isn’t ‘normalizable’. (That’s part of what I meant by ‘side conditions’.) Any normalizable solution has to be zero.
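As a tiny illustration (my own sketch, not from the comment): iterating the discrete Laplace equation forward from the boundary values $\phi(0) = 0$, $\phi(1) = c$ shows that every solution is the linear one $\phi(n) = c n$, so no solution is normalizable unless $c = 0$.

```python
def discrete_harmonic(c, n_max):
    """Solve phi(n) = (phi(n-1) + phi(n+1))/2 forward from the
    boundary values phi(0) = 0 and phi(1) = c.  Rearranged, the
    recurrence reads phi(n+1) = 2*phi(n) - phi(n-1)."""
    phi = [0.0, c]
    for n in range(1, n_max):
        phi.append(2 * phi[n] - phi[n - 1])
    return phi

phi = discrete_harmonic(0.1, 10)
print(phi)  # grows linearly: phi(n) = 0.1 * n, so sum |phi(n)| diverges
```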

About self-adjointness, you’re right, but maybe we just need to be using the “right” scalar product in order for the Hamiltonian to become self-adjoint.

Right, there’s a theory of generalized Dirichlet operators on measure spaces, which allows you to adjust the measure and thus the $L^2$ inner product:

• M. Fukushima, Dirichlet Forms and Markov Processes, North-Holland, Amsterdam, 1980.

In the book, I wanted to apply this theory to the case of a continuous-time Markov process with a finite space $X$ of states. If we allow ourselves to use a measure other than counting measure on $X,$ we get new concepts of ‘self-adjoint’ and ‘infinitesimal stochastic’ operators on $L^1(X)$ and $L^2(X).$ It would be nice, and not too hard, to work out the consequences. But I never got around to it!

There’s also a further generalization of Dirichlet operators that allows them to have a skew-adjoint part as long as it’s dominated by the self-adjoint part:

• Zhi-Ming Ma and Michael Röckner, Introduction to the Theory of (Non-Symmetric) Dirichlet Forms, Springer, Berlin, 1992.

• Graham Jones says:

For general problems of this type, I would look at the theory of branching processes. (http://en.wikipedia.org/wiki/Branching_process) A branching process is like a very restricted kind of reaction network where the only reactions have one thing on the left.

$A \to 3B + 4C$

is allowed, but nothing like $0 \to A$ or $2A \to A$. (Often branching processes are restricted to one type, but you can have multi-type ones.) As for tricks, I know two, and you’re both doing the first one already. That is, find the embedded discrete-time Markov chain by ignoring the time it takes to get to the next state, and just look at the transition probabilities $P(i|j)$ of going from $j$ to $i$ at the next change. In all cases so far, $j$ has been $i-1$ or $i+1$, but it doesn’t have to be.

The other trick I know is probability generating functions, which I bet you do know now I’ve reminded you!

One way in which branching processes can be more general than the reaction networks I’ve seen is that they can have an infinite number of reactions, eg

$A \to nA$

for every $n$. (Might be handy for fish, which can lay millions of eggs.) This means that the process can ‘explode’, that is, produce an infinite number of particles in a finite time. So a ‘non-explosion’ hypothesis is assumed. (I think a finite number of reactions implies the hypothesis.) I mention this because of the technical difficulty in the recent blog post The Large-Number Limit for Reaction Networks (Part 3) about passing a limit through a derivative. I bet an exploding process would fail here! But I thought that if you looked at how the ‘non-explosion’ hypothesis is used in branching process theory, it might help prove the theorem.

• John Baez says:

Yes, ‘explosion’ is one of the things I worry about when studying the large-number limit, not only for passing the limit through the derivative, but also for showing that starting with a semiclassical family of states it remains semiclassical for a while as we evolve it. This amounts to showing that moments don’t grow huge or blow up. The moments of the population probably do blow up even before the mean population blows up in a stochastic reaction network like

$A + A \to A + A + A$

• John Baez says:

Over here, Tobias wrote:

True… I have to admit that I have secretly been thinking in terms of a different model, namely one in which competition is replaced by a “death rate” per individual which is proportional to current population size. This results in a slightly different kind of Hamiltonian with which the population can die out completely.

Right.

(It probably won’t arise from a Petri net, though.)

It does, actually. We just need one species $A$ (for ‘amoeba’) and two transitions

$A \longrightarrow A + A$

$A \longrightarrow 0$

with exactly the same rate constant if you want the rate equation to have equilibrium solutions. (Otherwise all solutions of the rate equation will go to infinity (if births outpace deaths) or zero (if deaths outpace births).)
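Spelling out the claim: with $A \to A + A$ at rate $\alpha$ and $A \to 0$ at rate $\beta$, the rate equation is linear,

```latex
\frac{d P}{d t} = \alpha P - \beta P = (\alpha - \beta) P,
\qquad
P(t) = P(0)\, e^{(\alpha - \beta) t}
```

so solutions blow up when $\alpha > \beta$, decay to zero when $\alpha < \beta$, and every population is an equilibrium exactly when $\alpha = \beta$.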

I believe that when the rate constants are equal, the master equation will have the property that with probability 1, the population eventually reaches zero and never bounces back. I haven’t proved this.

Or, if you want something a bit less delicately balanced, you can do something like this:

$A \longrightarrow A + A$

$A + A \longrightarrow A$

$A \to 0$

This shows up in models of fish populations, where the last process represents fishing. For many choices of rate constants the rate equation should have a unique nonzero equilibrium solution… but also zero will be an equilibrium solution. I believe that in these cases, the master equation will have the property that the population eventually reaches zero with probability 1, and never bounces back.
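Here is a quick numerical check (my own sketch; the rate constants are made up) that the fish model’s rate equation, $\dot P = \alpha P - \beta P^2 - \gamma P$, settles at the nonzero equilibrium $P^* = (\alpha - \gamma)/\beta$ when $\alpha > \gamma$:

```python
def simulate_rate_equation(alpha, beta, gamma, p0, t_max, dt=1e-3):
    """Euler-integrate dP/dt = alpha*P - beta*P^2 - gamma*P, the rate
    equation for A -> A+A (alpha), A+A -> A (beta), A -> 0 (gamma)."""
    p = p0
    for _ in range(int(t_max / dt)):
        p += dt * (alpha * p - beta * p * p - gamma * p)
    return p

# Nonzero equilibrium P* = (alpha - gamma)/beta = 100 here:
p = simulate_rate_equation(alpha=2.0, beta=0.01, gamma=1.0, p0=10.0, t_max=30.0)
print(round(p, 3))  # close to 100
```

The deterministic equation never sees the extinction that the master equation makes inevitable; zero is also an equilibrium, but it is unstable for these rate constants.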

• John Baez says:

Tobias Fritz wrote:

Now here’s the question: does this mean that any initial population has probability 1 to […] grow indefinitely?

(I omit the possibility of extinction here because Graham pointed out that in this particular model death only occurs when there are at least 2 amoebas, so extinction is impossible.)

I haven’t tried to prove it, but that’s what I’d expect. I don’t find this disturbing, because the ‘peculiar’ phenomena you mention (near-total extinction, or shooting far above the equilibrium population) happen less and less frequently in the large-number limit. This is the limit where we keep increasing the equilibrium population while counting the population in larger and larger units, and also change the rate constants in the manner described in this paper.

If the equilibrium population is 1, it’s quite likely for an initial population with this equilibrium size to double in size. If the equilibrium population is 1 million, this is less likely to happen soon… though I believe the population is still certain to eventually double at some point.

An example where extinction is a possibility: if you keep playing the lottery, eventually you either run out of money, die, or get lucky and win.

