joint with Jacob Biamonte
It’s time to resume the network theory series! We’re writing a little book based on some of these posts, so we want to finish up our discussion of stochastic Petri nets and chemical reaction networks. But it’s been a long time, so we’ll assume you forgot everything we’ve said before, and make this post as self-contained as possible.
Last time we started looking at a simple example: a diatomic gas.
A diatomic molecule of this gas can break apart into two atoms:

$$B \to A + A$$

and conversely, two atoms can combine to form a diatomic molecule:

$$A + A \to B$$

We can draw both these reactions using a chemical reaction network:

$$A + A \rightleftharpoons B$$

where we’re writing $B$ instead of $A_2$ to abstract away some detail that’s just distracting here.
Last time we looked at the rate equation for this chemical reaction network, and found equilibrium solutions of that equation. Now let’s look at the master equation, and find equilibrium solutions of that. This will serve as a review of three big theorems.
The master equation
We’ll start from scratch. The master equation is all about how atoms or molecules or rabbits or wolves or other things interact randomly and turn into other things. So, let’s write $\psi_{m,n}$ for the probability that we have $m$ atoms of $A$ and $n$ molecules of $B$ in our container. These probabilities are functions of time, and the master equation will say how they change.
First we need to pick a rate constant for each reaction. Let’s say the rate constant for the reaction where a molecule breaks apart is some number $\alpha$:

$$B \stackrel{\alpha}{\longrightarrow} A + A$$

while the rate constant for the reverse reaction is some number $\beta$:

$$A + A \stackrel{\beta}{\longrightarrow} B$$
Before we make it pretty using the ideas we’ve been explaining all along, the master equation says

$$\frac{d}{d t} \psi_{m,n}(t) = \alpha (n+1)\, \psi_{m-2,\,n+1}(t) \;-\; \alpha n\, \psi_{m,n}(t) \;+\; \beta (m+2)(m+1)\, \psi_{m+2,\,n-1}(t) \;-\; \beta m(m-1)\, \psi_{m,n}(t)$$

where we define $\psi_{m,n}$ to be zero if either $m$ or $n$ is negative.
Yuck!
Normally we don’t show you such nasty equations. Indeed the whole point of our work has been to demonstrate that by packaging the equations in a better way, we can understand them using high-level concepts instead of mucking around with millions of scribbled symbols. But we thought we’d show you what’s secretly lying behind our beautiful abstract formalism, just once.
Each term has a meaning. For example, the third one:

$$\beta (m+2)(m+1)\, \psi_{m+2,\,n-1}(t)$$

means that the reaction $A + A \to B$ will tend to increase the probability of there being $m$ atoms of $A$ and $n$ molecules of $B$ if we start with $m+2$ atoms of $A$ and $n-1$ molecules of $B$. This reaction can happen in $(m+2)(m+1)$ ways. And it happens at a probabilistic rate proportional to the rate constant for this reaction, $\beta$.
We won’t go through the rest of the terms. It’s a good exercise to do so, but there could easily be a typo in the formula, since it’s so long and messy. So let us know if you find one!
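By the way, if you’d like to watch this messy equation in action instead of just staring at it, here’s a little numerical sketch of our own (it’s not from the original discussion, and the rate constants, truncation and initial data are arbitrary choices). It integrates the master equation by Euler steps on a truncated state space and confirms that total probability stays equal to 1:

```python
# psi[m, n] = probability of m atoms of A and n molecules of B.
import numpy as np

alpha, beta = 1.0, 1.0           # arbitrary rate constants
M, N = 11, 6                     # truncation: m = 0..10, n = 0..5

def psi_dot(psi):
    d = np.zeros_like(psi)
    for m in range(M):
        for n in range(N):
            if m >= 2 and n + 1 < N:
                d[m, n] += alpha * (n + 1) * psi[m - 2, n + 1]
            d[m, n] -= alpha * n * psi[m, n]
            if n >= 1 and m + 2 < M:
                d[m, n] += beta * (m + 2) * (m + 1) * psi[m + 2, n - 1]
            d[m, n] -= beta * m * (m - 1) * psi[m, n]
    return d

psi = np.zeros((M, N))
psi[0, 5] = 1.0                  # start with 5 B's for sure, so m + 2n = 10
dt = 1e-3
for _ in range(20_000):          # evolve up to t = 20
    psi = psi + dt * psi_dot(psi)

print(psi.sum())                 # stays 1, up to roundoff and Euler error
for n in range(N):               # the long-run distribution on m + 2n = 10
    print((10 - 2 * n, n), round(psi[10 - 2 * n, n], 4))
```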
To simplify this mess, the key trick is to introduce a generating function that summarizes all the probabilities in a single power series:

$$\Psi = \sum_{m, n \ge 0} \psi_{m,n}\, x^m y^n$$

It’s a power series in two variables, $x$ and $y$, since we have two chemical species: $A$’s and $B$’s.
Using this trick, the master equation looks like

$$\frac{d}{d t} \Psi(t) = H \Psi(t)$$

where the Hamiltonian $H$ is a sum of terms, one for each reaction. This Hamiltonian is built from operators that annihilate and create $A$’s and $B$’s. The annihilation and creation operators for $A$ atoms are:

$$a = \frac{\partial}{\partial x}, \qquad a^\dagger = x$$

The annihilation operator differentiates our power series with respect to the variable $x$. The creation operator multiplies it by that variable. Similarly, the annihilation and creation operators for $B$ molecules are:

$$b = \frac{\partial}{\partial y}, \qquad b^\dagger = y$$
In Part 8 we explained a recipe that lets us stare at our chemical reaction network and write down this Hamiltonian:

$$H = \alpha \left( {a^\dagger}^2 - b^\dagger \right) b \;+\; \beta \left( b^\dagger - {a^\dagger}^2 \right) a^2$$
As promised, there’s one term for each reaction. But each term is itself a sum of two: one that increases the probability that our container of chemicals will be in a new state, and another that decreases the probability that it’s in its original state. We get a total of four terms, which correspond to the four terms in our previous way of writing the master equation.
Puzzle 1. Show that this new way of writing the master equation is equivalent to the previous one.
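If you want a sanity check on your hand computation, here’s one way to test the claim with a computer algebra system. This sketch is ours, not anything from Part 8: it builds a truncated generating function with symbolic coefficients, applies $H$, and compares one coefficient of $H \Psi$ with the messy componentwise formula (any point well inside the truncation works):

```python
import sympy as sp

x, y, alpha, beta = sp.symbols('x y alpha beta')
M, N = 8, 4
psi = {(m, n): sp.Symbol(f'psi_{m}_{n}') for m in range(M) for n in range(N)}
Psi = sum(psi[m, n] * x**m * y**n for m in range(M) for n in range(N))

a     = lambda f: sp.diff(f, x)   # annihilate an A
a_dag = lambda f: x * f           # create an A
b     = lambda f: sp.diff(f, y)   # annihilate a B
b_dag = lambda f: y * f           # create a B

# H = alpha (a†a† - b†) b + beta (b† - a†a†) aa, applied to Psi:
HPsi = sp.expand(alpha * (a_dag(a_dag(b(Psi))) - b_dag(b(Psi)))
                 + beta * (b_dag(a(a(Psi))) - a_dag(a_dag(a(a(Psi))))))

m, n = 3, 1
lhs = HPsi.coeff(x, m).coeff(y, n)
rhs = (alpha*(n+1)*psi[m-2, n+1] - alpha*n*psi[m, n]
       + beta*(m+2)*(m+1)*psi[m+2, n-1] - beta*m*(m-1)*psi[m, n])
print(sp.simplify(lhs - rhs))     # prints 0
```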
Equilibrium solutions
Now we will look for all equilibrium solutions of the master equation: in other words, solutions that don’t change with time. So, we’re trying to solve

$$H \Psi = 0$$

Given the rather complicated form of the Hamiltonian, this seems tough. The challenge looks more concrete but even more scary if we go back to our original formulation. We’re looking for probabilities $\psi_{m,n}$, nonnegative numbers that sum to one, such that

$$\alpha (n+1)\, \psi_{m-2,\,n+1} \;+\; \beta (m+2)(m+1)\, \psi_{m+2,\,n-1} \;=\; \big( \alpha n + \beta m(m-1) \big)\, \psi_{m,n}$$

for all $m$ and $n$.
This equation is horrid! But the good news is that it’s linear, so a linear combination of solutions is again a solution. This lets us simplify the problem using a conserved quantity.
Clearly, there’s a quantity that the reactions here don’t change. What’s that? It’s the number of $A$’s plus twice the number of $B$’s, namely $m + 2n$. After all, a $B$ can turn into two $A$’s, or vice versa.
(Of course the secret reason is that $B$ is a diatomic molecule made of two $A$’s. But you’d be able to follow the logic here even if you didn’t know that, just by looking at the chemical reaction network… and sometimes this more abstract approach is handy! Indeed, the way chemists first discovered that certain molecules are made of certain atoms was by seeing which reactions were possible and which weren’t.)
Suppose we start in a situation where we know for sure that the number of $A$’s plus twice the number of $B$’s equals some number $k$:

$$m + 2n = k$$

Then we know $\Psi$ is initially of the form

$$\Psi = \sum_{m + 2n = k} \psi_{m,n}\, x^m y^n$$

But since the number of $A$’s plus twice the number of $B$’s is conserved, if $\Psi$ obeys the master equation it will continue to be of this form!
Put a fancier way, we know that if a solution of the master equation starts in this subspace:

$$L_k = \left\{ \Psi : \; \Psi = \sum_{m + 2n = k} \psi_{m,n}\, x^m y^n \right\}$$

it will stay in this subspace. So, because the master equation is linear, we can take any solution $\Psi$ and write it as a linear combination of solutions $\Psi_k$, one in each subspace $L_k$.

In particular, we can do this for an equilibrium solution $\Psi$. And then all the solutions $\Psi_k$ are also equilibrium solutions: they’re linearly independent, so if one of them changed with time, $\Psi$ would too.

This means we can just look for equilibrium solutions in the subspaces $L_k$. If we find these, we can get all equilibrium solutions by taking linear combinations.
Once we’ve noticed that, our horrid equation makes a bit more sense:

$$\alpha (n+1)\, \psi_{m-2,\,n+1} \;+\; \beta (m+2)(m+1)\, \psi_{m+2,\,n-1} \;=\; \big( \alpha n + \beta m(m-1) \big)\, \psi_{m,n}$$

Note that if the pair of subscripts $(m, n)$ obeys $m + 2n = k$, the same is true for the other pairs of subscripts here! So our equation relates the values of $\psi_{m,n}$ for all the points $(m, n)$ with integer coordinates lying on this line segment:

$$m + 2n = k, \qquad m, n \ge 0$$
You should be visualizing the points with integer coordinates strung out along this line segment, like beads on a string. If you think about it a minute, you’ll see that if we know $\psi_{m,n}$ at two points on such a line, we can keep using our equation to recursively work out all the rest. So, there are at most two linearly independent equilibrium solutions of the master equation in each subspace $L_k$.

Why at most two? Why not two? Well, we have to be a bit careful about what happens at the ends of the line segment: remember that $\psi_{m,n}$ is defined to be zero when $m$ or $n$ becomes negative. If we think very hard about this, we’ll see there’s just one linearly independent equilibrium solution of the master equation in each subspace $L_k$. But this is the sort of nitty-gritty calculation that’s not fun to watch someone else do, so we won’t bore you with that.
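If you don’t feel like doing that calculation by hand either, a computer is happy to confirm the conclusion. Here’s a sketch of our own (with arbitrary choices of $\alpha$, $\beta$ and $k$): it builds the master equation restricted to one line segment $m + 2n = k$ and counts its linearly independent equilibrium solutions:

```python
import numpy as np

alpha, beta, k = 1.0, 2.0, 10
states = [(k - 2*n, n) for n in range(k // 2 + 1)]   # integer points on the segment
idx = {s: i for i, s in enumerate(states)}

H = np.zeros((len(states), len(states)))
for (m, n), j in idx.items():
    if n >= 1:                    # B -> A + A happens at rate alpha * n
        H[idx[(m + 2, n - 1)], j] += alpha * n
        H[j, j] -= alpha * n
    if m >= 2:                    # A + A -> B happens at rate beta * m * (m - 1)
        H[idx[(m - 2, n + 1)], j] += beta * m * (m - 1)
        H[j, j] -= beta * m * (m - 1)

# Equilibrium solutions span the kernel of H: count near-zero singular values.
print(np.sum(np.linalg.svd(H, compute_uv=False) < 1e-9))   # prints 1
```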
Soon we’ll move on to a more high-level approach to this problem. But first, one remark. Our horrid equation is like a fancy version of the usual discretized form of the equation

$$\frac{d^2 \psi}{d x^2} = 0$$

namely:

$$\psi_{n+1} - 2 \psi_n + \psi_{n-1} = 0$$

And this makes sense, since we get

$$\frac{d^2 \psi}{d x^2} = 0$$

by taking the heat equation:

$$\frac{\partial \psi}{\partial t} = \frac{\partial^2 \psi}{\partial x^2}$$

and assuming $\psi$ doesn’t depend on time. So what we’re doing is a lot like looking for equilibrium solutions of the heat equation.
The heat equation describes how heat smears out as little particles of heat randomly move around. True, there don’t really exist ‘little particles of heat’, but this equation also describes the diffusion of any other kind of particles as they randomly move around undergoing Brownian motion. Similarly, our master equation describes a random walk on this line segment:

$$m + 2n = k, \qquad m, n \ge 0$$

or more precisely, on the points of this segment with integer coordinates. The equilibrium solutions arise when the probabilities $\psi_{m,n}$ have diffused as much as possible.
If you think about it this way, it should be physically obvious that there’s just one linearly independent equilibrium solution of the master equation for each value of $k$.
There’s a general moral here, too, which we’re seeing in a special case: the master equation for a chemical reaction network really describes a bunch of random walks, one for each allowed value of the conserved quantities that can be built as linear combinations of number operators. In our case we have one such conserved quantity, but in general there may be more (or none).
Furthermore, these ‘random walks’ are what we’ve been calling Markov processes.
Noether’s theorem
We simplified our task of finding equilibrium solutions of the master equation by finding a conserved quantity. The idea of simplifying problems using conserved quantities is fundamental to physics: this is why physicists are so enamored with quantities like energy, momentum, angular momentum and so on.
Nowadays physicists often use ‘Noether’s theorem’ to get conserved quantities from symmetries. There’s a very simple version of Noether’s theorem for quantum mechanics, but in Part 11 we saw a version for stochastic mechanics, and it’s that version that is relevant now. Here’s a paper which explains it in detail:
• John Baez and Brendan Fong, Noether’s theorem for Markov processes.
We don’t really need Noether’s theorem now, since we found the conserved quantity and exploited it without even noticing the symmetry. Nonetheless it’s interesting to see how it relates to what we’re doing.
For the reaction we’re looking at now, the idea is that the subspaces $L_k$ are eigenspaces of an operator that commutes with the Hamiltonian $H$. It follows from standard math that a solution of the master equation that starts in one of these subspaces, stays in that subspace.
What is this operator? It’s built from ‘number operators’. The number operator for $A$’s is

$$N_A = a^\dagger a$$

and the number operator for $B$’s is

$$N_B = b^\dagger b$$
A little calculation shows

$$N_A\, x^m y^n = m\, x^m y^n, \qquad N_B\, x^m y^n = n\, x^m y^n$$

so the eigenvalue of $N_A$ is the number of $A$’s, while the eigenvalue of $N_B$ is the number of $B$’s. This is why they’re called number operators.
As a consequence, the eigenvalue of the operator $N_A + 2 N_B$ is the number of $A$’s plus twice the number of $B$’s:

$$(N_A + 2 N_B)\, x^m y^n = (m + 2n)\, x^m y^n$$

Let’s call this operator $O$, since it’s so important:

$$O = N_A + 2 N_B$$

If you think about it, the spaces $L_k$ we saw a minute ago are precisely the eigenspaces of this operator:

$$L_k = \{ \Psi : \; O \Psi = k \Psi \}$$
As we’ve seen, solutions of the master equation that start in one of these eigenspaces will stay there. This lets us take some techniques that are very familiar in quantum mechanics, and apply them to this stochastic situation.

First of all, time evolution as described by the master equation is given by the operators $\exp(t H)$. In other words,

$$\Psi(t) = \exp(t H)\, \Psi(0)$$
But if you start in some eigenspace of $O$, you stay there. Thus if $\Psi$ is an eigenvector of $O$, so is $\exp(t H) \Psi$, with the same eigenvalue. In other words,

$$O \Psi = k \Psi$$

implies

$$O \exp(t H) \Psi = k \exp(t H) \Psi = \exp(t H)\, O \Psi$$

But since we can choose a basis consisting of eigenvectors of $O$, we must have

$$O \exp(t H) = \exp(t H)\, O$$

or, throwing caution to the winds and differentiating:

$$O H = H O$$
So, as we’d expect from Noether’s theorem, our conserved quantity commutes with the Hamiltonian! This in turn implies that $H$ commutes with any polynomial in $O$, which in turn suggests that

$$H \exp(s O) = \exp(s O)\, H$$

and also

$$\exp(t H) \exp(s O) = \exp(s O) \exp(t H)$$
The last equation says that $O$ generates a 1-parameter family of ‘symmetries’: operators $\exp(s O)$ that commute with time evolution. But what do these symmetries actually do? Since

$$O\, x^m y^n = (m + 2n)\, x^m y^n$$

we have

$$\exp(s O)\, x^m y^n = e^{s(m + 2n)}\, x^m y^n$$

So, this symmetry takes any probability distribution $\psi_{m,n}$ and multiplies it by $e^{s(m + 2n)}$. In other words, our symmetry multiplies the relative probability of finding our container of gas in a given state by a factor of $e^s$ for each $A$ atom, and by a factor of $e^{2s}$ for each $B$ molecule. It might not seem obvious that this operation commutes with time evolution! However, experts on chemical reaction theory are familiar with this fact.
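Here’s a numerical way of our own to gain confidence in it (the truncation sizes and the values of $s$ and $t$ below are arbitrary). We truncate the state space, forbidding reactions that would leave the box so that the truncated $H$ still conserves $m + 2n$, and then check that $\exp(s O)$ and $\exp(t H)$ commute as matrices:

```python
import numpy as np
from scipy.linalg import expm

alpha, beta = 1.0, 1.0
M, N = 7, 4
states = [(m, n) for m in range(M) for n in range(N)]
idx = {s: i for i, s in enumerate(states)}

H = np.zeros((len(states), len(states)))
for (m, n), j in idx.items():
    if n >= 1 and m + 2 < M:      # B -> A + A
        H[idx[(m + 2, n - 1)], j] += alpha * n
        H[j, j] -= alpha * n
    if m >= 2 and n + 1 < N:      # A + A -> B
        H[idx[(m - 2, n + 1)], j] += beta * m * (m - 1)
        H[j, j] -= beta * m * (m - 1)

O = np.diag([float(m + 2*n) for (m, n) in states])
s, t = 0.3, 0.7
print(np.allclose(expm(s*O) @ expm(t*H), expm(t*H) @ expm(s*O)))   # True
```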
Finally, a couple of technical points. Starting where we said ‘throwing caution to the winds’, our treatment has not been rigorous, since $H$ and $O$ are unbounded operators, and these must be handled with caution. Nonetheless, all the commutation relations we wrote down are true.
The operators $\exp(s O)$ are unbounded for positive $s$. They’re bounded for negative $s$, so they give a one-parameter semigroup of bounded operators. But they’re not stochastic operators: even for $s$ negative, they don’t map probability distributions to probability distributions. However, they do map any nonzero vector $\Psi$ with

$$\psi_{m,n} \ge 0$$

to another vector with the same properties. So, we can just normalize this vector and get a probability distribution. The need for this normalization is why we spoke of relative probabilities.
The Anderson–Craciun–Kurtz theorem
Now we’ll actually find all equilibrium solutions of the master equation in closed form. To understand this final section, you really do need to remember some things we’ve discussed earlier. Last time we considered the same chemical reaction network we’re studying today, but we looked at its rate equation, which looks like this:

$$\frac{d x_1}{d t} = 2 \alpha x_2 - 2 \beta x_1^2$$

$$\frac{d x_2}{d t} = -\alpha x_2 + \beta x_1^2$$

This describes how the number of $A$’s and $B$’s changes in the limit where there are lots of them and we can treat them as varying continuously, in a deterministic way. The number of $A$’s is $x_1$, and the number of $B$’s is $x_2$.

We saw that the quantity

$$x_1 + 2 x_2$$

is conserved, just as today we’ve seen that $m + 2n$ is conserved. We saw that the rate equation has one equilibrium solution for each choice of $x_1 + 2 x_2$. And we saw that these equilibrium solutions obey

$$\beta x_1^2 = \alpha x_2$$
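Here’s one more quick sketch of ours, in case you’d like to see this concretely (rates, initial data and step size are arbitrary): integrate the rate equation numerically and watch $x_1 + 2 x_2$ stay constant while $\beta x_1^2 - \alpha x_2$ decays to zero:

```python
alpha, beta = 1.0, 2.0
x1, x2 = 3.0, 1.0                  # so x1 + 2*x2 = 5 is conserved
dt = 1e-3
for _ in range(20_000):            # Euler steps up to t = 20
    dx1 = 2*alpha*x2 - 2*beta*x1**2
    dx2 = -alpha*x2 + beta*x1**2
    x1, x2 = x1 + dt*dx1, x2 + dt*dx2

print(x1 + 2*x2)                   # ~5.0: the conserved quantity
print(beta*x1**2 - alpha*x2)       # ~0.0: the equilibrium condition
```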
The Anderson–Craciun–Kurtz theorem, introduced in Part 9, is a powerful result that gets equilibrium solutions of the master equation from equilibrium solutions of the rate equation. It only applies to equilibrium solutions that are ‘complex balanced’, but that’s okay:

Puzzle 2. Show that the equilibrium solutions of the rate equation for the chemical reaction network

$$A + A \rightleftharpoons B$$

are complex balanced.
So, given any equilibrium solution $(c_1, c_2)$ of our rate equation, we can hit it with the Anderson–Craciun–Kurtz theorem and get an equilibrium solution of the master equation! And it looks like this:

$$\Psi = e^{-(c_1 + c_2)}\, e^{c_1 x + c_2 y}$$

In this solution, the probability distribution

$$\psi_{m,n} = e^{-(c_1 + c_2)}\, \frac{c_1^m c_2^n}{m!\, n!}$$

is a product of Poisson distributions. The factor in front is there to make the numbers $\psi_{m,n}$ add up to one. And remember, $c_1$ and $c_2$ are any nonnegative numbers with

$$\beta c_1^2 = \alpha c_2$$
So, from all we’ve said, the above formula is an explicit closed-form solution of the horrid equation

$$\alpha (n+1)\, \psi_{m-2,\,n+1} \;+\; \beta (m+2)(m+1)\, \psi_{m+2,\,n-1} \;=\; \big( \alpha n + \beta m(m-1) \big)\, \psi_{m,n}$$
That’s pretty nice. We found some solutions without ever doing any nasty calculations.
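Actually, if you’d like reassurance, the direct check is short; this verification is ours, not part of the theorem’s official proof. Drop the constant factor $e^{-(c_1 + c_2)}$ and plug $\psi_{m,n} = c_1^m c_2^n / (m!\, n!)$ into the left side of the horrid equation. Using the equilibrium condition $\beta c_1^2 = \alpha c_2$ twice:

$$\alpha (n+1)\, \frac{c_1^{m-2} c_2^{n+1}}{(m-2)!\, (n+1)!} = \alpha c_2\, \frac{c_1^{m-2} c_2^{n}}{(m-2)!\, n!} = \beta c_1^2\, \frac{c_1^{m-2} c_2^{n}}{(m-2)!\, n!} = \beta m(m-1)\, \frac{c_1^{m} c_2^{n}}{m!\, n!}$$

$$\beta (m+2)(m+1)\, \frac{c_1^{m+2} c_2^{n-1}}{(m+2)!\, (n-1)!} = \beta c_1^2\, \frac{c_1^{m} c_2^{n-1}}{m!\, (n-1)!} = \alpha c_2\, \frac{c_1^{m} c_2^{n-1}}{m!\, (n-1)!} = \alpha n\, \frac{c_1^{m} c_2^{n}}{m!\, n!}$$

Adding these gives $\big( \alpha n + \beta m(m-1) \big)\, c_1^m c_2^n / (m!\, n!)$, which is exactly the right side.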
But we’ve really done better than getting some equilibrium solutions of the master equation. By restricting attention to the $\psi_{m,n}$ with $m + 2n = k$, our formula for $\Psi$ gives an equilibrium solution that lives in the eigenspace $L_k$:

$$\Psi_k = \sum_{m + 2n = k} \frac{c_1^m c_2^n}{m!\, n!}\, x^m y^n$$
And by what we’ve said, linear combinations of these give all equilibrium solutions of the master equation.
And we got them with very little work! Despite all the fancy talk in today’s post, we essentially just took the equilibrium solutions of the rate equation and plugged them into a straightforward formula to get equilibrium solutions of the master equation. This is why the Anderson–Craciun–Kurtz theorem is so nice. And of course we’re looking at a very simple reaction network: for more complicated ones it becomes even better to use this theorem to avoid painful calculations.
We could go further. For example, we could study nonequilibrium solutions using Feynman diagrams.
But instead, we will leave off with two more puzzles. We introduced some symmetries, but we haven’t really explored them yet:
Puzzle 3. What do the symmetries associated to the conserved quantity $O$ do to the equilibrium solutions of the master equation given by

$$\Psi = e^{-(c_1 + c_2)}\, e^{c_1 x + c_2 y}$$

where $(c_1, c_2)$ is an equilibrium solution of the rate equation? In other words, what is the significance of the one-parameter family of solutions

$$\exp(s O)\, \Psi \, ?$$
Also, we used a conceptual argument to check that $O$ commutes with $H$, but it’s good to know that we can check this sort of thing directly:
Puzzle 4. Compute the commutator

$$[H, O] = H O - O H$$

and show it vanishes.
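If you’d rather let the computer do the algebra after trying it by hand, here’s a check of our own using SymPy. Since the annihilation and creation operators act exactly on monomials $x^m y^n$, even with symbolic exponents, no truncation is needed:

```python
import sympy as sp

x, y, m, n, alpha, beta = sp.symbols('x y m n alpha beta', positive=True)

a     = lambda f: sp.diff(f, x)   # annihilate an A
a_dag = lambda f: x * f           # create an A
b     = lambda f: sp.diff(f, y)   # annihilate a B
b_dag = lambda f: y * f           # create a B

def H(f):   # H = alpha (a†a† - b†) b + beta (b† - a†a†) aa
    return (alpha * (a_dag(a_dag(b(f))) - b_dag(b(f)))
            + beta * (b_dag(a(a(f))) - a_dag(a_dag(a(a(f))))))

def O(f):   # O = N_A + 2 N_B
    return a_dag(a(f)) + 2 * b_dag(b(f))

Psi = x**m * y**n                 # a generic basis vector
print(sp.expand(H(O(Psi)) - O(H(Psi))))   # prints 0
```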