Okay, now let’s dig deeper into the proof of the deficiency zero theorem. We’re only going to prove a baby version, at first. Later we can enhance it:
Deficiency Zero Theorem (Baby Version). Suppose we have a weakly reversible reaction network with deficiency zero. Then for any choice of rate constants there exists an equilibrium solution of the rate equation where all species are present in nonzero amounts.
The first step is to write down the rate equation in a new, more conceptual way. It’s incredibly cool. You’ve probably seen Schrödinger’s equation, which describes the motion of a quantum particle:

$$\frac{d\psi}{dt} = -iH\psi$$
If you’ve been following this series, you’ve probably also seen the master equation, which describes the motion of a ‘stochastic’ particle:

$$\frac{d\psi}{dt} = H\psi$$
A ‘stochastic’ particle is one that’s carrying out a random walk, and now $\psi$ describes its probability to be somewhere, instead of its amplitude.
Today we’ll see that the rate equation for a reaction network looks somewhat similar:

$$\frac{dx}{dt} = Y H x^Y$$
where $H$ is some matrix, and $x^Y$ is defined using a new thing called ‘matrix exponentiation’, which makes the equation nonlinear!
If you’re reading this you probably know how to multiply a vector by a matrix. But if you’re like me, you’ve never seen anyone take a vector and raise it to the power of some matrix! I’ll explain it, don’t worry… right now I’m just trying to get you intrigued. It’s not complicated, but it’s exciting how this unusual operation shows up naturally in chemistry. That’s just what I’m looking for these days: new math ideas that show up in practical subjects like chemistry, and new ways that math can help people in these subjects.
Since we’re looking for an equilibrium solution of the rate equation, we actually want to solve

$$\frac{dx}{dt} = 0$$

or in other words

$$Y H x^Y = 0$$

In fact we will do better: we will find a solution of

$$H x^Y = 0$$
And we’ll do this in two stages:
• First we’ll find all solutions of

$$H \psi = 0$$

This equation is linear, so it’s easy to understand.
• Then, among these solutions we’ll find one that also obeys

$$x^Y = \psi$$
This is a nonlinear problem involving matrix exponentiation, but still, we can do it, using a clever trick called ‘logarithms’.
Putting the pieces together, we get our solution of

$$H x^Y = 0$$

and thus our equilibrium solution of the rate equation:

$$\frac{dx}{dt} = Y H x^Y = 0$$
That’s a rough outline of the plan. But now let’s get started, because the details are actually fascinating. Today I’ll just show you how to rewrite the rate equation in this new way.
The rate equation
Remember how the rate equation goes. We start with a stochastic reaction network, meaning a little diagram like this:
This contains quite a bit of information:
• a finite set $T$ of transitions,
• a finite set $K$ of complexes,
• a finite set $S$ of species,
• a map $r : T \to (0,\infty)$ giving a rate constant for each transition,
• source and target maps $s, t : T \to K$ saying where each transition starts and ends,
• a one-to-one map $Y : K \to \mathbb{N}^S$ saying how each complex is made of species.
Given all this, the rate equation says how the amount of each species changes with time. We describe these amounts with a vector $x \in [0,\infty)^S$. So, we want a differential equation filling in the question marks here:

$$\frac{dx}{dt} = ???$$
Now last time, we started by thinking of $K$ as a subset of $\mathbb{N}^S$, and thus of the vector space $\mathbb{R}^S$. Back then, we wrote the rate equation as follows:

$$\frac{dx}{dt} = \sum_{\tau \in T} r(\tau)\, (t(\tau) - s(\tau))\, x^{s(\tau)}$$

where vector exponentiation is defined by

$$x^y = x_1^{y_1} \cdots x_k^{y_k}$$

when $x$ and $y$ are vectors in $\mathbb{R}^S$.
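To make this concrete, here’s a tiny Python sketch of vector exponentiation; the function name `vec_pow` is mine, invented for illustration:

```python
# Vector exponentiation: x^y = x_1^{y_1} * ... * x_k^{y_k}.
# A minimal sketch; `vec_pow` is a made-up name, not a standard function.

def vec_pow(x, y):
    """Raise the vector x to the vector power y: multiply the componentwise powers."""
    assert len(x) == len(y)
    result = 1.0
    for xi, yi in zip(x, y):
        result *= xi ** yi
    return result

# With x = (2, 3) and y = (1, 2) we get 2^1 * 3^2 = 18:
print(vec_pow([2.0, 3.0], [1, 2]))  # 18.0
```

Note the result is a single number, not a vector: all the componentwise powers get multiplied together.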
However, we’ve now switched to thinking of our set $K$ of complexes as a set in its own right that is mapped into $\mathbb{N}^S$ by $Y$. This is good for lots of reasons, like defining the concept of ‘deficiency’, which we did last time. But it means the rate equation above doesn’t quite parse anymore! Things like $s(\tau)$ and $t(\tau)$ live in $K$; we need to explicitly convert them into elements of $\mathbb{N}^S$ using $Y$ for our equation to make sense!
So now we have to write the rate equation like this:

$$\frac{dx}{dt} = \sum_{\tau \in T} r(\tau)\, (Y t(\tau) - Y s(\tau))\, x^{Y s(\tau)}$$
This looks uglier, but if you’ve got even one mathematical bone in your body, you can already see vague glimmerings of how we’ll rewrite this the way we want:

$$\frac{dx}{dt} = Y H x^Y$$
First, we extend our maps $Y$, $s$ and $t$ to linear maps between vector spaces:

$$Y : \mathbb{R}^K \to \mathbb{R}^S, \qquad s, t : \mathbb{R}^T \to \mathbb{R}^K$$
Then, we put an inner product on the vector spaces $\mathbb{R}^T$, $\mathbb{R}^K$ and $\mathbb{R}^S$. For $\mathbb{R}^K$ we do this in the most obvious way, by letting the complexes be an orthonormal basis. So, given two complexes $\kappa, \kappa' \in K$, we define their inner product by

$$\langle \kappa, \kappa' \rangle = \delta_{\kappa \kappa'}$$
We do the same for $\mathbb{R}^S$. But for $\mathbb{R}^T$ we define the inner product in a more clever way involving the rate constants. If $\tau, \tau' \in T$ are two transitions, we define their inner product by:

$$\langle \tau, \tau' \rangle = \frac{1}{r(\tau)}\, \delta_{\tau \tau'}$$
This will seem perfectly natural when we continue our study of circuits made of electrical resistors, and if you’re very clever you can already see it lurking in Part 16. But never mind.
Having put inner products on these three vector spaces, we can take the adjoints of the linear maps between them, to get linear maps going back the other way:

$$Y^\dagger : \mathbb{R}^S \to \mathbb{R}^K, \qquad s^\dagger, t^\dagger : \mathbb{R}^K \to \mathbb{R}^T$$
These are defined in the usual way—though we’re using daggers here the way physicists do, where mathematicians would prefer to see stars! For example, $s^\dagger$ is defined by the relation

$$\langle s^\dagger \phi, \psi \rangle = \langle \phi, s \psi \rangle$$

for all $\phi \in \mathbb{R}^K$ and $\psi \in \mathbb{R}^T$,
and so on.
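If you like checking such things in coordinates, here’s a sketch. Writing the linear map $s$ as a matrix $S$ in the standard bases, the weighted inner product on $\mathbb{R}^T$ forces the adjoint to be $s^\dagger = \mathrm{diag}(r)\, S^\top$; the code below verifies the defining relation numerically. All names and numbers here are mine, invented for the example:

```python
# Verify <s† phi, psi>_T = <phi, s psi>_K on a toy example, where R^T carries
# the weighted inner product <tau, tau'> = delta_{tau tau'} / r(tau).
# Working this out in coordinates gives (s† phi)_tau = r(tau) * sum_k S[k][tau] * phi[k].

def inner_T(a, b, r):
    # weighted inner product on R^T
    return sum(ai * bi / ri for ai, bi, ri in zip(a, b, r))

def inner_K(a, b):
    # standard inner product on R^K
    return sum(ai * bi for ai, bi in zip(a, b))

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

def adjoint_s(S, r):
    # s† = diag(r) @ S^T, derived from the weighted inner product
    n_K, n_T = len(S), len(S[0])
    return [[r[t] * S[k][t] for k in range(n_K)] for t in range(n_T)]

# Toy network: 2 complexes, 3 transitions whose sources are complexes 1, 1, 2
S = [[1, 1, 0],
     [0, 0, 1]]
r = [2.0, 5.0, 3.0]
Sdag = adjoint_s(S, r)

# check the defining relation on a pair of test vectors
phi = [1.0, -2.0]
psi = [0.5, 1.0, 4.0]
print(inner_T(apply(Sdag, phi), psi, r), inner_K(phi, apply(S, psi)))  # -6.5 -6.5
```

The same recipe with $r$ replaced by all ones gives the adjoints $t^\dagger$ and $Y^\dagger$, since those spaces carry the standard inner product.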
Next, we set up a random walk on the set of complexes. Remember, our reaction network is a graph with complexes as vertices and transitions as edges, like this:
Each transition $\tau$ has a number attached to it: the rate constant $r(\tau)$. So, we can randomly hop from complex to complex along these transitions, with probabilities per unit time described by these numbers. The probability of being at some particular complex will then be described by a function

$$\psi : K \to \mathbb{R}$$

which also depends on time, and changes according to the equation

$$\frac{d\psi}{dt} = H \psi$$

for some Hamiltonian

$$H : \mathbb{R}^K \to \mathbb{R}^K$$
I defined this Hamiltonian back in Part 15, but now I see a slicker way to write it:

$$H = (t - s)\, s^\dagger$$
I’ll justify this next time. For now, the main point is that with this Hamiltonian, the rate equation is equivalent to this:

$$\frac{dx}{dt} = Y H x^Y$$
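Here’s a toy illustration of this Hamiltonian in coordinates, using the matrix form $s^\dagger = \mathrm{diag}(r)\,S^\top$ that follows from the weighted inner product. The network, rates and names below are all made up for the example; notice how each column of $H$ sums to zero, as it should for the Hamiltonian of a random walk:

```python
# Build H = (t - s) s† for a made-up network: complexes A, B with
# transitions tau1: A -> B (rate 2) and tau2: B -> A (rate 1).

def matmul(M, N):
    return [[sum(M[i][k] * N[k][j] for k in range(len(N)))
             for j in range(len(N[0]))] for i in range(len(M))]

r = [2.0, 1.0]
S = [[1, 0],   # source map as a matrix: tau1 starts at A, tau2 at B
     [0, 1]]
T = [[0, 1],   # target map as a matrix: tau1 ends at B, tau2 at A
     [1, 0]]

Sdag = [[r[t] * S[k][t] for k in range(2)] for t in range(2)]    # s† = diag(r) S^T
TmS = [[T[i][j] - S[i][j] for j in range(2)] for i in range(2)]  # t - s
H = matmul(TmS, Sdag)
print(H)  # each column sums to zero: probability is conserved
```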
The only thing I haven’t defined yet is the funny exponential $x^Y$. That’s what makes the equation nonlinear. We’re taking a vector to the power of a matrix and getting a vector. This sounds weird—but it actually makes sense!
It only makes sense because we have chosen bases for our vector spaces. To understand it, let’s number our species $1, \dots, k$ as we’ve been doing all along, and number our complexes $1, \dots, l$. Our linear map $Y : \mathbb{R}^K \to \mathbb{R}^S$ then becomes a $k \times l$ matrix of natural numbers. Its entries say how many times each species shows up in each complex: the entry $Y_{ij}$ says how many times the $i$th species shows up in the $j$th complex.
Now, let’s be a bit daring and think of the vector $x \in \mathbb{R}^S$ as a row vector with $k$ entries:

$$x = (x_1, \dots, x_k)$$

Then we can multiply $x$ on the right by the matrix $Y$ and get a vector $xY$ in $\mathbb{R}^K$:

$$(xY)_j = \sum_{i=1}^k x_i\, Y_{ij}$$
So far, no big deal. But now you’re ready to see the definition of $x^Y$, which is very similar:

$$(x^Y)_j = \prod_{i=1}^k x_i^{Y_{ij}}$$
It’s exactly the same, but with multiplication replacing addition, and exponentiation replacing multiplication! Apparently my class on matrices stopped too soon: we learned about matrix multiplication, but matrix exponentiation is also worthwhile.
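In code, matrix exponentiation is as easy as matrix multiplication, with products of powers replacing sums of products. A sketch, where the name `mat_pow` is my own invention:

```python
# Matrix exponentiation as defined above: (x^Y)_j = prod_i x_i^{Y_ij}.

def mat_pow(x, Y):
    """Raise the row vector x to the power of the matrix Y, one column at a time."""
    k, l = len(Y), len(Y[0])
    assert len(x) == k
    result = []
    for j in range(l):
        p = 1.0
        for i in range(k):
            p *= x[i] ** Y[i][j]
        result.append(p)
    return result

# Two species, two complexes.  Say complex 1 = 2*(species 1) and
# complex 2 = (species 1) + (species 2):
Y = [[2, 1],
     [0, 1]]
x = [3.0, 5.0]
print(mat_pow(x, Y))  # [3^2 * 5^0, 3^1 * 5^1] = [9.0, 15.0]
```

Replacing each `*=` by `+=` and each `**` by `*` turns this back into ordinary vector–matrix multiplication, which is the whole point.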
What’s the point of it? Well, suppose you have a certain number of hydrogen molecules, a certain number of oxygen molecules, a certain number of water molecules, and so on—a certain number of things of each species. You can list these numbers and get a vector $x$. Then the components of $x^Y$ describe how many ways you can build up each complex from the things you have. For example,

$$(x^Y)_1 = x_1^{Y_{11}}\, x_2^{Y_{21}} \cdots x_k^{Y_{k1}}$$

says roughly how many ways you can build complex 1 by picking $Y_{11}$ things of species 1, $Y_{21}$ things of species 2, and so on.
Why ‘roughly’? Well, we’re pretending we can pick the same thing twice. So if we have 4 water molecules and we need to pick 3, this formula gives $4^3 = 64$. The right answer is $4 \cdot 3 \cdot 2 = 24$. To get this answer we’d need to use the ‘falling power’ $4 \cdot 3 \cdot 2$, as explained in Part 4. But the rate equation describes chemistry in the limit where we have lots of things of each species. In this limit, the ordinary power becomes a good approximation.
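A quick numerical check of this approximation; `falling_power` is just my name for the helper:

```python
# Ordinary powers vs falling powers: with n things of one species and a complex
# needing m of them, the exact count is the falling power n*(n-1)*...*(n-m+1),
# while the rate equation uses n^m.  For large n the two agree well.

def falling_power(n, m):
    result = 1
    for i in range(m):
        result *= n - i
    return result

print(falling_power(4, 3), 4 ** 3)           # 24 vs 64: bad for small n
print(falling_power(10**6, 3) / (10**6)**3)  # ratio very close to 1 for large n
```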
Puzzle. In this post we’ve seen a vector raised to a matrix power ($x^Y$), which is a vector, and also a vector raised to a vector power ($x^y$), which is a number. How are they related?
There’s more to say about this, which I’d be glad to explain if you’re interested. But let’s get to the punchline:
Theorem. The rate equation:

$$\frac{dx}{dt} = \sum_{\tau \in T} r(\tau)\, (Y t(\tau) - Y s(\tau))\, x^{Y s(\tau)}$$

is equivalent to this equation:

$$\frac{dx}{dt} = Y H x^Y$$

or in other words:

$$\frac{dx}{dt} = Y (t - s)\, s^\dagger x^Y$$
Proof. It’s enough to show

$$Y (t - s)\, s^\dagger x^Y = \sum_{\tau \in T} r(\tau)\, (Y t(\tau) - Y s(\tau))\, x^{Y s(\tau)}$$

So, we’ll compute $Y (t - s)\, s^\dagger x^Y$ and think about the meaning of each quantity we get en route.
We start with $x$. This is a list of numbers saying how many things of each species we have: our raw ingredients, as it were. Then we compute $x^Y$.

This is a vector in $\mathbb{R}^K$. It’s a list of numbers saying how many ways we can build each complex starting from our raw ingredients.

Alternatively, we can write this vector as a sum over basis vectors:

$$x^Y = \sum_{\kappa \in K} x^{Y \kappa}\, \kappa$$
Next let’s apply $s^\dagger$ to this. We claim that

$$s^\dagger \kappa = \sum_{\{\tau :\, s(\tau) = \kappa\}} r(\tau)\, \tau$$

In other words, we claim $s^\dagger \kappa$ is the sum of all the transitions having $\kappa$ as their source, weighted by their rate constants! To prove this claim, it’s enough to take the inner product of each side with any transition $\tau'$ and check that we get the same answer. For the left side we get

$$\langle s^\dagger \kappa, \tau' \rangle = \langle \kappa, s \tau' \rangle = \delta_{\kappa\, s(\tau')}$$
To compute the right side, we need to use the cleverly chosen inner product on $\mathbb{R}^T$. Here we get

$$\Big\langle \sum_{\{\tau :\, s(\tau) = \kappa\}} r(\tau)\, \tau,\; \tau' \Big\rangle = \sum_{\{\tau :\, s(\tau) = \kappa\}} \delta_{\tau \tau'} = \delta_{\kappa\, s(\tau')}$$

In the first step here, the factor of $1/r(\tau)$ in the cleverly chosen inner product canceled the visible factor of $r(\tau)$. For the second step, you just need to think for half a minute—or ten, depending on how much coffee you’ve had.
Either way, we conclude that indeed

$$s^\dagger \kappa = \sum_{\{\tau :\, s(\tau) = \kappa\}} r(\tau)\, \tau$$
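Here’s a coordinate check of this conclusion, again using the matrix form $s^\dagger = \mathrm{diag}(r)\,S^\top$ that the weighted inner product gives; the three-transition network below is invented for illustration:

```python
# Applying s† to a basis complex should give the sum of the transitions
# with that complex as source, each weighted by its rate constant.

r = [2.0, 5.0, 3.0]              # rate constants for three transitions
S = [[1, 1, 0],                  # transitions 1 and 2 start at complex 1,
     [0, 0, 1]]                  # transition 3 starts at complex 2

def s_dagger(phi):
    # (s† phi)_tau = r(tau) * sum_kappa S[kappa][tau] * phi[kappa]
    return [r[t] * sum(S[k][t] * phi[k] for k in range(len(S)))
            for t in range(len(r))]

kappa1 = [1.0, 0.0]              # the basis vector for complex 1
print(s_dagger(kappa1))          # [2.0, 5.0, 0.0] = r(tau1) tau1 + r(tau2) tau2
```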
Next let’s combine this with our formula for $x^Y$:

$$x^Y = \sum_{\kappa \in K} x^{Y \kappa}\, \kappa$$

We get this:

$$s^\dagger x^Y = \sum_{\tau \in T} r(\tau)\, x^{Y s(\tau)}\, \tau$$
In other words, $s^\dagger x^Y$ is a linear combination of transitions, where each one is weighted both by the rate at which it happens and by how many ways it can happen starting with our raw ingredients.
Our goal is to compute $Y (t - s)\, s^\dagger x^Y$. We’re almost there. Remember, $s$ says which complex is the input of a given transition, and $t$ says which complex is the output. So, $(t - s)\, s^\dagger x^Y$ says the total rate at which complexes are created and/or destroyed starting with the species in $x$ as our raw ingredients.

That sounds good. But let’s just pedantically check that everything works. Applying $Y (t - s)$ to both sides of our last equation, we get

$$Y (t - s)\, s^\dagger x^Y = \sum_{\tau \in T} r(\tau)\, x^{Y s(\tau)}\, (Y t(\tau) - Y s(\tau))$$

Remember, our goal was to prove that this equals

$$\sum_{\tau \in T} r(\tau)\, (Y t(\tau) - Y s(\tau))\, x^{Y s(\tau)}$$
But if you stare at these a while and think, you’ll see they’re equal. █
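If you’d rather let a computer do the staring, here’s a sketch that evaluates both sides on a made-up network with two species, two complexes and two transitions; every name and number below is mine, chosen just for the test:

```python
# Numerical check of the theorem on a toy network:
# species 1, 2; complexes kappa1 = 2*(species 1), kappa2 = (species 1) + (species 2);
# transitions tau1: kappa1 -> kappa2 (rate 2), tau2: kappa2 -> kappa1 (rate 1).

def matvec(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

Y = [[2, 1],            # rows = species, columns = complexes
     [0, 1]]
r = [2.0, 1.0]
src = [0, 1]            # source complex of each transition
tgt = [1, 0]            # target complex of each transition
x = [3.0, 5.0]          # amounts of each species

# x^Y via matrix exponentiation
xY = [x[0] ** Y[0][j] * x[1] ** Y[1][j] for j in range(2)]

# Left side: Y H x^Y, with H = (t - s) s† built in coordinates
S = [[1.0 if src[t] == k else 0.0 for t in range(2)] for k in range(2)]
T = [[1.0 if tgt[t] == k else 0.0 for t in range(2)] for k in range(2)]
Sdag = [[r[t] * S[k][t] for k in range(2)] for t in range(2)]   # s† = diag(r) S^T
H = [[sum((T[i][m] - S[i][m]) * Sdag[m][j] for m in range(2))
      for j in range(2)] for i in range(2)]
lhs = matvec(Y, matvec(H, xY))

# Right side: the original rate equation, summed transition by transition
rhs = [0.0, 0.0]
for t in range(2):
    for i in range(2):
        rhs[i] += r[t] * (Y[i][tgt[t]] - Y[i][src[t]]) * xY[src[t]]

print(lhs, rhs)  # both [-3.0, 3.0]
```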
It took me a couple of weeks to really understand this, so I’ll be happy if it takes you just a few days. It seems peculiar at first but ultimately it all makes sense. The interesting subtlety is that we use the linear map called ‘multiplying by $Y$’:

$$Y : \mathbb{R}^K \to \mathbb{R}^S$$

to take a bunch of complexes and work out the species they contain, while we use the nonlinear map called ‘raising to the $Y$th power’:

$$x \mapsto x^Y, \qquad \mathbb{R}^S \to \mathbb{R}^K$$
to take a bunch of species and work out how many ways we can build each complex from them. There is much more to say about this: for example, these maps arise from a pair of what category theorists call ‘adjoint functors’. But I’m worn out and you probably are too, if you’re still here at all.
I found this thesis to be the most helpful reference when I was trying to understand the proof of the deficiency zero theorem:
• Jonathan M. Guberman, Mass Action Reaction Networks and the Deficiency Zero Theorem, B.A. thesis, Department of Mathematics, Harvard University, 2003.
I urge you to check it out. In particular, Section 3 and Appendix A discuss matrix exponentiation. Has anyone discussed this before?
Here’s another good modern treatment of the deficiency zero theorem:
• Jeremy Gunawardena, Chemical reaction network theory for in silico biologists, 2003.
The theorem was first proved here:
• Martin Feinberg, Chemical reaction network structure and the stability of complex isothermal reactors: I. The deficiency zero and deficiency one theorems, Chemical Engineering Science 42 (1987), 2229-2268.
However, Feinberg’s treatment here relies heavily on this paper:
• F. Horn and R. Jackson, General mass action kinetics, Archive for Rational Mechanics and Analysis 47 (1972), 81-116.
(Does anyone work on ‘irrational mechanics’?) These lectures are also very helpful:
• Martin Feinberg, Lectures on reaction networks, 1979.
If you’ve seen other proofs, let us know.