joint with Jacob Biamonte
Last time we explained how ‘reaction networks’, as used in chemistry, are just another way of talking about Petri nets. We stated an amazing result, the deficiency zero theorem, which settles quite a number of questions about chemical reactions. Now let’s illustrate it with an example.
Our example won’t show how powerful this theorem is: it’s too simple. But it’ll help explain the ideas involved.
Diatomic molecules
A diatomic molecule consists of two atoms stuck together; here we only care about the kind where both atoms are the same element:

At room temperature there are 5 elements that are diatomic gases: hydrogen, nitrogen, oxygen, fluorine, chlorine. Bromine is a diatomic liquid, but easily evaporates into a diatomic gas:

Iodine is a crystal at room temperature:

but if you heat it a bit, it becomes a diatomic liquid and then a gas:

so people often list it as a seventh member of the diatomic club.
When you heat any diatomic gas enough, it starts becoming a ‘monatomic’ gas as molecules break down into individual atoms. However, just as a diatomic molecule can break apart into two atoms:
two atoms can recombine to form a diatomic molecule:
So in equilibrium, the gas will be a mixture of diatomic and monatomic forms. The exact amount of each will depend on the temperature and pressure, since these affect the likelihood that two colliding atoms stick together, or a diatomic molecule splits apart. The detailed nature of our gas also matters, of course.
But we don’t need to get into these details here! Instead, we can just write down the ‘rate equation’ for the reactions we’re talking about. All the details we’re ignoring will be hiding in some constants called ‘rate constants’. We won’t try to compute these; we’ll leave that to our chemist friends.
A reaction network
To write down our rate equation, we start by drawing a ‘reaction network’. For this, we can write $A$ for our atom, and be a bit abstract by calling the diatomic molecule $B$ instead of $A_2$. Then it looks like this:

We could write down the same information using a Petri net:

But today let’s focus on the reaction network! Staring at this picture, we can read off various things:
• Species. The species are the different kinds of atoms, molecules, etc. In our example the set of species is $S = \{A, B\}$.
• Complexes. A complex is a finite sum of species, like $A + B$, or $A + A + B$, or for a fancier example using more efficient notation, $2A + 3B$.
So, we can think of a complex as a vector in $\mathbb{N}^S$, listing how many times each species appears.
The complexes that actually show up in our reaction network form a set $K \subseteq \mathbb{N}^S$.
In our example, $K = \{A + A,\ B\}$.
• Reactions. A reaction is an arrow going from one complex to another. In our example we have two reactions: $A + A \to B$ and $B \to A + A$.
Chemists define a reaction network to be a triple $(S, K, T)$ where
$S$ is a set of species,
$K \subseteq \mathbb{N}^S$ is the set of complexes that appear in the reactions, and
$T$ is the set of reactions $\tau \colon \kappa \to \kappa'$,
where $\kappa, \kappa' \in K$.
(Stochastic Petri net people call reactions transitions, hence the letter $T$.)
So, in our example we have:
• Species: $S = \{A, B\}$
• Complexes: $K = \{A + A,\ B\}$
• Reactions: $T = \{A + A \to B,\ B \to A + A\}$
To get the rate equation, we also need one more piece of information: a rate constant $r(\tau)$ for each reaction $\tau \in T$.
This is a nonnegative real number that affects how fast the reaction goes. All the details of how our particular diatomic gas behaves at a given temperature and pressure are packed into these constants!
The rate equation
The rate equation says how the expected numbers of the various species—atoms, molecules and the like—change with time. This equation is deterministic. It’s a good approximation when the numbers are large and any fluctuations in these numbers are negligible by comparison.
Here’s the general form of the rate equation:

$$\frac{d x_i}{d t} = \sum_{\tau \in T} r(\tau)\, \big(t_i(\tau) - s_i(\tau)\big)\, x^{s(\tau)}$$
Let’s take a closer look. The quantity $x_i$ is the expected population of the $i$th species. So, this equation tells us how that changes. But what about the right-hand side? As you might expect, it’s a sum over reactions. And:
• The term for the reaction $\tau$ is proportional to the rate constant $r(\tau)$.
• Each reaction goes between two complexes, so we can write it as $\tau \colon s(\tau) \to t(\tau)$. Among chemists the input $s(\tau)$ is called the reactant complex, and the output $t(\tau)$ is called the product complex. The difference $t_i(\tau) - s_i(\tau)$ tells us how many items of the $i$th species get created, minus how many get destroyed. So, it’s the net amount of this species that gets produced by the reaction $\tau$. The term for the reaction $\tau$ is proportional to this, too.
• Finally, the law of mass action says that the rate of a reaction is proportional to the product of the concentrations of the species that enter as inputs. More precisely, if we have a reaction $\tau$ whose input is the complex $s(\tau)$, we define

$$x^{s(\tau)} = x_1^{s_1(\tau)} \cdots x_k^{s_k(\tau)}$$

where $k$ is the number of species. The law of mass action says the term for the reaction $\tau$ is proportional to this, too!
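To make this concrete, here is a minimal computational sketch of this right-hand side, hard-coding the two-reaction example we study below. The function and variable names are our own, and it is meant as a sanity check on the formula rather than a serious simulator:

```python
import numpy as np

# Each reaction tau is a pair (s, t): its reactant complex s(tau) and its
# product complex t(tau), written as vectors of species counts (A, B).
reactions = [
    (np.array([2, 0]), np.array([0, 1])),   # A + A -> B
    (np.array([0, 1]), np.array([2, 0])),   # B -> A + A
]
rate_constants = [1.0, 1.0]                 # r(tau) for each reaction

def rate_rhs(x):
    """dx_i/dt = sum over tau of r(tau) * (t_i(tau) - s_i(tau)) * x^s(tau)."""
    dxdt = np.zeros_like(x)
    for (s, t), r in zip(reactions, rate_constants):
        dxdt += r * (t - s) * np.prod(x ** s)   # law of mass action
    return dxdt

print(rate_rhs(np.array([1.0, 1.0])))   # [0. 0.]: this point is an equilibrium
```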
Let’s see what this says for the reaction network we’re studying:

Let’s write $x_1$ for the number of $A$ atoms and $x_2$ for the number of $B$ molecules. Let the rate constant for the reaction $A + A \to B$ be $\alpha$, and let the rate constant for $B \to A + A$ be $\beta$. Then the rate equation is this:

$$\frac{d x_1}{d t} = -2\alpha x_1^2 + 2\beta x_2$$

$$\frac{d x_2}{d t} = \alpha x_1^2 - \beta x_2$$
This is a bit intimidating. However, we can solve it in closed form thanks to something very precious: a conserved quantity.
We’ve got two species, $A$ and $B$. But remember, $B$ is just an abbreviation for a molecule made of two $A$ atoms. So, the total number of $A$ atoms is conserved by the reactions in our network. This is the number of $A$’s plus twice the number of $B$’s:

$$x_1 + 2 x_2$$

So, this should be a conserved quantity: it should not change with time. Indeed, by adding the first equation above to twice the second, we see:

$$\frac{d}{d t}(x_1 + 2 x_2) = 0$$
As a consequence, any solution will stay on a line

$$x_1 + 2 x_2 = c$$

for some constant $c$. We can use this fact to rewrite the rate equation just in terms of $x_1$:

$$\frac{d x_1}{d t} = -2\alpha x_1^2 + \beta (c - x_1)$$

This is a separable differential equation, so we can solve it if we can figure out how to do this integral:

$$\int \frac{d x_1}{-2\alpha x_1^2 - \beta x_1 + \beta c} = t + C$$

and then solve for $x_1$ as a function of $t$.
This sort of trick won’t work for more complicated examples.
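(If you are curious, a computer algebra system will grind out this integral. Here is a sketch using sympy; the symbol names are ours:)

```python
import sympy as sp

x1, alpha, beta, c = sp.symbols('x1 alpha beta c', positive=True)

# Separating variables in dx1/dt = -2*alpha*x1**2 + beta*(c - x1)
# reduces the problem to this antiderivative:
antiderivative = sp.integrate(1 / (-2*alpha*x1**2 - beta*x1 + beta*c), x1)
print(sp.simplify(antiderivative))   # a log/arctanh-type expression in x1
```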
But the idea remains important: the numbers of atoms of various kinds—hydrogen, helium, lithium, and so on—are conserved by chemical reactions, so a solution of the rate equation can’t roam freely in $[0,\infty)^S$. It will be trapped on some hypersurface, which is called a ‘stoichiometric compatibility class’. And this is very important.
We don’t feel like doing the integral required to solve our rate equation in closed form, because this trick doesn’t generalize well. On the other hand, we can always solve the rate equation numerically. So let’s try that!
For example, suppose we set $\alpha = \beta = 1$. We can plot the solutions for three different choices of initial conditions. We get these graphs:

It looks like the solution always approaches an equilibrium. We seem to be getting different equilibria for different initial conditions, and the pattern is a bit mysterious. However, something nice happens when we plot the ratio $x_1^2 / x_2$:

Apparently it always converges to 1. Why should that be? It’s not terribly surprising. With both rate constants equal to 1, the reaction $A + A \to B$ proceeds at a rate equal to the square of the number of $A$’s, namely $x_1^2$. The reverse reaction proceeds at a rate equal to the number of $B$’s, namely $x_2$. So in equilibrium, we should have

$$x_1^2 = x_2$$
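Here is a sketch of how one might reproduce plots like the ones above with scipy; the three initial conditions below are our own arbitrary choices, not necessarily the ones used for the original graphs:

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

alpha, beta = 1.0, 1.0   # both rate constants set to 1

def rhs(t, x):
    x1, x2 = x
    return [-2*alpha*x1**2 + 2*beta*x2, alpha*x1**2 - beta*x2]

ts = np.linspace(0, 10, 500)
fig, (ax1, ax2) = plt.subplots(1, 2, figsize=(10, 4))
for x0 in [(0.0, 1.0), (1.0, 1.0), (2.0, 1.0)]:     # arbitrary choices
    sol = solve_ivp(rhs, (0, 10), x0, t_eval=ts)
    ax1.plot(sol.t, sol.y[0], label=f'x1(0) = {x0[0]}')
    ax2.plot(sol.t, sol.y[0]**2 / sol.y[1])          # the ratio x1^2 / x2
ax1.set_xlabel('t'); ax1.set_ylabel('x1'); ax1.legend()
ax2.set_xlabel('t'); ax2.set_ylabel('x1^2 / x2')
plt.show()
```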
But why is the equilibrium stable? In this example we could see that using the closed-form solution, or maybe just common sense. But it also follows from a powerful theorem that handles a lot of reaction networks.
The deficiency zero theorem
It’s called Feinberg’s deficiency zero theorem, and we saw it last time. Very roughly, it says that if our reaction network is ‘weakly reversible’ and has ‘deficiency zero’, the rate equation will have equilibrium solutions that behave about as nicely as you could want.
Let’s see how this works. We need to remember some jargon:
• Weakly reversible. A reaction network is weakly reversible if for every reaction $\tau \colon \kappa \to \kappa'$ in the network, there exists a path of reactions in the network starting at $\kappa'$ and leading back to $\kappa$.
• Reversible. A reaction network is reversible if for every reaction $\tau \colon \kappa \to \kappa'$ in the network, $\kappa' \to \kappa$ is also a reaction in the network. Any reversible reaction network is weakly reversible. Our example is reversible, since it consists of the reactions $A + A \to B$ and $B \to A + A$.
But what about ‘deficiency zero’? We defined that concept last time, but let’s review:
• Connected component. A reaction network gives a kind of graph with complexes as vertices and reactions as edges. Two complexes lie in the same connected component if we can get from one to the other by a path of reactions, where at each step we’re allowed to go either forward or backward along a reaction. Chemists call a connected component a linkage class. In our example there’s just one:

• Stoichiometric subspace. The stoichiometric subspace $V \subseteq \mathbb{R}^S$ is the subspace spanned by the vectors of the form $t(\tau) - s(\tau)$ for all reactions $\tau \in T$ in our reaction network. This subspace describes the directions in which a solution of the rate equation can move. In our example, it’s spanned by $B - (A + A)$ and $(A + A) - B$, or if you prefer, $(-2, 1)$ and $(2, -1)$. These vectors are linearly dependent, so the stoichiometric subspace has dimension 1.
• Deficiency. The deficiency of a reaction network is the number of complexes, minus the number of connected components, minus the dimension of the stoichiometric subspace. In our example there are 2 complexes, 1 connected component, and the dimension of the stoichiometric subspace is 1. So, our reaction network has deficiency 2 – 1 – 1 = 0.
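Since the deficiency is computed from three finite pieces of data, it’s easy to automate. Here is a sketch; the function name and data encoding are ours:

```python
import numpy as np

def deficiency(complexes, reactions):
    """complexes: list of species-count tuples; reactions: list of (i, j)
    index pairs meaning complexes[i] -> complexes[j]."""
    n = len(complexes)
    # Count connected components of the reaction graph (arrows treated as
    # undirected edges) with a tiny union-find.
    parent = list(range(n))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for i, j in reactions:
        parent[find(i)] = find(j)
    n_components = len({find(i) for i in range(n)})
    # Dimension of the stoichiometric subspace: rank of the matrix whose
    # rows are the reaction vectors t(tau) - s(tau).
    rows = np.array([np.subtract(complexes[j], complexes[i])
                     for i, j in reactions])
    dim = np.linalg.matrix_rank(rows)
    return n - n_components - dim

# Our example: complexes A + A = (2, 0) and B = (0, 1).
print(deficiency([(2, 0), (0, 1)], [(0, 1), (1, 0)]))   # -> 0
```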
So, the deficiency zero theorem applies! What does it say? To understand it, we need a bit more jargon. First of all, a vector $x \in [0,\infty)^S$ tells us how much we’ve got of each species: the amount of the $i$th species is the number $x_i$.
And then:
• Stoichiometric compatibility class. Given a vector $x \in [0,\infty)^S$, its stoichiometric compatibility class is the subset of all vectors that we could reach from $x$ using the reactions in our reaction network:

$$\{\, x + v : v \in V \,\} \cap [0,\infty)^S$$

In our example, where the stoichiometric subspace $V$ is spanned by $(2, -1)$, the stoichiometric compatibility class of the vector $(x_1, x_2)$ is the line consisting of points

$$(x_1 + 2s,\ x_2 - s)$$

where the parameter $s$ ranges over all real numbers for which both coordinates stay nonnegative. Notice that this line can also be written as

$$\{\, (y_1, y_2) : y_1 + 2 y_2 = c \,\} \cap [0,\infty)^2$$

where $c = x_1 + 2 x_2$.
We’ve already seen that if we start with initial conditions on such a line, the solution will stay on this line. And that’s how it always works: as time passes, any solution of the rate equation stays in the same stoichiometric compatibility class!
In other words: each stoichiometric compatibility class is cut out by a bunch of linear equations, one for each linear conservation law that all the reactions in our network obey.
Here a linear conservation law is a law saying that some linear combination of the numbers of species does not change.
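Concretely, the linear conservation laws are the linear combinations orthogonal to every reaction vector $t(\tau) - s(\tau)$, so we can find them all by computing a null space. A sketch for our example:

```python
import numpy as np
from scipy.linalg import null_space

# Rows: the reaction vectors t(tau) - s(tau) for A + A -> B and B -> A + A.
R = np.array([[-2,  1],
              [ 2, -1]])
print(null_space(R))   # one basis vector, proportional to (1, 2):
                       # the conservation law x1 + 2*x2 = constant
```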
Next:
• Positivity. A vector in $[0,\infty)^S$ is positive if all its components are positive; this describes a container of chemicals where all the species are actually present. The positive stoichiometric compatibility class of $x$ consists of all positive vectors in its stoichiometric compatibility class.
We finally have enough jargon in our arsenal to state the zero deficiency theorem. We’ll only state the part we need today:
Zero Deficiency Theorem (Feinberg). If a reaction network has deficiency zero and is weakly reversible, and the rate constants are positive, the rate equation has exactly one equilibrium solution in each positive stoichiometric compatibility class. Any sufficiently nearby solution that starts in the same class will approach this equilibrium as $t \to +\infty$.
In our example, this theorem says there’s just one positive equilibrium in each line

$$x_1 + 2 x_2 = c, \qquad c > 0$$

We can find it by setting the time derivatives to zero:

$$-2\alpha x_1^2 + 2\beta x_2 = 0, \qquad \alpha x_1^2 - \beta x_2 = 0$$

Solving these, we get

$$x_2 = \frac{\alpha}{\beta}\, x_1^2$$

So, these are our equilibrium solutions. It’s easy to verify that indeed, there’s one of these in each stoichiometric compatibility class $x_1 + 2 x_2 = c$. And the zero deficiency theorem also tells us that any sufficiently nearby solution that starts in the same class will approach this equilibrium as $t \to +\infty$.
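Explicitly, if we want the equilibrium in terms of $c$: substituting $x_2 = (c - x_1)/2$ into $\alpha x_1^2 = \beta x_2$ gives the quadratic $2 \alpha x_1^2 + \beta x_1 - \beta c = 0$, whose unique positive root is

$$x_1 = \frac{-\beta + \sqrt{\beta^2 + 8 \alpha \beta c}}{4 \alpha}, \qquad x_2 = \frac{c - x_1}{2}$$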
This partially explains what we saw before in our graphs. It shows that in the case $\alpha = \beta = 1$, any solution that starts by nearly having $x_1^2 = x_2$ will actually have $x_1^2 / x_2 \to 1$ as $t \to +\infty$.
But in fact, in this example we don’t even need to start near the equilibrium for our solution to approach the equilibrium! What about in general? We don’t know, but just to get the ball rolling, we’ll risk the following wild guess:
Conjecture. If a reaction network has deficiency zero and is weakly reversible, and the rate constants are positive, the rate equation has exactly one equilibrium solution in each positive stoichiometric compatibility class, and any positive solution that starts in the same class will approach this equilibrium as $t \to +\infty$.
If anyone knows a proof or counterexample, we’d be interested. If this result were true, it would really clarify the dynamics of reaction networks in the zero deficiency case.
Next time we’ll talk about this same reaction network from a stochastic point of view, where we think of the atoms and molecules as reacting in a probabilistic way. And we’ll see how the conservation laws we’ve been talking about today are related to Noether’s theorem for Markov processes!
In your conjecture, presumably you’re still taking deficiency to be zero. It would be nice to see the simplest deficiency one network where there fails to be a single fixed point per coset. Feinberg wrote about deficiency one networks, as mentioned by Gunawardena. That seems a very good paper, if one had time. The idea that his map is “trying to be linear” looks important.
I guess what your conjecture needs is prevention of states heading off to the boundary of the non-negative cone, where one or more of the concentrations tends to zero.
Yikes, we left the ‘deficiency zero’ hypothesis out of our statement of the deficiency zero theorem, and also our conjecture! Thanks for catching that—I’ll put it in.
I’ve been sadly slow understanding the proof of the deficiency zero theorem or Feinberg’s work on deficiency one networks. Gunawardena’s paper is very good, but Feinberg’s original papers also look very good (and free):
• Martin Feinberg, Chemical reaction network structure and the stability of complex isothermal reactors: I. The deficiency zero and deficiency one theorems, Chemical Engineering Science 42 (1987), 2229–2268.
• Martin Feinberg, Lectures on reaction networks, 1979.
I’ve just been slow about reading them!
That’s part of it: that’s what can easily go wrong if we drop the ‘weakly reversible’ condition. Periodic orbits are already ruled out. But there’s also the possibility, in principle, of more complicated chaotic motions!
A standard trick in situations like this would be to cook up a ‘Lyapunov function’ on each stoichiometric compatibility class, in order to show each class contains a globally asymptotically stable equilibrium—which just means an equilibrium that all other solutions (in that class) approach as $t \to +\infty$. I should look and see what Feinberg did, and whether or why he didn’t or couldn’t do that.
David wrote:
Let’s see. Feinberg mentions a nice example on page 5 here: it’s called the Edelstein network:

$$A \leftrightarrow A + A \qquad A + B \leftrightarrow C \leftrightarrow B$$

This has 5 complexes and 2 components, and the stoichiometric subspace has dimension 2, since it’s the span of the vectors $(1, 0, 0)$, $(-1, -1, 1)$ and $(0, 1, -1)$, only two of which are linearly independent. So, the deficiency of this reaction network is 5 – 2 – 2 = 1.
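(Reusing the little `deficiency` function sketched in the post above, with species ordered $(A, B, C)$, the count comes out the same:)

```python
# Complexes of the Edelstein network: A, A + A, A + B, C, B.
complexes = [(1, 0, 0), (2, 0, 0), (1, 1, 0), (0, 0, 1), (0, 1, 0)]
reactions = [(0, 1), (1, 0),    # A <-> A + A
             (2, 3), (3, 2),    # A + B <-> C
             (3, 4), (4, 3)]    # C <-> B
print(deficiency(complexes, reactions))   # -> 1
```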
He says that for certain choices of the rate constants, the set of equilibria is a roughly S-shaped curve that hits one of the stoichiometric compatibility classes—which are planes—in three points! He lists specific rate constants for which this works.
I’d like to play with this and other examples myself, soon. He lists some more. These ‘play’ networks, as he calls them, all seem unrealistic in one way or another—at least in chemistry. In particular, while $A \to A + A$ may be fine for amoebas, I don’t think it happens in chemistry—unless, just as our amoebas are supplied with food, our chemical is a self-replicating molecule floating in a ‘bath’ of raw materials that we don’t take explicit account of here.
I clearly don’t yet understand this stuff well enough, because the ruling-out of periodic orbits wasn’t transparently obvious. (The first kind of counterexample I’d try to find to the conjecture would be a reaction network whose dynamics go to a limit cycle of some sort.)
Time to dust off that part of my brain which learned about the Belousov–Zhabotinsky reaction….
Blake wrote:
Just so nobody gets confused: periodic orbits are ruled out for deficiency zero, weakly reversible reaction networks—this is part of the deficiency zero theorem as stated last time.
This theorem is not obvious to me, so we’re in the same boat: I need to understand what’s so great about ‘deficiency zero’.
Reaction networks violating these hypotheses can have periodic orbits:
• Daniel B. Forger, Signal processing in cellular clocks, PNAS 108 (2011), 4281–4285.
Me too! It’s not chaotic, but you might like the reaction on page 10 here, which is based on the ‘Brusselator’. It has an unstable equilibrium inside a stable limit cycle.
(I used to see ‘Brusselator’ and ‘Belousov–Zhabotinsky’ in the same books and papers, maybe just because they’re both examples of interestingly complex chemical reactions. But somehow ‘Brusselator’ sounds like a science-fiction weapon made up by a kid who hates Brussels sprouts.)
I take it those chemical oscillators aren’t weakly reversible. It doesn’t look like it from here.
Actually, that’s really a very strong condition, if you think about it: that every reaction between complexes may be reversed along some path. I see Gunawardena makes a related point, referring to P. M. Schlosser’s thesis, A graphical determination of the possibility of multiple steady states in complex isothermal CFSTRs, PhD thesis, University of Rochester, 1988.
To quote the song, I don’t know much about the BZ reaction, but I’ve heard it can display chaotic behavior when it’s pushed into the right regime in a machine called a “continuous-flow, stirred-tank reactor”. Some historical references pulled from Strogatz’s Nonlinear Dynamics and Chaos (1994) are:
R. A. Schmitz, K. R. Graziani, and J. L. Hudson (1977), “Experimental evidence of chaotic states in the Belousov–Zhabotinskii reaction” J. Chem. Phys. 67, 3040, doi:10.1063/1.435267.
R. H. Simoyi, A. Wolf and H. L. Swinney (1982), “One-dimensional dynamics in a multicomponent chemical reaction” Physical Review Letters 49, 245, doi:10.1103/PhysRevLett.49.245.
J. C. Roux, R. H. Simoyi and H. L. Swinney (1983), “Observation of a strange attractor” Physica D 8, 257, doi:10.1016/0167-2789(83)90323-8.
The “Brusselator” is a simplified model which tries to capture the BZ dynamics. The Oregonator, invented more recently, appears to be a more chemically realistic version.
The rate equations for the Oregonator are as follows:

$$\frac{d X}{d t} = k_1 A Y - k_2 X Y + k_3 A X - 2 k_4 X^2$$

$$\frac{d Y}{d t} = -k_1 A Y - k_2 X Y + \tfrac{1}{2} f k_5 B Z$$

$$\frac{d Z}{d t} = 2 k_3 A X - k_5 B Z$$

Here, $X$ denotes the concentration of HBrO2, $Y$ that of the negative bromide ion Br−, and $Z$ that of the positive cerium ion Ce4+. The other letters stand for concentrations which change more slowly and are often taken to be constant.
(Not a chemist — I’m just reading all this out of Scholarpedia.)
Here is the stable limit cycle of the chemical reaction network Blake pointed out, the ‘Oregonator’:
This is taken from
• Richard J. Field, Oregonator, Scholarpedia.
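For anyone who wants to reproduce a picture like this, here is a sketch integrating a standard two-variable reduction of the Oregonator. The parameter values are illustrative choices in the oscillatory regime, ours rather than Field’s:

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Dimensionless two-variable reduction of the Oregonator; eps, q, f are
# illustrative values in the oscillatory regime, not taken from Field.
eps, q, f = 0.04, 0.0008, 1.0

def oregonator(t, v):
    x, z = v   # scaled HBrO2 and Ce(IV) concentrations
    dx = (x * (1 - x) - f * z * (x - q) / (x + q)) / eps
    dz = x - z
    return [dx, dz]

sol = solve_ivp(oregonator, (0, 30), [0.1, 0.1], method='LSODA',
                dense_output=True, rtol=1e-8, atol=1e-10)
ts = np.linspace(5, 30, 4000)            # drop the initial transient
x, z = sol.sol(ts)
plt.plot(x, z)                           # trajectory settles onto a cycle
plt.xlabel('x (scaled HBrO2)'); plt.ylabel('z (scaled Ce(IV))')
plt.show()
```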
David wrote:
Thanks for that link! Yes, the way they describe the Bray–Liebhafsky reaction, there are reactions that turn hydrogen peroxide to water but not vice versa:
5 H2O2 + I2 → 2 IO3- + 2 H+ + 4 H2O
5 H2O2 + 2 IO3- + 2 H+ → I2 + 5 O2 + 6 H2O
Here the decay of hydrogen peroxide to water is powering a cycle where iodine gets oxidized to iodate and then iodate gets reduced back to iodine.
So, it’s a bit similar to how the wheels on a cart turn around as the cart rolls downhill. They only turn as it’s rolling; when the system reaches equilibrium at the bottom of the hill, they stop.
So, I think the Bray–Liebhafsky reaction will only oscillate as long as we have plenty of hydrogen peroxide around; in the $t \to +\infty$ limit it all becomes water, and things become boring.
But as John Maynard Keynes noted, “in the long run we’re all dead”. So, we shouldn’t get too focused on $t \to +\infty$ behavior if we’re studying biology.
You’re right: so-called ‘weak reversibility’ is pretty strong.
For example, this reaction network is not weakly reversible:
A → B
A + B → A + A
However, as long as we have at least one A around to serve as a ‘catalyst’, we can use these reactions to turn any number of A’s into the same number of B’s, or vice versa.
On the other hand: in real life, chemical reaction networks are not just weakly reversible but actually reversible. If a chemical reaction is possible, so is its reverse! But if the rate constant for the reverse reaction is sufficiently low, we often neglect it.
John,
Given your interest in leaves over hypercubes (e.g., the 1st network theory post), perhaps the Belousov–Zhabotinsky reaction would stimulate some ideas? It is an example of a class of reactions that, under certain circumstances, exhibit fundamental self-organizing properties (like leaves!).
Yes, I would like to think about this! I’ve started with a lot of general formalism concerning chemical reaction networks, and some examples that are fairly dull in the sense that the solutions converge to an equilibrium, but I want to start studying self-organizing systems… as well as expand the ‘network theory’ ideas to cover other subjects. I’ve done more on the latter, so far. There’s a lot of stuff already done that I need to explain. But thanks for pushing me towards the former!
Concerning your conjecture the following might be helpful:
• David F. Anderson, Global asymptotic stability for a class of nonlinear chemical equations, SIAM J. Appl. Math. 68 (2008), 1464–1476.
He generalizes a result of E. Sontag (from 2001) that ensures global asymptotic stability if there are no equilibria on the boundary of the positive compatibility classes.
Thanks a million, Uwe! I added a link to the arXiv version.
In response to a question by David, I guessed that the key to proving stability would be coming up with a suitable Lyapunov function. That’s a pretty obvious guess, of course: no stroke of genius required there. But I’m happy to see that in the paper you cite, a specific candidate Lyapunov function is written down in equation (3)—and David will be just as pleased as me to see that the formula is reminiscent of entropy!
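For reference, the classical candidate Lyapunov function, going back to Horn and Jackson (and presumably the one in Anderson’s equation (3)), is, up to conventions,

$$L(x) = \sum_{i=1}^k \left( x_i \ln x_i - x_i \ln \bar{x}_i - x_i + \bar{x}_i \right)$$

where $\bar{x}$ is the equilibrium in question. This is just the Kullback–Leibler divergence of $x$ relative to $\bar{x}$, extended to unnormalized distributions—which is why it looks so much like entropy.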
Maybe it’s more like ‘free energy’, but no matter: we’ve already seen that certain simple reaction networks show up in evolutionary game theory, and there the Lyapunov function is indeed a close relative of entropy: namely, the information the species has left to learn about its environment! This keeps dropping with the passage of time, and the species is in equilibrium when it has nothing left to learn.
As we saw back then, Marc Harper has a nice paper on this subject. The most relevant section is the one called “Kullback–Leibler Divergence is a Lyapunov function for the Replicator Dynamic”.
I wrote about this in a post on information geometry. And I’m happy now because my interest in information geometry and my interest in reaction networks are starting to overlap!
Dear John,
I was going to point you to the global attractor conjecture, but I see Uwe Stroinski has already done this. Your conjecture, and some generalizations of it, are well-known open questions in the community. In fact, in their 1972 paper, Horn and Jackson claimed a proof for a similar conjecture, but Horn found a problem with their proof, and withdrew it in 1974, and the problem has remained open since then. A generalization of the conjecture was posed in 1989 by Marty Feinberg. This has come to be called the “persistence conjecture.”
The problem is, in essence, the following: starting from a strictly positive vector of concentrations, can species become extinct? If they can not, one says the network is persistent. Feinberg’s conjecture is that if the network is weakly-reversible, then it must be persistent: species can not go extinct. It is not hard to show that this implies the global attractor conjecture, not just for deficiency zero, but for all complex-balanced systems (roughly, systems that admit a “free energy” function).
Here’s a far-from-comprehensive summary of the state of the art: even the three-dimensional version (the stoichiometric compatibility class is 3-dimensional) of the persistence conjecture is open. The two-dimensional version has been settled due to recent work by Craciun, Nazarov, and Pantea http://arxiv.org/abs/1010.3050 . David Anderson also has a very nice recent paper http://arxiv.org/abs/1101.0761 that settles a special case (the graph is strongly-connected) of this result. I can prove (http://arxiv.org/abs/1006.3627 ) the persistence conjecture in the absence of catalysis in the network.
Thanks for all the info, Manoj! This is really great. I hadn’t expected my conjecture was new, so I’m happy to see it might be true, and intrigued that it’s a well-known open puzzle.
This is very interesting:
This is basically what David guessed:
So, 10 points for David! But I’ll need to read why it’s actually true, because a priori I could imagine problems other than one or more concentrations going to zero (= species going extinct).
Anyway, I’ve got my reading cut out for me!
It’s not that hard to see. Here’s the basic idea. Consider the omega-limit set of a trajectory. Note the following two facts about this set:
1. Every point on this omega-limit set has the same value of the Lyapunov function.
2. The omega-limit set is invariant for the dynamics.
Therefore, the time derivative of the Lyapunov function is zero for points in the omega-limit set. Since we have a strict Lyapunov function in the relative interior of the invariant polyhedron, this omega-limit set can not contain any positive point, except the isolated positive equilibrium points. So if there are no omega-limit points on the boundary, we’re done.
I should add that trajectories can not escape to infinity because the level sets of our Lyapunov function are bounded.
By the way, I really hate saying “positive stoichiometric compatibility class.” I prefer the terms “conservation class” instead of stoichiometric compatibility class (the affine space), and “invariant polyhedron” for the intersection of the conservation class with the positive orthant (possibly very close to what you call the positive stoichiometric compatibility class, I’ll have to check your definition).
Yes, I have been using ‘positive stoichiometric compatibility class’ to mean the intersection of a stoichiometric compatibility class (an affine space of the form $x + V$, in my notation) with the open positive orthant (the set $(0,\infty)^S$, in my notation). I agree that this is long and clunky, but I’ve seen various experts do it and I haven’t dared deviate yet.
I agree that mathematicians would find ‘invariant polyhedron’ vastly more appealing. For one thing, few of them know what ‘stoichiometric’ means, and it’s a bit of a jawbreaker. For another, ‘invariant polyhedron’ will instantly make some sense to them.
To go along with ‘invariant polyhedron’, one might say ‘invariant coset’ for a stoichiometric compatibility class, since it’s a coset of the stoichiometric subspace $V$ in the vector space $\mathbb{R}^S$. Then it all sounds very much like math.
[…] in the ongoing series about reaction networks, Azimuth had a rather self-contained post about the practical applications of the deficiency zero theorem […]
[…] Last time we started looking at a simple example: a diatomic gas […]
If $s$ is the number of species, and $r$ the dimensionality of the stoichiometric subspace, then in the language of linear algebra the stoichiometric subspace can be identified with the row space of the matrix whose rows are the reaction vectors $t(\tau) - s(\tau)$. Then, as the null space is the orthogonal complement of the row space in $\mathbb{R}^s$, we have $s - r$ independent null space vectors, and therefore $s - r$ linear conservation laws. So, in deficiency 0 networks the number of connected components is equal to the number of conservation laws. Is this correct?
The number of linear conservation laws is the number of species minus the dimension of the stoichiometric subspace. Even better, the procedure given here for determining the stoichiometric subspace can be used to find all the linear conservation laws. So, this answers your earlier question about how to find conservation laws.
Let’s see. The deficiency is the number of complexes minus the number of connected components minus the dimension of the stoichiometric subspace. So, if it’s zero,
• The number of complexes equals the number of connected components plus the dimension of the stoichiometric subspace.
On the other hand,
• The number of linear conservation laws is the number of species minus the dimension of the stoichiometric subspace.
Putting these equations together, I can get an equation involving the number of connected components and the number of conservation laws. But it doesn’t seem to be the equation you suggest! What equation do we actually get?
Yes, I got a little mixed up.
We get: no. of complexes – no. of species = no. of connected components – no. of conservation laws.
Okay, that looks great! Think about some examples. And think about some counterexamples when the reaction network doesn’t have deficiency zero. I think it should make intuitive sense after a while.
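For instance, in the diatomic gas example from the post: 2 complexes – 2 species = 0, while 1 connected component – 1 conservation law = 0, so the equation checks out.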
[…] An example of the deficiency zero theorem: a diatomic gas, Network Theory Part 18 […]