*joint with Jacob Biamonte*

We’ve seen how Petri nets can be used to describe chemical reactions. Indeed our very first example came from chemistry:

However, chemists rarely use the formalism of Petri nets. They use a different but entirely equivalent formalism, called ‘reaction networks’. So now we’d like to tell you about those.

You may wonder: why bother with another formalism, if it’s equivalent to the one we’ve already seen? Well, one goal of this network theory program is to get people from different subjects to talk to each other—or at least be *able* to. This requires setting up some dictionaries to translate between formalisms. Furthermore, lots of deep results on stochastic Petri nets are being proved by chemists—but phrased in terms of reaction networks. So you need to learn this other formalism to read their papers. Finally, this other formalism is actually better in some ways!

#### Reaction networks

Here’s a reaction network:

This network involves 5 **species**: that is, different kinds of things. They could be atoms, molecules, ions or whatever: chemists call all of these species, and there’s no need to limit the applications to chemistry: in population biology, they could even be biological species! We’re calling them A, B, C, D, and E, but in applications, we’d call them by specific names like CO_{2} and HCO_{3}^{–}, or ‘rabbit’ and ‘wolf’, or whatever.

This network also involves 5 **reactions**, which are shown as arrows. Each reaction turns one bunch of species into another. So, written out more longwindedly, we’ve got these reactions:

If you remember how Petri nets work, you can see how to translate any reaction network into a Petri net, or vice versa. For example, the reaction network we’ve just seen gives this Petri net:

Each species corresponds to a **state** of this Petri net, drawn as a yellow circle. And each reaction corresponds to a **transition** of this Petri net, drawn as a blue square. The arrows say how many things of each species appear as input or output to each transition. There’s less explicit emphasis on complexes in the Petri net notation, but you can read them off if you want them.

In chemistry, a bunch of species is called a ‘complex’. But what do I mean by ‘bunch’, exactly? Well, I mean that in a given complex, each species can show up 0,1,2,3… or any natural number of times. So, we can formalize things like this:

**Definition.** Given a set $S$ of **species**, a **complex** of those species is a function $C \colon S \to \mathbb{N}$.

Roughly speaking, a reaction network is a graph whose vertices are labelled by complexes. Unfortunately, the word ‘graph’ means different things in mathematics—appallingly many things! Everyone agrees that a graph has vertices and edges, but there are lots of choices about the details. Most notably:

• We can either put arrows on the edges, or not.

• We can either allow more than one edge between vertices, or not.

• We can either allow edges from a vertex to itself, or not.

If we say ‘no’ in every case we get the concept of ‘simple graph’, which we discussed last time. At the other extreme, if we say ‘yes’ in every case we get the concept of ‘directed multigraph’, which is what we want now. A bit more formally:

**Definition.** A **directed multigraph** consists of a set $V$ of **vertices**, a set $E$ of **edges**, and functions $s, t \colon E \to V$ saying the **source** and **target** of each edge.

Given this, we can say:

**Definition.** A **reaction network** is a set of species together with a directed multigraph whose vertices are labelled by complexes of those species.

You can now prove that reaction networks are equivalent to Petri nets:

**Puzzle 1.** Show that any reaction network gives a Petri net, and vice versa.
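Here’s a minimal sketch in Python of the translation in one direction, assuming a reaction network is presented as a set of species together with a list of (input complex, output complex) pairs. The sample reaction B + E → A + C is hypothetical, just to show the encoding; it is not meant to be one of the reactions in the figure:

```python
from collections import Counter

def reaction_network_to_petri_net(species, reactions):
    """Translate a reaction network into a Petri net (one direction of Puzzle 1).

    `reactions` is a list of (input_complex, output_complex) pairs, where
    each complex maps species to natural numbers. Returns (states,
    transitions): states are the species, and each transition records how
    many copies of each state it consumes and produces.
    """
    states = set(species)
    transitions = []
    for inputs, outputs in reactions:
        transitions.append({
            "input": Counter(inputs),    # multiplicity of each state as input
            "output": Counter(outputs),  # multiplicity of each state as output
        })
    return states, transitions

# The hypothetical reaction B + E -> A + C becomes a transition with
# inputs {B: 1, E: 1} and outputs {A: 1, C: 1}:
states, transitions = reaction_network_to_petri_net(
    "ABCDE",
    [({"B": 1, "E": 1}, {"A": 1, "C": 1})],
)
```

Going back the other way is just as mechanical: each transition, with its input and output multiplicities, is a reaction between the corresponding complexes.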

In a stochastic Petri net each transition is labelled by a **rate constant**: that is, a number in $[0,\infty)$. This lets us write down some differential equations saying how species turn into each other. So, let’s make this definition (which is not standard, but will clarify things for us):

**Definition.** A **stochastic reaction network** is a reaction network where each reaction is labelled by a rate constant.

Now you can do this:

**Puzzle 2.** Show that any stochastic reaction network gives a stochastic Petri net, and vice versa.

For extra credit, show that in each of these puzzles we actually get an equivalence of categories! For this you need to define morphisms between Petri nets, morphisms between reaction networks, and similarly for stochastic Petri nets and stochastic reaction networks. If you get stuck, ask Eugene Lerman for advice. There are different ways to define morphisms, but he knows a cool one.

We’ve been downplaying category theory so far, but it’s been lurking beneath everything we do, and someday it may rise to the surface.

#### The deficiency zero theorem

You may have already noticed one advantage of reaction networks over Petri nets: they’re quicker to draw. This is true even for tiny examples. For example, this reaction network:

corresponds to this Petri net:

But there’s also a deeper advantage. As we saw in Part 8, any stochastic Petri net gives two equations:

• The **master equation**, which says how the *probability that we have a given number of things of each species* changes with time.

• The **rate equation**, which says how the *expected number of things in each state* changes with time.

The simplest solutions of these equations are the equilibrium solutions, where nothing depends on time. Back in Part 9, we explained when an equilibrium solution of the rate equation gives an equilibrium solution of the master equation. But when is there an equilibrium solution of the rate equation in the first place?

Feinberg’s ‘deficiency zero theorem’ gives a handy sufficient condition. And this condition is best stated using reaction networks! But to understand it, we need to understand the ‘deficiency’ of a reaction network. So let’s define that, and then say what all the words in the definition mean:

**Definition.** The **deficiency** of a reaction network is:

• the number of vertices

minus

• the number of connected components

minus

• the dimension of the stoichiometric subspace.

The first two concepts here are easy. A reaction network is a graph (okay, a directed multigraph). So, it has some number of vertices, and also some number of connected components. Two vertices lie in the same **connected component** iff you can get from one to the other by a path *where you don’t care which way the arrows point*. For example, this reaction network:

has 5 vertices and 2 connected components.
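Counting connected components is easy to mechanize. Here’s a sketch in Python using union-find, ignoring edge directions as the definition requires; the vertices and edges below are made up purely for illustration:

```python
def connected_components(n_vertices, edges):
    """Count connected components of a directed multigraph, ignoring
    edge directions, via a simple union-find."""
    parent = list(range(n_vertices))

    def find(v):
        while parent[v] != v:
            parent[v] = parent[parent[v]]  # path halving
            v = parent[v]
        return v

    for s, t in edges:
        parent[find(s)] = find(t)  # merge the two components
    return len({find(v) for v in range(n_vertices)})

# A hypothetical multigraph on 5 vertices, with edges only among
# {0, 1, 2} and {3, 4}, has 2 connected components:
print(connected_components(5, [(0, 1), (1, 2), (2, 0), (3, 4)]))  # → 2
```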

So, what’s the ‘stoichiometric subspace’? ‘Stoichiometry’ is a scary-sounding word. According to the Wikipedia article:

‘Stoichiometry’ is derived from the Greek words στοιχεῖον (stoicheion, meaning element) and μέτρον (metron, meaning measure). In patristic Greek, the word Stoichiometria was used by Nicephorus to refer to the number of line counts of the canonical New Testament and some of the Apocrypha.

But for us, stoichiometry is just the art of counting species. To do this, we can form a vector space $\mathbb{R}^S$, where $S$ is the set of species. A vector in $\mathbb{R}^S$ is a function from species to real numbers, saying how much of each species is present. Any complex gives a vector in $\mathbb{R}^S$, because it’s actually a function from species to *natural* numbers.

**Definition.** The **stoichiometric subspace** of a reaction network is the subspace of $\mathbb{R}^S$ spanned by vectors of the form $B - A$, where $A$ and $B$ are complexes connected by a reaction.

‘Complexes connected by a reaction’ makes sense because vertices in the reaction network are complexes, and edges are reactions. Let’s see how it works in our example:

Each complex here can be seen as a vector in $\mathbb{R}^5$, which is a vector space whose basis vectors we can call $A, B, C, D$, and $E$. Each reaction gives a difference of two vectors coming from complexes:

• The reaction gives the vector

• The reaction gives the vector

• The reaction gives the vector

• The reaction gives the vector

• The reaction gives the vector

The pattern is obvious, I hope.

These 5 vectors span the stoichiometric subspace. But this subspace isn’t 5-dimensional, because these vectors are linearly dependent! The first vector is the negative of the second one. The last is the sum of the previous two. And those are all the linear dependencies, so the stoichiometric subspace is 3-dimensional. For example, it’s spanned by these 3 linearly independent vectors: and .

I hope you see the moral of this example: the stoichiometric subspace is the space of *ways to move in $\mathbb{R}^S$ that are allowed by the reactions in our reaction network!* And this is important because the rate equation describes how the amount of each species changes as time passes. So, it describes a point moving around in $\mathbb{R}^S$.

Thus, if $V \subseteq \mathbb{R}^S$ is the stoichiometric subspace, and $x(t)$ is a solution of the rate equation, then $x(t)$ always stays within the set

$$x(0) + V = \{ x(0) + v : v \in V \}$$

Mathematicians would call this set a coset of $V$, but chemists call it the **stoichiometric compatibility class** of $x(0)$.

Anyway: what’s the deficiency of the reaction network in our example? It’s $5 - 2 - 3 = 0$,

since there are 5 complexes, 2 connected components and the dimension of the stoichiometric subspace is 3.
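This computation can be automated. Here’s a sketch in Python using exact Gaussian elimination over the rationals; the reaction vectors below are hypothetical stand-ins (the original vectors were not reproduced here), chosen to have the same linear dependencies as in the example: the first is the negative of the second, and the last is the sum of the previous two.

```python
from fractions import Fraction

def rank(vectors):
    """Rank of a list of vectors, by exact Gaussian elimination."""
    rows = [[Fraction(x) for x in v] for v in vectors]
    r, col = 0, 0
    while r < len(rows) and col < len(rows[0]):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col]), None)
        if pivot is None:
            col += 1
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(len(rows)):
            if i != r and rows[i][col]:
                f = rows[i][col] / rows[r][col]
                rows[i] = [a - f * b for a, b in zip(rows[i], rows[r])]
        r += 1
        col += 1
    return r

def deficiency(n_vertices, n_components, reaction_vectors):
    """Deficiency = (#vertices) - (#components) - dim(stoichiometric subspace)."""
    return n_vertices - n_components - rank(reaction_vectors)

# Hypothetical reaction vectors in R^5 spanning a 3-dimensional subspace:
vs = [
    (1, -1, 0, 0, 0),
    (-1, 1, 0, 0, 0),   # negative of the first
    (0, 0, 1, -1, 0),
    (0, 0, 0, 1, -1),
    (0, 0, 1, 0, -1),   # sum of the previous two
]
print(deficiency(5, 2, vs))  # → 0
```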

But what’s the deficiency zero theorem? You’re almost ready for it. You just need to know *one* more piece of jargon! A reaction network is **weakly reversible** if whenever there’s a reaction going from a complex to a complex , there’s a path of reactions going back from to . Here the paths need to follow the arrows.

So, this reaction network is *not* weakly reversible:

since we can get from to but not back from to and from to but not back, and so on. However, the network becomes weakly reversible if we add a reaction going back from to :

If a reaction network isn’t weakly reversible, one complex can turn into another, but not vice versa. In this situation, what typically happens is that as time goes on we have less and less of one species. We could have an equilibrium where there’s *none* of this species. But we have little right to expect an equilibrium solution of the rate equation that’s **positive**, meaning that it sits at a point of $(0,\infty)^S$, where there’s a nonzero amount of every species.
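Weak reversibility is easy to check algorithmically: for every reaction $u \to v$, the complex $u$ must be reachable from $v$ by a directed path. A sketch in Python, with hypothetical complex labels:

```python
def is_weakly_reversible(edges):
    """A reaction network is weakly reversible iff for every reaction
    u -> v there is a directed path of reactions back from v to u."""
    adjacency = {}
    for u, v in edges:
        adjacency.setdefault(u, set()).add(v)
        adjacency.setdefault(v, set())

    def reachable(start):
        # Depth-first search along the arrows.
        seen, stack = set(), [start]
        while stack:
            w = stack.pop()
            if w not in seen:
                seen.add(w)
                stack.extend(adjacency[w])
        return seen

    return all(u in reachable(v) for u, v in edges)

# A -> B with no way back is not weakly reversible; adding B -> A fixes it:
print(is_weakly_reversible([("A", "B")]))              # → False
print(is_weakly_reversible([("A", "B"), ("B", "A")]))  # → True
```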

My argument here is not watertight: you’ll note that I fudged the difference between species and complexes. But it can be made so when our reaction network has deficiency zero:

**Deficiency Zero Theorem (Feinberg).** Suppose we are given a reaction network with a finite set of species $S$, and suppose its deficiency is zero. Then:

(i) If the network is not weakly reversible and the rate constants are positive, the rate equation does not have a positive equilibrium solution.

(ii) If the network is not weakly reversible and the rate constants are positive, the rate equation does not have a positive periodic solution, that is, a periodic solution lying in $(0,\infty)^S$.

(iii) If the network is weakly reversible and the rate constants are positive, the rate equation has exactly one equilibrium solution in each positive stoichiometric compatibility class.

Any sufficiently nearby solution that starts in the same stoichiometric compatibility class will approach this equilibrium as $t \to \infty$. Furthermore, there are no other positive periodic solutions.

This is quite an impressive result. We’ll look at an easy example next time.

#### References and remarks

The deficiency zero theorem was published here:

• Martin Feinberg, Chemical reaction network structure and the stability of complex isothermal reactors: I. The deficiency zero and deficiency one theorems, *Chemical Engineering Science* **42** (1987), 2229–2268.

These other explanations are also very helpful:

• Martin Feinberg, Lectures on reaction networks, 1979.

• Jeremy Gunawardena, Chemical reaction network theory for in-silico biologists, 2003.

At first glance the deficiency zero theorem might seem to settle all the basic questions about the dynamics of reaction networks, or stochastic Petri nets… but actually, it just means that deficiency zero reaction networks don’t display very interesting dynamics in the limit as $t \to \infty$. So, to get more interesting behavior, we need to look at reaction networks that don’t have deficiency zero.

For example, in biology it’s interesting to think about ‘bistable’ chemical reactions: reactions that have two stable equilibria. An electrical switch of the usual sort is a bistable system: it has stable ‘on’ and ‘off’ positions. A bistable chemical reaction can serve as a kind of biological switch:

• Gheorghe Craciun, Yangzhong Tang and Martin Feinberg, Understanding bistability in complex enzyme-driven reaction networks, *PNAS* **103** (2006), 8697–8702.

It’s also interesting to think about chemical reactions with stable periodic solutions. Such a reaction can serve as a biological clock:

• Daniel B. Forger, Signal processing in cellular clocks, *PNAS* **108** (2011), 4281–4285.

Quick observation: a couple instances of “infty” should be “\infty”.

And

Will read more closely now.

Thanks! Fixed!

I think I don’t understand the Petri net for the ABCDE example at all. Where is C? Does the transition on the left have no outputs?

Whoops! It’s seriously screwed up, just as you say. I’ll get it fixed. In the meantime, the recipe for turning a reaction network into a Petri net is explained pretty clearly, I hope. It’s not supposed to be tricky…

Okay, the Petri net is fixed now, I hope. Does everyone agree that this reaction network:

corresponds to this Petri net?

That looks right to me.

Yes, looks right.

Is there a whiff of Euler characteristic/homology going on with that definition of the deficiency of a reaction network?

Yes there is! I was wondering if you’d ask. I’m still a bit confused, but here’s where I am now.

Suppose first that all the reactions give linearly independent elements of $\mathbb{R}^S$. Then the dimension of the stoichiometric subspace is the number of edges of the reaction network. Then the deficiency of the reaction network is

(number of vertices) – (number of edges) – (number of connected components)

This should remind you a lot of the Euler characteristic of a simplicial complex:

(number of vertices) – (number of edges) + (number of triangles) – …

namely the alternating sum of the numbers of simplices of various dimensions.

But what about the components? Well, if you remember the relation between the category of simplices, $\Delta$, and the category of totally ordered finite sets, you’ll remember it’s a bit funny: the $n$-dimensional simplex corresponds to the set with $n+1$ elements. So, the empty set corresponds to a (-1)-dimensional simplex! Mathematicians often ignore these (-1)-dimensional simplices, but not always… and if we start with any simplicial complex, we can *introduce* (-1)-dimensional simplices by defining them to be *connected components*. If we do this, we get a modified concept of Euler characteristic for a simplicial complex, given by:

– (number of connected components) + (number of vertices) – (number of edges) + …

Does anyone think about this? Maybe it’s the Euler characteristic in ‘reduced homology’???

Anyway, the deficiency of a reaction network is this concept, specialized to graphs, but then generalized to a context where we can talk about ‘linearly dependent’ edges of a graph: instead of just counting the edges, we count the dimension of the space they span! This is the stoichiometric subspace.

I’m not sure what this means yet.

Interesting! Of course, one place where ‘deficiency’ turns up is as a synonym of defect, as in angular defect. Could there be a link to Descartes’ theorem and whether a space is hyperbolic, elliptic, etc.?

In the context of the Euler characteristic, your talk of cosets put me in mind of thinking of this situation in terms of a flow on a state space, with the whole apparatus of initial conditions, orbits, equilibria, vector field, indices, etc.

Hmm, what was the joke back here:

David,

There is a functor from a category of stochastic Petri nets to the category of polynomial dynamical systems (polynomial vector fields on finite-dimensional vector spaces). I can send you a rough draft of the construction, if you are interested.

David wrote:

Right, the ‘rate equation’ of a stochastic Petri net is a dynamical system, so we get to use all these concepts.

I get the feeling you’re trying to tell me to set up a relationship between these two results:

• the theorem saying any vector field on a compact manifold of nonzero Euler characteristic must vanish somewhere, so the dynamical system it determines must have an equilibrium solution.

• the theorem saying any weakly reversible reaction network of deficiency zero gives a rate equation with an equilibrium solution.

I’d like to relate these! As I mentioned, “deficiency” is related to “Euler characteristic”, so that’s good. But there are a couple of problems I don’t see how to overcome.

The obvious one is that the first result talks about *nonzero* Euler characteristic, while the second talks about *zero* deficiency. So things seem exactly backward!

The less obvious one is that the rate equation of a reaction network gives a dynamical system on $[0,\infty)^S$, which is noncompact and always has Euler characteristic zero. I don’t see how to relate the deficiency of the reaction network to the Euler characteristic of some compact manifold!

I think the first step would be for me to understand the proof of Feinberg’s deficiency zero theorem, which is a good thing to do even if the above thoughts are completely misguided.

Eugene wrote:

This of course is part of what I was advertising when I said:

If you get the draft to the point where you don’t mind showing it to the world, I’d love to blog about it here!

By the way, we should work out a time for you to visit and work on this stuff. Sometime in May or June or July or August next year would be good… I won’t return to California until late August or September. However, Tom Leinster is running a program on the mathematics of biodiversity at CRM in Barcelona from June 18 to July 20, and I want to attend some of that. Unfortunately working out the precise dates involves negotiations with my wife, which is the kind of thing we tend to put off. And she just got invited to spend a bunch of time in Paris next summer, which adds an extra layer of complexity to this process.

Eugene,

Sure I’d like to see that draft.

John,

Yes, a few hurdles to jump before a connection is made. There seems to be the added difference that there’s an equilibrium point for every coset, so there could be a subspace of equilibria.

How about the very simple Petri net with two species, where A transforms to B at rate r, and B transforms to A at rate s. So cosets are diagonal lines of gradient -1. For each coset there’s an equilibrium point on the line through the origin with gradient s/r.
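This little example can be checked numerically. Here’s a sketch in Python, assuming simple Euler integration of the rate equation $\dot A = -rA + sB$, $\dot B = rA - sB$: the sum $A + B$ is conserved, so the solution never leaves its stoichiometric compatibility class, and it converges to the equilibrium where $rA = sB$.

```python
def simulate(a, b, r, s, dt=0.001, steps=20000):
    """Euler integration of the rate equation for A <-> B:
    dA/dt = -r*A + s*B,  dB/dt = r*A - s*B."""
    for _ in range(steps):
        da = (-r * a + s * b) * dt
        # Updating both species by the same amount keeps A + B constant,
        # so we stay on one diagonal line (one compatibility class).
        a, b = a + da, b - da
    return a, b

a, b = simulate(a=3.0, b=1.0, r=2.0, s=1.0)
# The total A + B = 4 is conserved, and the equilibrium satisfies
# r*A = s*B, i.e. (A, B) = (4/3, 8/3) on this compatibility class.
```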

Hmm, I wonder if the conditions in the theorem are forcing the cosets to remain bounded.

Glad to see this post, John! A very trivial comment that I’m almost ashamed for making: there’s an extra “i” in “stoichiometric.” I’m eagerly looking forward to future posts on this topic from you.

Whoops! Don’t be ashamed, I’m a spelling perfectionist (my dad was an editor), so if I misspell something, I want to hear about it.

Maybe it’ll help if I look up the etymology:

Just to remind everyone, as John often has, the full network series is available here:

http://math.ucr.edu/home/baez/networks/networks.html

except for part 17 as of the time of this comment

Yes, I’m often one behind. The WordPress blog uses a different style of LaTeX than the jsmath used on my website, so I have to work to turn one into the other, which is the sort of task I save for a rainy day.

Luckily in Singapore it rains almost every day.

One general comment about Puzzles 1 and 2 (let’s see if I can successfully put a foot in my mouth again).

It seems to me that Petri nets and reaction networks both encode directed hypergraphs. Any hypergraph can be coded by a Petri net: put a transition for every hyperedge, a place for every element of the hyperedge, an arrow from a place to the transition if the place is an element of the tail (source) of the hyperedge, and an arrow from the transition to a place if the place is an element of the head (target) of the hyperedge.

Conversely, given a Petri net, for every transition form a hyperedge whose tail is the set of places with arrows to the transition and whose head is the set of places with arrows out of the transition.

It is also easy to write a dictionary between reaction networks and hypergraphs: a hyperedge is a reaction from the complex formed by the tail of the hyperedge to the complex formed by the head of the hyperedge.
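The dictionary above can be sketched in a few lines of Python. The places and transitions here are hypothetical, and note that passing through *sets* forgets multiplicities, which is one reason complexes (multisets) are the better-behaved notion:

```python
def petri_net_to_hypergraph(transitions):
    """Each transition becomes a directed hyperedge whose tail is the set
    of its input places and whose head is the set of its output places."""
    return [(frozenset(t["input"]), frozenset(t["output"])) for t in transitions]

def hypergraph_to_petri_net(hyperedges):
    """Each hyperedge becomes a transition; every place in the tail gets an
    arrow into the transition, every place in the head an arrow out of it."""
    return [{"input": set(tail), "output": set(head)} for tail, head in hyperedges]

# A transition consuming {A, B} and producing {C} round-trips:
edges = petri_net_to_hypergraph([{"input": {"A", "B"}, "output": {"C"}}])
```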

All that sounds good. We can also remember this from Part 3:

So, to specify a Petri net we first give a set of objects called **states**, and then a set of generating morphisms called **transitions**, where each transition goes from some tensor product of states to some other tensor product of states. This data determines a strict monoidal category.

To specify a reaction network we do the exact same thing! Now, however, the states are called **species** and the morphisms are called **reactions**.

Note that every object of this category is isomorphic to exactly one tensor product of states, so the category is equivalent to a ‘skeleton’ where these are the only objects. The set of objects of this skeleton is just $\mathbb{N}^S$, where $S$ is the set of states. In the literature on reaction networks, $\mathbb{N}^S$ is called the set of **complexes**.

By the way, I’ve been acting as if the sets of states and transitions are finite, and people often assume that, since we need it to prove certain theorems—but we don’t really need it in anything I just said. There are also important examples where these sets are infinite. For example, in ‘stochastic Petri net field theories’ we might take the set of states to be a lattice. Mathematical biologists can use this to describe animals moving around in space. Blake Stacey may tell us more about these sometime; he’s already shown us lots of references.

To get a *stochastic* Petri net, we let $[0,\infty)$ stand for the symmetric monoidal category with one object and the numbers in $[0,\infty)$ as morphisms, where composition and tensoring of morphisms are both given by multiplication. Then a stochastic Petri net is a strict symmetric monoidal functor from our free symmetric monoidal category to $[0,\infty)$. Since the source is freely generated by some objects and morphisms, all such a functor does is assign an arbitrary number to each transition. This number is called the **rate** of that transition. The same is true for stochastic reaction networks; there’s no real difference!

I don’t know much about hypergraphs. But if I understand what you said, we can think of a directed hypergraph as a presentation of a free strict symmetric monoidal category, just as we can think of a directed graph as a presentation of a free category.

When people use a directed graph to present a free category, they call the directed graph a ‘quiver’. A representation of this quiver is then a functor from that free category to the category of vector spaces. Since the category is free, these functors are easy to describe. We’re doing something similar now, except we’re working with symmetric monoidal categories, which are a bit more complicated, but replacing the category of vector spaces by $[0,\infty)$, which is simpler.

Thanks! I need to think about what you said. I am not very comfortable with hypergraphs either, and different sources seem to use slightly different definitions. In particular, some authors order the vertices of the source and target of a hyperedge and others don’t, which confuses me. Are places in Petri nets ordered? It seems that they should be… Hmm.

Eugene wrote:

I’ve been calling them ‘states’ or ‘species’ and calling the set of them $S$.

You’ve got an interesting point there! The set $S$ is not ordered according to any definition I’ve seen or used: my definition, one of the standard ones, is this:

**Definition.** A **Petri net** consists of a set $S$ of **states** and a set $T$ of **transitions**, together with a function $i \colon S \times T \to \mathbb{N}$ saying how many copies of each state shows up as **input** for each transition, and a function $o \colon S \times T \to \mathbb{N}$ saying how many times it shows up as **output**.

But if you’re using states and transitions as objects and morphisms of a symmetric monoidal category, you might want an order on the states that appear as inputs or outputs to a transition. Why? In a symmetric monoidal category, $x \otimes y$ need not equal $y \otimes x$; all we’ve got is a chosen isomorphism $x \otimes y \to y \otimes x$, the ‘braiding’. So, turning a transition into a morphism requires that we choose an ordering for the states (objects) showing up in this expression.

However, right now I actually think the way people are doing things is acceptable, if perhaps suboptimal in some ways. If we’re using the states and transitions to generate a free strict symmetric monoidal category, we need to choose a bunch of orderings to turn the transitions into well-defined morphisms and get a symmetric monoidal category. However, a different choice of orderings gives an isomorphic symmetric monoidal category, and this isomorphism is canonical.

There’s also a concept of ‘commutative’ monoidal category, where the braiding is the identity and thus $x \otimes y = y \otimes x$. Normally these are evil, but sometimes they’re okay! They’re important in algebraic geometry, for example.

In my post I talked a lot about the set of complexes, $\mathbb{N}^S$. This is the free commutative monoid on the set $S$, at least when $S$ is finite. And this free commutative monoid is the set of objects in the free commutative monoidal category on our states and transitions!

So, I was almost tempted to use commutative monoidal categories. I’d feel okay about this if the free commutative monoidal category on some states (objects) and transitions (morphisms) were equivalent to the free strict symmetric monoidal category on that data. But just while writing this I think I’ve seen they’re not.

And while we’re talking about ‘evil’, of course *strict* symmetric monoidal categories are also a bit evil. However, my hunch is that the free *strict* symmetric monoidal category on some objects and morphisms is equivalent to the free *weak* symmetric monoidal category on that data. If so, we can use strict ones without being punished by the math gods.

I think the categorical gadget which is most natural in this context is a “path category” of a directed hypergraph. I am not convinced yet that this “path category” will end up being a symmetric monoidal category of some sort. It may be equivalent to some flavor of such category, but let’s not rush things.

But what’s a directed hypergraph? Wikipedia seems only to define undirected hypergraphs. The definition in *Directed hypergraphs and applications* does not seem suitable. David Spivak gives two definitions in *Higher-dimensional models of networks*. One orders the vertices. The other doesn’t order them, but does not allow repetitions of vertices either. Neither of Spivak’s definitions (which have the advantage of being presheaves) quite fits our needs.

Here is a proposal of a definition of a hypergraph associated with the sets of species and of transitions: it is a pair of maps $s, t \colon T \to \mathbb{N}^S$ from the set $T$ of transitions to the set $\mathbb{N}^S$ of complexes. Hmm, I guess I just defined a reaction network.

I am not sure how to define paths in such a hypergraph. You could just look at paths in the reaction network graph. But it doesn’t feel optimal (maybe it would if I had more sleep). What looks right is made up of paths of length 4 starting (and ending) at species in the corresponding Petri net. I have trouble being articulate.

Eugene wrote:

It’s not rushing things for me to say: a reaction network, or Petri net, is a presentation of a symmetric monoidal category. The morphisms in this category are the ‘Feynman diagrams’ I’ve been drawing, like this:

If this isn’t true, then I’ll tweak the definition of Petri net until it is! The whole point of my research here is to show how various kinds of ‘networks’ in applied math are related to various kinds of symmetric monoidal categories. I started with Petri nets because they play a basic role: they let us describe a symmetric monoidal category generated by some objects (things) and morphisms (processes).

Concepts like ‘hypergraph’ may be a bit fidgety and *ad hoc*: I listed 8 different concepts of ‘graph’ and there could be more for ‘hypergraph’. But the concept of symmetric monoidal category is inevitable as soon as you ponder a bunch of things that can interact and also trade places (without getting tangled up). So that’s solid bedrock, as far as I’m concerned. And the connection to Petri nets is discussed here:

• Vladimiro Sassone, On the category of Petri net computations, *6th International Conference on Theory and Practice of Software Development*, Proceedings of TAPSOFT ’95, Lecture Notes in Computer Science **915**, Springer, Berlin, pp. 334–348.

I’m sorry if I sound a bit fierce, but this is my core philosophy. Other approaches are of course worth exploring, but this is mine.

It is a fierce comment. Since I don’t know what you know, I don’t see what you see. I do want to understand *why* you believe what you believe, hence the questions and musings. My comments are not meant as a challenge.

I tried looking at Sassone’s paper when you first linked to it months ago. It feels like I don’t have the background to understand it. I am hoping you plan to explain it here at some point.

This is a bit like saying: I have pondered systems of things that interact and can trade places, and I see answers that don’t look like symmetric monoidal categories.

There is the theory of coupled cell networks of Golubitsky, Stewart and their collaborators, where symmetry is modeled as a groupoid symmetry of interactions.

Recently a colleague of mine in engineering got me thinking about hybrid systems. While these kinds of systems are related to directed graphs, they are rather different from what people normally call

[…] Last time we explained how ‘reaction networks’, as used in chemistry, are just another way of talking about Petri nets. We stated an amazing result on reaction networks: Feinberg’s deficiency zero theorem. This settles quite a number of questions about chemical reactions. Now let’s illustrate it with an example. […]

Why are you referring to the deficiency as that of a reaction complex, rather than that of a reaction network?

That was just a silly mistake: there’s nothing called a ‘reaction complex’. Fixed! Thanks.

[…] Reaction networks versus Petri nets; the deficiency zero theorem, Network Theory Part 17 […]