*guest post by Manoj Gopalkrishnan*

Since this is my first time writing a blog post here, let me start with a word of introduction. I am a computer scientist at the Tata Institute of Fundamental Research, broadly interested in connections between Biology and Computer Science, with a particular interest in reaction networks. I first started thinking about them during my Ph.D. at the Laboratory for Molecular Science. My fascination with them has been predominantly mathematical. As a graduate student, I encountered an area with rich connections between combinatorics and dynamics, and surprisingly easy-to-state and compelling unsolved conjectures, and got hooked.

There is a story about Richard Feynman that he used to take bets with mathematicians. If any mathematician could make Feynman understand a mathematical statement, then Feynman would guess whether or not the statement was true. Of course, Feynman was in the habit of winning these bets, which allowed him to boast that mathematics, especially in its obsession with proof, was essentially irrelevant, since a relative novice like himself could, after a moment’s thought, guess at the truth of these mathematical statements. I have always felt Feynman’s claim to be unjust, but have often wondered what mathematical statement I would put to him so that his chances of winning were no better than random.

Today I want to tell you of a result about reaction networks that I have recently discovered with Abhishek Deshpande. The statement seems like a fine candidate to throw at Feynman because until we proved it, I would not have bet either way about its truth. Even after we obtained a short and elementary proof, I do not completely ‘see’ why it must be true. I am hoping some of you will be able to demystify it for me. So, I’m just going to introduce enough terms to be able to make the statement of our result, and let you think about how to prove it.

John and his colleagues have been talking about reaction networks as Petri nets in the network theory series on this blog. As discussed in part 2 of that series, a Petri net is a diagram like this:

Following John’s terminology, I will call the aqua squares ‘transitions’ and the yellow circles ‘species’. If we have some number #rabbit of rabbits and some number #wolf of wolves, we draw #rabbit many black dots called ‘tokens’ inside the yellow circle for rabbit, and #wolf tokens inside the yellow circle for wolf, like this:

Here #rabbit = 4 and #wolf = 3. The predation transition consumes one ‘rabbit’ token and one ‘wolf’ token, and produces two ‘wolf’ tokens, taking us here:

John explained in parts 2 and 3 how one can put rates on different transitions. For today I am only going to be concerned with ‘reachability’: which token states are reachable from which other token states. John talked about this idea in part 25.

By a **complex** I will mean a population vector: a snapshot of the number of tokens in each species. In the example above, (#rabbit, #wolf) is a complex. If x and y are two complexes, then we write

x → y

if we can get from x to y by a single transition in our Petri net. For example, we just saw that

(4, 3) → (3, 4)

via the predation transition.

**Reachability**, denoted ↠, is the transitive closure of the relation →. So x ↠ y (read y **is reachable from** x) iff there are complexes

x = x₀, x₁, x₂, …, xₙ = y

such that

x₀ → x₁ → x₂ → ⋯ → xₙ.

For example, (4, 3) ↠ (0, 7) here by repeated predation.
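To make the reachability relation concrete, here is a small Python sketch (the encoding of transitions and states is my own, not from the post) that fires transitions on token-count vectors and checks reachability by breadth-first search, with predation as the only rule:

```python
from collections import deque

# A transition is a pair (inputs, outputs) of token-count dicts.
# Predation: one rabbit and one wolf in, two wolves out.
predation = ({"rabbit": 1, "wolf": 1}, {"wolf": 2})

def fire(state, transition):
    """Fire a transition on a state (dict species -> token count),
    returning the new state, or None if not enough tokens."""
    inputs, outputs = transition
    if any(state.get(s, 0) < n for s, n in inputs.items()):
        return None  # transition is not enabled
    new = dict(state)
    for s, n in inputs.items():
        new[s] -= n
    for s, n in outputs.items():
        new[s] = new.get(s, 0) + n
    return new

def reachable(start, goal, transitions):
    """Breadth-first search over the states reachable from `start`
    by repeatedly firing transitions (finite here, since predation
    conserves the total token count)."""
    seen = set()
    queue = deque([start])
    while queue:
        state = queue.popleft()
        key = tuple(sorted(state.items()))
        if key in seen:
            continue
        seen.add(key)
        if state == goal:
            return True
        for t in transitions:
            nxt = fire(state, t)
            if nxt is not None:
                queue.append(nxt)
    return False
```

Starting from #rabbit = 4 and #wolf = 3, repeated predation reaches (0, 7) but can never reach a state with more rabbits.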

I am very interested in switches. After all, a computer is essentially a box of switches! You can build computers by connecting switches together. In fact, that’s how early computers like the Z3 were built. The CMOS gates at the heart of modern computers are essentially switches. By analogy, the study of switches in reaction networks may help us understand biochemical circuits.

A **siphon** is a set of species that is ‘switch-offable’. That is, if there are no tokens in the siphon states, then they will remain absent in future. Equivalently, the only reactions that can produce tokens in the siphon states are those that require tokens from the siphon states before they can fire. Note that no matter how many rabbits there are, if there are no wolves, there will continue to be no wolves. So {wolf} is a siphon. Similarly, {rabbit} is a siphon, as is the union {rabbit, wolf}. However, when Hydrogen and Oxygen form Water, {Water} is not a siphon.
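For a finite Petri net the siphon condition can be checked transition by transition: a set Z is a siphon iff every transition that outputs a species in Z also takes some species in Z as input. A sketch (the encoding is mine; I assume, as in the pictured net, a birth transition rabbit → 2 rabbit alongside predation, and the water example is taken to be 2 H2 + O2 → 2 H2O):

```python
# A transition is a pair (inputs, outputs) of token-count dicts.
predation = ({"rabbit": 1, "wolf": 1}, {"wolf": 2})
birth = ({"rabbit": 1}, {"rabbit": 2})       # rabbit -> 2 rabbit (assumed)
formation = ({"H2": 2, "O2": 1}, {"H2O": 2})  # 2 H2 + O2 -> 2 H2O (assumed)

def is_siphon(species_set, transitions):
    """Z is a siphon iff every transition that outputs a species in Z
    also consumes some species in Z; an empty Z then stays empty."""
    for inputs, outputs in transitions:
        produces_in_z = any(s in species_set for s in outputs)
        consumes_in_z = any(s in species_set for s in inputs)
        if produces_in_z and not consumes_in_z:
            return False
    return True
```

With this check, {wolf}, {rabbit} and {rabbit, wolf} all pass, while {H2O} fails, matching the examples above.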

For another example, consider this Petri net:

The set {HCl, NaCl} is a siphon. However, there is a conservation law: whenever an HCl token is destroyed, an NaCl token is created, so that #HCl + #NaCl is invariant. If both HCl and NaCl were present to begin with, the complexes where both are absent are not reachable. In this sense, this siphon is not ‘really’ switch-offable. As a first pass at capturing this idea, we will introduce the notion of ‘critical set’.

A **conservation law** is a linear expression involving the numbers of tokens that is invariant under every transition in the Petri net. A conservation law is **positive** if all the coefficients are non-negative. A **critical set** of species is a set that does not contain the support of a positive conservation law (the support being the set of species with nonzero coefficients).

For example, the support of the positive conservation law #HCl + #NaCl is {HCl, NaCl}, and hence no set containing this set is critical. Thus {HCl, NaCl} is a siphon, but not critical. On the other hand, the set {NaCl} is critical but not a siphon. {HCl} is a critical siphon. And in our other example, {rabbit, wolf} is a critical siphon.
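Checking a proposed conservation law is one dot product per transition once you write down net stoichiometric change vectors. A sketch of my own (I am assuming the pictured net is the neutralization HCl + NaOH → NaCl + H2O, which is consistent with the invariance of #HCl + #NaCl described above):

```python
# Net stoichiometric change per transition (outputs minus inputs).
# Assumed reaction: HCl + NaOH -> NaCl + H2O.
neutralization = {"HCl": -1, "NaOH": -1, "NaCl": +1, "H2O": +1}

def is_conservation_law(coeffs, deltas):
    """A linear expression sum_s coeffs[s] * #s is a conservation law
    iff it is orthogonal to every transition's net change vector."""
    return all(
        sum(c * delta.get(s, 0) for s, c in coeffs.items()) == 0
        for delta in deltas
    )
```

Here #HCl + #NaCl passes while #NaCl alone does not; a set is then critical exactly when no such law with non-negative coefficients has all of its nonzero coefficients inside the set.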

Of particular interest to us will be **minimal critical siphons**, the minimal sets among critical siphons. Consider this example:

Here we have two transitions:

X → 2Y

and

2X → Y

The set {X, Y} is a critical siphon. But so is the smaller set {X}. So {X, Y} is not minimal.

We define a **self-replicable** set to be a set of species S such that there exist complexes c and c′, with c′ reachable from c, such that for every species s in S we have

c′(s) > c(s).

So, there are transitions that accomplish the job of creating more tokens for all the species in S. In other words: these species can ‘replicate themselves’.

We define a **drainable** set by changing the > to a <. So, there are transitions that accomplish the job of *reducing* the number of tokens for all the species in S. These species can ‘drain away’.
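At the level of net stoichiometry, both properties can be probed by brute force: search for non-negative integer firing counts whose combined effect is strictly negative (or strictly positive) on every species in the set; given enough tokens to start with, any such combination can actually be scheduled. A sketch of my own, taking the two transitions of the example above to be X → 2Y and 2X → Y (an assumption on my part):

```python
from itertools import product

# Net change vectors over the species order ("X", "Y"):
#   X -> 2Y : (-1, +2)      2X -> Y : (-2, +1)
species = ("X", "Y")
deltas = [(-1, +2), (-2, +1)]

def combos(deltas, max_fires=6):
    """Net effects of firing each transition up to max_fires times
    (at least one firing in total)."""
    for counts in product(range(max_fires + 1), repeat=len(deltas)):
        if sum(counts) == 0:
            continue
        yield tuple(
            sum(c * d[i] for c, d in zip(counts, deltas))
            for i in range(len(species))
        )

def self_replicable(subset, deltas):
    """Some firing combination strictly increases every species in subset."""
    idx = [species.index(s) for s in subset]
    return any(all(net[i] > 0 for i in idx) for net in combos(deltas))

def drainable(subset, deltas):
    """Some firing combination strictly decreases every species in subset."""
    idx = [species.index(s) for s in subset]
    return any(all(net[i] < 0 for i in idx) for net in combos(deltas))
```

The search confirms that {X} is drainable, {Y} is self-replicable, and {X, Y} is neither.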

Now here comes the statement:

*Every minimal critical siphon is either drainable or self-replicable!*

We prove it in this paper:

• Abhishek Deshpande and Manoj Gopalkrishnan, Autocatalysis in reaction networks.

But first note that the statement becomes false if the critical siphon is not minimal. Look at this example again:

The set {X, Y} is a critical siphon. However, {X, Y} is neither self-replicable (since every reaction destroys X) nor drainable (since every reaction produces Y). But we’ve already seen that {X, Y} is not minimal. It has a critical subsiphon, namely {X}. This one *is* minimal—and it obeys our theorem, because it is drainable.

Checking these statements is a good way to make sure you understand the concepts! I know I’ve introduced a lot of terminology here, and it takes a while to absorb.

Anyway: our proof that every minimal critical siphon is either drainable or self-replicable makes use of a fun result about matrices. Consider a real square matrix with a sign pattern like this:

If the matrix is full-rank then there is a positive linear combination of the rows of the matrix so that all the entries are nonzero and have the same sign. In fact, we prove something stronger in Theorem 5.9 of our paper. At first, we thought this statement about matrices should be equivalent to one of the many well-known alternative statements of Farkas’ lemma, like Gordan’s theorem.

However, we could not find a way to make this work, so we ended up proving it by a different technique. Later, my colleague Jaikumar Radhakrishnan came up with a clever proof that uses Farkas’ lemma twice. However, so far we have not obtained the stronger result in Theorem 5.9 with this proof technique.
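The figure showing the sign pattern has not survived here, so as a stand-in, here is a toy search (my own example matrix, not the pattern from the paper) for a positive combination of rows all of whose entries are nonzero with one common sign:

```python
from itertools import product

def positive_same_sign_combo(rows, max_weight=5):
    """Search small positive integer weights w for which the combination
    sum_i w[i] * rows[i] has all entries nonzero with a common sign."""
    n = len(rows)
    for w in product(range(1, max_weight + 1), repeat=n):
        combo = [sum(wi * r[j] for wi, r in zip(w, rows))
                 for j in range(len(rows[0]))]
        if all(x > 0 for x in combo) or all(x < 0 for x in combo):
            return w, combo
    return None

# A full-rank toy matrix; weights (1, 1) yield (-1, -1), all negative.
rows = [(1, -2), (-2, 1)]
```

For genuine instances one would replace the brute-force search with linear programming, which is where the connection to Farkas’ lemma comes from.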

My interest in the result that every minimal critical siphon is either drainable or self-replicable is not purely aesthetic (though aesthetics is a big part of it). There is a research community of folks who are thinking of reaction networks as a programming language, and synthesizing molecular systems that exhibit sophisticated dynamical behavior as per specification:

• International Conference on DNA Computing and Molecular Programming.

• Foundations of Nanoscience: Self-Assembled Architectures and Devices.

• Molecular Programming Architectures, Abstractions, Algorithms and Applications.

Networks that exhibit some kind of catalytic behavior are a recurring theme among such systems, and even more so in biochemical circuits.

Here is an example of catalytic behavior:

The ‘catalyst’ helps transform one species into another. In the absence of the catalyst, the reaction is turned off. Hence, catalysts are switches in chemical circuits! From this point of view, it is hardly surprising that they are required for the synthesis of complex behaviors.

In information processing, one needs amplification to make sure that a signal can propagate through a circuit without being overwhelmed by errors. Here is a chemical counterpart to such amplification:

Here the catalyst catalyzes its own production: it is an ‘autocatalyst’, or a self-replicating species. By analogy with amplification, autocatalysis is key for scaling up synthetic molecular systems.

Our work deals with these notions on a network level. We generalize the notion of catalysis in two ways. First, we allow a catalyst to be a set of species instead of a single species; second, its absence can turn off a reaction pathway instead of a single reaction. We propose the notion of self-replicable siphons as a generalization of the notion of autocatalysis. In particular, ‘weakly reversible’ networks have critical siphons precisely when they exhibit autocatalytic behavior. I was led to this work when I noticed the manifestation of this last statement in many examples.

Another hope I have is that perhaps one can study the dynamics of each minimal critical siphon of a reaction network separately, and then somehow be able to answer interesting questions about the dynamics of the entire network, by stitching together what we know for each minimal critical siphon. On the synthesis side, perhaps this could lead to a programming language to synthesize a reaction network that will achieve a specified dynamics. If any of this works out, it would be really cool! I think of how abelian group theory (and more broadly, the theory of abelian categories, which includes categories of vector bundles) benefits from a fundamental theorem that lets you break a finite abelian group into parts that are easy to study—or how number theory benefits from a special case, the fundamental theorem of arithmetic. John has also pointed out that reaction networks are really presentations of symmetric monoidal categories, so perhaps this could point the way to a Fundamental Theorem for Symmetric Monoidal Categories.

And then there is the Global Attractor Conjecture, a long-standing open problem concerning the long-term behavior of solutions to the rate equations. Now that is a whole story by itself, and will have to wait for another day.

I don’t understand why HCl is not a siphon by this definition. No transitions in the Petri net create it. Are we assuming that (via a transition not pictured) H and Cl create HCl?

Hi cata,

HCl *is* a siphon in this Petri net.

Thanks for pointing this out, Cata. I think this is an error.

Okay, I’ll fix it. I’ll say HCl is a critical siphon. And I’ll say, just for explanatory purposes, that NaCl is critical, but not a siphon.

By the way, what’s the time-reversed version of the concept of ‘siphon’? In other words, what do people call a collection of species such that, if there are no tokens in these species, there could not have been any in the past?

(Every Petri net has a ‘time-reversed version’, where we take all the transitions and turn their inputs into outputs and vice versa. This means every concept in Petri net theory has a time-reversed version. For example, I believe the time-reversed version of ‘self-replicable’ is ‘drainable’. In symmetric monoidal category language, the time-reversed version of a Petri net gives the ‘opposite’ category.)
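Mechanically, the time-reversed net just swaps each transition’s inputs and outputs; a one-line sketch in the dict encoding used earlier (my own encoding):

```python
# A transition is a pair (inputs, outputs) of token-count dicts.
predation = ({"rabbit": 1, "wolf": 1}, {"wolf": 2})

def reverse_net(transitions):
    """Time-reverse a Petri net by swapping inputs and outputs.
    Drainable sets of the original become self-replicable sets of
    the reverse, and vice versa."""
    return [(outputs, inputs) for inputs, outputs in transitions]
```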

John, I believe such objects have been studied and are called “traps.” Take a look at Definition 5 of a “trap” in this book chapter. (I may be wrong, but it seems to me on a quick read right now that the authors may have inadvertently gotten the two definitions reversed.)

David Angeli, Patrick De Leenheer and Eduardo Sontag, “A Petri Net Approach to Persistence Analysis in Chemical Reaction Networks”:

http://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.152.627&rep=rep1&type=pdf

The same authors also say,

“Traps for Petri-Nets enjoy the following invariance property: if a trap is non-empty at time zero (meaning that at least one of its places has tokens), then the trap is non-empty at all future times. In contrast, in a continuous set-up (when tokens are not integer quantities but may take any real value), satisfaction of the siphon-trap property does not prevent (in general) concentrations of species from decaying to zero asymptotically.”

in this paper:

A Petri net approach to the study of persistence in chemical reaction networks.

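Traps are the mirror-image check of siphons: every transition that consumes a species in the trap must also produce one. A sketch of the condition in the same encoding as before (my own code, not from the Angeli–De Leenheer–Sontag paper):

```python
# A transition is a pair (inputs, outputs) of token-count dicts.
predation = ({"rabbit": 1, "wolf": 1}, {"wolf": 2})

def is_trap(species_set, transitions):
    """T is a trap iff every transition that consumes a species in T
    also produces some species in T; a non-empty T then stays
    non-empty under every firing."""
    for inputs, outputs in transitions:
        consumes_in_t = any(s in species_set for s in inputs)
        produces_in_t = any(s in species_set for s in outputs)
        if consumes_in_t and not produces_in_t:
            return False
    return True
```

Under predation alone, {wolf} is a trap but {rabbit} is not: predation eats the rabbit without replacing it.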

Manoj, thanks for the nicely written blog post. Where did Feynman say that Mathematics is irrelevant? Somehow I am sceptical about the truth of that statement.

But regardless of whether Feynman’s first statement is true, I think there are indeed statements whose proofs look “trivial” because they seem so intuitive. However, first, this is often only true if you find the right way to state them clearly, so that you are able to build up an intuition. Secondly, intuition can still be wrong.

However for the above case I would suggest the following translations:

“siphon”: a group which, once broke, is broke forever.

“conservation law”: a group which forms an “island of stability” by having a net constant wealth.

“Every minimal critical siphon is either drainable or self-replicable”: every smallest possible siphon group which doesn’t contain an island of stability will either feed itself on others or go bust.

And I suspect that those minimal critical siphons which have more arrows pointing out than in go bust.

Thanks nad, that’s a very interesting way of rephrasing the results! Does your explanation extend to the question of why this is not true for all critical siphons? Any insight on how a critical siphon is made up of minimal critical siphons?

As for the Feynman reference, I remember reading this as an anecdote, possibly in “Surely You’re Joking, Mr. Feynman!”? It’s possible my interpretation of the anecdote may have crept in. If someone can find the original anecdote, we can compare.

My “translation” for conservation law should be made more precise by adding:

“conservation law”: a group which forms an “island of stability” by having a net constant wealth via one fixed but arbitrary pattern of exchange.

That is, in the X, Y example one would, if one applied both transitions and not just one, keep the total number of tokens constant. Or in other words, X + Y satisfies a kind of “total conservation law.” So the X, Y example is critical, but would not be totally critical. I could imagine that eventually, by demanding “total conservation” on a support, one might get around the minimality condition, but this is just a rough guess.

Manoj was citing a famous story about Feynman from one of his books, either *Surely You’re Joking, Mr. Feynman!* or *What Do You Care What Other People Think?* Unfortunately I can’t find this anecdote online. And you have to be very careful when extracting general messages from Feynman’s anecdotes… because he enjoyed joking around. For example, he once quipped that if all mathematics disappeared, physics would be set back exactly one week. Here he is having fun annoying self-important mathematicians. On the other hand, he also said that to those who do not know mathematics it is difficult to get across a real feeling for the deepest beauty of nature. (This is from *The Character of Physical Law*.)

It is interesting: a state that is self-replicable is the definition of life (biological or of another type).

So can life be defined as a minimal critical siphon?

A siphon has only arcs that point out of the siphon, or that are cyclic (self-arcs): only then can its species not increase from the empty state.

These minimal siphons must drain, or self-replicate: there are only arcs that point out of the siphon, or that point back into it (is a minimal cut of the arcs, for variable states, equal to a minimal critical set?).

This reminds me of allele equations that I wrote for gene reactions:

AA(t+1) = AA(t)-AA(t)aa(t)+Aa(t)Aa(t)/2

Aa(t+1) = Aa(t)+2AA(t)aa(t)-Aa(t)Aa(t)

aa(t+1) = aa(t)-AA(t)aa(t)+Aa(t)Aa(t)/2

which can be viewed as a possible siphon for some alleles (allele disappearance, or self-replication).
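The recurrence above can be iterated directly; the three updates sum to zero, so the total AA + Aa + aa is conserved, which a quick numerical check confirms (my own code):

```python
def step(AA, Aa, aa):
    """One step of the allele recurrence from the comment above."""
    dAA = -AA * aa + Aa * Aa / 2
    dAa = 2 * AA * aa - Aa * Aa
    daa = -AA * aa + Aa * Aa / 2
    return AA + dAA, Aa + dAa, aa + daa

# Iterate from Hardy-Weinberg-style proportions; the total stays 1.
state = (0.25, 0.5, 0.25)
for _ in range(10):
    state = step(*state)
```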

As per the paper, I think the notion of reachability corresponds to the transitive closure of dilutions, which are more general than just reactions. So a state A is reachable from state B iff there is a sequence of dilutions leading from B to A.

What’s a ‘dilution’?

I don’t know that term, but I have a guess as to what you mean. If we have a transition

wolf + rabbit → wolf + wolf

in our Petri net, we can also use this transition to turn

wolf + rabbit + rabbit

into

wolf + wolf + rabbit

(One rabbit stands by and watches the other get eaten.) Manoj actually gave an example of this sort when explaining the concept of →:

So, his use of → does not correspond precisely to a transition, but rather to a transition ‘with bystanders’. Is this what you’re calling ‘dilution’?

Yes, this is precisely what I mean by ‘dilution’. Thanks for the clarification.

Thanks. Is ‘dilution’ standard terminology in some community? Part of my job is to learn all the jargon that everyone uses: we’ve got computer scientists studying Petri nets, chemists studying reaction networks, etc.

No, ‘dilution’ is not a standard term. It’s just a term that Manoj and I have introduced in our paper. I just felt that when we talk about reaction networks in general, one should make a distinction between a dilution and a reaction.

Thanks. There certainly needs to be a distinction, at least in technical work.

John, I got the term “dilution” from Definition 1 in this paper by Luca Cardelli. As far as I know, he invented the term.

Luca Cardelli, “Strand algebras for DNA Computing”


A few weeks back, I promised to tell you more about a long-standing open problem in reaction networks, the ‘global attractor conjecture’. I am not going to quite get there today, but we shall take one step in that direction. Today’s plan is to help you make friends with a very useful function we will call the ‘free energy’, which comes up all the time in the study of chemical reaction networks. […]