Chemical Reaction Network Talks

26 June, 2014

A while ago I blogged about David Soloveichik’s talk at this workshop:

Programming with Chemical Reaction Networks: Mathematical Foundations, Banff International Research Station, 8-13 June 2014.

Now the slides for his talk are available:

• David Soloveichik, U.C. San Francisco, The computational power of chemical reaction networks.

And now I’d like to tell you about three more talks!

The first two are about ways one chemical reaction network can simulate another. This is important for a couple of reasons. First, in biology, a bunch of different chemical reactions can ‘accomplish the same task’—and we’d like to make this idea precise. That’s what Luca Cardelli spoke about. Second, people trying to do computation with chemistry are starting to simulate quite general reactions using DNA! That’s what Seung Woo Shin spoke about.

Luca Cardelli

Luca Cardelli was at Oxford when I was visiting this spring, but unfortunately I didn’t meet him! He works for Microsoft on the interface of biology and computation. At Banff, he talked about ways one chemical reaction network can simulate another. His slides are here:

• Luca Cardelli, Morphisms of reaction networks.

He has a paper that gives a more detailed explanation of these ideas:

• Luca Cardelli, Morphisms of reaction networks that couple structure to function.

Here is my own disorganized explanation… with lots of informative but confusing digressions. A population protocol is a chemical reaction network with only 2-in, 2-out reactions. For example, this paper presents a population protocol that does ‘approximate majority detection’:

• Dana Angluin, James Aspnes, and David Eisenstat, A simple population protocol for fast robust approximate majority, Distributed Computing 21 (2008), 87–102.

What’s the idea? We start with two kinds of molecules, say x’s and y’s, and we want to see which one is in the majority, so we run these chemical reactions:

x + y \to x + b

x + y \to y + b

x + b \to 2x

y + b \to 2y

See? All the reactions have 2 molecules going in and 2 going out. The b molecules act as ‘undecided voters’ who become either an x or a y, depending on who they meet first.

If we start with about n molecules, in O(n \log n) time these reactions are very likely to convert all x’s and y’s to whatever kind of molecule was in the majority initially… at least if the gap in the number of x’s and y’s is big enough.
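Here’s a minimal Python sketch (mine, not from the talk) that simulates this protocol by picking random pairs of molecules and applying whichever reaction fits. The population sizes and seeds are made up:

```python
import random
from collections import Counter

def approximate_majority(n_x, n_y, seed=0):
    """Simulate the 2-in, 2-out approximate majority protocol:
    x + y -> x + b,  x + y -> y + b,  x + b -> 2x,  y + b -> 2y.
    Returns the winning species once the population is unanimous."""
    rng = random.Random(seed)
    pop = ['x'] * n_x + ['y'] * n_y
    while len(set(pop)) > 1:
        i, j = rng.sample(range(len(pop)), 2)   # a random collision
        a, b = pop[i], pop[j]
        if {a, b} == {'x', 'y'}:
            pop[rng.choice([i, j])] = 'b'       # one opinion becomes undecided
        elif 'b' in (a, b) and {a, b} != {'b'}:
            winner = a if b == 'b' else b
            pop[i] = pop[j] = winner            # undecided voter adopts an opinion
    return pop[0]

print(Counter(approximate_majority(60, 40, seed=s) for s in range(100)))
# with a 60/40 split, 'x' should win most runs
```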

Here’s another population protocol that also does the job:

x + y \to 2b

x + b \to 2x

y + b \to 2y

And here’s a proof that one of these algorithms actually works—most of the time, when the initial difference in populations is big enough:

• Etienne Perron, Dinkar Vasudevan, and Milan Vojnović, Using three states for binary consensus on complete graphs, Technical Report, MSR-TR-2008-114, Microsoft, September 2008.

If we use a discrete-time formalism to describe the dynamics, the proof seems to get harder. See the paper by Angluin, Aspnes, and Eisenstat for the only known proof!

Anyway, Luca Cardelli is interested in chemical reaction networks actually found in biology. This approximate majority algorithm is seen quite clearly in a certain biological system: a certain ‘epigenetic switch’. However, it is usually ‘obfuscated’ or ‘encrypted’: hidden in a bigger, more complicated chemical reaction network. For example, see:

• Luca Cardelli and Attila Csikász-Nagy, The cell cycle switch computes approximate majority, Scientific Reports 2 (2012).

This got him interested in developing a theory of morphisms between reaction networks, which could answer questions like: When can one CRN emulate another? But these questions turn out to be much easier with the rate equation than with the master equation. So, he asks: when can one CRN give the same rate equation as another?

He found a sufficient condition that’s ‘purely syntactic’: you can tell if it holds by looking at the reaction networks, regardless of the rate constants.

Here’s the idea. We say one network emulates another if for any choice of rate constants for the second, we can find rate constants for the first that make its rate equation have solutions exactly mimicking those of the second, but where several species in the first correspond to one in the second.

For this to make sense, we assume there is a map sending:

• species to species
• reactions to reactions

In a chemical reaction network homomorphism, the map on reactions is determined by the map on species in the obvious way. For example, if species A is sent to f(A) and species B is sent to f(B) then the reaction

2A + B \to 3B

is sent to the reaction

2f(A) + f(B) \to 3 f(B)

In this situation, to make the first network emulate the second, we need to set equal the initial concentrations of all species in the inverse image of a given species.
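To make this concrete, here’s a tiny Python sketch of how a species map induces a map on reactions. The encoding—complexes as Counters—is my own, not Cardelli’s:

```python
from collections import Counter

# A complex is a Counter mapping species to stoichiometric coefficients;
# a reaction is a (reactants, products) pair of complexes.

def map_complex(c, f):
    out = Counter()
    for species, coeff in c.items():
        out[f[species]] += coeff
    return out

def map_reaction(reaction, f):
    """Push a reaction forward along a species map f (a dict)."""
    reactants, products = reaction
    return (map_complex(reactants, f), map_complex(products, f))

# the example from the text: 2A + B -> 3B, collapsing both species to one species Z
f = {'A': 'Z', 'B': 'Z'}
r = (Counter({'A': 2, 'B': 1}), Counter({'B': 3}))
print(map_reaction(r, f))   # (Counter({'Z': 3}), Counter({'Z': 3}))
```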

A reactant homomorphism from one chemical reaction network to another is more general: it sends species to species, and for any reaction in the first chemical reaction network with input

A + B + C + \cdots

there’s a reaction in the second with input

f(A) + f(B) + f(C) + \cdots

(Reactant is another name for input.)

A stoichiomorphism is a kind of morphism that takes rate constants into account. See Cardelli’s paper for the definition.

The main theorem: if there is a stoichiomorphism from one chemical reaction network to another that is also a reactant homomorphism, then the first network emulates the second.

For a better explanation, read his paper! Here’s a cool picture from his paper showing a bunch of chemical reaction networks including the approximate majority network (labelled AM), many of which show up in biology, and morphisms between them:


Click to enlarge! These chemical reaction networks are drawn in a special style: as influence networks, consisting of ‘gates’ where one process activates or deactivates another. Each gate is a chemical reaction network of a certain form, schematically like this:

\mathrm{off} \leftrightarrow \mathrm{intermediate} \leftrightarrow \mathrm{on}

where the forward reactions are catalyzed by one chemical and the reverse reactions are catalyzed by another. A gate is like a switch that can be turned on or off.
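For concreteness, here is my own guess at such a gate written out as a 2-in, 2-out network, with one chemical A catalyzing the forward reactions and another chemical B catalyzing the reverse ones (a schematic rendering, not necessarily Cardelli’s exact scheme):

\mathrm{off} + A \to \mathrm{intermediate} + A, \qquad \mathrm{intermediate} + A \to \mathrm{on} + A

\mathrm{on} + B \to \mathrm{intermediate} + B, \qquad \mathrm{intermediate} + B \to \mathrm{off} + B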

While listening to this talk, I thought the way in which one CRN emulates another in Cardelli’s formalism looks awfully similar to the way one dynamical system emulates another in Eugene Lerman’s formalism:

• Eugene Lerman, Networks of dynamical systems, Azimuth, 18 March 2014.

The following picture from Cardelli’s paper shows that one of his morphisms of reaction networks is like a ‘covering map’. This reminds me a lot of what’s happening in Lerman’s work.


Again, click to enlarge!

Seung Woo Shin

Seung Woo Shin was actually Brendan Fong’s roommate at the National University of Singapore while Brendan was working with me on chemical reaction networks. Apparently they never talked about their work!

Shin spoke about some other concepts of ‘morphism’ between chemical reaction networks. These other concepts do not involve reaction rates, just which chemicals can turn into which. You can see his slides here:

• Seung Woo Shin, Verifying CRN implementations.

and read his thesis for more details:

• Seung Woo Shin, Compiling and verifying DNA-based chemical reaction network implementations, Master’s thesis, Caltech, 2012.

Abstract: One goal of molecular programming and synthetic biology is to build chemical circuits that can control chemical processes at the molecular level. Remarkably, it has been shown that synthesized DNA molecules can be used to construct complex chemical circuits that operate without any enzyme or cellular component. However, designing DNA molecules at the individual nucleotide base level is often difficult and laborious, and thus chemical reaction networks (CRNs) have been proposed as a higher-level programming language. So far, several general-purpose schemes have been described for designing synthetic DNA molecules that simulate the behavior of arbitrary CRNs, and many more are being actively investigated.

Here, we solve two problems related to this topic. First, we present a general-purpose CRN-to-DNA compiler that can apply user-defined compilation schemes for translating formal CRNs to domain-level specifications for DNA molecules. In doing so, we develop a language in which such schemes can be concisely and precisely described. This compiler can greatly reduce the amount of tedious manual labor faced by researchers working in the field. Second, we present a general method for the formal verification of the correctness of such compilation. We first show that this problem reduces to testing a notion of behavioral equivalence between two CRNs, and then we construct a mathematical formalism in which that notion can be precisely defined. Finally, we provide algorithms for testing that notion. This verification process can be thought of as an equivalent of model checking in molecular computation, and we hope that the generality of our verification techniques will eventually allow us to apply them not only to DNA-based CRN implementations but to a wider class of molecular programs.

His thesis built on this earlier paper:

• David Soloveichik, Georg Seelig and Erik Winfree, DNA as a universal substrate for chemical kinetics, Proceedings of the National Academy of Sciences (2010).

I think this work is fascinating and deeply related to category theory, so I talked to Shin and Winfree about it, and this is what we came up with:

CRN equivalences: progress report.

This is one of several reports on progress people at the workshop made on various open problems.

David Anderson

Brendan Fong and I wrote about David Anderson’s work in Part 9 of the network theory series. It’s so impressive that I expected him to be older… older than me, I guess. He’s not!

In his tutorial, he gave an overview of chemical reaction networks with an emphasis on the deficiency zero theorem. Since many people were puzzled by the ‘deficiency’ concept, they asked lots of questions. But I’ve already explained that idea in Part 21. So, I’ll just mention a couple of cool theorems he told us about!

Theorem (Horn and Jackson). If a reaction network has a complex balanced equilibrium, then:

1. It has no equilibria that are not complex balanced.

2. The reaction network must be weakly reversible.

3. Every stoichiometric compatibility class contains precisely one complex balanced equilibrium.

I should have known this, since this work is classic. But I don’t think I knew that the existence of one complex balanced equilibrium implied all this stuff!

He also mentioned this paper:

• Guy Shinar and Martin Feinberg, Structural sources of robustness in biochemical reaction networks, Science (2010).

which contains this amazing theorem:

Theorem (Shinar and Feinberg). Suppose there is a chemical reaction network such that:

1. its deficiency equals one;

2. it has a positive steady state;

3. it has two “non-terminal complexes” that differ only in one species S. (“Non-terminal” is a concept that’s easier to explain with a picture of a reaction network).

Then the species S is absolutely robust: for any initial conditions, the rate equation will approach an equilibrium where the concentration of S takes a specific fixed value c, independent of those initial conditions!

However, things work very differently if we treat the system stochastically, using the master equation:

• David F. Anderson, German A. Enciso and Matthew D. Johnston, Stochastic analysis of biochemical reaction networks with absolute concentration robustness.

More

A lot more happened at this workshop! There was a huge amount of discussion of the leader election problem, which is about how to cook up chemical reactions that create a ‘leader’: a single molecule of some sort.

Leader election: the problem, and references.

Leader election: progress report.

As I explained before, David Soloveichik talked about various forms of digital computation with chemical reaction networks. David Doty talked about the very important flip side of the coin: analog computation.

• David Doty, Rate-independent computation by real-valued chemistry.

There were also great talks by Lulu Qian and Erik Winfree, which I won’t try to summarize. Qian does a lot of work in the lab making things actually happen, so if you’re a practical sort this is the talk to look at:

• Lulu Qian, Implementing complex CRNs with modular DNA components.

All in all, a very stimulating workshop. The diversity of things one can ask about chemical reaction networks is quite exciting!


The Computational Power of Chemical Reaction Networks

10 June, 2014

I’m at this workshop:

Programming with Chemical Reaction Networks: Mathematical Foundations, Banff International Research Station, 8-13 June 2014.

Luca Cardelli wrote about computation with chemical reactions in Part 26 of the network theory series here on this blog. So, it’s nice to meet him and many other researchers, learn more, and try to solve some problems together!

The first tutorial was this:

• David Soloveichik, U.C. San Francisco, The computational power of chemical reaction networks.

David works at the Center for Systems and Synthetic Biology, and their website says:

David did his graduate work with Erik Winfree at Caltech, focusing on algorithmic self-assembly and on synthetic networks of nucleic-acid interactions based on strand displacement cascades. He is interested in “molecular programming”: the systematic design of complex molecular systems based on the principles of computer science and distributed computing. More generally, he is trying to create a theoretical foundation of chemical computation applicable to both synthetic and natural systems.

According to his webpage, Soloveichik’s research interests are:

Wet-lab: the rational design of molecular interactions for synthetic biology, nanotechnology, and bioengineering. The goal is to engineer autonomous molecular systems that can sense, compute, and perform various actions. Using nucleic-acid “strand displacement cascades” as the molecular primitive, we are able to attain freedom of design that hasn’t been previously possible.

Theory: The theoretical foundation of chemical computation. Once we have a way to program molecular interactions, what programming language shall we use? How molecules can process information and carry out computation is still not well-understood; however, a formal connection to models of concurrent computation may allow systematic and scalable design, rigorous analysis and verification. Further, computational principles may elucidate the design of biological regulatory networks.

Here are my notes on his tutorial.

Motivation

We’ve got people here from different backgrounds:

• computational complexity theory
• wetlab / experimental science
• pure and applied mathematics
• software verification

CRNs (chemical reaction networks) show up in:

• chemistry
• population biology
• sensor networks
• math:
    ○ vector addition systems
    ○ Petri nets
    ○ commutative semigroups
    ○ bounded context-free languages
    ○ uniform recurrence equations

Why use them for computation? People want to go beyond the von Neumann architecture for computation. People also want to understand how cells process information. However, with a few exceptions, the computational perspective in this talk has not yet proved relevant in biology. So, there is a lot left to learn.

The model

The model of computation here will be the master equation for a chemical reaction network… since this has been explained starting in Part 4 of the network theory series, I won’t review it!

Can all chemical reaction networks, even those without any conservation laws, be realized by actual chemical systems?

Though this is a subtle question, one answer is “yes, using strand displacement cascades”. This is a trick for getting DNA to simulate other chemical reactions. It’s been carried out in the lab! See this paper and many subsequent ones:

• Soloveichik, Seelig and Winfree, DNA as a universal substrate for chemical kinetics.

Abstract: Molecular programming aims to systematically engineer molecular and chemical systems of autonomous function and ever-increasing complexity. A key goal is to develop embedded control circuitry within a chemical system to direct molecular events. Here we show that systems of DNA molecules can be constructed that closely approximate the dynamic behavior of arbitrary systems of coupled chemical reactions. By using strand displacement reactions as a primitive, we construct reaction cascades with effectively unimolecular and bimolecular kinetics. Our construction allows individual reactions to be coupled in arbitrary ways such that reactants can participate in multiple reactions simultaneously, reproducing the desired dynamical properties. Thus arbitrary systems of chemical equations can be compiled into real chemical systems. We illustrate our method on the Lotka–Volterra oscillator, a limit-cycle oscillator, a chaotic system, and systems implementing feedback digital logic and algorithmic behavior.

However, even working with the master equation for a CRN, there are various things we might mean by having it compute something:

• uniform vs non-uniform: is a single CRN supposed to handle all inputs, or do we allow adding extra reactions for larger inputs? It’s a bit like Turing machines vs Boolean circuits.

• deterministic vs probabilistic: is the correct output guaranteed or merely likely?

• halting vs stabilizing: does the CRN ‘know’ when it has finished, or not? In the ‘halting’ case the CRN irreversibly produces some molecules that signal that the computation is done. In the ‘stabilizing’ case, it eventually stabilizes to the right answer, but we may not know how long to wait.

These distinctions dramatically affect the computational power. In the case of uniform computation:

• deterministic and halting: this has finite computational power.

• deterministic and stabilizing: this can decide semilinear predicates.

• probabilistic and halting: this is Turing-universal.

• probabilistic and stabilizing: this can decide \Delta_2^0 predicates, which are more general than computable ones. (Indeed, if we use Turing machines but don’t require them to signal when they’ve halted, the resulting infinitely long computations can ‘compute’ stuff that’s not computable in the usual sense.)

Deterministic stabilizing computations

Let’s look at the deterministic stabilizing computations in a bit more detail. We’ll look at decision problems. We have a subset S \subseteq \mathbb{N}^d, and we want to answer this question: is the vector X \in \mathbb{N}^d in the set S?

To do this, we represent the vector as a bunch of molecules: X_1 of the first kind, X_2 of the second kind, and so on. We call this an input. We may also include a fixed collection of additional molecules in our input, to help the reactions run.

Then we choose a chemical reaction network, and we let it run on our input. The answer to our question will be encoded in some molecules called Y and N. If X is in S, we want our chemical reaction to produce Y molecules. If it’s not, we want our reaction to produce N’s.

To make this more precise, we need to define what counts as an output. If we’ve got a bunch of molecules that

• contains Y but not N: then the output is YES.

• contains N but not Y: then the output is NO.

Otherwise the output is undefined.

Output-stable states are states with YES or NO output such that all states reachable from them via our chemical reactions give the same output. We say an output-stable state is correct if this output is the correct answer to the question: is X in S?

Our chemical reaction network gives a deterministic stabilizing computation if for any input, and choosing any state reachable from that input, we can do further chemical reactions to reach a correct output-stable state.

In other words: starting from our input, and letting the chemical reactions run any way they want, we will eventually stabilize at an output that gives the right answer to the question “is X in S?”

Examples

This sounds a bit complicated, but it’s really not. Let’s look at some examples!

Example 1. Suppose you want to check two numbers and see if one is greater than or equal to another. Here

S = \{(X_1,X_2) : X_2 \ge X_1 \}

How can you decide if a pair of numbers (X_1,X_2) is in this set?

You start with X_1 molecules of type A, X_2 molecules of type B, and one molecule of type Y. Then you use a chemical reaction network with these reactions:

A + Y \to N
B + N \to Y

If you let these reactions run, the Y switches to an N each time the reactions destroy an A. But the N switches back to a Y each time the reactions destroy a B.

When no more reactions are possible, we are left with either one Y or one N, which is the correct answer to your question!
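Here’s a minimal Python sketch (mine, not from the talk) that runs a chemical reaction network by firing applicable reactions in a random order until none applies. For this example every run halts at the correct answer:

```python
import random
from collections import Counter

def run_to_stable(state, reactions, seed=0):
    """Fire applicable reactions in a random order until none applies.
    state: Counter of molecule counts; reactions: list of (inputs, outputs)."""
    rng = random.Random(seed)
    while True:
        applicable = [r for r in reactions
                      if all(state[s] >= k for s, k in r[0].items())]
        if not applicable:
            return state
        ins, outs = rng.choice(applicable)
        state = state - ins + outs   # Counter arithmetic drops zero entries

# Example 1: decide X_2 >= X_1 using A + Y -> N and B + N -> Y
reactions = [(Counter({'A': 1, 'Y': 1}), Counter({'N': 1})),
             (Counter({'B': 1, 'N': 1}), Counter({'Y': 1}))]
for x1, x2 in [(3, 5), (5, 3), (4, 4)]:
    final = run_to_stable(Counter({'A': x1, 'B': x2, 'Y': 1}), reactions)
    print((x1, x2), '->', 'YES' if final['Y'] and not final['N'] else 'NO')
# prints YES, NO, YES: exactly the cases with X_2 >= X_1
```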

Example 2. Suppose you want to check two numbers and see if one is equal to another. Here

S = \{(X_1,X_2) : X_2 = X_1 \}

How can you decide if a pair of numbers (X_1,X_2) is in here?

This is a bit harder! As before, you start with X_1 molecules of type A, X_2 molecules of type B, and one molecule of type Y. Then you use a chemical reaction network with these reactions:

A + B \to Y
Y + N \to Y
A + Y \to A + N
B + Y \to B + N

The first reaction lets an A and a B cancel out, producing a Y. If you only run this reaction, you’ll eventually be left with either a bunch of A’s or a bunch of B’s or nothing but Y’s.

If you have Y’s, your numbers were equal. The other reactions deal with the cases where you have A’s or B’s left over. But the key thing to check is that no matter what order we run the reactions, we’ll eventually get the right answer! In the end, you’ll have either Y’s or N’s, not both, and this will provide the yes-or-no answer to the question of whether X_1 = X_2.
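The same run_to_stable sketch from Example 1 handles this network too—only the reaction list changes:

```python
# Example 2: decide X_1 = X_2 (reusing run_to_stable from the Example 1 sketch)
reactions = [(Counter({'A': 1, 'B': 1}), Counter({'Y': 1})),
             (Counter({'Y': 1, 'N': 1}), Counter({'Y': 1})),
             (Counter({'A': 1, 'Y': 1}), Counter({'A': 1, 'N': 1})),
             (Counter({'B': 1, 'Y': 1}), Counter({'B': 1, 'N': 1}))]
for x1, x2 in [(4, 4), (5, 3), (3, 5)]:
    final = run_to_stable(Counter({'A': x1, 'B': x2, 'Y': 1}), reactions)
    print((x1, x2), '->', 'YES' if final['Y'] and not final['N'] else 'NO')
# prints YES, NO, NO: only the balanced input is accepted
```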

What deterministic stabilizing computations can do

We’ve looked at some examples of deterministic stabilizing computations. The big question is: what kind of questions can they answer?

More precisely, for what subsets S \subseteq \mathbb{N}^d can we build a deterministic stabilizing computation that ends with output YES if the input X lies in S and with output NO otherwise?

The answer is: the ‘semilinear’ subsets!

• Dana Angluin, James Aspnes and David Eisenstat, Stably computable predicates are semilinear.

A set S \subseteq \mathbb{N}^d is linear if it’s of the form

\{u_0 + n_1 u_1 + \cdots + n_p u_p : n_i \in \mathbb{N}  \}

for some fixed vectors of natural numbers u_i \in \mathbb{N}^d.

A set S \subseteq \mathbb{N}^d is semilinear if it’s a finite union of linear sets.

How did Angluin, Aspnes and Eisenstat prove their theorem? Apparently the easy part is showing that membership in any semilinear set can be decided by a chemical reaction network. David sketched the proof of the converse. I won’t go into it, but it used a very nice fact:

Dickson’s Lemma. Any subset of \mathbb{N}^d has a finite set of minimal elements, where we define x \le y if x_i \le y_i for all i.

For example, the region above and to the right of the hyperbola here has five minimal elements:

If you know some algebra, Dickson’s lemma should remind you of the Hilbert basis theorem, saying (for example) that every ideal in a ring of multivariable polynomials over a field is finitely generated. And in fact, Paul Gordan used Dickson’s Lemma in 1899 to help give a proof of Hilbert’s basis theorem.

It’s very neat to see how this lemma applies to chemical reaction networks! You can see how it works in Angluin, Aspnes and Eisenstat’s paper. But they call it “Higman’s lemma” for some reason.
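Dickson’s lemma is easy to check by brute force on a finite window. Here’s a small sketch; the hyperbola xy \ge 8 is my own choice, though it does have five minimal elements, like the one in the picture:

```python
def minimal_elements(points):
    """Minimal elements of a finite subset of N^d under the coordinatewise
    partial order: x <= y iff x_i <= y_i for all i."""
    leq = lambda x, y: all(a <= b for a, b in zip(x, y))
    return [p for p in points
            if not any(leq(q, p) and q != p for q in points)]

# points on or above the hyperbola xy = 8, in a finite window of N^2
pts = [(x, y) for x in range(1, 11) for y in range(1, 11) if x * y >= 8]
print(sorted(minimal_elements(pts)))
# [(1, 8), (2, 4), (3, 3), (4, 2), (8, 1)] -- five minimal elements
```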

References

Here are some of David Soloveichik’s recent talks:

• An introduction to strand displacement cascades for the Foresight Institute Conference (Palo Alto, Jan 2013): An artificial “biochemistry” with DNA.

• Paper presented at DNA Computing and Molecular Programming 18 (Aarhus, Denmark, Aug 2012): Deterministic function computation with chemical reaction networks.

• Tutorial talk for DNA Computing and Molecular Programming 17 (Pasadena, Aug 2011): The programming language of chemical kinetics, and how to discipline your DNA molecules using strand displacement cascades.

• High-level introduction to algorithmic self-assembly and stochastic chemical reaction networks as computer-theoretic models: Computer-theoretic abstractions for molecular programming.

• On algorithmic behavior in chemical reaction networks and implementing arbitrary chemical reaction networks with DNA: programming well-mixed chemical kinetics.


Programming with Chemical Reaction Networks

23 March, 2014

 

There will be a 5-day workshop on Programming with Chemical Reaction Networks: Mathematical Foundations at BIRS from Sunday, June 8 to Friday, June 13, 2014. It’s being organized by

Anne Condon (University of British Columbia)
David Doty (California Institute of Technology)
Chris Thachuk (University of Oxford).

BIRS is the Banff International Research Station, in the mountains west of Calgary, in Alberta, Canada.

Description

Here’s the workshop proposal on the BIRS website. It’s a pretty interesting proposal, especially if you’ve already read Luca Cardelli’s description of computing with chemical reaction networks, at the end of our series of posts on chemical reaction networks. The references include a lot of cool papers, so I’ve created links to those to help you get ahold of them.

This workshop will explore three of the most important research themes concerning stochastic chemical reaction networks (CRNs). Below we motivate each theme and highlight key questions that the workshop will address. Our main objective is to bring together distinct research communities in order to consider new problems that could not be fully appreciated in isolation. It is also our aim to determine commonalities between different disciplines and bodies of research. For example, research into population protocols, vector addition systems, and Petri nets provides a rich body of theoretical results that may already address contemporary problems arising in the study of CRNs.

Computational power of CRNs

Before designing robust and practical systems, it is useful to know the limits to computing with a chemical soup. Some interesting theoretical results are already known for stochastic chemical reaction networks. The computational power of CRNs depends upon a number of factors, including: (i) whether the computation is deterministic or probabilistic, and (ii) whether the CRN has an initial context — certain species, independent of the input, that are initially present in some exact, constant count.

In general, CRNs with a constant number of species (independent of the input length) are capable of Turing universal computation [17], if the input is represented by the exact (unary) count of one molecular species, some small probability of error is permitted and an initial context in the form of a single-copy leader molecule is used.

Could the same result hold in the absence of an initial context? In a surprising result based on the distributed computing model of population protocols, it has been shown that if a computation must be error-free, then deterministic computation with CRNs having an initial context is limited to computing semilinear predicates [1], later extended to functions outputting natural numbers encoded by molecular counts [5].

Furthermore, any semilinear predicate or function can be computed by that class of CRNs in expected time polylogarithmic in the input length. Building on this result, it was recently shown that by incurring an expected time linear in the input length, the same result holds for “leaderless” CRNs [8] — CRNs with no initial context. Can this result be improved to sub-linear expected time? Which class of functions can be computed deterministically by a CRN without an initial context in expected time polylogarithmic in the input length?

While (restricted) CRNs are Turing-universal, current results use space proportional to the computation time. Using a non-uniform construction, where the number of species is proportional to the input length and each initial species is present in some constant count, it is known that any S(n) space-bounded computation can be computed by a logically-reversible tagged CRN, within a reaction volume of size poly(S(n)) [18]. Tagged CRNs were introduced to model explicitly the fuel molecules in physical realizations of CRNs such as DNA strand displacement systems [6] that are necessary to supply matter and energy for implementing reactions such as X → X + Y that violate conservation of mass and/or energy.

Thus, for space-bounded computation, there exist CRNs that are time-efficient or are space-efficient. Do there exist time- and space-efficient CRNs to compute any space-bounded function?

Designing and verifying robust CRNs

While CRNs provide a concise model of chemistry, their physical realizations are often more complicated and more granular. How can one be sure they accurately implement the intended network behaviour? Probabilistic model checking has already been employed to find and correct inconsistencies between CRNs and their DNA strand displacement system (DSD) implementations [9]. However, at present, model checking of arbitrary CRNs is only capable of verifying the correctness of very small systems. Indeed, verification of these types of systems is a difficult problem: probabilistic state reachability is undecidable [17, 20] and general state reachability is EXPSPACE-hard [4].

How can larger systems be verified? A deeper understanding of CRN behaviour may simplify the process of model checking. As a motivating example, there has been recent progress towards verifying that certain DSD implementations correctly simulate underlying CRNs [16, 7, 10]. This is an important step to ensuring correctness, prior to experiments. However, DSDs can also suffer from other errors when implementing CRNs, such as spurious hybridization or strand displacement. Can DSDs and more generally CRNs be designed to be robust to such predictable errors? Can error correcting codes and redundant circuit designs used in traditional computing be leveraged in these chemical computers? Many other problems arise when implementing CRNs. Currently, unique types of fuel molecules must be designed for every reaction type. This complicates the engineering process significantly. Can a universal type of fuel be designed to smartly implement any reaction?

Energy efficient computing with CRNs

Rolf Landauer showed that logically irreversible computation — computation as modeled by a standard Turing machine — dissipates an amount of energy proportional to the number of bits of information lost, such as previous state information, and therefore cannot be energy efficient [11]. However, Charles Bennett showed that, in principle, energy efficient computation is possible, by proposing a universal Turing machine to perform logically-reversible computation and identified nucleic acids (RNA/DNA) as a potential medium to realize logically-reversible computation in a physical system [2].

There have been examples of logically-reversible DNA strand displacement systems — a physical realization of CRNs — that are, in theory, capable of complex computation [12, 19]. Are these systems energy efficient in a physical sense? How can this argument be made formally to satisfy both the computer science and the physics communities? Is a physical experiment feasible, or are these results merely theoretical footnotes?

References

[1] D. Angluin, J. Aspnes, and D. Eisenstat. Stably computable predicates are semilinear. In PODC, pages 292–299, 2006.

[2] C. H. Bennett. Logical reversibility of computation. IBM Journal of Research and Development, 17 (6):525–532, 1973.

[3] L. Cardelli and A. Csikasz-Nagy. The cell cycle switch computes approximate majority. Scientific Reports, 2, 2012.

[4] E. Cardoza, R. Lipton, A. R. Meyer. Exponential space complete problems for Petri nets and commutative semigroups (preliminary report). Proceedings of the Eighth Annual ACM Symposium on Theory of Computing, pages 50–54, 1976.

[5] H. L. Chen, D. Doty, and D. Soloveichik. Deterministic function computation with chemical reaction networks. DNA Computing and Molecular Programming, pages 25–42, 2012.

[6] A. Condon, A. J. Hu, J. Manuch, and C. Thachuk. Less haste, less waste: on recycling and its limits in strand displacement systems. Journal of the Royal Society: Interface Focus, 2 (4):512–521, 2012.

[7] Q. Dong. A bisimulation approach to verification of molecular implementations of formal chemical reaction network. Master’s thesis. SUNY Stony Brook, 2012.

[8] D. Doty and M. Hajiaghayi. Leaderless deterministic chemical reaction networks. In Proceedings of the 19th International Meeting on DNA Computing and Molecular Programming, 2013.

[9] M. R. Lakin, D. Parker, L. Cardelli, M. Kwiatkowska, and A. Phillips. Design and analysis of DNA strand displacement devices using probabilistic model checking. Journal of The Royal Society Interface, 2012.

[10] M. R. Lakin, D. Stefanovic and A. Phillips. Modular Verification of Two-domain DNA Strand Displacement Networks via Serializability Analysis. In Proceedings of the 19th Annual conference on DNA computing, 2013.

[11] R. Landauer. Irreversibility and heat generation in the computing process. IBM Journal of research and development, 5 (3):183–191, 1961.

[12] L. Qian, D. Soloveichik, and E. Winfree. Efficient Turing-universal computation with DNA polymers (extended abstract) . In Proceedings of the 16th Annual conference on DNA computing, pages 123–140, 2010.

[13] L. Qian and E. Winfree. Scaling up digital circuit computation with DNA strand displacement cascades. Science, 332 (6034):1196–1201, 2011.

[14] L. Qian, E. Winfree, and J. Bruck. Neural network computation with DNA strand displacement cascades. Nature, 475 (7356):368–372, 2011.

[15] G. Seelig, D. Soloveichik, D.Y. Zhang, and E. Winfree. Enzyme-free nucleic acid logic circuits. Science, 314 (5805):1585–1588, 2006.

[16] S. W. Shin. Compiling and verifying DNA-based chemical reaction network implementations. Master’s thesis. California Institute of Technology, 2011.

[17] D. Soloveichik, M. Cook, E. Winfree, and J. Bruck. Computation with finite stochastic chemical reaction networks. Natural Computing, 7 (4):615–633, 2008.

[18] C. Thachuk. Space and energy efficient molecular programming. PhD thesis, University of British Columbia, 2012.

[19] C. Thachuk and A. Condon. Space and energy efficient computation with DNA strand displacement systems. In Proceedings of the 18th Annual International Conference on DNA computing and Molecular Programming, 2012.

[20] G. Zavattaro and L. Cardelli. Termination Problems in Chemical Kinetics. In Proceedings of the 2008 Conference on Concurrency Theory, pages 477–491, 2008.


Network Theory II

12 March, 2014

 

Chemists are secretly doing applied category theory! When chemists list a bunch of chemical reactions like

C + O₂ → CO₂

they are secretly describing a ‘category’.

That shouldn’t be surprising. A category is simply a collection of things called objects together with things called morphisms going from one object to another, often written

f: x → y

The rules of a category say:

1) we can compose a morphism f: x → y and another morphism g: y → z to get an arrow gf: x → z,

2) (hg)f = h(gf), so we don’t need to bother with parentheses when composing arrows,

3) every object x has an identity morphism 1ₓ: x → x that obeys 1ₓ f = f and f 1ₓ = f.

Whenever we have a bunch of things (objects) and processes (arrows) that take one thing to another, we’re likely to have a category. In chemistry, the objects are bunches of molecules and the arrows are chemical reactions. But we can ‘add’ bunches of molecules and also add reactions, so we have something more than a mere category: we have something called a symmetric monoidal category.
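Here’s a toy Python sketch of such a category of reactions—objects are bunches of molecules, morphisms are formal reaction processes, and tensoring ‘adds’ processes side by side. The encoding is mine, just to make the axioms concrete:

```python
from collections import Counter
from dataclasses import dataclass

def bunch(**species):
    """A bunch of molecules, as a hashable sorted tuple of (species, count)."""
    return tuple(sorted(species.items()))

@dataclass(frozen=True)
class Morphism:
    source: tuple   # a bunch of molecules
    target: tuple
    name: str

def compose(g, f):
    """g∘f, defined only when f's target matches g's source (rule 1)."""
    assert f.target == g.source, "objects don't match"
    return Morphism(f.source, g.target, f'{g.name}∘{f.name}')

def identity(obj):
    return Morphism(obj, obj, 'id')

def tensor(f, g):
    """Run two processes side by side: the monoidal structure."""
    add = lambda a, b: tuple(sorted((Counter(dict(a)) + Counter(dict(b))).items()))
    return Morphism(add(f.source, g.source), add(f.target, g.target),
                    f'({f.name}⊗{g.name})')

burn = Morphism(bunch(C=1, O2=1), bunch(CO2=1), 'burn')
print(tensor(burn, burn).source)                     # (('C', 2), ('O2', 2))
print(compose(identity(bunch(CO2=1)), burn).name)    # id∘burn, same endpoints as burn
```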

My talk here, part of a series, is an explanation of this viewpoint and how we can use it to take ideas from elementary particle physics and apply them to chemistry! For more details try this free book:

• John Baez and Jacob Biamonte, A Course on Quantum Techniques for Stochastic Mechanics.

as well as this paper on the Anderson–Craciun–Kurtz theorem (discussed in my talk):

• John Baez and Brendan Fong, Quantum techniques for studying equilibrium in reaction networks.

You can also see the slides of this talk. Click on any picture in the slides, or any text in blue, and get more information!


Lyapunov Functions for Complex-Balanced Systems

7 January, 2014

guest post by Manoj Gopalkrishnan

A few weeks back, I promised to tell you more about a long-standing open problem in reaction networks, the ‘global attractor conjecture’. I am not going to quite get there today, but we shall take one step in that direction.

Today’s plan is to help you make friends with a very useful function we will call the ‘free energy’ which comes up all the time in the study of chemical reaction networks. We will see that for complex-balanced systems, the free energy function decreases along trajectories of the rate equation. I’m going to explain this statement, and give you most of the proof!

The point of doing all this work is that we will then be able to invoke Lyapunov’s theorem which implies stability of the dynamics. In Greek mythology, Sisyphus was cursed to roll a boulder up a hill only to have it roll down again, so that he had to keep repeating the task for all eternity. When I think of an unstable equilibrium, I imagine a boulder delicately balanced on top of a hill, which will fall off if given the slightest push:

or, more abstractly:

On the other hand, I picture a stable equilibrium as a pebble at the very bottom of a hill. Whichever way a perturbation takes it is up, so it will roll down again to the bottom:

Lyapunov’s theorem guarantees stability provided we can exhibit a nice enough function V that decreases along trajectories. ‘Nice enough’ means that, viewing V as a height function for the hill, the equilibrium configuration should be at the bottom, and every direction from there should be up. If Sisyphus had dug a pit at the top of the hill for the boulder to rest in, Lyapunov’s theorem would have applied, and he could have gone home to rest. The moral of the story is that it pays to learn dynamical systems theory!

Because of the connection to Lyapunov’s theorem, such functions that decrease along trajectories are also called Lyapunov functions. A similar situation is seen in Boltzmann’s H-theorem, and hence such functions are sometimes called H-functions by physicists.

Another reason for me to talk about these ideas now is that I have posted a new article on the arXiv:

• Manoj Gopalkrishnan, On the Lyapunov function for complex-balanced mass-action systems.

The free energy function in chemical reaction networks goes back at least to 1972, to this paper:

• Friedrich Horn and Roy Jackson, General mass action kinetics, Arch. Rational Mech. Analysis 49 (1972), 81–116.

Many of us credit Horn and Jackson’s paper with starting the mathematical study of reaction networks. My paper is an exposition of the main result of Horn and Jackson, with a shorter and simpler proof. The gain comes because Horn and Jackson proved all their results from scratch, whereas I’m using some easy results from graph theory, and the log-sum inequality.

We shall be talking about reaction networks. Remember the idea from the network theory series. We have a set S whose elements are called species, for example

S = \{ \mathrm{H}_2\mathrm{O}, \mathrm{H}^+, \mathrm{OH}^- \}

A complex is a vector of natural numbers saying how many items of each species we have. For example, we could have a complex (2,3,1). But chemists would usually write this as

2 \mathrm{H}_2\mathrm{O} + 3 \mathrm{H}^+ + \mathrm{OH}^-

A reaction network is a set S of species and a set T of transitions or reactions, where each transition \tau \in T goes from some complex m(\tau) to some complex n(\tau). For example, we could have a transition \tau with

m(\tau) = \mathrm{H}_2\mathrm{O}

and

n(\tau) = \mathrm{H}^+ + \mathrm{OH}^-

In this situation chemists usually write

\mathrm{H}_2\mathrm{O} \to \mathrm{H}^+ + \mathrm{OH}^-

but we want names like \tau for our transitions, so we might write

\tau : \mathrm{H}_2\mathrm{O} \to \mathrm{H}^+ + \mathrm{OH}^-

or

\mathrm{H}_2\mathrm{O} \stackrel{\tau}{\longrightarrow} \mathrm{H}^+ + \mathrm{OH}^-

As John explained in Part 3 of the network theory series, chemists like to work with a vector of nonnegative real numbers x(t) saying the concentration of each species at time t. If we know a rate constant r(\tau) > 0 for each transition \tau, we can write down an equation saying how these concentrations change with time:

\displaystyle{ \frac{d x}{d t} = \sum_{\tau \in T} r(\tau) (n(\tau) - m(\tau)) x^{m(\tau)} }

This is called the rate equation. It’s really a system of ODEs describing how the concentration of each species change with time. Here an expression like x^m is shorthand for the monomial {x_1}^{m_1} \cdots {x_k}^{m_k}.
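Here’s a minimal numerical sketch of the rate equation in action, with made-up rate constants and species ordered as (H₂O, H⁺, OH⁻):

```python
import numpy as np

def rate_equation_rhs(x, transitions):
    """dx/dt = sum over transitions of r(tau) * (n - m) * x^m,
    where complexes m, n are integer vectors of stoichiometric coefficients."""
    dx = np.zeros_like(x)
    for m, n, r in transitions:
        dx += r * (np.array(n) - np.array(m)) * np.prod(x ** np.array(m))
    return dx

def integrate(x0, transitions, dt=1e-3, steps=20000):
    """Crude forward-Euler integration of the rate equation."""
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x += dt * rate_equation_rhs(x, transitions)
    return x

# H2O -> H+ + OH-  and its reverse, with invented rate constants 2 and 1
transitions = [((1, 0, 0), (0, 1, 1), 2.0),
               ((0, 1, 1), (1, 0, 0), 1.0)]
print(integrate([1.0, 0.0, 0.0], transitions))
# settles near [0.27, 0.73, 0.73], where 2*[H2O] = [H+][OH-]
```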

John and Brendan talked about complex balance in Part 9. I’m going to recall this definition, from a slightly different point of view that will be helpful for the result we are trying to prove.

We can draw a reaction network as a graph! The vertices of this graph are all the complexes m(\tau), n(\tau) where \tau \in T. The edges are all the transitions \tau\in T. We think of each edge \tau as directed, going from m(\tau) to n(\tau).

We will call the map sending each transition \tau to the positive real number f_x(\tau) = r(\tau) x^{m(\tau)} the flow on this graph. The rate equation can be rewritten very simply in terms of this flow as:

\displaystyle{ \frac{d x}{d t} = \sum_{\tau \in T}(n(\tau) - m(\tau)) \, f_x(\tau) }

where the right-hand side is now a linear expression in the flow f_x.

Flows of water, or electric current, obey a version of Kirchhoff’s current law. Such flows are called conservative flows. The following two lemmas from graph theory are immediate for conservative flows:

Lemma 1. If f is a conservative flow then the net flow across every cut is zero.

A cut is a way of chopping the graph in two, like this:


It’s easy to prove Lemma 1 by induction, moving one vertex across the cut at a time.

Lemma 2. If a conservative flow exists then every edge \tau\in T is part of a directed cycle.

Why is Lemma 2 true? Suppose there exists an edge \tau : m \to n that is not part of any directed cycle. We will exhibit a cut with non-zero net flow. By Lemma 1, this will imply that the flow is not conservative.

One side of the cut will consist of all vertices from which m is reachable by a directed path in the reaction network. The other side of the cut contains at least n, since m is not reachable from n, by the assumption that \tau is not part of a directed cycle. There is flow going from left to right of the cut, across the transition \tau. Since there can be no flow coming back, this cut has nonzero net flow, and we’re done.     ▮

Now, back to the rate equation! We can ask if the flow f_x is conservative. That is, we can ask if, for every complex n:

\displaystyle{ \sum_{\tau : m \to n} f_x(\tau) = \sum_{\tau : n \to p} f_x(\tau). }

In words, we are asking if the sum of the flow through all transitions coming in to n equals the sum of the flow through all transitions going out of n. If this condition is satisfied at a vector of concentrations x = \alpha, so that the flow f_\alpha is conservative, then we call \alpha a point of complex balance. If in addition, every component of \alpha is strictly positive, then we say that the system is complex balanced.

Clearly if \alpha is a point of complex balance, it’s an equilibrium solution of the rate equation. In other words, x(t) = \alpha is a solution of the rate equation, where x(t) never changes.
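Complex balance is easy to check numerically: compute the flow f_\alpha(\tau) = r(\tau) \alpha^{m(\tau)} through every transition and see whether inflow matches outflow at each complex. A small sketch, with my own encoding of complexes as tuples of exponents:

```python
from collections import defaultdict

def is_complex_balanced(transitions, alpha, tol=1e-9):
    """Check whether f_alpha(tau) = r(tau) * alpha^m(tau) is a conservative flow:
    at every complex, total inflow equals total outflow."""
    net = defaultdict(float)                    # net flow into each complex
    for m, n, r in transitions:
        flow = r
        for s, k in enumerate(m):
            flow *= alpha[s] ** k
        net[tuple(m)] -= flow                   # flow leaves the source complex...
        net[tuple(n)] += flow                   # ...and arrives at the target
    return all(abs(v) < tol for v in net.values())

# the two-cycle X <-> 2X with rates 2 and 1: alpha = 2 gives 2*alpha = alpha^2
print(is_complex_balanced([((1,), (2,), 2.0), ((2,), (1,), 1.0)], [2.0]))  # True
print(is_complex_balanced([((1,), (2,), 2.0), ((2,), (1,), 1.0)], [3.0]))  # False
```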

I’m using ‘equilibrium’ the way mathematicians do. But I should warn you that chemists use ‘equilibrium’ to mean something more than merely a solution that doesn’t change with time. They often also mean it’s a point of complex balance, or even more. People actually get into arguments about this at conferences.

Complex balance implies more than mere equilibrium. For starters, if a reaction network is such that every edge belongs to a directed cycle, then one says that the reaction network is weakly reversible. So Lemmas 1 and 2 establish that complex-balanced systems must be weakly reversible!

From here on, we fix a complex-balanced system, with \alpha a strictly positive point of complex balance.

Definition. The free energy function is the function

g_\alpha(x) = \sum_{s\in S} \left( x_s \log x_s - x_s - x_s \log \alpha_s \right)

where the sum is over all species in S.

The whole point of defining the function this way is that it is the unique function, up to an additive constant, whose partial derivative with respect to x_s is \log(x_s/\alpha_s). This is important enough that we write it as a lemma. To state it in a pithy way, it is helpful to introduce vector notation for division and logarithms. If x and y are two vectors, we will understand x/y to mean the vector z such that z_s = x_s/y_s coordinate-wise. Similarly \log x is defined in a coordinate-wise sense as the vector with coordinates (\log x)_s = \log x_s.

Lemma 3. The gradient \nabla g_\alpha(x) of g_\alpha(x) equals \log(x/\alpha).

We’re ready to state our main theorem!

Theorem. Fix a trajectory x(t) of the rate equation. Then g_\alpha(x(t)) is a decreasing function of time t. Further, it is strictly decreasing unless x(t) is an equilibrium solution of the rate equation.

I find precise mathematical statements reassuring. You can often make up your mind about the truth value from a few examples. Very often, though not always, a few well-chosen examples are all you need to get the general idea for the proof. Such is the case for the above theorem. There are three key examples: the two-cycle, the three-cycle, and the figure-eight.

The two-cycle. The two-cycle is this reaction network:

It has two complexes m and n and two transitions \tau_1 = m\to n and \tau_2 = n\to m, with rates r_1 = r(\tau_1) and r_2 = r(\tau_2) respectively.

Fix a solution x(t) of the rate equation. Then the flow from m to n equals r_1 x^m and the backward flow equals r_2 x^n. The condition for f_\alpha to be a conservative flow requires that f_\alpha = r_1 \alpha^m = r_2 \alpha^n. This is one binomial equation in at least one variable, and clearly has a solution in the positive reals. We have just shown that every two-cycle is complex balanced.

The derivative d g_\alpha(x(t))/d t can now be computed by the chain rule, using Lemma 3. It works out to f_\alpha times

\displaystyle{ \left((x/\alpha)^m - (x/\alpha)^n\right) \, \log\frac{(x/\alpha)^n}{(x/\alpha)^m} }

This is never positive, and it’s zero if and only if

(x/\alpha)^m = (x/\alpha)^n

Why is this? Simply because the logarithm of something greater than 1 is positive, while the log of something less than 1 is negative, so that the sign of (x/\alpha)^m - (x/\alpha)^n is always opposite the sign of \log \frac{(x/\alpha)^n}{(x/\alpha)^m}. We have verified our theorem for this example.

(Note that (x/\alpha)^m = (x/\alpha)^n occurs when x = \alpha, but also at other points: in this example, there is a whole hypersurface consisting of points of complex balance.)
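A quick numerical sanity check, on the concrete two-cycle X \to 2X, 2X \to X with invented rates r_1 = 2, r_2 = 1, so that \alpha = 2 is a point of complex balance:

```python
import math

r1, r2, alpha = 2.0, 1.0, 2.0       # X -> 2X and 2X -> X; r1*alpha = r2*alpha^2

def g(x):                            # the free energy g_alpha for one species
    return x * math.log(x) - x - x * math.log(alpha)

x, dt = 5.0, 1e-3
prev = g(x)
for _ in range(8000):                # forward Euler on dx/dt = r1*x - r2*x^2
    x += dt * (r1 * x - r2 * x * x)
    assert g(x) <= prev + 1e-12      # the free energy never increases
    prev = g(x)
print(x, g(x))                       # x -> alpha = 2 and g -> g(alpha) = -2
```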

In fact, this simple calculation achieves much more.

Definition. A reaction network is reversible if for every transition \tau : m \to n there is a transition \tau' : n \to m going back, called the reverse of \tau. Suppose we have a reversible reaction network and a vector of concentrations \alpha such that the flow along each edge equals that along the edge going back:

f_\alpha(\tau) = f_\alpha(\tau')

whenever \tau' is the reverse \tau. Then we say the reaction network is detailed balanced, and \alpha is a point of detailed balance.

For a detailed-balanced system, the time derivative of g_\alpha is a sum over the contributions of pairs consisting of an edge and its reverse. Hence, the two-cycle calculation shows that the theorem holds for all detailed balanced systems!

This linearity trick is going to prove very valuable. It will allow us to treat the general case of complex balanced systems one cycle at a time. The proof for a single cycle is essentially contained in the example of a three-cycle, which we treat next:

The three-cycle. The three-cycle is this reaction network:

We assume that the system is complex balanced, so that

f_\alpha(m_1\to m_2) = f_\alpha(m_2\to m_3) = f_\alpha(m_3\to m_1)

Let us call this nonnegative number f_\alpha. A small calculation employing the chain rule shows that d g_\alpha(x(t))/d t equals f_\alpha times

\displaystyle{ (x/\alpha)^{m_1}\, \log\frac{(x/\alpha)^{m_2}}{(x/\alpha)^{m_1}} \; + }

\displaystyle{ (x/\alpha)^{m_2} \, \log\frac{(x/\alpha)^{m_3}}{(x/\alpha)^{m_2}} \; + }

\displaystyle{  (x/\alpha)^{m_3}\, \log\frac{(x/\alpha)^{m_1}}{(x/\alpha)^{m_3}} }

We need to think about the sign of this quantity:

Lemma 4. Let a, b, c be positive numbers. Then a \log(b/a) + b \log(c/b) + c \log(a/c) is less than or equal to zero, with equality precisely when a = b = c.

The proof is a direct application of the log sum inequality. In fact, this holds not just for three numbers, but for any finite list of numbers. Indeed, that is precisely how one obtains the proof for cycles of arbitrary length. Even the two-cycle proof is a special case! If you are wondering how the log sum inequality is proved, it is an application of Jensen’s inequality, that workhorse of convex analysis.

The three-cycle calculation extends to a proof for the theorem so long as there is no directed edge that is shared between two directed cycles. When there are such edges, we need to argue that the flows f_\alpha and f_x can be split between the cycles sharing that edge in a consistent manner, so that the cycles can be analyzed independently. We will need the following simple lemma about conservative flows from graph theory. We will apply this lemma to the flow f_\alpha.

Lemma 5. Let f be a conservative flow on a graph G. Then there exist directed cycles C_1, C_2, \dots, C_k in G, and nonnegative real ‘flows’ f_1, f_2, \dots, f_k \in [0,\infty) such that for each directed edge e in G, the flow f(e) equals the sum of f_i over all i such that the cycle C_i contains the edge e.

Intuitively, this lemma says that conservative flows come from constant flows on the directed cycles of the graph. How does one show this lemma? I’m sure there are several proofs, and I hope some of you can share some of the really neat ones with me. The one I employed was algorithmic. The idea is to pick a cycle, any cycle, and subtract the maximum constant flow that this cycle allows, and repeat. This is most easily understood by looking at the example of the figure-eight:

The figure-eight. This reaction network consists of two three-cycles sharing an edge:

Here’s the proof for Lemma 5. Let f be a conservative flow on this graph. We want to exhibit cycles and flows on this graph according to Lemma 5. We arbitrarily pick any cycle in the graph. For example, in the figure-eight, suppose we pick the cycle u_0\to u_1\to u_2\to u_0. We pick an edge in this cycle on which the flow is minimum. In this case, f(u_0\to u_1) = f(u_2\to u_0) is the minimum. We define a remainder flow by subtracting from f this constant flow which was restricted to one cycle. So the remainder flow is the same as f on edges that don’t belong to the picked cycle. For edges that belong to the cycle, the remainder flow is f minus the minimum of f on this cycle. We observe that this remainder flow satisfies the conditions of Lemma 5 on a graph with strictly fewer edges. Continuing in this way, since the lemma is trivially true for the empty graph, we are done by infinite descent.
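Here’s that peeling algorithm as a short Python sketch (my own rendering of the proof), tested on the figure-eight with integer flows so the arithmetic is exact:

```python
def decompose(flow):
    """Split a conservative flow {(u, v): f(u, v)} into constant flows on
    directed cycles, by repeatedly peeling off a cycle."""
    flow, cycles = dict(flow), []
    while any(f > 0 for f in flow.values()):
        # walk forward along positive-flow edges until a vertex repeats;
        # conservation guarantees the walk never gets stuck
        u = next(u for (u, v), f in flow.items() if f > 0)
        path, seen = [u], {u}
        while True:
            v = next(w for (a, w), f in flow.items() if a == path[-1] and f > 0)
            if v in seen:
                cycle = path[path.index(v):] + [v]
                break
            path.append(v); seen.add(v)
        edges = list(zip(cycle, cycle[1:]))
        c = min(flow[e] for e in edges)         # the maximum constant flow
        for e in edges:
            flow[e] -= c                        # subtract it and repeat
        cycles.append((cycle, c))
    return cycles

# the figure-eight: two three-cycles sharing the edge u1 -> u2
f = {('u0', 'u1'): 1, ('u1', 'u2'): 3, ('u2', 'u0'): 1,
     ('u3', 'u1'): 2, ('u2', 'u3'): 2}
print(decompose(f))   # one cycle with constant flow 1 and one with constant flow 2
```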

Now that we know how to split the flow f_\alpha across cycles, we can figure out how to split the rates across the different cycles. This will tell us how to split the flow f_x across cycles. Again, this is best illustrated by an example.

The figure-eight. Again, this reaction network looks like

Suppose, as in Lemma 5, we obtain the cycles

C_1 = u_0\to u_1\to u_2\to u_0

with constant flow f_\alpha^1

and

C_2 = u_3\to u_1\to u_2\to u_3 with constant flow f_\alpha^2 such that

f_\alpha^1 + f_\alpha^2 = f_\alpha(u_1\to u_2)

Here’s the picture:

Then we obtain rates r^1(u_1\to u_2) and r^2(u_1\to u_2) by solving the equations

f^1_\alpha = r^1(u_1\to u_2) \alpha^{u_1}

f^2_\alpha = r^2(u_1\to u_2) \alpha^{u_1}

Using these rates, we can define non-constant flows f^1_x on C_1 and f^2_x on C_2 by the usual formulas:

f^1_x(u_1\to u_2) = r^1(u_1\to u_2) x^{u_1}

and similarly for f^2_x. In particular, this gives us

f^1_x(u_1\to u_2)/f^1_\alpha = (x/\alpha)^{u_1}

and similarly for f^2_x.

Using this, we obtain the proof of the Theorem! The time derivative of g_\alpha along a trajectory has a contribution from each cycle C as in Lemma 5, where each cycle is treated as a separate system with the new rates r^C, and the new flows f^C_\alpha and f^C_x. So, we’ve reduced the problem to the case of a cycle, which we’ve already done.

Let’s review what happened. The time derivative of the function g_\alpha has a very nice form, which is linear in the flow f_x. The reaction network can be broken up into cycles. The conservative flow f_\alpha for a complex balanced system can be split into conservative flows on cycles by Lemma 5. This informs us how to split the non-conservative flow f_x across cycles. By linearity of the time derivative, we can treat each cycle separately. For each cycle, we get an expression to which the log sum inequality applies, giving us the final result that g_\alpha decreases along trajectories of the rate equation.

Now that we have a Lyapunov function, we will put it to use to obtain some nice theorems about the dynamics, and finally state the global attractor conjecture. All that and more, in the next blog post!


Water

29 November, 2013

Over a year ago, I wrote here about ice. It has 16 known forms with different crystal geometries. The most common form on Earth, hexagonal ice I, is a surprisingly subtle blend of order and randomness:

Liquid water is even more complicated. It’s mainly a bunch of molecules like this jostling around:


The two hydrogens are tightly attached to the oxygen. But accidents do happen. On average, for every 555 million molecules of water, one is split into a negatively charged OH⁻ and a positively charged H⁺. And this actually matters a lot, in chemistry. It’s the reason we say water has pH 7.

Why? By definition, pH 7 means that for every liter of water, there’s 10⁻⁷ moles of H⁺. That’s where the 7 comes from. But there’s 55.5 moles of water in every liter, at least when the water is cold so its density is almost 1 kilogram/liter. So, do the math and you see that for every 555 million molecules of water, there’s only one H⁺.

Acids have a lot more. For example, lemon juice has one H⁺ per 8800 water molecules.
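If you want to double-check this arithmetic, here’s a tiny sketch:

```python
import math

moles_of_water_per_liter = 55.5        # cold water, density about 1 kg/liter

def waters_per_hplus(pH):
    return moles_of_water_per_liter / 10**(-pH)

print(f'{waters_per_hplus(7):.3g}')                  # 5.55e+08: one H+ per 555 million
print(-math.log10(moles_of_water_per_liter / 8800))  # ~2.2: the pH implied by one H+ per 8800 waters
```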

But let’s think about this H⁺ thing. What is it, really? It’s a hydrogen atom missing its electron: a proton, all by itself!

But what happens when you’ve got a lone proton in water? It doesn’t just sit there. It quickly attaches to a water molecule, forming H₃O⁺. This is called a hydronium ion, and it looks like this:


But hydronium is still positively charged, so it will attract electrons in other water molecules! Different things can happen. Here you see a hydronium ion surrounded by three water molecules in a symmetrical way:


This is called an Eigen cation, with chemical formula H₉O₄⁺. I believe it’s named after the Nobel-prize-winning chemist Manfred Eigen—not his grandfather Günther, the mathematician of ‘eigenvector’ fame.

And here you see a hydronium ion at lower right, attracted to a water molecule at left:


This is a Zundel cation, with chemical formula H₅O₂⁺. It’s named after Georg Zundel, the German expert on hydrogen bonds. The H⁺ in the middle looks more tightly connected to the water at right than to the water at left. But it should be completely symmetrical—at least, that’s the theory of how a Zundel cation works.

But the Eigen and Zundel cations are still positively charged, so they attract more water molecules, making bigger and bigger structures. Nowadays chemists are studying these using computer simulations, and comparing the results to experiments. In 2010, Evgenii Stoyanov, Irina Stoyanova and Christopher Reed used infrared spectroscopy to argue that a lone proton often attaches itself to 6 water molecules, forming H⁺(H₂O)₆, or H₁₃O₆⁺, like this:


As you can see, this forms when each hydrogen in a Zundel cation attracts an extra water molecule.

Even this larger structure attracts more water molecules:


But the positive charge, they claim, stays roughly within the dotted line.

Wait. Didn’t I say the lone proton was right in the middle? Isn’t that what the picture shows—the H in the middle?

Well, the picture is a bit misleading! First, everything is wiggling around a lot. And second, quantum mechanics says we don’t know the position of that proton precisely! Instead, it’s a ‘probability cloud’ smeared over a large region, ending roughly at the dotted line. (You can’t say precisely where a cloud ends.)

It seems that something about these subtleties makes the distance between the two oxygen nuclei at the center surprisingly large. In an ordinary water molecule, the distance between the hydrogen and oxygen is a bit less than 100 pm—that’s 100 picometers, or 100 × 10⁻¹² meters, or one angstrom (Å) in chemist’s units:


In ordinary ice, there are also weaker bonds called hydrogen bonds that attach neighboring water molecules. These bonds are a bit longer, as shown in this picture by Stephen Lower, who also drew that great picture of ice:

But the distance between the two central oxygens in H₁₃O₆⁺ is about 2.57 angstroms, or twice 1.28:


Stoyanov, Stoyanova and Reed put the exclamation mark here. I guess the big distance came as a big surprise!

I should emphasize that this work is new and still controversial. There’s some evidence, which I don’t understand, that 20 is a ‘magic number': a lone proton is happiest when accompanied by 20 water molecules, forming H⁺(H₂O)₂₀. One possibility is that the proton is surrounded by a symmetrical cage of 20 water molecules shaped like a dodecahedron! But in 2005, a team of scientists did computer simulations and arrived at a different geometry, like this:

This is not symmetrical: there’s a Zundel cation highlighted at right, together with 20 water molecules.

Of course, in reality a number of different structures may predominate, in a rapidly changing and random way. Computer simulations should eventually let us figure this out. We’ve known the relevant laws of nature for over 80 years. But running them on a computer is not easy! Kieron Taylor did his PhD work on simulating water, and he wrote:

It’s a most vexatious substance to simulate in useful time scales. Including the proton exchange or even flexible multipoles requires immense computation.

It would be very interesting if the computational complexity of water were higher, in some precise sense, than that of many other liquids. Water is weird in other ways, too. It takes a lot of energy to heat it, it expands when it freezes, and its molecules have a large ‘dipole moment’—meaning the electric charge is distributed in a very lopsided way, thanks to the ‘Mickey Mouse’ way the two H’s are attached to the O.

I’ve been talking about the fate of the H⁺ when a water molecule splits into H⁺ and OH⁻. I should add that in heavy water, H⁺ could be something other than a lone proton. It could be a deuteron: a proton and a neutron stuck together. Or it could be a triton: a proton and two neutrons. For this reason, while most chemists call H⁺ simply a ‘proton’, the pedantically precise ones call it a hydron, which covers all the possibilities!

But what about the OH⁻? This is called a hydroxide ion:


But this, too, attracts other water molecules. First it grabs one and forms a bihydroxide ion, which is a chain like this:

H—O—H—O—H

with chemical formula H₃O₂⁻. And then the bihydroxide ion attracts other water molecules, perhaps like this:


Again, this is a guess—and certainly a simplified picture of a dynamic, quantum-mechanical system.

References and digressions

For more, see:

• Evgenii S. Stoyanov, Irina V. Stoyanova, Christopher A. Reed, The unique nature of H⁺ in water, Chemical Science 2 (2011), 462–472.

Abstract: The H⁺(aq) ion in ionized strong aqueous acids is an unexpectedly unique H₁₃O₆⁺ entity, unlike those in gas phase H⁺(H₂O)n clusters or typical crystalline acid hydrates. IR spectroscopy indicates that the core structure has neither H₉O₄⁺ Eigen-like nor typical H₅O₂⁺ Zundel-like character. Rather, extensive delocalization of the positive charge leads to a H₁₃O₆⁺ ion having an unexpectedly long central OO separation of 2.57 Å and four conjugated OO separations of 2.7 Å. These dimensions are in conflict with the shorter OO separations found in structures calculated by theory. Ultrafast dynamic properties of the five H atoms involved in these H-bonds lead to a substantial collapse of normal IR vibrations and the appearance of a continuous broad absorption (cba) across the entire IR spectrum. This cba is distinguishable from the broad IR bands associated with typical low-barrier H-bonds. The solvation shell outside of the H₁₃O₆⁺ ion defines the boundary of positive charge delocalization. At low acid concentrations, the H₁₃O₆⁺ ion is a constituent part of an ion pair that has contact with the first hydration shell of the conjugate base anion. At higher concentrations, or with weaker acids, one or two H₂O molecules of H₁₃O₆⁺ cation are shared with the hydration shell of the anion. Even the strongest acids show evidence of ion pairing.

Unfortunately this paper is not free, and my university doesn’t even subscribe to this journal. But I just discovered that Evgenii Stoyanov and Irina Stoyanova are here at U. C. Riverside! So, I may ask them some questions.

This picture:

came from here:

• Srinivasan S. Iyengar, Matt K. Petersen, Tyler J. F. Day, Christian J. Burnham, Virginia E. Teige and Gregory A. Voth, The properties of ion-water clusters. I. The protonated 21-water cluster, J. Chem. Phys. 123 (2005), 084309.

Abstract. The ab initio atom-centered density-matrix propagation approach and the multistate empirical valence bond method have been employed to study the structure, dynamics, and rovibrational spectrum of a hydrated proton in the “magic” 21 water cluster. In addition to the conclusion that the hydrated proton tends to reside on the surface of the cluster, with the lone pair on the protonated oxygen pointing “outwards,” it is also found that dynamical effects play an important role in determining the vibrational properties of such clusters. This result is used to analyze and complement recent experimental and theoretical studies.

This paper is free online! We live in a semi-barbaric age where science is probing the finest details of matter, space and time—but many of the discoveries, paid for by taxes levied on the hard-working poor, are snatched, hidden, and sold by profiteers. Luckily, a revolution is afoot…

There are other things in ‘pure water’ beside what I’ve mentioned. For example, there are some lone electrons! Since these are light, quantum mechanics says their probability cloud spreads out to be quite big. This picture by Michael Tauber shows what you should imagine:

He says:

Schematic representation of molecules in the first and second coordination shells around the solvated electron. First shell molecules are shown hydrogen bonded to the electron. Hydrogen bonds between molecules of 1st and 2nd shells are disrupted.


Autocatalysis in Reaction Networks

11 October, 2013

guest post by Manoj Gopalkrishnan

Since this is my first time writing a blog post here, let me start with a word of introduction. I am a computer scientist at the Tata Institute of Fundamental Research, broadly interested in connections between Biology and Computer Science, with a particular interest in reaction networks. I first started thinking about them during my Ph.D. at the Laboratory for Molecular Science. My fascination with them has been predominantly mathematical. As a graduate student, I encountered an area with rich connections between combinatorics and dynamics, and surprisingly easy-to-state and compelling unsolved conjectures, and got hooked.

There is a story about Richard Feynman that he used to take bets with mathematicians. If any mathematician could make Feynman understand a mathematical statement, then Feynman would guess whether or not the statement was true. Of course, Feynman was in the habit of winning these bets, which allowed him to boast that mathematics, especially in its obsession with proof, was essentially irrelevant, since a relative novice like himself could, after a moment’s thought, guess at the truth of these mathematical statements. I have always felt Feynman’s claim to be unjust, but have often wondered what mathematical statement I would put to him so that his chances of winning were no better than random.

Today I want to tell you of a result about reaction networks that I have recently discovered with Abhishek Deshpande. The statement seems like a fine candidate to throw at Feynman because until we proved it, I would not have bet either way about its truth. Even after we obtained a short and elementary proof, I do not completely ‘see’ why it must be true. I am hoping some of you will be able to demystify it for me. So, I’m just going to introduce enough terms to be able to make the statement of our result, and let you think about how to prove it.

John and his colleagues have been talking about reaction networks as Petri nets in the network theory series on this blog. As discussed in part 2 of that series, a Petri net is a diagram like this:

Following John’s terminology, I will call the aqua squares ‘transitions’ and the yellow circles ‘species’. If we have some number #rabbit of rabbits and some number #wolf of wolves, we draw #rabbit many black dots called ‘tokens’ inside the yellow circle for rabbit, and #wolf tokens inside the yellow circle for wolf, like this:

Here #rabbit = 4 and #wolf = 3. The predation transition consumes one ‘rabbit’ token and one ‘wolf’ token, and produces two ‘wolf’ tokens, taking us here:

John explained in parts 2 and 3 how one can put rates on different transitions. For today I am only going to be concerned with ‘reachability:’ what token states are reachable from what other token states. John talked about this idea in part 25.

By a complex I will mean a population vector: a snapshot of the number of tokens in each species. In the example above, (#rabbit, #wolf) is a complex. If y, y' are two complexes, then we write

y \to y'

if we can get from y to y' by a single transition in our Petri net. For example, we just saw that

(4,3)\to (3,4)

via the predation transition.

Reachability, denoted \to^*, is the transitive closure of the relation \to. So y\to^* y' (read y' is reachable from y) iff there are complexes

y=y_0,y_1,y_2,\dots,y_k =y'

such that

y_0\to y_1\to\cdots\to y_{k-1}\to y_k.

For example, here (5,1) \to^* (1, 5) by repeated predation.
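
If you like to experiment, one-step transitions and reachability are easy to play with in code. Here’s a minimal sketch in Python for the predation-only net—the breadth-first search terminates here because predation conserves the total number of tokens:

from collections import deque

# A transition is a pair (consumed, produced) of population vectors.
# Predation consumes (1 rabbit, 1 wolf) and produces (0 rabbits, 2 wolves).
transitions = [((1, 1), (0, 2))]

def step(y):
    # All complexes reachable from y by a single transition.
    for consumed, produced in transitions:
        if all(yi >= ci for yi, ci in zip(y, consumed)):
            yield tuple(yi - ci + pi for yi, ci, pi in zip(y, consumed, produced))

def reachable(y, y_prime):
    # Is y' reachable from y?  Breadth-first search over token states.
    seen, queue = {y}, deque([y])
    while queue:
        current = queue.popleft()
        if current == y_prime:
            return True
        for nxt in step(current):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

print(reachable((4, 3), (3, 4)))  # True: one predation
print(reachable((5, 1), (1, 5)))  # True: repeated predation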

I am very interested in switches. After all, a computer is essentially a box of switches! You can build computers by connecting switches together. In fact, that’s how early computers like the Z3 were built. The CMOS gates at the heart of modern computers are essentially switches. By analogy, the study of switches in reaction networks may help us understand biochemical circuits.

A siphon is a set of species that is ‘switch-offable’. That is, if there are no tokens in the siphon states, then they will remain absent in future. Equivalently, the only reactions that can produce tokens in the siphon states are those that require tokens from the siphon states before they can fire. Note that no matter how many rabbits there are, if there are no wolves, there will continue to be no wolves. So {wolf} is a siphon. Similarly, {rabbit} is a siphon, as is the union {rabbit, wolf}. However, when Hydrogen and Oxygen form Water, {Water} is not a siphon.

For another example, consider this Petri net:

The set {HCl, NaCl} is a siphon. However, there is a conservation law: whenever an HCl token is destroyed, an NaCl token is created, so that #HCl + #NaCl is invariant. If both HCl and NaCl were present to begin with, the complexes where both are absent are not reachable. In this sense, this siphon is not ‘really’ switch-offable. As a first pass at capturing this idea, we will introduce the notion of ‘critical set’.

A conservation law is a linear expression involving numbers of tokens that is invariant under every transition in the Petri net. A conservation law is positive if all the coefficients are non-negative. A critical set of states is a set that does not contain the support of a positive conservation law.

For example, the support of the positive conservation law #HCl + #NaCl is {HCl, NaCl}, and hence no set containing this set is critical. Thus {HCl, NaCl} is a siphon, but not critical. On the other hand, the set {NaCl} is critical but not a siphon. {HCl} is a critical siphon. And in our other example, {rabbit, wolf} is a critical siphon.
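
The siphon condition is easy to test mechanically. Here’s a sketch in Python; since the Petri net above is only shown as a picture, I’ve guessed a single reaction, NaOH + HCl → NaCl + H₂O, consistent with the conservation law just described:

# A transition is a pair (source, target) of dicts mapping species to token counts.
transitions = [
    ({'NaOH': 1, 'HCl': 1}, {'NaCl': 1, 'H2O': 1}),
]

def is_siphon(Z, transitions):
    # Z is a siphon iff every transition that produces a species in Z
    # also requires (consumes) some species in Z before it can fire.
    for source, target in transitions:
        produces = any(target.get(sp, 0) > 0 for sp in Z)
        requires = any(source.get(sp, 0) > 0 for sp in Z)
        if produces and not requires:
            return False
    return True

print(is_siphon({'HCl', 'NaCl'}, transitions))  # True: a siphon (but not critical)
print(is_siphon({'NaCl'}, transitions))         # False: NaCl is produced from outside
print(is_siphon({'HCl'}, transitions))          # True: a critical siphon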

Of particular interest to us will be minimal critical siphons, the minimal sets among critical siphons. Consider this example:

Here we have two transitions:

X \to 2Y

and

2X \to Y

The set \{X,Y\} is a critical siphon. But so is the smaller set \{X\}. So, \{X,Y\} is not minimal.

We define a self-replicable set to be a set A of species such that there exist complexes y and y' with y\to^* y' such that for all i \in A we have

y'_i > y_i

So, there are transitions that accomplish the job of creating more tokens for all the species in A. In other words: these species can ‘replicate themselves’.

We define a drainable set by changing the > to a <. So, there are transitions that accomplish the job of reducing the number of tokens for all the species in A. These species can ‘drain away’.

Now here comes the statement:

Every minimal critical siphon is either drainable or self-replicable!

We prove it in this paper:

• Abhishek Deshpande and Manoj Gopalkrishnan, Autocatalysis in reaction networks.

But first note that the statement becomes false if the critical siphon is not minimal. Look at this example again:

The set \{X,Y\} is a critical siphon. However \{X,Y\} is neither self-replicable (since every reaction destroys X) nor drainable (since every reaction produces Y). But we’ve already seen that \{X,Y\} is not minimal. It has a critical subsiphon, namely \{X\}. This one is minimal—and it obeys our theorem, because it is drainable.

Checking these statements is a good way to make sure you understand the concepts! I know I’ve introduced a lot of terminology here, and it takes a while to absorb.
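
If you’d rather let a computer do some of that checking, here’s a sketch in Python: a bounded breadth-first search for drainable or self-replicable witnesses in the two-reaction example above. The bound is a cheat—real reachability questions are much subtler—so treat this as an exploration tool, not a decision procedure:

from collections import deque

# The example net: X -> 2Y and 2X -> Y, each reaction a (source, target) pair.
transitions = [((1, 0), (0, 2)),   # X -> 2Y
               ((2, 0), (0, 1))]   # 2X -> Y

def bounded_reachable(y, cap):
    # All complexes reachable from y, keeping every token count <= cap.
    seen, queue = {y}, deque([y])
    while queue:
        state = queue.popleft()
        for source, target in transitions:
            if all(si <= xi for si, xi in zip(source, state)):
                nxt = tuple(xi - si + ti for xi, si, ti in zip(state, source, target))
                if nxt not in seen and all(n <= cap for n in nxt):
                    seen.add(nxt)
                    queue.append(nxt)
    return seen

def witness(A, y, cap, sign):
    # Look for y ->* y' with y'_i > y_i (sign = +1, self-replicable)
    # or y'_i < y_i (sign = -1, drainable) for every species i in A.
    return any(all(sign * (yp[i] - y[i]) > 0 for i in A)
               for yp in bounded_reachable(y, cap))

y = (3, 3)                          # start with 3 X's and 3 Y's
print(witness([0], y, 30, -1))      # True:  {X} is drainable
print(witness([0, 1], y, 30, +1))   # False: {X,Y} is not self-replicable
print(witness([0, 1], y, 30, -1))   # False: {X,Y} is not drainable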

Anyway: our proof that every minimal critical siphon is either drainable or self-replicable makes use of a fun result about matrices. Consider a real square matrix with a sign pattern like this:

\left( \begin{array}{cccc} <0 & >0 & \cdots & >0 \\ >0 & <0 & \cdots & >0 \\ \vdots & \vdots & \ddots & \vdots \\ >0 & >0 & \cdots & <0 \end{array} \right)

If the matrix is full-rank then there is a positive linear combination of the rows of the matrix so that all the entries are nonzero and have the same sign. In fact, we prove something stronger in Theorem 5.9 of our paper. At first, we thought this statement about matrices should be equivalent to one of the many well-known alternative statements of Farkas’ lemma, like Gordan’s theorem.

However, we could not find a way to make this work, so we ended up proving it by a different technique. Later, my colleague Jaikumar Radhakrishnan came up with a clever proof that uses Farkas’ lemma twice. However, so far we have not obtained the stronger result in Theorem 5.9 with this proof technique.
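
By the way, the matrix statement is fun to poke at numerically. Here’s a sketch in Python using linear programming (scipy’s linprog) to search for a positive combination of rows with all entries of one sign—the particular matrix is just an example I made up with the right sign pattern, and this experiment is of course no substitute for the proof in the paper:

import numpy as np
from scipy.optimize import linprog

# A full-rank matrix with negative diagonal and positive off-diagonal entries.
M = np.array([[-3.0,  1.0,  2.0],
              [ 1.0, -4.0,  1.0],
              [ 2.0,  1.0, -5.0]])

def negative_combination(M):
    # Search for row weights x >= 0 with every entry of x @ M <= -1,
    # i.e. a nonnegative combination of rows that is strictly negative.
    # (To hunt for an all-positive combination instead, negate M.)
    n = M.shape[0]
    res = linprog(c=np.zeros(n), A_ub=M.T, b_ub=-np.ones(n),
                  bounds=[(0, None)] * n)
    return res.x if res.success else None

print(negative_combination(M))   # feasible: e.g. weights (3, 2, 2) work here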

My interest in the result that every minimal critical siphon is either drainable or self-replicable is not purely aesthetic (though aesthetics is a big part of it). There is a research community of folks who are thinking of reaction networks as a programming language, and synthesizing molecular systems that exhibit sophisticated dynamical behavior as per specification:

International Conference on DNA Computing and Molecular Programming.

Foundations of Nanoscience: Self-Assembled Architectures and Devices.

Molecular Programming Architectures, Abstractions, Algorithms and Applications.

Networks that exhibit some kind of catalytic behavior are a recurring theme among such systems, and even more so in biochemical circuits.

Here is an example of catalytic behavior:

A + C \to B + C

The ‘catalyst’ C helps transform A to B. In the absence of C, the reaction is turned off. Hence, catalysts are switches in chemical circuits! From this point of view, it is hardly surprising that they are required for the synthesis of complex behaviors.

In information processing, one needs amplification to make sure that a signal can propagate through a circuit without being overwhelmed by errors. Here is a chemical counterpart to such amplification:

A + C \to 2C

Here the catalyst C catalyzes its own production: it is an ‘autocatalyst’, or a self-replicating species. By analogy, autocatalysis is key for scaling synthetic molecular systems.

Our work deals with these notions on a network level. We generalize the notion of catalysis in two ways. First, we allow a catalyst to be a set of species instead of a single species; second, its absence can turn off a reaction pathway instead of a single reaction. We propose the notion of self-replicable siphons as a generalization of the notion of autocatalysis. In particular, ‘weakly reversible’ networks have critical siphons precisely when they exhibit autocatalytic behavior. I was led to this work when I noticed the manifestation of this last statement in many examples.

Another hope I have is that perhaps one can study the dynamics of each minimal critical siphon of a reaction network separately, and then somehow be able to answer interesting questions about the dynamics of the entire network, by stitching together what we know for each minimal critical siphon. On the synthesis side, perhaps this could lead to a programming language to synthesize a reaction network that will achieve a specified dynamics. If any of this works out, it would be really cool! I think of how abelian group theory (and more broadly, the theory of abelian categories, which includes categories of vector bundles) benefits from a fundamental theorem that lets you break a finite abelian group into parts that are easy to study—or how number theory benefits from a special case, the fundamental theorem of arithmetic. John has also pointed out that reaction networks are really presentations of symmetric monoidal categories, so perhaps this could point the way to a Fundamental Theorem for Symmetric Monoidal Categories.

And then there is the Global Attractor Conjecture, a long-standing open problem concerning the long-term behavior of solutions to the rate equations. Now that is a whole story by itself, and will have to wait for another day.


Coherence for Solutions of the Master Equation

10 July, 2013

guest post by Arjun Jain

I am a master’s student in the physics department of the Indian Institute of Technology Roorkee. I’m originally from Delhi. For some time now I’ve been wanting to go into mathematical physics, and I hope to do a PhD in that. Apart from maths and physics, I am also quite passionate about art and music.

Right now I am visiting John Baez at the Centre for Quantum Technologies, and we’re working on chemical reaction networks. This post can be considered as an annotation to the last paragraph of John’s paper, Quantum Techniques for Reaction Networks, where he raises the question of when a solution to the master equation that starts as a coherent state will remain coherent for all times. Remember, the ‘master equation’ describes the random evolution of collections of classical particles, and a ‘coherent state’ is one where the probability distribution of particles of each type is a Poisson distribution.

If you’ve been following the network theory series on this blog, you’ll know these concepts, and you’ll know the Anderson-Craciun-Kurtz theorem gives many examples of coherent states that remain coherent. However, all these are equilibrium solutions of the master equation: they don’t change with time. Moreover they are complex balanced equilibria: the rate at which any complex is produced equals the rate at which it is consumed.

There are also non-equilibrium examples where coherent states remain coherent. But they seem rather rare, and I would like to explain why. So, I will give a necessary condition for it to happen. I’ll give the proof first, and then discuss some simple examples. We will see that while the condition is necessary, it is not sufficient.

First, recall the setup. If you’ve been following the network theory series, you can skip the next section.

Reaction networks

Definition. A reaction network consists of:

• a finite set S of species,

• a finite set K of complexes, where a complex is a finite sum of species, or in other words, an element of \mathbb{N}^S,

• a graph with K as its set of vertices and some set T of edges.

You should have in mind something like this:

where our set of species is S = \{A,B,C,D,E\}, the complexes are things like A + E, and the arrows are the elements of T, called transitions or reactions. So, we have functions

s , t : T \to K

saying the source and target of each transition.

Next:

Definition. A stochastic reaction network is a reaction network together with a function r: T \to (0,\infty) assigning a rate constant to each reaction.

From this we can write down the master equation, which describes how a stochastic state evolves in time:

\displaystyle{ \frac{d}{dt} \Psi(t) = H \Psi(t) }

Here \Psi(t) is a vector in the stochastic Fock space, which is the space of formal power series in a bunch of variables, one for each species, and H is an operator on this space, called the Hamiltonian.

From now on I’ll number the species with numbers from 1 to k, so

S = \{1, \dots, k\}

Then the stochastic Fock space consists of real formal power series in variables that I’ll call z_1, \dots, z_k. We can write any of these power series as

\displaystyle{\Psi = \sum_{\ell \in \mathbb{N}^k} \psi_\ell z^\ell }

where

z^\ell = z_1^{\ell_1} \cdots z_k^{\ell_k}

We have annihilation and creation operators on the stochastic Fock space:

\displaystyle{ a_i \Psi = \frac{\partial}{\partial z_i} \Psi }

\displaystyle{ a_i^\dagger \Psi = z_i \Psi }

and the Hamiltonian is built from these as follows:

\displaystyle{ H = \sum_{\tau \in T} r(\tau) \, ({a^\dagger}^{t(\tau)} - {a^\dagger}^{s(\tau)}) \, a^{s(\tau)} }

John explained this here (using slightly different notation), so I won’t go into much detail now, but I’ll say what all the symbols mean. Remember that the source of a transition \tau is a complex, or list of natural numbers:

s(\tau) = (s_1(\tau), \dots, s_k(\tau))

So, the power a^{s(\tau)} is really an abbreviation for a big product of annihilation operators, like this:

\displaystyle{ a^{s(\tau)} = a_1^{s_1(\tau)} \cdots a_k^{s_k(\tau)} }

This describes the annihilation of all the inputs to the transition \tau. Similarly, we define

\displaystyle{ {a^\dagger}^{s(\tau)} = {a_1^\dagger}^{s_1(\tau)} \cdots {a_k^\dagger}^{s_k(\tau)} }

and

\displaystyle{ {a^\dagger}^{t(\tau)} = {a_1^\dagger}^{t_1(\tau)} \cdots {a_k^\dagger}^{t_k(\tau)} }
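
If you want to play with these operators concretely, you can truncate the stochastic Fock space at some maximum particle number and represent everything by matrices. Here’s a minimal one-species sketch in Python—the truncation at n_max is a lossy approximation, and the example reactions are borrowed from the amoeba network below:

import numpy as np

n_max = 50            # truncate at n_max particles: an approximation
dim = n_max + 1       # psi[n] = coefficient of z^n

# a = d/dz and a_dagger = multiplication by z, as matrices:
a = np.zeros((dim, dim))
for n in range(n_max):
    a[n, n + 1] = n + 1          # d/dz sends z^{n+1} to (n+1) z^n
a_dag = np.zeros((dim, dim))
for n in range(1, dim):
    a_dag[n, n - 1] = 1.0        # z sends z^{n-1} to z^n

def hamiltonian(reactions):
    # H = sum over reactions of r (a_dag^t - a_dag^s) a^s, one species only.
    # Each reaction is a triple (rate, s, t) of a rate and two natural numbers.
    H = np.zeros((dim, dim))
    for r, s, t in reactions:
        H += r * (np.linalg.matrix_power(a_dag, t)
                  - np.linalg.matrix_power(a_dag, s)) @ np.linalg.matrix_power(a, s)
    return H

# Example: fission (1 -> 2 at rate 2) and competition (2 -> 1 at rate 1).
H = hamiltonian([(2.0, 1, 2), (1.0, 2, 1)])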

The result

Here’s the result:

Theorem. If a solution \Psi(t) of the master equation is a coherent state for all times t \ge 0, then \Psi(0) must be complex balanced except for complexes of degree 0 or 1.

This requires some explanation.

First, saying that \Psi(t) is a coherent state means that it is an eigenvector of all the annihilation operators. Concretely this means

\Psi (t) = \displaystyle{\frac{e^{c(t) \cdot z}}{e^{c_1(t) + \cdots + c_k(t)}}}

where

c(t) = (c_1(t), \dots, c_k(t)) \in [0,\infty)^k

and

z = (z_1, \dots, z_k)

It will be helpful to write

\mathbf{1}= (1, 1, \dots, 1)

so we can write

\Psi (t) = \displaystyle{ e^{c(t) \cdot (z - \mathbf{1})} }

Second, we say that a complex has degree d if it is a sum of exactly d species. For example, in this reaction network:

the complexes A + C and B + E have degree 2, while the rest have degree 1. We use the word ‘degree’ because each complex \ell gives a monomial

z^\ell = z_1^{\ell_1} \cdots z_k^{\ell_k}

and the degree of the complex is the degree of this monomial, namely

\ell_1 + \cdots + \ell_k

Third and finally, we say a solution \Psi(t) of the master equation is complex balanced for a specific complex \ell if the total rate at which that complex is produced equals the total rate at which it’s destroyed.

Now we are ready to prove the theorem:

Proof. Consider the master equation

\displaystyle { \frac{d \Psi (t)}{d t} = H \Psi (t) }

Assume that \Psi(t) is a coherent state for all t \ge 0. This means

\Psi (t) = \displaystyle{ e^{c(t) \cdot (z - \mathbf{1})} }

For convenience, we write c(t) simply as c, and similarly for the components c_i. Then we have

\displaystyle{ \frac{d\Psi(t)}{dt}  = (\dot{c} \cdot (z - \mathbf{1})) \, e^{c \cdot (z - \mathbf{1})}   }

On the other hand, the master equation gives

\begin{array}{ccl} \displaystyle {\frac{d\Psi(t)}{dt}} &=&  \displaystyle{ \sum_{\tau \in T} r(\tau) \, ({a^\dagger}^{t(\tau)} - {a^\dagger}^{s(\tau)}) \, a^{s(\tau)} e^{c \cdot (z - \mathbf{1})} } \\  \\  &=& \displaystyle{\sum_{\tau \in T} c^{s(\tau)} r(\tau) \, ({z}^{t(\tau)} - {z}^{s(\tau)}) e^{c \cdot (z - \mathbf{1})} } \end{array}

So,

\displaystyle{ (\dot{c} \cdot (z - \mathbf{1})) \, e^{c \cdot (z - \mathbf{1})}  =\sum_{\tau \in T} c^{s(\tau)} r(\tau) \, ({z}^{t(\tau)} - {z}^{s(\tau)}) e^{c \cdot (z - \mathbf{1})} }

As a result, we get

\displaystyle{  \dot{c}\cdot z -\dot{c}\cdot\mathbf{1} =  \sum_{\tau \in T} c^{s(\tau)} r(\tau) \, ({z}^{t(\tau)} - {z}^{s(\tau)})  }.

Comparing the coefficients of all z^\ell, we obtain the following. For \ell = 0, which is the only complex of degree zero, we get

\displaystyle { \sum_{\tau: t(\tau)=0} r(\tau) c^{s(\tau)} - \sum_{\tau\;:\; s(\tau)= 0} r(\tau) c^{s(\tau)} = -\dot{c}\cdot\mathbf{1} }

For the complexes \ell of degree one, we get these equations:

\displaystyle { \sum_{\tau\;:\; t(\tau)=(1,0,0,\dots)} r(\tau) c^{s(\tau)} - \sum_{\tau \;:\;s(\tau)=(1,0,0,\dots)} r(\tau) c^{s(\tau)}= \dot{c_1} }

\displaystyle { \sum_{\tau\; :\; t(\tau)=(0,1,0,\dots)} r(\tau) c^{s(\tau)} - \sum_{\tau\;:\; s(\tau)=(0,1,0,\dots)} r(\tau) c^{s(\tau)} = \dot{c_2} }

and so on. For all the remaining complexes \ell we have

\displaystyle { \sum_{\tau\;:\; t(\tau)=\ell} r(\tau) c^{s(\tau)} = \sum_{\tau \;:\; s(\tau)=\ell} r(\tau) c^{s(\tau)} }.

This says that the total rate at which this complex is produced equals the total rate at which it’s destroyed. So, our solution of the master equation is complex balanced for all complexes \ell of degree greater than one. This is our necessary condition.                                                                                   █

To illustrate the theorem, I’ll consider three simple examples. The third example shows that the condition in the theorem, though necessary, is not sufficient. Note that our proof also gives a necessary and sufficient condition for a coherent state to remain coherent: namely, that all the equations we listed hold, not just initially but for all times. But this condition seems a bit complicated.

Introducing amoebae into a Petri dish

Suppose that there is an inexhaustible supply of amoebae, randomly floating around in a huge pond. Each time an amoeba comes into our collection area, we catch it and add it to the population of amoebae in the Petri dish. Suppose that the rate constant for this process is 3.

So, the Hamiltonian is 3(a^\dagger -1). If we start with a coherent state, say

\displaystyle { \Psi(0)=\frac{e^{cz}}{e^c} }

then

\displaystyle { \Psi(t) = e^{3(a^\dagger -1)t} \; \frac{e^{cz}}{e^c}  = \frac{e^{(c+3t)z}}{e^{c+3t}} }

which is coherent at all times.

We can see that the condition of the theorem is satisfied, as all the complexes in the reaction network have degree 0 or 1.
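
You can also watch this happen numerically. Here’s a sketch in Python that evolves a truncated version of the master equation with scipy’s matrix exponential and checks that the state stays Poisson—the truncation level, initial mean and elapsed time are choices of mine:

import numpy as np
from scipy.linalg import expm
from scipy.stats import poisson

n_max, c, t = 120, 2.0, 5.0          # truncation, initial mean, elapsed time
dim = n_max + 1

a_dag = np.zeros((dim, dim))
for n in range(1, dim):
    a_dag[n, n - 1] = 1.0            # creation operator: multiplication by z

H = 3.0 * (a_dag - np.eye(dim))      # amoebae introduced at rate 3

psi0 = poisson.pmf(np.arange(dim), c)        # coherent state: Poisson with mean c
psi_t = expm(t * H) @ psi0                   # evolve by the master equation

# Compare with a Poisson distribution of mean c + 3t:
print(np.max(np.abs(psi_t - poisson.pmf(np.arange(dim), c + 3 * t))))  # tiny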

Amoebae reproducing and competing

This example shows a Petri dish with one species, amoebae, and two transitions: fission and competition. We suppose that the rate constant for fission is 2, while that for competition is 1. The Hamiltonian is then

H= 2({a^\dagger}^2-a^\dagger)a + (a^\dagger-{a^\dagger}^2)a^2

If we start off with the coherent state

\displaystyle{\Psi(0) = \frac{e^{2z}}{e^2}}

we find that

\displaystyle {\Psi(t)=e^{\left[2(z^2-z)\cdot 2+(z-z^2)\cdot 4\right]t} \; \Psi(0)}=\Psi(0)

which is coherent: the exponent vanishes identically, so the state does not change at all. It should be noted that the chosen initial state

\displaystyle{ \frac{e^{2z}}{e^2}}

was a complex balanced equilibrium solution. So, the Anderson–Craciun–Kurtz Theorem applies to this case.

Amoebae reproducing, competing, and being introduced

This is a combination of the previous two examples, where apart from ongoing reproduction and competition, amoebae are being introduced into the dish with a rate constant 3.

As in the above examples, we might think that coherent states could remain coherent forever here too. Let’s check that.

Assuming that this was true, if

\displaystyle{\Psi(t) = \frac{e^{c(t)z}}{e^{c(t)}} }

then c(t) would have to satisfy the following:

\dot{c}(t) = c(t)^2 + 3 -2c(t)

and

c(t)^2=2c(t)

Using the second equation, we get

\dot{c}(t) = 3 \Rightarrow c = 3t+ c_0

But this is certainly not a solution of the second equation. So, here we find that initially coherent states do not remain coherent for all times.

However, if we choose

\displaystyle{\Psi(0) = \frac{e^{2z}}{e^2}}

then this coherent state is complex balanced except for complexes of degree 0 or 1: it was complex balanced in the previous example, and the only new feature of this example is the reaction introducing single amoebae—and its source and target are complexes of degree 0 and 1. So, the condition of the theorem does hold.

So, the condition in the theorem is necessary but not sufficient. However, it is easy to check, and we can use it to show that in many cases, coherent states must cease to be coherent.


The Large-Number Limit for Reaction Networks (Part 1)

1 July, 2013

Waiting for the other shoe to drop.

This is a figure of speech that means ‘waiting for the inevitable consequence of what’s come so far’. Do you know where it comes from? You have to imagine yourself in an apartment on the floor below someone who is taking off their shoes. When you hear one shoe hit the floor, you know the other is coming.

There’s even an old comedy routine about this:

A guest who checked into an inn one night was warned to be quiet because the guest in the room next to his was a light sleeper. As he undressed for bed, he dropped one shoe, which, sure enough, awakened the other guest. He managed to get the other shoe off in silence, and got into bed. An hour later, he heard a pounding on the wall and a shout: “When are you going to drop the other shoe?”

When we were working on math together, James Dolan liked to say “the other shoe has dropped” whenever an inevitable consequence of some previous realization became clear. There’s also the mostly British phrase the penny has dropped. You say this when someone finally realizes the situation they’re in.

But sometimes one realization comes after another, in a long sequence. Then it feels like it’s raining shoes!

I guess that’s a rather strained metaphor. Perhaps falling like dominoes is better for these long chains of realizations.

This is how I’ve felt in my recent research on the interplay between quantum mechanics, stochastic mechanics, statistical mechanics and extremal principles like the principle of least action. The basics of these subjects should be completely figured out by now, but they aren’t—and a lot of what’s known, nobody bothered to tell most of us.

So, I was surprised to rediscover that the Maxwell relations in thermodynamics are formally identical to Hamilton’s equations in classical mechanics… though in retrospect it’s obvious. Thermodynamics obeys the principle of maximum entropy, while classical mechanics obeys the principle of least action. Wherever there’s an extremal principle, symplectic geometry, and equations like Hamilton’s equations, are sure to follow.

I was surprised to discover (or maybe rediscover, I’m not sure yet) that just as statistical mechanics is governed by the principle of maximum entropy, quantum mechanics is governed by a principle of maximum ‘quantropy’. The analogy between statistical mechanics and quantum mechanics has been known at least since Feynman and Schwinger. But this basic aspect was never explained to me!

I was also surprised to rediscover that simply by replacing amplitudes by probabilities in the formalism of quantum field theory, we get a nice formalism for studying stochastic many-body systems. This formalism happens to perfectly match the ‘stochastic Petri nets’ and ‘reaction networks’ already used in subjects from population biology to epidemiology to chemistry. But now we can systematically borrow tools from quantum field theory! All the tricks that particle physicists like—annihilation and creation operators, coherent states and so on—can be applied to problems like the battle between the AIDS virus and human white blood cells.

And, perhaps because I’m a bit slow on the uptake, I was surprised when yet another shoe came crashing to the floor the other day.

Because quantum field theory has, at least formally, a nice limit where Planck’s constant goes to zero, the same is true for stochastic Petri nets and reaction networks!

In quantum field theory, we call this the ‘classical limit’. For example, if you have a really huge number of photons all in the same state, quantum effects sometimes become negligible, and we can describe them using the classical equations describing electromagnetism: the classical Maxwell equations. In stochastic situations, it makes more sense to call this limit the ‘large-number limit': the main point is that there are lots of particles in each state.

In quantum mechanics, different observables don’t commute, so the so-called commutator matters a lot:

[A,B] = AB - BA

These commutators tend to be proportional to Planck’s constant. So in the limit where Planck’s constant \hbar goes to zero, observables commute… but commutators continue to have a ghostly existence, in the form of Poisson brackets:

\displaystyle{ \{A,B\} = \lim_{\hbar \to 0} \; \frac{1}{\hbar} [A,B] }

Poisson brackets are a key part of symplectic geometry—the geometry of classical mechanics. So, this sort of geometry naturally shows up in the study of stochastic Petri nets!

Let me sketch how it works. I’ll start with a section reviewing stuff you should already know if you’ve been following the network theory series.

The stochastic Fock space

Suppose we have some finite set S. We call its elements species, since we think of them as different kinds of things—e.g., kinds of chemicals, or kinds of organisms.

To describe the probability of having any number of things of each kind, we need the stochastic Fock space. This is the space of real formal power series in a bunch of variables, one for each element of S. It won’t hurt to simply say

S = \{1, \dots, k \}

Then the stochastic Fock space is

\mathbb{R}[[z_1, \dots, z_k ]]

this being math jargon for the space of formal power series with real coefficients in some variables z_1, \dots, z_k, one for each element of S.

We write

n = (n_1, \dots, n_k) \in \mathbb{N}^S

and use this abbreviation:

z^n = z_1^{n_1} \cdots z_k^{n_k}

We use z^n to describe a state where we have n_1 things of the first species, n_2 of the second species, and so on.

More generally, a stochastic state is an element \Psi of the stochastic Fock space with

\displaystyle{ \Psi = \sum_{n \in \mathbb{N}^k} \psi_n \, z^n }

where

\psi_n \ge 0

and

\displaystyle{ \sum_{n  \in \mathbb{N}^k} \psi_n = 1 }

We use \Psi to describe a state where \psi_n is the probability of having n_1 things of the first species, n_2 of the second species, and so on.

The stochastic Fock space has some important operators on it: the annihilation operators given by

\displaystyle{ a_i \Psi = \frac{\partial}{\partial z_i} \Psi }

and the creation operators given by

\displaystyle{ a_i^\dagger \Psi = z_i \Psi }

From these we can define the number operators:

N_i = a_i^\dagger a_i

Part of the point is that

N_i z^n = n_i z^n

This says the stochastic state z^n is an eigenstate of all the number operators, with eigenvalues saying how many things there are of each species.

The annihilation, creation, and number operators obey some famous commutation relations, which are easy to check for yourself:

[a_i, a_j] = 0

[a_i^\dagger, a_j^\dagger] = 0

[a_i, a_j^\dagger] = \delta_{i j}

[N_i, N_j ] = 0

[N_i , a_j^\dagger] = \delta_{i j} a_j^\dagger

[N_i , a_j] = - \delta_{i j} a_j

The last two have easy interpretations. The first of these two implies

N_i a_i^\dagger \Psi = a_i^\dagger (N_i + 1) \Psi

This says that if we start in some state \Psi, create a thing of type i, and then count the things of that type, we get one more than if we counted the number of things before creating one. Similarly,

N_i a_i \Psi = a_i (N_i - 1) \Psi

says that if we annihilate a thing of type i and then count the things of that type, we get one less than if we counted the number of things before annihilating one.
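
If ‘easy to check for yourself’ sounds like an invitation to a computer algebra system, here’s a sketch in Python using sympy that verifies the single-species relations on a sample monomial:

import sympy as sp

z = sp.symbols('z')
psi = z**7                           # a sample basis state

a = lambda f: sp.diff(f, z)          # annihilation: d/dz
a_dag = lambda f: z * f              # creation: multiply by z
N = lambda f: a_dag(a(f))            # number operator

# [a, a_dag] = 1:
print(sp.expand(a(a_dag(psi)) - a_dag(a(psi))))              # z**7, i.e. psi
# [N, a_dag] = a_dag:
print(sp.expand(N(a_dag(psi)) - a_dag(N(psi)) - a_dag(psi))) # 0
# [N, a] = -a:
print(sp.expand(N(a(psi)) - a(N(psi)) + a(psi)))             # 0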

Introducing Planck’s constant

Now let’s introduce an extra parameter into this setup. To indicate the connection to quantum physics, I’ll call it \hbar, which is the usual symbol for Planck’s constant. However, I want to emphasize that we’re not doing quantum physics here! We’ll see that the limit where \hbar \to 0 is very interesting, but it will correspond to a limit where there are many things of each kind.

We’ll start by defining

A_i = \hbar \, a_i

and

C_i = a_i^\dagger

Here A stands for ‘annihilate’ and C stands for ‘create’. Think of A as a rescaled annihilation operator. Using this we can define a rescaled number operator:

\widetilde{N}_i = C_i A_i

So, we have

\widetilde{N}_i = \hbar N_i

and this explains the meaning of the parameter \hbar. The idea is that instead of counting things one at a time, we count them in bunches of size 1/\hbar.

For example, suppose \hbar = 1/12. Then we’re counting things in dozens! If we have a state \Psi with

N_i \Psi = 36 \Psi

then there are 36 things of the ith kind. But this implies

\widetilde{N}_i \Psi = 3 \Psi

so there are 3 dozen things of the ith kind.

Chemists don’t count in dozens; they count things in big bunches called moles. A mole is approximately the number of atoms in 12 grams of carbon: Avogadro’s number, 6.02 × 10²³. When you count things by moles, you’re taking \hbar to be 1.66 × 10⁻²⁴, the reciprocal of Avogadro’s number.

So, while in quantum mechanics Planck’s constant is ‘the quantum of action’, a unit of action, here it’s ‘the quantum of quantity': the amount that corresponds to one thing.

We can easily work out the commutation relations of our new rescaled operators:

[A_i, A_j] = 0

[C_i, C_j] = 0

[A_i, C_j] = \hbar \, \delta_{i j}

[\widetilde{N}_i, \widetilde{N}_j ] = 0

[\widetilde{N}_i , C_j] = \hbar \,  \delta_{i j} C_j

[\widetilde{N}_i , A_j] = - \hbar \, \delta_{i j} A_j

These are just what you see in quantum mechanics! The commutators are all proportional to \hbar.

Again, we can understand what these relations mean if we think a bit. For example, the commutation relation for \widetilde{N}_i and C_i says

\widetilde{N}_i C_i \Psi = C_i (\widetilde{N}_i + \hbar) \Psi

This says that if we start in some state \Psi, create a thing of type i, and then count the things of that type, we get \hbar more than if we counted the number of things before creating one. This is because we are counting things not one at a time, but in bunches of size 1/\hbar.

You may be wondering why I defined the rescaled annihilation operator to be \hbar times the original annihilation operator:

A_i = \hbar \, a_i

but left the creation operator unchanged:

C_i = a_i^\dagger

I’m wondering that too! I’m not sure I’m doing things the best way yet. I’ve also tried another more symmetrical scheme, taking A_i = \sqrt{\hbar} \, a_i and C_i = \sqrt{\hbar} \, a_i^\dagger. This gives the same commutation relations, but certain other formulas become more unpleasant. I’ll explain that some other day.

Next, we can take the limit as \hbar \to 0 and define Poisson brackets of operators by

\displaystyle{ \{A,B\} = \lim_{\hbar \to 0} \; \frac{1}{\hbar} [A,B] }

To make this rigorous it’s best to proceed algebraically. For this we treat \hbar as a formal variable rather than a specific number. So, our number system becomes \mathbb{R}[\hbar], the algebra of polynomials in \hbar. We define the Weyl algebra to be the algebra over \mathbb{R}[\hbar] generated by elements A_i and C_i obeying

[A_i, A_j] = 0

[C_i, C_j] = 0

[A_i, C_j] = \hbar \, \delta_{i j}

We can set \hbar = 0 in this formalism; then the Weyl algebra reduces to the algebra of polynomials in the variables A_i and C_i. This algebra is commutative! But we can define a Poisson bracket on this algebra by

\displaystyle{ \{A,B\} = \lim_{\hbar \to 0} \; \frac{1}{\hbar} [A,B] }

It takes a bit of work to explain to algebraists exactly what’s going on in this formula, because it involves an interplay between the algebra of polynomials in A_i and C_i, which is commutative, and the Weyl algebra, which is not. I’ll be glad to explain the details if you want. But if you’re a physicist, you can just follow your nose and figure out what the formula gives. For example:

\begin{array}{ccl}   \{A_i, C_j\} &=& \displaystyle{ \lim_{\hbar \to 0} \; \frac{1}{\hbar} [A_i, C_j] } \\  \\  &=& \displaystyle{ \lim_{\hbar \to 0} \; \frac{1}{\hbar} \, \hbar \, \delta_{i j} }  \\  \\  &=& \delta_{i j} \end{array}

Similarly, we have:

\{ A_i, A_j \} = 0

\{ C_i, C_j \} = 0

\{ A_i, C_j \} = \delta_{i j}

\{ \widetilde{N}_i, \widetilde{N}_j \}  = 0

\{ \widetilde{N}_i , C_j \} = \delta_{i j} C_j

\{ \widetilde{N}_i , A_j \} = - \delta_{i j} A_j

I should probably use different symbols for A_i, C_i and \widetilde{N}_i after we’ve set \hbar = 0, since they’re really different now, but I don’t have the patience to make up more names for things!

Now, we can think of A_i and C_i as coordinate functions on a 2k-dimensional vector space, and all the polynomials in A_i and C_i as functions on this space. This space is what physicists would call a ‘phase space’: they use this kind of space to describe the position and momentum of a particle, though here we are using it in a different way. Mathematicians would call it a ‘symplectic vector space’, because it’s equipped with a special structure, called a symplectic structure, that lets us define Poisson brackets of smooth functions on this space. We won’t need to get into that now, but it’s important—and it makes me happy to see it here.

More

There’s a lot more to do, but not today. My main goal is to understand, in a really elegant way, how the master equation for a stochastic Petri net reduces to the rate equation in the large-number limit. What we’ve done so far is start thinking of this as a \hbar \to 0 limit. This should let us borrow ideas about classical limits in quantum mechanics, and apply them to stochastic mechanics.

Stay tuned!


Quantum Techniques for Reaction Networks

11 June, 2013

Fans of the network theory series might like to look at this paper:

• John Baez, Quantum techniques for reaction networks.

and I would certainly appreciate comments and corrections.

This paper tackles a basic question we never got around to discussing: how the probabilistic description of a system where bunches of things randomly interact and turn into other bunches of things can reduce to a deterministic description in the limit where there are lots of things!

Mathematically, such systems are given by ‘stochastic Petri nets’, or if you prefer, ‘stochastic reaction networks’. These are just two equivalent pictures of the same thing. For example, we could describe some chemical reactions using this Petri net:

but chemists would use this reaction network:

C + O2 → CO2
CO2 + NaOH → NaHCO3
NaHCO3 + HCl → H2O + NaCl + CO2

Making either of them ‘stochastic’ merely means that we specify a ‘rate constant’ for each reaction, saying how probable it is.

For any such system we get a ‘master equation’ describing how the probability of having any number of things of each kind changes with time. In the class I taught on this last quarter, the students and I figured out how to derive from this an equation saying how the expected number of things of each kind changes with time. Later I figured out a much slicker argument… but either way, we get this result:

Theorem. For any stochastic reaction network and any stochastic state \Psi(t) evolving in time according to the master equation, then

\displaystyle{ \frac{d}{dt} \langle N \Psi(t) \rangle } =  \displaystyle{\sum_{\tau \in T}} \, r(\tau) \,  (t(\tau) - s(\tau)) \;  \left\langle N^{\underline{s(\tau)}}\, \Psi(t) \right\rangle

assuming the derivative exists.

Of course this will make no sense yet if you haven’t been following the network theory series! But I explain all the notation in the paper, so don’t be scared. The main point is that \langle N \Psi(t) \rangle is a vector listing the expected number of things of each kind at time t. The equation above says how this changes with time… but it closely resembles the ‘rate equation’, which describes the evolution of chemical systems in a deterministic way.
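
To see the theorem in action, here’s a sketch in Python: a crude Gillespie-style simulation of a one-species network (fission A → 2A at rate 2, competition 2A → A at rate 1—my choice of example), with the falling powers n^{\underline{s}} appearing as the stochastic propensities. The sample mean tracks, but does not exactly obey, the rate equation:

import numpy as np

rng = np.random.default_rng(0)

# Reactions as (rate, s, t): fission 1 -> 2 at rate 2, competition 2 -> 1 at rate 1.
reactions = [(2.0, 1, 2), (1.0, 2, 1)]

def gillespie(n0, t_max):
    # One sample path of the master equation, run until time t_max.
    n, t = n0, 0.0
    while True:
        # Propensities use falling powers: n for s = 1, n(n-1) for s = 2.
        props = [r * np.prod([n - i for i in range(s)]) for r, s, _ in reactions]
        total = sum(props)
        if total == 0:
            return n
        t += rng.exponential(1 / total)
        if t > t_max:
            return n
        j = rng.choice(len(reactions), p=np.array(props) / total)
        n += reactions[j][2] - reactions[j][1]

t_max, runs = 2.0, 2000
mean_n = np.mean([gillespie(10, t_max) for _ in range(runs)])

# The rate equation dx/dt = 2x - x^2, integrated by small Euler steps:
x, dt = 10.0, 1e-4
for _ in range(int(t_max / dt)):
    x += dt * (2 * x - x ** 2)

print(mean_n, x)   # close, but not equal: the state is not exactly coherent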

And indeed, the next big theorem says that the master equation actually implies the rate equation when the probability of having various numbers of things of each kind is given by a product of independent Poisson distributions. In this case \Psi(t) is what people in quantum physics call a ‘coherent state’. So:

Theorem. Given any stochastic reaction network, let \Psi(t) be a mixed state evolving in time according to the master equation. If \Psi(t) is a coherent state when t = t_0, then \langle N \Psi(t) \rangle obeys the rate equation when t = t_0.

In most cases, this only applies exactly at one moment of time: later \Psi(t) will cease to be a coherent state. Then we must resort to the previous theorem to see how the expected number of things of each kind changes with time.

But sometimes our state \Psi(t) will stay coherent forever! For one case where this happens, see the companion paper, which I blogged about a little while ago:

• John Baez and Brendan Fong, Quantum techniques for studying equilibrium in reaction networks.

We wrote this first, but logically it comes after the one I just finished now!

All this material will get folded into the book I’m writing with Jacob Biamonte. There are just a few remaining loose ends that need to be tied up.

