A Course on Quantum Techniques for Stochastic Mechanics

Jacob Biamonte and I have come out with a draft of a book!

A course on quantum techniques for stochastic mechanics.

It’s based on the first 24 network theory posts on this blog. It owes a lot to everyone here, and the acknowledgements just scratch the surface of that indebtedness. At some later time I’d like to go through the posts and find the top twenty people who need to be thanked. But I’m leaving Singapore on Friday, going back to California to teach at U.C. Riverside, so I’ve been rushing to get something out before then.

If you see typos or other problems, please let us know!
We’ve reorganized the original blog articles and polished them up a bit, but we plan to do more before publishing these notes as a book.

I’m looking forward to teaching a seminar called Mathematics of the Environment when I get back to U.C. Riverside, and with luck I’ll put some notes from that on the blog here. I will also be trying to round up a team of grad students to work on network theory.

The next big topics in the network theory series will be electrical circuits and Bayesian networks. I’m beginning to see how these fit together with stochastic Petri nets in a unified framework, but I’ll need to talk and write about it to fill in all the details.

You can get a sense of what this course is about by reading this:

Foreword

This course is about a curious relation between two ways of describing situations that change randomly with the passage of time. The old way is probability theory and the new way is quantum theory.

Quantum theory is based, not on probabilities, but on amplitudes. We can use amplitudes to compute probabilities. However, the relation between them is nonlinear: we take the absolute value of an amplitude and square it to get a probability. It thus seems odd to treat amplitudes as directly analogous to probabilities. Nonetheless, if we do this, some good things happen. In particular, we can take techniques devised in quantum theory and apply them to probability theory. This gives new insights into old problems.

There is, in fact, a subject eager to be born, which is mathematically very much like quantum mechanics, but which features probabilities in the same equations where quantum mechanics features amplitudes. We call this subject stochastic mechanics.
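
Here is a quick way to see the contrast, stated loosely (the precise setup comes later in the course). In quantum mechanics a state assigns to each configuration i a complex amplitude \psi_i, and these obey

\displaystyle{ \sum_i |\psi_i|^2 = 1 }

In stochastic mechanics a state assigns to each configuration a probability \psi_i \ge 0, and these obey

\displaystyle{ \sum_i \psi_i = 1 }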

Plan of the course

In Section 1 we introduce the basic object of study here: a ‘stochastic Petri net’. A stochastic Petri net describes in a very general way how collections of things of different kinds can randomly interact and turn into other things. If we consider large numbers of things, we obtain a simplified deterministic model called the ‘rate equation’, discussed in Section 2. More fundamental, however, is the ‘master equation’, introduced in Section 3. This describes how the probability of having various numbers of things of various kinds changes with time.
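
To fix ideas, here is roughly what these two equations look like; this is only a schematic preview, and the precise definitions appear in Sections 2 and 3. If x_i is the expected number (or concentration) of things of the ith kind, the rate equation has the form

\displaystyle{ \frac{d x_i}{d t} = \sum_\tau r(\tau) \, \big( n_i(\tau) - m_i(\tau) \big) \, x^{m(\tau)} }

where the sum runs over the transitions of the Petri net, m(\tau) and n(\tau) list the inputs and outputs of the transition \tau, r(\tau) is its rate constant, and x^{m(\tau)} is a product of powers of the x_i. The master equation instead concerns a state \psi listing the probabilities of having various numbers of things of each kind, and it has the form

\displaystyle{ \frac{d \psi}{d t} = H \psi }

for a suitable operator H.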

In Section 4 we consider a very simple stochastic Petri net and notice that in this case, we can solve the master equation using techniques taken from quantum mechanics. In Section 5 we sketch how to generalize this: for any stochastic Petri net, we can write down an operator called a ‘Hamiltonian’ built from ‘creation and annihilation operators’, which describes the rate of change of the probability of having various numbers of things. In Section 6 we illustrate this with an example taken from population biology. In this example the rate equation is just the logistic equation, one of the simplest models in population biology. The master equation describes reproduction and competition of organisms in a stochastic way.
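
For example, sketching Section 6 a bit ahead of time: if we have one species X, with reproduction X → X + X occurring at rate \alpha and competition X + X → X at rate \beta, the rate equation is the logistic equation

\displaystyle{ \frac{d x}{d t} = \alpha x - \beta x^2 }

while the Hamiltonian appearing in the master equation is, in terms of the creation operator a^\dagger and annihilation operator a,

\displaystyle{ H = \alpha \, ({a^\dagger}^2 a - a^\dagger a) + \beta \, (a^\dagger a^2 - {a^\dagger}^2 a^2) }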

In Section 7 we sketch how time evolution as described by the master equation can be written as a sum over Feynman diagrams. We do not develop this in detail, but illustrate it with a predator–prey model from population biology. In the process, we give a slicker way of writing down the Hamiltonian for any stochastic Petri net.
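
The 'slicker way' amounts, roughly, to a single formula: for a stochastic Petri net whose transitions \tau have inputs m(\tau), outputs n(\tau) and rate constants r(\tau), the Hamiltonian is

\displaystyle{ H = \sum_\tau r(\tau) \left( {a^\dagger}^{n(\tau)} - {a^\dagger}^{m(\tau)} \right) a^{m(\tau)} }

using multi-index notation for the powers of the creation and annihilation operators.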

In Section 8 we enter into a main theme of this course: the study of equilibrium solutions of the master and rate equations. We present the Anderson–Craciun–Kurtz theorem, which shows how to get equilibrium solutions of the master equation from equilibrium solutions of the rate equation, at least if a certain technical condition holds. Brendan Fong has translated Anderson, Craciun and Kurtz’s original proof into the language of annihilation and creation operators, and we give Fong’s proof here. In this language, it turns out that the equilibrium solutions are mathematically just like ‘coherent states’ in quantum mechanics.
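
In rough terms, the upshot is this: if c is an equilibrium solution of the rate equation obeying the technical condition (called 'complex balance'), then the state in which the numbers of things of the various kinds are independent and Poisson distributed with means c_i, so that the probability of having n things of the ith kind is

\displaystyle{ e^{-c_i} \, \frac{c_i^n}{n!} }

is an equilibrium solution of the master equation. These product-of-Poissons states are the stochastic analogues of coherent states.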

In Section 9 we give an example of the Anderson–Craciun–Kurtz theorem coming from a simple reversible reaction in chemistry. This example leads to a puzzle that is resolved by discovering that the presence of ‘conserved quantities’—quantities that do not change with time—lets us construct many equilibrium solutions of the rate equation other than those given by the Anderson–Craciun–Kurtz theorem.
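
A baby example, different from the one in Section 9 but showing the same phenomenon: for a reversible reaction A ⇌ B with forward rate \alpha and backward rate \beta, the rate equation is

\displaystyle{ \frac{d x_A}{d t} = \beta x_B - \alpha x_A , \qquad \frac{d x_B}{d t} = \alpha x_A - \beta x_B }

so x_A + x_B is conserved, and every point with \alpha x_A = \beta x_B is an equilibrium: the conserved quantity gives a whole one-parameter family of equilibria.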

Conserved quantities are very important in quantum mechanics, and they are related to symmetries by a result called Noether’s theorem. In Section 10 we describe a version of Noether’s theorem for stochastic mechanics, which we proved with the help of Brendan Fong. This applies, not just to systems described by stochastic Petri nets, but to a much more general class of processes called ‘Markov processes’. In the analogy to quantum mechanics, Markov processes are analogous to arbitrary quantum systems whose time evolution is given by a Hamiltonian. Stochastic Petri nets are analogous to a special case of these: the case where the Hamiltonian is built from annihilation and creation operators. In Section 11 we state the analogy between quantum mechanics and stochastic mechanics more precisely, and with more attention to mathematical rigor. This allows us to set the quantum and stochastic versions of Noether’s theorem side by side and compare them in Section 12.
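
Stated loosely (the careful version is in Section 10): for a Markov process with Hamiltonian H and an observable O, the stochastic Noether theorem says that

\displaystyle{ [O, H] = 0 \iff \frac{d}{d t} \langle O \rangle = 0 \; \textrm{ and } \; \frac{d}{d t} \langle O^2 \rangle = 0 \; \textrm{ for all states} }

In other words, conservation of the expected value of O alone is not enough to give a symmetry in the stochastic world: one also needs conservation of the expected value of O^2. In the quantum case, conservation of the expected value of O suffices.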

In Section 13 we take a break from the heavy abstractions and look at a fun example from chemistry, in which a highly symmetrical molecule randomly hops between states. These states can be seen as vertices of a graph, with the transitions as edges. In this particular example we get a famous graph with 20 vertices and 30 edges, called the ‘Desargues graph’.

In Section 14 we note that the Hamiltonian in this example is a ‘graph Laplacian’, and, following a computation done by Greg Egan, we work out the eigenvectors and eigenvalues of this Hamiltonian explicitly. One reason graph Laplacians are interesting is that we can use them as Hamiltonians to describe time evolution in both stochastic and quantum mechanics. Operators with this special property are called ‘Dirichlet operators’, and we discuss them in Section 15. As we explain, they also describe electrical circuits made of resistors. Thus, in a peculiar way, the intersection of quantum mechanics and stochastic mechanics is the study of electrical circuits made of resistors!
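
If you want to check Egan’s computation numerically, here is a little script, not from the book, that uses the networkx and numpy libraries to build the Desargues graph and find the eigenvalues of its graph Laplacian; the Hamiltonian of Section 14 is this Laplacian up to a sign convention.

    import networkx as nx
    import numpy as np

    # Build the Desargues graph: 20 vertices, 30 edges.
    G = nx.desargues_graph()

    # Graph Laplacian L = D - A (degree matrix minus adjacency matrix).
    L = nx.laplacian_matrix(G).toarray().astype(float)

    # Eigenvalues of the Laplacian; the Hamiltonian's eigenvalues are
    # these up to an overall sign, depending on the convention used.
    print(np.round(np.linalg.eigvalsh(L), 6))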

In Section 16, we study the eigenvectors and eigenvalues of an arbitrary Dirichlet operator. We introduce a famous result called the Perron–Frobenius theorem for this purpose. However, we also see that the Perron–Frobenius theorem is important for understanding the equilibria of Markov processes. This becomes important later when we prove the ‘deficiency zero theorem’.

We introduce the deficiency zero theorem in Section 17. This result, proved by the chemists Feinberg, Horn and Jackson, gives equilibrium solutions for the rate equation for a large class of stochastic Petri nets. Moreover, these equilibria obey the extra condition that lets us apply the Anderson–Craciun–Kurtz theorem and obtain equilibrium solutions of the master equation as well. However, the deficiency zero theorem is best stated, not in terms of stochastic Petri nets, but in terms of another, equivalent, formalism: ‘chemical reaction networks’. So, we explain chemical reaction networks here, and use them heavily throughout the rest of the course. However, because they are applicable to such a large range of problems, we call them simply ‘reaction networks’. Like stochastic Petri nets, they describe how collections of things of different kinds randomly interact and turn into other things.

In Section 18 we consider a simple example of the deficiency zero theorem taken from chemistry: a diatomic gas. In Section 19 we apply the Anderson–Craciun–Kurtz theorem to the same example.
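
Concretely, as a sketch of the setup (with the rate constants named here just for illustration): take atoms A and molecules A_2, with the molecule splitting, A_2 → A + A, at rate \alpha, and two atoms recombining, A + A → A_2, at rate \beta. The rate equation is then

\displaystyle{ \frac{d x_A}{d t} = 2 \alpha x_{A_2} - 2 \beta x_A^2 , \qquad \frac{d x_{A_2}}{d t} = \beta x_A^2 - \alpha x_{A_2} }

and its equilibria are exactly the points with \beta x_A^2 = \alpha x_{A_2}.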

In Section 20 we begin the final phase of the course: proving the deficiency zero theorem, or at least a portion of it. In this section we discuss the concept of ‘deficiency’, which had been introduced before, but not really explained: the definition that makes the deficiency easy to compute is not the one that says what this concept really means. In Section 21 we show how to rewrite the rate equation of a stochastic Petri net—or equivalently, of a reaction network—in terms of a Markov process. This is surprising because the rate equation is nonlinear, while the equation describing a Markov process is linear in the probabilities involved. The trick is to use a nonlinear operation called ‘matrix exponentiation’. In Section 22 we study equilibria for Markov processes. Then, finally, in Section 23, we use these equilibria to obtain equilibrium solutions of the rate equation, completing our treatment of the deficiency zero theorem.
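
Two ingredients of this story can be previewed compactly; the notation is set up properly in the sections themselves. The easy-to-compute definition of the deficiency of a reaction network is

\displaystyle{ \textrm{deficiency} = (\textrm{number of complexes}) - (\textrm{number of connected components}) - \dim(\textrm{stoichiometric subspace}) }

and ‘matrix exponentiation’ raises a vector of concentrations to a matrix power one component at a time: in one common convention,

\displaystyle{ (x^Y)_i = \prod_j x_j^{Y_{j i}} }

which is nonlinear in x even though Y acts like a matrix.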

19 Responses to A Course on Quantum Techniques for Stochastic Mechanics

  1. Qiaochu Yuan says:

    Did you mean “foreword”?

    • Mark Meckes says:

      I wondered if that was a deliberate play on words.

    • John Baez says:

      Thanks. I realized that mistake today while walking around… I started thinking about ‘forward’, ‘foreward’ and ‘foreword’, and asked my wife to help me disentangle them. How embarrassing! Maybe I should have just yelled “Fore!”

  2. John Baez says:

    Over on Google+, Blake Stacey wrote:

    Typo on p. 150: “or in a superposition of at least two eigensvectors”

    p. 151: “good both for stochastic mechanics and stochastic mechanics.”

    p. 152: “The probability of finding a state in the $i$th configuration is defined to be $|\psi(x)|^2$.”

    state -> system

    \psi(x) -> \psi_i

    p. 26: “In Section 2 we’ll get to the really interesting part, where ideas from quantum theory enter the game!”

    Section 2 -> Section 3

    p. 28: “—that is, random— about a stochastic Petri net.”

    random— about -> random—about

    p. 47: “It’s like if someone says a party is `formal’, so need to wear a white tie:”

    so need -> so you need

    p. 54: “This sort of balance is necessary for $H$ to be a sensible Hamiltonian in this sort of stochastic theory `infinitesimal stochastic operator’, to be precise).”

    theory `infinitesimal -> theory (an `infinitesimal

  3. Jamie Vicary says:

    Great to see all this stuff in one place!

    I love the in-line links to Wikipedia.

  4. Great book, and a really valuable resource! I am an enthusiast of your way of combining simple explanations, crystal-clear examples, and technical material that demonstrates the greater generality of the method.

    I have a suggestion for a chapter to add at the end (or for future extensions of the book): what is missing, from the perspective of a physicist working in the statistical mechanics of mesoscopic/open systems, is a discussion (even a short one) of the detailed balance condition, which is very special and has many physical consequences (e.g. a system satisfying detailed balance cannot transform heat into work, so that most living processes must violate it; physicists often need a fast criterion to verify whether detailed balance is satisfied or not). I have not gone thoroughly into it, but I suspect that such a condition is equivalent (?) to the Hamiltonian being self-adjoint: so this should fit the discussion about Dirichlet operators, shouldn’t it? All the best, Andrea

    • John Baez says:

      That’s an interesting idea. I plan a few more rounds of reorganization: the first was to move the Perron–Frobenius theorem forward, but the second will be to move material introducing Markov processes to a section or two of their own, right after the ‘quantum versus stochastic’ section and before Noether’s theorem for Markov processes. There’s material that belongs there, like the recipe for getting Markov processes from ‘graphs with rates’, which currently appears very near the end, as part of proving the deficiency zero theorem. And another thing that might belong there is the concept of detailed balance!

      I don’t think this concept is equivalent to the condition of the Hamiltonian being self-adjoint, but I believe it’s implied by it. Say you have a particle that can be in two states, 1 and 2, and the rate for the process 1 → 2 is 10, while the rate for the process 2 → 1 is 1. Then the Hamiltonian isn’t self-adjoint, because these numbers aren’t equal. However, in equilibrium, the particle will be 10 times more likely to be in state 2 than in state 1. So, there will be detailed balance.

      Detailed balance is a property of a stochastic state \psi as well as a Hamiltonian H: it says

      H_{ij} \psi_j = H_{ji} \psi_i

      where there’s no summation on the repeated index.

      Of course, it’s a property of an infinitesimal stochastic Hamiltonian H that there exists a state \psi for which detailed balance holds. Such Hamiltonians are called reversible. I don’t know a quick way to check whether a Hamiltonian is reversible.

      (In our course we had a concept of ‘weak reversibility’, and there’s an obvious ‘strong’ version of this concept, but that’s not equivalent to the notion we’re talking about now.)

      Thanks for raising this issue. You’re making me think more about some things I’ve been wanting to think about.

      For example… \psi obeys detailed balance for a Hamiltonian H iff H is self-adjoint on the L^2 space defined not with respect to the usual counting measure on our set of states, but with respect to that measure multiplied by \psi. So H is a Dirichlet operator iff \psi = 1 obeys detailed balance for H.
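
      Just to make the two-state example above completely concrete, here is a tiny numerical check (my own sketch, with H_{ij} taken to be the rate for hopping from state j to state i when i is not j, and with each column of H summing to zero):

        import numpy as np

        # Rates: 10 for the process 1 -> 2, and 1 for the process 2 -> 1.
        # H[i, j] is the rate j -> i off the diagonal; columns sum to zero.
        H = np.array([[-10.0, 1.0],
                      [ 10.0, -1.0]])

        # The equilibrium state spans the kernel of H; normalize it to sum to 1.
        vals, vecs = np.linalg.eig(H)
        psi = np.real(vecs[:, np.argmin(np.abs(vals))])
        psi = psi / psi.sum()
        print(psi)  # roughly [1/11, 10/11]: state 2 is 10 times more likely

        # Detailed balance: H[i, j] * psi[j] equals H[j, i] * psi[i] for all i, j.
        print(np.allclose(H * psi, (H * psi).T))  # True

      So detailed balance holds in equilibrium even though H is not symmetric, exactly as claimed above.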

      • andreo says:

        You are absolutely right, but with some further (and hopefully interesting) specifications. I assume that results valid for a continuous space of states are also valid for the discrete space you are considering; this should be checked, but it seems to be the case to me. Therefore let me replace your master equation with a Fokker-Planck equation: d\psi/dt = H \psi with H \psi = \partial_x J, where J is the probability current (J and x are of course vectors if we are in a multi-dimensional space).
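
        To fix notation, in the simplest one-dimensional drift-diffusion case (with the sign of J depending on convention) this reads

        \displaystyle{ \frac{\partial \psi}{\partial t} = -\frac{\partial}{\partial x} \big( A(x) \psi \big) + \frac{\partial^2}{\partial x^2} \big( D(x) \psi \big) , \qquad J = A(x) \psi - \frac{\partial}{\partial x} \big( D(x) \psi \big) }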

        Then you can see that H is self-adjoint with respect to the stationary solution \psi_{st} (as you wrote) if and only if the stationary current is zero, J_{st} = 0. The subtle thing is that this condition is stronger than detailed balance. Indeed one can verify that the total current can be decomposed into two parts, one so-called “reversible” and the other “irreversible”: J = J^{rev} + J^{irr}. This decomposition is based upon the parities, with respect to time reversal, of the variables (the components of the vector x): indeed one may have a master equation describing odd and/or even variables, e.g. velocities and positions respectively. The detailed balance condition is equivalent to J_{st}^{irr} = 0, which is weaker than the previous condition on the total current. When detailed balance is satisfied, the “reversible” current can still be non-zero, and so can the total current. Imagine for instance a pendulum with non-negligible inertia, so that it is described by position and momentum q, p: its master (Fokker-Planck) equation reduces to the Liouville equation because there is no noise. The system is clearly time-reversible: trajectories in phase space have the same “probabilities” (assuming a uniform measure on initial conditions) of occurring in one direction and in the opposite direction, provided that p is changed in sign. Nevertheless, in the phase space q, p there is a non-zero current, a flow of probability circulating in a fixed, non-invertible direction. This current is an example of a “reversible” current, in the sense that it does not add irreversibility to the system. When the phase space is 1-dimensional, the current cannot have a reversible part, and therefore detailed balance and the absence of total current are equivalent.

        In general, however, when detailed balance is satisfied, the total current can be non-zero, and therefore the Hamiltonian is not self-adjoint w.r.t. \psi_{st}. Detailed balance is still very useful because it allows one to easily identify the stationary solution, and consequently a particular decomposition of the Hamiltonian into a hermitian and an anti-hermitian part (always w.r.t. the stationary solution), making it easy to study the whole (non-stationary) problem by expanding in right and left eigenfunctions. If detailed balance is not satisfied, things get much more complicated.

        For the things I’ve summarized here, my reference book is Risken’s “The Fokker-Planck equation: methods …..” There you can find the definitions of things like irreversible currents, etc., which I have not given for brevity. Risken’s book also discusses the connection of the Fokker-Planck operator (your Hamiltonian) with a Schrödinger operator, as well as with creation and annihilation operators. What is not discussed is an extension to discrete systems such as the ones you consider. For those systems, I again suggest (if you’re interested in the connection with time-reversibility, entropy production, etc.) having a look at the review by J. Schnakenberg, Network theory of microscopic and macroscopic behavior of master equation systems, Rev. Mod. Phys. 48, 571–585 (1976).

        • John Baez says:

          Thanks for the long comment! I’ll have to think about this and translate it into the master equation language I’m more familiar with:

          \displaystyle{ \frac{d \psi}{d t} = H \psi }

          This language applies to a continuous space of states as well as for a discrete one, as sketched in Network Theory (Part 12). I’m hoping that your split of the current into ‘reversible’ and ‘irreversible’ parts corresponds to the split of a Dirichlet form into its symmetric and skew-symmetric parts, as sketched in that post and, in vastly more detail, here:

          • Zhi-Ming Ma and Michael Röckner, Introduction to the Theory of (Non-Symmetric) Dirichlet Forms, Springer, Berlin, 1992.

        • andreo says:

          Not exactly. Let me explain a bit more; maybe you will like the stuff :-)

          Remaining in the field of continuous Markov processes, you have two possible decompositions of the Hamiltonian, H = H_{sym} + H_{antisym} and H = H_{irr} + H_{rev}: in general the two parts of the first decomposition (symmetric and skew-symmetric, w.r.t. the stationary solution) and the two parts of the second decomposition (irreversible and reversible, w.r.t. the time-reversal parities of the variables) do not coincide. In the first decomposition the stationary current is entirely contained in the antisymmetric part: H_{antisym} \psi_{st} = \partial_x J_{st}. In the second decomposition the current is separated: H_{irr} \psi_{st} = \partial_x J_{irr} and H_{rev} \psi_{st} = \partial_x J_{rev}, with J_{st} = J_{irr} + J_{rev}.

          Note that to identify the two parts of the first decomposition you have to know the stationary solution, because the symmetrization and antisymmetrization must be done w.r.t. that measure. By contrast, to identify the two parts of the second decomposition you only have to know the time-reversal parities of the variables. (I’m still keeping this decomposition secret, but if you wish I can show it more explicitly, or you can just browse Risken’s book, section 6.4 :-))

          When detailed balance is satisfied, however, the two decompositions exactly coincide, i.e. H_{sym} = H_{irr} and H_{antisym} = H_{rev}. Even better, if detailed balance is satisfied and there is no reversible current (so that H_{rev} = 0), then H is self-adjoint w.r.t. the stationary solution; this happens very often, e.g. in “overdamped” (or Aristotelian) dynamics, where you typically have only even variables (if detailed balance is satisfied, i.e. if there are no non-conservative or time-dependent external forces). Nevertheless, detailed balance alone is already good, because it lets you immediately know what the symmetric and antisymmetric parts of the Hamiltonian are, which makes life much easier (including knowing the stationary solution).

          In principle H could be self-adjoint even if detailed balance is not satisfied, but I’ve never seen a physical example of that. You would need a vanishing total current but non-vanishing irreversible and reversible parts, i.e. J_{rev} = -J_{irr} and |J_{rev}| != 0. That would be strange.

          It would be interesting to find the same structure described above in discrete systems such as those you are considering.

  5. Greg Egan says:

    This looks great!

    A few early typos (I haven’t read the whole thing yet):

    page 3 “electrical circuites”

    page 4 “inclluding”

    page 10 “the main thing we’ll be talking about in future blog entries”

    [Not necessarily a problem, since you’re upfront about the origins. The book of Feynman’s talks on QED has asides about the number of people in the audience … ]

    page 20/21 Five cases of [H^{2}O] where you mean [H_{2}O]

  6. melior says:

    Look forward to reading this! Also want to point you towards some related work by Bob Tucci, e.g. arXiv:1208.1503 and arXiv:quant-ph/0701201.

    • John Baez says:

      Thanks for the references! These will be relevant when I get into discussing Brendan Fong’s work on Bayesian networks. However, I’m more interested in classical Bayesian networks than these quantum ones… I’ve spent a lot of time thinking about quantum mechanics, but these days my main interest is classical probability theory for macroscopic systems… including using math from quantum theory to study classical probability theory in new ways.

  7. rrtucci says:

    Thanks for mentioning my work, Melior. I’ve been following John Baez’s network blog posts for the past year with much interest, and have learned quite a lot. (I was also tickled to learn about a theorem attributed to Tom Kurtz, from whom I once took two very good courses at the Univ. of Wisconsin.)

  8. Kawerau says:

    p. 224: “Tihs”: -> This

    p. 225: “where \psi is a function that the probability of being in each state”: missing verb

    p. 229: “Because our reaction network is weakly reversible, Theorem 58 there exists”

  9. David Tanzer says:

    page 176, equation says:

    d/dt x1 + 2 x2 = 0

    should say:
    d/dt x1 + 2 d/dt x2 = 0

  10. David Tanzer says:

    p. 176. There is an insubstantial problem with this equation:

    d/dt x1 = alpha (2c - x1) - 2 b x1^2

    It shouldn’t have the factor 2 on the c. Same for the integral that follows. Because: when you solve for x2, you get:

    x2 = (c - x1) / 2, and when this gets multiplied by 2 alpha, you get:

    alpha * (c - x1).

    But it doesn’t make a real difference, because c is an unspecified constant.
