> Are you having stocks and flow in mind?

Sort of, but not exactly. The Gillespie algorithm, and the simpler one I described, are random algorithms: they are not deterministic. They are not trying to numerically solve the master equation that describes the change of probabilities with time, they are trying to simulate a *specific randomly chosen history* of a stochastic process.

A while back you asked what someone meant by ‘a realization’ of a Markov process. What I’m calling ‘a specific randomly chosen history’ is the same thing as a ‘realization’. In the Feynman path integral approach to quantum mechanics it would be called a ‘path’ or a ‘history’, but here we are using the same idea for a stochastic process, not a quantum system. We’re trying to randomly pick one of the possible histories of the world, with the correct probability. It’s an example of a Monte Carlo method.

Someone should prove some theorems about how well these algorithms work, and maybe someone has, but I don’t know those theorems.

> Actually, there’s an even simpler and conceptually clearer, but less efficient, algorithm in which you increment the time by a very small amount Δ in each step.

Are you having stocks and flow in mind?

> It’s good to think about numerically simulating a stochastic Petri net or (equivalently) chemical reaction network using the ‘Gillespie algorithm’.

Are there works which show how the first algorithm (i.e. something like stocks and flows, modulo adjustments via taking logarithms and normalizations) and/or the Gillespie algorithm converge in the continuous-time limit to the master equation, at least for examples? In my experience with nonlinear systems, especially over longer time spans, the quality of convergence may depend quite dramatically on the concrete realizations of the flow dependencies.

> Let’s think: why will the mosquito only bite the person once per minute? It must be that the mosquito is buzzing around the room and only encounters the person once per minute, at which point a bite is instantly made…

That’s one possibility. Another possibility is that it encounters the person more often, but only bites some fraction of the time. In chemical reactions I believe the analogous thing can happen: the reaction isn’t guaranteed to happen every time the molecules get close enough.

> Let’s think: why will the mosquito only bite the person once per minute? It must be that the mosquito is buzzing around the room and only *encounters* the person once per minute, at which point a bite is instantly made, and the mosquito then flies off trying to find another person to bite.

If we have 100 people and 1 mosquito in the room, then the master equation predicts 100 sneezes per minute, and 100 bites per minute. That agrees with the intuition just outlined above.

The problem I had with this is that I thought the master equation assumed that the constituents were ‘well-mixed’ — as if being stirred infinitely fast with a big spoon — ensuring that any subset of molecules would in fact come into contact arbitrarily soon, at least for a small length of time. Coupled with the intuition above, this would mean that even with 1 person and 1 mosquito in the room, there would be an infinite number of bites per minute, unless the bites themselves took some finite amount of time, which was the alternative basis for the rate constants that I was considering.

But I have now spent some time reading Gillespie’s “A rigorous derivation of the chemical master equation”, and I see that this isn’t quite right: rather, we should assume that the molecules move around as if they are particles of an ideal gas at some temperature $T$. The well-mixed assumption just means that the system is kept homogeneous over time. And if we change the temperature of the system, the rate constants would change as well.

> It’s not quite clear to me how the rate constant ought to be defined physically, but maybe it’s something like this: the reciprocal of the expected time until the process occurs, when the initial state is given exactly by the input molecules required by the process, and no other process can take place.

That sounds right. It’s good to think about numerically simulating a stochastic Petri net or (equivalently) chemical reaction network using the ‘Gillespie algorithm’. Since Wikipedia explains that algorithm in the case of a reversible reaction

A + B → AB

AB → A + B

I think I won’t explain it.
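
Still, for readers who want to experiment, here is a minimal Python sketch of the Gillespie algorithm for this reversible pair. The rate constants, initial counts, and function name are invented for illustration:

```python
import random

def gillespie(a, b, ab, k_bind, k_unbind, t_max):
    """Simulate A + B -> AB and AB -> A + B up to time t_max,
    returning the final molecule counts."""
    t = 0.0
    while True:
        # Propensities: rate constant times the number of ways the
        # reaction can occur in the current state.
        r_bind = k_bind * a * b
        r_unbind = k_unbind * ab
        total = r_bind + r_unbind
        if total == 0:
            break  # nothing can happen any more
        # Waiting time to the next reaction is exponential with rate `total`.
        t += random.expovariate(total)
        if t > t_max:
            break
        # Choose which reaction fires, weighted by propensity.
        if random.random() * total < r_bind:
            a, b, ab = a - 1, b - 1, ab + 1
        else:
            a, b, ab = a + 1, b + 1, ab - 1
    return a, b, ab

random.seed(1)
print(gillespie(a=50, b=50, ab=0, k_bind=0.01, k_unbind=1.0, t_max=10.0))
```

Each step draws an exponentially distributed waiting time whose rate is the total propensity, then picks one reaction with probability proportional to its propensity.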

Actually, there’s an even simpler and conceptually clearer, but less efficient, algorithm in which you increment the time by a very small amount Δ in each step. Then the chance that a given reaction occurs in that step will be Δ times the rate constant times the number of ways that reaction can occur (some obvious product involving the number of molecules of each type involved in that reaction).

If Δ is small enough, the chance that *two different* reactions occur in a given time step becomes negligible, so we can ignore that possibility, along with the annoying issue of ‘deadlocks’, where doing one reaction makes it impossible to do another reaction, and vice versa, since there’s not enough stuff around to do both.

So, we just keep incrementing the time and randomly either doing one reaction or no reaction. The inefficiency is that you spend most of your time doing nothing. The Gillespie algorithm gets around that problem.
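
The same reversible pair can be simulated with this naive fixed-step scheme. Here is a sketch (again with invented rate constants and counts), in which each step fires at most one reaction:

```python
import random

def fixed_step(a, b, ab, k_bind, k_unbind, dt, n_steps):
    """Naive simulation of A + B -> AB and AB -> A + B: in each tiny
    step of length dt, a reaction fires with probability dt times its
    rate constant times the number of ways it can occur."""
    for _ in range(n_steps):
        p_bind = dt * k_bind * a * b
        p_unbind = dt * k_unbind * ab
        # For small dt, p_bind + p_unbind << 1, so allowing at most
        # one reaction per step is a good approximation.
        u = random.random()
        if u < p_bind:
            a, b, ab = a - 1, b - 1, ab + 1
        elif u < p_bind + p_unbind:
            a, b, ab = a + 1, b + 1, ab - 1
        # On most steps, u falls in neither range and nothing happens.
    return a, b, ab

random.seed(2)
print(fixed_step(a=50, b=50, ab=0, k_bind=0.01, k_unbind=1.0, dt=0.001, n_steps=10_000))
```

The inefficiency is visible in the code: almost every pass through the loop does nothing, which is exactly the problem the Gillespie algorithm avoids by jumping straight to the next event.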

> It’s not quite clear to me how the rate constant ought to be defined physically, but maybe it’s something like this: the reciprocal of the expected time until the process occurs, when the initial state is given exactly by the input molecules required by the process, and no other process can take place.

So, what should the rate constant for spontaneous combustion be, by this definition? Let’s say 1/(1000 years). But for getting bitten by a mosquito, 1 per minute would probably be about right.

> (This assumes a ‘well mixed’ situation, where all the mosquitoes get an equal chance to bite you. If there are $N$ of them, the expected time for the first one to bite goes roughly as $1/N$.)
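
One can check this numerically: assuming each mosquito bites at rate 1 per minute, independently, the time to the first bite is the minimum of several exponential waiting times, and its mean should be the reciprocal of the number of mosquitoes. A quick sketch (the numbers are invented):

```python
import random

random.seed(0)

N = 10          # number of mosquitoes (an invented example size)
rate = 1.0      # bites per minute, per mosquito
trials = 100_000

# The time to the first bite is the minimum of N independent
# exponential waiting times, one per mosquito.
total = 0.0
for _ in range(trials):
    total += min(random.expovariate(rate) for _ in range(N))
mean_first_bite = total / trials

# The minimum of N exponentials with rate 1 is itself exponential
# with rate N, so the mean should be close to 1/N = 0.1 minutes.
print(mean_first_bite)
```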

> Suppose there are species $X$, $Y$, and $Z$, and two reactions with the same rate constants: $A\colon X \to Z$ and $B\colon X + Y \to Z$. Now, let’s think intuitively about this. Surely the requirements for reaction B to occur are strictly *more stringent* than those for reaction A: while A requires only an $X$, B requires an $X$ and a $Y$. If we imagine our $X$, $Y$, and $Z$ molecules rushing around in a liquid, reaction A can occur at any time, on any $X$ present; but reaction B is restricted to occurring when molecules $X$ and $Y$ are in close contact.
>
> To make this distinction most extreme, consider the case that there are many $X$ molecules — say, 1000 of them — and just one $Y$ molecule. Then at any particular moment, there are 1000 chances for reaction A to take place, but at most 1 chance for reaction B to take place, since there’s only 1 $Y$ molecule present. So intuitively, since reactions A and B have the same rate constants, we might expect the observed rates of A and B to be in a ratio of 1000 to 1.
>
> Now, let’s analyze this situation with the master equation, which predicts that when the number of each species is well-defined, the rate at which a particular reaction takes place is given by the rate constant multiplied, for each input species, by the number of that species present. (This is only exactly correct when the reaction input involves at most 1 of each species.)
>
> So, let’s analyze the situation with 1000 $X$ molecules and 1 $Y$ molecule. Since $1000 \times 1 = 1000$, we see that reactions A and B will take place at an *equal* rate, violating the intuitive ratio derived above by a factor of 1000.
>
> Worse, I argued above that *whatever* the physical state, reaction B should take place less often than reaction A, since it has more stringent preconditions. But this is also not predicted by the master equation. In a state with 1000 $X$ molecules and 1000 $Y$ molecules, the master equation predicts that reaction B will take place at a rate of $1000 \times 1000$ times the rate constant, which is 1000 times more often than reaction A.
>
> Why does this make sense? What microphysical interpretation of the underlying physics makes this reasonable? What is so flawed about the intuitive argument described above?
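
To make the master equation’s bookkeeping concrete, here is the propensity calculation for the two reactions described above (the species names $X$, $Y$ and the rate constant of 1 are illustrative choices, not anything fixed by the discussion):

```python
def propensity_A(k, x):
    # Reaction A consumes one X: rate = k * (number of X's present).
    return k * x

def propensity_B(k, x, y):
    # Reaction B consumes one X and one Y: rate = k * (number of X-Y pairs).
    return k * x * y

k = 1.0
# 1000 X's and a single Y: the two propensities are equal.
print(propensity_A(k, 1000), propensity_B(k, 1000, 1))
# 1000 X's and 1000 Y's: B's propensity is 1000 times A's.
print(propensity_A(k, 1000), propensity_B(k, 1000, 1000))
```

The pair count $x \cdot y$ is doing the work here: with only one $Y$, there are still 1000 distinct $X$–$Y$ pairs, each with its own small chance of colliding per unit time.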

• Luca Cardelli and Attila Csikász-Nagy, The cell cycle switch computes approximate majority, *Scientific Reports* **2** (2012).

This got him interested in developing a theory of morphisms between reaction networks, which could answer questions like: when can one chemical reaction network (CRN) emulate another? But these questions turn out to be much easier with the rate equation than with the master equation. So, he asks: when can one CRN give the same rate equation as another?

> I’ve been trying to get from the master equation to the rate equation.

Great! In our seminar at UCR last quarter we spent a few sessions doing this. I am now writing a paper about what we discovered, called “Quantum techniques for studying the large number limit in chemical reaction networks”. I hope to be done in a couple weeks. If I didn’t already know many of the main results, I’d suggest this as a topic for you to work on when we meet in Singapore. Even though I *do* know many of the main results, there might be other things left to do.

> Therefore, we’ll get the required result if the variance is zero.

Or more generally: if the variance is smaller than some number, we get the rate equation to within some accuracy. This sort of “for every $\epsilon$ there is a $\delta$” formulation is more useful, since it’s unlikely for the variance to be exactly zero.

> In similar examples with higher $n$, $m$ we require higher moments to be 0 also.

Right! Of course if the variance is zero so are these higher moments. But more generally, we need estimates on these higher moments to obtain the rate equation to within some given accuracy.
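
As a sketch of why the variance controls the error, consider a reaction with two inputs of the same species, with number operator $N$. The master equation’s expected rate involves the falling power $\langle N(N-1)\rangle$, while the rate equation uses $\langle N\rangle^2$, and the two are related by

```latex
\langle N(N-1) \rangle
  = \langle N^2 \rangle - \langle N \rangle
  = \langle N \rangle^2 + \operatorname{Var}(N) - \langle N \rangle .
```

So if the variance vanishes, the discrepancy is just the single $\langle N \rangle$ term, which is negligible next to $\langle N \rangle^2$ in the large-number limit.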

> What is the significance of this?

I don’t know; it just makes a lot of sense that you need bounds on higher moments of the number operator $N$ if you are dealing with a process with more inputs, since those involve terms like $\langle N(N-1)(N-2) \rangle$.

> Also, for the general case of multiple species, what is the form of $N$ in terms of annihilation and creation operators, to get the expected value?

There’s one number operator for each species, $N_i = a_i^\dagger a_i$.

It counts how many things we have of the $i$th species.
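
Concretely, in the representation where a state is a formal power series with one variable $z_i$ per species (a standard sketch of this formalism, not anything special to this discussion):

```latex
a_i = \frac{\partial}{\partial z_i}, \qquad
a_i^\dagger = z_i \,\cdot\,, \qquad
N_i = a_i^\dagger a_i = z_i \frac{\partial}{\partial z_i},
\qquad
N_i \, z_1^{n_1} \cdots z_k^{n_k} = n_i \, z_1^{n_1} \cdots z_k^{n_k} .
```

The monomial $z_1^{n_1} \cdots z_k^{n_k}$ is the state with exactly $n_i$ things of the $i$th species, and $N_i$ reads off that count as its eigenvalue.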
