The Large-Number Limit for Reaction Networks (Part 2)

I’ve been talking a lot about ‘stochastic mechanics’, which is like quantum mechanics but with probabilities replacing amplitudes. In Part 1 of this mini-series I started telling you about the ‘large-number limit’ in stochastic mechanics. It turns out this is mathematically analogous to the ‘classical limit’ of quantum mechanics, where Planck’s constant \hbar goes to zero.

There’s a lot more I need to say about this, and lots more I need to figure out. But here’s one rather easy thing.

In quantum mechanics, ‘coherent states’ are a special class of quantum states that are very easy to calculate with. In a certain precise sense they are the best quantum approximations to classical states. This makes them good tools for studying the classical limit of quantum mechanics. As \hbar \to 0, they reduce to classical states where, for example, a particle has a definite position and momentum.

We can borrow this strategy to study the large-number limit of stochastic mechanics. We’ve run into coherent states before in our discussions here. Now let’s see how they work in the large-number limit!

Coherent states

For starters, let’s recall what coherent states are. We’ve got k different kinds of particles, and we call each kind a species. We describe the probability that we have some number of particles of each kind using a ‘stochastic state’. Concretely, this is a formal power series in variables z_1, \dots, z_k. We write it as

\displaystyle{\Psi = \sum_{\ell \in \mathbb{N}^k} \psi_\ell z^\ell }

where z^\ell is an abbreviation for

z_1^{\ell_1} \cdots z_k^{\ell_k}

But for \Psi to be a stochastic state the numbers \psi_\ell need to be probabilities, so we require that

\psi_\ell \ge 0

and

\displaystyle{ \sum_{\ell \in \mathbb{N}^k} \psi_\ell = 1}

Sums of coefficients like this show up so often that it’s good to have an abbreviation for them:

\displaystyle{ \langle \Psi \rangle =  \sum_{\ell \in \mathbb{N}^k} \psi_\ell}

Now, a coherent state is a stochastic state where the numbers of particles of each species are independent random variables, and the number of the ith species is distributed according to a Poisson distribution.

Since we can pick the means of these Poisson distributions to be whatever we want, we get a coherent state \Psi_c for each list of numbers c \in [0,\infty)^k:

\displaystyle{ \Psi_c = \frac{e^{c \cdot z}}{e^c} }

Here I’m using another abbreviation:

e^{c} = e^{c_1 + \cdots + c_k}

If you calculate a bit, you’ll see

\displaystyle{  \Psi_c = e^{-(c_1 + \cdots + c_k)} \, \sum_{n \in \mathbb{N}^k} \frac{c_1^{n_1} \cdots c_k^{n_k}} {n_1! \, \cdots \, n_k! } \, z_1^{n_1} \cdots z_k^{n_k} }

Thus, the probability of having n_i things of the ith species is equal to

\displaystyle{  e^{-c_i} \, \frac{c_i^{n_i}}{n_i!} }

This is precisely the definition of a Poisson distribution with mean equal to c_i.
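
Here’s a quick numerical sanity check in Python (a sketch of mine, with an arbitrarily chosen mean and truncation point): for a single species it builds the Poisson coefficients of \Psi_c and confirms that they sum to 1 and have mean c.

    from math import exp

    c, N_MAX = 2.7, 100          # arbitrary mean and series truncation

    # build psi_n = e^{-c} c^n / n! via the recurrence psi_{n+1} = psi_n * c/(n+1)
    psi, p = [], exp(-c)
    for n in range(N_MAX):
        psi.append(p)
        p *= c / (n + 1)

    print(sum(psi))                               # ~ 1: it is a stochastic state
    print(sum(n * q for n, q in enumerate(psi)))  # ~ 2.7: the mean is c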

What are the main properties of coherent states? For starters, they are indeed states:

\langle \Psi_c \rangle = 1

More interestingly, they are eigenvectors of the annihilation operators

a_i = \displaystyle{ \frac{\partial}{\partial z_i} }

since when you differentiate an exponential you get back an exponential:

\begin{array}{ccl} a_i \Psi_c &=&  \displaystyle{ \frac{\partial}{\partial z_i} \frac{e^{c \cdot z}}{e^c} } \\ \\ &=& \displaystyle{ c_i \, \frac{e^{c \cdot z}}{e^c} } \\ \\   &=& c_i \Psi_c \end{array}

We can use this fact to check that in this coherent state, the mean number of particles of the ith species really is c_i. For this, we introduce the number operator

N_i = a_i^\dagger a_i

where a_i^\dagger is the creation operator:

(a_i^\dagger \Psi)(z) = z_i \Psi(z)

The number operator has the property that

\langle N_i \Psi \rangle

is the mean number of particles of the ith species. If we calculate this for our coherent state \Psi_c, we get

\begin{array}{ccl} \langle a_i^\dagger a_i \Psi_c \rangle &=& c_i \langle a_i^\dagger \Psi_c \rangle \\  \\ &=& c_i \langle \Psi_c \rangle \\ \\ &=& c_i \end{array}

Here in the second step we used the general rule

\langle a_i^\dagger \Phi \rangle = \langle \Phi \rangle

which is easy to check.
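
All three facts are easy to check numerically. On coefficients, a = \partial/\partial z sends \psi_{n+1} to (n+1) \psi_{n+1}, while a^\dagger shifts coefficients up one slot. Here’s a small Python sketch for one species (mean and truncation arbitrary):

    from math import exp

    c, N_MAX = 2.7, 80
    psi, p = [], exp(-c)                 # psi_n = e^{-c} c^n / n!
    for n in range(N_MAX):
        psi.append(p)
        p *= c / (n + 1)

    # annihilation acts as (a psi)_n = (n+1) psi_{n+1}; eigenvector check a Psi_c = c Psi_c
    a_psi = [(n + 1) * psi[n + 1] for n in range(N_MAX - 1)]
    print(max(abs(a_psi[n] - c * psi[n]) for n in range(N_MAX - 1)))  # ~ 0

    # mean number of particles: <N Psi_c> = sum_n n psi_n = c
    print(sum(n * q for n, q in enumerate(psi)))                      # ~ 2.7

    # the rule <a† Phi> = <Phi>: shifting coefficients up doesn't change their sum
    adag_psi = [0.0] + psi[:-1]
    print(sum(adag_psi), sum(psi))       # equal up to the series truncation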

Rescaling

Now let’s see how coherent states work in the large-numbers limit. For this, let’s use the rescaled annihilation, creation and number operators from Part 1. They look like this:

A_i = \hbar \, a_i

C_i = a_i^\dagger

\widetilde{N}_i = C_i A_i

Since

\widetilde{N}_i = \hbar N_i

the point is that the rescaled number operator counts particles not one at a time, but in bunches of size 1/\hbar. For example, if \hbar is the reciprocal of Avogadro’s number, we are counting particles in ‘moles’. So, \hbar \to 0 corresponds to a large-number limit.

To flesh out this idea some more, let’s define rescaled coherent states:

\widetilde{\Psi}_c = \Psi_{c/\hbar}

These are eigenvectors of the rescaled annihilation operators:

\begin{array}{ccl} A_i \widetilde{\Psi}_c &=& \hbar a_i \Psi_{c/\hbar}  \\  \\ &=& \displaystyle{ \hbar \, \frac{c_i}{\hbar} \, \Psi_{c/\hbar} } \\ \\  &=& c_i \Psi_{c/\hbar} \\ \\  &=& c_i \widetilde{\Psi}_c  \end{array}

This in turn means that

\begin{array}{ccl} \langle \widetilde{N}_i \widetilde{\Psi}_c \rangle &=& \langle C_i A_i \widetilde{\Psi}_c \rangle \\  \\  &=& c_i \langle  C_i \widetilde{\Psi}_c \rangle \\  \\ &=& c_i \langle \widetilde{\Psi}_c \rangle \\ \\ &=& c_i \end{array}

Here we used the general rule

\langle C_i \Phi \rangle = \langle \Phi \rangle

which holds because the ‘rescaled’ creation operator C_i is really just the usual creation operator, which obeys this rule.

What’s the point of all this fiddling around? Simply this. The equation

\langle \widetilde{N}_i \widetilde{\Psi}_c \rangle = c_i

says the expected number of particles of the ith species in the state \widetilde{\Psi}_c is c_i, if we count these particles not one at a time, but in bunches of size 1/\hbar.
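
Here’s a tiny Monte Carlo illustration in Python (the values of c and \hbar are arbitrary): draw raw counts from a Poisson distribution with mean c/\hbar, rescale by \hbar, and the sample mean comes back as c.

    import numpy as np

    c, hbar = 2.0, 1e-6                  # arbitrary choices
    rng = np.random.default_rng(0)

    counts = rng.poisson(c / hbar, size=100_000)   # raw particle numbers
    print(np.mean(hbar * counts))                  # ~ 2.0, i.e. c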

A simple test

As a simple test of this idea, let’s check that as \hbar \to 0, the standard deviation of the number of particles in the state \widetilde{\Psi}_c goes to zero… where we count particles using the rescaled number operator.

The variance of the rescaled number operator is, by definition,

\langle \widetilde{N}_i^2 \widetilde{\Psi}_c \rangle -   \langle \widetilde{N}_i \widetilde{\Psi}_c \rangle^2

and the standard deviation is the square root of the variance.

We already know the mean of the rescaled number operator:

\langle \widetilde{N}_i \widetilde{\Psi}_c \rangle = c_i

So, the main thing we need to calculate is the mean of its square:

\langle \widetilde{N}_i^2 \widetilde{\Psi}_c \rangle

For this we will use the commutation relation derived last time:

[A_i , C_i] = \hbar

This implies

\begin{array}{ccl} \widetilde{N}_i^2 &=& C_i A_i C_i A_i \\  \\  &=&  C_i (C_i A_i + \hbar) A_i \\ \\  &=&  C_i^2 A_i^2 + \hbar C_i A_i \end{array}

so

\begin{array}{ccl} \langle \widetilde{N}_i^2\widetilde{\Psi}_c \rangle &=& \langle (C_i^2 A_i^2 + \hbar C_i A_i) \widetilde{\Psi}_c \rangle \\   \\  &=&  c_i^2 + \hbar c_i  \end{array}

where we used our friends

A_i \widetilde{\Psi}_c = c_i \widetilde{\Psi}_c

and

\langle C_i \Phi \rangle = \langle \Phi \rangle

So, the variance of the rescaled number of particles is

\begin{array}{ccl} \langle \widetilde{N}_i^2 \widetilde{\Psi}_c \rangle  -   \langle \widetilde{N}_i \widetilde{\Psi}_c \rangle^2  &=& c_i^2 + \hbar c_i - c_i^2 \\  \\  &=& \hbar c_i \end{array}

and the standard deviation is

(\hbar c_i)^{1/2}

Good, it goes to zero as \hbar \to 0! And the square root is just what you’d expect if you’ve thought about stuff like random walks or the central limit theorem.
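
A quick Monte Carlo check in Python (arbitrary c, a few sample values of \hbar) shows the empirical standard deviation of the rescaled particle number tracking (\hbar c)^{1/2}:

    import numpy as np

    c = 2.0
    rng = np.random.default_rng(1)
    for hbar in (1e-2, 1e-4, 1e-6):
        rescaled = hbar * rng.poisson(c / hbar, size=200_000)
        print(hbar, rescaled.std(), (hbar * c) ** 0.5)   # empirical vs predicted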

A puzzle

I feel sure that in any coherent state, not only the variance but also all the higher moments of the rescaled number operators go to zero as \hbar \to 0. Can you prove this?

Here I mean the moments after the mean has been subtracted. The pth moment is then

\langle (\widetilde{N}_i - c_i)^p \; \widetilde{\Psi}_c \rangle

I want this to go to zero as \hbar \to 0.

Here’s a clue that should help. First, there’s a textbook formula for the higher moments of Poisson distributions without the mean subtracted. If I understand it correctly, it gives this:

\displaystyle{ \langle N_i^m \; \Psi_c \rangle = \sum_{j = 1}^m {c_i}^j \; \left\{ \begin{array}{c} m \\ j \end{array} \right\} }

Here

\displaystyle{ \left\{ \begin{array}{c} m \\ j \end{array} \right\} }

is the number of ways to partition an m-element set into j nonempty subsets. This is called Stirling’s number of the second kind. This suggests that there’s some fascinating combinatorics involving coherent states. That’s exactly the kind of thing I enjoy, so I would like to understand this formula someday… but not today! I just want something to go to zero!
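
Still, it’s easy to test the formula before leaning on it. Here’s a small Python check (one species, arbitrary mean): it computes \langle N_i^m \, \Psi_c \rangle directly from the Poisson distribution and compares with the Stirling-number sum.

    from math import exp

    def stirling2(m, j):
        # Stirling numbers of the second kind, via the standard recurrence
        if m == j:
            return 1
        if j == 0 or j > m:
            return 0
        return j * stirling2(m - 1, j) + stirling2(m - 1, j - 1)

    def moment_direct(c, m, n_max=400):
        # <N^m Psi_c>: sum of n^m p(n) over the Poisson distribution with mean c
        total, p = 0.0, exp(-c)
        for n in range(n_max):
            total += n**m * p
            p *= c / (n + 1)
        return total

    def moment_stirling(c, m):
        # the textbook formula quoted above
        return sum(c**j * stirling2(m, j) for j in range(1, m + 1))

    c = 3.0
    for m in range(1, 6):
        print(m, moment_direct(c, m), moment_stirling(c, m))  # columns agree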

If I rescale the above formula, I seem to get

\begin{array}{ccl} \langle \widetilde{N}_i^m \; \widetilde{\Psi}_c \rangle &=& \hbar^m \langle N_i^m \Psi_{c/\hbar} \rangle \\ \\ &=& \hbar^m \; \displaystyle{ \sum_{j = 1}^m \left(\frac{c_i}{\hbar}\right)^j \left\{ \begin{array}{c} m \\ j \end{array} \right\} } \end{array}

We could plug this formula into

\langle (\widetilde{N}_i - c_i)^p \; \widetilde{\Psi}_c \rangle =  \displaystyle{ \sum_{m = 0}^p \, \binom{p}{m} \; \langle \widetilde{N}_i^m \;  \widetilde{\Psi}_c \rangle \, (-c_i)^{p - m} }

and then try to show the result goes to zero as \hbar \to 0. But I don’t have the energy to do that… not right now, anyway!

Maybe you do. Or maybe you can think of a better approach to solving this problem. The answer must be well-known, since the large-number limit of a Poisson distribution is a very important thing.
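
If you’d rather let a computer grind through the algebra, here’s a sketch using SymPy: it plugs the rescaled Stirling-number formula into the binomial expansion above and factors the resulting centered moments as polynomials in \hbar.

    import sympy as sp

    hbar, c = sp.symbols('hbar c', positive=True)

    def stirling2(m, j):
        # Stirling numbers of the second kind, via the standard recurrence
        if m == j:
            return 1
        if j == 0 or j > m:
            return 0
        return j * stirling2(m - 1, j) + stirling2(m - 1, j - 1)

    def raw_moment(m):
        # <N~^m Psi~_c> = hbar^m sum_j (c/hbar)^j S(m, j); the m = 0 moment is 1
        if m == 0:
            return sp.Integer(1)
        return hbar**m * sum((c / hbar)**j * stirling2(m, j) for j in range(1, m + 1))

    def centered_moment(p):
        # <(N~ - c)^p Psi~_c> via the binomial expansion
        return sp.expand(sum(sp.binomial(p, m) * raw_moment(m) * (-c)**(p - m)
                             for m in range(p + 1)))

    for p in range(2, 6):
        print(p, sp.factor(centered_moment(p)))
    # each printed moment has an overall factor of hbar, so it vanishes as hbar -> 0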

30 Responses to The Large-Number Limit for Reaction Networks (Part 2)

  1. John Baez says:

    Hmm, if we’re taking the \hbar \to 0 limit, we can look at

    \langle \widetilde{N}_i^m \; \widetilde{\Psi}_c \rangle = \hbar^m \; \displaystyle{ \sum_{j = 1}^m \left(\frac{c_i}{\hbar}\right)^j \left\{ \begin{array}{c} m \\ j \end{array} \right\} }

    and discard terms of order \hbar or higher. The only term that survives occurs when j = m, and

    \displaystyle{ \left\{ \begin{array}{c} m \\ m \end{array} \right\} = 1 ,}

    so we get

    \langle \widetilde{N}_i^m \; \widetilde{\Psi}_c \rangle =  c_i^m  + O(\hbar)

    Then we can stick this in

    \langle (\widetilde{N}_i - c_i)^p \; \widetilde{\Psi}_c \rangle = \displaystyle{ \sum_{m = 0}^p \, \binom{p}{m} \; \langle \widetilde{N}_i^m \; \widetilde{\Psi}_c \rangle \, (-c_i)^{p - m} }

    and get

    \langle (\widetilde{N}_i - c_i)^p \; \widetilde{\Psi}_c \rangle = \displaystyle{ \sum_{m = 0}^p \, \binom{p}{m} \; c_i^m \, (-c_i)^{p - m} } + O(\hbar)

    which by the binomial theorem says

    \langle (\widetilde{N}_i - c_i)^p \; \widetilde{\Psi}_c \rangle = O(\hbar)

    so indeed, it goes to zero as \hbar \to 0!

    • John Baez says:

      Here’s a sketch of an easier proof. In what follows I’ll write O(\hbar) for any function of c \in [0,\infty)^k and \hbar \in \mathbb{R} that’s a polynomial in \hbar with only terms of degree 1 and higher, like this:

      a \hbar + b \hbar^2 + \cdots

      In my paper on reaction networks I showed that

      \langle N_i^{\underline r} \; \Psi_c \rangle = c_i^r

      where N_i^{\underline r} is the rth falling power of the ith number operator:

      N_i^{\underline r} = N_i (N_i - 1) \cdots (N_i - r+1)

      As a consequence we have the following identity for the rescaled number operator:

      \langle \widetilde{N}_i^{\underline r} \; \widetilde{\Psi}_c \rangle = c_i^r

      where \widetilde{N}_i^{\underline r} is my temporary bad notation for the rescaled falling power of the ith rescaled number operator:

      \widetilde{N}_i^{\underline r} = \widetilde{N}_i (\widetilde{N}_i - \hbar) \cdots (\widetilde{N}_i - r \hbar +\hbar)

      However, note that

      \widetilde{N}_i^{\underline r}  =\widetilde{N}_i^r + O(\hbar)

      since they differ by terms proportional to positive powers of \hbar. Thus

      \langle \widetilde{N}_i^r \; \widetilde{\Psi}_c \rangle = c_i^r + O(\hbar)

      On the other hand,

      \langle \widetilde{N}_i \widetilde{\Psi}_c \rangle^r = c_i^r

      exactly. So,

      \langle \widetilde{N}_i^r \widetilde{\Psi}_c \rangle -  \langle \widetilde{N}_i \widetilde{\Psi}_c \rangle^r = O(\hbar)

      Using this fact together with

      \langle (\widetilde{N}_i - c_i)^p \; \widetilde{\Psi}_c \rangle = \displaystyle{ \sum_{m = 0}^p \, \binom{p}{m} \; \langle \widetilde{N}_i^m \; \widetilde{\Psi}_c \rangle \, (-c_i)^{p - m} }

      we see that

      \langle (\widetilde{N}_i - c_i)^p \; \widetilde{\Psi}_c \rangle = O(\hbar)

      This may not look easier, but it seems easier to me since it doesn’t involve any identities with Stirling numbers of the second kind, and I understand every step!
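
      For anyone who wants to see the key identity in action, here’s a quick Python check (one species, arbitrary mean) that the falling-power moments of a Poisson distribution really obey \langle N_i^{\underline r} \, \Psi_c \rangle = c_i^r:

          from math import exp

          def falling_power(n, r):
              # n(n-1)(n-2)...(n-r+1)
              out = 1
              for i in range(r):
                  out *= n - i
              return out

          def factorial_moment(c, r, n_max=400):
              # sum_n n^{falling r} p(n) over a Poisson distribution with mean c
              total, p = 0.0, exp(-c)
              for n in range(n_max):
                  total += falling_power(n, r) * p
                  p *= c / (n + 1)
              return total

          c = 3.0
          for r in range(1, 6):
              print(r, factorial_moment(c, r), c**r)   # the two columns agree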

  2. Arjun Jain says:

    Why are you considering the moments of the Poisson Distribution?
    For the master equation to give the rate equation, don’t we need to look at \langle \widetilde{N}_i^{\underline{p}} \widetilde{\Psi}_c \rangle - \langle \widetilde{N}_i \widetilde{\Psi}_c \rangle^p? For coherent states, this is zero, without needing \hbar \to 0.

    Shouldn’t we first show that in the large number limit, \langle \widetilde{N}_i^{\underline{p}} \widetilde{\Psi}_c \rangle - \langle \widetilde{N}_i^p \widetilde{\Psi}_c \rangle goes to 0, and then think about the moments?

    • John Baez says:

      Arjun wrote:

      Why are you considering the moments of the Poisson Distribution?

      I just feel it’d be good to know that all these higher moments go to zero as \hbar \to 0. It’s obviously something we should expect! And it sounds like it could be a useful lemma in some calculations. So I was frustrated at first that it was hard to show. That indicated a weakness in my understanding. When it’s hard to show something obvious, it means you need to think more. So I thought more, and now I have found a much easier proof.

      Shouldn’t we first show that in the large number limit, \langle \widetilde{N}_i^{\underline{p}} \widetilde{\Psi}_c \rangle - \langle \widetilde{N}_i^p \widetilde{\Psi}_c \rangle goes to 0, and then think about the moments?

      Actually that’s the idea behind the much easier proof! I’ll sketch the proof here soon.

  3. Dan says:

    You might want to think in terms of cumulants. Check out

    http://www.scholarpedia.org/article/Cumulants

    It appears that all cumulants of the Poisson distribution are equal to the mean. That immediately gives you that the mean, variance, and third central moment are all \hbar c_i and hence go to zero as \hbar does. In fact, all of the cumulants go to zero, which should imply that all of the central moments do, since it should be possible to express central moments as polynomials in cumulants.

    • Dan says:

      Oops, never mind. It is not that simple. I missed one of the “fiddlings”. You’ve scaled things so that the mean is still c, not \hbar c.

    • Dan says:

      Okay, now I’m thinking that with your definitions, the order r cumulant will be \hbar^{r-1} c_i, so that the mean is c, the variance is \hbar c, and the third central moment is \hbar^2 c. So, all cumulants past the first go to zero. The translation to central moments past the first is not as quick as I first thought.

    • John Baez says:

      Thanks for telling me about cumulants. It would take me a while to get enough intuition for them to use them for something… though they vaguely remind me of ideas from statistical mechanics.

      There are some funny formulas involved in translating between cumulants and moments… and I bet there’s some nice combinatorics lurking behind these. As I mentioned in my article, the moments of a Poisson distribution are somewhat complicated, involving Stirling numbers of the second kind. Are the cumulants simpler? If so, we might try to argue that the complexity of the moments of a Poisson distribution arises from translating cumulants into moments.

      • Dan says:

        …though they vaguely remind me of ideas from statistical mechanics.

        Yeah, there’s a close analogy. The cumulant generating function (CGF) is the log of the moment generating function (MGF). So, in the sense that the MGF is analogous to the partition function, the CGF is like the free energy. There’s a short discussion of this in the Wikipedia article on cumulants:

        http://en.wikipedia.org/wiki/Cumulant#Relation_to_statistical_physics

        As I mentioned in my article, the moments of a Poisson distribution are somewhat complicated, involving Stirling numbers of the second kind. Are the cumulants simpler?

        Yeah, they’re simpler. For a Poisson distribution with PMF given by p(n)=e^{-\mu}\mu^n/n! the moment generating function is M(t)=\exp(\mu (e^t - 1)) so that the CGF is \log M(t) = \mu (e^t-1). So, all of the cumulants are equal to the mean \mu.

        But to be honest, I’m not that familiar with cumulants either. I’ve just found myself reading a lot of statistics literature recently and they seem to like cumulants, so they came to mind.

        • RZ says:

          Off the top of my head: can’t you just invoke the central limit theorem and state that for a large number of particles the Poisson distribution looks like a Gaussian with the correct mean and variance up to O(\hbar), and then all moments become “classical” (whatever that means here)?

        • John Baez says:

          If I were smart enough maybe I could do what you’re saying. Can I use some version of the central limit theorem to prove that all the higher centered moments of a Poisson distribution with mean N approach certain functions of N as N \to \infty?

          I would like to know. But anyway, I got the job done some other way.

      • John Baez says:

        Thanks, Dan—those remarks were more useful to me than the article on cumulants you pointed me to! I’m very used to the idea of differentiating a partition function to get useful information in physics: n-point functions, which are essentially moments. And I’m very used to the idea of taking the log of the partition function to get the free energy. But I’ve never thought much about differentiating the free energy! It’s obvious that these derivatives repackage the information in the derivatives of the partition function, but…

        … oh, wait a minute. In quantum field theory, the log of the partition function is called the effective action, and its derivatives can be written as sums of connected Feynman diagrams just as the derivatives of the partition function can be expressed as sums of Feynman diagrams.

        Actually this should explain the appearance of those Stirling numbers of the second kind. Just as any finite set can be written as the disjoint union of sets in a partition, any connected Feynman diagram can be written as a disjoint union of connected diagrams. The generating function for structures of any sort is the exponential of the generating function for ‘connected’ structures of that sort. I think I’m starting to get something here…

        • Vasileios Anagnostopoulos says:

          lectures-IHP.pdf, page 44

        • Dan says:

          Glad to have helped, if only a little. Taking derivatives of the free energy is pretty common in statistical mechanics as well (or maybe I should say statistical thermodynamics). As I’m sure you know, the free energy is a thermodynamic potential that allows you to get pretty much all of the interesting thermodynamic quantities by taking appropriate derivatives. But, yeah, it is also used in quantum field theory by analogy. And there is certainly a lot of combinatorics hidden in there (which I don’t understand). The Wikipedia page mentions some of that as well:

          http://en.wikipedia.org/wiki/Cumulants#Cumulants_and_set-partitions

          Oh, and thanks for fixing my failed attempt at box quotes. How do I do those here?

          Finally, for the sake of completeness and for anyone who might care, let me try to show the details of my claim that the cumulants of your rescaled number operator are \hbar^{r-1} c_i. My thinking was that your rescaled coherent state is a Poisson distribution with mean c_i/\hbar. So, the ordinary number operator is a Poisson random variable in that distribution with cumulants c_i/\hbar. The rescaled number operator \hbar N is just a scalar multiple of the ordinary number operator N. So, the moment generating function of the rescaled number operator is (with p(n) a Poisson PMF with mean \mu=c_i/\hbar)

          M(t)=\sum_n e^{t\hbar n} p(n) = \exp(\mu(e^{t\hbar}-1))

          Thus, the cumulant generating function is

          \log M(t)=\mu(e^{t\hbar}-1)

          So, the r-th cumulant (for r=1,2,\cdots) is

          \frac{d^r}{dt^r}\log M(0) = \hbar^r \mu = \hbar^{r-1} c_i

          And it will be a miracle if all of that LaTeX works….
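
          For the record, the same computation can be done mechanically with SymPy (purely symbolic, using nothing beyond the CGF written above):

              import sympy as sp

              t, hbar, c = sp.symbols('t hbar c', positive=True)
              mu = c / hbar                      # mean of the underlying Poisson

              cgf = mu * (sp.exp(t * hbar) - 1)  # log M(t) for the rescaled hbar * N

              for r in range(1, 5):
                  kappa = sp.diff(cgf, t, r).subs(t, 0)
                  print(r, sp.simplify(kappa))   # c, hbar*c, hbar**2*c, hbar**3*c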

        • John Baez says:

          I’m too sleepy to say anything interesting now, so I’ll just say that you can make nice quotes here using the standard HTML command for that:

          <blockquote>
          This is a quote!
          </blockquote>

          creates

          This is a quote!

  4. domenico says:

    Excuse me, I am retrying.
    I write down an idea that doesn’t work (too complex), but there are some strange intermediate results.
    I am thinking that all is possible in Dirac notation (and \hat \Psi in the second quantization for boson field):
    \hat \Psi = \sum_{n \in \mathbb{N}^k} \psi_n (\hat a^{\dagger}_1)^{n_1} \cdots (\hat a^{\dagger}_k)^{n_k}
    and
    \hat \Psi_c = e^{-(c_1+\cdots+c_k)/2} \sum_{n \in N^k} \sqrt[]{\frac{c^{n_1}_1 \cdots c^{n_k}_k}{n_1!\cdots n_k!}} (\hat a^{\dagger}_1)^{n_1} \cdots (\hat a^{\dagger}_k)^{n_k}

    =1

    [\hat a,\hat a^{\dagger}]=1

    ===c_i

    [\hat a,\hat \Psi]=\frac{d \hat \Psi}{d \hat a^{\dagger}}

    =

    but the commutator relations permit one to exchange the operators n_i times, and a_i |0>=0.

    =  = c_i
    and the B bunch number operator can be:

    =\frac{}{B}=\frac{}{B}

    =\frac{}{B^2}=\frac{}{B^2}

    so that:

    -^2=\frac{-^2}{B^2}

    -^2=\frac{-}{B^2}

    some solutions are simple, for example (the second is obtainable with Mathematica):

    =e^{-(c_1+\cdots+c_n)} \sum_{n\in N^k} \frac{c^{n_1}_1\cdots c^{n_k}_k}{n_1!\cdots n_k!} \frac{n_i!}{(n_i-B)!} =c^B_i

    =c^B_i e^{-c_i} \sum_n \frac{c^n_i}{n!} \frac{(n+B)!}{n!}\frac{n+B+1}{n+1}

  5. domenico says:

    There is a bug in the compiler.

    • John Baez says:

      I tried to fix your comment, but I don’t understand it so it’s hard to fix. Your equations contained strange expressions like ===, etcetera. In general it’s better to explain things in words and avoid unusual and unexplained mathematical symbols.

  6. domenico says:

    Excuse me, I did not want to persevere in the error, so I will try a different source.
    I re-re-write an idea that doesn’t work (too complex), but there are some strange intermediate results.
    I am thinking that all is possible in Dirac notation (and \hat \Psi in the second quantization for boson field):
    \hat \Psi = \sum_{n \in \mathbb{N}^k} \psi_n (\hat a^{\dagger}_1)^{n_1} \cdots (\hat a^{\dagger}_k)^{n_k}
    and
    \hat \Psi_c = e^{-(c_1+\cdots+c_k)/2} \sum_{n \in N^k} \sqrt[]{\frac{c^{n_1}_1 \cdots c^{n_k}_k}{n_1!\cdots n_k!}} (\hat a^{\dagger}_1)^{n_1} \cdots (\hat a^{\dagger}_k)^{n_k}
    in Dirac notation the normalization is:
    =1
    it is possible to obtain the number of particles for the distribution function:
    ===c_i
    using the commutation relations:
    [\hat a,\hat a^{\dagger}]=1
    it is possible to obtain the derivative of the distribution operator:
    [\hat a,\hat \Psi]=\frac{d \hat \Psi}{d \hat a^{\dagger}}
    the commutator relation permits one to exchange the operator n_i times, and a_i |0>=0, so that:
    =  = c_i
    I try a B bunch number operator (here I am not sure of the definition, Q can be a normalization):
    =\frac{}{B}=\frac{}{Q}
    and the square of the number operator:
    =\frac{}{Q^2}=\frac{}{Q^2}
    so that:
    -^2=\frac{-^2}{Q^2}
    or:
    -^2=\frac{-}{Q^2}
    some solutions are simple, for example (the second is obtainable with Mathematica):
    =e^{-(c_1+\cdots+c_n)} \sum_{n\in N^k} \frac{c^{n_1}_1\cdots c^{n_k}_k}{n_1!\cdots n_k!} \frac{n_i!}{(n_i-B)!} =c^B_i
    =c^B_i e^{-c_i} \sum_n \frac{c^n_i}{n!} \frac{(n+B)!}{n!}\frac{n+B+1}{n+1}

  7. domenico says:

    I surrender to the compiler

  8. domenico says:

    I am thinking of a simple idea, with monstrous calculations.
    If a system has an operator distribution that depends on the temperature, then it is possible to apply statistical mechanics on top of the second quantization.
    If this happens, there is a connection between Feynman diagrams and chemical reactions (from elementary particle interactions to molecular reaction potentials).
    Can Feynman diagrams be applied to molecular reactions using simply an approximation of the reaction potential, instead of the potential between elementary particles?

  9. Dan says:

    Okay, John, you have a perfectly good answer to your puzzle, but here’s an attempt at an abstract nonsense proof based on cumulants (for my own personal edification). I’ll start with some overly complicated notation (a prerequisite for any abstract nonsense proof). All of this is in the context of the PMF

    p(n) = e^{-\mu} \frac{\mu^n}{n!}

    where the mean is \mu = c_i/\hbar. We have several random variables to worry about. The ordinary number operator N has MGF:

    M(t) = \langle e^{tN}\Psi_{c/\hbar}\rangle=\sum_n e^{tn} p(n) = \exp(\mu (e^t-1))

    The rescaled number operator \tilde{N} has MGF:

    M_\hbar (t) = \langle e^{t\tilde{N}} \Psi_{c/\hbar}\rangle = \exp(\mu (e^{t\hbar}-1))

    Finally, we have the c_i-centered, rescaled number operator \tilde{N}-c_i which has MGF:

    M_\hbar^{(c_i)}(t) = \exp (\mu e^{t\hbar} -tc_i -\mu)

    Now, the CGF of the c_i-centered, rescaled number operator is

    \log M_\hbar^{(c_i)}(t)=\mu e^{t\hbar}-tc_i -\mu

    So, the first cumulant of \tilde{N}-c_i is

    \frac{d}{dt}\log M_\hbar^{(c_i)}(0)=\hbar\mu -c_i=c_i-c_i=0

    where we recall that \mu=c_i/\hbar. For higher order cumulants, the extra tc_i term gets killed and we get the same answer as for the rescaled number operator, i.e., for r=2,3,\cdots we have that

    \frac{d^r}{dt^r} \log M_\hbar^{(c_i)} (0) = \frac{d^r}{dt^r} \log M_\hbar(0)=\hbar^r \mu = \hbar^{r-1}c_i

    Thus, all cumulants for the centered, rescaled number operator of order greater than r=1 are proportional to \hbar and the first cumulant is zero. Furthermore, the moments of the centered, rescaled number operator can be written as polynomials of degree 1 or greater in its cumulants. Therefore, all of the c_i-centered moments of the rescaled number operator are either zero or proportional to \hbar and hence go to zero in the limit.

    I’ve always been partial to direct demonstration proofs, so I like your proof better. But I’ve already spent so much time on this that I felt I should finish it…. Now I’d better get back to the work I’m being paid for. :)
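
    As a mechanical check of the bookkeeping above, here’s a short SymPy sketch that differentiates the CGF of the centered, rescaled number operator and prints the first few cumulants:

        import sympy as sp

        t, hbar, c = sp.symbols('t hbar c', positive=True)
        mu = c / hbar

        # CGF of the centered, rescaled number operator N~ - c_i (from above)
        cgf = mu * sp.exp(t * hbar) - t * c - mu

        for r in range(1, 5):
            print(r, sp.simplify(sp.diff(cgf, t, r).subs(t, 0)))
        # r = 1 gives 0; each r >= 2 gives hbar**(r-1)*c, which is O(hbar)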

    • John Baez says:

      If I ever meet you I will gladly buy you a beer or two—or any beverage of your choice. That may recompense you for your unpaid work (though I know you did it just for fun).

      I like your approach, because it illustrates a little bit of the power and beauty of cumulants. First, the cumulant generating function of the Poisson distribution is so much simpler than the moment generating function. Second, it’s nice how all the higher cumulants don’t care when you translate the Poisson distribution to make its mean zero. (I bet that’s a general property of cumulants but I’m too lazy to think about it now.)

      Your approach is also a bit like my final approach, in the following sense. We avoid working directly with moments and work with other quantities, of which the moments are certain polynomial functions. For me, I happened to notice that when

      \displaystyle{ p(n) = e^{-\mu} \frac{\mu^n}{n!} }

      is a Poisson distribution, these quantities

      \displaystyle{ \sum_{n = 1}^\infty  n^{\underline{k}} \; p(n) = \sum_{n = 1}^\infty  n(n-1)(n-2) \cdots (n-k+1) \, p(n) }

      are simpler than the moments:

      \displaystyle{ \sum_{n = 1}^\infty  n^k \; p(n) }

      They obey

      \displaystyle{ \sum_{n = 1}^\infty  n^{\underline{k}} \; p(n) = \left(\sum_{n = 1}^\infty  n p(n)\right)^k}

      Someone who knew more about Poisson distributions would presumably know all sorts of tricks like this, including cumulants, but I’ve never really studied them before.

      • Dan says:

        I doubt our paths will ever cross, but I do appreciate the offer and certainly wouldn’t turn down a beer. But, as you noted, this is really just about the fun for me.

      • Dan says:

        So, your comment got me to thinking about the relationship between falling-power moments (what seem to be referred to as “factorial moments” elsewhere) and cumulants. And it sent me down a rabbit hole called the Umbral Calculus, which I had never heard of before. If you haven’t either, then the introduction of this survey is nice:

        ds3.pdf

        Anyway, I found myself reading this paper by Di Nardo and Senato:

        amm4.pdf

        Using the umbral calculus, it details the relationships between moments, factorial moments, central moments, and cumulants of random variables. The Poisson distribution seems to come up a lot. There’s a lot of notation and (unfortunately) I don’t really have the time to understand it fully. But here’s a few tidbits I pulled out that might be relevant to the present discussion:

        1. Proposition 7.1 shows that the factorial moment generating function is M[\log(1+t)] where M(t) is the ordinary moment generating function. So, for the Poisson with mean \mu, we’d get e^{t\mu}, demonstrating the property you used in your proof above, i.e., the r-th factorial moment of the Poisson is just \mu^r.

        2. Footnote 3 is a quote giving a short history of cumulants, which I found interesting.

        3. The first sentence of section 4 says:

        The family of Poisson r.v.’s plays a crucial role in the theory of r.v.’s, especially because the most general infinitely [sic] distribution may be represented as a limit of an appropriate sequence of compound Poisson processes (cf. [4]).

        Now, [4] is the second volume of Feller’s classic treatise on probability theory. I’m almost ashamed to admit that I don’t have access to a copy of that text, so I’m not really sure what that statement means, but it sounds like it might be relevant to your classical limit, no? After all, you understand the limit for Poisson distributions….

      • Dan says:

        It looks like maybe there was a word missing in the paper I quoted (I thought maybe they just needed to drop the -ly). From this paper

        42.pdf

        it seems that the theorem in Feller says something about all infinitely divisible distributions on the nonnegative integers being compound Poisson, meaning they can be represented as

        \sum_{n=1}^{N} X_n

        where N is Poisson and X_n are iid and independent of N. And infinitely divisible seems to be a condition on the characteristic function of the distribution. It looks like it means that all roots of the characteristic function are themselves characteristic functions.
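
        Here’s a toy Python simulation of a compound Poisson variable, with arbitrarily chosen ingredients (N Poisson with mean 5, the X_n uniform on {1, 2, 3}), just to make the definition concrete:

            import numpy as np

            rng = np.random.default_rng(3)

            def compound_poisson_draw(lam=5.0):
                # N ~ Poisson(lam), then sum N iid copies of X uniform on {1, 2, 3}
                n = rng.poisson(lam)
                return rng.integers(1, 4, size=n).sum()

            draws = np.array([compound_poisson_draw() for _ in range(100_000)])
            print(draws.mean())   # ~ lam * E[X] = 5 * 2 = 10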

      • Dan says:

        Ah! I see that I’m chasing down the Generalized Central Limit Theorem….

        tr-406.pdf

        Rabbit holes are fun, but not often conducive to productivity.

  10. Now we have most of the concepts and tools in place, and we can tackle the large-number limit using quantum techniques. You can review the details here:

    The large-number limit for reaction networks (part 1).

    The large-number limit for reaction networks (part 2).
