Arjun wrote:

Can we always assume that we will be able to normalize solutions for any symmetry, like this?

That’s a fun puzzle. Either prove this or find a counterexample!

I get it now. I did mean instead of just – I was a little sleepy then.

Okay, so for the first case, , after normalization, gives the Poisson distribution with mean , while for the second case, after normalization, we get again.

In the first case, has some value of , which is not the same as that of , because consisted of states other than those corresponding to .

Can we always assume that we will be able to normalize solutions for any symmetry, like this?

Arjun Jain wrote:

I meant that isn’t it a bit artificial finding solutions first and then normalizing them by ourselves?

No, it’s very common to use constructions that produce non-normalized quantum states or probability distributions, and then normalize them.
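To make the normalization step concrete, here is a tiny pure-Python sketch (my own illustration, not from the discussion above): any nonnegative vector with positive total can be rescaled into a stochastic state, i.e. one whose entries sum to 1.

```python
# Normalizing an unnormalized "solution" into a stochastic state.
# The numbers here are made up for illustration.

def normalize(psi):
    """Rescale a nonnegative vector so its entries sum to 1."""
    total = sum(psi)
    if total <= 0:
        raise ValueError("cannot normalize: total must be positive")
    return [p / total for p in psi]

raw = [3.0, 1.0, 1.0]      # an unnormalized vector of probabilities
state = normalize(raw)
print(state)               # → [0.6, 0.2, 0.2]
```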

is an observable, so we don’t expect to be a stochastic state.

It’s not that is a stochastic state: if is a stochastic state, gives a stochastic state after we normalize it. And if depends on time, and it’s a solution of the master equation, and is a symmetry, we see is also a solution of the master equation! It’s very valuable to have a way to get new solutions from old ones, even if we need to normalize them.
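Here is a toy pure-Python check of that claim (the 3-state system, rates, and symmetry are all invented for illustration): if an operator O commutes with the infinitesimal stochastic operator H of a master equation, then applying O to a solution gives another solution, which we can then normalize.

```python
# Toy check: a cyclic-hop H commutes with the cyclic permutation O,
# so evolving O(psi) agrees with applying O to the evolved psi.

def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def mat_vec(A, v):
    return [sum(A[i][j] * v[j] for j in range(len(v))) for i in range(len(A))]

# Infinitesimal stochastic H: rate-1 hops around a 3-cycle (columns sum to 0).
H = [[-1.0,  0.0,  1.0],
     [ 1.0, -1.0,  0.0],
     [ 0.0,  1.0, -1.0]]

# Symmetry O: the cyclic permutation of the three states.
O = [[0, 0, 1],
     [1, 0, 0],
     [0, 1, 0]]

# The symmetry condition: O and H commute.
assert mat_mul(O, H) == mat_mul(H, O)

def evolve(psi, dt=1e-3, steps=2000):
    """Euler-integrate the master equation d(psi)/dt = H psi."""
    for _ in range(steps):
        dpsi = mat_vec(H, psi)
        psi = [p + dt * d for p, d in zip(psi, dpsi)]
    return psi

psi0 = [0.7, 0.2, 0.1]           # a stochastic state
a = evolve(mat_vec(O, psi0))     # apply the symmetry, then evolve
b = mat_vec(O, evolve(psi0))     # evolve, then apply the symmetry
assert all(abs(x - y) < 1e-9 for x, y in zip(a, b))
```

Since O here is a permutation it preserves the total probability, so no renormalization is needed in this particular example; for a general symmetry one would divide by the sum of the entries, as described above.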

(All these ideas are taken directly from quantum mechanics.)

Does the normalized solution represent anything realizable?

I think this is answered by Puzzle 3. You can see the answer to that puzzle on my website. And there’s another discussion of this same issue at the end of my second paper with Brendan Fong.

I meant that isn’t it a bit artificial finding solutions first and then normalizing them by ourselves? is an observable, so we don’t expect to be a stochastic state. Does the normalized solution represent anything realizable?

Great! We don’t have the answer to this puzzle in our book yet. I’ll add your answer, and credit you.

4. For the general solution, you write that, because the master equation is linear, we can take any solution and write it as a linear combination of solutions, one in each subspace . In reality, don’t we have only one value of from the initial conditions? Then what is the need for the linear combinations?
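The linearity being invoked here can be checked numerically in a toy example (a hypothetical 2-state system, pure Python, not from the post): if two states both solve the master equation, so does any linear combination of them.

```python
# Toy check of linearity: evolving a combination a*psi1 + b*psi2 agrees
# with combining the evolved psi1 and psi2.  Rates are made up.

H = [[-2.0,  1.0],
     [ 2.0, -1.0]]   # infinitesimal stochastic: columns sum to zero

def step(psi, dt=1e-3):
    """One Euler step of d(psi)/dt = H psi."""
    return [psi[i] + dt * sum(H[i][j] * psi[j] for j in range(2))
            for i in range(2)]

def evolve(psi, steps=1000):
    for _ in range(steps):
        psi = step(psi)
    return psi

psi1, psi2 = [1.0, 0.0], [0.0, 1.0]
a, b = 0.3, 0.7
combo = evolve([a * x + b * y for x, y in zip(psi1, psi2)])
separate = [a * x + b * y for x, y in zip(evolve(psi1), evolve(psi2))]
assert all(abs(u - v) < 1e-9 for u, v in zip(combo, separate))
```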

is the total number of atoms. If you know for sure what the total number of atoms in a box of gas is, you can choose for that particular choice of , and you don’t need to consider any other choice of .

But in reality, chemists rarely count the atoms in a box of gas. So it’s actually often better to work with coherent states, where we just know the *expected value* of the number of atoms. They are mathematically much simpler to deal with, and when is large the standard deviation in the number of atoms is much smaller than the mean, so you *almost* know the precise number of atoms in such a state.

These coherent states are linear combinations of states living in different subspaces.
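The point about the standard deviation can be illustrated numerically (a quick stdlib-Python sketch of my own, not from the post): for a Poisson-distributed number of atoms, the variance equals the mean, so the relative spread falls off like one over the square root of the mean.

```python
# For a Poisson distribution the standard deviation is sqrt(mean),
# so std/mean = 1/sqrt(mean) shrinks as the expected number grows.
import math

for mean in [10, 10_000, 10_000_000]:
    std = math.sqrt(mean)        # Poisson: variance equals the mean
    print(f"mean {mean:>10}: std/mean = {std / mean:.6f}")
```

For ten million atoms the relative spread is already well under a tenth of a percent, which is the sense in which you *almost* know the precise number of atoms.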

Arjun wrote:

3. Have you talked about the Feynman diagram approach in detail, in some later post?

The most detail was in Part 7. I haven’t gone into more detail because this stuff is easy if you know Feynman diagram theory, and I don’t yet see a really exciting theorem to prove, or calculation to do, that would be worth writing a paper about. There should be something interesting to do, though. Maybe we can figure out something when we meet in Singapore.

Consider:

As s and s work independently, we can put the s in front of the s in each term of the expression. Then we get:

The 2nd, 4th, 6th and 8th terms cancel. Then we get:

The 2nd and 4th terms in the bracket equal , while the 1st and 3rd terms give .

So we get 0. Similarly for the terms.

Arjun wrote:

2. In puzzle 4, shouldn’t

be

along with a normalization constant?

I think you mean puzzle 3. No, I meant what I said. Either of these formulas gives an equilibrium solution of the master equation. It’s interesting to compute in either case! So, there are two different puzzles to do: my puzzle, and the puzzle you just suggested.
