Jaynes realized, among other things, that entropy maximization in thermostatics is not just a law of physics but a general principle of reasoning in situations of incomplete information. This is widely understood by now (though some still argue about it), so we didn’t even think of mentioning it in our paper. But we probably should, since we’re explaining how entropy maximization works when you combine several systems.

You might like this post of mine:

• Classical mechanics versus thermodynamics (part 1).

Here’s the most relevant part:

## The big picture

Now let’s step back and think about what’s going on.

Lately I’ve been trying to unify a bunch of ‘extremal principles’, including:

1) the principle of least action

2) the principle of least energy

3) the principle of maximum entropy

4) the principle of maximum simplicity, or Occam’s razor

In my post on quantropy I explained how the first three principles fit into a single framework if we treat Planck’s constant as an imaginary temperature. The guiding principle of this framework is

maximize entropy

subject to the constraints imposed by what you believe

And that’s nice, because E. T. Jaynes has made a powerful case for this principle.

However, when the temperature is imaginary, entropy is so different that it may deserve a new name: say, ‘quantropy’. In particular, it’s complex-valued, so instead of maximizing it we have to look for stationary points: places where its first derivative is zero. But this isn’t so bad. Indeed, a lot of minimum and maximum principles are really ‘stationary principles’ if you examine them carefully.
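A minimal sketch of the quantropy setup (my paraphrase of the idea; conventions in the actual quantropy paper may differ): by analogy with Gibbs probabilities, assign each history $x$ with action $S(x)$ a complex amplitude, and define quantropy just as one defines entropy:

```latex
a_x \;=\; \frac{e^{-S(x)/(i\hbar)}}{Z}, \qquad
Z \;=\; \sum_x e^{-S(x)/(i\hbar)}, \qquad
Q \;=\; -\sum_x a_x \ln a_x .
```

Demanding that $Q$ be stationary subject to a fixed ‘expected action’ $\langle S \rangle = \sum_x a_x S(x)$ and normalization $\sum_x a_x = 1$ recovers exactly these Feynman-style amplitudes, with $1/(i\hbar)$ playing the role of the inverse temperature $\beta$. Since the $a_x$ are complex, $Q$ is complex too, which is why ‘maximize’ must be weakened to ‘stationarize’.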

What about the fourth principle: Occam’s razor? We can formalize this using algorithmic probability theory. Occam’s razor then becomes yet another special case of

maximize entropy

subject to the constraints imposed by what you believe

once we realize that algorithmic entropy is a special case of ordinary entropy.
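Roughly how this goes (a sketch in the spirit of algorithmic thermodynamics, not a quote from any paper): maximize the Shannon entropy $H(p) = -\sum_x p(x)\log_2 p(x)$ over programs $x$ for a fixed universal machine, subject to a constraint on the expected program length $\sum_x p(x)\,|x|$. The maximum-entropy answer is a Gibbs distribution in the length:

```latex
p(x) \;\propto\; 2^{-\beta\,|x|} .
```

At $\beta = 1$ this is essentially the Solomonoff–Levin universal prior, which gives the most weight to the shortest programs. Short programs are simple explanations, so ‘maximize entropy subject to constraints’ automatically prefers simplicity: Occam’s razor.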

All of this deserves plenty of further thought and discussion—but not today!

That’s an interesting idea. It could work.

Is there a visible route to doing real space/block spin renormalization for a 2D or even 1D Ising model through your setup? That would be wonderful!

You guessed right; now I’ll have to think about whether your explanation of *why* it’s right is secretly the same as the explanation in our paper (it’s after Definition 11).

Our formalism covers both classical and quantum statistical physics, and we give many examples of how. We treat the entropy, rather than the partition function, as the key ingredient. Spin systems, and gluing together spin systems, should work fine.

In classical thermodynamics, by taking a Legendre transform one can recover the logarithm of the partition function from the entropy. I believe Hong Qian has studied something similar for statistical mechanics. But we don’t get deep into Legendre transforms in this paper. That should come next!
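Concretely, with $k_B = 1$ and a concave entropy $S(E)$, the standard duality looks like this (sign conventions vary across textbooks):

```latex
\ln Z(\beta) \;=\; \sup_E \,\bigl[\, S(E) - \beta E \,\bigr],
\qquad
S(E) \;=\; \inf_\beta \,\bigl[\, \ln Z(\beta) + \beta E \,\bigr].
```

So $\ln Z$, i.e. $-\beta F$ where $F$ is the free energy, and the entropy carry the same information, each as the Legendre transform of the other; that’s why one can take either as the starting point.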

Just skip ahead to the examples at the end.
