• Matteo Polettini, Nonequilibrium thermodynamics as a gauge theory.

Abstract. We assume that Markovian dynamics on a finite graph enjoys a gauge symmetry under local scalings of the probability density, derive the transformation law for the transition rates and interpret the thermodynamic force as a gauge potential. A widely accepted expression for the total entropy production of a system arises as the simplest gauge-invariant completion of the time derivative of Gibbs’s entropy. We show that transition rates can be given a simple physical characterization in terms of locally-detailed-balanced heat reservoirs. It follows that Clausius’s measure of irreversibility along a cyclic transformation is a geometric phase. In this picture, the gauge symmetry arises as the arbitrariness in the choice of a prior probability. Thermostatics depends on the information that is disposable to an observer; thermodynamics does not.

He also pointed out this paper by Schütz, who uses a quantum-mechanical notation for stochastic dynamics:

• G. M. Schütz, Exactly solvable models for many-body systems far from equilibrium.

He also said, and I summarize:

It seems that a special class of Markovian generators, called p-normal, might have intriguing properties. Usually they have real and complex-conjugate eigenvalues. When you reverse the direction of time (there’s a well-defined way to do this…), the imaginary parts of the eigenvalues change sign while the real parts remain the same.

Some facts about normal systems are in my article on the Fisher metric:

• Matteo Polettini, Equivalence principle and critical behaviour for nonequilibrium decay modes.

Finally, if you’re interested in stochastic dynamics and thermodynamics you have to read the milestone paper by J. Schnakenberg:

• J. Schnakenberg, Network theory of microscopic and macroscopic behavior of master equation systems.

I don’t know what ‘p-normal operators’ are, or ‘normal systems’, so I need to learn the definitions of those.

But I thank WebHubTelescope for nudging me to think more about how the principle of maximum entropy fits into the story.

Here’s what I know so far. It’s a bit more than I knew I knew:

In classical statics at temperature zero, a closed system will obey the **principle of minimum energy**. It will usually minimize the energy

E = K + V

in the following way. First it will minimize kinetic energy, K, by staying still. Then it will go on to minimize potential energy, V. So, people usually say statics at temperature zero is governed by the **principle of minimum potential energy**.
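This two-stage settling can be illustrated numerically. Here is a toy sketch of my own (the potential V(x) = (x − 2)², the friction coefficient, and all numbers are just illustrative choices, with the mass set to 1): a damped particle first sheds its kinetic energy, then comes to rest at the minimum of the potential.

```python
def simulate(x=5.0, v=3.0, dt=0.01, steps=20000, gamma=1.0):
    """Damped Newtonian dynamics in the toy potential V(x) = (x - 2)**2."""
    for _ in range(steps):
        force = -2.0 * (x - 2.0)        # F = -dV/dx
        v += (force - gamma * v) * dt   # the friction term -gamma*v drains K
        x += v * dt
    return x, v

x, v = simulate()
K = 0.5 * v ** 2     # kinetic energy: driven to (essentially) zero
V = (x - 2.0) ** 2   # potential energy: driven to its minimum, 0 at x = 2
```

With the friction on, the system ends up still (K ≈ 0) at the bottom of the potential well, just as the principle of minimum potential energy says.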

In classical statics at any fixed temperature, a closed system will obey the **principle of minimum free energy**. Now it will minimize

F = E – TS

where T is the temperature and S is the entropy. Note that this principle reduces to the principle of minimum energy when T = 0.
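A quick numerical check of this principle (my own toy example; the three energy levels and the temperature are arbitrary choices): among probability distributions over a few energy levels, the Boltzmann distribution p_i ∝ exp(−E_i/T) minimizes the free energy F = ⟨E⟩ − TS, and its free energy equals −T log Z.

```python
import math, random

E = [0.0, 1.0, 2.0]   # toy energy levels
T = 0.7               # fixed temperature

def free_energy(p):
    avg_E = sum(pi * Ei for pi, Ei in zip(p, E))
    S = -sum(pi * math.log(pi) for pi in p if pi > 0)  # Gibbs entropy
    return avg_E - T * S

Z = sum(math.exp(-Ei / T) for Ei in E)                 # partition function
boltzmann = [math.exp(-Ei / T) / Z for Ei in E]

# No randomly chosen competitor beats the Boltzmann distribution:
random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in E]
    q = [wi / sum(w) for wi in w]
    assert free_energy(boltzmann) <= free_energy(q) + 1e-12
```

This is the standard variational characterization of the Gibbs state, just checked by brute force on a three-level toy system.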

We can also consider classical statics at fixed energy or fixed entropy. Then we have:

• The **principle of maximum entropy**: for a closed system with fixed energy, the entropy is maximized at equilibrium.

• The **principle of minimum energy**: for a closed system with fixed entropy, the total energy is minimized at equilibrium.

These last three principles should really all be consequences of a single one, but I’m having trouble clearly stating what it is.
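The maximum entropy principle, at least, is easy to check in the simplest microcanonical setting (my own toy example, with an arbitrary number of states): if all accessible microstates have the same energy, every distribution over them has the same ⟨E⟩, and the entropy S = −Σ p_i log p_i is maximized by the uniform distribution.

```python
import math, random

n = 4                    # number of accessible microstates (all same energy)
uniform_S = math.log(n)  # entropy of the uniform distribution

def entropy(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

# No randomly chosen distribution exceeds the uniform one's entropy:
random.seed(0)
for _ in range(1000):
    w = [random.random() for _ in range(n)]
    p = [wi / sum(w) for wi in w]
    assert entropy(p) <= uniform_S + 1e-12
```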

Thinking about this has given me some extra ideas, but I want to check them before springing them on the world. Thanks!

By the way, ‘classical statics at nonzero temperature’ is usually called ‘classical equilibrium thermodynamics’. But it’s a bit more consistent to reserve the term *dynamics* for situations where things are really changing in time.

My favorite counterpart to the least action principle in the quantum setting is Feynman’s ‘sum over histories’ idea. In classical mechanics, a system usually follows the path that minimizes the action S. In quantum mechanics, it takes all possible paths with different amplitudes. To figure out what happens, we need to sum over all paths, with each path getting weighted by the amplitude exp(iS). I’ve hidden Planck’s constant in this formula by setting it to 1, but if we don’t do that, we can (nonrigorously) show that as ħ goes to zero, the sum over histories reduces to the least action principle.
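A discrete toy version of the sum over histories (my own sketch; the matrix entries are arbitrary): for a two-state system with one-step amplitude matrix U, the n-step amplitude from state i to state j is the sum, over all paths through intermediate states, of the product of one-step amplitudes — which is exactly the matrix power U^n.

```python
import itertools

U = [[0.6 + 0.3j, 0.2 - 0.1j],   # toy one-step amplitude matrix
     [0.0 + 0.1j, 0.5 + 0.4j]]

def path_sum(U, i, j, n):
    """Sum amplitudes over all length-n paths from state i to state j."""
    total = 0j
    for mid in itertools.product(range(len(U)), repeat=n - 1):
        states = (i,) + mid + (j,)
        amp = 1 + 0j
        for a, b in zip(states, states[1:]):
            amp *= U[a][b]   # multiply one-step amplitudes along the path
        total += amp
    return total

def mat_pow(U, n):
    """U**n by repeated matrix multiplication."""
    dim = len(U)
    R = [[1 + 0j if a == b else 0j for b in range(dim)] for a in range(dim)]
    for _ in range(n):
        R = [[sum(R[a][k] * U[k][b] for k in range(dim)) for b in range(dim)]
             for a in range(dim)]
    return R
```

The path sum and the matrix power agree entry by entry, which is the discrete skeleton of the path-integral idea.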

There’s also a sum over histories idea in the stochastic setting. Here we can say a system takes all possible paths with different *probabilities*. To figure out what happens, we need to sum over all paths, with each path getting weighted by its probability.
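The same toy construction works here (my own sketch; the transition matrix is an arbitrary choice), with one key difference from the quantum case: each path carries a nonnegative *probability* — a product of transition probabilities — and the path weights over all endpoints sum to 1.

```python
import itertools

P = [[0.9, 0.1],   # toy Markov transition matrix: row i sums to 1
     [0.4, 0.6]]

def prob_end_in(j, start, n):
    """Probability of ending in state j after n steps, by summing
    the probabilities of every length-n path from `start` to j."""
    total = 0.0
    for path in itertools.product(range(len(P)), repeat=n):
        states = (start,) + path
        if states[-1] != j:
            continue
        w = 1.0
        for a, b in zip(states, states[1:]):
            w *= P[a][b]   # multiply one-step transition probabilities
        total += w
    return total

n = 5
probs = [prob_end_in(j, 0, n) for j in range(2)]
assert abs(sum(probs) - 1.0) < 1e-12   # path probabilities sum to 1
```

So where the quantum path weights exp(iS) can interfere, these weights are ordinary probabilities and simply add up.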

This analogy between the quantum and stochastic theories is very much a part of what I’m talking about. It works best in the ‘overlap’ region discussed in my talk.

The relation to the principle of least entropy, on the other hand, is quite mysterious to me!

• What are some ways you can use your analogy to take ideas from quantum mechanics and turn them into really *new* ideas in stochastic mechanics? (I completely forgot to mention the stochastic version of Noether’s theorem.)

• What are some ways you can use it in reverse, and take ideas from stochastic mechanics and turn them into really new ideas in quantum mechanics?

• How exactly does ‘decoherence’ turn quantum mechanical systems into systems that can be described using stochastic mechanics?

Cord Mueller gave me these references:

• M. Doi, Second quantization representation for classical many-particle system, http://iopscience.iop.org/0305-4470/9/9/008/

A more recent review:

• http://iopscience.iop.org/0305-4470/38/17/R01/

• Henk Hilhorst’s course notes (in French): http://www.th.u-psud.fr/page_perso/Hilhorst/N/Notes101120.pdf

The operator formalism is discussed in chapter 8, with some more references there.

Sorry to lower the tone.
