Classical Mechanics versus Thermodynamics (Part 4)

Thanks to the precise mathematical analogy between classical mechanics and thermodynamics that we saw last time, we can define Poisson brackets in thermodynamics just as in classical mechanics. That is, we can define Poisson brackets of quantities like entropy, temperature, volume and pressure. But what does this mean?

Furthermore, we can get quantum mechanics from classical mechanics by replacing Poisson brackets by commutators. That’s a brief summary of a long story, but by now this story is pretty well understood. So, we can do the same maneuver in thermodynamics and get some new theory where thermodynamic quantities like entropy, temperature, volume and pressure no longer commute! But what does this mean physically?

The obvious guess is that we get some version of thermodynamics that takes quantum mechanics into account, but I believe this is wrong. I want to explain why, and what I think is really going on. Briefly, I think what we get is more like statistical mechanics—but with a somewhat new outlook on this subject.

In getting to this understanding, we’ll bump into a few thorny questions. They’re interesting in themselves, but I don’t want them to hold us back, because we don’t absolutely need to answer them here. So, I’ll leave them as ‘puzzles’. Maybe you can help me with them.

In the classical mechanics of a point particle on a line, I considered a 4-dimensional vector space with coordinates (q,p,t,H), where a point describes the position q, momentum p, time t and energy H of the particle. This is usually called the extended phase space, since it’s bigger than the usual phase space, where a point (q,p) only keeps track of position and momentum. We saw that the extended phase space naturally acquires a symplectic structure

\omega = dp \wedge dq - dH \wedge dt

We can define Poisson brackets of smooth functions on any symplectic manifold. Instead of reviewing the whole well-known procedure I’ll just turn the crank and tell you what it gives for the coordinate functions on the extended phase space. The Poisson brackets of position and momentum are nonzero, and similarly for time and energy. All the rest vanish. We have

\{q,p\} = 1, \qquad \{t,H\} = -1

so we call these pairs of functions conjugate variables.
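
If you want to see the crank actually turn, here is a quick symbolic check in Python using sympy. I am not deriving the bracket from \omega here, just writing down the standard coordinate formula that goes with \omega = dp \wedge dq - dH \wedge dt and confirming the brackets above:

```python
# Symbolic check of the extended phase space Poisson brackets.
# The coordinate formula below encodes omega = dp ^ dq - dH ^ dt:
# the (t, H) pair enters with the opposite sign from the (q, p) pair.
import sympy as sp

q, p, t, H = sp.symbols('q p t H')

def pb(f, g):
    return (sp.diff(f, q) * sp.diff(g, p) - sp.diff(f, p) * sp.diff(g, q)
            - (sp.diff(f, t) * sp.diff(g, H) - sp.diff(f, H) * sp.diff(g, t)))

print(pb(q, p))   # 1
print(pb(t, H))   # -1
print(pb(q, H), pb(q, t), pb(p, t), pb(p, H))   # all 0
```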

Puzzle 1. The first equation above says that momentum generates translations of position. The second says that energy generates time translations, but in reverse, thanks to the minus sign. Why the difference? Is this a problem?

To go to quantum mechanics, we can build a noncommutative algebra generated by elements I’ll call \hat{q}, \hat{p}, \hat{t} and \hat{H}. We require that these commute except for two pairs, whose commutators mimic the Poisson brackets above:

[\hat{q},\hat{p}] = i \hbar, \qquad [\hat{t},\hat{H}] = i \hbar

Here \hbar is Planck’s constant, or more precisely the reduced Planck constant. This has dimensions of action, making the above equations dimensionally consistent: momentum times position has dimensions of action, and so does energy multiplied by time.
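
If you like watching a computer do dimensional analysis, here is a tiny illustration using the Python units library pint (nothing deep, and any units package would do):

```python
# Dimensional bookkeeping for [q, p] = i hbar and [t, H] = i hbar.
import pint

ureg = pint.UnitRegistry()
Q = ureg.Quantity

action    = Q(1, 'joule * second')                      # dimensions of hbar
q_times_p = Q(1, 'meter') * Q(1, 'kilogram * meter / second')
t_times_H = Q(1, 'second') * Q(1, 'joule')

print(q_times_p.dimensionality == action.dimensionality)   # True
print(t_times_H.dimensionality == action.dimensionality)   # True
```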

Puzzle 2. Does this mean the well-known Poisson bracket equation \{q,p\} = 1 is not dimensionally consistent? Does this cause problems in classical mechanics? If not, why not?

We can work with the algebra having these generators and relations; this is called a Weyl algebra. But physicists usually prefer to treat this algebra as consisting of operators on some Hilbert space. The Stone–von Neumann theorem says there’s a unique ‘really nice’ way to do this. I don’t want to explain what ‘really nice’ means—it’s too much of a digression here. In fact I wrote a whole book about it with Irving Segal and Zhengfang Zhou. But this article is better if you just want the basic idea:

• Wikipedia, Stone–von Neumann theorem.

The main thing I need to say is that in the ‘really nice’ situation, \hat{q}, \hat{p}, \hat{t} and \hat{H} are unbounded self-adjoint operators, and the spectrum of each one is the whole real line. That’s not all there is to it: more must be true. But a problem immediately shows up: while this is fine for the usual position and momentum operators in quantum mechanics, the spectrum of the Hamiltonian \hat{H} is usually not the whole real line! In fact we usually want its spectrum to be bounded below: that is, we don’t want to allow arbitrarily large negative energies.

As a result, in quantum mechanics we usually give up on trying to find a ‘time operator’ \hat{t} that obeys the relation

[\hat{t},\hat{H}] = i \hbar

There is more to say about this:

• John Baez, The time-energy uncertainty relation.

But this is not what I want to talk about today; I mention it only because we’ll see a similar problem in thermodynamics, where volume is bounded below.

In any event, there’s a standard ‘really nice’ way to make \hat{q} and \hat{p} into operators in quantum mechanics: we take the Hilbert space L^2(\mathbb{R}) of square-integrable functions on the line, and define

\displaystyle{ (\hat{q} \psi)(x) = x \psi(x), \qquad (\hat{p} \psi)(x) = -i \hbar \frac{\partial}{\partial x} \psi(x) }
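
Here is a numerical sanity check of the commutation relation, discretizing the line and approximating \partial/\partial x by central differences. This is only a sketch: I set \hbar = 1 and use a Gaussian test function, since the honest operators are unbounded and need care with domains.

```python
# Check [q, p] psi = i hbar psi on a discretized line, with hbar = 1.
import numpy as np

hbar = 1.0
x = np.linspace(-10, 10, 2001)
dx = x[1] - x[0]
psi = np.exp(-x**2)                               # Gaussian test function

q_op = lambda f: x * f                            # multiplication by x
p_op = lambda f: -1j * hbar * np.gradient(f, dx)  # -i hbar d/dx, central differences

commutator = q_op(p_op(psi)) - p_op(q_op(psi))
print(np.allclose(commutator, 1j * hbar * psi, atol=1e-3))   # True
```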

We may try to copy this in thermodynamics.

So, let’s see how it works! There are some twists.

In my explanations of classical mechanics versus thermodynamics so far, I’ve mainly been using the energy scheme, where we start with the internal energy U as a function of entropy S and volume V, and then define temperature T and pressure P as derivatives, so that

dU = T dS - P dV

As we’ve seen, this makes it very interesting to consider a 4-dimensional space—I’ll again call it the extended phase space—where S,T,V and P are treated as independent coordinates. This vector space has a symplectic structure

\omega = dT \wedge dS - dP \wedge dV

and the physically allowed points in this vector space form a ‘Lagrangian submanifold’—that is, a 2-dimensional surface on which the symplectic structure vanishes:

\displaystyle{ \left\{ (S,T,V,P): \;  T = \left.\phantom{\Big|} \frac{\partial U}{\partial S}\right|_V  , \; P = -\left. \phantom{\Big|}\frac{\partial U}{\partial V} \right|_S \right\} \; \subset \; \mathbb{R}^4 }
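
Here is a symbolic check, again in sympy, that \omega really does vanish on this surface: pulling \omega back along the parametrization (S,V) \mapsto (S, \partial U/\partial S, V, -\partial U/\partial V), the coefficient of dS \wedge dV cancels for an arbitrary smooth U.

```python
# Check that omega = dT ^ dS - dP ^ dV pulls back to zero on the surface
# T = dU/dS, P = -dU/dV, for an arbitrary smooth internal energy U(S, V).
import sympy as sp

S, V = sp.symbols('S V')
U = sp.Function('U')(S, V)

T = sp.diff(U, S)
P = -sp.diff(U, V)

# On the surface, the pullback of omega is (-dT/dV - dP/dS) dS ^ dV,
# so this coefficient should vanish by equality of mixed partials:
print(sp.simplify(-sp.diff(T, V) - sp.diff(P, S)))   # 0
```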

The symplectic structure lets us define Poisson brackets of smooth functions on the extended phase space. Again I’ll just turn the crank and tell you the Poisson brackets of the coordinate functions. The math is just the same, only the names have been changed, so you should not be surprised that two of their Poisson brackets are nonzero:

\{S,T\} = 1, \qquad \{V,P\} = -1

and all the rest vanish.

Puzzle 3. What is the physical meaning of these Poisson brackets?

Again we may worry that these equations are dimensionally inconsistent—but let’s go straight to the ‘quantized’ version and do our worrying there!

So, we’ll build a noncommutative algebra generated by elements I’ll call \hat{S}, \hat{T}, \hat{V} and \hat{P}. We’ll require that these commute except for two pairs, namely S with T and V with P. What relations should these pairs obey?

A first try might be

[\hat{S},\hat{T}] = i \hbar, \qquad [\hat{V},\hat{P}] = i \hbar

However, these equations are not dimensionally consistent! In both cases the left side has units of energy. Entropy times temperature and volume times pressure both have units of energy, since the first is related to ‘heat’ and the second is related to ‘work’. But \hbar has units of action.
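
The same pint bookkeeping as before makes the mismatch explicit (again purely illustrative):

```python
# Entropy * temperature and volume * pressure are energies, not actions.
import pint

ureg = pint.UnitRegistry()
Q = ureg.Quantity

S_times_T = Q(1, 'joule / kelvin') * Q(1, 'kelvin')   # 'heat'
V_times_P = Q(1, 'meter**3') * Q(1, 'pascal')         # 'work'
energy    = Q(1, 'joule')
action    = Q(1, 'joule * second')                    # dimensions of hbar

print(S_times_T.dimensionality == energy.dimensionality)   # True
print(V_times_P.dimensionality == energy.dimensionality)   # True
print(S_times_T.dimensionality == action.dimensionality)   # False
```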

I declare this to be intolerable. We are groping around trying to understand the physical meaning of the mathematics here; dimensional analysis is a powerful guide, and if we don’t have that going for us we’ll be in serious trouble.

So, we could try

[\hat{S},\hat{T}] = i\alpha, \qquad [\hat{V},\hat{P}] = i\alpha

where \alpha is some constant with units of energy.

Unfortunately there is no fundamental constant with units of energy that plays an important role throughout thermodynamics. We could for example use the mass of the electron times the speed of light squared, but why in the world should thermodynamics place some special importance on this?

One important quantity with units of energy in thermodynamics is kT, where T is temperature and k is Boltzmann’s constant. Boltzmann’s constant is fundamental in thermodynamics, or more precisely statistical mechanics: it has units of energy per temperature. So we might try

[\hat{S},\hat{T}] = ikT, \qquad [\hat{V},\hat{P}] = ikT

but unfortunately the temperature T is not a constant: it’s one of the variables we’re trying to ‘quantize’! It would make more sense to try

[\hat{S},\hat{T}] = ik\hat{T}, \qquad [\hat{V},\hat{P}] = ik\hat{T}

but unfortunately these commutation relations are more complicated than the ones we had in quantum mechanics.

Luckily, the fundamental constant we want is sitting right in front of us: it’s Boltzmann’s constant. This has units of energy per temperature—or in other words, units of entropy. This suggests quitting the ‘energy scheme’ and working with the second most popular formulation of thermodynamics, the ‘entropy scheme’.

Here we start by writing the entropy S of our system as a function of its internal energy U and volume V. A simple calculation starting from dU = TdS - PdV then shows

\displaystyle{ dS = \frac{1}{T} dU + \frac{P}{T} dV }

It will help to make up short names for the quantities here, so I’ll define

\displaystyle{ C = \frac{1}{T} }

and call C the coldness, for the obvious reason. I’ll also define

\displaystyle{ B = \frac{P}{T} }

and call B the boldness, for no particularly good reason—I just need a name. In terms of these variables we get

dS = C dU + B dV

This is the so-called entropy scheme. This equation implies that C times U has dimensions of entropy, and so does B times V.

If we go through the procedure we used last time, we get a 4-dimensional vector space with coordinates U,C,V,B and symplectic structure

\omega = dC \wedge dU + dB \wedge dV

The physically allowed points in this vector space form a Lagrangian submanifold

\displaystyle{  \left\{ (U,C,V,B): \;  C = \left.\phantom{\Big|} \frac{\partial S}{\partial U}\right|_V  , \; B = \left. \phantom{\Big|}\frac{\partial S}{\partial V} \right|_U \right\} \; \subset \; \mathbb{R}^4 }

which is really just the submanifold we had before, now described using new coordinates.

But let’s get to the point! We’ll build a noncommutative algebra generated by elements I’ll call \hat{U}, \hat{C}, \hat{V} and \hat{B}. We’ll require that these commute except for two pairs, namely U with C and V with B. And what commutation relations should these pairs obey?

We could try

[\hat{U},\hat{C}] = ik, \qquad [\hat{V},\hat{B}] = ik

This is now dimensionally consistent, since in each equation both sides have dimensions of entropy!

However, there’s another twist. Quantum mechanics is about complex amplitudes, while statistical mechanics is about probabilities, which are real. So I actually think I want

[\hat{U},\hat{C}] = k, \qquad [\hat{V},\hat{B}] = k

Let me give some evidence for this!

I will ignore the second equation for now and focus on the first. Suppose we’re doing statistical mechanics and we have a system that has probability p(E) of having internal energy E. We would like to define an internal energy operator \hat{U} and coldness operator \hat{C}. Say we set

\displaystyle{  (\hat{U} p)(E) = E p(E), \qquad (\hat{C} p)(E) = -k \frac{\partial}{\partial E} p(E) }

Then it’s easy to check that they obey the commutation relation

[\hat{U},\hat{C}] = k
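
Since ‘easy to check’ deserves an actual check, here it is in sympy, applying the two orderings to an arbitrary smooth function p(E):

```python
# Verify [U, C] p = k p for U = multiplication by E and C = -k d/dE.
import sympy as sp

E, k = sp.symbols('E k')
p = sp.Function('p')(E)             # an arbitrary smooth function of E

U_op = lambda f: E * f
C_op = lambda f: -k * sp.diff(f, E)

commutator = U_op(C_op(p)) - C_op(U_op(p))
print(sp.simplify(commutator))      # k*p(E)
```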

Moreover—and this is the exciting part—they make physical sense! If the system definitely has internal energy E_0, then p(E) vanishes except at E = E_0. It follows that p is an eigenvector of the internal energy operator with eigenvalue E_0:

(\hat{U} p)(E) = E p(E) = E_0 p(E)

so

\hat{U} p = E_0 p

On the other hand, suppose the system definitely has temperature T_0, and thus coldness C_0 = 1/T_0. Then statistical mechanics tells us that p(E) is given by the Boltzmann distribution

\displaystyle{ p(E) = \frac{\exp(-E/kT_0)}{Z}  = \frac{\exp(-C_0 E/k) }{Z} }

where Z is a normalizing constant called the partition function. It follows that p is an eigenvector of the coldness operator with eigenvalue C_0:

\begin{array}{ccl}  (\hat{C} p)(E) &=&  \displaystyle{ -k \frac{\partial}{\partial E} \frac{\exp(-C_0 E/k)}{Z} }\\ \\  &=&  \displaystyle{  C_0 \frac{\exp(-C_0 E/k)}{Z} } \\ \\  &=& C_0 p(E)  \end{array}

so

\hat{C} p = C_0 p

This makes me very happy. We are seeing a nice analogy:

• Internal energy eigenstates in statistical mechanics are like position eigenstates in quantum mechanics.

• Coldness eigenstates in statistical mechanics are like momentum eigenstates in quantum mechanics—except for a missing factor of i in the exponential, which makes the coldness eigenstates decrease exponentially instead of oscillate.
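
To see a coldness eigenstate concretely, here is a small numerical illustration. I set k = 1, pick the arbitrary value C_0 = 2, discretize the energy axis, and check that the unnormalized Boltzmann distribution is an approximate eigenvector of the discretized coldness operator; the constant Z plays no role in the eigenvalue equation.

```python
# The Boltzmann distribution as an approximate eigenvector of the
# discretized coldness operator C = -k d/dE, with eigenvalue C_0.
import numpy as np

k, C0 = 1.0, 2.0
E = np.linspace(0.0, 10.0, 2001)
dE = E[1] - E[0]
p = np.exp(-C0 * E / k)             # unnormalized Boltzmann distribution

C_p = -k * np.gradient(p, dE)       # coldness operator, central differences

# Away from the endpoints the ratio C_p / p is constant at C_0:
print(np.allclose(C_p[1:-1] / p[1:-1], C0, atol=1e-2))   # True
```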

We can also define a temperature operator

\displaystyle{ \hat{T} = \frac{1}{\hat{C}} }

at least if we ignore the eigenvectors of the coldness operator with eigenvalue zero. But the coldness operator is more fundamental in this approach.

A lot of further questions and problems appear at this point, but I think I’ll stop here and tackle them later.


Part 1: Hamilton’s equations versus the Maxwell relations.

Part 2: the role of symplectic geometry.

Part 3: a detailed analogy between classical mechanics and thermodynamics.

Part 4: what is the analogue of quantization for thermodynamics?

10 Responses to Classical Mechanics versus Thermodynamics (Part 4)

  1. Toby Bartels says:

    and all the rest vanish

    For those unfamiliar with Poisson brackets (and especially their antisymmetry), you should probably also mention that \{p,q\} = -1 and \{H,t\} = 1 too.

  2. Peter Morgan says:

    In quantum thermodynamics, thermal noise is not Poincaré invariant, in contrast to “quantum noise”, associated with the vacuum state, which is Poincaré invariant. The difference can be said to come from a choice of time-like direction for a specific generator of time-like translations when we introduce the operator \exp{(-\beta\hat P_\mu T^\mu)}, where T^\mu is a unit-length forward-pointing time-like 4-vector. This could be discussed in several ways, but if we allow T^\mu to be of arbitrary length, then in this 4-dimensional formalism kT has the dimensions of action. I hope this is helpful towards making the ‘energy scheme’ work.

  3. allenknutson says:

    The appeal to dimensional analysis makes me want to self-advertise a 5-minute video on the subject. I hope you won’t be too annoyed.

  4. SteveB says:

    In the definition of the coldnesss operator (which has too many s’s, but sounds kind of cool when you say it with the extra one), psi should be p in two places and x should be E.

  5. Giampiero Campa says:

    I just came across this paper (the non-preprint version was published in the IEEE Control Systems magazine); I think it might be of interest for the topic of this post, especially from section 6 on:

    Classical Thermodynamics Revisited: A Systems and Control Perspective: https://arxiv.org/pdf/2010.04213.pdf

  6. jackjohnson says:

    Toward the end of https://en.wikipedia.org/wiki/Cumulant, in the section ‘relation to statistical physics’, is the assertion:

    The Helmholtz free energy expressed in terms of

    F(\beta) = -\beta^{-1} \log Z(\beta)

    further connects thermodynamic quantities with the cumulant generating function for the energy.

    I’d like to understand this better, and I wonder if thermodynamic terms like pressure and volume have analogs in statistical mechanics. I worry that you may have already answered that question, though…

    • John Baez says:

      Hi! The temperature T and ‘coolness’ \beta = 1/kT make sense in statistical mechanics whenever you have a measure space X equipped with any measurable function H : X \to \mathbb{R} that you decide to call ‘energy’. You define

      Z(\beta) = \int_X \exp(-\beta H)

      and then you get a bunch of relations between \beta and the probability distribution on X that maximizes entropy subject to a constraint on the expected value of H. This is the main story of statistical mechanics. The free energy

      F(\beta) = -\beta^{-1} \ln Z(\beta)

      is one of the actors in this drama. If there’s something particular you’re curious about, just ask.

      The whole story also has a quantum version where we start with a Hilbert space equipped with a self-adjoint operator H and define

      Z(\beta) = \mathrm{tr} ( \exp(-\beta H) )

      Formally everything works the same; indeed a trace is just like an integral on a noncommutative algebra.

      There’s an isomorphic story for volume and pressure. The only difference is that you pick a function or operator and now you call it ‘volume’ instead of ‘energy’. The math is the same.

      Someone recently told me about cumulant generating functions, and I guess the point is that they’re generating functions for logarithms of things, like \ln Z. But I don’t have a feel for why people care about them.
