Classical Mechanics versus Thermodynamics (Part 1)

It came as a bit of a shock last week when I realized that some of the equations I’d learned in thermodynamics were just the same as equations I’d learned in classical mechanics—with only the names of the variables changed, to protect the innocent.

Why didn’t anyone tell me?

For example: everybody loves Hamilton’s equations: there are just two, and they summarize the entire essence of classical mechanics. Most people hate the Maxwell relations in thermodynamics: there are lots, and they’re hard to remember.

But what I’d like to show you now is that Hamilton’s equations are Maxwell relations! They’re a special case, and you can derive them the same way. I hope this will make you like the Maxwell relations more, instead of liking Hamilton’s equations less.

First, let’s see what these equations look like. Then let’s see why Hamilton’s equations are a special case of the Maxwell relations. And then let’s talk about how this might help us unify different aspects of physics.

Hamilton’s equations

Suppose you have a particle on the line whose position q and momentum p are functions of time, t. If the energy H is a function of position and momentum, Hamilton’s equations say:

\begin{array}{ccr}  \displaystyle{  \frac{d p}{d t} }  &=&  \displaystyle{- \frac{\partial H}{\partial q} } \\  \\ \displaystyle{  \frac{d q}{d t} } &=&  \displaystyle{ \frac{\partial H}{\partial p} }  \end{array}
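To see what these two equations do, here's the standard textbook example (not needed for the rest of this post): a harmonic oscillator with

\displaystyle{ H = \frac{p^2}{2m} + \frac{1}{2} k q^2 }

Hamilton's equations then say

\displaystyle{ \frac{d q}{d t} = \frac{\partial H}{\partial p} = \frac{p}{m} , \qquad \frac{d p}{d t} = -\frac{\partial H}{\partial q} = -k q }

which combine to give Newton's law m \, d^2 q / d t^2 = -k q.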

The Maxwell relations

There are lots of Maxwell relations, and that’s one reason people hate them. But let’s just talk about two; most of the others work the same way.

Suppose you have a physical system like a box of gas that has some volume V, pressure P, temperature T and entropy S. Then the first and second Maxwell relations say:

\begin{array}{ccr}  \displaystyle{ \left. \frac{\partial T}{\partial V}\right|_S } &=&  \displaystyle{ - \left. \frac{\partial P}{\partial S}\right|_V } \\   \\   \displaystyle{ \left. \frac{\partial S}{\partial  V}\right|_T  }  &=&  \displaystyle{ \left. \frac{\partial P}{\partial T} \right|_V }   \end{array}
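As a quick physical sanity check (my addition; no thermodynamics beyond the ideal gas law is needed anywhere below): for n moles of ideal gas, P V = n R T gives

\displaystyle{ \left. \frac{\partial P}{\partial T} \right|_V = \frac{n R}{V} }

while the ideal-gas entropy has the form S = n R \ln V + f(T), so \left. \partial S / \partial V \right|_T = n R / V as well, in agreement with the second relation.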


Clearly Hamilton’s equations resemble the Maxwell relations. Please check for yourself that the patterns of variables are exactly the same: only the names have been changed! So, apart from a key subtlety, Hamilton’s equations become the first and second Maxwell relations if we make these replacements:

\begin{array} {ccccccc}  q &\to& S & &  p &\to & T \\ t & \to & V & & H &\to & P \end{array}

What’s the key subtlety? One reason people hate the Maxwell relations is that they have lots of little symbols like \left. \right|_V saying what to hold constant when we take our partial derivatives. Hamilton’s equations don’t have those.

So, you probably won’t like this, but let’s see what we get if we write Hamilton’s equations so they exactly match the pattern of the Maxwell relations:

\begin{array}{ccr}     \displaystyle{ \left. \frac{\partial p}{\partial t} \right|_q }  &=&  \displaystyle{- \left. \frac{\partial H}{\partial q} \right|_t } \\  \\\displaystyle{  \left.\frac{\partial q}{\partial t} \right|_p } &=&  \displaystyle{ \left. \frac{\partial H}{\partial p} \right|_t }    \end{array}

This looks a bit weird, and it set me back a day. What does it mean to take the partial derivative of q in the t direction while holding p constant, for example?

I still think it’s weird. But I think it’s correct. To see this, let’s derive the Maxwell relations, and then derive Hamilton’s equations using the exact same reasoning, with only the names of variables changed.

Deriving the Maxwell relations

The Maxwell relations are extremely general, so let’s derive them in a way that makes that painfully clear. Suppose we have any smooth function U on the plane. Just for laughs, let’s call the coordinates of this plane S and V. Then we have

d U = T d S - P d V

for some functions T and P. This equation is just a concise way of saying that

\displaystyle{ T = \left.\frac{\partial U}{\partial S}\right|_V }


\displaystyle{ P = - \left.\frac{\partial U}{\partial V}\right|_S }

The minus sign here is unimportant: you can think of it as a whimsical joke. All the math would work just as well if we left it out.

(In reality, physicists call U the internal energy of a system, regarded as a function of its entropy S and volume V. They then call T the temperature and P the pressure. It just so happens that for lots of systems, their internal energy goes down as you increase their volume, so P works out to be positive if we stick in this minus sign, so that’s what people did. But you don’t need to know any of this physics to follow the derivation of the Maxwell relations!)

Now, mixed partial derivatives commute, so we have:

\displaystyle{ \frac{\partial^2 U}{\partial V \partial S} =  \frac{\partial^2 U}{\partial S \partial V}}

Plugging in our definitions of T and P, this says

\displaystyle{ \left. \frac{\partial T}{\partial V}\right|_S = - \left. \frac{\partial P}{\partial S}\right|_V }

And that’s the first Maxwell relation! So, there’s nothing to it: it’s just a sneaky way of saying that the mixed partial derivatives of the function U commute.
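Since the relation is nothing but commuting mixed partials, it's easy to check numerically. Here's a minimal sketch in Python, using a toy choice of U (mine, not anything physical) and central finite differences:

```python
import math

# Toy smooth function on the (S, V) plane -- my choice, not a physical gas.
def U(S, V):
    return math.exp(S) / V

h = 1e-5  # step for central finite differences

def T(S, V):
    # T = dU/dS at fixed V
    return (U(S + h, V) - U(S - h, V)) / (2 * h)

def P(S, V):
    # P = -dU/dV at fixed S
    return -(U(S, V + h) - U(S, V - h)) / (2 * h)

S0, V0 = 0.3, 2.0
dT_dV = (T(S0, V0 + h) - T(S0, V0 - h)) / (2 * h)  # dT/dV at fixed S
dP_dS = (P(S0 + h, V0) - P(S0 - h, V0)) / (2 * h)  # dP/dS at fixed V

# First Maxwell relation: dT/dV|_S = -dP/dS|_V
assert abs(dT_dV + dP_dS) < 1e-4
```

Any smooth U works here: the derivation never used anything special about the function, which is the whole point.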

The second Maxwell relation works the same way. But seeing this takes a bit of thought, since we need to cook up a suitable function whose mixed partial derivatives are the two sides of this equation:

\displaystyle{ \left. \frac{\partial S}{\partial  V}\right|_T  = \left. \frac{\partial P}{\partial T} \right|_V }

There are different ways to do this, but for now let me use the time-honored method of ‘pulling the rabbit from the hat’.

Here’s the function we want:

A = U - T S

(In thermodynamics this function is called the Helmholtz free energy. It’s sometimes denoted F, but the International Union of Pure and Applied Chemistry recommends calling it A, which stands for the German word ‘Arbeit’, meaning ‘work’.)

Let’s check that this function does the trick:

\begin{array}{ccl} d A &=& d U - d(T S) \\  &=& (T d S - P d V) - (S dT + T d S) \\  &=& -S d T - P dV \end{array}

If we restrict ourselves to any subset of the plane where T and V serve as coordinates, the above equation is just a concise way of saying

\displaystyle{ S = - \left.\frac{\partial A}{\partial T}\right|_V }


\displaystyle{ P = - \left.\frac{\partial A}{\partial V}\right|_T }

Then since mixed partial derivatives commute, we get:

\displaystyle{ \frac{\partial^2 A}{\partial V \partial T} =  \frac{\partial^2 A}{\partial T \partial V}}

or in other words:

\displaystyle{ \left. \frac{\partial S}{\partial  V}\right|_T  = \left. \frac{\partial P}{\partial T} \right|_V }

which is the second Maxwell relation.
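If you'd like a concrete function to chew on (a toy example of mine, not a physical gas): take U = e^S / V. Then T = e^S / V, which we can solve to get S = \ln(T V), and

\displaystyle{ A = U - T S = T - T \ln(T V) }

Differentiating, -\left. \partial A / \partial T \right|_V = \ln(T V) = S and -\left. \partial A / \partial V \right|_T = T/V = P, and both sides of the second Maxwell relation come out to 1/V.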

We can keep playing this game using various pairs of the four functions S, T, P, V as coordinates, and get more Maxwell relations: enough to give ourselves a headache! But we have better things to do today.

Hamilton’s equations as Maxwell relations

For example: let’s see how Hamilton’s equations fit into this game. Suppose we have a particle on the line. Consider smooth paths where it starts at some fixed position at some fixed time and ends at the point q at the time t. Nature will choose a path with least action—or at least one that’s a stationary point of the action. Let’s assume there’s a unique such path, and that it depends smoothly on q and t. For this to be true, we may need to restrict q and t to a subset of the plane, but that’s okay: go ahead and pick such a subset.

Given q and t in this set, nature will pick the path that’s a stationary point of action; the action of this path is called Hamilton’s principal function and denoted S(q,t). (Beware: this S is not the same as entropy!)

Let’s assume S is smooth. Then we can copy our derivation of the Maxwell equations line for line and get Hamilton’s equations! Let’s do it, skipping some steps but writing down the key results.

For starters we have

d S = p d q - H d t

for some functions p and H called the momentum and energy, which obey

\displaystyle{ p = \left.\frac{\partial S}{\partial q}\right|_t }


\displaystyle{ H = - \left.\frac{\partial S}{\partial t}\right|_q }

As far as I can tell it’s just a cute coincidence that we see a minus sign in the same place as before! Anyway, the fact that mixed partials commute gives us

\displaystyle{ \left. \frac{\partial p}{\partial t} \right|_q = - \left. \frac{\partial H}{\partial q} \right|_t }

which is the first of Hamilton’s equations. And now we see that all the funny \left. \right|_q and \left. \right|_t things are actually correct!

Next, we pull a rabbit out of our hat. We define this function:

X = S - p q

and check that

d X = - q dp - H d t

This function X probably has a standard name, but I don’t know it. Do you?

Then, considering any subset of the plane where p and t serve as coordinates, we see that because mixed partials commute:

\displaystyle{ \frac{\partial^2 X}{\partial t \partial p} =  \frac{\partial^2 X}{\partial p \partial t}}

we get

\displaystyle{ \left. \frac{\partial q}{\partial t} \right|_p = \left. \frac{\partial H}{\partial p} \right|_t }

So, we’re done!
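Here's a concrete check (my example; I'll just quote the well-known principal function rather than derive it): for a free particle of mass m that starts at the origin at time zero,

\displaystyle{ S(q,t) = \frac{m q^2}{2 t} }

so

\displaystyle{ p = \left. \frac{\partial S}{\partial q} \right|_t = \frac{m q}{t} , \qquad H = -\left. \frac{\partial S}{\partial t} \right|_q = \frac{m q^2}{2 t^2} = \frac{p^2}{2 m} }

which is the expected free-particle energy. And since q = p t / m, we get \left. \partial q / \partial t \right|_p = p/m = \left. \partial H / \partial p \right|_t, just as the second of Hamilton's equations demands.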

But you might be wondering how we pulled this rabbit out of the hat. More precisely, why did we suspect it was there in the first place? There’s a nice answer if you’re comfortable with differential forms. We start with what we know:

d S = p d q - H d t

Next, we use this fundamental equation:

d^2 = 0

to note that:

\begin{array}{ccl}  0 &=& d^2 S \\ &=& d(p d q- H d t) \\ &=& d p \wedge d q - d H \wedge d t \\ &=& - dq \wedge d p - d H \wedge d t \\ &=& d(-q d p - H d t) \end{array}

See? We’ve managed to switch the roles of p and q, at the cost of an extra minus sign!

Then, if we restrict attention to any contractible open subset of the plane, the Poincaré Lemma says

d \omega = 0 \implies \omega = d \mu \; \textrm{for some} \; \mu

So, since our calculation shows

d(- q d p - H d t) = 0

it follows that there’s a function X with

d X = - q d p - H d t

This is our rabbit. And if you ponder the difference between -q d p and p d q, you’ll see it’s -d( p q). So, it’s no surprise that

X = S - p q

The big picture

Now let’s step back and think about what’s going on.

Lately I’ve been trying to unify a bunch of ‘extremal principles’, including:

1) the principle of least action
2) the principle of least energy
3) the principle of maximum entropy
4) the principle of maximum simplicity, or Occam’s razor

In my post on quantropy I explained how the first three principles fit into a single framework if we treat Planck’s constant as an imaginary temperature. The guiding principle of this framework is

maximize entropy
subject to the constraints imposed by what you believe

And that’s nice, because E. T. Jaynes has made a powerful case for this principle.

However, when the temperature is imaginary, entropy is so different that it may deserve a new name: say, ‘quantropy’. In particular, it’s complex-valued, so instead of maximizing it we have to look for stationary points: places where its first derivative is zero. But this isn’t so bad. Indeed, a lot of minimum and maximum principles are really ‘stationary principles’ if you examine them carefully.

What about the fourth principle: Occam’s razor? We can formalize this using algorithmic probability theory. Occam’s razor then becomes yet another special case of

maximize entropy
subject to the constraints imposed by what you believe

once we realize that algorithmic entropy is a special case of ordinary entropy.

All of this deserves plenty of further thought and discussion—but not today!

Today I just want to point out that once we’ve formally unified classical mechanics and thermal statics (often misleadingly called ‘thermodynamics’), as sketched in the article on quantropy, we should be able to take any idea from one subject and transpose it to the other. And it’s true. I just showed you an example, but there are lots of others!

I guessed this should be possible after pondering three famous facts:

• In classical mechanics, if we fix the initial position of a particle, we can pick any position q and time t at which the particle’s path ends, and nature will seek the path to this endpoint that minimizes the action. This minimal action is Hamilton’s principal function S(q,t), which obeys

d S = p d q - H d t

In thermodynamics, if we fix the entropy S and volume V of a box of gas, nature will seek the probability distribution of microstates that minimizes the energy. This minimal energy is the internal energy U(S,V), which obeys

d U = T d S - P d V

• In classical mechanics we have canonically conjugate quantities, while in statistical mechanics we have conjugate variables. In classical mechanics the canonical conjugate of the position q is the momentum p, while the canonical conjugate of time t is the energy H. In thermodynamics, the conjugate of entropy S is temperature T, while the conjugate of volume V is pressure P. All this fits in perfectly with the analogy we’ve been using today:

\begin{array} {ccccccc}  q &\to& S & &  p &\to & T \\ t & \to & V & & H &\to & P \end{array}

• Something called the Legendre transformation plays a big role both in classical mechanics and thermodynamics. This transformation takes a function of some variable and turns it into a function of the conjugate variable. In our proof of the Maxwell relations, we secretly used a Legendre transformation to pass from the internal energy U(S,V) to the Helmholtz free energy A(T,V):

A = U - T S

where we must solve for the entropy S in terms of T and V to think of A as a function of these two variables.

Similarly, in our proof of Hamilton’s equations, we passed from Hamilton’s principal function S(q,t) to the function X(p,t):

X = S - p q

where we must solve for the position q in terms of p and t to think of X as a function of these two variables.

I hope you see that all this stuff fits together in a nice picture, and I hope to say a bit more about it soon. The most exciting thing for me will be to see how symplectic geometry, so important in classical mechanics, can be carried over to thermodynamics. Why? Because I’ve never seen anyone use symplectic geometry in thermodynamics. But maybe I just haven’t looked hard enough!

Indeed, it’s perfectly possible that some people already know what I’ve been saying today. Have you seen someone point out that Hamilton’s equations are a special case of the Maxwell relations? This would seem to be the first step towards importing all of symplectic geometry to thermodynamics.

52 Responses to Classical Mechanics versus Thermodynamics (Part 1)

  1. Boris Borcic says:

    A couple of tangential remarks

    (1) iirc the equations you advertise here as Maxwell’s I was taught as Boltzmann’s?

    (2) pardon my French, but an imo good enough reason not to rename thermodynamics ‘thermostatics’ is that in French the current name allows one to stage the second law with a pun – thus reminding us that entropy is a measure of, e.g., punning micro-states. L’entropie met un terme aux dynamiques – entropy terminates the dynamics.

    • John Baez says:

      Thanks! I haven’t read the paper yet, but judging from the abstract, it sounds like this guy gets it:

      • Mark A. Peterson, Analogy between thermodynamics and mechanics, American Journal of Physics 47 (1979).

      I’m surprised this paper isn’t better known! I wonder if it could be because the American Journal of Physics is a ‘teaching journal’.

      By the way: I didn’t use the phrase ‘Hamilton–Jacobi’ in my blog article, because I wanted to keep the jargon down to a bare minimum, but of course the idea of deriving Hamilton’s equations by taking derivatives of Hamilton’s principal function is tightly connected to the Hamilton–Jacobi equation. It seems that for the isomorphism between classical mechanics and thermodynamics to become vivid, the Hamilton–Jacobi approach to Hamilton’s equations is nicer than the more common one focused on the phase space with coordinates p and q. But once we know the isomorphism exists, we can use any approach we want!

      • M. R. says:

        Like Blake Stacey says a couple of comments down, any previous work on this correspondence probably stems from Caratheodory’s formulation of thermodynamics, which relies on differential forms… Incidentally, isn’t there a Hamiltonian formulation of geometrical optics? It’d be lovely to get optics into the mix too!

        I was wondering why you didn’t mention the Hamilton-Jacobi equation, but now that I’ve had a chance to compare my notes with this blog entry, I agree that your approach is much neater!

      • Blake Stacey says:

        The reference I have on Caratheodory’s formulation of thermodynamics is Frankel’s The Geometry of Physics (second edition, 2004), section 6.3, which I’ve never actually read (well, never that carefully). Historical background can be found at a slightly more elementary mathematical level in Max Born’s Natural Philosophy Of Cause And Chance (1949), chapter 5.

        • Toby Bartels says:

          The reference I have on Caratheodory’s formulation of thermodynamics is Frankel’s The Geometry of Physics (second edition, 2004), section 6.3, which I’ve never actually read (well, never that carefully).

          I don’t remember what it says about Caratheodory’s formulation of thermodynamics in section 6.3, but that is an awesome book, so you should read it!

      • Giampiero Campa says:

        There is also a nice connection to optimal control, which relies on a slightly more general definition of an Hamiltonian, based on the Hamilton-Jacobi-Bellman equation.

        You can also have a look at this for an historical perspective on the whole thing.

  2. JVK says:

    In the discussion on Hamiltonian mechanics, the function X looks like a generating function of the second kind, while S is a generating function of the first kind. Not sure what X is routinely called, though…

  3. dcorfield says:

    You’re reawakening all those quarter glimpsed things I never fully got:

    Something called the Legendre transformation plays a big role both in classical mechanics and thermodynamics.

    Why did it also play a role in information geometry?


    if you take the definition of Laplace transform and take the temperature \to 0 limit, you’ll see it becomes the Legendre transform,

    then where should we expect the Laplace transform to be used? Why Fourier analysis and QM, if that is like the Laplace transform for the imaginary axis?

    • John Baez says:

      David Corfield wrote:

      You’re reawakening all those quarter glimpsed things I never fully got:

      Something called the Legendre transformation plays a big role both in classical mechanics and thermodynamics.

      Why did it also play a role in information geometry?

      I guess we need to understand the essence of the Legendre transform. The Legendre transform shows up whenever we minimize or maximize something subject to constraints. That happens a lot.

      Here’s how it goes. Suppose you have a function of two variables—it could be more or fewer, but two is a nice example. Purely for fun let’s call this function U(S,V). Now suppose you want to find the minimum of U subject to a constraint on one of the variables—say, S. We can do this using the yoga of Lagrange multipliers, which I hope you know and love. But let me describe that yoga in a mildly nonstandard way.

      First, we find the value of S and V that minimize

      U(S,V) - T S

      Here T is called the Lagrange multiplier. We don’t vary it while doing the minimization. Thus, the location of the minimum (S,V) will depend on T. We figure out how. We then cleverly choose T so that S takes the value given by our constraint. We then read off V.

      But after we get in the habit of this, we start to love

      U(S,V) - T S

      and think of it in various new ways. It’s a function of three variables. But we can think of it as a function of just T and V—at least if we’re lucky. How? Because if we fix T and V, there will be a unique S minimizing U(S,V) - T S—at least if we’re lucky. So, we use that S. Then U - T S depends on just T and V.

      This is what we mean when we write

      A(T,V) = U - T S

      We call A the Legendre transform of our original function U.

      Now we can approach our original task—locating the minimum of U for a fixed value of S—in a slightly new way! Now we’re locating the minimum of A for some fixed value of T.

      It’s all so tautologous and simple that it sounds a bit confusing and pointless. We’re really not doing much! But it sounds grand—in fact it is grand—when we attach physically significant names to our variables.

      Our original task was to find what a box of gas will do when it minimizes energy subject to a constraint on entropy.

      We’ve changed this to an equivalent problem: find what a box of gas will do when it minimizes free energy subject to a constraint on temperature.

      Here ‘find what it would do’ means find the value of the remaining variable V, which was a bystander in the above game. But there could be lots of such variables.

      Anyway, I hope you get it. We’ve switched our focus from the originally constrained variable to the Lagrange multiplier. We call the Lagrange multiplier the conjugate of the original variable. And we call our new function of interest (here A) the Legendre transform of our original function (here U).
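The minimization John describes can be sketched numerically in one variable (my toy convex function, not from the comment):

```python
# Legendre transform as constrained-minimum bookkeeping:
# A(T) = min over S of [ U(S) - T*S ], for a toy convex U(S) = S^2.
def U(S):
    return S ** 2

def legendre(T, lo=-10.0, hi=10.0, n=200001):
    # brute-force minimization over an evenly spaced grid of S values
    step = (hi - lo) / (n - 1)
    return min(U(lo + i * step) - T * (lo + i * step) for i in range(n))

# For U(S) = S^2 the minimizing S is T/2, so A(T) = -T^2/4 exactly.
T0 = 3.0
assert abs(legendre(T0) - (-T0 ** 2 / 4)) < 1e-6
```

The minimizing S plays the role of the constrained variable; T is the Lagrange multiplier we dial until S takes the value we want.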

    • John Baez says:

      I gave my current favorite explanation of the ‘essence’ of the Legendre transformation, but I didn’t answer David’s real puzzle, which I take to be: if the Legendre transformation can be seen as a T \to 0 limit of the Laplace transform, why do we see both Laplace transforms and Legendre transformations showing up in T > 0 thermal statics?

      This is a great puzzle and I don’t think I’ve reached the bottom of it. But here’s a start.

      In the T \to 0 limit, thermal statics reduces to classical statics, where the principle of least energy reigns supreme. Whenever we minimize things we expect to see Legendre transformations, so in classical statics we do.

      As we go to T > 0 we’re doing thermal statics. Now, instead of choosing the one state of minimum energy we say all possible states occur with different probabilities—with a state of energy E showing up with probability proportional to \exp(-E/T). So, our Legendre transforms turn into Laplace transforms.

      But, this probability distribution on states still minimizes something: namely, the free energy! So, we still see Legendre transforms showing up in thermal statics!

      So, we see both Laplace transforms and Legendre transforms in thermal statics.

      • Ah, that makes sense. Something similar happens with probabilities (though in view of the association between energies and probabilities that was always likely). When you have a distribution over a space you can always lift it up to a point in the space of distributions. So the Legendre transform in the latter case is shifting you between coordinates for the moment-determined subspaces and those of the corresponding exponential families.

  4. Blake Stacey says:

    I’d guess that any prior work in this spirit would derive from the Caratheordory tradition of thermodynamics (which originally got started by a suggestion from Max Born).

    Here’s M. J. Peterson (1979), “Analogy between thermodynamics and mechanics” American Journal of Physics 47, 6: 488, DOI:10.1119/1.11788.

    We note that equations of state—by which we mean identical relations among the thermodynamic variables characterizing a system—are actually first‐order partial differential equations for a function which defines the thermodynamics of the system. Like the Hamilton‐Jacobi equation, such equations can be solved along trajectories given by Hamilton’s equations, the trajectories being quasistatic processes which obey the given equation of state. This gives rise to the notion of thermodynamic functions as infinitesimal generators of quasistatic processes, with a natural Poisson bracket formulation. This formulation of thermodynamic transformations is invariant under canonical coordinate transformations, just as classical mechanics is, which is to say that thermodynamics and classical mechanics have the same formal structure, namely a symplectic structure.

    Here’s what Peterson says in his introduction:

    The “true structure” of thermodynamics, in this sense of coordinate invariance, is easy to find when one has once wondered about it, and the result is the same as in mechanics: the Poisson bracket structure (what mathematicians call a symplectic structure). This fact is, I believe, well known to some people, but it was not known to me when I was worrying about it, so I offer it to those who may find themselves similarly placed. I still find it surprising that thermodynamics and classical mechanics, those two pillars of classical physics, are—in a sense to be described—formally isomorphic!

    The basic Poisson brackets he uses are

    \{T, S\} = -1


    \{V, P\} = -1.

    • John Baez says:

      Yes, these Poisson brackets are just what you’d expect from the relation I gave:

      \begin{array} {ccccccc}  q &\to& S & &  p &\to & T \\ t & \to & V & & H &\to & P \end{array}

      In fact I wrote it this way to hint at the canonically conjugate pairs (or if you prefer, thermodynamically conjugate variables). I have a blog article half-written about this symplectic stuff.

    • John Baez says:

      Well, actually his Poisson brackets differ by an overall sign from the usual ones in classical mechanics, if you use the analogy I suggest. But that’s not surprising: the overall sign of the Poisson brackets is somewhat a matter of convention (though the conventions interlock and you have to be careful of changing just one).

  5. demian cho says:


    I am currently teaching a thermodynamics class, and really enjoying your series of posts starting with quantropy. I will seriously look into them once I have some free time.



    • demian cho says:

      Oh. I forgot to mention.
      I have a sophomore with whom we decided to go through Baez and Muniain this semester. Let’s see how far we can make it.


    • John Baez says:

      Hi, Demian! You wrote:

      I will seriously look into them once I have some free time.

      Great! As Mark Peterson points out in his paper, the analogy between classical mechanics and thermodynamics makes a lot of things clearer. It would be fun to incorporate that into a course, and from the comments here you’ll see a number of people are already doing it. I’d love to try it myself someday—maybe when I get back to U.C. Riverside.

  6. Lance Martin says:

    For some time I’ve suspected that various physical theories somehow use the same mathematics, but don’t have sufficient background to test this out. Your example here would be one of them. A friend of mine, Rob Tyler, once told me that he had published an article somewhere showing that the equations of electromagnetism are the same as those of fluid dynamics (Maxwell’s equations correspond to the vorticity relations I think). I’m not sure where all this leads, but at least it suggested a cute possibility on that score: what if anyone who solved either Yang-Mills or Navier-Stokes was entitled to $2,000,000 from the Clay foundation since by solving one they’d essentially solved the other? I just wish I knew what I was talking about on the matter.

  7. Mike says:

    A symplectic structure of thermodynamics? Cool beans.

    Qualitatively, I don’t see that as very surprising. It has been known for a long time by mathematicians that symplectic topology has very deep and rich applications in quantum mechanics. Also, as several of your former posts indicate, quantum mechanics itself shares many similarities with thermodynamics (this is especially true if you subscribe to the ensemble interpretation).

    It is important to look a bit deeper into the physics behind the equations, though. Hamiltonian mechanics goes far beyond just Hamilton’s equations of motion. The richness of Hamiltonian mechanics raises a few questions about this article: What does the fact that \int dT \, dS is invariant with varying V signify physically (analogous to the Poincaré/Liouville integral invariant)? Can you relate thermodynamic variables via canonical transformations? Is the symplectic theory of thermodynamics useful in applications or in theory (perhaps symplectic integration of thermodynamic quantities could prove useful for engineers)?

    Darn, it is curiosities like these that make me regret my ignorance of thermodynamics!

  8. Blake Stacey says:

    One of the first things I learned to do with Hamilton’s equations of motion was to study the case where the initial condition is not exactly known: instead of saying we have at time t = 0 a particle at q with momentum p, or a set of particles with a well-specified position and momentum each, all we know is the probability density for where things are and what they’re doing, \rho(\vec{q}_1,\vec{q}_2,\ldots,\vec{q}_N, \vec{p}_1, \vec{p}_2, \ldots,\vec{p}_N;t). If this is how we’ll gamble about what’s happening at time t, and if we accept that Newton’s laws apply, how should we gamble about what will happen at t'? The Liouville equation tells us

    \frac{\partial \rho}{\partial t} = -\{\rho, H\},

    where H is the Hamiltonian which encodes all the ways the N particles can push and pull on one another.

    So, by formal analogy, can we say something about “statistical thermal statics”, i.e., a situation where we don’t know exact values for the macroscopic state variables? I guess the formal analogue of the Liouville equation would read

    \frac{\partial \rho}{\partial V} = -\{\rho, P\},

    relating the change in \rho under an infinitesimal quasi-static change in volume V to the Poisson bracket of \rho with the pressure P.

    • John Baez says:

      Blake wrote:

      So, by formal analogy, can we say something about “statistical thermal statics”, i.e., a situation where we don’t know exact values for the macroscopic state variables?

      That’s an interesting idea! It’s sort of amusing, since in statistical mechanics, thermal statics is already statistical. But taking a probability distribution of probability distributions, and collapsing it down to a probability distribution, is a perfectly fine thing (formalized using the Giry monad, if you feel like showing off).

      I think you’re exactly right about the analogue of the Liouville equation (though I don’t vouch for the minus sign); this should be a spinoff of M. J. Peterson’s comment:

      Like the Hamilton‐Jacobi equation, such equations can be solved along trajectories given by Hamilton’s equations, the trajectories being quasistatic processes which obey the given equation of state.

      • Is a probability distribution of probability distributions the same thing as Christian Beck’s superstatistics or a doubly stochastic model?

        The interesting thing about this monad is that it can lead to PDFs that lack defined moments. For example, applying an exponential distribution to an exponential distribution leads to a resultant PDF which lacks a mean but still has a median value. I have a feeling that this is a maximum entropy situation for a median-only constrained PDF, but I haven’t been able to set up the variational parameters correctly.

        So my math puzzle is “Find the maximum entropy probability density function given a constraint of known median value”.

      • Blake Stacey says:

        John Baez wrote:

        It’s sort of amusing, since in statistical mechanics, thermal statics is already statistical.

        Right. On the other hand, a statement like “the temperature is 300 plus-or-minus 5 degrees Celsius” makes sense in an engineering context.

        But taking a probability distribution of probability distributions, and collapsing it down to a probability distribution, is a perfectly fine thing (formalized using the Giry monad, if you feel like showing off).

        Where can I learn how to show off — starting here, maybe?

      • John Baez says:

        Since WebHubTel and Blake are both interested in probability distributions of probability distributions, and the literature I’ve been able to find doesn’t look as readable as it should be, here’s a little mini-course:

        Probability distributions aren’t nearly as flexible as probability measures, so we should really use those. Suppose X is a space, and let P X be the space of probability measures on X. Then we’d like to talk about P(P X), the space of probability measures on the space of probability measures on X. And there should be a map

        T: P(P X) \to P X

        which collapses a probability measure on the space of probability measures on X down to a probability measure on X.

        For example, if there’s a 50% chance that the coin in my hand is fair (so it has a 50% chance of landing heads up), but there’s a 50% chance that it’s rigged so that it has a 90% chance of landing heads up, we say there’s a 70% chance of its landing heads up.
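In the finite case the collapse map is just a weighted average of the inner distributions. Here is a minimal Python sketch of the coin example above (the dict representation and the name `collapse` are my own, purely illustrative, not standard Giry-monad notation):

```python
# A finite probability distribution is a dict from outcomes to
# probabilities; a distribution of distributions is a list of
# (inner distribution, outer probability) pairs.

def collapse(dist_of_dists):
    """Collapse a distribution of distributions to a single one by
    averaging the inner distributions, weighted by the outer one."""
    result = {}
    for inner, p_outer in dist_of_dists:
        for outcome, p_inner in inner.items():
            result[outcome] = result.get(outcome, 0.0) + p_outer * p_inner
    return result

fair   = {"heads": 0.5, "tails": 0.5}
rigged = {"heads": 0.9, "tails": 0.1}

# 50% chance the coin is fair, 50% chance it is rigged:
mixed = collapse([(fair, 0.5), (rigged, 0.5)])
print(mixed["heads"])  # ≈ 0.7, matching the coin example
```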

It’s not hard to write down a formula for how T should work in general, so I’ll leave that as a little exercise. The challenging part is this:

        When I say X is ‘a space’ I’m being pretty vague. We need X to be a measurable space to define measures on it. That’s straightforward enough… but then we need P X to also be a measurable space, so we can define P(P X).

So, we need to find some class of measurable spaces X such that P X is again a measurable space. And ideally we’d like P X to again be a measurable space in the same class! That would let us go hog-wild and define not just P(P X) but also P(P(P X)) and so on. Believe it or not, these things are useful too!

Mathematicians have found a nice answer to this puzzle, though perhaps not the ultimate ideal answer: it’s to use ‘standard Borel spaces’. This is a class of measurable spaces that includes all the ones you’d ever care about (unless you’re really insane!), and has the property that if X is a standard Borel space, so is P X. Even better, there’s a nice complete classification of standard Borel spaces: two standard Borel spaces are isomorphic iff they have the same cardinality. This is a theorem due to Kuratowski.

        So, it turns out we get a functor P sending standard Borel spaces to standard Borel spaces, and a monad T : P^2 \Rightarrow P, meaning that for each standard Borel space X we get a map

        T: P(P X) \to P X

as desired, and this map is incredibly well-behaved. Now is not the time for me to explain monads; I’ve done it before in This Week’s Finds. But anyway, this monad is sometimes called the Giry monad… though often people use that term to mean something very slightly different, where instead of probability measures we use measures whose integral is less than or equal to 1 (called sub-probability measures).

        So, what’s a standard Borel space? I explained that in week272, but I’ll say it again:

For starters, it’s a kind of measurable space, meaning a space equipped with a collection of subsets that’s closed under countable intersections, countable unions and complements. Such a collection is called a sigma-algebra and we call the sets in the collection measurable. A measure on a measurable space assigns a number between 0 and +∞ to each measurable set, in such a way that for any countable disjoint union of measurable sets, the measure of their union is the sum of their measures.

        A nice way to build a measurable space is to start with a topological space. Then you take its open sets and keep taking countable intersections, countable unions and complements until you get a sigma-algebra. This may take a long time, but if you believe in transfinite induction you’re bound to eventually succeed. The sets in this sigma-algebra are called Borel sets.

        A basic result in real analysis is that if you put the usual topology on the real line, and use this to cook up a sigma-algebra as just described, there’s a unique measure on the resulting measurable space that assigns to each interval its usual length. This is called Lebesgue measure.

        Some topological spaces are big and nasty. But separable complete metric spaces are not so bad.

        We don’t care about the metric in this game. So, we use the term Polish space for a topological space that’s homeomorphic to a complete separable metric space. It was the Poles, like Kuratowski, who first got into this stuff.

And often we don’t even care about the topology. So, we use the term standard Borel space for a measurable space whose measurable sets are the Borel sets for some topology making it into a Polish space.

        In short: every complete separable metric space has a Polish space as its underlying topological space, and every Polish space has a standard Borel space as its underlying measurable space.

Now, it’s hopeless to classify complete separable metric spaces. It’s even hopeless to classify Polish spaces. But it’s not hopeless to classify standard Borel spaces! The reason is that metric spaces are like diamonds: you can’t bend or stretch them at all without breaking them entirely. But topological spaces are like rubber… and measurable spaces are like dust. So, it’s very hard for two metric spaces to be isomorphic, but it’s easier for their underlying topological spaces, and even easier for their underlying measurable spaces.

        For example, the line and plane are isomorphic, if we use their usual sigma-algebras of Borel sets to make them into measurable spaces! And the plane is isomorphic to \mathbb{R}^n for every n, and all these are isomorphic to a separable Hilbert space! As measurable spaces, that is.

        In fact, every standard Borel space is isomorphic to one of these:

        • a countable set with its sigma-algebra of all subsets,

        • the real line with its sigma-algebra of Borel subsets.

That’s pretty amazing. It means that standard Borel spaces are classified by just their cardinality, which can only be finite, countably infinite, or the cardinality of the continuum. The “continuum hypothesis” says there’s no cardinality between the countably infinite one and the cardinality of the continuum, but we don’t need the continuum hypothesis to prove this result.

  9. John Baez says:

    Here’s some of the conversation about this blog entry from Google+

    Allen Knutson wrote:

    Have you looked into Erik Verlinde’s entropic forces paper? I don’t think it’s the same thing, but it must be mentioned here.

    Dan Piponi wrote:

    I’d still like to better understand the appearance of the Legendre transform.

    The Legendre transform is analogous to the Fourier transform. We use the Fourier transform when we want to write a function as a sum of pieces that transform by multiplication when translated (i.e. exponentials). We can use the Legendre transform when we want to write a convex function as the max of a bunch of functions that transform by addition when translated (i.e. linear functions). So why does it appear here?

    In classical mechanics the Legendre transform appears as the limit of Fourier transforms that arise in quantum mechanics. I think, but I’m not sure, that the Legendre transforms in thermodynamics arise from the limit of Laplace transforms that come from convolving probability distributions (along the lines of something I G+’ed a while back).
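Dan’s description of the Legendre transform as a max over linear functions can be tested numerically: for a convex f, the transform is f*(p) = max_x (p x - f(x)). A toy sketch (my own code, purely illustrative) using f(x) = x^2/2, whose transform is p^2/2:

```python
# Numerically approximate the Legendre transform
#   f*(p) = max over x of (p*x - f(x))
# by taking the max over a grid of x values.

def legendre(f, p, xs):
    return max(p * x - f(x) for x in xs)

xs = [i / 1000.0 for i in range(-5000, 5001)]   # grid on [-5, 5]
f = lambda x: 0.5 * x * x                       # f(x) = x^2 / 2

for p in (-2.0, 0.0, 1.0, 3.0):
    print(p, legendre(f, p, xs), 0.5 * p * p)   # approximation vs p^2/2
```

The maximizing x is x = p here, so as long as p lies well inside the grid the approximation is essentially exact.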

    Stephen Lavelle wrote:

    Just goes to show how fundamentally unimaginative humans really are ;)

    John Baez wrote:

    Allen Knutson wrote:

Have you looked into Erik Verlinde’s entropic forces paper? I don’t think it’s the same thing, but it must be mentioned here.

    I could never make much sense of that paper, but while writing my next blog article the phrase ‘entropic force’ did come to mind, because I’ll talk about the principle of maximum entropy S in thermostatics versus the principle of minimum potential energy V in classical statics, and dV = -F plays a similar role to the role played by dS in this blog article. So yeah, something may be going on here.

    John Baez wrote:

    Dan Piponi: I understand what people are talking about when they say the Legendre transform is ‘T → 0 limit’ of the Laplace transform; James Dolan explained this stuff to me, and I used to talk about it a lot with David Corfield on the n-Cafe.

    In a nutshell, it all boils down to how the ‘tropical rig’, namely the numbers from 0 to +infinity with min as addition and + as multiplication, can be seen as a T → 0 limit of the usual ring of real numbers, where we conjugate the usual operations by a T-dependent function. We can formulate thermostatics as matrix mechanics using the real numbers; then classical statics arises in the T → 0 limit. Similarly, after a Wick rotation, we can see classical mechanics as an h → 0 limit of quantum mechanics. In thermostatics the Laplace transform reigns supreme; in quantum mechanics this becomes the Fourier transform, and in the T → 0 or h → 0 limit these reduce to the Legendre transform.
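The deformation described here can be made concrete in a few lines: conjugating ordinary addition by x → e^{-x/T} gives a T-dependent ‘sum’ that tends to min(x, y) as T → 0, while ordinary multiplication of the exponentials is just x + y. A toy numerical sketch (my own, not from the discussion):

```python
import math

# Conjugating + by x -> exp(-x/T) deforms it into
#   x (+)_T y = -T * log(exp(-x/T) + exp(-y/T)),
# which tends to min(x, y) as T -> 0; meanwhile
# exp(-x/T) * exp(-y/T) = exp(-(x+y)/T), i.e. multiplication
# stays x + y.  This is the tropical rig as a T -> 0 limit.

def tropical_sum(x, y, T):
    return -T * math.log(math.exp(-x / T) + math.exp(-y / T))

for T in (1.0, 0.1, 0.01):
    print(T, tropical_sum(2.0, 3.0, T))  # approaches min(2, 3) = 2
```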

    I’m not really explaining anything, just sketching the ideas. However, this circle of ideas sits at an odd angle to what I’m thinking about now: the role of the Legendre transform in thermostatics! I think that’s what’s puzzling you, and I think that’s what’s puzzling David Corfield in his comment here:

    It’s puzzling me too! I think this is hinting at yet another layer of depth in how all these ideas fit together – one I’m not quite ready to tackle yet, but one I’d better not forget.

    By the way, it’s absurd that a T → 0 limit is called ‘tropical’. It should be called ‘arctic’ or something. It got this silly name in honor of the Brazilian mathematician Imre Simon:

    This stupid article does not explain how you can see the tropical rig as a limit of the real numbers. It thus sheds little light on what’s going on.

    Jim Walters wrote:

    Perhaps along these lines?

    Nathan Reed wrote:

    John Baez: if that Wikipedia article is missing something important, why not add it? :)

    Cliff Harvey wrote:

By the way, on the subject of the Legendre transform, here’s a nice document I came across a while back, which includes a discussion of the applications to thermodynamics (though it may be a bit basic if you’re a pro): “Making Sense of the Legendre Transform”.

    Kazimierz Kurz wrote:

But what you are writing is an old, known fact. I read about it more than 20 years ago in Jamiołkowski, Ingarden, Mrugała, “Fizyka statystyczna i termodynamika” (“Statistical Physics and Thermodynamics”, in Polish) from 1990. They explain how to define thermodynamics in the same way as classical mechanics, using Pfaffian forms, differential geometry and contact forms. They use differential forms, Legendre manifolds and even “thermodynamical brackets”, which correspond to the Poisson brackets of classical Hamiltonian dynamics. It is quite a popular book in Poland, although explaining thermodynamics that way is not very common. I do not know if this book has an English edition. Unfortunately Google Books only shows the cover of this book, and you cannot look inside.

    One of the authors – Mrugała – wrote about such formalism, see here:
    and here:

    Also this may be interesting:

I do not have a subscription to these sites, so I do not know what is inside, but the author and title may point to something interesting here.

    Dan Piponi wrote:

    Cliff Harvey: I’ve read that article, but it really only convinced me that there’s still a need for someone to write an article that makes sense of the appearance of the Legendre transform in physics. I’ve a hunch John Baez is the man for the job!

    Ekaropolus Van Gor wrote:

There is a “new” formalism made by Hernando Quevedo in which they use a Riemannian metric and identify the thermodynamic interaction with curvature on a manifold. Here is the article:

    • Hernando Quevedo and María N. Quevedo, Fundamentals of geometrothermodynamics.

    I am doing my bachelor degree on studying thermodynamic phase space.

    John Baez wrote:

    Nathan Reed wrote:

    If that Wikipedia article is missing something important, why not add it?

    Maybe I will. I sometimes do. It could easily slip into a full-time job for me. If I could make duplicate copies of myself, one of them would definitely enjoy doing this all day.

    John Baez wrote:

Kazimierz Kurz – Thanks! It indeed seemed likely that someone must have discovered this already – it’s too easy, and too important. That’s why I asked if someone knew about it. But I’d never encountered it in all my studies. If it were only available in an out-of-print book in Polish, that might explain this. But Blake Stacey pointed out that it’s also explained here:

    • M. J. Peterson, Analogy between thermodynamics and mechanics, American Journal of Physics 47 (1979), 488.

    Abstract: We note that equations of state—by which we mean identical relations among the thermodynamic variables characterizing a system—are actually first‐order partial differential equations for a function which defines the thermodynamics of the system. Like the Hamilton‐Jacobi equation, such equations can be solved along trajectories given by Hamilton’s equations, the trajectories being quasistatic processes which obey the given equation of state. This gives rise to the notion of thermodynamic functions as infinitesimal generators of quasistatic processes, with a natural Poisson bracket formulation. This formulation of thermodynamic transformations is invariant under canonical coordinate transformations, just as classical mechanics is, which is to say that thermodynamics and classical mechanics have the same formal structure, namely a symplectic structure.

    I haven’t read this article yet, but I’ll be interested to see if it refers to Jamiołkowski, Ingarden, Mrugała’s Fizyka Statystyczna i Termodynamika or any of the other papers you mention.

    John Baez wrote:

    Cliff Harvey – thanks! I read and enjoyed “Making sense of the Legendre transform” and talked about it in “week289” back when I was starting to become obsessed with these issues:

    If you read this you’ll see a big analogy between lots of different theories that have a pair of variables analogous to “p” and “q” in classical mechanics. And you’ll see this plea:

    I’ve spent decades thinking about the Legendre transform in the context of classical mechanics, but not so much in thermodynamics. I think its appearance in both subjects should be a big part of the analogy I’m talking about here. But if anyone knows a clear, detailed treatment of the analogy between classical mechanics and thermodynamics, focusing on the Legendre transform, please let me know!

Unfortunately nobody pointed out a reference back then, so I had to figure it out myself! But it was fun to do, so I don’t really mind.

    Bernard Beard wrote:

    Ever study finance, John? It made a lot more sense to me once I understood that the net present value of a future cash flow is simply one particular value of the Laplace transform of that cash flow, the value corresponding to the selected discount rate.

    John Baez wrote:

    Bernard Beard – Yes, that’s a great example. I think my friend James Dolan pointed this out to me! For a long time the Laplace transform just seemed like a low-budget version of the Fourier transform for people who didn’t understand complex numbers (there are lots of problems you can solve using either one), but then I started meeting examples where it’s clearly the right thing to think about.
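Bernard’s observation is easy to check in a toy case (my own sketch, with made-up numbers): with continuous discounting at rate r, NPV = \int_0^\infty c(t) e^{-rt} dt, which is exactly the Laplace transform of the cash flow c(t) evaluated at s = r. For a constant stream c(t) = 1 the transform is 1/s, so the NPV should come out to 1/r.

```python
import math

# Crude trapezoidal approximation of the discounting integral
#   NPV = integral from 0 to infinity of c(t) * exp(-r*t) dt,
# truncated at a horizon where the discount factor is negligible.

def npv(c, r, horizon=400.0, dt=0.01):
    n = int(horizon / dt)
    total = 0.0
    for i in range(n):
        t = i * dt
        total += 0.5 * (c(t) * math.exp(-r * t)
                        + c(t + dt) * math.exp(-r * (t + dt))) * dt
    return total

r = 0.05
print(npv(lambda t: 1.0, r), 1.0 / r)  # both close to 20
```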

  10. An ongoing discussion of non-equilibrium thermodynamics and maximum entropy at the Climate
I was trying to defend the concept of maximum entropy but the pushback I keep getting is that it doesn’t produce any new insight that you can’t get through conventional stat mech.

    An upcoming topic there is Gell-Mann’s concept of complexity/simplicity referred to as plectics. This is interesting because it may fit into the Occam’s razor bucket of parsimony and is the root term in symplectic geometry.

    The climate scientists are interested in this general topic because it could help make headway into complex general circulation models (GCM’s).

  11. Matt Parry says:

    V.I. Arnold once wrote something along the lines of: the reason why thermodynamics is hard is that it is naturally formulated on an odd-dimensional phase space. I.e. contact geometry as opposed to symplectic geometry. This makes me a bit suspicious of such a direct analogy between thermodynamics and classical mechanics.

On the other hand, quantum statistical mechanics does seem to be relatable to classical mechanics. Since there is always an irrelevant overall complex phase in QM, it is natural to consider the projective Hilbert space. Brody and Hughston showed that the projective Hilbert space is a symplectic manifold. The paper is “Geometrization of statistical mechanics”, Proc. R. Soc. Lond. A (1999) 455, 1683–1715.

    • John Baez says:

      Matt wrote:

      V.I. Arnold once wrote something along the lines of: the reason why thermodynamics is hard is that it is naturally formulated on an odd-dimensional phase space. I.e. contact geometry as opposed to symplectic geometry.

Well, I like contact geometry too, and I’d be very interested to see what Arnol’d was actually talking about. What’s the simplest example he gives of an odd-dimensional phase space in thermodynamics?

As you can see from my post, thermodynamics works quite nicely on an even-dimensional phase space. But that doesn’t necessarily conflict with the odd-dimensional approach. Indeed, classical mechanics can be formulated nicely on either an even-dimensional symplectic manifold or an odd-dimensional contact manifold. So, we can use the analogy between classical mechanics and thermodynamics, described here, to cook up an odd-dimensional phase space for thermodynamics. In the example I’m talking about in this article, I think that would have S, T, V, P and U as coordinates.

      Here’s another nice paper emphasizing the fact that a projectivized Hilbert space is a symplectic manifold:

• Abhay Ashtekar and Troy Schilling, Geometrical formulation of quantum mechanics.

Abstract: States of a quantum mechanical system are represented by rays in a complex Hilbert space. The space of rays has, naturally, the structure of a Kähler manifold. This leads to a geometrical formulation of the postulates of quantum mechanics which, although equivalent to the standard algebraic formulation, has a very different appearance. In particular, states are now represented by points of a symplectic manifold (which happens to have, in addition, a compatible Riemannian metric), observables are represented by certain real-valued functions on this space and the Schrödinger evolution is captured by the symplectic flow generated by a Hamiltonian function. There is thus a remarkable similarity with the standard symplectic formulation of classical mechanics. Features—such as uncertainties and state vector reductions—which are specific to quantum mechanics can also be formulated geometrically but now refer to the Riemannian metric—a structure which is absent in classical mechanics. The geometrical formulation sheds considerable light on a number of issues such as the second quantization procedure, the role of coherent states in semi-classical considerations and the WKB approximation. More importantly, it suggests generalizations of quantum mechanics. The simplest among these are equivalent to the dynamical generalizations that have appeared in the literature. The geometrical reformulation provides a unified framework to discuss these and to correct a misconception. Finally, it also suggests directions in which more radical generalizations may be found.

      • John Baez says:

        Above we see Dan Piponi wondering about the Legendre transform in classical statics versus the Laplace transform in thermal statics, and me responding that the former is a T \to 0 limit of the latter. If we formally make the temperature T imaginary we get quantum mechanics and the Fourier transform.

        Dan has now explained some of these ideas in detail here:

        • Dan Piponi, Some parallels between classical and quantum mechanics, A Neighborhood of Infinity, 21 January 2012.

        He concludes as follows:

I doubt I’ve said anything original here. Classical mechanics is well known to be the \hbar \to 0 limit of quantum mechanics, and it’s well known that in this limit we find that occurrences of the semiring (\mathbb{R}, +, \times) are replaced by the semiring (\mathbb{R}, \mathrm{min}, +). But I’ve never seen an article that attempts to describe classical mechanics in terms of repeated inf-convolution even though this is close to Hamilton’s formulation and I’ve never seen an article that shows the parallel with the Schrödinger equation in this way. I’m hoping someone will now be able to say to me “I’ve seen that before” and post a relevant link below.

        It’s really in thermal statics that we can rigorously understand how the semiring (\mathbb{R}, +, \times) arises from a 1-parameter deformation of the semiring (\mathbb{R}, \mathrm{min}, +). The parameter is temperature, T. A further ‘Wick rotation’ gives us quantum mechanics with

        \hbar = i T

I’m not sure these free online books contain exactly what Dan wants, but it’s probably buried in them somewhere:

        • Grigory L. Litvinov, Victor Maslov, and Sergei Sergeev, Idempotent and tropical mathematics and problems of mathematical physics (Volume I).

        • Grigory L. Litvinov, Victor Maslov, and Sergei Sergeev, Idempotent and tropical mathematics and problems of mathematical physics (Volume II).

        These were the proceedings of the International Workshop on Idempotent and Tropical Mathematics and Problems of Mathematical Physics, held at the Independent University of Moscow, Russia in 2007. Litvinov has also written other good review articles, like this:

        • Grigory L. Litvinov, The Maslov dequantization, idempotent and tropical mathematics: A brief introduction.

It’s good to look for papers containing the buzzword ‘idempotent analysis’. This comes from the fact that addition in the semiring (\mathbb{R}, \mathrm{min}, +) is idempotent:

        x \; \mathrm{min} \; x = x
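As a footnote to Dan Piponi’s remark about repeated inf-convolution quoted above: in the (min, +) semiring the analogue of ordinary convolution, (f * g)[m] = sum over k of f[k] g[m-k], replaces the sum by min and the product by +. A toy discrete sketch (my own illustration):

```python
# Inf-convolution in the (min, +) semiring:
#   (f # g)[m] = min over k of (f[k] + g[m-k]),
# structurally identical to ordinary discrete convolution with
# (sum, product) replaced by (min, +).

def inf_convolve(f, g):
    n = len(f) + len(g) - 1
    return [min(f[k] + g[m - k]
                for k in range(len(f))
                if 0 <= m - k < len(g))
            for m in range(n)]

f = [0, 2, 5]
g = [1, 1, 4]
print(inf_convolve(f, g))  # [1, 1, 3, 6, 9]
```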

      • Matt Parry says:

Thank you John for the link to Ashtekar and Schilling’s work. This is a much more complete statement of what I was trying to say.

        As for the Arnol’d quote…it was from memory and I slightly embellished it, but it can be found in his contribution to the Gibbs Symposium in 1989:

        My understanding has always been that in thermodynamics the macroscopic quantities come in pairs — one extensive and one intensive — with one singleton extensive quantity left over. For example the pairs are typically taken to be (energy, temperature), (volume, pressure), (particle number, chemical potential), etc. In this description, entropy is the odd one out and the goal of thermodynamics is to write down S=S(U,V,N,\ldots).

(It is tempting to imagine that time t is conjugate to S — second law, anyone?! — but as you point out we are really doing thermostatics not thermodynamics!)

        So I guess I don’t see that odd-dimensional phase spaces need to be cooked up. On the other hand, I never knew that classical mechanics could be formulated on contact manifolds as well!

  12. Eric says:

    I took a course at UIUC that taught thermodynamics using differential forms. The course covered various applications of differential forms/differential geometry and thermodynamics was just one of many. I lost my notes and don’t remember the visiting professor’s name, but I remember how excited he was. The presentation was similar to what you have here.

    • John Baez says:

      I’ve seen lots of presentations of thermodynamics using differential forms. I’d never seen the analogy to classical mechanics exploited to bring symplectic geometry (or if you like, Poisson brackets) into play in thermodynamics. But you can find it in Peterson’s paper, and also maybe Jamiołkowski, Ingarden, and Mrugała’s book Fizyka Statystyczna i Termodynamika.

      Did your professor talk about Poisson brackets in thermodynamics? I’m curious how well-known these are.

      • Eric says:

        Did your professor talk about Poisson brackets in thermodynamics?

        Yes. I believe he did, but it was long ago and this stuff was pretty new to me back then. The visiting professor was Eastern European (I believe) and I wouldn’t be surprised if his reference was the latter one you mention.

  13. I showed you last time that in many branches of physics—including classical mechanics and thermodynamics—we can see our task as minimizing or maximizing some function. Today I want to show how we get from that task to symplectic geometry.

  14. Thomas says:

    Maybe this could be interesting:
I don’t know if there is an English translation.

  15. Hello Azimuth, I love your article on the deep connections between Hamilton’s equations of classical mechanics and the Maxwell relations of thermodynamics!! Is this in a book somewhere? (I don’t have a printer.) Best regards, Michael Martin, MD. (Just an MD, not a PhD!)

  16. ishi says:

this is an interesting post—i saw it months ago. i collect papers more or less showing equivalences between classical mechanics and thermodynamics/nonequilibrium stat mech. (i hadn’t even considered Maxwell’s electrodynamics, except that there appear to be at least two Lagrangians that will get you there.)
there is the well-known attempt via Fisher information (reviewed critically by Streater) and many other more current ones (some just qualitative—e.g. i noticed the ‘gravity law’ in economics was derived via stat mech by a Wilson in the 60s).
i guess i first saw this type of thing in the Onsager–Machlup functional—you can get a principle of least action/Hamilton–Jacobi equation for diffusion (i.e. a conservative process for what is usually thought of as a nonconservative one).
(there is a lot of this stuff out there—it has even now gotten into Proc. Roy. Soc. London, by several authors who often don’t cite each other.)
i can’t really figure out if all the approaches are the same (one can even go into E. Nelson’s approach to quantum theory).
(my niece is in high school in Singapore, as an aside.)

  17. I was surprised to rediscover that the Maxwell relations in thermodynamics are formally identical to Hamilton’s equations in classical mechanics—though in retrospect it’s obvious. Thermodynamics obeys the principle of maximum entropy, while classical mechanics obeys the principle of least action. Wherever there’s an extremal principle, symplectic geometry, and equations like Hamilton’s equations, are sure to follow.

  18. […] As always physics rules when it comes to defining natural behavior, and so we start by looking at energy balance. Consider a Gibbs energy formulation as a variational approach: […]

  19. […] original post on the CSALT model gave a flavor of the variational approach that I used — essentially solving for a thermostatic free energy minimum with respect to the […]

  20. I am not clear on something. The derivations in the post are completely general, but also completely formal.

It seems that any sufficiently smooth relation

    Z = f(X, Y)

    can be expressed as

    dZ = AdX + BdY

with A = \partial_X Z and B = \partial_Y Z, giving \partial_Y A = \partial_X B. Defining U = Z - BY then also gives

dU = AdX - YdB

    This is all independent of the fact that H and U are both energy.

    No conservation laws or variational principles are used in the derivations. What is the significance of the connection between CM and TM?
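The formal symmetry the comment derives, \partial_Y A = \partial_X B, is just equality of mixed partials, and can be checked numerically for any smooth Z = f(X, Y). A quick finite-difference sketch (my own code; the function f below is an arbitrary smooth example, nothing physical):

```python
import math

# Central-difference check of the Maxwell-type symmetry: with
# A = dZ/dX and B = dZ/dY, equality of mixed partials gives
# dA/dY = dB/dX for any smooth Z = f(X, Y).

def f(x, y):
    return x * x * y + math.sin(x * y)

h = 1e-4

def A(x, y):  # dZ/dX by central difference
    return (f(x + h, y) - f(x - h, y)) / (2 * h)

def B(x, y):  # dZ/dY by central difference
    return (f(x, y + h) - f(x, y - h)) / (2 * h)

x0, y0 = 0.7, 1.3
dA_dY = (A(x0, y0 + h) - A(x0, y0 - h)) / (2 * h)
dB_dX = (B(x0 + h, y0) - B(x0 - h, y0)) / (2 * h)
print(dA_dY, dB_dX)  # should agree to several digits
```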

    • John Baez says:

Variational principles are implicit in what I’m doing, even though they’re not required for the calculations you wrote down. All the mathematical structures in this post—and even more, as described in Part 2—show up whenever we try to extremize a smooth function of several variables. In classical mechanics we’re minimizing Hamilton’s principal function (basically the action); in thermodynamics we’re maximizing entropy.

      Given this, the interesting puzzle—to me, anyway—is the relation between the principle of least action and the principle of maximum entropy. That’s what Blake Pollard and I investigated in our paper on quantropy. There’s still more to understand, though!

Btw, is there a way for me to fix my own mess or get a preview?

    • John Baez says:

      Sorry, no preview available here. The bug in your post was a subtle one (after you fixed the obvious ones). I fixed the problem by retyping the letters in your LaTeX comments that didn’t parse. Sometimes this happens when people cut-and-paste text that’s in a strange font.

  22. A formalism for contact geometry and relation to thermodynamics:

    • A. Bravetti, C. S. Lopez-Monsalvo and F. Nettel, Contact symmetries and Hamiltonian thermodynamics.

    • John Baez says:

      Thanks! It’s interesting how they seem to be claiming to get a metric on their contact manifold using the Fisher information metric. At least that’s how I interpret the first sentence of their abstract:

      It has been shown that contact geometry is the proper framework underlying classical thermodynamics and that thermodynamic fluctuations are captured by an additional metric structure related to Fisher’s Information Matrix. In this work we analyze several unaddressed aspects about the application of contact and metric geometry to thermodynamics. We consider here the Thermodynamic Phase Space and start by investigating the role of gauge transformations and Legendre symmetries for metric contact manifolds and their significance in thermodynamics. Then we present a novel mathematical characterization of first order phase transitions as equilibrium processes on the Thermodynamic Phase Space for which the Legendre symmetry is broken. Moreover, we use contact Hamiltonian dynamics to represent thermodynamic processes in a way that resembles the classical Hamiltonian formulation of conservative mechanics and we show that the relevant Hamiltonian coincides with the irreversible entropy production along thermodynamic processes. Therefore, we use such property to give a geometric definition of thermodynamically admissible fluctuations according to the Second Law of thermodynamics. Finally, we show that the length of a curve describing a thermodynamic process measures its entropy production.

  23. John Denker says:

    1) We should keep in mind that classical mechanics is only an approximation to quantum mechanics, just as classical thermodynamics is only an approximation to statistical mechanics.

    The symplectic structure is even more interesting and more useful when applied to the modern (post-classical) versions. Hamilton-Jacobi theory is amusing, but it’s less useful than out-and-out QM, and not significantly simpler.

    2) Stat mech is basically the analytic continuation of QM, continued in the direction of imaginary time. This point and its ramifications are discussed in e.g. Feynman and Hibbs Quantum Mechanics and Path Integrals (1965). The classical limit of QM is obtained by the method of stationary phase, whereas the classical limit of stat mech is obtained by the method of steepest descent … so the two subjects are very nearly but not quite identical.

    Again: If we’re going to make connections, it is even more interesting and more useful to connect the modern (post-classical) versions.

    FWIW note that Planck invented QM as an outgrowth from stat mech … not directly from classical mechanics. So the connections are there, and have been since Day One.

    3) Many (but not all) of the familiar formulas of thermodynamics can usefully be translated to the language of differential forms. In many cases all that is required is a re-interpretation of the symbols, leaving the form of the formula unchanged; for instance we interpret dE = T dS – P dV as a vector equation.
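    As a quick illustration of how this pays off (a standard computation, not part of the original comment): applying d to both sides of dE = T dS – P dV and using d^2 = 0 gives

    \displaystyle{ 0 \;=\; d(dE) \;=\; dT \wedge dS \;-\; dP \wedge dV }

    and expanding dT and dP in the coordinates (S, V) and comparing coefficients of dV \wedge dS yields precisely the first Maxwell relation from the post,

    \displaystyle{ \left. \frac{\partial T}{\partial V}\right|_S \;=\; - \left. \frac{\partial P}{\partial S}\right|_V }

    so in this language the Maxwell relations are just the statement that d^2 E = 0.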

    I say “not all” formulas because more than a few of the formulas you see in typical thermodynamics books are nonsense. This includes (almost) all expressions involving “dQ” or “dW”. Such things simply do not exist (except in trivial cases). Daniel Schroeder in An Introduction to Thermal Physics (1999) rightly calls them a crime against the laws of mathematics. With a modicum of self-discipline it is straightforward to do thermodynamics without committing such crimes.

    Differential forms make thermodynamics simpler and more visually intuitive … and simultaneously more sophisticated, more powerful, and more correct.
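    As a sanity check of this machinery, one can verify a Maxwell relation symbolically. The energy function below is an arbitrary toy example of my own choosing (not from the comment or the post); the relation holds for any smooth E(S, V) simply by equality of mixed partial derivatives:

    ```python
    import sympy as sp

    S, V = sp.symbols('S V', positive=True)

    # A toy energy function E(S, V) -- any smooth function would do here.
    E = S**2 * V**sp.Rational(-2, 3)

    # Thermodynamic definitions: T = dE/dS at constant V, P = -dE/dV at constant S.
    T = sp.diff(E, S)
    P = -sp.diff(E, V)

    # First Maxwell relation: dT/dV |_S  =  -dP/dS |_V.
    # Both sides equal the mixed second derivative of E, so the difference is 0.
    lhs = sp.diff(T, V)
    rhs = -sp.diff(P, S)
    print(sp.simplify(lhs - rhs))  # 0
    ```

    The point is that nothing about the particular choice of E mattered: the relation is an identity of second derivatives, which is the coordinate version of d^2 E = 0.
    
    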

    Although there are fat books on the subject of differential topology, only the tiniest fraction of that is necessary for present purposes. An introductory discussion (including pictures) can be found at and the application to thermodynamics is worked out in some detail at

    • Boris Borcic says:

      Isn’t the contrast between deriving QM from stat mech and deriving it from classical mechanics sharper than presented here? If you start from the stat mech side, isn’t there a sense in which the other view misleads us about the materiality or definiteness of the wave function outside any context of identically prepared systems?
