There’s a fascinating analogy between classical mechanics and thermodynamics, which I last talked about in 2012:
• Classical mechanics versus thermodynamics (part 1).
• Classical mechanics versus thermodynamics (part 2).
I’ve figured out more about it, and today I’m giving a talk about it in the physics colloquium at the University of British Columbia. It’s a colloquium talk that’s supposed to be accessible to upper-level undergraduates, so I’ll spend a lot of time reviewing the basics… which is good, I think.
You can see my slides here, and I’ll base this blog article on them. You can also watch a video of my talk:
Hamilton’s equations versus the Maxwell relations
Why do Hamilton’s equations in classical mechanics:

$$\frac{dp}{dt} = -\frac{\partial H}{\partial q}, \qquad \frac{dq}{dt} = \frac{\partial H}{\partial p}$$

look so much like the Maxwell relations in thermodynamics?

$$\left.\frac{\partial T}{\partial V}\right|_S = -\left.\frac{\partial P}{\partial S}\right|_V, \qquad \left.\frac{\partial S}{\partial V}\right|_T = \left.\frac{\partial P}{\partial T}\right|_V$$

William Rowan Hamilton discovered his equations describing classical mechanics in terms of energy around 1827. By 1834 he had also introduced Hamilton’s principal function, which I’ll explain later.

James Clerk Maxwell is most famous for his equations describing electromagnetism, perfected in 1865. But he also worked on thermodynamics, and discovered the ‘Maxwell relations’ in 1871.
Hamilton’s equations describe how the position $q$ and momentum $p$ of a particle on a line change with time $t$ if we know the energy or Hamiltonian $H(q,p)$:

$$\frac{dp}{dt} = -\frac{\partial H}{\partial q}, \qquad \frac{dq}{dt} = \frac{\partial H}{\partial p}$$
Two of the Maxwell relations connect the volume $V$, entropy $S$, pressure $P$ and temperature $T$ of a system in thermodynamic equilibrium:

$$\left.\frac{\partial T}{\partial V}\right|_S = -\left.\frac{\partial P}{\partial S}\right|_V, \qquad \left.\frac{\partial S}{\partial V}\right|_T = \left.\frac{\partial P}{\partial T}\right|_V$$
Using this change of variables:

$$q \mapsto S, \qquad p \mapsto T, \qquad t \mapsto V, \qquad H \mapsto P$$

Hamilton’s equations:

$$\frac{dp}{dt} = -\frac{\partial H}{\partial q}, \qquad \frac{dq}{dt} = \frac{\partial H}{\partial p}$$

become these relations:

$$\frac{dT}{dV} = -\frac{\partial P}{\partial S}, \qquad \frac{dS}{dV} = \frac{\partial P}{\partial T}$$

These are almost like two of the Maxwell relations! But in thermodynamics we always use partial derivatives:

$$\frac{\partial T}{\partial V} = -\frac{\partial P}{\partial S}, \qquad \frac{\partial S}{\partial V} = \frac{\partial P}{\partial T}$$

and we say which variables are held constant:

$$\left.\frac{\partial T}{\partial V}\right|_S = -\left.\frac{\partial P}{\partial S}\right|_V, \qquad \left.\frac{\partial S}{\partial V}\right|_T = \left.\frac{\partial P}{\partial T}\right|_V$$

If we write Hamilton’s equations in the same style as the Maxwell relations, they look funny:

$$\left.\frac{\partial p}{\partial t}\right|_q = -\left.\frac{\partial H}{\partial q}\right|_t, \qquad \left.\frac{\partial q}{\partial t}\right|_p = \left.\frac{\partial H}{\partial p}\right|_t$$
Can this possibly be right?
Yes! When we work out the analogy between classical mechanics and thermodynamics we’ll see why.
We can get Maxwell’s relations starting from this: the internal energy $U$ of a system in equilibrium depends on its entropy $S$ and volume $V$. Temperature $T$ and pressure $P$ are derivatives of $U$:

$$T = \left.\frac{\partial U}{\partial S}\right|_V, \qquad P = -\left.\frac{\partial U}{\partial V}\right|_S$$
Maxwell’s relations follow from the fact that mixed partial derivatives commute! For example:

$$\frac{\partial}{\partial V}\frac{\partial U}{\partial S} = \frac{\partial}{\partial S}\frac{\partial U}{\partial V} \quad \implies \quad \left.\frac{\partial T}{\partial V}\right|_S = -\left.\frac{\partial P}{\partial S}\right|_V$$
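Here’s a quick sanity check you can run in sympy (not in the talk, just a sketch). It assumes an explicit internal energy of ideal-gas form; any smooth $U$ would do:

```python
import sympy as sp

# A minimal check with an explicit internal energy function. For one mole of
# an ideal gas (constants set to 1 where harmless), inverting
# S = c_v ln U + R ln V gives U(S, V) = exp(S/c_v) * V**(-R/c_v).
S, V = sp.symbols('S V', positive=True)
R, cv = sp.symbols('R c_v', positive=True)

U = sp.exp(S / cv) * V**(-R / cv)   # internal energy U(S, V)
T = sp.diff(U, S)                   # temperature  T =  dU/dS at constant V
P = -sp.diff(U, V)                  # pressure     P = -dU/dV at constant S

# Mixed partials of U commute, so dT/dV must equal -dP/dS:
assert sp.simplify(sp.diff(T, V) + sp.diff(P, S)) == 0

# Sanity check: this U really describes an ideal gas, P V = R T.
assert sp.simplify(P * V - R * T) == 0
print("Maxwell relation dT/dV|_S = -dP/dS|_V verified")
```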
To get Hamilton’s equations the same way, we need a function $W$ of the particle’s position $q$ and time $t$ such that

$$p = \left.\frac{\partial W}{\partial q}\right|_t, \qquad H = -\left.\frac{\partial W}{\partial t}\right|_q$$

Then we’ll get Hamilton’s equations from the fact that mixed partial derivatives commute!
The trick is to let $W$ be ‘Hamilton’s principal function’. So let’s define that. First, the action of a particle’s path $q \colon [t_0, t_1] \to \mathbb{R}$ is

$$\int_{t_0}^{t_1} L(q(t), \dot{q}(t))\, dt$$

where $L$ is the Lagrangian:

$$L(q, \dot{q}) = \frac{m \dot{q}^2}{2} - V(q)$$
The particle always takes a path from $(q_0, t_0)$ to $(q_1, t_1)$ that’s a critical point of the action. We can derive Hamilton’s equations from this fact.

Let’s assume this critical point is a minimum. Then the least action for any path from $(q_0, t_0)$ to $(q_1, t_1)$ is called Hamilton’s principal function

$$W(q_0, t_0; q_1, t_1)$$
A beautiful fact: if we differentiate Hamilton’s principal function, we get back the energy and momentum at the start and end of the path:

$$\frac{\partial W}{\partial q_0} = -p_0, \qquad \frac{\partial W}{\partial t_0} = H_0, \qquad \frac{\partial W}{\partial q_1} = p_1, \qquad \frac{\partial W}{\partial t_1} = -H_1$$

You can prove these equations using

$$L = p \dot{q} - H$$

which implies that

$$W = \int (p\, dq - H\, dt)$$

where we integrate along the minimizing path. (It’s not as trivial as it may look, but you can do it.)
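As a sanity check, here’s the one case where everything is explicit: the free particle, whose minimizing path is a straight line, so that $W(q_0, t_0; q_1, t_1) = \frac{m(q_1 - q_0)^2}{2(t_1 - t_0)}$. A short sympy verification of all four derivative formulas:

```python
import sympy as sp

# Free particle: momentum and energy are constant along the straight path.
m, q0, t0, q1, t1 = sp.symbols('m q0 t0 q1 t1')

W = m * (q1 - q0)**2 / (2 * (t1 - t0))   # principal function

p = m * (q1 - q0) / (t1 - t0)            # momentum along the path
H = p**2 / (2 * m)                       # energy along the path

assert sp.simplify(sp.diff(W, q1) - p) == 0   # dW/dq1 =  p1
assert sp.simplify(sp.diff(W, t1) + H) == 0   # dW/dt1 = -H1
assert sp.simplify(sp.diff(W, q0) + p) == 0   # dW/dq0 = -p0
assert sp.simplify(sp.diff(W, t0) - H) == 0   # dW/dt0 =  H0
print("free-particle principal function checks out")
```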
Now let’s fix a starting-point $(q_0, t_0)$ for our particle, and say its path ends at any old point $(q, t)$. Think of Hamilton’s principal function as a function of just the endpoint:

$$W(q, t)$$

Then the particle’s momentum and energy when it reaches $(q, t)$ are:

$$p = \left.\frac{\partial W}{\partial q}\right|_t, \qquad H = -\left.\frac{\partial W}{\partial t}\right|_q$$
This is just what we wanted. Hamilton’s equations now follow from the fact that mixed partial derivatives commute!
So, we have this analogy between classical mechanics and thermodynamics:
| Classical mechanics | Thermodynamics |
| --- | --- |
| action: $W$ | internal energy: $U$ |
| position: $q$ | entropy: $S$ |
| momentum: $p$ | temperature: $T$ |
| time: $t$ | volume: $V$ |
| energy: $H$ | pressure: $P$ |
What’s really going on in this analogy? It’s not really the match-up of variables that matters most—it’s something a bit more abstract. Let’s dig deeper.
I said we could get Maxwell’s relations from the fact that mixed partials commute, and gave one example:

$$\frac{\partial}{\partial V}\frac{\partial U}{\partial S} = \frac{\partial}{\partial S}\frac{\partial U}{\partial V} \quad \implies \quad \left.\frac{\partial T}{\partial V}\right|_S = -\left.\frac{\partial P}{\partial S}\right|_V$$
But to get the other Maxwell relations we need to differentiate other functions—and there are four of them!

• $U$: internal energy
• $U - TS$: Helmholtz free energy
• $U + PV$: enthalpy
• $U + PV - TS$: Gibbs free energy
They’re important, but memorizing all the facts about them has annoyed students of thermodynamics for over a century. Is there some other way to get the Maxwell relations? Yes!
In 1958 David Ritchie explained how we can get all four Maxwell relations from one equation! Jaynes also explained how in some unpublished notes for a book. Here’s how it works.
Start here:

$$dU = T\, dS - P\, dV$$

Integrate around a loop $\gamma$:

$$\oint_\gamma dU = 0$$

so

$$\oint_\gamma T\, dS = \oint_\gamma P\, dV$$

This says the heat added to a system equals the work it does in this cycle.

Green’s theorem implies that if a loop $\gamma$ encloses a region $R$,

$$\oint_\gamma T\, dS = \int_R dT\, dS$$

Similarly

$$\oint_\gamma P\, dV = \int_R dP\, dV$$

But we know these are equal! So, we get

$$\int_R dT\, dS = \int_R dP\, dV$$

for any region $R$ enclosed by a loop. And this in turn implies

$$dT\, dS = dP\, dV$$

In fact, all of Maxwell’s relations are hidden in this one equation!
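Here’s a small numerical illustration of the cycle identity (my own, using the explicit ideal-gas $U(S,V)$ from the sympy sketch above and an arbitrary loop of equilibrium states):

```python
import numpy as np

# Numerical check that heat in equals work out around a cycle:
# oint T dS = oint P dV for the ideal gas U(S, V) = exp(S/c_v) V**(-R/c_v).
R, cv = 1.0, 1.5

def T(S, V):            # T =  dU/dS
    return np.exp(S / cv) * V**(-R / cv) / cv

def P(S, V):            # P = -dU/dV
    return (R / cv) * np.exp(S / cv) * V**(-R / cv) / V

# An arbitrary closed loop in the (S, V) plane.
theta = np.linspace(0.0, 2.0 * np.pi, 20001)
S = 1.0 + 0.3 * np.cos(theta)
V = 2.0 + 0.5 * np.sin(theta)

# Midpoint-rule line integrals around the loop.
Sm, Vm = 0.5 * (S[1:] + S[:-1]), 0.5 * (V[1:] + V[:-1])
heat = np.sum(T(Sm, Vm) * np.diff(S))   # oint T dS
work = np.sum(P(Sm, Vm) * np.diff(V))   # oint P dV

print(heat, work)   # nonzero, and equal to numerical accuracy
```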
Mathematicians call something like $dT\, dS$ a 2-form and write it as $dT \wedge dS$. It’s an ‘oriented area element’, so

$$dT \wedge dS = -dS \wedge dT$$
Now, starting from

$$dT \wedge dS = dP \wedge dV$$

we can choose any coordinates $X, Y$ and get

$$\frac{\partial(T, S)}{\partial(X, Y)}\, dX \wedge dY = \frac{\partial(P, V)}{\partial(X, Y)}\, dX \wedge dY$$

where $\frac{\partial(T,S)}{\partial(X,Y)}$ is the Jacobian determinant. (Yes, this is mathematically allowed!)

If we take $X = T,\ Y = V$ we get

$$\frac{\partial(T, S)}{\partial(T, V)}\, dT \wedge dV = \frac{\partial(P, V)}{\partial(T, V)}\, dT \wedge dV$$

and thus

$$\left.\frac{\partial S}{\partial V}\right|_T dT \wedge dV = \left.\frac{\partial P}{\partial T}\right|_V dT \wedge dV$$

We can actually cancel some factors and get one of the Maxwell relations:

$$\left.\frac{\partial S}{\partial V}\right|_T = \left.\frac{\partial P}{\partial T}\right|_V$$

(Yes, this is mathematically justified!)

Let’s try another one. If we take $X = S,\ Y = P$ we get

$$\left.\frac{\partial T}{\partial P}\right|_S dS \wedge dP = \left.\frac{\partial V}{\partial S}\right|_P dS \wedge dP$$

Cancelling some factors here we get another of the Maxwell relations:

$$\left.\frac{\partial T}{\partial P}\right|_S = \left.\frac{\partial V}{\partial S}\right|_P$$

Other choices of $X, Y$ give the other two Maxwell relations.

In short, Maxwell’s relations all follow from one simple equation:

$$dT \wedge dS = dP \wedge dV$$
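Here’s the coordinate-change computation done in sympy, again for the ideal gas, with the choice $X = T$, $Y = V$ (where $S = c_V \ln T + R \ln V$ up to a constant):

```python
import sympy as sp

# Check dT ^ dS = dP ^ dV in the coordinates (T, V) for the ideal gas.
T, V = sp.symbols('T V', positive=True)
R, cv = sp.symbols('R c_v', positive=True)

S = cv * sp.log(T) + R * sp.log(V)   # entropy in these coordinates
P = R * T / V                        # ideal gas law

# dT ^ dS = (Jacobian d(T,S)/d(T,V)) dT ^ dV, and likewise for dP ^ dV.
jac_TS = sp.Matrix([[sp.diff(T, T), sp.diff(T, V)],
                    [sp.diff(S, T), sp.diff(S, V)]]).det()
jac_PV = sp.Matrix([[sp.diff(P, T), sp.diff(P, V)],
                    [sp.diff(V, T), sp.diff(V, V)]]).det()

assert sp.simplify(jac_TS - jac_PV) == 0                  # 2-forms agree
assert sp.simplify(sp.diff(S, V) - sp.diff(P, T)) == 0    # Maxwell relation
print("dT^dS = dP^dV in (T,V) coordinates:", sp.simplify(jac_TS))
```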
Similarly, Hamilton’s equations follow from this equation:

$$dp \wedge dq = dH \wedge dt$$

All calculations work in exactly the same way!

By the way, we can get these equations efficiently using the identity $d^2 = 0$ and the product rule for the exterior derivative $d$:

$$0 = d^2 U = d(T\, dS - P\, dV) = dT \wedge dS - dP \wedge dV$$

$$0 = d^2 W = d(p\, dq - H\, dt) = dp \wedge dq - dH \wedge dt$$
Now let’s change viewpoint slightly and temporarily treat $T$ and $P$ as independent from $S$ and $V$. So, let’s start with $\mathbb{R}^4$ with coordinates $(S, T, V, P)$. Then this 2-form on $\mathbb{R}^4$:

$$\omega = dT \wedge dS - dP \wedge dV$$

is called a symplectic structure.

Choosing the internal energy function $U(S, V)$, we get this 2-dimensional surface of equilibrium states:

$$\Lambda = \left\{ (S, T, V, P) :\ T = \left.\frac{\partial U}{\partial S}\right|_V,\ P = -\left.\frac{\partial U}{\partial V}\right|_S \right\}$$
Since

$$\omega = dT \wedge dS - dP \wedge dV$$

we know

$$\int_R \omega = 0$$

for any region $R$ in the surface $\Lambda$, since on this surface

$$dT \wedge dS = dP \wedge dV$$

and our old argument applies.
This fact encodes the Maxwell relations! Physically it says: for any cycle on the surface of equilibrium states, the heat flow in equals the work done.
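A short symbolic check that $\omega$ really does pull back to zero on $\Lambda$, for a completely arbitrary smooth $U$:

```python
import sympy as sp

# Pull the 2-form w = dT ^ dS - dP ^ dV back to the surface of equilibrium
# states, parametrized by (S, V) with T = dU/dS and P = -dU/dV.
S, V = sp.symbols('S V')
U = sp.Function('U')(S, V)     # an arbitrary smooth internal energy

T = sp.diff(U, S)
P = -sp.diff(U, V)

# On the surface, dT ^ dS = -(dT/dV) dS ^ dV and dP ^ dV = (dP/dS) dS ^ dV,
# so w = -(dT/dV + dP/dS) dS ^ dV.  Its single coefficient must vanish:
omega_coeff = -(sp.diff(T, V) + sp.diff(P, S))
assert sp.simplify(omega_coeff) == 0
print("w pulls back to 0 on the equilibrium surface for any U(S, V)")
```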
Similarly, in classical mechanics we can start with $\mathbb{R}^4$ with coordinates $(q, p, t, H)$, treating $p$ and $H$ as independent from $q$ and $t$. This 2-form on $\mathbb{R}^4$:

$$\omega = dp \wedge dq - dH \wedge dt$$

is a symplectic structure. Hamilton’s principal function $W(q, t)$ defines a 2d surface

$$\Lambda = \left\{ (q, p, t, H) :\ p = \left.\frac{\partial W}{\partial q}\right|_t,\ H = -\left.\frac{\partial W}{\partial t}\right|_q \right\}$$
We have

$$\int_R \omega = 0$$

for any region $R$ in this surface $\Lambda$. And this fact encodes Hamilton’s equations!
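And the mirror-image check on the mechanics side, for an arbitrary principal function $W(q, t)$:

```python
import sympy as sp

# Pull w = dp ^ dq - dH ^ dt back to the surface defined by an arbitrary
# principal function W(q, t), with p = dW/dq and H = -dW/dt.
q, t = sp.symbols('q t')
W = sp.Function('W')(q, t)

p = sp.diff(W, q)
H = -sp.diff(W, t)

# As before, the pullback is -(dp/dt + dH/dq) dq ^ dt, which must vanish:
omega_coeff = -(sp.diff(p, t) + sp.diff(H, q))
assert sp.simplify(omega_coeff) == 0
print("w pulls back to 0 on the surface defined by W(q, t)")
```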
Summary
In thermodynamics, any 2d region $R$ in the surface $\Lambda$ of equilibrium states has

$$\int_R dT \wedge dS - dP \wedge dV = 0$$

This is equivalent to the Maxwell relations.

In classical mechanics, any 2d region $R$ in the surface $\Lambda$ of allowed $(q, p, t, H)$ 4-tuples for particle trajectories through a single point has

$$\int_R dp \wedge dq - dH \wedge dt = 0$$

This is equivalent to Hamilton’s equations.
These facts generalize when we add extra degrees of freedom, e.g. the particle number $N$ (with conjugate chemical potential $\mu$) in thermodynamics:

$$\omega = dT \wedge dS - dP \wedge dV + d\mu \wedge dN$$

or more dimensions of space in classical mechanics:

$$\omega = \sum_{i=1}^n dp_i \wedge dq_i - dH \wedge dt$$

We get a vector space with a 2-form $\omega$ on it, and a Lagrangian submanifold $\Lambda$: that is, an $n$-dimensional submanifold such that

$$\int_R \omega = 0$$

for any 2d region $R$ in $\Lambda$.
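As a sketch of how the higher-dimensional pattern works, here’s the same pullback check with two dimensions of space and an arbitrary $W(q_1, q_2, t)$:

```python
import sympy as sp

# More degrees of freedom: pull w = dp1^dq1 + dp2^dq2 - dH^dt back to the
# graph of p_i = dW/dq_i, H = -dW/dt for an arbitrary W(q1, q2, t).
q1, q2, t = sp.symbols('q1 q2 t')
W = sp.Function('W')(q1, q2, t)

x = [q1, q2, t]
f = [sp.diff(W, v) for v in x]   # (p1, p2, -H): all partials of W

# Pullback of w is sum_i df_i ^ dx_i; its coefficient on dx_j ^ dx_k is
# df_k/dx_j - df_j/dx_k, which vanishes because mixed partials commute.
for j in range(3):
    for k in range(j + 1, 3):
        assert sp.simplify(sp.diff(f[k], x[j]) - sp.diff(f[j], x[k])) == 0
print("the graph of dW is a Lagrangian submanifold")
```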
This is more evidence for Alan Weinstein’s “symplectic creed”:

EVERYTHING IS A LAGRANGIAN SUBMANIFOLD.
As a spinoff, we get two extra Hamilton’s equations for a point particle on a line! They look weird, but I’m sure they’re correct for trajectories that go through a specific arbitrary spacetime point $(q_0, t_0)$.
• Part 1: Hamilton’s equations versus the Maxwell relations.
• Part 2: the role of symplectic geometry.
• Part 3: a detailed analogy between classical mechanics and thermodynamics.
• Part 4: what is the analogue of quantization for thermodynamics?
Symplectic structure of thermodynamics given by Jean-Marie Souriau:
https://eventi.unibo.it/infogeo2021/speakers/goffredo-chirco/@@download-relazione/45d05c56d4324825a3f477c50eea4886/LGT-IG-06_05.pdf
Charles-Michel Marle (Sorbonne University) Lecture on Souriau Symplectic model of Thermodynamics
Souriau Symplectic model of Entropy as Casimir function in Coadjoint representation
Thanks for these links! I’m reading Souriau on thermodynamics in his book Structure of Dynamical Systems: A Symplectic View of Physics. While probably nothing in my talk here would surprise him, he doesn’t seem to come out and make the points I’m making here. In particular, he focuses much more on statistical mechanics than classical thermodynamics.
Classical thermodynamics has also been studied in the language of contact geometry, and a lot of that can be translated into symplectic geometry. So, it would be quite hard to determine exactly what, if anything, I’m doing here is “new”. But I think it deserves a clear and simple explanation.
Typos: When you first introduce Hamilton’s principal function, you write the wrong letter for the time variable. You do this again when you first mention Hamiltonian mechanics, and there you also write the wrong letter for momentum.
Thanks again! Fixed!
It’s possibly an analogy stretched too far, but if entropy and temperature correspond to position and momentum, could there exist a so-far-undiscovered uncertainty principle in ‘quantum’ thermodynamics, too?
That’s an interesting question.
There’s a presentation here (https://ncatlab.org/nlab/show/quantization#MotivationFromClassicalMechanicsAndLieTheory) of the idea that quantization arises out of the symplectic classical world merely from the ability to integrate the Lie algebra $\mathbb{R}$ non-trivially, as $U(1)$.
I wonder whether only some such quantizations are empirically realised.
Did that account derive from your 2006 seminar: https://math.ucr.edu/home/baez/qg-fall2006/qg-fall2006.html#quantization ?
[This comment has the wrong avatar on it.]
David had posted his comment in the wrong place. I reposted it in the right location and managed to put his name on it, but unfortunately my picture is stuck on it now and I don’t see how to fix that. Sorry, David!
David wrote:
That seems a bit exaggerated to me; it’s a good description of “prequantization”, which is a way to get from integral symplectic manifolds to $U(1)$ bundles over these manifolds. But as the presentation says, one then needs to chop down the prequantum Hilbert space to a smaller one that’s usable in physics, via a ‘polarization’.
In Part 4 I go ahead and try to ‘quantize’ thermodynamics—though it turns out we get something more like statistical mechanics than quantum mechanics! Instead of using the full machinery of geometric quantization I use good old Schrödinger quantization (which is a special case). The hard part is not the math: it’s figuring out the physical meaning of the math, which dictates some of the choices to be made.
If there exists a Hamiltonian description of a thermodynamic system, does there then exist a Schrödinger equation for the thermodynamic system, one which describes its probability amplitude?
Sorry, I hadn’t read Wolfgang’s post
See Part 4 for something about this sort of question.
Thank you for your interesting posts.

$\epsilon$ is the Planck constant for a quantum thermodynamic system, but this means that every quantizable system has a different Planck constant; perhaps every quantizable system has an experimentally measurable constant with a different dimensional analysis. (In thermodynamics there is no time, because thermodynamic transformations are reversible processes.)
$$[\hat{T}, \hat{S}] = i\epsilon$$

$$[\hat{V}, \hat{P}] = i\epsilon$$

(I am thinking now of a superfluid in a piston interacting with a photon on the piston crown, to measure the volume and pressure.)

I am trying to obtain the physical constant in the simplest way possible. It could be possible to measure this Planck constant in a thermodynamic system; I am thinking that superfluid drops could be this system, with amplitude phases which could be shown experimentally: for an isobaric process the wave function must take a specific form.
I want to see if there exists a thought experiment to evaluate $\epsilon$.
I am thinking that each Hamiltonian system (in chemistry, physics, biology, economics, thermodynamics, etc.) that is an approximation of a real system has a quantum approximation, so that an uncertainty principle holds for each Hamiltonian system, with a measurable Planck constant of different dimensions.

So each measurable Hamiltonian system has an uncertainty principle for the measurable standard deviations; the elementary interactions of the systems are different, so the Planck constants must be different.

Moreover, some Hamiltonian systems of non-classical type should show quantization of some quantities, for example some approximation of pressure in thermodynamics.
Over on the Category Theory Community Server someone wrote:
I replied:
They replied:
I replied:
This seems to me a really useful discussion of something likely to confuse almost everyone.
Thanks!
And congratulations on becoming an AMS Fellow!
Good that the AMS is recognizing contributions beyond published papers.
You certainly have done plenty to foster the general understanding of various mathematical issues.
Interesting that the AMS recognizes your contributions, maybe more so than UCR?
I guess that may be due to an understandable difference in goals and priorities: mathematical education versus whatever UCR expected of its faculty.
Interesting, but I merely mention this.
Don’t want to provoke counterproductive controversy.
If you have comments …
No academic ever feels sufficiently recognized, but the mathematics department at UCR has appointed me the F. Burton Jones Chair in Pure Mathematics, a position which has some actual extra money attached to it, so I have no complaints on that score.
You can now watch a video of my talk on “Classical mechanics versus thermodynamics”:
Bit of a late reply, but it took me a while to understand this better. I made some pictures of examples.
First, an ideal gas. Below is a picture of internal energy (U) versus entropy and volume (S, V). It is a bit unusual to express U as U(S,V); it is much simpler to just say $U = C_V T$. But we are exploring relations with classical mechanics and quantum mechanics…
We can now construct a vector field $(T, -p) = (\partial U/\partial S,\ \partial U/\partial V)$. Because the vector field is the gradient of a scalar, doing a line integral $\oint (T\, dS - p\, dV)$ of it around a loop will give zero. But if we integrate the individual components $T\, dS$ and $p\, dV$ separately, they give non-zero answers. These are interpreted respectively as $Q$ = the external heat supplied, and $W$ = the external work done. The argument works for any cycle, but the loop in the picture is a Carnot cycle; it goes along lines of constant S and constant U (constant U is in our case the same as constant T).

We can also interpret the area enclosed by the loop as being the same as the line integral around the loop, e.g.

$$\oint_\gamma T\, dS = \int_R dT \wedge dS$$

Note that the system will not spontaneously go around this loop; the loop is forced by external means: a piston.
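Here’s a little numerical sketch of one stroke of such a Carnot cycle (my reconstruction, with $R = 1$ and $C_V = 3/2$ for the ideal gas): along an isentrope no heat flows, and along an isotherm the heat $\int T\, dS$ taken in equals the work $\int p\, dV$ done, both equal to $R T \ln(V_2/V_1)$.

```python
import numpy as np

# One isothermal stroke of the Carnot cycle for the ideal gas with
# U(S, V) = exp(S/c_v) * V**(-R/c_v), taking R = 1, c_v = 1.5, T = 2.
# Along the isotherm S = c_v ln(c_v T) + R ln(V), and p = R T / V.
R, cv, T = 1.0, 1.5, 2.0
V1, V2 = 1.0, 3.0

V = np.linspace(V1, V2, 10001)
S = cv * np.log(cv * T) + R * np.log(V)   # entropy along the isotherm

Vm = 0.5 * (V[1:] + V[:-1])               # midpoints for the integrals
work = np.sum((R * T / Vm) * np.diff(V))  # integral of p dV
heat = np.sum(T * np.diff(S))             # integral of T dS

print(work, heat, R * T * np.log(V2 / V1))   # all three agree
```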
Next, let’s do a Legendre transformation:

$$H = U + pV$$

The Legendre transformation switches the independent variable $V$ to its ‘conjugate’ $p$. To ensure that the variable conjugate to $p$ becomes $V$ (that is, $\partial H/\partial p = V$), the dependent variable is transformed to the enthalpy $H = U + pV$.
The picture again shows the same Carnot cycle. Some problems are easier with these other coordinates. We could also switch from S to T.
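Here’s a small sympy sketch of that Legendre transformation for the ideal-gas $U(S, V)$ used above (my example): differentiating the enthalpy with respect to $p$ at constant $S$ gives back $V$.

```python
import sympy as sp

# Start from the ideal-gas U(S, V), set p = -dU/dV, and form H = U + p V.
# Along a line of constant S, check that dH/dp = V, so (S, p) become the
# new independent variables and V is recovered by differentiating H.
S, V = sp.symbols('S V', positive=True)
R, cv = sp.symbols('R c_v', positive=True)

U = sp.exp(S / cv) * V**(-R / cv)
p = -sp.diff(U, V)          # pressure as a function of (S, V)
H = U + p * V               # enthalpy, still parametrized by (S, V)

# Chain rule at constant S: dH/dp = (dH/dV) / (dp/dV), which should equal V.
dH_dp = sp.diff(H, V) / sp.diff(p, V)
assert sp.simplify(dH_dp - V) == 0
print("after the Legendre transform, dH/dp = V")
```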
Now let’s go to mechanics. The example in the picture is the harmonic oscillator. Again, expressing the action in this way is rather unfamiliar. I got help from a nice article on the Hamilton-Jacobi equation: https://hal.archives-ouvertes.fr/hal-02317455
(NB: The symbols S and p have changed meaning)
It turns out the action (S) is multivalued in terms of (t, q), because of an arcsine function. You can add multiples of 2π to the action, and there is also a π − S solution, which corresponds to a sign switch in p. Again, we can construct a vector field (E, p). Because of the multi-valuedness, there are two vector fields, with sign-switched p. To not mess up the picture too much, I did not draw both sets at the same point.

Now, the interpretation is different. It makes little sense to construct an engine cycle as in thermodynamics; it would be an engine that can do loops in time and “converts kinetic action to potential action, keeping the total action the same”.

The system can have any initial point in the plot. But instead of a forced external cycle, we have a “forced passage of time”. As time passes, and if the external force on the oscillator is zero, the system “follows the arrows” of the vector field, as in the purple line. The purple line is the much more familiar sine wave q = sin(t) of the harmonic oscillator.

This Hamilton-Jacobi formalism of mechanics, where you work directly with the action instead of quickly going to the Lagrangian or Hamiltonian, seems a bit difficult and abstract. But there is a nice relation to quantum mechanics: the lines of constant action are related to lines of constant phase of the corresponding Schrodinger equation.
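Here’s a checkable version of this example (a sketch assuming $m = \omega = 1$): the classical action from $(q_0, 0)$ to $(q, t)$ is $S(q, t) = \frac{(q^2 + q_0^2)\cos t - 2 q q_0}{2 \sin t}$, and sympy confirms it satisfies the Hamilton-Jacobi equation.

```python
import sympy as sp

# Harmonic oscillator with m = omega = 1: action along the classical path
# from (q0, 0) to (q, t).  Check the Hamilton-Jacobi equation
#   dS/dt + (1/2)(dS/dq)**2 + q**2/2 = 0.
q, q0, t = sp.symbols('q q0 t')

S = ((q**2 + q0**2) * sp.cos(t) - 2 * q * q0) / (2 * sp.sin(t))

hj = sp.diff(S, t) + sp.Rational(1, 2) * sp.diff(S, q)**2 + q**2 / 2
assert sp.simplify(hj) == 0

# The momentum field p = dS/dq is one component of the vector field above.
print("p =", sp.simplify(sp.diff(S, q)))   # (q cos t - q0) / sin t
```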
I have not figured out yet what “following the arrows” means in the thermodynamics case.
While a cycle in (t, q)-space is not physical, you can have two different paths (each with strictly increasing $t$) with the same beginning point and the same ending point, and compare those. Then instead of ‹total X along the cycle›, you have ‹difference between X along the two paths›.
Yes, good point. If you have two time-shifted solutions for the free harmonic oscillator, they will intersect twice per time interval of 2π. One interesting case is where the time shift is infinitesimal (dt).

The infinitesimal loop that you get should have some relation to the Hamilton-Jacobi equation.

(Off to celebrate New Year, but I might secretly think about this in the meantime.)