## Classical Mechanics versus Thermodynamics (Part 3)

There’s a fascinating analogy between classical mechanics and thermodynamics, which I last talked about in 2012:

I’ve figured out more about it, and today I’m giving a talk about it in the physics colloquium at the University of British Columbia. It’s a colloquium talk that’s supposed to be accessible for upper-level undergraduates, so I’ll spend a lot of time reviewing the basics… which is good, I think.

You can see my slides here, and I’ll base this blog article on them. You can also watch a video of my talk:

### Hamilton’s equations versus the Maxwell relations

Why do Hamilton’s equations in classical mechanics:

$\begin{array}{ccr} \displaystyle{ \frac{d p}{d t} } &=& \displaystyle{- \frac{\partial H}{\partial q} } \\ \\ \displaystyle{ \frac{d q}{d t} } &=& \displaystyle{ \frac{\partial H}{\partial p} } \end{array}$

look so much like the Maxwell relations in thermodynamics?

$\begin{array}{ccr} \displaystyle{ \left. \frac{\partial T}{\partial V}\right|_S } &=& \displaystyle{ - \left. \frac{\partial P}{\partial S}\right|_V } \\ \\ \displaystyle{ \left. \frac{\partial S}{\partial V}\right|_T } &=& \displaystyle{ \left. \frac{\partial P}{\partial T} \right|_V } \end{array}$

William Rowan Hamilton discovered his equations describing classical mechanics in terms of energy around 1827. By 1834 he had also introduced Hamilton’s principal function, which I’ll explain later.

James Clerk Maxwell is most famous for his equations describing electromagnetism, perfected in 1865. But he also worked on thermodynamics, and discovered the ‘Maxwell relations’ in 1871.

Hamilton’s equations describe how the position $q$ and momentum $p$ of a particle on a line change with time $t$ if we know the energy or Hamiltonian $H(q,p)$:

$\begin{array}{ccr} \displaystyle{ \frac{d p}{d t} } &=& \displaystyle{- \frac{\partial H}{\partial q} } \\ \\ \displaystyle{ \frac{d q}{d t} } &=& \displaystyle{ \frac{\partial H}{\partial p} } \end{array}$
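
To make these equations concrete, here is a minimal numerical sketch: it integrates Hamilton's equations for a unit-mass harmonic oscillator with $H = \frac{1}{2}(p^2 + q^2)$ (an illustrative choice, nothing special about it) and checks that the motion returns to its starting point after one period with energy conserved:

```python
import math

# Toy Hamiltonian (an illustrative choice): H(q, p) = p**2/2 + q**2/2,
# a unit-mass, unit-frequency harmonic oscillator.
def dH_dq(q, p):
    return q

def dH_dp(q, p):
    return p

def integrate(q, p, dt, steps):
    """Step Hamilton's equations dp/dt = -dH/dq, dq/dt = dH/dp
    with the symplectic Euler method."""
    for _ in range(steps):
        p -= dt * dH_dq(q, p)   # dp/dt = -dH/dq
        q += dt * dH_dp(q, p)   # dq/dt =  dH/dp
    return q, p

# Integrate for one period (2*pi time units) starting at q = 1, p = 0.
q, p = integrate(1.0, 0.0, 1e-4, 62832)
# The oscillator should return near (1, 0) with energy H = 1/2 intact.
```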

Two of the Maxwell relations connect the volume $V$, entropy $S$, pressure $P$ and temperature $T$ of a system in thermodynamic equilibrium:

$\begin{array}{ccr} \displaystyle{ \left. \frac{\partial T}{\partial V}\right|_S } &=& \displaystyle{ - \left. \frac{\partial P}{\partial S}\right|_V } \\ \\ \displaystyle{ \left. \frac{\partial S}{\partial V}\right|_T } &=& \displaystyle{ \left. \frac{\partial P}{\partial T} \right|_V } \end{array}$

Using this change of variables:

$q \to S \qquad p \to T$
$t \to V \qquad H \to P$

Hamilton’s equations:

$\begin{array}{ccr} \displaystyle{ \frac{d p}{d t} } &=& \displaystyle{- \frac{\partial H}{\partial q} } \\ \\ \displaystyle{ \frac{d q}{d t} } &=& \displaystyle{ \frac{\partial H}{\partial p} } \end{array}$

become these relations:

$\begin{array}{ccr} \displaystyle{ \frac{d T}{d V} } &=& \displaystyle{- \frac{\partial P}{\partial S} } \\ \\ \displaystyle{ \frac{d S}{d V} } &=& \displaystyle{ \frac{\partial P}{\partial T} } \end{array}$

These are almost like two of the Maxwell relations! But in thermodynamics we always use partial derivatives:

$\begin{array}{ccr} \displaystyle{ \frac{\partial T}{\partial V} } &=& \displaystyle{ - \frac{\partial P}{\partial S} } \\ \\ \displaystyle{ \frac{\partial S}{\partial V} } &=& \displaystyle{ \frac{\partial P}{\partial T} } \end{array}$

and we say which variables are held constant:

$\begin{array}{ccr} \displaystyle{ \left. \frac{\partial T}{\partial V}\right|_S } &=& \displaystyle{ - \left. \frac{\partial P}{\partial S}\right|_V } \\ \\ \displaystyle{ \left. \frac{\partial S}{\partial V}\right|_T } &=& \displaystyle{ \left. \frac{\partial P}{\partial T} \right|_V } \end{array}$

If we write Hamilton’s equations in the same style as the Maxwell relations, they look funny:

$\begin{array}{ccr} \displaystyle{ \left. \frac{\partial p}{\partial t}\right|_q } &=& \displaystyle{ - \left. \frac{\partial H}{\partial q}\right|_t } \\ \\ \displaystyle{ \left. \frac{\partial q}{\partial t}\right|_p } &=& \displaystyle{ \left. \frac{\partial H}{\partial p} \right|_t } \end{array}$

Can this possibly be right?

Yes! When we work out the analogy between classical mechanics and thermodynamics we’ll see why.
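
Before seeing why, we can at least check these funny-looking equations numerically. For a unit-mass, unit-frequency harmonic oscillator whose paths start at $(q_0,t_0) = (0,0)$, Hamilton's principal function (introduced below) has the standard closed form $W(q,t) = \frac{q^2}{2}\cot t$; taking $p = \left.\partial W/\partial q\right|_t$ and $H = -\left.\partial W/\partial t\right|_q$, finite differences confirm both equations:

```python
import math

# Hamilton's principal function for a unit-mass, unit-frequency harmonic
# oscillator, for paths starting at (q0, t0) = (0, 0); the closed form
# W(q, t) = (q**2/2) * cos(t)/sin(t) is a standard result, quoted here
# purely as an illustration.
def W(q, t):
    return 0.5 * q * q * math.cos(t) / math.sin(t)

def p(q, t, d=1e-6):            # p =  dW/dq at fixed t
    return (W(q + d, t) - W(q - d, t)) / (2 * d)

def H(q, t, d=1e-6):            # H = -dW/dt at fixed q
    return -(W(q, t + d) - W(q, t - d)) / (2 * d)

q0, t0, h = 0.7, 0.9, 1e-5

# First equation: dp/dt|_q = -dH/dq|_t
lhs1 = (p(q0, t0 + h) - p(q0, t0 - h)) / (2 * h)
rhs1 = -(H(q0 + h, t0) - H(q0 - h, t0)) / (2 * h)

# Second equation: dq/dt|_p = dH/dp|_t.  Holding p fixed means moving
# along q = p * tan(t), the inverse of p = q * cos(t)/sin(t).
p0 = p(q0, t0)
lhs2 = (p0 * math.tan(t0 + h) - p0 * math.tan(t0 - h)) / (2 * h)
rhs2 = (H((p0 + h) * math.tan(t0), t0)
        - H((p0 - h) * math.tan(t0), t0)) / (2 * h)
```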

We can get Maxwell’s relations starting from this: the internal energy $U$ of a system in equilibrium depends on its entropy $S$ and volume $V.$

Temperature and pressure are derivatives of $U:$

$\displaystyle{ T = \left.\frac{\partial U}{\partial S} \right|_V \qquad P = - \left. \frac{\partial U}{\partial V} \right|_S }$

Maxwell’s relations follow from the fact that mixed partial derivatives commute! For example:

$\displaystyle{ \left. \frac{\partial T}{\partial V} \right|_S \; = \; \left. \frac{\partial}{\partial V} \right|_S \left. \frac{\partial }{\partial S} \right|_V U \; = \; \left. \frac{\partial }{\partial S} \right|_V \left. \frac{\partial}{\partial V} \right|_S U \; = \; - \left. \frac{\partial P}{\partial S} \right|_V }$
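
Here is a quick finite-difference check of this computation for a toy internal energy, $U(S,V) = e^{2S/3}/V^{2/3}$ (monatomic-ideal-gas-like in suitable units, chosen purely for illustration):

```python
import math

# A toy internal-energy function, chosen only for illustration:
# U(S, V) = exp(2*S/3) / V**(2/3).
def U(S, V):
    return math.exp(2 * S / 3) / V ** (2 / 3)

h = 1e-6
def T(S, V):   # T =  dU/dS at fixed V
    return (U(S + h, V) - U(S - h, V)) / (2 * h)

def P(S, V):   # P = -dU/dV at fixed S
    return -(U(S, V + h) - U(S, V - h)) / (2 * h)

S0, V0, k = 0.4, 1.3, 1e-4

# Maxwell relation: dT/dV|_S = -dP/dS|_V, because mixed partials of U commute.
dT_dV = (T(S0, V0 + k) - T(S0, V0 - k)) / (2 * k)
dP_dS = (P(S0 + k, V0) - P(S0 - k, V0)) / (2 * k)
```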

To get Hamilton’s equations the same way, we need a function $W$ of the particle’s position $q$ and time $t$ such that

$\displaystyle{ p = \left. \frac{\partial W}{\partial q} \right|_t \qquad H = -\left. \frac{\partial W}{\partial t} \right|_q }$

Then we’ll get Hamilton’s equations from the fact that mixed partial derivatives commute!

The trick is to let $W$ be ‘Hamilton’s principal function’. So let’s define that. First, the action of a particle’s path is

$\displaystyle{ \int_{t_0}^{t_1} L(q(t),\dot{q}(t)) \, dt }$

where $L$ is the Lagrangian:

$L(q,\dot{q}) = p \dot q - H(q,p)$

The particle always takes a path from $(q_0, t_0)$ to $(q_1, t_1)$ that’s a critical point of the action. We can derive Hamilton’s equations from this fact.

Let’s assume this critical point is a minimum. Then the least action for any path from $(q_0,t_0)$ to $(q_1,t_1)$ is called Hamilton’s principal function

$W(q_0,t_0,q_1,t_1) = \min_{q \; \mathrm{with} \; q(t_0) = q_0, q(t_1) = q_1} \int_{t_0}^{t_1} L(q(t),\dot{q}(t)) \, dt$

A beautiful fact: if we differentiate Hamilton’s principal function, we get back the energy $H$ and momentum $p$:

$\begin{array}{ccc} \displaystyle{ \frac{\partial}{\partial q_0} W(q_0,t_0,q_1,t_1) = -p(t_0) } && \displaystyle{ \frac{\partial}{\partial t_0} W(q_0,t_0,q_1,t_1) = H(t_0) } \\ \\ \displaystyle{ \frac{\partial}{\partial q_1} W(q_0,t_0,q_1,t_1) = p(t_1) } && \displaystyle{ \frac{\partial}{\partial t_1}W(q_0,t_0,q_1,t_1) = -H(t_1) } \end{array}$

You can prove these equations using

$L = p\dot{q} - H$

which implies that

$\displaystyle{ W(q_0,t_0,q_1,t_1) = \int_{q_0}^{q_1} p \, dq \; - \; \int_{t_0}^{t_1} H \, dt }$

where we integrate along the minimizing path. (It’s not as trivial as it may look, but you can do it.)
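
For example, for a free particle of mass $m$ the minimizing path has constant velocity and $W(q_0,t_0,q_1,t_1) = \frac{m(q_1-q_0)^2}{2(t_1-t_0)}$, a standard closed form, so all four derivative formulas are easy to check with finite differences:

```python
import math

# Least action for a free particle of mass m: the minimizing path from
# (q0, t0) to (q1, t1) has constant velocity, giving the standard
# closed form W = m*(q1 - q0)**2 / (2*(t1 - t0)).
m = 2.0
def W(q0, t0, q1, t1):
    return m * (q1 - q0) ** 2 / (2 * (t1 - t0))

q0, t0, q1, t1, h = 0.0, 0.0, 3.0, 1.5, 1e-6
v = (q1 - q0) / (t1 - t0)       # velocity along the path
p_true = m * v                  # momentum (same at both endpoints)
H_true = 0.5 * m * v * v        # energy along the path

dW_dq1 = (W(q0, t0, q1 + h, t1) - W(q0, t0, q1 - h, t1)) / (2 * h)
dW_dt1 = (W(q0, t0, q1, t1 + h) - W(q0, t0, q1, t1 - h)) / (2 * h)
dW_dq0 = (W(q0 + h, t0, q1, t1) - W(q0 - h, t0, q1, t1)) / (2 * h)
dW_dt0 = (W(q0, t0 + h, q1, t1) - W(q0, t0 - h, q1, t1)) / (2 * h)
# Expect: dW/dq1 = +p, dW/dt1 = -H, dW/dq0 = -p, dW/dt0 = +H.
```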

Now let’s fix a starting-point $(q_0,t_0)$ for our particle, and say its path ends at any old point $(q,t)$. Think of Hamilton’s principal function as a function of just $(q,t)$:

$W(q,t) = W(q_0,t_0,q,t)$

Then the particle’s momentum and energy when it reaches $(q,t)$ are:

$\displaystyle{ p = \left. \frac{\partial W}{\partial q} \right|_t \qquad H = -\left. \frac{\partial W}{\partial t} \right|_q }$

This is just what we wanted. Hamilton’s equations now follow from the fact that mixed partial derivatives commute!

So, we have this analogy between classical mechanics and thermodynamics:

| Classical mechanics | Thermodynamics |
| --- | --- |
| action: $W(q,t)$ | internal energy: $U(V,S)$ |
| position: $q$ | entropy: $S$ |
| momentum: $p = \frac{\partial W}{\partial q}$ | temperature: $T = \frac{\partial U}{\partial S}$ |
| time: $t$ | volume: $V$ |
| energy: $H = -\frac{\partial W}{\partial t}$ | pressure: $P = -\frac{\partial U}{\partial V}$ |
| $dW = p\,dq - H\,dt$ | $dU = T\,dS - P\,dV$ |

What’s really going on in this analogy? It’s not really the match-up of variables that matters most—it’s something a bit more abstract. Let’s dig deeper.

I said we could get Maxwell’s relations from the fact that mixed partials commute, and gave one example:

$\displaystyle{ \left. \frac{\partial T}{\partial V} \right|_S \; = \; \left. \frac{\partial}{\partial V} \right|_S \left. \frac{\partial }{\partial S} \right|_V U \; = \; \left. \frac{\partial }{\partial S} \right|_V \left. \frac{\partial}{\partial V} \right|_S U \; = \; - \left. \frac{\partial P}{\partial S} \right|_V }$

But to get the other Maxwell relations we need to differentiate other functions—and there are four of them!

$U$: internal energy
$U - TS$: Helmholtz free energy
$U + PV$: enthalpy
$U + PV - TS$: Gibbs free energy

They’re important, but memorizing all the facts about them has annoyed students of thermodynamics for over a century. Is there some other way to get the Maxwell relations? Yes!

In 1958 David Ritchie explained how we can get all four Maxwell relations from one equation! Jaynes also explained how in some unpublished notes for a book. Here’s how it works.

Start here:

$dU = T d S - P d V$

Integrate around a loop $\gamma$:

$\displaystyle{ \oint_\gamma T d S - P d V = \oint_\gamma d U = 0 }$

so

$\displaystyle{ \oint_\gamma T d S = \oint_\gamma P dV }$

This says that the heat added to the system during this cycle equals the work it does.
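
We can check this numerically for a toy system: take the illustrative internal energy $U(S,V) = e^{2S/3}/V^{2/3}$ (so $T = \frac{2}{3}U$ and $P = \frac{2}{3}U/V$, computed by hand), march around a circle in the $(S,V)$ plane, and compare the two loop integrals:

```python
import math

# Toy internal energy, chosen only for illustration:
# U(S, V) = exp(2*S/3) / V**(2/3), which gives
#   T =  dU/dS|_V = (2/3) * U   and   P = -dU/dV|_S = (2/3) * U / V.
def T(S, V):
    return (2 / 3) * math.exp(2 * S / 3) / V ** (2 / 3)

def P(S, V):
    return (2 / 3) * math.exp(2 * S / 3) / V ** (5 / 3)

# Integrate T dS and P dV around a circle in the (S, V) plane,
# i.e. around a closed cycle on the surface of equilibrium states.
n, r, S0, V0 = 20000, 0.3, 1.0, 2.0
heat = work = 0.0
for i in range(n):
    a, b = 2 * math.pi * i / n, 2 * math.pi * (i + 1) / n
    Sm = S0 + r * math.cos((a + b) / 2)      # midpoint of the segment
    Vm = V0 + r * math.sin((a + b) / 2)
    dS = r * (math.cos(b) - math.cos(a))
    dV = r * (math.sin(b) - math.sin(a))
    heat += T(Sm, Vm) * dS    # contribution to the loop integral of T dS
    work += P(Sm, Vm) * dV    # contribution to the loop integral of P dV
# Since the loop integral of dU vanishes, heat and work agree,
# even though each is nonzero on its own.
```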

Green’s theorem implies that if a loop $\gamma$ encloses a region $R,$

$\displaystyle{ \oint_\gamma T d S = \int_R dT \, dS }$

Similarly

$\displaystyle{ \oint_\gamma P d V = \int_R dP \, dV }$

But we know these are equal!

So, we get

$\displaystyle{ \int_R dT \, dS = \int_R dP \, dV }$

for any region $R$ enclosed by a loop. And this in turn implies

$d T\, dS = dP \, dV$

In fact, all of Maxwell’s relations are hidden in this one equation!

Mathematicians call something like $dT \, dS$ a 2-form and write it as $dT \wedge dS$. It’s an ‘oriented area element’, so

$dT \, dS = -dS \, dT$

Now, starting from

$d T\, dS = dP \, dV$

We can choose any coordinates $X,Y$ and get

$\displaystyle{ \frac{dT \, dS}{dX \, dY} = \frac{dP \, dV}{dX \, dY} }$

(Yes, this is mathematically allowed!)

If we take $X = V, Y = S$ we get

$\displaystyle{ \frac{dT \, dS}{dV \, dS} = \frac{dP \, dV}{dV \, dS} }$

and thus

$\displaystyle{ \frac{dT \, dS}{dV \, dS} = - \frac{dV \, dP}{dV \, dS} }$

We can actually cancel some factors and get one of the Maxwell relations:

$\displaystyle{ \left. \frac{\partial T}{\partial V}\right|_S = - \left. \frac{\partial P}{\partial S}\right|_V }$

(Yes, this is mathematically justified!)

Let’s try another one. If we take $X = T, Y = V$ we get

$\displaystyle{ \frac{dT \, dS}{dT \, dV} = \frac{dP \, dV}{dT \, dV} }$

Cancelling some factors here we get another of the Maxwell relations:

$\displaystyle{ \left. \frac{\partial S}{\partial V} \right|_T = \left. \frac{\partial P}{\partial T} \right|_V }$

Other choices of $X,Y$ give the other two Maxwell relations.
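
As a concrete check of the second relation derived above, take the toy internal energy $U(S,V) = e^{2S/3}/V^{2/3}$ (an illustrative choice) and invert $T = \left.\partial U/\partial S\right|_V$ by hand to express $S$ and $P$ in the coordinates $(T,V)$; finite differences then confirm $\left.\frac{\partial S}{\partial V}\right|_T = \left.\frac{\partial P}{\partial T}\right|_V$:

```python
import math

# Re-express the toy system U(S, V) = exp(2*S/3) / V**(2/3) in (T, V)
# coordinates, inverting T = dU/dS|_V by hand for this toy case:
#   S(T, V) = 1.5 * log(1.5 * T * V**(2/3))   and   P(T, V) = T / V.
def S(T, V):
    return 1.5 * math.log(1.5 * T * V ** (2 / 3))

def P(T, V):
    return T / V

T0, V0, h = 0.8, 1.7, 1e-6

# Second Maxwell relation: dS/dV|_T = dP/dT|_V.
dS_dV = (S(T0, V0 + h) - S(T0, V0 - h)) / (2 * h)
dP_dT = (P(T0 + h, V0) - P(T0 - h, V0)) / (2 * h)
```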

In short, Maxwell’s relations all follow from one simple equation:

$d T\, dS = dP \, dV$

Similarly, Hamilton’s equations follow from this equation:

$d p\, dq = dH \, dt$

All calculations work in exactly the same way!

By the way, we can get these equations efficiently using the identity $d^2 = 0$ and the product rule for $d$:

$\begin{array}{ccl} dU = TdS - PdV & \implies & d^2 U = d(TdS - P dV) \\ \\ &\implies& 0 = dT\, dS - dP \, dV \\ \\ &\implies & dT\, dS = dP \, dV \end{array}$

Now let’s change viewpoint slightly and temporarily treat $P$ and $V$ as independent from $S$ and $T.$ So, let’s start with $\mathbb{R}^4$ with coordinates $(S,T,V,P)$. Then this 2-form on $\mathbb{R}^4$:

$\omega = dT \, dS - dP \, dV$

is called a symplectic structure.

Choosing the internal energy function $U(S,V)$, we get this 2-dimensional surface of equilibrium states:

$\displaystyle{ \Lambda = \left\{ (S,T,V,P): \; \textstyle{ T = \left.\phantom{\Big|} \frac{\partial U}{\partial S}\right|_V , \; P = -\left. \phantom{\Big|}\frac{\partial U}{\partial V} \right|_S} \right\} \; \subset \; \mathbb{R}^4 }$

Since

$\omega = dT \, dS - dP \, dV$

we know

$\displaystyle{ \int_R \omega = 0 }$

for any region in the surface $\Lambda$, since on this surface $dU = TdS - PdV$ and our old argument applies.

This fact encodes the Maxwell relations! Physically it says: for any cycle on the surface of equilibrium states, the heat flow in equals the work done.

Similarly, in classical mechanics we can start with $\mathbb{R}^4$ with coordinates $(q,p,t,H)$, treating $p$ and $H$ as independent from $q$ and $t .$ This 2-form on $\mathbb{R}^4$:

$\omega = dH \, dt - dp \, dq$

is a symplectic structure. Hamilton’s principal function $W(q,t)$ defines a 2d surface

$\displaystyle{ \Lambda = \left\{ (q,p,t,H): \; \textstyle{ p = \left.\phantom{\Big|} \frac{\partial W}{\partial q}\right|_t , \; H = -\left.\phantom{\Big|} \frac{\partial W}{\partial t} \right|_q} \right\} \subset \mathbb{R}^4 }$

We have $\int_R \omega = 0$ for any region $R$ in this surface $\Lambda.$ And this fact encodes Hamilton’s equations!

### Summary

In thermodynamics, any 2d region $R$ in the surface $\Lambda$ of equilibrium states has

$\displaystyle{ \int_R \omega = 0 }$

This is equivalent to the Maxwell relations.

In classical mechanics, any 2d region $R$ in the surface $\Lambda$ of allowed $(q,p,t,H)$ 4-tuples for particle trajectories through a single point $(q_0,t_0)$ has

$\displaystyle{ \int_R \omega = 0 }$

This is equivalent to Hamilton’s equations.

These facts generalize when we add extra degrees of freedom, e.g. the particle number $N$ in thermodynamics:

$\omega = dT \, dS - dP \, dV + d\mu \, dN$

or more dimensions of space in classical mechanics:

$\omega = dp_1 \, dq_1 + \cdots + dp_{n-1} dq_{n-1} - dH \, dt$

We get a vector space $\mathbb{R}^{2n}$ with a 2-form $\omega$ on it, and a Lagrangian submanifold $\Lambda \subset \mathbb{R}^{2n}$: that is, an $n$-dimensional submanifold such that

$\int_R \omega = 0$

for any 2d region $R \subset \Lambda.$

This is more evidence for Alan Weinstein’s “symplectic creed”:

EVERYTHING IS A LAGRANGIAN SUBMANIFOLD

As a spinoff, we get two extra Hamilton’s equations for a point particle on a line! They look weird, but I’m sure they’re correct for trajectories that go through a specific arbitrary spacetime point $(q_0,t_0).$

$\begin{array}{ccrcccr} \displaystyle{ \left. \frac{\partial T}{\partial V}\right|_S } &=& \displaystyle{ - \left. \frac{\partial P}{\partial S}\right|_V } & \qquad & \displaystyle{ \left. \frac{\partial p}{\partial t}\right|_q } &=& \displaystyle{ - \left. \frac{\partial H}{\partial q}\right|_t } \\ \\ \displaystyle{ \left. \frac{\partial S}{\partial V}\right|_T } &=& \displaystyle{ \left. \frac{\partial P}{\partial T} \right|_V } & & \displaystyle{ \left. \frac{\partial q}{\partial t}\right|_p } &=& \displaystyle{ \left. \frac{\partial H}{\partial p} \right|_t } \\ \\ \displaystyle{ \left. \frac{\partial V}{\partial T} \right|_P } &=& \displaystyle{ - \left. \frac{\partial S}{\partial P} \right|_T } & & \displaystyle{ \left. \frac{\partial t}{\partial p} \right|_H } &=& \displaystyle{ - \left. \frac{\partial q}{\partial H} \right|_p } \\ \\ \displaystyle{ \left. \frac{\partial V}{\partial S} \right|_P } &=& \displaystyle{ \left. \frac{\partial T}{\partial P} \right|_S } & & \displaystyle{ \left. \frac{\partial p}{\partial H} \right|_q } &=& \displaystyle{ \left. \frac{\partial t}{\partial q} \right|_H } \end{array}$
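
Here is a numerical check of the two new equations in the right-hand column, for a unit-mass, unit-frequency harmonic oscillator with trajectories through $(q_0,t_0) = (0,0)$: on the surface $\Lambda$ we have $q = \sqrt{2H}\,\sin t$ and $p = \sqrt{2H}\,\cos t$ (worked out by hand for this example), which we can solve for whichever pair of coordinates each equation needs:

```python
import math

# Trajectories of a unit-mass, unit-frequency harmonic oscillator
# through (q0, t0) = (0, 0) satisfy q = sqrt(2H)*sin(t) and
# p = sqrt(2H)*cos(t) (derived by hand for this example).
# Solving for the coordinates each equation needs:
def t_of(p, H):                  # t as a function of (p, H)
    return math.acos(p / math.sqrt(2 * H))

def q_of(p, H):                  # q as a function of (p, H)
    return math.sqrt(2 * H - p * p)

def p_of(q, H):                  # p as a function of (q, H)
    return math.sqrt(2 * H - q * q)

def t_from_q(q, H):              # t as a function of (q, H)
    return math.asin(q / math.sqrt(2 * H))

p0, q0, H0, h = 0.6, 0.6, 1.0, 1e-6

# Third equation: dt/dp|_H = -dq/dH|_p
lhs3 = (t_of(p0 + h, H0) - t_of(p0 - h, H0)) / (2 * h)
rhs3 = -(q_of(p0, H0 + h) - q_of(p0, H0 - h)) / (2 * h)

# Fourth equation: dp/dH|_q = dt/dq|_H
lhs4 = (p_of(q0, H0 + h) - p_of(q0, H0 - h)) / (2 * h)
rhs4 = (t_from_q(q0 + h, H0) - t_from_q(q0 - h, H0)) / (2 * h)
```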

Part 1: Hamilton’s equations versus the Maxwell relations.

Part 2: the role of symplectic geometry.

Part 3: a detailed analogy between classical mechanics and thermodynamics.

Part 4: what is the analogue of quantization for thermodynamics?

### 25 Responses to Classical Mechanics versus Thermodynamics (Part 3)

1. Frederic Barbaresco says:
2. Frederic Barbaresco says:

Charles-Michel Marle (Sorbonne University) Lecture on Souriau Symplectic model of Thermodynamics

3. Frederic Barbaresco says:

Souriau Symplectic model of Entropy as Casimir function in Coadjoint representation

4. John Baez says:

Thanks for these links! I’m reading Souriau on thermodynamics in his book Structure of Dynamical Systems: A Symplectic View of Physics. While probably nothing in my talk here would surprise him, he doesn’t seem to come out and make the points I’m making here. In particular, he focuses much more on statistical mechanics than classical thermodynamics.

Classical thermodynamics has also been studied in the language of contact geometry, and a lot of that can be translated into symplectic geometry. So it would be quite hard to determine whether anything I’m doing here is “new”. But I think it deserves a clear and simple explanation.

5. Toby Bartels says:

Typos: When you first introduce $W$, you write $T$ for time instead of $t$. You do this again when you first mention Hamiltonian mechanics in $\mathbb R^4$, and then you also write $P$ for momentum instead of $p$.

6. Wolfgang says:

It’s possibly an analogy stretched too far, but if entropy and temperature correspond to position and momentum, could there exist a so-far-undiscovered uncertainty principle in ‘quantum’ thermodynamics, too?

• John Baez says:

That’s an interesting question.

• David Corfield says:

There’s a presentation here (https://ncatlab.org/nlab/show/quantization#MotivationFromClassicalMechanicsAndLieTheory) of the idea that quantization arises out of the symplectic classical world merely from the ability to integrate the Lie group $\mathbb{R}$ non-trivially as $U(1)$.

I wonder whether only some such quantizations are empirically realised.

• David Corfield says:

Did that account derive from your 2006 seminar: https://math.ucr.edu/home/baez/qg-fall2006/qg-fall2006.html#quantization ?

• Toby Bartels says:

[This comment has the wrong avatar on it.]

• John Baez says:

David had posted his comment in the wrong place. I reposted it in the right location and managed to put his name on it, but unfortunately my picture is stuck on it now and I don’t see how to fix that. Sorry, David!

• John Baez says:

David wrote:

There’s a presentation here (https://ncatlab.org/nlab/show/quantization#MotivationFromClassicalMechanicsAndLieTheory) of the idea that quantization arises out of the symplectic classical world merely from the ability to integrate the Lie group $\mathbb{R}$ non-trivially as $\mathrm{U}(1)$.

That seems a bit exaggerated to me; it’s a good description of “prequantization”, which is a way to get from integral symplectic manifolds to $\mathrm{U}(1)$ bundles over these manifolds. But as the presentation says,

This is only the beginning of the story of quantization…

One then needs to chop down the prequantum Hilbert space to a smaller one that’s usable in physics, via a ‘polarization’.

In Part 4 I go ahead and try to ‘quantize’ thermodynamics—though it turns out we get something more like statistical mechanics than quantum mechanics! Instead of using the full machinery of geometric quantization I use good old Schrödinger quantization (which is a special case). The hard part is not the math: it’s figuring out the physical meaning of the math, which dictates some of the choices to be made.

7. domenico says:

If there exists a Hamiltonian description of a thermodynamic system, then there exists a Schrödinger equation of a thermodynamic system of the type

$i \hbar \partial_V \Psi (V) = H (S, T) \Psi (V)$

which describes the probability amplitude of the thermodynamic system?

• domenico says:

• John Baez says:

• domenico says:

Thank you for your interesting posts.
I try to solve the physical constant in the simplest way possible.
$i \epsilon \partial_V \Psi(V) = P(T,S) \Psi(V)$
where $\epsilon$ is the Planck constant for a quantum thermodynamic system. But this means that every quantizable system has a different Planck constant; perhaps every quantizable system has an experimentally measurable constant with a different dimensional analysis. (In thermodynamics there is no time, because thermodynamic transformations are reversible processes.)
The commutators are
$[\hat{T},\hat{S}]=i\epsilon$
$[\hat{V},\hat{P}]=i\epsilon$
and it could be possible to measure the Planck constant in a thermodynamic system. I am thinking that superfluid drops could be this system, with amplitude phases that could be shown experimentally: for an isobaric process the wave function must be
$\Psi(V) = \frac{1}{\sqrt{V_2-V_1}} e^{-i \frac{P V}{\epsilon}}$
I want to see if there exists a thought experiment to evaluate $\epsilon$ from $\hbar$ (I am thinking now of a superfluid in a piston interacting with a photon on the piston crown, to measure the volume and pressure).

• domenico says:

I am thinking that each Hamiltonian system (chemistry, physics, biology, economics, thermodynamics, etc.) that is an approximation of a real system has a quantum approximation, so that the uncertainty principle is true for each Hamiltonian system, with a different, measurable Planck constant with its own dimensions.
So each measurable Hamiltonian system has an uncertainty principle for the measurable standard deviations; the elementary interactions of the systems are different, so the Planck constants must be different.
Moreover, some Hamiltonian systems of non-classical type must show a quantization of some quantities, for example some approximation of pressure in thermodynamics.

8. John Baez says:

Over on the Category Theory Community Server someone wrote:

Such a clear exposition John! Just a question: why do we get $\vert_q$ and $\vert_t$ in the partial derivatives on slide 16 (and the next ones)?

More specifically, what’s the difference between something like $\frac{\partial P}{\partial V}\vert_T$ and $\frac{\partial P}{\partial V}$? Is the first a projection of the second along the direction in which $T$ remains constant?

I replied:

The first means something, the second does not.

Thermodynamics is confusing because it studies the same manifold with many different coordinate systems. In the case at hand we have four functions on a 2-dimensional manifold, called $S,T,P,V.$ Any pair of these functions can serve as a coordinate system.

Partial derivatives only make sense with respect to a coordinate system: given coordinates $x,y$ on a 2-manifold and a function $f$ on this 2-manifold we write

$\frac{\partial f}{\partial x}|_y$

to mean the derivative of $f$ as we change the value of the coordinate $x$ while holding the coordinate $y$ fixed. The coordinates we hold fixed matter just as much as those whose values we change!

So, in thermodynamics something like $\frac{\partial P}{\partial V}$ doesn’t mean anything because we haven’t specified a coordinate system, and there are lots of options.

When we write $\frac{\partial P}{\partial V}|_T$ we are rapidly saying “I’m using the functions $V,T$ as coordinates, and I’m computing the rate of change of $P$ as I change the value of $V$ while holding $T$ fixed”.

So, this would be different from $\frac{\partial P}{\partial V}|_S,$ for example.

They replied:

John Baez wrote:

Thermodynamics is confusing because it studies the same manifold with many different coordinate systems. In the case at hand we have four functions on a 2-dimensional manifold, called $S,T,P,V.$ Any pair of these functions can serve as a coordinate system.

Ha, this is something I didn’t understand. I thought $S,T,P,V$ were coordinates. Of course now that I think about it, this doesn’t make sense!

Partial derivatives only make sense with respect to a coordinate system: given coordinates $x,y$ on a 2-manifold and a function $f$ on this 2-manifold we write

$\frac{\partial f}{\partial x}|_y$

to mean the derivative of $f$ as we change the value of the coordinate $x$ while holding the coordinate $y$ fixed. The coordinates we hold fixed matter just as much as those whose values we change!

I find this puzzling again. First of all, the notation $\vert_y$ is something I have only ever seen used with that meaning in thermodynamics.

When I took differential geometry, the derivative of $f$ with respect to the coordinate chart $x_i$ would be denoted as $\partial_{x_i} f,$ and understood to be a scalar only defined where said chart is defined.

Let me see if this makes sense (suppose we fixed a chart and we are talking there): since we have 2 dimensions, to specify a derivative I need to give you 2 components (since a derivative is now a vector in a 2-dimensional space). So $\frac{\partial f}{\partial x}\vert_y$ really means $\partial_x f$ in my notation, where $y$ is implicitly fixed because no $\partial_y$ is appearing.

Is that what’s going on?

Just to check: if the manifold were n-dimensional, I would write $\frac{\partial f}{\partial x_1}\vert_{x_2, \ldots, x_n}$ to denote the derivative along the first coordinate chart only.

I replied:

I find this puzzling again. First of all, the notation $\vert_y$ is something I have only ever seen used with that meaning in thermodynamics.

It’s used in math and physics whenever the choice of coordinates isn’t clear otherwise.

When I took differential geometry, the derivative of $f$ with respect to the coordinate chart $x_i$ would be denoted as $\partial_{x_i} f$, and understood to be a scalar only defined where said chart is defined.

Right, but there they are fixing a coordinate chart at the beginning of the discussion, so the notation for partial derivative doesn’t need to tell you which coordinates you’re using. It’s not a good notation for when you’re changing coordinates every ten seconds—for example, when the right side of an equation is using different coordinates than the left side!

Let me see if this makes sense (suppose we fixed a chart and we are talking there): since we have 2 dimensions, to specify a derivative I need to give you 2 components (since a derivative is now a vector in a 2-dimensional space). So $\frac{\partial f}{\partial x}\vert_y$ really means $\partial_x f$ in my notation, where $y$ is implicitly fixed because no $\partial_y$ is appearing.

Is that what’s going on?

Right!

Just to check: if the manifold were n-dimensional, I would write $\frac{\partial f}{\partial x_1}\vert_{x_2, \ldots, x_n}$ to denote the derivative along the first coordinate chart only.

Exactly. So this notation for the partial derivatives tells you that the coordinates being used are $x_1, x_2, \dots, x_n.$

• Keith Harbaugh says:

This seems to me a really useful discussion of something likely to confuse almost all.
Thanks!
And congratulations on becoming an AMS Fellow!
Good that the AMS is recognizing contributions beyond published papers.
You certainly have done plenty to foster the general understanding of various mathematical issues.
Interesting that the AMS recognizes your contributions, maybe more so than UCR?
I guess that may be due to an understandable difference in goals and priorities: mathematical education versus whatever UCR expected of its faculty.
Interesting, but I merely mention this.
Don’t want to provoke counterproductive controversy.

9. John Baez says:

You can now watch a video of my talk on “Classical mechanics versus thermodynamics”:

10. westy31 says:

Bit of a late reply, but it took me a while to understand this better. I made some pictures of examples.
First, an ideal gas. Below is a picture of internal energy ($U$) versus entropy and volume ($S,V$). It is a bit unusual to express $U$ as $U(S,V)$; it is much simpler to just say $U = C_v T$. But we are exploring relations with classical mechanics and quantum mechanics…

We can now construct a vector field $\left(\frac{\partial U}{\partial S},\frac{\partial U}{\partial V} \right )= \left (T,-p \right )$. Because the vector field is the gradient of a scalar, doing a line integral (TdS -pdV) of it around a loop will give zero. But if we integrate the individual components TdS and pdV separately, they give non-zero answers. These are interpreted respectively as $\int T\ dS$= the external heat supplied, and $\int p\ dV$= external work done. The argument works for any cycle, but the loop in the picture is a Carnot cycle; it goes along lines of constant S and constant U (Constant U is in our case the same as constant T).
We can also interpret the area enclosed by the loop as being the same as the line integral around the loop, e.g. $\int T\ dS = \int\int dT\ dS$
Note that the system will not spontaneously go around this loop; the loop is forced by external means: a piston.
Next, let’s do a Legendre transformation:

The Legendre transformation switches the independent variable $V$ to its ‘conjugate’ $p=-\left(\frac{\partial U}{\partial V}\right )$. Correspondingly, the dependent variable is transformed to the enthalpy $H = U + pV$.
The picture again shows the same Carnot cycle. Some problems are easier with these other coordinates. We could also switch from S to T.
Now let’s go to mechanics. The example in the picture is the harmonic oscillator. Again, expressing the action in this way is rather unfamiliar. I got help from a nice article on the Hamilton–Jacobi equation: https://hal.archives-ouvertes.fr/hal-02317455
(NB: The symbols S and p have changed meaning)

It turns out the action ($S$) is multivalued in terms of $(t,q)$, because of an arcsine function. You can add multiples of $2\pi$ to the action, and there is also a pi-S solution, which corresponds to a sign switch in $p$. Again, we can construct a vector field $(E,p)$. Because of the multivaluedness, there are 2 vector fields, with sign-switched $p$. To not mess up the picture too much, I did not draw both sets at the same point.
Now, the interpretation is different. It makes little sense to construct an engine cycle as in thermodynamics; it would be an engine that can do loops in time that “converts kinetic Action to potential action, keeping the total action the same”.
The system can have any initial point in the plot. But instead of a forced external cycle, we have a “forced passage of time”. As time passes, and if the external force on the oscillator is zero, the system “follows the arrows” of the vector field, as in the purple line. The purple line is the much more familiar sine-wave q=sin(t) of the Harmonic oscillator.
This Hamilton–Jacobi formalism of mechanics, where you work directly with the action instead of quickly going to the Lagrangian or Hamiltonian, seems a bit difficult and abstract. But there is a nice relation to quantum mechanics: the lines of constant action are related to lines of constant phase of the corresponding Schrödinger equation.
I have not figured out yet what “following the arrows” means in the thermodynamics case.

• Toby Bartels says:

While a cycle in $(t,q)$-space is not physical, you can have two different paths (each with strictly increasing $t$) with the same beginning point and the same ending point, and compare those. Then instead of ‹total X along the cycle›, you have ‹difference between X along the two paths›.

• westy31 says:

Yes, good point. If you have 2 time-shifted solutions for the free harmonic oscillator, they will intersect twice per time-interval of 2pi. One interesting case is where the time-shift is infinitesimal (dt).

The infinitesimal loop that you get should have some relation to the Hamilton–Jacobi equation.