Classical Mechanics versus Thermodynamics (Part 3)

There’s a fascinating analogy between classical mechanics and thermodynamics, which I last talked about in 2012:

Classical mechanics versus thermodynamics (part 1).
Classical mechanics versus thermodynamics (part 2).

I’ve figured out more about it, and today I’m giving a talk about it in the physics colloquium at the University of British Columbia. It’s a colloquium talk that’s supposed to be accessible to upper-level undergraduates, so I’ll spend a lot of time reviewing the basics… which is good, I think.

I don’t know if the talk will be recorded, but you can see my slides here, and I’ll base this blog article on them.

Hamilton’s equations versus the Maxwell relations

Why do Hamilton’s equations in classical mechanics:

\begin{array}{ccr}  \displaystyle{  \frac{d p}{d t} }  &=&  \displaystyle{- \frac{\partial H}{\partial q} } \\  \\  \displaystyle{  \frac{d q}{d t} } &=&  \displaystyle{ \frac{\partial H}{\partial p} }  \end{array}

look so much like the Maxwell relations in thermodynamics?

\begin{array}{ccr}  \displaystyle{ \left. \frac{\partial T}{\partial V}\right|_S }  &=&  \displaystyle{ - \left. \frac{\partial P}{\partial S}\right|_V } \\   \\  \displaystyle{ \left. \frac{\partial S}{\partial  V}\right|_T  }  &=&  \displaystyle{ \left. \frac{\partial P}{\partial T} \right|_V }  \end{array}

William Rowan Hamilton discovered his equations describing classical mechanics in terms of energy around 1827. By 1834 he had also introduced Hamilton’s principal function, which I’ll explain later.

James Clerk Maxwell is most famous for his equations describing electromagnetism, perfected in 1865. But he also worked on thermodynamics, and discovered the ‘Maxwell relations’ in 1871.

Hamilton’s equations describe how the position q and momentum p of a particle on a line change with time t if we know the energy or Hamiltonian H(q,p):

\begin{array}{ccr}  \displaystyle{  \frac{d p}{d t} }  &=&  \displaystyle{- \frac{\partial H}{\partial q} } \\  \\  \displaystyle{  \frac{d q}{d t} } &=&  \displaystyle{ \frac{\partial H}{\partial p} }  \end{array}

Two of the Maxwell relations connect the volume V, entropy S, pressure P and temperature T of a system in thermodynamic equilibrium:

\begin{array}{ccr}  \displaystyle{ \left. \frac{\partial T}{\partial V}\right|_S }  &=&  \displaystyle{ - \left. \frac{\partial P}{\partial S}\right|_V } \\   \\  \displaystyle{ \left. \frac{\partial S}{\partial  V}\right|_T  }  &=&  \displaystyle{ \left. \frac{\partial P}{\partial T} \right|_V }  \end{array}
Using this change of variables:

q \to S \qquad p \to T
t \to V \qquad H \to P

Hamilton’s equations:

\begin{array}{ccr}  \displaystyle{  \frac{d p}{d t} }  &=&  \displaystyle{- \frac{\partial H}{\partial q} } \\   \\  \displaystyle{  \frac{d q}{d t} } &=&  \displaystyle{ \frac{\partial H}{\partial p} }  \end{array}

become these relations:

\begin{array}{ccr}  \displaystyle{  \frac{d T}{d V} }  &=&  \displaystyle{- \frac{\partial P}{\partial S} } \\  \\  \displaystyle{  \frac{d S}{d V} } &=&  \displaystyle{ \frac{\partial P}{\partial T} }  \end{array}

These are almost like two of the Maxwell relations! But in thermodynamics we always use partial derivatives:

\begin{array}{ccr}  \displaystyle{  \frac{\partial T}{\partial V} }  &=&  \displaystyle{ - \frac{\partial P}{\partial S} } \\   \\  \displaystyle{ \frac{\partial S}{\partial  V}   }  &=&  \displaystyle{  \frac{\partial P}{\partial T} }  \end{array}

and we say which variables are held constant:

\begin{array}{ccr}  \displaystyle{ \left. \frac{\partial T}{\partial V}\right|_S }  &=&  \displaystyle{ - \left. \frac{\partial P}{\partial S}\right|_V } \\   \\  \displaystyle{ \left. \frac{\partial S}{\partial  V}\right|_T  }  &=&  \displaystyle{ \left. \frac{\partial P}{\partial T} \right|_V }  \end{array}

If we write Hamilton’s equations in the same style as the Maxwell relations, they look funny:

\begin{array}{ccr}  \displaystyle{ \left. \frac{\partial p}{\partial t}\right|_q }  &=&  \displaystyle{ - \left. \frac{\partial H}{\partial q}\right|_t } \\   \\  \displaystyle{ \left. \frac{\partial q}{\partial  t}\right|_p  }  &=&  \displaystyle{ \left. \frac{\partial H}{\partial p} \right|_t }  \end{array}

Can this possibly be right?

Yes! When we work out the analogy between classical mechanics and thermodynamics we’ll see why.

We can get Maxwell’s relations starting from this: the internal energy U of a system in equilibrium depends on its entropy S and volume V.

Temperature and pressure are derivatives of U:

\displaystyle{  T =  \left.\frac{\partial U}{\partial S} \right|_V  \qquad  P = - \left. \frac{\partial U}{\partial V} \right|_S }

Maxwell’s relations follow from the fact that mixed partial derivatives commute! For example:

\displaystyle{    \left. \frac{\partial T}{\partial V} \right|_S \; = \;  \left. \frac{\partial}{\partial V} \right|_S \left. \frac{\partial }{\partial S} \right|_V U \; = \;  \left. \frac{\partial }{\partial S} \right|_V \left. \frac{\partial}{\partial V} \right|_S  U \; = \;  - \left. \frac{\partial P}{\partial S} \right|_V }
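If you like checking such things by machine, here’s a minimal sympy sketch of this computation, assuming nothing beyond an arbitrary smooth internal energy U(S,V):

import sympy as sp

S, V = sp.symbols('S V')
U = sp.Function('U')(S, V)      # an arbitrary smooth internal energy U(S, V)

T = sp.diff(U, S)               # temperature:  T =  dU/dS  at constant V
P = -sp.diff(U, V)              # pressure:     P = -dU/dV  at constant S

# The Maxwell relation dT/dV|_S = -dP/dS|_V says this difference vanishes:
print(sp.simplify(sp.diff(T, V) + sp.diff(P, S)))    # prints 0

It prints 0 because sympy knows that mixed partial derivatives of a smooth function commute.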

To get Hamilton’s equations the same way, we need a function W of the particle’s position q and time t such that

\displaystyle{       p = \left. \frac{\partial W}{\partial q} \right|_t   \qquad       H = -\left. \frac{\partial W}{\partial t} \right|_q  }

Then we’ll get Hamilton’s equations from the fact that mixed partial derivatives commute!

The trick is to let W be ‘Hamilton’s principal function’. So let’s define that. First, the action of a particle’s path is

\displaystyle{  \int_{t_0}^{t_1} L(q(t),\dot{q}(t)) \, dt  }

where L is the Lagrangian:

L(q,\dot{q}) = p \dot q - H(q,p)

The particle always takes a path from (q_0, t_0) to (q_1, t_1) that’s a critical point of the action. We can derive Hamilton’s equations from this fact.

Let’s assume this critical point is a minimum. Then the least action among all paths from (q_0,t_0) to (q_1,t_1) is called Hamilton’s principal function:

W(q_0,t_0,q_1,t_1) = \min_{q \; \mathrm{with} \; q(t_0) = q_0, q(t_1) = q_1}  \int_{t_0}^{t_1} L(q(t),\dot{q}(t)) \, dt
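Here’s a small numerical sketch of this definition for the simplest example, a free particle with L = m\dot{q}^2/2: minimize the discretized action over paths with fixed endpoints, and compare with the exact answer W = m(q_1-q_0)^2/2(t_1-t_0).

import numpy as np
from scipy.optimize import minimize

m = 1.0
q0, t0, q1, t1 = 0.0, 0.0, 2.0, 1.0
N = 50
ts = np.linspace(t0, t1, N + 1)
dt = ts[1] - ts[0]

def action(q_interior):
    # discretized action for the free particle, L = (m/2) qdot^2
    q = np.concatenate(([q0], q_interior, [q1]))
    qdot = np.diff(q) / dt
    return np.sum(0.5 * m * qdot**2 * dt)

result = minimize(action, np.zeros(N - 1))     # minimize over the interior points of the path
print(result.fun)                              # about 2.0
print(m * (q1 - q0)**2 / (2 * (t1 - t0)))      # exactly 2.0

The minimizing path comes out as the straight line from (q_0,t_0) to (q_1,t_1), as it should.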

A beautiful fact: if we differentiate Hamilton’s principal function, we get back the energy H and momentum p:

\begin{array}{ccc}  \displaystyle{  \frac{\partial}{\partial q_0} W(q_0,t_0,q_1,t_1) = -p(t_0) } &&  \displaystyle{    \frac{\partial}{\partial t_0} W(q_0,t_0,q_1,t_1) = H(t_0) } \\  \\  \displaystyle{   \frac{\partial}{\partial q_1} W(q_0,t_0,q_1,t_1) =  p(t_1) } &&  \displaystyle{   \frac{\partial}{\partial t_1}W(q_0,t_0,q_1,t_1)  = -H(t_1) }  \end{array}

You can prove these equations using

L  = p\dot{q} - H

which implies that

\displaystyle{ W(q_0,t_0,q_1,t_1) =  \int_{q_0}^{q_1} p \, dq \; - \; \int_{t_0}^{t_1} H \, dt }

where we integrate along the minimizing path. (It’s not as trivial as it may look, but you can do it.)
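For the free particle you can check all four equations symbolically: there W = m(q_1-q_0)^2/2(t_1-t_0), the momentum along the path is the constant m(q_1-q_0)/(t_1-t_0), and H = p^2/2m. Here’s a quick sympy sketch of that check:

import sympy as sp

m, q0, t0, q1, t1 = sp.symbols('m q_0 t_0 q_1 t_1', positive=True)

W = m * (q1 - q0)**2 / (2 * (t1 - t0))    # principal function of the free particle
p = m * (q1 - q0) / (t1 - t0)             # the (constant) momentum along the path
H = p**2 / (2 * m)                        # the (constant) energy along the path

checks = [sp.diff(W, q0) + p,             # dW/dq_0 = -p(t_0)
          sp.diff(W, t0) - H,             # dW/dt_0 =  H(t_0)
          sp.diff(W, q1) - p,             # dW/dq_1 =  p(t_1)
          sp.diff(W, t1) + H]             # dW/dt_1 = -H(t_1)
print([sp.simplify(c) for c in checks])   # [0, 0, 0, 0]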

Now let’s fix a starting-point (q_0,t_0) for our particle, and say its path ends at any old point (q,t). Think of Hamilton’s principal function as a function of just (q,t):

W(q,t) = W(q_0,t_0,q,t)

Then the particle’s momentum and energy when it reaches (q,t) are:

\displaystyle{   p = \left. \frac{\partial W}{\partial q} \right|_t   \qquad       H = -\left. \frac{\partial W}{\partial t} \right|_q }

This is just what we wanted. Hamilton’s equations now follow from the fact that mixed partial derivatives commute!
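For example, fix (q_0,t_0) = (0,0) for the free particle, so W(q,t) = mq^2/2t. Here’s a minimal sympy check that the funny-looking partial-derivative form of Hamilton’s equations really does hold on these trajectories:

import sympy as sp

m, q, t, p_ = sp.symbols('m q t p', positive=True)

W = m * q**2 / (2 * t)       # principal function for paths from (0,0) to (q,t)
p = sp.diff(W, q)            # p =  dW/dq|_t = m q / t
H = -sp.diff(W, t)           # H = -dW/dt|_q = m q^2 / (2 t^2)

# First equation: dp/dt|_q = -dH/dq|_t, using (q, t) as coordinates
print(sp.simplify(sp.diff(p, t) + sp.diff(H, q)))           # 0

# Second equation: dq/dt|_p = dH/dp|_t, so switch to (p, t) coordinates
q_pt = p_ * t / m                    # invert p = m q / t
H_pt = H.subs(q, q_pt)               # H in the (p, t) coordinates: p^2/(2m)
print(sp.simplify(sp.diff(q_pt, t) - sp.diff(H_pt, p_)))    # 0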

So, we have this analogy between classical mechanics and thermodynamics:

Classical mechanics | Thermodynamics
action: W(q,t) | internal energy: U(S,V)
position: q | entropy: S
momentum: p = \frac{\partial W}{\partial q} | temperature: T = \frac{\partial U}{\partial S}
time: t | volume: V
energy: H = -\frac{\partial W}{\partial t} | pressure: P = -\frac{\partial U}{\partial V}
dW = p \, dq - H \, dt | dU = T \, dS - P \, dV

What’s really going on in this analogy? It’s not really the match-up of variables that matters most—it’s something a bit more abstract. Let’s dig deeper.

I said we could get Maxwell’s relations from the fact that mixed partials commute, and gave one example:

\displaystyle{   \left. \frac{\partial T}{\partial V} \right|_S \; = \;  \left. \frac{\partial}{\partial V} \right|_S \left. \frac{\partial }{\partial S} \right|_V U \; = \;  \left. \frac{\partial }{\partial S} \right|_V \left. \frac{\partial}{\partial V} \right|_S  U \; = \;  - \left. \frac{\partial P}{\partial S} \right|_V }

But to get the other Maxwell relations we need to differentiate other functions—and there are four of them!

U: internal energy
U - TS: Helmholtz free energy
U + PV: enthalpy
U + PV - TS: Gibbs free energy

They’re important, but memorizing all the facts about them has annoyed students of thermodynamics for over a century. Is there some other way to get the Maxwell relations? Yes!

In 1958 David Ritchie explained how we can get all four Maxwell relations from one equation! Jaynes also explained how in some unpublished notes for a book. Here’s how it works.

Start here:

dU = T d S - P d V

Integrate around a loop \gamma:

\displaystyle{    \oint_\gamma T d S - P d V  = \oint_\gamma d U = 0 }

so

\displaystyle{  \oint_\gamma T d S = \oint_\gamma P dV }

This says that in any such cycle, the heat added to the system equals the work it does.
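Here’s a quick numerical sanity check of that statement, assuming a monatomic ideal gas with Nk = 1, so that P = T/V and dS = dV/V + (3/2) dT/T: integrate T dS and P dV around an arbitrary closed loop in the (T,V) plane.

import numpy as np

# monatomic ideal gas with Nk = 1:  P = T/V,  dS = dV/V + (3/2) dT/T
theta = np.linspace(0.0, 2 * np.pi, 4000, endpoint=False)
dtheta = theta[1] - theta[0]

T = 300.0 + 50.0 * np.cos(theta)     # an arbitrary closed loop in the (T, V) plane
V = 1.0 + 0.3 * np.sin(theta)
dT = -50.0 * np.sin(theta)           # dT/dtheta
dV = 0.3 * np.cos(theta)             # dV/dtheta

P = T / V
dS = dV / V + 1.5 * dT / T           # dS/dtheta along the loop

print(np.sum(T * dS) * dtheta)       # the heat: integral of T dS around the loop
print(np.sum(P * dV) * dtheta)       # the work: integral of P dV around the loop

The two numbers agree to many decimal places, as they must, since their difference is the integral of dU around a closed loop.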

Green’s theorem implies that if a loop \gamma encloses a region R,

\displaystyle{   \oint_\gamma T d S = \int_R dT \, dS }

Similarly

\displaystyle{ \oint_\gamma P d V  = \int_R dP \, dV }

But we know these are equal!

So, we get

\displaystyle{ \int_R dT \, dS  =   \int_R dP \, dV  }

for any region R enclosed by a loop. And this in turn implies

d T\, dS = dP \, dV

In fact, all of Maxwell’s relations are hidden in this one equation!

Mathematicians call something like dT \, dS a 2-form and write it as dT \wedge dS. It’s an ‘oriented area element’, so

dT \, dS = -dS \, dT

Now, starting from

d T\, dS = dP \, dV

we can choose any coordinates X,Y and get

\displaystyle{   \frac{dT \, dS}{dX \, dY} = \frac{dP \, dV}{dX \, dY}  }

(Yes, this is mathematically allowed!)

If we take X = V, Y = S we get

\displaystyle{    \frac{dT \, dS}{dV \, dS} = \frac{dP \, dV}{dV \, dS}  }

and thus

\displaystyle{   \frac{dT \, dS}{dV \, dS} = - \frac{dV \, dP}{dV \, dS}  }

We can actually cancel some factors and get one of the Maxwell relations:

\displaystyle{  \left.   \frac{\partial T}{\partial V}\right|_S = - \left. \frac{\partial P}{\partial S}\right|_V  }

(Yes, this is mathematically justified!)

Let’s try another one. If we take X = T, Y = V we get

\displaystyle{   \frac{dT \, dS}{dT \, dV} = \frac{dP \, dV}{dT \, dV} }

Cancelling some factors here we get another of the Maxwell relations:

\displaystyle{ \left.   \frac{\partial S}{\partial V} \right|_T = \left. \frac{\partial P}{\partial T}  \right|_V }

Other choices of X,Y give the other two Maxwell relations.
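If you’d like to see the cancellation trick carried out by machine, here’s a sympy sketch for a toy monatomic ideal gas with internal energy U(S,V) = V^{-2/3} e^{2S/3}, units chosen so that Nk = 1 and all other constants are dropped. It checks the two relations derived above by computing both sides in the relevant coordinates:

import sympy as sp

S, V = sp.symbols('S V', positive=True)
Tsym = sp.symbols('T', positive=True)

U = V**sp.Rational(-2, 3) * sp.exp(sp.Rational(2, 3) * S)   # toy ideal gas, Nk = 1
T = sp.diff(U, S)            # temperature as a function of (S, V)
P = -sp.diff(U, V)           # pressure as a function of (S, V)

# dT/dV|_S = -dP/dS|_V, computed directly in the (V, S) coordinates:
print(sp.simplify(sp.diff(T, V) + sp.diff(P, S)))                # 0

# dS/dV|_T = dP/dT|_V needs the (T, V) coordinates, so invert T(S, V) for S:
S_TV = sp.solve(sp.Eq(Tsym, T), S)[0]     # S as a function of (T, V)
P_TV = P.subs(S, S_TV)                    # P as a function of (T, V)
print(sp.simplify(sp.diff(S_TV, V) - sp.diff(P_TV, Tsym)))       # 0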

In short, Maxwell’s relations all follow from one simple equation:

d T\, dS = dP \, dV

Similarly, Hamilton’s equations follow from this equation:

d p\, dq = dH \, dt

All calculations work in exactly the same way!

By the way, we can get these equations efficiently using the identity d^2 = 0 and the product rule for d:

\begin{array}{ccl} dU = TdS - PdV & \implies & d^2 U = d(TdS - P dV) \\ \\  &\implies& 0 = dT\, dS - dP \, dV  \\ \\  &\implies & dT\, dS = dP \, dV  \end{array}
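In coordinates, this computation says that when we expand dT \, dS and dP \, dV in the (S,V) chart, writing dT = T_S \, dS + T_V \, dV and so on, the two 2-forms have the same dS \, dV coefficient. Here’s a minimal sympy sketch, again for an arbitrary smooth U(S,V):

import sympy as sp

S, V = sp.symbols('S V')
U = sp.Function('U')(S, V)
T = sp.diff(U, S)
P = -sp.diff(U, V)

def wedge_coeff(X, Y):
    # coefficient of dS dV in dX dY, expanded in the (S, V) chart
    return sp.diff(X, S) * sp.diff(Y, V) - sp.diff(X, V) * sp.diff(Y, S)

# dT dS and dP dV have equal coefficients, so dT dS - dP dV vanishes:
print(sp.simplify(wedge_coeff(T, S) - wedge_coeff(P, V)))    # 0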

Now let’s change viewpoint slightly and temporarily treat T and P as independent of S and V. So, let’s start with \mathbb{R}^4 with coordinates (S,T,V,P). Then this 2-form on \mathbb{R}^4:

\omega = dT \, dS - dP \, dV

is called a symplectic structure.

Choosing the internal energy function U(S,V), we get this 2-dimensional surface of equilibrium states:

\displaystyle{   \Lambda = \left\{ (S,T,V,P): \; \textstyle{ T = \left.\phantom{\Big|} \frac{\partial U}{\partial S}\right|_V  , \; P = -\left. \phantom{\Big|}\frac{\partial U}{\partial V} \right|_S} \right\} \; \subset \; \mathbb{R}^4 }

Since

\omega = dT \, dS - dP \, dV

we know

\displaystyle{  \int_R \omega = 0 }

for any region R in the surface \Lambda, since on this surface dU = TdS - PdV and our old argument applies.

This fact encodes the Maxwell relations! Physically it says: for any cycle on the surface of equilibrium states, the heat flow in equals the work done.

Similarly, in classical mechanics we can start with \mathbb{R}^4 with coordinates (q,p,t,H), treating p and H as independent of q and t. This 2-form on \mathbb{R}^4:

\omega = dp \, dq - dH \, dt

is a symplectic structure. Hamilton’s principal function W(q,t) defines a 2d surface

\displaystyle{   \Lambda = \left\{ (q,p,t,H): \; \textstyle{ p = \left.\phantom{\Big|} \frac{\partial W}{\partial q}\right|_t  , \;  H = -\left.\phantom{\Big|} \frac{\partial W}{\partial t} \right|_q} \right\}  \subset \mathbb{R}^4 }

We have \int_R \omega = 0 for any region R in this surface \Lambda. And this fact encodes Hamilton’s equations!
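Here’s the same kind of check for the free-particle surface from earlier, where W(q,t) = mq^2/2t: pulling \omega back to the (q,t) chart gives zero, which is Hamilton’s equations in disguise.

import sympy as sp

m, q, t = sp.symbols('m q t', positive=True)
W = m * q**2 / (2 * t)     # Hamilton's principal function for free-particle paths from (0,0)

p = sp.diff(W, q)          # p =  dW/dq|_t
H = -sp.diff(W, t)         # H = -dW/dt|_q

def wedge_coeff(X, Y):
    # coefficient of dq dt in dX dY, expanded in the (q, t) chart
    return sp.diff(X, q) * sp.diff(Y, t) - sp.diff(X, t) * sp.diff(Y, q)

# pull back omega = dp dq - dH dt to the surface Lambda and check that it vanishes:
print(sp.simplify(wedge_coeff(p, q) - wedge_coeff(H, t)))    # 0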

Summary

In thermodynamics, any 2d region R in the surface \Lambda of equilibrium states has

\displaystyle{  \int_R \omega = 0 }

This is equivalent to the Maxwell relations.

In classical mechanics, any 2d region R in the surface \Lambda of allowed (q,p,t,H) 4-tuples for particle trajectories through a single point (q_0,t_0) has

\displaystyle{   \int_R \omega = 0 }

This is equivalent to Hamilton’s equations.

These facts generalize when we add extra degrees of freedom, e.g. the particle number N in thermodynamics:

\omega = dT \, dS - dP \, dV + d\mu \, dN

or more dimensions of space in classical mechanics:

\omega =  dp_1 \, dq_1 + \cdots + dp_{n-1} dq_{n-1} - dH \, dt

We get a vector space \mathbb{R}^{2n} with a 2-form \omega on it, and a Lagrangian submanifold \Lambda \subset \mathbb{R}^{2n}: that is, an n-dimensional submanifold such that

\int_R \omega = 0

for any 2d region R \subset \Lambda.

This is more evidence for Alan Weinstein’s “symplectic creed”:

EVERYTHING IS A LAGRANGIAN SUBMANIFOLD

As a spinoff, we get two extra Hamilton’s equations for a point particle on a line! They look weird, but I’m sure they’re correct for trajectories that go through a specific but arbitrary spacetime point (q_0,t_0):

\begin{array}{ccrcccr}  \displaystyle{ \left. \frac{\partial T}{\partial V}\right|_S }  &=&  \displaystyle{ - \left. \frac{\partial P}{\partial S}\right|_V } & \qquad &  \displaystyle{ \left. \frac{\partial p}{\partial t}\right|_q }  &=&  \displaystyle{ - \left. \frac{\partial H}{\partial q}\right|_t }  \\   \\  \displaystyle{ \left. \frac{\partial S}{\partial  V}\right|_T  }  &=&  \displaystyle{ \left. \frac{\partial P}{\partial T} \right|_V } & &  \displaystyle{ \left. \frac{\partial q}{\partial  t}\right|_p  }  &=&  \displaystyle{ \left. \frac{\partial H}{\partial p} \right|_t }  \\ \\  \displaystyle{ \left. \frac{\partial V}{\partial T} \right|_P }  &=&  \displaystyle{ - \left. \frac{\partial S}{\partial P} \right|_T } & &  \displaystyle{ \left. \frac{\partial t}{\partial p} \right|_H }  &=&  \displaystyle{ - \left. \frac{\partial q}{\partial H} \right|_p }  \\ \\  \displaystyle{ \left. \frac{\partial V}{\partial S} \right|_P } &=&  \displaystyle{ \left. \frac{\partial T}{\partial P} \right|_S }  & &  \displaystyle{ \left. \frac{\partial p}{\partial H} \right|_q } &=&  \displaystyle{ \left. \frac{\partial t}{\partial q} \right|_H }  \end{array}

Part 1: Hamilton’s equations versus the Maxwell relations.

Part 2: the role of symplectic geometry.

Part 3: a detailed analogy between classical mechanics and thermodynamics.

Part 4: what is the analogue of quantization for thermodynamics?

19 Responses to Classical Mechanics versus Thermodynamics (Part 3)

  1. Frederic Barbaresco says:

    Charles-Michel Marle (Sorbonne University) Lecture on Souriau Symplectic model of Thermodynamics

  2. Frederic Barbaresco says:

    Souriau Symplectic model of Entropy as Casimir function in Coadjoint representation

  3. John Baez says:

    Thanks for these links! I’m reading Souriau on thermodynamics in his book Structure of Dynamical Systems: A Symplectic View of Physics. While probably nothing in my talk here would surprise him, he doesn’t seem to come out and make the points I’m making here. In particular, he focuses much more on statistical mechanics than classical thermodynamics.

    Classical thermodynamics has also been studied in the language of contact geometry, and a lot of that can be translated into symplectic geometry. So, it would be quite hard to determine exactly what, if anything, I’m doing here is “new”. But I think it deserves a clear and simple explanation.

  4. Toby Bartels says:

    Typos: When you first introduce W, you write T for time instead of t. You do this again when you first mention Hamiltonian mechanics in \mathbb R^4, and then you also write P for momentum instead of p.

  5. Wolfgang says:

    It’s possibly too far-fetched an analogy, but if entropy and temperature correspond to position and momentum, could there exist a so-far-undiscovered uncertainty principle in ‘quantum’ thermodynamics too?

  6. domenico says:

    If there exists a Hamiltonian description of a thermodynamic system, does there then exist a Schrödinger equation for the thermodynamic system, of the type

    i \hbar \partial_V \Psi (V) = H (S, T) \Psi (V)

    which describes the probability amplitude of the thermodynamic system?

    • domenico says:

      Sorry, I hadn’t read Wolfgang’s post

    • John Baez says:

      See Part 4 for something about this sort of question.

      • domenico says:

        Thank you for your interesting posts.
        I am trying to work out the physical constant in the simplest way possible:
        i \epsilon \partial_V \Psi(V) = P(T,S) \Psi(V)
        where \epsilon is the Planck constant for a quantum thermodynamic system. But this means that every quantizable system has a different Planck constant; perhaps every quantizable system has an experimentally measurable constant with a different dimensional analysis (in thermodynamics there is no time, because thermodynamic transformations are reversible processes).
        The commutators are
        [\hat{T},\hat{S}]=i\epsilon
        [\hat{V},\hat{P}]=i\epsilon
        and it might be possible to measure this Planck constant in a thermodynamic system. I am thinking that superfluid drops could be such a system, with amplitude phases that could be shown experimentally: for an isobaric process the wave function must be
        \Psi(V) = \frac{1}{\sqrt{V_2-V_1}} e^{-i \frac{P V}{\epsilon}}
        I want to see if there exists a thought experiment to evaluate \epsilon from \hbar (I am now thinking of a superfluid in a piston interacting with a photon on the piston crown, to measure the volume and pressure).

        • domenico says:

          I am thinking that each Hamiltonian system (chemistry, physics, biology, economics, thermodynamics, etc.) that is an approximation of a real system has a quantum approximation, so that the uncertainty principle is true for each Hamiltonian system, with a measurable Planck constant that differs in value and in dimensions.
          So each measurable Hamiltonian system has an uncertainty principle for the measurable standard deviations; the elementary interactions of the systems are different, so the Planck constants must be different.
          Moreover, some Hamiltonian systems of non-classical type must show a quantization of some quantities, for example some approximation of pressure in thermodynamics.

  7. John Baez says:

    Over on the Category Theory Community Server someone wrote:

    Such a clear exposition John! Just a question: why do we get \vert_q and \vert_t in the partial derivatives on slide 16 (and the next ones)?

    More specifically, what’s the difference between something like \frac{\partial P}{\partial V}\vert_T and \frac{\partial P}{\partial V}? Is the first a projection of the second along the direction in which T remains constant?

    I replied:

    The first means something, the second does not.

    Thermodynamics is confusing because it studies the same manifold with many different coordinate systems. In the case at hand we have four functions on a 2-dimensional manifold, called S,T,P,V. Any pair of these functions can serve as a coordinate system.

    Partial derivatives only make sense with respect to a coordinate system: given coordinates x,y on a 2-manifold and a function f on this 2-manifold we write

    \frac{\partial f}{\partial x}|_y

    to mean the derivative of f as we change the value of the coordinate x while holding the coordinate y fixed. The coordinates we hold fixed matter just as much as those whose values we change!

    So, in thermodynamics something like \frac{\partial P}{\partial V} doesn’t mean anything because we haven’t specified a coordinate system, and there are lots of options.

    When we write \frac{\partial P}{\partial V}|_T we are rapidly saying “I’m using the functions V,T as coordinates, and I’m computing the rate of change of P as I change the value of V while holding T fixed”.

    So, this would be different from \frac{\partial P}{\partial V}|_S, for example.

    They replied:

    John Baez wrote:

    Thermodynamics is confusing because it studies the same manifold with many different coordinate systems. In the case at hand we have four functions on a 2-dimensional manifold, called S,T,P,V. Any pair of these functions can serve as a coordinate system.

    Ha, this is something I didn’t understand. I thought S,T,P,V were coordinates. Of course now that I think about it, this doesn’t make sense!

    Partial derivatives only make sense with respect to a coordinate system: given coordinates x,y on a 2-manifold and a function f on this 2-manifold we write

    \frac{\partial f}{\partial x}|_y

    to mean the derivative of f as we change the value of the coordinate x while holding the coordinate y fixed. The coordinates we hold fixed matter just as much as those whose values we change!

    I find this puzzling again. First of all, the notation \vert_y is something I have only ever seen used with that meaning in thermodynamics.

    When I took differential geometry, the derivative of f with respect to the coordinate chart x_i would be denoted as \partial_{x_i} f, and understood to be a scalar only defined where said chart is defined.

    Let me see if this makes sense (suppose we fixed a chart and we are talking there): since we have 2 dimensions, to specify a derivative I need to give you 2 components (since a derivative is now a vector in a 2-dimensional space). So \frac{\partial f}{\partial x}\vert_y really means \partial_x f in my notation, where y is implicitly fixed because no \partial_y is appearing.

    Is that what’s going on?

    Just to check: if the manifold were n-dimensional, I would write \frac{\partial f}{\partial x_1}\vert_{x_2, \ldots, x_n} to denote the derivative along the first coordinate chart only.

    I replied:

    I find this puzzling again. First of all, the notation \vert_y is something I have only ever seen used with that meaning in thermodynamics.

    It’s used in math and physics whenever the choice of coordinates isn’t clear otherwise.

    When I took differential geometry, the derivative of f with respect to the coordinate chart x_i would be denoted as \partial_{x_i} f, and understood to be a scalar only defined where said chart is defined.

    Right, but there they are fixing a coordinate chart at the beginning of the discussion, so the notation for partial derivative doesn’t need to tell you which coordinates you’re using. It’s not a good notation for when you’re changing coordinates every ten seconds—for example, when the right side of an equation is using different coordinates than the left side!

    Let me see if this makes sense (suppose we fixed a chart and we are talking there): since we have 2 dimensions, to specify a derivative I need to give you 2 components (since a derivative is now a vector in a 2-dimensional space). So \frac{\partial f}{\partial x}\vert_y really means \partial_x f in my notation, where y is implicitly fixed because no \partial_y is appearing.

    Is that what’s going on?

    Right!

    Just to check: if the manifold were n-dimensional, I would write \frac{\partial f}{\partial x_1}\vert_{x_2, \ldots, x_n} to denote the derivative along the first coordinate chart only.

    Exactly. So this notation for the partial derivatives tells you that the coordinates being used are x_1, x_2, \dots, x_n.
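    To make the difference concrete, here’s a tiny sympy example, assuming a monatomic ideal gas with Nk = 1, so that PV = T and S = \ln V + \frac{3}{2}\ln T up to a constant:

    import sympy as sp

    V, T, S = sp.symbols('V T S', positive=True)

    # In the (V, T) coordinates the pressure is just P = T/V:
    P_VT = T / V
    print(sp.diff(P_VT, V))                # dP/dV|_T  =  -T/V**2

    # In the (V, S) coordinates, first express T, then P, as functions of (V, S):
    T_VS = sp.exp(sp.Rational(2, 3) * (S - sp.log(V)))    # from S = ln V + (3/2) ln T
    P_VS = T_VS / V
    print(sp.simplify(sp.diff(P_VS, V)))   # -(5/3) exp(2S/3) V^(-8/3), i.e. -(5/3) T/V**2

    So \frac{\partial P}{\partial V}|_T and \frac{\partial P}{\partial V}|_S differ by a factor of 5/3: this is the usual statement that for a monatomic ideal gas the adiabatic and isothermal compressibilities differ by the factor \gamma = 5/3.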
