The Large-Number Limit for Reaction Networks (Part 2)

6 July, 2013

I’ve been talking a lot about ‘stochastic mechanics’, which is like quantum mechanics but with probabilities replacing amplitudes. In Part 1 of this mini-series I started telling you about the ‘large-number limit’ in stochastic mechanics. It turns out this is mathematically analogous to the ‘classical limit’ of quantum mechanics, where Planck’s constant \hbar goes to zero.

There’s a lot more I need to say about this, and lots more I need to figure out. But here’s one rather easy thing.

In quantum mechanics, ‘coherent states’ are a special class of quantum states that are very easy to calculate with. In a certain precise sense they are the best quantum approximations to classical states. This makes them good tools for studying the classical limit of quantum mechanics. As \hbar \to 0, they reduce to classical states where, for example, a particle has a definite position and momentum.

We can borrow this strategy to study the large-number limit of stochastic mechanics. We’ve run into coherent states before in our discussions here. Now let’s see how they work in the large-number limit!

Coherent states

For starters, let’s recall what coherent states are. We’ve got k different kinds of particles, and we call each kind a species. We describe the probability that we have some number of particles of each kind using a ‘stochastic state’. This is a formal power series in variables z_1, \dots, z_k. We write it as

\displaystyle{\Psi = \sum_{\ell \in \mathbb{N}^k} \psi_\ell z^\ell }

where z^\ell is an abbreviation for

z_1^{\ell_1} \cdots z_k^{\ell_k}

But for \Psi to be a stochastic state the numbers \psi_\ell need to be probabilities, so we require that

\psi_\ell \ge 0


\displaystyle{ \sum_{\ell \in \mathbb{N}^k} \psi_\ell = 1}

Sums of coefficients like this show up so often that it’s good to have an abbreviation for them:

\displaystyle{ \langle \Psi \rangle =  \sum_{\ell \in \mathbb{N}^k} \psi_\ell}

Now, a coherent state is a stochastic state where the numbers of particles of each species are independent random variables, and the number of the ith species is distributed according to a Poisson distribution.

Since we can pick the means of these Poisson distributions to be whatever we want, we get a coherent state \Psi_c for each list of numbers c \in [0,\infty)^k:

\displaystyle{ \Psi_c = \frac{e^{c \cdot z}}{e^c} }

Here I’m using another abbreviation:

e^{c} = e^{c_1 + \cdots + c_k}

If you calculate a bit, you’ll see

\displaystyle{  \Psi_c = e^{-(c_1 + \cdots + c_k)} \, \sum_{n \in \mathbb{N}^k} \frac{c_1^{n_1} \cdots c_k^{n_k}} {n_1! \, \cdots \, n_k! } \, z_1^{n_1} \cdots z_k^{n_k} }

Thus, the probability of having n_i things of the ith species is equal to

\displaystyle{  e^{-c_i} \, \frac{c_i^{n_i}}{n_i!} }

This is precisely the definition of a Poisson distribution with mean equal to c_i.
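If you like, here’s a quick numerical check of this for a single species: a Python sketch (my own illustration, with a made-up value of c, not part of the original argument) that builds the coefficients e^{-c} c^n/n! and confirms they sum to 1 and have mean c.

```python
import math

# Coefficients of the coherent state Psi_c for a single species:
# psi_n = e^{-c} c^n / n!, a Poisson distribution with mean c.
c = 2.5
psi = [math.exp(-c) * c**n / math.factorial(n) for n in range(100)]

total = sum(psi)                              # should be 1: a stochastic state
mean = sum(n * p for n, p in enumerate(psi))  # should be c

print(total, mean)
```

Truncating at n = 100 is harmless here: for c = 2.5 the neglected tail is astronomically small.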

What are the main properties of coherent states? For starters, they are indeed states:

\langle \Psi_c \rangle = 1

More interestingly, they are eigenvectors of the annihilation operators

a_i = \displaystyle{ \frac{\partial}{\partial z_i} }

since when you differentiate an exponential you get back an exponential:

\begin{array}{ccl} a_i \Psi_c &=&  \displaystyle{ \frac{\partial}{\partial z_i} \frac{e^{c \cdot z}}{e^c} } \\ \\   &=& c_i \Psi_c \end{array}
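Here’s a symbolic check of this eigenvector property, a sketch for one species (so the subscripts drop out), using sympy:

```python
import sympy as sp

# One-species sketch: the coherent state e^{cz}/e^c is an eigenvector
# of the annihilation operator a = d/dz with eigenvalue c.
z, c = sp.symbols('z c', positive=True)
Psi_c = sp.exp(c * z) / sp.exp(c)

a_Psi = sp.diff(Psi_c, z)                  # apply the annihilation operator
assert sp.simplify(a_Psi - c * Psi_c) == 0
print("a Psi_c = c Psi_c")
```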

We can use this fact to check that in this coherent state, the mean number of particles of the ith species really is c_i. For this, we introduce the number operator

N_i = a_i^\dagger a_i

where a_i^\dagger is the creation operator:

(a_i^\dagger \Psi)(z) = z_i \Psi(z)

The number operator has the property that

\langle N_i \Psi \rangle

is the mean number of particles of the ith species. If we calculate this for our coherent state \Psi_c, we get

\begin{array}{ccl} \langle a_i^\dagger a_i \Psi_c \rangle &=& c_i \langle a_i^\dagger \Psi_c \rangle \\  \\ &=& c_i \langle \Psi_c \rangle \\ \\ &=& c_i \end{array}

Here in the second step we used the general rule

\langle a_i^\dagger \Phi \rangle = \langle \Phi \rangle

which is easy to check.


Now let’s see how coherent states work in the large-number limit. For this, let’s use the rescaled annihilation, creation and number operators from Part 1. They look like this:

A_i = \hbar \, a_i

C_i = a_i^\dagger

\widetilde{N}_i = C_i A_i

so that

\widetilde{N}_i = \hbar N_i

The point is that the rescaled number operator counts particles not one at a time, but in bunches of size 1/\hbar. For example, if \hbar is the reciprocal of Avogadro’s number, we are counting particles in ‘moles’. So, \hbar \to 0 corresponds to a large-number limit.

To flesh out this idea some more, let’s define rescaled coherent states:

\widetilde{\Psi}_c = \Psi_{c/\hbar}

These are eigenvectors of the rescaled annihilation operators:

\begin{array}{ccl} A_i \widetilde{\Psi}_c &=& \hbar a_i \Psi_{c/\hbar}  \\  \\  &=& c_i \Psi_{c/\hbar} \\ \\  &=& c_i \widetilde{\Psi}_c  \end{array}

This in turn means that

\begin{array}{ccl} \langle \widetilde{N}_i \widetilde{\Psi}_c \rangle &=& \langle C_i A_i \widetilde{\Psi}_c \rangle \\  \\  &=& c_i \langle  C_i \widetilde{\Psi}_c \rangle \\  \\ &=& c_i \langle \widetilde{\Psi}_c \rangle \\ \\ &=& c_i \end{array}

Here we used the general rule

\langle C_i \Phi \rangle = \langle \Phi \rangle

which holds because the ‘rescaled’ creation operator C_i is really just the usual creation operator, which obeys this rule.
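These eigenvector facts are easy to verify symbolically too. Here’s a one-species sympy sketch (names and the single-species simplification are mine) showing that the rescaled coherent state is an eigenvector of A with eigenvalue c, independent of \hbar:

```python
import sympy as sp

# One-species sketch: Psi~_c = Psi_{c/hbar} is an eigenvector of the
# rescaled annihilation operator A = hbar * d/dz with eigenvalue c.
z, c, hbar = sp.symbols('z c hbar', positive=True)
Psi_tilde = sp.exp((c / hbar) * z) / sp.exp(c / hbar)

A_Psi = hbar * sp.diff(Psi_tilde, z)       # apply A = hbar * a
assert sp.simplify(A_Psi - c * Psi_tilde) == 0
print("A Psi~_c = c Psi~_c")
```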

What’s the point of all this fiddling around? Simply this. The equation

\langle \widetilde{N}_i \widetilde{\Psi}_c \rangle = c_i

says the expected number of particles of the ith species in the state \widetilde{\Psi}_c is c_i, if we count these particles not one at a time, but in bunches of size 1/\hbar.

A simple test

As a simple test of this idea, let’s check that as \hbar \to 0, the standard deviation of the number of particles in the state \widetilde{\Psi}_c goes to zero… where we count particles using the rescaled number operator.

The variance of the rescaled number operator is, by definition,

\langle \widetilde{N}_i^2 \widetilde{\Psi}_c \rangle -   \langle \widetilde{N}_i \widetilde{\Psi}_c \rangle^2

and the standard deviation is the square root of the variance.

We already know the mean of the rescaled number operator:

\langle \widetilde{N}_i \widetilde{\Psi}_c \rangle = c_i

So, the main thing we need to calculate is the mean of its square:

\langle \widetilde{N}_i^2 \widetilde{\Psi}_c \rangle

For this we will use the commutation relation derived last time:

[A_i , C_i] = \hbar

This implies

\begin{array}{ccl} \widetilde{N}_i^2 &=& C_i A_i C_i A_i \\  \\  &=&  C_i (C_i A_i + \hbar) A_i \\ \\  &=&  C_i^2 A_i^2 + \hbar C_i A_i \end{array}

so

\begin{array}{ccl} \langle \widetilde{N}_i^2\widetilde{\Psi}_c \rangle &=& \langle (C_i^2 A_i^2 + \hbar C_i A_i) \widetilde{\Psi}_c \rangle \\   \\  &=&  c_i^2 + \hbar c_i  \end{array}

where we used our friends

A_i \widetilde{\Psi}_c = c_i \widetilde{\Psi}_c


\langle C_i \Phi \rangle = \langle \Phi \rangle

So, the variance of the rescaled number of particles is

\begin{array}{ccl} \langle \widetilde{N}_i^2 \widetilde{\Psi}_c \rangle  -   \langle \widetilde{N}_i \widetilde{\Psi}_c \rangle^2  &=& c_i^2 + \hbar c_i - c_i^2 \\  \\  &=& \hbar c_i \end{array}

and the standard deviation is

(\hbar c_i)^{1/2}

Good, it goes to zero as \hbar \to 0! And the square root is just what you’d expect if you’ve thought about stuff like random walks or the central limit theorem.
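You can also watch this happen numerically. For one species the particle count in \widetilde{\Psi}_c is Poisson with mean c/\hbar, so summing the distribution directly (a sketch with made-up parameter values) shows the rescaled standard deviation matching (\hbar c)^{1/2}:

```python
import math

# Sketch: for one species, the particle count in the rescaled coherent
# state is Poisson with mean c/hbar. Sum the distribution directly to get
# the mean and standard deviation of the rescaled count hbar*N.
def rescaled_moments(c, hbar):
    lam = c / hbar
    nmax = int(lam + 20 * math.sqrt(lam)) + 20  # truncation: tail is negligible
    p = math.exp(-lam)                          # P(N = 0)
    m1 = m2 = 0.0
    for n in range(nmax + 1):
        x = hbar * n
        m1 += x * p
        m2 += x * x * p
        p *= lam / (n + 1)                      # P(N = n+1) from P(N = n)
    return m1, math.sqrt(m2 - m1 * m1)

c = 2.0
for hbar in (1.0, 0.1, 0.01):
    mean, std = rescaled_moments(c, hbar)
    print(hbar, mean, std, math.sqrt(hbar * c))  # std matches (hbar*c)^(1/2)
```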

A puzzle

I feel sure that in any coherent state, not only the variance but also all the higher moments of the rescaled number operators go to zero as \hbar \to 0. Can you prove this?

Here I mean the moments after the mean has been subtracted. The pth moment is then

\langle (\widetilde{N}_i - c_i)^p \; \widetilde{\Psi}_c \rangle

I want this to go to zero as \hbar \to 0.

Here’s a clue that should help. First, there’s a textbook formula for the higher moments of Poisson distributions without the mean subtracted. If I understand it correctly, it gives this:

\displaystyle{ \langle N_i^m \; \Psi_c \rangle = \sum_{j = 1}^m {c_i}^j \; \left\{ \begin{array}{c} m \\ j \end{array} \right\} }

where

\displaystyle{ \left\{ \begin{array}{c} m \\ j \end{array} \right\} }

is the number of ways to partition an m-element set into j nonempty subsets. This is called Stirling’s number of the second kind. This suggests that there’s some fascinating combinatorics involving coherent states. That’s exactly the kind of thing I enjoy, so I would like to understand this formula someday… but not today! I just want something to go to zero!

If I rescale the above formula, I seem to get

\begin{array}{ccl} \langle \widetilde{N}_i^m \; \widetilde{\Psi}_c \rangle &=& \hbar^m \langle N_i^m \Psi_{c/\hbar} \rangle \\ \\ &=& \hbar^m \; \displaystyle{ \sum_{j = 1}^m \left(\frac{c_i}{\hbar}\right)^j \left\{ \begin{array}{c} m \\ j \end{array} \right\} } \end{array}

We could plug this formula into

\langle (\widetilde{N}_i - c_i)^p \; \widetilde{\Psi}_c \rangle =  \displaystyle{ \sum_{m = 0}^p \, \binom{p}{m} \; \langle \widetilde{N}_i^m \;  \widetilde{\Psi}_c \rangle \, (-c_i)^{p - m} }

and then try to show the result goes to zero as \hbar \to 0. But I don’t have the energy to do that… not right now, anyway!
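For what it’s worth, a computer algebra system will happily grind through this for small p. Here’s a one-species sympy sketch (using its built-in Stirling numbers of the second kind; the function names are mine). It suggests each centered moment is a polynomial in \hbar with no constant term, hence goes to zero as \hbar \to 0:

```python
from sympy import symbols, binomial, simplify, limit
from sympy.functions.combinatorial.numbers import stirling  # 2nd kind by default

hbar, c = symbols('hbar c', positive=True)

def raw_moment(m):
    """<N~^m Psi~_c>, via the rescaled Stirling-number formula above."""
    if m == 0:
        return 1
    return sum(hbar**m * (c / hbar)**j * stirling(m, j) for j in range(1, m + 1))

def centered_moment(p):
    """<(N~ - c)^p Psi~_c>, by binomial expansion of the centered power."""
    return simplify(sum(binomial(p, m) * raw_moment(m) * (-c)**(p - m)
                        for m in range(p + 1)))

for p in range(2, 6):
    mom = centered_moment(p)
    print(p, mom, limit(mom, hbar, 0))
```

For example p = 2 gives \hbar c, the variance we computed by hand, and every case printed has limit 0.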

Maybe you do. Or maybe you can think of a better approach to solving this problem. The answer must be well-known, since the large-number limit of a Poisson distribution is a very important thing.

The Large-Number Limit for Reaction Networks (Part 1)

1 July, 2013

Waiting for the other shoe to drop.

This is a figure of speech that means ‘waiting for the inevitable consequence of what’s come so far’. Do you know where it comes from? You have to imagine yourself in an apartment on the floor below someone who is taking off their shoes. When you hear one, you know the next is coming.

There’s even an old comedy routine about this:

A guest who checked into an inn one night was warned to be quiet because the guest in the room next to his was a light sleeper. As he undressed for bed, he dropped one shoe, which, sure enough, awakened the other guest. He managed to get the other shoe off in silence, and got into bed. An hour later, he heard a pounding on the wall and a shout: “When are you going to drop the other shoe?”

When we were working on math together, James Dolan liked to say “the other shoe has dropped” whenever an inevitable consequence of some previous realization became clear. There’s also the mostly British phrase the penny has dropped. You say this when someone finally realizes the situation they’re in.

But sometimes one realization comes after another, in a long sequence. Then it feels like it’s raining shoes!

I guess that’s a rather strained metaphor. Perhaps falling like dominoes is better for these long chains of realizations.

This is how I’ve felt in my recent research on the interplay between quantum mechanics, stochastic mechanics, statistical mechanics and extremal principles like the principle of least action. The basics of these subjects should be completely figured out by now, but they aren’t—and a lot of what’s known, nobody bothered to tell most of us.

So, I was surprised to rediscover that the Maxwell relations in thermodynamics are formally identical to Hamilton’s equations in classical mechanics… though in retrospect it’s obvious. Thermodynamics obeys the principle of maximum entropy, while classical mechanics obeys the principle of least action. Wherever there’s an extremal principle, symplectic geometry, and equations like Hamilton’s equations, are sure to follow.

I was surprised to discover (or maybe rediscover, I’m not sure yet) that just as statistical mechanics is governed by the principle of maximum entropy, quantum mechanics is governed by a principle of maximum ‘quantropy’. The analogy between statistical mechanics and quantum mechanics has been known at least since Feynman and Schwinger. But this basic aspect was never explained to me!

I was also surprised to rediscover that simply by replacing amplitudes by probabilities in the formalism of quantum field theory, we get a nice formalism for studying stochastic many-body systems. This formalism happens to perfectly match the ‘stochastic Petri nets’ and ‘reaction networks’ already used in subjects from population biology to epidemiology to chemistry. But now we can systematically borrow tools from quantum field theory! All the tricks that particle physicists like—annihilation and creation operators, coherent states and so on—can be applied to problems like the battle between the AIDS virus and human white blood cells.

And, perhaps because I’m a bit slow on the uptake, I was surprised when yet another shoe came crashing to the floor the other day.

Because quantum field theory has, at least formally, a nice limit where Planck’s constant goes to zero, the same is true for stochastic Petri nets and reaction networks!

In quantum field theory, we call this the ‘classical limit’. For example, if you have a really huge number of photons all in the same state, quantum effects sometimes become negligible, and we can describe them using the classical equations describing electromagnetism: the classical Maxwell equations. In stochastic situations, it makes more sense to call this limit the ‘large-number limit’: the main point is that there are lots of particles in each state.

In quantum mechanics, different observables don’t commute, so the so-called commutator matters a lot:

[A,B] = AB - BA

These commutators tend to be proportional to Planck’s constant. So in the limit where Planck’s constant \hbar goes to zero, observables commute… but commutators continue to have a ghostly existence, in the form of the Poisson bracket:

\displaystyle{ \{A,B\} = \lim_{\hbar \to 0} \; \frac{1}{\hbar} [A,B] }

Poisson brackets are a key part of symplectic geometry—the geometry of classical mechanics. So, this sort of geometry naturally shows up in the study of stochastic Petri nets!

Let me sketch how it works. I’ll start with a section reviewing stuff you should already know if you’ve been following the network theory series.

The stochastic Fock space

Suppose we have some finite set S. We call its elements species, since we think of them as different kinds of things—e.g., kinds of chemicals, or kinds of organisms.

To describe the probability of having any number of things of each kind, we need the stochastic Fock space. This is the space of real formal power series in a bunch of variables, one for each element of S. It won’t hurt to simply say

S = \{1, \dots, k \}

Then the stochastic Fock space is

\mathbb{R}[[z_1, \dots, z_k ]]

this being math jargon for the space of formal power series with real coefficients in some variables z_1, \dots, z_k, one for each element of S.

We write

n = (n_1, \dots, n_k) \in \mathbb{N}^S

and use this abbreviation:

z^n = z_1^{n_1} \cdots z_k^{n_k}

We use z^n to describe a state where we have n_1 things of the first species, n_2 of the second species, and so on.

More generally, a stochastic state is an element \Psi of the stochastic Fock space with

\displaystyle{ \Psi = \sum_{n \in \mathbb{N}^k} \psi_n \, z^n }


\psi_n \ge 0


\displaystyle{ \sum_{n  \in \mathbb{N}^k} \psi_n = 1 }

We use \Psi to describe a state where \psi_n is the probability of having n_1 things of the first species, n_2 of the second species, and so on.

The stochastic Fock space has some important operators on it: the annihilation operators given by

\displaystyle{ a_i \Psi = \frac{\partial}{\partial z_i} \Psi }

and the creation operators given by

\displaystyle{ a_i^\dagger \Psi = z_i \Psi }

From these we can define the number operators:

N_i = a_i^\dagger a_i

Part of the point is that

N_i z^n = n_i z^n

This says the stochastic state z^n is an eigenstate of all the number operators, with eigenvalues saying how many things there are of each species.

The annihilation, creation, and number operators obey some famous commutation relations, which are easy to check for yourself:

[a_i, a_j] = 0

[a_i^\dagger, a_j^\dagger] = 0

[a_i, a_j^\dagger] = \delta_{i j}

[N_i, N_j ] = 0

[N_i , a_j^\dagger] = \delta_{i j} a_j^\dagger

[N_i , a_j] = - \delta_{i j} a_j

The last two have easy interpretations. The first of these two implies

N_i a_i^\dagger \Psi = a_i^\dagger (N_i + 1) \Psi

This says that if we start in some state \Psi, create a thing of type i, and then count the things of that type, we get one more than if we counted the number of things before creating one. Similarly,

N_i a_i \Psi = a_i (N_i - 1) \Psi

says that if we annihilate a thing of type i and then count the things of that type, we get one less than if we counted the number of things before annihilating one.
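If you’d rather let the computer do the checking, here’s a one-species sympy sketch of the key relations (one species, so the indices drop out and [a, a†] = 1):

```python
import sympy as sp

# One-species sketch: verify N z^n = n z^n and [a, a†] = 1 by acting
# on a test monomial z^5.
z = sp.symbols('z')
n = 5
psi = z**n

a = lambda f: sp.diff(f, z)        # annihilation: d/dz
adag = lambda f: z * f             # creation: multiply by z
N = lambda f: adag(a(f))           # number operator N = a† a

assert sp.simplify(N(psi) - n * psi) == 0                      # N z^n = n z^n
assert sp.simplify(a(adag(psi)) - adag(a(psi)) - psi) == 0     # [a, a†] = 1
print("relations verified on z^5")
```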

Introducing Planck’s constant

Now let’s introduce an extra parameter into this setup. To indicate the connection to quantum physics, I’ll call it \hbar, which is the usual symbol for Planck’s constant. However, I want to emphasize that we’re not doing quantum physics here! We’ll see that the limit where \hbar \to 0 is very interesting, but it will correspond to a limit where there are many things of each kind.

We’ll start by defining

A_i = \hbar \, a_i


C_i = a_i^\dagger

Here A stands for ‘annihilate’ and C stands for ‘create’. Think of A as a rescaled annihilation operator. Using this we can define a rescaled number operator:

\widetilde{N}_i = C_i A_i

So, we have

\widetilde{N}_i = \hbar N_i

and this explains the meaning of the parameter \hbar. The idea is that instead of counting things one at a time, we count them in bunches of size 1/\hbar.

For example, suppose \hbar = 1/12. Then we’re counting things in dozens! If we have a state \Psi with

N_i \Psi = 36 \Psi

then there are 36 things of the ith kind. But this implies

\widetilde{N}_i \Psi = 3 \Psi

so there are 3 dozen things of the ith kind.

Chemists don’t count in dozens; they count things in big bunches called moles. A mole is approximately the number of carbon atoms in 12 grams: Avogadro’s number, 6.02 \times 10^{23}. When you count things by moles, you’re taking \hbar to be 1.66 \times 10^{-24}, the reciprocal of Avogadro’s number.

So, while in quantum mechanics Planck’s constant is ‘the quantum of action’, a unit of action, here it’s ‘the quantum of quantity’: the amount that corresponds to one thing.

We can easily work out the commutation relations of our new rescaled operators:

[A_i, A_j] = 0

[C_i, C_j] = 0

[A_i, C_j] = \hbar \, \delta_{i j}

[\widetilde{N}_i, \widetilde{N}_j ] = 0

[\widetilde{N}_i , C_j] = \hbar \,  \delta_{i j} C_j

[\widetilde{N}_i , A_j] = - \hbar \, \delta_{i j} A_j

These are just what you see in quantum mechanics! The commutators are all proportional to \hbar.

Again, we can understand what these relations mean if we think a bit. For example, the commutation relation for \widetilde{N}_i and C_i says

\widetilde{N}_i C_i \Psi = C_i (\widetilde{N}_i + \hbar) \Psi

This says that if we start in some state \Psi, create a thing of type i, and then count the things of that type, we get \hbar more than if we counted the number of things before creating one. This is because we are counting things not one at a time, but in bunches of size 1/\hbar.

You may be wondering why I defined the rescaled annihilation operator to be \hbar times the original annihilation operator:

A_i = \hbar \, a_i

but left the creation operator unchanged:

C_i = a_i^\dagger

I’m wondering that too! I’m not sure I’m doing things the best way yet. I’ve also tried another more symmetrical scheme, taking A_i = \sqrt{\hbar} \, a_i and C_i = \sqrt{\hbar} \, a_i^\dagger. This gives the same commutation relations, but certain other formulas become more unpleasant. I’ll explain that some other day.

Next, we can take the limit as \hbar \to 0 and define Poisson brackets of operators by

\displaystyle{ \{A,B\} = \lim_{\hbar \to 0} \; \frac{1}{\hbar} [A,B] }

To make this rigorous it’s best to proceed algebraically. For this we treat \hbar as a formal variable rather than a specific number. So, our number system becomes \mathbb{R}[\hbar], the algebra of polynomials in \hbar. We define the Weyl algebra to be the algebra over \mathbb{R}[\hbar] generated by elements A_i and C_i obeying

[A_i, A_j] = 0

[C_i, C_j] = 0

[A_i, C_j] = \hbar \, \delta_{i j}

We can set \hbar = 0 in this formalism; then the Weyl algebra reduces to the algebra of polynomials in the variables A_i and C_i. This algebra is commutative! But we can define a Poisson bracket on this algebra by

\displaystyle{ \{A,B\} = \lim_{\hbar \to 0} \; \frac{1}{\hbar} [A,B] }

It takes a bit of work to explain to algebraists exactly what’s going on in this formula, because it involves an interplay between the algebra of polynomials in A_i and C_i, which is commutative, and the Weyl algebra, which is not. I’ll be glad to explain the details if you want. But if you’re a physicist, you can just follow your nose and figure out what the formula gives. For example:

\begin{array}{ccl}   \{A_i, C_j\} &=& \displaystyle{ \lim_{\hbar \to 0} \; \frac{1}{\hbar} [A_i, C_j] } \\  \\  &=& \displaystyle{ \lim_{\hbar \to 0} \; \frac{1}{\hbar} \, \hbar \, \delta_{i j} }  \\  \\  &=& \delta_{i j} \end{array}

Similarly, we have:

\{ A_i, A_j \} = 0

\{ C_i, C_j \} = 0

\{ A_i, C_j \} = \delta_{i j}

\{ \widetilde{N}_i, \widetilde{N}_j \}  = 0

\{ \widetilde{N}_i , C_j \} = \delta_{i j} C_j

\{ \widetilde{N}_i , A_j \} = - \delta_{i j} A_j
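Here’s a small symbolic sketch of that \hbar-cancellation for one species, applying A = \hbar \, d/dz and C = z to an arbitrary function: the commutator is exactly \hbar times the identity, so dividing by \hbar gives \{A, C\} = 1 with no \hbar left over.

```python
import sympy as sp

# One-species sketch: with A = hbar * d/dz and C = z acting on an arbitrary
# function f(z), the commutator [A, C] is hbar times the identity, so
# (1/hbar)[A, C] = 1 independent of hbar -- matching {A, C} = 1.
z, hbar = sp.symbols('z hbar', positive=True)
f = sp.Function('f')(z)            # an arbitrary formal state, kept symbolic

A = lambda g: hbar * sp.diff(g, z)
C = lambda g: z * g

commutator = sp.simplify(A(C(f)) - C(A(f)))
assert sp.simplify(commutator / hbar - f) == 0
print("[A, C] = hbar, so {A, C} = 1")
```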

I should probably use different symbols for A_i, C_i and \widetilde{N}_i after we’ve set \hbar = 0, since they’re really different now, but I don’t have the patience to make up more names for things!

Now, we can think of A_i and C_i as coordinate functions on a 2k-dimensional vector space, and all the polynomials in A_i and C_i as functions on this space. This space is what physicists would call a ‘phase space’: they use this kind of space to describe the position and momentum of a particle, though here we are using it in a different way. Mathematicians would call it a ‘symplectic vector space’, because it’s equipped with a special structure, called a symplectic structure, that lets us define Poisson brackets of smooth functions on this space. We won’t need to get into that now, but it’s important—and it makes me happy to see it here.


There’s a lot more to do, but not today. My main goal is to understand, in a really elegant way, how the master equation for a stochastic Petri net reduces to the rate equation in the large-number limit. What we’ve done so far is start thinking of this as a \hbar \to 0 limit. This should let us borrow ideas about classical limits in quantum mechanics, and apply them to stochastic mechanics.

Stay tuned!

Energy and the Environment – What Physicists Can Do

25 April, 2013


The Perimeter Institute is a futuristic-looking place where over 250 physicists are thinking about quantum gravity, quantum information theory, cosmology and the like. Since I work on some of these things, I was recently invited to give the weekly colloquium there. But I took the opportunity to try to rally them into action:

Energy and the Environment: What Physicists Can Do. Watch the video or read the slides.

Abstract. The global warming crisis is part of a bigger transformation in which humanity realizes that the Earth is a finite system and that our population, energy usage, and the like cannot continue to grow exponentially. While politics and economics pose the biggest challenges, physicists are in a good position to help make this transition a bit easier. After a quick review of the problems, we discuss a few ways physicists can help.

On the video you can hear me say a lot of stuff that’s not on the slides: it’s more of a coherent story. The advantage of the slides is that anything in blue, you can click on to get more information. So for example, when I say that solar power capacity has been growing annually by 75% in recent years, you can see where I got that number.

I was pleased by the response to this talk. Naturally, it was not a case of physicists saying “okay, tomorrow I’ll quit working on the foundations of quantum mechanics and start trying to improve quantum dot solar cells.” It’s more about getting them to see that huge problems are looming ahead of us… and to see the huge opportunities for physicists who are willing to face these problems head-on, starting now. Work on energy technologies, the smart grid, and ‘ecotechnology’ is going to keep growing. I think a bunch of the younger folks, at least, could see this.

However, perhaps the best immediate outcome of this talk was that Lee Smolin introduced me to Manjana Milkoreit. She’s at the school of international affairs at the University of Waterloo, practically next door to the Perimeter Institute. She works on “climate change governance, cognition and belief systems, international security, complex systems approaches, especially threshold behavior, and the science-policy interface.”

So, she knows a lot about the all-important human and political side of climate change. Right now she’s interviewing diplomats involved in climate treaty negotiations, trying to see what they believe about climate change. And it’s very interesting!

In my next post, I’ll talk about something she pointed me to. Namely: what we can do to hold the temperature increase to 2 °C or less, given that the pledges made by various nations aren’t enough.

Network Theory (Part 29)

23 April, 2013

I’m talking about electrical circuits, but I’m interested in them as models of more general physical systems. Last time we started seeing how this works. We developed an analogy between electrical circuits and physical systems made of masses and springs, with friction:

Electronics                   Mechanics
charge: Q                     position: q
current: I = \dot{Q}          velocity: v = \dot{q}
flux linkage: \lambda         momentum: p
voltage: V = \dot{\lambda}    force: F = \dot{p}
inductance: L                 mass: m
resistance: R                 damping coefficient: r
inverse capacitance: 1/C      spring constant: k

But this is just the first of a large set of analogies. Let me list some, so you can see how wide-ranging they are!

More analogies

People in system dynamics often use effort as a term to stand for anything analogous to force or voltage, and flow as a general term to stand for anything analogous to velocity or electric current. They call these variables e and f.

To me it’s important that force is the time derivative of momentum, and velocity is the time derivative of position. Following physicists, I write momentum as p and position as q. So, I’ll usually write effort as \dot{p} and flow as \dot{q}.

Of course, ‘position’ is a term special to mechanics; it’s nice to have a general term for the thing whose time derivative is flow, that applies to any context. People in system dynamics seem to use displacement as that general term.

It would also be nice to have a general term for the thing whose time derivative is effort… but I don’t know one. So, I’ll use the word momentum.

Now let’s see the analogies! Let’s see how displacement q, flow \dot{q}, momentum p and effort \dot{p} show up in several subjects:

                          displacement: q    flow: \dot{q}       momentum: p             effort: \dot{p}
Mechanics: translation    position           velocity            momentum                force
Mechanics: rotation       angle              angular velocity    angular momentum        torque
Electronics               charge             current             flux linkage            voltage
Hydraulics                volume             flow                pressure momentum       pressure
Thermal physics           entropy            entropy flow        temperature momentum    temperature
Chemistry                 moles              molar flow          chemical momentum       chemical potential

We’d been considering mechanics of systems that move along a line, via translation, but we can also consider mechanics for systems that turn round and round, via rotation. So, there are two rows for mechanics here.

There’s a row for electronics, and then a row for hydraulics, which is closely analogous. In this analogy, a pipe is like a wire. The flow of water plays the role of current. Water pressure plays the role of electrostatic potential. The difference in water pressure between two ends of a pipe is like the voltage across a wire. When water flows through a pipe, the power equals the flow times this pressure difference—just as in an electrical circuit the power is the current times the voltage across the wire.

A resistor is like a narrowed pipe:

An inductor is like a heavy turbine placed inside a pipe: this makes the water tend to keep flowing at the same rate it’s already flowing! In other words, it provides a kind of ‘inertia’ analogous to mass.

A capacitor is like a tank with pipes coming in from both ends, and a rubber sheet dividing it in two lengthwise:

When studying electrical circuits as a kid, I was shocked when I first learned that capacitors don’t let the electrons through: it didn’t seem likely you could do anything useful with something like that! But of course you can. Similarly, this gizmo doesn’t let the water through.

A voltage source is like a compressor set up to maintain a specified pressure difference between the input and output:

Similarly, a current source is like a pump set up to maintain a specified flow.

Finally, just as voltage is the time derivative of a fairly obscure quantity called ‘flux linkage’, pressure is the time derivative of an even more obscure quantity which has no standard name. I’m calling it ‘pressure momentum’, thanks to the analogy

momentum : force :: pressure momentum : pressure

Just as pressure has units of force per area, pressure momentum has units of momentum per area!

People invented this analogy back when they were first struggling to understand electricity, before electrons had been observed:

Hydraulic analogy, Wikipedia.

The famous electrical engineer Oliver Heaviside pooh-poohed this analogy, calling it the “drain-pipe theory”. I think he was making fun of William Henry Preece. Preece was another electrical engineer, who liked the hydraulic analogy and disliked Heaviside’s fancy math. In his inaugural speech as president of the Institution of Electrical Engineers in 1893, Preece proclaimed:

True theory does not require the abstruse language of mathematics to make it clear and to render it acceptable. All that is solid and substantial in science and usefully applied in practice, have been made clear by relegating mathematic symbols to their proper store place—the study.

According to the judgement of history, Heaviside made more progress in understanding electromagnetism than Preece. But there’s still a nice analogy between electronics and hydraulics. And I’ll eventually use the abstruse language of mathematics to make it very precise!

But now let’s move on to the row called ‘thermal physics’. We could also call this ‘thermodynamics’. It works like this. Say you have a physical system in thermal equilibrium and all you can do is heat it up or cool it down ‘reversibly’—that is, while keeping it in thermal equilibrium all along. For example, imagine a box of gas that you can heat up or cool down. If you put a tiny amount dE of energy into the system in the form of heat, then its entropy increases by a tiny amount dS. And they’re related by this equation:

dE = TdS

where T is the temperature.

Another way to say this is

\displaystyle{ \frac{dE}{dt} = T \frac{dS}{dt} }

where t is time. On the left we have the power put into the system in the form of heat. But since power should be ‘effort’ times ‘flow’, on the right we should have ‘effort’ times ‘flow’. It makes some sense to call dS/dt the ‘entropy flow’. So temperature, T, must play the role of ‘effort’.

This is a bit weird. I don’t usually think of temperature as a form of ‘effort’ analogous to force or torque. Stranger still, our analogy says that ‘effort’ should be the time derivative of some kind of ‘momentum’. So, we need to introduce temperature momentum: the integral of temperature over time. I’ve never seen people talk about this concept, so it makes me a bit nervous.

But when we have a more complicated physical system like a piston full of gas in thermal equilibrium, we can see the analogy working. Now we have

dE = TdS - PdV

The change in energy dE of our gas now has two parts. There’s the change in heat energy TdS, which we saw already. But now there’s also the change in energy due to compressing the piston! When we change the volume of the gas by a tiny amount dV, we put in energy -PdV.

Now look back at the first chart I drew! It says that pressure is a form of ‘effort’, while volume is a form of ‘displacement’. If you believe that, the equation above should help convince you that temperature is also a form of effort, while entropy is a form of displacement.

But what about the minus sign? That’s no big deal: it’s the result of some arbitrary conventions. P is defined to be the outward pressure of the gas on our piston. If this is positive, reducing the volume of the gas takes a positive amount of energy, so we need to stick in a minus sign. I could eliminate this minus sign by changing some conventions—but if I did, the chemistry professors at UCR would haul me away and increase my heat energy by burning me at the stake.

Speaking of chemistry: here’s how the chemistry row in the analogy chart works. Suppose we have a piston full of gas made of different kinds of molecules, and there can be chemical reactions that change one kind into another. Now our equation gets fancier:

\displaystyle{ dE = TdS - PdV + \sum_i  \mu_i dN_i }

Here N_i is the number of molecules of the ith kind, while \mu_i is a quantity called a chemical potential. The chemical potential simply says how much energy it takes to increase the number of molecules of a given kind. So, we see that chemical potential is another form of effort, while number of molecules is another form of displacement.

But chemists are too busy to count molecules one at a time, so they count them in big bunches called ‘moles’. A mole is the number of atoms in 12 grams of carbon-12. That’s roughly

6.022 \times 10^{23}

atoms. This is called Avogadro’s constant. If we used 1 gram of hydrogen, we’d get a very close number called ‘Avogadro’s number’, which leads to lots of jokes:

(He must be desperate because he looks so weird… sort of like a mole!)

So, instead of saying that the displacement in chemistry is called ‘number of molecules’, you’ll sound more like an expert if you say ‘moles’. And the corresponding flow is called molar flow.

The truly obscure quantity in this row of the chart is the one whose time derivative is chemical potential! I’m calling it chemical momentum simply because I don’t know another name.

Why are linear and angular momentum so famous compared to pressure momentum, temperature momentum and chemical momentum?

I suspect it’s because the laws of physics are symmetrical under translations and rotations. When the assumptions of Noether’s theorem hold, this guarantees that the total momentum and angular momentum of a closed system are conserved. Apparently the laws of physics lack the symmetries that would make the other kinds of momentum be conserved.

This suggests that we should dig deeper and try to understand more deeply how this chart is connected to ideas in classical mechanics, like Noether’s theorem or symplectic geometry. I will try to do that sometime later in this series.

More generally, we should try to understand what gives rise to a row in this analogy chart. Are there lots of rows I haven’t talked about yet, or just a few? There are probably lots. But are there lots of practically important rows that I haven’t talked about—ones that can serve as the basis for new kinds of engineering? Or does something about the structure of the physical world limit the number of such rows?

Mildly defective analogies

Engineers care a lot about dimensional analysis. So, they often make a big deal about the fact that while effort and flow have different dimensions in different rows of the analogy chart, the following four things are always true:

pq has dimensions of action (= energy × time)
\dot{p} q has dimensions of energy
p \dot{q} has dimensions of energy
\dot{p} \dot{q} has dimensions of power (= energy / time)

In fact any one of these things implies all the rest.
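These four dimension facts are easy to check mechanically. Here’s a minimal sketch of my own (not from the original post) that tracks dimensions as exponent tuples over the base units (kg, m, s, C), using the momentum/displacement pairs from the chart:

```python
# Represent a physical dimension as exponents of the base units (kg, m, s, C).
def mul(a, b):
    """Multiply two quantities' dimensions by adding their exponents."""
    return tuple(x + y for x, y in zip(a, b))

PER_SECOND = (0, 0, -1, 0)        # effect of taking a time derivative
ENERGY = (1, 2, -2, 0)            # kg m^2 / s^2
POWER  = mul(ENERGY, PER_SECOND)  # energy / time
ACTION = (1, 2, -1, 0)            # energy * time

# (momentum p, displacement q) pairs for two rows of the chart:
rows = {
    "mechanics":   ((1, 1, -1, 0), (0, 1, 0, 0)),    # momentum, position
    "electronics": ((1, 2, -1, -1), (0, 0, 0, 1)),   # flux linkage (V s), charge
}

for name, (p, q) in rows.items():
    pdot, qdot = mul(p, PER_SECOND), mul(q, PER_SECOND)
    assert mul(p, q) == ACTION                        # p q   ~ action
    assert mul(pdot, q) == mul(p, qdot) == ENERGY     # p' q and p q' ~ energy
    assert mul(pdot, qdot) == POWER                   # p' q' ~ power
    print(name, "checks out")
```

Since dividing by time is invertible on these exponent tuples, any one of the four assertions does indeed imply the other three.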

These facts are important when designing ‘mixed systems’, which combine different rows in the chart. For example, in mechatronics, we combine mechanical and electronic elements in a single circuit! And in a hydroelectric dam, power is converted from hydraulic to mechanical and then electric form:

One goal of network theory should be to develop a unified language for studying mixed systems! Engineers have already done most of the hard work. And they’ve realized that thanks to conservation of energy, working with pairs of flow and effort variables whose product has dimensions of power is very convenient. It makes it easy to track the flow of energy through these systems.

However, people have tried to extend the analogy chart to include ‘mildly defective’ examples where effort times flow doesn’t have dimensions of power. The two most popular are these:

             displacement: q   flow: \dot{q}    momentum: p             effort: \dot{p}
Heat flow    heat              heat flow        temperature momentum    temperature
Economics    inventory         product flow     economic momentum       product price

The heat flow analogy comes up because people like to think of heat flow as analogous to electrical current, and temperature as analogous to voltage. Why? Because an insulated wall acts a bit like a resistor! The current flowing through a resistor is a function of the voltage across it. Similarly, the heat flowing through an insulated wall is roughly proportional to the difference in temperature between the inside and the outside.

However, there’s a difference. Current times voltage has dimensions of power. Heat flow times temperature does not have dimensions of power. In fact, heat flow by itself already has dimensions of power! So, engineers feel somewhat guilty about this analogy.

Being a mathematical physicist, a possible way out presents itself to me: use units where temperature is dimensionless! In fact such units are pretty popular in some circles. But I don’t know if this solution is a real one, or whether it causes some sort of trouble.

In the economic example, ‘energy’ has been replaced by ‘money’. In other words, ‘inventory’ times ‘product price’ has units of money. And so does ‘product flow’ times ‘economic momentum’! I’d never heard of economic momentum before I started studying these analogies, but I didn’t make up that term. It’s the thing whose time derivative is ‘product price’. Apparently economists have noticed a tendency for rising prices to keep rising, and falling prices to keep falling… a tendency toward ‘conservation of momentum’ that doesn’t fit into their models of rational behavior.

I’m suspicious of any attempt to make economics seem like physics. Unlike elementary particles or rocks, people don’t seem to be very well modelled by simple differential equations. However, some economists have used the above analogy to model economic systems. And I can’t help but find that interesting—even if intellectually dubious when taken too seriously.

An auto-analogy

Besides the analogy I’ve already described between electronics and mechanics, there’s another one, called ‘Firestone’s analogy’:

• F.A. Firestone, A new analogy between mechanical and electrical systems, Journal of the Acoustical Society of America 4 (1933), 249–267.

Alain Bossavit pointed this out in the comments to Part 27. The idea is to treat current as analogous to force instead of velocity… and treat voltage as analogous to velocity instead of force!

In other words, switch your p’s and q’s:

Electronics     Mechanics (usual analogy)   Mechanics (Firestone’s analogy)
charge          position: q                 momentum: p
current         velocity: \dot{q}           force: \dot{p}
flux linkage    momentum: p                 position: q
voltage         force: \dot{p}              velocity: \dot{q}

This new analogy is not ‘mildly defective’: the product of effort and flow variables still has dimensions of power. But why bother with another analogy?

It may be helpful to recall this circuit from last time:

It’s described by this differential equation:

L \ddot{Q} + R \dot{Q} + C^{-1} Q = V

We used the ‘usual analogy’ to translate it into a classical mechanics problem, and we got a problem where an object of mass m = L is hanging from a spring with spring constant k = 1/C and damping coefficient r = R, and feeling an additional external force F = V:

m \ddot{q} + r \dot{q} + k q = F

And that’s fine. But there’s an intuitive sense in which all three forces are acting ‘in parallel’ on the mass, rather than in series. In other words, all side by side, instead of one after the other.

Using Firestone’s analogy, we get a different classical mechanics problem, where the three forces are acting in series. The spring is connected to source of friction, which in turn is connected to an external force.

This may seem a bit mysterious. But instead of trying to explain it, I’ll urge you to read his paper, which is short and clearly written. I instead want to make a somewhat different point, which is that we can take a mechanical system, convert it to an electrical one following the usual analogy, and then convert back to a mechanical one using Firestone’s analogy. This gives us an ‘auto-analogy’ between mechanics and itself, which switches p and q.

And although I haven’t been able to figure out why from Firestone’s paper, I have other reasons for feeling sure this auto-analogy should contain a minus sign. For example:

p \mapsto q, \qquad q \mapsto -p

In other words, it should correspond to a 90° rotation in the (p,q) plane. There’s nothing sacred about whether we rotate clockwise or counterclockwise; we can equally well do this:

p \mapsto -q, \qquad q \mapsto p

But we need the minus sign to get a so-called symplectic transformation of the (p,q) plane. And from my experience with classical mechanics, I’m pretty sure we want that. If I’m wrong, please let me know!
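Here’s a quick numerical check of that claim, as a sketch of my own: writing the state as the vector (q, p), the 90° rotation with the minus sign preserves the symplectic form, while the naive swap without it does not.

```python
import numpy as np

J = np.array([[0., 1.], [-1., 0.]])      # the symplectic form on the (q, p) plane

M_rot  = np.array([[0., 1.], [-1., 0.]]) # q -> p, p -> -q  (has the minus sign)
M_swap = np.array([[0., 1.], [1., 0.]])  # q -> p, p -> q   (no minus sign)

def is_symplectic(M):
    """A linear map M is symplectic iff M^T J M = J."""
    return np.allclose(M.T @ J @ M, J)

print(is_symplectic(M_rot))    # True
print(is_symplectic(M_swap))   # False
```

In two dimensions ‘symplectic’ just means determinant 1, and the plain swap has determinant −1: it’s a reflection, not a rotation.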

I have a feeling we should revisit this issue when we get more deeply into the symplectic aspects of circuit theory. So, I won’t go on now.


The analogies I’ve been talking about are studied in a branch of engineering called system dynamics. You can read more about it here:

• Dean C. Karnopp, Donald L. Margolis and Ronald C. Rosenberg, System Dynamics: a Unified Approach, Wiley, New York, 1990.

• Forbes T. Brown, Engineering System Dynamics: a Unified Graph-Centered Approach, CRC Press, Boca Raton, 2007.

• Francois E. Cellier, Continuous System Modelling, Springer, Berlin, 1991.

System dynamics already uses lots of diagrams of networks. One of my goals in weeks to come is to explain the category theory lurking behind these diagrams.

Network Theory (Part 28)

10 April, 2013

Last time I left you with some puzzles. One was to use the laws of electrical circuits to work out what this one does:

If we do this puzzle, and keep our eyes open, we’ll see an analogy between electrical circuits and classical mechanics! And this is the first of a huge set of analogies. The same math shows up in many different subjects, whenever we study complex systems made of interacting parts. So, it should become part of any general theory of networks.

This simple circuit is very famous: it’s called a series RLC circuit, because it has a resistor of resistance R, an inductor of inductance L, and a capacitor of capacitance C, all hooked up ‘in series’, meaning one after another. But to understand this circuit, it’s good to start with an even simpler one, where we leave out the voltage source:

This has three edges, so reading from top to bottom there are 3 voltages V_1, V_2, V_3, and 3 currents I_1, I_2, I_3, one for each edge. The white and black dots are called ‘nodes’, and the white ones are called ‘terminals’: current can flow in or out of those.

The voltages and currents obey a bunch of equations:

• Kirchhoff’s current law says the current flowing into each node that’s not a terminal equals the current flowing out:

I_1 = I_2 = I_3

• Kirchhoff’s voltage law says there are potentials \phi_0, \phi_1, \phi_2, \phi_3, one for each node, such that:

V_1 = \phi_0 - \phi_1

V_2 = \phi_1 - \phi_2

V_3 = \phi_2 - \phi_3

In this particular problem, Kirchhoff’s voltage law doesn’t say much, since we can always find potentials obeying this, given the voltages. But in other problems it can be important. And even here it suggests that the sum V_1 + V_2 + V_3 will be important; this is the ‘total voltage across the circuit’.
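The telescoping sum is easy to confirm symbolically. Here’s a tiny sympy check of my own:

```python
import sympy as sp

phi0, phi1, phi2, phi3 = sp.symbols('phi_0:4')
V1, V2, V3 = phi0 - phi1, phi1 - phi2, phi2 - phi3

# The intermediate potentials cancel in pairs, leaving phi_0 - phi_3.
print(sp.expand(V1 + V2 + V3 - (phi0 - phi3)))   # 0
```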

Next, we get one equation for each circuit element:

• The law for a resistor says:

V_1 = R I_1

• The law for an inductor says:

\displaystyle{ V_2 = L \frac{d I_2}{d t} }

• The law for a capacitor says:

\displaystyle{ I_3 = C \frac{d V_3}{d t} }

These are all our equations. What should we do with them? Since I_1 = I_2 = I_3, it makes sense to call all these currents simply I and solve for each voltage in terms of this. Here’s what we get:

V_1 = R I

\displaystyle{ V_2 = L \frac{d I}{d t} }

\displaystyle {V_3 = C^{-1} \int I \, dt }

So, if we know the current flowing through the circuit we can work out the voltage across each circuit element!

Well, not quite: in the case of the capacitor we only know it up to a constant, since there’s a constant of integration. This may seem like a minor objection, but it’s worth taking seriously. The point is that the charge on the capacitor’s plate is proportional to the voltage across the capacitor:

\displaystyle{V_3 = C^{-1} Q }

When electrons move on or off the plate, this charge changes, and we get a current:

\displaystyle{I = \frac{d Q}{d t} }

So, we can work out the time derivative of V_3 from the current I, but to work out V_3 itself we need the charge Q.

Treat these as definitions if you like, but they’re physical facts too! And they let us rewrite our trio of equations:

V_1 = R I

\displaystyle{ V_2 = L \frac{d I}{d t} }

\displaystyle{V_3 = C^{-1} \int I \, dt }

in terms of the charge, as follows:

V_1 = R \dot{Q}

V_2 = L \ddot{Q}

V_3 = C^{-1} Q

Then if we add these three equations, we get

V_1 + V_2 + V_3 = L \ddot Q + R \dot Q + C^{-1} Q

So, if we define the total voltage by

V = V_1 + V_2 + V_3 = \phi_0 - \phi_3

we get

L \ddot Q + R \dot Q + C^{-1} Q = V

And this is great!

Why? Because this equation is famous! If you’re a mathematician, you know it as the most general second-order linear ordinary differential equation with constant coefficients. But if you’re a physicist, you know it as the damped driven oscillator.
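To see the equation in action, here’s a numerical sketch of my own (with made-up component values) that integrates it with scipy and checks that, after the transients die out, the response to a sinusoidal voltage has the amplitude a damped driven oscillator should have:

```python
# Integrate L Q'' + R Q' + Q/C = V(t), with illustrative component values.
import numpy as np
from scipy.integrate import solve_ivp

L, R, C = 1.0, 0.5, 1.0                 # henries, ohms, farads (made up)
w0 = 2.0                                # driving frequency
V = lambda t: np.cos(w0 * t)            # driving voltage

def rhs(t, y):
    Q, Qdot = y
    return [Qdot, (V(t) - R * Qdot - Q / C) / L]

sol = solve_ivp(rhs, (0.0, 50.0), [0.0, 0.0], max_step=0.01)

# Once transients decay, |Q| should reach |1 / ((i w0)^2 L + i w0 R + 1/C)|,
# which for these values is 1/sqrt(10) ~ 0.316.
late = np.abs(sol.y[0][sol.t > 40.0])
print(round(late.max(), 3))
```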

The analogy between electronics and mechanics

Here’s an example of a damped driven oscillator:

We’ve got an object hanging from a spring with some friction, and an external force pulling it down. Here the external force is gravity, so it’s constant in time, but we can imagine fancier situations where it’s not. So in a general damped driven oscillator:

• the object has mass m (and the spring is massless),

• the spring constant is k (this says how strong the spring force is),

• the damping coefficient is r (this says how much friction there is),

• the external force is F (in general a function of time).

Then Newton’s law says

m \ddot{q} + r \dot{q} + k q = F

And apart from the use of different letters, this is exactly like the equation for our circuit! Remember, that was

L \ddot Q + R \dot Q + C^{-1} Q = V

So, we get a wonderful analogy relating electronics and mechanics! It goes like this:

Electronics                Mechanics
charge: Q                  position: q
current: I = \dot{Q}       velocity: v = \dot{q}
voltage: V                 force: F
inductance: L              mass: m
resistance: R              damping coefficient: r
inverse capacitance: 1/C   spring constant: k

If you understand mechanics, you can use this to get intuition about electronics… or vice versa. I’m more comfortable with mechanics, so when I see this circuit:

I imagine a current of electrons whizzing along, ‘forced’ by the voltage across the circuit, getting slowed by the ‘friction’ of the resistor, wanting to continue their motion thanks to the inertia or ‘mass’ of the inductor, and getting stuck on the plate of the capacitor, where their mutual repulsion pushes back against the flow of current—just like a spring fights back when you pull on it! This lets me know how the circuit will behave: I can use my mechanical intuition.

The only mildly annoying thing is that the inverse of the capacitance C is like the spring constant k. But this makes perfect sense. A capacitor is like a spring: you ‘pull’ on it with voltage and it ‘stretches’ by building up electric charge on its plate. If its capacitance is high, it’s like an easily stretchable spring. But this means the corresponding spring constant is low.

Besides letting us transfer intuition and techniques, the other great thing about analogies is that they suggest ways of extending themselves. For example, we’ve seen that current is the time derivative of charge. But if we hadn’t, we could still have guessed it, because current is like velocity, which is the time derivative of something important.

Similarly, force is analogous to voltage. But force is the time derivative of momentum! We don’t have momentum on our chart. Our chart is also missing the thing whose time derivative is voltage. This thing is called flux linkage, and sometimes denoted \lambda. So we should add this, and momentum, to our chart:

Electronics                  Mechanics
charge: Q                    position: q
current: I = \dot{Q}         velocity: v = \dot{q}
flux linkage: \lambda        momentum: p
voltage: V = \dot{\lambda}   force: F = \dot{p}
inductance: L                mass: m
resistance: R                damping coefficient: r
inverse capacitance: 1/C     spring constant: k

Fourier transforms

But before I get carried away talking about analogies, let’s try to solve the equation for our circuit:

L \ddot Q + R \dot Q + C^{-1} Q = V

This instantly tells us the voltage V as a function of time if we know the charge Q as a function of time. So, ‘solving’ it means figuring out Q if we know V. You may not care about Q—it’s the charge of the electrons stuck on the capacitor—but you should certainly care about the current I = \dot{Q}, and figuring out Q will get you that.

Besides, we’ll learn something good from solving this equation.

We could solve it using either the Laplace transform or the Fourier transform. They’re very similar. For some reason electrical engineers prefer the Laplace transform—does anyone know why? But I think the Fourier transform is conceptually preferable, slightly, so I’ll use that.

The idea is to write any function of time as a linear combination of oscillating functions \exp(i\omega t) with different frequencies \omega. More precisely, we write our function f as an integral

\displaystyle{ f(t) = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty \hat{f}(\omega) e^{i\omega t} \, d\omega }

Here the function \hat{f} is called the Fourier transform of f, and it’s given by

\displaystyle{ \hat{f}(\omega) = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty f(t) e^{-i\omega t} \, dt }

There is a lot one could say about this, but all I need right now is that differentiating a function has the effect of multiplying its Fourier transform by i\omega. To see this, we simply take the Fourier transform of \dot{f}:

\begin{array}{ccl}  \hat{\dot{f}}(\omega) &=& \displaystyle{  \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty \frac{df(t)}{dt} \, e^{-i\omega t} \, dt } \\  \\  &=& \displaystyle{ -\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty f(t) \frac{d}{dt} e^{-i\omega t} \, dt } \\  \\  &=& \displaystyle{ i\omega \; \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty f(t) e^{-i\omega t} \, dt } \\  \\  &=& i\omega \hat{f}(\omega) \end{array}

where in the second step we integrate by parts. So,

\hat{\dot{f}}(\omega) = i\omega \hat{f}(\omega)
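We can sanity-check this property numerically, using the discrete Fourier transform as a stand-in for the integral. This is my own sketch; the test function is a Gaussian bump that vanishes at the endpoints, so the integration by parts above is safe:

```python
import numpy as np

n, T = 1024, 20.0
t = np.linspace(0.0, T, n, endpoint=False)
f = np.exp(-(t - T / 2)**2)              # smooth bump, ~0 at both endpoints
fdot = -2.0 * (t - T / 2) * f            # its exact derivative

# Angular frequencies matching numpy's FFT ordering.
omega = 2.0 * np.pi * np.fft.fftfreq(n, d=T / n)

# Differentiation in time = multiplication by i*omega in frequency.
print(np.allclose(np.fft.fft(fdot), 1j * omega * np.fft.fft(f), atol=1e-6))
```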

The Fourier transform is linear, too, so we can start with our differential equation:

L \ddot Q + R \dot Q + C^{-1} Q = V

and take the Fourier transform of each term, getting

\displaystyle{ \left((i\omega)^2 L + (i\omega) R + C^{-1}\right) \hat{Q}(\omega) = \hat{V}(\omega) }

We can now solve for the charge in a completely painless way:

\displaystyle{  \hat{Q}(\omega) =  \frac{1}{((i\omega)^2 L + (i\omega) R + C^{-1})} \, \hat{V}(\omega) }

Well, we actually solved for \hat{Q} in terms of \hat{V}. But if we’re good at taking Fourier transforms, this is good enough. And it has a deep inner meaning.

To see its inner meaning, note that the Fourier transform of an oscillating function \exp(i \omega_0 t) is a delta function at the frequency \omega = \omega_0. This says that this oscillating function is purely of frequency \omega_0, like a laser beam of one pure color, or a sound of one pure pitch.

Actually there’s a little fudge factor due to how I defined the Fourier transform: if

f(t) = e^{i\omega_0 t}

then

\displaystyle{ \hat{f}(\omega) = \sqrt{2 \pi} \, \delta(\omega - \omega_0) }

But it’s no big deal. (You can define your Fourier transform so the 2\pi doesn’t show up here, but it’s bound to show up somewhere.)

Also, you may wonder how the complex numbers got into the game. What would it mean to say the voltage is \exp(i \omega t)? The answer is: don’t worry, everything in sight is linear, so we can take the real or imaginary part of any equation and get one that makes physical sense.

Anyway, what does our relation

\displaystyle{  \hat{Q}(\omega) =  \frac{1}{((i\omega)^2 L + (i\omega) R + C^{-1})} \hat{V}(\omega) }

mean? It means that if we put an oscillating voltage of frequency \omega_0 across our circuit, like this:

V(t) = e^{i \omega_0 t}

then we’ll get an oscillating charge at the same frequency, like this:

\displaystyle{  Q(t) =  \frac{1}{((i\omega_0)^2 L + (i\omega_0) R + C^{-1})}  e^{i \omega_0 t}  }

To see this, just use the fact that the Fourier transform of \exp(i \omega_0 t) is essentially a delta function at \omega_0, and juggle the equations appropriately!

But the magnitude and phase of this oscillating charge Q(t) depends on the function

\displaystyle{ \frac{1}{((i\omega_0)^2 L + (i\omega_0) R + C^{-1})}  }

For example, Q(t) will be big when \omega_0 is near a pole of this function! We can use this to study the resonant frequency of our circuit.

The same idea works for many more complicated circuits, and other things too. The function up there is an example of a transfer function: it describes the response of a linear, time-invariant system to an input of a given frequency. Here the ‘input’ is the voltage and the ‘response’ is the charge.
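For instance, here’s a small sketch of my own (illustrative values again) that scans the magnitude of this transfer function and finds the resonant peak, which for light damping sits near \omega = 1/\sqrt{LC}:

```python
import numpy as np

L, R, C = 1.0, 0.1, 1.0                          # light damping: R is small
H = lambda w: 1.0 / ((1j * w)**2 * L + 1j * w * R + 1.0 / C)

# Scan |H| over a range of frequencies and locate the peak.
w = np.linspace(0.01, 3.0, 10000)
w_peak = w[np.argmax(np.abs(H(w)))]
print(round(w_peak, 2))    # ~1.0, i.e. close to 1/sqrt(L C)
```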


Taking this idea to its logical conclusion, we can see inductors and capacitors as being resistors with a frequency-dependent, complex-valued resistance! This generalized resistance is called ‘impedance’. Let’s see how it works.

Suppose we have an electrical circuit. Consider any edge e of this circuit:

• If our edge e is labelled by a resistor of resistance R:

we have

V_e = R I_e

Taking Fourier transforms, we get

\hat{V}_e = R \hat{I}_e

so nothing interesting here: our resistor acts like a resistor of resistance R no matter what the frequency of the voltage and current are!

• If our edge e is labelled by an inductor of inductance L:

we have

\displaystyle{ V_e = L \frac{d I_e}{d t} }

Taking Fourier transforms, we get

\hat{V}_e = (i\omega L) \hat{I}_e

This is interesting: our inductor acts like a resistor of resistance i \omega L when the frequency of the current and voltage is \omega. So, we say the ‘impedance’ of the inductor is i \omega L.

• If our edge e is labelled by a capacitor of capacitance C:

we have

\displaystyle{ I_e = C \frac{d V_e}{d t} }

Taking Fourier transforms, we get

\hat{I}_e = (i\omega C) \hat{V}_e

or

\displaystyle{ \hat{V}_e = \frac{1}{i \omega C} \hat{I}_e }

So, our capacitor acts like a resistor of resistance 1/(i \omega C) when the frequency of the current and voltage is \omega. We say the ‘impedance’ of the capacitor is 1/(i \omega C).

It doesn’t make sense to talk about the impedance of a voltage source or current source, since these circuit elements don’t give a linear relation between voltage and current. But whenever an element is linear and its properties don’t change with time, the Fourier transformed voltage will be some function of frequency times the Fourier transformed current. And in this case, we call that function the impedance of the element. The symbol for impedance is Z, so we have

\hat{V}_e(\omega) = Z(\omega) \hat{I}_e(\omega)

or

\hat{V}_e = Z \hat{I}_e

for short.
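One payoff: impedances in series simply add, like resistances. For the series RLC circuit, Z(\omega) = R + i\omega L + 1/(i\omega C), and since \hat{I} = i\omega \hat{Q}, multiplying Z by i\omega recovers the polynomial from our differential equation. A quick numerical check of my own, with arbitrary values:

```python
import numpy as np

L, R, C = 2.0, 3.0, 0.5       # arbitrary illustrative values
w = 1.7

Z = R + 1j * w * L + 1.0 / (1j * w * C)           # series impedance
poly = (1j * w)**2 * L + (1j * w) * R + 1.0 / C   # from L Q'' + R Q' + Q/C

print(np.isclose(Z * 1j * w, poly))               # True
```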

The big picture

In case you’re getting lost in the details, here are the big lessons for today:

• There’s a detailed analogy between electronics and mechanics, which we’ll later extend to many other systems.

• The study of linear time-invariant elements can be reduced to the study of resistors if we generalize resistance to impedance by letting it be a complex-valued function instead of a real number.

One thing we’re doing is preparing for a general study of linear time-invariant open systems. We’ll use linear algebra, but the field—the number system in our linear algebra—will consist of complex-valued functions, rather than real numbers.


Let’s not forget our original problem:

This is closely related to the problem we just solved. All the equations we derived still hold! But if you do the math, or use some intuition, you’ll see the voltage source ensures that the voltage we’ve been calling V is a constant. So, the current I flowing around the wire obeys the same equation we got before:

L \ddot Q + R \dot Q + C^{-1} Q = V

where \dot Q = I. The only difference is that now V is constant.

Puzzle. Solve this equation for Q(t).

There are lots of ways to do this. You could use a Fourier transform, which would give a satisfying sense of completion to this blog article. Or, you could do it some other way.
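(Mild spoiler:) if you want to check a proposed answer symbolically, here’s a sympy sketch of my own, valid away from the critically damped case R^2 = 4L/C where the two roots coincide: the particular solution is a constant, plus transients built from the roots of the characteristic equation.

```python
import sympy as sp

t, r = sp.symbols('t r')
L, R, C, V = sp.symbols('L R C V', positive=True)
C1, C2 = sp.symbols('C1 C2')     # arbitrary constants set by initial conditions

# Roots of the characteristic equation L r^2 + R r + 1/C = 0
# (assumed distinct, i.e. not the critically damped case).
roots = sp.solve(sp.Eq(L * r**2 + R * r + 1 / C, 0), r)

# Candidate solution: constant particular part plus exponential transients.
Q = C * V + C1 * sp.exp(roots[0] * t) + C2 * sp.exp(roots[1] * t)

residual = L * Q.diff(t, 2) + R * Q.diff(t) + Q / C - V
print(sp.simplify(residual))   # 0
```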

The Planck Mission

22 March, 2013

Yesterday, the Planck Mission released a new map of the cosmic microwave background radiation:

380,000 years after the Big Bang, the Universe cooled down enough for protons and electrons to settle down and combine into hydrogen atoms. Protons and electrons are charged, so back when they were freely zipping around, no light could go very far without getting absorbed and then re-radiated. When they combined into neutral hydrogen atoms, the Universe soon switched to being almost transparent… as it is today. So the light emitted from that time is still visible now!

And it would look like this picture here… if you could see microwaves.

When this light was first emitted, it would have looked white to our eyes, since the temperature of the Universe was about 4000 kelvin. That’s the temperature when half the hydrogen atoms split apart into electrons and protons. 4200 kelvin looks like a fluorescent light; 2800 kelvin like an incandescent bulb, rather yellow.

But as the Universe expanded, this light got stretched out to orange, red, infrared… and finally a dim microwave glow, invisible to human eyes. The average temperature of this glow is very close to absolute zero, but it’s been measured very precisely: 2.725 kelvin.
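As a rough cross-check of those temperatures (my own aside, using Wien’s displacement law, which isn’t in the post): the peak wavelength of blackbody radiation is b/T with b \approx 2.898 \times 10^{-3} m·K, which puts the original 4000 kelvin glow near the visible band and today’s 2.725 kelvin glow near one millimetre, squarely in the microwaves.

```python
# Wien's displacement law: peak wavelength = b / T.
b = 2.898e-3                     # Wien displacement constant, metre kelvins

for T in (4000.0, 2.725):
    print(f"T = {T} K -> peak wavelength ~ {b / T:.2e} m")
```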

But the temperature of the glow is not the same in every direction! There are tiny fluctuations! You can see them in this picture. The colors here span a range of ± .0002 kelvin.

These fluctuations are very important, because they were later amplified by gravity, with denser patches of gas collapsing under their own gravitational attraction (thanks in part to dark matter), and becoming even denser… eventually leading to galaxies, stars and planets, you and me.

But where did these fluctuations come from? I suspect they started life as quantum fluctuations in an originally completely homogeneous Universe. Quantum mechanics takes quite a while to explain – but in this theory a situation can be completely symmetrical, yet when you measure it, you get an asymmetrical result. The universe is then a ‘sum’ of worlds where these different results are seen. The overall universe is still symmetrical, but each observer sees just a part: an asymmetrical part.

If you take this seriously, there are other worlds where fluctuations of the cosmic microwave background radiation take all possible patterns… and form galaxies in all possible patterns. So while the universe as we see it is asymmetrical, with galaxies and stars and planets and you and me arranged in a complicated and seemingly arbitrary way, the overall universe is still symmetrical – perfectly homogeneous!

That seems very nice to me. But the great thing is, we can learn more about this, not just by chatting, but by testing theories against ever more precise measurements. The Planck Mission is a great improvement over the Wilkinson Microwave Anisotropy Probe (WMAP), which in turn was a huge improvement over the Cosmic Background Explorer (COBE):

Here is some of what they’ve learned:

• It now seems the Universe is 13.82 ± 0.05 billion years old. This is a bit higher than the previous estimate of 13.77 ± 0.06 billion years, due to the Wilkinson Microwave Anisotropy Probe.

• It now seems the rate at which the universe is expanding, known as Hubble’s constant, is 67.15 ± 1.2 kilometers per second per megaparsec. A megaparsec is roughly 3 million light-years. This is less than earlier estimates using space telescopes, such as NASA’s Spitzer and Hubble.

• It now seems the fraction of mass-energy in the Universe in the form of dark matter is 26.8%, up from 24%. Dark energy is now estimated at 68.3%, down from 71.4%. And normal matter is now estimated at 4.9%, up from 4.6%.

These cosmological parameters, and a bunch more, are estimated here:

Planck 2013 results. XVI. Cosmological parameters.

It’s amazing how we’re getting more and more accurate numbers for these basic facts about our world! But the real surprises lie elsewhere…

A lopsided universe, with a cold spot?


The Planck Mission found two big surprises in the cosmic microwave background:

• This radiation is slightly different on opposite sides of the sky! This is not due to the fact that the Earth is moving relative to the average position of galaxies. That fact does make the radiation look hotter in the direction we’re moving. But that produces a simple pattern called a ‘dipole moment’ in the temperature map. If we subtract that out, it seems there are real differences between two sides of the Universe… and they are complex, interesting, and not explained by the usual theories!

• There is a cold spot that seems too big to be caused by chance. If this is for real, it’s the largest thing in the Universe.

Could these anomalies be due to experimental errors, or errors in data analysis? I don’t know! They were already seen by the Wilkinson Microwave Anisotropy Probe; for example, here is WMAP’s picture of the cold spot:

The Planck Mission seems to be seeing them more clearly with its better measurements. Paolo Natoli, from the University of Ferrara writes:

The Planck data call our attention to these anomalies, which are now more important than ever: with data of such quality, we can no longer neglect them as mere artefacts and we must search for an explanation. The anomalies indicate that something might be missing from our current understanding of the Universe. We need to find a model where these peculiar traits are no longer anomalies but features predicted by the model itself.

For a lot more detail, see this paper:

Planck 2013 results. XXIII. Isotropy and statistics of the CMB.

(I apologize for not listing the authors on these papers, but there are hundreds!) Let me paraphrase the abstract for people who want just a little more detail:

Many of these anomalies were previously observed in the Wilkinson Microwave Anisotropy Probe data, and are now confirmed at similar levels of significance (around 3 standard deviations). However, we find little evidence for non-Gaussianity with the exception of a few statistical signatures that seem to be associated with specific anomalies. In particular, we find that the quadrupole-octopole alignment is also connected to a low observed variance of the cosmic microwave background signal. The dipolar power asymmetry is now found to persist to much smaller angular scales, and can be described in the low-frequency regime by a phenomenological dipole modulation model. Finally, it is plausible that some of these features may be reflected in the angular power spectrum of the data which shows a deficit of power on the same scales. Indeed, when the power spectra of two hemispheres defined by a preferred direction are considered separately, one shows evidence for a deficit in power, whilst its opposite contains oscillations between odd and even modes that may be related to the parity violation and phase correlations also detected in the data. Whilst these analyses represent a step forward in building an understanding of the anomalies, a satisfactory explanation based on physically motivated models is still lacking.

If you’re a scientist, your mouth should be watering now… your tongue should be hanging out! If this stuff holds up, it’s amazing, because it would call for real new physics.

I’ve heard that the difference between hemispheres might fit the simplest homogeneous but not isotropic solutions of general relativity, the Bianchi models. However, this is something one should carefully test using statistics… and I’m sure people will start doing this now.

As for the cold spot, the best explanation I can imagine is some sort of mechanism for producing fluctuations very early on… so that these fluctuations would get blown up to enormous size during the inflationary epoch, roughly between 10^{-36} and 10^{-32} seconds after the Big Bang. I don’t know what this mechanism would be!

There are also ways of trying to ‘explain away’ the cold spot, but even these seem jaw-droppingly dramatic. For example, an almost empty region 150 megaparsecs (500 million light-years) across would tend to cool down cosmic microwave background radiation coming through it. But it would still be the largest thing in the Universe! And such an unusual void would seem to beg for an explanation of its own.

Particle physics

The Planck Mission also shed a lot of light on particle physics, and especially on inflation. But, it mainly seems to have confirmed what particle physicists already suspected! This makes them rather grumpy, because these days they’re always hoping for something new, and they’re not getting it.

We can see this at Jester’s blog Résonaances, which also gives a very nice, though technical, summary of what the Planck Mission did for particle physics:

From a particle physicist’s point of view the single most interesting observable from Planck is the notorious N_{\mathrm{eff}}. This observable measures the effective number of degrees of freedom with sub-eV mass that coexisted with the photons in the plasma at the time when the CMB was formed (see e.g. my older post for more explanations). The standard model predicts N_{\mathrm{eff}} \approx 3, corresponding to the 3 active neutrinos. Some models beyond the standard model featuring sterile neutrinos, dark photons, or axions could lead to N_{\mathrm{eff}} > 3, not necessarily an integer. For a long time various experimental groups have claimed N_{\mathrm{eff}} much larger than 3, but with an error too large to blow the trumpets. Planck was supposed to sweep the floor and it did. They find

N_{\mathrm{eff}} = 3 \pm 0.5,

that is, no hint of anything interesting going on. The gurgling sound you hear behind the wall is probably your colleague working on sterile neutrinos committing a ritual suicide.

Another number of interest for particle theorists is the sum of neutrino masses. Recall that oscillation experiments tell us only about the mass differences, whereas the absolute neutrino mass scale is still unknown. Neutrino masses larger than 0.1 eV would produce an observable imprint into the CMB. [....] Planck sees no hint of neutrino masses and puts the 95% CL limit at 0.23 eV.

Literally, the most valuable Planck result is the measurement of the spectral index n_s, as it may tip the scale for the Nobel committee to finally hand out the prize for inflation. Simplest models of inflation (e.g., a scalar field φ with a φ^n potential slowly changing its vacuum expectation value) predict a spectrum of primordial density fluctuations that is adiabatic (the same in all components) and Gaussian (full information is contained in the 2-point correlation function). Much as previous CMB experiments, Planck does not see any departures from that hypothesis. A more quantitative prediction of simple inflationary models is that the primordial spectrum of fluctuations is almost but not exactly scale-invariant. More precisely, the spectrum is of the form

\displaystyle{ P \sim (k/k_0)^{n_s-1} }

with n_s close to but typically slightly smaller than 1, the size of n_s being dependent on how quickly (i.e. how slowly) the inflaton field rolls down its potential. The previous result from WMAP-9,

n_s=0.972 \pm 0.013

(n_s =0.9608 \pm 0.0080 after combining with other cosmological observables) was already a strong hint of a red-tilted spectrum. The Planck result

n_s = 0.9603 \pm 0.0073

(n_s =0.9608 \pm 0.0054 after combination) pushes the departure of n_s - 1 from zero past the magic 5 sigma significance. This number can of course also be fitted in more complicated models or in alternatives to inflation, but it is nevertheless a strong support for the most trivial version of inflation.


In summary, the cosmological results from Planck are really impressive. We’re looking into a pretty wide range of complex physical phenomena occurring billions of years ago. And, at the end of the day, we’re getting a perfect description with a fairly simple model. If this is not a moment to cry out “science works bitches”, nothing is. Particle physicists, however, can find little inspiration in the Planck results. For us, what Planck has observed is by no means an almost perfect universe… it’s rather the most boring universe.

I find it hilarious to hear someone complain that the universe is “boring” on a day when astrophysicists say they’ve discovered the universe is lopsided and has a huge cold region, the largest thing ever seen by humans!

However, particle physicists seem so far rather skeptical of these exciting developments. Is this sour grapes, or are they being wisely cautious?

Time, as usual, will tell.

Centre for Quantum Mathematics and Computation

6 March, 2013

This fall they’re opening a new Centre for Quantum Mathematics and Computation at Oxford University. They’ll be working on diagrammatic methods for topology and quantum theory, quantum gravity, and computation. You’ll understand what this means if you know the work of the people involved:

• Samson Abramsky
• Bob Coecke
• Christopher Douglas
• Kobi Kremnitzer
• Steve Simon
• Ulrike Tillmann
• Jamie Vicary

All these people are already at Oxford, so you may wonder what’s new about this center. I’m not completely sure, but they’ve gotten money from EPSRC (roughly speaking, the British NSF), and they’re already hiring a postdoc. Applications are due on March 11, so hurry up if you’re interested!

They’re having a conference October 1st to 4th to start things off. I’ll be speaking there, and they tell me that Steve Awodey, Alexander Beilinson, Lucien Hardy, Martin Hyland, Chris Isham, Dana Scott, and Anton Zeilinger have been invited too.

I’m really looking forward to seeing Chris Isham, since he’s one of the most honest and critical thinkers about quantum gravity and the big difficulties we have in understanding this subject—and he has trouble taking airplane flights, so it’s been a long time since I’ve seen him. It’ll also be great to see all the other people I know, and meet the ones I don’t.

For example, back in the 1990s, I used to spend summers in Cambridge talking about n-categories with Martin Hyland and his students Eugenia Cheng, Tom Leinster and Aaron Lauda (who had been an undergraduate at U.C. Riverside). And more recently I’ve been talking a lot with Jamie Vicary about categories and quantum computation—since he was in Singapore some of the time while I was there. (Indeed, I’m going back there this summer, and so will he.)

I’m not as big on n-categories and quantum gravity as I used to be, but I’m still interested in the foundations of quantum theory and how it’s connected to computation, so I think I can give a talk with some new ideas in it.

Black Holes and the Golden Ratio

28 February, 2013


The golden ratio shows up in the physics of black holes!

Or does it?

Most things get hotter when you put more energy into them. But systems held together by gravity often work the other way. For example, when a red giant star runs out of fuel and collapses, its energy goes down but its temperature goes up! We say these systems have a negative specific heat.

The prime example of a system held together by gravity is a black hole. Hawking showed—using calculations, not experiments—that a black hole should not be perfectly black. It should emit ‘Hawking radiation’. So it should have a very slight glow, as if it had a temperature above zero. For a black hole the mass of the Sun this temperature would be just 6 × 10^{-8} kelvin.
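That figure is easy to check from Hawking’s formula for the temperature of a non-rotating black hole, T = \hbar c^3 / (8 \pi G M k_B). Here’s a minimal sketch, with approximate values of the physical constants:

```python
import math

# Approximate physical constants (SI units)
hbar  = 1.054571817e-34   # reduced Planck constant, J s
c     = 2.99792458e8      # speed of light, m/s
G     = 6.67430e-11       # gravitational constant, m^3 kg^-1 s^-2
k_B   = 1.380649e-23      # Boltzmann constant, J/K
M_sun = 1.98892e30        # solar mass, kg

def hawking_temperature(M):
    """Hawking temperature (kelvin) of a non-rotating black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(hawking_temperature(M_sun))   # about 6.2e-8 K
```

Note the inverse dependence on M: the smaller the hole, the hotter it glows, which is the negative specific heat in action.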

This is absurdly chilly, much colder than the microwave background radiation left over from the Big Bang. So in practice, such a black hole will absorb stuff—stars, nearby gas and dust, starlight, microwave background radiation, and so on—and grow bigger. But if we could protect it from all this stuff, and put it in a very cold box, it would slowly shrink by emitting radiation and losing energy, and thus mass. As it lost energy, its temperature would go up. The less energy it has, the hotter it gets: a negative specific heat! Eventually, as it shrinks to nothing, it should explode in a very hot blast.

But for a spinning black hole, things are more complicated. If it spins fast enough, its specific heat will be positive, like a more ordinary object.

And according to a 1989 paper by Paul Davies, the transition to positive specific heat happens at a point governed by the golden ratio! He claimed that in units where the speed of light and gravitational constant are 1, it happens when

\displaystyle{  \frac{J^2}{M^4} = \frac{\sqrt{5} - 1}{2}  }

Here J is the black hole’s angular momentum, M is its mass, and

\displaystyle{ \frac{\sqrt{5} - 1}{2} = 0.6180339\dots }

is a version of the golden ratio! This is for black holes with no electric charge.

Unfortunately, this claim is false. Cesar Uliana, who just did a master’s thesis on black hole thermodynamics, pointed this out in the comments below after I posted this article.

And curiously, twelve years before writing this paper with the mistake in it, Davies wrote a paper that got the right answer to the same problem! It’s even mentioned in the abstract.

The correct constant is not the golden ratio! The correct constant is smaller:

\displaystyle{ 2 \sqrt{3} - 3 = 0.46410161513\dots }

However, Greg Egan figured out the nature of Davies’ slip, and thus discovered how the golden ratio really does show up in black hole physics… though in a more quirky and seemingly less significant way.

As usually defined, the specific heat of a rotating black hole measures the change in internal energy per change in temperature while angular momentum is held constant. But Davies looked at the change in internal energy per change in temperature while the ratio of angular momentum to mass is held constant. It’s this modified quantity that switches from positive to negative when

\displaystyle{  \frac{J^2}{M^4} = \frac{\sqrt{5} - 1}{2} }

In other words:

Suppose we gradually add mass and angular momentum to a black hole while not changing the ratio of angular momentum, J, to mass, M. Then J^2/M^4 gradually drops. As this happens, the black hole’s temperature increases until

\displaystyle{ \frac{J^2}{M^4} = \frac{\sqrt{5} - 1}{2} }

in units where the speed of light and gravitational constant are 1. And then it starts dropping!

What does this mean? It’s hard to tell. It doesn’t seem very important, because it seems there’s no good physical reason for the ratio of J to M to stay constant. In particular, as a black hole shrinks by emitting Hawking radiation, this ratio goes to zero. In other words, the black hole spins down faster than it loses mass.


Discussions of black holes and the golden ratio can be found in a variety of places. Mario Livio is the author of The Golden Ratio, and also an astrophysicist, so it makes sense that he would be interested in this connection. He wrote about it here:

• Mario Livio, The golden ratio and astronomy, Huffington Post, 22 August 2012.

Marcus Chown, the main writer on cosmology for New Scientist, talked to Livio and wrote about it here:

• Marcus Chown, The golden rule, The Guardian, 15 January 2003.

Chown writes:

Perhaps the most surprising place the golden ratio crops up is in the physics of black holes, a discovery made by Paul Davies of the University of Adelaide in 1989. Black holes and other self-gravitating bodies such as the sun have a “negative specific heat”. This means they get hotter as they lose heat. Basically, loss of heat robs the gas of a body such as the sun of internal pressure, enabling gravity to squeeze it into a smaller volume. The gas then heats up, for the same reason that the air in a bicycle pump gets hot when it is squeezed.

Things are not so simple, however, for a spinning black hole, since there is an outward “centrifugal force” acting to prevent any shrinkage of the hole. The force depends on how fast the hole is spinning. It turns out that at a critical value of the spin, a black hole flips from negative to positive specific heat—that is, from growing hotter as it loses heat to growing colder. What determines the critical value? The mass of the black hole and the golden ratio!

Why is the golden ratio associated with black holes? “It’s a complete enigma,” Livio confesses.

Extremal black holes

As we’ve seen, a rotating uncharged black hole has negative specific heat whenever the angular momentum is below a certain critical value:

\displaystyle{ J < k M^2 }


where

\displaystyle{ k = \sqrt{2 \sqrt{3} - 3} = 0.68125003863\dots }

As J goes up to this critical value, the specific heat actually approaches -\infty! On the other hand, a rotating uncharged black hole has positive specific heat when

\displaystyle{  J > kM^2}

and as J goes down to this critical value, the specific heat approaches +\infty. So, there’s some sort of ‘phase transition’ at

\displaystyle{  J = k M^2 }

But as we make the black hole spin even faster, something very strange happens when

\displaystyle{ J > M^2 }

Then the black hole gets a naked singularity!

In other words, its singularity is no longer hidden behind an event horizon. An event horizon is an imaginary surface such that if you cross it, you’re doomed to never come back out. As far as we know, all black holes in nature have their singularities hidden behind an event horizon. But if the angular momentum were too big, this would not be true!

A black hole poised right at the brink:

\displaystyle{ J = M^2 }

is called an ‘extremal’ black hole.

Black holes in nature

Most physicists believe it’s impossible for black holes to go beyond extremality. There are lots of reasons for this. But do any black holes seen in nature get close to extremality? For example, do any spin so fast that they have positive specific heat? It seems the answer is yes!

Over on Google+, Robert Penna writes:

Nature seems to have no trouble making black holes on both sides of the phase transition. The spins of about a dozen solar mass black holes have reliable measurements. GRS1915+105 is close to J=M^2. The spin of A0620-00 is close to J=0. GRO J1655-40 has a spin sitting right at the phase transition.

The spins of astrophysical black holes are set by a competition between accretion (which tends to spin things up to J=M^2) and jet formation (which tends to drain angular momentum). I don’t know of any astrophysical process that is sensitive to the black hole phase transition.

That’s really cool, but the last part is a bit sad! The problem, I suspect, is that Hawking radiation is so pathetically weak.

But by the way, you may have heard of this recent paper—about a supermassive black hole that’s spinning super-fast:

• G. Risaliti, F. A. Harrison, K. K. Madsen, D. J. Walton, S. E. Boggs, F. E. Christensen, W. W. Craig, B. W. Grefenstette, C. J. Hailey, E. Nardini, Daniel Stern and W. W. Zhang, A rapidly spinning supermassive black hole at the centre of NGC 1365, Nature (2013), 449–451.

They estimate that this black hole has a mass about 2 million times that of our sun, and that

\displaystyle{ J \ge 0.84 \, M^2 }

with 90% confidence. If so, this is above the phase transition where it gets positive specific heat.

The nitty-gritty details

Here is where Paul Davies claimed the golden ratio shows up in black hole physics:

• Paul C. W. Davies, Thermodynamic phase transitions of Kerr-Newman black holes in de Sitter space, Classical and Quantum Gravity 6 (1989), 1909–1914.

He works out when the specific heat diverges for rotating and/or charged black holes in a universe with a positive cosmological constant: so-called de Sitter space. The formula is pretty complicated. Then he sets the cosmological constant \Lambda to zero. In this case de Sitter space flattens out to Minkowski space, and his black holes reduce to Kerr–Newman black holes: that is, rotating and/or charged black holes in an asymptotically Minkowskian spacetime. He writes:

In the limit \alpha \to 0 (that is, \Lambda \to 0), the cosmological horizon no longer exists: the solution corresponds to the case of a black hole in asymptotically flat spacetime. In this case r may be explicitly eliminated to give

(\beta + \gamma)^3 + \beta^2 -\beta - \frac{3}{4} \gamma^2  = 0.   \qquad (2.17)


where

\beta = a^2 / M^2

\gamma = Q^2 / M^2

and he says M is the black hole’s mass, Q is its charge and a is its angular momentum. He continues:

For \beta = 0 (i.e. a = 0) equation (2.17) has the solution \gamma = 3/4, or

\displaystyle{ Q^2 = \frac{3}{4} M^2 } \qquad  (2.18)

For \gamma = 0 (i.e. Q = 0), equation (2.17) may be solved to give \beta = (\sqrt{5} - 1)/2 or

\displaystyle{ a^2 = (\sqrt{5} - 1)M^2/2 \cong 0.62 M^2   }  \qquad  (2.19)

These were the results first reported for the black-hole case in Davies (1979).

In fact a can’t be the angular momentum, since the right condition for a phase transition should say the black hole’s angular momentum is some constant times its mass squared. I think Davies really meant to define

a = J/M

This is important beyond the level of a mere typo, because we get different concepts of specific heat depending on whether we hold J or a constant while taking certain derivatives!

In the usual definition of specific heat for rotating black holes, we hold J constant and see how the black hole’s heat energy changes with temperature. If we call this specific heat C_J, we have

\displaystyle{ C_J = T \left.\frac{\partial S}{\partial T}\right|_J }

where S is the black hole’s entropy. This specific heat C_J becomes infinite when

\displaystyle{ \frac{J^2}{M^4} = 2 \sqrt{3} - 3  }

But if instead we hold a = J/M constant, we get something else—and this is what Davies did! If we call this modified concept of specific heat C_a, we have

\displaystyle{ C_a = T \left.\frac{\partial S}{\partial T}\right|_a }

This modified ‘specific heat’ C_a becomes infinite when

\displaystyle{  \frac{J^2}{M^4} = \frac{\sqrt{5}-1}{2} }
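Both critical constants can be recovered numerically from the standard Hawking temperature of an uncharged Kerr black hole, T = \sqrt{M^2 - a^2}/(4\pi M r_+) with r_+ = M + \sqrt{M^2 - a^2} and a = J/M, in units G = c = \hbar = k_B = 1. Here’s a sketch (assuming that formula) that uses sympy to find where \partial T / \partial M vanishes, first at fixed J and then at fixed a:

```python
import sympy as sp

M = sp.symbols('M', positive=True)

def kerr_temperature(M, a):
    # Hawking temperature of an uncharged Kerr black hole in units
    # G = c = hbar = k_B = 1, where a = J/M:
    #   T = sqrt(M^2 - a^2) / (4 pi M r_+),  r_+ = M + sqrt(M^2 - a^2)
    rp = M + sp.sqrt(M**2 - a**2)
    return sp.sqrt(M**2 - a**2) / (4 * sp.pi * M * rp)

# C_J diverges where dT/dM = 0 at fixed J.  Set J = 1 and solve for M:
T_fixed_J = kerr_temperature(M, 1 / M)          # a = J/M with J = 1
Mc = sp.nsolve(sp.diff(T_fixed_J, M), M, 1.2)
print(1 / Mc**4)    # J^2/M^4 at the transition: 2*sqrt(3) - 3 = 0.4641...

# C_a diverges where dT/dM = 0 at fixed a.  Set a = 1 and solve for M:
T_fixed_a = kerr_temperature(M, 1)
Mc2 = sp.nsolve(sp.diff(T_fixed_a, M), M, 1.3)
print(1 / Mc2**2)   # J^2/M^4 = a^2/M^2 at the transition: (sqrt(5)-1)/2 = 0.6180...
```

In each case the temperature vanishes at extremality and falls off like 1/M for large mass, so it has a single maximum in between; the specific heat at constant J (or constant a) blows up exactly at that maximum.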

After proving these facts in the comments below, Greg Egan drew some nice graphs to explain what’s going on. Here are the curves of constant temperature as a function of the black hole’s mass M and angular momentum J:

The dashed parabola passing through the peaks of the curves of constant temperature is where C_J becomes infinite. This is where energy can be added without changing the temperature, so long as it’s added in a manner that leaves J constant.

And here are the same curves of constant temperature, along with the parabola where C_a becomes infinite:

This new dashed parabola intersects each curve of constant temperature at the point where the tangent to this curve passes through the origin: that is, where the tangent is a line of constant a=J/M. This is where energy and angular momentum can be added to the hole in a manner that leaves a constant without changing the temperature.

As mentioned, Davies correctly said when the ordinary specific heat C_J becomes infinite in another paper, eleven years earlier:

• Paul C. W. Davies, Thermodynamics of black holes, Rep. Prog. Phys. 41 (1978), 1313–1355.

You can see his answer on page 1336.

This 1978 paper, in turn, is a summary of previous work including an article from a year earlier:

• Paul C. W. Davies, The thermodynamic theory of black holes, Proc. Roy. Soc. Lond. A 353 (1977), 499–521.

And in the abstract of this earlier article, Davies wrote:

The thermodynamic theory underlying black-hole processes is developed in detail and applied to model systems. It is found that Kerr-Newman black holes undergo a phase transition at an angular-momentum mass ratio of 0.68M or an electric charge (Q) of 0.86M, where the heat capacity has an infinite discontinuity. Above the transition values the specific heat is positive, permitting isothermal equilibrium with a surrounding heat bath.

Here the number 0.68 is showing up because

\displaystyle{ \sqrt{ 2 \sqrt{3} - 3 } = 0.68125003863\dots }

The number 0.86 is showing up because

\displaystyle{ \sqrt{ \frac{3}{4} } = 0.86602540378\dots }

By the way, just in case you want to do some computations using experimental data, let me put the speed of light c and gravitational constant G back in the formulas. A rotating (uncharged) black hole is extremal when

\displaystyle{ c J = G M^2 }
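For instance, here’s a quick evaluation of this extremal bound for a black hole with the mass of the Sun (a sketch; the constants are approximate):

```python
G     = 6.67430e-11    # gravitational constant, m^3 kg^-1 s^-2
c     = 2.99792458e8   # speed of light, m/s
M_sun = 1.98892e30     # solar mass, kg

def max_angular_momentum(M):
    """Extremal angular momentum J = G M^2 / c of a black hole of mass M (kg),
    in SI units (kg m^2 / s).  Faster spin would expose a naked singularity."""
    return G * M**2 / c

print(max_angular_momentum(M_sun))   # about 8.8e41 kg m^2 / s
```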

A Bet Concerning Neutrinos (Part 5)

7 January, 2013

It’s a little-known spinoff of Heisenberg’s uncertainty principle. When you accurately measure the velocity of neutrinos, they can turn into ham!

I observed this myself. It came in the mail along with some sausages, bacon, and peach and blueberry syrup. They’re from Heather Vandagriff. Thanks, Heather!

These are the first of my winnings on some bets concerning the famous OPERA experiment that seemed to detect neutrinos going faster than light. I bet that this experiment would be shown wrong. Heather bet me some Tennessee ham against some nice cloth from Singapore.

The OPERA team announced that they’d detected faster-than-light neutrinos back in September 2011. But later, they discovered two flaws in their experimental setup.

First, a fiber optic cable wasn’t screwed in right. This made it take about 70 nanoseconds longer than it should have for a signal to travel from the global positioning system to the so-called ‘master clock’:

Since the clock got its signal late, the neutrinos seemed to show up early. Click on the picture for a more detailed explanation.

On top of this, the clock was poorly calibrated! This had a roughly opposite effect: it tended to make the neutrinos seem to show up late… but only some of the time. However, this effect was not big enough, on average, to cancel the other mistake.

The OPERA team fixed these problems and repeated the experiment in May 2012. The neutrinos came in slower than light:

• OPERA, Measurement of the neutrino velocity with the OPERA detector in the CNGS beam, 12 July 2012.

Three other experiments using the same neutrino source—Borexino, ICARUS, and LVD—also got the same result! For a more detailed post-mortem, with lots of references, see:

Faster-than-light neutrino anomaly, Wikipedia.

My wife Lisa has a saying from her days in the computer business: when in doubt, check the cables.

Rolling Circles and Balls (Part 5)

2 January, 2013

Last time I promised to show you how the problem of a little ball rolling on a big stationary ball can be described using an 8-dimensional number system called the split octonions… if the big ball has a radius that’s 3 times the radius of the little one!

So, let’s get started.

First, I must admit that I lied.

Lying is an important pedagogical technique. The teacher simplifies the situation, so the student doesn’t get distracted by technicalities. Then later—and this is crucial!—the teacher admits that certain statements weren’t really true, and corrects them. It always makes me uncomfortable to do this. But it works better than dumping all the technical details on the students right away. In classes, I sometimes deal with my discomfort by telling the students: “Okay, now I’m going to lie a bit…”

What was my lie? Instead of an ordinary ball rolling on another ordinary ball, we need a ‘spinorial’ ball rolling on a ‘projective’ ball.

Let me explain that.

A spinorial ball

In physics, a spinor is a kind of particle that you need to turn around twice before it comes back to the way it was. Examples include electrons and protons.

If you give one of these particles a full 360° turn, which you can do using a magnetic field, it changes in a very subtle way. You can only detect this change using clever tricks. For example, take a polarized beam of electrons and send it through a barrier with two slits cut out. Each electron goes through both slits, because it’s a wave as well as a particle. Next, put a magnetic field next to one slit that’s precisely strong enough to rotate the electron by 360° if it goes through that slit. Then, make the beams recombine, and see how likely it is for electrons to be found at different locations. You’ll get different results than if you turn off the magnetic field that rotates the electron!

However, if you rotate a spinor by 720°—that is, two full turns—it comes back to exactly the way it was.

This may seem very odd, but when you understand the math of spinors you see it all makes sense. It’s a great example of how you have to follow the math where it leads you. If something is mathematically allowed, nature may take advantage of that possibility, regardless of whether it seems odd to you.

So, I hope you can imagine a ‘spinorial’ ball, which changes subtly when you turn it 360° around any axis, but comes back to its original orientation when you turn it around 720°. If you can’t, I’ll present the math more rigorously later on. That may or may not help.

A projective ball

What’s a ‘projective’ ball? It’s a ball whose surface is not a sphere, but a projective plane. A projective plane is a sphere that’s been modified so that diametrically opposite points count as the same point. The north pole is the same as the south pole, and so on!

In geography, the point diametrically opposite to some point on the Earth’s surface is called its antipodes, so let’s use that term. There’s a website that lets you find the antipodes of any place on Earth. Unfortunately the antipodes of most famous places are under water! But the antipodes of Madrid is in New Zealand, near Wellington:

When we roll a little ball on a big ‘projective’ ball and the little ball reaches the antipodes of where it started, it counts as being back at its original location.

If you find this hard to visualize, imagine rolling two indistinguishable little balls on the big ball, that are always diametrically opposite each other. When one little ball rolls to the antipodes of where it started, the other one has taken its place, and the situation looks just like when you started!

A spinorial ball on a projective ball

Now let’s combine these ideas. Imagine a little spinorial ball rolling on a big projective ball. You need to turn the spinorial ball around twice to make it come back to its original orientation. But you only need to roll it halfway around the projective ball for it to come back to its original location.

These effects compensate for each other to some extent. The first makes it twice as hard to get back to where you started. The second makes it twice as easy!

But something really great happens when the big ball is 3 times as big as the little one. And that’s what I want you to understand.

For starters, consider an ordinary ball rolling on another ordinary ball that’s the same size. How many times does the rolling ball turn as it makes a round trip around the stationary one? If you watch this you can see the answer:

Follow the line drawn on the little ball. It turns around not once, but twice!

Next, consider one ball rolling on another whose radius is 2 times as big. How many times does the rolling ball turn as it makes a round trip?

It turns around 3 times.

And this pattern continues! I don’t have animations proving it, so either take my word for it, read our paper, or show it yourself.

In particular, a ball rolling on a ball whose radius is 3 times as big will turn 4 times as it makes a round trip.

So, by the time the little ball rolls halfway around the big one, it will have turned around twice!
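The counting above is an instance of the classic ‘coin rotation’ effect: the center of the rolling ball travels around a circle of radius R + r, so the ball turns through an angle 2\pi(R + r)/r, that is, R/r + 1 full turns per round trip. A trivial sketch:

```python
def turns_per_trip(ratio):
    """Number of rotations a ball makes rolling once around a stationary ball
    whose radius is `ratio` times its own.  The rolling ball's center travels
    a circle of radius R + r, so it rotates through 2*pi*(R + r)/r radians,
    i.e. R/r + 1 full turns."""
    return ratio + 1

print(turns_per_trip(1))  # 2: equal-size balls
print(turns_per_trip(2))  # 3
print(turns_per_trip(3))  # 4: so halfway around is exactly 2 full turns
```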

But now suppose it’s a spinorial ball rolling on a projective ball. This is perfect. Now when the little ball goes halfway around the big ball, it returns to its original location! And turning the little ball around twice gets it back to its original orientation!

So, there is something very neat about a spinorial ball rolling on a projective ball whose radius is exactly 3 times as big. And this is just the start. Now the split octonions get involved!

The rolling ball geometry

The key is to ponder a curious sort of geometry, which I’ll call the rolling ball geometry. This has ‘points’ and ‘lines’ which are defined in a funny way.

A point is any way a little spinorial ball can touch a projective ball that is 3 times as big. The lines are certain sets of points. A line consists of all the points we reach as the little ball rolls along some great circle on the big one, without slipping or twisting.

Of course these aren’t ‘points’ and ‘lines’ in the usual sense. But ever since the late 1800s, when mathematicians got excited about projective geometry—which is the geometry of the projective plane—we’ve enjoyed studying all sorts of strange variations on Euclidean geometry, with weirdly defined ‘points’ and ‘lines’. The rolling ball geometry fits very nicely into this tradition.

But the amazing thing is that we can describe points and lines of the rolling ball geometry in a completely different way, using the split octonions.

Split octonions

How does it work? As I said last time, the split octonions are an 8-dimensional number system. We build them as follows. We start with the ordinary real numbers. Then we throw in 3 square roots of -1, called i, j, and k, obeying

ij = -ji = k
jk = -kj = i
ki = -ik = j

At this point we have a famous 4-dimensional number system called the quaternions. Quaternions are numbers like

a + bi + cj + dk

where a,b,c,d are real numbers and i, j, k are the square roots of -1 we just created.

To build the octonions, we would now throw in another square root of -1. But to build the split octonions, we instead throw in a square root of +1. Let’s call it \ell. The hard part is saying what rules it obeys when we start multiplying it with other numbers in our system.

For starters, we get three more numbers \ell i, \ell j, \ell k. We decree these to be square roots of +1. But what happens when we multiply these with other things? For example, what is \ell i times j, and so on?

Since I don’t want to bore you, I’ll just get this over with quickly by showing you the multiplication table:

This says that \ell i (read down) times j (read across) is -\ell k, and so on.

Of course, this table is completely indigestible. I could never remember it, and you shouldn’t try. This is not the good way to explain how to multiply split octonions! It’s the lazy way. To really work with the split octonions you need a more conceptual approach, which John Huerta and I explain in our paper. But this is just a quick tour… so, on with the tour!

A split octonion is any number like

a + bi + cj + dk + e \ell + f \ell i + g \ell j + h \ell k

where a,b,c,d,e,f,g,h are real numbers. Since it takes 8 real numbers to specify a split octonion, we say they’re an 8-dimensional number system. But to describe the rolling ball geometry, we only need the imaginary split octonions, which are numbers like

x = bi + cj + dk + e \ell + f \ell i + g \ell j + h \ell k

The imaginary split octonions are 7-dimensional. 3 dimensions come from square roots of -1, while 4 come from square roots of 1.

We can use them to make up a far-out variant of special relativity: a universe with 3 time dimensions and 4 space dimensions! To do this, define the length of an imaginary split octonion x to be the number \|x \| with

\|x\|^2 = -b^2 - c^2 - d^2 + e^2 + f^2 + g^2 + h^2

This is a mutant version of the Pythagorean formula. The length \|x\| is real, in fact positive, for split octonions that point in the space directions. But it’s imaginary for those that point in the time directions!

This should not sound weird if you know special relativity. In special relativity we have spacelike vectors, whose length squared is positive, and timelike ones, whose length squared is negative.

If you don’t know special relativity—well, now you see how revolutionary Einstein’s ideas really are.

We also have vectors whose length squared is zero! These are called null. They’re also called lightlike, because light rays point along null vectors. In other words: light moves just as far in the space directions as it does in the time direction, so it’s poised at the brink between being spacelike and timelike.
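Since only the quadratic form matters so far, this classification is easy to play with on a computer. Here's a minimal sketch in Python; the 7-tuple encoding of the coefficients (b, c, d, e, f, g, h) is just my own convention, not anything from the paper:

```python
def length_squared(x):
    """The 'mutant Pythagorean' quadratic form of signature (3,4):
    minus signs on the i, j, k directions, plus signs on l, li, lj, lk."""
    b, c, d, e, f, g, h = x
    return -b*b - c*c - d*d + e*e + f*f + g*g + h*h

def classify(x):
    q = length_squared(x)
    if q > 0:
        return "spacelike"
    elif q < 0:
        return "timelike"
    return "null"

print(classify((1, 0, 0, 0, 0, 0, 0)))  # pure i direction: timelike
print(classify((0, 0, 0, 1, 0, 0, 0)))  # pure l direction: spacelike
print(classify((1, 0, 0, 1, 0, 0, 0)))  # i + l: null, like a light ray
```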

The punchline

I’m sure you’re wondering where all this is going. Luckily, we’re there. We can describe the rolling ball geometry using the imaginary split octonions! Let me state it and then chat about it:

Theorem. There is a one-to-one correspondence between points in the rolling ball geometry and light rays through the point 0 in the imaginary split octonions. Under this correspondence, lines in the rolling ball geometry correspond to planes containing the point 0 in the imaginary split octonions with the property that whenever x and y lie in this plane, then xy = 0.

Even if you don’t get this, you can see it’s describing the rolling ball geometry in terms of stuff about the split octonions. An immediate consequence is that any symmetry of the split octonions is a symmetry of the rolling ball geometry.

The symmetries of the split octonions form a group called ‘the split form of G2’. With more work, we can show the converse: any symmetry of the rolling ball geometry is a symmetry of the split octonions. So, the symmetry group of the rolling ball geometry is precisely the split form of G2.

So what?

Well, G2 is an ‘exceptional group’—one of five groups discovered only when mathematicians like Killing and Cartan started systematically classifying the simple Lie groups in the late 1800s. The exceptional groups didn’t fit in the lists of groups mathematicians already knew.

If, as Tim Gowers has argued, some math is invented while some is discovered, the exceptional groups were discovered. Finding them was like going to the bottom of the ocean and finding weird creatures you never expected. These groups were—and are—hard to understand! They have dry, technical-sounding names: E6, E7, E8, F4, and G2. They’re important in string theory—but again, just because the structure of mathematics forces it, not because people wanted it.

The exceptional groups can all be described using the octonions, or split octonions. But the octonions are also rather hard to understand. The rolling ball geometry, on the other hand, is rather simple and easy to visualize. So, it’s a way of bringing some exotic mathematics a bit closer to ordinary life.

Well, okay—in ordinary life you’ve probably never thought about a spinorial ball rolling on a projective ball. But still: spinors and projective planes are far less exotic than split octonions and exceptional Lie groups. Any mathematician worth their salt knows about spinors and projective planes. They’re things that make plenty of sense.

I think now is a good time for most of you nonmathematicians to stop reading. I’ll leave off with a New Year’s puzzle:

Puzzle: Relative to the fixed stars, how many times does the Earth turn around its axis in a year?

Bye! It was nice seeing you!

The gory details

Still here? Cool. I want to wrap up by presenting the theorem in a more precise way, and then telling the detailed history of the rolling ball problem.

How can we specify a point in the rolling ball geometry? We need to say the location where the little ball touches the big ball, and we need to describe the ‘orientation’ of the little ball: that is, how it’s been rotated.

The point where the little ball touches the big one is just any point on the surface of the big ball. If the big ball were an ordinary ball this would be a point on the sphere, S^2. But since it’s a projective ball, we need a point on the projective plane, \mathbb{R}\mathrm{P}^2.

To describe the orientation of the little ball we need to say how it’s been rotated from some standard orientation. If the little ball were an ordinary ball we’d need to give an element of the rotation group, \mathrm{SO}(3). But since it’s a spinorial ball we need an element of the double cover of the rotation group, namely the special unitary group \mathrm{SU}(2).

So, the space of points in the rolling ball geometry is

X = \mathbb{R}\mathrm{P}^2 \times \mathrm{SU}(2)

This makes it easy to see how the imaginary split octonions get into the game. For starters, \mathrm{SU}(2) is isomorphic to the group of unit quaternions. We can define the absolute values of quaternions in a way that copies the usual formula for complex numbers:

\displaystyle{ |a + bi + cj + dk| = \sqrt{a^2 + b^2 + c^2 + d^2} }

The great thing about the quaternions is that if we multiply two of them, their absolute values multiply. In other words, if p and q are two quaternions,

|pq| = |p| |q|

This implies that the quaternions with absolute value 1—the unit quaternions—are closed under multiplication. In fact, they form a group. And in fact, this group is just SU(2) in mild disguise!
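If you want to check this multiplicativity numerically, here is a small sketch in Python. It uses the standard Hamilton product; the tuple encoding (a, b, c, d) for a + bi + cj + dk is just a convenient assumption:

```python
import math

def qmul(p, q):
    """Hamilton product of quaternions given as tuples (a, b, c, d)."""
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qabs(p):
    """Absolute value: the Pythagorean formula in 4 dimensions."""
    return math.sqrt(sum(t*t for t in p))

i, j = (0, 1, 0, 0), (0, 0, 1, 0)
print(qmul(i, j))  # (0, 0, 0, 1), i.e. ij = k

# |pq| = |p||q|, so the unit quaternions are closed under multiplication:
p, q = (1.0, 2.0, 3.0, 4.0), (0.5, -1.0, 2.0, 0.0)
print(abs(qabs(qmul(p, q)) - qabs(p) * qabs(q)) < 1e-9)  # True
```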

The unit quaternions form a sphere. Not an ordinary ‘2-sphere’ of the sort we’ve been talking about so far, but a ‘3-sphere’ in the 4-dimensional space of quaternions. We call that S^3.

So, the space of points in the rolling ball is isomorphic to a projective plane times a 3-sphere:

X \cong \mathbb{R}\mathrm{P}^2 \times S^3

But since the projective plane \mathbb{R}\mathrm{P}^2 is the sphere S^2 with antipodal points counted as the same point, our space of points is

\displaystyle{ X \cong \frac{S^2 \times S^3}{(p,q) \sim (-p,q)}}

Here dividing or ‘modding out’ by that stuff on the bottom says that we count any point (p,q) in S^2 \times S^3 as the same as (-p,q).

The cool part is that while S^3 is the unit quaternions, we can think of S^2 as the unit imaginary quaternions, where an imaginary quaternion is a number like

bi + cj + dk

So, we’re describing a point in the rolling ball geometry using a unit quaternion and a unit imaginary quaternion. That’s cool. But we can improve this description a bit using a nonobvious fact:

\displaystyle{ X \cong \frac{S^2 \times S^3}{(p,q) \sim (-p,-q)}}

There’s an extra minus sign here! Now we’re counting any point (p,q) in S^2 \times S^3 as the same as (-p,-q). In Proposition 2 in our paper we give an explicit formula for this isomorphism, which is important.

But never mind. Here’s the point of this improvement: we can now describe a point in the rolling ball geometry as a light ray through the origin in the imaginary split octonions! After all, any split octonion is of the form

a + bi + cj + dk + e \ell + f \ell i + g \ell j + h \ell k =  p + \ell q

where p and q are arbitrary quaternions. So, any imaginary split octonion is of the form

bi + cj + dk + e \ell + f \ell i + g \ell j + h \ell k =  p + \ell q

where q is a quaternion and p is an imaginary quaternion. This imaginary split octonion is lightlike if

-b^2 - c^2 - d^2 + e^2 + f^2 + g^2 + h^2 = 0

But this just says

|p|^2 = |q|^2

Any light ray through the origin in the imaginary split octonions consists of all real multiples of some lightlike imaginary split octonion. So, we can describe it using a pair (p,q) where p is an imaginary quaternion and q is a quaternion with the same absolute value as p. We can normalize them so they’re both unit quaternions… but (p,q) and its negative (-p,-q) still determine the same light ray.
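In this (p, q) description the lightlike condition and the normalization are one-liners. Here's a sketch, where the tuple encodings for the imaginary quaternion p and the quaternion q are my own assumption:

```python
import math

def split_norm_sq(p, q):
    """Quadratic form of the imaginary split octonion p + l q, where
    p = (b, c, d) is an imaginary quaternion and q = (e, f, g, h) a quaternion.
    It's lightlike exactly when |p|^2 = |q|^2."""
    return -sum(t*t for t in p) + sum(t*t for t in q)

def normalize_ray(p, q):
    """Scale a nonzero lightlike pair so |p| = |q| = 1.  The light ray is
    then represented by exactly two pairs: (p, q) and (-p, -q)."""
    n = math.sqrt(sum(t*t for t in p))
    return tuple(t/n for t in p), tuple(t/n for t in q)

# i + l is lightlike, since |p| = |q| = 1:
print(split_norm_sq((1, 0, 0), (1, 0, 0, 0)))  # 0

# Rescaling a lightlike pair to unit length:
print(normalize_ray((3, 0, 0), (0, 3, 0, 0)))
# ((1.0, 0.0, 0.0), (0.0, 1.0, 0.0, 0.0))
```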

So, the space of light rays through the origin in the imaginary split octonions is

\displaystyle{\frac{S^2 \times S^3}{(p,q) \sim (-p,-q)}}

But this is the space of points in the rolling ball geometry!


So far nothing relies on knowing how to multiply imaginary split octonions: we only need the formula for their length, which is much simpler. It’s the lines in the rolling ball geometry that require multiplication. In our paper, we show in Theorem 5 that lines correspond to 2-dimensional subspaces of the imaginary split octonions with the property that whenever x, y lie in the subspace, then x y = 0. In particular this implies that x^2 = 0, which turns out to say that x is lightlike. So, these 2d subspaces consist of lightlike elements. But the property that x y = 0 whenever x, y lie in the subspace is actually stronger! And this is how the full strength of the split octonions gets used.

One last comment. What if we hadn’t used a spinorial ball rolling on a projective ball? What if we had used an ordinary ball rolling on another ordinary ball? Then our space of points would be S^2 \times \mathrm{SO}(3). This is a lot like the space X we’ve been looking at. The only difference is a slight change in where we put the minus sign:

\displaystyle{ S^2 \times \mathrm{SO}(3) \cong \frac{S^2 \times S^3}{(p,q) \sim (p,-q)}}

But this space is different from X. We could go ahead and define lines and look for symmetries of this geometry, but we wouldn’t get G2. We’d get a much smaller group. We’d also get a smaller symmetry group if we worked with X but the big ball were anything other than 3 times the size of the small one. For proofs, see:

• Gil Bor and Richard Montgomery, G2 and the “rolling distribution”.

The history

I will resist telling you how to use geometric quantization to get back from the rolling ball geometry to the split octonions. I will also resist telling you about ‘null triples’, which give specially nice bases for the split octonions. This is where John Huerta really pulled out all the stops and used his octonionic expertise to its full extent. For these things, you’ll just have to read our paper.

Instead, I want to tell you about the history of this problem. This part is mainly for math history buffs, so I’ll freely fling around jargon that I’d been suppressing up to now. This part is also where I give credit to all the great mathematicians who figured out most of the stuff I just explained! I won’t include references, except for papers that are free online. You can find them all in our paper.

On May 23, 1887, Wilhelm Killing wrote a letter to Friedrich Engel saying that he had found a 14-dimensional simple Lie algebra. This is now called \mathfrak{g}_2, because it’s the Lie algebra corresponding to the group G2. By October he had completed classifying the simple Lie algebras, and in the next three years he published this work in a series of papers.

Besides the already known classical simple Lie algebras, he claimed to have found six ‘exceptional’ ones. In fact he only gave a rigorous construction of the smallest, \mathfrak{g}_2. Later, in his famous 1894 thesis, Élie Cartan constructed all of them and noticed that two of them were isomorphic, so that there are really only five.

But already in 1893, Cartan had published a note describing an open set in \mathbb{C}^5 equipped with a 2-dimensional ‘distribution’—a smoothly varying field of 2d spaces of tangent vectors—for which the Lie algebra \mathfrak{g}_2 appears as the infinitesimal symmetries. In the same year, and actually in the same journal, Engel noticed the same thing.

In fact, this 2-dimensional distribution is closely related to the rolling ball problem. The point is that the space of configurations of the rolling ball is 5-dimensional, with a 2-dimensional distribution that describes motions of the ball where it rolls without slipping or twisting.

Both Cartan and Engel returned to this theme in later work. In particular, Engel discovered in 1900 that a generic antisymmetric trilinear form on \mathbb{C}^7 is preserved by a group isomorphic to the complex form of G2. Furthermore, starting from this 3-form he constructed a nondegenerate symmetric bilinear form on \mathbb{C}^7. This implies that the complex form of G2 is contained in a group isomorphic to \mathrm{SO}(7,\mathbb{C}). He also noticed that the vectors x \in \mathbb{C}^7 that are null—meaning x \cdot x = 0, where we write the bilinear form as a dot product—define a 5-dimensional projective variety on which G2 acts.

In fact, this variety is the complexification of the configuration space of a spinorial ball rolling on a projective plane! Furthermore, the space \mathbb{C}^7 is best seen as the complexification of the space of imaginary octonions. Like the space of imaginary quaternions (better known as \mathbb{R}^3), the 7-dimensional space of imaginary octonions comes with a dot product and cross product. Engel’s bilinear form on \mathbb{C}^7 arises from complexifying the dot product. His antisymmetric trilinear form arises from the dot product together with the cross product via the formula

x \cdot (y \times z).
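In the simplest case—the imaginary quaternions, i.e. \mathbb{R}^3—this recipe gives a familiar object: x \cdot (y \times z) is the determinant of the matrix with rows x, y, z, and its total antisymmetry is easy to check. A quick Python sketch:

```python
def dot(x, y):
    return sum(a*b for a, b in zip(x, y))

def cross(x, y):
    return (x[1]*y[2] - x[2]*y[1],
            x[2]*y[0] - x[0]*y[2],
            x[0]*y[1] - x[1]*y[0])

def triple(x, y, z):
    """The trilinear form x . (y x z): on R^3 this is the determinant
    of the matrix with rows x, y, z, hence totally antisymmetric."""
    return dot(x, cross(y, z))

print(triple((1, 0, 0), (0, 1, 0), (0, 0, 1)))  # 1
print(triple((0, 1, 0), (1, 0, 0), (0, 0, 1)))  # -1: swapping two arguments flips the sign
```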

However, all this was seen only later! It was only in 1908 that Cartan mentioned that the automorphism group of the octonions is a 14-dimensional simple Lie group. Six years later he stated something he probably had known for some time: this group is the compact real form of G2.

As I already mentioned, the octonions had been discovered long before: in fact the day after Christmas in 1843, by Hamilton’s friend John Graves. Two months before that, Hamilton had sent Graves a letter describing his dramatic discovery of the quaternions. This encouraged Graves to seek an even larger normed division algebra, and thus the octonions were born. Hamilton offered to publicize Graves’ work, but put it off or forgot until the young Arthur Cayley rediscovered the octonions in 1845. That this obscure algebra lay at the heart of all the exceptional Lie algebras and groups became clear only slowly. Cartan’s realization of its relation to \mathfrak{g}_2, and his later work on a symmetry called ‘triality’, was the first step.

In 1910, Cartan wrote a paper that studied 2-dimensional distributions in 5 dimensions. Generically such a distribution is not integrable: in other words, the Lie bracket of two vector fields lying in this distribution does not again lie in this distribution. However, it lies in a 3-dimensional distribution. The Lie bracket of vector fields lying in this 3-dimensional distribution then generically gives arbitrary tangent vectors to the 5-dimensional manifold. Such a distribution is called a (2,3,5) distribution. Cartan worked out a complete system of local geometric invariants for these distributions. He showed that if all these invariants vanish, the infinitesimal symmetries of a (2,3,5) distribution in a neighborhood of a point form the Lie algebra \mathfrak{g}_2.

Again this is relevant to the rolling ball. The space of configurations of a ball rolling on a surface is 5-dimensional, and it comes equipped with a (2,3,5) distribution. The 2-dimensional distribution describes motions of the ball where it rolls without twisting or slipping. The 3-dimensional distribution describes motions where it can roll and twist, but not slip. Cartan did not discuss rolling balls, but he did consider a closely related example: curves of constant curvature 2 or 1/2 in the unit 3-sphere.

Beginning in the 1950s, François Bruhat and Jacques Tits developed a very general approach to incidence geometry, eventually called the theory of ‘buildings’, which among other things gives a systematic approach to geometries having simple Lie groups as symmetries. In the case of G2, because the Dynkin diagram of this group has two dots, the relevant geometry has two types of figure: points and lines. Moreover because the Coxeter group associated to this Dynkin diagram is the symmetry group of a hexagon, a generic pair of points a and d fits into a configuration like this, called an ‘apartment’:

There is no line containing a pair of points here except when a line is actually shown, and more generally there are no ‘shortcuts’ beyond what is shown. For example, we go from a to b by following just one line, but it takes two to get from a to c, and three to get from a to d.

Betty Salzberg wrote a nice introduction to these ideas in 1982. Among other things, she noted that the points and lines in the incidence geometry of the split real form of G2 correspond to 1- and 2-dimensional null subalgebras of the imaginary split octonions. This was shown by Tits in 1955.

In 1993, Bryant and Hsu gave a detailed treatment of curves in manifolds equipped with 2-dimensional distributions, greatly extending the work of Cartan:

• Robert Bryant and Lucas Hsu, Rigidity of integral curves of rank 2 distributions.

They showed how the space of configurations of one surface rolling on another fits into this framework. However, Igor Zelenko may have been the first to explicitly mention a ball rolling on another ball in this context, and to note that something special happens when the ratio of their radii is 3 or 1/3. In a 2005 paper, he considered an invariant of (2,3,5) distributions. He calculated it for the distribution arising from a ball rolling on a larger ball and showed it equals zero in these two cases.

(In our paper, John Huerta and I assume that the rolling ball is smaller than the fixed one, simply to eliminate one of these cases and focus on the case where the fixed ball is 3 times as big.)

In 2006, Bor and Montgomery’s paper put many of the pieces together. They studied the (2,3,5) distribution on S^2 \times \mathrm{SO}(3) coming from a ball of radius 1 rolling on a ball of radius R, and proved a theorem which they credit to Robert Bryant. First, passing to the double cover, they showed the corresponding distribution on S^2 \times \mathrm{SU}(2) has a symmetry group whose identity component contains the split real form of G2 when R = 3 or 1/3. Second, they showed this action does not descend to the original rolling ball configuration space S^2 \times \mathrm{SO}(3). Third, they showed that for any other value of R except R = 1, the symmetry group is isomorphic to \mathrm{SU}(2) \times \mathrm{SU}(2)/\pm(1,1). They also wrote:

Despite all our efforts, the ‘3’ of the ratio 1:3 remains mysterious. In this article it simply arises out of the structure constants for G2 and appears in the construction of the embedding of \mathfrak{so}(3) \times \mathfrak{so}(3) into \mathfrak{g}_2. Algebraically speaking, this ‘3’ traces back to the 3 edges in \mathfrak{g}_2's Dynkin diagram and the consequent relative positions of the long and short roots in the root diagram for \mathfrak{g}_2 which the Dynkin diagram is encoding.

Open problem. Find a geometric or dynamical interpretation for the ‘3’ of the 3:1 ratio.

As you can see from what I’ve said, John Huerta and I have offered a solution to this puzzle: the ‘3’ comes from the fact that a ball rolling once around a ball 3 times as big turns around exactly 4 times—just what you need to get a relationship to a spinorial ball rolling on a projective plane, and thus the lightcone in the split octonions! The most precise statement of this explanation comes in Theorem 3 of our paper.

While Bor and Montgomery’s paper goes into considerable detail about the connection with split octonions, most of their work uses the now standard technology of semisimple Lie algebras: roots, weights and the like. In 2006, Sagerschnig described the incidence geometry of \mathrm{G}_2 using the split octonions, and in 2008, Agrachev wrote a paper entitled ‘Rolling balls and octonions’. He emphasizes that the double cover S^2 \times \mathrm{SU}(2) can be identified with the double cover of \mathrm{PC}, the projectivization of the set of null vectors in the imaginary split octonions. He then shows that given a point \langle x \rangle \in \mathrm{PC}, the set of points \langle y \rangle connected to \langle x \rangle by a single roll is the annihilator

\{ y \in \mathbb{I} : y x = 0 \}

where \mathbb{I} is the space of imaginary split octonions.

This sketch of the history is incomplete in many ways. As usual, history resembles a fractal: the more closely you study it, the more details you see! If you want to dig deeper, I strongly recommend these:

• Ilka Agricola, Old and new on the exceptional group G2.

• Robert Bryant, Élie Cartan and geometric duality.

This paper is also very helpful and fun:

• Aroldo Kaplan, Quaternions and octonions in mechanics.

It emphasizes the role that quaternions play in describing rotations, and the way an imaginary split octonion is built from an imaginary quaternion and a quaternion. And don’t forget this:

• Andrei Agrachev, Rolling balls and octonions.

Have fun!

