Network Theory (Part 28)

10 April, 2013

Last time I left you with some puzzles. One was to use the laws of electrical circuits to work out what this one does:

If we do this puzzle, and keep our eyes open, we’ll see an analogy between electrical circuits and classical mechanics! And this is the first of a huge set of analogies. The same math shows up in many different subjects, whenever we study complex systems made of interacting parts. So, it should become part of any general theory of networks.

This simple circuit is very famous: it’s called a series RLC circuit, because it has a resistor of resistance R, an inductor of inductance L, and a capacitor of capacitance C, all hooked up ‘in series’, meaning one after another. But to understand this circuit, it’s good to start with an even simpler one, where we leave out the voltage source:

This has three edges, so reading from top to bottom there are 3 voltages V_1, V_2, V_3, and 3 currents I_1, I_2, I_3, one for each edge. The white and black dots are called ‘nodes’, and the white ones are called ‘terminals’: current can flow in or out of those.

The voltages and currents obey a bunch of equations:

• Kirchhoff’s current law says the current flowing into each node that’s not a terminal equals the current flowing out:

I_1 = I_2 = I_3

• Kirchhoff’s voltage law says there are potentials \phi_0, \phi_1, \phi_2, \phi_3, one for each node, such that:

V_1 = \phi_0 - \phi_1

V_2 = \phi_1 - \phi_2

V_3 = \phi_2 - \phi_3

In this particular problem, Kirchhoff’s voltage law doesn’t say much, since we can always find potentials obeying this, given the voltages. But in other problems it can be important. And even here it suggests that the sum V_1 + V_2 + V_3 will be important; this is the ‘total voltage across the circuit’.

Next, we get one equation for each circuit element:

• The law for a resistor says:

V_1 = R I_1

• The law for an inductor says:

\displaystyle{ V_2 = L \frac{d I_2}{d t} }

• The law for a capacitor says:

\displaystyle{ I_3 = C \frac{d V_3}{d t} }

These are all our equations. What should we do with them? Since I_1 = I_2 = I_3, it makes sense to call all these currents simply I and solve for each voltage in terms of this. Here’s what we get:

V_1 = R I

\displaystyle{ V_2 = L \frac{d I}{d t} }

\displaystyle{ V_3 = C^{-1} \int I \, dt }

So, if we know the current flowing through the circuit we can work out the voltage across each circuit element!

Well, not quite: in the case of the capacitor we only know it up to a constant, since there’s a constant of integration. This may seem like a minor objection, but it’s worth taking seriously. The point is that the charge on the capacitor’s plate is proportional to the voltage across the capacitor:

\displaystyle{V_3 = C^{-1} Q }

When electrons move on or off the plate, this charge changes, and we get a current:

\displaystyle{I = \frac{d Q}{d t} }

So, we can work out the time derivative of V_3 from the current I, but to work out V_3 itself we need the charge Q.

Treat these as definitions if you like, but they’re physical facts too! And they let us rewrite our trio of equations:

V_1 = R I

\displaystyle{ V_2 = L \frac{d I}{d t} }

\displaystyle{V_3 = C^{-1} \int I \, dt }

in terms of the charge, as follows:

V_1 = R \dot{Q}

V_2 = L \ddot{Q}

V_3 = C^{-1} Q

Then if we add these three equations, we get

V_1 + V_2 + V_3 = L \ddot Q + R \dot Q + C^{-1} Q

So, if we define the total voltage by

V = V_1 + V_2 + V_3 = \phi_0 - \phi_3

we get

L \ddot Q + R \dot Q + C^{-1} Q = V

And this is great!

Why? Because this equation is famous! If you’re a mathematician, you know it as the most general second-order linear ordinary differential equation with constant coefficients. But if you’re a physicist, you know it as the damped driven oscillator.
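Before moving on, here’s a minimal numerical sketch of this equation, just to make it concrete. Everything here (the component values, the driving voltage, the time span) is made up for illustration, and I’m using SciPy’s ODE solver:

```python
# Solve L Q'' + R Q' + Q/C = V(t) numerically for made-up component values.
import numpy as np
from scipy.integrate import solve_ivp

L, R, C = 1.0, 0.2, 1.0               # hypothetical inductance, resistance, capacitance
V = lambda t: np.cos(t)               # a driving voltage oscillating at frequency 1

def rhs(t, y):
    Q, I = y                          # charge Q and current I = dQ/dt
    return [I, (V(t) - R*I - Q/C) / L]

sol = solve_ivp(rhs, (0, 100), [0.0, 0.0], dense_output=True)
ts = np.linspace(90, 100, 11)
print(sol.sol(ts)[0])                 # after transients decay, Q oscillates at the driving frequency
```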

The analogy between electronics and mechanics

Here’s an example of a damped driven oscillator:

We’ve got an object hanging from a spring with some friction, and an external force pulling it down. Here the external force is gravity, so it’s constant in time, but we can imagine fancier situations where it’s not. So in a general damped driven oscillator:

• the object has mass m (and the spring is massless),

• the spring constant is k (this says how strong the spring force is),

• the damping coefficient is r (this says how much friction there is),

• the external force is F (in general a function of time).

Then Newton’s law says

m \ddot{q} + r \dot{q} + k q = F

And apart from the use of different letters, this is exactly like the equation for our circuit! Remember, that was

L \ddot Q + R \dot Q + C^{-1} Q = V

So, we get a wonderful analogy relating electronics and mechanics! It goes like this:

Electronics                 Mechanics
charge: Q                   position: q
current: I = \dot{Q}        velocity: v = \dot{q}
voltage: V                  force: F
inductance: L               mass: m
resistance: R               damping coefficient: r
inverse capacitance: 1/C    spring constant: k

If you understand mechanics, you can use this to get intuition about electronics… or vice versa. I’m more comfortable with mechanics, so when I see this circuit:

I imagine a current of electrons whizzing along, ‘forced’ by the voltage across the circuit, getting slowed by the ‘friction’ of the resistor, wanting to continue their motion thanks to the inertia or ‘mass’ of the inductor, and getting stuck on the plate of the capacitor, where their mutual repulsion pushes back against the flow of current—just like a spring fights back when you pull on it! This lets me know how the circuit will behave: I can use my mechanical intuition.

The only mildly annoying thing is that the inverse of the capacitance C is like the spring constant k. But this makes perfect sense. A capacitor is like a spring: you ‘pull’ on it with voltage and it ‘stretches’ by building up electric charge on its plate. If its capacitance is high, it’s like an easily stretchable spring. But this means the corresponding spring constant is low.

Besides letting us transfer intuition and techniques, the other great thing about analogies is that they suggest ways of extending themselves. For example, we’ve seen that current is the time derivative of charge. But if we hadn’t, we could still have guessed it, because current is like velocity, which is the time derivative of something important.

Similarly, force is analogous to voltage. But force is the time derivative of momentum! We don’t have momentum on our chart. Our chart is also missing the thing whose time derivative is voltage. This thing is called flux linkage, and sometimes denoted \lambda. So we should add this, and momentum, to our chart:

Electronics                 Mechanics
charge: Q                   position: q
current: I = \dot{Q}        velocity: v = \dot{q}
flux linkage: \lambda       momentum: p
voltage: V = \dot{\lambda}  force: F = \dot{p}
inductance: L               mass: m
resistance: R               damping coefficient: r
inverse capacitance: 1/C    spring constant: k

Fourier transforms

But before I get carried away talking about analogies, let’s try to solve the equation for our circuit:

L \ddot Q + R \dot Q + C^{-1} Q = V

This instantly tells us the voltage V as a function of time if we know the charge Q as a function of time. So, ‘solving’ it means figuring out Q if we know V. You may not care about Q—it’s the charge of the electrons stuck on the capacitor—but you should certainly care about the current I = \dot{Q}, and figuring out Q will get you that.

Besides, we’ll learn something good from solving this equation.

We could solve it using either the Laplace transform or the Fourier transform. They’re very similar. For some reason electrical engineers prefer the Laplace transform—does anyone know why? But I think the Fourier transform is conceptually preferable, slightly, so I’ll use that.

The idea is to write any function of time as a linear combination of oscillating functions \exp(i\omega t) with different frequencies \omega. More precisely, we write our function f as an integral

\displaystyle{ f(t) = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty \hat{f}(\omega) e^{i\omega t} \, d\omega }

Here the function \hat{f} is called the Fourier transform of f, and it’s given by

\displaystyle{ \hat{f}(\omega) = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty f(t) e^{-i\omega t} \, dt }

There is a lot one could say about this, but all I need right now is that differentiating a function has the effect of multiplying its Fourier transform by i\omega. To see this, we simply take the Fourier transform of \dot{f}:

\begin{array}{ccl}
\hat{\dot{f}}(\omega) &=& \displaystyle{ \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty \frac{df(t)}{dt} \, e^{-i\omega t} \, dt } \\ \\
&=& \displaystyle{ -\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty f(t) \frac{d}{dt} e^{-i\omega t} \, dt } \\ \\
&=& \displaystyle{ i\omega \; \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty f(t) e^{-i\omega t} \, dt } \\ \\
&=& i\omega \hat{f}(\omega)
\end{array}

where in the second step we integrate by parts. So,

\hat{\dot{f}}(\omega) = i\omega \hat{f}(\omega)
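If you like checking such things numerically, here’s a small sketch that tests this property with the convention above, using a Gaussian as the test function; the test function and frequencies are arbitrary choices:

```python
# Check that the Fourier transform of df/dt is i*omega times that of f,
# for f(t) = exp(-t^2/2), approximating the integrals by Riemann sums.
import numpy as np

t  = np.linspace(-20, 20, 4001)
dt = t[1] - t[0]
f  = np.exp(-t**2 / 2)
df = -t * np.exp(-t**2 / 2)           # exact derivative of f

def hat(g, omega):
    return np.sum(g * np.exp(-1j * omega * t)) * dt / np.sqrt(2 * np.pi)

for omega in [0.5, 1.0, 2.0]:
    # both sides agree up to the accuracy of the numerical integration
    print(abs(hat(df, omega) - 1j * omega * hat(f, omega)))
```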

The Fourier transform is linear, too, so we can start with our differential equation:

L \ddot Q + R \dot Q + C^{-1} Q = V

and take the Fourier transform of each term, getting

\displaystyle{ \left((i\omega)^2 L + (i\omega) R + C^{-1}\right) \hat{Q}(\omega) = \hat{V}(\omega) }

We can now solve for the charge in a completely painless way:

\displaystyle{  \hat{Q}(\omega) =  \frac{1}{((i\omega)^2 L + (i\omega) R + C^{-1})} \, \hat{V}(\omega) }

Well, we actually solved for \hat{Q} in terms of \hat{V}. But if we’re good at taking Fourier transforms, this is good enough. And it has a deep inner meaning.

To see its inner meaning, note that the Fourier transform of an oscillating function \exp(i \omega_0 t) is a delta function at the frequency \omega = \omega_0. This says that this oscillating function is purely of frequency \omega_0, like a laser beam of one pure color, or a sound of one pure pitch.

Actually there’s a little fudge factor due to how I defined the Fourier transform: if

f(t) = e^{i\omega_0 t}

then

\displaystyle{ \hat{f}(\omega) = \sqrt{2 \pi} \, \delta(\omega - \omega_0) }

But it’s no big deal. (You can define your Fourier transform so the 2\pi doesn’t show up here, but it’s bound to show up somewhere.)

Also, you may wonder how the complex numbers got into the game. What would it mean to say the voltage is \exp(i \omega t)? The answer is: don’t worry, everything in sight is linear, so we can take the real or imaginary part of any equation and get one that makes physical sense.

Anyway, what does our relation

\displaystyle{  \hat{Q}(\omega) =  \frac{1}{((i\omega)^2 L + (i\omega) R + C^{-1})} \hat{V}(\omega) }

mean? It means that if we put an oscillating voltage of frequency \omega_0 across our circuit, like this:

V(t) = e^{i \omega_0 t}

then we’ll get an oscillating charge at the same frequency, like this:

\displaystyle{  Q(t) =  \frac{1}{((i\omega_0)^2 L + (i\omega_0) R + C^{-1})}  e^{i \omega_0 t}  }

To see this, just use the fact that the Fourier transform of \exp(i \omega_0 t) is essentially a delta function at \omega_0, and juggle the equations appropriately!

But the magnitude and phase of this oscillating charge Q(t) depend on the function

\displaystyle{ \frac{1}{((i\omega_0)^2 L + (i\omega_0) R + C^{-1})}  }

For example, Q(t) will be big when \omega_0 is near a pole of this function! We can use this to study the resonant frequency of our circuit.

The same idea works for many more complicated circuits, and other things too. The function up there is an example of a transfer function: it describes the response of a linear, time-invariant system to an input of a given frequency. Here the ‘input’ is the voltage and the ‘response’ is the charge.
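For concreteness, here’s a sketch that scans the magnitude of this transfer function over frequencies, with made-up component values; the response peaks near the undamped natural frequency 1/\sqrt{LC}:

```python
# Scan |1 / ((i*w)^2 L + (i*w) R + 1/C)| over frequencies and locate the peak.
import numpy as np

L, R, C = 1.0, 0.1, 1.0               # hypothetical component values
w = np.linspace(0.01, 3.0, 100000)
H = 1 / ((1j*w)**2 * L + (1j*w)*R + 1/C)

print(w[np.argmax(abs(H))])           # ~0.9975 for these values
print(1 / np.sqrt(L * C))             # the undamped natural frequency, 1.0
```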

Impedance

Taking this idea to its logical conclusion, we can see inductors and capacitors as resistors with a frequency-dependent, complex-valued resistance! This generalized resistance is called ‘impedance’. Let’s see how it works.

Suppose we have an electrical circuit. Consider any edge e of this circuit:

• If our edge e is labelled by a resistor of resistance R:

then

V_e = R I_e

Taking Fourier transforms, we get

\hat{V}_e = R \hat{I}_e

so nothing interesting here: our resistor acts like a resistor of resistance R no matter what the frequency of the voltage and current are!

• If our edge e is labelled by an inductor of inductance L:

then

\displaystyle{ V_e = L \frac{d I_e}{d t} }

Taking Fourier transforms, we get

\hat{V}_e = (i\omega L) \hat{I}_e

This is interesting: our inductor acts like a resistor of resistance i \omega L when the frequency of the current and voltage is \omega. So, we say the ‘impedance’ of the inductor is i \omega L.

• If our edge e is labelled by a capacitor of capacitance C:

we have

\displaystyle{ I_e = C \frac{d V_e}{d t} }

Taking Fourier transforms, we get

\hat{I}_e = (i\omega C) \hat{V}_e

or

\displaystyle{ \hat{V}_e = \frac{1}{i \omega C} \hat{I}_e }

So, our capacitor acts like a resistor of resistance 1/(i \omega C) when the frequency of the current and voltage is \omega. We say the ‘impedance’ of the capacitor is 1/(i \omega C).

It doesn’t make sense to talk about the impedance of a voltage source or current source, since these circuit elements don’t give a linear relation between voltage and current. But whenever an element is linear and its properties don’t change with time, the Fourier transformed voltage will be some function of frequency times the Fourier transformed current. And in this case, we call that function the impedance of the element. The symbol for impedance is Z, so we have

\hat{V}_e(\omega) = Z(\omega) \hat{I}_e(\omega)

or

\hat{V}_e = Z \hat{I}_e

for short.
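One consequence worth noting: since the same current flows through every element of a series circuit and the voltages add, the impedances of elements in series simply add. Here’s a minimal sketch of our RLC circuit viewed this way, again with made-up values:

```python
# The series RLC circuit as a single frequency-dependent resistor:
#     Z(w) = R + i*w*L + 1/(i*w*C)
def Z(w, R=0.1, L=1.0, C=1.0):
    return R + 1j*w*L + 1/(1j*w*C)

print(Z(1.0))   # at w = 1/sqrt(LC) the inductor and capacitor cancel, leaving just R
print(Z(0.1))   # at low frequencies the capacitor dominates: a large negative imaginary part
```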

The big picture

In case you’re getting lost in the details, here are the big lessons for today:

• There’s a detailed analogy between electronics and mechanics, which we’ll later extend to many other systems.

• The study of linear time-invariant elements can be reduced to the study of resistors if we generalize resistance to impedance by letting it be a complex-valued function instead of a real number.

One thing we’re doing is preparing for a general study of linear time-invariant open systems. We’ll use linear algebra, but the field—the number system in our linear algebra—will consist of complex-valued functions, rather than real numbers.

Puzzle

Let’s not forget our original problem:

This is closely related to the problem we just solved. All the equations we derived still hold! But if you do the math, or use some intuition, you’ll see the voltage source ensures that the voltage we’ve been calling V is a constant. So, the current I flowing around the wire obeys the same equation we got before:

L \ddot Q + R \dot Q + C^{-1} Q = V

where \dot Q = I. The only difference is that now V is constant.

Puzzle. Solve this equation for Q(t).

There are lots of ways to do this. You could use a Fourier transform, which would give a satisfying sense of completion to this blog article. Or, you could do it some other way.


The Planck Mission

22 March, 2013

Yesterday, the Planck Mission released a new map of the cosmic microwave background radiation:

380,000 years after the Big Bang, the Universe cooled down enough for protons and electrons to settle down and combine into hydrogen atoms. Protons and electrons are charged, so back when they were freely zipping around, no light could go very far without getting absorbed and then re-radiated. When they combined into neutral hydrogen atoms, the Universe soon switched to being almost transparent… as it is today. So the light emitted from that time is still visible now!

And it would look like this picture here… if you could see microwaves.

When this light was first emitted, it would have looked white to our eyes, since the temperature of the Universe was about 4000 kelvin. That’s the temperature when half the hydrogen atoms split apart into electrons and protons. 4200 kelvin looks like a fluorescent light; 2800 kelvin like an incandescent bulb, rather yellow.

But as the Universe expanded, this light got stretched out to orange, red, infrared… and finally a dim microwave glow, invisible to human eyes. The average temperature of this glow is very close to absolute zero, but it’s been measured very precisely: 2.725 kelvin.
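Since the temperature of blackbody radiation scales inversely with the expansion of the Universe, this tells us roughly how much space has stretched since that light was emitted: by a factor of about 4000/2.725 ≈ 1470.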

But the temperature of the glow is not the same in every direction! There are tiny fluctuations! You can see them in this picture. The colors here span a range of ±0.0002 kelvin.

These fluctuations are very important, because they were later amplified by gravity, with denser patches of gas collapsing under their own gravitational attraction (thanks in part to dark matter), and becoming even denser… eventually leading to galaxies, stars and planets, you and me.

But where did these fluctuations come from? I suspect they started life as quantum fluctuations in an originally completely homogeneous Universe. Quantum mechanics takes quite a while to explain – but in this theory a situation can be completely symmetrical, yet when you measure it, you get an asymmetrical result. The universe is then a ‘sum’ of worlds where these different results are seen. The overall universe is still symmetrical, but each observer sees just a part: an asymmetrical part.

If you take this seriously, there are other worlds where fluctuations of the cosmic microwave background radiation take all possible patterns… and form galaxies in all possible patterns. So while the universe as we see it is asymmetrical, with galaxies and stars and planets and you and me arranged in a complicated and seemingly arbitrary way, the overall universe is still symmetrical – perfectly homogeneous!

That seems very nice to me. But the great thing is, we can learn more about this, not just by chatting, but by testing theories against ever more precise measurements. The Planck Mission is a great improvement over the Wilkinson Microwave Anisotropy Probe (WMAP), which in turn was a huge improvement over the Cosmic Background Explorer (COBE):

Here is some of what they’ve learned:

• It now seems the Universe is 13.82 ± 0.05 billion years old. This is a bit higher than the previous estimate of 13.77 ± 0.06 billion years, due to the Wilkinson Microwave Anisotropy Probe.

• It now seems the rate at which the universe is expanding, known as Hubble’s constant, is 67.15 ± 1.2 kilometers per second per megaparsec. A megaparsec is roughly 3 million light-years. This is less than earlier estimates using space telescopes, such as NASA’s Spitzer and Hubble.

• It now seems the fraction of mass-energy in the Universe in the form of dark matter is 26.8%, up from 24%. Dark energy is now estimated at 68.3%, down from 71.4%. And normal matter is now estimated at 4.9%, up from 4.6%.

These cosmological parameters, and a bunch more, are estimated here:

Planck 2013 results. XVI. Cosmological parameters.

It’s amazing how we’re getting more and more accurate numbers for these basic facts about our world! But the real surprises lie elsewhere…

A lopsided universe, with a cold spot?


The Planck Mission found two big surprises in the cosmic microwave background:

• This radiation is slightly different on opposite sides of the sky! This is not due to the fact that the Earth is moving relative to the average position of galaxies. That fact does make the radiation look hotter in the direction we’re moving. But that produces a simple pattern called a ‘dipole moment’ in the temperature map. If we subtract that out, it seems there are real differences between two sides of the Universe… and they are complex, interesting, and not explained by the usual theories!

• There is a cold spot that seems too big to be caused by chance. If this is for real, it’s the largest thing in the Universe.

Could these anomalies be due to experimental errors, or errors in data analysis? I don’t know! They were already seen by the Wilkinson Microwave Anisotropy Probe; for example, here is WMAP’s picture of the cold spot:

The Planck Mission seems to be seeing them more clearly with its better measurements. Paolo Natoli, from the University of Ferrara, writes:

The Planck data call our attention to these anomalies, which are now more important than ever: with data of such quality, we can no longer neglect them as mere artefacts and we must search for an explanation. The anomalies indicate that something might be missing from our current understanding of the Universe. We need to find a model where these peculiar traits are no longer anomalies but features predicted by the model itself.

For a lot more detail, see this paper:

Planck 2013 results. XXIII. Isotropy and statistics of the CMB.

(I apologize for not listing the authors on these papers, but there are hundreds!) Let me paraphrase the abstract for people who want just a little more detail:

Many of these anomalies were previously observed in the Wilkinson Microwave Anisotropy Probe data, and are now confirmed at similar levels of significance (around 3 standard deviations). However, we find little evidence for non-Gaussianity with the exception of a few statistical signatures that seem to be associated with specific anomalies. In particular, we find that the quadrupole-octopole alignment is also connected to a low observed variance of the cosmic microwave background signal. The dipolar power asymmetry is now found to persist to much smaller angular scales, and can be described in the low-frequency regime by a phenomenological dipole modulation model. Finally, it is plausible that some of these features may be reflected in the angular power spectrum of the data which shows a deficit of power on the same scales. Indeed, when the power spectra of two hemispheres defined by a preferred direction are considered separately, one shows evidence for a deficit in power, whilst its opposite contains oscillations between odd and even modes that may be related to the parity violation and phase correlations also detected in the data. Whilst these analyses represent a step forward in building an understanding of the anomalies, a satisfactory explanation based on physically motivated models is still lacking.

If you’re a scientist, your mouth should be watering now… your tongue should be hanging out! If this stuff holds up, it’s amazing, because it would call for real new physics.

I’ve heard that the difference between hemispheres might fit the simplest homogeneous but not isotropic solutions of general relativity, the Bianchi models. However, this is something one should carefully test using statistics… and I’m sure people will start doing this now.

As for the cold spot, the best explanation I can imagine is some sort of mechanism for producing fluctuations very early on… so that these fluctuations would get blown up to enormous size during the inflationary epoch, roughly between 10^{-36} and 10^{-32} seconds after the Big Bang. I don’t know what this mechanism would be!

There are also ways of trying to ‘explain away’ the cold spot, but even these seem jaw-droppingly dramatic. For example, an almost empty region 150 megaparsecs (500 million light-years) across would tend to cool down cosmic microwave background radiation coming through it. But it would still be the largest thing in the Universe! And such an unusual void would seem to beg for an explanation of its own.

Particle physics

The Planck Mission also shed a lot of light on particle physics, and especially on inflation. But, it mainly seems to have confirmed what particle physicists already suspected! This makes them rather grumpy, because these days they’re always hoping for something new, and they’re not getting it.

We can see this at Jester’s blog Résonaances, which also gives a very nice, though technical, summary of what the Planck Mission did for particle physics:

From a particle physicist’s point of view the single most interesting observable from Planck is the notorious N_{\mathrm{eff}}. This observable measures the effective number of degrees of freedom with sub-eV mass that coexisted with the photons in the plasma at the time when the CMB was formed (see e.g. my older post for more explanations). The standard model predicts N_{\mathrm{eff}} \approx 3, corresponding to the 3 active neutrinos. Some models beyond the standard model featuring sterile neutrinos, dark photons, or axions could lead to N_{\mathrm{eff}} > 3, not necessarily an integer. For a long time various experimental groups have claimed N_{\mathrm{eff}} much larger than 3, but with an error too large to blow the trumpets. Planck was supposed to sweep the floor and it did. They find

N_{\mathrm{eff}} = 3 \pm 0.5,

that is, no hint of anything interesting going on. The gurgling sound you hear behind the wall is probably your colleague working on sterile neutrinos committing a ritual suicide.

Another number of interest for particle theorists is the sum of neutrino masses. Recall that oscillation experiments tell us only about the mass differences, whereas the absolute neutrino mass scale is still unknown. Neutrino masses larger than 0.1 eV would produce an observable imprint into the CMB. [….] Planck sees no hint of neutrino masses and puts the 95% CL limit at 0.23 eV.

Literally, the most valuable Planck result is the measurement of the spectral index n_s, as it may tip the scale for the Nobel committee to finally hand out the prize for inflation. Simplest models of inflation (e.g., a scalar field φ with a φ^n potential slowly changing its vacuum expectation value) predict the spectrum of primordial density fluctuations that is adiabatic (the same in all components) and Gaussian (full information is contained in the 2-point correlation function). Much as previous CMB experiments, Planck does not see any departures from that hypothesis. A more quantitative prediction of simple inflationary models is that the primordial spectrum of fluctuations is almost but not exactly scale-invariant. More precisely, the spectrum is of the form

\displaystyle{ P \sim (k/k_0)^{n_s-1} }

with n_s close to but typically slightly smaller than 1, the size of n_s being dependent on how quickly (i.e. how slowly) the inflaton field rolls down its potential. The previous result from WMAP-9,

n_s=0.972 \pm 0.013

(n_s =0.9608 \pm 0.0080 after combining with other cosmological observables) was already a strong hint of a red-tilted spectrum. The Planck result

n_s = 0.9603 \pm 0.0073

(n_s =0.9608 \pm 0.0054 after combination) pushes the departure of n_s - 1 from zero past the magic 5 sigma significance. This number can of course also be fitted in more complicated models or in alternatives to inflation, but it is nevertheless a strong support for the most trivial version of inflation.

[….]

In summary, the cosmological results from Planck are really impressive. We’re looking into a pretty wide range of complex physical phenomena occurring billions of years ago. And, at the end of the day, we’re getting a perfect description with a fairly simple model. If this is not a moment to cry out “science works bitches”, nothing is. Particle physicists, however, can find little inspiration in the Planck results. For us, what Planck has observed is by no means an almost perfect universe… it’s rather the most boring universe.

I find it hilarious to hear someone complain that the universe is “boring” on a day when astrophysicists say they’ve discovered the universe is lopsided and has a huge cold region, the largest thing ever seen by humans!

However, particle physicists seem so far rather skeptical of these exciting developments. Is this sour grapes, or are they being wisely cautious?

Time, as usual, will tell.


Centre for Quantum Mathematics and Computation

6 March, 2013

This fall they’re opening a new Centre for Quantum Mathematics and Computation at Oxford University. They’ll be working on diagrammatic methods for topology and quantum theory, quantum gravity, and computation. You’ll understand what this means if you know the work of the people involved:

• Samson Abramsky
• Bob Coecke
• Christopher Douglas
• Kobi Kremnitzer
• Steve Simon
• Ulrike Tillmann
• Jamie Vicary

All these people are already at Oxford, so you may wonder what’s new about this center. I’m not completely sure, but they’ve gotten money from EPSRC (roughly speaking, the British NSF), and they’re already hiring a postdoc. Applications are due on March 11, so hurry up if you’re interested!

They’re having a conference October 1st to 4th to start things off. I’ll be speaking there, and they tell me that Steve Awodey, Alexander Beilinson, Lucien Hardy, Martin Hyland, Chris Isham, Dana Scott, and Anton Zeilinger have been invited too.

I’m really looking forward to seeing Chris Isham, since he’s one of the most honest and critical thinkers about quantum gravity and the big difficulties we have in understanding this subject—and he has trouble taking airplane flights, so it’s been a long time since I’ve seen him. It’ll also be great to see all the other people I know, and meet the ones I don’t.

For example, back in the 1990’s, I used to spend summers in Cambridge talking about n-categories with Martin Hyland and his students Eugenia Cheng, Tom Leinster and Aaron Lauda (who had been an undergraduate at U.C. Riverside). And more recently I’ve been talking a lot with Jamie Vicary about categories and quantum computation—since he was in Singapore some of the time while I was there. (Indeed, I’m going back there this summer, and so will he.)

I’m not as big on n-categories and quantum gravity as I used to be, but I’m still interested in the foundations of quantum theory and how it’s connected to computation, so I think I can give a talk with some new ideas in it.


Black Holes and the Golden Ratio

28 February, 2013


The golden ratio shows up in the physics of black holes!

Or does it?

Most things get hotter when you put more energy into them. But systems held together by gravity often work the other way. For example, when a red giant star runs out of fuel and collapses, its energy goes down but its temperature goes up! We say these systems have a negative specific heat.

The prime example of a system held together by gravity is a black hole. Hawking showed—using calculations, not experiments—that a black hole should not be perfectly black. It should emit ‘Hawking radiation’. So it should have a very slight glow, as if it had a temperature above zero. For a black hole the mass of the Sun this temperature would be just 6 × 10^{-8} kelvin.

This is absurdly chilly, much colder than the microwave background radiation left over from the Big Bang. So in practice, such a black hole will absorb stuff—stars, nearby gas and dust, starlight, microwave background radiation, and so on—and grow bigger. But if we could protect it from all this stuff, and put it in a very cold box, it would slowly shrink by emitting radiation and losing energy, and thus mass. As it lost energy, its temperature would go up. The less energy it has, the hotter it gets: a negative specific heat! Eventually, as it shrinks to nothing, it should explode in a very hot blast.

But for a spinning black hole, things are more complicated. If it spins fast enough, its specific heat will be positive, like a more ordinary object.

And according to a 1989 paper by Paul Davies, the transition to positive specific heat happens at a point governed by the golden ratio! He claimed that in units where the speed of light and gravitational constant are 1, it happens when

\displaystyle{  \frac{J^2}{M^4} = \frac{\sqrt{5} - 1}{2}  }

Here J is the black hole’s angular momentum, M is its mass, and

\displaystyle{ \frac{\sqrt{5} - 1}{2} = 0.6180339\dots }

is a version of the golden ratio! This is for black holes with no electric charge.

Unfortunately, this claim is false. Cesar Uliana, who just did a master’s thesis on black hole thermodynamics, pointed this out in the comments below after I posted this article.

And curiously, twelve years before writing this paper with the mistake in it, Davies wrote a paper that got the right answer to the same problem! It’s even mentioned in the abstract.

The correct constant is not the golden ratio! The correct constant is smaller:

\displaystyle{ 2 \sqrt{3} - 3 = 0.46410161513\dots }

However, Greg Egan figured out the nature of Davies’ slip, and thus discovered how the golden ratio really does show up in black hole physics… though in a more quirky and seemingly less significant way.

As usually defined, the specific heat of a rotating black hole measures the change in internal energy per change in temperature while angular momentum is held constant. But Davies looked at the change in internal energy per change in temperature while the ratio of angular momentum to mass is held constant. It’s this modified quantity that switches from positive to negative when

\displaystyle{  \frac{J^2}{M^4} = \frac{\sqrt{5} - 1}{2} }

In other words:

Suppose we gradually add mass and angular momentum to a black hole while not changing the ratio of angular momentum, J, to mass, M. Then J^2/M^4 gradually drops. As this happens, the black hole’s temperature increases until

\displaystyle{ \frac{J^2}{M^4} = \frac{\sqrt{5} - 1}{2} }

in units where the speed of light and gravitational constant are 1. And then it starts dropping!

What does this mean? It’s hard to tell. It doesn’t seem very important, because it seems there’s no good physical reason for the ratio of J to M to stay constant. In particular, as a black hole shrinks by emitting Hawking radiation, this ratio goes to zero. In other words, the black hole spins down faster than it loses mass.

Popularizations

Discussions of black holes and the golden ratio can be found in a variety of places. Mario Livio is the author of The Golden Ratio, and also an astrophysicist, so it makes sense that he would be interested in this connection. He wrote about it here:

• Mario Livio, The golden ratio and astronomy, Huffington Post, 22 August 2012.

Marcus Chown, the main writer on cosmology for New Scientist, talked to Livio and wrote about it here:

• Marcus Chown, The golden rule, The Guardian, 15 January 2003.

Chown writes:

Perhaps the most surprising place the golden ratio crops up is in the physics of black holes, a discovery made by Paul Davies of the University of Adelaide in 1989. Black holes and other self-gravitating bodies such as the sun have a “negative specific heat”. This means they get hotter as they lose heat. Basically, loss of heat robs the gas of a body such as the sun of internal pressure, enabling gravity to squeeze it into a smaller volume. The gas then heats up, for the same reason that the air in a bicycle pump gets hot when it is squeezed.

Things are not so simple, however, for a spinning black hole, since there is an outward “centrifugal force” acting to prevent any shrinkage of the hole. The force depends on how fast the hole is spinning. It turns out that at a critical value of the spin, a black hole flips from negative to positive specific heat—that is, from growing hotter as it loses heat to growing colder. What determines the critical value? The mass of the black hole and the golden ratio!

Why is the golden ratio associated with black holes? “It’s a complete enigma,” Livio confesses.

Extremal black holes

As we’ve seen, a rotating uncharged black hole has negative specific heat whenever the angular momentum is below a certain critical value:

\displaystyle{ J < k M^2 }

where

\displaystyle{ k = \sqrt{2 \sqrt{3} - 3} = 0.68125003863\dots }

As J goes up to this critical value, the specific heat actually approaches -\infty! On the other hand, a rotating uncharged black hole has positive specific heat when

\displaystyle{  J > kM^2}

and as J goes down to this critical value, the specific heat approaches +\infty. So, there’s some sort of ‘phase transition’ at

\displaystyle{  J = k M^2 }

But as we make the black hole spin even faster, something very strange happens when

\displaystyle{ J > M^2 }

Then the black hole gets a naked singularity!

In other words, its singularity is no longer hidden behind an event horizon. An event horizon is an imaginary surface such that if you cross it, you’re doomed to never come back out. As far as we know, all black holes in nature have their singularities hidden behind an event horizon. But if the angular momentum were too big, this would not be true!

A black hole posed right at the brink:

\displaystyle{ J = M^2 }

is called an ‘extremal’ black hole.

Black holes in nature

Most physicists believe it’s impossible for black holes to go beyond extremality. There are lots of reasons for this. But do any black holes seen in nature get close to extremality? For example, do any spin so fast that they have positive specific heat? It seems the answer is yes!

Over on Google+, Robert Penna writes:

Nature seems to have no trouble making black holes on both sides of the phase transition. The spins of about a dozen solar mass black holes have reliable measurements. GRS1915+105 is close to J=M^2. The spin of A0620-00 is close to J=0. GRO J1655-40 has a spin sitting right at the phase transition.

The spins of astrophysical black holes are set by a competition between accretion (which tends to spin things up to J=M^2) and jet formation (which tends to drain angular momentum). I don’t know of any astrophysical process that is sensitive to the black hole phase transition.

That’s really cool, but the last part is a bit sad! The problem, I suspect, is that Hawking radiation is so pathetically weak.

But by the way, you may have heard of this recent paper—about a supermassive black hole that’s spinning super-fast:

• G. Risaliti, F. A. Harrison, K. K. Madsen, D. J. Walton, S. E. Boggs, F. E. Christensen, W. W. Craig, B. W. Grefenstette, C. J. Hailey, E. Nardini, Daniel Stern and W. W. Zhang, A rapidly spinning supermassive black hole at the centre of NGC 1365, Nature 494 (2013), 449–451.

They estimate that this black hole has a mass about 2 million times that of our sun, and that

\displaystyle{ J \ge 0.84 \, M^2 }

with 90% confidence. If so, this is above the phase transition where it gets positive specific heat.

The nitty-gritty details

Here is where Paul Davies claimed the golden ratio shows up in black hole physics:

• Paul C. W. Davies, Thermodynamic phase transitions of Kerr-Newman black holes in de Sitter space, Classical and Quantum Gravity 6 (1989), 1909–1914.

He works out when the specific heat vanishes for rotating and/or charged black holes in a universe with a positive cosmological constant: so-called de Sitter space. The formula is pretty complicated. Then he set the cosmological constant \Lambda to zero. In this case de Sitter space flattens out to Minkowski space, and his black holes reduce to Kerr–Newman black holes: that is, rotating and/or charged black holes in an asymptotically Minkowskian spacetime. He writes:

In the limit \alpha \to 0 (that is, \Lambda \to 0), the cosmological horizon no longer exists: the solution corresponds to the case of a black hole in asymptotically flat spacetime. In this case r may be explicitly eliminated to give

(\beta + \gamma)^3 + \beta^2 -\beta - \frac{3}{4} \gamma^2  = 0.   \qquad (2.17)

Here

\beta = a^2 / M^2

\gamma = Q^2 / M^2

and he says M is the black hole’s mass, Q is its charge and a is its angular momentum. He continues:

For \beta = 0 (i.e. a = 0) equation (2.17) has the solution \gamma = 3/4, or

\displaystyle{ Q^2 = \frac{3}{4} M^2 } \qquad  (2.18)

For \gamma = 0 (i.e. Q = 0), equation (2.17) may be solved to give \beta = (\sqrt{5} - 1)/2 or

\displaystyle{ a^2 = (\sqrt{5} - 1)M^2/2 \cong 0.62 M^2   }  \qquad  (2.19)

These were the results first reported for the black-hole case in Davies (1979).

In fact a can’t be the angular momentum, since the right condition for a phase transition should say the black hole’s angular momentum is some constant times its mass squared. I think Davies really meant to define

a = J/M

This is important beyond the level of a mere typo, because we get different concepts of specific heat depending on whether we hold J or a constant while taking certain derivatives!

In the usual definition of specific heat for rotating black holes, we hold J constant and see how the black hole’s heat energy changes with temperature. If we call this specific heat C_J, we have

\displaystyle{ C_J = T \left.\frac{\partial S}{\partial T}\right|_J }

where S is the black hole’s entropy. This specific heat C_J becomes infinite when

\displaystyle{ \frac{J^2}{M^4} = 2 \sqrt{3} - 3  }
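If you don’t feel like grinding through the algebra, here’s a numerical sketch that checks this in units where G = c = 1. It uses the standard Kerr formulas r_+ = M + \sqrt{M^2 - (J/M)^2} and T = (r_+ - M)/(4 \pi M r_+), together with the fact that C_J blows up where \partial T/\partial M = 0 at fixed J; the root-finding bracket is chosen by hand:

```python
# Find where dT/dM = 0 at fixed J for a Kerr black hole, and compare
# J^2/M^4 there with 2*sqrt(3) - 3.
import numpy as np
from scipy.optimize import brentq

J = 1.0

def T(M):
    rp = M + np.sqrt(M**2 - (J/M)**2)       # outer horizon radius
    return (rp - M) / (4 * np.pi * M * rp)

dTdM = lambda M: (T(M + 1e-7) - T(M - 1e-7)) / 2e-7
Mc = brentq(dTdM, 1.05, 2.0)                # bracket chosen by hand for J = 1
print(J**2 / Mc**4, 2*np.sqrt(3) - 3)       # both are 0.46410...
```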

But if instead we hold a = J/M constant, we get something else—and this what Davies did! If we call this modified concept of specific heat C_a, we have

\displaystyle{ C_a = T \left.\frac{\partial S}{\partial T}\right|_a }

This modified ‘specific heat’ C_a becomes infinite when

\displaystyle{  \frac{J^2}{M^4} = \frac{\sqrt{5}-1}{2} }

After proving these facts in the comments below, Greg Egan drew some nice graphs to explain what’s going on. Here are the curves of constant temperature as a function of the black hole’s mass M and angular momentum J:

The dashed parabola passing through the peaks of the curves of constant temperature is where C_J becomes infinite. This is where energy can be added without changing the temperature, so long as it’s added in a manner that leaves J constant.

And here are the same curves of constant temperature, along with the parabola where C_a becomes infinite:

This new dashed parabola intersects each curve of constant temperature at the point where the tangent to this curve passes through the origin: that is, where the tangent is a line of constant a=J/M. This is where energy and angular momentum can be added to the hole in a manner that leaves a constant without changing the temperature.

As mentioned, Davies correctly said when the ordinary specific heat C_J becomes infinite in another paper, eleven years earlier:

• Paul C. W. Davies, Thermodynamics of black holes, Rep. Prog. Phys. 41 (1978), 1313–1355.

You can see his answer on page 1336.

This 1978 paper, in turn, is a summary of previous work including an article from a year earlier:

• Paul C. W. Davies, The thermodynamic theory of black holes, Proc. Roy. Soc. Lond. A 353 (1977), 499–521.

And in the abstract of this earlier article, Davies wrote:

The thermodynamic theory underlying black-hole processes is developed in detail and applied to model systems. It is found that Kerr-Newman black holes undergo a phase transition at an angular-momentum mass ratio of 0.68M or an electric charge (Q) of 0.86M, where the heat capacity has an infinite discontinuity. Above the transition values the specific heat is positive, permitting isothermal equilibrium with a surrounding heat bath.

Here the number 0.68 is showing up because

\displaystyle{ \sqrt{ 2 \sqrt{3} - 3 } = 0.68125003863\dots }

The number 0.86 is showing up because

\displaystyle{ \sqrt{ \frac{3}{4} } = 0.86602540378\dots }
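Here’s a quick numerical check of these constants, together with the golden-ratio root of equation (2.17) when \gamma = 0:

```python
import numpy as np

# Setting gamma = 0 turns equation (2.17) into beta^3 + beta^2 - beta = 0,
# whose roots are 0, -1.618... and 0.618... (the golden ratio constant).
print(np.roots([1, 1, -1, 0]))

print(np.sqrt(2*np.sqrt(3) - 3))   # 0.68125..., the J/M^2 ratio in the 1977 abstract
print(np.sqrt(3/4))                # 0.86602..., the Q/M ratio there
```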

By the way, just in case you want to do some computations using experimental data, let me put the speed of light c and gravitational constant G back in the formulas. A rotating (uncharged) black hole is extremal when

\displaystyle{ c J = G M^2 }

A Bet Concerning Neutrinos (Part 5)

7 January, 2013

It’s a little-known spinoff of Heisenberg’s uncertainty principle. When you accurately measure the velocity of neutrinos, they can turn into ham!

I observed this myself. It came in the mail along with some sausages, bacon, and peach and blueberry syrup. They’re from Heather Vandagriff. Thanks, Heather!

These are the first of my winnings on some bets concerning the famous OPERA experiment that seemed to detect neutrinos going faster than light. I bet that this experiment would be shown wrong. Heather bet me some Tennessee ham against some nice cloth from Singapore.

The OPERA team announced that they’d detected faster-than-light neutrinos back in September 2011. But later, they discovered two flaws in their experimental setup.

First, a fiber optic cable wasn’t screwed in right. This made it take about 70 nanoseconds longer than it should have for a signal from a global positioning system to the so-called ‘master clock’:

Since the clock got its signal late, the neutrinos seemed to show up early. Click on the picture for a more detailed explanation.

On top of this, the clock was poorly calibrated! This had a roughly opposite effect: it tended to make the neutrinos seem to show up late… but only some of the time. However, this effect was not big enough, on average, to cancel the other mistake.

The OPERA team fixed these problems and repeated the experiment in May 2012. The neutrinos came in slower than light:

• OPERA, Measurement of the neutrino velocity with the OPERA detector in the CNGS beam, 12 July 2012.

Three other experiments using the same neutrino source—Borexino, ICARUS, and LVD—also got the same result! For a more detailed post-mortem, with lots of references, see:

• Faster-than-light neutrino anomaly, Wikipedia.

My wife Lisa has a saying from her days in the computer business: when in doubt, check the cables.


Rolling Circles and Balls (Part 5)

2 January, 2013

Last time I promised to show you how the problem of a little ball rolling on a big stationary ball can be described using an 8-dimensional number system called the split octonions… if the big ball has a radius that’s 3 times the radius of the little one!

So, let’s get started.

First, I must admit that I lied.

Lying is an important pedagogical technique. The teacher simplifies the situation, so the student doesn’t get distracted by technicalities. Then later—and this is crucial!—the teacher admits that certain statements weren’t really true, and corrects them. It always makes me uncomfortable to do this. But it works better than dumping all the technical details on the students right away. In classes, I sometimes deal with my discomfort by telling the students: “Okay, now I’m going to lie a bit…”

What was my lie? Instead of an ordinary ball rolling on another ordinary ball, we need a ‘spinorial’ ball rolling on a ‘projective’ ball.

Let me explain that.

A spinorial ball

In physics, a spinor is a kind of particle that you need to turn around twice before it comes back to the way it was. Examples include electrons and protons.

If you give one of these particles a full 360° turn, which you can do using a magnetic field, it changes in a very subtle way. You can only detect this change using clever tricks. For example, take a polarized beam of electrons and send it through a barrier with two slits cut out. Each electron goes through both slits, because it’s a wave as well as a particle. Next, put a magnetic field next to one slit that’s precisely strong enough to rotate the electron by 360° if it goes through that slit. Then, make the beams recombine, and see how likely it is for electrons to be found at different locations. You’ll get different results than if you turn off the magnetic field that rotates the electron!

However, if you rotate a spinor by 720°—that is, two full turns—it comes back to exactly the way it was.

This may seem very odd, but when you understand the math of spinors you see it all makes sense. It’s a great example of how you have to follow the math where it leads you. If something is mathematically allowed, nature may take advantage of that possibility, regardless of whether it seems odd to you.

So, I hope you can imagine a ‘spinorial’ ball, which changes subtly when you turn it 360° around any axis, but comes back to its original orientation when you turn it around 720°. If you can’t, I’ll present the math more rigorously later on. That may or may not help.

A projective ball

What’s a ‘projective’ ball? It’s a ball whose surface is not a sphere, but a projective plane. A projective plane is a sphere that’s been modified so that diametrically opposite points count as the same point. The north pole is the same as the south pole, and so on!

In geography, the point diametrically opposite to some point on the Earth’s surface is called its antipodes, so let’s use that term. There’s a website that lets you find the antipodes of any place on Earth. Unfortunately the antipodes of most famous places are under water! But the antipodes of Madrid is in New Zealand, near Wellington:

When we roll a little ball on a big ‘projective’ ball, when the little ball reaches the antipodes of where it started, it counts as being back to its original location.

If you find this hard to visualize, imagine rolling two indistinguishable little balls on the big ball, that are always diametrically opposite each other. When one little ball rolls to the antipodes of where it started, the other one has taken its place, and the situation looks just like when you started!

A spinorial ball on a projective ball

Now let’s combine these ideas. Imagine a little spinorial ball rolling on a big projective ball. You need to turn the spinorial ball around twice to make it come back to its original orientation. But you only need to roll it halfway around the projective ball for it to come back to its original location.

These effects compensate for each other to some extent. The first makes it twice as hard to get back to where you started. The second makes it twice as easy!

But something really great happens when the big ball is 3 times as big as the little one. And that’s what I want you to understand.

For starters, consider an ordinary ball rolling on another ordinary ball that’s the same size. How many times does the rolling ball turn as it makes a round trip around the stationary one? If you watch this you can see the answer:


Follow the line drawn on the little ball. It turns around not once, but twice!

Next, consider one ball rolling on another whose radius is 2 times as big. How many times does the rolling ball turn as it makes a round trip?

It turns around 3 times.

And this pattern continues! I don’t have animations proving it, so either take my word for it, read our paper, or show it yourself.
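(One way to see the pattern: the center of the little ball traces a circle of radius R + r, where R is the big ball’s radius and r the little one’s. Rolling without slipping, the little ball turns once for every distance 2\pi r its center travels, so one round trip gives 2\pi(R+r)/2\pi r = R/r + 1 turns.)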

In particular, a ball rolling on a ball whose radius is 3 times as big will turn 4 times as it makes a round trip.

So, by the time the little ball rolls halfway around the big one, it will have turned around twice!

But now suppose it’s a spinorial ball rolling on a projective ball. This is perfect. Now when the little ball goes halfway around the big ball, it returns to its original location! And turning the little ball around twice gets it back to its original orientation!

So, there is something very neat about a spinorial ball rolling on a projective ball whose radius is exactly 3 times as big. And this is just the start. Now the split octonions get involved!

The rolling ball geometry

The key is to ponder a curious sort of geometry, which I’ll call the rolling ball geometry. This has ‘points’ and ‘lines’ which are defined in a funny way.

A point is any way a little spinorial ball can touch a projective ball that is 3 times as big. The lines are certain sets of points. A line consists of all the points we reach as the little ball rolls along some great circle on the big one, without slipping or twisting.

Of course these aren’t ‘points’ and ‘lines’ in the usual sense. But ever since the late 1800s, when mathematicians got excited about projective geometry—which is the geometry of the projective plane—we’ve enjoyed studying all sorts of strange variations on Euclidean geometry, with weirdly defined ‘points’ and ‘lines’. The rolling ball geometry fits very nicely into this tradition.

But the amazing thing is that we can describe points and lines of the rolling ball geometry in a completely different way, using the split octonions.

Split octonions

How does it work? As I said last time, the split octonions are an 8-dimensional number system. We build them as follows. We start with the ordinary real numbers. Then we throw in 3 square roots of -1, called i, j, and k, obeying

ij = -ji = k
jk = -kj = i
ki = -ik = j

At this point we have a famous 4-dimensional number system called the quaternions. Quaternions are numbers like

a + bi + cj + dk

where a,b,c,d are real numbers and i, j, k are the square roots of -1 we just created.

To build the octonions, we would now throw in another square root of -1. But to build the split octonions, we instead throw in a square root of +1. Let’s call it \ell. The hard part is saying what rules it obeys when we start multiplying it with other numbers in our system.

For starters, we get three more numbers \ell i, \ell j, \ell k. We decree these to be square roots of +1. But what happens when we multiply these with other things? For example, what is \ell i times j, and so on?

Since I don’t want to bore you, I’ll just get this over with quickly by showing you the multiplication table:

This says that \ell i (read down) times j (read across) is -\ell k, and so on.

Of course, this table is completely indigestible. I could never remember it, and you shouldn’t try. This is not the good way to explain how to multiply split octonions! It’s the lazy way. To really work with the split octonions you need a more conceptual approach, which John Huerta and I explain in our paper. But this is just a quick tour… so, on with the tour!

A split octonion is any number like

a + bi + cj + dk + e \ell + f \ell i + g \ell j + h \ell k

where a,b,c,d,e,f,g,h are real numbers. Since it takes 8 real numbers to specify a split octonion, we say they’re an 8-dimensional number system. But to describe the rolling ball geometry, we only need the imaginary split octonions, which are numbers like

x = bi + cj + dk + e \ell + f \ell i + g \ell j + h \ell k

The imaginary split octonions are 7-dimensional. 3 dimensions come from square roots of -1, while 4 come from square roots of 1.

We can use them to make up a far-out variant of special relativity: a universe with 3 time dimensions and 4 space dimensions! To do this, define the length of an imaginary split octonion x to be the number \|x\| with

\|x\|^2 = -b^2 - c^2 - d^2 + e^2 + f^2 + g^2 + h^2

This is a mutant version of the Pythagorean formula. The length \|x\| is real, in fact positive, for split octonions that point in the space directions. But it’s imaginary for those that point in the time directions!

This should not sound weird if you know special relativity. In special relativity we have spacelike vectors, whose length squared is positive, and timelike ones, whose length squared is negative.

If you don’t know special relativity—well, now you see how revolutionary Einstein’s ideas really are.

We also have vectors whose length squared is zero! These are called null. They’re also called lightlike, because light rays point along null vectors. In other words: light moves just as far in the space directions as it does in the time direction, so it’s poised at the brink between being spacelike and timelike.
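Here’s a tiny sketch of this quadratic form in code, with coefficient names matching the formula above, checking one timelike, one spacelike and one null example:

```python
# The signature (3,4) quadratic form on imaginary split octonions
#     x = b i + c j + d k + e l + f li + g lj + h lk
def norm_sq(b, c, d, e, f, g, h):
    return -b**2 - c**2 - d**2 + e**2 + f**2 + g**2 + h**2

print(norm_sq(1, 0, 0, 0, 0, 0, 0))   # i alone: -1, a 'time' direction
print(norm_sq(0, 0, 0, 1, 0, 0, 0))   # l alone: +1, a 'space' direction
print(norm_sq(1, 0, 0, 1, 0, 0, 0))   # i + l: 0, a null vector
```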

The punchline

I’m sure you’re wondering where all this is going. Luckily, we’re there. We can describe the rolling ball geometry using the imaginary split octonions! Let me state it and then chat about it:

Theorem. There is a one-to-one correspondence between points in the rolling ball geometry and light rays through the point 0 in the imaginary split octonions. Under this correspondence, lines in the rolling ball geometry correspond to planes containing the point 0 in the imaginary split octonions with the property that whenever x and y lie in this plane, then xy = 0.

Even if you don’t get this, you can see it’s describing the rolling ball geometry in terms of stuff about the split octonions. An immediate consequence is that any symmetry of the split octonions is a symmetry of the rolling ball geometry.

The symmetries of the split octonions form a group called ‘the split form of G2’. With more work, we can show the converse: any symmetry of the rolling ball geometry is a symmetry of the split octonions. So, the symmetry group of the rolling ball geometry is precisely the split form of G2.

So what?

Well, G2 is an ‘exceptional group’—one of five groups that were discovered only when mathematicians like Killing and Cartan systematically started trying to classify groups in the late 1800s. The exceptional groups didn’t fit in the lists of groups mathematicians already knew.

If, as Tim Gowers has argued, some math is invented while some is discovered, the exceptional groups were discovered. Finding them was like going to the bottom of the ocean and finding weird creatures you never expected. These groups were—and are—hard to understand! They have dry, technical sounding names: E6, E7, E8, F4, and G2. They’re important in string theory—but again, just because the structure of mathematics forces it, not because people wanted it.

The exceptional groups can all be described using the octonions, or split octonions. But the octonions are also rather hard to understand. The rolling ball geometry, on the other hand, is rather simple and easy to visualize. So, it’s a way of bringing some exotic mathematics a bit closer to ordinary life.

Well, okay—in ordinary life you’ve probably never thought about a spinorial ball rolling on a projective ball. But still: spinors and projective planes are far less exotic than split octonions and exceptional Lie groups. Any mathematician worth their salt knows about spinors and projective planes. They’re things that make plenty of sense.

I think now is a good time for most of you nonmathematicians to stop reading. I’ll leave off with a New Year’s puzzle:

Puzzle: Relative to the fixed stars, how many times does the Earth turn around its axis in a year?

Bye! It was nice seeing you!

The gory details

Still here? Cool. I want to wrap up by presenting the theorem in a more precise way, and then telling the detailed history of the rolling ball problem.

How can we specify a point in the rolling ball geometry? We need to say the location where the little ball touches the big ball, and we need to describe the ‘orientation’ of the little ball: that is, how it’s been rotated.

The point where the little ball touches the big one is just any point on the surface of the big ball. If the big ball were an ordinary ball this would be a point on the sphere, S^2. But since it’s a projective ball, we need a point on the projective plane, \mathbb{R}\mathrm{P}^2.

To describe the orientation of the little ball we need to say how it’s been rotated from some standard orientation. If the little ball were an ordinary ball we’d need to give an element of the rotation group, \mathrm{SO}(3). But since it’s a spinorial ball we need an element of the double cover of the rotation group, namely the special unitary group \mathrm{SU}(2).

So, the space of points in the rolling ball geometry is

X = \mathbb{R}\mathrm{P}^2 \times \mathrm{SU}(2)

This makes it easy to see how the imaginary split octonions get into the game. For starters, \mathrm{SU}(2) is isomorphic to the group of unit quaternions. We can define the absolute value of a quaternion in a way that copies the usual formula for complex numbers:

\displaystyle{ |a + bi + cj + dk| = \sqrt{a^2 + b^2 + c^2 + d^2} }

The great thing about the quaternions is that if we multiply two of them, their absolute values multiply. In other words, if p and q are two quaternions,

|pq| = |p| |q|

This implies that the quaternions with absolute value 1—the unit quaternions—are closed under multiplication. In fact, they form a group. And in fact, this group is just SU(2) in mild disguise!
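
If you want to check the multiplicative property numerically, here’s a small Python sketch. The product below is the standard quaternion multiplication determined by i^2 = j^2 = k^2 = -1 and ij = k:

import math, random

def qmult(p, q):
    # quaternions are tuples (a, b, c, d) standing for a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qabs(p):
    return math.sqrt(sum(t*t for t in p))

p = tuple(random.gauss(0, 1) for _ in range(4))
q = tuple(random.gauss(0, 1) for _ in range(4))
print(abs(qabs(qmult(p, q)) - qabs(p)*qabs(q)) < 1e-12)   # True: |pq| = |p||q|

In particular, multiplying two unit quaternions gives another unit quaternion, which is why they’re closed under multiplication.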

The unit quaternions form a sphere. Not an ordinary ‘2-sphere’ of the sort we’ve been talking about so far, but a ‘3-sphere’ in the 4-dimensional space of quaternions. We call that S^3.

So, the space of points in the rolling ball is isomorphic to a projective plane times a 3-sphere:

X \cong \mathbb{R}\mathrm{P}^2 \times S^3

But since the projective plane \mathbb{R}\mathrm{P}^2 is the sphere S^2 with antipodal points counted as the same point, our space of points is

\displaystyle{ X \cong \frac{S^2 \times S^3}{(p,q) \sim (-p,q)}}

Here dividing or ‘modding out’ by that stuff on the bottom says that we count any point (p,q) in S^2 \times S^3 as the same as (-p,q).

The cool part is that while S^3 is the unit quaternions, we can think of S^2 as the unit imaginary quaternions, where an imaginary quaternion is a number like

bi + cj + dk

So, we’re describing a point in the rolling ball geometry using a unit quaternion and a unit imaginary quaternion. That’s cool. But we can improve this description a bit using a nonobvious fact:

\displaystyle{ X \cong \frac{S^2 \times S^3}{(p,q) \sim (-p,-q)}}

There’s an extra minus sign here! Now we’re counting any point (p,q) in S^2 \times S^3 as the same as (-p,-q). In Proposition 2 in our paper we give an explicit formula for this isomorphism, which is important.

But never mind. Here’s the point of this improvement: we can now describe a point in the rolling ball geometry as a light ray through the origin in the imaginary split octonions! After all, any split octonion is of the form

a + bi + cj + dk + e \ell + f \ell i + g \ell j + h \ell k =  p + \ell q

where p and q are arbitrary quaternions. So, any imaginary split octonion is of the form

bi + cj + dk + e \ell + f \ell i + g \ell j + h \ell k =  p + \ell q

where q is a quaternion and p is an imaginary quaternion. This imaginary split octonion is lightlike if

-b^2 - c^2 - d^2 + e^2 + f^2 + g^2 + h^2 = 0

But this just says

|p|^2 = |q|^2

Any light ray through the origin in the imaginary split octonions consists of all real multiples of some lightlike imaginary split octonion. So, we can describe it using a pair (p,q) where p is an imaginary quaternion and q is a quaternion with the same absolute value as p. We can normalize them so they’re both unit quaternions… but (p,q) and its negative (-p,-q) still determine the same light ray.

So, the space of light rays through the origin in the imaginary split octonions is

\displaystyle{\frac{S^2 \times S^3}{(p,q) \sim (-p,-q)}}

But this is the space of points in the rolling ball geometry!

Yay!

So far nothing relies on knowing how to multiply imaginary split octonions: we only need the formula for their length, which is much simpler. It’s the lines in the rolling ball geometry that require multiplication. In our paper, we show in Theorem 5 that lines correspond to 2-dimensional subspaces of the imaginary split octonions with the property that whenever x, y lie in the subspace, then x y = 0. In particular this implies that x^2 = 0, which turns out to say that x is lightlike. So, these 2d subspaces consist of lightlike elements. But the property that x y = 0 whenever x, y lie in the subspace is actually stronger! And this is how the full strength of the split octonions gets used.
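
If you’d like to see x^2 = 0 happen concretely, here’s a sketch in Python. It builds split octonions as pairs of quaternions via the Cayley–Dickson construction, using one common sign convention, (p, q)(r, s) = (pr + \bar{s}q, sp + q\bar{r}); conventions differ between authors, so treat this as one workable choice rather than the definitive formula:

import math, random

def qmult(p, q):
    # quaternion product; tuples (a, b, c, d) stand for a + bi + cj + dk
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def qconj(p):
    a, b, c, d = p
    return (a, -b, -c, -d)

def omult(x, y):
    # split octonions stored as pairs (p, q) standing for p + l q;
    # with this convention one checks l^2 = +1, as it should be
    p, q = x
    r, s = y
    first = tuple(u + v for u, v in zip(qmult(p, r), qmult(qconj(s), q)))
    second = tuple(u + v for u, v in zip(qmult(s, p), qmult(q, qconj(r))))
    return (first, second)

def unit(p):
    n = math.sqrt(sum(t*t for t in p))
    return tuple(t / n for t in p)

random.seed(1)
p = unit((0,) + tuple(random.gauss(0, 1) for _ in range(3)))  # imaginary, |p| = 1
q = unit(tuple(random.gauss(0, 1) for _ in range(4)))         # |q| = 1
x = (p, q)                                                    # lightlike, since |p| = |q|
print(all(abs(t) < 1e-12 for half in omult(x, x) for t in half))  # True: x^2 = 0

With this convention, an imaginary x = p + \ell q satisfies x^2 = (|q|^2 - |p|^2) \, 1 = \|x\|^2 \, 1, so x squares to zero exactly when it is lightlike.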

One last comment. What if we hadn’t used a spinorial ball rolling on a projective ball? What if we had used an ordinary ball rolling on another ordinary ball? Then our space of points would be S^2 \times \mathrm{SO}(3). This is a lot like the space X we’ve been looking at. The only difference is a slight change in where we put the minus sign:

\displaystyle{ S^2 \times \mathrm{SO}(3) \cong \frac{S^2 \times S^3}{(p,q) \sim (p,-q)}}

But this space is different than X. We could go ahead and define lines and look for symmetries of this geometry, but we wouldn’t get G2. We’d get a much smaller group. We’d also get a smaller symmetry group if we worked with X but the big ball were anything other than 3 times the size of the small one. For proofs, see:

• Gil Bor and Richard Montgomery, G2 and the “rolling distribution”.

The history

I will resist telling you how to use geometric quantization to get back from the rolling ball geometry to the split octonions. I will also resist telling you about ‘null triples’, which give specially nice bases for the split octonions. This is where John Huerta really pulled out all the stops and used his octonionic expertise to its full extent. For these things, you’ll just have to read our paper.

Instead, I want to tell you about the history of this problem. This part is mainly for math history buffs, so I’ll freely fling around jargon that I’d been suppressing up to now. This part is also where I give credit to all the great mathematicians who figured out most of the stuff I just explained! I won’t include references, except for papers that are free online. You can find them all in our paper.

On May 23, 1887, Wilhelm Killing wrote a letter to Friedrich Engel saying that he had found a 14-dimensional simple Lie algebra. This is now called \mathfrak{g}_2, because it’s the Lie algebra corresponding to the group G2. By October he had completed classifying the simple Lie algebras, and in the next three years he published this work in a series of papers.

Besides the already known classical simple Lie algebras, he claimed to have found six ‘exceptional’ ones. In fact he only gave a rigorous construction of the smallest, \mathfrak{g}_2. Later, in his famous 1894 thesis, Élie Cartan constructed all of them and noticed that two of them were isomorphic, so that there are really only five.

But already in 1893, Cartan had published a note describing an open set in \mathbb{C}^5 equipped with a 2-dimensional ‘distribution’—a smoothly varying field of 2d spaces of tangent vectors—for which the Lie algebra \mathfrak{g}_2 appears as the infinitesimal symmetries. In the same year, and actually in the same journal, Engel noticed the same thing.

In fact, this 2-dimensional distribution is closely related to the rolling ball problem. The point is that the space of configurations of the rolling ball is 5-dimensional, with a 2-dimensional distribution that describes motions of the ball where it rolls without slipping or twisting.

Both Cartan and Engel returned to this theme in later work. In particular, Engel discovered in 1900 that a generic antisymmetric trilinear form on \mathbb{C}^7 is preserved by a group isomorphic to the complex form of G2. Furthermore, starting from this 3-form he constructed a nondegenerate symmetric bilinear form on \mathbb{C}^7. This implies that the complex form of G2 is contained in a group isomorphic to \mathrm{SO}(7,\mathbb{C}). He also noticed that the vectors x \in \mathbb{C}^7 that are null—meaning x \cdot x = 0, where we write the bilinear form as a dot product—define a 5-dimensional projective variety on which G2 acts.

In fact, this variety is the complexification of the configuration space of a rolling fermionic ball on a projective plane! Furthermore, the space \mathbb{C}^7 is best seen as the complexification of the space of imaginary octonions. Like the space of imaginary quaternions (better known as \mathbb{R}^3), the 7-dimensional space of imaginary octonions comes with a dot product and cross product. Engel’s bilinear form on \mathbb{C}^7 arises from complexifying the dot product. His antisymmetric trilinear form arises from the dot product together with the cross product via the formula

x \cdot (y \times z).

However, all this was seen only later! It was only in 1908 that Cartan mentioned that the automorphism group of the octonions is a 14-dimensional simple Lie group. Six years later he stated something he probably had known for some time: this group is the compact real form of G2.

As I already mentioned, the octonions had been discovered long before: in fact the day after Christmas in 1843, by Hamilton’s friend John Graves. Two months before that, Hamilton had sent Graves a letter describing his dramatic discovery of the quaternions. This encouraged Graves to seek an even larger normed division algebra, and thus the octonions were born. Hamilton offered to publicize Graves’ work, but put it off or forgot until the young Arthur Cayley rediscovered the octonions in 1845. That this obscure algebra lay at the heart of all the exceptional Lie algebras and groups became clear only slowly. Cartan’s realization of its relation to \mathfrak{g}_2, and his later work on a symmetry called ‘triality’, was the first step.

In 1910, Cartan wrote a paper that studied 2-dimensional distributions in 5 dimensions. Generically such a distribution is not integrable: in other words, the Lie bracket of two vector fields lying in this distribution does not again lie in this distribution. However, it lies in a 3-dimensional distribution. The Lie brackets of vector fields lying in this 3-dimensional distribution then generically give arbitrary tangent vectors to the 5-dimensional manifold. Such a distribution is called a (2,3,5) distribution. Cartan worked out a complete system of local geometric invariants for these distributions. He showed that if all these invariants vanish, the infinitesimal symmetries of a (2,3,5) distribution in a neighborhood of a point form the Lie algebra \mathfrak{g}_2.

Again this is relevant to the rolling ball. The space of configurations of a ball rolling on a surface is 5-dimensional, and it comes equipped with a (2,3,5) distribution. The 2-dimensional distribution describes motions of the ball where it rolls without twisting or slipping. The 3-dimensional distribution describes motions where it can roll and twist, but not slip. Cartan did not discuss rolling balls, but he did consider a closely related example: curves of constant curvature 2 or 1/2 in the unit 3-sphere.
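
For readers who enjoy computing, here’s a sketch using Python’s sympy of how a growth vector like (2,3,5) shows up. It uses the classic Hilbert–Cartan model rather than the rolling-ball distribution itself: the distribution on \mathbb{R}^5 spanned by the two vector fields below, which encodes the underdetermined ODE z' = (y'')^2:

import sympy as sp

x, y, yp, ypp, z = sp.symbols("x y yp ypp z")   # yp = y', ypp = y''
coords = [x, y, yp, ypp, z]

def bracket(A, B):
    # Lie bracket [A, B] of vector fields given as coefficient lists
    return [sp.simplify(sum(A[i]*sp.diff(B[j], coords[i])
                            - B[i]*sp.diff(A[j], coords[i])
                            for i in range(5)))
            for j in range(5)]

X1 = [0, 0, 0, 1, 0]                 # d/dy''
X2 = [1, yp, ypp, 0, ypp**2]         # d/dx + y' d/dy + y'' d/dy' + (y'')^2 d/dz

B12 = bracket(X1, X2)
print(sp.Matrix([X1, X2]).rank())                                           # 2
print(sp.Matrix([X1, X2, B12]).rank())                                      # 3
print(sp.Matrix([X1, X2, B12, bracket(X1, B12), bracket(X2, B12)]).rank())  # 5

So two rounds of bracketing are enough to reach every direction: this distribution has growth vector (2,3,5), just as for the rolling-ball distribution when the two radii differ.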

Beginning in the 1950s, François Bruhat and Jacques Tits developed a very general approach to incidence geometry, eventually called the theory of ‘buildings’, which among other things gives a systematic approach to geometries having simple Lie groups as symmetries. In the case of G2, because the Dynkin diagram of this group has two dots, the relevant geometry has two types of figure: points and lines. Moreover because the Coxeter group associated to this Dynkin diagram is the symmetry group of a hexagon, a generic pair of points a and d fits into a configuration like this, called an ‘apartment’:

There is no line containing a pair of points here except when a line is actually shown, and more generally there are no ‘shortcuts’ beyond what is shown. For example, we go from a to b by following just one line, but it takes two to get from a to c, and three to get from a to d.

Betty Salzberg wrote a nice introduction to these ideas in 1982. Among other things, she noted that the points and lines in the incidence geometry of the split real form of G2 correspond to 1- and 2-dimensional null subalgebras of the imaginary split octonions. This was shown by Tits in 1955.

In 1993, Bryant and Hsu gave a detailed treatment of curves in manifolds equipped with 2-dimensional distributions, greatly extending the work of Cartan:

• Robert Bryant and Lucas Hsu, Rigidity of integral curves of rank 2 distributions.

They showed how the space of configurations of one surface rolling on another fits into this framework. However, Igor Zelenko may have been the first to explicitly mention a ball rolling on another ball in this context, and to note that something special happens when the ratio of their radii is 3 or 1/3. In a 2005 paper, he considered an invariant of (2,3,5) distributions. He calculated it for the distribution arising from a ball rolling on a larger ball and showed it equals zero in these two cases.

(In our paper, John Huerta and I assume that the rolling ball is smaller than the fixed one, simply to eliminate one of these cases and focus on the case where the fixed ball is 3 times as big.)

In 2006, Bor and Montgomery’s paper put many of the pieces together. They studied the (2,3,5) distribution on S^2 \times \mathrm{SO}(3) coming from a ball of radius 1 rolling on a ball of radius R, and proved a theorem which they credit to Robert Bryant. First, passing to the double cover, they showed the corresponding distribution on S^2 \times \mathrm{SU}(2) has a symmetry group whose identity component contains the split real form of G2 when R = 3 or 1/3. Second, they showed this action does not descend to the original rolling ball configuration space S^2 \times \mathrm{SO}(3). Third, they showed that for any other value of R (except R = 1), the symmetry group is isomorphic to \mathrm{SU}(2) \times \mathrm{SU}(2)/\pm(1,1). They also wrote:

Despite all our efforts, the ‘3’ of the ratio 1:3 remains mysterious. In this article it simply arises out of the structure constants for G2 and appears in the construction of the embedding of \mathfrak{so}(3) \times \mathfrak{so}(3) into \mathfrak{g}_2. Algebraically speaking, this ‘3’ traces back to the 3 edges in \mathfrak{g}_2's Dynkin diagram and the consequent relative positions of the long and short roots in the root diagram for \mathfrak{g}_2 which the Dynkin diagram is encoding.

Open problem. Find a geometric or dynamical interpretation for the ‘3’ of the 3:1 ratio.

As you can see from what I’ve said, John Huerta and I have offered a solution to this puzzle: the ‘3’ comes from the fact that a ball rolling once around a ball 3 times as big turns around exactly 4 times—just what you need to get a relationship to a spinorial ball rolling on a projective plane, and thus the lightcone in the split octonions! The most precise statement of this explanation comes in Theorem 3 of our paper.

While Bor and Montgomery’s paper goes into considerable detail about the connection with split octonions, most of their work uses the now standard technology of semisimple Lie algebras: roots, weights and the like. In 2006, Sagerschnig described the incidence geometry of \mathrm{G}_2 using the split octonions, and in 2008, Agrachev wrote a paper entitled ‘Rolling balls and octonions’. He emphasizes that the double cover S^2 \times \mathrm{SU}(2) can be identified with the double cover of \mathrm{PC}, the projectivization of the set of null vectors in the imaginary split octonions. He then shows that given a point \langle x \rangle \in \mathrm{PC}, the set of points \langle y \rangle connected to \langle x \rangle by a single roll is the annihilator

\{ y \in \mathbb{I} : y x = 0 \}

where \mathbb{I} is the space of imaginary split octonions.

This sketch of the history is incomplete in many ways. As usual, history resembles a fractal: the more closely you study it, the more details you see! If you want to dig deeper, I strongly recommend these:

• Ilka Agricola, Old and new on the exceptional group G2.

• Robert Bryant, Élie Cartan and geometric duality.

This paper is also very helpful and fun:

• Aroldo Kaplan, Quaternions and octonions in mechanics.

It emphasizes the role that quaternions play in describing rotations, and the way an imaginary split octonion is built from an imaginary quaternion and a quaternion. And don’t forget this:

• Andrei Agrachev, Rolling balls and octonions.

Have fun!


Rolling Circles and Balls (Part 4)

2 January, 2013

So far in this series we’ve been looking at what happens when we roll circles on circles:

• In Part 1 we rolled a circle on a circle that’s the same size.


• In Part 2 we rolled a circle on a circle that’s twice as big.

• In Part 3 we rolled a circle inside a circle that was 2, 3, or 4 times as big.


In every case, we got lots of exciting math and pretty pictures. But all this pales in comparison to the marvels that occur when we roll a ball on another ball!

You’d never guess it, but the really amazing stuff happens when you roll a ball on another ball that’s exactly 3 times as big. In that case, the geometry of what’s going on turns out to be related to special relativity in a weird universe with 3 time dimensions and 4 space dimensions! Even more amazingly, it’s related to a strange number system called the split octonions.

The ordinary octonions are already strange enough. They’re an 8-dimensional number system where you can add, subtract, multiply and divide. They were invented in 1843 after the famous mathematician Hamilton invented a rather similar 4-dimensional number system called the quaternions. He told his college pal John Graves about it, since Graves was the one who got Hamilton interested in this stuff in the first place… though Graves had gone on to become a lawyer, not a mathematician. The day after Christmas that year, Graves sent Hamilton a letter saying he’d found an 8-dimensional number system with almost all the same properties! The one big missing property was the associative law for multiplication, namely:

(ab)c = a(bc)

The quaternions obey this, but the octonions don’t. For this and other reasons, they languished in obscurity for many years. But they eventually turned out to be the key to understanding some otherwise inexplicable symmetry groups called ‘exceptional groups’. Later still, they turned out to be important in string theory!

I’ve been fascinated by this stuff for a long time, in part because it starts out seeming crazy and impossible to understand… but eventually it makes sense. So, it’s a great example of how you can dramatically change your perspective by thinking for a long time. Also, it suggests that there could be patterns built into the structure of math, highly nonobvious patterns, which turn out to explain a lot about the universe.

About a decade ago I wrote a paper summarizing everything I’d learned so far:

The octonions.

But I knew there was much more to understand. I wanted to work on this subject with a student. But I never dared until I met John Huerta, who, rather oddly, wanted to get a Ph.D. in math but work on physics. That’s generally not a good idea. But it’s exactly what I had wanted to do as a grad student, so I felt a certain sympathy for him.


And he seemed good at thinking about how algebra and particle physics fit together. So, I decided we should start by writing a paper on ‘grand unified theories’—theories of all the forces except gravity:

The algebra of grand unified theories.

The arbitrary-looking collection of elementary particles we observe in nature turns out to contain secret patterns—patterns that jump into sharp focus using some modern algebra! Why do quarks have weird fractional charges like 2/3 and -1/3? Why does each generation of particles contain two quarks and two leptons? I can’t say we really know the answer to such questions, but the math of grand unified theories makes these strange facts seem natural and inevitable.

The math turns out to involve rotations in 10 dimensions, and ‘spinors’: things that only come around back to the way they started after you turn them around twice. This turned out to be a great preparation for our later work.

As we wrote this article, I realized that John Huerta had a gift for mathematical prose. In fact, we recently won a prize for this paper! In two weeks we’ll meet at the big annual American Mathematical Society conference and pick it up.

John Huerta wound up becoming an expert on the octonions, and writing his thesis about how they make superstring theory possible in 10-dimensional spacetime:

Division algebras, supersymmetry and higher gauge theory.

The wonderful fact is that string theory works well in 10 dimensions because the octonions are 8-dimensional! Suppose that at each moment in time, a string is like a closed loop. Then as time passes, it traces out a 2-dimensional sheet in spacetime, called a worldsheet:

In this picture, 'up' means 'forwards in time'. Unfortunately this picture is just 3-dimensional: the real story happens in 10 dimensions! Don't bother trying to visualize 10 dimensions, just count: in 10-dimensional spacetime there are 10 – 2 = 8 extra dimensions besides those of the string’s worldsheet. These are the directions in which the string can vibrate. Since the octonions are 8-dimensional, we can describe the string’s vibrations using octonions! The algebraic magic of this number system then lets us cook up a beautiful equation describing these vibrations: an equation that has ‘supersymmetry’.

For a full explanation, read John Huerta’s thesis. But for an easy overview, read this paper we published in Scientific American:

The strangest numbers in string theory.

This got included in a collection called The Best Writing on Mathematics 2012, further confirming my opinion that collaborating with John Huerta was a good idea.

Anyway: string theory sounds fancy, but for many years I’d been tantalized by the relationship between the octonions and a much more prosaic physics problem: a ball rolling on another ball. I had a lot of clues saying there should be a nice relationship… though only if we work with a mutant version of the octonions called the ‘split’ octonions.

You probably know how we get the complex numbers by taking the ordinary real numbers and throwing in a square root of -1. But there’s also another number system, far less popular but still interesting, called the split complex numbers. Here we throw in a square root of 1 instead. Of course 1 already has two square roots, namely 1 and -1. But that doesn’t stop us from throwing in another!
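
Here’s a tiny Python sketch of what this new square root of 1 does. Writing a split complex number a + bj as a pair (a, b), the multiplication rule follows from j^2 = 1, and we immediately meet ‘zero divisors’: nonzero numbers that multiply to zero.

def smult(x, y):
    # (a + bj)(c + dj) = (ac + bd) + (ad + bc)j, since j*j = +1
    a, b = x
    c, d = y
    return (a*c + b*d, a*d + b*c)

j = (0, 1)
print(smult(j, j))               # (1, 0): j squares to 1, though j is neither 1 nor -1
print(smult((1, 1), (1, -1)))    # (0, 0): (1 + j)(1 - j) = 0, so we can't always divide

Those zero divisors are the 2-dimensional shadow of the lightlike elements we keep meeting: a + bj has ‘length squared’ a^2 - b^2, which vanishes exactly when b = ±a.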

This ‘split’ game, which is a lot more profound than it sounds at first, also works for the quaternions and octonions. We get the octonions by starting with the real numbers and throwing in seven square roots of -1, for a total of 8 dimensions. For the split octonions, we start with the real numbers and throw in three square roots of -1 and four square roots of 1. The split octonions are surprisingly similar to the octonions. There are tricks to go back and forth between the two, so you should think of them as two forms of the same underlying thing.

Anyway: I really liked the idea of finding the split octonions lurking in a concrete physics problem like a ball rolling on another ball. I hoped maybe this could shed some new light on what the octonions are really all about.

James Dolan and I tried hard to get it to work. We made a lot of progress, but then we got stuck, because we didn’t realize it only works when one ball is 3 times as big as the other! That was just too crazy for us to guess.

In fact, some mathematicians had known about this for a long time. Things would have gone a lot faster if I’d read more papers early on. By the time we caught up with the experts, I’d left for Singapore, and John Huerta, still back in Riverside, was the one talking with James Dolan about this stuff. They figured out a lot more.

Then Huerta got his Ph.D. and took a job in Australia, which is as close to Singapore as it is to almost anything. I got a grant from the Foundational Questions Institute to bring John to Singapore and figure out more stuff about the octonions and physics… and we wound up writing a paper about the rolling ball problem:

G2 and the rolling ball.

Whoops! I haven’t introduced G2 yet. It’s one of those ‘exceptional groups’ I mentioned: the smallest one, in fact. Like the octonions themselves, this group comes in a few different but closely related ‘forms’. The most famous form is the symmetry group of the octonions. But in our paper, we’re more interested in the ‘split’ form, which is the symmetry group of the split octonions. The reason is that this group is also the symmetry group of a ball rolling without slipping or twisting on another ball that’s exactly 3 times as big!

The fact that the same group shows up as the symmetries of these two different things is a huge clue that they’re deeply related. The challenge is to understand the relationship.

There are two parts to this challenge. One is to describe the rolling ball problem in terms of split octonions. The other is to reverse the story, and somehow get the split octonions to emerge naturally from the study of a rolling ball!

In our paper we tackled both parts. Describing the rolling ball problem using split octonions had already been done by other mathematicians, for example here:

• Robert Bryant and Lucas Hsu, Rigidity of integral curves of rank 2 distributions.

• Gil Bor and Richard Montgomery, G2 and the “rolling distribution”.

• Andrei Agrachev, Rolling balls and octonions.

• Aroldo Kaplan, Quaternions and octonions in mechanics.

We do, however, give a simpler explanation of why this description only works when one ball is 3 times as big as the other.

The other part, getting the split octonions to show up starting from the rolling ball problem, seems to be new. We show that in a certain sense, quantizing the rolling ball gives the split octonions! Very roughly, split octonions can be seen as quantum states of the rolling ball.

At this point I’ve gone almost as far as I can without laying on some heavy math. In theory I could show you pretty animations of a little ball rolling on a big one, and use these to illustrate the special thing that happens when the big one is 3 times as big. In theory I might be able to explain the whole story without many equations or much math jargon. That would be lots of fun…

… for you. But it would be a huge amount of work for me. So at this point, to make my job easier, I want to turn up the math level a notch or two. And this is a good point for both of us to take a little break.

In the next and final post in this series, I’ll sketch how the problem of a little ball rolling on a big stationary ball can be described using split octonions… and why the symmetries of this problem give a group that’s the split form of G2 if the big ball has a radius that’s 3 times the radius of the little one!

I will not quantize the rolling ball problem—for that, you’ll need to read our paper.

