Planets in the Fourth Dimension

You probably know that planets go around the sun in elliptical orbits. But do you know why?

In fact, they’re moving in circles in 4 dimensions. But when these circles are projected down to 3-dimensional space, they become ellipses!

This animation by Greg Egan shows the idea:

The plane here represents 2 of the 3 space dimensions we live in. The vertical direction is the mysterious fourth dimension. The planet goes around in a circle in 4-dimensional space. But down here in 3 dimensions, its ‘shadow’ moves in an ellipse!

What’s this fourth dimension I’m talking about here? It’s a lot like time. But it’s not exactly time. It’s the difference between ordinary time and another sort of time, which flows at a rate inversely proportional to the distance between the planet and the sun.

The movie uses this other sort of time. Relative to this other time, the planet is moving at constant speed around a circle in 4 dimensions. But in ordinary time, its shadow in 3 dimensions moves faster when it’s closer to the sun.

All this sounds crazy, but it’s not some new physics theory. It’s just a different way of thinking about Newtonian physics!

Physicists have known about this viewpoint at least since 1970, thanks to a paper by the mathematical physicist Jürgen Moser. Some parts of the story are much older. A lot of papers have been written about it.

But I only realized how simple it is when I got this paper in my email, from someone I’d never heard of before:

• Jesper Göransson, Symmetries of the Kepler problem, 8 March 2015.

I get a lot of papers by crackpots in my email, but the occasional gem from someone I don’t know makes up for all those.

The best thing about Göransson’s 4-dimensional description of planetary motion is that it gives a clean explanation of an amazing fact. You can take any elliptical orbit, apply a rotation of 4-dimensional space, and get another valid orbit!

Of course we can rotate an elliptical orbit about the sun in the usual 3-dimensional way and get another elliptical orbit. The interesting part is that we can also do 4-dimensional rotations. This can make a round ellipse look skinny: when we tilt a circle into the fourth dimension, its ‘shadow’ in 3-dimensional space becomes thinner!

In fact, you can turn any elliptical orbit into any other elliptical orbit with the same energy by a 4-dimensional rotation of this sort. All elliptical orbits with the same energy are really just circular orbits on the same sphere in 4 dimensions!

Jesper Göransson explains how this works in a terse and elegant way. But I can’t resist summarizing the key results.

The Kepler problem

Suppose we have a particle moving in an inverse square force law. Its equation of motion is

\displaystyle{ m \ddot{\mathbf{r}} = - \frac{k \mathbf{r}}{r^3} }

where \mathbf{r} is its position as a function of time, r is its distance from the origin, m is its mass, and k says how strong the force is. From this we can derive the law of conservation of energy, which says

\displaystyle{ \frac{m \dot{\mathbf{r}} \cdot \dot{\mathbf{r}}}{2} - \frac{k}{r} = E }

for some constant E that depends on the particle’s orbit, but doesn’t change with time.

Let’s consider an attractive force, so k > 0, and elliptical orbits, so E < 0. Let's call the particle a 'planet'. It's a planet moving around the sun, where we treat the sun as so heavy that it remains perfectly fixed at the origin.

I only want to study orbits of a single fixed energy E. This frees us to choose units of mass, length and time in which

m = 1, \;\; k = 1, \;\; E = -\frac{1}{2}

This will reduce the clutter of letters and let us focus on the key ideas. If you prefer an approach that keeps the units, see Göransson’s paper.

Now the equation of motion is

\displaystyle{\ddot{\mathbf{r}} = - \frac{\mathbf{r}}{r^3} }

and conservation of energy says

\displaystyle{ \frac{\dot{\mathbf{r}} \cdot \dot{\mathbf{r}}}{2} - \frac{1}{r} = -\frac{1}{2} }
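Before going further, it’s reassuring to check this numerically. Here’s a minimal sketch (my own illustration, not part of the argument): integrate the equation of motion with a Runge–Kutta stepper, starting from initial conditions hand-picked so that E = -1/2, and watch the energy stay put.

```python
import numpy as np

def deriv(state):
    """Time derivative of state = (position, velocity) for r'' = -r/r^3."""
    pos, vel = state[:3], state[3:]
    return np.concatenate([vel, -pos / np.linalg.norm(pos)**3])

def rk4_step(state, h):
    k1 = deriv(state)
    k2 = deriv(state + h/2 * k1)
    k3 = deriv(state + h/2 * k2)
    k4 = deriv(state + h * k3)
    return state + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def energy(state):
    pos, vel = state[:3], state[3:]
    return vel @ vel / 2 - 1 / np.linalg.norm(pos)

# Aphelion of an eccentricity-1/2 ellipse: r = 3/2, speed chosen so E = -1/2.
state = np.array([1.5, 0.0, 0.0, 0.0, np.sqrt(1/3), 0.0])
drift = 0.0
for _ in range(20000):                 # about three orbital periods
    state = rk4_step(state, 0.001)
    drift = max(drift, abs(energy(state) + 0.5))

print(drift)   # stays tiny: the energy is conserved along the orbit
```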

The big idea, apparently due to Moser, is to switch from our ordinary notion of time to a new notion of time! We’ll call this new time s, and demand that

\displaystyle{ \frac{d s}{d t} = \frac{1}{r} }

This new kind of time ticks more slowly as you get farther from the sun. So, using this new time speeds up the planet’s motion when it’s far from the sun. If that seems backwards, just think about it. For a planet very far from the sun, one day of this new time could equal a week of ordinary time. So, measured using this new time, a planet far from the sun might travel in one day what would normally take a week.

This compensates for the planet’s ordinary tendency to move slower when it’s far from the sun. In fact, with this new kind of time, a planet moves just as fast when it’s farthest from the sun as when it’s closest.

Amazing stuff happens with this new notion of time!

To see this, first rewrite conservation of energy using this new notion of time. I’ve been using a dot for the ordinary time derivative, following Newton. Let’s use a prime for the derivative with respect to s. So, for example, we have

\displaystyle{ t' = \frac{dt}{ds} = r }


\displaystyle{ \mathbf{r}' = \frac{d\mathbf{r}}{ds} = \frac{dt}{ds}\frac{d\mathbf{r}}{dt} = r \dot{\mathbf{r}} }

Using this new kind of time derivative, Göransson shows that conservation of energy can be written as

\displaystyle{ (t' - 1)^2 + \mathbf{r}' \cdot \mathbf{r}' = 1 }

This is the equation of a sphere in 4-dimensional space!

I’ll prove that conservation of energy can be written this way later. First let’s talk about what it means. To understand it, we should treat the ordinary time coordinate t and the space coordinates (x,y,z) on an equal footing. The point

(t,x,y,z)

moves around in 4-dimensional space as the parameter s changes. What we’re seeing is that the velocity of this point, namely

\mathbf{v} = (t',x',y',z')

moves around on a sphere in 4-dimensional space! It’s a sphere of radius one centered at the point

(1,0,0,0)
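Here’s a way to see this concretely (my own numerical check, not from the paper). It uses the textbook eccentric-anomaly form of the orbit, which in our units (semi-major axis 1) reads r = 1 - e cos u, t = u - e sin u, with position (cos u - e, √(1-e²) sin u); pleasantly, the eccentric anomaly u coincides with the new time s here, since dt/du = r. The 4-velocity computed from it lands exactly on the unit sphere centered at (1,0,0,0):

```python
import numpy as np

e = 0.6                                  # eccentricity (any 0 <= e < 1 works)
u = np.linspace(0.0, 2*np.pi, 1000)      # eccentric anomaly = new time s

r = 1 - e*np.cos(u)                      # distance from the sun
# ordinary velocity: dx/dt = (dx/du)/(dt/du), with dt/du = r
xdot = -np.sin(u) / r
ydot = np.sqrt(1 - e**2) * np.cos(u) / r

# 4-velocity components: t' = r, and spatial part r' = r * (dr/dt)
t_p, x_p, y_p = r, r*xdot, r*ydot

sphere = (t_p - 1)**2 + x_p**2 + y_p**2
print(np.max(np.abs(sphere - 1)))        # zero up to rounding error
```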
With some further calculation we can show some other wonderful facts:

\mathbf{r}''' = -\mathbf{r}'


t''' = -(t' - 1)

These are the usual equations for a harmonic oscillator, but with an extra derivative!

I’ll prove these wonderful facts later. For now let’s just think about what they mean. We can state both of them in words as follows: the 4-dimensional velocity \mathbf{v} carries out simple harmonic motion about the point (1,0,0,0).

That’s nice. But since \mathbf{v} also stays on the unit sphere centered at this point, we can conclude something even better: \mathbf{v} must move along a great circle on this sphere, at constant speed!

This implies that the spatial components of the 4-dimensional velocity have mean 0, while the t component has mean 1.

The first part here makes a lot of sense: our planet doesn’t drift ever farther from the Sun, so its mean velocity must be zero. The second part is a bit subtler, but it also makes sense: the ordinary time t moves forward at speed 1 on average with respect to the new time parameter s, but its rate of change oscillates in a sinusoidal way.
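Both statements can be checked directly from the explicit solution. In our units the 4-velocity of an orbit of eccentricity e works out to (1 - e cos s, -sin s, √(1-e²) cos s), a textbook parametrization, using the fact that the eccentric anomaly coincides with s when the semi-major axis is 1 (I’m suppressing the z direction). A small numerical check of the means and of the constant speed:

```python
import numpy as np

e = 0.7
s = np.linspace(0.0, 2*np.pi, 100000, endpoint=False)  # one full period

t_p = 1 - e*np.cos(s)               # t' component of the 4-velocity
x_p = -np.sin(s)                    # x'
y_p = np.sqrt(1 - e**2)*np.cos(s)   # y'

print(t_p.mean())                   # mean of t' is 1
print(x_p.mean(), y_p.mean())       # spatial means are 0

# speed of v = (t', x', y') with respect to s, from the exact derivatives:
speed = np.sqrt((e*np.sin(s))**2 + np.cos(s)**2 + (1 - e**2)*np.sin(s)**2)
print(np.ptp(speed))                # spread is zero: v moves at constant speed 1
```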

If we integrate both sides of

\mathbf{r}''' = -\mathbf{r}'

we get

\mathbf{r}'' = -\mathbf{r} + \mathbf{a}

for some constant vector \mathbf{a}. This says that the position \mathbf{r} oscillates harmonically about a point \mathbf{a}. Since \mathbf{a} doesn’t change with time, it’s a conserved quantity: it’s called the Runge–Lenz vector.
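This conservation law is easy to watch numerically. Here’s a sketch (my own check, not part of the derivation above), using the classical form of the Runge–Lenz vector, A = v × L - r/|r|, which for these orbits agrees with the vector \mathbf{a} above up to sign and convention:

```python
import numpy as np

def deriv(state):
    pos, vel = state[:3], state[3:]
    return np.concatenate([vel, -pos / np.linalg.norm(pos)**3])

def rk4_step(state, h):
    k1 = deriv(state)
    k2 = deriv(state + h/2 * k1)
    k3 = deriv(state + h/2 * k2)
    k4 = deriv(state + h * k3)
    return state + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def runge_lenz(state):
    pos, vel = state[:3], state[3:]
    L = np.cross(pos, vel)               # angular momentum (also conserved)
    return np.cross(vel, L) - pos / np.linalg.norm(pos)

state = np.array([1.5, 0.0, 0.0, 0.0, np.sqrt(1/3), 0.0])  # E = -1/2 orbit
A0 = runge_lenz(state)
drift = 0.0
for _ in range(20000):
    state = rk4_step(state, 0.001)
    drift = max(drift, np.linalg.norm(runge_lenz(state) - A0))

# |A| equals the orbit's eccentricity (here 1/2), and A barely moves:
print(np.linalg.norm(A0), drift)
```

The norm of A coming out as the eccentricity is another classical fact, thrown in as a bonus check.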

Often people start with the inverse square force law, show that angular momentum and the Runge–Lenz vector are conserved, and use these 6 conserved quantities and Noether’s theorem to show there’s a 6-dimensional group of symmetries. For solutions with negative energy, this turns out to be the group of rotations in 4 dimensions, \mathrm{SO}(4). With more work, we can see how the Kepler problem is related to a harmonic oscillator in 4 dimensions. Doing this involves reparametrizing time.

I like Göransson’s approach better in many ways, because it starts by biting the bullet and reparametrizing time. This lets him rather efficiently show that the planet’s elliptical orbit is a projection to 3-dimensional space of a circular orbit in 4d space. The 4d rotational symmetry is then evident!

Göransson actually carries out his argument for an inverse square law in n-dimensional space; it’s no harder. The elliptical orbits in n dimensions are projections of circular orbits in n+1 dimensions. Angular momentum is a bivector in n dimensions; together with the Runge–Lenz vector it forms a bivector in n+1 dimensions. This is the conserved quantity associated to the (n+1)-dimensional rotational symmetry of the problem.

He also carries out the analogous argument for positive-energy orbits, which are hyperbolas, and zero-energy orbits, which are parabolas. The hyperbolic case has the Lorentz group symmetry and the zero-energy case has Euclidean group symmetry! This was already known, but it’s nice to see how easily Göransson’s calculations handle all three cases.

Mathematical details

Checking all this is a straightforward exercise in vector calculus, but it takes a bit of work, so let me do some of it here. There will still be details left to fill in, and I urge you to give it a try, because this is the sort of thing that’s more interesting to do than to watch.

There are a lot of equations coming up, so I’ll put boxes around the important ones. The basic ones are the force law, conservation of energy, and the change of variables that gives

\boxed{  t' = r , \qquad  \mathbf{r}' = r \dot{\mathbf{r}} }

We start with conservation of energy:

\boxed{ \displaystyle{ \frac{\dot{\mathbf{r}} \cdot \dot{\mathbf{r}}}{2} -  \frac{1}{r}  = -\frac{1}{2} } }

and then use

\displaystyle{ \dot{\mathbf{r}} = \frac{d\mathbf{r}/ds}{dt/ds} = \frac{\mathbf{r}'}{t'} }

to obtain

\displaystyle{ \frac{\mathbf{r}' \cdot \mathbf{r}'}{2 t'^2}  - \frac{1}{t'} = -\frac{1}{2} }

With a little algebra this gives

\boxed{ \displaystyle{ \mathbf{r}' \cdot \mathbf{r}' + (t' - 1)^2 = 1} }

This shows that the ‘4-velocity’

\mathbf{v} = (t',x',y',z')

stays on the unit sphere centered at (1,0,0,0).

The next step is to take the equation of motion

\boxed{ \displaystyle{\ddot{\mathbf{r}} = - \frac{\mathbf{r}}{r^3} } }

and rewrite it using primes (s derivatives) instead of dots (t derivatives). We start with

\displaystyle{ \dot{\mathbf{r}} = \frac{\mathbf{r}'}{r} }

and differentiate again to get

\ddot{\mathbf{r}} = \displaystyle{ \frac{1}{r} \left(\frac{\mathbf{r}'}{r}\right)' }  = \displaystyle{ \frac{1}{r} \left( \frac{r \mathbf{r}'' - r' \mathbf{r}'}{r^2} \right) } = \displaystyle{ \frac{r \mathbf{r}'' - r' \mathbf{r}'}{r^3} }

Now we use our other equation for \ddot{\mathbf{r}} and get

\displaystyle{ \frac{r \mathbf{r}'' - r' \mathbf{r}'}{r^3} = - \frac{\mathbf{r}}{r^3} }


r \mathbf{r}'' - r' \mathbf{r}' = -\mathbf{r}


\boxed{ \displaystyle{ \mathbf{r}'' =  \frac{r' \mathbf{r}' - \mathbf{r}}{r} } }

To go further, it’s good to get a formula for r'' as well. First we compute

r' = \displaystyle{ \frac{d}{ds} (\mathbf{r} \cdot \mathbf{r})^{\frac{1}{2}} } = \displaystyle{ \frac{\mathbf{r}' \cdot \mathbf{r}}{r} }

and then differentiating again,

r'' = \displaystyle{\frac{d}{ds} \frac{\mathbf{r}' \cdot \mathbf{r}}{r} } = \displaystyle{ \frac{r \mathbf{r}'' \cdot \mathbf{r} + r \mathbf{r}' \cdot \mathbf{r}' - r' \mathbf{r}' \cdot \mathbf{r}}{r^2} }

Plugging in our formula for \mathbf{r}'', some wonderful cancellations occur and we get

r'' = \displaystyle{ \frac{\mathbf{r}' \cdot \mathbf{r}'}{r} - 1 }

But we can do better! Remember, conservation of energy says

\displaystyle{ \mathbf{r}' \cdot \mathbf{r}' + (t' - 1)^2 = 1}

and we know t' = r. So,

\mathbf{r}' \cdot \mathbf{r}' = 1 - (r - 1)^2 = 2r - r^2


r'' = \displaystyle{ \frac{\mathbf{r}' \cdot \mathbf{r}'}{r} - 1 } = 1 - r

So, we see

\boxed{ r'' = 1 - r }

Can you get here more elegantly?

Since t' = r this instantly gives

\boxed{ t''' = 1 - t' }

as desired.

Next let’s get a similar formula for \mathbf{r}'''. We start with

\displaystyle{ \mathbf{r}'' =  \frac{r' \mathbf{r}' - \mathbf{r}}{r} }

and differentiate both sides to get

\displaystyle{ \mathbf{r}''' = \frac{r r'' \mathbf{r}' + r r' \mathbf{r}'' - r \mathbf{r}' - r' (r' \mathbf{r}' - \mathbf{r})}{r^2} }

Then plug in our formulas for r'' and \mathbf{r}''. Some truly miraculous cancellations occur and we get

\boxed{  \mathbf{r}''' = -\mathbf{r}' }

I could show you how it works—but to really believe it you have to do it yourself. It’s just algebra. Again, I’d like a better way to see why this happens!
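If you’d rather let a computer do the dirty work, here’s a numerical sketch (mine, under the same conventions as above): integrate the boxed equation for \mathbf{r}'' directly in the new time s, and check both that \mathbf{a} = \mathbf{r}'' + \mathbf{r} stays fixed and that the whole state returns to its starting value after s = 2π, as harmonic motion demands.

```python
import numpy as np

def deriv(y):
    """y = (r, u) with u = dr/ds; uses r'' = (r' u - r)/|r| and r' = u.r/|r|."""
    pos, u = y[:3], y[3:]
    r = np.linalg.norm(pos)
    r_s = (u @ pos) / r
    return np.concatenate([u, (r_s*u - pos) / r])

def rk4_step(y, h):
    k1 = deriv(y); k2 = deriv(y + h/2*k1)
    k3 = deriv(y + h/2*k2); k4 = deriv(y + h*k3)
    return y + h/6*(k1 + 2*k2 + 2*k3 + k4)

# aphelion of an e = 1/2, E = -1/2 orbit; u = r * (ordinary velocity)
y0 = np.array([1.5, 0.0, 0.0, 0.0, 1.5*np.sqrt(1/3), 0.0])
y, h, n = y0.copy(), 2*np.pi/10000, 10000
a0 = deriv(y0)[3:] + y0[:3]              # a = r'' + r at the start
a_drift = 0.0
for _ in range(n):
    y = rk4_step(y, h)
    a_drift = max(a_drift, np.linalg.norm(deriv(y)[3:] + y[:3] - a0))

period_error = np.linalg.norm(y - y0)    # after s = 2*pi we are back home
print(a_drift, period_error)             # both tiny
```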

Integrating both sides—which is a bit weird, since we got this equation by differentiating both sides of another one—we get

\boxed{ \mathbf{r}'' = -\mathbf{r} + \mathbf{a} }

for some fixed vector \mathbf{a}, the Runge–Lenz vector. This says \mathbf{r} undergoes harmonic motion about \mathbf{a}. It’s quite remarkable that both \mathbf{r} and its norm r undergo harmonic motion! At first I thought this was impossible, but it’s just a very special circumstance.

The quantum version of a planetary orbit is a hydrogen atom. Everything we just did has a quantum version! For more on that, see

• Greg Egan, The ellipse and the atom.

For more of the history of this problem, see:

• John Baez, Mysteries of the gravitational 2-body problem.

This also treats quantum aspects, connections to supersymmetry and Jordan algebras, and more! Someday I’ll update it to include the material in this blog post.

59 Responses to Planets in the Fourth Dimension

  1. Greg Egan says:

    Starting from scratch, it takes a lot of work to see how switching to the Moser-Göransson time coordinate makes a Kepler orbit look like a harmonic oscillator. But if we’re willing to make use of a few well-known, independently established facts, that offers a shortcut.

    Let’s take it as given that both the inverse-square law and the harmonic oscillator have elliptical orbits, and then use two well-known facts about ellipses:

    (1) The sum of the distances from a point P on the ellipse to the two foci, F and G, is 2a, where a is the major semi-axis (the “pins, string and pencil” property).

    (2) Lines from the foci to a point on the curve make equal angles with a tangent to the curve (the “whispering gallery” property).

    In the diagram, F’, C’ and G’ are the orthogonal projections onto the tangent at P of the foci F and G and the centre of the ellipse, C. The orthogonal distances are d_F, d_C and d_G. Property (2) means that the right triangles PF’F and PG’G are similar, and after getting the second triangle’s hypotenuse from property (1), we have:

    d_G/d_F = (2a-r)/r

    But the similarity of the triangles HF’F and HG’G also gives us:

    d_G/d_F = (q+2c)/q

    Solving for q:

    q = c r / (a-r)

    and inserting the result into an equation we get from the similarity of the triangles HF’F and HC’C:

    d_F/d_C = q/(q+c)

    we obtain:

    d_F/d_C = r/a

    Since angular momentum must be conserved around C in an isotropic harmonic oscillator, and around F in a Kepler orbit with centre of attraction F, we have:

    v_H = L_H / d_C
    v_K = L_K / d_F

    for angular momenta L_H and L_K and speeds v_H and v_K in the harmonic oscillator and Kepler orbit cases respectively. So we have:

    v_H / v_K = (L_H/L_K) d_F/d_C = (L_H/L_K) r/a

    But if the same curve is traced at speed v_K in Kepler time t and at speed v_H in oscillator time s, then dt/ds = v_H/v_K, so:

    d t / d s = (L_H/L_K) r/a

    In other words, we have shown that any change of time coordinate that makes the Kepler orbit look like a harmonic oscillator must have the rate of change of the old time coordinate with respect to the new one being proportional to r.

    • Greg Egan says:

      Here’s a somewhat slicker route to the ratio between the orthogonal distances from the focus and the centre of an ellipse to a tangent at any point.

      Writing the length of the part of the tangent between the two projected foci, F’ and G’, in two ways, we have:

      F' P + P G' = 2 F' C'

      Writing the orthogonal distance from the tangent to the centre as the average of the orthogonal distances to the two foci, we have:

      d_F + d_G = 2 d_C

      Now, the “whispering gallery” property of the ellipse means that the two angles marked at P are the same, and hence the tan of that angle can be written two ways as:

      d_F / F' P = d_G / P G'

      It’s obvious (and provable with a tiny bit of algebra), that these three equations imply that:

      d_C / F' C' = d_F / F' P = d_G / P G'

      This means that the angles marked at F’ and G’ are equal to the two angles marked at P. It also means that:

      d_F / d_C = F' P / F' C'

      Now, the ratio F' P / F' C' is simply twice the ratio F' P / F' G', which in turn is equal to r / (r + (2a - r)). So we have:

      d_F / d_C = F' P / F' C' = r/a
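For anyone who wants to double-check this geometry without the diagram, here’s a small numerical test (my own, not Greg’s): parametrize an ellipse, drop perpendiculars from a focus F and the centre C to the tangent at a point P, and compare d_F/d_C with r/a.

```python
import numpy as np

a, b = 2.0, 1.2                          # semi-axes of a test ellipse
c = np.sqrt(a**2 - b**2)
F = np.array([c, 0.0])                   # a focus; the centre C is the origin

worst = 0.0
for th in np.linspace(0.1, 6.2, 13):
    P = np.array([a*np.cos(th), b*np.sin(th)])          # point on the ellipse
    T = np.array([-a*np.sin(th), b*np.cos(th)])         # tangent direction at P
    nvec = np.array([T[1], -T[0]]) / np.linalg.norm(T)  # unit normal to tangent
    d_C = abs(nvec @ P)                  # distance from centre to tangent line
    d_F = abs(nvec @ (P - F))            # distance from focus to tangent line
    r = np.linalg.norm(P - F)
    worst = max(worst, abs(d_F/d_C - r/a))

print(worst)   # the identity d_F/d_C = r/a holds at every sampled point
```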

    • Greg Egan says:

      The animation above shows the helical world line in Newtonian space-time for one orbit, with points along the world line that start out spaced at equal intervals of Newtonian time, and then morph to show equal intervals of Moser-Göransson time. In each case, successive points correspond to wedges of equal area swept out in the elliptical orbit, but the wedges have different centres.

      This second animation shows the ordinary velocities (in blue) and the 4-velocities (in black) as we morph from using Newtonian time to Moser-Göransson time.

      In Newtonian time, the tips of the ordinary velocity vectors form a circle (not centred on the origin), which is the projection of the base of a circular cone that lies perfectly level: every 4-velocity using Newtonian time has the same height, of exactly 1.

      In Moser-Göransson time, the tips of the ordinary velocity vectors form an ellipse (of the same shape as the orbit itself), which is the projection of a tilted great circle on a sphere.

      • John Baez says:

        Great animations!

        Since I’m getting interested in the history of these ideas, the phrase “Moser-Göransson time” makes me wonder who really invented this time reparametrization.

        On the one hand, in a comment above, Martin Lo called it “Sundman’s Transform”.

        But another candidate is Levi-Civita, whose paper I haven’t read yet:

        • Tullio Levi-Civita, Sur la régularisation du problème des trois corps, Acta Mathematica 42 (1920), 99–144.

        I’d known about some of his work on tensor calculus, but not that he also worked on celestial mechanics, including the 3-body problem. It seems he ‘regularized’ collisions in the 3-body problem using a time reparametrization trick similar to Moser’s.

        (The bastards at Springer aren’t letting us read that paper for free yet, but it’s impressive to see the list of authors in this particular issue of Acta Mathematica: Study, Landau, Levi-Civita, Riesz, Lévy, Hilbert, Mittag-Leffler, Pólya, Hardy, Malmquist, Julia, Bieberbach, as well as just a few mathematicians I don’t know.)

      • John Baez says:

        It looks like Sundman used a reparametrization of time with

        \displaystyle{ \frac{d\tau}{dt} = \frac{1}{r} }

        before Levi-Civita, in his attempts to regularize the 3-body problem:

        • Karl Sundman, Mémoire sur le problème des trois corps, Acta Mathematica 36 (1912), 105–179.

        Again it’s in Acta. The 3-body problem was a big deal back then, since people were seeking an ‘analytical solution’, which might include any sort of power series.

        Let me quote a bit from here:

        • June Barrow-Green, The dramatic episode of Sundman, Historia Mathematica 37 (2010), pp. 164–203.

        Unlike the Acta papers, this has been snatched from the jaws of so-called “publishers”, so you can actually read it. Let me quote a bit:

        Abstract In 1912 the Finnish mathematical astronomer Karl Sundman published a remarkable solution to the three-body problem, of a type which mathematicians such as Poincaré had believed impossible to achieve. Although lauded at the time, the result dimmed from view as the twentieth century progressed and its significance was often overlooked. This article traces Sundman’s career and the path to his achievement, bringing to light the involvement of Ernst Lindelöf and Gösta Mittag-Leffler in Sundman’s research and professional development, and including an examination of the reception over time of Sundman’s result. A broader perspective on Sundman’s research is provided by short discussions of two of Sundman’s later papers: his contribution to Klein’s Encyklopädie and his design for a calculating machine for astronomy.

        Sounds interesting, eh?

        In contrast to his predecessors, Sundman attacked the problem by considering triple as well as binary collisions, and fundamental to his solution was the introduction of an auxiliary variable by which the coordinates and the time were generalised to complex values.

        Wow! Greg will remember me muttering in email about how I wanted to treat the coordinates as complex, to unify the hyperbolic and elliptical cases. I imagine Sundman was doing it mainly to ponder radii of convergence and the like.

        In the first paper Sundman focused on the case of triple collision for, as he himself observed, it had not been the subject of any publication [Sundman, 1907, I]. Seeking the conditions for such a collision, he found what he described as a “remarkable” theorem, namely, that a triple collision can occur only if all three integrals of angular momentum are simultaneously zero [p.17]. Following on from this, it was straightforward to show that all three bodies remain constantly in the same plane, defined by their common centre of gravity, and, with a little more work, to show that as they approach collision, they asymptotically approach one of the central configurations (or so-called Lagrangian solutions), which is either an equilateral triangle or collinear configuration.

        By considering the case in which at least one of the integrals remains non-zero and the initial conditions are known, he further showed that there is a positive limit below which the greatest of the three mutual distances between the bodies cannot go (in confirmation of conjectures made by Weierstrass in 1889). Additionally, he proved that if only two of the bodies collide then the angular velocity of the motion of one of the bodies around the other is finite, thereby filling in a gap in one of Bisconcini’s proofs.

        However, in reaching these results, Sundman had made certain assumptions with regard to binary collisions—in particular, that it was possible to analytically define an extension of the motion after collision—but he announced that these assumptions would be justified in a later paper. He concluded by remarking that the methods he had used could, with minor modification, be extended to the n-body problem and that he would soon return to this issue.

        In [1909] Sundman delivered on his promise to deal with binary collisions. As he announced on the opening page, the aim of the paper was to prove the following theorem:

        If the constants of angular momentum in the motion of three bodies with respect to their common centre of gravity are not all zero, one can find a variable \tau such that the coordinates of the bodies, their mutual distances and the time can be expanded in convergent series in powers of \tau which represent the motion for all real values of the time, whatever collisions occur between the bodies.

        Taking the case where at least one of the integrals of angular momentum remains nonzero, he expressed the nine Cartesian coordinates together with the time t as holomorphic functions of a single variable \tau, using a similar transformation to that proposed by Poincaré in 1886. He then proved that these 10 functions, starting with real initial values, are holomorphic within the unit circle

        |\tau| = 1

        of the complex plane, and, since

        \tau = \pm 1

        corresponds to

        t = \pm \infty

        the 10 corresponding functions can be represented
        by uniformly convergent series for all values of

        |t| < \infty

        Although no physical meaning can be ascribed to the analytic continuation in the complex domain, it is, as Siegel [1941, 432] observed, “important for the mathematical investigation of the differential equations” involved in the problem.

        I definitely want to read this. I’d heard of Sundman’s work on the 3-body problem, but I never looked into it.

      • Greg Egan says:

        There’s an interesting overview of the history and “mathematical philosophy” of the n-body problem in:

        • Florin Diacu, The solution of the n-body problem, The Mathematical Intelligencer 18 (1996), 66–70.

        Apparently Sundman’s series for the 3-body problem and Wang’s series for the n-body problem, although convergent, converge so slowly as to make them useless for all practical purposes!

      • John Baez says:

        I’d heard Sundman’s series weren’t practical, but I hadn’t known just how bad the situation was until reading June Barrow-Green’s paper:

        Needless to say, it was not long before someone decided to find out just how “valueless,” from the point of view of numerical computation, Sundman’s series actually were. In 1930 David Beloriszky calculated that if Sundman’s series were going to be used for astronomical observations then the computations would involve at least 10^{8,000,000} terms!

        Still, I’m interested in how he introduced this new time parameter and analytically continued it to complex values.

  2. Josef Chlachula says:

    Very nice!
    Link in the article (Jesper Göransson, Symmetries of the Kepler problem, 8 March 2015) should be

  3. Felix Huber says:

    To first order, this transformation to a harmonic oscillator also works for orbits around black holes. In 2d/4d, it’s called the Levi-Civita/Kustaanheimo transformation, respectively, and one can use quaternions to deal with the rotation in the latter case!

    • John Baez says:

      Excellent, thanks for these references! I have a webpage on this stuff and I’m in the process of improving it.

      I’m not sure what ‘to first order’ means here. The harmonic oscillator is periodic; it doesn’t precess. So I don’t see how you can reduce motion around a black hole to a harmonic oscillator, unless the precession counts as a second-order effect. But I’ll read those papers.

      I’d known about squaring as a transformation of the complex plane that maps an ellipse centered at the origin to an ellipse with focus at the origin. Arnol’d uses this method, together with the same transformation of the time variable used here, to prove that a planet orbiting in an inverse-square force law follows an ellipse with focus at the origin:

      • Vladimir I. Arnol’d, Huygens and Barrow, Newton and Hooke: Pioneers in Mathematical Analysis and Catastrophe Theory from Evolvents to Quasicrystals, Appendix 1: Proof that orbits are elliptic, trans. Eric J. F. Primrose, Birkhauser, 1990.

      But I hadn’t known this method goes back to Levi-Civita!

      I’d thought about generalizing this to 4 dimensions using the quaternions, since squaring again doubles angles while squaring the radius. So I’m mildly disappointed, but not surprised, that someone else has already studied this.

      • Felix Huber says:

        Sorry, I was imprecise: to leading order in the relativistic perturbation.

        An orbit in a Schwarzschild metric won’t necessarily map to a closed orbit, but it can be done using the right time transformation and dropping higher-order terms. Its Hamiltonian will be the one of a spherical harmonic oscillator (whose solutions we know), with reduced angular momentum. Mapping back to the original space, the position angle is multiplied by a constant factor, which represents the relativistic precession, while the time gets some additional nontrivial terms. (eq. 22)

        I was working on this during my MSc project, to see what the leading-order relativistic effects of black hole charge and spin are. It turns out that charge has the same effect as a reduced mass, while spin leads to elliptic integrals in the solutions of the equations of motion.

        Interestingly, and as an argument of sorts that the Levi-Civita transformation makes sense: Bertrand’s theorem says that the only central force potentials that admit closed orbits are those of the Kepler problem and the harmonic oscillator. My guess is that there might be whole families of deformed potentials corresponding to each other with respect to the LC transform.

  4. Blake Stacey says:

    The link on your “Mysteries of the gravitational 2-body problem” to Lyman and Aravind’s 1993 paper “Deducing the Lenz vector of the hydrogen atom from supersymmetry” doesn’t work for me, but this snapshot on the Internet Archive does. Perhaps NOAA recently reorganized their website.

  5. Martin Lo says:


    This fictitious time transformation is also known as Sundman’s Transform; it regularizes the orbits during close approach to the central body.

    I’m surprised to hear Moser called a “physicist”. I think he would consider himself a mathematician as most of us do.

    Great article as usual!

    This area is still under research by celestial mechanics and astrodynamics people. A. Celletti, a former Moser student, has published several recent papers and books on regularization, extending this work to the 3-body problem.

    By the way, what Moser actually showed is even more beautiful than what you discussed. Moser showed that Keplerian motion can be transformed into geodesic flows on the 3 sphere using the velocity vectors of the orbits, i.e. the hodographs. This in turn can be transformed into the harmonic oscillator on the unit tangent bundle of the 3 sphere via stereographic projection.

    It would be interesting to hear what you have to say about hodographs.


    • John Baez says:

      Okay, I’ll change Moser to a mathematician.

      By the way, what Moser actually showed is even more beautiful than what you discussed. Moser showed that Keplerian motion can be transformed into geodesic flows on the 3 sphere using the velocity vectors of the orbits, i.e. the hodographs. This in turn can be transformed into the harmonic oscillator on the unit tangent bundle of the 3 sphere via stereographic projection.

      There are a couple of different things one can do. Does this work of Moser involve reparametrizing time, as I’ve done here, or not?

      I know a reference that credits Moser for doing this stuff using a reparametrization of time… and what I’m presenting here is basically a nicer version of that, since it avoids the rather clunky use of stereographic projection, but has the same ultimate effect: it describes the elliptical solutions of the Kepler problem in terms of geodesic flow on the 3-sphere.

      But there is also another trick that does not involve reparametrizing time, yet still describes the elliptical solutions of the Kepler problem as a flow on the (co)tangent bundle of the 3-sphere! This trick is sometimes called the Delaunay Hamiltonian.

      It’s all in here:

      • Gert Heckman and Tim de Laat, On the regularization of the Kepler problem, J. Symplectic Geometry 10 (2012), 463–474.

      I have not managed to obtain the original paper by Moser:

      • J. Moser: Regularization of Kepler’s problem and the averaging method on a manifold, Comm. Pure and Appl. Math. 23 (1970), 609–636.

      so my understanding of his work is based on the summary in that other paper.

  6. Bill says:

    Very interesting; I’d have never considered the 4D shadow effect. So I suppose that the perihelion advance of general relativity would be represented by the shadow of an ellipsoid?

    • John Baez says:

      I don’t know how this works in general relativity—though see the comments here by Felix Huber, who seems to understand it. I don’t see how the shadow of an ellipsoid would give precessing orbits.

  7. Felix Huber says:

    This paper from Jörg Waldvogel might also be interesting:

    Waldvogel – Quaternions and the Perturbed Kepler Problem

    The fibration of R^4 into valid orbits is described on p. 9.

    • John Baez says:

      Thanks! I’ll keep working to fit together all the puzzle pieces. I may move slowly, since I’ve been neglecting more urgent work for the last week….

  8. Janek says:

    So, does this mean that dark energy calculations in 3D are overestimated?

  9. Stephan says:

    With relativity, time goes faster when the distance to a (central) mass increases; here one looks at time slowing down with increasing distance to the central mass. Kind of odd that way.

    • John Baez says:

      Yes, it’s annoying!

      When I first started thinking about this, I was hoping that the reparametrization here:

      \displaystyle{ \frac{ds}{dt} = \frac{1}{r} }

      could be some kind of limiting case of the time dilation in general relativity:

      \displaystyle{ \frac{ds}{dt} = \sqrt{1 - \frac{R}{r}} }

      where R is the Schwarzschild radius, and I’m working in units where the speed of light is 1. But I haven’t been able to get anything along these lines to work.

      • Felix Huber says:

        Working in an isotropic metric (that is, the whole spatial part has the same entries in the metric, not like in the usual Schwarzschild parametrization) and only keeping first orders up to \frac{1}{r^2},

        \displaystyle{ \frac{ds}{dt} = \frac{1}{r} - \frac{2M}{r^2} }

        should do the trick. C.f. eq 12 in, and for the full transformation, follow eq. 2 – 13.

        For the shadow thing, I haven’t thought about it properly, but guess that it would be the shadow of a sphere with all angles in the plane of projection multiplied by a constant factor of

        \displaystyle{ \left(\sqrt{1 - 6 \left(\frac{2M}{P_{\phi}}\right)^2 } \right)^{-1} }

      • John Baez says:

        Cool! I tried to fix the LaTeX in your equations, but I didn’t know what you meant by \P_\phi, which I think created an error message, so I turned it into P_\phi.

        What is this thing?

  10. domenico says:

    I quickly read Greg Egan’s description of the Fock momentum-space projection, and (if I understand correctly) the Fourier transform of the Schrödinger equation simplifies the Laplace operator, yielding an equation to which one can apply coordinate transformations (similar to the classical case).
    It seems that the momentum-space Schrödinger equation is easier to compare with the classical transformation (even if it seems impossible to deduce the atomic spectrum from the harmonic oscillator spectrum), and it should be possible to apply the Fourier transformation to the relativistic quantum equations (Dirac, Klein-Gordon) to simplify them as well.
    I am thinking that the trajectories of masses in a gravitational field, or of charges in an electric field, may have the same quantization (discrete quantum gravitational waves instead of quantum photons), and that the Göransson transformation can be applied to an atom.

  11. Ave Sharia says:

    I literally understand none of this, but somebody on Reddit pointed out that the GIF at the top of this article is misleading, so I tried to fix it.

    • John Baez says:

      Is your gif mathematically exact? Does your planet obey Kepler’s 2nd law, sweeping out equal areas in equal times?

      If so, can I include it in the article?

      The original gif is not misleading if you also read the explanation. The key to this problem is

      […] a new sort of time, which flows at a rate inversely proportional to the distance between the planet and the sun.

      The movie uses this other sort of time. Relative to this other time, the planet is moving at constant speed around a circle in 4 dimensions. But in ordinary time, its shadow in 3 dimensions moves faster when it’s closer to the sun.

      Your gif, which is also very nice, shows the orbit in ordinary time. But the other sort of time is crucial to understanding this problem. I wrote:

      This new kind of time ticks more slowly as you get farther from the sun. So, using this new time speeds up the planet’s motion when it’s far from the sun. If that seems backwards, just think about it. For a planet very far from the sun, one day of this new time could equal a week of ordinary time. So, measured using this new time, a planet far from the sun might travel in one day what would normally take a week.

      This compensates for the planet’s ordinary tendency to move slower when it’s far from the sun. In fact, with this new kind of time, a planet moves just as fast when it’s farthest from the sun as when it’s closest.

      Amazing stuff happens with this new notion of time! To see this, first […]

      • Ave Sharia says:

        I missed your comment originally (my post was sort of hit-and-run, sorry!)

        The GIF almost certainly doesn’t obey Kepler’s 2nd law; I just decreased the frame duration linearly as the satellite approaches the closest point in its orbit. That said, you’re more than welcome to it; I don’t claim any rights to something I basically ripped off from your post!

  12. John Baez says:

    Roice Nelson wondered what it would be like to draw all the orbits that arise from a Hopf fibration of the 3-sphere:

    I played around with the Hopf fibration question a little last night, and drew a picture of an orthographically projected Hopf fibration, here:

    This picture shows circles on 5 nested tori in the fibration. Some things I noticed:

    – The nested tori of the Hopf fibration project to concentric cylinders with these projection parameters, with the elliptical orbits being slices of the cylinders. For each torus, one defining circle projects to a circle and the other projects to a line. If I were to first rotate the fibration before projecting, things would look different.

    – The image makes clear how all the projected ellipses have the same major axis, touching a sphere at two antipodal points. Elliptical orbits with the same major axis have the same period, which makes sense since all these orbits have the same energy. (This is true for orbits not in the fibration too, of course.)

    – It is interesting that the orthographic projection can unlink circles. Every one of the orbits here is linked in the fibration before the projection.

    – The focus of every orbit is different. Perhaps another picture worth making would be to move all these orbits to be focus-centered, rather than centered on the ellipses. My guess is that would be a messy picture, though.

  13. John Baez says:

    You can read lots of questions and discussion about this post on Google+.

    This post created a nice spike of hits on the blog here. Usually we get 80-150 hits per hour, but it shot up a lot higher:

    The peak was 3,526 hits per hour. Here’s the daily total:

    A lot of the hits came from a link on Hacker News.

  14. Greg Egan says:

    Here’s a nice result I just figured out, which links this approach to the one that Pauli used for the hydrogen atom.

    One famous fact about 4-dimensional rotations is that they can be thought of as pairs of 3-dimensional rotations. This shows up in the Lie algebras, but the easiest way to see it in finite rotations is to think of any 4-dimensional rotation as simultaneous multiplication on the left with one unit quaternion and on the right with another, i.e.

    T(w) = q_L w q_R^{*}

    [The notation works out more smoothly if we describe the right-hand quaternion here as the conjugate of an arbitrary unit quaternion q_R.]
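    As a quick sanity check (my sketch, not part of the original comment; it assumes the standard Hamilton product convention and uses arbitrary random sample values), one can verify numerically that T(w) = q_L w q_R^* preserves the norm of w, so it really is a 4-dimensional rotation, and that the left and right actions commute:

```python
# Verify that w -> qL * w * conj(qR) preserves the 4D norm, and that
# left- and right-multiplication commute (which is just associativity).
import math, random

def qmul(a, b):
    # Hamilton product of quaternions (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def conj(q):
    return (q[0], -q[1], -q[2], -q[3])

def norm(q):
    return math.sqrt(sum(c*c for c in q))

def unit():
    # a random unit quaternion
    q = tuple(random.gauss(0, 1) for _ in range(4))
    n = norm(q)
    return tuple(c/n for c in q)

random.seed(1)
qL, qR = unit(), unit()
w = tuple(random.gauss(0, 1) for _ in range(4))

T = qmul(qmul(qL, w), conj(qR))
assert abs(norm(T) - norm(w)) < 1e-12          # norm preserved: a 4D rotation

left_then_right = qmul(qmul(qL, w), conj(qR))  # apply left action, then right
right_then_left = qmul(qL, qmul(w, conj(qR)))  # apply right action, then left
assert all(abs(x - y) < 1e-12 for x, y in zip(left_then_right, right_then_left))
```

    The commuting of the two actions is exactly the associativity of quaternion multiplication.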

    Multiplication on the left and on the right commute, so in a 3-dimensional system with SO(4) symmetry we can sometimes find two vectors A and B such that the action of left-multiplication performs a 3-dimensional rotation on the first vector, leaving the second unchanged, and vice versa for right-multiplication, where the 3-dimensional rotations are:

    A \to q_L A q_L^{*}
    B \to q_R B q_R^{*}

    With the SO(4) action on orbits we get from thinking of each elliptical orbit as a great circle on a 3-sphere, two such vectors are:

    A = L-M
    B = L+M

    where L and M are suitably scaled versions of the angular momentum vector and the Laplace-Runge-Lenz vector. The easiest way to describe what I mean by “suitably scaled” is to say that this works out nicely if we make the length of L the same as the minor semi-axis of the ellipse, and the length of M the distance from the centre of the ellipse to the focus.

    So L is what you get if you take a vector from the centre of the ellipse to a point on the minor axis and flip it 90 degrees to point out of the plane of the orbit, while M simply points from the centre of the ellipse to one focus. There are some sign choices here that complicate things, but they can always be made in such a way that A is rotated solely by the left quaternion in the SO(4) rotation and B is rotated solely by the right quaternion, with:

    A \to q_L A q_L^{*}
    B \to q_R B q_R^{*}

    Pauli used the quantum operator versions of A and B as two independent three-dimensional spins, whose quantum numbers can be used to find those of the hydrogen atom.

    • John Baez says:

      Nice! I’d thought about treating the 4-vector

      \mathbf{x} = (t-s, x, y, z)

      as a quaternion depending on the Moser-Göransson time s, and I’d known how A = L - M and B= L + M have vanishing Poisson brackets and generate two commuting actions of \mathrm{SU}(2), but I hadn’t thought of treating these two \mathrm{SU}(2) actions as left and right multiplications by unit quaternions.

      So here’s a challenge: can we do a change of variables to rewrite the elliptical-orbit Kepler problem in a neat way using quaternions… using ordinary time? Perhaps for arbitrary negative energies?

      In Moser-Göransson time for a fixed negative energy, we can treat \mathbf{x} as a quaternion that moves around a 3-sphere of fixed radius in a great circle at unit speed. Left and right multiplications by unit quaternions act as symmetries of this problem, and the corresponding conserved quantities must be A and B, at least up to some fudge factor.

      So, how can we express this nicely for all negative energies at once? I guess Göransson actually does this, using 3-spheres of different radii for different energies. And then how do we do it for ordinary time?

      I now think that in the Hamiltonian formalism, there should be two functions on the same phase space: the ‘ordinary Hamiltonian’ that generates evolution in ordinary time, and the ‘Moser-Göransson’ Hamiltonian that generates evolution in Moser-Göransson time. I guess one is just the other times r. Ideally they’d both have nice quaternion formulas.

      (Then, when we want to bring the hyperbolic and parabolic solutions of the Kepler problem into the same picture, I think we should use complexified quaternions. This should do it more or less automatically.)

    • John Baez says:

      John wrote:

      I now think that in the Hamiltonian formalism, there should be two functions on the same phase space: the ‘ordinary Hamiltonian’ that generates evolution in ordinary time, and the ‘Moser-Göransson’ Hamiltonian that generates evolution in Moser-Göransson time. I guess one is just the other times r.

      Actually I don’t think it can be that simple, not unless we pull some sort of trick.

      It seems that every function F on phase space that’s conserved for ordinary time evolution is conserved for Moser-Göransson time evolution. If ordinary time evolution is generated by some Hamiltonian H and Moser-Göransson time evolution is generated by some Hamiltonian H' we then have

      \{H, F\} = \{H' , F\} = 0

      But if

      H' = r H

      the usual properties of Poisson brackets tell us

      \{H', F\} = \{r H, F\} = r \{H , F\} + H \{r, F\}

      and since \{H, F\} = \{H', F\} = 0, this gives

      0 = H \{r, F \}

      so \{r,F\} = 0 except in places where H = 0. This is violated when we take F to be the conserved quantity H, since

      \{r, H\} = \dot{r} \ne 0

      Various tricks might get around this problem. Restricting our phase space so it consists only of solutions where H is a fixed constant, as Moser and Göransson do, might be enough to do the job!
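      A quick numerical illustration of the bracket identity used above (my sketch, not from the post; it uses a 1D toy phase space, a Kepler-like Hamiltonian, a hypothetical test observable F, and central differences):

```python
# Check the Leibniz rule for Poisson brackets, {r H, F} = r {H, F} + H {r, F},
# numerically at a sample point of a 1D phase space (q, p).
h = 1e-6

def pb(f, g, q, p):
    # Poisson bracket {f, g} = df/dq dg/dp - df/dp dg/dq, via central differences
    dfq = (f(q + h, p) - f(q - h, p)) / (2 * h)
    dfp = (f(q, p + h) - f(q, p - h)) / (2 * h)
    dgq = (g(q + h, p) - g(q - h, p)) / (2 * h)
    dgp = (g(q, p + h) - g(q, p - h)) / (2 * h)
    return dfq * dgp - dfp * dgq

H = lambda q, p: p * p / 2 - 1 / q   # 1D Kepler-like Hamiltonian
r = lambda q, p: q                   # "distance"
F = lambda q, p: q * p               # an arbitrary test observable
rH = lambda q, p: r(q, p) * H(q, p)

q0, p0 = 1.3, 0.4
lhs = pb(rH, F, q0, p0)
rhs = r(q0, p0) * pb(H, F, q0, p0) + H(q0, p0) * pb(r, F, q0, p0)
assert abs(lhs - rhs) < 1e-6
```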

      • Felix Huber says:

        If H' = g(r) (H - E_0), where g(r) is the Poincaré (or Sundman) time transform dt/ds = g(r) = r, and E_0 the value of the original Hamiltonian H, then H' is a valid Hamiltonian.

      • John Baez says:

        Sorry, I don’t know what the mathematical content of saying “H' is a valid Hamiltonian” is.

        • Felix Huber says:

          It will fulfill Hamilton’s equations with respect to the new scaled time variable. That is, the scaled time variable s is related to the original one t by dt = g(p, q) ds. Do the Poincaré time transformation from H(p, q, t) to \Gamma(p, q, s) = g(p, q) (H(p, q, t) - E_0). If H fulfills Hamilton’s equations with respect to the time t, so will the new Hamiltonian \Gamma with respect to the scaled time s.
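          Here is a small numerical sketch of that claim (my illustration, not from the comment; a 1D toy phase space with sample values): on the energy surface H = E_0, the gradient of \Gamma = g(H - E_0) is g times the gradient of H, so Hamilton’s equations in the scaled time are the old ones rescaled by g, consistent with dt = g ds.

```python
# At a point with H = E0, check that dGamma/dp = g * dH/dp and
# dGamma/dq = g * dH/dq, where Gamma = g(q) * (H - E0) and g(q) = q.
h = 1e-6

H = lambda q, p: p * p / 2 - 1 / q   # 1D Kepler-like Hamiltonian
g = lambda q: q                      # Poincare/Sundman factor, dt/ds = r

q0, p0 = 1.3, 0.4
E0 = H(q0, p0)                       # sit exactly on the surface H = E0
Gamma = lambda q, p: g(q) * (H(q, p) - E0)

def d_dq(f, q, p): return (f(q + h, p) - f(q - h, p)) / (2 * h)
def d_dp(f, q, p): return (f(q, p + h) - f(q, p - h)) / (2 * h)

# On H = E0 the extra term (H - E0) dg/dq vanishes, so both gradients
# are just g times the old ones:
assert abs(d_dp(Gamma, q0, p0) - g(q0) * d_dp(H, q0, p0)) < 1e-6
assert abs(d_dq(Gamma, q0, p0) - g(q0) * d_dq(H, q0, p0)) < 1e-6
```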

          Quote: “I now think that in the Hamiltonian formalism, there should be two functions on the same phase space: the ‘ordinary Hamiltonian’ that generates evolution in ordinary time, and the ‘Moser-Göransson’ Hamiltonian that generates evolution in Moser-Göransson time. I guess one is just the other times r. ”

          So \Gamma would be the Hamiltonian that generates the time evolution in the scaled/’Poincaré’ time (or what you called ‘Moser-Göransson’ time). The whole procedure is like some kind of canonical transform (but for the time) which preserves the form of Hamiltons equations.

    • Greg Egan says:

      John wrote:

      So here’s a challenge: can we do a change of variables to rewrite the elliptical-orbit Kepler problem in a neat way using quaternions… using ordinary time? Perhaps for arbitrary negative energies?

      If we have a particle of unit mass with negative energy E orbiting a centre of attraction at the origin of \mathbb{R}^3, and we want to map it onto a 3-sphere centred at the origin of \mathbb{R}^4 whose radius is a (the semi-major axis of the particle’s elliptical orbit), we can do so without introducing a new time variable, as follows.

      The vector from the centre of attraction to the centre of the ellipse is:

      M = (\frac{v^2}{4E}+\frac{1}{2}) r - \frac{v \cdot r}{2E} v

      We map this to \mathbb{R}^4 as:

      Q(r,v) = (\frac{v \cdot r}{\sqrt{-2 E}}, r-M)

      It’s not hard to check that |Q|^2 = \frac{k^2}{4 E^2} = a^2, where k is the force constant.
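      And indeed it is easy to confirm numerically. The sketch below (mine, not Greg’s; the state (r, v) is an arbitrary bound sample, with unit mass and force constant k = 1) builds M and Q from the formulas above and checks |Q| = a:

```python
# Check that |Q(r, v)|^2 = k^2 / (4 E^2) = a^2 for a sample bound state.
import math

k = 1.0
r = (1.0, 0.0, 0.0)           # position (an arbitrary sample state)
v = (0.3, 0.9, 0.0)           # velocity, chosen so the energy E is negative

def dot(a, b): return sum(x * y for x, y in zip(a, b))

E = dot(v, v) / 2 - k / math.sqrt(dot(r, r))
assert E < 0                  # a bound (elliptical) orbit
a = k / (-2 * E)              # semi-major axis

# M: vector from the centre of attraction to the centre of the ellipse
M = tuple((dot(v, v) / (4 * E) + 0.5) * ri - dot(v, r) / (2 * E) * vi
          for ri, vi in zip(r, v))
Q = (dot(v, r) / math.sqrt(-2 * E),) + tuple(ri - Mi for ri, Mi in zip(r, M))
assert abs(math.sqrt(dot(Q, Q)) - a) < 1e-12
```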

      The rate of change of this with respect to ordinary Newtonian time is:

      \dot{Q}(r,v) = (\frac{v^2+2E}{2 \sqrt{-2 E}}, v)

      and the second rate of change is:

      \ddot{Q}(r,v) = -\frac{k}{r^3} (Q(r,v)+M)

      I’m not sure how this can be made nice without changing the time coordinate.

    • Greg Egan says:

      We can recover M from the new variables as:

      M = -\frac{Q \dot{Q}^* + \dot{Q}^* Q}{2 |\dot{Q}|}

      and then we get the inverse map as:

      r = \frac{1}{2}(Q-Q^*) + M

      v = \frac{1}{2}(\dot{Q}-\dot{Q}^*)

      But I still can’t find a nice form for the Hamiltonian in the quaternionic variables.
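      For what it’s worth, the recovery formulas can be checked numerically. The sketch below (my illustration, not Greg’s; an arbitrary sample state, unit mass, k = 1, Hamilton product convention) confirms that the scalar part of the expression for M vanishes and that both M and r are recovered exactly:

```python
# Recover M and the position r from the quaternionic variables Q and Qdot,
# in ordinary Newtonian time.
import math

def qmul(a, b):
    # Hamilton product of quaternions (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(q): return (q[0], -q[1], -q[2], -q[3])
def dot(a, b): return sum(x*y for x, y in zip(a, b))

r, v = (1.0, 0.2, 0.0), (0.1, 0.8, 0.0)       # a sample bound state
E = dot(v, v)/2 - 1/math.sqrt(dot(r, r))
assert E < 0
M = tuple((dot(v, v)/(4*E) + 0.5)*ri - dot(v, r)/(2*E)*vi
          for ri, vi in zip(r, v))
Q = (dot(v, r)/math.sqrt(-2*E),) + tuple(ri - Mi for ri, Mi in zip(r, M))
Qdot = ((dot(v, v) + 2*E)/(2*math.sqrt(-2*E)),) + v

# M = -(Q Qdot* + Qdot* Q) / (2 |Qdot|)
num = tuple(x + y for x, y in zip(qmul(Q, qconj(Qdot)), qmul(qconj(Qdot), Q)))
M_rec = tuple(-x/(2*math.sqrt(dot(Qdot, Qdot))) for x in num)
assert abs(M_rec[0]) < 1e-12                  # the scalar part vanishes
assert all(abs(x - y) < 1e-9 for x, y in zip(M_rec[1:], M))

# r = (Q - Q*)/2 + M, as an imaginary quaternion
r_rec = tuple((qi - ci)/2 + Mi
              for qi, ci, Mi in zip(Q, qconj(Q), (0.0,) + M))
assert all(abs(x - y) < 1e-12 for x, y in zip(r_rec[1:], r))
```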

    • Greg Egan says:

      Along with:

      \displaystyle{ M = -\frac{Q \dot{Q}^* + \dot{Q}^* Q}{2 |\dot{Q}|} }

      we can also recover the angular momentum vector, L, as:

      \displaystyle{\frac{L}{\sqrt{-2 E}}=\frac{\dot{Q}^* Q - Q \dot{Q}^*}{2 |\dot{Q}|} }

      where the left-hand side here is a vector whose length is equal to the minor semi-axis of the ellipse.

      And the associated vectors manifestly invariant under left- and right-multiplication, respectively, of Q by any fixed unit quaternion are:

      \displaystyle{ A = \frac{L}{\sqrt{-2 E}} - M = \frac{\dot{Q}^* Q}{|\dot{Q}|} }

      \displaystyle{B = \frac{L}{\sqrt{-2 E}} + M = -\frac{Q \dot{Q}^*}{|\dot{Q}|} }

      So that much works out nicely. But I don’t think the map to quaternionic coordinates is a canonical transformation if we stick to Newtonian time.

    • Greg Egan says:

      OK, if we switch to Moser-Göransson time, what changes? We still have:

      M = (\frac{v^2}{4E}+\frac{1}{2}) r - \frac{v \cdot r}{2E} v

      Q(r,v) = (\frac{v \cdot r}{\sqrt{-2 E}}, r-M)

      |Q(r,v)| = a = \frac{k}{-2E}

      But now, using a prime to denote the derivative with respect to the new time, we have:

      P(r,v) = Q'(r,v) = \frac{r}{\sqrt{-2 E}}(\frac{v^2+2E}{2 \sqrt{-2 E}}, v)

      with constant magnitude:

      |P(r,v)| = a = \frac{k}{-2E}

      For the second derivative we have, as we would hope:

      Q''(r,v) = -Q(r,v)

      and the Hamiltonian for Moser-Göransson time is simply:

      H(P,Q) = \frac{1}{2}(P^* P + Q^* Q)
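      A numerical check of the constancy claims (my sketch, not Greg’s; unit mass, k = 1, an arbitrary sample orbit, and a plain RK4 integrator in ordinary time): along a genuine Kepler trajectory, Q and P = dQ/ds computed from the instantaneous state keep constant magnitude a, so H(P, Q) stays at a^2.

```python
# Integrate the Kepler problem in ordinary time and check |Q| = |P| = a
# throughout, which is the "constant speed on a circle in 4D" statement.
import math

k = 1.0

def dot(a, b): return sum(x * y for x, y in zip(a, b))

def accel(r):
    d = math.sqrt(dot(r, r))
    return tuple(-k * ri / d**3 for ri in r)

def rk4(r, v, dt):
    # one 4th-order Runge-Kutta step for r' = v, v' = accel(r)
    k1r, k1v = v, accel(r)
    r2 = tuple(ri + dt/2*ki for ri, ki in zip(r, k1r))
    k2r, k2v = tuple(vi + dt/2*ki for vi, ki in zip(v, k1v)), accel(r2)
    r3 = tuple(ri + dt/2*ki for ri, ki in zip(r, k2r))
    k3r, k3v = tuple(vi + dt/2*ki for vi, ki in zip(v, k2v)), accel(r3)
    r4 = tuple(ri + dt*ki for ri, ki in zip(r, k3r))
    k4r, k4v = tuple(vi + dt*ki for vi, ki in zip(v, k3v)), accel(r4)
    r_new = tuple(ri + dt/6*(a + 2*b + 2*c + d)
                  for ri, a, b, c, d in zip(r, k1r, k2r, k3r, k4r))
    v_new = tuple(vi + dt/6*(a + 2*b + 2*c + d)
                  for vi, a, b, c, d in zip(v, k1v, k2v, k3v, k4v))
    return r_new, v_new

def QP(r, v):
    # Q and P = dQ/ds as functions of the instantaneous state (r, v)
    rr = math.sqrt(dot(r, r))
    E = dot(v, v)/2 - k/rr
    s = math.sqrt(-2*E)
    M = tuple((dot(v, v)/(4*E) + 0.5)*ri - dot(v, r)/(2*E)*vi
              for ri, vi in zip(r, v))
    Q = (dot(v, r)/s,) + tuple(ri - Mi for ri, Mi in zip(r, M))
    P = tuple(rr/s * c for c in ((dot(v, v) + 2*E)/(2*s),) + v)
    return Q, P

r, v = (1.0, 0.0, 0.0), (0.0, 0.8, 0.0)   # E = -0.68, an elliptical orbit
a = k / (2 * 0.68)                        # semi-major axis
for _ in range(5000):                     # a couple of full orbital periods
    Q, P = QP(r, v)
    assert abs(math.sqrt(dot(Q, Q)) - a) < 1e-6
    assert abs(math.sqrt(dot(P, P)) - a) < 1e-6
    r, v = rk4(r, v, 0.002)
```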

      If we look at the invariants, they have the same form, but simplify slightly due to the fixed magnitude |P| = a:

      M = -\frac{Q P^* + P^* Q}{2 a}

      \frac{L}{\sqrt{-2 E}}=\frac{P^* Q - Q P^*}{2 a}

      A = \frac{L}{\sqrt{-2 E}} - M = \frac{P^* Q}{a}

      B = \frac{L}{\sqrt{-2 E}} + M = -\frac{Q P^*}{a}
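      These invariants can also be checked numerically (my sketch, not from the comment; unit mass, k = 1, Hamilton product convention, an arbitrary sample orbit). Using the exact evolution Q(s) = Q_0 \cos s + P_0 \sin s that follows from Q'' = -Q, the products P^* Q / a and -Q P^* / a come out constant in s and purely imaginary, as 3-vectors should:

```python
# Check that A = P* Q / a and B = -Q P* / a are conserved along the flow
# in Moser-Goransson time, and have vanishing scalar (real) part.
import math

def qmul(a, b):
    # Hamilton product of quaternions (w, x, y, z)
    aw, ax, ay, az = a
    bw, bx, by, bz = b
    return (aw*bw - ax*bx - ay*by - az*bz,
            aw*bx + ax*bw + ay*bz - az*by,
            aw*by - ax*bz + ay*bw + az*bx,
            aw*bz + ax*by - ay*bx + az*bw)

def qconj(q): return (q[0], -q[1], -q[2], -q[3])
def dot(a, b): return sum(x*y for x, y in zip(a, b))

# Build Q0 and P0 from a sample bound state via the formulas above
r, v = (1.0, 0.0, 0.0), (0.0, 0.8, 0.0)
E = dot(v, v)/2 - 1/math.sqrt(dot(r, r))      # = -0.68, a bound orbit
root = math.sqrt(-2*E)
a = 1/(-2*E)                                  # semi-major axis
M = tuple((dot(v, v)/(4*E) + 0.5)*ri - dot(v, r)/(2*E)*vi
          for ri, vi in zip(r, v))
Q0 = (dot(v, r)/root,) + tuple(ri - Mi for ri, Mi in zip(r, M))
P0 = tuple(math.sqrt(dot(r, r))/root * c
           for c in ((dot(v, v) + 2*E)/(2*root),) + v)

def AB(s):
    # exact evolution in Moser-Goransson time, since Q'' = -Q
    c, sn = math.cos(s), math.sin(s)
    Q = tuple(q*c + p*sn for q, p in zip(Q0, P0))
    P = tuple(p*c - q*sn for q, p in zip(Q0, P0))
    A = tuple(x/a for x in qmul(qconj(P), Q))
    B = tuple(-x/a for x in qmul(Q, qconj(P)))
    return A, B

A0, B0 = AB(0.0)
assert abs(A0[0]) < 1e-12 and abs(B0[0]) < 1e-12   # purely imaginary: 3-vectors
for s in (0.7, 1.9, 3.1, 5.5):
    A, B = AB(s)
    assert all(abs(x - y) < 1e-9 for x, y in zip(A, A0))   # conserved
    assert all(abs(x - y) < 1e-9 for x, y in zip(B, B0))
```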

      If we want to invert the map, we get a reasonably simple expression for the position vector:

      r = \frac{1}{2}(Q-Q^*) + M

      But to obtain the velocity v, I can’t see any easier way than to compute the scalar r as the magnitude of the position vector, and divide by that and a constant factor to extract v from the imaginary part of P.

  15. Emi says:

    Reblogged this on Pathological Handwaving and commented:

    Freaking Awesome!

  16. […] Planets move in 4 dimensions. One is sort of like time. Excellent physics, nicely explained by John Carlos Baez […]

  17. But back to inverse cube force laws! I know two more things about them. A while back I discussed how a particle in an inverse square force can be reinterpreted as a harmonic oscillator:

    Planets in the fourth dimension, Azimuth.

    There are many ways to think about this, and apparently the idea in some form goes all the way back to Newton! It involves a sneaky way to take a particle in a potential

    \displaystyle{ V(r) \propto r^{-1} }

    and think of it as moving around in the complex plane. Then if you square its position—thought of as a complex number—and cleverly reparametrize time, you get a particle moving in a potential

    \displaystyle{ V(r) \propto r^2 }

    This amazing trick can be generalized! A particle in a potential

    \displaystyle{ V(r) \propto r^p }

    can be transformed into a particle in a potential

    \displaystyle{ V(r) \propto r^q }


    whenever

    (p+2)(q+2) = 4
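    As a quick check of this relation (my illustration, not in the original): the dual exponent is q = 4/(p+2) - 2, and the map is an involution.

```python
# The power-law duality (p+2)(q+2) = 4: dual exponent and involution check.
def dual(p):
    return 4 / (p + 2) - 2

assert dual(-1) == 2.0                      # Kepler r^-1 <-> oscillator r^2
assert dual(2) == -1.0                      # and back again
assert abs(dual(dual(1.0)) - 1.0) < 1e-9    # an involution in general
```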

  18. […] “Bullets are magic.” ISS on the chopping block. Pluto video. Dark fire. 4D celestial mechanics. Quantum indeterminacy. Quantum hell. Beyond the End. Flaky (but […]

Wait, I do not understand. I thought an inverse square potential in dimension higher than 4 does not give closed (or even bounded) orbits; this can be shown by a simple Bertrand-style argument: circular orbits are not stable. I believe what you are saying for dim = 3, but for higher dimensions I do not understand.

    • John Baez says:

      The inverse square force law gives stable, closed, elliptical orbits in any dimension.

      You seem to be thinking about this other fact: if the force is proportional to r^p, the bound orbits will all be closed only in two special cases: the inverse square law (p = -2) and the harmonic oscillator (p = 1). This is Bertrand’s theorem.

      Furthermore, a potential obeying \nabla^2 \phi = - \rho will, for a density \rho concentrated in a small ball in n dimensions, give a force proportional to r^p where p = 1 - n. So we get an inverse square force law in 3 dimensions, an inverse cube force law in 4 dimensions, and so on.
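      This exponent count is easy to encode (my trivial sketch, not from the comment): Gauss’s law spreads the flux of a point source over a sphere whose area grows like r^{n-1}, so the force falls off like r^{1-n}.

```python
# Force-law exponent for a point source in n spatial dimensions: p = 1 - n.
def force_exponent(n):
    return 1 - n

assert force_exponent(3) == -2   # inverse square law in 3 dimensions
assert force_exponent(4) == -3   # inverse cube law in 4 dimensions
```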

      But my article was not about this stuff! It was about how a harmonic oscillator in 4 dimensions can simulate the motion in an inverse square force law in 3 dimensions if we change the time variable, change the center of the orbit, and project from a circular orbit in 4 dimensions down to an elliptic orbit in 3 dimensions!
