The Inverse Cube Force Law


Here you see three planets. The blue planet is orbiting the Sun in a realistic way: it’s going around an ellipse.

The other two are moving in and out just like the blue planet, so they all stay on the same circle. But they’re moving around this circle at different rates! The green planet is moving faster than the blue one: it completes 3 orbits each time the blue planet goes around once. The red planet isn’t going around at all: it only moves in and out.

What’s going on here?

In 1687, Isaac Newton published his Principia Mathematica. This book is famous, but in Propositions 43–45 of Book I he did something that people didn’t talk about much—until recently. He figured out what extra force, besides gravity, would make a planet move like one of these weird other planets. It turns out an extra force obeying an inverse cube law will do the job!

Let me make this more precise. We’re only interested in ‘central forces’ here. A central force is one that only pushes a particle towards or away from some chosen point, and only depends on the particle’s distance from that point. In Newton’s theory, gravity is a central force obeying an inverse square law:

F(r) = - \displaystyle{ \frac{a}{r^2} }

for some constant a. But he considered adding an extra central force obeying an inverse cube law:

F(r) = \displaystyle{ -\frac{a}{r^2} + \frac{b}{r^3} }

He showed that if you do this, for any motion of a particle in the force of gravity you can find a motion of a particle in gravity plus this extra force, where the distance r(t) is the same, but the angle \theta(t) is not.

In fact Newton did more. He showed that if we start with any central force, adding an inverse cube force has this effect.

There’s a very long page about this on Wikipedia:

Newton’s theorem of revolving orbits, Wikipedia.

I haven’t fully understood all of this, but it instantly makes me think of three other things I know about the inverse cube force law, which are probably related. So maybe you can help me figure out the relationship.

The first, and simplest, is this. Suppose we have a particle in a central force. It will move in a plane, so we can use polar coordinates r, \theta to describe its position. We can describe the force away from the origin as a function F(r). Then the radial part of the particle’s motion obeys this equation:

\displaystyle{ m \ddot r = F(r) + \frac{L^2}{mr^3} }

where L is the magnitude of the particle’s angular momentum.

So, angular momentum acts to provide a ‘fictitious force’ pushing the particle out, which one might call the centrifugal force. And this force obeys an inverse cube force law!

Furthermore, thanks to the formula above, it’s pretty obvious that if you change L but also add a precisely compensating inverse cube force, the value of \ddot r will be unchanged! So, we can set things up so that the particle’s radial motion will be unchanged. But its angular motion will be different, since it has a different angular momentum. This explains Newton’s observation.
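Here’s a quick numerical check of that observation—a little Python sketch, not anything from Newton, with made-up constants (a = 1, m = 1, angular momenta L1 and L2). It integrates one orbit in pure inverse square gravity, and a second orbit with less angular momentum plus a compensating inverse cube force of strength b = (L1² − L2²)/m, and confirms the two radial motions r(t) agree:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Made-up constants: inverse-square strength a = 1, mass m = 1.
a = 1.0
L1, L2 = 1.2, 0.6
b = L1**2 - L2**2   # compensating inverse-cube strength (m = 1)

def rhs(b_extra):
    """Planar motion in the central force F(r) = -a/r^2 + b_extra/r^3."""
    def f(t, s):
        x, y, vx, vy = s
        r = np.hypot(x, y)
        F = -a / r**2 + b_extra / r**3   # force per unit mass, radial
        return [vx, vy, F * x / r, F * y / r]
    return f

t_eval = np.linspace(0.0, 20.0, 200)
sol1 = solve_ivp(rhs(0.0), (0, 20), [1, 0, 0, L1], t_eval=t_eval,
                 rtol=1e-11, atol=1e-12)
sol2 = solve_ivp(rhs(b), (0, 20), [1, 0, 0, L2], t_eval=t_eval,
                 rtol=1e-11, atol=1e-12)
r1 = np.hypot(sol1.y[0], sol1.y[1])
r2 = np.hypot(sol2.y[0], sol2.y[1])

# The radial motions coincide, though the angular motions differ.
assert np.max(np.abs(r1 - r2)) < 1e-5
```

The two trajectories trace quite different curves in the plane, but at every moment they sit at the same distance from the origin.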

It’s often handy to write a central force in terms of a potential:

F(r) = -V'(r)

Then we can make up an extra potential responsible for the centrifugal force, and combine it with the actual potential V into a so-called effective potential:

\displaystyle{ U(r) = V(r) + \frac{L^2}{2mr^2} }

The particle’s radial motion then obeys a simple equation:

m\ddot{r} = - U'(r)

For a particle in gravity, where the force obeys an inverse square law and V is proportional to -1/r, the effective potential might look like this:

This is the graph of

\displaystyle{ U(r) = -\frac{4}{r} + \frac{1}{r^2} }

If you’re used to particles rolling around in potentials, you can easily see that a particle with not too much energy will move back and forth, never making it to r = 0 or r = \infty. This corresponds to an elliptical orbit. Give it more energy and the particle can escape to infinity, but it will never hit the origin. The repulsive ‘centrifugal force’ always overwhelms the attraction of gravity near the origin, at least if the angular momentum is nonzero.
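If you want actual numbers, the turning points of such an orbit are easy to find. Here’s a sketch using this same example potential and a made-up energy E = -2; the turning points solve U(r) = E, which here reduces to the quadratic 2r² - 4r + 1 = 0:

```python
from scipy.optimize import brentq

# Example effective potential from the text, with a made-up energy E = -2.
U = lambda r: -4.0 / r + 1.0 / r**2
E = -2.0

# Turning points solve U(r) = E, i.e. 2 r^2 - 4 r + 1 = 0,
# giving r = 1 -/+ sqrt(2)/2.
r_in = brentq(lambda r: U(r) - E, 0.1, 0.5)    # inner turning point
r_out = brentq(lambda r: U(r) - E, 1.0, 3.0)   # outer turning point

assert abs(r_in - (1 - 2**0.5 / 2)) < 1e-10
assert abs(r_out - (1 + 2**0.5 / 2)) < 1e-10
```

The particle oscillates between these two radii forever: that’s the elliptical orbit.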

On the other hand, suppose we have a particle moving in an attractive inverse cube force! Then the potential is proportional to 1/r^2, so the effective potential is

\displaystyle{ U(r) = \frac{c}{r^2} + \frac{L^2}{2mr^2} }

where c is negative for an attractive force. If this attractive force is big enough, namely

\displaystyle{ c < -\frac{L^2}{2m} }

then this force can exceed the centrifugal force, and the particle can fall in to r = 0.

If we keep track of the angular coordinate \theta, we can see what’s really going on. The particle is spiraling in to its doom, hitting the origin in a finite amount of time!
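You can watch this happen in a little simulation. Here’s a sketch with made-up constants: a particle of mass 1 in the attractive potential V(r) = c/r² with c = -0.5, started with too little angular momentum to avoid capture. It reaches the origin in finite time; I stop the integration when r drops below 0.05:

```python
import numpy as np
from scipy.integrate import solve_ivp

c = -0.5   # made-up: attractive potential V(r) = c/r^2

def rhs(t, s):
    x, y, vx, vy = s
    r = np.hypot(x, y)
    F = 2 * c / r**3          # F(r) = -V'(r); negative, so it pulls inward
    return [vx, vy, F * x / r, F * y / r]

def hit(t, s):                # event: the particle reaches r = 0.05
    return np.hypot(s[0], s[1]) - 0.05
hit.terminal = True

# Start at r = 1 with a small tangential velocity (angular momentum 0.5).
sol = solve_ivp(rhs, (0, 20), [1.0, 0.0, 0.0, 0.5], events=hit,
                rtol=1e-10, atol=1e-12)

assert sol.t_events[0].size == 1   # the particle did spiral in...
assert sol.t_events[0][0] < 5.0    # ...at a finite, early time
```

Plot (x, y) along the way and you’ll see the death spiral.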


This should remind you of a black hole, and indeed something similar happens there, but even more drastic:

Schwarzschild geodesics: effective radial potential energy, Wikipedia.

For a nonrotating uncharged black hole, the effective potential has three terms. Like Newtonian gravity it has an attractive -1/r term and a repulsive 1/r^2 term. But it also has an attractive -1/r^3 term! In other words, it’s as if on top of Newtonian gravity, we had another attractive force obeying an inverse fourth power law! This overwhelms the others at short distances, so if you get too close to a black hole, you spiral in to your doom.

For example, a black hole can have an effective potential like this:

But back to inverse cube force laws! I know two more things about them. A while back I discussed how a particle in an inverse square force can be reinterpreted as a harmonic oscillator:

Planets in the fourth dimension, Azimuth.

There are many ways to think about this, and apparently the idea in some form goes all the way back to Newton! It involves a sneaky way to take a particle in a potential

\displaystyle{ V(r) \propto r^{-1} }

and think of it as moving around in the complex plane. Then if you square its position—thought of as a complex number—and cleverly reparametrize time, you get a particle moving in a potential

\displaystyle{ V(r) \propto r^2 }
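You can check the key geometric fact behind this numerically: squaring an origin-centered ellipse (a harmonic oscillator orbit) gives an ellipse with one focus at the origin—the shape of a Kepler orbit. A sketch with made-up semi-axes:

```python
import numpy as np

t = np.linspace(0.0, 2 * np.pi, 400)
a, b = 2.0, 1.0                            # made-up semi-axes
z = a * np.cos(t) + 1j * b * np.sin(t)     # oscillator ellipse, centered at 0
w = z**2                                   # square it as a complex number

# Predicted image: an ellipse with semi-major axis (a^2 + b^2)/2 and
# foci at 0 and a^2 - b^2.  Check the focal property: the sum of
# distances to the two foci is constant along the curve.
f2 = a**2 - b**2
s = np.abs(w) + np.abs(w - f2)

assert s.max() - s.min() < 1e-9            # constant: w traces an ellipse
assert abs(s.mean() - (a**2 + b**2)) < 1e-9
```

So squaring moves the center of the ellipse to a focus—exactly the difference between an oscillator orbit and a Kepler orbit. The time reparametrization is a separate part of the trick.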

This amazing trick can be generalized! A particle in a potential

\displaystyle{ V(r) \propto r^p }

can be transformed to a particle in a potential

\displaystyle{ V(r) \propto r^q }

if

(p+2)(q+2) = 4

A good description is here:

• Rachel W. Hall and Krešimir Josić, Planetary motion and the duality of force laws, SIAM Review 42 (2000), 115–124.

This trick transforms particles in r^p potentials with p ranging between -2 and +\infty to r^q potentials with q ranging between +\infty and -2. It’s like a see-saw: when p is small, q is big, and vice versa.

But you’ll notice this trick doesn’t actually work at p = -2, the case that corresponds to the inverse cube force law. The problem is that p + 2 = 0 in this case, so we can’t find q with (p+2)(q+2) = 4.
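The duality is a one-liner to play with—solving (p+2)(q+2) = 4 for q gives q = 4/(p+2) − 2, which blows up exactly at p = -2:

```python
def dual_exponent(p):
    """Return the q with (p + 2)(q + 2) = 4; there is none when p = -2."""
    if p == -2:
        raise ValueError("p = -2 (the inverse cube force law) has no dual")
    return 4 / (p + 2) - 2

assert dual_exponent(-1) == 2   # Kepler potential <-> harmonic oscillator
assert dual_exponent(2) == -1   # ...and the map is an involution
assert dual_exponent(0) == 0    # p = 0 is self-dual
```

The see-saw is plainly visible: as p approaches -2 from above, q shoots off to infinity.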

So, the inverse cube force is special in three ways: it’s the one that you can add on to any force to get solutions with the same radial motion but different angular motion, it’s the one that naturally describes the ‘centrifugal force’, and it’s the one that doesn’t have a partner! We’ve seen how the first two ways are secretly the same. I don’t know about the third, but I’m hopeful.

Quantum aspects

Finally, here’s a fourth way in which the inverse cube law is special. This shows up most visibly in quantum mechanics… and this is what got me interested in this business in the first place.

You see, I’m writing a paper called ‘Struggles with the continuum’, which discusses problems in analysis that arise when you try to make some of our favorite theories of physics make sense. The inverse square force law poses interesting problems of this sort, which I plan to discuss. But I started wanting to compare the inverse cube force law, just so people can see things that go wrong in this case, and not take our successes with the inverse square law for granted.

Unfortunately a huge digression on the inverse cube force law would be out of place in that paper. So, I’m offloading some of that material to here.

In quantum mechanics, a particle moving in an inverse cube force law has a Hamiltonian like this:

H = -\nabla^2 + c r^{-2}

The first term describes the kinetic energy, while the second describes the potential energy. I’m setting \hbar = 1 and 2m = 1 to remove some clutter that doesn’t really affect the key issues.

To see how strange this Hamiltonian is, let me compare it to an easier case. If p < 2, the Hamiltonian

H = -\nabla^2 + c r^{-p}

is essentially self-adjoint on C_0^\infty(\mathbb{R}^3 - \{0\}), which is the space of compactly supported smooth functions on 3d Euclidean space minus the origin. What this means is that first of all, H is defined on this domain: it maps functions in this domain to functions in L^2(\mathbb{R}^3). But more importantly, it means we can uniquely extend H from this domain to a self-adjoint operator on some larger domain. In quantum physics, we want our Hamiltonians to be self-adjoint. So, this fact is good.

Proving this fact is fairly hard! It uses something called the Kato–Lax–Milgram–Nelson theorem together with this beautiful inequality:

\displaystyle{ \int_{\mathbb{R}^3} \frac{1}{4r^2} |\psi(x)|^2 \,d^3 x \le \int_{\mathbb{R}^3} |\nabla \psi(x)|^2 \,d^3 x }

for any \psi\in C_0^\infty(\mathbb{R}^3).

If you think hard, you can see this inequality is actually a fact about the quantum mechanics of the inverse cube law! It says that if c \ge -1/4, the energy of a quantum particle in the potential c r^{-2} is bounded below. And in a sense, this inequality is optimal: if c < -1/4, the energy is not bounded below. This is the quantum version of how a classical particle can spiral in to its doom in an attractive inverse cube law, if it doesn’t have enough angular momentum. But it’s subtly and mysteriously different.
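Here’s a numerical spot-check of the inequality for the spherically symmetric trial function ψ(r) = e^{-r}. It isn’t compactly supported, but the inequality extends to it by a density argument, and both sides reduce to one-dimensional radial integrals:

```python
import numpy as np
from scipy.integrate import quad

# Trial function psi(r) = exp(-r), so |psi|^2 = exp(-2r) and, since
# psi'(r) = -exp(-r), |grad psi|^2 = exp(-2r) as well.
lhs, _ = quad(lambda r: 4 * np.pi * r**2 * np.exp(-2 * r) / (4 * r**2),
              0, np.inf)
rhs, _ = quad(lambda r: 4 * np.pi * r**2 * np.exp(-2 * r), 0, np.inf)

assert lhs < rhs                     # the Hardy inequality holds
assert abs(lhs - np.pi / 2) < 1e-8   # exact values: pi/2 ...
assert abs(rhs - np.pi) < 1e-8       # ... and pi
```

For this particular trial function the inequality holds with room to spare: π/2 versus π.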

You may wonder how this inequality is used to prove good things about potentials that are ‘less singular’ than the c r^{-2} potential: that is, potentials c r^{-p} with p < 2. For that, you have to use some tricks that I don’t want to explain here. I also don’t want to prove this inequality, or explain why it’s optimal! You can find most of this in some old course notes of mine:

• John Baez, Quantum Theory and Analysis, 1989.

See especially section 15.

But it’s pretty easy to see how this inequality implies things about the expected energy of a quantum particle in the potential c r^{-2}. So let’s do that.

In this potential, the expected energy of a state \psi is:

\displaystyle{  \langle \psi, H \psi \rangle =   \int_{\mathbb{R}^3} \overline\psi(x)\, (-\nabla^2 + c r^{-2})\psi(x) \, d^3 x }

Doing an integration by parts, this gives:

\displaystyle{  \langle \psi, H \psi \rangle = \int_{\mathbb{R}^3} |\nabla \psi(x)|^2 + cr^{-2} |\psi(x)|^2 \,d^3 x }

The inequality I showed you says precisely that when c = -1/4, this is greater than or equal to zero. So, the expected energy is actually nonnegative in this case! And making c greater than -1/4 only makes the expected energy bigger.

Note that in classical mechanics, the energy of a particle in this potential ceases to be bounded below as soon as c < 0. Quantum mechanics is different because of the uncertainty principle! To get a lot of negative potential energy, the particle’s wavefunction must be squished near the origin, but that gives it kinetic energy.

It turns out that the Hamiltonian for a quantum particle in an inverse cube force law has exquisitely subtle and tricky behavior. Many people have written about it, running into ‘paradoxes’ when they weren’t careful enough. Only rather recently have things been straightened out.

For starters, the Hamiltonian for this kind of particle

H = -\nabla^2 + c r^{-2}

has different behaviors depending on c. Obviously the force is attractive when c < 0 and repulsive when c > 0, but that’s not the only thing that matters! Here’s a summary:

c \ge 3/4. In this case H is essentially self-adjoint on C_0^\infty(\mathbb{R}^3 - \{0\}). So, it admits a unique self-adjoint extension and there’s no ambiguity about this case.

c < 3/4. In this case H is not essentially self-adjoint on C_0^\infty(\mathbb{R}^3 - \{0\}). In fact, it admits more than one self-adjoint extension! This means that we need extra input from physics to choose the Hamiltonian in this case. It turns out that we need to say what happens when the particle hits the singularity at r = 0. This is a long and fascinating story that I just learned yesterday.

c \ge -1/4. In this case the expected energy \langle \psi, H \psi \rangle is bounded below for \psi \in C_0^\infty(\mathbb{R}^3 - \{0\}). It turns out that whenever we have a Hamiltonian that is bounded below, even if there is not a unique self-adjoint extension, there exists a canonical ‘best choice’ of self-adjoint extension, called the Friedrichs extension. I explain this in my course notes.

c < -1/4. In this case the expected energy is not bounded below, so we don’t have the Friedrichs extension to help us choose which self-adjoint extension is ‘best’.

To go all the way down this rabbit hole, I recommend these two papers:

• Sarang Gopalakrishnan, Self-Adjointness and the Renormalization of Singular Potentials, B.A. Thesis, Amherst College.

• D. M. Gitman, I. V. Tyutin and B. L. Voronov, Self-adjoint extensions and spectral analysis in the Calogero problem, Jour. Phys. A 43 (2010), 145205.

The first is good for a broad overview of problems associated to singular potentials such as the inverse cube force law; there is attention to mathematical rigor, but the focus is on physical insight. The second is good if you want—as I wanted—to really get to the bottom of the inverse cube force law in quantum mechanics. Both have lots of references.

Also, both point out a crucial fact I haven’t mentioned yet: in quantum mechanics the inverse cube force law is special because, naively at least, it has a kind of symmetry under rescaling! You can see this from the formula

H = -\nabla^2 + cr^{-2}

by noting that both the Laplacian and r^{-2} have units of length^{-2}. So, they both transform in the same way under rescaling: if you take any smooth function \psi, apply H and then expand the result by a factor of k, you get k^2 times what you get if you do those operations in the other order.

In particular, this means that if you have a smooth eigenfunction of H with eigenvalue \lambda, you will also have one with eigenvalue k^2 \lambda for any k > 0. And if your original eigenfunction was normalizable, so will be the new one!
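This rescaling property is easy to verify symbolically. Here’s a sketch for the radial form of the operator, H = -d²/dr² + c/r², using one concrete trial function (any would do; the same computation works for the full 3d Laplacian):

```python
import sympy as sp

r, k, c = sp.symbols('r k c', positive=True)
H = lambda f: -sp.diff(f, r, 2) + c * f / r**2   # radial form of -del^2 + c/r^2

psi = sp.exp(-r**2)                      # a concrete trial function
lhs = H(psi.subs(r, k * r))              # rescale first, then apply H
rhs = (k**2 * H(psi)).subs(r, k * r)     # apply H, rescale, multiply by k^2

assert sp.simplify(lhs - rhs) == 0       # the two orders agree
```

Both terms of H scale the same way, so no choice of c can break this symmetry at the level of the formal operator.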

With some calculation you can show that when c \le -1/4, the Hamiltonian H has a smooth normalizable eigenfunction with a negative eigenvalue. In fact it’s spherically symmetric, so finding it is not so terribly hard. But this instantly implies that H has smooth normalizable eigenfunctions with any negative eigenvalue.

This implies various things, some terrifying. First of all, it means that H is not bounded below, at least not on the space of smooth normalizable functions. A similar but more delicate scaling argument shows that for c < -1/4 it is also not bounded below on C_0^\infty(\mathbb{R}^3 - \{0\}), as I claimed earlier.

This is scary but not terrifying: it simply means that when c \le -1/4, the potential is too strongly negative for the Hamiltonian to be bounded below.

The terrifying part is this: we’re getting uncountably many normalizable eigenfunctions, all with different eigenvalues, one for each choice of k. A self-adjoint operator on a countable-dimensional Hilbert space like L^2(\mathbb{R}^3) can’t have uncountably many normalizable eigenvectors with different eigenvalues, since then they’d all be orthogonal to each other, and that’s too many orthogonal vectors to fit in a Hilbert space of countable dimension!

This sounds like a paradox, but it’s not. These functions are not all orthogonal, and they’re not all eigenfunctions of a self-adjoint operator. You see, the operator H is not self-adjoint on the domain we’ve chosen, the space of all smooth functions in L^2(\mathbb{R}^3). We can carefully choose a domain to get a self-adjoint operator… but it turns out there are many ways to do it.

Intriguingly, in most cases this choice breaks the naive dilation symmetry. So, we’re getting what physicists call an ‘anomaly’: a symmetry of a classical system that fails to give a symmetry of the corresponding quantum system.

Of course, if you’ve made it this far, you probably want to understand what the different choices of Hamiltonian for a particle in an inverse cube force law actually mean, physically. The idea seems to be that they say how the particle changes phase when it hits the singularity at r = 0 and bounces back out.

(Why does it bounce back out? Well, if it didn’t, time evolution would not be unitary, so it would not be described by a self-adjoint Hamiltonian! We could try to describe the physics of a quantum particle that does not come back out when it hits the singularity, and I believe people have tried, but this requires a different set of mathematical tools.)

For a detailed analysis of this, it seems one should take Schrödinger’s equation and do a separation of variables into the angular part and the radial part:

\psi(r,\theta,\phi) = \Psi(r) \Phi(\theta,\phi)

For each choice of \ell = 0,1,2,\dots one gets a space of spherical harmonics that one can use for the angular part \Phi. The interesting part is the radial part, \Psi. Here it is helpful to make a change of variables

u(r) = r\Psi(r)

At least naively, Schrödinger’s equation for the particle in the cr^{-2} potential then becomes

\displaystyle{ \frac{d}{dt} u = -iH u }

where

\displaystyle{ H = -\frac{d^2}{dr^2} + \frac{c + \ell(\ell+1)}{r^2} }

Beware: I keep calling all sorts of different but related Hamiltonians H, and this one is for the radial part of the dynamics of a quantum particle in an inverse cube force. As we’ve seen before in the classical case, the centrifugal force and the inverse cube force join forces in an ‘effective potential’

\displaystyle{ U(r) = kr^{-2} }

where

k = c + \ell(\ell+1)

So, we have reduced the problem to that of a particle on the open half-line (0,\infty) moving in the potential kr^{-2}. The Hamiltonian for this problem:

\displaystyle{ H = -\frac{d^2}{dr^2} + \frac{k}{r^2} }

is called the Calogero Hamiltonian. Needless to say, it has fascinating and somewhat scary properties, since to make it into a bona fide self-adjoint operator, we must make some choice about what happens when the particle hits r = 0. The formula above does not really specify the Hamiltonian.

This is more or less where Gitman, Tyutin and Voronov begin their analysis, after a long and pleasant review of the problem. They describe all the possible choices of self-adjoint operator that are allowed. The answer depends on the values of k, but very crudely, the choice says something like how the phase of your particle changes when it bounces off the singularity. Most choices break the dilation invariance of the problem. But intriguingly, some choices retain invariance under a discrete subgroup of dilations!
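You can get a crude feel for this with a finite-difference sketch. The hard cutoff r₀ below is a made-up stand-in for a genuine choice of boundary condition at the singularity, and the discretization misbehaves for weakly attractive k, so I only test the clear-cut cases: for k = 0 the spectrum is positive, while for k = -1 < -1/4 negative eigenvalues appear, the tower whose spacing reflects the surviving discrete scale symmetry:

```python
import numpy as np

def calogero_eigs(k, r0=1e-3, R=40.0, n=1500):
    """Crude Dirichlet finite-difference spectrum of -d^2/dr^2 + k/r^2
    on [r0, R].  The cutoff r0 stands in for a boundary condition at
    the singularity; results depend on it, as they must."""
    r = np.linspace(r0, R, n)
    h = r[1] - r[0]
    H = (np.diag(2.0 / h**2 + k / r**2)
         + np.diag(-np.ones(n - 1) / h**2, 1)
         + np.diag(-np.ones(n - 1) / h**2, -1))
    return np.linalg.eigvalsh(H)

ev_free = calogero_eigs(0.0)    # no potential: spectrum strictly positive
ev_attr = calogero_eigs(-1.0)   # k < -1/4: negative eigenvalues appear

assert ev_free.min() > 0
assert ev_attr.min() < 0
```

Shrink r₀ and the negative eigenvalues march off toward minus infinity—the discrete version of the unboundedness discussed above.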

So, the rabbit hole of the inverse cube force law goes quite deep, and I expect I haven’t quite gotten to the bottom yet. The problem may seem pathological, verging on pointless. But the math is fascinating, and it’s a great testing-ground for ideas in quantum mechanics—very manageable compared to deeper subjects like quantum field theory, which are riddled with their own pathologies. Finally, the connection between the inverse cube force law and centrifugal force makes me think it’s not a mere curiosity.

In four dimensions

It’s a bit odd to study the inverse cube force law in 3-dimensional space, since Newtonian gravity and the electrostatic force would actually obey an inverse cube law in 4-dimensional space. For the classical 2-body problem it doesn’t matter much whether you’re in 3d or 4d space, since the motion stays in a plane. But for the quantum 2-body problem it makes more of a difference!

Just for the record, let me say how the quantum 2-body problem works in 4 dimensions. As before, we can work in the center of mass frame and consider this Hamiltonian:

H = -\nabla^2 + c r^{-2}

And as before, the behavior of this Hamiltonian depends on c. Here’s the story this time:

c \ge 0. In this case H is essentially self-adjoint on C_0^\infty(\mathbb{R}^4 - \{0\}). So, it admits a unique self-adjoint extension and there’s no ambiguity about this case.

c < 0. In this case H is not essentially self-adjoint on C_0^\infty(\mathbb{R}^4 - \{0\}).

c \ge -1. In this case the expected energy \langle \psi, H \psi \rangle is bounded below for \psi \in C_0^\infty(\mathbb{R}^4 - \{0\}). So, there exists a canonical ‘best choice’ of self-adjoint extension, called the Friedrichs extension.

c < -1. In this case the expected energy is not bounded below, so we don’t have the Friedrichs extension to help us choose which self-adjoint extension is ‘best’.

I’ve been assured these are correct by Barry Simon, and a lot of this material will appear in Section 7.4 of his book:

• Barry Simon, A Comprehensive Course in Analysis, Part 4: Operator Theory, American Mathematical Society, Providence, RI, 2015.

See also:

• Barry Simon, Essential self-adjointness of Schrödinger operators with singular potentials, Arch. Rational Mech. Analysis 52 (1973), 44–48.

Notes

The animation was made by ‘WillowW’ and placed on Wikicommons. It’s one of a number that appears in this Wikipedia article:

Newton’s theorem of revolving orbits, Wikipedia.

I made the graphs using the free online Desmos graphing calculator.

The picture of a spiral was made by ‘Anarkman’ and ‘Pbroks13’ and placed on Wikicommons; it appears in

Hyperbolic spiral, Wikipedia.

The hyperbolic spiral is one of three kinds of orbits that are possible in an inverse cube force law. They are vaguely analogous to ellipses, hyperbolas and parabolas, but there are actually no bound orbits except perfect circles. The three kinds are called Cotes’s spirals. In polar coordinates, they are:

• the epispiral:

\displaystyle{ \frac{1}{r} = A \cos\left( k\theta + \varepsilon \right) }

• the hyperbolic spiral:

\displaystyle{ \frac{1}{r} = A \theta + \varepsilon }

• the Poinsot spiral:

\displaystyle{ \frac{1}{r} = A \cosh\left( k\theta + \varepsilon \right) }

31 Responses to The Inverse Cube Force Law

  1. stasheff says:

    Inverse cube if we lived in 4 spatial dims

    • John Baez says:

      Hi, Jim—great to hear from you!

      Now you’ve got me wondering about something. Gravity and the electrostatic field would naturally obey an inverse cube force law if we lived in 4 spatial dimensions. But how would the self-adjointness of the Hamiltonian

      H = -\nabla^2 + cr^{-2}

      work in this dimension? Annoyingly, Reed and Simon have a nice discussion of this operator in 5 spatial dimensions in Example 4 of Chapter X.2 of their book Methods of Modern Mathematical Physics II: Fourier Analysis and Self-Adjointness. They also give theorems that handle this operator in more than 5 dimensions. But I don’t see how it works in 4 dimensions.

      In 3 dimensions, I explained that this Hamiltonian has different behavior depending on whether c \ge 3/4, -1/4 \le c < 3/4, or c < -1/4.

      In 5 dimensions, Reed and Simon show it has very different behavior depending on whether

      c \ge -5/4. Then it’s essentially self-adjoint on C_0^\infty(\mathbb{R}^5-\{0\}).

      -9/4 \le c < -5/4. Then it’s not essentially self-adjoint on this domain, but it’s bounded below, so we can use the Friedrichs extension.

      c < -9/4. Then it’s not essentially self-adjoint on this domain and not bounded below.

      I find it truly amazing that we see such delicate behavior in both 3 and 5 dimensions. Usually if something like this just barely works in some dimension, it either works completely or not at all in higher dimensions.

      But I want to know what’s going on in 4 dimensions!

    • John Baez says:

      Ah, I figured out what some of what happens in d dimensions! The Hardy inequality in d dimensions says the Hamiltonian

      H = -\nabla^2 + cr^{-2}

      is bounded below for

      \displaystyle{ c \ge -\frac{(d-2)^2}{4} }

      For d = 3 this gives c \ge -1/4, for d = 4 this gives c \ge -1, and for d = 5 this gives c \ge -9/4.

      It should be possible to show the operator is not bounded below for

      \displaystyle{ c < -\frac{(d-2)^2}{4} }

      but I haven’t shown that.
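The formula is simple enough to tabulate (values below just evaluate -(d-2)²/4 for d = 3, 4, 5):

```python
def hardy_threshold(d):
    """-(d-2)^2/4: by the d-dimensional Hardy inequality, the smallest c
    for which -del^2 + c r^{-2} has expected energy bounded below."""
    return -(d - 2)**2 / 4

assert hardy_threshold(3) == -1/4
assert hardy_threshold(4) == -1
assert hardy_threshold(5) == -9/4
```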

    • John Baez says:

      Okay, here’s some more progress! On MathSciNet I read that this paper:

      • Barry Simon, Essential self-adjointness of Schrödinger operators with singular potentials, Arch. Rational Mech. Analysis 52 (1973), 44–48.

      proves:

      Theorem 2: Let q = q_1 + q_2 with q_1 \in L^2_{\mathrm{loc}}(\mathbb{R}^m - \{0\}) and q_2 \in L^\infty, and suppose that

      q_1(r) \ge -((m-1)(m-3)-3)/(4r^2)

      Then -\Delta + q is essentially self-adjoint on C_0^\infty(\mathbb{R}^m - \{0\}).

      Here I believe we must have \Delta = \nabla^2 though people often use the opposite sign convention, \Delta = -\nabla^2, to make it a nonnegative operator. This matters here, but I’m going to assume I’m right.

      Anyway, the function q_1(r) = cr^{-2} is in L^2_{\mathrm{loc}}(\mathbb{R}^m - \{0\}), and we can take q_2 = 0. So, the key condition is that

      q_1(r) \ge -((m-1)(m-3)-3)/(4r^2)

      and when m = 4 we have

      -((m-1)(m-3)-3)/(4r^2) = 0

      so this theorem only says that -\nabla^2 + c r^{-2} is essentially self-adjoint on C_0^\infty(\mathbb{R}^m - \{0\}) when

      c \ge 0

      That’s not useless… but it’s useless for attractive potentials.

      Let me just see how this theorem fares in 3 and 5 dimensions, where I already know stuff.

      When m = 3 we have

      -((m-1)(m-3)-3)/(4r^2) = (3/4)r^{-2}

      so this theorem says that -\nabla^2 + c r^{-2} is essentially self-adjoint on C_0^\infty(\mathbb{R}^m - \{0\}) when

      c \ge 3/4

      Good, this matches what’s in my blog article!

      When m = 5 we have

      -((m-1)(m-3)-3)/(4r^2) = -(5/4)r^{-2}

      so this theorem says that -\nabla^2 + c r^{-2} is essentially self-adjoint on C_0^\infty(\mathbb{R}^m - \{0\}) when

      c \ge -5/4

      Good! This matches what I said in an earlier comment!
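Packaging the three checks above into one function, with the threshold coefficient written so that it reproduces the verified values 3/4, 0 and -5/4 in dimensions 3, 4 and 5:

```python
def simon_threshold(m):
    """Coefficient threshold from Theorem 2: the c above which
    -del^2 + c r^{-2} is essentially self-adjoint on
    C_0^inf(R^m - {0}), matching the checks for m = 3, 4, 5."""
    return -((m - 1) * (m - 3) - 3) / 4

assert simon_threshold(3) == 3/4    # matches the blog article
assert simon_threshold(4) == 0      # says nothing about attractive potentials
assert simon_threshold(5) == -5/4   # matches Reed and Simon
```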

  2. Small correction in the section about changing r^p potentials into r^q ones:

    “But you’ll notice this trick doesn’t actually work at p = 2, the case that corresponds to the inverse cube force law.”

    I believe you mean p = -2.

  3. I enjoyed this post particularly much. I’m not trained enough in mathematical physics to absorb all details in managable time. But I still understood pleasantly much. I really like how you catch the eye here with an interesting piece of classical mechanics and then move over to quantum mechanics. Also, it fascinates me how such a seemingly simple investigation can lead into such deep waters. In particular, the issue of (possibly ambiguous) completability to a self-adjoint operator is intriguing. (And a bit scary.)

    • John Baez says:

      I’m glad you liked it Carsten! I’ve been sort of missing you over at G+, wondering how your studies of physics are going. The inverse cube law illustrates a lot of interesting issues in physics, even though it’s an esoteric topic.

      The issue of multiple different self-adjoint extensions is easier to understand when you have a free particle on the interval [0,1], with the usual Hamiltonian

      \displaystyle{ H = -\frac{\hbar^2}{2m} \frac{d^2}{dx^2} }

      or just

      \displaystyle{ H = - \frac{d^2}{dx^2} }

      if you want to keep the calculations clean. This operator is not symmetric on the space of all smooth functions on [0,1], since

      \langle \psi , H \phi \rangle \ne \langle H \psi , \phi \rangle

      for such functions. If you restrict to smooth functions that vanish at the endpoints, you get

      \langle \psi , H \phi \rangle = \langle H \psi , \phi \rangle

      (these are fun calculations to do), but the operator on this domain has infinitely many self-adjoint extensions! Each one of these extensions describes different physics. Each one gives a different recipe for what happens when the particle hits the ends of the interval! It can ‘wrap around’ and come back in at the other end, it can ‘bounce off’, but it can also do either of these things while acquiring an arbitrary phase, and it can also do a superposition of these two things.

      In this case you can work out everything very clearly and it becomes intuitive.
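The two calculations I mentioned are indeed fun, and easy to check with a computer algebra system. A sketch with made-up trial functions:

```python
import sympy as sp

# H = -d^2/dx^2 on [0,1] is symmetric on smooth functions vanishing at
# the endpoints, but on general smooth functions a boundary term survives.
x = sp.symbols('x')
H = lambda f: -sp.diff(f, x, 2)
inner = lambda f, g: sp.integrate(f * g, (x, 0, 1))

psi = sp.sin(sp.pi * x)   # vanishes at 0 and 1
phi = x * (1 - x)         # vanishes at 0 and 1
chi = x**2                # does NOT vanish at x = 1

good = sp.simplify(inner(psi, H(phi)) - inner(H(psi), phi))
bad = sp.simplify(inner(psi, H(chi)) - inner(H(psi), chi))

assert good == 0                       # symmetric on this pair
assert sp.simplify(bad + sp.pi) == 0   # a boundary term -pi survives
```

The leftover -π is exactly the boundary term [ψ′φ − ψφ′] from integrating by parts, which is what the different self-adjoint extensions are really about.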

      For a particle in an attractive inverse cube force, or an even stronger force, the issue is what happens when the particle hits the central singularity. I still haven’t understood in concrete terms what are all the options! What happens can depend on its angular momentum, and for each choice of angular momentum the paper by Gitman, Tyutin and Voronov lists the possibilities. But I’d like a more ‘synthetic’ view, that provides more physical intuition. It’s possible that this is still an open problem.

      I’m also curious about work that lets the particle ‘fall down the hole and never come back’. This yields nonunitary time evolution. In his thesis, Gopalakrishnan writes:

      Vogt and Wannier (1954) [8] give a nonunitary solution to the 1/r^4 problem, and preen themselves (rather ironically) about having avoided Case’s “involved” mathematics. Nelson (1963) [9] arrives at a nonunitary result for the 1/r^2 case by analytically continuing the functional integral; he basically hits the same nonanalyticity that Narnhofer does fifteen years later.

      […]

      The term fall to the center was first applied to singular potentials by Landau and Lifschitz in their quantum mechanics text (1958) [18], where the strong-coupling regime is compared with the classical situation, for which the particle’s trajectory is defined only for a finite time. This comparison between classical completeness (i.e. having a solution for all time) and self-adjointness is explored rigorously in Reed and Simon vol. II [75], and makes its way into the pedagogical literature with an AJP article by Zhu and Klauder [19] on the “Classical Symptoms of Quantum Illnesses” (1993).

      Non-self-adjoint extensions of singular potentials are explored in some more detail by Perelomov and Popov (1970) [20] and by Alliluev (1971) [21], using methods that are equivalent to Nelson’s. The mathematical side of the semigroup theory had been worked out previously by Hille, Yosida, Kato, Trotter, etc.; a classic text is Functional Analysis and Semigroups by Hille and Phillips [10]. In 1979, Radin [22] observed that Nelson’s nonunitary time-evolution operator could be written as an average over the unitary operators corresponding to all the self-adjoint extensions. (This is not especially shocking, since e.g. e^{i\theta} and e^{-i\theta}, which are unitary, average to \cos \theta, which is not.)

      This work found some physical application in 1998, when Denschlag et al experimentally realized an attractive 1/r^2 potential by scattering cold neutral atoms off a charged wire [23], and found that the atoms were absorbed.

      Since Hille was the thesis advisor of my thesis advisor Irving Segal, and Nelson had Segal as his thesis advisor, I feel some family obligation to learn what Nelson did here.

      • [I’ve been trying to comment since yesterday, but G+ authentication here has stopped working for me despite trying different browsers. So I’m using Twitter authentication now.]

        Hi John, thanks for your elaborate reply! I’ll boldly display my ignorance and ask: what exactly is meant by “extension” of a self-adjoint operator? Let’s consider the simple Hamiltonian from your reply. Does “extension” mean extending its domain to smooth functions on an interval larger than the unit interval, while still requiring that the functions vanish at 0 and 1?

        As for my studies of physics – it’s actually quite comical: I was trying to inch my way towards QFT, and an acquaintance recommended me the book “PCT, Spin and Statistics, and All That” by Streater and Wightman. I started reading and realized I wasn’t quite ready for it. In particular, I wanted to learn more about operator theory and distributions first. While ploughing through operators, I realized that I didn’t even understand why the Riemann integral won’t do for Hilbert spaces. So I looked closely at the Lebesgue integral. That led me into measure theory. Then I found Terry Tao’s book “An introduction to measure theory”. And I couldn’t bring myself to just skim it. So I’m currently working my way through that. (Tao, bless him, has the nice habit of turning proofs of essential insights into exercises…)

        Fortunately, my deadlines are in software engineering, and not in physics or maths :)

        • John Baez says:

          Carsten wrote:

          Hi John, thanks for your elaborate reply! I’ll boldly display my ignorance and ask: what exactly is meant by “extension” of a self-adjoint operator?

          Actually I was talking about taking an operator that’s not self-adjoint and trying to extend it to make it self-adjoint. So your real question is: what’s an extension of an operator?

          And the answer is: an operator is a kind of function, and I think you know what it means to extend a function. But there’s more to say.

          Let’s consider the simple Hamiltonian from your reply. Does “extension” mean extending its domain […]

          Yes, if you’d stopped there your answer would be correct. The linear operators that physicists like are rarely operators T : H \to H that map all of a Hilbert space H to itself. They’re usually defined on just part of the Hilbert space. So, they are linear operators T: D \to H where D \subseteq H is a linear subspace called the domain of T. To extend T means to find a new operator T': D' \to H where D' is a bigger domain, such that T' equals T on the original domain:

          D \subseteq D' and T'(v) = T(v) for v \in D.

          Anyway…

          Does “extension” mean extending its domain to smooth functions on an interval larger than the unit interval, while still requiring that the functions vanish at 0 and 1?

          That’s almost never how it works. A more typical example would be this. We start with

          \displaystyle{T = \frac{d^2}{dx^2} }

          defined on the domain D consisting of all smooth functions on the unit interval that vanish at the endpoints. This operator is symmetric:

          \langle \psi , T \phi \rangle = \langle T \psi, \phi \rangle for all \psi,\phi \in D

          but it’s not self-adjoint. So, we might try to extend this operator to get something self-adjoint. For example, we could take an operator

          \displaystyle{T' = \frac{d^2}{dx^2} }

          which looks just like T except it has a bigger domain D': all functions \psi with second derivative in L^2 obeying ‘periodic boundary conditions’:

          \psi(0) = \psi(1)

          This operator is self-adjoint! Since I haven’t defined ‘self-adjoint’ that’s not so easy to check, but it’s easy to check that it’s still symmetric:

          \langle \psi , T' \phi \rangle = \langle T' \psi, \phi \rangle for all \psi,\phi \in D'

          As you can see, this stuff is a bit technical. But it’s important, because the ‘same-looking’ operator can actually be different self-adjoint operators, describing different physics, depending on the domain! For example, if I changed only this equation in what I said above:

          \psi(0) = -\psi(1)

          we’d have a new domain D'' and a new self-adjoint operator

          \displaystyle{T'' = \frac{d^2}{dx^2} }

          with this new domain… and T'' has quite different properties than T'. It describes particles that pick up a phase of -1 when they hit one end of the unit interval and pop out the other end!

          Both T' and T'' are self-adjoint extensions of T.
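
          To see concretely how the ‘same-looking’ operator with different domains gives different physics, here is a small numerical sketch (my own illustration, not part of the discussion above): a finite-difference version of d^2/dx^2 on the unit interval, where a single corner entry of the matrix encodes the boundary condition: +1 for periodic, -1 for antiperiodic. Both matrices are symmetric, but their spectra differ. The periodic one has the constant function as a zero mode, while the largest antiperiodic eigenvalue sits near -\pi^2.

```python
import numpy as np

n = 200
h = 1.0 / n

def second_derivative(sign):
    """Finite-difference d^2/dx^2 on [0,1] with wraparound psi(1) = sign * psi(0)."""
    T = -2.0 * np.eye(n) + np.eye(n, k=1) + np.eye(n, k=-1)
    T[0, -1] = sign   # this corner entry encodes the boundary condition
    T[-1, 0] = sign
    return T / h**2

per = np.linalg.eigvalsh(second_derivative(+1))   # periodic, like T'
anti = np.linalg.eigvalsh(second_derivative(-1))  # antiperiodic, like T''

# per.max() is ~0 (the constant function is a zero mode);
# anti.max() is ~ -pi^2 (no constant mode survives the sign flip).
```

          Same differential expression, visibly different spectra: a finite-dimensional shadow of the fact that T' and T'' are genuinely different self-adjoint operators.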

          I was trying to inch my way towards QFT, and an acquaintance recommended me the book “PCT, Spin and Statistics, and All That” by Streater and Wightman. I started reading and realized I wasn’t quite ready for it.

          Yikes, that’s a hard way to start learning QFT—your acquaintance must have been a sadist. This book assumes you know quantum field theory and are eager to make it mathematically rigorous. At the very least, all the stuff I explained just now about operators should be familiar to you before you try to climb that mountain. But I see you backed down.

          That led me into measure theory. Then I found Terry Tao’s book “An introduction to measure theory”.

          Okay, good! If you’re still interested in analysis after that, I’d suggest Reed and Simon’s Methods of Modern Mathematical Physics I: Functional Analysis. This leads you rapidly but (I think) clearly through analysis from topology and measure theory, through distributions, up to self-adjoint operators on Hilbert space, all the while paying a lot of attention to their applications in physics. I really loved it when I was learning this stuff.

        • Thanks! The notion of extension of an operator is clear to me now, and also that a typical case involves relaxing boundary conditions of the functions on which it operates. I also understand now that changing the domain can change the set of “candidate adjoints”, ideally to a singleton set containing only the operator itself. (Did I get this right?)

          I’m really glad about the advice on Wightman/Streater and Reed/Simon. It had dawned on me that my next step should be studying a certain bunch of maths, which miraculously agrees with what Reed/Simon seems to cover. So after finishing Tao’s book I’ll just pick up Reed/Simon.

      • John Baez says:

        Carsten wrote:

        Thanks! The notion of extension of an operator is clear to me now, and also that a typical case involves relaxing boundary conditions of the functions on which it operates. I also understand now that changing the domain can change the set of “candidate adjoints”, ideally to a singleton set containing only the operator itself. (Did I get this right?)

        All that is exactly right! So, I’ll reward (or punish) you with a bit more information.

        Often it is difficult or annoying to describe the domain of a self-adjoint operator. So, we often settle for an essentially self-adjoint operator: one that has a unique self-adjoint extension.

        Any self-adjoint operator is essentially self-adjoint, with its unique self-adjoint extension being itself. But essential self-adjointness is a very useful generalization.

        For example, here is an essentially self-adjoint operator:

        \displaystyle{A = \frac{d^2}{dx^2} }

        with the domain D consisting of all smooth functions \psi on the unit interval obeying

        \psi(0) = \psi(1)

        The unique self-adjoint extension of A is the operator

        \displaystyle{A' = \frac{d^2}{dx^2} }

        with the larger domain D' consisting of all functions \psi with

        \psi(0) = \psi(1)

        and with second derivative lying in L^2[0,1].

        As you can see, saying “second derivative lying in L^2[0,1]” sounds more technical than saying “smooth” (infinitely differentiable). We need to be technical because we’re trying to describe the “exactly correct” domain instead of something that’s “almost right”, but a bit smaller. And in more complicated examples, the exactly correct domain is almost impossible to describe explicitly.

        Anyway, it’s all explained very nicely near the end of Reed and Simon. Good luck on that, and don’t hesitate to ask questions, especially in public forums where I can have the pleasure of showing off in my reply! I like analysis and hardly ever get to talk about it these days.

        • Okay, I’ll file your reply under “reward” rather than “punishment” ;) So, sticking with your example, does some spectral theorem yield an orthogonal eigenbasis for the self-adjoint operator that’s the unique extension? If so, that would be intriguing: we might only be interested in the smooth functions, but we’d have to “step out” of that domain for an eigenbasis… (Feel free to say “Shut up now and read the book” :)

        • John Baez says:

          Self-adjoint operators on infinite-dimensional Hilbert spaces typically don’t have a basis of eigenvectors: the simplest example is the operator of multiplication by x on the Hilbert space L^2(\mathbb{R}). The would-be eigenvectors are Dirac deltas, which are not in L^2(\mathbb{R}), and there’s a continuum of them—too many for an orthonormal basis of L^2(\mathbb{R}).

          The spectral theorem in its grown-up, infinite-dimensional form tells you how to deal with this. Needless to say, physicists pretend the infinite-dimensional case is just like the finite-dimensional case… but Reed and Simon give a rigorous treatment of this crucial issue.
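
          A toy discretization makes this vivid (my own sketch, not anything from Reed and Simon): truncate ‘multiplication by x’ to a diagonal matrix on a finite grid. Its eigenvectors are one-hot spikes, the discrete stand-ins for Dirac deltas, and they have no limit inside L^2 as the grid is refined.

```python
import numpy as np

n = 8
x = np.linspace(0.0, 1.0, n)
X = np.diag(x)  # crude discretization of "multiplication by x"

vals, vecs = np.linalg.eigh(X)

# The eigenvalues are just the grid points, and each eigenvector is a
# one-hot spike at a single grid point: a discrete stand-in for a Dirac delta.
```

          As n grows, the spikes get narrower and taller in the function picture; their continuum limit is a delta function, which is exactly what fails to live in L^2(\mathbb{R}).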

        • Thanks again! My immediate curiosity is sated now :) Time for the book.

  4. SimplyFred says:

    Another fascinating post! Another thought provoking demonstration! I’m wondering if another solution might obtain out of Richard Anthony Proctor’s book: ‘A Treatise on the Cycloid and all Forms of Cycloid Curves’ (1878) See: https://archive.org/details/atreatiseoncycl01procgoog

    • John Baez says:

      Do you know some relation between cycloids and the inverse cube force law?

      I should start by remembering the closed-form solutions of the inverse cube force law. There’s information about that here:

      Binet equation: Cotes spirals, Wikipedia.

      An inverse cube force law has the form

      \displaystyle{ F(r)=-\frac{k}{r^3} }

      The shapes of the orbits of an inverse cube law are known as Cotes spirals. The Binet equation shows that the orbits must be solutions to the equation

      \displaystyle{ \frac{\mathrm{d}^{2}u}{\mathrm{d}\theta ^{2}}+u=\frac{k u}{m h^2} = C u }

      The differential equation has three kinds of solutions, in analogy to the different conic sections of the Kepler problem. When C < 1, the solution is the epispiral, including the pathological case of a straight line when C=0. When C=1, the solution is the hyperbolic spiral. When C>1 the solution is Poinsot’s spiral.

      Here are the equations of the epispiral:

      \displaystyle{ \frac{1}{r} = A \cos\left( k\theta + \varepsilon \right) }

      the hyperbolic spiral:

      \displaystyle{ \frac{1}{r} = A \theta + \varepsilon }

      and Poinsot’s spiral:

      \displaystyle{ \frac{1}{r} = A \cosh\left( k\theta + \varepsilon \right) }

      All this looks like really beautiful, classical mathematics—a modified version of the theory of ellipses, hyperbolas and parabolas. The equation of the epispiral reminds me slightly of the cycloid.
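
      The three regimes are easy to verify numerically. Here is a sketch (my own, with unit amplitude, \varepsilon = 0, and the constants scaled away): integrating the Binet equation u'' = (C - 1)\,u with RK4 reproduces the cosine, linear and cosh branches for C < 1, C = 1 and C > 1.

```python
import numpy as np

def binet(C, theta, u0, du0):
    """RK4 integration of the Binet equation u'' = (C - 1) u, where u = 1/r."""
    h = theta[1] - theta[0]
    def f(u, du):
        return du, (C - 1.0) * u
    u, du = u0, du0
    us = [u0]
    for _ in range(len(theta) - 1):
        k1 = f(u, du)
        k2 = f(u + 0.5 * h * k1[0], du + 0.5 * h * k1[1])
        k3 = f(u + 0.5 * h * k2[0], du + 0.5 * h * k2[1])
        k4 = f(u + h * k3[0], du + h * k3[1])
        u += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        du += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
        us.append(u)
    return np.array(us)

theta = np.linspace(0.0, 4.0 * np.pi, 2001)
u_epi = binet(0.75, theta, 1.0, 0.0)  # C < 1: epispiral, u = cos(theta / 2)
u_hyp = binet(1.0, theta, 0.0, 1.0)   # C = 1: hyperbolic spiral, u = theta
u_poi = binet(2.0, theta, 1.0, 0.0)   # C > 1: Poinsot's spiral, u = cosh(theta)
```

      The frequency in the C < 1 case is \sqrt{1 - C}, which is the role the constant k plays in the spiral equations.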

  5. John Baez says:

    Since gravity and the electrostatic force would naturally obey an inverse cube force law if space were 4-dimensional, I decided to do some reading and figure out how the Hamiltonian

    H = -\nabla^2 + c r^{-2}

    acts in 4 dimensions. I checked my answers with Barry Simon, the champ of mathematical physics.

    And I was happy to discover that my answers were all correct! Moreover, a lot of this material will appear in his forthcoming book A Comprehensive Course in Analysis, Part 4: Operator Theory.

    So, here’s the story:

    c \ge 0. In this case H is essentially self-adjoint on C_0^\infty(\mathbb{R}^4 - \{0\}). So, it admits a unique self-adjoint extension and there’s no ambiguity about this case.

    c < 0. In this case H is not essentially self-adjoint on C_0^\infty(\mathbb{R}^4 - \{0\}).

    c \ge -1. In this case the expected energy \langle \psi, H \psi \rangle is bounded below for \psi \in C_0^\infty(\mathbb{R}^4 - \{0\}). It thus has a canonical ‘best choice’ of self-adjoint extension, the Friedrichs extension.

    c < -1. In this case the expected energy is not bounded below, so we don’t have the Friedrichs extension to help us choose which self-adjoint extension is ‘best’.

    In short, it’s the same as in 3 dimensions, but with the numbers 0 and -1 replacing 3/4 and -1/4. The situation is ‘better’ than in 3 dimensions.
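
    For the record, here is the standard dimension-counting behind those thresholds (my own bookkeeping using Hardy’s inequality, not something asserted above):

```latex
% Hardy's inequality in d dimensions, for \psi \in C_0^\infty(\mathbb{R}^d - \{0\}):
\int_{\mathbb{R}^d} |\nabla \psi|^2 \, d^d x \;\ge\; \frac{(d-2)^2}{4} \int_{\mathbb{R}^d} \frac{|\psi|^2}{r^2} \, d^d x
% so \langle \psi, H \psi \rangle is bounded below iff  c \ge -\tfrac{(d-2)^2}{4},
% while H = -\nabla^2 + c\,r^{-2} is essentially self-adjoint iff  c \ge 1 - \tfrac{(d-2)^2}{4}.
```

    Setting d = 3 gives the thresholds 3/4 and -1/4, and setting d = 4 gives 0 and -1, matching the pairs above.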

  6. ralph muha says:

    The E-field equations for dipole radiation include inverse cube terms.

    http://en.wikipedia.org/wiki/Dipole_antenna#Hertzian_dipole

  7. domenico says:

    I am thinking (I don’t know if it is true) that if there is an infinitesimal transformation of the potential depending on a single parameter, then the space-time coordinates could undergo a one-parameter coordinate transformation that carries the trajectory back into the original trajectory, so it could be possible to use Noether’s theorem (which must hold for each little variation of the parameter) to obtain an invariant of the transformation (close trajectories remain close trajectories for a cubic potential, so the transformation could be simple in this case).
    If the infinitesimal transformation of the potential is small, then the trajectory variation is small, and an (infinitesimal) coordinate transformation could yield an invariant of the trajectory: it could be possible to obtain an invariant connecting coordinate variations with the potential variation (as in field theory) for a classical system.

  8. Phillip Helbig says:

    I’m not sure how closely this is related, but it’s an interesting question nonetheless:

    Take a rubber strip (i.e. a cut rubber band, reducing the genus from 1 to 0) fixed at both ends, or a guitar string, or whatever. Take hold of it somewhere and pull it. What results are two lines connecting the end points with the pulled-away point.

    Do the same with a two-dimensional analog, e.g. a trampoline. In this case, the result is not a cone, but rather a curved surface, with the slope greater the closer one is to the “disturbance”. (Side question: in an ideal case, is the slope infinite when the disturbance is a point?) Is there an obvious reason for this? What, exactly, is the form of this curved surface? Is there a general formula, say for a three-dimensional “surface” embedded in a four-dimensional space?

    • John Baez says:

      It’s fairly related. I believe it’s a pretty reasonable approximation to say your n-dimensional surface (e.g. rubber band for n = 1 or trampoline for n = 2) is trying to minimize its volume (e.g. length for n = 1 or area for n = 2), subject to the boundary conditions you’re giving it, including the ‘disturbance’ at a chosen point.

      At the very least, this makes for a fun and much-studied math problem! The n = 2 case is called the theory of minimal surfaces. There are big textbooks on this, and also on the general framework that works in higher dimensions: geometric measure theory.

      Geometric measure theory is famous for being difficult. I just noticed that on Amazon, Frank Morgan’s Geometric Measure Theory: A Beginner’s Guide has just one review, a two-star review that says (among other things):

      Yes, the book is ABOUT mathematics, but it contains very little of the gory details that actually constitute the real mathematics, like definitions, theorems and proofs (and the intuition behind it all for that matter). You could say that it is about mathematics written elsewhere. Elsewhere here means Geometric Measure Theory by Herbert Federer.

      On the other hand, Federer’s Geometric Measure Theory has some better reviews, but also a one-star review that reads:

      abandon hope all ye who enter: this is the kind of thing that gives math books a horrible reputation — absolutely impenetrable to the non initiate — and written with what perversely seems almost to be a deliberate attempt at obfuscation — avoid like the plague

      Another review suggests getting both Morgan’s book and Federer’s book, and reading them side by side. And that’s probably good advice!

      Anyway, when your minimal surface isn’t ‘too warped’, you can approximately describe its height as a function

      \phi : \mathbb{R}^n \to \mathbb{R}

      obeying the Laplace equation. Then the things I’m saying about singularities in the gravitational field, or electrostatic field, become rather closely related to your observations. I think the big difference is that the gravitational field of a point particle obeys the equation

      \nabla^2 \phi = \delta

      (so, Poisson’s equation with a delta function ‘source’) while the problem you mention is about

      \nabla^2 \phi = 0

      except at the origin, where

      \phi(0) = 1

      and the points x on the sphere of radius 1, where

      \phi(x) = 0

      (so, the Laplace equation with some Dirichlet boundary conditions).
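
      Here is a small numerical sketch of that Dirichlet problem (my own toy, with a square outer boundary instead of a sphere): pin one grid point to 1, hold the boundary at 0, and relax toward the discrete Laplace equation. In 2d the solution behaves like a logarithm near the pinned point, which is why the slope of an ideal point-pulled membrane blows up there.

```python
import numpy as np

n = 41                  # grid points per side; the boundary is held at phi = 0
phi = np.zeros((n, n))
c = n // 2              # index of the pinned "disturbance" in the middle

for _ in range(4000):   # Jacobi relaxation toward nabla^2 phi = 0
    phi[c, c] = 1.0     # Dirichlet condition: pin the disturbance to height 1
    # replace each interior value by the average of its four neighbors
    phi[1:-1, 1:-1] = 0.25 * (phi[:-2, 1:-1] + phi[2:, 1:-1]
                              + phi[1:-1, :-2] + phi[1:-1, 2:])
phi[c, c] = 1.0

# phi now decays from the pinned point toward the boundary,
# roughly like log(distance) away from the pin.
```

      The maximum principle guarantees 0 <= phi <= 1 everywhere, and the steep profile near the pin is the discrete shadow of the logarithmic singularity.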

      • Phillip Helbig says:

        Thanks for the reply. What would I do without you? :-)

        What happens if the boundary conditions move to infinity? Can an infinite area be minimized?

        If even you say that geometric measure theory is difficult, I wonder if I have time to master it in the rest of my life. What is the form of the surfaces for n=2, n=3, and n=4?

      • John Baez says:

        Phillip wrote:

        What happens if the boundary conditions move to infinity? Can an infinite area be minimized?

        We face this problem whenever we try to apply the principle of least action to a field on an infinite-sized chunk of spacetime. The way people deal with it is to replace ‘minimization’ by a principle saying that if you change your field within some finite-sized region, the quantity you’re trying to minimize doesn’t change, to first order:

        \displaystyle{ \frac{d}{ds} S(\phi + s \delta \phi) \vert_{s = 0} = 0}

        where \phi is our field, S is the quantity we wish to minimize (like action or area), and \delta \phi is a change in the field—and we demand \delta \phi = 0 outside some set of finite volume!

        This makes everything well-defined, usually, and we can derive some differential equation that \phi should satisfy. For area-minimizing surfaces this equation was first found by Lagrange himself.
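
        Assuming the surface is the graph of a function \phi, that first variation can be carried out explicitly:

```latex
S(\phi) = \int \sqrt{1 + |\nabla \phi|^2} \, d^n x, \qquad
\frac{d}{ds} S(\phi + s\,\delta\phi)\Big|_{s=0}
  = -\int \nabla \cdot \left( \frac{\nabla \phi}{\sqrt{1 + |\nabla \phi|^2}} \right) \delta\phi \, d^n x
```

        so stationarity for all compactly supported \delta\phi gives Lagrange’s minimal surface equation \nabla \cdot \big( \nabla \phi / \sqrt{1 + |\nabla \phi|^2} \big) = 0, which reduces to the Laplace equation \nabla^2 \phi = 0 when |\nabla \phi| is small—the ‘not too warped’ approximation.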

        If even you say that geometric measure theory is difficult, I wonder if I have time to master it in the rest of my life.

        I don’t think you want to. But you might like to look at a gallery of minimal surfaces. These are typically only area-minimizing in the subtle sense I just described.

        What is the form of the surfaces for n=2, n=3, and n=4?

        I should know some of those, but I don’t!

        • Phillip Helbig says:

          “I should know some of those, but I don’t!”

          Definitely a challenge to the readers here!

        • Phillip Helbig says:

          No takers? Really?

        • Phillip Helbig says:

          I came across an interesting paper which does real mathematical physics on a stretched 2-dimensional rubber sheet (like those sometimes used to demonstrate gravity and/or which are confused with embedding diagrams). Some interesting tidbits:

          “there does not exist a two-dimensional, cylindrically-symmetric surface that will yield rolling marble orbits that are equivalent to the particle orbits of Newtonian gravitation,[3] or for particle orbits that arise in general relativity”

          “they found a Kepler-like expression of the form T^3 / r^2, which is reminiscent of Kepler’s third law for planetary orbits, but with the powers transposed”

          “a differential equation that determines the shape of the fabric warped by the mass …nonlinear, ordinary differential equation that cannot be solved analytically” [my emphasis]

  9. Next time we’ll look at what happens to point particles interacting electromagnetically when we take special relativity into account. After that, we’ll try to put special relativity and quantum mechanics together!

    For more on the inverse cube force law, see:

    • John Baez, The inverse cube force law, Azimuth, 30 August 2015.

  10. To understand what a great triumph this is, one needs to see what could have gone wrong! Suppose space had an extra dimension. In 3-dimensional space, Newtonian gravity obeys an inverse square force law because the area of a sphere is proportional to its radius squared. In 4-dimensional space, the force obeys an inverse cube law. Using a cube instead of a square makes the force stronger at short distances, with dramatic effects. For example, even for the classical 2-body problem, the equations of motion no longer ‘almost always’ have a well-defined solution for all times. For an open set of initial conditions, the particles spiral into each other in a finite amount of time!
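
    The classical spiral-in can be checked directly on the reduced radial problem. Here is a sketch (my own, with m = 1 and an assumed net inward inverse-cube coupling a = k - L^2 > 0, so the radial equation is r'' = -a/r^3): starting from rest at r_0, the exact solution is r(t)^2 = r_0^2 - a t^2 / r_0^2, which hits r = 0 at the finite time t_* = r_0^2/\sqrt{a}. A straightforward RK4 integration tracks the closed form until just before the collapse.

```python
import numpy as np

a, r0 = 1.0, 1.0               # assumed net inward coupling a = k - L^2, start radius
t_star = r0**2 / np.sqrt(a)    # exact collapse time for r(0) = r0, rdot(0) = 0

def rk4_radius(t_end, steps):
    """RK4 for r'' = -a / r^3 starting from rest at r0; returns r(t_end)."""
    h = t_end / steps
    def f(r, v):
        return v, -a / r**3
    r, v = r0, 0.0
    for _ in range(steps):
        k1 = f(r, v)
        k2 = f(r + 0.5 * h * k1[0], v + 0.5 * h * k1[1])
        k3 = f(r + 0.5 * h * k2[0], v + 0.5 * h * k2[1])
        k4 = f(r + h * k3[0], v + h * k3[1])
        r += h * (k1[0] + 2 * k2[0] + 2 * k3[0] + k4[0]) / 6
        v += h * (k1[1] + 2 * k2[1] + 2 * k3[1] + k4[1]) / 6
    return r

t = 0.9 * t_star                               # stop just short of the collapse
r_exact = np.sqrt(r0**2 - a * t**2 / r0**2)    # closed-form radius
r_num = rk4_radius(t, 20000)                   # numerical radius, should agree
```

    No choice of initial data with a > 0 avoids the collapse: the solution simply stops existing at t_*, which is the classical counterpart of the quantum trouble described next.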

    The quantum version of this theory is also problematic. The uncertainty principle is not enough to save the day. The inequalities above no longer hold: kinetic energy does not triumph over potential energy. The Hamiltonian is no longer essentially self-adjoint on the set of wavefunctions that I described.
