Knowing that there exist smooth vector fields with a certain property (for example, the flow of V(x) = x^2 d/dx on the real line escapes to infinity in finite time) is a far cry from knowing that a **special class** of vector fields (those coming from the classical mechanics of point masses in Euclidean space) can do the same thing.

My kids (4/7) have a magazine called “Crazy Words” (circulation 20). One of the long-running columns is “Collisions We’d Like to See”, with examples such as Butter —> Peanut Butter. I will see if the editors want to publish “Von Neumann —> Infinite Series”. Anyway, I tried to run this on the computer. Maybe I ran my numbers wrong, but it seemed simple enough, and it didn’t work. You can have arbitrarily many collisions with this configuration, but eventually the balls stop colliding (and surprisingly quickly).
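For anyone who wants to try the numbers themselves, here is a minimal event-driven sketch of that configuration (the function names, masses, and starting positions are my own illustrative choices, not from the computation described above): two heavy point masses converge on a light one, and we count elastic collisions until no adjacent pair is still approaching. The light ball starts slightly off-center so that the symmetric simultaneous triple collision is avoided.

```python
def elastic(m1, v1, m2, v2):
    """Post-collision velocities for a 1-D elastic collision
    (conserves both momentum and kinetic energy)."""
    v1p = ((m1 - m2) * v1 + 2 * m2 * v2) / (m1 + m2)
    v2p = ((m2 - m1) * v2 + 2 * m1 * v1) / (m1 + m2)
    return v1p, v2p

def count_collisions(M=100.0, m=1.0, max_steps=100_000):
    """Two heavy balls (mass M) converging on a light ball (mass m)
    on a line; returns the total number of collisions, or None if
    the simulation hits max_steps without settling."""
    x = [-1.0, 0.2, 1.0]      # light ball slightly off-center
    v = [1.0, 0.0, -1.0]      # heavy balls move inward
    mass = [M, m, M]
    n = 0
    for _ in range(max_steps):
        # next collision among adjacent, still-approaching pairs
        pending = [((x[i + 1] - x[i]) / (v[i] - v[i + 1]), i)
                   for i in (0, 1) if v[i] > v[i + 1]]
        if not pending:
            return n           # velocities sorted: no more collisions, ever
        dt, i = min(pending)
        x = [xi + vi * dt for xi, vi in zip(x, v)]
        v[i], v[i + 1] = elastic(mass[i], v[i], mass[i + 1], v[i + 1])
        n += 1
    return None
```

For a 100:1 mass ratio this settles after a few dozen collisions; the count grows as the mass ratio grows, but always stays finite, consistent with the observation above that the balls eventually stop colliding.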

I thought the answer was, no, you can’t have infinitely many. But I don’t know the proof.

I think this is fairly easy: take two heavy billiard balls moving toward each other, and a light one bouncing back and forth between them. Assume this is in 1-dimensional space – or in other words, they’re all moving along a line. I haven’t done the calculation, but I think the light one can bounce back and forth infinitely many times before all three touch.

This reminds me a bit of that joke about John von Neumann, who was famous for being quick at calculations:

Von Neumann and a friend were on a train, and they were getting a bit bored, so his friend posed him a puzzle.

“Suppose two trains on the same track begin a mile apart and head towards each other at 60 miles an hour. A fly starting at the front of one train flies at 120 mph to the other train, and then flies back to the first train, and so on, back and forth until it gets squashed when the trains collide. How far does the fly travel?”

Von Neumann thought about it a moment and said, “One mile”.

His friend said, “Wow! You’re really good. Most people don’t notice the easy way to solve it: the trains meet in half a minute, and the fly can travel 1 mile in half a minute. They think they have to sum an infinite series!”

Von Neumann looked stunned, and said “But that’s how I did it.”
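The “hard way” in the joke really does give the same answer: the fly’s legs form a geometric series summing to one mile. Here is a small sketch that adds the legs up one at a time (the function name and the cutoff are my own; it assumes the fly is faster than the trains, so each leg actually ends):

```python
def fly_distance(gap=1.0, train_speed=60.0, fly_speed=120.0, eps=1e-12):
    """Sum the fly's back-and-forth legs until the trains (nearly) meet.
    Assumes fly_speed > train_speed, so every leg terminates."""
    a, b = 0.0, gap            # train A moves right, train B moves left
    fly, heading = 0.0, +1     # fly starts on train A, heading for B
    total = 0.0
    while b - a > eps:
        # fly and target train approach at their combined speed
        target = b if heading == +1 else a
        t = abs(target - fly) / (fly_speed + train_speed)
        total += fly_speed * t
        a += train_speed * t
        b -= train_speed * t
        fly = b if heading == +1 else a
        heading = -heading
    return total
```

With the puzzle’s numbers each leg is a third as long as the one before, so the sum converges quickly to 1 mile.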

I’ve seen some of Montgomery’s work on regularizing the planar 3-body problem, but I’m not sure I’ve seen that paper.

It’s starting to sound like covering spaces and Galois theory are connected not only to the classic problems of trisecting the angle, doubling the cube and solving the quintic, but also to the 3-body problem!

But what about a simultaneous collision of 3 or more bodies? This seems more difficult.

Have you seen the recent work by Montgomery and Moeckel? They find a beautiful geometric way to regularize the entire configuration space (including 3-body collisions) in the planar 3-body problem. In the end they get out a 4-fold octahedral covering of the Riemann sphere!

The nicest resource is the talk by Moeckel – be sure to get the version with the videos to really appreciate the beauty.

I think most physicists would find that surprising.

What I find surprising is that Koopman operators and ergodic theory have deep applications in number theory and combinatorics. There is, for example, a proof of Szemerédi’s theorem by H. Furstenberg using ergodic theory.

A recent account of how these concepts relate physics and mathematics can be found in T. Eisner et al., *Operator Theoretic Aspects of Ergodic Theory*.

Thanks for explaining your quibbles. None of them move me to change what I wrote, because they basically depend on what counts as “surprising”, which is highly subjective.

As a writer, it’s generally a better strategy to say “this is surprising” and then explain it, than to say “if you’re smart, this will not be surprising”.

Continuing my counter-nitpicking:

I didn’t mention chaos at all; that’s not part of the theme in this series. I’m not getting into the issue of ‘in practice’ predictability, just ‘in principle’ predictability. So, one main theme is whether the most famous theories of physics lead to well-defined time evolution given initial data—or for quantum field theory (where the initial value problem is too hard) whether we can define the amplitude of having some particles with some momenta at future infinity given similar data at past infinity.

Most physicists would not be surprised at the collision singularities in the Newtonian gravitational n-body problem. They *would* be surprised by non-collision singularities where particles manage to ‘mine’ an infinite amount of potential energy and convert it to kinetic energy in a finite amount of time. The battle between kinetic and potential energy is a second theme of this series. In the quantum version of the Newtonian gravitational n-body problem, kinetic energy triumphs over potential energy and all is well. In perturbative QED, where arbitrarily large numbers of particles can be created, potential energy fights back! — and thus, Dyson argues, the power series diverge.

Physicists would attempt to shrug off both the collision and non-collision singularities in the Newtonian gravitational n-body problem by arguing that they occur only for initial data in a set of measure zero.

If this is true, we can define time evolution not for individual points on phase space but for probability distributions on phase space—or we can use ‘Koopmanism’ to define time evolution as a strongly continuous 1-parameter unitary group on L^{2} of the phase space.
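To spell out the ‘Koopmanism’ mentioned here (this is the standard construction, nothing special to this series): if Φ_t is a flow defined for almost every point of the phase space X and preserving the Liouville measure, define

(U_t f)(x) = f(Φ_t(x))   for f in L^{2}(X).

Since the flow preserves the measure, ||U_t f|| = ||f||, so each U_t is unitary, and U_{t+s} = U_t U_s. When this family is strongly continuous, Stone’s theorem gives a self-adjoint generator A with U_t = e^{itA}, so the classical flow acquires a ‘Hamiltonian’ in the quantum-mechanical sense.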

But in fact nobody has proved this is true except for low n. I think most physicists would find *that* surprising.

So, another general theme is that a lot of basic questions about our favorite theories of physics, like the extent to which they allow us to predict the future in principle, remain unsettled. And this tends to be connected to infinities that arise when there are states of arbitrarily negative potential energy.

On second thought, there probably is something I should change about my paper: I should probably add some remarks like this to the Conclusion, which right now is rather hasty.
