Is spacetime really a continuum? That is, can points of spacetime really be described—at least locally—by lists of four real numbers ? Or is this description, though immensely successful so far, just an approximation that breaks down at short distances?
Rather than trying to answer this hard question, let’s look back at the struggles with the continuum that mathematicians and physicists have had so far.
The worries go back at least to Zeno. Among other things, he argued that an arrow can never reach its target:
That which is in locomotion must arrive at the half-way stage before it arrives at the goal.—Aristotle summarizing Zeno
and Achilles can never catch up with a tortoise:
In a race, the quickest runner can never overtake the slowest, since the pursuer must first reach the point whence the pursued started, so that the slower must always hold a lead.—Aristotle summarizing Zeno
These paradoxes can now be dismissed using our theory of real numbers. An interval of finite length can contain infinitely many points. In particular, a sum of infinitely many terms can still converge to a finite answer.
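Here's a quick numerical illustration (a toy example of my own, in Python): the distances in Zeno's dichotomy form a geometric series 1/2 + 1/4 + 1/8 + ⋯ whose partial sums converge to 1, so infinitely many steps fit in a finite distance or time.

```python
# Zeno's dichotomy: the arrow crosses infinitely many half-intervals,
# but their lengths sum to a finite total. Partial sums of 1/2 + 1/4 + ...
# approach 1, so infinitely many steps fit in a finite interval.

def dichotomy_partial_sum(n):
    """Sum of the first n terms 1/2 + 1/4 + ... + 1/2^n."""
    return sum(0.5 ** k for k in range(1, n + 1))

print(dichotomy_partial_sum(10))   # 0.9990234375, i.e. 1 - 2**-10
print(dichotomy_partial_sum(50))   # very close to 1
```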
But the theory of real numbers is far from trivial. It became fully rigorous only considerably after the rise of Newtonian physics. At first, the practical tools of calculus seemed to require infinitesimals, which seemed logically suspect. Thanks to the work of Dedekind, Cauchy, Weierstrass, Cantor and others, a beautiful formalism was developed to handle real numbers, limits, and the concept of infinity in a precise axiomatic manner.
However, the logical problems are not gone. Gödel’s theorems hang like a dark cloud over the axioms of mathematics, assuring us that any consistent theory as strong as Peano arithmetic, or stronger, cannot prove itself consistent. Worse, it will leave some questions unsettled.
For example: how many real numbers are there? The continuum hypothesis proposes a conservative answer, but the usual axioms of set theory leave this question open: there could be vastly more real numbers than most people think. And the superficially plausible axiom of choice—which amounts to saying that the product of any collection of nonempty sets is nonempty—has scary consequences, like the existence of non-measurable subsets of the real line. This in turn leads to results like that of Banach and Tarski: one can partition a ball of unit radius into six disjoint subsets, and by rigid motions reassemble these subsets into two disjoint balls of unit radius. (Later it was shown that one can do the job with five pieces, but no fewer.)
However, most mathematicians and physicists are inured to these logical problems. Few of us bother to learn about attempts to tackle them head-on, such as:
• nonstandard analysis and synthetic differential geometry, which let us work consistently with infinitesimals,
• constructivism, which avoids proof by contradiction: for example, one must ‘construct’ a mathematical object to prove that it exists,
• finitism, which avoids infinities altogether,
• ultrafinitism, which even denies the existence of very large numbers.
This sort of foundational work proceeds slowly, and is now deeply unfashionable. One reason is that it rarely seems to intrude in ‘real life’ (whatever that is). For example, it seems that no question about the experimental consequences of physical theories has an answer that depends on whether or not we assume the continuum hypothesis or the axiom of choice.
But even if we take a hard-headed practical attitude and leave logic to the logicians, our struggles with the continuum are not over. In fact, the infinitely divisible nature of the real line—the existence of arbitrarily small real numbers—is a serious challenge to almost all of the most widely used theories of physics.
Indeed, we have been unable to rigorously prove that most of these theories make sensible predictions in all circumstances, thanks to problems involving the continuum.
One might hope that a radical approach to the foundations of mathematics—such as those listed above—would let us avoid some of the problems I’ll be discussing. However, I know of no progress along these lines that would interest most physicists. Some of the ideas of constructivism have been embraced by topos theory, which also provides a foundation for calculus with infinitesimals using synthetic differential geometry. Topos theory and especially higher topos theory are becoming important in mathematical physics. They’re great! But as far as I know, they have not been used to solve the problems I want to discuss here.
Today I’ll talk about one of the first theories to use calculus: Newton’s theory of gravity.
In its simplest form, Newtonian gravity describes ideal point particles attracting each other with a force inversely proportional to the square of their distance. It is one of the early triumphs of modern physics. But what happens when these particles collide? Apparently the force between them becomes infinite. What does Newtonian gravity predict then?
Of course real planets are not points: when two planets come too close together, this idealization breaks down. Yet if we wish to study Newtonian gravity as a mathematical theory, we should consider this case. Part of working with a continuum is successfully dealing with such issues.
In fact, there is a well-defined ‘best way’ to continue the motion of two point masses through a collision. Their velocity becomes infinite at the moment of collision but is finite before and after. The total energy, momentum and angular momentum are unchanged by this event. So, a 2-body collision is not a serious problem. But what about a simultaneous collision of 3 or more bodies? This seems more difficult.
Worse than that, Xia proved in 1992 that with 5 or more particles, there are solutions where particles shoot off to infinity in a finite amount of time!
This sounds crazy at first, but it works like this: a pair of heavy particles orbit each other, another pair of heavy particles orbit each other, and these pairs toss a lighter particle back and forth. Xia and Saari’s nice expository article has a picture of the setup.
Each time the lighter particle gets thrown back and forth, the pairs move further apart from each other, while the two particles within each pair get closer together. And each time they toss the lighter particle back and forth, the two pairs move away from each other faster!
As the time approaches a certain value the speed of these pairs approaches infinity, so they shoot off to infinity in opposite directions in a finite amount of time, and the lighter particle bounces back and forth an infinite number of times!
Of course this crazy behavior isn’t possible in the real world, but Newtonian physics has no ‘speed limit’, and we’re idealizing the particles as points. So, if two or more of them get arbitrarily close to each other, the potential energy they liberate can give some particles enough kinetic energy to zip off to infinity in a finite amount of time! After that time, the solution is undefined.
You can think of this as a modern reincarnation of Zeno’s paradox. Suppose you take a coin and put it heads up. Flip it over after 1/2 a second, then flip it over after 1/4 of a second, and so on. After one second, which side will be up? There is no well-defined answer. That may not bother us, since this is a contrived scenario that seems physically impossible. It’s a bit more bothersome that Newtonian gravity doesn’t tell us what happens to our particles after they shoot off to infinity.
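To make the coin supertask concrete, here's a small sketch (my own illustration): every one of the infinitely many flips happens strictly before t = 1, yet the coin's state keeps alternating, so it has no limit.

```python
# Zeno-style supertask: flip a coin after 1/2 s, then 1/4 s, then 1/8 s, ...
# All infinitely many flips occur before t = 1 second, yet the state
# (heads/tails) has no limit, so "which side is up at t = 1?" is undefined.

def state_after_flips(n):
    """Coin state after n flips: True = heads (the coin starts heads up)."""
    return n % 2 == 0

def time_of_flip(n):
    """Time of the n-th flip: 1/2 + 1/4 + ... + 1/2^n seconds."""
    return sum(0.5 ** k for k in range(1, n + 1))

# every flip occurs strictly before t = 1 ...
assert all(time_of_flip(n) < 1 for n in range(1, 50))
# ... but the state alternates forever, with no well-defined limit:
print([state_after_flips(n) for n in range(5)])  # [True, False, True, False, True]
```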
You might argue that collisions and these more exotic ‘noncollision singularities’ occur with probability zero, because they require finely tuned initial conditions. If so, perhaps we can safely ignore them!
This is a nice fallback position. But to a mathematician, this argument demands proof.
A bit more precisely, we would like to prove that the set of initial conditions for which two or more particles come arbitrarily close to each other within a finite time has ‘measure zero’. This would mean that ‘almost all’ solutions are well-defined for all times, in a very precise sense.
In 1977, Saari proved that this is true for 4 or fewer particles. However, to the best of my knowledge, the problem remains open for 5 or more particles. Thanks to previous work by Saari, we know that the set of initial conditions that lead to collisions has measure zero, regardless of the number of particles. So, the remaining problem is to prove that noncollision singularities occur with probability zero.
It is remarkable that even Newtonian gravity, often considered a prime example of determinism in physics, has not been proved to make definite predictions, not even ‘almost always’! In 1814, Laplace wrote:
We ought to regard the present state of the universe as the effect of its antecedent state and as the cause of the state that is to follow. An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present to its eyes. The perfection that the human mind has been able to give to astronomy affords but a feeble outline of such an intelligence.—Laplace
However, this dream has not yet been realized for Newtonian gravity.
I expect that noncollision singularities will be proved to occur with probability zero. If so, the remaining question would be why it takes so much work to prove this, and thus prove that Newtonian gravity makes definite predictions in almost all cases. Is this a weakness in the theory, or just the way things go? Clearly it has something to do with three idealizations:
• point particles whose distance can be arbitrarily small,
• potential energies that can be arbitrarily large and negative,
• velocities that can be arbitrarily large.
These are connected: as the distance between point particles approaches zero, their potential energy approaches −∞, and conservation of energy dictates that some velocities approach +∞.
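A small sketch (my own, using nothing but energy conservation) of how this blowup happens for two point masses falling toward each other from rest: the liberated potential energy makes the relative speed diverge like r^(−1/2) as the separation r shrinks.

```python
import math

G = 6.674e-11  # gravitational constant, m^3 kg^-1 s^-2

def infall_speed(m1, m2, r0, r):
    """Relative speed of two point masses falling from rest at separation r0,
    when they reach separation r. Energy conservation gives
        (1/2) mu v^2 = G m1 m2 (1/r - 1/r0),   mu = m1 m2 / (m1 + m2),
    so v = sqrt(2 G (m1 + m2) (1/r - 1/r0)), which diverges as r -> 0."""
    return math.sqrt(2 * G * (m1 + m2) * (1 / r - 1 / r0))

# two 1 kg point masses released 1 m apart: speed grows without bound
for r in [1e-2, 1e-6, 1e-12]:
    print(r, infall_speed(1, 1, 1.0, r))
```

Shrinking r by a factor of 100 multiplies the speed by roughly 10, with no upper limit: the continuum lets r get arbitrarily small, so v gets arbitrarily large.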
Does the situation improve when we go to more sophisticated theories? For example, does the ‘speed limit’ imposed by special relativity help the situation? Or might quantum mechanics help, since it describes particles as ‘probability clouds’, and puts limits on how accurately we can simultaneously know both their position and momentum?
Next time I’ll talk about quantum mechanics, which indeed does help.
• Part 1: introduction; the classical mechanics of gravitating point particles.
• Part 2: the quantum mechanics of point particles.
• Part 3: classical point particles interacting with the electromagnetic field.
• Part 4: quantum electrodynamics.
• Part 5: renormalization in quantum electrodynamics.
• Part 6: summing the power series in quantum electrodynamics.
• Part 7: singularities in general relativity.
• Part 8: cosmic censorship in general relativity; conclusions.
looks for Like button LIKE : )
Is a ‘point’ a continuous idealisation or a discrete one? And isn’t synthetic differential geometry fundamentally about continuous objects? E.g. infinitely differentiable functions etc. JL Bell’s book on The Continuous and the Infinitesimal in Mathematics and Philosophy is an interesting read on these issues. And Lawvere of category theory fame actually started out as a student of continuum mechanics under Truesdell, who regarded point particle mechanics as ‘degenerate’ and ‘too discrete’.
You’re reminding me of some quotes by Einstein. In 1916 he wrote a letter to a former student, Walter Dällenbach, saying:
This is interesting in part because it shows how much Einstein thought about these issues, which he never really wrote anything definitive about. In 1935 he wrote to Langevin:
These quotes are analyzed in:
• John Stachel, The other Einstein: Einstein contra field theory, Science in Context 6 (1993), 275–290.
I thank Richard Gaylord for sending me a copy of this paper.
Interesting quotes, thanks. I’ll have to have a read.
Somewhat related – I think Schrödinger asked something like ‘what is a particle that has no trajectory or path?’. Which reminds me of Russell’s proposed ‘at-at’ resolution of Zeno’s paradox.
The whole ‘discrete vs continuous’ question (and what that even means) remains pretty fascinating even after all these years since the Greeks (and whoever else)!
This is slightly off-topic, but I was recently reminded that the idealised version of Newtonian physics used in a lot of high school physics problems suffers from some curious defects.
Suppose you place a perfectly rigid plank on a perfectly rigid table, with a portion of the plank jutting out over the edge of the table. Can you describe the pressure exerted by the plank on the table?
The answer is: you can’t, or at least you can’t without making further assumptions. If all you have to go on is the static balance of forces and torques, that’s not enough to solve the problem. You need some kind of non-rigid material model for the objects; for example, you can assume that the surface of the table is perfectly elastic (with a huge spring constant), so the force it exerts against the planar surface of the plank is piecewise linear.
Engineers, of course, learn all about this kind of thing very early on, and refer to situations like this as “hyperstatic”. But in high school, all the problems they give you tend to be carefully contrived to conceal the fact that the toy model being used is indeterminate even in some very simple situations.
Hmm, I don’t remember ever learning about this!
(I’m at the age where cautious statements of this sort become advisable.)
In this problem we have finitely many equations and infinitely many unknowns. But it seems the continuum nature of the plank is not fundamental here! Even if the plank met the table at a finite grid of points—imagine little nails poking out of the bottom of the plank—you couldn’t compute the force exerted by the table at these points: more unknowns than equations.
Somehow this reminds me of two things: a bed of nails, and how four-legged chairs tend to be wobbly.
Right, even a rigid thin rod with three point-like supports is hyperstatic! That’s why they always limit these kinds of problems to two supports in high school physics classes. It’s not surprising that professional engineers need to consider the way the deck of a real-world bridge with three supports will flex, in order to make it safe, but it’s a bit startling, at least initially, to realise that you need to consider something similar just to get a unique solution for the forces in the simplest possible toy model of this system.
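The indeterminacy described above is easy to see in code. The sketch below (my own toy model, not anyone's engineering practice) puts a rigid straight beam on three identical springs: rigid statics gives only two equations (force and torque balance) for three unknown forces, but the spring model reduces the unknowns to the two parameters of the beam's linear deflection, making the forces determinate.

```python
def spring_support_forces(xs, k, W, x_cm):
    """Forces on a rigid straight beam of weight W (centre of mass at x_cm)
    resting on identical springs of stiffness k at positions xs.
    The beam stays straight, so the deflection at x is a + b*x and the
    force at support i is F_i = k*(a + b*x_i). Force and torque balance
    then give 2 equations for the 2 unknowns a, b. With 3+ supports,
    rigid statics alone (2 equations, len(xs) unknown forces) could not
    fix the forces: the problem would be hyperstatic."""
    n = len(xs)
    sx = sum(xs)
    sxx = sum(x * x for x in xs)
    # k*(n*a + sx*b) = W  and  k*(sx*a + sxx*b) = W*x_cm
    det = n * sxx - sx * sx
    a = (sxx * W - sx * W * x_cm) / (k * det)
    b = (n * W * x_cm - sx * W) / (k * det)
    return [k * (a + b * x) for x in xs]

# a beam weighing 30 N, centre of mass at x = 1, springs at x = 0, 1, 2
F = spring_support_forces([0.0, 1.0, 2.0], k=1e6, W=30.0, x_cm=1.0)
print(F)  # symmetric case: each support carries 10.0 N
```

With only two supports, the two rigid-statics equations already determine the two forces, which is why high-school problems quietly stop there.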
Greg, now you tell me. As a team building exercise ten people including me (5 standing on each side) were asked to balance an 8 ft hollow aluminum rod on our 20 extended horizontal index fingers, maintaining contact at all times as we lowered it to the ground. We were given 20 minutes and failed, much to our chagrin. But armed with your information, I could have thrown around a little hyperstatic lingo and claimed that the task was theoretically impossible.
Except that another team managed to do it :-)
that’s not a bug, that’s a feature! it’s implicit in how machine builders flatten lathe beds to submicron precision, and in how astronomers grind spherical lenses and mirrors!
Here’s a question that’s only tangentially related for now—it’ll become relevant much later in the series, but I’m too impatient.
Some people like to wonder what the electron would be like if it were a tiny black hole described using general relativity, not quantum mechanics. But in fact it would not be a black hole, it would be a naked singularity. I’ve been trying to fix the Wikipedia article on this topic:
• Wikipedia, Black hole electron.
I’ve dramatically overhauled it, and I’m pretty happy except for one thing. There’s a certain length scale, call it r_q, associated to a charge q, which shows up in the theory of charged black holes. I believe that r_q = √(Gq²/4πε₀c⁴).
This was already on Wikipedia and I think it’s right. But whoever calculated its value for the elementary charge got an answer different from the one I’m getting: annoyingly close, yet far!
Just to lay my cards on the table, I’m using the standard values of the constants involved.
The ratio of my answer and theirs is suspiciously close to 1.5. This is really pissing me off.
I agree with all your constants, and using the formula you provide I get the same answer as you. So if the formula is correct, that other answer must be wrong. I can’t really guess what they did differently; if you substitute a 9 for the 4 in the denominator, you do get pretty close to their answer … but that’s a weird mistake to make.
A 9 looks sorta like a 4 in some people’s handwriting… but still!
Great, I was getting pretty frustrated at this!
PS: Misner, Thorne and Wheeler give your answer as the length in metres corresponding to the elementary charge, when using geometric units. So the charge parameter in the Reissner–Nordström metric is really just the charge of the hole in geometric units.
This is reassuring.
I have their book back at home in California. Here I have an electronic version, but I think those formulas are in the back flap, which I don’t have.
I don’t know a snappy interpretation of this length, since the singularity formed by a point charge whose charge length is much larger than its Schwarzschild radius will not have a horizon or ergosphere. But any reasonable definition of the ‘size’ of the blob of warped space surrounding a point charge should give something on the order of this length.
It’s interesting that the characteristic size of the ring singularity that arises when we include the electron’s angular momentum in the calculation vastly exceeds this charge length, which in turn vastly exceeds the electron’s Schwarzschild radius. The ring is so much bigger than the Planck length that we shouldn’t need quantum gravity to understand what’s going on here! But it’s so close to the Compton wavelength that we should be able to experimentally rule out a ring singularity this big.
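For readers who want to check these orders of magnitude, here's a sketch (my own; the charge-length formula and the choice J = ħ/2 for the spin are my reading of the discussion above, so treat them as assumptions):

```python
import math

# standard SI constants (rounded)
G    = 6.674e-11      # gravitational constant
c    = 2.998e8        # speed of light
hbar = 1.055e-34      # reduced Planck constant
ke   = 8.988e9        # Coulomb constant, 1/(4 pi eps0)
e    = 1.602e-19      # elementary charge
m_e  = 9.109e-31      # electron mass

r_s    = 2 * G * m_e / c**2           # electron's Schwarzschild radius
r_q    = math.sqrt(G * ke) * e / c**2 # assumed charge length sqrt(G q^2 / (4 pi eps0 c^4))
l_P    = math.sqrt(G * hbar / c**3)   # Planck length
a_ring = (hbar / 2) / (m_e * c)       # Kerr ring size a = J/(M c), taking J = hbar/2
lam_C  = hbar / (m_e * c)             # reduced Compton wavelength

print(f"r_s = {r_s:.2e} m, r_q = {r_q:.2e} m, l_P = {l_P:.2e} m")
print(f"ring = {a_ring:.2e} m, reduced Compton = {lam_C:.2e} m")
```

With these numbers the ring size (~10⁻¹³ m) dwarfs the charge length (~10⁻³⁶ m), which in turn dwarfs the Schwarzschild radius (~10⁻⁵⁷ m), and the ring sits right at half the reduced Compton wavelength.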
An even less relevant puzzle, which I record just because I’d otherwise forget:
Puzzle: if you convert one Planck-mass-sized black hole into energy each second, how many 60-watt light bulbs can you power?
It’s worthwhile guessing before one calculates.
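If you'd rather compute than guess (spoiler ahead), here's a quick sketch using the standard value of the Planck mass:

```python
# Back-of-envelope for the puzzle above: convert one Planck mass
# to energy each second, and see how many 60 W bulbs that powers.

c = 2.998e8          # speed of light, m/s
m_planck = 2.176e-8  # Planck mass, kg

power = m_planck * c**2   # E = m c^2 per second, i.e. watts
bulbs = power / 60.0
print(f"{bulbs:.2e} bulbs")  # roughly 3e7: tens of millions
```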
Have you chanced upon this paper about “Topos Theory in the Foundation of Physics?” https://arxiv.org/abs/0803.0417
Its motivations seem complex, but it too mentions problems with the real numbers. (After attending a talk by Döring, I tried to read that paper. But I didn’t get past the first 15 or so pages for lack of time. Besides, since I’m no physicist, I’m unable to judge whether that direction is promising. But I’d like to know.)
Sure, I’ve read that paper by Isham and Döring—there aren’t that many people interested in quantum gravity and category theory, so we all know each other and follow each other’s work.
Chris Isham was for many years famous as the “conscience” of quantum gravity. He’d write papers reviewing work on the subject and pointing out problems, both conceptual and technical, without ever making a strong push for an approach of his own. This sort of careful attitude was especially important when loop quantum gravity and string theory took off, each with their own enthusiasts who tended to be blind to their favored theory’s faults.
For example, if you ever become deeply interested in the all-important “problem of time” in quantum gravity, and want to delve into the dozens of attempts to solve this and why none of them have succeeded, his paper Canonical quantum gravity and the problem of time is required reading.
Or—more likely—if you ever just want an overview of why quantum gravity is so tough, try his paper Structural issues in quantum gravity.
Over the decades he leaned ever more to the position that we weren’t thinking radically enough. Thus it was not a complete shock that when Andreas Döring burst onto the scene with his topos-theoretic approach to physics, Isham became very interested in this… but it was a surprise to me that Isham for the first time advocated an approach to quantum gravity! He once told me something like “I really think this is it.”
Personally I am not convinced. Topos theory is the beautiful, perfected modern approach to intuitionistic logic, showing how this logic (which drops the principle of excluded middle “P or not P”) arises naturally from sheaf theory (the mathematical theory of “local truth”, which lets us say where a statement is true). Everyone who loves deep mathematics should learn at least a bit of topos theory.
However, topos theory was not designed with quantum logic in mind, and I don’t think it’s a sufficient framework for grappling with quantum theory.
Urs Schreiber’s work using cohesive ∞-topoi to understand ‘stringy’ geometry is more interesting to me.
I could say a lot more about this, but instead I’ll just suggest that if you want to know why Isham likes topos theory, a better place to start is his paper Topos methods in the foundations of physics.
Here was my attempt to explain Isham and Döring’s work in week257 of This Week’s Finds, written in 2007 after a conference at Imperial College.
I also spoke to Andreas Döring and Chris Isham about their work on topos theory and quantum physics. Andreas Döring lives near Greenwich, while Isham lives across the Thames in London proper. So, I talked to Döring a couple times, and once we visited Isham at his house.
I mainly mention this because Isham is one of the gurus of quantum gravity, profoundly interested in philosophy… so I was surprised, at the end of our talk, when he showed me into a room with a huge rack of computers hooked up to a bank of about 8 video monitors, and controls reminiscent of an airplane cockpit.
It turned out to be his homemade flight simulator! He’s been a hobbyist electrical engineer for years – the kind of guy who loves nothing more than a soldering iron in his hand. He’d just gotten a big 750-watt power supply, since he’d blown out his previous one.
Anyway, he and Döring have just come out with a series of papers:
14) Andreas Döring and Christopher Isham, A topos foundation for theories of physics: I. Formal languages for physics, available as arXiv:quant-ph/0703060.
II. Daseinisation and the liberation of quantum theory, available as arXiv:quant-ph/0703062.
III. The representation of physical quantities with arrows, available as arXiv:quant-ph/0703064.
IV. Categories of systems, available as arXiv:quant-ph/0703066.
Though they probably don’t think of it this way, you can think of their work as making precise Bohr’s ideas on seeing the quantum world through classical eyes. Instead of talking about all observables at once, they consider collections of observables that you can measure simultaneously without the uncertainty principle kicking in. These collections are called "commutative subalgebras".
You can think of a commutative subalgebra as a classical snapshot of the full quantum reality. Each snapshot only shows part of the reality. One might show an electron’s position; another might show its momentum.
Some commutative subalgebras contain others, just like some open sets of a topological space contain others. The analogy is a good one, except there’s no one commutative subalgebra that contains all the others.
Topos theory is a kind of "local" version of logic, but where the concept of locality goes way beyond the ordinary notion from topology. In topology, we say a property makes sense "locally" if it makes sense for points in some particular open set. In the Döring-Isham setup, a property makes sense "locally" if it makes sense "within a particular classical snapshot of reality" – that is, relative to a particular commutative subalgebra.
(Speaking of topology and its generalizations, this work on topoi and physics is related to the "étale topology" idea I mentioned a while back – but technically it’s much simpler. The étale topology lets you define a topos of sheaves on a certain category. The Döring-Isham work just uses the topos of presheaves on the poset of commutative subalgebras. Trust me – while this may sound scary, it’s much easier.)
Döring and Isham set up a whole program for doing physics "within a topos", based on existing ideas on how to do math in a topos. You can do vast amounts of math inside any topos just as if you were in the ordinary world of set theory – but using intuitionistic logic instead of classical logic. Intuitionistic logic denies the principle of excluded middle, namely: for any statement P, either P or not P.
In Döring and Isham’s setup, if you pick a commutative subalgebra that contains the position of an electron as one of its observables, it can’t contain the electron’s momentum. That’s because these observables don’t commute: you can’t measure them both simultaneously. So, working "locally" – that is, relative to this particular subalgebra – statements about the electron’s momentum are neither true nor false! They’re just not defined.
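Here's a toy illustration of this noncommutativity (my own, not from the Döring–Isham papers), with Pauli matrices standing in for position and momentum: no commutative subalgebra of 2×2 matrices can contain both, though the subalgebra generated by either one is commutative.

```python
# Two observables that can never share a "classical snapshot":
# Pauli Z and X play the roles of position and momentum for a qubit.
import itertools

def matmul(A, B):
    """Multiply two 2x2 matrices given as nested lists."""
    return [[sum(A[i][k] * B[k][j] for k in range(2)) for j in range(2)]
            for i in range(2)]

Z = [[1, 0], [0, -1]]   # a "position-like" observable
X = [[0, 1], [1, 0]]    # a "momentum-like" observable

ZX, XZ = matmul(Z, X), matmul(X, Z)
print(ZX == XZ)  # False: Z and X don't commute

# but the subalgebra generated by Z (diagonal matrices) is commutative:
diagonals = [[[a, 0], [0, d]] for a in (1, 2) for d in (3, -1)]
assert all(matmul(A, B) == matmul(B, A)
           for A, B in itertools.product(diagonals, repeat=2))
```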
Their work has inspired this very nice paper:
15) Chris Heunen and Bas Spitters, A topos for algebraic quantum theory, available as arXiv:0709.4364.
so let me explain that too.
I said you can do a lot of math inside a topos. In particular, you can define an algebra of observables – or technically, a "C*-algebra".
By the Isham-Döring work I just sketched, any C*-algebra of observables gives a topos. Heunen and Spitters show that the original C*-algebra gives rise to a C*-algebra in this topos, which is commutative even if the original one was noncommutative! That actually makes sense, since in this setup each "local view" of the full quantum reality is classical.
So, they get this sort of picture:
I’ve been taking the “ambient topos” to be the familiar category of sets, but it could be something else.
What’s really neat is that the Gelfand-Naimark theorem, saying commutative C*-algebras are always algebras of continuous functions on compact Hausdorff spaces, can be generalized to work within any topos. So, we get a space in our topos such that observables of the C*-algebra in the topos are just functions on this space.
I know this sounds technical if you’re not into this stuff. But it’s really quite wonderful. It basically means this: using topos logic, we can talk about a classical space of states for a quantum system! However, this space typically has "no global points" – that’s called the “Kochen-Specker theorem”. In other words, there’s no overall classical reality that matches all the classical snapshots.
Thanks, that was one content-heavy answer! I spent Saturday following various links you supplied. I think there is a typo on page 697 of Urs Schreiber’s article. Just kidding.
Seriously, I did get a better feel for the area, in particular from your own explanations, from Isham’s “Topos methods in the foundations of physics”, and from quickly surveying the “problem of time” stuff.
Lots of open questions, among them: “What’s the replacement for the reals, in particular considering that topoi already have Dedekind reals?” And “Does the replacement for the reals imply that spectral theory must be rebuilt from scratch?”
But I feel I should not pursue this now, and stick to learning more basic stuff. I see these topos exploits as a mere sneak peek on my part.
Thanks for the great article. I’d gotten a rough idea of the 5-particle oscillating scenario earlier, but it never sunk in that a consequence was nondeterminism (without collisions, yet).
It also never sunk in (if I understand your description) that the ratio of the light particle’s average speed to the average speed of the other four particles grows without bound, even as the latter is itself growing without bound.
Yes! It’s always good, when you hear about some phenomenon in some theory that has time reversal symmetry, to think about its time-reversed version. So if particles shooting off to infinity and disappearing in finite time sounds “weird but acceptable” you should ask whether particles suddenly shooting in from infinity when they weren’t there before is also acceptable.
I forgot to mention this in my article. But later I’ll talk about white holes and some weird time-reversal issues in the classical electrodynamics of point particles.
I hadn’t thought about it quite that way, but I guess it must be true!
“These paradoxes can now be dismissed using our theory of real numbers.”
We don’t need a theory of real numbers. The distance rapidly becomes too small to matter or to measure. How much difference does half a micron make in a mile long running race? None at all.
That’s a fine philosophy, but try to redo all of physics without mentioning real numbers and you’ll find it’s not so easy. It’s possible this is exactly what we need to do! But it’s not easy.
[…] Last time we saw that nobody yet knows if Newtonian gravity, applied to point particles, truly succeeds in predicting the future. To be precise: for five or more particles, nobody has proved that almost all initial conditions give a well-defined solution for all times! […]
Do people try to formalise information theory without mentioning the reals?
Tobias Fritz has a nice paper featuring an axiomatic theory of ‘resources’, which could be all sorts of things:
• Tobias Fritz, Resource convertibility and ordered commutative monoids.
He goes in the direction of adding axioms until you can deduce that your resources can be measured by an n-tuple of real numbers (with n possibly infinite). The axioms don’t mention the real numbers.
Thanks! I don’t understand the paper, but that’s OK. Its existence answers my question with a ‘yes’. And it does seem intuitive that it would be the communication channels that are the resource objects.
I am thinking that if a number (for example four) of black holes orbit along (roughly) the classical trajectories of Xia, then an ejection to almost-infinity at some maximum velocity might be possible.
You might be interested to know that the gentle 3-body version of this 5-body finite time blowup is the main mechanism for globular cluster evaporation. In globular clusters, when a single star has a close encounter with a pair of stars in a binary, the single star tends to gain kinetic energy at the cost of the binary pair orbiting closer together.
A Thousand Blazing Suns: The Inner Life of Globular Clusters
Sometimes, the solo star gains cluster escape velocity and evaporates from the cluster.
I thought that a black hole with an ejection speed close to that of light should have visible effects in a nebula.
In this series we’re looking at mathematical problems that arise in physics due to treating spacetime as a continuum—basically, problems with infinities.
In Part 1 we looked at classical point particles interacting gravitationally. We saw they could convert an infinite amount of potential energy into kinetic energy in a finite time! Then we switched to electromagnetism, and went a bit beyond traditional Newtonian mechanics: in Part 2 we threw quantum mechanics into the mix, and in Part 3 we threw in special relativity. Briefly, quantum mechanics made things better, but special relativity made things worse.
Now let’s throw in both!
Here are my quibbles. Newtonian mechanics in general and Newtonian gravity in particular model physical systems as flows of smooth vector fields on smooth manifolds. (In the case of massive particles interacting by Newton’s law, the vector field lives on the tangent bundle of the configuration space minus the “polydiagonals” where two or more particles coincide.) But integral curves of smooth vector fields can easily run off to infinity in finite time. And flows of smooth vector fields can be chaotic. So why would it be surprising that these things (incompleteness of the vector field and chaos) happen in the Newtonian n-body problem? It’s more shocking that the 2-body problem is completely integrable.
One may be surprised that “deterministic” (existence, uniqueness and smooth dependence on initial conditions) doesn’t mean “predictable in practice.” It seems to me that this kind of surprise has nothing to do with Newtonian physics and more of a problem of a failure of imagination. It’s hard to imagine what a complex system may do (by definition of complex).
Thanks for explaining your quibbles. None of them move me to change what I wrote, because they basically depend on what counts as “surprising”, which is highly subjective.
As a writer, it’s generally a better strategy to say “this is surprising” and then explain it, than to say “if you’re smart, this will not be surprising”.
Continuing my counter-nitpicking:
I didn’t mention chaos at all; that’s not part of the theme in this series. I’m not getting into the issue of ‘in practice’ predictability, just ‘in principle’ predictability. So, one main theme is whether the most famous theories of physics lead to well-defined time evolution given initial data—or for quantum field theory (where the initial value problem is too hard) whether we can define the amplitude of having some particles with some momenta at future infinity given similar data at past infinity.
Most physicists would not be surprised at the collision singularities in the Newtonian gravitational n-body problem. They would be surprised by non-collision singularities where particles manage to ‘mine’ an infinite amount of potential energy and convert it to kinetic energy in a finite amount of time. The battle between kinetic and potential energy is a second theme of this series. In the quantum version of the Newtonian gravitational n-body problem, kinetic energy triumphs over potential energy and all is well. In perturbative QED, where arbitrarily large numbers of particles can be created, potential energy fights back! — and thus, Dyson argues, the power series diverge.
Physicists would attempt to shrug off both the collision and non-collision singularities in the Newtonian gravitational n-body problem by arguing that they occur only for initial data in a set of measure zero.
If this is true, we can define time evolution not for individual points of phase space but for probability distributions on phase space—or we can use ‘Koopmanism’ to define time evolution as a strongly continuous 1-parameter unitary group on L² of phase space.
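As a toy illustration of the Koopman idea (my own sketch, using a discrete-time measure-preserving map rather than an n-body flow—all names and parameters here are my choices): composing observables with an irrational rotation of the circle, which preserves Lebesgue measure, gives an operator that preserves the L² inner product, i.e. acts unitarily.

```python
import numpy as np

# Koopman sketch: the irrational rotation T(x) = x + a (mod 1) preserves
# Lebesgue measure on the circle, so the composition operator (Uf)(x) = f(T(x))
# is unitary on L^2[0,1). We check numerically that U preserves inner products.
a = np.sqrt(2) - 1                          # irrational rotation number
x = (np.arange(200_000) + 0.5) / 200_000    # midpoint quadrature grid on [0,1)
Tx = (x + a) % 1.0

f = np.sin(2 * np.pi * x)
g = np.cos(4 * np.pi * x)
Uf = np.sin(2 * np.pi * Tx)                 # U f = f composed with T
Ug = np.cos(4 * np.pi * Tx)

inner = lambda u, v: np.mean(u * v)         # approximates the L^2 inner product
assert abs(inner(Uf, Ug) - inner(f, g)) < 1e-6   # <Uf, Ug> = <f, g>
assert abs(inner(Uf, Uf) - inner(f, f)) < 1e-6   # norms are preserved too
```

The continuous-time analog replaces T by the time-t flow map of the dynamical system, giving the strongly continuous 1-parameter unitary group mentioned above.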
But in fact nobody has proved this is true except for low n. I think most physicists would find that surprising.
So, another general theme is that a lot of basic questions about our favorite theories of physics, like the extent to which they allow us to predict the future in principle, remain unsettled. And, this tends to be connected to infinities that arise when there are states of arbitrarily negative potential energy.
On second thought, there probably is something I should change about my paper: I should probably add some remarks like this to the Conclusion, which right now is rather hasty.
What I find surprising is that Koopman operators and ergodic theory have deep applications in number theory and combinatorics. There is, for example, a proof of Szemerédi’s theorem by H. Furstenberg using ergodic theory.
A recent account of how these concepts relate physics and mathematics can be found in T. Eisner et al., Operator Theoretic Aspects of Ergodic Theory.
Knowing that there exist smooth vector fields with a certain property (like V(x) = x^2 d/dx on the real line, whose integral curves run off to infinity in finite time) is a far cry from knowing that a special class of vector fields (those coming from classical mechanics of point masses in Euclidean space) can do the same thing.
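To make that finite-time blowup concrete (a minimal sketch; the sample times and tolerance are arbitrary choices): the vector field V(x) = x² d/dx corresponds to the ODE dx/dt = x², whose solution with x(0) = 1 is x(t) = 1/(1 − t), escaping to infinity as t → 1.

```python
def x(t):
    """Exact solution of dx/dt = x^2 with x(0) = 1: blows up at t = 1."""
    return 1.0 / (1.0 - t)

# Verify the ODE by central finite differences at times approaching the blowup;
# the solution grows without bound as t -> 1, at finite t.
h = 1e-6
for t in [0.0, 0.5, 0.9, 0.99]:
    deriv = (x(t + h) - x(t - h)) / (2 * h)
    assert abs(deriv - x(t) ** 2) < 1e-3 * x(t) ** 2
```

The same qualitative behavior—escape to infinity in finite time—is what the non-collision singularities of the n-body problem achieve, but there it takes real work to show the gravitational vector field can do it.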
John Baez reviews the surprising richness in the (non-)well-posedness of dynamical physics theories. (This is the first of a multi-part series; see Baez’s sidebar.)
Have you seen the recent work by Montgomery and Moeckel? They find a beautiful geometric way to regularize the entire configuration space (including 3-body collisions) in the planar 3-body problem. In the end they get out a 4-fold octahedral covering of the Riemann sphere!
The nicest resource is the talk of Moeckel – be sure to get the one with the videos to really appreciate the beauty.
I’ve seen some of Montgomery’s work on regularizing the planar 3-body problem, but I’m not sure I’ve seen that paper.
It’s starting to sound like covering spaces and Galois theory are connected not only to the classic problems of trisecting the angle, doubling the cube and solving the quintic, but also to the 3-body problem!
Great posts and comments! A similar problem: can a finite number of billiard balls have infinitely many collisions? Suppose you have some number, say n, of billiard balls. They can have whatever masses and radii you like. Set them off in 2- or 3-dimensional (unbounded) space (or higher if you like!) with any initial positions and momenta you choose. Can such a configuration experience infinitely many collisions?
I think this is fairly easy: take two heavy billiard balls moving toward each other, and a light one bouncing back and forth between them. Assume this is in 1-dimensional space – or in other words, they’re all moving along a line. I haven’t done the calculation, but I think the light one can bounce back and forth infinitely many times before all three touch.
This reminds me a bit of that joke about John von Neumann, who was famous for being quick at calculations:
Von Neumann and a friend were on a train, and they were getting a bit bored, so his friend posed him a puzzle.
“Suppose two trains on the same track begin a mile apart and head towards each other at 60 miles an hour. A fly starting at the front of one train flies at 120 mph to the other train, and then flies back to the first train, and so on, back and forth until it gets squashed when the trains collide. How far does the fly travel?”
Von Neumann thought about it a moment and said, “One mile”.
His friend said, “Wow! You’re really good. Most people don’t notice the easy way to solve it: the trains meet in half a minute, and the fly can travel 1 mile in half a minute. They think they have to sum an infinite series!”
Von Neumann looked stunned, and said “But that’s how I did it.”
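Von Neumann’s hard way can be checked in a few lines (a sketch with the numbers from the joke: trains at 60 mph, fly at 120 mph, starting gap of 1 mile): summing the fly’s legs gives a geometric series converging to the easy answer.

```python
train_speed = 60.0   # mph, each train
fly_speed = 120.0    # mph
gap = 1.0            # miles between the trains
total = 0.0          # distance the fly travels

while gap > 1e-12:
    # time for the fly to reach the oncoming train
    # (they close the gap at fly_speed + train_speed)
    t = gap / (fly_speed + train_speed)
    total += fly_speed * t
    # meanwhile both trains moved, shrinking the gap
    gap -= 2 * train_speed * t

# the legs are 2/3, 2/9, 2/27, ... miles: a geometric series summing to 1
assert abs(total - 1.0) < 1e-9
```

Each iteration shrinks the gap by a factor of 3, so the infinite bouncing takes only finite time and covers finite distance—exactly the resolution of Zeno’s paradoxes discussed at the start of this series.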
My kids (4/7) have a magazine called “Crazy Words” (circulation 20). One of the long-running columns is “Collisions We’d Like to See”, with examples such as Butter —> Peanut Butter. I will see if the editors want to publish “Von Neumann —> Infinite Series”. Anyway, I tried to run this on the computer. Maybe I ran my numbers wrong, but it seemed simple enough, and it didn’t work. You can have arbitrarily many collisions with this configuration, but eventually the balls stop colliding (and surprisingly quickly).
I thought the answer was, no, you can’t have infinitely many. But I don’t know the proof.
This is a fascinating series. One minor point though: the first occurrence of the word “velocity” in Part 1 should be changed to “acceleration”. (For equal-mass point particles, their velocities change sign between just before and just after they collide at t = 0. Mathematically the velocity is a step function, undefined at t = 0. Physically it seems reasonable to define v = 0 at t = 0.)