The Circular Electron Positron Collider

16 September, 2016

Chen-Ning Yang is perhaps China’s most famous particle physicist. Together with Tsung-Dao Lee, he won the Nobel prize in 1957 for discovering that the laws of physics know the difference between left and right. He helped create Yang–Mills theory: the theory that describes all the forces in nature except gravity. He helped find the Yang–Baxter equation, which describes what particles do when they move around on a thin sheet of matter, tracing out braids.

Right now the world of particle physics is in a shocked, somewhat demoralized state because the Large Hadron Collider has not yet found any physics beyond the Standard Model. Some Chinese scientists want to forge ahead by building an even more powerful, even more expensive accelerator.

But Yang recently came out against this. This is a big deal, because he is very prestigious, and only China has the will to pay for the next machine. The director of the Chinese institute that wants to build the next machine, Wang Yifang, issued a point-by-point rebuttal of Yang the very next day.

Over on G+, Willie Wong translated some of Wang’s rebuttal in some comments to my post on this subject. The real goal of my post here is to make this translation a bit easier to find—not because I agree with Wang, but because this discussion is important: it affects the future of particle physics.

First let me set the stage. In 2012, two months after the Large Hadron Collider found the Higgs boson, the Institute of High Energy Physics proposed a bigger machine: the Circular Electron Positron Collider, or CEPC.

This machine would be a ring 100 kilometers around. It would collide electrons and positrons at an energy of 250 GeV, about twice what you need to make a Higgs. It could make lots of Higgs bosons and study their properties. It might find something new, too! Of course that would be the hope.

It would cost $6 billion, and the plan was that China would pay for 70% of it. Nobody knows who would pay for the rest.

According to Science:

On 4 September, Yang, in an article posted on the social media platform WeChat, says that China should not build a supercollider now. He is concerned about the huge cost and says the money would be better spent on pressing societal needs. In addition, he does not believe the science justifies the cost: The LHC confirmed the existence of the Higgs boson, he notes, but it has not discovered new particles or inconsistencies in the standard model of particle physics. The prospect of an even bigger collider succeeding where the LHC has failed is “a guess on top of a guess,” he writes. Yang argues that high-energy physicists should eschew big accelerator projects for now and start blazing trails in new experimental and theoretical approaches.

That same day, IHEP’s director, Wang Yifang, posted a point-by-point rebuttal on the institute’s public WeChat account. He criticized Yang for rehashing arguments he had made in the 1970s against building the BEPC. “Thanks to comrade [Deng] Xiaoping,” who didn’t follow Yang’s advice, Wang wrote, “IHEP and the BEPC … have achieved so much today.” Wang also noted that the main task of the CEPC would not be to find new particles, but to carry out detailed studies of the Higgs boson.

Yang did not respond to a request for comment. But some scientists contend that the thrust of his criticisms is against the CEPC’s anticipated upgrade, the Super Proton-Proton Collider (SPPC). “Yang’s objections are directed mostly at the SPPC,” says Li Miao, a cosmologist at Sun Yat-sen University, Guangzhou, in China, who says he is leaning toward supporting the CEPC. That’s because the cost Yang cites—$20 billion—is the estimated price tag of both the CEPC and the SPPC, Li says, and it is the SPPC that would endeavor to make discoveries beyond the standard model.

Still, opposition to the supercollider project is mounting outside the high-energy physics community. Cao Zexian, a researcher at CAS’s Institute of Physics here, contends that Chinese high-energy physicists lack the ability to steer or lead research in the field. China also lacks the industrial capacity for making advanced scientific instruments, he says, which means a supercollider would depend on foreign firms for critical components. Luo Huiqian, another researcher at the Institute of Physics, says that most big science projects in China have suffered from arbitrary cost cutting; as a result, the finished product is often a far cry from what was proposed. He doubts that the proposed CEPC would be built to specifications.

The state news agency Xinhua has lauded the debate as “progress in Chinese science” that will make big science decision-making “more transparent.” Some, however, see a call for transparency as a bad omen for the CEPC. “It means the collider may not receive the go-ahead in the near future,” asserts Institute of Physics researcher Wu Baojun. Wang acknowledged that possibility in a 7 September interview with Caijing magazine: “opposing voices naturally have an impact on future approval of the project,” he said.

Willie Wong prefaced his translation of Wang’s rebuttal with this:

Here is a translation of the essential parts of the rebuttal; some standard Chinese language disclaimers of deference etc are omitted. I tried to make the translation as true to the original as possible; the viewpoints expressed are not my own.

Here is the translation:

Today (September 4) saw the publication of C. N. Yang’s article “China should not build an SSC today”. As a scientist working on the front line of high energy physics, and the current director of the Institute of High Energy Physics of the Chinese Academy of Sciences, I cannot agree with his viewpoint.

(A) The first reason for Dr. Yang’s objection is that a supercollider is a bottomless hole. His objection stems from the American SSC wasting 3 billion US dollars and amounting to naught. The LHC cost over 10 billion US dollars. Thus the proposed Chinese accelerator cannot cost less than 20 billion US dollars, with no guaranteed returns. [Ed: emphasis original]

Here, there are actually three problems. The first is “why did the SSC fail?” The second is “how much would a Chinese collider cost?” And the third is “is the estimate reasonable and realistic?” I address them point by point.

(1) Why did the American SSC fail? Are all colliders bottomless pits?

The many reasons leading to the failure of the American SSC include the government deficit at the time, the fight for funding against the International Space Station, the party politics of the United States, and the regional competition between Texas and other states. Additionally, there were problems with poor management, bad budgeting, ballooning construction costs, and failure to secure international collaboration. See references [2,3] [Ed: consult original article for references; items 1-3 are English language]. In reality, “exceeding the budget” is definitely not the primary reason for the failure of the SSC; rather, the failure should be attributed to some special and circumstantial reasons, caused mainly by political elements.

For the US, abandoning the SSC was a very incorrect decision. It lost the US the chance of discovering the Higgs boson, as well as the foundations and opportunities for future development, and thereby also the leadership position that the US had occupied internationally in high energy physics until then. This definitely had a very negative impact on big science initiatives in the US, and caused one generation of Americans to lose the courage to dream. The reasons given by the American scientific community against the SSC are very similar to what we hear today against the Chinese collider project. But actually the cancellation of the SSC did not increase funding for other scientific endeavors. Of course, going ahead with the SSC would not have reduced the funding for other scientific endeavors either, and many people who objected to the project now regret it.

Since then, the LHC was constructed in Europe, and achieved great success. Its construction exceeded its original budget, but not by a lot. This shows that supercollider projects do not have to be bottomless pits, and have a chance to succeed.

The Chinese political landscape is entirely different from that of the US. In particular, for large scale construction projects, the political system is superior. China has already accomplished many tasks which the Americans would not, or could not, do; many more will happen in the future. The failure of the SSC doesn’t mean that we cannot succeed. We should scientifically analyze the situation, and at the same time foster international collaboration and properly manage the budget.

(2) How much would it cost? Our planned collider (using a circumference of 100 kilometers for computations) will proceed in two steps. [Ed: details omitted. The author estimated that the electron-positron collider will cost 40 billion yuan, followed by the proton-proton collider which will cost 100 billion yuan, not accounting for inflation, with an approximately 10-year construction time for each phase.] The two-phase planning is meant to showcase the scientific longevity of the project, especially the entrainment of other technical developments (e.g. high energy superconductors), and the fact that the second phase [ed: the proton-proton collider] is complementary to the scientific and technical developments of the first phase. The reason that the second phase designs are incorporated in the discussion is to prevent the scenario where design elements of the first phase inadvertently shut off the possibility of further expansion in the second phase.

(3) Is this estimate realistic? Are we going to go down the same road as the American SSC?

First, note that in the past 50 years, there were many successful colliders internationally (LEP, LHC, PEPII, KEKB/SuperKEKB, etc.) and many unsuccessful ones (ISABELLE, SSC, FAIR, etc.). The failed ones are all proton accelerators; all electron colliders have been successful. The main reason is that proton accelerators are more complicated, and it is harder to correctly estimate the costs related to constructing machines beyond the current frontiers.

There are many successful large-scale construction projects in China. In the 40 years since the founding of the Institute of High Energy Physics, we’ve built [list of high energy experiment facilities, I don’t know all their names in English], each costing over 100 million yuan, and none were more than 5% over budget, in terms of actual costs of construction, time to completion, or meeting milestones. We have well-developed expertise in budgeting, construction, and management.

For the CEPC (electron-positron collider) our estimates relied on two methods:

(i) Summing of the parts: separately estimating costs of individual elements and adding them up.

(ii) Comparisons: using costs for elements derived from costs of completed instruments both domestically and abroad.

At the level of the total cost and at the systems level, the two methods should produce cost estimates within 20% of each other.

After completing the initial design [ref. 1], we produced a list of more than 1000 pieces of required equipment, and based our estimates on that list. The estimates were refereed by local and international experts.

For the SPPC (the proton-proton collider; second phase) we only used the second method (comparison). This is because the second phase is not the main mission at hand, and we are not yet sure whether we should commit to it, so it is not very meaningful to discuss its potential cost right now. We are committed to building the SPPC only once we are sure the science and the technology are mature.

(B) The second reason given by Dr. Yang is that China is still a developing country, and there are many socio-economic problems that should be solved before considering a supercollider.

Any country, especially one as big as China, must consider both the immediate and the long term in its planning. Of course socio-economic problems need to be solved, and indeed solving them currently takes the lion’s share of our national budget. But we also need to consider the long term, including an appropriate amount of expenditure on basic research, to enable our continuous development and the potential to lead the world. China at the end of the Qing dynasty had a rich populace and the world’s highest GDP. But even though its government had the ability to purchase armaments, the lack of scientific understanding reduced the country to always being on the losing side of wars.

In the past few hundred years, developments in understanding the structure of matter, from molecules and atoms to the nucleus and the elementary particles, all contributed to and led the scientific developments of their eras. High energy physics pursues the finest structure of matter and its laws; the techniques used cover many different fields, from accelerators and detectors to low temperature, superconductivity, microwaves, high frequency, vacuum, electronics, high precision instrumentation, automatic controls, computer science and networking, and in many ways it has led the developments in those fields and their broad adoption. It is an indicator field for basic science and technical development. Building the supercollider can result in China occupying the leadership position in such diverse scientific fields for several decades, and also lead to the domestic production of many of the important scientific and technical instruments. Furthermore, it will allow us to attract international intellectual capital, and allow the training of thousands of world-leading specialists in our institutes. How is this not an urgent need for the country?

In fact, the impression the Chinese government and the average Chinese people create for the world at large is of a populace with lots of money, and also infatuated with money. It is hard for a large country to have an international voice and influence without significant contributions to human culture. This influence, in turn, affects the benefits China receives from other countries. As a fraction of current GDP, the cost of the proposed project (including the phase 2 SPPC) does not exceed that of the Beijing positron-electron collider completed in the 80s, and is in fact lower than those of LEP, LHC, SSC, and ILC.

Designing and starting the construction of the next supercollider within the next 5 years is a rare opportunity to achieve a leadership position internationally in the field of high energy physics. First, the newly discovered Higgs boson has a relatively low mass, which allows us to probe it further using a circular positron-electron collider; furthermore, such colliders have a chance of later being modified into proton colliders. This facility would have over 5 decades of scientific use. Second, Europe, the US, and Japan all already have scientific items on their agendas, and within 20 years probably cannot construct similar facilities. This gives us an advantage in competitiveness. Third, we already have the experience of building the Beijing positron-electron collider, so such a facility plays to our strengths. The window of opportunity typically lasts only 10 years; if we miss it, we don’t know when the next window will be. Furthermore, we have extensive experience in underground construction, and the Chinese economy is currently at a stage of high growth. We have the ability to do the construction and also the scientific need. Therefore a supercollider is a very suitable project to consider.

(C) The third reason given by Dr. Yang is that constructing a supercollider necessarily excludes funding other basic sciences.

China currently spends 5% of its total R&D budget on basic research; internationally, 15% is more typical for developed countries. As a developing country aiming to join the ranks of developed countries, and as a large country, I believe we should aim to raise the ratio gradually to 10% and eventually to 15%. In terms of numbers, funding for basic science has a large potential for growth (around 100 billion yuan per annum) without taking away from other basic science research.

On the other hand, where should the increased funding be directed? Everyone knows that a large portion of our basic science research budget is spent on purchasing scientific instruments, especially from international sources. If we evenly distribute the increased funding among all basic science fields, the end result is raising the GDP of the US, Europe, and Japan. If we instead spend 10 years putting 30 billion yuan into accelerator science, more than 90% of the money will remain in the country, improving our technical development and the market share of domestic companies. This will also allow us to raise many new scientists and engineers, and greatly improve the state of the art in domestically produced scientific instruments.

In addition, putting emphasis on high energy physics will only bring us to the normal funding level internationally (it is a fact that particle physics and nuclear physics are severely underfunded in China). For the purpose of developing a world-leading big science project, the CEPC is a very good candidate, and it does not contradict a desire to also develop other basic sciences.

(D) Dr. Yang’s fourth objection is that both supersymmetry and quantum gravity have not been verified, and the particles we hope to discover using the new collider will in fact be nonexistent.

That is of course not the goal of collider science. In [ref 1], which I gave to Dr. Yang myself, we clearly discussed the scientific purpose of the instrument. Briefly speaking, the standard model is only an effective theory in the low energy limit, and a new and deeper theory is needed. Even though there is some experimental evidence beyond the standard model, more data will be needed to indicate the correct direction in which to develop the theory. Of the known problems with the standard model, most are related to the Higgs boson. Thus a deeper physical theory should show hints in a better understanding of the Higgs boson. The CEPC can probe the Higgs boson to 1% precision [ed. I am not sure what this means], 10 times better than the LHC. From this we have the hope of correctly identifying various properties of the Higgs boson, and testing whether it in fact matches the standard model. At the same time, the CEPC has the possibility of measuring the self-coupling of the Higgs boson, and of understanding the Higgs contribution to the vacuum phase transition, which is important for understanding the early universe. [Ed. in the previous sentence, the translations are a bit questionable since some HEP jargon is used with which I am not familiar] Therefore, regardless of whether the LHC has discovered new physics, the CEPC is necessary.

If there are new coupling mechanisms for the Higgs, new associated particles, a composite structure for the Higgs boson, or other differences from the standard model, we can continue with the second phase, the proton-proton collider, to directly probe the difference. Of course this could be due to supersymmetry, but it could also be due to other particles. For us experimentalists, while we care about theoretical predictions, our experiments are not designed only for them. To predict at this moment in time whether a collider can or cannot discover a hypothetical particle seems premature, and is not the viewpoint of the HEP community in general.

(E) The fifth objection is that in the past 70 years high energy physics has not led to tangible improvements for humanity, and in the future likely will not.

In the past 70 years, there have been many results from high energy physics which have led to techniques common in everyday life. [Ed: the list of examples includes synchrotron radiation, free electron lasers, spallation neutron sources, MRI, PET, radiation therapy, touch screens, smart phones, and the world-wide web. I omit the prose.]

[Ed. Author proceeds to discuss hypothetical economic benefits from
a) superconductor science
b) microwave source
c) cryogenics
d) electronics
sort of the usual stuff you see in funding proposals.]

(F) The sixth reason was that the Institute of High Energy Physics of the Chinese Academy of Sciences has not produced much in the past 30 years, that the major scientific contributions to the proposed collider will be directed by non-Chinese, and that the Nobel prize will therefore also go to a non-Chinese.

[Ed. I’ll skip this section because it is a self-congratulatory pat on one’s own back (we actually did pretty well for the amount of money invested), a promise to promote Chinese participation in the project (in accordance with the economic investment), and the required comment that “we do science for the sake of science, and not for winning the Nobel.”]

(G) The seventh reason is that the future of HEP lies in developing new techniques for accelerating particles and in developing a geometric theory, not in building large accelerators.

A new method of accelerating particles is definitely an important aspect of accelerator science. In the next several decades this can prove useful for scattering experiments or for applied fields where beam confinement is not essential. For high energy colliders, in terms of beam emittance and energy efficiency, new acceleration principles have a long way to go. During this period, high energy physics cannot simply be put on hold. As for “geometric theory” or “string theory”, these are too far from being experimentally approachable, and are not problems we can consider currently.

People disagree on the future of high energy physics. Currently there are no Chinese winners of the Nobel prize in physics, but there are many internationally. Dr. Yang’s viewpoints are clearly out of the mainstream, not just currently, but also over the past several decades. Dr. Yang is documented to have held a pessimistic view of high energy physics and its future since the 60s, and that’s how he missed out on the discovery of the standard model. He has been on record as being against Chinese collider science since the 70s. It is fortunate that the government supported the Institute of High Energy Physics and constructed various supporting facilities, leading to our current achievements in synchrotron radiation and neutron scattering. For the future, we should listen to the younger scientists at the forefront of current research, for that is how we can gain international recognition for our scientific research.

It will be very interesting to see how this plays out.


Struggles with the Continuum (Part 4)

14 September, 2016

In this series we’re looking at mathematical problems that arise in physics due to treating spacetime as a continuum—basically, problems with infinities.

In Part 1 we looked at classical point particles interacting gravitationally. We saw they could convert an infinite amount of potential energy into kinetic energy in a finite time! Then we switched to electromagnetism, and went a bit beyond traditional Newtonian mechanics: in Part 2 we threw quantum mechanics into the mix, and in Part 3 we threw in special relativity. Briefly, quantum mechanics made things better, but special relativity made things worse.

Now let’s throw in both!

When we study charged particles interacting electromagnetically in a way that takes both quantum mechanics and special relativity into account, we are led to quantum field theory. The ensuing problems are vastly more complicated than in any of the physical theories we have discussed so far. They are also more consequential, since at present quantum field theory is our best description of all known forces except gravity. As a result, many of the best minds in 20th-century mathematics and physics have joined the fray, and it is impossible here to give more than a quick summary of the situation. This is especially true because the final outcome of the struggle is not yet known.

It is ironic that quantum field theory originally emerged as a solution to a problem involving the continuum nature of spacetime, now called the ‘ultraviolet catastrophe’. In classical electromagnetism, a box with mirrored walls containing only radiation acts like a collection of harmonic oscillators, one for each vibrational mode of the electromagnetic field. If we assume waves can have arbitrarily small wavelengths, there are infinitely many of these oscillators. In classical thermodynamics, a collection of harmonic oscillators in thermal equilibrium will share the available energy equally: this result is called the ‘equipartition theorem’.

Taken together, these principles lead to a dilemma worthy of Zeno. The energy in the box must be divided into an infinite number of equal parts. If the energy in each part is nonzero, the total energy in the box must be infinite. If it is zero, there can be no energy in the box.

For the logician, there is an easy way out: perhaps a box of electromagnetic radiation can only be in thermal equilibrium if it contains no energy at all! But this solution cannot satisfy the physicist, since it does not match what is actually observed. In reality, any nonnegative amount of energy is allowed in thermal equilibrium.

The way out of this dilemma was to change our concept of the harmonic oscillator. Planck did this in 1900, almost without noticing it. Classically, a harmonic oscillator can have any nonnegative amount of energy. Planck instead treated the energy

…not as a continuous, infinitely divisible quantity, but as a discrete quantity composed of an integral number of finite equal parts.

In modern notation, the allowed energies of a quantum harmonic oscillator are integer multiples of \hbar \omega, where \omega is the oscillator’s frequency and \hbar is a new constant of nature, named after Planck. When energy can only take such discrete values, the equipartition theorem no longer applies. Instead, the principles of thermodynamics imply that there is a well-defined thermal equilibrium in which vibrational modes with shorter and shorter wavelengths, and thus higher and higher energies, hold less and less of the available energy. The results agree with experiments when the constant \hbar is given the right value.
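To see numerically how Planck’s discrete energies tame the equipartition divergence, here is a minimal Python sketch of my own (not from the original text). It compares the classical mean energy per mode, kT, with the quantum mean energy \hbar\omega / (e^{\hbar\omega/kT} - 1), in units where \hbar = k = 1:

    import numpy as np

    # Units where hbar = k_B = 1, at temperature T = 1.
    T = 1.0
    omega = np.logspace(-2, 2, 5)   # mode frequencies from 0.01 to 100

    classical = np.full_like(omega, T)        # equipartition: kT per mode
    quantum = omega / np.expm1(omega / T)     # Planck's mean energy per mode

    for w, c, q in zip(omega, classical, quantum):
        print(f"omega = {w:8.2f}   classical = {c:.3f}   quantum = {q:.3e}")

Low-frequency modes hold nearly the classical kT, but high-frequency modes are exponentially suppressed, so the total energy in the box stays finite even though there are infinitely many modes.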

The full import of what Planck had done became clear only later, starting with Einstein’s 1905 paper on the photoelectric effect, for which he won the Nobel prize. Einstein proposed that the discrete energy steps actually arise because light comes in particles, now called ‘photons’, with a photon of frequency \omega carrying energy \hbar\omega. It was even later that Ehrenfest emphasized the role of the equipartition theorem in the original dilemma, and called this dilemma the ‘ultraviolet catastrophe’. As usual, the actual history is more complicated than the textbook summaries. For details, try:

• Helge Kragh, Quantum Generations: A History of Physics in the Twentieth Century, Princeton U. Press, Princeton, 1999.

The theory of the ‘free’ quantum electromagnetic field—that is, photons not interacting with charged particles—is now well-understood. It is a bit tricky to deal with an infinite collection of quantum harmonic oscillators, but since each evolves independently from all the rest, the issues are manageable. Many advances in analysis were required to tackle these issues in a rigorous way, but they were erected on a sturdy scaffolding of algebra. The reason is that the quantum harmonic oscillator is exactly solvable in terms of well-understood functions, and so is the free quantum electromagnetic field. By the 1930s, physicists knew precise formulas for the answers to more or less any problem involving the free quantum electromagnetic field. The challenge to mathematicians was then to find a coherent mathematical framework that takes us to these answers starting from clear assumptions. This challenge was taken up and completely met by the mid-1960s.
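The exact solvability is easy to see in modern notation (my gloss, with the zero-point energy dropped): the harmonic oscillator Hamiltonian can be written in terms of annihilation and creation operators a and a^\dagger obeying [a, a^\dagger] = 1 as

\displaystyle{ H = \hbar \omega \, a^\dagger a }

whose spectrum is exactly the set of allowed energies n \hbar \omega mentioned above. The free quantum electromagnetic field is built from one such oscillator for each vibrational mode, which is why everything about it can be computed explicitly.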

However, for physicists, the free quantum electromagnetic field is just the starting-point, since this field obeys a quantum version of Maxwell’s equations where the charge density and current density vanish. Far more interesting is ‘quantum electrodynamics’, or QED, where we also include fields describing charged particles—for example, electrons and their antiparticles, positrons—and try to impose a quantum version of the full-fledged Maxwell equations. Nobody has found a fully rigorous formulation of QED, nor has anyone proved such a thing cannot be found!

QED is part of a more complicated quantum field theory, the Standard Model, which describes the electromagnetic, weak and strong forces, quarks and leptons, and the Higgs boson. It is widely regarded as our best theory of elementary particles. Unfortunately, nobody has found a rigorous formulation of this theory either, despite decades of hard work by many smart physicists and mathematicians.

To spur progress, the Clay Mathematics Institute has offered a million-dollar prize for anyone who can prove a widely believed claim about a class of quantum field theories called ‘pure Yang–Mills theories’.

A good example is the fragment of the Standard Model that describes only the strong force—or in other words, only gluons. Unlike photons in QED, gluons interact with each other. To win the prize, one must prove that the theory describing them is mathematically consistent and that it describes a world where the lightest particle is a ‘glueball’: a blob made of gluons, with mass strictly greater than zero. This theory is considerably simpler than the Standard Model. However, it is already very challenging.

This is not the only million-dollar prize that the Clay Mathematics Institute is offering for struggles with the continuum. They are also offering one for a proof of global existence of solutions to the Navier–Stokes equations for fluid flow. However, their quantum field theory challenge is the only one for which the problem statement is not completely precise. The Navier–Stokes equations are a collection of partial differential equations for the velocity and pressure of a fluid. We know how to precisely phrase the question of whether these equations have a well-defined solution for all time given smooth initial data. Describing a quantum field theory is a trickier business!
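For comparison, here are the Navier–Stokes equations in their standard incompressible, unit-density form (this display is my addition):

\displaystyle{ \frac{\partial \vec{v}}{\partial t} + (\vec{v} \cdot \nabla) \vec{v} = - \nabla p + \nu \nabla^2 \vec{v}, \qquad \nabla \cdot \vec{v} = 0 }

where \vec{v} is the velocity field, p the pressure, and \nu the viscosity. The prize question is whether smooth initial data always yield a smooth solution for all t \ge 0; nothing comparably crisp is available yet for the Yang–Mills challenge.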

To be sure, there are a number of axiomatic frameworks for quantum field theory:

• Ray Streater and Arthur Wightman, PCT, Spin and Statistics, and All That, Benjamin Cummings, San Francisco, 1964.

• James Glimm and Arthur Jaffe, Quantum Physics: A Functional Integral Point of View, Springer, Berlin, 1987.

• John C. Baez, Irving Segal and Zhengfang Zhou, Introduction to Algebraic and Constructive Quantum Field Theory, Princeton U. Press, Princeton, 1992.

• Rudolf Haag, Local Quantum Physics: Fields, Particles, Algebras, Springer, Berlin, 1996.

We can prove physically interesting theorems from these axioms, and also rigorously construct some quantum field theories obeying these axioms. The easiest are the free theories, which describe non-interacting particles. There are also examples of rigorously constructed quantum field theories that describe interacting particles in fewer than 4 spacetime dimensions. However, no quantum field theory that describes interacting particles in 4-dimensional spacetime has been proved to obey the usual axioms. Thus, much of the wisdom of physicists concerning quantum field theory has not been fully transformed into rigorous mathematics.

Worse, the question of whether a particular quantum field theory studied by physicists obeys the usual axioms is not completely precise—at least, not yet. The problem is that going from the physicists’ formulation to a mathematical structure that might or might not obey the axioms involves some choices.

This is not a cause for despair; it simply means that there is much work left to be done. In practice, quantum field theory is marvelously good for calculating answers to physics questions. The answers involve approximations. In practice the approximations work very well. The problem is just that we do not fully understand, in a mathematically rigorous way, what these approximations are supposed to be approximating.

How could this be? In the next part, I’ll sketch some of the key issues in the case of quantum electrodynamics. I won’t get into all the technical details: they’re too complicated to explain in one blog article. Instead, I’ll just try to give you a feel for what’s at stake.


Struggles with the Continuum (Part 3)

12 September, 2016

In these posts, we’re seeing how our favorite theories of physics deal with the idea that space and time are a continuum, with points described as lists of real numbers. We’re not asking if this idea is true: there’s no clinching evidence to answer that question, so it’s too easy to let one’s philosophical prejudices choose the answer. Instead, we’re looking to see what problems this idea causes, and how physicists have struggled to solve them.

We started with the Newtonian mechanics of point particles attracting each other with an inverse square force law. We found strange ‘runaway solutions’ where particles shoot to infinity in a finite amount of time by converting an infinite amount of potential energy into kinetic energy.

Then we added quantum mechanics, and we saw this problem went away, thanks to the uncertainty principle.

Now let’s take away quantum mechanics and add special relativity, so our particles can’t go faster than light. Does this help?

Point particles interacting with the electromagnetic field

Special relativity prohibits instantaneous action at a distance. Thus, most physicists believe that special relativity requires that forces be carried by fields, with disturbances in these fields propagating no faster than the speed of light. The argument for this is not watertight, but we seem to actually see charged particles transmitting forces via a field, the electromagnetic field—that is, light. So, most work on relativistic interactions brings in fields.

Classically, charged point particles interacting with the electromagnetic field are described by two sets of equations: Maxwell’s equations and the Lorentz force law. The first are a set of differential equations involving:

• the electric field \vec E and magnetic field \vec B, which in special relativity are bundled together into the electromagnetic field F, and

• the electric charge density \rho and current density \vec \jmath, which are bundled into another field called the ‘four-current’ J.

By themselves, these equations are not enough to completely determine the future given initial conditions. In fact, you can choose \rho and \vec \jmath freely, subject to the conservation law

\displaystyle{ \frac{\partial \rho}{\partial t} + \nabla \cdot \vec \jmath = 0 }

For any such choice, there exists a solution of Maxwell’s equations for t \ge 0 given initial values for \vec E and \vec B that obey these equations at t = 0.

Thus, to determine the future given initial conditions, we also need equations that say what \rho and \vec{\jmath} will do. For a collection of charged point particles, they are determined by the curves in spacetime traced out by these particles. The Lorentz force law says that the force on a particle of charge e is

\vec{F} = e (\vec{E} + \vec{v} \times \vec{B})

where \vec v is the particle’s velocity and \vec{E} and \vec{B} are evaluated at the particle’s location. From this law we can compute the particle’s acceleration if we know its mass.
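For example, here is a minimal numerical sketch of my own (not from the original text, in arbitrary units with e = m = 1) of using the Lorentz force law to compute a particle’s motion in a uniform magnetic field:

    import numpy as np

    def lorentz_accel(v, E, B, e=1.0, m=1.0):
        """Acceleration from the Lorentz force law F = e(E + v x B)."""
        return (e / m) * (E + np.cross(v, B))

    E = np.array([0.0, 0.0, 0.0])   # no electric field
    B = np.array([0.0, 0.0, 1.0])   # uniform magnetic field along z

    x = np.zeros(3)                 # initial position at the origin
    v = np.array([0.0, 1.0, 0.0])   # initial velocity along y
    dt = 1e-3

    # Semi-implicit Euler integration. The exact orbit is a circle of
    # radius m|v|/(e|B|) = 1 centered at (1, 0, 0).
    for _ in range(10000):
        v = v + dt * lorentz_accel(v, E, B)
        x = x + dt * v

    center = np.array([1.0, 0.0, 0.0])
    print("distance from gyration center:", np.linalg.norm(x - center))

The printed distance stays close to 1, as it should for circular cyclotron motion.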

The trouble starts when we try to combine Maxwell’s equations and the Lorentz force law in a consistent way, with the goal being to predict the future behavior of the \vec{E} and \vec{B} fields, together with particles’ positions and velocities, given all these quantities at t = 0. Attempts to do this began in the late 1800s. The drama continues today, with no definitive resolution! You can find good accounts, available free online, by Feynman and by Janssen and Mecklenburg. Here we can only skim the surface.

The first sign of a difficulty is this: the charge density and current associated to a charged particle are singular, vanishing off the curve it traces out in spacetime but ‘infinite’ on this curve. For example, a charged particle at rest at the origin has

\rho(t,\vec x) = e \delta(\vec{x}), \qquad \vec{\jmath}(t,\vec{x}) = \vec{0}

where \delta is the Dirac delta and e is the particle’s charge. This in turn forces the electric field to be singular at the origin. The simplest solution of Maxwell’s equations consistent with this choice of \rho and \vec\jmath is

\displaystyle{ \vec{E}(t,\vec x) = \frac{e \hat{r}}{4 \pi \epsilon_0 r^2}, \qquad \vec{B}(t,\vec x) = 0 }

where \hat{r} is a unit vector pointing away from the origin and \epsilon_0 is a constant called the permittivity of free space.

In short, the electric field is ‘infinite’, or undefined, at the particle’s location. So, it is unclear how to define the ‘self-force’ exerted by the particle’s own electric field on itself. The formula for the electric field produced by a static point charge is really just our old friend, the inverse square law. Since we had previously ignored the force of a particle on itself, we might try to continue this tactic now. However, other problems intrude.

In relativistic electrodynamics, the electric field has energy density equal to

\displaystyle{ \frac{\epsilon_0}{2} |\vec{E}|^2 }

Thus, the total energy of the electric field of a point charge at rest is proportional to

\displaystyle{ \frac{\epsilon_0}{2} \int_{\mathbb{R}^3} |\vec{E}|^2 \, d^3 x = \frac{e^2}{8 \pi \epsilon_0} \int_0^\infty \frac{1}{r^4} \, r^2 dr. }

But this integral diverges near r = 0, so the electric field of a charged particle has an infinite energy!

How, if at all, does this cause trouble when we try to unify Maxwell’s equations and the Lorentz force law? It helps to step back in history. In 1902, the physicist Max Abraham assumed that instead of a point, an electron is a sphere of radius R with charge evenly distributed on its surface. Then the energy of its electric field becomes finite, namely:

\displaystyle{  E = \frac{e^2}{8 \pi \epsilon_0} \int_{R}^\infty \frac{1}{r^4} \, r^2 dr = \frac{1}{2} \frac{e^2}{4 \pi \epsilon_0 R} }

where e is the electron’s charge.

Abraham also computed the extra momentum a moving electron of this sort acquires due to its electromagnetic field. He got it wrong because he didn’t understand Lorentz transformations. In 1904 Lorentz did the calculation right. Using the relationship between velocity, momentum and mass, we can derive from his result a formula for the ‘electromagnetic mass’ of the electron:

\displaystyle{  m = \frac{2}{3} \frac{e^2}{4 \pi \epsilon_0 R c^2} }

where c is the speed of light. We can think of this as the extra mass an electron acquires by carrying an electromagnetic field along with it.

Putting the last two equations together, these physicists obtained a remarkable result:

\displaystyle{  E = \frac{3}{4} mc^2 }
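In case the algebra is not obvious: Lorentz’s mass formula says \frac{e^2}{4 \pi \epsilon_0 R} = \frac{3}{2} m c^2, and substituting this into Abraham’s energy formula gives

\displaystyle{ E = \frac{1}{2} \, \frac{e^2}{4 \pi \epsilon_0 R} = \frac{1}{2} \cdot \frac{3}{2} \, m c^2 = \frac{3}{4} \, m c^2 }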

Then, in 1905, a fellow named Einstein came along and made it clear that the only reasonable relation between energy and mass is

E = mc^2

What had gone wrong?

In 1906, Poincaré figured out the problem. It is not a computational mistake, nor a failure to properly take special relativity into account. The problem is that like charges repel, so if the electron were a sphere of charge it would explode without something to hold it together. And that something—whatever it is—might have energy. But their calculation ignored that extra energy.

In short, the picture of the electron as a tiny sphere of charge, with nothing holding it together, is incomplete. And the calculation showing E = \frac{3}{4}mc^2, together with special relativity saying E = mc^2, shows that this incomplete picture is inconsistent. At the time, some physicists hoped that all the mass of the electron could be accounted for by the electromagnetic field. Their hopes were killed by this discrepancy.

Nonetheless it is interesting to take the energy E computed above, set it equal to m_e c^2 where m_e is the electron’s observed mass, and solve for the radius R. The answer is

\displaystyle{ R = \frac{1}{8 \pi \epsilon_0} \frac{e^2}{m_e c^2} \approx 1.4 \times 10^{-15} \textrm{ meters} }
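As a quick sanity check on that number, here is a short computation of my own using standard SI values of the constants:

    import math

    e = 1.602176634e-19          # elementary charge, C
    epsilon0 = 8.8541878128e-12  # permittivity of free space, F/m
    m_e = 9.1093837015e-31       # electron mass, kg
    c = 2.99792458e8             # speed of light, m/s

    # R = (1 / (8 pi epsilon0)) e^2 / (m_e c^2)
    R = e**2 / (8 * math.pi * epsilon0 * m_e * c**2)
    print(f"R = {R:.3e} m")            # about 1.4e-15 m

    a0 = 5.29177210903e-11             # Bohr radius, m
    print(f"R / a0 = {R / a0:.1e}")    # about 3e-5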

In the early 1900s, this would have been a remarkably tiny distance: 0.00003 times the Bohr radius of a hydrogen atom. By now we know this is roughly the radius of a proton. We know that electrons are not spheres of this size. So at present it makes more sense to treat the calculations so far as a prelude to some kind of limiting process where we take R \to 0. These calculations teach us two lessons.

First, the electromagnetic field energy approaches +\infty as we let R \to 0, so we will be hard pressed to take this limit and get a well-behaved physical theory. One approach is to give a charged particle its own ‘bare mass’ m_\mathrm{bare} in addition to the mass m_\mathrm{elec} arising from electromagnetic field energy, in a way that depends on R. Then as we take the R \to 0 limit we can let m_\mathrm{bare} \to -\infty in such a way that m_\mathrm{bare} + m_\mathrm{elec} approaches a chosen limit m, the physical mass of the point particle. This is an example of ‘renormalization’ (see the sketch after the second lesson below).

Second, it is wise to include conservation of energy-momentum as a requirement in addition to Maxwell’s equations and the Lorentz force law. Here is a more sophisticated way to phrase Poincaré’s realization. From the electromagnetic field one can compute a ‘stress-energy tensor’ T, which describes the flow of energy and momentum through spacetime. If all the energy and momentum of an object comes from its electromagnetic field, you can compute them by integrating T over the hypersurface t = 0. You can prove that the resulting 4-vector transforms correctly under Lorentz transformations if you assume the stress-energy tensor has vanishing divergence: \partial^\mu T_{\mu \nu} = 0. This equation says that energy and momentum are locally conserved. However, this equation fails to hold for a spherical shell of charge with no extra forces holding it together. In the absence of extra forces, it violates conservation of momentum for a charge to feel an electromagnetic force yet not accelerate.
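Returning to the first lesson: to make the renormalization prescription concrete, here is a toy sketch of my own using Lorentz’s formula for the electromagnetic mass of a shell of radius R:

    import math

    e = 1.602176634e-19
    epsilon0 = 8.8541878128e-12
    c = 2.99792458e8
    m_physical = 9.1093837015e-31   # the chosen physical mass (the electron's)

    def m_elec(R):
        """Electromagnetic mass (2/3) e^2 / (4 pi epsilon0 R c^2) of the shell."""
        return (2.0 / 3.0) * e**2 / (4 * math.pi * epsilon0 * R * c**2)

    def m_bare(R):
        """Bare mass chosen so that m_bare + m_elec equals the physical mass."""
        return m_physical - m_elec(R)

    for R in [1e-15, 1e-18, 1e-21]:
        print(f"R = {R:.0e} m:  m_elec = {m_elec(R):.3e} kg,  m_bare = {m_bare(R):.3e} kg")

As R shrinks, m_elec blows up to +\infty while m_bare plunges to -\infty, and their sum stays pinned at the chosen physical mass.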

So far we have only discussed the simplest situation: a single charged particle at rest, or moving at a constant velocity. To go further, we can try to compute the acceleration of a small charged sphere in an arbitrary electromagnetic field. Then, by taking the limit as the radius r of the sphere goes to zero, perhaps we can obtain the law of motion for a charged point particle.

In fact this whole program is fraught with difficulties, but physicists boldly go where mathematicians fear to tread, and in a rough way this program was carried out already by Abraham in 1905. His treatment of special relativistic effects was wrong, but these were easily corrected; the real difficulties lie elsewhere. In 1938 his calculations were carried out much more carefully—though still not rigorously—by Dirac. The resulting law of motion is thus called the ‘Abraham–Lorentz–Dirac force law’.

There are three key ways in which this law differs from our earlier naive statement of the Lorentz force law:

• We must decompose the electromagnetic field into two parts, the ‘external’ electromagnetic field F_\mathrm{ext} and the field produced by the particle:

F = F_\mathrm{ext} + F_\mathrm{ret}

Here F_\mathrm{ext} is a solution of Maxwell’s equations with J = 0, while F_\mathrm{ret} is computed by convolving the particle’s 4-current J with a function called the ‘retarded Green’s function’. This breaks the time-reversal symmetry of the formalism so far, ensuring that radiation emitted by the particle moves outward as t increases. We then decree that the particle only feels a Lorentz force due to F_\mathrm{ext}, not F_\mathrm{ret}. This avoids the problem that F_\mathrm{ret} becomes infinite along the particle’s path as r \to 0.

• Maxwell’s equations say that an accelerating charged particle emits radiation, which carries energy-momentum. Conservation of energy-momentum implies that there is a compensating force on the charged particle. This is called the ‘radiation reaction’. So, in addition to the Lorentz force, there is a radiation reaction force.

• As we take the limit r \to 0, we must adjust the particle’s bare mass m_\mathrm{bare} in such a way that its physical mass m = m_\mathrm{bare} + m_\mathrm{elec} is held constant. This involves letting m_\mathrm{bare} \to -\infty as m_\mathrm{elec} \to +\infty.

It is easiest to describe the Abraham–Lorentz–Dirac force law using standard relativistic notation. So, we switch to units where c and 4 \pi \epsilon_0 equal 1, let x^\mu denote the spacetime coordinates of a point particle, and use a dot to denote the derivative with respect to proper time. Then the Abraham–Lorentz–Dirac force law says

m \ddot{x}^\mu = e F_{\mathrm{ext}}^{\mu \nu} \, \dot{x}_\nu \; - \; \frac{2}{3}e^2 \ddot{x}^\alpha \ddot{x}_\alpha \, \dot{x}^\mu \; + \; \frac{2}{3}e^2 \dddot{x}^\mu .

The first term at right is the Lorentz force, which looks more elegant in this new notation. The second term is fairly intuitive: it acts to reduce the particle’s velocity at a rate proportional to its velocity (as one would expect from friction), but also proportional to the squared magnitude of its acceleration. This is the ‘radiation reaction’.

The last term, called the ‘Schott term’, is the most shocking. Unlike all familiar laws in classical mechanics, it involves the third derivative of the particle’s position!

This seems to shatter our original hope of predicting the electromagnetic field and the particle’s position and velocity given their initial values. Now it seems we need to specify the particle’s initial position, velocity and acceleration.

Furthermore, unlike Maxwell’s equations and the original Lorentz force law, the Abraham–Lorentz–Dirac force law is not symmetric under time reversal. If we take a solution and replace t with -t, the result is not a solution. Like the force of friction, radiation reaction acts to make a particle lose energy as it moves into the future, not the past.

The reason is that our assumptions have explicitly broken time symmetry. The splitting F = F_\mathrm{ext} + F_\mathrm{ret} says that a charged accelerating particle radiates into the future, creating the field F_\mathrm{ret}, and is affected only by the remaining electromagnetic field F_\mathrm{ext}.

Worse, the Abraham–Lorentz–Dirac force law has counterintuitive solutions. Suppose for example that F_\mathrm{ext} = 0. Besides the expected solutions where the particle’s velocity is constant, there are solutions for which the particle accelerates indefinitely, approaching the speed of light! These are called ‘runaway solutions’. In these runaway solutions, the acceleration as measured in the frame of reference of the particle grows exponentially with the passage of proper time.
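To see a runaway solution concretely: in the nonrelativistic limit the Abraham–Lorentz–Dirac law reduces to the Abraham–Lorentz equation m \dot{v} = F_\mathrm{ext} + m \tau \ddot{v}, where \tau = 2e^2/3m in the units used above, and with F_\mathrm{ext} = 0 the acceleration obeys \dot{a} = a/\tau. Here is a tiny sketch of my own, in units with m = \tau = 1:

    # Abraham-Lorentz equation with no external force: da/dt = a / tau.
    tau = 1.0
    a = 1e-10    # a vanishingly small initial acceleration
    dt = 1e-3

    for _ in range(30000):   # integrate out to t = 30 tau
        a += dt * a / tau    # forward Euler

    print(f"acceleration at t = 30 tau: {a:.3e}")

Analytically a(t) = a(0) e^{t/\tau}, so even a microscopic initial acceleration grows to enormous size: the runaway behavior described above.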

So, the notion that special relativity might help us avoid the pathologies of Newtonian point particles interacting gravitationally—five-body solutions where particles shoot to infinity in finite time—is cruelly mocked by the Abraham–Lorentz–Dirac force law. Particles cannot move faster than light, but even a single particle can extract an arbitrary amount of energy-momentum from the electromagnetic field in its immediate vicinity and use this to propel itself forward at speeds approaching that of light. The energy stored in the field near the particle is sometimes called ‘Schott energy’.

Thanks to the Schott term in the Abraham–Lorentz–Dirac force law, the Schott energy can be converted into kinetic energy for the particle. The details of how this works are nicely discussed in a paper by Øyvind Grøn, so click the link and read that if you’re interested. I’ll just show you a picture from that paper:

[Figure from Grøn’s paper illustrating the Schott energy]

So even one particle can do crazy things! But worse, suppose we generalize the framework to include more than one particle. The arguments for the Abraham–Lorentz–Dirac force law can be generalized to this case. The result is simply that each particle obeys this law with an external field F_\mathrm{ext} that includes the fields produced by all the other particles. But a problem appears when we use this law to compute the motion of two particles of opposite charge. To simplify the calculation, suppose they are located symmetrically with respect to the origin, with equal and opposite velocities and accelerations. Suppose the external field felt by each particle is solely the field created by the other particle. Since the particles have opposite charges, they should attract each other. However, one can prove they will never collide. In fact, if at any time they are moving towards each other, they will later turn around and move away from each other at ever-increasing speed!

This fact was discovered by C. Jayaratnam Eliezer in 1943. It is so counterintuitive that several proofs were required before physicists believed it.

None of these strange phenomena have ever been seen experimentally. Faced with this problem, physicists have naturally looked for ways out. First, why not simply cross out the \dddot{x}^\mu term in the Abraham–Lorentz–Dirac force? Unfortunately the resulting simplified equation

m \ddot{x}^\mu = e F_{\mathrm{ext}}^{\mu \nu} \, \dot{x}_\nu - \frac{2}{3}e^2 \ddot{x}^\alpha \ddot{x}_\alpha \, \dot{x}^\mu

has only trivial solutions. The reason is that with the particle’s path parametrized by proper time, the vector \dot{x}^\mu has constant length, so the vector \ddot{x}^\mu is orthogonal to \dot{x}^\mu . So is the vector F_{\mathrm{ext}}^{\mu \nu} \dot{x}_\nu, because F_{\mathrm{ext}} is an antisymmetric tensor. So, the last term must be zero, which implies \ddot{x} = 0, which in turn implies that all three terms must vanish.
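To spell out this argument: contracting the simplified equation with \dot{x}_\mu kills the left side, since \ddot{x}^\mu \dot{x}_\mu = \frac{1}{2} \frac{d}{d\tau} (\dot{x}^\mu \dot{x}_\mu) = 0, and kills the Lorentz term by antisymmetry, leaving

\displaystyle{ \frac{2}{3} e^2 \, (\ddot{x}^\alpha \ddot{x}_\alpha)(\dot{x}^\mu \dot{x}_\mu) = 0 }

Since \dot{x}^\mu \dot{x}_\mu is a nonzero constant, \ddot{x}^\alpha \ddot{x}_\alpha must vanish; and a vector orthogonal to the timelike vector \dot{x}^\mu is spacelike or zero, so \ddot{x}^\mu = 0.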

Another possibility is that some assumption made in deriving the Abraham–Lorentz–Dirac force law is incorrect. Of course the theory is physically incorrect, in that it ignores quantum mechanics and other things, but that is not the issue. The issue here is one of mathematical physics, of trying to formulate a well-behaved classical theory that describes charged point particles interacting with the electromagnetic field. If we can prove this is impossible, we will have learned something. But perhaps there is a loophole. The original arguments for the Abraham–Lorentz–Dirac force law are by no means mathematically rigorous. They involve a delicate limiting procedure, and approximations that were believed, but not proved, to become perfectly accurate in the r \to 0 limit. Could these arguments conceal a mistake?

Calculations involving a spherical shell of charge have been improved by a series of authors, and are nicely summarized by Fritz Rohrlich. In all these calculations, nonlinear powers of the acceleration and its time derivatives are neglected, and one hopes this is acceptable in the r \to 0 limit.

Dirac, struggling with renormalization in quantum field theory, took a different tack. Instead of considering a sphere of charge, he treated the electron as a point from the very start. However, he studied the flow of energy-momentum across the surface of a tube of radius r centered on the electron’s path. By computing this flow in the limit r \to 0, and using conservation of energy-momentum, he attempted to derive the force on the electron. He did not obtain a unique result, but the simplest choice gives the Abraham–Lorentz–Dirac equation. More complicated choices typically involve nonlinear powers of the acceleration and its time derivatives.

Since this work, many authors have tried to simplify Dirac’s rather complicated calculations and clarify his assumptions. This book is a good guide:

• Stephen Parrott, Relativistic Electrodynamics and Differential Geometry, Springer, Berlin, 1987.

But more recently, Jerzy Kijowski and some coauthors have made impressive progress in a series of papers that solve many of the problems we have described.

Kijowski’s key idea is to impose conditions on precisely how the electromagnetic field is allowed to behave near the path traced out by a charged point particle. He breaks the field into a ‘regular’ part and a ‘singular’ part:

F = F_\textrm{reg} + F_\textrm{sing}

Here F_\textrm{reg} is smooth everywhere, while F_\textrm{sing} is singular near the particle’s path, but only in a carefully prescribed way. Roughly, at each moment, in the particle’s instantaneous rest frame, the singular part of its electric field consists of the familiar part proportional to 1/r^2, together with a part proportional to 1/r^3 which depends on the particle’s acceleration. No other singularities are allowed!

On the one hand, this eliminates the ambiguities mentioned earlier: in the end, there are no ‘nonlinear powers of the acceleration and its time derivatives’ in Kijowski’s force law. On the other hand, this avoids breaking time reversal symmetry, as the earlier splitting F = F_\textrm{ext} + F_\textrm{ret} did.

Next, Kijowski defines the energy-momentum of a point particle to be m \dot{x}, where m is its physical mass. He defines the energy-momentum of the electromagnetic field to be just that due to F_\textrm{reg}, not F_\textrm{sing}. This amounts to eliminating the infinite ‘electromagnetic mass’ of the charged particle. He then shows that Maxwell’s equations and conservation of total energy-momentum imply an equation of motion for the particle!

This equation is very simple:

m \ddot{x}^\mu = e F_{\textrm{reg}}^{\mu \nu} \, \dot{x}_\nu

It is just the Lorentz force law! Since the troubling Schott term is gone, this is a second-order differential equation. So we can hope to predict the future behavior of the electromagnetic field, together with the particle’s position and velocity, given all these quantities at t = 0.

And indeed this is true! In 1998, together with Gittel and Zeidler, Kijowski proved that initial data of this sort, obeying the careful restrictions on allowed singularities of the electromagnetic field, determine a unique solution of Maxwell’s equations and the Lorentz force law, at least for a short amount of time. Even better, all this remains true for any number of particles.

There are some obvious questions to ask about this new approach. In the Abraham–Lorentz–Dirac force law, the acceleration was an independent variable that needed to be specified at t = 0 along with position and momentum. This problem disappears in Kijowski’s approach. But how?

We mentioned that the singular part of the electromagnetic field, F_\textrm{sing}, depends on the particle’s acceleration. But more is true: the particle’s acceleration is completely determined by F_\textrm{sing}. So, the particle’s acceleration is not an independent variable because it is encoded into the electromagnetic field.

Another question is: where did the radiation reaction go? The answer is: we can see it if we go back and decompose the electromagnetic field as F_\textrm{ext} + F_\textrm{ret} as we had before. If we take the law

m \ddot{x}^\mu = e F_{\textrm{reg}}^{\mu \nu} \dot{x}_\nu

and rewrite it in terms of F_\textrm{ext}, we recover the original Abraham–Lorentz–Dirac law, including the radiation reaction term and Schott term.

Unfortunately, this means that ‘pathological’ solutions where particles extract arbitrary amounts of energy from the electromagnetic field are still possible. A related problem is that apparently nobody has yet proved solutions exist for all time. Perhaps a singularity worse than the allowed kind could develop in a finite amount of time—for example, when particles collide.

So, classical point particles interacting with the electromagnetic field still present serious challenges to the physicist and mathematician. When you have an infinitely small charged particle right next to its own infinitely strong electromagnetic field, trouble can break out very easily!

Particles without fields

Finally, I should also mention attempts, working within the framework of special relativity, to get rid of fields and have particles interact with each other directly. For example, in 1903 Schwarzschild introduced a framework in which charged particles exert an electromagnetic force on each other, with no mention of fields. In this setup, forces are transmitted not instantaneously but at the speed of light: the force on one particle at one spacetime point x depends on the motion of some other particle at spacetime point y only if the vector x - y is lightlike. Later Fokker and Tetrode derived this force law from a principle of least action. In 1949, Feynman and Wheeler checked that this formalism gives results compatible with the usual approach to electromagnetism using fields, except for several points:

• Each particle exerts forces only on other particles, so we avoid the thorny issue of how a point particle responds to the electromagnetic field produced by itself.

• There are no electromagnetic fields not produced by particles: for example, the theory does not describe the motion of a charged particle in an ‘external electromagnetic field’.

• The principle of least action guarantees that ‘if A affects B then B affects A’. So, if a particle at x exerts a force on a particle at a point y in its future lightcone, the particle at y exerts a force on the particle at x in its past lightcone. This raises the issue of ‘reverse causality’, which Feynman and Wheeler address.
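To make the lightlike condition concrete, here is a small sketch (my own illustration, not from any of these papers) that tests whether two spacetime points are lightlike separated, in units where c = 1:

```python
import numpy as np

def minkowski_interval(x, y):
    """(x - y) . (x - y) with signature (+,-,-,-); zero means lightlike."""
    d = np.subtract(x, y)
    return d[0]**2 - np.dot(d[1:], d[1:])

# The force on a particle at x may depend on another particle's motion
# at y only if this interval vanishes:
x = (2.0, 1.0, 1.0, np.sqrt(2.0))  # (t, x, y, z)
y = (0.0, 0.0, 0.0, 0.0)
print(minkowski_interval(x, y))    # 4 - (1 + 1 + 2) ~ 0: lightlike
```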

Besides the reverse causality issue, perhaps one reason this approach has not been pursued more is that it does not admit a Hamiltonian formulation in terms of particle positions and momenta. Indeed, there are a number of ‘no-go theorems’ for relativistic multiparticle Hamiltonians, saying that these can only describe noninteracting particles. So, most work that takes both quantum mechanics and special relativity into account uses fields.

Indeed, in quantum electrodynamics, even the charged point particles are replaced by fields—namely quantum fields! Next time we’ll see whether that helps.


Struggles with the Continuum (Part 2)

9 September, 2016

Last time we saw that nobody yet knows if Newtonian gravity, applied to point particles, truly succeeds in predicting the future. To be precise: for four or more particles, nobody has proved that almost all initial conditions give a well-defined solution for all times!

The problem is related to the continuum nature of space: as particles get arbitrarily close to each other, an infinite amount of potential energy can be converted to kinetic energy in a finite amount of time.

I left off by asking if this problem is solved by more sophisticated theories. For example, does the ‘speed limit’ imposed by special relativity help the situation? Or might quantum mechanics help, since it describes particles as ‘probability clouds’, and puts limits on how accurately we can simultaneously know both their position and momentum?

We begin with quantum mechanics, which indeed does help.

The quantum mechanics of charged particles

Few people spend much time thinking about ‘quantum celestial mechanics’—that is, quantum particles obeying Schrödinger’s equation, that attract each other gravitationally, obeying an inverse-square force law. But Newtonian gravity is a lot like the electrostatic force between charged particles. The main difference is a minus sign, which makes like masses attract, while like charges repel. In chemistry, people spend a lot of time thinking about charged particles obeying Schrödinger’s equation, attracting or repelling each other electrostatically. This approximation neglects magnetic fields, spin, and indeed anything related to the finiteness of the speed of light, but it’s good enough to explain quite a bit about atoms and molecules.

In this approximation, a collection of charged particles is described by a wavefunction \psi, which is a complex-valued function of all the particles’ positions and also of time. The basic idea is that \psi obeys Schrödinger’s equation

\displaystyle{ \frac{d \psi}{dt} = - i H \psi}

where H is an operator called the Hamiltonian, and I’m working in units where \hbar = 1.

Does this equation succeed in predicting \psi at a later time given \psi at time zero? To answer this, we must first decide what kind of function \psi should be, what concept of derivative applies to such functions, and so on. These issues were worked out by von Neumann and others starting in the late 1920s. It required a lot of new mathematics. Skimming the surface, we can say this.

At any time, we want \psi to lie in the Hilbert space consisting of square-integrable functions of all the particles’ positions. We can then formally solve Schrödinger’s equation as

\psi(t) = \exp(-i t H) \psi(0)

where \psi(t) is the solution at time t. But for this to really work, we need H to be a self-adjoint operator on the chosen Hilbert space. The correct definition of ‘self-adjoint’ is a bit subtler than what most physicists learn in a first course on quantum mechanics. In particular, an operator can be superficially self-adjoint—the actual term for this is ‘symmetric’—but not truly self-adjoint.
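In finite dimensions these subtleties vanish: every Hermitian matrix is self-adjoint, and \exp(-itH) is honestly unitary. Here is a minimal sketch, a one-dimensional particle on a grid with a harmonic potential standing in for the Coulomb interactions (the grid size and the potential are my own arbitrary choices), just to show the formal solution in action:

```python
import numpy as np
from scipy.linalg import expm

# Discretize a 1d particle on a grid (units with hbar = m = 1).
N, L = 200, 20.0
x = np.linspace(-L/2, L/2, N)
dx = x[1] - x[0]

# Kinetic energy: -(1/2) d^2/dx^2 via second differences (a symmetric matrix).
lap = (np.diag(np.ones(N-1), 1) - 2*np.eye(N) + np.diag(np.ones(N-1), -1)) / dx**2
K = -0.5 * lap
V = np.diag(0.5 * x**2)      # stand-in potential, not the Coulomb case
H = K + V                    # Hermitian, hence self-adjoint in finite dimensions

psi0 = np.exp(-(x - 1.0)**2)     # a Gaussian displaced from the center
psi0 = psi0 / np.linalg.norm(psi0)

t = 0.5
psi_t = expm(-1j * t * H) @ psi0  # psi(t) = exp(-itH) psi(0)
print(np.linalg.norm(psi_t))      # 1.0: the evolution is unitary
```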

In 1951, based on earlier work of Rellich, Kato proved that H is indeed self-adjoint for a collection of nonrelativistic quantum particles interacting via inverse-square forces. So, this simple model of chemistry works fine. We can also conclude that ‘celestial quantum mechanics’ would dodge the nasty problems that we saw in Newtonian gravity.

The reason, simply put, is the uncertainty principle.

In the classical case, bad things happen because the energy is not bounded below. A pair of classical particles attracting each other with an inverse square force law can have arbitrarily large negative energy, simply by being very close to each other. Since energy is conserved, if you have a way to make some particles get an arbitrarily large negative energy, you can balance the books by letting others get an arbitrarily large positive energy and shoot to infinity in a finite amount of time!

When we switch to quantum mechanics, the energy of any collection of particles becomes bounded below. The reason is that to make the potential energy of two particles large and negative, they must be very close. Thus, their difference in position must be very small. In particular, this difference must be accurately known! Thus, by the uncertainty principle, their difference in momentum must be very poorly known: at least one of its components must have a large standard deviation. This in turn means that the expected value of the kinetic energy must be large.
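Here is a back-of-the-envelope version of this balancing act, with all order-one constants set to 1, so a sketch rather than the real Kato-style estimate: for a wavepacket of width \sigma, the kinetic energy scales like 1/\sigma^2 while the Coulomb potential energy scales like -1/\sigma, so the total energy has a finite minimum.

```python
import numpy as np

# Heuristic energy of a wavepacket of width sigma (hbar = m = e = 1),
# with all order-one constants set to 1:
#   kinetic   ~ +1/(2 sigma^2)   (uncertainty principle)
#   potential ~ -1/sigma         (inverse-square force, i.e. a -1/r potential)
sigma = np.logspace(-3, 2, 1000)
E = 0.5 / sigma**2 - 1.0 / sigma

i = np.argmin(E)
print(sigma[i], E[i])  # minimum near sigma = 1, E = -1/2: bounded below
```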

This must all be made quantitative, to prove that as particles get close, the uncertainty principle provides enough positive kinetic energy to counterbalance the negative potential energy. The Kato–Lax–Milgram–Nelson theorem, a refinement of the original Kato–Rellich theorem, is the key to understanding this issue. The Hamiltonian H for a collection of particles interacting by inverse square forces can be written as

H = K + V

where K is an operator for the kinetic energy and V is an operator for the potential energy. With some clever work one can prove that for any \epsilon > 0, there exists c > 0 such that if \psi is a smooth normalized wavefunction that vanishes at infinity and at points where particles collide, then

| \langle \psi , V \psi \rangle | \le \epsilon \langle \psi, K\psi \rangle + c.

Remember that \langle \psi , V \psi \rangle is the expected value of the potential energy, while \langle \psi, K \psi \rangle is the expected value of the kinetic energy. Thus, this inequality is a precise way of saying how kinetic energy triumphs over potential energy.

By taking \epsilon = 1, it follows that the Hamiltonian is bounded below on such states \psi: writing \langle \psi , H \psi \rangle = \langle \psi, K \psi \rangle + \langle \psi , V \psi \rangle and using the inequality to bound the potential term by the kinetic term plus c, we get

\langle \psi , H \psi \rangle \ge -c

But the fact that the inequality holds even for smaller values of \epsilon is the key to showing H is ‘essentially self-adjoint’. This means that while H is not self-adjoint when defined only on smooth wavefunctions that vanish at infinity and at points where particles collide, it has a unique self-adjoint extension to some larger domain. Thus, we can unambiguously take this extension to be the true Hamiltonian for this problem.

To understand what a great triumph this is, one needs to see what could have gone wrong! Suppose space had an extra dimension. In 3-dimensional space, Newtonian gravity obeys an inverse square force law because the area of a sphere is proportional to its radius squared. In 4-dimensional space, the force obeys an inverse cube law:

\displaystyle{ F = -\frac{Gm_1 m_2}{r^3} }

Using a cube instead of a square here makes the force stronger at short distances, with dramatic effects. For example, even for the classical 2-body problem, the equations of motion no longer ‘almost always’ have a well-defined solution for all times. For an open set of initial conditions, the particles spiral into each other in a finite amount of time!


Hyperbolic spiral – a fairly common orbit in an inverse cube force.
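For the curious, here is a rough numerical sketch of this spiraling-in: a unit-mass particle orbiting a fixed center with an attractive inverse-cube force. The force strength and initial data are arbitrary choices of mine, picked so that the attraction beats the centrifugal barrier; the integration stops when the particle crashes into the center at a finite time.

```python
import numpy as np
from scipy.integrate import solve_ivp

k = 2.0  # force F = -k/r^3; with angular momentum L = 1, k > L^2 means capture

def rhs(t, s):
    x, y, vx, vy = s
    r2 = x*x + y*y
    return [vx, vy, -k*x/r2**2, -k*y/r2**2]  # F_vec = -k r_vec / r^4

def crash(t, s):
    return np.hypot(s[0], s[1]) - 1e-4  # stop just before the singularity
crash.terminal = True

s0 = [1.0, 0.0, 0.0, 1.0]  # r = 1, tangential speed 1, so L = 1
sol = solve_ivp(rhs, [0.0, 10.0], s0, events=crash, rtol=1e-10, atol=1e-12)
print(sol.t_events[0])  # a finite collision time
```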

The quantum version of this theory is also problematic. The uncertainty principle is not enough to save the day. The inequalities above no longer hold: kinetic energy does not triumph over potential energy. The Hamiltonian is no longer essentially self-adjoint on the set of wavefunctions that I described.

In fact, this Hamiltonian has infinitely many self-adjoint extensions! Each one describes different physics: namely, a different choice of what happens when particles collide. Moreover, when G exceeds a certain critical value, the energy is no longer bounded below.
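The root of the trouble is a matter of scaling. Squeezing a wavefunction by a factor \sigma scales the kinetic energy like 1/\sigma^2, but a -1/r^2 potential (the one giving an inverse-cube force) also scales like 1/\sigma^2, so nothing stops the collapse once the attraction is strong enough. A schematic sketch, with order-one constants again suppressed:

```python
import numpy as np

# For the rescaled wavefunction psi_sigma(x) = sigma**-1.5 * psi(x/sigma) in 3d:
#   <K> ~  A / sigma**2     (kinetic energy)
#   <V> ~ -B / sigma**2     (a -1/r^2 potential scales the same way!)
A, B = 1.0, 2.0  # B > A: attraction beyond the critical strength
for sigma in [1.0, 0.1, 0.01, 0.001]:
    print(sigma, (A - B) / sigma**2)  # -> -infinity: not bounded below
```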

The same problems afflict quantum particles interacting by the electrostatic force in 4d space, as long as some of the particles have opposite charges. So, chemistry would be quite problematic in a world with four dimensions of space.

With more dimensions of space, the situation becomes even worse. In fact, this is part of a general pattern in mathematical physics: our struggles with the continuum tend to become worse in higher dimensions. String theory and M-theory may provide exceptions.

Next time we’ll look at what happens to point particles interacting electromagnetically when we take special relativity into account. After that, we’ll try to put special relativity and quantum mechanics together!

For more

For more on the inverse cube force law, see:

• John Baez, The inverse cube force law, Azimuth, 30 August 2015.

It turns out Newton made some fascinating discoveries about this law in his Principia; it has remarkable properties both classically and in quantum mechanics.

The hyperbolic spiral is one of 3 kinds of orbits possible in an inverse cube force; for the others see:

• Cotes’s spiral, Wikipedia.

The picture of a hyperbolic spiral was drawn by Anarkman and Pbroks13 and placed on Wikicommons under a Creative Commons Attribution-Share Alike 3.0 Unported license.


Struggles with the Continuum (Part 1)

8 September, 2016

Is spacetime really a continuum? That is, can points of spacetime really be described—at least locally—by lists of four real numbers (t,x,y,z)? Or is this description, though immensely successful so far, just an approximation that breaks down at short distances?

Rather than trying to answer this hard question, let’s look back at the struggles with the continuum that mathematicians and physicists have had so far.

The worries go back at least to Zeno. Among other things, he argued that an arrow can never reach its target:

That which is in locomotion must arrive at the half-way stage before it arrives at the goal. – Aristotle summarizing Zeno

and Achilles can never catch up with a tortoise:

In a race, the quickest runner can never overtake the slowest, since the pursuer must first reach the point whence the pursued started, so that the slower must always hold a lead. – Aristotle summarizing Zeno

These paradoxes can now be dismissed using our theory of real numbers. An interval of finite length can contain infinitely many points. In particular, a sum of infinitely many terms can still converge to a finite answer.

But the theory of real numbers is far from trivial. It became fully rigorous only long after the rise of Newtonian physics. At first, the practical tools of calculus seemed to require infinitesimals, which seemed logically suspect. Thanks to the work of Dedekind, Cauchy, Weierstrass, Cantor and others, a beautiful formalism was developed to handle real numbers, limits, and the concept of infinity in a precise axiomatic manner.

However, the logical problems are not gone. Gödel’s theorems hang like a dark cloud over the axioms of mathematics, assuring us that any consistent theory as strong as Peano arithmetic, or stronger, cannot prove itself consistent. Worse, it will leave some questions unsettled.

For example: how many real numbers are there? The continuum hypothesis proposes a conservative answer, but the usual axioms of set theory leave this question open: there could be vastly more real numbers than most people think. And the superficially plausible axiom of choice—which amounts to saying that the product of any collection of nonempty sets is nonempty—has scary consequences, like the existence of non-measurable subsets of the real line. This in turn leads to results like that of Banach and Tarski: one can partition a ball of unit radius into six disjoint subsets, and by rigid motions reassemble these subsets into two disjoint balls of unit radius. (Later it was shown that one can do the job with five pieces, but no fewer.)

However, most mathematicians and physicists are inured to these logical problems. Few of us bother to learn about attempts to tackle them head-on, such as:

• nonstandard analysis and synthetic differential geometry, which let us work consistently with infinitesimals,

• constructivism, which avoids proof by contradiction: for example, one must ‘construct’ a mathematical object to prove that it exists,

• finitism, which avoids infinities altogether,

• ultrafinitism, which even denies the existence of very large numbers.

This sort of foundational work proceeds slowly, and is now deeply unfashionable. One reason is that it rarely seems to intrude in ‘real life’ (whatever that is). For example, it seems that no question about the experimental consequences of physical theories has an answer that depends on whether or not we assume the continuum hypothesis or the axiom of choice.

But even if we take a hard-headed practical attitude and leave logic to the logicians, our struggles with the continuum are not over. In fact, the infinitely divisible nature of the real line—the existence of arbitrarily small real numbers—is a serious challenge to almost all of the most widely used theories of physics.

Indeed, we have been unable to rigorously prove that most of these theories make sensible predictions in all circumstances, thanks to problems involving the continuum.

One might hope that a radical approach to the foundations of mathematics—such as those listed above—would let us avoid some of the problems I’ll be discussing. However, I know of no progress along these lines that would interest most physicists. Some of the ideas of constructivism have been embraced by topos theory, which also provides a foundation for calculus with infinitesimals using synthetic differential geometry. Topos theory and especially higher topos theory are becoming important in mathematical physics. They’re great! But as far as I know, they have not been used to solve the problems I want to discuss here.

Today I’ll talk about one of the first theories to use calculus: Newton’s theory of gravity.

Newtonian Gravity

In its simplest form, Newtonian gravity describes ideal point particles attracting each other with a force inversely proportional to the square of their distance. It is one of the early triumphs of modern physics. But what happens when these particles collide? Apparently the force between them becomes infinite. What does Newtonian gravity predict then?

Of course real planets are not points: when two planets come too close together, this idealization breaks down. Yet if we wish to study Newtonian gravity as a mathematical theory, we should consider this case. Part of working with a continuum is successfully dealing with such issues.

In fact, there is a well-defined ‘best way’ to continue the motion of two point masses through a collision. Their velocity becomes infinite at the moment of collision but is finite before and after. The total energy, momentum and angular momentum are unchanged by this event. So, a 2-body collision is not a serious problem. But what about a simultaneous collision of 3 or more bodies? This seems more difficult.

Worse than that, Xia proved in 1992 that with 5 or more particles, there are solutions where particles shoot off to infinity in a finite amount of time!

This sounds crazy at first, but it works like this: a pair of heavy particles orbit each other, another pair of heavy particles orbit each other, and these pairs toss a lighter particle back and forth. Xia and Saari’s nice expository article has a picture of the setup:

Xia’s construction

Each time the lighter particle gets thrown back and forth, the pairs move further apart from each other, while the two particles within each pair get closer together. And each time they toss the lighter particle back and forth, the two pairs move away from each other faster!

As the time t approaches a certain value t_0, the speed of these pairs approaches infinity, so they shoot off to infinity in opposite directions in a finite amount of time, and the lighter particle bounces back and forth an infinite number of times!

Of course this crazy behavior isn’t possible in the real world, but Newtonian physics has no ‘speed limit’, and we’re idealizing the particles as points. So, if two or more of them get arbitrarily close to each other, the potential energy they liberate can give some particles enough kinetic energy to zip off to infinity in a finite amount of time! After that time, the solution is undefined.
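Conservation of energy makes the blow-up easy to quantify. For two bodies with relative-motion energy E, the relative speed at separation r is v = \sqrt{2(E + Gm_1 m_2/r)/\mu}, where \mu is the reduced mass, and this grows without bound as r shrinks. A quick sketch, with all constants set to 1:

```python
import numpy as np

G, m1, m2 = 1.0, 1.0, 1.0
mu = m1 * m2 / (m1 + m2)   # reduced mass
E = 0.0                    # conserved energy of the relative motion

for r in [1.0, 1e-2, 1e-4, 1e-6]:
    v = np.sqrt(2.0 * (E + G * m1 * m2 / r) / mu)
    print(r, v)            # v grows like r**-0.5 as the bodies approach
```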

You can think of this as a modern reincarnation of Zeno’s paradox. Suppose you take a coin and put it heads up. Flip it over after 1/2 a second, then flip it over after 1/4 of a second, and so on. After one second, which side will be up? There is no well-defined answer. That may not bother us, since this is a contrived scenario that seems physically impossible. It’s a bit more bothersome that Newtonian gravity doesn’t tell us what happens to our particles when t = t_0.

You might argue that collisions and these more exotic ‘noncollision singularities’ occur with probability zero, because they require finely tuned initial conditions. If so, perhaps we can safely ignore them!

This is a nice fallback position. But to a mathematician, this argument demands proof.

A bit more precisely, we would like to prove that the set of initial conditions for which two or more particles come arbitrarily close to each other within a finite time has ‘measure zero’. This would mean that ‘almost all’ solutions are well-defined for all times, in a very precise sense.

In 1977, Saari proved that this is true for 4 or fewer particles. However, to the best of my knowledge, the problem remains open for 5 or more particles. Thanks to previous work by Saari, we know that the set of initial conditions that lead to collisions has measure zero, regardless of the number of particles. So, the remaining problem is to prove that noncollision singularities occur with probability zero.

It is remarkable that even Newtonian gravity, often considered a prime example of determinism in physics, has not been proved to make definite predictions, not even ‘almost always’! In 1814, Laplace wrote:

We ought to regard the present state of the universe as the effect of its antecedent state and as the cause of the state that is to follow. An intelligence knowing all the forces acting in nature at a given instant, as well as the momentary positions of all things in the universe, would be able to comprehend in one single formula the motions of the largest bodies as well as the lightest atoms in the world, provided that its intellect were sufficiently powerful to subject all data to analysis; to it nothing would be uncertain, the future as well as the past would be present to its eyes. The perfection that the human mind has been able to give to astronomy affords but a feeble outline of such an intelligence. – Laplace

However, this dream has not yet been realized for Newtonian gravity.

I expect that noncollision singularities will be proved to occur with probability zero. If so, the remaining question would be why it takes so much work to prove this, and thus prove that Newtonian gravity makes definite predictions in almost all cases. Is this a weakness in the theory, or just the way things go? Clearly it has something to do with three idealizations:

• point particles whose distance can be arbitrarily small,

• potential energies that can be arbitrarily large and negative,

• velocities that can be arbitrarily large.

These are connected: as the distance between point particles approaches zero, their potential energy approaches -\infty, and conservation of energy dictates that some velocities approach +\infty.

Does the situation improve when we go to more sophisticated theories? For example, does the ‘speed limit’ imposed by special relativity help the situation? Or might quantum mechanics help, since it describes particles as ‘probability clouds’, and puts limits on how accurately we can simultaneously know both their position and momentum?

Next time I’ll talk about quantum mechanics, which indeed does help.


Interview (Part 2)

21 March, 2016

Greg Bernhardt runs an excellent website for discussing physics, math and other topics, called Physics Forums. He recently interviewed me there. Since I used this opportunity to explain a bit about the Azimuth Project and network theory, I thought I’d reprint the interview here. Here is Part 2.

 

Tell us about your experience with past projects like “This Week’s Finds in Mathematical Physics”.

I was hired by U.C. Riverside back in 1989. I was lonely and bored, since Lisa was back on the other coast. So, I spent a lot of evenings on the computer.

We had the internet back then—this was shortly after stone tools were invented—but the world-wide web hadn’t caught on yet. So, I would read and write posts on “newsgroups” using a program called a “newsreader”. You have to imagine me sitting in front of an old green-on-black cathode ray tube monitor with a large floppy disk drive, firing up the old modem to hook up to the internet.

In 1993, I started writing a series of posts on the papers I’d read. I called it “This Week’s Finds in Mathematical Physics”, which was a big mistake, because I couldn’t really write one every week. After a while I started using it to explain lots of topics in math and physics. I wrote 300 issues. Then I quit in 2010, when I started taking climate change seriously.

Share with us a bit about your current projects like Azimuth and the n­-Café.

The n-Category Café is a blog I started with Urs Schreiber and the philosopher David Corfield back in 2006, when all three of us realized that n-categories are the big wave that math is riding right now. We have a bunch more bloggers on the team now. But the n-Café lost some steam when I quit working on n-categories and Urs started putting most of his energy into two related projects: a wiki called the nLab and a discussion group called the nForum.

In 2010, when I noticed that global warming was like a huge wave crashing down on our civilization, I started the Azimuth Project. The goal was to create a focal point for scientists and engineers interested in saving the planet. It consists of a team of people, a blog, a wiki and a discussion group. It was very productive for a while: we wrote a lot of educational articles on climate science and energy issues. But lately I’ve realized I’m better at abstract math. So, I’ve been putting more time into working with my grad students.

What about climate change has captured your interest?

That’s like asking: “What about that huge tsunami rushing toward us has captured your interest?”

Around 2004 I started hearing news that sent chills up my spine, and what really worried me was how few people were talking about this news, at least in the US.

I’m talking about how we’re pushing the Earth’s climate out of the glacial cycle we’ve been in for over a million years, into brand new territory. I’m talking about things like how it takes hundreds or thousands of years for CO2 to exit the atmosphere after it’s been put in. And I’m talking about how global warming is just part of a bigger phenomenon: the Anthropocene. That’s a new geological epoch, in which the biosphere is rapidly changing due to human influences. It’s not just the temperature:

• About 1/4 of all chemical energy produced by plants is now used by humans.

• The rate of species going extinct is 100–1000 times the usual background rate.

• Populations of large ocean fish have declined 90% since 1950.

• Humans now take more nitrogen from the atmosphere and convert it into nitrates than all other processes combined.

• Phosphorus is flowing into the oceans at 8-9 times the natural background rate.

This doesn’t necessarily spell the end of our civilization, but it is something that we’ll all have to deal with.

So, I felt the need to alert people and try to dream up strategies to do something. That’s why in 2010 I quit work on n­-categories and started the Azimuth Project.

Carbon Dioxide Variations

You have life experience on both US coasts. Which do you prefer and why?

There are some differences between the coasts, but they’re fairly minor. The West Coast is part of the Pacific Rim, so there’s more Asian influence here. The seasons are less pronounced here, because winds in the northern hemisphere blow from west to east, and the oceans serve as a temperature control system. Down south in Riverside it’s a semi­-desert, so we can eat breakfast in our back yard in January! But I live here not because I like the West Coast more. This just happens to be where my wife Lisa and I managed to get a job.

What I really like is getting out of the US and seeing the rest of the world. When you’re at a cremation ritual in Bali, or a Hmong festival in Laos, the difference between regions of the US starts seeming pretty small.

But I wasn’t a born traveler. When I spent my first summer in England, I was very apprehensive about making a fool of myself. The British have different manners, and their old universities are full of arcane customs and subtle social distinctions that even the British find terrifying. But after a few summers there I got over it. First, all around the world, being American gives you a license to be clueless. If you behave any better than the worst stereotypes, people are impressed. Second, I spend most of my time with mathematicians, who are incredibly forgiving of bad social behavior as long as you know interesting theorems.

By now I’ve gotten to feel very comfortable in England. The last couple of years I’ve spent time at the quantum computation group at Oxford—the group run by Bob Coecke and Samson Abramsky. I like talking to Jamie Vicary about n-categories and physics, and also my old friend Minhyong Kim, who is a number theorist there.

I was also very apprehensive when I first visited Paris. Everyone talks about how the waiters are rude, and so on. But I think that’s an exaggeration. Yes, if you go to cafés packed with boorish tourists, the waiters will treat you like a boorish tourist—so don’t do that. If you go to quieter places and behave politely, most people are friendly. Luckily Lisa speaks French and has some friends in Paris; that opens up a lot of opportunities. I don’t speak French, so I always feel like a bit of an idiot, but I’ve learned to cope. I’ve spent a few summers there working with Paul­-André Melliès on category theory and logic.


Yau Ma Tei Market – Hong Kong

I was also intimidated when I first spent a summer in Hong Kong—and even more so when I spent a summer in Shanghai. Lisa speaks Chinese too: she’s more cultured than me, and she drags me to interesting places. My first day walking around Shanghai left me completely exhausted: everything was new! Walking down the street you see people selling frogs in a bucket, strange fungi and herbs, then a little phone shop where telephone numbers with lots of 8’s cost more, and so on: it’s a kind of cognitive assault.

But again, I came to enjoy it. And coming back to California, everything seemed a bit boring. Why is there so much land that’s not being used? Where are all the people? Why is the food so bland?

I’ve spent the most time outside the US in Singapore. Again, that’s because my wife and I both got job offers there, not because it’s the best place in the world. Compared to China it’s rather sterile and manicured. But it’s still a fascinating place. They’ve pulled themselves up from a British colonial port town to a multi­cultural country that’s in some ways more technologically advanced than the US. The food is great: it’s a mix of Chinese, Indian, Malay and pretty much everything else. There’s essentially no crime: you can walk around in the darkest alley in the worst part of town at 3 am and still feel safe. It’s interesting to live in a country where people from very different cultures are learning to live together and prosper. The US considers itself a melting-pot, but in Singapore they have four national languages: English, Mandarin, Malay and Tamil.

Most of all, it’s great to live in places where the culture and politics is different than where I grew up. But I’m trying to travel less, because it’s bad for the planet.

You’ve gained some fame for your “crackpot index”. What were your motivations for developing it? Any new criteria you’d add?

After the internet first caught on, a bunch of us started using it to talk about physics on the usenet newsgroup sci.physics.

And then, all of a sudden, crackpots around the world started joining in!

Before this, I don’t think anybody realized how many people had their own personal theories of physics. You might have a crazy uncle who spent his time trying to refute special relativity, but you didn’t realize there were actually thousands of these crazy uncles.

As I’m sure you know here at Physics Forums, crackpots naturally tend to drive out more serious conversations. If you have some people talking about the laws of black hole thermodynamics, and some guy jumps in and says that the universe is a black hole, everyone will drop what they’re doing and argue with that guy. It’s irresistible. It reminds me of how when someone brings a baby to a party, everyone will start cooing to the baby. But it’s worse.

When physics crackpots started taking over the usenet newsgroup sci.physics, I discovered that they had a lot of features in common. The Crackpot Index summarizes these common features. Whenever I notice a new pattern, I add it.

For example: if someone starts comparing themselves to Galileo and says the physics establishment is going after them like the Inquisition, I guarantee you that they’re a crackpot. Their theories could be right—but unfortunately, they’ve got delusions of grandeur and a persecution complex.

It’s not being wrong that makes someone a crackpot. Being a full­-fledged crackpot is the endpoint of a tragic syndrome. Someone starts out being a bit too confident that they can revolutionize physics without learning it first. In fact, many young physicists go through this stage! But the good ones react to criticism by upping their game. The ones who become crackpots just brush it off. They come up with an idea that they think is great, and when nobody likes it, they don’t say “okay, I need to learn more.” Instead, they make up excuses: nobody understands me, maybe there’s a conspiracy at work, etc. The excuses get more complicated with each rebuff, and it gets harder and harder for them to back down and say “whoops, I was wrong”.

When I wrote the Crackpot Index, I thought crackpots were funny. Alexander Abian claimed all the world’s ills would be cured if we blew up the Moon. Archimedes Plutonium thinks the Universe is a giant plutonium atom. These ideas are funny. But now I realize how sad it is that someone can start with a passion for physics and end up in this kind of trap. They almost never escape.

Who are some of your math and physics heroes of the past and of today?

Wow, that’s a big question! I think every scientist needs to have heroes. I’ve had a lot.


Marie Curie

When I was a kid, I was in love with Marie Curie. I wanted to marry a woman like her: someone who really cared about science. She overcame huge obstacles to get a degree in physics, discovered not one but two new elements, often doing experiments in her own kitchen—and won not one but two Nobel prizes. She was a tragic figure in many ways. Her beloved husband Pierre, a great physicist in his own right, slipped and was run over by a horse­-drawn cart, dying instantly when the wheels ran over his skull. She herself probably died from her experiments with radiation. But this made me love her all the more.

Later my big hero was Einstein. How could any physicist not have Einstein as a hero? First he came up with the idea that light comes in discrete quanta: photons. Then, two months later, he used Brownian motion to figure out the size of atoms. One month after that: special relativity, unifying space and time! Three months later, the equivalence between mass and energy. And all this was just a warmup for his truly magnificent theory of general relativity, explaining gravity as the curvature of space and time. He truly transformed our vision of the Universe. And then, in his later years, the noble and unsuccessful search for a unified field theory. As a friend of mine put it, what matters here is not that he failed: what matters is that he set physics a new goal, more ambitious than any goal it had before.

Later it was Feynman. As I mentioned, my uncle gave me Feynman’s Lectures on Physics. This is how I first learned Maxwell’s equations, special relativity, quantum mechanics. His way of explaining things with a minimum of jargon, getting straight to the heart of every issue, is something I really admire. Later I enjoyed his books like Surely You’re Joking, Mr. Feynman! Still later I learned enough to be impressed by his work on QED.

But when you read his autobiographical books, you can see that he was a bit too obsessed with pretending to be a fun­-loving ordinary guy. A fun­-loving ordinary guy who just happens to be smarter than everyone else. In short, a self­-absorbed showoff. He could also be pretty mean to women—and in that respect, Einstein was even worse. So our heroes should not be admired uncritically.


Alexander Grothendieck

A good example is Alexander Grothendieck. I guess he’s my main math hero these days. To solve concrete problems like the Weil conjectures, he avoided brute force techniques and instead developed revolutionary new concepts that gently dissolved those problems. And these new concepts turned out to be much more important than the problems that motivated him. I’m talking about abelian categories, schemes, topoi, stacks, things like that. Everyone who really wants to understand math at a deep level has got to learn these concepts. They’re beautiful and wonderfully simple—but not easy to master. You have to really change your world view to understand them, just like general relativity or quantum mechanics. You have to rewire your neurons.

At his peak, Grothendieck seemed almost superhuman. It seems he worked almost all day and all night, bouncing his ideas off the other amazing French algebraic geometers. Apparently 20,000 pages of his writings remain unpublished! But he became increasingly alienated from the mathematical establishment and eventually disappeared completely, hiding in a village near the Pyrenees.

Which groundbreaking advances in science and math are you most looking forward to?

I’d really like to see progress in figuring out the fundamental laws of physics. Ideally, I’d like to know the Theory of Everything. Of course, we don’t even know that there is one! There could be an endless succession of deeper and deeper realizations to be had about the laws of physics, with no final answer.

If we ever do discover the Theory of Everything, that won’t be the end of the story. It could be just the beginning. For example, next we could ask why this particular theory governs our Universe. Is it necessary, or contingent? People like to chat about this puzzle already, but I think it’s premature. I think we should find the Theory of Everything first.

Unfortunately, right now fundamental physics is in a phase of being “stuck”. I don’t expect to see the Theory of Everything in my lifetime. I’d be happy to see any progress at all! There are dozens of very basic things we don’t understand.

When it comes to math, I expect that people will have their hands full this century redoing the foundations using ∞-categories, and answering some of the questions that come up when you do this. The crowd working on “homotopy type theory” is making good progress, but so far they’re mainly thinking about ∞-groupoids, which are a very special sort of ∞-category. When we do all of math using ∞-categories, it will be a whole new ballgame.

And then there’s the question of whether humanity will figure out a way to keep from ruining the planet we live on. And the question of whether we’ll succeed in replacing ourselves with something more intelligent—or even wiser.


The Milky Way and Andromeda Nebula after their first collision, 4 billion years from now

Here’s something cool: red dwarf stars will keep burning for 10 trillion years. If we, or any civilization, can settle down next to one of those, there will be plenty of time to figure things out. That’s what I hope for.

But some of my friends think that life always uses up resources as fast as possible. So one of my big questions is whether intelligent life will develop the patience to sit around and think interesting thoughts, or whether it will burn up red dwarf stars and every other source of energy as fast as it can, as we’re doing now with fossil fuels.

What does the future hold for John Baez? What are your goals?

What the future holds for me, primarily, is death.

That’s true of all of us—or at least most of us. While some hope that technology will bring immortality, or at least a much longer life, I bet most of us are headed for death fairly soon. So I try to make the most of the time I have.

I’m always re­-evaluating what I should do. I used to spend time thinking about quantum gravity and n­-categories. But quantum gravity feels stuck, and n­-category theory is shooting forward so fast that my help is no longer needed.

Climate change is hugely important, and nobody really knows what to do about it. Lots of people are trying lots of different things. Unfortunately I’m no better than the rest when it comes to the most obvious strategies—like politics, or climate science, or safer nuclear reactors, or better batteries and photocells.

The trick is finding things you can do better than other people. Right now for me that means thinking about networks and biology in a very abstract way. I’m inspired by this remark by Patten and Witkamp:

To understand ecosystems, ultimately will be to understand networks.

So that’s my goal for the next five years or so. It’s probably not the best thing anyone can do to prepare for the Middle Anthropocene. But it may be the best thing I can do: use the math I know to help people understand the biosphere.

It may seem like I keep jumping around: from quantum gravity to n-categories to biology. But I keep wanting to think about networks, and how they change in time.

At some point I hope to retire and become a bit more of a self­-indulgent wastrel. I could write a fun book about group theory in geometry and physics, and a fun book about the octonions. I might even get around to spending more time on music!

John Baez in Namo Gorge, Gansu



Interview (Part 1)

18 March, 2016

Greg Bernhardt runs an excellent website for discussing physics, math and other topics, called Physics Forums. He recently interviewed me there. Since I used this opportunity to explain a bit about the Azimuth Project and network theory, I thought I’d reprint the interview here. Here is Part 1.

Give us some background on yourself.

I’m interested in all kinds of mathematics and physics, so I call myself a mathematical physicist. But I’m a math professor at the University of California in Riverside. I’ve taught here since 1989. My wife Lisa Raphals got a job here nine years later: among other things, she studies classical Chinese and Greek philosophy.

I got my bachelor’s degree in math at Princeton. I did my undergrad thesis on whether you can use a computer to solve Schrödinger’s equation to arbitrary accuracy. In the end, it became obvious that you can. I was really interested in mathematical logic, and I used some in my thesis—the theory of computable functions—but I decided it wasn’t very helpful in physics. When I read the magnificently poetic last chapter of Misner, Thorne and Wheeler’s Gravitation, I decided that quantum gravity was the problem to work on.

I went to math grad school at MIT, but I didn’t find anyone to work with on quantum gravity. So, I did my thesis on quantum field theory with Irving Segal. He was one of the founders of “constructive quantum field theory”, where you try to rigorously prove that quantum field theories make mathematical sense and obey certain axioms that they should. This was a hard subject, and I didn’t accomplish much, but I learned a lot.

I got a postdoc at Yale and switched to classical field theory, mainly because it was something I could do. On the side I was still trying to understand quantum gravity. String theory was bursting into prominence at the time, and my life would have been easier if I’d jumped onto that bandwagon. But I didn’t like it, because most of the work back then studied strings moving on a fixed “background” spacetime. Quantum gravity is supposed to be about how the geometry of spacetime is variable and quantum­-mechanical, so I didn’t want a theory of quantum gravity set on a pre­-existing background geometry!

I got a professorship at U.C. Riverside based on my work on classical field theory. But at a conference on that subject in Seattle, I heard Abhay Ashtekar, Chris Isham and Renate Loll give some really interesting talks on loop quantum gravity. I don’t know why they gave those talks at a conference on classical field theory. But I’m sure glad they did! I liked their work because it was background­-free and mathematically rigorous. So I started work on loop quantum gravity.

Like many other theories, quantum gravity is easier in low dimensions. I became interested in how category theory lets you formulate quantum gravity in a universe with just 3 spacetime dimensions. It amounts to a radical new conception of space, where the geometry is described in a thoroughly quantum­-mechanical way. Ultimately, space is a quantum superposition of “spin networks”, which are like Feynman diagrams. The idea is roughly that a spin network describes a virtual process where particles move around and interact. If we know how likely each of these processes is, we know the geometry of space.


A spin network

Loop quantum gravity tries to do the same thing for full­-fledged quantum gravity in 4 spacetime dimensions, but it doesn’t work as well. Then Louis Crane had an exciting idea: maybe 4­-dimensional quantum gravity needs a more sophisticated structure: a “2­-category”.

I had never heard of 2­-categories. Category theory is about things and processes that turn one thing into another. In a 2­-category we also have “meta-processes” that turn one process into another.

I became very excited about 2-­categories. At the time I was so dumb I didn’t consider the possibility of 3­-categories, and 4­-categories, and so on. To be precise, I was more of a mathematical physicist than a mathematician: I wasn’t trying to develop math for its own sake. Then someone named James Dolan told me about n­-categories! That was a real eye­-opener. He came to U.C. Riverside to work with me. So I started thinking about n­-categories in parallel with loop quantum gravity.

Dolan was technically my grad student, but I probably learned more from him than vice versa. In 1996 we wrote a paper called “Higher­-dimensional algebra and topological quantum field theory”, which might be my best paper. It’s full of grandiose guesses about n-­categories and their connections to other branches of math and physics. We had this vision of how everything fit together. It was so beautiful, with so much evidence supporting it, that we knew it had to be true. Unfortunately, at the time nobody had come up with a good definition of n­-category, except for n < 4. So we called our guesses “hypotheses” instead of “conjectures”. In math a conjecture should be something utterly precise: it’s either true or not, with no room for interpretation.

By now, I think everybody more or less believes our hypotheses. Some of the easier ones have already been turned into theorems. Jacob Lurie, a young hotshot at Harvard, improved the statement of one and wrote a 111-page outline of a proof. Unfortunately he still used some concepts that hadn’t been defined. People are working to fix that, and I feel sure they’ll succeed.


A foam of soap bubbles

Anyway, I kept trying to connect these ideas to quantum gravity. In 1997, I introduced “spin foams”. These are structures like spin networks, but with an extra dimension. Spin networks have vertices and edges. Spin foams also have 2­-dimensional faces: imagine a foam of soap bubbles.

The idea was to use spin foams to give a purely quantum­-mechanical description of the geometry of spacetime, just as spin networks describe the geometry of space. But mathematically, what we’re doing here is going from a category to a 2­-category.

By now, there are a number of different theories of quantum gravity based on spin foams. Unfortunately, it’s not clear that any of them really work. In 2002, Dan Christensen, Greg Egan and I did a bunch of supercomputer calculations to study this question. We showed that the most popular spin foam theory at the time gave dramatically different answers than people had hoped for. I think we more or less killed that theory.

That left me rather depressed. I don’t enjoy programming: indeed, Christensen and Egan did all the hard work of that sort on our paper. I didn’t want to spend years sifting through spin foam theories to find one that works. And most of all, I didn’t want to end up as an old man still not knowing if my work had been worthwhile! To me n­-category theory was clearly the math of the future—and it was easy for me to come up with cool new ideas in that subject. So, I quit quantum gravity and switched to n­-categories.

But this was very painful. Quantum gravity is a kind of “holy grail” in physics. When you work on that subject, you wind up talking to lots of people who believe that unifying quantum mechanics and general relativity is the most important thing in the world, and that nothing else could possibly be as interesting. You wind up believing it. It took me years to get out of that mindset.

Ironically, when I quit quantum gravity, I felt free to explore string theory. As a branch of math, it’s really wonderful. I started looking at how n­-categories apply to string theory. It turns out there’s a wonderful story here: briefly, particles are to categories as strings are to 2­-categories, and all the math of particles can be generalized to strings using this idea! I worked out a bit of this story with Urs Schreiber and John Huerta.

Around 2010, I felt I had to switch to working on environmental issues and math related to engineering and biology, for the sake of the planet. That was another painful renunciation. But luckily, Urs Schreiber and others are continuing to work on n­-categories and string theory, and doing it better than I ever could. So I don’t feel the need to work on those things anymore—indeed, it would be hard to keep up. I just follow along quietly from the sidelines.

It’s quite possible that we need a dozen more good ideas before we really get anywhere on quantum gravity. But I feel confident that n­-categories will have a role to play. So, I’m happy to have helped push that subject forward.

Your uncle, Albert Baez, was a physicist. How did he help develop your interests?

He had a huge effect on me. He’s mainly famous for being the father of the folk singer Joan Baez. But he started out in optics and helped invent the first X­-ray microscope. Later he became very involved in physics education, especially in what were then called third­-world countries. For example, in 1951 he helped set up a physics department at the University of Baghdad.


Albert V. Baez

When I was a kid he worked for UNESCO, so he’d come to Washington D.C. and stay with my parents, who lived nearby. Whenever he showed up, he would open his suitcase and pull out amazing gadgets: diffraction gratings, holograms, and things like that. And he would explain how they work! I decided physics was the coolest thing there is.

When I was eight, he gave me a copy of his book The New College Physics: A Spiral Approach. I immediately started trying to read it. The “spiral approach” is a great pedagogical principle: instead of explaining each topic just once, you should start off easy and then keep spiraling around from topic to topic, examining them in greater depth each time you revisit them. So he not only taught me physics, he taught me about how to learn and how to teach.

Later, when I was fifteen, I spent a couple weeks at his apartment in Berkeley. He showed me the Lawrence Hall of Science, which is where I got my first taste of programming—in BASIC, with programs stored on paper tape. This was in 1976. He also gave me a copy of The Feynman Lectures on Physics. And so, the following summer, when I was working at a state park building trails and such, I was also trying to learn quantum mechanics from the third volume of The Feynman Lectures. The other kids must have thought I was a complete geek—which of course I was.

Give us some insight on what your average work day is like.

During the school year I teach two or three days a week. On days when I teach, that’s the focus of my day: I try to prepare my classes starting at breakfast. Teaching is lots of fun for me. Right now I’m teaching two courses: an undergraduate course on game theory and a graduate course on category theory. I’m also running a seminar on category theory. In addition, I meet with my graduate students for a four­-hour session once a week: they show me what they’ve done, and we try to push our research projects forward.

On days when I don’t teach, I spend a lot of time writing. I love blogging, so I could easily do that all day, but I try to spend a lot of time writing actual papers. Any given paper starts out being tough to write, but near the end it practically writes itself. At the end, I have to tear myself away from it: I keep wanting to add more. At that stage, I feel an energetic glow at the end of a good day spent writing. Few things are so satisfying.

During the summer I don’t teach, so I can get a lot of writing done. I spent two years doing research at the Centre of Quantum Technologies, which is in Singapore, and since 2012 I’ve been working there during summers. Sometimes I bring my grad students, but mostly I just write.

I also spend plenty of time doing things with my wife, like talking, cooking, shopping, and working out at the gym. We like to watch TV shows in the evening, mainly mysteries and science fiction.

We also do a lot of gardening. When I was younger that seemed boring, but as you get older, subjective time speeds up, so you pay more attention to things like plants growing. There’s something tremendously satisfying about planting a small seedling, watching it grow into an orange tree, and eating its fruit for breakfast.

I love playing the piano and recording electronic music, but doing it well requires big blocks of time, which I don’t always have. Music is pure delight, and if I’m not listening to it I’m usually composing it in my mind.

If I gave in to my darkest urges and became a decadent wastrel I might spend all day blogging, listening to music, recording music and working on pure math. But I need other things to stay sane.

What research are you working on at the moment?

Lately I’ve been trying to finish a paper called “Struggles with the Continuum”. It’s about the problems physics has with infinities, due to the assumption that spacetime is a continuum. At certain junctures this paper became psychologically difficult to write, since it’s supposed to include a summary of quantum field theory, which is a complicated and sprawling subject. So, I’ve resorted to breaking this paper into blog articles and posting them on Physics Forums, just to motivate myself.

Purely for fun, I’ve been working with Greg Egan on some projects involving the octonions. The octonions are a number system where you can add, subtract, multiply and divide. Such number systems only exist in 1, 2, 4, and 8 dimensions: you’ve got the real numbers, which form a line, the complex numbers, which form a plane, the quaternions, which are 4-dimensional, and the octonions, which are 8-dimensional. The octonions are the biggest, but also the weirdest. For example, multiplication of octonions violates the associative law: (xy)z is not equal to x(yz). So the octonions sound completely crazy at first, but they turn out to have fascinating connections to string theory and other things. They’re pretty addictive, and if I became a decadent wastrel I would spend a lot more time on them.
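Non-associativity is easy to check by machine. Here is a small sketch that builds the octonions by the Cayley–Dickson construction (doubling reals to complexes to quaternions to octonions) and exhibits a triple of imaginary units where (xy)z = -x(yz):

```python
def conj(x):
    # Cayley-Dickson conjugate: conjugate the first half, negate the second.
    if len(x) == 1:
        return x
    n = len(x) // 2
    return conj(x[:n]) + tuple(-t for t in x[n:])

def mult(x, y):
    # Cayley-Dickson product: (a, b)(c, d) = (ac - d*b, da + bc*).
    if len(x) == 1:
        return (x[0] * y[0],)
    n = len(x) // 2
    a, b, c, d = x[:n], x[n:], y[:n], y[n:]
    left = tuple(p - q for p, q in zip(mult(a, c), mult(conj(d), b)))
    right = tuple(p + q for p, q in zip(mult(d, a), mult(b, conj(c))))
    return left + right

def e(i):
    # The i-th basis octonion, as a tuple of 8 coefficients.
    return tuple(1.0 if j == i else 0.0 for j in range(8))

print(mult(mult(e(1), e(2)), e(4)))  # (e1 e2) e4 =  e7
print(mult(e(1), mult(e(2), e(4)))) # e1 (e2 e4) = -e7
```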


The 240 unit integral octonions, projected onto a plane

There’s a concept of “integer” for the octonions, and the integral octonions form a lattice, a repeating pattern of points, in 8 dimensions. This is called the E8 lattice. There’s another lattice that lives in 24 dimensions, called the “Leech lattice”. Both are connected to string theory. Notice that 8+2 equals 10, the dimension superstrings like to live in, and 24+2 equals 26, the dimension bosonic strings like to live in. That’s not a coincidence! The 2 here comes from the 2-dimensional world-sheet of the string.

Since 3×8 is 24, Egan and I became interested in how you could build the Leech lattice from 3 copies of the E8 lattice. People already knew a trick for doing it, but it took us a while to understand how it worked—and then Egan showed you could do this trick in exactly 17,280 ways! I want to write up the proof. There’s a lot of beautiful geometry here.

There’s something really exhilarating about struggling to reach the point where you have some insight into these structures and how they’re connected to physics.

My main work, though, involves using category theory to study networks. I’m interested in networks of all kinds, from electrical circuits to neural networks to “chemical reaction networks” and many more. Different branches of science and engineering focus on different kinds of networks. But there’s not enough communication between researchers in different subjects, so it’s up to mathematicians to set up a unified theory of networks.

I’ve got seven grad students working on this project—or actually eight, if you count Brendan Fong: I’ve been helping him on his dissertation, but he’s actually a student at Oxford.

Brendan was the first to join the project. I wanted him to work on electrical circuits, which are a nice familiar kind of network, a good starting point. But he went much deeper: he developed a general category­-theoretic framework for studying networks. We then applied it to electrical circuits, and other things as well.


Blake Pollard and Brendan Fong at the Centre for Quantum Technologies

Blake Pollard is a student of mine in the physics department here at U. C. Riverside. Together with Brendan and me, he developed a category­-theoretic approach to Markov processes: random processes where a system hops around between different states. We used Brendan’s general formalism to reduce Markov processes to electrical circuits. Now Blake is going further and using these ideas to tackle chemical reaction networks.

My other students are in the math department at U. C. Riverside. Jason Erbele is working on “control theory”, a branch of engineering where you try to design feedback loops to make sure processes run in a stable way. Control theory uses networks called “signal flow diagrams”, and Jason has worked out how to understand these using category theory.


Signal flow diagram for an inverted pendulum on a cart

Jason isn’t using Brendan’s framework: he’s using a different one, called PROPs, which were developed a long time ago for use in algebraic topology. My student Franciscus Rebro has been developing it further, for use in our project. It gives a nice way to describe networks in terms of their basic building blocks. It also illuminates the similarity between signal flow diagrams and Feynman diagrams! They’re very similar, but there’s a big difference: in signal flow diagrams the signals are classical, while Feynman diagrams are quantum­-mechanical.

My student Brandon Coya has been working on electrical circuits. He’s sort of continuing what Brendan started, and unifying Brendan’s formalism with PROPs.

My student Adam Yassine is starting to work on networks in classical mechanics. In classical mechanics you usually consider a single system: you write down the Hamiltonian, you get the equations of motion, and you try to solve them. He’s working on a setup where you can take lots of systems and hook them up into a network.

My students Kenny Courser and Daniel Cicala are digging deeper into another aspect of network theory. As I hinted earlier, a category is about things and processes that turn one thing into another. In a 2­-category we also have “meta-processes” that turn one process into another. We’re starting to bring 2­-categories into network theory.

For example, you can use categories to describe an electrical circuit as a process that turns some inputs into some outputs. You put some currents in one end and some currents come out the other end. But you can also use 2­-categories to describe “meta-processes” that turn one electrical circuit into another. An example of a meta-process would be a way of simplifying an electrical circuit, like replacing two resistors in series by a single resistor.
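Here is a toy version of that meta-process, my own illustration rather than the actual category-theoretic machinery: a circuit is just a list of resistances in series, its observable behavior is the total resistance, and the rewrite merges two adjacent resistors while leaving the behavior unchanged.

```python
from typing import List

def behavior(circuit: List[float]) -> float:
    # The input-output behavior of a series circuit: its total resistance.
    return sum(circuit)

def merge_series(circuit: List[float], i: int) -> List[float]:
    # Meta-process: replace resistors i and i+1 by a single resistor.
    return circuit[:i] + [circuit[i] + circuit[i + 1]] + circuit[i + 2:]

c0 = [100.0, 220.0, 330.0]
c1 = merge_series(c0, 0)             # [320.0, 330.0]
print(behavior(c0) == behavior(c1))  # True: the rewrite preserves behavior
```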

Ultimately I want to push these ideas in the direction of biochemistry. Biology seems complicated and “messy” to physicists and mathematicians, but I think there must be a beautiful logic to it. It’s full of networks, and these networks change with time. So, 2­-categories seem like a natural language for biology.

It won’t be easy to convince people of this, but that’s okay.