The Koide Formula

4 April, 2021

There are three charged leptons: the electron, the muon and the tau. Let m_e, m_\mu and m_\tau be their masses. Then the Koide formula says

\displaystyle{ \frac{m_e + m_\mu + m_\tau}{\big(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau}\big)^2} = \frac{2}{3} }

There’s no known reason for this formula to be true! But if you plug in the experimentally measured values of the electron, muon and tau masses, it’s accurate within the current experimental error bars:

\displaystyle{ \frac{m_e + m_\mu + m_\tau}{\big(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau}\big)^2} = 0.666661 \pm 0.000007 }

Is this significant or just a coincidence? Will it fall apart when we measure the masses more accurately? Nobody knows.
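If you want to check the arithmetic yourself, here's a little Python sketch. The masses are rough present-day central values in MeV/c², quoted only for illustration, so don't lean on the last digits:

```python
from math import sqrt

# Rough charged lepton masses in MeV/c^2 -- approximate central values,
# included here only to illustrate the calculation.
m_e, m_mu, m_tau = 0.511, 105.658, 1776.86

masses = [m_e, m_mu, m_tau]
ratio = sum(masses) / sum(sqrt(m) for m in masses)**2
print(ratio)   # prints roughly 0.66666, strikingly close to 2/3
```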

Here’s something fun, though:

Puzzle. Show that no matter what the electron, muon and tau masses might be—that is, any positive numbers whatsoever—we must have

\displaystyle{ \frac{1}{3} \le \frac{m_e + m_\mu + m_\tau}{\big(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau}\big)^2} \le 1}

For some reason this ratio turns out to be almost exactly halfway between the lower bound and upper bound!
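Here's a tiny numerical illustration of those bounds (an illustration, not a proof): three equal masses give exactly 1/3, while letting one mass swamp the other two pushes the ratio up toward 1.

```python
from math import sqrt

def koide_ratio(m1, m2, m3):
    """Koide-style ratio for any three positive masses."""
    return (m1 + m2 + m3) / (sqrt(m1) + sqrt(m2) + sqrt(m3))**2

print(koide_ratio(1, 1, 1))          # equal masses: exactly 1/3
print(koide_ratio(1e-9, 1e-9, 1.0))  # one dominant mass: just below 1
```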

Koide came up with his formula in 1982 before the tau’s mass was measured very accurately.  At the time, using the observed electron and muon masses, his formula predicted the tau’s mass was

m_\tau = 1776.97 MeV/c²

while the observed mass was

m_\tau = 1784.2 ± 3.2 MeV/c²

Not very good.

In 1992 the tau’s mass was measured much more accurately and found to be

m_\tau = 1776.99 ± 0.28 MeV/c²

Much better!

Koide has some more recent thoughts about his formula:

• Yoshio Koide, What physics does the charged lepton mass relation tell us?, 2018.

He points out how difficult it is to explain a formula like this, given how masses depend on an energy scale in quantum field theory.


Vincenzo Galilei

3 April, 2021

I’ve been reading about early music. I ran into Vincenzo Galilei, an Italian lute player, composer, and music theorist who lived during the late Renaissance and helped start the Baroque era. Of course anyone interested in physics will know Galileo Galilei. And it turns out Vincenzo was Galileo’s dad!

The really interesting part is that Vincenzo did a lot of experiments—and he got Galileo interested in the experimental method!

Vincenzo started out as a lutenist, but in 1563 he met Gioseffo Zarlino, the most important music theorist of the sixteenth century, and began studying with him. Vincenzo became interested in tuning and keys, and in 1584 he anticipated Bach’s Well-Tempered Clavier by composing 24 groups of dances, one for each of the 12 major and 12 minor keys.

He also studied acoustics, especially vibrating strings and columns of air. He discovered that while the frequency of sound produced by a vibrating string varies inversely with the length of string, it’s also proportional to the square root of the tension applied. For example, weights suspended from strings of equal length need to be in a ratio of 9:4 to produce a perfect fifth, which is the frequency ratio 3:2.
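Here's a toy version of that scaling law in Python. The function name and the constant k are just mine; k stands in for all the details of the string I'm not modeling, so this only illustrates the proportionalities:

```python
from math import sqrt

def pitch(tension, length, k=1.0):
    """Toy scaling law: frequency proportional to sqrt(tension) / length."""
    return k * sqrt(tension) / length

# Two strings of equal length, with tensions (weights) in the ratio 9:4:
print(pitch(9, 1) / pitch(4, 1))   # 1.5, the 3:2 frequency ratio of a perfect fifth
```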

Galileo later told a biographer that Vincenzo introduced him to the idea of systematic testing and measurement. The basement of their house was strung with lengths of lute string materials, each of different lengths, with different weights attached. Some say this drew Galileo’s attention away from pure mathematics to physics!

You can see books by Vincenzo Galilei here:

• Internet Archive, Vincenzo Galilei, c. 1520 – 2 July 1591.

Unfortunately for me they’re in Italian, but the title of his Dialogo della Musica Antica et Della Moderna reminds me of his son’s Dialogo sopra i Due Massimi Sistemi del Mondo (Dialog Concerning the Two Chief World Systems).

Speaking of dialogs, here’s a nice lute duet by Vincenzo Galilei, played by Evangelina Mascardi and Frédéric Zigante:

It’s from his book Fronimo Dialogo, an instruction manual for the lute which includes many compositions, among them the 24 dances illustrating the 24 keys. “Fronimo” was an imaginary expert in the lute—in ancient Greek, phronimo means sage—and the book apparently consists of dialogs between Fronimo and a student, Eumazio (meaning “he who learns well”).

So, I now suspect that Galileo got his fondness for dialogs from his dad, too! Or maybe everyone was writing them back then?


Can We Understand the Standard Model Using Octonions?

31 March, 2021


I gave two talks in Latham Boyle and Kirill Krasnov’s Perimeter Institute workshop Octonions and the Standard Model.

The first talk was on Monday April 5th at noon Eastern Time. The second was exactly one week later, on Monday April 12th at noon Eastern Time.

Here they are:

Can we understand the Standard Model? (video, slides)

Abstract. 40 years trying to go beyond the Standard Model hasn’t yet led to any clear success. As an alternative, we could try to understand why the Standard Model is the way it is. In this talk we review some lessons from grand unified theories and also from recent work using the octonions. The gauge group of the Standard Model and its representation on one generation of fermions arises naturally from a process that involves splitting 10d Euclidean space into 4+6 dimensions, but also from a process that involves splitting 10d Minkowski spacetime into 4d Minkowski space and 6 spacelike dimensions. We explain both these approaches, and how to reconcile them.

And the second:

Can we understand the Standard Model using octonions? (video, slides)

Abstract. Dubois-Violette and Todorov have shown that the Standard Model gauge group can be constructed using the exceptional Jordan algebra, consisting of 3×3 self-adjoint matrices of octonions. After an introduction to the physics of Jordan algebras, we ponder the meaning of their construction. For example, it implies that the Standard Model gauge group consists of the symmetries of an octonionic qutrit that restrict to symmetries of an octonionic qubit and preserve all the structure arising from a choice of unit imaginary octonion. It also sheds light on why the Standard Model gauge group acts on 10d Euclidean space, or Minkowski spacetime, while preserving a 4+6 splitting.

You can see all the slides and videos and also some articles with more details here.


The Joy of Condensed Matter

24 March, 2021

I published a slightly different version of this article in Nautilus on February 24, 2021.


Everyone seems to be talking about the problems with physics: Woit’s book Not Even Wrong, Smolin’s The Trouble With Physics and Hossenfelder’s Lost in Math leap to mind, and they have started a wider conversation. But is all of physics really in trouble, or just some of it?

If you actually read these books, you’ll see they’re about so-called “fundamental physics”. Some other parts of physics are doing just fine, and I want to tell you about one. It’s called “condensed matter physics”, and it’s the study of solids and liquids. We are living in the golden age of condensed matter physics.

But first, what is “fundamental” physics? It’s a tricky term. You might think any truly revolutionary development in physics counts as fundamental. But in fact physicists use this term in a more precise, narrowly delimited way. One of the goals of physics is to figure out some laws that, at least in principle, we could use to predict everything that can be predicted about the physical universe. The search for these laws is fundamental physics.

The fine print is crucial. First: “at least in principle”. In principle we can use the fundamental physics we know to calculate the boiling point of water to immense accuracy—but nobody has done it yet, because the calculation is hard. Second: “everything that can be predicted”. As far as we can tell, quantum mechanics says there’s inherent randomness in things, which makes some predictions impossible, not just impractical, to carry out with certainty. And this inherent quantum randomness sometimes gets amplified over time, by a phenomenon called chaos. For this reason, even if we knew everything about the universe now, we couldn’t predict the weather precisely a year from now.

So even if fundamental physics succeeded perfectly, it would be far from giving the answer to all our questions about the physical world. But it’s important nonetheless, because it gives us the basic framework in which we can try to answer these questions.

As of now, research in fundamental physics has given us the Standard Model (which seeks to describe matter and all the forces except gravity) and General Relativity (which describes gravity). These theories are tremendously successful, but we know they are not the last word. Big questions remain unanswered—like the nature of dark matter, or whatever is fooling us into thinking there’s dark matter.
Unfortunately, progress on these questions has been very slow since the 1990s.

Luckily fundamental physics is not all of physics, and today it is no longer the most exciting part. There is still plenty of mind-blowing new physics being done. And a lot of it—though by no means all—is condensed matter physics.

Traditionally, the job of condensed matter physics was to predict the properties of solids and liquids found in nature. Sometimes this can be very hard: for example, computing the boiling point of water. But now we know enough fundamental physics to design strange new materials—and then actually make these materials, and probe their properties with experiments, testing our theories of how they should work. Even better, these experiments can often be done on a table top. There’s no need for enormous particle accelerators here.

Let’s look at an example. We’ll start with the humble “hole”. A crystal is a regular array of atoms, each with some electrons orbiting it. When one of these electrons gets knocked off somehow, we get a “hole”: an atom with a missing electron. And this hole can actually move around like a particle! When an electron from some neighboring atom moves to fill the hole, the hole moves to the neighboring atom. Imagine a line of people all wearing hats except for one whose head is bare: if their neighbor lends them their hat, the bare head moves to the neighbor. If this keeps happening, the bare head will move down the line of people. The absence of a thing can act like a thing!

The famous physicist Paul Dirac came up with the idea of holes in 1930. He correctly predicted that since electrons have negative electric charge, holes should have positive charge. Dirac was working on fundamental physics: he hoped the proton could be explained as a hole. That turned out not to be true. Later physicists found another particle that could: the “positron”. It’s just like an electron with the opposite charge. And thus antimatter—particles like ordinary matter particles, with the same mass but with the opposite charge—was born. But that’s another story.

In 1931, Heisenberg applied the idea of holes to condensed matter physics. He realized that just as electrons create an electrical current as they move along, so do holes—but because they’re positively charged, their electrical current goes in the other direction! It became clear that holes carry electrical current in some of the materials called “semiconductors”: for example, silicon with a bit of aluminum added to it. After many further developments, in 1948 the physicist William Shockley patented transistors that use both holes and electrons to form a kind of switch. He later won the Nobel prize for this work, and now transistors are widely used in computer chips.

Holes in semiconductors are not really particles in the sense of fundamental physics. They are really just a convenient way of thinking about the motion of electrons. But any sufficiently convenient abstraction takes on a life of its own. The equations that describe the behavior of holes are just like the equations that describe the behavior of particles. So, we can treat holes as if they were particles. We’ve already seen that a hole is positively charged. But because it takes energy to get a hole moving, a hole also acts like it has a mass. And so on: the properties we normally attribute to particles also make sense for holes.

Physicists have a name for things that act like particles even though they’re really not: “quasiparticles”. There are many kinds: holes are just one of the simplest. The beauty of quasiparticles is that we can practically make them to order, with a vast variety of properties. As Michael Nielsen put it, we now live in the era of “designer matter”.

For example, consider the “exciton”. Since an electron is negatively charged and a hole is positively charged, they attract each other. And if the hole is much heavier than the electron—remember, a hole has a mass—an electron can orbit a hole much as an electron orbits a proton in a hydrogen atom. Thus, they form a kind of artificial atom called an exciton. It’s a ghostly dance of presence and absence!


This is how an exciton moves through a crystal.

The idea of excitons goes back all the way to 1931. By now we can make excitons in large quantities in certain semiconductors. They don’t last for long: the electron quickly falls back into the hole. It can take between 1 and 10 trillionths of a second for this to happen. But that’s enough time to do some interesting things.

For example: if you can make an artificial atom, can you make an artificial molecule? Sure! Just as two atoms of hydrogen can stick together and form a molecule, two excitons can stick together and form a “biexciton”. An exciton can stick to another hole and form a “trion”. An exciton can even stick to a photon—a particle of light—and form something called a “polariton”. It’s a blend of matter and light!

Can you make a gas of artificial atoms? Yes! At low densities and high temperatures, excitons zip around very much like atoms in a gas. Can you make a liquid? Again, yes: at higher densities, and colder temperatures, excitons bump into each other enough to act like a liquid. At even colder temperatures, excitons can even form a “superfluid”, with almost zero viscosity: if you could somehow get it swirling around, it would go on practically forever.

This is just a small taste of what researchers in condensed matter physics are doing these days. Besides excitons, they are studying a host of other quasiparticles. A “phonon” is a quasiparticle of sound formed from vibrations moving through a crystal. A “magnon” is a quasiparticle of magnetization: a pulse of electrons in a crystal whose spins have flipped. The list goes on, and becomes ever more esoteric.

But there is also much more to the field than quasiparticles. Physicists can now create materials in which the speed of light is much slower than usual, say 40 miles an hour. They can create materials called “hyperbolic metamaterials” in which light moves as if there were two space dimensions and two time dimensions, instead of the usual three dimensions of space and one of time! Normally we think that time can go forward in just one direction, but in these substances light acts as if there’s a whole circle of directions that count as “forward in time”. The possibilities are limited only by our imagination and the fundamental laws of physics.

At this point, usually some skeptic comes along and questions whether these things are useful. Indeed, some of these new materials are likely to be useful. In fact a lot of condensed matter physics, while less glamorous than what I have just described, is carried out precisely to develop new improved computer chips—and also technologies like “photonics,” which uses light instead of electrons. The fruits of photonics are ubiquitous—it saturates modern technology, like flat-screen TVs—but physicists are now aiming for more radical applications, like computers that process information using light.

Then typically some other kind of skeptic comes along and asks if condensed matter physics is “just engineering”. Of course the very premise of this question is insulting: there is nothing wrong with engineering! Trying to build useful things is not only important in itself, it’s a great way to raise deep new questions about physics. For example the whole field of thermodynamics, and the idea of entropy, arose in part from trying to build better steam engines. But condensed matter physics is not just engineering. Large portions of it are blue-sky research into the possibilities of matter, like I’ve been talking about here.

These days, the field of condensed matter physics is just as full of rewarding new insights as the study of elementary particles or black holes. And unlike fundamental physics, progress in condensed matter physics is rapid—in part because experiments are comparatively cheap and easy, and in part because there is more new territory to explore.

So, when you see someone bemoaning the woes of fundamental physics, take them seriously—but don’t let it get you down. Just find a good article on condensed matter physics and read that. You’ll cheer up immediately.


Can We Understand the Standard Model?

16 March, 2021


I’m giving a talk in Latham Boyle and Kirill Krasnov’s Perimeter Institute workshop Octonions and the Standard Model on Monday April 5th at noon Eastern Time.

This talk will be a review of some facts about the Standard Model. Later I’ll give one that says more about the octonions.

Can we understand the Standard Model?

Abstract. 40 years trying to go beyond the Standard Model hasn’t yet led to any clear success. As an alternative, we could try to understand why the Standard Model is the way it is. In this talk we review some lessons from grand unified theories and also from recent work using the octonions. The gauge group of the Standard Model and its representation on one generation of fermions arises naturally from a process that involves splitting 10d Euclidean space into 4+6 dimensions, but also from a process that involves splitting 10d Minkowski spacetime into 4d Minkowski space and 6 spacelike dimensions. We explain both these approaches, and how to reconcile them.

You can see the slides here, and later a video of my talk will appear. You can register to attend the talk at the workshop’s website.

Here’s a puzzle, just for fun. As I’ll recall in my talk, there’s a normal subgroup of \mathrm{U}(1) \times \mathrm{SU}(2) \times \mathrm{SU}(3) that acts trivially on all known particles, and this fact is very important. The ‘true’ gauge group of the Standard Model is the quotient of \mathrm{U}(1) \times \mathrm{SU}(2) \times \mathrm{SU}(3) by this normal subgroup.

This normal subgroup is isomorphic to \mathbb{Z}_6 and it consists of all the elements

(\zeta^n, (-1)^n, \omega^n )  \in \mathrm{U}(1) \times \mathrm{SU}(2) \times \mathrm{SU}(3)

where

\zeta = e^{2 \pi i / 6}

is my favorite primitive 6th root of unity, -1 is my favorite primitive square root of unity, and

\omega = e^{2 \pi i / 3}

is my favorite primitive cube root of unity. (I’m a primitive kind of guy, in touch with my roots.)

Here I’m turning the numbers (-1)^n into elements of \mathrm{SU}(2) by multiplying them by the 2 \times 2 identity matrix, and turning the numbers \omega^n into elements of \mathrm{SU}(3) by multiplying them by the 3 \times 3 identity matrix.

But in fact there are a bunch of normal subgroups of \mathrm{U}(1) \times \mathrm{SU}(2) \times \mathrm{SU}(3) isomorphic to \mathbb{Z}_6. By my count there are 12 of them! So you have to be careful that you’ve got the right one, when you’re playing with some math and trying to make it match the Standard Model.

Puzzle 1. Are there really exactly 12 normal subgroups of \mathrm{U}(1) \times \mathrm{SU}(2) \times \mathrm{SU}(3) that are isomorphic to \mathbb{Z}_6?

Puzzle 2. Which ones give quotients isomorphic to the true gauge group of the Standard Model, which is \mathrm{U}(1) \times \mathrm{SU}(2) \times \mathrm{SU}(3) modulo the group of elements (\zeta^n, (-1)^n, \omega^n)?

To get you started, it helps to know that every proper normal subgroup of \mathrm{SU}(2) is a subgroup of its center, which consists of the matrices \pm 1. Similarly, every proper normal subgroup of \mathrm{SU}(3) is a subgroup of its center, which consists of the matrices 1, \omega and \omega^2. So, the center of \mathrm{U}(1) \times \mathrm{SU}(2) \times \mathrm{SU}(3) is \mathrm{U}(1) \times \mathbb{Z}_2 \times \mathbb{Z}_3.

Here, I believe, are the 12 normal subgroups of \mathrm{U}(1) \times \mathrm{SU}(2) \times \mathrm{SU}(3) isomorphic to \mathbb{Z}_6. I could easily have missed some, or gotten something else wrong!

  1. The group consisting of all elements (1, (-1)^n, \omega^n).
  2. The group consisting of all elements ((-1)^n, 1, \omega^n).
  3. The group consisting of all elements ((-1)^n, (-1)^n, \omega^n).
  4. The group consisting of all elements (\omega^n, (-1)^n, 1).
  5. The group consisting of all elements (\omega^n, (-1)^n, \omega^n).
  6. The group consisting of all elements (\omega^n, (-1)^n, \omega^{-n}).
  7. The group consisting of all elements (\zeta^n , 1, 1).
  8. The group consisting of all elements (\zeta^n , (-1)^n, 1).
  9. The group consisting of all elements (\zeta^n , 1, \omega^n).
  10. The group consisting of all elements (\zeta^n , 1, \omega^{-n}).
  11. The group consisting of all elements (\zeta^n , (-1)^n, \omega^n).
  12. The group consisting of all elements (\zeta^n , (-1)^n, \omega^{-n}).
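If you don't trust my list, here's a quick sanity check in Python. It models just the finite subgroup of the center generated by \zeta, -1 and \omega, encoding (\zeta^a, (-1)^b, \omega^c) as the triple (a mod 6, b mod 2, c mod 3). That's enough to confirm that each of the 12 candidates above is cyclic of order 6 and that no two of them coincide; it says nothing about the puzzles themselves.

```python
# Encode (zeta^a, (-1)^b, omega^c) in the center of U(1) x SU(2) x SU(3)
# as the triple (a mod 6, b mod 2, c mod 3).
MODS = (6, 2, 3)

def cyclic_subgroup(gen):
    """Return the cyclic subgroup generated by gen, as a frozenset of triples."""
    elems, g = set(), (0, 0, 0)
    while True:
        g = tuple((x + y) % m for x, y, m in zip(g, gen, MODS))
        elems.add(g)
        if g == (0, 0, 0):
            return frozenset(elems)

# Generators of the 12 candidate subgroups, in the order listed above (take n = 1).
generators = [
    (0, 1, 1), (3, 0, 1), (3, 1, 1),
    (2, 1, 0), (2, 1, 1), (2, 1, 2),
    (1, 0, 0), (1, 1, 0), (1, 0, 1),
    (1, 0, 2), (1, 1, 1), (1, 1, 2),
]

subgroups = [cyclic_subgroup(g) for g in generators]
print(all(len(H) == 6 for H in subgroups))   # True: each one is isomorphic to Z_6
print(len(set(subgroups)))                   # 12: no two of them coincide
```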

Magic Numbers

9 March, 2021

Working in the Manhattan Project, Maria Goeppert Mayer discovered in 1948 that nuclei with certain numbers of protons and/or neutrons are more stable than others. In 1963 she won the Nobel prize for explaining this discovery with her ‘nuclear shell model’.

Nuclei with 2, 8, 20, 28, 50, or 82 protons are especially stable, and also nuclei with 2, 8, 20, 28, 50, 82 or 126 neutrons. Eugene Wigner called these magic numbers, and it’s a fun challenge to explain them.

For starters one can imagine a bunch of identical fermions in a harmonic oscillator potential. In one-dimensional space we have evenly spaced energy levels, each of which holds one state if we ignore spin. I’ll write this as

1, 1, 1, 1, ….

But if we have spin-1/2 fermions, each of these energy levels can hold two spin states, so the numbers double:

2, 2, 2, 2, ….

In two-dimensional space, ignoring spin, the pattern changes to

1, 1+1, 1+1+1, 1+1+1+1, ….

or in other words

1, 2, 3, 4, ….

That is: there’s one state of the lowest possible energy, 2 states of the next energy, and so on. Including spin the numbers double:

2, 4, 6, 8, ….

In three-dimensional space the pattern changes to this if we ignore spin:

1, 1+2, 1+2+3, 1+2+3+4, ….

or

1, 3, 6, 10, ….

So, we’re getting triangular numbers! Here’s a nice picture of these states, drawn by J. G. Moxness:


Including spin the numbers double:

2, 6, 12, 20, ….

So, there are 2 states of the lowest energy, 2+6 = 8 states of the first two energies, 2+6+12 = 20 states of the first three energies, and so on. We’ve got the first 3 magic numbers right! But then things break down: next we get 2+6+12+20 = 40, while the next magic number is just 28.
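Here's the naive counting in a few lines of Python, just to make the pattern concrete. This is only the simplified shell-filling estimate, with none of the corrections discussed next:

```python
def states_3d(level):
    """Spatial states of the 3d harmonic oscillator at a given level: a triangular number."""
    return (level + 1) * (level + 2) // 2

total = 0
for level in range(4):
    total += 2 * states_3d(level)   # factor of 2 for the two spin states
    print(level, 2 * states_3d(level), total)
# Running totals: 2, 8, 20, 40 -- the first three magic numbers come out right,
# but the fourth comes out as 40 instead of the observed 28.
```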

Wikipedia has a nice explanation of what goes wrong and how to fix it to get the next few magic numbers right:

• Nuclear shell model.

We need to take two more effects into account. First, ‘spin-orbit interactions’ decrease the energy of a state when a nucleon’s spin points in the same direction as its orbital angular momentum. Second, the harmonic oscillator potential gets flattened out at large distances, so states of high angular momentum have less energy than you’d expect. I won’t attempt to explain the details, since Wikipedia does a pretty good job and I’m going to want breakfast soon. Here’s a picture that cryptically summarizes the analysis:

The notation is old-fashioned, from spectroscopy—you may know it if you’ve studied atomic physics, or chemistry. If you don’t know it, don’t worry about it! The main point is that the energy levels in the simple story I just told change a bit. They don’t change much until we hit the fourth magic number; then 8 of the next 20 states get lowered so much that this magic number is 2+6+12+8 = 28 instead of 2+6+12+20 = 40. Things go on from there.

But here’s something cute: our simplified calculation of the magic numbers actually matches the count of states in each energy level for a four-dimensional harmonic oscillator! In four dimensions, if we ignore spin, the number of states in each energy level goes like this:

1, 1+3, 1+3+6, 1+3+6+10, …

These are the tetrahedral numbers:

Doubling them to take spin into account, we get the first three magic numbers right! Then, alas, we get 40 instead of 28.

But we can understand some interesting features of the world using just the first three magic numbers: 2, 8, and 20.

For example, helium-4 has 2 protons and 2 neutrons, so it’s ‘doubly magic’ and very stable. It’s the second most common substance in the universe! And in radioactive decays, often a helium nucleus gets shot out. Before anyone knew what it was, people called it an ‘alpha particle’… and the name stuck.

Oxygen-16, with 8 protons and 8 neutrons, is also doubly magic. So is calcium-40, with 20 protons and 20 neutrons. This is the heaviest stable nuclide with the same number of protons and neutrons! After that, the repulsive electric charge of the protons needs to be counteracted by a greater number of neutrons.

A wilder example is helium-10, with 2 protons and 8 neutrons. It’s doubly magic, but not stable. It just barely clings to existence, helped by all that magic.

Here’s one thing I didn’t explain yet, which is actually pretty easy. Why is it true that—ignoring the spin—the number of states of the harmonic oscillator in the nth energy level follows this pattern in one-dimensional space:

1, 1, 1, 1, ….

and this pattern in two-dimensional space:

1, 1+1 = 2, 1+1+1 = 3, 1+1+1+1 = 4, …

and this pattern in three-dimensional space:

1, 1+2 = 3, 1+2+3 = 6, 1+2+3+4 = 10, ….

and this pattern in four-dimensional space:

1, 1+3 = 4, 1+3+6 = 10, 1+3+6+10 = 20, ….

and so on?

To see this we need to know two things. First, the allowed energies for a harmonic oscillator in one-dimensional space are equally spaced. So, if we say the lowest energy allowed is 0, by convention, and choose units where the next allowed energy is 1, then the allowed energies are the natural numbers:

0, 1, 2, 3, 4, ….

Second, a harmonic oscillator in n-dimensional space is just like n independent harmonic oscillators in one-dimensional space. In particular, its energy is just the sum of their energies.

So, the number of states of energy E for an n-dimensional oscillator is just the number of ways of writing E as a sum of a list of n natural numbers! The order of the list matters here: writing 3 as 1+2 counts as different than writing it as 2+1.
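If you like, you can let a computer do this counting by brute force. A minimal sketch: enumerate every ordered list of natural numbers of a given length and keep the ones with the right sum.

```python
from itertools import product

def oscillator_states(dim, energy):
    """Count ordered lists of `dim` natural numbers that sum to `energy`."""
    return sum(1 for levels in product(range(energy + 1), repeat=dim)
               if sum(levels) == energy)

for dim in range(1, 5):
    print(dim, [oscillator_states(dim, E) for E in range(4)])
# 1 [1, 1, 1, 1]
# 2 [1, 2, 3, 4]
# 3 [1, 3, 6, 10]
# 4 [1, 4, 10, 20]
```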

This leads to the patterns we’ve seen. For example, consider a harmonic oscillator in two-dimensional space. It has 1 state of energy 0, namely

0+0

It has 2 states of energy 1, namely

1+0 and 0+1

It has 3 states of energy 2, namely

2+0 and 1+1 and 0+2

and so on.

Next, consider a harmonic oscillator in three-dimensional space. This has 1 state of energy 0, namely

0+0+0

It has 3 states of energy 1, namely

1+0+0 and 0+1+0 and 0+0+1

It has 6 states of energy 2, namely

2+0+0 and 1+1+0 and 1+0+1 and 0+2+0 and 0+1+1 and 0+0+2

and so on. You can check that we’re getting triangular numbers: 1, 3, 6, etc. The easiest way is to note that to get a state of energy E, the first of the three independent oscillators can have any natural number j from 0 to E as its energy, and then there are E – j + 1 ways to choose the energies of the other two oscillators so that they sum to E – j. This gives a total of

(E+1) + E + (E-1) + \cdots + 1

states, and this is a triangular number.

The pattern continues in a recursive way: in four-dimensional space the same sort of argument gives us tetrahedral numbers because these are sums of triangular numbers, and so on. We’re getting the diagonals of Pascal’s triangle, otherwise known as binomial coefficients.



We often think of the binomial coefficient

\displaystyle{\binom{n}{k} }

as the number of ways of choosing a k-element subset of an n-element set. But here we are seeing it’s also the number of ways of choosing an ordered (k+1)-tuple of natural numbers that sum to n – k. You may enjoy finding a quick proof that these two things are equal!
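Here's a quick numerical check of that identity, brute-forcing the tuple count and comparing it with the binomial coefficient. It's only a check, not the quick proof you're invited to find:

```python
from itertools import product
from math import comb

def tuples_summing_to(parts, total):
    """Count ordered tuples of `parts` natural numbers that sum to `total`."""
    return sum(1 for t in product(range(total + 1), repeat=parts)
               if sum(t) == total)

# binom(n, k) should equal the number of ordered (k+1)-tuples summing to n - k.
assert all(comb(n, k) == tuples_summing_to(k + 1, n - k)
           for n in range(8) for k in range(n + 1))
print("checks out for all n < 8")
```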


Hypernuclei

6 March, 2021

A baryon is a particle made of 3 quarks. The most familiar are the proton, which consists of two up quarks and a down quark, and the neutron, made of two downs and an up. Baryons containing strange quarks were discovered later, since the strange quark is more massive and soon decays to an up or down quark. A hyperon is a baryon that contains one or more strange quarks, but none of the still more massive quarks.

The first hyperon to be found was the Λ, or lambda baryon. It’s made of an up quark, a down quark and a strange quark. You can think of it as a ‘heavy neutron’ in which one down quark was replaced by a strange quark. The strange quark has the same charge as the down, so like the neutron the Λ is neutral.

The Λ baryon was discovered in October 1950 by V. D. Hopper and S. Biswas of the University of Melbourne: these particles were produced naturally when cosmic rays hit the upper atmosphere, and they were detected in photographic emulsions flown in a balloon. Imagine discovering a new elementary particle using a balloon! Those were the good old days.

The Λ has a mean life of just 0.26 nanoseconds, but that’s actually a long time in this business. The strange quark can only decay using the weak force, which, as its name suggests, is weak—so this happens slowly compared to decays involving the electromagnetic or strong forces.

For comparison, the Δ+ baryon is made of two ups and a down, just like a proton, but it has spin 3/2 instead of spin 1/2. So, you can think of it as a ‘fast-spinning proton’. It decays very quickly via the strong force: it has a mean life of just 5.6 × 10⁻²⁴ seconds! When you get used to things like this, a nanosecond seems like an eternity.

The unexpectedly long lifetime of the Λ and some other particles was considered ‘strange’, and this eventually led people to dream up a quantity called ‘strangeness’, which is conserved by the strong and electromagnetic interactions but can be changed by the weak interaction, so that strange particles decay on time scales of roughly nanoseconds. In 1964 Murray Gell-Mann realized that strangeness is simply the number of strange antiquarks in a particle, minus the number of strange quarks.

So, what’s a ‘hypernucleus’?

A hypernucleus is a nucleus containing one or more hyperons along with the usual protons and neutrons. Since nuclei are held together by the strong force, they do things on time scales of 10⁻²³ seconds—so an extra hyperon, which lasts many trillions of times longer, can be regarded as a stable particle of a new kind when you’re doing nuclear physics! It lets you build new kinds of nuclei.

One well-studied hypernucleus is the hypertriton. Remember, an ordinary triton consists of a proton and two neutrons: it’s the nucleus of tritium, the radioactive isotope of hydrogen used in hydrogen bombs, also known as hydrogen-3. To get a hypertriton, we replace one of the neutrons with a Λ. So, it consists of a proton, a neutron, and a Λ.

In a hypertriton, the Λ behaves almost like a free particle. So, the lifetime of a hypertriton should be almost the same as that of a Λ by itself. Remember, the lifetime of the Λ is 0.26 nanoseconds. The lifetime of the hypertriton is a bit less: 0.24 nanoseconds. Predicting this lifetime, and even measuring it accurately, has taken a lot of work:

• Hypertriton lifetime puzzle nears resolution, CERN Courier, 20 December 2019.

Hypernuclei get more interesting when they have more protons and neutrons. In a nucleus the protons form ‘shells’: due to the Pauli exclusion principle, you can only put one proton in each state. The neutrons form their own shells. So the situation is a bit like chemistry, where the electrons form shells, but now you have two kinds of shells. For example in helium-4 we have two protons, one spin-up and one spin-down, in the lowest energy level, also known as the first shell—and also two neutrons in their lowest energy level.

If you add an extra neutron to your helium-4, to get helium-5, it has to occupy a higher energy level. But if you add a hyperon, since it’s different from both the proton and the neutron, it too can occupy the lowest energy level.

Indeed, no matter how big your nucleus is, if you add a hyperon it goes straight to the lowest energy level! You can roughly imagine it as falling straight to the center of the nucleus—though everything is quantum-mechanical, so these mental images have to be taken with a grain of salt.

One reason for studying hypernuclei is that in some neutron stars, the inner core may contain hyperons! The point is that by weaseling around the Pauli exclusion principle, we can get more particles in low-energy states, producing dense forms of nuclear matter that have less energy. But nobody knows if this ‘strange nuclear matter’ is really stable. So this is an active topic of research. Hypernuclei are one of the few ways to learn useful information about this using experiments in the lab.

For a lot more, try this:

• A. Gal, E. V. Hungerford and D. J. Millener, Strangeness in nuclear physics, Reviews of Modern Physics 88 (2016), 035004.

You can see some hyperons in the baryon octet, which consists of spin-1/2 baryons made of up, down and strange quarks:

and the baryon decuplet, which consists of spin-3/2 baryons made of up, down and strange quarks:

In these charts I₃ is proportional to the number of up quarks minus the number of down quarks, Q is the electric charge, and S is the strangeness.

Gell-Mann and other physicists realized that mathematically, the baryon octet and the baryon decuplet are both irreducible representations of SU(3). But that’s another tale!


Physics History Puzzle

3 March, 2021

Which famous physicist once gave a lecture attended by a secret agent with a pistol, who would kill him if he said the wrong thing?


Theoretical Physics in the 21st Century

1 March, 2021

I gave a talk at the Zürich Theoretical Physics Colloquium for Sustainability Week 2021. I was excited to get a chance to speak both about the future of theoretical physics and the climate crisis.

You can see a video of my talk, and also my slides: links in blue on my slides lead to more information.

Title: Theoretical Physics in the 21st Century.

Time: Monday, 8 March 2021, 15:45 UTC (that is, Greenwich Mean Time).

Abstract: The 20th century was the century of physics. What about the 21st? Though progress on some old problems is frustratingly slow, exciting new questions are emerging in condensed matter physics, nonequilibrium thermodynamics and other fields. And most of all, the 21st century is the dawn of the Anthropocene, in which we will adapt to the realities of life on a finite-​sized planet. How can physicists help here?

Hosts: Niklas Beisert, Anna Knörr.


Neutrinos and Neutrettos

7 February, 2021

Until I improved it, the Wikipedia article on the muon neutrino said:

The muon neutrino is a lepton, an elementary subatomic particle which has the symbol νμ and no net electric charge. Together with the muon it forms the second generation of leptons, hence the name muon neutrino. It was first hypothesized in the early 1940s by several people, and was discovered in 1962 by Leon Lederman, Melvin Schwartz and Jack Steinberger.

But it didn’t say who hypothesized it! So I got curious: who predicted the existence of the muon neutrino, and in what papers did they do it?

A sort of obvious theory is this: when people realized the muon was a lot like an electron, they began to suspect that just as an electron can turn into an electron neutrino in some reactions, a muon could turn into a muon neutrino.

This is approximately right, but the history is not so simple. To understand it you need a bit of background. In 1935, Hideki Yukawa predicted that the attractive force between nucleons was carried by a particle lighter than these particles but heavier than the electron. This middleweight particle was dubbed a ‘mesotron’, or later ‘meson’.

A particle in this mass range was discovered in 1937, and people thought at first it was Yukawa’s meson. But later they realized it wasn’t: it didn’t interact much with nucleons!

In 1947, Yukawa’s meson was found. They named it the ‘pi meson’, or ‘pion’ for short. The original impostor was renamed the ‘mu meson’, or ‘muon’. For a while it, too, was considered a meson of sorts. But by now we consider it a wholly different sort of beast, so the term ‘mu meson’ is no longer used.

A negative pion usually decays into a muon and a muon antineutrino. The muon, in turn, typically decays into an electron, a muon neutrino and an electron antineutrino. Note that ‘electron-ness’ and ‘muon-ness’ are separately conserved in these processes.

Studying these processes made people suspect the existence of the muon neutrino and its antiparticle. But people first observed them in the midst of sorting out the confusion between muons and pions. So everything was a mess for a while.

In fact, at one point what we now call the muon neutrino was called the ‘neutretto’!

I asked around about these issues, and I got three very interesting answers.

Over on the History of Science and Mathematics StackExchange I asked:

So: who predicted the existence of the muon neutrino, and in what papers did they do it?

and someone named Conifold answered:

Nobody in particular, it was what is called “folklore”. The idea came up naturally when muon decay was observed by several groups in 1948, see Anicin’s The neutrino – its past, present and future:

When in 1948 the electron spectrum from muon decay was found to be continuous it became obvious that not one but two neutrinos are emitted along with the electron. Pontecorvo witnesses that at that time everybody felt that the two neutrinos should be different. They were even named differently, the “neutrino” and the “neutretto”, but with time the idea seem to have been forgotten and it was only in 1962, when the difference between the two neutrinos has been clearly demonstrated in the first of a long series of important accelerator neutrino experiments, that the electron and the muon neutrino were finally given life.

The Pontecorvo reference is to his The infancy and youth of neutrino physics: some recollections, where we read:

Several groups, among which J. Steinberger, E. Hincks and I, and others were investigating the (cosmic) muon decay. The result of the investigations was that the decaying muon emits 3 particles: one electron (this we found by measuring the electron Bremsstrahlung) and two neutral particles, which were called by various people in different ways: two neutrinos, neutrino and neutretto, ν and ν’, etc. I am saying this to make clear that for people working on muons in the old times, the question about different types of neutrinos has always been present. True, later on many theoreticians forgot all about it, and some of them “invented” again the two neutrinos (for example M. Markov), but for people like Bernardini, Steinberger, Hincks and me … the two neutrino question was never forgotten… How to perform the decisive experiment I was able to formulate /40/ clearly enough (the use of muon neutrino beams). At the time the idea of the experiment was not obvious, although the statement may be strange today: one must search for electrons and muons produced in matter by muon neutrinos“.

/40/ is the reference to Pontecorvo’s 1959 paper in JETP, The Universal Fermi interaction and astrophysics.

Over on Physics StackExchange someone named HDE 226868 wrote:

This seems to be a rather complicated issue. The earliest source I’ve been able to find proposed the existence of a muon counterpart to the electron neutrino is Sakata & Inoue’s On the correlations between mesons and Yukawa particles, published in English in 1946 but formulated several years prior. They postulated the existence of a charged meson m^{\pm} and a neutral meson n which could interact with the then-called “Yukawa meson” Y^{\pm} (the charged pion) by

m^{\pm}\leftrightarrow n+Y^{\pm},\quad n\leftrightarrow m^{\pm}+Y^{\mp}

In particular, they described n as a

neutral meson which is assumed in the following discussions to have a negligible mass, and consequently may be regarded as equivalent with the neutrino

Decades later (I can’t determine the precise data), Masami Nakagama wrote in Neutrinos and Sakata: a personal view that it was later assumed by many “from the convenience and economy principles” that the beta decay neutrinos (electron neutrinos) and the “neutral mesons” of Sakata and Inoue were the same, something Sakata apparently resisted. In conjunction with Sakata’s objections, Ogama & Kamefuchi’s On the µ-meson decay explored some problematic consequences of assuming that the two particles were identical, meaning that the debate was going on as of 1950. The upshot? It seems that the community may have shifted from the idea that there was a sibling to the [electron] neutrino to the idea that this particle was the same as the neutrino; then shifted back in the aftermath of the 1962 experiments at Brookhaven (interestingly enough, the Danby et al. paper reporting the experiments mentions none of the abovementioned theories).

An additional reason I say that this is complicated is that proponents of the distinct-particle theory might not have still classified both particles under the umbrella of “neutrino”—in other words, it’s not clear to me that Sakata & Inoue intended for their “neutral meson” to be thought of as a true sibling to the neutrino, or just an analogous counterpart in a pair of sort-of-analogous interactions. But that may not be an objection that others think is substantial.

Also on Physics StackExchange, Karim Chahine wrote:

I may have found it. I’m quoting Wikipedia’s article on Schoichi Sakata:

Sakata and Inoue proposed their two-meson theory in 1942. At the time, a charged particle discovered in the hard component of cosmic rays was misidentified as the Yukawa’s meson (π±, nuclear force carrier particle). The misinterpretation led to puzzles in the discovered cosmic ray particle. Sakata and Inoue solved these puzzles by identifying the cosmic ray particle as a daughter charged fermion produced in the π± decay. A new neutral fermion was also introduced to allow π± decay into fermions.

We now know that these charged and neutral fermions correspond to the second generation leptons μ and νμ in the modern language. They then discussed the decay of the Yukawa particle,

\pi^+\rightarrow \mu^+ + \nu_\mu

Sakata and Inoue predicted correct spin assignment for the muon, and they also introduced the second neutrino. They treated it as a distinct particle from the beta decay neutrino, and anticipated correctly the three body decay of the muon. The English printing of Sakata–Inoue’s two-meson theory paper was delayed until 1946, one year before the experimental discovery of \pi\rightarrow\mu\nu decay.

I might be wrong but it’s my best shot.