Hidden Symmetries of the Hydrogen Atom

4 April, 2019

Here’s the math colloquium talk I gave at Georgia Tech this week:

Abstract. A classical particle moving in an inverse square central force, like a planet in the gravitational field of the Sun, moves in orbits that do not precess. This lack of precession, special to the inverse square force, indicates the presence of extra conserved quantities beyond the obvious ones. Thanks to Noether’s theorem, these indicate the presence of extra symmetries. It turns out that not only rotations in 3 dimensions, but also in 4 dimensions, act as symmetries of this system. These extra symmetries are also present in the quantum version of the problem, where they explain some surprising features of the hydrogen atom. The quest to fully understand these symmetries leads to some fascinating mathematical adventures.

I left out a lot of calculations, but someday I want to write a paper where I put them all in. This material is all known, but I feel like explaining it my own way.

In the process of creating the slides and giving the talk, though, I realized there’s a lot I don’t understand yet. Some of it is embarrassingly basic! For example, I give Greg Egan’s nice intuitive argument for how you can get some ‘Runge–Lenz symmetries’ in the 2d Kepler problem. I might as well just quote his article:

• Greg Egan, The ellipse and the atom.

He says:

Now, one way to find orbits with the same energy is by applying a rotation that leaves the sun fixed but repositions the planet. Any ordinary three-dimensional rotation can be used in this way, yielding another orbit with exactly the same shape, but oriented differently.

But there is another transformation we can use to give us a new orbit without changing the total energy. If we grab hold of the planet at either of the points where it’s travelling parallel to the axis of the ellipse, and then swing it along a circular arc centred on the sun, we can reposition it without altering its distance from the sun. But rather than rotating its velocity in the same fashion (as we would do if we wanted to rotate the orbit as a whole) we leave its velocity vector unchanged: its direction, as well as its length, stays the same.

Since we haven’t changed the planet’s distance from the sun, its potential energy is unaltered, and since we haven’t changed its velocity, its kinetic energy is the same. What’s more, since the speed of a planet of a given mass when it’s moving parallel to the axis of its orbit depends only on its total energy, the planet will still be in that state with respect to its new orbit, and so the new orbit’s axis must be parallel to the axis of the original orbit.

Rotations together with these ‘Runge–Lenz transformations’ generate an SO(3) action on the space of elliptical orbits of any given energy. But what’s the most geometrically vivid description of this SO(3) action?

Someone at my talk noted that you could grab the planet at any point of its path, move it anywhere the same distance from the Sun while keeping its speed the same, and get a new orbit with the same energy. Are all the SO(3) transformations of this form?
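Here's a numerical check of one instance of this: swing the planet along an arc centred on the Sun and keep its velocity vector unchanged (so its speed is certainly unchanged). The energy is untouched, but the Runge–Lenz vector, which points along the orbit's axis, changes, so we really do get a different orbit of the same energy. The numbers are illustrative, in units where $GM = m = 1$:

```python
import numpy as np

# 2d Kepler problem in units with GM = m = 1.
# Energy: E = |v|^2/2 - 1/|r|.
# Runge-Lenz vector: A = v x L - r/|r|, with L the (scalar) angular momentum.
def energy(r, v):
    return 0.5 * np.dot(v, v) - 1.0 / np.linalg.norm(r)

def runge_lenz(r, v):
    L = r[0] * v[1] - r[1] * v[0]            # angular momentum, a scalar in 2d
    return np.array([v[1] * L, -v[0] * L]) - r / np.linalg.norm(r)

r1, v1 = np.array([1.0, 0.0]), np.array([0.0, 0.8])
theta = 0.7                                   # swing the planet along an arc...
R = np.array([[np.cos(theta), -np.sin(theta)],
              [np.sin(theta),  np.cos(theta)]])
r2, v2 = R @ r1, v1                           # ...but keep its velocity unchanged
```

Here `energy(r1, v1)` equals `energy(r2, v2)`, while `runge_lenz` gives visibly different vectors, confirming the orbit has changed without changing the energy.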

I have a bunch more questions, but this one is the simplest!

Problems with the Standard Model Higgs

25 February, 2019

Here is a conversation I had with Scott Aaronson. It started on his blog, in a discussion about ‘fine-tuning’. Some say the Standard Model of particle physics can’t be the whole story, because in this theory you need to fine-tune the fundamental constants to keep the Higgs mass from becoming huge. Others say this argument is invalid.

I tried to push the conversation toward the calculations that actually underlie this argument. Then our conversation drifted into email and got more technical… and perhaps also more interesting, because it led us to contemplate the stability of the vacuum!

You see, if we screwed up royally on our fine-tuning and came up with a theory where the square of the Higgs mass was negative, the vacuum would be unstable. It would instantly decay into a vast explosion of Higgs bosons.

Another possibility, also weird, turns out to be slightly more plausible. This is that the Higgs mass is positive—as it clearly is—and yet the vacuum is ‘metastable’. In this scenario, the vacuum we see around us might last a long time, and yet eventually it could decay through quantum tunnelling to the ‘true’ vacuum, with a lower energy density:

Little bubbles of true vacuum would form, randomly, and then grow very rapidly. This would be the end of life as we know it.

Scott agreed that other people might like to see our conversation. So here it is. I’ll fix a few mistakes, to make me seem smarter than I actually am.

Scott wrote:

If I said, “supersymmetry basically has to be there because it’s such a beautiful symmetry,” that would be an argument from beauty. But I didn’t say that, and I disagree with anyone who does say it. I said something weaker, what you might call an argument from the explanatory coherence of the world. It merely says that, when we find basic parameters of nature to be precariously balanced against each other, to one part in $10^{10}$ or whatever, there’s almost certainly some explanation. It doesn’t say the explanation will be beautiful, it doesn’t say it will be discoverable by an FCC or any other collider, and it doesn’t say it will have a form (like SUSY) that anyone has thought of yet.

Scott wrote:

It merely says that, when we find basic parameters of nature to be precariously balanced against each other, to one part in $10^{10}$ or whatever, there’s almost certainly some explanation.

Do you know examples of this sort of situation in particle physics, or is this just a hypothetical situation?

Scott wrote:

To answer a question with a question, do you disagree that that’s the current situation with (for example) the Higgs mass, not to mention the vacuum energy, if one considers everything that could naïvely contribute? A lot of people told me it was, but maybe they lied or I misunderstood them.

John wrote:

The basic rough story is this. We measure the Higgs mass. We can assume that the Standard Model is good up to some energy near the Planck energy, after which it fizzles out for some unspecified reason.

According to the Standard Model, each of its 25 fundamental constants is a “running coupling constant”. That is, it’s not really a constant, but a function of energy: roughly, the energy of the process we use to measure that constant. Let’s call these “coupling constants measured at energy E”. Each of these 25 functions is determined by the values of all 25 functions at any fixed energy E – e.g. energy zero, or the Planck energy. This is called the “renormalization group flow”.
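As a toy illustration of what such a flow looks like (one invented coupling instead of the real 25, with a made-up coefficient), here's a sketch in Python:

```python
# A toy renormalization group flow: one running coupling g(E) obeying
# dg/d(ln E) = b*g^3, integrated with a simple Euler method. This is a
# stand-in for the Standard Model's coupled system of 25 equations; the
# coefficient b and the numbers below are invented, not SM values.
def rg_flow(g0, b, log_E_range, steps=10000):
    g = g0
    dlog = (log_E_range[1] - log_E_range[0]) / steps
    for _ in range(steps):
        g += b * g**3 * dlog     # one Euler step in ln E
    return g

g_low = 0.3                                               # value at a low scale
g_high = rg_flow(g_low, b=0.01, log_E_range=(0.0, 40.0))  # flowed far up
g_back = rg_flow(g_high, b=0.01, log_E_range=(40.0, 0.0)) # flowed back down
```

The point is just that the value at any one energy determines the values at all others: flowing back down recovers `g_low`. The real Standard Model version couples all 25 equations together.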

So, the Higgs mass we measure is actually the Higgs mass at some energy E quite low compared to the Planck energy.

And, it turns out that to get this measured value of the Higgs mass, the values of some fundamental constants measured at energies near the Planck energy need to almost cancel out. More precisely, some complicated function of them needs to almost but not quite obey some equation.

People summarize the story this way: to get the observed Higgs mass we need to “fine-tune” the fundamental constants’ values as measured near the Planck energy, if we assume the Standard Model is valid up to energies near the Planck energy.

A lot of particle physicists accept this reasoning and additionally assume that fine-tuning the values of fundamental constants as measured near the Planck energy is “bad”. They conclude that it would be “bad” for the Standard Model to be valid up to the Planck energy.

(In the previous paragraph you can replace “bad” with some other word—for example, “implausible”.)

Indeed you can use a refined version of the argument I’m sketching here to say “either the fundamental constants measured at energy E need to obey an identity up to precision ε or the Standard Model must break down before we reach energy E”, where ε gets smaller as E gets bigger.

Then, in theory, you can pick an ε and say “an ε smaller than that would make me very nervous.” Then you can conclude that “if the Standard Model is valid up to energy E, that will make me very nervous”.

(But I honestly don’t know anyone who has approximately computed ε as a function of E. Often people seem content to hand-wave.)
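To show the flavor of the hand-waving: the folklore “quadratic sensitivity” estimate says $\epsilon(E) \sim (m_h/E)^2$. Here it is in a couple of lines of Python. This is emphatically a back-of-envelope sketch with illustrative numbers, not the careful computation I'm complaining that nobody does:

```python
# Folklore estimate of the tuning precision epsilon(E): if the Standard
# Model holds up to energy E, contributions of order E^2 to the Higgs mass
# squared must nearly cancel, to roughly one part in (E/m_h)^2, i.e.
# epsilon(E) ~ (m_h/E)^2. Hand-waving, not a real calculation.
m_h = 125.0            # observed Higgs mass, GeV
E_planck = 1.22e19     # Planck energy, GeV

def epsilon(E):
    return (m_h / E) ** 2

tuning_10TeV = epsilon(1e4)        # about 1.6e-4
tuning_planck = epsilon(E_planck)  # about 1e-34
```

So on this crude account, epsilon shrinks from roughly one part in ten thousand at 10 TeV to roughly one part in $10^{34}$ near the Planck scale.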

People like to argue about how small an ε should make us nervous, or even whether any value of ε should make us nervous.

But another assumption behind this whole line of reasoning is that the values of fundamental constants as measured at some energy near the Planck energy are “more important” than their values as measured near energy zero, so we should take near-cancellations of these high-energy values seriously—more seriously, I suppose, than near-cancellations at low energies.

Most particle physicists will defend this idea quite passionately. The philosophy seems to be that God designed high-energy physics and left his grad students to work out its consequences at low energies—so if you want to understand physics, you need to focus on high energies.

Scott wrote in email:

Do I remember correctly that it’s actually the square of the Higgs mass (or its value when probed at high energy?) that’s the sum of all these positive and negative high-energy contributions?

John wrote:

Sorry to take a while. I was trying to figure out if that’s a reasonable way to think of things. It’s true that the Higgs mass squared, not the Higgs mass, is what shows up in the Standard Model Lagrangian. This is how scalar fields work.

But I wouldn’t talk about a “sum of positive and negative high-energy contributions”. I’d rather think of all the coupling constants in the Standard Model—all 25 of them—obeying a coupled differential equation that says how they change as we change the energy scale. So, we’ve got a vector field on $\mathbb{R}^{25}$ that says how these coupling constants “flow” as we change the energy scale.

Here’s an equation from a paper that looks at a simplified model:

Here $m_h$ is the Higgs mass, $m_t$ is the mass of the top quark, and both are being treated as functions of a momentum $k$ (essentially the energy scale we’ve been talking about). $v$ is just a number. You’ll note this equation simplifies if we work with the Higgs mass squared, since

$m_h dm_h = \frac{1}{2} d(m_h^2)$

This is one of a bunch of equations—in principle 25—that say how all the coupling constants change. So, they all affect each other in a complicated way as we change $k.$

By the way, there’s a lot of discussion of whether the Higgs mass squared goes negative at high energies in the Standard Model. Some calculations suggest it does; other people argue otherwise. If it does, this would generally be considered an inconsistency in the whole setup: particles with negative mass squared are tachyons!

I think one could make a lot of progress on these theoretical issues involving the Standard Model if people took them nearly as seriously as string theory or new colliders.

Scott wrote:

So OK, I was misled by the other things I read, and it’s more complicated than $m_h^2$ being a sum of mostly-canceling contributions (I was pretty sure $m_h$ couldn’t be such a sum, since then a slight change to parameters could make it negative).

Rather, there’s a coupled system of 25 nonlinear differential equations, which one could imagine initializing with the “true” high-energy SM parameters, and then solving to find the measured low-energy values. And these coupled differential equations have the property that, if we slightly changed the high-energy input parameters, that could generically have a wild effect on the low-energy outputs, pushing them up to the Planck scale or whatever.

Philosophically, I suppose this doesn’t much change things compared to the naive picture: the question is still, how OK are you with high-energy parameters that need to be precariously tuned to reproduce the low-energy SM, and does that state of affairs demand a new principle to explain it? But it does lead to a different intuition: namely, isn’t this sort of chaos just the generic behavior for nonlinear differential equations? If we fix a solution to such equations at a time $t_0,$ our solution will almost always appear “finely tuned” at a faraway time $t_1$—tuned to reproduce precisely the behavior at $t_0$ that we fixed previously! Why shouldn’t we imagine that God fixed the values of the SM constants for the low-energy world, and they are what they are at high energies simply because that’s what you get when you RG-flow to there?

I confess I’d never heard the speculation that $m_h^2$ could go negative at sufficiently high energies (!!). If I haven’t yet abused your generosity enough: what sort of energies are we talking about, compared to the Planck energy?

John wrote:

Scott wrote:

Rather, there’s a coupled system of 25 nonlinear differential equations, which one could imagine initializing with the “true” high-energy SM parameters, and then solving to find the measured low-energy values. And these coupled differential equations have the property that, if we slightly changed the high-energy input parameters, that could generically have a wild effect on the low-energy outputs, pushing them up to the Planck scale or whatever.

Right.

Philosophically, I suppose this doesn’t much change things compared to the naive picture: the question is still, how OK are you with high-energy parameters that need to be precariously tuned to reproduce the low-energy SM, and does that state of affairs demand a new principle to explain it? But it does lead to a different intuition: namely, isn’t this sort of chaos just the generic behavior for nonlinear differential equations?

Yes it is, generically.

Physicists are especially interested in theories that have “ultraviolet fixed points”—by which they usually mean values of the parameters that are fixed under the renormalization group flow and attractive as we keep increasing the energy scale. The idea is that these theories seem likely to make sense at arbitrarily high energy scales. For example, pure Yang-Mills fields are believed to be “asymptotically free”—the coupling constant measuring the strength of the force goes to zero as the energy scale gets higher.

But attractive ultraviolet fixed points are going to be repulsive as we reverse the direction of the flow and see what happens as we lower the energy scale.
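Here's a toy numerical version of that statement, using the textbook form of an asymptotically free flow, $dg/d(\ln E) = -b g^3$, with an invented coefficient. Two quite different low-energy couplings get squeezed together as we approach the ultraviolet fixed point at $g = 0$; reversing the flow, tiny differences up there become big differences down here:

```python
# Toy asymptotically free coupling: dg/d(ln E) = -b*g^3 with b > 0, so the
# fixed point g = 0 attracts as the energy grows. Invented numbers and a
# simple Euler integration; just an illustration of how an attractive
# ultraviolet fixed point is repulsive in the infrared direction.
def flow(g, b, t0, t1, steps=20000):
    dt = (t1 - t0) / steps
    for _ in range(steps):
        g += -b * g**3 * dt
    return g

b = 0.5
g1_uv = flow(1.0, b, 0.0, 50.0)   # low-energy coupling 1.0, flowed far up
g2_uv = flow(1.2, b, 0.0, 50.0)   # low-energy coupling 1.2, flowed far up

gap_ir = 0.2                       # the gap we started with at low energy
gap_uv = abs(g2_uv - g1_uv)        # the gap in the ultraviolet: far smaller
```

The low-energy gap of 0.2 shrinks by orders of magnitude in the ultraviolet, which is exactly why recovering specific low-energy values requires delicately tuned high-energy inputs.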

So what gives? Are all ultraviolet fixed points giving theories that require “fine-tuning” to get the parameters we observe at low energies? Is this bad?

Well, they’re not all the same. For theories considered nice, the parameters change logarithmically as we change the energy scale. This is considered to be a mild change. The Standard Model with Higgs may not have an ultraviolet fixed point, but people usually worry about something else: the Higgs mass changes quadratically with the energy scale. This is related to the square of the Higgs mass being the really important parameter… if we used that, I’d say linearly.

I think there’s a lot of mythology and intuitive reasoning surrounding this whole subject—probably the real experts could say a lot about it, but they are few, and a lot of people just repeat what they’ve been told, rather uncritically.

If we fix a solution to such equations at a time $t_0,$ our solution will almost always appear “finely tuned” at a faraway time $t_1$—tuned to reproduce precisely the behavior at $t_0$ that we fixed previously! Why shouldn’t we imagine that God fixed the values of the SM constants for the low-energy world, and they are what they are at high energies simply because that’s what you get when you RG-flow to there?

This is something I can imagine Sabine Hossenfelder saying.

I confess I’d never heard the speculation that $m_h^2$ could go negative at sufficiently high energies (!!). If I haven’t yet abused your generosity enough: what sort of energies are we talking about, compared to the Planck energy?

The experts are still arguing about this; I don’t really know. To show how weird all this stuff is, there’s a review article from 2013 called “The top quark and Higgs boson masses and the stability of the electroweak vacuum”, which doesn’t look crackpotty to me, that argues that the vacuum state of the universe is stable if the Higgs mass and top quark mass lie in the green region, but only metastable otherwise:

The big ellipse is where the parameters were expected to lie in 2012, when the paper was first written. The smaller ellipses indicate only the expected size of the uncertainty after later colliders made more progress. You shouldn’t take their position too seriously: they could be centered in the stable region or the metastable region.

An appendix gives an update, which looks like this:

The paper says:

one sees that the central value of the top mass lies almost exactly on the boundary between vacuum stability and metastability. The uncertainty on the top quark mass is nevertheless presently too large to clearly discriminate between these two possibilities.

Then John wrote:

By the way, another paper analyzing problems with the Standard Model says:

It has been shown that higher dimension operators may change the lifetime of the metastable vacuum, $\tau$, from

$\tau = 1.49 \times 10^{714} T_U$

to

$\tau =5.45 \times 10^{-212} T_U$

where $T_U$ is the age of the Universe.

In other words, the calculations are not very reliable yet.
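Just to spell out how wild that spread is, the two estimates differ by more than nine hundred orders of magnitude:

```python
from math import log10

# How far apart are those two lifetime estimates? Count the orders of
# magnitude between 1.49e714 and 5.45e-212 (both in units of the age
# of the Universe).
orders = (714 + log10(1.49)) - (-212 + log10(5.45))
print(round(orders))  # 925
```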

And then John wrote:

Sorry to keep spamming you, but since some of my last few comments didn’t make much sense, even to me, I did some more reading. It seems the best current conventional wisdom is this:

Assuming the Standard Model is valid up to the Planck energy, you can tune parameters near the Planck energy to get the observed parameters down here at low energies. So of course the Higgs mass down here is positive.

But, due to higher-order effects, the potential for the Higgs field no longer looks like the classic “Mexican hat” described by a polynomial of degree 4:

with the observed Higgs field sitting at one of the global minima.

Instead, it’s described by a more complicated function, like a polynomial of degree 6 or more. And this means that the minimum where the Higgs field is sitting may only be a local minimum:

In the left-hand scenario we’re at a global minimum and everything is fine. In the right-hand scenario we’re not and the vacuum we see is only metastable. The Higgs mass is still positive: that’s essentially the curvature of the potential near our local minimum. But the universe will eventually tunnel through the potential barrier and we’ll all die.

Yes, that seems to be the conventional wisdom! Obviously they’re keeping it hush-hush to prevent panic.

This paper has tons of relevant references:

• Tommi Markkanen, Arttu Rajantie, Stephen Stopyra, Cosmological aspects of Higgs vacuum metastability.

Abstract. The current central experimental values of the parameters of the Standard Model give rise to a striking conclusion: metastability of the electroweak vacuum is favoured over absolute stability. A metastable vacuum for the Higgs boson implies that it is possible, and in fact inevitable, that a vacuum decay takes place with catastrophic consequences for the Universe. The metastability of the Higgs vacuum is especially significant for cosmology, because there are many mechanisms that could have triggered the decay of the electroweak vacuum in the early Universe. We present a comprehensive review of the implications from Higgs vacuum metastability for cosmology along with a pedagogical discussion of the related theoretical topics, including renormalization group improvement, quantum field theory in curved spacetime and vacuum decay in field theory.

Scott wrote:

Once again, thank you so much! This is enlightening.

If you’d like other people to benefit from it, I’m totally up for you making it into a post on Azimuth, quoting from my emails as much or as little as you want. Or you could post it on that comment thread on my blog (which is still open), or I’d be willing to make it into a guest post (though that might need to wait till next week).

I guess my one other question is: what happens to this RG flow when you go to the infrared extreme? Is it believed, or known, that the “low-energy” values of the 25 Standard Model parameters are simply fixed points in the IR? Or could any of them run to strange values there as well?

I don’t really know the answer to that, so I’ll stop here.

But in case you’re worrying now that it’s “possible, and in fact inevitable, that a vacuum decay takes place with catastrophic consequences for the Universe”, relax! These calculations are very hard to do correctly. All existing work uses a lot of approximations that I don’t completely trust. Furthermore, they are assuming that the Standard Model is valid up to very high energies without any corrections due to new, yet-unseen particles!

So, while I think it’s a great challenge to get these calculations right, and to measure the Standard Model parameters accurately enough to do them right, I am not very worried about the Universe being taken over by a rapidly expanding bubble of ‘true vacuum’.

From Classical to Quantum and Back

30 January, 2019

Damien Calaque has invited me to speak at FGSI 2019, a conference on the Foundations of Geometric Structures of Information. It will focus on the scientific legacy of Cartan, Koszul and Souriau. Since Souriau helped invent geometric quantization, I decided to talk about this. That’s part of why I’ve been writing about it lately!

I’m looking forward to speaking to various people at this conference, including Mikhail Gromov, who has become interested in using category theory to understand biology and the brain.

Here’s my talk:

Abstract. Edward Nelson famously claimed that quantization is a mystery, not a functor. In other words, starting from the phase space of a classical system (a symplectic manifold) there is no functorial way of constructing the correct Hilbert space for the corresponding quantum system. In geometric quantization one gets around this problem by equipping the classical phase space with extra structure: for example, a Kähler manifold equipped with a suitable line bundle. Then quantization becomes a functor. But there is also a functor going the other way, sending any Hilbert space to its projectivization. This makes quantum systems into specially well-behaved classical systems! In this talk we explore the interplay between classical mechanics and quantum mechanics revealed by these functors going both ways.

For more details, read these articles:

• Part 1: the mystery of geometric quantization: how a quantum state space is a special sort of classical state space.
• Part 2: the structures besides a mere symplectic manifold that are used in geometric quantization.
• Part 3: geometric quantization as a functor with a right adjoint, ‘projectivization’, making quantum state spaces into a reflective subcategory of classical ones.
• Part 4: making geometric quantization into a monoidal functor.
• Part 5: the simplest example of geometric quantization: the spin-1/2 particle.
• Part 6: quantizing the spin-3/2 particle using the twisted cubic; coherent states via the adjunction between quantization and projectivization.
• Part 7: the Veronese embedding as a method of ‘cloning’ a classical system, and taking the symmetric tensor powers of a Hilbert space as the corresponding method of cloning a quantum system.
• Part 8: cloning a system as changing the value of Planck’s constant.

Classification Problems in Symplectic Linear Algebra

21 January, 2019

Check out the video of Jonathan Lorand’s talk, the second in the Applied Category Theory Seminar here at UCR. It was nicely edited by Paola Fernandez and uploaded by Joe Moeller.

Abstract. In this talk we will look at various examples of classification problems in symplectic linear algebra: conjugacy classes in the symplectic group and its Lie algebra, linear lagrangian relations up to conjugation, tuples of (co)isotropic subspaces. I will explain how many such problems can be encoded using the theory of symplectic poset representations, and will discuss some general results of this theory. Finally, I will recast this discussion from a broader category-theoretic perspective.

You can also look at his talk slides.

Here are some papers to read for more detail:

• Jonathan Lorand, Classifying linear canonical relations.

• Jonathan Lorand and Alan Weinstein, Decomposition of (co)isotropic relations.

Lorand is working on a paper with Weinstein and Christian Herrmann that delves deeper into these topics. I first met him at the ACT2018 school in Leiden, where we worked along with Blake Pollard, Fabrizio Genovese and Maru Sarazola on biochemical coupling through emergent conservation laws. Right now he’s visiting UCR and working with me to dig deeper into these questions using symplectic geometry! This is a very tantalizing project that keeps on not quite working…

Geometric Quantization (Part 8)

21 January, 2019

Puzzle. You measure the energy and frequency of some laser light trapped in a mirrored box and use quantum mechanics to compute the expected number of photons in the box. Then someone tells you that you used the wrong value of Planck’s constant in your calculation. Somehow you used a value that was twice the correct value! How should you correct your calculation of the expected number of photons?

I’ll give away the answer to the puzzle below, so avert your eyes if you want to think about it more.

This scenario sounds a bit odd—it’s not very likely that your table of fundamental constants would get Planck’s constant wrong this way. But it’s interesting because once upon a time we didn’t know about quantum mechanics and we didn’t know Planck’s constant. We could still give a reasonably good description of some laser light trapped in a mirrored box: there’s a standing wave solution of Maxwell’s equations that does the job. But when we learned about quantum mechanics we learned to describe this situation using photons. The number of photons we need depends on Planck’s constant.

And while we can’t really change Planck’s constant, mathematical physicists often like to treat Planck’s constant as variable. The limit where Planck’s constant goes to zero is called the ‘classical limit’, where our quantum description should somehow reduce to our old classical description.

Here’s the answer to the puzzle: if you halve Planck’s constant you need to double the number of photons. The reason is that the energy of a photon with frequency $\nu$ is $h \nu.$ So, to get a specific total energy $E$ in our box of light of a given frequency, we need $E/(h\nu)$ photons.
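Here is the puzzle in numbers, with rough illustrative values for a millijoule of red laser light:

```python
# The puzzle in numbers: expected photon count N = E/(h*nu). Using twice
# the correct Planck constant would have given half the true photon count,
# so on learning the mistake you must double your answer.
h = 6.626e-34      # Planck's constant, in joule-seconds
nu = 4.7e14        # frequency of red laser light, in hertz
E = 1.0e-3         # total light energy in the box, in joules

N_correct = E / (h * nu)
N_with_doubled_h = E / ((2 * h) * nu)
print(N_correct / N_with_doubled_h)   # 2.0
```

For these values the box holds a few times $10^{15}$ photons, which is why the classical wave description works so well.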

So, the classical limit is also the limit where the expected number of photons goes to infinity! As the ‘packets of energy’ get smaller, we need more of them to get a certain amount of energy.

This has a nice relationship to what I’d been doing with geometric quantization last time.

I explained how we could systematically replace any classical system we’re considering by a ‘cloned’ version of that system: a collection of identical copies constrained to all lie in the same state. The set of allowed states is the same, but the symplectic structure is multiplied by a constant factor: the number of copies. We can see this as follows: if the phase space of our system is an abstract symplectic manifold $M$, its kth power $M^k$ is again a symplectic manifold. We can look at the image of the diagonal embedding

$\begin{array}{rcl} \Delta_k \colon M &\to & M^k \\ x & \mapsto & (x,x,\dots, x) \end{array}$

The image is a symplectic submanifold of $M^k,$ and it’s diffeomorphic to $M,$ but $\Delta_k$ is not a symplectomorphism from $M$ to its image. The image has a symplectic structure that’s k times bigger!
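In fact this is a one-line computation. Writing $\pi_i \colon M^k \to M$ for the projections, the symplectic structure on $M^k$ is $\sum_{i=1}^k \pi_i^* \omega,$ and each composite $\pi_i \circ \Delta_k$ is the identity on $M,$ so

$\Delta_k^* \left( \sum_{i=1}^k \pi_i^* \omega \right) = \sum_{i=1}^k (\pi_i \circ \Delta_k)^* \omega = k \omega$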

What does this mean for physics?

If we act like physicists instead of mathematicians for a minute and keep track of units, we’ll notice that the naive symplectic structure in classical mechanics has units of action: think $\omega = dp \wedge dq$. Planck’s constant also has units of action, so to get a dimensionless version of the symplectic structure, we should use $\omega/h.$ Then it’s clear that multiplying the symplectic structure by k is equivalent to dividing Planck’s constant by k!

So: cloning a system, multiplying the number of copies by k, should be related to dividing Planck’s constant by k. And the limit $k \to \infty$ should be related to a ‘classical limit’.

Of course this would not convince a mathematician so far, since I’m using a strange mix of ideas from classical mechanics and quantum mechanics! But our approach to geometric quantization makes everything precise. We have a category $\texttt{Class}$ of classical systems, a category $\texttt{Quant}$ of quantum systems, a quantization functor

$Q \colon \texttt{Class} \to \texttt{Quant}$

and a functor going back:

$P \colon \texttt{Quant} \to \texttt{Class}$

which reveals that quantum systems are special classical systems. And last time we saw that there are also ways to ‘clone’ classical and quantum systems.

Our classical systems are more than mere symplectic manifolds: they are projectively normal subvarieties of $\mathbb{C}\mathrm{P}^n$ for arbitrary n. So, we clone a classical system by a trick that looks more fancy than the diagonal embedding I mentioned above: we instead use the kth Veronese embedding, which defines a functor

$\nu_k \colon \texttt{Class} \to \texttt{Class}$

But this cloning process has the same effect on the underlying symplectic manifold: it multiplies the symplectic structure by k.

Similarly, we clone a quantum system by replacing its set of states $\mathrm{P}(\mathbb{C}^n)$ (the projectivization of the Hilbert space $\mathbb{C}^n$) by $\mathrm{P}(S^k(\mathbb{C}^n)),$ where $S^k$ is the kth symmetric power. This gives a functor

$S^k \colon \texttt{Quant} \to \texttt{Quant}$

and the ‘obvious squares commute’:

$Q \circ \nu_k = S^k \circ Q$
$P \circ S^k = \nu_k \circ P$

All I’m doing now is giving this math a new physical interpretation: the k-fold cloning process is the same as dividing Planck’s constant by k!

If this seems a bit elusive, we can look at an example like the spin-j particle. In Part 6 we saw that if we clone the state space for the spin-1/2 particle we get the state space for the spin-j particle, where $j = k/2.$ But angular momentum has units of Planck’s constant, and the quantity j is really an angular momentum divided by $\hbar.$ So this process of replacing 1/2 by k/2 can be reinterpreted as the process of dividing Planck’s constant by k: if Planck’s constant is smaller, we need j to be bigger to get a given angular momentum! And what we thought was a single spin-1/2 particle in the state $\psi$ becomes k spin-1/2 particles in the ‘cloned’ state $\psi \otimes \psi \otimes \cdots \otimes \psi.$ As explained in Part 6, we can reinterpret this as a state of a single spin-k/2 particle.
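If you want to check the bookkeeping: $\dim S^k(\mathbb{C}^n) = \binom{n+k-1}{k},$ so for $n = 2$ the k-fold clone of the spin-1/2 state space is (k+1)-dimensional, exactly the dimension of the spin-k/2 Hilbert space:

```python
from math import comb

# Dimension of the k-th symmetric power S^k(C^n) is the binomial
# coefficient C(n+k-1, k). For n = 2 this is k+1: the Hilbert space
# dimension of a spin-k/2 particle, as claimed above.
def sym_power_dim(n, k):
    return comb(n + k - 1, k)

print([sym_power_dim(2, k) for k in (1, 2, 3)])  # [2, 3, 4]
```

So cloning once, twice, and three times gives the spin-1/2, spin-1 and spin-3/2 state spaces.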

Finally, let me point out something curious. We have a systematic way of changing our description of a quantum system when we divide Planck’s constant by an integer. But we can’t do it when we divide Planck’s constant by any other sort of number! So, in a very real sense, Planck’s constant is quantized.


Geometric Quantization (Part 7)

8 January, 2019

I’ve been falling in love with algebraic geometry these days, as I realize how many of its basic concepts and theorems have nice interpretations in terms of geometric quantization. I had trouble getting excited about them before. I’m talking about things like the Segre embedding, the Veronese embedding, the Kodaira embedding theorem, Chow’s theorem, projective normality, ample line bundles, and so on. In the old days, all these things used to make me nod and go “that’s nice”, without great enthusiasm. Now I see what they’re all good for!

Of course this is my own idiosyncratic take on the subject: obviously algebraic geometers have their own perfectly fine notion of what these things are good for. But I never got the hang of that.

Today I want to talk about how the Veronese embedding can be used to ‘clone’ a classical system. For any natural number k, you can take a classical system and build a new one; a state of this new system is k copies of the original system constrained to all be in the same state! This may not seem to do much, but it does something: for example, it multiplies the Kähler structure on the classical state space by k. And it has a quantum analogue, which has a much more notable effect!

Last time I looked at an example, where I built the spin-3/2 particle by cloning the spin-1/2 particle.

In brief, it went like this. The space of classical states of the spin-1/2 particle is the Riemann sphere, $\mathbb{C}\mathrm{P}^1.$ This just happens to also be the space of quantum states of the spin-1/2 particle, since it’s the projectivization of $\mathbb{C}^2.$ To get the spin-3/2 particle we look at the map

$\text{cubing} \colon \mathbb{C}^2 \to S^3(\mathbb{C}^2)$

You can think of this as the map that ‘triplicates’ a spin-1/2 particle, creating 3 of them in the same state. This gives rise to a map between the corresponding projective spaces, which we should probably call

$P(\text{cubing}) \colon P(\mathbb{C}^2) \to P(S^3(\mathbb{C}^2))$

It’s an embedding.

Algebraic geometers call the image of this embedding the twisted cubic, since it’s a curve in 3d projective space described by homogeneous cubic equations. But for us, it’s the embedding of the space of classical states of the spin-3/2 particle into the space of quantum states. (The fact that classical states give specially nice quantum states is familiar in physics, where these specially nice quantum states are called ‘coherent states’, or sometimes ‘generalized coherent states’.)
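To make the cubing map concrete, here is a small sympy sketch (my own illustration, assuming sympy is available): cubing the linear form $ax + by$ produces coefficients weighted by binomial coefficients, and the underlying point, with homogeneous coordinates $w_i = a^{3-i}b^i,$ satisfies the quadratic equations that cut out the twisted cubic.

```python
from sympy import symbols, Poly, expand

a, b, x, y = symbols('a b x y')

# Cube the linear form a*x + b*y: the 'triplicated' spin-1/2 state
cubed = expand((a*x + b*y)**3)
coeffs = [Poly(cubed, x, y).coeff_monomial(x**(3-i) * y**i) for i in range(4)]
print(coeffs)  # [a**3, 3*a**2*b, 3*a*b**2, b**3]

# Up to those binomial weights, the image point has homogeneous coordinates
# w_i = a^(3-i) * b^i, and these satisfy the twisted cubic's defining equations:
w = [a**(3 - i) * b**i for i in range(4)]
assert expand(w[0]*w[2] - w[1]**2) == 0
assert expand(w[1]*w[3] - w[2]**2) == 0
assert expand(w[0]*w[3] - w[1]*w[2]) == 0
```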

Now, you’ll have noted that the numbers 2 and 3 show up a bunch in what I just said. But there’s nothing special about these numbers! They could be arbitrary natural numbers… well, > 1 if we don’t enjoy thinking about degenerate cases.

Here’s how the generalization works. Let’s think of guys in $\mathbb{C}^n$ as linear functions on the dual of this space. We can raise any one of them to the kth power and get a homogeneous polynomial of degree k. The space of such polynomials is called $S^k(\mathbb{C}^n),$ so raising to the kth power defines a map

$\mathbb{C}^n \to S^k(\mathbb{C}^n)$

This in turn gives rise to a map between the corresponding projective spaces:

$P(\mathbb{C}^n) \to P(S^k(\mathbb{C}^n))$

This map is an embedding, since different linear functions give different polynomials when you raise them to the kth power, at least if $k \ge 1.$ And this map is famous: it’s called the kth Veronese embedding. I guess it’s often denoted

$v_k \colon P(\mathbb{C}^n) \to P(S^k(\mathbb{C}^n))$

An important special case occurs when we take $n = 2,$ as we’d been doing before. The space of homogeneous polynomials of degree k in two variables has dimension $k + 1,$ so we can think of the Veronese embedding as a map

$v_k \colon \mathbb{C}\mathrm{P}^1 \to \mathbb{C}\mathrm{P}^k$

embedding the projective line as a curve in $\mathbb{C}\mathrm{P}^k.$ This sort of curve is called a rational normal curve. When $k = 3$ it’s our friend from last time, the twisted cubic.

In general, we can think of $\mathbb{C}\mathrm{P}^k$ as the space of quantum states of the spin-k/2 particle, since we got it from projectivizing the spin-k/2 representation of $\mathrm{SU}(2),$ namely $S^k(\mathbb{C}^2).$ Sitting inside here, the rational normal curve is the space of classical states of the spin-k/2 particle—or in other words, ‘coherent states’.

Maybe I should expand on this, since it flew by so fast! Pick any direction you want the angular momentum of your spin-k/2 particle to point. Think of this as a point on the Riemann sphere and think of that as coming from some vector $\psi \in \mathbb{C}^2.$ That describes a quantum spin-1/2 particle whose angular momentum points in the desired direction. But now, form the tensor product

$\underbrace{\psi \otimes \cdots \otimes \psi}_{k}$

This is completely symmetric under permuting the factors, so we can think of it as a vector in $S^k(\mathbb{C}^2).$ And indeed, it’s just what I was calling

$v_k (\psi) \in S^k(\mathbb{C}^2)$

This vector describes a collection of k indistinguishable quantum spin-1/2 particles with angular momenta all pointing in the same direction. But it also describes a single quantum spin-k/2 particle whose angular momentum points in that direction! Not all vectors in $S^k(\mathbb{C}^2)$ are of this form, clearly. But those that are, are called ‘coherent states’.
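Here is a quick numerical check of all this, using numpy (my own sketch, not from the original text): the 3-fold tensor power of a spin-1/2 state is symmetric under permuting the factors, and its components collect into the binomially weighted powers $\binom{3}{i} a^{3-i} b^i$ that we met in the cubing map.

```python
import numpy as np
from math import comb

# A sample spin-1/2 state psi: any nonzero vector in C^2 works
psi = np.array([0.6, 0.8j])

# The 3-fold tensor product psi (x) psi (x) psi, a vector in (C^2)^{(x)3}
tensor = np.kron(np.kron(psi, psi), psi)

# It is completely symmetric: swapping any two tensor factors changes nothing.
T = tensor.reshape(2, 2, 2)
assert np.allclose(T, T.transpose(1, 0, 2))
assert np.allclose(T, T.transpose(0, 2, 1))

# Its components in the symmetric basis are binomially weighted powers of a, b:
a, b = psi
for i in range(4):
    # sum of all components having i tensor factors in the 'down' state
    total = sum(T[idx] for idx in np.ndindex(2, 2, 2) if sum(idx) == i)
    assert np.isclose(total, comb(3, i) * a**(3 - i) * b**i)
```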

Now, let’s do this all a bit more generally. We’ll work with $\mathbb{C}^n,$ not just $\mathbb{C}^2.$ And we’ll use a variety $M \subseteq \mathbb{C}\mathrm{P}^{n-1}$ as our space of classical states, not necessarily all of $\mathbb{C}\mathrm{P}^{n-1}.$

Remember, we’ve got:

• a category $\texttt{Class}$ where the objects are linearly normal subvarieties $M \subseteq \mathbb{C}\mathrm{P}^{n-1}$ for arbitrary $n,$

and

• a category $\texttt{Quant}$ where the objects are linear subspaces $V \subseteq \mathbb{C}^n$ for arbitrary $n.$

The morphisms in each case are just inclusions. We’ve got a ‘quantization’ functor

$\texttt{Q} \colon \texttt{Class} \to \texttt{Quant}$

that maps $M \subseteq \mathbb{C}\mathrm{P}^{n-1}$ to the smallest $V \subseteq \mathbb{C}^n$ whose projectivization contains $M.$ And we’ve got what you might call a ‘classicization’ functor going back:

$\texttt{P} \colon \texttt{Quant} \to \texttt{Class}$

We actually call this ‘projectivization’, since it sends any linear subspace $V \subseteq \mathbb{C}^n$ to its projectivization, a projective subspace sitting inside $\mathbb{C}\mathrm{P}^{n-1}$.

We would now like to get the Veronese embedding into the game, copying what we just did for the spin-k/2 particle. We’d like each Veronese embedding $v_k$ to define a functor from $\texttt{Class}$ to $\texttt{Class}$ and also a functor from $\texttt{Quant}$ to $\texttt{Quant}.$ For example, the first of these should send the space of classical states of the spin-1/2 particle to the space of classical states of the spin-k/2 particle. The second should do the same for the space of quantum states.

The quantum version works just fine. Here’s how it goes. An object in $\texttt{Quant}$ is a linear subspace

$V \subseteq \mathbb{C}^n$

for some $n.$ Our functor should send this to

$S^k(V) \subseteq S^k(\mathbb{C}^n) \cong \mathbb{C}^{\left(\!\!{n\choose k}\!\!\right)}$

Here $\left(\!{n\choose k}\!\right)$, pronounced ‘n multichoose k’, is the number of ways to choose k not-necessarily-distinct items from a set of n, since this is the dimension of the space of degree-k homogeneous polynomials on $\mathbb{C}^n.$ (We have to pick some sort of ordering on monomials to get the isomorphism above; this is one of the clunky aspects of our current framework, which I plan to fix someday.)
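If you want to see the multichoose count concretely, here is a small Python sketch (my own illustration): enumerating the degree-k monomials in n variables with `itertools.combinations_with_replacement` and comparing against the closed form $\binom{n+k-1}{k}.$

```python
from itertools import combinations_with_replacement
from math import comb

def veronese_dim(n, k):
    """Count degree-k monomials in n variables: 'n multichoose k'."""
    # each multiset of k variable indices corresponds to one monomial
    return sum(1 for _ in combinations_with_replacement(range(n), k))

# Matches the closed form C(n+k-1, k):
for n in range(1, 6):
    for k in range(0, 5):
        assert veronese_dim(n, k) == comb(n + k - 1, k)

print(veronese_dim(2, 3))  # 4: the dimension of S^3(C^2), ambient space of the twisted cubic
```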

This process indeed defines a functor, and the only reasonable name for it is

$S^k \colon \texttt{Quant} \to \texttt{Quant}$

Intuitively, it takes the state space of any quantum system and produces the state space for k indistinguishable copies of that system. (If you’re a physicist, muttering the phrase ‘identical bosons’ may clarify things. There is also a fermionic version where we use exterior powers instead of symmetric powers, but let’s not go there now.)

The classical version of this functor suffers from a small glitch, which however is easy to fix. An object in $\texttt{Class}$ is a linearly normal subvariety

$M \subseteq \mathbb{C}\mathrm{P}^{n-1}$

for some $n.$ Applying the kth Veronese embedding we get a subvariety

$v_k(M) \subseteq \mathbb{C}\mathrm{P}^{\left(\!\!{n\choose k}\!\!\right)-1}$

However, I don’t think this is linearly normal, in general. I think it’s linearly normal iff $M$ is k-normal. You can take this as a definition of k-normality, if you like, though there are other equivalent ways to say it.

Luckily, a projectively normal subvariety of projective space is k-normal for all $k \ge 1.$ And even better, projectively normal varieties are fairly common! In particular, any projective space is a projectively normal subvariety of itself.

So, we can redefine the category $\texttt{Class}$ by letting objects be projectively normal subvarieties $M \subseteq \mathbb{C}\mathrm{P}^{n-1}$ for arbitrary $n \ge 1.$ I’m using the same notation for this new category, which is ordinarily a very dangerous thing to do, because all our results about the original version are still true for this one! In particular, we still have adjoint functors

$\texttt{Q} \colon \texttt{Class} \to \texttt{Quant}, \qquad \texttt{P} \colon \texttt{Quant} \to \texttt{Class}$

defined exactly as before. But now the kth Veronese embedding gives a functor

$v_k \colon \texttt{Class} \to \texttt{Class}$

Intuitively, this takes the state space of any classical system and produces the state space for k indistinguishable copies of that system, all in the same state. It has no effect on the classical state space $M$ as an abstract variety, just its embedding into projective space—which in turn affects its Kähler structure and the line bundle it inherits from projective space. In particular, its symplectic structure gets multiplied by k, and the line bundle over it gets replaced by its kth tensor power. (These are well-known facts about the Veronese embedding.)
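For the rational normal curve there is a quick numerical way to see the factor of k (my own sketch, with coordinates on $S^k(\mathbb{C}^2)$ normalized by square roots of binomial coefficients, so the kth-power map preserves the standard inner products): the pulled-back Fubini–Study Kähler potential $\log |v_k(\psi)|^2$ is exactly k times the potential $\log(1 + |z|^2)$ of the original sphere, so the symplectic form gets multiplied by k.

```python
from math import comb, isclose

def norm_sq_of_clone(z, k):
    # |v_k(psi)|^2 for psi = (1, z), in an orthonormal basis of S^k(C^2)
    # whose basis vectors carry sqrt(C(k, i)) normalizations
    return sum(comb(k, i) * abs(z)**(2 * i) for i in range(k + 1))

# The Fubini-Study Kähler potential is log|v|^2, and by the binomial theorem
# the clone's norm-squared is (1 + |z|^2)^k -- so its potential, and hence its
# symplectic form, is k times the original's:
for z in [0.3, 1.2 + 0.7j, -2.0j]:
    for k in [1, 2, 5]:
        assert isclose(norm_sq_of_clone(z, k), (1 + abs(z)**2)**k)
```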

I believe that this functor obeys

$\texttt{Q} \circ v_k = S^k \circ \texttt{Q}$

and it’s just a matter of unraveling the definitions to see that

$\texttt{P} \circ S^k = v_k \circ \texttt{P}$

So, very loosely, the functors

$v_k \colon \texttt{Class} \to \texttt{Class}, \qquad S^k \colon \texttt{Quant} \to \texttt{Quant}$

should be thought of as replacing a classical or quantum system by a new ‘cloned’ version of that system. And they get along perfectly with quantization and its adjoint, projectivization!

Part 1: the mystery of geometric quantization: how a quantum state space is a special sort of classical state space.

Part 2: the structures besides a mere symplectic manifold that are used in geometric quantization.

Part 3: geometric quantization as a functor with a right adjoint, ‘projectivization’, making quantum state spaces into a reflective subcategory of classical ones.

Part 4: making geometric quantization into a monoidal functor.

Part 5: the simplest example of geometric quantization: the spin-1/2 particle.

Part 6: quantizing the spin-3/2 particle using the twisted cubic; coherent states via the adjunction between quantization and projectivization.

Part 7: the Veronese embedding as a method of ‘cloning’ a classical system, and the symmetric tensor powers of a Hilbert space as the corresponding way to clone a quantum system.

Part 8: cloning a system as changing the value of Planck’s constant.

Unsolved Mysteries of Fundamental Physics

2 January, 2019

In this century, progress in fundamental physics has been slow. The Large Hadron Collider hasn’t yet found any surprises, attempts to directly detect dark matter have been unsuccessful, string theory hasn’t made any successful predictions, and nobody really knows what to do about any of this. But there is no shortage of problems, and clues. Watch the talk I gave at the Cambridge University Physics Society for some ideas on this! Warning: this is for ordinary folks, not experts.

There are some squeaky sounds on the video at first, but they seem to go away pretty quickly, so hang in there! You can also see my talk slides here, and click on the links for extra information.