Here is a conversation I had with Scott Aaronson. It started on his blog, in a discussion about ‘fine-tuning’. Some say the Standard Model of particle physics can’t be the whole story, because in this theory you need to fine-tune the fundamental constants to keep the Higgs mass from becoming huge. Others say this argument is invalid.
I tried to push the conversation toward the calculations that actually underlie this argument. Then our conversation drifted into email and got more technical… and perhaps also more interesting, because it led us to contemplate the stability of the vacuum!
You see, if we screwed up royally on our fine-tuning and came up with a theory where the square of the Higgs mass was negative, the vacuum would be unstable. It would instantly decay into a vast explosion of Higgs bosons.
Another possibility, also weird, turns out to be slightly more plausible. This is that the Higgs mass is positive—as it clearly is—and yet the vacuum is ‘metastable’. In this scenario, the vacuum we see around us might last a long time, and yet eventually it could decay through quantum tunnelling to the ‘true’ vacuum, with a lower energy density:
Little bubbles of true vacuum would form, randomly, and then grow very rapidly. This would be the end of life as we know it.
Scott agreed that other people might like to see our conversation. So here it is. I’ll fix a few mistakes, to make me seem smarter than I actually am.
I’ll start with some stuff on his blog.
Scott wrote, in part:
If I said, “supersymmetry basically has to be there because it’s such a beautiful symmetry,” that would be an argument from beauty. But I didn’t say that, and I disagree with anyone who does say it. I made something weaker, what you might call an argument from the explanatory coherence of the world. It merely says that, when we find basic parameters of nature to be precariously balanced against each other, to one part in $10^{10}$ or whatever, there’s almost certainly some explanation. It doesn’t say the explanation will be beautiful, it doesn’t say it will be discoverable by an FCC or any other collider, and it doesn’t say it will have a form (like SUSY) that anyone has thought of yet.
John wrote:
Scott wrote:
It merely says that, when we find basic parameters of nature to be precariously balanced against each other, to one part in $10^{10}$ or whatever, there’s almost certainly some explanation.
Do you know examples of this sort of situation in particle physics, or is this just a hypothetical situation?
Scott wrote:
To answer a question with a question, do you disagree that that’s the current situation with (for example) the Higgs mass, not to mention the vacuum energy, if one considers everything that could naïvely contribute? A lot of people told me it was, but maybe they lied or I misunderstood them.
John wrote:
The basic rough story is this. We measure the Higgs mass. We can assume that the Standard Model is good up to some energy near the Planck energy, after which it fizzles out for some unspecified reason.
According to the Standard Model, each of its 25 fundamental constants is a “running coupling constant”. That is, it’s not really a constant, but a function of energy: roughly, the energy of the process we use to measure that constant. Let’s call these “coupling constants measured at energy E”. Each of these 25 functions is determined by the values of all 25 functions at any fixed energy E – e.g. energy zero, or the Planck energy. This is called the “renormalization group flow”.
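To make “running” concrete, here is the textbook example (just an illustration, not something specific to the Higgs): in quantum electrodynamics with a single charged fermion, the fine structure constant measured at an energy $E$ above that fermion’s mass runs, at one loop, as

$$\alpha(E) \;=\; \frac{\alpha(E_0)}{1 \;-\; \dfrac{2\,\alpha(E_0)}{3\pi}\,\ln\!\left(\dfrac{E}{E_0}\right)}$$

so its value at one energy $E_0$ determines its value at any other energy $E$. The Standard Model works the same way, except with 25 coupled quantities instead of one.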
So, the Higgs mass we measure is actually the Higgs mass at some energy E quite low compared to the Planck energy.
And, it turns out that to get this measured value of the Higgs mass, the values of some fundamental constants measured at energies near the Planck mass need to almost cancel out. More precisely, some complicated function of them needs to almost but not quite obey some equation.
People summarize the story this way: to get the observed Higgs mass we need to “fine-tune” the fundamental constants’ values as measured near the Planck energy, if we assume the Standard Model is valid up to energies near the Planck energy.
A lot of particle physicists accept this reasoning and additionally assume that fine-tuning the values of fundamental constants as measured near the Planck energy is “bad”. They conclude that it would be “bad” for the Standard Model to be valid up to the Planck energy.
(In the previous paragraph you can replace “bad” with some other word—for example, “implausible”.)
Indeed you can use a refined version of the argument I’m sketching here to say “either the fundamental constants measured at energy E need to obey an identity up to precision ε or the Standard Model must break down before we reach energy E”, where ε gets smaller as E gets bigger.
Then, in theory, you can pick an ε and say “an ε smaller than that would make me very nervous.” Then you can conclude that “if the Standard Model is valid up to energy E, that will make me very nervous”.
(But I honestly don’t know anyone who has approximately computed ε as a function of E. Often people seem content to hand-wave.)
People like to argue about how small an ε should make us nervous, or even whether any value of ε should make us nervous.
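For a sense of the numbers involved, the estimate people usually quote is pure dimensional analysis, not a careful computation: the required precision scales like the ratio of the Higgs mass squared to the energy squared,

$$\varepsilon(E) \;\sim\; \frac{m_H^2}{E^2}, \qquad \varepsilon(E_{\mathrm{Planck}}) \;\sim\; \left(\frac{125\ \mathrm{GeV}}{1.2\times 10^{19}\ \mathrm{GeV}}\right)^{\!2} \;\approx\; 10^{-34}.$$

Take this only as an order-of-magnitude cartoon of the usual “hierarchy problem” folklore.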
But another assumption behind this whole line of reasoning is that the values of fundamental constants as measured at some energy near the Planck energy are “more important” than their values as measured near energy zero, so we should take near-cancellations of these high-energy values seriously—more seriously, I suppose, than near-cancellations at low energies.
Most particle physicists will defend this idea quite passionately. The philosophy seems to be that God designed high-energy physics and left his grad students to work out its consequences at low energies—so if you want to understand physics, you need to focus on high energies.
Scott wrote in email:
Do I remember correctly that it’s actually the square of the Higgs mass (or its value when probed at high energy?) that’s the sum of all these positive and negative high-energy contributions?
John wrote:
Sorry to take a while. I was trying to figure out if that’s a reasonable way to think of things. It’s true that the Higgs mass squared, not the Higgs mass, is what shows up in the Standard Model Lagrangian. This is how scalar fields work.
But I wouldn’t talk about a “sum of positive and negative high-energy contributions”. I’d rather think of all the coupling constants in the Standard Model—all 25 of them—as obeying a coupled differential equation that says how they change as we change the energy scale. So, we’ve got a vector field on $\mathbb{R}^{25}$ that says how these coupling constants “flow” as we change the energy scale.
Here’s an equation from a paper that looks at a simplified model. Schematically, it’s a renormalization group equation of the form

$$p\,\frac{d m_H}{d p} \;=\; \frac{c\,f(m_H,\,m_t)}{m_H}.$$

Here $m_H$ is the Higgs mass, $m_t$ is the mass of the top quark, and both are being treated as functions of a momentum $p$ (essentially the energy scale we’ve been talking about); $c$ is just a number. You’ll note this sort of equation simplifies if we work with the Higgs mass squared, since

$$m_H\,\frac{d m_H}{d p} \;=\; \frac{1}{2}\,\frac{d\,(m_H^2)}{d p}.$$

This is one of a bunch of equations—in principle 25—that say how all the coupling constants change. So, they all affect each other in a complicated way as we change $p$.
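If you want to see what such a coupled flow looks like in practice, here’s a little numerical sketch (my own toy, not from the paper). It keeps only three of the couplings: the Higgs quartic self-coupling, the top Yukawa coupling, and the strong coupling, with their standard one-loop terms (electroweak gauge couplings and everything else dropped), and integrates upward in energy. The starting values are rough weak-scale numbers, so treat the output as a cartoon of the coupled flow, not a serious calculation.

```python
# Toy one-loop renormalization group flow for three Standard Model couplings:
# the Higgs quartic self-coupling lam, the top Yukawa y, and the strong coupling g3.
# Electroweak gauge couplings and all other effects are dropped -- purely illustrative.
import numpy as np
from scipy.integrate import solve_ivp

def beta(t, c):
    """t = ln(mu / mu0); c = [lam, y, g3]."""
    lam, y, g3 = c
    k = 16 * np.pi**2
    dlam = (24*lam**2 + 12*lam*y**2 - 6*y**4) / k
    dy   = y * (4.5*y**2 - 8*g3**2) / k
    dg3  = -7 * g3**3 / k
    return [dlam, dy, dg3]

mu0 = 173.0                      # start near the top quark mass, in GeV
c0  = [0.13, 0.94, 1.16]         # rough values of lam, y, g3 at that scale
t_max = np.log(1e19 / mu0)       # run up toward the Planck scale

sol = solve_ivp(beta, [0.0, t_max], c0, dense_output=True, rtol=1e-8)

for mu in [1e3, 1e6, 1e10, 1e15, 1e19]:
    lam, y, g3 = sol.sol(np.log(mu / mu0))
    print(f"mu = {mu:8.1e} GeV   lambda = {lam:+.4f}   y = {y:.4f}   g3 = {g3:.4f}")
```

Even in this stripped-down version you can watch the quartic coupling drift downward and eventually turn negative as the energy grows—sooner than in the full calculation, since so much has been dropped—and that drift is related to the metastability story that comes up later in this conversation. Running the flow in the opposite direction also shows how sensitively the low-energy values depend on the high-energy ones.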
By the way, there’s a lot of discussion of whether the Higgs mass squared goes negative at high energies in the Standard Model. Some calculations suggest it does; other people argue otherwise. If it does, this would generally be considered an inconsistency in the whole setup: particles with negative mass squared are tachyons!
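Here “tachyon” is meant in the usual quantum field theory sense. For a single scalar field, schematically,

$$V(\phi) \;=\; \tfrac{1}{2}\,m^2\,\phi^2 \;+\; \tfrac{1}{4}\,\lambda\,\phi^4,$$

and a negative $m^2$ just means the potential curves downward at $\phi = 0$: that point is a local maximum rather than a minimum, so small fluctuations grow instead of oscillating. The theory is telling us we expanded around an unstable configuration, not that anything is moving faster than light.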
I think one could make a lot of progress on these theoretical issues involving the Standard Model if people took them nearly as seriously as string theory or new colliders.
Scott wrote:
So OK, I was misled by the other things I read, and it’s more complicated than $m_H^2$ being a sum of mostly-canceling contributions (I was pretty sure $m_H$ couldn’t be such a sum, since then a slight change to parameters could make it negative).

Rather, there’s a coupled system of 25 nonlinear differential equations, which one could imagine initializing with the “true” high-energy SM parameters, and then solving to find the measured low-energy values. And these coupled differential equations have the property that, if we slightly changed the high-energy input parameters, that could generically have a wild effect on the low-energy outputs, pushing them up to the Planck scale or whatever.
Philosophically, I suppose this doesn’t much change things compared to the naive picture: the question is still, how OK are you with high-energy parameters that need to be precariously tuned to reproduce the low-energy SM, and does that state of affairs demand a new principle to explain it? But it does lead to a different intuition: namely, isn’t this sort of chaos just the generic behavior for nonlinear differential equations? If we fix a solution to such equations at a time $t_0$, our solution will almost always appear “finely tuned” at a faraway time $t_1$—tuned to reproduce precisely the behavior at $t_0$ that we fixed previously! Why shouldn’t we imagine that God fixed the values of the SM constants for the low-energy world, and they are what they are at high energies simply because that’s what you get when you RG-flow to there?
I confess I’d never heard the speculation that $m_H^2$ could go negative at sufficiently high energies (!!). If I haven’t yet abused your generosity enough: what sort of energies are we talking about, compared to the Planck energy?
John wrote:
Scott wrote:
Rather, there’s a coupled system of 25 nonlinear differential equations, which one could imagine initializing with the “true” high-energy SM parameters, and then solving to find the measured low-energy values. And these coupled differential equations have the property that, if we slightly changed the high-energy input parameters, that could generically have a wild effect on the low-energy outputs, pushing them up to the Planck scale or whatever.
Right.
Philosophically, I suppose this doesn’t much change things compared to the naive picture: the question is still, how OK are you with high-energy parameters that need to be precariously tuned to reproduce the low-energy SM, and does that state of affairs demand a new principle to explain it? But it does lead to a different intuition: namely, isn’t this sort of chaos just the generic behavior for nonlinear differential equations?
Yes it is, generically.
Physicists are especially interested in theories that have “ultraviolet fixed points”—by which they usually mean values of the parameters that are fixed under the renormalization group flow and attractive as we keep increasing the energy scale. The idea is that these theories seem likely to make sense at arbitrarily high energy scales. For example, pure Yang-Mills fields are believed to be “asymptotically free”—the coupling constant measuring the strength of the force goes to zero as the energy scale gets higher.
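For concreteness, here is the standard one-loop statement (nothing original, just the textbook formula): in pure $SU(N)$ Yang-Mills theory the coupling obeys

$$\mu\,\frac{d g}{d \mu} \;=\; -\,\frac{b_0}{16\pi^2}\,g^3, \qquad b_0 = \frac{11}{3}N,$$

whose solution

$$g^2(\mu) \;=\; \frac{g^2(\mu_0)}{1 + \dfrac{b_0\,g^2(\mu_0)}{8\pi^2}\,\ln(\mu/\mu_0)}$$

goes to zero, slowly and logarithmically, as the energy scale $\mu$ grows.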
But attractive ultraviolet fixed points are going to be repulsive as we reverse the direction of the flow and see what happens as we lower the energy scale.
So what gives? Are all ultraviolet fixed points giving theories that require “fine-tuning” to get the parameters we observe at low energies? Is this bad?
Well, they’re not all the same. For theories considered nice, the parameters change logarithmically as we change the energy scale. This is considered to be a mild change. The Standard Model with Higgs may not have an ultraviolet fixed point, but people usually worry about something else: the Higgs mass changes quadratically with the energy scale. This is related to the square of the Higgs mass being the really important parameter… if we used that, I’d say linearly.
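Schematically, the contrast people have in mind is between logarithmic and power-law sensitivity to a high scale $E$:

$$\delta g \;\sim\; \frac{\#}{16\pi^2}\,g^3\,\ln\frac{E}{\mu} \qquad\text{versus}\qquad \delta m_H^2 \;\sim\; \frac{\#}{16\pi^2}\,E^2,$$

where $\#$ is some order-one number. A logarithm of even the Planck scale is only about 40, while $E^2$ at the Planck scale is roughly $10^{34}$ times the observed $m_H^2$.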
I think there’s a lot of mythology and intuitive reasoning surrounding this whole subject—probably the real experts could say a lot about it, but they are few, and a lot of people just repeat what they’ve been told, rather uncritically.
If we fix a solution to such equations at a time $t_0$, our solution will almost always appear “finely tuned” at a faraway time $t_1$—tuned to reproduce precisely the behavior at $t_0$ that we fixed previously! Why shouldn’t we imagine that God fixed the values of the SM constants for the low-energy world, and they are what they are at high energies simply because that’s what you get when you RG-flow to there?
This is something I can imagine Sabine Hossenfelder saying.
I confess I’d never heard the speculation that $m_H^2$ could go negative at sufficiently high energies (!!). If I haven’t yet abused your generosity enough: what sort of energies are we talking about, compared to the Planck energy?
The experts are still arguing about this; I don’t really know. To show how weird all this stuff is: there’s a review article from 2013 called “The top quark and Higgs boson masses and the stability of the electroweak vacuum”, which doesn’t look crackpotty to me, arguing that the vacuum state of the universe is stable if the Higgs mass and top quark mass are in the green region, but only metastable otherwise:
The big ellipse is where the parameters were expected to lie in 2012 when the paper was first written. The smaller ellipses only indicate the size of the uncertainty expected after later colliders made more progress. You shouldn’t take them too seriously: they could be centered in the stable region or the metastable region.
An appendix gives an update, which looks like this:
The paper says:
one sees that the central value of the top mass lies almost exactly on the boundary between vacuum stability and metastability. The uncertainty on the top quark mass is nevertheless presently too large to clearly discriminate between these two possibilities.
Then John wrote:
By the way, another paper analyzing problems with the Standard Model says:

It has been shown that higher dimension operators may change the lifetime of the metastable vacuum, $\tau$, from [one value] to [a drastically different one], where $T_U$ is the age of the Universe.

In other words, the calculations are not very reliable yet.
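Such wild swings are less crazy than they sound, because tunnelling rates are exponentially sensitive to the shape of the potential barrier. Schematically, in the usual false-vacuum decay picture, the decay rate per unit volume is

$$\frac{\Gamma}{V} \;\sim\; A\,e^{-B},$$

where $B$ is the action of the “bounce” solution describing a nucleating bubble of true vacuum. Changing the potential even modestly (say, with higher dimension operators) shifts $B$, and since $B$ is a huge number here, the lifetime can easily swing by hundreds of orders of magnitude.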
And then John wrote:
Sorry to keep spamming you, but since some of my last few comments didn’t make much sense, even to me, I did some more reading. It seems the best current conventional wisdom is this:
Assuming the Standard Model is valid up to the Planck energy, you can tune parameters near the Planck energy to get the observed parameters down here at low energies. So of course the Higgs mass down here is positive.
But, due to higher-order effects, the potential for the Higgs field no longer looks like the classic “Mexican hat” described by a polynomial of degree 4:
with the observed Higgs field sitting at one of the global minima.
Instead, it’s described by a more complicated function, like a polynomial of degree 6 or more. And this means that the minimum where the Higgs field is sitting may only be a local minimum:
In the left-hand scenario we’re at a global minimum and everything is fine. In the right-hand scenario we’re not, and the vacuum we see is only metastable. The Higgs mass is still positive: its square is essentially the curvature of the potential at our local minimum. But the universe will eventually tunnel through the potential barrier and we’ll all die.
Yes, that seems to be the conventional wisdom! Obviously they’re keeping it hush-hush to prevent panic. 
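To make the barrier picture concrete, here’s a purely illustrative toy potential (the shape and numbers are made up; in particular it puts “our” vacuum at zero field value just for simplicity):

$$V(\phi) \;=\; \phi^2 \;-\; \phi^4 \;+\; 0.2\,\phi^6 \qquad \text{(arbitrary units)}.$$

This has a local minimum at $\phi = 0$ with positive curvature $V''(0) = 2$, a barrier of height about $0.28$ near $\phi \approx \pm 0.78$, and deeper global minima of depth about $-0.65$ near $\phi \approx \pm 1.65$. Sitting at the shallow minimum you’d measure a perfectly positive mass squared, since that’s just the curvature there, while the deeper vacuum waits on the other side of the barrier.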
This paper has tons of relevant references:
• Tommi Markkanen, Arttu Rajantie, Stephen Stopyra, Cosmological aspects of Higgs vacuum metastability.
Abstract. The current central experimental values of the parameters of the Standard Model give rise to a striking conclusion: metastability of the electroweak vacuum is favoured over absolute stability. A metastable vacuum for the Higgs boson implies that it is possible, and in fact inevitable, that a vacuum decay takes place with catastrophic consequences for the Universe. The metastability of the Higgs vacuum is especially significant for cosmology, because there are many mechanisms that could have triggered the decay of the electroweak vacuum in the early Universe. We present a comprehensive review of the implications from Higgs vacuum metastability for cosmology along with a pedagogical discussion of the related theoretical topics, including renormalization group improvement, quantum field theory in curved spacetime and vacuum decay in field theory.
Scott wrote:
Once again, thank you so much! This is enlightening.
If you’d like other people to benefit from it, I’m totally up for you making it into a post on Azimuth, quoting from my emails as much or as little as you want. Or you could post it on that comment thread on my blog (which is still open), or I’d be willing to make it into a guest post (though that might need to wait till next week).
I guess my one other question is: what happens to this RG flow when you go to the infrared extreme? Is it believed, or known, that the “low-energy” values of the 25 Standard Model parameters are simply fixed points in the IR? Or could any of them run to strange values there as well?
I don’t really know the answer to that, so I’ll stop here.
But in case you’re worrying now that it’s “possible, and in fact inevitable, that a vacuum decay takes place with catastrophic consequences for the Universe”, relax! These calculations are very hard to do correctly. All existing work uses a lot of approximations that I don’t completely trust. Furthermore, they are assuming that the Standard Model is valid up to very high energies without any corrections due to new, yet-unseen particles!
So, while I think it’s a great challenge to get these calculations right, and to measure the Standard Model parameters accurately enough to do them right, I am not very worried about the Universe being taken over by a rapidly expanding bubble of ‘true vacuum’.