There’s yet another conference in this fast-paced series, and this time it’s in Southern California!

• Symposium on Compositional Structures 4, 22–23 May, 2019, Chapman University, California.

The Symposium on Compositional Structures (SYCO) is an interdisciplinary series of meetings aiming to support the growing community of researchers interested in the phenomenon of compositionality, from both applied and abstract perspectives, and in particular where category theory serves as a unifying common language.

The first SYCO was in September 2018, at the University of Birmingham. The second SYCO was in December 2018, at the University of Strathclyde. The third SYCO was in March 2019, at the University of Oxford. Each meeting attracted about 70 participants.

We welcome submissions from researchers across computer science, mathematics, physics, philosophy, and beyond, with the aim of fostering friendly discussion, disseminating new ideas, and spreading knowledge between fields. Submission is encouraged for both mature research and work in progress, and by both established academics and junior researchers, including students.

Submission is easy, with no format requirements or page restrictions. The meeting does not have proceedings, so work can be submitted even if it has been submitted or published elsewhere. Think creatively—you could submit a recent paper, or notes on work in progress, or even a recent Masters or PhD thesis.

While no list of topics could be exhaustive, SYCO welcomes submissions with a compositional focus related to any of the following areas, in particular from the perspective of category theory:

• logical methods in computer science, including classical and quantum programming, type theory, concurrency, natural language processing and machine learning;

• graphical calculi, including string diagrams, Petri nets and reaction networks;

• languages and frameworks, including process algebras, proof nets, type theory and game semantics;

• abstract algebra and pure category theory, including monoidal category theory, higher category theory, operads, polygraphs, and relationships to homotopy theory;

• quantum algebra, including quantum computation and representation theory;

• tools and techniques, including rewriting, formal proofs and proof assistants, and game theory;

• industrial applications, including case studies and real-world problem descriptions.

This new series aims to bring together the communities behind many previous successful events which have taken place over the last decade, including “Categories, Logic and Physics”, “Categories, Logic and Physics (Scotland)”, “Higher-Dimensional Rewriting and Applications”, “String Diagrams in Computation, Logic and Physics”, “Applied Category Theory”, “Simons Workshop on Compositionality”, and the “Peripatetic Seminar in Sheaves and Logic”.

SYCO will be a regular fixture in the academic calendar, running regularly throughout the year, and becoming over time a recognized venue for presentation and discussion of results in an informal and friendly atmosphere. To help create this community, and to avoid the need to make difficult choices between strong submissions, in the event that more good-quality submissions are received than can be accommodated in the timetable, the programme committee may choose to *defer* some submissions to a future meeting, rather than reject them. This would be done based largely on submission order, giving an incentive for early submission, but would also take into account other requirements, such as ensuring a broad scientific programme. Deferred submissions can be re-submitted to any future SYCO meeting, where they would not need peer review, and where they would be prioritised for inclusion in the programme. This will allow us to ensure that speakers have enough time to present their ideas, without creating an unnecessarily competitive reviewing process. Meetings will be held sufficiently frequently to avoid a backlog of deferred papers.

• John Baez, University of California, Riverside: Props in network theory.

• Tobias Fritz, Perimeter Institute for Theoretical Physics: Categorical probability: results and challenges.

• Nina Otter, University of California, Los Angeles: A unified framework for equivalences in social networks.

All times are anywhere-on-earth.

• Submission deadline: Wednesday 24 April 2019

• Author notification: Wednesday 1 May 2019

• Registration deadline: TBA

• Symposium dates: Wednesday 22 and Thursday 23 May 2019

Submission is by EasyChair, via the following link:

• https://easychair.org/conferences/?conf=syco4

Submissions should present research results in sufficient detail to allow them to be properly considered by members of the programme committee, who will assess papers with regards to significance, clarity, correctness, and scope. We encourage the submission of work in progress, as well as mature results. There are no proceedings, so work can be submitted even if it has been previously published, or has been submitted for consideration elsewhere. There is no specific formatting requirement, and no page limit, although for long submissions authors should understand that reviewers may not be able to read the entire document in detail.

• Miriam Backens, University of Oxford

• Ross Duncan, University of Strathclyde and Cambridge Quantum Computing

• Brendan Fong, Massachusetts Institute of Technology

• Stefano Gogioso, University of Oxford

• Amar Hadzihasanovic, Kyoto University

• Chris Heunen, University of Edinburgh

• Dominic Horsman, University of Grenoble

• Martti Karvonen, University of Edinburgh

• Kohei Kishida, Dalhousie University (chair)

• Andre Kornell, University of California, Davis

• Martha Lewis, University of Amsterdam

• Samuel Mimram, École Polytechnique

• Benjamin Musto, University of Oxford

• Nina Otter, University of California, Los Angeles

• Simona Paoli, University of Leicester

• Dorette Pronk, Dalhousie University

• Mehrnoosh Sadrzadeh, Queen Mary

• Pawel Sobocinski, University of Southampton

• Joshua Tan, University of Oxford

• Sean Tull, University of Oxford

• Dominic Verdon, University of Bristol

• Jamie Vicary, University of Birmingham and University of Oxford

• Maaike Zwart, University of Oxford

Here’s the math colloquium talk I gave at Georgia Tech this week:

• Hidden symmetries of the hydrogen atom.

Abstract. A classical particle moving in an inverse square central force, like a planet in the gravitational field of the Sun, moves in orbits that do not precess. This lack of precession, special to the inverse square force, indicates the presence of extra conserved quantities beyond the obvious ones. Thanks to Noether’s theorem, these indicate the presence of extra symmetries. It turns out that not only rotations in 3 dimensions, but also in 4 dimensions, act as symmetries of this system. These extra symmetries are also present in the quantum version of the problem, where they explain some surprising features of the hydrogen atom. The quest to fully understand these symmetries leads to some fascinating mathematical adventures.

I left out a lot of calculations, but someday I want to write a paper where I put them all in. This material is all *known*, but I feel like explaining it my own way.

In the process of creating the slides and giving the talk, though, I realized there’s a lot I don’t understand yet. Some of it is embarrassingly basic! For example, I give Greg Egan’s nice intuitive argument for how you can get some ‘Runge–Lenz symmetries’ in the 2d Kepler problem. I might as well just quote his article:

• Greg Egan, The ellipse and the atom.

He says:

Now, one way to find orbits with the same energy is by applying a rotation that leaves the sun fixed but repositions the planet. Any ordinary three-dimensional rotation can be used in this way, yielding another orbit with exactly the same shape, but oriented differently.

But there is another transformation we can use to give us a new orbit without changing the total energy. If we grab hold of the planet at either of the points where it’s travelling parallel to the axis of the ellipse, and then swing it along a circular arc centred on the sun, we can reposition it without altering its distance from the sun. But rather than rotating its velocity in the same fashion (as we would do if we wanted to rotate the orbit as a whole) we leave its velocity vector unchanged: its direction, as well as its length, stays the same.

Since we haven’t changed the planet’s distance from the sun, its potential energy is unaltered, and since we haven’t changed its velocity, its kinetic energy is the same. What’s more, since the speed of a planet of a given mass when it’s moving parallel to the axis of its orbit depends only on its total energy, the planet will still be in that state with respect to its new orbit, and so the new orbit’s axis must be parallel to the axis of the original orbit.

Rotations together with these ‘Runge–Lenz transformations’ generate an SO(3) action on the space of elliptical orbits of any given energy. But what’s the most geometrically vivid description of this SO(3) action?

Someone at my talk noted that you could grab the planet at *any* point of its path, and move it to *anywhere* the same distance from the Sun, while keeping its speed the same, and get a new orbit with the same energy. Are all the SO(3) transformations of this form?
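Egan's invariance claim is easy to verify numerically. Here is a quick sketch (units with k = GM = 1 and an arbitrary bound state, all invented for illustration): swinging the planet along a circular arc centred on the sun while leaving its velocity unchanged preserves the energy, and hence the semi-major axis a = -k/(2E), which depends only on the energy.

```python
import math

k = 1.0  # strength of the inverse-square force (GM, with planet mass 1)

def energy(r, v):
    # kinetic plus potential energy for the Kepler problem
    return 0.5 * (v[0]**2 + v[1]**2) - k / math.hypot(r[0], r[1])

def rotate(p, theta):
    c, s = math.cos(theta), math.sin(theta)
    return (c * p[0] - s * p[1], s * p[0] + c * p[1])

r = (1.0, 0.0)   # an arbitrary bound state
v = (0.2, 1.1)

# swing the planet along a circular arc centred on the sun,
# but leave its velocity vector unchanged
r2, v2 = rotate(r, 0.7), v

E1, E2 = energy(r, v), energy(r2, v2)
assert abs(E1 - E2) < 1e-12   # distance from sun and speed are both unchanged
assert E1 < 0                 # bound orbit

# the semi-major axis a = -k/(2E) depends only on the energy
a1, a2 = -k / (2 * E1), -k / (2 * E2)
assert abs(a1 - a2) < 1e-12
print(E1, a1)  # -0.375 1.3333333333333333
```

Whether the new orbit's axis stays parallel requires grabbing the planet at the special points Egan describes; the energy argument alone works anywhere.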

I have a bunch more questions, but this one is the simplest!

Check out the video of Christian Williams’s talk in the Applied Category Theory Seminar here at U. C. Riverside. It was nicely edited by Paola Fernandez and uploaded by Joe Moeller.

Abstract. Historically, code represents a sequence of instructions for a single machine. Each computer is its own world, and only interacts with others by sending and receiving data through external ports. As society becomes more interconnected, this paradigm becomes more inadequate – these virtually isolated nodes tend to form networks of great bottleneck and opacity. Communication is a fundamental and integral part of computing, and needs to be incorporated in the theory of computation. To describe systems of interacting agents with dynamic interconnection, in 1980 Robin Milner invented the pi calculus: a formal language in which a term represents an open, evolving system of processes (or agents) which communicate over names (or channels). Because a computer is itself such a system, the pi calculus can be seen as a generalization of traditional computing languages; there is an embedding of the lambda calculus into the pi calculus – but there is an important change in focus: programming is less like controlling a machine and more like designing an ecosystem of autonomous organisms. We review the basics of the pi calculus, and explore a variety of examples which demonstrate this new approach to programming. We will discuss some of the history of these ideas, called “process algebra”, and see exciting modern applications in blockchain and biology.
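To get a feel for the pi calculus's central idea, that channel names are themselves data that can be sent over channels, here is a toy simulation (entirely my own sketch, not Milner's calculus itself). Processes are Python generators that yield send and receive requests to a naive round-robin scheduler; the client passes the server a reply channel, and the server answers on the channel it received:

```python
from collections import deque

def run(*procs):
    """Toy scheduler: processes yield ('send', chan, val) or ('recv', chan)."""
    chans = {}
    pending = {p: next(p) for p in procs}   # each process's first operation
    while pending:
        progressed = False
        for p, op in list(pending.items()):
            q = chans.setdefault(op[1], deque())
            if op[0] == 'send':
                q.append(op[2])
                result = None
            elif q:                          # 'recv' with a message waiting
                result = q.popleft()
            else:
                continue                     # 'recv' on an empty channel: blocked
            progressed = True
            try:
                pending[p] = p.send(result)
            except StopIteration:
                del pending[p]
        if not progressed:
            raise RuntimeError('deadlock')

out = []

def server(n):
    for _ in range(n):
        msg, reply = yield ('recv', 'a')      # receive data *and* a channel name
        yield ('send', reply, msg.upper())    # answer on the received channel

def client(name):
    yield ('send', 'a', (name, name + '-reply'))
    answer = yield ('recv', name + '-reply')
    out.append(answer)

run(server(2), client('alice'), client('bob'))
print(out)  # ['ALICE', 'BOB']
```

Channel mobility is the whole point: the server never hard-codes where its answers go, it learns that from the messages it receives.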

“… as we seriously address the problem of modelling mobile communicating systems we get a sense of completing a model which was previously incomplete; for we can now begin to describe what goes on outside a computer in the same terms as what goes on inside – i.e. in terms of interaction. Turning this observation inside-out, we may say that we inhabit a global computer, an informatic world which demands to be understood just as fundamentally as physicists understand the material world.”— Robin Milner

The talk slides are here.

Reading material:

• Robin Milner, The polyadic pi calculus: a tutorial.

• Robin Milner, *Communicating and Mobile Systems*.

• Joachim Parrow, An introduction to the pi calculus.

Check out the video of Daniel Cicala’s talk, the fourth in the Applied Category Theory Seminar here at U. C. Riverside. It was nicely edited by Paola Fernandez and uploaded by Joe Moeller.

Abstract. A social contagion may manifest as a cultural trend, a spreading opinion, idea, or belief. In this talk, we explore a simple model of social contagion on a random network. We also look at the effects that network connectivity, edge distribution, and heterogeneity have on the diffusion of a contagion.
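A minimal version of such a model is Watts' threshold model: each node adopts the contagion once the fraction of its neighbors who have adopted it passes a threshold. A toy sketch (all parameters invented for illustration) showing the effect of connectivity:

```python
import random

def cascade(n, avg_degree, threshold, seeds, rng):
    """Watts-style threshold cascade on an Erdos-Renyi random network."""
    p = avg_degree / (n - 1)
    nbrs = {i: set() for i in range(n)}
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                nbrs[i].add(j)
                nbrs[j].add(i)
    active = set(rng.sample(range(n), seeds))
    changed = True
    while changed:   # keep activating nodes until nothing changes
        changed = False
        for i in range(n):
            if i in active or not nbrs[i]:
                continue
            if len(nbrs[i] & active) / len(nbrs[i]) >= threshold:
                active.add(i)
                changed = True
    return len(active) / n   # final fraction of adopters

rng = random.Random(1)
# with the same adoption threshold, a sparse network cascades globally
# while a dense one barely spreads: each neighbor of a well-connected
# node is only a small fraction of its contacts
sparse = cascade(1000, 3, 0.2, 10, rng)
dense = cascade(1000, 20, 0.2, 10, rng)
print(sparse, dense)
assert sparse > dense
```

This is the counterintuitive phenomenon the talk explores: better-connected networks can be *harder* to tip.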

The talk slides are here.

Reading material:

• Mason A. Porter and James P. Gleeson, Dynamical systems on networks: a tutorial.

• Duncan J. Watts, A simple model of global cascades on random networks.

• John Baez, John Foley and Joe Moeller, Network models from Petri nets with catalysts.

Check it out! And please report typos, mistakes, or anything you have trouble understanding! I’m happy to answer questions here.

Petri nets are a widely studied formalism for describing collections of entities of different types, and how they turn into other entities. I’ve written a lot about them here. Network models are a formalism for designing and tasking networks of agents, which our team invented for this project. Here we combine these ideas! This is worthwhile because while both formalisms involve networks, they serve a different function, and are in some sense complementary.

A Petri net can be drawn as a bipartite directed graph with vertices of two kinds: **places**, drawn as circles, and **transitions**, drawn as squares:

When we run a Petri net, we start by placing a finite number of dots called **tokens** in each place:

This is called a **marking**. Then we repeatedly change the marking using the transitions. For example, the above marking can change to this:

and then this:

Thus, the places represent different *types* of entity, and the transitions are ways that one collection of entities of specified types can turn into another such collection.
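This token game is easy to make concrete. Here is a minimal sketch in Python (the place and transition names are invented for illustration, not taken from the paper), with markings as multisets:

```python
from collections import Counter

# each transition consumes one multiset of tokens and produces another
transitions = {
    't1': (Counter({'A': 1, 'B': 1}), Counter({'C': 1})),  # A + B -> C
    't2': (Counter({'C': 1}), Counter({'A': 2})),          # C -> A + A
}

def fire(marking, name):
    """Fire a transition, returning the new marking."""
    inputs, outputs = transitions[name]
    if any(marking[p] < k for p, k in inputs.items()):
        raise ValueError(f'{name} is not enabled')
    # Counter subtraction drops non-positive counts, which is what we want
    return marking - inputs + outputs

m = Counter({'A': 2, 'B': 1})  # initial marking: two A tokens, one B
m = fire(m, 't1')              # now one A and one C
m = fire(m, 't2')              # now three A tokens
assert m == Counter({'A': 3})
```

A sequence of such firings is exactly a morphism in the category of processes discussed below.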

Network models serve a different function than Petri nets: they are a general tool for working with networks of many kinds. Mathematically, a network model is a lax symmetric monoidal functor from the free strict symmetric monoidal category on a set of agent types to Cat. Elements of this set represent different kinds of ‘agents’. Unlike in a Petri net, we do not usually consider processes where these agents turn into other agents. Instead, we wish to study everything that can be done with a fixed collection of agents. Any object of the free symmetric monoidal category is a tensor product of generating agent types; thus, it describes a collection of agents of various kinds. The functor maps this object to a category that describes everything that can be done with this collection of agents.

In many examples considered so far, this is a category whose morphisms are graphs of some sort, whose nodes are agents of the given types. Composing these morphisms corresponds to ‘overlaying’ graphs. Network models of this sort let us design networks where the nodes are agents and the edges are communication channels or shared commitments. In our first paper the operation of overlaying graphs was always commutative:

• John Baez, John Foley, Joe Moeller and Blake Pollard, Network models.
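In the simplest case of that setup, a network on a fixed set of agents is just a set of edges, and the overlay of two networks is the union of their edge sets, which is visibly commutative. A tiny sketch (agent names invented):

```python
# networks on a fixed set of agents, represented as sets of (frozen) edges
g = {frozenset({'a', 'b'})}
h = {frozenset({'b', 'c'}), frozenset({'a', 'b'})}

def overlay(g, h):
    # overlaying just merges the edge sets
    return g | h

assert overlay(g, h) == overlay(h, g)  # commutative
assert overlay(g, g) == g              # overlaying a network on itself adds nothing
print(sorted(sorted(e) for e in overlay(g, h)))  # [['a', 'b'], ['b', 'c']]
```

The noncommutative overlay of Joe's later paper, mentioned next, refines exactly this operation.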

Subsequently Joe introduced a more general noncommutative overlay operation:

• Joe Moeller, Noncommutative network models.

This lets us design networks where each agent has a limit on how many communication channels or commitments it can handle; the noncommutativity lets us take a ‘first come, first served’ approach to resolving conflicting commitments.

Here we take a different tack: we instead assign to each collection of agents a category whose morphisms are *processes that the given collection of agents can carry out*. Composition of morphisms corresponds to carrying out first one process and then another.

This idea meshes well with Petri net theory, because any Petri net determines a symmetric monoidal category whose morphisms are processes that can be carried out using this Petri net. More precisely, the objects of this category are markings of the Petri net, and the morphisms are sequences of ways to change these markings using transitions, e.g.:

Given a Petri net, then, how do we construct a network model, and in particular, what is the set of agent types? In a network model the elements of this set represent different kinds of agents. In the simplest scenario, these agents persist in time. Thus, it is natural to take them to be ‘catalysts’. In chemistry, a reaction may require a catalyst to proceed, but it neither increases nor decreases the amount of this catalyst present. In everyday life, a *door* serves as a catalyst: it lets you walk through a wall, and it doesn’t get used up in the process!

For a Petri net, ‘catalysts’ are species that are neither increased nor decreased in number by any transition. For example, in the following Petri net, one species is a catalyst:

but the other two are not. One transition requires a token of the catalyst species as input to proceed, but it also outputs one token of this species, so the total number of such tokens is unchanged. Similarly, the other transition requires no tokens of this species as input to proceed, and it also outputs none, so again the total number is unchanged.
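This conservation condition is easy to state in code. A quick sketch (with invented species and transitions, not the ones in the paper's examples): a species is a catalyst exactly when every transition consumes as many of its tokens as it produces.

```python
from collections import Counter

# transitions as (inputs, outputs) pairs of multisets
transitions = {
    't1': (Counter({'D': 1, 'A': 1}), Counter({'D': 1, 'B': 1})),  # D + A -> D + B
    't2': (Counter({'A': 1}), Counter({'B': 1})),                  # A -> B
}

def is_catalyst(species):
    # consumed and produced in equal numbers by every transition
    return all(ins[species] == outs[species]
               for ins, outs in transitions.values())

print([s for s in 'DAB' if is_catalyst(s)])  # ['D']
```

Here D plays the role of the door: t1 needs it present but returns it untouched.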

In Theorem 11 of our paper, we prove that given any Petri net and any subset of its catalysts, there is a network model

An object of this network model’s source category says how many tokens of each catalyst are present; the network model sends it to the subcategory of processes whose objects are markings with this specified amount of each catalyst, and whose morphisms are processes going between these.

From this functor we can construct a single category by ‘gluing together’ all these categories using the Grothendieck construction. Because the functor is lax symmetric monoidal, we can use an enhanced version of this construction to make the resulting category symmetric monoidal. We already did this in our first paper on network models, but by now the math has been better worked out here:

• Joe Moeller and Christina Vasilakopoulou, Monoidal Grothendieck construction.

The tensor product in this glued-together category describes doing processes ‘in parallel’. It is similar to the symmetric monoidal category of processes determined by the Petri net alone, but it is better suited to applications where agents each have their own ‘individuality’, because the latter is actually a *commutative* monoidal category, where permuting agents has no effect at all, while the glued-together category is not so degenerate. In Theorem 12 of our paper we make this precise by describing the glued-together category more concretely as a symmetric monoidal category, and clarifying its relation to the category of processes coming directly from the Petri net.

There are no morphisms between objects with different amounts of catalysts, since no transitions can change the amount of catalysts present. The category is thus a ‘disjoint union’, or more technically a coproduct, of subcategories, one for each element of the free commutative monoid on the set of chosen catalysts, which specifies the amount of each catalyst present.

The tensor product on the whole category has the property that tensoring an object in one of these subcategories with an object in another gives an object in a third, obtained by adding the amounts of catalysts; and similarly for morphisms. However, in Theorem 14 we show that each subcategory also has its *own* tensor product, which describes doing one process *after* another while reusing catalysts.

This tensor product is a very cool thing. On the one hand it’s quite obvious: for example, if two people want to walk through a door, they can both do it, one at a time, because the door doesn’t get used up when someone walks through it. On the other hand, it’s mathematically interesting: it turns out to give a lot of examples of monoidal categories that can’t be made symmetric or even braided, even though the tensor product of objects is commutative! The proof boils down to this:

Here the wire in the middle represents the catalysts, while the two boxes are processes which we can carry out using these catalysts. We can do either one first, but we get different morphisms as a result.

The paper has lots of pictures like this—many involving jeeps and boats, which serve as catalysts to carry people first from a base to the shore and then from the shore to an island. I think these make it clear that the underlying ideas are quite commonsensical. But they need to be formalized to program them into a computer—and it’s nice that doing this brings in some classic themes in category theory!

Some posts in this series:

• Part 1. CASCADE: the Complex Adaptive System Composition and Design Environment.

• Part 2. Metron’s software for system design.

• Part 3. Operads: the basic idea.

• Part 4. Network operads: an easy example.

• Part 5. Algebras of network operads: some easy examples.

• Part 6. Network models.

• Part 7. Step-by-step compositional design and tasking using commitment networks.

• Part 8. Compositional tasking using category-valued network models.

• Part 9 – Network models from Petri nets with catalysts.

In my 50s, too old to become a real expert, I have finally fallen in love with algebraic geometry. As the name suggests, this is the study of geometry using algebra. Around 1637, Pierre Fermat and René Descartes laid the groundwork for this subject by taking a plane, mentally drawing a grid on it as we now do with graph paper, and calling the coordinates x and y. We can then write down an equation like x² + y² = 1, and there will be a curve consisting of points whose coordinates obey this equation. In this example, we get a circle!

It was a revolutionary idea at the time, because it lets us systematically convert questions about geometry into questions about equations, which we can solve if we’re good enough at algebra. Some mathematicians spend their whole lives on this majestic subject. But I never really liked it much—until recently. Now I’ve connected it to my interest in quantum physics.

*We can describe many interesting curves with just polynomials. For example, roll a circle inside a circle three times as big. You get a curve with three sharp corners called a “deltoid”, shown in red above. It’s not obvious that you can describe this using a polynomial equation, but you can. The great mathematician Leonhard Euler dreamt this up in 1745.*
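The caption's claim is easy to check numerically: the deltoid has a trigonometric parametrization, yet it also satisfies a single quartic polynomial equation. (The parametrization and quartic below are the textbook ones for a deltoid inscribed in a circle of radius 3, not taken from this post.)

```python
import math

def deltoid_point(t):
    # roll a circle of radius 1 inside a circle of radius 3
    return 2 * math.cos(t) + math.cos(2 * t), 2 * math.sin(t) - math.sin(2 * t)

def quartic(x, y):
    # the deltoid's implicit polynomial equation
    return (x * x + y * y) ** 2 - 8 * x * (x * x - 3 * y * y) \
        + 18 * (x * x + y * y) - 27

# the quartic vanishes at every point of the rolled curve
for k in range(100):
    x, y = deltoid_point(2 * math.pi * k / 100)
    assert abs(quartic(x, y)) < 1e-9
print('all sampled points satisfy the quartic')
```

So a curve defined by rolling circles is, secretly, an algebraic variety.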

As a kid I liked physics better than math. My uncle Albert Baez, father of the famous folk singer Joan Baez, worked for UNESCO, helping developing countries with physics education. My parents lived in Washington, D.C. Whenever my uncle came to town, he’d open his suitcase, pull out things like magnets or holograms, and use them to explain physics to me. This was fascinating. When I was eight, he gave me a copy of the college physics textbook he wrote. While I couldn’t understand it, I knew right away that I wanted to. I decided to become a physicist.

My parents were a bit worried, because they knew physicists needed mathematics, and I didn’t seem very good at that. I found long division insufferably boring, and refused to do my math homework, with its endless repetitive drills. But later, when I realized that by fiddling around with equations I could learn about the universe, I was hooked. The mysterious symbols seemed like magic spells. And in a way, they are. Science is the magic that actually works.

And so I learned to love math, but in a certain special way: as the key to physics. In college I wound up majoring in math, in part because I was no good at experiments. I learned quantum mechanics and general relativity, studying the necessary branches of math as I went. I was fascinated by Eugene Wigner’s question about the “unreasonable effectiveness” of mathematics in describing the universe. As he put it, “The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve.”

Despite Wigner’s quasi-religious language, I didn’t think that God was an explanation. As far as I can tell, that hypothesis raises more questions than it answers. I studied mathematical logic and tried to prove that any universe containing a being like us, able to understand the laws of that universe, must have some special properties. I failed utterly, though I managed to get my first publishable paper out of the wreckage. I decided that there must be some deep mystery here, that we might someday understand, but only after we understood what the laws of physics actually are: not the pretty good approximate laws we know now, but the actual correct laws.

As a youthful optimist I felt sure such laws must exist, and that we could know them. And then, surely, these laws would give a clue to the deeper puzzle: why the universe is governed by mathematical laws in the first place.

So I went to graduate school—to a math department, but motivated by physics. I already knew that there was too much mathematics to ever learn it all, so I tried to focus on what mattered to me. And one thing that did not matter to me, I believed, was algebraic geometry.

How could any mathematician *not* fall in love with algebraic geometry? Here’s why. In its classic form, this subject considers only polynomial equations—equations that describe not just curves, but also higher-dimensional shapes called “varieties.” So an equation like x² + y² = 1 is fine, and so are polynomial equations in more variables, but an equation with sines or cosines, or other functions, is out of bounds—unless we can figure out how to convert it into an equation with just polynomials. To me as a graduate student, this seemed like a terrible limitation. After all, physics problems involve plenty of functions that aren’t polynomials.

*This is Cayley’s nodal cubic surface. It’s famous because it is the variety with the most nodes (those pointy things) that is described by a cubic equation—one where we multiply at most three variables at once.*

Why does algebraic geometry restrict itself to polynomials? Mathematicians study curves described by all sorts of equations – but sines, cosines and other fancy functions are only a distraction from the fundamental mysteries of the relation between geometry and algebra. Thus, by restricting the breadth of their investigations, algebraic geometers can dig deeper. They’ve been working away for centuries, and by now their mastery of polynomial equations is truly staggering. Algebraic geometry has become a powerful tool in number theory, cryptography and other subjects.

I once met a grad student at Harvard, and I asked him what he was studying. He said one word, in a portentous tone: “Hartshorne.” He meant Robin Hartshorne’s textbook *Algebraic Geometry*, published in 1977. Supposedly an introduction to the subject, it’s actually a very hard-hitting tome. Consider Wikipedia’s description:

The first chapter, titled “Varieties,” deals with the classical algebraic geometry of varieties over algebraically closed fields. This chapter uses many classical results in commutative algebra, including Hilbert’s Nullstellensatz, with the books by Atiyah–Macdonald, Matsumura, and Zariski–Samuel as usual references.

If you can’t make heads or tails of this… well, that’s exactly my point. To penetrate even the first chapter of Hartshorne, you need quite a bit of background. To read Hartshorne is to try to catch up with centuries of geniuses running as fast as they could.

One of these geniuses was Hartshorne’s thesis advisor, Alexander Grothendieck. From about 1960 to 1970, Grothendieck revolutionized algebraic geometry as part of an epic quest to prove some conjectures about number theory, the Weil Conjectures. He had the idea that these could be translated into questions about geometry and settled that way. But making this idea precise required a huge amount of work. To carry it out, he started a seminar. He gave talks almost every day, and enlisted the help of some of the best mathematicians in Paris.

Working nonstop for a decade, they produced tens of thousands of pages of new mathematics, packed with mind-blowing concepts. In the end, using these ideas, Grothendieck succeeded in proving all the Weil Conjectures except the final, most challenging one—a close relative of the famous Riemann Hypothesis, for which a million dollar prize still waits.

Towards the end of this period, Grothendieck also became increasingly involved in radical politics and environmentalism. In 1970, when he learned that his research institute was partly funded by the military, he resigned. He left Paris and moved to teach in the south of France. Two years later a student of his proved the last of the Weil Conjectures—but in a way that Grothendieck disliked, because it used a “trick” rather than following the grand plan he had in mind. He was probably also jealous that someone else reached the summit before him. As time went by, Grothendieck became increasingly embittered with academia. And in 1991, he disappeared!

We now know that he moved to the Pyrenees, where he lived until his death in 2014. He seems to have largely lost interest in mathematics and turned his attention to spiritual matters. Some reports make him seem quite unhinged. It is hard to say. At least 20,000 pages of his writings remain unpublished.

During his most productive years, even though he dominated the French school of algebraic geometry, many mathematicians considered Grothendieck’s ideas “too abstract.” This sounds a bit strange, given how abstract all mathematics is. What’s inarguably true is that it takes time and work to absorb his ideas. As grad student I steered clear of them, since I was busy struggling to learn physics. There, too, centuries of geniuses have been working full-speed, and anyone wanting to reach the cutting edge has a lot of catching up to do. But, later in my career, my research led me to Grothendieck’s work.

If I had taken a different path, I might have come to grips with his work through string theory. String theorists postulate that besides the visible dimensions of space and time—three of space and one of time—there are extra dimensions of space curled up too small to see. In some of their theories these extra dimensions form a variety. So, string theorists easily get pulled into sophisticated questions about algebraic geometry. And this, in turn, pulls them toward Grothendieck.

Indeed, some of the best advertisements for string theory are not successful predictions of experimental results—it’s made absolutely none of these—but rather, its ability to solve problems within pure mathematics, including algebraic geometry. For example, suppose you have a typical quintic threefold: a 3-dimensional variety described by a polynomial equation of degree 5. How many curves can you draw on it that are described by polynomials of degree 3? I’m sure this question has occurred to you. So, you’ll be pleased to know that the answer is exactly 317,206,375.

This sort of puzzle is quite hard, but string theorists have figured out a systematic way to solve many puzzles of this sort, including much harder ones. Thus, we now see string theorists talking with algebraic geometers, each able to surprise the other with their insights.

My own interest in Grothendieck’s work had a different source. I’ve always had serious doubts about string theory, and counting curves on varieties is the last thing I’d ever try. Like rock climbing, it’s exciting to watch but too scary to actually attempt myself. But it turns out that Grothendieck’s ideas are so general and powerful that they spill out beyond algebraic geometry into many other subjects. In particular, his 600-page unpublished manuscript *Pursuing Stacks*, written in 1983, made a big impression on me. In it, he argues that topology—very loosely, the theory of what space can be shaped like, if we don’t care about bending or stretching it, just what kind of holes it has—can be completely reduced to algebra!

At first this idea may sound just like algebraic geometry, where we use algebra to describe geometrical shapes, like curves or higher-dimensional varieties. But “algebraic topology” winds up having a very different flavor, because in topology we don’t restrict our shapes to be described by polynomial equations. Instead of dealing with beautiful gems, we’re dealing with floppy, flexible blobs—so the kind of algebra we need is different.

Algebraic topology is a beautiful subject that had been around long before Grothendieck—but he was one of the first to seriously propose a method to reduce all topology to algebra.

Thanks to my work on physics, I found his proposal tremendously exciting when I came across it. At the time I had taken up the challenge of trying to unify our two best theories of physics: quantum physics, which describes all the forces except gravity, and general relativity, which describes gravity. It seems that until we do this, our understanding of the fundamental laws of physics is doomed to be incomplete. But it’s devilishly difficult. One reason is that quantum physics is based on algebra, while general relativity involves a lot of topology. But that suggests an avenue of attack: if we can figure out how to express topology in terms of algebra, we might find a better language to formulate a theory of quantum gravity.

My physics colleagues will let out a howl here, and complain that I am oversimplifying. Yes, I’m oversimplifying. There is more to quantum physics than mere algebra, and more to general relativity than mere topology. Nonetheless, the possible benefits to physics of reducing topology to algebra are what got me so excited about Grothendieck’s work.

So, starting in the 1990s, I tried to understand the powerful abstract concepts that Grothendieck had invented—and by now I have partially succeeded. Some mathematicians find these concepts to be the hard part of algebraic geometry. They now seem like the easy part to me. The hard part, for me, is the nitty-gritty details. First, there is all the material in the texts that Hartshorne takes as prerequisites: “the books by Atiyah–Macdonald, Matsumura, and Zariski–Samuel.” But there is also a lot more.

So, while I now have some of what it takes to read Hartshorne, until recently I was too intimidated to learn it. A student of physics once asked a famous expert how much mathematics a physicist needs to know. The expert replied: “More.” Indeed, the job of learning mathematics is never done, so I focus on the things that seem most important and/or fun. Until last year, algebraic geometry never rose to the top of the list.

What changed? I realized that algebraic geometry is connected to the relation between classical and quantum physics. Classical physics is the physics of Newton, where we imagine that we can measure everything with complete precision, at least in principle. Quantum physics is the physics of Schrödinger and Heisenberg, governed by the uncertainty principle: if we measure some aspects of a physical system with complete precision, others must remain undetermined.

For example, any spinning object has an “angular momentum”. In classical mechanics we visualize this as an arrow pointing along the axis of rotation, whose length is proportional to how fast the object is spinning. And in classical mechanics, we assume we can measure this arrow precisely. In quantum mechanics—a more accurate description of reality—this turns out not to be true. For example, if we know how far this arrow points in the *x* direction, we cannot know how far it points in the *y* direction. This uncertainty is too small to be noticeable for a spinning basketball, but for an electron it is important: physicists had only a rough understanding of electrons until they took this into account.
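The obstruction here can be made concrete in a few lines of code. Here is a minimal sketch (in Python with NumPy; my own illustration, not part of the original post) using the spin-1/2 angular momentum operators, whose nonzero commutator is exactly what forbids knowing two components of the arrow at once:

```python
# Spin-1/2 angular momentum operators J_i = (hbar/2) * sigma_i, built from the
# Pauli matrices. Their commutator [J_x, J_y] = i*hbar*J_z is nonzero, which is
# what forbids simultaneously sharp values of two components of the arrow.
import numpy as np

hbar = 1.0  # work in units where hbar = 1
Jx = 0.5 * hbar * np.array([[0, 1], [1, 0]], dtype=complex)
Jy = 0.5 * hbar * np.array([[0, -1j], [1j, 0]], dtype=complex)
Jz = 0.5 * hbar * np.array([[1, 0], [0, -1]], dtype=complex)

commutator = Jx @ Jy - Jy @ Jx
print(np.allclose(commutator, 1j * hbar * Jz))  # True: the components don't commute
```

If the commutator were zero, both components could be measured at once; its nonzero value is the algebraic seed of the uncertainty described above.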

Physicists often want to “quantize” classical physics problems. That is, they start with the classical description of some physical system, and they want to figure out the quantum description. There is no fully general and completely systematic procedure for doing this. This should not be surprising: the two worldviews are so different. However, there are useful recipes for quantization. The most systematic ones apply to a very limited selection of physics problems.

For example, sometimes in classical physics we can describe a system by a point in a variety. This is not something one generally expects, but it happens in plenty of important cases. For example, consider a spinning object: if we fix how long its angular momentum arrow is, the arrow can still point in any direction, so its tip must lie on a sphere. Thus, we can describe a spinning object by a point on a sphere. And this sphere is actually a variety, the “Riemann sphere”, named after Bernhard Riemann, one of the greatest algebraic geometers of the 1800s.
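To make the identification of this sphere with a complex variety concrete: stereographic projection matches up the sphere, minus one point, with the complex plane. Here is a small sketch (Python; my own illustration, with function names I made up):

```python
# The sphere of angular momentum directions as the Riemann sphere:
# stereographic projection from the north pole identifies each point
# (x, y, z) on the unit sphere, except the pole itself, with one complex number.
import math

def to_complex(x, y, z):
    """Stereographic projection of a unit-sphere point to the complex plane."""
    return complex(x, y) / (1 - z)

def to_sphere(w):
    """Inverse projection: a complex number back to the unit sphere."""
    d = 1 + abs(w) ** 2
    return (2 * w.real / d, 2 * w.imag / d, (abs(w) ** 2 - 1) / d)

# Round trip: a direction for the angular momentum arrow...
p = (1 / math.sqrt(3), 1 / math.sqrt(3), 1 / math.sqrt(3))
w = to_complex(*p)
print(all(math.isclose(a, b) for a, b in zip(to_sphere(w), p)))  # True
```

Adding one extra point “at infinity” for the north pole gives the whole Riemann sphere, which is why it counts as a complex variety.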

When a classical physics problem is described by a variety, some magic happens. The process of quantization becomes completely systematic—and surprisingly simple. There is even a kind of reverse process, which one might call “classicization,” that lets you turn the quantum description back into a classical description. The classical and quantum approaches to physics become tightly linked, and one can take ideas from either approach and see what they say about the other one. For example, each point on the variety describes not only a state of the classical system (in our example, a definite direction for the angular momentum), but also a state of the corresponding quantum system—even though the latter is governed by the uncertainty principle. The quantum state is the “best quantum approximation” to the classical state.

Even better, in this situation many of the basic theorems about algebraic geometry can be seen as facts about quantization! Since quantization is something I’ve been thinking about for a long time, this makes me very happy. Richard Feynman once said that for him to make progress on a tough physics problem, he needed to have some sort of special angle on it:

I have to think that I have some kind of inside track on this problem. That is, I have some sort of talent that the other guys aren’t using, or some way of looking, and they are being foolish not to notice this wonderful way to look at it. I have to think I have a little better chance than the other guys, for some reason. I know in my heart that it is likely that the reason is false, and likely the particular attitude I’m taking with it was thought of by others. I don’t care; I fool myself into thinking I have an extra chance.

This may be what I’d been missing on algebraic geometry until now. Algebraic geometry is not just a problem to be solved, it’s a body of knowledge—but it’s such a large, intimidating body of knowledge that I didn’t dare tackle it until I got an inside track. Now I can read Hartshorne, translate some of the results into facts about physics, and feel I have a chance at understanding this stuff. And it’s a great feeling.

*For the details of how algebraic geometry connects classical and quantum mechanics, see my talk slides and series of blog articles.*

A metal-organic framework or **MOF** is a molecular structure built from metal atoms and organic compounds. There are many kinds. They can be 3-dimensional, like this one made by scientists at CSIRO in Australia:

And they can be full of microscopic holes, giving them an enormous surface area! For example, here’s a diagram of a MOF with yellow and orange balls showing the holes:

In fact, one gram of the stuff can have a surface area of more than 12,000 square meters!

Gas molecules like to sit inside these holes. So, perhaps surprisingly at first, you can pack a lot more gas in a cylinder containing a MOF than you can in an empty cylinder at the same pressure!

This lets us store gases using MOFs—like carbon dioxide, but also hydrogen, methane and others. And importantly, you can also get the gas molecules *out* of the MOF without enormous amounts of energy. Also, you can craft MOFs with different hole sizes and different chemical properties, so they attract some gases much more than others.

So, we can imagine various applications suited to fighting climate change! One is carbon capture and storage, where you want a substance that eagerly latches onto CO_{2} molecules, but can also easily be persuaded to let them go. But another is hydrogen or methane storage for the purpose of fuel. Methane releases less CO_{2} than gasoline does when it burns, per unit amount of energy—and hydrogen releases none at all. That’s why some advocate a hydrogen economy.

Could hydrogen-powered cars be better than battery-powered cars, someday? I don’t know. But never mind—such issues, though important, aren’t what I want to talk about *now*. I just want to quote something about methane storage in MOFs, to give you a sense of the state of the art.

• Mark Peplow, Metal-organic framework compound sets methane storage record, *C&EN*, 11 December 2017.

Cars powered by methane emit less CO_{2} than gasoline guzzlers, but they need expensive tanks and compressors to carry the gas at about 250 atm. Certain metal-organic framework (MOF) compounds—made from a lattice of metal-based nodes linked by organic struts—can store methane at lower pressures because the gas molecules pack tightly inside their pores. So MOFs, in principle, could enable methane-powered cars to use cheaper, lighter, and safer tanks. But in practical tests, no material has met a U.S. Department of Energy (DOE) gas storage target of 263 cm^{3} of methane per cm^{3} of adsorbent at room temperature and 64 atm, enough to match the capacity of high-pressure tanks.

A team led by David Fairen-Jimenez at the University of Cambridge has now developed a synthesis method that endows a well-known MOF with a capacity of 259 cm^{3} of methane per cm^{3} under those conditions, at least 50% higher than its nearest rival. “It’s definitely a significant result,” says Jarad A. Mason at Harvard University, who works with MOFs and other materials for energy applications and was not involved in the research. “Capacity has been one of the biggest stumbling blocks.”

Only about two-thirds of the MOF’s methane was released when the pressure dropped to 6 atm, a minimum pressure needed to sustain a decent flow of gas from a tank. But this still provides the highest methane delivery capacity of any bulk adsorbent.

A couple of things are worth noting here. First, the process of a molecule sticking to a surface is called adsorption, not to be confused with absorption. Second, notice that using MOFs they managed to compress methane by a factor of 259 at a pressure of just 64 atmospheres. If we tried the same trick without MOFs, we would need a pressure of 259 atmospheres!
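The arithmetic behind that last claim is just the ideal gas law: at fixed temperature, the amount of gas in a fixed volume scales linearly with pressure. A quick sketch (Python; the numbers are the ones quoted above, and the ideal-gas assumption is mine):

```python
# Ideal-gas comparison: at fixed temperature PV = nRT, so the amount of gas a
# fixed tank holds scales linearly with pressure. The quoted MOF stores
# 259 tank-volumes of methane (measured at 1 atm) at only 64 atm.
mof_storage_ratio = 259    # cm^3 of methane (at 1 atm) per cm^3 of adsorbent
mof_pressure_atm = 64      # operating pressure with the MOF

# Without a MOF, packing 259 volumes of ideal gas into 1 volume needs ~259 atm.
pressure_without_mof = mof_storage_ratio * 1.0   # atm

print(pressure_without_mof)                      # 259.0
print(pressure_without_mof / mof_pressure_atm)   # ~4: the MOF runs at 1/4 the pressure
```

(Real methane at 250+ atm deviates from ideal behavior, but the rough comparison stands.)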

But MOFs are not only good at *holding* gases, they’re good at *sucking them up*, which is really the flip side of the same coin: gas molecules avidly seek to sit inside the little holes of your MOF. So people are also using MOFs to build highly sensitive *detectors* for specific kinds of gases:

• Tunable porous MOF materials interface with electrodes to sound the alarm at the first sniff of hydrogen sulfide, *Phys.Org*, 7 March 2017.

And some MOFs work in water, too—so people are trying to use them as water filters, sort of a high-tech version of zeolites, the minerals that inspired people to invent MOFs in the first place. Zeolites have an impressive variety of crystal structures:

and so on… but MOFs seem to be more adjustable in their structure and chemical properties.

Looking more broadly at future applications, we can imagine MOFs will be important in a host of technologies where we want a substance with lots of microscopic holes that are eager to hold specific molecules. I have a feeling that the most powerful applications of MOFs will come when other technologies mature. For example: projecting forward to a time when we get really good nanotechnology, we can imagine MOFs as useful “storage lockers” for molecular robots.

But next time I’ll talk about what we can do *now*, or soon, to capture carbon dioxide with MOFs.

In the meantime: can you imagine some cool things we could do with MOFs? This may feed your imagination:

• Wikipedia, Metal-organic frameworks.

• Breakthrough Institute, Is climate change like diabetes or an asteroid?

The Breakthrough Institute seeks “technological solutions to environmental challenges”, so that informs their opinions. Let me quote some bits and urge you to read the whole thing! Even if it annoys you, it should make you think a bit.

Is climate change more like an asteroid or diabetes? Last month, one of us argued at Slate that climate advocates should resist calls to declare a national climate emergency because climate change was more like “diabetes for the planet” than an asteroid. The diabetes metaphor was surprisingly controversial. Climate change can’t be managed or lived with, many argued in response; it is an existential threat to human societies that demands an immediate cure.

The objection is telling, both in the ways in which it misunderstands the nature of the problem and in the contradictions it reveals. Diabetes is not benign. It is not a “natural” phenomenon and it can’t be cured. It is a condition that, if unmanaged, can kill you. And even for those who manage it well, life is different than before diabetes.

This seems to us to be a reasonably apt description of the climate problem. There is no going back to the world before climate change. Whatever success we have mitigating climate change, we almost certainly won’t return to pre-industrial atmospheric concentrations of greenhouse gases, at least not for many centuries. Even at one or 1.5 degrees Celsius of warming, the climate and the planet will look very different, and that will bring unavoidable consequences for human societies. We will live on a hotter planet and in a climate that will be more variable and less predictable.

How bad our planetary diabetes gets will depend on how much we continue to emit and how well adapted to a changing climate human societies become. With the present one degree of warming, it appears that human societies have adapted relatively well. Various claims attributing present day natural disasters to climate change are controversial. But the overall statistics suggest that deaths due to climate-related natural disasters globally are falling, not rising, and that economic losses associated with those disasters, adjusting for growing population and affluence, have been flat for many decades.

But at three or four degrees of warming, all bets are off. And it appears that unmanaged, that’s where present trends in emissions are likely to take us. Moreover, even with radical action, stabilizing emissions at 1.5 degrees C, as many advocates now demand, is not possible without either solar geoengineering or sucking carbon emissions out of the atmosphere at massive scale. Practically, given legacy emissions and committed infrastructure, the long-standing international target of limiting temperature increase to two degrees C is also extremely unlikely.

Unavoidably, then, treating our climate change condition will require not simply emissions reductions but also significant adaptation to known and unknown climate risks that are already baked in to our future due to two centuries of fossil fuel consumption. It is in this sense that we have long argued that climate change must be understood as a chronic condition of global modernity, a problem that will be managed but not solved.

A discussion of the worst-case versus the best-case IPCC scenarios, and what leads to these scenarios:

The worst case climate scenarios, which are based on worst case emissions scenarios, are the source of most of the terrifying studies of potential future climate impacts. These are frequently described as “business as usual” — what happens if the economy keeps growing and the global population becomes wealthier and hence more consumptive. But that’s not how the IPCC, which generates those scenarios, actually gets to very high emissions futures. Rather, the worst case scenarios are those in which the world remains poor, populous, unequal, and low-tech. It is a future with lots of poor people who don’t have access to clean technology. By contrast, a future in which the world is resilient to a hotter climate is likely also one in which the world has been more successful at mitigating climate change as well. A wealthier world will be a higher-tech world, one with many more low carbon technological options and more resources to invest in both mitigation and adaptation. It will be less populous (fertility rates reliably fall as incomes rise), less unequal (because many fewer people will live in extreme poverty), and more urbanized (meaning many more people living in cities with hard infrastructure, air conditioning, and emergency services to protect them).

That will almost certainly be a world in which global average temperatures have exceeded two degrees above pre-industrial levels. The latest round of climate deadline-ism (12 years to prevent climate catastrophe, according to *The Guardian*) won’t change that. But as even David Wallace Wells, whose book *The Uninhabitable Earth* has helped revitalize climate catastrophism, acknowledges, “Two degrees would be terrible but it’s better than three… And three degrees is much better than four.” Given the current emissions trajectory, a future world that stabilized emissions below 2.5 or three degrees, an accomplishment that in itself will likely require very substantial and sustained efforts to reduce emissions, would also likely be one reasonably well adapted to live in that climate, as it would, of necessity, be one that was much wealthier, less unequal, and more advanced technologically than the world we live in today.

The most controversial part of the article concerns the “apocalyptic” or “millenarian” tendency among environmentalists: the feeling that only a complete reorganization of society will save us—for example, going “back to nature”.

[…] while the nature of the climate problem is chronic and the political and policy responses are incremental, the culture and ideology of contemporary environmentalism is millenarian. In the millenarian mind, there are only two choices, catastrophe or completely reorganizing society. Americans will either see the writing on the wall and remake the world, or perish in fiery apocalypse.

This, ultimately, is why adaptation, nuclear energy, carbon capture, and solar geoengineering have no role in the environmental narrative of apocalypse and salvation, even as all but the last are almost certainly necessary for any successful response to climate change and will also end up in any major federal policy effort to address climate change. Because they are basically plug-and-play with the existing socio-technical paradigm. They don’t require that we end capitalism or consumerism or energy intensive lifestyles. Modern, industrial, techno-society goes on, just without the emissions. This is also why efforts by nuclear, carbon capture, and geoengineering advocates to marshal catastrophic framing to build support for those approaches have had limited effect.

The problem for the climate movement is that the technocratic requirements necessary to massively decarbonize the global economy conflict with the egalitarian catastrophism that the movement’s mobilization strategies demand. McKibben has privately acknowledged as much to several people, explaining that he hasn’t publicly recognized the need for nuclear energy because he believes doing so would “split this movement in half.”

Implicit in these sorts of political calculations is the assumption that once advocates have amassed sufficient political power, the necessary concessions to the practical exigencies of deeply reducing carbon emissions will then become possible. But the army you raise ultimately shapes the sorts of battles you are able to wage, and it is not clear that the army of egalitarian millenarians that the climate movement is mobilizing will be willing to sign on to the necessary compromises — politically, economically, and technologically — that would be necessary to actually address the problem.

Again: read the whole thing!

Why not? It turns out that if you start talking about the specifics of one particular approach to fighting global warming, people instantly want to start talking about other approaches they consider better. This makes some sense: it’s a big problem and we need to compare different approaches. But it’s also a bit frustrating: we need to study different approaches individually so we can know enough to compare them, or make progress on any one approach.

I mainly want to study the nitty-gritty details of various individual approaches, starting with one approach to carbon scrubbing. But if I don’t say *anything* about the bigger picture, people will be unsatisfied.

So, right now I want to say a bit about carbon dioxide scrubbers.

The first thing to realize—and this applies to *all* approaches to battling global warming—is the huge scale of the task. In 2018 we put 37.1 gigatonnes of CO_{2} into the atmosphere by burning fossil fuels and making cement.

That’s a lot! Let’s compare some of the other biggest human industries, in terms of the sheer mass being processed.

Cement production is big. Global cement production in 2017 was about 4.1 gigatonnes, with China making more than the rest of the world combined, and a large uncertainty in how much they made. But digging up and burning carbon is even bigger. For example, over 7 gigatonnes of coal is being mined per year. I can’t find figures on total agricultural production, but in 2004 we created about 5 gigatonnes of agricultural waste. Total grain production was just 2.53 gigatonnes in 2017. Total plastic production in 2017 was a mere 348 megatonnes.

So, *to use technology to remove as much CO_{2} from the air as we’re currently putting in would require an industry that processes more mass than any other today*.

I conclude that this won’t happen anytime soon. Indeed David MacKay calls all methods of removing CO_{2} from air “the last thing we should talk about”. For now, he argues, we should focus on cutting carbon emissions. And I believe that to do *that* on a large enough scale requires economic incentives, for example a carbon tax.

But to keep global warming below 2°C over pre-industrial levels, it’s becoming increasingly likely that we’ll need negative carbon emissions:

Indeed, a lot of scenarios contemplated by policymakers involve net negative carbon emissions. Often they don’t realize just how hard these are to achieve! In his talk Mitigation on methadone: how negative emissions lock in our high-carbon addiction, Kevin Anderson has persuasively argued that policymakers are fooling themselves into thinking we can keep burning carbon as we like now and achieve the necessary negative emissions later. He’s not against negative carbon emissions. He’s against using vague fantasies of negative carbon emissions to put off confronting reality!

It is not well understood by policy makers, or indeed many academics, that IAMs [integrated assessment models] assume such a massive deployment of negative emission technologies. Yet when it comes to the more stringent Paris obligations, studies suggest that it is not possible to reach 1.5°C with a 50% chance without significant negative emissions. Even for 2°C, very few scenarios have explored mitigation without negative emissions, and contrary to common perception, negative emissions are also prevalent in higher stabilisation targets (Figure 2). Given such a pervasive and pivotal role of negative emissions in mitigation scenarios, their almost complete absence from climate policy discussions is disturbing and needs to be addressed urgently.

Read his whole article!

Pondering the difficulty of large-scale negative carbon emissions, but also their potential importance, I’m led to imagine scenarios like this:

In the 21st century we slowly wean ourselves off our addiction to burning carbon. By the end, we’re suffering a lot from global warming. It’s a real mess. But suppose our technological civilization survives, and we manage to develop a cheap source of clean energy. And once we switch to this, we don’t simply revert to our old bad habit of growing until we exhaust the available resources! We’ve learned our lesson—the hard way. We start trying to clean up the mess we made. Among other things, we start removing carbon dioxide from the atmosphere. We then spend a century—or two, or three—doing this. Thanks to various tipping points in the Earth’s climate system, we never get things back to the way they were. But we do, finally, make the Earth a beautiful place again.

If we’re aiming for some happy ending like this, *it may pay to explore various ways to achieve negative carbon emissions even if we can’t scale them up fast enough to stop a big mess in the 21st century*.

(Of course, I’m not suggesting this strategy as an alternative to cutting carbon emissions, or doing all sorts of other good things. We need a multi-pronged strategy, including some prongs that will only pay off in the long run, and only if we’re lucky.)

If we’re exploring various methods to achieve negative carbon emissions, a key aspect is figuring out economically viable pathways to scale up those methods. They’ll start small and they’ll inevitably be expensive at first. The ones that get big will get cheaper—per tonne of CO_{2} removed—as they grow.

This has various implications. For example, suppose someone builds a machine that sucks CO_{2} from the air and uses it to make carbonated soft drinks and to make plants grow better in greenhouses. As I mentioned, Climeworks is actually doing this!

In one sense, this is utterly pointless for fighting climate change, because these markets only use 6 megatonnes of CO_{2} annually—less than 0.02% of how much CO_{2} we’re dumping into the atmosphere!

But on the other hand, if this method of CO_{2} scrubbing can be scaled up and become cheaper and cheaper, it’s useful to start exploring the technology now. It could be the first step along some economically viable pathway.

I especially like the idea of CO_{2} scrubbing for coal-fired power plants. Of course to cut carbon emissions it would be better to *ban* coal-fired power plants. But this will take a while:

So, we can imagine an intermediate regime where regulations or a carbon tax make people sequester the CO_{2} from coal-fired power plants. And if this happens, there could be a big market for carbon dioxide scrubbers—for a while, at least.

I hope we can agree on at least one thing: the big picture is complicated. Next time I’ll zoom in and start talking about a specific technology for CO_{2} scrubbing.

I tried to push the conversation toward the calculations that actually underlie this argument. Then our conversation drifted into email and got more technical… and perhaps also more interesting, because it led us to contemplate the stability of the vacuum!

You see, if we screwed up royally on our fine-tuning and came up with a theory where the square of the Higgs mass was *negative*, the vacuum would be unstable. It would instantly decay into a vast explosion of Higgs bosons.

Another possibility, also weird, turns out to be slightly more plausible. This is that the square of the Higgs mass is positive—as it clearly is—and yet the vacuum is ‘metastable’. In this scenario, the vacuum we see around us might last a long time, and yet eventually it could decay through quantum tunnelling to the ‘true’ vacuum, with a lower energy density:

Little bubbles of true vacuum would form, randomly, and then grow very rapidly. This would be the end of life as we know it.

Scott agreed that other people might like to see our conversation. So here it is. I’ll fix a few mistakes, to make me seem smarter than I actually am.

I’ll start with some stuff on his blog.

If I said, “supersymmetry basically has to be there because it’s such a beautiful symmetry,” that would be an argument from beauty. But I didn’t say that, and I disagree with anyone who does say it. I made something weaker, what you might call an argument from the explanatory coherence of the world. It merely says that, when we find basic parameters of nature to be precariously balanced against each other, to one part in 10^{10} or whatever, there’s almost certainly some explanation. It doesn’t say the explanation will be beautiful, it doesn’t say it will be discoverable by an FCC or any other collider, and it doesn’t say it will have a form (like SUSY) that anyone has thought of yet.

Scott wrote:

It merely says that, when we find basic parameters of nature to be precariously balanced against each other, to one part in 10^{10} or whatever, there’s almost certainly some explanation.

Do you know examples of this sort of situation in particle physics, or is this just a hypothetical situation?

To answer a question with a question, do you disagree that that’s the current situation with (for example) the Higgs mass, not to mention the vacuum energy, if one considers everything that could naïvely contribute? A lot of people told me it was, but maybe they lied or I misunderstood them.

The basic rough story is this. We measure the Higgs mass. We can assume that the Standard Model is good up to some energy near the Planck energy, after which it fizzles out for some unspecified reason.

According to the Standard Model, each of its 25 fundamental constants is a “running coupling constant”. That is, it’s not really a constant, but a function of energy: roughly, the energy of the process we use to measure it. Let’s call these “coupling constants measured at energy E”. Each of these 25 functions is determined by the values of all 25 functions at any fixed energy E—e.g. energy zero, or the Planck energy. This is called the “renormalization group flow”.
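A toy example may help here. The sketch below (Python; the beta function is invented for illustration and has nothing to do with the real 25-coupling Standard Model system) shows the key feature of a renormalization group flow: fixing a coupling at one energy determines it at every other energy.

```python
# Toy renormalization group flow for a single coupling g(E):
#     dg/d(ln E) = beta(g),  with an invented beta function beta(g) = b*g**3.
# The point: fixing g at one energy determines it at every other energy.
import math

def run_coupling(g0, E0, E1, b=0.1, steps=10_000):
    """Integrate dg/d(ln E) = b*g**3 from energy E0 to E1 with Euler steps."""
    t, t_end = math.log(E0), math.log(E1)
    dt = (t_end - t) / steps
    g = g0
    for _ in range(steps):
        g += b * g**3 * dt
    return g

g_low = 0.5                                # coupling "measured" at E = 1
g_high = run_coupling(g_low, 1.0, 1.0e3)   # its value at E = 1000: it has grown
g_back = run_coupling(g_high, 1.0e3, 1.0)  # flowing back down recovers the input
print(g_high > g_low)                      # True
print(abs(g_back - g_low) < 1e-4)          # True: the flow is deterministic
```

The real flow couples all 25 functions together, but the principle is the same: the whole trajectory is pinned down by its value at one energy.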

So, the Higgs mass we measure is actually the Higgs mass at some energy E quite low compared to the Planck energy.

And, it turns out that to get this measured value of the Higgs mass, the values of some fundamental constants measured at energies near the Planck mass need to almost cancel out. More precisely, some complicated function of them needs to almost but not quite obey some equation.

People summarize the story this way: to get the observed Higgs mass we need to “fine-tune” the fundamental constants’ values as measured near the Planck energy, if we assume the Standard Model is valid up to energies near the Planck energy.

A lot of particle physicists accept this reasoning and additionally assume that fine-tuning the values of fundamental constants as measured near the Planck energy is “bad”. They conclude that it would be “bad” for the Standard Model to be valid up to the Planck energy.

(In the previous paragraph you can replace “bad” with some other word—for example, “implausible”.)

Indeed you can use a refined version of the argument I’m sketching here to say “either the fundamental constants measured at energy E need to obey an identity up to precision ε or the Standard Model must break down before we reach energy E”, where ε gets smaller as E gets bigger.

Then, in theory, you can pick an ε and say “an ε smaller than that would make me very nervous.” Then you can conclude that “if the Standard Model is valid up to energy E, that will make me very nervous”.

(But I honestly don’t know anyone who has approximately computed ε as a function of E. Often people seem content to hand-wave.)
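For what it’s worth, here is a back-of-the-envelope caricature of the fine-tuning argument (Python; the quadratic-divergence formula is a standard cartoon, not an actual Standard Model computation, and the high scale is a toy value):

```python
# Caricature of the fine-tuning argument: suppose the observed (mass)^2 is the
# difference of two quantities of order Lambda^2,
#     m_low^2 = m_high^2 - correction.
# Reproducing a small m_low then requires the big terms to cancel to a relative
# precision of about epsilon = (m_low / Lambda)^2.
m_low = 125.0      # observed Higgs mass in GeV
Lambda = 1.0e6     # a toy "high" energy scale in GeV (the Planck scale is far worse)

epsilon = (m_low / Lambda) ** 2
print(epsilon)     # 1.5625e-08: the required relative cancellation

# Mistune the high-energy input by one part in 10^6 and the low-energy
# "prediction" comes out wildly wrong:
m_high_sq = Lambda**2 + m_low**2       # exactly tuned high-energy input
correction = Lambda**2                 # the piece it must almost cancel
m_low_wrong = (m_high_sq * (1 + 1e-6) - correction) ** 0.5
print(m_low_wrong / m_low)             # roughly 8: an enormous error in the Higgs mass
```

With Lambda pushed up to the Planck energy, the required cancellation tightens to one part in 10^{34} or so, which is the kind of precariousness Scott is asking about.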

People like to argue about how small an ε should make us nervous, or even whether any value of ε should make us nervous.

But another assumption behind this whole line of reasoning is that the values of fundamental constants as measured at some energy near the Planck energy are “more important” than their values as measured near energy zero, so we should take near-cancellations of these high-energy values seriously—more seriously, I suppose, than near-cancellations at low energies.

Most particle physicists will defend this idea quite passionately. The philosophy seems to be that God designed high-energy physics and left his grad students to work out its consequences at low energies—so if you want to understand physics, you need to focus on high energies.

Scott wrote in email:

Do I remember correctly that it’s actually the square of the Higgs mass (or its value when probed at high energy?) that’s the sum of all these positive and negative high-energy contributions?

John wrote:

Sorry to take a while. I was trying to figure out if that’s a reasonable way to think of things. It’s true that the Higgs mass squared, not the Higgs mass, is what shows up in the Standard Model Lagrangian. This is how scalar fields work.

But I wouldn’t talk about a “sum of positive and negative high-energy contributions”. I’d rather think of all the coupling constants in the Standard Model—all 25 of them—as obeying a coupled differential equation that says how they change as we change the energy scale. So, we’ve got a vector field on ℝ^{25} that says how these coupling constants “flow” as we change the energy scale.

Here’s an equation from a paper that looks at a simplified model:

Here $m_H$ is the Higgs mass, $m_t$ is the mass of the top quark, and both are being treated as functions of a momentum $p$ (essentially the energy scale we’ve been talking about); $c$ is just a number. You’ll note this equation simplifies if we work with the Higgs mass squared, since $\frac{d}{dp} m_H^2 = 2 m_H \frac{d m_H}{dp}$.

This is one of a bunch of equations—in principle 25—that say how all the coupling constants change. So, they all affect each other in a complicated way as we change the energy scale.
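In miniature, that kind of coupled flow looks like this—a made-up two-coupling system, not the real Standard Model equations:

```python
# Toy renormalization-group flow: two invented couplings g1, g2 whose
# rates of change depend on each other, integrated in t = ln(energy).
# A qualitative sketch only, not the actual Standard Model equations.

def beta(g1, g2):
    # invented "beta functions" that couple the two constants together
    return 0.01 * g1 * g2, -0.02 * g1**2

def flow(g1, g2, t0, t1, steps=10000):
    """Euler-integrate the couplings from log-scale t0 to t1."""
    dt = (t1 - t0) / steps
    for _ in range(steps):
        b1, b2 = beta(g1, g2)
        g1 += b1 * dt
        g2 += b2 * dt
    return g1, g2

# flow "up" from low energy to high energy, then back down again:
hi = flow(0.5, 1.0, 0.0, 10.0)
lo = flow(hi[0], hi[1], 10.0, 0.0)   # reversing the flow recovers the start
print(hi, lo)
```

Running the flow up and then back down returns (approximately) the starting values, which is the sense in which high-energy and low-energy parameters determine each other.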

By the way, there’s a lot of discussion of whether the Higgs mass squared goes negative at high energies in the Standard Model. Some calculations suggest it does; other people argue otherwise. If it does, this would generally be considered an inconsistency in the whole setup: particles with negative mass squared are tachyons!

I think one could make a lot of progress on these theoretical issues involving the Standard Model if people took them nearly as seriously as string theory or new colliders.

Scott wrote:

So OK, I was misled by the other things I read, and it’s more complicated than being a sum of mostly-canceling contributions (I was pretty sure a squared mass couldn’t be such a sum, since then a slight change to parameters could make it negative).

Rather, there’s a coupled system of 25 nonlinear differential equations, which one could imagine initializing with the “true” high-energy SM parameters, and then solving to find the measured low-energy values. And these coupled differential equations have the property that, if we slightly changed the high-energy input parameters, that could generically have a wild effect on the low-energy outputs, pushing them up to the Planck scale or whatever.

Philosophically, I suppose this doesn’t much change things compared to the naive picture: the question is still, how OK are you with high-energy parameters that need to be precariously tuned to reproduce the low-energy SM, and does that state of affairs demand a new principle to explain it? But it does lead to a different intuition: namely, isn’t this sort of chaos just the generic behavior for nonlinear differential equations? If we fix a solution to such equations at a time $t_0$, our solution will almost always appear “finely tuned” at a faraway time $t_1$—tuned to reproduce precisely the behavior at $t_0$ that we fixed previously! Why shouldn’t we imagine that God fixed the values of the SM constants for the low-energy world, and they are what they are at high energies simply because that’s what you get when you RG-flow to there?

I confess I’d never heard the speculation that the Higgs mass squared could go negative at sufficiently high energies (!!). If I haven’t yet abused your generosity enough: what sort of energies are we talking about, compared to the Planck energy?
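Scott’s “finely tuned in retrospect” point shows up already in a one-variable toy model—an invented equation, unrelated to the real flow. The equation dg/dt = g − g³ has an attractor at g = 1, so running it backward from two values that agree to four decimal places recovers earlier values differing by a factor of roughly 3:

```python
# Toy model of "finely tuned in retrospect": dg/dt = g - g**3 has an
# attracting fixed point at g = 1, so very different early values end
# up almost equal later.  Running *backward* from two nearly identical
# values at t = 0 recovers early values that differ substantially.
# (Invented equation, purely illustrative.)

def flow(g, t0, t1, steps=200000):
    """Euler-integrate dg/dt = g - g**3 from t0 to t1."""
    dt = (t1 - t0) / steps
    for _ in range(steps):
        g += (g - g**3) * dt
    return g

a = flow(0.99999, 0.0, -15.0)   # backward in t from nearly equal values
b = flow(0.99990, 0.0, -15.0)
print(a, b, a / b)
```

Seen forward, the tiny early values look exquisitely tuned to land near 1 at t = 0; seen backward, they’re just whatever the flow produces.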

John wrote:

Scott wrote:

Rather, there’s a coupled system of 25 nonlinear differential equations, which one could imagine initializing with the “true” high-energy SM parameters, and then solving to find the measured low-energy values. And these coupled differential equations have the property that, if we slightly changed the high-energy input parameters, that could generically have a wild effect on the low-energy outputs, pushing them up to the Planck scale or whatever.

Right.

Philosophically, I suppose this doesn’t much change things compared to the naive picture: the question is still, how OK are you with high-energy parameters that need to be precariously tuned to reproduce the low-energy SM, and does that state of affairs demand a new principle to explain it? But it does lead to a different intuition: namely, isn’t this sort of chaos just the generic behavior for nonlinear differential equations?

Yes it is, generically.

Physicists are especially interested in theories that have “ultraviolet fixed points”—by which they usually mean values of the parameters that are fixed under the renormalization group flow and attractive as we keep increasing the energy scale. The idea is that these theories seem likely to make sense at arbitrarily high energy scales. For example, pure Yang–Mills fields are believed to be “asymptotically free”—the coupling constant measuring the strength of the force goes to zero as the energy scale gets higher.

But attractive ultraviolet fixed points are going to be repulsive as we reverse the direction of the flow and see what happens as we lower the energy scale. So what gives? Are all ultraviolet fixed points giving theories that require “fine-tuning” to get the parameters we observe at low energies? Is this bad?
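The attraction-becomes-repulsion point has a one-line caricature—an invented beta function, nothing Standard-Model-specific:

```python
import math

# Toy beta function with an attractive UV fixed point: dg/dt = -K*(g - G_STAR),
# with t = ln(energy).  Flowing up in energy pulls any coupling toward
# G_STAR; reversing the flow pushes it away.  (Invented equation.)

G_STAR, K = 0.3, 1.0

def run(g, t):
    # exact solution of dg/dt = -K*(g - G_STAR) after flow "time" t
    return G_STAR + (g - G_STAR) * math.exp(-K * t)

up_a = run(0.9, 20.0)             # two very different starting couplings...
up_b = run(0.1, 20.0)             # ...both land essentially on the fixed point
down = run(G_STAR + 1e-9, -20.0)  # start a hair off G_STAR and flow downward
print(up_a, up_b, down)
```

Flowing up, everything forgets where it started; flowing down, a difference of a billionth grows to order one—the same exponential, seen from the other end.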

Well, they’re not all the same. For theories considered nice, the parameters change logarithmically as we change the energy scale. This is considered to be a mild change. The Standard Model with Higgs may not have an ultraviolet fixed point, but people usually worry about something else: the Higgs mass changes quadratically with the energy scale. This is related to the square of the Higgs mass being the really important parameter… if we used that, I’d say linearly.

I think there’s a lot of mythology and intuitive reasoning surrounding this whole subject—probably the real experts could say a lot about it, but they are few, and a lot of people just repeat what they’ve been told, rather uncritically.
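The contrast between logarithmic and quadratic scale-dependence can be sketched numerically. The one-loop asymptotic-freedom formula below is standard in form, but all the coefficients are invented:

```python
import math

# "Mild" logarithmic running vs "violent" quadratic scale-dependence.
# The one-loop asymptotically free form g2(mu) = g2_0/(1 + b*g2_0*ln(mu/mu0))
# is standard; the numbers 0.5 and 0.1, and the quadratic toy, are invented.

def log_running(g2_0, b, mu, mu0=1.0):
    # coupling-squared of an asymptotically free theory (one loop)
    return g2_0 / (1.0 + b * g2_0 * math.log(mu / mu0))

def quad_running(m2_0, c, mu, mu0=1.0):
    # toy quantity whose correction grows like the energy scale squared
    return m2_0 + c * (mu**2 - mu0**2)

for mu in (10.0, 1e4, 1e8):
    print(mu, log_running(0.5, 0.1, mu), quad_running(0.5, 0.1, mu))
```

Over eight orders of magnitude in scale, the logarithmic quantity barely halves while the quadratic one explodes—which is why quadratic sensitivity is what people worry about.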

If we fix a solution to such equations at a time $t_0$, our solution will almost always appear “finely tuned” at a faraway time $t_1$—tuned to reproduce precisely the behavior at $t_0$ that we fixed previously! Why shouldn’t we imagine that God fixed the values of the SM constants for the low-energy world, and they are what they are at high energies simply because that’s what you get when you RG-flow to there?

This is something I can imagine Sabine Hossenfelder saying.

I confess I’d never heard the speculation that the Higgs mass squared could go negative at sufficiently high energies (!!). If I haven’t yet abused your generosity enough: what sort of energies are we talking about, compared to the Planck energy?

The experts are still arguing about this; I don’t really know. To show how weird all this stuff is, there’s a review article from 2013 called “The top quark and Higgs boson masses and the stability of the electroweak vacuum”, which doesn’t look crackpotty to me, that argues that the vacuum state of the universe is stable if the Higgs mass and the top quark mass lie in the green region, but only metastable otherwise:

The big ellipse is where the parameters were expected to lie in 2012 when the paper was first written. The smaller ellipses only indicate the size of the uncertainty expected after later colliders made more progress. You shouldn’t take them too seriously: they could be centered in the stable region or the metastable region.

An appendix gives an update, which looks like this:

The paper says:

one sees that the central value of the top mass lies almost exactly on the boundary between vacuum stability and metastability. The uncertainty on the top quark mass is nevertheless presently too large to clearly discriminate between these two possibilities.

Then John wrote:

By the way, another paper analyzing problems with the Standard Model says:

It has been shown that higher dimension operators may change the lifetime of the metastable vacuum, $\tau$, from vastly longer than the age of the Universe to far shorter than it.

In other words, the calculations are not very reliable yet.

And then John wrote:

Sorry to keep spamming you, but since some of my last few comments didn’t make much sense, even to me, I did some more reading. It seems the best current conventional wisdom is this:

Assuming the Standard Model is valid up to the Planck energy, you can tune parameters near the Planck energy to get the observed parameters down here at low energies. So of course the Higgs mass down here is positive.

But, due to higher-order effects, the potential for the Higgs field no longer looks like the classic “Mexican hat” described by a polynomial of degree 4, with the observed Higgs field sitting at one of the global minima.

Instead, it’s described by a more complicated function, like a polynomial of degree 6 or more. And this means that the minimum where the Higgs field is sitting may only be a local minimum:

In the left-hand scenario we’re at a global minimum and everything is fine. In the right-hand scenario we’re not and the vacuum we see is only metastable. The Higgs mass is still positive: that’s essentially the curvature of the potential near our local minimum. But the universe will eventually tunnel through the potential barrier and we’ll all die.
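The two scenarios can be mimicked with simple polynomials. The coefficients below are invented purely for illustration—and in this toy the metastable minimum sits at the origin rather than at a nonzero field value—but the local-versus-global structure, with a barrier between the two minima, is the same:

```python
# Two illustrative potentials (coefficients invented, not derived from
# the Standard Model): a degree-4 "Mexican hat" whose nonzero minimum is
# the global one, and a degree-6 potential with two minima separated by
# a barrier, where the shallower minimum is only metastable.

def mexican_hat(phi):
    return -phi**2 + 0.25 * phi**4          # global minima at phi = ±sqrt(2)

def degree_six(phi):
    # local minimum at phi = 0, deeper "true vacuum" near phi ≈ 2.02
    return phi**2 - 1.1 * phi**4 + 0.16 * phi**6

def argmin_on_grid(V, lo=0.0, hi=3.0, n=30001):
    """Crude grid search for the minimum of V on [lo, hi]."""
    pts = (lo + (hi - lo) * i / (n - 1) for i in range(n))
    return min(pts, key=V)

phi4 = argmin_on_grid(mexican_hat)
phi6 = argmin_on_grid(degree_six)
print(phi4, mexican_hat(phi4))   # ≈ 1.414, -1.0
print(phi6, degree_six(phi6))    # the deeper minimum, well below the one at 0
```

The curvature of the potential at whichever minimum the field occupies is (roughly) the mass squared, which is why a metastable vacuum can still have a perfectly positive Higgs mass.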

Yes, that seems to be the conventional wisdom! Obviously they’re keeping it hush-hush to prevent panic.

This paper has tons of relevant references:

• Tommi Markkanen, Arttu Rajantie, Stephen Stopyra, Cosmological aspects of Higgs vacuum metastability.

Abstract. The current central experimental values of the parameters of the Standard Model give rise to a striking conclusion: metastability of the electroweak vacuum is favoured over absolute stability. A metastable vacuum for the Higgs boson implies that it is possible, and in fact inevitable, that a vacuum decay takes place with catastrophic consequences for the Universe. The metastability of the Higgs vacuum is especially significant for cosmology, because there are many mechanisms that could have triggered the decay of the electroweak vacuum in the early Universe. We present a comprehensive review of the implications from Higgs vacuum metastability for cosmology along with a pedagogical discussion of the related theoretical topics, including renormalization group improvement, quantum field theory in curved spacetime and vacuum decay in field theory.

Scott wrote:

Once again, thank you so much! This is enlightening.

If you’d like other people to benefit from it, I’m totally up for you making it into a post on Azimuth, quoting from my emails as much or as little as you want. Or you could post it on that comment thread on my blog (which is still open), or I’d be willing to make it into a guest post (though that might need to wait till next week).

I guess my one other question is: what happens to this RG flow when you go to the infrared extreme? Is it believed, or known, that the “low-energy” values of the 25 Standard Model parameters are simply fixed points in the IR? Or could any of them run to strange values there as well?

I don’t really know the answer to that, so I’ll stop here.

But in case you’re worrying now that it’s “possible, and in fact inevitable, that a vacuum decay takes place with catastrophic consequences for the Universe”, *relax!* These calculations are very hard to do correctly. All existing work uses a lot of approximations that I don’t completely trust. Furthermore, they are assuming that the Standard Model is valid up to very high energies without any corrections due to new, yet-unseen particles!

So, while I think it’s a great challenge to get these calculations right, and to measure the Standard Model parameters accurately enough to do them right, I am not very worried about the Universe being taken over by a rapidly expanding bubble of ‘true vacuum’.
