## Complex Adaptive System Design (Part 9)

24 March, 2019

Here’s our latest paper for the Complex Adaptive System Composition and Design Environment project:

• John Baez, John Foley and Joe Moeller, Network models from Petri nets with catalysts.

Check it out! And please report typos, mistakes, or anything you have trouble understanding! I’m happy to answer questions here.

### The idea

Petri nets are a widely studied formalism for describing collections of entities of different types, and how they turn into other entities. I’ve written a lot about them here. Network models are a formalism for designing and tasking networks of agents, which our team invented for this project. Here we combine these ideas! This is worthwhile because while both formalisms involve networks, they serve different functions, and are in some sense complementary.

A Petri net can be drawn as a bipartite directed graph with vertices of two kinds: places, drawn as circles, and transitions drawn as squares:

When we run a Petri net, we start by placing a finite number of dots called tokens in each place:

This is called a marking. Then we repeatedly change the marking using the transitions. For example, the above marking can change to this:

and then this:

Thus, the places represent different types of entity, and the transitions are ways that one collection of entities of specified types can turn into another such collection.
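The token game described above is easy to sketch in code. Here is a minimal Python sketch, assuming markings are multisets of tokens and a transition is a pair of multisets (its inputs and outputs); the place and transition names are invented for illustration:

```python
from collections import Counter

def fire(marking, transition):
    """Fire a transition if it is enabled: remove its input tokens,
    add its output tokens.

    A marking is a Counter mapping place names to token counts;
    a transition is a pair (inputs, outputs) of Counters.
    Returns the new marking, or None if the transition is not enabled.
    """
    inputs, outputs = transition
    if any(marking[p] < n for p, n in inputs.items()):
        return None  # not enough input tokens: transition not enabled
    new = marking.copy()
    new.subtract(inputs)   # consume the inputs
    new.update(outputs)    # produce the outputs
    return new

# A hypothetical transition turning one 'a' and one 'b' into two 'c':
t = (Counter({'a': 1, 'b': 1}), Counter({'c': 2}))
m = Counter({'a': 2, 'b': 1})
print(fire(m, t))  # Counter({'c': 2, 'a': 1, 'b': 0})
```

Running a Petri net just means repeatedly picking an enabled transition and firing it.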

Network models serve a different function than Petri nets: they are a general tool for working with networks of many kinds. Mathematically a network model is a lax symmetric monoidal functor $G \colon \mathsf{S}(C) \to \mathsf{Cat},$ where $\mathsf{S}(C)$ is the free strict symmetric monoidal category on a set $C.$ Elements of $C$ represent different kinds of ‘agents’. Unlike in a Petri net, we do not usually consider processes where these agents turn into other agents. Instead, we wish to study everything that can be done with a fixed collection of agents. Any object $x \in \mathsf{S}(C)$ is of the form $c_1 \otimes \cdots \otimes c_n$ for some $c_i \in C;$ thus, it describes a collection of agents of various kinds. The functor $G$ maps this object to a category $G(x)$ that describes everything that can be done with this collection of agents.

In many examples considered so far, $G(x)$ is a category whose morphisms are graphs of some sort whose nodes are agents of types $c_1, \dots, c_n.$ Composing these morphisms corresponds to ‘overlaying’ graphs. Network models of this sort let us design networks where the nodes are agents and the edges are communication channels or shared commitments. In our first paper the operation of overlaying graphs was always commutative:

• John Baez, John Foley, Joe Moeller and Blake Pollard, Network models.

Subsequently Joe introduced a more general noncommutative overlay operation:

• Joe Moeller, Noncommutative network models.

This lets us design networks where each agent has a limit on how many communication channels or commitments it can handle; the noncommutativity lets us take a ‘first come, first served’ approach to resolving conflicting commitments.

Here we take a different tack: we instead take $G(x)$ to be a category whose morphisms are processes that the given collection of agents, $x,$ can carry out. Composition of morphisms corresponds to carrying out first one process and then another.

This idea meshes well with Petri net theory, because any Petri net $P$ determines a symmetric monoidal category $FP$ whose morphisms are processes that can be carried out using this Petri net. More precisely, the objects in $FP$ are markings of $P,$ and the morphisms are sequences of ways to change these markings using transitions, e.g.:

Given a Petri net, then, how do we construct a network model $G \colon \mathsf{S}(C) \to \mathsf{Cat},$ and in particular, what is the set $C$? In a network model the elements of $C$ represent different kinds of agents. In the simplest scenario, these agents persist in time. Thus, it is natural to take $C$ to be some set of ‘catalysts’. In chemistry, a reaction may require a catalyst to proceed, but it neither increases nor decreases the amount of this catalyst present. In everyday life, a door serves as a catalyst: it lets you walk through a wall, and it doesn’t get used up in the process!

For a Petri net, ‘catalysts’ are species that are neither increased nor decreased in number by any transition. For example, in the following Petri net, species $a$ is a catalyst:

but neither $b$ nor $c$ is a catalyst. The transition $\tau_1$ requires one token of type $a$ as input to proceed, but it also outputs one token of this type, so the total number of such tokens is unchanged. Similarly, the transition $\tau_2$ requires no tokens of type $a$ as input to proceed, and it also outputs no tokens of this type, so the total number of such tokens is unchanged.
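The catalyst condition is easy to check mechanically. A minimal Python sketch, using the same multiset representation of transitions as before; since the figure isn’t reproduced here, the two transitions below are invented to be consistent with the description of $\tau_1$ and $\tau_2$:

```python
def is_catalyst(species, transitions):
    """A species is a catalyst if every transition consumes exactly
    as many tokens of it as it produces.

    Each transition is a pair (inputs, outputs) of dicts mapping
    species names to token counts.
    """
    return all(ins.get(species, 0) == outs.get(species, 0)
               for ins, outs in transitions)

# Hypothetical transitions matching the text: tau_1 uses one token
# of a as input and also outputs one; tau_2 does not involve a at all.
tau_1 = ({'a': 1, 'b': 1}, {'a': 1, 'c': 1})
tau_2 = ({'c': 1}, {'b': 1})
transitions = [tau_1, tau_2]

print(is_catalyst('a', transitions))  # True
print(is_catalyst('b', transitions))  # False
```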

In Theorem 11 of our paper, we prove that given any Petri net $P,$ and any subset $C$ of the catalysts of $P,$ there is a network model

$G \colon \mathsf{S}(C) \to \mathsf{Cat}$

An object $x \in \mathsf{S}(C)$ says how many tokens of each catalyst are present; $G(x)$ is then the subcategory of $FP$ where the objects are markings that have this specified amount of each catalyst, and morphisms are processes going between these.

From the functor $G \colon \mathsf{S}(C) \to \mathsf{Cat}$ we can construct a category $\int G$ by ‘gluing together’ all the categories $G(x)$ using the Grothendieck construction. Because $G$ is symmetric monoidal we can use an enhanced version of this construction to make $\int G$ into a symmetric monoidal category. We already did this in our first paper on network models, but by now the math has been better worked out here:

• Joe Moeller and Christina Vasilakopoulou, Monoidal Grothendieck construction.

The tensor product in $\int G$ describes doing processes ‘in parallel’. The category $\int G$ is similar to $FP,$ but it is better suited to applications where agents each have their own ‘individuality’, because $FP$ is actually a commutative monoidal category, where permuting agents has no effect at all, while $\int G$ is not so degenerate. In Theorem 12 of our paper we make this precise by more concretely describing $\int G$ as a symmetric monoidal category, and clarifying its relation to $FP.$

There are no morphisms between an object of $G(x)$ and an object of $G(x')$ when $x \not\cong x',$ since no transitions can change the amount of catalysts present. The category $FP$ is thus a ‘disjoint union’, or more technically a coproduct, of subcategories $FP_i$ where $i,$ an element of the free commutative monoid on $C,$ specifies the amount of each catalyst present.

The tensor product on $FP$ has the property that tensoring an object in $FP_i$ with one in $FP_j$ gives an object in $FP_{i+j},$ and similarly for morphisms. However, in Theorem 14 we show that each subcategory $FP_i$ also has its own tensor product, which describes doing one process after another while reusing catalysts.

This tensor product is a very cool thing. On the one hand it’s quite obvious: for example, if two people want to walk through a door, they can both do it, one at a time, because the door doesn’t get used up when someone walks through it. On the other hand, it’s mathematically interesting: it turns out to give a lot of examples of monoidal categories that can’t be made symmetric or even braided, even though the tensor product of objects is commutative! The proof boils down to this:

Here $i$ represents the catalysts, and $f$ and $f'$ are two processes which we can carry out using these catalysts. We can do either one first, but we get different morphisms as a result.
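The order-dependence can be illustrated with a deliberately naive Python sketch. Assume a morphism in $FP_i$ is represented just by the sequence of firings it consists of (the event names are invented, echoing the jeep examples from the paper):

```python
# Two processes (morphisms in FP_i) that each reuse the same
# catalyst -- here, a single jeep.
f       = ['jeep carries Alice to the shore']
f_prime = ['jeep carries Bob to the shore']

def tensor(p, q):
    """The tensor product on FP_i: do process p, then process q,
    reusing the catalysts.  On sequences of events this is just
    concatenation -- which is visibly order-dependent."""
    return p + q

print(tensor(f, f_prime))
print(tensor(f_prime, f))  # a different morphism!
```

Both composites use the same catalysts and reach the same final marking, but they are different morphisms, which is the germ of the proof that this monoidal structure cannot be made symmetric or braided.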

The paper has lots of pictures like this—many involving jeeps and boats, which serve as catalysts to carry people first from a base to the shore and then from the shore to an island. I think these make it clear that the underlying ideas are quite commonsensical. But they need to be formalized to program them into a computer—and it’s nice that doing this brings in some classic themes in category theory!

Some posts in this series:

Part 2. Metron’s software for system design.

Part 3. Operads: the basic idea.

Part 4. Network operads: an easy example.

Part 5. Algebras of network operads: some easy examples.

Part 6. Network models.

Part 7. Step-by-step compositional design and tasking using commitment networks.

Part 8. Compositional tasking using category-valued network models.

Part 9. Network models from Petri nets with catalysts.

## Algebraic Geometry

15 March, 2019

A more polished version of this article appeared on Nautilus on 28 February, 2019. This version has some more material.

### How I Learned to Stop Worrying and Love Algebraic Geometry

In my 50s, too old to become a real expert, I have finally fallen in love with algebraic geometry. As the name suggests, this is the study of geometry using algebra. Around 1637, Pierre Fermat and René Descartes laid the groundwork for this subject by taking a plane, mentally drawing a grid on it as we now do with graph paper, and calling the coordinates $x$ and $y$. We can then write down an equation like $x^2 + y^2 = 1$, and there will be a curve consisting of points whose coordinates obey this equation. In this example, we get a circle!

It was a revolutionary idea at the time, because it lets us systematically convert questions about geometry into questions about equations, which we can solve if we’re good enough at algebra. Some mathematicians spend their whole lives on this majestic subject. But I never really liked it much—until recently. Now I’ve connected it to my interest in quantum physics.

We can describe many interesting curves with just polynomials. For example, roll a circle inside a circle three times as big. You get a curve with three sharp corners called a “deltoid”, shown in red above. It’s not obvious that you can describe this using a polynomial equation, but you can. The great mathematician Leonhard Euler dreamt this up in 1745.
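It really is algebraic: the deltoid has the standard parametrization $x(t) = 2\cos t + \cos 2t,$ $y(t) = 2\sin t - \sin 2t,$ and every such point satisfies a single degree-4 polynomial equation. A short Python check, using one standard form of that equation (worth double-checking against your favorite reference):

```python
import math

def deltoid(t):
    """Point on the deltoid traced by rolling a unit circle inside
    a circle of radius 3."""
    return 2 * math.cos(t) + math.cos(2 * t), 2 * math.sin(t) - math.sin(2 * t)

def residual(x, y):
    """Left side minus right side of the polynomial equation
    (x^2 + y^2)^2 + 18(x^2 + y^2) - 27 = 8(x^3 - 3xy^2);
    this should vanish on the curve."""
    r2 = x * x + y * y
    return (r2 * r2 + 18 * r2 - 27) - 8 * (x ** 3 - 3 * x * y * y)

# The residual is zero (up to rounding) all along the parametrized curve:
print(max(abs(residual(*deltoid(2 * math.pi * k / 100))) for k in range(100)))
```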

As a kid I liked physics better than math. My uncle Albert Baez, father of the famous folk singer Joan Baez, worked for UNESCO, helping developing countries with physics education. My parents lived in Washington D.C.. Whenever my uncle came to town, he’d open his suitcase, pull out things like magnets or holograms, and use them to explain physics to me. This was fascinating. When I was eight, he gave me a copy of the college physics textbook he wrote. While I couldn’t understand it, I knew right away that I wanted to. I decided to become a physicist.

My parents were a bit worried, because they knew physicists needed mathematics, and I didn’t seem very good at that. I found long division insufferably boring, and refused to do my math homework, with its endless repetitive drills. But later, when I realized that by fiddling around with equations I could learn about the universe, I was hooked. The mysterious symbols seemed like magic spells. And in a way, they are. Science is the magic that actually works.

And so I learned to love math, but in a certain special way: as the key to physics. In college I wound up majoring in math, in part because I was no good at experiments. I learned quantum mechanics and general relativity, studying the necessary branches of math as I went. I was fascinated by Eugene Wigner’s question about the “unreasonable effectiveness” of mathematics in describing the universe. As he put it, “The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve.”

Despite Wigner’s quasi-religious language, I didn’t think that God was an explanation. As far as I can tell, that hypothesis raises more questions than it answers. I studied mathematical logic and tried to prove that any universe containing a being like us, able to understand the laws of that universe, must have some special properties. I failed utterly, though I managed to get my first publishable paper out of the wreckage. I decided that there must be some deep mystery here, that we might someday understand, but only after we understood what the laws of physics actually are: not the pretty good approximate laws we know now, but the actual correct laws.

As a youthful optimist I felt sure such laws must exist, and that we could know them. And then, surely, these laws would give a clue to the deeper puzzle: why the universe is governed by mathematical laws in the first place.

So I went to graduate school—to a math department, but motivated by physics. I already knew that there was too much mathematics to ever learn it all, so I tried to focus on what mattered to me. And one thing that did not matter to me, I believed, was algebraic geometry.

How could any mathematician not fall in love with algebraic geometry? Here’s why. In its classic form, this subject considers only polynomial equations—equations that describe not just curves, but also higher-dimensional shapes called “varieties.” So $x^2 + y^2 = 1$ is fine, and so is $x^{47} - 2xyz = y^7$, but an equation with sines or cosines, or other functions, is out of bounds—unless we can figure out how to convert it into an equation with just polynomials. As a graduate student, this seemed like a terrible limitation. After all, physics problems involve plenty of functions that aren’t polynomials.

This is Cayley’s nodal cubic surface. It’s famous because it is the variety with the most nodes (those pointy things) that is described by a cubic equation. The equation is $(xy + yz + zx)(1 - x - y - z) + xyz = 0$, and it’s called “cubic” because we’re multiplying at most three variables at once.

Why does algebraic geometry restrict itself to polynomials? Mathematicians study curves described by all sorts of equations – but sines, cosines and other fancy functions are only a distraction from the fundamental mysteries of the relation between geometry and algebra. Thus, by restricting the breadth of their investigations, algebraic geometers can dig deeper. They’ve been working away for centuries, and by now their mastery of polynomial equations is truly staggering. Algebraic geometry has become a powerful tool in number theory, cryptography and other subjects.

I once met a grad student at Harvard, and I asked him what he was studying. He said one word, in a portentous tone: “Hartshorne.” He meant Robin Hartshorne’s textbook Algebraic Geometry, published in 1977. Supposedly an introduction to the subject, it’s actually a very hard-hitting tome. Consider Wikipedia’s description:

The first chapter, titled “Varieties,” deals with the classical algebraic geometry of varieties over algebraically closed fields. This chapter uses many classical results in commutative algebra, including Hilbert’s Nullstellensatz, with the books by Atiyah–Macdonald, Matsumura, and Zariski–Samuel as usual references.

If you can’t make heads or tails of this… well, that’s exactly my point. To penetrate even the first chapter of Hartshorne, you need quite a bit of background. To read Hartshorne is to try to catch up with centuries of geniuses running as fast as they could.

One of these geniuses was Hartshorne’s thesis advisor, Alexander Grothendieck. From about 1960 to 1970, Grothendieck revolutionized algebraic geometry as part of an epic quest to prove some conjectures about number theory, the Weil Conjectures. He had the idea that these could be translated into questions about geometry and settled that way. But making this idea precise required a huge amount of work. To carry it out, he started a seminar. He gave talks almost every day, and enlisted the help of some of the best mathematicians in Paris.

Alexander Grothendieck at his seminar in Paris

Working nonstop for a decade, they produced tens of thousands of pages of new mathematics, packed with mind-blowing concepts. In the end, using these ideas, Grothendieck succeeded in proving all the Weil Conjectures except the final, most challenging one—a close relative of the famous Riemann Hypothesis, for which a million dollar prize still waits.

Towards the end of this period, Grothendieck also became increasingly involved in radical politics and environmentalism. In 1970, when he learned that his research institute was partly funded by the military, he resigned. He left Paris and moved to teach in the south of France. Two years later a student of his proved the last of the Weil Conjectures—but in a way that Grothendieck disliked, because it used a “trick” rather than following the grand plan he had in mind. He was probably also jealous that someone else reached the summit before him. As time went by, Grothendieck became increasingly embittered with academia. And in 1991, he disappeared!

We now know that he moved to the Pyrenees, where he lived until his death in 2014. He seems to have largely lost interest in mathematics and turned his attention to spiritual matters. Some reports make him seem quite unhinged. It is hard to say. At least 20,000 pages of his writings remain unpublished.

During his most productive years, even though he dominated the French school of algebraic geometry, many mathematicians considered Grothendieck’s ideas “too abstract.” This sounds a bit strange, given how abstract all mathematics is. What’s inarguably true is that it takes time and work to absorb his ideas. As a grad student I steered clear of them, since I was busy struggling to learn physics. There, too, centuries of geniuses have been working full-speed, and anyone wanting to reach the cutting edge has a lot of catching up to do. But, later in my career, my research led me to Grothendieck’s work.

If I had taken a different path, I might have come to grips with his work through string theory. String theorists postulate that besides the visible dimensions of space and time—three of space and one of time—there are extra dimensions of space curled up too small to see. In some of their theories these extra dimensions form a variety. So, string theorists easily get pulled into sophisticated questions about algebraic geometry. And this, in turn, pulls them toward Grothendieck.

A slice of one particular variety, called a “quintic threefold,” that can be used to describe the extra curled-up dimensions of space in string theory.

Indeed, some of the best advertisements for string theory are not successful predictions of experimental results—it’s made absolutely none of these—but rather, its ability to solve problems within pure mathematics, including algebraic geometry. For example, suppose you have a typical quintic threefold: a 3-dimensional variety described by a polynomial equation of degree 5. How many curves can you draw on it that are described by polynomials of degree 4? I’m sure this question has occurred to you. So, you’ll be pleased to know that the answer is exactly 317,206,375.

This sort of puzzle is quite hard, but string theorists have figured out a systematic way to solve many puzzles of this sort, including much harder ones. Thus, we now see string theorists talking with algebraic geometers, each able to surprise the other with their insights.

My own interest in Grothendieck’s work had a different source. I’ve always had serious doubts about string theory, and counting curves on varieties is the last thing I’d ever try. Like rock climbing, it’s exciting to watch but too scary to actually attempt myself. But it turns out that Grothendieck’s ideas are so general and powerful that they spill out beyond algebraic geometry into many other subjects. In particular, his 600-page unpublished manuscript Pursuing Stacks, written in 1983, made a big impression on me. In it, he argues that topology—very loosely, the theory of what space can be shaped like, if we don’t care about bending or stretching it, just what kind of holes it has—can be completely reduced to algebra!

At first this idea may sound just like algebraic geometry, where we use algebra to describe geometrical shapes, like curves or higher-dimensional varieties. But “algebraic topology” winds up having a very different flavor, because in topology we don’t restrict our shapes to be described by polynomial equations. Instead of dealing with beautiful gems, we’re dealing with floppy, flexible blobs—so the kind of algebra we need is different.

Mathematicians sometimes joke that a topologist cannot tell the difference between a doughnut and a coffee cup.

Algebraic topology is a beautiful subject that has been around since long before Grothendieck—but he was one of the first to seriously propose a method to reduce all topology to algebra.

Thanks to my work on physics, I found his proposal tremendously exciting when I came across it. At the time I had taken up the challenge of trying to unify our two best theories of physics: quantum physics, which describes all the forces except gravity, and general relativity, which describes gravity. It seems that until we do this, our understanding of the fundamental laws of physics is doomed to be incomplete. But it’s devilishly difficult. One reason is that quantum physics is based on algebra, while general relativity involves a lot of topology. But that suggests an avenue of attack: if we can figure out how to express topology in terms of algebra, we might find a better language to formulate a theory of quantum gravity.

My physics colleagues will let out a howl here, and complain that I am oversimplifying. Yes, I’m oversimplifying. There is more to quantum physics than mere algebra, and more to general relativity than mere topology. Nonetheless, the possible benefits to physics of reducing topology to algebra are what got me so excited about Grothendieck’s work.

So, starting in the 1990s, I tried to understand the powerful abstract concepts that Grothendieck had invented—and by now I have partially succeeded. Some mathematicians find these concepts to be the hard part of algebraic geometry. They now seem like the easy part to me. The hard part, for me, is the nitty-gritty details. First, there is all the material in those texts that Hartshorne takes as prerequisites: “the books by Atiyah–Macdonald, Matsumura, and Zariski–Samuel.” But there is also a lot more.

So, while I now have some of what it takes to read Hartshorne, until recently I was too intimidated to learn it. A student of physics once asked a famous expert how much mathematics a physicist needs to know. The expert replied: “More.” Indeed, the job of learning mathematics is never done, so I focus on the things that seem most important and/or fun. Until last year, algebraic geometry never rose to the top of the list.

What changed? I realized that algebraic geometry is connected to the relation between classical and quantum physics. Classical physics is the physics of Newton, where we imagine that we can measure everything with complete precision, at least in principle. Quantum physics is the physics of Schrödinger and Heisenberg, governed by the uncertainty principle: if we measure some aspects of a physical system with complete precision, others must remain undetermined.

For example, any spinning object has an “angular momentum”. In classical mechanics we visualize this as an arrow pointing along the axis of rotation, whose length is proportional to how fast the object is spinning. And in classical mechanics, we assume we can measure this arrow precisely. In quantum mechanics—a more accurate description of reality—this turns out not to be true. For example, if we know how far this arrow points in the $x$ direction, we cannot know how far it points in the $y$ direction. This uncertainty is too small to be noticeable for a spinning basketball, but for an electron it is important: physicists had only a rough understanding of electrons until they took this into account.
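This trade-off can be stated precisely. It is standard quantum mechanics, not specific to this article: the components of angular momentum fail to commute, and the Robertson uncertainty relation then bounds how sharply two of them can simultaneously be known:

```latex
[J_x, J_y] = i\hbar J_z
\qquad\Longrightarrow\qquad
\Delta J_x \,\Delta J_y \;\ge\; \tfrac{\hbar}{2}\,\bigl|\langle J_z \rangle\bigr|
```

Since $\hbar$ is tiny, the bound is invisible for a basketball but decisive for an electron.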

Physicists often want to “quantize” classical physics problems. That is, they start with the classical description of some physical system, and they want to figure out the quantum description. There is no fully general and completely systematic procedure for doing this. This should not be surprising: the two worldviews are so different. However, there are useful recipes for quantization. The most systematic ones apply to a very limited selection of physics problems.

For example, sometimes in classical physics we can describe a system by a point in a variety. This is not something one generally expects, but it happens in plenty of important cases. For example, consider a spinning object: if we fix how long its angular momentum arrow is, the arrow can still point in any direction, so its tip must lie on a sphere. Thus, we can describe a spinning object by a point on a sphere. And this sphere is actually a variety, the “Riemann sphere”, named after Bernhard Riemann, one of the greatest algebraic geometers of the 1800s.

When a classical physics problem is described by a variety, some magic happens. The process of quantization becomes completely systematic—and surprisingly simple. There is even a kind of reverse process, which one might call “classicization,” that lets you turn the quantum description back into a classical description. The classical and quantum approaches to physics become tightly linked, and one can take ideas from either approach and see what they say about the other one. For example, each point on the variety describes not only a state of the classical system (in our example, a definite direction for the angular momentum), but also a state of the corresponding quantum system—even though the latter is governed by the uncertainty principle. The quantum state is the “best quantum approximation” to the classical state.
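In the spinning-object example this “best quantum approximation” can be written down explicitly. Geometrically quantizing the sphere with angular momentum $j$ in units of $\hbar$ (where $2j$ must be a nonnegative integer) gives the Hilbert space of a spin-$j$ particle, and a point of the Riemann sphere with stereographic coordinate $z$ goes to the corresponding spin coherent state. In one common convention:

```latex
\mathcal{H}_j \;\cong\; \mathbb{C}^{2j+1},
\qquad
|z\rangle \;=\; (1 + |z|^2)^{-j}\, e^{z J_-}\, |j,j\rangle
```

Here $J_-$ is the lowering operator and $|j,j\rangle$ is the highest-weight state. The coherent state $|z\rangle$ is the quantum state whose angular momentum points, as nearly as the uncertainty principle allows, in the direction labeled by $z.$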

Even better, in this situation many of the basic theorems about algebraic geometry can be seen as facts about quantization! Since quantization is something I’ve been thinking about for a long time, this makes me very happy. Richard Feynman once said that for him to make progress on a tough physics problem, he needed to have some sort of special angle on it:

I have to think that I have some kind of inside track on this problem. That is, I have some sort of talent that the other guys aren’t using, or some way of looking, and they are being foolish not to notice this wonderful way to look at it. I have to think I have a little better chance than the other guys, for some reason. I know in my heart that it is likely that the reason is false, and likely the particular attitude I’m taking with it was thought of by others. I don’t care; I fool myself into thinking I have an extra chance.

This may be what I’d been missing on algebraic geometry until now. Algebraic geometry is not just a problem to be solved, it’s a body of knowledge—but it’s such a large, intimidating body of knowledge that I didn’t dare tackle it until I got an inside track. Now I can read Hartshorne, translate some of the results into facts about physics, and feel I have a chance at understanding this stuff. And it’s a great feeling.

For the details of how algebraic geometry connects classical and quantum mechanics, see my talk slides and series of blog articles.

## Metal-Organic Frameworks

11 March, 2019

I’ve been talking about new technologies for fighting climate change, with an emphasis on negative carbon emissions. Now let’s begin looking at one technology in more detail. This will take a few articles. I want to start with the basics.

A metal-organic framework or MOF is a molecular structure built from metal atoms and organic compounds. There are many kinds. They can be 3-dimensional, like this one made by scientists at CSIRO in Australia:

And they can be full of microscopic holes, giving them an enormous surface area! For example, here’s a diagram of a MOF with yellow and orange balls showing the holes:

In fact, one gram of the stuff can have a surface area of more than 12,000 square meters!

Gas molecules like to sit inside these holes. So, perhaps surprisingly at first, you can pack a lot more gas in a cylinder containing a MOF than you can in an empty cylinder at the same pressure!

This lets us store gases using MOFs—like carbon dioxide, but also hydrogen, methane and others. And importantly, you can also get the gas molecules out of the MOF without enormous amounts of energy. Also, you can craft MOFs with different hole sizes and different chemical properties, so they attract some gases much more than others.

So, we can imagine various applications suited to fighting climate change! One is carbon capture and storage, where you want a substance that eagerly latches onto CO2 molecules, but can also easily be persuaded to let them go. But another is hydrogen or methane storage for the purpose of fuel. Methane releases less CO2 than gasoline does when it burns, per unit amount of energy—and hydrogen releases none at all. That’s why some advocate a hydrogen economy.

Could hydrogen-powered cars be better than battery-powered cars, someday? I don’t know. But never mind—such issues, though important, aren’t what I want to talk about now. I just want to quote something about methane storage in MOFs, to give you a sense of the state of the art.

• Mark Peplow, Metal-organic framework compound sets methane storage record, C&EN, 11 December 2017.

Cars powered by methane emit less CO2 than gasoline guzzlers, but they need expensive tanks and compressors to carry the gas at about 250 atm. Certain metal-organic framework (MOF) compounds—made from a lattice of metal-based nodes linked by organic struts—can store methane at lower pressures because the gas molecules pack tightly inside their pores.

So MOFs, in principle, could enable methane-powered cars to use cheaper, lighter, and safer tanks. But in practical tests, no material has met a U.S. Department of Energy (DOE) gas storage target of 263 cm3 of methane per cm3 of adsorbent at room temperature and 64 atm, enough to match the capacity of high-pressure tanks.

A team led by David Fairen-Jimenez at the University of Cambridge has now developed a synthesis method that endows a well-known MOF with a capacity of 259 cm3 of methane per cm3 under those conditions, at least 50% higher than its nearest rival. “It’s definitely a significant result,” says Jarad A. Mason at Harvard University, who works with MOFs and other materials for energy applications and was not involved in the research. “Capacity has been one of the biggest stumbling blocks.”

Only about two-thirds of the MOF’s methane was released when the pressure dropped to 6 atm, a minimum pressure needed to sustain a decent flow of gas from a tank. But this still provides the highest methane delivery capacity of any bulk adsorbent.

A couple of things are worth noting here. First, the process of a molecule sticking to a surface is called adsorption, not to be confused with absorption. Second, notice that using MOFs they managed to compress methane by a factor of 259 at a pressure of just 64 atmospheres. If we tried the same trick without MOFs we would need a pressure of 259 atmospheres!
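A back-of-envelope Python check of that comparison, treating methane as an ideal gas at fixed temperature (so the amount stored in an empty tank is proportional to the pressure):

```python
storage = 259        # cm^3 of methane (at 1 atm) stored per cm^3 of MOF
tank_pressure = 64   # atm applied to the MOF-filled tank

# An empty tank at 64 atm holds only 64 volumes' worth of gas, so
# the MOF boosts the tank's capacity by roughly a factor of four:
boost = storage / tank_pressure
print(round(boost, 1))  # 4.0

# Without the MOF, matching 259 stored volumes would take 259 atm,
# by the ideal-gas proportionality between pressure and amount:
pressure_needed = storage  # atm
```

Real methane deviates somewhat from ideal-gas behavior at hundreds of atmospheres, so this is only a rough estimate, but it conveys the scale of the advantage.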

But MOFs are not only good at holding gases, they’re good at sucking them up, which is really the flip side of the same coin: gas molecules avidly seek to sit inside the little holes of your MOF. So people are also using MOFs to build highly sensitive detectors for specific kinds of gases:

And some MOFs work in water, too—so people are trying to use them as water filters, sort of a high-tech version of zeolites, the minerals that inspired people to invent MOFs in the first place. Zeolites have an impressive variety of crystal structures:

and so on… but MOFs seem to be more adjustable in their structure and chemical properties.

Looking more broadly at future applications, we can imagine MOFs will be important in a host of technologies where we want a substance with lots of microscopic holes that are eager to hold specific molecules. I have a feeling that the most powerful applications of MOFs will come when other technologies mature. For example: projecting forward to a time when we get really good nanotechnology, we can imagine MOFs as useful “storage lockers” for molecular robots.

But next time I’ll talk about what we can do now, or soon, to capture carbon dioxide with MOFs.

In the meantime: can you imagine some cool things we could do with MOFs? This may feed your imagination:

• Wikipedia, Metal-organic frameworks.

## Breakthrough Institute on Climate Change

10 March, 2019

I found this article, apparently by Ted Nordhaus and Alex Trembath, to be quite thought-provoking. At times it sinks too deep into the moment’s politics for my taste, given that the issues it raises will probably be confronting us for the whole 21st century. But still, it raises big issues:

• Breakthrough Institute, Is climate change like diabetes or an asteroid?

The Breakthrough Institute seeks “technological solutions to environmental challenges”, so that informs their opinions. Let me quote some bits and urge you to read the whole thing! Even if it annoys you, it should make you think a bit.

Is climate change more like an asteroid or diabetes? Last month, one of us argued at Slate that climate advocates should resist calls to declare a national climate emergency because climate change was more like “diabetes for the planet” than an asteroid. The diabetes metaphor was surprisingly controversial. Climate change can’t be managed or lived with, many argued in response; it is an existential threat to human societies that demands an immediate cure.

The objection is telling, both in the ways in which it misunderstands the nature of the problem and in the contradictions it reveals. Diabetes is not benign. It is not a “natural” phenomenon and it can’t be cured. It is a condition that, if unmanaged, can kill you. And even for those who manage it well, life is different than before diabetes.

This seems to us to be a reasonably apt description of the climate problem. There is no going back to the world before climate change. Whatever success we have mitigating climate change, we almost certainly won’t return to pre-industrial atmospheric concentrations of greenhouse gases, at least not for many centuries. Even at one or 1.5 degrees Celsius of warming, the climate and the planet will look very different, and that will bring unavoidable consequences for human societies. We will live on a hotter planet and in a climate that will be more variable and less predictable.

How bad our planetary diabetes gets will depend on how much we continue to emit and how well adapted to a changing climate human societies become. With the present one degree of warming, it appears that human societies have adapted relatively well. Various claims attributing present day natural disasters to climate change are controversial. But the overall statistics suggest that deaths due to climate-related natural disasters globally are falling, not rising, and that economic losses associated with those disasters, adjusting for growing population and affluence, have been flat for many decades.

But at three or four degrees of warming, all bets are off. And it appears that unmanaged, that’s where present trends in emissions are likely to take us. Moreover, even with radical action, stabilizing emissions at 1.5 degrees C, as many advocates now demand, is not possible without either solar geoengineering or sucking carbon emissions out of the atmosphere at massive scale. Practically, given legacy emissions and committed infrastructure, the long-standing international target of limiting temperature increase to two degrees C is also extremely unlikely.

Unavoidably, then, treating our climate change condition will require not simply emissions reductions but also significant adaptation to known and unknown climate risks that are already baked in to our future due to two centuries of fossil fuel consumption. It is in this sense that we have long argued that climate change must be understood as a chronic condition of global modernity, a problem that will be managed but not solved.

A discussion of the worst-case versus the best-case IPCC scenarios, and what leads to these scenarios:

The worst case climate scenarios, which are based on worst case emissions scenarios, are the source of most of the terrifying studies of potential future climate impacts. These are frequently described as “business as usual” — what happens if the economy keeps growing and the global population becomes wealthier and hence more consumptive. But that’s not how the IPCC, which generates those scenarios, actually gets to very high emissions futures. Rather, the worst case scenarios are those in which the world remains poor, populous, unequal, and low-tech. It is a future with lots of poor people who don’t have access to clean technology. By contrast, a future in which the world is resilient to a hotter climate is likely also one in which the world has been more successful at mitigating climate change as well. A wealthier world will be a higher-tech world, one with many more low carbon technological options and more resources to invest in both mitigation and adaptation. It will be less populous (fertility rates reliably fall as incomes rise), less unequal (because many fewer people will live in extreme poverty), and more urbanized (meaning many more people living in cities with hard infrastructure, air conditioning, and emergency services to protect them).

That will almost certainly be a world in which global average temperatures have exceeded two degrees above pre-industrial levels. The latest round of climate deadline-ism (12 years to prevent climate catastrophe according to The Guardian) won’t change that. But as even David Wallace Wells, whose book The Uninhabitable Earth has helped revitalize climate catastrophism, acknowledges, “Two degrees would be terrible but it’s better than three… And three degrees is much better than four.”

Given the current emissions trajectory, a future world that stabilized emissions below 2.5 or three degrees, an accomplishment that in itself will likely require very substantial and sustained efforts to reduce emissions, would also likely be one reasonably well adapted to live in that climate, as it would, of necessity, be one that was much wealthier, less unequal, and more advanced technologically than the world we live in today.

The most controversial part of the article concerns the “apocalyptic” or “millenarian” tendency among environmentalists: the feeling that only a complete reorganization of society will save us—for example, going “back to nature”.

[…] while the nature of the climate problem is chronic and the political and policy responses are incremental, the culture and ideology of contemporary environmentalism is millenarian. In the millenarian mind, there are only two choices, catastrophe or completely reorganizing society. Americans will either see the writing on the wall and remake the world, or perish in fiery apocalypse.

This, ultimately, is why adaptation, nuclear energy, carbon capture, and solar geoengineering have no role in the environmental narrative of apocalypse and salvation, even as all but the last are almost certainly necessary for any successful response to climate change and will also end up in any major federal policy effort to address climate change. Because they are basically plug-and-play with the existing socio-technical paradigm. They don’t require that we end capitalism or consumerism or energy intensive lifestyles. Modern, industrial, techno-society goes on, just without the emissions. This is also why efforts by nuclear, carbon capture, and geoengineering advocates to marshall catastrophic framing to build support for those approaches have had limited effect.

The problem for the climate movement is that the technocratic requirements necessary to massively decarbonize the global economy conflict with the egalitarian catastrophism that the movement’s mobilization strategies demand. McKibben has privately acknowledged as much to several people, explaining that he hasn’t publicly recognized the need for nuclear energy because he believes doing so would “split this movement in half.”

Implicit in these sorts of political calculations is the assumption that once advocates have amassed sufficient political power, the necessary concessions to the practical exigencies of deeply reducing carbon emissions will then become possible. But the army you raise ultimately shapes the sorts of battles you are able to wage, and it is not clear that the army of egalitarian millenarians that the climate movement is mobilizing will be willing to sign on to the necessary compromises — politically, economically, and technologically — that would be necessary to actually address the problem.

## Negative Carbon Emissions

2 March, 2019

A carbon dioxide scrubber is any sort of gadget that removes carbon dioxide from the air. There are various ways such gadgets can work, and various things we can do with them. For example, they’re already being used to clean the air in submarines and human-occupied spacecraft. I want to talk about carbon dioxide scrubbers as a way to reduce carbon emissions from burning fossil fuels, and a specific technology for doing this. But I don’t want to talk about those things today.

Why not? It turns out that if you start talking about the specifics of one particular approach to fighting global warming, people instantly want to start talking about other approaches they consider better. This makes some sense: it’s a big problem and we need to compare different approaches. But it’s also a bit frustrating: we need to study different approaches individually so we can know enough to compare them, or make progress on any one approach.

I mainly want to study the nitty-gritty details of various individual approaches, starting with one approach to carbon scrubbing. But if I don’t say anything about the bigger picture, people will be unsatisfied.

So, right now I want to say a bit about carbon dioxide scrubbers.

The first thing to realize—and this applies to all approaches to battling global warming—is the huge scale of the task. In 2018 we put 37.1 gigatonnes of CO2 into the atmosphere by burning fossil fuels and making cement.

That’s a lot! Let’s compare some of the other biggest human industries, in terms of the sheer mass being processed.

Cement production is big. Global cement production in 2017 was about 4.1 gigatonnes, with China making more than the rest of the world combined, and a large uncertainty in how much they made. But digging up and burning carbon is even bigger. For example, over 7 gigatonnes of coal is being mined per year. I can’t find figures on total agricultural production, but in 2004 we created about 5 gigatonnes of agricultural waste. Total grain production was just 2.53 gigatonnes in 2017. Total plastic production in 2017 was a mere 348 megatonnes.
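To line those figures up side by side, here they are as a little sorted table in Python (all numbers as quoted above, in gigatonnes per year, drawn from slightly different years and sources):

```python
# Annual mass flows quoted above, in gigatonnes per year.
# (Figures are from slightly different years and sources.)
annual_mass_gt = {
    "CO2 from fossil fuels and cement (2018)": 37.1,
    "coal mined":                               7.0,
    "agricultural waste (2004)":                5.0,
    "cement produced (2017)":                   4.1,
    "grain produced (2017)":                    2.53,
    "plastic produced (2017)":                  0.348,
}

for name, gt in sorted(annual_mass_gt.items(), key=lambda kv: -kv[1]):
    print(f"{name:42s} {gt:7.2f} Gt/yr")

# CO2 emissions outweigh the largest of the other flows listed
# by more than a factor of five:
largest_other = max(v for k, v in annual_mass_gt.items()
                    if not k.startswith("CO2"))
print(f"ratio: {37.1 / largest_other:.1f}")
```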

So, to use technology to remove as much CO2 from the air as we’re currently putting in would require an industry that processes more mass than any other today.

I conclude that this won’t happen anytime soon. Indeed David MacKay calls all methods of removing CO2 from air “the last thing we should talk about”. For now, he argues, we should focus on cutting carbon emissions. And I believe that to do that on a large enough scale requires economic incentives, for example a carbon tax.

But to keep global warming below 2°C over pre-industrial levels, it’s becoming increasingly likely that we’ll need negative carbon emissions:

Indeed, a lot of scenarios contemplated by policymakers involve net negative carbon emissions. Often they don’t realize just how hard these are to achieve! In his talk Mitigation on methadone: how negative emissions lock in our high-carbon addiction, Kevin Anderson has persuasively argued that policymakers are fooling themselves into thinking we can keep burning carbon as we like now and achieve the necessary negative emissions later. He’s not against negative carbon emissions. He’s against using vague fantasies of negative carbon emissions to put off confronting reality!

It is not well understood by policy makers, or indeed many academics, that IAMs [integrated assessment models] assume such a massive deployment of negative emission technologies. Yet when it comes to the more stringent Paris obligations, studies suggest that it is not possible to reach 1.5°C with a 50% chance without significant negative emissions. Even for 2°C, very few scenarios have explored mitigation without negative emissions, and contrary to common perception, negative emissions are also prevalent in higher stabilisation targets (Figure 2). Given such a pervasive and pivotal role of negative emissions in mitigation scenarios, their almost complete absence from climate policy discussions is disturbing and needs to be addressed urgently.

Pondering the difficulty of large-scale negative carbon emissions, but also their potential importance, I’m led to imagine scenarios like this:

In the 21st century we slowly wean ourselves of our addiction to burning carbon. By the end, we’re suffering a lot from global warming. It’s a real mess. But suppose our technological civilization survives, and we manage to develop a cheap source of clean energy. And once we switch to this, we don’t simply revert to our old bad habit of growing until we exhaust the available resources! We’ve learned our lesson—the hard way. We start trying to clean up the mess we made. Among other things, we start removing carbon dioxide from the atmosphere. We then spend a century—or two, or three—doing this. Thanks to various tipping points in the Earth’s climate system, we never get things back to the way they were. But we do, finally, make the Earth a beautiful place again.

If we’re aiming for some happy ending like this, it may pay to explore various ways to achieve negative carbon emissions even if we can’t scale them up fast enough to stop a big mess in the 21st century.

(Of course, I’m not suggesting this strategy as an alternative to cutting carbon emissions, or doing all sorts of other good things. We need a multi-pronged strategy, including some prongs that will only pay off in the long run, and only if we’re lucky.)

If we’re exploring various methods to achieve negative carbon emissions, a key aspect is figuring out economically viable pathways to scale up those methods. They’ll start small and they’ll inevitably be expensive at first. The ones that get big will get cheaper—per tonne of CO2 removed—as they grow.

This has various implications. For example, suppose someone builds a machine that sucks CO2 from the air and uses it to make carbonated soft drinks and to make plants grow better in greenhouses. As I mentioned, Climeworks is actually doing this!

In one sense, this is utterly pointless for fighting climate change, because these markets only use 6 megatonnes of CO2 annually—less than 0.02% of how much CO2 we’re dumping into the atmosphere!
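The arithmetic behind that percentage, in case you want to check it:

```python
# 6 megatonnes of CO2 per year for beverages and greenhouses,
# versus about 37.1 gigatonnes emitted in 2018.
market_mt = 6.0
emissions_mt = 37.1 * 1000   # gigatonnes -> megatonnes

fraction = market_mt / emissions_mt
print(f"{fraction:.2%} of annual emissions")   # about 0.02%
```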

But on the other hand, if this method of CO2 scrubbing can be scaled up and become cheaper and cheaper, it’s useful to start exploring the technology now. It could be the first step along some economically viable pathway.

I especially like the idea of CO2 scrubbing for coal-fired power plants. Of course to cut carbon emissions it would be better to ban coal-fired power plants. But this will take a while:

So, we can imagine an intermediate regime where regulations or a carbon tax make people sequester the CO2 from coal-fired power plants. And if this happens, there could be a big market for carbon dioxide scrubbers—for a while, at least.

I hope we can agree on at least one thing: the big picture is complicated. Next time I’ll zoom in and start talking about a specific technology for CO2 scrubbing.

## Problems with the Standard Model Higgs

25 February, 2019

Here is a conversation I had with Scott Aaronson. It started on his blog, in a discussion about ‘fine-tuning’. Some say the Standard Model of particle physics can’t be the whole story, because in this theory you need to fine-tune the fundamental constants to keep the Higgs mass from becoming huge. Others say this argument is invalid.

I tried to push the conversation toward the calculations that actually underlie this argument. Then our conversation drifted into email and got more technical… and perhaps also more interesting, because it led us to contemplate the stability of the vacuum!

You see, if we screwed up royally on our fine-tuning and came up with a theory where the square of the Higgs mass was negative, the vacuum would be unstable. It would instantly decay into a vast explosion of Higgs bosons.

Another possibility, also weird, turns out to be slightly more plausible. This is that the Higgs mass is positive—as it clearly is—and yet the vacuum is ‘metastable’. In this scenario, the vacuum we see around us might last a long time, and yet eventually it could decay through quantum tunnelling to the ‘true’ vacuum, with a lower energy density:

Little bubbles of true vacuum would form, randomly, and then grow very rapidly. This would be the end of life as we know it.

Scott agreed that other people might like to see our conversation. So here it is. I’ll fix a few mistakes, to make me seem smarter than I actually am.

If I said, “supersymmetry basically has to be there because it’s such a beautiful symmetry,” that would be an argument from beauty. But I didn’t say that, and I disagree with anyone who does say it. I made something weaker, what you might call an argument from the explanatory coherence of the world. It merely says that, when we find basic parameters of nature to be precariously balanced against each other, to one part in $10^{10}$ or whatever, there’s almost certainly some explanation. It doesn’t say the explanation will be beautiful, it doesn’t say it will be discoverable by an FCC or any other collider, and it doesn’t say it will have a form (like SUSY) that anyone has thought of yet.

Scott wrote:

It merely says that, when we find basic parameters of nature to be precariously balanced against each other, to one part in $10^{10}$ or whatever, there’s almost certainly some explanation.

Do you know examples of this sort of situation in particle physics, or is this just a hypothetical situation?

To answer a question with a question, do you disagree that that’s the current situation with (for example) the Higgs mass, not to mention the vacuum energy, if one considers everything that could naïvely contribute? A lot of people told me it was, but maybe they lied or I misunderstood them.

The basic rough story is this. We measure the Higgs mass. We can assume that the Standard Model is good up to some energy near the Planck energy, after which it fizzles out for some unspecified reason.

According to the Standard Model, each of its 25 fundamental constants is a “running coupling constant”. That is, it’s not really a constant, but a function of energy: roughly, the energy of the process we use to measure it. Let’s call these “coupling constants measured at energy E”. Each of these 25 functions is determined by the values of all 25 functions at any fixed energy E – e.g. energy zero, or the Planck energy. This is called the “renormalization group flow”.
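To make the idea concrete, here is a toy one-coupling caricature in Python. The real renormalization group flow couples all 25 constants; this uses a single one-loop equation of the QED type, $d\alpha/d(\ln E) = b \, \alpha^2$, with only an electron-like loop included, so the numbers are schematic. The point is just that fixing the coupling at any one energy determines it at every other energy.

```python
import math

# Toy "running coupling": a single one-loop equation,
#   d(alpha)/d(ln E) = b * alpha^2,
# solved in closed form.  The real Standard Model couples 25 such
# functions together; this is a one-dimensional caricature with
# schematic numbers.

def run_coupling(alpha0: float, E0: float, E: float, b: float) -> float:
    """Coupling at scale E, given its value alpha0 at scale E0."""
    return alpha0 / (1.0 - b * alpha0 * math.log(E / E0))

b = 2 / (3 * math.pi)     # electron loop only: illustrative, not exact
alpha_mz = 1 / 127        # rough electromagnetic coupling at 91 GeV

# Fixing the coupling at one scale determines it at every scale;
# running down to the electron mass and back up returns the input:
alpha_low = run_coupling(alpha_mz, E0=91.0, E=0.000511, b=b)
alpha_back = run_coupling(alpha_low, E0=0.000511, E=91.0, b=b)
print(alpha_low, alpha_back - alpha_mz)
```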

So, the Higgs mass we measure is actually the Higgs mass at some energy E quite low compared to the Planck energy.

And, it turns out that to get this measured value of the Higgs mass, the values of some fundamental constants measured at energies near the Planck mass need to almost cancel out. More precisely, some complicated function of them needs to almost but not quite obey some equation.

People summarize the story this way: to get the observed Higgs mass we need to “fine-tune” the fundamental constants’ values as measured near the Planck energy, if we assume the Standard Model is valid up to energies near the Planck energy.

A lot of particle physicists accept this reasoning and additionally assume that fine-tuning the values of fundamental constants as measured near the Planck energy is “bad”. They conclude that it would be “bad” for the Standard Model to be valid up to the Planck energy.

(In the previous paragraph you can replace “bad” with some other word—for example, “implausible”.)

Indeed you can use a refined version of the argument I’m sketching here to say “either the fundamental constants measured at energy E need to obey an identity up to precision ε or the Standard Model must break down before we reach energy E”, where ε gets smaller as E gets bigger.

Then, in theory, you can pick an ε and say “an ε smaller than that would make me very nervous.” Then you can conclude that “if the Standard Model is valid up to energy E, that will make me very nervous”.

(But I honestly don’t know anyone who has approximately computed ε as a function of E. Often people seem content to hand-wave.)

People like to argue about how small an ε should make us nervous, or even whether any value of ε should make us nervous.

But another assumption behind this whole line of reasoning is that the values of fundamental constants as measured at some energy near the Planck energy are “more important” than their values as measured near energy zero, so we should take near-cancellations of these high-energy values seriously—more seriously, I suppose, than near-cancellations at low energies.

Most particle physicists will defend this idea quite passionately. The philosophy seems to be that God designed high-energy physics and left his grad students to work out its consequences at low energies—so if you want to understand physics, you need to focus on high energies.

Scott wrote in email:

Do I remember correctly that it’s actually the square of the Higgs mass (or its value when probed at high energy?) that’s the sum of all these positive and negative high-energy contributions?

John wrote:

Sorry to take a while. I was trying to figure out if that’s a reasonable way to think of things. It’s true that the Higgs mass squared, not the Higgs mass, is what shows up in the Standard Model Lagrangian. This is how scalar fields work.

But I wouldn’t talk about a “sum of positive and negative high-energy contributions”. I’d rather think of all the coupling constants in the Standard Model—all 25 of them—obeying a coupled differential equation that says how they change as we change the energy scale. So, we’ve got a vector field on $\mathbb{R}^{25}$ that says how these coupling constants “flow” as we change the energy scale.

Here’s an equation from a paper that looks at a simplified model:

Here $m_h$ is the Higgs mass, $m_t$ is the mass of the top quark, and both are being treated as functions of a momentum $k$ (essentially the energy scale we’ve been talking about). $v$ is just a number. You’ll note this equation simplifies if we work with the Higgs mass squared, since

$m_h dm_h = \frac{1}{2} d(m_h^2)$

This is one of a bunch of equations—in principle 25—that say how all the coupling constants change. So, they all affect each other in a complicated way as we change $k.$

By the way, there’s a lot of discussion of whether the Higgs mass squared goes negative at high energies in the Standard Model. Some calculations suggest it does; other people argue otherwise. If it does, this would generally be considered an inconsistency in the whole setup: particles with negative mass squared are tachyons!

I think one could make a lot of progress on these theoretical issues involving the Standard Model if people took them nearly as seriously as string theory or new colliders.

Scott wrote:

So OK, I was misled by the other things I read, and it’s more complicated than $m_h^2$ being a sum of mostly-canceling contributions (I was pretty sure $m_h$ couldn’t be such a sum, since then a slight change to parameters could make it negative).

Rather, there’s a coupled system of 25 nonlinear differential equations, which one could imagine initializing with the “true” high-energy SM parameters, and then solving to find the measured low-energy values. And these coupled differential equations have the property that, if we slightly changed the high-energy input parameters, that could generically have a wild effect on the low-energy outputs, pushing them up to the Planck scale or whatever.

Philosophically, I suppose this doesn’t much change things compared to the naive picture: the question is still, how OK are you with high-energy parameters that need to be precariously tuned to reproduce the low-energy SM, and does that state of affairs demand a new principle to explain it? But it does lead to a different intuition: namely, isn’t this sort of chaos just the generic behavior for nonlinear differential equations? If we fix a solution to such equations at a time $t_0,$ our solution will almost always appear “finely tuned” at a faraway time $t_1$—tuned to reproduce precisely the behavior at $t_0$ that we fixed previously! Why shouldn’t we imagine that God fixed the values of the SM constants for the low-energy world, and they are what they are at high energies simply because that’s what you get when you RG-flow to there?

I confess I’d never heard the speculation that $m_h^2$ could go negative at sufficiently high energies (!!). If I haven’t yet abused your generosity enough: what sort of energies are we talking about, compared to the Planck energy?

John wrote:

Scott wrote:

Rather, there’s a coupled system of 25 nonlinear differential equations, which one could imagine initializing with the “true” high-energy SM parameters, and then solving to find the measured low-energy values. And these coupled differential equations have the property that, if we slightly changed the high-energy input parameters, that could generically have a wild effect on the low-energy outputs, pushing them up to the Planck scale or whatever.

Right.

Philosophically, I suppose this doesn’t much change things compared to the naive picture: the question is still, how OK are you with high-energy parameters that need to be precariously tuned to reproduce the low-energy SM, and does that state of affairs demand a new principle to explain it? But it does lead to a different intuition: namely, isn’t this sort of chaos just the generic behavior for nonlinear differential equations?

Yes it is, generically.

Physicists are especially interested in theories that have “ultraviolet fixed points”—by which they usually mean values of the parameters that are fixed under the renormalization group flow and attractive as we keep increasing the energy scale. The idea is that these theories seem likely to make sense at arbitrarily high energy scales. For example, pure Yang-Mills fields are believed to be “asymptotically free”—the coupling constant measuring the strength of the force goes to zero as the energy scale gets higher.

But attractive ultraviolet fixed points are going to be repulsive as we reverse the direction of the flow and see what happens as we lower the energy scale.
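Here’s a little numerical illustration of that reversal, using the standard one-loop form of an asymptotically free coupling; the numbers are made up for illustration, not real Yang–Mills values.

```python
import math

# One-loop asymptotically free coupling: dg/d(ln E) = -b*g^3 has the
# exact solution  g(E)^2 = g0^2 / (1 + 2*b*g0^2*ln(E/E0)).
# The fixed point g = 0 attracts as E -> infinity, but repels as we
# flow down in energy: couplings that nearly agree in the
# ultraviolet drift apart in the infrared.  (Illustrative numbers.)

def g_squared(g0_sq: float, E0: float, E: float, b: float) -> float:
    return g0_sq / (1.0 + 2.0 * b * g0_sq * math.log(E / E0))

b = 0.01
uv, ir = 1e19, 100.0        # "Planck-ish" and "collider-ish" scales

for g0_sq in (0.50, 0.51):  # nearly equal in the UV...
    print(g0_sq, g_squared(g0_sq, uv, ir, b))  # ...less equal in the IR
```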

So what gives? Are all ultraviolet fixed points giving theories that require “fine-tuning” to get the parameters we observe at low energies? Is this bad?

Well, they’re not all the same. For theories considered nice, the parameters change logarithmically as we change the energy scale. This is considered to be a mild change. The Standard Model with Higgs may not have an ultraviolet fixed point, but people usually worry about something else: the Higgs mass changes quadratically with the energy scale. This is related to the square of the Higgs mass being the really important parameter… if we used that, I’d say linearly.

I think there’s a lot of mythology and intuitive reasoning surrounding this whole subject—probably the real experts could say a lot about it, but they are few, and a lot of people just repeat what they’ve been told, rather uncritically.

If we fix a solution to such equations at a time $t_0,$ our solution will almost always appear “finely tuned” at a faraway time $t_1$—tuned to reproduce precisely the behavior at $t_0$ that we fixed previously! Why shouldn’t we imagine that God fixed the values of the SM constants for the low-energy world, and they are what they are at high energies simply because that’s what you get when you RG-flow to there?

This is something I can imagine Sabine Hossenfelder saying.

I confess I’d never heard the speculation that $m_h^2$ could go negative at sufficiently high energies (!!). If I haven’t yet abused your generosity enough: what sort of energies are we talking about, compared to the Planck energy?

The experts are still arguing about this; I don’t really know. To show how weird all this stuff is, there’s a review article from 2013 called “The top quark and Higgs boson masses and the stability of the electroweak vacuum”, which doesn’t look crackpotty to me, that argues that the vacuum state of the universe is stable if the Higgs and top quark masses lie in the green region, but only metastable otherwise:

The big ellipse is where the parameters were expected to lie in 2012 when the paper was first written. The smaller ellipses only indicate the size of the uncertainty expected after later colliders made more progress. You shouldn’t take them too seriously: they could be centered in the stable region or the metastable region.

An appendix gives an update, which looks like this:

The paper says:

one sees that the central value of the top mass lies almost exactly on the boundary between vacuum stability and metastability. The uncertainty on the top quark mass is nevertheless presently too large to clearly discriminate between these two possibilities.

Then John wrote:

By the way, another paper analyzing problems with the Standard Model says:

It has been shown that higher dimension operators may change the lifetime of the metastable vacuum, $\tau$, from

$\tau = 1.49 \times 10^{714} T_U$

to

$\tau =5.45 \times 10^{-212} T_U$

where $T_U$ is the age of the Universe.

In other words, the calculations are not very reliable yet.

And then John wrote:

Sorry to keep spamming you, but since some of my last few comments didn’t make much sense, even to me, I did some more reading. It seems the best current conventional wisdom is this:

Assuming the Standard Model is valid up to the Planck energy, you can tune parameters near the Planck energy to get the observed parameters down here at low energies. So of course the Higgs mass down here is positive.

But, due to higher-order effects, the potential for the Higgs field no longer looks like the classic “Mexican hat” described by a polynomial of degree 4:

with the observed Higgs field sitting at one of the global minima.

Instead, it’s described by a more complicated function, like a polynomial of degree 6 or more. And this means that the minimum where the Higgs field is sitting may only be a local minimum:

In the left-hand scenario we’re at a global minimum and everything is fine. In the right-hand scenario we’re not and the vacuum we see is only metastable. The Higgs mass is still positive: that’s essentially the curvature of the potential near our local minimum. But the universe will eventually tunnel through the potential barrier and we’ll all die.
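Here’s a minimal numerical caricature of the two pictures, with made-up coefficients. The only features that matter are a local minimum with positive curvature (our metastable vacuum), a barrier, and a deeper minimum beyond it. (In the real story our vacuum sits at a nonzero field value; putting it at $h = 0$ just keeps the polynomial simple.)

```python
# Degree-6 toy potential with V'(h) = h*(h**2 - 1)*(h**2 - 4):
# critical points at h = 0 (our local minimum), h = 1 (barrier top)
# and h = 2 (the deeper "true" vacuum).  Coefficients are invented
# purely for illustration.

def V(h: float) -> float:
    return h**6 / 6 - 5 * h**4 / 4 + 2 * h**2

local_min, barrier, true_vac = V(0.0), V(1.0), V(2.0)
print(local_min, barrier, true_vac)   # 0.0, ~0.92, ~-1.33

# Positive curvature at the local minimum plays the role of the
# (positive) Higgs mass squared: locally everything looks stable,
# even though a deeper vacuum lies beyond the barrier.
eps = 1e-4
curvature = (V(eps) - 2 * V(0.0) + V(-eps)) / eps**2
print(curvature)                      # ~4, i.e. > 0
```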

Yes, that seems to be the conventional wisdom! Obviously they’re keeping it hush-hush to prevent panic.

This paper has tons of relevant references:

• Tommi Markkanen, Arttu Rajantie, Stephen Stopyra, Cosmological aspects of Higgs vacuum metastability.

Abstract. The current central experimental values of the parameters of the Standard Model give rise to a striking conclusion: metastability of the electroweak vacuum is favoured over absolute stability. A metastable vacuum for the Higgs boson implies that it is possible, and in fact inevitable, that a vacuum decay takes place with catastrophic consequences for the Universe. The metastability of the Higgs vacuum is especially significant for cosmology, because there are many mechanisms that could have triggered the decay of the electroweak vacuum in the early Universe. We present a comprehensive review of the implications from Higgs vacuum metastability for cosmology along with a pedagogical discussion of the related theoretical topics, including renormalization group improvement, quantum field theory in curved spacetime and vacuum decay in field theory.

Scott wrote:

Once again, thank you so much! This is enlightening.

If you’d like other people to benefit from it, I’m totally up for you making it into a post on Azimuth, quoting from my emails as much or as little as you want. Or you could post it on that comment thread on my blog (which is still open), or I’d be willing to make it into a guest post (though that might need to wait till next week).

I guess my one other question is: what happens to this RG flow when you go to the infrared extreme? Is it believed, or known, that the “low-energy” values of the 25 Standard Model parameters are simply fixed points in the IR? Or could any of them run to strange values there as well?

I don’t really know the answer to that, so I’ll stop here.

But in case you’re worrying now that it’s “possible, and in fact inevitable, that a vacuum decay takes place with catastrophic consequences for the Universe”, relax! These calculations are very hard to do correctly. All existing work uses a lot of approximations that I don’t completely trust. Furthermore, they are assuming that the Standard Model is valid up to very high energies without any corrections due to new, yet-unseen particles!

So, while I think it’s a great challenge to get these calculations right, and to measure the Standard Model parameters accurately enough to do them right, I am not very worried about the Universe being taken over by a rapidly expanding bubble of ‘true vacuum’.

## The Cost of Sucking

19 February, 2019

I’m talking about carbon dioxide scrubbers. This post will just be an extended quote from an excellent book, which is free online:

• David MacKay, Sustainable Energy: Without the Hot Air.

It will help us begin to understand the economics. But some numbers may have changed since this was written! Also, the passage I’m quoting focuses on taking carbon dioxide out of the air. This is not really what I’m researching now: I’m actually interested in removing carbon dioxide from the exhaust of coal-fired power plants, at least until we manage to eliminate these plants. But the two problems have enough similarities that it’s worth looking at the former.

Here is what MacKay says:

### The cost of sucking

Today, pumping carbon out of the ground is big bucks. In the future, perhaps pumping carbon into the ground is going to be big bucks. Assuming that inadequate action is taken now to halt global carbon pollution, perhaps a coalition of the willing will in a few decades pay to create a giant vacuum cleaner, and clean up everyone’s mess.

Before we go into details of how to capture carbon from thin air, let’s discuss the unavoidable energy cost of carbon capture. Whatever technologies we use, they have to respect the laws of physics, and unfortunately grabbing CO2 from thin air and concentrating it requires energy. The laws of physics say that the energy required must be at least 0.2 kWh per kg of CO2 (table 31.5). Given that real processes are typically 35% efficient at best, I’d be amazed if the energy cost of carbon capture is ever reduced below 0.55 kWh per kg.

Now, let’s assume that we wish to neutralize a typical European’s CO2 output of 11 tons per year, which is 30 kg per day per person. The energy required, assuming a cost of 0.55 kWh per kg of CO2, is 16.5 kWh per day per person. This is exactly the same as British electricity consumption. So powering the giant vacuum cleaner may require us to double our electricity production – or at least, to somehow obtain extra power equal to our current electricity production.

If the cost of running giant vacuum cleaners can be brought down, brilliant, let’s make them. But no amount of research and development can get round the laws of physics, which say that grabbing CO2 from thin air and concentrating it into liquid CO2 requires at least 0.2 kWh per kg of CO2.
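MacKay’s lower bound is just the ideal-gas entropy of mixing: the minimum work to extract CO2 present at mole fraction x is RT ln(1/x) per mole. Here’s a quick check of his numbers (the 0.07 kWh per kg compression figure is taken from his table 31.5; everything else is textbook thermodynamics):

```python
import math

R = 8.314      # gas constant, J/(mol K)
T = 298.0      # room temperature, K
x = 0.0003     # CO2 mole fraction in air (the 0.03% in the text)
M = 0.044      # kg of CO2 per mole
KWH = 3.6e6    # joules per kWh

# Ideal minimum work to un-mix CO2 from air: R T ln(1/x) per mole.
w_concentrate = R * T * math.log(1.0 / x) / M / KWH   # kWh per kg of CO2
w_compress = 0.07   # kWh per kg: the compression entry in table 31.5
w_ideal = w_concentrate + w_compress

print(f"concentration: {w_concentrate:.2f} kWh/kg")        # ≈ 0.13
print(f"ideal total:   {w_ideal:.2f} kWh/kg")              # ≈ 0.20
print(f"at 35% efficiency: {w_ideal / 0.35:.2f} kWh/kg")   # ≈ 0.56

# A European's 30 kg/day at 0.55 kWh/kg:
print(f"per person: {30 * 0.55:.1f} kWh/day")              # 16.5
```

So the 0.2 kWh per kg floor and the 16.5 kWh per day per person figure both drop out of this one-line calculation.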

Now, what’s the best way to suck CO2 from thin air? I’ll discuss four technologies for building the giant vacuum cleaner:

A. chemical pumps;
B. trees;
C. accelerated weathering of rocks;
D. ocean nourishment.

### A. Chemical technologies for carbon capture

The chemical technologies typically deal with carbon dioxide in two steps.

  0.03% CO2 → (concentrate) → pure CO2 → (compress) → liquid CO2

First, they concentrate CO2 from its low concentration in the atmosphere; then they compress it into a small volume ready for shoving somewhere (either down a hole in the ground or deep in the ocean). Each of these steps has an energy cost. The costs required by the laws of physics are shown in table 31.5.

In 2005, the best published methods for CO2 capture from thin air were quite inefficient: the energy cost was about 3.3 kWh per kg, with a financial cost of about $140 per ton of CO2. At this energy cost, capturing a European’s 30 kg per day would cost 100 kWh per day – almost the same as the European’s energy consumption of 125 kWh per day. Can better vacuum cleaners be designed?

Recently, Wallace Broecker, climate scientist, “perhaps the world’s foremost interpreter of the Earth’s operation as a biological, chemical, and physical system,” has been promoting an as yet unpublished technology developed by physicist Klaus Lackner for capturing CO2 from thin air. Broecker imagines that the world could carry on burning fossil fuels at much the same rate as it does now, and 60 million CO2-scrubbers (each the size of an up-ended shipping container) will vacuum up the CO2. What energy does Lackner’s process require? In June 2007 Lackner told me that his lab was achieving 1.3 kWh per kg, but since then they have developed a new process based on a resin that absorbs CO2 when dry and releases CO2 when moist. Lackner told me in June 2008 that, in a dry climate, the concentration cost has been reduced to about 0.18–0.37 kWh of low-grade heat per kg CO2. The compression cost is 0.11 kWh per kg. Thus Lackner’s total cost is 0.48 kWh or less per kg. For a European’s emissions of 30 kg CO2 per day, we are still talking about a cost of 14 kWh per day, of which 3.3 kWh per day would be electricity, and the rest heat.

Hurray for technical progress! But please don’t think that this is a small cost. We would require roughly a 20% increase in world energy production, just to run the vacuum cleaners.

### Conclusion

Okay, this is me again: John Baez.
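MacKay’s per-person figures follow directly from his per-kg costs, by the way. Here’s the arithmetic spelled out, taking the upper end of his 0.18–0.37 kWh/kg range for the low-grade heat in Lackner’s process:

```python
kg_per_day = 30.0    # a European's daily CO2 output, per the quoted passage

# 2005 state of the art: 3.3 kWh per kg of CO2.
old_cost = 3.3 * kg_per_day          # 99 kWh/day, close to the 125 kWh/day
print(f"2005 process: {old_cost:.0f} kWh/day")

# Lackner's resin process: up to 0.37 kWh/kg of low-grade heat to
# concentrate, plus 0.11 kWh/kg of electricity to compress.
heat = 0.37 * kg_per_day             # ≈ 11.1 kWh/day of heat
electricity = 0.11 * kg_per_day      # = 3.3 kWh/day of electricity
print(f"Lackner: {heat + electricity:.0f} kWh/day "
      f"({electricity:.1f} kWh/day of it electricity)")   # ≈ 14 kWh/day
```

So the numbers check out: about 14 kWh per day per person, of which 3.3 kWh is electricity and the rest heat.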

If you want to read about the other methods (trees, accelerated weathering of rocks, and ocean nourishment), go to MacKay’s book. I’m not saying that they are less interesting! I am not trying, in this particular series of posts, to scan all technologies and find the best ones. I’m trying to study carbon dioxide scrubbers.