## Symposium on Compositional Structures 4

8 April, 2019

There’s yet another conference in this fast-paced series, and this time it’s in Southern California!

Symposium on Compositional Structures 4, 22–23 May, 2019, Chapman University, California. Organized by Alexander Kurz.

The Symposium on Compositional Structures (SYCO) is an interdisciplinary series of meetings aiming to support the growing community of researchers interested in the phenomenon of compositionality, from both applied and abstract perspectives, and in particular where category theory serves as a unifying common language.
The first SYCO was in September 2018, at the University of Birmingham. The second SYCO was in December 2018, at the University of Strathclyde. The third SYCO was in March 2019, at the University of Oxford. Each meeting attracted about 70 participants.

We welcome submissions from researchers across computer science, mathematics, physics, philosophy, and beyond, with the aim of fostering friendly discussion, disseminating new ideas, and spreading knowledge between fields. Submission is encouraged for both mature research and work in progress, and by both established academics and junior researchers, including students.

Submission is easy, with no format requirements or page restrictions. The meeting does not have proceedings, so work can be submitted even if it has been submitted or published elsewhere. Think creatively—you could submit a recent paper, or notes on work in progress, or even a recent Masters or PhD thesis.

While no list of topics could be exhaustive, SYCO welcomes submissions
with a compositional focus related to any of the following areas, in
particular from the perspective of category theory:

• logical methods in computer science, including classical and quantum programming, type theory, concurrency, natural language processing and machine learning;

• graphical calculi, including string diagrams, Petri nets and reaction networks;

• languages and frameworks, including process algebras, proof nets, type theory and game semantics;

• abstract algebra and pure category theory, including monoidal category theory, higher category theory, operads, polygraphs, and relationships to homotopy theory;

• quantum algebra, including quantum computation and representation theory;

• tools and techniques, including rewriting, formal proofs and proof assistants, and game theory;

• industrial applications, including case studies and real-world problem descriptions.

This new series aims to bring together the communities behind many previous successful events which have taken place over the last decade, including “Categories, Logic and Physics”, “Categories, Logic and Physics (Scotland)”, “Higher-Dimensional Rewriting and Applications”, “String Diagrams in Computation, Logic and Physics”, “Applied Category Theory”, “Simons Workshop on Compositionality”, and the “Peripatetic Seminar in Sheaves and Logic”.

SYCO will be a regular fixture in the academic calendar, running several times a year and becoming over time a recognized venue for presentation and discussion of results in an informal and friendly atmosphere. To help create this community, and to avoid the need to make difficult choices between strong submissions, in the event that more good-quality submissions are received than can be accommodated in the timetable, the programme committee may choose to defer some submissions to a future meeting, rather than reject them. This would be done based largely on submission order, giving an incentive for early submission, but would also take into account other requirements, such as ensuring a broad scientific programme. Deferred submissions can be re-submitted to any future SYCO meeting, where they would not need peer review and would be prioritised for inclusion in the programme. This will allow us to ensure that speakers have enough time to present their ideas, without creating an unnecessarily competitive reviewing process. Meetings will be held sufficiently frequently to avoid a backlog of deferred papers.

### Invited speakers

John Baez, University of California, Riverside: Props in network theory.

Tobias Fritz, Perimeter Institute for Theoretical Physics: Categorical probability: results and challenges.

Nina Otter, University of California, Los Angeles: A unified framework for equivalences in social networks.

### Important dates

All times are anywhere-on-earth.

• Submission deadline: Wednesday 24 April 2019
• Author notification: Wednesday 1 May 2019
• Symposium dates: Wednesday 22 and Thursday 23 May 2019

### Submission

Submission is by EasyChair, via the following link:

Submissions should present research results in sufficient detail to allow them to be properly considered by members of the programme committee, who will assess papers with regards to significance, clarity, correctness, and scope. We encourage the submission of work in progress, as well as mature results. There are no proceedings, so work can be submitted even if it has been previously published, or has been submitted for consideration elsewhere. There is no specific formatting requirement, and no page limit, although for long submissions authors should understand that reviewers may not be able to read the entire document in detail.

### Programme Committee

• Miriam Backens, University of Oxford
• Ross Duncan, University of Strathclyde and Cambridge Quantum Computing
• Brendan Fong, Massachusetts Institute of Technology
• Stefano Gogioso, University of Oxford
• Chris Heunen, University of Edinburgh
• Dominic Horsman, University of Grenoble
• Martti Karvonen, University of Edinburgh
• Kohei Kishida, Dalhousie University (chair)
• Andre Kornell, University of California, Davis
• Martha Lewis, University of Amsterdam
• Samuel Mimram, École Polytechnique
• Benjamin Musto, University of Oxford
• Nina Otter, University of California, Los Angeles
• Simona Paoli, University of Leicester
• Dorette Pronk, Dalhousie University
• Pawel Sobocinski, University of Southampton
• Joshua Tan, University of Oxford
• Sean Tull, University of Oxford
• Dominic Verdon, University of Bristol
• Jamie Vicary, University of Birmingham and University of Oxford
• Maaike Zwart, University of Oxford

## Hidden Symmetries of the Hydrogen Atom

4 April, 2019

Here’s the math colloquium talk I gave at Georgia Tech this week:

Abstract. A classical particle moving in an inverse square central force, like a planet in the gravitational field of the Sun, moves in orbits that do not precess. This lack of precession, special to the inverse square force, indicates the presence of extra conserved quantities beyond the obvious ones. Thanks to Noether’s theorem, these indicate the presence of extra symmetries. It turns out that not only rotations in 3 dimensions, but also in 4 dimensions, act as symmetries of this system. These extra symmetries are also present in the quantum version of the problem, where they explain some surprising features of the hydrogen atom. The quest to fully understand these symmetries leads to some fascinating mathematical adventures.

I left out a lot of calculations, but someday I want to write a paper where I put them all in. This material is all known, but I feel like explaining it my own way.

In the process of creating the slides and giving the talk, though, I realized there’s a lot I don’t understand yet. Some of it is embarrassingly basic! For example, I give Greg Egan’s nice intuitive argument for how you can get some ‘Runge–Lenz symmetries’ in the 2d Kepler problem. I might as well just quote his article:

• Greg Egan, The ellipse and the atom.

He says:

Now, one way to find orbits with the same energy is by applying a rotation that leaves the sun fixed but repositions the planet. Any ordinary three-dimensional rotation can be used in this way, yielding another orbit with exactly the same shape, but oriented differently.

But there is another transformation we can use to give us a new orbit without changing the total energy. If we grab hold of the planet at either of the points where it’s travelling parallel to the axis of the ellipse, and then swing it along a circular arc centred on the sun, we can reposition it without altering its distance from the sun. But rather than rotating its velocity in the same fashion (as we would do if we wanted to rotate the orbit as a whole) we leave its velocity vector unchanged: its direction, as well as its length, stays the same.

Since we haven’t changed the planet’s distance from the sun, its potential energy is unaltered, and since we haven’t changed its velocity, its kinetic energy is the same. What’s more, since the speed of a planet of a given mass when it’s moving parallel to the axis of its orbit depends only on its total energy, the planet will still be in that state with respect to its new orbit, and so the new orbit’s axis must be parallel to the axis of the original orbit.
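Egan's observation is easy to check numerically. In the sketch below (the units with $GM = m = 1$ and the sample position and velocity are my assumptions, not from the post), rotating the planet's position about the sun while leaving its velocity vector untouched preserves the total energy, since neither the distance from the sun nor the speed changes.

```python
import math

def energy(r, v, GM=1.0, m=1.0):
    """Total energy of a planet at position r with velocity v,
    for a sun at the origin exerting an inverse-square force."""
    speed2 = v[0]**2 + v[1]**2
    dist = math.hypot(r[0], r[1])
    return 0.5 * m * speed2 - GM * m / dist

def swing(r, theta):
    """Rotate the position r about the sun by angle theta,
    leaving the velocity untouched (Egan's transformation)."""
    c, s = math.cos(theta), math.sin(theta)
    return (c * r[0] - s * r[1], s * r[0] + c * r[1])

# A sample planet, moving parallel to the x-axis.
r = (0.3, 0.9)
v = (1.1, 0.0)

E0 = energy(r, v)
for theta in (0.1, 0.5, 2.0):
    assert abs(energy(swing(r, theta), v) - E0) < 1e-12
```

Of course this only verifies energy conservation, not the subtler claim that the new orbit's axis stays parallel to the old one.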

Rotations together with these ‘Runge–Lenz transformations’ generate an SO(3) action on the space of elliptical orbits of any given energy. But what’s the most geometrically vivid description of this SO(3) action?

Someone at my talk noted that you could grab the planet at any point of its path and move it to anywhere the same distance from the Sun, while keeping its speed the same, and get a new orbit with the same energy. Are all the SO(3) transformations of this form?

I have a bunch more questions, but this one is the simplest!

## The Pi Calculus: Towards Global Computing

4 April, 2019

Check out the video of Christian Williams’s talk in the Applied Category Theory Seminar here at U. C. Riverside. It was nicely edited by Paola Fernandez and uploaded by Joe Moeller.

Abstract. Historically, code represents a sequence of instructions for a single machine. Each computer is its own world, and only interacts with others by sending and receiving data through external ports. As society becomes more interconnected, this paradigm becomes more inadequate – these virtually isolated nodes tend to form networks of great bottleneck and opacity. Communication is a fundamental and integral part of computing, and needs to be incorporated in the theory of computation.

To describe systems of interacting agents with dynamic interconnection, in the late 1980s Robin Milner invented the pi calculus: a formal language in which a term represents an open, evolving system of processes (or agents) which communicate over names (or channels). Because a computer is itself such a system, the pi calculus can be seen as a generalization of traditional computing languages; there is an embedding of lambda into pi – but there is an important change in focus: programming is less like controlling a machine and more like designing an ecosystem of autonomous organisms.

We review the basics of the pi calculus, and explore a variety of examples which demonstrate this new approach to programming. We will discuss some of the history of these ideas, called “process algebra”, and see exciting modern applications in blockchain and biology.
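To give a flavor of channel-based communication, here is a toy sketch in Python. It is emphatically not Milner's formal calculus: processes form a "soup" in which a sender and a receiver on the same channel can synchronize, with receiver continuations encoded as hypothetical Python functions from the received name to a list of new processes.

```python
# Send = ('out', channel, message);
# Receive = ('in', channel, variable, continuation).

def step(soup):
    """Perform one communication: match a sender and a receiver on the
    same channel, deliver the name, and run the continuation.
    Returns the new soup, or None if no communication is possible."""
    for i, p in enumerate(soup):
        for j, q in enumerate(soup):
            if i != j and p[0] == 'out' and q[0] == 'in' and p[1] == q[1]:
                rest = [r for k, r in enumerate(soup) if k not in (i, j)]
                _, chan, var, cont = q
                return rest + cont(p[2])  # substitute the message for var
    return None

soup = [('in', 'x', 'z', lambda z: [('out', z, 'hello')]),  # x(z).z<hello>
        ('out', 'x', 'y'),                                  # x<y>
        ('in', 'y', 'w', lambda w: [])]                     # y(w).0
soup = step(soup)   # communicate on x: the soup is now y(w).0 | y<hello>
soup = step(soup)   # communicate on y: nothing is left
assert soup == []
```

Even this crude sketch shows the change in focus the abstract describes: the "program" is not a sequence of instructions but a collection of agents whose behavior emerges from their interactions.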

“… as we seriously address the problem of modelling mobile communicating systems we get a sense of completing a model which was previously incomplete; for we can now begin to describe what goes on outside a computer in the same terms as what goes on inside – i.e. in terms of interaction. Turning this observation inside-out, we may say that we inhabit a global computer, an informatic world which demands to be understood just as fundamentally as physicists understand the material world.” — Robin Milner

The talk slides are here.

• Robin Milner, The polyadic pi calculus: a tutorial.

• Robin Milner, Communicating and Mobile Systems.

• Joachim Parrow, An introduction to the pi calculus.

## Social Contagion Modeled on Random Networks

29 March, 2019

Check out the video of Daniel Cicala’s talk, the fourth in the Applied Category Theory Seminar here at U. C. Riverside. It was nicely edited by Paola Fernandez and uploaded by Joe Moeller.

Abstract. A social contagion may manifest as a cultural trend, a spreading opinion, idea or belief. In this talk, we explore a simple model of social contagion on a random network. We also look at the effect that network connectivity, edge distribution, and heterogeneity have on the diffusion of a contagion.

The talk slides are here.

• Mason A. Porter and James P. Gleeson, Dynamical systems on networks: a tutorial.

• Duncan J. Watts, A simple model of global cascades on random networks.
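The threshold model in Watts's paper can be sketched in a few lines of standard-library Python (the graph size, edge probability, seed set, and threshold below are illustrative assumptions, not values from the talk): a node adopts the contagion once a large enough fraction of its neighbors have adopted it.

```python
import random

def erdos_renyi(n, p, rng):
    """Adjacency sets of a random undirected graph G(n, p)."""
    adj = [set() for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            if rng.random() < p:
                adj[i].add(j)
                adj[j].add(i)
    return adj

def cascade(adj, seeds, threshold):
    """Watts-style cascade: a node adopts once the fraction of its
    neighbors that have adopted reaches the threshold."""
    active = set(seeds)
    changed = True
    while changed:
        changed = False
        for v, nbrs in enumerate(adj):
            if v not in active and nbrs:
                if len(nbrs & active) / len(nbrs) >= threshold:
                    active.add(v)
                    changed = True
    return active

rng = random.Random(0)
adj = erdos_renyi(200, 0.03, rng)
final = cascade(adj, seeds={0, 1, 2}, threshold=0.18)
print(f"{len(final)} of 200 nodes adopted")
```

Playing with the edge probability and threshold illustrates the talk's point: whether a handful of seeds triggers a global cascade depends delicately on the network's connectivity.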

## Complex Adaptive System Design (Part 9)

24 March, 2019

Here’s our latest paper for the Complex Adaptive System Composition and Design Environment project:

• John Baez, John Foley and Joe Moeller, Network models from Petri nets with catalysts.

Check it out! And please report typos, mistakes, or anything you have trouble understanding! I’m happy to answer questions here.

### The idea

Petri nets are a widely studied formalism for describing collections of entities of different types, and how they turn into other entities. I’ve written a lot about them here. Network models are a formalism for designing and tasking networks of agents, which our team invented for this project. Here we combine these ideas! This is worthwhile because while both formalisms involve networks, they serve a different function, and are in some sense complementary.

A Petri net can be drawn as a bipartite directed graph with vertices of two kinds: places, drawn as circles, and transitions drawn as squares:

When we run a Petri net, we start by placing a finite number of dots called tokens in each place:

This is called a marking. Then we repeatedly change the marking using the transitions. For example, the above marking can change to this:

and then this:

Thus, the places represent different types of entity, and the transitions are ways that one collection of entities of specified types can turn into another such collection.
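This token game is easy to sketch in Python (a hypothetical encoding, not from the paper): markings are multisets of tokens, and a transition fires by consuming its input tokens and producing its output tokens.

```python
from collections import Counter

# A hypothetical one-transition Petri net: the transition consumes
# one token each from places a and b, and produces two tokens in c.
transitions = {
    'tau': {'in': Counter({'a': 1, 'b': 1}), 'out': Counter({'c': 2})},
}

def enabled(marking, t):
    """A transition can fire only if the marking has enough input tokens."""
    return all(marking[p] >= n for p, n in t['in'].items())

def fire(marking, t):
    """Consume the inputs and produce the outputs, giving a new marking."""
    assert enabled(marking, t)
    return marking - t['in'] + t['out']

m = Counter({'a': 2, 'b': 1})
m = fire(m, transitions['tau'])       # one firing: a + b -> 2c
assert m == Counter({'a': 1, 'c': 2})
```

Running a Petri net then just means repeatedly firing enabled transitions, each firing changing the marking as in the pictures above.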

Network models serve a different function than Petri nets: they are a general tool for working with networks of many kinds. Mathematically a network model is a lax symmetric monoidal functor $G \colon \mathsf{S}(C) \to \mathsf{Cat},$ where $\mathsf{S}(C)$ is the free strict symmetric monoidal category on a set $C.$ Elements of $C$ represent different kinds of ‘agents’. Unlike in a Petri net, we do not usually consider processes where these agents turn into other agents. Instead, we wish to study everything that can be done with a fixed collection of agents. Any object $x \in \mathsf{S}(C)$ is of the form $c_1 \otimes \cdots \otimes c_n$ for some $c_i \in C;$ thus, it describes a collection of agents of various kinds. The functor $G$ maps this object to a category $G(x)$ that describes everything that can be done with this collection of agents.

In many examples considered so far, $G(x)$ is a category whose morphisms are graphs of some sort whose nodes are agents of types $c_1, \dots, c_n.$ Composing these morphisms corresponds to ‘overlaying’ graphs. Network models of this sort let us design networks where the nodes are agents and the edges are communication channels or shared commitments. In our first paper the operation of overlaying graphs was always commutative:

• John Baez, John Foley, Joe Moeller and Blake Pollard, Network models.

Subsequently Joe introduced a more general noncommutative overlay operation:

• Joe Moeller, Noncommutative network models.

This lets us design networks where each agent has a limit on how many communication channels or commitments it can handle; the noncommutativity lets us take a ‘first come, first served’ approach to resolving conflicting commitments.

Here we take a different tack: we instead take $G(x)$ to be a category whose morphisms are processes that the given collection of agents, $x,$ can carry out. Composition of morphisms corresponds to carrying out first one process and then another.

This idea meshes well with Petri net theory, because any Petri net $P$ determines a symmetric monoidal category $FP$ whose morphisms are processes that can be carried out using this Petri net. More precisely, the objects in $FP$ are markings of $P,$ and the morphisms are sequences of ways to change these markings using transitions, e.g.:

Given a Petri net, then, how do we construct a network model $G \colon \mathsf{S}(C) \to \mathsf{Cat},$ and in particular, what is the set $C$? In a network model the elements of $C$ represent different kinds of agents. In the simplest scenario, these agents persist in time. Thus, it is natural to take $C$ to be some set of ‘catalysts’. In chemistry, a reaction may require a catalyst to proceed, but it neither increases nor decreases the amount of this catalyst present. In everyday life, a door serves as a catalyst: it lets you walk through a wall, and it doesn’t get used up in the process!

For a Petri net, ‘catalysts’ are species that are neither increased nor decreased in number by any transition. For example, in the following Petri net, species $a$ is a catalyst:

but neither $b$ nor $c$ is a catalyst. The transition $\tau_1$ requires one token of type $a$ as input to proceed, but it also outputs one token of this type, so the total number of such tokens is unchanged. Similarly, the transition $\tau_2$ requires no tokens of type $a$ as input to proceed, and it also outputs no tokens of this type, so the total number of such tokens is unchanged.
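The catalyst condition is easy to state in code: a species is a catalyst when every transition consumes exactly as many of its tokens as it produces. Here is a minimal sketch, where the two-transition net is a hypothetical stand-in for the one pictured (in it, $\tau_1$ uses and returns one token of $a$, and $\tau_2$ never touches $a$).

```python
from collections import Counter

def catalysts(transitions, species):
    """Species whose token count no transition changes: for every
    transition, tokens consumed equal tokens produced."""
    return {s for s in species
            if all(t['in'][s] == t['out'][s] for t in transitions)}

tau1 = {'in': Counter({'a': 1, 'b': 1}), 'out': Counter({'a': 1, 'c': 1})}
tau2 = {'in': Counter({'c': 1}), 'out': Counter({'b': 1})}

print(catalysts([tau1, tau2], {'a', 'b', 'c'}))   # {'a'}
```

Species $b$ and $c$ fail the test because each transition changes their counts, even though the two transitions together shuffle them back and forth.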

In Theorem 11 of our paper, we prove that given any Petri net $P,$ and any subset $C$ of the catalysts of $P,$ there is a network model

$G \colon \mathsf{S}(C) \to \mathsf{Cat}$

An object $x \in \mathsf{S}(C)$ says how many tokens of each catalyst are present; $G(x)$ is then the subcategory of $FP$ where the objects are markings that have this specified amount of each catalyst, and morphisms are processes going between these.

From the functor $G \colon \mathsf{S}(C) \to \mathsf{Cat}$ we can construct a category $\int G$ by ‘gluing together’ all the categories $G(x)$ using the Grothendieck construction. Because $G$ is symmetric monoidal we can use an enhanced version of this construction to make $\int G$ into a symmetric monoidal category. We already did this in our first paper on network models, but by now the math has been better worked out here:

• Joe Moeller and Christina Vasilakopoulou, Monoidal Grothendieck construction.

The tensor product in $\int G$ describes doing processes ‘in parallel’. The category $\int G$ is similar to $FP,$ but it is better suited to applications where agents each have their own ‘individuality’, because $FP$ is actually a commutative monoidal category, where permuting agents has no effect at all, while $\int G$ is not so degenerate. In Theorem 12 of our paper we make this precise by more concretely describing $\int G$ as a symmetric monoidal category, and clarifying its relation to $FP.$

There are no morphisms between an object of $G(x)$ and an object of $G(x')$ when $x \not\cong x',$ since no transitions can change the amount of catalysts present. The category $FP$ is thus a ‘disjoint union’, or more technically a coproduct, of subcategories $FP_i$ where $i,$ an element of the free commutative monoid on $C,$ specifies the amount of each catalyst present.

The tensor product on $FP$ has the property that tensoring an object in $FP_i$ with one in $FP_j$ gives an object in $FP_{i+j},$ and similarly for morphisms. However, in Theorem 14 we show that each subcategory $FP_i$ also has its own tensor product, which describes doing one process after another while reusing catalysts.

This tensor product is a very cool thing. On the one hand it’s quite obvious: for example, if two people want to walk through a door, they can both do it, one at a time, because the door doesn’t get used up when someone walks through it. On the other hand, it’s mathematically interesting: it turns out to give, not a monoidal category, but something called a ‘premonoidal’ category. This concept, which we explain in our paper, was invented by John Power and Edmund Robinson for use in theoretical computer science.

The paper has lots of pictures involving jeeps and boats, which serve as catalysts to carry people first from a base to the shore and then from the shore to an island. I think these make it clear that the underlying ideas are quite commonsensical. But they need to be formalized to program them into a computer—and it’s nice that doing this brings in some classic themes in category theory!

Some posts in this series:

Part 2. Metron’s software for system design.

Part 3. Operads: the basic idea.

Part 4. Network operads: an easy example.

Part 5. Algebras of network operads: some easy examples.

Part 6. Network models.

Part 7. Step-by-step compositional design and tasking using commitment networks.

Part 8. Compositional tasking using category-valued network models.

Part 9. Network models from Petri nets with catalysts.

## Algebraic Geometry

15 March, 2019

A more polished version of this article appeared in Nautilus on February 28, 2019. This version has some more material.

### How I Learned to Stop Worrying and Love Algebraic Geometry

In my 50s, too old to become a real expert, I have finally fallen in love with algebraic geometry. As the name suggests, this is the study of geometry using algebra. Around 1637, Pierre Fermat and René Descartes laid the groundwork for this subject by taking a plane, mentally drawing a grid on it as we now do with graph paper, and calling the coordinates $x$ and $y$. We can then write down an equation like $x^2 + y^2 = 1$, and there will be a curve consisting of points whose coordinates obey this equation. In this example, we get a circle!

It was a revolutionary idea at the time, because it lets us systematically convert questions about geometry into questions about equations, which we can solve if we’re good enough at algebra. Some mathematicians spend their whole lives on this majestic subject. But I never really liked it much—until recently. Now I’ve connected it to my interest in quantum physics.

We can describe many interesting curves with just polynomials. For example, roll a circle inside a circle three times as big. You get a curve with three sharp corners called a “deltoid”, shown in red above. It’s not obvious that you can describe this using a polynomial equation, but you can. The great mathematician Leonhard Euler dreamt this up in 1745.
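Indeed, using the standard parametrization of the deltoid and its known quartic implicit equation (neither appears in the post, so take both as outside assumptions), one can check numerically that a single polynomial vanishes along the whole curve:

```python
import math
import random

def deltoid_point(t):
    """Point traced by a circle of radius 1 rolling inside a circle
    of radius 3 (the standard parametrization of the deltoid)."""
    return 2*math.cos(t) + math.cos(2*t), 2*math.sin(t) - math.sin(2*t)

def implicit(x, y):
    """Quartic polynomial whose zero set is the deltoid:
    (x^2+y^2)^2 + 18(x^2+y^2) - 27 - 8(x^3 - 3xy^2)."""
    r2 = x*x + y*y
    return r2*r2 + 18*r2 - 27 - 8*(x**3 - 3*x*y*y)

rng = random.Random(0)
for _ in range(100):
    x, y = deltoid_point(rng.uniform(0, 2*math.pi))
    assert abs(implicit(x, y)) < 1e-9
```

So a curve traced out by trigonometric functions can nonetheless be cut out by a single degree-4 polynomial, which is exactly the kind of translation algebraic geometry lives on.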

As a kid I liked physics better than math. My uncle Albert Baez, father of the famous folk singer Joan Baez, worked for UNESCO, helping developing countries with physics education. My parents lived in Washington D.C.. Whenever my uncle came to town, he’d open his suitcase, pull out things like magnets or holograms, and use them to explain physics to me. This was fascinating. When I was eight, he gave me a copy of the college physics textbook he wrote. While I couldn’t understand it, I knew right away that I wanted to. I decided to become a physicist.

My parents were a bit worried, because they knew physicists needed mathematics, and I didn’t seem very good at that. I found long division insufferably boring, and refused to do my math homework, with its endless repetitive drills. But later, when I realized that by fiddling around with equations I could learn about the universe, I was hooked. The mysterious symbols seemed like magic spells. And in a way, they are. Science is the magic that actually works.

And so I learned to love math, but in a certain special way: as the key to physics. In college I wound up majoring in math, in part because I was no good at experiments. I learned quantum mechanics and general relativity, studying the necessary branches of math as I went. I was fascinated by Eugene Wigner’s question about the “unreasonable effectiveness” of mathematics in describing the universe. As he put it, “The miracle of the appropriateness of the language of mathematics for the formulation of the laws of physics is a wonderful gift which we neither understand nor deserve.”

Despite Wigner’s quasi-religious language, I didn’t think that God was an explanation. As far as I can tell, that hypothesis raises more questions than it answers. I studied mathematical logic and tried to prove that any universe containing a being like us, able to understand the laws of that universe, must have some special properties. I failed utterly, though I managed to get my first publishable paper out of the wreckage. I decided that there must be some deep mystery here, that we might someday understand, but only after we understood what the laws of physics actually are: not the pretty good approximate laws we know now, but the actual correct laws.

As a youthful optimist I felt sure such laws must exist, and that we could know them. And then, surely, these laws would give a clue to the deeper puzzle: why the universe is governed by mathematical laws in the first place.

So I went to graduate school—to a math department, but motivated by physics. I already knew that there was too much mathematics to ever learn it all, so I tried to focus on what mattered to me. And one thing that did not matter to me, I believed, was algebraic geometry.

How could any mathematician not fall in love with algebraic geometry? Here’s why. In its classic form, this subject considers only polynomial equations—equations that describe not just curves, but also higher-dimensional shapes called “varieties.” So $x^2 + y^2 = 1$ is fine, and so is $x^{47} - 2xyz = y^7$, but an equation with sines or cosines, or other functions, is out of bounds—unless we can figure out how to convert it into an equation with just polynomials. To me as a graduate student, this seemed like a terrible limitation. After all, physics problems involve plenty of functions that aren’t polynomials.

This is Cayley’s nodal cubic surface. It’s famous because it is the variety with the most nodes (those pointy things) that is described by a cubic equation. The equation is $(xy + yz + zx)(1 - x - y - z) + xyz = 0$, and it’s called “cubic” because we’re multiplying at most three variables at once.

Why does algebraic geometry restrict itself to polynomials? Mathematicians study curves described by all sorts of equations – but sines, cosines and other fancy functions are only a distraction from the fundamental mysteries of the relation between geometry and algebra. Thus, by restricting the breadth of their investigations, algebraic geometers can dig deeper. They’ve been working away for centuries, and by now their mastery of polynomial equations is truly staggering. Algebraic geometry has become a powerful tool in number theory, cryptography and other subjects.

I once met a grad student at Harvard, and I asked him what he was studying. He said one word, in a portentous tone: “Hartshorne.” He meant Robin Hartshorne’s textbook Algebraic Geometry, published in 1977. Supposedly an introduction to the subject, it’s actually a very hard-hitting tome. Consider Wikipedia’s description:

The first chapter, titled “Varieties,” deals with the classical algebraic geometry of varieties over algebraically closed fields. This chapter uses many classical results in commutative algebra, including Hilbert’s Nullstellensatz, with the books by Atiyah–Macdonald, Matsumura, and Zariski–Samuel as usual references.

If you can’t make heads or tails of this… well, that’s exactly my point. To penetrate even the first chapter of Hartshorne, you need quite a bit of background. To read Hartshorne is to try to catch up with centuries of geniuses running as fast as they could.

One of these geniuses was Hartshorne’s thesis advisor, Alexander Grothendieck. From about 1960 to 1970, Grothendieck revolutionized algebraic geometry as part of an epic quest to prove some conjectures about number theory, the Weil Conjectures. He had the idea that these could be translated into questions about geometry and settled that way. But making this idea precise required a huge amount of work. To carry it out, he started a seminar. He gave talks almost every day, and enlisted the help of some of the best mathematicians in Paris.

Alexander Grothendieck at his seminar in Paris

Working nonstop for a decade, they produced tens of thousands of pages of new mathematics, packed with mind-blowing concepts. In the end, using these ideas, Grothendieck succeeded in proving all the Weil Conjectures except the final, most challenging one—a close relative of the famous Riemann Hypothesis, for which a million dollar prize still waits.

Towards the end of this period, Grothendieck also became increasingly involved in radical politics and environmentalism. In 1970, when he learned that his research institute was partly funded by the military, he resigned. He left Paris and moved to teach in the south of France. Two years later a student of his proved the last of the Weil Conjectures—but in a way that Grothendieck disliked, because it used a “trick” rather than following the grand plan he had in mind. He was probably also jealous that someone else reached the summit before him. As time went by, Grothendieck became increasingly embittered with academia. And in 1991, he disappeared!

We now know that he moved to the Pyrenees, where he lived until his death in 2014. He seems to have largely lost interest in mathematics and turned his attention to spiritual matters. Some reports make him seem quite unhinged. It is hard to say. At least 20,000 pages of his writings remain unpublished.

During his most productive years, even though he dominated the French school of algebraic geometry, many mathematicians considered Grothendieck’s ideas “too abstract.” This sounds a bit strange, given how abstract all mathematics is. What’s inarguably true is that it takes time and work to absorb his ideas. As a grad student I steered clear of them, since I was busy struggling to learn physics. There, too, centuries of geniuses have been working full-speed, and anyone wanting to reach the cutting edge has a lot of catching up to do. But, later in my career, my research led me to Grothendieck’s work.

If I had taken a different path, I might have come to grips with his work through string theory. String theorists postulate that besides the visible dimensions of space and time—three of space and one of time—there are extra dimensions of space curled up too small to see. In some of their theories these extra dimensions form a variety. So, string theorists easily get pulled into sophisticated questions about algebraic geometry. And this, in turn, pulls them toward Grothendieck.

A slice of one particular variety, called a “quintic threefold,” that can be used to describe the extra curled-up dimensions of space in string theory.

Indeed, some of the best advertisements for string theory are not successful predictions of experimental results—it’s made absolutely none of these—but rather, its ability to solve problems within pure mathematics, including algebraic geometry. For example, suppose you have a typical quintic threefold: a 3-dimensional variety described by a polynomial equation of degree 5. How many curves can you draw on it that are described by polynomials of degree 4? I’m sure this question has occurred to you. So, you’ll be pleased to know that the answer is exactly 317,206,375.

This sort of puzzle is quite hard, but string theorists have figured out a systematic way to solve many puzzles of this sort, including much harder ones. Thus, we now see string theorists talking with algebraic geometers, each able to surprise the other with their insights.

My own interest in Grothendieck’s work had a different source. I’ve always had serious doubts about string theory, and counting curves on varieties is the last thing I’d ever try. Like rock climbing, it’s exciting to watch but too scary to actually attempt myself. But it turns out that Grothendieck’s ideas are so general and powerful that they spill out beyond algebraic geometry into many other subjects. In particular, his 600-page unpublished manuscript Pursuing Stacks, written in 1983, made a big impression on me. In it, he argues that topology—very loosely, the theory of what space can be shaped like, if we don’t care about bending or stretching it, just what kind of holes it has—can be completely reduced to algebra!

At first this idea may sound just like algebraic geometry, where we use algebra to describe geometrical shapes, like curves or higher-dimensional varieties. But “algebraic topology” winds up having a very different flavor, because in topology we don’t restrict our shapes to be described by polynomial equations. Instead of dealing with beautiful gems, we’re dealing with floppy, flexible blobs—so the kind of algebra we need is different.

Mathematicians sometimes joke that a topologist cannot tell the difference between a doughnut and a coffee cup.

Algebraic topology is a beautiful subject that had been around since long before Grothendieck—but he was one of the first to seriously propose a method to reduce all of topology to algebra.

Thanks to my work on physics, I found his proposal tremendously exciting when I came across it. At the time I had taken up the challenge of trying to unify our two best theories of physics: quantum physics, which describes all the forces except gravity, and general relativity, which describes gravity. It seems that until we do this, our understanding of the fundamental laws of physics is doomed to be incomplete. But it’s devilishly difficult. One reason is that quantum physics is based on algebra, while general relativity involves a lot of topology. But that suggests an avenue of attack: if we can figure out how to express topology in terms of algebra, we might find a better language in which to formulate a theory of quantum gravity.

My physics colleagues will let out a howl here, and complain that I am oversimplifying. Yes, I’m oversimplifying. There is more to quantum physics than mere algebra, and more to general relativity than mere topology. Nonetheless, the possible benefits to physics of reducing topology to algebra are what got me so excited about Grothendieck’s work.

So, starting in the 1990s, I tried to understand the powerful abstract concepts that Grothendieck had invented—and by now I have partially succeeded. Some mathematicians find these concepts to be the hard part of algebraic geometry. They now seem like the easy part to me. The hard part, for me, is the nitty-gritty details. First, there is all the material in those texts that Hartshorne takes as prerequisites: “the books by Atiyah–Macdonald, Matsumura, and Zariski–Samuel.” But there is also a lot more.

So, while I now have some of what it takes to read Hartshorne, until recently I was too intimidated to learn it. A student of physics once asked a famous expert how much mathematics a physicist needs to know. The expert replied: “More.” Indeed, the job of learning mathematics is never done, so I focus on the things that seem most important and/or fun. Until last year, algebraic geometry never rose to the top of the list.

What changed? I realized that algebraic geometry is connected to the relation between classical and quantum physics. Classical physics is the physics of Newton, where we imagine that we can measure everything with complete precision, at least in principle. Quantum physics is the physics of Schrödinger and Heisenberg, governed by the uncertainty principle: if we measure some aspects of a physical system with complete precision, others must remain undetermined.

For example, any spinning object has an “angular momentum”. In classical mechanics we visualize this as an arrow pointing along the axis of rotation, whose length is proportional to how fast the object is spinning. And in classical mechanics, we assume we can measure this arrow precisely. In quantum mechanics—a more accurate description of reality—this turns out not to be true. For example, if we know how far this arrow points in the $x$ direction, we cannot know how far it points in the $y$ direction. This uncertainty is too small to be noticeable for a spinning basketball, but for an electron it is important: physicists had only a rough understanding of electrons until they took this into account.
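For readers who like to see this concretely, here is a minimal sketch (Python with NumPy, in units where ħ = 1) of why the $x$ and $y$ components of angular momentum cannot both be pinned down: the operators representing them fail to commute.

```python
import numpy as np

# Spin-1/2 angular momentum operators, built from the Pauli matrices
# (in units where hbar = 1).
Sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
Sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

# The commutator [Sx, Sy] equals i*Sz, which is nonzero: this is the
# algebraic fact behind the uncertainty relation for angular momentum.
commutator = Sx @ Sy - Sy @ Sx
print(np.allclose(commutator, 1j * Sz))  # True
```

Because this commutator is nonzero, no quantum state can assign definite values to both components at once.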

Physicists often want to “quantize” classical physics problems. That is, they start with the classical description of some physical system, and they want to figure out the quantum description. There is no fully general and completely systematic procedure for doing this. This should not be surprising: the two worldviews are so different. However, there are useful recipes for quantization. The most systematic ones apply to a very limited selection of physics problems.

For example, sometimes in classical physics we can describe a system by a point in a variety. This is not something one generally expects, but it happens in plenty of important cases. For example, consider a spinning object: if we fix how long its angular momentum arrow is, the arrow can still point in any direction, so its tip must lie on a sphere. Thus, we can describe a spinning object by a point on a sphere. And this sphere is actually a variety, the “Riemann sphere”, named after Bernhard Riemann, one of the greatest algebraic geometers of the 1800s.
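To see why this sphere deserves the name “Riemann sphere,” here is a small illustrative sketch (Python with NumPy). Stereographic projection identifies each point of the unit sphere, except the north pole, with a complex number; the formulas below are the standard ones.

```python
import numpy as np

def to_complex(x, y, z):
    # Stereographic projection from the north pole (0, 0, 1):
    # undefined at the north pole itself.
    return (x + 1j * y) / (1 - z)

def to_sphere(w):
    # The inverse map, sending a complex number back to the unit sphere.
    d = 1 + abs(w) ** 2
    return (2 * w.real / d, 2 * w.imag / d, (abs(w) ** 2 - 1) / d)

# Round trip: project a point on the sphere and map it back.
p = (0.6, 0.0, 0.8)
print(np.allclose(to_sphere(to_complex(*p)), p))  # True
```

Adding a single “point at infinity” for the north pole turns the complex plane into the sphere, which is how the sphere becomes a complex variety.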

When a classical physics problem is described by a variety, some magic happens. The process of quantization becomes completely systematic—and surprisingly simple. There is even a kind of reverse process, which one might call “classicization,” that lets you turn the quantum description back into a classical description. The classical and quantum approaches to physics become tightly linked, and one can take ideas from either approach and see what they say about the other one. For example, each point on the variety describes not only a state of the classical system (in our example, a definite direction for the angular momentum), but also a state of the corresponding quantum system—even though the latter is governed by the uncertainty principle. The quantum state is the “best quantum approximation” to the classical state.
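Here is an illustrative sketch of this “best quantum approximation” in the simplest case, a spin-1/2 system (Python with NumPy, units where ħ = 1): the quantum state associated to a point (θ, φ) on the sphere has its expected angular momentum pointing in exactly that classical direction.

```python
import numpy as np

# Spin-1/2 angular momentum operators (in units where hbar = 1).
Sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
Sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
Sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2

# Spin coherent state for the direction with polar angle theta and
# azimuthal angle phi.
theta, phi = 1.1, 0.7
psi = np.array([np.cos(theta / 2),
                np.exp(1j * phi) * np.sin(theta / 2)])

# Expected angular momentum vector in this state...
expect = [np.real(psi.conj() @ S @ psi) for S in (Sx, Sy, Sz)]
# ...versus the classical arrow of length 1/2 in that direction.
classical = [0.5 * np.sin(theta) * np.cos(phi),
             0.5 * np.sin(theta) * np.sin(phi),
             0.5 * np.cos(theta)]
print(np.allclose(expect, classical))  # True
```

The individual components still obey the uncertainty principle, but their averages match the classical state exactly.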

Even better, in this situation many of the basic theorems about algebraic geometry can be seen as facts about quantization! Since quantization is something I’ve been thinking about for a long time, this makes me very happy. Richard Feynman once said that for him to make progress on a tough physics problem, he needed to have some sort of special angle on it:

I have to think that I have some kind of inside track on this problem. That is, I have some sort of talent that the other guys aren’t using, or some way of looking, and they are being foolish not to notice this wonderful way to look at it. I have to think I have a little better chance than the other guys, for some reason. I know in my heart that it is likely that the reason is false, and likely the particular attitude I’m taking with it was thought of by others. I don’t care; I fool myself into thinking I have an extra chance.

This may be what I’d been missing with algebraic geometry until now. Algebraic geometry is not just a problem to be solved, it’s a body of knowledge—but it’s such a large, intimidating body of knowledge that I didn’t dare tackle it until I got an inside track. Now I can read Hartshorne, translate some of the results into facts about physics, and feel I have a chance at understanding this stuff. And it’s a great feeling.

For the details of how algebraic geometry connects classical and quantum mechanics, see my talk slides and series of blog articles.

## Metal-Organic Frameworks

11 March, 2019

I’ve been talking about new technologies for fighting climate change, with an emphasis on negative carbon emissions. Now let’s begin looking at one technology in more detail. This will take a few articles. I want to start with the basics.

A metal-organic framework or MOF is a molecular structure built from metal atoms and organic compounds. There are many kinds. They can be 3-dimensional, like this one made by scientists at CSIRO in Australia:

And they can be full of microscopic holes, giving them an enormous surface area! For example, here’s a diagram of a MOF with yellow and orange balls showing the holes:

In fact, one gram of the stuff can have a surface area of more than 12,000 square meters!

Gas molecules like to sit inside these holes. So, perhaps surprisingly at first, you can pack a lot more gas in a cylinder containing a MOF than you can in an empty cylinder at the same pressure!

This lets us store gases using MOFs—like carbon dioxide, but also hydrogen, methane and others. And importantly, you can also get the gas molecules out of the MOF without enormous amounts of energy. Also, you can craft MOFs with different hole sizes and different chemical properties, so they attract some gases much more than others.

So, we can imagine various applications suited to fighting climate change! One is carbon capture and storage, where you want a substance that eagerly latches onto CO2 molecules, but can also easily be persuaded to let them go. But another is hydrogen or methane storage for the purpose of fuel. Methane releases less CO2 than gasoline does when it burns, per unit amount of energy—and hydrogen releases none at all. That’s why some advocate a hydrogen economy.

Could hydrogen-powered cars be better than battery-powered cars, someday? I don’t know. But never mind—such issues, though important, aren’t what I want to talk about now. I just want to quote something about methane storage in MOFs, to give you a sense of the state of the art.

• Mark Peplow, Metal-organic framework compound sets methane storage record, C&EN, 11 December 2017.

Cars powered by methane emit less CO2 than gasoline guzzlers, but they need expensive tanks and compressors to carry the gas at about 250 atm. Certain metal-organic framework (MOF) compounds—made from a lattice of metal-based nodes linked by organic struts—can store methane at lower pressures because the gas molecules pack tightly inside their pores.

So MOFs, in principle, could enable methane-powered cars to use cheaper, lighter, and safer tanks. But in practical tests, no material has met a U.S. Department of Energy (DOE) gas storage target of 263 cm3 of methane per cm3 of adsorbent at room temperature and 64 atm, enough to match the capacity of high-pressure tanks.

A team led by David Fairen-Jimenez at the University of Cambridge has now developed a synthesis method that endows a well-known MOF with a capacity of 259 cm3 of methane per cm3 under those conditions, at least 50% higher than its nearest rival. “It’s definitely a significant result,” says Jarad A. Mason at Harvard University, who works with MOFs and other materials for energy applications and was not involved in the research. “Capacity has been one of the biggest stumbling blocks.”

Only about two-thirds of the MOF’s methane was released when the pressure dropped to 6 atm, a minimum pressure needed to sustain a decent flow of gas from a tank. But this still provides the highest methane delivery capacity of any bulk adsorbent.

A couple of things are worth noting here. First, the process of a molecule sticking to a surface is called adsorption, not to be confused with absorption, where the molecules soak into the bulk of a material. Second, notice that using MOFs they managed to compress methane by a factor of 259 at a pressure of just 64 atmospheres. If we tried the same trick without MOFs we would need a pressure of about 259 atmospheres!
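A rough back-of-the-envelope check, assuming the ideal gas law (only an approximation for methane at these pressures):

```python
# Ideal-gas sketch: at fixed temperature, a tank at P atmospheres holds
# about P times as much gas as the same tank at 1 atm.
p_tank = 64          # atm: the pressure in the DOE storage target
mof_capacity = 259   # cm^3 of methane (at 1 atm) stored per cm^3 of MOF

plain_capacity = p_tank      # an empty tank at 64 atm holds ~64 volumes
print(mof_capacity / plain_capacity)  # ~4: the MOF tank holds ~4x more
```

So at the same 64 atmospheres, the MOF-filled tank holds roughly four times as much methane as an empty one.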

But MOFs are not only good at holding gases, they’re good at sucking them up, which is really the flip side of the same coin: gas molecules avidly seek to sit inside the little holes of your MOF. So people are also using MOFs to build highly sensitive detectors for specific kinds of gases:

And some MOFs work in water, too—so people are trying to use them as water filters, sort of a high-tech version of zeolites, the minerals that inspired people to invent MOFs in the first place. Zeolites have an impressive variety of crystal structures:

and so on… but MOFs seem to be more adjustable in their structure and chemical properties.

Looking more broadly at future applications, we can imagine MOFs will be important in a host of technologies where we want a substance with lots of microscopic holes that are eager to hold specific molecules. I have a feeling that the most powerful applications of MOFs will come when other technologies mature. For example: projecting forward to a time when we get really good nanotechnology, we can imagine MOFs as useful “storage lockers” for molecular robots.

But next time I’ll talk about what we can do now, or soon, to capture carbon dioxide with MOFs.

In the meantime: can you imagine some cool things we could do with MOFs? This may feed your imagination:

• Wikipedia, Metal-organic frameworks.