Open Systems: A Double Categorical Perspective (Part 3)

23 January, 2021

Back to Kenny Courser’s thesis:

• Kenny Courser, Open Systems: A Double Categorical Perspective, Ph.D. thesis, U. C. Riverside, 2020.

Last time I explained the problems with decorated cospans as a framework for dealing with open systems. I vaguely hinted that Kenny’s thesis presents two solutions to these problems: so-called ‘structured cospans’, and a new improved approach to decorated cospans. Now let me explain these!

You may wonder why I’m returning to this now, after three months of silence. The reason is that Kenny, Christina Vasilakopoulou, and I just finished a paper that continues this story:

• John Baez, Kenny Courser and Christina Vasilakopoulou, Structured versus decorated cospans.

We showed that under certain conditions, structured and decorated cospans are equivalent. So, I’m excited about this stuff again.

Last time I explained Fong’s theorem about decorated cospans:

Fong’s Theorem. Suppose \mathsf{A} is a category with finite colimits, and make \mathsf{A} into a symmetric monoidal category with its coproduct as the tensor product. Suppose F\colon (\mathsf{A},+) \to (\mathsf{Set},\times) is a symmetric lax monoidal functor. Define an F-decorated cospan to be a cospan

a \rightarrow N \leftarrow b

in \mathsf{A} together with an element x\in F(N) called a decoration. Then there is a symmetric monoidal category with

• objects of \mathsf{A} as objects,
• isomorphism classes of F-decorated cospans as morphisms.
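(A quick reminder of how composition works in this category, since the theorem statement hides it; this is just a sketch of Fong’s construction. To compose decorated cospans a \rightarrow M \leftarrow b with decoration x \in F(M) and b \rightarrow P \leftarrow c with decoration y \in F(P), we compose the cospans by pushout over b and combine the decorations using the lax structure map \varphi_{M,P} \colon F(M) \times F(P) \to F(M+P) followed by F applied to the canonical map j \colon M + P \to M +_b P:

(x,y) \; \mapsto \; F(j)\bigl(\varphi_{M,P}(x,y)\bigr) \in F(M +_b P)

See Fong’s paper or the thesis for the details.)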

The theorem is true, but it doesn’t apply to all the examples we wanted it to. The problem is that it’s ‘not categorified enough’. It’s fine if we want to decorate the apex N of our cospan with some extra structure: we do this by choosing an element of some set F(N). But in practice, we often want to decorate N with some extra stuff, which means choosing an object of a category F(N). So we should really use not a functor

F\colon (\mathsf{A},+) \to (\mathsf{Set},\times)

but something like a functor

F\colon (\mathsf{A},+) \to (\mathbf{Cat},\times)

What do I mean by ‘something like a functor?’ Well, \mathbf{Cat} is not just a category but a 2-category: it has categories as objects, functors as morphisms, but also natural transformations as 2-morphisms. The natural notion of ‘something like a functor’ from a category to a 2-category is called a pseudofunctor. And just as we can define symmetric lax monoidal functor, we can define a symmetric lax monoidal pseudofunctor.

All these nuances really matter when we’re studying open graphs, as we were last time!

Here the feet and apex of our cospan are finite sets, and we decorate the apex with a finite graph. So, we have \mathsf{A} = \mathsf{FinSet}, and for any N \in \mathsf{FinSet} we want F(N) to be the set, or category, of finite graphs having N as their set of nodes.

I explained last time all the disasters that ensue when you try to let F(N) be the set of finite graphs having N as its set of nodes. You can try, but you will pay dearly for it! You can struggle and fight, like Hercules trying to chop all the heads off the Hydra, but you still can’t get a symmetric lax monoidal functor

F\colon (\mathsf{A},+) \to (\mathsf{Set},\times)

that sends any finite set N to the set of graphs having N as their set of nodes.

But there is a perfectly nice category F(N) of all finite graphs having N as their set of nodes. And you can get a symmetric lax monoidal pseudofunctor

F\colon (\mathsf{A},+) \to (\mathbf{Cat},\times)

that sends any finite set to the category of finite graphs having it as their set of nodes. So you should stop fighting and go with the flow.
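To see concretely what this pseudofunctor involves, here is a minimal Python sketch (my own illustration, not code from the thesis or our paper; the names graph and laxator are made up). A decoration on a finite set N is a graph with N as its set of nodes, and the lax structure map combines a graph on M and a graph on P into a graph on their disjoint union:

# A minimal sketch, assuming a graph is given by named edges with a
# source and a target in a fixed node set.

def graph(nodes, edges):
    """A graph on the node set `nodes`; `edges` sends an edge name to a
    (source, target) pair of nodes."""
    assert all(s in nodes and t in nodes for (s, t) in edges.values())
    return {"nodes": set(nodes), "edges": dict(edges)}

def laxator(g, h):
    """The lax structure map: a graph on M and a graph on P give a graph
    on the disjoint union M + P, with nodes and edges tagged by summand.
    Note that M + P is only specified up to canonical isomorphism, the
    sort of subtlety the pseudofunctor framework is designed to handle."""
    nodes = {(0, n) for n in g["nodes"]} | {(1, n) for n in h["nodes"]}
    edges = {(0, e): ((0, s), (0, t)) for e, (s, t) in g["edges"].items()}
    edges.update({(1, e): ((1, s), (1, t)) for e, (s, t) in h["edges"].items()})
    return graph(nodes, edges)

# Example: a graph on {1, 2} and a graph on {1} give a graph on their
# disjoint union, which has three nodes.
print(len(laxator(graph({1, 2}, {"e": (1, 2)}), graph({1}, {"f": (1, 1)}))["nodes"]))   # 3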

Kenny, Christina and I proved an enhanced version of Fong’s theorem that works starting from this more general kind of F. And instead of just giving you a symmetric monoidal category, this theorem gives you a symmetric monoidal double category.

In fact, that is something you should have wanted already, even with Fong’s original hypotheses! The clue is that Fong’s theorem uses isomorphism classes of decorated cospans, which suggests we’d get something better if we used decorated cospans themselves. Kenny tackled this a while ago, getting a version of Fong’s theorem that produces a symmetric monoidal double category, and another version that produces a symmetric monoidal bicategory:

• Kenny Courser, A bicategory of decorated cospans, Theory and Applications of Categories 32 (2017), 995–1027.

Over the years we’ve realized that the double category is better, because it contains more information and is easier to work with. So, in our new improved approach to decorated cospans, we go straight for the jugular and get a double category. And here’s how it works:

Theorem. Suppose \mathsf{A} is a category with finite colimits, and make \mathsf{A} into a symmetric monoidal category with its coproduct as the tensor product. Suppose F\colon (\mathsf{A},+) \to (\mathbf{Cat},\times) is a symmetric lax monoidal pseudofunctor. Then there is a symmetric monoidal double category F\mathbb{C}\mathbf{sp} in which

• an object is an object of \mathsf{A}
• a vertical morphism is a morphism in \mathsf{A}
• a horizontal morphism is an F-decorated cospan, meaning a cospan in \mathsf{A} together with a decoration:

a \rightarrow N \leftarrow b, \qquad x \in F(N)

• a 2-morphism is a map of decorated cospans, meaning a commutative diagram in \mathsf{A}:

\begin{array}{ccccc} a & \rightarrow & N & \leftarrow & b \\ \downarrow f & & \downarrow h & & \downarrow g \\ a' & \rightarrow & N' & \leftarrow & b' \end{array}

together with a morphism \tau \colon F(h)(x) \to x', the map of decorations, where x and x' are the decorations of the two cospans.
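(To see where the pseudofunctoriality of F earns its keep, here is a sketch of how these 2-morphisms compose vertically: given maps of decorations \tau \colon F(h)(x) \to x' and \tau' \colon F(h')(x') \to x'', the composite map of decorations is built using the coherence isomorphism of the pseudofunctor,

F(h' \circ h)(x) \;\cong\; F(h')(F(h)(x)) \;\xrightarrow{F(h')(\tau)}\; F(h')(x') \;\xrightarrow{\tau'}\; x''

but for the precise statement see our paper.)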

We call F\mathbb{C}\mathbf{sp} a decorated cospan double category. And as our paper explains, this idea lets us fix all the broken attempted applications of Fong’s original decorated cospan categories!

All this is just what any category theorist worth their salt would try, in order to fix the original problems with decorated cospans. It turns out that proving the theorem above is not so easy, mainly because the definition of ‘symmetric monoidal double category’ is rather complex. But if you accept the theorem—including the details of how you get the symmetric monoidal structure on the double category, which I have spared you here—then it doesn’t really matter much that the proof takes work.

Next time I’ll tell you about the other way to fix the original decorated cospan formalism: structured cospans. When these work, they are often easier to use.


Part 1: an overview of Courser’s thesis and related papers.

Part 2: problems with the original decorated cospans.

Part 3: the new improved decorated cospans.


Categories of Nets (Part 2)

20 January, 2021

guest post by Michael Shulman

Now that John gave an overview of the Petri nets paper that he and I have just written with Jade and Fabrizio, I want to dive a bit more into what we accomplish. The genesis of this paper was a paper written by Fabrizio and several other folks entitled Computational Petri Nets: Adjunctions Considered Harmful, which of course sounds to a category theorist like a challenge. Our paper, and particularly the notion of Σ-net and the adjunction in the middle column relating Σ-nets to symmetric strict monoidal categories, is an answer to that challenge.

Suppose you wanted to “freely” generate a symmetric monoidal category from some combinatorial data. What could that data be? In other words (for a category theorist at least), what sort of category \mathsf{C} appears in an adjunction \mathsf{C} \rightleftarrows \mathsf{SMC}? (By the way, all monoidal categories in this post will be strict, so I’m going to drop that adjective for conciseness.)

Perhaps the simplest choice is the same data that naturally generates a plain category, namely a directed graph. However, this is pretty limited in terms of what symmetric monoidal categories it can generate, since the generating morphisms will always only have single generating objects as their domain and codomain.

Another natural choice is the same data that naturally generates a multicategory, which might be called a “multigraph”: a set of objects together with, for every tuple of objects x_1,\dots,x_n and single object y, a set of arrows from (x_1,\dots,x_n) to y. In the generated symmetric monoidal category, such an arrow gives rise to a morphism x_1\otimes\cdots\otimes x_n \to y; thus we can now have multiple generating objects in the domains of generating morphisms, but not the codomains.

Of course, this suggests an even better solution: a set of objects, together with a set of arrows for every pair of tuples (x_1,\dots,x_m) and (y_1,\dots,y_n). I’d be tempted to call this a “polygraph”, since it also naturally generates a polycategory. But other folks got there first and called it a “tensor scheme” and also a “pre-net”. In the latter case, the objects are called “places” and the morphisms “transitions”. But whatever we call it, it allows us to generate free symmetric monoidal categories in which the domains and codomains of generating morphisms can both be arbitrary tensor products of generating objects. For those who like fancy higher-categorical machinery, it’s the notion of computad obtained from the monad for symmetric monoidal categories.

However, pre-nets are not without flaws. One of the most glaring, for people who actually want to compute with freely generated symmetric monoidal categories, is that there aren’t enough morphisms between them. For instance, suppose one pre-net N has three places x,y,z and a transition f:(x,x,y) \to z, while a second pre-net N' has three places x',y',z' and a transition f':(x',y',x') \to z'. Once we generate a symmetric monoidal category, then f can be composed with a symmetry x\otimes y \otimes x \cong x\otimes x\otimes y and similarly for f'; so the symmetric monoidal categories generated by N and N' are isomorphic. But there isn’t even a single map of pre-nets from N to N' or vice versa, because a map of pre-nets has to preserve the ordering on the inputs and outputs. This is weird and annoying for combinatorial data that’s supposed to present a symmetric monoidal category.

Another way of making essentially the same point is that just as the adjunction between SMCs and directed graphs factors through categories, and the adjunction between SMCs and multigraphs factors through multicategories, the adjunction between SMCs and pre-nets factors through non-symmetric monoidal categories. In other words, a pre-net is really better viewed as data for generating a non-symmetric monoidal category, which we can then freely add symmetries to.

By contrast, in the objects that we call “Petri nets”, the domain and codomain of each generating morphism are elements of the free commutative monoid on the set of places—as opposed to elements of the free monoid, which is what they are for a pre-net. Thus, the domain of f and f' above would be x+x+y and x+y+x respectively, which in a commutative monoid are equal (both are 2x+y). So the Petri nets corresponding to N and N' are indeed isomorphic. However, once we squash everything down in this way, we lose the ability to functorially generate a symmetric monoidal category; all we can generate is a commutative monoidal category where all the symmetries are identities.
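Here is the distinction in two lines of runnable Python (my own illustration): as ordered lists the two domains differ, but as multisets, that is, as elements of the free commutative monoid on the places, they agree.

from collections import Counter

print(('x', 'x', 'y') == ('x', 'y', 'x'))                    # False: the pre-net style domains differ
print(Counter(['x', 'x', 'y']) == Counter(['x', 'y', 'x']))  # True: both multisets are 2x + y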

At this point we’ve described the upper row and the left- and right-hand columns in John’s diagram:

What’s missing is a kind of net in the middle that corresponds to symmetric monoidal categories. To motivate the definition of Σ-net, consider how to solve the problem above of the “missing morphisms”. We want to send f:(x,x,y) \to z to a “permuted version” of f':(x',y',x') \to z'. For this to be implemented by an actual set-map, we need this “permuted version” to be present in the data of N' somehow. This suggests that the transitions should come with a permutation action like that of, say, a symmetric multicategory. Then inside N' we can actually act on f' by the transposition \tau = (2,3) \in S_3, yielding a new morphism \tau(f') : (x',x',y')\to z' which we can take to be the image of f. Of course, we can also act on f' by other permutations, and likewise on f; but since these permutation actions are part of the structure they must be preserved by the morphism, so sending f to \tau(f') uniquely determines where we have to send all these permutation images.
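Concretely, the permutation action can be sketched in a few lines of Python (hypothetical code, not from the paper; I drop the primes on the places of N'). A permutation of positions acts on a transition’s input list, and a map of Σ-nets must respect this action, so sending f to \tau(f') pins down where every permuted copy goes:

# A sketch of the permutation action on input lists.  A permutation is a
# tuple sigma with sigma[i] = j, meaning position i of the new list is
# filled from position j of the old one.

def act(sigma, inputs):
    return tuple(inputs[j] for j in sigma)

f_inputs  = ('x', 'x', 'y')    # inputs of f  : (x, x, y) -> z
fp_inputs = ('x', 'y', 'x')    # inputs of f' : (x', y', x') -> z', primes dropped
tau = (0, 2, 1)                # the transposition (2 3) in S_3, written 0-indexed

print(act(tau, fp_inputs))               # ('x', 'x', 'y')
print(act(tau, fp_inputs) == f_inputs)   # True, so f can be sent to tau(f')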

Now you can go back and look again at John’s definition of Σ-net: a set S, a groupoid T, and a discrete opfibration T \to P S \times P S ^{op}, where P denotes the free-symmetric-strict-monoidal-category functor \mathsf{Set} \to \mathsf{Cat}. Such a discrete opfibration is the same as a functor N \colon P S \times P S ^{op} \to \mathsf{Set}, and the objects of P S are the finite sequences of elements of S while its morphisms are permutations; thus this is precisely a pre-net (the action of the functor N on objects) with permutation actions as described above. I won’t get into the details of constructing the adjunction relating Σ-nets to symmetric monoidal categories; you can read the paper, or maybe I’ll blog about it later.

However, in solving the “missing morphisms” problem, we’ve introduced a new possibility. Suppose we act on f \colon (x,x,y) \to z by the transposition \sigma = (1,2) \in S_3 that switches the first two entries. We get another transition (x,x,y)\to z with the same domain and codomain as f; so it might equal f, or it might not! In other words, transitions in a Σ-net can have isotropy. If \sigma(f)=f, then when we generate a free symmetric monoidal category from our Σ-net, the corresponding morphism f:x\otimes x \otimes y \to z will have the property that when we compose it with the symmetry morphism x\otimes x\otimes y \cong x\otimes x\otimes y we get back f again. No symmetric monoidal category generated by a pre-net has this property; it’s more like the behavior of the commutative monoidal category generated by a Petri net, except that in the latter case the symmetry x\otimes x\otimes y \cong x\otimes x\otimes y itself is the identity, rather than just acting by the identity on f.

This suggests that Σ-nets can either “behave like pre-nets” or “behave like Petri nets”. This is made precise by the bottom row of adjunctions in the diagram. On one hand, we can map a pre-net to a Σ-net by freely generating the action of all permutations. This has a right adjoint that just forgets the permutation action (which actually has a further right adjoint, although that’s a bit weird). On the other hand, we can map a Petri net to a Σ-net by making all the permutations act as trivially as possible; this has a left adjoint that identifies each transition with all its permutation images. And these adjunctions commute with the three “free monoidal category” adjunctions in reasonable ways (see the paper for details).

The right adjoint mapping Petri nets into Σ-nets is fully faithful, so we really can say that Σ-nets “include” Petri nets. The left adjoint mapping pre-nets to Σ-nets is not fully faithful—it can’t possibly be, since the whole point of introducing Σ-nets was that pre-nets don’t have enough morphisms! But the full image of this functor is equivalent to a fourth kind of net: Kock’s whole-grain Petri nets. Kock’s approach to solving the problem of pre-nets is somewhat different, more analogous to the notion of “fat” symmetric monoidal category: he takes the domain and codomain of each transition to be a family of places indexed by a finite set. But his category turns out to be equivalent to the category of Σ-nets that are freely generated by some pre-net. (Kock actually proved this himself, as well as sketching the adjunction between Σ-nets and symmetric monoidal categories. He called Σ-nets “digraphical species”.)

So Σ-nets “include” both Petri nets and pre-nets, in an appropriate sense. The pre-nets (or, more precisely, whole-grain nets) are the Σ-nets with free permutation actions (trivial isotropy), while the Petri nets are the Σ-nets with trivial permutation actions (maximal isotropy). In Petri-net-ese, these correspond to the “individual token philosophy” and the “collective token philosophy”, respectively. (This makes it tempting to refer to the functors from Σ-nets to pre-nets and Petri nets as individuation and collectivization respectively.) But Σ-nets also allow us to mix and match the two philosophies, having some transitions with trivial isotropy, others with maximal isotropy, and still others with intermediate isotropy.

I like to think of Σ-nets as a Petri net analogue of orbifolds. Commutative-monoid-based Petri nets are like “coarse moduli spaces”, where we’ve quotiented by all symmetries but destroyed all the isotropy information; while whole-grain Petri nets are like manifolds, where we have no singularities but can only quotient by free actions. Pre-nets can then be thought of as a “presentation” of a manifold, such as by a particular way of gluing coordinate patches together: useful in concrete examples, but not the “invariant” object we really want to study mathematically.


Categories of Nets (Part 1)

17 January, 2021

I’ve been thinking about Petri nets a lot. Around 2010, I got excited about using them to describe chemical reactions, population dynamics and more, using ideas taken from quantum physics. Then I started working with my student Blake Pollard on ‘open’ Petri nets, which you can glue together to form larger Petri nets. Blake and I focused on their applications to chemistry, but later my student Jade Master and I applied them to computer science and brought in some new math. I was delighted when Evan Patterson and Micah Halter used all this math, along with ideas of Joachim Kock, to develop software for rapidly assembling models of COVID-19.

Now I’m happy to announce that Jade and I have teamed up with Fabrizio Genovese and Mike Shulman to straighten out a lot of mysteries concerning Petri nets and their variants:

• John Baez, Fabrizio Genovese, Jade Master and Mike Shulman, Categories of nets.

This paper is full of interesting ideas, but I’ll just tell you the basic framework.

A Petri net is a seemingly simple thing:

It consists of places (drawn as circles) and transitions (drawn as boxes), with directed edges called arcs from places to transitions and from transitions to places.

The idea is that when you use a Petri net, you put dots called tokens in the places, and then move them around using the transitions:

A Petri net is actually a way of describing a monoidal category. A way of putting a bunch of tokens in the places gives an object of this category, and a way of moving them around repeatedly (as above) gives a morphism.

The idea sounds straightforward enough. But it conceals some subtleties, which researchers have been struggling with for at least 30 years.

There are various ways to make the definition of Petri net precise. For example: is there a finite set of arcs from a given place to a given transition (and the other way around), or merely a natural number of arcs? If there is a finite set, is this set equipped with an ordering or not? Furthermore, what is a morphism between Petri nets?

Different answers are good for different purposes. In the so-called ‘individual token philosophy’, we allow a finite set of tokens in each place. In the ‘collective token philosophy’, we merely allow a natural number of tokens in each place. It’s like the difference between having 4 individual workers named John, Fabrizio, Jade and Mike where you can tell who did what, and having 4 anonymous workers: nameless drones.

Our goal was to sort this all out and make it crystal clear. We focus on 3 kinds of net, each of which naturally generates its own kind of monoidal category:

• pre-nets, which generate free strict monoidal categories.

• Σ-nets, which generate free symmetric strict monoidal categories.

• Petri nets, which generate free commutative monoidal categories.

These three kinds of monoidal category differ in how ‘commutative’ they are:

• In a strict monoidal category we typically have x \otimes y \ne y \otimes x.

• In a strict symmetric monoidal category we have for each pair of objects a chosen isomorphism x \otimes y \cong y \otimes x.

• A commutative monoidal category is a symmetric strict monoidal category where the symmetry isomorphisms are all identities, so x \otimes y = y \otimes x.

So, we have a spectrum running from hardcore individualism, where two different things of the same type are never interchangeable… to hardcore collectivism, where two different things of the same type are so interchangeable that switching them counts as doing nothing at all! In the theory of Petri nets and their variants, the two extremes have been studied better than the middle.

You can summarize the story with this diagram:

There are three different categories of nets at bottom, and three different categories of monoidal categories on top — all related by adjoint functors! Here the left adjoints point up the page — since different kinds of nets freely generate different kinds of monoidal categories — and also to the right, in the direction of increasing ‘commutativity’.

If you’re a category theorist you’ll recognize at least two of the three categories on top:

• \mathsf{StrMC}, with strict monoidal categories as objects and strict monoidal functors as morphisms.

• \mathsf{SSMC}, with symmetric strict monoidal categories as objects and strict symmetric monoidal functors as their morphisms.

• \mathsf{CMC}, with commutative monoidal categories as objects and strict symmetric monoidal functors as morphisms. A commutative monoidal category is a symmetric strict monoidal category where the symmetry is the identity.

The categories of nets are probably less familiar. But they are simple enough. Here I’ll just describe their objects. The morphisms are fairly obvious, but read our paper for details.

• \mathsf{PreNet}, with pre-nets as objects. A pre-net consists of a set S of places, a set T of transitions, and a function T \to S^\ast\times S^\ast, where S^\ast is the set of lists of elements of S.

• \Sigma\mathsf{-net}, with Σ-nets as objects. A Σ-net consists of a set S, a groupoid T, and a discrete opfibration T \to P S \times P S^{\mathrm{op}}, where P S is the free symmetric strict monoidal category generated by a set of objects S and no generating morphisms.

• \mathsf{Petri}, with Petri nets as objects. A Petri net, as we use the term, consists of a set S, a set T, and a function T \to \mathbb{N}[S] \times \mathbb{N}[S], where \mathbb{N}[S] is the set of multisets of elements of S.

What does this mean in practice?

• In a pre-net, each transition has an ordered list of places as ‘inputs’ and an ordered list of places as ‘outputs’. We cannot permute the inputs or outputs of a transition.

• In a Σ-net, each transition has an ordered list of places as inputs and an ordered list of places as outputs. However, permuting the entries of these lists gives a new transition with a new list of inputs and a new list of outputs!

• In a Petri net, each transition has a multiset of places as inputs and a multiset of places as outputs. A multiset is like an ‘unordered list’: entries can appear repeatedly, but the order makes no difference at all.

So, pre-nets are rigidly individualist. Petri nets are rigidly collectivist. And Σ-nets are flexible, including both extremes as special cases!
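If it helps to see the three definitions side by side as data, here is a minimal Python sketch (my own illustration; the dictionary layout is made up, and for Σ-nets I only indicate the two extreme cases rather than encoding a general groupoid of transitions):

from collections import Counter

# A pre-net: places, and for each transition an ordered list of input
# places and an ordered list of output places (an element of S* x S*).
pre_net = {
    "places": {"x", "y", "z"},
    "transitions": {"f": (("x", "x", "y"), ("z",))},
}

# A Petri net: the same, except inputs and outputs are multisets, i.e.
# elements of the free commutative monoid N[S].
petri_net = {
    "places": {"x", "y", "z"},
    "transitions": {"f": (Counter({"x": 2, "y": 1}), Counter({"z": 1}))},
}

print(petri_net["transitions"]["f"][0]["x"])   # 2: the transition f consumes two tokens of x

# A Sigma-net also records how permutations of the input and output lists
# act on transitions.  In the Sigma-net freely generated by pre_net, each
# permuted copy of f is a distinct transition (a free action); in the
# Sigma-net coming from petri_net, every permutation fixes f (a trivial
# action).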

On the one hand, we can use the left adjoint functor

\mathsf{PreNet} \to \Sigma\mathsf{-net}

to freely generate Σ-nets from pre-nets. If we do this, we get Σ-nets such that permutations of inputs and outputs act freely on transitions. Joachim Kock has recently studied Σ-nets of this sort. He calls them whole-grain Petri nets, and he treats them as forming a category in their own right, but it’s also the full image of the above functor.

On the other hand, we can use the right adjoint functor

\mathsf{Petri} \to \Sigma\mathsf{-net}

to turn Petri nets into Σ-nets. If we do this, we get Σ-nets such that permutations of inputs and outputs act trivially on transitions: the permutations have no effect at all.

I’m not going to explain how we got any of the adjunctions in this diagram:

That’s where the interesting category theory comes in. Nor will I tell you about the various alternative mathematical viewpoints on Σ-nets… nor how we draw them. I also won’t explain our work on open nets and open categories of all the various kinds. I’m hoping Mike Shulman will say some more about what we’ve done. That’s why this blog article is optimistically titled “Part 1”.

But I hope you see the main point. There are three different kinds of things like Petri nets, each of which serves to freely generate a different kind of monoidal category. They’re all interesting, and a lot of confusion can be avoided if we don’t mix them up!


This Week’s Finds (1–50)

12 January, 2021

Take a copy of this!

This Week’s Finds in Mathematical Physics (1-50), 242 pages.

These are the first 50 issues of This Week’s Finds in Mathematical Physics. This series has sometimes been called the world’s first blog, though it was originally posted on a “usenet newsgroup” called sci.physics.research — a form of communication that predated the world-wide web. I began writing this series as a way to talk about papers I was reading and writing, and in the first 50 issues I stuck closely to this format. These issues focus rather tightly on quantum gravity, topological quantum field theory, knot theory, and applications of n-categories to these subjects. There are, however, digressions into elliptic curves, Lie algebras, linear logic and various other topics.

Tim Hosgood kindly typeset all 300 issues of This Week’s Finds in 2020. They will be released in six installments of 50 issues each, for a total of about 2610 pages. I have edited the issues here to make the style a bit more uniform and also to change some references to preprints, technical reports, etc. into more useful arXiv links. This accounts for some anachronisms where I discuss a paper that only appeared on the arXiv later.

The process of editing could have gone on much longer; there are undoubtedly many mistakes remaining. If you find some, please contact me and I will try to fix them.

By the way, sci.physics.research is still alive and well, and you can use it on Google. But I can’t find the first issue of This Week’s Finds there — if you can find it, I’ll be grateful. I can only get back to the sixth issue. Take a look if you’re curious about usenet newsgroups! They were low-tech compared to what we have now, but they felt futuristic at the time, and we had some good conversations.



Applied Compositional Thinking for Engineers

7 December, 2020

Hey! There’s a new online course coming up!

Applied Compositional Thinking for Engineers. January 7, 8, 11-15, 20-22, and 25-29. Taught by Andrea Censi, Jonathan Lorand and Gioele Zardini.

It’s not a coincidence that the acronym for “Compositional Thinking” is “CT”, just like “Category Theory”.

The course is free but registration is required. During the sign-up process you will be asked for the most convenient times for you; the instructors will choose what works best for a majority of participants. Each session will consist of short explanatory videos (to be watched in advance), and a 50 minute online lecture (allowing for questions and discussion).

Andrea Censi is known for his work on co-design. He and the others are at ETH Zürich, but this is not an official ETH Zürich course.

Here’s a video explaining the course:


Graph Transformation Theory and Applications

4 December, 2020

I love graph rewriting—the study of ways to change one graph into another by changing one small part at a time. My student Daniel Cicala did his thesis on this! So I’m happy to hear about the new virtual seminar series GReTA: Graph TRansformation Theory and Applications.

It aims to serve as a platform for the international graph rewriting community, promote recent developments and trends in the field, and encourage regular networking and interaction between members of this community.

Seminars are held twice a month in the form of Zoom sessions (some of which will be live-streamed to YouTube). Go to the link if you want to join on Zoom.

You can get regular updates on the GReTA seminars in several ways:

• Subscribe to the GReTA YouTube channel.

• Subscribe to the GReTA Google Calendar (or alternatively import it in iCal format).

• Subscribe to the GReTA mailing list.

Here are the two talks so far. Any subject that can promote talks on both logic and chemistry must be good! Thinking of chemistry and logic as two aspects of the same thing is bound to trigger new ideas. (Just as a sequence of chemical reactions converts reactants into products, a proof converts assumptions into conclusions.)

Speaker: Barbara König
Title: Graph transformation meets logic

Abstract. We review the integration of (first-order) logic respectively nested conditions into graph transformation. Conditions can serve various purposes: they can constrain graph rewriting, symbolically specify sets of graphs, be used in query languages and in verification (for instance in Hoare logic and for behavioural equivalence checking). In the graph transformation community the formalism of nested graph conditions has emerged, that is, conditions which are equivalent to first-order logic, but directly integrate graphs and graph morphisms, in order to express constraints more succinctly. In this talk we also explain how the notion of nested conditions can be lifted from graph transformation systems to the setting of reactive systems as defined by Leifer and Milner. It turns out that some constructions for graph transformation systems (such as computing weakest preconditions and strongest postconditions and showing local confluence by means of critical pair analysis) can be done quite elegantly in the more general setting.

Speakers: Daniel Merkle and Jakob Lykke Andersen
Title: Chemical graph transformation and applications

Abstract: Any computational method in chemistry must choose some level of precision in the modeling. One choice is made in the methods of quantum chemistry based on quantum field theory. While highly accurate, the methods are computationally very demanding, which restricts their practical use to single reactions of molecules of moderate size even when run on supercomputers. At the same time, most existing computational methods for systems chemistry and biology are formulated at the other abstraction extreme, in which the structure of molecules is represented either not at all or in a very rudimentary fashion that does not permit the tracking of individual atoms across a series of reactions.

In this talk, we present our on-going work on creating a practical modelling framework for chemistry based on Double Pushout graph transformation, and how it can be applied to analyse chemical systems. We will address important technical design decisions as well as the importance of methods inspired from Algorithm Engineering in order to reach the required efficiency of our implementation. We will present chemically relevant features that our framework provides (e.g. automatic atom tracing) as well as a set of chemical systems we have investigated or are currently investigating. If time allows we will discuss variations of graph transformation rule compositions and their chemical validity.


The 600-Cell (Part 4)

30 November, 2020

I get really bored some evenings these days, after I run out of energy to work on my own projects, and before I lie in bed and read Whittaker’s mammoth tome, A History of the Theories of Aether & Electricity. So I’ve taken up browsing the arXiv. It can be quite entertaining! Here’s something I found last night:

• Tomme Denney, Da’Shay Hooker, De’Janeke Johnson, Tianna Robinson, Majid Butler and Sandernisha Claiborne, The geometry of H4 polytopes.

It mentions some cool facts that call for a new installment of this series of mine:

• The 600-cell: Part 1, Part 2, Part 3.

Remember that the 24-cell is a four-dimensional regular polytope with 24 vertices and 24 octahedral faces:



The 600-cell is a four-dimensional regular polytope with 120 vertices and 600 tetrahedral faces:


Since 120/24 = 5, you might hope that there’s a way to partition the 600-cell’s vertices into the vertices of five 24-cells. And indeed there is!

So we get a compound of five 24-cells. It’s a kind of four-dimensional analogue of this picture by Greg Egan, showing a compound of five tetrahedra:



How many ways are there to inscribe a compound of 24-cells in the 600-cell? That is: how many ways are there to partition the 600-cell’s vertices into the vertices of five 24-cells?

This question has an interesting history, which I explained in Part 2. A fellow named P. H. Schoute claimed in 1905 that the answer is 10. In 1933 the famous geometer Coxeter publicly doubted this claim, writing that surely there should be just 5. Later he changed his mind and agreed that Schoute was correct… but still gave no proof. In 2017 David Roberson verified it using computer calculations. But the paper I’m talking about finally offers a human-readable proof.

But they show something even better! First: there are exactly 25 ways to inscribe a 24-cell into a 600-cell—that is, ways to find a subset of the 600-cell’s vertices that form the vertices of a 24-cell.

But now for the cool part: we can list these 25 in a 5 × 5 square, so that each row and each column give a different way to inscribe a compound of 24-cells in the 600-cell. So we get a total of 10.

I hope you understood that. If not, maybe the paper’s summary will be clearer:

The 25 24-cells can be placed in a 5×5 array, so that each row and each column of the array partition the 120 vertices of the 600-cell into five disjoint 24-cells. The rows and columns of the array are the only ten such partitions of the 600-cell.

This too was claimed without proof by P. H. Schoute in 1905. A proof is in the paper by Denney, Hooker, Johnson, Robinson, Butler and Claiborne!

There’s a lot more cool stuff in this paper, as hinted at in the abstract:

Abstract. We describe the geometry of an arrangement of 24-cells inscribed in the 600-cell. In §7 we apply our results to the even unimodular lattice E8 and show how the 600-cell transforms E8/2E8, an 8-space over the field F2, into a 4-space over F4 whose points, lines and planes are labeled by the geometric objects of the 600-cell.

Yes, if you take the E8 lattice and mod out by the vectors that are two times vectors in that lattice, you get an 8-dimensional vector space over the field with 2 elements. But you can think of it as a 4-dimensional vector space over the field with 4 elements. How many 1-dimensional subspaces does this vector space have? You can count them, and the answer is

\frac{4^4 - 1}{4 - 1} = 85

The paper shows how these correspond to the 60 pairs of opposite vertices in the 600-cell together with the 25 24-cells inscribed in the 600-cell! Wow!
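As a quick sanity check on that correspondence, the two families together account for exactly the number of 1-dimensional subspaces computed above:

60 + 25 = 85 = \frac{4^4 - 1}{4 - 1}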


Ramanujan’s Last Formula

27 November, 2020

When I gave a talk about Ramanujan’s easiest formula at the Whittier College math club run by my former student Brandon Coya, one of the students there asked a great question: are there any unproved formulas of Ramanujan left?

So I asked around on MathOverflow, and this is the result:

George Andrews and Bruce Berndt have written five books about Ramanujan’s lost notebook, which was actually not a notebook but a pile of notes Andrews found in 1976 in a box at the Wren Library at Trinity College, Cambridge. In 2019 Berndt wrote about the last unproved identity in the lost notebook:

• Bruce C. Berndt, Junxian Li and Alexandru Zaharescu, The final problem: an identity from Ramanujan’s lost notebook, Journal of the London Mathematical Society 100 (2) (2019), 568–591.

Following Timothy Chow’s advice, I consulted Berndt and asked him if there were any remaining formulas of Ramanujan that have neither been proved nor disproved. He said no:

To the best of my knowledge, there are no claims or conjectures remaining. There are some statements to which we have not been able to attach meaning.

I checked to make sure that this applies to all of Ramanujan’s output, not just the lost notebook, and he said yes.

It’s sort of sad. But there’s a big difference between proving an identity and extracting all the wisdom contained in that identity! A lot of Ramanujan’s formulas have combinatorial interpretations, while others are connected to complex analysis—e.g. mock theta functions—so I’m sure there’s a lot of good work left to do, inspired by Ramanujan’s identities. There is also a continuing industry of discovering Ramanujan-like identities.

For more fun reading, try this:

• Robert P. Schneider, Uncovering Ramanujan’s “lost” notebook: an oral history.

Here’s an identity from Ramanujan’s lost notebook:


The Tenfold Way

22 November, 2020

I now have a semiannual column in the Notices of the American Mathematical Society! I’m excited: it gives me a chance to write short explanations of cool math topics and get them read by up to 30,000 mathematicians. It’s sort of like This Week’s Finds on steroids.

Here’s the first one:

The tenfold way, Notices Amer. Math. Soc. 67 (November 2020), 1599–1601.

The tenfold way became important in physics around 2010: it implies that there are ten fundamentally different kinds of matter. But it goes back to 1964, when C. T. C. Wall classified real ‘super division algebras’. He found that besides \mathbb{R}, \mathbb{C} and \mathbb{H}, which give ‘purely even’ super division algebras, there are seven more. He also showed that these ten algebras are all real or complex Clifford algebras. The eight real ones represent all eight Morita equivalence classes of real Clifford algebras, and the two complex ones do the same for the complex Clifford algebras. The tenfold way thus unites real and complex Bott periodicity.

In my article I explain what a ‘super division algebra’ is, give a quick proof that there are ten real super division algebras, and say a bit about how they show up in quantum mechanics and geometry.

For a lot more about the tenfold way, try this:

The tenfold way.


Ramanujan’s Easiest Formula

18 November, 2020

A while ago I decided to figure out how to prove one of Ramanujan’s formulas. I feel this is the sort of thing every mathematician should try at least once.

I picked the easiest one I could find. Hardy called it one of the “least impressive”. Still, it was pretty interesting: it turned out to be a puzzle within a puzzle. It has an easy outer layer which one can solve using standard ideas in calculus, and a tougher inner core which requires more cleverness. This inner core was cracked first by Laplace and then by Jacobi. Not being clever enough to do it myself, I read Jacobi’s two-page paper on this subject to figure out the trick. It was in Latin, and full of mistakes, but still one of the most fun papers I’ve ever read.

On Friday November 20th I’m giving a talk about this at the Whittier College Math Club, which is run by my former student Brandon Coya. Here are my slides:

Ramanujan’s easiest formula.

Here is Ramanujan’s puzzle in The Journal of the Indian Mathematical Society: