It’s a ‘Platonic solid in 4 dimensions’ with 600 tetrahedral faces and 120 vertices. One reason I like it is that you can think of these vertices as forming a *group*: a double cover of the rotational symmetry group of the icosahedron. Another reason is that it’s a halfway house between the icosahedron and the E8 lattice. I explained all this in my last post here:

I wrote that post as a spinoff of an article I was writing for the *Newsletter of the London Mathematical Society*, which had a deadline attached to it. Now I should be writing something else, for another deadline. But somehow deadlines strongly demotivate me—they make me want to do *anything else*. So I’ve been continuing to think about the 600-cell. I posed some puzzles about it in the comments to my last post, and they led me to some interesting thoughts, which I feel like explaining. But they’re not quite solidified, so right now I just want to give a fairly concrete picture of the 600-cell, or at least its vertices.

This will be a much less demanding post than the last one—and correspondingly less rewarding. Remember the basic idea:

Points in the 3-sphere can be seen as quaternions of norm 1, and these form a group, SU(2), that double covers the rotation group SO(3). The vertices of the 600-cell are the points of a subgroup that double covers the rotational symmetry group of the icosahedron. This group is the famous **binary icosahedral group**.

Thus, we can name the vertices of the 600-cell by rotations of the icosahedron—as long as we remember to distinguish between a rotation by θ and a rotation by θ + 360°. Let’s do it!

• 0° (1 of these). We can take the identity rotation as our chosen ‘favorite’ vertex of the 600-cell.

• 72° (12 of these). The nearest neighbors of our chosen vertex correspond to rotations by the smallest angle that occurs among symmetries of the icosahedron: take any of its 12 vertices and give it a 1/5 turn clockwise.

• 120° (20 of these). The next nearest neighbors correspond to taking one of the 20 faces of the icosahedron and giving it a 1/3 turn clockwise.

• 144° (12 of these). These correspond to taking one of the vertices of the icosahedron and giving it a 2/5 turn clockwise.

• 180° (30 of these). These correspond to taking one of the edges and giving it a 1/2 turn clockwise. (Note that since we’re working in the double cover rather than in SO(3) itself, giving one edge a half turn clockwise counts as different than giving the opposite edge a half turn clockwise.)

• 216° (12 of these). These correspond to taking one of the vertices of the icosahedron and giving it a 3/5 turn clockwise. (Again, this counts as different than rotating the opposite vertex by a 2/5 turn clockwise.)

• 240° (20 of these). These correspond to taking one of the faces of the icosahedron and giving it a 2/3 turn clockwise. (Again, this counts as different than rotating the opposite face by a 1/3 turn clockwise.)

• 288° (12 of these). These correspond to taking any of the vertices and giving it a 4/5 turn clockwise.

• 360° (1 of these). This corresponds to a full turn in any direction.

Let’s check: 1 + 12 + 20 + 12 + 30 + 12 + 20 + 12 + 1 = 120.

Good! We need a total of 120 vertices.
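Here’s a quick sanity check in Python, using the standard coordinates for the vertices of a unit-radius 600-cell. The first coordinate of a unit quaternion is cos(θ/2), where θ is the rotation angle of the corresponding element of the double cover, so bucketing the 120 vertices by their first coordinate should reproduce the nine counts above:

```python
from itertools import permutations, product
from collections import Counter

phi = (1 + 5**0.5) / 2   # the golden ratio

def is_even(p):
    # parity of a permutation of (0,1,2,3), via its inversion count
    return sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4)) % 2 == 0

even_perms = [p for p in permutations(range(4)) if is_even(p)]

# standard coordinates for the 120 vertices: even coordinate permutations of
# (+-1,0,0,0), (+-1/2,+-1/2,+-1/2,+-1/2) and (+-phi/2, +-1/2, +-1/(2*phi), 0)
verts = set()
for base in [(1, 0, 0, 0), (0.5, 0.5, 0.5, 0.5), (phi/2, 0.5, 1/(2*phi), 0)]:
    for signs in product((1, -1), repeat=4):
        v = tuple(s * b for s, b in zip(signs, base))
        for p in even_perms:
            verts.add(tuple(round(v[i], 9) for i in p))

assert len(verts) == 120

# first coordinate = cos(theta/2) for the rotation angle theta
hist = Counter(round(v[0], 6) for v in verts)
for key, count in sorted(hist.items(), reverse=True):
    print(key, count)
```

The output runs through cos 0°, cos 36°, cos 60°, …, cos 180° with counts 1, 12, 20, 12, 30, 12, 20, 12, 1, matching the list of rotations above.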

This calculation also shows that if we move a hyperplane through the 3-sphere, hitting our favorite vertex at the moment it first touches the 3-sphere, it will give the following slices of the 600-cell:

• Slice 1: a point (our favorite vertex),

• Slice 2: an icosahedron (the 12 nearest neighbors),

• Slice 3: a dodecahedron (the 20 next-nearest neighbors),

• Slice 4: an icosahedron (the 12 third-nearest neighbors),

• Slice 5: an icosidodecahedron (the 30 fourth-nearest neighbors),

• Slice 6: an icosahedron (the 12 fifth-nearest neighbors),

• Slice 7: a dodecahedron (the 20 sixth-nearest neighbors),

• Slice 8: an icosahedron (the 12 seventh-nearest neighbors),

• Slice 9: a point (the vertex opposite our favorite).

Here’s a picture drawn by J. Gregory Moxness, illustrating this:

Note that there are 9 slices. Each corresponds to a different conjugacy class in the group Γ, the binary icosahedral group. These in turn correspond to the dots in the *extended* Dynkin diagram of E8, which has the usual 8 dots and one more.

The usual E8 Dynkin diagram has ‘legs’ of lengths 5, 2 and 3, all sharing the dot in the middle:

The three legs correspond to conjugacy classes in Γ that map to rotational symmetries of the icosahedron that preserve a vertex (5 conjugacy classes), an edge (2 conjugacy classes), and a face (3 conjugacy classes)… not counting the element −1. (The class of the identity is counted in each leg: it is the dot shared by all three.) That last element, −1, gives the extra dot in the *extended* Dynkin diagram.


You can see a PDF here:

• From the icosahedron to E8.

Here’s the story:

In mathematics, every sufficiently beautiful object is connected to all others. Many exciting adventures, of various levels of difficulty, can be had by following these connections. Take, for example, the icosahedron—that is, the *regular* icosahedron, one of the five Platonic solids. Starting from this it is just a hop, skip and a jump to the E8 lattice, a wonderful pattern of points in 8 dimensions! As we explore this connection we shall see that it also ties together many other remarkable entities: the golden ratio, the quaternions, the quintic equation, a highly symmetrical 4-dimensional shape called the 600-cell, and a manifold called the Poincaré homology 3-sphere.

Indeed, the main problem with these adventures is knowing where to stop! The story we shall tell is just a snippet of a longer one involving the McKay correspondence and quiver representations. It would be easy to bring in the octonions, exceptional Lie groups, and more. But it can be enjoyed without these esoteric digressions, so let us introduce the protagonists without further ado.

The icosahedron has a long history. According to a comment in Euclid’s *Elements* it was discovered by Plato’s friend Theaetetus, a geometer who lived from roughly 415 to 369 BC. Since Theaetetus is believed to have classified the Platonic solids, he may have found the icosahedron as part of this project. If so, it is one of the earliest mathematical objects discovered as part of a classification theorem. It’s hard to be sure. In any event, it was known to Plato: in his *Timaeus*, he argued that water comes in atoms of this shape.

The icosahedron has 20 triangular faces, 30 edges, and 12 vertices. We can take the vertices to be the four points

(0, ±1, ±Φ)

and all those obtained from these by cyclic permutations of the coordinates, where

Φ = (1 + √5)/2 ≈ 1.618

is the golden ratio. Thus, we can group the vertices into three orthogonal **golden rectangles**: rectangles whose proportions are Φ to 1.

In fact, there are five ways to do this. The rotational symmetries of the icosahedron permute these five ways, and any nontrivial rotation gives a nontrivial permutation. The rotational symmetry group of the icosahedron is thus a subgroup of S5. Moreover, this subgroup has 60 elements. After all, any rotation is determined by what it does to a chosen face of the icosahedron: it can map this face to any of the 20 faces, and it can do so in 3 ways. The rotational symmetry group of the icosahedron is therefore a 60-element subgroup of S5. Group theory then tells us that it must be the alternating group A5.
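As a quick check that a 60-element group of even permutations of 5 things really is pinned down this way, here is a snippet that generates a group from a 5-cycle and a double transposition (one convenient choice of generators; the text does not pick explicit ones) and counts its elements:

```python
# Generate the group of permutations of {0,...,4} spanned by a 5-cycle and a
# double transposition, by repeatedly closing under composition. These two
# even permutations are one convenient (assumed) generating pair for A5.
def compose(p, q):                  # (p o q)(x) = p[q[x]]
    return tuple(p[q[i]] for i in range(5))

group = {(1, 2, 3, 4, 0),           # the 5-cycle (0 1 2 3 4)
         (1, 0, 3, 2, 4)}           # the double transposition (0 1)(2 3)
while True:
    products = {compose(a, b) for a in group for b in group}
    if products <= group:           # closed: we have the whole group
        break
    group |= products

print(len(group))   # 60: the alternating group A5
```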

The E8 lattice is harder to visualize than the icosahedron, but still easy to characterize. Take a bunch of equal-sized spheres in 8 dimensions. Get as many of these spheres to touch a single sphere as you possibly can. Then, get as many to touch *those* spheres as you possibly can, and so on. Unlike in 3 dimensions, where there is ‘wiggle room’, you have no choice about how to proceed, except for an overall rotation and translation. The balls will inevitably be centered at points of the E8 lattice!

We can also characterize the E8 lattice as the one giving the densest packing of spheres among all lattices in 8 dimensions. This packing was long suspected to be optimal even among those that do not arise from lattices—but this fact was proved only in 2016, by the young mathematician Maryna Viazovska [V].

We can also describe the E8 lattice more explicitly. In suitable coordinates, it consists of vectors for which:

1) the components are either all integers or all integers plus ½, and

2) the components sum to an even number.

This lattice consists of all integral linear combinations of the 8 rows of this matrix:

The inner product of any row vector with itself is 2, while the inner product of distinct row vectors is either 0 or −1. Thus, any two of these vectors lie at an angle of either 90° or 120°. If we draw a dot for each vector, and connect two dots by an edge when the angle between their vectors is 120°, we get this pattern:

This is called the E8 Dynkin diagram. In the first part of our story we shall find the E8 lattice hiding in the icosahedron; in the second part, we shall find this diagram. The two parts of this story must be related—but the relation remains mysterious, at least to me.
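For concreteness, here is a small Python check using one conventional choice of 8 such vectors (the ‘Bourbaki’ simple roots of E8; this is an assumed standard choice, not necessarily the rows of the matrix above). It verifies that they lie in the lattice as just described, that all the inner products are 2, 0 or −1, and that the 120° pairs draw a tree:

```python
# One conventional choice of 8 simple roots for the E8 lattice (the "Bourbaki"
# choice; an assumption, not necessarily the matrix meant in the text).
roots = [
    [0.5, -0.5, -0.5, -0.5, -0.5, -0.5, -0.5, 0.5],
    [1, 1, 0, 0, 0, 0, 0, 0],
    [-1, 1, 0, 0, 0, 0, 0, 0],
    [0, -1, 1, 0, 0, 0, 0, 0],
    [0, 0, -1, 1, 0, 0, 0, 0],
    [0, 0, 0, -1, 1, 0, 0, 0],
    [0, 0, 0, 0, -1, 1, 0, 0],
    [0, 0, 0, 0, 0, -1, 1, 0],
]

def dot(u, v):
    return sum(a * b for a, b in zip(u, v))

# each root lies in the lattice described above: components all integers or
# all integers plus 1/2, summing to an even number
for r in roots:
    assert all(c == int(c) for c in r) or all(c - 0.5 == int(c - 0.5) for c in r)
    assert sum(r) % 2 == 0

# norm^2 is 2, and distinct roots meet at 90 or 120 degrees
for i in range(8):
    assert dot(roots[i], roots[i]) == 2
    for j in range(i + 1, 8):
        assert dot(roots[i], roots[j]) in (0, -1)

# connecting dots whose roots meet at 120 degrees gives 7 edges on 8 dots:
# a tree with one trivalent node -- the shape of the E8 Dynkin diagram
edges = [(i, j) for i in range(8) for j in range(i + 1, 8)
         if dot(roots[i], roots[j]) == -1]
print(len(edges))
```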

The quickest route from the icosahedron to E8 goes through the fourth dimension. The symmetries of the icosahedron can be described using certain quaternions; the integer linear combinations of these form a subring of the quaternions called the ‘icosians’. But the icosians can be reinterpreted as a lattice in 8 dimensions, and this is the E8 lattice [CS]. Let us see how this works.

The quaternions, discovered by Hamilton, are a 4-dimensional algebra

ℍ = {a + bi + cj + dk : a, b, c, d ∈ ℝ}

with multiplication given by Hamilton’s famous formula:

i² = j² = k² = ijk = −1.

It is a normed division algebra, meaning that the norm

|a + bi + cj + dk| = √(a² + b² + c² + d²)

obeys

|qq′| = |q| |q′|

for all q, q′ ∈ ℍ. The unit sphere in ℍ is thus a group, often called SU(2), because its elements can be identified with 2 × 2 unitary complex matrices with determinant 1. This group acts as rotations of 3-dimensional Euclidean space, since we can see any point in ℝ³ as a **purely imaginary** quaternion x = bi + cj + dk, and the quaternion qxq⁻¹ is then purely imaginary for any unit quaternion q. Indeed, this action gives a double cover

ρ: SU(2) → SO(3)

where SO(3) is the group of rotations of ℝ³.
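Here is a small Python sketch of this double cover: conjugating a purely imaginary quaternion by a unit quaternion q rotates the corresponding vector, and the antipodal quaternion −q gives the exact same rotation. (The 72° example is just for illustration.)

```python
import math

# Hamilton's multiplication rule for quaternions (a, b, c, d) = a + bi + cj + dk
def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

def conj(q):
    a, b, c, d = q
    return (a, -b, -c, -d)

def rotate(q, x):
    # for a unit quaternion q, the inverse is the conjugate, and
    # q x q^{-1} is again purely imaginary: a rotated vector in R^3
    return qmul(qmul(q, (0.0,) + x), conj(q))[1:]

theta = 2 * math.pi / 5                               # a 72-degree rotation...
q = (math.cos(theta/2), 0.0, 0.0, math.sin(theta/2))  # ...uses the half-angle 36
x = (1.0, 0.0, 0.0)
print(rotate(q, x))          # approximately (cos 72, sin 72, 0)

qq = tuple(-c for c in q)    # the antipodal quaternion -q
print(rotate(qq, x))         # the same rotation: this is the double cover
```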

We can thus take any Platonic solid, look at its group of rotational symmetries, get a subgroup of SO(3), and take its double cover in SU(2). If we do this starting with the icosahedron, we see that the 60-element group A5 ⊂ SO(3) is covered by a 120-element group Γ ⊂ SU(2), called the **binary icosahedral group**.

The elements of Γ are quaternions of norm one, and it turns out that they are the vertices of a 4-dimensional regular polytope: a 4-dimensional cousin of the Platonic solids. It deserves to be called the “hypericosahedron”, but it is usually called the 600-cell, since it has 600 tetrahedral faces. Here is the 600-cell projected down to 3 dimensions, drawn using Robert Webb’s Stella software:

Explicitly, if we identify ℍ with ℝ⁴, the elements of Γ are the points

(±1, 0, 0, 0), (±½, ±½, ±½, ±½), ½(±Φ, ±1, ±1/Φ, 0),

and those obtained from these by even permutations of the coordinates. Since these points are closed under multiplication, if we take integral linear combinations of them we get a subring of the quaternions. Conway and Sloane [CS] call this the ring of **icosians**. The icosians are not a lattice in the quaternions: they are dense. However, any icosian is of the form a + bi + cj + dk where a, b, c and d live in the **golden field**

ℚ(√5) = {x + √5 y : x, y ∈ ℚ}.
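Here is a floating-point check of that closure claim: regenerate the 120 points and verify that the product of any two of them is again one of the 120.

```python
# The 120 unit icosians (the vertices of the 600-cell) should form a group
# under quaternion multiplication. A numerical spot check of closure:
from itertools import permutations, product

phi = (1 + 5**0.5) / 2

def is_even(p):
    # parity of a permutation of (0,1,2,3), via its inversion count
    return sum(p[i] > p[j] for i in range(4) for j in range(i + 1, 4)) % 2 == 0

even = [p for p in permutations(range(4)) if is_even(p)]

icosians = set()
for base in [(1, 0, 0, 0), (0.5, 0.5, 0.5, 0.5), (phi/2, 0.5, 1/(2*phi), 0)]:
    for signs in product((1, -1), repeat=4):
        v = tuple(s * b for s, b in zip(signs, base))
        for p in even:
            icosians.add(tuple(round(v[i], 9) for i in p))
assert len(icosians) == 120

def qmul(p, q):
    a1, b1, c1, d1 = p
    a2, b2, c2, d2 = q
    return (a1*a2 - b1*b2 - c1*c2 - d1*d2,
            a1*b2 + b1*a2 + c1*d2 - d1*c2,
            a1*c2 - b1*d2 + c1*a2 + d1*b2,
            a1*d2 + b1*c2 - c1*b2 + d1*a2)

# round to 6 decimals so tiny floating-point errors do not spoil the lookup
keys = {tuple(round(c, 6) for c in v) for v in icosians}
assert all(tuple(round(c, 6) for c in qmul(p, q)) in keys
           for p in icosians for q in icosians)
print("closed under multiplication:", len(icosians), "elements")
```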

Thus we can think of an icosian as an 8-tuple of rational numbers. Such 8-tuples form a lattice in 8 dimensions.

In fact we can put a norm on the icosians as follows. For q an icosian, the usual quaternionic norm has

|q|² = x + √5 y

for some rational numbers x and y, but we can define a new norm on the icosians by setting

‖q‖² = x + y.

With respect to this new norm, the icosians form a lattice that fits isometrically in 8-dimensional Euclidean space. And this is none other than E8!

Not only is the E8 lattice hiding in the icosahedron; so is the E8 Dynkin diagram. The space of all regular icosahedra of arbitrary size centered at the origin has a singularity, which corresponds to a degenerate special case: the icosahedron of zero size. If we resolve this singularity in a minimal way we get eight Riemann spheres, intersecting in a pattern described by the E8 Dynkin diagram!

This remarkable story starts around 1884 with Felix Klein’s *Lectures on the Icosahedron* [Kl]. In this work he inscribed an icosahedron in the Riemann sphere, ℂP¹. He thus got the icosahedron’s symmetry group, A5, to act as conformal transformations of ℂP¹—indeed, rotations. He then found a rational function of one complex variable that is invariant under all these transformations. This function equals 0 at the centers of the icosahedron’s faces, 1 at the midpoints of its edges, and ∞ at its vertices.

Here is Klein’s icosahedral function as drawn by Abdelaziz Nait Merzouk. The color shows its phase, while the contour lines show its magnitude:

We can think of Klein’s icosahedral function as a branched cover of the Riemann sphere by itself with 60 sheets:

I: ℂP¹ → ℂP¹.

Indeed, A5 acts on ℂP¹, and the quotient space ℂP¹/A5 is isomorphic to ℂP¹ again. The function I gives an explicit formula for the quotient map ℂP¹ → ℂP¹/A5 ≅ ℂP¹.

Klein managed to reduce solving the quintic to the problem of solving the equation I(z) = w for z. A modern exposition of this result is Shurman’s *Geometry of the Quintic* [Sh]. For a more high-powered approach, see the paper by Nash [N]. Unfortunately, neither of these treatments avoids complicated calculations. But our interest in Klein’s icosahedral function here does not come from its connection to the quintic: instead, we want to see its connection to E8.

For this we should actually construct Klein’s icosahedral function. To do this, recall that the Riemann sphere ℂP¹ is the space of 1-dimensional linear subspaces of ℂ². Let us work directly with ℂ². While PSL(2,ℂ) acts on ℂP¹, this comes from an action of this group’s double cover SL(2,ℂ) on ℂ². As we have seen, the rotational symmetry group of the icosahedron, A5 ⊂ PSL(2,ℂ), is double covered by the binary icosahedral group Γ ⊂ SL(2,ℂ). To build an A5-invariant rational function on ℂP¹, we should thus look for Γ-invariant homogeneous polynomials on ℂ².

It is easy to construct three such polynomials:

• V, of degree 12, vanishing on the 1d subspaces corresponding to icosahedron vertices.

• E, of degree 30, vanishing on the 1d subspaces corresponding to icosahedron edge midpoints.

• F, of degree 20, vanishing on the 1d subspaces corresponding to icosahedron face centers.

Remember, we have embedded the icosahedron in ℂP¹, and each point in ℂP¹ is a 1-dimensional subspace of ℂ², so each icosahedron vertex determines such a subspace, and there is a linear function on ℂ², unique up to a constant factor, that vanishes on this subspace. The icosahedron has 12 vertices, so we get 12 linear functions this way. Multiplying them gives V, a homogeneous polynomial of degree 12 on ℂ² that vanishes on all the subspaces corresponding to icosahedron vertices! The same trick gives E, which has degree 30 because the icosahedron has 30 edges, and F, which has degree 20 because the icosahedron has 20 faces.

A bit of work is required to check that V, E and F are invariant under Γ, instead of changing by constant factors under group transformations. Indeed, if we had copied this construction using a tetrahedron or octahedron, this would not be the case. For details, see Shurman’s book [Sh], which is free online, or van Hoboken’s nice thesis [VH].

Since both F³ and V⁵ have degree 60, F³/V⁵ is homogeneous of degree zero, so it defines a rational function I: ℂP¹ → ℂP¹. This function is invariant under A5 because F and V are invariant under Γ. Since F vanishes at face centers of the icosahedron while V vanishes at vertices, I equals 0 at face centers and ∞ at vertices. Finally, thanks to its invariance property, I takes the same value at every edge center, so we can normalize V or F to make this value 1.

Thus, I has precisely the properties required of Klein’s icosahedral function! And indeed, these properties uniquely characterize that function, so that function is I.

Now comes the really interesting part. Three polynomials on a 2-dimensional space must obey a relation, and V, E and F obey a very pretty one, at least after we normalize them correctly:

V⁵ + E² + F³ = 0.

We could guess this relation simply by noting that each term must have the same degree, namely 60. Every Γ-invariant polynomial on ℂ² is a polynomial in V, E and F, and indeed

ℂ²/Γ ≅ {(x, y, z) ∈ ℂ³ : x⁵ + y² + z³ = 0}.

This complex surface is smooth except at x = y = z = 0, where it has a singularity. And hiding in this singularity is E8!

To see this, we need to ‘resolve’ the singularity. Roughly, this means that we find a smooth complex surface S and an onto map

π: S → ℂ²/Γ

that is one-to-one away from the singularity. (More precisely, if X is an algebraic variety with singular points X_sing ⊂ X, then π: S → X is a **resolution** of X if S is smooth, π is proper, π⁻¹(X − X_sing) is dense in S, and π is an isomorphism between π⁻¹(X − X_sing) and X − X_sing. For more details see Lamotke’s book [L].)

There are many such resolutions, but one **minimal** resolution, meaning that all others factor uniquely through this one:

What sits above the singularity in this minimal resolution? Eight copies of the Riemann sphere ℂP¹, one for each dot here:

Two of these ℂP¹s intersect in a point if their dots are connected by an edge; otherwise they are disjoint.

This amazing fact was discovered by Patrick du Val in 1934 [DV]. Why is it true? Alas, there is not enough room in the margin, or even in the entire blog article, to explain this. The books by Kirillov [Ki] and Lamotke [L] fill in the details. But here is a clue. The E8 Dynkin diagram has ‘legs’ of lengths 5, 2 and 3:

On the other hand,

A5 ≅ ⟨v, e, f | v⁵ = e² = f³ = vef = 1⟩

where in terms of the rotational symmetries of the icosahedron:

• v is a 1/5 turn around some vertex of the icosahedron,

• e is a 1/2 turn around the center of an edge touching that vertex,

• f is a 1/3 turn around the center of a face touching that vertex,

and we must choose the sense of these rotations correctly to obtain vef = 1. To get a presentation of the binary icosahedral group we drop one relation:

Γ ≅ ⟨v, e, f | v⁵ = e² = f³ = vef⟩

The dots in the E8 Dynkin diagram correspond naturally to conjugacy classes in Γ, not counting the conjugacy class of the central element −1. Each of these conjugacy classes, in turn, gives a copy of ℂP¹ in the minimal resolution of ℂ²/Γ.

Not only the E8 Dynkin diagram, but also the E8 lattice, can be found in the minimal resolution of ℂ²/Γ. Topologically, this space is a 4-dimensional manifold. Its real second homology group is an 8-dimensional vector space with an inner product given by the intersection pairing. The integral second homology is a lattice in this vector space spanned by the 8 copies of ℂP¹ we have just seen—and it is a copy of the E8 lattice [KS].

But let us turn to a more basic question: what is ℂ²/Γ like as a topological space? To tackle this, first note that we can identify a pair of complex numbers with a single quaternion, and this gives a homeomorphism

ℂ²/Γ ≅ ℍ/Γ

where we let Γ act by right multiplication on ℍ. So, it suffices to understand ℍ/Γ.

Next, note that sitting inside ℍ/Γ are the points coming from the unit sphere in ℍ. These points form the 3-dimensional manifold S³/Γ, which is called the **Poincaré homology 3-sphere** [KS]. This is a wonderful thing in its own right: Poincaré discovered it as a counterexample to his guess that any compact 3-manifold with the same homology as a 3-sphere is actually diffeomorphic to the 3-sphere, and it is deeply connected to E8. But for our purposes, what matters is that we can think of this manifold in another way, since we have a diffeomorphism

S³/Γ ≅ SO(3)/A5.

The latter is just *the space of all icosahedra inscribed in the unit sphere in 3d space*, where we count two as the same if they differ by a rotational symmetry.

This is a nice description of the points of ℍ/Γ coming from points in the unit sphere of ℍ. But every quaternion lies in *some* sphere centered at the origin of ℍ, of possibly zero radius. It follows that ℂ²/Γ ≅ ℍ/Γ is the space of *all* icosahedra centered at the origin of 3d space—of arbitrary size, including a degenerate icosahedron of zero size. This degenerate icosahedron is the singular point in ℂ²/Γ. This is where E8 is hiding.

Clearly much has been left unexplained in this brief account. Most of the missing details can be found in the references. But it remains unknown—at least to me—how the two constructions of E8 from the icosahedron fit together in a unified picture.

Recall what we did. First we took the binary icosahedral group Γ ⊂ ℍ, took integer linear combinations of its elements, thought of these as forming a lattice in an 8-dimensional rational vector space with a natural norm, and discovered that this lattice is a copy of the E8 lattice. Then we took ℂ²/Γ, took its minimal resolution, and found that the integral 2nd homology of this space, equipped with its natural inner product, is a copy of the E8 lattice. From the same ingredients we built the same lattice in two very different ways! How are these constructions connected? This puzzle deserves a nice solution.

I thank Tong Yang for inviting me to speak on this topic at the Annual General Meeting of the Hong Kong Mathematical Society on May 20, 2017, and Guowu Meng for hosting me at the HKUST while I prepared that talk. I also thank the many people, too numerous to accurately list, who have helped me understand these topics over the years.

[CS] J. H. Conway and N. J. A. Sloane, *Sphere Packings, Lattices and Groups*, Springer, Berlin, 2013.

[DV] P. du Val, On isolated singularities of surfaces which do not affect the conditions of adjunction, I, II and III, *Proc. Camb. Phil. Soc.* **30** (1934), 453–459, 460–465, 483–491.

[KS] R. Kirby and M. Scharlemann, Eight faces of the Poincaré homology 3-sphere, *Usp. Mat. Nauk.* **37** (1982), 139–159. Available at https://tinyurl.com/ybrn4pjq

[Ki] A. Kirillov, *Quiver Representations and Quiver Varieties*, AMS, Providence, Rhode Island, 2016.

[Kl] F. Klein, *Lectures on the Ikosahedron and the Solution of Equations of the Fifth Degree*, Trübner & Co., London, 1888. Available at https://archive.org/details/cu31924059413439

[L] K. Lamotke, *Regular Solids and Isolated Singularities*, Vieweg & Sohn, Braunschweig, 1986.

[N] O. Nash, On Klein’s icosahedral solution of the quintic. Available at https://arxiv.org/abs/1308.0955

[Sh] J. Shurman, *Geometry of the Quintic*, Wiley, New York, 1997. Available at http://people.reed.edu/~jerry/Quintic/quintic.html

[Sl] P. Slodowy, Platonic solids, Kleinian singularities, and Lie groups, in *Algebraic Geometry*, Lecture Notes in Mathematics **1008**, Springer, Berlin, 1983, pp. 102–138.

[VH] J. van Hoboken, *Platonic Solids, Binary Polyhedral Groups, Kleinian Singularities and Lie Algebras of Type A, D, E*, Master’s Thesis, University of Amsterdam, 2002. Available at http://math.ucr.edu/home/baez/joris_van_hoboken_platonic.pdf

[V] M. Viazovska, The sphere packing problem in dimension 8, *Ann. Math.* **185** (2017), 991–1015. Available at https://arxiv.org/abs/1603.04246


In certain crystals you can knock an electron out of its favorite place and leave a **hole**: a place with a missing electron. Sometimes these holes can move around like particles. And naturally these holes attract electrons, since they are *places an electron would want to be*.

Since an electron and a hole attract each other, they can orbit each other. An orbiting electron-hole pair is a bit like a hydrogen atom, where an electron orbits a proton. All of this is quantum-mechanical, of course, so you should be imagining smeared-out wavefunctions, not little dots moving around. But imagine dots if it’s easier.

An orbiting electron-hole pair is called an **exciton**, because while it acts like a particle in its own right, it’s really just a special kind of ‘excited’ electron—an electron with extra energy, not in its lowest energy state where it wants to be.

An exciton usually doesn’t last long: the orbiting electron and hole spiral towards each other, the electron finds the hole it’s been seeking, and it settles down.

But excitons can last long enough to do interesting things. In 1978 the Russian physicist Abrikosov wrote a short and very creative paper in which he raised the possibility that *excitons could form a crystal in their own right!* He called this new state of matter **excitonium**.

In fact his reasoning was very simple.

Just as electrons have a mass, so do holes. That sounds odd, since a hole is just a vacant spot where an electron would like to be. But such a hole can move around. It has more energy when it moves faster, and it takes force to accelerate it—so it acts just like it has a mass! The precise mass of a hole depends on the nature of the substance we’re dealing with.

Now imagine a substance with very heavy holes.

When a hole is much heavier than an electron, it will stand almost still when an electron orbits it. So, they form an exciton that’s *very* similar to a hydrogen atom, where we have an electron orbiting a much heavier proton.
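We can even put rough numbers on the analogy, using the standard hydrogen-like (‘Wannier’) model of an exciton. The effective masses and dielectric constant below are made-up illustrative values, not data for any real material:

```python
# Hydrogen-like ("Wannier") model of an exciton: an electron with effective
# mass m_e* orbits a hole with effective mass m_h*, inside a medium with
# dielectric constant eps. All three inputs below are illustrative
# assumptions, not data for any particular substance.
RYDBERG_EV = 13.6057   # binding energy of hydrogen, in eV
BOHR_NM = 0.0529177    # Bohr radius of hydrogen, in nm

def exciton(m_e_star, m_h_star, eps):
    # reduced mass of the electron-hole pair, in units of the bare electron mass
    mu = m_e_star * m_h_star / (m_e_star + m_h_star)
    binding_ev = RYDBERG_EV * mu / eps**2   # scaled-down Rydberg
    radius_nm = BOHR_NM * eps / mu          # scaled-up Bohr radius
    return binding_ev, radius_nm

# with a very heavy hole, the reduced mass is essentially the electron's own:
print(exciton(0.1, 10.0, 10.0))
```

With these made-up numbers the exciton is bound by only about a hundredth of an eV and smeared over several nanometers, which is part of why excitons are so much more fragile than hydrogen atoms.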

Hydrogen comes in different forms: gas, liquid, solid… and at extreme pressures, like in the core of Jupiter, hydrogen becomes *metallic*. So, we should expect that excitons can come in all these different forms too!

We should be able to create an exciton gas… an exciton liquid… an exciton solid… and under the right circumstances, a *metallic crystal of excitons*. Abrikosov called this **metallic excitonium**.

People have been trying to create this stuff for a long time. Some claim to have succeeded. But a new paper claims to have found something else: a Bose–Einstein condensate of excitons:

• Anshul Kogar *et al.*, Signatures of exciton condensation in a transition metal dichalcogenide, *Science* (2017).

A lone electron acts like a fermion, so I guess a hole does too, and if so that means an exciton acts approximately like a boson. When it’s cold, a gas of bosons will ‘condense’, with a significant fraction of them settling into the lowest energy states available. I guess excitons have been seen to do this!

There’s a pretty good simplified explanation at the University of Illinois website:

• Siv Schwink, Physicists excited by discovery of new form of matter, excitonium, 7 December 2017.

However, the picture on this page, which I used above, shows domain walls moving through crystallized excitonium. I think that’s different from a Bose–Einstein condensate!

I urge you to look at Abrikosov’s paper. It’s short and beautiful:

• Alexei Alexeyevich Abrikosov, A possible mechanism of high temperature superconductivity, *Journal of the Less Common Metals* **62** (1978), 451–455.

(Cool journal title. Is there a journal of the *more* common metals?)

In this paper, Abrikosov points out that previous authors had the idea of metallic excitonium. Maybe his new idea was that this might be a superconductor—and that this might explain high-temperature superconductivity. The reason for his guess is that metallic hydrogen, too, is widely suspected to be a superconductor.

Later, Abrikosov won the Nobel prize for some other ideas about superconductors. I think I should read more of his papers. He seems like one of those physicists with great intuitions.

**Puzzle 1.** If a crystal of excitons conducts electricity, what is actually going on? That is, which electrons are moving around, and how?

This is a fun puzzle because an exciton crystal is a kind of *abstract* crystal created by the motion of electrons in another, ordinary, crystal. And that leads me to another puzzle, one I don’t know the answer to:

**Puzzle 2.** Is it possible to create a hole in excitonium? If so, is it possible to create an exciton in excitonium? If so, is it possible to create **meta-excitonium**: a crystal of excitons in excitonium?


I’d like to explain a conjecture about Wigner crystals, which we came up with in a discussion on Google+. It’s a purely mathematical conjecture that’s pretty simple to state, motivated by the picture above. But let me start at the beginning.

Electrons repel each other, so they don’t usually form crystals. But if you trap a bunch of electrons in a small space, and cool them down a lot, they will try to get as far away from each other as possible—and they can do this by forming a crystal!

This is sometimes called an **electron crystal**. It’s also called a **Wigner crystal**, because the great physicist Eugene Wigner predicted in 1934 that this would happen.

Only since the late 1980s have we been able to make electron crystals in the lab. Such a crystal can only form if the electron density is low enough. The reason is that even at absolute zero, a gas of electrons has kinetic energy. At absolute zero the gas will minimize its energy. But it can’t do this by having all the electrons in a state with zero momentum, since you can’t put two electrons in the same state, thanks to the Pauli exclusion principle. So, higher momentum states need to be occupied, and this means there’s kinetic energy. And there’s more kinetic energy if the density is high: if there’s less room in position space, the electrons are forced to occupy more room in momentum space.
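Here is that kinetic energy made quantitative: the Fermi energy of a uniform electron gas grows as the 2/3 power of the density. The densities below are merely illustrative (the larger one is roughly that of conduction electrons in copper):

```python
# Fermi energy of a zero-temperature electron gas: filling momentum states up
# to the Fermi level gives E_F = (hbar^2 / 2m) * (3 pi^2 n)^(2/3), so the
# kinetic energy per electron grows as density^(2/3).
import math

HBAR = 1.054571817e-34   # J s
M_E = 9.1093837015e-31   # kg
EV = 1.602176634e-19     # J

def fermi_energy_ev(n_per_m3):
    return HBAR**2 * (3 * math.pi**2 * n_per_m3) ** (2/3) / (2 * M_E) / EV

print(fermi_energy_ev(8.5e28))   # copper-like density: a few eV
print(fermi_energy_ev(8.5e22))   # a million times more dilute: 10^4 times smaller
```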

When the density is high, this prevents the formation of a crystal: instead, we have lots of electrons whose wavefunctions are ‘sitting almost on top of each other’ in position space, but with different momenta. They’ll have lots of kinetic energy, so minimizing kinetic energy becomes more important than minimizing potential energy.

When the density is low, this effect becomes unimportant, and the electrons mainly try to minimize potential energy. So, they form a crystal with each electron avoiding the rest. It turns out they form a **body-centered cubic**: a crystal lattice formed of cubes, with an extra electron in the middle of each cube.

To know whether a uniform electron gas at zero temperature forms a crystal or not, you need to work out its so-called **Wigner–Seitz radius**. This is the average inter-particle spacing measured in units of the Bohr radius. The **Bohr radius** is the unit of length you can cook up from the electron mass, the electron charge and Planck’s constant:

a₀ = ℏ²/(m e²) ≈ 5.29 × 10⁻¹¹ meters (in Gaussian units).

It’s mainly famous as the average distance between the electron and the proton in a hydrogen atom in its lowest energy state.

Simulations show that a 3-dimensional uniform electron gas crystallizes when the Wigner–Seitz radius is at least 106. The picture, however, shows an electron crystal in *2 dimensions*, formed by electrons trapped on a thin film shaped like a disk. In 2 dimensions, Wigner crystals form when the Wigner–Seitz radius is at least 31. In the picture, the density is so low that we can visualize the electrons as points with well-defined positions.
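Putting in numbers: here is a little script that computes the Bohr radius from the SI constants and then the electron density below which a 3-dimensional gas could crystallize, using the quoted threshold of 106:

```python
# How dilute must a 3d electron gas be to crystallize? Using the Bohr radius
# a0 = 4 pi eps0 hbar^2 / (m e^2) and the quoted threshold r_s >= 106, the
# critical density n satisfies (4/3) pi (r_s a0)^3 n = 1.
import math

HBAR = 1.054571817e-34    # J s
M_E = 9.1093837015e-31    # kg
E_CHARGE = 1.602176634e-19    # C
EPS0 = 8.8541878128e-12   # F/m

a0 = 4 * math.pi * EPS0 * HBAR**2 / (M_E * E_CHARGE**2)
print(a0)       # ~5.29e-11 m

r_s = 106
n_crit = 3 / (4 * math.pi * (r_s * a0) ** 3)
print(n_crit)   # ~1.4e24 electrons per m^3, far more dilute than any metal
```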

So, the picture simply shows a bunch of points trying to minimize the potential energy, which is proportional to

∑_{i<j} 1/|xᵢ − xⱼ|

where xᵢ is the position of the ith electron.
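Here’s a toy version of this minimization, just to make the setup concrete. It relaxes a dozen point charges confined to the unit disk by naive projected gradient descent; this is an illustrative sketch, not the method used to make the picture:

```python
# A tiny version of the minimization in the picture: a few point charges
# confined to the unit disk, relaxed by projected gradient descent on the
# Coulomb-like energy sum over pairs of 1/|x_i - x_j|.
import math, random

random.seed(1)
N = 12
pts = []
while len(pts) < N:                         # random start inside the disk
    x, y = random.uniform(-1, 1), random.uniform(-1, 1)
    if x * x + y * y < 1:
        pts.append((x, y))

def energy(p):
    return sum(1.0 / math.dist(p[i], p[j])
               for i in range(N) for j in range(i + 1, N))

def gradient(p):
    g = [[0.0, 0.0] for _ in range(N)]
    for i in range(N):
        for j in range(N):
            if i != j:
                dx, dy = p[i][0] - p[j][0], p[i][1] - p[j][1]
                r3 = math.dist(p[i], p[j]) ** 3
                g[i][0] -= dx / r3          # d/dx_i of 1/|x_i - x_j|
                g[i][1] -= dy / r3
    return g

def clip(x, y):                             # keep each point inside the disk
    r = math.hypot(x, y)
    return (x / r, y / r) if r > 1 else (x, y)

e_start = e = energy(pts)
step = 1e-3
for _ in range(1000):
    g = gradient(pts)
    trial = [clip(px - step * gx, py - step * gy)
             for (px, py), (gx, gy) in zip(pts, g)]
    e_trial = energy(trial)
    if e_trial < e:                         # accept only downhill moves
        pts, e = trial, e_trial
        step *= 1.2
    else:
        step *= 0.5

print(round(e_start, 2), "->", round(e, 2))
```

Even at this tiny scale the charges push each other toward a roughly triangular arrangement pressed against the boundary.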

The lines between the dots are just to help you see what’s going on. They’re showing the **Delaunay triangulation**: first we draw the Voronoi diagram, dividing the plane into regions closer to one electron than to all the rest, and then we take the dual of that graph.

Thanks to energy minimization, this triangulation wants to be a lattice of equilateral triangles. But since such a triangular lattice doesn’t fit neatly into a disk, we also see some ‘defects’:

Most electrons have 6 neighbors. But there are also some red defects, which are electrons with 5 neighbors, and blue defects, which are electrons with 7 neighbors.

Note that there are 6 clusters of defects. In each cluster there is one more red defect than blue defect. I think this is not a coincidence.

**Conjecture.** When we choose a sufficiently large number of points on a disk in such a way that

∑_{i<j} 1/|xᵢ − xⱼ|

is minimized, and draw the Delaunay triangulation, there will be 6 more vertices with 5 neighbors than vertices with 7 neighbors.

Here’s a bit of evidence for this, which is not at all conclusive. Take a sphere and triangulate it in such a way that each vertex has 5, 6 or 7 neighbors. Then here’s a cool fact: there must be 12 more vertices with 5 neighbors than vertices with 7 neighbors.

**Puzzle.** Prove this fact.

If we think of the picture above as the top half of a triangulated sphere, then each vertex in this triangulated sphere has 5, 6 or 7 neighbors. So, there must be 12 more vertices on the sphere with 5 neighbors than with 7 neighbors. So, it makes some sense that the *top half* of the sphere will contain 6 more vertices with 5 neighbors than with 7 neighbors. But this is not a proof.
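We can at least verify the cool fact in the simplest case, the icosahedron itself, where all 12 vertices have 5 neighbors and none has 7, so the difference is 12 − 0 = 12:

```python
# Build the icosahedron's edge graph from its coordinates and check the
# degree count, along with Euler's formula V - E + F = 2.
from itertools import combinations

phi = (1 + 5**0.5) / 2

# the 12 vertices: (0, +-1, +-phi) and cyclic permutations of the coordinates
verts = []
for a in (1, -1):
    for b in (phi, -phi):
        verts += [(0, a, b), (b, 0, a), (a, b, 0)]

def dist(u, v):
    return sum((x - y) ** 2 for x, y in zip(u, v)) ** 0.5

# with these coordinates the nearest pairs, at distance 2, are exactly the edges
edges = [(u, v) for u, v in combinations(verts, 2) if abs(dist(u, v) - 2) < 1e-9]

deg = {v: 0 for v in verts}
for u, v in edges:
    deg[u] += 1
    deg[v] += 1

V, E = len(verts), len(edges)
F = 2 * E // 3                            # every face is a triangle
print(V, E, F, V - E + F)                 # Euler's formula gives 2
print(sum(6 - d for d in deg.values()))   # the degree surplus: 12
```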

I have a feeling this energy minimization problem has been studied with various numbers of points. So, there may either be a lot of evidence for my conjecture, or some counterexamples that will force me to refine it. The picture shows what happens with 600 points on the disk. Maybe something dramatically different happens with 599! Maybe someone has even proved theorems about this. I just haven’t had time to look for such work.

The picture here was drawn by Arunas.rv and placed on Wikimedia Commons under a Creative Commons Attribution-Share Alike 3.0 Unported license.

]]>

Do you want to see how snake-like it is? Okay, but beware… this video clip is a warning:

This snake-like monster is also called the ‘pseudo-arc’. It’s the limit of a sequence of curves that get more and more wiggly. Here are the 5th and 6th curves in the sequence:

Here are the 8th and 10th:

But what happens if you try to draw the pseudo-arc itself, the limit of all these curves? It turns out to be infinitely wiggly—*so wiggly that any picture of it is useless.*

In fact Wayne Lewis and Piotr Minc wrote a paper about this, called Drawing the pseudo-arc. That’s where I got these pictures. The paper also shows stage 200, and it’s a big fat ugly black blob!

But the pseudo-arc is beautiful if you see through the pictures to the concepts, because it’s a universal snake-like continuum. Let me explain. This takes some math.

The nicest metric spaces are compact metric spaces, and each of these can be written as the union of connected components… so there’s a long history of interest in compact connected metric spaces. Except for the empty set, which probably doesn’t deserve to be called connected, these spaces are called **continua**.

Like all point-set topology, the study of continua is considered a bit old-fashioned, because people have been working on it for so long, and it’s hard to get good new results. But on the bright side, what this means is that many great mathematicians have contributed to it, and there are lots of nice theorems. You can learn about it here:

• W. T. Ingram, A brief historical view of continuum theory,

*Topology and its Applications* **153** (2006), 1530–1539.

• Sam B. Nadler, Jr, *Continuum Theory: An Introduction*, Marcel Dekker, New York, 1992.

Now, if we’re doing topology, we should really talk not about metric spaces but about **metrizable** spaces: that is, topological spaces where the topology comes from *some* metric, which is not necessarily unique. This nuance is a way of clarifying that we don’t really care about the metric, just the topology.

So, we define a **continuum** to be a nonempty compact connected metrizable space. When I think of this I think of a curve, or a ball, or a sphere. Or maybe something bigger like the **Hilbert cube**: the countably infinite product of closed intervals. Or maybe something full of holes, like the Sierpinski carpet:

or the Menger sponge:

Or maybe something weird like a solenoid:

Very roughly, a continuum is ‘snake-like’ if it’s long and skinny and doesn’t loop around. But the precise definition is a bit harder:

We say that an open cover 𝒰 of a space X **refines** an open cover 𝒱 if each element of 𝒰 is contained in an element of 𝒱. We call a continuum X **snake-like** if each open cover of X can be refined by an open cover U_{1}, …, U_{n} such that for any i, j the intersection of U_{i} and U_{j} is nonempty iff |i − j| ≤ 1.

Such a cover is called a **chain**, so a snake-like continuum is also called **chainable**. But ‘snake-like’ is so much cooler: we should take advantage of any opportunity to bring snakes into mathematics!
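To make the definition concrete, here is a tiny checker for the simplest setting, where the cover elements are open intervals in the line (the helper is my own, not standard terminology): it tests that consecutive links overlap and non-consecutive ones don’t.

```python
def is_chain(cover):
    """Check that open intervals (a, b), listed in order, form a chain:
    U_i meets U_j exactly when |i - j| <= 1."""
    def meets(u, v):
        # Two open intervals intersect iff the larger left endpoint
        # is below the smaller right endpoint.
        return max(u[0], v[0]) < min(u[1], v[1])
    n = len(cover)
    return all(meets(cover[i], cover[j]) == (abs(i - j) <= 1)
               for i in range(n) for j in range(n))
```

For instance, (0, 0.4), (0.3, 0.7), (0.6, 1) is a chain, while a list in which the first and third intervals overlap is not.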

The simplest snake-like continuum is the closed unit interval [0,1]. It’s hard to think of others. But here’s what Mioduszewski proved in 1962: the pseudo-arc is a **universal** snake-like continuum. That is: it’s a snake-like continuum, and it has a continuous map onto *every* snake-like continuum!

This is a way of saying that the pseudo-arc is the most complicated snake-like continuum possible. A bit more precisely: it bends back on itself as much as possible while still going somewhere! You can see this from the pictures above, or from the construction on Wikipedia:

• Wikipedia, Pseudo-arc.

I like the idea that there’s a subset of the plane with this simple ‘universal’ property, which however is so complicated that it’s impossible to draw.

Here’s the paper where these pictures came from:

• Wayne Lewis and Piotr Minc, Drawing the pseudo-arc, *Houston J. Math.* **36** (2010), 905–934.

The pseudo-arc has other amazing properties. For example, it’s ‘indecomposable’. A nonempty connected closed subset of a continuum is a continuum in its own right, called a **subcontinuum**, and we say a continuum is **indecomposable** if it is not the union of two proper subcontinua.

It takes a while to get used to this idea, since all the examples of continua that I’ve listed so far are decomposable except for the pseudo-arc and the solenoid!

Of course a single point is an indecomposable continuum, but that example is so boring that people sometimes exclude it. The first interesting example was discovered by Brouwer in 1910. It’s the intersection of an infinite sequence of sets like this:

It’s called the **Brouwer–Janiszewski–Knaster continuum** or **buckethandle**. Like the solenoid, it shows up as an attractor in some chaotic dynamical systems.

It’s easy to imagine how if you write the buckethandle as the union of two closed proper subsets, at least one will be disconnected. And note: you don’t even need these subsets to be disjoint! So, it’s an indecomposable continuum.

But once you get used to indecomposable continua, you’re ready for the next level of weirdness. An even more dramatic thing is a **hereditarily indecomposable** continuum: one for which each subcontinuum is also indecomposable.

Apart from a single point, the pseudo-arc is the unique hereditarily indecomposable snake-like continuum! I believe this was first proved here:

• R. H. Bing, Concerning hereditarily indecomposable continua, *Pacific J. Math.* **1** (1951), 43–51.

Finally, here’s one more amazing fact about the pseudo-arc. To explain it, I need a bunch more nice math:

Every continuum arises as a closed subset of the Hilbert cube. There’s an obvious way to define the distance between two closed subsets of a compact metric space, called the Hausdorff distance—if you don’t know about this already, it’s fun to reinvent it yourself. The set of all closed subsets of a compact metric space thus forms a metric space in its own right—and by the way, the **Blaschke selection theorem** says this metric space is again compact!
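If you’d rather check your reinvention than look it up, here is the formula specialized to finite point sets (the function name is mine): the Hausdorff distance is the larger of the two “max over one set of the min distance to the other”.

```python
import numpy as np

def hausdorff(A, B):
    """Hausdorff distance between finite point sets, given as (k, d) arrays."""
    # D[i, j] = Euclidean distance from A[i] to B[j].
    D = np.linalg.norm(A[:, None, :] - B[None, :, :], axis=-1)
    # Farthest any point of A is from B, and vice versa; take the worse.
    return max(D.min(axis=1).max(), D.min(axis=0).max())
```

For general compact sets one replaces the minima and maxima by infima and suprema, but the finite case already captures the idea.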

Anyway, this stuff means that there’s a metric space whose points are all subcontinua of the Hilbert cube, and we don’t miss out on any continua by looking at these. So we can call this the **space of all continua**.

Now for the amazing fact: *pseudo-arcs are dense in the space of all continua!*

I don’t know who proved this. It’s mentioned here:

• Trevor L. Irwin and Sławomir Solecki, Projective Fraïssé limits and the pseudo-arc.

but they refer to this paper as a good source for such facts:

• Wayne Lewis, The pseudo-arc, *Bol. Soc. Mat. Mexicana (3)* **5** (1999), 25–77.

Abstract. The pseudo-arc is the simplest nondegenerate hereditarily indecomposable continuum. It is, however, also the most important, being homogeneous, having several characterizations, and having a variety of useful mapping properties. The pseudo-arc has appeared in many areas of continuum theory, as well as in several topics in geometric topology, and is beginning to make its appearance in dynamical systems. In this monograph, we give a survey of basic results and examples involving the pseudo-arc. A more complete treatment will be given in a book dedicated to this topic, currently under preparation by this author. We omit formal proofs from this presentation, but do try to give indications of some basic arguments and construction techniques. Our presentation covers the following major topics: 1. Construction 2. Homogeneity 3. Characterizations 4. Mapping properties 5. Hyperspaces 6. Homeomorphism groups 7. Continuous decompositions 8. Dynamics.

It may seem surprising that one can write a whole book about the pseudo-arc… but if you like continua, it’s a fundamental structure just like spheres and cubes!

]]>

• Jean-Luc Thiffeault and Matthew D. Finn, Topology, braids, and mixing in fluids.

I’ve talked a lot about entropy on this blog, but not much about topological entropy. This is a way to define the entropy of a continuous map from a compact topological space to itself. The idea is that a map that mixes things up a lot should have a lot of entropy. In particular, any map defining a ‘chaotic’ dynamical system should have positive entropy, while non-chaotic maps should have zero entropy.

How can we make this precise? First, cover the space X with finitely many open sets. Then take any point in X, apply the map to it over and over, say n times, and report which open set the point lands in each time. You can record this information in a string of symbols. How much information does this string have? The easiest way to define this is to simply count the total number of strings that can be produced this way by choosing different points initially. Then, take the logarithm of this number.

Of course the answer depends on n, typically growing bigger as n increases. So, divide it by n and try to take the limit as n → ∞. Or, to be careful, take the lim sup: this could be infinite, but it’s always well-defined. This will tell us how much new information we get, on average, each time we apply the map and report which set our point lands in.

And of course the answer also depends on our choice of open cover. So, take the supremum over all finite open covers. This is called the **topological entropy** of our map.

Believe it or not, this is often finite! Even though the log of the number of symbol strings we get will be larger when we use a cover with lots of small sets, when we divide by n and take the limit as n → ∞, this dependence often washes out.
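Here is a crude numerical illustration of the recipe (all the names are mine, and sampling finitely many points only approximates the true count): for the doubling map x ↦ 2x mod 1, with the cover of [0,1) by two halves, the number of length-n symbol strings grows like 2ⁿ, so the entropy is log 2.

```python
import numpy as np

def entropy_estimate(f, n_steps, n_points=200_000, n_cells=2, seed=0):
    """Estimate topological entropy of a map f on [0, 1): sample many
    starting points, record which cell of a uniform partition each orbit
    visits at each step, and count the distinct symbol strings."""
    rng = np.random.default_rng(seed)
    x = rng.random(n_points)
    symbols = np.empty((n_steps, n_points), dtype=np.int64)
    for t in range(n_steps):
        # Which of the n_cells equal subintervals does each point lie in?
        symbols[t] = np.minimum((x * n_cells).astype(int), n_cells - 1)
        x = f(x)
    n_strings = len({tuple(col) for col in symbols.T})
    return np.log(n_strings) / n_steps

doubling = lambda x: (2.0 * x) % 1.0
```

With enough sample points, entropy_estimate(doubling, 10) comes out very close to log 2 ≈ 0.693; a rigid rotation x ↦ x + c mod 1, by contrast, produces only linearly many strings, so its estimate tends to zero as n_steps grows.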

Any braid gives a bunch of maps from the disc to itself. So, we define the **entropy of a braid** to be the minimum—or more precisely, the infimum—of the topological entropies of these maps.

How does a braid give a bunch of maps from the disc to itself? Imagine the disc as made of very flexible rubber. Grab it at some finite set of points and then move these points around in the pattern traced out by the braid. When you’re done you get a map from the disc to itself. The map you get is not unique, since the rubber is wiggly and you could have moved the points around in slightly different ways. So, you get a bunch of maps.

I’m being sort of lazy in giving precise details here, since the idea seems so intuitively obvious. But that could be because I’ve spent a lot of time thinking about braids, the braid group, and their relation to maps from the disc to itself!

This picture by Thiffeault and Finn may help explain the idea:

As we keep moving points around each other, we keep building up more complicated braids with 4 strands, and keep getting more complicated maps from the disc to itself. In fact, these maps are often chaotic! More precisely: they often have positive entropy.

In this other picture the vertical axis represents time, and we more clearly see the braid traced out as our 4 points move around:

Each horizontal slice depicts a map from the disc (or square: this is topology!) to itself, but we only see its effect on a little rectangle drawn in black.

Okay, now for the punchline!

**Puzzle 1.** Which braid with 3 strands has the highest entropy per generator? What is its entropy per generator?

I should explain: any braid with 3 strands can be written as a product of the generators σ₁, σ₂, σ₁⁻¹ and σ₂⁻¹. Here σ₁ switches strands 1 and 2, moving them counterclockwise around each other, σ₂ does the same for strands 2 and 3, and σ₁⁻¹ and σ₂⁻¹ do the same but moving the strands clockwise.

For any braid we can write it as a product of n generators with n as small as possible, and then we can evaluate its entropy divided by n. This is the right way to compare the entropy of braids, because if a braid gives a chaotic map we expect powers of that braid to have entropy growing linearly with the power.

Now for the answer to the puzzle!

**Answer 1.** A 3-strand braid maximizing the entropy per generator is σ₁σ₂⁻¹. And the entropy of this braid, per generator, is the logarithm of the golden ratio: Φ = (1 + √5)/2 ≈ 1.618.

In other words, the entropy of this braid itself—two generators’ worth—is twice the logarithm of the golden ratio.

All this works regardless of which base we use for our logarithms. But if we use base e, which seems pretty natural, the maximum possible entropy per generator is ln((1 + √5)/2) ≈ 0.4812.

Or if you prefer base 2, then each time you stir around a point in the disc with this braid, you’re creating about 2 log₂((1 + √5)/2) ≈ 1.39 bits of unknown information.

This fact was proved here:

• D. D’Alessandro, M. Dahleh and I. Mezić, Control of mixing in fluid flow: A maximum entropy approach, *IEEE Transactions on Automatic Control* **44** (1999), 1852–1863.

So, people call this braid the **golden braid**. But since you can use it to generate entropy forever, perhaps it should be called the *eternal* golden braid.

What does it all mean? Well, the 3-strand braid group is called B₃, and I wrote a long story about it:

• John Baez, This Week’s Finds in Mathematical Physics (Week 233).

You’ll see there that B₃ has a representation as 2 × 2 matrices: one standard choice sends σ₁ to the shear [[1, 1], [0, 1]] and σ₂ to the shear [[1, 0], [-1, 1]].

These matrices are shears, which is connected to how the braids σ₁ and σ₂ give maps from the disc to itself that shear points. If we take the golden braid and turn it into a matrix using this representation, we get a matrix for which the magnitude of its largest eigenvalue is the square of the golden ratio! So, the amount of stretching going on is ‘the golden ratio per generator’.
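To make the eigenvalue claim concrete, here is a quick numerical check, using one standard choice of shear matrices for the generators (this particular choice is an assumption on my part, since the post’s displayed matrices were lost):

```python
import numpy as np

# Shears representing the braid generators (the usual B_3 -> SL(2, Z)
# representation; this choice of matrices is an assumption here).
S1 = np.array([[1.0, 1.0], [0.0, 1.0]])   # sigma_1
S2 = np.array([[1.0, 0.0], [-1.0, 1.0]])  # sigma_2

M = S1 @ np.linalg.inv(S2)                # the golden braid sigma_1 sigma_2^{-1}
phi = (1 + np.sqrt(5)) / 2
largest = max(abs(np.linalg.eigvals(M)))  # spectral radius of M
```

The product works out to [[2, 1], [1, 1]], whose eigenvalues are (3 ± √5)/2, so `largest` agrees with phi**2 up to rounding: the golden ratio squared, for a two-generator braid.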

I guess this must be part of the story too:

**Puzzle 2.** Is it true that when we multiply n matrices, each of the form [[1, 1], [0, 1]] or [[1, 0], [-1, 1]] or one of their inverses, the magnitude of the largest eigenvalue of the resulting product can never exceed the nth power of the golden ratio?
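I can’t resolve the puzzle here, but a brute-force search over all short products of shear matrices for the generators and their inverses (the specific matrices are my assumption, standing in for the lost displays) finds no violation of the golden-ratio bound:

```python
import itertools
from functools import reduce
import numpy as np

A = np.array([[1.0, 1.0], [0.0, 1.0]])    # assumed matrix for sigma_1
B = np.array([[1.0, 0.0], [-1.0, 1.0]])   # assumed matrix for sigma_2
gens = [A, B, np.linalg.inv(A), np.linalg.inv(B)]
phi = (1 + np.sqrt(5)) / 2

def counterexample_exists(max_len):
    """Search every product of up to max_len generators for a spectral
    radius exceeding phi**n, where n is the number of factors."""
    for n in range(1, max_len + 1):
        for word in itertools.product(gens, repeat=n):
            M = reduce(np.matmul, word)
            if max(abs(np.linalg.eigvals(M))) > phi ** n + 1e-9:
                return True
    return False
```

Here counterexample_exists(6) returns False. One route toward a proof: each of these four matrices has largest singular value exactly Φ, and the spectral radius of a product is at most the product of the factors’ largest singular values.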

There’s also a strong connection between braid groups, certain quasiparticles in the plane called Fibonacci anyons, and the golden ratio. But I don’t see the relation between these things and topological entropy! So, there is a mystery here—at least for me.

For more, see:

• Matthew D. Finn and Jean-Luc Thiffeault, Topological optimisation of rod-stirring devices, *SIAM Review* **53** (2011), 723–743.

Abstract. There are many industrial situations where rods are used to stir a fluid, or where rods repeatedly stretch a material such as bread dough or taffy. The goal in these applications is to stretch either material lines (in a fluid) or the material itself (for dough or taffy) as rapidly as possible. The growth rate of material lines is conveniently given by the topological entropy of the rod motion. We discuss the problem of optimising such rod devices from a topological viewpoint. We express rod motions in terms of generators of the braid group, and assign a cost based on the minimum number of generators needed to write the braid. We show that for one cost function—the topological entropy per generator—the optimal growth rate is the logarithm of the golden ratio. For a more realistic cost function, involving the topological entropy per operation where rods are allowed to move together, the optimal growth rate is the logarithm of the silver ratio. We show how to construct devices that realise this optimal growth, which we call silver mixers.

Here is the silver ratio: 1 + √2 ≈ 2.41421.

But now for some reason I feel it’s time to stop!

]]>

• Applied category theory, Fall Western Sectional Meeting of the AMS, 4-5 November 2017, U.C. Riverside.

A bunch of people stayed for a few days afterwards, and we had a lot of great discussions. I wish I could explain everything that happened, but I’m too busy right now. Luckily, even if you couldn’t come here, you can now see slides of almost all the talks… and videos of many!

Click on talk titles to see abstracts. For multi-author talks, the person whose name is in boldface is the one who gave the talk. For videos, go here: I haven’t yet created links to all the videos.

9:00 a.m.

A higher-order temporal logic for dynamical systems — talk slides.

**David I. Spivak**, MIT.

10:00 a.m.

Algebras of open dynamical systems on the operad of wiring diagrams — talk slides.

**Dmitry Vagner**, Duke University

David I. Spivak, MIT

Eugene Lerman, University of Illinois at Urbana-Champaign

10:30 a.m.

Abstract dynamical systems — talk slides.

**Christina Vasilakopoulou**, UCR

David Spivak, MIT

Patrick Schultz, MIT

3:00 p.m.

Decorated cospans — talk slides.

**Brendan Fong**, MIT

4:00 p.m.

Compositional modelling of open reaction networks — talk slides.

**Blake S. Pollard**, UCR

John C. Baez, UCR

4:30 p.m.

A bicategory of coarse-grained Markov processes — talk slides.

**Kenny Courser**, UCR

5:00 p.m.

A bicategorical syntax for pure state qubit quantum mechanics — talk slides.

**Daniel M. Cicala**, UCR

5:30 p.m.

Open systems in classical mechanics — talk slides.

**Adam Yassine**, UCR

9:00 a.m.

Controllability and observability: diagrams and duality — talk slides.

**Jason Erbele**, Victor Valley College

9:30 a.m.

Frobenius monoids, weak bimonoids, and corelations — talk slides.

**Brandon Coya**, UCR

10:00 a.m.

Compositional design and tasking of networks.

**John D. Foley**, Metron, Inc.

John C. Baez, UCR

Joseph Moeller, UCR

Blake S. Pollard, UCR

10:30 a.m.

Operads for modeling networks — talk slides.

**Joseph Moeller**, UCR

John Foley, Metron Inc.

John C. Baez, UCR

Blake S. Pollard, UCR

2:00 p.m.

Reeb graph smoothing via cosheaves — talk slides.

**Vin de Silva**, Department of Mathematics, Pomona College

3:00 p.m.

Knowledge representation in bicategories of relations — talk slides.

**Evan Patterson**, Stanford University, Statistics Department

3:30 p.m.

The multiresolution analysis of flow graphs — talk slides.

**Steve Huntsman**, BAE Systems

4:00 p.m.

Data modeling and integration using the open source tool Algebraic Query Language (AQL) — talk slides.

**Peter Y. Gates**, Categorical Informatics

**Ryan Wisnesky**, Categorical Informatics

]]>

• Biology as information dynamics, November 13, 2017, 4:00–5:00 pm, General Biology Seminar, Kerckhoff 119, Caltech.

If you’re around, please check it out! I’ll be around all day talking to people, including Erik Winfree, my graduate student host Fangzhou Xiao, and other grad students.

If you can’t make it, you can watch this video! It’s a neat subject, and I want to do more on it:

Abstract. If biology is the study of self-replicating entities, and we want to understand the role of information, it makes sense to see how information theory is connected to the ‘replicator equation’ — a simple model of population dynamics for self-replicating entities. The relevant concept of information turns out to be the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Using this we can get a new outlook on free energy, see evolution as a learning process, and give a clearer, more general formulation of Fisher’s fundamental theorem of natural selection.

]]>

• John Baez, John Foley, Blake Pollard and Joseph Moeller, Network models.

There will be two talks about this at the AMS special session on Applied Category Theory this weekend at U. C. Riverside: one by John Foley of Metron Inc., and one by my grad student Joseph Moeller. I’ll try to get their talk slides someday. But for now, here’s the basic idea.

Our goal is to build operads suited for designing networks. These could be networks where the vertices represent fixed or moving agents and the edges represent communication channels. More generally, they could be networks where the vertices represent entities of various types, and the edges represent relationships between these entities—for example, that one agent is committed to take some action involving the other. This paper arose from an example where the vertices represent planes, boats and drones involved in a search and rescue mission in the Caribbean. However, even for this one example, we wanted a flexible formalism that can handle networks of many kinds, described at a level of detail that the user is free to adjust.

To achieve this flexibility, we introduced a general concept of ‘network model’. Simply put, a network model is a *kind* of network. Any network model gives an operad whose operations are ways to build larger networks of this kind by gluing smaller ones. This operad has a ‘canonical’ algebra where the operations act to assemble networks of the given kind. But it also has other algebras, where it acts to assemble networks of this kind *equipped with extra structure and properties*. This flexibility is important in applications.

What exactly is a ‘kind of network’? That’s the question we had to answer. We started with some examples. At the crudest level, we can model networks as simple graphs. If the vertices are agents of some sort and the edges represent communication channels, this means we allow at most one channel between any pair of agents.

However, simple graphs are too restrictive for many applications. If we allow multiple communication channels between a pair of agents, we should replace simple graphs with ‘multigraphs’. Alternatively, we may wish to allow directed channels, where the sender and receiver have different capabilities: for example, signals may only be able to flow in one direction. This requires replacing simple graphs with ‘directed graphs’. To combine these features we could use ‘directed multigraphs’.

But none of these are sufficiently general. It’s also important to consider graphs with colored vertices, to specify different types of agents, and colored edges, to specify different types of channels. This leads us to ‘colored directed multigraphs’.

All these are *examples* of what we mean by a ‘kind of network’, but none is sufficiently general. More complicated kinds, such as hypergraphs or Petri nets, are likely to become important as we proceed.

Thus, instead of separately studying all these kinds of networks, we introduced a unified notion that subsumes all these variants: a ‘network model’. Namely, given a set C of ‘vertex colors’, a **network model** is a lax symmetric monoidal functor

F: S(C) → Cat

where S(C) is the free strict symmetric monoidal category on C and Cat is the category of small categories.

Unpacking this somewhat terrifying definition takes a little work. It simplifies in the special case where F takes values in Mon, the category of monoids. It simplifies further when C is a singleton, since then S(C) is the groupoid S where objects are natural numbers and morphisms from m to n are bijections σ: {1, …, m} → {1, …, n}, which exist only when m = n.

If we impose both these simplifying assumptions, we have what we call a **one-colored network model**: a lax symmetric monoidal functor F: S → Mon.

As we shall see, the network model of simple graphs is a one-colored network model, and so are many other motivating examples. If you like André Joyal’s theory of ‘species’, then one-colored network models should be pretty fun, since they’re species with some extra bells and whistles.

But if you don’t, there’s still no reason to panic. In relatively down-to-earth terms, a one-colored network model amounts to roughly this. If we call elements of F(n) ‘networks with n vertices’, then:

• Since F(n) is a monoid, we can **overlay** two networks with the same number of vertices and get a new one. We call this operation ∪.

• Since F is a functor, the symmetric group S_n acts on the monoid F(n). Thus, for each σ ∈ S_n, we have a monoid automorphism of F(n) that we call simply σ.

• Since F is lax monoidal, we also have an operation ⊔: F(m) × F(n) → F(m + n).

We call this operation the **disjoint union** of networks. In examples like simple graphs, it looks just like what it sounds like.

Unpacking the abstract definition further, we see that these operations obey some equations, which we list in Theorem 11 of our paper. They’re all obvious if you draw pictures of examples… and don’t worry, our paper has a few pictures. (We plan to add more.) For example, the ‘interchange law’

(g ∪ g′) ⊔ (h ∪ h′) = (g ⊔ h) ∪ (g′ ⊔ h′)

holds whenever g, g′ ∈ F(m) and h, h′ ∈ F(n). This is a nice relationship between overlaying networks and taking their disjoint union.
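Here is a toy version in code, with networks on n vertices taken to be simple graphs (the encoding and function names are mine, not the paper’s): overlay is union of edge sets, and disjoint union relabels the second graph’s vertices before taking the union.

```python
# A toy one-colored network model: F(n) = simple graphs on vertices
# 0, ..., n-1, encoded as sets of frozenset edges.

def overlay(g, h):
    """The monoid operation on F(n): union of edge sets."""
    return g | h

def disjoint_union(g, m, h):
    """The lax structure map F(m) x F(n) -> F(m + n):
    shift h's vertex labels up by m, then combine."""
    return g | {frozenset(m + v for v in e) for e in h}

# Checking the interchange law (g u g') ⊔ (h u h') = (g ⊔ h) u (g' ⊔ h'):
g, gp = {frozenset({0, 1})}, {frozenset({1, 2})}   # two graphs in F(3)
h, hp = {frozenset({0, 1})}, set()                 # two graphs in F(2)
lhs = disjoint_union(overlay(g, gp), 3, overlay(h, hp))
rhs = overlay(disjoint_union(g, 3, h), disjoint_union(gp, 3, hp))
```

Both sides come out to the same graph on 5 vertices, with edges 0–1, 1–2 and 3–4.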

In Section 2 of our paper we study one-colored network models, and give lots of examples. In Section 3 we describe a systematic procedure for getting one-colored network models from monoids. In Section 4 we study general network models and give examples of these. In Section 5 we describe a category of network models, and show that the procedure for getting network models from monoids is functorial. We also make the category of network models symmetric monoidal, and give examples of how to build new network models by tensoring old ones.

Our main result is that any network model gives a typed operad, also known as a ‘colored operad’. This operad has operations that describe how to stick networks of the given kind together to form larger networks of this kind. This operad has a ‘canonical algebra’, where it acts on networks of the given kind—but the real point is that it has lots of other algebras, where it acts on networks of the given kind *equipped with extra structure and properties*.

The technical heart of our paper is Section 6, mainly written by Joseph Moeller. This provides the machinery to construct operads from network models in a functorial way. Category theorists should find this section interesting, because it describes enhancements of the well-known ‘Grothendieck construction’ of the category of elements ∫F of a functor

F: C → Cat

where C is any small category. For example, if C is symmetric monoidal and F is lax symmetric monoidal, then we show ∫F is symmetric monoidal. Moreover, we show that the construction sending the lax symmetric monoidal functor F to the symmetric monoidal category ∫F is functorial.

In Section 7 we apply this machinery to build operads from network models. In Section 8 we describe some algebras of these operads, including an algebra whose elements are networks of range-limited communication channels. In future work we plan to give many more detailed examples, and to explain how these algebras, and the homomorphisms between them, can be used to design and optimize networks.

I want to explain all this in more detail—this is a pretty hasty summary, since I’m busy this week. But for now you can read the paper!

]]>

The deadline for applying to this ‘school’ on applied category theory is Wednesday November 1st.

• Applied Category Theory: Adjoint School: online sessions starting in January 2018, followed by a meeting 23–27 April 2018 at the Lorentz Center in Leiden, the Netherlands. Organized by Bob Coecke (Oxford), Brendan Fong (MIT), Aleks Kissinger (Nijmegen), Martha Lewis (Amsterdam), and Joshua Tan (Oxford).

The name ‘adjoint school’ is a bad pun, but the school should be great. Here’s how it works:

The Workshop on Applied Category Theory 2018 takes place in May 2018. A principal goal of this workshop is to bring early career researchers into the applied category theory community. Towards this goal, we are organising the Adjoint School.

The Adjoint School will run from January to April 2018. By the end of the school, each participant will:

- be familiar with the language, goals, and methods of four prominent, current research directions in applied category theory;
- have worked intensively on one of these research directions, mentored by an expert in the field; and
- know other early career researchers interested in applied category theory.

They will then attend the main workshop, well equipped to take part in discussions across the diversity of applied category theory.

The Adjoint School comprises (1) an Online Reading Seminar from January to April 2018, and (2) a four day Research Week held at the Lorentz Center, Leiden, The Netherlands, from Monday April 23rd to Thursday April 26th.

In the Online Reading Seminar we will read papers on current research directions in applied category theory. The seminar will consist of eight iterations of a two week block. Each block will have one paper as assigned reading, two participants as co-leaders, and three phases:

- A presentation (over WebEx) on the assigned reading delivered by the two block co-leaders.
- Reading responses and discussion on a private forum, facilitated by Brendan Fong and Nina Otter.
- Publication of a blog post on the *n*-Category Café written by the co-leaders.

Each participant will be expected to co-lead one block.

The Adjoint School is taught by mentors John Baez, Aleks Kissinger, Martha Lewis, and Pawel Sobocinski. Each mentor will mentor a working group comprising four participants. During the second half of the Online Reading Seminar, these working groups will begin to meet with their mentor (again over video conference) to learn about open research problems related to their reading.

In late April, the participants and the mentors will convene for a four day Research Week at the Lorentz Center. After opening lectures by the mentors, the Research Week will be devoted to collaboration within the working groups. Morning and evening reporting sessions will keep the whole school informed of the research developments of each group.

The following week, participants will attend Applied Category Theory 2018, a discussion-based 60-attendee workshop at the Lorentz Center. Here they will have the chance to meet senior members across the applied category theory community and learn about ongoing research, as well as industry applications.

Following the school, successful working groups will be invited to contribute to a new, to be launched, CUP book series.

Meetings will be on Mondays; we will determine a time depending on the locations of the chosen participants.

- Jan 8: B. Coecke, M. Sadrzadeh, and S. Clark, Mathematical foundations for a compositional distributional model of meaning, *Linguistic Analysis* **36** (2010), 345–384.
- Jan 22: A. Kissinger and S. Uijlen, A categorical semantics for causal structure, *32nd Annual ACM/IEEE Symposium on Logic in Computer Science (LICS)*, 2017, pp. 1–12.
- Feb 5: B. Fong, Decorated cospans, *Theory and Applications of Categories* **30** (2015), 1096–1120.
- Feb 19: A. Carboni and R. F. C. Walters, Cartesian bicategories I, *Journal of Pure and Applied Algebra* **49** (1987), 11–32.
- Mar 5: J. Bolt, B. Coecke, F. Genovese, M. Lewis, D. Marsden, and R. Piedeleu, Interacting conceptual spaces I: grammatical composition of concepts, arXiv preprint, 2017.
- Mar 19: J. Baez and B. Pollard, A compositional framework for reaction networks, *Reviews in Mathematical Physics* **29** (2017), 1750028.
- Apr 2: J. C. Willems, The behavioral approach to open and interconnected systems, *IEEE Control Systems* **27** (2007), 46–99.
- Apr 16: J. Henson, R. Lal, and M. Pusey, Theory-independent limits on correlations from generalised Bayesian networks, *New Journal of Physics* **16** (2014), 113043.

**John Baez: Semantics for open Petri nets and reaction networks**

Petri nets and reaction networks are widely used to describe systems of interacting entities in computer science, chemistry and other fields, but the study of open Petri nets and reaction networks is new, and raises many new questions connected to Lawvere’s “functorial semantics”.

*Reading: Fong; Baez and Pollard.*

**Aleks Kissinger: Unification of the logic of causality**

Employ the framework of (pre-)causal categories to unite notions of causality and techniques for causal reasoning which occur in classical statistics, quantum foundations, and beyond.

*Reading: Kissinger and Uijlen; Henson, Lal, and Pusey.*

**Martha Lewis: Compositional approaches to linguistics and cognition**

Use compact closed categories to integrate compositional models of meaning with distributed, geometric, and other meaning representations in linguistics and cognitive science.

*Reading: Coecke, Sadrzadeh, and Clark; Bolt, Coecke, Genovese, Lewis, Marsden, and Piedeleu.*

**Pawel Sobocinski: Modelling of open and interconnected systems**

Use Carboni and Walters’ bicategories of relations as a multidisciplinary algebra of open and interconnected systems.

*Reading: Carboni and Walters; Willems.*

We hope that each working group will comprise both participants who specialise in category theory and in the relevant application field. As a prerequisite, those participants specialising in category theory should feel comfortable with the material found in Categories for the Working Mathematician or its equivalent; those specialising in applications should have a similar graduate-level introduction.

To apply, please fill out the form here. You will be asked to upload a single PDF file containing the following information:

- Your contact information and educational history.
- A brief paragraph explaining your interest in this course.
- A paragraph or two describing one of your favorite topics in category theory, or your application field.
- A ranked list of the papers you would most like to present, together with an explanation of your preferences. Note that the paper you present determines which working group you will join.

You may add your CV if you wish.

Anyone is welcome to apply, although preference may be given to current graduate students and postdocs. Women and members of other underrepresented groups within applied category theory are particularly encouraged to apply.

Some support will be available to help with the costs (flights, accommodation, food, childcare) of attending the Research Week and the Workshop on Applied Category Theory; please indicate in your application if you would like to be considered for such support.

If you have any questions, please feel free to contact Brendan Fong (bfo at mit dot edu) or Nina Otter (otter at maths dot ox dot ac dot uk).

Application deadline: **November 1st, 2017.**

]]>