Topological Crystals (Part 4)

28 August, 2016


k4_crystal

Okay, let’s look at some examples of topological crystals. These are what got me excited in the first place. We’ll get some highly symmetrical crystals, often in higher-dimensional Euclidean spaces. The ‘triamond’, above, is a 3d example.

Review

First let me remind you how it works. We start with a connected graph X. This has a space C_0(X,\mathbb{R}) of 0-chains, which are formal linear combinations of vertices, and a space C_1(X,\mathbb{R}) of 1-chains, which are formal linear combinations of edges.

We choose a vertex in X. Each path \gamma in X starting at this vertex determines a 1-chain c_\gamma, namely the sum of its edges. These 1-chains give some points in C_1(X,\mathbb{R}). These points are the vertices of a graph \overline{X} called the maximal abelian cover of X. The maximal abelian cover has an edge from c_\gamma to c_{\gamma'} whenever the path \gamma' is obtained by adding an extra edge to \gamma. We can think of this edge as a straight line segment from c_\gamma to c_{\gamma'}.

So, we get a graph \overline{X} sitting inside C_1(X,\mathbb{R}). But this is a high-dimensional space. To get something nicer we’ll project down to a lower-dimensional space.

There’s a boundary operator

\partial : C_1(X,\mathbb{R}) \to C_0(X,\mathbb{R})

sending any edge to the difference of its two endpoints. The kernel of this operator is the space of 1-cycles, Z_1(X,\mathbb{R}). There’s an inner product on the space of 1-chains such that edges form an orthonormal basis, so we get an orthogonal projection

\pi : C_1(X,\mathbb{R}) \to Z_1(X,\mathbb{R})

We can use this to take the maximal abelian cover \overline{X} and project it down to the space of 1-cycles. The hard part is checking that \pi is one-to-one on \overline{X}. But that’s what I explained last time! It’s true whenever our original graph X has no bridges: that is, edges whose removal would disconnect our graph, like this:

So, when X is a bridgeless graph, we get a copy of the maximal abelian cover embedded in Z_1(X,\mathbb{R}). This is our topological crystal.
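The whole recipe can be sketched numerically. Here's a minimal sketch (mine, not anything canonical) for the graph that will give graphene below: two vertices joined by three edges, with an arbitrary choice of edge orientations. We build the boundary matrix, find the space of 1-cycles as its kernel, and form the orthogonal projection \pi:

```python
import numpy as np

# Theta graph: 2 vertices, 3 edges, all oriented from vertex 0 to vertex 1.
# Boundary matrix D: rows = vertices, columns = edges,
# D[v, e] = +1 if v is the target of edge e, -1 if v is its source.
D = np.array([[-1.0, -1.0, -1.0],
              [ 1.0,  1.0,  1.0]])

# The space of 1-cycles Z_1 is the kernel of D; the SVD gives an
# orthonormal basis for it (rows of Vt beyond the rank of D).
_, s, Vt = np.linalg.svd(D)
rank = int(np.sum(s > 1e-10))
Z = Vt[rank:].T                 # columns form an orthonormal basis of Z_1

# Orthogonal projection pi onto the cycle space.
P = Z @ Z.T

print(Z.shape[1])               # dimension of the cycle space: 2
```

The projected vertices of the maximal abelian cover then live in this 2-dimensional kernel, which is why graphene is a 2d crystal.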

Let’s do some examples.

Graphene

I showed you this one before, but it’s a good place to start. Let X be this graph:

Since this graph has 3 edges, its space of 1-chains is 3-dimensional. Since this graph has 2 holes, its 1-cycles form a plane in that 3d space. If we take paths \gamma in X starting at the red vertex, form the 1-chains c_\gamma, and project them down to this plane, we get this:

Here the 1-chains c_\gamma are the white and red dots. They’re the vertices of the maximal abelian cover \overline{X}, while the line segments between them are the edges of \overline{X}. Projecting these vertices and edges onto the plane of 1-cycles, we get our topological crystal:

This is the pattern of graphene, a miraculous 2-dimensional form of carbon. The more familiar 3d crystal called graphite is made of parallel layers of graphene connected with some other bonds.

Puzzle 1. Classify bridgeless connected graphs with 2 holes (or more precisely, a 2-dimensional space of 1-cycles). What are the resulting 2d topological crystals?

Diamond

Now let’s try this graph:

Since it has 3 holes, it gives a 3d crystal:

This crystal structure is famous! It’s the pattern used by a diamond. Up to translation it has two kinds of atoms, corresponding to the two vertices of the original graph.

Triamond

Now let’s try this graph:

Since it has 3 holes, it gives another 3d crystal:

This is also famous: it’s sometimes called a ‘triamond’. If you’re a bug crawling around on this crystal, locally you experience the same topology as if you were crawling around on a wire-frame model of a tetrahedron. But you’re actually on the maximal abelian cover!

Up to translation the triamond has 4 kinds of atoms, corresponding to the 4 vertices of the tetrahedron. Each atom has 3 equally distant neighbors lying in a plane at 120° angles from each other. These planes lie in 4 families, each parallel to one face of a regular tetrahedron. This structure was discovered by the crystallographer Laves, and it was dubbed the Laves graph by Coxeter. Later Sunada called it the ‘\mathrm{K}_4 lattice’ and studied its energy minimization properties. Theoretically it seems to be a stable form of carbon. Crystals in this pattern have not yet been seen, but this pattern plays a role in the structure of certain butterfly wings.

Puzzle 2. Classify bridgeless connected graphs with 3 holes (or more precisely, a 3d space of 1-cycles). What are the resulting 3d topological crystals?

Lonsdaleite and hyperquartz

There’s a crystal form of carbon called lonsdaleite that looks like this:

It forms in meteor impacts. It does not arise as a 3-dimensional topological crystal.

Puzzle 3. Show that this graph gives a 5-dimensional topological crystal which can be projected down to give lonsdaleite in 3d space:

Puzzle 4. Classify bridgeless connected graphs with 4 holes (or more precisely, a 4d space of 1-cycles). What are the resulting 4d topological crystals? A crystallographer with the wonderful name of Eon calls this one hyperquartz, because it’s a 4-dimensional analogue of quartz:

All these classification problems are quite manageable if you notice there are certain ‘boring’, easily understood ways to get new bridgeless connected graphs with n holes from old ones.

Platonic crystals

For any connected graph X, there is a covering map

q : \overline{X} \to X

The vertices of \overline{X} come in different kinds, or ‘colors’, depending on which vertex of X they map to. It’s interesting to look at the group of ‘covering symmetries’, \mathrm{Cov}(X), consisting of all symmetries of \overline{X} that map vertices of the same color to vertices of the same color. Greg Egan and I showed that if X has no bridges, \mathrm{Cov}(X) also acts as symmetries of the topological crystal associated to X. This group fits into a short exact sequence:

1 \longrightarrow H_1(X,\mathbb{Z}) \longrightarrow \mathrm{Cov}(X) \longrightarrow \mathrm{Aut}(X) \longrightarrow 1

where \mathrm{Aut}(X) is the group of all symmetries of X. Thus, every symmetry of X is covered by some symmetry of its topological crystal, while H_1(X,\mathbb{Z}) acts as translations of the crystal, in a way that preserves the color of every vertex.

For example consider the triamond, which comes from the tetrahedron. The symmetry group of the tetrahedron is this Coxeter group:

\mathrm{A}_3 = \langle s_1, s_2, s_3 \;| \; (s_1s_2)^3 = (s_2s_3)^3 = s_1^2 = s_2^2 = s_3^2 = 1\rangle

Thus, the group of covering symmetries of the triamond is an extension of \mathrm{A}_3 by \mathbb{Z}^3. Beware the notation here: this is not the alternating group on 3 letters. In fact it’s the permutation group on 4 letters, namely the vertices of the tetrahedron!

We can also look at other ‘Platonic crystals’. The symmetry group of the cube and octahedron is the Coxeter group

\mathrm{B}_3 = \langle s_1, s_2, s_3 \;| \; (s_1s_2)^3 = (s_2s_3)^4 = s_1^2 = s_2^2 = s_3^2 = 1\rangle

Since the cube has 6 faces, the graph formed by its vertices and edges has a 5d space of 1-cycles. The corresponding topological crystal is thus 5-dimensional, and its group of covering symmetries is an extension of \mathrm{B}_3 by \mathbb{Z}^5. Similarly, the octahedron gives a 7-dimensional topological crystal whose group of covering symmetries is an extension of \mathrm{B}_3 by \mathbb{Z}^7.

The symmetry group of the dodecahedron and icosahedron is

\mathrm{H}_3 = \langle s_1, s_2, s_3 \;| \; (s_1s_2)^3 = (s_2s_3)^5= s_1^2 = s_2^2 = s_3^2 = 1\rangle

and these solids give crystals of dimensions 11 and 19. If you’re a bug crawling around on the second of these, locally you experience the same topology as if you were crawling around on a wire-frame model of an icosahedron. But you’re actually in 19-dimensional space, crawling around on the maximal abelian cover!
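All these dimensions are just |E| - |V| + 1 for the skeleton of each solid, since that's the first Betti number of a connected graph. A quick sanity check:

```python
# Dimension of the space of 1-cycles of a connected graph = |E| - |V| + 1.
# (vertices, edges) for the skeleton of each Platonic solid:
solids = {
    'tetrahedron':  (4, 6),
    'cube':         (8, 12),
    'octahedron':   (6, 12),
    'dodecahedron': (20, 30),
    'icosahedron':  (12, 30),
}
dims = {name: e - v + 1 for name, (v, e) in solids.items()}
print(dims)
# {'tetrahedron': 3, 'cube': 5, 'octahedron': 7,
#  'dodecahedron': 11, 'icosahedron': 19}
```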

There is also an infinite family of degenerate Platonic solids called ‘hosohedra’ with two vertices, n edges and n faces. These faces cannot be made flat, since each face has just 2 edges, but that is not relevant to our construction: the vertices and edges still give a graph. For example, when n = 6, we have the ‘hexagonal hosohedron’:

The corresponding crystal has dimension n-1, and its group of covering symmetries is an extension of \mathrm{S}_n \times \mathbb{Z}/2 by \mathbb{Z}^{n-1}. The case n = 3 gives the graphene crystal, while n = 4 gives the diamond.

Exotic crystals

We can also get crystals from more exotic highly symmetrical graphs. For example, take the Petersen graph:

Its symmetry group is the symmetric group \mathrm{S}_5. It has 10 vertices and 15 edges, so its Euler characteristic is -5, which implies that its space of 1-cycles is 6-dimensional. It thus gives a 6-dimensional crystal whose group of covering symmetries is an extension of \mathrm{S}_5 by \mathbb{Z}^6.
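This count is easy to check by machine. Here's a sketch using the standard Kneser description of the Petersen graph (vertices are the 2-element subsets of a 5-element set, with edges between disjoint subsets) and numpy for the rank of the boundary matrix:

```python
import numpy as np
from itertools import combinations

# Petersen graph as the Kneser graph K(5,2): vertices are the
# 2-element subsets of {0,...,4}, with an edge between disjoint subsets.
verts = list(combinations(range(5), 2))
idx = {v: i for i, v in enumerate(verts)}
edges = [(a, b) for a, b in combinations(verts, 2) if not set(a) & set(b)]

# Boundary matrix of an (arbitrarily oriented) edge: target minus source.
D = np.zeros((len(verts), len(edges)))
for j, (a, b) in enumerate(edges):
    D[idx[a], j] = -1.0
    D[idx[b], j] = +1.0

# dim Z_1 = |E| - rank(boundary) = |E| - (|V| - 1) for a connected graph.
cycle_dim = len(edges) - np.linalg.matrix_rank(D)
print(len(verts), len(edges), cycle_dim)   # 10 15 6
```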

Two more nice examples come from Klein’s quartic curve, a Riemann surface of genus three on which the 336-element group \mathrm{PGL}(2,\mathbb{F}_7) acts as isometries. These isometries preserve a tiling of Klein’s quartic curve by 56 triangles, with 7 meeting at each vertex. This picture is topologically correct, though not geometrically:

From this tiling we obtain a graph X embedded in Klein’s quartic curve. This graph has 56 \times 3 / 2 = 84 edges and 56 \times 3 / 7 = 24 vertices, so it has Euler characteristic -60. It thus gives a 61-dimensional topological crystal whose group of covering symmetries is an extension of \mathrm{PGL}(2,\mathbb{F}_7) by \mathbb{Z}^{61}.

There is also a dual tiling of Klein’s curve by 24 heptagons, 3 meeting at each vertex. This gives a graph with 84 edges and 56 vertices, hence Euler characteristic -28. From this we obtain a 29-dimensional topological crystal whose group of covering symmetries is an extension of \mathrm{PGL}(2,\mathbb{F}_7) by \mathbb{Z}^{29}.

The packing fraction

Now that we’ve got a supply of highly symmetrical crystals in higher dimensions, we can try to study their structure. We’ve only made a bit of progress on this.

One interesting property of a topological crystal is its ‘packing fraction’. I like to call the vertices of a topological crystal atoms, for the obvious reason. The set A of atoms is periodic. It’s usually not a lattice. But it’s contained in the lattice L obtained by projecting the integral 1-chains down to the space of 1-cycles:

L = \{   \pi(c) : \; c \in C_1(X,\mathbb{Z})  \}

We can ask what fraction of the points in this lattice are actually atoms. Let’s call this the packing fraction. Since Z_1(X,\mathbb{Z}) acts as translations on both A and L, we can define it to be

\displaystyle{     \frac{|A/Z_1(X,\mathbb{Z})|}{|L/Z_1(X,\mathbb{Z})|} }

For example, suppose X is the graph that gives graphene:

Then the packing fraction is 2/3, as can be seen here:

For any bridgeless connected graph X, it turns out that the packing fraction equals

\displaystyle{    \frac{|V|}{|T|} }

where V is the set of vertices and T is the set of spanning trees. The main tool used to prove this is Bacher, de la Harpe and Nagnibeda’s work on integral cycles and integral cuts, which in turn relies on Kirchhoff’s matrix tree theorem.
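For the graph that gives graphene (two vertices joined by three parallel edges), Kirchhoff's matrix tree theorem makes this formula concrete. A minimal sketch; the Laplacian entries below are just my hand-built ones for that multigraph:

```python
import numpy as np
from fractions import Fraction

# Kirchhoff's matrix tree theorem: the number of spanning trees of a
# (multi)graph equals any cofactor of its Laplacian. For two vertices
# joined by three parallel edges, each vertex has degree 3:
L = np.array([[ 3, -3],
              [-3,  3]])

# Delete row 0 and column 0, then take the determinant of what's left.
n_trees = round(np.linalg.det(L[1:, 1:]))
packing_fraction = Fraction(2, n_trees)    # |V| / |T| with |V| = 2

print(n_trees, packing_fraction)           # 3 2/3
```

This reproduces the packing fraction 2/3 quoted above for graphene.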

Greg Egan used Mathematica to count the spanning trees in the examples discussed above, and this let us work out their packing fractions. They tend to be very low! For example, the maximal abelian cover of the dodecahedron gives an 11-dimensional crystal with packing fraction 1/27,648, while the heptagonal tiling of Klein’s quartic gives a 29-dimensional crystal with packing fraction 1/688,257,368,064,000,000.

So, we have some very delicate, wispy crystals in high-dimensional spaces, built from two simple ideas in topology: the maximal abelian cover of a graph, and the natural inner product on 1-chains. They have intriguing connections to tropical geometry, but they are just beginning to be understood in detail. Have fun with them!

For more, see:

• John Baez, Topological crystals.

Read the whole series

Part 1 – the basic idea.

Part 2 – the maximal abelian cover of a graph.

Part 3 – constructing topological crystals.

Part 4 – examples of topological crystals.


Topological Crystals (Part 3)

6 August, 2016


k4_crystal

Last time I explained how to create the ‘maximal abelian cover’ of a connected graph. Now I’ll say more about a systematic procedure for embedding this into a vector space. That will give us a topological crystal, like the one above.

Some remarkably symmetrical patterns arise this way! For example, starting from this graph:

we get this:

Nature uses this pattern for crystals of graphene.

Starting from this graph:

we get this:

Nature uses this for crystals of diamond! Since the construction depends only on the topology of the graph we start with, we call this embedded copy of its maximal abelian cover a topological crystal.

Today I’ll remind you how this construction works. I’ll also outline a proof that it gives an embedding of the maximal abelian cover if and only if the graph has no bridges: that is, edges that disconnect the graph when removed. I’ll skip all the hard steps of the proof, but they can be found here:

• John Baez, Topological crystals.

The homology of graphs

I’ll start with some standard stuff that’s good to know. Let X be a graph. Remember from last time that we’re working in a setup where every edge e goes from a vertex called its source s(e) to a vertex called its target t(e). We write e: x \to y to indicate that e is going from x to y. You can think of the edge as having an arrow on it, and if you turn the arrow around you get the inverse edge, e^{-1}: y \to x. Also, e^{-1} \ne e.

The group of integral 0-chains on X, C_0(X,\mathbb{Z}), is the free abelian group on the set of vertices of X. The group of integral 1-chains on X, C_1(X,\mathbb{Z}), is the quotient of the free abelian group on the set of edges of X by relations e^{-1} = -e for every edge e. The boundary map is the homomorphism

\partial : C_1(X,\mathbb{Z}) \to C_0(X,\mathbb{Z})

such that

\partial e = t(e) - s(e)

for each edge e, and

Z_1(X,\mathbb{Z}) =  \ker \partial

is the group of integral 1-cycles on X.

Remember, a path in a graph is a sequence of edges, the target of each one being the source of the next. Any path \gamma = e_1 \cdots e_n in X determines an integral 1-chain:

c_\gamma = e_1 + \cdots + e_n

For any path \gamma we have

c_{\gamma^{-1}} = -c_{\gamma},

and if \gamma and \delta are composable then

c_{\gamma \delta} = c_\gamma + c_\delta
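These identities are easy to check on small examples. In the sketch below, a path is a list of edge labels, with a trailing '-' marking an inverse edge; the labels themselves are arbitrary placeholders, not taken from any particular graph:

```python
from collections import Counter

def chain(path):
    """1-chain of a path: each edge counts +1, each inverse edge -1.
    Edges are strings; a trailing '-' marks an inverse edge."""
    c = Counter()
    for e in path:
        if e.endswith('-'):
            c[e[:-1]] -= 1
        else:
            c[e] += 1
    return {e: n for e, n in c.items() if n != 0}

def inverse(path):
    """Reverse the path and invert each of its edges."""
    return [e[:-1] if e.endswith('-') else e + '-' for e in reversed(path)]

gamma = ['a', 'b-', 'c']
print(chain(gamma))            # {'a': 1, 'b': -1, 'c': 1}
print(chain(inverse(gamma)))   # {'c': -1, 'b': 1, 'a': -1}
```

Note that chain cancellation happens automatically: the path ['a', 'a-'] gives the zero 1-chain, even though as a path it is not trivial.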

Last time I explained what it means for two paths to be ‘homologous’. Here’s the quick way to say it. There’s a groupoid called the fundamental groupoid of X, where the objects are the vertices of X and the morphisms are freely generated by the edges except for relations saying that the inverse of e: x \to y really is e^{-1}: y \to x. We can abelianize the fundamental groupoid by imposing relations saying that \gamma \delta = \delta \gamma whenever this equation makes sense. Each path \gamma : x \to y gives a morphism which I’ll call [[\gamma]] : x \to y in the abelianized fundamental groupoid. We say two paths \gamma, \gamma' : x \to y are homologous if [[\gamma]] = [[\gamma']].

Here’s a nice thing:

Lemma A. Let X be a graph. Two paths \gamma, \delta : x \to y in X are homologous if and only if they give the same 1-chain: c_\gamma = c_\delta.

Proof. See the paper. You could say they give ‘homologous’ 1-chains, too, but for graphs that’s the same as being equal.   █

We define vector spaces of 0-chains and 1-chains by

C_0(X,\mathbb{R}) = C_0(X,\mathbb{Z}) \otimes \mathbb{R}, \qquad C_1(X,\mathbb{R}) = C_1(X,\mathbb{Z}) \otimes \mathbb{R},

respectively. We extend the boundary map to a linear map

\partial :  C_1(X,\mathbb{R}) \to C_0(X,\mathbb{R})

We let Z_1(X,\mathbb{R}) be the kernel of this linear map, or equivalently,

Z_1(X,\mathbb{R}) = Z_1(X,\mathbb{Z}) \otimes \mathbb{R}  ,

and we call elements of this vector space 1-cycles. Since Z_1(X,\mathbb{Z}) is a free abelian group, it forms a lattice in the space of 1-cycles. Any edge of X can be seen as a 1-chain, and there is a unique inner product on C_1(X,\mathbb{R}) such that edges form an orthonormal basis (with each edge e^{-1} counting as the negative of e.) There is thus an orthogonal projection

\pi : C_1(X,\mathbb{R}) \to Z_1(X,\mathbb{R}) .

This is the key to building topological crystals!

The embedding of atoms

We now come to the main construction, first introduced by Kotani and Sunada. To build a topological crystal, we start with a connected graph X with a chosen basepoint x_0. We define an atom to be a homology class of paths starting at the basepoint, like

[[\alpha]] : x_0 \to x

Last time I showed that these atoms are the vertices of the maximal abelian cover of X. Now let’s embed these atoms in a vector space!

Definition. Let X be a connected graph with a chosen basepoint. Let A be its set of atoms. Define the map

i : A \to Z_1(X,\mathbb{R})

by

i([[ \alpha ]]) = \pi(c_\alpha) .

That i is well-defined follows from Lemma A. The interesting part is this:

Theorem A. The following are equivalent:

(1) The graph X has no bridges.

(2) The map i : A \to Z_1(X,\mathbb{R}) is one-to-one.

Proof. The map i is one-to-one if and only if for any atoms [[ \alpha ]] and [[ \beta ]], i([[ \alpha ]])  = i([[ \beta ]]) implies [[ \alpha ]]= [[ \beta ]]. Note that \gamma = \beta^{-1} \alpha is a path in X with c_\gamma = c_{\alpha} - c_\beta, so

\pi(c_\gamma) = \pi(c_{\alpha} - c_\beta) =  i([[ \alpha ]]) - i([[ \beta ]])

Since \pi(c_\gamma) vanishes if and only if c_\gamma is orthogonal to every 1-cycle, we have

c_{\gamma} \textrm{ is orthogonal to every 1-cycle}   \; \iff \;   i([[ \alpha ]])  = i([[ \beta ]])

On the other hand, Lemma A says

c_\gamma = 0 \; \iff \; [[ \alpha ]]= [[ \beta ]].

Thus, to prove (1)\iff(2), it suffices to show that X has no bridges if and only if every 1-chain c_\gamma orthogonal to every 1-cycle has c_\gamma =0. This is Lemma D below.   █

The following lemmas are the key to the theorem above — and also a deeper one saying that if X has no bridges, we can extend i : A \to Z_1(X,\mathbb{R}) to an embedding of the whole maximal abelian cover of X.

For now, we just need to show that any nonzero 1-chain coming from a path in a bridgeless graph has nonzero inner product with some 1-cycle. The following lemmas, inspired by an idea of Ilya Bogdanov, yield an algorithm for actually constructing such a 1-cycle. This 1-cycle also has other desirable properties, which will come in handy later.

To state these, let a simple path be one in which each vertex appears at most once. Let a simple loop be a loop \gamma : x \to x in which each vertex except x appears at most once, while x appears exactly twice, as the starting point and ending point. Let the support of a 1-chain c, denoted \mathrm{supp}(c), be the set of edges e such that \langle c, e\rangle> 0. This excludes edges with \langle c, e \rangle= 0 , but also those with \langle c , e \rangle < 0, which are inverses of edges in the support. Note that

c = \sum_{e \in \mathrm{supp}(c)} \langle c, e \rangle \, e  .

Thus, \mathrm{supp}(c) is the smallest set of edges such that c can be written as a positive linear combination of edges in this set.

Okay, here are the lemmas!

Lemma B. Let X be any graph and let c be an integral 1-cycle on X. Then for some n we can write

c = c_{\sigma_1} + \cdots +  c_{\sigma_n}

where \sigma_i are simple loops with \mathrm{supp}(c_{\sigma_i}) \subseteq \mathrm{supp}(c).

Proof. See the paper. The proof is an algorithm that builds a simple loop \sigma_1 with \mathrm{supp}(c_{\sigma_1}) \subseteq \mathrm{supp}(c). We subtract this from c, and if the result isn’t zero we repeat the algorithm, continuing to subtract off 1-cycles c_{\sigma_i} until there’s nothing left.   █
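Here is one way such an algorithm might look. This is my own sketch, not the construction from the paper: the cycle is given as a mapping from directed edges (u, v) to positive multiplicities, and each pass walks forward along support edges until a vertex repeats, which closes a simple loop that we then subtract:

```python
from collections import Counter

def simple_loops(cycle):
    """Decompose an integral 1-cycle into simple loops (a sketch).
    cycle: Counter mapping directed edges (u, v) to positive counts;
    assumes the boundary of the chain really is zero."""
    c = Counter({e: n for e, n in cycle.items() if n > 0})
    loops = []
    while c:
        v = next(iter(c))[0]                      # start anywhere in the support
        path, seen = [], {v: 0}
        while True:
            e = next(e for e in c if e[0] == v)   # any outgoing support edge
            path.append(e)
            v = e[1]
            if v in seen:                         # first repeated vertex closes
                loop = path[seen[v]:]             # a simple loop
                break
            seen[v] = len(path)
        for e in loop:                            # subtract the loop from c
            c[e] -= 1
            if c[e] == 0:
                del c[e]
        loops.append(loop)
    return loops

# A triangle traversed twice decomposes into two simple loops:
c = Counter({(0, 1): 2, (1, 2): 2, (2, 0): 2})
print([len(l) for l in simple_loops(c)])   # [3, 3]
```

The walk always finds an outgoing edge because a 1-cycle has equal in- and out-degree at every vertex of its support, and it must revisit a vertex within finitely many steps, so each pass terminates.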

Lemma C. Let \gamma: x \to y be a path in a graph X. Then for some n \ge 0 we can write

c_\gamma = c_\delta + c_{\sigma_1} + \cdots +  c_{\sigma_n}

where \delta: x \to y is a simple path and \sigma_i are simple loops with \mathrm{supp}(c_\delta), \mathrm{supp}(c_{\sigma_i}) \subseteq \mathrm{supp}(c_\gamma).

Proof. This relies on the previous lemma, and the proof is similar — but when we can’t subtract off any more c_{\sigma_i}’s we show what’s left is c_\delta for a simple path \delta: x \to y.   █

Lemma D. Let X be a graph. Then the following are equivalent:

(1) X has no bridges.

(2) For any path \gamma in X, if c_\gamma is orthogonal to every 1-cycle then c_\gamma = 0.

Proof. It’s easy to show a bridge e gives a nonzero 1-chain c_e that’s orthogonal to all 1-cycles, so the hard part is showing that for a bridgeless graph, if c_\gamma is orthogonal to every 1-cycle then c_\gamma = 0. The idea is to start with a path for which c_\gamma \ne 0. We hit this path with Lemma C, which lets us replace \gamma by a simple path \delta. The point is that a simple path is a lot easier to deal with than a general path: a general path could wind around crazily, passing over every edge of our graph multiple times.

Then, assuming X has no bridges, we use Ilya Bogdanov’s idea to build a 1-cycle that’s not orthogonal to c_\delta. The basic idea is to take the path \delta : x \to y and write it out as \delta = e_1 \cdots e_n. Since the last edge e_n is not a bridge, there must be a path from y back to x that does not use the edge e_n or its inverse. Combining this path with \delta we can construct a loop, which gives a cycle having nonzero inner product with c_\delta and thus with c_\gamma.

I’m deliberately glossing over some difficulties that can arise, so see the paper for details!   █
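The bridgeless hypothesis itself is easy to test by machine: an edge is a bridge if and only if removing it disconnects its endpoints. A small sketch, with undirected edges as vertex pairs (the representation is my own choice):

```python
def is_bridge(edges, e):
    """True if removing undirected edge e disconnects its endpoints."""
    u, v = e
    rest = [d for d in edges if d not in (e, (v, u))]
    adj = {}
    for a, b in rest:
        adj.setdefault(a, []).append(b)
        adj.setdefault(b, []).append(a)
    stack, seen = [u], {u}
    while stack:                       # depth-first search from u
        x = stack.pop()
        if x == v:
            return False               # endpoints still connected: not a bridge
        for y in adj.get(x, []):
            if y not in seen:
                seen.add(y)
                stack.append(y)
    return True

triangle = [(0, 1), (1, 2), (2, 0)]
path = [(0, 1), (1, 2)]
print(is_bridge(triangle, (0, 1)), is_bridge(path, (0, 1)))   # False True
```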

Embedding the whole crystal

Okay: so far, we’ve taken a connected bridgeless graph X and embedded its atoms into the space of 1-cycles via a map

i : A \to Z_1(X,\mathbb{R})  .

These atoms are the vertices of the maximal abelian cover \overline{X}. Now we’ll extend i to an embedding of the whole graph \overline{X} — or to be precise, its geometric realization |\overline{X}|. Remember, for us a graph is an abstract combinatorial gadget; its geometric realization is a topological space where the edges become closed intervals.

The idea is that just as i maps each atom to a point in the vector space Z_1(X,\mathbb{R}), the extended map j maps each edge of |\overline{X}| to a straight line segment between such points. These line segments serve as the ‘bonds’ of a topological crystal. The only challenge is to show that these bonds do not cross each other.

Theorem B. If X is a connected graph with basepoint, the map i : A \to Z_1(X,\mathbb{R}) extends to a continuous map

j : |\overline{X}| \to Z_1(X,\mathbb{R})

sending each edge of |\overline{X}| to a straight line segment in Z_1(X,\mathbb{R}). If X has no bridges, then j is one-to-one.

Proof. The first part is easy; the second part takes real work! The problem is to show the edges don’t cross. Greg Egan and I couldn’t do it using just Lemma D above. However, there’s a nice argument that goes back and uses Lemma C — read the paper for details.

As usual, history is different than what you read in math papers: David Speyer gave us a nice proof of Lemma D, and that was good enough to prove that atoms are mapped into the space of 1-cycles in a one-to-one way, but we only came up with Lemma C after weeks of struggling to prove the edges don’t cross.   █

Connections to tropical geometry

Tropical geometry sets up a nice analogy between Riemann surfaces and graphs. The Abel–Jacobi map embeds any Riemann surface \Sigma in its Jacobian, which is the torus H_1(\Sigma,\mathbb{R})/H_1(\Sigma,\mathbb{Z}). We can similarly define the Jacobian of a graph X to be H_1(X,\mathbb{R})/H_1(X,\mathbb{Z}). Theorem B yields a way to embed a graph, or more precisely its geometric realization |X|, into its Jacobian. This is the analogue, for graphs, of the Abel–Jacobi map.

After I put this paper on the arXiv, I got an email from Matt Baker saying that he had already proved Theorem A — or to be precise, something that’s clearly equivalent. It’s Theorem 1.8 here:

• Matthew Baker and Serguei Norine, Riemann–Roch and Abel–Jacobi theory on a finite graph.

This says that the vertices of a bridgeless graph X are embedded in its Jacobian by means of the graph-theoretic analogue of the Abel–Jacobi map.

What I really want to know is whether someone’s written up a proof that this map embeds the whole graph, not just its vertices, into its Jacobian in a one-to-one way. That would imply Theorem B. For more on this, try my conversation with David Speyer.

Anyway, there’s a nice connection between topological crystallography and tropical geometry, and not enough communication between the two communities. Once I figure out what the tropical folks have proved, I will revise my paper to take that into account.

Next time I’ll talk about more examples of topological crystals!



Topological Crystals (Part 2)

27 July, 2016

k4_crystal

We’re building crystals, like diamonds, purely from topology. Last time I said how: you take a graph X and embed its maximal abelian cover into the vector space H_1(X,\mathbb{R}). Now let me say a bit more about the maximal abelian cover. It’s not nearly as famous as the universal cover, but it’s very nice.

First I’ll whiz through the basic idea, and then I’ll give the details.

The basic idea

By ‘space’ let me mean a connected topological space that’s locally nice. The basic idea is that if X is some space, its universal cover \widetilde{X} is a covering space of X that covers all other covering spaces of X. The maximal abelian cover \overline{X} has a similar universal property: it’s abelian, and it covers all other connected abelian covers. A cover is abelian if its group of deck transformations is abelian.

The cool part is that universal covers are to homotopy theory as maximal abelian covers are to homology theory.

What do I mean by that? For starters, points in \widetilde{X} are just homotopy classes of paths in X starting at some chosen basepoint. And the points in \overline{X} are just ‘homology classes’ of paths starting at the basepoint.

But people don’t talk so much about ‘homology classes’ of paths. So what do I mean by that? Here a bit of category theory comes in handy. Homotopy classes of paths in X are morphisms in the fundamental groupoid of X. Homology classes of paths are morphisms in the abelianized version of the fundamental groupoid!

But wait a minute — what does that mean? Well, we can abelianize any groupoid by imposing the relations

f g = g f

whenever it makes sense to do so. It makes sense to do so when you can compose the morphisms f : x \to y and g : x' \to y' in either order, and the resulting morphisms f g and g f have the same source and the same target. And if you work out what that means, you’ll see it means

x = y = x' = y'

But now let me say it all much more slowly, for people who want a more relaxed treatment.

The details

There are lots of slightly different things called ‘graphs’ in mathematics, but in topological crystallography it’s convenient to work with one that you’ve probably never seen before. This kind of graph has two copies of each edge, one pointing in each direction.

So, we’ll say a graph X = (E,V,s,t,i) has a set V of vertices, a set E of edges, maps s,t : E \to V assigning to each edge its source and target, and a map i : E \to E sending each edge to its inverse, obeying

s(i(e)) = t(e), \quad t(i(e)) = s(e) , \qquad i(i(e)) = e

and

i(e) \ne e

for all e \in E.

That inequality at the end will make category theorists gag: definitions should say what’s true, not what’s not true. But category theorists should be able to see what’s really going on here, so I leave that as a puzzle.

For ordinary folks, let me repeat the definition using more words. If s(e) = v and t(e) = w we write e : v \to w, and draw e as an interval with an arrow on it pointing from v to w. We write i(e) as e^{-1}, and draw e^{-1} as the same interval as e, but with its arrow reversed. The equations obeyed by i say that taking the inverse of e : v \to w gives an edge e^{-1} : w \to v and that (e^{-1})^{-1} = e. No edge can be its own inverse.

A map of graphs, say f : X \to X', is a pair of functions, one sending vertices to vertices and one sending edges to edges, that preserve the source, target and inverse maps. By abuse of notation we call both of these functions f.

I started out talking about topology; now I’m treating graphs very combinatorially, but we can bring the topology back in. From a graph X we can build a topological space |X| called its geometric realization. We do this by taking one point for each vertex and gluing on one copy of [0,1] for each edge e : v \to w, gluing the point 0 to v and the point 1 to w, and then identifying the interval for each edge e with the interval for its inverse by means of the map t \mapsto 1 - t.

Any map of graphs gives rise to a continuous map between their geometric realizations, and we say a map of graphs is a cover if this continuous map is a covering map. For simplicity we denote the fundamental group of |X| by \pi_1(X), and similarly for other topological invariants of |X|. However, sometimes I’ll need to distinguish between a graph X and its geometric realization |X|.

Any connected graph X has a universal cover, meaning a connected cover

p : \widetilde{X} \to X

that covers every other connected cover. The geometric realization of \widetilde{X} is connected and simply connected. The fundamental group \pi_1(X) acts as deck transformations of \widetilde{X}, meaning invertible maps g : \widetilde{X} \to \widetilde{X} such that p \circ g = p. We can take the quotient of \widetilde{X} by the action of any subgroup G \subseteq \pi_1(X) and get a cover q : \widetilde{X}/G \to X.

In particular, if we take G to be the commutator subgroup of \pi_1(X), we call the graph \widetilde{X}/G the maximal abelian cover of the graph X, and denote it by \overline{X}. We obtain a cover

q : \overline{X} \to X

whose group of deck transformations is the abelianization of \pi_1(X). This is just the first homology group H_1(X,\mathbb{Z}). In particular, if the space corresponding to X has n holes, this is the free abelian group on n generators.

I want a concrete description of the maximal abelian cover! I’ll build it starting with the universal cover, but first we need some preliminaries on paths in graphs.

Given vertices x,y in X, define a path from x to y to be a word of edges \gamma = e_1 \cdots e_\ell with e_i : v_{i-1} \to v_i for some vertices v_0, \dots, v_\ell with v_0 = x and v_\ell = y. We allow the word to be empty if and only if x = y; this gives the trivial path from x to itself.

Given a path \gamma from x to y we write \gamma : x \to y, and we write the trivial path from x to itself as 1_x : x \to x. We define the composite of paths \gamma : x \to y and \delta : y \to z via concatenation of words, obtaining a path we call \gamma \delta : x \to z. We call a path from a vertex x to itself a loop based at x.

We say two paths from x to y are homotopic if one can be obtained from the other by repeatedly introducing or deleting subwords of the form e_i e_{i+1} where e_{i+1} = e_i^{-1}. If [\gamma] is a homotopy class of paths from x to y, we write [\gamma] : x \to y. We can compose homotopy classes [\gamma] : x \to y and [\delta] : y \to z by setting [\gamma] [\delta] = [\gamma \delta].
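Computing the homotopy class of a path amounts to freely reducing its word, and a single stack-based pass does this; the edge labels below are arbitrary placeholders, with a trailing '-' for inverses:

```python
def reduce_word(path):
    """Freely reduce a path word by cancelling adjacent e, e^{-1} pairs."""
    inverse = lambda e: e[:-1] if e.endswith('-') else e + '-'
    out = []
    for e in path:
        if out and out[-1] == inverse(e):
            out.pop()                 # e cancels the edge before it
        else:
            out.append(e)
    return out

# Two paths are homotopic iff their reduced words agree:
print(reduce_word(['a', 'b', 'b-', 'a-', 'a']))   # ['a']
```

The stack pass computes the unique fully reduced word, since after each step the word on the stack contains no cancelling pair.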

If X is a connected graph, we can describe the universal cover \widetilde{X} as follows. Fix a vertex x_0 of X, which we call the basepoint. The vertices of \widetilde{X} are defined to be the homotopy classes of paths [\gamma] : x_0 \to x where x is arbitrary. The edges in \widetilde{X} from the vertex [\gamma] : x_0 \to x to the vertex [\delta] : x_0 \to y are defined to be the edges e \in E with [\gamma e] = [\delta]. In fact, there is always at most one such edge. There is an obvious map of graphs

p : \widetilde{X} \to X

sending each vertex [\gamma] : x_0 \to x of \widetilde{X} to the vertex
x of X. This map is a cover.

Now we are ready to construct the maximal abelian cover \overline{X}. For this, we impose a further equivalence relation on paths, which is designed to make composition commutative whenever possible. However, we need to be careful. If \gamma : x \to y and \delta : x' \to y' , the composites \gamma \delta and \delta \gamma are both well-defined if and only if x' = y and y' = x. In this case, \gamma \delta and \delta \gamma share the same starting point and share the same ending point if and only if x = x' and y = y'. If all four of these equations hold, both \gamma and \delta are loops based at x. So, we shall impose the relation \gamma \delta = \delta \gamma only in this case.

We say two paths are homologous if one can be obtained from another by:

• repeatedly introducing or deleting subwords e_i e_{i+1} where
e_{i+1} = e_i^{-1}, and/or

• repeatedly replacing subwords of the form

e_i \cdots e_j e_{j+1} \cdots e_k

by those of the form

e_{j+1} \cdots e_k e_i \cdots e_j

where e_i \cdots e_j and e_{j+1} \cdots e_k are loops based at the same vertex.

My use of the term ‘homologous’ is a bit nonstandard here!

We denote the homology class of a path \gamma by [[ \gamma ]]. Note that if two paths \gamma : x \to y, \delta : x' \to y' are homologous then x = x' and y = y'. Thus, the starting and ending points of a homology class of paths are well-defined, and given any path \gamma : x \to y we write [[ \gamma ]] : x \to y . The composite of homology classes is also well-defined if we set

[[ \gamma ]] [[ \delta ]] = [[ \gamma \delta ]]
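Both moves above preserve the signed number of times each edge is traversed. So every homology class [[\gamma]] determines a well-defined vector of edge multiplicities, and together with the endpoints this vector captures the class. A sketch in Python, with hypothetical names:

```python
from collections import Counter

# The signed edge-count vector of a path: both homology moves
# (cancelling e e^{-1}, and swapping loops based at the same vertex)
# leave it unchanged, so it is an invariant of the class [[gamma]].
def one_chain(word):
    counts = Counter()
    for name, direction in word:
        counts[name] += direction
    return {e: n for e, n in counts.items() if n != 0}
```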

We construct the maximal abelian cover of a connected graph X just as we constructed its universal cover, but using homology classes rather than homotopy classes of paths. And now I’ll introduce some jargon that should make you start thinking about crystals!

Fix a basepoint x_0 for X. The vertices of \overline{X}, or atoms, are defined to be the homology classes of paths [[\gamma]] : x_0 \to x where x is arbitrary. Any edge of \overline{X}, or bond, goes from some atom [[ \gamma]] : x_0 \to x to some atom [[ \delta ]] : x_0 \to y. The bonds from [[ \gamma]] to [[ \delta ]] are defined to be the edges e \in E with [[ \gamma e ]] = [[ \delta ]]. There is at most one bond between any two atoms. Again we have a covering map

q : \overline{X} \to X.

The homotopy classes of loops based at x_0 form a group, with composition as the group operation. This is the fundamental group \pi_1(X) of the graph X. It is isomorphic to the fundamental group of the space associated to X. By our construction of the universal cover, \pi_1(X) is also the set of vertices of \widetilde{X} that are mapped to x_0 by p. Furthermore, any element [\gamma] \in \pi_1(X) defines a deck transformation of \widetilde{X} that sends each vertex [\delta] : x_0 \to x to the vertex [\gamma] [\delta] : x_0 \to x.

Similarly, the homology classes of loops based at x_0 form a group with composition as the group operation. Since the additional relation used to define homology classes is precisely that needed to make composition of homology classes of loops commutative, this group is the abelianization of \pi_1(X). It is therefore isomorphic to the first homology group H_1(X,\mathbb{Z}) of the geometric realization of X.

By our construction of the maximal abelian cover, H_1(X,\mathbb{Z}) is also the set of vertices of \overline{X} that are mapped to x_0 by q. Furthermore, any element [[\gamma]] \in H_1(X,\mathbb{Z}) defines a deck transformation of \overline{X} that sends each vertex [[\delta]] : x_0 \to x to the vertex [[\gamma]] [[\delta]] : x_0 \to x.

So, it all works out! The fundamental group \pi_1(X) acts as deck transformations of the universal cover, while the first homology group H_1(X,\mathbb{Z}) acts as deck transformations of the maximal abelian cover.
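In coordinates, these deck transformations are simply translations. Here is a sketch (my own encoding, for illustration): model an atom [[\delta]] : x_0 \to x as the pair of its signed edge-count vector and its endpoint x, and a homology class of loops as its edge-count vector; acting by the loop just adds the two vectors.

```python
# Deck transformation of the maximal abelian cover, in coordinates:
# an atom is (edge-count vector, endpoint); acting by a homology class
# of loops adds the loop's edge-count vector, leaving the endpoint alone.
def deck_transform(loop_counts, atom):
    counts, endpoint = atom
    shifted = dict(counts)
    for edge, n in loop_counts.items():
        shifted[edge] = shifted.get(edge, 0) + n
        if shifted[edge] == 0:
            del shifted[edge]    # drop zero entries to keep a normal form
    return (shifted, endpoint)
```

Acting by a loop and then by its inverse returns every atom to where it started, as a group action should.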

Puzzle for experts: what does this remind you of in Galois theory?

We’ll get back to crystals next time.

Read the whole series

Part 1 – the basic idea.

Part 2 – the maximal abelian cover of a graph.

Part 3 – constructing topological crystals.

Part 4 – examples of topological crystals.


Topological Crystals (Part 1)

22 July, 2016

k4_crystal

A while back, we started talking about crystals:

• John Baez, Diamonds and triamonds, Azimuth, 11 April 2016.

In the comments on that post, a bunch of us worked on some puzzles connected to ‘topological crystallography’—a subject that blends graph theory, topology and mathematical crystallography. You can learn more about that subject here:

• Tosio Sunada, Crystals that nature might miss creating, Notices of the AMS 55 (2008), 208–215.

I got so interested that I wrote this paper about it, with massive help from Greg Egan:

• John Baez, Topological crystals.

I’ll explain the basic ideas in a series of posts here.

First, a few personal words.

I feel a bit guilty putting so much work into this paper when I should be developing network theory to the point where it does our planet some good. I seem to need a certain amount of beautiful pure math to stay sane. But this project did at least teach me a lot about the topology of graphs.

For those not in the know, applying homology theory to graphs might sound fancy and interesting. For people who have studied a reasonable amount of topology, it probably sounds easy and boring. The first homology of a graph of genus g is a free abelian group on g generators: it’s a complete invariant of connected graphs up to homotopy equivalence. Case closed!

But there’s actually more to it, because studying graphs up to homotopy equivalence kills most of the fun. When we’re studying networks in real life we need a more refined outlook on graphs. So some aspects of this project might pay off, someday, in ways that have nothing to do with crystallography. But right now I’ll just talk about it as a fun self-contained set of puzzles.

I’ll start by quickly sketching how to construct topological crystals, and illustrate it with the example of graphene, a 2-dimensional form of carbon:


I’ll precisely state our biggest result, which says when this construction gives a crystal where the atoms don’t bump into each other and the bonds between atoms don’t cross each other. Later I may come back and add detail, but for now you can find details in our paper.

Constructing topological crystals

The ‘maximal abelian cover’ of a graph plays a key role in Sunada’s work on topological crystallography. Just as the universal cover of a connected graph X has the fundamental group \pi_1(X) as its group of deck transformations, the maximal abelian cover, denoted \overline{X}, has the abelianization of \pi_1(X) as its group of deck transformations. It thus covers every other connected cover of X whose group of deck transformations is abelian. Since the abelianization of \pi_1(X) is the first homology group H_1(X,\mathbb{Z}), there is a close connection between the maximal abelian cover and homology theory.

In our paper, Greg and I prove that for a large class of graphs, the maximal abelian cover can naturally be embedded in the vector space H_1(X,\mathbb{R}). We call this embedded copy of \overline{X} a ‘topological crystal’. The symmetries of the original graph can be lifted to symmetries of its topological crystal, but the topological crystal also has an n-dimensional lattice of translational symmetries. In 2- and 3-dimensional examples, the topological crystal can serve as the blueprint for an actual crystal, with atoms at the vertices and bonds along the edges.

The general construction of topological crystals was developed by Kotani and Sunada, and later by Eon. Sunada uses ‘topological crystal’ for an even more general concept, but we only need a special case.

Here’s how it works. We start with a graph X. This has a space C_0(X,\mathbb{R}) of 0-chains, which are formal linear combinations of vertices, and a space C_1(X,\mathbb{R}) of 1-chains, which are formal linear combinations of edges. There is a boundary operator

\partial \colon C_1(X,\mathbb{R}) \to C_0(X,\mathbb{R})

This is the linear operator sending any edge to the difference of its two endpoints. The kernel of this operator is called the space of 1-cycles, Z_1(X,\mathbb{R}). There is an inner product on the space of 1-chains such that edges form an orthonormal basis. This determines an orthogonal projection

\pi \colon C_1(X,\mathbb{R}) \to Z_1(X,\mathbb{R})

For a graph, Z_1(X,\mathbb{R}) is isomorphic to the first homology group H_1(X,\mathbb{R}). So, to obtain the topological crystal of X, we need only embed its maximal abelian cover \overline{X} in Z_1(X,\mathbb{R}). We do this by embedding \overline{X} in C_1(X,\mathbb{R}) and then projecting it down via \pi.
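Numerically, \partial is just the incidence matrix of the graph, and \pi can be computed from it. A sketch with NumPy, using as a test case a small graph with two vertices a, b joined by three parallel edges (each edge's column has -1 at its source and +1 at its target):

```python
import numpy as np

# Incidence matrix of the boundary operator: rows index vertices,
# columns index edges, and the boundary of e is (target of e) - (source of e).
D = np.array([[-1, -1, -1],    # vertex a: source of all three edges
              [ 1,  1,  1]])   # vertex b: target of all three edges

# Orthogonal projection onto Z_1 = ker D, via the pseudoinverse:
# pinv(D) @ D projects onto the row space of D, so subtracting it
# from the identity projects onto the kernel.
P = np.eye(3) - np.linalg.pinv(D) @ D
```

For this graph P = I - J/3, where J is the all-ones matrix: cycles like (1, -1, 0) are fixed, while a single edge (1, 0, 0) projects to (2/3, -1/3, -1/3).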

To accomplish this, we need to fix a basepoint for X. Each path \gamma in X starting at this basepoint determines a 1-chain c_\gamma. These 1-chains correspond to the vertices of \overline{X}. The graph \overline{X} has an edge from c_\gamma to c_{\gamma'} whenever the path \gamma' is obtained by adding an extra edge to \gamma. This edge is a straight line segment from the point c_\gamma to the point c_{\gamma'}.

The hard part is checking that the projection \pi maps this copy of \overline{X} into Z_1(X,\mathbb{R}) in a one-to-one manner. In Theorems 6 and 7 of our paper we prove that this happens precisely when the graph X has no ‘bridges’: that is, edges whose removal would disconnect X.

Kotani and Sunada noted that this condition is necessary. That’s actually pretty easy to see. The challenge was to show that it’s sufficient! For this, our main technical tool is Lemma 5, which for any path \gamma decomposes the 1-chain c_\gamma into manageable pieces.
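Checking the bridge condition for a given graph is easy: one depth-first search finds all bridges. Here is a standard sketch (Tarjan-style low-links; the helper names are mine), taking an adjacency list that tags each edge with an id, so that parallel edges, which are never bridges, are handled correctly:

```python
# Find all bridges of an undirected graph with one depth-first search:
# a tree edge into v is a bridge exactly when no back edge from v's
# subtree climbs strictly above v.
def find_bridges(adj):
    """adj maps each vertex to a list of (edge_id, neighbor) pairs."""
    order, low, bridges = {}, {}, []
    counter = [0]

    def dfs(u, parent_edge):
        order[u] = low[u] = counter[0]
        counter[0] += 1
        for eid, v in adj[u]:
            if eid == parent_edge:
                continue                        # don't reuse the edge we came in on
            if v in order:
                low[u] = min(low[u], order[v])  # back edge
            else:
                dfs(v, eid)
                low[u] = min(low[u], low[v])
                if low[v] > order[u]:
                    bridges.append(eid)         # nothing below v climbs above u

    for u in adj:
        if u not in order:
            dfs(u, None)
    return bridges
```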

We call the resulting copy of \overline{X} embedded in Z_1(X,\mathbb{R}) a topological crystal.

Let’s see how it works in an example!

Take X to be this graph:

Since X has 3 edges, the space of 1-chains is 3-dimensional. Since X has 2 holes, the space of 1-cycles is a 2-dimensional plane in this 3-dimensional space. If we consider paths \gamma in X starting at the red vertex, form the 1-chains c_\gamma, and project them down to this plane, we obtain the following picture:

Here the 1-chains c_\gamma are the white and red dots. These are the vertices of \overline{X}, while the line segments between them are the edges of \overline{X}. Projecting these vertices and edges onto the plane of 1-cycles, we obtain the topological crystal for X. The blue dots come from projecting the white dots onto the plane of 1-cycles, while the red dots already lie on this plane. The resulting topological crystal provides the pattern for graphene:
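Those dots can also be generated by a short program. Here is a sketch in Python (my own code, assuming X is the graph above: two vertices a, b joined by three parallel edges 0, 1, 2, with basepoint a, consistent with its 3 edges and 2 holes). We grow the vertices of \overline{X} by extending paths one edge at a time, recording each path's signed edge-count vector, then project onto the plane of 1-cycles:

```python
import numpy as np

D = np.array([[-1, -1, -1], [1, 1, 1]])   # incidence matrix: three edges a -> b
P = np.eye(3) - np.linalg.pinv(D) @ D     # projection onto the 1-cycles, ker D

def neighbors(atom):
    """Extend a path by one edge: from a every edge goes forward, from b backward."""
    chain, v = atom
    step, other = (1, 'b') if v == 'a' else (-1, 'a')
    for e in range(3):
        c = list(chain)
        c[e] += step
        yield (tuple(c), other)

# Grow the cover out to three bonds from the basepoint.
atoms = {((0, 0, 0), 'a')}
for _ in range(3):
    atoms |= {n for atom in atoms for n in neighbors(atom)}

# Project each 1-chain onto the plane of cycles: the honeycomb pattern.
points = {atom: P @ np.array(atom[0]) for atom in atoms}
```

The atoms over the basepoint (chains summing to zero, at vertex a) already lie in the plane, matching the red dots; a ball of radius 3 in the honeycomb contains 1 + 3 + 6 + 9 = 19 atoms.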

That’s all there is to the basic idea! But there’s a lot more to say about it, and a lot of fun examples to look at: diamonds, triamonds, hyperquartz and more.

Read the whole series

Part 1 – the basic idea.

Part 2 – the maximal abelian cover of a graph.

Part 3 – constructing topological crystals.

Part 4 – examples of topological crystals.


Azimuth News (Part 5)

11 June, 2016

I’ve been rather quiet about Azimuth projects lately, because I’ve been too busy actually working on them. Here’s some of what’s happening:

Jason Erbele is finishing his thesis, entitled Categories in Control: Applied PROPs. He successfully gave his thesis defense on Wednesday June 8th, but he needs to polish it up some more. Building on the material in our paper “Categories in control”, he’s defined a category where the morphisms are signal flow diagrams. But interestingly, not all the diagrams you can draw are actually considered useful in control theory! So he’s also found a subcategory where the morphisms are the ‘good’ signal flow diagrams, the ones control theorists like. For these he studies familiar concepts like controllability and observability. When his thesis is done I’ll announce it here.

Brendan Fong is also finishing his thesis, called The Algebra of Open and Interconnected Systems. Brendan has already created a powerful formalism for studying open systems: the decorated cospan formalism. We’ve applied it to two examples: electrical circuits and Markov processes. Lately he’s been developing the formalism further, and this will appear in his thesis. Again, I’ll talk about it when he’s done!

Blake Pollard and I are writing a paper called “A compositional framework for open chemical reaction networks”. Here we take our work on Markov processes and throw in two new ingredients: dynamics and nonlinearity. Of course Markov processes have a dynamics, but in our previous paper when we ‘black-boxed’ them to study their external behaviour, we got a relation between flows and populations in equilibrium. Now we explain how to handle nonequilibrium situations as well.

Brandon Coya, Franciscus Rebro and I are writing a paper that might be called “The algebra of networks”. I’m not completely sure of the title, nor who the authors will be: Brendan Fong may also be a coauthor. But the paper explores the technology of PROPs as a tool for describing networks. As an application, we’ll give a new shorter proof of the functoriality of black-boxing for electrical circuits. This new proof also applies to nonlinear circuits. I’m really excited about how the theory of PROPs, first introduced in algebraic topology, is catching fire with all the new applications to network theory.

I expect all these projects to be done by the end of the summer. Near the end of June I’ll go to the Centre for Quantum Technologies, in Singapore. This will be my last summer there. My main job will be to finish up the two papers that I’m supposed to be writing.

There’s another paper that’s already done:

Kenny Courser has written a paper “A bicategory of decorated cospans”, pushing Brendan’s framework from categories to bicategories. I’ll explain this very soon here on this blog! One goal is to understand things like the coarse-graining of open systems: that is, the process of replacing a detailed description by a less detailed description. Since we treat open systems as morphisms, coarse-graining is something that goes from one morphism to another, so it’s naturally treated as a 2-morphism in a bicategory.

So, I’ve got a lot of new ideas to explain here, and I’ll start soon! I also want to get deeper into systems biology.



In the fall I’ve got a couple of short trips lined up:

• Monday November 14 – Friday November 18, 2016 – I’ve been invited by Yoav Kallus to visit the Santa Fe Institute. From the 16th to 18th I’ll attend a workshop on Statistical Physics, Information Processing and Biology.

• Monday December 5 – Friday December 9 – I’ve been invited to Berkeley for a workshop on Compositionality at the Simons Institute for the Theory of Computing, organized by Samson Abramsky, Lucien Hardy, and Michael Mislove. ‘Compositionality’ is a name for how you describe the behavior of a big complicated system in terms of the behaviors of its parts, so this is closely connected to my dream of studying open systems by treating them as morphisms that can be composed to form bigger open systems.

Here’s the announcement:

The compositional description of complex objects is a fundamental feature of the logical structure of computation. The use of logical languages in database theory and in algorithmic and finite model theory provides a basic level of compositionality, but establishing systematic relationships between compositional descriptions and complexity remains elusive. Compositional models of probabilistic systems and languages have been developed, but inferring probabilistic properties of systems in a compositional fashion is an important challenge. In quantum computation, the phenomenon of entanglement poses a challenge at a fundamental level to the scope of compositional descriptions. At the same time, compositionality has been proposed as a fundamental principle for the development of physical theories. This workshop will focus on the common structures and methods centered on compositionality that run through all these areas.

I’ll say more about both these workshops when they take place.


Programming with Data Flow Graphs

5 June, 2016

Network theory is catching on—in a very practical way!

Google recently started a new open source library called TensorFlow. It’s for software built using data flow graphs. These are graphs where the edges represent tensors—that is, multidimensional arrays of numbers—and the nodes represent operations on tensors. Thus, they are reminiscent of the spin networks used in quantum gravity and gauge theory, or the tensor networks used in renormalization theory. However, I bet the operations involved are nonlinear! If so, they’re more general.

Here’s what Google says:

About TensorFlow

TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them. The flexible architecture allows you to deploy computation to one or more CPUs or GPUs in a desktop, server, or mobile device with a single API. TensorFlow was originally developed by researchers and engineers working on the Google Brain Team within Google’s Machine Intelligence research organization for the purposes of conducting machine learning and deep neural networks research, but the system is general enough to be applicable in a wide variety of other domains as well.

What is a Data Flow Graph?

Data flow graphs describe mathematical computation with a directed graph of nodes & edges. Nodes typically implement mathematical operations, but can also represent endpoints to feed in data, push out results, or read/write persistent variables. Edges describe the input/output relationships between nodes. These data edges carry dynamically-sized multidimensional data arrays, or tensors. The flow of tensors through the graph is where TensorFlow gets its name. Nodes are assigned to computational devices and execute asynchronously and in parallel once all the tensors on their incoming edges become available.
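Here is a toy evaluator for such a graph in a few lines of Python (an illustration of the idea described above, not TensorFlow's actual API): nodes hold operations, the edges are the input references, and a memoizing pass computes each node once all of its inputs are available.

```python
import numpy as np

# A toy data flow graph: a node applies its operation to the tensors
# arriving on its input edges; memoization ensures each tensor is
# computed exactly once, no matter how many nodes consume it.
class Node:
    def __init__(self, op, *inputs):
        self.op, self.inputs = op, inputs

    def run(self, cache=None):
        cache = {} if cache is None else cache
        if self not in cache:
            args = [node.run(cache) for node in self.inputs]
            cache[self] = self.op(*args)
        return cache[self]

# Build the graph first, then execute it.
x = Node(lambda: np.array([1.0, 2.0]))
y = Node(lambda: np.array([3.0, 4.0]))
s = Node(np.add, x, y)      # s = x + y
out = Node(np.dot, s, s)    # out = s . s
```

Separating graph construction from execution is the design choice that lets a real system assign nodes to devices and run them in parallel.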

TensorFlow Features

Deep Flexibility. TensorFlow isn’t a rigid neural networks library. If you can express your computation as a data flow graph, you can use TensorFlow. You construct the graph, and you write the inner loop that drives computation. We provide helpful tools to assemble subgraphs common in neural networks, but users can write their own higher-level libraries on top of TensorFlow. Defining handy new compositions of operators is as easy as writing a Python function and costs you nothing in performance. And if you don’t see the low-level data operator you need, write a bit of C++ to add a new one.

True Portability. TensorFlow runs on CPUs or GPUs, and on desktop, server, or mobile computing platforms. Want to play around with a machine learning idea on your laptop without need of any special hardware? TensorFlow has you covered. Ready to scale-up and train that model faster on GPUs with no code changes? TensorFlow has you covered. Want to deploy that trained model on mobile as part of your product? TensorFlow has you covered. Changed your mind and want to run the model as a service in the cloud? Containerize with Docker and TensorFlow just works.

Connect Research and Production. Gone are the days when moving a machine learning idea from research to product required a major rewrite. At Google, research scientists experiment with new algorithms in TensorFlow, and product teams use TensorFlow to train and serve models live to real customers. Using TensorFlow allows industrial researchers to push ideas to products faster, and allows academic researchers to share code more directly and with greater scientific reproducibility.

Auto-Differentiation. Gradient based machine learning algorithms will benefit from TensorFlow’s automatic differentiation capabilities. As a TensorFlow user, you define the computational architecture of your predictive model, combine that with your objective function, and just add data — TensorFlow handles computing the derivatives for you. Computing the derivative of some values w.r.t. other values in the model just extends your graph, so you can always see exactly what’s going on.
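The phrase “just extends your graph” can be illustrated by a minimal reverse-mode automatic differentiation sketch in Python (a toy, not TensorFlow's implementation): each operation records its inputs together with the local derivatives, and a backward pass accumulates gradients along the recorded graph.

```python
# Minimal reverse-mode automatic differentiation: every operation
# records (parent, local derivative) pairs, and backward() pushes the
# incoming gradient to each parent, scaled by the local derivative.
class Var:
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        return Var(self.value + other.value, [(self, 1.0), (other, 1.0)])

    def __mul__(self, other):
        # d(uv)/du = v and d(uv)/dv = u
        return Var(self.value * other.value,
                   [(self, other.value), (other, self.value)])

    def backward(self, seed=1.0):
        self.grad += seed
        for parent, local in self.parents:
            parent.backward(seed * local)

x = Var(3.0)
y = x * x + x    # y = x^2 + x
y.backward()     # accumulates dy/dx = 2x + 1 into x.grad
```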

Language Options. TensorFlow comes with an easy to use Python interface and a no-nonsense C++ interface to build and execute your computational graphs. Write stand-alone TensorFlow Python or C++ programs, or try things out in an interactive TensorFlow iPython notebook where you can keep notes, code, and visualizations logically grouped. This is just the start though — we’re hoping to entice you to contribute SWIG interfaces to your favorite language — be it Go, Java, Lua, JavaScript, or R.

Maximize Performance. Want to use every ounce of muscle in that workstation with 32 CPU cores and 4 GPU cards? With first-class support for threads, queues, and asynchronous computation, TensorFlow allows you to make the most of your available hardware. Freely assign compute elements of your TensorFlow graph to different devices, and let TensorFlow handle the copies.

Who Can Use TensorFlow?

TensorFlow is for everyone. It’s for students, researchers, hobbyists, hackers, engineers, developers, inventors and innovators and is being open sourced under the Apache 2.0 open source license.

TensorFlow is not complete; it is intended to be built upon and extended. We have made an initial release of the source code, and continue to work actively to make it better. We hope to build an active open source community that drives the future of this library, both by providing feedback and by actively contributing to the source code.

Why Did Google Open Source This?

If TensorFlow is so great, why open source it rather than keep it proprietary? The answer is simpler than you might think: We believe that machine learning is a key ingredient to the innovative products and technologies of the future. Research in this area is global and growing fast, but lacks standard tools. By sharing what we believe to be one of the best machine learning toolboxes in the world, we hope to create an open standard for exchanging research ideas and putting machine learning in products. Google engineers really do use TensorFlow in user-facing products and services, and our research group intends to share TensorFlow implementations along side many of our research publications.

For more details, try this:

TensorFlow tutorials.


Interview (Part 2)

21 March, 2016

Greg Bernhardt runs an excellent website for discussing physics, math and other topics, called Physics Forums. He recently interviewed me there. Since I used this opportunity to explain a bit about the Azimuth Project and network theory, I thought I’d reprint the interview here. Here is Part 2.

 

Tell us about your experience with past projects like “This Week’s Finds in Mathematical Physics”.

I was hired by U.C. Riverside back in 1989. I was lonely and bored, since Lisa was back on the other coast. So, I spent a lot of evenings on the computer.

We had the internet back then—this was shortly after stone tools were invented—but the world-wide web hadn’t caught on yet. So, I would read and write posts on “newsgroups” using a program called a “news server”. You have to imagine me sitting in front of an old green-on-black cathode ray tube monitor with a large floppy disk drive, firing up the old modem to hook up to the internet.

In 1993, I started writing a series of posts on the papers I’d read. I called it “This Week’s Finds in Mathematical Physics”, which was a big mistake, because I couldn’t really write one every week. After a while I started using it to explain lots of topics in math and physics. I wrote 300 issues. Then I quit in 2010, when I started taking climate change seriously.

Share with us a bit about your current projects like Azimuth and the n-Café.

The n-Category Café is a blog I started with Urs Schreiber and the philosopher David Corfield back in 2006, when all three of us realized that n-categories are the big wave that math is riding right now. We have a bunch more bloggers on the team now. But the n-Café lost some steam when I quit work on n-categories and Urs started putting most of his energy into two related projects: a wiki called the nLab and a discussion group called the nForum.

In 2010, when I noticed that global warming was like a huge wave crashing down on our civilization, I started the Azimuth Project. The goal was to create a focal point for scientists and engineers interested in saving the planet. It consists of a team of people, a blog, a wiki and a discussion group. It was very productive for a while: we wrote a lot of educational articles on climate science and energy issues. But lately I’ve realized I’m better at abstract math. So, I’ve been putting more time into working with my grad students.

What about climate change has captured your interest?

That’s like asking: “What about that huge tsunami rushing toward us has captured your interest?”

Around 2004 I started hearing news that sent chills up my spine, and what really worried me was how few people were talking about this news, at least in the US.

I’m talking about how we’re pushing the Earth’s climate out of the glacial cycle we’ve been in for over a million years, into brand new territory. I’m talking about things like how it takes hundreds or thousands of years for CO2 to exit the atmosphere after it’s been put in. And I’m talking about how global warming is just part of a bigger phenomenon: the Anthropocene. That’s a new geological epoch, in which the biosphere is rapidly changing due to human influences. It’s not just the temperature:

• About 1/4 of all chemical energy produced by plants is now used by humans.

• The rate of species going extinct is 100–1000 times the usual background rate.

• Populations of large ocean fish have declined 90% since 1950.

• Humans now take more nitrogen from the atmosphere and convert it into nitrates than all other processes combined.

• 8–9 times as much phosphorus is flowing into the oceans as the natural background rate.

This doesn’t necessarily spell the end of our civilization, but it is something that we’ll all have to deal with.

So, I felt the need to alert people and try to dream up strategies to do something. That’s why in 2010 I quit work on n-categories and started the Azimuth Project.

Carbon Dioxide Variations

You have life experience on both US coasts. Which do you prefer and why?

There are some differences between the coasts, but they’re fairly minor. The West Coast is part of the Pacific Rim, so there’s more Asian influence here. The seasons are less pronounced here, because winds in the northern hemisphere blow from west to east, and the oceans serve as a temperature control system. Down south in Riverside it’s a semi-desert, so we can eat breakfast in our back yard in January! But I live here not because I like the West Coast more. This just happens to be where my wife Lisa and I managed to get a job.

What I really like is getting out of the US and seeing the rest of the world. When you’re at a cremation ritual in Bali, or a Hmong festival in Laos, the difference between regions of the US starts seeming pretty small.

But I wasn’t a born traveler. When I spent my first summer in England, I was very apprehensive about making a fool of myself. The British have different manners, and their old universities are full of arcane customs and subtle social distinctions that even the British find terrifying. But after a few summers there I got over it. First, all around the world, being American gives you a license to be clueless. If you behave any better than the worst stereotypes, people are impressed. Second, I spend most of my time with mathematicians, who are incredibly forgiving of bad social behavior as long as you know interesting theorems.

By now I’ve gotten to feel very comfortable in England. The last couple of years I’ve spent time at the quantum computation group at Oxford, the group run by Bob Coecke and Samson Abramsky. I like talking to Jamie Vicary about n-categories and physics, and also my old friend Minhyong Kim, who is a number theorist there.

I was also very apprehensive when I first visited Paris. Everyone talks about how the waiters are rude, and so on. But I think that’s an exaggeration. Yes, if you go to cafés packed with boorish tourists, the waiters will treat you like a boorish tourist—so don’t do that. If you go to quieter places and behave politely, most people are friendly. Luckily Lisa speaks French and has some friends in Paris; that opens up a lot of opportunities. I don’t speak French, so I always feel like a bit of an idiot, but I’ve learned to cope. I’ve spent a few summers there working with Paul-André Melliès on category theory and logic.

Yau Ma Tei Market - Hong Kong

Yau Ma Tei Market – Hong Kong

I was also intimidated when I first spent a summer in Hong Kong—and even more so when I spent a summer in Shanghai. Lisa speaks Chinese too: she’s more cultured than me, and she drags me to interesting places. My first day walking around Shanghai left me completely exhausted: everything was new! Walking down the street you see people selling frogs in a bucket, strange fungi and herbs, then a little phone shop where telephone numbers with lots of 8’s cost more, and so on: it’s a kind of cognitive assault.

But again, I came to enjoy it. And coming back to California, everything seemed a bit boring. Why is there so much land that’s not being used? Where are all the people? Why is the food so bland?

I’ve spent the most time outside the US in Singapore. Again, that’s because my wife and I both got job offers there, not because it’s the best place in the world. Compared to China it’s rather sterile and manicured. But it’s still a fascinating place. They’ve pulled themselves up from a British colonial port town to a multicultural country that’s in some ways more technologically advanced than the US. The food is great: it’s a mix of Chinese, Indian, Malay and pretty much everything else. There’s essentially no crime: you can walk around in the darkest alley in the worst part of town at 3 am and still feel safe. It’s interesting to live in a country where people from very different cultures are learning to live together and prosper. The US considers itself a melting-pot, but in Singapore they have four national languages: English, Mandarin, Malay and Tamil.

Most of all, it’s great to live in places where the culture and politics is different than where I grew up. But I’m trying to travel less, because it’s bad for the planet.

You’ve gained some fame for your “crackpot index”. What were your motivations for developing it? Any new criteria you’d add?

After the internet first caught on, a bunch of us started using it to talk about physics on the usenet newsgroup sci.physics.

And then, all of a sudden, crackpots around the world started joining in!

Before this, I don’t think anybody realized how many people had their own personal theories of physics. You might have a crazy uncle who spent his time trying to refute special relativity, but you didn’t realize there were actually thousands of these crazy uncles.

As I’m sure you know here at Physics Forums, crackpots naturally tend to drive out more serious conversations. If you have some people talking about the laws of black hole thermodynamics, and some guy jumps in and says that the universe is a black hole, everyone will drop what they’re doing and argue with that guy. It’s irresistible. It reminds me of how when someone brings a baby to a party, everyone will start cooing to the baby. But it’s worse.

When physics crackpots started taking over the usenet newsgroup sci.physics, I discovered that they had a lot of features in common. The Crackpot Index summarizes these common features. Whenever I notice a new pattern, I add it.

For example: if someone starts comparing themselves to Galileo and says the physics establishment is going after them like the Inquisition, I guarantee you that they’re a crackpot. Their theories could be right—but unfortunately, they’ve got delusions of grandeur and a persecution complex.

It’s not being wrong that makes someone a crackpot. Being a full-fledged crackpot is the endpoint of a tragic syndrome. Someone starts out being a bit too confident that they can revolutionize physics without learning it first. In fact, many young physicists go through this stage! But the good ones react to criticism by upping their game. The ones who become crackpots just brush it off. They come up with an idea that they think is great, and when nobody likes it, they don’t say “okay, I need to learn more.” Instead, they make up excuses: nobody understands me, maybe there’s a conspiracy at work, etc. The excuses get more complicated with each rebuff, and it gets harder and harder for them to back down and say “whoops, I was wrong”.

When I wrote the Crackpot Index, I thought crackpots were funny. Alexander Abian claimed all the world’s ills would be cured if we blew up the Moon. Archimedes Plutonium thinks the Universe is a giant plutonium atom. These ideas are funny. But now I realize how sad it is that someone can start with a passion for physics and end up in this kind of trap. They almost never escape.

Who are some of your math and physics heroes of the past and of today?

Wow, that’s a big question! I think every scientist needs to have heroes. I’ve had a lot.

Marie Curie

When I was a kid, I was in love with Marie Curie. I wanted to marry a woman like her: someone who really cared about science. She overcame huge obstacles to get a degree in physics, discovered not one but two new elements, often doing experiments in her own kitchen—and won not one but two Nobel prizes. She was a tragic figure in many ways. Her beloved husband Pierre, a great physicist in his own right, slipped and was run over by a horse-drawn cart, dying instantly when the wheels ran over his skull. She herself probably died from her experiments with radiation. But this made me love her all the more.

Later my big hero was Einstein. How could any physicist not have Einstein as a hero? First he came up with the idea that light comes in discrete quanta: photons. Then, two months later, he used Brownian motion to figure out the size of atoms. One month after that: special relativity, unifying space and time! Three months later, the equivalence between mass and energy. And all this was just a warmup for his truly magnificent theory of general relativity, explaining gravity as the curvature of space and time. He truly transformed our vision of the Universe. And then, in his later years, the noble and unsuccessful search for a unified field theory. As a friend of mine put it, what matters here is not that he failed: what matters is that he set physics a new goal, more ambitious than any goal it had before.

Later it was Feynman. As I mentioned, my uncle gave me Feynman’s Lectures on Physics. This is how I first learned Maxwell’s equations, special relativity, quantum mechanics. His way of explaining things with a minimum of jargon, getting straight to the heart of every issue, is something I really admire. Later I enjoyed his books like Surely You’re Joking, Mr. Feynman! Still later I learned enough to be impressed by his work on QED.

But when you read his autobiographical books, you can see that he was a bit too obsessed with pretending to be a fun-loving ordinary guy. A fun-loving ordinary guy who just happens to be smarter than everyone else. In short, a self-absorbed showoff. He could also be pretty mean to women—and in that respect, Einstein was even worse. So our heroes should not be admired uncritically.

Alexander Grothendieck

A good example is Alexander Grothendieck. I guess he’s my main math hero these days. To solve concrete problems like the Weil conjectures, he avoided brute force techniques and instead developed revolutionary new concepts that gently dissolved those problems. And these new concepts turned out to be much more important than the problems that motivated him. I’m talking about abelian categories, schemes, topoi, stacks, things like that. Everyone who really wants to understand math at a deep level has got to learn these concepts. They’re beautiful and wonderfully simple—but not easy to master. You have to really change your world view to understand them, just like general relativity or quantum mechanics. You have to rewire your neurons.

At his peak, Grothendieck seemed almost superhuman. It seems he worked almost all day and all night, bouncing his ideas off the other amazing French algebraic geometers. Apparently 20,000 pages of his writings remain unpublished! But he became increasingly alienated from the mathematical establishment and eventually disappeared completely, hiding in a village near the Pyrenees.

Which groundbreaking advances in science and math are you most looking forward to?

I’d really like to see progress in figuring out the fundamental laws of physics. Ideally, I’d like to know the Theory of Everything. Of course, we don’t even know that there is one! There could be an endless succession of deeper and deeper realizations to be had about the laws of physics, with no final answer.

If we ever do discover the Theory of Everything, that won’t be the end of the story. It could be just the beginning. For example, next we could ask why this particular theory governs our Universe. Is it necessary, or contingent? People like to chat about this puzzle already, but I think it’s premature. I think we should find the Theory of Everything first.

Unfortunately, right now fundamental physics is in a phase of being “stuck”. I don’t expect to see the Theory of Everything in my lifetime. I’d be happy to see any progress at all! There are dozens of very basic things we don’t understand.

When it comes to math, I expect that people will have their hands full this century redoing the foundations using ∞-categories, and answering some of the questions that come up when you do this. The crowd working on “homotopy type theory” is making good progress, but so far they’re mainly thinking about ∞-groupoids, which are a very special sort of ∞-category. When we do all of math using ∞-categories, it will be a whole new ballgame.

And then there’s the question of whether humanity will figure out a way to keep from ruining the planet we live on. And the question of whether we’ll succeed in replacing ourselves with something more intelligent—or even wiser.

The Milky Way and Andromeda Nebula after their first collision, 4 billion years from now

Here’s something cool: red dwarf stars will keep burning for 10 trillion years. If we, or any civilization, can settle down next to one of those, there will be plenty of time to figure things out. That’s what I hope for.

But some of my friends think that life always uses up resources as fast as possible. So one of my big questions is whether intelligent life will develop the patience to sit around and think interesting thoughts, or whether it will burn up red dwarf stars and every other source of energy as fast as it can, as we’re doing now with fossil fuels.

What does the future hold for John Baez? What are your goals?

What the future holds for me, primarily, is death.

That’s true of all of us—or at least most of us. While some hope that technology will bring immortality, or at least a much longer life, I bet most of us are headed for death fairly soon. So I try to make the most of the time I have.

I’m always re-evaluating what I should do. I used to spend time thinking about quantum gravity and n-categories. But quantum gravity feels stuck, and n-category theory is shooting forward so fast that my help is no longer needed.

Climate change is hugely important, and nobody really knows what to do about it. Lots of people are trying lots of different things. Unfortunately I’m no better than the rest when it comes to the most obvious strategies—like politics, or climate science, or safer nuclear reactors, or better batteries and photocells.

The trick is finding things you can do better than other people. Right now for me that means thinking about networks and biology in a very abstract way. I’m inspired by this remark by Patten and Witkamp:

To understand ecosystems, ultimately will be to understand networks.

So that’s my goal for the next five years or so. It’s probably not the best thing anyone can do to prepare for the Middle Anthropocene. But it may be the best thing I can do: use the math I know to help people understand the biosphere.

It may seem like I keep jumping around: from quantum gravity to n-categories to biology. But I keep wanting to think about networks, and how they change in time.

At some point I hope to retire and become a bit more of a self-indulgent wastrel. I could write a fun book about group theory in geometry and physics, and a fun book about the octonions. I might even get around to spending more time on music!

John Baez in Namo Gorge, Gansu