Diamonds and Triamonds

The structure of a diamond crystal is fascinating. But there’s an equally fascinating form of carbon, called the triamond, that’s theoretically possible but never yet seen in nature. Here it is:


[Figure: the triamond, also known as the \mathrm{K}_4 crystal]

In the triamond, each carbon atom is bonded to three others at 120° angles, with one double bond and two single bonds. The three bonds at any given atom lie in a plane, so we get a plane for each atom.

But here’s the tricky part: for any two neighboring atoms, these planes are different. In fact, if we draw the bond planes for all the atoms in the triamond, they come in four kinds, parallel to the faces of a regular tetrahedron!

If we discount the difference between single and double bonds, the triamond is highly symmetrical. There’s a symmetry carrying any atom and any of its bonds to any other atom and any of its bonds. However, the triamond has an inherent handedness, or chirality. It comes in two mirror-image forms.

A rather surprising thing about the triamond is that the smallest rings of atoms are 10-sided. Each atom lies in 15 of these 10-sided rings.

Some chemists have argued that the triamond should be ‘metastable’ at room temperature and pressure: that is, it should last for a while but eventually turn to graphite. Diamonds are also considered metastable, though I’ve never seen anyone pull an old diamond ring from their jewelry cabinet and discover to their shock that it’s turned to graphite. The big difference is that diamonds are formed naturally under high pressure—while triamonds, it seems, are not.

Nonetheless, the mathematics behind the triamond does find its way into nature. A while back I told you about a minimal surface called the ‘gyroid’, which is found in many places:

• The physics of butterfly wings.

It turns out that the pattern of a gyroid is closely connected to the triamond! So, if you’re looking for a triamond-like pattern in nature, certain butterfly wings are your best bet:

• Matthias Weber, The gyroids: algorithmic geometry III, The Inner Frame, 23 October 2015.

Instead of trying to explain it here, I’ll refer you to the wonderful pictures at Weber’s blog.

Building the triamond

I want to tell you a way to build the triamond. I saw it here:

• Toshikazu Sunada, Crystals that nature might miss creating, Notices of the American Mathematical Society 55 (2008), 208–215.

This is the paper that got people excited about the triamond, though it was discovered much earlier by the crystallographer Fritz Laves back in 1932, and Coxeter named it the Laves graph.

To build the triamond, we can start with this graph:


[Figure: the graph \mathrm{K}_4, with colored vertices and edges labeled by vectors, from Sunada’s paper]

It’s called \mathrm{K}_4, since it’s the complete graph on four vertices, meaning there’s one edge between each pair of vertices. The vertices correspond to four different kinds of atoms in the triamond: let’s call them red, green, yellow and blue. The edges of this graph have arrows on them, labelled with certain vectors

e_1, e_2, e_3, f_1, f_2, f_3 \in \mathbb{R}^3

Let’s not worry yet about what these vectors are. What really matters is this: to move from any atom in the triamond to any of its neighbors, you move along the vector labeling the edge between them… or its negative, if you’re moving against the arrow.

For example, suppose you’re at any red atom. It has 3 nearest neighbors, which are blue, green and yellow. To move to the blue neighbor you add f_1 to your position. To move to the green one you subtract e_2, since you’re moving against the arrow on the edge connecting red and green. Similarly, to go to the yellow neighbor you subtract the vector f_3 from your position.

Thus, any path along the bonds of the triamond determines a path in the graph \mathrm{K}_4.

Conversely, if you pick an atom of some color in the triamond, any path in \mathrm{K}_4 starting from the vertex of that color determines a path in the triamond! However, going around a loop in \mathrm{K}_4 may not get you back to the atom you started with in the triamond.

Mathematicians summarize these facts by saying the triamond is a ‘covering space’ of the graph \mathrm{K}_4.

Now let’s see if you can figure out those vectors.

Puzzle 1. Find vectors e_1, e_2, e_3, f_1, f_2, f_3 \in \mathbb{R}^3 such that:

A) All these vectors have the same length.

B) The three vectors coming out of any vertex lie in a plane at 120° angles to each other:


[Figure: the graph \mathrm{K}_4 again, with colored vertices and labeled edges]

For example, f_1, -e_2 and -f_3 lie in a plane at 120° angles to each other. We put in two minus signs because two arrows are pointing into the red vertex.

C) The four planes we get this way, one for each vertex, are parallel to the faces of a regular tetrahedron.

If you want, you can even add another constraint:

D) All the components of the vectors e_1, e_2, e_3, f_1, f_2, f_3 are integers.

Diamonds and hyperdiamonds

That’s the triamond. Compare the diamond:


Here each atom of carbon is connected to four others. This pattern is found not just in carbon but also in other elements in the same column of the periodic table: silicon, germanium, and tin. They all like to hook up with four neighbors.

The pattern of atoms in a diamond is called the diamond cubic. It’s elegant but a bit tricky. Look at it carefully!

To build it, we start by putting an atom at each corner of a cube. Then we put an atom in the middle of each face of the cube. If we stopped there, we would have a face-centered cubic. But there are also four more carbons inside the cube—one at the center of each tetrahedron we’ve created.

If you look really carefully, you can see that the full pattern consists of two interpenetrating face-centered cubic lattices, one offset relative to the other along the cube’s main diagonal.

The face-centered cubic is the 3-dimensional version of a pattern that exists in any dimension: the \mathrm{D}_n lattice. To build this, take an n-dimensional checkerboard and alternately color the hypercubes red and black. Then, put a point in the center of each black hypercube!

You can also get the \mathrm{D}_n lattice by taking all n-tuples of integers that sum to an even integer. Requiring that they sum to something even is a way to pick out the black hypercubes.

The diamond is also an example of a pattern that exists in any dimension! I’ll call this the hyperdiamond, but mathematicians call it \mathrm{D}_n^+, because it’s the union of two copies of the \mathrm{D}_n lattice. To build it, first take all n-tuples of integers that sum to an even integer. Then take all those points shifted by the vector (1/2, …, 1/2).
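
Here is a tiny Python sketch of this definition (the function names are mine, just for illustration): a point lies in the hyperdiamond \mathrm{D}_n^+ if it lies in \mathrm{D}_n, or lands in \mathrm{D}_n after subtracting (1/2, …, 1/2).

    from fractions import Fraction

    def in_Dn(x):
        # x lies in the D_n lattice iff all coordinates are integers with an even sum
        return all(xi.denominator == 1 for xi in x) and sum(x) % 2 == 0

    def in_hyperdiamond(x):
        # x lies in D_n^+ iff x is in D_n, or x - (1/2,...,1/2) is in D_n
        half = Fraction(1, 2)
        return in_Dn(x) or in_Dn([xi - half for xi in x])

    half = Fraction(1, 2)
    print(in_hyperdiamond((1, 1, 0)))            # True: integer coordinates, even sum
    print(in_hyperdiamond((1, 0, 0)))            # False: the coordinates sum to something odd
    print(in_hyperdiamond((half, half, half)))   # True: a point of the shifted copy of D_3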

In any dimension, the volume of the unit cell of the hyperdiamond is 1, so mathematicians say it’s unimodular. But only in even dimensions is the sum or difference of any two points in the hyperdiamond again a point in the hyperdiamond. Mathematicians call a discrete set of points with this property a lattice.

If even dimensions are better than odd ones, how about dimensions that are multiples of 4? Then the hyperdiamond is better still: it’s an integral lattice, meaning that the dot product of any two vectors in the lattice is again an integer.

And in dimensions that are multiples of 8, the hyperdiamond is even better. It’s even, meaning that the dot product of any vector with itself is even.
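
Here is a quick numerical check of these claims in 8 dimensions, as a numpy sketch. The matrix below is one standard choice of basis for \mathrm{D}_8^+: seven vectors of \mathrm{D}_8 together with the glue vector (1/2, …, 1/2).

    import numpy as np

    B = np.array([
        [ 2,  0,  0,  0,  0,  0,  0,  0],
        [-1,  1,  0,  0,  0,  0,  0,  0],
        [ 0, -1,  1,  0,  0,  0,  0,  0],
        [ 0,  0, -1,  1,  0,  0,  0,  0],
        [ 0,  0,  0, -1,  1,  0,  0,  0],
        [ 0,  0,  0,  0, -1,  1,  0,  0],
        [ 0,  0,  0,  0,  0, -1,  1,  0],
        [.5, .5, .5, .5, .5, .5, .5, .5],
    ])

    G = B @ B.T                              # Gram matrix of the basis
    assert np.allclose(G, np.round(G))       # integral: all dot products are integers
    assert np.all(np.diag(G) % 2 == 0)       # even: each basis vector has even dot product with itself
    assert np.isclose(np.linalg.det(G), 1)   # unimodular: the unit cell has volume 1
    print("D_8^+ is an even unimodular integral lattice")

    # By contrast, in 3 dimensions (1/2,1/2,1/2) + (1/2,1/2,1/2) = (1,1,1) has odd
    # coordinate sum, so D_3^+ is not even closed under addition.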

In fact, even unimodular lattices are only possible in Euclidean space when the dimension is a multiple of 8. In 8 dimensions, the only even unimodular lattice is the 8-dimensional hyperdiamond, which is usually called the \mathrm{E}_8 lattice. The \mathrm{E}_8 lattice is one of my favorite entities, and I’ve written a lot about it in this series:

• Integral octonions.

To me, the glittering beauty of diamonds is just a tiny hint of the overwhelming beauty of \mathrm{E}_8.

But let’s go back down to 3 dimensions. I’d like to describe the diamond rather explicitly, so we can see how a slight change produces the triamond.

It will be less stressful if we double the size of our diamond. So, let’s start with a face-centered cubic consisting of points whose coordinates are even integers summing to a multiple of 4. That consists of these points:

(0,0,0)   (2,2,0)   (2,0,2)   (0,2,2)

and all points obtained from these by adding multiples of 4 to any of the coordinates. To get the diamond, we take all these together with another face-centered cubic that’s been shifted by (1,1,1). That consists of these points:

(1,1,1)   (3,3,1)   (3,1,3)   (1,3,3)

and all points obtained by adding multiples of 4 to any of the coordinates.
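
As a quick check of this description, here is a numpy sketch that rebuilds a block of these points and verifies that every atom has exactly 4 nearest neighbors, at distance \sqrt{3} and at the tetrahedral angle \arccos(-1/3) \approx 109.5°:

    import numpy as np
    from itertools import product

    base = np.array([(0,0,0), (2,2,0), (2,0,2), (0,2,2)])
    cell = np.vstack([base, base + 1])               # the two face-centered cubics
    shifts = np.array(list(product([-4, 0, 4], repeat=3)))
    atoms = np.vstack([cell + s for s in shifts])    # a 3x3x3 block of unit cells

    for a in cell:                                   # the 8 atoms of the central cell
        d = atoms - a
        bonds = d[(d**2).sum(axis=1) == 3]           # nearest neighbors, at distance sqrt(3)
        assert len(bonds) == 4
        cosines = [bonds[i] @ bonds[j] / 3 for i in range(4) for j in range(i+1, 4)]
        assert np.allclose(cosines, -1/3)            # about 109.47 degrees between bonds
    print("every diamond atom has 4 neighbors at tetrahedral angles")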

The triamond is similar! Now we start with these points

(0,0,0)   (1,2,3)   (2,3,1)   (3,1,2)

and all the points obtained from these by adding multiples of 4 to any of the coordinates. To get the triamond, we take all these together with another copy of these points that’s been shifted by (2,2,2). That other copy consists of these points:

(2,2,2)   (3,0,1)   (0,1,3)   (1,3,0)

and all points obtained by adding multiples of 4 to any of the coordinates.

Unlike the diamond, the triamond has an inherent handedness, or chirality. You’ll note how we used the point (1,2,3) and took cyclic permutations of its coordinates to get more points. If we’d started with (3,2,1) we would have gotten the other, mirror-image version of the triamond.
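
The same kind of check works for the triamond coordinates above (again a numpy sketch): every atom should have exactly 3 nearest neighbors, at distance \sqrt{2}, lying in a plane at 120° angles.

    import numpy as np
    from itertools import product

    base = np.array([(0,0,0), (1,2,3), (2,3,1), (3,1,2)])
    cell = np.vstack([base, (base + 2) % 4])         # include the copy shifted by (2,2,2)
    shifts = np.array(list(product([-4, 0, 4], repeat=3)))
    atoms = np.vstack([cell + s for s in shifts])    # a 3x3x3 block of unit cells

    for a in cell:                                   # the 8 atoms of the central cell
        d = atoms - a
        bonds = d[(d**2).sum(axis=1) == 2]           # nearest neighbors, at distance sqrt(2)
        assert len(bonds) == 3
        assert not bonds.sum(axis=0).any()           # the three bond vectors sum to zero...
        cosines = [bonds[i] @ bonds[j] / 2 for i in range(3) for j in range(i+1, 3)]
        assert np.allclose(cosines, -1/2)            # ...so they are coplanar, at 120° angles
    print("every triamond atom has 3 coplanar neighbors at 120° angles")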

Covering spaces

I mentioned that the triamond is a ‘covering space’ of the graph \mathrm{K}_4. More precisely, there’s a graph T whose vertices are the atoms of the triamond, and whose edges are the bonds of the triamond. There’s a map of graphs

p: T \to \mathrm{K}_4

This automatically means that every path in T is mapped to a path in \mathrm{K}_4. But what makes T a covering space of \mathrm{K}_4 is that any path in \mathrm{K}_4 can be lifted to a path in T, and the lift is unique once we choose its starting point.

If you’re a high-powered mathematician you might wonder if T is the universal covering space of \mathrm{K}_4. It’s not, but it’s the universal abelian covering space.

What does this mean? Any path in \mathrm{K}_4 gives a sequence of vectors drawn from e_1, e_2, e_3, f_1, f_2, f_3 and their negatives. If we pick a starting point in the triamond, this sequence describes a unique path in the triamond. When does this path get you back where you started? The answer, I believe, is this: if and only if you can take your sequence, rewrite it using the commutative law, and cancel like terms to get zero. This is related to how adding vectors in \mathbb{R}^3 is a commutative operation.


[Figure: the graph \mathrm{K}_4 again, with colored vertices and labeled edges]

For example, there’s a loop in \mathrm{K}_4 that goes “red, blue, green, red”. This gives the sequence of vectors

f_1, -e_3, e_2

We can turn this into an expression

f_1 - e_3 + e_2

However, we can’t simplify this to zero using just the commutative law and cancelling like terms. So, if we start at some red atom in the triamond and take the unique path that goes “red, blue, green, red”, we do not get back where we started!

Note that in this simplification process, we’re not allowed to use what the vectors “really are”. It’s a purely formal manipulation.
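
Here is a small Python sketch of this purely formal test. The arrow directions below are the ones consistent with the moves described above (and with Greg Egan's solution to Puzzle 1 in the comments); if you read the figure's arrows differently, only some signs change, not the test itself.

    from collections import Counter

    # Arrows on K_4: label -> (tail color, head color)
    arrows = {
        'e1': ('green', 'yellow'), 'e2': ('green', 'red'),   'e3': ('green', 'blue'),
        'f1': ('red', 'blue'),     'f2': ('blue', 'yellow'), 'f3': ('yellow', 'red'),
    }

    def edge_word(colors):
        # turn a walk in K_4, given as a list of vertex colors, into signed edge labels
        word = []
        for a, b in zip(colors, colors[1:]):
            for label, (tail, head) in arrows.items():
                if (tail, head) == (a, b):
                    word.append((label, +1)); break
                if (tail, head) == (b, a):
                    word.append((label, -1)); break
            else:
                raise ValueError(f"no edge between {a} and {b}")
        return word

    def lifts_to_loop(colors):
        # a closed walk in K_4 lifts to a loop in the triamond iff every edge label
        # has net exponent zero after commuting and cancelling
        assert colors[0] == colors[-1]
        net = Counter()
        for label, sign in edge_word(colors):
            net[label] += sign
        return all(v == 0 for v in net.values())

    print(edge_word(['red', 'blue', 'green', 'red']))      # [('f1', 1), ('e3', -1), ('e2', 1)]
    print(lifts_to_loop(['red', 'blue', 'green', 'red']))  # False: this loop does not close up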

Puzzle 2. Describe a loop of length 10 in the triamond using this method. Check that you can simplify the corresponding expression to zero using the rules I described.

A similar story works for the diamond, but starting with a different graph:


[Figure: the graph for the diamond, from Sunada’s paper: two vertices joined by four edges]

The graph formed by a diamond’s atoms and the edges between them is the universal abelian cover of this little graph! This graph has 2 vertices because there are 2 kinds of atom in the diamond. It has 4 edges because each atom has 4 nearest neighbors.

Puzzle 3. What vectors should we use to label the edges of this graph, so that the vectors coming out of any vertex describe how to move from that kind of atom in the diamond to its 4 nearest neighbors?

There’s also a similar story for graphene, which is a hexagonal array of carbon atoms in a plane:


[Figure: graphene]

Puzzle 4. What graph with edges labelled by vectors in \mathbb{R}^2 should we use to describe graphene?

I don’t know much about how this universal abelian cover trick generalizes to higher dimensions, though it’s easy to handle the case of a cubical lattice in any dimension.

Puzzle 5. I described higher-dimensional analogues of diamonds: are there higher-dimensional triamonds?

References

The Wikipedia article is good:

• Wikipedia, Laves graph.

They say this graph has many names: the K4 crystal, the (10,3)-a network, the srs net, the diamond twin, and of course the triamond. The name triamond is not very logical: while each carbon has 3 neighbors in the triamond, each carbon has not 2 but 4 neighbors in the diamond. So, perhaps the diamond should be called the ‘quadriamond’. In fact, the word ‘diamond’ has nothing to do with the prefix ‘di-‘ meaning ‘two’. It’s more closely related to the word ‘adamant’. Still, I like the word ‘triamond’.

This paper describes various attempts to find the Laves graph in chemistry:

• Stephen T. Hyde, Michael O’Keeffe, and Davide M. Proserpio, A short history of an elusive yet ubiquitous structure in chemistry, materials, and mathematics, Angew. Chem. Int. Ed. 47 (2008), 7996–8000.

This paper does some calculations arguing that the triamond is a metastable form of carbon:

• Masahiro Itoh et al., New metallic carbon crystal, Phys. Rev. Lett. 102 (2009), 055703.

Abstract. Recently, mathematical analysis clarified that sp2 hybridized carbon should have a three-dimensional crystal structure (\mathrm{K}_4) which can be regarded as a twin of the sp3 diamond crystal. In this study, various physical properties of the \mathrm{K}_4 carbon crystal, especially for the electronic properties, were evaluated by first principles calculations. Although the \mathrm{K}_4 crystal is in a metastable state, a possible pressure induced structural phase transition from graphite to \mathrm{K}_4 was suggested. Twisted π states across the Fermi level result in metallic properties in a new carbon crystal.

The picture of the \mathrm{K}_4 crystal was placed on Wikicommons by someone named ‘Workbit’, under a Creative Commons Attribution-Share Alike 4.0 International license. The picture of the tetrahedron was made using Robert Webb’s Stella software and placed on Wikicommons. The pictures of graphs come from Sunada’s paper, though I modified the picture of \mathrm{K}_4. The moving image of the diamond cubic was created by H.K.D.H. Bhadeshia and put into the public domain on Wikicommons. The picture of graphene was drawn by Dr. Thomas Szkopek and put into the public domain on Wikicommons.

129 Responses to Diamonds and Triamonds

  1. amarashiki says:

    John, perhaps you will be interested that I asked about a more general topic… or not here ;) http://chemistry.stackexchange.com/questions/19475/hydrocarbons-with-only-4-carbon-atoms
    BTW, I asked that problem to some advanced students at high school… I do know where the triamond is in all the structures, but there is other mysterious stuff…

  2. Greg Egan says:

    Puzzle 1

    e_1 = (-1,-1,0)
    e_2 = (0,1,-1)
    e_3 = (1,0,1)
    f_1 = (-1,1,0)
    f_2 = (0,1,1)
    f_3 = (-1,0,1)

    The outgoing vectors at the green vertex are:

    e_1 = (-1,-1,0)
    e_2 = (0,1,-1)
    e_3 = (1,0,1)

    These are of equal length, and sum to zero, which means they must be coplanar and all separated by 120°. They are all orthogonal to:

    n_G = (1,-1,-1)

    The outgoing vectors at the yellow vertex are:

    -e_1 = (1,1,0)
    -f_2 = (0,-1,-1)
    f_3 = (-1,0,1)

    These meet the same conditions, and are orthogonal to:

    n_Y = (-1,1,-1)

    The outgoing vectors at the red vertex are:

    -e_2 = (0, -1,1)
    f_1 =  (-1,1,0)
    -f_3 = (1,0,-1)

    These meet the same conditions, and are orthogonal to:

    n_R = (1,1,1)

    The outgoing vectors at the blue vertex are:

    -e_3 = (-1,0,-1)
    f_2 =  (0,1,1)
    -f_1 = (1,-1,0)

    These meet the same conditions, and are orthogonal to:

    n_B = (-1,-1,1)

    Finally, the four normals to the planes associated with the four vertices:

    n_G = (1,-1,-1)
    n_Y = (-1,1,-1)
    n_R = (1,1,1)
    n_B = (-1,-1,1)

    all have equal lengths and mutual dot products of -1, so they are the normals to the faces of a regular tetrahedron.
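
    Here is a quick numerical re-check of this solution, as a numpy sketch; it just re-verifies conditions A, B and C:

        import numpy as np

        e1, e2, e3 = np.array([-1,-1,0]), np.array([0,1,-1]), np.array([1,0,1])
        f1, f2, f3 = np.array([-1,1,0]),  np.array([0,1,1]),  np.array([-1,0,1])

        # the outgoing triples at the green, yellow, red and blue vertices
        triples = {'green': [e1, e2, e3], 'yellow': [-e1, -f2, f3],
                   'red': [-e2, f1, -f3], 'blue': [-e3, f2, -f1]}

        # A) all six vectors have the same length
        assert all(v @ v == 2 for v in [e1, e2, e3, f1, f2, f3])

        normals = {}
        for color, (a, b, c) in triples.items():
            # B) each triple sums to zero and has equal pairwise dot products,
            # so it lies in a plane at 120° angles
            assert ((a + b + c) == 0).all()
            assert a @ b == b @ c == a @ c == -1
            normals[color] = np.cross(a, b)

        # C) the four bond planes: their normals (up to sign) have equal lengths and
        # pairwise dot products of equal magnitude, so the planes are parallel to the
        # faces of a regular tetrahedron
        ns = list(normals.values())
        assert all(n @ n == 3 for n in ns)
        assert all(abs(ns[i] @ ns[j]) == 1 for i in range(4) for j in range(i+1, 4))
        print("conditions A, B and C all hold")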

    • John Baez says:

      Right! I thought you might like this one.

      The descriptions I’ve read don’t emphasize the tetrahedron, but that seems like the right way to understand the triamond. Here’s how I think about it now.

      We can take a cube with vertices (\pm 1, \pm 1, \pm 1) and inscribe two regular tetrahedra in it.

      Pick the one that contains the point (1,1,1), as you’ve done. Now we want to find 3 vectors in this plane that are at 120° to each other. It helps to know that the points (x,y,z) with integer coordinates summing to zero form a triangular lattice, lying in a plane orthogonal to (1,1,1). So use the vectors

      (0,-1,1)
      (-1,1,0)
      (1,0,-1)

      which are at 120° angles since they have length squared 2 and dot product -1 with each other. I could have also used the negatives of these 3 vectors, but I’m deliberately making the same choice as you.

      Now, we can use the rotational symmetries of the tetrahedron to carry this triple of vectors to triples that are orthogonal to the other vertices of the tetrahedron.

      However, I’m not sure that’s the right method. There’s a sign issue to consider. After all, we could have used the negatives of the 3 vectors above! More importantly, we need the vector pointing from the red vertex of the tetrahedron to the blue one (for example) to be minus the vector pointing from the blue vertex to the red one.

      So, I will have to rotate the tetrahedron so that the red vertex (1,1,1) gets carried to another vertex, say the blue one (-1,-1,1), and see what this does to the 3 vectors listed above.

      One last thing: you made a couple of choices to get your solution (which tetrahedron inscribed in the cube, which triple of vectors to start with at one vertex). In the end, there should be just two solutions, which give mirror-image versions of the triamond!

    • John Baez says:

      So, I will have to rotate the tetrahedron so that the red vertex (1,1,1) gets carried to another vertex, say the blue one (-1,-1,1), and see what this does to the 3 vectors listed above.

      Luckily this is easy: this rotation just negates the first two coordinates. So it carries the outgoing vectors at the red vertex, namely

      (0,-1,1)
      (-1,1,0)
      (1,0,-1)

      to the vectors

      (0,1,1)
      (1,-1,0)
      (-1,0,-1)

      And these are indeed your outgoing vectors at the blue vertex!

      So, after choosing a triple of outgoing vectors for one vertex of the tetrahedron, we just rotate them to get the triples for the other vertices. But the interesting thing is that our initial choice has a handedness.

  3. jessemckeown says:

    Re: Universal Abelian Covering Space… the image of \pi_1 T \hookrightarrow \pi_1 K_4 should be the commutator subgroup; \mathbb{Z}^3 is its Deck Transformations (which is transitive on “fibers” because the Commutator Subgroup is Normal)…

    So, you’ve told us that there’s a 4-colouring of T, but in connection with the suppressed double-bonds, I’m wondering is T in fact bipartite, like graphene?

    Also, is there a predicted X-ray crystalogram for this gem?

    • jessemckeown says:

      Oh, I can answer bipartite: yes, because commutators are made of an even number of oriented edges possibly canceling in pairs, so loops in T must be even.

    • John Baez says:

      Jesse wrote, with more Capital Letters:

      Re: Universal abelian covering space… the image of \pi_1 T \hookrightarrow \pi_1 \mathrm{K}_4 should be the commutator subgroup; \mathbb{Z}^3 is its deck transformations (which is transitive on “fibers” because the commutator subgroup is normal)…

      Good, right! I had said some silly stuff, mixing up subgroups and quotient groups as I often do when dealing with covering spaces and Galois theory. I realized my mistake while eating breakfast, then ran back and deleted it. But now I will add a correct explanation, and a new puzzle.

      So, you’ve told us that there’s a 4-colouring of T, but in connection with the suppressed double-bonds, I’m wondering is T in fact bipartite, like graphene?

      That’s an interesting question.

      The graphene graph is bipartite because it’s a covering space, in fact the universal abelian cover, of a graph that’s bipartite. Finding that graph is Puzzle 4.

      Similarly, the diamond graph is bipartite because it’s a covering space, in fact the universal abelian cover, of this bipartite graph:

      The triamond graph T doesn’t have this reason for being bipartite. It’s the universal abelian cover of \mathrm{K}_4, which is not bipartite:

      The triamond graph inherits a 4-coloring from the 4 colors of vertices shown here, and it inherits single and double bonds from the two kinds of edges shown here. However, each vertex of any color is connected by edges to vertices of all the other 3 colors.

      This doesn’t prove the triamond graph is not bipartite! Indeed, while \mathrm{K}_4 has lots of cycles of odd length — forbidden in a bipartite graph — the shortest cycles in the triamond graph have length 10.

      So, my answer to your question is “I don’t know.”

      (Meanwhile, you have answered it: “yes”.)

      Also, is there a predicted X-ray crystallogram for this gem?

      I don’t know that either!

  4. Think diamond is fascinating? How about the zincblende crystal structure? That one is partly responsible for the high-speed optoelectronics advances made over the last few decades.

    Zincblende is precisely a diamond structure but for one substitution rule. It also forms spontaneously with no pressure required.

    • John Baez says:

      I’m not sure what “one substitution rule” means. I just checked, and the zincblende crystal structure looks just like diamond, except we have alternating zinc and sulfur atoms:

      Maybe that’s what “one substitution rule” means.

      I mentioned that in diamond we have “two kinds of atoms” — more precisely, atoms lying in two separate face-centered cubic lattices. Diamond has a symmetry carrying one of these lattices to another. But in zincblende one lattice is zinc atoms and the other is sulfur!

      Zinc sulfide also comes in another form, called wurtzite, which has hexagonal rather than cubic symmetry:

      Is there a “universal abelian covering” description of this one?

      • Column III + Column V elements for zincblende and Column IV for diamond.

        Watch what happens when you have a free surface on [100], [111], or [110] orientations.

      • There is a possibility of creating a direct-bandgap lattice out of a column IV diamond structure. Add tin to germanium and one can potentially create an infrared laser. Years ago, I was the first to successfully create a metastable SnGe lattice, which we confirmed via x-ray crystallography:


        Applied Physics Letters 54 (1989), 2142–2144.

        Since that time others have made progress in improving the opto-electronic properties of this material.

        Who knows what kind of interesting electronic properties would arise with these hypothetical lattice structures.

      • Walter Blackstock says:

        A.F. Wells, in addition to his modest The Third Dimension in Chemistry (1956), wrote a massive textbook on Structural Inorganic Chemistry (1962, OUP) that may be worth dipping into. Some content may be available on Google Books, but you can never tell how much! (GB always truncates you on the interesting page). With luck, you may find discussion of diamond-type structures around p. 120.

  5. arch1 says:

    I was confused until it sunk in that you said each vertex in K4 represents a kind of atom, not an atom of a specific kind, so that e.g. edge f2 notwithstanding, the blue and yellow triamond atoms bonded to a given red are not bonded to each other. (You also said the smallest ring was 10 atoms, so I have no good excuse:-).

    • John Baez says:

      I spent a good 10 minutes debating whether to add this sentence:

      Conversely, if you pick an atom of some color in the triamond, any path in \mathrm{K}_4 starting from the vertex of that color determines a path in the triamond! However, going around a loop in \mathrm{K}_4 may not get you back to the atom you started with in the triamond.

      I couldn’t tell whether it would enlighten people or confuse them. You’ve convinced me to add it!

  6. Walter Blackstock says:

    An interesting post: I would not be surprised to find some way of K4 production if some exotic demand arose (say akin to N-induced vacancies in diamond). Indeed, how one would recognise K4 admixed with other carbon allotropes might be the initial challenge. An X-ray diffraction fingerprint perhaps, but that’s so obvious it must be well-studied.

    On a minor wikipedian point, the attribution of the K4 image as being “drawn by ‘Workbit'” may be misleading as Sunada on p6 in this book attributes the same image to Kayo Sunada. Copyright on images is a minefield. Sunada also has a 2012 “Lecture on topological crystallography” which has some interesting background for non-mathematicians like myself. Search on Google Scholar for an accessible pre-print.

    • John Baez says:

      Thanks! So it’s possible that Workbit just stole this image and put it on Wikicommons here, calling it “own work”.

      I’ll guess that Kayo Sunada is a relative of Toshikazu Sunada, who studied and popularized the \mathrm{K}_4 crystal.

  7. jessemckeown says:

    Re Puzzle 5 (unfinished)
    In trying to build a small bit of T, starting from the covering space description, I found I needed to use the fact that every edge of a tetrahedron is disjoint from a unique other edge; that particular fact doesn’t work in any other dimension.

    On the other hand, K5 certainly has a universal abelian cover, whose deck transformations should be free abelian of rank … (5*4/2)- 5 + 1 = 6. (hmm… that’s also the dimension of O(4)…)

    • John Baez says:

      At the end of his fascinating paper, Sunada writes:

      It is a challenging problem to list all crystal lattices with the strong isotropic property in general dimension. A typical example is the d-dimensional diamond lattice, a generalization of the hexagonal and diamond lattices, defined as the maximal abelian covering graph over the finite graph with two vertices and d+1 multiple edges joining them. The maximal abelian covering graph over the complete graph \mathrm{K}_n also gives an example, whose dimension is (n-1)(n-2)/2.

      This is the one you’re talking about for n = 5.

      The ‘strong isotropic property’ is defined earlier in his paper. If you’ve got a graph embedded in \mathbb{R}^n, he says it’s strongly isotropic if the group of Euclidean symmetries preserving the graph acts transitively on flags, where a flag is a vertex and an edge incident to that vertex.

      An abstract graph having symmetries that act transitively on flags is called a symmetric graph, and there’s a whole literature on these (though maybe this focuses on finite graphs).

      So, any strongly isotropic graph embedded in \mathbb{R}^n has to be symmetric.

  8. John Baez says:

    I have an idea for Puzzle 5. Let’s look at the universal abelian covers of some nice graphs, namely those coming from Platonic solids. We got the triamond from the tetrahedron, but this could be an example of a systematic procedure that works for other examples!

    The vertices and edges of a cube form a graph which looks like this if you flatten it out:

    This graph has 5 independent loops. In other words, its fundamental group is the free group on 5 generators. Its universal abelian cover should thus live in 5 dimensions.

    How do we define this? Imagine 8 kinds of atoms, one for each vertex of the cube. Then, figure out vectors that describe how to hop from one kind of atom to a neighboring atom of a different kind. Each atom will have 3 neighbors.

    How do we figure out these vectors?

    Label the directed edges of the cube with vectors. In other words, draw arrows on the edges, and label them with vectors—but decree that the vector changes sign if we change our minds about which way the arrow points.

    Demand that the vectors labeling the 3 edges pointing out of each vertex of the cube sum to zero. This is like Kirchhoff’s current law: it says the total current flowing into each vertex equals the total current flowing out. However, now current is a vector, not a scalar.

    How much choice do we have in picking vectors like this?

    The cube has 12 edges and 8 vertices. So, we have to choose 12 vectors, but impose 8 equations among them.

    That sounds like ultimately we’re picking 12 – 8 = 4 vectors. But that’s wrong, because not all the equations are independent! You can derive the last equation from the rest! So, we’re really picking 5 vectors.

    Here’s one way to see this: think of our graph as an electrical circuit, but where current is vector-valued. If we impose Kirchhoff’s current law at every vertex except one, it must also hold at the last one.

    Another way to see it is to remember that our graph has 5 independent loops. Each one gives an independent quantity, which electrical engineers would call a mesh current.

    Another way to see it is this: since the fundamental group of our graph is the free group on 5 generators, its 1st homology group, which is the abelianization of the fundamental group, is \mathbb{Z}^5. (In fact this is the same idea as the electrical engineering argument, just phrased in another language.)

    Anyway, the upshot is this. The task of choosing vectors for edges which sum to zero at each vertex amounts to choosing 5 vectors.

    We can make them linearly independent if we use a 5-dimensional vector space. So, we are getting ready to build a graph, say C, embedded in 5d space. This graph will be the universal abelian cover of the graph shown above.

    We’d like to choose the vectors in a very nice symmetrical way. At the very least, the symmetries of the cube should act as symmetries of our graph C—and not just symmetries of it as an abstract graph, but as a graph embedded in 5-dimensional Euclidean space.

    Hmm, I thought I knew how to do this, but now I don’t.

    So that’s a nice challenge. If we succeed we’ll get a very nice crystal in 5 dimensions where each atom has 3 neighbors.

    We can also try this for the other Platonic solids:

    The tetrahedron gives the triamond graph T, which lives in 3 dimensions, because the tetrahedron has 4 faces—or if you prefer, it has 6 edges and 4 vertices, and 6 – 4 + 1 = 3. In the corresponding crystal, each atom has 3 neighbors.

    The cube gives a graph C which lives in 5 dimensions, because the cube has 6 faces—or if you prefer, it has 12 edges and 8 vertices, and 12 – 8 + 1 = 5. The challenge is to find the most symmetrical version of this graph C. In the corresponding crystal, each atom has 3 neighbors.

    The octahedron gives a graph O which lives in 7 dimensions, because the octahedron has 8 faces—or if you prefer, it has 12 edges and 6 vertices, and 12 – 6 + 1 = 7. The challenge is to find the most symmetrical version of this graph O. In the corresponding crystal, each atom has 4 neighbors.

    The dodecahedron gives a graph D which lives in 11 dimensions, because the dodecahedron has 12 faces—or if you prefer, it has 30 edges and 20 vertices, and 30 – 20 + 1 = 11. The challenge is to find the most symmetrical version of this graph D. In the corresponding crystal, each atom has 3 neighbors.

    The icosahedron gives a graph I which lives in 19 dimensions, because the icosahedron has 20 faces—or if you prefer, it has 30 edges and 12 vertices, and 30 – 12 + 1 = 19. The challenge is to find the most symmetrical version of this graph I. In the corresponding crystal, each atom has 5 neighbors.

    Of course we can also play this game starting with other polyhedra or polytopes.

    For example, the universal abelian cover of buckminsterfullerene should give a purely theoretical crystal form of carbon in 31 dimensions, where each atom has 3 neighbors: two connected with single bonds, and one connected with a double bond.
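
    Here is a trivial Python check of these dimension counts (V, E, F are the vertex, edge and face counts; the crystal's dimension is E - V + 1, which equals F - 1 by Euler's formula):

        solids = [("tetrahedron", 4, 6, 4), ("cube", 8, 12, 6), ("octahedron", 6, 12, 8),
                  ("dodecahedron", 20, 30, 12), ("icosahedron", 12, 30, 20),
                  ("buckminsterfullerene", 60, 90, 32)]

        for name, V, E, F in solids:
            assert V - E + F == 2                                # Euler's formula on the sphere
            print(name, "gives a crystal in dimension", E - V + 1, "=", F - 1)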

    • John Baez says:

      I wrote:

      We’d like to choose the vectors in a very nice symmetrical way. At the very least, the symmetries of the cube should act as symmetries of our graph C—and not just symmetries of it as an abstract graph, but as a graph embedded in 5-dimensional Euclidean space.

      Hmm, I thought I knew how to do this, but now I don’t.

      Okay, I think I get it now. This should work, not just for the cube, but for all the examples. But let me describe it for the cube.

      We start by arbitrarily directing each edge in the cube—that is, drawing arrows on them. Then the space \mathbb{R}^{12} consists of ways to label directed edges by numbers. The symmetries of the cube act as linear transformations of this space. The recipe is pretty obvious, I hope—except for one thing. We just need to remember that if we map an edge to an edge in a way that reverses its direction, we stick a minus sign in front of the number labeling that edge! In other words, we’re thinking of \mathbb{R}^{12} as the space of ‘electrical currents on edges of the cube’.

      If we give \mathbb{R}^{12} its usual inner product, the symmetries of the cube act in a way that preserves this inner product.

      Then, let V \subseteq \mathbb{R}^{12} be the subspace consisting of currents that obey Kirchhoff’s current law. As we’ve seen, this is 5-dimensional. We can use our inner product to define a projection

      p : \mathbb{R}^{12} \to V

      Next, for each directed edge i = 1, \dots, 12, let e_i \in \mathbb{R}^{12} be the corresponding basis vector, using the standard basis of \mathbb{R}^{12}.

      Then, let

      v_i = p(e_i) \in V

      So, for each directed edge of the cube we get a vector v_i in our 5-dimensional space V, and these vectors obey Kirchhoff’s current law. That is, when we take the 3 edges directed outward from any vertex in the cube, the corresponding vectors v_i sum to zero.

      The inner product on \mathbb{R}^{12} gives an inner product on V, and since I haven’t broken the symmetry at any stage of this construction, all the vectors v_i will have the same length.

      Now, following my previously described recipe, we can build a crystal in 5 dimensions which has 8 kinds of atoms—one for each vertex of the cube. Each atom will have 3 neighbors joined to it by bonds. These bonds will all be the same length, and they will lie at 120° angles from each other.

      Since I haven’t spoiled the symmetry at any stage of this construction, the symmetry group of the cube will act on this crystal. Moreover, this crystal—which is really a graph embedded in 5d Euclidean space—will be the universal abelian cover of the graph coming from a cube.
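
      Here is a numpy sketch of this construction for the cube. (The ordering of vertices and edges, and the use of a pseudoinverse to build the projection, are just my bookkeeping choices.)

          import numpy as np

          # vertices of the cube = 3-bit strings; edges join vertices differing in one bit
          verts = range(8)
          edges = [(v, v ^ (1 << k)) for v in verts for k in range(3) if v < v ^ (1 << k)]
          assert len(edges) == 12

          # incidence matrix: +1 at the tail of each directed edge, -1 at the head
          A = np.zeros((8, 12))
          for j, (a, b) in enumerate(edges):
              A[a, j], A[b, j] = 1, -1

          # orthogonal projection of R^12 onto V = currents obeying Kirchhoff's law = null(A)
          P = np.eye(12) - np.linalg.pinv(A) @ A
          assert np.linalg.matrix_rank(P) == 5          # dim V = 12 - 8 + 1 = 5

          v = [P[:, j] for j in range(12)]              # v_i = p(e_i), one vector per edge
          lengths = [np.linalg.norm(x) for x in v]
          assert lengths[0] > 0 and np.allclose(lengths, lengths[0])   # equal, nonzero lengths

          # at each vertex, the 3 outgoing vectors sum to zero and meet at 120° angles
          for a in verts:
              out = [A[a, j] * v[j] for j in range(12) if A[a, j] != 0]
              assert len(out) == 3 and np.allclose(sum(out), 0)
              cosines = [out[i] @ out[j] / lengths[0]**2 for i in range(3) for j in range(i+1, 3)]
              assert np.allclose(cosines, -0.5)
          print("a 5-dimensional crystal: equal bond lengths, 120° angles at every atom")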

    • John Baez says:

      I see two things that could go wrong with this construction.

      First, we might have v_i = 0 for some i, and thus for all i, due to the symmetry. This would be a major bummer, but I don’t think it happens.

      Second, the graph C that we build in 5 dimensions might have vertices that are dense in 5-dimensional space. This would make it not like a ‘crystal’, but it could still be somewhat interesting. I don’t see how to rule out this scenario without some computations. In fact, this scenario reminds me of how we can build quasiperiodic tilings by projecting a hypercubic lattice from n dimensions down to 2 dimensions and then doing some other stuff:

      • Greg Egan, de Bruijn.

    • Greg Egan says:

      What if you picked a simplex centred at the origin in \mathbb{R}^5 and arbitrarily associated each vertex with a face of the cube? Using these six vectors as the vector-valued mesh currents associated with the (oriented) faces of the cube, you then get a vector in \mathbb{R}^5 for each directed edge, as the difference between the vectors associated with the two faces incident on that edge.

      I think that might give the same result as your construction, but it would circumvent the need to work in \mathbb{R}^{12}. The symmetries of the cube would act in \mathbb{R}^5 by permuting the vertices of the simplex.

      Does that sound right, or have I misunderstood something? In any case, I can try to figure out what kind of graph this produces in \mathbb{R}^5.

      • John Baez says:

        Interesting idea! Maybe that will give the same result, or maybe it will give a different, more symmetrical graph in \mathbb{R}^5 that has all permutations of a 6-element set as symmetries, rather than merely the symmetries of the cube. The only bad thing I can imagine about getting a more symmetrical graph is that maybe its vertices will be dense in \mathbb{R}^5, which isn’t what I’d like for the atoms in a crystal—or, less romantically speaking, a sphere packing, possibly a rather loose sphere packing.

        Speaking of sphere packings, I just saw George Hart’s picture of the ‘Heesch–Laves loose-packing’ of spheres:

        on a page about structures related to the triamond:

        • George W. Hart, The (10,3)-a network.

        The (10,3)-a network is another name for the triamond graph. I don’t know where this name comes from.

        The Heesch–Laves loose-packing was, at least for a time, the least dense packing known of equal-sized spheres in \mathbb{R}^3 such that each sphere is unable to move if we hold its neighbors fixed. It has a density of about 0.05. For more information see:

        • Joseph O’Rourke, Are there locally jammed arrangements of spheres of zero density?, MathOverflow.

        • arch1 says:

          1) Naively eyeballing this, I don’t see any spheres which look immobile if their neighbors are fixed. I suppose hidden or omitted (beyond-cell-boundary) spheres could account for all of this, but it seems unlikely (it takes 4 contact points, not all in the same hemisphere, to fix a sphere, right?)
          2) This kind of thing seems like it could make a compelling museum exhibit (perhaps not a hands-on one, though:-)

      • John Baez says:

        Oh, I was being a bit silly: I thought nothing in your construction would break the symmetry down from the permutation group on 6 letters to the symmetry group of the cube, because you’re picking your 6 mesh currents in a way that doesn’t mention the cube. But when you define currents for the cube’s edges, you’re using the way the 6 faces of the cube are incident to its 12 edges. So this could easily break the symmetry down to that of a cube. So it’s possible your construction gives the same result as mine, though I don’t feel sure.

      • Greg Egan says:

        If I find the vector-valued edge currents by projecting the standard basis of \mathbb{R}^{12} into the 5-dimensional subspace of vectors that obey Kirchhoff’s law at all the vertices, and then solve for the vector-valued mesh currents in the same 5-dimensional subspace, then, although the edge currents all have the same lengths, and the mesh currents all have the same lengths … the mesh currents don’t lie at the vertices of a 5-simplex.

        But there might be another choice of basis of \mathbb{R}^{12} that yields a 5-simplex.

        Using the method starting with the mesh currents at the vertices of a 5-simplex, I did a million-step random walk on the cube’s edge graph and lifted it into \mathbb{R}^5. The resulting points look like a discrete set, with none of them closer to each other than the graph’s edge length.
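
        Here is a Python sketch of this kind of experiment, with my assumptions spelled out: the mesh currents are the 6 standard basis vectors of \mathbb{R}^6, which after subtracting their common mean become the vertices of a regular 5-simplex in the hyperplane where the coordinates sum to zero; the means cancel in every difference, so the edge vectors and all lifted points are integer vectors, and the convention for orienting the cube's faces is mine.

            import numpy as np
            from itertools import product
            import random

            AX = np.eye(3, dtype=int)
            verts = [np.array(v) for v in product([-1, 1], repeat=3)]         # cube vertices in {-1,1}^3
            faces = [(axis, sign) for axis in range(3) for sign in (-1, 1)]   # outward normal = sign*AX[axis]
            mesh = np.eye(6, dtype=int)          # one mesh-current vector per face, kept in R^6

            def step_vector(a, b):
                # edge vector for the step a -> b: difference of the mesh currents of the two
                # faces meeting along that edge, signed by the faces' boundary orientations
                k = int(np.nonzero(b - a)[0][0])       # the axis along which we move
                d = (b - a) // 2                       # unit step direction
                plus = minus = None
                for f, (axis, sign) in enumerate(faces):
                    if axis != k and a[axis] == sign:        # a face containing this edge
                        n = sign * AX[axis]                  # its outward normal
                        m = 3 - axis - k                     # the remaining axis
                        inward = -a[m] * AX[m]               # from the edge into the face
                        if np.cross(n, d) @ inward > 0:      # boundary orientation agrees with d
                            plus = f
                        else:
                            minus = f
                return mesh[plus] - mesh[minus]

            # sanity check: Kirchhoff's law holds at every corner, so the labeling is consistent
            for a in verts:
                total = np.zeros(6, dtype=int)
                for k in range(3):
                    b = a.copy(); b[k] = -b[k]
                    total += step_vector(a, b)
                assert not total.any()

            # lift a random walk on the cube graph into the hyperplane (coordinates sum to zero)
            random.seed(0)
            here, pos = verts[0], np.zeros(6, dtype=int)
            points = {tuple(pos)}
            for _ in range(2000):
                k = random.randrange(3)
                there = here.copy(); there[k] = -there[k]
                pos = pos + step_vector(here, there)
                here = there
                points.add(tuple(pos))

            pts = np.array(sorted(points))
            G = pts @ pts.T
            sq = np.diag(G)
            d2 = sq[:, None] + sq[None, :] - 2 * G
            np.fill_diagonal(d2, 10**9)
            print(len(pts), "distinct lifted points; min squared distance between them:", d2.min())
            # the edge length squared is 2, and no two distinct points come closer than that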

      • John Baez says:

        Excellent! Since they’re not bunching up in a nasty way, I suspect the vertices of this graph, call it C, lie in some lattice L in \mathbb{R}^5. That’s how it works with the triamond: we can start with the lattice of points with integer coordinates, and take ‘half’ of these points in a periodic way.

        So, I’d like to figure out the lattice L. Since you built everything starting with the 5-simplex, and the symmetry group of the 5-simplex is the Coxeter group \mathrm{A}_5, the obvious guess is the \mathrm{A}_5 lattice.

        I think the coordinates of the vertices of a 5-simplex in \mathbb{R}^5 involve some irrational numbers, since the angle between any two vertices is \arccos (-1/5). Irrational numbers make me a bit queasy when I’m trying to show something has a periodic pattern.

        Luckily, we can work abstractly and simply let L be the lattice consisting of integer linear combinations of the vertices v_1, \dots, v_6 of the 5-simplex. The last vertex is minus the sum of the rest, so this is really a lattice.

        These vectors v_i are your ‘mesh currents’. Your edge currents are certain differences of pairs of these. So yes, the vertices of your graph C will clearly form a subset of the lattice L.

        (Is it really that easy? When I started writing this I thought it would be harder.)

        The next question is: which subset? Or at least: what fraction of the points of L are in this subset, and what is the periodicity of this subset?

        The triamond vertices are a subset of the lattice of points with integer coordinates, containing 1/8 of the points. This subset has periodicity 4 in each coordinate direction, but we can also add (2,2,2) to a triamond vertex and get another triamond vertex. In other words, we can add (\pm 2, \pm 2, \pm 2) to any triamond vertex and get another triamond vertex.
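
        Here is a tiny Python check of these last claims about the triamond, as a sketch over one 4×4×4 period:

            from itertools import product

            base = [(0,0,0), (1,2,3), (2,3,1), (3,1,2)]
            residues = set(base) | {tuple((x + 2) % 4 for x in p) for p in base}

            assert len(residues) == 8            # 8 of the 4^3 = 64 residues: 1/8 of all integer points

            shifted = {tuple((x + 2) % 4 for x in p) for p in residues}
            assert shifted == residues           # adding (2,2,2) is a translation symmetry

            for axis in range(3):                # but shifting by 2 along a single axis is not
                s = [0, 0, 0]; s[axis] = 2
                moved = {tuple((p[i] + s[i]) % 4 for i in range(3)) for p in residues}
                assert moved != residues
            print("density 1/8, and (2,2,2) maps the triamond to itself")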

      • Greg Egan says:

        If we take any loop in the cube’s edge graph that visits all 8 vertices, and sum the associated edge vectors around the whole loop, we should get a vector in \mathbb{R}^5 that, when added to any vertex of the covering graph, will take us to another vertex of the covering graph. In fact, adding such a vector should always take us to another vertex of the same colour (in the sense that it covers the same vertex of the cube).

        I think the set of all such vectors should form a lattice. If you go around any loop an integral number of times (going backwards for negative integers), that corresponds to multiplying the associated vector by that integer. And you can “add” any two of these all-vertex loops by switching between them at any vertex, which will correspond to adding the associated vectors.

        It’s a bit tricky finding a basis for this lattice. By looping around the two faces in each of the 3 pairs of opposite faces of the cube, either in the same direction or in opposite directions, and then stitching those two loops together by going back and forth along any edge that joins them (which cancels out that edge’s vector, since you traverse it both ways), you can systematically get 6 loops that visit every vertex, and any 5 will give linearly independent vectors.

        But you can get another set of 5 vectors by a different approach. If you take two identically-positioned “U”-shaped paths on opposite faces of the cube, you can then join the free ends of the “U”s, making an eight-edged loop that visits all 8 vertices without backtracking. There are 12 ways to do this (3 choices of the pair of faces that have the “U”s, then 4 choices for the way you orient the “U”s), but of course there can only be 5 linearly independent vectors arising from them.

        Now, neither set of 5 vectors lies wholly in the lattice spanned by the other set of 5. So unless I’m confused, the lattice we need will be the union of both lattices. A basis for the union can be found by taking the union of the bases and reducing it to Hermite Normal Form, which is the analog of reduced row echelon form over the integers.

        If I’m right about all of this, I end up with a basis with a determinant that is 2304 times that of the basis for L, the lattice of integer-linear combinations of the 5-simplex vertices. But we have 8 disjoint lattices like this, one for each of the 8 vertex colours. So I’m guessing that 1 in 2304/8 = 288 points of L are vertices of the covering graph.

      • Greg Egan says:

        Ah, I just realised that there’s a much simpler way to get the basis for the loops that visit all 8 vertices. You can just loop around any single face of the cube, but then add self-cancelling detours from each of the four vertices of that face to the closest vertex on the opposite face.

        It almost seems like cheating, but these really are loops that visit all 8 vertices, despite the fact that the final vector is a sum of just four edge vectors. And if you pick any 5 faces of the cube, the 5 vectors you get really do give you a basis for the lattice I mentioned previously.

        This also makes for a more persuasive case that the previous analysis didn’t miss any points when trying to calculate the density of the covering graph vertices in the lattice L. I argued that any vector that arises from an 8-vertex loop will take you between two identically-coloured vertices of the covering graph … but what about loops that visit less than 8 vertices? They should also give vectors that take you between vertices of the covering graph, so long as you’re starting from a vertex whose colour is one that was included in the loop.

        But since looping around a single face of the cube, visiting just 4 vertices, yields the same vector as a loop that visits all 8 vertices, the vectors from those smaller loops are automatically part of the same lattice, and there are no extra points to be counted.

      • Greg Egan says:

        As a cross-check for the methods I’ve used with the cube, I applied the same approach to the tetrahedron, and it did yield the known results for the triamond.

        There’s one potentially confusing wrinkle: with the choice of scale we’ve been using for the triamond, the 3-simplex of mesh currents gives a lattice L whose fundamental domains have a volume half that of the lattice with integer coordinates. The lattice of vectors that take you between identically coloured vertices of the covering graph has a density of 1 in 64 points of L, and given the four colours for the four vertices of the tetrahedron, that means 1 in 16 points of L are vertices of the covering graph. But since L is twice as dense as the lattice with integer coordinates, we end up with the required density for the covering graph of 1 in 8 points with integer coordinates.

      • John Baez says:

        Greg wrote:

        If we take any loop in the cube’s edge graph that visits all 8 vertices, and sum the associated edge vectors around the whole loop, we should get a vector in \mathbb{R}^5 that, when added to any vertex of the covering graph, will take us to another vertex of the covering graph. In fact, adding such a vector should always take us to another vertex of the same colour (in the sense that it covers the same vertex of the cube).

        I was confused by this, because I didn’t think it was necessary for the path to visit all 8 vertices. I thought any loop would be okay.

        But now maybe I see what you mean. Let’s call the covering graph in \mathbb{R}^5 the crystal. Let’s call a vertex of this graph an atom, and save vertex to mean ‘vertex of the cube’. There’s a map from atoms to vertices. For each vertex c of the cube, I’ll call the atoms that map to it atoms of color c.

        For each vertex c in the cube and any loop starting at the vertex, we get a vector in \mathbb{R}^5 that when added to any atom of color c takes us to another atom of color c.

        But you were looking for vectors that when added to any atom give another atom of the same color. For this you chose loops that visit all 8 vertices. You never quite explained why, but I think I see why: these loops can be viewed as starting at any vertex of the cube, and we get the same vector regardless.

        At first you thought this condition of visiting all 8 vertices was a serious restriction. But then you decided it was not:

        Ah, I just realised that there’s a much simpler way to get the basis for the loops that visit all 8 vertices. You can just loop around any single face of the cube, but then add self-cancelling detours from each of the four vertices of that face to the closest vertex on the opposite face.

        The upshot is that any loop in the cube gives a vector which when added to any atom takes us to another atom of the same color.

        So, while the atoms in our crystal do not form a lattice, we can determine a lattice L that acts on our crystal as translation symmetries. This lattice consists of integer linear combinations of vectors coming from loops in the cube.

        And as you note, there’s nothing special about the cube here! The arguments are very general and should apply to all the cases I mentioned, and others too. I’m glad you checked the tetrahedron.

      • Greg Egan says:

        John wrote:

        The upshot is that any loop in the cube gives a vector which when added to any atom takes us to another atom of the same color.

        Right, and it took me a while to grasp that! It seems counter-intuitive at first that the vector from a red-green-blue-red loop can be added to a yellow atom to take you to another yellow atom. It’s only once I realised that a loop of any size gives the same vector as some other loop that visits every vertex (and hence can be thought of as a path that starts and ends at any vertex) that the lattice of translation symmetries made sense.

  9. domenico says:

    I am thinking that the structure of the triamond is similar to the structure of benzene: there is a double bond and two single bonds, so there may be a structure with a resonance.
    I am thinking that the triamond could be a phase of carbon at the right temperature and pressure, so its spectrum could be observed in carbon stars of certain masses.

    • John Baez says:

      That would be great! As you may know, there was a controversy over whether a certain carbon-rich ‘super-Earth’ planet was made of diamond:

      • Megan Gannon, Diamond super-planet may not be so glam, Space.com, 13 October 2013.

      There are also discussions of liquid carbon oceans with diamond icebergs on Neptune and Uranus:

      • Eric Bland, Diamond oceans possible on Uranus, Neptune, Discovery.com News, 15 January 2010.

      So besides carbon-rich black dwarf stars there are a variety of places where triamonds may lurk—if they are ever stable under any conditions.

  10. John Baez says:

    Let me try to summarize and generalize some of Greg’s and my discoveries.

    Consider a graph G drawn on a compact connected oriented surface. So, we have a set V of vertices, a set E of edges and a set F of faces, which are polygons. The faces are oriented in a consistent way, and let’s arbitrarily choose an orientation for each edge.

    A good example would be a Platonic solid, or this graph with 24 vertices, 84 edges and 56 triangles that Greg drew on Klein’s quartic curve:


    The graph G has a universal abelian cover:

    p: \tilde{G} \to G

    and it seems we have a nice way of building this cover and embedding it in \mathbb{R}^{|F|-1}, where |F| is the number of faces.

    To do this, we choose vectors u_f in \mathbb{R}^{|F|-1}, one for each face f \in F. We demand that they are linearly independent except for one relation: they sum to zero. The nicest way is to choose these vectors to be vertices of a regular simplex in \mathbb{R}^{|F|-1}.

    Then we define a vector u_e for each edge e \in E to be the difference of vectors for the two faces that e touches. The orientation we’ve chosen for the edge e will be consistent with the orientation of one of these faces, say f_1, and it will go against the orientation of the other face, say f_2. So, we define

    u_e = u_{f_1} - u_{f_2}.

    We write e^{-1} for the edge e equipped with the opposite orientation, and set

    u_{e^{-1}} = - u_e

    Now, we define \tilde{G} as follows. We arbitrarily choose a vertex v_0 \in G. For any path of edges e_1, e_2, \dots, e_n starting at v_0 and ending at some vertex v, we get a vector

    u_{e_1} + \cdots + u_{e_n} \in \mathbb{R}^{|F|-1}

    Here each edge e_i can be either e or e^{-1} for some e \in E. In other words, we can build paths using edges that either follow the direction we originally chose, or go backwards.

    Each vector we get this way will be a vertex in \tilde{G}, and our covering map

    p: \tilde{G} \to G

    will send it to the vertex v, the endpoint of the path.

    We decree that two vertices x, y \in \tilde{G} are connected by an edge iff there is an edge in G between p(x) and p(y).

    Note that if the edge e \in E goes from the vertex p(x) to the vertex p(y), then

    y - x = u_e \in \mathbb{R}^{|F|-1}

    Let \tilde{V} be the set of vertices of the covering graph \tilde{G}, and let \tilde{E} be the set of edges of \tilde{G}.

    The set \tilde{V} is not usually a lattice: for example, we can get the pattern of atoms in graphene, a diamond or a triamond by this recipe.

    However, there’s a lattice L \subset  \mathbb{R}^{|F|-1} generated by all the vectors u_f labelling faces. And Greg’s argument shows that L acts by translations on \tilde{V}.

    [EDIT: No, we should let L be the lattice generated by the vectors

    w_f = u_{e_1} + \cdots + u_{e_n} \in \mathbb{R}^{|F|-1}

    where e_1, \dots, e_n are the edges forming a loop going around the face f in the direction that matches the orientation of that face.]

    In interesting cases we will have a finite group \Gamma acting as symmetries of our oriented surface, preserving the graph G. For example, this could be the symmetry group of a Platonic solid, or the 168-element group of symmetries of the graph on Klein’s quartic curve.

    In this case, we can define for each group element a linear transformation of \mathbb{R}^{|F|-1} that permutes the face vectors u_f. This gives a representation of \Gamma on \mathbb{R}^{|F|-1} which preserves the covering graph \tilde{G}, and also the lattice L.

    Suppose the group \Gamma acts in a flag-transitive way on the graph G — that is, mapping any flag (vertex-edge pair, with the vertex incident to the edge) to any other flag. This happens in the highly symmetrical situations I keep using as examples.

    Then I believe the semidirect product \Gamma \ltimes L acts in a flag-transitive way on \tilde{G}.

    The most interesting case is when \Gamma acts as orthogonal transformations of \mathbb{R}^{|F|-1}. Then \Gamma \ltimes L will act as Euclidean transformations: that is, transformations that preserve angles and distances. I believe we can achieve this by choosing the face vectors u_f to be vertices of a regular simplex. Then any permutation of the faces will define an orthogonal transformation of \mathbb{R}^{|F|-1}.

    So, we get a bunch of examples of extremely symmetrical higher-dimensional crystals, which are not lattices! Sunada calls a graph in \mathbb{R}^n strongly isotropic if there are Euclidean transformations acting on it in a flag-transitive way. We’re getting a lot of examples of these.

    • Greg Egan says:

      Thanks for writing this great summary! But I don’t think you meant to say this part:

      However, there’s a lattice L \subset  \mathbb{R}^{|F|-1} generated by all the vectors u_f labelling faces. And Greg’s argument shows that L acts by translations on \tilde{V}.

      The lattice L defined this way contains translations that won’t preserve \tilde{V}. The lattice of translations that act on \tilde{V} is a sublattice of L generated by the vectors associated with loops in G. Even though we can think of each face as defining a loop, the sum of the edge vectors around each face is not the same thing as u_f, the “mesh current” associated with that face!

      • John Baez says:

        Hmm, I thought it was, but I don’t see any reason it should be.

        This confusion of mine feels connected to my confusion here:

        2) I’m not seeing where exactly we use the fact that the vectors u_f sum to zero! This is annoying.

    • John Baez says:

      A few more points:

      1) I didn’t prove all the claims here, e.g. the claim that \tilde{G} is the universal abelian cover of G.

      2) I’m not seeing where exactly we use the fact that the vectors u_f sum to zero! This is annoying.

      3) I think there should be a fairly simple formula for the density of the vertices \tilde{V} inside the lattice L: that is, the fraction of lattice points that are vertices of the covering graph \tilde{G}. It should depend only on the original graph G and the surface it’s drawn on.

    • Greg Egan says:

      On (2), if the “mesh currents” u_f don’t sum to zero, you don’t get a consistent system of linear equations for the edge vectors.

      On (3), the simplest formula I can give right now for the fraction \rho of points in the lattice L that belong to the set of vertices of the covering graph is:

      \displaystyle{\rho = \frac{|V|}{|\mathrm{det}(\mathcal{L})|}}

      where |V| is the number of vertices of the graph G, and \mathcal{L} is the matrix of coordinates for a set of |F|-1 loop vectors associated with all but one face of the graph, expressed in a basis \{u_f\} of “mesh currents” for the same faces.

      I don’t know if there’s a simpler way to describe that determinant. Of course it doesn’t depend on the particular vectors we choose for the roles of \{u_f\} in \mathbb{R}^{|F|-1}, and I believe it’s an invariant of G alone, at least up to a sign. But I don’t know a slicker way to compute it than solving for the edge vectors and summing them to get the loop vectors.

      • John Baez says:

        Greg wrote:

        On (2), if the “mesh currents” u_f don’t sum to zero, you don’t get a consistent system of linear equations for the edge vectors.

        That’s what I felt, but the formula saying the edge current is a difference of mesh currents seems perfectly well-defined regardless of what those mesh currents are. I think something else bad happens if the mesh currents don’t sum to zero.

      • Greg Egan says:

        John wrote:

        the formula saying the edge current is a difference of mesh currents seems perfectly well-defined regardless of what those mesh currents are.

        You’re right, sorry. The bad thing that happens is that the edge currents won’t obey Kirchhoff’s law at every vertex unless the mesh currents sum to zero. There are only |F|-1 free parameters in the solution space for Kirchhoff’s law, so if you want to solve for edge currents obeying that law by using mesh currents, you’re only free to choose |F|-1 of them.

      • John Baez says:

        Greg wrote:

        The bad thing that happens is that the edge currents won’t obey Kirchhoff’s law at every vertex unless the mesh currents sum to zero.

        Right. This mattered to me a lot when I started developing these ideas using the analogy to circuits, but I’m not sure what it does for us when it comes to building a symmetrical crystal.

        Of course, one thing it does is ensure that the vectors from any atom to its officially designated neighbors sum to zero. This is a nice property, but not something I’d require in a crystal. More important is that the lengths of these edges are all equal: this is required if we want an edge-transitive symmetry group.

        (By the ‘officially designated neighbors’ of an atom, or vertex in \tilde{G}, I mean those connected to it by edges in \tilde{G}. These are not necessarily the nearest vertices.)

        Another thing it may do is this. We have two collections of vectors, which I managed to mix up earlier:

        1) The mesh currents u_f, one for each face.

        2) The vectors of the form

        u_{e_1} + \cdots + u_{e_n}

        where e_1, \dots, e_n are the edges going clockwise around a face f.

        Each of these collections generates a lattice. The second lattice is contained in the first. I have a feeling that if the mesh currents sum to zero, there’s some nice relation between these two lattices. I’ll need to think about this… right now I need to get more coffee.

      • Greg Egan says:

        John wrote:

        I have a feeling that if the mesh currents sum to zero, there’s some nice relation between these two lattices.

        The relationship in general, for the |F| loop vectors \mathcal{L}_{i} in terms of the mesh currents u_f, is:

        \displaystyle{\mathcal{L}_{i} = \sum_{f=1}^{|F|} \mathrm{Lap}_{i, f} u_f}

        where \mathrm{Lap} is the graph Laplacian of the dual graph, and I’m using the sign convention that this is equal to the degree matrix minus the adjacency matrix.

        When the mesh currents sum to zero, we can rewrite, say, the last mesh current as minus the sum of all the others. We then have:

        \displaystyle{\mathcal{L}_{i} = \sum_{f=1}^{|F|-1} (\mathrm{Lap}_{i, f} - \mathrm{Lap}_{i, |F|}) u_f}

      • Greg Egan says:

        If the mesh currents don’t sum to zero, and we just use |F| generic vectors for the u_f, then I don’t think any of the things that we’ve been describing as lattices will still be lattices: the u_f won’t generate a lattice, and nor will the \mathcal{L}_i. So there’ll be no guarantee that all the atoms lie in a subset of a lattice, or that there will be a lattice of translation symmetries for the crystal.

        So I guess the most generic possibility where we still get lattices is to choose |F|-1 of the u_f generically, and then choose the last one to be any vector in the lattice generated by the others.

      • John Baez says:

        Greg wrote:

        If the mesh currents don’t sum to zero, and we just use |F| generic vectors for the u_f, then I don’t think any of the things that we’ve been describing as lattices will still be lattices […]

        If we drop the demand that the u_f sum to zero, I was planning to take them to be linearly independent vectors in \mathbb{R}^{|F|}, not vectors in \mathbb{R}^{|F|-1}. That way they’ll generate a lattice. Is there something bad about this?

        I guess one thing bad about it is that this approach doesn’t cover the motivating examples: graphene, diamond and triamond. But there might be something more intrinsically bad about it!

        I’m eager to understand your new ideas about Laplacians, but I’m a bit hung up on this basic issue.

      • Greg Egan says:

        If you go up a dimension to \mathbb{R}^{|F|} and use |F| linearly independent mesh currents u_f, then the |F| loop vectors \mathcal{L}_i will sum to zero, because every column of the graph Laplacian sums to zero.

        Because the loop vectors generate the translational symmetries, this means that every atom of a given colour in the crystal will lie in a single hyperplane, and all these hyperplanes will be parallel. So the crystal won’t quite be degenerate, but it will have a finite extent in one dimension.

      • John Baez says:

        Okay, great! That’s sufficiently strange that I’m happy to accept the constraint that the mesh currents sum to zero: then we get a ‘crystal’ that lies in (|F|-1)-dimensional space and also extends infinitely in all directions in this space, with a lattice of translational symmetries.

    • Greg Egan says:

      You can get the matrix \mathcal{L} of coordinates for the loop vectors in a mesh current basis by doing a bit of manipulation on the adjacency matrix for faces of the graph. This removes the need to solve explicitly for the edge vectors, if all you want is the density of the covering graph vertices in the lattice generated by the mesh currents.

      In what follows, I’m assuming that no two faces share more than a single edge.

      Define \mathcal{A} as the matrix with \mathcal{A}_{i,j} equal to 1 whenever the ith face of the graph shares an edge with the jth face of the graph, and 0 otherwise.

      Define n_i to be the number of edges of the ith face.

      We then have, for i,j \in \{1,...,|F|-1\}:

      \displaystyle{\mathcal{L}_{i j} = \mathcal{A}_{i,|F|} +  \begin{cases} n_i & i = j \\  -\mathcal{A}_{i,j} & i \neq j \end{cases}}
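
      Here’s a quick numerical sketch of this recipe, assuming Python with numpy (the code and the name mesh_matrix are just for illustration), which builds \mathcal{L} for the tetrahedron and for the cube and prints its determinant:

          import numpy as np

          def mesh_matrix(A):
              """Given the |F| x |F| face-adjacency matrix A (A[i,j] = 1 if faces i and j
              share an edge), build the (|F|-1) x (|F|-1) matrix with entries
              A[i, last] + (n_i if i == j else -A[i,j]), where n_i = number of edges of face i."""
              F = A.shape[0]
              n = A.sum(axis=1)          # n_i = number of neighbouring faces = edges of face i
              L = np.zeros((F - 1, F - 1))
              for i in range(F - 1):
                  for j in range(F - 1):
                      L[i, j] = A[i, F - 1] + (n[i] if i == j else -A[i, j])
              return L

          # Tetrahedron: every pair of its 4 faces shares an edge.
          A_tet = np.ones((4, 4)) - np.eye(4)

          # Cube: faces (0,1), (2,3), (4,5) are the opposite pairs; all other pairs share an edge.
          A_cube = np.ones((6, 6)) - np.eye(6)
          for a, b in [(0, 1), (2, 3), (4, 5)]:
              A_cube[a, b] = A_cube[b, a] = 0

          for name, A, V in [("tetrahedron", A_tet, 4), ("cube", A_cube, 8)]:
              d = int(round(np.linalg.det(mesh_matrix(A))))
              print(name, "det =", d, " rho =", V, "/", d)
          # tetrahedron det = 64    rho = 4 / 64
          # cube        det = 2304  rho = 8 / 2304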

    • Greg Egan says:

      I think the determinant of my matrix \mathcal{L} is actually just the product of the non-zero eigenvalues of the graph Laplacian of the dual graph!

      By “dual graph” I mean a graph with a node for every face in our original graph G, and an edge connecting two nodes whenever the original faces shared an edge.

      The graph Laplacian of a graph is a matrix whose diagonal entries are the valence of the nodes, with -1 in the jth column of the ith row when nodes i and j are connected by an edge, and 0 otherwise.

      The matrix \mathcal{L} is very closely related to the graph Laplacian for the dual graph: to get \mathcal{L}, you take the graph Laplacian and add a row of ones to every row that has a -1 in its last column. This makes the last column zero (except in the last row). You then drop both the last column and the last row, and you have the matrix \mathcal{L}.

      Why is the determinant of \mathcal{L} equal to the product of the non-zero eigenvalues of the graph Laplacian? I’ve checked that this is true for the tetrahedron’s dual graph and the cube’s dual graph, but maybe it’s better just to state something that I’m more confident is true in general, and which is actually a bit easier to compute.

      If you take the graph Laplacian and replace the last row with a row of 1s, then the determinant will be unchanged by the elementary row operation of adding that row to all the rows that have a -1 in the last column. If you do this, you end up with zero everywhere in the last column, except in the last row, where it’s 1. The determinant of the whole matrix is then just the determinant of the block that omits the last row and column, which is none other than the matrix \mathcal{L}.

      So, the determinant of \mathcal{L} is the determinant of the matrix obtained by replacing the last row of the graph Laplacian of the dual graph with a row of 1s.
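
      Here is a small numerical check of both statements (a sketch assuming numpy; this is just one way to do it): take the dual graph’s Laplacian, replace its last row with 1s, take the determinant, and compare with the product of the nonzero eigenvalues.

          import numpy as np

          def laplacian(A):
              # degree matrix minus adjacency matrix
              return np.diag(A.sum(axis=1)) - A

          def det_with_row_of_ones(Lap):
              # replace the last row of the Laplacian with 1s and take the determinant
              M = Lap.copy()
              M[-1, :] = 1
              return np.linalg.det(M)

          def product_of_nonzero_eigenvalues(Lap):
              ev = np.linalg.eigvalsh(Lap)
              return np.prod(ev[ev > 1e-9])

          # Dual graphs: the tetrahedron (self-dual) and the octahedron (dual of the cube).
          A_tet = np.ones((4, 4)) - np.eye(4)
          A_oct = np.ones((6, 6)) - np.eye(6)
          for a, b in [(0, 1), (2, 3), (4, 5)]:      # opposite vertices of the octahedron
              A_oct[a, b] = A_oct[b, a] = 0

          for name, A in [("tetrahedron", A_tet), ("octahedron", A_oct)]:
              Lap = laplacian(A)
              print(name,
                    int(round(abs(det_with_row_of_ones(Lap)))),      # abs, to be safe about signs
                    int(round(product_of_nonzero_eigenvalues(Lap))))
          # Both columns agree: 64 for the tetrahedron, 2304 for the octahedron.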

      • John Baez says:

        This sounds really cool, but I’m having trouble understanding why it’s true. My trouble starts back here:

        […] the simplest formula I can give right now for the fraction \rho of points in the lattice L that belong to the set of vertices of the covering graph is:

        \displaystyle{\rho = \frac{|V|}{|\mathrm{det}(\mathcal{L})|}}

        where |V| is the number of vertices of the graph G, and \mathcal{L} is the matrix of coordinates for a set of |F|-1 loop vectors associated with all but one face of the graph, expressed in a basis \{u_f\} of “mesh currents” for the same faces.

        I don’t see why this is true.

      • Greg Egan says:

        I said that

        \displaystyle{\rho = \frac{|V|}{|\mathrm{det}(\mathcal{L})|}}

        where \rho is the fraction of points in the lattice L that are “atoms” of the crystal, |V| is the number of vertices of the graph G, and \mathcal{L} is the matrix of coordinates for a set of |F|-1 loop vectors associated with all but one face of the graph, expressed in a basis \{u_f\} of “mesh currents” for the same faces.

        John wrote:

        I don’t see why this is true.

        We’ve defined L to be the lattice generated by the |F| mesh currents \{u_f\}. We can take any |F|-1 of the \{u_f\} and they’ll give us a basis for the lattice L. From the way we’ve defined the edge vectors as the difference between the mesh currents associated with the two faces incident on each edge, any overall translation from one atom in the crystal to another must lie in L. Or more concretely, if we position our crystal so that it has an atom lying at the origin, the set of positions of all the atoms in the crystal must be a subset of L.

        Now, we can abstract away the specific geometry of L in \mathbb{R}^{|F|-1} by working in a basis for L. In that basis, the determinant of a basis for any sublattice of L will equal the volume of a unit cell of that sublattice, relative to a volume of 1 for the unit cells of L itself. So if we have |F|-1 vectors \{\mathcal{L}_i\} that generate a sublattice of L, the determinant of the matrix \mathcal{L} such that:

        \displaystyle{\mathcal{L}_i = \sum_{f=1}^{|F|-1} \mathcal{L}_{i,f} u_f}

        will be equal to that relative volume, and the inverse of the determinant will give us the fraction of points in L that lie in the sublattice generated by the \{\mathcal{L}_i\}.

        But the crystal itself consists of |V| different “colours” of atoms, each of them lying in a subset of L isomorphic to the sublattice generated by the \{\mathcal{L}_i\}. So we need to multiply the inverse of the determinant by |V| to get the density of the whole crystal in L. So, that’s where the formula for \rho comes from.

        As for the link between the coefficients \mathcal{L}_{i,f} and the graph Laplacian of the dual graph, that’s very simple! We’ve chosen an orientation for every face of the graph when defining the mesh currents, and we will define the loop vector \mathcal{L}_i for each face as the sum of the edge vectors around the face in the same direction as its orientation. We don’t need to choose orientations for the individual edges; we just want to add the edge vectors “around the face” according to the orientation of the face itself.

        So, for face number f, the contribution from each edge is u_f - u_{f_i} for f_i a face that is incident on face f, and the loop vector for the whole face is d_f times u_f (where d_f is the degree or valence of the face’s node in the dual graph), plus -1 times every u_{f_i} such that there is an edge joining the nodes f and f_i in the dual graph.

        But those coefficients are just the entries of the graph Laplacian matrix for the dual graph!

        The only real complication then comes from the fact that the graph Laplacian is |F| \times |F|, and the |F| mesh currents \{u_f\} sum to zero. So the determinant of the smaller, (|F|-1) \times (|F|-1) matrix \mathcal{L}_{i,f} isn’t the determinant of the graph Laplacian, which of course is 0. Rather, as I argued elsewhere, you can get the determinant of \mathcal{L}_{i,f} by replacing the last row (or any other single row, or column) of the graph Laplacian with 1s, and taking the determinant of the resulting matrix. And though I haven’t proved it, I’m convinced after checking a lot of examples that the determinant is also equal to the product of the non-zero eigenvalues of the graph Laplacian.

      • John Baez says:

        Thanks, Greg. That exposition helped tremendously, along with my work on an example.

        Before, I’d been confused about a lot of things. One of the lesser ones was that terrifyingly non-invariant step where you did something to the last row of a matrix. I knew it couldn’t matter which row was the ‘last’, but it’s nice to hear you say it.

        There should be some very general nonsense about a linear transformation of an n-dimensional vector space that has a 1-dimensional kernel, as we have here. Its determinant will vanish, since one of its eigenvalues is zero. However, the product of the remaining eigenvalues will be a good invariant. We should be able to compute this by chopping down an n×n matrix to an (n-1)×(n-1) matrix in some way and then taking the determinant of that. You seem to be doing something like that.

      • Graham says:

        I am not following at all closely. I don’t really know what you’re doing. However, it is reminding me a lot of Kirchhoff’s matrix tree theorem.

        • Greg Egan says:

          That’s interesting, and helpful. I can’t yet see any explicit connection to the count of spanning trees, but even just the linear-algebraic details mentioned in that article make it easier to understand why the product of the non-zero eigenvalues is the same as the determinant I’m after.

        • John Baez says:

          Interesting! Kirchhoff’s matrix tree theorem has got to be important here. One thing it instantly does is show that the product \lambda_1 \cdots \lambda_{n-1} of nonzero eigenvalues of the graph Laplacian is an integer, which must be true for Greg’s conjecture to be right.

          (In fact the theorem says \lambda_1 \cdots \lambda_{n-1} is an integer divisible by n, the number of vertices, so if Greg’s conjecture is right, the density he calls \rho must be the reciprocal of an integer.)

          A spanning tree does something nice: each edge not in the spanning tree creates a loop when we add it to the spanning tree, and these loops generate the 1st homology group of the graph. In our context, this 1st homology group is also the lattice we’re calling L, generated by the ‘loop vectors’ associated to the faces of our graph. So something is going on here….
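
          Here’s a tiny numerical illustration of the matrix tree theorem for K_4, the dual graph of the tetrahedron (just a sketch, assuming numpy):

              import numpy as np

              A = np.ones((4, 4)) - np.eye(4)                    # adjacency matrix of K_4
              Lap = np.diag(A.sum(axis=1)) - A                   # graph Laplacian
              trees = int(round(np.linalg.det(Lap[:-1, :-1])))   # any cofactor = number of spanning trees
              ev = np.linalg.eigvalsh(Lap)
              print(trees)                                       # 16
              print(int(round(np.prod(ev[ev > 1e-9]))))          # 64 = 4 * 16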

        • John Baez says:

          By the way, I’ve changed “Kirchoff” to “Kirchhoff” in about a dozen comments here.

          I often misspell this name, and others here seem to be falling into the same trap. You can see his pained expression:

          Gustav Robert Kirchhoff

        • Greg Egan says:

          John wrote:

          (In fact the theorem says \lambda_1 \cdots \lambda_{n-1} is an integer divisible by n, the number of vertices, so if Greg’s conjecture is right, the density he calls \rho must be the reciprocal of an integer.)

          We need to be careful here about the graph G versus its dual! In the dual graph, the number of vertices, called n in the Wikipedia article on Kirchhoff’s theorem (sorry about the misspelling), is equal to |F|, the number of faces in the original graph G.

          What Kirchhoff’s theorem shows is that:

          \displaystyle{\frac{\lambda_1 \cdots \lambda_{|F|-1}}{|F|}}

          is an integer. What follows from my conjecture about the determinant of \mathcal{L} is that:

          \displaystyle{\frac{1}{\rho} = \frac{\lambda_1 \cdots \lambda_{|F|-1}}{|V|}}

          where |V| is the number of vertices in the original graph G.

          So, although I believe \rho is the reciprocal of an integer, Kirchhoff’s theorem doesn’t immediately show that.

        • John Baez says:

          Whoops! Thanks for that correction.

          I want to do another round of writing up, maybe on the n-Category Café for a change. This will help me straighten out some things.

        • Tim Silverman says:

          The number of spanning trees is equal to the order of the Picard group of the (dual) graph. I think something vaguely like the following is going on: we’re taking the index of the lattice of principal divisors (which lie in the image of the laplacian) in the lattice of zero divisors (mesh currents summing to zero), presumably over \mathbb{Z}. This is giving the density of one lattice in the other, and then having a sublattice for each vertex gives you the product of the nonzero eigenvalues of the laplacian via the matrix tree theorem.

          Something like that, anyway.

        • John Baez says:

          Cool! You’re mixing graph theory and number theory terms in a way I haven’t seen before… I guess mediated by lattice theory. This is both fascinating and confusing. There’s also a typo here:

          lattice of zero divisors (mesh currents summing to zero) over, presumable over \mathbb{Z}.

          What are you trying to say here? Indeed we’re working with a bunch of \mathbb{Z}-modules, viewing them as lattices in \mathbb{R}^n to add a spicy geometric interpretation to what’s going on.

        • John Baez says:

          Hmm! It turns out Toshikazu Sunada has developed the analogy between graphs and number theory (or maybe it’s better to say algebraic geometry) in a fair amount of detail in his book Topological Crystallography. It looks like he’s developed some of the theory we’re developing here (and some we haven’t yet), but without exploring the higher-dimensional examples that come from Platonic solids more complicated than the tetrahedron. More on this later!

  11. Stephen Hyde says:

    An interesting post. Thanks.

    -The name “(10,3)-a” comes from A. F. Wells (see his book “Three Dimensional Nets and Polyhedra”, Wiley). A standard name in the materials community nowadays is “srs”, coined by Michael O’Keeffe and derived from the SrSi_2 structure. This and many other interesting nets can be found at the site overseen by O’Keeffe. (srs is at http://rcsr.net/nets/srs)

    -There has been much misinformation about this net since Sunada’s paper, and it seems to have propagated further. (10,3)-a/srs/triamond… is a common network pattern in condensed materials, and seen all over the place in nature. (So the paper referred to above [Stephen T. Hyde, Michael O’Keeffe, and Davide M. Proserpio, A short history of an elusive yet ubiquitous structure in chemistry, materials, and mathematics, Angew. Chem. Int. Ed. 47 (2008), 7996–8000.] describes more than “attempts” to realise the structure; it describes actualisations of the structure.)

    • John Baez says:

      I looked at that paper, and it describes gyroids that are found in nature and closely related to the triamond… but are there crystals with atoms arranged in this way? I didn’t see any of those.

      • Stephen Hyde says:

        These nets are common in molecular-scale assemblies, such as metal organic frameworks. A fascinating example is this one:
        An Exceptional 54-Fold Interpenetrated Coordination Polymer with 103-srs Network Topology
        Hua Wu, Jin Yang, Zhong-Min Su, Stuart R. Batten, and Jian-Fang Ma
        Journal of the American Chemical Society 2011 133 (30), 11406-11409
        DOI: 10.1021/ja202303b

        The srs net is a subgraph of the structure.

        At the atomic scale, I don't know of a solid that builds this net, though it describes the Si sites in SrSi_2.

  12. Stephen Hyde says:

    Re my previous comment, I should add the caveat (that may be your point, John!) that the srs net is not known as a pure (sp^2) carbon framework.

  13. John Baez says:

    I think I’ll try to build my intuition for the crystal lattices described here by looking at the example that comes from the cube.

    We start with a regular 5-simplex. I’d like to write its 6 corners as points with integer coordinates, just to keep the calculations simple. Since the symmetries of the 5-simplex are also symmetries of the \mathrm{A}_5 lattice, let’s try using that. This lattice consists of points (x_1, \dots, x_6) with integer coordinates summing to zero. The symmetries in question just permute the coordinates.

    Now, down in 2 dimensions I know how to think of the corners of a regular 2-simplex as points in the \mathrm{A}_2 lattice:

    There are two ways to do it, which correspond to the ‘quark’ and ‘antiquark’ representations of \mathrm{SU}(3), the Lie group associated to \mathrm{A}_2. If we think of the \mathrm{A}_2 lattice as consisting of triples of integers summing to zero, these two triangles are

    (1,-1,0), \; (0, 1, -1), \; (-1, 0, 1)

    and its negative.

    This should generalize nicely to any dimension: the \mathrm{A}_n lattice is the root lattice for \mathrm{SU}(n+1), and the obvious (n+1)-dimensional representation of \mathrm{SU}(n+1) and its dual should give two fairly obvious regular simplices in the lattice

    \mathrm{A}_n = \{ (x_1, \dots, x_{n+1}) \in \mathbb{Z}^{n+1} : \; x_1 + \cdots + x_{n+1} = 0 \}

    But, much to my chagrin, I ran into trouble finding these two ‘obvious’ simplices!

  14. John Baez says:

    So, I did the following thing. These points in \mathbb{Z}^{n+1} form the (n+1) corners of a regular n-simplex

    (1, 0, 0, \dots, 0 ), \;\; (0, 1, 0, \dots, 0), \; \dots, \; (0, 0, 0, \dots, 1)

    Unfortunately the coordinates don’t sum to zero. So, subtract the same number from each coordinate to ensure they do sum to zero. Now the coordinates aren’t integers anymore, so multiply by something to make them integers. We get

    (n, -1, -1, \dots, -1), \; \; \dots, \; \; (-1, -1, -1, \dots, n)

    Let’s call the ith of these vectors u_i. Just to check that things are working, note that

    u_i \cdot u_j = - (n+1) \;  \mathrm{ if } \; i \ne j

    while

    u_i \cdot u_i = n^2 + n

    so the angle between two different vectors of this sort is

    \arccos(-1/n)

    as it should be for corners of a regular n-simplex!

    In the case of the \mathrm{A}_2 lattice this recipe gives us the corners of an equilateral triangle:

    (2,-1,-1), \;\; (-1,2,-1) , \;\; (-1,-1,2)

    This is not the correct triangle for the quark or antiquark representation, so I’m doing something a bit silly, but it should work okay for what I’m doing here! And I’ve found nice coordinates for the corners of an n-simplex in any dimension so I can handle not just the crystal that’s the universal abelian cover of a cube, but all such crystals.
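
    Here’s a quick sanity check of this recipe, assuming numpy (just a sketch): build the vectors (n,-1,\dots,-1), \dots, (-1,\dots,-1,n) and check the dot products and the angle \arccos(-1/n).

        import numpy as np

        def simplex_corners(n):
            # the n+1 vectors (n,-1,...,-1), ..., (-1,...,-1,n) in Z^{n+1};
            # they sum to zero and are the corners of a regular n-simplex
            return (n + 1) * np.eye(n + 1) - np.ones((n + 1, n + 1))

        for n in [2, 3, 5]:
            u = simplex_corners(n)
            print("n =", n,
                  " u_1.u_2 =", u[0] @ u[1],        # -(n+1)
                  " u_1.u_1 =", u[0] @ u[0],        # n^2 + n
                  " angle =", np.degrees(np.arccos((u[0] @ u[1]) / (u[0] @ u[0]))))
        # the angle comes out as arccos(-1/n): 120, ~109.47, ~101.54 degrees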

  15. John Baez says:

    So, back to the cube! To each of its 6 faces we assign a face vector u_i:

    u_1 = (5,-1,-1,-1,-1,-1)

    u_2 = (-1,5,-1,-1,-1,-1)

    etc. These vectors are corners of a regular simplex in \mathbb{R}^6. They all lie in the hyperplane where the coordinates sum to zero.

    Next we want to assign vectors to the cube’s 12 edges, following our general recipe. For this we need to break the symmetry a bit. We need to say which of our cube’s faces touches which other faces.

    Each cube face has just one face that it does not touch: the opposite face. So, we can group the faces into 3 pairs of opposite faces. We can say these are opposite faces:

    u_1 = (5,-1,-1,-1,-1,-1)

    u_2 = (-1,5,-1,-1,-1,-1)

    and these are opposite faces:

    u_3 = (-1,-1,5,-1,-1,-1)

    u_4 = (-1,-1,-1,5,-1,-1)

    as are these:

    u_5 = (-1,-1,-1,-1,5,-1)

    u_6 = (-1,-1,-1,-1,-1,5)

    Note that we’re grouping the 6 components into 3 blocks of 2.

    Each pair of faces i,j that are not opposite determines an edge of the cube. We define an edge vector for each edge:

    e_{ij} = u_i - u_j

    Note that switching i and j multiplies this vector by -1, which we interpret as switching the direction of the edge.

    For example, we have

    e_{13} = (6,0,-6,0,0,0)

    The vectors we get this way are precisely those having a 6 and a -6 as components, with all the other components zero, and with the 6 and the -6 never in the same block.

    Next I should work out some vectors associated to loops of edges in the cube.
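
    Before that, here’s the bookkeeping so far as a little numpy sketch (my numbering of the faces and the opposite pairs is as above), which also checks that all twelve edge vectors have the same length:

        import numpy as np

        # Face vectors for the cube: the 6 corners of a regular 5-simplex in R^6.
        u = {i + 1: v for i, v in enumerate(6 * np.eye(6) - np.ones((6, 6)))}
        opposite = {1: 2, 2: 1, 3: 4, 4: 3, 5: 6, 6: 5}

        # Edge vectors e_{ij} = u_i - u_j for each ordered pair of non-opposite faces.
        e = {(i, j): u[i] - u[j]
             for i in range(1, 7) for j in range(1, 7)
             if i != j and opposite[i] != j}

        print(e[(1, 3)])                                                  # [ 6.  0. -6.  0.  0.  0.]
        print({int(round(np.linalg.norm(v) ** 2)) for v in e.values()})   # {72}: all the same length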

  16. John Baez says:

    Let’s look at the loop of edges going around the cube’s ith face. To this we want to assign a loop vector \mathcal{L}_i.

    I’ll try the case i = 1.

    Face 1 touches faces 3, 4, 5 and 6, so we’re going to add up the edge vectors e_{13}, e_{14}, e_{15} and e_{16} with carefully chosen signs. Am I smart enough to determine these signs?

    I think we can declare that the vector e_{ij} corresponds to the edge between faces i and j, going counterclockwise around the face i. (Some idiot chose ‘counterclockwise’ as the preferred direction in math, and I’m not going to change that now.)

    So, if \mathcal{L}_i is the sum of edge vectors for edges going around face 1 in a counterclockwise direction, we get

    \mathcal{L}_1 = e_{13} + e_{14} + e_{15} + e_{16}

    I may not be writing these edges in the order in which we meet them, but luckily addition of vectors is commutative so that doesn’t matter!

    Okay, that was not so hard… I hope. It’s easy to generalize the above formula to get a formula for all the loop vectors \mathcal{L}_i.

    But let’s see what \mathcal{L}_1 looks like:

    \mathcal{L}_1 = e_{13} + e_{14} + e_{15} + e_{16}

    = (6,0,-6,0,0,0) + (6,0,0,-6,0,0) + (6,0,0,0,-6,0) + (6,0,0,0,0,-6)

    = (24,0,-6,-6,-6,-6)

    So, the pattern seems to be this: \mathcal{L}_i has a 24 as its ith component, a 0 as the other component in the same block, and -6’s in the remaining 4 components.

    By this point it’s tempting to divide everything by 6, just to simplify the arithmetic. If we want the face vectors to have integer components, we’re not allowed to do that. But maybe it’s the edge vectors that really matter in the end!

    • John Baez says:

      I had written, a bit hesitantly:

      \mathcal{L}_1 = e_{13} + e_{14} + e_{15} + e_{16}

      = (6,0,-6,0,0,0) + (6,0,0,-6,0,0) + (6,0,0,0,-6,0) + (6,0,0,0,0,-6)

      = (24,0,-6,-6,-6,-6)

      Note that since e_{ij} = u_i - u_j, we also have

      \mathcal{L}_1 = 4 u_1 - u_3 - u_4 - u_5 - u_6

      This is a special case of what Greg said more generally:

      So, for face number f, the contribution from each edge is u_f - u_{f_i} for f_i a face that is incident on face f, and the loop vector for the whole face is d_f times u_f (where d_f is the degree or valence of the face’s node in the dual graph), plus -1 times every u_{f_i} such that there is an edge joining the nodes f and f_i in the dual graph.

      I might not have understood this remark of Greg’s so easily if I hadn’t just worked out an example!
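
      Continuing the numerical sketch, assuming numpy, here’s a check that the two descriptions of \mathcal{L}_1 agree:

          import numpy as np

          u = {i + 1: v for i, v in enumerate(6 * np.eye(6) - np.ones((6, 6)))}
          e = lambda i, j: u[i] - u[j]

          # sum of edge vectors around face 1 (its neighbours are faces 3, 4, 5, 6)
          L1_from_edges = e(1, 3) + e(1, 4) + e(1, 5) + e(1, 6)

          # the same thing from the dual-graph Laplacian row for face 1:
          # degree 4 on u_1, -1 on each neighbouring face, 0 on the opposite face 2
          L1_from_laplacian = 4 * u[1] - u[3] - u[4] - u[5] - u[6]

          print(L1_from_edges)                                     # [24.  0. -6. -6. -6. -6.]
          print(np.array_equal(L1_from_edges, L1_from_laplacian))  # True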

  17. jessemckeown says:

    One can choose at least three decagons in T meeting minimally at one vertex:

    (in this example, meeting in the red vertex). (I don’t know if the crossing relations in this picture are consistent, so ignore them). So that’s one half-decent reason to call it (10,3)-something. Maybe something more systematic can be done?

    • John Baez says:

      Wow, that’s nice! So there is indeed something (10,3)-ish about the triamond.

      You’re reminding me of a question I had:

      If I handed you a nice symmetrical graph like that coming from the tetrahedron or cube, how long would you have to think before you could tell me the smallest loop in the universal abelian covering of this graph?

      For the tetrahedron it has length 10, but I still don’t know why. What about the cube?

      • jessemckeown says:

        The tetrahedron is ten because-ish the commutator involves two copies each of two triangles, and two triangles in the tetrahedron meet in one edge; for the cube, a minimal commutator of adjacent faces at the basepoint starts with 2 × 4 × 2 = 16 edges, of which I'm guessing one pair reduces, to give 14; and a minimal commutator of disjoint faces starts with 2 × (4 + 6) = 20 edges, and I think there are no cancellations there… On the other hand, I'm not sure that one of those 20-edge loops isn't composable from intersecting 14-edge loops.

        • jessemckeown says:

          and… after that I’m not sure pasting more 14-gons doesn’t give something shorter, either. Hm. And that is why relatives of the word problem are hard, isn’t it.

        • Layra Idarani says:

          Pretty sure that 14 is in fact the answer, and that in general the answer will be the commutator of 2 adjacent faces.
          The map from loops on the cube to vectors is basically a winding function; you can decompose the loop on the cube into a signed sum of loops around faces and the resulting vector is the sum of winding number of each face times the loop vector for that face. So to get a sum of 0 you need your winding number to be an integer multiple of the sum of all the faces, which is like having a winding number of 0.
          A loop that only goes around one face and has winding number 0 is contractible, and hence gives a contractible path on the lattice. So you need to involve at least two faces, and you need to go around each of them at least once in either direction.
          You need to go around the perimeter of whatever shape you make at least twice, and you need at least two more edge traversals to change direction without making a contractible path.
          The minimum perimeter for a shape involving two faces is 6, so the minimum closed path in the cube lattice is 14, given by the commutator of two faces.
          Notably, involving a third face adjacent to the first two does give a shape with perimeter 6, but the direction-reversal requires 4 extra edge traversals while the commutator of two faces only requires 2.
          So the cube should give (14, 3), the dodecahedron (22, 3), the octahedron (10, 4) and the icosahedron (10, 5).

        • Layra Idarani says:

          Oops! Not 22 for the dodecahedron, 18.

      • Greg Egan says:

        John wrote:

        If I handed you a nice symmetrical graph like that coming from the tetrahedron or cube, how long would you have to think before you could tell me the smallest loop in the universal abelian covering of this graph?

        For a Platonic solid with regular n-gons as faces, I’d immediately reply 4n-2. But I had to spend half an hour thinking about the general case first.

        Suppose you have a loop \ell_a in the graph G that starts and ends at a given vertex v_a. And suppose you have another loop \ell_b in G that starts and ends at the vertex v_b, and traverses all the same edges as \ell_a in reverse order. In other words, these two loops are almost the same, except they’re based at different vertices and they go around the same edges in opposite directions.

        When you lift these loops to the covering graph, they will yield opposite net displacement vectors. But you can’t just splice them together to get a proper loop in the covering graph, because v_a \neq v_b (and if you did have v_a=v_b, you’d just get a degenerate loop that went somewhere then backtracked along its entire path).

        So, you need to join v_a and v_b with some kind of “detour” that goes right off the loop, to avoid backtracking. Once you establish that path in G — call it d_{a b} when going from v_a to v_b and d_{b a} when going from v_b to v_a, by the same edges, backwards — you can lift the path:

        \ell_a + d_{a b} + \ell_b + d_{b a}

        to get a loop in the covering graph that won’t have any backtracking.

        For a Platonic solid with regular n-gons as faces, the number of edges in \ell_a and \ell_b will be n, and you get the smallest possible detour by taking v_a and v_b to be adjacent vertices, and using n-1 edges of another face (one that shares the edge from v_a to v_b with the face we use for the loops) as the detour.

        So the total number of edges is:

        |\ell_a| + |d_{a b}| + |\ell_b| + |d_{b a}| = 2 n + 2(n-1) = 4n-2

      • Greg Egan says:

        Sorry, I didn’t intend those weird symbols for the names of loops and detours, I thought they’d just appear as script, lower-case “l” and “d”.

        • John Baez says:

          For some idiotic reason TeX doesn’t create calligraphic lower-case letters using \mathcal. I think that was before people realized we needed boatloads of fonts. For now I’ve changed your lower-case calligraphic l to \ell (this script letter has its own special name) and your lower-case calligraphic d to a plain d.

    • jessemckeown says:

      And there is at least one better way to pick the three decagons at one vertex; this one is symmetric there.

      (in a more familiar projection)

  18. John Baez says:

    I’ll continue my studies of the 5-dimensional crystal that’s a universal abelian cover of the cube graph, picking up where I left off.

    Instead of trying to compute the density of this crystal, I’ll try to understand a bit about what it “looks like”.

    There are 6 kinds of vertices, corresponding to the 6 vertices of the cube. Let’s take a vertex and study the 3 edges coming out of it!

    We have direct access to the cube’s 6 faces, which are called 1,2,3,4,5,6. Faces 1 and 2 are diametrically opposite to each other, as are faces 3 and 4, as are faces 5 and 6.

    So, there’s a unique vertex incident to faces 1, 3 and 5. Let’s look at that. If I’m getting my signs correct, the 3 edges coming out of this vertex are e_{13}, e_{35} and e_{51}. I am probably choosing some sign convention by saying this, and that makes me nervous, but I doubt it will get me in trouble in this little calculation. I’d run a bigger risk if I started studying more than one vertex; then my sign convention would need to be consistent somehow.

    Charging boldly ahead:

    e_{13} = u_1 - u_3 = (5,-1,-1,-1,-1,-1) - (-1,-1,5,-1,-1,-1)

    = (6,0,-6,0,0,0)

    e_{35} = u_3 - u_5 = (-1,-1,5,-1,-1,-1) - (-1,-1,-1,-1,5,-1)

    = (0,0,6,0,-6,0)

    e_{51} = u_5 - u_1 = (-1,-1,-1,-1,5,-1)  - (5,-1,-1,-1,-1,-1)

    = (-6,0,0,0,6,0)

    This was not very thrilling, but we’ve learned that this vertex — hence by symmetry every vertex — has 3 edges coming out of it that are all at this angle to each other:

    \arccos(e_{13} \cdot e_{35} / |e_{13}| |e_{35}|) =  \arccos(-36/72) = \arccos(-1/2) = 120^\circ

    This is actually interesting. The triamond, obtained by the very same general recipe starting with a tetrahedron rather than a cube, also has 3 edges coming out of each vertex that are at 120^\circ angles to each other.

    If I had more energy I would instantly repeat this calculation for the octahedron, dodecahedron and icosahedron.
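
    For anyone who wants to repeat the boring calculation by machine, here’s a sketch assuming numpy, for the vertex incident to faces 1, 3 and 5:

        import numpy as np

        u = {i + 1: v for i, v in enumerate(6 * np.eye(6) - np.ones((6, 6)))}
        e13, e35, e51 = u[1] - u[3], u[3] - u[5], u[5] - u[1]

        def angle(a, b):
            return np.degrees(np.arccos(a @ b / (np.linalg.norm(a) * np.linalg.norm(b))))

        print(angle(e13, e35), angle(e35, e51), angle(e51, e13))   # all three are 120 degrees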

    • Greg Egan says:

      With our recipe, any trivalent vertex will have its incident edges at 120^\circ to each other, because the edge vectors obey Kirchhoff’s law, and using a regular simplex for the mesh currents means that all edge vectors are of the same length.

      The only way three equal-length vectors can sum to zero (i.e. obey Kirchhoff’s law at the vertex) is if they make a symmetrical 3-pointed star like that. So you don’t need to calculate anything to know that the dodecahedron vertices will look the same.

      We ought to be able to predict the local configuration at the 4-valent and 5-valent vertices as well. I’d be surprised if the neighbouring vertices in these cases don’t form a regular tetrahedron and a regular 4-simplex, respectively, centred on the starting vertex, since these are the least degenerate possibilities consistent with Kirchhoff’s law and equal-length edge vectors. Of course, it would also be consistent if they formed 4-pointed and 5-pointed stars in a plane, but my hunch is that they don’t.

      • Layra Idarani says:

        For the octahedron it’s actually more complicated: two edges on the same face form an angle of 120 degrees, but two edges that aren’t from the same face are actually perpendicular!

    • John Baez says:

      Greg wrote:

      With our recipe, any trivalent vertex will have its incident edges at 120^\circ to each other, because the edge vectors obey Kirchhoff’s law, and using a regular simplex for the mesh currents means that all edge vectors are of the same length.

      Okay, great! Boring calculations often reveal results that can be obtained faster by thinking clearly. But I’m still glad I did the boring calculations, because it makes everything seem more ‘real’.

      We ought to be able to predict the local configuration at the 4-valent and 5-valent vertices as well. I’d be surprised if the neighbouring vertices in these cases don’t form a regular tetrahedron and a regular 4-simplex, respectively, centred on the starting vertex, since these are the least degenerate possibilities consistent with Kirchhoff’s law and equal-length edge vectors.

      That sounds nice! If so, when we form a highly symmetrical graph by triangulating Klein’s quartic curve with 7 triangles meeting at each corner, in the resulting ‘crystal’ each atom should have 7 neighbors lying at the vertices of a regular 6-simplex. That would be quite pretty!

      And so on: we can get lots of crystals like this using surfaces of higher genus.

      I’m thinking of calling these Platonic crystals, unless someone has a better name.

      • Greg Egan says:

        Alas, my hunch was wrong; as Layra points out, the symmetry between the edges is broken at the vertices of an octahedron (or icosahedron), because a pair of edges incident on the vertex might or might not belong to the same face.

      • John Baez says:

        Yes, I figured it out too while watching TV. If you trace out a loop around the corners of an equilateral triangle, the differences between successive corners are vectors that are themselves the corners of an equilateral triangle. But this pattern fails when we go to a tetrahedron or any higher-dimensional simplex!

        It’s related to how

        \displaystyle{ \binom{n}{2} = n }

        only for n = 3.

      • Greg Egan says:

        Using John’s construction for the mesh currents forming a regular simplex, and slightly tweaking some of the edge-labelling conventions, wlog we can take the outgoing edge vectors at any vertex in a graph with n faces to be, in cyclic order around the vertex:

        \displaystyle{(n,-n,0,0,0,...,0,0,...)}
        \displaystyle{(0,n,-n,0,0,...,0,0,...)}
        \displaystyle{(0,0,n,-n,0,...,0,0,...)}
        \displaystyle{\vdots}
        \displaystyle{(-n,0,0,0,0,...,n,0,...)}

        So, cyclically adjacent pairs of edges in the original graph will lift to edges at 120 degrees to each other in the covering graph, and all other pairs of edges at the vertex will lift to mutually perpendicular edges in the covering graph.

        • John Baez says:

          Isn’t that last vector supposed to be

          \displaystyle{(-n,0,0,0,0,..., n)} \; ?

          So, cyclically adjacent pairs of edges in the original graph will lift to edges at 120 degrees to each other in the covering graph, and all other pairs of edges at the vertex will lift to mutually perpendicular edges in the covering graph.

          That’s very nice. For one thing, if you normalize the vectors you listed to have length squared 2, they have exactly the inner products that you need for the root vectors of the affine Dynkin diagram \widetilde{\mathrm{A}}_{n-1}:


          I’m not sure exactly what this implies, but affine Dynkin diagrams are to groups generated by reflections and translations as ordinary Coxeter–Dynkin diagrams are to groups generated by reflections (Coxeter groups). So, just as ordinary Coxeter–Dynkin diagrams are a nice way to describe the symmetry groups of Platonic solids, affine Dynkin diagrams are a nice way to describe symmetry groups of very symmetrical lattices. Since each of our ‘Platonic crystals’ has (at least) the \mathrm{A}_{n-1} lattice as translational symmetries, it’s not shocking to see the \widetilde{\mathrm{A}}_{n-1} affine Dynkin diagram show up.

          However, I still don’t see quite why it’s showing up in the way it is.

        • Layra Idarani says:

          I’m not really sure how relevant the Dynkin connection is, actually. We get \widetilde{A_{n-1}} for the case of 1-skeleta of polyhedra, but we can easily extend the construction to the 1-skeleton of higher-dimensional polytopes: each face gets assigned a vector and an orientation, directed edges get assigned vectors based on adjacent faces. For the 4-simplex we get angles of arccos(-1/3). So I think the \widetilde{A_{n-1}} connection may be a low-dimensional coincidence.

        • Greg Egan says:

          John wrote:

          Isn’t that last vector supposed to be

          \displaystyle{(-n,0,0,0,0,..., n)} \; ?

          No, because these are vectors in \mathbb{R}^{n}, where n=|F| is the number of faces in the graph. You’re working in the subspace of dimension n-1 where the coordinates sum to zero. But there won’t be n faces around any given vertex, there will be a smaller number, so the list of edge vectors won’t get as far as:

          \displaystyle{(-n,0,0,0,0,...,n)}

          I should have stressed that the length of that list of vectors isn’t n, but n_v, the number of faces incident on some vertex.

          Concretely, the point of that block of zeroes remaining at the end of every vector in the list is just that the edge vectors at each vertex don’t generate a full-dimensional sublattice of the lattice spanned by the mesh currents; they generate a lower-dimensional sublattice that is a scaled copy of A_{n_v-1}.

        • John Baez says:

          Oh, right. I get it.

          By the way, I spotted an issue that I haven’t thought about hard enough. I said that choosing face vectors that sum to zero is enough to ensure Kirchhoff’s current law for the edge vectors. That’s definitely true if our graph is drawn on a sphere, but if it’s drawn on a surface of higher genus I need to be a bit more careful than I have so far!

          Actually, right this second I’m leaning toward the opinion that everything is fine. On a surface of higher genus we can have current flowing around a handle, but as long as the surface is connected it seems the number of independent Kirchhoff’s current law constraints equals the number of faces minus 1.

        • Greg Egan says:

          John wrote:

          I said that choosing face vectors that sum to zero is enough to ensure Kirchhoff’s current law for the edge vectors.

          I agreed to that when you said it … but I now think it’s not a necessary condition for satisfying Kirchhoff’s law, regardless of whether the graph is embedded on a sphere or a surface of higher genus. (Of course we do have other, geometrical reasons to want the mesh currents to sum to zero!)

          So long as we can put an orientation on every face, we can specify a mesh current for each face and think of that current as flowing around just inside the boundary of the face. It’s easier to think clearly about what’s happening if we pretend for the moment that the mesh currents are scalar parameters, and the geometry of the situation turns them into directional currents moving along the surface in a particular way. So these scalars parameterise conserved vector fields on the surface in which we’ve embedded the graph, and in the idealisation where the currents get ever closer to the border of each face, the field is only non-zero on the edges, and is tangent to the edge.

          We can then ascribe to each edge a current that is the vector sum of the mesh currents running along that edge from the two faces incident on the edge. I think this makes things clearer than having to think about choosing an orientation for every edge and worrying about which face’s mesh current we subtract from which at each particular edge.

          I believe it’s obvious, then, that the sum of edge currents at any vertex will automatically be zero, because at each vertex, every face’s mesh current will appear twice with opposite signs: once flowing into the vertex and once flowing out of the vertex.

          In effect, the whole idea of a conserved current circulating around the border of each face guarantees conservation everywhere: you can’t get any sources or sinks appearing when all you’re really doing is adding up a whole lot of loops!

        • Greg Egan says:

          I should add a remark that reconciles what I said here with some linear-algebraic facts that seem to contradict it, but really don’t!

          In the example of the cube, we have 6 faces, 8 vertices and 12 edges. If you choose an orientation on each edge so you can talk about 12 edge current variables e_i, then the system of 8 linear equations that says Kirchhoff’s current law is obeyed at all 8 vertices has rank 7, i.e. one of the equations follows automatically from all the others being true.

          So, the solution space in the edge current variables has dimension 12-7=5. Since that’s one less than the number of faces, 6, it seems intuitively obvious that we will need to impose some condition on the 6 mesh currents in order to stay in that 5-dimensional solution space.

          But that intuition is wrong! The map from the 6-dimensional space of mesh currents to the space of edge currents is not of maximal rank: it takes the whole 6-dimensional domain into the 5-dimensional subspace of edge currents satisfying Kirchhoff’s current law at every vertex.
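
          Here’s a small numerical illustration of both points (a sketch assuming numpy; the vertex labels and face orientations are just one concrete choice): hard-code the cube’s six faces as consistently oriented vertex cycles, send an arbitrary assignment of mesh currents to edge currents, check Kirchhoff’s current law at all eight vertices, and check that the mesh-to-edge map has rank 5.

              import numpy as np

              # Cube vertices labelled by bits (x,y,z); each face listed as a vertex cycle,
              # all oriented counterclockwise as seen from outside the cube.
              faces = [
                  [(0,0,0), (0,1,0), (1,1,0), (1,0,0)],   # z = 0
                  [(0,0,1), (1,0,1), (1,1,1), (0,1,1)],   # z = 1
                  [(0,0,0), (1,0,0), (1,0,1), (0,0,1)],   # y = 0
                  [(0,1,0), (0,1,1), (1,1,1), (1,1,0)],   # y = 1
                  [(0,0,0), (0,0,1), (0,1,1), (0,1,0)],   # x = 0
                  [(1,0,0), (1,1,0), (1,1,1), (1,0,1)],   # x = 1
              ]
              vertices = [(x, y, z) for x in (0, 1) for y in (0, 1) for z in (0, 1)]

              # M[e, f] = +1 if face f runs along edge e in the edge's stored direction, -1 if backwards.
              # Then (edge currents) = M @ (mesh currents).
              edges = sorted({tuple(sorted((c[k], c[(k + 1) % 4]))) for c in faces for k in range(4)})
              M = np.zeros((len(edges), 6))
              for f, cycle in enumerate(faces):
                  for k in range(4):
                      a, b = cycle[k], cycle[(k + 1) % 4]
                      M[edges.index(tuple(sorted((a, b)))), f] += 1 if a < b else -1

              m = np.random.randn(6)             # arbitrary mesh currents, NOT summing to zero
              current = M @ m
              for v in vertices:                 # Kirchhoff: net current out of each vertex is zero
                  out = sum(c if e[0] == v else -c for e, c in zip(edges, current) if v in e)
                  assert abs(out) < 1e-12
              print("Kirchhoff holds at every vertex; rank of the mesh-to-edge map =",
                    np.linalg.matrix_rank(M))    # rank 5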

        • Greg Egan says:

          BTW, I gather from the Wikipedia article on mesh analysis that when the technique is used to analyse planar circuits, mesh currents are specified only for the loops in the circuit that do not contain any other loop, which in our setup on a sphere would correspond to specifying them for every face of the polyhedron except one, i.e. setting one face’s mesh current to zero.

          That choice makes sense when you’re solving for the currents in a circuit, because it gives you the correct number of parameters and makes the equations as sparse as possible. It would be pointless to add another mesh current around the outside of a planar circuit just to make the sum of all mesh currents equal to zero.

          But our choice of making the sum of all mesh currents equal to zero is just as valid in terms of satisfying Kirchhoff’s current law, and for our purposes it makes the geometry much more symmetrical.

        • John Baez says:

          Greg wrote:

          So, cyclically adjacent pairs of edges in the original graph will lift to edges at 120 degrees to each other in the covering graph, and all other pairs of edges at the vertex will lift to mutually perpendicular edges in the covering graph.

          By the way: for some reason, even though I always knew the opposite edges of a regular tetrahedron are at right angles to each other, I didn’t notice the obvious generalization! If we take a round tour of a higher-dimensional regular simplex, visiting each vertex just once, each edge we traverse will be orthogonal to all the rest, except for the edges that came immediately before and immediately after.

          Where did this blindness come from? Maybe it’s because some edges of the regular tetrahedron look orthogonal when you project them down to the plane this way:

          whereas no edges look orthogonal in the usual projection of the 4-simplex:

          In 4-space, each edge of the outer pentagon is orthogonal to all the others—except for its neighbors, which it meets at a 60° angle. Ditto for the inner pentagram!

        • Layra Idarani says:

          Indeed, any two edges in a regular n-simplex either share a vertex, in which case they meet at 60 degrees, or are skew and thus determine a regular tetrahedron, in which case they’re perpendicular.

        • John Baez says:

          Yes, it’s one of those things that’s obvious if you think about it. But I’d been dealing with 4-simplexes a lot in my work on quantum gravity, and a lot more since, and I’d never thought about it! I want to spare others my years of benighted ignorance.

      • John Baez says:

        Thanks for clarifying this stuff! I was sort of confused.

        The study of mesh currents, Kirchhoff’s laws and such is actually what mathematicians would call part of ‘homology theory’, and if/when I write this up more carefully I’ll probably use that way of thinking to debug my brain, but I haven’t really done it yet. In some ways homology theory is overkill, but it’s just right for formalizing the concept of the ‘universal abelian cover’ of a graph, and one thing I want to do someday is carefully prove we’re getting universal abelian covers.

  19. mrted57 says:

    Thank you, John. Interesting as always.

  20. John Baez says:

    I’ve decided that the lattice L_E generated by edge vectors is more relevant to crystallography than the lattice L_F generated by face vectors, since it’s the edge vectors that describe the separations between atoms joined by bonds. In other words, you can take vectors pointing along all the bonds of your crystal, let them generate a lattice, and you get L_E. It’s harder—at least for me—to look at a crystal and read off the lattice L_F.

    But I’m getting confused about the relation between these lattices, even when we assume (as we’re usually doing) that the face vectors sum to zero.

    We have one face vector u_f for each face f \in F, which are linearly independent except for one constraint: they sum to zero.

    From these we define edge vectors

    u_e = u_{f_1} - u_{f_2}

    where f_1, f_2 are the two faces meeting along the edge e.

    So, obviously L_E is contained in L_F. It only knows about differences of face vectors, but that’s not so bad, since the sum of all face vectors is zero.

    Hmm, right now I’m thinking that L_E is of index |F| in L_F. In other words,

    |L_F/L_E| = |F|

    Does that seem right?

  21. John Baez says:

    We’ve also got the lattice generated by loop vectors, which I’ll call L_L. This lattice, unlike the other two, acts as translational symmetries on the set A of atoms of our crystal. We have

    L_L \subseteq A \subseteq L_E \subseteq L_F

    It is fairly easy to see that

    |A/L_L| = |V|

    the number of vertices in our graph. This says that one atom of each ‘color’ (with the vertices of our graph giving ‘colors’) appears in each unit cell of L_L.

    Greg proved that

    |L_F/L_L| = \lambda_1 \cdots \lambda_n

    where the right side is the product of nonzero eigenvalues of the Laplacian of the dual graph, and n is one less than the number of vertices of the dual graph, or faces of our original graph:

    n = |F|-1

    Kirchhoff’s matrix tree theorem says that

    \lambda_1 \cdots \lambda_n = |F| \cdot |T|

    where |T| is the number of spanning trees in the dual graph. Putting together the last 3 equations, we get

    |L_F/L_L| = |F| \cdot |T|

    I should do an example or two to check all this. Indeed, I want to work out the relevant numbers for all the Platonic examples.
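
    Here’s a quick numerical check of the equation \lambda_1 \cdots \lambda_n = |F| \cdot |T|, assuming numpy (just a sketch, using the triangle and the octahedron, which are the duals of the trigonal hosohedron and the cube):

        import numpy as np

        def laplacian(A):
            return np.diag(A.sum(axis=1)) - A

        def spanning_trees(A):
            # matrix tree theorem: any cofactor of the Laplacian
            return int(round(np.linalg.det(laplacian(A)[:-1, :-1])))

        def nonzero_eigenvalue_product(A):
            ev = np.linalg.eigvalsh(laplacian(A))
            return int(round(np.prod(ev[ev > 1e-9])))

        # dual graphs: the triangle (dual of the trigonal hosohedron, |F| = 3)
        # and the octahedron (dual of the cube, |F| = 6)
        A_triangle = np.ones((3, 3)) - np.eye(3)
        A_oct = np.ones((6, 6)) - np.eye(6)
        for a, b in [(0, 1), (2, 3), (4, 5)]:
            A_oct[a, b] = A_oct[b, a] = 0

        for name, A in [("triangle", A_triangle), ("octahedron", A_oct)]:
            print(name, nonzero_eigenvalue_product(A), "=", A.shape[0], "*", spanning_trees(A))
        # triangle   9    = 3 * 3
        # octahedron 2304 = 6 * 384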

  22. John Baez says:

    So, let me try the example of the degenerate Platonic solid called the ‘trigonal hosohedron’:

    This gives the crystal structure of graphene:

    or in other words, the hexagonal tiling:

    There are two ‘colors’ of atoms here: those with a bond poking out to the left and those with a bond poking out to the right. This is because the trigonal hosohedron has two vertices:

    |A/L_L| = |V| = 2

    The lattice L_E of edge vectors looks like a triangular tiling:

    The vertices of the hexagonal tiling occupy only 2/3 of the vertices in the triangular tiling. I had to draw a lot of pictures before I was sure of this simple fact, but now it seems obvious. Mathematically this means

    |A/L_L| = \frac{2}{3} |L_E/L_L|

    Combining this with the previous equation we get

    |L_E/L_L| = 3

    • John Baez says:

      Continuing with the trigonal hosohedron, I want to check my guess

      |L_F/L_E| = |F|

      In this case we have 3 face vectors, say a,b,c, obeying a+b+c = 0. These generate the lattice L_F.

      The sublattice L_E \subseteq L_F is generated by the differences a-b, b-c, a-c, or indeed any two of these.

      If we use the equation a+b+c = 0 to eliminate c, then L_F is generated by

      a,b

      while L_E is generated by

      a-b, \quad a-c = a - (-a-b) = 2a+b

      Thus, the index of L_E in L_F is

      \displaystyle{ \mathrm{det}  \left( \begin{array}{cc} 1 & -1 \\ 2 & 1 \end{array} \right) } = 3

      In other words,

      |L_F/L_E| = 3

      which is consistent with my guess

      |L_F/L_E| = |F|

    • John Baez says:

      I also want to check Greg’s formula

      |L_F/L_L| = |F| \cdot |T|

      On the one hand, the trigonal hosohedron has 3 faces, and its dual graph has 3 spanning trees:

      so his formula gives

      |L_F/L_L| = |F| \cdot |T| = 3 \cdot 3 = 9

      But we can also calculate this quantity in a different way! Since we have

      L_L \subseteq L_E \subseteq L_F

      we have

      |L_F/L_L| = |L_F/L_E| \cdot |L_E/L_L|

      and above I’ve shown

      |L_F/L_E| = 3,  |L_E/L_L| = 3

      This gives

      |L_F/L_L| = 3 \cdot 3 = 9

      consistent with Greg’s formula.

      This is exciting, because I made some mistakes en route, but now things are working.

    • John Baez says:

      Given all this, I can calculate one thing I’m interested in now, which is: “what fraction of the potential locations for atoms in the hexagonal lattice are actually occupied by atoms?” Or in short: what is the packing fraction?

      I calculated this density earlier for graphene, but the basic idea is just that graphene has some obvious ‘holes’ where atoms could have been, and the fraction of potential locations that are ‘filled’ is 2/3:

      Let’s see how to calculate this systematically starting from the graph that gives rise to this crystal: the trigonal hosohedron.

      Mathematically A is the set of actual atoms, while L_E is the lattice of potential locations for atoms. So, we can define the packing fraction by

      \displaystyle{ \frac{|A/L_L|}{|L_E/L_L|} }

      The numerator is easy to compute; I’ve argued above that it’s the number of vertices of our graph (here the trigonal hosohedron):

      |A/L_L| = |V|

      The denominator is tougher; I think we need to use

      |L_F/L_L| = |L_F/L_E| \cdot |L_E/L_L|

      and solve for the quantity we want:

      |L_E/L_L| =  \displaystyle{ \frac{  |L_F/L_L| } {|L_F/L_E| } }

      Then we can use Greg’s result

      |L_F/L_L| = |F| \cdot |T|

      and my guess

      |L_F/L_E| = |F|

      to get

      |L_E/L_L| =  \displaystyle{ \frac{  |F| \cdot |T|  }{ |F|  }  }  = |T|

      Hey, that’s nice!

      So, using Greg’s result and my guess, the packing fraction is:

      \displaystyle{ \frac{|A/L_L|}{|L_E/L_L|} = \frac{|V|}{|T|} }

      Nice!

      We can check this for the hexagonal lattice. The trigonal hosohedron has 2 vertices, and its dual graph has 3 spanning trees, so we get 2/3, just as we should!

  23. John Baez says:

    Now let me try a harder example: the dodecahedron!

    This gives a crystal in 11 dimensions with 20 different colors of atoms, since the dodecahedron has 12 faces and 20 vertices.

    The packing fraction, defined as

    \displaystyle{ \frac{|A/L_L|}{|L_E/L_L|} = \frac{|V|}{|T|} }

    is 20 divided by the number of spanning trees in the dual graph, which is the icosahedral graph. Here’s a typical spanning tree in the icosahedral graph:


    I got this picture from a Matlab page about a program that counts spanning trees. This page says the icosahedral graph has 5,184,000 spanning trees! So, the density of our 11-dimensional crystal is

    \displaystyle{ \frac{20}{5,184,000} = \frac{1}{259,200} }

    It seems the only thing that’s hard to compute is the number of spanning trees. Surely someone has done this for all the Platonic solids!
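
    In the meantime, here's a sketch of how to grind these numbers out by computer (assuming Python with networkx and numpy), again using Kirchhoff's matrix tree theorem:

    ```python
    import networkx as nx
    import numpy as np

    def spanning_trees(G):
        """Count spanning trees: any cofactor of the graph Laplacian."""
        L = nx.laplacian_matrix(G).toarray().astype(float)
        return round(np.linalg.det(L[1:, 1:]))

    print(spanning_trees(nx.icosahedral_graph()))   # 5184000
    ```

    The same trick works for the other polyhedral graphs.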

    • Greg Egan says:

      Great stuff!

      I get the following results:

      \displaystyle{\begin{array}{ccc}
      \mathrm{Polyhedron} & |T| & \rho \\
      \mathrm{Tetrahedron} & 16 & \frac{1}{4} \\
      \mathrm{Cube} & 384 & \frac{1}{48} \\
      \mathrm{Octahedron} & 384 & \frac{1}{64} \\
      \mathrm{Dodecahedron} & 5,184,000 & \frac{1}{259,200} \\
      \mathrm{Icosahedron} & 5,184,000 & \frac{1}{432,000} \\
      \mathrm{Truncated\ icosahedron} & 375,291,866,372,898,816,000 & \frac{1}{6,254,864,439,548,313,600} \\
      \mathrm{Pentakis\ dodecahedron} & 375,291,866,372,898,816,000 & \frac{1}{11,727,870,824,153,088,000} \\
      \mathrm{Klein\ quartic\ (triangles)} & 38,542,412,611,584,000,000 & \frac{1}{1,605,933,858,816,000,000} \\
      \mathrm{Klein\ quartic\ (heptagons)} & 846,083,041,649,491,968 & \frac{1}{15,108,625,743,740,928}
      \end{array}}

      where \rho is the packing fraction relative to the edge vector lattice and |T| is the number of spanning trees of the dual graph. I computed |T| from the graph Laplacian.

      This all fits in with your conjecture that the index of the edge vector lattice in the face vector lattice is |F|, which shouldn’t be too hard to prove.

      It’s interesting how |T| seems to be the same for the dual when the graph embeds into the sphere, but not for higher genus.

      • John Baez says:

        Thanks a million! I assume your \rho here is what I’m now calling the packing fraction, not the quantity you used to call \rho. (It must be, because your results match mine.)

        While you’re doing these, could you please also do two more? The cuboctahedron:

        and the icosidodecahedron:

        are nice because their symmetry groups act transitively on their vertex-edge flags. I believe this implies the associated crystal also has a symmetry group acting transitively on the vertex-edge flags. Sunada calls such crystals strongly isotropic, and he raised the challenge of classifying them. One of the best things about the Platonic crystals is that they’re strongly isotropic, but those coming from the cuboctahedron and icosidodecahedron are equally good in this respect (as are those coming from Klein’s quartic curve and infinitely many other highly symmetric tilings of higher-genus surfaces).

        I don’t know if I’ve found all graphs embedded in the sphere that have symmetry groups that act transitively on flags!

      • Greg Egan says:

        Yes, \rho here is the packing fraction relative to the edge vector lattice (not the face vector lattice).

        For those other polyhedra:

        \displaystyle{\begin{array}{ccc}  \mathrm{Polyhedron} & |T| & \rho \\   \mathrm{Cuboctahedron} & 331,776 & \frac{1}{27,648} \\  \mathrm{Icosidodecahedron} & 208,971,104,256,000 & \frac{1}{6,965,703,475,200} \end{array}}

    • Greg Egan says:

      Oh, I see that it’s a well-known result that, for planar graphs, the graph and its dual have the same number of spanning trees.

      • John Baez says:

        Wow, the proof is so beautiful I have to quote it:

        A spanning tree may be defined as a set of edges that, together with all of the vertices of the graph, forms a connected and acyclic subgraph. But, by cut-cycle duality, if a set S of edges in a planar graph G is acyclic (has no cycles), then the set of edges dual to S has no cuts, from which it follows that the complementary set of dual edges (the duals of the edges that are not in S) forms a connected subgraph. Symmetrically, if S is connected, then the edges dual to the complement of S form an acyclic subgraph. Therefore, when S has both properties – it is connected and acyclic – the same is true for the complementary set in the dual graph. That is, each spanning tree of G is complementary to a spanning tree of the dual graph, and vice versa.

        This reminds me a lot of the Hodge star operator on a surface.

        The discrete analogue of a 1-form on a surface is a 1-cochain, i.e. a function assigning a number to each directed edge in a polygonal decomposition of that surface. The ‘Hodge dual’ of a 1-cochain is the 1-cochain on the dual graph, which assigns to each dual edge the same number that the original 1-cochain assigned to the corresponding edge on the original graph.

        I’d hope that, following this analogy, we’d get some direct way to see why the product of the nonzero eigenvalues of the Laplacian of a graph on the sphere, divided by the number of vertices, equals the same quantity for the dual graph! This fact follows from the fact you just mentioned together with Kirchhoff’s matrix tree theorem. But it should have some proof based on the relation between cochains on a graph and cochains on the dual graph.
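
        Here’s a quick numerical check of that normalized statement (just a sketch, assuming Python with networkx and numpy). For the cube and its dual, the octahedron, the raw products of nonzero Laplacian eigenvalues are 3072 and 2304, but dividing each by the number of vertices gives 384 spanning trees both times:

        ```python
        import networkx as nx
        import numpy as np

        def tree_count_from_spectrum(G):
            # Kirchhoff: (product of nonzero Laplacian eigenvalues) / (number of vertices)
            eigs = np.linalg.eigvalsh(nx.laplacian_matrix(G).toarray().astype(float))
            nonzero = eigs[eigs > 1e-9]
            return round(np.prod(nonzero) / G.number_of_nodes())

        print(tree_count_from_spectrum(nx.cubical_graph()))     # 384
        print(tree_count_from_spectrum(nx.octahedral_graph()))  # 384
        ```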

  24. Greg Egan says:

    Here’s a sketch of a proof that the index of the edge vector lattice L_E in the face vector lattice L_F is |F|, when the face vectors sum to zero. I’ll illustrate it with the cube as an example, so |F|=6.

    If the face vectors lie in \mathbb{R}^{|F|-1}, and the edge vectors are all differences of pairs of face vectors, then in terms of the |F| face vectors, the |F| coefficients of any edge vector are integer-linear combinations of the |F|-1 rows of a matrix like this:

    \displaystyle{ M_1 =  \left(\begin{array}{cccccc}  1 & 0 & 0 & 0 & 0 & -1 \\  0 & 1 & 0 & 0 & 0 & -1 \\  0 & 0 & 1 & 0 & 0 & -1 \\  0 & 0 & 0 & 1 & 0 & -1 \\  0 & 0 & 0 & 0 & 1 & -1 \end{array}\right)}

    For any pair of face vectors involving only the first |F|-1 faces, we can take the difference of two of these rows to obtain the edge vector, and for any pair that includes the last face, the edge vector is simply one of these rows as it stands (or its opposite). So we can certainly obtain any edge vector this way.

    Conversely, we can obtain any row of this matrix as an integer-linear combination of edge vectors, by finding a path through the dual graph from the last face to the face whose number is the row number and summing the edge vectors (or their opposites) for the edges dual to those in the path. All the face vectors we pass through can be cancelled out, leaving only those at the start and end of the path.

    Next, we need to account for the fact that the face vectors aren’t linearly independent, and rewrite this matrix replacing the last face vector with minus the sum of all the others. We then obtain a matrix like this:

    \displaystyle{ M_2 =  \left(\begin{array}{ccccc}  2 & 1 & 1 & 1 & 1 \\  1 & 2 & 1 & 1 & 1 \\  1 & 1 & 2 & 1 & 1 \\  1 & 1 & 1 & 2 & 1 \\  1 & 1 & 1 & 1 & 2 \end{array}\right)}

    All we’ve done here is add 1 to every entry and drop the last column, since we are subtracting the vector:

    (-1,-1,...,-1)

    rather than including the -1 in the last column.

    The sum of all the rows of M_2 will be the same in every column: one more than the number of rows. Since the number of rows is |F|-1, the sum of the rows will be:

    (|F|,|F|,...,|F|)

    We don’t change the determinant of M_2 if we add every other row to the last row to turn it into that sum, and then, we don’t change the determinant by subtracting that new last row, divided by |F|, from every other row. This removes the effect of adding 1 to all the entries that took us from M_1 to M_2, for the first |F|-2 rows, so we reach a matrix that agrees with M_1 for the first |F|-2 rows and |F|-1 columns, and ends with the row:

    (|F|,|F|,...,|F|)

    For our example of the cube:

    \displaystyle{ M_3 =  \left(\begin{array}{ccccc}  1 & 0 & 0 & 0 & 0 \\  0 & 1 & 0 & 0 & 0 \\  0 & 0 & 1 & 0 & 0 \\  0 & 0 & 0 & 1 & 0 \\  6 & 6 & 6 & 6 & 6 \end{array}\right)}

    So we have \mathrm{det}(M_3) = |F|.
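
    As a sanity check on the cube case (nothing deep here, just a sketch assuming Python with numpy): M_2 is the 5×5 all-ones matrix plus the identity, and since the row operations above don’t change the determinant, det(M_2) should already come out to |F| = 6.

    ```python
    import numpy as np

    # M_2 for the cube: the 5x5 all-ones matrix plus the identity
    M2 = np.ones((5, 5)) + np.eye(5)
    print(round(np.linalg.det(M2)))   # 6 = |F|
    ```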

    • John Baez says:

      Excellent! I arrived at my guess via some manipulations roughly like this, but in the case of the tetrahedron, so that generalizing it seemed a bit tricky. For the tetrahedron, all those paths you mention have length 1, so it was hard to recognize them as paths!

  25. John Baez says:

    We’ve seen that for graphs drawn on a surface, Kirchhoff’s matrix tree theorem is equivalent to a simple formula for the number of points in the quotient L_E/L_L: it’s the number of spanning trees in the dual graph.

    Note that while we’ve been using ‘face vectors’ to get ‘edge vectors’ and then ‘loop vectors’, none of the counting in the last 6 or 7 comments depended on what those face vectors actually were.

    And indeed, the lattice of loop vectors L_L has a nice meaning independent of our choice of face vectors: it’s isomorphic to the first homology group H_1(X,\mathbb{Z}) of our graph X.

    I believe the lattice of edge vectors L_E has a similar abstract interpretation: it’s some sort of subgroup or quotient group of the free abelian group on the set of edges. The free abelian group on the set of edges is usually called the group of 1-chains and denoted C_1(X,\mathbb{Z}).

    If I could figure out this abstract interpretation, I’d have a nice highbrow statement of Kirchhoff’s matrix tree theorem! The first homology group of a graph (namely L_L) sits inside some bigger free abelian group that we can get from a graph (namely L_E) and the number of points in the quotient is the number of spanning trees in the dual graph.

    And if our original graph is planar, that’s also the number of spanning trees in our original graph! It might even give a clearer way to understand Kirchhoff’s matrix tree theorem. Ideally, there’d be a natural bijection between spanning trees and points in L_E/L_L.

    Probably someone has already figured out most of this; indeed, it’s probably in Sunada’s book Topological Crystallography. But I want to understand it.

    For a second I thought the lattice of edge vectors was itself isomorphic to C_1(X,\mathbb{Z}), but of course it’s not, because the lattice of edge vectors has a smaller rank. C_1(X,\mathbb{Z}) is a free abelian group with as many generators as edges of our graph, while L_E is a free abelian group with as many generators as independent loops of our graph.

    • John Baez says:

      Sunada has a somewhat different abstract interpretation of Kirchhoff’s matrix tree theorem. It’s Theorem 10.8 in his book Topological Crystallography, though he may not have been the first to prove it.

      He starts with a graph X and lets \mathrm{Div}^0 consist of functions from vertices of X to \mathbb{Z} with the property that when you sum their values over all vertices you get zero.

      \mathrm{Div}^0 is an abelian group. Sitting inside here is a subgroup, namely the range of

      \Delta: \mathrm{Div}^0 \to \mathrm{Div}^0

      where \Delta is the graph Laplacian. Let’s call this subgroup \Delta \mathrm{Div}^0.

      He shows that the number of points of \mathrm{Div}^0 /  \Delta \mathrm{Div}^0 is the number of spanning trees in X.

      Note that this ‘summing over all vertices gives zero’ condition is dual to our condition on face vectors, where we like the sum over all faces to give zero. Note also that his formulation doesn’t mention the dual graph of X. So apparently he has moved completely over to the dual picture, as compared to us.
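
      By the way, his theorem is easy to test by computer on small examples. Here’s a sketch (assuming Python with sympy) for the tetrahedron graph \mathrm{K}_4: the Smith normal form of its Laplacian should have nonzero invariant factors 1, 4, 4, so \mathrm{Div}^0 / \Delta \mathrm{Div}^0 \cong \mathbb{Z}/4 \oplus \mathbb{Z}/4, which has 16 elements, matching the 4^2 = 16 spanning trees of \mathrm{K}_4.

      ```python
      from sympy import ZZ, eye, ones
      from sympy.matrices.normalforms import smith_normal_form

      # Laplacian of K_4: degree 3 on the diagonal, -1 everywhere else
      L = 4*eye(4) - ones(4, 4)

      # Expected Smith normal form: diag(1, 4, 4, 0), whose nonzero invariant
      # factors multiply to 16, the number of spanning trees of K_4
      print(smith_normal_form(L, domain=ZZ))
      ```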

      • John Baez says:

        And also somehow he’s managing to talk about “0-cochains” (functions from vertices to \mathbb{Z}) instead of “1-chains” (integer linear combinations of edges).

        • Tim Silverman says:

          • R. Bacher, P. de la Harpe and T. Nagnibeda, The lattice of integral flows and the lattice of integral cuts on a finite graph, Bull. Soc. Math. France 125 (1997), 167–198.

          Might be helpful if you haven’t seen it.

        • John Baez says:

          Thanks! I believe they proved some of the stuff Greg just proved, and more generally.

          Namely, for any finite directed graph \Gamma they let the space of 1-cochains C^1(\Gamma,\mathbb{Z}) consist of functions from edges to integers. This is isomorphic to the lattice I’ve been calling L_E: the lattice spanned by ‘edge vectors’.

          They define a Laplacian \Delta on 1-cochains: given a 1-cochain f, they define (\Delta f)(e) to be 2 f(e), minus the sum of f(e') over edges e' that end where e starts, minus the sum of f(e'') over edges e'' that start where e ends.

          They let \Lambda^1(\Gamma,\mathbb{Z}) consist of 1-cochains that are in the kernel of the Laplacian. This is isomorphic to the lattice that I’ve recently taken to calling L_L: the lattice spanned by ‘loop vectors’.

          Greg and I showed that

          |L_E/L_L| = |T|

          where |T| is the number of spanning trees in the dual graph (which, for a planar graph, is also the number of spanning trees in the graph itself).

          I believe their Proposition 1.3 is closely related to this result. They say:

          The determinant of \Lambda^1(\Gamma,\mathbb{Z}) is equal to the complexity of \Gamma.

          The complexity of \Gamma is just the number of spanning trees.

          Hmm, but I’m a bit puzzled about this ‘determinant’.

  26. Greg Egan says:

    I’ve just realised that some of the results we’ve proved won’t hold for graphs that embed in surfaces of higher genus, such as Klein’s quartic curve. So, those packing densities I calculated for the two graphs in Klein’s quartic curve are almost certainly wrong.

    I computed a basis for the lattice L_L of translation symmetries of the crystal by assuming that you get a basis for all the loops in the graph by following the edges around each of |F|-1 faces. But the dimension of the cycle space of a graph is always d_C = |E|-|V|+c for a graph with c connected components. For a single connected component, we have:

    d_C = |E|-|V|+1 = |F|+1-\chi

    where \chi is the Euler characteristic. For a sphere \chi = 2 and the dimension of the cycle space is d_C = |F|-1, but for Klein’s quartic curve with \chi=-4, we have d_C = |F|+5.
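
    For a concrete check, take the standard heptagonal tiling of Klein’s quartic curve, with 24 heptagons: it has |V| = 56, |E| = 84 and |F| = 24, so \chi = 56 - 84 + 24 = -4 and d_C = 84 - 56 + 1 = 29 = |F| + 5, as claimed.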

    This means that for a graph embedded in Klein’s quartic curve, there will be loops in the graph such that when we sum the associated edge vectors, the result need not lie in the lattice generated by the vectors we get from taking loops around all but one face in the graph. I say “need not” rather than “does not”, because if we’re still working in \mathbb{R}^{|F|-1} there’s the complication that the full set of |F|+5 loop vectors won’t be linearly independent, so I’m not sure exactly what will happen. The one thing that’s clear is that all integer-linear combinations of all |F|+5 loop vectors will yield translation symmetries of the crystal, whether or not that lattice in \mathbb{R}^{|F|-1} is the same as the one generated by the |F|-1 loops associated with faces.

    • John Baez says:

      You’re right. For each handle you add to a sphere, you get two noncontractible loops that cross each other: one that runs ‘around’ the handle and one that runs ‘along’ it. (Sorry, those prepositions don’t quite do justice to the geometry.) So for Klein’s quartic, which is a sphere with 3 handles, we must take into account 6 loops besides the ones that run around faces.

      The fancy way to talk about this relates the first homology group of the graph (which is a free abelian group on some number of generators) to the first homology group of the surface (which is a free abelian group on some smaller number of generators).

  27. […] I read an interesting blog post by a mathematician, John Baez, on the structure of diamonds and triamonds. […]

  28. A while back, we started talking about crystals:

    • John Baez, Diamonds and triamonds, Azimuth, 11 April 2016.

    In the comments on that post, a bunch of us worked on some puzzles connected to ‘topological crystallography’—a subject that blends graph theory, topology and mathematical crystallography. You can learn more about that subject here:

    • Toshikazu Sunada, Crystals that nature might miss creating, Notices of the AMS 55 (2008), 208–215.

    Greg Egan and I got so interested that we wrote a paper about it!

    • John Baez and Greg Egan, Topological crystals.

  29. A lot of different diamondoids occur naturally in petroleum. Though the carbon in diamonds is not biological in origin, the diamondoids found in petroleum are composed of carbon from biological sources. This was shown by studying the ratios of carbon isotopes present.

    Eric Drexler has proposed using diamondoids for nanotechnology, but he’s talking about larger molecules than those shown here.

    For more fun along these lines, try:

    • John Baez, Diamonds and triamonds, Azimuth, 11 April 2016.

  30. […] John Baez elaborates on the sequence of ‘hyperdiamonds’, together with mentioning an alternative three-dimensional structure called the triamond, here. […]
