Diamonds and Triamonds

11 April, 2016

The structure of a diamond crystal is fascinating. But there’s an equally fascinating form of carbon, called the triamond, that’s theoretically possible but never yet seen in nature. Here it is:


[Image: the K4 crystal, i.e. the triamond]

In the triamond, each carbon atom is bonded to three others at 120° angles, with one double bond and two single bonds. Its bonds lie in a plane, so we get a plane for each atom.

But here’s the tricky part: for any two neighboring atoms, these planes are different. In fact, if we draw the bond planes for all the atoms in the triamond, they come in four kinds, parallel to the faces of a regular tetrahedron!

If we discount the difference between single and double bonds, the triamond is highly symmetrical. There’s a symmetry carrying any atom and any of its bonds to any other atom and any of its bonds. However, the triamond has an inherent handedness, or chirality. It comes in two mirror-image forms.

A rather surprising thing about the triamond is that the smallest rings of atoms are 10-sided. Each atom lies in 15 of these 10-sided rings.

Some chemists have argued that the triamond should be ‘metastable’ at room temperature and pressure: that is, it should last for a while but eventually turn to graphite. Diamonds are also considered metastable, though I’ve never seen anyone pull an old diamond ring from their jewelry cabinet and discover to their shock that it’s turned to graphite. The big difference is that diamonds are formed naturally under high pressure—while triamonds, it seems, are not.

Nonetheless, the mathematics behind the triamond does find its way into nature. A while back I told you about a minimal surface called the ‘gyroid’, which is found in many places:

The physics of butterfly wings.

It turns out that the pattern of a gyroid is closely connected to the triamond! So, if you’re looking for a triamond-like pattern in nature, certain butterfly wings are your best bet:

• Matthias Weber, The gyroids: algorithmic geometry III, The Inner Frame, 23 October 2015.

Instead of trying to explain it here, I’ll refer you to the wonderful pictures at Weber’s blog.

Building the triamond

I want to tell you a way to build the triamond. I saw it here:

• Toshikazu Sunada, Crystals that nature might miss creating, Notices of the American Mathematical Society 55 (2008), 208–215.

This is the paper that got people excited about the triamond, though it was discovered much earlier by the crystallographer Fritz Laves back in 1932, and Coxeter named it the Laves graph.

To build the triamond, we can start with this graph:


[Image: the graph K4 with colored vertices and edges labeled by vectors, from Sunada's paper]

It’s called \mathrm{K}_4, since it’s the complete graph on four vertices, meaning there’s one edge between each pair of vertices. The vertices correspond to four different kinds of atoms in the triamond: let’s call them red, green, yellow and blue. The edges of this graph have arrows on them, labelled with certain vectors

e_1, e_2, e_3, f_1, f_2, f_3 \in \mathbb{R}^3

Let’s not worry yet about what these vectors are. What really matters is this: to move from any atom in the triamond to any of its neighbors, you move along the vector labeling the edge between them… or its negative, if you’re moving against the arrow.

For example, suppose you’re at any red atom. It has 3 nearest neighbors, which are blue, green and yellow. To move to the blue neighbor you add f_1 to your position. To move to the green one you subtract e_2, since you’re moving against the arrow on the edge connecting red and green. Similarly, to go to the yellow neighbor you subtract the vector f_3 from your position.

Thus, any path along the bonds of the triamond determines a path in the graph \mathrm{K}_4.

Conversely, if you pick an atom of some color in the triamond, any path in \mathrm{K}_4 starting from the vertex of that color determines a path in the triamond! However, going around a loop in \mathrm{K}_4 may not get you back to the atom you started with in the triamond.

Mathematicians summarize these facts by saying the triamond is a ‘covering space’ of the graph \mathrm{K}_4.

Now let’s see if you can figure out those vectors.

Puzzle 1. Find vectors e_1, e_2, e_3, f_1, f_2, f_3 \in \mathbb{R}^3 such that:

A) All these vectors have the same length.

B) The three vectors coming out of any vertex lie in a plane at 120° angles to each other:


[Image: the graph K4 with colored vertices and edges labeled by vectors, from Sunada's paper]

For example, f_1, -e_2 and -f_3 lie in a plane at 120° angles to each other. We put in two minus signs because two arrows are pointing into the red vertex.

C) The four planes we get this way, one for each vertex, are parallel to the faces of a regular tetrahedron.

If you want, you can even add another constraint:

D) All the components of the vectors e_1, e_2, e_3, f_1, f_2, f_3 are integers.
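If you want to check a guess by computer, here is a small Python sketch (my own addition, not from Sunada’s paper). The four triples of bond vectors below are one candidate answer, read off from the explicit triamond coordinates given later in this post; the color names are placeholders, and matching the vectors to Sunada’s labels e_1, e_2, e_3, f_1, f_2, f_3 and his arrow directions is left to the picture. If you’d rather solve the puzzle yourself first, plug in your own vectors instead.

```python
import itertools
import numpy as np

# A checker for Puzzle 1.  Each triple lists the three signed bond vectors
# leaving one vertex.  This candidate comes from the triamond coordinates
# given later in the post; it is an illustration, not the official labeling.
candidate = {
    'red':    [(-1, 0, 1), (0, 1, -1), (1, -1, 0)],
    'green':  [(1, 0, -1), (-1, -1, 0), (0, 1, 1)],
    'yellow': [(0, -1, 1), (1, 1, 0), (-1, 0, -1)],
    'blue':   [(-1, 1, 0), (0, -1, -1), (1, 0, 1)],
}

def check(candidate):
    lengths, normals = [], []
    for name, vs in candidate.items():
        vs = [np.array(v, dtype=float) for v in vs]
        lengths += [np.linalg.norm(v) for v in vs]
        # B) pairwise angles of 120 degrees; three equal-length vectors at
        #    pairwise 120 degrees sum to zero, so they are automatically coplanar
        for a, b in itertools.combinations(vs, 2):
            cos = (a @ b) / (np.linalg.norm(a) * np.linalg.norm(b))
            assert np.isclose(cos, -0.5), f"{name}: angle is not 120 degrees"
        normals.append(np.cross(vs[0], vs[1]))
    # A) all the vectors have the same length
    assert np.allclose(lengths, lengths[0]), "unequal lengths"
    # C) the four bond planes are parallel to the faces of a regular tetrahedron:
    #    their unit normals meet at arccos(1/3), up to sign
    for n1, n2 in itertools.combinations(normals, 2):
        cos = abs(n1 @ n2) / (np.linalg.norm(n1) * np.linalg.norm(n2))
        assert np.isclose(cos, 1 / 3), "planes are not tetrahedral"
    print("Conditions A)-C) hold, and D) too: all components are integers.")

check(candidate)
```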

Diamonds and hyperdiamonds

That’s the triamond. Compare the diamond:

Here each atom of carbon is connected to four others. This pattern is found not just in carbon but also in other elements in the same column of the periodic table: silicon, germanium, and tin. They all like to hook up with four neighbors.

The pattern of atoms in a diamond is called the diamond cubic. It’s elegant but a bit tricky. Look at it carefully!

To build it, we start by putting an atom at each corner of a cube. Then we put an atom in the middle of each face of the cube. If we stopped there, we would have a face-centered cubic. But there are also four more carbons inside the cube—one at the center of each tetrahedron we’ve created.

If you look really carefully, you can see that the full pattern consists of two interpenetrating face-centered cubic lattices, one offset relative to the other along the cube’s main diagonal.

The face-centered cubic is the 3-dimensional version of a pattern that exists in any dimension: the \mathrm{D}_n lattice. To build this, take an n-dimensional checkerboard and alternately color the hypercubes red and black. Then, put a point in the center of each black hypercube!

You can also get the \mathrm{D}_n lattice by taking all n-tuples of integers that sum to an even integer. Requiring that they sum to something even is a way to pick out the black hypercubes.

The diamond is also an example of a pattern that exists in any dimension! I’ll call this the hyperdiamond, but mathematicians call it \mathrm{D}_n^+, because it’s the union of two copies of the \mathrm{D}_n lattice. To build it, first take all n-tuples of integers that sum to an even integer. Then take all those points shifted by the vector (1/2, …, 1/2).
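To make this concrete, here is a minimal Python sketch, just the definition above spelled out: it lists the points of \mathrm{D}_n lying in a small box, and the points of the hyperdiamond \mathrm{D}_n^+ obtained by throwing in the shifted copy.

```python
import itertools
import numpy as np

def D(n, radius=2):
    """Integer n-tuples with even coordinate sum: the D_n lattice,
    restricted to a small box so we can list the points."""
    pts = itertools.product(range(-radius, radius + 1), repeat=n)
    return [np.array(p) for p in pts if sum(p) % 2 == 0]

def hyperdiamond(n, radius=2):
    """D_n^+: the points of D_n together with the same points
    shifted by (1/2, ..., 1/2)."""
    base = [p.astype(float) for p in D(n, radius)]
    return base + [p + 0.5 for p in base]

# In 3 dimensions, D_3 is the face-centered cubic and D_3^+ is the diamond.
print(len(D(3)), "face-centered cubic points in the box")
print(len(hyperdiamond(3)), "diamond points in the box")
```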

In any dimension, the volume of the unit cell of the hyperdiamond is 1, so mathematicians say it’s unimodular. But only in even dimensions is the sum or difference of any two points in the hyperdiamond again a point in the hyperdiamond. Mathematicians call a discrete set of points with this property a lattice.

If even dimensions are better than odd ones, how about dimensions that are multiples of 4? Then the hyperdiamond is better still: it’s an integral lattice, meaning that the dot product of any two vectors in the lattice is again an integer.

And in dimensions that are multiples of 8, the hyperdiamond is even better. It’s even, meaning that the dot product of any vector with itself is even.
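Here is a quick numerical spot check of these last two claims on a finite sample of hyperdiamond points. It is only a sanity check, not a proof: in dimension 4 the pairwise dot products come out integers, and in dimension 8 each sampled point has even dot product with itself.

```python
import itertools
import numpy as np

def sample_hyperdiamond(n):
    """A finite sample of D_n^+ points near the origin."""
    evens = [p for p in itertools.product((-1, 0, 1), repeat=n) if sum(p) % 2 == 0]
    pts = [np.array(p, dtype=float) for p in evens]
    return pts + [p + 0.5 for p in pts]

# Dimension 4: all pairwise dot products should be integers.
pts4 = sample_hyperdiamond(4)
print(all(float(u @ v).is_integer()
          for u, v in itertools.combinations(pts4, 2)))

# Dimension 8: every point should have even dot product with itself.
pts8 = sample_hyperdiamond(8)
print(all((p @ p) % 2 == 0 for p in pts8))
```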

In fact, even unimodular lattices are only possible in Euclidean space when the dimension is a multiple of 8. In 8 dimensions, the only even unimodular lattice is the 8-dimensional hyperdiamond, which is usually called the \mathrm{E}_8 lattice. The \mathrm{E}_8 lattice is one of my favorite entities, and I’ve written a lot about it in this series:

Integral octonions.

To me, the glittering beauty of diamonds is just a tiny hint of the overwhelming beauty of \mathrm{E}_8.

But let’s go back down to 3 dimensions. I’d like to describe the diamond rather explicitly, so we can see how a slight change produces the triamond.

It will be less stressful if we double the size of our diamond. So, let’s start with a face-centered cubic consisting of points whose coordinates are even integers summing to a multiple of 4. That consists of these points:

(0,0,0)   (2,2,0)   (2,0,2)   (0,2,2)

and all points obtained from these by adding multiples of 4 to any of the coordinates. To get the diamond, we take all these together with another face-centered cubic that’s been shifted by (1,1,1). That consists of these points:

(1,1,1)   (3,3,1)   (3,1,3)   (1,3,3)

and all points obtained by adding multiples of 4 to any of the coordinates.

The triamond is similar! Now we start with these points

(0,0,0)   (1,2,3)   (2,3,1)   (3,1,2)

and all the points obtained from these by adding multiples of 4 to any of the coordinates. To get the triamond, we take all these together with another copy of these points that’s been shifted by (2,2,2). That other copy consists of these points:

(2,2,2)   (3,0,1)   (0,1,3)   (1,3,0)

and all points obtained by adding multiples of 4 to any of the coordinates.
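Here is a small Python sketch that builds finite chunks of both patterns from the seed points above and counts the nearest neighbors of an interior point: 4 for the (doubled) diamond and 3 for the triamond. The squared bond lengths in these coordinates, 3 and 2 respectively, are hard-coded as comments below indicate.

```python
import itertools
import numpy as np

def expand(seeds, reps=3):
    """All points obtained from the seeds by adding multiples of 4
    (here 0, 4 or 8) to any of the coordinates."""
    shifts = itertools.product(range(0, 4 * reps, 4), repeat=3)
    return np.array([np.add(seed, shift) for shift in shifts for seed in seeds])

diamond = expand([(0, 0, 0), (2, 2, 0), (2, 0, 2), (0, 2, 2),
                  (1, 1, 1), (3, 3, 1), (3, 1, 3), (1, 3, 3)])
triamond = expand([(0, 0, 0), (1, 2, 3), (2, 3, 1), (3, 1, 2),
                   (2, 2, 2), (3, 0, 1), (0, 1, 3), (1, 3, 0)])

def neighbors(points, center, bond_length_squared):
    """Count the points at nearest-neighbor distance from a given point."""
    d2 = np.sum((points - center) ** 2, axis=1)
    return int(np.sum(d2 == bond_length_squared))

center = np.array([4, 4, 4])  # an interior point belonging to both patterns
print("diamond:", neighbors(diamond, center, 3))    # expect 4 (bond length sqrt(3))
print("triamond:", neighbors(triamond, center, 2))  # expect 3 (bond length sqrt(2))
```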

Unlike the diamond, the triamond has an inherent handedness, or chirality. You’ll note how we used the point (1,2,3) and took cyclic permutations of its coordinates to get more points. If we’d started with (3,2,1) we would have gotten the other, mirror-image version of the triamond.

Covering spaces

I mentioned that the triamond is a ‘covering space’ of the graph \mathrm{K}_4. More precisely, there’s a graph T whose vertices are the atoms of the triamond, and whose edges are the bonds of the triamond. There’s a map of graphs

p: T \to \mathrm{K}_4

This automatically means that every path in T is mapped to a path in \mathrm{K}_4. But what makes T a covering space of \mathrm{K}_4 is that any path in \mathrm{K}_4 comes from a path in T, and this path is unique once we choose its starting point.

If you’re a high-powered mathematician you might wonder if T is the universal covering space of \mathrm{K}_4. It’s not, but it’s the universal abelian covering space.

What does this mean? Any path in \mathrm{K}_4 gives a sequence of vectors drawn from e_1, e_2, e_3, f_1, f_2, f_3 and their negatives. If we pick a starting point in the triamond, this sequence describes a unique path in the triamond. When does this path get you back where you started? The answer, I believe, is this: if and only if you can take your sequence, rewrite it using the commutative law, and cancel like terms to get zero. This is related to how adding vectors in \mathbb{R}^3 is a commutative operation.


[Image: the graph K4 with colored vertices and edges labeled by vectors, from Sunada's paper]

For example, there’s a loop in \mathrm{K}_4 that goes “red, blue, green, red”. This gives the sequence of vectors

f_1, -e_3, e_2

We can turn this into an expression

f_1 - e_3 + e_2

However, we can’t simplify this to zero using just the commutative law and cancelling like terms. So, if we start at some red atom in the triamond and take the unique path that goes “red, blue, green, red”, we do not get back where we started!

Note that in this simplification process, we’re not allowed to use what the vectors “really are”. It’s a purely formal manipulation.
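Here is a tiny Python sketch of that formal manipulation, my own illustration of the rule just described: represent a path in \mathrm{K}_4 as a list of signed edge labels and check whether every label cancels against its negative. It may help with the next puzzle.

```python
from collections import Counter

def lifts_to_a_loop(edge_labels):
    """Given a path in K_4 as a list of signed edge labels, e.g.
    ['f1', '-e3', 'e2'], decide whether its lift to the triamond returns
    to its starting point: using only the commutative law and cancelling
    each label against its negative, does everything cancel to zero?"""
    total = Counter()
    for label in edge_labels:
        sign = -1 if label.startswith('-') else 1
        total[label.lstrip('-')] += sign
    return all(count == 0 for count in total.values())

print(lifts_to_a_loop(['f1', '-e3', 'e2']))         # False: the red, blue, green, red loop
print(lifts_to_a_loop(['f1', '-e3', 'e3', '-f1']))  # True: walk out two bonds and back
```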

Puzzle 2. Describe a loop of length 10 in the triamond using this method. Check that you can simplify the corresponding expression to zero using the rules I described.

A similar story works for the diamond, but starting with a different graph:


[Image: the graph for the diamond: two vertices joined by four edges, from Sunada's paper]

The graph formed by a diamond’s atoms and the edges between them is the universal abelian cover of this little graph! This graph has 2 vertices because there are 2 kinds of atom in the diamond. It has 4 edges because each atom has 4 nearest neighbors.

Puzzle 3. What vectors should we use to label the edges of this graph, so that the vectors coming out of any vertex describe how to move from that kind of atom in the diamond to its 4 nearest neighbors?

There’s also a similar story for graphene, which is a hexagonal array of carbon atoms in a plane:


[Image: a hexagonal sheet of graphene]

Puzzle 4. What graph with edges labelled by vectors in \mathbb{R}^2 should we use to describe graphene?

I don’t know much about how this universal abelian cover trick generalizes to higher dimensions, though it’s easy to handle the case of a cubical lattice in any dimension.

Puzzle 5. I described higher-dimensional analogues of diamonds: are there higher-dimensional triamonds?

References

The Wikipedia article is good:

• Wikipedia, Laves graph.

They say this graph has many names: the K4 crystal, the (10,3)-a network, the srs net, the diamond twin, and of course the triamond. The name triamond is not very logical: while each carbon has 3 neighbors in the triamond, each carbon has not 2 but 4 neighbors in the diamond. So, perhaps the diamond should be called the ‘quadriamond’. In fact, the word ‘diamond’ has nothing to do with the prefix ‘di-‘ meaning ‘two’. It’s more closely related to the word ‘adamant’. Still, I like the word ‘triamond’.

This paper describes various attempts to find the Laves graph in chemistry:

• Stephen T. Hyde, Michael O’Keeffe, and Davide M. Proserpio, A short history of an elusive yet ubiquitous structure in chemistry, materials, and mathematics, Angew. Chem. Int. Ed. 47 (2008), 7996–8000.

This paper does some calculations arguing that the triamond is a metastable form of carbon:

• Masahiro Itoh et al, New metallic carbon crystal, Phys. Rev. Lett. 102 (2009), 055703.

Abstract. Recently, mathematical analysis clarified that sp2 hybridized carbon should have a three-dimensional crystal structure (\mathrm{K}_4) which can be regarded as a twin of the sp3 diamond crystal. In this study, various physical properties of the \mathrm{K}_4 carbon crystal, especially for the electronic properties, were evaluated by first principles calculations. Although the \mathrm{K}_4 crystal is in a metastable state, a possible pressure induced structural phase transition from graphite to \mathrm{K}_4 was suggested. Twisted π states across the Fermi level result in metallic properties in a new carbon crystal.

The picture of the \mathrm{K}_4 crystal was placed on Wikicommons by someone named ‘Workbit’, under a Creative Commons Attribution-Share Alike 4.0 International license. The picture of the tetrahedron was made using Robert Webb’s Stella software and placed on Wikicommons. The pictures of graphs come from Sunada’s paper, though I modified the picture of \mathrm{K}_4. The moving image of the diamond cubic was created by H.K.D.H. Bhadeshia and put into the public domain on Wikicommons. The picture of graphene was drawn by Dr. Thomas Szkopek and put into the public domain on Wikicommons.


Mathematics in Biochemical Kinetics and Engineering

23 March, 2016

Anyone who was interested in the Workshop on Mathematical Trends in Reaction Network Theory last summer in Copenhagen might be interested in this:

Mathematics in (bio)Chemical Kinetics and Engineering (MaCKiE 2017), Budapest, 25–27 May, 2017.

This conference is planned so that it starts right after another one: the 14th Joint European Thermodynamics Conference will be in Budapest from the 21st to the 25th.

Since its first event in 2002, the MaCKiE workshop has been organized every second year. The previous meetings were held in Ghent (Belgium), Chennai (India), Heidelberg (Germany), and Houston (USA). The meeting aims to bring together scientists interested in the application of advanced mathematical methods to describe kinetic phenomena, especially chemists, mathematicians, physicists, biologists, and engineers. The acronym MaCKiE naturally comes from the title of the conference, but is also part of the German name of Mack the Knife in Brecht and Weill’s Threepenny Opera, Mackie Messer, and is phonetically indistinguishable from “makkie” in Dutch, optimistically meaning “a cinch”.

Conference papers will be published in Reaction Kinetics, Mechanisms and Catalysis in early 2018.


Glycolysis (Part 2)

18 January, 2016

Glycolysis is a way that organisms can get free energy from glucose without needing oxygen. Animals like you and me can do glycolysis, but we get more free energy from oxidizing glucose. Other organisms are anaerobic: they don’t need oxygen. And some, like yeast, survive mainly by doing glycolysis!

If you put yeast cells in water containing a constant low concentration of glucose, they convert it into alcohol at a constant rate. But if you increase the concentration of glucose something funny happens. The alcohol output starts to oscillate!

It’s not that the yeast is doing something clever and complicated. If you break down the yeast cells, killing them, this effect still happens. People think these oscillations are inherent to the chemical reactions in glycolysis.

I learned this after writing Part 1, thanks to Alan Rendall. I first met Alan when we were both working on quantum gravity. But last summer I met him in Copenhagen, where we were both attending the workshop Trends in reaction network theory. It turned out that now he’s deep into the mathematics of biochemistry, especially chemical oscillations! He has a blog, and he’s written some great articles on glycolysis:

• Alan Rendall, Albert Goldbeter and glycolytic oscillations, Hydrobates, 21 January 2012.

• Alan Rendall, The Higgins–Selkov oscillator, Hydrobates, 14 May 2014.

In case you’re wondering, Hydrobates is the name of a kind of sea bird, the storm petrel. Alan is fond of sea birds. Since the ultimate goal of my work is to help our relationship with nature, this post is dedicated to the storm petrel:

The basics

Last time I gave a summary description of glycolysis:

glucose + 2 NAD+ + 2 ADP + 2 phosphate →
2 pyruvate + 2 NADH + 2 H+ + 2 ATP + 2 H2O

The idea is that a single molecule of glucose:

gets split into two molecules of pyruvate:

The free energy released from this process is used to take two molecules of adenosine diphosphate or ADP:

and attach to each one a phosphate group, typically found as phosphoric acid:

thus producing two molecules of adenosine triphosphate or ATP:

along with 2 molecules of water.

But in the process, something else happens too! 2 molecules of nicotinamide adenine dinucleotide NAD get reduced. That is, they change from the oxidized form called NAD+:

to the reduced form called NADH, along with two protons: that is, 2 H+.

Puzzle 1. Why does NAD+ have a little plus sign on it, despite the two O⁻’s in the picture above?

Left alone in water, ATP spontaneously converts back to ADP and phosphate:

ATP + H2O → ADP + phosphate

This process gives off 30.5 kilojoules of energy per mole. The cell harnesses this to do useful work by coupling this reaction to others. Thus, ATP serves as ‘energy currency’, and making it is the main point of glycolysis.

The cell can also use NADH to do interesting things. It generally has more free energy than NAD+, so it can power things while turning back into NAD+. Just how much more free energy it has depends a lot on conditions in the cell: for example, on the pH.

Puzzle 2. There is often roughly 700 times as much NAD+ as NADH in the cytoplasm of mammals. In these conditions, what is the free energy difference between NAD+ and NADH? I think this is something you’re supposed to be able to figure out.

Nothing in what I’ve said so far gives any clue about why glycolysis might exhibit oscillations. So, we have to dig deeper.

Some details

Glycolysis actually consists of 10 steps, each mediated by its own enzyme. Click on this picture to see all these steps:


If your eyes tend to glaze over when looking at this, don’t feel bad—so do mine. There’s a lot of information here. But if you look carefully, you’ll see that the 1st and 3rd stages of glycolysis actually convert 2 ATP’s to ADP, while the 7th and 10th convert 4 ADP’s to ATP. So, the early steps require free energy, while the later ones double this investment. As the saying goes, “it takes money to make money”.

This nuance makes it clear that if a cell starts with no ATP, it won’t be able to make ATP by glycolysis. And if it has just a small amount of ATP, it won’t be very good at making it this way.

In short, this affects the dynamics in an important way. But I don’t see how it could explain oscillations in how much ATP is manufactured from a constant supply of glucose!

We can look up the free energy changes for each of the 10 reactions in glycolysis. Here they are, named by the enzymes involved:


I got this from here:

• Leslie Frost, Glycolysis.

I think these are her notes on Chapter 14 of Voet, Voet, and Pratt’s Fundamentals of Biochemistry. But again, I don’t think these explain the oscillations. So we have to look elsewhere.

Oscillations

By some careful detective work—by replacing the input of glucose by an input of each of the intermediate products—biochemists figured out which step causes the oscillations. It’s the 3rd step, where fructose-6-phosphate is converted into fructose-1,6-bisphosphate, powered by the conversion of ATP into ADP. The enzyme responsible for this step is called phosphofructokinase or PFK. And it turns out that PFK works better when there is ADP around!

In short, the reaction network shown above is incomplete: ADP catalyzes its own formation in the 3rd step.

How does this lead to oscillations? The Higgins–Selkov model is a scenario for how it might happen. I’ll explain this model, offering no evidence that it’s correct. And then I’ll take you to a website where you can see this model in action!

Suppose that fructose-6-phosphate is being produced at a constant rate. And suppose there’s some other reaction, which we haven’t mentioned yet, that uses up ADP at a constant rate. Suppose also that it takes two ADP’s to catalyze the 3rd step. So, we have these reactions:

→ fructose-6-phosphate
fructose-6-phosphate + 2 ADP → 3 ADP
ADP →

Here the blanks mean ‘nothing’, or more precisely ‘we don’t care’. The fructose-6-phosphate is coming in from somewhere, but we don’t care where it’s coming from. The ADP is going away, but we don’t care where. We’re also ignoring the ATP that’s required for the second reaction, and the fructose-1,6-bisphosphate that’s produced by this reaction. All these features are irrelevant to the Higgins–Selkov model.

Now suppose there’s initially a lot of ADP around. Then the fructose-6-phosphate will quickly be used up, creating even more ADP. So we get even more ADP!

But as this goes on, the amount of fructose-6-phosphate sitting around will drop. So, eventually the production of ADP will drop. Thus, since we’re positing a reaction that uses up ADP at a constant rate, the amount of ADP will start to drop.

Eventually there will be very little ADP. Then it will be very hard for fructose-6-phosphate to get used up. So, the amount of fructose-6-phosphate will start to build up!

Of course, whatever ADP is still left will help use up this fructose-6-phosphate and turn it into ADP. This will increase the amount of ADP. So eventually we will have a lot of ADP again.

We’re back where we started. And so, we’ve got a cycle!

Of course, this story doesn’t prove anything. We should really take our chemical reaction network and translate it into some differential equations for the amount of fructose-6-phosphate and the amount of ADP. In the Higgins–Selkov model people sometimes write just ‘S’ for fructose-6-phosphate and ‘P’ for ADP. (In case you’re wondering, S stands for ‘substrate’ and P stands for ‘product’.) So, our chemical reaction network becomes

→ S
S + 2P → 3P
P →

and using the law of mass action we get these equations:

\displaystyle{ \frac{d S}{dt} = v_0 - k_1 S P^2 }

\displaystyle{ \frac{d P}{dt} = k_1 S P^2 - k_2 P }

where S and P stand for how much S and P we have, respectively, and v_0, k_1, k_2 are some constants.

Now we can solve these differential equations and see if we get oscillations. The answer depends on the constants v_0, k_1, k_2 and also perhaps the initial conditions.
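If you’d like to experiment yourself, here is a minimal Python sketch that integrates these equations numerically. The constants and initial conditions are illustrative guesses, not the values from the website mentioned below, so tweak them to explore when oscillations appear or disappear.

```python
import numpy as np
from scipy.integrate import solve_ivp
import matplotlib.pyplot as plt

# Illustrative constants and initial conditions (not the ones on Martin's
# website): change them to see when oscillations appear or disappear.
v0, k1, k2 = 0.5, 1.0, 1.0

def higgins_selkov(t, y):
    S, P = y
    return [v0 - k1 * S * P**2,
            k1 * S * P**2 - k2 * P]

solution = solve_ivp(higgins_selkov, (0, 200), [1.0, 1.0], max_step=0.1)

plt.plot(solution.t, solution.y[0], label='S (fructose-6-phosphate)')
plt.plot(solution.t, solution.y[1], label='P (ADP)')
plt.xlabel('time')
plt.legend()
plt.show()
```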

To see what actually happens, try this website:

• Mike Martin, Glycolytic oscillations: the Higgins–Selkov model.

If you run it with the constants and initial conditions given to you, you’ll get oscillations. You’ll also get this vector field on the S,P plane, showing how the system evolves in time:


This is called a phase portrait, and it’s a standard tool for studying first-order differential equations where two variables depend on time.

This particular phase portrait shows an unstable fixed point and a limit cycle. That’s jargon for saying that in these conditions, the system will tend to oscillate. But if you adjust the constants, the limit cycle will go away! The appearance or disappearance of a limit cycle like this is called a Hopf bifurcation.

For details, see:

• Alan Rendall, Dynamical systems, Chapter 11: Oscillations.

He shows that the Higgins–Selkov model has a unique stationary solution (i.e. fixed point), which he describes. By linearizing it, he finds that this fixed point is stable when v_0 (the inflow of S) is less than a certain value, and unstable when it exceeds that value.
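A quick numerical way to see this transition is to compute the Jacobian eigenvalues at the stationary solution for a range of values of v_0. The sketch below does this for illustrative values of k_1 and k_2; the fixed point is stable exactly when both eigenvalues have negative real part.

```python
import numpy as np

def eigenvalues_at_fixed_point(v0, k1=1.0, k2=1.0):
    # The unique positive stationary solution of the Higgins-Selkov equations:
    P = v0 / k2
    S = k2 / (k1 * P)
    # Jacobian of (dS/dt, dP/dt) at that point:
    J = np.array([[-k1 * P**2, -2 * k1 * S * P],
                  [ k1 * P**2,  2 * k1 * S * P - k2]])
    return np.linalg.eigvals(J)

for v0 in [0.2, 0.5, 1.0, 2.0, 5.0]:
    eigs = eigenvalues_at_fixed_point(v0)
    print(f"v0 = {v0}: real parts of eigenvalues = {np.real(eigs).round(3)}")
```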

In the unstable case, if the solutions are all bounded as t \to \infty there must be a periodic solution. In the course notes he shows this for a simpler model of glycolysis, the Schnakenberg model. In some separate notes he shows it for the Higgins–Selkov model, at least for certain values of the parameters:

• Alan Rendall, The Higgins–Selkov oscillator.


Glycolysis (Part 1)

8 January, 2016

I’m trying to understand some biology. Being a mathematician I’m less interested in all the complicated details of life on this particular planet than in something a bit more abstract. I want to know ‘the language of life’: the right way to talk about living systems.

Of course, there’s no way to reach this goal without learning a lot of the complicated details. But I might as well be honest and state my goal, since it’s bound to put a strange spin on how I learn and talk about biology.

For example, when I heard people were using the pi-calculus to model a very simple bacterium, I wasn’t eager to know how close their model is to the Last Universal Ancestor, the primordial bug from which we all descend. Even though it’s a fascinating question, it’s not one I can help solve. Instead, I wanted to know if the pi-calculus is really the best language for this kind of model.

I also wanted to know what types of chemical reactions are needed for a cell to survive. I’ll never remember all the details of those reactions: I don’t have the right kind of mind for that. But I might manage to think about these reactions in abstract ways that biologists haven’t tried.

So, when I read this:

The minimal gene set prokaryote has been exhaustively described in the enhanced π-calculus. We represented the 237 genes, their relative products, and the metabolic pathways expressed and regulated by the genes, as the corresponding processes and channels. In particular: the glycolytic pathway, the pentose phosphate pathway, the pathways involved in nucleotide, aminoacids, coenzyme, lipids, and glycerol metabolism.

I instantly wanted to get an overall view of these reactions, without immersing myself in all the details.

Unfortunately I don’t know how to do this. Do you?

It might be like trying to learn grammar without learning vocabulary: not very easy, and perhaps unwise.

But I bet there’s a biochemistry textbook that would help me: one that focuses on the forest before getting into the names of all the trees. I may have even seen such a book! I’ve certainly tried to learn biochemistry. It’s a perfectly fascinating subject. But it’s only recently that I’ve gotten serious about chemical reaction networks and nonequilibrium thermodynamics. This may help guide my studies now.

Anyway, let me start with the ‘glycolytic pathway’. Glycolysis is the process of breaking down a sugar called glucose, thereby powering the formation of ATP, which holds energy in a form that the cell can easily use to do many things.

Glycolysis looks pretty complicated, at least if you’re a mathematician:


But when you’re trying to understand the activities of a complicated criminal network, a good slogan is ‘follow the money’. And for a chemical reaction network, you can ‘follow the conserved quantities’. We’ve got various kinds of atoms—hydrogen, carbon, nitrogen, oxygen, phosphorus—and the number of each kind is conserved. That should help us follow what’s going on.
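As a tiny illustration of ‘following the conserved quantities’, here is a Python sketch that tallies atoms on both sides of the summary reaction for glycolysis (the black-boxed version given below). The molecular formulas are standard textbook ones, but hydrogen and charge bookkeeping depend on which protonation states you pick, so the check sticks to carbon, nitrogen, oxygen and phosphorus.

```python
import re
from collections import Counter

# Standard textbook molecular formulas.  Hydrogen counts and charges depend
# on protonation states, so the check below only tallies C, N, O and P.
formulas = {
    'glucose':   'C6H12O6',
    'NAD+':      'C21H27N7O14P2',
    'ADP':       'C10H15N5O10P2',
    'phosphate': 'H3PO4',
    'pyruvate':  'C3H4O3',
    'NADH':      'C21H29N7O14P2',
    'ATP':       'C10H16N5O13P3',
    'H+':        'H',
    'H2O':       'H2O',
}

def atoms(formula):
    counts = Counter()
    for element, number in re.findall(r'([A-Z][a-z]?)(\d*)', formula):
        counts[element] += int(number) if number else 1
    return counts

def tally(side):
    total = Counter()
    for coefficient, name in side:
        for element, number in atoms(formulas[name]).items():
            total[element] += coefficient * number
    return total

reactants = tally([(1, 'glucose'), (2, 'NAD+'), (2, 'ADP'), (2, 'phosphate')])
products  = tally([(2, 'pyruvate'), (2, 'NADH'), (2, 'H+'), (2, 'ATP'), (2, 'H2O')])

for element in ['C', 'N', 'O', 'P']:
    print(element, reactants[element], '=', products[element])
```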

Energy is also conserved, and that’s incredibly important in thermodynamics. Free energy—energy in forms that are actually useful—is not conserved. But it’s still very good to follow it, since while it can go away, turning into heat, it essentially never appears out of nowhere.

The usual definition of free energy is something like

F = E - TS

where E is energy, T is temperature and S is entropy. You can think of this roughly as “energy minus energy in the form of heat”. There’s a lot more to say here, but I just want to add that free energy can also be interpreted as ‘relative information’, a purely information-theoretic concept. For an explanation, see Section 4 of this paper:

• John Baez and Blake Pollard, Relative entropy in biological systems. (Blog article here.)

Since I like abstract generalities, this information-theoretic way of understanding free energy appeals to me.

And of course free energy is useful, so an organism should care about it—and we should be able to track what an organism actually does with it. This is one of my main goals: understanding better what it means for a system to ‘do something with free energy’.

In glycolysis, some of the free energy of glucose gets transferred to ATP. ATP is a bit like ‘money’: it carries free energy in a way that the cell can easily ‘spend’ to do interesting things. So, at some point I want to look at an example of how the cell actually spends this money. But for now I want to think about glycolysis—which may be more like ‘cashing a check and getting money’.

First, let’s see what we get if we ‘black-box’ glycolysis. I’ve written about black-boxing electrical circuits and Markov processes: it’s a way to ignore their inner workings and focus on the relation between inputs and outputs.

Blake Pollard and I are starting to study the black-boxing of chemical reaction networks. If we black-box glycolysis, we get this:

glucose + 2 NAD+ + 2 ADP + 2 phosphate →
2 pyruvate + 2 NADH + 2 H+ + 2 ATP + 2 H2O

I’ll talk about NAD+ and NADH later; let’s temporarily ignore those.

A molecule of glucose has more free energy than 2 pyruvate molecules plus 2 water molecules. On the other hand, ADP + phosphate has less free energy than ATP. So, glycolysis is taking free energy from glucose and putting some of it into the handy form of ATP molecules. And a natural question is: how efficient is this reaction? How much free energy gets wasted?

Here’s an interesting paper that touches indirectly on this question:

• Daniel A. Beard, Eric Babson, Edward Curtis and Hong Qian, Thermodynamic constraints for biochemical networks, Journal of Theoretical Biology 228 (2004), 327–333.

They develop a bunch of machinery for studying chemical reaction networks, which I hope to explain someday. (Mathematicians will be delighted to hear that they use matroids, a general framework for studying linear dependence. Biochemists may be less delighted.) Then they apply this machinery to glycolysis, using computers to do some calculations, and they conclude:

Returning to the original problem of ATP production in energy metabolism, and searching for the flux vector that maximizes ATP production while satisfying the mass balance constraint and the thermodynamic constraint, we find that at most 20.5 ATP are produced for each glucose molecule consumed.

So, they’re getting some upper bound on how good glycolysis could actually be!

Puzzle 1. What upper bounds can you get simply from free energy considerations?

For example, ignore NADH and NAD+ for a second, and ask how much ATP you could make from turning a molecule of glucose into pyruvate and water if free energy were the only consideration. To answer this, you could take the free energy of a mole of glucose minus the free energy of the corresponding amount of pyruvate and water, and divide it by the free energy of a mole of ATP minus the free energy of the corresponding amount of ADP and phosphate. What do you get?
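Here is that arithmetic spelled out as a tiny sketch. The first number is purely a placeholder: you’d need to look up the free energy change for glucose turning into pyruvate and water under the conditions you care about, and the answer depends a lot on those conditions. Only the 30.5 kJ/mol figure for ATP comes from the value quoted in Part 2.

```python
# The arithmetic from the paragraph above, as a sketch.
# The first number is a PLACEHOLDER: look up the free energy change for
# glucose -> 2 pyruvate + 2 H2O under the conditions you care about.
dG_glucose_to_pyruvate = -150.0   # kJ/mol, hypothetical placeholder value
dG_ATP_from_ADP        = +30.5    # kJ/mol, the figure quoted in Part 2

max_ATP_per_glucose = -dG_glucose_to_pyruvate / dG_ATP_from_ADP
print(round(max_ATP_per_glucose, 1), "ATP per glucose, ignoring NADH")
```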

Puzzle 2. How do NADH and NAD+ fit into the story? In the last paragraph I ignored those. We shouldn’t really do that! NAD+ is an oxidized form of nicotinamide adenine dinucleotide. NADH is the reduced form of the same chemical. In our cells, NADH has more free energy than NAD+. So, besides producing ‘free energy money’ in the form of ATP, glycolysis is producing it in the form of NADH! This should improve our upper bound on how much ATP could be produced by glycolysis.

However, the cell uses NADH for more than just ‘money’. It uses NADH to oxidize other chemicals and NAD+ to reduce them. Reduction and oxidation are really important in chemistry, including biochemistry. I need to understand this whole redox business better. Right now my guess is that it’s connected to yet another conserved quantity, which I haven’t mentioned so far.

Puzzle 3. What conserved quantity is that?


An Electron in Water

30 December, 2015

What happens when a fast-moving electron hits water?

This question is important for understanding the effects of ionizing radiation, but it’s also just cool. I learned the answer here:

• H. Haberland and K. H. Bowen, Solvated electron clusters, Springer Series in Chemical Physics 56 (1994), 134–153.

There are four stages. To get a feel for these, it helps to remember that a femtosecond is 10⁻¹⁵ seconds. This is roughly the time it takes for light to travel a third of a micron—about 5000 times the radius of a hydrogen atom.

1. Thermalization

In 10 to 20 femtoseconds, our electron slows down due to its interaction with water molecules. It transfers energy to these molecules by knocking their electrons to higher energy levels or even knocking them off entirely. It does this until its energy has dropped to about 5 electron volts.

This is still quite energetic: for comparison, a lone electron at room temperature would have an energy of about 0.037 electron volts. Still, at this point we say the electron is thermalized. From now on it exchanges energy with water mainly by exciting motions of the water molecules.

2. Localization

Metals have a conduction band, a range of energies such that electrons with these energies can move freely. But materials called dielectrics, which do not conduct electricity well, also have a conduction band! The difference is that for metals, the conduction band has an energy low enough that electrons can easily jump into it. For dielectrics, the conduction band has an energy so high that it’s usually unoccupied.

Water is a dielectric. So, its conduction band is mostly empty. But even after our fast-moving electron is thermalized, it still has enough energy to occupy the conduction band! So, it moves along through the water!

It does this for about 110 to 180 femtoseconds until it finds a more localized state with about the same energy it has. You see, an electron in the conduction band has a wavefunction that’s very spread out. But there are also some states of about the same energy where the wavefunction is not so spread out. These are called shallow trap states. ‘Trap state’ means that the electron is stuck in one place. ‘Shallow’ means that the energy is still fairly high—there are also ‘deep trap states’.

All this sounds a bit abstract. What are these shallow trap states actually like? I think the electron finds itself a home in a little region that momentarily happens not to contain water molecules.

3. Solvation

Next, the water molecules around the newly trapped electron start reacting to its electric field. As this happens, the energy of the electron decreases. People say the electron “digs its own trap”. I like that metaphor!

This takes about 240 femtoseconds. When this is done, we have what’s called a solvated electron:

This picture is by Michael Tauber. The electron looks huge, but that’s just because its wavefunction is fairly spread out. He says this picture shows the ‘first and second coordination shells’ around the solvated electron.

4. Recombination

In the final stage, the electron combines with a positively charged ion. How long this takes depends radically on how many positively charged ions happen to be around.

For example, even in pure water there are some lone protons floating around. Not many! At 25 degrees Celsius, there is one lone proton per half billion molecules of water. But eventually the electron may combine with one of those and form a hydrogen atom.

In reality, it’s not quite that simple. A proton floating around in water will almost surely have already attached itself to a water molecule, forming a hydronium ion, H3O+:


And hydronium is still positively charged, so it will attract electrons in other water molecules. It will stick on to them in various ways, the two most famous being the Eigen cation H₉O₄⁺:


and the Zundel cation H₅O₂⁺:


These in turn are still positively charged, so they attract more water molecules, making bigger and bigger structures, which I discussed in detail here:

Water, Azimuth, 29 November 2013.

Here’s a picture of one, called the protonated 21-water cluster:

Presumably when our lone electron meets these structures, they fall apart like a house of cards! I don’t know.


Trends in Reaction Network Theory (Part 2)

1 July, 2015

Here in Copenhagen we’ll soon be having a bunch of interesting talks on chemical reaction networks:

Workshop on Mathematical Trends in Reaction Network Theory, 1-3 July 2015, Department of Mathematical Sciences, University of Copenhagen. Organized by Elisenda Feliu and Carsten Wiuf.

Looking through the abstracts, here are a couple that strike me.

First of all, Gheorghe Craciun claims to have proved the biggest open conjecture in this field: the Global Attractor Conjecture!

• Gheorghe Craciun, Toric differential inclusions and a proof of the global attractor conjecture.

This famous old conjecture says that for a certain class of chemical reactions, the ones coming from ‘complex balanced reaction networks’, the chemicals will approach equilibrium no matter what their initial concentrations are. Here’s what Craciun says:

Abstract. In a groundbreaking 1972 paper Fritz Horn and Roy Jackson showed that a complex balanced mass-action system must have a unique locally stable equilibrium within any compatibility class. In 1974 Horn conjectured that this equilibrium is a global attractor, i.e., all solutions in the same compatibility class must converge to this equilibrium. Later, this claim was called the Global Attractor Conjecture, and it was shown that it has remarkable implications for the dynamics of large classes of polynomial and power-law dynamical systems, even if they are not derived from mass-action kinetics. Several special cases of this conjecture have been proved during the last decade. We describe a proof of the conjecture in full generality. In particular, it will follow that all detailed balanced mass action systems and all deficiency zero mass-action systems have the global attractor property. We will also discuss some implications for biochemical mechanisms that implement noise filtering and cellular homeostasis.

Manoj Gopalkrishnan wrote a great post explaining the concept of complex balanced reaction network here on Azimuth, so if you want to understand the conjecture you could start there.

Even better, Manoj is talking here about a way to do statistical inference with chemistry! His talk is called ‘Statistical inference with a chemical soup’:

Abstract. The goal is to design an “intelligent chemical soup” that can do statistical inference. This may have niche technological applications in medicine and biological research, as well as provide fundamental insight into the workings of biochemical reaction pathways. As a first step towards our goal, we describe a scheme that exploits the remarkable mathematical similarity between log-linear models in statistics and chemical reaction networks. We present a simple scheme that encodes the information in a log-linear model as a chemical reaction network. Observed data is encoded as initial concentrations, and the equilibria of the corresponding mass-action system yield the maximum likelihood estimators. The simplicity of our scheme suggests that molecular environments, especially within cells, may be particularly well suited to performing statistical computations.

It’s based on this paper:

• Manoj Gopalkrishnan, A scheme for molecular computation of maximum likelihood estimators for log-linear models.

I’m not sure, but this idea may exploit existing analogies between the approach to equilibrium in chemistry, the approach to equilibrium in evolutionary game theory, and statistical inference. You may have read Marc Harper’s post about that stuff!

David Doty is giving a broader review of ‘Computation by (not about) chemistry’:

Abstract. The model of chemical reaction networks (CRNs) is extensively used throughout the natural sciences as a descriptive language for existing chemicals. If we instead think of CRNs as a programming language for describing artificially engineered chemicals, what sorts of computations are possible for these chemicals to achieve? The answer depends crucially on several formal choices:

1) Do we treat matter as infinitely divisible (real-valued concentrations) or atomic (integer-valued counts)?

2) How do we represent the input and output of the computation (e.g., Boolean presence or absence of species, positive numbers directly represented by counts/concentrations, positive and negative numbers represented indirectly by the difference between counts/concentrations of a pair of species)?

3) Do we assume mass-action rate laws (reaction rates proportional to reactant counts/concentrations) or do we insist the system works correctly under a broader class of rate laws?

The talk will survey several recent results and techniques. A primary goal of the talk is to convey the “programming perspective”: rather than asking “What does chemistry do?”, we want to understand “What could chemistry do?” as well as “What can chemistry provably not do?”

I’m really interested in chemical reaction networks that appear in biological systems, and there will be lots of talks about that. For example, Ovidiu Radulescu will talk about ‘Taming the complexity of biochemical networks through model reduction and tropical geometry’. Model reduction is the process of simplifying complicated models while preserving at least some of their good features. Tropical geometry is a cool version of algebraic geometry that uses the real numbers with minimization as addition and addition as multiplication. This number system underlies the principle of least action, or the principle of minimum energy. Here is Radulescu’s abstract:

Abstract. Biochemical networks are used as models of cellular physiology with diverse applications in biology and medicine. In the absence of objective criteria to detect essential features and prune secondary details, networks generated from data are too big and therefore out of the applicability of many mathematical tools for studying their dynamics and behavior under perturbations. However, under circumstances that we can generically denote by multi-scaleness, large biochemical networks can be approximated by smaller and simpler networks. Model reduction is a way to find these simpler models that can be more easily analyzed. We discuss several model reduction methods for biochemical networks with polynomial or rational rate functions and propose as their common denominator the notion of tropical equilibration, meaning finite intersection of tropical varieties in algebraic geometry. Using tropical methods, one can strongly reduce the number of variables and parameters of biochemical network. For multi-scale networks, these reductions are computed symbolically on orders of magnitude of parameters and variables, and are valid in wide domains of parameter and phase spaces.

I’m talking about the analogy between probabilities and quantum amplitudes, and how this makes chemistry analogous to particle physics. You can see two versions of my talk here, but I’ll be giving the ‘more advanced’ version, which is new:

Probabilities versus amplitudes.

Abstract. Some ideas from quantum theory are just beginning to percolate back to classical probability theory. For example, the master equation for a chemical reaction network describes the interactions of molecules in a stochastic rather than quantum way. If we look at it from the perspective of quantum theory, this formalism turns out to involve creation and annihilation operators, coherent states and other well-known ideas—but with a few big differences.

Anyway, there are a lot more talks, but if I don’t have breakfast and walk over to the math department, I’ll miss those talks!

You can learn more about individual talks in the comments here (see below) and also in Matteo Polettini’s blog:

• Matteo Polettini, Mathematical trends in reaction network theory: part 1 and part 2, Out of Equilibrium, 1 July 2015.


Resource Convertibility (Part 3)

13 April, 2015

guest post by Tobias Fritz

In Part 1 and Part 2, we learnt about ordered commutative monoids and how they formalize theories of resource convertibility and combinability. In this post, I would like to say a bit about the applications that have been explored so far. First, the study of resource theories has become a popular subject in quantum information theory, and many of the ideas in my paper actually originate there. I’ll list some references at the end. So I hope that the toolbox of ordered commutative monoids will turn out to be useful for this. But here I would like to talk about an example application that is much easier to understand, but no less difficult to analyze: graph theory and the resource theory of zero-error communication.

A graph consists of a bunch of nodes connected by a bunch of edges, for example like this:

This particular graph is the pentagon graph or 5-cycle. To give it some resource-theoretic interpretation, think of it as the distinguishability graph of a communication channel, where the nodes are the symbols that can be sent across the channel, and two symbols share an edge if and only if they can be unambiguously decoded. For example, the pentagon graph roughly corresponds to the distinguishability graph of my handwriting, when restricted to five letters only:

So my ‘w’ is distinguishable from my ‘u’, but it may be confused for my ‘m’. In order to communicate unambiguously, it looks like I should restrict myself to using only two of those letters in writing, since among any three of them, some two may be mistaken for each other. But alternatively, I could use a block code to create context around each letter which allows for perfect disambiguation. This is what happens in practice: I write in natural language, where an entire word is usually not ambiguous.

One can now also consider graph homomorphisms, which are maps like this:

The numbers on the nodes indicate where each node on the left gets mapped to. Formally, a graph homomorphism is a function taking nodes to nodes such that adjacent nodes get mapped to adjacent nodes. If a homomorphism G\to H exists between graphs G and H, then we also write H\geq G; in terms of communication channels, we can interpret this as saying that H simulates G, since the homomorphism provides a map between the symbols which preserves distinguishability. A ‘code’ for a communication channel is then just a homomorphism from the complete graph in which all nodes share an edge to the graph which describes the channel. With this ordering structure, the collection of all finite graphs forms an ordered set. This ordered set has an intricate structure which is intimately related to some big open problems in graph theory.

We can also combine two communication channels to form a compound one. Going back to the handwriting example, we can consider the new channel in which the symbols are pairs of letters. Two such pairs are distinguishable if and only if either the first letters of each pair are distinguishable or the second letters are,

(a,b) \sim (a',b') \:\Leftrightarrow\: a\sim a' \:\lor\: b\sim b'

When generalized to arbitrary graphs, this yields the definition of disjunctive product of graphs. It is not hard to show that this equips the ordered set of graphs with a binary operation compatible with the ordering, so that we obtain an ordered commutative monoid denoted Grph. It mathematically formalizes the resource theory of zero-error communication.
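For readers who like to compute, here is a small Python sketch (mine, not from the paper) of the two ingredients just described: the disjunctive product, and a brute-force test for the existence of a graph homomorphism, which is the ordering H \geq G.

```python
import itertools

# Graphs as (list of nodes, set of 2-element frozensets): a sketch, not a library.
def pentagon():
    nodes = list(range(5))
    edges = {frozenset((i, (i + 1) % 5)) for i in range(5)}
    return nodes, edges

def complete(n):
    nodes = list(range(n))
    return nodes, {frozenset(e) for e in itertools.combinations(nodes, 2)}

def disjunctive_product(G, H):
    """(a,b) ~ (a',b')  iff  a ~ a'  or  b ~ b'."""
    (A, E), (B, F) = G, H
    nodes = list(itertools.product(A, B))
    edges = {frozenset((u, v))
             for u, v in itertools.combinations(nodes, 2)
             if frozenset((u[0], v[0])) in E or frozenset((u[1], v[1])) in F}
    return nodes, edges

def has_homomorphism(G, H):
    """Brute force: is there a map sending every edge of G to an edge of H?"""
    (A, E), (B, F) = G, H
    for images in itertools.product(B, repeat=len(A)):
        f = dict(zip(A, images))
        if all(frozenset((f[u], f[v])) in F for u, v in map(tuple, E)):
            return True
    return False

C5 = pentagon()
print(has_homomorphism(C5, complete(5)))   # True:  K_5 simulates C_5
print(has_homomorphism(complete(3), C5))   # False: no 3-symbol code for the pentagon channel
print(len(disjunctive_product(C5, C5)[1]), "edges in the disjunctive product C_5 * C_5")
```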

Using the toolbox of ordered commutative monoids combined with some concrete computations on graphs, one can show that Grph is not cancellative: if K_{11} is the complete graph on 11 nodes, then 3C_5\not\geq K_{11}, but there exists a graph G such that

3 C_5 + G \geq K_{11} + G

The graph G turns out to have 136 nodes. This result seems to be new. But if you happen to have seen something like this before, please let me know!

Last time, we also talked about rates of conversion. In Grph, it turns out that some of these correspond to famous graph invariants! For example, the rate of conversion from a graph G to the single-edge graph K_2 is the Shannon capacity \Theta(\overline{G}), where \overline{G} is the complement graph. This is no surprise, since \Theta was originally defined by Shannon with precisely this rate in mind, although he did not use the language of ordered commutative monoids. In any case, the Shannon capacity \Theta(\overline{G}) is a graph invariant notorious for its complexity: it is not known whether there exists an algorithm to compute it! But an application of the Rate Theorem from Part 2 gives us a formula for the Shannon capacity:

\Theta(\overline{G}) = \inf_f f(G)

where f ranges over all graph invariants which are monotone under graph homomorphisms, multiplicative under disjunctive product, and normalized such that f(K_2) = 2. Unfortunately, this formula still does not produce an algorithm for computing \Theta. But it nonconstructively proves the existence of many new graph invariants f which approximate the Shannon capacity from above.

Although my story ends here, I also feel that the whole project has barely started. There are lots of directions to explore! For example, it would be great to fit Shannon’s noisy channel coding theorem into this framework, but this has turned out to be technically challenging. If you happen to be familiar with rate-distortion theory and you want to help out, please get in touch!

References

Here is a haphazard selection of references on resource theories in quantum information theory and related fields:

• Igor Devetak, Aram Harrow and Andreas Winter, A resource framework for quantum Shannon theory.

• Gilad Gour, Markus P. Müller, Varun Narasimhachar, Robert W. Spekkens and Nicole Yunger Halpern, The resource theory of informational nonequilibrium in thermodynamics.

• Fernando G.S.L. Brandão, Michał Horodecki, Nelly Huei Ying Ng, Jonathan Oppenheim and Stephanie Wehner, The second laws of quantum thermodynamics.

• Iman Marvian and Robert W. Spekkens, The theory of manipulations of pure state asymmetry: basic tools and equivalence classes of states under symmetric operations.

• Elliott H. Lieb and Jakob Yngvason, The physics and mathematics of the second law of thermodynamics.

