Entropy in the Universe

25 January, 2020

If you click on this picture, you’ll see a zoomable image of the Milky Way with 84 million stars:



But stars contribute only a tiny fraction of the total entropy in the observable Universe. If it’s random information you want, look elsewhere!

First: what’s the ‘observable Universe’, exactly?

The further you look out into the Universe, the further you look back in time. You can’t see through the hot gas from 380,000 years after the Big Bang. That ‘wall of fire’ marks the limits of the observable Universe.

But as the Universe expands, the distant ancient stars and gas we see have moved even farther away, so they’re no longer observable. Thus, the so-called ‘observable Universe’ is really the ‘formerly observable Universe’. Its edge is 46.5 billion light years away now!

This is true even though the Universe is only 13.8 billion years old. A standard challenge in understanding general relativity is to figure out how this is possible, given that nothing can move faster than light.

What’s the total number of stars in the observable Universe? Estimates go up as telescopes improve. Right now people think there are between 100 and 400 billion stars in the Milky Way. They think there are between 170 billion and 2 trillion galaxies in the Universe.

In 2009, Chas Egan and Charles Lineweaver estimated the total entropy of all the stars in the observable Universe at 10^81 bits. You should think of these as qubits: it’s the amount of information needed to describe the quantum state of everything in all these stars.

But the entropy of interstellar and intergalactic gas and dust is about ten times the entropy of stars! It’s about 10^82 bits.

The entropy in all the photons in the Universe is even more! The Universe is full of radiation left over from the Big Bang: the ‘cosmic microwave background radiation’. The photons of this radiation in the observable Universe have a total entropy of about 10^90 bits.

The neutrinos from the Big Bang also carry about 10^90 bits—a bit less than the photons. The gravitons carry much less, about 10^88 bits. That’s because they decoupled from other matter and radiation very early, and have been cooling ever since. On the other hand, photons in the cosmic microwave background were formed by the annihilation of electron-positron pairs until about 10 seconds after the Big Bang. Thus the graviton radiation is expected to be cooler than the microwave background radiation: about 0.6 kelvin as compared to 2.7 kelvin.

Black holes have immensely more entropy than anything listed so far. Egan and Lineweaver estimate the entropy of stellar-mass black holes in the observable Universe at 10^98 bits. This is connected to why black holes are so stable: the Second Law says entropy likes to increase.

But the entropy of black holes grows quadratically with mass! So black holes tend to merge and form bigger black holes — ultimately forming the ‘supermassive’ black holes at the centers of most galaxies. These dominate the entropy of the observable Universe: about 10^104 bits.

Hawking predicted that black holes slowly radiate away their mass when they’re in a cold enough environment. But the Universe is much too hot for supermassive black holes to be losing mass now. Instead, they very slowly grow by eating the cosmic microwave background, even when they’re not eating stars, gas and dust.

So, only in the far future will the Universe cool down enough for large black holes to start slowly decaying via Hawking radiation. Entropy will continue to increase… going mainly into photons and gravitons! This process will take a very long time. Assuming nothing is falling into it and no unknown effects intervene, a solar-mass black hole takes about 10^67 years to evaporate due to Hawking radiation — while a really big one, comparable to the mass of a galaxy, should take about 10^99 years.
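If you enjoy checking such figures, the standard formulas are the Bekenstein-Hawking entropy, which in bits is S = 4πGM²/(ħc ln 2), and the Hawking evaporation time t ≈ 5120πG²M³/(ħc⁴). Here is a little Python sketch of ours; the constants are rounded, so only expect order-of-magnitude agreement:

```python
import math

# Rounded physical constants (SI units)
G     = 6.674e-11   # gravitational constant
hbar  = 1.055e-34   # reduced Planck constant
c     = 2.998e8     # speed of light
M_sun = 1.989e30    # solar mass, kg
year  = 3.156e7     # seconds per year

def entropy_bits(M):
    """Bekenstein-Hawking entropy, S = 4*pi*G*M^2/(hbar*c), converted to bits."""
    return 4 * math.pi * G * M**2 / (hbar * c * math.log(2))

def evaporation_years(M):
    """Hawking evaporation time, t = 5120*pi*G^2*M^3/(hbar*c^4), in years."""
    return 5120 * math.pi * G**2 * M**3 / (hbar * c**4) / year

print(f"entropy of a solar-mass black hole: {entropy_bits(M_sun):.1e} bits")    # ~1.5e77
print(f"its evaporation time:               {evaporation_years(M_sun):.1e} years")  # ~2e67
```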

If our current most popular ideas on dark energy are correct, the Universe will continue to expand exponentially. Thanks to this, there will be a cosmological event horizon surrounding each observer, which will radiate Hawking radiation at a temperature of roughly 10^-30 kelvin.

In this scenario the Universe in the very far future will mainly consist of massless particles produced as Hawking radiation at this temperature: photons and gravitons. The entropy within the exponentially expanding ball of space that is today our ‘observable Universe’ will continue to increase exponentially… but more to the point, the entropy density will approach that of a gas of photons and gravitons in thermal equilibrium at 10^-30 kelvin.

Of course, it’s quite likely that some new physics will turn up, between now and then, that changes the story! I hope so: this would be a rather dull ending to the Universe.

For more details, go here:

• Chas A. Egan and Charles H. Lineweaver, A larger estimate of the entropy of the universe, The Astrophysical Journal 710 (2010), 1825.

Also read my page on information.


Coupling Through Emergent Conservation Laws (Part 8)

3 July, 2018

joint post with Jonathan Lorand, Blake Pollard, and Maru Sarazola

To wrap up this series, let’s look at an even more elaborate cycle of reactions featuring emergent conservation laws: the citric acid cycle. Here’s a picture of it from Stryer’s textbook Biochemistry:

I’ll warn you right now that we won’t draw any grand conclusions from this example: that’s why we left it out of our paper. Instead we’ll leave you with some questions we don’t know how to answer.

All known aerobic organisms use the citric acid cycle to convert energy derived from food into other useful forms. This cycle couples an exergonic reaction, the conversion of acetyl-CoA to CoA-SH, to endergonic reactions that produce ATP and a chemical called NADH.

The citric acid cycle can be described at various levels of detail, but at one level it consists of ten reactions:

\begin{array}{rcl}   \mathrm{A}_1 + \text{acetyl-CoA} + \mathrm{H}_2\mathrm{O} & \longleftrightarrow &  \mathrm{A}_2 + \text{CoA-SH}  \\  \\   \mathrm{A}_2 & \longleftrightarrow &  \mathrm{A}_3 + \mathrm{H}_2\mathrm{O} \\  \\  \mathrm{A}_3 + \mathrm{H}_2\mathrm{O} & \longleftrightarrow &   \mathrm{A}_4 \\  \\   \mathrm{A}_4 + \mathrm{NAD}^+  & \longleftrightarrow &  \mathrm{A}_5 + \mathrm{NADH} + \mathrm{H}^+  \\  \\   \mathrm{A}_5 + \mathrm{H}^+ & \longleftrightarrow &  \mathrm{A}_6 + \textrm{CO}_2 \\  \\  \mathrm{A}_6 + \mathrm{NAD}^+ + \text{CoA-SH} & \longleftrightarrow &  \mathrm{A}_7 + \mathrm{NADH} + \mathrm{H}^+ + \textrm{CO}_2 \\  \\   \mathrm{A}_7 + \mathrm{ADP} + \mathrm{P}_{\mathrm{i}}   & \longleftrightarrow &  \mathrm{A}_8 + \text{CoA-SH} + \mathrm{ATP} \\  \\   \mathrm{A}_8 + \mathrm{FAD} & \longleftrightarrow &  \mathrm{A}_9 + \mathrm{FADH}_2 \\  \\  \mathrm{A}_9 + \mathrm{H}_2\mathrm{O}  & \longleftrightarrow &  \mathrm{A}_{10} \\  \\  \mathrm{A}_{10} + \mathrm{NAD}^+  & \longleftrightarrow &  \mathrm{A}_1 + \mathrm{NADH} + \mathrm{H}^+  \end{array}

Here \mathrm{A}_1, \dots, \mathrm{A}_{10} are abbreviations for species that cycle around, each being transformed into the next. It doesn’t really matter for what we’ll be doing, but in case you’re curious:

\mathrm{A}_1= oxaloacetate,
\mathrm{A}_2= citrate,
\mathrm{A}_3= cis-aconitate,
\mathrm{A}_4= isocitrate,
\mathrm{A}_5= oxalosuccinate,
\mathrm{A}_6= α-ketoglutarate,
\mathrm{A}_7= succinyl-CoA,
\mathrm{A}_8= succinate,
\mathrm{A}_9= fumarate,
\mathrm{A}_{10}= L-malate.

In reality, the citric acid cycle also involves inflows of reactants such as acetyl-CoA, which is produced by metabolism, as well as outflows of both useful products such as ADP and NADH and waste products such as CO2. Thus, a full analysis requires treating this cycle as an open chemical reaction network, where species flow in and out. However, we can gain some insight just by studying the emergent conservation laws present in this network, ignoring inflows and outflows—so let’s do that!

There are a total of 22 species in the citric acid cycle. There are 10 forward reactions. We can see that their vectors are all linearly independent as follows. Since each reaction turns \mathrm{A}_i into \mathrm{A}_{i+1}, where we count modulo 10, it is easy to see that any nine of the reaction vectors are linearly independent. Whichever one we choose to ‘close the cycle’ could in theory be linearly dependent on the rest. However, it is easy to see that the vector for this reaction

\mathrm{A}_8 + \mathrm{FAD} \longleftrightarrow \mathrm{A}_9 + \mathrm{FADH}_2

is linearly independent from the rest, because only this one involves FAD. So, all 10 reaction vectors are linearly independent, and the stoichiometric subspace has dimension 10.
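If checking this by hand sounds tedious, here is a small Python sketch of ours (using sympy; the reaction list is just transcribed from the table above) that builds the stoichiometric matrix and computes its rank, along with the dimension of the space of conserved quantities, i.e. the left null space:

```python
from sympy import Matrix

species = ['A%d' % i for i in range(1, 11)] + \
          ['acetyl-CoA', 'CoA-SH', 'H2O', 'NAD+', 'NADH', 'H+',
           'CO2', 'ADP', 'Pi', 'ATP', 'FAD', 'FADH2']
idx = {s: i for i, s in enumerate(species)}

# The ten reactions above, each written as (reactants, products).
reactions = [
    (['A1', 'acetyl-CoA', 'H2O'], ['A2', 'CoA-SH']),
    (['A2'],                      ['A3', 'H2O']),
    (['A3', 'H2O'],               ['A4']),
    (['A4', 'NAD+'],              ['A5', 'NADH', 'H+']),
    (['A5', 'H+'],                ['A6', 'CO2']),
    (['A6', 'NAD+', 'CoA-SH'],    ['A7', 'NADH', 'H+', 'CO2']),
    (['A7', 'ADP', 'Pi'],         ['A8', 'CoA-SH', 'ATP']),
    (['A8', 'FAD'],               ['A9', 'FADH2']),
    (['A9', 'H2O'],               ['A10']),
    (['A10', 'NAD+'],             ['A1', 'NADH', 'H+']),
]

# Stoichiometric matrix: one column per reaction, products minus reactants.
S = Matrix.zeros(len(species), len(reactions))
for j, (ins, outs) in enumerate(reactions):
    for s in ins:
        S[idx[s], j] -= 1
    for s in outs:
        S[idx[s], j] += 1

print(S.rank())              # 10: the reaction vectors are linearly independent
print(len(S.T.nullspace()))  # 12: dimension of the space of conserved quantities
```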

Since 22 – 10 = 12, there must be 12 linearly independent conserved quantities. Some of these conservation laws are ‘fundamental’, at least by the standards of chemistry. All the species involved are made of 6 different kinds of atoms (carbon, hydrogen, oxygen, nitrogen, phosphorus and sulfur), so conservation of each kind of atom gives one conserved quantity, and conservation of charge provides another, for a total of 7 fundamental conserved quantities.

(In our example from last time we didn’t keep track of conservation of hydrogen and charge, because both \mathrm{H}^+ and e^- ions are freely available in water… but we studied the citric acid cycle when we were younger, more energetic and less wise, so we kept careful track of hydrogen and charge, and made sure that all the reactions conserved these. So, we’ll have 7 fundamental conserved quantities.)

For example, the conserved quantity

[\text{acetyl-CoA}] + [\text{CoA-SH}] + [\mathrm{A}_7]

arises from the fact that \text{acetyl-CoA}, \text{CoA-SH} and \mathrm{A}_7 contain a single sulfur atom, while none of the other species involved contain sulfur.

Similarly, the conserved quantity

3[\mathrm{ATP}] + 2[\mathrm{ADP}] + [\mathrm{P}_{\mathrm{i}}] + 2[\mathrm{FAD}] +2[\mathrm{FADH}_2]

expresses conservation of phosphorus.

Besides the 7 fundamental conserved quantities, there must also be 5 linearly independent emergent conserved quantities: that is, quantities that are not conserved in every possible chemical reaction, but remain constant in every reaction in the citric acid cycle. We can use these 5 quantities:

[\mathrm{ATP}] + [\mathrm{ADP}], due to the conservation of adenosine.

[\mathrm{FAD}] + [\mathrm{FADH}_2], due to conservation of flavin adenine dinucleotide.

[\mathrm{NAD}^+] + [\mathrm{NADH}], due to conservation of nicotinamide adenine dinucleotide.

[\mathrm{A}_1] + \cdots + [\mathrm{A}_{10}]. This expresses the fact that in the citric acid cycle each species \mathrm{A}_i is transformed into the next, modulo 10.

[\text{acetyl-CoA}] + [\mathrm{A}_2] + \cdots + [\mathrm{A}_8] + [\mathrm{FADH}_2]. It can be checked by hand that each reaction in the citric acid cycle conserves this quantity. This expresses the fact that in each turn of the cycle one molecule of \text{acetyl-CoA} is destroyed and, seven reactions later, one molecule of \mathrm{FADH}_2 is formed.

Of course, other conserved quantities can be formed as linear combinations of fundamental and emergent conserved quantities, often in nonobvious ways. An example is

3 [\text{acetyl-CoA}] + 3 [\mathrm{A}_2] + 3[\mathrm{A}_3] + 3[\mathrm{A}_4] + 2[\mathrm{A}_5] +
2[\mathrm{A}_6] + [\mathrm{A}_7] + [\mathrm{A}_8] + [\mathrm{A}_9] + [\mathrm{A}_{10}] + [\mathrm{NADH}]

which expresses the fact that in each turn of the citric acid cycle, one molecule of \text{acetyl-CoA} is destroyed and three of \mathrm{NADH} are formed. It is easier to check by hand that this quantity is conserved than to express it as an explicit linear combination of the 12 conserved quantities we have listed so far.

Finally, we bid you a fond farewell and leave you with this question: what exactly do the 5 emergent conservation laws do? In our previous two examples (ATP hydrolysis and the urea cycle) there were certain undesired reactions involving just the species we listed which were forbidden by the emergent conservation laws. In this case we don’t see any of those. But there are other important processes, involving additional species, that are forbidden. For example, if you let acetyl-CoA sit in water it will ‘hydrolyze’ as follows:

\text{acetyl-CoA} + \mathrm{H}_2\mathrm{O} \longleftrightarrow \text{CoA-SH} + \text{acetate} + \text{H}^+

So, it’s turning into CoA-SH and some other stuff, somewhat as it does in the citric acid cycle, but in a way that doesn’t do anything ‘useful’: no ATP or NADH is created in this process. This is one of the things the citric acid cycle tries to prevent.

(Remember, a reaction being ‘forbidden by emergent conservation laws’ doesn’t mean it’s absolutely forbidden. It just means that it happens much more slowly than the catalyzed reactions we are listing in our reaction network.)

Unfortunately acetate isn’t on the list of species we’re considering. We could add it. If we added it, and perhaps other species, could we get a setup where every emergent conservation law could be seen as preventing a specific unwanted reaction that’s chemically allowed?

Ideally the dimension of the space of emergent conservation laws would match the dimension of the space spanned by reaction vectors of unwanted reactions, so ‘everything would be accounted for’. But even in the simpler example of the urea cycle, we didn’t achieve this perfect match.

 


 
The paper:

• John Baez, Jonathan Lorand, Blake S. Pollard and Maru Sarazola,
Biochemical coupling through emergent conservation laws.

The blog series:

Part 1 – Introduction.

Part 2 – Review of reaction networks and equilibrium thermodynamics.

Part 3 – What is coupling?

Part 4 – Interactions.

Part 5 – Coupling in quasiequilibrium states.

Part 6 – Emergent conservation laws.

Part 7 – The urea cycle.

Part 8 – The citric acid cycle.


Coupling Through Emergent Conservation Laws (Part 6)

1 July, 2018

joint post with Jonathan Lorand, Blake Pollard, and Maru Sarazola

Now let’s think about emergent conservation laws!

When a heavy rock connected to a lighter one by a pulley falls down and pulls up the lighter one, you’re seeing an emergent conservation law:

Here the height of the heavy rock plus the height of the light one is a constant. That’s a conservation law! It forces some of the potential energy lost by one rock to be transferred to the other. But it’s not a fundamental conservation law, built into the fabric of physics. It’s an emergent law that holds only thanks to the clever design of the pulley. If the rope broke, this law would be broken too.

It’s not surprising that biology uses similar tricks. But let’s see exactly how it works. First let’s look at all four reactions we’ve been studying:

\begin{array}{cccc}  \mathrm{X} + \mathrm{Y}   & \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} & \mathrm{XY} & \qquad (1) \\ \\  \mathrm{ATP} & \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} &  \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} & \qquad (2) \\  \\   \mathrm{X} + \mathrm{ATP}   & \mathrel{\substack{\gamma_{\rightarrow} \\\longleftrightarrow\\ \gamma_{\leftarrow}}} & \mathrm{ADP} + \mathrm{XP}_{\mathrm{i}}    & \qquad (3) \\ \\   \mathrm{XP}_{\mathrm{i}} +\mathrm{Y} & \mathrel{\substack{\delta_{\rightarrow} \\\longleftrightarrow\\ \delta_{\leftarrow}}} &  \mathrm{XY} + \mathrm{P}_{\mathrm{i}} & \qquad (4)   \end{array}

It’s easy to check that the rate equations for these reactions have the following conserved quantities, that is, quantities that are constant in time:

A) [\mathrm{X}] + [\mathrm{XP}_{\mathrm{i}} ] + [\mathrm{XY}], due to the conservation of X.

B) [\mathrm{Y}] + [\mathrm{XY}], due to the conservation of Y.

C) 3[\mathrm{ATP}] +[\mathrm{XP}_{\mathrm{i}} ] +[\mathrm{P}_{\mathrm{i}}] +2[\mathrm{ADP}], due to the conservation of phosphorus.

D) [\mathrm{ATP}] + [\mathrm{ADP}], due to the conservation of adenosine.

Moreover, these quantities, and their linear combinations, are the only conserved quantities for reactions (1)–(4).

To see this, we use some standard ideas from reaction network theory. Consider the 7-dimensional space with orthonormal basis given by the species in our reaction network:

\mathrm{ATP}, \mathrm{ADP}, \mathrm{P}_{\mathrm{i}}, \mathrm{XP}_{\mathrm{i}}, \mathrm{X}, \mathrm{Y}, \mathrm{XY}

We can think of complexes like \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} as vectors in this space. An arbitrary choice of the concentrations of all species also defines a vector in this space. Furthermore, any reaction involving these species defines a vector in this space, namely the sum of the products minus the sum of the reactants. This is called the reaction vector of this reaction. Reactions (1)–(4) give these reaction vectors:

\begin{array}{ccl}    v_\alpha &=& \mathrm{XY} - \mathrm{X} - \mathrm{Y}  \\  \\  v_\beta &= & \mathrm{P}_{\mathrm{i}} + \mathrm{ADP} - \mathrm{ATP} \\  \\  v_\gamma &=& \mathrm{XP}_{\mathrm{i}}  + \mathrm{ADP} -  \mathrm{ATP} - \mathrm{X} \\   \\  v_\delta &= & \mathrm{XY} + \mathrm{P}_{\mathrm{i}} -  \mathrm{XP}_{\mathrm{i}}  -  \mathrm{Y}  \end{array}

Any change in concentrations caused by these reactions must lie in the stoichiometric subspace: that is, the space spanned by the reaction vectors. Since these vectors obey one nontrivial relation:

v_\alpha + v_\beta = v_\gamma + v_\delta

the stoichiometric subspace is 3-dimensional. Therefore, the space of conserved quantities must be 4-dimensional, since conserved quantities correspond to vectors orthogonal to the stoichiometric subspace, and 7 - 3 = 4.

Now let’s compare this with the situation where ‘coupling’ occurs! For this we consider only reactions (3) and (4):

Now the stoichiometric subspace is 2-dimensional, since v_\gamma and v_\delta are linearly independent. Thus, the space of conserved quantities becomes 5-dimensional! Indeed, we can find an additional conserved quantity:

E) [\mathrm{Y} ] +[\mathrm{P}_{\mathrm{i}}]

that is linearly independent from the four conserved quantities we had before. It does not derive from the conservation of a particular molecular component. In other words, conservation of this quantity is not a fundamental law of chemistry. Instead, it is an emergent conservation law, which holds thanks to the workings of the cell! It holds in situations where the rate constants of reactions catalyzed by the cell’s enzymes are so much larger than those of other reactions that we can ignore those other reactions.
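Here is a quick numerical double-check of these dimension counts, with the reaction vectors written out in the basis above (a numpy sketch of ours):

```python
import numpy as np

# Basis: ATP, ADP, Pi, XPi, X, Y, XY
v_alpha = np.array([ 0,  0,  0,  0, -1, -1,  1])  # X + Y <-> XY
v_beta  = np.array([-1,  1,  1,  0,  0,  0,  0])  # ATP <-> ADP + Pi
v_gamma = np.array([-1,  1,  0,  1, -1,  0,  0])  # X + ATP <-> ADP + XPi
v_delta = np.array([ 0,  0,  1, -1,  0, -1,  1])  # XPi + Y <-> XY + Pi

full    = np.array([v_alpha, v_beta, v_gamma, v_delta])
coupled = np.array([v_gamma, v_delta])

print(np.linalg.matrix_rank(full))     # 3, so 7 - 3 = 4 conserved quantities
print(np.linalg.matrix_rank(coupled))  # 2, so 7 - 2 = 5 conserved quantities

E = np.array([0, 0, 1, 0, 0, 1, 0])    # the quantity [Y] + [Pi]
print(coupled @ E)                     # [0 0]: conserved by reactions (3) and (4)
print(v_beta @ E)                      # 1: not conserved by reaction (2)
```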

And remember from last time: these are precisely the situations where we have coupling.

Indeed, the emergent conserved quantity E) precisely captures the phenomenon of coupling! The only way for ATP to form ADP + Pi without changing this quantity is for Y to be consumed in the same amount as Pi is created… thus forming the desired product XY.

Next time we’ll look at a more complicated example from biology: the urea cycle.

 


 
The paper:

• John Baez, Jonathan Lorand, Blake S. Pollard and Maru Sarazola,
Biochemical coupling through emergent conservation laws.

The blog series:

Part 1 – Introduction.

Part 2 – Review of reaction networks and equilibrium thermodynamics.

Part 3 – What is coupling?

Part 4 – Interactions.

Part 5 – Coupling in quasiequilibrium states.

Part 6 – Emergent conservation laws.

Part 7 – The urea cycle.

Part 8 – The citric acid cycle.


Coupling Through Emergent Conservation Laws (Part 5)

30 June, 2018

joint post with Jonathan Lorand, Blake Pollard, and Maru Sarazola

Coupling is the way biology makes reactions that ‘want’ to happen push forward desirable reactions that don’t want to happen. Coupling is achieved through the action of enzymes—but in a subtle way. An enzyme can increase the rate constant of a reaction. However, it cannot change the ratio of forward to reverse rate constants, since that is fixed by the difference of free energies, as we saw in Part 2:

\displaystyle{ \frac{\alpha_\to}{\alpha_\leftarrow} = e^{-\Delta {G^\circ}/RT} }    \qquad

and the presence of an enzyme does not change this.

Indeed, if an enzyme could change this ratio, there would be no need for coupling! For example, increasing the ratio \alpha_\rightarrow/\alpha_\leftarrow in the reaction

\mathrm{X} + \mathrm{Y} \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} \mathrm{XY}

would favor the formation of XY, as desired. But this option is not available.

Instead, to achieve coupling, the cell uses enzymes to greatly increase both the forward and reverse rate constants for some reactions while leaving those for others unchanged!

Let’s see how this works. In our example, the cell is trying to couple ATP hydrolysis to the formation of the molecule XY from two smaller parts X and Y. These reactions don’t help do that:

\begin{array}{cclc}  \mathrm{X} + \mathrm{Y}   & \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} & \mathrm{XY} & \qquad (1) \\ \\  \mathrm{ATP} & \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} &  \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} & \qquad (2)   \end{array}

but these do:

\begin{array}{cclc}   \mathrm{X} + \mathrm{ATP}   & \mathrel{\substack{\gamma_{\rightarrow} \\\longleftrightarrow\\ \gamma_{\leftarrow}}} & \mathrm{ADP} + \mathrm{XP}_{\mathrm{i}}    & (3) \\ \\   \mathrm{XP}_{\mathrm{i}} +\mathrm{Y} & \mathrel{\substack{\delta_{\rightarrow} \\\longleftrightarrow\\ \delta_{\leftarrow}}} &  \mathrm{XY} + \mathrm{P}_{\mathrm{i}} & (4)   \end{array}

So, the cell uses enzymes to make the rate constants for reactions (3) and (4) much bigger than for (1) and (2). In this situation we can ignore reactions (1) and (2) and still have a good approximate description of the dynamics, at least for sufficiently short time scales.

Thus, we shall study quasiequilibria, namely steady states of the rate equation for reactions (3) and (4) but not (1) and (2). In this approximation, the relevant Petri net becomes this:

Now it is impossible for ATP to turn into ADP + Pi without X + Y also turning into XY. As we shall see, this is the key to coupling!

In quasiequilibrium states, we shall find a nontrivial relation between the ratios [\mathrm{XY}]/[\mathrm{X}][\mathrm{Y}] and [\mathrm{ATP}]/[\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]. This lets the cell increase the amount of XY that gets made by increasing the amount of ATP present.

Of course, this is just part of the full story. Over longer time scales, reactions (1) and (2) become important. They would drive the system toward a true equilibrium, and destroy coupling, if there were not an inflow of the reactants ATP, X and Y and an outflow of the products Pi and XY. To take these inflows and outflows into account, we need the theory of ‘open’ reaction networks… which is something I’m very interested in!

But this is beyond our scope here. We only consider reactions (3) and (4), which give the following rate equation:

\begin{array}{ccl}   \dot{[\mathrm{X}]} & = & -\gamma_\to [\mathrm{X}][\mathrm{ATP}] + \gamma_\leftarrow [\mathrm{ADP}][\mathrm{XP}_{\mathrm{i}} ]  \\  \\  \dot{[\mathrm{Y}]} & = & -\delta_\to [\mathrm{XP}_{\mathrm{i}} ][\mathrm{Y}] +\delta_\leftarrow [\mathrm{XY}][\mathrm{P}_{\mathrm{i}}]  \\ \\  \dot{[\mathrm{XY}]} & = &\delta_\to [\mathrm{XP}_{\mathrm{i}} ][\mathrm{Y}] -\delta_\leftarrow [\mathrm{XY}][\mathrm{P}_{\mathrm{i}}]  \\ \\  \dot{[\mathrm{ATP}]} & = & -\gamma_\to [\mathrm{X}][\mathrm{ATP}] + \gamma_\leftarrow [\mathrm{ADP}][\mathrm{XP}_{\mathrm{i}} ]  \\ \\  \dot{[\mathrm{ADP}]} & =& \gamma_\to [\mathrm{X}][\mathrm{ATP}] - \gamma_\leftarrow [\mathrm{ADP}][\mathrm{XP}_{\mathrm{i}} ]  \\  \\  \dot{[\mathrm{P}_{\mathrm{i}}]} & = & \delta_\to [\mathrm{XP}_{\mathrm{i}} ][\mathrm{Y}] -\delta_\leftarrow [\mathrm{XY}][\mathrm{P}_{\mathrm{i}}]  \\ \\  \dot{[\mathrm{XP}_{\mathrm{i}} ]} & = & \gamma_\to [\mathrm{X}][\mathrm{ATP}] - \gamma_\leftarrow [\mathrm{ADP}][\mathrm{XP}_{\mathrm{i}} ] \\ \\ && -\delta_\to [\mathrm{XP}_{\mathrm{i}}][\mathrm{Y}] +\delta_\leftarrow [\mathrm{XY}][\mathrm{P}_{\mathrm{i}}]  \end{array}

Quasiequilibria occur when all these time derivatives vanish. This happens when

\begin{array}{ccl}   \gamma_\to [\mathrm{X}][\mathrm{ATP}] & = & \gamma_\leftarrow [\mathrm{ADP}][\mathrm{XP}_{\mathrm{i}} ]\\  \\  \delta_\to [\mathrm{XP}_{\mathrm{i}} ][\mathrm{Y}] & = & \delta_\leftarrow [\mathrm{XY}][\mathrm{P}_{\mathrm{i}}]  \end{array}

This pair of equations is equivalent to

\displaystyle{ \frac{\gamma_\to}{\gamma_\leftarrow}\frac{[\mathrm{X}][\mathrm{ATP}]}{[\mathrm{ADP}]}=[\mathrm{XP}_{\mathrm{i}} ]  =\frac{\delta_\leftarrow}{\delta_\to}\frac{[\mathrm{XY}][\mathrm{P}_{\mathrm{i}}]}{[\mathrm{Y}]} }

and it implies

\displaystyle{ \frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]}  = \frac{\gamma_\to}{\gamma_\leftarrow}\frac{\delta_\to}{\delta_\leftarrow} \frac{[\mathrm{ATP}]}{[\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]} }
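To watch this relation emerge numerically, here is a sketch of ours that integrates the rate equation above; the rate constants and initial concentrations are made up, and any positive choice should work:

```python
import numpy as np
from scipy.integrate import solve_ivp

gf, gr, df, dr = 2.0, 1.0, 3.0, 1.0        # made-up rate constants

def rate(t, u):
    X, Y, XY, ATP, ADP, Pi, XPi = u
    Jg = gf * X * ATP - gr * ADP * XPi     # velocity of reaction (3)
    Jd = df * XPi * Y - dr * XY * Pi       # velocity of reaction (4)
    return [-Jg, -Jd, Jd, -Jg, Jg, Jd, Jg - Jd]

u0 = [1.0, 1.0, 0.1, 2.0, 0.5, 0.5, 0.1]   # made-up initial concentrations
sol = solve_ivp(rate, (0, 200), u0, rtol=1e-10, atol=1e-12)
X, Y, XY, ATP, ADP, Pi, XPi = sol.y[:, -1]

print(XY / (X * Y))                              # left-hand side
print((gf / gr) * (df / dr) * ATP / (ADP * Pi))  # right-hand side
```

The two printed numbers should agree to within the integrator’s accuracy.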

If we forget about the species XPi (whose presence is crucial for the coupling to happen, but whose concentration we do not care about), the quasiequilibrium does not impose any conditions other than the above relation. We conclude that, under these circumstances and assuming we can increase the ratio

\displaystyle{ \frac{[\mathrm{ATP}]}{[\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]} }

it is possible to increase the ratio

\displaystyle{\frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]} }

This constitutes ‘coupling’.

We can say a bit more, since we can express the ratios of forward and reverse rate constants in terms of exponentials of free energy differences using the laws of thermodynamics, as explained in Part 2. Reactions (1) and (2), taken together, convert X + Y + ATP to XY + ADP + Pi. So do reactions (3) and (4) taken together. Thus, these two pairs of reactions involve the same total change in free energy, so

\displaystyle{          \frac{\alpha_\to}{\alpha_\leftarrow}\frac{\beta_\to}{\beta_\leftarrow} =   \frac{\gamma_\to}{\gamma_\leftarrow}\frac{\delta_\to}{\delta_\leftarrow} }

But we’re also assuming ATP hydrolysis is so strongly exergonic that

\displaystyle{ \frac{\beta_\to}{\beta_\leftarrow} \gg \frac{\alpha_\leftarrow}{\alpha_\to}  }

This implies that

\displaystyle{    \frac{\gamma_\to}{\gamma_\leftarrow}\frac{\delta_\to}{\delta_\leftarrow} \gg 1 }

Thus,

\displaystyle{ \frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]}  \gg \frac{[\mathrm{ATP}]}{[\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]} }

This is why coupling to ATP hydrolysis is so good at driving the synthesis of XY from X and Y! Ultimately, this inequality arises from the fact that the decrease in free energy for the reaction

\mathrm{ATP} \to \mathrm{ADP} + \mathrm{P}_{\mathrm{i}}

greatly exceeds the increase in free energy for the reaction

\mathrm{X} + \mathrm{Y} \to \mathrm{XY}

But this fact can only be put to use in the right conditions. We need to be in a ‘quasiequilibrium’ state, where fast reactions have reached a steady state but not slow ones. And we need fast reactions to have this property: they can only turn ATP into ADP + Pi if they also turn X + Y into XY. Under these conditions, we have ‘coupling’.

Next time we’ll see how coupling relies on an ’emergent conservation law’.

 


 
The paper:

• John Baez, Jonathan Lorand, Blake S. Pollard and Maru Sarazola,
Biochemical coupling through emergent conservation laws.

The blog series:

Part 1 – Introduction.

Part 2 – Review of reaction networks and equilibrium thermodynamics.

Part 3 – What is coupling?

Part 4 – Interactions.

Part 5 – Coupling in quasiequilibrium states.

Part 6 – Emergent conservation laws.

Part 7 – The urea cycle.

Part 8 – The citric acid cycle.


Coupling Through Emergent Conservation Laws (Part 4)

29 June, 2018

joint post with Jonathan Lorand, Blake Pollard, and Maru Sarazola

We’ve been trying to understand coupling: how a chemical reaction that ‘wants to happen’ because it decreases the amount of free energy can drive forward a chemical reaction that increases free energy.

For coupling to occur, the reactant species in both reactions must interact in some way. Indeed, in real-world examples where ATP hydrolysis is coupled to the formation of a larger molecule \mathrm{XY} from parts \mathrm{X} and \mathrm{Y}, it is observed that, aside from the reactions we discussed last time:

\begin{array}{cclc}  \mathrm{X} + \mathrm{Y}   & \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} & \mathrm{XY} & \qquad (1) \\ \\  \mathrm{ATP} & \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} &  \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} & \qquad (2)   \end{array}

two other reactions (and their reverses) take place:

\begin{array}{cclc}   \mathrm{X} + \mathrm{ATP}   & \mathrel{\substack{\gamma_{\rightarrow} \\\longleftrightarrow\\ \gamma_{\leftarrow}}} & \mathrm{ADP} + \mathrm{XP}_{\mathrm{i}}    & (3) \\ \\   \mathrm{XP}_{\mathrm{i}} +\mathrm{Y} & \mathrel{\substack{\delta_{\rightarrow} \\\longleftrightarrow\\ \delta_{\leftarrow}}} &  \mathrm{XY} + \mathrm{P}_{\mathrm{i}} & (4)   \end{array}

We can picture all four reactions (1-4) in a single Petri net as follows:

Taking into account this more complicated set of reactions, which are interacting with each other, is still not enough to explain the phenomenon of coupling. To see this, let’s consider the rate equation for the system comprised of all four reactions. To write it down neatly, let’s introduce reaction velocities, each giving the rate at which a forward reaction takes place minus the rate of its reverse reaction:

\begin{array}{ccl}    J_\alpha &=& \alpha_\to [\mathrm{X}][\mathrm{Y}] - \alpha_\leftarrow [\mathrm{XY}]  \\   \\   J_\beta  &=& \beta_\to [\mathrm{ATP}] - \beta_\leftarrow [\mathrm{ADP}] [\mathrm{P}_{\mathrm{i}}]  \\  \\   J_\gamma &=& \gamma_\to [\mathrm{ATP}] [\mathrm{X}] - \gamma_\leftarrow [\mathrm{ADP}] [\mathrm{XP}_{\mathrm{i}} ] \\   \\   J_\delta &=& \delta_\to [\mathrm{XP}_{\mathrm{i}} ] [\mathrm{Y}] - \delta_\leftarrow [\mathrm{XY}] [\mathrm{P}_{\mathrm{i}}]  \end{array}

All these follow from the law of mass action, which we explained in Part 2. Remember, this says that any reaction occurs at a rate equal to its rate constant times the product of the concentrations of the species involved. So, for example, this reaction

\mathrm{XP}_{\mathrm{i}} +\mathrm{Y} \mathrel{\substack{\delta_{\rightarrow} \\\longleftrightarrow\\ \delta_{\leftarrow}}}   \mathrm{XY} + \mathrm{P}_{\mathrm{i}}

goes forward at a rate equal to \delta_\rightarrow [\mathrm{XP}_{\mathrm{i}}][\mathrm{Y}], while the reverse reaction occurs at a rate equal to \delta_\leftarrow [\mathrm{XY}] [\mathrm{P}_{\mathrm{i}}]. So, its reaction velocity is

J_\delta = \delta_\to [\mathrm{XP}_{\mathrm{i}} ] [\mathrm{Y}] - \delta_\leftarrow [\mathrm{XY}] [\mathrm{P}_{\mathrm{i}}]

In terms of these reaction velocities, we can write the rate equation as follows:

\begin{array}{ccl}   \dot{[\mathrm{X}]} & = & -J_\alpha - J_\gamma  \\  \\  \dot{[\mathrm{Y}]} & = & -J_\alpha - J_\delta \\   \\  \dot{[\mathrm{XY}]} & = & J_\alpha + J_\delta \\   \\  \dot{[\mathrm{ATP}]} & = & -J_\beta - J_\gamma \\   \\  \dot{[\mathrm{ADP}]} & = & J_\beta + J_\gamma \\    \\  \dot{[\mathrm{P}_{\mathrm{i}}]} & = & J_\beta + J_\delta \\  \\  \dot{[\mathrm{XP}_{\mathrm{i}} ]} & = & J_\gamma -J_\delta  \end{array}

This makes sense if you think a bit: it says how each reaction contributes to the formation or destruction of each species.

In a steady state, all these time derivatives are zero, so we must have

J_\alpha = J_\beta = -J_\gamma = - J_\delta
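To double-check this with a computer: the rate equation above says that the vector of concentration time-derivatives equals M J, where M is the 7 × 4 matrix whose columns are the reaction vectors and J = (J_\alpha, J_\beta, J_\gamma, J_\delta). Steady states are exactly the vectors J in the null space of M, and this null space is spanned by (1, 1, -1, -1). A small numpy sketch of ours:

```python
import numpy as np
from scipy.linalg import null_space

# Columns: v_alpha, v_beta, v_gamma, v_delta.
# Rows: X, Y, XY, ATP, ADP, Pi, XPi, matching the rate equation above.
M = np.array([
    [-1,  0, -1,  0],   # d[X]/dt   = -J_alpha - J_gamma
    [-1,  0,  0, -1],   # d[Y]/dt   = -J_alpha - J_delta
    [ 1,  0,  0,  1],   # d[XY]/dt  =  J_alpha + J_delta
    [ 0, -1, -1,  0],   # d[ATP]/dt = -J_beta  - J_gamma
    [ 0,  1,  1,  0],   # d[ADP]/dt =  J_beta  + J_gamma
    [ 0,  1,  0,  1],   # d[Pi]/dt  =  J_beta  + J_delta
    [ 0,  0,  1, -1],   # d[XPi]/dt =  J_gamma - J_delta
])

ns = null_space(M)           # one column, proportional to (1, 1, -1, -1)
print(ns[:, 0] / ns[0, 0])   # [ 1.  1. -1. -1.]
```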

Furthermore, in a detailed balanced equilibrium, every reaction occurs at the same rate as its reverse reaction, so all four reaction velocities vanish! In thermodynamics, a system that’s truly in equilibrium obeys this sort of detailed balance condition.

When all the reaction velocities vanish, we have:

\begin{array}{ccl}  \displaystyle{ \frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]} } &=& \displaystyle{ \frac{\alpha_\to}{\alpha_\leftarrow} } \\  \\  \displaystyle{ \frac{[\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]}{[\mathrm{ATP}]} } &=& \displaystyle{ \frac{\beta_\to}{\beta_\leftarrow}  } \\  \\  \displaystyle{ \frac{[\mathrm{ADP}] [\mathrm{XP}_{\mathrm{i}} ]}{[\mathrm{ATP}][\mathrm{X}]} } &=& \displaystyle{ \frac{\gamma_\to}{\gamma_\leftarrow} } \\  \\  \displaystyle{   \frac{[\mathrm{XY}][\mathrm{P}_{\mathrm{i}}]}{[\mathrm{XP}_{\mathrm{i}} ][\mathrm{Y}]} }   &=& \displaystyle{ \frac{\delta_\to}{\delta_\leftarrow} }  \end{array}

Thus, even when the reactants interact, there can be no coupling if the whole system is in equilibrium, since then the ratio [\mathrm{XY}]/[\mathrm{X}][\mathrm{Y}] is still forced to be \alpha_\to/\alpha_\leftarrow. This is obvious to anyone who truly understands what Boltzmann and Gibbs did. But here we saw it in detail.

The moral is that coupling cannot occur in equilibrium. But how, precisely, does coupling occur? Stay tuned!

 


 
The paper:

• John Baez, Jonathan Lorand, Blake S. Pollard and Maru Sarazola,
Biochemical coupling through emergent conservation laws.

The blog series:

Part 1 – Introduction.

Part 2 – Review of reaction networks and equilibrium thermodynamics.

Part 3 – What is coupling?

Part 4 – Interactions.

Part 5 – Coupling in quasiequilibrium states.

Part 6 – Emergent conservation laws.

Part 7 – The urea cycle.

Part 8 – The citric acid cycle.


Coupling Through Emergent Conservation Laws (Part 3)

28 June, 2018

joint post with Jonathan Lorand, Blake Pollard, and Maru Sarazola

Last time we gave a quick intro to the chemistry and thermodynamics we’ll use to understand ‘coupling’. Now let’s really get started!

Suppose that we are in a setting in which some reaction

\mathrm{X} + \mathrm{Y} \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} \mathrm{XY}

takes place. Let’s also assume we are interested in the production of \mathrm{XY} from \mathrm{X} and \mathrm{Y}, but that in our system, the reverse reaction is favored to happen. This means that the reverse rate constant exceeds the forward one, let’s say by a lot:

\alpha_\leftarrow \gg \alpha_\to

so that in equilibrium, the concentrations of the species will satisfy

\displaystyle{ \frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]}\ll 1 }

which we assume is undesirable. How can we influence this ratio to get a more desirable outcome?

This is where coupling comes into play. Informally, we think of the coupling of two reactions as a process in which an endergonic reaction—one which does not ‘want’ to happen—is combined with an exergonic reaction—one that does ‘want’ to happen—in a way that improves the products-to-reactants concentrations ratio of the first reaction.

An important example of coupling, and one we will focus on, involves ATP hydrolysis:

\mathrm{ATP} + \mathrm{H}_2\mathrm{O} \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} + \mathrm{H}^+

where ATP (adenosine triphosphate) reacts with a water molecule. Typically, this reaction results in ADP (adenosine diphosphate), a phosphate ion \mathrm{P}_{\mathrm{i}} and a hydrogen ion \mathrm{H}^+. To simplify calculations, we will replace the above equation with

\mathrm{ATP}  \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} \mathrm{ADP} + \mathrm{P}_{\mathrm{i}}

since suppressing the bookkeeping of hydrogen and oxygen atoms in this manner will not affect our main points.

One reason ATP hydrolysis is good for coupling is that this reaction is strongly exergonic:

\beta_\to \gg \beta_\leftarrow

and in fact so much that

\displaystyle{ \frac{\beta_\to}{\beta_\leftarrow} \gg \frac{\alpha_\leftarrow}{\alpha_\to}  }

Yet this fact alone is insufficient to explain coupling!

To see why, suppose our system consists merely of the two reactions

\begin{array}{ccc}  \mathrm{X} + \mathrm{Y}   & \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} & \mathrm{XY} \\ \\  \mathrm{ATP} & \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} &  \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} \label{beta}  \end{array}

happening in parallel. We can study the concentrations in equilibrium to see that one reaction has no influence on the other. Indeed, the rate equation for this reaction network is

\begin{array}{ccl}  \dot{[\mathrm{X}]} & = & -\alpha_\to [\mathrm{X}][\mathrm{Y}]+\alpha_\leftarrow [\mathrm{XY}]\\ \\  \dot{[\mathrm{Y}]} & = & -\alpha_\to [\mathrm{X}][\mathrm{Y}]+\alpha_\leftarrow [\mathrm{XY}]\\ \\  \dot{[\mathrm{XY}]} & = & \alpha_\to [\mathrm{X}][\mathrm{Y}]-\alpha_\leftarrow [\mathrm{XY}]\\ \\  \dot{[\mathrm{ATP}]} & =& -\beta_\to [\mathrm{ATP}]+\beta_\leftarrow [\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]\\ \\  \dot{[\mathrm{ADP}]} & = &\beta_\to [\mathrm{ATP}]-\beta_\leftarrow [\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]\\ \\  \dot{[\mathrm{P}_{\mathrm{i}}]} & = &\beta_\to [\mathrm{ATP}]-\beta_\leftarrow [\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]  \end{array}

When concentrations are constant, these are equivalent to the relations

\displaystyle{  \frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]} = \frac{\alpha_\to}{\alpha_\leftarrow} \ \ \text{ and } \ \ \frac{[\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]}{[\mathrm{ATP}]} = \frac{\beta_\to}{\beta_\leftarrow} }
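To make this vivid, here is a little simulation sketch of ours; the rate constants and initial concentrations are made up, chosen so that forming XY is disfavored while hydrolysis is strongly favored. However much ATP we start with, the equilibrium ratio [\mathrm{XY}]/[\mathrm{X}][\mathrm{Y}] comes out the same:

```python
import numpy as np
from scipy.integrate import solve_ivp

af, ar = 0.1, 1.0   # X + Y <-> XY: reverse favored
bf, br = 5.0, 1.0   # ATP <-> ADP + Pi: forward favored

def rate(t, u):
    X, Y, XY, ATP, ADP, Pi = u
    Ja = af * X * Y - ar * XY
    Jb = bf * ATP - br * ADP * Pi
    return [-Ja, -Ja, Ja, -Jb, Jb, Jb]

for atp0 in (1.0, 10.0):    # a little ATP, then ten times as much
    sol = solve_ivp(rate, (0, 500), [1.0, 1.0, 0.0, atp0, 0.1, 0.1],
                    rtol=1e-10, atol=1e-12)
    X, Y, XY, ATP, ADP, Pi = sol.y[:, -1]
    print(atp0, XY / (X * Y))   # 0.1 = af/ar both times: no coupling
```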

We thus see that ATP hydrolysis is in no way affecting the ratio of [\mathrm{XY}] to [\mathrm{X}][\mathrm{Y}]. Intuitively, there is no coupling because the two reactions proceed independently. This ‘independence’ is clearly visible if we draw the reaction network as a so-called Petri net:

So what really happens when we are in the presence of coupling? Stay tuned for the next episode!

By the way, here’s what ATP hydrolysis looks like in a bit more detail, from a website at Loreto College:


 


 
The paper:

• John Baez, Jonathan Lorand, Blake S. Pollard and Maru Sarazola,
Biochemical coupling through emergent conservation laws.

The blog series:

Part 1 – Introduction.

Part 2 – Review of reaction networks and equilibrium thermodynamics.

Part 3 – What is coupling?

Part 4 – Interactions.

Part 5 – Coupling in quasiequilibrium states.

Part 6 – Emergent conservation laws.

Part 7 – The urea cycle.

Part 8 – The citric acid cycle.


Coupling Through Emergent Conservation Laws (Part 2)

27 June, 2018

joint post with Jonathan Lorand, Blake Pollard, and Maru Sarazola

Here’s a little introduction to the chemistry and thermodynamics prerequisites for our work on ‘coupling’. Luckily, it’s fun stuff that everyone should know: a lot of the world runs on these principles!

We will be working with reaction networks. A reaction network consists of a set of reactions, for example

\mathrm{X}+\mathrm{Y}\longrightarrow \mathrm{XY}

Here X, Y and XY are the species involved, and we interpret this reaction as species X and Y combining to form species XY. We call X and Y the reactants and XY the product. Additive combinations of species, such as X + Y, are called complexes.

The law of mass action states that the rate at which a reaction occurs is proportional to the product of the concentrations of the reactants. The proportionality constant is called the rate constant; it is a positive real number associated to a reaction that depends on chemical properties of the reaction along with the temperature, the pH of the solution, the nature of any catalysts that may be present, and so on. Every reaction has a reverse reaction; that is, if X and Y combine to form XY, then XY can also split into X and Y. The reverse reaction has its own rate constant.

We can summarize this information by writing

\mathrm{X} + \mathrm{Y} \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}}  \mathrm{XY}

where \alpha_{\to} is the rate constant for X and Y to combine and form XY, while \alpha_\leftarrow is the rate constant for the reverse reaction.

As time passes and reactions occur, the concentration of each species will likely change. We can record this information in a collection of functions

[\mathrm{X}] \colon \mathbb{R} \to [0,\infty),

one for each species X, where [\mathrm{X}](t) gives the concentration of the species \mathrm{X} at time t. This naturally leads one to consider the rate equation of a given reaction, which specifies the time evolution of these concentrations. The rate equation can be read off from the reaction network, and in the above example it is:

\begin{array}{ccc}  \dot{[\mathrm{X}]} & = & -\alpha_\to [\mathrm{X}][\mathrm{Y}]+\alpha_\leftarrow [\mathrm{XY}]\\  \dot{[\mathrm{Y}]} & = & -\alpha_\to [\mathrm{X}][\mathrm{Y}]+\alpha_\leftarrow [\mathrm{XY}]\\  \dot{[\mathrm{XY}]} & = & \alpha_\to [\mathrm{X}][\mathrm{Y}]-\alpha_\leftarrow [\mathrm{XY}]  \end{array}

Here \alpha_\to [\mathrm{X}] [\mathrm{Y}] is the rate at which the forward reaction is occurring; thanks to the law of mass action, this is the rate constant \alpha_\to times the product of the concentrations of X and Y. Similarly, \alpha_\leftarrow [\mathrm{XY}] is the rate at which the reverse reaction is occurring.

We say that a system is in detailed balanced equilibrium, or simply equilibrium, when every reaction occurs at the same rate as its reverse reaction. This implies that the concentration of each species is constant in time. In our example, the condition for equilibrium is

\displaystyle{ \frac{\alpha_\to}{\alpha_\leftarrow}=\frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]} }

and the rate equation then implies that

\dot{[\mathrm{X}]} =  \dot{[\mathrm{Y}]} =\dot{[\mathrm{XY}]} = 0

The laws of thermodynamics determine the ratio of the forward and reverse rate constants. For any reaction at all, this ratio is

\displaystyle{ \frac{\alpha_\to}{\alpha_\leftarrow} = e^{-\Delta {G^\circ}/RT} }  \qquad \qquad \qquad (1)

where T is the temperature, R is the ideal gas constant, and \Delta {G^\circ} is the free energy change under standard conditions.
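To get a feeling for the numbers, take the textbook value \Delta G^\circ \approx -30.5 kJ/mol for ATP hydrolysis at body temperature (our choice of example, nothing special to the formalism). The ratio of rate constants is then enormous:

```python
import math

R  = 8.314     # ideal gas constant, J/(mol K)
T  = 310.0     # body temperature, K
dG = -30.5e3   # standard free energy of ATP hydrolysis, J/mol (textbook value)

print(f"{math.exp(-dG / (R * T)):.1e}")   # ~1.4e5: forward hugely favored
```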

Note that if \Delta {G^\circ} < 0, then the rate constant of the forward reaction is larger than the rate constant of the reverse reaction:

\alpha_\to > \alpha_\leftarrow

In this case one may loosely say that the forward reaction ‘wants’ to happen ‘spontaneously’. Such a reaction is called exergonic. If on the other hand \Delta {G^\circ} > 0, then the forward reaction is ‘non-spontaneous’ and it is called endergonic.

The most important thing for us is that \Delta {G^\circ} takes a very simple form. Each species has a free energy. The free energy of a complex

\mathrm{A}_1 + \cdots + \mathrm{A}_m

is the sum of the free energies of the species \mathrm{A}_i. Given a reaction

\mathrm{A}_1 + \cdots + \mathrm{A}_m \longrightarrow \mathrm{B}_1 + \cdots + \mathrm{B}_n

the free energy change \Delta {G^\circ} for this reaction is the free energy of

\mathrm{B}_1 + \cdots + \mathrm{B}_n

minus the free energy of

\mathrm{A}_1 + \cdots + \mathrm{A}_m.

As a consequence, \Delta{G^\circ} is additive with respect to combining multiple reactions in either series or parallel. In particular, then, the law (1) imposes relations between ratios of rate constants: for example, if we have the following more complicated set of reactions

\mathrm{A} \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} \mathrm{B}

\mathrm{B} \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} \mathrm{C}

\mathrm{A} \mathrel{\substack{\gamma_{\rightarrow} \\\longleftrightarrow\\ \gamma_{\leftarrow}}} \mathrm{C}

then we must have

\displaystyle{    \frac{\gamma_\to}{\gamma_\leftarrow} = \frac{\alpha_\to}{\alpha_\leftarrow} \frac{\beta_\to}{\beta_\leftarrow} .  }
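One can verify this by applying equation (1) to each reaction and using the additivity of free energy. Writing G_A, G_B, G_C for the free energies of the three species, here is the one-line symbolic check in Python:

```python
import sympy as sp

G_A, G_B, G_C, R, T = sp.symbols('G_A G_B G_C R T', positive=True)
ratio = lambda dG: sp.exp(-dG / (R * T))   # equation (1)

lhs = ratio(G_C - G_A)                     # gamma ratio, A <-> C directly
rhs = ratio(G_B - G_A) * ratio(G_C - G_B)  # alpha ratio times beta ratio
print(sp.simplify(lhs - rhs))              # 0
```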

So, not only are the rate constant ratios of reactions determined by differences in free energy, but also nontrivial relations between these ratios can arise, depending on the structure of the system of reactions in question!

Okay—this is all the basic stuff we’ll need to know. Please ask questions! Next time we’ll go ahead and use this stuff to start thinking about how biology manages to make reactions that ‘want’ to happen push forward reactions that are useful but wouldn’t happen spontaneously on their own.

 


 
The paper:

• John Baez, Jonathan Lorand, Blake S. Pollard and Maru Sarazola,
Biochemical coupling through emergent conservation laws.

The blog series:

Part 1 – Introduction.

Part 2 – Review of reaction networks and equilibrium thermodynamics.

Part 3 – What is coupling?

Part 4 – Interactions.

Part 5 – Coupling in quasiequilibrium states.

Part 6 – Emergent conservation laws.

Part 7 – The urea cycle.

Part 8 – The citric acid cycle.


Coupling Through Emergent Conservation Laws (Part 1)

27 June, 2018

joint post with Jonathan Lorand, Blake Pollard, and Maru Sarazola

In the cell, chemical reactions are often ‘coupled’ so that reactions that release energy drive reactions that are biologically useful but involve an increase in energy. But how, exactly, does coupling work?

Much is known about this question, but the literature is also full of vague explanations and oversimplifications. Coupling cannot occur in equilibrium; it arises in open systems, where the concentrations of certain chemicals are held out of equilibrium due to flows in and out. One might thus suspect that the simplest mathematical treatment of this phenomenon would involve non-equilibrium steady states of open systems. However, Bazhin has shown that some crucial aspects of coupling arise in an even simpler framework:

• Nicolai Bazhin, The essence of ATP coupling, ISRN Biochemistry 2012 (2012), article 827604.

He considers ‘quasi-equilibrium’ states, where fast reactions have come into equilibrium and slow ones are neglected. He shows that coupling occurs already in this simple approximation.

In this series of blog articles we’ll do two things. First, we’ll review Bazhin’s work in a way that readers with no training in biology or chemistry should be able to follow. (But if you get stuck, ask questions!) Second, we’ll explain a fact that seems to have received insufficient attention: in many cases, coupling relies on emergent conservation laws.

Conservation laws are important throughout science. Besides those that are built into the fabric of physics, such as conservation of energy and momentum, there are also many ’emergent’ conservation laws that hold approximately in certain circumstances. Often these arise when processes that change a given quantity happen very slowly. For example, the most common isotope of uranium decays into lead with a half-life of about 4 billion years—but for the purposes of chemical experiments in the laboratory, it is useful to treat the amount of uranium as a conserved quantity.

The emergent conservation laws involved in biochemical coupling are of a different nature. Instead of making the processes that violate these laws happen more slowly, the cell uses enzymes to make other processes happen more quickly. At the time scales relevant to cellular metabolism, the fast processes dominate, while slowly changing quantities are effectively conserved. By a suitable choice of these emergent conserved quantities, the cell ensures that certain reactions that release energy can only occur when other ‘desired’ reactions occur. To be sure, this is only approximately true, on sufficiently short time scales. But this approximation is enlightening!

Following Bazhin, our main example involves ATP hydrolysis. We consider the following schema for a whole family of reactions:

\begin{array}{ccc}  \mathrm{X} + \mathrm{ATP}  & \longleftrightarrow & \mathrm{ADP} + \mathrm{XP}_{\mathrm{i}} \qquad (1) \\  \mathrm{XP}_{\mathrm{i}} + \mathrm{Y}  & \longleftrightarrow &    \mathrm{XY} + \mathrm{P}_{\mathrm{i}} \,\;\;\;\;\qquad (2)  \end{array}

Some concrete examples of this schema include:

• The synthesis of glutamine (XY) from glutamate (X) and ammonium (Y). This is part of the important glutamate-glutamine cycle in the central nervous system.

• The synthesis of sucrose (XY) from glucose (X) and fructose (Y). This is one of many processes whereby plants synthesize more complex sugars and starches from simpler building-blocks.

In these and other examples, the two reactions, taken together, have the effect of synthesizing a larger molecule XY out of two parts X and Y while ATP is broken down to ADP and the phosphate ion Pi. Thus, they have the same net effect as this other pair of reactions:

\begin{array}{ccc}  \mathrm{X} + \mathrm{Y} &\longleftrightarrow & \mathrm{XY} \;\;\;\quad \quad \qquad  (3) \\   \mathrm{ATP} &\longleftrightarrow & \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} \qquad (4) \end{array}

The first reaction here is just the synthesis of XY from X and Y. The second is a deliberately simplified version of ATP hydrolysis. The first involves an increase of energy, while the second releases energy. But in the schema used in biology, these processes are ‘coupled’ so that ATP can only break down to ADP + Pi if X and Y combine to form XY.

As we shall see, this coupling crucially relies on a conserved quantity: the total number of Y molecules plus the total number of Pi ions is left unchanged by reactions (1) and (2). This fact is not a fundamental law of physics, nor even a general law of chemistry (such as conservation of phosphorus atoms). It is an emergent conservation law that holds approximately in special situations. Its approximate validity relies on the fact that the cell has enzymes that make reactions (1) and (2) occur more rapidly than reactions that violate this law, such as (3) and (4).
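This is easy to check with a few dot products: write each reaction vector in the basis (X, Y, XY, ATP, ADP, Pi, XPi) and pair it with the vector encoding the total number of Y molecules plus Pi ions. A small numpy sketch of ours:

```python
import numpy as np

# Basis: X, Y, XY, ATP, ADP, Pi, XPi
r1 = np.array([-1,  0,  0, -1,  1,  0,  1])  # X + ATP <-> ADP + XPi
r2 = np.array([ 0, -1,  1,  0,  0,  1, -1])  # XPi + Y <-> XY + Pi
r3 = np.array([-1, -1,  1,  0,  0,  0,  0])  # X + Y <-> XY
r4 = np.array([ 0,  0,  0, -1,  1,  1,  0])  # ATP <-> ADP + Pi

q = np.array([0, 1, 0, 0, 0, 1, 0])          # total Y plus total Pi

for r in (r1, r2, r3, r4):
    print(r @ q)   # 0, 0, -1, 1: conserved by (1)-(2), violated by (3)-(4)
```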

In the series to come, we’ll start by providing the tiny amount of chemistry and thermodynamics needed to understand what’s going on. Then we’ll raise the question “what is coupling?” Then we’ll study the reactions required for coupling ATP hydrolysis to the synthesis of XY from components X and Y, and explain why these reactions are not yet enough for coupling. Then we’ll show that coupling occurs in a ‘quasiequilibrium’ state where reactions (1) and (2), assumed much faster than the rest, have reached equilibrium, while the rest are neglected. And then we’ll explain the role of emergent conservation laws!

 


 
The paper:

• John Baez, Jonathan Lorand, Blake S. Pollard and Maru Sarazola,
Biochemical coupling through emergent conservation laws.

The blog series:

Part 1 – Introduction.

Part 2 – Review of reaction networks and equilibrium thermodynamics.

Part 3 – What is coupling?

Part 4 – Interactions.

Part 5 – Coupling in quasiequilibrium states.

Part 6 – Emergent conservation laws.

Part 7 – The urea cycle.

Part 8 – The citric acid cycle.


Effective Thermodynamics for a Marginal Observer

8 May, 2018

guest post by Matteo Polettini

Suppose you receive an email from someone who claims “here is the design for a machine that runs forever and ever and produces energy for free!” Obviously he must be a crackpot. But he may be well-intentioned. You opt for not being rude, roll up your sleeves, and get your hands dirty, holding the Second Law as your lodestar.

Keep in mind that there are two fundamental sources of error: either he is not considering certain input currents (“hey, what about that tiny hidden cable entering your machine from the electrical power line?!”, “uh, ah, that’s just to power the “ON” LED”, “mmmhh, you sure?”), or else he is not measuring the energy input correctly (“hey, why are you using a Geiger counter to measure input voltages?!”, “well, sir, I ran out of voltmeters…”).

In other words, the observer might only have partial information about the setup, either in quantity or quality. Because he has been marginalized by society (most crackpots believe they are misunderstood geniuses) we will call such an observer “marginal,” which incidentally is also the word that mathematicians use when they focus on the probability of a subset of stochastic variables.

In fact, our modern understanding of thermodynamics as embodied in statistical mechanics and stochastic processes is founded (and funded) on ignorance: we never really have “complete” information. If we actually did, all energy would look alike: it would not come in “more refined” and “less refined” forms, there would be no differentials of order/disorder (to use Paul Valéry’s beautiful words), and that would end thermodynamic reasoning, the energy problem, and generous research grants altogether.

Even worse, within this statistical approach we might be missing chunks of information because some parts of the system are invisible to us. But then, what warrants that we are doing things right, and he (our correspondent) is the crackpot? Couldn’t it be the other way around? Here I would like to present some recent ideas I’ve been working on together with some collaborators on how to deal with incomplete information about the sources of dissipation of a thermodynamic system. I will do this in a quite theoretical manner, but somehow I will mimic the guidelines suggested above for debunking crackpots. My three buzzwords will be: marginal, effective, and operational.

“Complete” thermodynamics: an out-of-the-box view

The laws of thermodynamics that I address are:

• The good ol’ Second Law (2nd)

• The Fluctuation-Dissipation Relation (FDR), and the Reciprocal Relation (RR) close to equilibrium.

• The more recent Fluctuation Relation (FR)1 and its corollary the Integral Fluctuation Relation (IFR), which have been discussed on this blog in a remarkable post by Matteo Smerlak.

The list above is all in the “area of the second law”. How about the other laws? Well, thermodynamics has for long been a phenomenological science, a patchwork. So-called stochastic thermodynamics is trying to put some order in it by systematically grounding thermodynamic claims in (mostly Markov) stochastic processes. But it’s not an easy task, because the different laws of thermodynamics live in somewhat different conceptual planes. And it’s not even clear if they are theorems, prescriptions, or habits (a bit like in jurisprudence2).

Within stochastic thermodynamics, the Zeroth Law is so easy nobody cares to formulate it (I do, so stay tuned…). The Third Law: no idea, let me know. As regards the First Law (or, better, “laws”, as many as there are conserved quantities across the system/environment interface…), we will assume that all related symmetries have been exploited from the offset to boil down the description to a minimum.


This minimum is as follows. We identify a system that is well separated from its environment. The system evolves in time, the environment is so large that its state does not evolve within the timescales of the system3. When tracing out the environment from the description, an uncertainty falls upon the system’s evolution. We assume the system’s dynamics to be described by a stochastic Markovian process.

How exactly the system evolves and what the relationship between system and environment is will be described in more detail below. Here let us take an “out of the box” view. We resolve the environment into several reservoirs labeled by an index \alpha. Each of these reservoirs is “at equilibrium” on its own (whatever that means4). Now, the idea is that each reservoir tries to impose “its own equilibrium” on the system, and that their competition leads to a flow of currents across the system/environment interface. Each time an amount of the reservoir’s resource crosses the interface, a “thermodynamic cost” has to be paid or gained (be it a chemical potential difference for a molecule to go through a membrane, or a temperature gradient for photons to be emitted/absorbed, etc.).

The fundamental quantities of stochastic thermodynamic modeling thus are:

• On the “-dynamic” side: the time-integrated currents \Phi^t_\alpha, independent among themselves5. Currents are stochastic variables distributed with joint probability density

P(\{\Phi_\alpha\}_\alpha)

• On the “thermo-” side: The so-called thermodynamic forces or “affinities”6 \mathcal{A}_\alpha (collectively denoted \mathcal{A}). These are tunable parameters that characterize reservoir-to-reservoir gradients, and they are not stochastic. For convenience, we conventionally take them all positive.

Dissipation is quantified by the entropy production:

\sum \mathcal{A}_\alpha \Phi^t_\alpha

We are finally in the position to state the main results. Be warned that in the following expressions the exact treatment of time and its scaling would require a lot of specifications, but keep in mind that all these relations hold true in the long-time limit, and that all cumulants scale linearly with time.

FR: The probability of observing positive currents is exponentially favoured with respect to negative currents according to

P(\{\Phi_\alpha\}_\alpha) / P(\{-\Phi_\alpha\}_\alpha) = \exp \sum \mathcal{A}_\alpha \Phi^t_\alpha

Comment: This is not trivial: it follows from the explicit expression of the path integral; see below.

IFR: The average of the exponential of minus the entropy production is unity

\big\langle  \exp - \sum \mathcal{A}_\alpha \Phi^t_\alpha  \big\rangle_{\mathcal{A}} =1

Homework: Derive this relation from the FR in one line.
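
A sketch of that one line (skipping measure-theoretic details): write the average as an integral over the joint density and use the FR to replace P(\{\Phi_\alpha\}_\alpha) \, e^{-\sum \mathcal{A}_\alpha \Phi^t_\alpha} by P(\{-\Phi_\alpha\}_\alpha), then change integration variables:

\left\langle \exp - \sum \mathcal{A}_\alpha \Phi^t_\alpha \right\rangle_{\mathcal{A}} = \int \prod_\alpha d\Phi_\alpha \, P(\{-\Phi_\alpha\}_\alpha) = 1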

2nd Law: The average entropy production is non-negative

\sum \mathcal{A}_\alpha \left\langle \Phi^t_\alpha \right\rangle_{\mathcal{A}} \geq 0

Homework: Derive this relation using Jensen’s inequality.
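
A sketch: since the exponential is convex, Jensen’s inequality applied to the IFR gives

1 = \left\langle \exp - \sum \mathcal{A}_\alpha \Phi^t_\alpha \right\rangle_{\mathcal{A}} \geq \exp \left( - \sum \mathcal{A}_\alpha \left\langle \Phi^t_\alpha \right\rangle_{\mathcal{A}} \right)

and taking the logarithm of both sides yields the claim.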

Equilibrium: Average currents vanish if and only if affinities vanish:

\left\langle \Phi^t_\alpha \right\rangle_{\mathcal{A}} \equiv 0, \forall \alpha \iff  \mathcal{A}_\alpha \equiv 0, \forall \alpha

Homework: Derive this relation by taking the first derivative w.r.t. \mathcal{A}_\alpha of the IFR. Notice that the average also depends on the affinities.

S-FDR: At equilibrium, it is impossible to tell whether a current is due to a spontaneous fluctuation (quantified by its variance) or to an external perturbation (quantified by the response of its mean). In a symmetrized (S-) version:

\left.  \frac{\partial}{\partial \mathcal{A}_\alpha}\left\langle \Phi^t_{\alpha'} \right\rangle \right|_{0} + \left.  \frac{\partial}{\partial \mathcal{A}_{\alpha'}}\left\langle \Phi^t_{\alpha} \right\rangle \right|_{0} = \left. \left\langle \Phi^t_{\alpha} \Phi^t_{\alpha'} \right\rangle \right|_{0}

Homework: Derive this relation by taking the mixed second derivatives w.r.t. \mathcal{A}_\alpha of the IFR.

RR: Close to equilibrium, the reciprocal responses of two different currents to perturbations of one another’s affinities are symmetric:

\left.  \frac{\partial}{\partial \mathcal{A}_\alpha}\left\langle \Phi^t_{\alpha'} \right\rangle \right|_{0} - \left.  \frac{\partial}{\partial \mathcal{A}_{\alpha'}}\left\langle \Phi^t_{\alpha} \right\rangle \right|_{0} = 0

Homework: Derive this relation by taking the mixed second derivatives w.r.t. \mathcal{A}_\alpha of the FR.

Notice the implication scheme: FR ⇒ IFR ⇒ 2nd, IFR ⇒ S-FDR, FR ⇒ RR.

“Marginal” thermodynamics (still out-of-the-box)

Now we assume that we can only measure a marginal subset of currents \{\Phi_\mu^t\}_\mu \subset \{\Phi_\alpha^t\}_\alpha (index \mu always has a smaller range than \alpha), distributed with joint marginal probability

P(\{\Phi_\mu\}_\mu) = \int \prod_{\alpha \neq \mu} d\Phi_\alpha \, P(\{\Phi_\alpha\}_\alpha)


Notice that a state where these marginal currents vanish might not be an equilibrium, because other currents might still be whirling around. We call this a stalling state.

\mathrm{stalling:} \qquad \langle \Phi_\mu \rangle \equiv 0,  \quad \forall \mu

My central question is: can we associate with these currents some effective affinities \mathcal{Q}_\mu in such a way that at least some of the results above still hold true? And are the definitions involved just a fancy mathematical construct, or are they operational?

First, the bad news: in general the FR is violated for all choices of effective affinities:

P(\{\Phi_\mu\}_\mu) / P(\{-\Phi_\mu\}_\mu) \neq \exp \sum \mathcal{Q}_\mu \Phi^t_\mu

This is not surprising, and nobody would expect it to hold. How about the IFR?

Marginal IFR: There are effective affinities such that

\left\langle \exp - \sum \mathcal{Q}_\mu \Phi^t_\mu \right\rangle_{\mathcal{A}} =1

Mmmhh. Yeah. Take a closer look at this expression: can you see why there actually exists an infinite choice of “effective affinities” that would make that average cross 1? (Hint: along any fixed direction in the space of the \mathcal{Q}_\mu, the average is a convex function of the overall scale that equals 1 at zero and generically diverges at large scale, so it crosses 1 again somewhere; one crossing per direction makes infinitely many choices.) Which, on the other hand, is just a number, so who even cares? So this can’t be the point.

The fact is, the IFR per se is hardly of any practical interest, as are all “absolutes” in physics. What matters is “relatives”: in our case, response. But then we need to specify how the effective affinities depend on the “real” affinities. And here a crucial technicality steps in, whose precise argumentation is a pain. Based on reasonable assumptions [7], we demonstrate that the IFR holds for the following choice of effective affinities:

\mathcal{Q}_\mu = \mathcal{A}_\mu - \mathcal{A}^{\mathrm{stalling}}_\mu,

where \mathcal{A}^{\mathrm{stalling}} is the set of values of the affinities that make the marginal currents stall. Notice that this latter formula gives an operational definition of the effective affinities that could in principle be reproduced in the laboratory (just go out there, tune the tunables until everything stalls, and measure the difference). Obviously:

Stalling: Marginal currents vanish if and only if effective affinities vanish:

\left\langle \Phi^t_\mu \right\rangle_{\mathcal{A}} \equiv 0, \forall \mu \iff \mathcal{Q}_\mu \equiv 0, \forall \mu

Now, according to the implication scheme illustrated above, we can also prove that:

Effective 2nd Law: The average marginal entropy production is non-negative

\sum \mathcal{Q}_\mu \left\langle \Phi^t_\mu \right\rangle_{\mathcal{A}} \geq 0

S-FDR at stalling:

\left. \frac{\partial}{\partial \mathcal{A}_\mu}\left\langle \Phi^t_{\mu'} \right\rangle \right|_{\mathcal{A}^{\mathrm{stalling}}} + \left. \frac{\partial}{\partial \mathcal{A}_{\mu'}}\left\langle \Phi^t_{\mu} \right\rangle \right|_{\mathcal{A}^{\mathrm{stalling}}} = \left. \left\langle \Phi^t_{\mu} \Phi^t_{\mu'} \right\rangle \right|_{\mathcal{A}^{\mathrm{stalling}}}

Notice instead that the RR is gone at stalling. This is a clear-cut prediction of the theory that can be tested with basically the same apparatus with which response theory has been studied experimentally so far (not that I actually know what those apparatuses are…): at stalling states, unlike at equilibrium states, the S-FDR still holds, but the RR does not.

Into the box

You’ve definitely had enough at this point, and you can give up here. Please exit through the gift shop.

If you’re stubborn, let me tell you what’s inside the box. The system’s dynamics is modeled as a continuous-time, discrete configuration-space Markov “jump” process. The state space can be described by a graph G=(I, E), where I is the set of configurations, E is the set of possible transitions or “edges”, and there is an incidence relation between edges and pairs of configurations. The process is determined by the rates w_{i \gets j} of jumping from configuration j to configuration i.

We choose these processes because they allow some nice network analysis and because the path integral is well defined! A single realization of such a process is a trajectory

\omega^t = (i_0,\tau_0) \to (i_1,\tau_1) \to \ldots \to (i_N,\tau_N)

A “Markovian jumper” waits at some configuration i_n for a time \tau_n distributed with the exponentially decaying probability density w_{i_n} \exp (- w_{i_n} \tau_n), where w_i = \sum_k w_{k \gets i} is the exit rate, then instantaneously jumps to a new configuration i_{n+1} with transition probability w_{i_{n+1} \gets i_n}/w_{i_n}. The overall probability density of a single trajectory is given by

P(\omega^t) = \delta \left(t - \sum_n \tau_n \right) e^{- w_{i_N}\tau_N} \prod_{n=0}^{N-1} w_{i_{n+1} \gets i_n} e^{- w_{i_n} \tau_n}

One can in principle obtain the probability distribution function of any observable defined along the trajectory by taking the marginal of this measure (though in most cases this is technically impossible). Where does this expression come from? For a formal derivation, see the very beautiful review paper by Weber and Frey; but notice that it is exactly what one would intuitively come up with in order to simulate such a process with the Gillespie algorithm.
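
To make the “Markovian jumper” concrete, here is a minimal Python sketch of my own of a Gillespie-type simulation (the 3-state rate matrix is made up for illustration; this is not code from the papers):

import numpy as np

rng = np.random.default_rng(seed=42)

# Hypothetical 3-state toy network: w[i, j] is the rate w_{i <- j} of
# jumping from configuration j to configuration i (diagonal unused).
w = np.array([[0.0, 2.0, 1.0],
              [1.0, 0.0, 3.0],
              [2.0, 1.0, 0.0]])

def gillespie(w, i0, t_max):
    """One trajectory omega^t as a list of (configuration, jump time)."""
    exit_rate = w.sum(axis=0)            # w_i = sum_k w_{k <- i}
    i, t = i0, 0.0
    traj = [(i, t)]
    while True:
        t += rng.exponential(1.0 / exit_rate[i])     # waiting time at i
        if t > t_max:
            return traj
        i = rng.choice(3, p=w[:, i] / exit_rate[i])  # jump with prob w_{k <- i}/w_i
        traj.append((i, t))

traj = gillespie(w, i0=0, t_max=100.0)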

The dynamics of the Markov process can also be described by the probability p_i(t) of being at configuration i at time t, which evolves via the master equation (abbreviating w_{ij} = w_{i \gets j})

\dot{p}_i(t) = \sum_j \left[ w_{ij} p_j(t) - w_{ji} p_i(t) \right].

We call this probability the system’s state, and we assume that the system relaxes to a unique steady state p = \lim_{t \to \infty} p(t).
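
Continuing the toy example, the steady state solves the linear system W p = 0 together with the normalization \sum_i p_i = 1, where W is the generator with off-diagonal entries w_{i \gets j} and diagonal entries -w_j. A minimal sketch:

# Generator of the master equation for the toy rates above.
W = w - np.diag(w.sum(axis=0))

# Append the normalization row and solve by least squares; for an
# irreducible network this yields the unique steady state.
A_lin = np.vstack([W, np.ones(3)])
b = np.zeros(4); b[-1] = 1.0
p_ss, *_ = np.linalg.lstsq(A_lin, b, rcond=None)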

A time-integrated current along a single trajectory is a linear combination of the net numbers of jumps \#^t between configurations in the network:

\Phi^t_\alpha = \sum_{ij} C^{ij}_\alpha \left[ \#^t(i \gets j) - \#^t(j\gets i) \right]

The idea here is that one or several transitions within the system occur because of the “absorption” or the “emission” of some environmental degrees of freedom, each with a different intensity. However, for the moment let us simplify the picture and require that only one transition contributes to a given current, that is, that there exist i_\alpha, j_\alpha such that

C^{ij}_\alpha = \delta^i_{i_\alpha} \delta^j_{j_\alpha}.
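
For the simulated toy trajectory above, such a single-edge current is just a tally along the trajectory; for example, for the edge between configurations 1 and 2:

# Net number of jumps 1 -> 2 minus 2 -> 1, i.e. the time-integrated
# current Phi^t with C^{ij} = delta^i_2 delta^j_1.
Phi = sum(+1 if (a, b) == (2, 1) else -1 if (a, b) == (1, 2) else 0
          for (b, _), (a, _) in zip(traj, traj[1:]))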

Now, what does it mean for such a set of currents to be “complete”? Here we take inspiration from Kirchhoff’s Current Law in electrical circuits: the continuity of the trajectory at each configuration of the network implies that, after a sufficiently long time, cycle (or loop, or mesh) currents completely describe the steady state. There is a standard procedure to identify a set of cycle currents: take a spanning tree T of the network; then the currents flowing along the edges E \setminus T left out of the spanning tree form a complete set.

The last ingredients you need are the affinities. They can be constructed as follows. Consider the Markov process on the network G' = (I, T) obtained by removing the observable edges. Calculate the steady state (p^{\mathrm{eq}}_i)_i of its associated master equation, which is necessarily an equilibrium (since there cannot be cycle currents on a tree…). Then the affinities are given by

\mathcal{A}_\alpha = \log  w_{i_\alpha j_\alpha} p^{\mathrm{eq}}_{j_\alpha} / w_{j_\alpha i_\alpha} p^{\mathrm{eq}}_{i_\alpha}.
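
For the 3-state toy model above this construction can be carried out by hand (a sketch under my made-up rates): take the spanning tree T = {0–1, 0–2}, so that the chord 1–2 carries the only cycle current. The tree process obeys detailed balance, so its equilibrium follows from ratios of rates along the tree edges:

# Equilibrium of the process restricted to the spanning tree T = {0-1, 0-2}:
# fix p_eq[0] = 1, apply detailed balance edge by edge, then normalize.
p_eq = np.array([1.0,
                 w[1, 0] / w[0, 1],    # p_1 / p_0 = w_{1 <- 0} / w_{0 <- 1}
                 w[2, 0] / w[0, 2]])   # p_2 / p_0 = w_{2 <- 0} / w_{0 <- 2}
p_eq /= p_eq.sum()

# Affinity of the chord 1-2, per the formula above (with i_alpha = 2,
# j_alpha = 1; flip the edge orientation if you want it positive).
A_chord = np.log(w[2, 1] * p_eq[1] / (w[1, 2] * p_eq[2]))

Unwinding the detailed-balance ratios shows that A_chord is the log-ratio of the products of rates around the cycle 0 → 1 → 2 → 0 and its reverse, as one would expect for a cycle affinity.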

Now you have all that is needed to formulate the complete theory and prove the FR.

Homework: (Difficult!) With the above definitions, prove the FR.
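
A full proof won’t fit here, and sampling the FR directly requires estimating the whole histogram of currents; but its corollary the IFR can at least be probed numerically on the toy model above. A hedged Monte Carlo sanity check (not a proof: the estimator exponentiates the current, so statistical errors are large, and the relation is only expected to hold in the long-time limit):

# Estimate < exp(-A_chord * Phi^t) > over many trajectories; it should
# approach 1 as t_max grows (up to finite-time boundary terms and noise).
samples = []
for _ in range(5000):
    tr = gillespie(w, i0=0, t_max=50.0)
    phi = sum(+1 if (a, b) == (2, 1) else -1 if (a, b) == (1, 2) else 0
              for (b, _), (a, _) in zip(tr, tr[1:]))
    samples.append(np.exp(-A_chord * phi))
print(np.mean(samples))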

How about the marginal theory? To define the effective affinities, take the set E_{\mathrm{mar}} = \{i_\mu j_\mu, \forall \mu\} of edges along which the observable currents run. Notice that its complement, the hidden edge set E_{\mathrm{hid}} = E \setminus E_{\mathrm{mar}} obtained by removing the observable edges, is not in general a spanning tree: there might be cycles that are not accounted for by our observations. However, we can still consider the Markov process on the hidden edge set, calculate its stalling steady state p^{\mathrm{st}}_i, and ta-taaa: the effective affinities are given by

\mathcal{Q}_\mu = \log w_{i_\mu j_\mu} p^{\mathrm{st}}_{j_\mu} / w_{j_\mu i_\mu} p^{\mathrm{st}}_{i_\mu}.
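
Here is a minimal numerical sketch (again my own made-up example, not from the papers): a hypothetical 4-state network with five edges, of which only the edge 0–1 is observable; the hidden edges then still contain a cycle, so the stalling state is genuinely not an equilibrium.

# Hypothetical 4-state network; w4[i, j] = w_{i <- j} (there is no edge 1-3).
w4 = np.array([[0.0, 1.0, 2.0, 3.0],
               [2.0, 0.0, 1.0, 0.0],
               [1.0, 3.0, 0.0, 1.0],
               [1.0, 0.0, 2.0, 0.0]])

# Hide the observable edge 0-1; the hidden edges 0-2, 0-3, 1-2, 2-3 still
# contain the cycle 0-2-3-0, so the stalling state carries hidden currents.
w_hid = w4.copy()
w_hid[0, 1] = w_hid[1, 0] = 0.0
W_hid = w_hid - np.diag(w_hid.sum(axis=0))
A_lin = np.vstack([W_hid, np.ones(4)])
b = np.zeros(5); b[-1] = 1.0
p_st, *_ = np.linalg.lstsq(A_lin, b, rcond=None)

# Effective affinity of the observable current, per the formula above
# (with i_mu = 0, j_mu = 1).
Q_eff = np.log(w4[0, 1] * p_st[1] / (w4[1, 0] * p_st[0]))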

Proving the marginal IFR is far more complicated than proving the complete FR. In fact, in my field we very often do not work with the currents’ probability density itself: we prefer to take its bidirectional Laplace transform and work with the currents’ cumulant generating function. There things take on a quite different and more elegant look.

Many other questions and possibilities open up now. The most important one left open is: can we generalize the theory to the (physically relevant) case where a current is supported on several edges, for example a current defined like \Phi^t = 5 \Phi^t_{12} + 7 \Phi^t_{34}? Well, it depends: the theory holds provided that the stalling state is not “internally alive”, meaning that if the observable current vanishes on average, then \Phi^t_{12} and \Phi^t_{34} should also vanish separately. This turns out to be a physically meaningful but quite strict condition.

Is all of thermodynamics “effective”?

Let me conclude with some more of those philosophical considerations that sadly I have to leave out of papers…

Stochastic thermodynamics strongly depends on the identification of physical and information-theoretic entropies, something that I did not openly talk about but that lurks behind the whole construction. Throughout my short experience as a researcher I have been pursuing a program of “relativization” of thermodynamics, making the role of the observer more and more evident and movable. Inspired by Einstein’s Gedankenexperimente, I have also tried to make the theory operational. This program may raise eyebrows here and there: many thermodynamicists embrace a naive materialistic world-view whereby the only things that matter are “real” physical quantities like temperature and pressure, while all the rest of the information-theoretic discourse is at best mathematical speculation or a fascinating analogy with no fundamental bearing. According to some, information as a physical concept lingers alarmingly close to certain extreme postmodern claims in the social sciences that “reality” does not exist unless observed, a position deemed dangerous at times when the authoritativeness of science is threatened by all sorts of anti-scientific waves.

I think, on the contrary, that making concepts relative and effective, and summoning the observer explicitly, is a laic and prudent position that serves as an antidote to radical subjectivity. The other way around, clinging to the objectivity of a preferred observer (implied in any materialistic interpretation of thermodynamics, e.g. by assuming that the most fundamental degrees of freedom are the positions and velocities of a gas’s molecules), is the dangerous position, especially when the role of such a preferred observer is passed around from the scientist to the technician and eventually to the technocrat, who would be induced to believe there are simple technological fixes to complex social problems.

How do we reconcile observer-dependence with the laws of physics? The object with the subject? On the one hand, much as the position of an object depends on the reference frame, entropy and entropy production depend on the observer and on the particular apparatus he controls or the experiment he is involved in. On the other hand, much as motion is ultimately independent of position and is agreed upon by all observers who share compatible measurement protocols, the laws of thermodynamics are independent of any particular observer’s quantification of entropy and entropy production (e.g., the effective Second Law holds no matter how much the marginal observer knows about the system, provided he operates according to our phenomenological protocol…). This is the case even in the everyday thermodynamics practiced by energy engineers et al., where there are lots of choices to gauge upon, and no external warrant that the amount of dissipation being quantified is the “true” one (whatever that means…); there can only be trust in one’s own good practices and methodology.

So, in this sense, I like to think that all observers are marginal, that this effective theory serves as a dictionary by which different observers practice and communicate thermodynamics, and that we should not revere the laws of thermodynamics as “true” idols, but rather treat them as tools of good scientific practice.

References

• M. Polettini and M. Esposito, Effective fluctuation and response theory, arXiv:1803.03552.

In this work we give the complete theory and numerous references to work of other people that was along the same lines. We employ a “spiral” approach to the presentation of the results, inspired by the pedagogical principle of Albert Baez.

• M. Polettini and M. Esposito, Effective thermodynamics for a marginal observer, Phys. Rev. Lett. 119 (2017), 240601, arXiv:1703.05715.

This is a shorter version of the story.

• B. Altaner, M. Polettini and M. Esposito, Fluctuation-dissipation relations far from equilibrium, Phys. Rev. Lett. 117 (2016), 180601, arXiv:1604.0883.

An early version of the story, containing the FDR results but not the full-fledged FR.

• G. Bisker, M. Polettini, T. R. Gingrich and J. M. Horowitz, Hierarchical bounds on entropy production inferred from partial information, J. Stat. Mech. (2017), 093210, arXiv:1708.06769.

Some extras.

• M. F. Weber and E. Frey, Master equations and the theory of stochastic path integrals, Rep. Progr. Phys. 80 (2017), 046601, arXiv:1609.02849.

Great reference if one wishes to learn about path integrals for master equation systems.

Footnotes

[1] There are as many so-called “Fluctuation Theorems” as there are authors working on them, so I decided not to call them by any name. Furthermore, notice that I prefer to distinguish between a relation (a formula) and a theorem (a line of reasoning). I lingered more on this here.

[2] “Just so you know, nobody knows what energy is.”—Richard Feynman.

I cannot help but mention here the beautiful book by Shapin and Schaffer, Leviathan and the Air-Pump, about the Boyle vs. Hobbes dispute over what constitutes a “matter of fact,” and Bruno Latour’s interpretation of it in We Have Never Been Modern. Latour argues that “modernity” is a process of separation of the human and natural spheres, and within each of these spheres a process of purification of the unit facts of knowledge and the unit facts of politics, of the object and the subject. At the same time we live in a world where these two spheres are never truly separated, a world of “hybrids” that are at the same time necessary “for all practical purposes” and inconceivable according to the myths that sustain the narration of science, of the State, and even of religion. In fact, despite these myths, we cannot conceive a scientific fact outside the contextual “network” where that fact is produced and replicated, nor can we conceive society apart from the material needs that shape it: so in this sense “we have never been modern”; we are not so different from all those societies that we take pleasure in studying with the tools of anthropology. Within the scientific community Latour is widely despised; probably he is also misread. While it is really difficult to see how his analysis applies to, say, high-energy physics, I find that thermodynamics, with its ties to the industrial revolution, perfectly embodies this tension between the natural and the artificial, the matter of fact and the matter of concern. Such great thinkers as Einstein and Ehrenfest thought of the Second Law as the only physical law that would never be replaced, and I believe this is revelatory. A second thought on the Second Law, that is, a systematic and precise definition of all its terms and circumstances, reveals that the only formulations that make sense are phenomenological statements such as the Kelvin–Planck statement or similar, which require a lot of contingent definitions regarding the operation of the engine, while fetishized and universal statements (such as that masterwork of confusion, “the entropy of the Universe cannot decrease”) are nonsensical. In this respect, the Second Law is neither a purely natural law, as the moderns argue, nor a purely social construct, as the postmoderns argue. One simply has to renounce making this separation. While I do not have a definite answer to this problem, I like to think of the Second Law as a practice, a consistency check of the thermodynamic discourse.

[3] This assumption really belongs to a time, the XIXth century, when resources were virtually infinite on planet Earth…

[4] As we will see shortly, we define equilibrium as the state where there are no currents at the interface between the system and the environment; so what is the environment’s own definition of equilibrium?!

[5] This is because we have already exploited the First Law.

[6] This nomenclature comes from alchemy, via chemistry (think of Goethe’s Elective Affinities…); it propagated through the XXth century via De Donder and Prigogine, and it is still present in our language in Luxembourg because in some way we come from the “late Brussels school”.

[7] Basically, we ask that the tunable parameters be environmental properties, such as temperatures, chemical potentials, etc., and not internal properties, such as the energy landscape or the activation barriers between configurations.


Symposium on Compositional Structures

4 May, 2018

As I type this, sitting in a lecture hall at the Lorentz Center, Jamie Vicary, University of Birmingham and University of Oxford, is announcing a new series of meetings:

Symposium on Compositional Structures.

The website, which will probably change, currently says this:

Symposium on Compositional Structures (SYCO)

The Symposium on Compositional Structures is a new interdisciplinary meeting aiming to support the growing community of researchers interested in the phenomenon of compositionality, from both applied and abstract perspectives, and in particular where category theory serves as a unifying common language.

We welcome submissions from researchers across computer science, mathematics, physics, philosophy, and beyond, with the aim of fostering discussion, disseminating new ideas, and spreading knowledge of open problems between fields. Submission is encouraged for both mature research and work in progress, and by both established academics and junior researchers, including students. The meeting does not have proceedings.

While no list of topics could be exhaustive, SYCO welcomes submissions with a compositional focus related to any of the following areas, in particular from the perspective of category theory:

logical methods in computer science, including quantum and classical programming, concurrency, natural language processing and machine learning;

graphical calculi, including string diagrams, Petri nets and reaction networks;

languages and frameworks, including process algebras, proof nets, type theory and game semantics;

abstract algebra and pure category theory, including monoidal category theory, higher category theory, operads, polygraphs, and relationships to homotopy theory;

quantum algebra, including quantum computation and representation theory;

tools and techniques, including rewriting, formal proofs and proof assistants;

industrial applications, including case studies and real-world problem descriptions.

Meetings

Meetings will involve both invited and contributed talks. The first meeting is planned for Autumn 2018, with more details to follow soon.

Funding

Some funding may be available to support travel and subsistence, especially for junior researchers who are speaking at the meeting.

Steering committee

The symposium is managed by the following people:

Ross Duncan, University of Strathclyde.
Chris Heunen, University of Edinburgh.
Aleks Kissinger, Radboud University Nijmegen.
Samuel Mimram, École Polytechnique.
Mehrnoosh Sadrzadeh, Queen Mary.
Pawel Sobocinski, University of Southampton.
Jamie Vicary, University of Birmingham and University of Oxford.