Coupling Through Emergent Conservation Laws (Part 6)

1 July, 2018

joint post with Jonathan Lorand, Blake Pollard, and Maru Sarazola

Now let’s think about emergent conservation laws!

When a heavy rock connected to a lighter one by a pulley falls down and pulls up the lighter one, you’re seeing an emergent conservation law:

Here the height of the heavy rock plus the height of the light one is a constant. That’s a conservation law! It forces some of the potential energy lost by one rock to be transferred to the other. But it’s not a fundamental conservation law, built into the fabric of physics. It’s an emergent law that holds only thanks to the clever design of the pulley. If the rope broke, this law would be broken too.

It’s not surprising that biology uses similar tricks. But let’s see exactly how it works. First let’s look at all four reactions we’ve been studying:

\begin{array}{cccc}  \mathrm{X} + \mathrm{Y}   & \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} & \mathrm{XY} & \qquad (1) \\ \\  \mathrm{ATP} & \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} &  \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} & \qquad (2) \\  \\   \mathrm{X} + \mathrm{ATP}   & \mathrel{\substack{\gamma_{\rightarrow} \\\longleftrightarrow\\ \gamma_{\leftarrow}}} & \mathrm{ADP} + \mathrm{XP}_{\mathrm{i}}    & \qquad (3) \\ \\   \mathrm{XP}_{\mathrm{i}} +\mathrm{Y} & \mathrel{\substack{\delta_{\rightarrow} \\\longleftrightarrow\\ \delta_{\leftarrow}}} &  \mathrm{XY} + \mathrm{P}_{\mathrm{i}} & \qquad (4)   \end{array}

It’s easy to check that the rate equations for these reactions have the following conserved quantities, that is, quantities that are constant in time:

A) [\mathrm{X}] + [\mathrm{XP}_{\mathrm{i}} ] + [\mathrm{XY}], due to the conservation of X.

B) [\mathrm{Y}] + [\mathrm{XY}], due to the conservation of Y.

C) 3[\mathrm{ATP}] +[\mathrm{XP}_{\mathrm{i}} ] +[\mathrm{P}_{\mathrm{i}}] +2[\mathrm{ADP}], due to the conservation of phosphorus.

D) [\mathrm{ATP}] + [\mathrm{ADP}], due to the conservation of adenosine.

Moreover, these quantities, and their linear combinations, are the only conserved quantities for reactions (1)–(4).

To see this, we use some standard ideas from reaction network theory. Consider the 7-dimensional space with orthonormal basis given by the species in our reaction network:

\mathrm{ATP}, \mathrm{ADP}, \mathrm{P}_{\mathrm{i}}, \mathrm{XP}_{\mathrm{i}}, \mathrm{X}, \mathrm{Y}, \mathrm{XY}

We can think of complexes like \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} as vectors in this space. An arbitrary choice of the concentrations of all species also defines a vector in this space. Furthermore, any reaction involving these species defines a vector in this space, namely the sum of the products minus the sum of the reactants. This is called the reaction vector of this reaction. Reactions (1)–(4) give these reaction vectors:

\begin{array}{ccl}    v_\alpha &=& \mathrm{XY} - \mathrm{X} - \mathrm{Y}  \\  \\  v_\beta &= & \mathrm{P}_{\mathrm{i}} + \mathrm{ADP} - \mathrm{ATP} \\  \\  v_\gamma &=& \mathrm{XP}_{\mathrm{i}}  + \mathrm{ADP} -  \mathrm{ATP} - \mathrm{X} \\   \\  v_\delta &= & \mathrm{XY} + \mathrm{P}_{\mathrm{i}} -  \mathrm{XP}_{\mathrm{i}}  -  \mathrm{Y}  \end{array}

Any change in concentrations caused by these reactions must lie in the stoichiometric subspace: that is, the space spanned by the reaction vectors. Since these vectors obey one nontrivial relation:

v_\alpha + v_\beta = v_\gamma + v_\delta

the stoichiometric subspace is 3-dimensional. Therefore, the space of conserved quantities must be 4-dimensional, since a linear combination of concentrations is conserved precisely when it is orthogonal to every reaction vector, and 7 - 3 = 4.
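If you like to check such things by computer, here is a minimal sketch in Python using NumPy. The ordering of the species and the array names are just conventions chosen for this illustration:

import numpy as np

# species ordering: ATP, ADP, Pi, XPi, X, Y, XY
v_alpha = np.array([ 0,  0,  0,  0, -1, -1,  1])   # X + Y <-> XY
v_beta  = np.array([-1,  1,  1,  0,  0,  0,  0])   # ATP <-> ADP + Pi
v_gamma = np.array([-1,  1,  0,  1, -1,  0,  0])   # X + ATP <-> ADP + XPi
v_delta = np.array([ 0,  0,  1, -1,  0, -1,  1])   # XPi + Y <-> XY + Pi

S = np.stack([v_alpha, v_beta, v_gamma, v_delta])

# the one nontrivial relation among the reaction vectors
assert np.array_equal(v_alpha + v_beta, v_gamma + v_delta)

# the stoichiometric subspace is 3-dimensional...
print(np.linalg.matrix_rank(S))                    # prints 3

# ...so the conserved quantities form a 7 - 3 = 4-dimensional space,
# spanned by the quantities A)-D) above:
A = np.array([0, 0, 0, 1, 1, 0, 1])                # [X] + [XPi] + [XY]
B = np.array([0, 0, 0, 0, 0, 1, 1])                # [Y] + [XY]
C = np.array([3, 2, 1, 1, 0, 0, 0])                # phosphorus
D = np.array([1, 1, 0, 0, 0, 0, 0])                # adenosine
assert np.allclose(S @ np.stack([A, B, C, D]).T, 0)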

Now let’s compare this with the situation where ‘coupling’ occurs! For this we consider only reactions (3) and (4):

Now the stoichiometric subspace is 2-dimensional, since v_\gamma and v_\delta are linearly independent. Thus, the space of conserved quantities becomes 5-dimensional! Indeed, we can find an additional conserved quantity:

E) [\mathrm{Y} ] +[\mathrm{P}_{\mathrm{i}}]

that is linearly independent from the four conserved quantities we had before. It does not derive from the conservation of a particular molecular component. In other words, conservation of this quantity is not a fundamental law of chemistry. Instead, it is an emergent conservation law, which holds thanks to the workings of the cell! It holds in situations where the rate constants of reactions catalyzed by the cell’s enzymes are so much larger than those of other reactions that we can ignore those other reactions.

And remember from last time: these are precisely the situations where we have coupling.

Indeed, the emergent conserved quantity E) precisely captures the phenomenon of coupling! The only way for ATP to form ADP + Pi without changing this quantity is for Y to be consumed in the same amount as Pi is created… thus forming the desired product XY.
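Here is the corresponding check in Python, a small continuation of the NumPy sketch above (same made-up species ordering), showing that E) is conserved by the catalyzed reactions (3) and (4) but not by the uncatalyzed reaction (2):

import numpy as np

# species ordering: ATP, ADP, Pi, XPi, X, Y, XY
v_gamma = np.array([-1, 1, 0, 1, -1, 0, 0])   # X + ATP <-> ADP + XPi
v_delta = np.array([ 0, 0, 1, -1, 0, -1, 1])  # XPi + Y <-> XY + Pi
v_beta  = np.array([-1, 1, 1, 0, 0, 0, 0])    # ATP <-> ADP + Pi

S = np.stack([v_gamma, v_delta])
print(np.linalg.matrix_rank(S))               # prints 2, so 7 - 2 = 5 conserved quantities

E = np.array([0, 0, 1, 0, 0, 1, 0])           # coefficients of [Y] + [Pi]
print(S @ E)                                  # [0 0]: conserved by (3) and (4)
print(v_beta @ E)                             # 1: not conserved by (2)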

Next time we’ll look at a more complicated example from biology: the urea cycle.

 


 
The paper:

• John Baez, Jonathan Lorand, Blake S. Pollard and Maru Sarazola,
Biochemical coupling through emergent conservation laws.

The blog series:

Part 1 – Introduction.

Part 2 – Review of reaction networks and equilibrium thermodynamics.

Part 3 – What is coupling?

Part 4 – Interactions.

Part 5 – Coupling in quasiequilibrium states.

Part 6 – Emergent conservation laws.

Part 7 – The urea cycle.

Part 8 – The citric acid cycle.


Coupling Through Emergent Conservation Laws (Part 5)

30 June, 2018

joint post with Jonathan Lorand, Blake Pollard, and Maru Sarazola

Coupling is the way biology makes reactions that ‘want’ to happen push forward desirable reactions that don’t want to happen. Coupling is achieved through the action of enzymes—but in a subtle way. An enzyme can increase the rate constant of a reaction. However, it cannot change the ratio of forward to reverse rate constants, since that is fixed by the difference of free energies, as we saw in Part 2:

\displaystyle{ \frac{\alpha_\to}{\alpha_\leftarrow} = e^{-\Delta {G^\circ}/RT} }    \qquad

and the presence of an enzyme does not change this.

Indeed, if an enzyme could change this ratio, there would be no need for coupling! For example, increasing the ratio \alpha_\rightarrow/\alpha_\leftarrow in the reaction

\mathrm{X} + \mathrm{Y} \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} \mathrm{XY}

would favor the formation of XY, as desired. But this option is not available.

Instead, to achieve coupling, the cell uses enzymes to greatly increase both the forward and reverse rate constants for some reactions while leaving those for others unchanged!

Let’s see how this works. In our example, the cell is trying to couple ATP hydrolysis to the formation of the molecule XY from two smaller parts X and Y. These reactions don’t help do that:

\begin{array}{cclc}  \mathrm{X} + \mathrm{Y}   & \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} & \mathrm{XY} & \qquad (1) \\ \\  \mathrm{ATP} & \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} &  \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} & \qquad (2)   \end{array}

but these do:

\begin{array}{cclc}   \mathrm{X} + \mathrm{ATP}   & \mathrel{\substack{\gamma_{\rightarrow} \\\longleftrightarrow\\ \gamma_{\leftarrow}}} & \mathrm{ADP} + \mathrm{XP}_{\mathrm{i}}    & (3) \\ \\   \mathrm{XP}_{\mathrm{i}} +\mathrm{Y} & \mathrel{\substack{\delta_{\rightarrow} \\\longleftrightarrow\\ \delta_{\leftarrow}}} &  \mathrm{XY} + \mathrm{P}_{\mathrm{i}} & (4)   \end{array}

So, the cell uses enzymes to make the rate constants for reactions (3) and (4) much bigger than for (1) and (2). In this situation we can ignore reactions (1) and (2) and still have a good approximate description of the dynamics, at least for sufficiently short time scales.

Thus, we shall study quasiequilibria, namely steady states of the rate equation for reactions (3) and (4) but not (1) and (2). In this approximation, the relevant Petri net becomes this:

Now it is impossible for ATP to turn into ADP + Pi without X + Y also turning into XY. As we shall see, this is the key to coupling!

In quasiequilibrium states, we shall find a nontrivial relation between the ratios [\mathrm{XY}]/[\mathrm{X}][\mathrm{Y}] and [\mathrm{ATP}]/[\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]. This lets the cell increase the amount of XY that gets made by increasing the amount of ATP present.

Of course, this is just part of the full story. Over longer time scales, reactions (1) and (2) become important. They would drive the system toward a true equilibrium, and destroy coupling, if there were not an inflow of the reactants ATP, X and Y and an outflow of the products Pi and XY. To take these inflows and outflows into account, we need the theory of ‘open’ reaction networks… which is something I’m very interested in!

But this is beyond our scope here. We only consider reactions (3) and (4), which give the following rate equation:

\begin{array}{ccl}   \dot{[\mathrm{X}]} & = & -\gamma_\to [\mathrm{X}][\mathrm{ATP}] + \gamma_\leftarrow [\mathrm{ADP}][\mathrm{XP}_{\mathrm{i}} ]  \\  \\  \dot{[\mathrm{Y}]} & = & -\delta_\to [\mathrm{XP}_{\mathrm{i}} ][\mathrm{Y}] +\delta_\leftarrow [\mathrm{XY}][\mathrm{P}_{\mathrm{i}}]  \\ \\  \dot{[\mathrm{XY}]} & = &\delta_\to [\mathrm{XP}_{\mathrm{i}} ][\mathrm{Y}] -\delta_\leftarrow [\mathrm{XY}][\mathrm{P}_{\mathrm{i}}]  \\ \\  \dot{[\mathrm{ATP}]} & = & -\gamma_\to [\mathrm{X}][\mathrm{ATP}] + \gamma_\leftarrow [\mathrm{ADP}][\mathrm{XP}_{\mathrm{i}} ]  \\ \\  \dot{[\mathrm{ADP}]} & =& \gamma_\to [\mathrm{X}][\mathrm{ATP}] - \gamma_\leftarrow [\mathrm{ADP}][\mathrm{XP}_{\mathrm{i}} ]  \\  \\  \dot{[\mathrm{P}_{\mathrm{i}}]} & = & \delta_\to [\mathrm{XP}_{\mathrm{i}} ][\mathrm{Y}] -\delta_\leftarrow [\mathrm{XY}][\mathrm{P}_{\mathrm{i}}]  \\ \\  \dot{[\mathrm{XP}_{\mathrm{i}} ]} & = & \gamma_\to [\mathrm{X}][\mathrm{ATP}] - \gamma_\leftarrow [\mathrm{ADP}][\mathrm{XP}_{\mathrm{i}} ] \\ \\ && -\delta_\to [\mathrm{XP}_{\mathrm{i}}][\mathrm{Y}] +\delta_\leftarrow [\mathrm{XY}][\mathrm{P}_{\mathrm{i}}]  \end{array}

Quasiequilibria occur when all these time derivatives vanish. This happens when

\begin{array}{ccl}   \gamma_\to [\mathrm{X}][\mathrm{ATP}] & = & \gamma_\leftarrow [\mathrm{ADP}][\mathrm{XP}_{\mathrm{i}} ]\\  \\  \delta_\to [\mathrm{XP}_{\mathrm{i}} ][\mathrm{Y}] & = & \delta_\leftarrow [\mathrm{XY}][\mathrm{P}_{\mathrm{i}}]  \end{array}

This pair of equations is equivalent to

\displaystyle{ \frac{\gamma_\to}{\gamma_\leftarrow}\frac{[\mathrm{X}][\mathrm{ATP}]}{[\mathrm{ADP}]}=[\mathrm{XP}_{\mathrm{i}} ]  =\frac{\delta_\leftarrow}{\delta_\to}\frac{[\mathrm{XY}][\mathrm{P}_{\mathrm{i}}]}{[\mathrm{Y}]} }

and it implies

\displaystyle{ \frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]}  = \frac{\gamma_\to}{\gamma_\leftarrow}\frac{\delta_\to}{\delta_\leftarrow} \frac{[\mathrm{ATP}]}{[\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]} }

If we forget about the species XPi (whose presence is crucial for the coupling to happen, but whose concentration we do not care about), the quasiequilibrium does not impose any conditions other than the above relation. We conclude that, under these circumstances and assuming we can increase the ratio

\displaystyle{ \frac{[\mathrm{ATP}]}{[\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]} }

it is possible to increase the ratio

\displaystyle{\frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]} }

This constitutes ‘coupling’.
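Here’s a numerical illustration in Python, integrating the rate equation for reactions (3) and (4) with SciPy until it settles down and then checking the relation derived above. The rate constants and initial concentrations are arbitrary numbers invented for this sketch:

import numpy as np
from scipy.integrate import solve_ivp

g_f, g_r = 2.0, 1.0     # rate constants for reaction (3): gamma forward, gamma reverse
d_f, d_r = 3.0, 1.5     # rate constants for reaction (4): delta forward, delta reverse

def rate_eq(t, c):
    X, Y, XY, ATP, ADP, Pi, XPi = c
    J_gamma = g_f * X * ATP - g_r * ADP * XPi
    J_delta = d_f * XPi * Y - d_r * XY * Pi
    return [-J_gamma, -J_delta, J_delta, -J_gamma, J_gamma, J_delta, J_gamma - J_delta]

c0 = [1.0, 1.0, 0.1, 5.0, 0.5, 0.5, 0.0]       # arbitrary initial concentrations
sol = solve_ivp(rate_eq, (0, 1000), c0, rtol=1e-10, atol=1e-12)
X, Y, XY, ATP, ADP, Pi, XPi = sol.y[:, -1]

lhs = XY / (X * Y)
rhs = (g_f / g_r) * (d_f / d_r) * ATP / (ADP * Pi)
print(lhs, rhs)                                # these agree at the quasiequilibrium

Rerunning this sketch with more ATP and less ADP and Pi at the start typically drives the common value of these two ratios up, which is the coupling in action.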

We can say a bit more, since we can express the ratios of forward and reverse rate constants in terms of exponentials of free energy differences using the laws of thermodynamics, as explained in Part 2. Reactions (1) and (2), taken together, convert X + Y + ATP to XY + ADP + Pi. So do reactions (3) and (4) taken together. Thus, these two pairs of reactions involve the same total change in free energy, so

\displaystyle{          \frac{\alpha_\to}{\alpha_\leftarrow}\frac{\beta_\to}{\beta_\leftarrow} =   \frac{\gamma_\to}{\gamma_\leftarrow}\frac{\delta_\to}{\delta_\leftarrow} }

But we’re also assuming ATP hydrolysis is so strongly exergonic that

\displaystyle{ \frac{\beta_\to}{\beta_\leftarrow} \gg \frac{\alpha_\leftarrow}{\alpha_\to}  }

This implies that

\displaystyle{    \frac{\gamma_\to}{\gamma_\leftarrow}\frac{\delta_\to}{\delta_\leftarrow} \gg 1 }

Thus,

\displaystyle{ \frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]}  \gg \frac{[\mathrm{ATP}]}{[\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]} }

This is why coupling to ATP hydrolysis is so good at driving the synthesis of XY from X and Y! Ultimately, this inequality arises from the fact that the decrease in free energy for the reaction

\mathrm{ATP} \to \mathrm{ADP} + \mathrm{P}_{\mathrm{i}}

greatly exceeds the increase in free energy for the reaction

\mathrm{X} + \mathrm{Y} \to \mathrm{XY}

But this fact can only be put to use in the right conditions. We need to be in a ‘quasiequilibrium’ state, where fast reactions have reached a steady state but not slow ones. And we need fast reactions to have this property: they can only turn ATP into ADP + Pi if they also turn X + Y into XY. Under these conditions, we have ‘coupling’.

Next time we’ll see how coupling relies on an ‘emergent conservation law’.

 


 


Coupling Through Emergent Conservation Laws (Part 4)

29 June, 2018

joint post with Jonathan Lorand, Blake Pollard, and Maru Sarazola

We’ve been trying to understand coupling: how a chemical reaction that ‘wants to happen’ because it decreases the amount of free energy can drive forward a chemical reaction that increases free energy.

For coupling to occur, the reactant species in both reactions must interact in some way. Indeed, in real-world examples where ATP hydrolysis is coupled to the formation of a larger molecule \mathrm{XY} from parts \mathrm{X} and \mathrm{Y}, it is observed that, aside from the reactions we discussed last time:

\begin{array}{cclc}  \mathrm{X} + \mathrm{Y}   & \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} & \mathrm{XY} & \qquad (1) \\ \\  \mathrm{ATP} & \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} &  \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} & \qquad (2)   \end{array}

two other reactions (and their reverses) take place:

\begin{array}{cclc}   \mathrm{X} + \mathrm{ATP}   & \mathrel{\substack{\gamma_{\rightarrow} \\\longleftrightarrow\\ \gamma_{\leftarrow}}} & \mathrm{ADP} + \mathrm{XP}_{\mathrm{i}}    & (3) \\ \\   \mathrm{XP}_{\mathrm{i}} +\mathrm{Y} & \mathrel{\substack{\delta_{\rightarrow} \\\longleftrightarrow\\ \delta_{\leftarrow}}} &  \mathrm{XY} + \mathrm{P}_{\mathrm{i}} & (4)   \end{array}

We can picture all four reactions (1-4) in a single Petri net as follows:

Taking into account this more complicated set of reactions, which interact with each other, is still not enough to explain the phenomenon of coupling. To see this, let’s consider the rate equation for the system composed of all four reactions. To write it down neatly, let’s introduce reaction velocities, one for each reaction, giving the rate at which the forward reaction takes place minus the rate of the reverse reaction:

\begin{array}{ccl}    J_\alpha &=& \alpha_\to [\mathrm{X}][\mathrm{Y}] - \alpha_\leftarrow [\mathrm{XY}]  \\   \\   J_\beta  &=& \beta_\to [\mathrm{ATP}] - \beta_\leftarrow [\mathrm{ADP}] [\mathrm{P}_{\mathrm{i}}]  \\  \\   J_\gamma &=& \gamma_\to [\mathrm{ATP}] [\mathrm{X}] - \gamma_\leftarrow [\mathrm{ADP}] [\mathrm{XP}_{\mathrm{i}} ] \\   \\   J_\delta &=& \delta_\to [\mathrm{XP}_{\mathrm{i}} ] [\mathrm{Y}] - \delta_\leftarrow [\mathrm{XY}] [\mathrm{P}_{\mathrm{i}}]  \end{array}

All these follow from the law of mass action, which we explained in Part 2. Remember, this says that any reaction occurs at a rate equal to its rate constant times the product of the concentrations of its reactants. So, for example, this reaction

\mathrm{XP}_{\mathrm{i}} +\mathrm{Y} \mathrel{\substack{\delta_{\rightarrow} \\\longleftrightarrow\\ \delta_{\leftarrow}}}   \mathrm{XY} + \mathrm{P}_{\mathrm{i}}

goes forward at a rate equal to \delta_\rightarrow [\mathrm{XP}_{\mathrm{i}}][\mathrm{Y}], while the reverse reaction occurs at a rate equal to \delta_\leftarrow [\mathrm{XY}] [\mathrm{P}_{\mathrm{i}}]. So, its reaction velocity is

J_\delta = \delta_\to [\mathrm{XP}_{\mathrm{i}} ] [\mathrm{Y}] - \delta_\leftarrow [\mathrm{XY}] [\mathrm{P}_{\mathrm{i}}]

In terms of these reaction velocities, we can write the rate equation as follows:

\begin{array}{ccl}   \dot{[\mathrm{X}]} & = & -J_\alpha - J_\gamma  \\  \\  \dot{[\mathrm{Y}]} & = & -J_\alpha - J_\delta \\   \\  \dot{[\mathrm{XY}]} & = & J_\alpha + J_\delta \\   \\  \dot{[\mathrm{ATP}]} & = & -J_\beta - J_\gamma \\   \\  \dot{[\mathrm{ADP}]} & = & J_\beta + J_\gamma \\    \\  \dot{[\mathrm{P}_{\mathrm{i}}]} & = & J_\beta + J_\delta \\  \\  \dot{[\mathrm{XP}_{\mathrm{i}} ]} & = & J_\gamma -J_\delta  \end{array}

This makes sense if you think a bit: it says how each reaction contributes to the formation or destruction of each species.
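In other words, the rate equation is the stoichiometric matrix applied to the vector of reaction velocities. Here is a minimal Python sketch of that bookkeeping, with made-up concentrations and rate constants; the species ordering is just a convention for this example:

import numpy as np

# species ordering: X, Y, XY, ATP, ADP, Pi, XPi
# columns: reaction vectors of reactions (1)-(4)
S = np.array([
    [-1,  0, -1,  0],   # X
    [-1,  0,  0, -1],   # Y
    [ 1,  0,  0,  1],   # XY
    [ 0, -1, -1,  0],   # ATP
    [ 0,  1,  1,  0],   # ADP
    [ 0,  1,  0,  1],   # Pi
    [ 0,  0,  1, -1],   # XPi
])

def velocities(c, a_f, a_r, b_f, b_r, g_f, g_r, d_f, d_r):
    # reaction velocities J_alpha, J_beta, J_gamma, J_delta from the law of mass action
    X, Y, XY, ATP, ADP, Pi, XPi = c
    return np.array([
        a_f * X * Y   - a_r * XY,
        b_f * ATP     - b_r * ADP * Pi,
        g_f * ATP * X - g_r * ADP * XPi,
        d_f * XPi * Y - d_r * XY * Pi,
    ])

c = np.array([1.0, 1.0, 0.2, 4.0, 0.5, 0.5, 0.1])    # arbitrary concentrations
J = velocities(c, 1, 1, 1, 1, 1, 1, 1, 1)            # arbitrary rate constants
print(S @ J)    # the right-hand side of the rate equation above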

In a steady state, all these time derivatives are zero, so we must have

J_\alpha = J_\beta = -J_\gamma = - J_\delta

Furthermore, in a detailed balanced equilibrium, every reaction occurs at the same rate as its reverse reaction, so all four reaction velocities vanish! In thermodynamics, a system that’s truly in equilibrium obeys this sort of detailed balance condition.

When all the reaction velocities vanish, we have:

\begin{array}{ccl}  \displaystyle{ \frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]} } &=& \displaystyle{ \frac{\alpha_\to}{\alpha_\leftarrow} } \\  \\  \displaystyle{ \frac{[\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]}{[\mathrm{ATP}]} } &=& \displaystyle{ \frac{\beta_\to}{\beta_\leftarrow}  } \\  \\  \displaystyle{ \frac{[\mathrm{ADP}] [\mathrm{XP}_{\mathrm{i}} ]}{[\mathrm{ATP}][\mathrm{X}]} } &=& \displaystyle{ \frac{\gamma_\to}{\gamma_\leftarrow} } \\  \\  \displaystyle{   \frac{[\mathrm{XY}][\mathrm{P}_{\mathrm{i}}]}{[\mathrm{XP}_{\mathrm{i}} ][\mathrm{Y}]} }   &=& \displaystyle{ \frac{\delta_\to}{\delta_\leftarrow} }  \end{array}

Thus, even when the reactants interact, there can be no coupling if the whole system is in equilibrium, since then the ratio [\mathrm{XY}]/[\mathrm{X}][\mathrm{Y}] is still forced to be \alpha_\to/\alpha_\leftarrow. This is obvious to anyone who truly understands what Boltzmann and Gibbs did. But here we saw it in detail.

The moral is that coupling cannot occur in equilibrium. But how, precisely, does coupling occur? Stay tuned!

 


 


Coupling Through Emergent Conservation Laws (Part 3)

28 June, 2018

joint post with Jonathan Lorand, Blake Pollard, and Maru Sarazola

Last time we gave a quick intro to the chemistry and thermodynamics we’ll use to understand ‘coupling’. Now let’s really get started!

Suppose that we are in a setting in which some reaction

\mathrm{X} + \mathrm{Y} \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} \mathrm{XY}

takes place. Let’s also assume we are interested in the production of \mathrm{XY} from \mathrm{X} and \mathrm{Y}, but that in our system the reverse reaction is favored to happen. This means that the reverse rate constant exceeds the forward one, let’s say by a lot:

\alpha_\leftarrow \gg \alpha_\to

so that in equilibrium, the concentrations of the species will satisfy

\displaystyle{ \frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]}\ll 1 }

which we assume is undesirable. How can we influence this ratio to get a more desirable outcome?

This is where coupling comes into play. Informally, we think of the coupling of two reactions as a process in which an endergonic reaction—one which does not ‘want’ to happen—is combined with an exergonic reaction—one that does ‘want’ to happen—in a way that improves the products-to-reactants concentrations ratio of the first reaction.

An important example of coupling, and one we will focus on, involves ATP hydrolysis:

\mathrm{ATP} + \mathrm{H}_2\mathrm{O} \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} + \mathrm{H}^+

where ATP (adenosine triphosphate) reacts with a water molecule. Typically, this reaction results in ADP (adenosine diphosphate), a phosphate ion \mathrm{P}_{\mathrm{i}} and a hydrogen ion \mathrm{H}^+. To simplify calculations, we will replace the above equation with

\mathrm{ATP}  \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} \mathrm{ADP} + \mathrm{P}_{\mathrm{i}}

since suppressing the bookkeeping of hydrogen and oxygen atoms in this manner will not affect our main points.

One reason ATP hydrolysis is good for coupling is that this reaction is strongly exergonic:

\beta_\to \gg \beta_\leftarrow

and in fact so much that

\displaystyle{ \frac{\beta_\to}{\beta_\leftarrow} \gg \frac{\alpha_\leftarrow}{\alpha_\to}  }

Yet this fact alone is insufficient to explain coupling!

To see why, suppose our system consists merely of the two reactions

\begin{array}{ccc}  \mathrm{X} + \mathrm{Y}   & \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} & \mathrm{XY} \\ \\  \mathrm{ATP} & \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} &  \mathrm{ADP} + \mathrm{P}_{\mathrm{i}}  \end{array}

happening in parallel. We can study the concentrations in equilibrium to see that one reaction has no influence on the other. Indeed, the rate equation for this reaction network is

\begin{array}{ccl}  \dot{[\mathrm{X}]} & = & -\alpha_\to [\mathrm{X}][\mathrm{Y}]+\alpha_\leftarrow [\mathrm{XY}]\\ \\  \dot{[\mathrm{Y}]} & = & -\alpha_\to [\mathrm{X}][\mathrm{Y}]+\alpha_\leftarrow [\mathrm{XY}]\\ \\  \dot{[\mathrm{XY}]} & = & \alpha_\to [\mathrm{X}][\mathrm{Y}]-\alpha_\leftarrow [\mathrm{XY}]\\ \\  \dot{[\mathrm{ATP}]} & =& -\beta_\to [\mathrm{ATP}]+\beta_\leftarrow [\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]\\ \\  \dot{[\mathrm{ADP}]} & = &\beta_\to [\mathrm{ATP}]-\beta_\leftarrow [\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]\\ \\  \dot{[\mathrm{P}_{\mathrm{i}}]} & = &\beta_\to [\mathrm{ATP}]-\beta_\leftarrow [\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]  \end{array}

When concentrations are constant, these are equivalent to the relations

\displaystyle{  \frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]} = \frac{\alpha_\to}{\alpha_\leftarrow} \ \ \text{ and } \ \ \frac{[\mathrm{ADP}][\mathrm{P}_{\mathrm{i}}]}{[\mathrm{ATP}]} = \frac{\beta_\to}{\beta_\leftarrow} }

We thus see that ATP hydrolysis is in no way affecting the ratio of [\mathrm{XY}] to [\mathrm{X}][\mathrm{Y}]. Intuitively, there is no coupling because the two reactions proceed independently. This ‘independence’ is clearly visible if we draw the reaction network as a so-called Petri net:
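For anyone who wants to see this lack of coupling numerically, here is a short Python sketch using SciPy, with rate constants and concentrations invented for the purpose: starting with very different amounts of ATP, the system relaxes to the same ratio [\mathrm{XY}]/[\mathrm{X}][\mathrm{Y}], namely \alpha_\to/\alpha_\leftarrow.

import numpy as np
from scipy.integrate import solve_ivp

a_f, a_r = 1.0, 10.0    # alpha forward much less than alpha reverse
b_f, b_r = 10.0, 1.0    # beta forward much greater than beta reverse

def rate_eq(t, c):
    X, Y, XY, ATP, ADP, Pi = c
    J_alpha = a_f * X * Y - a_r * XY
    J_beta  = b_f * ATP - b_r * ADP * Pi
    return [-J_alpha, -J_alpha, J_alpha, -J_beta, J_beta, J_beta]

for atp0 in (1.0, 20.0):
    c0 = [1.0, 1.0, 0.0, atp0, 0.1, 0.1]
    sol = solve_ivp(rate_eq, (0, 500), c0, rtol=1e-10, atol=1e-12)
    X, Y, XY, ATP, ADP, Pi = sol.y[:, -1]
    print(XY / (X * Y))   # prints 0.1 = a_f/a_r in both cases: ATP has no effect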

So what really happens when we are in the presence of coupling? Stay tuned for the next episode!

By the way, here’s what ATP hydrolysis looks like in a bit more detail, from a website at Loreto College:


 


 


Coupling Through Emergent Conservation Laws (Part 2)

27 June, 2018

joint post with Jonathan Lorand, Blake Pollard, and Maru Sarazola

Here’s a little introduction to the chemistry and thermodynamics prerequisites for our work on ‘coupling’. Luckily, it’s fun stuff that everyone should know: a lot of the world runs on these principles!

We will be working with reaction networks. A reaction network consists of a set of reactions, for example

\mathrm{X}+\mathrm{Y}\longrightarrow \mathrm{XY}

Here X, Y and XY are the species involved, and we interpret this reaction as species X and Y combining to form species XY. We call X and Y the reactants and XY the product. Additive combinations of species, such as X + Y, are called complexes.

The law of mass action states that the rate at which a reaction occurs is proportional to the product of the concentrations of the reactants. The proportionality constant is called the rate constant; it is a positive real number associated to a reaction that depends on chemical properties of the reaction along with the temperature, the pH of the solution, the nature of any catalysts that may be present, and so on. Every reaction has a reverse reaction; that is, if X and Y combine to form XY, then XY can also split into X and Y. The reverse reaction has its own rate constant.

We can summarize this information by writing

\mathrm{X} + \mathrm{Y} \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}}  \mathrm{XY}

where \alpha_{\to} is the rate constant for X and Y to combine and form XY, while \alpha_\leftarrow is the rate constant for the reverse reaction.

As time passes and reactions occur, the concentration of each species will likely change. We can record this information in a collection of functions

[\mathrm{X}] \colon \mathbb{R} \to [0,\infty),

one for each species X, where [\mathrm{X}](t) gives the concentration of the species \mathrm{X} at time t. This naturally leads one to consider the rate equation of a given reaction network, which specifies the time evolution of these concentrations. The rate equation can be read off from the reaction network, and in the above example it is:

\begin{array}{ccc}  \dot{[\mathrm{X}]} & = & -\alpha_\to [\mathrm{X}][\mathrm{Y}]+\alpha_\leftarrow [\mathrm{XY}]\\  \dot{[\mathrm{Y}]} & = & -\alpha_\to [\mathrm{X}][\mathrm{Y}]+\alpha_\leftarrow [\mathrm{XY}]\\  \dot{[\mathrm{XY}]} & = & \alpha_\to [\mathrm{X}][\mathrm{Y}]-\alpha_\leftarrow [\mathrm{XY}]  \end{array}

Here \alpha_\to [\mathrm{X}] [\mathrm{Y}] is the rate at which the forward reaction is occurring; thanks to the law of mass action, this is the rate constant \alpha_\to times the product of the concentrations of X and Y. Similarly, \alpha_\leftarrow [\mathrm{XY}] is the rate at which the reverse reaction is occurring.

We say that a system is in detailed balanced equilibrium, or simply equilibrium, when every reaction occurs at the same rate as its reverse reaction. This implies that the concentration of each species is constant in time. In our example, the condition for equilibrium is

\displaystyle{ \frac{\alpha_\to}{\alpha_\leftarrow}=\frac{[\mathrm{XY}]}{[\mathrm{X}][\mathrm{Y}]} }

and the rate equation then implies that

\dot{[\mathrm{X}]} =  \dot{[\mathrm{Y}]} =\dot{[\mathrm{XY}]} = 0
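As a sanity check, here is a tiny Python computation with made-up numbers: choosing concentrations that satisfy the equilibrium condition makes the forward and reverse rates match, so all three time derivatives vanish.

a_f, a_r = 2.0, 5.0          # made-up rate constants
X, Y = 1.3, 0.7              # made-up concentrations of X and Y
XY = (a_f / a_r) * X * Y     # choose [XY] so that [XY]/([X][Y]) = a_f/a_r

forward = a_f * X * Y        # rate of X + Y -> XY
reverse = a_r * XY           # rate of XY -> X + Y
print(forward - reverse)     # zero, up to floating-point rounding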

The laws of thermodynamics determine the ratio of the forward and reverse rate constants. For any reaction at all, this ratio is

\displaystyle{ \frac{\alpha_\to}{\alpha_\leftarrow} = e^{-\Delta {G^\circ}/RT} }  \qquad \qquad \qquad (1)

where T is the temperature, R is the ideal gas constant, and \Delta {G^\circ} is the free energy change under standard conditions.

Note that if \Delta {G^\circ} < 0, then the rate constant of the forward reaction is larger than the rate constant of the reverse reaction:

\alpha_\to > \alpha_\leftarrow

In this case one may loosely say that the forward reaction ‘wants’ to happen ‘spontaneously’. Such a reaction is called exergonic. If on the other hand \Delta {G^\circ} > 0, then the forward reaction is ‘non-spontaneous’ and it is called endergonic.
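To get a feeling for the numbers, here is a small Python snippet evaluating this formula. The free energy value is the rough textbook figure for ATP hydrolysis under standard conditions, included purely for illustration:

import numpy as np

R = 8.314            # ideal gas constant, J/(mol K)
T = 298.15           # temperature, K
dG = -30.5e3         # rough standard free energy change for ATP hydrolysis, J/mol

ratio = np.exp(-dG / (R * T))    # forward rate constant / reverse rate constant
print(ratio)                     # roughly 2 x 10^5: strongly exergonic

So a free energy change of a few tens of kJ/mol already translates into a huge imbalance between the forward and reverse rate constants.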

The most important thing for us is that \Delta {G^\circ} takes a very simple form. Each species has a free energy. The free energy of a complex

\mathrm{A}_1 + \cdots + \mathrm{A}_m

is the sum of the free energies of the species \mathrm{A}_i. Given a reaction

\mathrm{A}_1 + \cdots + \mathrm{A}_m \longrightarrow \mathrm{B}_1 + \cdots + \mathrm{B}_n

the free energy change \Delta {G^\circ} for this reaction is the free energy of

\mathrm{B}_1 + \cdots + \mathrm{B}_n

minus the free energy of

\mathrm{A}_1 + \cdots + \mathrm{A}_m.

As a consequence, \Delta{G^\circ} is additive with respect to combining multiple reactions in either series or parallel. In particular, then, the law (1) imposes relations between ratios of rate constants: for example, if we have the following more complicated set of reactions

\mathrm{A} \mathrel{\substack{\alpha_{\rightarrow} \\\longleftrightarrow\\ \alpha_{\leftarrow}}} \mathrm{B}

\mathrm{B} \mathrel{\substack{\beta_{\rightarrow} \\\longleftrightarrow\\ \beta_{\leftarrow}}} \mathrm{C}

\mathrm{A} \mathrel{\substack{\gamma_{\rightarrow} \\\longleftrightarrow\\ \gamma_{\leftarrow}}} \mathrm{C}

then, since the free energy change for \mathrm{A} \to \mathrm{C} is the sum of the free energy changes for \mathrm{A} \to \mathrm{B} and \mathrm{B} \to \mathrm{C}, we must have

\displaystyle{    \frac{\gamma_\to}{\gamma_\leftarrow} = \frac{\alpha_\to}{\alpha_\leftarrow} \frac{\beta_\to}{\beta_\leftarrow} .  }

So, not only are the rate constant ratios of reactions determined by differences in free energy, but also nontrivial relations between these ratios can arise, depending on the structure of the system of reactions in question!

Okay—this is all the basic stuff we’ll need to know. Please ask questions! Next time we’ll go ahead and use this stuff to start thinking about how biology manages to make reactions that ‘want’ to happen push forward reactions that are useful but wouldn’t happen spontaneously on their own.

 


 


Coupling Through Emergent Conservation Laws (Part 1)

27 June, 2018

joint post with Jonathan Lorand, Blake Pollard, and Maru Sarazola

In the cell, chemical reactions are often ‘coupled’ so that reactions that release energy drive reactions that are biologically useful but involve an increase in energy. But how, exactly, does coupling work?

Much is known about this question, but the literature is also full of vague explanations and oversimplifications. Coupling cannot occur in equilibrium; it arises in open systems, where the concentrations of certain chemicals are held out of equilibrium due to flows in and out. One might thus suspect that the simplest mathematical treatment of this phenomenon would involve non-equilibrium steady states of open systems. However, Bazhin has shown that some crucial aspects of coupling arise in an even simpler framework:

• Nicolai Bazhin, The essence of ATP coupling, ISRN Biochemistry 2012 (2012), article 827604.

He considers ‘quasi-equilibrium’ states, where fast reactions have come into equilibrium and slow ones are neglected. He shows that coupling occurs already in this simple approximation.

In this series of blog articles we’ll do two things. First, we’ll review Bazhin’s work in a way that readers with no training in biology or chemistry should be able to follow. (But if you get stuck, ask questions!) Second, we’ll explain a fact that seems to have received insufficient attention: in many cases, coupling relies on emergent conservation laws.

Conservation laws are important throughout science. Besides those that are built into the fabric of physics, such as conservation of energy and momentum, there are also many ‘emergent’ conservation laws that hold approximately in certain circumstances. Often these arise when processes that change a given quantity happen very slowly. For example, the most common isotope of uranium decays into lead with a half-life of about 4 billion years—but for the purposes of chemical experiments in the laboratory, it is useful to treat the amount of uranium as a conserved quantity.

The emergent conservation laws involved in biochemical coupling are of a different nature. Instead of making the processes that violate these laws happen more slowly, the cell uses enzymes to make other processes happen more quickly. At the time scales relevant to cellular metabolism, the fast processes dominate, while slowly changing quantities are effectively conserved. By a suitable choice of these emergent conserved quantities, the cell ensures that certain reactions that release energy can only occur when other ‘desired’ reactions occur. To be sure, this is only approximately true, on sufficiently short time scales. But this approximation is enlightening!

Following Bazhin, our main example involves ATP hydrolysis. We consider the following schema for a whole family of reactions:

\begin{array}{ccc}  \mathrm{X} + \mathrm{ATP}  & \longleftrightarrow & \mathrm{ADP} + \mathrm{XP}_{\mathrm{i}} \qquad (1) \\  \mathrm{XP}_{\mathrm{i}} + \mathrm{Y}  & \longleftrightarrow &    \mathrm{XY} + \mathrm{P}_{\mathrm{i}} \,\;\;\;\;\qquad (2)  \end{array}

Some concrete examples of this schema include:

• The synthesis of glutamine (XY) from glutamate (X) and ammonium (Y). This is part of the important glutamate-glutamine cycle in the central nervous system.

• The synthesis of sucrose (XY) from glucose (X) and fructose (Y). This is one of many processes whereby plants synthesize more complex sugars and starches from simpler building-blocks.

In these and other examples, the two reactions, taken together, have the effect of synthesizing a larger molecule XY out of two parts X and Y while ATP is broken down to ADP and the phosphate ion Pi. Thus, they have the same net effect as this other pair of reactions:

\begin{array}{ccc}  \mathrm{X} + \mathrm{Y} &\longleftrightarrow & \mathrm{XY} \;\;\;\quad \quad \qquad  (3) \\   \mathrm{ATP} &\longleftrightarrow & \mathrm{ADP} + \mathrm{P}_{\mathrm{i}} \qquad (4) \end{array}

The first reaction here is just the synthesis of XY from X and Y. The second is a deliberately simplified version of ATP hydrolysis. The first involves an increase of energy, while the second releases energy. But in the schema used in biology, these processes are ‘coupled’ so that ATP can only break down to ADP + Pi if X and Y combine to form XY.

As we shall see, this coupling crucially relies on a conserved quantity: the total number of Y molecules plus the total number of Pi ions is left unchanged by reactions (1) and (2). This fact is not a fundamental law of physics, nor even a general law of chemistry (such as conservation of phosphorus atoms). It is an emergent conservation law that holds approximately in special situations. Its approximate validity relies on the fact that the cell has enzymes that make reactions (1) and (2) occur more rapidly than reactions that violate this law, such as (3) and (4).

In the series to come, we’ll start by providing the tiny amount of chemistry and thermodynamics needed to understand what’s going on. Then we’ll raise the question “what is coupling?” Then we’ll study the reactions required for coupling ATP hydrolysis to the synthesis of XY from components X and Y, and explain why these reactions are not yet enough for coupling. Then we’ll show that coupling occurs in a ‘quasiequilibrium’ state where reactions (1) and (2), assumed much faster than the rest, have reached equilibrium, while the rest are neglected. And then we’ll explain the role of emergent conservation laws!

 


 


A Biochemistry Question

26 June, 2018

Does anyone know a real-world example of a cycle like this:


or in other words, this:

\begin{array}{ccc}  \mathrm{A} + \mathrm{C}_1 \longrightarrow \mathrm{C}_2 \\   \mathrm{X} + \mathrm{C}_2 \longrightarrow \mathrm{C}_3  \\    \mathrm{C}_3 \longrightarrow \mathrm{B} + \mathrm{C}_4   \\    \mathrm{C}_4 \longrightarrow \mathrm{Y} + \mathrm{C}_1   \end{array}

where the reaction

\mathrm{A} \to \mathrm{B}

is exergonic (i.e., involves a decrease in free energy) while

\mathrm{X} \to \mathrm{Y}

is endergonic (i.e., involves a free energy increase)?

The idea is that the above cycle, presumably catalyzed so that all the reactions go fairly fast under normal conditions, ‘couples’ the exergonic reaction, which ‘wants to happen’, to the endergonic reaction, which doesn’t… thus driving the endergonic one.

I would love an example from biochemistry. This is like a baby version of much more elaborate cycles such as the citric acid cycle, shown here:

in a picture from Stryer’s Biochemistry. I’m writing a paper on this stuff with Jonathan Lorand, Blake Pollard and Maru Sarazola, and we have—presumably obvious—reasons to want to discuss a simpler cycle!


Applied Category Theory 2018 – Videos

30 April, 2018

Some of the talks at Applied Category Theory 2018 were videotaped by the Statebox team. You can watch them on YouTube:

• David Spivak, A higher-order temporal logic for dynamical systems. Book available here and slides here.

• Fabio Zanasi and Bart Jacobs, Categories in Bayesian networks. Paper available here. (Some sound missing; when you hit silence skip forwards to about 15:00.)

• Bob Coecke and Aleks Kissinger, Causality. Paper available here.

• Samson Abramsky, Games and constraint satisfaction, Part 1 and Part 2. Paper available here.

• Dan Ghica, Diagrammatic semantics for digital circuits. Paper available here.

• Kathryn Hess, Towards a categorical approach to neuroscience.

• Tom Leinster, Biodiversity and the theory of magnitude. Papers available here and here.

• John Baez, Props in network theory. Slides available here, paper here and blog article here.


Retrotransposons

14 January, 2018

This article is very interesting:

• Ed Yong, Brain cells share information with virus-like capsules, Atlantic, January 12, 2018.

Your brain needs a protein called Arc. If you have trouble making this protein, you’ll have trouble forming new memories. The neuroscientist Jason Shepherd noticed something weird:

He saw that these Arc proteins assemble into hollow, spherical shells that look uncannily like viruses. “When we looked at them, we thought: What are these things?” says Shepherd. They reminded him of textbook pictures of HIV, and when he showed the images to HIV experts, they confirmed his suspicions. That, to put it bluntly, was a huge surprise. “Here was a brain gene that makes something that looks like a virus,” Shepherd says.

That’s not a coincidence. The team showed that Arc descends from an ancient group of genes called gypsy retrotransposons, which exist in the genomes of various animals, but can behave like their own independent entities. They can make new copies of themselves, and paste those duplicates elsewhere in their host genomes. At some point, some of these genes gained the ability to enclose themselves in a shell of proteins and leave their host cells entirely. That was the origin of retroviruses—the virus family that includes HIV.

It’s worth pointing out that gypsy is the name of a specific kind of retrotransposon. A retrotransposon is a gene that can make copies of itself by first transcribing itself from DNA into RNA and then converting itself back into DNA and inserting itself at other places in your chromosomes.

About 40% of your genome consists of retrotransposons! They seem to mainly be ‘selfish genes’, focused on their own self-reproduction. But some are also useful to you.

So, Arc genes are the evolutionary cousins of these viruses, which explains why they produce shells that look so similar. Specifically, Arc is closely related to a viral gene called gag, which retroviruses like HIV use to build the protein shells that enclose their genetic material. Other scientists had noticed this similarity before. In 2006, one team searched for human genes that look like gag, and they included Arc in their list of candidates. They never followed up on that hint, and “as neuroscientists, we never looked at the genomic papers so we didn’t find it until much later,” says Shepherd.

I love this because it confirms my feeling that viruses are deeply entangled with our evolutionary past. Computer viruses are just the latest phase of this story.

As if that wasn’t weird enough, other animals seem to have independently evolved their own versions of Arc. Fruit flies have Arc genes, and Shepherd’s colleague Cedric Feschotte showed that these descend from the same group of gypsy retrotransposons that gave rise to ours. But flies and back-boned animals co-opted these genes independently, in two separate events that took place millions of years apart. And yet, both events gave rise to similar genes that do similar things: Another team showed that the fly versions of Arc also sends RNA between neurons in virus-like capsules. “It’s exciting to think that such a process can occur twice,” says Atma Ivancevic from the University of Adelaide.

This is part of a broader trend: Scientists have in recent years discovered several ways that animals have used the properties of virus-related genes to their evolutionary advantage. Gag moves genetic information between cells, so it’s perfect as the basis of a communication system. Viruses use another gene called env to merge with host cells and avoid the immune system. Those same properties are vital for the placenta—a mammalian organ that unites the tissues of mothers and babies. And sure enough, a gene called syncytin, which is essential for the creation of placentas, actually descends from env. Much of our biology turns out to be viral in nature.

Here’s something I wrote in 1998 when I was first getting interested in this business:

RNA reverse transcribing viruses

RNA reverse transcribing viruses are usually called retroviruses. They have a single-stranded RNA genome. They infect animals, and when they get inside the cell’s nucleus, they copy themselves into the DNA of the host cell using reverse transcriptase. In the process they often cause tumors, presumably by damaging the host’s DNA.

Retroviruses are important in genetic engineering because they raised for the first time the possibility that RNA could be transcribed into DNA, rather than the reverse. In fact, some of them are currently being deliberately used by scientists to add new genes to mammalian cells.

Retroviruses are also important because AIDS is caused by a retrovirus: the human immunodeficiency virus (HIV). This is part of why AIDS is so difficult to treat. Most usual ways of killing viruses have no effect on retroviruses when they are latent in the DNA of the host cell.

From an evolutionary viewpoint, retroviruses are fascinating because they blur the very distinction between host and parasite. Their genome often contains genetic information derived from the host DNA. And once they are integrated into the DNA of the host cell, they may take a long time to reemerge. In fact, so-called endogenous retroviruses can be passed down from generation to generation, indistinguishable from any other cellular gene, and evolving along with their hosts, perhaps even from species to species! It has been estimated that up to 1% of the human genome consists of endogenous retroviruses! Furthermore, not every endogenous retrovirus causes a noticeable disease. Some may even help their hosts.

It gets even spookier when we notice that once an endogenous retrovirus lost the genes that code for its protein coat, it would become indistinguishable from a long terminal repeat (LTR) retrotransposon—one of the many kinds of “junk DNA” cluttering up our chromosomes. Just how much of us is made of retroviruses? It’s hard to be sure.

For my whole article, go here:

Subcellular life forms.

It’s about the mysterious subcellular entities that stand near the blurry border between the living and the non-living—like viruses, viroids, plasmids, satellites, transposons and prions. I need to update it, since a lot of new stuff is being discovered!

Jason Shepherd’s new paper has a few other authors:

• Elissa D. Pastuzyn, Cameron E. Day, Rachel B. Kearns, Madeleine Kyrke-Smith, Andrew V. Taibi, John McCormick, Nathan Yoder, David M. Belnap, Simon Erlendsson, Dustin R. Morado, John A.G. Briggs, Cédric Feschotte and Jason D. Shepherd, The neuronal gene Arc encodes a repurposed retrotransposon gag protein that mediates intercellular RNA transfer, Cell 172 (2018), 275–288.


Biology as Information Dynamics (Part 3)

9 November, 2017

On Monday I’m giving this talk at Caltech:

Biology as information dynamics, November 13, 2017, 4:00–5:00 pm, General Biology Seminar, Kerckhoff 119, Caltech.

If you’re around, please check it out! I’ll be around all day talking to people, including Erik Winfree, my graduate student host Fangzhou Xiao, and other grad students.

If you can’t make it, you can watch this video! It’s a neat subject, and I want to do more on it:

Abstract. If biology is the study of self-replicating entities, and we want to understand the role of information, it makes sense to see how information theory is connected to the ‘replicator equation’ — a simple model of population dynamics for self-replicating entities. The relevant concept of information turns out to be the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Using this we can get a new outlook on free energy, see evolution as a learning process, and give a clearer, more general formulation of Fisher’s fundamental theorem of natural selection.