It’s been a long time since you’ve seen an installment of the information geometry series on this blog! Before I took a long break, I was explaining relative entropy and how it changes in evolutionary games. Much of what I said is summarized and carried further here:

• John Baez and Blake Pollard, Relative entropy in biological systems, *Entropy* **18** (2016), 46. (Blog article here.)

But now Blake has a new paper, and I want to talk about that:

• Blake Pollard, Open Markov processes: a compositional perspective on non-equilibrium steady states in biology, *Entropy* **18** (2016), 140.

I’ll focus on just one aspect: the principle of minimum entropy production. This is an exciting yet controversial principle in non-equilibrium thermodynamics. Blake examines it in a situation where we can tell exactly what’s happening.

### Non-equilibrium steady states

Life exists away from equilibrium. Left isolated, systems will tend toward thermodynamic equilibrium. However, biology is about **open systems**: physical systems that exchange matter or energy with their surroundings. Open systems can be maintained away from equilibrium by this exchange. This leads to the idea of a **non-equilibrium steady state**—a state of an open system that doesn’t change, but is not in equilibrium.

A simple example is a pan of water sitting on a stove. Heat passes from the flame to the water and then to the air above. If the flame is very low, the water doesn’t boil and nothing moves. So, we have a steady state, at least approximately. But this is not an equilibrium, because there is a constant flow of energy through the water.

Of course in reality the water will be slowly evaporating, so we don’t really have a steady state. As always, models are approximations. If the water is evaporating slowly enough, it can be useful to approximate the situation with a non-equilibrium steady state.

There is much more to biology than steady states. However, to dip our toe into the chilly waters of non-equilibrium thermodynamics, it is nice to start with steady states. And already here there are puzzles left to solve.

### Minimum entropy production

Ilya Prigogine won the Nobel prize for his work on non-equilibrium thermodynamics. One reason is that he had an interesting idea about steady states. He claimed that under certain conditions, a non-equilibrium steady state will *minimize entropy production!*

There has been a lot of work trying to make the ‘principle of minimum entropy production’ precise and turn it into a theorem. In this book:

• G. Lebon and D. Jou, *Understanding Non-equilibrium Thermodynamics*, Springer, Berlin, 2008.

the authors give an argument for the principle of minimum entropy production based on four conditions:

• **time-independent boundary conditions**: the surroundings of the system don’t change with time.

• **linear phenomenological laws**: the laws governing the macroscopic behavior of the system are linear.

• **constant phenomenological coefficients**: the laws governing the macroscopic behavior of the system don’t change with time.

• **symmetry of the phenomenological coefficients**: since they are linear, the laws governing the macroscopic behavior of the system can be described by a linear operator, and we demand that in a suitable basis the matrix for this operator is symmetric.

The last condition is obviously the subtlest one; it’s sometimes called **Onsager reciprocity**, and people have spent a lot of time trying to derive it from other conditions.
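In the standard notation of non-equilibrium thermodynamics (these symbols are mine for illustration, not used elsewhere in this post), the four conditions can be sketched as follows, with fluxes $J_i$ driven by thermodynamic forces $X_j$:

```latex
% Linear phenomenological laws, with coefficients L_{ij} held constant in time:
J_i = \sum_j L_{ij} X_j
% Onsager reciprocity is the symmetry condition
L_{ij} = L_{ji}
% and the entropy production is then the quadratic form
\sigma = \sum_i J_i X_i = \sum_{i,j} L_{ij} X_i X_j \ge 0
```

Minimizing this quadratic form subject to the boundary conditions is what singles out the steady state in the Prigogine-style argument.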

However, Blake goes in a different direction. He considers a concrete class of open systems, a very large class called ‘open Markov processes’. These systems obey the first three conditions listed above, and the ‘detailed balanced’ open Markov processes also obey the last one. But Blake shows that minimum entropy production holds only approximately—with the approximation being good for steady states that are *near equilibrium!*

However, he shows that another minimum principle holds exactly, even for steady states that are far from equilibrium. He calls this the ‘principle of minimum dissipation’.

We actually discussed the principle of minimum dissipation in an earlier paper:

• John Baez, Brendan Fong and Blake Pollard, A compositional framework for Markov processes. (Blog article here.)

But one advantage of Blake’s new paper is that it presents the results with a minimum of category theory. Of course I love category theory, and I think it’s the right way to formalize open systems, but it can be intimidating.

Another good thing about Blake’s new paper is that it explicitly compares the principle of minimum entropy production to the principle of minimum dissipation. He shows they agree in a certain limit, namely the limit where the system is close to equilibrium.

Let me explain this. I won’t include the nice example from biology that Blake discusses: a very simple model of membrane transport. For that, read his paper! I’ll just give the general results.

### The principle of minimum dissipation

An **open Markov process** consists of a finite set $X$ of **states**, a subset $B \subseteq X$ of **boundary states**, and an **infinitesimal stochastic** operator $H : \mathbb{R}^X \to \mathbb{R}^X$, meaning a linear operator with

$$H_{ij} \ge 0 \quad \text{for all } i \ne j$$

and

$$\sum_{i \in X} H_{ij} = 0 \quad \text{for all } j \in X$$

I’ll explain these two conditions in a minute.

For each $i \in X$ we introduce a **population** $p_i(t) \in [0,\infty)$. We call the resulting function $p : X \to [0,\infty)$ the **population distribution**. Populations evolve in time according to the **open master equation**:

$$\frac{dp_i}{dt} = \sum_{j \in X} H_{ij} p_j \quad \text{for } i \notin B$$

$$p_i(t) = b_i(t) \quad \text{for } i \in B$$

So, the populations $p_i$ obey a linear differential equation at states $i$ that are not in the boundary, but they are specified ‘by the user’ to be chosen functions $b_i$ at the boundary states.

The off-diagonal entries $H_{ij}$, $i \ne j$, are the rates at which population hops from the $j$th to the $i$th state. This lets us understand the definition of an infinitesimal stochastic operator. The first condition:

$$H_{ij} \ge 0 \quad \text{for all } i \ne j$$

says that the rate for population to transition from one state to another is non-negative. The second:

$$\sum_{i \in X} H_{ij} = 0 \quad \text{for all } j \in X$$

says that population is conserved, at least if there are no boundary states. Population can flow in or out at boundary states, since the master equation doesn’t hold there.
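To make this concrete, here is a small numerical sketch. The three-state process and its rates are an invented example, not one from Blake’s paper: we check the two conditions, evolve the open master equation by Euler steps with the boundary populations held fixed, and confirm that the closed process conserves total population.

```python
import numpy as np

# A hypothetical 3-state open Markov process: states X = {0, 1, 2},
# boundary B = {0, 2}.  Column j of H lists the rates out of state j.
H = np.array([[-2.,  1.,  0.],
              [ 2., -2.,  2.],
              [ 0.,  1., -2.]])

# The two conditions for an infinitesimal stochastic operator:
assert np.all(H - np.diag(np.diag(H)) >= 0)  # off-diagonal rates H_ij >= 0
assert np.allclose(H.sum(axis=0), 0)         # each column sums to zero

boundary = {0: 1.0, 2: 3.0}  # user-chosen boundary populations b_i

def euler_step(p, dt):
    """One Euler step of the open master equation: dp_i/dt = sum_j H_ij p_j
    at interior states; boundary states are pinned to their chosen values."""
    p = p + dt * (H @ p)
    for i, b_i in boundary.items():
        p[i] = b_i
    return p

p = np.array([1.0, 0.5, 3.0])
for _ in range(10000):
    p = euler_step(p, 1e-3)
print(p)  # converges to the steady state [1, 4, 3]

# With no boundary states, the column-sum condition conserves total
# population: d/dt sum_i p_i = sum_j (sum_i H_ij) p_j = 0.
p_closed = np.array([1.0, 0.5, 0.2])
for _ in range(1000):
    p_closed = p_closed + 1e-3 * (H @ p_closed)
assert abs(p_closed.sum() - 1.7) < 1e-9
```

Note how the steady state is found here: the interior state relaxes until its master equation holds, while the boundary populations just sit where the user put them.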

A **steady state** is a solution of the open master equation that does not change with time. A steady state for a closed Markov process is typically called an **equilibrium**. So, an equilibrium obeys the master equation at all states, while for a steady state this may not be true at the boundary states. Again, the reason is that population can flow in or out at the boundary.

We say an equilibrium $q : X \to [0,\infty)$ of a Markov process is **detailed balanced** if the rate at which population flows from the $i$th state to the $j$th state is equal to the rate at which it flows from the $j$th state to the $i$th:

$$H_{ji} q_i = H_{ij} q_j \quad \text{for all } i, j \in X$$

Suppose we’ve got an open Markov process that has a detailed balanced equilibrium $q$. Then a non-equilibrium steady state $p$ will minimize a function called the ‘dissipation’, subject to constraints on its boundary populations. There’s a nice formula for the dissipation in terms of $p$ and $q$.

**Definition.** Given an open Markov process with detailed balanced equilibrium $q$, we define the **dissipation** for a population distribution $p$ to be

$$D(p) = \frac{1}{2} \sum_{i,j} H_{ij} q_j \left( \frac{p_j}{q_j} - \frac{p_i}{q_i} \right)^2$$

This formula is a bit tricky, but you’ll notice it’s quadratic in $p$ and it vanishes when $p = q$. So, it’s pretty nice.
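Here is the dissipation in code, for a small invented three-state example (not from Blake’s paper) where $q = (1, 2, 1)$ is a detailed balanced equilibrium of the chosen rates:

```python
import numpy as np

# Hypothetical 3-state process; q is a detailed balanced equilibrium for it.
H = np.array([[-2.,  1.,  0.],
              [ 2., -2.,  2.],
              [ 0.,  1., -2.]])
q = np.array([1., 2., 1.])

assert np.allclose(H @ q, 0)          # q is an equilibrium: sum_j H_ij q_j = 0
assert np.allclose(H * q, (H * q).T)  # detailed balance: H_ij q_j = H_ji q_i

def dissipation(p):
    """D(p) = (1/2) sum_{i,j} H_ij q_j (p_j/q_j - p_i/q_i)^2."""
    r = p / q                          # the ratios p_i / q_i
    return 0.5 * np.sum(H * q * (r[None, :] - r[:, None]) ** 2)

assert dissipation(q) == 0.0           # vanishes at the equilibrium
assert dissipation(2 * q) == 0.0       # ...and at any rescaling of it
assert dissipation(np.array([1., 4., 3.])) > 0
```

The off-diagonal terms $H_{ij} q_j$ are non-negative and multiply squares, while the diagonal terms vanish, so $D(p) \ge 0$ automatically.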

Using this concept we can formulate a principle of minimum dissipation, and prove that non-equilibrium steady states obey this principle:

**Definition.** We say a population distribution $p$ obeys the **principle of minimum dissipation** with boundary population $b : B \to [0,\infty)$ if $p$ minimizes $D(p)$ subject to the constraint that

$$p_i = b_i \quad \text{for all } i \in B$$

**Theorem 1.** A population distribution $p$ is a steady state with $p_i = b_i$ for all boundary states $i$ if and only if $p$ obeys the principle of minimum dissipation with boundary population $b$.

**Proof.** This follows from Theorem 28 in A compositional framework for Markov processes. █
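We can watch Theorem 1 happen numerically in a small invented three-state example (my own rates, not Blake’s): with boundary populations $p_0 = 1$ and $p_2 = 3$, the steady state has interior population $p_1 = 4$, and perturbing the interior population in either direction only increases the dissipation.

```python
import numpy as np

H = np.array([[-2.,  1.,  0.],
              [ 2., -2.,  2.],
              [ 0.,  1., -2.]])
q = np.array([1., 2., 1.])   # detailed balanced equilibrium of H

def dissipation(p):
    r = p / q
    return 0.5 * np.sum(H * q * (r[None, :] - r[:, None]) ** 2)

# Steady state with boundary B = {0, 2} and boundary populations (1, 3):
# the interior state 1 must satisfy sum_j H_1j p_j = 2 p_0 - 2 p_1 + 2 p_2 = 0.
p_ss = np.array([1., 4., 3.])
assert np.isclose((H @ p_ss)[1], 0)

# Any change of the interior population raises D, as Theorem 1 predicts.
D0 = dissipation(p_ss)
for delta in (-0.5, -0.05, 0.05, 0.5):
    p = p_ss.copy()
    p[1] += delta
    assert dissipation(p) > D0
```

Since $D$ is a positive semidefinite quadratic form in the ratios $p_i/q_i$, minimizing it with the boundary populations fixed is a small linear-algebra problem, and its critical-point equations are exactly the master equation at the interior states.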

### Minimum entropy production versus minimum dissipation

How does dissipation compare with entropy production? To answer this, first we must ask: what really is entropy production? And: how does the equilibrium state show up in the concept of entropy production?

The **relative entropy** of two population distributions $p, q$ is given by

$$I(p, q) = \sum_{i \in X} p_i \ln\left( \frac{p_i}{q_i} \right)$$

It is well known that for a closed Markov process with $q$ as a detailed balanced equilibrium, the relative entropy $I(p(t), q)$ is monotonically *decreasing* with time. This is due to an annoying sign convention in the definition of relative entropy: while entropy is typically increasing, relative entropy typically decreases. We could fix this by putting a minus sign in the above formula or giving this quantity some other name. A lot of people call it the **Kullback–Leibler divergence**, but I have taken to calling it **relative information**. For more, see:
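A quick numerical illustration with an invented closed Markov process (a three-state chain with a detailed balanced equilibrium): along solutions of the master equation, the relative information only goes down.

```python
import numpy as np

H = np.array([[-2.,  1.,  0.],
              [ 2., -2.,  2.],
              [ 0.,  1., -2.]])
q = np.array([1., 2., 1.])   # detailed balanced equilibrium of H

def relative_information(p, q):
    """I(p, q) = sum_i p_i ln(p_i / q_i), a.k.a. the Kullback-Leibler divergence."""
    return float(np.sum(p * np.log(p / q)))

# Closed process (no boundary states): evolve the master equation by
# Euler steps and record I(p(t), q) along the way.
p = np.array([1.0, 0.5, 0.2])
history = []
for _ in range(1000):
    history.append(relative_information(p, q))
    p = p + 1e-3 * (H @ p)

# Monotonically decreasing, step after step.
assert all(later <= earlier for earlier, later in zip(history, history[1:]))
```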

• John Baez and Blake Pollard, Relative entropy in biological systems. (Blog article here.)

We say ‘relative entropy’ in the title, but then we explain why ‘relative information’ is a better name, and use that. More importantly, we explain why $I(p,q)$ has the physical meaning of *free energy*. Free energy tends to decrease, so everything is okay. For details, see Section 4.

Blake has a nice formula for how fast $I(p(t), q)$ decreases:

**Theorem 2.** Consider an open Markov process with $X$ as its set of states and $B$ as the set of boundary states. Suppose $p(t)$ obeys the open master equation and $q$ is a detailed balanced equilibrium. For any boundary state $i \in B$, let

$$\frac{Dp_i}{Dt} = \frac{dp_i}{dt} - \sum_{j \in X} H_{ij} p_j$$

measure how much $p_i$ fails to obey the master equation. Then we have

$$\frac{d}{dt} I(p(t), q) = \sum_{i, j \in X} H_{ij} p_j \left( \ln\left(\frac{p_i}{q_i}\right) - \frac{p_i q_j}{p_j q_i} \right) \; + \; \sum_{i \in B} \frac{\partial I}{\partial p_i} \frac{Dp_i}{Dt}$$

Moreover, the first term is less than or equal to zero.
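This formula can be checked numerically. In the following sketch, the toy rates and the boundary flow are invented for illustration: the time derivative of $I(p(t), q)$, computed directly from $dp/dt$ using $\partial I/\partial p_i = \ln(p_i/q_i) + 1$, matches the sum of the two terms, and the first term is indeed nonpositive.

```python
import numpy as np

H = np.array([[-2.,  1.,  0.],
              [ 2., -2.,  2.],
              [ 0.,  1., -2.]])
q = np.array([1., 2., 1.])          # detailed balanced equilibrium of H
boundary_flow = {0: 0.3, 2: -0.1}   # Dp_i/Dt on the boundary B = {0, 2}

p = np.array([0.7, 1.9, 2.4])       # an arbitrary population distribution

# dp/dt: the master-equation part everywhere, plus the boundary failure terms.
dpdt = H @ p
for i, flow in boundary_flow.items():
    dpdt[i] += flow

# Left side: dI/dt computed directly, using dI/dp_i = ln(p_i/q_i) + 1.
lhs = np.sum(dpdt * (np.log(p / q) + 1))

# Right side: the two terms of the theorem.
first = np.sum(H * p[None, :] *
               (np.log(p / q)[:, None]
                - (p[:, None] * q[None, :]) / (p[None, :] * q[:, None])))
second = sum((np.log(p[i] / q[i]) + 1) * flow
             for i, flow in boundary_flow.items())

assert np.isclose(lhs, first + second)
assert first <= 0
```

The matching is not an accident of the example: the extra piece $-p_i q_j / (p_j q_i)$ in the first term sums to zero because $q$ is an equilibrium, so it changes nothing while making the nonpositivity visible.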

**Proof.** For a self-contained proof, see Information geometry (part 16), which is coming up soon. It will be a special case of the theorems there. █

Blake compares this result to previous work by Schnakenberg:

• J. Schnakenberg, Network theory of microscopic and macroscopic behavior of master equation systems, *Rev. Mod. Phys.* **48** (1976), 571–585.

The negative of Blake’s first term is this:

$$K(p) = -\sum_{i,j} H_{ij} p_j \left( \ln\left(\frac{p_i}{q_i}\right) - \frac{p_i q_j}{p_j q_i} \right)$$

Under certain circumstances, this equals what Schnakenberg calls the **entropy production**. But a better name for this quantity might be **free energy loss**, since for a closed Markov process that’s exactly what it is! In this case there are no boundary states, so the theorem above says $K(p)$ is the rate at which relative entropy, or in other words free energy, decreases.
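In code, with an invented three-state example: for a closed Markov process, $K(p)$ coincides exactly with $-\frac{d}{dt} I(p(t), q)$, and it is nonnegative, vanishing at the equilibrium.

```python
import numpy as np

H = np.array([[-2.,  1.,  0.],
              [ 2., -2.,  2.],
              [ 0.,  1., -2.]])
q = np.array([1., 2., 1.])   # detailed balanced equilibrium of H

def free_energy_loss(p):
    """K(p): the negative of the first term in Theorem 2."""
    bracket = (np.log(p / q)[:, None]
               - (p[:, None] * q[None, :]) / (p[None, :] * q[:, None]))
    return -np.sum(H * p[None, :] * bracket)

p = np.array([1.0, 0.5, 0.2])

# For a closed process, dp/dt = H p, so the chain rule gives
# dI/dt = sum_i (dp_i/dt)(ln(p_i/q_i) + 1), and K(p) is exactly -dI/dt.
dIdt = np.sum((H @ p) * (np.log(p / q) + 1))
assert np.isclose(free_energy_loss(p), -dIdt)

assert free_energy_loss(p) > 0
assert np.isclose(free_energy_loss(q), 0)
```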

For an open Markov process, things are more complicated. The theorem above shows that free energy can also flow in or out at the boundary, thanks to the second term in the formula.

Anyway, the sensible thing is to compare a principle of ‘minimum free energy loss’ to the principle of minimum dissipation. The principle of minimum dissipation is true. How about the principle of minimum free energy loss? It turns out to be approximately true near equilibrium.

For this, consider the situation in which $p$ is near to the equilibrium distribution $q$ in the sense that

$$\frac{p_i}{q_i} = 1 + \epsilon_i$$

for some small numbers $\epsilon_i$. We collect these numbers in a vector called $\epsilon$.

**Theorem 3.** Consider an open Markov process with $X$ as its set of states and $B$ as the set of boundary states. Suppose $q$ is a detailed balanced equilibrium and let $p$ be arbitrary. Then

$$K(p) = D(p) + O(\epsilon^2)$$

where $K(p)$ is the free energy loss, $D(p)$ is the dissipation, $\epsilon_i$ is defined as above, and by $O(\epsilon^2)$ we mean a sum of terms of order $\epsilon_i^2$.

**Proof.** First take the free energy loss:

$$K(p) = -\sum_{i,j} H_{ij} p_j \left( \ln\left(\frac{p_i}{q_i}\right) - \frac{p_i q_j}{p_j q_i} \right)$$

Expanding the logarithm to first order in $\epsilon$, we get

$$K(p) = -\sum_{i,j} H_{ij} p_j \left( \frac{p_i}{q_i} - 1 - \frac{p_i q_j}{p_j q_i} \right) + O(\epsilon^2)$$

Since $H$ is infinitesimal stochastic, $\sum_i H_{ij} = 0$, so the second term in the sum vanishes, leaving

$$K(p) = -\sum_{i,j} H_{ij} p_j \left( \frac{p_i}{q_i} - \frac{p_i q_j}{p_j q_i} \right) + O(\epsilon^2)$$

or

$$K(p) = -\sum_{i,j} H_{ij} q_j \left( \frac{p_i p_j}{q_i q_j} - \frac{p_i}{q_i} \right) + O(\epsilon^2)$$

Since $q$ is an equilibrium we have $\sum_j H_{ij} q_j = 0$, so now the last term in the sum vanishes, leaving

$$K(p) = -\sum_{i,j} H_{ij} q_j \, \frac{p_i p_j}{q_i q_j} + O(\epsilon^2)$$

Next, take the dissipation

$$D(p) = \frac{1}{2} \sum_{i,j} H_{ij} q_j \left( \frac{p_j}{q_j} - \frac{p_i}{q_i} \right)^2$$

and expand the square, getting

$$D(p) = \frac{1}{2} \sum_{i,j} H_{ij} q_j \left( \frac{p_j^2}{q_j^2} - 2\,\frac{p_i p_j}{q_i q_j} + \frac{p_i^2}{q_i^2} \right)$$

Since $H$ is infinitesimal stochastic, $\sum_i H_{ij} = 0$. The first term in the sum is just this times a function of $j$, summed over $j$, so it vanishes, leaving

$$D(p) = \frac{1}{2} \sum_{i,j} H_{ij} q_j \left( - 2\,\frac{p_i p_j}{q_i q_j} + \frac{p_i^2}{q_i^2} \right)$$

Since $q$ is an equilibrium, $\sum_j H_{ij} q_j = 0$. The last term above is this times a function of $i$, summed over $i$, so it vanishes, leaving

$$D(p) = -\sum_{i,j} H_{ij} q_j \, \frac{p_i p_j}{q_i q_j}$$

This matches what we got for $K(p)$, up to terms of order $O(\epsilon^2)$. █
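Finally, a numerical check of Theorem 3 and of the proof’s last step, again with invented rates: $D(p)$ equals $-\sum_{i,j} H_{ij} q_j \, p_i p_j / (q_i q_j)$ exactly for any $p$, and as the perturbation $\epsilon$ is scaled down, the gap $K(p) - D(p)$ shrinks like $\epsilon^3$ in this example, consistent with (and a bit better than) the $O(\epsilon^2)$ bound of the theorem.

```python
import numpy as np

H = np.array([[-2.,  1.,  0.],
              [ 2., -2.,  2.],
              [ 0.,  1., -2.]])
q = np.array([1., 2., 1.])   # detailed balanced equilibrium of H

def free_energy_loss(p):
    bracket = (np.log(p / q)[:, None]
               - (p[:, None] * q[None, :]) / (p[None, :] * q[:, None]))
    return -np.sum(H * p[None, :] * bracket)

def dissipation(p):
    r = p / q
    return 0.5 * np.sum(H * q * (r[None, :] - r[:, None]) ** 2)

# The closed form for D(p) from the end of the proof holds exactly:
p = np.array([0.7, 1.9, 2.4])
D_closed_form = -np.sum(H * q[None, :]
                        * (p[:, None] * p[None, :]) / (q[:, None] * q[None, :]))
assert np.isclose(dissipation(p), D_closed_form)

# Near equilibrium, shrinking the perturbation by a factor of 10
# shrinks the gap between K and D by roughly a factor of 1000.
direction = np.array([0.3, -0.2, 0.1])
gaps = []
for t in (1e-1, 1e-2, 1e-3):
    p_near = q * (1 + t * direction)
    gaps.append(abs(free_energy_loss(p_near) - dissipation(p_near)))
assert gaps[1] < gaps[0] / 100
assert gaps[2] < gaps[1] / 100
```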

In short: detailed balanced open Markov processes are governed by the principle of minimum dissipation, not minimum entropy production. *Minimum dissipation agrees with minimum entropy production only near equilibrium.*