Last time we saw that given a bunch of different species of self-replicating entities, the entropy of their population distribution can go either up or down as time passes. This is true even in the pathetically simple case where all the replicators have constant fitness—so they don’t interact with each other, and don’t run into any ‘limits to growth’.
This is a bit of a bummer, since it would be nice to use entropy to explain how replicators are always extracting information from their environment, thanks to natural selection.
Luckily, a slight variant of entropy, called ‘relative entropy’, behaves better. When our replicators have an ‘evolutionarily stable state’, the relative entropy is guaranteed to always change in the same direction as time passes!
Thanks to Einstein, we’ve all heard that times and distances are relative. But how is entropy relative?
It’s easy to understand if you think of entropy as lack of information. Say I have a coin hidden under my hand. I tell you it’s heads-up. How much information did I just give you? Maybe 1 bit? That’s true if you know it’s a fair coin and I flipped it fairly before covering it up with my hand. But what if you put the coin down there yourself a minute ago, heads up, and I just put my hand over it? Then I’ve given you no information at all. The difference is the choice of ‘prior’: that is, what probability distribution you attributed to the coin before I gave you my message.
My love affair with relative entropy began in college when my friend Bruce Smith and I read Hugh Everett’s thesis, The Relative State Formulation of Quantum Mechanics. This was the origin of what’s now often called the ‘many-worlds interpretation’ of quantum mechanics. But it also has a great introduction to relative entropy. Instead of talking about ‘many worlds’, I wish people would say that Everett explained some of the mysteries of quantum mechanics using the fact that entropy is relative.
Anyway, it’s nice to see relative entropy showing up in biology.
Relative Entropy
Inscribe an equilateral triangle in a circle. Randomly choose a line segment joining two points of this circle. What is the probability that this segment is longer than a side of the triangle?
This puzzle is called Bertrand’s paradox, because different ways of solving it give different answers. To crack the paradox, you need to realize that it’s meaningless to say you’ll “randomly” choose something until you say more about how you’re going to do it.
In other words, you can’t compute the probability of an event until you pick a recipe for computing probabilities. Such a recipe is called a probability measure.
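To see the paradox concretely, here is a small Monte Carlo sketch in Python (the recipe names and code are mine, purely for illustration) comparing two of the classic ways of picking a ‘random’ chord:

```python
import math
import random

def chord_longer_than_side(trials, recipe):
    """Estimate the probability that a 'random' chord of the unit circle
    is longer than a side of the inscribed equilateral triangle (sqrt(3))."""
    side = math.sqrt(3)
    count = 0
    for _ in range(trials):
        if recipe == 'random endpoints':
            # Recipe 1: pick two uniformly random points on the circle.
            a = random.uniform(0, 2 * math.pi)
            b = random.uniform(0, 2 * math.pi)
            length = 2 * abs(math.sin((a - b) / 2))
        else:  # 'random midpoint'
            # Recipe 2: pick a uniformly random point in the disk and take
            # the chord having that point as its midpoint.
            while True:
                x, y = random.uniform(-1, 1), random.uniform(-1, 1)
                if x * x + y * y <= 1:
                    break
            length = 2 * math.sqrt(1 - (x * x + y * y))
        count += length > side
    return count / trials

for recipe in ('random endpoints', 'random midpoint'):
    print(recipe, chord_longer_than_side(100_000, recipe))
```

The first recipe gives about 1/3, the second about 1/4: the same question, but different probability measures give different answers.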
This applies to computing entropy, too! The formula for entropy clearly involves a probability distribution, even when our set of events is finite:

$$ S = -\sum_{i \in X} p_i \, \ln(p_i) $$
But this formula conceals a fact that becomes obvious when our set of events is infinite. Now the sum becomes an integral:

$$ S = -\int_X p(x) \, \ln(p(x)) \, d\mu(x) $$
And now it’s clear that this formula makes no sense until we choose the measure $d\mu(x)$. On a finite set we have a god-given choice of measure, called counting measure. Integrals with respect to this are just sums. But in general we don’t have such a god-given choice. And even for finite sets, working with counting measure is a choice: we are choosing to believe that in the absence of further evidence, all options are equally likely.
Taking this fact into account, it seems like we need two things to compute entropy: a probability distribution $p$, and a measure $d\mu(x)$.
That’s on the right track. But an even better way to think of it is this:

$$ S = -\int_X \frac{d\nu(x)}{d\mu(x)} \, \ln\left(\frac{d\nu(x)}{d\mu(x)}\right) \, d\mu(x) $$
Now we see the entropy depends on two measures: the probability measure $d\nu(x) = p(x) \, d\mu(x)$ we care about, but also the measure $d\mu(x)$.
Their ratio is important, but that’s not enough: we also need one of these measures to do the integral. Above I used the measure $d\mu(x)$ to do the integral, but we can also use $d\nu(x)$, if we write

$$ S = -\int_X \ln\left(\frac{d\nu(x)}{d\mu(x)}\right) \, d\nu(x) $$
Either way, we are computing the entropy of one measure relative to another. So we might as well admit it, and talk about relative entropy.
The entropy of the measure $d\nu$ relative to the measure $d\mu$ is defined by:

$$ S = -\int_X \frac{d\nu(x)}{d\mu(x)} \, \ln\left(\frac{d\nu(x)}{d\mu(x)}\right) \, d\mu(x) = -\int_X \ln\left(\frac{d\nu(x)}{d\mu(x)}\right) \, d\nu(x) $$

The second formula is simpler, but the first looks more like summing $-p_i \ln(p_i)$, so they’re both useful.
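On a finite set you can check directly that the two formulas agree. Here is a tiny Python sketch (my own, with made-up numbers) where $d\mu$ is counting measure and $d\nu$ is a probability measure:

```python
import math

# A probability measure nu and a reference measure mu on a 3-point set.
nu = [0.2, 0.3, 0.5]
mu = [1.0, 1.0, 1.0]          # counting measure, for concreteness

ratio = [n / m for n, m in zip(nu, mu)]   # the ratio d(nu)/d(mu)

# First formula: integrate against mu.
S1 = -sum(r * math.log(r) * m for r, m in zip(ratio, mu))
# Second formula: integrate against nu.
S2 = -sum(math.log(r) * n for r, n in zip(ratio, nu))

print(S1, S2)   # equal; with mu = counting measure this is just -sum p ln p
```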
Since we’re taking entropy to be lack of information, we can also get rid of the minus sign and define relative information by

$$ I = \int_X \ln\left(\frac{d\nu(x)}{d\mu(x)}\right) \, d\nu(x) $$
If you thought something was randomly distributed according to the probability measure $d\mu$, but then you discover it’s randomly distributed according to the probability measure $d\nu$, how much information have you gained? The answer is the relative information $I$ above.
For more on relative entropy, read Part 6 of this series, where I gave some examples illustrating how it works. Those should convince you that it’s a useful concept.
Okay: now let’s switch back to a more lowbrow approach. In the case of a finite set, we can revert to thinking of our two measures as probability distributions, and write the information gain as

$$ I(q, p) = \sum_i q_i \, \ln\left(\frac{q_i}{p_i}\right) $$
If you want to sound like a Bayesian, call $p$ the prior probability distribution and $q$ the posterior probability distribution. Whatever you call them, $I(q,p)$ is the amount of information you get if you thought $p$ and someone tells you “no, $q$!”
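Here is a minimal Python sketch of this information gain (the function name is mine), applied to the coin from the introduction, where being told ‘heads’ is worth 1 bit against a fair-flip prior and 0 bits against a you-already-knew prior:

```python
import math

def relative_information(q, p):
    """I(q,p) = sum_i q_i ln(q_i / p_i), in nats.
    Terms with q_i = 0 contribute 0; we assume p_i > 0 wherever q_i > 0."""
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p) if qi > 0)

heads = (1.0, 0.0)            # posterior: the coin is definitely heads-up
fair_prior = (0.5, 0.5)       # you thought it was a fair flip
certain_prior = (1.0, 0.0)    # you put it down heads-up yourself

print(relative_information(heads, fair_prior) / math.log(2))     # 1.0 bit
print(relative_information(heads, certain_prior) / math.log(2))  # 0.0 bits
```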
We’ll use this idea to think about how a population gains information about its environment as time goes by, thanks to natural selection. The rest of this post will be an exposition of Theorem 1 in this paper:
• Marc Harper, The replicator equation as an inference dynamic.
Harper says versions of this theorem have previously appeared in work by Ethan Akin, and independently in work by Josef Hofbauer and Karl Sigmund. He also credits others here. An idea this good is rarely noticed by just one person.
The change in relative information
So: consider $n$ different species of replicators. Let $P_i$ be the population of the $i$th species, and assume these populations change according to the replicator equation:

$$ \frac{dP_i}{dt} = f_i(P_1, \dots, P_n) \, P_i $$

where each function $f_i$ depends smoothly on all the populations. And as usual, we let

$$ p_i = \frac{P_i}{\sum_j P_j} $$

be the fraction of replicators in the $i$th species.
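If you like seeing equations run, here is a bare-bones Euler integration of the replicator equation in Python; the constant fitness values are made up for illustration, echoing the ‘constant fitness’ case from last time:

```python
import numpy as np

def replicator_step(P, fitness, dt):
    """One Euler step of dP_i/dt = f_i(P) P_i."""
    return P + dt * fitness(P) * P

def frequencies(P):
    """p_i = P_i / sum_j P_j."""
    return P / P.sum()

# Hypothetical constant fitnesses: no interaction, no limits to growth.
f = lambda P: np.array([1.0, 2.0, 3.0])

P = np.array([1.0, 1.0, 1.0])
for _ in range(2000):
    P = replicator_step(P, f, dt=0.01)

print(frequencies(P))   # the fittest species comes to dominate
```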
Let’s study the relative information $I(q, p(t))$, where $q$ is some fixed probability distribution. We’ll see something great happens when $q$ is a stable equilibrium solution of the replicator equation. In this case, the relative information can never increase! It can only decrease or stay constant.
We’ll think about what all this means later. First, let’s see that it’s true! Remember,

$$ I(q, p(t)) = \sum_i q_i \, \ln\left(\frac{q_i}{p_i(t)}\right) $$

and only $p$ depends on time, not $q$, so

$$ \frac{d}{dt} I(q, p(t)) = -\sum_i q_i \, \frac{\dot{p}_i(t)}{p_i(t)} $$

where $\dot{p}_i$ is the rate of change of the probability $p_i$.
We saw a nice formula for this in Part 9:

$$ \dot{p}_i = \Big( f_i(P) - \langle f(P) \rangle \Big) \, p_i $$

where

$$ P = (P_1, \dots, P_n) $$

and

$$ \langle f(P) \rangle = \sum_i f_i(P) \, p_i $$

is the mean fitness of the species. So, we get

$$ \frac{d}{dt} I(q, p(t)) = -\sum_i q_i \Big( f_i(P) - \langle f(P) \rangle \Big) $$
Nice, but we can fiddle with this expression to get something more enlightening. Remember, the numbers $q_i$ sum to one. So:

$$ \frac{d}{dt} I(q, p(t)) = -\sum_i q_i \, f_i(P) + \langle f(P) \rangle = \sum_i (p_i - q_i) \, f_i(P) $$
where in the last step I used the definition of the mean fitness. This result looks even cuter if we treat the numbers $f_i(P)$ as the components of a vector $f(P)$, and similarly for the numbers $p_i$ and $q_i$. Then we can use the dot product of vectors to say

$$ \frac{d}{dt} I(q, p(t)) = (p - q) \cdot f(P) $$
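As a numerical sanity check on this derivation, here is a sketch comparing a difference quotient of $I(q, p(t))$ with $(p - q) \cdot f(P)$. The payoff matrix is a hypothetical hawk-dove style example of mine, where the fitnesses happen to depend only on the frequencies $p$, a special case of depending on the populations $P$:

```python
import numpy as np

def relative_information(q, p):
    """I(q,p) = sum_i q_i ln(q_i / p_i); assumes all entries positive."""
    return np.sum(q * np.log(q / p))

# Hypothetical game: fitness f_i = (A p)_i for a payoff matrix A.
A = np.array([[0.0, 2.0],
              [1.0, 1.0]])
f = lambda p: A @ p

q = np.array([0.5, 0.5])      # a fixed probability distribution
p = np.array([0.9, 0.1])      # the current frequencies

fp = f(p)
p_dot = (fp - p @ fp) * p     # the formula from Part 9

dt = 1e-6
difference_quotient = (relative_information(q, p + dt * p_dot)
                       - relative_information(q, p)) / dt
formula = (p - q) @ fp

print(difference_quotient, formula)   # both about -0.32
```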
So, the relative information will never increase if

$$ q \cdot f(P) \ \ge \ p \cdot f(P) $$

for all choices of the population $P$.
And now something really nice happens: this is also the condition for $q$ to be an evolutionarily stable state. This concept goes back to John Maynard Smith, the founder of evolutionary game theory. In 1982 he wrote:
A population is said to be in an evolutionarily stable state if its genetic composition is restored by selection after a disturbance, provided the disturbance is not too large.
I will explain the math next time—I need to straighten out some things in my mind first. But the basic idea is compelling: an evolutionarily stable state is like a situation where our replicators ‘know all there is to know’ about the environment and each other. In any other state, the population has ‘something left to learn’—and the amount left to learn is the relative information we’ve been talking about! But as time goes on, the information still left to learn decreases!
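To watch this happen numerically, here is a sketch using the same hypothetical game as above; one can check by hand that $q = (1/2, 1/2)$ satisfies $q \cdot f(p) \ge p \cdot f(p)$ for every $p$, so the information left to learn should only shrink:

```python
import numpy as np

A = np.array([[0.0, 2.0],
              [1.0, 1.0]])
f = lambda p: A @ p
q = np.array([0.5, 0.5])      # evolutionarily stable for this game

def relative_information(q, p):
    return np.sum(q * np.log(q / p))

p, dt = np.array([0.9, 0.1]), 0.01
for step in range(501):
    if step % 100 == 0:
        print(f"t = {step * dt:4.1f}   I(q, p(t)) = {relative_information(q, p):.6f}")
    fp = f(p)
    p = p + dt * (fp - p @ fp) * p   # Euler step of the replicator equation
```

The printed values decrease monotonically toward zero as $p(t)$ approaches $q$.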
Note: in the real world, nature has never found an evolutionarily stable state… except sometimes approximately, on sufficiently short time scales, in sufficiently small regions. So we are still talking about an idealization of reality! But that’s okay, as long as we know it.


