Last time I began explaining the tight relation between three concepts:
• entropy,
• information—or more precisely, lack of information, and
• biodiversity.
The idea is to consider different species of ‘replicators’. A replicator is any entity that can reproduce itself, like an organism, a gene, or a meme. A replicator can come in different kinds, and a ‘species’ is just our name for one of these kinds. If $P_i$ is the population of the $i$th species, we can interpret the fraction

$$ p_i = \frac{P_i}{\sum_j P_j} $$

as a probability: the probability that a randomly chosen replicator belongs to the $i$th species. This suggests that we define entropy just as we do in statistical mechanics:

$$ S = -\sum_i p_i \ln p_i $$
In the study of statistical inference, entropy is a measure of uncertainty, or lack of information. But now we can interpret it as a measure of biodiversity: it’s zero when just one species is present, and small when a few species have much larger populations than all the rest, but gets big otherwise.
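Here is a minimal sketch of this definition (Python with NumPy; the example populations are made up for illustration):

```python
import numpy as np

def entropy(populations):
    """Shannon entropy (in nats) of the distribution p_i = P_i / sum_j P_j."""
    P = np.asarray(populations, dtype=float)
    p = P / P.sum()
    p = p[p > 0]                      # use the convention 0 ln 0 = 0
    return -(p * np.log(p)).sum()

print(entropy([100, 0, 0]))           # one species present: entropy 0
print(entropy([90, 8, 2]))            # one species dominates: entropy small
print(entropy([25, 25, 25, 25]))      # evenly spread: entropy ln 4, the maximum for 4 species
```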
Our goal here is to play these viewpoints off against each other. In short, we want to think of natural selection, and even biological evolution, as a process of statistical inference—or in simple terms, learning.
To do this, let’s think about how entropy changes with time. Last time we introduced a simple model called the replicator equation:

$$ \frac{dP_i}{dt} = f_i(P_1, \dots, P_n) \, P_i $$

where each population grows at a rate proportional to some ‘fitness functions’ $f_i$. We can get some intuition by looking at the pathetically simple case where these functions are actually constants, so

$$ \frac{dP_i}{dt} = f_i \, P_i $$

The equation then becomes trivial to solve:

$$ P_i(t) = e^{t f_i} P_i(0) $$
Last time I showed that in this case, the entropy will eventually decrease. It will go to zero as $t \to +\infty$ whenever one species is fitter than all the rest and starts out with a nonzero population—since then this species will eventually take over.
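Here is a small numerical sketch of that claim, just evaluating the closed-form solution above and the entropy (Python with NumPy; the fitness values and initial populations are made up):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

f  = np.array([1.0, 2.0, 3.0])   # made-up constant fitnesses; the third species is fittest
P0 = np.array([1.0, 1.0, 1.0])   # made-up initial populations, all nonzero

for t in [0.0, 1.0, 2.0, 5.0, 10.0]:
    P = P0 * np.exp(f * t)       # closed-form solution P_i(t) = e^{t f_i} P_i(0)
    p = P / P.sum()
    print(t, round(entropy(p), 4))
# The printed entropy decreases toward 0 as the fittest species takes over.
```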
But remember, the entropy of a probability distribution is its lack of information. So the decrease in entropy signals an increase in information. And last time I argued that this makes perfect sense. As the fittest species takes over and biodiversity drops, the population is acquiring information about its environment.
However, I never said the entropy is always decreasing, because that’s false! Even in this pathetically simple case, entropy can increase.
Suppose we start with many replicators belonging to one very unfit species, and a few belonging to various more fit species. The probability distribution will start out sharply peaked, so the entropy will start out low:
[bar graph: a sharply peaked probability distribution, with most replicators in one unfit species]
Now think about what happens when time passes. At first the unfit species will rapidly die off, while the populations of the other species slowly grow:
[bar graphs: the probability distribution at two later times, becoming less sharply peaked]
So the probability distribution will, for a while, become less sharply peaked. Thus, for a while, the entropy will increase!
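Here is a sketch of this hump in the entropy, using the same closed-form solution (Python with NumPy; the fitnesses and initial distribution are made up, chosen so that most replicators start in the unfit species):

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -(p * np.log(p)).sum()

f  = np.array([1.0, 5.0, 6.0])       # made-up constant fitnesses: species 1 is very unfit
P0 = np.array([0.98, 0.01, 0.01])    # made-up initial populations: most replicators are unfit

for t in np.linspace(0.0, 5.0, 11):
    P = P0 * np.exp(f * t)           # closed-form solution of the constant-fitness case
    p = P / P.sum()
    print(round(t, 1), round(entropy(p), 3))
# The entropy first rises (the sharp peak flattens out as the unfit species dies off),
# then falls toward 0 as the fittest species takes over.
```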
This seems to conflict with our idea that the population’s entropy should decrease as it acquires information about its environment. But in fact this phenomenon is familiar in the study of statistical inference. If you start out with strongly held false beliefs about a situation, the first effect of learning more is to become less certain about what’s going on!
Get it? Say you start out by assigning a high probability to some wrong guess about a situation. The entropy of your probability distribution is low: you’re quite certain about what’s going on. But you’re wrong. When you first start suspecting you’re wrong, you become more uncertain about what’s going on. Your probability distribution flattens out, and the entropy goes up.
So, sometimes learning involves a decrease in information—false information. There’s nothing about the mathematical concept of information that says this information is true.
Given this, it’s good to work out a formula for the rate of change of entropy, which will let us see more clearly when it goes down and when it goes up. To do this, first let’s derive a completely general formula for the time derivative of the entropy of a probability distribution. Following Sir Isaac Newton, we’ll use a dot to stand for a time derivative:

$$ \dot{S} = -\frac{d}{dt} \sum_i p_i \ln p_i = -\sum_i \left( \dot{p}_i \ln p_i + \dot{p}_i \right) $$

In the last term we took the derivative of the logarithm and got a factor of $1/p_i$, which cancelled the factor of $p_i$. But since

$$ \sum_i p_i = 1 $$

we know

$$ \sum_i \dot{p}_i = 0 $$

so this last term vanishes:

$$ \dot{S} = -\sum_i \dot{p}_i \ln p_i $$
Nice! To go further, we need a formula for $\dot{p}_i$. For this we might as well return to the general replicator equation, dropping the pathetically special assumption that the fitness functions are actually constants. Then we saw last time that

$$ \dot{p}_i = \left( f_i(P) - \langle f(P) \rangle \right) p_i $$

where we used the abbreviation

$$ f_i(P) = f_i(P_1, \dots, P_n) $$

for the fitness of the $i$th species, and defined the mean fitness to be

$$ \langle f(P) \rangle = \sum_j f_j(P) \, p_j $$

Using this cute formula for $\dot{p}_i$, we get the final result:

$$ \dot{S} = \sum_i \left( f_i(P) - \langle f(P) \rangle \right) p_i \ln \left( \frac{1}{p_i} \right) $$
This is strikingly similar to the formula for entropy itself. But now each term in the sum includes a factor saying how much more fit than average, or less fit, that species is. The quantity $p_i \ln(1/p_i)$ is always nonnegative, since the graph of $-x \ln x$ looks like this:

[graph of $-x \ln x$ for $0 \le x \le 1$: zero at both endpoints and positive in between]
So, the $i$th term contributes positively to the change in entropy if the $i$th species is fitter than average, but negatively if it’s less fit than average.
This may seem counterintuitive!
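Before turning to the puzzles, here is a quick sanity check of the formula: it compares $\dot{S}$ computed from the formula with a finite-difference derivative of the entropy along the closed-form constant-fitness solution (Python with NumPy; the fitnesses and initial distribution are made up):

```python
import numpy as np

def entropy(p):
    q = p[p > 0]
    return -(q * np.log(q)).sum()

f  = np.array([1.0, 5.0, 6.0])       # made-up constant fitnesses
p0 = np.array([0.98, 0.01, 0.01])    # made-up initial probability distribution

# dS/dt from the formula above: sum_i (f_i - <f>) p_i ln(1/p_i)
fbar = (f * p0).sum()
dS_formula = ((f - fbar) * p0 * np.log(1.0 / p0)).sum()

# dS/dt by central finite differences along the closed-form solution P_i(t) = e^{t f_i} p_i(0)
def S_at(t):
    P = p0 * np.exp(f * t)
    return entropy(P / P.sum())

h = 1e-6
dS_numeric = (S_at(h) - S_at(-h)) / (2 * h)

print(dS_formula, dS_numeric)        # these should agree to several decimal places
```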
Puzzle 1. How can we reconcile this fact with our earlier observations about the case when the fitness of each species is population-independent? Namely: a) if initially most of the replicators belong to one very unfit species, the entropy will rise at first, but b) in the long run, when the fittest species present take over, the entropy drops?
If this seems too tricky, look at some examples! The first illustrates observation a); the second illustrates observation b):
Puzzle 2. Suppose we have two species, one with fitness equal to 1 initially constituting 90% of the population, the other with fitness equal to 10 initially constituting just 10% of the population:

$$ f_1 = 1, \quad p_1(0) = 0.9, \qquad f_2 = 10, \quad p_2(0) = 0.1 $$

At what rate does the entropy change at $t = 0$? Which species is responsible for most of this change?
Puzzle 3. Suppose we have two species, one with fitness equal to 10 initially constituting 90% of the population, and the other with fitness equal to 1 initially constituting just 10% of the population:

$$ f_1 = 10, \quad p_1(0) = 0.9, \qquad f_2 = 1, \quad p_2(0) = 0.1 $$

At what rate does the entropy change at $t = 0$? Which species is responsible for most of this change?
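If you’d like to check your hand calculations for these puzzles, here is a small sketch that just evaluates each term of the formula for $\dot{S}$ at $t = 0$ (Python with NumPy; the function name is made up for illustration):

```python
import numpy as np

def entropy_rate_terms(f, p):
    """Terms (f_i - <f>) p_i ln(1/p_i) of dS/dt at the given distribution, plus their sum."""
    fbar = (f * p).sum()
    terms = (f - fbar) * p * np.log(1.0 / p)
    return terms, terms.sum()

# Puzzle 2: fitnesses 1 and 10, starting at 90% and 10% of the population
print(entropy_rate_terms(np.array([1.0, 10.0]), np.array([0.9, 0.1])))

# Puzzle 3: fitnesses 10 and 1, starting at 90% and 10% of the population
print(entropy_rate_terms(np.array([10.0, 1.0]), np.array([0.9, 0.1])))
```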
I had to work through these examples to understand what’s going on. Now I do, and it all makes sense.
Next time
Still, it would be nice if there were some quantity that always goes down with the passage of time, reflecting our naive idea that the population gains information from its environment, and thus loses entropy, as time goes by.
Often there is such a quantity. But it’s not the naive entropy: it’s the relative entropy. I’ll talk about that next time. In the meantime, if you want to prepare, please reread Part 6 of this series, where I explained this concept. Back then, I argued that whenever you’re tempted to talk about entropy, you should talk about relative entropy. So, we should try that here.
There’s a big idea lurking here: information is relative. How much information a signal gives you depends on your prior assumptions about what that signal is likely to be. If this is true, perhaps biodiversity is relative too.
Those green bar graphs look like the Preston and Whittaker plots of biodiversity.
I like the way this series is moving along :)
This reminds me of a topic that came up in a technical meeting we had with my robotics group in Lisbon last week. Children who have not yet learned a language are good at noticing differences between phonemes. Then their ability to discriminate dips. When it comes up again, it has changed: they can now only discriminate phonemes that their native language discriminates. They have learned a mapping from sound to sense that groups some things together. The speaker had invented a machine learning method using interlinked self-organizing maps that reproduced this behavior.
That’s interesting! I’m not sure if it’s formally related to what I was talking about. I was talking about how the usual mathematical concept of information includes ‘misinformation’. Thus, learning can involve a decrease of information as you discard misinformation, followed by an increase of information as you acquire correct (or at least better) information. I’m not sure the children’s early state involves having a lot of ‘misinformation’. Maybe in some sense it does—I can’t tell.
It would be interesting to know what the children are like in the period when their ability to discriminate has gone down. Presumably they’re trying to classify sounds according to phonemes in the language they’re learning, but not doing very well?
I’ve got a former student in Lisbon now: Jeffrey Morton. Next fall another former student of mine will be going there: John Huerta. The common link is Roger Picken, a mathematical physicist at the Instituto Superior Tecnico who is interested in ‘higher gauge theory’, an application of category theory to physics.
There is also the question of when you start to call a language a language. My mother noticed that a child (8 months old) in my vicinity made the sound HOOM every time she saw a dog (dog in German is “HUND”, pronounced ‘hoond’, which is sort of an onomatopoeia for barking). So was she learning a word or inventing a word by mimicking barking?
It is interesting to see how this works for the quadratic diversity

$$ D = \frac{1}{\sum_{i,j} p_i Z_{ij} p_j} $$

which is the inverse of the “average discord” within a population characterized by the probabilities $p_i$, and where species $i$ resembles species $j$ (genetically, functionally, whatever) to an extent measured by $Z_{ij}$. The rate of change of this diversity measure is

$$ \frac{dD}{dt} = -D^2 \sum_{i,j} \Big[ \big( f_i(P) - \langle f(P) \rangle \big) + \big( f_j(P) - \langle f(P) \rangle \big) \Big] \, p_i Z_{ij} p_j $$

In the special case that species are regarded as entirely distinct from one another, the matrix $Z_{ij}$ reduces to a Kronecker delta, and the time derivative of the diversity becomes

$$ \frac{dD}{dt} = -2 D^2 \sum_i \big( f_i(P) - \langle f(P) \rangle \big) \, p_i^2 $$
In Puzzle 2, the sum is negative so the product is positive and the diversity is increasing; in Puzzle 3, the sum is positive so the product is negative and the diversity is decreasing. This makes sense, because in Puzzle 2 we should see the population moving towards a more even distribution: the proportion of species 2 should be going up because its fitness is larger than the average. The opposite holds true in Puzzle 3. Because the diversity is maximized for the equiprobable distribution, we should expect the diversity to be going up in Puzzle 2 and down in Puzzle 3.
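For a quick numerical check of these claims, here is a sketch that evaluates the rate-of-change formula above with $Z$ equal to the identity matrix, i.e. the Kronecker delta case, and with the puzzle values plugged in (Python with NumPy; the function name is made up):

```python
import numpy as np

def quadratic_diversity_rate(f, p, Z):
    """dD/dt for D = 1 / sum_{i,j} p_i Z_ij p_j, assuming dp_i/dt = (f_i - <f>) p_i."""
    fbar = (f * p).sum()
    pdot = (f - fbar) * p
    D = 1.0 / (p @ Z @ p)
    return -D**2 * (pdot @ Z @ p + p @ Z @ pdot)

Z = np.eye(2)   # entirely distinct species: Z_ij is the Kronecker delta
print(quadratic_diversity_rate(np.array([1.0, 10.0]), np.array([0.9, 0.1]), Z))   # Puzzle 2: positive
print(quadratic_diversity_rate(np.array([10.0, 1.0]), np.array([0.9, 0.1]), Z))   # Puzzle 3: negative
```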
Oops. “Average discord” should be “average concord” or “average similarity”. This is what I get for not proofreading before I leave to catch the bus.
My brain seems to have switched itself to the “off” position today as far as algebra is concerned, but I believe one can construct a Lyapunov function using what one might call the “cross diversity”,
And of course I forget the
. Not my best day!
Did you mean to include examples immediately after Puzzle 1?
The examples are Puzzles 2 and 3.
Yes, I thought Puzzle 1 might be too tricky in its abstract general form, so I wanted to nudge people into trying some examples:
Why isn’t entropy monotonically decreasing? The puzzles clearly illustrate that this is not the case… here is another way to think about it using interior trajectories — those in which all types survive at equilibrium. This can occur when the fitness landscape is not constant. (Hopefully I’m not giving away much of what John is planning to talk about next.)
The maximum-entropy discrete distribution is the uniform distribution, but this is unlikely to be the stable population distribution in general (it is easy to pick landscapes that converge to any given distribution). In any case, wanting the entropy to be monotonically decreasing is too much to ask for, since a population can start at a lower entropy initially and converge to a chosen distribution of higher entropy. Moreover, the entropy at such an equilibrium is nonzero, which is intuitively unsatisfying, in addition to the fact that the net result of selection in this case is overall more entropy.
Note that at a rest point of the replicator dynamic, we have that

$$ \dot{p}_i = \big( f_i(P) - \langle f(P) \rangle \big) p_i = 0 $$

so either

$$ p_i = 0 $$

or

$$ f_i(P) = \langle f(P) \rangle $$

For interior distributions (no $p_i = 0$), we have that

$$ f_i(P) = \langle f(P) \rangle $$

for all $i$, so at the stable state it is the fitness landscape that is uniform rather than the population distribution. (Note that the fitness landscape is not necessarily a probability distribution, but we can demand that it is everywhere positive.) If some of the $p_i$ are zero, then we are really in a lower-dimensional probability simplex, and the preceding statement applies. (Incidentally, this condition is where the game theory in evolutionary game theory comes in — it’s part of the Nash equilibrium criteria.) On the other hand, if only one type of replicator ultimately survives, the entropy has to converge to zero eventually. One caveat, however: a uniform fitness landscape does not guarantee stability!
One more puzzle to think about — it is possible for the replicator dynamic to have interior non-limit cycles in dimension 3 and higher (e.g. for the rock-paper-scissors game[1], of which examples have been found in the natural world). This implies that there is a constant of motion for the dynamic on these landscapes, but it’s not the entropy! Why not, and what is it?
[1] See plot A at this location: http://sites.sinauer.com/animalcommunication2e/images/10/WT10.05Figure07.jpg
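To see concretely that entropy is not the conserved quantity, here is a rough sketch that integrates the replicator dynamic for a standard zero-sum rock-paper-scissors payoff matrix with a crude Euler scheme and tracks the entropy (Python with NumPy; the payoff matrix, starting point, and step size are made up for illustration):

```python
import numpy as np

# A standard zero-sum rock-paper-scissors payoff matrix; the fitness f = A p is
# frequency-dependent, so this is the general replicator equation, not the constant case.
A = np.array([[ 0.0, -1.0,  1.0],
              [ 1.0,  0.0, -1.0],
              [-1.0,  1.0,  0.0]])

def entropy(p):
    q = p[p > 0]
    return -(q * np.log(q)).sum()

p = np.array([0.6, 0.3, 0.1])        # made-up interior starting point
dt = 0.0005
S_values = []
for _ in range(60000):               # crude Euler integration up to t = 30
    f = A @ p
    p = p + dt * (f - p @ f) * p     # dp_i/dt = (f_i - <f>) p_i
    S_values.append(entropy(p))

print(min(S_values), max(S_values))  # the entropy oscillates along the cycle,
                                     # so it is not the conserved quantity
```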
Looks like my math formatting got eaten. Just use John’s equation for the replicator equation.
In WordPress blogs, you need to write
$latex E = mc^2 $
to get

$$ E = mc^2 $$
I’ll fix up those equations. Thanks for saying lots of interesting stuff without giving away too much of the posts to come! (You were not the intended audience of this series, since I’m just thinking out loud about your work….)
I have a question concerning the interpretation of entropy as a lack of information.
For an environment (equivalently a collection of fitnesses), given the simplicity of the replicator equation, there may be one or infinitely many equilibria.
– If a species has a fitness greater than any other, then at equilibrium, it will have p=1 and the entropy will be zero.
– However, if two species have an identical fitness larger than any other species, then they may both end up with p>0. The entropy would thus be non-zero.
– Even worse, when all fitnesses are equal, any distribution of probabilities is an equilibrium for the replicator dynamics…
Can we, in such a case, interpret it as a lack of information about the environment compared to the case where only one species is singled out?
Entropy is quite generally a measure of lack of information, but one has to ask ‘lack compared to what?’, and this is why relative entropy is so important. The problems you mention are good examples of why ordinary entropy is not sufficient to understand the sense in which an ecosystem acquires information about its environment. Next time I’ll present Marc Harper’s solution using relative entropy.
Last time we saw that if we take a bunch of different species of self-replicating entities, the entropy of their population distribution can go either up or down as time passes. We saw this is true even in the pathetically simple case where all the replicators have constant fitness—so they don’t interact with each other, and don’t hit ‘limits to growth’ as their population grows […]
The formula after “we get the final result” isn’t rendered properly for some reason, at least on the two browsers I tested.
It works fine for me on Firefox. What’s wrong with it for you?
Here it is:
What do other people think?
That’s really weird. What I get is “x”, the e-like “belongs in a set” symbol, and a right arrow. The alt text works fine and looks like valid code to me.
What browser are you using? And if Firefox, which works fine for me, what version are you using and what system are you running it on?
Same for me with ubuntu 11.10 and Firefox 13.0. I get the same thing as Jussi. Though, when I place my cursor on it, I see the correct latex code… that I have to compile in my head.
Same thing (x element arrow) here: Win XP (tabula rasa boot/install at some Bavarian university) and Firefox 5.0.1.
Weird! After having sent the comment (which required me to log in to my wordpress account) I see the correct formula in both places.
I tested it with IE9 and Chrome on Windows 7, as well as Chromium and Firefox on Ubuntu. Same effect on all of them. But now I’m at work, and I see it render correctly using Chromium. I smell a caching bug on the server side…
One possibility is that, although it looks like one web address, behind the scenes “s0.wp.com” is served by a geographically nearby server. It’s interesting that Florifulgurator is in Bavaria and, going purely on the name, you’re probably in Finland (apologies if this is stereotyping! :-) ), whereas John is in Spain. So it might be a caching bug that’s hit one of the geographical servers near you (that’s now resolved itself).
Not that it matters much, but I’m in Paris until Tuesday the 19th—then I’ll take a train to Spain.
Yes, that’s correct, I’m in Finland. :) I guess it’s Murphy’s law that it’s the most important equation in the post that got hit by the bug.
This is my talk for the workshop Biological Complexity: Can It Be Quantified?
• Biology as information dynamics.