## Information Geometry (Part 11)

Last time we saw that given a bunch of different species of self-replicating entities, the entropy of their population distribution can go either up or down as time passes. This is true even in the pathetically simple case where all the replicators have constant fitness—so they don’t interact with each other, and don’t run into any ‘limits to growth’.

This is a bit of a bummer, since it would be nice to use entropy to explain how replicators are always extracting information from their environment, thanks to natural selection.

Luckily, a slight variant of entropy, called ‘relative entropy’, behaves better. When our replicators have an ‘evolutionarily stable state’, the relative entropy is guaranteed to always change in the same direction as time passes!

Thanks to Einstein, we’ve all heard that times and distances are relative. But how is entropy relative?

It’s easy to understand if you think of entropy as lack of information. Say I have a coin hidden under my hand. I tell you it’s heads-up. How much information did I just give you? Maybe 1 bit? That’s true if you know it’s a fair coin and I flipped it fairly before covering it up with my hand. But what if you put the coin down there yourself a minute ago, heads up, and I just put my hand over it? Then I’ve given you no information at all. The difference is the choice of ‘prior’: that is, what probability distribution you attributed to the coin before I gave you my message.

My love affair with relative entropy began in college when my friend Bruce Smith and I read Hugh Everett’s thesis, The Relative State Formulation of Quantum Mechanics. This was the origin of what’s now often called the ‘many-worlds interpretation’ of quantum mechanics. But it also has a great introduction to relative entropy. Instead of talking about ‘many worlds’, I wish people would say that Everett explained some of the mysteries of quantum mechanics using the fact that entropy is relative.

Anyway, it’s nice to see relative entropy showing up in biology.

### Relative Entropy

Inscribe an equilateral triangle in a circle. Randomly choose a line segment joining two points of this circle. What is the probability that this segment is longer than a side of the triangle?

This puzzle is called Bertrand’s paradox, because different ways of solving it give different answers. To crack the paradox, you need to realize that it’s meaningless to say you’ll “randomly” choose something until you say more about how you’re going to do it.

In other words, you can’t compute the probability of an event until you pick a recipe for computing probabilities. Such a recipe is called a probability measure.
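To make the paradox concrete, here is a minimal Monte Carlo sketch. The three sampling recipes below are the classic ones, and they really do give three different answers, approximately 1/3, 1/2 and 1/4:

```python
import math
import random

SIDE = math.sqrt(3)  # side length of an equilateral triangle inscribed in a unit circle

def random_chord_length(method, rng):
    if method == "endpoints":   # pick two uniform random points on the circle
        a, b = rng.uniform(0, 2 * math.pi), rng.uniform(0, 2 * math.pi)
        return 2 * abs(math.sin((a - b) / 2))
    if method == "radius":      # pick a uniform point on a random radius as the midpoint
        r = rng.uniform(0, 1)
        return 2 * math.sqrt(1 - r * r)
    if method == "midpoint":    # pick the chord's midpoint uniformly in the disk
        r = math.sqrt(rng.uniform(0, 1))
        return 2 * math.sqrt(1 - r * r)
    raise ValueError(method)

rng = random.Random(0)
for method in ("endpoints", "radius", "midpoint"):
    hits = sum(random_chord_length(method, rng) > SIDE for _ in range(100_000))
    print(f"{method}: {hits / 100_000:.3f}")   # ≈ 0.333, 0.500, 0.250
```

Each branch is a perfectly reasonable way to "randomly choose a chord", yet each amounts to a different probability measure on the set of chords.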

This applies to computing entropy, too! The formula for entropy clearly involves a probability distribution, even when our set of events is finite:

$S = - \sum_i p_i \ln(p_i)$

But this formula conceals a fact that becomes obvious when our set of events is infinite. Now the sum becomes an integral:

$S = - \int_X p(x) \ln(p(x)) \, d x$

And now it’s clear that this formula makes no sense until we choose the measure $d x.$ On a finite set we have a god-given choice of measure, called counting measure. Integrals with respect to this are just sums. But in general we don’t have such a god-given choice. And even for finite sets, working with counting measure is a choice: we are choosing to believe that in the absence of further evidence, all options are equally likely.

Taking this fact into account, it seems like we need two things to compute entropy: a probability distribution $p(x)$, and a measure $d x.$ That’s on the right track. But an even better way to think of it is this:

$\displaystyle{ S = - \int_X \frac{p(x) dx}{dx} \ln \left(\frac{p(x) dx}{dx}\right) \, dx }$

Now we see the entropy depends on two measures: the probability measure $p(x) dx$ we care about, but also the measure $d x.$ Their ratio is important, but that’s not enough: we also need one of these measures to do the integral. Above I used the measure $dx$ to do the integral, but we can also use $p(x) dx$ if we write

$\displaystyle{ S = - \int_X \ln \left(\frac{p(x) dx}{dx}\right) p(x) dx }$

Either way, we are computing the entropy of one measure relative to another. So we might as well admit it, and talk about relative entropy.

The entropy of the measure $d \mu$ relative to the measure $d \nu$ is defined by:

$\begin{array}{ccl} S(d \mu, d \nu) &=& \displaystyle{ - \int_X \frac{d \mu(x) }{d \nu(x)} \ln \left(\frac{d \mu(x)}{ d\nu(x) }\right) d\nu(x) } \\ \\ &=& \displaystyle{ - \int_X \ln \left(\frac{d \mu(x)}{ d\nu(x) }\right) d\mu(x) } \end{array}$

The second formula is simpler, but the first looks more like summing $-p \ln(p),$ so they’re both useful.

Since we’re taking entropy to be lack of information, we can also get rid of the minus sign and define relative information by

$\begin{array}{ccl} I(d \mu, d \nu) &=& \displaystyle{ \int_X \frac{d \mu(x) }{d \nu(x)} \ln \left(\frac{d \mu(x)}{ d\nu(x) }\right) d\nu(x) } \\ \\ &=& \displaystyle{ \int_X \ln \left(\frac{d \mu(x)}{ d\nu(x) }\right) d\mu(x) } \end{array}$

If you thought something was randomly distributed according to the probability measure $d \nu,$ but then you discover it’s randomly distributed according to the probability measure $d \mu,$ how much information have you gained? The answer is $I(d\mu,d\nu).$
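For a concrete continuous instance: when $d\mu$ and $d\nu$ are both Gaussian measures, $I(d\mu, d\nu)$ has a well-known closed form, and a Monte Carlo estimate agrees with it. This is just a sketch; the particular means and standard deviations are arbitrary:

```python
import numpy as np

# I(dμ, dν) for dμ = N(m1, s1²) dx and dν = N(m2, s2²) dx has the closed form
#   ln(s2/s1) + (s1² + (m1 - m2)²) / (2 s2²) - 1/2
m1, s1, m2, s2 = 0.0, 1.0, 1.0, 2.0
exact = np.log(s2 / s1) + (s1**2 + (m1 - m2) ** 2) / (2 * s2**2) - 0.5

# Monte Carlo: sample x from dμ and average ln(dμ/dν)(x)
rng = np.random.default_rng(0)
x = rng.normal(m1, s1, size=1_000_000)
log_ratio = (np.log(s2 / s1)
             - (x - m1) ** 2 / (2 * s1**2)
             + (x - m2) ** 2 / (2 * s2**2))
print(exact, log_ratio.mean())   # both ≈ 0.443
```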

For more on relative entropy, read Part 6 of this series. I gave some examples illustrating how it works. Those should convince you that it’s a useful concept.

Okay: now let’s switch back to a more lowbrow approach. In the case of a finite set, we can revert to thinking of our two measures as probability distributions, and write the information gain as

$I(q,p) = \displaystyle{ \sum_i \ln \left(\frac{q_i}{p_i }\right) q_i}$

If you want to sound like a Bayesian, call $p$ the prior probability distribution and $q$ the posterior probability distribution. Whatever you call them, $I(q,p)$ is the amount of information you get if you thought $p$ and someone tells you “no, $q$!”
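In code, and using base-2 logarithms so the answer comes out in bits, the coin example from the start of the post looks like this (a minimal sketch):

```python
import math

def relative_information(q, p):
    """Information gained on updating the prior p to the posterior q, in bits."""
    return sum(qi * math.log2(qi / pi) for qi, pi in zip(q, p) if qi > 0)

heads = [1.0, 0.0]                              # posterior: "the coin is heads-up"
print(relative_information(heads, [0.5, 0.5]))  # fair-flip prior: 1.0 bit gained
print(relative_information(heads, [1.0, 0.0]))  # you already knew: 0.0 bits gained
```

The same message carries different amounts of information depending on the prior, which is the whole point.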

We’ll use this idea to think about how a population gains information about its environment as time goes by, thanks to natural selection. The rest of this post will be an exposition of Theorem 1 in this paper:

• Marc Harper, The replicator equation as an inference dynamic.

Harper says versions of this theorem have previously appeared in work by Ethan Akin, and independently in work by Josef Hofbauer and Karl Sigmund. He also credits others here. An idea this good is rarely noticed by just one person.

### The change in relative information

So: consider $n$ different species of replicators. Let $P_i$ be the population of the $i$th species, and assume these populations change according to the replicator equation:

$\displaystyle{ \frac{d P_i}{d t} = f_i(P_1, \dots, P_n) \, P_i }$

where each function $f_i$ depends smoothly on all the populations. And as usual, we let

$\displaystyle{ p_i = \frac{P_i}{\sum_j P_j} }$

be the fraction of replicators in the $i$th species.
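As a sanity check, here is a minimal forward-Euler simulation of the replicator equation in the constant-fitness case mentioned at the start of the post. The fitness values are made up for illustration:

```python
import numpy as np

def simulate(fitness, P0, dt, steps):
    """Integrate dP_i/dt = f_i(P) P_i by forward Euler; return the fractions p_i."""
    P = np.array(P0, dtype=float)
    for _ in range(steps):
        P = P + dt * fitness(P) * P
    return P / P.sum()

# Constant fitnesses: the species don't interact, so P_i(t) = P_i(0) e^{f_i t}
f = lambda P: np.array([1.0, 2.0, 3.0])
p = simulate(f, [1.0, 1.0, 1.0], dt=1e-3, steps=2000)   # evolve to t = 2
print(p.round(3))   # the fittest species dominates; exact answer ∝ (e^2, e^4, e^6)
```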

Let’s study the relative information $I(q,p)$ where $q$ is some fixed probability distribution. We’ll see something great happens when $q$ is a stable equilibrium solution of the replicator equation. In this case, the relative information can never increase! It can only decrease or stay constant.

We’ll think about what all this means later. First, let’s see that it’s true! Remember,

$\begin{array}{ccl} I(q,p) &=& \displaystyle{ \sum_i \ln \left(\frac{q_i}{ p_i }\right) q_i } \\ \\ &=& \displaystyle{ \sum_i \Big(\ln(q_i) - \ln(p_i) \Big) q_i } \end{array}$

and only $p_i$ depends on time, not $q_i$, so

$\begin{array}{ccl} \displaystyle{ \frac{d}{dt} I(q,p)} &=& \displaystyle{ - \frac{d}{dt} \sum_i \ln(p_i) q_i }\\ \\ &=& \displaystyle{ - \sum_i \frac{\dot{p}_i}{p_i} \, q_i } \end{array}$

where $\dot{p}_i$ is the rate of change of the probability $p_i.$ We saw a nice formula for this in Part 9:

$\displaystyle{ \dot{p}_i = \Big( f_i(P) - \langle f(P) \rangle \Big) \, p_i }$

where

$f_i(P) = f_i(P_1, \dots, P_n)$

and

$\displaystyle{ \langle f(P) \rangle = \sum_i f_i(P) p_i }$

is the mean fitness of the species. So, we get

$\displaystyle{ \frac{d}{dt} I(q,p) } = \displaystyle{ - \sum_i \Big( f_i(P) - \langle f(P) \rangle \Big) \, q_i }$

Nice, but we can fiddle with this expression to get something more enlightening. Remember, the numbers $q_i$ sum to one. So:

$\begin{array}{ccl} \displaystyle{ \frac{d}{dt} I(q,p) } &=& \displaystyle{ \langle f(P) \rangle - \sum_i f_i(P) q_i } \\ \\ &=& \displaystyle{ \sum_i f_i(P) (p_i - q_i) } \end{array}$

where in the last step I used the definition of the mean fitness. This result looks even cuter if we treat the numbers $f_i(P)$ as the components of a vector $f(P),$ and similarly for the numbers $p_i$ and $q_i.$ Then we can use the dot product of vectors to say

$\displaystyle{ \frac{d}{dt} I(q,p) = f(P) \cdot (p - q) }$

So, the relative information $I(q,p)$ will never increase if

$f(P) \cdot (p - q) \le 0$

for all choices of the population $P.$

And now something really nice happens: this is also the condition for $q$ to be an evolutionarily stable state. This concept goes back to John Maynard Smith, the founder of evolutionary game theory. In 1982 he wrote:

A population is said to be in an evolutionarily stable state if its genetic composition is restored by selection after a disturbance, provided the disturbance is not too large.

I will explain the math next time—I need to straighten out some things in my mind first. But the basic idea is compelling: an evolutionarily stable state is like a situation where our replicators ‘know all there is to know’ about the environment and each other. In any other state, the population has ‘something left to learn’—and the amount left to learn is the relative information we’ve been talking about! But as time goes on, the information still left to learn decreases!
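We can watch this happen numerically. Here is a sketch with a made-up fitness function $f(p) = -p$, meaning common types are penalized. For this choice the uniform distribution $q$ is evolutionarily stable, since $f(P) \cdot (p - q) = -(\|p\|^2 - 1/n) \le 0$:

```python
import numpy as np

def relative_information(q, p):
    return float(np.sum(q * np.log(q / p)))

n = 3
q = np.full(n, 1 / n)              # the evolutionarily stable state here
p = np.array([0.7, 0.2, 0.1])      # an arbitrary initial distribution

dt, history = 0.01, []
for _ in range(1000):
    f = -p                               # fitness: common types are penalized
    p = p + dt * (f - f @ p) * p         # replicator dynamics for the fractions
    p = p / p.sum()                      # absorb floating-point drift
    history.append(relative_information(q, p))

# I(q, p(t)) only ever decreases along the trajectory:
print(all(b <= a for a, b in zip(history, history[1:])))   # True
```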

Note: in the real world, nature has never found an evolutionarily stable state… except sometimes approximately, on sufficiently short time scales, in sufficiently small regions. So we are still talking about an idealization of reality! But that’s okay, as long as we know it.

### 58 Responses to Information Geometry (Part 11)

1. I am suspecting that my relationship with my husband is jam-packed with relative entropy. This post tickled my brain cells for sure!

• John Baez says:

I suppose ‘relative entropy’ could be a nice name for the fact that when I put away the dishes and kitchen gadgets, my wife doesn’t know where they are, and vice versa. This is not just a pun, it’s an actual example of the concept! What looks random to one person may be orderly to another.

2. arch1 says:

If I’m beginning to understand the lowbrow approach, I can visualize the two-species case as a landscape height I which is a function of the two population sizes. Stable config q is at a local minimum of I, and as p evolves toward q it always moves downhill (perhaps circuitously) on I’s landscape.

Is this right as far as it goes? If so I am curious whether, given the functions fi, I is the *only* landscape having this spiffy property, and also (these Qs may be related come to think of it) whether p always follows the steepest path down I’s landscape.

• Marc Harper says:

The trajectory is not always the steepest path. It is possible to show that the replicator equation is a gradient with respect to the Fisher information metric / Shahshahani metric if and only if the fitness landscape is a Euclidean gradient. I believe that Kimura was also aware of this before the geometric approach, and phrased the idea along the lines of “in the direction of unit variance” or something similar. Nevertheless, it is not necessary for the trajectories to be gradients for the relative entropy to be monotonically decreasing.

The quintessential example is when the fitness landscape is of the form f(x) = A x for a symmetric matrix A. This is a model of a diploid (two of each chromosome) gene locus for a well-mixed population, where the replicators are the different possible alleles at the locus (and is naturally symmetric since there is no order to the two genes a diploid organism actually has). This is also the equation that Fisher and others first studied before the replicator equation appeared as a generalization (due to Taylor and Jonker in ~1978). If A is not symmetric (e.g. for cyclical rock-paper-scissors interactions), the trajectory is not a gradient, but may still converge as described by John above.

The relative entropy is not the only function to have this property — simple transformations of I (adding constants, exponentiating, etc.) will have the same property. For instance, Bomze was the first (I believe, it can be hard to track down all the original sources) to prove such results in this case and used the cross entropy between p and q rather than the relative entropy (which are equal up to a constant in q, so the derivative is the same if q is constant). However, the metric is unique with respect to this property for the replicator equation, and the relative entropy localizes to this metric.

To answer your first question, it is possible to have two interacting populations p and q (with their own landscapes f and g). In this case if there is a stable configuration (p’, q’), it is the sum of the relative entropies I(p’, p) + I(q’,q) that has the desired properties. So what you are saying is basically correct, just be careful not to abuse the term landscape for both the fitness landscape f and the relative entropy I.

• arch1 says:

Thanks Marc. The 43% of this that I understood was very helpful.

I am guessing that since the trajectories are not in general gradients, it may be harder to identify “watershed” boundaries (hope I’m not hijacking yet another technical term here:-) than it is for – well, for real watersheds.

3. John Baez says:

Over on G+, Deen Abiola wrote:

Speaking of many worlds, in reading this I feel as if I am looking into a parallel universe where many things are the same but certain things are slightly off. The model you describe is very similar to the work I’ve read in ML [machine learning] papers. The difference of focus seems to be that your write-up focuses on characterizing the distributions and how they update, while the ML work, based on much the same assumptions, focuses on characterizing the class of functions learnable by such a model.

For example, on a cursory reading what you call mean fitness looks like the empirical performance in Valiant’s model. And the learning class most like evolution is correlational statistical queries – which are described as performing queries for the correlation between a known function f and an unknown g, relative to some distribution.

In machine learning, relative entropy goes by the name Kullback-Leibler divergence, and it is the basic workhorse of all inference algorithms. Most algorithms can be written in terms of minimizing the divergence between the target distribution and your approximation of it.

What the model leaves open is whether evolution can learn independently of a distribution. Feldman suggests that some things may be at least weakly learnable and wonders if there are any corollaries to boosting in nature. Boosting is an awesome technique to take a bunch of weak learners and make a stronger one; it can also be understood as a way to do well with weak computational resources in a zero-sum game against an adversary/nature.

• John Baez says:

I replied:

Deen Abiola – Thanks for the view from the machine learning side of the aisle! I’m a mathematical physicist by training who has decided to start working across fields. I think there’s a lot to be gained by combining ideas from machine learning, evolutionary game theory and also the theory of ‘chemical reaction networks’, which computer scientists call ‘stochastic Petri nets’. But, I’m a raw amateur at most of these subjects, so I can use all the help I can get.

Translating terms between fields is always important in this game, so thanks for the tips. As for ‘Kullback-Leibler divergence’, lots of people in all fields use that term, but I think it’s a disgustingly obscure name for such an important concept, so (like a bunch of other people) I use the term ‘relative information’, or for the negative, ‘relative entropy’. Another commonly used term is ‘information gain’.﻿

4. John Baez says:

Over on G+, Deen Abiola wrote:

John Baez – my question is two:

Is the replicator equation a good model of what goes on in evolution such that the results you write about are applicable to nature? Or is it something like an economic model, indicative in aggregate but not generally useful due to inability to capture nuance?

Note: in the real world, nature has never found an evolutionarily stable state…

Why do humans not qualify as a good approximation of this?﻿

• John Baez says:

I replied:

Deen Abiola wrote:

Is the replicator equation a good model of what goes on in evolution such that the results you write about are applicable to nature?

I’m no expert on mathematical biology, but I’ve never seen any model in that subject that I’d call “a good model such that the results you write about are applicable to nature”… without a lot of caveats. In biology it’s usually hard to get enough data to tell how good the models really are. And for any model, you can easily think of important effects it neglects. So they should always be taken with an enormous grain of salt.

One important effect the replicator equation neglects is innovation: the ability for the population of a species to start at zero and rise to a nonzero value.

Of course, better models have been studied that take this crucial phenomenon into account! In my dreams, I’ll eventually get around to talking about these.﻿

• John Baez says:

John wrote:

Note: in the real world, nature has never found an evolutionarily stable state… except sometimes approximately, on sufficiently short time scales, in sufficiently small regions.

Deen wrote:

Why do humans not qualify as a good approximation of this?

I guess they are if you’re willing to work on a short enough time scale, in a sufficiently small region, and are more interested in how humans stay the same than how they’re evolving. (That sounds sarcastic, but it actually is interesting to understand what makes populations stay the same.)

If you look at a map of the world, you’ll find that in Europe most adults can digest milk, while in Asia most adults are lactose intolerant:

This is the result of a mutation that happened in the Mideast about 5000 years ago, and spread in societies that raised cows.

If you’re studying phenomena like this, assuming an evolutionarily stable state would be fatal. For other purposes it might be fine.

• Marc Harper says:

I think it’s fair to say that nature sometimes settles into evolutionarily stable states, with the caveat that they are short lived in isolated / regular environments, at least until the next major environmental change or until the next clever mutation opens up a new dimension of competition. Note that “evolutionarily stable” really means “selectively stable” in many contexts. There is always a background of mutation (natural diversification I like to call it) that selection acts on, and it’s not too hard to imagine that there are prolonged periods in which no significant mutations occur (a la the neutral theory of evolution). During these periods the population dynamics are probably similar to what the replicator equation describes, up to stochastic noise based on the size of the population, and other random factors not captured by the model.

Also consider that some parts of the genome are very similar across organisms and have been conserved for many millions of years (such as heat-shock proteins that cells use to adapt to sudden thermodynamic changes in the environment). If we restrict our attention to a sufficiently small subspace of the adaptive landscape, it would certainly seem that there is a long-term, stable solution. In other words, while there is variation in other dimensions, in this subspace things are rather boring — the solution was found long ago and no one seems to gain by deviating from it. This sounds rather like the definition of evolutionary stability to me, or as close as one can hope to reach in nature. It is easy to imagine analogous situations, even for humans.

The vanilla replicator equation is a model of natural selection (a terrible term, by the way, natural proliferation would be much better, or natural preservation as Darwin suggested), not of evolution, so it’s more “selection-in-a-box” than “realistic and general model of evolution”. Selection-in-a-box can lead to evolutionarily stable states, so we just have to be creative in identifying such microcosms in nature. In the long run of course there is no global stability as evolution has no goal or pinnacle and environments always change.

• Any insight into the conditions under which change is promoted? I am currently working with a certain genetic-programming algo for machine learning, and have noticed that population fitness has a ‘flight of the swallow’ shape: rapid improvement for a while settling to a plateau (possibly a very long one), until some dramatic change is discovered, which promotes a burst of improvement that completely obsoletes the species that dominated the previous plateau.

From the “faster algorithm” (aka “continued employment”) point of view, I of course, want to avoid getting stuck on plateaus, and find “the next new thing”. Yet, I despair; sometimes I get the feeling that the algorithm is a blind man, searching for the hole in a golf-course green (and to mangle metaphors, that hole then is the rabbit-hole that leads to a whole new golf course…)

• John Baez says:

Linas – while I don’t know much about these issues, I’m finding this paper inspiring, so maybe you could extract some inspiration from it:

I hope Marc gives you a better answer!

• Marc Harper says:

Yes, I can give you some insights into the conditions in which changes occur. The first is when the boundary is circumvented and a new type enters the population. Now there is suddenly another dimension to explore and this can lead to new configurations that dominate others. Note that this is explicitly ruled out by selection-in-a-box models like the replicator equation (no innovation, assumption of all known types).

Another is a dramatic change in the environment, say due to a disaster or due to some permanent change in a particular environmental property (like rising temperature). For instance, sometimes bacteria in extreme conditions (like oceanic vents) develop different versions of certain proteins that would normally not operate at the higher temperatures. These proteins can turn out to be more efficient in the ancestral environment as an unintended consequence. In this case the alternative selection pressure may have led to a more efficient version of the protein by taking a different path that was unlikely in the original environment.

From a practical modeling perspective, sometimes escaping a local optimum requires tunneling through a less fit type to get to a more fit type that is multiple steps of variation away from the local optimum. If selection is too strong or variation too low then this may not happen. Some researchers have found that bacteria can have increased mutation rates when under intense selection. If there is an interaction component to the algorithm and it depends on the population distribution, it may just be a matter of having a particular variation achieve the critical threshold in the population.

What you describe seems like it is very much the case for some real populations. There’s no consistent increase in fitness (though many people misinterpret Fisher’s Fundamental theorem to say so), rather there are occasional breakthroughs that occur because of drift or happenstance.

5. Here’s a great post by John Baez with the forbidding title “Information Geometry, Part 11.” But if you can stomach a few […]

• John Baez says:

How could anyone resist a post called “Information Geometry, Part 11”?

Anyway, this blog post by Sean Carroll is a good nontechnical summary of what I was trying to say. He makes it sound as if there always exists a stable equilibrium state, which ain’t true, but oh well. More importantly, I wish he’d mentioned Marc Harper.

• Greg Egan says:

Anyone who does decide to read this should ignore the link above to some horrible aggregator site called “sciencenewsx.com” which puts up a non-removable floating gadget obscuring the text that begs you to “like” the post on various social media.

The original post is on Cosmic Variance, at:

http://blogs.discovermagazine.com/cosmicvariance/2012/06/07/evolution-entropy-and-information/

• John Baez says:

I’ll change the link to the one you suggested—thanks! I have complete dictatorial control over all backtracks appearing on this blog. For example I was the one who added the words ‘Sean Carroll’; the original aggregator didn’t bother to mention the post was written by him!

6. Arrow says:

While the model is interesting, it is way too simplistic to be of much use in biology. For one, it doesn’t capture evolution at all if the fitness functions cannot change with time. It doesn’t capture spatial dependence, as mentioned, and it doesn’t capture dependence on the nonliving environment.

I guess one could take most of those things into account by making fitness a function of time, space, environment and the number of all other species but such model would probably be intractable. And it would still not support species divergence.

And even if one had a model capturing everything mentioned it would still be of limited use since it would be exceedingly hard to get data to actually do any reliable modeling of even simple natural populations.

• Arrow says:

There is also the problem of randomness. A single random event can often significantly alter the fitness of many populations. For example, a virus acquiring the ability to transfer between species, or a bacterium mutating to become resistant to its host’s immune defenses or to antibiotics.

• Frederik De Roo says:

Man who says it can’t be done should not interrupt man doing it.

• John Baez says:

Maybe you misunderstand why I’m interested in models in biology. Trying to use them to make quantitative predictions about specific situations is the last thing on my mind. My goal (and it’s not just mine) is to gain insight into which processes can cause which effects.

For this, the point of a model is not to believe the model is right and then believe what it says. The point is to say “if this were all that were going on, what would happen?” Mathematical biologists have made a lot of progress in understanding things this way.

In certain carefully limited regimes, models can also give quantitative results you can trust. But to delimit such a regime, you need to know biology very well, you need to spend years studying that specific regime (say: a population of HIV viruses in a patient taking different drugs), and you need to do lots of experiments. That’s obviously not where I’m going to make a contribution. I’m going to make a contribution by taking theoretical ideas from different subjects and understanding how they fit together better than anyone has before—mainly by virtue of the fact that nobody has bothered to try before.

I expect there are several hundred papers studying the replicator equation. It captures many interesting effects that are seen in biology. People have also written about lots of models that go beyond the replicator equation. Since I’m just learning this subject, I want to start near the beginning. But I want to focus on ideas related to information geometry and stochastic Petri nets, because they’re connected to evolution in interesting ways.

The replicator equation contains within it, secretly, the ability to model spatial dependence:

• Hisashi Ohtsuki and Martin A. Nowak, The replicator equation on graphs, J. Theor. Biol. 243 (2006), 86–97.

It’s easy to throw explicit time-dependence into the fitness functions if you want. So as far as I can tell, some of its main conceptual limitations are:

1) the absence of innovation, meaning the ability of new types of replicators to appear starting from zero population.

2) its treatment of populations as continuous rather than discrete,

3) its treatment of the dynamics as deterministic rather than stochastic.

I know a nice way to deal with all these at once, but I need to keep some suspense up so people keep reading these blog posts, so I won’t describe it yet! (Anyone who’s been paying attention will already know, but please don’t tell.)

The fun part will be to see the role of entropy in these more general models. It’s a robust concept, so it probably won’t just shrivel up and die.

• Arrow says:

Someone above asked about use in biology so as a biologist (though molecular) I provided my 2c.

Yes, I know even simple models can be used to learn new things about the modeled phenomena, though (as a resident skeptic ;) I find it a bit hard to imagine what novel things one could learn about the underlying biology by studying such a simple replicator equation.

Could be just my limited imagination but since you say you “expect there are several hundred papers studying the replicator equation,” perhaps you or someone else could point out one containing an example of such an insight.

That’s not to say that the model is not interesting from a mathematical or (as someone mentioned) machine learning POV, only that I find its utility for actual biology rather low.

• John Baez says:

Since it’s late I won’t now dig up links to some papers applying the replicator equation to issues evolutionary and population biologists care about. (Even better, maybe Marc Harper or someone will do this while I’m asleep.) Right now I’ll address this:

such a simple replicator equation

In fact this equation

$\displaystyle{ \frac{d}{dt} P_i = F_i(P_1,\dots,P_n) P_i }$

is far from simple. It’s really the general first-order time-independent ordinary differential equation in many variables! To see this, make the change of variables

$x_i = \ln (P_i)$

and get

$\displaystyle{ \frac{d}{dt} x_i = F_i(e^{x_1}, \dots , e^{x_n}) }$

If we write

$\displaystyle{ F_i(e^{x_1}, \dots , e^{x_n}) = G_i(x_1, \dots, x_n) }$

this becomes

$\displaystyle{ \frac{d}{dt} x_i = G_i(x_1,\dots, x_n) }$

or via the miracle of vector notation

$\displaystyle{ \frac{d}{dt} x = G(x) }$

where $G$ can be any smooth function of $n$ real variables, since $F$ was any smooth function of $n$ positive real variables.

And since any higher-order time-independent ordinary differential equation can be written as a system of first-order ones, the replicator equation is secretly the fully general time-independent ordinary differential equation!
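Here is a quick numerical sketch of that change of variables, with a made-up two-species $F$: integrating $dP/dt = F(P)P$ and $dx/dt = F(e^x)$ from matching initial conditions gives trajectories related by $x = \ln P$:

```python
import numpy as np

F = lambda P: np.array([1.0 - P[1], P[0] - 2.0])   # an arbitrary smooth F

def euler(rhs, y0, dt, steps):
    """Integrate dy/dt = rhs(y) by forward Euler."""
    y = np.array(y0, dtype=float)
    for _ in range(steps):
        y = y + dt * rhs(y)
    return y

# Replicator form in the variables P, general form in the variables x = ln(P):
P = euler(lambda P: F(P) * P, [1.0, 1.0], dt=1e-4, steps=10_000)
x = euler(lambda x: F(np.exp(x)), np.log([1.0, 1.0]), dt=1e-4, steps=10_000)

print(np.allclose(np.log(P), x, atol=1e-2))   # True: same flow in different coordinates
```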

So your complaint should not be that it’s too simple. On the contrary, your complaint should be that it’s so general that it contains every possible complication that’s possible in the subject of time-independent ordinary differential equations!

But that makes it all the more impressive that Marc Harper stated a simple theorem linking it to the concept of entropy.

And of course people who work on the replicator equation in biology look at particular examples, and classes of examples, inspired by biology… and they ask questions motivated by biology. They aren’t just doing general-purpose differential equation theory.

• Arrow says:

The equation may be general, but it’s still simple in my book: it’s very easy to describe, and easy to understand the thinking behind it (think Kolmogorov complexity).

Compare it to the Standard Model Lagrangian, for example; now that is a complex equation.

But if you think about it, the SM only deals with interactions and transformations of a couple of particle species, and here we are trying to model evolution, a process that is many, many orders of magnitude more complex. So the equation is also very simple when compared to the full complexity of the process it attempts to model.

• I think I understand what John is trying to do. In the field of ecological diversity, there are two competing approaches. The traditional one is of understanding via biological mechanism. This of course takes large amounts of knowledge, collected over years of research. The alternative is finding patterns in the underlying process, without much thought to the mechanisms. Many traditionalists do not like the latter because it looks as if the statisticians are trying to short-circuit the years of work into establishing mechanisms. Yet, it appears that these mechanism-free probability and statistics approaches such as MaxEnt are very useful and practical.

This is an Oikos paper with an unusual title that describes this ongoing battle:
Mechanisms in macroecology: AWOL or purloined letter? Towards a pragmatic view of mechanism

A PDF of the paper is available here

7. Ian Williams says:

Hello, I wonder if you have considered the following. In open systems, assuming for a moment a directionality to time, change will take place, and that change will always be towards a state of increased entropy. At a meta level, entropy could be considered a “force” or “requirement”, or perhaps an essential corollary of change in an open system. If one accepts this, then I suggest that life, and indeed the whole evolution of the universe, is driven by entropy. Returning to life for a moment: all of the steps in evolution occurred as a result of chemical reactions in which the end state had a greater entropy than the starting state. But the products – life forms – also accelerated increases in entropy, more than their individual component atoms could have. In other words, I suggest life arose because it catalyzed increases in entropy.

I suggest that the reason humans eventually came into being is that, to date, they are the most efficient catalyzers of increased entropy. Consider nuclear fission, global warming, etc.: gram for gram, a human being will increase the entropy of the universe more than if all his component atoms (or quarks) were scattered throughout the cosmos. A key test of this idea would be to determine whether life forms, as they evolve, produce a greater net increase in entropy per gram than their less evolved precursors – human > bonobo > crocodile > D. melanogaster > C. difficile.

As is probably painfully obvious from the foregoing, I am not a physicist, but I cannot see the logical fault in my reasoning. I realize entropy is not a force and has the dimensions of energy over temperature, but its increase, and the idea that life catalyzes that process, does seem to offer some attraction as a “simple” solution to the origin of life, and as to why life will not only have evolved elsewhere in the universe but will also evolve toward technical sophistication as a way of maximizing that catalysis.

8. David Roemer says:

This blog was recommended to me because I am trying to get the American Journal of Physics to retract an article titled “Entropy and evolution” by Daniel Styer, published in Nov 2008. It contains an equation using the Boltzmann constant and estimates of the probabilities of organisms to calculate the entropy change of the biosphere, to show the second law of thermodynamics is not violated by evolution. I consider the equation to be absurd. Organisms don’t have a temperature or an entropy. What follows are links to the article. If they don’t work I can email them to you:

http://newevangelist.me/2012/02/22/physics-department-of-new-york-university/

http://newevangelist.me/2012/02/02/american-journal-of-physics/

http://newevangelist.me/2012/02/23/american-association-of-physics-teachers/

9. […] The answer is, of course, not much. A subject near and dear to my heart: Evolution, Entropy, and Information, over at Cosmic Variance, referencing John Baez’s great series on information geometry […]

10. Chris Taylor says:

John

How does this work with replicator equations that have oscillatory transients? For example, consider

$f_1(x,y)=1-y$

and

$f_2(x,y)=-1+x-ey,$

with

$\dot{x}=f_1(x,y)x$

and

$\dot{y}=f_2(x,y)y$.

Now there is an evolutionarily stable state at $(x,y)=(1+e,1)$ which all trajectories tend toward in an oscillatory manner. Because of these oscillations, the proportions of each species must pass through the proportions of the evolutionarily stable state, and therefore the relative entropy (relative to the ESS) also oscillates.

It is true that the relative entropy decreases on each cycle (i.e. if you take a Poincaré section), but it certainly doesn’t decrease monotonically. I suspect my system fails to satisfy some important criterion, but I can’t see what that criterion is.
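Here is a quick numerical sketch of that behavior, integrating my system by forward Euler with $e = 0.1$ (a value I’m picking just for illustration, since I left it unspecified above):

```python
import math

# Forward-Euler integration of the system above, with e = 0.1 (assumed).
# The candidate stable state is at (x, y) = (1 + e, 1).
e = 0.1
total = (1.0 + e) + 1.0
q = [(1.0 + e) / total, 1.0 / total]

def I(q, p):  # relative information of q with respect to p
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p))

x, y, dt = 2.0, 2.0, 1e-3
values = []
for _ in range(30000):  # integrate up to t = 30
    f1, f2 = 1.0 - y, -1.0 + x - e * y
    x, y = x + dt * f1 * x, y + dt * f2 * y
    values.append(I(q, [x / (x + y), y / (x + y)]))

# The relative information is NOT monotone: it rises along parts of the spiral.
rises = any(b > a + 1e-9 for a, b in zip(values, values[1:]))
print("relative information rises somewhere:", rises)
```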

Chris

• John Baez says:

Thanks for proposing this example! I’m not in the mood for calculations right now, but did you check to see if your proposed evolutionarily stable state really is one according to the technical definition I gave? A list of populations $Q = (Q_1, \dots, Q_n)$ is an evolutionarily stable state if

$f(P) \cdot (p - q) \le 0$

for all populations $P = (P_1, \dots, P_n)$, where $p$ and $q$ are the corresponding lists of probabilities:

$\displaystyle{ p_i = \frac{P_i}{\sum_j P_j}}$

$\displaystyle{ q_i = \frac{Q_i}{\sum_j Q_j}}$

People working on the replicator equation have (what seems to my naive eyes to be) a disturbing tendency to describe the dynamics in terms of the probabilities $p_i$ rather than the populations $P_i$. This would be very confusing in examples where the probabilities at one time aren’t enough to determine the probabilities at a future time because the total populations also affect the answer. You seem to be claiming this is going on in your example.

But in my writeup I tried to avoid making any sloppy mistakes along these lines. So, maybe the problem is that your proposed evolutionarily stable state doesn’t really obey that inequality up there.
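For what it’s worth, here is a brute-force numerical scan of that inequality for the proposed state, taking $e = 0.1$ (a value I’m assuming, since none was given). It’s a sketch, not a proof, but it does find violations:

```python
# Scan a grid of populations P, checking the ESS inequality
# f(P) . (p - q) <= 0 for Q = (1 + e, 1), with e = 0.1 (assumed).
e = 0.1
Q = (1.0 + e, 1.0)
q = [v / sum(Q) for v in Q]

def f(P):
    x, y = P
    return (1.0 - y, -1.0 + x - e * y)

violations = []
for i in range(1, 60):
    for j in range(1, 60):
        P = (0.1 * i, 0.1 * j)
        p = [v / sum(P) for v in P]
        val = sum(fi * (pi - qi) for fi, pi, qi in zip(f(P), p, q))
        if val > 1e-9:
            violations.append((P, round(val, 3)))

# The inequality fails for some P, so this is not an evolutionarily
# stable state in the technical sense above.
print("number of violations:", len(violations))
```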

• Chris Taylor says:

You’re right, it’s not an evolutionarily stable state. I had assumed that the definition of ESS coincided with the notion of an asymptotically stable state, but that only appears to be true sometimes (in particular, ESS => ASS, but ASS => ESS only for symmetric games). Interesting! I will need to think on this some more.

• Marc Harper says:

I think it is more common to use Lotka–Volterra equations or variants if one wants to track the population size. The replicator equation, by virtue of normalization, removes a degree of freedom. I won’t go into the details, but information geometers talk about the denormalization of the simplex to the positive orthant of Euclidean space (as did Shahshahani in his 1979 paper, with a particular metric).

Lyapunov stability implies Nash equilibrium; strict Nash equilibrium implies asymptotic stability. It is possible to have a connected set of stationary points for the replicator dynamic that is asymptotically stable as a set, yet such that no particular point in it is asymptotically stable (i.e. the points are Nash equilibria but not strict Nash equilibria).

If you are interested in such matters, see “Evolutionary Game Theory” by Weibull (MIT Press). There is an example of the above on p107 (Figure 3.8) (and a similar example in Hofbauer and Sigmund’s “Evolutionary Games and Population Dynamics”).

• John Baez says:

Thanks, both of you, for clarifying some things! For some weird reason I’m only seeing the last two comments now. I’ve been distracted, but I didn’t think I was that distracted!

Marc wrote:

The replicator equation, by virtue of normalization, removes a degree of freedom.

Right, but what perturbs me is that the dynamics depends only on these normalized populations

$\displaystyle { p_i = \frac{P_i}{\sum_j P_j} }$

just in the special case where the fitness functions are homogeneous functions of the population vector $P$, i.e., when

$f_i(\alpha P) = \alpha^k f_i(P)$

This is true in some famous examples, but not the example Chris Taylor was looking at. In these other cases the normalized populations at time zero may not determine the normalized populations at later times, which is a bit like trying to do physics with position but not momentum.

In other words, you may not ‘want’ to track the population size, but you usually need to, to know what’s going to happen. I’m sure everyone who cares already knows this, but it makes me nervous when people seem to be ignoring this point.
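A tiny numerical illustration of this point, using Chris’s fitness functions with $e = 0.1$ (both choices mine, purely for illustration): two populations with the same initial proportions but different totals end up with different proportions later, so the normalized populations alone don’t determine the dynamics.

```python
# Evolve two populations with the same initial proportions but different
# totals, under a non-homogeneous fitness function (Chris Taylor's example,
# with e = 0.1 assumed).
e, dt, steps = 0.1, 1e-3, 2000

def proportion_after(P):
    x, y = P
    for _ in range(steps):  # forward Euler up to t = 2
        f1, f2 = 1.0 - y, -1.0 + x - e * y
        x, y = x + dt * f1 * x, y + dt * f2 * y
    return x / (x + y)

p_small = proportion_after((1.0, 1.0))  # total population 2
p_large = proportion_after((2.0, 2.0))  # total population 4, same proportions

assert abs(p_small - p_large) > 1e-3
print("proportions at t = 2:", round(p_small, 4), "vs", round(p_large, 4))
```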

• Marc Harper says:

What John says is of course completely correct. In many cases the replicator and Lotka–Volterra equations can be transformed into one another without loss, but not for arbitrary fitness landscapes. There are drawbacks to both approaches. By normalizing we are essentially letting the population be infinite, since we assume the population components are differentiable and so vary continuously. In the case of absolute population sizes, we allow fractional numbers of individuals (again continuous variation), which is also unrealistic. Most people seem to take an “it’s not such a big deal” perspective, but when one type falls down to 0.1 of an individual and then rebounds to dominate the population, it’s fairly suspect as a realistic effect.

11. Just as no MIS manager got fired for buying IBM PCs (even though better computers were available), one cannot fault anyone for applying Shannon’s notion of entropy in all sorts of fields where it has no particular applicability over other notions of entropy or information content. Shannon’s notion comes into its own in matters of coding and communication, as evidenced by its interpretation via the Noiseless Coding Theorem.

But there is another, more fundamental notion of entropy or information content that comes out of logic (partition logic, that is, the dual to ordinary subset logic) and was even originally developed to measure biodiversity, where it is called the Gini–Simpson index. See http://www.mathblog.ellerman.org/2010/02/history-of-the-logical-entropy-formula/ for the history of this simple non-sexy formula in biodiversity, cryptography, economics, and logic. It is the special case of Rao’s quadratic entropy (defined in terms of a distance function) where one uses the logical distance function (the logical distance between two elements of a set is the complement of the Kronecker delta “closeness” function). The basic nature of the logical entropy formula is even more obvious in the quantum version, $1 - \mathrm{tr}(\rho^2)$, where it provides the interpretation of the off-diagonal terms in a pure-state density matrix and directly measures the information gained in a measurement.

• John Baez says:

Hi, David! Ultimately the use of different measures of entropy or biodiversity should be determined not by their ‘sexiness’ but by their mathematical properties. In biodiversity it’s common to consider the whole range of Rényi entropies

$H_q(p) = \displaystyle{ \frac{\ln \sum_i p_i^q}{1 - q}}$

for $0 \le q < \infty$, or even the limiting case $q = \infty$. The reason is that these have good mathematical properties. As $q$ becomes small they pay more attention to rare events, or rare species. As you probably know, we can sneak up on $q = 1$ by taking a limit, and then we get the Shannon entropy

$\displaystyle{ \lim_{q \to 1} H_q(p) = - \sum_i p_i \ln p_i}$

On the other hand, $q = 2$ gives the collision entropy

$H_2(p) = \displaystyle{- \ln \sum_i p_i^2 }$

In biodiversity, Lou Jost has convinced all right-thinking people to exponentiate the Rényi entropies and get the ‘Hill numbers’ or ‘diversities’

$\displaystyle{ D_q(p) = \left( \sum_i p_i^q \right)^{1/(1-q)} }$

I recall you liked this idea too.
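For concreteness, here is a small numerical sketch of these quantities (the distribution is arbitrary, chosen only for illustration):

```python
import math

def renyi(p, q):
    # Rényi entropy H_q(p) = ln(sum_i p_i^q) / (1 - q), for q != 1
    return math.log(sum(pi ** q for pi in p)) / (1.0 - q)

def shannon(p):
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

def hill(p, q):
    # Hill number ('diversity of order q'): the exponential of H_q
    return math.exp(renyi(p, q))

p = [0.5, 0.25, 0.125, 0.125]  # an arbitrary distribution

# Sneaking up on q = 1 recovers Shannon entropy:
assert abs(renyi(p, 1.000001) - shannon(p)) < 1e-4

# q = 2 gives the collision entropy -ln(sum_i p_i^2):
assert abs(renyi(p, 2) + math.log(sum(pi ** 2 for pi in p))) < 1e-12

# A uniform distribution on 4 species has Hill number 4 for every q:
assert abs(hill([0.25] * 4, 0.5) - 4.0) < 1e-9
print("Hill numbers of p:", [round(hill(p, q), 3) for q in (0.5, 2.0, 3.0)])
```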

But now let me get to my actual point: the theorem I’m discussing today seems to work only for Shannon entropy! I haven’t carefully checked it, but I did some calculations to see if the relative Rényi entropy decreases for other values of $q$, and I couldn’t get this to follow from the assumption that we’re computing the entropy of an evolutionarily stable state relative to the current state, which evolves according to the replicator equation. I’d love to be shown wrong, and someday I’ll try to settle the matter definitively, but it didn’t seem to be working.

So: while I’m always eager to learn nice properties of different Rényi entropies, or Hill numbers, there may be times when one particular entropy measure is the only one that makes a theorem work… and that will trump any sort of ‘taste’ or ‘sexiness’ considerations.

12. I have been trying to read some of John Baez’s series on “Information Geometry” (here at Part 11), part of which has now also been summarized by Sean Carroll (who blogs for Discover […]

13. […] Last time we saw that if a population evolves toward an ‘evolutionarily stable state’, then the amount of information our population has ‘left to learn’ can never increase! It must always decrease or stay the same. […]

14. Tobias Fritz says:

Did you know that free energy is actually ‘just’ relative entropy?

Suppose that a thermodynamical system is in a state $p$, which is a probability distribution over its space of pure states. (You can as well take the system to be quantum and $p$ to be its density matrix.) Suppose further that the system has a Hamiltonian $H$ and temperature $T$, so that the equilibrium state is $p_{eq} = e^{-H/T}$.

Then the free energy of $p$ is given by $F=\langle H\rangle_p - TS(p)$. Splitting this up into sums over the system’s configurations $i$ gives

$F = \displaystyle{ \sum_i \left( p_i H_i + T p_i \log p_i \right) = T \sum_i p_i( \log p_i - \log p_{eq,i} ) }$,

which is, up to the factor $T$, just the relative entropy of $p$ with respect to $p_{eq}$!

Doing a web search reveals that this is quite well-known. I learned this yesterday from Paul Skrzypczyk.
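Here is a quick numerical check, in the normalized case: if we define the Hamiltonian by $H_i = -T \ln p_{eq,i}$ for some chosen equilibrium distribution (so that $p_{eq} = e^{-H/T}$ sums to one), then $F = \langle H\rangle_p - TS(p)$ equals $T$ times the relative entropy exactly. All the specific numbers are arbitrary choices.

```python
import math

# Pick an equilibrium distribution and a temperature, define the Hamiltonian
# by H_i = -T ln(p_eq,i) so that p_eq = e^{-H/T} is normalized, and compare
# F = <H>_p - T S(p) with T * sum_i p_i (ln p_i - ln p_eq,i).
T = 2.5
p_eq = [0.5, 0.3, 0.2]
H = [-T * math.log(qi) for qi in p_eq]

p = [0.2, 0.2, 0.6]  # an arbitrary nonequilibrium state

avg_H = sum(pi * Hi for pi, Hi in zip(p, H))
S = -sum(pi * math.log(pi) for pi in p)
F = avg_H - T * S

rel = sum(pi * math.log(pi / qi) for pi, qi in zip(p, p_eq))
assert abs(F - T * rel) < 1e-9
print("F =", round(F, 6), "and T * relative entropy =", round(T * rel, 6))
```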

• John Baez says:

Tobias wrote:

Did you know that free energy is actually ‘just’ relative entropy?

Yes, I discovered this while preparing my talk for the Mathematics of Biodiversity conference.

I had all the right ingredients in front of my face:

1) In the beginning of the talk, I explained the idea that any probability distribution is a state of thermal equilibrium for your favorite temperature $T_0$ and some Hamiltonian $H$ (which is unique up to a constant):

$p_i = \exp(-H_i / T_0)$

This then lets us make the probability distribution temperature-dependent:

$p_i(T) = \exp(-H_i / T) / Z(T)$

where the partition function $Z(T)$ serves to make the probabilities sum to one:

$Z(T) = \displaystyle{ \sum_i \exp(-H_i / T) }$

2) Then I explained the concept of free energy, emphasizing that thanks to the above ideas, any probability distribution gives rise to temperature-dependent concepts of entropy and free energy, whose formulas I wrote down.

3) Then I explained that according to the replicator equation, relative entropy always increases, mentioning that this was a version of the second law of thermodynamics.

But of course the usual second law of thermodynamics says that free energy always decreases for a closed system whose energy is conserved!

So I did a calculation and noticed that relative entropy is ‘just’ the negative of free energy—or if you prefer, free energy is ‘just’ the negative of relative entropy. This calculation is essentially the same as the one you just showed us.

But I didn’t talk about this. Why not? Because there was something confusing me, which is in fact still confusing me. Maybe someone here can help me out.

In this blog post, I gave the proof that given some assumptions, the relative information is always decreasing. But information of what relative to what?

In my notation it’s the information of the equilibrium probability distribution $q$ relative to the time-dependent probability distribution $p$:

$I(q,p) = \displaystyle{ \sum_i q_i \ln q_i - q_i \ln p_i }$

Of course entropy has a minus sign, so we can also say relative entropy is always increasing. This is the entropy of the equilibrium probability distribution $q$ relative to the time-dependent probability distribution $p$:

$S(q,p) = - I(q,p) = \displaystyle{ \sum_i q_i \ln p_i - q_i \ln q_i }$

But free energy seems to be the other way around: up to a factor, it’s the entropy of the time-dependent probability distribution $p$ relative to the equilibrium probability distribution $q$!

Am I making a dumb mistake?

• Tobias Fritz says:

Thanks for the informative reply, John! After rereading the main post, I realize that I should have said ‘relative information’ rather than ‘relative entropy’ in order to get the sign conventions consistent with your terminology.

So I did a calculation and noticed that relative entropy is ‘just’ the negative of free energy—or if you prefer, free energy is ‘just’ the negative of relative entropy.

Either way is fine!

I think the $I(p,q)$ in your last equation should be $I(q,p)$.

Am I making a dumb mistake?

If so, then I failed to notice it while following your reasoning.
Are there examples where the relative information of the time-dependent distribution with respect to the equilibrium distribution is not monotonically decreasing?

• John Baez says:

Tobias wrote:

I think the $I(p,q)$ in your last equation should be $I(q,p)$.

Whoops, I’ll fix that. There’s enough confusion about the roles of $p$ and $q$ without that mistake!

Are there examples where the relative information of the time-dependent distribution with respect to the equilibrium distribution is not monotonically decreasing?

I don’t know any.

This is why I’m so confused. On the one hand, since $I(p,q)$ is essentially ‘free energy’, the laws of thermodynamics say this quantity should decrease in many situations. On the other hand, I haven’t found an easy proof that it decreases starting from the replicator equation and some other reasonable assumptions… while the proof that $I(q,p)$ decreases is strikingly simple. I guess I should try a bit harder!

By the way, while I’ve been very distracted from our latest entropy project, one reason I’ve been distracted is because I’m talking to Jamie Vicary. And luckily, Jamie has some ideas on category theory and Bayesian networks that are closely related to our work! I haven’t quite fit them together, but I think it will be possible.

Here’s a little puzzle that’s very relevant. The category FinSet sits in a bigger category FinStoch whose objects are finite sets and whose morphisms are ‘stochastic maps’, where a stochastic map $f : X \to Y$ is a function sending each element of $X$ to a probability distribution on $Y.$ Is there a purely category-theoretic way to tell if a morphism in FinStoch comes from one in FinSet? You’re allowed to use the symmetric monoidal category structure on FinStoch if you want.

(You can also change the rules of the game in other ways if doing so gives an interesting theorem! The general goal is to ‘recognize’ functions inside categories of a more stochastic nature.)

There’s a slightly similar puzzle that has a known answer. Let FinRel be the category of finite sets and relations. There’s a purely 2-category-theoretic way to tell if a morphism in FinRel comes from one in FinSet: these are the morphisms with right adjoints. But this approach doesn’t seem to work for FinStoch.
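At the concrete level of matrices (which is not the ‘purely category-theoretic’ answer the puzzle asks for), a morphism of FinStoch comes from one in FinSet exactly when each conditional distribution is a point mass. A minimal sketch of that check:

```python
# A stochastic map f: X -> Y stored as a matrix: f[x][y] is the probability
# of landing at y given input x, so each row sums to 1.
def is_deterministic(f):
    # f comes from an ordinary function iff every row is a point mass
    return all(abs(sum(row) - 1.0) < 1e-12 and max(row) == 1.0 for row in f)

from_function = [[0.0, 1.0, 0.0],   # x0 |-> y1
                 [1.0, 0.0, 0.0]]   # x1 |-> y0
genuinely_random = [[0.5, 0.5, 0.0],
                    [0.0, 0.3, 0.7]]

assert is_deterministic(from_function)
assert not is_deterministic(genuinely_random)
print("deterministic check works")
```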

• John Baez says:

It’s possible we should do something like this: embed FinSet in $\mathrm{Mat}(\mathbb{R})$, the category where objects are finite sets and a morphism $f: S \to T$ is an $S \times T$-shaped matrix of real numbers. Since $\mathrm{Mat}(\mathbb{R})$ is equivalent to the category of finite-dimensional real vector spaces there’s no purely category-theoretic way to recognize which morphisms come from functions. But, if you equip an object with a nice commutative Frobenius algebra structure, this amounts to equipping a finite-dimensional real vector space with a basis. And then, given two such vector spaces with basis, we should be able to recognize which linear maps between them send basis elements to basis elements!

Still, the story should be a bit different for FinStoch.

• Tobias Fritz says:

Ok, so now we have two puzzles: first, figuring out whether the equilibrium state should be the prior or the posterior in talking about the decrease in relative information during the approach to equilibrium, and second abstractly characterizing FinSet as a subcategory of FinStoch.

Concerning the first, I wonder if the natural statement would be that relative information *always* decreases under a certain class of dynamics $T$:
$I(T(p),T(p')) \leq I(p,p')$
Since an equilibrium state satisfies $T(q)=q$, this would automatically imply that the relative information with respect to equilibrium is non-increasing, in both ways of choosing prior and posterior state!
For example, this inequality is known to hold when $T$ operates linearly on the probabilities, i.e. when $T$ is a stochastic map. The question is, what is the most general class of transformations $T$ for which this holds? Evolution according to the replicator equation probably allows stationary states which are not evolutionarily stable, right? This would suggest that the inequality cannot hold in this generality when $T$ is replicator equation dynamics.
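Here is a quick empirical check of that inequality for randomly chosen stochastic maps (a numerical sketch, of course, not a proof):

```python
import math, random

# Empirical check of I(T(p), T(p')) <= I(p, p') when T is a stochastic map
# acting linearly on probability distributions.
random.seed(0)

def kl(p, q):  # relative information I(p, q)
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q))

def rand_dist(n):
    w = [random.random() + 1e-3 for _ in range(n)]
    s = sum(w)
    return [x / s for x in w]

def apply_map(T, p):  # (Tp)_j = sum_i p_i T[i][j]
    return [sum(p[i] * T[i][j] for i in range(len(p)))
            for j in range(len(T[0]))]

for _ in range(100):
    T = [rand_dist(4) for _ in range(3)]  # rows are conditional distributions
    p, p2 = rand_dist(3), rand_dist(3)
    assert kl(apply_map(T, p), apply_map(T, p2)) <= kl(p, p2) + 1e-12
print("data processing inequality held in all trials")
```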

Concerning the second problem, I haven’t yet been able to figure anything out, except for the details in your “purely 2-category-theoretic way to tell if a morphism in FinRel comes from one in FinSet”. The 2-morphisms are just inclusions of relations, right?
Could you say a bit more about the relation to Bayesian networks? I just wonder because I’ve recently thought a lot about these in the context of the foundations of quantum theory; this is the first outgrowth of that work. Bayesian networks don’t officially appear there yet, but they’re lurking in the background.

• John Baez says:

Tobias wrote:

Concerning the first, I wonder if the natural statement would be that relative information always decreases under a certain class of dynamics $T$:

$I(T(p),T(p')) \leq I(p,p')$

I like that idea. By the way, is this true when $T$ is a stochastic operator, i.e. a linear operator that maps probability distributions to probability distributions? It feels like it should be true.

Evolution according to the replicator equation probably allows stationary states which are not evolutionarily stable, right?

I’m pretty sure that’s right, though I don’t know this stuff well enough to instantly cough up an example.

This would suggest that the inequality cannot hold in this generality when $T$ is replicator equation dynamics.

This would suggest it, but not actually prove it, so we should be careful!

Concerning the second problem, I haven’t yet been able to figure anything out, except for the details in your “purely 2-category-theoretic way to tell if a morphism in FinRel comes from one in FinSet”. The 2-morphisms are just inclusions of relations, right?

Right, or what people call ‘implication’: for example, ‘x is the best friend of y’ implies ‘x is the friend of y’. So FinRel becomes a locally posetal 2-category, sometimes called a ‘2-poset’.

Could you say a bit more about the relation to Bayesian networks?

Since Brendan Fong and Jamie Vicary are working on Bayesian networks at the CQT and I’m planning to write lots of posts about them in the Network Theory series, it’s hard for me to say just a bit about them.

• Tobias Fritz says:

John wrote:

I like that idea. By the way, is this true when T is a stochastic operator, i.e. a linear operator that maps probability distributions to probability distributions? It feels like it should be true.

Yes, that is true, as I tried to explain in the previous comment. Finding a good reference is surprisingly hard; the quantum generalization of this inequality appears for example as eq. (7) in this review paper.

Since Brendan Fong and Jamie Vicary are working on Bayesian networks at the CQT and I’m planning to write lots of posts about them in the Network Theory series, it’s hard for me to say just a bit about them.

Okay, I’m looking forward to those posts and Brendan and Jamie’s work!

• Tobias Fritz says:

Is there a purely category-theoretic way to tell if a morphism in FinStoch comes from one in FinSet? You’re allowed to use the symmetric monoidal category structure on FinStoch if you want.

Here’s a solution which you’re probably going to be disappointed with. I assume that the monoidal structure is the “disjoint union” which restricts to the coproduct in FinSet.

For each $n$, fix some $n$-element set $[n]$. Then there is a unique morphism $[0]\to [1]$ in FinStoch, and likewise a unique morphism $[2]\to [1]$. FinSet is the monoidal subcategory generated by these morphisms and all isos.

While this is most likely not the highbrow characterization you’re looking for, it shows at least that purely “external” characterizations are possible. In particular, every monoidal automorphism (or autoequivalence) of FinStoch preserves the subcategory FinSet.

• John Baez says:

Actually I was thinking about the monoidal structure on FinStoch that restricts to the product in FinSet, since a random variable taking values in $S$ is a stochastic map

$[1] \to S$

and if we have two random variables

$[1] \to S, \quad [1] \to T$

the pair of them can be considered a random variable in its own right

$[1] \to S \times T$

By the way, I learned this idea of treating random variables as stochastic maps from Jamie Vicary. It means the slice category [1]/FinStoch is quite interesting.

But the reason I’m not entirely satisfied with your characterization of FinSet inside FinStoch is not this, and it’s not that it’s insufficiently ‘highbrow’: it’s that you’re describing morphisms that generate this subcategory, whereas I was hoping there’s some property that holds precisely for the morphisms in this category.

But I think I made some progress with Jamie and Brendan from a slightly different direction: starting from the category of finite-dimensional real Hilbert spaces, then picking out FinSet as the subcategory of special commutative dagger-Frobenius algebras with morphisms preserving the comultiplication, and then using that to also pick out FinStoch as a subcategory. This seems like a decent way to go about things, especially because ‘infinitesimal stochastic’ maps, i.e. generators of 1-parameter stochastic groups, are also present in the original big category.

• Tobias Fritz says:

Actually I was thinking about the monoidal structure on FinStoch that restricts to the product in FinSet […]

Ah, darn! I can see why this makes more sense in the context of Bayesian networks.

It means the slice category [1]/FinStoch is quite interesting.

Yes, it’s like FinProb, with functions generalized to stochastic maps. You had brought this up before!

But the reason I’m not entirely satisfied with your characterization of FinSet inside FinStoch is […] that you’re describing morphisms that generate this subcategory, whereas I was hoping there’s some property that holds precisely for the morphisms in this category.

Yes, I know! That’s what I meant by “highbrow” ;)

But I think I made some progress with Jamie and Brendan from a slightly different direction: starting from the category of finite-dimensional real Hilbert spaces, then picking out FinSet as the subcategory of special commutative dagger-Frobenius algebras with morphisms preserving the comultiplication, and then using that to also pick out FinStoch as a subcategory.

Great to see you’re making progress! I’m a bit puzzled by the term “subcategory” here; you rather get FinSet and FinStoch as a category of algebras of a PROP (or something like that) in FHilb, right?

• John Baez says:

Tobias wrote:

I’m a bit puzzled by the term “subcategory” here; you rather get FinSet and FinStoch as a category of algebras of a PROP (or something like that) in FHilb, right?

Yeah, I shouldn’t have called it a “subcategory”—I was trying to show off, but I got carried away.

Among category theorists, it’s common to show off by saying X is a subset of Y even if it’s not, as long as X is equipped with a monomorphism from X to Y. This is, after all, the ‘non-evil’ variant of the notion of subset, as used in structural set theory. Being a subset is then not a property, but a structure.

You can also try to invent non-evil variants of the notion of subcategory. The idea is that we say X is a subcategory of Y if it’s equipped with a functor to Y that’s nice enough.

The obvious functor from FinSet (or FinStoch) to finite-dimensional real Hilbert spaces is faithful, but not full. In this situation, it’s bound to be confusing to think of this functor as making FinSet a ‘subcategory’ of FinHilb, since it’s like saying the category of groups is a subcategory of Set, which sounds really stupid.

• Tobias Fritz says:

That’s very cool! One nice aspect of category theory, in contrast to real life, is that it’s pretty clear what’s evil and what’s not ;)

This is off topic, but I wonder if that means that even the concept of ‘morphism’ is evil: a morphism is an element of the set of morphisms between two objects, and elements are evil! A similar “objection” applies to the ‘objects’ in a category. Does higher category theory offer a solution to this problem?

• John Baez says:

Tobias wrote:

One nice aspect of category theory, in contrast to real life, is that it’s pretty clear what’s evil and what’s not ;)

Well, a lot of people have trouble understanding the concept of ‘evil’ in category and n-category theory… and I’m afraid maybe you do, too!

Elements of sets are not evil, nor are objects of categories.

What’s ‘evil’, in the technical sense of that word, is to make a definition or state a theorem asserting some property of objects in an n-category that holds for one object but fails to hold for some equivalent object.

This is fairly precise, but click the link to read more subtleties.

For example, it’s evil to define a function

$f: X \to Y$

to be an inclusion when it’s 1-1 and $X$ is a subset of $Y$

if we are thinking of functions as objects in the arrow category of Set. The reason is that the second clause, “$X$ is a subset of $Y$”, can easily be true for some function

$f: X \to Y$

but not true for some function

$f' : X' \to Y'$

that’s isomorphic in the arrow category.

To fix this, we can make the non-evil definition: a function

$f: X \to Y$

is an injection when it’s 1-1.

And so on…

One reason for avoiding ‘evil’ definitions and theorems is just that then all the facts we prove are guaranteed to be invariant under equivalence, which means we don’t need to check a lot of annoying fine print before applying them. But some category theorists get very annoyed that I use the term ‘evil’ in this way… because they want to do things that are evil in this sense, and don’t like being called ‘evil’.

• Tobias Fritz says:

Thank you for the corrections and the lucid explanation! I hope to get back to this sort of stuff in some more detail in the intermediate future…

15. John Baez says:

Here’s a paper that just showed up, pointed out by Blake Stacey:

• S. A. Frank, Natural selection. V. How to read the fundamental equations of evolutionary change in terms of information theory, Journal of Evolutionary Biology 25 (2012), 2377–96.

Abstract. The equations of evolutionary change by natural selection are commonly expressed in statistical terms. Fisher’s fundamental theorem emphasizes the variance in fitness. Quantitative genetics expresses selection with covariances and regressions. Population genetic equations depend on genetic variances. How can we read those statistical expressions with respect to the meaning of natural selection? One possibility is to relate the statistical expressions to the amount of information that populations accumulate by selection. However, the connection between selection and information theory has never been compelling. Here, I show the correct relations between statistical expressions for selection and information theory expressions for selection. Those relations link selection to the fundamental concepts of entropy and information in the theories of physics, statistics, and communication. We can now read the equations of selection in terms of their natural meaning. Selection causes populations to accumulate information about the environment.

16. Doug says:

Nature Communications, 23 Oct 2012, article number 1145, reports on a human-only brain gene, miR-941, that apparently arose spontaneously from micro-RNA several million years ago.

What are the odds?

17. Suppose $q$ is fixed, while $p$ evolves in time according to the replicator equation. Then

$\displaystyle{ \frac{d}{d t} I(q,p) = \sum_i f_i(P) (p_i - q_i) }$

For the proof, see Part 11 of the information geometry series.
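One can sanity-check this derivative formula numerically; here is a minimal sketch, with an arbitrarily chosen fitness function and reference distribution (both my assumptions, just for illustration):

```python
import math

def f(P):
    # an arbitrary fitness function, chosen only for illustration
    return [1.0 - P[1], 0.5 * P[0] - P[1]]

q = [0.7, 0.3]  # fixed reference distribution

def I(q, p):
    return sum(qi * math.log(qi / pi) for qi, pi in zip(q, p))

dt = 1e-6
P = [1.0, 0.5]
p = [v / sum(P) for v in P]

# one tiny replicator step: dP_i/dt = f_i(P) P_i
fit = f(P)
P1 = [Pi + dt * fi * Pi for Pi, fi in zip(P, fit)]
p1 = [v / sum(P1) for v in P1]

numeric = (I(q, p1) - I(q, p)) / dt          # finite-difference derivative
formula = sum(fi * (pi - qi) for fi, pi, qi in zip(fit, p, q))
assert abs(numeric - formula) < 1e-4
print("dI/dt numeric:", numeric, "formula:", formula)
```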

18. This is my talk for the workshop Biological Complexity: Can It Be Quantified?

Abstract. If biology is the study of self-replicating entities, and we want to understand the role of information, it makes sense to see how information theory is connected to the ‘replicator equation’—a simple model of population dynamics for self-replicating entities. The relevant concept of information turns out to be the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Using this we can see evolution as a learning process, and give a clean general formulation of Fisher’s fundamental theorem of natural selection.