Abstract. If biology is the study of self-replicating entities, and we want to understand the role of information, it makes sense to see how information theory is connected to the ‘replicator equation’—a simple model of population dynamics for self-replicating entities. The relevant concept of information turns out to be the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Using this we can get a new outlook on free energy, see evolution as a learning process, and give a clean general formulation of Fisher’s fundamental theorem of natural selection.

The last reference contains proofs of the equations shown in red in my slides.
In particular, Part 16 contains a proof of my updated version of Fisher’s fundamental theorem.

Thank you very much! I didn’t know about the Fisher information metric! It will be my weekend read. Do you think it can somehow be related to the Ryu-Takayanagi formula?

I don’t know the Ryu-Takayanagi formula. The Fisher information metric is fundamental to information geometry, meaning the geometry of the space of probability distributions, or more general probability measures. I wrote about it in this series:

• Information geometry.
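One way to see why this metric is so natural, in case it helps: on the simplex of probability distributions on a finite set, the Fisher information metric is the second-order Taylor expansion of relative information, the Kullback–Leibler divergence mentioned in the abstract. Here is a minimal numerical sketch of that fact; the function names are my own, chosen for illustration:

```python
import numpy as np

# Relative information (Kullback-Leibler divergence):
# D(p||q) = sum_i p_i * log(p_i / q_i).
def kl(p, q):
    return float(np.sum(p * np.log(p / q)))

# Squared length of a tangent vector u at p in the Fisher information
# metric, written in simplex coordinates: <u, u>_p = sum_i u_i^2 / p_i.
def fisher_norm_sq(p, u):
    return float(np.sum(u**2 / p))

p = np.array([0.5, 0.3, 0.2])         # a probability distribution on 3 outcomes
u = np.array([0.01, -0.004, -0.006])  # a small perturbation with sum(u) = 0

# To second order in u, relative information is half the squared
# Fisher length: D(p || p + u) ~ (1/2) <u, u>_p.
print(kl(p, p + u))                # approximately equal...
print(0.5 * fisher_norm_sq(p, u))  # ...to this
```

The agreement is only to second order, so the two numbers match to within a few percent for a perturbation of this size.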

The word “distribution” is repeated on slide 12.

Thanks! I fixed it, but it will take some time for the fixed version to appear, since (unusually) this airport wifi is not letting me use WinSCP to transfer files.

Okay, the fixed version is up now—and I’ve put the most important equations in cute red letters.

By the way, I’m very happy with my statement of Fisher’s fundamental theorem of natural selection—but I’ve never seen it anywhere else, though I’m having trouble imagining it’s new. It’s on the second to last page of my talk.

Ronald Fisher’s original statement and proof became famous for their obscurity. Quoth Wikipedia:

It uses some mathematical notation but is not a theorem in the mathematical sense.

It states:

“The rate of increase in fitness of any organism at any time is equal to its genetic variance in fitness at that time.”

Or in more modern terminology:

“The rate of increase in the mean fitness of any organism at any time ascribable to natural selection acting through changes in gene frequencies is exactly equal to its genetic variance in fitness at that time”.

Largely as a result of Fisher’s feud with the American geneticist Sewall Wright about adaptive landscapes, the theorem was widely misunderstood to mean that the average fitness of a population would always increase, even though models showed this not to be the case. In 1972, George R. Price showed that Fisher’s theorem was indeed correct (and that Fisher’s proof was also correct, given a typo or two), but did not find it to be of great significance. The sophistication that Price pointed out, and that had made understanding difficult, is that the theorem gives a formula for part of the change in gene frequency, and not for all of it. This is a part that can be said to be due to natural selection.

Price’s paper is here:

• George R. Price, Fisher’s ‘fundamental theorem’ made clear, Annals of Human Genetics 36 (1972), 129–140.

I don’t find it very clear, perhaps because I didn’t spend enough time on it.

My result is a theorem in the mathematical sense, though quite an easy one. I assume a population distribution evolves according to the replicator equation and derive an equation whose right-hand side matches that of Fisher’s original equation: the variance of the fitness.

But my left-hand side is different: it’s the square of the speed of the corresponding probability distribution, where speed is measured using the ‘Fisher information metric’. This metric was discovered by the same Fisher, but I don’t think he used it in his work on the fundamental theorem.

Something similar to my statement appears as Theorem 2 of Marc Harper’s paper:

• Marc Harper, Information geometry and evolutionary game theory.

and for that theorem he cites:

• Josef Hofbauer and Karl Sigmund, Evolutionary Games and Population Dynamics, Cambridge University Press, Cambridge, 1998.

However, his Theorem 2 assumes that the probability distribution flows along the gradient of a function, and I’m not assuming that: indeed, my version applies to the rock-paper-scissors game, where the probability distribution of players moves round and round!
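Since the identity is pointwise—once you plug the replicator equation into the left-hand side, each term (dp_i/dt)²/p_i becomes p_i(f_i − f̄)²—it can be checked numerically at any point of the simplex. Here is a small sketch in the rock-paper-scissors case; the function names are mine, chosen for illustration:

```python
import numpy as np

# Replicator equation: dp_i/dt = p_i * (f_i - fbar), where fbar = sum_j p_j f_j
# is the mean fitness. The theorem above says the squared speed of p(t) in the
# Fisher information metric equals the variance of the fitness at each instant.

def replicator_velocity(p, f):
    """Right-hand side of the replicator equation at population distribution p."""
    fbar = p @ f
    return p * (f - fbar)

def fisher_speed_squared(p, v):
    """Squared length of the tangent vector v at p in the Fisher information metric."""
    return float(np.sum(v**2 / p))

def fitness_variance(p, f):
    """Variance of the fitness f under the distribution p."""
    fbar = p @ f
    return float(p @ (f - fbar)**2)

# Rock-paper-scissors: the fitness of each strategy depends on the current mix,
# f = A @ p, with the standard zero-sum payoff matrix. This flow is not a
# gradient flow (the distribution cycles), yet the identity still holds.
A = np.array([[ 0, -1,  1],
              [ 1,  0, -1],
              [-1,  1,  0]], dtype=float)

p = np.array([0.5, 0.3, 0.2])   # an arbitrary point in the simplex
f = A @ p
v = replicator_velocity(p, f)

print(fisher_speed_squared(p, v))  # the two printed numbers agree
print(fitness_variance(p, f))
```

Note that no gradient structure is used anywhere: the check works at any point of the simplex, for any fitness function.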