Biology as Information Dynamics (Part 1)

This is my talk for the workshop Biological Complexity: Can It Be Quantified?

• John Baez, Biology as information dynamics, 2 February 2017.

Abstract. If biology is the study of self-replicating entities, and we want to understand the role of information, it makes sense to see how information theory is connected to the ‘replicator equation’—a simple model of population dynamics for self-replicating entities. The relevant concept of information turns out to be the information of one probability distribution relative to another, also known as the Kullback–Leibler divergence. Using this we can get a new outlook on free energy, see evolution as a learning process, and give a clean general formulation of Fisher’s fundamental theorem of natural selection.
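As a minimal numerical sketch of the "evolution as learning" idea above (my illustration, not part of the talk): under the replicator equation with fixed fitnesses, the relative information (Kullback–Leibler divergence) of the population distribution from the distribution concentrated on the fittest type decreases monotonically. The fitness values and step size below are arbitrary choices for the demonstration.

```python
import numpy as np

def replicator_step(p, f, dt):
    """One Euler step of the replicator equation dp_i/dt = p_i (f_i - <f>).
    The total probability is preserved exactly, since the increments sum to 0."""
    mean_f = p @ f
    return p + dt * p * (f - mean_f)

def kl(q, p):
    """Relative information D(q||p), with the convention 0 log 0 = 0."""
    mask = q > 0
    return np.sum(q[mask] * np.log(q[mask] / p[mask]))

# Three self-replicating types with fixed fitnesses; type 2 is fittest.
f = np.array([1.0, 2.0, 3.0])
p = np.array([0.5, 0.3, 0.2])
q = np.array([0.0, 0.0, 1.0])   # distribution concentrated on the fittest type

divs = []
for _ in range(1000):
    divs.append(kl(q, p))
    p = replicator_step(p, f, dt=0.01)

# D(q||p(t)) decreases at every step: the population "learns" the fittest type.
assert all(d2 < d1 for d1, d2 in zip(divs, divs[1:]))
```

Here D(q‖p) reduces to −log p₂, which shrinks toward 0 as the fittest type takes over the population.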

For more, read:

• Marc Harper, The replicator equation as an inference dynamic.

• Marc Harper, Information geometry and evolutionary game theory.

• Barry Sinervo and Curt M. Lively, The rock-paper-scissors game and the evolution of alternative male strategies, Nature 380 (1996), 240–243.

• John Baez, Diversity, entropy and thermodynamics.

• John Baez, Information geometry.

The last reference contains proofs of the equations shown in red in my slides.
In particular, Part 16 contains a proof of my updated version of Fisher’s fundamental theorem.

6 Responses to Biology as Information Dynamics (Part 1)

  1. Marco Rossi says:

    Thank you very much! I didn’t know about the Fisher information metric! It will be my weekend read. Do you think it can somehow be related to the Ryu–Takayanagi formula?


  2. Blake Stacey says:

    The word “distribution” is repeated on slide 12.

  3. John Baez says:

    By the way, I’m very happy with my statement of Fisher’s fundamental theorem of natural selection—but I’ve never seen it anywhere else, though I’m having trouble imagining it’s new. It’s on the second to last page of my talk.

    Ronald Fisher’s original statement and proof became famous for their obscurity. Quoth Wikipedia:

    It uses some mathematical notation but is not a theorem in the mathematical sense.

    It states:

    “The rate of increase in fitness of any organism at any time is equal to its genetic variance in fitness at that time.”

    Or in more modern terminology:

    “The rate of increase in the mean fitness of any organism at any time ascribable to natural selection acting through changes in gene frequencies is exactly equal to its genetic variance in fitness at that time”.

    Largely as a result of Fisher’s feud with the American geneticist Sewall Wright about adaptive landscapes, the theorem was widely misunderstood to mean that the average fitness of a population would always increase, even though models showed this not to be the case. In 1972, George R. Price showed that Fisher’s theorem was indeed correct (and that Fisher’s proof was also correct, given a typo or two), but did not find it to be of great significance. The sophistication that Price pointed out, and that had made understanding difficult, is that the theorem gives a formula for part of the change in gene frequency, and not for all of it. This is a part that can be said to be due to natural selection.

    Price’s paper is here:

    • George R. Price, Fisher’s ‘fundamental theorem’ made clear, Annals of Human Genetics 36 (1972), 129–140.

    I don’t find it very clear, perhaps because I didn’t spend enough time on it.

    My result is a theorem in the mathematical sense, though quite an easy one. I assume a population distribution evolves according to the replicator equation and derive an equation whose right-hand side matches that of Fisher’s original equation: the variance of the fitness.

    But my left-hand side is different: it’s the square of the speed of the corresponding probability distribution, where speed is measured using the ‘Fisher information metric’. This metric was discovered by the same Fisher, but I don’t think he used it in his work on the fundamental theorem.
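The identity behind this statement can be checked numerically at a single point of the simplex (my sketch, not from the talk): the squared speed of the replicator flow in the Fisher information metric, Σᵢ ṗᵢ²/pᵢ, equals the variance of the fitness with respect to the distribution p.

```python
import numpy as np

rng = np.random.default_rng(0)

# A random point on the probability simplex and random fitnesses.
p = rng.random(5)
p /= p.sum()
f = rng.random(5)

# Replicator equation: dp_i/dt = p_i (f_i - <f>).
mean_f = p @ f
p_dot = p * (f - mean_f)

# Squared speed in the Fisher information metric: sum of p_dot_i^2 / p_i.
speed_sq = np.sum(p_dot**2 / p)

# Variance of the fitness with respect to p.
var_f = p @ (f - mean_f)**2

assert np.isclose(speed_sq, var_f)
```

The agreement is no accident: substituting ṗᵢ = pᵢ(fᵢ − ⟨f⟩) into Σᵢ ṗᵢ²/pᵢ gives Σᵢ pᵢ(fᵢ − ⟨f⟩)², which is exactly the variance. Note that nothing here requires the fitnesses to come from a gradient; f may even depend on p.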

    Something similar to my statement appears as Theorem 2 of Marc Harper’s paper:

    • Marc Harper, Information geometry and evolutionary game theory.

    and for that theorem he cites:

    • Josef Hofbauer and Karl Sigmund, Evolutionary Games and Population Dynamics, Cambridge University Press, Cambridge, 1998.

    However, his Theorem 2 assumes that the probability distribution flows along the gradient of a function, and I’m not assuming that: indeed, my version applies to the rock-paper-scissors game where the probability distribution of players moves round and round!
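To see the rock-paper-scissors cycling concretely, here is a small simulation (my sketch, with an arbitrary starting point) of the replicator equation for the zero-sum rock-paper-scissors game. The product p₁p₂p₃ is a conserved quantity of this flow, so orbits are closed loops around the mixed equilibrium (1/3, 1/3, 1/3) rather than paths down a gradient.

```python
import numpy as np

# Zero-sum rock-paper-scissors payoff matrix: each strategy beats one
# opponent and loses to another.
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])

def p_dot(p):
    """Replicator equation with game fitness f = A p.
    Since A is antisymmetric, the mean fitness p.(A p) is always 0."""
    f = A @ p
    return p * (f - p @ f)

def rk4_step(p, dt):
    """One classical Runge-Kutta step."""
    k1 = p_dot(p)
    k2 = p_dot(p + 0.5 * dt * k1)
    k3 = p_dot(p + 0.5 * dt * k2)
    k4 = p_dot(p + dt * k3)
    return p + dt / 6.0 * (k1 + 2 * k2 + 2 * k3 + k4)

p = np.array([0.6, 0.3, 0.1])
prod0 = p.prod()
for _ in range(2000):
    p = rk4_step(p, 0.01)

# p1*p2*p3 is conserved (up to integrator error), so the distribution
# circles the equilibrium forever instead of converging to it.
assert abs(p.prod() - prod0) < 1e-5
assert np.abs(p - 1/3).max() > 0.05   # still far from the mixed equilibrium
```

Conservation of p₁p₂p₃ (equivalently, of the relative information of the uniform distribution from p) is what rules out any gradient-flow description: a gradient flow would have to decrease some potential along trajectories, but here trajectories return to their starting point.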
