## Fisher’s Fundamental Theorem (Part 4)

I wrote a paper that summarizes my work connecting natural selection to information theory:

• John Baez, The fundamental theorem of natural selection.

Check it out! If you have any questions or see any mistakes, please let me know.

Just for fun, here’s the abstract and introduction.

Abstract. Suppose we have n different types of self-replicating entity, with the population $P_i$ of the ith type changing at a rate equal to $P_i$ times the fitness $f_i$ of that type. Suppose the fitness $f_i$ is any continuous function of all the populations $P_1, \dots, P_n$. Let $p_i$ be the fraction of replicators that are of the ith type. Then $p = (p_1, \dots, p_n)$ is a time-dependent probability distribution, and we prove that its speed as measured by the Fisher information metric equals the variance in fitness. In rough terms, this says that the speed at which information is updated through natural selection equals the variance in fitness. This result can be seen as a modified version of Fisher’s fundamental theorem of natural selection. We compare it to Fisher’s original result as interpreted by Price, Ewens and Edwards.

#### Introduction

In 1930, Fisher stated his “fundamental theorem of natural selection” as follows:

The rate of increase in fitness of any organism at any time is equal to its genetic variance in fitness at that time.

Some tried to make this statement precise as follows:

The time derivative of the mean fitness of a population equals the variance of its fitness.

But this is only true under very restrictive conditions, so the statement ignited a controversy.

An interesting resolution was proposed by Price, and later amplified by Ewens and Edwards. We can formalize their idea as follows. Suppose we have n types of self-replicating entity, and idealize the population of the ith type as a real-valued function $P_i(t)$. Suppose

$\displaystyle{ \frac{d}{dt} P_i(t) = f_i(P_1(t), \dots, P_n(t)) \, P_i(t) }$

where the fitness $f_i$ is a differentiable function of the populations of every type of replicator. The mean fitness at time $t$ is

$\displaystyle{ \overline{f}(t) = \sum_{i=1}^n p_i(t) \, f_i(P_1(t), \dots, P_n(t)) }$

where $p_i(t)$ is the fraction of replicators of the ith type:

$\displaystyle{ p_i(t) = \frac{P_i(t)}{\phantom{\Big|} \sum_{j = 1}^n P_j(t) } }$
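The setup above can be sketched numerically. Here is a minimal simulation of the replicator equation $\dot{P}_i = f_i(P) P_i$ using a simple Euler integrator; the particular `fitness` function is a hypothetical density-dependent choice made up purely for illustration, not anything from the paper.

```python
import numpy as np

def fitness(P):
    # Hypothetical fitness functions: each type's fitness depends on
    # all the populations, as the theorem allows.
    return np.array([1.0 - 0.1 * P.sum(),
                     0.5 + 0.2 * P[0] / (1.0 + P.sum()),
                     0.8 - 0.05 * P[1]])

def simulate(P0, dt=1e-3, steps=5000):
    """Euler-integrate dP_i/dt = f_i(P) * P_i and return the trajectory."""
    P = np.array(P0, dtype=float)
    traj = [P.copy()]
    for _ in range(steps):
        P = P + dt * fitness(P) * P
        traj.append(P.copy())
    return np.array(traj)

traj = simulate([1.0, 2.0, 3.0])
# Fractions p_i(t) = P_i(t) / sum_j P_j(t): a probability distribution at each time.
p = traj / traj.sum(axis=1, keepdims=True)
```

A forward-Euler step is crude but enough to illustrate the definitions; for serious work one would use an adaptive ODE solver.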

By the product rule, the rate of change of the mean fitness is the sum of two terms:

$\displaystyle{ \frac{d}{dt} \overline{f}(t) = \sum_{i=1}^n \dot{p}_i(t) \, f_i(P_1(t), \dots, P_n(t)) \; + \; }$

$\displaystyle{ \sum_{i=1}^n p_i(t) \,\frac{d}{dt} f_i(P_1(t), \dots, P_n(t)) }$

The first of these two terms equals the variance of the fitness at time $t$. We give the easy proof in Theorem 1. Unfortunately, the conceptual significance of this first term is much less clear than that of the total rate of change of mean fitness. Ewens concluded that “the theorem does not provide the substantial biological statement that Fisher claimed”.
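The identity behind Theorem 1 is easy to check numerically. Since $\dot{p}_i = p_i(f_i - \overline{f})$ by the quotient rule, the first term $\sum_i \dot{p}_i f_i$ should equal the variance $\sum_i p_i (f_i - \overline{f})^2$ for any populations and fitness values. A quick sketch on random data (the specific numbers are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
P = rng.uniform(0.1, 5.0, size=6)   # arbitrary populations P_i at some instant
f = rng.normal(size=6)              # arbitrary fitness values f_i at that instant

p = P / P.sum()                     # fractions p_i
fbar = p @ f                        # mean fitness
p_dot = p * (f - fbar)              # dp_i/dt, from the quotient rule

first_term = p_dot @ f              # sum_i (dp_i/dt) f_i
variance = p @ (f - fbar) ** 2      # variance of fitness

assert np.isclose(first_term, variance)
```

The agreement is exact (up to floating point), since the two expressions are algebraically identical.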

But there is another way out, based on an idea Fisher himself introduced in 1922: Fisher information. Fisher information gives rise to a Riemannian metric on the space of probability distributions on a finite set, called the ‘Fisher information metric’—or in the context of evolutionary game theory, the ‘Shahshahani metric’. Using this metric we can define the speed at which a time-dependent probability distribution changes with time. We call this its ‘Fisher speed’. Under just the assumptions already stated, we prove in Theorem 2 that the Fisher speed of the probability distribution

$p(t) = (p_1(t), \dots, p_n(t))$

is the variance of the fitness at time $t$.
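The content of Theorem 2 can also be checked directly. In the Fisher information metric the squared length of the tangent vector $\dot{p}$ at $p$ is $\sum_i \dot{p}_i^2 / p_i$, and with $\dot{p}_i = p_i(f_i - \overline{f})$ this collapses to the variance of fitness. A sketch on arbitrary random data (this measures the speed via $\sum_i \dot{p}_i^2/p_i$, the convention that makes the theorem's statement come out as the variance):

```python
import numpy as np

rng = np.random.default_rng(1)
P = rng.uniform(0.1, 5.0, size=5)   # arbitrary populations P_i
f = rng.normal(size=5)              # arbitrary fitness values f_i

p = P / P.sum()                     # probability distribution p_i
fbar = p @ f                        # mean fitness
p_dot = p * (f - fbar)              # dp_i/dt under the replicator equation

# Squared speed of p(t) in the Fisher information metric:
fisher_speed_sq = np.sum(p_dot ** 2 / p)
variance = p @ (f - fbar) ** 2      # variance of fitness

assert np.isclose(fisher_speed_sq, variance)
```

Again the match is exact, since $\sum_i \dot{p}_i^2/p_i = \sum_i p_i (f_i - \overline{f})^2$ algebraically.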

As explained by Harper, natural selection can be thought of as a learning process, and studied using ideas from information geometry—that is, the geometry of the space of probability distributions. As $p(t)$ changes with time, the rate at which information is updated is closely connected to its Fisher speed. Thus, our revised version of the fundamental theorem of natural selection can be loosely stated as follows:

As a population changes with time, the rate at which information is updated equals the variance of fitness.

The precise statement, with all the hypotheses, is in Theorem 2. But one lesson is this: variance in fitness may not cause ‘progress’ in the sense of increased mean fitness, but it does cause change!

For more details in a user-friendly blog format, read the whole series:

Part 1: the obscurity of Fisher’s original paper.

Part 2: a precise statement of Fisher’s fundamental theorem of natural selection, and conditions under which it holds.

Part 3: a modified version of the fundamental theorem of natural selection, which holds much more generally.

Part 4: my paper on the fundamental theorem of natural selection.

### 7 Responses to Fisher’s Fundamental Theorem (Part 4)

1. SteveB says:

Equation 6 is missing a factor $f_i$ on the rhs.

2. ecoquant says:

Very good. Thanks!


5. Jon says:

Is this also true for generalized populations within the human economy? This would then provide a case for decreasing the reliance on strict admissions criteria at universities, with supporting examples provided by many 20th century giants of physics (Dirac: engineer turned physicist due to a WWI-induced increase in the acceptance rate to physics. Einstein: unable to progress beyond his PhD. Noether: excluded as a female.)

6. leebloomquist says:

Click to access interface.pdf

The founding paper on the interface theory of perception, a theory based on evolution. There is a section on research possibilities for mathematics and physics!
