Last time we looked at a zero-sum game and noticed that when both players use their maximin strategy, we get a Nash equilibrium. This isn’t a coincidence—it always works this way for zero-sum games! This fact is not obvious. It will take some work to prove it. This will be our first really big theorem.

But first, we need to define a maximin strategy.

I already tried to give you the rough idea: it’s where you pick a mixed strategy that *maximizes* your expected payoff while assuming that no matter what mixed strategy you pick, the other player will pick the mixed strategy that *minimizes* your expected payoff. But to prove things about this concept, we need a more precise definition.

### The setup

First, remember the setup. We’re talking about a zero-sum 2-player normal form game. So as usual, we assume:

• Player A has some set of choices $\{1, \dots, m\}$

• Player B has some set of choices $\{1, \dots, n\}$

• If player A makes choice $i$ and player B makes choice $j$, the payoff to player A is $A_{ij}$ and the payoff to player B is $B_{ij}$

But, because it’s a zero-sum game, we also assume

$$A_{ij} + B_{ij} = 0$$

for all choices $i$ and $j$. In other words, the payoff matrices $A$ and $B$, whose entries are these numbers $A_{ij}$ and $B_{ij}$, sum to zero:

$$A + B = 0$$

We’ll let $p$ stand for A’s mixed strategy, and let $q$ stand for B’s mixed strategy. These are probability distributions. So, $p$ is a vector in $[0,1]^m$ obeying

$$\sum_{i=1}^m p_i = 1$$

while $q$ is a vector in $[0,1]^n$ obeying

$$\sum_{j=1}^n q_j = 1$$

Given these mixed strategies, A’s expected payoff is

$$p \cdot A q = \sum_{i,j} p_i A_{ij} q_j$$

while B’s expected payoff is

$$p \cdot B q = -p \cdot A q$$

Since $B = -A$, we will never mention the matrix $B$ again!
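To make these formulas concrete, here’s a quick numerical sketch in Python (the payoff matrix $A$ is just a made-up example):

```python
import numpy as np

# Example payoff matrix for player A in a hypothetical 2x2 zero-sum game.
A = np.array([[ 2.0, -1.0],
              [-3.0,  4.0]])
B = -A  # zero-sum: the payoff matrices sum to zero

# Mixed strategies: probability distributions over each player's choices.
p = np.array([0.5, 0.5])    # player A's mixed strategy
q = np.array([0.25, 0.75])  # player B's mixed strategy

payoff_A = p @ A @ q  # A's expected payoff, p · A q
payoff_B = p @ B @ q  # B's expected payoff, p · B q = -p · A q

print(payoff_A, payoff_B)  # the two expected payoffs sum to zero
```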

### Minima and maxima

As you might guess, we’re going to talk a lot about minima and maxima. So, we should be really clear about what they are!

**Definition 1.** Suppose $f \colon X \to \mathbb{R}$ is any real-valued function defined on any set $X$. We say $f$ has a **maximum** at $x \in X$ if

$$f(x) \ge f(y)$$

for all $y \in X$. In this case we call the number $f(x)$ the **maximum value** of $f$, and we write

$$f(x) = \max_{y \in X} f(y)$$

Note that mathematicians use ‘maximum’ to mean an element $x$ where the function gets as big as possible… and use ‘maximum value’ to mean how big $f$ gets there. This is a bit different than ordinary English!

Also note that a maximum may not exist! And if it exists, it may not be unique. For example, the function $f(x) = x$ on the real line has no maximum, while the sine function has lots. So unless we know for sure a function has exactly one maximum, we should talk about *a* maximum instead of *the* maximum.

Similar stuff is true for minima, too:

**Definition 2.** Suppose $f \colon X \to \mathbb{R}$ is any real-valued function defined on any set $X$. We say $f$ has a **minimum** at $x \in X$ if

$$f(x) \le f(y)$$

for all $y \in X$. In this case we call the number $f(x)$ the **minimum value** of $f$, and we write

$$f(x) = \min_{y \in X} f(y)$$

### Security levels

Pretend you’re player A. Since it’s a zero-sum game, we know B will try to maximize their payoff… which means minimizing your payoff. So, no matter what your mixed strategy $p$ is, you should expect that B will find a mixed strategy $q$ that’s a minimum of the function $q \mapsto p \cdot A q$.

So, you should only expect to get a payoff of

$$\min_q \, p \cdot A q$$

We call this player A’s **security level**. For short, let’s write it as

$$f(p) = \min_q \, p \cdot A q$$

A’s security level is a function of $p$. We graphed this function in the example we looked at last time. It’s the dark curve here:

The different lines show $p \cdot A q$ for different choices of $q$. The minimum of all these lines gives the dark curve.
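We can also compute A’s security level numerically. Since $p \cdot A q$ is linear in $q$, its minimum over all mixed strategies $q$ is attained at a pure strategy, so the security level is just the smallest entry of the vector $p \cdot A$. Here’s a sketch, using a made-up payoff matrix:

```python
import numpy as np

A = np.array([[ 2.0, -1.0],
              [-3.0,  4.0]])  # hypothetical payoff matrix for player A

def security_level(p, A):
    """A's security level: the minimum of p·Aq over B's mixed strategies q.

    Since p·Aq is linear in q, the minimum over the simplex is attained
    at a pure strategy, i.e. at the smallest entry of the vector p·A.
    """
    return float(np.min(p @ A))

print(security_level(np.array([0.5, 0.5]), A))
```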

### Maximin strategies

Last time we found A’s maximin strategy by finding the *maximum* of A’s security level—the place where that dark curve is highest!

Suppose $p^\ast$ is a maximin strategy for player A. Since it maximizes A’s security level,

$$\min_q \, p^\ast \cdot A q \ge \min_q \, p \cdot A q$$

for all mixed strategies $p$ for player A. In other words, we have

$$\min_q \, p^\ast \cdot A q = \max_p \min_q \, p \cdot A q$$

If you look at this formula, you can really see why we use the word ‘maximin’!
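As a sanity check, we can approximate a maximin strategy by brute force: scan over A’s mixed strategies $p = (t, 1-t)$ in a 2×2 game and pick the one with the highest security level. For matching pennies, where player A wins 1 when the choices match and loses 1 otherwise, this lands at $p = (1/2, 1/2)$ with security level 0:

```python
import numpy as np

# Matching pennies: player A wins 1 if the choices match, loses 1 otherwise.
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])

ts = np.linspace(0.0, 1.0, 1001)  # candidate probabilities for A's first choice
# Security level of p = (t, 1-t): the smaller entry of the vector p·A.
levels = [min(np.array([t, 1 - t]) @ A) for t in ts]

best = int(np.argmax(levels))
print(ts[best], levels[best])  # ≈ 0.5 and ≈ 0.0
```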

It’s a little-known false fact that the concept of maximin strategy was named after the Roman emperor Maximin. Such an emperor really does exist! But in fact, there were *two* Roman emperors named Maximin. So he exists, but he’s not unique.

### Definitions

Now we’re ready for some definitions that summarize what we’ve learned.

**Definition 3.** Given a zero-sum 2-player normal form game and a mixed strategy $p$ for player A, we define **A’s security level** to be

$$\min_q \, p \cdot A q$$

where the minimum is taken over all mixed strategies $q$ for player B.

**Definition 4.** Given a zero-sum 2-player normal form game, we say a mixed strategy $p^\ast$ for player A is a **maximin strategy** if

$$\min_q \, p^\ast \cdot A q = \max_p \min_q \, p \cdot A q$$

We can also make up definitions that apply to player B. We just need to remember that there’s a minus sign in B’s expected payoff, $-p \cdot A q$:

**Definition 5.** Given a zero-sum 2-player normal form game and a mixed strategy $q$ for player B, we define **B’s security level** to be

$$\min_p \, (-p \cdot A q)$$

where the minimum is taken over all mixed strategies $p$ for player A.

**Definition 6.** Given a zero-sum 2-player normal form game, we say a mixed strategy $q^\ast$ for B is a **minimax strategy for B** if

$$\min_p \, (-p \cdot A q^\ast) = \max_q \min_p \, (-p \cdot A q)$$

But there’s an easy fact about maxima and minima that lets us simplify this last definition. To make a function $f$ as big as possible is the same as making $-f$ as small as possible and then sticking a minus sign in front:

$$\max_x f(x) = -\min_x \, (-f(x))$$

Similarly, to minimize $f$ is the same as maximizing $-f$ and then taking the negative of that:

$$\min_x f(x) = -\max_x \, (-f(x))$$
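These minus-sign rules are easy to check numerically on any finite list of values:

```python
# The rules for pushing a minus sign through max and min,
# checked on an arbitrary finite list of values.
values = [3.0, -1.5, 2.0, 0.0, -4.0]

assert max(values) == -min(-v for v in values)  # max f = -min(-f)
assert min(values) == -max(-v for v in values)  # min f = -max(-f)
print("minus-sign rules verified")
```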

Using these rules, we get this little theorem:

**Theorem 1.** Given a zero-sum 2-player normal form game, $q^\ast$ is a minimax strategy for B if and only if:

$$\max_p \, p \cdot A q^\ast = \min_q \max_p \, p \cdot A q$$

**Proof.** Suppose that $q^\ast$ is a minimax strategy for B. By Definition 6,

$$\min_p \, (-p \cdot A q^\ast) = \max_q \min_p \, (-p \cdot A q)$$

Repeatedly using the rules for pushing minus signs through max and min, we see:

$$\min_p \, (-p \cdot A q^\ast) = -\max_p \, p \cdot A q^\ast$$

and then

$$\max_q \min_p \, (-p \cdot A q) = \max_q \left( -\max_p \, p \cdot A q \right) = -\min_q \max_p \, p \cdot A q$$

Taking the negative of both sides, we get the equation we want:

$$\max_p \, p \cdot A q^\ast = \min_q \max_p \, p \cdot A q$$

We can also reverse our calculation and show that this equation implies $q^\ast$ is a minimax strategy for B. So, this equation holds if and only if $q^\ast$ is a minimax strategy for B. █

This little theorem talks about *a minimum of maxima* instead of *a maximum of minima*. This is one reason people talk about ‘minimax strategies’. In fact what we’re calling a maximin strategy, people often call a minimax strategy!
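We can see Theorem 1’s ‘minimum of maxima’ in action numerically: scan over B’s mixed strategies $q = (t, 1-t)$ and minimize $\max_p \, p \cdot A q$, which by linearity is just the largest entry of the vector $A q$. For matching pennies this lands at $q = (1/2, 1/2)$:

```python
import numpy as np

# Matching pennies payoff matrix for player A.
A = np.array([[ 1.0, -1.0],
              [-1.0,  1.0]])

ts = np.linspace(0.0, 1.0, 1001)  # candidate probabilities for B's first choice
# max over p of p·Aq is attained at a pure strategy: the largest entry of A·q.
worst_cases = [max(A @ np.array([t, 1 - t])) for t in ts]

best = int(np.argmin(worst_cases))
print(ts[best], worst_cases[best])  # ≈ 0.5 and ≈ 0.0
```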

Next time we’ll start proving some really important things, which were first shown by the great mathematician John von Neumann:

• If both players in a zero-sum game use a maximin strategy, we get a Nash equilibrium.

• In any Nash equilibrium for a zero-sum game, both players must be using a maximin strategy!

• For any zero-sum game, there exists a Nash equilibrium.

Now we’re talking about mixed strategies, of course. We already saw a while back that if we only consider *pure* strategies, there are games like rock-paper-scissors and matching pennies that *don’t* have a Nash equilibrium.
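For instance, a quick exhaustive check confirms that matching pennies has no Nash equilibrium in pure strategies:

```python
import numpy as np
from itertools import product

A = np.array([[ 1, -1],
              [-1,  1]])  # matching pennies, payoffs for player A; B gets -A

def is_pure_nash(i, j):
    """Neither player can gain by unilaterally switching pure choices."""
    a_ok = all(A[i, j] >= A[k, j] for k in range(2))    # A can't improve
    b_ok = all(-A[i, j] >= -A[i, k] for k in range(2))  # B can't improve
    return a_ok and b_ok

pure_nash = [(i, j) for i, j in product(range(2), range(2)) if is_pure_nash(i, j)]
print(pure_nash)  # [] — no pure-strategy Nash equilibrium
```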

Before I quit, one more false fact. Just as the maximin strategy was named after the emperor Maximin, the minimax strategy was named after the emperor Minimax! I mentioned that Emperor Maximin really exists, but is not unique. The case of Emperor Minimax is even more interesting. He’s really unique… but he does not exist!
