Suppose we have a 2-player normal form game. As usual, we assume:

• Player A has some set of choices

• Player B has some set of choices

• If player A makes choice $i$ and player B makes choice $j$, the payoff to player A is $A_{ij}$ and the payoff to player B is $B_{ij}$.

Earlier we studied ‘pure strategies’, where the players make the same choice each time. Now we’ll study ‘mixed strategies’, where the players make their choices randomly. I want to show you that there’s always a Nash equilibrium when we allow mixed strategies—even in games like rock, paper, scissors that don’t have a Nash equilibrium with pure strategies!

But to do this, we need to *define* Nash equilibria for mixed strategies. And before that, we need to define mixed strategies!

First let’s make up a name for the set of player A’s choices:

$$X = \{1, \dots, m\}$$

and a name for the set of player B’s choices:

$$Y = \{1, \dots, n\}$$

**Definition 1.** A **mixed strategy for player A** is a probability distribution $p$ on the set of their choices, $X$. A **mixed strategy for player B** is a probability distribution $q$ on the set of their choices, $Y$.

Let’s recall exactly what this means, since you’ll need to know! Player A has a probability $p_i$ of making any choice $i \in X$, and these probabilities obey

$$0 \le p_i \le 1$$

and

$$\sum_{i=1}^m p_i = 1$$

Similarly, the probability that player B makes the choice $j \in Y$ is $q_j$, and these probabilities obey

$$0 \le q_j \le 1$$

and

$$\sum_{j=1}^n q_j = 1$$

In our earlier discussions of probability, we would call $X$ and $Y$ sets of **events**. An event is anything that can happen. But now the thing that can happen is that a player makes a certain choice in the game! So, now we’re calling $X$ and $Y$ sets of **choices**. But you can also think of these choices as events.
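To make this concrete, here is a small sketch of my own (the numbers are made up, not from the notes) representing mixed strategies as probability vectors and checking the two conditions above with NumPy:

```python
import numpy as np

def is_mixed_strategy(v):
    """Check that v is a probability distribution on a finite set of choices:
    every probability lies in [0, 1] and they sum to 1."""
    return bool(np.all(v >= 0) and np.all(v <= 1) and np.isclose(v.sum(), 1.0))

# Made-up example: player A has m = 2 choices, player B has n = 3 choices.
p = np.array([0.4, 0.6])        # a mixed strategy for A
q = np.array([0.5, 0.3, 0.2])   # a mixed strategy for B

print(is_mixed_strategy(p), is_mixed_strategy(q))  # True True
```

A pure strategy is the special case where one entry is 1 and the rest are 0, so pure strategies are mixed strategies too.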

### The expected payoff

Now let’s work out the expected value of the payoff to each player. To do this, we’ll assume:

1) Player A uses mixed strategy $p$.

2) Player B uses mixed strategy $q$.

3) Player A and player B’s choices are independent.

If you forget what ‘independent’ means, look at Part 8. The basic idea is player A’s choice doesn’t affect player B’s choice, and vice versa. After all, this is a ‘simultaneous game’, where each player makes their choice not knowing what the other has done.

But mathematically, the point is that we must assume the players’ choices are independent to know the probability of player A making choice $i$ *and* player B making choice $j$ is the product

$$p_i q_j$$

Knowing this, we can work out the expected value of the payoff to player A. Here it is:

$$\sum_{i=1}^m \sum_{j=1}^n p_i q_j A_{ij}$$

I hope you see why. The probability that player A makes choice $i$ and player B makes choice $j$ is $p_i q_j$. The payoff to player A when this happens is $A_{ij}$. We multiply these and sum over all the possible choices for both players. That’s how expected values work!

Similarly, the expected value of the payoff for player B is

$$\sum_{i=1}^m \sum_{j=1}^n p_i q_j B_{ij}$$
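As a concrete illustration (the $2 \times 3$ payoff matrix here is made up, not one from the notes), the double sum can be computed directly; notice how the independence assumption enters as the product $p_i q_j$:

```python
import numpy as np

# Made-up example: A has m = 2 choices, B has n = 3 choices.
p = np.array([0.4, 0.6])              # A's mixed strategy
q = np.array([0.5, 0.3, 0.2])         # B's mixed strategy
A = np.array([[1.0, -2.0,  3.0],
              [0.0,  4.0, -1.0]])     # A[i, j] = payoff to A at choices (i, j)

# Expected payoff to A: sum over all i, j of p_i * q_j * A_ij.
# Independence is what lets us use the product p[i] * q[j] here.
expected_A = sum(p[i] * q[j] * A[i, j]
                 for i in range(len(p)) for j in range(len(q)))
print(expected_A)
```

The expected payoff to B works the same way with $B_{ij}$ in place of $A_{ij}$.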

### More details

If you’re in the mood, I can make this more formal. Remember that $X \times Y$ is the set of all ordered pairs $(i, j)$ where $i \in X$ and $j \in Y$. A pair $(i, j)$ is an event where player A makes choice $i$ and player B makes choice $j$.

A’s payoff is a function on this set $X \times Y$. Namely, if player A makes choice $i$ and player B makes choice $j$, A’s payoff is $A_{ij}$. There’s also a probability distribution on $X \times Y$. Namely, the probability of the event $(i, j)$ is $p_i q_j$. So, the expected value of the payoff with respect to this probability distribution is

$$\sum_{(i, j) \in X \times Y} p_i q_j A_{ij}$$

But this is equal to what we’ve seen already:

$$\sum_{i=1}^m \sum_{j=1}^n p_i q_j A_{ij}$$

### Matrix multiplication and the dot product

It looks like all of the students in this class have studied some linear algebra. So, I’ll assume you know how to:

• take the dot product of vectors to get a number,

and

• multiply a vector by a matrix to get a new vector.

Click on the links if you want to review these concepts. They will let us write our formulas for expected payoffs much more efficiently!

Here’s how. First, we think of the probability distribution $p$ as a vector in $\mathbb{R}^m$, that is, a list of $m$ numbers:

$$p = (p_1, \dots, p_m)$$

Second, we think of the probability distribution $q$ as a vector in $\mathbb{R}^n$:

$$q = (q_1, \dots, q_n)$$

Third, we think of $A$ and $B$ as $m \times n$ matrices, since that’s what they are:

$$A = \begin{pmatrix} A_{11} & \cdots & A_{1n} \\ \vdots & \ddots & \vdots \\ A_{m1} & \cdots & A_{mn} \end{pmatrix}, \qquad B = \begin{pmatrix} B_{11} & \cdots & B_{1n} \\ \vdots & \ddots & \vdots \\ B_{m1} & \cdots & B_{mn} \end{pmatrix}$$

Here’s the cool part:

**Theorem.** If A has mixed strategy $p$ and B has mixed strategy $q$, then the expected value of A’s payoff is

$$p \cdot A q$$

and the expected value of B’s payoff is

$$p \cdot B q$$

**Proof.** We’ll only prove the first one, since the second works just the same way. By definition, $Aq$ is a vector in $\mathbb{R}^m$ with components

$$(Aq)_i = \sum_{j=1}^n A_{ij} q_j$$

Also by definition, the dot product of $p$ with $Aq$ is the number

$$p \cdot Aq = \sum_{i=1}^m p_i (Aq)_i = \sum_{i=1}^m \sum_{j=1}^n p_i A_{ij} q_j$$

But this agrees with our earlier formula for the expected value of A’s payoff, namely

$$\sum_{i=1}^m \sum_{j=1}^n p_i q_j A_{ij}$$

So, we’re done! █

It’s not just quicker to write

$$p \cdot A q$$

than

$$\sum_{i=1}^m \sum_{j=1}^n p_i q_j A_{ij}$$

It will also let us use tools from linear algebra to study games!
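For instance, here is a quick numerical sanity check (with a payoff matrix made up for illustration) that the matrix formula $p \cdot A q$ agrees with the explicit double sum:

```python
import numpy as np

p = np.array([0.4, 0.6])              # A's mixed strategy, a vector in R^m
q = np.array([0.5, 0.3, 0.2])         # B's mixed strategy, a vector in R^n
A = np.array([[1.0, -2.0,  3.0],
              [0.0,  4.0, -1.0]])     # A's payoff matrix, m x n

# The matrix formula: first (Aq)_i = sum_j A_ij q_j, then dot with p.
via_matrix = p @ (A @ q)

# The explicit double sum from the earlier formula.
via_sum = sum(p[i] * q[j] * A[i, j]
              for i in range(len(p)) for j in range(len(q)))

print(np.isclose(via_matrix, via_sum))  # True
```

The `@` operator does both the matrix-vector product and the dot product, so the whole formula is one short expression.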

### Nash equilibria

We’ll have to look at examples to understand this stuff better, but let me charge ahead and define ‘Nash equilibria’ for mixed strategies. The idea is similar to the idea we’ve already seen. A pair of mixed strategies, one for A and one for B, is a Nash equilibrium if neither player can improve the expected value of their payoff by unilaterally changing their mixed strategy.

Let’s make that precise. As before, the definition of Nash equilibrium involves two conditions:

**Definition.** Given a 2-player normal form game, a pair of mixed strategies $(p, q)$, one for player A and one for player B, is a **Nash equilibrium** if:

1) For all mixed strategies $p'$ for player A,

$$p' \cdot A q \le p \cdot A q$$

2) For all mixed strategies $q'$ for player B,

$$p \cdot B q' \le p \cdot B q$$

Condition 1) says that player A can’t improve their payoff by switching their mixed strategy from $p$ to any other mixed strategy $p'$. Condition 2) says that player B can’t improve their payoff by switching their mixed strategy from $q$ to any other mixed strategy $q'$.
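To see the definition in action, here is a sketch of a numerical check (my own illustration, not from the notes). Since $p' \cdot A q$ is linear in $p'$, it is maximized at some pure strategy, so it suffices to compare against pure-strategy deviations:

```python
import numpy as np

def is_nash(p, q, A, B, tol=1e-9):
    """Check conditions 1) and 2). Because the expected payoff is linear in
    each player's own strategy, no mixed deviation can beat the best pure
    deviation, so we only compare against single choices."""
    return (np.all(A @ q <= p @ A @ q + tol) and   # no choice i beats p.Aq
            np.all(p @ B <= p @ B @ q + tol))      # no choice j beats p.Bq

# Rock, paper, scissors: a zero-sum game, so B = -A.
A = np.array([[ 0., -1.,  1.],
              [ 1.,  0., -1.],
              [-1.,  1.,  0.]])
B = -A
uniform = np.array([1/3, 1/3, 1/3])

print(is_nash(uniform, uniform, A, B))                   # True
print(is_nash(np.array([1., 0., 0.]), uniform, A, B))    # False: B would switch
```

So the uniform pair is a Nash equilibrium for rock, paper, scissors, while "always rock" against uniform is not: player B could do better by always playing paper.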

Now, one of our goals is to prove this big wonderful theorem:

**Theorem.** For any 2-player normal form game, there exists a pair of mixed strategies $(p, q)$, one for player A and one for player B, that is a Nash equilibrium.

For zero-sum games this was proved by John von Neumann. Later John Nash proved a version for nonzero-sum games, and even games with more than two players. These are famously smart mathematicians, so you should not expect the proof to be extremely easy. We will need to work!

But next time we’ll start by looking at examples, to get our feet on the ground and start getting some intuition for these new ideas.


In the proof of the first Theorem, should that first sigma be indexed from j = 1 to n?

Also, small grammar point in the paragraph preceding “The expected payoff”:

Thank you for your great notes!

You’re welcome! I bet you’re the Jesse in my class.

Yes, thanks—fixed. I keep mixing up my m’s and n’s.

Great lecture notes! The best way to explain Game theory is as you have done. Thanks. Do you have them on pdf?

No, I don’t have a PDF of this.