Game Theory (Part 18)

5 March, 2013

We’re talking about zero-sum 2-player normal form games. Last time we saw that in a Nash equilibrium for a game like this, both players must use a maximin strategy. Now let’s try to prove the converse!

In other words: let’s try to prove that if both players use a maximin strategy, the result is a Nash equilibrium.

Today we’ll only prove this is true if a certain equation holds. It’s the cool-looking equation we saw last time:

\displaystyle{\min_{q'} \max_{p'} \; p' \cdot A q' = \max_{p'} \min_{q'} \; p' \cdot A q'}

Last time we showed this cool-looking equation is true whenever our game has a Nash equilibrium. In fact, this equation is always true. In other words: it’s true for any zero-sum two-player normal form game. The reason is that any such game has a Nash equilibrium. But we haven’t shown that yet.

So, let’s do what we can easily do.

Maximin strategies give Nash equilibria… sometimes

Let’s start by remembering some facts we saw in Part 16 and Part 17.

We’re studying a zero-sum 2-player normal form game. Player A’s payoff matrix is A, and player B’s payoff matrix is -A.

We saw that a pair of mixed strategies (p,q), one for player A and one for player B, is a Nash equilibrium if and only if

1) p \cdot A q \ge p' \cdot A q for all p'

and

2) p \cdot A q \le p \cdot A q' for all q'.

We saw that p is a maximin strategy for player A if and only if:

\displaystyle{ \min_{q'} \; p \cdot A q'  = \max_{p'} \min_{q'} \; p' \cdot A q' }

We also saw that q is a maximin strategy for player B if and only if:

\displaystyle{  \max_{p'} \; p' \cdot A q  = \min_{q'} \max_{p'} \; p' \cdot A q' }

With these in hand, we can easily prove our big result for the day. We’ll call it Theorem 4, continuing with the theorem numbers we started last time:

Theorem 4. Suppose we have a zero-sum 2-player normal form game for which

\displaystyle{\min_{q'} \max_{p'} \; p' \cdot A q' = \max_{p'} \min_{q'} \; p' \cdot A q'}      ★

holds. If p is a maximin strategy for player A and q is a maximin strategy for player B, then (p,q) is a Nash equilibrium.

Proof. Suppose that p is a maximin strategy for player A and q is a maximin strategy for player B. Thus:

\displaystyle{ \min_{q'} \; p \cdot A q'  = \max_{p'} \min_{q'} \; p' \cdot A q' }

and

\displaystyle{ \max_{p'} \; p' \cdot A q  = \min_{q'} \max_{p'} \; p' \cdot A q' }

But since ★ holds, the right sides of these two equations are equal. So, the left sides are equal too:

\displaystyle{ \min_{q'} \; p \cdot A q' = \max_{p'} \; p' \cdot A q  }     ★★

Now, since a function is always less than or equal to its maximum value, and greater than or equal to its minimum value, we have

\displaystyle{ \min_{q'} \; p \cdot A q' \le p \cdot A q \le \max_{p'} \; p' \cdot A q  }

But ★★ says the quantity at far left here equals the quantity at far right! So, the quantity in the middle must equal both of them:

\displaystyle{ \min_{q'} \; p \cdot A q' = p \cdot A q = \max_{p'} \; p' \cdot A q  }     ★★★

By the definition of minimum value, the first equation in ★★★:

\displaystyle{  p \cdot A q = \min_{q'} \; p \cdot A q' }

says that

\displaystyle{ p \cdot A q \le p \cdot A q' }

for all q'. This is condition 2) in the definition of Nash equilibrium. Similarly, by the definition of maximum value, the second equation in ★★★:

\displaystyle{ p \cdot A q = \max_{p'} \; p' \cdot A q }

says that

\displaystyle{ p \cdot A q \ge p' \cdot A q }

for all p'. This is condition 1) in the definition of Nash equilibrium. So, the pair (p,q) is a Nash equilibrium. █
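
To see Theorem 4 in action, here's a numerical sanity check, a minimal sketch assuming the numpy library. It grid-searches for both players' maximin strategies in the modified matching pennies game from Part 15, then verifies the Nash conditions. Checking pure-strategy deviations is enough, since each player's expected payoff is linear in their own mixed strategy.

```python
import numpy as np

A = np.array([[10.0, -10.0],
              [-10.0, 20.0]])   # player A's payoffs; player B's are -A
grid = np.linspace(0, 1, 1001)
strategies = [np.array([t, 1 - t]) for t in grid]

# A's security level min_{q'} p.(Aq') is the smallest entry of the row
# vector pA, since the expected payoff is linear in q'.
p = strategies[int(np.argmax([min(p_ @ A) for p_ in strategies]))]

# B's security level min_{p'} -p'.(Aq) is the smallest entry of -Aq.
q = strategies[int(np.argmax([min(-(A @ q_)) for q_ in strategies]))]

print(p, q)   # -> [0.6 0.4] for both, i.e. (3/5, 2/5)

# Nash check: no deviation helps. Checking pure deviations suffices,
# again by linearity in each player's own strategy.
v = p @ A @ q
assert all(A[i] @ q <= v + 1e-9 for i in range(2))           # for player A
assert all(-(p @ A[:, j]) <= -v + 1e-9 for j in range(2))    # for player B
print("Nash equilibrium confirmed; value =", round(v, 6))    # -> 2.0
```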


Game Theory (Part 17)

27 February, 2013

Last time we saw the official definition of maximin strategy. Now we’ll prove something really important. In a Nash equilibrium for a zero-sum game, both players must be using a maximin strategy!

To prove this we will need to look at a lot of maxima and minima. We will always assume these maxima and minima exist. For what we’re doing, this is true. This can be proved using an important result from topology: a continuous real-valued function f: X \to \mathbb{R} on a compact set X attains a minimum and a maximum. If you haven’t learned this yet… well, I hope you do by the time you get a degree in mathematics.

But now is not the time to talk about this. Let’s dive in!

Nash equilibria give maximin strategies

We start with a cool-looking inequality:

Theorem 1. For any zero-sum 2-player normal form game,

\displaystyle{ \min_{q'} \max_{p'} p' \cdot A q' \ge \max_{p'} \min_{q'} \; p' \cdot A q'}

Proof. Since a function is always greater than or equal to its minimum value, for any mixed strategies p and q we have

\displaystyle{  p \cdot A q \ge \min_{q'} \; p \cdot A q'}

If one function is \ge another, its maximum value is \ge the other function’s maximum value. So, the above inequality gives

\displaystyle{  \max_{p'} p' \cdot A q \ge \max_{p'} \min_{q'} \; p' \cdot A q'}

The right side here is just a number; the left side is a function of q. Since this function is always greater than or equal to the right side, so is its minimum:

\displaystyle{ \min_{q'} \max_{p'} p' \cdot A q' \ge \max_{p'} \min_{q'} \; p' \cdot A q'}   █
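
If you'd like to see this inequality in action, here's a small numerical experiment, a sketch assuming numpy, using randomly generated 2×2 games:

```python
import numpy as np

rng = np.random.default_rng(0)
grid = np.linspace(0, 1, 501)

for _ in range(100):
    A = rng.integers(-10, 11, size=(2, 2)).astype(float)
    # The inner max (resp. min) is attained at a pure strategy, since the
    # expected payoff is linear in each player's own mixed strategy.
    minimax = min(max(A @ np.array([t, 1 - t])) for t in grid)
    maximin = max(min(np.array([t, 1 - t]) @ A) for t in grid)
    assert minimax >= maximin - 1e-9
print("min max >= max min held for all 100 random games")
```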

Next, we’ll show this cool-looking inequality becomes an equation when a Nash equilibrium exists. In fact a Nash equilibrium always exists, but we haven’t shown this yet. So:

Theorem 2. Given a zero-sum 2-player normal form game for which a Nash equilibrium exists,

\displaystyle{\min_{q'} \max_{p'} \; p' \cdot A q' = \max_{p'} \min_{q'} \; p' \cdot A q'}

Proof. Suppose a Nash equilibrium (p,q) exists. Then for any mixed strategy p' for player A, we have

\displaystyle{ p \cdot A q \ge p' \cdot A q}

since A can’t improve their payoff by switching their mixed strategy. Similarly, for any mixed strategy q' for player B, p \cdot B q \ge p \cdot B q', since B can’t improve their payoff by switching their mixed strategy. But B = -A, so this says

\displaystyle{ p \cdot A q' \ge p \cdot A q}

Combining these two inequalities, we get

\displaystyle{ p \cdot A q' \ge p' \cdot A q}

for all p', q'. The minimum of the left side over all q' must be greater than or equal to the right side, which doesn’t depend on q':

\displaystyle{ \min_{q'} p \cdot A q' \ge p' \cdot A q}

Now the maximum of the right side over all p' must be less than or equal to the left side, which doesn’t depend on p':

\displaystyle{ \min_{q'} p \cdot A q' \ge \max_{p'} p' \cdot A q}

It follows that

\begin{array}{ccl} \displaystyle{ \max_{p'} \min_{q'} p' \cdot A q'} &\ge& \displaystyle{ \min_{q'} p \cdot A q'} \\  \\  &\ge&  \displaystyle{  \max_{p'} p' \cdot A q } \\  \\ &\ge&  \displaystyle{ \min_{q'} \max_{p'} p' \cdot A q' } \end{array}

The middle inequality here is the one we saw a moment ago. The first inequality comes from the fact that the maximum value of a function is greater than or equal to any of its values:

\textrm{for all } x, \; \displaystyle{ \max_{x'} f(x') \ge f(x) }

so

\displaystyle{ \max_{p'} \min_{q'} p' \cdot A q' \ge \min_{q'} p \cdot A q' }

And the last inequality comes from the fact that the values of a function are always greater than or equal to its minimum value:

\textrm{for all } x, \; \displaystyle{ f(x) \ge \min_{x'} f(x') }

so

\displaystyle{  \max_{p'} p' \cdot A q  \ge  \min_{q'} \max_{p'} p' \cdot A q' }

Putting all these inequalities together, we get

\displaystyle{ \max_{p'} \min_{q'} \; p' \cdot A q' \ge \min_{q'} \max_{p'} \; p' \cdot A q'}

On the other hand, Theorem 1 gives an inequality pointing the other way:

\displaystyle{ \min_{q'} \max_{p'} p' \cdot A q' \ge \max_{p'} \min_{q'} \; p' \cdot A q'}

So, the two sides must be equal:

\displaystyle{ \max_{p'} \min_{q'} \; p' \cdot A q' = \min_{q'} \max_{p'} \; p' \cdot A q'}

which is what we were trying to show!   █
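
For a concrete example, take the modified matching pennies game from Part 15, which, as we'll eventually see, has a Nash equilibrium. A grid-search sketch (assuming numpy) shows both sides of the equation coming out to 2:

```python
import numpy as np

A = np.array([[10.0, -10.0],
              [-10.0, 20.0]])
grid = np.linspace(0, 1, 1001)

# Inner max/min can be taken over pure strategies (linearity); the outer
# optimizations run over a grid of mixed strategies.
minimax = min(max(A @ np.array([t, 1 - t])) for t in grid)
maximin = max(min(np.array([t, 1 - t]) @ A) for t in grid)
print(minimax, maximin)   # both -> 2.0 (up to rounding)
```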

What’s the point of this cool-looking equation? One point is it connects the terms ‘minimax’ and ‘maximin’. There’s a lot more to say about it. But right now, we need it for one big thing: it lets us prove that in a Nash equilibrium for a zero-sum game, both players must be using a maximin strategy!

Theorem 3. If (p,q) is a Nash equilibrium for a zero-sum 2-player normal-form game, then p is a maximin strategy for player A and q is a maximin strategy for player B.

Proof. Let’s assume that (p,q) is a Nash equilibrium. We need to show that p is a maximin strategy for player A and q is a maximin strategy for player B.

First let’s remember some things we saw in the proof of Theorem 2. We assumed that (p,q) is a Nash equilibrium, and we showed

\begin{array}{ccl} \displaystyle{ \max_{p'} \min_{q'} p' \cdot A q'} &\ge& \displaystyle{ \min_{q'} p \cdot A q'} \\  \\  &\ge&  \displaystyle{  \max_{p'} p' \cdot A q } \\  \\ &\ge&  \displaystyle{ \min_{q'} \max_{p'} p' \cdot A q' } \end{array}

If this looks confusing, go back to the proof of Theorem 2. But now look at the beginning and the end of this chain of inequalities. We saw in Theorem 2 that they’re equal! So all the stuff in the middle has to be equal, too. In particular,

\displaystyle{ \min_{q'} \; p \cdot A q'  = \max_{p'} \min_{q'} \; p' \cdot A q' }

Last time we saw this means that p is a maximin strategy for player A. Also,

\displaystyle{  \max_{p'} \; p' \cdot A q  = \min_{q'} \max_{p'} \; p' \cdot A q' }

Last time we saw this means that q is a maximin strategy for player B.   █

Whew! That was quite a workout!

But we’re on a mission here, and we’re not done. We’ve shown that if (p,q) is a Nash equilibrium, p and q are maximin strategies. Next time we’ll try to show the converse: if p and q are maximin strategies, then (p,q) is a Nash equilibrium.


Game Theory (Part 16)

26 February, 2013

Last time we looked at a zero-sum game and noticed that when both players use their maximin strategy, we get a Nash equilibrium. This isn’t a coincidence—it always works this way for zero-sum games! This fact is not obvious. It will take some work to prove it. This will be our first really big theorem.

But first, we need to define a maximin strategy.

I already tried to give you the rough idea: it’s where you pick a mixed strategy that maximizes your expected payoff while assuming that no matter what mixed strategy you pick, the other player will pick the mixed strategy that minimizes your expected payoff. But to prove things about this concept, we need a more precise definition.

The setup

First, remember the setup. We’re talking about a zero-sum 2-player normal form game. So as usual, we assume:

• Player A has some set of choices i = 1, \dots, m.

• Player B has some set of choices j = 1, \dots, n.

• If player A makes choice i and player B makes choice j, the payoff to player A is A_{ij} and the payoff to player B is B_{ij}.

But, because it’s a zero-sum game, we also assume

A_{ij} + B_{ij} = 0

for all choices i = 1, \dots, m and j = 1, \dots, n. In other words, the payoff matrices A and B, whose entries are these numbers A_{ij} and B_{ij}, sum to zero:

A + B = 0

We’ll let p stand for A’s mixed strategy, and let q stand for B’s mixed strategy. These are probability distributions. So, p = (p_1, \dots, p_m) is a vector in \mathbb{R}^m obeying

0 \le p_i \le 1 , \quad \displaystyle{ \sum_{i = 1}^m p_i = 1 }

while q = (q_1 , \dots, q_n) is a vector in \mathbb{R}^n obeying

0 \le q_j \le 1 , \quad  \displaystyle{ \sum_{j=1}^n q_j = 1 }

Given these mixed strategies, A’s expected payoff is

p \cdot A q

while B’s expected payoff is

p \cdot B q = - p \cdot A q

Since B = -A, we will never mention the matrix B again!

Minima and maxima

As you might guess, we’re going to talk a lot about minima and maxima. So, we should be really clear about what they are!

Definition 1. Suppose f : X \to \mathbb{R} is any real-valued function defined on any set X. We say f has a maximum at x \in X if

f(x) \ge f(x')

for all x' \in X. In this case we call the number f(x) the maximum value of f, and we write

\displaystyle{ f(x) = \max_{x' \in X} f(x') }

Note that mathematicians use ‘maximum’ to mean an element x where the function f gets as big as possible… and use ‘maximum value’ to mean how big f gets there. This is a bit different than ordinary English!

Also note that a maximum may not exist! And if it exists, it may not be unique. For example, the function x^2 on the real line has no maximum, while the sine function has lots. So unless we know for sure a function has exactly one maximum, we should talk about a maximum instead of the maximum.

Similar stuff is true for minima, too:

Definition 2. Suppose f : X \to \mathbb{R} is any real-valued function defined on any set X. We say f has a minimum at x \in X if

f(x) \le f(x')

for all x' \in X. In this case we call the number f(x) the minimum value of f, and we write

\displaystyle{ f(x) = \min_{x' \in X} f(x') }

Security levels

Pretend you’re player A. Since it’s a zero-sum game, we know B will try to maximize their payoff… which means minimizing your payoff. So, no matter what your mixed strategy p is, you should expect that B will find a mixed strategy q' that minimizes

p \cdot A q'

So, you should only expect to get a payoff of

\displaystyle{ \min_{q' \in \{\textrm{B's mixed strategies}\}} \; p \cdot A q' }

We call this player A’s security level. For short, let’s write it as

\displaystyle{ \min_{q'} \; p \cdot A q' }

A’s security level is a function of p. We graphed this function in the example we looked at last time. It’s the dark curve here:

The different lines show p \cdot A q for different choices of q. The minimum of all these gives the dark curve.

Maximin strategies

Last time we found A’s maximin strategy by finding the maximum of A’s security level—the place where that dark curve is highest!

Suppose p is a maximin strategy for player A. Since it maximizes A’s security level,

\displaystyle{ \min_{q'} \; p \cdot A q' \ge  \min_{q'} \; p' \cdot A q' }

for all mixed strategies p' for player A. In other words, we have

\displaystyle{ \min_{q'} \; p \cdot A q'  = \max_{p'} \min_{q'} \; p' \cdot A q' }

If you look at this formula, you can really see why we use the word ‘maximin’!

It’s a little-known false fact that the concept of maximin strategy was named after the Roman emperor Maximin. Such an emperor really does exist! But in fact, there were two Roman emperors named Maximin. So he exists, but he’s not unique.

 

Definitions

Now we’re ready for some definitions that summarize what we’ve learned.

Definition 3. Given a zero-sum 2-player normal form game and a mixed strategy p for player A, we define A’s security level to be

\displaystyle{ \min_{q'} \; p \cdot A q' }

Definition 4. Given a zero-sum 2-player normal form game, we say a mixed strategy p for player A is a maximin strategy if

\displaystyle{ \min_{q'} \; p \cdot A q'  = \max_{p'} \min_{q'} \; p' \cdot A q' }

We can also make up definitions that apply to player B. We just need to remember that there’s a minus sign in B’s expected payoff:

Definition 5. Given a zero-sum 2-player normal form game and a mixed strategy q for player B, we define B’s security level to be

\displaystyle{ \min_{p'} \; - p' \cdot A q }

Definition 6. Given a zero-sum 2-player normal form game, we say a mixed strategy q for B is a maximin strategy for B if

\displaystyle{  \min_{p'} \; - p' \cdot A q = \max_{q'} \min_{p'} \; - p' \cdot A q' }

But there’s an easy fact about maxima and minima that lets us simplify this last definition. To make a function -f as big as possible is the same as making f as small as possible and then sticking a minus sign in front:

\displaystyle{ \max_{x \in X} -f(x) = - \min_{x \in X} f(x)}

Similarly, to minimize -f is the same as maximizing f and then taking the negative of that:

\displaystyle{ \min_{x \in X} -f(x) = - \max_{x \in X} f(x)}

Using these rules, we get this little theorem:

Theorem 1. Given a zero-sum 2-player normal form game, q is a maximin strategy for B if and only if:

\displaystyle{  \max_{p'} \; p' \cdot A q  = \min_{q'} \max_{p'} \; p' \cdot A q' }

Proof. Suppose that q is a maximin strategy for B. By Definition 6,

\displaystyle{  \min_{p'} \; - p' \cdot A q = \max_{q'} \min_{p'} \; - p' \cdot A q' }

Repeatedly using the rules for pushing minus signs through max and min, we see:

\displaystyle{ - \max_{p'} \; p' \cdot A q = \max_{q'} \left( - \max_{p'} \; p' \cdot A q' \right) }

and then

\displaystyle{ - \max_{p'} \; p' \cdot A q = - \min_{q'} \max_{p'} \; p' \cdot A q' }

Taking the negative of both sides, we get the equation we want:

\displaystyle{  \max_{p'} \; p' \cdot A q  = \min_{q'} \max_{p'} \; p' \cdot A q' }

We can also reverse our calculation and show that this equation implies q is a maximin strategy. So, this equation is true if and only if q is a maximin strategy for B.   █
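
Here's a numerical illustration of this little theorem, a sketch assuming numpy and using the game from Part 15: maximizing B's security level (Definition 6) picks out the same strategy as minimizing \max_{p'} p' \cdot A q.

```python
import numpy as np

A = np.array([[10.0, -10.0],
              [-10.0, 20.0]])   # the game from Part 15
grid = np.linspace(0, 1, 1001)
qs = [np.array([t, 1 - t]) for t in grid]

# Definition 6: maximize B's security level  min_{p'} -p'.(Aq)
q_def6 = qs[int(np.argmax([min(-(A @ q)) for q in qs]))]

# Theorem 1: minimize  max_{p'} p'.(Aq)
q_thm1 = qs[int(np.argmin([max(A @ q) for q in qs]))]

print(q_def6, q_thm1)   # both -> [0.6 0.4]
```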

This little theorem talks about a minimum of maxima instead of a maximum of minima. This is one reason people talk about ‘minimax strategies’. In fact what we’re calling a maximin strategy, people often call a minimax strategy!

Next time we’ll start proving some really important things, which were first shown by the great mathematician John von Neumann:

• If both players in a zero-sum game use a maximin strategy, we get a Nash equilibrium.

• In any Nash equilibrium for a zero-sum game, both players must be using a maximin strategy!

• For any zero-sum game, there exists a Nash equilibrium.

Now we’re talking about mixed strategies, of course. We already saw a while back that if we only consider pure strategies, there are games like rock-paper-scissors and matching pennies that don’t have a Nash equilibrium.

Before I quit, one more false fact. Just as the maximin strategy was named after the emperor Maximin, the minimax strategy was named after the emperor Minimax! I mentioned that Emperor Maximin really exists, but is not unique. The case of Emperor Minimax is even more interesting. He’s really unique… but he does not exist!


Game Theory (Part 15)

20 February, 2013

In Part 13 we saw an example of a Nash equilibrium where both players use a mixed strategy: that is, make their choice randomly, using a certain probability distribution on their set of choices.

We found this Nash equilibrium using the oldest method known to humanity: we guessed it. Guessing is what mathematicians do when we don’t know anything better to do. When you’re trying to solve a new kind of problem, it’s often best to start with a very easy example, where you can guess the answer. While you’re doing this, you should try to notice patterns! These can give you tricks that are useful for harder problems, where pure guessing won’t usually work so well.

So now let’s try a slightly harder example. In Part 13 we looked at the game ‘matching pennies’, which was a zero-sum game. Now we’ll look at a modified version. It will still be a zero-sum game. We’ll find a Nash equilibrium by finding each player’s ‘maximin strategy’. This technique works for zero-sum two-player normal-form games, but not most other games.

Why the funny word ‘maximin’?

In this sort of strategy, you assume that no matter what you do, the other player will do whatever minimizes your expected payoff. This is a sensible assumption in a zero-sum game! They are trying to maximize their expected payoff, but in a zero-sum game whatever they win, you lose… so they’ll minimize your expected payoff.

So, in a maximin strategy you try to maximize your expected payoff while assuming that given whatever strategy you use, your opponent will use a strategy that minimizes your expected payoff.

Think about that sentence: it’s tricky! But it explains the funny word ‘maximin’. In brief, you’re trying to maximize the minimum of your expected payoff.

We’ll make this more precise in a while… but let’s dive in and look at a new game!

Matching pennies—modified version

In this new game, each player has a penny and must secretly turn the penny to either heads or tails. They then reveal their choices simultaneously. If the pennies do not match—one heads and one tails—B wins 10 cents from A. If the pennies are both heads, player A wins 10 cents from player B. And if the pennies are both tails, player A wins 20 cents from player B.

Let’s write this game in normal form! If we say

• choice 1 is heads
• choice 2 is tails

then the normal form looks like this:

          1           2
 1     (10,-10)    (-10,10)
 2     (-10,10)    (20,-20)

We can also break this into two matrices in the usual way, one showing the payoff for A:

A = \left( \begin{array}{rr} 10 & -10 \\ -10 & 20 \end{array} \right)

and one showing the payoff for B:

B = \left( \begin{array}{rr} -10 & 10 \\ 10 & -20 \end{array} \right)

Before we begin our real work, here are some warmup puzzles:

Puzzle 1. Is this a zero-sum game?

Puzzle 2. Is this a symmetric game?

Puzzle 3. Does this game have a Nash equilibrium if we consider only pure strategies?

Puzzle 4. Does this game have a dominant strategy for either player A or player B?

A’s expected payoff

Remember that we write A’s mixed strategy as

p = (p_1, p_2)

Here p_1 is the probability that A makes choice 1: that is, chooses heads. p_2 is the probability that A makes choice 2: that is, chooses tails. Similarly, we write B’s mixed strategy as

q = (q_1, q_2)

Here q_1 is the probability that B makes choice 1, and q_2 is the probability that B makes choice 2.

Given all this, the expected value of A’s payoff is

p \cdot A q

We saw this back in Part 12.

Now let’s actually calculate the expected value of A’s payoff. It helps to remember that probabilities add up to 1, so

p_1 + p_2 = 1, \qquad q_1 + q_2 = 1

It’s good to have fewer variables to worry about! So, we’ll solve for p_2 and q_2:

p_2 = 1 - p_1, \qquad q_2 = 1 - q_1

and write

p = (p_1, 1 - p_1)
q = (q_1, 1 - q_1)

Then we can calculate A’s expected payoff. We start by computing A q:

\begin{array}{ccl} A q &=&   \left( \begin{array}{rr} 10 & -10 \\ -10 & 20 \end{array} \right) \left( \begin{array}{c} q_1 \\ 1 - q_1 \end{array} \right) \\  \\ &=& \left( \begin{array}{c} 10 q_1 - 10(1-q_1) \\ -10 q_1 + 20(1 - q_1) \end{array} \right) \\  \\ &=& \left( \begin{array}{c} 20 q_1 - 10  \\ 20 - 30 q_1  \end{array} \right) \end{array}

This gives

\begin{array}{ccl} p \cdot A q &=& (p_1 , 1 - p_1) \cdot (20 q_1 - 10  ,  20 - 30 q_1) \\  \\  &=& p_1(20 q_1 - 10) + (1-p_1)(20 - 30 q_1) \\ \\ &=& 20 - 30 p_1 - 30 q_1 + 50 p_1 q_1 \end{array}
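
If you want to double-check this algebra, here is a quick numerical test, a minimal sketch assuming the numpy library: the expanded formula should agree with p \cdot A q at random points.

```python
import numpy as np

A = np.array([[10.0, -10.0],
              [-10.0, 20.0]])
rng = np.random.default_rng(42)

for _ in range(5):
    p1, q1 = rng.random(2)
    p = np.array([p1, 1 - p1])
    q = np.array([q1, 1 - q1])
    formula = 20 - 30*p1 - 30*q1 + 50*p1*q1
    print(np.isclose(p @ A @ q, formula))   # -> True every time
```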

That formula is a bit complicated, so let’s graph A’s expected payoff in the extreme cases where B either makes choice 1 with probability 100%, or choice 2 with probability 100%.

If B makes choice 1 with probability 100%, then q_1 = 1. Put this in the formula above and we see A’s expected payoff is

p \cdot A q = 20 p_1 - 10

If we graph this as a function of p_1 we get a straight line:

On the other hand, if B makes choice 2 with probability 100%, then q_2 = 1 so q_1 = 0. Now A’s expected payoff is

p \cdot A q = 20 - 30 p_1

Graphing this, we get:

The point of these graphs becomes clearer if we draw them both together:

Some interesting things happen where the lines cross! We’ll get A’s maximin strategy by picking the value of p_1 where these lines cross! And in fact, there’s a Nash equilibrium where player A chooses this value of p_1, and B uses the same trick to choose his value of q_1.

But we can already see something simpler. We’ve drawn graphs of A’s expected payoff for two extreme cases of player B’s mixed strategy: q_1 = 1 and q_1 = 0. Suppose B uses some other mixed strategy, say q_1 = 2/5 for example. Now A’s expected payoff will be something between the two lines we’ve drawn. Let’s see what it is:

\begin{array}{ccl} p \cdot A q &=& 20 - 30 p_1 - 30 q_1 + 50 p_1 q_1 \\  \\  &=& 20 - 30 p_1 - 30 \cdot \frac{2}{5} + 50 p_1 \cdot \frac{2}{5} \\  \\  &=& 8 - 10p_1  \end{array}

If we graph this along with our other two lines, we get this:

We get a line between the other two, as we expect. But more importantly, all three lines cross at the same point!

That’s not a coincidence, that’s how it always works. If we draw lines for more different choices of B’s mixed strategy, they look like this:

The point where they all intersect looks important! It is. Soon we’ll see why.

A’s maximin strategy

Now we’ll find A’s maximin strategy. Let me explain the idea a bit more. The idea is that A is cautious, so he wants to choose p_1 that maximizes his expected payoff in the worst-case scenario. In other words, A wants to maximize his expected payoff assuming that B is trying to minimize A’s expected payoff.

Read that last sentence a few times, because it’s confusing at first.

Why would B try to minimize A’s expected payoff? B isn’t nasty: he’s just a rational agent trying to maximize his own expected payoff! But if you solved Puzzle 1, you know this game is a zero-sum game. So A’s payoff is minus B’s payoff. Thus, if B is trying to maximize his own expected payoff, he’s also minimizing A’s expected payoff.

Given this, let’s figure out what A should do. A should look at different mixed strategies B can use, and graph A’s payoff as a function of p_1. We did this:

Then, he should choose p_1 that maximizes his expected payoff in the worst-case scenario. To do this, he should focus attention on the lines that give him the smallest expected payoff:

This is the dark curve. It’s not very ‘curvy’, but mathematicians consider a broken line to be an example of a curve!

To find this dark curve, all that matters are extreme cases of B’s strategy: the case q_1 = 1 and the case q_1 = 0. So we can ignore the intermediate cases, and simplify our picture:

The dark curve is highest where the two lines cross! This happens when

20 p_1 - 10 = 20 - 30 p_1

or in other words

p_1 = 3/5

So, A’s maximin strategy is

p = (3/5, 2/5)

It’s the mixed strategy that maximizes A’s expected payoff given that B is trying to minimize A’s expected payoff.
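
If you'd rather let the computer do this arithmetic, here's the calculation in plain Python, just solving the linear equation where the two lines cross:

```python
# The two extreme-case lines: 20*p1 - 10 (when q1 = 1) and 20 - 30*p1
# (when q1 = 0). Setting them equal gives 50*p1 = 30.
p1 = 30 / 50
print(p1)             # -> 0.6, so A's maximin strategy is (3/5, 2/5)
print(20 * p1 - 10)   # -> 2.0, A's security level at the crossing point
```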

B’s expected payoff

Next let’s give the other guy a chance. Let’s work out B’s expected payoff and maximin strategy. The expected value of B’s payoff is

p \cdot B q

We could calculate this just like we calculated p \cdot A q , but it’s quicker to remember that this is a zero-sum game:

B = - A

so that

p \cdot B q = - p \cdot A q

and since we already saw that

p \cdot A q = 20 - 30 p_1 - 30 q_1 + 50 p_1 q_1

now we have

p \cdot B q = -20 + 30 p_1 + 30 q_1 - 50 p_1 q_1

B’s maximin strategy

To figure out B’s maximin strategy, we’ll copy what worked for player A. First we’ll work out B’s expected payoff in two extreme cases:

• The case where A always makes choice 1: p_1 = 1.

• The case where A always makes choice 2: p_1 = 0.

We’ll get two functions of q_1 whose graphs are lines. Then we’ll find where these lines intersect!

Let’s get to work. When p_1 = 1 we get

\begin{array}{ccl} p \cdot B q &=& -20 + 30 p_1 + 30 q_1 - 50 p_1 q_1  \\  \\ &=& -20 + 30 + 30 q_1 - 50 q_1 \\  \\  &=& 10 - 20 q_1 \end{array}

When p_1 = 0 we get

p \cdot B q = -20 + 30 q_1

I won’t graph these two lines, but they intersect when

10 - 20 q_1 = -20 + 30 q_1

or

q_1 = 3/5

Hmm, it’s sort of a coincidence that we’re getting the same number that we got for p_1; it won’t always work this way! But anyway, we see that B’s maximin strategy is

q = (3/5, 2/5)

A Nash equilibrium

Now let’s put the pieces together. What happens in a zero-sum game when player A and player B both choose a maximin strategy? It’s not too surprising: we get a Nash equilibrium!

Let’s see why in this example. (We can do a general proof later.) We have

p = (3/5, 2/5)

and

q = (3/5, 2/5)

To show that p and q form a Nash equilibrium, we need to check that neither player could improve their expected payoff by changing to a different mixed strategy. Remember:

Definition. Given a 2-player normal form game, a pair of mixed strategies (p,q), one for player A and one for player B, is a Nash equilibrium if:

1) For all mixed strategies p' for player A, p' \cdot A q \le p \cdot A q.

2) For all mixed strategies q' for player B, p \cdot B q' \le p \cdot B q.

I’ll check clause 1) and I’ll let you check clause 2), which is similar. For clause 1) we need to check

p' \cdot A q \le p \cdot A q

for all mixed strategies p'. Just like we did with p, we can write

p' = (p'_1, 1 - p'_1)

We can reuse one of our old calculations to see that

\begin{array}{ccl} p' \cdot A q &=& 20 - 30 p'_1 - 30 q_1 + 50 p'_1 q_1 \\   \\  &=& 20 - 30 p'_1 - 30 \cdot \frac{3}{5} + 50 p'_1 \cdot \frac{3}{5} \\   \\  &=& 2 \end{array}

Hmm, A’s expected payoff is always 2, no matter what mixed strategy he uses, as long as B uses his maximin strategy q! So of course we have

p' \cdot A q = p \cdot A q = 2

which implies that

p' \cdot A q \le p \cdot A q

If you don’t believe me, we can calculate p \cdot A q and see it equals p' \cdot A q:

\begin{array}{ccl} p \cdot A q &=& 20 - 30 p_1 - 30 q_1 + 50 p_1 q_1 \\   \\  &=& 20 - 30 \cdot \frac{3}{5} - 30 \cdot \frac{3}{5} + 50 \cdot \frac{3}{5} \cdot \frac{3}{5} \\   \\  &=& 2 \end{array}

Yup, it’s 2.
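
If you don't feel like doing the algebra, here's a brute-force check, a sketch assuming numpy: with q = (3/5, 2/5), every mixed strategy p' for player A gives expected payoff 2.

```python
import numpy as np

A = np.array([[10.0, -10.0],
              [-10.0, 20.0]])
q = np.array([3/5, 2/5])

for t in np.linspace(0, 1, 11):
    p_prime = np.array([t, 1 - t])
    print(p_prime @ A @ q)   # -> 2.0 every time (up to rounding)
```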

Puzzle 5. Finish showing that (p,q) is a Nash equilibrium by showing that for all mixed strategies q' for player B, p \cdot B q' \le p \cdot B q.

Puzzle 6. How high does this dark curve get?

You have the equations for the two lines, so you can figure this out. Explain the meaning of your answer!


Game Theory (Part 14)

18 February, 2013

Here is the first test in our game theory course. If you’re following along on this blog, you can do it and then look at the answers below.

Definitions

1. Define a Nash equilibrium for a 2-player normal form game.

2. Define the expected value of some function with respect to some probability distribution.

Proof

3. Suppose A and B are the payoff matrices for a 2-player normal form game. Prove that if (i,j) is a Nash equilibrium, there cannot exist a choice i' for player A that strictly dominates choice i.

2-Player normal form games

Consider this 2-player normal form game:

\begin{array}{rrr}  (-2,1) & (4,2) & (2,-5)  \\   (2,3) & (-6,2) & (5,3) \\  (1,0) & (2,-4) & (4,-3) \end{array}

4. Find all the Nash equilibria. Draw a box around each Nash equilibrium.

For problems 5-8 do not simplify your answers by working out the binomial coefficients, etc.

Probabilities

5. If you draw 3 cards from a well-shuffled standard deck, what is the probability that at least 2 are hearts?

6. If you flip 4 fair and independent coins, what is the probability that exactly 2 land heads up?

Expected values

7. Suppose you pay $2 to enter a lottery. Suppose you have a 1% chance of winning $100, and otherwise you win nothing. What is the expected value of your payoff, including your winnings but also the money you paid?

8. Suppose you draw two cards from a well-shuffled standard deck. Suppose you win $100 if you get two aces, $10 if you get one ace, and nothing if you get no aces. What is your expected payoff?

Extra credit

About how many ways are there to choose 3 atoms from all the atoms in the observable universe? Since this question is for extra credit, I’ll make it hard: I’ll only accept answers written in scientific notation, for example 2 \times 10^{50}.


And here are the answers to the first test.

Definitions

1. Given a 2-player normal form game where A’s payoff is A_{ij} and B’s payoff is B_{ij}, a pair of choices (i,j) is a Nash equilibrium if:

1) For all 1 \le i' \le m, A_{i'j} \le A_{ij}.

2) For all 1 \le j' \le n, B_{ij'} \le B_{ij}.

2. The expected value of a function f : X \to \mathbb{R} with respect to a probability distribution p on the finite set X is

\sum_{i \in X} f(i) p_i

Note that a good definition makes it clear what term is being defined, by writing it in boldface or underlining it. Also, it’s best if all variables used in the definition are explained: here they are f, X and p.

Proof

3. Theorem. Suppose A and B are the payoff matrices for a 2-player normal form game. If (i,j) is a Nash equilibrium, there cannot exist a choice i' for player A that strictly dominates choice i.

Proof. Suppose that (i,j) is a Nash equilibrium. Then

A_{ij} \ge A_{i' j}

for any choice i' for player A. On the other hand, if choice i' strictly dominates choice i, then

A_{i'j} > A_{i j}

This contradicts the previous inequality, so there cannot exist a choice i' for player A that strictly dominates choice i. █


Note that the really good way to write a proof involves:

• First writing “Theorem” and stating the theorem.
• Saying “Proof” at the start of the proof.
• Giving an argument that starts with the hypotheses and leads to the conclusion.
• Marking the end of the proof with “Q.E.D.” or “█” or something similar.

2-Player normal form games

4. In this 2-player normal form game, the three Nash equilibria are marked with boxes:

\begin{array}{rrr}  (-2,1) & \boxed{(4,2)} & (2,-5)  \\   \boxed{(2,3)} & (-6,2) & \boxed{(5,3)} \\  (1,0) & (2,-4) & (4,-3) \end{array}

Probabilities

5. If you draw 3 cards from a well-shuffled standard deck, what is the probability that at least 2 are hearts?

Answer. One correct answer is

\displaystyle{ \frac{\binom{13}{2} \binom{39}{1}}{\binom{52}{3}} +      \frac{\binom{13}{3} \binom{39}{0}}{\binom{52}{3}} }

since there are:

\binom{52}{3} ways to choose 3 cards, all equally likely,

\binom{13}{2} ways to choose 2 hearts and \binom{39}{1} ways to choose 1 non-heart, and

\binom{13}{3} ways to choose 3 hearts and \binom{39}{0} ways to choose 0 non-hearts.

Another correct answer is

\displaystyle{ \frac{\binom{13}{2} \cdot 39 }{\binom{52}{3}} +      \frac{\binom{13}{3}} {\binom{52}{3}}}

This equals the first answer, since \binom{39}{0} = 1 and \binom{39}{1} = 39.

6. If you flip 4 fair and independent coins, what is the probability that exactly 2 land heads up?

Answer. The probability is

\displaystyle{ \frac{\binom{4}{2}}{2^4} }

since there are 2^4 possible ways the coins can land, all equally likely, and \binom{4}{2} ways to choose 2 of the 4 coins to land heads up.
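
The test said not to simplify, but if you're curious about the numerical values, here's a sketch using Python's math.comb:

```python
from math import comb

# Problem 5: probability of at least 2 hearts in 3 cards
p5 = (comb(13, 2) * comb(39, 1) + comb(13, 3) * comb(39, 0)) / comb(52, 3)
print(p5)   # -> about 0.1506

# Problem 6: probability of exactly 2 heads in 4 fair coin flips
p6 = comb(4, 2) / 2**4
print(p6)   # -> 0.375
```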

Expected values

7. Suppose you pay $2 to enter a lottery. Suppose you have a 1% chance of winning $100, and otherwise you win nothing. What is the expected value of your payoff, including your winnings but also the money you paid?

Answer. One correct answer is

0.01 \cdot \$98 \quad + \quad 0.99 \cdot (-\$ 2)

Another correct answer is

0.01 \cdot \$100 \quad + \quad 0.99 \cdot \$0 \quad - \quad \$2

Of course these are equal.

8. Suppose you draw two cards from a well-shuffled standard deck. Suppose you win $100 if you get two aces, $10 if you get one ace, and nothing if you get no aces. What is your expected payoff?

Answer. One correct answer is

\displaystyle{   \frac{\binom{4}{2} \binom{48}{0}}{\binom{52}{2}} \cdot \$100 \quad + \quad \frac{\binom{4}{1} \binom{48}{1}}{\binom{52}{2}} \cdot \$10 \quad + \quad      \frac{\binom{4}{0} \binom{48}{2}}{\binom{52}{2}} \cdot \$ 0 }

since \binom{4}{n} \binom{48}{2-n} is the number of ways to pick n aces and 2-n non-aces. Of course we can also leave off the last term, which is zero:

\displaystyle{  \frac{\binom{4}{2} \binom{48}{0}}{\binom{52}{2}} \cdot \$100 \quad + \quad     \frac{\binom{4}{1} \binom{48}{1}}{\binom{52}{2}} \cdot \$10 }

Since \binom{48}{0} = 1 we can also write this as

\displaystyle{  \frac{\binom{4}{2}}{\binom{52}{2}} \cdot \$100  \quad + \quad     \frac{\binom{4}{1} \binom{48}{1}}{\binom{52}{2}} \cdot \$10 }

Or, since \binom{4}{1} = 4 and \binom{48}{1} = 48 we can write this as

\displaystyle{    \frac{\binom{4}{2}}{\binom{52}{2}} \cdot \$100 +      \frac{4 \cdot 48}{\binom{52}{2}} \cdot \$10 }

But I said not to bother simplifying the binomial coefficients.
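
Still, here are the answers to problems 7 and 8 evaluated numerically, again a sketch using math.comb:

```python
from math import comb

# Problem 7: the lottery with a $2 ticket
print(0.01 * 98 + 0.99 * (-2))   # -> about -1.0 dollars

# Problem 8: drawing 2 cards, $100 for two aces, $10 for one ace
total = comb(52, 2)
p8 = (comb(4, 2) / total) * 100 + (comb(4, 1) * comb(48, 1) / total) * 10
print(p8)   # -> about 1.90 dollars
```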

Extra credit

About how many ways are there to choose 3 atoms from all the atoms in the observable universe? Since this question is for extra credit, I’ll make it hard: I’ll only accept answers written in scientific notation, for example 2 \times 10^{50}.

Answer. In class I said the number of atoms in the observable universe is about 10^{80}, and I said I might put this on the test. So, the answer is

\begin{array}{ccl} \displaystyle{ \binom{10^{80}}{3}} &=& \displaystyle{ \frac{10^{80}(10^{80} - 1)(10^{80} - 2)}{3 \cdot 2  \cdot 1} } \\  \\ &\approx&  \displaystyle{ \frac{10^{80} \cdot 10^{80} \cdot 10^{80}}{6} } \\   \\ &=& \displaystyle{ \frac{10^{240}}{6} } \\   \\ &\approx & 1.667 \times 10^{239} \end{array}

Note that the question asked for an approximate answer, since we don’t know exactly how many atoms there are in the observable universe. The right answer to a question like this gives no more decimal places than we have in our data, so 1.667 \times 10^{239} is actually too precise! You only have one digit in the data I gave you, so a better answer is

\mathbf{ 2 \times 10^{239} }

Since the figure 10^{80} is very approximate, another correct answer is

\mathbf{ 10^{239} }
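
By the way, Python's integers have arbitrary precision, so math.comb can evaluate this enormous binomial coefficient exactly. A quick sketch confirming the estimate:

```python
import math

c = math.comb(10**80, 3)    # exact: 10^80 choose 3
s = str(c)
print(f"{s[0]}.{s[1:4]} x 10^{len(s) - 1}")   # -> 1.666 x 10^239
```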

Nash Equilibria

12 February, 2013

As you know if you’ve been reading this blog lately, I’m teaching game theory. I could use some help.

Is there a nice elementary proof of the existence of Nash equilibria for 2-person games?

Here’s the theorem I have in mind. Suppose A and B are m \times n matrices of real numbers. Say a mixed strategy for player A is a vector p \in \mathbb{R}^m with

\displaystyle{ p_i \ge 0 , \quad \sum_i p_i = 1 }

and a mixed strategy for player B is a vector q \in \mathbb{R}^n with

\displaystyle{ q_j \ge 0 , \quad \sum_j q_j = 1 }

A Nash equilibrium is a pair consisting of a mixed strategy p for A and a mixed strategy q for B such that:

1) For every mixed strategy p' for A, p' \cdot A q \le p \cdot A q.

2) For every mixed strategy q' for B, p \cdot B q' \le p \cdot B q.

(The idea is that p \cdot A q is the expected payoff to player A when A chooses mixed strategy p and B chooses q. Condition 1 says A can’t improve their payoff by unilaterally switching to some mixed strategy p'. Similarly, condition 2 says B can’t improve their expected payoff by unilaterally switching to some mixed strategy q'.)

Some history

The history behind my question is sort of amusing, so let me tell you about that—though I probably don’t know it all.

Nash won the Nobel Prize for a one-page proof of a more general theorem for n-person games, but his proof uses Kakutani’s fixed-point theorem, which seems like overkill, at least for the 2-person case. There is also a proof using Brouwer’s fixed-point theorem; see here for the n-person case and here for the 2-person case. But again, this seems like overkill.

Earlier, von Neumann had proved a result which implies the one I’m interested in, but only in the special case where B = -A: the so-called minimax theorem. Von Neumann wrote:

As far as I can see, there could be no theory of games … without that theorem … I thought there was nothing worth publishing until the Minimax Theorem was proved.

I believe von Neumann used Brouwer’s fixed point theorem, and I get the impression Kakutani proved his fixed point theorem in order to give a different proof of this result! Apparently when Nash explained his generalization to von Neumann, the latter said:

That’s trivial, you know. That’s just a fixed point theorem.

But you don’t need a fixed point theorem to prove von Neumann’s minimax theorem! There’s a more elementary proof in an appendix to Andrew Colman’s 1982 book Game Theory and its Applications in the Social and Biological Sciences. He writes:

In common with many people, I first encountered game theory in non-mathematical books, and I soon became intrigued by the minimax theorem but frustrated by the way the books tiptoed around it without proving it. It seems reasonable to suppose that I am not the only person who has encountered this problem, but I have not found any source to which mathematically unsophisticated readers can turn for a proper understanding of the theorem, so I have attempted in the pages that follow to provide a simple, self-contained proof with each step spelt out as clearly as possible both in symbols and words.

This proof is indeed very elementary. The deepest fact used is merely that a continuous function assumes a maximum on a compact set—and actually just a very special case of this. So, this is very nice.

Unfortunately, the proof is spelt out in such enormous elementary detail that I keep falling asleep halfway through! And worse, it only covers the case B = -A.

Is there a good reference for an elementary but terse proof of the existence of Nash equilibria for 2-person games? If I don’t find one, I’ll have to work through Colman’s proof and then generalize it. But I feel sure someone must have done this already.


Game Theory (Part 13)

11 February, 2013

Last time we introduced mixed strategies, and generalized the idea of Nash equilibrium to mixed strategies. Now let’s look at an example.

Matching pennies

In the game of matching pennies, each player has a penny and must secretly turn the penny to either heads or tails. They then reveal their choices simultaneously. If the pennies match—both heads or both tails—player A keeps both pennies, so he wins one from player B. If the pennies do not match—one heads and one tails—B keeps both pennies, so she wins one from A.

Let’s write this game in normal form! If the pennies match, A wins one penny and B loses one, so their payoffs are (1,-1). If the pennies don’t match, A loses one and B wins one, so their payoffs are (-1,1). If we say

• choice 1 is heads
• choice 2 is tails

then the normal form looks like this:

          1         2
 1     (1,-1)    (-1,1)
 2     (-1,1)    (1,-1)

We can also break this into two matrices in the usual way, one showing the payoff for A:

A = \left( \begin{array}{rr} 1 & -1 \\ -1 & 1 \end{array} \right)

and one showing the payoff for B:

B = \left( \begin{array}{rr} -1 & 1 \\ 1 & -1 \end{array} \right)

If you examine this game, it’s easy to see no pure strategy is a Nash equilibrium. If A chooses heads, B will want to choose tails. But if B chooses tails, A will want to choose tails. And if A chooses tails, B will want to choose heads. And if B chooses heads, A will want to choose heads! There’s always someone who would do better by changing their choice. It’s a lot like rock, paper, scissors.

Mixed strategies

However, there is a Nash equilibrium if we allow mixed strategies, where the players can randomly make their choice according to a certain probability distribution.

Let’s see if we can guess where this Nash equilibrium is. If you were player A, can you guess what you should do? Should you choose heads 90% of the time? 80% of the time? 70% of the time?

A student says: “50% of the time.”

Yes, that seems right. After all, if player A chooses heads most of the time, player B can win most of the time by always choosing tails! And if player A chooses tails most of the time, B can win most of the time by always choosing heads! The only way out is for player A to choose heads with probability 1/2.

But what about player B? What should player B do?

A student says: “She should also choose heads 50% of the time.”

Yes, that makes sense too—for the same sort of reason.

So, let’s see if these mixed strategies give a Nash equilibrium. Remember that we write

p = (p_1, p_2)

for A’s mixed strategy. p_1 is the probability that A makes choice 1: that is, chooses heads. p_2 is the probability that A makes choice 2: that is, chooses tails. So, what p are we guessing here?

The students say:

p = (1/2, 1/2)

Okay, that’s A’s mixed strategy. And what about B’s mixed strategy?

The students say:

q = (1/2, 1/2)

Okay, now lets see if these are really a Nash equilibrium! Remember the definition from last time:

Definition. Given a 2-player normal form game, a pair of mixed strategies (p,q), one for player A and one for player B, is a Nash equilibrium if:

1) For all mixed strategies p' for player A, p' \cdot A q \le p \cdot A q.

2) For all mixed strategies q' for player B, p \cdot B  q' \le p \cdot B q.

We want to check this for our p and q. This looks hard, because we have to check part 1) for all p' and part 2) for all q'. But sometimes it’s easy, and I’ve picked a game where it’s easy.

Why is it easy? Well, first consider part 1). For this we need to show

p' \cdot A q \le p \cdot A q

To show this, let’s start by computing A q:

A q =   \left( \begin{array}{rr} 1 & -1 \\ -1 & 1 \end{array} \right) \left( \begin{array}{r} 1/2 \\ 1/2 \end{array} \right) = \left( \begin{array}{r} 0 \\ 0 \end{array} \right)

Hey! It’s the zero vector, a list of all zeros! We write this simply as 0. So, we’re trying to show

p' \cdot 0 \le p \cdot 0

but of course this is true, because both sides equal the number zero!

What’s going on here? The point is that if B uses mixed strategy q, A’s expected payoff is zero no matter what mixed strategy A uses! If A uses any mixed strategy p', the expected value of his payoff is always p' \cdot A q = 0, since Aq = 0.

Now let’s check part 2). This should be similar. Now we need to show that for any mixed strategy q',

p \cdot B q' \le p \cdot B q

Hey, this is actually a bit different! Before we could use Aq = 0 to get the job done. But now we see B q' showing up, and q' is any mixed strategy, so we can’t calculate this! What do we do?

We need to use the power of math!

We need to use a fact about matrices and dot products! For any vectors v \in \mathbb{R}^m and w \in \mathbb{R}^n and any m \times n matrix A, we have

v \cdot A w = A^T v \cdot w

Here A^T is the transpose of the matrix A; that is, the matrix we get by switching rows and columns. Here’s the formula for the transpose:

A^T_{ji} = A_{ij}

Someday when I’m in a bad mood I’ll make you prove v \cdot A w = A^T v \cdot w in your homework.

Anyway, applying this equation to the matrix B, we can take part 2):

p \cdot B q' \le p \cdot B q

and rewrite it like this:

B^T p \cdot q' \le B^T p \cdot q

Notice that B^T p shows up on both sides now. So, we should start by computing this! We have

B =  \left( \begin{array}{rr} -1 & 1 \\ 1 & -1 \end{array} \right)

so taking the transpose doesn’t actually do anything in this particular case:

B^T=  \left( \begin{array}{rr} -1 & 1 \\ 1 & -1 \end{array} \right)

and we get:

B^T p =   \left( \begin{array}{rr} -1 & 1 \\ 1 & -1 \end{array} \right) \left( \begin{array}{r} 1/2 \\ 1/2 \end{array} \right) = \left( \begin{array}{r} 0 \\ 0 \end{array} \right)

Hey! It’s the zero vector again! You should have guessed that was coming. So, the thing we’re trying to show:

B^T p \cdot q' \le B^T p \cdot q

simply boils down to

0 \cdot q' \le 0 \cdot q

which is true because both sides are zero.

Yay, we’re done! We found a Nash equilibrium! And the point of our last little calculation is that if A uses mixed strategy p, B’s expected payoff is zero no matter what mixed strategy B uses. This is just the mirror image of what we saw before: if B uses mixed strategy q, A’s expected payoff is zero no matter what mixed strategy A uses.

So, when both A and B are choosing heads or tails with 50% probabilities, independently, either one can unilaterally change their mixed strategy without improving their expected payoff. That’s a Nash equilibrium for you!
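
If you want to see all of this at once, here's the whole check in a few lines, a sketch assuming numpy:

```python
import numpy as np

A = np.array([[1.0, -1.0],
              [-1.0, 1.0]])   # A's payoffs in matching pennies
B = -A
p = q = np.array([0.5, 0.5])

print(A @ q)      # -> [0. 0.]  so p'.(Aq) = 0 for every mixed p'
print(B.T @ p)    # -> [0. 0.]  so (B^T p).q' = 0 for every mixed q'
print(p @ A @ q)  # -> 0.0, the expected payoff at the equilibrium
```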

The coin-matching game

Just for fun, let me warn you about a scam that involves matching pennies. I’ll quote this article:

Coin-matching game, Wikipedia.

A coin-matching game (also a coin smack or smack game) is a confidence trick in which two con artists set up one victim.

The first con artist strikes up a conversation with the victim, usually while waiting somewhere. The con artist suggests matching pennies (or other coins) to pass the time. The second con artist arrives and joins in, but soon leaves for a moment. The first con artist then suggests cheating. The victim, thinking they are going to scam the second con artist, agrees to match coins each time.

When the second con artist returns and begins losing, he accuses the two of cheating and threatens to call the police. The first con artist offers a sizable sum of hush money, and the victim pitches in. After the victim leaves, the two con artists split up the money extorted from the victim.

So, watch out if someone suggests playing ‘matching pennies’ with you! It could be legit, but if a second person joins in and starts placing bets on the two of you, it’s probably a coin smack!

For more con games, see:

The Encyclopedia of Scams.

I’m not trying to teach you how to con people! I’m trying to help you defend yourself. Of course the best way to avoid scams is a generally cautious, skeptical attitude, especially in any situation where someone seems to be helping you get money or get around the law. And don’t forget: successful con artists don’t usually look suspicious, like this:

They look and act trustworthy. As the saying goes:

The most important thing is sincerity. If you can fake that, you’ve got it made.


Game Theory (Part 12)

11 February, 2013

Suppose we have a 2-player normal form game. As usual, we assume:

• Player A has some set of choices i = 1, \dots, m.

• Player B has some set of choices j = 1, \dots, n.

• If player A makes choice i and player B makes choice j, the payoff to player A is A_{ij} and the payoff to player B is B_{ij}.

Earlier we studied ‘pure strategies’, where the players make the same choice each time. Now we’ll study ‘mixed strategies’, where the players make their choices randomly. I want to show you that there’s always a Nash equilibrium when we allow mixed strategies—even in games like rock, paper, scissors that don’t have a Nash equilibrium with pure strategies!

But to do this, we need to define Nash equilibria for mixed strategies. And before that, we need to define mixed strategies!

First let’s make up a name for the set of player A’s choices:

X = \{ 1, \dots, m \}

and a name for the set of player B’s choices:

Y = \{1, \dots, n \}

Definition 1. A mixed strategy for player A is a probability distribution p on the set of their choices, X. A mixed strategy for player B is a probability distribution q on the set of their choices, Y.

Let’s recall exactly what this means, since you’ll need to know! Player A has a probability p_i of making any choice i = 1, \dots, m, and these probabilities obey

0 \le p_i \le 1

and

\displaystyle{ \sum_{i \in X} p_i = 1 }

Similarly, the probability that player B makes the choice j = 1, \dots, n is q_j, and these probabilities obey

0 \le q_j \le 1

and

\displaystyle{ \sum_{j \in Y} q_j = 1 }

In our earlier discussions of probability, we would call X and Y sets of events. An event is anything that can happen. But now the thing that can happen is that a player makes a certain choice in the game! So, now we’re calling X and Y sets of choices. But you can also think of these choices as events.

The expected payoff

Now let’s work out the expected value of the payoff to each player. To do this, we’ll assume:

1) Player A uses mixed strategy p.
2) Player B uses mixed strategy q.
3) Player A and player B’s choices are independent.

If you forget what ‘independent’ means, look at Part 8. The basic idea is player A’s choice doesn’t affect player B’s choice, and vice versa. After all, this is a ‘simultaneous game’, where each player makes their choice not knowing what the other has done.

But mathematically, the point is that we must assume the players’ choices are independent to know the probability of player A making choice i and player B making choice j is the product

p_i \, q_j

Knowing this, we can work out the expected value of the payoff to player A. Here it is:

\displaystyle{ \sum_{i \in X, j \in Y} A_{i j} \, p_i \, q_j }

I hope you see why. The probability that player A makes choice i and player B makes choice j is p_i q_j. The payoff to player A when this happens is A_{ij}. We multiply these and sum over all the possible choices for both players. That’s how expected values work!

Similarly, the expected value of the payoff for player B is

\displaystyle{ \sum_{i \in X, j \in Y} B_{i j} \, p_i \, q_j }

More details

If you’re in the mood, I can make this more formal. Remember that X \times Y is the set of all ordered pairs (i,j) where i \in X and j \in Y. A pair (i,j) is an event where player A makes choice i and player B makes choice j.

A’s payoff is a function on this set X \times Y. Namely, if player A makes choice i and player B makes choice j, A’s payoff is A_{ij}. There’s also a probability distribution on X \times Y. Namely, the probability of the event (i,j) is p_i \, q_j. So, the expected value of the payoff with respect to this probability distribution is

\displaystyle{ \sum_{(i,j) \in X \times Y} A_{i j} \, p_i \, q_j }

But this is equal to what we’ve seen already:

\displaystyle{ \sum_{i \in X, j \in Y} A_{i j} \, p_i \, q_j }

Matrix multiplication and the dot product

It looks like all of the students in this class have studied some linear algebra. So, I’ll assume you know how to:

take the dot product of vectors to get a number,

and

multiply a vector by a matrix to get a new vector.

Click on the links if you want to review these concepts. They will let us write our formulas for expected payoffs much more efficiently!

Here’s how. First, we think of the probability distribution p as a vector in \mathbb{R}^m; that is, a list of m numbers:

p = (p_1, \dots, p_m)

Second, we think of the probability distribution q as a vector in \mathbb{R}^n:

q = (q_1, \dots, q_n)

Third, we think of A and B as m \times n matrices, since that’s what they are:

A = \left( \begin{array}{cccc} A_{11} & A_{12} & \cdots & A_{1n} \\ A_{21} & A_{22} & \cdots & A_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ A_{m1} & A_{m2} & \cdots & A_{mn} \end{array} \right)

 

B = \left( \begin{array}{cccc} B_{11} & B_{12} & \cdots & B_{1n} \\ B_{21} & B_{22} & \cdots & B_{2n} \\ \vdots & \vdots & \ddots & \vdots \\ B_{m1} & B_{m2} & \cdots & B_{mn} \end{array} \right)

Here’s the cool part:

Theorem. If A has mixed strategy p and B has mixed strategy q, then the expected value of A’s payoff is

p \cdot A q

and the expected value of B’s payoff is

p \cdot B q

Proof. We’ll only prove the first one, since the second works just the same way. By definition, A q is a vector in \mathbb{R}^m with components

\displaystyle{ (Aq)_i = \sum_{j = 1}^n A_{ij} q_j }

Also by definition, the dot product of p with Aq is the number

\begin{array}{ccl} p \cdot Aq &=& \displaystyle{ \sum_{i = 1}^m p_i \, (Aq)_i} \\ \\ &=& \displaystyle{ \sum_{i = 1}^m \sum_{j = 1}^n p_i \, A_{ij} \, q_j } \end{array}

But this agrees with our earlier formula for the expected value of A’s payoff, namely

\displaystyle{ \sum_{i \in X, j \in Y} A_{i j} \, p_i \, q_j  = \sum_{i = 1}^m \sum_{j = 1}^n A_{i j} \, p_i \, q_j }

So, we’re done! █
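
Here's a quick numerical confirmation of this theorem, a sketch assuming numpy, using a random game and random mixed strategies:

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 3, 4                          # player A has 3 choices, B has 4
A = rng.integers(-5, 6, size=(m, n)).astype(float)

p = rng.random(m); p /= p.sum()      # a random mixed strategy for player A
q = rng.random(n); q /= q.sum()      # a random mixed strategy for player B

double_sum = sum(A[i, j] * p[i] * q[j] for i in range(m) for j in range(n))
print(np.isclose(p @ A @ q, double_sum))   # -> True
```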

It’s not just quicker to write

p \cdot A q

than

\displaystyle{ \sum_{i \in X, j \in Y} A_{i j} \, p_i \, q_j }

It will also let us use tools from linear algebra to study games!

Nash equilibria

We’ll have to look at examples to understand this stuff better, but let me charge ahead and define ‘Nash equilibria’ for mixed strategies. The idea is similar to the idea we’ve already seen. A pair of mixed strategies, one for A and one for B, is a Nash equilibrium if neither player can improve the expected value of their payoff by unilaterally changing their mixed strategy.

Let’s make that precise. As before, the definition of Nash equilibrium involves two conditions:

Definition. Given a 2-player normal form game, a pair of mixed strategies (p,q), one for player A and one for player B, is a Nash equilibrium if:

1) For all mixed strategies p' for player A, p' \cdot A q \le p \cdot A q.

2) For all mixed strategies q' for player B, p \cdot B q' \le p \cdot B q.

Condition 1) says that player A can’t improve their payoff by switching their mixed strategy from p to any other mixed strategy p'. Condition 2) says that player B can’t improve their payoff by switching their mixed strategy from q to any other mixed strategy q'.

Now, one of our goals is to prove this big wonderful theorem:

Theorem. For any 2-player normal form game, there exists a pair of mixed strategies (p,q), one for player A and one for player B, that is a Nash equilibrium.

For zero-sum games this was proved by John von Neumann. Later John Nash proved a version for nonzero-sum games, and even games with more than two players. These are famously smart mathematicians, so you should not expect the proof to be extremely easy. We will need to work!

But next time we’ll start by looking at examples, to get our feet on the ground and start getting some intuition for these new ideas.


Game Theory (Part 11)

6 February, 2013

Here’s a game. I flip a fair coin. If it lands heads up, I give you $1. If it lands tails up, I give you nothing.

How much should you pay to play this game?

This is not a mathematics question, because it asks what you “should” do. This could depend on many things that aren’t stated in the question.

Nonetheless, mathematicians have a way they like to answer this question. They do it by computing the so-called ‘expected value’ of your payoff. With probability 1/2 you get 1 dollar; with probability 1/2 you get 0 dollars. So, the expected value is defined to be

\displaystyle{ \frac{1}{2} \times 1 + \frac{1}{2} \times 0 = \frac{1}{2} }

Don’t be fooled by the word ‘expected’: mathematicians use words in funny ways. I’m not saying you should expect to get 1/2 a dollar each time you play this game: obviously you don’t! It means that you get 1/2 a dollar ‘on average’. More precisely: if you play the game lots of times, say a million times, there’s a high probability that you’ll get fairly close to 1/2 a million dollars. (We could make this more precise and prove it, but that would be quite a digression right now.)

So, if you have lots of money and lots of time, you could pay anything less than 1/2 a dollar to play this game, over and over, and still make money on average. If you pay exactly 1/2 a dollar you won't make money on average, but you won't lose it either—on average.

Expected values

Let’s make the idea precise:

Definition. Suppose X is a finite set and p is a probability distribution on that set. Suppose

f: X \to \mathbb{R}

is a function from X to \mathbb{R}. Then the expected value of f with respect to p is defined to be

\displaystyle{ \langle f \rangle = \sum_{i \in X} p_i f(i) }

The idea here is that we are averaging the different values f(i) of the function f, but we count f(i) more when the probability of the event i is bigger. We pronounce \langle f \rangle like this: “the expected value of f”.
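
If you want to play with this definition on a computer, here's a minimal sketch in Python; the function and variable names are just for illustration:

    def expected_value(p, f):
        """Expected value of f with respect to the probability
        distribution p, both given as dictionaries keyed by outcomes."""
        return sum(p[i] * f[i] for i in p)

    # The coin game from the start of this post:
    p = {'heads': 0.5, 'tails': 0.5}
    f = {'heads': 1.0, 'tails': 0.0}
    print(expected_value(p, f))  # 0.5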

Examples

Example 1. Suppose you enter a lottery where you have a 0.01% chance of winning $100 and a 99.99% chance of winning nothing. What is the expected value of your payoff?

With probability 0.0001 you win $100. With probability 0.9999 you win zero dollars. So, your expected payoff is

\displaystyle{ 0.0001 \times 100 + 0.9999 \times 0 = 0.01 }

dollars. So: if you play this game over and over, you expect that on average you will win a penny per game.

But usually you have to pay to enter a lottery! This changes everything. Let’s see how:

Example 2. Suppose you pay $5 to enter a lottery. Suppose you have a 0.01% chance of winning $100 and a 99.99% chance of winning nothing. What is the expected value of your payoff, including your winnings but also the money you paid?

With probability 0.0001 you win $100, but pay $5, so your payoff is $95 in this case. With probability 0.9999 you win nothing, but pay $5, so your payoff is -$5 in this case. So, your expected payoff is

\displaystyle{ 0.0001 \times 95 - 0.9999 \times 5 = - 4.99 }

dollars. In simple terms: if we play this game over and over, we expect that on average we will lose $4.99 per play.

Example 3. Suppose you pay $5 to play a game where you flip a coin 5 times. Suppose the coin is fair and the flips are independent. If the coin lands heads up every time, you win $100. Otherwise you win nothing. What is the expected value of your payoff, including your winnings but also the money you paid?

Since the coin flips are fair and independent, the probability that it lands heads up every time is

\displaystyle{ \frac{1}{2^5} = \frac{1}{32} }

So, when we count the $5 you pay to play, with probability 1/32 your payoff is $95, and with probability (1 - 1/32) = 31/32 your payoff is -$5. The expected value of your payoff is thus

\displaystyle{ \frac{1}{32} \times 95 - \frac{31}{32} \times 5 = -1.875 }

dollars.
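
All three of these expected values are one-liners to check in Python:

    print(0.0001 * 100 + 0.9999 * 0)  # Example 1: about  0.01
    print(0.0001 * 95 - 0.9999 * 5)   # Example 2: about -4.99
    print((1/32) * 95 - (31/32) * 5)  # Example 3: -1.875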

Risk aversion and risk tolerance

Soon we’ll start talking about games where players used ‘mixed strategies’, meaning that they randomly make their choices according to some probability distribution. To keep the math simple, we will assume our ‘rational agents’ want to maximize the expected value of their payoff.

But it’s important to remember that life is not really so simple, especially if payoffs are measured in dollars. Rational agents may have good reasons to do something else!

For example, suppose some evil fiend says he’s kidnapped my wife and he’ll kill her unless I give him a dollar. Suppose I only have 99 cents. But suppose he offers me a chance to play this game: I flip a fair coin, and if it lands heads up, I get $1. If it lands tails up, I get nothing.

How much would I pay to play this game?

Assuming I had no way to call the police, etcetera, I would pay all my 99 cents to play this game. After all, if I don’t play it, my wife will die! But if I do play it, I would at least have a 50% chance of saving her.

The point here is that my happiness, or utility, is not proportional to my amount of money. If I have less than $1, I’m really miserable. If I have $1 or more, I’m much better off.

There are many other reasons why people might be willing to pay more or less to play a game than the expected value of its monetary payoff. Some people are risk tolerant: they are willing to accept higher risks to get a chance at higher payoffs. Others are risk averse: they would prefer a high probability of getting a payoff even if it’s not so big. See:

Risk aversion, Wikipedia.

In class I asked all the students: would you like to play the following game? I’ll flip a fair coin. Then I’ll double your quiz score for today if it comes up heads, but give you a zero for your quiz score if it comes up tails.

Suppose your quiz score is Q. If you get heads, I’ll give you Q more points. If you get tails, I’ll take away Q points. So the expected value of the payoff for this game, measured in points, is

\displaystyle{ \frac{1}{2} \times Q - \frac{1}{2} \times Q = 0 }

So, if the expected value is what matters to you, you’ll be right on the brink of wanting to play this game: it doesn’t help you, and it doesn’t hurt you.

But in reality, different people will make different decisions. I polled the students, using our electronic clicker system, and 46% said they wanted to play this game. 54% said they did not.

Then I changed the game. I said that I would roll a fair 6-sided die. If a 6 came up, I would multiply their quiz score by 6. Otherwise I would set their quiz score to zero.

If your quiz score is Q, your payoff if you win will be 5Q, since multiplying your score by 6 means you gain 6Q - Q = 5Q points. If you lose, your payoff will be -Q. So, the expected value of your payoff is still zero:

\displaystyle{ \frac{1}{6} \times 5Q - \frac{5}{6} \times Q = 0 }

But now the stakes are higher, in a certain sense. You can win more, but it’s less likely.
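
In what sense are the stakes ‘higher’? One standard way to quantify this, which we haven’t officially defined in this course, is the variance of the payoff: both games have expected payoff zero, but the die game’s payoff is five times more spread out. Here’s a quick check in Python, with a made-up quiz score:

    def variance(outcomes):
        """Variance of a payoff, where outcomes is a list of
        (probability, payoff) pairs."""
        mean = sum(p * x for p, x in outcomes)
        return sum(p * (x - mean) ** 2 for p, x in outcomes)

    Q = 10  # a made-up quiz score
    coin_game = [(1/2, Q), (1/2, -Q)]      # double or nothing
    die_game  = [(1/6, 5 * Q), (5/6, -Q)]  # sextuple or nothing
    print(variance(coin_game))  # about 100, i.e. Q**2
    print(variance(die_game))   # about 500, i.e. 5 * Q**2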

Only 30% of students wanted to play this new game, while 70% said they would not.

I got the students who wanted to play to hand in slips of paper with their names on them. I put them in a hat and had a student randomly choose one. The winner got to play this game. He rolled a 1. So, his quiz score for the day went to zero.

Ouch!

[Photo: a famous beggar in San Francisco]


Game Theory (Part 10)

5 February, 2013

Last time we solved some probability puzzles involving coin flips. This time we’ll look at puzzles involving cards.

Permutations

Example 1. How many ways are there to order 3 cards: a jack (J), a queen (Q), and a king (K)?

By ‘order them’ I mean: put one on top, then one in the middle, then one on the bottom. There are three choices for the first card: it can be J, Q, or K. That leaves two choices for what the second card can be, and just one for the third. So, there are

3 \times 2 \times 1 = 6

ways to order the cards.

Example 2. How many ways are there to order all 52 cards in an ordinary deck?

By the same reasoning, the answer is

52 \times 51 \times 50 \times \cdots \times 2 \times 1

This is a huge number. We call it 52 factorial, or 52! for short. I guess the exclamation mark emphasizes how huge this number is. In fact

52! \approx 8.06 \times 10^{67}

This is smaller than the number of atoms in the observable universe, which is about 10^{80}. But it’s much bigger than the number of galaxies in the observable universe, which is about 10^{11}, or even the number of stars in the observable universe, which is roughly 10^{22}. It’s impressive that we can hold such a big number in our hand… in the form of possible ways to order a deck of cards!
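
If you want to see this number for yourself, Python computes it instantly:

    import math

    n = math.factorial(52)
    print(n)           # all 68 digits
    print(f"{n:.3e}")  # 8.066e+67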

A well-shuffled deck

Definition 1. We say a deck is well-shuffled if each of the possible ways of ordering the cards in the deck has the same probability.

Example 3. If a deck of cards is well-shuffled, what’s the probability that it’s in some particular order, say the order of a brand-new deck?

Since all orders have the same probability, and there are 52! of them, the probability that they’re in any particular order is

\displaystyle{ \frac{1}{52!} }

So, the answer is

\displaystyle{ \frac{1}{52!} \approx 1.24 \times 10^{-68} }

A hand from a well-shuffled deck

Suppose you take the top k cards from a well-shuffled deck of n cards. You’ll get a subset of cards—though card players call this a hand of cards instead of a subset. And, there are n choose k possible hands you could get! Remember from last time:

Definition 2. The binomial coefficient

\displaystyle{ \binom{n}{k} = \frac{n(n-1)(n-2) \cdots (n-k+1)}{k(k-1)(k-2) \cdots 1}}

called n choose k, is the number of ways of choosing a subset of k things from a set of n things.

I guess card-players call a set a ‘deck’, and a subset a ‘hand’. But now we can write a cool new formula for n choose k. Just multiply the top and bottom of that big fraction by

\displaystyle{  (n-k)(n-k-1) \cdots 1}

We get

\begin{array}{ccl} \displaystyle{ \binom{n}{k}} &=& \displaystyle{  \frac{n(n-1)(n-2) \cdots 1}{(k(k-1)(k-2) \cdots 1)((n-k)(n-k-1) \cdots 1)} } \\ &=& \displaystyle{ \frac{n!}{k! (n-k)!} } \end{array}

I won’t do it here, but here’s something you can prove using stuff I’ve told you. Suppose you have a well-shuffled deck of n cards and you draw a hand of k cards. Then each of these hands is equally probable!

Using this we can solve lots of puzzles.

Example 4. If you draw a hand of 5 cards from a well-shuffled standard deck, what’s the probability that you get the 10, jack, queen, king and ace of spades?

Since I’m claiming that all hands are equally probable, we just need to count the number of hands, and take the reciprocal of that.

There are

\displaystyle{ \binom{52}{5} = \frac{52 \times 51 \times 50 \times 49 \times 48}{5 \times 4 \times 3 \times 2 \times 1} }

5-card hands drawn from a 52-card deck. So, the probability of getting any particular hand is

\displaystyle{  \frac{1}{\binom{52}{5}} = \frac{5 \times 4 \times 3 \times 2 \times 1}{52 \times 51 \times 50 \times 49 \times 48} }

We can simplify this a bit since 50 is 5 × 10 and 48 is twice 4 × 3 × 2 × 1. So, the probability is

\displaystyle{  \frac{1}{52 \times 51 \times 10 \times 49 \times 2} = \frac{1}{2598960} \approx 3.85 \times 10^{-7}}
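
This is easy to double-check in Python:

    import math

    print(math.comb(52, 5))      # 2598960
    print(1 / math.comb(52, 5))  # about 3.85e-07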

A royal flush

The hand we just saw:

{10♠, J♠, Q♠, K♠, A♠}

is an example of a ‘royal flush’… the best kind of hand in poker!

Definition 3. A straight is a hand of five cards that can be arranged in a consecutive sequence, for example:

{7♥, 8♣, 9♠, 10♠, J♦}

Definition 4. A straight flush is a straight whose cards are all of the same suit, for example:

{7♣, 8♣, 9♣, 10♣, J♣}

Definition 5. A royal flush is a straight flush where the cards go from 10 to ace, for example:

{10♠, J♠, Q♠, K♠, A♠}

Example 5. If you draw a 5-card hand from a standard deck, what is the probability that it is a royal flush?

We have seen that each 5-card hand has probability

\displaystyle{ \frac{1}{\binom{52}{5}} = \frac{1}{2598960} }

There are just 4 royal flushes, one for each suit. So, the probability of getting a royal flush is

\displaystyle{ \frac{4}{\binom{52}{5}} = \frac{1}{649740} \approx 0.000154\%}
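
Here’s a quick check in Python, both exactly and by simulation; the card encoding below is just one arbitrary choice:

    import math, random

    # Exact: 4 royal flushes among all 5-card hands.
    print(4 / math.comb(52, 5))  # about 1.54e-06, i.e. 0.000154%

    # Monte Carlo sanity check. Royal flushes are rare, so this needs
    # many trials (about 15 hits expected in 10 million draws).
    deck = [(rank, suit) for rank in range(13) for suit in range(4)]
    ROYAL_RANKS = {8, 9, 10, 11, 12}  # 10, J, Q, K, A if rank 0 means '2'

    def is_royal_flush(hand):
        return ({r for r, s in hand} == ROYAL_RANKS
                and len({s for r, s in hand}) == 1)

    trials = 10_000_000
    hits = sum(is_royal_flush(random.sample(deck, 5)) for _ in range(trials))
    print(hits / trials)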

Puzzles

Suppose you have a well-shuffled standard deck of 52 cards, and you draw a hand of 5 cards.

Puzzle 1. What is the probability that the hand is a straight flush?

Puzzle 2. What is the probability that the hand is a straight flush but not a royal flush?

Puzzle 3. What is the probability that the hand is a straight?

Puzzle 4. What is the probability that the hand is a straight but not a straight flush?