As you know if you’ve been reading this blog lately, I’m teaching game theory. I could use some help.

Is there a nice elementary proof of the existence of Nash equilibria for 2-person games?

Here’s the theorem I have in mind. Suppose $A$ and $B$ are $m \times n$ matrices of real numbers. Say a **mixed strategy for player A** is a vector $p \in \mathbb{R}^m$ with

$$p_i \ge 0, \qquad \sum_{i=1}^m p_i = 1$$

and a **mixed strategy for player B** is a vector $q \in \mathbb{R}^n$ with

$$q_j \ge 0, \qquad \sum_{j=1}^n q_j = 1.$$

A **Nash equilibrium** is a pair $(p,q)$ consisting of a mixed strategy for A and a mixed strategy for B such that:

1) For every mixed strategy $p'$ for A, $p' \cdot A q \le p \cdot A q.$

2) For every mixed strategy $q'$ for B, $p \cdot B q' \le p \cdot B q.$

(The idea is that $p \cdot A q$ is the expected payoff to player A when A chooses mixed strategy $p$ and B chooses $q.$ Condition 1 says A can’t improve their expected payoff by unilaterally switching to some mixed strategy $p'.$ Similarly, condition 2 says B can’t improve *their* expected payoff by unilaterally switching to some mixed strategy $q'.$)
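The two conditions are easy to check numerically: it suffices to test pure-strategy deviations, since any mixed strategy’s payoff is a convex combination of pure-strategy payoffs. A minimal sketch (the function names are mine, not standard):

```python
import numpy as np

def is_nash(A, B, p, q, tol=1e-9):
    """Check conditions 1) and 2) for the pair of mixed strategies (p, q).

    Testing pure-strategy deviations suffices: a mixed strategy's payoff
    is a convex combination of the pure-strategy payoffs.
    """
    payoff_A = p @ A @ q
    payoff_B = p @ B @ q
    # Condition 1: p'·Aq ≤ p·Aq for every mixed p' — test the unit vectors.
    if (A @ q).max() > payoff_A + tol:
        return False
    # Condition 2: p·Bq' ≤ p·Bq for every mixed q'.
    if (p @ B).max() > payoff_B + tol:
        return False
    return True

# Matching pennies: the unique equilibrium has both players mixing 50/50.
A = np.array([[1.0, -1.0], [-1.0, 1.0]])
B = -A  # zero-sum
p = q = np.array([0.5, 0.5])
print(is_nash(A, B, p, q))  # True
```

A pure strategy like $p' = (1, 0)$ fails the check, since the other player could then exploit it.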

### Some history

The history behind my question is sort of amusing, so let me tell you about that—though I probably don’t know it all.

Nash won the Nobel Prize for a one-page proof of a more general theorem for *n*-person games, but his proof uses Kakutani’s fixed-point theorem, which seems like overkill, at least for the 2-person case. There is also a proof using Brouwer’s fixed-point theorem; see here for the *n*-person case and here for the 2-person case. But again, this seems like overkill.

Earlier, von Neumann had proved a result which implies the one I’m interested in, but only in the special case where $B = -A$: a so-called zero-sum game. This result is the **minimax theorem**. Von Neumann wrote:

> As far as I can see, there could be no theory of games … without that theorem … I thought there was nothing worth publishing until the Minimax Theorem was proved.

I believe von Neumann used Brouwer’s fixed point theorem, and I get the impression Kakutani proved *his* fixed point theorem in order to give a different proof of this result! Apparently when Nash explained his generalization to von Neumann, the latter said:

> That’s trivial, you know. That’s just a fixed point theorem.

But you don’t need a fixed point theorem to prove von Neumann’s minimax theorem! There’s a more elementary proof in an appendix to Andrew Colman’s 1982 book *Game Theory and its Applications in the Social and Biological Sciences*. He writes:

> In common with many people, I first encountered game theory in non-mathematical books, and I soon became intrigued by the minimax theorem but frustrated by the way the books tiptoed around it without proving it. It seems reasonable to suppose that I am not the only person who has encountered this problem, but I have not found any source to which mathematically unsophisticated readers can turn for a proper understanding of the theorem, so I have attempted in the pages that follow to provide a simple, self-contained proof with each step spelt out as clearly as possible both in symbols and words.

This proof is indeed very elementary. The deepest fact used is merely that a continuous function assumes a maximum on a compact set—and actually just a very special case of this. So, this is very nice.

Unfortunately, the proof is spelt out in such enormous elementary detail that I keep falling asleep halfway through! And worse, it only covers the zero-sum case $B = -A.$
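As a numerical aside (this is not Colman’s argument, just an illustration), the zero-sum case reduces to a small linear program: LP duality is in fact another elementary route to the minimax theorem. A sketch using `scipy.optimize.linprog`, with the formulation as my own rendering:

```python
import numpy as np
from scipy.optimize import linprog

def zero_sum_value(A):
    """Value and an optimal mixed strategy for the row player of a
    zero-sum game with payoff matrix A.

    Solves: maximize v  subject to  (A^T p)_j >= v for every column j,
    sum(p) = 1, p >= 0.
    """
    m, n = A.shape
    # Variables: p_1..p_m and v; linprog minimizes, so the objective is -v.
    c = np.zeros(m + 1)
    c[-1] = -1.0
    # v - (A^T p)_j <= 0 for each column j.
    A_ub = np.hstack([-A.T, np.ones((n, 1))])
    b_ub = np.zeros(n)
    # sum(p) = 1.
    A_eq = np.hstack([np.ones((1, m)), np.zeros((1, 1))])
    b_eq = np.array([1.0])
    bounds = [(0, None)] * m + [(None, None)]  # p >= 0, v free
    res = linprog(c, A_ub=A_ub, b_ub=b_ub, A_eq=A_eq, b_eq=b_eq, bounds=bounds)
    return res.x[-1], res.x[:m]

# Rock-paper-scissors has value 0 and unique optimal strategy (1/3, 1/3, 1/3).
rps = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)
v, p = zero_sum_value(rps)
print(round(float(v), 6), np.round(p, 3))
```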

Is there a good reference to an elementary but terse proof of the existence of Nash equilibria for 2-person games? If I don’t find one, I’ll have to work through Colman’s proof and then generalize it. But I feel sure someone must have done this already.

Equation 2) has a typo (should be B on both left and right)

Thanks – fixed!

Nash equil. are best conveyed by clear descriptions of interests vs tradeoffs, ie, stories from the point of view of actual players in a game.

I asked Nash this once and he nodded and laughed at the “overproof” of the basic point.

Hi John,

In Webb’s book (which is a nice book for a mathematics course on game theory, BTW) there is an elementary algebraic proof of the existence of NE in a 2-player, 2-strategy game. I know this isn’t what you asked, and also I’m sure you wouldn’t actually need the proof in the book, but it’s an accessible starting point, perhaps, for students.

I look forward to reading future posts!

Thanks! I have a big stack of game theory books on my desk, including a bunch you recommended, but apparently not that one. I’ll check out that proof. Maybe I should present it in class.
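For the record, the 2×2 case can be solved in closed form: at an interior mixed equilibrium, each player mixes so as to make the *opponent* indifferent between their two pure strategies. A sketch of that computation (my own rendering, not necessarily the book’s):

```python
import numpy as np

def mixed_nash_2x2(A, B):
    """Interior mixed equilibrium of a 2x2 bimatrix game, if one exists.

    Returns (p, q): p is the row player's probability of row 0, q the
    column player's probability of column 0. Each player mixes so that
    the *opponent* is indifferent between their two pure strategies.
    """
    # Row player's p makes the column player indifferent (so it uses B).
    denom_p = B[0, 0] - B[1, 0] - B[0, 1] + B[1, 1]
    # Column player's q makes the row player indifferent (so it uses A).
    denom_q = A[0, 0] - A[0, 1] - A[1, 0] + A[1, 1]
    if denom_p == 0 or denom_q == 0:
        return None  # degenerate: no unique interior equilibrium
    p = (B[1, 1] - B[1, 0]) / denom_p
    q = (A[1, 1] - A[0, 1]) / denom_q
    if not (0 <= p <= 1 and 0 <= q <= 1):
        return None  # the equilibria are in pure strategies instead
    return float(p), float(q)

# Battle of the Sexes: the mixed equilibrium is p = 2/3, q = 1/3.
A = np.array([[2.0, 0.0], [0.0, 1.0]])  # row player's payoffs
B = np.array([[1.0, 0.0], [0.0, 2.0]])  # column player's payoffs
print(mixed_nash_2x2(A, B))  # → (2/3, 1/3), up to floating point
```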

The existence of this elementary proof, together with Colman’s fairly elementary proof that Nash equilibria exist for 2-person zero-sum games with an arbitrary number of strategies, seems to hint that there’s an elementary proof of the result I actually want. But this may not be true!

Over on MathOverflow, Rabee Tourky wrote:

and Michael Greinecker wrote:

However, the blog article ‘Brouwer implies Nash implies Brouwer’, which is very nice, derives Brouwer’s fixed-point theorem from the existence of Nash equilibria where the set of pure strategies is not finite, but an arbitrary compact convex subset of $\mathbb{R}^n.$ This is seemingly a much more heavy-duty fact than what I’m after! But, the post has something to say about that….

Anyway, I seem to be closing in on the truth here.

It is interesting that for mixed strategies there are always NEs. But aren’t all those NEs that exist only for mixed strategies stalemate situations?

Let’s take rock-paper-scissors as an example. For pure strategies there is no NE; for mixed strategies there is picking at random, but that isn’t terribly interesting. Are there games where allowing mixed strategies leads to NEs (that were not there before) where one player gains an advantage?
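To make the rock-paper-scissors example concrete, a brute-force check confirms there is no pure-strategy equilibrium, while under the uniform mix every pure reply earns the same payoff, which is exactly why neither player can deviate profitably. A quick sketch:

```python
import numpy as np
from itertools import product

# Rock-paper-scissors payoff matrix for the row player (zero-sum game).
RPS = np.array([[0, -1, 1], [1, 0, -1], [-1, 1, 0]], dtype=float)

def pure_equilibria(A):
    """All pure-strategy equilibria of a zero-sum game, by exhaustive check."""
    eqs = []
    for i, j in product(range(A.shape[0]), range(A.shape[1])):
        row_best = A[:, j].max() == A[i, j]      # row player can't improve
        col_best = A[i, :].min() == A[i, j]      # column player can't improve
        if row_best and col_best:
            eqs.append((i, j))
    return eqs

print(pure_equilibria(RPS))  # [] — no pure-strategy equilibrium

# Against the uniform mix, every pure reply earns exactly 0,
# so the row player is indifferent and cannot deviate profitably.
q = np.ones(3) / 3
print(RPS @ q)  # [0. 0. 0.]
```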

Another thing about rock-paper-scissors is that it seems rather easy to modify the game to break even the mixed-strategy NEs. For example, let’s say the payoff is also dependent on the consistency of choices in previous games: +1 for a regular win and +0.5 for picking the same choice as last time. That should motivate players to prefer a particular choice.

Arrow wrote:

Yes; wait ’til the next post… or maybe someone here can make up an example.

Most games are of this form, so examples aren’t hard to find.

> Nash equil. are best conveyed by clear descriptions of interests vs tradeoffs, ie, stories from the point of view of actual players in a game.

If this is actually true, it should be possible to write a proof along these lines. ;) Can you give examples of the “clear descriptions of interests vs. tradeoffs” you’re talking about?

My point is, instead of fielding proofs, try to tell a story. That’s how people learn.

I tell plenty of stories in the course I’m teaching, and in class we play lots of games. But it’s a course for math majors. I’ll make sure they get the idea of a Nash equilibrium, but they need to learn to understand proofs, too!

What I’m looking for here is a proof of the existence of Nash equilibria for 2-person games that doesn’t require the Kakutani or Brouwer fixed-point theorems. It’s the use of these rather heavy-duty tools—which most kids learn only in grad school—that makes undergraduate courses on game theory present the existence of Nash equilibria as an article of faith, skipping the proof. But I suspect that for 2-person games these fixed-point theorems are overkill. It’s only linear algebra, for god’s sake!

What about just proving a special case of the fixed point theorems, just strong enough to apply to 2-person games? Since you don’t need the full strength of the theorem, you may be able to do this at undergraduate level.

I could try that. I’ll get to work soon and examine various options!

I agree with Prof. Benford. The easiest way I learned of Game Theory was on a bike ride from Tustin High School to Ed Thomas’ Book Carnival in Orange back in 2001. My girlfriend at the time asked about The Fermi Paradox or the Great Silence. (Why haven’t we heard them yet?)

As you all know the points Mike Hart and Enrico Fermi brought up were:

- The Sun is a young star.
- There are billions of stars in the galaxy that are billions of years older.
- Some of these stars likely have Earth-like planets which, if the Earth is typical, may develop intelligent life.
- Presumably some of these civilizations will develop interstellar travel, as Earth seems likely to do.
- At any practical pace of interstellar travel, the galaxy can be completely colonized in just a few tens of millions of years.

Where is everybody?! (No evidence of Von Neumann probes either?!!)

The movie ‘A Beautiful Mind’ had just come out (2000) and I read up on game theory and used the simple Prisoner’s Dilemma to answer her query. (Her parents were peaceniks, as are mine, even though both our parents have had military/para-military experience.)

Supposing we reach a Kardashev level 1 society (a true global society, as it would be the only logical option): one world governing body, one world telephone system (the internet), one world language (English), Terran hegemony, you name it. Just because we can get along with each other relatively well in that world would not immediately mean we have inhibitions about killing other species that exhibit a high degree of intelligence.

We kill elephants and dolphins all the time and they are not a threat to us.

Extra-terrestrial species may not necessarily play nice.

Why?

Because, “they” are not part of our species or civilization.

This brings us to where the prisoner dilemma kicks in.

We cannot afford to be proven wrong to think they would be peaceful.

A Kardashev level 1 civilization would harness on the order of $10^{16}$ to $10^{18}$ W (watts).

Using relativistic weapons would push that number even higher.

When you get to that level then we can make starships that travel at relativistic speeds. (Thank you Prof. Benford!)

That is essentially a Kardashev level 1 or 2 doomsday weapon. (A weapon that can take out level 1 and 2 civilizations.)

To make it simple, suppose we only had 2 intelligent level 1 or 2 societies in this galaxy.

You then have two civilizations, either both level 1 or 2, sharing 1 galaxy, in this whacko scenario:

*(Me and my girl at the time had the thought matrix below. I am sure someone else online has replicated it, the internet is a big place)

You can substitute any Type 1 or 2 civilization killer

Be it nanites, relativistic kill vehicles, sun destroyers (weapons that make a home star go nova) etc. etc………..

|  | Race 2 ignores | Race 2 attacks |
|---|---|---|
| **Race 1 ignores** | Both live in constant fear | Race 1 exterminated; Race 2 lives free of fear |
| **Race 1 attacks** | Race 1 lives free of fear; Race 2 exterminated | Both are devastated but not destroyed |

After the date we both wanted to write a paper titled

“Game theory / Nash Equilibria multiple player scenarios as a possible response to Fermi Paradox”

We were both silly…..but whatcha gonna do? We were 17 year olds.

We did eventually write it down up to about 5 players (as in 5 intelligent level 1 or 2 societies in our galaxy).

It was pretty funny.

But just for a 2-player response, below is a less clean-cut, more detailed outcome matrix:

Key (outcomes):

- 2 = alive, free of fear of a retaliatory strike
- 4 = alive, but in fear of a retaliatory/first strike from the opponent civilization or federation
- 1 = civilization intact, but devastated, probably downgraded from level 1 or 2
- 3 = total annihilation

|  | Cooperate | Defect |
|---|---|---|
| **Cooperate** | 4, 4 | 3, 2 |
| **Defect** | 2, 3 | 1, 1 |

*(again I am sure someone else has this matrix somewhere online as I have been talking about this for some 13 years now)
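Reading the key’s outcome labels as a preference order (2 best, then 4, then 1, then 3 worst) and converting them to utilities, which is my interpretation of the comment rather than anything canonical, an exhaustive check shows mutual defection is the unique pure equilibrium, the classic Prisoner’s Dilemma structure:

```python
from itertools import product

# Outcome labels from the key, mapped to utilities (higher = better):
# 2 (free of fear) > 4 (alive but fearful) > 1 (devastated) > 3 (annihilated)
UTIL = {2: 3, 4: 2, 1: 1, 3: 0}

# (row outcome, column outcome) for each (row move, column move).
LABELS = {
    ("C", "C"): (4, 4), ("C", "D"): (3, 2),
    ("D", "C"): (2, 3), ("D", "D"): (1, 1),
}
PAYOFF = {k: (UTIL[a], UTIL[b]) for k, (a, b) in LABELS.items()}

def pure_nash():
    """All pure-strategy equilibria of the 2x2 game, by exhaustive check."""
    eqs = []
    for r, c in product("CD", repeat=2):
        u_r, u_c = PAYOFF[(r, c)]
        row_ok = all(PAYOFF[(r2, c)][0] <= u_r for r2 in "CD")
        col_ok = all(PAYOFF[(r, c2)][1] <= u_c for c2 in "CD")
        if row_ok and col_ok:
            eqs.append((r, c))
    return eqs

print(pure_nash())  # [('D', 'D')] — mutual defection is the only equilibrium
```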

In jest it can be applied to kids with high-capacity guns on a playground, Kardashev level 1 or 2 civilizations, and hegemonic empires pre-empting first strikes.

Or whatever else you can think of.

Hehehe.

P.S. Prof. Baez did you catch Jeff Buckley’s “Grace” yet?

Darn my coding skills suck….It looked like matrix form when I typed it and it did not when I pressed the “Post Comment” button.

> That’s trivial, you know.

Trivial is in the eye of the beholder. Be especially careful if it is von Neumann’s eye. (I’m reminded of the fly between the trains.) There is a story about a famous mathematician, maybe Hardy, saying “this follows trivially from…” while writing on the blackboard. Then he paused. Then he stepped back from the blackboard and thought a bit. Then he sat down and thought some more. Then he went into another room. After half an hour, he came back, continuing his lecture where he left off, saying “Yes, it really is trivial”.

I think von Neumann claimed Nash’s theorem was trivial because he was jealous!

Nash used the same technique von Neumann had—a fixed-point theorem—to prove the existence of equilibria for a much wider and more interesting class of games! Von Neumann had considered 2-player zero-sum games. Nash considered *n*-player games that could be zero-sum or nonzero-sum. Von Neumann (being a genius) *should* have tried to generalize his result. Nash showed the generalization could be done in five easy paragraphs. So, I think von Neumann was jealous and trying to cut Nash down to size.

There’s a 15 minute UK BBC radio programme that touches on game theory and poker here. It’s a bit… energetically breathless and seems, to me, more enthusiastic about big ideas than seeing if they’re actually what happens in the real world. But it might be interesting to some following these posts.

Maybe this is a solution.

, abusing notation a little, is the vector of ones. Set

s.t. and with . A continuous function on a compact space,

has a maximum, no problem there. Now find the maxima of player 1’s payoff on the set of points that maximized player 2’s, and choose one of them, s.t. . The function is again continuous, the set is compact and non-empty. And if I did not miss anything, that is your Nash equilibrium.

I did miss something. Player 1 is not necessarily maximizing his payoff. Bah. So dumb!

Along the same lines. For each maximize player 2’s payoff:

(continuous function on compact set).

A sequential argument easily proves it is continuous. The set

is closed by continuity, and bounded, so compact.

Then any

is a Nash equilibrium.

Thanks for the comments. I fixed your TeX a bit. I’ve never seen this notation before! I’m guessing that $\operatorname{argmax}$ stands for the set of points that maximize the function in question. Is that right?

Once I’m clear on this, I’ll think about your argument.

I am sure that’s what Luis means. I was initially surprised that you didn’t know the notation, but I think it is more commonly used by programmers than mathematicians.

That’s for sure. I’ve been doing math for decades and have never seen that notation. It looks useful, but I’ll need to explain it whenever I use it in front of mathematicians.

I noticed it is used in this paper you pointed to:

Andrew McLennan and Rabee Tourky, From imitation games to Kakutani

… but they’re economists.

I don’t understand Luis’ proof. Let

The set is like with an inequality replaced by an equality. It is the set of points in which is an optimal choice for player 2 for some choice by player 1. Similarly we can define a function and a set What we want to show is that the intersection of and is non-empty.

I don’t see how it helps to extend to and look for points in that.

Okay, I didn’t understand the proof either.

I think I have an elementary proof for the case. (It’s certainly elementary: I’m not capable of other kinds.) It uses induction, assuming (for example) the base case is true, and uses the following lemma.

Lemma. Let $v_1, \dots, v_k$ be vectors in a finite-dimensional real vector space. Then either (1) there is a vector $d$ such that $d \cdot v_i > 0$ for all $i$, or (2) there are nonnegative reals $\lambda_1, \dots, \lambda_k$, not all zero, such that $\sum_i \lambda_i v_i = 0.$

Proof idea for lemma: consider the convex hull of the $v_i.$ If the origin is outside it we can find a $d$ for (1); otherwise we can show (2).
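The lemma’s dichotomy can be illustrated on two tiny numerical instances (the vectors here are ones I’ve made up for illustration):

```python
import numpy as np

# Case (1): with v1 = (1,0), v2 = (0,1), the origin lies outside the
# convex hull of the v_i, and the direction d = (1,1) has d·v_i > 0 for all i.
v = np.array([[1.0, 0.0], [0.0, 1.0]])
d = np.array([1.0, 1.0])
print((v @ d > 0).all())  # True

# Case (2): with v1 = (1,0), v2 = (-1,0), no single direction has positive
# inner product with both; instead the nonnegative combination
# 1·v1 + 1·v2 vanishes, so alternative (2) holds.
w = np.array([[1.0, 0.0], [-1.0, 0.0]])
lam = np.array([1.0, 1.0])
print(lam @ w)  # [0. 0.]
```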

Sketch proof of existence of Nash equilibrium. Denote the i’th row of by regarded as a vector in Player A’s payoff can be written as

Now belongs to a hyperplane in so we can project and the onto it, with becoming and becoming in Note is constrained to a hypertetrahedron Then

where does not depend on .

Now apply the lemma. If there is a such that for all i, then player B’s payoff can be improved by moving in direction from any point in , so player B will choose a point on the boundary of , which means at least one strategy is never chosen. So the game is equivalent to a smaller game and induction provides a Nash equilibrium. Otherwise there is some (not necessarily optimal) choice for player A such that so player B cannot change the payoff.

The same argument with A and B reversed either gives an equilibrium by induction, or a choice for player B such that player A cannot change the payoff. So if induction fails in both cases, we have

This looks interesting—sorry to take so long to reply!

Which hyperplane? Any old random hyperplane? Yes, belongs to infinitely many different hyperplanes, but this makes me nervous, mainly because I’m trying to understand your proof and it’s not making sense yet.

If belongs to a hyperplane, projecting onto this hyperplane won’t change at all, so we get Or am I not understanding you?

I mean the hyperplane : I want to regard the unit simplex in as a hypertetrahedron in

A bit later, where I say “…B’s payoff can be improved by moving in direction from any point in …” I think it would be clearer if I added that this works for any choice by player A.

Here is that part of the argument more explicitly:

Let and let , that is, is a unit vector normal to the hyperplane . Let

and

Then

and expanding that out and using leads to

where does not depend on .

Graham wrote:

Oh! I hadn’t considered that option. In math, half the time words like ‘line’, ‘plane’ and ‘hyperplane’ mean things that contain the origin, so that they’re vector spaces. And half the time they don’t. It’s about linear versus affine geometry.

Let me think about this some more now. Thanks.