Game Theory (Part 5)

23 January, 2013

Here are a bunch of puzzles about game theory, to see if you understand the material so far.

Classifying games

For each of the following games, say whether it is:

a) a single-player, 2-player or multi-player game

b) a simultaneous or sequential game

c) a zero-sum or nonzero-sum game

d) a symmetric or nonsymmetric game

e) a cooperative or noncooperative game

If it can be either one, or it is hard to decide, explain why! You will probably not know all these games. So, look them up on Wikipedia if you need to:

1) chess

2) poker

3) baseball

4) rock-paper-scissors

5) rock-paper-scissors-lizard-Spock

6) Prisoner’s Dilemma


7) solitaire with cards (also known as “patience”)

8) the ultimatum game

9) guess 2/3 of the average

10) Nim

Battle of the Sexes

Suppose that Alice and Bob are going to the movies. Alice wants to see the movie This is 40 while Bob wants to see Zero Dark Thirty. Let’s say Alice and Bob each have two strategies:

1. Watch the movie they really want to see.
2. Watch the movie the other one wants to see.

If they both watch the movie they really want, each goes out alone and gets a payoff of 5. If they both watch the movie the other wants to see, they again go out alone and now each gets a payoff of -5, because they’re both really bored as well as lonely. Finally, suppose one watches the movie they want while the other kindly watches the movie their partner wants. Then they go out together. The one who gets to see the movie they want gets a payoff of 10, while the one who doesn’t gets a payoff of 7. (They may not like the movie, but they get ‘points’ for being a good partner!)

Call Alice A for short, and call Bob B. Write down this game in normal form.

Prisoner’s Dilemma

Now suppose Alice and Bob have been arrested because they’re suspected of having conspired to commit a serious crime: an armed robbery of the movie theater!

They are interrogated in separate rooms. The detectives explain to each of them that they are looking at 3 years of jail even if neither of them confesses. If one confesses and the other denies having committed the crime, the one who confesses will get only 1 year of jail, while the one who denies it will get 25 years. If they both confess, they will both get 10 years of jail.

Suppose the two strategies available to both of them are:

1. confess to the crime.
2. deny having done it.

Write down this game in normal form, where n years of jail time counts as a payoff of -n.

Game theory concepts

Now, for both the Battle of the Sexes and Prisoner’s Dilemma games, answer these questions:

a) Is this a zero-sum game?

b) Is this a symmetric game?

c) Does player A have a strictly dominant pure strategy? If so, which one?

d) Does player B have a strictly dominant pure strategy? If so, which one?

e) Does this game have one or more Nash equilibria? If so, what are they?

Zero-sum and symmetric games

For the next two problems, suppose we have a game in normal form described by two m \times n matrices A and B. Remember from Part 2 of our course notes that if player A chooses strategy i and player B chooses strategy j, A_{ij} is the payoff to player A and B_{ij} is the payoff to player B.

a) What conditions on the matrices A and B say that this game is a zero-sum game?

b) What conditions on the matrices A and B say that the game is symmetric?


Why It’s Getting Hot

22 January, 2013

The Berkeley Earth Surface Temperature project concludes: carbon dioxide concentration and volcanic activity suffice to explain most of the changes in earth’s surface temperature from 1751 to 2011. Carbon dioxide increase explains most of the warming; volcanic outbursts explain most of the bits of sudden cooling. The fit is not improved by the addition of a term for changes in the behavior of the Sun!

For details, see:

• Robert Rohde, Richard A. Muller, Robert Jacobsen, Elizabeth Muller, Saul Perlmutter, Arthur Rosenfeld, Jonathan Wurtele, Donald Groom and Charlotte Wickham, A new estimate of the average earth surface land temperature spanning 1753 to 2011, Geoinformatics and Geostatistics: an Overview 1 (2012).

The downward spikes are explained nicely by volcanic activity. For example, you can see the 1815 eruption of Tambora in Indonesia, which blanketed the atmosphere with ash. 1816 was called The Year Without a Summer: frost and snow were reported in June and July in both New England and Northern Europe! Average global temperatures dropped 0.4–0.7 °C, resulting in major food shortages across the Northern Hemisphere. Similarly, the dip in 1783-1785 seems to be due to the eruption of Grímsvötn in Iceland.

(Carbon dioxide goes up a tiny bit in volcanic eruptions, but that’s mostly irrelevant. It’s the ash and sulfur dioxide, forming sulfuric acid droplets that help block incoming sunlight, that really matter for volcanoes!)

It’s worth noting that they get their best fit if each doubling of carbon dioxide concentration causes a 3.1 ± 0.3°C increase in land temperature. This is consistent with the 2007 IPCC report’s estimate of a 3 ± 1.5°C warming for land plus oceans when carbon dioxide doubles. This quantity is called climate sensitivity, and determining it is very important.
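The phrase ‘°C per doubling’ corresponds to a logarithmic dependence of temperature on concentration. Here’s a minimal sketch in Python; the logarithmic form is the standard one used in such fits, but the sample CO2 concentrations (280 and 560 ppm) are my own illustrative round numbers, not values from the BEST paper:

```python
from math import log2

# Warming from raising CO2 concentration from c0 to c, assuming the
# standard logarithmic form dT = S * log2(c / c0), where S is the
# climate sensitivity in degrees C per doubling. S = 3.1 is the
# best-fit value quoted above; the concentrations are illustrative.
def warming(c, c0, sensitivity=3.1):
    return sensitivity * log2(c / c0)

print(warming(560, 280))  # one doubling: 3.1 degrees C of warming
```

So at this sensitivity, two doublings (280 to 1120 ppm) would mean about 6.2 °C of land warming.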

They also get their best fit if each extra 100 gigatonnes of atmospheric sulfates (from volcanoes) cause 1.5 ± 0.5°C of cooling.

They also look at the left-over temperature variations that are not explained by this simple model: 3.1°C of warming with each doubling of carbon dioxide, and 1.5°C of cooling for each extra 100 gigatonnes of atmospheric sulfates. Here’s what they get:

The left-over temperature variations, or ‘residuals’, are shown in black, with error bars in gray. On top is the annual data; on the bottom, a 10-year moving average. The red line is an index of the Atlantic Multidecadal Oscillation, a fluctuation in the sea surface temperature in the North Atlantic Ocean with a rough ‘period’ of 70 years.

Apparently the BEST team places more weight on the Atlantic Multidecadal Oscillation than most climate scientists. Most consider the El Niño Southern Oscillation to be more important in explaining global temperature variations! I haven’t seen why the BEST team prefers to focus attention on the Atlantic Multidecadal Oscillation. I’d like to see some more graphs…


Anasazi America (Part 1)

20 January, 2013

A few weeks ago I visited Canyon de Chelly, which is home to some amazing cliff dwellings. I took a bunch of photos, like this picture of the so-called ‘First Ruin’. You can see them and read about my adventures starting here:

• John Baez, Diary, 21 December 2012.

Here I’d like to talk about what happened to the civilization that built these cliff dwellings! It’s a fascinating tale full of mystery… and it’s full of lessons for the problems we face today, involving climate change, agriculture, energy production, and advances in technology.

First let me set the stage! Canyon de Chelly is in the Navajo Nation, a huge region with its own laws and government, not exactly part of the United States, located at the corners of Arizona, New Mexico, and Utah:

The hole in the middle is the Hopi Reservation. The Hopi are descended from the people who built the cliff dwellings in Canyon de Chelly. Those people are often called the Anasazi, but these days the favored term is ancient Pueblo peoples.

The Hopi speak a Uto-Aztecan language, and so presumably did the Anasazi. Uto-Aztecan speakers were spread out like this shortly before the Europeans invaded:

with a bunch more down in what’s now Mexico. The Navajo are part of a different group, the Na-Dené language group:

So, the Navajo aren’t a big part of the story in this fascinating book:

• David E. Stuart, Anasazi America, U. of New Mexico Press, Albuquerque, New Mexico, 2000.

Let me summarize this story here!

After the ice

The last Ice Age, called the Wisconsin glaciation, began around 70,000 BC. The glaciers reached their maximum extent about 18,000 BC, with ice sheets down to what are now the Great Lakes. In places the ice was over 1.6 kilometers thick!

Then it started warming up. By 16,000 BC people started cultivating plants and herding animals. Around 12,000 BC, before rising seas submerged the land bridge connecting Siberia and Alaska, people from the so-called Clovis culture came to the Americas.

It seems likely that other people got to America earlier, moving down the Pacific coast before the inland glaciers melted. But even if the Clovis culture didn’t get there first, their arrival was a big deal. They can be traced by their distinctive and elegant spear tips, called Clovis points:



After they arrived, the Clovis people broke into several local cultures, roughly around the time of the Younger Dryas cold spell beginning around 10,800 BC. By 10,000 BC, small bands of hunters roamed the Southwest, first hunting mammoths, huge bison, camels, horses and elk, and later—perhaps because they killed off the really big animals—the more familiar bison, deer, elk and antelopes we see today.

For about 5000 years the population of current-day New Mexico probably fluctuated between 2 and 6 thousand people—a density of just one person per 50 to 150 square kilometers! Changes in culture and climate were slow.

The Altithermal

Around 5,000 BC, the climate near Canyon de Chelly began to warm up, dry out, and become more strongly seasonal. This epoch is called the ‘Altithermal’. The lush grasslands that once supported huge herds of bison began to disappear in New Mexico, and those bison moved north. By 4,000 BC, the area near Canyon de Chelly became very hot, with summers often reaching 45°C, and sometimes 57°C at the ground’s surface.

The people in this area responded in an interesting way: by focusing much more on gathering, and less on hunting. We know this from their improved tools for processing plants, especially yucca roots. The yucca is now the state flower of New Mexico. Here’s a picture taken by Stan Shebs:

David Stuart writes:

At first this might seem an unlikely response to unremitting heat and aridity. One could argue that the deteriorating climate might first have forced people to reduce their numbers by restricting sex, marriage, and child-bearing so that survivors would have enough game. That might well have been the short-term solution [….] When once-plentiful game becomes scarce, hunter-gatherers typically become extremely conservative about sex and reproduction. […] But by early Archaic times, the change in focus to plant resources—undoubtedly by necessity—had actually produced a marginally growing population in the San Juan Basin and its margins in spite of climatic adversity.

[….]

Ecologically, these Archaic hunters and gatherers had moved one entire link down the food chain, thereby eliminating the approximately 90-percent loss in food value that occurs when one feeds on an animal that is a plant-eater.

[….]

This is sound ecological behavior—they could not have found a better basic strategy even if they had the advantage of a contemporary university education. Do I attribute this to their genius? No. It is simply that those who stubbornly clung to the traditional big game hunting of their Paleo-Indian ancestors could not prosper, so they left fewer descendants. Those more willing to experiment, or more desperate, fared better, so their behavior eventually became traditional among their more numerous descendants.

The San Jose Period

By 3,000 BC the Altithermal was ending, big game was returning to the Southwest, yet the people retained their new-found agricultural skills. They also developed a new kind of dart for hunting, the ‘San Jose point’. So, this epoch is called the ‘San Jose period’. Populations rose to perhaps 15 to 30 thousand people in New Mexico, a vast increase over the earlier level of 2-6 thousand. But still, that’s just one person per 10 or 20 square kilometers!

The population increased until around 2,000 BC. At this point population pressures became acute… but two lucky things happened. First, the weather got wetter. Second, corn was introduced from Mexico. The first varieties had very small cobs, but gradually they were improved.

The wet weather lasted until around 500 BC. And at just about this time, beans were introduced, also from Mexico.

Their addition was critical. Corn alone is a costly food to metabolize. Its proteins are incomplete and hard to synthesize. Beans contain large amounts of lysine, the amino acid missing from corn and squash. In reasonable balance, corn, beans and squash together provide complementary amino acids and form the basis of a nearly complete diet. This diet lacks only the salt, fat and mineral nutrients found in most meats to be healthy and complete.

By 500 BC, nearly all the elements for accelerating cultural and economic changes were finally in place—a fairly complete diet that could, if rainfall cooperated, largely replace the traditional foraging one; several additional, modestly larger-cobbed varieties of corn that not only prospered under varying growing conditions but also provided a bigger harvest; a population large enough to invest the labor necessary to plant and harvest; nearly 10 centuries of increasing familiarity with cultigens; and enhanced food-processing and storage techniques. Lacking were compelling reasons to transform an Archaic society accustomed to earning a living with approximately 500 hours of labor a year into one willing to invest the 1,000 to 2,000 hours coming to contemporary hand-tool horticulturalists.

Nature then stepped in with one persuasive, though not compelling, reason for people to make the shift.

Namely, droughts! Precipitation became very erratic for about 500 years. People responded in various ways. Some went back to the old foraging techniques. Others improved their agricultural skills, developing better breeds of corn, and tricks for storing water. The latter are the ones whose populations grew.

The Basketmakers

This led to the Basketmaker culture, where people started living in dugout ‘pit houses’ in small villages. More precisely, the Late Basketmaker II Era lasted from about 50 AD to 500 AD. New technologies included the baskets that gave this culture its name:

Pottery entered the scene around 300 AD. Have you ever thought about how important this is? Before pots, people had to cook corn and beans by putting rocks in fires and then transferring them to holes containing water!

Now, porridge and stews could be put to boil in a pot set directly into a central fire pit. The amount of heat lost and fuel used in the old cooking process—an endless cycle of collecting, heating, transferring, removing and replacing hot stones just to boil a few quarts of water—had always been enormous. By comparison, cooking with pots became quick, easy, and far more efficient. In a world more densely populated, firewood had to be gathered from greater distances. Now, less of it was needed. And there was newer fuel to supplement it—dried corncobs.

Not all the changes were good. Most adult skeletons from this period show damage from long periods spent stooping—either using a stone hoe to tend garden plots, or grinding corn while kneeling. And as they ate more corn and beans and fewer other vegetables, mineral deficiencies became common. Extreme osteoporosis afflicted many of these people: we find skulls that are porous, and broken bones. It reminds me a little of the plague of obesity, with its many side effects, afflicting modern Americans as we move to a culture where most people work sitting down.

On the other hand, there was a massive growth in population. The number of pit-house villages grew nine-fold from 200 AD to 700 AD!

It must have been an exciting time. In only some 25 generations, these folks had transformed themselves from foragers and hunters with a small economic sideline in corn, beans and squash into semisedentary villagers who farmed and kept up their foraging to fill in the economic gaps.

But this was just the beginning. By 1020, the ancient Pueblo people would begin to build housing complexes that would remain the biggest in North America until the 1880s! This happened in Chaco Canyon, 125 kilometers east of Canyon de Chelly.

Next time I’ll tell you the story of how that happened, and how later, around 1200, these people left Chaco Canyon and started to build cliff dwellings.

For now, I’ll leave you with some pictures I took of the most famous cliff dwelling in Canyon de Chelly: the ‘White House Ruins’. Click to enlarge:


Game Theory (Part 4)

18 January, 2013

Last time we talked about Nash equilibria for a 2-player normal form game. We saw that sometimes a Nash equilibrium doesn’t exist! Sometimes there’s more than one! But suppose there is at least one Nash equilibrium. How do we find one? Sometimes it’s hard.

Strict domination

But there’s a simple trick to rule out some possibilities. Sometimes player A will have a choice i that gives them a bigger payoff than some choice i' no matter what choice player B makes. In this case we say choice i strictly dominates choice i'. And in this case, there’s no way that i' could be A’s choice in a Nash equilibrium, because player A can always improve their payoff by switching to choice i.

For example, look at this game:

1 2
1    (0,0)   (-1,1)  
2 (2,-1)   (0,0)  
3 (-2,1)   (1,-1)  
4 (0,1)   (2,0)  

For player A, choice 4 strictly dominates choice 3. No matter what player B does, player A gets a bigger payoff if they make choice 4 than if they make choice 3. Stare at the black numbers in the table in rows 3 and 4 until you see this.

You can also see it using the payoff matrix for player A:

A = \left( \begin{array}{rr} 0 & -1 \\ 2 & 0 \\ -2 & 1 \\ 0 & 2 \end{array} \right)

Each number in row 4 here is bigger than the number directly above it in row 3. In other words:

0 = A_{41} > A_{31} = -2

and

2 = A_{42} > A_{32} = 0
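These inequalities are easy to check mechanically. Here’s a small Python sketch of the strict dominance test, straight from the definition below; note it uses 0-based indexing, so the text’s choice 4 is row 3:

```python
# Player A's payoff matrix from the example above.
A = [[ 0, -1],
     [ 2,  0],
     [-2,  1],
     [ 0,  2]]

# Choice i strictly dominates choice i2 for player A when
# A[i][j] > A[i2][j] for every choice j of player B.
def strictly_dominates(M, i, i2):
    return all(M[i][j] > M[i2][j] for j in range(len(M[i])))

# The claim above: choice 4 strictly dominates choice 3
# (rows 3 and 2 in 0-based indexing).
print(strictly_dominates(A, 3, 2))  # True
```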

Puzzle. Are there any other examples of a choice for player A that strictly dominates another choice?

And of course the same sort of thing works for player B. If player B has a choice j that gives them a better payoff than some choice j' no matter what player A does, then j' can’t show up as B’s choice in a Nash equilibrium.

We can make our remarks more precise:

Definition. A choice i for player A strictly dominates a choice i' if

A_{ij} > A_{i'j}

for all j. Similarly, a choice j for player B strictly dominates a choice j' if

B_{ij} > B_{ij'}

for all i.

Theorem 1. If a choice i for player A strictly dominates a choice i', then no choice j for player B gives a Nash equilibrium (i',j). Similarly, if a choice j for player B strictly dominates a choice j', then no choice i for player A gives a Nash equilibrium (i,j').

Proof. We’ll only prove the first statement since the second one works the exact same way. Suppose i strictly dominates i'. This means that

A_{i j} > A_{i' j}

for all j. In this case, there is no way that the choice i' can be part of a Nash equilibrium. After all, if (i',j) is a Nash equilibrium we have

A_{i' j} \ge  A_{i'' j}

for all i''. But this contradicts the previous inequality—just take i'' = i.   █

Strict dominance

Definition. We say a choice is strictly dominant if it strictly dominates all other choices for that player.

Let me remind you again that a pure strategy is a way for one player to play a game where they always make the same choice. So, there is one pure strategy for each choice, and one choice for each pure strategy. This means we can be a bit relaxed about the difference between these words. So, what we’re calling strictly dominant choices, most people call strictly dominant pure strategies. You can read more about these ideas here:

• Strategic dominance, Wikipedia.

We’ve seen that if one choice strictly dominates another, that other choice can’t be part of a Nash equilibrium. So if one choice strictly dominates all the others, that choice is the only one that can be part of a Nash equilibrium! In other words:

Theorem 2. If a choice i for player A is strictly dominant, then any Nash equilibrium (i',j') must have i' = i. Similarly, if a choice j for player B is strictly dominant, then any Nash equilibrium (i',j') must have j' = j.

Proof. We’ll only prove the first statement since the second one works the exact same way. Theorem 1 says that if a choice i for player A strictly dominates a choice i', then no choice j for player B gives a Nash equilibrium (i',j). So, if i strictly dominates all other choices for player A, it is impossible to have a Nash equilibrium (i',j) unless i' = i.   █

We can go even further:

Theorem 3. If choice i for player A is strictly dominant and choice j for player B is strictly dominant, then (i, j) is a Nash equilibrium, and there is no other Nash equilibrium.

Proof. Suppose choice i for player A is strictly dominant and choice j for player B is strictly dominant. Then by Theorem 2 any Nash equilibrium (i',j') must have i' = i and j' = j. So, there is certainly no Nash equilibrium other than (i, j).

But we still need to check that (i, j) is a Nash equilibrium! This means we need to check that:

1) For all 1 \le i' \le m, A_{i'j} \le A_{ij}.

2) For all 1 \le j' \le n, B_{ij'} \le B_{ij}.

Part 1) is true because choice i is strictly dominant. Part 2) is true because choice j is strictly dominant. █

We can restate Theorem 3 in a less precise but rather pretty way:

Corollary. If both players have a strictly dominant pure strategy, there exists a unique Nash equilibrium.

Here I’m saying ‘pure strategy’ instead of ‘choice’ just to prepare you for people who talk that way!
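To see the Corollary in action, here’s a Python sketch on a made-up 2×2 game (not one of our puzzle games—both payoff matrices are invented for illustration). Each player has a strictly dominant choice, and a brute-force search confirms that the pair of dominant choices is the unique Nash equilibrium:

```python
# An invented 2x2 game where each player has a strictly dominant choice.
# Indices are 0-based: A's choices are rows, B's choices are columns.
A = [[3, 1],
     [5, 2]]   # payoffs to player A: row 1 strictly dominates row 0
B = [[4, 6],
     [0, 2]]   # payoffs to player B: column 1 strictly dominates column 0

# Brute-force search for Nash equilibria straight from the definition:
# neither player can improve their payoff by unilaterally switching.
def nash_equilibria(A, B):
    m, n = len(A), len(A[0])
    return [(i, j) for i in range(m) for j in range(n)
            if all(A[k][j] <= A[i][j] for k in range(m))
            and all(B[i][l] <= B[i][j] for l in range(n))]

# As Theorem 3 predicts, the pair of dominant choices (1, 1) is the
# one and only Nash equilibrium of this game.
print(nash_equilibria(A, B))  # [(1, 1)]
```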

Domination

I realize that the terminology here has a kind of S&M flavor to it, with all this talk of ‘strict domination’ and the like. But there’s nothing I can do about that—it’s standard!

Anyway, there’s also a less strict kind of dominance:

Definition. A choice i for player A dominates a choice i' if

A_{ij} \ge A_{i'j}

for all j. Similarly, a choice j for player B dominates a choice j' if

B_{ij} \ge B_{ij'}

for all i.

But there is less we can do with this definition. Why? Here’s why:

Puzzle. Find a normal-form two player game with 2 choices for each player, where (i,j) is a Nash equilibrium even though i is dominated by the other choice for player A and j is dominated by the other choice for player B.


Game Theory (Part 3)

17 January, 2013

Last time we started looking at 2-player normal form games. The idea is that player A has m different choices of which move to make, while player B has n choices, and we have two m \times n matrices of payoffs, A and B. If player A makes choice i and player B makes choice j, then the payoff to player A is A_{i j}, while the payoff to player B is B_{i j}.

What should you do if you’re playing a game like this? As we saw when playing rock-paper-scissors, sometimes the best thing is to adopt a strategy with some built-in randomness. In this game, if your opponent knows what you’re going to do, he can exploit that and beat you!

A strategy involving randomness is called a mixed strategy. They’re very important, and we’ll talk about them a lot. But today we’ll only consider pure strategies, where we definitely choose one of the choices available, with no randomness.

Notice: there are as many pure strategies as there are choices! For each choice, there’s a pure strategy where you always make that choice when playing the game. So, a lot of people say ‘pure strategy’ for what I call a ‘choice’. Other people use the word ‘action’.

Anyway: let’s try to figure out the best pure strategy for each player, if we can.

Sometimes this is impossible. For example, rock-paper-scissors has no best pure strategy. Could playing rock be best? No, because paper beats rock. Could playing paper be best? No, because scissors beats paper. Could playing scissors be best? No, because rock beats scissors.

But sometimes we can find a best pure strategy, and the ideas we meet will help us later when we study mixed strategies. So, let’s dive in!

Nash equilibria

Suppose player A makes choice i and player B makes choice j. Then we say (i,j) is a Nash equilibrium if neither player can improve their payoff by changing their choice.

For example, look at this game:

1 2
1    (10,10)   (9,-1)
2 (-1,1) (-1,-1)

Here we are writing the matrices A and B together in one table, like last time, with A in black and B in red. But we could also separate them out:

A = \left( \begin{array}{rr} 10 & 9 \\ -1 & -1 \end{array} \right) , \qquad B = \left( \begin{array}{rr} 10 & -1 \\ 1 & -1 \end{array} \right)

It’s just a different way of conveying the same information.

Either way, it’s pretty clear what the players should do in this game! Both players should make choice 1. That way, they both win 10 points.

Furthermore, this pair of choices (i,j) = (1,1) is certainly a Nash equilibrium. In other words: neither player can improve their payoff by unilaterally changing their choice! If player A switches to choice 2, their payoff drops to -1 points. If player B switches to choice 2, their payoff drops to -1 points.

Let’s give an official definition of Nash equilibrium and then look at some trickier examples. Remember, player A’s choices are i = 1, 2, \dots , m, while player B’s choices are j = 1, 2, \dots, n.

Definition. Given a 2-player normal form game, a pair of choices (i,j) is a Nash equilibrium if:

1) For all 1 \le i' \le m, A_{i'j} \le A_{ij}.

2) For all 1 \le j' \le n, B_{ij'} \le B_{ij}.

Condition 1) says that player A can’t improve their payoff by switching their choice from i to any other choice i'. Condition 2) says that player B can’t improve their payoff by switching their choice from j to any other choice j'.
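Conditions 1) and 2) translate directly into code. Here’s a minimal Python sketch of the definition as a predicate, applied to the example game above; it uses 0-based indices, so the equilibrium (1,1) of the text is (0, 0) here:

```python
# Payoff matrices of the example game above.
A = [[10,  9],
     [-1, -1]]   # payoffs to player A
B = [[10, -1],
     [ 1, -1]]   # payoffs to player B

# (i, j) is a Nash equilibrium iff conditions 1) and 2) both hold.
def is_nash(A, B, i, j):
    m, n = len(A), len(A[0])
    cond1 = all(A[i2][j] <= A[i][j] for i2 in range(m))  # A can't do better
    cond2 = all(B[i][j2] <= B[i][j] for j2 in range(n))  # B can't do better
    return cond1 and cond2

print(is_nash(A, B, 0, 0))  # True: the equilibrium we found
print(is_nash(A, B, 1, 1))  # False
```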

Examples

Let’s look at more examples of Nash equilibria, to see some funny things that can happen. First let’s modify the last game a little:

1 2
1    (10,10)   (-1,10)
2 (10,-1) (-1,-1)

Is there a Nash equilibrium? Yes—and just as before, it happens when both players pick pure strategy 1. Now player A’s payoff doesn’t get any worse when they switch to pure strategy 2. But it doesn’t improve, either! It stays equal to 10. Similarly, player B’s payoff doesn’t get any worse when they switch to pure strategy 2… but it doesn’t improve. So, neither player is motivated to change their strategy.

I said a Nash equilibrium is a situation where neither player can improve their payoff by changing their choice. But it might be clearer to say: neither player can improve their payoff by unilaterally changing their choice.

What do I mean by ‘unilaterally’ changing their choice? I mean that one player changes their choice while the other player does not change their choice.

But if both players change their choices simultaneously, sometimes they can improve both their payoffs! Let’s see an example of that:

1 2
1    (10,10)   (9,-1)
2 (-1,8) (20,20)

Now it looks like both players will be happiest if they pick pure strategy 2. And it’s true! Moreover, this is a Nash equilibrium. Check and see.

But what if both players pick choice 1? This is also a Nash equilibrium! Shocking but true. If they both use pure strategy 1, neither player can improve their payoff by unilaterally changing their choice. If player A changes her choice, her payoff drops to -1. And if player B changes his choice, his payoff drops to -1.

Of course, they can improve their payoff if they both simultaneously change their choice to 2. But that’s not what the concept of Nash equilibrium is about.

This raises the question of whether Nash equilibrium is a good definition of ‘the best’ choice for each player. The big problem is figuring out what ‘best’ means—we’ll have to talk about that more. But we’re also seeing some problems with the word ‘the’. There may be more than one Nash equilibrium, so we can’t always talk about ‘the’ Nash equilibrium.

In other words, our example shows there isn’t always a unique Nash equilibrium.

Furthermore, a Nash equilibrium doesn’t always exist! Check out the game of rock-paper-scissors:

rock paper scissors
rock    (0,0)   (-1,1)   (1,-1)  
paper     (1,-1)   (0,0)   (-1,1)  
scissors     (-1,1)   (1,-1)   (0,0)  

It’s easy to see that there’s no Nash equilibrium. No matter what both players choose, at least one of them can always improve their payoff by switching to a different choice. If one of them wins the game, the loser can improve their payoff by switching to a different choice. If it’s a tie, either player can improve their payoff by switching to a different choice.
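You can also confirm this exhaustively, by testing the definition at all nine pairs of choices. A minimal self-contained Python sketch:

```python
# Rock-paper-scissors payoffs as (payoff to A, payoff to B) pairs,
# in the same order as the table above: rock, paper, scissors.
table = [[( 0,  0), (-1,  1), ( 1, -1)],
         [( 1, -1), ( 0,  0), (-1,  1)],
         [(-1,  1), ( 1, -1), ( 0,  0)]]

A = [[a for a, b in row] for row in table]
B = [[b for a, b in row] for row in table]

# Collect every pair (i, j) where neither player can improve their
# payoff by unilaterally switching. For rock-paper-scissors the
# resulting list is empty: no Nash equilibrium in pure strategies.
equilibria = [(i, j) for i in range(3) for j in range(3)
              if all(A[k][j] <= A[i][j] for k in range(3))
              and all(B[i][l] <= B[i][j] for l in range(3))]

print(equilibria)  # []
```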

So: Nash equilibria are interesting, but they don’t always exist—at least not when we consider only pure strategies, as we’ve been doing. And even when they do, they’re not always unique! This is a bit frustrating. We’d like game theory to tell us how to play games well.

We can improve the situation by allowing mixed strategies, where a player picks different choices with different probabilities. If we do this, there is always at least one Nash equilibrium! This result was proved by—you guessed it!—John Nash. He did it in 1950, in this astoundingly short paper:

• John F. Nash, Jr., Equilibrium points in n-person games, Proceedings of the National Academy of Sciences 36 (1950), 48–49.

He eventually won the Nobel prize in economics for this work.

In fact, Nash was not the first to think about Nash equilibria; this idea goes back to the French economist Antoine Cournot, who wrote about it way back in 1838! However, Nash was the first to consider Nash equilibria for mixed strategies for general simultaneous multi-player games, and prove they always exist. (As we’ll see later, von Neumann and Morgenstern had already done this for the zero-sum 2-player games.)

John Nash

If you haven’t seen the movie about Nash called A Beautiful Mind, you should! Here’s the scene where Nash figures out something about multiplayer games:

It’s not very clear, mathematically speaking… but oh well. It’s a good movie and it gives you some sense of how Nash battled with mental illness for much of his life. He has now largely recovered and spends his time at Princeton University.

Here’s the young Nash—click for more details:


A ‘practical’ question

A student in my class asked what’s the best book or website on how to play blackjack well. I asked this on Google+ and got over 50 replies, which you can read here. If you know more, please tell me!


Network Theory for Economists

15 January, 2013

Tomorrow I’m giving a talk in the econometrics seminar at U.C. Riverside. I was invited to speak on my work on network theory, so I don’t feel too bad about the fact that I’ll be saying only a little about economics and practically nothing about econometrics. Still, I’ve tried to slant the talk in a way that emphasizes possible applications to economics and game theory. Here are the slides:

Network Theory.

For long-time readers here the fun comes near the end. I explain how reaction networks can be used to describe evolutionary games. I point out that in certain classes of evolutionary games, evolution tends to increase ‘fitness’, and/or lead the players to a ‘Nash equilibrium’. For precise theorems you’ll have to click the links in my talk and read the references!

I conclude with an example: a game with three strategies and 7 Nash equilibria. Here evolution makes the proportion of these three strategies follow these flow lines, at least in the limit of large numbers of players:

This picture is from William Sandholm’s nice expository paper:

• William H. Sandholm, Evolutionary game theory, 2007.

I mentioned it before in Information Geometry (Part 12), en route to showing a proof that some quantity always decreases in a class of evolutionary games. Sometime I want to tell the whole story linking:

reaction networks
evolutionary games
the 2nd law of thermodynamics

and

Fisher’s fundamental theorem of natural selection.

But not today! Think of these talk slides as a little appetizer.


Network Theory (Part 26)

15 January, 2013

Last time I described the reachability problem for reaction networks, which has the nice feature of connecting chemistry, category theory, and computation.

Near the end I raised a question. Luca Cardelli, who works on biology and computation at Microsoft, answered it.


His answer is interesting enough that I thought you should all read it. I imagined an arbitrary chemical reaction network and said:

We could try to use these reactions to build a ‘chemical computer’. But how powerful can such a computer be? I don’t know the answer.

Cardelli replied:

The answer to that question is known. Stochastic Petri Nets (and equivalently chemical reaction networks) are not Turing-powerful in the strict sense, essentially because of all the decidable properties of Petri Nets. However, they can approximate a Turing machine up to any fixed error bound, so they are ‘almost’ Turing-complete, or ‘Turing-complete-up-to-epsilon’. The error bound can be fixed independently of the length of the computation (which, being a Turing machine, is not going to be known ahead of time); in practice, that means progressively slowing down the computation to make it more accurate over time and to remain below the global error bound.

Note also that polymerization is a chemical operation that goes beyond the power of Stochastic Petri Nets and plain chemical reaction networks: if you can form unbounded polymers (like, e.g., DNA), you can use them as registers or tapes and obtain full Turing completeness, chemically (or, you might say ‘biochemically’ because that’s where the most interesting polymers are found). An unbounded polymer corresponds to an infinite set of reactions (a small set of reactions for each polymer length), i.e. to an ‘actually infinite program’ in the language of simple reaction networks. Infinite programs of course are no good for any notion of Turing computation, so you need to use a more powerful language for describing polymerization, that is, a language that has the equivalent of molecular binding/unbinding as a primitive. That kind of language can be found in Process Algebra.

So, in addition to the

Chemical-Reaction-Networks/
Stochastic-Petri-Nets/
Turing-Completeness-Up-To-Epsilon

connection, there is another connection between

‘Biochemical’-Reaction-Networks/
Stochastic-Process-Algebra/
Full-Turing-Completeness.

And he provided a list of references. The correct answer to my question appeared first here:

• D. Soloveichik, M. Cook, E. Winfree and J. Bruck, Computation with finite stochastic chemical reaction networks, Natural Computing 7 (2008), 615–633.

contradicting an earlier claim here:

• M.O. Magnasco, Chemical kinetics is Turing universal, Phys. Rev. Lett. 78 (1997), 1190–1193.

Further work can be found here:

• Matthew Cook, David Soloveichik, Erik Winfree and Jehoshua Bruck, Programmability of chemical reaction networks, in Algorithmic Bioprocesses, ed. Luca Cardelli, Springer, Berlin, 2009, pp. 543–584.

and some of Cardelli’s own work is here:

• Gianluigi Zavattaro and Luca Cardelli, Termination problems in chemical kinetics, in CONCUR 2008 – Concurrency Theory, eds. Gianluigi Zavattaro and Luca Cardelli, Lecture Notes in Computer Science 5201, Springer, Berlin, 2008, pp. 477–491.

• Luca Cardelli and Gianluigi Zavattaro, Turing universality of the biochemical ground form, Math. Struct. Comp. Sci. 20 (2010), 45–73.

You can find lots more interesting work on Cardelli’s homepage.

