Tipping Points in Climate Systems

4 March, 2013

If you’ve just recently gotten a PhD, you can get paid to spend a week this summer studying tipping points in climate systems!

They’re having a program on this at ICERM: the Institute for Computational and Experimental Research in Mathematics, in Providence, Rhode Island. It’s happening from July 15th to 19th, 2013. But you have to apply soon, by the 15th of March!

For details, see below. But first, a word about tipping points… in case you haven’t thought about them much.

Tipping Points

A tipping point occurs when adjusting some parameter of a system causes it to transition abruptly to a new state. The term refers to a well-known example: as you push more and more on a glass of water, it gradually leans over further until you reach the point where it suddenly falls over. Another familiar example is pushing on a light switch until it ‘flips’ and the light turns on.

In the Earth’s climate, a number of tipping points could cause abrupt climate change:



They include:

• Loss of Arctic sea ice.
• Melting of the Greenland ice sheet.
• Melting of the West Antarctic ice sheet.
• Permafrost and tundra loss, leading to the release of methane.
• Boreal forest dieback.
• Amazon rainforest dieback.
• West African monsoon shift.
• Indian monsoon chaotic multistability.
• Change in El Niño amplitude or frequency.
• Change in formation of Atlantic deep water.
• Change in the formation of Antarctic bottom water.

You can read about them here:

• T. M. Lenton, H. Held, E. Kriegler, J. W. Hall, W. Lucht, S. Rahmstorf, and H. J. Schellnhuber, Tipping elements in the Earth’s climate system, Proceedings of the National Academy of Sciences 105 (2008), 1786–1793.

Mathematicians are getting interested in how to predict when we’ll hit a tipping point:

• Peter Ashwin, Sebastian Wieczorek and Renato Vitolo, Tipping points in open systems: bifurcation, noise-induced and rate-dependent examples in the climate system, Phil. Trans. Roy. Soc. A 370 (2012), 1166–1184.

Abstract: Tipping points associated with bifurcations (B-tipping) or induced by noise (N-tipping) are recognized mechanisms that may potentially lead to sudden climate change. We focus here on a novel class of tipping points, where a sufficiently rapid change to an input or parameter of a system may cause the system to “tip” or move away from a branch of attractors. Such rate-dependent tipping, or R-tipping, need not be associated with either bifurcations or noise. We present an example of all three types of tipping in a simple global energy balance model of the climate system, illustrating the possibility of dangerous rates of change even in the absence of noise and of bifurcations in the underlying quasi-static system.

We can test out these theories using actual data:

• J. Thompson and J. Sieber, Predicting climate tipping points as a noisy bifurcation: a review, International Journal of Bifurcation and Chaos 21 (2011), 399–423.

Abstract: There is currently much interest in examining climatic tipping points, to see if it is feasible to predict them in advance. Using techniques from bifurcation theory, recent work looks for a slowing down of the intrinsic transient responses, which is predicted to occur before an instability is encountered. This is done, for example, by determining the short-term auto-correlation coefficient ARC in a sliding window of the time series: this stability coefficient should increase to unity at tipping. Such studies have been made both on climatic computer models and on real paleoclimate data preceding ancient tipping events. The latter employ re-constituted time-series provided by ice cores, sediments, etc, and seek to establish whether the actual tipping could have been accurately predicted in advance. One such example is the end of the Younger Dryas event, about 11,500 years ago, when the Arctic warmed by 7 C in 50 years. A second gives an excellent prediction for the end of ’greenhouse’ Earth about 34 million years ago when the climate tipped from a tropical state into an icehouse state, using data from tropical Pacific sediment cores. This prediction science is very young, but some encouraging results are already being obtained. Future analyses will clearly need to embrace both real data from improved monitoring instruments, and simulation data generated from increasingly sophisticated predictive models.

The next paper is interesting because it studies tipping points experimentally by manipulating a lake. Doing this lets us study another important question: when can you push a system back to its original state after it’s already tipped?

• S. R. Carpenter, J. J. Cole, M. L. Pace, R. Batt, W. A. Brock, T. Cline, J. Coloso, J. R. Hodgson, J. F. Kitchell, D. A. Seekell, L. Smith, and B. Weidel, Early warnings of regime shifts: a whole-ecosystem experiment, Science 332 (2011), 1079–1082.

Abstract: Catastrophic ecological regime shifts may be announced in advance by statistical early-warning signals such as slowing return rates from perturbation and rising variance. The theoretical background for these indicators is rich but real-world tests are rare, especially for whole ecosystems. We tested the hypothesis that these statistics would be early-warning signals for an experimentally induced regime shift in an aquatic food web. We gradually added top predators to a lake over three years to destabilize its food web. An adjacent lake was monitored simultaneously as a reference ecosystem. Warning signals of a regime shift were evident in the manipulated lake during reorganization of the food web more than a year before the food web transition was complete, corroborating theory for leading indicators of ecological regime shifts.

IdeaLab program

If you’re seriously interested in this stuff, and you recently got a PhD, you should apply to IdeaLab 2013, which is a program happening at ICERM from the 15th to the 19th of July, 2013. Here’s the deal:

The Idea-Lab invites 20 early career researchers (postdoctoral candidates and assistant professors) to ICERM for a week during the summer. The program will start with brief participant presentations on their research interests in order to build a common understanding of the breadth and depth of expertise. Throughout the week, organizers or visiting researchers will give comprehensive overviews of their research topics. Organizers will create smaller teams of participants who will discuss, in depth, these research questions, obstacles, and possible solutions. At the end of the week, the teams will prepare presentations on the problems at hand and ideas for solutions. These will be shared with a broad audience including invited program officers from funding agencies.

Two Research Project Topics:

• Tipping Points in Climate Systems (MPE2013 program)

• Towards Efficient Homomorphic Encryption

IdeaLab Funding Includes:

• Travel support

• Six nights accommodations

• Meal allowance

The Application Process:

IdeaLab applicants should be at an early stage of their post-PhD career. Applications for the 2013 IdeaLab are being accepted through MathPrograms.org.

Application materials will be reviewed beginning March 15, 2013.


Black Holes and the Golden Ratio

28 February, 2013

 

The golden ratio shows up in the physics of black holes!

Or does it?

Most things get hotter when you put more energy into them. But systems held together by gravity often work the other way. For example, when a red giant star runs out of fuel and collapses, its energy goes down but its temperature goes up! We say these systems have a negative specific heat.

The prime example of a system held together by gravity is a black hole. Hawking showed—using calculations, not experiments—that a black hole should not be perfectly black. It should emit ‘Hawking radiation’. So it should have a very slight glow, as if it had a temperature above zero. For a black hole the mass of the Sun this temperature would be just 6 × 10⁻⁸ kelvin.

This is absurdly chilly, much colder than the microwave background radiation left over from the Big Bang. So in practice, such a black hole will absorb stuff—stars, nearby gas and dust, starlight, microwave background radiation, and so on—and grow bigger. But if we could protect it from all this stuff, and put it in a very cold box, it would slowly shrink by emitting radiation and losing energy, and thus mass. As it lost energy, its temperature would go up. The less energy it has, the hotter it gets: a negative specific heat! Eventually, as it shrinks to nothing, it should explode in a very hot blast.
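A back-of-the-envelope check of that 6 × 10⁻⁸ kelvin figure, using the standard formula T = \hbar c^3/8 \pi G M k_B for a Schwarzschild black hole. (This is a sketch of mine; the SI constant values below are standard approximations, not from the text.)

```python
import math

# Standard SI values (approximate), assumed here:
hbar  = 1.054571817e-34    # reduced Planck constant, J s
c     = 2.99792458e8       # speed of light, m/s
G     = 6.67430e-11        # Newton's constant, m^3 kg^-1 s^-2
k_B   = 1.380649e-23       # Boltzmann constant, J/K
M_sun = 1.989e30           # mass of the Sun, kg

def hawking_temperature(M):
    """Hawking temperature T = hbar c^3 / (8 pi G M k_B) of a
    Schwarzschild black hole of mass M (in kg), in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(hawking_temperature(M_sun))    # about 6e-8 K
```

Note the 1/M dependence: doubling the mass halves the temperature, which is the negative specific heat at work.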

But for a spinning black hole, things are more complicated. If it spins fast enough, its specific heat will be positive, like a more ordinary object.

And according to a 1989 paper by Paul Davies, the transition to positive specific heat happens at a point governed by the golden ratio! He claimed that in units where the speed of light and gravitational constant are 1, it happens when

\displaystyle{  \frac{J^2}{M^4} = \frac{\sqrt{5} - 1}{2}  }

Here J is the black hole’s angular momentum, M is its mass, and

\displaystyle{ \frac{\sqrt{5} - 1}{2} = 0.6180339\dots }

is a version of the golden ratio! This is for black holes with no electric charge.

Unfortunately, this claim is false. Cesar Uliana, who just did a master’s thesis on black hole thermodynamics, pointed this out in the comments below after I posted this article.

And curiously, twelve years before writing this paper with the mistake in it, Davies wrote a paper that got the right answer to the same problem! It’s even mentioned in the abstract.

The correct constant is not the golden ratio! The correct constant is smaller:

\displaystyle{ 2 \sqrt{3} - 3 = 0.46410161513\dots }

However, Greg Egan figured out the nature of Davies’ slip, and thus discovered how the golden ratio really does show up in black hole physics… though in a more quirky and seemingly less significant way.

As usually defined, the specific heat of a rotating black hole measures the change in internal energy per change in temperature while angular momentum is held constant. But Davies looked at the change in internal energy per change in temperature while the ratio of angular momentum to mass is held constant. It’s this modified quantity that switches from positive to negative when

\displaystyle{  \frac{J^2}{M^4} = \frac{\sqrt{5} - 1}{2} }

In other words:

Suppose we gradually add mass and angular momentum to a black hole while not changing the ratio of angular momentum, J, to mass, M. Then J^2/M^4 gradually drops. As this happens, the black hole’s temperature increases until

\displaystyle{ \frac{J^2}{M^4} = \frac{\sqrt{5} - 1}{2} }

in units where the speed of light and gravitational constant are 1. And then it starts dropping!

What does this mean? It’s hard to tell. It doesn’t seem very important, because it seems there’s no good physical reason for the ratio of J to M to stay constant. In particular, as a black hole shrinks by emitting Hawking radiation, this ratio goes to zero. In other words, the black hole spins down faster than it loses mass.

Popularizations

Discussions of black holes and the golden ratio can be found in a variety of places. Mario Livio is the author of The Golden Ratio, and also an astrophysicist, so it makes sense that he would be interested in this connection. He wrote about it here:

• Mario Livio, The golden ratio and astronomy, Huffington Post, 22 August 2012.

Marcus Chown, the main writer on cosmology for New Scientist, talked to Livio and wrote about it here:

• Marcus Chown, The golden rule, The Guardian, 15 January 2003.

Chown writes:

Perhaps the most surprising place the golden ratio crops up is in the physics of black holes, a discovery made by Paul Davies of the University of Adelaide in 1989. Black holes and other self-gravitating bodies such as the sun have a “negative specific heat”. This means they get hotter as they lose heat. Basically, loss of heat robs the gas of a body such as the sun of internal pressure, enabling gravity to squeeze it into a smaller volume. The gas then heats up, for the same reason that the air in a bicycle pump gets hot when it is squeezed.

Things are not so simple, however, for a spinning black hole, since there is an outward “centrifugal force” acting to prevent any shrinkage of the hole. The force depends on how fast the hole is spinning. It turns out that at a critical value of the spin, a black hole flips from negative to positive specific heat—that is, from growing hotter as it loses heat to growing colder. What determines the critical value? The mass of the black hole and the golden ratio!

Why is the golden ratio associated with black holes? “It’s a complete enigma,” Livio confesses.

Extremal black holes

As we’ve seen, a rotating uncharged black hole has negative specific heat whenever the angular momentum is below a certain critical value:

\displaystyle{ J < k M^2 }

where

\displaystyle{ k = \sqrt{2 \sqrt{3} - 3} = 0.68125003863\dots }

As J goes up to this critical value, the specific heat actually approaches -\infty! On the other hand, a rotating uncharged black hole has positive specific heat when

\displaystyle{  J > kM^2}

and as J goes down to this critical value, the specific heat approaches +\infty. So, there’s some sort of ‘phase transition’ at

\displaystyle{  J = k M^2 }

But as we make the black hole spin even faster, something very strange happens when

\displaystyle{ J > M^2 }

Then the black hole gets a naked singularity!

In other words, its singularity is no longer hidden behind an event horizon. An event horizon is an imaginary surface such that if you cross it, you’re doomed to never come back out. As far as we know, all black holes in nature have their singularities hidden behind an event horizon. But if the angular momentum were too big, this would not be true!

A black hole poised right at the brink:

\displaystyle{ J = M^2 }

is called an ‘extremal’ black hole.

Black holes in nature

Most physicists believe it’s impossible for black holes to go beyond extremality. There are lots of reasons for this. But do any black holes seen in nature get close to extremality? For example, do any spin so fast that they have positive specific heat? It seems the answer is yes!

Over on Google+, Robert Penna writes:

Nature seems to have no trouble making black holes on both sides of the phase transition. The spins of about a dozen solar mass black holes have reliable measurements. GRS1915+105 is close to J=M^2. The spin of A0620-00 is close to J=0. GRO J1655-40 has a spin sitting right at the phase transition.

The spins of astrophysical black holes are set by a competition between accretion (which tends to spin things up to J=M^2) and jet formation (which tends to drain angular momentum). I don’t know of any astrophysical process that is sensitive to the black hole phase transition.

That’s really cool, but the last part is a bit sad! The problem, I suspect, is that Hawking radiation is so pathetically weak.

But by the way, you may have heard of this recent paper—about a supermassive black hole that’s spinning super-fast:

• G. Risaliti, F. A. Harrison, K. K. Madsen, D. J. Walton, S. E. Boggs, F. E. Christensen, W. W. Craig, B. W. Grefenstette, C. J. Hailey, E. Nardini, Daniel Stern and W. W. Zhang, A rapidly spinning supermassive black hole at the centre of NGC 1365, Nature 494 (2013), 449–451.

They estimate that this black hole has a mass about 2 million times that of our sun, and that

\displaystyle{ J \ge 0.84 \, M^2 }

with 90% confidence. If so, this is above the phase transition where it gets positive specific heat.

The nitty-gritty details

Here is where Paul Davies claimed the golden ratio shows up in black hole physics:

• Paul C. W. Davies, Thermodynamic phase transitions of Kerr-Newman black holes in de Sitter space, Classical and Quantum Gravity 6 (1989), 1909–1914.

He works out when the specific heat vanishes for rotating and/or charged black holes in a universe with a positive cosmological constant: so-called de Sitter space. The formula is pretty complicated. Then he set the cosmological constant \Lambda to zero. In this case de Sitter space flattens out to Minkowski space, and his black holes reduce to Kerr–Newman black holes: that is, rotating and/or charged black holes in an asymptotically Minkowskian spacetime. He writes:

In the limit \alpha \to 0 (that is, \Lambda \to 0), the cosmological horizon no longer exists: the solution corresponds to the case of a black hole in asymptotically flat spacetime. In this case r may be explicitly eliminated to give

(\beta + \gamma)^3 + \beta^2 -\beta - \frac{3}{4} \gamma^2  = 0.   \qquad (2.17)

Here

\beta = a^2 / M^2

\gamma = Q^2 / M^2

and he says M is the black hole’s mass, Q is its charge and a is its angular momentum. He continues:

For \beta = 0 (i.e. a = 0) equation (2.17) has the solution \gamma = 3/4, or

\displaystyle{ Q^2 = \frac{3}{4} M^2 } \qquad  (2.18)

For \gamma = 0 (i.e. Q = 0), equation (2.17) may be solved to give \beta = (\sqrt{5} - 1)/2 or

\displaystyle{ a^2 = (\sqrt{5} - 1)M^2/2 \cong 0.62 M^2   }  \qquad  (2.19)

These were the results first reported for the black-hole case in Davies (1979).
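Both special-case solutions Davies states really do solve equation (2.17): at \gamma = 0 it reduces to \beta(\beta^2 + \beta - 1) = 0, and at \beta = 0 to \gamma^2(\gamma - 3/4) = 0. A minimal numerical check (Python is just my choice for the arithmetic here):

```python
import math

def davies_lhs(beta, gamma):
    """Left-hand side of Davies' equation (2.17)."""
    return (beta + gamma)**3 + beta**2 - beta - 0.75 * gamma**2

# gamma = 0 (uncharged): beta = (sqrt(5) - 1)/2 solves it,
# since the equation reduces to beta*(beta^2 + beta - 1) = 0.
beta = (math.sqrt(5) - 1) / 2
print(davies_lhs(beta, 0.0))    # ~0, up to rounding

# beta = 0 (non-rotating): gamma = 3/4 solves it,
# since the equation reduces to gamma^2*(gamma - 3/4) = 0.
print(davies_lhs(0.0, 0.75))    # 0
```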

In fact a can’t be the angular momentum, since the right condition for a phase transition should say the black hole’s angular momentum is some constant times its mass squared. I think Davies really meant to define

a = J/M

This is important beyond the level of a mere typo, because we get different concepts of specific heat depending on whether we hold J or a constant while taking certain derivatives!

In the usual definition of specific heat for rotating black holes, we hold J constant and see how the black hole’s heat energy changes with temperature. If we call this specific heat C_J, we have

\displaystyle{ C_J = T \left.\frac{\partial S}{\partial T}\right|_J }

where S is the black hole’s entropy. This specific heat C_J becomes infinite when

\displaystyle{ \frac{J^2}{M^4} = 2 \sqrt{3} - 3  }

But if instead we hold a = J/M constant, we get something else—and this what Davies did! If we call this modified concept of specific heat C_a, we have

\displaystyle{ C_a = T \left.\frac{\partial S}{\partial T}\right|_a }

This modified ‘specific heat’ C_a becomes infinite when

\displaystyle{  \frac{J^2}{M^4} = \frac{\sqrt{5}-1}{2} }
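We can check both thresholds numerically from the Kerr black hole’s temperature. In units where G = c = \hbar = k_B = 1, an uncharged Kerr hole with mass M and a = J/M has horizon radius r_+ = M + \sqrt{M^2 - a^2} and temperature T = \sqrt{M^2 - a^2}/4 \pi M r_+. The specific heat diverges exactly where \partial T/\partial M vanishes, with either J or a held fixed. A small numerical sketch (the bisection brackets and step sizes are choices of mine):

```python
import math

def kerr_temperature(M, a):
    """Hawking temperature of an uncharged Kerr black hole,
    in units where G = c = hbar = k_B = 1.  Here a = J/M, with a < M."""
    r_plus = M + math.sqrt(M**2 - a**2)
    return math.sqrt(M**2 - a**2) / (4 * math.pi * M * r_plus)

def dT_dM_fixed_J(M, J, h=1e-7):
    """Numerical dT/dM holding the angular momentum J fixed."""
    T = lambda m: kerr_temperature(m, J / m)
    return (T(M + h) - T(M - h)) / (2 * h)

def dT_dM_fixed_a(M, a, h=1e-7):
    """Numerical dT/dM holding a = J/M fixed (Davies' choice)."""
    return (kerr_temperature(M + h, a) - kerr_temperature(M - h, a)) / (2 * h)

def bisect(f, lo, hi, tol=1e-10):
    """Find a zero of f in [lo, hi] by bisection; f changes sign there."""
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if f(lo) * f(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# C_J diverges where dT/dM = 0 at constant J = 1:
M1 = bisect(lambda m: dT_dM_fixed_J(m, 1.0), 1.05, 2.0)
print(1.0 / M1**4)   # J^2/M^4 at the divergence; compare 2*sqrt(3) - 3 = 0.4641...

# Davies' C_a diverges where dT/dM = 0 at constant a = J/M = 1:
M2 = bisect(lambda m: dT_dM_fixed_a(m, 1.0), 1.05, 2.0)
print(1.0 / M2**2)   # here J^2/M^4 = a^2/M^2; compare (sqrt(5) - 1)/2 = 0.6180...
```

Holding J fixed picks out 2\sqrt{3} - 3; holding a fixed picks out the golden-ratio value—which is the whole content of Davies’ slip.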

After proving these facts in the comments below, Greg Egan drew some nice graphs to explain what’s going on. Here are the curves of constant temperature as a function of the black hole’s mass M and angular momentum J:

The dashed parabola passing through the peaks of the curves of constant temperature is where C_J becomes infinite. This is where energy can be added without changing the temperature, so long as it’s added in a manner that leaves J constant.

And here are the same curves of constant temperature, along with the parabola where C_a becomes infinite:

This new dashed parabola intersects each curve of constant temperature at the point where the tangent to this curve passes through the origin: that is, where the tangent is a line of constant a=J/M. This is where energy and angular momentum can be added to the hole in a manner that leaves a constant without changing the temperature.

As mentioned, Davies correctly said when the ordinary specific heat C_J becomes infinite in another paper, eleven years earlier:

• Paul C. W. Davies, Thermodynamics of black holes, Rep. Prog. Phys. 41 (1978), 1313–1355.

You can see his answer on page 1336.

This 1978 paper, in turn, is a summary of previous work including an article from a year earlier:

• Paul C. W. Davies, The thermodynamic theory of black holes, Proc. Roy. Soc. Lond. A 353 (1977), 499–521.

And in the abstract of this earlier article, Davies wrote:

The thermodynamic theory underlying black-hole processes is developed in detail and applied to model systems. It is found that Kerr-Newman black holes undergo a phase transition at an angular-momentum mass ratio of 0.68M or an electric charge (Q) of 0.86M, where the heat capacity has an infinite discontinuity. Above the transition values the specific heat is positive, permitting isothermal equilibrium with a surrounding heat bath.

Here the number 0.68 is showing up because

\displaystyle{ \sqrt{ 2 \sqrt{3} - 3 } = 0.68125003863\dots }

The number 0.86 is showing up because

\displaystyle{ \sqrt{ \frac{3}{4} } = 0.86602540378\dots }

By the way, just in case you want to do some computations using experimental data, let me put the speed of light c and gravitational constant G back in the formulas. A rotating (uncharged) black hole is extremal when

\displaystyle{ c J = G M^2 }
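For instance, a worked number (my own arithmetic, with standard approximate SI constants): the largest angular momentum a solar-mass black hole can have before exceeding extremality is J_{max} = G M^2/c.

```python
# Extremal spin of a solar-mass black hole: c J = G M^2, so J_max = G M^2 / c.
G     = 6.67430e-11        # Newton's constant, m^3 kg^-1 s^-2
c     = 2.99792458e8       # speed of light, m/s
M_sun = 1.989e30           # mass of the Sun, kg

J_max = G * M_sun**2 / c
print(J_max)               # about 8.8e41 kg m^2 / s
```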

Game Theory (Part 17)

27 February, 2013

Last time we saw the official definition of maximin strategy. Now we’ll prove something really important. In a Nash equilibrium for a zero-sum game, both players must be using a maximin strategy!

To prove this we will need to look at a lot of maxima and minima. We will always assume these maxima and minima exist. For what we’re doing, this is true. This can be proved using an important result from topology: a continuous real-valued function f: X \to \mathbb{R} on a compact set X attains both a minimum and a maximum. If you haven’t learned this yet… well, I hope you do by the time you get a degree in mathematics.

But now is not the time to talk about this. Let’s dive in!

Nash equilibria give maximin strategies

We start with a cool-looking inequality:

Theorem 1. For any zero-sum 2-player normal form game,

\displaystyle{ \min_{q'} \max_{p'} p' \cdot A q' \ge \max_{p'} \min_{q'} \; p' \cdot A q'}

Proof. Since a function is always greater than or equal to its minimum value, for any mixed strategies p and q we have

\displaystyle{  p \cdot A q \ge \min_{q'} \; p \cdot A q'}

If one function is \ge another, its maximum value is \ge the other function’s maximum value. So, the above inequality gives

\displaystyle{  \max_{p'} p' \cdot A q \ge \max_{p'} \min_{q'} \; p' \cdot A q'}

The right side here is just a number; the left side is a function of q. Since this function is always greater than or equal to the right side, so is its minimum:

\displaystyle{ \min_{q'} \max_{p'} p' \cdot A q' \ge \max_{p'} \min_{q'} \; p' \cdot A q'}   █
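To get a concrete feel for Theorem 1, we can check the inequality numerically on random 2×2 games. Since p' \cdot A q' is linear in each argument, the inner optimum is always attained at a pure strategy, which makes a grid search over the outer variable easy. (A sketch in Python; the matrices and grid size are arbitrary choices of mine.)

```python
import random

def maximin(A, steps=2000):
    """max over mixed p of min over q' of p . A q', for a 2x2 matrix A.
    Since p . A q' is linear in q', the inner minimum is attained at one
    of B's two pure strategies, so we only check the two columns."""
    best = -float('inf')
    for k in range(steps + 1):
        t = k / steps                                  # p = (t, 1 - t)
        cols = [t * A[0][j] + (1 - t) * A[1][j] for j in (0, 1)]
        best = max(best, min(cols))
    return best

def minimax(A, steps=2000):
    """min over mixed q of max over p' of p' . A q (same pure-strategy trick)."""
    best = float('inf')
    for k in range(steps + 1):
        s = k / steps                                  # q = (s, 1 - s)
        rows = [s * A[i][0] + (1 - s) * A[i][1] for i in (0, 1)]
        best = min(best, max(rows))
    return best

random.seed(1)
for _ in range(5):
    A = [[random.uniform(-1, 1) for _ in range(2)] for _ in range(2)]
    assert minimax(A) >= maximin(A)    # the inequality of Theorem 1
print("min max >= max min holds for all sampled games")
```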

Next, we’ll show this cool-looking inequality becomes an equation when a Nash equilibrium exists. In fact a Nash equilibrium always exists, but we haven’t shown this yet. So:

Theorem 2. Given a zero-sum 2-player normal form game for which a Nash equilibrium exists,

\displaystyle{\min_{q'} \max_{p'} \; p' \cdot A q' = \max_{p'} \min_{q'} \; p' \cdot A q'}

Proof. Suppose a Nash equilibrium (p,q) exists. Then for any mixed strategy p' for player A, we have

\displaystyle{ p \cdot A q \ge p' \cdot A q}

since A can’t improve their payoff by switching their mixed strategy. Similarly, for any mixed strategy q' for player B, p \cdot B q \ge p \cdot B q', since B can’t improve their payoff by switching their mixed strategy. But B = -A, so this says

\displaystyle{ p \cdot A q' \ge p \cdot A q}

Combining these two inequalities, we get

\displaystyle{ p \cdot A q' \ge p' \cdot A q}

for all p', q'. The minimum of the left side over all q' must be greater than or equal to the right side, which doesn’t depend on q':

\displaystyle{ \min_{q'} p \cdot A q' \ge p' \cdot A q}

Now the maximum of the right side over all p' must be less than or equal to the left side, which doesn’t depend on p':

\displaystyle{ \min_{q'} p \cdot A q' \ge \max_{p'} p' \cdot A q}

It follows that

\begin{array}{ccl} \displaystyle{ \max_{p'} \min_{q'} p' \cdot A q'} &\ge& \displaystyle{ \min_{q'} p \cdot A q'} \\  \\  &\ge&  \displaystyle{  \max_{p'} p' \cdot A q } \\  \\ &\ge&  \displaystyle{ \min_{q'} \max_{p'} p' \cdot A q' } \end{array}

The middle inequality here is the one we saw a moment ago. The first inequality comes from the fact that the maximum value of a function is greater than or equal to any of its values:

\textrm{for all } x, \; \displaystyle{ \max_{x'} f(x') \ge f(x) }

so

\displaystyle{ \max_{p'} \min_{q'} p' \cdot A q' \ge \min_{q'} p \cdot A q' }

And the last inequality comes from the fact that the values of a function are always greater than or equal to its minimum value:

\textrm{for all } x, \; \displaystyle{ f(x) \ge \min_{x'} f(x') }

so

\displaystyle{  \max_{p'} p' \cdot A q  \ge  \min_{q'} \max_{p'} p' \cdot A q' }

Putting all these inequalities together, we get

\displaystyle{ \max_{p'} \min_{q'} \; p' \cdot A q' \ge \min_{q'} \max_{p'} \; p' \cdot A q'}

On the other hand, Theorem 1 gives an inequality pointing the other way:

\displaystyle{ \min_{q'} \max_{p'} p' \cdot A q' \ge \max_{p'} \min_{q'} \; p' \cdot A q'}

So, the two sides must be equal:

\displaystyle{ \max_{p'} \min_{q'} \; p' \cdot A q' = \min_{q'} \max_{p'} \; p' \cdot A q'}

which is what we were trying to show!   █

What’s the point of this cool-looking equation? One point is it connects the terms ‘minimax’ and ‘maximin’. There’s a lot more to say about it. But right now, we need it for one big thing: it lets us prove that in a Nash equilibrium for a zero-sum game, both players must be using a maximin strategy!

Theorem 3. If (p,q) is a Nash equilibrium for a zero-sum 2-player normal-form game, then p is a maximin strategy for player A and q is a maximin strategy for player B.

Proof. Let’s assume that (p,q) is a Nash equilibrium. We need to show that p is a maximin strategy for player A and q is a maximin strategy for player B.

First let’s remember some things we saw in the proof of Theorem 2. We assumed that (p,q) is a Nash equilibrium, and we showed

\begin{array}{ccl} \displaystyle{ \max_{p'} \min_{q'} p' \cdot A q'} &\ge& \displaystyle{ \min_{q'} p \cdot A q'} \\  \\  &\ge&  \displaystyle{  \max_{p'} p' \cdot A q } \\  \\ &\ge&  \displaystyle{ \min_{q'} \max_{p'} p' \cdot A q' } \end{array}

If this looks confusing, go back to the proof of Theorem 2. But now look at the beginning and the end of this chain of inequalities. We saw in Theorem 2 that they’re equal! So all the stuff in the middle has to be equal, too. In particular,

\displaystyle{ \min_{q'} \; p \cdot A q'  = \max_{p'} \min_{q'} \; p' \cdot A q' }

Last time we saw this means that p is a maximin strategy for player A. Also,

\displaystyle{  \max_{p'} \; p' \cdot A q  = \min_{q'} \max_{p'} \; p' \cdot A q' }

Last time we saw this means that q is a maximin strategy for player B.   █

Whew! That was quite a workout!

But we’re on a mission here, and we’re not done. We’ve shown that if (p,q) is a Nash equilibrium, p and q are maximin strategies. Next time we’ll try to show the converse: if p and q are maximin strategies, then (p,q) is a Nash equilibrium.
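Here’s a tiny numerical illustration of what we just proved, using matching pennies (my example, not one from the text): at the Nash equilibrium p = q = (1/2, 1/2), player A’s security level equals A’s best-response payoff against q, and both equal the value 0, exactly as Theorems 2 and 3 demand.

```python
# Matching pennies: player A wins +1 if the choices match, loses 1 otherwise.
A = [[1, -1], [-1, 1]]
p = [0.5, 0.5]      # A's equilibrium mixed strategy
q = [0.5, 0.5]      # B's equilibrium mixed strategy

def payoff(p, A, q):
    """Expected payoff p . A q to player A."""
    return sum(p[i] * A[i][j] * q[j] for i in range(2) for j in range(2))

# The inner optimum over the opponent's mixed strategies is attained at a
# pure strategy, so checking B's two pure strategies gives A's security level:
security_A = min(payoff(p, A, [1, 0]), payoff(p, A, [0, 1]))

# Likewise, A's best-response payoff against q:
best_vs_q = max(payoff([1, 0], A, q), payoff([0, 1], A, q))

print(security_A, best_vs_q)    # both 0: maximin value = minimax value here
```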


Game Theory (Part 16)

26 February, 2013

Last time we looked at a zero-sum game and noticed that when both players use their maximin strategy, we get a Nash equilibrium. This isn’t a coincidence—it always works this way for zero-sum games! This fact is not obvious. It will take some work to prove it. This will be our first really big theorem.

But first, we need to define a maximin strategy.

I already tried to give you the rough idea: it’s where you pick a mixed strategy that maximizes your expected payoff while assuming that no matter what mixed strategy you pick, the other player will pick the mixed strategy that minimizes your expected payoff. But to prove things about this concept, we need a more precise definition.

The setup

First, remember the setup. We’re talking about a zero-sum 2-player normal form game. So as usual, we assume:

• Player A has some set of choices i = 1, \dots, m.

• Player B has some set of choices j = 1, \dots, n.

• If player A makes choice i and player B makes choice j, the payoff to player A is A_{ij} and the payoff to player B is B_{ij}.

But, because it’s zero-sum game, we also assume

A_{ij} + B_{ij} = 0

for all choices i = 1, \dots, m and j = 1, \dots, n. In other words, the payoff matrices A and B, whose entries are these numbers A_{ij} and B_{ij}, sum to zero:

A + B = 0

We’ll let p stand for A’s mixed strategy, and let q stand for B’s mixed strategy. These are probability distributions. So, p = (p_1, \dots, p_m) is a vector in \mathbb{R}^m obeying

0 \le p_i \le 1 , \quad \displaystyle{ \sum_{i = 1}^m p_i = 1 }

while q = (q_1, \dots, q_n) is a vector in \mathbb{R}^n obeying

0 \le q_j \le 1 , \quad  \displaystyle{ \sum_{j=1}^n q_j = 1 }

Given these mixed strategies, A’s expected payoff is

p \cdot A q

while B’s expected payoff is

p \cdot B q = - p \cdot A q

Since B = -A, we will never mention the matrix B again!
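In coordinates, A’s expected payoff is p \cdot A q = \sum_{i,j} p_i A_{ij} q_j. Here’s that double sum written out for a made-up 2×3 game (the payoff matrix and strategies are my own illustrative choices):

```python
# A's expected payoff p . A q as a double sum over the players' choices.
A = [[2, -1, 0],
     [1,  3, -2]]            # payoffs A_ij to player A
p = [0.25, 0.75]             # A's mixed strategy, m = 2 choices
q = [0.5, 0.3, 0.2]          # B's mixed strategy, n = 3 choices

expected_A = sum(p[i] * A[i][j] * q[j]
                 for i in range(len(p)) for j in range(len(q)))
print(expected_A)            # about 0.925
# B's expected payoff is just the negative, since B = -A:
print(-expected_A)           # about -0.925
```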

Minima and maxima

As you might guess, we’re going to talk a lot about minima and maxima. So, we should be really clear about what they are!

Definition 1. Suppose f : X \to \mathbb{R} is any real-valued function defined on any set X. We say f has a maximum at x \in X if

f(x) \ge f(x')

for all x' \in X. In this case we call the number f(x) the maximum value of f, and we write

\displaystyle{ f(x) = \max_{x' \in X} f(x') }

Note that mathematicians use ‘maximum’ to mean an element x where the function f gets as big as possible… and use ‘maximum value’ to mean how big f gets there. This is a bit different than ordinary English!

Also note that a maximum may not exist! And if it exists, it may not be unique. For example, the function x^2 on the real line has no maximum, while the sine function has lots. So unless we know for sure a function has exactly one maximum, we should talk about a maximum instead of the maximum.

Similar stuff is true for minima, too:

Definition 2. Suppose f : X \to \mathbb{R} is any real-valued function defined on any set X. We say f has a minimum at x \in X if

f(x) \le f(x')

for all x' \in X. In this case we call the number f(x) the minimum value of f, and we write

\displaystyle{ f(x) = \min_{x' \in X} f(x') }

Security levels

Pretend you’re player A. Since it’s a zero-sum game, we know B will try to maximize their payoff… which means minimizing your payoff. So, no matter what your mixed strategy p is, you should expect that B will find a mixed strategy q' that’s a minimum of

p \cdot A q

So, you should only expect to get a payoff of

\displaystyle{ \min_{q' \in \{ \textrm{B's mixed strategies} \}} \; p \cdot A q' }

We call this player A’s security level. For short, let’s write it as

\displaystyle{ \min_{q'} \; p \cdot A q' }

A’s security level is a function of p. We graphed this function in the example we looked at last time. It’s the dark curve here:

The different lines show p \cdot A q for different choices of q. The minimum of all these gives the dark curve.

Maximin strategies

Last time we found A’s maximin strategy by finding the maximum of A’s security level—the place where that dark curve is highest!

Suppose p is a maximin strategy for player A. Since it maximizes A’s security level,

\displaystyle{ \min_{q'} \; p \cdot A q' \ge  \min_{q'} \; p' \cdot A q' }

for all mixed strategies p' for player A. In other words, we have

\displaystyle{ \min_{q'} \; p \cdot A q'  = \max_{p'} \min_{q'} \; p' \cdot A q' }

If you look at this formula, you can really see why we use the word ‘maximin’!

It’s a little-known false fact that the concept of maximin strategy was named after the Roman emperor Maximin. Such an emperor really does exist! But in fact, there were two Roman emperors named Maximin. So he exists, but he’s not unique.

 

Definitions

Now we’re ready for some definitions that summarize what we’ve learned.

Definition 3. Given a zero-sum 2-player normal form game and a mixed strategy p for player A, we define A’s security level to be

\displaystyle{ \min_{q'} \; p \cdot A q' }

Definition 4. Given a zero-sum 2-player normal form game, we say a mixed strategy p for player A is a maximin strategy if

\displaystyle{ \min_{q'} \; p \cdot A q'  = \max_{p'} \min_{q'} \; p' \cdot A q' }
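To see Definitions 3 and 4 in action, we can compute a maximin strategy for a small made-up game by maximizing the security level over a grid of mixed strategies p = (t, 1-t). (The payoff matrix and grid size here are my own choices; for this matrix the two ‘column lines’ cross at t = 2/5, so the dark curve peaks there.)

```python
# Maximin strategy for a made-up 2x2 zero-sum game, found by maximizing
# A's security level over a grid of mixed strategies p = (t, 1 - t).
A = [[2, -1],
     [-1, 1]]

def security_level(t):
    """A's security level for p = (t, 1 - t).  Since p . A q' is linear
    in q', the minimum over B's mixed strategies is attained at one of
    B's two pure strategies -- the two columns of A."""
    return min(t * A[0][0] + (1 - t) * A[1][0],
               t * A[0][1] + (1 - t) * A[1][1])

steps = 100000
t_best = max(range(steps + 1), key=lambda k: security_level(k / steps)) / steps
print(t_best, security_level(t_best))
# about (0.4, 0.2): the maximin strategy is p = (0.4, 0.6), security level 1/5
```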

We can also make up definitions that apply to player B. We just need to remember that there’s a minus sign in B’s expected payoff:

Definition 5. Given a zero-sum 2-player normal form game and a mixed strategy q for player B, we define B’s security level to be

\displaystyle{ \min_{p'} \; - p' \cdot A q }

Definition 6. Given a zero-sum 2-player normal form game, we say a mixed strategy q for B is a minimax strategy for B if

\displaystyle{  \min_{p'} \; - p' \cdot A q = \max_{q'} \min_{p'} \; - p' \cdot A q' }

But there’s an easy fact about maxima and minima that lets us simplify this last definition. To make a function -f as big as possible is the same as making f as small as possible and then sticking a minus sign in front:

\displaystyle{ \max_{x \in X} -f(x) = - \min_{x \in X} f(x)}

Similarly, to minimize -f is the same as maximizing f and then taking the negative of that:

\displaystyle{ \min_{x \in X} -f(x) = - \max_{x \in X} f(x)}
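If you want to see these two rules in action, here’s a tiny Python check using a made-up function f on a three-element set:

```python
# Check the two negation rules on a small finite set X,
# with an arbitrary made-up function f.
f = {0: 3.0, 1: -1.0, 2: 7.0}
X = f.keys()

assert max(-f[x] for x in X) == -min(f[x] for x in X)  # max(-f) = -min(f)
assert min(-f[x] for x in X) == -max(f[x] for x in X)  # min(-f) = -max(f)
print("both rules hold")
```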

Using these rules, we get this little theorem:

Theorem 1. Given a zero-sum 2-player normal form game, q is a minimax strategy for B if and only if:

\displaystyle{  \max_{p'} \; p' \cdot A q  = \min_{q'} \max_{p'} \; p' \cdot A q' }

Proof. Suppose that q is a minimax strategy for B. By Definition 6,

\displaystyle{  \min_{p'} \; - p' \cdot A q = \max_{q'} \min_{p'} \; - p' \cdot A q' }

Repeatedly using the rules for pushing minus signs through max and min, we see:

\displaystyle{ - \max_{p'} \; p' \cdot A q = \max_{q'} \left( - \max_{p'} \; p' \cdot A q' \right) }

and then

\displaystyle{ - \max_{p'} \; p' \cdot A q = - \min_{q'} \max_{p'} \; p' \cdot A q' }

Taking the negative of both sides, we get the equation we want:

\displaystyle{  \max_{p'} \; p' \cdot A q  = \min_{q'} \max_{p'} \; p' \cdot A q' }

We can also reverse our calculation and show that this equation implies q is a minimax strategy. So, this equation is true if and only if q is a minimax strategy for B.   █
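As a quick computational aside (mine, not part of the text): we can approximate this minimum of maxima on a grid, again using the payoff matrix from last time’s example, and compare it with the maximum of minima. They happen to agree here — a hint of the minimax theorem we’ll meet next time:

```python
# Compare max_p min_q p.Aq with min_q max_p p.Aq on a grid of
# mixed strategies, for last time's example payoff matrix.
A = [[10.0, -10.0],
     [-10.0, 20.0]]
grid = [i / 100 for i in range(101)]

def pAq(p1, q1):
    p, q = (p1, 1 - p1), (q1, 1 - q1)
    return sum(p[i] * A[i][j] * q[j] for i in range(2) for j in range(2))

maximin = max(min(pAq(p1, q1) for q1 in grid) for p1 in grid)
minimax = min(max(pAq(p1, q1) for p1 in grid) for q1 in grid)
print(maximin, minimax)  # both come out to 2 (up to rounding)
```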

This little theorem talks about a minimum of maxima instead of a maximum of minima. This is one reason people talk about ‘minimax strategies’. In fact what we’re calling a maximin strategy, people often call a minimax strategy!

Next time we’ll start proving some really important things, which were first shown by the great mathematician John von Neumann:

• If both players in a zero-sum game use a maximin strategy, we get a Nash equilibrium.

• In any Nash equilibrium for a zero-sum game, both players must be using a maximin strategy!

• For any zero-sum game, there exists a Nash equilibrium.

Now we’re talking about mixed strategies, of course. We already saw a while back that if we only consider pure strategies, there are games like rock-paper-scissors and matching pennies that don’t have a Nash equilibrium.

Before I quit, one more false fact. Just as the maximin strategy was named after the emperor Maximin, the minimax strategy was named after the emperor Minimax! I mentioned that Emperor Maximin really exists, but is not unique. The case of Emperor Minimax is even more interesting. He’s really unique… but he does not exist!


Open Access to Taxpayer-Funded Research

23 February, 2013

According to a White House webpage, John Holdren, director of the White House Office of Science and Technology Policy, has

… issued a memorandum today to Federal agencies that directs those with more than $100 million in research and development expenditures to develop plans to make the results of federally-funded research publicly available free of charge within 12 months after original publication.

This is already true for research funded by the National Institutes of Health. For years some of us have been pushing for the National Science Foundation and other agencies to do the same thing. Elsevier and other companies fought against it, even trying to get a law passed to stop it… but a petition to the White House seems to have had an effect!

In response to this petition, Holdren now says:

while this new policy call does not insist that every agency copy the NIH approach exactly, it does ensure that similar policies will appear across government.

If this really happens, it will be very big news. So let’s fight to make sure this initiative doesn’t get watered down or undermined by the bad guys! The quickest and easiest thing is to talk to the Office of Science and Technology Policy, either by phone or email, as explained here. A phone call counts more than an email.

One great thing about Holdren’s new memo is that it requires open access to experimental data, not just papers.

And one sad thing is that it only applies to federally funded research in the sciences, not the humanities. It does not apply to the National Endowment for the Humanities. Done well, research in the humanities can be just as important as scientific research… since most of our problems involve humans.


Maximum Entropy and Ecology

21 February, 2013

I already talked about John Harte’s book on how to stop global warming. Since I’m trying to apply information theory and thermodynamics to ecology, I was also interested in this book of his:

John Harte, Maximum Entropy and Ecology, Oxford U. Press, Oxford, 2011.

There’s a lot in this book, and I haven’t absorbed it all, but let me try to briefly summarize his maximum entropy theory of ecology. This aims to be “a comprehensive, parsimonious, and testable theory of the distribution, abundance, and energetics of species across spatial scales”. One great thing is that he makes quantitative predictions using this theory and compares them to a lot of real-world data. But let me just tell you about the theory.

It’s heavily based on the principle of maximum entropy (MaxEnt for short), and there are two parts:

Two MaxEnt calculations are at the core of the theory: the first yields all the metrics that describe abundance and energy distributions, and the second describes the spatial scaling properties of species’ distributions.

Abundance and energy distributions

The first part of Harte’s theory is all about a conditional probability distribution

R(n,\epsilon | S_0, N_0, E_0)

which he calls the ecosystem structure function. Here:

S_0: the total number of species under consideration in some area.

N_0: the total number of individuals under consideration in that area.

E_0: the total rate of metabolic energy consumption of all these individuals.

Given this,

R(n,\epsilon | S_0, N_0, E_0) \, d \epsilon

is the probability that given S_0, N_0, E_0, if a species is picked from the collection of species, then it has n individuals, and if an individual is picked at random from that species, then its rate of metabolic energy consumption is in the interval (\epsilon, \epsilon + d \epsilon).

Here of course d \epsilon is ‘infinitesimal’, meaning that we take a limit where it goes to zero to make this idea precise (if we’re doing analytical work) or take it to be very small (if we’re estimating R from data).

I believe that when we ‘pick a species’ we’re treating them all as equally probable, not weighting them according to their number of individuals.

Clearly R obeys some constraints. First, since it’s a probability distribution, it obeys the normalization condition:

\displaystyle{ \sum_n \int d \epsilon \; R(n,\epsilon | S_0, N_0, E_0) = 1 }

Second, since the average number of individuals per species is N_0/S_0, we have:

\displaystyle{ \sum_n \int d \epsilon \; n R(n,\epsilon | S_0, N_0, E_0) = N_0 / S_0 }

Third, since the average over species of the total rate of metabolic energy consumption of individuals within the species is E_0/ S_0, we have:

\displaystyle{ \sum_n \int d \epsilon \; n \epsilon R(n,\epsilon | S_0, N_0, E_0) = E_0 / S_0 }

Harte’s theory is that R maximizes entropy subject to these three constraints. Here entropy is defined by

\displaystyle{ - \sum_n \int d \epsilon \; R(n,\epsilon | S_0, N_0, E_0) \ln(R(n,\epsilon | S_0, N_0, E_0)) }

Harte uses this theory to calculate R, and tests the results against data from about 20 ecosystems. For example, he predicts the abundance of species as a function of their rank, with rank 1 being the most abundant, rank 2 being the second most abundant, and so on. And he gets results like this:

The data here are from:

• Green, Harte, and Ostling’s work on a serpentine grassland,

• data from Luquillo, a 10.24-hectare tropical forest plot, and

• data from Cocoli, a 2-hectare wet tropical forest plot.

The fit looks good to me… but I should emphasize that I haven’t had time to study these matters in detail. For more, you can read this paper, at least if your institution subscribes to this journal:

• J. Harte, T. Zillio, E. Conlisk and A. Smith, Maximum entropy and the state-variable approach to macroecology, Ecology 89 (2008), 2700–2711.

Spatial abundance distribution

The second part of Harte’s theory is all about a conditional probability distribution

\Pi(n | A, n_0, A_0)

This is the probability that n individuals of a species are found in a region of area A given that it has n_0 individuals in a larger region of area A_0.

\Pi obeys two constraints. First, since it’s a probability distribution, it obeys the normalization condition:

\displaystyle{ \sum_n  \Pi(n | A, n_0, A_0) = 1 }

Second, since the mean value of n across regions of area A equals n_0 A/A_0, we have

\displaystyle{ \sum_n n \Pi(n | A, n_0, A_0) = n_0 A/A_0 }

Harte’s theory is that \Pi maximizes entropy subject to these two constraints. Here entropy is defined by

\displaystyle{- \sum_n  \Pi(n | A, n_0, A_0)\ln(\Pi(n | A, n_0, A_0)) }
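Here’s a little Python sketch of this second MaxEnt calculation (my own toy implementation, not Harte’s code; the names `maxent_pi` and `frac` are made up). The standard Lagrange-multiplier argument says the entropy-maximizing \Pi subject to normalization and a mean constraint has the form \Pi(n) \propto x^n for some constant x > 0, so we just solve for x numerically to match the mean:

```python
import math

def maxent_pi(n0, frac):
    """Maximum-entropy distribution Pi(n) for n = 0..n0, subject to
    normalization and mean n = n0 * frac (where frac = A / A0).
    The MaxEnt solution has the form Pi(n) proportional to x**n;
    we solve for x by bisection on the mean, which increases with x."""
    target = n0 * frac

    def mean_for(x):
        w = [x ** n for n in range(n0 + 1)]
        return sum(n * wn for n, wn in enumerate(w)) / sum(w)

    lo, hi = 1e-9, 1e9
    for _ in range(200):
        mid = math.sqrt(lo * hi)  # bisect on a log scale
        if mean_for(mid) < target:
            lo = mid
        else:
            hi = mid
    x = math.sqrt(lo * hi)
    w = [x ** n for n in range(n0 + 1)]
    z = sum(w)
    return [wn / z for wn in w]

pi = maxent_pi(n0=10, frac=0.3)
print(sum(n * p for n, p in enumerate(pi)))  # mean is ~3.0, as required
```

One nice sanity check: at A = A_0/2 the mean constraint gives n_0/2, and the solution is x = 1, i.e. the uniform distribution — exactly what maximum entropy should give when the constraint is symmetric.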

Harte explains two approaches to using this idea to derive ‘scaling laws’ for how the abundance n varies with the area A. And again, he compares his predictions to real-world data, and gets results that look good to my (amateur, hasty) eye!

I hope sometime I can dig deeper into this subject. Do you have any ideas, or knowledge about this stuff?


Game Theory (Part 15)

20 February, 2013

In Part 13 we saw an example of a Nash equilibrium where both players use a mixed strategy: that is, make their choice randomly, using a certain probability distribution on their set of pure strategies.

We found this Nash equilibrium using the oldest method known to humanity: we guessed it. Guessing is what mathematicians do when we don’t know anything better to do. When you’re trying to solve a new kind of problem, it’s often best to start with a very easy example, where you can guess the answer. While you’re doing this, you should try to notice patterns! These can give you tricks that are useful for harder problems, where pure guessing won’t usually work so well.

So now let’s try a slightly harder example. In Part 13 we looked at the game ‘matching pennies’, which was a zero-sum game. Now we’ll look at a modified version. It will still be a zero-sum game. We’ll find a Nash equilibrium by finding each player’s ‘maximin strategy’. This technique works for zero-sum two-player normal-form games, but not most other games.

Why the funny word ‘maximin’?

In this sort of strategy, you assume that no matter what you do, the other player will do whatever minimizes your expected payoff. This is a sensible assumption in a zero-sum game! They are trying to maximize their expected payoff, but in a zero-sum game whatever they win, you lose… so they’ll minimize your expected payoff.

So, in a maximin strategy you try to maximize your expected payoff while assuming that given whatever strategy you use, your opponent will use a strategy that minimizes your expected payoff.

Think about that sentence: it’s tricky! But it explains the funny word ‘maximin’. In brief, you’re trying to maximize the minimum of your expected payoff.

We’ll make this more precise in a while… but let’s dive in and look at a new game!

Matching pennies—modified version

In this new game, each player has a penny and must secretly turn the penny to either heads or tails. They then reveal their choices simultaneously. If the pennies do not match—one heads and one tails—B wins 10 cents from A. If the pennies are both heads, player A wins 10 cents from player B. And if the pennies are both tails, player A wins 20 cents from player B.

Let’s write this game in normal form! If we say

• choice 1 is heads
• choice 2 is tails

then the normal form looks like this:

          1          2
1      (10,-10)   (-10,10)
2      (-10,10)   (20,-20)

We can also break this into two matrices in the usual way, one showing the payoff for A:

A = \left( \begin{array}{rr} 10 & -10 \\ -10 & 20 \end{array} \right)

and one showing the payoff for B:

B = \left( \begin{array}{rr} -10 & 10 \\ 10 & -20 \end{array} \right)

Before we begin our real work, here are some warmup puzzles:

Puzzle 1. Is this a zero-sum game?

Puzzle 2. Is this a symmetric game?

Puzzle 3. Does this game have a Nash equilibrium if we consider only pure strategies?

Puzzle 4. Does this game have a dominant strategy for either player A or player B?

A’s expected payoff

Remember that we write A’s mixed strategy as

p = (p_1, p_2)

Here p_1 is the probability that A makes choice 1: that is, chooses heads. p_2 is the probability that A makes choice 2: that is, chooses tails. Similarly, we write B’s mixed strategy as

q = (q_1, q_2)

Here q_1 is the probability that B makes choice 1, and q_2 is the probability that B makes choice 2.

Given all this, the expected value of A’s payoff is

p \cdot A q

We saw this back in Part 12.

Now let’s actually calculate the expected value of A’s payoff. It helps to remember that probabilities add up to 1, so

p_1 + p_2 = 1, \qquad q_1 + q_2 = 1

It’s good to have fewer variables to worry about! So, we’ll solve for p_2 and q_2:

p_2 = 1 - p_1, \qquad q_2 = 1 - q_1

and write

p = (p_1, 1 - p_1)
q = (q_1, 1 - q_1)

Then we can calculate A’s expected payoff. We start by computing A q:

\begin{array}{ccl} A q &=&   \left( \begin{array}{rr} 10 & -10 \\ -10 & 20 \end{array} \right) \left( \begin{array}{c} q_1 \\ 1 - q_1 \end{array} \right) \\  \\ &=& \left( \begin{array}{c} 10 q_1 - 10(1-q_1) \\ -10 q_1 + 20(1 - q_1) \end{array} \right) \\  \\ &=& \left( \begin{array}{c} 20 q_1 - 10  \\ 20 - 30 q_1  \end{array} \right) \end{array}

This gives

\begin{array}{ccl} p \cdot A q &=& (p_1 , 1 - p_1) \cdot (20 q_1 - 10  ,  20 - 30 q_1) \\  \\  &=& p_1(20 q_1 - 10) + (1-p_1)(20 - 30 q_1) \\ \\ &=& 20 - 30 p_1 - 30 q_1 + 50 p_1 q_1 \end{array}
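Just to double-check the algebra, here’s a quick Python test (my own, just for reassurance) that this closed form agrees with the direct matrix calculation for lots of random values of p_1 and q_1:

```python
import random

# Check the closed form p . A q = 20 - 30 p1 - 30 q1 + 50 p1 q1
# against a direct matrix computation, for many random p1, q1.
A = [[10.0, -10.0],
     [-10.0, 20.0]]

def pAq(p1, q1):
    p, q = (p1, 1 - p1), (q1, 1 - q1)
    return sum(p[i] * A[i][j] * q[j] for i in range(2) for j in range(2))

random.seed(0)
for _ in range(1000):
    p1, q1 = random.random(), random.random()
    closed = 20 - 30 * p1 - 30 * q1 + 50 * p1 * q1
    assert abs(pAq(p1, q1) - closed) < 1e-9
print("closed form matches the matrix calculation")
```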

This is a bit complicated, so let’s graph A’s expected payoff in the extreme cases where B either makes choice 1 with probability 100%, or choice 2 with probability 100%.

If B makes choice 1 with probability 100%, then q_1 = 1. Put this in the formula above and we see A’s expected payoff is

p \cdot A q = 20 p_1 - 10

If we graph this as a function of p_1 we get a straight line:

On the other hand, if B makes choice 2 with probability 100%, then q_2 = 1 so q_1 = 0. Now A’s expected payoff is

p \cdot A q = 20 - 30 p_1

Graphing this, we get:

The point of these graphs becomes clearer if we draw them both together:

Some interesting things happen where the lines cross! We’ll get A’s maximin strategy by picking the value of p_1 where these lines cross! And in fact, there’s a Nash equilibrium where player A chooses this value of p_1, and B uses the same trick to choose his value of q_1.

But we can already see something simpler. We’ve drawn graphs of A’s expected payoff for two extreme cases of player B’s mixed strategy: q_1 = 1 and q_1 = 0. Suppose B uses some other mixed strategy, say q_1 = 2/5 for example. Now A’s expected payoff will be something between the two lines we’ve drawn. Let’s see what it is:

\begin{array}{ccl} p \cdot A q &=& 20 - 30 p_1 - 30 q_1 + 50 p_1 q_1 \\  \\  &=& 20 - 30 p_1 - 30 \cdot \frac{2}{5} + 50 p_1 \cdot \frac{2}{5} \\  \\  &=& 8 - 10p_1  \end{array}

If we graph this along with our other two lines, we get this:

We get a line between the other two, as we expect. But more importantly, all three lines cross at the same point!

That’s not a coincidence, that’s how it always works. If we draw lines for more different choices of B’s mixed strategy, they look like this:

The point where they all intersect looks important! It is. Soon we’ll see why.

A’s maximin strategy

Now we’ll find A’s maximin strategy. Let me explain the idea a bit more. The idea is that A is cautious, so he wants to choose p_1 that maximizes his expected payoff in the worst-case scenario. In other words, A wants to maximize his expected payoff assuming that B is trying to minimize A’s expected payoff.

Read that last sentence a few times, because it’s confusing at first.

Why would B try to minimize A’s expected payoff? B isn’t nasty: he’s just a rational agent trying to maximize his own expected payoff! But if you solved Puzzle 1, you know this game is a zero-sum game. So A’s payoff is minus B’s payoff. Thus, if B is trying to maximize his own expected payoff, he’s also minimizing A’s expected payoff.

Given this, let’s figure out what A should do. A should look at different mixed strategies B can use, and graph A’s payoff as a function of p_1. We did this:

Then, he should choose p_1 that maximizes his expected payoff in the worst-case scenario. To do this, he should focus attention on the lines that give him the smallest expected payoff:

This is the dark curve. It’s not very ‘curvy’, but mathematicians consider a broken line to be an example of a curve!

To find this dark curve, all that matters are extreme cases of B’s strategy: the case q_1 = 1 and the case q_1 = 0. So we can ignore the intermediate cases, and simplify our picture:

The dark curve is highest where the two lines cross! This happens when

20 p_1 - 10 = 20 - 30 p_1

or in other words

p_1 = 3/5

So, A’s maximin strategy is

p = (3/5, 2/5)

It’s the mixed strategy that maximizes A’s expected payoff given that B is trying to minimize A’s expected payoff.
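Here’s how a computer could find this same answer (a little Python sketch of my own, using the fact that the worst case for A is always one of B’s two pure strategies):

```python
# Find A's maximin strategy numerically: for each p1, take the worst case
# over B's two pure strategies (the minimum over B's mixed strategies is
# always attained at one of them), then maximize over p1.
def worst_case(p1):
    line_q1 = 20 * p1 - 10   # B always plays choice 1 (q1 = 1)
    line_q0 = 20 - 30 * p1   # B always plays choice 2 (q1 = 0)
    return min(line_q1, line_q0)

best_p1 = max((i / 1000 for i in range(1001)), key=worst_case)
print(best_p1, worst_case(best_p1))  # 0.6 2.0
```

The grid search lands on p_1 = 3/5, where the worst-case expected payoff is 2 — the top of the dark curve.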

B’s expected payoff

Next let’s give the other guy a chance. Let’s work out B’s expected payoff and maximin strategy. The expected value of B’s payoff is

p \cdot B q

We could calculate this just like we calculated p \cdot A q , but it’s quicker to remember that this is a zero-sum game:

B = - A

so that

p \cdot B q = - p \cdot A q

and since we already saw that

p \cdot A q = 20 - 30 p_1 - 30 q_1 + 50 p_1 q_1

now we have

p \cdot B q = -20 + 30 p_1 + 30 q_1 - 50 p_1 q_1

B’s maximin strategy

To figure out B’s maximin strategy, we’ll copy what worked for player A. First we’ll work out B’s expected payoff in two extreme cases:

• The case where A always makes choice 1: p_1 = 1.

• The case where A always makes choice 2: p_1 = 0.

We’ll get two functions of q_1 whose graphs are lines. Then we’ll find where these lines intersect!

Let’s get to work. When p_1 = 1 we get

\begin{array}{ccl} p \cdot B q &=& -20 + 30 p_1 + 30 q_1 - 50 p_1 q_1  \\  \\  &=& 10 - 20 q_1 \end{array}

When p_1 = 0 we get

p \cdot B q = -20 + 30 q_1

I won’t graph these two lines, but they intersect when

10 - 20 q_1 = -20 + 30 q_1

or

q_1 = 3/5

Hmm, it’s sort of a coincidence that we’re getting the same number that we got for p_1; it won’t always work this way! But anyway, we see that B’s maximin strategy is

q = (3/5, 2/5)
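Again we can check this with a few lines of Python (same trick as before, now from B’s point of view, remembering that p \cdot B q = -(p \cdot A q)):

```python
# B's maximin strategy: worst case over A's two pure strategies,
# then maximize over q1. Recall p . B q = -(p . A q).
def b_worst_case(q1):
    line_p1 = 10 - 20 * q1    # A always plays choice 1 (p1 = 1)
    line_p0 = -20 + 30 * q1   # A always plays choice 2 (p1 = 0)
    return min(line_p1, line_p0)

best_q1 = max((i / 1000 for i in range(1001)), key=b_worst_case)
print(best_q1, b_worst_case(best_q1))  # 0.6 -2.0
```

So B’s security level is -2: by playing q_1 = 3/5, B guarantees losing no more than 2 cents on average.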

A Nash equilibrium

Now let’s put the pieces together. What happens in a zero-sum game when player A and player B both choose a maximin strategy? It’s not too surprising: we get a Nash equilibrium!

Let’s see why in this example. (We can do a general proof later.) We have

p = (3/5, 2/5)

and

q = (3/5, 2/5)

To show that p and q form a Nash equilibrium, we need to check that neither player could improve their expected payoff by changing to a different mixed strategy. Remember:

Definition. Given a 2-player normal form game, a pair of mixed strategies (p,q), one for player A and one for player B, is a Nash equilibrium if:

1) For all mixed strategies p' for player A, p' \cdot A q \le p \cdot A q.

2) For all mixed strategies q' for player B, p \cdot B q' \le p \cdot B q.

I’ll check clause 1) and I’ll let you check clause 2), which is similar. For clause 1) we need to check

p' \cdot A q \le p \cdot A q

for all mixed strategies p'. Just like we did with p, we can write

p' = (p'_1, 1 - p'_1)

We can reuse one of our old calculations to see that

\begin{array}{ccl} p' \cdot A q &=& 20 - 30 p'_1 - 30 q_1 + 50 p'_1 q_1 \\   \\  &=& 20 - 30 p'_1 - 30 \cdot \frac{3}{5} + 50 p'_1 \cdot \frac{3}{5} \\   \\  &=& 2 \end{array}

Hmm, A’s expected payoff is always 2, no matter what mixed strategy he uses, as long as B uses his maximin strategy q! So of course we have

p' \cdot A q = p \cdot A q = 2

which implies that

p' \cdot A q \le p \cdot A q

If you don’t believe me, we can calculate p \cdot A q and see it equals p' \cdot A q:

\begin{array}{ccl} p \cdot A q &=& 20 - 30 p_1 - 30 q_1 + 50 p_1 q_1 \\   \\  &=& 20 - 30 \cdot \frac{3}{5} - 30 \cdot \frac{3}{5} + 50 \cdot \frac{3}{5} \cdot \frac{3}{5} \\   \\  &=& 2 \end{array}

Yup, it’s 2.
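We can also let Python check this ‘flat payoff’ fact directly, sweeping over A’s mixed strategies with q fixed at (3/5, 2/5):

```python
# With q = (3/5, 2/5) fixed, A's expected payoff is 2 for every mixed
# strategy p' -- so no choice of p' can do better, as clause 1) requires.
def payoff_A(p1, q1):
    return 20 - 30 * p1 - 30 * q1 + 50 * p1 * q1

q1 = 3 / 5
for i in range(101):
    p1 = i / 100
    assert abs(payoff_A(p1, q1) - 2.0) < 1e-9
print("A's expected payoff is 2 against q, no matter what A does")
```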

Puzzle 5. Finish showing that (p,q) is a Nash equilibrium by showing that for all mixed strategies q' for player B, p \cdot B q' \le p \cdot B q.

Puzzle 6. How high does this dark curve get?

You have the equations for the two lines, so you can figure this out. Explain the meaning of your answer!

