Information Aversion

22 August, 2014

 

Why do ostriches stick their heads under the sand when they’re scared?

They don’t. So why do people say they do? A Roman named Pliny the Elder might be partially to blame. He wrote that ostriches “imagine, when they have thrust their head and neck into a bush, that the whole of their body is concealed.”

That would be silly—birds aren’t that dumb. But people will actually pay to avoid learning unpleasant facts. It seems irrational to avoid information that could be useful. But people do it. It’s called information aversion.
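In standard decision theory this does look irrational: if a test is free and all you care about is making better choices, learning the result can never lower your expected payoff. Here is a toy 'value of information' calculation making that point; the prior probability, the payoffs and the perfectly accurate test are all made-up numbers, not anything from the study discussed below.

```python
# Toy value-of-information calculation with made-up numbers: a free, perfect
# test can never lower expected utility for a purely outcome-driven agent.
p_sick = 0.1                                          # prior probability of disease
u = {("treat", True): -1, ("treat", False): -1,       # treating has a small cost
     ("ignore", True): -10, ("ignore", False): 0}     # untreated disease is much worse

def expected_u(action, p):
    return p * u[(action, True)] + (1 - p) * u[(action, False)]

# Without the test: commit to whichever single action is best on average.
no_info = max(expected_u(a, p_sick) for a in ("treat", "ignore"))

# With the test: choose the best action separately in each branch.
with_info = (p_sick * max(u[(a, True)] for a in ("treat", "ignore"))
             + (1 - p_sick) * max(u[(a, False)] for a in ("treat", "ignore")))

print(no_info, with_info)    # with_info is never smaller than no_info
```

So when people pay to avoid a test, the explanation has to lie somewhere outside this kind of calculation: for example, in the anticipated stress of knowing, which is exactly what the experiment below probes.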

Here’s a new experiment on information aversion:

In order to gauge how information aversion affects health care, one group of researchers decided to look at how college students react to being tested for a sexually transmitted disease.

That’s a subject a lot of students worry about, according to Josh Tasoff, an economist at Claremont Graduate University who led the study along with Ananda Ganguly, an associate professor of accounting at Claremont McKenna College.

The students were told they could get tested for the herpes simplex virus. It’s a common disease that spreads via contact. And it has two forms: HSV1 and HSV2.

The type 1 herpes virus produces cold sores. It’s unpleasant, but not as unpleasant as type 2, which targets the genitals. Ganguly says the college students were given information — graphic information — that made it clear which kind of HSV was worse.

“There were pictures of male and female genitalia with HSV2, guaranteed to kind of make them really not want to have the disease,” Ganguly says.

Once the students understood what herpes does, they were told a blood test could find out if they had either form of the virus.

Now, in previous studies on information aversion it wasn’t always clear why people declined information. So Tasoff and Ganguly designed the experiment to eliminate every extraneous reason someone might decline to get information.

First, they wanted to make sure that students weren’t declining the test because they didn’t want to have their blood drawn. Ganguly came up with a way to fix that: All of the students would have to get their blood drawn. If a student chose not to get tested, “we would draw 10 cc of their blood and in front of them have them pour it down the sink,” Ganguly says.

The researchers also assured the students that if they elected to get the blood tested for HSV1 and HSV2, they would receive the results confidentially.

And to make triply sure that volunteers who said they didn’t want the test were declining it to avoid the information, the researchers added one final catch. Those who didn’t want to know if they had a sexually transmitted disease had to pay $10 to not have their blood tested.

So what did the students choose? Quite a few declined a test.

And while only 5 percent avoided the HSV1 test, three times as many avoided testing for the nastier form of herpes.

For those who didn’t want to know, the most common explanation was that they felt the results might cause them unnecessary stress or anxiety.

Let’s try extrapolating from this. Global warming is pretty scary. What would people do to avoid learning more about it? You can’t exactly pay scientists to not tell you about it. But you can do lots of other things: not listen to them, pay people to contradict what they’re saying, and so on. And guess what? People do all these things.

So, don’t expect that scaring people about global warming will make them take action. If a problem seems scary and hard to solve, many people will just avoid thinking about it.

Maybe a better approach is to tell people things they can do about global warming. Even if these things aren’t big enough to solve the problem, they can keep people engaged.

There’s a tricky issue here. I don’t want people to think turning off the lights when they leave the room is enough to stop global warming. That’s a dangerous form of complacency. But it’s even worse if they decide global warming is such a big problem that there’s no point in doing anything about it.

There are also lots of subtleties worth exploring in further studies. What, exactly, are the situations where people seek to avoid unpleasant information? What are the situations where they will accept it? This is something we need to know.

The quote is from here:

• Shankar Vedantam, Why we think ignorance is bliss, even when it hurts our health, Morning Edition, National Public Radio, 28 July 2014.

Here’s the actual study:

• Ananda Ganguly and Joshua Tasoff, Fantasy and dread: the demand for information and the consumption utility of the future.

Abstract. Understanding the properties of intrinsic information preference is important for predicting behavior in many domains including finance and health. We present evidence that intrinsic demand for information about the future is increasing in expected future consumption utility. In the first experiment subjects may resolve a lottery now or later. The information is useless for decision making but the larger the reward, the more likely subjects are to pay to resolve the lottery early. In the second experiment subjects may pay to avoid being tested for HSV-1 and the more highly feared HSV-2. Subjects are three times more likely to avoid testing for HSV-2, suggesting that more aversive outcomes lead to more information avoidance. We also find that intrinsic information demand is negatively correlated with positive affect and ambiguity aversion.

Here’s an attempt by economists to explain information aversion:

• Marianne Andries and Valentin Haddad, Information aversion, 27 February 2014.

Abstract. We propose a theory of inattention solely based on preferences, absent any cognitive limitations and external costs of acquiring information. Under disappointment aversion, information decisions and risk attitude are intertwined, and agents are intrinsically information averse. We illustrate this link between attitude towards risk and information in a standard portfolio problem, in which agents balance the costs, endogenous in our framework, and benefits of information. We show agents never choose to receive information continuously in a diffusive environment: they optimally acquire information at infrequent intervals only. We highlight a novel channel through which the optimal frequency of information acquisition decreases when risk increases, consistent with empirical evidence. Our framework accommodates a broad range of applications, suggesting our approach can explain many observed features of decision under uncertainty.

The photo, probably fake, is from here.


West Antarctic Ice Sheet News

16 May, 2014

You may have heard the news: two teams of scientists claiming that the West Antarctic Ice Sheet has been irreversibly destabilized, leading to a slow-motion process that over some number of centuries will cause 3 meters of sea level rise.

“Today we present observational evidence that a large section of the West Antarctic Ice Sheet has gone into irreversible retreat,” an author of one of the papers, Eric Rignot, a glaciologist at NASA’s Jet Propulsion Laboratory, said at a news conference recently. “It has passed the point of no return.”

A little context might help.

The West Antarctic Ice Sheet is the ice sheet that covers Antarctica on the Western Hemisphere side of the Transantarctic Mountains. The bed of this ice sheet lies well below sea level. The ice gradually flows into floating ice shelves such as the Ross Ice Shelf and Ronne Ice Shelf, and also glaciers that dump ice into the Amundsen Sea. Click on the map to make it bigger, so you can see all these features.

The West Antarctic Ice Sheet contains about 2.2 million cubic kilometers of ice, enough to raise the world’s oceans about 4.8 meters if it all melted. To get a sense of how big it is, let’s visit a crack in one of its outlet glaciers.

In 2011, scientists working in Antarctica discovered a massive crack across the Pine Island Glacier, a major glacier in the West Antarctic Ice Sheet. The crack was 30 kilometers long, 80 meters wide and 60 meters deep. The pictures above and below show this crack—the top one is from NASA, the bottom one was taken by an explorer named Forrest McCarthy.

By July 2013, the crack had expanded to the point where a slab of ice 720 square kilometers in size broke off and moved into the Amundsen Sea.

However, this event is not the news! The news is about what’s happening at the bottom of the glaciers of the West Antarctic Ice Sheet.

The West Antarctic Ice Sheet sits in a bowl-shaped depression in the earth, with the bottom of the ice below sea level. Warm ocean water is causing the ice sitting along the rim of the bowl to thin and retreat. As the edge of the ice moves away from the rim and enters deeper water, it can retreat faster.

So, there could be a kind of tipping point, where the West Antarctic Ice Sheet melts faster and faster as its bottom becomes exposed to more water. Scientists have been concerned about this for decades. But now two teams of scientists claim that tipping point has been passed.

Here’s a video that illustrates the process:

And here’s a long quote from a short ‘news and analysis’ article by Thomas Sumner in the 16 May 2014 issue of Science:

A disaster may be unfolding—in slow motion. Earlier this week, two teams of scientists reported that Thwaites Glacier, a keystone holding the massive West Antarctic Ice Sheet together, is starting to collapse. In the long run, they say, the entire ice sheet is doomed. Its meltwater would raise sea levels by more than 3 meters.

One team combined data on the recent retreat of the 182,000-square-kilometer Thwaites Glacier with a model of the glacier’s dynamics to forecast its future. In a paper on page 735, they report that in as few as 2 centuries Thwaites Glacier’s edge will recede past an underwater ridge now stalling its retreat. Their models suggest that the glacier will then cascade into rapid collapse. The second team, writing in Geophysical Research Letters, describes recent radar mapping of West Antarctica’s glaciers and confirms that the 600-meter-deep ridge is the final obstacle before the bedrock underlying the glacier dips into a deep basin.

Because inland basins connect Thwaites Glacier to other major glaciers in the region, both research teams say its collapse would flood West Antarctica with seawater, prompting a near-complete loss of ice in the area over hundreds of years.

“The next stable state for the West Antarctic Ice Sheet might be no ice sheet at all,” says the Science paper’s lead author, glaciologist Ian Joughin of the University of Washington, Seattle. “Very crudely, we are now committed to global sea level rise equivalent to a permanent Hurricane Sandy storm surge,” says glaciologist Richard Alley of Pennsylvania State University, University Park, referring to the storm that ravaged the Caribbean and the U.S. East Coast in 2012. Alley was not involved in either study.

Where Thwaites Glacier meets the Amundsen Sea, deep warm water burrows under the ice sheet’s base, forming an ice shelf from which icebergs break off. When melt and iceberg creation outpace fresh snowfall farther inland, the glacier shrinks. According to the radar mapping released this week in Geophysical Research Letters from the European Remote Sensing satellite, from 1992 to 2011 Thwaites Glacier retreated 14 kilometers. “Nowhere else in Antarctica is changing this fast,” says University of Washington Seattle glaciologist Benjamin Smith, co-author of the Science paper.

To forecast Thwaites Glacier’s fate, the team plugged satellite and aircraft radar maps of the glacier’s ice and underlying bedrock into a computer model. In simulations that assumed various melting trends, the model accurately reproduced recent ice-loss measurements and churned out a disturbing result: In all but the most conservative melt scenarios, a glacial collapse has already started. In 200 to 500 years, once the glacier’s “grounding line”—the point at which the ice begins to float—retreats past the ridge, the glacier’s face will become taller and, like a tower of blocks, more prone to collapse. The retreat will then accelerate to more than 5 kilometers per year, the team says. “On a glacial timescale, 200 to 500 years is the blink of an eye,” Joughin says.

And once Thwaites is gone, the rest of West Antarctica would be at risk.

Eric Rignot, a climate scientist at the University of California, Irvine, and the lead author of the GRL study, is skeptical of Joughin’s timeline because the computer model used estimates of future melting rates instead of calculations based on physical processes such as changing sea temperatures. “These simulations ought to go to the next stage and include realistic ocean forcing,” he says. If they do, he says, they might predict an even more rapid retreat.

I haven’t had time to carefully read the relevant papers, which are these:

• Eric Rignot, J. Mouginot, M. Morlighem, H. Seroussi and B. Scheuchl, Widespread, rapid grounding line retreat of Pine Island, Thwaites, Smith and Kohler glaciers, West Antarctica from 1992 to 2011, Geophysical Research Letters, accepted 12 May 2014.

• Ian Joughin, Benjamin E. Smith and Brooke Medley, Marine ice sheet collapse potentially underway for the Thwaites glacier basin, West Antarctica, Science, 344 (2014), 735–738.

I would like to say something more detailed about them someday.

The paper by Eric Rignot et al. is freely available—just click on the title. Unfortunately, you can’t read the other paper unless you have a journal subscription. Sumner’s article, which I quoted, is also not freely available. I wish scientists and the journal Science took more seriously their duty to make important research available to the public.

Here’s a video that shows Pine Island Glacier, Thwaites Glacier and some other nearby glaciers:


New IPCC Report (Part 8)

22 April, 2014

guest post by Steve Easterbrook

(8) To stay below 2°C of warming, most fossil fuels must stay buried in the ground

Perhaps the most profound advance since the previous IPCC report is a characterization of our global carbon budget. This is based on a finding that has emerged strongly from a number of studies in the last few years: the expected temperature change has a simple linear relationship with cumulative CO2 emissions since the beginning of the industrial era:

(Figure SPM.10) Global mean surface temperature increase as a function of cumulative total global CO2 emissions from various lines of evidence. Multi-model results from a hierarchy of climate-carbon cycle models for each RCP until 2100 are shown with coloured lines and decadal means (dots). Some decadal means are indicated for clarity (e.g., 2050 indicating the decade 2041−2050). Model results over the historical period (1860–2010) are indicated in black. The coloured plume illustrates the multi-model spread over the four RCP scenarios and fades with the decreasing number of available models in RCP8.5. The multi-model mean and range simulated by CMIP5 models, forced by a CO2 increase of 1% per year (1% per year CO2 simulations), is given by the thin black line and grey area. For a specific amount of cumulative CO2 emissions, the 1% per year CO2 simulations exhibit lower warming than those driven by RCPs, which include additional non-CO2 drivers. All values are given relative to the 1861−1880 base period. Decadal averages are connected by straight lines.

(Click to enlarge.)

The chart is a little hard to follow, but the main idea should be clear: whichever experiment we carry out, the results tend to lie on a straight line on this graph. You do get a slightly different slope in one experiment, the “1% CO2 increase per year” experiment, where only CO2 rises, and much more slowly than it has over the last few decades. All the more realistic scenarios lie in the orange band, and all have about the same slope.

This linear relationship is a useful insight, because it means that for any target ceiling for temperature rise (e.g. the UN’s commitment to not allow warming to rise more than 2°C above pre-industrial levels), we can easily determine a cumulative emissions budget that corresponds to that temperature. So that brings us to the most important paragraph in the entire report, which occurs towards the end of the summary for policymakers:

Limiting the warming caused by anthropogenic CO2 emissions alone with a probability of >33%, >50%, and >66% to less than 2°C since the period 1861–1880, will require cumulative CO2 emissions from all anthropogenic sources to stay between 0 and about 1560 GtC, 0 and about 1210 GtC, and 0 and about 1000 GtC since that period respectively. These upper amounts are reduced to about 880 GtC, 840 GtC, and 800 GtC respectively, when accounting for non-CO2 forcings as in RCP2.6. An amount of 531 [446 to 616] GtC, was already emitted by 2011.

Unfortunately, this paragraph is a little hard to follow, perhaps because there was a major battle over the exact wording of it in the final few hours of inter-governmental review of the “Summary for Policymakers”. Several oil states objected to any language that put a fixed limit on our total carbon budget. The compromise was to give several different targets for different levels of risk.

Let’s unpick them. First notice that the targets in the first sentence are based on looking at CO2 emissions alone; the lower targets in the second sentence take into account other greenhouse gases, and other earth systems feedbacks (e.g. release of methane from melting permafrost), and so are much lower. It’s these targets that really matter:

• To give us a one third (33%) chance of staying below 2°C of warming over pre-industrial levels, we cannot ever emit more than 880 gigatonnes of carbon.

• To give us a 50% chance, we cannot ever emit more than 840 gigatonnes of carbon.

• To give us a 66% chance, we cannot ever emit more than 800 gigatonnes of carbon.

Since the beginning of industrialization, we have already emitted a little more than 500 gigatonnes. So our remaining budget is somewhere between 300 and 400 gigatonnes of carbon. Existing known fossil fuel reserves are enough to release at least 1000 gigatonnes. New discoveries and unconventional sources will likely more than double this. That leads to one inescapable conclusion:

Most of the remaining fossil fuel reserves must stay buried in the ground.

We’ve never done that before. There is no political or economic system anywhere in the world currently that can persuade an energy company to leave a valuable fossil fuel resource untapped. There is no government in the world that has demonstrated the ability to forgo the economic wealth from natural resource extraction, for the good of the planet as a whole. We’re lacking both the political will and the political institutions to achieve this. Finding a way to achieve this presents us with a challenge far bigger than we ever imagined.
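To spell out the arithmetic behind the numbers above, here is the carbon budget calculation as a tiny script. The only inputs are the figures quoted from the Summary for Policymakers; everything else is simple subtraction.

```python
# Remaining carbon budgets implied by the quoted paragraph.  All figures are
# in GtC (gigatonnes of carbon) and account for non-CO2 forcings as in RCP2.6.
budgets = {"33% chance": 880, "50% chance": 840, "66% chance": 800}
emitted_by_2011 = 531            # central estimate; the quoted range is 446-616

for chance, total in budgets.items():
    print(f"{chance} of staying below 2°C: {total - emitted_by_2011} GtC left to emit")

# With the rounder "a little more than 500 GtC already emitted" used above,
# the remaining budget comes out at roughly 300-400 GtC, as stated.
```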


You can download all of Climate Change 2013: The Physical Science Basis here. Click below to read any part of this series:

  1. The warming is unequivocal.
  2. Humans caused the majority of it.
  3. The warming is largely irreversible.
  4. Most of the heat is going into the oceans.
  5. Current rates of ocean acidification are unprecedented.
  6. We have to choose which future we want very soon.
  7. To stay below 2°C of warming, the world must become carbon negative.
  8. To stay below 2°C of warming, most fossil fuels must stay buried in the ground.

Climate Change 2013: The Physical Science Basis is also available chapter by chapter here:

  1. Front Matter
  2. Summary for Policymakers
  3. Technical Summary
    1. Supplementary Material

Chapters

  1. Introduction
  2. Observations: Atmosphere and Surface
    1. Supplementary Material
  3. Observations: Ocean
  4. Observations: Cryosphere
    1. Supplementary Material
  5. Information from Paleoclimate Archives
  6. Carbon and Other Biogeochemical Cycles
    1. Supplementary Material
  7. Clouds and Aerosols

    1. Supplementary Material
  8. Anthropogenic and Natural Radiative Forcing
    1. Supplementary Material
  9. Evaluation of Climate Models
  10. Detection and Attribution of Climate Change: from Global to Regional
    1. Supplementary Material
  11. Near-term Climate Change: Projections and Predictability
  12. Long-term Climate Change: Projections, Commitments and Irreversibility
  13. Sea Level Change
    1. Supplementary Material
  14. Climate Phenomena and their Relevance for Future Regional Climate Change
    1. Supplementary Material

Annexes

  1. Annex I: Atlas of Global and Regional Climate Projections
    1. Supplementary Material: RCP2.6, RCP4.5, RCP6.0, RCP8.5
  2. Annex II: Climate System Scenario Tables
  3. Annex III: Glossary
  4. Annex IV: Acronyms
  5. Annex V: Contributors to the WGI Fifth Assessment Report
  6. Annex VI: Expert Reviewers of the WGI Fifth Assessment Report

Civilizational Collapse (Part 1)

25 March, 2014

This story caught my attention, since a lot of people are passing it around:

• Nafeez Ahmed, NASA-funded study: industrial civilisation headed for ‘irreversible collapse’?, Earth Insight, blog on The Guardian, 14 March 2014.

Sounds dramatic! But notice the question mark in the title. The article says that “global industrial civilisation could collapse in coming decades due to unsustainable resource exploitation and increasingly unequal wealth distribution.” But with the word “could” in there, who could possibly argue? It’s certainly possible. What’s the actual news here?

It’s about a new paper that’s been accepted by the Elsevier journal Ecological Economics. Since this paper has not been published, and I don’t even know the title, it’s hard to get details yet. According to Nafeez Ahmed,

The research project is based on a new cross-disciplinary ‘Human And Nature DYnamical’ (HANDY) model, led by applied mathematician Safa Motesharrei of the US National Science Foundation-supported National Socio-Environmental Synthesis Center, in association with a team of natural and social scientists.

So I went to Safa Motesharrei’s webpage. It says he’s a grad student getting his PhD at the Socio-Environmental Synthesis Center, working with a team of people including:

Eugenia Kalnay (atmospheric science)
James Yorke (mathematics)
Matthias Ruth (public policy)
Victor Yakovenko (econophysics)
Klaus Hubacek (geography)
Ning Zeng (meteorology)
Fernando Miralles-Wilhelm (hydrology).

I was able to find this paper draft:

• Safa Motesharrei, Jorge Rivas and Eugenia Kalnay, A minimal model for human and nature interaction, 13 November 2012.

I’m not sure how this is related to the paper discussed by Nafeez Ahmed, but it includes some (though not all) of the passages quoted by him, and it describes the HANDY model. It’s an extremely simple model, so I’ll explain it to you.

But first let me quote a bit more of the Guardian article, so you can see why it’s attracting attention:

By investigating the human-nature dynamics of these past cases of collapse, the project identifies the most salient interrelated factors which explain civilisational decline, and which may help determine the risk of collapse today: namely, Population, Climate, Water, Agriculture, and Energy.

These factors can lead to collapse when they converge to generate two crucial social features: "the stretching of resources due to the strain placed on the ecological carrying capacity"; and "the economic stratification of society into Elites [rich] and Masses (or "Commoners") [poor]." These social phenomena have played "a central role in the character or in the process of the collapse," in all such cases over "the last five thousand years."

Currently, high levels of economic stratification are linked directly to overconsumption of resources, with “Elites” based largely in industrialised countries responsible for both:

“… accumulated surplus is not evenly distributed throughout society, but rather has been controlled by an elite. The mass of the population, while producing the wealth, is only allocated a small portion of it by elites, usually at or just above subsistence levels.”

The study challenges those who argue that technology will resolve these challenges by increasing efficiency:

“Technological change can raise the efficiency of resource use, but it also tends to raise both per capita resource consumption and the scale of resource extraction, so that, absent policy effects, the increases in consumption often compensate for the increased efficiency of resource use.”

Productivity increases in agriculture and industry over the last two centuries has come from “increased (rather than decreased) resource throughput,” despite dramatic efficiency gains over the same period.

Modelling a range of different scenarios, Motesharri and his colleagues conclude that under conditions “closely reflecting the reality of the world today… we find that collapse is difficult to avoid.” In the first of these scenarios, civilisation:

“…. appears to be on a sustainable path for quite a long time, but even using an optimal depletion rate and starting with a very small number of Elites, the Elites eventually consume too much, resulting in a famine among Commoners that eventually causes the collapse of society. It is important to note that this Type-L collapse is due to an inequality-induced famine that causes a loss of workers, rather than a collapse of Nature.”

Another scenario focuses on the role of continued resource exploitation, finding that “with a larger depletion rate, the decline of the Commoners occurs faster, while the Elites are still thriving, but eventually the Commoners collapse completely, followed by the Elites.”

In both scenarios, Elite wealth monopolies mean that they are buffered from the most “detrimental effects of the environmental collapse until much later than the Commoners”, allowing them to “continue ‘business as usual’ despite the impending catastrophe.” The same mechanism, they argue, could explain how “historical collapses were allowed to occur by elites who appear to be oblivious to the catastrophic trajectory (most clearly apparent in the Roman and Mayan cases).”

Applying this lesson to our contemporary predicament, the study warns that:

“While some members of society might raise the alarm that the system is moving towards an impending collapse and therefore advocate structural changes to society in order to avoid it, Elites and their supporters, who opposed making these changes, could point to the long sustainable trajectory ‘so far’ in support of doing nothing.”

However, the scientists point out that the worst-case scenarios are by no means inevitable, and suggest that appropriate policy and structural changes could avoid collapse, if not pave the way toward a more stable civilisation.

The two key solutions are to reduce economic inequality so as to ensure fairer distribution of resources, and to dramatically reduce resource consumption by relying on less intensive renewable resources and reducing population growth:

“Collapse can be avoided and population can reach equilibrium if the per capita rate of depletion of nature is reduced to a sustainable level, and if resources are distributed in a reasonably equitable fashion.”

The HANDY model

So what’s the model?

It’s 4 ordinary differential equations:

\dot{x}_C = \beta_C x_C - \alpha_C x_C

\dot{x}_E = \beta_E x_E - \alpha_E x_E

\dot{y} = \gamma y (\lambda - y) - \delta x_C y

\dot{w} = \delta x_C y - C_C - C_E

where:

x_C is the population of the commoners or masses

x_E is the population of the elite

y represents natural resources

w represents wealth

The authors say that

Natural resources exist in three forms: nonrenewable stocks (fossil fuels, mineral deposits, etc), renewable stocks (forests, soils, aquifers), and flows (wind, solar radiation, rivers). In future versions of HANDY, we plan to disaggregate Nature into these three different forms, but for simplification in this version, we have adopted a single formulation intended to represent an amalgamation of the three forms.

So, it’s possible that the paper to be published in Ecological Economics treats natural resources using three variables instead of just one.

Now let’s look at the equations one by one:

\dot{x}_C = \beta_C x_C - \alpha_C x_C

This looks weird at first, but \beta_C and \alpha_C aren’t both constants, which would be redundant. \beta_C is a constant birth rate for commoners, while \alpha_C, the death rate for commoners, is a function of wealth.

Similarly, in

\dot{x}_E = \beta_E x_E - \alpha_E x_E

\beta_E is a constant birth rate for the elite, while \alpha_E, the death rate for the elite, is a function of wealth. The death rate is different for the elite and commoners:

For both the elite and commoners, the death rate drops linearly with increasing wealth from its maximum value \alpha_M to its minimum value \alpha_m. But it drops faster for the elite, of course! For the commoners it reaches its minimum when the wealth w reaches some value w_{th}, but for the elite it reaches its minimum earlier, when w = w_{th}/\kappa, where \kappa is some number bigger than 1.

Next, how do natural resources change?

\dot{y} = \gamma y (\lambda - y) - \delta x_C y

The first part of this equation:

\dot{y} = \gamma y (\lambda - y)

describes how natural resources renew themselves if left alone. This is just the logistic equation, famous in models of population growth. Here \lambda is the equilibrium level of natural resources, while \gamma is another number that helps say how fast the resources renew themselves. Solutions of the logistic equation look like this:
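If you would like to generate those S-shaped solutions yourself, here is a minimal numerical sketch; the values of \gamma and \lambda are arbitrary, chosen only for illustration.

```python
# Numerical solutions of the logistic equation dy/dt = gamma * y * (lam - y).
# "lam" stands for lambda, which is a reserved word in Python.
from scipy.integrate import solve_ivp

gamma, lam = 0.01, 100.0

for y0 in (1.0, 20.0, 150.0):       # starting below and above the equilibrium
    sol = solve_ivp(lambda t, y: [gamma * y[0] * (lam - y[0])], (0.0, 15.0), [y0])
    print(y0, "->", round(sol.y[0, -1], 2))   # every solution approaches y = lam
```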

But the whole equation

\dot{y} = \gamma y (\lambda - y) - \delta x_C y

has a term saying that natural resources get used up at a rate proportional to the population of commoners x_C times the amount of natural resources y. \delta is just a constant of proportionality.

It’s curious that the population of elites doesn’t affect the depletion of natural resources, and also that doubling the amount of natural resources doubles the rate at which they get used up. Regarding the first issue, the authors offer this explanation:

The depletion term includes a rate of depletion per worker, \delta, and is proportional to both Nature and the number of workers. However, the economic activity of Elites is modeled to represent executive, management, and supervisory functions, but not engagement in the direct extraction of resources, which is done by Commoners. Thus, only Commoners produce.

I didn’t notice a discussion of the second issue.

Finally, the change in the amount of wealth is described by this equation:

\dot{w} = \delta x_C y - C_C - C_E

The first term at right precisely matches the depletion of natural resources in the previous equation, but with the opposite sign: natural resources are getting turned into ‘wealth’. C_C describes consumption by commoners and C_E describes consumption by the elite. These are both functions of wealth, a bit like the death rates… but as you’d expect, increasing wealth increases consumption:

For both the elite and commoners, consumption grows linearly with increasing wealth until wealth reaches the critical level w_{th}. But it grows faster for the elites, and reaches a higher level.

So, that’s the model… at least in this preliminary version of the paper.
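To make the structure concrete, here is a minimal numerical sketch of the model exactly as described above. The parameter values are illustrative guesses, not the ones used in the paper, and the piecewise-linear forms of the death rates and consumption are my reading of the verbal description rather than the paper’s exact formulas; in particular I treat the threshold wealth w_{th} as a constant, while in the paper it may depend on the populations.

```python
# A sketch of the HANDY equations, with guessed parameters (NOT the paper's).
from scipy.integrate import solve_ivp

beta_C = beta_E = 0.03            # birth rates
alpha_m, alpha_M = 0.01, 0.07     # minimum and maximum death rates
gamma, lam = 0.01, 100.0          # regrowth rate and equilibrium level of nature
delta = 1.0e-4                    # depletion per commoner
kappa = 10.0                      # elites reach the minimum death rate kappa times sooner
w_th = 50.0                       # threshold wealth, treated here as a constant
s = 5.0e-4                        # consumption per capita at the threshold (assumed)

def handy(t, state):
    x_C, x_E, y, w = state
    f = min(1.0, max(w, 0.0) / w_th)      # how close wealth is to the threshold
    # Death rates fall linearly from alpha_M to alpha_m as wealth rises,
    # reaching the minimum at w_th for commoners and at w_th/kappa for elites.
    alpha_C = alpha_M - (alpha_M - alpha_m) * f
    alpha_E = alpha_M - (alpha_M - alpha_m) * min(1.0, kappa * f)
    # Consumption grows linearly with wealth up to the threshold,
    # and is kappa times larger per capita for the elites.
    C_C = s * x_C * f
    C_E = kappa * s * x_E * f
    return [beta_C * x_C - alpha_C * x_C,
            beta_E * x_E - alpha_E * x_E,
            gamma * y * (lam - y) - delta * x_C * y,
            delta * x_C * y - C_C - C_E]

sol = solve_ivp(handy, (0.0, 1000.0), [100.0, 1.0, lam, 0.0], max_step=1.0)
print(sol.y[:, -1])               # final commoners, elites, nature, wealth
```

Varying \delta, \kappa and the initial populations in a sketch like this is enough to see qualitatively different long-run behaviours, which is the kind of exploration the paper reports.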

Some solutions of the model

There are many parameters in this model, and many different things can happen depending on their values and the initial conditions. The paper investigates many different scenarios. I don’t have the energy to describe them all, so I urge you to skim it and look at the graphs.

I’ll just show you three. Here is one that Nafeez Ahmed mentioned, where civilization

appears to be on a sustainable path for quite a long time, but even using an optimal depletion rate and starting with a very small number of Elites, the Elites eventually consume too much, resulting in a famine among Commoners that eventually causes the collapse of society.

I can see why Ahmed would like to talk about this scenario: he’s written a book called A User’s Guide to the Crisis of Civilization and How to Save It. Clearly it’s worth putting some thought into risks of this sort. But how likely is this particular scenario compared to others? For that we’d need to think hard about how well this model matches reality.

It’s obviously a crude simplification of an immensely complex and unknowable system: the whole civilization on this planet. That doesn’t mean it’s fundamentally wrong! Its predictions could still be qualitatively correct. But to gain confidence in this, we’d need material that is not present in the draft paper I’ve seen. It says:

The scenarios most closely reflecting the reality of our world today are found in the third group of experiments (see section 5.3), where we introduced economic stratification. Under such conditions, we find that collapse is difficult to avoid.

But it would be nice to see a more careful approach to setting model parameters, justifying the simplifications built into the model, exploring what changes when some simplifications are reduced, and so on.

Here’s a happier scenario, where the parameters are chosen differently:

The main difference is that the depletion of resources per commoner, \delta, is smaller.

And here’s yet another, featuring cycles of prosperity, overshoot and collapse:

Tentative conclusions

I hope you see that I’m neither trying to ‘shoot down’ this model nor defend it. I’m just trying to understand it.

I think it’s very important—and fun—to play around with models like this, keep refining them, comparing them against each other, and using them as tools to help our thinking. But I’m not very happy that Nafeez Ahmed called this piece of work a “highly credible wake-up call” without giving us any details about what was actually done.

I don’t expect blog articles on the Guardian to feature differential equations! But it would be great if journalists who wrote about new scientific results would provide a link to the actual work, so people who want to dig deeper can do so. Don’t make us scour the internet looking for clues.

And scientists: if your results are potentially important, let everyone actually see them! If you think civilization could be heading for collapse, burying your evidence and your recommendations for avoiding this calamity in a closed-access Elsevier journal is not the optimal strategy to deal with the problem.

There’s been a whole side-battle over whether NASA actually funded this study:

• Keith Kloor, About that popular Guardian story on the collapse of industrial civilization, Collide-A-Scape, blog on Discover, March 21, 2014.

• Nafeez Ahmed, Did NASA fund ‘civilisation collapse’ study, or not?, Earth Insight, blog on The Guardian, 21 March 2014.

But that’s very boring compared to the fun of thinking about the model used in this study… and the challenging, difficult business of trying to think clearly about the risks of civilizational collapse.

Addendum

The paper is now freely available here:

• Safa Motesharrei, Jorge Rivas and Eugenia Kalnay, Human and nature dynamics (HANDY): modeling inequality and use of resources in the collapse or sustainability of societies, Ecological Economics 101 (2014), 90–102.


Markov Models of Social Change (Part 2)

5 March, 2014

guest post by Vanessa Schweizer

This is my first post to Azimuth. It’s a companion to the one by Alastair Jamieson-Lane. I’m an assistant professor at the University of Waterloo in Canada with the Centre for Knowledge Integration, or CKI. Through our teaching and research, the CKI focuses on integrating what appears, at first blush, to be drastically different fields in order to make the world a better place. The very topics I would like to cover today, which are mathematics and policy design, are an example of our flavour of knowledge integration. However, before getting into that, perhaps some background on how I got here would be helpful.

The conundrum of complex systems

For about eight years, I have focused on various problems related to long-term forecasting of social and technological change (long-term meaning in excess of 10 years). I became interested in these problems because they are particularly relevant to how we understand and respond to global environmental changes such as climate change.

In case you don’t know much about global warming or what the fuss is about, part of what makes the problem particularly difficult is that the feedback from the physical climate system to human political and economic systems is exceedingly slow. It is so slow, that under traditional economic and political analyses, an optimal policy strategy may appear to be to wait before making any major decisions – that is, wait for scientific knowledge and technologies to improve, or at least wait until the next election [1]. Let somebody else make the tough (and potentially politically unpopular) decisions!

The problem with waiting is that the greenhouse gases that scientists are most concerned about stay in the atmosphere for decades or centuries. They are also churned out by the gigatonne each year. Thus the warming trends that we have experienced for the past 30 years, for instance, are the cumulative result of emissions that happened not only recently but also long ago—in the case of carbon dioxide, as far back as the turn of the 20th century. The world in the 1910s was quainter than it is now, and as more economies around the globe industrialize and modernize, it is natural to wonder: how will we manage to power it all? Will we still rely so heavily on fossil fuels, which are the primary source of our carbon dioxide emissions?

Such questions are part of what makes climate change a controversial topic. Present-day policy decisions about energy use will influence the climatic conditions of the future, so what kind of future (both near-term and long-term) do we want?

Futures studies and trying to learn from the past

Many approaches can be taken to answer the question of what kind of future we want. An approach familiar to the political world is for a leader to espouse his or her particular hopes and concerns for the future, then work to convince others that those ideas are more relevant than someone else’s. Alternatively, economists do better by developing and investigating different simulations of economic developments over time; however, the predictive power of even these tools drops off precipitously beyond the 10-year time horizon.

The limitations of these approaches should not be too surprising, since any stockbroker will say that when making financial investments, past performance is not necessarily indicative of future results. We can expect the same problem with rhetorical appeals, or economic models, that are based on past performances or empirical (which also implies historical) relationships.

A different take on foresight

A different approach avoids the frustration of proving history to be a fickle tutor for the future. By setting aside the supposition that we must be able to explain why the future might play out a particular way (that is, to know the ‘history’ of a possible future outcome), alternative futures 20, 50, or 100 years hence can be conceptualized as different sets of conditions that may substantially diverge from what we see today and have seen before. This perspective is employed in cross-impact balance analysis, an algorithm that searches for conditions that can be demonstrated to be self-consistent [3].

Findings from cross-impact balance analyses have been informative for scientific assessments produced by the Intergovernmental Panel on Climate Change, or IPCC. To present a coherent picture of the climate change problem, the IPCC has coordinated scenario studies across economic and policy analysts as well as climate scientists since the 1990s. Prior to the development of the cross-impact balance method, these researchers had to identify appropriate ranges for rates of population growth, economic growth, energy efficiency improvements, etc. through their best judgment.

A retrospective using cross-impact balances on the first Special Report on Emissions Scenarios found that the researchers did a good job in many respects. However, they underrepresented the large number of alternative futures that would result in high greenhouse gas emissions in the absence of climate policy [4].

As part of the latest update to these coordinated scenarios, climate change researchers decided it would be useful to organize alternative futures according to socio-economic conditions that pose greater or fewer challenges to mitigation and adaptation. Mitigation refers to policy actions that decrease greenhouse gas emissions, while adaptation refers to reducing harms due to climate change or to taking advantage of benefits. Some climate change researchers argued that it would be sufficient to consider alternative futures where challenges to mitigation and adaptation co-varied, e.g. three families of futures where mitigation and adaptation challenges would be low, medium, or high.

Instead, cross-impact balances revealed that mixed-outcome futures—such as socio-economic conditions simultaneously producing fewer challenges to mitigation but greater challenges to adaptation—could not be completely ignored. This counter-intuitive finding, among others, brought the importance of quality of governance to the fore [5].

Although it is generally recognized that quality of governance—e.g. control of corruption and the rule of law—affects quality of life [6], many in the climate change research community have focused on technological improvements, such as drought-resistant crops, or economic incentives, such as carbon prices, for mitigation and adaptation. The cross-impact balance results underscored that should global patterns of quality of governance across nations take a turn for the worse, poor governance could stymie these efforts. This is because the influence of quality of governance is pervasive; where corruption is permitted at the highest levels of power, it may be permitted at other levels as well—including levels that are responsible for building schools, teaching literacy, maintaining roads, enforcing public order, and so forth.

The cross-impact balance study revealed this in the abstract, as summarized in the example matrices below. Alastair included a matrix like these in his post, where he explained that numerical judgments in such a matrix can be used to calculate the net impact of simultaneous influences on system factors. My purpose in presenting these matrices is a bit different, as the matrix structure can also explain why particular outcomes behave as system attractors.

In this example, a solid light gray square means that the row factor directly influences the column factor some amount, while white space means that there is no direct influence:

Dark gray squares along the diagonal have no meaning, since everything is perfectly correlated to itself. The pink squares highlight the rows for the factors “quality of governance” and “economy.” The importance of these rows is more apparent here; the matrix above is a truncated version of this more detailed one:

(Click to enlarge.)

The pink rows are highlighted because of a striking property of these factors. They are the two most influential factors of the system, as you can see from how many solid squares appear in their rows. The direct influence of quality of governance is second only to the economy. (Careful observers will note that the economy directly influences quality of governance, while quality of governance directly influences the economy). Other scholars have meticulously documented similar findings through observations [7].
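For readers who would rather see an algorithm than a matrix, here is a toy version of the self-consistency check at the heart of cross-impact balance analysis, as I understand it from Weimer-Jehle [3]: a scenario (one state per factor) counts as self-consistent if every factor sits in the state that receives the strongest net support from the other factors’ chosen states. The factors, states and judgment scores below are invented purely for illustration.

```python
# Toy cross-impact balance consistency check with invented judgments.
from itertools import product

factors = ["governance", "economy", "emissions"]
states = {"governance": ["weak", "strong"],
          "economy": ["stagnant", "growing"],
          "emissions": ["low", "high"]}

# impact[(fi, si)][(fj, sj)]: how strongly factor fi being in state si
# promotes (+) or restricts (-) factor fj being in state sj.
impact = {
    ("governance", "strong"): {("economy", "growing"): 2, ("emissions", "low"): 1},
    ("governance", "weak"):   {("economy", "stagnant"): 2, ("emissions", "high"): 1},
    ("economy", "growing"):   {("governance", "strong"): 1, ("emissions", "high"): 2},
    ("economy", "stagnant"):  {("governance", "weak"): 1, ("emissions", "low"): 2},
}

def consistent(scenario):
    """True if each factor's chosen state gets the highest total impact score
    from the other factors' chosen states."""
    for fj in factors:
        def score(sj):
            return sum(impact.get((fi, scenario[fi]), {}).get((fj, sj), 0)
                       for fi in factors if fi != fj)
        if score(scenario[fj]) < max(score(sj) for sj in states[fj]):
            return False
    return True

for combo in product(*(states[f] for f in factors)):
    scenario = dict(zip(factors, combo))
    if consistent(scenario):
        print("self-consistent:", scenario)
```

In this made-up example the brute-force search finds two attractors, one with strong governance and a growing economy and one with weak governance and a stagnant economy, a cartoon of the coupling between the two highlighted rows above. The loop also shows why the search problem gets hard: with n factors of k states each there are k^n scenarios to check.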

As a method for climate policy analysis, cross-impact balances fill an important gap between genius forecasting (i.e., ideas about the far-off future espoused by one person) and scientific judgments that, in the face of deep uncertainty, are overconfident (i.e. neglecting the ‘fat’ or ‘long’ tails of a distribution).

Wanted: intrepid explorers of future possibilities

However, alternative visions of the future are only part of the information that’s needed to create the future that is desired. Descriptions of courses of action that are likely to get us there are also helpful. In this regard, the post by Jamieson-Lane describes early work on modifying cross-impact balances for studying transition scenarios rather than searching primarily for system attractors.

This is where you, as the mathematician or physicist, come in! I have been working with cross-impact balances as a policy analyst, and I can see the potential of this method to revolutionize policy discussions—not only for climate change but also for policy design in general. However, as pointed out by entrepreneurship professor Karl T. Ulrich, design problems are NP-complete. Those of us with lesser math skills can be easily intimidated by the scope of such search problems. For this reason, many analysts have resigned themselves to ad hoc explorations of the vast space of future possibilities. However, some analysts like me think it is important to develop methods that do better. I hope that some of you Azimuth readers may be up for collaborating with like-minded individuals on the challenge!

References

The graph of carbon emissions is from reference [2]; the pictures of the matrices are adapted from reference [5]:

[1] M. Granger Morgan, Milind Kandlikar, James Risbey and Hadi Dowlatabadi, Why conventional tools for policy analysis are often inadequate for problems of global change, Climatic Change 41 (1999), 271–281.

[2] T.F. Stocker et al., Technical Summary, in Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (2013), T.F. Stocker, D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex, and P.M. Midgley (eds.) Cambridge University Press, New York.

[3] Wolfgang Weimer-Jehle, Cross-impact balances: a system-theoretical approach to cross-impact analysis, Technological Forecasting & Social Change 73 (2006), 334–361.

[4] Vanessa J. Schweizer and Elmar Kriegler, Improving environmental change research with systematic techniques for qualitative scenarios, Environmental Research Letters 7 (2012), 044011.

[5] Vanessa J. Schweizer and Brian C. O’Neill, Systematic construction of global socioeconomic pathways using internally consistent element combinations, Climatic Change 122 (2014), 431–445.

[6] Daniel Kaufmann, Aart Kraay and Massimo Mastruzzi, Worldwide Governance Indicators (2013), The World Bank Group.

[7] Daron Acemoglu and James Robinson, Why Nations Fail: The Origins of Power, Prosperity, and Poverty. Website.


Life’s Struggle to Survive

19 December, 2013

Here’s the talk I gave at the SETI Institute:

When pondering the number of extraterrestrial civilizations, it is worth noting that even after it got started, the success of life on Earth was not a foregone conclusion. In this talk, I recount some thrilling episodes from the history of our planet, some well-documented but others merely theorized: our collision with the planet Theia, the oxygen catastrophe, the snowball Earth events, the Permian-Triassic mass extinction event, the asteroid that hit Chicxulub, and more, including the massive environmental changes we are causing now. All of these hold lessons for what may happen on other planets!

To watch the talk, click on the video above. To see slides of the talk, click here!

Here’s a mistake in my talk that doesn’t appear in the slides: I suggested that Theia started at the Lagrange point in Earth’s orbit. After my talk, an expert said that at that time, the Solar System had lots of objects with orbits of high eccentricity, and Theia was probably one of these. He said the Lagrange point theory is an idiosyncratic theory, not widely accepted, that somehow found its way onto Wikipedia.

Another issue was brought up in the questions. In a paper in Science, Sherwood and Huber argued that:

Any exceedence of 35 °C for extended periods should induce hyperthermia in humans and other mammals, as dissipation of metabolic heat becomes impossible. While this never happens now, it would begin to occur with global-mean warming of about 7 °C, calling the habitability of some regions into question. With 11–12 °C warming, such regions would spread to encompass the majority of the human population as currently distributed. Eventual warmings of 12 °C are possible from fossil fuel burning.

However, the Paleocene-Eocene Thermal Maximum seems to have been even hotter:

So, the question is: where did mammals live during this period, which mammals went extinct, if any, and does the survival of other mammals call into question Sherwood and Huber’s conclusion?


High-Speed Finance

8 August, 2012

 

These days, a lot of buying and selling of stocks is done by computers—it’s called algorithmic trading. Computers can do it much faster than people. Watch how they’ve been going wild!

The date is at lower left. In 2000 it took several seconds for computers to make a trade. By 2010 the time had dropped to milliseconds… or even microseconds. And around this year, market activity started becoming much more intense.

I can’t even see the Flash Crash on May 6 of 2010—also known as The Crash of 2:45. The Dow Jones plummeted 9% in 5 minutes, then quickly bounced back. For fifteen minutes, the economy lost a trillion dollars. Then it reappeared.

But on August 5, 2011, when the credit rating of the US got downgraded, you’ll see the activity explode! And it’s been crazy ever since.

The movie above was created by Nanex, a company that provides market data to traders. The x axis shows the time of day, from 9:30 to 16:00. The y axis… well, it’s the amount of some activity per unit time, but they don’t say what. Do you know?

The folks at Nanex have something very interesting to say about this. It’s not high frequency trading or ‘HFT’ that they’re worried about—that’s actually gone down slightly from 2008 to 2012. What’s gone up is ‘high frequency quoting’, also known as ‘quote spam’ or ‘quote stuffing’.

Over on Google+, Sergey Ten explained the idea to me:

Quote spam is a well-known tactic. It is used by high-frequency traders to get a competitive advantage over other high-frequency traders. HF traders generate high-frequency quote spam using a pseudorandom (or otherwise structured) algorithm, with their computers coded to ignore it. Their competitors don’t know the generating algorithm and have to process each quote, thus increasing their load, consuming bandwidth and suffering a slight delay in processing.

A quote is an offer to buy or sell stock at a given price. For a clear and entertaining explanation of how this works and why traders are locked into a race for speed, try:

• Chris Stucchio, A high frequency trader’s apology, Part 1, 16 April 2012. Part 2, 25 April 2012.

I don’t know a great introduction to quote spam, but this paper isn’t bad:

• Jared F. Egginton, Bonnie F. Van Ness, and Robert A. Van Ness, Quote stuffing, 15 March 2012.

Toward the physical limits of speed

In fact, the battle for speed is so intense that trading has run up against the speed of light.

For example, by 2013 there will be a new transatlantic cable at the bottom of the ocean, the first in a decade. Why? Just to cut the communication time between US and UK traders by 5 milliseconds. The new fiber optic line will be straighter than existing ones:

“As a rule of thumb, each 62 miles that the light has to travel takes about 1 millisecond,” Thorvardarson says. “So by straightening the route between Halifax and London, we have actually shortened the cable by 310 miles, or 5 milliseconds.”
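That rule of thumb is easy to check. Light in optical fiber travels at roughly two-thirds of its vacuum speed, about 200 kilometres per millisecond (a round figure I am assuming here), so the quoted numbers work out if they refer to round-trip times:

```python
# Back-of-envelope check of the "62 miles per millisecond" rule of thumb,
# assuming light in fiber covers roughly 200 km per millisecond (about 2/3 c).
C_FIBER_KM_PER_MS = 200.0
MILES_TO_KM = 1.609

def round_trip_ms(miles):
    """Round-trip propagation delay over a fiber path of the given length."""
    return 2 * miles * MILES_TO_KM / C_FIBER_KM_PER_MS

print(round_trip_ms(62))    # ~1 ms per 62 miles of cable, as quoted
print(round_trip_ms(310))   # ~5 ms, matching the claimed saving
```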

Meanwhile, a London-based company called Fixnetix has developed a special computer chip that can prepare a trade in just 740 nanoseconds. But why stop at nanoseconds?

With the race for the lowest “latency” continuing, some market participants are even talking about picoseconds––trillionths of a second. At first the realm of physics and math and then computer science, the picosecond looms as the next time barrier.

Actions that take place in nanoseconds and picoseconds in some cases run up against the sheer limitations of physics, said Mark Palmer, chief executive of Lexington, Mass.-based StreamBase Systems.

Black swans and the ultrafast machine ecology

As high-frequency trading and high-frequency quoting leave slow-paced human reaction times in the dust, markets start to behave differently. Here’s a great paper about that:

• Neil Johnson, Guannan Zhao, Eric Hunsader, Jing Meng, Amith Ravindar, Spencer Carran and Brian Tivnan, Financial black swans driven by ultrafast machine ecology.

A black swan is an unexpectedly dramatic event, like a market crash or a stock bubble that bursts. But according to this paper, such events are now happening all the time at speeds beyond our perception!

Here’s one:

It’s a price spike in the stock of a company called Super Micro Computer, Inc. On October 1st, 2010, it shot up 26% and then crashed back down. But this all happened in 25 milliseconds!

These ultrafast black swans happen at least once a day. And they happen most of all to financial institutions.

Here’s a great blog article about this stuff:

• Mark Buchanan, Approaching the singularity—in global finance, The Physics of Finance, 13 February 2012.

I won’t try to outdo Buchanan’s analysis. I’ll just quote the abstract of the original paper:

Society’s drive toward ever faster socio-technical systems, means that there is an urgent need to understand the threat from ‘black swan’ extreme events that might emerge. On 6 May 2010, it took just five minutes for a spontaneous mix of human and machine interactions in the global trading cyberspace to generate an unprecedented system-wide Flash Crash. However, little is known about what lies ahead in the crucial sub-second regime where humans become unable to respond or intervene sufficiently quickly. Here we analyze a set of 18,520 ultrafast black swan events that we have uncovered in stock-price movements between 2006 and 2011. We provide empirical evidence for, and an accompanying theory of, an abrupt system-wide transition from a mixed human-machine phase to a new all-machine phase characterized by frequent black swan events with ultrafast durations (<650ms for crashes, <950ms for spikes). Our theory quantifies the systemic fluctuations in these two distinct phases in terms of the diversity of the system's internal ecology and the amount of global information being processed. Our finding that the ten most susceptible entities are major international banks, hints at a hidden relationship between these ultrafast 'fractures' and the slow 'breaking' of the global financial system post-2006. More generally, our work provides tools to help predict and mitigate the systemic risk developing in any complex socio-technical system that attempts to operate at, or beyond, the limits of human response times.

Trans-quantitative analysts?

When you get into an arms race of trying to write algorithms whose behavior other algorithms can’t predict, the math involved gets very tricky. Over on Google+, F. Lengvel pointed out something strange. In May 2010, Christian Marks claimed that financiers were hiring experts on large ordinals—crudely speaking, big infinite numbers!—to design algorithms that were hard to outwit.

I can’t confirm his account, but I can’t resist quoting it:

In an unexpected development for the depressed market for mathematical logicians, Wall Street has begun quietly and aggressively recruiting proof theorists and recursion theorists for their expertise in applying ordinal notations and ordinal collapsing functions to high-frequency algorithmic trading. Ordinal notations, which specify sequences of ordinal numbers of ever increasing complexity, are being used by elite trading operations to parameterize families of trading strategies of breathtaking sophistication.

The monetary advantage of the current strategy is rapidly exhausted after a lifetime of approximately four seconds — an eternity for a machine, but barely enough time for a human to begin to comprehend what happened. The algorithm then switches to another trading strategy of higher ordinal rank, and uses this for a few seconds on one or more electronic exchanges, and so on, while opponent algorithms attempt the same maneuvers, risking billions of dollars in the process.

The elusive and highly coveted positions for proof theorists on Wall Street, where they are known as trans-quantitative analysts, have not been advertised, to the chagrin of executive recruiters who work on commission. Elite hedge funds and bank holding companies have been discreetly approaching mathematical logicians who have programming experience and who are familiar with arcane software such as the ordinal calculator. A few logicians were offered seven figure salaries, according to a source who was not authorized to speak on the matter.

Is this for real? I like the idea of ‘trans-quantitative analysts’: it reminds me of ‘transfinite numbers’, which is another name for infinities. But it sounds a bit like a joke, and I haven’t been able to track down any references to trans-quantitative analysts, except people talking about Christian Marks’ blog article.

I understand a bit about ordinal notations, but I don’t think this is the time to go into that—not before I’m sure this stuff is for real. Instead, I’d rather reflect on a comment of Boris Borcic over on Google+:

Last week it occurred to me that LessWrong and OvercomingBias together might play a role to explain why Singularists don’t seem to worry about High Frequency Robot Trading as a possible pathway for Singularity-like developments. I mean IMO they should, the Singularity is about machines taking control, ownership is control, HFT involves slicing ownership in time-slices too narrow for humans to know themselves owners and possibly control.

The ensuing discussion got diverted to the question of whether algorithmic trading involved ‘intelligence’, but maybe intelligence is not the point. Perhaps algorithmic traders have become successful parasites on the financial system without themselves being intelligent, simply by virtue of their speed. And the financial system, in turn, seems to be a successful parasite on our economy. Regulatory capture—the control of the regulating agencies by the industry they’re supposed to be regulating—seems almost complete. Where will this lead?

