Melting Permafrost (Part 1)

1 September, 2011

Some people worry about rising sea levels due to global warming. But that will happen slowly. I worry about tipping points.

The word “tipping point” should remind you of pushing on a glass of water. If you push it a little, and then stop, it’ll right itself: no harm done. But if you push it past a certain point, it starts tipping over. Then it’s hard to stop.

So, we need to study possible tipping points in the Earth’s climate system. Here’s a list of them:

Tipping point, Azimuth Library.

Today I want to talk about one: melting permafrost. When melting permafrost in the Arctic starts releasing lots of carbon dioxide and methane—a vastly more potent greenhouse gas—the Earth will get even hotter. That, in turn, will melt even more permafrost. In theory, this feedback loop could tip the Earth over to a much hotter state. But how much should we worry about this?

Climate activist Joe Romm takes it very seriously:

• Joe Romm, NSIDC bombshell: Thawing permafrost feedback will turn Arctic from carbon sink to source in the 2020s, releasing 100 billion tons of carbon by 2100, Climate Progress, 17 February 2011.

If you click on just one link of mine today, let it be this! He writes in a clear, snappy way. But let me take you through some of the details in my own more pedestrian fashion.

For starters, the Arctic is melting. Here’s a graph of Arctic sea ice volume created by the Pan-Arctic Ice Ocean Modeling and Assimilation System—click to enlarge:

The blue line is the linear best fit, but you can see it’s been melting faster lately. Is this a glitch or a new trend? Time will tell.

2011 is considerably worse than 2007, the previous record-holder. Here you can clearly see the estimated total volume in thousands of cubic kilometers, and how it changes with the seasons:


As the Arctic melts, many things are changing. The fabled Northwest Passage is becoming a practical waterway, so battles are starting to heat up over who controls it. The U.S. and other nations see it as an international waterway. But Canada says it owns it, and has the right to regulate and protect it:

• Jackie Northam, Arctic warming unlocking a fabled waterway, Morning Edition, National Public Radio, 15 August 2011.

But the 800-pound gorilla in the room is the melting permafrost. A lot of the Arctic is covered by permafrost, and it stores a lot of carbon, both as peat and as methane. After all, peat is rotten plant material, and rotting plants make methane. Recent work estimates that between 1400 and 1700 gigatonnes of carbon is stored in permafrost soils worldwide:

• C. Tarnocai, J. G. Canadell, E. A. G. Schuur, P. Kuhry, G. Mazhitova, and S. Zimov, Soil organic carbon pools in the northern circumpolar permafrost region, Global Biogeochemical Cycles 23 (2009), GB2023.

That’s more carbon than currently resides in all living things, and twice as much carbon as held by the atmosphere!

How much of this carbon will be released as the Arctic melts—and how fast? There’s a new paper about that:

• Kevin Schaefer, Tingjun Zhang, Lori Bruhwiler, Andrew Barrett, Amount and timing of permafrost carbon release in response to climate warming, Tellus B 63 (2011), 165–180.

It’s not free, but you can read Joe Romm’s summary. Here’s their estimate of how carbon will be released by melting permafrost:

So, they’re guessing that melting permafrost will release a gigatonne of carbon per year by the mid-2030s. Moreover, they say:

We predict that the PCF [permafrost carbon feedback] will change the Arctic from a carbon sink to a source after the mid-2020s and is strong enough to cancel 42-88% of the total global land sink. The thaw and decay of permafrost carbon is irreversible and accounting for the PCF will require larger reductions in fossil fuel emissions to reach a target atmospheric CO2 concentration.

One of the authors explains more details here:

“The amount of carbon released [by 2200] is equivalent to half the amount of carbon that has been released into the atmosphere since the dawn of the industrial age,” said NSIDC scientist Kevin Schaefer. “That is a lot of carbon.”

The carbon from permanently frozen ground known as permafrost “will make its impact, not only on the climate, but also on international strategies to reduce climate change,” Schaefer said. “If we want to hit a target carbon concentration, then we have to reduce fossil fuel emissions that much lower than previously calculated to account for this additional carbon from the permafrost,” Schaefer said. “Otherwise we will end up with a warmer Earth than we want.”

The carbon comes from plant material frozen in soil during the ice age of the Pleistocene: the icy soil trapped and preserved the biomass for thousands of years. Schaefer equates the mechanism to storing broccoli in the home freezer: “As long as it stays frozen, it stays stable for many years,” he said. “But you take it out of the freezer and it will thaw out and decay.”

Now, permafrost is thawing in a warming climate and “just like the broccoli” the biomass will thaw and decay, releasing carbon into the atmosphere like any other decomposing plant material, Schaefer said. To predict how much carbon will enter the atmosphere and when, Schaefer and coauthors modeled the thaw and decay of organic matter currently frozen in permafrost under potential future warming conditions as predicted by the Intergovernmental Panel on Climate Change.

They found that between 29-59 percent of the permafrost will disappear by 2200. That permafrost took tens of thousands of years to form, but will melt in less than 200, Schaefer said.

Sound alarmist? In fact, there are three unrealistically conservative assumptions built into this paper:

1) The authors assume the ‘moderate warming’ scenario called A1B, which has atmospheric concentrations of CO2 reaching 520 ppm by 2050 and stabilizing at 700 ppm in 2100. But so far we seem to be living out the A1FI scenario, which reaches 1000 ppm by century’s end.

2) Their estimate of future temperatures neglects the effect of greenhouse gases released by melting permafrost.

3) They assume all carbon emitted by permafrost will be in the form of CO2, not methane.

Point 2) means that the whole question of a feedback loop is not explored in this paper. I understand why. To do that, you can’t use someone else’s climate model: you need to build your own! But it’s something we need to study. Do you know anyone who is? Joe Romm says:

Countless studies make clear that global warming will release vast quantities of greenhouse gases into the atmosphere this decade. Yet, no climate model currently incorporates the amplifying feedback from methane released by a defrosting tundra.

If we try to understand this feedback, point 3) becomes important. After all, while methane goes away faster than CO2, its greenhouse effect is much stronger while it lasts. For the first 20 years, methane has about 72 times the global warming potential of carbon dioxide. Over the first 100 years, it’s about 25 times as powerful.

Let’s think about that a minute. In 2008, we burnt about 8 gigatonnes of carbon. If Schaefer et al are right, we can expect 1 extra gigatonne of carbon to be released from Arctic permafrost by around 2035. If that’s almost all in the form of carbon dioxide, it makes our situation slightly worse. But if a lot of it is methane, which is—let’s roughly say—72 times as bad—then our situation will be dramatically worse.
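
To put rough numbers on that comparison, here’s a back-of-the-envelope sketch (my own, not from the Schaefer et al paper). One subtlety: the 72 and 25 figures are global warming potentials per tonne of gas, so a gigatonne of carbon first has to be converted into tonnes of CO2 or CH4 before applying them:

```python
# A back-of-the-envelope sketch (mine, not from Schaefer et al): the CO2-
# equivalent of releasing one gigatonne of permafrost carbon, for various
# assumed methane fractions.  GWPs are per tonne of gas, so carbon mass is
# first converted to gas mass via molecular weights.
GWP_CH4_20 = 72.0        # global warming potential of methane over 20 years
GWP_CH4_100 = 25.0       # ... and over 100 years
CO2_PER_C = 44.0 / 12.0  # tonnes of CO2 per tonne of carbon
CH4_PER_C = 16.0 / 12.0  # tonnes of CH4 per tonne of carbon

def co2_equivalent(gtc, methane_fraction, gwp):
    """CO2-equivalent (in Gt) of gtc gigatonnes of carbon, a given fraction
    of which escapes as methane and the rest as carbon dioxide."""
    as_co2 = gtc * (1 - methane_fraction) * CO2_PER_C
    as_ch4 = gtc * methane_fraction * CH4_PER_C * gwp
    return as_co2 + as_ch4

for frac in (0.0, 0.05, 0.25, 0.5):
    print(f"methane fraction {frac:4.0%}: "
          f"{co2_equivalent(1, frac, GWP_CH4_100):5.1f} Gt CO2e (100-yr), "
          f"{co2_equivalent(1, frac, GWP_CH4_20):5.1f} Gt CO2e (20-yr)")
```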

But I don’t know how much of the carbon released will be in the form of methane. I also don’t know how much of the methane will turn into other organic compounds before it gets into the atmosphere. I’d really like to know!

I hope you learn more about this stuff and help me out. Here are a few good references available for free online, to get started:

• Edward A. G. Schuur et al, Vulnerability of permafrost carbon to climate change: implications for the global carbon cycle, Bioscience 58 (2008), 701–714.

• David M. Lawrence, Andrew G. Slater, Robert A. Tomas, Marika M. Holland and Clara Deser, Accelerated Arctic land warming and permafrost degradation during rapid sea ice loss, Geophysical Research Letters 35 (2008), L11506.

• Amanda Leigh Mascarelli, A sleeping giant?, Nature Reports Climate Change, 5 March 2009.

The last one discusses the rise in atmospheric methane that was observed in 2007:

It also discusses the dangers of methane being released from ice-methane crystals called methane clathrates at the bottom of the ocean—something I’m deliberately not talking about here, because it deserves its own big discussion. However, there are also clathrates in the permafrost. Here’s a picture by W. F. Kuhs, showing what methane clathrate looks like at the atomic scale:

The green guy in the middle is methane, trapped in a cage of water molecules. Click for more details.

If you know more good references, please tell me about them here or add them to:

Permafrost, Azimuth Library.


Bayesian Computations of Expected Utility

19 August, 2011

GiveWell is an organization that rates charities. They’ve met people who argue that

charities working on reducing the risk of sudden human extinction must be the best ones to support, since the value of saving the human race is so high that “any imaginable probability of success” would lead to a higher expected value for these charities than for others.

For example, say I have a dollar to spend on charity. One charity says that with this dollar they can save the life of one child in Somalia. Another says that with this dollar they can increase by .000001% our chance of saving 1 billion people from the effects of a massive asteroid colliding with the Earth.

Naively, in terms of the expected number of lives saved, the latter course of action seems 10 times better, since

.000001% × 1 billion = 10

But is it really better?

It’s a subtle question, with all sorts of complicating factors, like why should I trust these guys?

I’m not ready to present a thorough analysis of this sort of question today. But I would like to hear what you think about it. And I’d like you to read what the founder of GiveWell has to say about it:

• Holden Karnofsky, Why we can’t take expected value estimates literally (even when they’re unbiased), 18 August 2011.

He argues against what he calls an ‘explicit expected value’ or ‘EEV’ approach:

The mistake (we believe) is estimating the “expected value” of a donation (or other action) based solely on a fully explicit, quantified formula, many of whose inputs are guesses or very rough estimates. We believe that any estimate along these lines needs to be adjusted using a “Bayesian prior”; that this adjustment can rarely be made (reasonably) using an explicit, formal calculation; and that most attempts to do the latter, even when they seem to be making very conservative downward adjustments to the expected value of an opportunity, are not making nearly large enough downward adjustments to be consistent with the proper Bayesian approach.

His focus, in short, is on the fact that anyone saying “this money can increase by .000001% our chance of saving 1 billion people from an asteroid impact” is likely to be pulling those numbers from thin air. If they can’t really back up their numbers with a lot of hard evidence, then our lack of confidence in their estimate should be taken into account somehow.

His article spends a lot of time analyzing a less complex but still very interesting example:

It seems fairly clear that a restaurant with 200 Yelp reviews, averaging 4.75 stars, ought to outrank a restaurant with 3 Yelp reviews, averaging 5 stars. Yet this ranking can’t be justified in an explicit expected utility framework, in which options are ranked by their estimated average/expected value.

This is the only question I really want to talk about today. Actually I’ll focus on a similar question that Tim van Beek posed on this blog:

You have two kinds of fertilizer, A and B. You know that of 4 trees that got A, three thrived and one died. Of 36 trees that got B, 24 thrived and 12 died. Which fertilizer would you buy?

So, 3/4 of the trees getting fertilizer A thrived, while only 2/3 of those getting fertilizer B thrived. That makes fertilizer A seem better. However, the sample size is considerably larger for fertilizer B, so we may feel more confident about the results in this case. Which should we choose?

Nathan Urban tackled the problem in an interesting way. Let me sketch what he did before showing you his detailed work.

Suppose that before doing any experiments at all, we assume the probability \pi that a fertilizer will make a tree thrive is a number uniformly distributed between 0 and 1. This assumption is our “Bayesian prior”.

Note: I’m not saying this prior is “correct”. You are allowed to choose a different prior! Choosing a different prior will change your answer to this puzzle. That can’t be helped. We need to make some assumption to answer this kind of puzzle; we are simply making it explicit here.

Starting from this prior, Nathan works out the probability that \pi has some value given that when we apply the fertilizer to 4 trees, 3 thrive. That’s the black curve below. He also works out the probability that \pi has some value given that when we apply the fertilizer to 36 trees, 24 thrive. That’s the red curve:

The red curve, corresponding to the experiment with 36 trees, is much more sharply peaked. That makes sense. It means that when we do more experiments, we become more confident that we know what’s going on.
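
If you’d like to draw curves like these yourself, here’s a minimal sketch in Python (my own, assuming the uniform prior above, so the posteriors work out to Beta(4, 2) and Beta(25, 13), as Nathan derives below):

```python
# A minimal sketch (mine) of the two posterior curves, assuming the uniform
# prior above: the posteriors are then Beta(3+1, 4-3+1) = Beta(4, 2) for the
# 4-tree experiment and Beta(24+1, 36-24+1) = Beta(25, 13) for the 36-tree one.
import numpy as np
import matplotlib.pyplot as plt
from scipy.stats import beta

x = np.linspace(0, 1, 500)
plt.plot(x, beta(4, 2).pdf(x), "k-", label="3 of 4 trees thrive")
plt.plot(x, beta(25, 13).pdf(x), "r-", label="24 of 36 trees thrive")
plt.xlabel("probability of thriving")
plt.ylabel("posterior density")
plt.legend()
plt.show()
```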

We still have to choose a criterion to decide which fertilizer is best! This is where ‘decision theory’ comes in. For example, suppose we want to maximize the expected number of trees that thrive. Then Nathan shows that fertilizer A is slightly better, despite the smaller sample size.

However, he also shows that if fertilizer A succeeded 4 out of 5 times, while fertilizer B succeeded 7 out of 9 times, the same evaluation procedure would declare fertilizer B better! Its percentage success rate is less: about 78% instead of 80%. However, the sample size is larger. And in this particular case, given our particular Bayesian prior and given what we are trying to maximize, that’s enough to make fertilizer B win.
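
If you want to check these numbers yourself: as Nathan explains below, with the uniform prior the whole decision boils down to comparing the posterior-predictive success probability (s+1)/(n+2) for each fertilizer. A few lines of Python (my own sketch) confirm both verdicts:

```python
# With a uniform Beta(1,1) prior, the posterior-predictive probability that a
# new tree thrives after s successes in n trials is (s+1)/(n+2) -- see
# Nathan's derivation below.  Comparing this quantity decides the puzzle.
def predictive_success(s, n):
    return (s + 1) / (n + 2)

# Original puzzle: A thrived in 3 of 4 trials, B in 24 of 36.
print(predictive_success(3, 4), predictive_success(24, 36))  # ~0.667 vs ~0.658: A wins

# Modified puzzle: A thrived in 4 of 5 trials, B in 7 of 9.
print(predictive_success(4, 5), predictive_success(7, 9))    # ~0.714 vs ~0.727: B wins
```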

So if someone is trying to get you to contribute to a charity, there are many interesting issues involved in deciding if their arguments are valid or just a bunch of… fertilizer.

Here is Nathan’s detailed calculation:

It’s fun to work out an official ‘correct’ answer mathematically, as John suggested. Of course, this ends up being a long way of confirming the obvious—and the answer is only as good as the assumptions—but I think it’s interesting anyway. In this case, I’ll work it out by maximizing expected utility in Bayesian decision theory, for one choice of utility function. This dodges the whole risk aversion point, but it opens discussion for how the assumptions might be modified to account for more real-world considerations. Hopefully others can spot whether I’ve made mistakes in the derivations.

In Bayesian decision theory, the first thing you do is write down the data-generating process and then compute a posterior distribution for what is unknown.

In this case, we may assume the data-generating process (likelihood function) is a binomial distribution \mathrm{Bin}(s,n|\pi) for s successes in n trials, given a probability of success \pi. Fertilizer A corresponds to s=3, n=4 and fertilizer B corresponds to s=24, n=36.

The probability of success \pi is unknown, and we want to infer its posterior conditional on the data, p(\pi|s,n). To compute a posterior we need to assume a prior on \pi.

It turns out that the Beta distribution is conjugate to a binomial likelihood, meaning that if we assume a Beta distributed prior, then the posterior is also Beta distributed. If the prior is \pi \sim \mathrm{Beta}(\alpha_0,\beta_0) then the posterior is

\pi \sim \mathrm{Beta}(\alpha=\alpha_0+s,\beta=\beta_0+n-s).

One choice for a prior is a uniform prior on [0,1], which corresponds to a \mathrm{Beta}(1,1) distribution. There are of course other prior choices which will lead to different conclusions. For this prior, the posterior is \mathrm{Beta}(\pi; s+1, n-s+1). The posterior mode is

(\alpha-1)/(\alpha+\beta-2) = s/n

and the posterior mean is

\alpha/(\alpha+\beta) = (s+1)/(n+2).

So, what is the inference for fertilizers A and B? I made a graph of the posterior distributions. You can see that the inference for fertilizer B is sharper, as expected, since there is more data. But the inference for fertilizer A tends towards higher success rates, which can be quantified.

Fertilizer A has a posterior mode of 3/4 = 0.75 and B has a mode of 2/3 = 0.667, corresponding to the sample proportions. The mode isn’t the only measure of central tendency we could use. The means are 0.667 for A and 0.658 for B; the medians are 0.686 for A and 0.661 for B. No matter which of the three statistics we choose, fertilizer A looks better than fertilizer B.

But we haven’t really done “decision theory” yet. We’ve just compared point estimators. Actually, we have done a little decision theory, implicitly. It turns out that picking the mean corresponds to the estimator which minimizes the expected squared error in \pi, where “squared error” can be thought of formally as a loss function in decision theory. Picking the median corresponds to minimizing the expected absolute loss, and picking the mode corresponds to minimizing the 0-1 loss (where you lose nothing if you guess \pi exactly and lose 1 otherwise).

Still, these don’t really correspond to a decision theoretic view of the problem. We don’t care about the quantity \pi at all, let alone some point estimator of it. We only care about \pi indirectly, insofar as it helps us predict something about what the fertilizer will do to new trees. For that, we have to move from the posterior distribution p(\pi|s,n) to the predictive distribution

p(y|s,n) = \int p(y|\pi,n)\,p(\pi|s,n)\,d\pi ,

where y is a random variable indicating whether a new tree will thrive under treatment. Here I assume that the success of new trees follows the same binomial distribution as in the experimental group.

For a Beta posterior, the predictive distribution is beta-binomial, and the expected number of successes for a new tree is equal to the mean of the Beta distribution for \pi – i.e. the posterior mean we computed before, (s+1)/(n+2). If we introduce a utility function such that we are rewarded 1 util for a thriving tree and 0 utils for a non-thriving tree, then the expected utility is equal to the expected success rate. Therefore, under these assumptions, we should choose the fertilizer that maximizes the quantity (s+1)/(n+2), which, as we’ve seen, favors fertilizer A (0.667) over fertilizer B (0.658).

An interesting mathematical question is, does this ever work out to a “non-obvious” conclusion? That is, if fertilizer A has a sample success rate which is greater than fertilizer B’s sample success rate, but expected utility maximization prefers fertilizer B? Mathematically, we’re looking for a set {s,s',n,n'} such that s/n>s'/n' but (s+1)/(n+2) < (s'+1)/(n'+2). (Also there are obvious constraints on s and s'.) The answer is yes. For example, if fertilizer A has 4 of 5 successes while fertilizer B has 7 of 9 successes.
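
A quick brute-force search (not part of Nathan’s comment) turns up plenty of such reversals among small sample sizes:

```python
# Not part of Nathan's comment: a brute-force search for "non-obvious" cases,
# where s/n > s'/n' (A has the better sample success rate) yet
# (s+1)/(n+2) < (s'+1)/(n'+2) (expected utility prefers B).
def reversals(max_n=10):
    hits = []
    for n in range(1, max_n + 1):
        for s in range(n + 1):
            for n2 in range(1, max_n + 1):
                for s2 in range(n2 + 1):
                    if s / n > s2 / n2 and (s + 1) / (n + 2) < (s2 + 1) / (n2 + 2):
                        hits.append((s, n, s2, n2))
    return hits

hits = reversals()
print((4, 5, 7, 9) in hits)                 # True: the example above
print(len(hits), "reversals with n, n' up to 10")
```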

By the way, on a quite different note: NASA currently rates the chances of the asteroid Apophis colliding with the Earth in 2036 at 4.3 × 10⁻⁶. It estimates that the energy of such a collision would be comparable with a 510-megatonne thermonuclear bomb. This is ten times larger than the largest bomb actually exploded, the Tsar Bomba. The Tsar Bomba, in turn, was ten times larger than all the explosives used in World War II.



There’s an interesting Chinese plan to deflect Apophis if that should prove necessary. It is, however, quite a sketchy plan. I expect people will make more detailed plans shortly before Apophis comes close to the Earth in 2029.


Calculating Catastrophe

14 June, 2011

This book could be interesting. If you read it, could you tell us what you think?

• Gordon Woo, Calculating Catastrophe, World Scientific Press, Singapore, 2011.

Apparently Dr. Gordon Woo was trained in mathematical physics at Cambridge, MIT and Harvard, and has made his career as a ‘calculator of catastrophes’. He has consulted for the IAEA on the seismic safety of nuclear plants and for BP on offshore oil well drilling—it’ll be fun to see what he has to say about his triumphant success in preventing disasters in both those areas. He now works at a company called Risk Management Solutions, where he works on modelling catastrophes for insurance purposes, and has designed a model for terrorism risk.

According to the blurb I got:

This book has been written to explain, to a general readership, the underlying philosophical ideas and scientific principles that govern catastrophic events, both natural and man-made. Knowledge of the broad range of catastrophes deepens understanding of individual modes of disaster. This book will be of interest to anyone aspiring to understand catastrophes better, but will be of particular value to those engaged in public and corporate policy, and the financial markets.

The table of contents lists: Natural Hazards; Societal Hazards; A Sense of Scale; A Measure of Uncertainty; A Matter of Time; Catastrophe Complexity; Terrorism; Forecasting; Disaster Warning; Disaster Scenarios; Catastrophe Cover; Catastrophe Risk Securitization; Risk Horizons.

Maybe you know other good books on the same subject?

For a taste of his thinking, you can try this:

• Gordon Woo, Terrorism risk.

Terrorism sounds like a particularly difficult risk to model, since it involves intelligent agents who try to do unexpected things. But maybe there are still some guiding principles. Woo writes:

It turns out that the number of operatives involved in planning and preparing attacks has a tipping point in respect of the ease with which the dots might be joined by counter-terrorism forces. The opportunity for surveillance experts to spot a community of terrorists, and gather sufficient evidence for courtroom convictions, increases nonlinearly with the number of operatives – above a critical number, the opportunity improves dramatically. This nonlinearity emerges from analytical studies of networks, using modern graph theory methods (Derenyi et al. [21]). Below the tipping point, the pattern of terrorist links may not necessarily betray much of a signature to the counter-terrorism services. However, above the tipping point, a far more obvious signature may become apparent in the guise of a large connected network cluster of dots, which reveals the presence of a form of community. The most ambitious terrorist plans, involving numerous operatives, are thus liable to be thwarted. As exemplified by the audacious attempted replay in 2006 of the Bojinka spectacular, too many terrorists spoil the plot (Woo, [22]).

Intelligence surveillance and eavesdropping of terrorist networks thus constrain the pipeline of planned attacks that logistically might otherwise seem almost boundless. Indeed, such is the capability of the Western forces of counterterrorism, that most planned attacks, as many as 80% to 90%, are interdicted. For example, in the three years before the 7/7/05 London attack, eight plots were interdicted. Yet any non-interdicted planned attack is construed as a significant intelligence failure. The public expectation of flawless security is termed the ‘90-10 paradox.’ Even if 90% of plots are foiled, it is by the 10% which succeed that the security services are ultimately remembered.

Of course the reference to “modern graph theory methods” will be less intimidating or impressive to many readers here than to the average, quite possibly innumerate reader of this document. But here’s the actual reference, in case you’re curious:

• I. Derenyi, G. Palla and T. Vicsek, Clique percolation in random networks, Phys. Rev. Lett. 94 (2005), 160202.

Just for fun, let me summarize the main result, so you can think about how relevant it might be to terrorist networks.

A graph is roughly a bunch of dots connected by edges. A clique in a graph is some subset of dots each of which is connected to every other. So, if dots are people and we draw an edge when two people are friends, a clique is a bunch of people who are all friends with each other—hence the name ‘clique’. But we might also use a clique to represent a bunch of people who are all engaged in the same activity, like a terrorist plot.

We’ve talked here before about Erdős–Rényi random graphs. These are graphs formed by taking a bunch of dots and randomly connecting each pair by an edge with some fixed probability p. In the paper above, the authors argue that for an Erdős–Rényi random graph with N vertices, the chance that most of the cliques with k elements all touch each other and form one big fat ‘giant component’ shoots up suddenly when

p \ge [(k-1) N]^{-1/(k-1)}
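
Here’s a small numerical illustration of that threshold (my own, using networkx, with arbitrary choices of N and k):

```python
# A small numerical illustration (mine, not from Derenyi et al.): compare the
# size of the largest k-clique community in Erdos-Renyi graphs G(N, p) just
# below and just above the predicted threshold p_c = [(k-1) N]^(-1/(k-1)).
import networkx as nx
from networkx.algorithms.community import k_clique_communities

N, k = 300, 3
p_c = ((k - 1) * N) ** (-1.0 / (k - 1))   # about 0.041 for these values

for p in (0.5 * p_c, 2.0 * p_c):
    G = nx.gnp_random_graph(N, p, seed=1)
    communities = list(k_clique_communities(G, k))
    largest = max((len(c) for c in communities), default=0)
    print(f"p = {p:.3f} ({'below' if p < p_c else 'above'} threshold): "
          f"largest {k}-clique community has {largest} nodes")
```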

This sort of effect is familiar in many different contexts: it’s called a ‘percolation threshold’. I can guess the implications for terrorist networks that Gordon Woo is alluding to. However, I doubt the details of the math are very important here, since social networks are not well modeled by Erdős–Rényi random graphs.

In the real world, if you and I have a mutual friend, that will increase the chance that we’ll be friends. Similarly, if we share a conspirator, that increases the chance that we’re in the same conspiracy. But in a world where friendship was described by an Erdős–Rényi random graph, that would not be the case!

So, while I agree that large terrorist networks are easier to catch than small ones, I don’t think the math of Erdős–Rényi random graphs gives any quantitative insight into how much easier it is.


How Sea Level Rise Will Affect New York

9 June, 2011

Let’s try answering this question on Quora:

How will global warming, and particularly sea level rises, affect New York City?

I doubt sea level rise will be the first way we’ll get badly hurt by global warming. I think it’ll be crop losses caused by floods, droughts and heat waves, and property damage caused by storms. But the question focuses on sea level rise, so perhaps we should think about that… along with any other ways that New York City is particularly susceptible to the effects of global warming.

Suppose you know a lot about New York, but you need an estimate of sea level rise to get started. In the Azimuth Project page on sea level rise, you’ll see a lot of discussion of this subject. Naturally, it’s complicated. But say you just want some numbers. Okay: very roughly, by the end of the century we can expect a sea level rise of at least 0.6 meters, not counting any melting from Greenland and Antarctica, and at most 2 meters, including Greenland and Antarctica. That’s roughly between 2 and 6 feet.

On the other hand, there’s at least one report saying sea levels may rise in the Northeast US at twice the average global rate. What’s the latest word on that?

Now, here’s a website that claims to show what various amounts of sea level rise would do to different areas:

• Firetree.net, Flood maps, including New York City.

Details on how these maps were made are here. One problem is that they focus too much on really big sea level rises: the smallest rise shown is 1 meter, then 2 meters… and it goes up to 60 meters!

Anyway, here’s part of New York City now:

Here it is after a 1-meter (3-foot) sea level rise:


(Click to enlarge any of these.) And here’s 2 meters, or 6 feet:


It’s a bit hard to spot the effects in Manhattan. They’re much more noticeable in the low-lying areas between Jersey City and Secaucus. What are those: parks, industrial areas, or suburbs? I’ve heard New Yorkers crack jokes about the ‘swamps of Jersey’…

But of course, a lot of the city is underground. What will happen to subways and other infrastructure, like sewage systems? And what about water supplies? On coastlines, saltwater can infiltrate into surface waters and aquifers. Where does freshwater meet saltwater near New York City? How will the effect of floods and storms change?

And of course, there are other parts of New York City these little maps don’t show: for those, go here. But watch out: at first you’ll see the effect of a 7-meter sea level rise… you’ll need to change the settings to see the effects of a more realistic rise.

If you live in a place that will be flooded, let me know!

Luckily, we don’t have to figure everything out ourselves: the state of New York has a task force devoted to this. And as task forces do, they’ve written a report:

• New York Department of Environmental Conservation, Sea Level Rise Task Force, Final Report.

New York City also has an ambitious environmental plan:

• New York City, PlaNYC 2030.

Finally, let me quote part of this:

• Jim O’Grady, Sea level rise could turn New York into Venice, experts warn, WNYC News, 9 February 2011.

Because it looks ahead 200 years, this article paints a more dire picture than my remarks above:

David Bragdon, Director of the Mayor’s Office of Long-Term Planning & Sustainability, is charged with preparing for the dangers of climate change. He said the city is taking precautions like raising the pumps at a wastewater treatment plant in the Rockaways and building the Willets Point development in Queens on six feet of landfill. The goal is to manage the risk from 100-year storms—one of the most severe. The mayor’s report says by the end of this century, 100-year storms could start arriving every 15 to 35 years.

Klaus Jacob, a Columbia University research scientist who specializes in disaster risk management, said that estimate may be too conservative. “What is now the impact of a 100-year storm will be, by the end of this century, roughly a 10-year storm,” he warned.

Back on the waterfront, oceanographer Malcolm Bowman offered what he said is a suitably outsized solution to this existential threat: storm surge barriers.

They would rise from the waters at Throgs Neck, where Long Island Sound and the East River meet, and at the opening to the lower harbor between the Rockaways and Sandy Hook, New Jersey. Like the barriers on the Thames River that protect London, they would stay open most of the time to let ships pass but close to protect the city during hurricanes and severe storms.

The structures at their highest points would be 30 feet above the harbor surface. Preliminary engineering studies put the cost at around $11 billion.

Jacob suggested a different but equally drastic approach. He said sea level rise may force New Yorkers to pull back from vulnerable neighborhoods. “We will have to densify the high-lying areas and use the low-lying areas as parks and buffer zones,” he said.

In this scenario, New York in 200 years looks like Venice. Concentrations of greenhouse gases in the atmosphere have melted ice sheets in Greenland and Antarctica and raised our local sea level by six to eight feet. Inundating storms at certain times of year swell the harbor until it spills into the streets. Dozens of skyscrapers in Lower Manhattan have been sealed at the base and entrances added to higher floors. The streets of the financial district have become canals.

“You may have to build bridges or get Venice gondolas or your little speed boats ferrying yourself up to those buildings,” Jacob said.

David Bragdon is not comfortable with such scenarios. He’d rather talk about the concrete steps he’s taking now, like updating the city’s flood evacuation plan to show more neighborhoods at risk. That would help the people living in them be better prepared to evacuate.

He said it’s too soon to contemplate the “extreme” step of moving “two, three, four hundred thousand people out of areas they’ve occupied for generations,” and disinvesting “literally billions of dollars of infrastructure in those areas.” On the other hand: “Another extreme would be to hide our heads in the sand and say, ‘Nothing’s going to happen.’”

Bragdon said he doesn’t think New Yorkers of the future will have to retreat very far from shore, if at all, but he’s not sure. And he would neither commit to storm surge barriers nor eliminate them as an option. He said what’s needed is more study—and that he’ll have further details in April, when the city updates PlaNYC.

Jacob warned that in preparing for disaster, no matter how far off, there’s a gulf between study and action. “There’s a good intent,” he said of New York’s climate change planning to date. “But, you know, mother nature doesn’t care about intent. Mother nature wants to see resiliency. And that is questionable, whether we have that.”


What To Do? (Part 1)

24 April, 2011

In a comment on my last interview with Yudkowsky, Eric Jordan wrote:

John, it would be great if you could follow up at some point with your thoughts and responses to what Eliezer said here. He’s got a pretty firm view that environmentalism would be a waste of your talents, and it’s obvious where he’d like to see you turn your thoughts instead. I’m especially curious to hear what you think of his argument that there are already millions of bright people working for the environment, so your personal contribution wouldn’t be as important as it would be in a less crowded field.

I’ve been thinking about this a lot.

Indeed, the reason I quit work on my previous area of interest—categorification and higher gauge theory—was the feeling that more and more people were moving into it. When I started, it seemed like a lonely but exciting quest. By now there are plenty of conferences on it, attended by plenty of people. It would be a full-time job just keeping up, much less doing something truly new. That made me feel inadequate—and worse, unnecessary. Helping start a snowball roll downhill is fun… but what’s the point in chasing one that’s already rolling?

The people working in this field include former grad students of mine and other youngsters I helped turn on to the subject. At first this made me a bit frustrated. It’s as if I engineered my own obsolescence. If only I’d spent less time explaining things, and more time proving theorems, maybe I could have stayed at the forefront!

But by now I’ve learned to see the bright side: it means I’m free to do other things. As I get older, I’m becoming ever more conscious of my limited lifespan and the vast number of things I’d like to try.

But what to do?

This is a big question. It’s a bit self-indulgent to discuss it publicly… or maybe not. It is, after all, a question we all face. I’ll talk about me, because I’m not up to tackling this question in its universal abstract form. But it could be you asking this, too.

For me this question was brought into sharp focus when I got a research position where I was allowed—nay, downright encouraged!—to follow my heart and work on what I consider truly important. In the ordinary course of life we often feel too caught up in the flow of things to do more than make small course corrections. Suddenly I was given a burst of freedom. What to do with it?

In my earlier work, I’d always taken the attitude that I should tackle whatever questions seemed most beautiful and profound… subject to the constraint that I had a good chance of making some progress on them. I realized that this attitude assumes other people will do most of the ‘dirty work’, whatever that may be. But I figured I could get away with it. I figured that if I were ever called to account—by my own conscience, say—I could point to the fact that I’d worked hard to understand the universe and also spent a lot of time teaching people, both in my job and in my spare time. Surely that counts for something?

I had, however, for decades been observing the slow-motion train wreck that our civilization seems to be engaged in. Global warming, ocean acidification and habitat loss may be combining to cause a mass extinction event, and perhaps—in conjunction with resource depletion—a serious setback to human civilization. Now is not the time to go over all the evidence: suffice it to say that I think we may be heading for serious trouble.

It’s hard to know just how much trouble. If it were just routine ‘misery as usual’, I’ll admit I’d be happy to sit back and let everyone else deal with these problems. But the more I study them, the more that seems untenable… especially since so many people are doing just that: sitting back and letting everyone else deal with them.

I’m not sure this complex of problems rises to the level of an ‘existential risk’—which Nick Bostrom defines as one where an adverse outcome would either annihilate intelligent life originating on Earth or permanently and drastically curtail its potential. But I see scenarios where we clobber ourselves quite seriously. They don’t even seem unlikely, and they don’t seem very far-off, and I don’t see people effectively rising to the occasion. So, just as I’d move to put out a fire if I saw smoke coming out of the kitchen and everyone else was too busy watching TV to notice, I feel I have to do something.

But the question remains: what to do?

Eliezer Yudkowsky had some unabashed advice:

I honestly don’t see how a rationalist can avoid this conclusion: At this absolutely critical hinge in the history of the universe—Earth in the 21st century—rational altruists should devote their marginal attentions to risks that threaten to terminate intelligent life or permanently destroy a part of its potential. Those problems, which Nick Bostrom named ‘existential risks’, have got all the scope. And when it comes to marginal impact, there are major risks outstanding that practically no one is working on. Once you get the stakes on a gut level it’s hard to see how doing anything else could be sane.

So how do you go about protecting the future of intelligent life? Environmentalism? After all, there are environmental catastrophes that could knock over our civilization… but then if you want to put the whole universe at stake, it’s not enough for one civilization to topple, you have to argue that our civilization is above average in its chances of building a positive galactic future compared to whatever civilization would rise again a century or two later. Maybe if there were ten people working on environmentalism and millions of people working on Friendly AI, I could see sending the next marginal dollar to environmentalism. But with millions of people working on environmentalism, and major existential risks that are completely ignored… if you add a marginal resource that can, rarely, be steered by expected utilities instead of warm glows, devoting that resource to environmentalism does not make sense.

Similarly with other short-term problems. Unless they’re little-known and unpopular problems, the marginal impact is not going to make sense, because millions of other people will already be working on them. And even if you argue that some short-term problem leverages existential risk, it’s not going to be perfect leverage and some quantitative discount will apply, probably a large one. I would be suspicious that the decision to work on a short-term problem was driven by warm glow, status drives, or simple conventionalism.

With that said, there’s also such a thing as comparative advantage—the old puzzle of the lawyer who works an hour in the soup clinic instead of working an extra hour as a lawyer and donating the money. Personally I’d say you can work an hour in the soup clinic to keep yourself going if you like, but you should also be working extra lawyer-hours and donating the money to the soup clinic, or better yet, to something with more scope. (See “Purchase Fuzzies and Utilons Separately” on Less Wrong.) Most people can’t work effectively on Artificial Intelligence (some would question if anyone can, but at the very least it’s not an easy problem). But there’s a variety of existential risks to choose from, plus a general background job of spreading sufficiently high-grade rationality and existential risk awareness. One really should look over those before going into something short-term and conventional. Unless your master plan is just to work the extra hours and donate them to the cause with the highest marginal expected utility per dollar, which is perfectly respectable.

Where should you go in life? I don’t know exactly, but I think I’ll go ahead and say “not environmentalism”. There’s just no way that the product of scope, marginal impact, and John Baez’s comparative advantage is going to end up being maximal at that point.

When I heard this, one of my first reactions was: “Of course I don’t want to do anything ‘conventional’, something that ‘millions of people’ are already doing”. After all, my sense of being just another guy in the crowd was a big factor in leaving work on categorification and higher gauge theory—and most people have never even heard of those subjects!

I think so far the Azimuth Project is proceeding in a sufficiently unconventional way that while it may fall flat on its face, it’s at least trying something new. Though I always want more people to join in, we’ve already got some good projects going that take advantage of my ‘comparative advantage’: the ability to do math and explain stuff.

The most visible here is the network theory project, which is a step towards the kind of math I think we need to understand a wide variety of complex systems. I’ve been putting most of my energy into that lately, and coming up with ideas faster than I can explain them. On top of that, Eric Forgy, Tim van Beek, Staffan Liljegren, Matt Reece, David Tweed and others have other interesting projects cooking behind the scenes on the Azimuth Forum. I’ll be talking about those soon, too.

I don’t feel satisfied, though. I’m happy enough—that’s never a problem these days—but once you start trying to do things to help the world, instead of just have fun, it’s very tricky to determine the best way to proceed.

One can, of course, easily fool oneself into thinking one knows.


Stabilization Wedges (Part 5)

21 April, 2011

In 2004, Pacala and Socolow laid out a list of ways we can battle global warming using current technologies. They said that to avoid serious trouble, we need to choose seven ‘stabilization wedges’: that is, seven ways to cut carbon emissions by 1 gigatonne per year within 50 years. They listed 15 wedges to choose from, and I’ve told you about them here:

Part 1 – efficiency and conservation.

Part 2 – shifting from coal to natural gas, carbon capture and storage.

Part 3 – nuclear power and renewable energy.

Part 4 – reforestation, good soil management.

According to Pacala:

The message was a very positive one: “gee, we can solve this problem: there are lots of ways to solve it, and lots of ways for the marketplace to solve it.”

I find that interesting, because to me each wedge seems like a gargantuan enterprise—and taken together, they seem like the Seven Labors of Hercules. They’re technically feasible, but who has the stomach for them? I fear things need to get worse before we come to our senses and take action at the scale that’s required.

Anyway, that’s just me. But three years ago, Pacala publicly reconsidered his ideas for a very different reason. Based on new evidence, he gave a talk at Stanford where he said:

It’s at least possible that we’ve already let this thing go too far, and that the biosphere may start to fall apart on us, even if we do all this. We may have to fall back on some sort of dramatic Plan B. We have to stay vigilant as a species.

You can watch his talk here:

It’s pretty damned interesting: he’s a good speaker.

Here’s a dry summary of a few key points. I won’t try to add caveats: I’m sure he would add some himself in print, but I’d rather keep the message simple. I also won’t try to update his information! Not in this blog entry, anyway. But I’ll ask some questions, and I’ll be delighted if you help me out on those.

Emissions targets

First, Pacala’s review of different carbon emissions targets.

The old scientific view, circa 1998: if we could keep the CO2 from doubling from its preindustrial level of 280 parts per million, that would count as a success. Namely, most of the ‘monsters behind the door’ would not come out: continental ice sheets falling into the sea and swamping coastal cities, the collapse of the Atlantic ocean circulation, a drought in the Sahel region of Africa, etcetera.

Many experts say we’d be lucky to get away with CO2 merely doubling. At current burn rates we’ll double it by 2050, and quadruple it by the end of this century. We’ve got enough fossil fuels to send it to seven times its preindustrial levels.

Doubling it would take us to 560 parts per million. A lot of people think that’s too high to be safe. But going for lower levels gets harder:

• In Pacala and Socolow’s original paper, they talked about keeping CO2 below 500 ppm. This would require keeping CO2 emissions constant until 2050. This could be achieved by a radical decarbonization of the economies of rich countries, while allowing carbon emissions in poor countries to grow almost freely until that time.

• For a long time the IPCC and many organizations advocated keeping CO2 below 450 ppm. This would require cutting CO2 emissions by 50% by 2050, which could be achieved by a radical decarbonization in rich countries, and moderate decarbonization in poor countries.

• But by 2008 the IPCC and many groups wanted a cap of 2°C global warming, or keeping CO2 below 430 ppm. This would mean cutting CO2 emissions by 80% by 2050, which would require a radical decarbonization in both rich and poor countries.

The difference here is what poor people have to do. The rich countries need to radically cut carbon emissions in all these scenarios. In the USA, the Lieberman-Warner bill would have forced the complete decarbonization of the economy by 2050.

Then, Pacala spoke about 3 things that make him nervous:

1. Faster emissions growth

A 2007 paper by Canadell et al pointed out that starting in 2000, fossil fuel emissions started growing at 3% per year instead of the earlier figure of 1.5%. This could be due to China’s industrialization. Will this keep up in years to come? If so, the original Pacala-Socolow plan won’t work.

(How much, exactly, did the economic recession change this story?)
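
To get a feeling for how much that difference in growth rates matters, here’s a toy compounding calculation (my own, not from Pacala’s talk), starting from the roughly 8 gigatonnes of carbon per year mentioned below:

```python
# A toy compounding calculation (not from Pacala's talk): starting from
# roughly 8 GtC/yr of fossil fuel emissions, compare 1.5%/yr and 3%/yr growth
# over 50 years.
base = 8.0  # gigatonnes of carbon per year
for rate in (0.015, 0.03):
    after_50 = base * (1 + rate) ** 50
    print(f"{rate:.1%} growth per year: about {after_50:.0f} GtC/yr after 50 years")
# 1.5%/yr roughly doubles emissions; 3%/yr more than quadruples them.
```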

2. The ocean sink

Each year fossil fuel burning puts about 8 gigatons of carbon in the atmosphere. The ocean absorbs about 2 gigatons and the land absorbs about 2, leaving about 4 gigatons in the atmosphere.

However, as CO2 emissions rise, the oceanic CO2 sink has been growing less than anticipated. This seems to be due to a change in wind patterns, itself a consequence of global warming.

(What’s the latest story here?)

3. The land sink

As the CO2 levels go up, people expected plants to grow better and suck up more CO2. In the third IPCC report, models predicted that by 2050, plants will be drawing down 6 gigatonnes more carbon per year than they do now! The fourth IPCC report was similar.

This is huge: remember that right now we emit about 8 gigatonnes per year. Indeed, this effect, called CO2 fertilization, could be the difference between the land being a big carbon sink and a big carbon source. Why a carbon source? For one thing, without the plants sucking up CO2, temperatures will rise faster, and the Amazon rainforest may start to die, and permafrost in the Arctic may release more greenhouse gases (especially methane) as it melts.

In a simulation run by Pacala, where he deliberately assumed that plants fail to suck up more carbon dioxide, these effects happened and the biosphere dumped a huge amount of extra CO2 into the atmosphere: the equivalent of 26 stabilization wedges.

So, plans based on the IPCC models are essentially counting on plants to save us from ourselves.

But is there any reason to think plants might not suck up CO2 at the predicted rates?

Maybe. First, people have actually grown forests in doubled CO2 conditions to see how much faster plants grow then. But the classic experiment along these lines used young trees. In 2005, Körner et al did an experiment using mature trees… and they didn’t see them growing any faster!

Second, models in the third IPCC report assumed that as plants grew faster, they’d have no trouble getting all the nitrogen they need. But Hungate et al have argued otherwise. On the other hand, Alexander Barron discovered that some tropical plants were unexpectedly good at ramping up the rate at which they grab ahold of nitrogen from the atmosphere. But on the third hand, that only applies to the tropics. And on the fourth hand—a complicated problem like this requires one of those Indian gods with lots of hands—nitrogen isn’t the only limiting factor to worry about: there’s also phosphorus, for example.

Pacala goes on and discusses even more complicating factors. But his main point is simple. The details of CO2 fertilization matter a lot. It could make the difference between their original plan being roughly good enough… and being nowhere near good enough!

(What’s the latest story here?)


Lifeboat Foundation

1 April, 2011

I’ve been invited to join this organization. But you can join too:

Lifeboat Foundation.

I hadn’t heard of it before. Do you know anything about it? Here’s their mission statement:

The Lifeboat Foundation is a nonprofit nongovernmental organization dedicated to encouraging scientific advancements while helping humanity survive existential risks and possible misuse of increasingly powerful technologies, including genetic engineering, nanotechnology, and robotics/AI, as we move towards the Singularity.

Lifeboat Foundation is pursuing a variety of options, including helping to accelerate the development of technologies to defend humanity, including new methods to combat viruses (such as RNA interference and new vaccine methods), effective nanotechnological defensive strategies, and even self-sustaining space colonies in case the other defensive strategies fail.

We believe that, in some situations, it might be feasible to relinquish technological capacity in the public interest (for example, we are against the U.S. government posting the recipe for the 1918 flu virus on the Internet).

We have some of the best minds on the planet working on programs to enable our survival. We invite you to join our cause!

It seems to have Nick Bostrom and Ray Kurzweil as two of its guiding figures: the overview features quotes from both.

Overview

An existential risk is a risk that is both global and terminal. Nick Bostrom defines it as a risk “where an adverse outcome would either annihilate Earth-originating intelligent life or permanently and drastically curtail its potential”. The term is frequently used to describe disaster and doomsday scenarios caused by non-friendly superintelligence, misuse of molecular nanotechnology, or other sources of danger.

The Lifeboat Foundation was formed to prevent existential events from happening, as once they occur, humanity may have no possibility to correct the error. Unfortunately governments, and humanity in general, always react AFTER a disaster has happened, and some disasters will leave no survivors so we must react BEFORE they occur. We must be proactive.

The Lifeboat Foundation is developing programs to prevent existential events (“shields”) as well as programs to preserve civilization (“preservers”) to survive such events.

Quotes

“Our approach to existential risks cannot be one of trial-and-error. There is no opportunity to learn from errors. The reactive approach — see what happens, limit damages, and learn from experience — is unworkable. Rather, we must take a proactive approach. This requires foresight to anticipate new types of threats and a willingness to take decisive preventive action and to bear the costs (moral and economic) of such actions.” — Nick Bostrom

“We cannot rely on trial-and-error approaches to deal with existential risks… We need to vastly increase our investment in developing specific defensive technologies… We are at the critical stage today for biotechnology, and we will reach the stage where we need to directly implement defensive technologies for nanotechnology during the late teen years of this century… A self-replicating pathogen, whether biological or nanotechnology based, could destroy our civilization in a matter of days or weeks.” — Ray Kurzweil

You’ll note there’s no mention here of global warming, mass extinction of species, oil depletion and other minor nuisances. Some people consider these problems insufficiently severe to count as “existential threats”… and thus, perhaps, best left to others. Some argue that there are already enough people worrying about these problems—while other threats need more attention than they’re getting.

That would be an interesting discussion to have. But I’m afraid there’s a cultural divide between the “green crowd” and the “tech crowd” that hinders such a discussion. The green crowd worries about things like global warming, the mass extinction that may currently be underway, and peak oil. The tech crowd worries about things like nanotechnology, artificial intelligence and asteroids hitting the Earth. Each crowd tends to think the other is a bit silly… and they don’t talk to each other enough. Am I just imagining this? I don’t think so.

Of course, any generalization this vast admits many exceptions. I like Gregory Benford because he confounds naive expectations: he thinks global warming is a desperately urgent problem that overshadows all others, but he’s willing to contemplate high-tech solutions. According to my theory, that should annoy both the green crowd and the tech crowd.

Personally I think all significant threats to civilization and biosphere should be evaluated and addressed in a unified way. Setting some aside because they’re “non-existential” or overly studied seems just as dangerous as setting others aside because they seem improbable or science-fiction-esque.

For one thing, I can imagine scenarios where medium-sized problems snowball into big “existential” ones. What’s the chance that in this century, global warming leads to droughts and famines which combined with oil shortages lead to political instability, the collapse of democratic governments, wars… and finally a world-wide nuclear or biological war? Maybe low… but I bet it’s higher than the chance of an asteroid hitting the Earth in this century.

I’m pleased to see that the Lifeboat Foundation plans “future programs” that will appeal to the green crowd:

ClimateShield
To protect against global warming and other unwanted climate changes.

BioPreserver
To preserve animal life and diversity on the planet.

EnergyPreserver
If our civilization ran out of energy, it would grind to a halt, so Lifeboat Foundation is looking for solutions.

However, their current programs are strongly focused on issues that appeal to the tech crowd. Maybe that’s okay, but maybe it’s a bit unbalanced:

AIShield
To protect against unfriendly AI (Artificial Intelligence).

AsteroidShield
To protect against devastating asteroid strikes.

BioShield
To protect against bioweapons and pandemics.

InternetShield
As the Internet grows in importance, an attack on it could cause physical as well as informational damage. An attack today on hospital systems or electric utilities could lead to deaths. In the future an attack could be used to alter the output that is produced by nanofactories worldwide, leading to massive deaths.

LifeShield Bunkers
Developing fallback positions on Earth in case programs such as our BioShield and NanoShield fail globally or locally.

NanoShield
To protect against ecophages and nonreplicating nanoweapons.

ScientificFreedomShield
This shield strives to protect scientists from obstacles that would prevent latter day Max Plancks from completing their research.

SecurityPreserver
To prevent nuclear, biological, and nanotechnological attacks from occurring by using surveillance and sousveillance to identify terrorists before they are able to launch their attacks.

Space Habitats
To build fail-safes against global existential risks by encouraging the spread of sustainable human civilization beyond Earth.

