Arctic Melting — 2015

6 January, 2016

With help from global warming and the new El Niño, 2015 was a hot year. In fact it was the hottest since we’ve been keeping records—and it ended with a bang!

• Robinson Meyer, The storm that will unfreeze the North Pole, The Atlantic, 29 December 2015.

The sun has not risen above the North Pole since mid-September. The sea ice—flat, landlike, windswept, and stretching as far as the eye can see—has been bathed in darkness for months.

But later this week, something extraordinary will happen: Air temperatures at the Earth’s most northerly region, in the middle of winter, will rise above freezing for only the second time on record.

On Wednesday, the same storm system that last week spun up deadly tornadoes in the American southeast will burst into the far north, centering over Iceland. It will bring strong winds and pressure as low as is typically seen during hurricanes.

That low pressure will suck air out of the planet’s middle latitudes and send it rushing to the Arctic. And so on Wednesday, the North Pole will likely see temperatures of about 35 degrees Fahrenheit, or 2 degrees Celsius. That’s 50 degrees hotter than average: it’s usually 20 degrees Fahrenheit below zero there at this time of year.

Here’s a temperature map from a couple days later—the last day of the year, 31 December 2015:


And here, more revealing, is a map of the temperature anomaly: the difference between the temperature and the usual temperature at that place at that time of the year:

I think the temperature anomaly is off the scale at certain places in the Arctic—it should have been about 30 °C hotter than normal, or 55 °F.

These maps are from a great website that will show you a variety of weather maps for any day of the year:

Climate Reanalyzer.

How about the year as a whole?

You can learn a lot about Arctic sea ice here:

• National Snow and Ice Data Center, Arctic Sea Ice News.

Here’s one graph of theirs, which shows that the extent of Arctic sea ice in 2015 was very low. It was 2 standard deviations lower than the 2000–2012 average, though not as low as the record-breaking year of 2012:

Here’s another good source of data:

• Polar Science Center, PIOMAS arctic sea ice volume reanalysis.

PIOMAS stands for the Pan-Arctic Ice Ocean Modeling and Assimilation System. Here is their estimate of the Arctic sea ice volume over the course of 2015, compared to other years:

The annual cycle is very visible here.

It’s easier to see the overall trend in this graph:

This shows, for each day, the Arctic sea ice volume minus its average over 1979–2014 for that day of the year. This is a way to remove the annual cycle and focus on the big picture, including the strange events after 2012.
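If you want to reproduce this kind of anomaly plot from a daily volume series, here is a minimal sketch of the calculation in Python. It assumes you already have arrays of years, day-of-year values and volumes; the actual PIOMAS file layout is different, so reading the data in is left to you.

import numpy as np

def daily_anomaly(years, days, volume, base=(1979, 2014)):
    # For each day of the year, subtract the mean volume over the baseline years.
    years = np.asarray(years)
    days = np.asarray(days)
    volume = np.asarray(volume, dtype=float)
    anomaly = np.empty_like(volume)
    for d in np.unique(days):
        on_day = (days == d)
        in_base = on_day & (years >= base[0]) & (years <= base[1])
        anomaly[on_day] = volume[on_day] - volume[in_base].mean()
    return anomaly

Plotting the result against time gives a curve with the seasonal cycle removed, like the one described above.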

What to do?

The Arctic is melting.

What does that matter to us down here? We’ll probably get strange new weather patterns. It may already be happening. I hope it’s clear by now: the first visible impact of global warming is ‘wild weather’.

But what can we do about it? Of course we should stop burning carbon. But even if we stopped completely, that wouldn’t reverse the effects of the warming so far. Someday people may want to reverse its effects—at least for the Arctic.

So, it might be good to reread part of my interview with Gregory Benford. He has a plan to cool the Arctic, which he claims is quite affordable. He’s mainly famous as a science fiction author, but he’s also an astrophysicist at U. C. Irvine.

Geoengineering the Arctic

JB: I want to spend a bit more time on your proposal to screen the Arctic. There’s a good summary here:

• Gregory Benford, Climate controls, Reason Magazine, November 1997.

But in brief, it sounds like you want to test the results of spraying a lot of micron-sized dust into the atmosphere above the Arctic Sea during the summer. You suggest diatomaceous earth as an option, because it’s chemically inert: just silica. How would the test work, exactly, and what would you hope to learn?

GB: The US has inflight refueling aircraft such as the KC-10 Extender that with minor changes could spread aerosols at relevant altitudes, and pilots who know how to fly big sausages filled with fluids.



Rather than diatomaceous earth, I now think ordinary SO2 or H2S will work, if there’s enough water at the relevant altitudes. Turns out the pollutant issue is minor, since it would be only a percent or so of the SO2 already in the Arctic troposphere. The point is to spread aerosols to diminish sunlight and look for signals of less sunlight on the ground, changes in sea ice loss rates in summer, etc. It’s hard to do a weak experiment and be sure you see a signal. Doing regional experiments helps, so you can see a signal before the aerosols spread much. It’s a first step, an in-principle experiment.

Simulations show it can stop the sea ice retreat. Many fear that if we lose the sea ice in summer, ocean currents may alter; nobody really knows. We do know that the tundra is softening as it thaws, making roads impassable and shifting many wildlife patterns, with unforeseen long term effects. Cooling the Arctic back to, say, the 1950 summer temperature range would cost maybe $300 million/year, i.e., nothing. Simulations show that doing this globally, offsetting say CO2 at 500 ppm, might cost a few billion dollars per year. That doesn’t help ocean acidification, but it’s a start on the temperature problem.

JB: There’s an interesting blog on Arctic political, military and business developments:

• Anatoly Karlin, Arctic Progress.

Here’s the overview:

Today, global warming is kick-starting Arctic history. The accelerating melting of Arctic sea ice promises to open up circumpolar shipping routes, halving the time needed for container ships and tankers to travel between Europe and East Asia. As the ice and permafrost retreat, the physical infrastructure of industrial civilization will overspread the region […]. The four major populated regions encircling the Arctic Ocean—Alaska, Russia, Canada, Scandinavia (ARCS)—are all set for massive economic expansion in the decades ahead. But the flowering of industrial civilization’s fruit in the thawing Far North carries within it the seeds of its perils. The opening of the Arctic is making border disputes more serious and spurring Russian and Canadian military buildups in the region. The warming of the Arctic could also accelerate global warming—and not just through the increased economic activity and hydrocarbons production. One disturbing possibility is that the melting of the Siberian permafrost will release vast amounts of methane, a greenhouse gas that is far more potent than CO2, into the atmosphere, and tip the world into runaway climate change.

But anyway, unlike many people, I’m not mentioning risks associated with geoengineering in order to instantly foreclose discussion of it, because I know there are also risks associated with not doing it. If we rule out doing anything really new because it’s too expensive or too risky, we might wind up locking ourselves in a "business as usual" scenario. And that could be even more risky—and perhaps ultimately more expensive as well.

GB: Yes, no end of problems. Most impressive is how they look like a descending spiral, self-reinforcing.

Certainly countries now scramble for Arctic resources, trade routes opened by thawing—all likely to become hotly contested strategic assets. So too melting Himalayan glaciers can perhaps trigger "water wars" in Asia—especially India and China, two vast lands of very different cultures. Then, coming on later, come rising sea levels. Florida starts to go away. The list is endless and therefore uninteresting. We all saturate.

So droughts, floods, desertification, hammering weather events—they draw ever less attention as they grow more common. Maybe Darfur is the first "climate war." It’s plausible.

The Arctic is the canary in the climate coalmine. Cutting CO2 emissions will take far too long to significantly affect the sea ice. Permafrost melts there, giving additional positive feedback. Methane release from the not-so-perma-frost is the most dangerous amplifying feedback in the entire carbon cycle. As John Nissen has repeatedly called attention to, the permafrost permamelt holds a staggering 1.5 trillion tons of frozen carbon, about twice as much carbon as is in the atmosphere. Much would emerge as methane. Methane is 25 times as potent a heat-trapping gas as CO2 over a century, and 72 times as potent over the first 20 years! The carbon is locked in a freezer. Yet that’s the part of the planet warming up the fastest. Really bad news:

• Kevin Schaefer, Tingjun Zhang, Lori Bruhwiler and Andrew P. Barrett, Amount and timing of permafrost carbon release in response to climate warming, Tellus, 15 February 2011.

Particularly interesting is the slowing of thermohaline circulation. In John Nissen’s "two scenarios" work there’s an uncomfortably cool future—if the Gulf Stream were to be diverted by meltwater flowing into the NW Atlantic. There’s also an unbearably hot future, if the methane from the not-so-permafrost causes global warming to spiral out of control. So we have a terrifying menu.

JB: I recently interviewed Nathan Urban here. He explained a paper where he estimated the chance that the Atlantic current you’re talking about could collapse. (Technically, it’s the Atlantic meridional overturning circulation, not quite the same as the Gulf Stream.) They got a 10% chance of it happening in two centuries, assuming a business as usual scenario. But there are a lot of uncertainties in the modeling here.

Back to geoengineering. I want to talk about some ways it could go wrong, how soon we’d find out if it did, and what we could do then.

For example, you say we’ll put sulfur dioxide in the atmosphere below 15 kilometers, and most of the ozone is above 20 kilometers. That’s good, but then I wonder how much sulfur dioxide will diffuse upwards. As the name suggests, the stratosphere is "stratified" —there’s not much turbulence. That’s reassuring. But I guess one reason to do experiments is to see exactly what really happens.

GB: It’s really the only way to go forward. I fear we are now in the Decade of Dithering that will end with the deadly 2020s. Only then will experiments get done and issues engaged. All else, as tempting as ideas and simulations are, spells delay if it does not couple with real field experiments—from nozzle sizes on up to albedo measures—which finally decide.

JB: Okay. But what are some other things that could go wrong with this sulfur dioxide scheme? I know you’re not eager to focus on the dangers, but you must be able to imagine some plausible ones: you’re an SF writer, after all. If you say you can’t think of any, I won’t believe you! And part of good design is looking for possible failure modes.

GB: Plenty can go wrong with so vast an idea. But we can learn from volcanoes, which give us useful experiments, though sloppy and noisy ones, about putting aerosols into the air. Monitoring those can teach us a lot with little expense.

We can fail to get the aerosols to avoid clumping, so they fall out too fast. Or we can somehow trigger a big shift in rainfall patterns—a special danger in a system already loaded with surplus energy, which is already displaying anomalies like the bitter winters in Europe, floods in Pakistan, and drought in Darfur. Indeed, some of Alan Robock’s simulations of Arctic aerosol use show a several percent decline in monsoon rain—though that may be a plus, since flooding is the #1 cause of death and destruction during the Indian monsoon.

Mostly, it might just plain fail to work. Guessing outcomes is useless, though. Here’s where experiment rules, not simulations. This is engineering, which learns from mistakes. Consider the early days of aviation. Having more time to develop and test a system gives more time to learn how to avoid unwanted impacts. Of course, having a system ready also increases the probability of premature deployment; life is about choices and dangers.

More important right now than developing capability is understanding the consequences of deploying that capability, by doing field experiments. One thing we know: both science and engineering advance most quickly by using the dance of theory with experiment. Neglecting this, preferring simulation alone, is a fundamental mistake.


The Paris Agreement

23 December, 2015

The world has come together and agreed to do something significant about climate change:

• UN Framework Convention on Climate Change, Adoption of the Paris Agreement, 12 December 2015.

Not as much as I’d like: it’s estimated that if we do just what’s been agreed to so far, we can expect 2.7 °C of warming, and more pessimistic estimates range up to 3.5 °C. But still, something significant. Furthermore, the Paris Agreement set up a system that encourages nations to ‘ratchet up’ their actions over time. Even better, it helped strengthen a kind of worldwide social network of organizations devoted to tackling climate change!

This is a nice article summarizing what the Paris Agreement actually means:

• William Sweet, A surprising success at Paris, Bulletin of the Atomic Scientists, 21 December 2015.

Since it would take quite a bit of work to analyze this agreement and its implications, and I’m just starting to do this work, I’ll just quote a large chunk of this article.

Hollande, in his welcoming remarks, asked what would enable us to say the Paris agreement is good, even “great.” First, regular review and assessment of commitments, to get the world on a credible path to keep global warming in the range of 1.5–2.0 degrees Celsius. Second, solidarity of response, so that no state does nothing and yet none is “left alone.” Third, evidence of a comprehensive change in human consciousness, allowing eventually for introduction of much stronger measures, such as a global carbon tax.

UN Secretary-General Ban Ki-moon articulated similar but somewhat more detailed criteria: The agreement must be lasting, dynamic, respectful of the balance between industrial and developing countries, and enforceable, with critical reviews of pledges even before 2020. Ban noted that 180 countries had now submitted climate action pledges, an unprecedented achievement, but stressed that those pledges need to be progressively strengthened over time.

Remarkably, not only the major conveners of the conference were speaking in essentially the same terms, but civil society as well. Starting with its first press conference on opening day and at every subsequent one, representatives of the Climate Action Network, representing 900 nongovernment organizations, confined themselves to making detailed and constructive suggestions about how key provisions of the agreement might be strengthened. Though CAN could not possibly speak for every single one of its member organizations, the mainstream within the network clearly saw it was the group’s goal to obtain the best possible agreement, not to advocate for a radically different kind of agreement. The mainstream would not be taking to the streets.

This was the main thing that made Paris different, not just from Copenhagen, but from every previous climate meeting: Before, there always had been deep philosophical differences between the United States and Europe, between the advanced industrial countries and the developing countries, and between the official diplomats and civil society. At Paris, it was immediately obvious that everybody, NGOs included, was reading from the same book. So it was obvious from day one that an important agreement would be reached. National delegations would stake out tough positions, and there would be some hard bargaining. But at every briefing and in every interview, no matter how emphatic the stand, it was made clear that compromises would be made and nothing would be allowed to stand in the way of agreement being reached.

The Paris outcome

The two-part agreement formally adopted in Paris on December 12 represents the culmination of a 25-year process that began with the negotiations in 1990–91 that led to the adoption in 1992 of the Rio Framework Convention. That treaty, which would be ratified by all the world’s nations, called upon every country to take action to prevent “dangerous climate change” on the basis of common but differentiated responsibilities. Having enunciated those principles, nations were unable to agree in the next decades about just what they meant in practice. An attempt at Kyoto in 1997 foundered on the opposition of the United States to an agreement that required no action on the part of the major emitters among the developing countries. A second attempt at agreement in Copenhagen also failed.

Only with the Paris accords, for the first time, have all the world’s nations agreed on a common approach that rebalances and redefines respective responsibilities, while further specifying what exactly is meant by dangerous climate change. Paragraph 17 of the “Decision” (or preamble) notes that national pledges will have to be strengthened in the next decades to keep global warming below 2 degrees Celsius or close to 1.5 degrees, while Article 2 of the more legally binding “Agreement” says warming should be held “well below” 2 degrees and if possible limited to 1.5 degrees. Article 4 of the Agreement calls upon those countries whose emissions are still rising to have them peak “as soon as possible,” so “as to achieve a balance between anthropogenic emissions by sources and removals by sinks of greenhouse gases in the second half of this century”—a formulation that replaced a reference in Article 3 of the next-to-last draft calling for “carbon neutrality” by the second half of the century.

“The wheel of climate action turns slowly, but in Paris it has turned. This deal puts the fossil fuel industry on the wrong side of history,” commented Kumi Naidoo, executive director of Greenpeace International.

The Climate Action Network, in which Greenpeace is a leading member, along with organizations like the Union of Concerned Scientists, Friends of the Earth, the World Wildlife Fund, and Oxfam, would have preferred language that flatly adopted the 1.5 degree goal and that called for complete “decarbonization”—an end to all reliance on fossil fuels. But to the extent the network can be said to have common positions, it would be able to live with the Paris formulations, to judge from many statements made by leading members in CAN’s twice- or thrice-daily press briefings, and statements made by network leaders embracing the agreement.

Speaking for scientists, at an event anticipating the final accords, H. J. Schellnhuber, leader of the Potsdam Institute for Climate Impact Research, said with a shrug that the formulation calling for net carbon neutrality by mid-century would be acceptable. His opinion carried more than the usual weight because he is sometimes credited in the German press as the father of the 2-degree standard. (Schellnhuber told me that a Potsdam team had indeed developed the idea of limiting global warming to 2 degrees in total and 0.2 degrees per decade; and that while others were working along similar lines, he personally drew the Potsdam work to the attention of future German chancellor Angela Merkel in 1994, when she was serving as environment minister.)

As for the tighter 1.5-degree standard, this is a complicated issue that the Paris accords fudge a bit. The difference between impacts expected from a 1.5-degree world and a 2-degree world is not trivial. The Greenland ice sheet, for example, is expected to melt in its entirety in the 2-degree scenario, while in a 1.5-degree world the odds of a complete melt are only 70 percent, points out climatologist Niklas Höhne, of the Cologne-based NewClimate Institute, with a distinct trace of irony. But at the same time the scientific consensus is that it would be virtually impossible to meet the 1.5-degree goal because on top of the 0.8–0.9 degrees of warming that already has occurred, another half-degree is already in the pipeline, “hidden away in the oceans,” as Schellnhuber put it. At best we might be able to work our way back to 1.5 degrees in the 2030s or 2040s, after first overshooting it. Thus, though organizations like 350.org and scientists like James Hansen continue to insist that 1.5 degrees should be our objective, pure and simple, the scientific community and the Climate Action mainstream are reasonably comfortable with the Paris accords’ “close as possible” language.

‘Decision’ and ‘Agreement’

The main reason why the Paris accords consist of two parts, a long preamble called the “Decision,” and a legally binding part called the “Agreement,” is to satisfy the Obama administration’s concerns about having to take anything really sticky to Congress. The general idea, which was developed by the administration with a lot of expert legal advice from organizations like the Virginia-based Center for Climate and Energy Solutions, was to put really substantive matters, like how much the United States will actually do in the next decades to cut its greenhouse gas emissions, into the preamble, and to confine the treaty-like Agreement as much as possible to procedural issues like when in the future countries will talk about what.

Nevertheless, the distinction between the Decision and the Agreement is far from clear-cut. All the major issues that had to be balanced in the negotiations—not just the 1.5–2.0 degree target and the decarbonization language, but financial aid, adaptation and resilience, differentiation between rich and poor countries, reporting requirements, and review—are addressed in both parts. There is nothing unusual as such about an international agreement having two parts, a preamble and main text. What is a little odd about Paris, however, is that the preamble, at 19 pages, is considerably longer than the 11-page Agreement, as Chee Yoke Ling of the Third World Network, who is based in Beijing, pointed out. The length of the Decision, she explained, reflects not only US concerns about obtaining Senate ratification. It also arose from anxieties shared by developing countries about agreeing to legally binding provisions that might be hard to implement and politically dangerous.

In what are arguably the Paris accords’ most important provisions, the national pledges are to be collectively reassessed beginning in 2018–19, and then every five years after 2020. The general idea is to systematically exert peer group pressure on regularly scheduled occasions, so that everybody will ratchet up carbon-cutting ambitions. Those key requirements, which are very close to what CAN advocated and what diplomatic members of the so-called “high ambition” group wanted, are in the preamble, not the Agreement.

But an almost equally important provision, found in the Agreement, called for a global “stocktake” to be conducted in 2023, covering all aspects of the Agreement’s implementation, including its very contested provisions about financial aid and “loss and damage”—the question of support and compensation for countries and regions that may face extinction as a result of global warming. Not only carbon cutting efforts but obligations of the rich countries to the poor will be subject to the world’s scrutiny in 2023.

Rich and poor countries

On the critical issue of financial aid for developing countries struggling to reduce emissions and adapt to climate change, Paris affirms the Copenhagen promise of $100 billion by 2020 in the Decision (Paragraph 115) but not in the more binding Agreement—to the displeasure of the developing countries, no doubt. In the three previous draft versions of the accords, the $100 billion pledge was contained in the Agreement as well.

Somewhat similarly, the loss-and-damage language contained in the preamble does not include any reference to liability on the part of the advanced industrial countries that are primarily responsible for the climate change that has occurred up until now. This was a disappointment to representatives of the nations and regions most severely and imminently threatened by global warming, but any mention of liability would have been an absolute show-stopper for the US delegation. Still, the fact that loss and damage is broached at all represents a victory for the developing world and its advocates, who have been complaining for decades about the complete absence of the subject from the Rio convention and Kyoto Protocol.

The so-called Group of 77, which actually represents 134 developing countries plus China, appears to have played a shrewd and tough game here at Le Bourget. Its very able and engaging chairperson, South Africa’s Nozipho Mxakato-Diseko, sent a sharp shot across the prow of the rich countries on the third day of the conference, with a 17-point memorandum she e-mailed enumerating her group’s complaints.

“The G77 and China stresses that nothing under the [1992 Framework Convention] can be achieved without the provision of means of implementation to enable developing countries to play their part to address climate change,” she said, alluding to the fact that if developing countries are to do more to cut emissions growth, they need help. “However, clarity on the complete picture of the financial arrangements for the enhanced implementation of the Convention keeps on eluding us. … We hope that by elevating the importance of the finance discussions under the different bodies, we can ensure that the outcome meets Parties’ expectations and delivers what is required.”

Though the developing countries wanted stronger and more specific financial commitments and “loss-and-damage” provisions that would have included legal liability, there is evidence throughout the Paris Decision and Agreement of the industrial countries’ giving considerable ground to them. During the formal opening of the conference, President Obama met with leaders of AOSIS—the Alliance of Small Island States—and told them he understood their concerns as he, too, is “an island boy.” (Evidently that went over well.) The reference to the $100 billion floor for financial aid surely was removed from the Agreement partly because the White House at present cannot get Congress to appropriate money for any climate-related aid. But at least the commitment remained in the preamble, which was not a foregone conclusion.

Reporting and review

The one area in which the developing countries gave a lot of ground in Paris was in measuring, reporting, and verification. Under the terms of the Rio convention and Kyoto Protocol, only the advanced industrial countries—the so-called Annex 1 countries—were required to report their greenhouse gas emissions to the UN’s climate secretariat in Bonn. Extensive provisions in the Paris agreement call upon all countries to now report emissions, according to standardized procedures that are to be developed.

The climate pledges that almost all countries submitted to the UN in preparation for Paris, known as “Intended Nationally Determined Contributions,” provided a preview of what this will mean. The previous UN climate gathering, last year in Lima, had called for all the INDCs to be submitted by the summer and for the climate secretariat to do a net assessment of them by October 31, which seemed ridiculously late in the game. But when the results of that assessment were released, the secretariat’s head, Christiana Figueres, cited independent estimates that together the national declarations might put the world on a path to 2.7-degree warming. That result was a great deal better than most specialists following the procedure would have expected, this writer included. Though other estimates suggested the path might be more like 3.5 degrees, even this was a very great deal better than the business-as-usual path, which would be at least 4–5 degrees and probably higher than that by century’s end.

The formalized universal reporting requirements put into place by the Paris accords will lend a lot of rigor to the process of preparing, critiquing, and revising INDCs in the future. In effect the secretariat will be keeping score for the whole world, not just the Annex 1 countries. That kind of score-keeping can have a lot of bite, as we have witnessed in the secretariat’s assessment of Kyoto compliance.

Under the Kyoto Protocol, which the US government not only agreed to but virtually wrote, the United States was required to cut its emissions 7 percent by 2008–12, and Europe by 8 percent. From 1990, the baseline year established in the Rio treaty and its 1997 Kyoto Protocol, to 2012 (the final year in which initial Kyoto commitments applied), emissions of the 15 European countries that were party to the treaty decreased 17 percent—more than double what the protocol required of them. Emissions of the 28 countries that are now members of the EU decreased 21 percent. British emissions were down 27 percent in 2012 from 1990, and Germany’s were down 23 percent.

In the United States, which repudiated the protocol, emissions continued to rise until 2005, when they began to decrease, initially for reasons that had little or nothing to do with policy. That year, US emissions were about 15 percent above their 1990 level, while emissions of the 28 EU countries were down more than 9 percent and of the 15 European party countries more than 2 percent.


Regime Shift?

25 November, 2015

There’s no reason that the climate needs to change gradually. Recently scientists have become interested in regime shifts, which are abrupt, substantial and lasting changes in the state of a complex system.

Rasha Kamel of the Azimuth Project pointed us to a report in Science Daily which says:

Planet Earth experienced a global climate shift in the late 1980s on an unprecedented scale, fueled by anthropogenic warming and a volcanic eruption, according to new research. Scientists say that a major step change, or ‘regime shift,’ in Earth’s biophysical systems, from the upper atmosphere to the depths of the ocean and from the Arctic to Antarctica, was centered around 1987, and was sparked by the El Chichón volcanic eruption in Mexico five years earlier.

As always, it’s good to drill down through the science reporters’ summaries to the actual papers.  So I read this one:

• Philip C. Reid et al., Global impacts of the 1980s regime shift on the Earth’s climate and systems, Global Change Biology, 2015.

The authors of this paper analyzed 72 time series of climate and ecological data to search for such a regime shift, and found one around 1987. If such a thing really happened, this could be very important.

Here are some of the data they looked at:


These time series are pretty interesting. Vertical lines denote regime shift years, colored in different ways: 1984 blue, 1985 green, 1986 orange, 1987 red, 1988 brown, 1989 purple and so on. You can see that lots are red.

The paper has a lot of interesting and informed speculations about the cause of this shift—so give it a look.  For now I just want to tackle an important question of a more technical nature: how did they search for regime shifts?

They used the ‘STARS’ method, which stands for Sequential t-Test Analysis of Regime Shifts. They explain:

The STARS method (Rodionov, 2004; Rodionov & Overland, 2005) tests whether the end of one period (regime) of a certain length is different from a subsequent period (new regime). The cumulative sum of normalized deviations from the hypothetical mean level of the new regime is calculated, and then compared with the mean level of the preceding regime. A shift year is detected if the difference in the mean levels is statistically significant according to a Student’s t-test.

In his third paper, Rodionov (2006) shows how autocorrelation can be accounted for. From each year of the time series (except edge years), the rules are applied backwards and forwards to test that year as a potential shift year. The method is, therefore, a running procedure applied on sequences of years within the time series. The multiple STARS method used here repeats the procedure for 20 test-period lengths ranging from 6 to 25 years that are, for simplicity (after testing many variations), of the same length on either side of the regime shift.

Elsewhere I read that the STARS method is ‘too sensitive’. Could it be due to limitations of the ‘statistical significance’ idea involved in Student’s t-test?

You can download software that implements the STARS method here. The method is explained in the papers by Rodionov.
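To give a feeling for what the method does, here is a minimal sketch in Python of the core idea only: slide along the series and flag a year as a candidate regime shift if the means of the years before and after it differ significantly by a two-sample Student’s t-test. This is not Rodionov’s full algorithm (it omits the cumulative-sum test of subsequent values, the autocorrelation correction, and the multiple test-period lengths), and the cut length and significance level below are arbitrary choices.

import numpy as np
from scipy import stats

def candidate_shifts(x, cut_length=10, alpha=0.05):
    # Flag indices where the mean of the next cut_length values differs
    # significantly (two-sample t-test) from the mean of the previous cut_length values.
    x = np.asarray(x, dtype=float)
    shifts = []
    for i in range(cut_length, len(x) - cut_length + 1):
        before = x[i - cut_length:i]
        after = x[i:i + cut_length]
        t, p = stats.ttest_ind(before, after, equal_var=True)
        if p < alpha:
            shifts.append(i)
    return shifts

Running candidate_shifts on an annual series returns the indices of years where a new regime may begin; the real STARS software then refines these candidates.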

Do you know about this stuff?  If so, I’d like to hear your views on this paper and the STARS method.


Fires in Indonesia

2 November, 2015

I lived in Singapore for two years, and I go back to work there every summer. I love Southeast Asia, its beautiful landscapes, its friendly people, and its huge biological and cultural diversity. It’s a magical place.

But in 2013 there was a horrible haze from fires in nearby Sumatra. And this year it’s even worse. It makes me want to cry, thinking about how millions of people all over this region are being choked as the rain forest burns.

This part of the world has a dry season from May to October and then a wet season. In the dry season, Indonesian farmers slash down jungle growth, burn it, and plant crops. That is nothing new.

But now, palm oil plantations run by big companies do this on a massive scale. Jungles are disappearing at an astonishing rate. Some of this is illegal, but corrupt government officials are paid to look the other way. Whenever we buy palm oil—in soap, cookies, bread, margarine, detergents, and many other products—we become part of the problem.

This year the fires are worse. One reason is that we’re having an El Niño. That typically means more rain in California—which we desperately need. But it means less rain in Southeast Asia.

This summer it was very dry in Singapore. Then, in September, the haze started. We got used to rarely seeing the sun—only yellow-brown light filtering through the smoke. When it stinks outside, you try to stay indoors.

When I left on September 19th, the PSI index of air pollution had risen above 200, which is ‘very unhealthy’. Singapore had offered troops to help fight the fires, but Indonesia turned down the offer, saying they could handle the situation themselves. That was completely false: thousands of fires were burning out of control in Sumatra, Borneo and other Indonesian islands.

I believe the Indonesian government just didn’t want foreign troops on their land. Satellites could detect the many hot spots where fires were burning. But outrageously, the government refused to say who owned those lands.

A few days after I left, the PSI index in Singapore had shot above 300, which is ‘hazardous’. But in parts of Borneo the PSI had reached 1,986. The only name for that is hell.

By now Indonesia has accepted help from Singapore. Thanks to changing winds, the PSI in Singapore has been slowly dropping throughout October. In the last few days the rainy season has begun. Each time the rain clears the air, Singaporeans can see something beautiful and almost forgotten: a blue sky.

Rain is also helping in Borneo. But the hellish fires continue. There have been over 100,000 individual fires—mostly in Sumatra, Borneo and Papua. In many places, peat in the ground has caught on fire! It’s very hard to put out a peat fire.

If you care about the Earth, this is very disheartening. These fires have been putting over 15 million tons of carbon dioxide into the air per day – more than the whole US economy! And so far this year they’ve put out 1.5 billion tons of CO2. That’s more than Germany’s carbon emissions for the whole year—in fact, even more than Japan’s. How can we make progress on reducing carbon emissions with this going on?
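Those comparisons are easy to sanity-check. In the little back-of-envelope calculation below, the reference figures for the US, Germany and Japan are rough round numbers I am assuming, not figures from this post:

# All quantities in gigatonnes (billions of tonnes) of CO2.
fires_per_day = 0.015      # ~15 million tonnes per day from the Indonesian fires
us_per_year = 5.4          # rough US fossil-fuel CO2 emissions per year (assumed)
print(us_per_year / 365)   # ~0.015 Gt per day, so the fires are indeed comparable

fires_so_far = 1.5         # ~1.5 Gt of CO2 from the fires so far this year
germany_per_year = 0.8     # rough annual CO2 emissions of Germany (assumed)
japan_per_year = 1.2       # rough annual CO2 emissions of Japan (assumed)
print(fires_so_far > germany_per_year, fires_so_far > japan_per_year)   # True True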

For you and me, the first thing is to stop buying products with palm oil. The problem is largely one of government corruption driven by money from palm oil plantations. But the real heart of the problem lies in Indonesia. Luckily its president, Joko Widodo, may be part of the solution. But the solution will be difficult.

Quoting National Public Radio:

Widodo is Indonesia’s first president with a track record of efficient local governance in running two large cities. Strong action on the haze issue could help fulfill the promise of reform that motivated Indonesian voters to put him in office in October 2014.

The president has deployed thousands of firefighters and accepted international assistance. He has ordered a moratorium on new licenses to use peat land and ordered law enforcers to prosecute people and companies who clear land by burning forests.

“It must be stopped, we mustn’t allow our tropical rainforests to disappear because of monoculture plantations like oil palms,” Widodo said early in his administration.

Land recently burned and planted with palm trees is now under police investigation in Kalimantan [the Indonesian part of Borneo].

The problem of Indonesia’s illegal forest fires is so complex that it’s very hard to say exactly who is responsible for causing it.

Indonesia’s government has blamed both big palm oil companies and small freeholders. Poynton [executive director of the Forest Trust] says the culprits are often mid-sized companies with strong ties to local politicians. He describes them as lawless middlemen who pay local farmers to burn forests and plant oil palms, often on other companies’ concessions.

“There are these sort of low-level, Mafioso-type guys that basically say, ‘You get in there and clear the land, and I’ll then finance you to establish a palm oil plantation,'” he says.

The problem is exacerbated by ingrained government corruption, in which politicians grant land use permits for forests and peat lands to agribusiness in exchange for financial and political support.

“The disaster is not in the fires,” says independent Jakarta-based commentator Wimar Witoelar. “It’s in the way that past Indonesian governments have colluded with big palm oil businesses to make the peat lands a recipe for disaster.”

The quote is from here:

• Anthony Kuhn, As Indonesia’s annual fires rage, plenty of blame but no responsibility.

For how to avoid using palm oil, see for example:

• Lael Goodman, How many products with palm oil do I use in a day?

First, avoid processed foods. That’s smart for other reasons too.

Second, avoid stuff that contains stearic acid, sodium palmitate, sodium laureth sulfate, cetyl alcohol, glyceryl stearate and related compounds—various forms of artificial grease that are often made from palm oil. It takes work to avoid all this stuff, but at least be aware of it. These chemicals are not made in laboratories from pure carbon, hydrogen, oxygen and nitrogen! The raw ingredients often come from palm plantations, huge monocultures that are replacing the wonderful diversity of rainforest life.

For more nuanced suggestions, see the comments below. Right now I’m just so disgusted that I want to avoid palm oil.

For data on the carbon emissions of this and other fires, see:

• Global fire emissions data.


1997 was the last really big El Niño.


This shows a man in Malaysia in September. The picture at top shows a woman named Gaye Thavisin in Indonesia—perhaps in Kalimantan, the Indonesian half of Borneo, the third largest island in the world. Here is a bit of her story:

The Jungle River Cruise is run by Kalimantan Tour Destinations, a foreign-owned company set up by two women pioneering the introduction of ecotourism into a part of Central Kalimantan that to date has had virtually no tourism.

Inspired by the untapped potential of Central Kalimantan’s mighty rivers, Gaye Thavisin and Lorna Dowson-Collins converted a traditional Kalimantan barge into a comfortable cruise boat with five double cabins, an inside sitting area and an upper viewing deck, bringing the first jungle cruises to the area.

Originally Lorna Dowson-Collins worked in Central Kalimantan with a local NGO on a sustainable livelihoods programme. The future livelihoods of the local people were under threat as logging left the land devastated with poor soils and no forest to fend from.

Kalimantan was teeming with the potential of her people and their fascinating culture, with beautiful forests of diverse flora and fauna, including the iconic orang-utan, and her mighty rivers providing access to these wonderful treasures.

An idea for a social enterprise emerged, which involved building a boat to journey guests to inaccessible places and provide comfortable accommodation.

Gaye Thavisin, an Australian expatriate, for 4 years operated an attractive, new hotel 36 km out of Palangkaraya in Kalimantan. Gaye was passionate about developing the tourism potential of Central Kalimantan and was also looking at the idea of boats. With her contract at the hotel coming to an end, the Jungle Cruise began to take shape!


Melting Permafrost (Part 4)

4 March, 2015

 


Russian scientists have recently found more new craters in Siberia, apparently formed by explosions of methane. Three were found last summer. They looked for more using satellite photos… and found more!

“What I think is happening here is, the permafrost has been acting as a cap or seal on the ground, through which gas can’t permeate,” says Paul Overduin, a permafrost expert at the Alfred Wegener Institute in Germany. “And it reaches a particular temperature where there’s not enough ice in it to act that way anymore. And then gas can rush out.”

It’s rather dramatic. Some Russian villagers have even claimed to see flashes in the distance when these explosions occur. But how bad is it?

The Siberian Times

An English-language newspaper called The Siberian Times has a good article about these craters, which I’ll quote extensively:

• Anna Liesowska, Dozens of new craters suspected in northern Russia, The Siberian Times, 23 February 2015.


B1 – the famous Yamal hole, 30 kilometres from Bovanenkovo, spotted in 2014 by helicopter pilots. Picture: Marya Zulinova, Yamal regional government press service.

Respected Moscow scientist Professor Vasily Bogoyavlensky has called for ‘urgent’ investigation of the new phenomenon amid safety fears.

Until now, only three large craters were known about in northern Russia with several scientific sources speculating last year that heating from above the surface due to unusually warm climatic conditions, and from below, due to geological fault lines, led to a huge release of gas hydrates, so causing the formation of these craters in Arctic regions.

Two of the newly-discovered large craters—also known as funnels to scientists—have turned into lakes, revealed Professor Bogoyavlensky, deputy director of the Moscow-based Oil and Gas Research Institute, part of the Russian Academy of Sciences.

Examination using satellite images has helped Russian experts understand that the craters are more widespread than was first realised, with one large hole surrounded by as many as 20 mini-craters, The Siberian Times can reveal.


Four Arctic craters: B1 – the famous Yamal hole 30 kilometres from Bovanenkovo, B2 – a recently detected crater 10 kilometres south of Bovanenkovo, B3 – a crater 90 kilometres from Antipayuta village, B4 – a crater near Nosok village, in the north of the Krasnoyarsk region, near the Taimyr Peninsula. Picture: Vasily Bogoyavlensky.

‘We know now of seven craters in the Arctic area,’ he said. ‘Five are directly on the Yamal peninsula, one in Yamal Autonomous district, and one is on the north of the Krasnoyarsk region, near the Taimyr peninsula.

‘We have exact locations for only four of them. The other three were spotted by reindeer herders. But I am sure that there are more craters on Yamal, we just need to search for them.

‘I would compare this with mushrooms: when you find one mushroom, be sure there are few more around. I suppose there could be 20 to 30 craters more.’

He is anxious to investigate the craters further because of serious concerns for safety in these regions.

The study of satellite images showed that near the famous hole, located 30 kilometres from Bovanenkovo, are two potentially dangerous objects, where gas emissions could occur at any moment.


Satellite image of the site before the formation of the Yamal hole (B1). K1 and the red outline show the hillock (pingo) formed before the gas emission. Yellow outlines show the potentially dangerous objects. Picture: Vasily Bogoyavlensky.

He warned: ‘These objects need to be studied, but it is rather dangerous for the researchers. We know that there can occur a series of gas emissions over an extended period of time, but we do not know exactly when they might happen.

‘For example, you all remember the magnificent shots of the Yamal crater in winter, made during the latest expedition in November 2014. But do you know that Vladimir Pushkarev, director of the Russian Centre of Arctic Exploration, was the first man in the world who went down the crater of gas emission?

‘More than this, it was very risky, because no one could guarantee there would not be new emissions.’

Professor Bogoyavlensky told The Siberian Times: ‘One of the most interesting objects here is the crater that we mark as B2, located 10 kilometres to the south of Bovanenkovo. On the satellite image you can see that it is one big lake surrounded by more than 20 small craters filled with water.

‘Studying the satellite images we found out that initially there were no craters nor a lake. Some craters appeared, then more. Then, I suppose that the craters filled with water and turned to several lakes, then merged into one large lake, 50 by 100 metres in diameter.

‘This big lake is surrounded by a network of more than 20 ‘baby’ craters now filled with water, and I suppose that new ones could have appeared last summer or may even be appearing now. We are now counting them and making a catalogue. Some of them are very small, no more than 2 metres in diameter.’

‘We have not been at the spot yet,’ he said. ‘Probably some local reindeer herders were there, but so far no scientists.’

He explained: ‘After studying this object I am pretty sure that there was a series of gas emissions over an extended period of time. Sadly, we do not know, when exactly these emissions occur, i.e. mostly in summer, or in winter too. We see only the results of this emissions.’

The object B2 is now attracting special attention from the researchers as they seek to understand and explain the phenomenon. This is only 10km from Bovanenkovo, a major gas field, developed by Gazprom, in the Yamalo-Nenets Autonomous Okrug. Yet older satellite images do not show the existence of a lake, nor any craters, in this location.

The new craters constantly forming on Yamal are not the only sign that gas emission is actively continuing.

Professor Bogoyavlensky shows a picture of one of the Yamal lakes, taken by him from a helicopter, and points to the whitish haze on its surface.


Yamal lake with traces of gas emissions. Picture: Vasily Bogoyavlensky.

He commented: ‘This haze that you see on the surface shows that gas is seeping from the bottom of the lake to the surface. We call this process ‘degassing’.

‘We do not know, if there was a crater previously and then turned to lake, or the lake formed during some other process. More important is that the gases from within are actively seeping through this lake.

‘Degassing was revealed on the territory of Yamal Autonomous District about 45 years ago, but now we think that it can give us some clues about the formation of the craters and gas emissions. Anyway, we must research this phenomenon urgently, to prevent possible disasters.’

Professor Bogoyavlensky stressed: ‘For now, we can speak only about the results of our work in the laboratory, using the images from space.

‘No one knows what is happening in these craters at the moment. We plan a new expedition. Also we want to put not less than four seismic stations in Yamal district, so they can fix small earthquakes, that occur when the crater appears.

‘In two cases locals told us that they felt earth tremors. The nearest seismic station was yet too far to register these tremors.

‘I think that at the moment we know enough about the crater B1. There were several expeditions, we took probes and made measurements. I believe that we need to visit the other craters, namely B2, B3 and B4, and then visit the remaining three craters, once we know their exact locations. It will give us more information and will bring us closer to understanding the phenomenon.’

He urged: ‘It is important not to scare people, but to understand that it is a very serious problem and we must research this.’

In an article for Drilling and Oil magazine, Professor Bogoyavlensky said the parapet of these craters suggests an underground explosion.

‘The absence of charred rock and traces of significant erosion due to possible water leaks speaks in favour of mighty eruption (pneumatic exhaust) of gas from a shallow underground reservoir, which left no traces on soil which contained a high percentage of ice,’ he wrote.

‘In other words, it was a gas-explosive mechanism that worked there. A concentration of 5-to-16% of methane is explosive. The most explosive concentration is 9.5%.’

Gas probably concentrated underground in a cavity ‘which formed due to the gradual melting of buried ice’. Then ‘gas was replacing ice and water’.

‘Years of experience has shown that gas emissions can cause serious damage to drilling rigs, oil and gas fields and offshore pipelines,’ he said. ‘Yamal craters are inherently similar to pockmarks.

‘We cannot rule out new gas emissions in the Arctic and in some cases they can ignite.’

This was possible in the case of the crater found at Antipayuta, on the Yamal peninsula.

‘The Antipayuta residents told how they saw a flash. Probably the gas ignited when crater B4 appeared, near the Taimyr peninsula. This shows us that such explosions could be rather dangerous and destructive.

‘We need to answer now the basic questions: what areas and under what conditions are the most dangerous? These questions are important for safe operation of the northern cities and infrastructure of oil and gas complexes.’


Crater B3, located 90 kilometres from Antipayuta village, Yamal district. Picture: local residents.


Crater B4, located near Nosok village, in the north of the Krasnoyarsk region, near the Taimyr Peninsula. Picture: local residents.

How bad is it?

Since methane is a powerful greenhouse gas, some people are getting nervous. If global warming releases the huge amounts of methane trapped under permafrost, will that create more global warming? Could we be getting into a runaway feedback loop?

The Washington Post has a good article telling us to pay attention, but not panic:

• Chris Mooney, Why you shouldn’t freak out about those mysterious Siberian craters, The Washington Post, 2 March 2015.

David Archer of the University of Chicago, a famous expert on climate change and the carbon cycle, took a look at these craters and did some quick calculations. He estimated that “it would take about 20,000,000 such eruptions within a few years to generate the standard Arctic Methane Apocalypse that people have been talking about.”
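To see roughly what a number like that means, here is a crude back-of-envelope calculation. The 50 billion tonne figure is the methane release often quoted in ‘Arctic methane apocalypse’ scenarios; it is my assumption here, not a number from Archer’s calculation:

apocalypse_release = 50e9       # tonnes of methane in the oft-discussed ~50 Gt release scenario (assumed)
eruptions = 20e6                # Archer's ~20,000,000 eruptions
print(apocalypse_release / eruptions)   # ~2500 tonnes of methane per eruption

In other words, under that assumption each crater would have to vent on the order of a few thousand tonnes of methane for this mechanism alone to matter on a global scale.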

More importantly, people are measuring the amount of methane in the air. We know how it’s doing. For example, you can make graphs of methane concentration here:

• Earth System Research Laboratory, Global Monitoring Division, Data visualization.

Click on a northern station like Alert, the scary name of a military base and research station in Nunavut—the huge northern territory in Canada:




(Alert is on the very top, near Greenland.)

Choose Carbon cycle gases from the menu at right, and click on Time series. You’ll go to another page, and then choose Methane—the default choice is carbon dioxide. Go to the bottom of the page and click Submit and you’ll get a graph like this:



Methane has gone up from about 1750 to 1900 nanomoles per mole from 1985 to 2015. That’s a big increase—but not a sign of incipient disaster.
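If you would rather plot the downloaded numbers yourself, something like the sketch below should work once you have saved a monthly methane file for a station such as Alert. The file name and the column layout assumed here (comment lines starting with ‘#’, then site code, year, month and value) are my guesses about the NOAA text format, so check the header of the file you actually download.

import matplotlib.pyplot as plt

times, ch4 = [], []
with open("ch4_alt_month.txt") as f:        # hypothetical local file name
    for line in f:
        if line.startswith("#") or not line.strip():
            continue                         # skip header and comment lines
        fields = line.split()
        year, month, value = int(fields[1]), int(fields[2]), float(fields[3])
        times.append(year + (month - 0.5) / 12)
        ch4.append(value)

plt.plot(times, ch4)
plt.xlabel("year")
plt.ylabel("CH4 (nanomoles per mole)")
plt.show()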

A larger perspective might help. Apparently from 1750 to 2007 the atmospheric CO2 concentration increased about 40% while the methane concentration has increased about 160%. The amount of additional radiative forcing due to CO2 is about 1.6 watts per square meter, while for methane it’s about 0.5:

• Greenhouse gas: natural and anthropogenic sources, Wikipedia.
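As a rough cross-check on those forcing numbers, you can plug concentrations into the simplified expressions of Myhre et al. (1998) used in IPCC reports: about 5.35 ln(C/C0) watts per square metre for CO2 (C in ppm) and, ignoring the small methane/N2O overlap correction, about 0.036 (sqrt(M) − sqrt(M0)) for methane (M in ppb). The pre-industrial and 2007 concentrations below are round numbers I am assuming:

import math

co2_0, co2 = 278.0, 383.0     # ppm: assumed pre-industrial and ~2007 CO2 concentrations
ch4_0, ch4 = 722.0, 1780.0    # ppb: assumed pre-industrial and ~2007 methane concentrations

f_co2 = 5.35 * math.log(co2 / co2_0)                    # ~1.7 W/m^2
f_ch4 = 0.036 * (math.sqrt(ch4) - math.sqrt(ch4_0))     # ~0.55 W/m^2 before the overlap correction
print(f_co2, f_ch4)

Both come out close to the figures quoted above.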

So, methane is significant, and increasing fast. So far CO2 is the 800-pound gorilla in the living room. But I’m glad Russian scientists are doing things like this:





The latest expedition to Yamal crater was initiated by the Russian Center of Arctic Exploration in early November 2014. The researchers were first in the world to enter this crater. Pictures: Vladimir Pushkarev/Russian Center of Arctic Exploration

Previous posts

For previous posts in this series, see:

• Melting Permafrost (Part 1).

• Melting Permafrost (Part 2).

• Melting Permafrost (Part 3).


Exploring Climate Data (Part 3)

9 February, 2015

post by Nadja Kutz

This blog article is about the temperature data used in the reports of the Intergovernmental Panel on Climate Change (IPCC). I present the results of an investigation into the completeness of global land surface temperature records. There are noticeable gaps in the data records, but I leave discussion about the implications of these gaps to the readers.

The data used in the newest IPCC report, the Fifth Assessment Report (AR5), does not yet seem to be available at the IPCC data distribution centre at the time of writing.

The temperature databases used for the previous report, AR4, are listed here on the website of the IPCC. These databases are:

• CRUTEM3,

• NCDC (probably, at a guess, the GHCNM v3 data set),

• GISTEMP, and

• the collection of Lugina et al.

The temperature collection CRUTEM3 was put together by the Climatic Research Unit (CRU) at the University of East Anglia. According to the CRU temperature page, the CRUTEM3 data, and in particular the CRUTEM3 land air temperature anomalies on a 5° × 5° grid-box basis, have now been superseded by the so-called CRUTEM4 collection.

Since the CRUTEM collection appeared to be an important data source for the IPCC, I started by investigating the land air temperature data collection CRUTEM4. In what follows, only the availability of so-called land air temperature measurements will be investigated. (The collections often also contain sea surface temperature (SST) measurements.)

Usually only ‘temperature grid data’ or other averaged data is used for the climate assessments. Here ‘grid’ means that data is averaged over regions that cover the earth in a grid. However, the data is originally generated by temperature measuring stations around the world. So, I was interested in this original data and its quality. For the CRUTEM collection the latest station data is called the CRUTEM4 station data collection.

I downloaded the station data file, which is a simple text file, from the bottom of the CRUTEM4 station data page. I noticed at first glance that there are big gaps in the file for some regions of the world. The file is huge, though: it contains monthly measurements starting in January 1701 and ending in 2011, for 4634 stations altogether. Quickly finding a gap in such a huge file was disconcerting enough that it persuaded my husband Tim Hoffmann to help me investigate this station data in a more accessible way, via a visualization.

The visualization takes a long time to load, and due to some unfortunate software configuration issues (not on our side) it sometimes doesn’t work at all. Please open it now in a separate tab while reading this article:

• Nadja Kutz and Tim Hoffmann, Temperature data from stations around the globe, collected by CRUTEM 4.

For those who are too lazy to explore the data themselves, or in case the visualization is not working, here are some screenshots from the visualization which documents the missing data in the CRUTEM4 dataset.

The images should speak for themselves. However, an additional explanation is provided after the images. In particular, it looks as if the coverage of the CRUTEM4 data set deteriorated more in the years 2000–2009 than in the years 1980–2000.

Now you could say: okay, we know that there are budget cuts in the UK, and so probably the University of East Anglia was subject to those, but what about all these other collections in the world? This will be addressed after the images.

 
[Screenshots from the visualization: station coverage for January 1980, January 2000 and January 2009, for each of North America, Africa, Asia, Eurasia/Northern Africa, and the Arctic.]
These screenshots cover various regions of the world for the month of January in the years 1980, 2000 and 2009. Each station is represented by a small rectangle around its coordinates. The color of a rectangle indicates the monthly temperature value for that station: blue is the coldest, red is the hottest. Black rectangles are what CRU calls ‘missing data’, denoted by -999 in the file. I prefer to call this ‘invalid’ data, to distinguish it from the data missing because stations have been closed down. In the visualization, closed-down stations are encoded by a transparent rectangle, and their markers are also present.
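To get a rough quantitative handle on this invalid data, one can count the -999 entries directly in the station file. Here is a minimal sketch in Python; the file name and the assumption that each data row consists of a year followed by 12 monthly values are mine, so check them against the actual CRUTEM4 file format before trusting the counts.

from collections import Counter

invalid_per_decade = Counter()

# "crutem4_station_data.txt" is a placeholder name for the downloaded file.
with open("crutem4_station_data.txt") as f:
    for line in f:
        fields = line.split()
        # Heuristic (an assumption): a data row is a 4-digit year plus 12 monthly values.
        if len(fields) == 13 and fields[0].isdigit() and len(fields[0]) == 4:
            year = int(fields[0])
            invalid = sum(1 for v in fields[1:] if v.startswith("-999"))
            invalid_per_decade[10 * (year // 10)] += invalid

for decade in sorted(invalid_per_decade):
    print(decade, invalid_per_decade[decade])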

We couldn’t find the reasons for this invalid data. At the end of the post John Baez has provided some more literature on this question. It is worth noting that satellites can replace surface measurements only to a certain degree, as was highlighted by Stefan Rahmstorf in a blog post on RealClimate:

the satellites cannot measure the near-surface temperatures but only those overhead at a certain altitude range in the troposphere. And secondly, there are a few question marks about the long-term stability of these measurements (temporal drift).

What about other collections?

Apart from the collections already mentioned, which were used in the IPCC’s AR4 report, there are some more institutional collections, and I also found some private weather collections. Among the private collections I haven’t found any that goes back in time as far as CRUTEM4, though it could be that some of them are more complete in terms of actual data than the collections that reach further back in time.

After discussing our visualization on the Azimuth Forum, it turned out that Nick Stokes, who runs the blog MOYHU in Australia, had already had the same idea back in 2011: he visualized station data using Google Earth, for several different temperature collections.

If you have Google Earth installed then you can see his visualizations here:

• Nick Stokes, click here.

The link is from the documentation page of Nick Stokes’ website.

What are the major collections?

As far as we can tell, the major global collections of temperature data that go back to the 18th, 19th or at least early 20th century seem to be the following. First, there are the collections already mentioned, which are also used in the AR4 report:

• the CRUTEM collection from the University of East Anglia (UK),

• the GISTEMP collection from the Goddard Institute for Space Studies (GISS) at NASA (US),

• the collection of Lugina et al, which is a cooperative project involving NCDC/NOAA (US) (see also below), the University of Maryland (US), St. Petersburg State University (Russia) and the State Hydrological Institute, St. Petersburg (Russia), and

• the GHCN collection from NOAA.

Then there are these:

• the Berkeley Earth collection, called BEST

• The GSOD (Global Summary Of the Day) and Global Historical Climatology Network (GHCN) collections. Both are run by the National Climatic Data Center (NCDC) at the National Oceanic and Atmospheric Administration (NOAA) (US). It is not clear to me to what extent these two databases overlap with those of Lugina et al, which were made in cooperation with NCDC/NOAA. It is also not clear to me whether the GHCN collection was used for the AR4 report (it seems so). There is currently also an only partially working visualization of the GSOD data here. The sparse data in specific regions (see the images above) is also apparent in that visualization.

• There is a comparatively new initiative called the International Surface Temperature Initiative (ISTI), which gathers collections in a databank and seeks to provide temperature data “from hourly to century timescales”. As written on their blog, this data is not quality controlled:

The ISTI dataset is not quality controlled, so, after re-reading section 3.3 of Lawrimore et al 2011, I implemented an extremely simple quality control scheme, MADQC.

What did you visualize?

As far as I understand, the visualization by Nick Stokes that you just opened represents the collections BEST (before 1850 to 2010), GSOD (1921 to 2010) and GHCN v2 (before 1850 to 1990) from NOAA, and CRUTEM3 (before 1850 to 2000).

CRUTEM3 is also visualized in another way by Clive Best. In Clive Best’s visualization, however, it seems that apart from the station name one has no further access to other data, such as station temperatures. Moreover, it is not possible to set a recent time range, which is important for checking how much the dataset has changed in recent times.

Unfortunately, this limited ability to set a time range also holds for two visualizations by Nick Stokes, here and here. In his first visualization, which is more exhaustive than the second, the following datasets are shown: GHCN v3 and an adjusted version of it (GADJ), a preliminary dataset from ISTI, BEST, and CRUTEM4. So his first visualization seems quite exhaustive also with respect to newer data. Unfortunately, as mentioned, setting the time range didn’t work properly (at least when I tested it). The same holds for his second visualization, of GHCN v3 data. So I was only able to trace the deterioration of recent data manually (for example, by clicking on individual stations).

Tim and I visualized CRUTEM4, that is, the updated version of CRUTEM3.

What did you not visualize?

Newer datasets from after 2011/2012, for example from the aforementioned ISTI or from the private collections, are not covered in the two visualizations you just opened.

Moreover, the visualizations mentioned here do not cover the GISS collection, which now uses NOAA’s GHCN v3 collection. The historical data of GISS could, however, differ from the other collections. The visualizations may also not cover the Lugina et al collection, which was mentioned above in the context of the IPCC report; Lugina et al could, however, be similar to GSOD (and GHCN) due to the cooperation with NCDC/NOAA. Moreover, GHCN v3 could be substantially more exhaustive than CRUTEM or GHCN v2 (as shown in Nick Stokes’ visualization). However, this last collection was, like CRUTEM4, released in the spring of 2011.

GHCN v3 is also represented in Nick Stokes’ visualizations (here and here). Upon investigating it manually, it didn’t seem to contain much crucial additional data not found in CRUTEM4. Since this manual exploration was not exhaustive, I may be wrong, but I don’t think so.

Hence, to our knowledge, quite a lot of the available data is visualized in the two visualizations you just opened, and, as it seems, “almost all” (?) of the far-back-reaching original quality-controlled global surface temperature data collections as of 2011 or 2012. If you know of other similar collections, please let us know.

As mentioned above, the private collections and in particular the ISTI collection may contain much more data. At the time of writing we don’t know to what extent those newer collections will be taken into account for the new IPCC reports, and in particular for the AR5 report. Moreover, it is not yet clear how quality control will be ensured for those newer collections.

In conclusion, the previous IPCC reports seem to have been informed by the collections described here. Thus the coverage problems you see here need to be taken into account in discussions about the scientific basis of previous climate assessments.

Hopefully the visualizations from Nick Stokes and from Tim and me are ready for exploration! You can start to explore them yourself, and in particular see that the ‘deterioration of data’ visible in our CRUTEM4 visualization also appears in the collections Nick visualized.

Note: I would like to thank people at the Azimuth Forum for pointing out references, and in particular Nick Stokes and Nathan Urban.

The effects of missing data

supplement by John Baez

There have always been fewer temperature recording stations in Arctic regions than in other regions. The following paper initiated a controversy over how this fact affects our picture of the Earth’s climate:

• Kevin Cowtan and Robert G. Way, Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends, Quarterly Journal of the Royal Meteorological Society, 2014.

Here is some discussion:

• Kevin Cowtan, Robert Way, and Dana Nuccitelli, Global warming since 1997 more than twice as fast as previously estimated, new study shows, Skeptical Science, 13 November 2013.

• Stefan Rahmstorf, Global warming since 1997 underestimated by half, RealClimate, 13 November 2013, in which it is highlighted that satellites can replace surface measurements only to a certain degree.

• Anthony Watts’ protest about Cowtan, Way and the Arctic, HotWhopper, 15 November 2013.

• Victor Venema, Temperature trend over last 15 years is twice as large as previously thought, Variable Variability, 13 November 2013.

However, these posts seem to say little about the increasing amount of ‘missing data’.


Networks in Climate Science

29 November, 2014

What follows is a draft of a talk I’ll be giving at the Neural Information Processing Seminar on December 10th. The actual talk may contain more stuff—for example, more work that Dara Shayda has done. But I’d love comments now, so I’m posting it now and hoping you can help out.

You can click on any of the pictures to see where it came from or get more information.

Preliminary throat-clearing

I’m very flattered to be invited to speak here. I was probably invited because of my abstract mathematical work on networks and category theory. But when I got the invitation, instead of talking about something I understood, I thought I’d learn about something a bit more practical and talk about that. That was a bad idea. But I’ll try to make the best of it.

I’ve been trying to learn climate science. There’s a subject called ‘complex networks’ where people do statistical analyses of large graphs, like the World Wide Web or Facebook, and draw conclusions from them. People are trying to apply these ideas to climate science. So that’s what I’ll talk about. I’ll be reviewing a lot of other people’s work, but also describing some work by a project I’m involved in, the Azimuth Project.

The Azimuth Project is an all-volunteer project involving scientists and programmers, many outside academia, who are concerned about environmental issues and want to use their skills to help. This talk is based on the work of many people in the Azimuth Project, including Jan Galkowski, Graham Jones, Nadja Kutz, Daniel Mahler, Blake Pollard, Paul Pukite, Dara Shayda, David Tanzer, David Tweed, Steve Wenner and others. Needless to say, I’m to blame for all the mistakes.

Climate variability and El Niño

Okay, let’s get started.

You’ve probably heard about the ‘global warming pause’. Is this a real thing? If so, is it due to ‘natural variability’, heat going into the deep oceans, some combination of both, a massive failure of our understanding of climate processes, or something else?

Here is a chart of global average air temperatures at sea level, put together by NASA’s Goddard Institute for Space Studies:


You can see a lot of fluctuations, including a big dip after 1940 and a tiny dip after 2000. That tiny dip is the so-called ‘global warming pause’. What causes these fluctuations? That’s a big, complicated question.

One cause of temperature fluctuations is a kind of cycle whose extremes are called El Niño and La Niña.

A lot of things happen during an El Niño. For example, in 1997 and 1998, a big El Niño, we saw all these events:

El Niño is part of an irregular cycle that happens every 3 to 7 years, called the El Niño Southern Oscillation or ENSO. Two strongly correlated signs of an El Niño are:

1) Increased sea surface temperatures in a patch of the Pacific called the Niño 3.4 region. The temperature anomaly in this region—how much warmer it is than usual for that time of year—is called the Niño 3.4 index.

2) A decrease in air pressures in the western side of the Pacific compared to those further east. This is measured by the Southern Oscillation Index or SOI.

You can see the correlation here:

El Niños are important because they can cause billions of dollars of economic damage. They also seem to bring heat stored in the deeper waters of the Pacific into the atmosphere. So, one reason for the ‘global warming pause’ may be that we haven’t had a strong El Niño since 1998. The global warming pause might end with the next El Niño. For a while it seemed we were due for a big one this fall, but that hasn’t happened.

Teleconnections

The ENSO cycle is just one of many cycles involving teleconnections: strong correlations between weather at distant locations, typically thousands of kilometers. People have systematically looked for these teleconnections using principal component analysis of climate data, and also other techniques.

The ENSO cycle shows up automatically when you do this kind of study. It stands out as the biggest source of climate variability on time scales greater than a year and less than a decade. Some others include:

• The Pacific-North America Oscillation.
• The Pacific Decadal Oscillation.
• The North Atlantic Oscillation.
• The Arctic Oscillation.

For example, the Pacific Decadal Oscillation is a longer-period relative of the ENSO, centered in the north Pacific:

Complex network theory

Recently people have begun to study teleconnections using ideas from ‘complex network theory’.

What’s that? In complex network theory, people often start with a weighted graph: that is, a set N of nodes and for any pair of nodes i, j \in N, a weight A_{i j}, which can be any nonnegative real number.

Why is this called a weighted graph? It’s really just a matrix of nonnegative real numbers!

The reason is that we can turn any weighted graph into a graph by drawing an edge from node j to node i whenever A_{i j} >0. This is a directed graph, meaning that we should draw an arrow pointing from j to i. We could have an edge from i to j but not vice versa! Note that we can also have an edge from a node to itself.

Conversely, if we have any directed graph, we can turn it into a weighted graph by choosing the weight A_{i j} = 1 when there’s an edge from j to i, and A_{i j} = 0 otherwise.

For example, we can make a weighted graph where the nodes are web pages and A_{i j} is the number of links from the web page j to the web page i.

People in complex network theory like examples of this sort: large weighted graphs that describe connections between web pages, or people, or cities, or neurons, or other things. The goal, so far, is to compute numbers from weighted graphs in ways that describe interesting properties of these complex networks—and then formulate and test hypotheses about the complex networks we see in real life.

The El Niño basin

Here’s a very simple example of what we can do with a weighted graph. For any node i, we can sum up the weights of edges going into i:

\sum_{j \in N} A_{i j}

This is called the degree of the node i. For example, if lots of people have web pages with lots of links to yours, your webpage will have a high degree. If lots of people like you on Facebook, you will have a high degree.

So, the degree is some measure of how ‘important’ a node is.
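Here is a tiny example of these definitions in code, with a made-up weight matrix; recall that, with the convention above, A[i, j] is the weight of the edge from node j to node i.

import numpy as np

# A toy weighted graph on 3 nodes (not climate data): A[i, j] is the
# weight of the edge from node j to node i.
A = np.array([
    [0.0, 2.0, 0.0],
    [1.5, 0.0, 0.3],
    [0.0, 0.0, 4.0],   # a node can have an edge to itself
])

# The underlying directed graph: an edge j -> i whenever A[i, j] > 0.
edges = [(j, i) for i in range(A.shape[0])
                for j in range(A.shape[1]) if A[i, j] > 0]
print(edges)            # [(1, 0), (0, 1), (2, 1), (2, 2)]

# The degree of node i: the total weight of edges going into i,
# i.e. the sum of the ith row of A.
degree = A.sum(axis=1)
print(degree)           # [2.  1.8 4. ]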

People have constructed climate networks where the nodes are locations on the Earth’s surface, and the weight A_{i j} measures how correlated the weather is at the ith and jth location. Then, the degree says how ‘important’ a given location is for the Earth’s climate—in some vague sense.

For example, in Complex networks in climate dynamics, Donges et al take surface air temperature data on a grid and compute the correlation between grid points.

More precisely, let T_i(t) be the temperature at the ith grid point at month t after the average for that month in all years under consideration has been subtracted off, to eliminate some seasonal variations. They compute the Pearson correlation A_{i j} of T_i(t) and T_j(t) for each pair of grid points i, j. The Pearson correlation is the simplest measure of linear correlation, normalized to range between -1 and 1.

We could construct a weighted graph this way, and it would be symmetric, or undirected:

A_{i j} = A_{j i}

However, Donges et al prefer to work with a graph rather than a weighted graph. So, they create a graph where there is an edge from i to j (and also from j to i) when |A_{i j}| exceeds a certain threshold, and no edge otherwise.

They can adjust this threshold so that any desired fraction of pairs i, j actually have an edge between them. After some experimentation they chose this fraction to be 0.5%.
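Here is a minimal sketch of this construction, with synthetic anomalies standing in for the real gridded temperature data; the grid size and the random numbers are made up for illustration, and with real data you would first subtract each calendar month’s mean, as described above.

import numpy as np

# Synthetic monthly anomalies T[i, t] at N grid points over M months.
rng = np.random.default_rng(0)
N, M = 200, 600                      # toy sizes, not the real grid
T = rng.standard_normal((N, M))

A = np.corrcoef(T)                   # Pearson correlations between grid points
np.fill_diagonal(A, 0.0)

# Choose the threshold so that 0.5% of node pairs get an edge.
absA = np.abs(A[np.triu_indices(N, k=1)])
threshold = np.quantile(absA, 1 - 0.005)
adjacency = np.abs(A) > threshold    # symmetric, unweighted climate network

degree = adjacency.sum(axis=1)       # number of links at each grid point
print(degree.max(), degree.mean())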


A certain patch dominates the world! This is the El Niño basin. The Indian Ocean comes in second.

(Some details, which I may not say:

The Pearson correlation is the covariance

\Big\langle \left( T_i - \langle T_i \rangle \right) \left( T_j - \langle T_j \rangle \right) \Big\rangle

normalized by dividing by the standard deviation of T_i and the standard deviation of T_j.

The reddest shade of red in the above picture shows nodes that are connected to 5% or more of the other nodes. These nodes are connected to at least 10 times as many nodes as average.)

The Pearson correlation detects linear correlations. A more flexible measure is mutual information: how many bits of information knowing the temperature at time t at grid point i tells you about the temperature at the same time at grid point j.
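For concreteness, here is one simple way to estimate the mutual information between two series, by binning them into a 2d histogram; this is just a generic estimator, not necessarily the one Donges et al used.

import numpy as np

def mutual_information(x, y, bins=16):
    # Estimate the mutual information (in bits) between x and y from a
    # binned joint histogram.
    joint, _, _ = np.histogram2d(x, y, bins=bins)
    p_xy = joint / joint.sum()
    p_x = p_xy.sum(axis=1, keepdims=True)
    p_y = p_xy.sum(axis=0, keepdims=True)
    nonzero = p_xy > 0
    return np.sum(p_xy[nonzero] * np.log2(p_xy[nonzero] / (p_x @ p_y)[nonzero]))

# Toy usage: two correlated random series.
rng = np.random.default_rng(1)
x = rng.standard_normal(1000)
y = 0.5 * x + rng.standard_normal(1000)
print(mutual_information(x, y))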

Donges et al create a climate network this way as well, putting an edge between nodes if their mutual information exceeds a certain cutoff. They choose this cutoff so that 0.5% of node pairs have an edge between them, and get the following map:


The result is almost indistinguishable in the El Niño basin. So, this feature is not just an artifact of focusing on linear correlations.

El Niño breaks climate links

We can also look at how climate networks change with time—and in particular, how they are affected by El Niños. This is the subject of a 2008 paper by Tsonis and Swanson, Topology and predictability of El Niño and La Niña networks.

They create a climate network in a way that’s similar to the one I just described. The main differences are that they:

1. separately create climate networks for El Niño and La Niña time periods;

2. create a link between grid points when their Pearson correlation has absolute value greater than 0.5;

3. only use temperature data from November to March in each year, claiming that summertime introduces spurious links.

They get this map for La Niña conditions:


and this map for El Niño conditions:


They conclude that “El Niño breaks climate links”.

This may seem to contradict what I just said a minute ago. But it doesn’t! While the El Niño basin is a region where the surface air temperatures are highly correlated to temperatures at many other points, when an El Niño actually occurs it disrupts correlations between temperatures at different locations worldwide—and even in the El Niño basin!

For the rest of the talk I want to focus on a third claim: namely, that El Niños can be predicted by means of an increase in correlations between temperatures within the El Niño basin and temperatures outside this region. This claim was made in a recent paper by Ludescher et al. I want to examine it somewhat critically.

Predicting El Niños

People really want to predict El Niños, because they have huge effects on agriculture, especially around the Pacific ocean. However, it’s generally regarded as very hard to predict El Niños more than 6 months in advance. There is also a spring barrier: it’s harder to predict El Niños through the spring of any year.

It’s controversial how much of the unpredictability in the ENSO cycle is due to chaos intrinsic to the Pacific ocean system, and how much is due to noise from outside the system. Both may be involved.

There are many teams trying to predict El Niños, some using physical models of the Earth’s climate, and others using machine learning techniques. There is a kind of competition going on, which you can see at a National Oceanic and Atmospheric Administration website.

The most recent predictions give a sense of how hard this job is:

When the 3-month running average of the Niño 3.4 index exceeds 0.5°C for 5 months, we officially declare that there is an El Niño.

As you can see, it’s hard to be sure if there will be an El Niño early next year! However, the consensus forecast is yes, a weak El Niño. This is the best we can do, now. Right now multi-model ensembles have better predictive skill than any one model.
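As an aside, the declaration rule stated above is easy to put into code. Here is a minimal sketch applied to a made-up array of monthly Niño 3.4 values; the official bookkeeping with overlapping 3-month seasons is a bit more careful than this.

import numpy as np

def el_nino_periods(nino34, threshold=0.5, run_length=5):
    # 3-month running mean of the index; returns a boolean flag for each
    # 3-month period, True when that period belongs to a run of at least
    # `run_length` consecutive periods above the threshold.
    running = np.convolve(nino34, np.ones(3) / 3, mode="valid")
    above = running > threshold
    declared = np.zeros_like(above)
    for start in range(len(above) - run_length + 1):
        if above[start:start + run_length].all():
            declared[start:start + run_length] = True
    return declared

# Made-up monthly Niño 3.4 values, just to show the rule in action.
nino34 = np.array([0.1, 0.4, 0.6, 0.7, 0.9, 1.1, 1.0, 0.8, 0.6, 0.3, 0.1, -0.2])
print(el_nino_periods(nino34))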

The work of Ludescher et al

The Azimuth Project has carefully examined a 2013 paper by Ludescher et al called Very early warning of next El Niño, which uses a climate network for El Niño prediction.

They build their climate network using correlations between daily surface air temperature data between points inside the El Niño basin and certain points outside this region, as shown here:


The red dots are the points in their version of the El Niño basin.

(Next I will describe Ludescher’s procedure. I may omit some details in the actual talk, but let me include them here.)

The main idea of Ludescher et al is to construct a climate network that is a weighted graph, and to say an El Niño will occur if the average weight of edges between points in the El Niño basin and points outside this basin exceeds a certain threshold.

As in the other papers I mentioned, Ludescher et al let T_i(t) be the surface air temperature at the ith grid point at time t minus the average temperature at that location at that time of year in all years under consideration, to eliminate the most obvious seasonal effects.

They consider a time-delayed covariance between temperatures at different grid points:

\langle T_i(t) T_j(t - \tau) \rangle - \langle T_i(t) \rangle \langle T_j(t - \tau) \rangle

where \tau is a time delay, and the angle brackets denote a running average over the last year, that is:

\displaystyle{ \langle f(t) \rangle = \frac{1}{365} \sum_{d = 0}^{364} f(t - d) }

where t is the time in days.

They normalize this to define a correlation C_{i,j}^t(\tau) that ranges from -1 to 1.

Next, for any pair of nodes i and j, and for each time t, they determine the maximum, the mean and the standard deviation of |C_{i,j}^t(\tau)|, as the delay \tau ranges from -200 to 200 days.

They define the link strength S_{i,j}(t) as the difference between the maximum and the mean value of |C_{i,j}^t(\tau)|, divided by its standard deviation.

Finally, they let S(t) be the average link strength, calculated by averaging S_{i j}(t) over all pairs i,j where i is a grid point inside their El Niño basin and j is a grid point outside this basin, but still in their larger rectangle.
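To make the procedure concrete, here is a condensed sketch for a single pair of grid points, based on our reading of the paper rather than on the authors’ own code; in particular, the treatment of negative delays and the toy data at the end are assumptions made for illustration.

import numpy as np

def lagged_corr(Ta, Tb, t, tau, window=365):
    # Pearson correlation of Ta over the year ending at day t with
    # Tb over the year ending at day t - tau (tau >= 0).
    x = Ta[t - window + 1 : t + 1]
    y = Tb[t - tau - window + 1 : t - tau + 1]
    return np.corrcoef(x, y)[0, 1]

def link_strength(Ti, Tj, t, max_lag=200):
    # S_ij(t): (max - mean) / std of |C^t_ij(tau)| over tau = -200..200.
    # Here a negative delay is taken to mean delaying T_i instead of T_j.
    corrs = []
    for tau in range(-max_lag, max_lag + 1):
        if tau >= 0:
            corrs.append(lagged_corr(Ti, Tj, t, tau))
        else:
            corrs.append(lagged_corr(Tj, Ti, t, -tau))
    c = np.abs(np.array(corrs))
    return (c.max() - c.mean()) / c.std()

# Toy usage: Tj lags Ti by about 30 days, so the link should be fairly strong.
rng = np.random.default_rng(2)
Ti = rng.standard_normal(1000)
Tj = np.roll(Ti, 30) + 0.5 * rng.standard_normal(1000)
print(link_strength(Ti, Tj, t=800))

# The average link strength S(t) would then be the mean of link_strength
# over all pairs (i, j) with i inside the El Niño basin and j outside it.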

Here is what they get:


The blue peaks are El Niños: episodes where the Niño 3.4 index is over 0.5°C for at least 5 months.

The red line is their ‘average link strength’. Whenever this exceeds a certain threshold \Theta = 2.82, and the Niño 3.4 index is not already over 0.5°C, they predict an El Niño will start in the following calendar year.

Ludescher et al chose their threshold for El Niño prediction by training their algorithm on climate data from 1948 to 1980, and tested it on data from 1981 to 2013. They claim that with this threshold, their El Niño predictions were correct 76% of the time, and their predictions of no El Niño were correct in 86% of all cases.

On this basis they claimed—when their paper was published in February 2014—that the Niño 3.4 index would exceed 0.5 by the end of 2014 with probability 3/4.

The latest data as of 1 December 2014 seems to say: yes, it happened!

Replication and critique

Graham Jones of the Azimuth Project wrote code implementing Ludescher et al’s algorithm, as best as we could understand it, and got results close to theirs, though not identical. The code is open-source; one goal of the Azimuth Project is to do science ‘in the open’.

More interesting than the small discrepancies between our calculation and theirs is the question of whether ‘average link strengths’ between points in the El Niño basin and points outside are really helpful in predicting El Niños.

Steve Wenner, a statistician helping the Azimuth Project, noted some ambiguities in Ludescher et al‘s El Niño prediction rules and disambiguated them in a number of ways. For each way he used Fisher’s exact test to compute the p-value of the null hypothesis that Ludescher et al‘s El Niño prediction does not improve the odds that what they predict will occur.

The best he got (that is, the lowest p-value) was 0.03. This is just a bit more significant than the conventional 0.05 threshold for rejecting a null hypothesis.
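For readers who want to try this kind of test themselves, here is a sketch using SciPy; the 2×2 table of alarm/outcome counts is invented purely for illustration and is not Steve’s actual tally.

from scipy.stats import fisher_exact

# Hypothetical yearly counts (not the real numbers):
# rows: algorithm raised an alarm / did not;
# columns: El Niño occurred / did not occur.
table = [[ 8,  2],
         [ 5, 19]]

# One-sided test of the null hypothesis that an alarm does not
# increase the odds of an El Niño occurring.
oddsratio, p_value = fisher_exact(table, alternative="greater")
print(oddsratio, p_value)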

Do high average link strengths between points in the El Niño basin and points elsewhere in the Pacific really increase the chance that an El Niño is coming? It is hard to tell from the work of Ludescher et al.

One reason is that they treat El Niño as a binary condition, either on or off depending on whether the Niño 3.4 index for a given month exceeds 0.5 or not. This is not the usual definition of El Niño, but the real problem is that they are only making a single yes-or-no prediction each year for 65 years: does an El Niño occur during this year, or not? 31 of these years (1950-1980) are used for training their algorithm, leaving just 34 retrodictions and one actual prediction (1981-2013, and 2014).

So, there is a serious problem with small sample size.

We can learn a bit by taking a different approach, and simply running some linear regressions between the average link strength and the Niño 3.4 index for each month. There are 766 months from 1950 to 2013, so this gives us more data to look at. Of course, it’s possible that the relation between average link strength and Niño is highly nonlinear, so a linear regression may not be appropriate. But it is at least worth looking at!
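Here is a sketch of this kind of lagged regression; nino34 and link below are made-up monthly series standing in for the real Niño 3.4 index and average link strength.

import numpy as np
from scipy.stats import linregress

def lagged_r2(predictor, target, lag_months):
    # R^2 of a simple linear regression of the target, lag_months later,
    # on the predictor.
    x = predictor[:len(predictor) - lag_months] if lag_months else predictor
    y = target[lag_months:]
    return linregress(x, y).rvalue ** 2

# Made-up monthly series in place of the real data.
rng = np.random.default_rng(3)
nino34 = rng.standard_normal(766)
link = 0.3 * np.roll(nino34, -6) + rng.standard_normal(766)

print(lagged_r2(link, nino34, 0))   # same month
print(lagged_r2(link, nino34, 6))   # 6 months ahead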

Daniel Mahler and Dara Shayda of the Azimuth Project did this and found the following interesting results.

Simple linear models

Here is a scatter plot showing the Niño 3.4 index as a function of the average link strength on the same month:


(Click on these scatter plots for more information.)

The coefficient of determination, R^2, is 0.0175. In simple terms, this means that the average link strength in a given month explains just 1.75% of the variance of the Niño 3.4 index. That’s quite low!

Here is a scatter plot showing the Niño 3.4 index as a function of the average link strength six months earlier:


Now R^2 is 0.088. So, the link strength explains 8.8% of the variance in the Niño 3.4 index 6 months later. This is still not much—but interestingly, it’s much more than when we try to relate them at the same moment in time! And the p-value is less than 2.2 \cdot 10^{-16}, so the effect is statistically significant.

Of course, we could also try to use Niño 3.4 to predict itself. Here is the Niño 3.4 index plotted against the Niño 3.4 index six months earlier:

Now R^2 = 0.162. So, this is better than using the average link strength!

That doesn’t sound good for average link strength. But now let’s try to predict Niño 3.4 using both itself and the average link strength 6 months earlier. Here is a scatter plot showing that:

Here the x axis is an optimally chosen linear combination of average link strength and Niño 3.4: one that maximizes R^2.

In this case we get R^2 = 0.22.
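For completeness, here is the two-predictor version of the same toy exercise: an ordinary least-squares fit on both the lagged Niño 3.4 index and the lagged link strength, again with made-up series, so the resulting number means nothing in itself.

import numpy as np

# Made-up monthly series, as in the sketch above.
rng = np.random.default_rng(3)
nino34 = rng.standard_normal(766)
link = 0.3 * np.roll(nino34, -6) + rng.standard_normal(766)

lag = 6
# Two predictors (lagged Niño 3.4 and lagged link strength) plus an intercept.
X = np.column_stack([nino34[:-lag], link[:-lag], np.ones(766 - lag)])
y = nino34[lag:]
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
residual = y - X @ beta
r2 = 1 - np.sum(residual ** 2) / np.sum((y - y.mean()) ** 2)
print(r2)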

Conclusions

What can we conclude from this?

Using a linear model, the average link strength in a given month accounts for only about 8% of the variance of the Niño 3.4 index 6 months in the future. That sounds bad, and indeed it is.

However, there are more interesting things to say than this!

Both the Niño 3.4 index and the average link strength can be computed from the surface air temperature of the Pacific during some window in time. The Niño 3.4 index explains 16% of its own variance 6 months into the future; the average link strength explains 8%, and taken together they explain 22%. So, these two variables contain a fair amount of independent information about the Niño 3.4 index 6 months in the future.

Furthermore, they explain a surprisingly large amount of its variance for just 2 variables.

For comparison, Mahler used a random forest variant called ExtraTreesRegressor to predict the Niño 3.4 index 6 months into the future from much larger collections of data. Out of the 778 months available he trained the algorithm on the first 400 and tested it on the remaining 378.

The result: using a full world-wide grid of surface air temperature values at a given moment in time explains only 23% of the Niño 3.4 index 6 months into the future. A full grid of surface air pressure values does considerably better, but still explains only 34% of the variance. Using twelve months of the full grid of pressure values only gets around 37%.
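Here is a sketch of that kind of experiment, with synthetic data standing in for the real temperature or pressure grids; only the train/test split and the use of ExtraTreesRegressor follow the description above.

import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

# Synthetic stand-ins for the real data: `grid` plays the role of a
# flattened world-wide grid of monthly values, `nino34` the Niño 3.4 index.
rng = np.random.default_rng(4)
n_months, n_gridpoints = 778, 500
grid = rng.standard_normal((n_months, n_gridpoints))
nino34 = rng.standard_normal(n_months)

# Predict the index 6 months ahead; train on the first 400 months, test on the rest.
lag = 6
X, y = grid[:-lag], nino34[lag:]
model = ExtraTreesRegressor(n_estimators=200, random_state=0)
model.fit(X[:400], y[:400])
print(model.score(X[400:], y[400:]))   # R^2 on the held-out months (about 0 for pure noise)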

From this viewpoint, explaining 22% of the variance with just two variables doesn’t look so bad!

Moreover, while the Niño 3.4 index is maximally correlated with itself at the same moment in time, for obvious reasons, the average link strength is maximally correlated with the Niño 3.4 index 10 months into the future:

(The lines here occur at monthly intervals.)

However, we have not tried to determine if the average link strength as Ludescher et al define it is optimal in this respect. Graham Jones has shown that simplifying their definition of this quantity doesn’t change it much. Maybe modifying their definition could improve it. There seems to be a real phenomenon at work here, but I don’t think we know exactly what it is!

My talk has avoided discussing physical models of the ENSO, because I wanted to focus on very simple, general ideas from complex network theory. However, it seems obvious that really understanding the ENSO requires a lot of ideas from meteorology, oceanography, physics, and the like. I am not advocating a ‘purely network-based approach’.

