## El Niño Project (Part 6)

23 July, 2014

guest post by Steven Wenner

Hi, I’m Steve Wenner.

I’m an industrial statistician with over 40 years of experience in a wide range of applications (quality, reliability, product development, consumer research, biostatistics); but, somehow, time series only rarely crossed my path. Currently I’m working for a large consumer products company.

My undergraduate degree is in physics, and I also have a master’s in pure math. I never could reconcile how physicists used math (explain that Dirac delta function to me again in math terms? Heaviside calculus? On the other hand, I thought category theory was abstract nonsense until John showed me otherwise!). Anyway, I had to admit that I lacked the talent to pursue pure math or theoretical physics, so I became a statistician. I never regretted it—statistics has provided a very interesting and intellectually challenging career.

I got interested in Ludescher et al’s paper on El Niño prediction by reading Part 3 of this series. I have no expertise in climate science, except for an intense interest in the subject as a concerned citizen. So, I won’t talk about things like how Ludescher et al use a nonstandard definition of ‘El Niño’—that’s a topic for another time. Instead, I’ll look at some statistical aspects of their paper:

• Josef Ludescher, Avi Gozolchiani, Mikhail I. Bogachev, Armin Bunde, Shlomo Havlin, and Hans Joachim Schellnhuber, Very early warning of next El Niño, Proceedings of the National Academy of Sciences, February 2014. (Click title for free version, journal name for official version.)

### Analysis

I downloaded the NOAA adjusted monthly temperature anomaly data and compared the El Niño periods with the charts in this paper. I found what appear to be two errors (“phantom” El Niños) and noted some interesting situations. Some of these are annotated on the images below. Click to enlarge them:

I also listed for each year whether an El Niño initiation was predicted, or not, and whether one actually happened. I did the predictions five ways: first, I listed the authors’ “arrows” as they appeared on their charts, and then I tried to match their predictions by following, in turn, four sets of rules. Nevertheless, I could not come up with any detailed rules that exactly reproduced the authors’ results.

These were the rules I used:

An El Niño initiation is predicted for a calendar year if during the preceding year the average link strength crossed above the 2.82 threshold. However, we could also invoke additional requirements. Two possibilities are:

1. Preemption rule: the prediction of a new El Niño is canceled if the preceding year ends in an El Niño period.

2. End-of-year rule: the link strength must be above 2.82 at year’s end.

I counted the predictions using all four combinations of these two rules and compared the results to the arrows on the charts.
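To make the four combinations concrete, here is a minimal R sketch of how I read these rules. It is only an illustration of the logic, not the authors’ code or my spreadsheet; the inputs S (a series of average link strengths), dates (their matching dates) and elnino.at.year.end (a named logical vector saying whether each year ends inside an El Niño period) are assumptions of mine.

# Sketch of the prediction rules (variable names are my own assumptions):
# predict an El Niño initiation for 'year' if the average link strength crossed
# above theta at some point during the preceding year, optionally also requiring
# the end-of-year rule and/or the preemption rule.
predict.initiation <- function(year, S, dates, elnino.at.year.end,
                               preemption = FALSE, end.of.year = FALSE,
                               theta = 2.82) {
  s <- S[format(dates, "%Y") == as.character(year - 1)]      # preceding year's values
  crossed <- any(s[-1] > theta & s[-length(s)] <= theta)     # upward crossing of theta
  ok <- crossed
  if (end.of.year) ok <- ok && (tail(s, 1) > theta)          # rule 2: above theta at year's end
  if (preemption && elnino.at.year.end[as.character(year - 1)])
    ok <- FALSE                                              # rule 1: cancel the prediction
  ok
}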

I defined an “El Niño initiation month” to be a month where the monthly average adjusted temperature anomaly rises to at least 0.5 °C and remains at or above 0.5 °C for at least five months. Note that the NOAA El Niño monthly temperature estimates are rounded to hundredths; and, on occasion, the anomaly is reported as exactly 0.5 °C. I found slightly better agreement with the authors’ El Niño periods if I counted an anomaly of exactly 0.5 °C as satisfying the threshold criterion, instead of using the strictly “greater than” condition.
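Stated as code, the definition looks like this. This is just my own sketch in R (I actually did the work in JMP), assuming anom is a numeric vector of monthly NOAA anomalies in °C:

# Flag "El Niño initiation months": the anomaly reaches at least 0.5 °C and stays
# at or above 0.5 °C for at least five consecutive months.
initiation.months <- function(anom, threshold = 0.5, run = 5) {
  if (length(anom) < run) return(integer(0))
  above <- anom >= threshold            # ">=" so an anomaly of exactly 0.5 °C counts
  hits <- logical(length(anom))
  for (m in 1:(length(anom) - run + 1)) {
    starts.run <- (m == 1) || !above[m - 1]
    if (starts.run && all(above[m:(m + run - 1)])) hits[m] <- TRUE
  }
  which(hits)
}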

Anyway, I did some formal hypothesis testing and estimation under all five scenarios. The good news is that under most scenarios the prediction method gave better results than merely guessing. (But, I wonder how many things the authors tried before they settled on their final method? Also, did they do all their work on the learning series, and then only at the end check the validation series—or were they checking both as they went about their investigations?)

The bad news is that the predictions varied with the method, and the methods were rather weak. For instance, in the training series there were 9 El Niño periods in 30 years; the authors’ rules (whatever they were, exactly) found five of the nine. At the same time, they had three false alarms in the 21 years that did not have an El Niño initiated.

I used Fisher’s exact test to compute some p-values. Suppose (as our ‘null hypothesis’) that Ludescher et al’s method does not improve the odds of a successful prediction of an El Niño initiation. What’s the probability of that method getting at least as many predictions right just by chance? Answer: 0.032. This is marginally more significant than the conventional 1 in 20 chance that is the usual threshold for rejecting a null hypothesis, but still not terribly convincing. This was, by the way, the most significant of the five p-values for the alternative rule sets applied to the learning series.
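For readers who want to reproduce this kind of calculation, here is a sketch in R (I used JMP, so this is only an illustration), using the learning-series counts quoted above: 5 hits and 4 misses among the 9 El Niño years, and 3 false alarms among the 21 other years.

# 2x2 table for the learning series: rows = prediction made / not made,
# columns = El Niño initiated / not initiated (counts taken from the text above).
counts <- matrix(c(5,  3,
                   4, 18),
                 nrow = 2, byrow = TRUE,
                 dimnames = list(c("predicted", "not predicted"),
                                 c("El Niño", "no El Niño")))
fisher.test(counts, alternative = "greater")   # one-sided p-value, roughly 0.03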

I also computed the “relative risk” statistics for all scenarios; for instance, we are more than three times as likely to see an El Niño initiation if Ludescher et al predict one, than if they predict otherwise (the 90% confidence interval for that ratio is 1.2 to 9.7, with the point estimate 3.4). Here is a screen shot of some statistics for that case:

Again, click to enlarge—but my whole working spreadsheet is available with more details for anyone who wishes to see it. I did the statistical analysis with a program called JMP, a product of the SAS corporation.
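If you prefer to check the headline numbers without JMP, the relative-risk point estimate is just a ratio of conditional proportions; a quick sketch with the same counts as above:

# P(El Niño initiation | predicted) / P(El Niño initiation | not predicted),
# using the learning-series counts: 5 of 8 predicted years vs. 4 of 22 others.
p.predicted     <- 5 / (5 + 3)
p.not.predicted <- 4 / (4 + 18)
p.predicted / p.not.predicted   # about 3.4, consistent with the point estimate above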

My overall impression from all this is that Ludescher et al are suggesting a somewhat arbitrary (and not particularly well-defined) method for revealing the relationship between link strength and El Niño initiation, if, indeed, a relationship exists. Slight variations in the interpretation of their criteria and slight variations in the data result in appreciably different predictions. I wonder if there are better ways to analyze these two correlated time series.

## El Niño Project (Part 5)

12 July, 2014

And now for some comic relief.

Last time I explained how to download some weather data and start analyzing it, using programs written by Graham Jones. When you read that, did you think “Wow, that’s easy!” Or did you think “Huh? Run programs in R? How am I supposed to do that?”

If you’re in the latter group, you’re like me. But I managed to do it. And this is the tale of how. It’s a blow-by-blow account of my first steps, my blunders, my fears.

I hope that if you’re intimidated by programming, my tale will prove that you too can do this stuff… provided you have smart friends, or read this article.

Here’s what I’ll show you how to do:

• use R to create a file of temperature data for a given latitude/longitude rectangle for a given time interval.

I will not attempt to explain how to program in R.

If you want to copy what I’m doing, please remember that a few details depend on the operating system. Since I don’t care about operating systems, I use a Windows PC. If you use something better, some details will differ for you.

Also: at the end of this article there are some very basic programming puzzles.

First, let me explain a bit about my relation to computers.

I first saw a computer at the Lawrence Hall of Science in Berkeley, back when I was visiting my uncle in the summer of 1978. It was really cool! They had some terminals where you could type programs in BASIC and run them.

I got especially excited when he gave me the book Computer Lib/Dream Machines by Ted Nelson. It espoused the visionary idea that people could write texts on computers all around the world—“hypertexts” where you could click on a link in one and hop to another!

I did more programming the next year in high school, sitting in a concrete block room with a teletype terminal that was connected to a mainframe somewhere far away. I stored my programs on paper tape. But my excitement gradually dwindled, because I was having more fun doing math and physics using just pencil and paper. My own brain was easier to program than the machine. I did not start a computer company. I did not get rich. I learned quantum mechanics, and relativity, and Gödel’s theorem.

Later I did some programming in APL in college, and still later I did a bit in Mathematica in the early 1990s… but nothing much, and nothing sophisticated. Indeed, none of these languages would be the ones you’d choose to explore sophisticated ideas in computation!

I’ve just never been very interested… until now. I now want to do a lot of data analysis. It will be embarrassing to keep asking other people to do all of it for me. I need to learn how to do it myself.

Maybe you’d like to do this stuff too—or at least watch me make a fool of myself. So here’s my tale, from the start.

To use the programs written by Graham, I need to use R, a language currently popular among statisticians. It is not the language my programmer friends would want me to learn—they’d want me to use something like Python. But tough! I can learn that later.

To download R to my Windows PC, I cleverly type download R into Google, and go to the top website it recommends:

I click the big fat button on top saying

Download R 3.1.0 for Windows

and get asked to save a file R-3.1.0-win.exe. I save it in my Downloads folder; it takes a while to download, since it’s 57 megabytes. When I get it, I click on it and follow the easy default installation instructions. My Desktop window now has a little icon on it that says R.

Clicking this, I get an interface where I can type commands after a red

>

symbol. Following Graham’s advice, I start by trying

> 2^(1:8)

which generates a list of powers of 2, from 2¹ to 2⁸, like this:

[1] 2 4 8 16 32 64 128 256

Then I try

> mean(2^(1:8))

which gives the arithmetic mean of this list. Somewhat more fun is

> plot(rnorm(20))

which plots a bunch of points, apparently 20 standard normal deviates.

When I hear “20 standard normal deviates” I think of the members of a typical math department… but no, those are deviants. Standard normal deviates are random numbers chosen from a Gaussian distribution of mean zero and variance 1.

To do something more interesting, I need to input data.

The papers by Ludescher et al use surface air temperatures in a certain patch of the Pacific, so I want to get ahold of those. They’re here:

NCEP is the National Centers for Environmental Prediction, and NCAR is the National Center for Atmospheric Research. They have a bunch of files here containing worldwide daily average temperatures on a 2.5 degree latitude × 2.5 degree longitude grid (that’s 144 × 73 grid points), from 1948 to now. And if you go here, the website will help you get data from within a chosen rectangle in a grid, for a chosen time interval.

These are NetCDF files. NetCDF stands for Network Common Data Form:

NetCDF is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.

According to my student Blake Pollard:

I know about ftp: I’m so old that I know this was around before the web existed. Back then it meant “faster than ponies”. But I need to get R to accept data from these NetCDF files: that’s what scares me!

Graham said that R has a “package” called RNetCDF for using NetCDF files. So, I need to get ahold of this package, download some files in the NetCDF format, and somehow get R to eat those files with the help of this package.

At first I was utterly clueless! However, after a bit of messing around, I notice that right on top of the R interface there’s a menu item called Packages. I boldly click on this and choose Install Package(s).

I am rewarded with an enormous alphabetically ordered list of packages… obviously statisticians have lots of stuff they like to do over and over! I find RNetCDF, click on that and click something like “OK”.

I’m asked if I want to use a “personal library”. I click “no”, and get an error message. So I click “yes”. The computer barfs out some promising text:

utils:::menuInstallPkgs()
trying URL 'http://cran.stat.nus.edu.sg/bin/windows/contrib/3.1/RNetCDF_1.6.2-3.zip'
Content type 'application/zip' length 548584 bytes (535 Kb)
opened URL
downloaded 535 Kb

package ‘RNetCDF’ successfully unpacked and MD5 sums checked

The downloaded binary packages are in
C:\Users\JOHN\AppData\Local\Temp\Rtmp4qJ2h8\downloaded_packages

Success!

But now I need to figure out how to download a file and get R to eat it and digest it with the help of RNetCDF.

At this point my deus ex machina, Graham, descends from the clouds and says:

You can download the files from your browser. It is probably easiest to do that for starters. Put
ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface/
into the browser, then right-click a file and Save link as…

for (year in 1950:1979) {
  download.file(url=paste0("ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface/air.sig995.", year, ".nc"),
    destfile=paste0("air.sig995.", year, ".nc"), mode="wb")
}

It will put them into the “working directory”, probably C:\Users\JOHN\Documents. You can find the working directory using getwd(), and change it with setwd(). But you must use / not \ in the filepath.

Compared to UNIX, the Windows operating system has the peculiarity of using \ instead of / in path names, but R uses the UNIX conventions even on Windows.

So, after some mistakes, in the R interface I type

> setwd("C:/Users/JOHN/Documents/My Backups/azimuth/el nino")

and then type

> getwd()

to see if I’ve succeeded. I’m rewarded with

[1] "C:/Users/JOHN/Documents/My Backups/azimuth/el nino"

Good!

Then, following Graham’s advice, I cut-and-paste this into the R interface:

for (year in 1950:1979) {
  download.file(url=paste0("ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface/air.sig995.", year, ".nc"),
    destfile=paste0("air.sig995.", year, ".nc"), mode="wb")
}

It seems to be working! A little bar appears showing how each year’s data is getting downloaded. It chugs away, taking a couple minutes for each year’s worth of data.

### Using R to process NetCDF files

Okay, now I’ve got all the worldwide daily average temperatures on a 2.5 degree latitude × 2.5 degree longitude grid from 1950 to 1979.

The world is MINE!

But what do I do with it? Graham’s advice is again essential, along with a little R program, or script, that he wrote:

The R script netcdf-convertor.R from

https://github.com/azimuth-project/el-nino/tree/master/R

will eat the file, digest it, and spit it out again. It contains instructions.

I go to this URL, which is on GitHub, a popular free web-based service for software development. You can store programs here, edit them, and GitHub will help you keep track of the different versions. I know almost nothing about this stuff, but I’ve seen it before, so I’m not intimidated.

I click on the blue thing that says netcdf-convertor.R and see something that looks like the right script. Unfortunately I can’t see how to download it! I eventually see a button I’d overlooked, cryptically labelled “Raw”. I realize that since I don’t want a roasted or oven-broiled piece of software, I should click on this. I indeed succeed in downloading netcdf-convertor.R this way. Graham later says I could have done something better, but oh well. I’m just happy nothing has actually exploded yet.

Once I’ve downloaded this script, I open it using a text editor and look at it. At top are a bunch of comments written by Graham:

######################################################
######################################################

# You should be able to use this by editing this
# section only.

setwd("C:/Users/Work/AAA/Programming/ProgramOutput/Nino")

lat.range <- 13:14
lon.range <- 142:143

firstyear <- 1957
lastyear <- 1958

outputfilename <- paste0("Scotland-", firstyear, "-", lastyear, ".txt")

######################################################
######################################################

# Explanation

# 1. Use setwd() to set the working directory
# to the one containing the .nc files such as
# air.sig995.1951.nc.
# Example:
# setwd("C:/Users/Work/AAA/Programming/ProgramOutput/Nino")

# 2. Supply the latitude and longitude range. The
# NOAA data is every 2.5 degrees. The ranges are
# supplied as the number of steps of this size.
# For latitude, 1 means North Pole, 73 means South
# Pole. For longitude, 1 means 0 degrees East, 37
# is 90E, 73 is 180, 109 is 90W or 270E, 144 is
# 2.5W.

# These roughly cover Scotland.
# lat.range <- 13:14
# lon.range <- 142:143

# These are the area used by Ludescher et al,
# 2013. It is 27x69 points which are then
# subsampled to 9 by 23.
# lat.range <- 24:50
# lon.range <- 48:116

# 3. Supply the years
# firstyear <- 1950
# lastyear <- 1952

# 4. Supply the output name as a text string.
# paste0() concatenates strings which you may find
# handy:
# outputfilename <- paste0("Pacific-", firstyear, "-", lastyear, ".txt")

######################################################
######################################################

# Example of output
# S013E142 S013E143 S014E142 S014E143
# Y1950P001 281.60000272654 281.570002727211 281.60000272654 280.970002740622
# Y1950P002 280.740002745762 280.270002756268 281.070002738386 280.49000275135
# Y1950P003 280.100002760068 278.820002788678 281.120002737269 280.070002760738
# Y1950P004 281.070002738386 279.420002775267 281.620002726093 280.640002747998
# ...
# Y1950P193 285.450002640486 285.290002644062 285.720002634451 285.75000263378
# Y1950P194 285.570002637804 285.640002636239 286.070002626628 286.570002615452
# Y1950P195 285.92000262998 286.220002623275 286.200002623722 286.620002614334
# ...
# Y1950P364 276.100002849475 275.350002866238 276.37000284344 275.200002869591
# Y1950P365 276.990002829581 275.820002855733 276.020002851263 274.72000288032
# Y1951P001 278.220002802089 277.470002818853 276.700002836064 275.870002854615
# Y1951P002 277.750002812594 276.890002831817 276.650002837181 275.520002862439
# ...
# Y1952P365 280.35000275448 280.120002759621 280.370002754033 279.390002775937

# There is one row for each day, and 365 days in
# each year (leap days are omitted). In each row,
# you have temperatures in Kelvin for each grid
# point in a rectangle.

# S13E142 means 13 steps South from the North Pole
# and 142 steps East from Greenwich. The points
# are in reading order, starting at the top-left
# (Northmost, Westmost) and going along the top
# row first.

# Y1950P001 means year 1950, day 1. (P because
# longer periods might be used later.)

######################################################
######################################################

The instructions are admirably detailed concerning what I should do, but they don't say where the output will appear when I do it. This makes me nervous. I guess I should just try it. After all, the program is not called DestroyTheWorld.

Unfortunately, at this point a lot of things start acting weird.

It's too complicated and boring to explain in detail, but basically, I keep getting a file missing error message. I don't understand why this happens under some conditions and not others. I try lots of experiments.

Eventually I discover that one year of temperature data failed to download—the year 1949, right after the first year available! So, I'm getting the error message whenever I try to do anything involving that year of data.

To fix the problem, I simply download the 1949 data by hand from here:

(You can open ftp addresses in a web browser just like http addresses.) I put it in my working directory for R, and everything is fine again. Whew!

By the time I get this file, I sort of know what to do—after all, I've spent about an hour trying lots of different things.

I decide to create a file listing temperatures near where I live in Riverside from 1948 to 1979. To do this, I open Graham's script netcdf-convertor.R in a word processor and change this section:
setwd("C:/Users/Work/AAA/Programming/ProgramOutput/Nino")
lat.range <- 13:14
lon.range <- 142:143
firstyear <- 1957
lastyear <- 1958
outputfilename <- paste0("Scotland-", firstyear, "-", lastyear, ".txt")

to this:
setwd("C:/Users/JOHN/Documents/My Backups/azimuth/el nino")
lat.range <- 23:23
lon.range <- 98:98
firstyear <- 1948
lastyear <- 1979
outputfilename <- paste0("Riverside-", firstyear, "-", lastyear, ".txt")

Why? Well, I want it to put the file in my working directory. I want the years from 1948 to 1979. And I want temperature data from where I live!

Googling the info, I see Riverside, California is at 33.9481° N, 117.3961° W. 34° N is about 56 degrees south of the North Pole, which is 22 steps of size 2.5°. And because some idiot decided everyone should count starting at 1 instead of 0 even in contexts like this, the North Pole itself is step 1, not step 0… so Riverside is latitude step 23. That's why I write:

lat.range <- 23:23

Similarly, 117.5° W is 242.5° E, which is 97 steps of size 2.5°… which counts as step 98 according to this braindead system. That's why I write:

lon.range <- 98:98
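If you want to redo this arithmetic for some other location, here's a tiny helper I could have used. It is just my own sketch; it encodes the counting convention described above (nearest grid point, steps of 2.5°, counting from 1):

# Convert latitude/longitude in degrees to the grid steps used by the NOAA files.
# Latitude step 1 is the North Pole; longitude step 1 is 0° East.
grid.steps <- function(lat, lon) {
  lat.step <- round((90 - lat) / 2.5) + 1
  lon.step <- round((lon %% 360) / 2.5) + 1   # pass degrees West as negative numbers
  c(lat = lat.step, lon = lon.step)
}

grid.steps(33.9481, -117.3961)   # Riverside: lat 23, lon 98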

Having done this, I save the file netcdf-convertor.R under another name, Riverside.R.

And then I do some stuff that it took some fiddling around to discover.

First, in my R interface I go to the menu item File, at far left, and click on Open script. It lets me browse around, so I go to my working directory for R and choose Riverside.R. A little window called R editor opens up in my R interface, containing this script.

I'm probably not doing this optimally, but I can now right-click on the R editor and see a menu with a choice called Select all. If I click this, everything in the window turns blue. Then I can right-click again and choose Run line or selection. And the script runs!

Voilà!

It huffs and puffs, and then stops. I peek in my working directory, and see that a file called

Riverside-1948-1979.txt

has been created. When I open it, it has lots of lines, starting with these:
S023E098
Y1948P001 279.95
Y1948P002 280.14
Y1948P003 282.27
Y1948P004 283.97
Y1948P005 284.27
Y1948P006 286.97

As Graham promised, each line has a year and day label, followed by a vector… which in my case is just a single number, since I only wanted the temperature in one location. I’m hoping this is the temperature near Riverside, in kelvin.

### A small experiment

To see if this is working, I’d like to plot these temperatures and see if they make sense. Unfortunately I have no idea how to get R to take a file containing data of the sort I have and plot it! I need to learn how, but right now I’m exhausted, so I use another method to get the job done— a method that’s too suboptimal and embarrassing to describe here. (Hint: it involves the word “Excel”.)

I do a few things, but here’s the most interesting one—namely, not very interesting. I plot the temperatures for 1963:

I compare it to some publicly available data, not from Riverside, but from nearby Los Angeles:

As you can see, there was a cold day on January 13th, when the temperature dropped to 33°F. That seems to be visible on the graph I made, and looking at the data from which I made the graph, I see the temperature dropped to 251.4 kelvin on the 13th: that’s -7°F, very cold for here. It does get colder around Riverside than in Los Angeles in the winter, since it’s a desert, with temperatures not buffered by the ocean. So, this does seem compatible with the public records. That’s mildly reassuring.

But other features of the graph don’t match, and I’m not quite sure if they should or not. So, all this is very tentative and unimpressive. However, I’ve managed to get over some of my worst fears, download some temperature data, and graph it! Now I need to learn how to use R to do statistics with this data, and graph it in a better way.

### Puzzles

You can help me out by answering these puzzles. Later I might pose puzzles where you can help us write really interesting programs. But for now it’s just about learning R.

Puzzle 1. Given a text file with lots of lines of this form:
S023E098
Y1948P001 279.95
Y1948P002 280.14
Y1948P003 282.27
Y1948P004 283.97

write an R program that creates a huge vector, or list of numbers, like this:
 279.95, 280.14, 282.27, 283.97, ... 

Puzzle 2. Extend the above program so that it plots this list of numbers, or outputs it to a new file.
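(If you get stuck, here is one possible approach; it is just a sketch, and certainly not the only way. It relies on the fact that the file's header line has one fewer field than the data lines, so read.table treats the first column as row names. The output filename below is made up.)

# Puzzle 1: read the file and extract the temperatures as a plain numeric vector.
d <- read.table("Riverside-1948-1979.txt", header = TRUE)
temps <- d[[1]]                                     # the single temperature column

# Puzzle 2: plot the list of numbers, or write it to a new file.
plot(temps, type = "l", xlab = "day", ylab = "temperature (K)")
write(temps, file = "Riverside-temps.txt", ncolumns = 1)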

If you want to test your programs, here’s the actual file:

### More puzzles

If those puzzles are too easy, here are two more. I gave these last time, but everyone was too wimpy to tackle them.

Puzzle 3. Modify the software so that it uses the same method to predict El Niños from 1980 to 2013. You’ll have to adjust two lines in netcdf-convertor-ludescher.R:

firstyear <- 1948
lastyear <- 1980


should become

firstyear <- 1980
lastyear <- 2013


or whatever range of years you want. You’ll also have to adjust names of years in ludescher-replication.R. Search the file for the string 19 and make the necessary changes. Ask me if you get stuck.

Puzzle 4. Right now we average the link strength over all pairs $(i,j)$ where $i$ is a node in the El Niño basin defined by Ludescher et al and $j$ is a node outside this basin. The basin consists of the red dots here:

What happens if you change the definition of the El Niño basin? For example, can you drop those annoying two red dots that are south of the rest, without messing things up? Can you get better results if you change the shape of the basin?

To study these questions you need to rewrite ludescher-replication.R a bit. Here’s where Graham defines the El Niño basin:

ludescher.basin <- function() {
lats <- c( 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6)
lons <- c(11,12,13,14,15,16,17,18,19,20,21,22,16,22)
stopifnot(length(lats) == length(lons))
list(lats=lats,lons=lons)
}


These are lists of latitude and longitude coordinates: (5,11), (5,12), (5,13), etc. A coordinate like (5,11) means the little circle that’s 5 down and 11 across in the grid on the above map. So, that’s the leftmost point in Ludescher’s El Niño basin. By changing these lists, you can change the definition of the El Niño basin.

Next time I’ll discuss some criticisms of Ludescher et al’s paper, but later we will return to analyzing temperature data, looking for interesting patterns.

## El Niño Project (Part 4)

8 July, 2014

As the first big step in our El Niño prediction project, Graham Jones replicated the paper by Ludescher et al that I explained last time. Let’s see how this works!

Graham did it using R, a programming language that’s good for statistics. If you prefer another language, go ahead and write software for that… and let us know! We can add it to our repository.

Today I’ll explain this stuff to people who know their way around computers. But I’m not one of those people! So, next time I’ll explain the nitty-gritty details in a way that may be helpful to people more like me.

### Getting temperature data

Say you want to predict El Niños from 1950 to 1980 using Ludescher et al’s method. To do this, you need daily average surface air temperatures in this grid in the Pacific Ocean:

Each square here is 7.5° × 7.5°. To compute these temperatures, you have to start with temperatures on a grid with smaller squares that are 2.5° × 2.5° in size:

• Earth System Research Laboratory, NCEP Reanalysis Daily Averages Surface Level, or ftp site.

This website will give you daily average surface air temperatures in whatever rectangle and time interval you want. It delivers this data in a format called NetCDF, meaning Network Common Data Form.

We’ll take a different approach. We’ll download all the temperatures in this database, and then extract the data we need using R scripts. That way, when we play other games with temperature data later, we’ll already have it.

So, go ahead and download all the files from air.sig995.1948.nc to air.sig995.2013.nc. It will take a while… but you’ll own the world.

There are different ways to do this. If you have R fired up, just cut-and-paste this into the console:

for (year in 1948:2013) {
  download.file(url=paste0("ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface/air.sig995.", year, ".nc"),
    destfile=paste0("air.sig995.", year, ".nc"), mode="wb")
}


### Getting the temperatures you need

Now you have files of daily average temperatures on a 2.5° by 2.5° grid from 1948 to 2013. Make sure all these files are in your working directory for R, and download this R script from GitHub:

You can use this to get the temperatures in any time interval and any rectangle of grid points you want. The details are explained in the script. But the defaults are set to precisely what you need now!

So, just run this script. You should get a file called Pacific-1948-1980.txt. This has daily average temperatures in the region we care about, from 1948 to 1980. It should start with a really long line listing locations in a 27 × 69 grid, starting with S024E048 and ending with S050E116. I’ll explain this coordinate scheme at the end of this post. Then come hundreds of lines listing temperatures in kelvin at those locations on successive days. The first of these lines should start with Y1948P001, meaning the first day of 1948.

And I know what you’re dying to ask: yes, leap days are omitted! This annoys the perfectionist in me… but leap years make data analysis more complicated, so Ludescher et al ignore leap days, and we will too.

### Getting the El Niño data

You’ll use this data to predict El Niños, so you also want a file of the Niño 3.4 index. Remember from last time, this says how much hotter the surface of this patch of seawater is than usual for this time of year:

This is a copy of the monthly Niño 3.4 index from the US National Weather Service, which I discussed last time. It has monthly Niño 3.4 data in the column called ANOM.

Put this file in your working directory.

### Predicting El Niños

Now for the cool part. Last time I explained the ‘average link strength’, which Ludescher et al use to predict El Niños. Now you’ll compute it.

You’ve got Pacific-1948-1980.txt and nino3.4-anoms.txt in your working directory. Download this R script written by Graham Jones, and run it:

It takes about 45 minutes on my laptop. It computes the average link strength $S$ at ten-day intervals. Then it plots $S$ in red and the Niño 3.4 index in blue, like this:

(Click to enlarge.) The shaded region is where the Niño 3.4 index is below 0.5°C. When the blue curve escapes this region and then stays above 0.5°C for at least 5 months, Ludescher et al say that there’s an El Niño.

The horizontal red line shows the threshold $\theta = 2.82.$ When $S$ exceeds this, and the Niño 3.4 index is not already over 0.5°C, Ludescher et al predict that there will be an El Niño in the next calendar year!

Our graph almost agrees with theirs:

Here the green arrows show their successful predictions, dashed arrows show false alarms, and a little letter n appears next to each El Niño they failed to predict.

The graphs don’t match perfectly. For the blue curves, we could be using Niño 3.4 from different sources. Differences in the red curves are more interesting, since that’s where all the work is involved, and we’re starting with the same data. Besides actual bugs, which are always possible, I can think of various explanations. None of them are extremely interesting, so I’ll stick them in the last section!

If you want to get ahold of our output, you can do so here:

This has the average link strength $S$ at 10-day intervals, starting from day 730 and going until day 12040, where day 1 is the first of January 1948.

So, you don’t actually have to run all these programs to get our final result. However, these programs will help you tackle some programming challenges which I’ll list now!

### Programming challenges

There are lots of variations on the Ludescher et al paper which we could explore. Here are a few easy ones to get you started. If you do any of these, or anything else, let me know!

Challenge 1. Repeat the calculation with temperature data from 1980 to 2013. You’ll have to adjust two lines in netcdf-convertor-ludescher.R:

firstyear <- 1948
lastyear <- 1980


should become

firstyear <- 1980
lastyear <- 2013


or whatever range of years you want. You’ll also have to adjust names of years in ludescher-replication.R. Search the file for the string 19 and make the necessary changes. Ask me if you get stuck.

Challenge 2. Repeat the calculation with temperature data on a 2.5° × 2.5° grid instead of the coarser 7.5° × 7.5° grid Ludescher et al use. You’ve got the data you need. Right now, the program ludescher-replication.R averages out the temperatures over little 3 × 3 squares. It starts with temperatures on a 27 × 69 grid and averages them out to obtain temperatures on the 9 × 23 grid shown here:

Here’s where that happens:

# the data per day is reduced from e.g. 27x69 to 9x23.

subsample.3x3 <- function(vals) {
  stopifnot(dim(vals)[2] %% 3 == 0)
  stopifnot(dim(vals)[3] %% 3 == 0)
  n.sslats <- dim(vals)[2]/3
  n.sslons <- dim(vals)[3]/3
  ssvals <- array(0, dim=c(dim(vals)[1], n.sslats, n.sslons))
  for (d in 1:dim(vals)[1]) {
    for (slat in 1:n.sslats) {
      for (slon in 1:n.sslons) {
        ssvals[d, slat, slon] <- mean(vals[d, (3*slat-2):(3*slat), (3*slon-2):(3*slon)])
      }
    }
  }
  ssvals
}


So, you need to eliminate this and change whatever else needs to be changed. What new value of the threshold $\theta$ looks good for predicting El Niños now? Most importantly: can you get better at predicting El Niños this way?

The calculation may take a lot longer, since you’ve got 9 times as many grid points and you’re calculating correlations between pairs. So if this is too tough, you can go the other way: use a coarser grid and see how much that degrades your ability to predict El Niños.

Challenge 3. Right now we average the link strength over all pairs $(i,j)$ where $i$ is a node in the El Niño basin defined by Ludescher et al and $j$ is a node outside this basin. The basin consists of the red dots here:

What happens if you change the definition of the El Niño basin? For example, can you drop those annoying two red dots that are south of the rest, without messing things up? Can you get better results if you change the shape of the basin?

To study these questions you need to rewrite ludescher-replication.R a bit. Here’s where Graham defines the El Niño basin:

ludescher.basin <- function() {
lats <- c( 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6)
lons <- c(11,12,13,14,15,16,17,18,19,20,21,22,16,22)
stopifnot(length(lats) == length(lons))
list(lats=lats,lons=lons)
}


These are lists of latitude and longitude coordinates: (5,11), (5,12), (5,13), etc. A coordinate like (5,11) means the little circle that’s 5 down and 11 across in the grid on the above map. So, that’s the leftmost point in Ludescher’s El Niño basin. By changing these lists, you can change the definition of the El Niño basin. You’ll also have to change these lists if you tackle Challenge 2.

There’s a lot more you can do… the sky’s the limit! In the weeks to come, I’ll show you lots of things we’ve actually done.

### Annoying nuances

Here are two reasons our average link strengths could differ from Ludescher’s.

Last time I mentioned that Ludescher et al claim to normalize their time-delayed cross-covariances in a sort of complicated way. I explained why I don’t think they could have actually used this method. In ludescher-replication.R, Graham used the simpler normalization described last time: namely, dividing by

$\sqrt{\langle T_i(t)^2 \rangle - \langle T_i(t) \rangle^2} \; \sqrt{\langle T_j(t-\tau)^2 \rangle - \langle T_j(t-\tau) \rangle^2}$

rather than the expression with nested angle brackets that they describe in their paper:

$\sqrt{ \langle (T_i(t) - \langle T_i(t)\rangle)^2 \rangle} \; \sqrt{ \langle (T_j(t-\tau) - \langle T_j(t-\tau)\rangle)^2 \rangle}$

Since we don’t really know what Ludescher et al did, they might have done something else.

We might also have used a different ‘subsampling’ procedure. That’s a name for how we get from the temperature data on a 27 × 69 grid to temperatures on a 9 × 23 grid. While the original data files give temperatures named after grid points, each is really an area-averaged temperature for a 2.5° × 2.5° square. Is this square centered at the grid point, or does the square have that grid point as its north-west corner, or what? I don’t know.

This data is on a grid where the coordinates are the number of steps of 2.5 degrees, counting from 1. So, for latitude, 1 means the North Pole, 73 means the South Pole. For longitude, 1 means the prime meridian, 37 means 90° east, 73 means 180° east, 109 means 270°E or 90°W, and 144 means 2.5° west. It’s an annoying system, as far as I’m concerned.

In ludescher-replication.R we use this range of coordinates:

lat.range <- 24:50
lon.range <- 48:116


That’s why your file Pacific-1948-1980.txt has locations starting with S024E048 and ending with S050E116. Maybe Ludescher et al used a slightly different range or subsampling procedure!

There are probably lots of other nuances I haven’t noticed. Can you think of some?

## El Niño Project (Part 3)

1 July, 2014

In February, this paper claimed there’s a 75% chance the next El Niño will arrive by the end of 2014:

• Josef Ludescher, Avi Gozolchiani, Mikhail I. Bogachev, Armin Bunde, Shlomo Havlin, and Hans Joachim Schellnhuber, Very early warning of next El Niño, Proceedings of the National Academy of Sciences, February 2014. (Click title for free version, journal name for official version.)

Since it was published in a reputable journal, it created a big stir! Being able to predict an El Niño more than 6 months in advance would be a big deal. El Niños can cause billions of dollars of damage.

But that’s not the only reason we at the Azimuth Project want to analyze, criticize and improve this paper. Another reason is that it uses a climate network—and we like network theory.

Very roughly, the idea is this. Draw a big network of dots representing different places in the Pacific Ocean. For each pair of dots, compute a number saying how strongly correlated the temperatures are at those two places. The paper claims that when an El Niño is getting ready to happen, the average of these numbers is big. In other words, temperatures in the Pacific tend to go up and down in synch!

Whether this idea is right or wrong, it’s interesting—and it’s not very hard for programmers to dive in and study it.

Two Azimuth members have done just that: David Tanzer, a software developer who works for financial firms in New York, and Graham Jones, a self-employed programmer who also works on genomics and Bayesian statistics. These guys have really brought new life to the Azimuth Code Project in the last few weeks, and it’s exciting! It’s even gotten me to do some programming myself.

Soon I’ll start talking about the programs they’ve written, and how you can help. But today I’ll summarize the paper by Ludescher et al. Their methodology is also explained here:

• Josef Ludescher, Avi Gozolchiani, Mikhail I. Bogachev, Armin Bunde, Shlomo Havlin, and Hans Joachim Schellnhuber, Improved El Niño forecasting by cooperativity detection, Proceedings of the National Academy of Sciences, 30 May 2013.

### The basic idea

The basic idea is to use a climate network. There are lots of variants on this idea, but here’s a simple one. Start with a bunch of dots representing different places on the Earth. For any pair of dots $i$ and $j,$ compute the cross-correlation of temperature histories at those two places. Call some function of this the ‘link strength’ for that pair of dots. Compute the average link strength… and get excited when this gets bigger than a certain value.

The papers by Ludescher et al use this strategy to predict El Niños. They build their climate network using correlations between daily temperature data for 14 grid points in the El Niño basin and 193 grid points outside this region, as shown here:

The red dots are the points in the El Niño basin.

Starting from this temperature data, they compute an ‘average link strength’ in a way I’ll describe later. When this number is bigger than a certain fixed value, they claim an El Niño is coming.

How do they decide if they’re right? How do we tell when an El Niño actually arrives? One way is to use the ‘Niño 3.4 index’. This is the area-averaged sea surface temperature anomaly in the yellow region here:

Anomaly means the temperature minus its average over time: how much hotter than usual it is. When the Niño 3.4 index is over 0.5°C for at least 5 months, Ludescher et al say there’s an El Niño. (By the way, this is not the standard definition. But we will discuss that some other day.)

Here is what they get:

The blue peaks are El Niños: episodes where the Niño 3.4 index is over 0.5°C for at least 5 months.

The red line is their ‘average link strength’. Whenever this exceeds a certain threshold $\Theta = 2.82,$ and the Niño 3.4 index is not already over 0.5°C, they predict an El Niño will start in the following calendar year.

The green arrows show their successful predictions. The dashed arrows show their false alarms. A little letter n appears next to each El Niño that they failed to predict.

You’re probably wondering where the number $2.82$ came from. They get it from a learning algorithm that finds this threshold by optimizing the predictive power of their model. Chart A here shows the ‘learning phase’ of their calculation. In this phase, they adjusted the threshold $\Theta$ so their procedure would do a good job. Chart B shows the ‘testing phase’. Here they used the value of $\Theta$ chosen in the learning phase, and checked to see how good a job it did. I’ll let you read their paper for more details on how they chose $\Theta.$

But what about their prediction now? That’s the green arrow at far right here:

On 17 September 2013, the red line went above the threshold! So, their scheme predicts an El Niño sometime in 2014. The chart at right is a zoomed-in version that shows the red line in August, September, October and November of 2013.

### The details

Now I mainly need to explain how they compute their ‘average link strength’.

Let $i$ stand for any point in this 9 × 23 grid:

For each day $t$ between June 1948 and November 2013, let $\tilde{T}_i(t)$ be the average surface air temperature at the point $i$ on day $t.$

Let $T_i(t)$ be $\tilde{T}_i(t)$ minus its climatological average. For example, if $t$ is June 1st 1970, we average the temperature at location $i$ over all June 1sts from 1948 to 2013, and subtract that from $\tilde{T}_i(t)$ to get $T_i(t).$

They call $T_i(t)$ the temperature anomaly.

(A subtlety here: when we are doing prediction we can’t know the future temperatures, so the climatological average is only the average over past days meeting the above criteria.)
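Here is a small sketch of what that means in code. It is my own illustration, not Graham's script: temps is assumed to be a matrix of raw temperatures at one grid point, with one row per year and 365 columns (leap days dropped), and the climatology uses only years up to the current one, as in the parenthetical remark above.

# Temperature anomaly at one grid point: subtract, for each calendar day, the mean
# over the same calendar day in all years up to and including the current year.
anomaly <- function(temps) {
  anom <- temps
  for (d in 1:ncol(temps)) {
    for (y in 1:nrow(temps)) {
      anom[y, d] <- temps[y, d] - mean(temps[1:y, d])
    }
  }
  anom
}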

For any function of time, denote its moving average over the last 365 days by:

$\displaystyle{ \langle f(t) \rangle = \frac{1}{365} \sum_{d = 0}^{364} f(t - d) }$

Let $i$ be a point in the El Niño basin, and $j$ be a point outside it. For any time lag $\tau$ between 0 and 200 days, define the time-delayed cross-covariance by:

$\langle T_i(t) T_j(t - \tau) \rangle - \langle T_i(t) \rangle \langle T_j(t - \tau) \rangle$

Note that this is a way of studying the linear correlation between the temperature anomaly at node $i$ and the temperature anomaly a time $\tau$ earlier at some node $j.$ So, it’s about how temperature anomalies inside the El Niño basin are correlated to temperature anomalies outside this basin at earlier times.

Ludescher et al then normalize this, defining the time-delayed cross-correlation $C_{i,j}^{t}(-\tau)$ to be the time-delayed cross-covariance divided by

$\sqrt{\Big{\langle} (T_i(t) - \langle T_i(t)\rangle)^2 \Big{\rangle}} \; \sqrt{\Big{\langle} (T_j(t-\tau) - \langle T_j(t-\tau)\rangle)^2 \Big{\rangle}}$

This is something like the standard deviation of $T_i(t)$ times the standard deviation of $T_j(t - \tau).$ Dividing by standard deviations is what people usually do to turn covariances into correlations. But there are some potential problems here, which I’ll discuss later.

They define $C_{i,j}^{t}(\tau)$ in a similar way, by taking

$\langle T_i(t - \tau) T_j(t) \rangle - \langle T_i(t - \tau) \rangle \langle T_j(t) \rangle$

and normalizing it. So, this is about how temperature anomalies outside the El Niño basin are correlated to temperature anomalies inside this basin at earlier times.

Next, for nodes $i$ and $j,$ and for each time $t,$ they determine the maximum, the mean and the standard deviation of $|C_{i,j}^t(\tau)|,$ as $\tau$ ranges from -200 to 200 days.

They define the link strength $S_{i j}(t)$ as the difference between the maximum and the mean value, divided by the standard deviation.

Finally, they let $S(t)$ be the average link strength, calculated by averaging $S_{i j}(t)$ over all pairs $(i,j)$ where $i$ is a node in the El Niño basin and $j$ is a node outside.

They compute $S(t)$ for every 10th day between January 1950 and November 2013. When $S(t)$ goes over 2.82, and the Niño 3.4 index is not already over 0.5°C, they predict an El Niño in the next calendar year.
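To pin down the recipe, here is a compact R sketch of the link strength for a single pair of nodes. It is only my reading of the steps above, not Graham's replication code (which handles the whole grid). Ti and Tj are assumed to be daily anomaly vectors aligned in time, t is a day index with at least 565 days of history (365 for the running means plus 200 for the largest lag), and the normalization is the simpler one discussed in the next section.

# Running mean of the last 365 values of x, ending at day t.
rmean <- function(x, t) mean(x[(t - 364):t])

# Time-delayed cross-correlation C_{i,j}^t(-tau) for tau >= 0:
# <Ti(t) Tj(t - tau)> - <Ti(t)><Tj(t - tau)>, divided by the "simple" standard
# deviations sqrt(<T^2> - <T>^2).
cross.cor <- function(Ti, Tj, t, tau) {
  cov  <- mean(Ti[(t - 364):t] * Tj[(t - tau - 364):(t - tau)]) -
          rmean(Ti, t) * rmean(Tj, t - tau)
  sd.i <- sqrt(rmean(Ti^2, t) - rmean(Ti, t)^2)
  sd.j <- sqrt(rmean(Tj^2, t - tau) - rmean(Tj, t - tau)^2)
  cov / (sd.i * sd.j)
}

# Link strength S_ij(t): scan |C| over lags -200..200 days (a negative lag means
# the roles of i and j are swapped), then take (max - mean) / standard deviation.
link.strength <- function(Ti, Tj, t, maxlag = 200) {
  c.ij <- sapply(0:maxlag, function(tau) abs(cross.cor(Ti, Tj, t, tau)))
  c.ji <- sapply(1:maxlag, function(tau) abs(cross.cor(Tj, Ti, t, tau)))
  cvals <- c(c.ij, c.ji)
  (max(cvals) - mean(cvals)) / sd(cvals)
}

The average link strength $S(t)$ is then just the mean of link.strength over all pairs $(i,j)$ with $i$ in the El Niño basin and $j$ outside it.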

There’s more to say about their methods. We’d like you to help us check their work and improve it. Soon I want to show you Graham Jones’ software for replicating their calculations! But right now I just want to conclude by:

• mentioning a potential problem in the math, and

• telling you where to get the data used by Ludescher et al.

### Mathematical nuances

Ludescher et al normalize the time-delayed cross-covariance in a somewhat odd way. They claim to divide it by

$\sqrt{\Big{\langle} (T_i(t) - \langle T_i(t)\rangle)^2 \Big{\rangle}} \; \sqrt{\Big{\langle} (T_j(t-\tau) - \langle T_j(t-\tau)\rangle)^2 \Big{\rangle}}$

This is a strange thing, since it has nested angle brackets. The angle brackets are defined as a running average over the 365 days, so this quantity involves data going back twice as long: 730 days. Furthermore, the ‘link strength’ involves the above expression where $\tau$ goes up to 200 days.

So, taking their definitions at face value, Ludescher et al could not actually compute their ‘link strength’ until 930 days after the surface temperature data first starts at the beginning of 1948. That would be late 1950. But their graph of the link strength starts at the beginning of 1950!

Perhaps they actually normalized the time-delayed cross-covariance by dividing it by this:

$\sqrt{\big{\langle} T_i(t)^2 \big{\rangle} - \big{\langle} T_i(t)\big{\rangle}^2} \; \sqrt{\big{\langle} T_j(t-\tau)^2 \big{\rangle} - \big{\langle} T_j(t-\tau)\big{\rangle}^2}$

This simpler expression avoids nested angle brackets, and it makes more sense conceptually. It is the standard deviation of $T_i(t)$ over the last 365 days, times the standard deviation of $T_j(t-\tau)$ over the last 365 days.

As Nadja Kutz noted, the expression written by Ludescher et al does not equal this simpler expression, since:

$\Big{\langle} T_i(t) \; \langle T_i(t) \rangle \Big{\rangle} \neq \big{\langle} T_i(t) \big{\rangle} \; \big{\langle} T_i(t) \big{\rangle}$

The reason is that

$\begin{array}{ccl} \Big{\langle} T_i(t) \; \langle T_i(t) \rangle \Big{\rangle} &=& \displaystyle{ \frac{1}{365} \sum_{d = 0}^{364} T_i(t-d) \langle T_i(t-d) \rangle} \\ \\ &=& \displaystyle{ \frac{1}{365} \sum_{d = 0}^{364} \frac{1}{365} \sum_{D = 0}^{364} T_i(t-d) T_i(t-d-D)} \end{array}$

which is generically different from

$\Big{\langle} \langle T_i(t) \rangle \;\langle T_i(t) \rangle \Big{\rangle} =$

$\displaystyle{ \frac{1}{365} \sum_{D = 0}^{364} (\frac{1}{365} \sum_{d = 0}^{364} T_i(t-d-D))(\frac{1}{365} \sum_{d = 0}^{364} T_i(t-d-D) ) }$

since the terms in the latter expression contain products $T_i(t-364-364)T_i(t-364-364)$ that can’t appear in the former.

Moreover:

$\begin{array}{ccl} \Big{\langle} (T_i(t) - \langle T_i(t) \rangle)^2 \Big{\rangle} &=& \Big{\langle} T_i(t)^2 - 2 T_i(t) \langle T_i(t) \rangle + \langle T_i(t) \rangle^2 \Big{\rangle} \\ \\ &=& \langle T_i(t)^2 \rangle - 2 \big{\langle} T_i(t) \langle T_i(t) \rangle \big{\rangle} + \big{\langle} \langle T_i(t) \rangle^2 \big{\rangle} \end{array}$

But since $\big{\langle} T_i(t) \langle T_i(t) \rangle \big{\rangle} \neq \big{\langle} \langle T_i(t) \rangle \; \langle T_i(t) \rangle \big{\rangle},$ as was just shown, those terms do not cancel out in the above expression. In particular, this means that

$-2 \big{\langle} T_i(t) \langle T_i(t) \rangle \big{\rangle} + \big{\langle} \langle T_i(t) \rangle \langle T_i(t) \rangle \big{\rangle}$

contains terms $T_i(t-364-364)$ which do not appear in $\langle T_i(t)\rangle^2,$ hence

$\Big{\langle} (T_i(t) - \langle T_i(t) \rangle)^2 \Big{\rangle} \neq \langle T_i(t)^2\rangle - \langle T_i(t)\rangle^2$

So at least for the case of the standard deviation it is clear that those two definitions are not the same for a running mean. For the covariances this would still need to be shown.
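A quick numerical check makes this vivid. The following sketch is my own (with a made-up autocorrelated toy series, not temperature data); it computes both versions of the variance with 365-day running means and shows that they generally disagree:

# For a running mean <.> over a window, <(x - <x>)^2> generally differs
# from <x^2> - <x>^2, unlike for a true expectation.
set.seed(1)
x <- as.numeric(arima.sim(model = list(ar = 0.9), n = 2000))  # autocorrelated toy series

runmean <- function(v, w = 365) stats::filter(v, rep(1/w, w), sides = 1)

m  <- runmean(x)            # <x>, running mean over the last 365 values
m2 <- runmean(x^2)          # <x^2>
v1 <- runmean((x - m)^2)    # "nested" version: <(x - <x>)^2>
v2 <- m2 - m^2              # "simple" version: <x^2> - <x>^2

t0 <- 1500                  # a day with enough history for both versions
c(nested = v1[t0], simple = v2[t0])   # the two values generally disagree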

### Surface air temperatures

Remember that $\tilde{T}_i(t)$ is the average surface air temperature at the grid point $i$ on day $t.$ You can get these temperatures from here:

• Earth System Research Laboratory, NCEP Reanalysis Daily Averages Surface Level, or ftp site.

These sites will give you worldwide daily average temperatures on a 2.5° latitude × 2.5° longitude grid (144 × 73 grid points), from 1948 to now. The website will help you get data from within a chosen rectangle in a grid, for a chosen time interval. Alternatively, you can use the ftp site to download temperatures worldwide one year at a time. Either way, you’ll get ‘NetCDF files’—a format we will discuss later, when we get into more details about programming!

### Niño 3.4

Niño 3.4 is the area-averaged sea surface temperature anomaly in the region 5°S-5°N and 170°-120°W. You can get Niño 3.4 data here:


Niño 3.4 data since 1870 calculated from the HadISST1, NOAA. Discussed in N. A. Rayner et al, Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century, J. Geophys. Res. 108 (2003), 4407.

You can also get Niño 3.4 data here:

Monthly Niño 3.4 index, Climate Prediction Center, National Weather Service.

The actual temperatures in Celsius are close to those at the other website, but the anomalies are rather different, because they’re computed in a way that takes global warming into account. See the website for details.

Niño 3.4 is just one of several official regions in the Pacific:

• Niño 1: 80°W-90°W and 5°S-10°S.

• Niño 2: 80°W-90°W and 0°S-5°S

• Niño 3: 90°W-150°W and 5°S-5°N.

• Niño 3.4: 120°W-170°W and 5°S-5°N.

• Niño 4: 160°E-150°W and 5°S-5°N.

• Kevin E. Trenberth, The definition of El Niño, Bulletin of the American Meteorological Society 78 (1997), 2771–2777.

## El Niño Project (Part 2)

24 June, 2014

Before we dive into the exciting world of El Niño prediction, and ways that you can help, let’s have a very very basic crash course on the physics of El Niño.

El Niños are still rather mysterious. But that doesn’t mean we should ignore what the experts know, or suspect.

### The basics

Winds called trade winds blow west across the tropical Pacific, from the Americas to Asia. During La Niña years, water at the ocean’s surface moves along with the wind, warming up in the sunlight as it travels. So, warm water collects at the ocean’s surface off the coast of Asia. This creates more clouds and rainstorms there.

Meanwhile, since surface water is being dragged west by the wind, cold water from below gets pulled up to take its place in the eastern Pacific, off the coast of South America.

So, the temperature at the ocean’s surface looks like this:

This situation is actually reinforced by a feedback loop. Since the ocean’s surface is warmer near Asia, it heats the air and makes it rise. This helps the trade winds blow toward Asia: they go there to fill the ‘gap’ left by rising air.

Of course, you should be wondering: why do the trade winds blow west in the first place?

Without an answer to this, the story so far would work just as well if we switched the words ‘west’ and ‘east’. That wouldn’t mean the story is wrong. It might just mean that there were two stable states of the Earth’s climate: a La Niña state where the trade winds blow west, and another state—say, the El Niño—where they blow east. One could imagine a world permanently stuck in one of these phases. Or perhaps it could flip between these two phases for some reason.

Something roughly like the last choice is actually true. But it’s not so simple: there’s not a complete symmetry between west and east!

Why not? Mainly because the Earth is turning to the east. Air near the equator warms up and rises, so new air from more northern or southern regions moves in to take its place. But because the Earth is fatter at the equator, the equator is moving faster to the east. So, this new air from other places is moving less quickly by comparison… so as seen by someone standing on the equator, it blows west. This is called the Coriolis effect, and it produces winds like this:

Beware: a wind that blows to the west is called an easterly. So the westward-blowing trade winds I’m talking about are called "northeasterly trades" and "southeasterly trades" on this picture.

It’s also good to remember that the west Pacific touches the part of Asia also called the ‘Far East’, while the east Pacific touches the part of America also called the ‘West Coast’. So, it’s easy to get confused! If you find yourself getting confused, just repeat this sentence:

The easterlies blow west from West Coast to Far East.

Everything will instantly become much clearer.

Terminology aside, the story so far should be clear. The trade winds have a good intrinsic reason to blow west, but in the La Niña phase they’re also part of a feedback loop where they make the western Pacific warmer… which in turn helps the trade winds blow west.

But now comes an El Niño! Now for some reason the westward winds weaken. This lets the built-up warm water in the western Pacific slosh back east. And with weaker westward winds, less cold water is pulled up to the surface in the eastern Pacific. So, the eastern Pacific warms up. This makes for more clouds and rain in the eastern Pacific—that’s when we get floods in Southern California. And with the ocean warmer in the eastern Pacific, hot air rises there, which tends to counteract the westward winds even more.

In other words, all the feedbacks reverse themselves! Here’s how it looked in the big El Niño of 1997:

But note: the trade winds never mainly blow east. Even during an El Niño they still blow west, just a bit less. So, the climate is not flip-flopping between two symmetrical alternatives. It’s flip-flopping between two asymmetrical alternatives.

Here’s how it goes! The vertical height of the ocean is exaggerated here to show how water piles up:

Here we see the change in trade winds and ocean currents:

By the way, you can click on any of the pictures to get more information.

### But why?

One huge remaining question is: why do the trade winds weaken? We could also ask the same question about the start of the La Niña phase: why do the trade winds get stronger then?

The short answer is: nobody knows! At least there’s no one story that everyone agrees on. There are actually several stories… and perhaps more than one of them is true. So, at this point it is worthwhile revisiting some actual data:

The top graph shows variations in the water temperature of the tropical Eastern Pacific ocean. When it’s hot we have El Niños: those are the red hills in the top graph. The blue valleys are La Niñas. Note that it’s possible to have two El Niños in a row without an intervening La Niña, or vice versa!

The bottom graph shows the Southern Oscillation Index or SOI. This is basically the air pressure in Tahiti minus the air pressure in Darwin, Australia, divided by its standard deviation.

So, when the SOI is high, the air pressure is higher in the east Pacific than in the west Pacific. This is what we expect in an La Niña: that’s why the westward trade winds are strong then! Conversely, the SOI is low in the El Niño phase. This variation in the SOI is called the Southern Oscillation.

If you look at the graphs above, you’ll see how one looks almost like an upside-down version of the other. So, the El Niño/La Niña cycle is tightly linked to the Southern Oscillation.

Another thing you’ll see from these graphs is that the ENSO cycle is far from perfectly periodic! Here’s a graph of the Southern Oscillation Index going back a lot further:

So, there’s something inherently irregular about this oscillation. It could be chaotic—meaning that tiny changes amplify as time goes by, making long-term prediction impossible. It could be noisy—meaning that the randomness is mainly due to outside influences. It could be somewhere in between! But nobody is sure.

The graph above was made by William Kessler, an expert on El Niño. His FAQs are worth a look:

• William Kessler, El Niño: How it works, how we observe it.

He describes some theories about why an El Niño starts, and why it ends. These theories involve three additional concepts:

• The thermocline is the border between the warmer surface water in the ocean and the cold deep water, 100 to 200 meters below the surface. During the La Niña phase, warm water is blown to the western Pacific, and cold water is pulled up to the surface of the eastern Pacific. So, the thermocline becomes deeper in the west than the east:

When an El Niño occurs, the thermocline flattens out:

• Oceanic Rossby waves are very low-frequency waves in the ocean’s surface and thermocline. At the ocean’s surface they are only 5 centimeters high, but hundreds of kilometers across. The surface waves are mirrored by waves in the thermocline, which are much taller, 10-50 meters in height. When the surface goes up, the thermocline goes down!

• The Madden-Julian oscillation or MJO is the largest form of variability in the tropical atmosphere on time scales of 30-90 days. It’s a pulse that moves east across the Indian Ocean and Pacific ocean at 4-8 meters/second. It manifests itself as patches of anomalously high rainfall and also anomalously low rainfall. Strong Madden-Julian Oscillations are often seen 6-12 months before an El Niño starts!

With this bit of background, I hope you’re ready for what Kessler wrote in his El Niño FAQ:

There are two main theories at present. The first is that the event is initiated by the reflection from the western boundary of the Pacific of an oceanic Rossby wave (type of low-frequency planetary wave that moves only west). The reflected wave is supposed to lower the thermocline in the west-central Pacific and thereby warm the sea surface temperature by reducing the efficiency of upwelling to cool the surface. Then that makes winds blow towards the (slightly) warmer water and really start the event. The nice part about this theory is that the oceanic Rossby waves can be observed for months before the reflection, which implies that El Niño is predictable.

The other idea is that the trigger is essentially random. The tropical convection (organized large-scale thunderstorm activity) in the rising air tends to occur in bursts that last for about a month, and these bursts propagate out of the Indian Ocean (known as the Madden-Julian Oscillation). Since the storms are geostrophic (rotating according to the turning of the earth, which means they rotate clockwise in the southern hemisphere and counter-clockwise in the north), storm winds on the equator always blow towards the east. If the storms are strong enough, or last long enough, then those eastward winds may be enough to start the sloshing. But specific Madden-Julian Oscillation events are not predictable much in advance (just as specific weather events are not predictable in advance), and so to the extent that this is the main element, then El Niño will not be predictable.

In my opinion both these two processes can be important in different El Niños. Some models that did not have the MJO storms were successful in predicting the events of 1986-87 and 1991-92. That suggests that the Rossby wave part was a main influence at that time. But those same models have failed to predict the events since then, and the westerlies have appeared to come from nowhere. It is also quite possible that these two general sets of ideas are incomplete, and that there are other causes entirely. The fact that we have very intermittent skill at predicting the major turns of the ENSO cycle (as opposed to the very good forecasts that can be made once an event has begun) suggests that there remain important elements that await explanation.

So it’s complicated!

Next time I’ll talk about a new paper that tries to cut through these complications and predict El Niños more than 6 months in advance, using a simple idea. It’s a great opportunity for programmers to dive in and try to do better. But I think we need to keep the subtleties in mind… at least somewhere in the back of our mind.

## El Niño Project (Part 1)

20 June, 2014

A bunch of Azimuth Project members like to program, so they started the Azimuth Code Project… but now it’s getting more lively! We’re trying to understand and predict the climate phenomenon known as El Niño.

Why? Several reasons:

• It’s the biggest source of variability in the Earth’s climate on time scales between a year and a decade. It causes weather disturbances in many regions, especially near the Pacific Ocean. The last really big one happened in 1997-1998, and we’re waiting for the next.

• It’s hard to predict for more than 6 months in advance. It’s not periodic: it’s a quasi-periodic phenomenon that occurs across the tropical Pacific Ocean every 3 to 7 years.

• It matters for global warming. A lot of heat gets stored in the ocean, and a lot comes back into the atmosphere during an El Niño. So, the average surface air temperature of the Earth may reach a new high when the next El Niño comes.

• In February 2014, a paper in Proceedings of the National Academy of Sciences caused a stir by claiming to be able to predict the next El Niño more than 6 months in advance using ideas from network theory. Moreover, it claimed an El Niño would start in late 2014 with a 75% probability.

• The math involved in this paper is interesting, not too complicated, and maybe we can improve on it. At the very least, it raises a lot of questions worth studying. And it’s connected to network theory, one of the Azimuth Project’s specialties!

We are already hard at work on this project. We could use help from computer programmers, mathematicians, and physicists: there is lots to do! But it makes sense to start by explaining the issues and what we’ve done so far. We’ll do that in a series of posts here.

This first post will not get into many details. Instead, I just want to set the stage with some basic information about El Niño.

### El Niño and La Niña

This animation produced by the Australian Bureau of Meteorology shows how the cycle works:

During La Niña years, trade winds blow strongly across the Pacific Ocean from the Americas to Asia. So, warm surface water gets pushed toward Asia. The warmer ocean there creates more clouds and rain in that part of the world. The other side of the Pacific gets cooler, so there is less rain in many parts of the Americas.

During El Niño years, trade winds in the tropical Pacific weaken, and blobs of warm surface water move back toward the Americas. So, the eastern part of the Pacific warms up. We generally get more rain in the Americas… but less in Asia.

### ENSO

The cycle of El Niños and La Niñas is often called the El Niño/Southern Oscillation or ENSO. Why? Because this cycle is linked to the Southern Oscillation: an oscillation in the difference in air pressure between the eastern and western Pacific:

The top graph shows variations in the water temperature of the tropical eastern Pacific ocean: when it’s hot we have an El Niño. The bottom graph shows the air pressure in Tahiti minus the air pressure in Darwin, Australia — up to a normalization constant, this is called the Southern Oscillation Index, or SOI. If you stare at the graphs a while, you’ll see they’re quite strongly correlated—or more precisely, anticorrelated, since one tends to go up when the other goes down. So, remember:

A big negative SOI goes along with an El Niño!

There are other ways besides the SOI to tell if an El Niño is happening. We’ll talk later about these quantities, how they’re defined, how you can get the data online, what we’ve done with this data, and what we want to do.

### Is a big El Niño coming?

To conclude, I just want you to watch this short movie. NASA’s Jason-2 satellite has detected blobs of hot water moving east toward America! This has made some scientists—not just those using network theory—suspect a big El Niño is on its way, perhaps a repeat of the one that started in 1997.

On the other hand, on June 17th the National Oceanic and Atmospheric Administration (NOAA) said that trends are now running “counter to typical El Niño development”. So we’ll have to wait and see… and meanwhile, try to predict!

### References

If you can’t wait to dive in, start here:

Experiments in El Niño detection and prediction, Azimuth Forum.

To join this conversation, join the forum by following these instructions:

This is the paper that got us excited:

• Josef Ludescher, Avi Gozolchiani, Mikhail I. Bogachev, Armin Bunde, Shlomo Havlin, and Hans Joachim Schellnhuber, Very early warning of next El Niño, Proceedings of the National Academy of Sciences, February 2014.

A lot of the methodology is explained here:

• Josef Ludescher, Avi Gozolchiani, Mikhail I. Bogachev, Armin Bunde, Shlomo Havlin, and Hans Joachim Schellnhuber, Improved El Niño forecasting by cooperativity detection, Proceedings of the National Academy of Sciences, 30 May 2013. (For more discussion, go to the Azimuth Forum.)

### The series so far

El Niño project (part 1): basic introduction to El Niño and our project here.

El Niño project (part 2): introduction to the physics of El Niño.

El Niño project (part 3): summary of the work of Ludescher et al.

El Niño project (part 4): how Graham Jones replicated the work by Ludescher et al, using software written in R.

El Niño project (part 5): how to download R and use it to get files of climate data.

El Niño project (part 6): Steve Wenner’s statistical analysis of the work of Ludescher et al.

El Niño project (part 7): the definition of El Niño.

El Niño project (part 8): Berezin et al on the stability of climate networks.

## Warming Slowdown? (Part 2)

5 June, 2014

guest post by Jan Galkowski

### 5. Trends Are Tricky

Trends as a concept are easy. But trends as objective measures are slippery. Consider the Keeling Curve, the record of atmospheric carbon dioxide concentration first begun by Charles Keeling in the 1950s and continued in the face of great obstacles. This curve is reproduced in Figure 8, presented first in its original form and then decomposed into three parts: an annual sinusoidal variation, a linear trend, and a stochastic remainder.

Figure 8. Keeling CO2 concentration curve at Mauna Loa, Hawaii, showing original data and its decomposition into three parts, a sinusoidal annual variation, a linear trend, and a stochastic residual.

The question is, which component represents the true trend, long term or otherwise? Are linear trends superior to all others? The importance of a trend is tied to the use to which it will be put. A pair of trends, like the sinusoidal and the random residual of the Keeling curve, might be more important for predicting its short term movements. On the other hand, explicating the long term behavior of the system being measured might feature the large scale linear trend, with the seasonal trend and random variations being but distractions.
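
For readers who want to play with this decomposition themselves, here is a small R sketch. It uses R’s built-in `co2` series (monthly Mauna Loa CO2, 1959-1997), so it only reproduces the flavor of Figure 8, not the exact figure; `stl()` fits a loess trend rather than a strictly linear one, so a separate linear regression is shown for the linear component:

```r
# Decompose the built-in Mauna Loa CO2 series into seasonal, trend, and remainder.
fit <- stl(co2, s.window = "periodic")
plot(fit)                       # panels: data, seasonal, trend, remainder

# For a strictly linear trend component, regress on time and inspect the slope.
t   <- as.numeric(time(co2))
lin <- lm(co2 ~ t)
coef(lin)["t"]                  # ppm per year for the linear component
```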

Consider the global surface temperature anomalies of Figure 5 again. What are some ways of determining trends? First, note that by “trends” what’s really meant are slopes. In the case where there are many places to estimate slopes, there are many slopes. When, for example, a slope is estimated by fitting a line to all the points, there’s just a single slope such as in Figure 9. Local linear trends can be estimated from pairs of points in differing sizes of neighborhoods, as depicted in Figures 10 and 11. These can be averaged, if you like, to obtain an overall trend.

Figure 9. Global surface temperature anomalies relative to a 1950-1980 baseline, with long term linear trend atop.

Figure 10. Global surface temperature anomalies relative to a 1950-1980 baseline, with randomly placed trends from local linear fits having 5 year support atop.

Figure 11. Global surface temperature anomalies relative to a 1950-1980 baseline, with randomly placed trends from local linear fits having 10 year support atop.

Lest the reader think constructing lots of linear trends on varying neighborhoods is somehow crude, note it has a noble history, being used by Boscovich to estimate Earth’s ellipticity about 1750, as reported by Koenker.
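Here is a minimal R sketch of the local linear trend idea behind Figures 9-11: fit straight lines on randomly placed windows of fixed width and collect the slopes. The anomaly series below is synthetic, made up purely for illustration; with a real anomaly series in `anom` and years in `yr` the procedure is the same. The objects `slopes5`, `slopes10` and `overall` are reused in later snippets.

```r
set.seed(1)
yr   <- 1880:2012
anom <- cumsum(rnorm(length(yr), mean = 0.007, sd = 0.1))   # synthetic anomalies

# Slopes of straight-line fits on randomly placed windows of 'width' years.
local_slopes <- function(yr, anom, width = 5, n = 200) {
  starts <- sample(seq_len(length(yr) - width + 1), n, replace = TRUE)
  sapply(starts, function(s) {
    idx <- s:(s + width - 1)
    unname(coef(lm(anom[idx] ~ yr[idx]))[2])   # slope, degrees C per year
  })
}

slopes5  <- local_slopes(yr, anom, width = 5)
slopes10 <- local_slopes(yr, anom, width = 10)
overall  <- unname(coef(lm(anom ~ yr))[2])     # the single Figure 9 style slope
```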

There is, in addition, a question of what to do if local intervals for fitting the little lines overlap, since these are then (on the face of it) not independent of one another. There are a number of statistical devices for making them independent. One way is to do clever kinds of random sampling from a population of linear trends. Another way is to shrink the intervals until they are infinitesimally small, and, so, necessarily independent. That definition is just the point slope of a curve going through the data, or its first derivative. Numerical methods for estimating these exist—and to the degree they succeed, they obtain estimates of the derivative, even if in doing so they might use finite intervals.

One good way of estimating derivatives involves using a smoothing spline, as sketched in Figure 6, and estimating the derivative(s) of that. Such an estimate of the derivative is shown in Figure 12, where the instantaneous slope is plotted in orange atop the data of Figure 6. The value of the derivative should be read using the scale to the right of the graph. The value to the left shows, as before, temperature anomaly in degrees. The cubic spline itself is plotted in green in that figure. Here its smoothing parameter is determined by generalized cross-validation, a principled means of taking the subjectivity out of the choice of smoothing parameter. That is explained a bit more in the caption for Figure 12. (See also Cr1979.)

Figure 12. Global surface temperature anomalies relative to a 1950-1980 baseline, with instantaneous numerical estimates of derivatives in orange atop, with the scale for the derivative to the right of the chart. Note how the value of the first derivative never drops below zero although its magnitude decreases as time approaches 2012. Support for the smoothing spline used to calculate the derivatives is obtained using generalized cross validation. Such cross validation is used to help reduce the possibility that a smoothing parameter is chosen to overfit a particular data set, so the analyst could expect that the spline would apply to as yet uncollected data more than otherwise. Generalized cross validation is a particularly clever way of doing that, although it is abstract.
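
A sketch of the Figure 12 calculation, run on the synthetic `yr` and `anom` from the previous snippet (any real anomaly series would do): fit a smoothing spline whose smoothing parameter is chosen by generalized cross-validation, then evaluate its first derivative.

```r
sp <- smooth.spline(yr, anom, cv = FALSE)    # cv = FALSE selects GCV
d1 <- predict(sp, x = yr, deriv = 1)$y       # instantaneous slope, degrees C per year

plot(yr, anom, type = "l", ylab = "anomaly")
lines(predict(sp, yr), col = "green")        # the spline itself
# d1 would be plotted in orange on its own right-hand scale, as in Figure 12.
```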

What else might we do?

We could go after a really good approximation to the data of Figure 5. One possibility is to use the Bayesian Rauch-Tung-Striebel (“RTS”) smoother to get a good approximation for the underlying curve and estimate the derivatives of that. This is a modification of the famous Kalman filter, the workhorse of much controls engineering and signals work. What that means and how these work is described in an accompanying inset box.

Using the RTS smoother demands variances of the signal be estimated as priors. The larger the ratio of the estimate of the observations variance to the estimate of the process variance is, the smoother the RTS solution. And, yes, as the reader may have guessed, that makes the result dependent upon initial conditions, although hopefully educated initial conditions.

Figure 13. Global surface temperature anomalies relative to a 1950-1980 baseline, with fits using the Rauch-Tung-Striebel smoother placed atop, in green and dark green. The former uses a prior variance of 3 times that of the Figure 5 data corrected for serial correlation. The latter uses a prior variance of 15 times that of the Figure 5 data corrected for serial correlation. The instantaneous numerical estimates of the first derivative derived from the two solutions are shown in orange and brown, respectively, with their scale of values on the right hand side of the chart. Note the two solutions are essentially identical. If compared to the smoothing spline estimate of Figure 12, the derivative has roughly the same shape, but is shifted lower in overall slope, and the drift up and below a mean value is less.

The RTS smoother results for two process variance values, a low of 0.118 ± 0.002 and a high of 0.59 ± 0.02, are shown in Figure 13. These are 3 and 15 times the decorrelated variance value for the series of 0.039 ± 0.001, estimated using the long term variance for this series and others like it, corrected for serial correlation. One reason for using two estimates of the process variance is to see how much difference that makes. As can be seen from Figure 13, it does not make much.

Combining all six methods of estimating trends results in Figure 14, which shows the overprinted densities of slopes.

Figure 14. In a stochastic signal, slopes are random variables. They may be correlated. Fitting of smooth models can be thought of as a way of sampling these random variables. Here, empirical probability density functions for slopes of temperatures versus years are displayed, using each of the 6 methods of estimating slopes. Empirical probability densities are obtained using kernel density estimation. These are preferred to histograms by statisticians because the latter can distort the density due to bin size and boundary effects. The lines here correspond to: local linear fits with 5 years separation (dark green trace), the local linear fits with 10 years separation (green trace), the smoothing spline (blue trace), the RTS smoother with variance 3 times the corrected estimate for the data as the prior variance (orange trace, mostly hidden by brown trace), and the RTS smoother with 15 times the corrected estimate for the data (brown trace). The blue trace can barely be seen because the RTS smoother with the 3 times variance lies nearly atop it. The slope value for a linear fit to all the points is also shown (the vertical black line).

Note the spread of possibilities given by the 5 year local linear fits. The 10 year local linear fits, the spline, and the RTS smoother fits have their mode in the vicinity of the overall slope. The density for the 10 year local linear fits has broader support, meaning it admits more negative slopes in the range of temperature anomalies observed. The RTS smoother results have peaks slightly below those for the spline, the 10 year local linear fits, and the overall slope. The kernel density estimator allows the possibility of probability mass below zero, even though the spline and the two RTS smoother fits never exhibit slopes below zero. This is a Bayesian-like estimator, since the prior is the real line.
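
A sketch of the Figure 14 construction, assuming the `slopes5`, `slopes10`, `d1` and `overall` objects from the earlier snippets are in hand (the RTS smoother slopes could be added the same way):

```r
# Kernel density estimates of the slope populations from the different estimators.
plot(density(slopes5), col = "darkgreen", main = "Empirical slope densities",
     xlab = "degrees C per year")
lines(density(slopes10), col = "green")
lines(density(d1), col = "blue")     # spline-derivative slopes
abline(v = overall)                  # single overall linear slope (vertical line)
```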

Local linear fits to HadCRUT4 time series were used by Fyfe, Gillett, and Zwiers in their 2013 paper and supplement. We do not know the computational details of those trends, since they were not published, possibly due to Nature Climate Change page count restrictions. Those details matter. From these calculations, which, admittedly, are not as comprehensive as those by Fyfe, Gillett, and Zwiers, we see that robust estimators of trends in temperature during the observational record show these are always positive, even if the magnitudes vary. The RTS smoother solutions suggest slopes in recent years are near zero, providing a basis for questioning whether or not there is a warming “hiatus”.

The Rauch-Tung-Striebel smoother is an enhancement of the Kalman filter. Let $y_{\kappa}$ denote a set of univariate observations at equally spaced and successive time steps $\kappa$. Describe these as follows:

$y_{\kappa} = \mathbf{G} \mathbf{x}_{\kappa} + \varepsilon_{\kappa}$   (5.1)

$\mathbf{x}_{\kappa + 1} = \mathbf{H} \mathbf{x}_{\kappa} + \boldsymbol\gimel_{\kappa}$   (5.2)

$\varepsilon_{\kappa} \sim \mathcal{N}(0, \sigma^{2}_{\varepsilon})$   (5.3)

$\boldsymbol\gimel_{\kappa} \sim \mathcal{N}(0, \boldsymbol\Sigma^{2}_{\eta})$   (5.4)

The multivariate $\mathbf{x}_{\kappa}$ is called a state vector for index $\kappa$. $\mathbf{G}$ and $\mathbf{H}$ are given, constant matrices. Equations (5.3) and (5.4) say that the noise components of observations and states are distributed as zero mean Gaussian random variables with variance $\sigma^{2}_{\varepsilon}$ and covariance $\boldsymbol\Sigma^{2}_{\eta}$, respectively. This simple formulation in practice has great descriptive power, and is widely used in engineering and data analysis. For instance, it is possible to cast autoregressive moving average models (“ARMA”) in this form. (See Kitagawa, Chapter 10.)

The key idea is that equation (5.1) describes an observation at time $\kappa$ as the result of a linear regression on coefficients $\mathbf{x}_{\kappa}$, where $\mathbf{G}$ is the corresponding design matrix. Then, the coefficients themselves change with time, using a Markov-like development: a linear regression of the upcoming set of coefficients, $\mathbf{x}_{\kappa+1}$, in terms of the current coefficients, $\mathbf{x}_{\kappa}$, where $\mathbf{H}$ is the design matrix.

For the purposes here, a simple version of this is used, something called a local level model (Chapter 2) and occasionally a Gaussian random walk with noise model (Section 12.3.1). In that instance, $\mathbf{G}$ and $\mathbf{H}$ are not only scalars, they are unity, resulting in the simpler

$y_{\kappa} = x_{\kappa} + \varepsilon_{\kappa}$   (5.5)

$x_{\kappa + 1} = x_{\kappa} + \eta_{\kappa}$   (5.6)

$\varepsilon_{\kappa} \sim \mathcal{N}(0, \sigma^{2}_{\varepsilon})$   (5.7)

$\eta_{\kappa} \sim \mathcal{N}(0, \sigma^{2}_{\eta})$   (5.8)

with scalar variances $\sigma^{2}_{\varepsilon}$ and $\sigma^{2}_{\eta}$.

In either case, the Kalman filter is a way of calculating $\mathbf{x}_{\kappa}$, given $y_{1}, y_{2}, \dots, y_{n}$, values for $\mathbf{G}$ and $\mathbf{H}$, and estimates for $\sigma^{2}_{\varepsilon}$ and $\sigma^{2}_{\eta}$. Choices for $\mathbf{G}$ and $\mathbf{H}$ are considered a model for the data. Choices for $\sigma^{2}_{\varepsilon}$ and $\sigma^{2}_{\eta}$ are based upon experience with $y_{\kappa}$ and the model. In practice, and within limits, the bigger the ratio of $\sigma^{2}_{\varepsilon}$ to $\sigma^{2}_{\eta}$, the smoother the solution for $\mathbf{x}_{\kappa}$ over successive $\kappa$.

Now, the Rauch-Tung-Striebel extension of the Kalman filter amounts to (a) interpreting it in a Bayesian context, and (b) using that interpretation and Bayes Rule to retrospectively update $\mathbf{x}_{\kappa-1}, \mathbf{x}_{\kappa-2}, \dots, \mathbf{x}_{1}$ with the benefit of information through $y_{\kappa}$ and the current state $\mathbf{x}_{\kappa}$. Details won’t be provided here, but are described in depth in many texts, such as Cowpertwait and Metcalfe, Durbin and Koopman, and Särkkä.

Finally, a comment on the subjectivity in the choice of the ratio of variances, mentioned in Section 5: “smoother” here has a specific meaning. If this ratio is smaller, the RTS solution tracks the signal more closely, meaning its short term variability is higher. A small ratio also has implications for forecasting, increasing the prediction variance.
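
Here is a minimal R sketch of the local level model described in the box above, filtered with a scalar Kalman filter and then smoothed backwards with the Rauch-Tung-Striebel recursion. It illustrates the idea only, not the actual calculation behind Figure 13; `r` and `q` play the roles of $\sigma^{2}_{\varepsilon}$ and $\sigma^{2}_{\eta}$, and the input series can be any anomaly vector (for instance the synthetic `anom` from the earlier snippets).

```r
rts_local_level <- function(y, r, q) {
  n <- length(y)
  m_pred <- P_pred <- m_filt <- P_filt <- numeric(n)
  m <- y[1]; P <- 10 * r                       # rough, diffuse starting state
  for (k in seq_len(n)) {                      # forward Kalman filter pass
    m_pred[k] <- m                             # predicted level x_{k|k-1}
    P_pred[k] <- P + q
    K         <- P_pred[k] / (P_pred[k] + r)   # Kalman gain
    m         <- m_pred[k] + K * (y[k] - m_pred[k])
    P         <- (1 - K) * P_pred[k]
    m_filt[k] <- m; P_filt[k] <- P
  }
  m_sm <- m_filt; P_sm <- P_filt               # backward RTS smoothing pass
  for (k in (n - 1):1) {
    C       <- P_filt[k] / P_pred[k + 1]
    m_sm[k] <- m_filt[k] + C * (m_sm[k + 1] - m_pred[k + 1])
    P_sm[k] <- P_filt[k] + C^2 * (P_sm[k + 1] - P_pred[k + 1])
  }
  list(level = m_sm, var = P_sm)
}

# A larger r/q ratio gives a smoother level estimate, as described in the text:
# sm <- rts_local_level(anom, r = 3 * var(anom), q = var(anom) / 50)
```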

The recent IPCC AR5 WG1 Report sets out the context in its Box TS.3:

Hiatus periods of 10 to 15 years can arise as a manifestation of internal decadal climate variability, which sometimes enhances and sometimes counteracts the long-term externally forced trend. Internal variability thus diminishes the relevance of trends over periods as short as 10 to 15 years for long-term climate change (Box 2.2, Section 2.4.3). Furthermore, the timing of internal decadal climate variability is not expected to be matched by the CMIP5 historical simulations, owing to the predictability horizon of at most 10 to 20 years (Section 11.2.2; CMIP5 historical simulations are typically started around nominally 1850 from a control run). However, climate models exhibit individual decades of GMST trend hiatus even during a prolonged phase of energy uptake of the climate system (e.g., Figure 9.8; Easterling and Wehner, 2009; Knight et al., 2009), in which case the energy budget would be balanced by increasing subsurface-ocean heat uptake (Meehl et al., 2011, 2013a; Guemas et al., 2013).

Owing to sampling limitations, it is uncertain whether an increase in the rate of subsurface-ocean heat uptake occurred during the past 15 years (Section 3.2.4). However, it is very likely that the climate system, including the ocean below 700 m depth, has continued to accumulate energy over the period 1998-2010 (Section 3.2.4, Box 3.1). Consistent with this energy accumulation, global mean sea level has continued to rise during 1998-2012, at a rate only slightly and insignificantly lower than during 1993-2012 (Section 3.7). The consistency between observed heat-content and sea level changes yields high confidence in the assessment of continued ocean energy accumulation, which is in turn consistent with the positive radiative imbalance of the climate system (Section 8.5.1; Section 13.3, Box 13.1). By contrast, there is limited evidence that the hiatus in GMST trend has been accompanied by a slower rate of increase in ocean heat content over the depth range 0 to 700 m, when comparing the period 2003-2010 against 1971-2010. There is low agreement on this slowdown, since three of five analyses show a slowdown in the rate of increase while the other two show the increase continuing unabated (Section 3.2.3, Figure 3.2). [Emphasis added by author.]

During the 15-year period beginning in 1998, the ensemble of HadCRUT4 GMST trends lies below almost all model-simulated trends (Box 9.2 Figure 1a), whereas during the 15-year period ending in 1998, it lies above 93 out of 114 modelled trends (Box 9.2 Figure 1b; HadCRUT4 ensemble-mean trend $0.26\,^{\circ}\mathrm{C}$ per decade, CMIP5 ensemble-mean trend $0.16\,^{\circ}\mathrm{C}$ per decade). Over the 62-year period 1951-2012, observed and CMIP5 ensemble-mean trends agree to within $0.02\,^{\circ}\mathrm{C}$ per decade (Box 9.2 Figure 1c; CMIP5 ensemble-mean trend $0.13\,^{\circ}\mathrm{C}$ per decade). There is hence very high confidence that the CMIP5 models show long-term GMST trends consistent with observations, despite the disagreement over the most recent 15-year period. Due to internal climate variability, in any given 15-year period the observed GMST trend sometimes lies near one end of a model ensemble (Box 9.2, Figure 1a, b; Easterling and Wehner, 2009), an effect that is pronounced in Box 9.2, Figure 1a, because GMST was influenced by a very strong El Niño event in 1998. [Emphasis added by author.]

The contributions of Fyfe, Gillett, and Zwiers (“FGZ”) are to (a) pin down this behavior for a 20 year period using the HadCRUT4 data, and, to my mind, more importantly, (b) to develop techniques for evaluating runs of ensembles of climate models like the CMIP5 suite without commissioning specific runs for the purpose. This, if it were to prove out, would be an important experimental advance, since climate models demand expensive and extensive hardware, and the number of people who know how to program and run them is very limited, possibly a more limiting practical constraint than the hardware.

This is the beginning of a great story, I think, one which both advances an understanding of how our experience of climate is playing out, and how climate science is advancing. FGZ took a perfectly reasonable approach and followed it to its logical conclusion, deriving an inconsistency. There’s insight to be won resolving it.

FGZ try to explicitly model trends due to internal variability. They begin with two equations:

1. $M_{ij}(t) = u^{m}(t) + \text{Eint}_{ij}(t) + \text{Emod}_{i}(t),$
$i = 1, \dots, N^{m}, j= 1, \dots, N_{i}$
2. $O_{k}(t) = u^{o}(t) + \text{Eint}^{o}(t) + \text{Esamp}_{k}(t),$
$k = 1, \dots, N^{o}$

$i$ is the model membership index. $j$ indexes the runs in the $i^{\text{th}}$ model’s ensemble. $k$ runs over bootstrap samples taken from HadCRUT4 observations. Here, $M_{ij}(t)$ and $O_{k}(t)$ are trends calculated using models or observations, respectively. $u^{m}(t)$ and $u^{o}(t)$ denote the “true, unknown, deterministic trends due to external forcing” common to models and observations, respectively. $\text{Eint}_{ij}(t)$ and $\text{Eint}^{o}(t)$ are the perturbations to trends due to internal variability of models and observations. $\text{Emod}_{i}(t)$ denotes error in climate model trends for model $i$. $\text{Esamp}_{k}(t)$ denotes the sampling error in the $k^{\text{th}}$ sample. FGZ assume the $\text{Emod}_{i}(t)$ are exchangeable with each other as well, at least for the same time $t$. (See [Di1977, Di1988, Ro2013c, Co2005] for more on exchangeability.) Note that while the internal variability of climate models $\text{Eint}_{ij}(t)$ varies from model to model, run to run, and time to time, the ‘internal variability of observations’, namely $\text{Eint}^{o}(t)$, is assumed to only vary with time.

The technical innovation FGZ use is to employ bootstrap resampling on the observations ensemble of HadCRUT4 and an ensemble of runs of 38 CMIP5 climate models to perform a two-sample comparison [Ch2008, Da2009]. In doing so, they explicitly assume, in the framework above, exchangeability of models. (Later, in the same work, they also make the same calculation assuming exchangeability of models and observations, an innovation too detailed for this present exposition.)

So, what is a bootstrap? In its simplest form, a bootstrap is a nonparametric, often robust, frequentist technique for sampling the distribution of a function of a set of population parameters, generally irrespective of the nature or complexity of that function, or the number of parameters. Since estimates of the variance of that function are themselves functions of population parameters, assuming the variance exists, the bootstrap can also be used to estimate the precision of the first set of samples, where “precision” is the reciprocal of variance. For more about the bootstrap, see the inset below.

In the case in question here, with FGZ, the bootstrap is being used to determine if the distribution of surface temperature trends as calculated from observations and the distribution of surface temperature trends as calculated from climate models for the same period have in fact similar means. This is done by examining differences of paired trends, one coming from an observation sample, one coming from a model sample, and assessing the degree of discrepancy based upon the variances of the observations trends distribution and of the models trends distribution.
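
Here is a hedged R sketch of that kind of two-sample bootstrap comparison; it is not FGZ’s actual procedure, whose computational details are unpublished, and the trend vectors below are made up purely for illustration (in degrees C per decade):

```r
set.seed(4)
obs_trends   <- rnorm(100, mean = 0.08, sd = 0.02)   # stand-in for ensemble-member trends
model_trends <- rnorm(114, mean = 0.21, sd = 0.10)   # stand-in for climate model run trends

# Bootstrap the difference in mean trend between models and observations.
boot_diff <- replicate(10000,
  mean(sample(model_trends, replace = TRUE)) -
  mean(sample(obs_trends,   replace = TRUE)))
quantile(boot_diff, c(0.025, 0.5, 0.975))   # does the interval exclude zero?
```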

The equations (6.1) and (6.2) can be rewritten:

1. $M_{ij}(t) - \text{Eint}_{ij}(t) = u^{m}(t) + \text{Emod}_{i}(t),$
$i = 1, \dots, N^{m}, j = 1, \dots, N_{i}$
2. $O_{k}(t) - \text{Eint}^{o}(t) = u^{o}(t) + \text{Esamp}_{k}(t),$
$k = 1, \dots, N^{o}$

moving the trends in internal variability to the left, calculated side. Neither $\text{Eint}_{ij}(t)$ nor $\text{Eint}^{o}(t)$ is directly observable. Without some additional assumptions, which are not explicitly given in the FGZ paper, such as

1. $\text{Eint}_{ij}(t) \sim \mathcal{N}(0, \Sigma_{\text{model int}})$
2. $\text{Eint}^{o}(t) \sim \mathcal{N}(0, \Sigma_{\text{obs int}})$

we can’t really be sure we’re seeing $O_{k}(t)$ or $O_{k}(t) - \text{Eint}^{o}(t)$, or at least $O_{k}(t)$ less the mean of $\text{Eint}^{o}(t)$. The same applies to $M_{ij}(t)$ and $\text{Eint}_{ij}(t)$. Here equations (6.5) and (6.6) describe internal variabilities as being multivariate but zero mean Gaussian random variables. $\Sigma_{\text{model int}}$ and $\Sigma_{\text{obs int}}$ are covariances among models and among observations. FGZ essentially say these are diagonal with their statement “An implicit assumption is that sampling uncertainty in [observation trends] is independent of uncertainty due to internal variability and also independent of uncertainty in [model trends]”. They might not be so, but it is reasonable to suppose their diagonals are strong, and that there is a row-column exchange operator on these covariances which can produce banded matrices.

### 7. On Reconciliation

The centerpiece of the FGZ result is their Figure 1, reproduced here as Figure 15. Their conclusion, that climate models do not properly capture surface temperature observations for the given periods, is based upon the significant separation of the red density from the grey density, even when measuring that separation using pooled variances. But, surely, a remarkable feature of these graphs is not only the separation of the means of the two densities, but the marked difference in size of the variances of the two densities.

Why are climate models so much less precise than HadCRUT4 observations? Moreover, why do climate models disagree with one another so dramatically? We cannot tell without getting into CMIP5 details, but the same result could be obtained if the climate models came in three Gaussian populations, each with a variance 1.5 times that of the observations, but mixed together. We could also obtain the same result if, for some reason, the variance of HadCRUT4 was markedly understated.
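
A toy R illustration of that point: three sub-populations, each with 1.5 times the observational variance but different means, pool into a distribution that is both wider and offset, loosely in the spirit of Figure 15. All the numbers here are made up.

```r
set.seed(2)
obs    <- rnorm(1000, mean = 0.08, sd = 0.02)             # "observed" trends
models <- c(rnorm(300, 0.15, 0.02 * sqrt(1.5)),           # three model sub-populations,
            rnorm(300, 0.25, 0.02 * sqrt(1.5)),           # each 1.5x the obs variance,
            rnorm(300, 0.35, 0.02 * sqrt(1.5)))           # mixed together
c(var(obs), var(models))                                  # pooled model variance is far larger
plot(density(models), col = "grey"); lines(density(obs), col = "red")
```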

That brings us back to the comments about HadCRUT4 made at the end of Section 3. HadCRUT4 is noted for “drop outs” in observations, where either the quality of an observation on a patch of Earth was poor or the observation was missing altogether for a certain month in history. (To be fair, both GISS and BEST have months where there is no data available, especially in early years of the record.) It also has incomplete coverage [Co2013]. Whether or not values for patches are imputed in some way, perhaps using spatial kriging, or whether or not supports to calculate trends are adjusted to avoid these omissions are decisions in use of these data which are critical to resolving the question [Co2013, Gl2011].

As seen in Section 5, what trends you get depends a lot on how they are done. FGZ did linear trends. These are nice because means of trends have simple relationships with the trends themselves. On the other hand, confining trend estimation to local linear trends binds these estimates to being only supported by pairs of actual samples, however sparse these may be. This has the unfortunate effect of producing a broadly spaced set of trends which, when averaged, appear to be a single, tight distribution, close to the vertical black line of Figure 14, but erasing all the detail available by estimating the density of trends with a robust function of the first time derivative of the series. FGZ might be improved by using such, repairing this drawback and also making it more robust against HadCRUT4’s inescapable data drops. As mentioned before, however, we really cannot know, because details of their calculations are not available. (Again, this author suspects this fault lies not with FGZ but a matter of page limits.)

In fact, that was indicated by a recent paper from Cowtan and Way, arguing that the limited coverage of HadCRUT4 might explain the discrepancy Fyfe, Gillett, and Zwiers found. In return, Fyfe and Gillett argued that even admitting the corrections for polar regions which Cowtan and Way indicate, the CMIP5 models fall short in accounting for global mean surface temperatures. What could be wrong? In the context of ensemble forecasts depicting future states of the atmosphere, Wilks notes (Section 7.7.1):

Accordingly, the dispersion of a forecast ensemble can at best only approximate the [probability density function] of forecast uncertainty … In particular, a forecast ensemble may reflect errors both in statistical location (most or all ensemble members being well away from the actual state of the atmosphere, but relatively nearer to each other) and dispersion (either under- or overrepresenting the forecast uncertainty). Often, operational ensemble forecasts are found to exhibit too little dispersion …, which leads to overconfidence in probability assessment if ensemble relative frequencies are interpreted as estimating probabilities.

In fact, the IPCC reference, Toth, Palmer and others raise the same caution. It could be that the answer to why the variance of the observational data in the Fyfe, Gillett, and Zwiers graph depicted in Figure 15 is so small is that ensemble spread does not properly reflect the true probability density function of the joint distribution of temperatures across Earth. These might be “relatively nearer to each other” than the true dispersion which climate models are accommodating.

If Earth’s climate is thought of as a dynamical system, and taking note of the suggestion of Kharin that “There is basically one observational record in climate research”, we can do the following thought experiment. Suppose the total state of the Earth’s climate system can be captured at one moment in time, no matter how, and the climate can be reinitialized to that state at our whim, again no matter how. What happens if this is done several times, and then the climate is permitted to develop for, say, exactly 100 years on each “run”? What are the resulting states? Also suppose the dynamical “inputs” from the Sun, as a function of time, are held identical during that 100 years, as are dynamical inputs from volcanic forcings, as are human emissions of greenhouse gases. Are the resulting states copies of one another?

No. Stochastic variability in the operation of climate means these end states will each be somewhat different from one another. Then of what use is the “one observational record”? Well, it is arguably better than no observational record. And, in fact, this kind of variability is a major part of the “internal variability” which is often cited in this literature, including by FGZ.

Setting aside the problems of using local linear trends, FGZ’s bootstrap approach to the HadCRUT4 ensemble is an attempt to imitate these various runs of Earth’s climate. The trouble is, the frequentist bootstrap can only replicate values of observations actually seen. (See inset.) In this case, these replications are those of the HadCRUT4 ensembles. It will never produce values in-between and, as the parameters of temperature anomalies are in general continuous measures, allowing for in-between values seems a reasonable thing to do.

No algorithm can account for a dispersion which is not reflected in the variability of the ensemble. If the dispersion of HadCRUT4 is too small, it could be corrected using ensemble MOS methods (Section 7.7.1.) In any case, underdispersion could explain the remarkable difference in variances of populations seen in Figure 15. I think there’s yet another way.

Consider equations (6.1) and (6.2) again. Recall, here, $i$ denotes the $i^{th}$ model and $j$ denotes the $j^{th}$ run of model $i$. Instead of $k$, however, a bootstrap resampling of the HadCRUT4 ensembles, let $\omega$ run over all the 100 ensemble members provided, let $\xi$ run over the 2592 patches on Earth’s surface, and let $\kappa$ run over the 1967 monthly time steps. Reformulate equations (6.1) and (6.2), instead, as

1. $M_{\kappa} = u_{\kappa} + \sum_{i = 1}^{N^{m}} x_{i} \left(\text{Emod}_{i\kappa} + \text{Eint}_{i\kappa}\right)$
2. $O_{\kappa} = u_{\kappa} + \sum_{\xi = 1}^{2592} \left(x_{0} \text{Eint}^{\zeta}_{\kappa} + x_{\xi} \text{Esamp}_{\xi\kappa}\right)$

Now, $u_{\kappa}$ is a common trend at time tick $\kappa$ and $\text{Emod}_{i\kappa}$ and $\text{Eint}_{i\kappa}$ are deflections from that trend due to modeling error and internal variability in the $i^{\text{th}}$ model, respectively, at time tick $\kappa$. Similarly, $\text{Eint}^{\zeta}_{\kappa}$ denotes deflections from the common trend baseline $u$ due to internal variability as seen by the HadCRUT4 observational data at time tick $\kappa$, and $\text{Esamp}_{\xi\kappa}$ denotes the deflection from the common baseline due to sampling error in the $\xi^{\text{th}}$ patch at time tick $\kappa$. The $x_{\iota}$ are indicator variables. This is the setup for an analysis of variance or ANOVA, preferably a Bayesian one (Sections 14.1.6, 18.1). In equation (7.1), successive model runs $j$ for model $i$ are used to estimate $\text{Emod}_{i\kappa}$ and $\text{Eint}_{i\kappa}$ for every $\kappa$. In equation (7.2), different ensemble members $\omega$ are used to estimate $\text{Eint}^{\zeta}_{\kappa}$ and $\text{Esamp}_{\xi\kappa}$ for every $\kappa$. Coupling the two gives a common estimate of $u_{\kappa}$. There’s considerable flexibility in how model runs or ensemble members are used for this purpose, with opportunities for additional differentiation and the ability to incorporate information about relationships among models or among observations. For instance, models might be described relative to a Bayesian model average [Ra2005]. Observations might be described relative to a common or slowly varying spatial trend, reflecting dependencies among $\xi$ patches. Here, differences between observations and models get explicitly allocated to modeling error and internal variability for models, and to sampling error and internal variability for observations.
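
A very rough frequentist stand-in for this idea, in R (the text argues for a Bayesian version); it only illustrates how variance gets allocated between a shared time trend and per-source deflections, and all the data below are synthetic:

```r
set.seed(5)
ticks   <- 1:60
sources <- c(paste0("model", 1:5), "obs")
df <- expand.grid(time = ticks, source = sources)
u  <- 0.01 * df$time                                       # common trend u_kappa
df$trend <- u + rnorm(nrow(df),
                      sd = ifelse(df$source == "obs", 0.05, 0.15))  # made-up deflections

fit <- aov(trend ~ factor(time) + source, data = df)
summary(fit)   # variance split between the shared time effect and the sources
```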

More work needs to be done to assess the proper virtues of the FGZ technique, even without modification. A device like the one Rohde used to compare BEST temperature observations with HadCRUT4 and GISS, namely supplying the FGZ procedure with synthetic data, would perhaps be the most informative regarding its character. Alternatively, if an ensemble MOS method were devised and applied to HadCRUT4, it might better reflect a true spread of possibilities. Because a dataset like HadCRUT4 records just one of many possible observational records the Earth might have exhibited, it would be useful to have a means of elaborating what those other possibilities were, given the single observational trace.

Regarding climate models, while they will inevitably disagree with a properly elaborated set of observations in the particulars of their statistics, in my opinion the goal should be to match the distributions of solutions from these two instruments of study on their first few moments, by improving both. While statistical equivalence is all that’s sought, we’re not there yet. Assessing parametric uncertainty of observations hand-in-hand with the model builders seems to be a sensible route. Indeed, this is important. In a review of the Cowtan and Way result, one based upon kriging, Kintisch summarizes the situation as reproduced in Table 1, taken from page 348 of the reference [Co2013, Gl2011, Ki2014]:

TEMPERATURE TRENDS, 1997-2012

| Source | Warming ($^{\circ}\,\mathrm{C}$/decade) |
| --- | --- |
| Climate models | 0.102-0.412 |
| NASA data set | 0.080 |
| Cowtan/Way | 0.119 |

Table 1. Getting warmer. New method brings measured temperatures closer to projections. Added in quotation: “Climate models” refers to the CMIP5 series. “NASA data set” is GISS. “HadCRUT data set” is HadCRUT4. “Cowtan/Way” is from their paper. Note values are per decade, not per year.

Note that these estimates of trends, once divided by 10 years/decade to convert to a per year change in temperature, all fall well within the slope estimates depicted in the summary Figure 14. Note, too, how low the HadCRUT trend is.

If the FGZ technique, or any other, can contribute to this elucidation, it is most welcome.

As an example, Lee reports how the GLOMAP model of aerosols was systematically improved using such careful statistical consideration. It seems likely to be a more rewarding way than “black box” treatments. Incidentally, Dr Lindsay Lee’s article was runner-up in the Significance/Young Statisticians Section writers’ competition. It’s great to see bright young minds charging in to solve these problems!

The bootstrap is a general name for a resampling technique, most commonly associated with what is more properly called the frequentist bootstrap. Given a sample of observations, $\mathring{Y} = \{y_{1}, y_{2}, \dots, y_{n}\}$, the bootstrap principle says that in a wide class of statistics and for certain minimum sizes of $n$, the sampling density of a statistic $h(Y)$ from a population of all $Y$, where $\mathring{Y}$ is a single observation, can be approximated by the following procedure. Sample $\mathring{Y}$ $M$ times with replacement to obtain $M$ samples each of size $n$ called $\tilde{Y}_{k}$, $k = 1, \dots, M$. For each $\tilde{Y}_{k}$, calculate $h(\tilde{Y}_{k})$ so as to obtain $H = h_{1}, h_{2}, \dots, h_{M}$. The set $H$ so obtained is an approximation of the sampling density of $h(Y)$ from a population of all $Y$.

Note that because $\mathring{Y}$ is sampled, only elements of that original set of observations will ever show up in any $\tilde{Y}_{k}$. This is true even if $Y$ is drawn from an interval of the real numbers. This is where a Bayesian bootstrap might be more suitable. In a Bayesian bootstrap, the set of possibilities to be sampled are specified using a prior distribution on $Y$ [Da2009, Section 10.5]. A specific observation of $Y$, like $\mathring{Y}$, is used to update the probability density on $Y$, and then values from $Y$ are drawn in proportion to this updated probability. Thus, values in $Y$ never seen in $\mathring{Y}$ might be drawn. Both bootstraps will, under similar conditions, preserve the sampling distribution of $Y$.
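
A minimal R sketch contrasting the frequentist bootstrap with one common Bayesian variant (Rubin’s Bayesian bootstrap, which reweights the observed values with Dirichlet weights rather than drawing genuinely new values, so it is only one version of the idea described in the box). The sample `y` and the statistic (the mean) are arbitrary choices for illustration.

```r
set.seed(3)
y <- rnorm(30)                      # a made-up sample

# Frequentist bootstrap: only values already in y can ever be drawn.
h_freq <- replicate(5000, mean(sample(y, replace = TRUE)))

# Bayesian bootstrap (Rubin 1981): random Dirichlet(1,...,1) weights on the
# observed values, generated here by normalizing exponentials.
h_bayes <- replicate(5000, {
  w <- rexp(length(y)); w <- w / sum(w)
  sum(w * y)
})

c(sd(h_freq), sd(h_bayes))          # the two sampling densities are very similar
```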

### 8. Summary

Various geophysical datasets recording global surface temperature anomalies suggest a slowdown in anomalous global warming from historical baselines. Temperatures are still increasing, just not as fast; much of the media attention is reacting to the second time derivative of temperature, which is negative, rather than to the first time derivative, the rate of increase, which remains positive. Explanations vary. In one important respect, 20 or 30 years is an insufficiently long time to assess the state of the climate system. In another, while the global surface temperature increase is slowing, oceanic temperatures continue to soar, at many depths. Warming might even decrease. None of these seem to pose a challenge to the geophysics of climate, which has substantial support both from experimental science and ab initio calculations. An interesting discrepancy is noted by Fyfe, Gillett, and Zwiers, although their calculation could be improved both by using a more robust estimator for trends, and by trying to integrate out anomalous temperatures due to internal variability in their models, because much of it is not separately observable. Nevertheless, Fyfe, Gillett, and Zwiers may have done the field a great service by making this discrepancy explicit. It enables students of datasets like the important HadCRUT4 to discover an important limitation: the dispersion of such datasets across their ensembles does not properly reflect the set of Earth futures one might wish it did, and so users who read the ensemble as representing such futures are handed a dispersion significantly smaller than what we might know.

The Azimuth Project can contribute, and I am planning subprojects to pursue my suggestions in Section 7, those of examining HadCRUT4 improvements using MOS ensembles, a Bayesian bootstrap, or the Bayesian ANOVA described there. Beyond trends in mean surface temperatures, there’s another more challenging statistical problem involving trends in sea levels which awaits investigation [Le2012b, Hu2010].

Working out these kinds of details is the process of science at its best, and many disciplines, not least mathematics, statistics, and signal processing, have much to contribute to the methods and interpretations of these series data. It is possible too much is being asked of a limited data set, and perhaps we have not yet observed enough of climate system response to say anything definitive. But the urgency to act responsibly given scientific predictions remains.

### Bibliography

1. Credentials. I have taken courses in geology from Binghamton University, but the rest of my knowledge of climate science is from reading the technical literature, principally publications from the American Geophysical Union and the American Meteorological Society, and self-teaching, from textbooks like Pierrehumbert. I seek to find ways where my different perspective on things can help advance and explain the climate science enterprise. I also apply my skills to working local environmental problems, ranging from inferring people’s use of energy in local municipalities to studying things like trends in solid waste production at the same scales using Bayesian inversions. I am fortunate that techniques used in my professional work and those in these problems overlap so much. I am a member of the American Statistical Association, the American Geophysical Union, the American Meteorological Association, the International Society for Bayesian Analysis, as well as the IEEE and its signal processing society.
2. [Yo2014] D. S. Young, “Bond. James Bond. A statistical look at cinema’s most famous spy”, CHANCE Magazine, 27(2), 2014, 21-27, http://chance.amstat.org/2014/04/james-bond/.
3. [Ca2014a] S. Carson, Science of Doom, a Web site devoted to atmospheric radiation physics and forcings, last accessed 7 February 2014.
4. [Pi2012] R. T. Pierrehumbert, Principles of Planetary Climate, Cambridge University Press, 2010, reprinted 2012.
5. [Pi2011] R. T. Pierrehumbert, “Infrared radiative and planetary temperature”, Physics Today, January 2011, 33-38.
6. [Pe2006] G. W. Petty, A First Course in Atmospheric Radiation, 2nd edition, Sundog Publishing, 2006.
7. [Le2012a] S. Levitus, J. I. Antonov, T. P. Boyer, O. K. Baranova, H. E. Garcia, R. A. Locarnini, A. V. Mishonov, J. R. Reagan, D. Seidov, E. S. Yarosh, and M. M. Zweng, “World ocean heat content and thermosteric sea level change (0-2000 m), 1955-2010”, Geophysical Research Letters, 39, L10603, 2012, http://dx.doi.org/10.1029/2012GL051106.
8. [Le2012b] S. Levitus, J. I. Antonov, T. P. Boyer, O. K. Baranova, H. E. Garcia, R. A. Locarnini, A. V. Mishonov, J. R. Reagan, D. Seidov, E. S. Yarosh, and M. M. Zweng, “World ocean heat content and thermosteric sea level change (0-2000 m), 1955-2010: supplementary information”, Geophysical Research Letters, 39, L10603, 2012, http://onlinelibrary.wiley.com/doi/10.1029/2012GL051106/suppinfo.
9. [Sm2009] R. L. Smith, C. Tebaldi, D. Nychka, L. O. Mearns, “Bayesian modeling of uncertainty in ensembles of climate models”, Journal of the American Statistical Association, 104(485), March 2009.
10. Nomenclature. The nomenclature can be confusing. With respect to observations, variability arising due to choice of method is sometimes called structural uncertainty [Mo2012, Th2005].
11. [Kr2014] J. P. Krasting, J. P. Dunne, E. Shevliakova, R. J. Stouffer (2014), “Trajectory sensitivity of the transient climate response to cumulative carbon emissions”, Geophysical Research Letters, 41, 2014, http://dx.doi.org/10.1002/2013GL059141.
12. [Sh2014a] D. T. Shindell, “Inhomogeneous forcing and transient climate sensitivity”, Nature Climate Change, 4, 2014, 274-277, http://dx.doi.org/10.1038/nclimate2136.
13. [Sh2014b] D. T. Shindell, “Shindell: On constraining the Transient Climate Response”, RealClimate, http://www.realclimate.org/index.php?p=17134, 8 April 2014.
14. [Sa2011] B. M. Sanderson, B. C. O’Neill, J. T. Kiehl, G. A. Meehl, R. Knutti, W. M. Washington, “The response of the climate system to very high greenhouse gas emission scenarios”, Environmental Research Letters, 6, 2011, 034005,
http://dx.doi.org/10.1088/1748-9326/6/3/034005.
15. [Em2011] K. Emanuel, “Global warming effects on U.S. hurricane damage”, Weather, Climate, and Society, 3, 2011, 261-268, http://dx.doi.org/10.1175/WCAS-D-11-00007.1.
16. [Sm2011] L. A. Smith, N. Stern, “Uncertainty in science and its role in climate policy”, Philosophical Transactions of the Royal Society A, 269, 2011 369, 1-24, http://dx.doi.org/10.1098/rsta.2011.0149.
17. [Le2010] M. C. Lemos, R. B. Rood, “Climate projections and their impact on policy and practice”, WIREs Climate Change, 1, September/October 2010, http://dx.doi.org/10.1002/wcc.71.
18. [Sc2014] G. A. Schmidt, D. T. Shindell, K. Tsigaridis, “Reconciling warming trends”, Nature Geoscience, 7, 2014, 158-160, http://dx.doi.org/10.1038/ngeo2105.
19. [Be2013] “Examining the recent “pause” in global warming”, Berkeley Earth Memo, 2013, http://static.berkeleyearth.org/memos/examining-the-pause.pdf.
20. [Mu2013a] R. A. Muller, J. Curry, D. Groom, R. Jacobsen, S. Perlmutter, R. Rohde, A. Rosenfeld, C. Wickham, J. Wurtele, “Decadal variations in the global atmospheric land temperatures”, Journal of Geophysical Research: Atmospheres, 118 (11), 2013, 5280-5286, http://dx.doi.org/10.1002/jgrd.50458.
21. [Mu2013b] R. Muller, “Has global warming stopped?”, Berkeley Earth Memo, September 2013, http://static.berkeleyearth.org/memos/has-global-warming-stopped.pdf.
22. [Br2006] P. Brohan, J. Kennedy, I. Harris, S. Tett, P. D. Jones, “Uncertainty estimates in regional and global observed temperature changes: A new data set from 1850”, Journal of Geophysical Research—Atmospheres, 111(D12), 27 June 2006, http://dx.doi.org/10.1029/2005JD006548.
23. [Co2013] K. Cowtan, R. G. Way, “Coverage bias in the HadCRUT4 temperature series and its impact on recent temperature trends”, Quarterly Journal of the Royal Meteorological Society, 2013, http://dx.doi.org/10.1002/qj.2297.
24. [Fy2013] J. C. Fyfe, N. P. Gillett, F. W. Zwiers, “Overestimated global warming over the past 20 years”, Nature Climate Change, 3, September 2013, 767-769, and online at http://dx.doi.org/10.1038/nclimate1972.
25. [Ha2013] E. Hawkins, “Comparing global temperature observations and simulations, again”, Climate Lab Book, http://www.climate-lab-book.ac.uk/2013/comparing-observations-and-simulations-again/, 28 May 2013.
26. [Ha2014] A. Hannart, A. Ribes, P. Naveau, “Optimal fingerprinting under multiple sources of uncertainty”, Geophysical Research Letters, 41, 2014, 1261-1268, http://dx.doi.org/10.1002/2013GL058653.
27. [Ka2013a] R. W. Katz, P. F. Craigmile, P. Guttorp, M. Haran, Bruno Sansó, M.L. Stein, “Uncertainty analysis in climate change assessments”, Nature Climate Change, 3, September 2013, 769-771 (“Commentary”).
28. [Sl2013] J. Slingo, “Statistical models and the global temperature record”, Met Office, May 2013, http://www.metoffice.gov.uk/media/pdf/2/3/Statistical_Models_Climate_Change_May_2013.pdf.
29. [Tr2013] K. Trenberth, J. Fasullo, “An apparent hiatus in global warming?”, Earth’s Future, 2013,
http://dx.doi.org/10.1002/2013EF000165.
30. [Mo2012] C. P. Morice, J. J. Kennedy, N. A. Rayner, P. D. Jones, “Quantifying uncertainties in global and regional temperature change using an ensemble of observational estimates: The HadCRUT4 data set”, Journal of Geophysical Research, 117, 2012, http://dx.doi.org/10.1029/2011JD017187. See also http://www.metoffice.gov.uk/hadobs/hadcrut4/data/current/download.html where the 100 ensembles can be found.
31. [Sa2012] B. D. Santer, J. F. Painter, C. A. Mears, C. Doutriaux, P. Caldwell, J. M. Arblaster, P. J. Cameron-Smith, N. P. Gillett, P. J. Gleckler, J. Lanzante, J. Perlwitz, S. Solomon, P. A. Stott, K. E. Taylor, L. Terray, P. W. Thorne, M. F. Wehner, F. J. Wentz, T. M. L. Wigley, L. J. Wilcox, C.-Z. Zou, “Identifying human influences on atmospheric temperature”, Proceedings of the National Academy of Sciences, 29 November 2012, http://dx.doi.org/10.1073/pnas.1210514109.
32. [Ke2011a] J. J. Kennedy, N. A. Rayner, R. O. Smith, D. E. Parker, M. Saunby, “Reassessing biases and other uncertainties in sea-surface temperature observations measured in situ since 1850, part 1: measurement and sampling uncertainties”, Journal of Geophysical Research: Atmospheres (1984-2012), 116(D14), 27 July 2011, http://dx.doi.org/10.1029/2010JD015218.
33. [Kh2008a] S. Kharin, “Statistical concepts in climate research: Some misuses of statistics in climatology”, Banff Summer School, 2008, part 1 of 3. Slide 7, “Climatology is a one-experiment science. There is basically one observational record in climate”, http://www.atmosp.physics.utoronto.ca/C-SPARC/ss08/lectures/Kharin-lecture1.pdf.
34. [Kh2008b] S. Kharin, “Climate Change Detection and Attribution: Bayesian view”, Banff Summer School, 2008, part 3 of 3, http://www.atmosp.physics.utoronto.ca/C-SPARC/ss08/lectures/Kharin-lecture3.pdf.
35. [Le2005] T. C. K. Lee, F. W. Zwiers, G. C. Hegerl, X. Zhang, M. Tsao, “A Bayesian climate change detection and attribution assessment”, Journal of Climate, 18, 2005, 2429-2440.
36. [De1982] M. H. DeGroot, S. Fienberg, “The comparison and evaluation of forecasters”, The Statistician, 32(1-2), 1983, 12-22.
37. [Ro2013a] R. Rohde, R. A. Muller, R. Jacobsen, E. Muller, S. Perlmutter, A. Rosenfeld, J. Wurtele, D. Groom, C. Wickham, “A new estimate of the average Earth surface land temperature spanning 1753 to 2011”, Geoinformatics & Geostatistics: An Overview, 1(1), 2013, http://dx.doi.org/10.4172/2327-4581.1000101.
38. [Ke2011b] J. J. Kennedy, N. A. Rayner, R. O. Smith, D. E. Parker, M. Saunby, “Reassessing biases and other uncertainties in sea-surface temperature observations measured in situ since 1850, part 2: Biases and homogenization”, Journal of Geophysical Research: Atmospheres (1984-2012), 116(D14), 27 July 2011, http://dx.doi.org/10.1029/2010JD015220.
39. [Ro2013b] R. Rohde, “Comparison of Berkeley Earth, NASA GISS, and Hadley CRU averaging techniques on ideal synthetic data”, Berkeley Earth Memo, January 2013, http://static.berkeleyearth.org/memos/robert-rohde-memo.pdf.
40. [En2014] M. H. England, S. McGregor, P. Spence, G. A. Meehl, A. Timmermann, W. Cai, A. S. Gupta, M. J. McPhaden, A. Purich, A. Santoso, “Recent intensification of wind-driven circulation in the Pacific and the ongoing warming hiatus”, Nature Climate Change, 4, 2014, 222-227, http://dx.doi.org/10.1038/nclimate2106. See also http://www.realclimate.org/index.php/archives/2014/02/going-with-the-wind/.
41. [Fy2014] J. C. Fyfe, N. P. Gillett, “Recent observed and simulated warming”, Nature Climate Change, 4, March 2014, 150-151, http://dx.doi.org/10.1038/nclimate2111.
42. [Ta2013] Tamino, “el Niño and the Non-Spherical Cow”, Open Mind blog, http://tamino.wordpress.com/2013/09/02/el-nino-and-the-non-spherical-cow/, 2 September 2013.
43. [Fy2013s] Supplement to J. C. Fyfe, N. P. Gillett, F. W. Zwiers, “Overestimated global warming over the past 20 years”, Nature Climate Change, 3, September 2013, online at http://www.nature.com/nclimate/journal/v3/n9/extref/nclimate1972-s1.pdf.
44. Ionizing. There are tiny amounts of heating due to impinging ionizing radiation from space, and changes in Earth’s magnetic field.
45. [Ki1997] J. T. Kiehl, K. E. Trenberth, “Earth’s annual global mean energy budget”, Bulletin of the American Meteorological Society, 78(2), 1997, http://dx.doi.org/10.1175/1520-0477(1997)0782.0.CO;2.
46. [Tr2009] K. Trenberth, J. Fasullo, J. T. Kiehl, “Earth’s global energy budget”, Bulletin of the American Meteorological Society, 90, 2009, 311–323, http://dx.doi.org/10.1175/2008BAMS2634.1.
47. [IP2013] IPCC, 2013: Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change [Stocker, T.F., D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex and P.M. Midgley (eds.)]. Cambridge University Press, Cambridge, United Kingdom and New York, NY, USA, 1535 pp. Also available online at https://www.ipcc.ch/report/ar5/wg1/.
48. [Ve2012] A. Vehtari, J. Ojanen, “A survey of Bayesian predictive methods for model assessment, selection and comparison”, Statistics Surveys, 6 (2012), 142-228, http://dx.doi.org/10.1214/12-SS102.
49. [Ge1998] J. Geweke, “Simulation Methods for Model Criticism and Robustness Analysis”, in Bayesian Statistics 6, J. M. Bernardo, J. O. Berger, A. P. Dawid and A. F. M. Smith (eds.), Oxford University Press, 1998.
50. [Co2006] P. Congdon, Bayesian Statistical Modelling, 2nd edition, John Wiley & Sons, 2006.
51. [Fe2011b] D. Ferreira, J. Marshall, B. Rose, “Climate determinism revisited: Multiple equilibria in a complex climate model”, Journal of Climate, 24, 2011, 992-1012, http://dx.doi.org/10.1175/2010JCLI3580.1.
52. [Bu2002] K. P. Burnham, D. R. Anderson, Model Selection and Multimodel Inference, 2nd edition, Springer-Verlag, 2002.
53. [Ea2014a] S. Easterbrook, “What Does the New IPCC Report Say About Climate Change? (Part 4): Most of the heat is going into the oceans”, 11 April 2014, at the Azimuth blog, https://johncarlosbaez.wordpress.com/2014/04/11/what-does-the-new-ipcc-report-say-about-climate-change-part-4/.
54. [Ko2014] Y. Kostov, K. C. Armour, and J. Marshall, “Impact of the Atlantic meridional overturning circulation on ocean heat storage and transient climate change”, Geophysical Research Letters, 41, 2014, 2108–2116, http://dx.doi.org/10.1002/2013GL058998.
55. [Me2011] G. A. Meehl, J. M. Arblaster, J. T. Fasullo, A. Hu, K. E. Trenberth, “Model-based evidence of deep-ocean heat uptake during surface-temperature hiatus periods”, Nature Climate Change, 1, 2011, 360–364, http://dx.doi.org/10.1038/nclimate1229.
56. [Me2013] G. A. Meehl, A. Hu, J. M. Arblaster, J. Fasullo, K. E. Trenberth, “Externally forced and internally generated decadal climate variability associated with the Interdecadal Pacific Oscillation”, Journal of Climate, 26, 2013, 7298–7310, http://dx.doi.org/10.1175/JCLI-D-12-00548.1.
57. [Ha2010] J. Hansen, R. Ruedy, M. Sato, and K. Lo, “Global surface temperature change”, Reviews of Geophysics, 48(RG4004), 2010, http://dx.doi.org/10.1029/2010RG000345.
58. [GISS-BEST] 3.667 (GISS) versus 3.670 (BEST).
59. Spar. The smoothing parameter is a constant that weights a penalty term proportional to the integrated squared second derivative of the curve. The effect is that a very bumpy candidate spline is penalized and will only be chosen if the data demand it. More is said about the choice of such parameters in the caption of Figure 12. (A small illustrative sketch of this trade-off follows the reference list.)
60. [Ea2009] D. R. Easterling, M. F. Wehner, “Is the climate warming or cooling?”, Geophysical Research Letters, 36, L08706, 2009, http://dx.doi.org/10.1029/2009GL037810.
61. Hiatus. The term hiatus has a formal meaning in climate science, as described by the IPCC itself (Box TS.3).
62. [Ea2000] D. J. Easterbrook, D. J. Kovanen, “Cyclical oscillation of Mt. Baker glaciers in response to climatic changes and their correlation with periodic oceanographic changes in the northeast Pacific Ocean”, Proceedings of the Geological Society of America, 32, 2000, Abstracts with Program, page 17, http://myweb.wwu.edu/dbunny/pdfs/dje_abstracts.pdf, abstract reviewed 23 April 2014.
63. [Ea2001] D. J. Easterbrook, “The next 25 years: global warming or global cooling? Geologic and oceanographic evidence for cyclical climatic oscillations”, Proceedings of the Geological Society of America, 33, 2001, Abstracts with Program, page 253, http://myweb.wwu.edu/dbunny/pdfs/dje_abstracts.pdf, abstract reviewed 23 April 2014.
64. [Ea2005] D. J. Easterbrook, “Causes and effects of abrupt, global, climate changes and global warming”, Proceedings of the Geological Society of America, 37, 2005, Abstracts with Program, page 41, http://myweb.wwu.edu/dbunny/pdfs/dje_abstracts.pdf, abstract reviewed 23 April 2014.
65. [Ea2006a] D. J. Easterbrook, “The cause of global warming and predictions for the coming century”, Proceedings of the Geological Society of America, 38(7), 2006, Abstracts with Programs, page 235, http://myweb.wwu.edu/dbunny/pdfs/dje_abstracts.pdf, abstract reviewed 23 April 2014.
66. [Ea2006b] D. J. Easterbrook, “Causes of abrupt global climate changes and global warming predictions for the coming century”, Proceedings of the Geological Society of America, 38, 2006, Abstracts with Program, page 77, http://myweb.wwu.edu/dbunny/pdfs/dje_abstracts.pdf, abstract reviewed 23 April 2014.
67. [Ea2007] D. J. Easterbrook, “Geologic evidence of recurring climate cycles and their implications for the cause of global warming and climate changes in the coming century”, Proceedings of the Geological Society of America, 39(6), 2007, Abstracts with Programs, page 507, http://myweb.wwu.edu/dbunny/pdfs/dje_abstracts.pdf, abstract reviewed 23 April 2014.
68. [Ea2008] D. J. Easterbrook, “Correlation of climatic and solar variations over the past 500 years and predicting global climate changes from recurring climate cycles”, Proceedings of the International Geological Congress, 2008, Oslo, Norway.
69. [Wi2007] J. K. Willis, J. M. Lyman, G. C. Johnson, J. Gilson, “Correction to ‘Recent cooling of the upper ocean’”, Geophysical Research Letters, 34, L16601, 2007, http://dx.doi.org/10.1029/2007GL030323.
70. [Ra2006] N. Rayner, P. Brohan, D. Parker, C. Folland, J. Kennedy, M. Vanicek, T. Ansell, S. Tett, “Improved analyses of changes and uncertainties in sea surface temperature measured in situ since the mid-nineteenth century: the HadSST2 dataset”, Journal of Climate, 19, 1 February 2006, http://dx.doi.org/10.1175/JCLI3637.1.
71. [Pi2006] R. Pielke, Sr, “The Lyman et al paper ‘Recent cooling in the upper ocean’ has been published”, blog entry, September 29, 2006, 8:09 AM, https://pielkeclimatesci.wordpress.com/2006/09/29/the-lyman-et-al-paper-recent-cooling-in-the-upper-ocean-has-been-published/, last accessed 24 April 2014.
72. [Ko2013] Y. Kosaka, S.-P. Xie, “Recent global-warming hiatus tied to equatorial Pacific surface cooling”, Nature, 501, 2013, 403–407, http://dx.doi.org/10.1038/nature12534.
73. [Ke1998] C. D. Keeling, “Rewards and penalties of monitoring the Earth”, Annual Review of Energy and the Environment, 23, 1998, 25–82, http://dx.doi.org/10.1146/annurev.energy.23.1.25.
74. [Wa1990] G. Wahba, Spline Models for Observational Data, Society for Industrial and Applied Mathematics (SIAM), 1990.
75. [Go1979] G. H. Golub, M. Heath, G. Wahba, “Generalized cross-validation as a method for choosing a good ridge parameter”, Technometrics, 21(2), May 1979, 215-223, http://www.stat.wisc.edu/~wahba/ftp1/oldie/golub.heath.wahba.pdf.
76. [Cr1979] P. Craven, G. Wahba, “Smoothing noisy data with spline functions: Estimating the correct degree of smoothing by the method of generalized cross-validation”, Numerische Mathematik, 31, 1979, 377-403, http://www.stat.wisc.edu/~wahba/ftp1/oldie/craven.wah.pdf.
77. [Sa2013] S. Särkkä, Bayesian Filtering and Smoothing, Cambridge University Press, 2013.
78. [Co2009] P. S. P. Cowpertwait, A. V. Metcalfe, Introductory Time Series With R, Springer, 2009.
79. [Ko2005] R. Koenker, Quantile Regression, Cambridge University Press, 2005.
80. [Du2012] J. Durbin, S. J. Koopman, Time Series Analysis by State Space Methods, Oxford University Press, 2012.
81. Process variance. Here the process variance was taken to be $\frac{1}{50}$ of the observation variance. (An illustrative sketch of this choice follows the reference list.)
82. Probabilities. “In this Report, the following terms have been used to indicate the assessed likelihood of an outcome or a result: Virtually certain 99-100% probability, Very likely 90-100%, Likely 66-100%, About as likely as not 33-66%, Unlikely 0-33%, Very unlikely 0-10%, Exceptionally unlikely 0-1%. Additional terms (Extremely likely: 95-100%, More likely than not 50-100%, and Extremely unlikely 0-5%) may also be used when appropriate. Assessed likelihood is typeset in italics, e.g., very likely (see Section 1.4 and Box TS.1 for more details).”
83. [Ki2013] E. Kintisch, “Researchers wary as DOE bids to build sixth U.S. climate model”, Science, 341(6151), 13 September 2013, page 1160, http://dx.doi.org/10.1126/science.341.6151.1160.
84. Inez Fung. “It’s great there’s a new initiative,” says modeler Inez Fung of DOE’s Lawrence Berkeley National Laboratory and the University of California, Berkeley. “But all the modeling efforts are very short-handed. More brains working on one set of code would be better than working separately.”
85. Exchangeability. Exchangeability is a weaker assumption than independence. Random variables are exchangeable if their joint distribution depends only upon the set of variables, and not their order [Di1977, Di1988, Ro2013c]. Note the caution in Coolen [Co2005]. (A formal statement is given after the reference list.)
86. [Di1977] P. Diaconis, “Finite forms of de Finetti’s theorem on exchangeability”, Synthese, 36, 1977, 271-281.
87. [Di1988] P. Diaconis, “Recent progress on de Finetti’s notions of exchangeability”, Bayesian Statistics, 3, 1988, 111-125.
88. [Ro2013c] J.C. Rougier, M. Goldstein, L. House, “Second-order exchangeability analysis for multi-model ensembles”, Journal of the American Statistical Association, 108, 2013, 852-863, http://dx.doi.org/10.1080/01621459.2013.802963.
89. [Co2005] F. P. A. Coolen, “On nonparametric predictive inference and objective Bayesianism”, Journal of Logic, Language and Information, 15, 2006, 21-47, http://dx.doi.org/10.1007/s10849-005-9005-7. (“Generally, though, both for frequentist and Bayesian approaches, statisticians are often happy to assume exchangeability at the prior stage. Once data are used in combination with model assumptions, exchangeability no longer holds ‘post-data’ due to the influence of modelling assumptions, which effectively are based on mostly subjective input added to the information from the data.”).
90. [Ch2008] M. R. Chernick, Bootstrap Methods: A Guide for Practitioners and Researchers, 2nd edition, 2008, John Wiley & Sons.
91. [Da2009] A. C. Davison, D. V. Hinkley, Bootstrap Methods and their Application, first published 1997, 11th printing, 2009, Cambridge University Press.
92. [Mu2007] M. Mudelsee, M. Alkio, “Quantifying effects in two-sample environmental experiments using bootstrap confidence intervals”, Environmental Modelling and Software, 22, 2007, 84-96, http://dx.doi.org/10.1016/j.envsoft.2005.12.001.
93. [Wi2011] D. S. Wilks, Statistical Methods in the Atmospheric Sciences, 3rd edition, 2011, Academic Press.
94. [Pa2006] T. N. Palmer, R. Buizza, R. Hagedorn, A. Lawrence, M. Leutbecher, L. Smith, “Ensemble prediction: A pedagogical perspective”, ECMWF Newsletter, 106, 2006, 10–17.
95. [To2001] Z. Toth, Y. Zhu, T. Marchok, “The use of ensembles to identify forecasts with small and large uncertainty”, Weather and Forecasting, 16, 2001, 463–477, http://dx.doi.org/10.1175/1520-0434(2001)0162.0.CO;2.
96. [Le2013a] L. A. Lee, K. J. Pringle, C. I. Reddington, G. W. Mann, P. Stier, D. V. Spracklen, J. R. Pierce, K. S. Carslaw, “The magnitude and causes of uncertainty in global model simulations of cloud condensation nuclei”, Atmospheric Chemistry and Physics Discussion, 13, 2013, 6295-6378, http://www.atmos-chem-phys.net/13/9375/2013/acp-13-9375-2013.pdf.
97. [Gl2011] D. M. Glover, W. J. Jenkins, S. C. Doney, Modeling Methods for Marine Science, Cambridge University Press, 2011.
98. [Ki2014] E. Kintisch, “Climate outsider finds missing global warming”, Science, 344 (6182), 25 April 2014, page 348, http://dx.doi.org/10.1126/science.344.6182.348.
99. [GL2011] D. M. Glover, W. J. Jenkins, S. C. Doney, Modeling Methods for Marine Science, Cambridge University Press, 2011, Chapter 7.
100. [Le2013b] L. A. Lee, “Uncertainties in climate models: Living with uncertainty in an uncertain world”, Significance, 10(5), October 2013, 34-39, http://dx.doi.org/10.1111/j.1740-9713.2013.00697.x.
101. [Ur2014] N. M. Urban, P. B. Holden, N. R. Edwards, R. L. Sriver, K. Keller, “Historical and future learning about climate sensitivity”, Geophysical Research Letters, 41, 2014, http://dx.doi.org/10.1002/2014GL059484.
102. [Th2005] P. W. Thorne, D. E. Parker, J. R. Christy, C. A. Mears, “Uncertainties in climate trends: Lessons from upper-air temperature records”, Bulletin of the American Meteorological Society, 86, 2005, 1437-1442, http://dx.doi.org/10.1175/BAMS-86-10-1437.
103. [Fr2008] C. Fraley, A. E. Raftery, T. Gneiting, “Calibrating multimodel forecast ensembles with exchangeable and missing members using Bayesian model averaging”, Monthly Weather Review, 138, January 2010, http://dx.doi.org/10.1175/2009MWR3046.1.
104. [Ow2001] A. B. Owen, Empirical Likelihood, Chapman & Hall/CRC, 2001.
105. [Al2012] M. Aldrin, M. Holden, P. Guttorp, R. B. Skeie, G. Myhre, T. K. Berntsen, “Bayesian estimation of climate sensitivity based on a simple climate model fitted to observations of hemispheric temperatures and global ocean heat content”, Environmetrics, 2012, 23, 253-257, http://dx.doi.org/10.1002/env.2140.
106. [AS2007] “ASA Statement on Climate Change”, American Statistical Association, ASA Board of Directors, adopted 30 November 2007, http://www.amstat.org/news/climatechange.cfm, last visited 13 September 2013.
107. [Be2008] L. M. Berliner, Y. Kim, “Bayesian design and analysis for superensemble-based climate forecasting”, Journal of Climate, 21, 1 May 2008, http://dx.doi.org/10.1175/2007JCLI1619.1.
108. [Fe2011a] X. Feng, T. DelSole, P. Houser, “Bootstrap estimated seasonal potential predictability of global temperature and precipitation”, Geophysical Research Letters, 38, L07702, 2011, http://dx.doi.org/10.1029/2010GL046511.
109. [Fr2013] P. Friedlingstein, M. Meinshausen, V. K. Arora, C. D. Jones, A. Anav, S. K. Liddicoat, R. Knutti, “Uncertainties in CMIP5 climate projections due to carbon cycle feedbacks”, Journal of Climate, 2013, http://dx.doi.org/10.1175/JCLI-D-12-00579.1.
110. [Ho2003] T. J. Hoar, R. F. Milliff, D. Nychka, C. K. Wikle, L. M. Berliner, “Winds from a Bayesian hierarchical model: Computations for atmosphere-ocean research”, Journal of Computational and Graphical Statistics, 12(4), 2003, 781-807, http://www.jstor.org/stable/1390978.
111. [Jo2013] V. E. Johnson, “Revised standards for statistical evidence”, Proceedings of the National Academy of Sciences, 11 November 2013, http://dx.doi.org/10.1073/pnas.1313476110, published online before print.
112. [Ka2013b] J. Karlsson, J. Svensson, “Consequences of poor representation of Arctic sea-ice albedo and cloud-radiation interactions in the CMIP5 model ensemble”, Geophysical Research Letters, 40, 2013, 4374-4379, http://dx.doi.org/10.1002/grl.50768.
113. [Kh2002] V. V. Kharin, F. W. Zwiers, “Climate predictions with multimodel ensembles”, Journal of Climate, 15, 1 April 2002, 793-799.
114. [Kr2011] J. K. Kruschke, Doing Bayesian Data Analysis: A Tutorial with R and BUGS, Academic Press, 2011.
115. [Li2008] X. R. Li, X.-B. Li, “Common fallacies in hypothesis testing”, Proceedings of the 11th IEEE International Conference on Information Fusion, 2008, New Orleans, LA.
116. [Li2013] J.-L. F. Li, D. E. Waliser, G. Stephens, S. Lee, T. L’Ecuyer, S. Kato, N. Loeb, H.-Y. Ma, “Characterizing and understanding radiation budget biases in CMIP3/CMIP5 GCMs, contemporary GCM, and reanalysis”, Journal of Geophysical Research: Atmospheres, 118, 2013, 8166-8184, http://dx.doi.org/10.1002/jgrd.50378.
117. [Ma2013b] E. Maloney, S. Camargo, E. Chang, B. Colle, R. Fu, K. Geil, Q. Hu, X. Jiang, N. Johnson, K. Karnauskas, J. Kinter, B. Kirtman, S. Kumar, B. Langenbrunner, K. Lombardo, L. Long, A. Mariotti, J. Meyerson, K. Mo, D. Neelin, Z. Pan, R. Seager, Y. Serra, A. Seth, J. Sheffield, J. Stroeve, J. Thibeault, S. Xie, C. Wang, B. Wyman, and M. Zhao, “North American Climate in CMIP5 Experiments: Part III: Assessment of 21st Century Projections”, Journal of Climate, 2013, in press, http://dx.doi.org/10.1175/JCLI-D-13-00273.1.
118. [Mi2007] S.-K. Min, D. Simonis, A. Hense, “Probabilistic climate change predictions applying Bayesian model averaging”, Philosophical Transactions of the Royal Society, Series A, 365, 15 August 2007, http://dx.doi.org/10.1098/rsta.2007.2070.
119. [Ni2001] N. Nicholls, “The insignificance of significance testing”, Bulletin of the American Meteorological Society, 82, 2001, 971-986.
120. [Pe2008] G. Pennello, L. Thompson, “Experience with reviewing Bayesian medical device trials”, Journal of Biopharmaceutical Statistics, 18(1), 2008, 81-115.
121. [Pl2013] M. Plummer, “Just Another Gibbs Sampler”, JAGS, 2013. Plummer describes this in greater detail at “JAGS: A program for analysis of Bayesian graphical models using Gibbs sampling”, Proceedings of the 3rd International Workshop on Distributed Statistical Computing (DSC 2003), 20-22 March 2003, Vienna. See also M. J. Denwood, [in review] “runjags: An R package providing interface utilities, parallel computing methods and additional distributions for MCMC models in JAGS”, Journal of Statistical Software, and http://cran.r-project.org/web/packages/runjags/. See also J. Kruschke, “Another reason to use JAGS instead of BUGS”, http://doingbayesiandataanalysis.blogspot.com/2012/12/another-reason-to-use-jags-instead-of.html, 21 December 2012.
122. [Po1994] D. N. Politis, J. P. Romano, “The Stationary Bootstrap”, Journal of the American Statistical Association, 89(428), 1994, 1303-1313, http://dx.doi.org/10.1080/01621459.1994.10476870.
123. [Sa2002] C.-E. Särndal, B. Swensson, J. Wretman, Model Assisted Survey Sampling, Springer, 1992.
124. [Ta2012] K. E. Taylor, R.J. Stouffer, G.A. Meehl, “An overview of CMIP5 and the experiment design”, Bulletin of the American Meteorological Society, 93, 2012, 485-498, http://dx.doi.org/10.1175/BAMS-D-11-00094.1.
125. [To2013] A. Toreti, P. Naveau, M. Zampieri, A. Schindler, E. Scoccimarro, E. Xoplaki, H. A. Dijkstra, S. Gualdi, J. Luterbacher, “Projections of global changes in precipitation extremes from CMIP5 models”, Geophysical Research Letters, 2013, http://dx.doi.org/10.1002/grl.50940.
126. [WC2013] World Climate Research Programme (WCRP), “CMIP5: Coupled Model Intercomparison Project”, http://cmip-pcmdi.llnl.gov/cmip5/, last visited 13 September 2013.
127. [We2011] M. B. Westover, K. D. Westover, M. T. Bianchi, “Significance testing as perverse probabilistic reasoning”, BMC Medicine, 9(20), 2011, http://www.biomedcentral.com/1741-7015/9/20.
128. [Zw2004] F. W. Zwiers, H. Von Storch, “On the role of statistics in climate research”, International Journal of Climatology, 24, 2004, 665-680.
129. [Ra2005] A. E. Raftery, T. Gneiting, F. Balabdaoui, M. Polakowski, “Using Bayesian model averaging to calibrate forecast ensembles”, Monthly Weather Review, 133, 2005, 1155–1174, http://dx.doi.org/10.1175/MWR2906.1.
130. [Ki2010] G. Kitagawa, Introduction to Time Series Modeling, Chapman & Hall/CRC, 2010.
131. [Hu2010] C. W. Hughes, S. D. P. Williams, “The color of sea level: Importance of spatial variations in spectral shape for assessing the significance of trends”, Journal of Geophysical Research, 115, C10048, 2010, http://dx.doi.org/10.1029/2010JC006102.
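
For note 59, here is a minimal illustrative sketch (not the code used in the analysis) of how a smoothing parameter trades fidelity against curvature. It uses SciPy’s `make_smoothing_spline`, whose `lam` argument weights the integrated squared second derivative of the fitted spline; the synthetic “anomaly” series and the particular `lam` values are purely hypothetical.

```python
# Sketch for note 59: how a smoothing parameter penalizes curvature.
# Assumes SciPy >= 1.10; the toy series below is not real temperature data.
import numpy as np
from scipy.interpolate import make_smoothing_spline

rng = np.random.default_rng(0)
t = np.arange(1950.0, 2014.0)                                  # illustrative "years"
y = 0.01 * (t - 1950.0) + 0.1 * rng.standard_normal(t.size)    # toy anomaly series

# Small lam: the penalty barely matters, so the spline follows the noise.
# Large lam: bumpy candidates are heavily penalized, so the fit is nearly linear.
for lam in (1e-4, 1.0, 1e4):
    spline = make_smoothing_spline(t, y, lam=lam)
    rss = float(np.sum((y - spline(t)) ** 2))                  # fidelity term
    curvature = float(np.mean(spline.derivative(2)(t) ** 2))   # rough measure of bumpiness
    print(f"lam={lam:g}: residual sum of squares={rss:.3f}, mean squared curvature={curvature:.3g}")
```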
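
For note 81, a minimal sketch of fixing the process variance at $\frac{1}{50}$ of the observation variance. It assumes a simple local-level (random walk plus noise) state-space form, which may differ from the model actually used; the input series is synthetic and only stands in for a temperature record.

```python
# Sketch for note 81: local-level model  x_t = x_{t-1} + w_t,  y_t = x_t + v_t,
# with Var(w) set to Var(v)/50.  The series y below is synthetic, not real data,
# and the local-level form is an assumption made for illustration.
import numpy as np

rng = np.random.default_rng(1)
y = np.cumsum(0.02 * rng.standard_normal(120)) + 0.1 * rng.standard_normal(120)

obs_var = float(np.var(y))     # stand-in for the observation variance
proc_var = obs_var / 50.0      # process variance, 1/50 of it as in note 81

x, P = y[0], obs_var           # initial level estimate and its variance
level = np.empty_like(y)
for i, yi in enumerate(y):
    P = P + proc_var           # predict: the level is allowed to drift a little
    K = P / (P + obs_var)      # Kalman gain
    x = x + K * (yi - x)       # update with the new observation
    P = (1.0 - K) * P
    level[i] = x

print(level[-5:])              # filtered level estimates for the last few points
```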
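
For note 85, a compact formal statement of the definition (a standard formulation, paraphrased rather than quoted from the cited sources): random variables $X_1, \dots, X_n$ are exchangeable when every reordering has the same joint distribution,

$(X_1, X_2, \dots, X_n) \;\overset{d}{=}\; (X_{\pi(1)}, X_{\pi(2)}, \dots, X_{\pi(n)}) \quad \text{for every permutation } \pi \text{ of } \{1, \dots, n\}.$

Independent, identically distributed variables are exchangeable, but the converse does not hold in general, which is why exchangeability is the weaker assumption.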