## El Niño Project (Part 8)

14 October, 2014

So far we’ve rather exhaustively studied a paper by Ludescher et al which uses climate networks for El Niño prediction. This time I’d like to compare it with another paper:

• Y. Berezin, Avi Gozolchiani, O. Guez and Shlomo Havlin, Stability of climate networks with time, Scientific Reports 2 (2012).

Some of the authors are the same, and the way they define climate networks is very similar. But their goal here is different: they want to see how stable climate networks are over time. This is important, since the other paper wants to predict El Niños using changes in climate networks.

They divide the world into 9 zones:

For each zone they construct several climate networks. Each one is an array of numbers $W_{l r}^y$, one for each year $y$ and each pair of grid points $l, r$ in that zone. They call $W_{l r}^y$ a link strength: it’s a measure of how correlated the weather is at those two grid points during that year.

I’ll say more later about how they compute these link strengths. In Part 3 we explained one method for doing it. This paper uses a similar but subtly different method.

The paper’s first big claim is that $W_{l r}^y$ doesn’t change much from year to year, “in complete contrast” to the pattern of local daily air temperature and pressure fluctuations. In simple terms: the strength of the correlation between weather at two different points tends to be quite stable.

Moreover, the definition of link strength involves an adjustable time delay, $\tau$. We can measure the correlation between the weather at point $l$ at any given time and point $r$ at a time $\tau$ days later. The link strength is computed by taking a maximum over time delays $\tau$. Naively speaking, the value of $\tau$ that gives the maximum correlation is “how long it typically takes for weather at point $l$ to affect weather at point $r$”. Or the other way around, if $\tau$ is negative.

This is a naive way of explaining the idea, because I’m mixing up correlation with causation. But you get the idea, I hope.

Their second big claim is that when the link strength between two points $l$ and $r$ is big, the value of $\tau$ that gives the maximum correlation doesn’t change much from year to year. In simple terms: if the weather at two locations is strongly correlated, the amount of time it takes for weather at one point to reach the other point doesn’t change very much.

### The data

How do Berezin et al define their climate network?

They use data obtained from here:

This is not exactly the same data set that Ludescher et al use, namely:

“Reanalysis 2” is a newer attempt to reanalyze and fix up the same pile of data. That’s a very interesting issue, but never mind that now!

Berezin et al use data for:

• the geopotential height for six different pressures

and

• the air temperature at those different heights

The geopotential height for some pressure says roughly how high you have to go for air to have that pressure. Click the link if you want a more precise definition! Here’s the geopotential height field for the pressure of 500 millibars on some particular day of some particular year:

The height is in meters.

Berezin et al use daily values for this data for:

• locations world-wide on a grid with a resolution of 5° × 5°,

during:

• the years from 1948 to 2006.

They divide the globe into 9 zones, and separately study each zone:

So, they’ve got twelve different functions of space and time, where space is a rectangle discretized using a 5° × 5° grid, and time is discretized in days. From each such function they build a ‘climate network’.

How do they do it?

### The climate networks

Berezin et al’s method of defining a climate network is similar to Ludescher et al’s, but not identical. Compare Part 3 if you want to think about this.

Let $\tilde{S}^y_l(t)$ be any one of their functions, evaluated at the grid point $l$ on day $t$ of year $y$.

Let $S_l^y(t)$ be $\tilde{S}^y_l(t)$ minus its climatological average. For example, if $t$ is June 1st and $y$ is 1970, we average the temperature at location $l$ over all June 1sts from 1948 to 2006, and subtract that from $\tilde{S}^y_l(t)$ to get $S^y_l(t)$. In other words:

$\displaystyle{ S^y_l(t) = \tilde{S}^y_l(t) - \frac{1}{N} \sum_y \tilde{S}^y_l(t) }$

where $N$ is the number of years considered.
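Here’s a minimal sketch of that step in R (the language used elsewhere in this series), assuming the data for one grid point has been arranged as a matrix with one row per year and one column per day of the year. This is just an illustration, not the authors’ code.

    # Subtract the climatological average: for each calendar day, average
    # over all years and subtract that from every year's value.
    anomaly <- function(temp) {
      clim <- colMeans(temp)    # one average per day of the year
      sweep(temp, 2, clim)      # S = S-tilde minus its climatological average
    }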

For any function of time $f$, let $\langle f^y(t) \rangle$ be the average of the function over all days in year $y$. This is different from the ‘running average’ used by Ludescher et al, and I can’t even be 100% sure that Berezin et al mean what I just said: all they give is the notation $\langle f^y(t) \rangle$.

Let $l$ and $r$ be two grid points, and $\tau$ any number of days in the interval $[-\tau_{\mathrm{max}}, \tau_{\mathrm{max}}]$. Define the cross-covariance function at time $t$ by:

$\Big(f_l(t) - \langle f_l(t) \rangle\Big) \; \Big( f_r(t + \tau) - \langle f_r(t + \tau) \rangle \Big)$

I believe Berezin et al mean to consider this quantity, because they mention two grid points $l$ and $r$. Their notation omits the subscripts $l$ and $r$ so it is impossible to be completely sure what they mean! But what I wrote is the reasonable quantity to consider here, so I’ll assume this is what they meant.

They average this quantity over the days $t$ in year $y$, normalize it, and take its absolute value, forming:

$\displaystyle{ X_{l r}^y(\tau) = \frac{\Big|\Big\langle\Big(f_l(t) - \langle f_l(t) \rangle\Big) \; \Big( f_r(t + \tau) - \langle f_r(t + \tau) \rangle \Big)\Big\rangle\Big|} {\sqrt{\Big\langle \Big(f_l(t) - \langle f_l(t)\rangle \Big)^2 \Big\rangle } \; \sqrt{\Big\langle \Big(f_r(t+\tau) - \langle f_r(t+\tau)\rangle\Big)^2 \Big\rangle } } }$

They then take the maximum value of $X_{l r}^y(\tau)$ over delays $\tau \in [-\tau_{\mathrm{max}}, \tau_{\mathrm{max}}]$, subtract its mean over delays in this range, and divide by the standard deviation. They write something like this:

$\displaystyle{ W_{l r}^y = \frac{\mathrm{MAX}\Big( X_{l r}^y - \langle X_{l r}^y\rangle \Big) }{\mathrm{STD} X_{l r}^y} }$

and say that the maximum, mean and standard deviation are taken over the (not written) variable $\tau \in [-\tau_{\mathrm{max}}, \tau_{\mathrm{max}}]$.

Each number $W_{l r}^y$ is called a link strength. For each year, the matrix of numbers $W_{l r}^y$ where $l$ and $r$ range over all grid points in our zone is called a climate network.
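To make the recipe concrete, here is a rough R sketch of the whole computation for one pair of grid points in one year. It assumes s.l and s.r are the anomaly series $S^y_l(t)$ and $S^y_r(t)$ for that year, and tau.max is the maximum delay in days; it uses R’s cor() for the normalized cross-correlation, which is close to (but not exactly) the normalization written above. A sketch, not the authors’ code.

    # Link strength W for one pair of grid points in one year.
    link.strength <- function(s.l, s.r, tau.max) {
      n <- length(s.l)
      taus <- -tau.max:tau.max
      X <- sapply(taus, function(tau) {
        # shift one series by tau days and correlate the overlapping parts
        if (tau >= 0) { a <- s.l[1:(n - tau)]; b <- s.r[(1 + tau):n] }
        else          { a <- s.l[(1 - tau):n]; b <- s.r[1:(n + tau)] }
        abs(cor(a, b))
      })
      (max(X) - mean(X)) / sd(X)   # maximum minus mean, divided by standard deviation
    }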

We can think of a climate network as a weighted complete graph with the grid points $l$ as nodes. Remember, an undirected graph is one without arrows on the edges. A complete graph is an undirected graph with one edge between any pair of nodes:

A weighted graph is an undirected graph where each edge is labelled by a number called its weight. But right now we’re also calling the weight the ‘link strength’.

A lot of what’s usually called ‘network theory’ is the study of weighted graphs. You can learn about it here:

• Ernesto Estrada, The Structure of Complex Networks: Theory and Applications, Oxford U. Press, Oxford, 2011.

Suffice it to say that given a weighted graph, there are a lot of quantities you can compute from it, which are believed to tell us interesting things!

### The conclusions

I will not delve into the real meat of the paper, namely what they actually do with their climate networks! The paper is free online, so you can read it yourself.

I will just quote their conclusions and show you a couple of graphs.

The conclusions touch on an issue that’s important for the network-based approach to El Niño prediction. If climate networks are ‘stable’, not changing much in time, why would we use them to predict a time-dependent phenomenon like the El Niño Southern Oscillation?

We have established the stability of the network of connections between the dynamics of climate variables (e.g. temperatures and geopotential heights) in different geographical regions. This stability stands in fierce contrast to the observed instability of the original climatological field pattern. Thus the coupling between different regions is, to a large extent, constant and predictable. The links in the climate network seem to encapsulate information that is missed in analysis of the original field.

The strength of the physical connection, $W_{l r}$, that each link in this network represents, changes only between 5% to 30% over time. A clear boundary between links that represent real physical dependence and links that emerge due to noise is shown to exist. The distinction is based on both the high link average strength $\overline{W_{l r}}$ and on the low variability of time delays $\mathrm{STD}(T_{l r})$.

Recent studies indicate that the strength of the links in the climate network changes during the El Niño Southern Oscillation and the North Atlantic Oscillation cycles. These changes are within the standard deviation of the strength of the links found here. Indeed in Fig. 3 it is clearly seen that the coefficient of variation of links in the El Niño basin (zone 9) is larger than other regions such as zone 1. Note that even in the El Niño basin the coefficient of variation is relatively small (less than 30%).

Beside the stability of single links, also the hierarchy of the link strengths in the climate network is preserved to a large extent. We have shown that this hierarchy is partially due to the two dimensional space in which the network is embedded, and partially due to pure physical coupling processes. Moreover the contribution of each of these effects, and the level of noise was explicitly estimated. The spatial effect is typically around 50% of the observed stability, and the noise reduces the stability value by typically 5%–10%.

The network structure was further shown to be consistent across different altitudes, and a monotonic relation between the altitude distance and the correspondence between the network structures is shown to exist. This yields another indication that the observed network structure represents effects of physical coupling.

The stability of the network and the contributions of different effects were summarized in specific relation to different geographical areas, and a clear distinction between equatorial and off–equatorial areas was observed. Generally, the network structure of equatorial regions is less stable and more fluctuative.

The stability and consistence of the network structure during time and across different altitudes stands in contrast to the known unstable variability of the daily anomalies of climate variables. This contrast indicates an analogy between the behavior of nodes in the climate network and the behavior of coupled chaotic oscillators. While the fluctuations of each coupled oscillators are highly erratic and unpredictable, the interactions between the oscillators is stable and can be predicted. The possible outreach of such an analogy lies in the search for known behavior patterns of coupled chaotic oscillators in the climate system. For example, existence of phase slips in coupled chaotic oscillators is one of the fingerprints for their cooperated behavior, which is evident in each of the individual oscillators. Some abrupt changes in climate variables, for example, might be related to phase slips, and can be understood better in this context.

On the basis of our measured coefficient of variation of single links (around 15%), and the significant overall network stability of 20–40%, one may speculatively assess the extent of climate change. However, for this assessment our current available data is too short and does not include enough time from periods before the temperature trends. An assessment of the relation between the network stability and climate change might be possible mainly through launching of global climate model “experiments” realizing other climate conditions, which we indeed intend to perform.

A further future outreach of our work can be a mapping between network features (such as network motifs) and known physical processes. Such a mapping was previously shown to exist between an autonomous cluster in the climate network and El Niño. Further structures without such a climate interpretation might point towards physical coupling processes which were not observed earlier.

(I have expanded some acronyms and deleted some reference numbers.)

Finally, here are two nice graphs showing the average link strength as a function of distance. The first is based on four climate networks for Zone 1, the southern half of South America:

The second is based on four climate networks for Zone 9, a big patch of the Pacific north of the Equator which roughly corresponds to the ‘El Niño basin’:

As we expect, temperatures and geopotential heights at two points get less correlated as the points get farther apart. But the rate at which the correlation drops off conveys interesting information! Graham Jones has made some interesting charts of this for the rectangle of the Pacific that Ludescher et al use for El Niño prediction, and I’ll show you those next time.

### The series so far

El Niño project (part 1): basic introduction to El Niño and our project here.

El Niño project (part 2): introduction to the physics of El Niño.

El Niño project (part 3): summary of the work of Ludescher et al.

El Niño project (part 4): how Graham Jones replicated the work by Ludescher et al, using software written in R.

El Niño project (part 5): how to download R and use it to get files of climate data.

El Niño project (part 6): Steve Wenner’s statistical analysis of the work of Ludescher et al.

El Niño project (part 7): the definition of El Niño.

El Niño project (part 8): Berezin et al on the stability of climate networks.

## Exploring Climate Data (Part 2)

16 September, 2014

guest post by Blake Pollard

I have been learning to make animations using R. This is an animation of the profile of the surface air temperature at the equator. So, the x axis here is the longitude, approximately from 120° E to 280° E. I pulled the data from the region that Graham Jones specified in his code on GitHub: it’s the equatorial line in the region that Ludescher et al used:

For this animation I tried to show the 1997-1998 El Niño. Typically the Pacific is much cooler near South America, due to the upwelling of deep cold water:

(Click for more information.) That part of the Pacific gets even cooler during La Niña:

But it warms up during El Niños:

You can see that in the surface air temperature during the 1997-1998 El Niño, although by summer of 1998 things seem to be getting back to normal:

I want to practice making animations like this. I could make a much prettier and better-labelled animation that ran all the way from 1948 to today, but I wanted to think a little about what exactly is best to plot if we want to use it as an aid to understanding some of this El Niño business.
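If you want to try something similar, here is a bare-bones sketch in R of this kind of frame-by-frame animation, using made-up numbers in place of the real temperature data (so the shapes mean nothing; it just shows the mechanics, and it is not the actual script used above):

    # Plot a temperature profile along the equator, one frame per day.
    lons  <- seq(120, 280, by = 2.5)                              # degrees East
    temps <- 300 + 2 * sin(outer(1:365, seq_along(lons)) / 50)    # fake data
    for (day in 1:365) {
      plot(lons, temps[day, ], type = "l", ylim = range(temps),
           xlab = "longitude (°E)", ylab = "surface air temperature (K)",
           main = paste("day", day))
      Sys.sleep(0.05)   # crude pause so the plot window shows an animation
    }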

## El Niño Project (Part 7)

18 August, 2014

So, we’ve seen that Ludescher et al have a way to predict El Niños. But there’s something a bit funny: their definition of El Niño is not the standard one!

Precisely defining a complicated climate phenomenon like El Niño is a tricky business. Lots of different things tend to happen when an El Niño occurs. In 1997-1998, we saw these:

But what if just some of these things happen? Do we still have an El Niño or not? Is there a right answer to this question, or is it partially a matter of taste?

A related puzzle: is El Niño a single phenomenon, or several? Could there be several kinds of El Niño? Some people say there are.

Sometime I’ll have to talk about this. But today let’s start with the basics: the standard definition of El Niño. Let’s see how this differs from Ludescher et al’s definition.

### The standard definition

The most standard definitions use the Oceanic Niño Index or ONI, which is the running 3-month mean of the Niño 3.4 index:

• An El Niño occurs when the ONI is over 0.5 °C for at least 5 months in a row.

• A La Niña occurs when the ONI is below -0.5 °C for at least 5 months in a row.

Of course I should also say exactly what the ‘Niño 3.4 index’ is, and what the ‘running 3-month mean’ is.

The Niño 3.4 index is the area-averaged, time-averaged sea surface temperature anomaly for a given month in the region 5°S-5°N and 170°-120°W:

Here anomaly means that we take the area-averaged, time-averaged sea surface temperature for a given month — say February — and subtract off the historical average of this quantity — that is, for Februaries of other years on record.

If you’re clever you can already see room for subtleties and disagreements. For example, you can get sea surface temperatures in the Niño 3.4 region here:

Niño 3.4 data since 1870 calculated from the HadISST1, NOAA. Discussed in N. A. Rayner et al, Global analyses of sea surface temperature, sea ice, and night marine air temperature since the late nineteenth century, J. Geophys. Res. 108 (2003), 4407.

However, they don’t actually provide the Niño 3.4 index.

You can get the Niño 3.4 index here:

You can also get it from here:

Monthly Niño 3.4 index, Climate Prediction Center, National Weather Service.

The actual temperatures in Celsius on the two websites are quite close — but the anomalies are rather different, because the second one ‘subtracts off the historical average’ in a way that takes global warming into account. For example, to compute the Niño 3.4 index in June 1952, instead of taking the average temperature that month and subtracting off the average temperature for all Junes on record, they subtract off the average for Junes in the period 1936-1965. Averages for different periods are shown here:

You can see how these curves move up over time: that’s global warming! It’s interesting that they go up fastest during the cold part of the year. It’s also interesting to see how gentle the seasons are in this part of the world. In the old days, the average monthly temperatures ranged from 26.2 °C in the winter to 27.5 °C in the summer — a mere 1.3 °C fluctuation.

Finally, to compute the ONI in a given month, we take the average of the Niño 3.4 index in that month, the month before, and the month after. This definition of running 3-month mean has a funny feature: we can’t know the ONI for this month until next month!
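To make the recipe concrete, here is a small R sketch that computes the ONI from a vector of monthly Niño 3.4 values and flags El Niño episodes. Here nino34 is a hypothetical vector of monthly index values; this is not any agency’s official code.

    # ONI: running 3-month mean of the Nino 3.4 index (NA at the two ends).
    oni <- function(nino34) {
      n <- length(nino34)
      c(NA, sapply(2:(n - 1), function(i) mean(nino34[(i - 1):(i + 1)])), NA)
    }

    # An El Nino episode: ONI over 0.5 for at least 5 months in a row.
    in.el.nino <- function(oni.values, threshold = 0.5, run = 5) {
      above <- !is.na(oni.values) & oni.values > threshold
      r <- rle(above)
      r$values <- r$values & (r$lengths >= run)   # keep only long enough runs
      inverse.rle(r)                              # TRUE for months inside an episode
    }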

You can get a table of the ONI here:

Cold and warm episodes by season, Climate Prediction Center, National Weather Service.

### Ludescher et al

Now let’s compare Ludescher et al. They say there’s an El Niño when the Niño 3.4 index is over 0.5°C for at least 5 months in a row. By not using the ONI — by using the Niño 3.4 index instead of its 3-month running mean — they could be counting some short ‘spikes’ in the Niño 3.4 index as El Niños, that wouldn’t count as El Niños by the usual definition.

I haven’t carefully checked to see how much changing the definition would affect the success rate of their predictions. To be fair, we should also let them change the value of their parameter θ, which is tuned to be good for predicting El Niños in their setup. But we can see that there could be some ‘spike El Niños’ in this graph of theirs, that might go away with a different definition. These are places where the red line stays above the horizontal line for at least 5 months, but not much longer:

Let’s look at the spike around 1975. See that green arrow at the beginning of 1975? That means Ludescher et al are claiming to successfully predict an El Niño sometime in the next calendar year. We can zoom in for a better look:

The tiny blue bumps are where the Niño 3.4 index exceeds 0.5.

Let’s compare the ONI as computed by the National Weather Service, month by month, with El Niños in red and La Niñas in blue:

1975: 0.5, -0.5, -0.6, -0.7, -0.8, -1.0, -1.1, -1.2, -1.4, -1.5, -1.6, -1.7

1976: -1.5, -1.1, -0.7, -0.5, -0.3, -0.1, 0.2, 0.4, 0.6, 0.7, 0.8, 0.8

1977: 0.6, 0.6, 0.3, 0.3, 0.3, 0.4, 0.4, 0.4, 0.5, 0.7, 0.8, 0.8

1978: 0.7, 0.5, 0.1, -0.2, -0.3, -0.3, -0.3, -0.4, -0.4, -0.3, -0.1, -0.1

So indeed an El Niño started in September 1976. The ONI only stayed above 0.5 for 6 months, but that’s enough. Ludescher and company luck out!

Just for fun, let’s look at the National Weather Service Niño 3.4 index to see what that’s like:

1975: -0.33, -0.48, -0.72, -0.54, -0.68, -1.17, -1.07, -1.19, -1.36, -1.69, -1.45, -1.76

1976: -1.78, -1.10, -0.55, -0.53, -0.33, -0.10, 0.20, 0.39, 0.49, 0.88, 0.85, 0.63

So, this exceeded 0.5 in October 1976. That’s when Ludescher et al would say the El Niño starts, if they used the National Weather Service data.

Let’s also compare the NCAR Niño 3.4 index:

1975: -0.698, -0.592, -0.579, -0.801, -1.025, -1.205, -1.435, -1.620, -1.699, -1.855, -2.041, -1.960

1976: -1.708, -1.407, -1.026, -0.477, -0.095, 0.167, 0.465, 0.805, 1.039, 1.137, 1.290, 1.253

It’s pretty different! But it also gives an El Niño in 1976 according to Ludescher et al’s definition: the Niño 3.4 index exceeds 0.5 starting in August 1976.

### For further study

This time we didn’t get into the interesting question of why one definition of El Niño is better than another. For that, try:

• Kevin E. Trenberth, The definition of El Niño, Bulletin of the American Meteorological Society 78 (1997), 2771–2777.

There could also be fundamentally different kinds of El Niño. For example, besides the usual sort where high sea surface temperatures are centered in the Niño 3.4 region, there could be another kind centered farther west near the International Date Line. This is called the dateline El Niño or El Niño Modoki. For more, try this:

• Nathaniel C. Johnson, How many ENSO flavors can we distinguish?, Journal of Climate 26 (2013), 4816-4827.

which has lots of references to earlier work. Here, to whet your appetite, is his picture showing the 9 most common patterns of sea surface temperature anomalies in the Pacific:

At the bottom of each is a percentage showing how frequently that pattern has occurred from 1950 to 2011. To get these pictures Johnson used something called a ‘self-organizing map analysis’ – a fairly new sort of cluster analysis done using neural networks. This is the sort of thing I hope we get into as our project progresses!

### The series so far

Just in case you want to get to old articles, here’s the story so far:

El Niño project (part 1): basic introduction to El Niño and our project here.

El Niño project (part 2): introduction to the physics of El Niño.

El Niño project (part 3): summary of the work of Ludescher et al.

El Niño project (part 4): how Graham Jones replicated the work by Ludescher et al, using software written in R.

El Niño project (part 5): how to download R and use it to get files of climate data.

El Niño project (part 6): Steve Wenner’s statistical analysis of the work of Ludescher et al.

El Niño project (part 7): the definition of El Niño.

El Niño project (part 8): Berezin et al on the stability of climate networks.

## Exploring Climate Data (Part 1)

1 August, 2014

joint with Dara O Shayda

Emboldened by our experiments in El Niño analysis and prediction, people in the Azimuth Code Project have been starting to analyze weather and climate data. A lot of this work is exploratory, with no big conclusions. But it’s still interesting! So, let’s try some blog articles where we present this work.

This one will be about the air pressure on the island of Tahiti and in a city called Darwin in Australia: how they’re correlated, and how each one varies. This article will also be a quick introduction to some basic statistics, as well as ‘continuous wavelet transforms’.

### Darwin, Tahiti and El Niños

The El Niño Southern Oscillation is often studied using the air pressure in Darwin, Australia versus the air pressure in Tahiti. When there’s an El Niño, it gets stormy in the eastern Pacific so the air pressures tend to be lower in Tahiti and higher in Darwin. When there’s a La Niña, it’s the other way around:

The Southern Oscillation Index or SOI is a normalized version of the monthly mean air pressure anomaly in Tahiti minus that in Darwin. Here anomaly means we subtract off the mean, and normalized means that we divide by the standard deviation.
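In R, the simplified definition above looks roughly like this. It assumes tahiti and darwin are numeric vectors of monthly mean air pressures (hypothetical names); the official SOI actually standardizes each calendar month separately, so this is only a sketch of the idea.

    # Simplified SOI: normalized Tahiti-minus-Darwin pressure anomaly.
    soi <- function(tahiti, darwin) {
      d <- (tahiti - mean(tahiti)) - (darwin - mean(darwin))   # anomaly difference
      d / sd(d)                                                # normalize
    }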

So, the SOI tends to be negative when there’s an El Niño. On the other hand, when there’s an El Niño the Niño 3.4 index tends to be positive—this says it’s hotter than usual in a certain patch of the Pacific.

Here you can see how this works:

When the Niño 3.4 index is positive, the SOI tends to be negative, and vice versa!

It might be fun to explore precisely how well correlated they are. You can get the data to do that by clicking on the links above.

But here’s another question: how similar are the air pressure anomalies in Darwin and in Tahiti? Do we really need to take their difference, or are they so strongly anticorrelated that either one would be enough to detect an El Niño?

You can get the data to answer such questions here:

Southern Oscillation Index based upon annual standardization, Climate Analysis Section, NCAR/UCAR. This includes links to monthly sea level pressure anomalies in Darwin and Tahiti, in either ASCII format (click the second two links) or netCDF format (click the first one and read the explanation).

In fact this website has some nice graphs already made, which I might as well show you! Here’s the SOI and also the sum of the air pressure anomalies in Darwin and Tahiti, normalized in some way:

(Click to enlarge.)

If the sum were zero, the air pressure anomalies in Darwin and Tahiti would contain the same information and life would be simple. But it’s not!

How similar in character are the air pressure anomalies in Darwin and Tahiti? There are many ways to study this question. Dara tackled it by taking the air pressure anomaly data from 1866 to 2012 and computing some ‘continuous wavelet transforms’ of these air pressure anomalies. This is a good excuse for explaining how a continuous wavelet transform works.

### Very basic statistics

It helps to start with some very basic statistics. Suppose you have a list of numbers

$x = (x_1, \dots, x_n)$

You probably know how to take their mean, or average. People often write this with angle brackets:

$\displaystyle{ \langle x \rangle = \frac{1}{n} \sum_{i = 1}^n x_i }$

You can also calculate the mean of their squares:

$\displaystyle{ \langle x^2 \rangle = \frac{1}{n} \sum_{i = 1}^n x_i^2 }$

If you were naive you might think $\langle x^2 \rangle = \langle x \rangle^2,$ but in fact we have:

$\langle x^2 \rangle \ge \langle x \rangle^2$

and they’re equal only if all the $x_i$ are the same. The point is that if the numbers $x_i$ are spread out, the squares of the big ones (positive or negative) contribute more to the average of the squares than if we had averaged them out before squaring. The difference

$\langle x^2 \rangle - \langle x \rangle^2$

is called the variance; it says how spread out our numbers are. The square root of the variance is the standard deviation:

$\sigma_x = \sqrt{\langle x^2 \rangle - \langle x \rangle^2 }$

and this has the slight advantage that if you multiply all the numbers $x_i$ by some constant $c,$ the standard deviation gets multiplied by $|c|.$ (The variance gets multiplied by $c^2.$)
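Here’s a tiny check of these formulas in R, for a concrete list of numbers:

    x <- c(2, 4, 4, 4, 5, 5, 7, 9)
    mean(x^2) - mean(x)^2           # variance: 29 - 25 = 4
    sqrt(mean(x^2) - mean(x)^2)     # standard deviation: 2
    # R's built-in var() and sd() divide by n-1 rather than n, so they
    # give slightly larger values for the same list.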

We can generalize the variance to a situation where we have two lists of numbers:

$x = (x_1, \dots, x_n)$

$y = (y_1, \dots, y_n)$

Namely, we can form the covariance

$\langle x y \rangle - \langle x \rangle \langle y \rangle$

This reduces to the variance when $x = y.$ It measures how much $x$ and $y$ vary together — ‘hand in hand’, as it were. A bit more precisely: if $x_i$ is greater than its mean value mainly for $i$ such that $y_i$ is greater than its mean value, the covariance is positive. On the other hand, if $x_i$ tends to be greater than average when $y_i$ is smaller than average — like with the air pressures at Darwin and Tahiti — the covariance will be negative.

For example, if

$x = (1,-1), \quad y = (1,-1)$

then they ‘vary hand in hand’, and the covariance

$\langle x y \rangle - \langle x \rangle \langle y \rangle = 1 - 0 = 1$

is positive. But if

$x = (1,-1), \quad y = (-1,1)$

then one is positive when the other is negative, so the covariance

$\langle x y \rangle - \langle x \rangle \langle y \rangle = -1 - 0 = -1$

is negative.

Of course the covariance will get bigger if we multiply both $x$ and $y$ by some big number. If we don’t want this effect, we can normalize the covariance and get the correlation:

$\displaystyle{ \frac{ \langle x y \rangle - \langle x \rangle \langle y \rangle }{\sigma_x \sigma_y} }$

which will always be between $-1$ and $1.$

For example, if we compute the correlation between the air pressure anomalies at Darwin and Tahiti, measured monthly from 1866 to 2012, we get -0.253727. This indicates that when one goes up, the other tends to go down. But since we’re not getting -1, it means they’re not completely locked into a linear relationship where one is some negative number times the other.
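Here is what that computation looks like in R, written to match the formulas above (a sketch, not the actual script used for the number quoted):

    # Correlation computed exactly as in the formula above.  Assumes x and
    # y are numeric vectors of equal length.
    correlation <- function(x, y) {
      sd.pop <- function(v) sqrt(mean(v^2) - mean(v)^2)   # population standard deviation
      (mean(x * y) - mean(x) * mean(y)) / (sd.pop(x) * sd.pop(y))
    }
    # For the Darwin and Tahiti anomaly series this gives about -0.25.
    # R's built-in cor() returns the same value, since the n versus n-1
    # factors cancel between numerator and denominator.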

Okay, we’re almost ready for continuous wavelet transforms! Here is the main thing we need to know. If the mean of either $x$ or $y$ is zero, the formula for covariance simplifies a lot, to

$\displaystyle{ \langle x y \rangle = \frac{1}{n} \sum_{i = 1}^n x_i y_i }$

So, this quantity says how much the numbers $x_i$ ‘vary hand in hand’ with the numbers $y_i,$ in the special case when one (or both) has mean zero.

We can do something similar if $x, y : \mathbb{R} \to \mathbb{R}$ are functions of time defined for all real numbers $t.$ The sum becomes an integral, and we have to give up on dividing by $n.$ We get:

$\displaystyle{ \int_{-\infty}^\infty x(t) y(t)\; d t }$

This is called the inner product of the functions $x$ and $y,$ and often it’s written $\langle x, y \rangle,$ but it’s a lot like the covariance.

### Continuous wavelet transforms

What are continuous wavelet transforms, and why should we care?

People have lots of tricks for studying ‘signals’, like series of numbers $x_i$ or functions $x : \mathbb{R} \to \mathbb{R}.$ One method is to ‘transform’ the signal in a way that reveals useful information. The Fourier transform decomposes a signal into sines and cosines of different frequencies. This lets us see how much power the signal has at different frequencies, but it doesn’t reveal how the power at different frequencies changes with time. For that we should use something else, like the Gabor transform explained by Blake Pollard in a previous post.

Sines and cosines are great, but we might want to look for other patterns in a signal. A ‘continuous wavelet transform’ lets us scan a signal for appearances of a given pattern at different times and also at different time scales: a pattern could go by quickly, or in a stretched out slow way.

To implement the continuous wavelet transform, we need a signal and a pattern to look for. The signal could be a function $x : \mathbb{R} \to \mathbb{R}.$ The pattern would then be another function $y: \mathbb{R} \to \mathbb{R},$ usually called a wavelet.

Here’s an example of a wavelet:

If we’re in a relaxed mood, we could call any function that looks like a bump with wiggles in it a wavelet. There are lots of famous wavelets, but this particular one is the fourth derivative of a certain Gaussian. Mathematica calls this particular wavelet DGaussianWavelet[4], and you can look up the formula under ‘Details’ on their webpage.

However, the exact formula doesn’t matter at all now! If we call this wavelet $y,$ all that matters is that it’s a bump with wiggles on it, and that its mean value is 0, or more precisely:

$\displaystyle{ \int_{-\infty}^\infty y(t) \; d t = 0 }$

As we saw in the last section, this fact lets us take our function $x$ and the wavelet $y$ and see how much they ‘vary hand in hand’ simply by computing their inner product:

$\displaystyle{ \langle x , y \rangle = \int_{-\infty}^\infty x(t) y(t)\; d t }$

Loosely speaking, this measures the ‘amount of $y$-shaped wiggle in the function $x$’. It’s amazing how hard it is to say something in plain English that perfectly captures the meaning of a simple formula like the above one—so take the quoted phrase with a huge grain of salt. But it gives a rough intuition.

Our wavelet $y$ happens to be centered at $t = 0.$ However, we might be interested in $y$-shaped wiggles that are centered not at zero but at some other number $s.$ We could detect these by shifting the function $y$ before taking its inner product with $x$:

$\displaystyle{ \int_{-\infty}^\infty x(t) y(t-s)\; d t }$

We could also be interested in measuring the amount of some stretched-out or squashed version of a $y$-shaped wiggle in the function $x.$ Again we could do this by changing $y$ before taking its inner product with $x$:

$\displaystyle{ \int_{-\infty}^\infty x(t) \; y\left(\frac{t}{P}\right) \; d t }$

When $P$ is big, we get a stretched-out version of $y.$ People sometimes call $P$ the period, since the period of the wiggles in $y$ will be proportional to this (though usually not equal to it).

Finally, we can combine these ideas, and compute

$\displaystyle{ \int_{-\infty}^\infty x(t) \; y\left(\frac{t- s}{P}\right)\; dt }$

This is a function of the shift $s$ and period $P$ which says how much of the $s$-shifted, $P$-stretched wavelet $y$ is lurking in the function $x.$ It’s a version of the continuous wavelet transform!

Mathematica implements this idea for time series, meaning lists of numbers $x = (x_1,\dots,x_n)$ instead of functions $x : \mathbb{R} \to \mathbb{R}.$ The idea is that we think of the numbers as samples of a function $x$:

$x_1 = x(\Delta t)$

$x_2 = x(2 \Delta t)$

and so on, where $\Delta t$ is some time step, and replace the integral above by a suitable sum. Mathematica has a function ContinuousWaveletTransform that does this, giving

$\displaystyle{ w(s,P) = \frac{1}{\sqrt{P}} \sum_{i = 1}^n x_i \; y\left(\frac{i \Delta t - s}{P}\right) }$

The factor of $1/\sqrt{P}$ in front is a useful extra trick: it’s the right way to compensate for the fact that when you stretch out your wavelet $y$ by a factor of $P,$ it gets bigger. So, when we’re doing integrals, we should define the continuous wavelet transform of $x$ by:

$\displaystyle{ w(s,P) = \frac{1}{\sqrt{P}} \int_{-\infty}^\infty x(t) \; y\left(\frac{t- s}{P}\right)\; dt }$
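Here is a hedged R sketch of the discretized transform above. It uses a simple stand-in wavelet (the ‘Mexican hat’, a bump with wiggles whose integral is zero), since Mathematica’s DGaussianWavelet[4] isn’t built into R; x is assumed to be a numeric vector of samples.

    # A bump with wiggles and mean zero: the 'Mexican hat' wavelet.
    mexican.hat <- function(t) (1 - t^2) * exp(-t^2 / 2)

    # Continuous wavelet transform of the samples x at shift s and period P.
    cwt <- function(x, s, P, wavelet = mexican.hat, dt = 1) {
      t <- (1:length(x)) * dt
      sum(x * wavelet((t - s) / P)) / sqrt(P)
    }

    # Scan over shifts and periods to get the numbers behind a scalogram:
    # w <- outer(shifts, periods, Vectorize(function(s, P) cwt(x, s, P)))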

### The results

Dara Shayda started with the air pressure anomaly at Darwin and Tahiti, measured monthly from 1866 to 2012. Taking DGaussianWavelet[4] as his wavelet, he computed the continuous wavelet transform $w(s,P)$ as above. To show us the answer, he created a scalogram:

This is a 2-dimensional color plot showing roughly how big the continuous wavelet transform $w(s,P)$ is for different shifts $s$ and periods $P.$ Blue means it’s very small, green means it’s bigger, yellow means even bigger and red means very large.

Tahiti gave this:

You’ll notice that the patterns at Darwin and Tahiti are similar in character, but notably different in detail. For example, the red spots, where our chosen wavelet shows up strongly with period of order ~100 months, occur at different times.

Puzzle 1. What is the meaning of the ‘spikes’ in these scalograms? What sort of signal would give a spike of this sort?

Puzzle 2. Do a Gabor transform, also known as a ‘windowed Fourier transform’, of the same data. Blake Pollard explained the Gabor transform in his article Milankovitch vs the Ice Ages. This is a way to see how much a signal wiggles at a given frequency at a given time: we multiply the signal by a shifted Gaussian and then take its Fourier transform.

Puzzle 3. Read about continuous wavelet transforms. If we want to reconstruct our signal $x$ from its continuous wavelet transform, why should we use a wavelet $y$ with

$\displaystyle{\int_{-\infty}^\infty y(t) \; d t = 0 ? }$

In fact we want a somewhat stronger condition, which is implied by the above equation when the Fourier transform of $y$ is smooth and integrable:

Continuous wavelet transform, Wikipedia.

### Another way to understand correlations

David Tweed mentioned another approach from signal processing to understanding the quantity

$\displaystyle{ \langle x y \rangle = \frac{1}{n} \sum_{i = 1}^n x_i y_i }$

If we’ve got two lists of data $x$ and $y$ that we want to compare to see if they behave similarly, the first thing we ought to do is multiplicatively scale each one so they’re of comparable magnitude. There are various possibilities for assigning a scale, but a reasonable one is to ensure they have equal ‘energy’

$\displaystyle{ \sum_{i=1}^n x_i^2 = \sum_{i=1}^n y_i^2 }$

(This can be achieved by dividing each list by its standard deviation, which is equivalent to what was done in the main derivation above.) Once we’ve done that then it’s clear that looking at

$\displaystyle{ \sum_{i=1}^n (x_i-y_i)^2 }$

gives small values when they have a very good match and progressively bigger values as they become less similar. Observe that

$\begin{array}{ccl} \displaystyle{\sum_{i=1}^n (x_i-y_i)^2 } &=& \displaystyle{ \sum_{i=1}^n (x_i^2 - 2 x_i y_i + y_i^2) }\\ &=& \displaystyle{ \sum_{i=1}^n x_i^2 - 2 \sum_{i=1}^n x_i y_i + \sum_{i=1}^n y_i^2 } \end{array}$

Since we’ve scaled things so that $\sum_{i=1}^n x_i^2$ and $\sum_{i=1}^n y_i^2$ are constants, we can see that when $\sum_{i=1}^n x_i y_i$ becomes bigger,

$\displaystyle{ \sum_{i=1}^n (x_i-y_i)^2 }$

becomes smaller. So,

$\displaystyle{\sum_{i=1}^n x_i y_i}$

serves as a measure of how close the lists are, under these assumptions.
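A quick numeric sanity check of this identity in R, with two arbitrary short lists rescaled to equal energy:

    x <- c(1, -2, 3, 0.5);  y <- c(2, -1, 2.5, 1)
    x <- x / sqrt(sum(x^2));  y <- y / sqrt(sum(y^2))   # equal 'energy' (both 1)
    sum((x - y)^2)                          # distance-squared between the lists
    sum(x^2) + sum(y^2) - 2 * sum(x * y)    # same number, by the expansion above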

## El Niño Project (Part 6)

23 July, 2014

guest post by Steven Wenner

Hi, I’m Steve Wenner.

I’m an industrial statistician with over 40 years of experience in a wide range of applications (quality, reliability, product development, consumer research, biostatistics); but, somehow, time series only rarely crossed my path. Currently I’m working for a large consumer products company.

My undergraduate degree is in physics, and I also have a master’s in pure math. I never could reconcile how physicists used math (explain that Dirac delta function to me again in math terms? Heaviside calculus? On the other hand, I thought category theory was abstract nonsense until John showed me otherwise!). Anyway, I had to admit that I lacked the talent to pursue pure math or theoretical physics, so I became a statistician. I never regretted it—statistics has provided a very interesting and intellectually challenging career.

I got interested in Ludescher et al’s paper on El Niño prediction by reading Part 3 of this series. I have no expertise in climate science, except for an intense interest in the subject as a concerned citizen. So, I won’t talk about things like how Ludescher et al use a nonstandard definition of ‘El Niño’—that’s a topic for another time. Instead, I’ll look at some statistical aspects of their paper:

• Josef Ludescher, Avi Gozolchiani, Mikhail I. Bogachev, Armin Bunde, Shlomo Havlin, and Hans Joachim Schellnhuber, Very early warning of next El Niño, Proceedings of the National Academy of Sciences, February 2014. (Click title for free version, journal name for official version.)

### Analysis

I downloaded the NOAA adjusted monthly temperature anomaly data and compared the El Niño periods with the charts in this paper. I found what appear to be two errors (“phantom” El Niños) and noted some interesting situations. Some of these are annotated on the images below. Click to enlarge them:

I also listed for each year whether an El Niño initiation was predicted, or not, and whether one actually happened. I did the predictions five ways: first, I listed the authors’ “arrows” as they appeared on their charts, and then I tried to match their predictions by following in turn four sets of rules. Nevertheless, I could not come up with any detailed rules that exactly reproduced the authors’ results.

These were the rules I used:

An El Niño initiation is predicted for a calendar year if during the preceding year the average link strength crossed above the 2.82 threshold. However, we could also invoke additional requirements. Two possibilities are:

1. Preemption rule: the prediction of a new El Niño is canceled if the preceding year ends in an El Niño period.

2. End-of-year rule: the link strength must be above 2.82 at year’s end.

I counted the predictions using all four combinations of these two rules and compared the results to the arrows on the charts.

I defined an “El Niño initiation month” to be a month where the monthly average adjusted temperature anomaly rises to at least 0.5 °C and remains at or above 0.5 °C for at least five months. Note that the NOAA El Niño monthly temperature estimates are rounded to hundredths; and, on occasion, the anomaly is reported as exactly 0.5 °C. I found slightly better agreement with the authors’ El Niño periods if I counted an anomaly of exactly 0.5 °C as satisfying the threshold criterion, instead of using the strictly “greater than” condition.

Anyway, I did some formal hypothesis testing and estimation under all five scenarios. The good news is that under most scenarios the prediction method gave better results than merely guessing. (But, I wonder how many things the authors tried before they settled on their final method? Also, did they do all their work on the learning series, and then only at the end check the validation series—or were they checking both as they went about their investigations?)

The bad news is that the predictions varied with the method, and the methods were rather weak. For instance, in the training series there were 9 El Niño periods in 30 years; the authors’ rules (whatever they were, exactly) found five of the nine. At the same time, they had three false alarms in the 21 years that did not have an El Niño initiated.

I used Fisher’s exact test to compute some p-values. Suppose (as our ‘null hypothesis’) that Ludescher et al’s method does not improve the odds of a successful prediction of an El Nino initiation. What’s the probability of that method getting at least as many predictions right just by chance? Answer: 0.032 – this is marginally more significant than the conventional 1 in 20 chance that is the usual threshold for rejecting a null hypothesis, but still not terribly convincing. This was, by the way, the most significant of the five p-values for the alternative rule sets applied to the learning series.
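For concreteness, here is how that test could be set up in R, using the counts quoted above for the learning series (5 of 9 El Niño initiations predicted, and 3 false alarms in the 21 years without one). This is my reconstruction of the 2×2 table, not Steve’s JMP output.

    #                       El Nino happened / did not
    predictions <- matrix(c(5,  3,    # El Nino predicted
                            4, 18),   # no El Nino predicted
                          nrow = 2, byrow = TRUE)
    fisher.test(predictions, alternative = "greater")$p.value   # about 0.032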

I also computed the “relative risk” statistics for all scenarios; for instance, we are more than three times as likely to see an El Niño initiation if Ludescher et al predict one, than if they predict otherwise (the 90% confidence interval for that ratio is 1.2 to 9.7, with the point estimate 3.4). Here is a screen shot of some statistics for that case:

Again, click to enlarge—but my whole working spreadsheet is available with more details for anyone who wishes to see it. I did the statistical analysis with a program called JMP, a product of the SAS corporation.

My overall impression from all this is that Ludescher et al are suggesting a somewhat arbitrary (and not particularly well-defined) method for revealing the relationship between link strength and El Niño initiation, if, indeed, a relationship exists. Slight variations in the interpretation of their criteria and slight variations in the data result in appreciably different predictions. I wonder if there are better ways to analyze these two correlated time series.

## El Niño Project (Part 5)

12 July, 2014

And now for some comic relief.

Last time I explained how to download some weather data and start analyzing it, using programs written by Graham Jones. When you read that, did you think “Wow, that’s easy!” Or did you think “Huh? Run programs in R? How am I supposed to do that?”

If you’re in the latter group, you’re like me. But I managed to do it. And this is the tale of how. It’s a blow-by-blow account of my first steps, my blunders, my fears.

I hope that if you’re intimidated by programming, my tale will prove that you too can do this stuff… provided you have smart friends, or read this article.

More precisely, I’ll explain how to:

• download and install R, and

• use R to create a file of temperature data for a given latitude/longitude rectangle for a given time interval.

I will not attempt to explain how to program in R.

If you want to copy what I’m doing, please remember that a few details depend on the operating system. Since I don’t care about operating systems, I use a Windows PC. If you use something better, some details will differ for you.

Also: at the end of this article there are some very basic programming puzzles.

First, let me explain a bit about my relation to computers.

I first saw a computer at the Lawrence Hall of Science in Berkeley, back when I was visiting my uncle in the summer of 1978. It was really cool! They had some terminals where you could type programs in BASIC and run them.

I got especially excited when he gave me the book Computer Lib/Dream Machines by Ted Nelson. It espoused the visionary idea that people could write texts on computers all around the world—“hypertexts” where you could click on a link in one and hop to another!

I did more programming the next year in high school, sitting in a concrete block room with a teletype terminal that was connected to a mainframe somewhere far away. I stored my programs on paper tape. But my excitement gradually dwindled, because I was having more fun doing math and physics using just pencil and paper. My own brain was easier to program than the machine. I did not start a computer company. I did not get rich. I learned quantum mechanics, and relativity, and Gödel’s theorem.

Later I did some programming in APL in college, and still later I did a bit in Mathematica in the early 1990s… but nothing much, and nothing sophisticated. Indeed, none of these languages would be the ones you’d choose to explore sophisticated ideas in computation!

I’ve just never been very interested… until now. I now want to do a lot of data analysis. It will be embarrassing to keep asking other people to do all of it for me. I need to learn how to do it myself.

Maybe you’d like to do this stuff too—or at least watch me make a fool of myself. So here’s my tale, from the start.

To use the programs written by Graham, I need to use R, a language currently popular among statisticians. It is not the language my programmer friends would want me to learn—they’d want me to use something like Python. But tough! I can learn that later.

To download R to my Windows PC, I cleverly type download R into Google, and go to the top website it recommends:

I click the big fat button on top saying

Download R 3.1.0 for Windows

and get asked to save a file R-3.1.0-win.exe. I save it in my Downloads folder; it takes a while to download, since it’s 57 megabytes. When I get it, I click on it and follow the easy default installation instructions. My Desktop window now has a little icon on it that says R.

Clicking this, I get an interface where I can type commands after a red

>

symbol. Following Graham’s advice, I start by trying

> 2^(1:8)

which generates a list of powers of 2 from $2^1$ to $2^8$, like this:

[1] 2 4 8 16 32 64 128 256

Then I try

> mean(2^(1:8))

which gives the arithmetic mean of this list. Somewhat more fun is

> plot(rnorm(20))

which plots a bunch of points, apparently 20 standard normal deviates.

When I hear “20 standard normal deviates” I think of the members of a typical math department… but no, those are deviants. Standard normal deviates are random numbers chosen from a Gaussian distribution of mean zero and variance 1.

To do something more interesting, I need to input data.

The papers by Ludescher et al use surface air temperatures in a certain patch of the Pacific, so I want to get ahold of those. They’re here:

NCEP is the National Centers for Environmental Prediction, and NCAR is the National Center for Atmospheric Research. They have a bunch of files here containing worldwide daily average temperatures on a 2.5 degree latitude × 2.5 degree longitude grid (that’s 144 × 73 grid points), from 1948 to 2010. And if you go here, the website will help you get data from within a chosen rectangle in a grid, for a chosen time interval.

These are NetCDF files. NetCDF stands for Network Common Data Form:

NetCDF is a set of software libraries and self-describing, machine-independent data formats that support the creation, access, and sharing of array-oriented scientific data.

According to my student Blake Pollard:

I know about ftp: I’m so old that I know this was around before the web existed. Back then it meant “faster than ponies”. But I need to get R to accept data from these NetCDF files: that’s what scares me!

Graham said that R has a “package” called RNetCDF for using NetCDF files. So, I need to get ahold of this package, download some files in the NetCDF format, and somehow get R to eat those files with the help of this package.

At first I was utterly clueless! However, after a bit of messing around, I notice that right on top of the R interface there’s a menu item called Packages. I boldly click on this and choose Install Package(s).

I am rewarded with an enormous alphabetically ordered list of packages… obviously statisticians have lots of stuff they like to do over and over! I find RNetCDF, click on that and click something like “OK”.

I’m asked if I want to use a “personal library”. I click “no”, and get an error message. So I click “yes”. The computer barfs out some promising text:

    utils:::menuInstallPkgs()
    trying URL 'http://cran.stat.nus.edu.sg/bin/windows/contrib/3.1/RNetCDF_1.6.2-3.zip'
    Content type 'application/zip' length 548584 bytes (535 Kb)
    opened URL
    downloaded 535 Kb

 package ‘RNetCDF’ successfully unpacked and MD5 sums checked 

The downloaded binary packages are in C:\Users\JOHN\AppData\Local\Temp\Rtmp4qJ2h8\downloaded_packages

Success!

But now I need to figure out how to download a file and get R to eat it and digest it with the help of RNetCDF.
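(For reference, here is roughly what reading one of these files with RNetCDF looks like once it has been downloaded. This is a sketch: it assumes the package installed fine and that the temperature variable in these files is named air.)

    library(RNetCDF)
    nc  <- open.nc("air.sig995.1950.nc")   # one year of daily temperatures
    print.nc(nc)                           # list the dimensions and variables
    air <- var.get.nc(nc, "air")           # 3-dimensional array of temperatures in kelvin
    close.nc(nc)
    dim(air)                               # size of the grid and number of days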

At this point my deus ex machina, Graham, descends from the clouds and says:

You can download the files from your browser. It is probably easiest to do that for starters. Put
ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface/
into the browser, then right-click a file and Save link as…

    for (year in 1950:1979) {
      download.file(
        url = paste0("ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface/air.sig995.",
                     year, ".nc"),
        destfile = paste0("air.sig995.", year, ".nc"),
        mode = "wb")
    }

It will put them into the “working directory”, probably C:\Users\JOHN\Documents. You can find the working directory using getwd(), and change it with setwd(). But you must use / not \ in the filepath.

Compared to UNIX, the Windows operating system has the peculiarity of using \ instead of / in path names, but R uses the UNIX conventions even on Windows.

So, after some mistakes, in the R interface I type

> setwd("C:/Users/JOHN/Documents/My Backups/azimuth/el nino")

and then type

> getwd()

to see if I’ve succeeded. I’m rewarded with

[1] "C:/Users/JOHN/Documents/My Backups/azimuth/el nino"

Good!

Then, following Graham’s advice, I cut-and-paste this into the R interface:

    for (year in 1950:1979) {
      download.file(
        url = paste0("ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface/air.sig995.",
                     year, ".nc"),
        destfile = paste0("air.sig995.", year, ".nc"),
        mode = "wb")
    }

It seems to be working! A little bar appears showing how each year’s data is getting downloaded. It chugs away, taking a couple minutes for each year’s worth of data.

### Using R to process NetCDF files

Okay, now I’ve got all the worldwide daily average temperatures on a 2.5 degree latitude × 2.5 degree longitude grid from 1950 to 1979.

The world is MINE!

But what do I do with it? Graham’s advice is again essential, along with a little R program, or script, that he wrote:

The R script netcdf-convertor.R from

https://github.com/azimuth-project/el-nino/tree/master/R

will eat the file, digest it, and spit it out again. It contains instructions.

I go to this URL, which is on GitHub, a popular free web-based service for software development. You can store programs here, edit them, and GitHub will help you keep track of the different versions. I know almost nothing about this stuff, but I’ve seen it before, so I’m not intimidated.

I click on the blue thing that says netcdf-convertor.R and see something that looks like the right script. Unfortunately I can’t see how to download it! I eventually see a button I’d overlooked, cryptically labelled “Raw”. I realize that since I don’t want a roasted or oven-broiled piece of software, I should click on this. I indeed succeed in downloading netcdf-convertor.R this way. Graham later says I could have done something better, but oh well. I’m just happy nothing has actually exploded yet.

Once I’ve downloaded this script, I open it using a text processor and look at it. At the top are a bunch of comments written by Graham:

    ######################################################
    ######################################################

    # You should be able to use this by editing this
    # section only.

    setwd("C:/Users/Work/AAA/Programming/ProgramOutput/Nino")

    lat.range <- 13:14
    lon.range <- 142:143

    firstyear <- 1957
    lastyear <- 1958

    outputfilename <- paste0("Scotland-", firstyear, "-", lastyear, ".txt")

    ######################################################
    ######################################################

    # Explanation

    # 1. Use setwd() to set the working directory to the one
    # containing the .nc files such as air.sig995.1951.nc.
    # Example:
    # setwd("C:/Users/Work/AAA/Programming/ProgramOutput/Nino")

    # 2. Supply the latitude and longitude range. The NOAA data
    # is every 2.5 degrees. The ranges are supplied as the number
    # of steps of this size. For latitude, 1 means North Pole,
    # 73 means South Pole. For longitude, 1 means 0 degrees East,
    # 37 is 90E, 73 is 180, 109 is 90W or 270E, 144 is 2.5W.

    # These roughly cover Scotland:
    # lat.range <- 13:14
    # lon.range <- 142:143

    # These are the area used by Ludescher et al, 2013. It is
    # 27x69 points which are then subsampled to 9 by 23:
    # lat.range <- 24:50
    # lon.range <- 48:116

    # 3. Supply the years:
    # firstyear <- 1950
    # lastyear <- 1952

    # 4. Supply the output name as a text string. paste0()
    # concatenates strings which you may find handy:
    # outputfilename <- paste0("Pacific-", firstyear, "-", lastyear, ".txt")

    ######################################################
    ######################################################

    # Example of output
    # S013E142 S013E143 S014E142 S014E143
    # Y1950P001 281.60000272654 281.570002727211 281.60000272654 280.970002740622
    # Y1950P002 280.740002745762 280.270002756268 281.070002738386 280.49000275135
    # Y1950P003 280.100002760068 278.820002788678 281.120002737269 280.070002760738
    # Y1950P004 281.070002738386 279.420002775267 281.620002726093 280.640002747998
    # ...
    # Y1950P193 285.450002640486 285.290002644062 285.720002634451 285.75000263378
    # Y1950P194 285.570002637804 285.640002636239 286.070002626628 286.570002615452
    # Y1950P195 285.92000262998 286.220002623275 286.200002623722 286.620002614334
    # ...
    # Y1950P364 276.100002849475 275.350002866238 276.37000284344 275.200002869591
    # Y1950P365 276.990002829581 275.820002855733 276.020002851263 274.72000288032
    # Y1951P001 278.220002802089 277.470002818853 276.700002836064 275.870002854615
    # Y1951P002 277.750002812594 276.890002831817 276.650002837181 275.520002862439
    # ...
    # Y1952P365 280.35000275448 280.120002759621 280.370002754033 279.390002775937

    # There is one row for each day, and 365 days in each year
    # (leap days are omitted). In each row, you have temperatures
    # in Kelvin for each grid point in a rectangle.

    # S13E142 means 13 steps South from the North Pole and 142
    # steps East from Greenwich. The points are in reading order,
    # starting at the top-left (Northmost, Westmost) and going
    # along the top row first.

    # Y1950P001 means year 1950, day 1. (P because longer periods
    # might be used later.)

    ######################################################
    ######################################################

The instructions are admirably detailed concerning what I should do, but they don't say where the output will appear when I do it. This makes me nervous. I guess I should just try it. After all, the program is not called DestroyTheWorld.

Unfortunately, at this point a lot of things start acting weird.

It's too complicated and boring to explain in detail, but basically, I keep getting a file missing error message. I don't understand why this happens under some conditions and not others. I try lots of experiments.

Eventually I discover that one year of temperature data failed to download—the year 1949, right after the first year available! So, I'm getting the error message whenever I try to do anything involving that year of data.

To fix the problem, I simply download the 1949 data by hand from here:

(You can open ftp addresses in a web browser just like http addresses.) I put it in my working directory for R, and everything is fine again. Whew!

By the time I get this file, I sort of know what to do—after all, I've spent about an hour trying lots of different things.

I decide to create a file listing temperatures near where I live in Riverside from 1948 to 1979. To do this, I open Graham's script netcdf-convertor.R in a word processor and change this section:
 setwd("C:/Users/Work/AAA/Programming/ProgramOutput/Nino") lat.range <- 13:14 lon.range <- 142:143 firstyear <- 1957 lastyear <- 1958 outputfilename <- paste0("Scotland-", firstyear, "-", lastyear, ".txt") 

to this:
 setwd("C:/Users/JOHN/Documents/My Backups/azimuth/el nino") lat.range <- 23:23 lon.range <- 98:98 firstyear <- 1948 lastyear <- 1979 outputfilename <- paste0("Riverside-", firstyear, "-", lastyear, ".txt") 

Why? Well, I want it to put the file in my working directory. I want the years from 1948 to 1979. And I want temperature data from where I live!

Googling the info, I see Riverside, California is at 33.9481° N, 117.3961° W. 34° N is about 56 degrees south of the North Pole, which is 22 steps of size 2.5°. And because some idiot decided everyone should count starting at 1 instead of 0 even in contexts like this, the North Pole itself is step 1, not step 0… so Riverside is latitude step 23. That's why I write:

lat.range <- 23:23

Similarly, 117.5° W is 242.5° E, which is 97 steps of size 2.5°… which counts as step 98 according to this braindead system. That's why I write:

lon.range <- 98:98
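If you'd rather not do this arithmetic in your head, here is a hypothetical little helper (not part of Graham's script) that turns an ordinary latitude and longitude into these step numbers:

# latitude in degrees north (negative for south),
# longitude in degrees east of Greenwich (negative for west)
lat.step <- function(lat) round((90 - lat)/2.5) + 1
lon.step <- function(lon) round((lon %% 360)/2.5) + 1

lat.step(33.9481)    # 23: Riverside's latitude step
lon.step(-117.3961)  # 98: Riverside's longitude step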

Having done this, I save the file netcdf-convertor.R under another name, Riverside.R.

And then I do some stuff that took some fiddling around to discover.

First, in my R interface I go to the menu item File, at far left, and click on Open script. It lets me browse around, so I go to my working directory for R and choose Riverside.R. A little window called R editor opens up in my R interface, containing this script.

I'm probably not doing this optimally, but I can now right-click on the R editor and see a menu with a choice called Select all. If I click this, everything in the window turns blue. Then I can right-click again and choose Run line or selection. And the script runs!

Voilà!

It huffs and puffs, and then stops. I peek in my working directory, and see that a file called

Riverside-1948-1979.txt

has been created. When I open it, it has lots of lines, starting with these:
S023E098
Y1948P001 279.95
Y1948P002 280.14
Y1948P003 282.27
Y1948P004 283.97
Y1948P005 284.27
Y1948P006 286.97

As Graham promised, each line has a year and day label, followed by a vector… which in my case is just a single number, since I only wanted the temperature in one location. I’m hoping this is the temperature near Riverside, in kelvin.

### A small experiment

To see if this is working, I’d like to plot these temperatures and see if they make sense. Unfortunately I have no idea how to get R to take a file containing data of the sort I have and plot it! I need to learn how, but right now I’m exhausted, so I use another method to get the job done—a method too clumsy and embarrassing to describe here. (Hint: it involves the word “Excel”.)

I do a few things, but here’s the most interesting one—namely, not very interesting. I plot the temperatures for 1963:

I compare it to some publicly available data, not from Riverside, but from nearby Los Angeles:

As you can see, there was a cold day on January 13th, when the temperature dropped to 33°F. That seems to be visible on the graph I made, and looking at the data from which I made the graph, I see the temperature dropped to 251.4 kelvin on the 13th: that’s -7°F, very cold for here. It does get colder around Riverside than in Los Angeles in the winter, since it’s a desert, with temperatures not buffered by the ocean. So, this does seem compatible with the public records. That’s mildly reassuring.

But other features of the graph don’t match, and I’m not quite sure if they should or not. So, all this is very tentative and unimpressive. However, I’ve managed to get over some of my worst fears, download some temperature data, and graph it! Now I need to learn how to use R to do statistics with this data, and graph it in a better way.

### Puzzles

You can help me out by answering these puzzles. Later I might pose puzzles where you can help us write really interesting programs. But for now it’s just about learning R.

Puzzle 1. Given a text file with lots of lines of this form:
S023E098
Y1948P001 279.95
Y1948P002 280.14
Y1948P003 282.27
Y1948P004 283.97

write an R program that creates a huge vector, or list of numbers, like this:
 279.95, 280.14, 282.27, 283.97, ... 

Puzzle 2. Extend the above program so that it plots this list of numbers, or outputs it to a new file.
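If you'd like a hint rather than a full spoiler, here is one possible approach, a rough sketch rather than the official solution, which skips the header line and lets read.table do the work. It assumes you're using the Riverside file created above; change the filename to whatever file you're testing on.

# sketch for Puzzles 1 and 2
d <- read.table("Riverside-1948-1979.txt", skip = 1,
                col.names = c("day", "temp"))
temps <- d$temp                    # Puzzle 1: the long vector of temperatures
plot(temps, type = "l",            # Puzzle 2: plot it...
     xlab = "day", ylab = "temperature (K)")
write(temps, "temps.txt", ncolumns = 1)   # ...or write it to a new file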

If you want to test your programs, here’s the actual file:

### More puzzles

If those puzzles are too easy, here are two more. I gave these last time, but everyone was too wimpy to tackle them.

Puzzle 3. Modify the software so that it uses the same method to predict El Niños from 1980 to 2013. You’ll have to adjust two lines in netcdf-convertor-ludescher.R:

firstyear <- 1948
lastyear <- 1980


should become

firstyear <- 1980
lastyear <- 2013


or whatever range of years you want. You’ll also have to adjust names of years in ludescher-replication.R. Search the file for the string 19 and make the necessary changes. Ask me if you get stuck.

Puzzle 4. Right now we average the link strength over all pairs $(i,j)$ where $i$ is a node in the El Niño basin defined by Ludescher et al and $j$ is a node outside this basin. The basin consists of the red dots here:

What happens if you change the definition of the El Niño basin? For example, can you drop those annoying two red dots that are south of the rest, without messing things up? Can you get better results if you change the shape of the basin?

To study these questions you need to rewrite ludescher-replication.R a bit. Here’s where Graham defines the El Niño basin:

ludescher.basin <- function() {
lats <- c( 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6)
lons <- c(11,12,13,14,15,16,17,18,19,20,21,22,16,22)
stopifnot(length(lats) == length(lons))
list(lats=lats,lons=lons)
}


These are lists of latitude and longitude coordinates: (5,11), (5,12), (5,13), etc. A coordinate like (5,11) means the little circle that’s 5 down and 11 across in the grid on the above map. So, that’s the leftmost point in Ludescher’s El Niño basin. By changing these lists, you can change the definition of the El Niño basin.
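For example, dropping those two dots in row 6 just means deleting the last two entries of each list. Here is an untested sketch (the function name is made up, not from Graham's script):

ludescher.basin.trimmed <- function() {
  # same as ludescher.basin(), but without the points (6,16) and (6,22)
  lats <- c( 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5)
  lons <- c(11,12,13,14,15,16,17,18,19,20,21,22)
  stopifnot(length(lats) == length(lons))
  list(lats=lats, lons=lons)
}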

Next time I’ll discuss some criticisms of Ludescher et al’s paper, but later we will return to analyzing temperature data, looking for interesting patterns.

## El Niño Project (Part 4)

8 July, 2014

As the first big step in our El Niño prediction project, Graham Jones replicated the paper by Ludescher et al that I explained last time. Let’s see how this works!

Graham did it using R, a programming language that’s good for statistics. If you prefer another language, go ahead and write software for that… and let us know! We can add it to our repository.

Today I’ll explain this stuff to people who know their way around computers. But I’m not one of those people! So, next time I’ll explain the nitty-gritty details in a way that may be helpful to people more like me.

### Getting temperature data

Say you want to predict El Niños from 1950 to 1980 using Ludescher et al’s method. To do this, you need daily average surface air temperatures in this grid in the Pacific Ocean:

Each square here is 7.5° × 7.5°. To compute these temperatures, you have to start with temperatures on a grid with smaller squares that are 2.5° × 2.5° in size:

• Earth System Research Laboratory, NCEP Reanalysis Daily Averages Surface Level, or ftp site.

This website will give you daily average surface air temperatures in whatever rectangle and time interval you want. It delivers this data in a format called NetCDF, meaning Network Common Data Form.

We’ll take a different approach. We’ll download all the temperatures in this database, and then extract the data we need using R scripts. That way, when we play other games with temperature data later, we’ll already have it.

So, go ahead and download all the files from air.sig995.1948.nc to air.sig995.2013.nc. It will take a while… but you’ll own the world.

There are different ways to do this. If you have R fired up, just cut-and-paste this into the console:

# adjust the range of years as you like; 1948:2013 grabs everything
for (year in 1950:1979) {
  download.file(
    url = paste0("ftp://ftp.cdc.noaa.gov/Datasets/ncep.reanalysis.dailyavgs/surface/air.sig995.",
                 year, ".nc"),
    destfile = paste0("air.sig995.", year, ".nc"), mode = "wb")
}


### Getting the temperatures you need

Now you have files of daily average temperatures on a 2.5° by 2.5° grid from 1948 to 2013. Make sure all these files are in your working directory for R, and download this R script from GitHub:

You can use this to get the temperatures in any time interval and any rectangle of grid points you want. The details are explained in the script. But the defaults are set to precisely what you need now!

So, just run this script. You should get a file called Pacific-1948-1980.txt. This has daily average temperatures in the region we care about, from 1948 to 1980. It should start with a really long line listing locations in a 27 × 69 grid, starting with S024E048 and ending with S050E116. I’ll explain this coordinate scheme at the end of this post. Then come thousands of lines listing temperatures in kelvin at those locations on successive days. The first of these lines should start with Y1948P001, meaning the first day of 1948.

And I know what you’re dying to ask: yes, leap days are omitted! This annoys the perfectionist in me… but leap years make data analysis more complicated, so Ludescher et al ignore leap days, and we will too.

### Getting the El Niño data

You’ll use this data to predict El Niños, so you also want a file of the Niño 3.4 index. Remember from last time, this says how much hotter the surface of this patch of seawater is than usual for this time of year:

This is a copy of the monthly Niño 3.4 index from the US National Weather Service, which I discussed last time. It has monthly Niño 3.4 data in the column called ANOM.

Put this file in your working directory.

### Predicting El Niños

Now for the cool part. Last time I explained the ‘average link strength’, which Ludescher et al use to predict El Niños. Now you’ll compute it.

You’ve got Pacific-1948-1980.txt and nino3.4-anoms.txt in your working directory. Download this R script written by Graham Jones, and run it:

It takes about 45 minutes on my laptop. It computes the average link strength $S$ at ten-day intervals. Then it plots $S$ in red and the Niño 3.4 index in blue, like this:

(Click to enlarge.) The shaded region is where the Niño 3.4 index is below 0.5°C. When the blue curve escapes this region and then stays above 0.5°C for at least 5 months, Ludescher et al say that there’s an El Niño.

The horizontal red line shows the threshold $\theta = 2.82.$ When $S$ exceeds this, and the Niño 3.4 index is not already over 0.5°C, Ludescher et al predict that there will be an El Niño in the next calendar year!
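In code, the alarm rule is roughly the following. This is just a minimal sketch, not Graham's actual implementation; it assumes S and nino34 are numeric vectors sampled at the same ten-day time steps, and it treats "exceeds" as "crosses above from below":

predict.el.nino <- function(S, nino34, theta = 2.82) {
  # TRUE at times where S has just crossed above theta
  # while the Niño 3.4 index is not already over 0.5 °C;
  # a TRUE entry means "predict an El Niño in the next calendar year"
  crossed <- c(FALSE, S[-1] > theta & S[-length(S)] <= theta)
  crossed & (nino34 <= 0.5)
}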

Our graph almost agrees with theirs:

Here the green arrows show their successful predictions, dashed arrows show false alarms, and a little letter n appears next to each El Niño they failed to predict.

The graphs don’t match perfectly. For the blue curves, we could be using Niño 3.4 from different sources. Differences in the red curves are more interesting, since that’s where all the work is involved, and we’re starting with the same data. Besides actual bugs, which are always possible, I can think of various explanations. None of them are extremely interesting, so I’ll stick them in the last section!

If you want to get ahold of our output, you can do so here:

This has the average link strength $S$ at 10-day intervals, starting from day 730 and going until day 12040, where day 1 is the first of January 1948.
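If you want to translate those day numbers into dates, remember that leap days are omitted, so every year counts as exactly 365 days. Here is a tiny hypothetical helper, not part of our scripts:

day.to.year.and.doy <- function(day, firstyear = 1948) {
  # day 1 = 1 January of firstyear; every year has 365 days
  c(year = firstyear + (day - 1) %/% 365,
    doy  = (day - 1) %% 365 + 1)
}

day.to.year.and.doy(730)    # year 1949, day 365: the last day of 1949
day.to.year.and.doy(12040)  # year 1980, day 360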

So, you don’t actually have to run all these programs to get our final result. However, these programs will help you tackle some programming challenges which I’ll list now!

### Programming challenges

There are lots of variations on the Ludescher et al paper which we could explore. Here are a few easy ones to get you started. If you do any of these, or anything else, let me know!

Challenge 1. Repeat the calculation with temperature data from 1980 to 2013. You’ll have to adjust two lines in netcdf-convertor-ludescher.R:

firstyear <- 1948
lastyear <- 1980


should become

firstyear <- 1980
lastyear <- 2013


or whatever range of years you want. You’ll also have to adjust names of years in ludescher-replication.R. Search the file for the string 19 and make the necessary changes. Ask me if you get stuck.

Challenge 2. Repeat the calculation with temperature data on a 2.5° × 2.5° grid instead of the coarser 7.5° × 7.5° grid Ludescher et al use. You’ve got the data you need. Right now, the program ludescher-replication.R averages out the temperatures over little 3 × 3 squares. It starts with temperatures on a 27 × 69 grid and averages them out to obtain temperatures on the 9 × 23 grid shown here:

Here’s where that happens:

# the data per day is reduced from e.g. 27x69 to 9x23.

subsample.3x3 <- function(vals) {
  stopifnot(dim(vals)[2] %% 3 == 0)
  stopifnot(dim(vals)[3] %% 3 == 0)
  n.sslats <- dim(vals)[2]/3
  n.sslons <- dim(vals)[3]/3
  ssvals <- array(0, dim=c(dim(vals)[1], n.sslats, n.sslons))
  for (d in 1:dim(vals)[1]) {
    for (slat in 1:n.sslats) {
      for (slon in 1:n.sslons) {
        ssvals[d, slat, slon] <- mean(vals[d, (3*slat-2):(3*slat), (3*slon-2):(3*slon)])
      }
    }
  }
  ssvals
}


So, you need to eliminate this and change whatever else needs to be changed. What new value of the threshold $\theta$ looks good for predicting El Niños now? Most importantly: can you get better at predicting El Niños this way?

The calculation may take a lot longer, since you’ve got 9 times as many grid points, and thus roughly 81 times as many pairs of points whose correlations need computing. So if this is too tough, you can go the other way: use a coarser grid and see how much that degrades your ability to predict El Niños.
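If you want to experiment with different grid resolutions, one option is to generalize the function shown below so that the factor 3 becomes a parameter. Here is an untested sketch along those lines (k = 1 would mean no subsampling at all):

subsample.kxk <- function(vals, k) {
  # average the daily temperature field over k x k blocks of grid points
  stopifnot(dim(vals)[2] %% k == 0)
  stopifnot(dim(vals)[3] %% k == 0)
  n.sslats <- dim(vals)[2]/k
  n.sslons <- dim(vals)[3]/k
  ssvals <- array(0, dim=c(dim(vals)[1], n.sslats, n.sslons))
  for (d in 1:dim(vals)[1]) {
    for (slat in 1:n.sslats) {
      for (slon in 1:n.sslons) {
        ssvals[d, slat, slon] <- mean(vals[d, (k*slat-k+1):(k*slat), (k*slon-k+1):(k*slon)])
      }
    }
  }
  ssvals
}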

Challenge 3. Right now we average the link strength over all pairs $(i,j)$ where $i$ is a node in the El Niño basin defined by Ludescher et al and $j$ is a node outside this basin. The basin consists of the red dots here:

What happens if you change the definition of the El Niño basin? For example, can you drop those annoying two red dots that are south of the rest, without messing things up? Can you get better results if you change the shape of the basin?

To study these questions you need to rewrite ludescher-replication.R a bit. Here’s where Graham defines the El Niño basin:

ludescher.basin <- function() {
lats <- c( 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 5, 6, 6)
lons <- c(11,12,13,14,15,16,17,18,19,20,21,22,16,22)
stopifnot(length(lats) == length(lons))
list(lats=lats,lons=lons)
}


These are lists of latitude and longitude coordinates: (5,11), (5,12), (5,13), etc. A coordinate like (5,11) means the little circle that’s 5 down and 11 across in the grid on the above map. So, that’s the leftmost point in Ludescher’s El Niño basin. By changing these lists, you can change the definition of the El Niño basin. You’ll also have to change these lists if you tackle Challenge 2.

There’s a lot more you can do… the sky’s the limit! In the weeks to come, I’ll show you lots of things we’ve actually done.

### Annoying nuances

Here are two reasons our average link strengths could differ from Ludescher’s.

Last time I mentioned that Ludescher et al claim to normalize their time-delayed cross-covariances in a sort of complicated way. I explained why I don’t think they could have actually used this method. In ludescher-replication.R, Graham used the simpler normalization described last time: namely, dividing by

$\sqrt{\langle T_i(t)^2 \rangle - \langle T_i(t) \rangle^2} \; \sqrt{\langle T_j(t-\tau)^2 \rangle - \langle T_j(t-\tau) \rangle^2}$

or equivalently

$\sqrt{ \langle (T_i(t) - \langle T_i(t)\rangle)^2 \rangle} \; \sqrt{ \langle (T_j(t-\tau) - \langle T_j(t-\tau)\rangle)^2 \rangle}$

Since we don’t really know what Ludescher et al did, they might have done something else.
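For concreteness, here is a tiny sketch of what this simpler normalization amounts to for two temperature series x and y and a single non-negative delay tau. It is just the sample correlation of x(t) with y(t - tau); it is not the code in ludescher-replication.R, and it ignores details like the length of the averaging window:

delayed.correlation <- function(x, y, tau) {
  # x, y: temperature series on the same days; tau: delay in days, tau >= 0
  n <- length(x)
  a <- x[(tau + 1):n]   # T_i(t)
  b <- y[1:(n - tau)]   # T_j(t - tau)
  cov.ab <- mean(a * b) - mean(a) * mean(b)
  cov.ab / (sqrt(mean(a^2) - mean(a)^2) * sqrt(mean(b^2) - mean(b)^2))
}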

We might also have used a different ‘subsampling’ procedure. That’s a name for how we get from the temperature data on a 27 × 69 grid to temperatures on a 9 × 23 grid. While the original data files give temperatures named after grid points, each is really an area-averaged temperature for a 2.5° × 2.5° square. Is this square centered at the grid point, or does the square have that grid point as its northwest corner, or what? I don’t know.

This data is on a grid where the coordinates are the number of steps of 2.5 degrees, counting from 1. So, for latitude, 1 means the North Pole, 73 means the South Pole. For longitude, 1 means the prime meridian, 37 means 90° east, 73 means 180° east, 109 means 270°E or 90°W, and 144 means 2.5° west. It’s an annoying system, as far as I’m concerned.

In ludescher-replication.R we use this range of coordinates:

lat.range <- 24:50
lon.range <- 48:116


That’s why your file Pacific-1948-1980.txt has locations starting with S024E048 and ending with S050E116. Maybe Ludescher et al used a slightly different range or subsampling procedure!

There are probably lots of other nuances I haven’t noticed. Can you think of some?