*guest post by Glyn Adgie and Tim van Beek*

### Or: Are the Milankovich Cycles Causing the Ice Ages?

The Milankovich cycles are periodic changes in how the Earth orbits the Sun. One question is: can these changes be responsible for the ice ages? At first sight it seems impossible, because the changes are simply too small. But it turns out that we can find a dynamical system where a small periodic external force is actually strengthened by random ‘noise’ in the system. This phenomenon has been dubbed ‘stochastic resonance’ and has been proposed as an explanation for the ice ages.

In this blog post we would like to provide an introduction to it. We will look at a bistable toy example. And you’ll even get to play with an online simulation that some members of the Azimuth Project created, namely Glyn Adgie, Allan Erskine and Jim Stuttard!

### Equations with noise

In *This Week's Finds*, back in ‘week308’, we had a look at a model with ‘noise’. A lot of systems can be described by ordinary differential equations:

$$ \frac{d x(t)}{d t} = f(x(t)) $$

If $f$ is nice, the time evolution of the system will be a nice smooth function, like the trajectory of a thrown stone. But there are situations where we have some kind of noise, a chaotic, fluctuating influence, that we would like to take into account. This could be, for example, turbulence in the air flow around a rocket. Or, in our case, short term fluctuations of the weather of the Earth. If we take this into account, we get an equation of the form

$$ \frac{d x(t)}{d t} = f(x(t)) + N(t) $$

where $N(t)$ is some model of noise. In all the examples above, the noise is actually a simplified way to take into account fast degrees of freedom of the system at hand. This way we do not have to explicitly model these degrees of freedom, which is often impossible. But we could still try to formulate a model where long term influences of these degrees of freedom are included.

A way to make the above mathematically precise is to write the equation in the form

$$ d X_t = f(X_t) \, d t + \sigma \, d W_t $$

as an **Ito stochastic differential equation** driven by a **Wiener process** $W_t$.

The mathematics behind this is somewhat sophisticated, but we don’t need to delve into it in this blog post. For those who are interested in it, we have gathered some material here:

• Stochastic differential equation, Azimuth Library.

Physicists and engineers who model systems with ordinary and partial differential equations should think of stochastic differential equations as another tool in the toolbox. With this tool, it is possible to easily model systems that exhibit new behavior.

### What is stochastic resonance?

Listening to your friend who sits across the table, the noise around you may become a serious distraction, although humans are famous for their ability to filter the signal—what your friend is saying—from the noise—all other sounds, people’s chatter, dishes clattering etc. But could you imagine a situation where *more noise* makes it *easier* to detect the signal?

Picture a marble that sits in a double well potential:

This graphic is taken from

• Roberto Benzi, Stochastic resonance: from climate to biology.

This review paper is also a nice introduction to the topic.

This simple model can be applied to various situations. Since we think about the ice ages, we can interpret the two metastable states at the two minima of the potential in this way: the minimum to the left at -1 corresponds to the Earth in an ice age. The minimum to the right at 1 corresponds to the Earth at warmer temperatures, as we know it today.

Let us add a small periodic force to the model. This periodic force corresponds to different solar input due to periodic changes in Earth’s orbit: the Milankovich cycles. There are actually several different cycles with different periods, but in our toy model we will consider only one force with one period.

If the force itself is too small to get the marble over the barrier between the two potential minima, we will not observe any periodic change in the position of the marble. But if we add noise to the motion, it may happen that the force together with the noise succeeds in getting the marble from one minimum to the other! This is more likely to happen when the force is pushing the right way. So, it should happen with roughly the same periodicity as the force: we should see a ‘quasi-periodic’ motion of the marble.

But if we increase the noise more and more, the influence of the external force will become unrecognizable, and we will not recognize any pattern at all.

There should be a noise strength somewhere in between that makes the Milankovich cycles as visible as possible from the Earth’s temperature record. In other words, if we think of the periodic force as a ‘signal’, there should be some amount of noise that maximizes the ‘signal-to-noise ratio’. Can we find out what it is?

In order to find out, let us make our model precise. Let’s take the time-independent potential to be

$$ V(x) = \frac{x^4}{4} - \frac{x^2}{2} $$

This potential has two local minima, at $x = \pm 1$. Let’s take the time-dependent periodic forcing, or ‘signal’, to be

$$ A \sin(\Omega t) $$

The constant $A$ is the amplitude of the periodic forcing. The time-dependent effective potential of the system is therefore:

$$ V(x, t) = \frac{x^4}{4} - \frac{x^2}{2} - A x \sin(\Omega t) $$

Including noise leads to the equation of motion, which is usually called a **Langevin equation** by physicists and chemists:

$$ d X_t = \left( X_t - X_t^3 + A \sin(\Omega t) \right) d t + \sqrt{2 D} \, d W_t $$

For more about what a Langevin equation is, see our article on stochastic differential equations.

This model has two free parameters, $A$ and $D$. As already mentioned, $A$ is the amplitude of the forcing. $D$ is called the **diffusion coefficient** and represents the strength of noise. In our case it is a constant, but in more general models it could depend on space and time.

It is possible to write programs that generate simulations, that is, numerical approximations to solutions of stochastic differential equations. For those interested in how this works, there is an addendum at the bottom of the blog post that explains this. We can use such a program to model the three cases of small, high and optimal noise level.

In the following graphs, green is the external force, while red is the marble position. Here is a simulation with a low level of noise:

As you can see, within the time of the simulation there is no transition from the metastable state at 1 to the one at -1. That is, in this situation Earth stays in the warm state.

Here is a simulation with a high noise level:

The marble jumps wildly between the states. By inspecting the graph with your eyes only, you don’t see any pattern in it, do you?

And finally, here is a simulation where the noise level seems to be about right to make the signal visible via a quasi-periodic motion:

Sometimes the marble makes a transition in phase with the force, sometimes it does not, sometimes it has a phase lag. It can even happen that the marble makes a transition while the force acts in the opposite direction!

Some members of the Azimuth crew have created an online model that calculates simulations of the model, for different values of the involved constants. You can see it here:

• Stochastic resonance example, Azimuth Code Project.

We especially thank Allan Erskine and Jim Stuttard.

You can change the values using the sliders under the graphic and see what happens. You can also choose different ‘random seeds’, which means that the random numbers used in the simulation will be different.

Above, we asked if one could calculate, for a fixed $A$, the level of noise $D$ that maximizes the signal-to-noise ratio. We have defined the model system, but we still need to define ‘signal-to-noise ratio’ in order to address the question.

There is no general definition that makes sense for all kinds of systems. For our model system, let’s assume that the expectation value of $X_t$ has a Fourier expansion like this:

$$ \langle X_t \rangle = \sum_{n = 1}^{\infty} M_n \sin(n \Omega t + \varphi_n) $$

Then one useful definition of the **signal to noise ratio** is

$$ \mathrm{SNR} = \left( \frac{M_1}{A} \right)^2 $$

So, can one calculate the value of $D$, depending on $A$, that maximizes $\mathrm{SNR}$? We don’t know! Do you?

If you would like to know more, have a look at this page:

• Stochastic resonance, Azimuth Library.

Also, read on for more about how we numerically solved our stochastic differential equation, and the design decisions we had to make to create a simulation that runs on your web browser.

### For übernerds only

#### Numerically solving stochastic differential equations

Maybe you are asking yourself what is behind the online model? How does one ‘numerically solve’ a stochastic differential equation and in what sense are the graphs that you see there ‘close’ to the exact solution? No problem, we will explain it here!

First, you need C code like this:

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

#define znew ((z = 36969 * (z & 65535) + (z >> 16)) << 16)
#define wnew ((w = 18000 * (w & 65535) + (w >> 16)) & 65535)

static unsigned long z = 362436069, w = 521288629;

float rnor()
{
    float x, y, v;
    x = ((signed long) (znew + wnew)) * 1.167239E-9;
    if (fabs(x) < 1.17741)
        return (x);
    y = (znew + wnew) * 2.328306E-10;
    if (log(y) < 0.6931472 - 0.5 * x * x)
        return x;
    x = (x > 0) ? 0.8857913 * (2.506628 - x)
                : -0.8857913 * (2.506628 + x);
    if (log(1.8857913 - y) < 0.5718733 - 0.5 * (x * x))
        return (x);
    do {
        v = ((signed long) (znew + wnew)) * 4.656613E-10;
        x = -log(fabs(v)) * 0.3989423;
        y = -log((znew + wnew) * 2.328306E-10);
    } while (y + y < x * x);
    return (v > 0 ? 2.506628 + x : 2.506628 - x);
}

int main(void)
{
    int index = 0;
    for (index = 0; index < 10; index++) {
        printf("%f\n", rnor());
    }
    return EXIT_SUCCESS;
}
```

Pretty scary, huh? Do you see what it does? We don’t think anyone stands a chance to understand all the ‘magical constants’ in it. We pasted it here so you can go around and show it to friends who claim that they know the programming language C and understand every program in seconds, and see what they can tell you about it. We’ll explain it below.

Since this section is for übernerds, we will assume that you know stochastic processes in continuous time. Brownian motion on $\mathbb{R}$ can be viewed as a probability distribution on the space of all continuous functions on a fixed interval, $C[0, T]$. This is also true for solutions of stochastic differential equations in general. A ‘numerical approximation’ to a stochastic differential equation is a discrete-time approximation that samples this probability space.

Another way to think about it is to start with numerical approximations of ordinary differential equations. When you are handed such an equation of the form

$$ \frac{d x(t)}{d t} = f(t, x(t)) $$

with an initial condition $x(0) = x_0$, you know that there are different discrete approximation schemes to calculate an approximate solution to it. The simplest one is the Euler scheme. You need to choose a step size $h$ and calculate values for $x_n$ recursively using the formula

$$ x_{n+1} = x_n + h \, f(t_n, x_n) $$

with $t_n = n h$. Connect the discrete values of $x_n$ with straight lines to get your approximating function $x_h$. For $h \to 0$ you get convergence, for example in the supremum norm, of $x_h$ to the exact solution $x$.

It is possible to extend some schemes from ordinary differential equations to stochastic differential equations that involve certain random variables in every step. The best textbook on the subject that I know is this one:

• Peter E. Kloeden and Eckhard Platen, *Numerical Solution of Stochastic Differential Equations*, Springer, Berlin, 2010. (ZMATH review.)

The extension of the Euler scheme is very simple, deceptively so! It is much less straightforward to determine the extensions of higher Runge–Kutta schemes, for example. You have been warned! The Euler scheme for stochastic differential equations of the form

$$ d X_t = a(t, X_t) \, d t + b(t, X_t) \, d W_t $$

is simply

$$ X_{n+1} = X_n + a(t_n, X_n) \, h + b(t_n, X_n) \, \Delta W_n $$

where all the $\Delta W_n$ are independent random variables with a normal distribution $N(0, h)$. If you write a computer program that calculates approximations using this scheme, you need a pseudorandom number generator. This is what the code we posted above is about. Actually, it is an efficient algorithm to calculate pseudorandom numbers with a normal distribution. It’s from this paper:

• George Marsaglia and Wai Wan Tsang, The Monty Python method for generating random variables, *ACM Transactions on Mathematical Software* **24** (1998), 341–350.

While an approximation to an ordinary differential equation approximates a function, an approximation to a stochastic differential equation approximates a probability distribution on $C[0, T]$. So we need a different concept of ‘convergence’ of the approximation for $h \to 0$. The two most important ones are called ‘strong’ and ‘weak’ convergence.

A discrete approximation $X_h$ to an Ito process $X$ **converges strongly** at time $T$ if

$$ \lim_{h \to 0} E \left( |X_T - X_h(T)| \right) = 0 $$

It is said to be of **strong order** $\gamma$ if there is a constant $C$ such that

$$ E \left( |X_T - X_h(T)| \right) \le C \, h^{\gamma} $$

As we see from the definition, a strong approximation needs to approximate the path of the original process at the end of the time interval $[0, T]$.

What about weak convergence?

A discrete approximation $X_h$ to an Ito process $X$ **converges weakly** for a given class of functions $\mathcal{C}$ at a time $T$ if for every function $g \in \mathcal{C}$ we have

$$ \lim_{h \to 0} \left| E(g(X_T)) - E(g(X_h(T))) \right| = 0 $$

where we assume that all appearing expectation values exist and are finite.

It is said to be of **weak order** $\beta$ with respect to $\mathcal{C}$ if there is a constant $C_g$ such that for every function $g \in \mathcal{C}$:

$$ \left| E(g(X_T)) - E(g(X_h(T))) \right| \le C_g \, h^{\beta} $$

One speaks simply of ‘weak convergence’ when $\mathcal{C}$ is the set of all polynomials.

So weak convergence means that the approximation needs to approximate the probability distribution of the original process $X$. Weak and strong convergence are indeed different concepts. For example, the Euler scheme is of strong order 0.5 and of weak order 1.0.

Our simple online model implements the Euler scheme to calculate approximations to our model equation. So now you know in what sense it is indeed an approximation to the solution process!

#### Implementation details of the interactive online model

There appear to be three main approaches to implementing an interactive online model. It is assumed that all the main browsers understand Javascript.

1) Implement the model using a server-side programme that produces graph data in response to parameter settings, and use Javascript in the browser to plot the graphs and provide interaction inputs.

2) Pre-compute the graph data for various sets of parameter values, send these graphs to the browser, and use Javascript to select a graph and plot it. The computation of the graph data can be done on the designer’s machine, using their preferred language.

3) Implement the model entirely in Javascript.

Option 1) is only practical if one can run interactive programs on the server. While there is technology to do this, it is not something that most ISPs or hosting companies provide. You might be provided with Perl and PHP, but neither of these is a language of choice for numeric coding.

Option 2) has been tried. The problem with this approach is the large amount of pre-computed data required. For example, if we have 4 independent parameters, each of which takes on 10 values, then we need 10000 graphs. Also, it is not practical to provide much finer control of each parameter, as this increases the data size even more.

Option 3) looked doable. Javascript has some nice language features, such as closures, which are a good fit for implementing numerical methods for solving ODEs. The main question then is whether a typical Javascript engine can cope with the compute load.

One problem that arose with a pure Javascript implementation is the production of normal random deviates required for the numerical solution of an SDE. Javascript has a Math.random() function that produces uniform random deviates, and there are algorithms that could be implemented in Javascript to produce normal deviates from uniform deviates. The problems here are that there is no guarantee that all Javascript engines use the same PRNG, or that the PRNG is actually any good, and we cannot set a specific seed in Javascript. Even if the PRNG is adequate, we have the problem that different users will see different outputs for the same parameter settings. Also, reloading the page will reseed the PRNG from some unknown source, so a user will not be able to replicate a particular output.

The chosen solution was to pre-compute a sequence of random normal deviates, using a PRNG of known characteristics. This sequence is loaded from the server, and will naturally be the same for all users and sessions. This is the C++ code used to create the fixed Javascript array:

```cpp
#include <boost/random/mersenne_twister.hpp>
#include <boost/random/normal_distribution.hpp>
#include <iostream>

int main(int argc, char * argv[])
{
    boost::mt19937 engine(1234);                      // PRNG with seed = 1234
    boost::normal_distribution<double> rng(0.0, 1.0); // mean 0.0, standard deviation 1.0

    std::cout << "normals = [\n";
    int nSamples = 10001;
    while (nSamples--) {
        std::cout << rng(engine) << ",\n";
    }
    std::cout << "];";
}
```

Deviates with a standard deviation other than 1.0 can be produced in Javascript by multiplying the pre-computed deviates by a suitable scaling factor.

The graphs only use 1000 samples from the Javascript array. To see the effects of different sample paths, the Javascript code selects one of ten slices from the pre-computed array.

Great post. I’ve learned a lot.

znew has 4 left parens but 6 right.

maybe add any needed #includes.

There seems to have been a problem with me copy&pasting the code from the pdf I have to my editor, the top two lines should be:

```c
#define znew ((z = 36969 * (z & 65535) + (z >> 16)) << 16)
#define wnew ((w = 18000 * (w & 65535) + (w >> 16)) & 65535)
```

Should we correct this? I think we should rather use this error to discuss what is wrong with the code:

1. Don’t use the preprocessor. It is a text-processing tool; it disables all the checks that your compiler and other static code analysis tools could do for you. This is a concept that has been abandoned in all programming languages I know of that evolved from C.

2. Use static constants instead of opaque number literals (applies to 65535).

3. Always provide unit tests with your code so that others can easily make some basic sanity checks.

4. Always document constants and methods etc.

I bet you could come up with way more remarks yourself.

Oh, the trailing # seems to make both Instiki and WordPress to eat some of the lines, the second one is missing above, defining wnew, I’ll try again:

“#define wnew ((w = 18000*(w&65535)+(w>>16))&65535)”

BTW: If anyone would like to use this code, you had better download it as part of some software suite or library instead of copying it from here. I bet that Marsaglia has published his code in some suitable form or another.

Aahhh yes, Marsaglia of the famous quote “Random-number generators are like sex: when they’re good, they’re really good, but even the bad ones are still pretty good”. Does that say more about Marsaglia or about his software?

While I don’t know what George Marsaglia thought about random number generators, since I never got to know him, I wrote a little bit about this on the Azimuth project on the page random number generator:

From a practical point of view a random number generator is good if it is random enough for the application that consumes the numbers it produces, in the sense that the application does not produce any artifacts that depend on the generator used. Since no random number generator is truly random, there is no objective mathematical criterion to classify generators; in practice, users have specified a list of statistical tests that a “good” generator should pass – on the average – while “bad” generators do not – on the average (see the references for details).

From a pure mathematical point of view this means that we know some ways in which the “bad” generators fail, while we did not figure out how to unmask the “good” ones. So the bad ones are actually those generators that we happen to know more about!

Von Neumann said, “Anyone who considers arithmetical methods of producing random digits is, of course, in a state of sin.”

These days you can try to avoid sin by downloading random numbers generated using quantum mechanics by the University of Geneva and the company Id Quantique. Here are some:

They look good, don’t they? But watch out: I deliberately put one in myself, which was not generated randomly!

Or, if you need *lots* of random numbers, you can buy their product Quantis, which produces up to 16 megabits per second. You can read about it here:

• Random number generation using quantum physics, Id Quantique White Paper, version 3.0, April 2010.

It uses a half-silvered mirror to split a beam of light, and single-photon detectors that can tell which way each photon went.

But there’s a catch: while ideally 50% of the photons would go through and 50% would get reflected, Id Quantique can only guarantee that the number is somewhere between 45% and 55%. So, they need to use an algorithm called an ‘unbiasing technique’ to take the somewhat random but still biased string of bits and try to purify it to maximal randomness.

This problem shows up with any physical method of generating random digits. The first unbiasing technique was developed by von Neumann:

• John von Neumann, Various techniques used in connection with random digits, *Applied Mathematics Series* **12** (1951), 36–38.

The output of the unbiasing technique needs to have fewer bits (per second) than the input. But even apart from this, I suspect that all the difficulties in defining a random sequence produced using an algorithm—the “sin” von Neumann was talking about—reappear when trying to define what counts as a valid unbiasing technique, unless for some reason you know the input string is independent and identically distributed… or something like that: something you probably can’t actually know about a realistic, imperfect physical system.

Over on Google+, Richard Younger wrote:

I replied:

When I asked about this during my time at the University of Heidelberg, I was told that physical systems usually have long time correlations too, which usually makes them just as bad as deterministic algorithms when used as random number generators.

But the best quote I know in this context is from numerical recipes (an author cites a system administrator from memory):

“We guarantee that each number is random individually, but we don’t guarantee that more than one of them is random.”

xkcd:

“The Entropy Key is a small, unobtrusive and easily installed USB stick that generates high-quality random numbers, or entropy, which can improve the performance, security and reliability of servers. It can also be used with scientific, gambling and lottery applications, or anywhere where good random numbers are needed.”

I learn a lot by copy pasting blog posts into my interpreter so I like having only fully working documented code to read unless it’s a development or bug discussion.

Isn’t it amazing that WordPress’s PHP isn’t able to obey HTML syntax? There must be better blogging software to use.

Jim wrote:

Actually it *did* obey it just now, which is why when you typed “<pre></pre>” in your comment, it came out as a blank line! I have to type something else to get you to see <pre></pre>. (What do I have to type? I’ll leave that as an exercise for the html fanatics.)

There’s always better software to use, if you’re willing to spend the time to find it and learn how to use it. But I don’t want to spend my time that way unless it’s really necessary. It’s easy for me to fix any mistakes here, so people can just notify me of them and I’ll fix them.

I avoided using “<pre></pre>” in the blog post because the skinny-column format of the blog doesn’t work well with long lines of code that are prevented from wrapping around by <pre></pre>. They just disappear after 450 pixels—the maximum width of a blog post here.

If you want to download working code from blog articles here, and the format of the blog article screws things up, someone just needs to point me to working code and I can make it easy to download from a link in the blog article.

WordPress supports special markup for source code.

I copied and pasted your code above before I noticed that the code itself was garbled; if you put the original code in a sourcecode block, it won’t think that <stdio.h> is an html tag to be stripped.

Everything can and will be corrected – as usual, I fix all errors people point out on blog posts here. It’s past my bedtime so I’ll do it tomorrow. If “#” causes trouble I can probably just work around that somehow.

Tim – do you know if anyone has proved interesting theorems about stochastic resonance?

For example, I’d expect that the signal-to-noise ratio you defined will grow as we increase $D$ until it reaches a maximum, and then decrease from then on. This should be easy to check using numerical methods, but has anyone proved it? If not, it would be a fun math problem for the right sort of mathematician.

What are some more interesting theorems one could try to prove?

I may come back later (I have to attend an important meeting in 10 minutes), but Nathan has already cited a lot of important resources. Physicists of course use approximation techniques (like linear response theory) or semi-heuristic techniques like the Kramers rate to calculate the behaviour of their model systems.
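For the record, here is what the Kramers rate heuristic says for the quartic double well used in the post (a standard textbook approximation, not rederived here). With $V(x) = x^4/4 - x^2/2$, so that $\Delta V = 1/4$, $V''(\pm 1) = 2$ and $|V''(0)| = 1$, the escape rate from one well is approximately

$$ r_K = \frac{\sqrt{V''(x_{\min}) \, |V''(0)|}}{2 \pi} \, e^{-\Delta V / D} = \frac{\sqrt{2}}{2 \pi} \, e^{-1/(4D)} $$

and the usual matching heuristic says stochastic resonance is strongest when the mean escape time is about half the forcing period, i.e. $1/r_K \approx \pi/\Omega$.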

I don’t know of any *theorems* in this context. There is the Siegert formula that can be used to calculate the moments of the probability distribution of the first exit time, with the system starting in 1, say, and passing the threshold at 0 or -0.5 or -1; you can choose that. The Siegert formula works only for the system without external forcing, i.e. $A = 0$, so it is not very interesting for stochastic resonance (also, I don’t have a reference at hand). But it is a hard analytic result.

I’m quite sure that it should be possible to calculate some aspects of the function $\mathrm{SNR}(D)$: for example, is it continuous at 0? Does it approach 0 or some other value for $D \to \infty$?

And of course is there a maximum and where is it? But I don’t know of any approach to the problem that could tackle these questions.

There are lots of results about stochastic resonance, starting with the double-well potential and moving to many generalizations (asymmetric wells, non-Gaussian noise, etc.) This has been studied intensively by physicists for decades.

The definition of “signal-to-noise ratio” (SNR) used in this blog post appears to correspond to the “spectral amplification” defined in Eqs. 2.9 and 4.21 of Gammaitoni et al. (1998). This does indeed have a maximum at a particular noise level; see their Figure 6. Gammaitoni et al. also defines an “SNR” in Eq. 2.13, which differs from the spectral amplification. This quantity also has a single maximum in Fig. 3, but seems to have a single *minimum* in Fig. 8, which confuses me. I haven’t read the paper (a lengthy review article), so perhaps these figures correspond to different systems or different approximations… or maybe they plotted the noise-to-signal ratio in the latter figure?

The above paper cites McNamara and Wiesenfeld (1989). I’m not sure how that paper defines SNR (but see Eqs. 4.4 and 5.8). It shows a maximum in SNR in Fig. 7d (which gives a *double* maximum for low forcing frequency) and Fig. 9b.

Both papers note some interesting facts, such as the SNR diverging as the noise vanishes, and the maximum of the SNR not occurring at the same noise level as the maximum of the signal itself.

Thanks for describing all those results, Nathan! It’s great if physicists have ‘shown’ this stuff but it’s not yet proved mathematically: mathematicians do best at proving things that are already known to be true. Some of these proofs are quite deep and enlightening even if the results they prove are ‘obvious’.

For example, Elliott Lieb became quite famous for proving the stability of matter.

Thanks, this was very instructive! Stochastic resonance seems to have something in common with simulated annealing (if I remember the latter correctly) in that they are both ways to make noise do useful work – respectively, making a signal visible or finding a better solution to a global optimization problem – by using it to force a system out of one local minimum into another.

I know it’s easy to sit back and think of things for you to do, but (inspired by your “talking with a friend” scenario) do you think that you could supplement your current example with demos of a noise-based a) hearing aid, b) vision aid?

Hmm, you might be onto something here:-)

Since our goal here is to explain a bit of climate physics using models that can run on your web browser, I don’t think Tim and Glyn and the rest of the team will have time to design hearing aids. But you might enjoy reading about how, thanks to stochastic resonance, noise can increase the ability of a cricket to sense a sound:

• Stochastic resonance (neurobiology), Wikipedia.

You can find a lot of information about research by simply googling “stochastic resonance”. Physicists have explored the use of models for both signal detection and for hearing and vision aids, although I cannot find the exact references right now.

One relevant researcher is Peter Hänggi (see the section of his homepage on stochastic resonance).

I remember that he or one of his collaborators published a paper on research they did on patients with tremor caused by Parkinson’s disease, but I could not find it. Searching for it turned up this paper instead:

• Stochastic resonance and sensory information processing: a tutorial and review of application.

This is a review paper addressed at “neuroscientists and clinical neurophysiologists”.

The main point of the post and the reason why I worked on it, however, is to demonstrate that one can get truly new qualitative behaviour of dynamic systems by including random processes that model fast degrees of freedom. This is important for climate models and a current line of research in this area.

Thanks John and Tim for the refs. Apologies for my unclear suggestion – I was envisioning a simple web app (or, for vision, perhaps just a .ppt slide) which would drive your point home by allowing your audience to experience the effects of SR on their own hearing or vision. I see now that this has been done (e.g. the Arc de Triomphe examples in John’s wikipedia reference).

It is striking that in the cricket’s cercal mechanoreceptors (at least at the level of single neurons) the optimal signal-to-noise ratio was achieved with a noise intensity 25x that of the signal.

Even more intriguing to me was an abstract referenced from Tim’s second URL: “Ubiquitous crossmodal Stochastic Resonance in humans: auditory noise facilitates tactile, visual and proprioceptive sensations.” I realize we are now far afield from climate physics but the apparent pervasiveness of this phenomenon is eye opening.

Great, it worked! Thanks Mike!!

The code above should compile and also be correct. I compiled and ran it on my Windows notebook with the help of MinGW (which uses the gcc compiler).

Note that you can re-seed the algorithm by setting z and w to different values.

Next one should of course run the DIEHARD test suite or something similar against this code to check that I don’t have any misprints in it. Unfortunately I don’t have the time to do this right now, but maybe someone else does.

Tim wrote:

and

For some reason these old July 31 comments and some other complaints appear below the later Aug. 1 comment where you posted corrected code. I mention this just in case anyone reading this gets puzzled!

All the code in the blog post now matches the most up-to-date versions I’ve been given, and it’s nicely formatted thanks to Mike Stay.

Your second code sample is still being garbled; I suspect you didn’t say language=cpp in the sourcecode header.

I actually did…

I don’t know why this is so hard—at least a couple of times, an evil daemon removed the string “cpp” from what I wrote. Anyway, I fixed it again and it looks okay to me now.

I dimly remember in graduate school that instantons are used to study phenomena like quantum tunneling across a potential barrier (such as the double well). Given the close analogies between quantum mechanics and statistical dynamics, I wonder if the phenomenon of stochastic resonance shows up in QM or QFT in any interesting way via instantons?

That’s interesting. I’m worried that because tunnelling is more like diffusion than ‘rolling over a bump’, it works differently. But I’d need to think a lot harder to see if there’s something good to do along these lines.

I just noticed that the Gammaitoni et al. review article briefly mentions “multi-instanton” solutions to a stochastically and periodically forced bistable string, i.e. the 1D Ginzburg-Landau equation (see Benzi et al. (1985)). Not what I had in mind, but a possible clue.

Who was it who said “If you want to do physics on a computer, don’t use numbers”?

This week, we’ll see how noise affects this simple climate model. We’ll borrow lots of material from here:

• Glyn Adgie and Tim van Beek, Increasing the signal-to-noise ratio with more noise. […]

What’s the current status of this hypothesis that Milankovich cycles are the forcing signal in a bistable stochastic resonance system, and are driving the oscillation between hot and cold climate regimes?

How would the forcing function work? Elsewhere John mentioned that changes in obliquity and precession of the axis don’t change the total amount of solar radiation received on the Earth, just its distribution with respect to latitude and seasonal characteristics, and that changes in eccentricity only minimally affect the total amount of energy delivered to the Earth.

So is the forcing signal some periodic temperature differential that is caused by interaction of the Milankovich cycles with e.g. surface characteristics of the Earth? Or do we really need to introduce more than one variable — not just temperature, but also albedo or what have you — into this stochastic resonance model for it to become a viable hypothesis?

David wrote:

My favorite theory for this, due to Didier Paillard, is explained here. A surprisingly simple model gives the right times for glacial-interglacial transitions, starting from data about Milankovitch cycles!

However, there are still a lot of mysteries in this subject… and also a lot of other research on this topic that I’m not so familiar with.

While the total amount of solar radiation delivered to the Earth doesn’t change much, the amount delivered to northern latitudes in summer can change quite a lot. See the graphs! A lot of models assume this is what triggers the glacials and interglacials.

(The north and south poles of the Earth play radically different roles in the glacial cycles, since one is an ocean surrounded by land and the other is a continent surrounded by water. Glaciation appears to happen primarily in the northern hemisphere. This asymmetry is crucial, since when there’s a lot of sun in the summer in the northern hemisphere there’s a shortage of sun in the winter in the northern hemisphere!)

John wrote:

In all applications this is usually neglected, but eventually it would be good to know how big it is. In particular, one finds on Wikipedia (without a reference):

On Azimuth:

How does the 1366±2 W/m² go together with the 6.9%? (The Azimuth page links to the Wikipedia page.) Is there a latitudinal dependence, and if so, how big is it?

Unfortunately I couldn’t find (at least not in a decent amount of time) any easy-to-read calculations, or at least trustworthy statements, on this issue.

John wrote:

I was referring to changes in total annual insolation due to Milankovitch cycles, since that is what David was asking about: we were talking about how Milankovitch cycles may trigger glacial cycles. I was not referring to the seasonal cycles in insolation due to the elliptical orbit of the Earth, nor to changes due to inherent changes in the luminosity of the Sun.

I calculated the changes due to Milankovitch cycles in Part 9 of the Mathematics of the Environment series: they cause a change of 0.167% in the total annual amount of solar energy delivered to the top of the Earth’s atmosphere.

I believe the figure of 1366±2 W/m² is the figure averaged over a year. These figures refer to the amount of solar radiation a square meter directly facing the Sun at the top of the Earth’s atmosphere would receive. So, it has nothing to do with the slant of the Earth’s surface with respect to the rays of light from the Sun, the length of the daytime, or the light absorbed by the Earth’s atmosphere, all of which depend on the latitude (and other things).

All these other factors are incredibly important too, but they are not what is being discussed here!

Suppose we wanted to systematically investigate how effective a “stochastic resonator” is at transmitting a given input signal (a forcing sinusoid) to the output signal, as a function of the noise amplitude. What would be the most natural measure of how “effectively” the signal is transmitted? The measure should capture both how much of the given frequency is present in the output signal, and how much this frequency dominates the spectrum. So for low noise amplitudes the measure would be low, because the system is not oscillating between the stable states at all, and for high noise amplitudes it would also be low, because that frequency would get lost in the crowd of frequencies that constitute the noise.

Graham wrote:

I agree that the ‘confining’ effect of the potential should damp out low-frequency noise. I don’t know this for sure, but it seems plausible. Understanding these things analytically seems tough, though. Mathematically it’s a bit like nonlinear quantum field theory, which is pretty easy perturbatively but hard for nonperturbative questions, which I believe is what we’ve got here. So it might be easier for someone who likes programming to study this numerically.

I should have said “what are good candidates” for such a measure.

But maybe it’s more natural and clear to have two separate measures: one for how much of the input signal gets through, and one for how much it is the only thing that gets through (“recall” and “precision”). For the former we could use the component at the signal frequency in the output, with a deduction for the component at that frequency in the noise (we don’t want to give too much credit to the noise).

For the latter, we could use a measure of how much the entire spectrum is concentrated at the signal frequency. But does that involve choosing an arbitrary bandwidth?

I’d like to program this. If anyone can propose an appropriate, concrete spec, that would be great.

When Glyn and Tim wrote this article, I became curious about this question. I didn’t make a lot of progress. But if you state the question more precisely, that’ll make it easier to answer.

In fact, if you state it precisely enough, that will *be* the answer!

Do you want to see how much a sinusoidal signal of any given frequency gets amplified? This is easy to make precise for *linear* time-invariant systems using Fourier transform techniques. But here we are dropping linearity, though we are keeping time-invariance (if we time translate the input, the output gets time translated by the same amount). This means the answer can depend in a complicated nonlinear way on the amplitude of the input as well as its frequency. So more information is needed to make the question precise.

The Wiener process (= Brownian motion) on its own would have a power spectrum proportional to 1/frequency² according to http://en.wikipedia.org/wiki/Brownian_noise#Power_spectrum. With A=0, there is the ‘double dip’ which stops it wandering off, and that will, I think, make some sort of low-frequency cutoff. So even with A=0, the system will have some kind of power peak at some frequency. Maybe best to understand this better first…

Interesting that this thread was revived. I just finished a post on a smaller scale stochastic resonance, one that generates quasiperiodic fluctuations via solution to the Mathieu equation.

This is the Southern Oscillation Index with connections to the Chandler Wobble:

http://contextearth.com/2014/04/05/the-chandler-wobble-and-the-soim/

At the Azimuth Code Project, we are aiming to produce educational software that is relevant to the Earth sciences and the study of climate. Our present software takes the form of interactive web pages, which allow you to experiment with the parameters of models and view their outputs. But to fully understand the meaning of a program, we need to know about the concepts and theories that inform it. So we will be writing articles to explain both the programs themselves and the math and science behind them.

In this two-part series, I’ll cover the Azimuth stochastic resonance example program, by Allan Erskine and Glyn Adgie. In the Azimuth blog article Increasing the Signal-to-Noise Ratio with More Noise, Glyn Adgie and Tim van Beek give a nice explanation of the idea of stochastic resonance, which includes some clear and exciting graphs.

Can I get in touch with Glyn (last seen 25 years ago)?

He used to frequent the Azimuth Forum, but not recently.