## Mathematics of the Environment (Part 7)

Last time we saw how the ice albedo effect could, in theory, make the Earth’s climate have two stable states. In a very simple model, we saw that a hot Earth might stay hot since it’s dark and absorbs lots of sunlight, while a cold Earth might stay cold—since it’s icy and white and reflects most of the sunlight.

If you haven’t tried it yet, make sure to play around with this program created by Lesley de Cruz and then implemented in Java by Allan Erskine:

The explanations were all given in Part 6 so I won’t repeat them here!

This week, we’ll see how noise affects this simple climate model. We’ll borrow lots of material from here:

And we’ll use software written by these guys together with Allan Erskine. The power of the Azimuth Project knows no bounds!

### Stochastic differential equations

The Milankovich cycles are periodic changes in how the Earth orbits the Sun. One question is: can these changes be responsible for the ice ages? At first sight it seems impossible, because the changes are simply too small. But it turns out that we can find a dynamical system where a small periodic external force is actually strengthened by random ‘noise’ in the system. This phenomenon has been dubbed ‘stochastic resonance’ and has been proposed as an explanation for the ice ages. It also shows up in many other phenomena:

• Roberto Benzi, Stochastic resonance: from climate to biology.

But to understand it, we need to think a little about stochastic differential equations.

A lot of systems can be described by ordinary differential equations:

$\displaystyle{ \frac{d x}{d t} = f(x, t)}$

If $f$ is nice, the time evolution of the system will be a nice smooth function $x(t),$ like the trajectory of a thrown stone. But there are situations where we have some kind of noise, a chaotic, fluctuating influence, that we would like to take into account. This could be, for example, turbulence in the air flow around a rocket. Or, in our case, short-term fluctuations of the weather of the Earth. If we take this into account, we get an equation of the form

$\displaystyle{ \frac{d x}{d t} = f(x, t) + w(t) }$

where the $w$ is a ‘random function’ which models the noise. Typically this noise is just a simplified way to take into account rapidly changing fine-grained aspects of the system at hand. This way we do not have to explicitly model these aspects, which is often impossible.

### White noise

We’ll look at a model of this sort:

$\displaystyle{ \frac{d x}{d t} = f(x, t) + w(t) }$

where $w(t)$ is ‘white noise’. But what’s that?

Very naively speaking, white noise is a random function that typically looks very wild, like this:

White noise actually has a sound, too: it sounds like this! The idea is that you can take a random function like the one graphed above, and use it to drive the speakers of your computer, to produce sound waves of an equally wild sort. And it sounds like static.

However, all this is naive. Why? The concept of a ‘random function’ is not terribly hard to define, at least if you’ve taken the usual year-long course on real analysis that we force on our grad students at U.C. Riverside: it’s a probability measure on some space of functions. But white noise is a bit too spiky and singular to be a random function: it’s a random distribution.

Distributions were envisioned by Dirac but formalized later by Laurent Schwartz and others. A distribution $D(t)$ is a bit like a function, but often distributions are too nasty to have well-defined values at points! Instead, all that makes sense are expressions like this:

$\displaystyle{ \int_{-\infty}^\infty D(t) f(t) \, d t}$

where we multiply the distribution by any compactly supported smooth function $f(t)$ and then integrate it. Indeed, we specify a distribution just by saying what all these integrals equal. For example, the Dirac delta $\delta(t)$ is a distribution defined by

$\displaystyle{ \int_{-\infty}^\infty \delta(t) f(t) \, d t = f(0) }$

If you try to imagine the Dirac delta as a function, you run into a paradox: it should be zero everywhere except at $t = 0$, but its integral should equal 1. So, if you try to graph it, the region under the graph should be an infinitely skinny infinitely tall spike whose area is 1:

But that’s impossible, at least in the standard framework of mathematics—so such a function does not really exist!
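Still, you can sneak up on the delta with honest functions. Here’s a tiny pure-Python numerical experiment of my own (the names `narrow_gaussian` and `integrate` are just illustrative): we replace $\delta(t)$ by a Gaussian spike of width $\epsilon$ and unit area, and watch its integral against a smooth test function approach $f(0)$ as $\epsilon \to 0$:

```python
import math

def narrow_gaussian(t, eps):
    """A Gaussian spike of width eps and unit area, approximating the Dirac delta."""
    return math.exp(-t**2 / (2 * eps**2)) / (eps * math.sqrt(2 * math.pi))

def integrate(g, a, b, n=100000):
    """Midpoint-rule integration of g over [a, b]."""
    h = (b - a) / n
    return sum(g(a + (k + 0.5) * h) for k in range(n)) * h

f = math.cos    # a smooth test function with f(0) = 1
for eps in (0.1, 0.01):
    # As eps shrinks, the integral approaches f(0) = 1.
    print(eps, integrate(lambda t: narrow_gaussian(t, eps) * f(t), -1.0, 1.0))
```

The area under the spike stays 1 while its width shrinks, so in the limit only the value $f(0)$ survives.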

Similarly, white noise $w(t)$ is too spiky to be an honest random function, but if we multiply it by any compactly supported smooth function $f(t)$ and integrate, we get a random variable

$\displaystyle{ \int_{-\infty}^\infty w(t) f(t) \, d t }$

whose probability distribution is a Gaussian with mean zero and standard deviation equal to

$\displaystyle{ \sqrt{ \int_{-\infty}^\infty f(t)^2 \, d t} }$

(Note: the word ‘distribution’ has a completely different meaning when it shows up in the phrase probability distribution! I’m assuming you’re comfortable with that meaning already.)

Indeed, the above formulas make sense and are true, not just when $f$ is compactly supported and smooth, but whenever

$\displaystyle{ \sqrt{ \int_{-\infty}^\infty f(t)^2 \, d t} } < \infty$

If you know about Gaussians and you know about this sort of integral, which shows up all over math and physics, you’ll realize that white noise is an extremely natural concept!
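If you want to check this numerically, here’s a little Monte Carlo sketch of mine in pure Python: we discretize white noise into independent Gaussians of standard deviation $\sqrt{\Delta t}$ (more on why below), sample $\int w(t) f(t) \, dt$ many times, and compare the sample standard deviation with $\sqrt{\int f(t)^2 \, dt}$. The variable names are my own choices:

```python
import math, random

random.seed(1)
dt = 0.005
ts = [k * dt for k in range(200)]                # a grid on [0, 1)
fs = [math.sin(2 * math.pi * t) for t in ts]     # a test function f

# Predicted standard deviation: sqrt(∫ f(t)^2 dt), here sqrt(1/2).
predicted_sd = math.sqrt(sum(v * v for v in fs) * dt)

# Sample ∫ w(t) f(t) dt repeatedly: each white-noise increment contributes
# an independent Gaussian of mean 0 and standard deviation sqrt(dt).
samples = [sum(v * random.gauss(0.0, math.sqrt(dt)) for v in fs)
           for _ in range(5000)]

mean = sum(samples) / len(samples)
sd = math.sqrt(sum((s - mean) ** 2 for s in samples) / len(samples))
print(predicted_sd, sd)   # the two standard deviations should nearly agree
```

With 5000 samples the agreement is good to a couple of percent, as expected from the usual $1/\sqrt{N}$ Monte Carlo error.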

### Brownian motion

While white noise is not a random function, its integral

$\displaystyle{ W(s) = \int_0^s w(t) \, dt }$

turns out to be a well-defined random function. It has a lot of names, including: the Wiener process, Brownian motion, red noise and—as a kind of off-color joke—brown noise.

The capital W here stands for Wiener: that is, Norbert Wiener, the famously absent-minded MIT professor who studied this random function… and also invented cybernetics:

Brownian noise sounds like this.

Puzzle: How does it sound different from white noise, and why?

It looks like this:

Here we are zooming in closer and closer, while rescaling the vertical axis as well. We see that Brownian noise is self-similar: if $W(t)$ is Brownian noise, so is $W(c t)/\sqrt{c}$ for all $c > 0$.
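It’s easy to cook up Brownian paths yourself by summing white-noise increments. Here’s a minimal pure-Python sketch of mine, with a statistical check that $W(1)$ is a Gaussian of standard deviation 1 (since the variance of $W(s)$ equals $s$):

```python
import math, random

random.seed(2)
dt, n = 0.005, 200      # simulate W on [0, 1]

def brownian_path():
    """One sample path of W(s), built by summing white-noise increments."""
    w, path = 0.0, [0.0]
    for _ in range(n):
        w += random.gauss(0.0, math.sqrt(dt))
        path.append(w)
    return path

# W(1) is Gaussian with mean 0 and variance 1, so over many paths
# the endpoints should have a standard deviation close to 1.
ends = [brownian_path()[-1] for _ in range(5000)]
mean = sum(ends) / len(ends)
sd = math.sqrt(sum((e - mean) ** 2 for e in ends) / len(ends))
print(sd)
```

Plot a few of these paths and you’ll see exactly the kind of wiggly curves shown above.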

More importantly for us, Brownian noise is the solution of this differential equation:

$\displaystyle{ \frac{d W}{d t} = w(t) }$

where $w(t)$ is white noise. This is essentially true by definition, but making it rigorous takes some work. Fancier stochastic differential equations

$\displaystyle{ \frac{d x}{d t} = f(x, t) + w(t) }$

take even more work to rigorously formulate and solve. You can read about them here:

Stochastic differential equation, Azimuth Library.

It’s actually much easier to explain the difference equations we use to approximately solve these stochastic differential equations on the computer. Suppose we discretize time into steps like this:

$t_i = i \Delta t$

where $\Delta t > 0$ is some fixed number, our ‘time step’. Then we can define

$x(t_{i+1}) = x(t_i) + f(x(t_i), t_i) \; \Delta t + w_i$

where $w_i$ are independent Gaussian random variables with mean zero and standard deviation

$\sqrt{ \Delta t}$

The square root in this formula comes from the definition I gave of white noise.

If we use a random number generator to crank out random numbers $w_i$ distributed in this way, we can write a program to work out the numbers $x(t_i)$ if we are given some initial value $x(0)$. And if the time step $\Delta t$ is small enough, we can hope to get a ‘good approximation to the true solution’. Of course defining what we mean by a ‘good approximation’ is tricky here… but I think it’s more important to just plunge in and see what happens, to get a feel for what’s going on here.
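Here’s what that recipe looks like as a short pure-Python function. This scheme is usually called the Euler–Maruyama method; the function name `simulate` and the test drive with $f(x,t) = -x$ are just my illustrative choices:

```python
import math, random

def simulate(f, x0, dt, steps, rng):
    """The difference scheme from the text:
    x(t_{i+1}) = x(t_i) + f(x(t_i), t_i) Δt + w_i,
    where each w_i is an independent Gaussian with mean 0
    and standard deviation sqrt(Δt)."""
    xs, x = [x0], x0
    for i in range(steps):
        x = x + f(x, i * dt) * dt + rng.gauss(0.0, math.sqrt(dt))
        xs.append(x)
    return xs

# A simple test drive: f(x, t) = -x, i.e. noisy relaxation toward 0.
rng = random.Random(0)
path = simulate(lambda x, t: -x, 2.0, 0.01, 1000, rng)
print(path[-1])   # a noisy value hovering somewhere near 0
```

Note the noise term is *not* multiplied by $\Delta t$: its standard deviation $\sqrt{\Delta t}$ already carries the right scaling, which is exactly the square root mentioned above.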

### Stochastic resonance

Let’s do an example of this equation:

$\displaystyle{ \frac{d x}{d t} = f(x, t) + w(t) }$

which exhibits ‘stochastic resonance’. Namely:

$\displaystyle{ \frac{d x}{d t} = x(t) - x(t)^3 + A \; \sin(t) + \sqrt{2D} w(t) }$

Here $A$ and $D$ are constants we get to choose. If we set them both to zero we get:

$\displaystyle{ \frac{d x}{d t} = x(t) - x(t)^3 }$

This has stable equilibrium solutions at $x = \pm 1$ and an unstable equilibrium in between at $x = 0$. So, this is a bistable model similar to the one we’ve been studying, but mathematically simpler!
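You can check these equilibria with a few lines of Python (a sketch of mine, using plain Euler steps since there’s no noise yet):

```python
def evolve(x, dt=0.01, steps=2000):
    """Euler integration of dx/dt = x - x^3, with no forcing and no noise."""
    for _ in range(steps):
        x = x + (x - x**3) * dt
    return x

print(evolve(0.5))    # flows to the stable equilibrium at 1
print(evolve(-0.5))   # flows to the stable equilibrium at -1
print(evolve(0.0))    # stays exactly at the unstable equilibrium 0
```

Start anywhere in the right half-line and you end up at $x = 1$; start in the left half-line and you end up at $x = -1$; only the knife-edge start at $x = 0$ stays put.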

Then we can add an oscillating time-dependent term:

$\displaystyle{ \frac{d x}{d t} = x(t) - x(t)^3 + A \; \sin(t) }$

which wiggles the system back and forth. This can make it jump from one equilibrium to another.

And then we can add on noise:

$\displaystyle{ \frac{d x}{d t} = x(t) - x(t)^3 + A \; \sin(t) + \sqrt{2D} w(t) }$

Let’s see what the solutions look like!

In the following graphs, the green curve is $A \sin(t)$, while the red curve is $x(t)$. Here is a simulation with a low level of noise:

As you can see, within the time of the simulation there is no transition from the stable state at 1 to the one at -1. If we were doing a climate model, this would be like the Earth staying in the warm state.

Here is a simulation with a high noise level:

The solution $x(t)$ jumps around wildly. Just by eyeballing the graph, you don’t see any pattern in it, do you?

But finally, here is a simulation where the noise level is not too small, and not too big:

Here we see the noise helping the solution $x(t)$ hop from around $x = -1$ to $x = 1$, or vice versa. The solution $x(t)$ is not at all ‘periodic’—it’s quite random. But still, it tends to hop back and forth thanks to the combination of the sinusoidal term $A \sin(t)$ and the noise.
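If you’d rather count hops than eyeball graphs, here’s a rough pure-Python experiment of mine (the helper `count_hops` is my own, not part of the online model below): it applies the difference scheme from earlier to this equation and counts zero crossings of $x(t)$ at various noise levels $D$:

```python
import math, random

def count_hops(A, D, dt=0.01, steps=100000, seed=3):
    """Count crossings of x = 0 for
    dx/dt = x - x^3 + A sin(t) + sqrt(2D) w(t),
    simulated with the difference scheme from earlier in this post."""
    rng = random.Random(seed)
    x, sign, hops = 1.0, 1, 0
    for i in range(steps):
        t = i * dt
        x += ((x - x**3 + A * math.sin(t)) * dt
              + math.sqrt(2 * D) * rng.gauss(0.0, math.sqrt(dt)))
        if x * sign < 0:          # crossed x = 0: a hop between the wells
            sign, hops = -sign, hops + 1
    return hops

for D in (0.01, 0.2, 2.0):
    print(D, count_hops(A=0.2, D=D))
```

With very low noise the system almost never escapes its well; with very high noise it crosses constantly and at random; in between, the hops tend to line up with the sinusoidal forcing.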

Glyn Adgie, Allan Erskine, Jim Stuttard and Tim van Beek have created an online model that lets you solve this stochastic differential equation for different values of $A$ and $D$. You can try it here:

You can change the values using the sliders under the graphic and see what happens. You can also choose different ‘random seeds’, which means that the random numbers used in the simulation will be different.