The Stochastic Resonance Program (Part 1)

guest post by David Tanzer

At the Azimuth Code Project, we are aiming to produce educational software that is relevant to the Earth sciences and the study of climate. Our present software takes the form of interactive web pages, which allow you to experiment with the parameters of models and view their outputs. But to fully understand the meaning of a program, we need to know about the concepts and theories that inform it. So we will be writing articles to explain both the programs themselves and the math and science behind them.

In this two-part series, I’ll explain this program:

Stochastic resonance.

Check it out—it runs in your browser! It was created by Allan Erskine and Glyn Adgie. In the Azimuth blog article Increasing the Signal-to-Noise Ratio with More Noise, Glyn Adgie and Tim van Beek give a nice explanation of the idea of stochastic resonance, which includes some clear and exciting graphs.

My goal today is to give a compact, developer-oriented introduction to stochastic resonance, which will set the context for the next blog article, where I’ll dissect the program itself. By way of introduction, I am a software developer with research training in computer science. It’s a new area for me, and any clarifications will be welcome!

The concept of stochastic resonance

Stochastic resonance is a phenomenon, occurring under certain circumstances, in which a noise source may amplify the effect of a weak signal. This concept was used in an early hypothesis about the timing of ice-age cycles, and has since been applied to a wide range of phenomena, including neuronal detection mechanisms and patterns of traffic congestion.

Suppose we have a signal detector whose internal, analog state is driven by an input signal, and suppose the analog states are partitioned into two regions, called “on” and “off” — this is a digital state, abstracted from the analog state. With a light switch, we could take the force as the input signal, the angle as the analog state, and the up/down classification of the angle as the digital state.

Consider the effect of a periodic input signal on the digital state. Suppose the wave amplitude is not large enough to change the digital state, yet large enough to drive the analog state close to the digital state boundary. Then, a bit of random noise, occurring near the peak of an input cycle, may “tap” the system over to the other digital state. So we will see a probability of state-transitions that is synchronized with the input signal. In a complex way, the noise has amplified the input signal.

But it’s a pretty funky amplifier! The Azimuth Library article on stochastic resonance has a picture that illustrates this.

Stochastic resonance has been found in the signal detection mechanisms of neurons. There are, for example, cells in the tails of crayfish that are tuned to low-frequency signals in the water caused by predator motions. These signals are too weak to cross the firing threshold for the neurons, but with the right amount of noise, they do trigger the neurons.

See:

Stochastic resonance, Azimuth Library.

Stochastic resonance in neurobiology, David Lyttle.

Bistable stochastic resonance and Milankovitch theories of ice-age cycles

Stochastic resonance was originally formulated in terms of systems that are bistable — where each digital state is the basin of attraction of a stable equilibrium.

An early application of stochastic resonance was to a hypothesis, within the framework of bistable climate dynamics, about the timing of the ice-age cycles. Although it has not been confirmed, it remains of interest (1) historically, (2) because the timing of ice-age cycles remains an open problem, and (3) because the Milankovitch hypothesis upon which it rests is an active part of current research.

In the bistable model, the climate states are a cold, “snowball” Earth and a hot, iceless Earth. The snowball Earth is stable because it is white, and hence reflects solar energy, which keeps it frozen. The iceless Earth is stable because it is dark, and hence absorbs solar energy, which keeps it melted.

The Milankovitch hypothesis states that the drivers of climate state change are long-duration cycles in the insolation — the solar energy received in the northern latitudes — caused by periodic changes in the Earth’s orbital parameters. The north is significant because that is where the glaciers are concentrated, and so a sufficient “pulse” in northern temperatures could initiate a state change.

Three relevant astronomical cycles have been identified:

• Changing of the eccentricity of the Earth’s elliptical orbit, with a period of 100 kiloyears

• Changing of the obliquity (tilt) of the Earth’s axis, with a period of 41 kiloyears

• Precession (swiveling) of the Earth’s axis, with a period of 23 kiloyears

In the stochastic resonance hypothesis, the Milankovitch signal is amplified by random events to produce climate state changes. In more recent Milankovitch theories, a deterministic forcing mechanism is used. In a theory by Didier Paillard, the climate is modeled with three states, called interglacial, mild glacial and full glacial, and the state changes depend on the volume of ice as well as the insolation.

See:

Milankovitch cycle, Azimuth Library.

Mathematics of the environment (part 10), John Baez. This gives an exposition of Paillard’s theory.

Bistable systems defined by a potential function

Any smooth function with two local minima can be used to define a bistable system. For instance, consider the function V(x) = x^4/4 - x^2/2, whose graph is a double well with minima at x = -1 and x = 1.

To define the bistable system, construct a differential equation where the time derivative of x is set to the negative of the derivative of the potential at x:

dx/dt = -V'(x) = -x^3 + x = x(1 - x^2)

So, for instance, where the potential graph is sloping upward as x increases, -V'(x) is negative, and this sends x(t) ‘downhill’ towards the minimum.

The roots of V'(x) yield stable equilibria at 1 and -1, and an unstable equilibrium at 0. The latter separates the basins of attraction for the stable equilibria.
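To make this concrete, here is a minimal numerical sketch (my own illustration in Python, not part of the Azimuth demo) that integrates dx/dt = -V'(x) with the Euler method:

def V_prime(x):
    return x**3 - x   # V'(x) for the potential V(x) = x^4/4 - x^2/2

dt = 0.01
for x0 in (-2.0, -0.1, 0.1, 2.0):
    x = x0
    for _ in range(5000):        # integrate out to t = 50
        x -= V_prime(x) * dt     # Euler step for dx/dt = -V'(x)
    print(f"start {x0:+.1f} -> settles at {x:+.4f}")

Every starting point rolls ‘downhill’ and settles near -1 or +1, according to which basin of attraction it began in.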

Discrete stochastic resonance

Now let’s look at a discrete-time model which exhibits stochastic resonance. This is the model used in the Azimuth demo program.

We construct the discrete-time increment \Delta X_t using the potential function, a sampled sine wave, and a normally distributed random number:

\Delta X_t = -V'(X_t) \, \Delta t + \mathrm{Wave}(t) + \mathrm{Noise}(t) =
X_t (1 - X_t^2) \, \Delta t + \alpha \sin(\omega t) + \beta \, \mathrm{GaussianSample}(t)

where \Delta t is a constant and t is restricted to multiples of \Delta t.

This equation is the discrete-time counterpart to a continuous-time stochastic differential equation.
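Here is a minimal Python sketch of this update rule. The parameter values are illustrative guesses, not the defaults of the demo program:

import numpy as np

rng = np.random.default_rng(42)

dt = 0.1                    # the constant time step, Delta t
alpha = 0.02                # amplitude of the weak periodic signal
beta = 0.15                 # amplitude of the noise
omega = 2 * np.pi / 100.0   # one signal cycle per 100 time units
steps = 10_000              # 10 signal cycles

x = np.empty(steps)
x[0] = -1.0                 # start in the left well
for i in range(steps - 1):
    t = i * dt
    drift = x[i] * (1 - x[i]**2) * dt       # -V'(X_t) * Delta t
    wave = alpha * np.sin(omega * t)        # Wave(t)
    noise = beta * rng.standard_normal()    # beta * GaussianSample(t)
    x[i + 1] = x[i] + drift + wave + noise

digital = np.sign(x)        # the digital state: which well we are in
print("state transitions:", np.count_nonzero(np.diff(digital)))

With these values the sine wave alone is too weak to push the state out of its well; adding the noise produces occasional transitions, which tend to cluster near the peaks of the wave.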

Next time, we will look into the Azimuth demo program itself.

28 Responses to The Stochastic Resonance Program (Part 1)

  1. HenryB says:

    Pretty interesting. I’m thinking if you factor in the issue of destructive harmonics, you could explain the tingling sensation experienced by diabetics.

    • David Tanzer says:

      Hi Henry, can you elaborate on your idea, and fill in some context on diabetics, and on the tingling sensation? What’s the signal that you’re thinking of? And what would the destructive harmonics be?

      • HenryB says:

        Picture insulin genes vibrating at relatively high frequencies in the healthy state. Picture the insulin genes vibrating at lower frequencies (less able to produce insulin) in the state of diabetes. If the insulin genes in the diabetic resonate at the third harmonic frequency of any given peripheral nerve, bingo! You get a tingling sensation due to destructive harmonics.

        • David Tanzer says:

          I know very little about diabetes, but I never had a picture of it being caused by the vibrational modes of certain genes.

          According to medline plus:

          The exact cause of type 1 diabetes is unknown. Most likely it is an autoimmune disorder. This is a condition that occurs when the immune system mistakenly attacks and destroys healthy body tissue. With type 1 diabetes, an infection or another trigger causes the body to mistakenly attack the cells in the pancreas that make insulin.

          http://www.nlm.nih.gov/medlineplus/ency/article/000305.htm

          According to webmd:

          Unlike people with type 1 diabetes, the bodies of people with type 2 diabetes make insulin. But either their pancreas does not make enough insulin or the body cannot use the insulin well enough.

          http://www.webmd.com/diabetes/guide/type-2-diabetes

          Is there a “vibrational” theory about insulin production that you are basing your statement upon, which you could cite, or is this something new that you are introducing? If it’s the latter, then can you spell out the main points of your theory?

        • HenryB says:

          My theory about diabetes is by analogy with the RAS Oncogene. When the RAS Oncogene goes malignant, it always drops a hydrogen bond. I think that’s enough to double the fundamental frequency of the gene. The proof the RAS gene vibrates at excessive rates in malignancy rests with the state of cachexia seen in terminal cancer patients. These poor souls die in a state of malnutrition no matter how well they eat. By my light (I agree, it’s just a theory) diabetes is the opposite side of the coin. I picture insulin genes gaining hydrogen bonds in a series of mutations taking place over time. (It just works for type 1 diabetes).

  2. Graham Jones says:

    Imagine a child’s swing on a windy day. There is no input signal, no bistability, but the swing likes to swing with a particular period, and it is being pushed by the noisy wind. I wonder if this simpler system leads to any insights about stochastic resonance?

    • John Baez says:

      This system is much easier to analyze, at least if the swing doesn’t swing very high, so we can model it as a harmonic oscillator:

      \displaystyle{\frac{d^2 x(t)}{dt^2} = - \omega_0^2 x(t) - k \frac{d x}{d t} + F(t)}

      where x is the angular displacement of the swing, t is time, \omega_0 is the natural frequency of the swing, k is the damping due to friction, and F(t) is the external (possibly random) force.

      The reason this model is easy is that it’s linear, so we can take the Fourier transform and get

      - \omega^2 \hat{x}(\omega) = - \omega_0^2 \hat{x}(\omega) + ik  \omega \hat{x}(\omega) + \hat{F}(\omega)

      and then solve this for the Fourier-transformed position of the swing:

      (\omega_0^2 - \omega^2 - ik \omega) \hat{x}(\omega) = \hat{F}(\omega)

      \displaystyle{ \hat{x}(\omega) = \frac{ \hat{F}(\omega)}{\omega_0^2 - \omega^2 - ik \omega} }

      so you see the amount of vibration of the swing at frequency \omega is proportional to the amount of force at frequency \omega, but multiplied by

      \displaystyle{ 1/(\omega_0^2 - \omega^2 - ik \omega)}

      which has a spike near \omega = \pm \omega_0 when the friction k is small. So, the swing’s motion is more excited when the random gusts of wind tend to occur at frequencies near the swing’s natural resonant frequency. One can go on and say much more, especially if we pick a specific model of the random force, e.g. white noise or pink noise.
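      A quick numerical check of this transfer function, as a sketch with made-up values for \omega_0 and k:

      omega_0 = 1.0   # natural frequency of the swing
      k = 0.1         # small damping
      for w in (0.0, 0.5, 0.9, 1.0, 1.1, 1.5, 2.0):
          gain = abs(1.0 / (omega_0**2 - w**2 - 1j * k * w))
          print(f"omega = {w:.1f}   |gain| = {gain:6.2f}")
      # |gain| spikes near omega = omega_0 = 1, and the spike
      # sharpens as the damping k shrinks.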

      The hard part about the model David is discussing is that the system is nonlinear, so we can’t solve it analytically using Fourier transforms. But this nonlinearity is inherent in the bistability; there’s no way around it if we want to think about a bistable system.

      • Graham Jones says:

        Thanks! It takes me back about 30 years, when I was able to do things like that myself.

        I was thinking about it another way. A simple harmonic oscillator goes round in ellipses in (position, momentum) space, right? With damping, it spirals in to the origin. But noise can keep it going round and round. That is qualitatively like the behaviour of the predator–prey model back in This Week’s Finds 309. The relationship to stochastic resonance is not clear to me, though.

    • John Baez says:

      This reminds me: David Tanzer was wondering how we can quantify the ‘amount of amplification’ due to stochastic resonance. Since it’s a nonlinear and stochastic system, this seems tough!

      In the simpler problem Graham just posed, we can easily say how much the swing responds to random noise at a given frequency \omega: it’s

      \displaystyle{ \frac{1}{\omega_0^2 - \omega^2 - ik \omega}}

      But in stochastic resonance we’re mainly interested in how the bistable system plus noise amplifies another signal… and mainly because of the nonlinearity, I don’t even know exactly how to make the question precise. There would be various ways, but what’s a good one? Does anyone know of work on this?

      • David Tanzer says:

        Here’s the question I was thinking of: what is the “effectiveness” of the signal transfer, expressed as a function of the signal amplitude, signal frequency, and noise amplitude? This of course leads back to the question of what the appropriate definition of effectiveness is.

        Here are some “specs” for the measure. Effective transmission should be low when the signal amplitude is insufficient to bring the system close to the digital boundary. For signals that are just sub-threshold, effectiveness should increase as noise amplitude increases — to a point — and then start decreasing as the noise starts to overwhelm the signal, and the output turns to noise. What about when signal amplitude is high, and the system is deterministically oscillating between states? Well, here the transmission is quite good. Perhaps a good effectiveness measure — which may not be captured by a single number! — will show the distinction between deterministic transmission and stochastic transmission.

        It also looks intuitively clear that the effectiveness of signal transmission will increase as the period of the input signal increases — yet there are subtle points here. With a long period, the system spends more time on each cycle close to the digital boundary, giving the noise more of an opportunity to change the state. Yet if it spends a lot of time near the boundary, there is also an increasing chance that a second noise event will cross the system back over to the other side — presumably the first noise event tapped it just past the boundary, leaving it sensitive to a small counter-tap that will send it back. So a longer period can be expected to lead to periods of “buzzing” between the two states.

        We could look at the expected number of transitions that will occur for each peak of the input signal. If it is even, that could be taken as a sign that the system has a greater chance of remaining in the same state. As the period gets longer and the expected number of transitions increases, I would predict that it converges to some probability of a transition on each cycle, which is less than 50%, since after all it started on one side of the digital boundary.

        The beauty of this subject is that these are testable hypotheses, by means of computer experiments. I’ll return to these questions in the followup blog article.
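        One such experiment, sketched here with illustrative parameters and the discrete model from the post, counts transitions per input cycle as the noise amplitude varies:

        import numpy as np

        def transitions_per_cycle(beta, alpha=0.02, dt=0.1, period=100.0,
                                  cycles=50, seed=0):
            rng = np.random.default_rng(seed)
            omega = 2 * np.pi / period
            x, count = -1.0, 0
            for i in range(int(cycles * period / dt)):
                x_new = (x + x * (1 - x**2) * dt
                         + alpha * np.sin(omega * i * dt)
                         + beta * rng.standard_normal())
                if np.sign(x_new) != np.sign(x):   # crossed the digital boundary
                    count += 1
                x = x_new
            return count / cycles

        for beta in (0.05, 0.10, 0.15, 0.25, 0.40):
            print(f"beta = {beta:.2f}  transitions/cycle = "
                  f"{transitions_per_cycle(beta):.2f}")
        # Too little noise: almost no transitions. Too much: rapid
        # "buzzing" that is uncorrelated with the signal.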

      • David Tanzer says:

        My working assumption is that this system is too complex for us to come up with compact formulas that give an analytic solution. But we can still define meaningful stochastic measures, and run computer experiments to estimate and graph their values.

        Once we give up on the pursuit of an analytical solution, the Fourier transform can be seen as a candidate tool to be used in the definition of the effectiveness function — though I’m not sure how decisive a role it would play. It’s not a linear time-invariant system, so of course we can’t hope to analyze its behavior by analyzing the input signal into its frequency components. But the spectrum of the output may still contain useful descriptive information. In the pursuit of the Milankovitch hypothesis, for example, the power spectrum of historical time series is examined for peaks at the periods of the various component cycles, notwithstanding the fact that we’re dealing with a multistable, nonlinear system.

        • John Baez says:

          Once we give up on the pursuit of an analytical solution…

          By the way, I’m not expecting an analytic solution either. (I sometimes get the feeling that computer scientists think mathematicians are dangerously obsessed with analytic solutions. I don’t think that’s true: we’re actually dangerously obsessed with proving theorems.)

          In the pursuit of the Milankovitch hypothesis, for example, the power spectrum of historical time series is examined for peaks at the periods of the various component cycles, notwithstanding the fact that we’re dealing with a multistable, nonlinear system.

          That’s true. My student Blake Pollard has even done this with a windowed Fourier transform here on Azimuth.

          So, you can drive a nonlinear stochastic system like the one you’re studying with a sine wave of a particular frequency and amplitude, take the output, take its Fourier transform and figure out its expected amplitude at the same frequency. The ratio is a simple measure of the amplification. I guess this is what you were saying here:

          [...] you could measure the expected component in the spectrum of the output at frequency B, and make an appropriate numerical comparison between that and the amplitude A.

          Certainly worth trying! It would also be nice to look at the whole graph of the expected power of the output and the expected power of the noise, to help understand this issue:

          One problem with this is that when the noise dominates the output signal, we will see a large component in the output at frequency B, which has nothing to do with the input signal.

          I don’t think stochastic resonance would be a very clear way to amplify a signal when the noise is stronger than the signal even at the signal’s own frequency. But looking at some graphs would be a good way to guess conjectures (for us mathematicians) or improved algorithms (for you computation guys).
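          Here is a sketch of that spectral measure, with illustrative parameters; averaging over many noise seeds would estimate the expected amplitude:

          import numpy as np

          rng = np.random.default_rng(1)
          dt, alpha, beta = 0.1, 0.02, 0.15
          period = 100.0
          omega = 2 * np.pi / period
          steps = 64_000    # exactly 64 signal periods

          x = np.empty(steps)
          x[0] = -1.0
          for i in range(steps - 1):
              x[i + 1] = (x[i] + x[i] * (1 - x[i]**2) * dt
                          + alpha * np.sin(omega * i * dt)
                          + beta * rng.standard_normal())

          freqs = np.fft.rfftfreq(steps, d=dt)
          amps = np.abs(np.fft.rfft(x)) * 2 / steps    # one-sided amplitudes
          k = np.argmin(np.abs(freqs - 1 / period))    # bin at the drive frequency
          print("output amplitude at drive frequency:", amps[k])
          print("amplification ratio:", amps[k] / alpha)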

        • John Baez says:

          David wrote:

          Consider the probability density of transitions, as a function of input signal phase. This contains a lot of information. For input signals way below threshold, this probability density will be uniformly low. In the parameter regions of stochastic resonance, probability density will be strongly correlated with signal phase. When noise dominates the signal, the probability density will be uniformly high, and uncorrelated with input phase. When the input signal is super-threshold, so that the system is oscillating deterministically, the probability density will consist of spikes, at the phases where the system consistently crosses between the digital states.

          This sounds like a good approach. We have to define ‘transition’ to compute the probability density of transitions, but then someone could write a program to estimate some of the things you’re talking about here, as a function of the frequency and amplitude of the signal and the amount of noise. Then as a mathematician I’d like to look at the graphs and formulate some conjectures and prove some theorems—or get someone else to prove them.

      • David Tanzer says:

        Here was my first attempt to define effectiveness of signal transmission. For a given input signal with amplitude A and frequency B, you could measure the expected component in the spectrum of the output at frequency B, and make an appropriate numerical comparison between that and the amplitude A. One problem with this is that when the noise dominates the output signal, we will see a large component in the output at frequency B, which has nothing to do with the input signal.

        Perhaps this could be refined by using a measure that quantifies the extent to which there is output power at B, and the extent to which that power is the dominant part of the spectrum.

        But this started to look hacky, so I dropped it.

      • David Tanzer says:

        Here’s a new approach.

        When stochastic resonance kicks in in this particular system, how is the input frequency made manifest in the output signal? The probability of transitions increases at the peaks of the input signal, so there is a kind of probability wave here — a stochastic wave, not a quantum wave. This is a “signature” of the input frequency. It looks like it should be detectable by spectral analysis, but that may not give a comprehensive characterization.

        Consider the probability density of transitions, as a function of input signal phase. This contains a lot of information. For input signals way below threshold, this probability density will be uniformly low. In the parameter regions of stochastic resonance, probability density will be strongly correlated with signal phase. When noise dominates the signal, the probability density will be uniformly high, and uncorrelated with input phase. When the input signal is super-threshold, so that the system is oscillating deterministically, the probability density will consist of spikes, at the phases where the system consistently crosses between the digital states.

        Also, of course, we can look at the integral of this function over a range of phases to get the expected number of transitions. If integration over the whole cycle gives a number close to 2, that is an indication that the output is directly resonating with the input.
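        A sketch estimating that density with illustrative parameters, as a histogram of transition counts over 20 bins of input phase:

        import numpy as np

        rng = np.random.default_rng(2)
        dt, alpha, beta = 0.1, 0.02, 0.15
        omega = 2 * np.pi / 100.0
        steps = 500_000                  # 500 input cycles
        bins = np.zeros(20, dtype=int)   # transition counts per phase bin

        x = -1.0
        for i in range(steps):
            phase = (omega * i * dt) % (2 * np.pi)
            x_new = (x + x * (1 - x**2) * dt + alpha * np.sin(omega * i * dt)
                     + beta * rng.standard_normal())
            if np.sign(x_new) != np.sign(x):             # a transition occurred
                bins[int(phase / (2 * np.pi) * 20)] += 1
            x = x_new

        for j, n in enumerate(bins):
            print(f"phase bin {j:2d}: {n:4d} transitions")
        # In the stochastic-resonance regime the counts bunch at the phases
        # where the wave pushes the state toward the boundary.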

      • David Tanzer says:

        The correlation between this probability density and the input signal looks like an interesting scalar characterization of the effectiveness of signal transmission.

        • David Tanzer says:

          Correction, I should have said the correlation between this probability density and the value of the input signal itself, not its phase.

        • John Baez says:

          Btw, in this last comment you switched your name from “David Tanzer” to “davidarthurtanzer”. This generates a new little picture for you, makes WordPress think you’re someone with no comments previously accepted on the blog, etc. I changed you back to “David Tanzer”. It’s no big deal, but it’s nicest to always use the same name when posting comments.

  3. John Baez says:

    There are more comments over on G+, and Steve Esterly posted a good one:

    Interesting. Stochastic resonance looks very much like dithering from the signal processing world. At first I thought they were essentially the same, but apparently whether or not dithering is an application of stochastic resonance depends on how broadly stochastic resonance is defined, and that definition is still being hashed out.

    For example, see:

    • McDonnell MD, Abbott D (2009), What is stochastic resonance? Definitions, misconceptions, debates, and its relevance to biology, PLoS Comput Biol 5 (5): e1000348. doi:10.1371/journal.pcbi.1000348 http://www.ploscompbiol.org/article/info%3Adoi%2F10.1371%2Fjournal.pcbi.1000348

    Here’s the abstract:

    Stochastic resonance is said to be observed when increases in levels of unpredictable fluctuations—e.g., random noise—cause an increase in a metric of the quality of signal transmission or detection performance, rather than a decrease. This counterintuitive effect relies on system nonlinearities and on some parameter ranges being “suboptimal”. Stochastic resonance has been observed, quantified, and described in a plethora of physical and biological systems, including neurons. Being a topic of widespread multidisciplinary interest, the definition of stochastic resonance has evolved significantly over the last decade or so, leading to a number of debates, misunderstandings, and controversies. Perhaps the most important debate is whether the brain has evolved to utilize random noise in vivo, as part of the “neural code”. Surprisingly, this debate has been for the most part ignored by neuroscientists, despite much indirect evidence of a positive role for noise in the brain. We explore some of the reasons for this and argue why it would be more surprising if the brain did not exploit randomness provided by noise—via stochastic resonance or otherwise—than if it did. We also challenge neuroscientists and biologists, both computational and experimental, to embrace a very broad definition of stochastic resonance in terms of signal-processing “noise benefits”, and to devise experiments aimed at verifying that random variability can play a functional role in the brain, nervous system, or other areas of biology.

  4. David Tanzer says:

    John Baez wrote:

    I don’t think stochastic resonance would be a very clear way to amplify a signal when the noise is stronger than the signal even at the signal’s own frequency.

    That’s the same point that I was getting at. If we have a weak signal plus a high-amplitude noise signal, then simply looking at the frequency component of the input signal in the output signal will pick up a large value, just due to the noise. If we took the ratio of this amplitude to the amplitude of the input signal, we would get a large value — but there’s no real amplification taking place here. So following this route we’d need to adjust the measure downwards whenever the input frequency is not the dominant component of the output spectrum.

    We could multiply the (expected) amplitude at frequency A by a measure that indicates the “peakedness” of the spectrum at A. Or by a global measure of how much the mass of the spectrum is concentrated in some band around A. Any suggestions for an elegant way to define these concepts would be welcomed…

    • I think that the real input signal to the system is the sum of the sine wave and the noise.

      So, for a meaningful measure of amplification, I think we need to divide the power spectral density (over a certain range of frequency) of the output by the power spectral density (over the same frequency range) of the input (noise + sine wave).

      • This really reduces to normal amplification if there is no noise, and to the signal-to-noise ratio if the input waveform is small. But I am thinking that the really relevant measure in this case is the S/N ratio rather than amplification.
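        A sketch of this PSD-ratio measure, with illustrative parameters and SciPy’s welch estimator:

        import numpy as np
        from scipy.signal import welch

        rng = np.random.default_rng(3)
        dt, alpha, beta = 0.1, 0.02, 0.15
        omega = 2 * np.pi / 100.0
        steps = 64_000

        t = np.arange(steps) * dt
        forcing = alpha * np.sin(omega * t) + beta * rng.standard_normal(steps)

        x = np.empty(steps)
        x[0] = -1.0
        for i in range(steps - 1):
            x[i + 1] = x[i] + x[i] * (1 - x[i]**2) * dt + forcing[i]

        f, P_in = welch(forcing, fs=1 / dt, nperseg=4096)
        _, P_out = welch(x, fs=1 / dt, nperseg=4096)
        band = (f > 0.005) & (f < 0.02)   # band around the drive at 0.01
        print("output/input power ratio in band:",
              P_out[band].sum() / P_in[band].sum())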

      • David Tanzer says:

        Well, there are signals, and there are signals.

        Consider that the physical input to an audio amplifier — the voltage presented at the terminal — may be the sum of a component that reflects the vibration of the microphone and a random component due to noise through the system.

        In one sense, the “manifest signal” consists of the entire input. In the other sense, we separate this out into an informational signal and a noise component. The signal-to-noise ratio is premised on treating the “signal” as one component of what is being analyzed.

      • David Tanzer says:

        Note I meant the previous message as a reply to your first point. Considering your second point now…

  5. […] 2014/05/10: The Stochastic Resonance Program (Part 1) by David Tanzer […]

  6. The ENSO may be a good example of stochastic resonance.

    http://contextearth.com/2014/05/02/the-soim-substantiating-the-chandler-wobble-and-tidal-connection-to-enso/

    Is it possible that the El Niños of ENSO are more deterministic than we think?

  7. RobWilkinson says:

    Your discussion above might be helped by looking at the use of SR for image contrast enhancement and the use of SNR measures. The following link is a good starting point:

    http://journals.cambridge.org/action/displayFulltext?type=1&pdftype=1&fid=9070340&jid=SIP&volumeId=2&issueId=-1&aid=9070337

    Try a wider search on: “Kumar Jha” OR Chouhan image enhancement

  8. Last time we introduced the concept of stochastic resonance. Briefly, it’s a way that noise can amplify a signal, by giving an extra nudge that helps a system receiving that signal make the jump from one state to another. Today we’ll describe a program that demonstrates this concept. But first, check it out:

    Stochastic resonance.
