Give the Earth a Present: Help Us Save Climate Data

28 December, 2016

[Image: the Getz Ice Shelf]

We’ve been busy backing up climate data before Trump becomes President. Now you can help too, with some money to pay for servers and storage space. Please give what you can at our Kickstarter campaign here:

Azimuth Climate Data Backup Project.

If we get $5000 by the end of January, we can save this data until we convince bigger organizations to take over. If we don’t get that much, we get nothing. That’s how Kickstarter works. Also, if you donate now, you won’t be billed until January 31st.

So, please help! It’s urgent.

I will make public how we spend this money. And if we get more than $5000, I’ll make sure it’s put to good use. There’s a lot of work we could do to make sure the data is authenticated, made easily accessible, and so on.

The idea

The safety of US government climate data is at risk. Trump plans to have climate change deniers running every agency concerned with climate change. So, scientists are rushing to back up the many climate databases held by US government agencies before he takes office.

We hope he won’t be rash enough to delete these precious records. But: better safe than sorry!

The Azimuth Climate Data Backup Project is part of this effort. So far our volunteers have backed up nearly 1 terabyte of climate data from NASA and other agencies. We’ll do a lot more! We just need some funds to pay for storage space and a server until larger institutions take over this task.

The team

• Jan Galkowski is a statistician with a strong interest in climate science. He works at Akamai Technologies, a company responsible for serving at least 15% of all web traffic. He began downloading climate data on the 11th of December.

• Shortly thereafter John Baez, a mathematician and science blogger at U. C. Riverside, joined in to publicize the project. He’d already founded an organization called the Azimuth Project, which helps scientists and engineers cooperate on environmental issues.

• When Jan started running out of storage space, Scott Maxwell jumped in. He used to work for NASA—driving a Mars rover among other things—and now he works for Google. He set up a 10-terabyte account on Google Drive and started backing up data himself.

• A couple of days later Sakari Maaranen joined the team. He’s a systems architect at Ubisecure, a Finnish firm, with access to a high-bandwidth connection. He set up a server, he’s downloading lots of data, he showed us how to authenticate it with SHA-256 hashes, and he’s managing many other technical aspects of this project.

There are other people involved too. You can watch the nitty-gritty details of our progress here:

Azimuth Backup Project – Issue Tracker.

and you can learn more here:

Azimuth Climate Data Backup Project.


Saving Climate Data (Part 3)

23 December, 2016

You can back up climate data, but how can anyone be sure your backups are accurate? Let’s suppose the databases you’ve backed up have been deleted, so that there’s no way to directly compare your backup with the original. And to make things really tough, let’s suppose that faked databases are being promoted as competitors with the real ones! What can you do?

One idea is ‘safety in numbers’. If a bunch of backups all match, and they were made independently, it’s less likely that they all suffer from the same errors.

Another is ‘safety in reputation’. If a bunch of backups of climate data are held by academic institutes of climate science, and others are held by climate change denying organizations (conveniently listed here), you probably know which one you trust more. (And this is true even if you’re a climate change denier, though your answer may be different from mine.)

But a third idea is to use a cryptographic hash function. In very simplified terms, this is a method of taking a database and computing a fairly short string from it, called a ‘digest’.

[Image: diagram of a cryptographic hash function]

A good hash function makes it hard to change the database and get a new one with the same digest. So, if the person owning a database computes and publishes the digest, anyone can check that your backup is correct by computing its digest and comparing it to the original.

It’s not foolproof, but it works well enough to be helpful.

Of course, it only works if we have some trustworthy record of the original digest. But the digest is much smaller than the original database: for example, in the popular method called SHA-256, the digest is 256 bits long. So it’s much easier to make copies of the digest than to back up the original database. These copies should be stored in trustworthy ways—for example, the Internet Archive.
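To make this concrete, here is a minimal sketch in Python of how you might check a backup against a published digest. The file name and the digest value below are made up purely for illustration.

import hashlib

def sha256_digest(path, chunk_size=1 << 20):
    """Compute the SHA-256 digest of a file, reading it in chunks."""
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            h.update(chunk)
    return h.hexdigest()

# Hypothetical values: the digest published by the data's custodians,
# and the local file we backed up.
published_digest = "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"
backup_digest = sha256_digest("metdata_backup.nc")

if backup_digest == published_digest:
    print("Backup matches the published digest.")
else:
    print("Digest mismatch: the backup may be corrupted or altered.")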

When Sakari Maaranen made a backup of the University of Idaho Gridded Surface Meteorological Data, he asked the custodians of that data to publish a digest, or ‘hash file’. One of them responded:

Sakari and others,

I have made the checksums for the UofI METDATA/gridMET files (1979-2015) as both md5sums and sha256sums.

You can find these hash files here:

https://www.northwestknowledge.net/metdata/data/hash.md5

https://www.northwestknowledge.net/metdata/data/hash.sha256

After you download the files, you can check the sums with:

md5sum -c hash.md5

sha256sum -c hash.sha256

Please let me know if something is not ideal and we’ll fix it!

Thanks for suggesting we do this!

Sakari replied:

Thank you so much! This means everything to public mirroring efforts. If you’d like to help further promoting this Best Practice, consider getting it recognized as a standard when you do online publishing of key public information.

1. Publishing those hashes is already a major improvement on its own.

2. Publishing them on a secure website offers people further guarantees that there has not been any man-in-the-middle.

3. Digitally signing the checksum files offers the best easily achievable guarantees of data integrity by the person(s) who sign the checksum files.

Please consider having these three steps included in your science organisation’s online publishing training and standard Best Practices.

Feel free to forward this message to whom it may concern. Feel free to rephrase as necessary.

As a separate item, public mirroring instructions for how to best download your data and/or public websites would further guarantee permanence of all your uniquely valuable science data and public contributions.

Right now we should get this message viral through the government funded science publishing people. Please approach the key people directly – avoiding the delay of using official channels. We need to have all the uniquely valuable public data mirrored before possible changes in funding.

Again, thank you for your quick response!

There are probably lots of things to be careful about. Here’s one. Maybe you can think of more, and ways to deal with them.

What if the data keeps changing with time? This is especially true of climate records, where new temperatures and so on are added to a database every day, or month, or year. Then I think we need to ‘time-stamp’ everything. The owners of the original database need to keep a list of digests, with the time each one was made. And when you make a copy, you need to record the time it was made.
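Here is one simple way a backup project could keep such a time-stamped record, sketched in Python. The log format and file names are just illustrative; they are not a standard anyone has adopted.

import hashlib
import json
from datetime import datetime, timezone

def record_digest(data_path, log_path="digest_log.jsonl"):
    """Append a (file, SHA-256 digest, timestamp) entry to a log file."""
    h = hashlib.sha256()
    with open(data_path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    entry = {
        "file": data_path,
        "sha256": h.hexdigest(),
        "recorded_at": datetime.now(timezone.utc).isoformat(),
    }
    with open(log_path, "a") as log:
        log.write(json.dumps(entry) + "\n")
    return entry

# Run this each time the database is updated and re-downloaded; the log
# then records the history of digests along with when each one was made.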


Algorithmic Thermodynamics (Part 3)

15 November, 2016

 

This is my talk for the Santa Fe Institute workshop on Statistical Mechanics, Information Processing and Biology:

Computation and thermodynamics.

It’s about the link between computation and entropy. I take the idea of a Turing machine for granted, but starting with that I explain recursive functions, the Church-Turing thesis, Kolmogorov complexity, the relation between Kolmogorov complexity and Shannon entropy, the uncomputability of Kolmogorov complexity, the ‘complexity barrier’, Levin’s computable version of complexity, and finally my work with Mike Stay on algorithmic thermodynamics.

In my talk slides I mention the ‘complexity barrier’, and state this theorem:

Theorem. Choose your favorite set of axioms for math. If it’s finite and consistent, there exists C ≥ 0, the complexity barrier, such that for no natural number n can you prove the Kolmogorov complexity of n exceeds C.

For a sketch of the proof of this result, go here:

Chaitin’s incompleteness theorem.

In my talk I showed a movie related to this: an animated video created in 2009 using a program less than 4 kilobytes long that runs on a Windows XP machine:

For more

For more details, read our paper:

• John Baez and Mike Stay, Algorithmic thermodynamics, Math. Struct. Comp. Sci. 22 (2012), 771-787.

or these blog articles:

Algorithmic thermodynamics (part 1).

Algorithmic thermodynamics (part 2).

They all emphasize slightly different aspects!


Compositionality Workshop

1 November, 2016

I’m excited! In early December I’m going to a workshop on ‘compositionality’, meaning how big complex things can be built by sticking together smaller, simpler parts:

Compositionality, 5-9 December 2016, workshop at the Simons Institute for the Theory of Computing, Berkeley. Organized by Samson Abramsky, Lucien Hardy and Michael Mislove.

In 2007 Jim Simons, the guy who helped invent Chern–Simons theory and then went on to make billions using math to run a hedge fund, founded a research center for geometry and physics on Long Island. More recently he’s also set up this institute for theoretical computer science, in Berkeley. I’ve never been there before.

‘Compositionality’ sounds like an incredibly broad topic, but since it’s part of a semester-long program on Logical structures in computation, this workshop will be aimed at theoretical computer scientists, who have specific ideas about compositionality. And these theoretical computer scientists tend to like category theory. After all, category theory is about morphisms, which you can compose.

Here’s the idea:

The compositional description of complex objects is a fundamental feature of the logical structure of computation. The use of logical languages in database theory and in algorithmic and finite model theory provides a basic level of compositionality, but establishing systematic relationships between compositional descriptions and complexity remains elusive. Compositional models of probabilistic systems and languages have been developed, but inferring probabilistic properties of systems in a compositional fashion is an important challenge. In quantum computation, the phenomenon of entanglement poses a challenge at a fundamental level to the scope of compositional descriptions. At the same time, compositionality has been proposed as a fundamental principle for the development of physical theories. This workshop will focus on the common structures and methods centered on compositionality that run through all these areas.

So, some physics and quantum computation will get into the mix!

A lot of people working on categories and computation will be at this workshop. The talk titles and abstracts are listed on the workshop page; here is the schedule so far, in outline.

The program

 

• Monday, December 5: talks from 9:30 am to 4 pm, followed by a discussion session and a reception.

• Tuesday, December 6: talks from 9:30 am to 4 pm, followed by a discussion session.

• Wednesday, December 7: talks from 9:30 am to 12:25 pm.

• Thursday, December 8: talks from 9:30 am to 4:50 pm.

• Friday, December 9: talks from 9:30 am to 12:35 pm, a discussion session from 2 to 3 pm, and a final session from 3 to 3:40 pm.

Complex Adaptive System Design (Part 2)

18 October, 2016


Yesterday Blake Pollard and I drove to Metron’s branch in San Diego. For the first time, I met four of the main project participants: John Foley (math), Thy Tran (programming), Tom Mifflin and Chris Boner (two higher-ups involved in the project). Jeff Monroe and Tiffany Change gave us a briefing on Metron’s ExAMS software. This lets you design complex systems and view them in various ways.

The most fundamental view is the ‘activity trace’, which consists of a bunch of parallel rows, one for each ‘performer’. Each row has a bunch of boxes which represent ‘activities’ that the performer can do. Two boxes are connected by a wire when one box’s activity causes another to occur. In general, time goes from left to right. Thus, if B can only occur after A, the box for B is drawn to the right of the box for A.

The wires can also merge via logic gates. For example, suppose activity D occurs whenever A and B but not C have occurred. Then wires coming out of the A, B, and C boxes merge in a logic gate and go into the D box. However, these gates are a bit more general than your ordinary Boolean logic gates. They may also involve ‘delays’, e.g. we can say that A occurs 10 minutes after B occurs.
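I don’t know how ExAMS represents these gates internally, but here is a rough Python sketch of the kind of structure being described: an activity that occurs when a Boolean condition on other activities holds, a fixed delay later. The names and the representation are my own invention, purely for illustration.

from dataclasses import dataclass, field

@dataclass
class Activity:
    name: str
    occurrences: list = field(default_factory=list)  # times at which it has occurred

    def occurred_by(self, t):
        return any(s <= t for s in self.occurrences)

@dataclass
class Gate:
    """Makes an output activity occur, after a fixed delay, whenever a
    Boolean condition on the input activities holds."""
    output: Activity
    condition: callable   # a function of time, e.g. built from occurred_by()
    delay: float = 0.0    # minutes between the condition holding and the output

    def step(self, t):
        if self.condition(t) and not self.output.occurred_by(t + self.delay):
            self.output.occurrences.append(t + self.delay)

# Example from the text: D occurs whenever A and B but not C have occurred,
# here with a 10-minute delay.
A, B, C, D = Activity("A"), Activity("B"), Activity("C"), Activity("D")
gate = Gate(output=D,
            condition=lambda t: A.occurred_by(t) and B.occurred_by(t)
                                and not C.occurred_by(t),
            delay=10.0)

A.occurrences.append(0.0)
B.occurrences.append(5.0)
gate.step(5.0)
print(D.occurrences)   # [15.0]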

I would like to understand the mathematics of just these logic gates, for starters. Ignoring delays for a minute (get the pun?), they seem to be giving a generalization of Petri nets. In a Petri net we only get to use the logical connective ‘and’. In other words, an activity can occur when all of some other activities have occurred. People have considered various generalizations of Petri nets, and I think some of them allow more general logical connectives, but I’m forgetting where I saw this done. Do you know?

In the full-fledged activity traces, the ‘activity’ boxes also compute functions, whose values flow along the wires and serve as inputs to other boxes. That is, when an activity occurs, it produces an output, which depends on the inputs entering the box along input wires. The output then appears on the wires coming out of that box.

I forget if each activity box can have multiple inputs and multiple outputs, but that’s certainly a natural thing.

The fun part is that one can zoom in on any activity trace, seeing more fine-grained descriptions of the activities. In this more fine-grained description each box turns into a number of boxes connected by wires. And perhaps each wire becomes a number of parallel wires? That would be mathematically natural.

Activity traces give the so-called ‘logical’ description of the complex system being described. There is also a much more complicated ‘physical’ description, saying the exact mechanical functioning of all the parts. These parts are described using ‘plugins’ which need to be carefully described ahead of time—but can then simply be used when assembling a complex system.

Our little team is supposed to be designing our own complex systems using operads, but we want to take advantage of the fact that Metron already has this working system, ExAMS. Thus, one thing I’d like to do is understand ExAMS in terms of operads and figure out how to do something exciting and new using this understanding. I was very happy when Tom Mifflin embraced this goal.

Unfortunately there’s no manual for ExAMS: the US government was willing to pay for the creation of this system, but not willing to pay for documentation. Luckily it seems fairly simple, at least the part that I care about. (There are a lot of other views derived from the activity trace, but I don’t need to worry about these.) Also, ExAMS uses some DoDAF standards which I can read about. Furthermore, in some ways it resembles UML and SysML, or more precisely, certain parts of these languages.

In particular, the ‘activity diagrams’ in UML are a lot like the activity traces in ExAMS. There’s an activity diagram at the top of this page, and another below, in which time proceeds down the page.

So, I plan to put some time into understanding the underlying math of these diagrams! If you know people who have studied them using ideas from category theory, please tell me.



The Computational Power of Chemical Reaction Networks

10 June, 2014

I’m at this workshop:

Programming with Chemical Reaction Networks: Mathematical Foundations, Banff International Research Station, 8-13 June 2014.

Luca Cardelli wrote about computation with chemical reactions in Part 26 of the network theory series here on this blog. So, it’s nice to meet him and many other researchers, learn more, and try to solve some problems together!

The first tutorial was this:

• David Soloveichik, U.C. San Francisco, The computational power of chemical reaction networks.

David works at the Center for Systems and Synthetic Biology, and their website says:

David did his graduate work with Erik Winfree at Caltech, focusing on algorithmic self-assembly and on synthetic networks of nucleic-acid interactions based on strand displacement cascades. He is interested in “molecular programming”: the systematic design of complex molecular systems based on the principles of computer science and distributed computing. More generally, he is trying to create a theoretical foundation of chemical computation applicable to both synthetic and natural systems.

According to his webpage, Soloveichik’s research interests are:

Wet-lab: the rational design of molecular interactions for synthetic biology, nanotechnology, and bioengineering. The goal is to engineer autonomous molecular systems that can sense, compute, and perform various actions. Using nucleic-acid “strand displacement cascades” as the molecular primitive, we are able to attain freedom of design that hasn’t been previously possible.

Theory: The theoretical foundation of chemical computation. Once we have a way to program molecular interactions, what programming language shall we use? How molecules can process information and carry out computation is still not well-understood; however, a formal connection to models of concurrent computation may allow systematic and scalable design, rigorous analysis and verification. Further, computational principles may elucidate the design of biological regulatory networks.

Here are my notes on his tutorial.

Motivation

We’ve got people here from different backgrounds:

• computational complexity theory
• wetlab / experimental science
• pure and applied mathematics
• software verification

CRNs (chemical reaction networks) show up in:

• chemistry
• population biology
• sensor networks
• math:
    ○ vector addition systems
    ○ Petri nets
    ○ commutative semigroups
    ○ bounded context-free languages
    ○ uniform recurrence equations

Why use them for computation? People want to go beyond the von Neumann architecture for computation. People also want to understand how cells process information. However, with a few exceptions, the computational perspective in this talk has not yet proved relevant in biology. So, there is a lot left to learn.

The model

The model of computation here will be the master equation for a chemical reaction network… since this has been explained starting in Part 4 of the network theory series, I won’t review it!

Can all chemical reaction networks, even those without any conservation laws, be realized by actual chemical systems?

Though this is a subtle question, one answer is “yes, using strand displacement cascades”. This is a trick for getting DNA to simulate other chemical reactions. It’s been carried out in the lab! See this paper and many subsequent ones:

• Soloveichik, Seelig and Winfree, DNA as a universal substrate for chemical kinetics.

Abstract: Molecular programming aims to systematically engineer molecular and chemical systems of autonomous function and ever-increasing complexity. A key goal is to develop embedded control circuitry within a chemical system to direct molecular events. Here we show that systems of DNA molecules can be constructed that closely approximate the dynamic behavior of arbitrary systems of coupled chemical reactions. By using strand displacement reactions as a primitive, we construct reaction cascades with effectively unimolecular and bimolecular kinetics. Our construction allows individual reactions to be coupled in arbitrary ways such that reactants can participate in multiple reactions simultaneously, reproducing the desired dynamical properties. Thus arbitrary systems of chemical equations can be compiled into real chemical systems. We illustrate our method on the Lotka–Volterra oscillator, a limit-cycle oscillator, a chaotic system, and systems implementing feedback digital logic and algorithmic behavior.

However, even working with the master equation for a CRN, there are various things we might mean by having it compute something:

• uniform vs non-uniform: is a single CRN supposed to handle all inputs, or do we allow adding extra reactions for larger inputs? It’s a bit like Turing machines vs Boolean circuits.

• deterministic vs probabilistic: is the correct output guaranteed or merely likely?

• halting vs stabilizing: does the CRN ‘know’ when it has finished, or not? In the ‘halting’ case the CRN irreversibly produces some molecules that signal that the computation is done. In the ‘stabilizing’ case, it eventually stabilizes to the right answer, but we may not know how long to wait.

These distinctions dramatically affect the computational power. In the case of uniform computation:

• deterministic and halting: this has finite computational power.

• deterministic and stabilizing: this can decide semilinear predicates.

• probabilistic and halting: this is Turing-universal.

• probabilistic and stabilizing: this can decide \Delta_2^0 predicates, which are more general than computable ones. (Indeed, if we use Turing machines but don’t require them to signal when they’ve halted, the resulting infinitely long computations can ‘compute’ stuff that’s not computable in the usual sense.)

Deterministic stabilizing computations

Let’s look at the deterministic stabilizing computations in a bit more detail. We’ll look at decision problems. We have a subset S \subseteq \mathbb{N}^d, and we want to answer this question: is the vector X \in \mathbb{N}^d in the set S?

To do this, we represent the vector as a bunch of molecules: X_1 of the first kind, X_2 of the second kind, and so on. We call this an input. We may also include a fixed collection of additional molecules in our input, to help the reactions run.

Then we choose a chemical reaction network, and we let it run on our input. The answer to our question will be encoded in some molecules called Y and N. If X is in S, we want our chemical reaction to produce Y molecules. If it’s not, we want our reaction to produce N’s.

To make this more precise, we need to define what counts as an output. If we’ve got a bunch of molecules that

• contains Y but not N: then the output is YES.

• contains N but not Y: then the output is NO.

Otherwise the output is undefined.

Output-stable states are states with YES or NO output such that all states reachable from them via our chemical reactions give the same output. We say an output-stable state is correct if its output is the correct answer to the question “is X in S?”

Our chemical reaction network gives a deterministic stabilizing computation if for any input, and choosing any state reachable from that input, we can do further chemical reactions to reach a correct output-stable state.

In other words: starting from our input, and letting the chemical reactions run any way they want, we will eventually stabilize at an output that gives the right answer to the question “is X in S?”

Examples

This sounds a bit complicated, but it’s really not. Let’s look at some examples!

Example 1. Suppose you want to check two numbers and see if one is greater than or equal to another. Here

S = \{(X_1,X_2) : X_2 \ge X_1 \}

How can you decide if a pair of numbers (X_1,X_2) is in this set?

You start with X_1 molecules of type A, X_2 molecules of type B, and one molecule of type Y. Then you use a chemical reaction network with these reactions:

A + Y \to N
B + N \to Y

If you let these reactions run, the Y switches to a N each time the reactions destroy an A. But the N switches back to a Y each time the reactions destroy a B.

When no more reactions are possible, we are left with either one Y or one N, which is the correct answer to your question!

Example 2. Suppose you want to check two numbers and see if one is equal to another. Here

S = \{(X_1,X_2) : X_2 = X_1 \}

How can you decide if a pair of numbers (X_1,X_2) is in here?

This is a bit harder! As before, you start with X_1 molecules of type A, X_2 molecules of type B, and one molecule of type Y. Then you use a chemical reaction network with these reactions:

A + B \to Y
Y + N \to Y
A + Y \to A + N
B + Y \to B + N

The first reaction lets an A and a B cancel out, producing a Y. If you only run this reaction, you’ll eventually be left with either a bunch of A’s or a bunch of B’s or nothing but Y’s.

If you have Y’s, your numbers were equal. The other reactions deal with the cases where you have A’s or B’s left over. But the key thing to check is that no matter what order we run the reactions, we’ll eventually get the right answer! In the end, you’ll have either Y’s or N’s, not both, and this will provide the yes-or-no answer to the question of whether X_1 = X_2.
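Both examples are easy to simulate. Here is a small Python sketch of my own (not a standard chemical reaction network package): it fires applicable reactions in a random order until none can fire, then prints the final molecules for a few inputs.

import random
from collections import Counter

def run(reactions, initial, seed=0):
    """Fire applicable reactions (given as (inputs, outputs) Counters) in a
    random order until none can fire; return the final multiset of molecules."""
    rng = random.Random(seed)
    state = Counter(initial)
    while True:
        enabled = [(ins, outs) for ins, outs in reactions
                   if all(state[s] >= n for s, n in ins.items())]
        if not enabled:
            return +state   # drop zero counts
        ins, outs = rng.choice(enabled)
        state.subtract(ins)
        state.update(outs)

# Example 1: is X_2 >= X_1?  Reactions A + Y -> N and B + N -> Y.
ex1 = [(Counter({'A': 1, 'Y': 1}), Counter({'N': 1})),
       (Counter({'B': 1, 'N': 1}), Counter({'Y': 1}))]
print(run(ex1, {'A': 3, 'B': 5, 'Y': 1}))  # contains Y, no N: YES, 5 >= 3
print(run(ex1, {'A': 5, 'B': 3, 'Y': 1}))  # contains N, no Y: NO, 3 < 5

# Example 2: is X_1 = X_2?
ex2 = [(Counter({'A': 1, 'B': 1}), Counter({'Y': 1})),
       (Counter({'Y': 1, 'N': 1}), Counter({'Y': 1})),
       (Counter({'A': 1, 'Y': 1}), Counter({'A': 1, 'N': 1})),
       (Counter({'B': 1, 'Y': 1}), Counter({'B': 1, 'N': 1}))]
print(run(ex2, {'A': 4, 'B': 4, 'Y': 1}))  # only Y's left: YES, 4 = 4
print(run(ex2, {'A': 4, 'B': 2, 'Y': 1}))  # leftover A's and N's: NO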

What deterministic stabilizing computations can do

We’ve looked at some examples of deterministic stabilizing computations. The big question is: what kind of questions can they answer?

More precisely, for what subsets S \subseteq \mathbb{N}^d can we build a deterministic stabilizing computation that ends with output YES if the input X lies in S and with output NO otherwise?

The answer is: the ‘semilinear’ subsets!

• Dana Angluin, James Aspnes and David Eisenstat, Stably computable predicates are semilinear.

A set S \subseteq \mathbb{N}^d is linear if it’s of the form

\{u_0 + n_1 u_1 + \cdots + n_p u_p : n_i \in \mathbb{N}  \}

for some fixed vectors of natural numbers u_i \in \mathbb{N}^d.

A set S \subseteq \mathbb{N}^d is semilinear if it’s a finite union of linear sets.

How did Angluin, Aspnes and Eisenstat prove their theorem? Apparently the easy part is showing that membership in any semilinear set can be decided by a chemical reaction network. David sketched the proof of the converse. I won’t go into it, but it used a very nice fact:

Dickson’s Lemma. Any subset of \mathbb{N}^d has a finite set of minimal elements, where we define x \le y if x_i \le y_i for all i.

For example, the region above and to the right of the hyperbola here has five minimal elements:
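For a finite region you can find the minimal elements directly. Here is a quick Python check, using the lattice points with x·y ≥ 8 as an illustrative region (not necessarily the hyperbola pictured); it does have five minimal elements.

from itertools import product

def minimal_elements(points):
    """Minimal elements of a finite set of points in N^d, where x <= y
    means x_i <= y_i for every coordinate i."""
    pts = list(points)
    def leq(x, y):
        return all(a <= b for a, b in zip(x, y))
    return [p for p in pts if not any(leq(q, p) and q != p for q in pts)]

# Illustrative region: lattice points (x, y) with 1 <= x, y <= 20 and x*y >= 8.
# (Points with larger coordinates are all dominated by one of the minimal
# elements below, so truncating the region does not change the answer.)
region = [(x, y) for x, y in product(range(1, 21), repeat=2) if x * y >= 8]
print(sorted(minimal_elements(region)))
# [(1, 8), (2, 4), (3, 3), (4, 2), (8, 1)]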

If you know some algebra, Dickson’s lemma should remind you of the Hilbert basis theorem, saying (for example) that every ideal in a ring of multivariable polynomials over a field is finitely generated. And in fact, Paul Gordan used Dickson’s Lemma in 1899 to help give a proof of Hilbert’s basis theorem.

It’s very neat to see how this lemma applies to chemical reaction networks! You can see how it works in Angluin, Aspnes and Eisenstat’s paper. But they call it “Higman’s lemma” for some reason.

References

Here are some of David Soloveichik’s recent talks:

• An introduction to strand displacement cascades for the Foresight Institute Conference (Palo Alto, Jan 2013): An artificial “biochemistry” with DNA.

• Paper presented at DNA Computing and Molecular Programming 18 (Aarhus, Denmark, Aug 2012): Deterministic function computation with chemical reaction networks.

• Tutorial talk for DNA Computing and Molecular Programming 17 (Pasadena, Aug 2011): The programming language of chemical kinetics, and how to discipline your DNA molecules using strand displacement cascades.

• High-level introduction to algorithmic self-assembly and stochastic chemical reaction networks as computer-theoretic models: Computer-theoretic abstractions for molecular programming.

• On algorithmic behavior in chemical reaction networks and implementing arbitrary chemical reaction networks with DNA: programming well-mixed chemical kinetics.


The Stochastic Resonance Program (Part 1)

10 May, 2014

guest post by David Tanzer

At the Azimuth Code Project, we are aiming to produce educational software that is relevant to the Earth sciences and the study of climate. Our present software takes the form of interactive web pages, which allow you to experiment with the parameters of models and view their outputs. But to fully understand the meaning of a program, we need to know about the concepts and theories that inform it. So we will be writing articles to explain both the programs themselves and the math and science behind them.

In this two-part series, I’ll explain this program:

Stochastic resonance.

Check it out—it runs on your browser! It was created by Allan Erskine and Glyn Adgie. In the Azimuth blog article Increasing the Signal-to-Noise Ratio with More Noise, Glyn Adgie and Tim van Beek give a nice explanation of the idea of stochastic resonance, which includes some clear and exciting graphs.

My goal today is to give a compact, developer-oriented introduction to stochastic resonance, which will set the context for the next blog article, where I’ll dissect the program itself. By way of introduction, I am a software developer with research training in computer science. It’s a new area for me, and any clarifications will be welcome!

The concept of stochastic resonance

Stochastic resonance is a phenomenon, occurring under certain circumstances, in which a noise source may amplify the effect of a weak signal. This concept was used in an early hypothesis about the timing of ice-age cycles, and has since been applied to a wide range of phenomena, including neuronal detection mechanisms and patterns of traffic congestion.

Suppose we have a signal detector whose internal, analog state is driven by an input signal, and suppose the analog states are partitioned into two regions, called “on” and “off” — this is a digital state, abstracted from the analog state. With a light switch, we could take the force as the input signal, the angle as the analog state, and the up/down classification of the angle as the digital state.

Consider the effect of a periodic input signal on the digital state. Suppose the wave amplitude is not large enough to change the digital state, yet large enough to drive the analog state close to the digital state boundary. Then, a bit of random noise, occurring near the peak of an input cycle, may “tap” the system over to the other digital state. So we will see a probability of state-transitions that is synchronized with the input signal. In a complex way, the noise has amplified the input signal.

But it’s a pretty funky amplifier! Here is a picture from the Azimuth library article on stochastic resonance:

Stochastic resonance has been found in the signal detection mechanisms of neurons. There are, for example, cells in the tails of crayfish that are tuned to low-frequency signals in the water caused by predator motions. These signals are too weak to cross the firing threshold for the neurons, but with the right amount of noise, they do trigger the neurons.

See:

Stochastic resonance, Azimuth Library.

Stochastic resonance in neurobiology, David Lyttle.

Bistable stochastic resonance and Milankovitch theories of ice-age cycles

Stochastic resonance was originally formulated in terms of systems that are bistable — where each digital state is the basin of attraction of a stable equilibrium.

An early application of stochastic resonance was to a hypothesis, within the framework of bistable climate dynamics, about the timing of the ice-age cycles. Although it has not been confirmed, it remains of interest (1) historically, (2) because the timing of ice-age cycles remains an open problem, and (3) because the Milankovitch hypothesis upon which it rests is an active part of the current research.

In the bistable model, the climate states are a cold, “snowball” Earth and a hot, iceless Earth. The snowball Earth is stable because it is white, and hence reflects solar energy, which keeps it frozen. The iceless Earth is stable because it is dark, and hence absorbs solar energy, which keeps it melted.

The Milankovitch hypothesis states that the drivers of climate state change are long-duration cycles in the insolation — the solar energy received in the northern latitudes — caused by periodic changes in the Earth’s orbital parameters. The north is significant because that is where the glaciers are concentrated, and so a sufficient “pulse” in northern temperatures could initiate a state change.

Three relevant astronomical cycles have been identified:

• Changing of the eccentricity of the Earth’s elliptical orbit, with a period of 100 kiloyears

• Changing of the obliquity (tilt) of the Earth’s axis, with a period of 41 kiloyears

• Precession (swiveling) of the Earth’s axis, with a period of 23 kiloyears

In the stochastic resonance hypothesis, the Milankovitch signal is amplified by random events to produce climate state changes. In more recent Milankovitch theories, a deterministic forcing mechanism is used. In a theory by Didier Paillard, the climate is modeled with three states, called interglacial, mild glacial and full glacial, and the state changes depend on the volume of ice as well as the insolation.

See:

Milankovitch cycle, Azimuth Library.

Mathematics of the environment (part 10), John Baez. This gives an exposition of Paillard’s theory.

Bistable systems defined by a potential function

Any smooth function with two local minima can be used to define a bistable system. For instance, consider the function V(x) = x^4/4 - x^2/2:

To define the bistable system, construct a differential equation where the time derivative of x is set to the negative of the derivative of the potential at x:

dx/dt = -V'(x) = -x^3 + x = x(1 - x^2)

So, for instance, where the potential graph is sloping upward as x increases, -V'(x) is negative, and this sends X(t) ‘downhill’ towards the minimum.

The roots of V'(x) yield stable equilibria at 1 and -1, and an unstable equilibrium at 0. The latter separates the basins of attraction for the stable equilibria.
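A few lines of Python (an illustrative sketch, not code from the Azimuth program) confirm this: forward-Euler integration of dx/dt = -V'(x) carries any nonzero starting point to one of the stable equilibria, with the sign of the starting point deciding which.

def V_prime(x):
    # V(x) = x**4/4 - x**2/2, so V'(x) = x**3 - x
    return x**3 - x

def settle(x0, dt=0.01, steps=2000):
    """Integrate dx/dt = -V'(x) with the forward Euler method."""
    x = x0
    for _ in range(steps):
        x -= V_prime(x) * dt
    return x

for x0 in (-2.0, -0.1, 0.1, 2.0):
    print(x0, "->", round(settle(x0), 4))
# Negative starting points settle near -1, positive ones near +1;
# x0 = 0 would stay at the unstable equilibrium.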

Discrete stochastic resonance

Now let’s look at a discrete-time model which exhibits stochastic resonance. This is the model used in the Azimuth demo program.

We construct the discrete-time derivative, using the potential function, a sampled sine wave, and a normally distributed random number:

\Delta X_t = -V'(X_t) \, \Delta t + \mathrm{Wave}(t) + \mathrm{Noise}(t) = X_t (1 - X_t^2) \, \Delta t + \alpha \sin(\omega t) + \beta \, \mathrm{GaussianSample}(t)

where \Delta t is a constant and t is restricted to multiples of \Delta t.

This equation is the discrete-time counterpart to a continuous-time stochastic differential equation.
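Here is a minimal Python simulation of this update rule. It is my own stand-in, not the actual Azimuth demo code, and the parameter values below are guesses chosen so that the periodic forcing alone is too weak to flip the state while the noise occasionally does.

import math
import random

def simulate(alpha=0.01, beta=0.15, omega=2 * math.pi / 100.0,
             dt=0.1, steps=5000, seed=1):
    """Discrete-time double-well model:
    dX = X(1 - X^2) dt + alpha*sin(omega*t) + beta*N(0, 1)."""
    rng = random.Random(seed)
    x, t, xs = 1.0, 0.0, []
    for _ in range(steps):
        dx = (x * (1.0 - x * x) * dt
              + alpha * math.sin(omega * t)
              + beta * rng.gauss(0.0, 1.0))
        x += dx
        t += dt
        xs.append(x)
    return xs

xs = simulate()
# Rough count of transitions between the two wells (sign changes of X):
transitions = sum(1 for a, b in zip(xs, xs[1:]) if a * b < 0)
print("well-to-well transitions:", transitions)
# With beta = 0 this forcing is too weak to flip the state at all; with
# moderate noise, flips happen and are more likely when the forcing is
# pushing the state toward the barrier.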

Next time, we will look into the Azimuth demo program itself.