The Stochastic Resonance Program (Part 1)

10 May, 2014

guest post by David Tanzer

At the Azimuth Code Project, we are aiming to produce educational software that is relevant to the Earth sciences and the study of climate. Our present software takes the form of interactive web pages, which allow you to experiment with the parameters of models and view their outputs. But to fully understand the meaning of a program, we need to know about the concepts and theories that inform it. So we will be writing articles to explain both the programs themselves and the math and science behind them.

In this two-part series, I’ll explain this program:

Stochastic resonance.

Check it out—it runs on your browser! It was created by Allan Erskine and Glyn Adgie. In the Azimuth blog article Increasing the Signal-to-Noise Ratio with More Noise, Glyn Adgie and Tim van Beek give a nice explanation of the idea of stochastic resonance, which includes some clear and exciting graphs.

My goal today is to give a compact, developer-oriented introduction to stochastic resonance, which will set the context for the next blog article, where I’ll dissect the program itself. By way of introduction, I am a software developer with research training in computer science. It’s a new area for me, and any clarifications will be welcome!

The concept of stochastic resonance

Stochastic resonance is a phenomenon, occurring under certain circumstances, in which a noise source may amplify the effect of a weak signal. This concept was used in an early hypothesis about the timing of ice-age cycles, and has since been applied to a wide range of phenomena, including neuronal detection mechanisms and patterns of traffic congestion.

Suppose we have a signal detector whose internal, analog state is driven by an input signal, and suppose the analog states are partitioned into two regions, called “on” and “off” — this is a digital state, abstracted from the analog state. With a light switch, we could take the force as the input signal, the angle as the analog state, and the up/down classification of the angle as the digital state.

Consider the effect of a periodic input signal on the digital state. Suppose the wave amplitude is not large enough to change the digital state, yet large enough to drive the analog state close to the digital state boundary. Then, a bit of random noise, occurring near the peak of an input cycle, may “tap” the system over to the other digital state. So we will see a probability of state-transitions that is synchronized with the input signal. In a complex way, the noise has amplified the input signal.

But it’s a pretty funky amplifier! Here is a picture from the Azimuth library article on stochastic resonance:

Stochastic resonance has been found in the signal detection mechanisms of neurons. There are, for example, cells in the tails of crayfish that are tuned to low-frequency signals in the water caused by predator motions. These signals are too weak to cross the firing threshold for the neurons, but with the right amount of noise, they do trigger the neurons.


Stochastic resonance, Azimuth Library.

Stochastic resonance in neurobiology, David Lyttle.

Bistable stochastic resonance and Milankovitch theories of ice-age cycles

Stochastic resonance was originally formulated in terms of systems that are bistable — where each digital state is the basin of attraction of a stable equilibrium.

An early application of stochastic resonance was to a hypothesis, within the framework of bistable climate dynamics, about the timing of the ice-age cycles. Although it has not been confirmed, it remains of interest (1) historically, (2) because the timing of ice-age cycles remains an open problem, and (3) because the Milankovitch hypothesis upon which it rests is an active part of the current research.

In the bistable model, the climate states are a cold, “snowball” Earth and a hot, iceless Earth. The snowball Earth is stable because it is white, and hence reflects solar energy, which keeps it frozen. The iceless Earth is stable because it is dark, and hence absorbs solar energy, which keeps it melted.

The Milankovitch hypothesis states that the drivers of climate state change are long-duration cycles in the insolation — the solar energy received in the northern latitudes — caused by periodic changes in the Earth’s orbital parameters. The north is significant because that is where the glaciers are concentrated, and so a sufficient “pulse” in northern temperatures could initiate a state change.

Three relevant astronomical cycles have been identified:

• Changing of the eccentricity of the Earth’s elliptical orbit, with a period of 100 kiloyears

• Changing of the obliquity (tilt) of the Earth’s axis, with a period of 41 kiloyears

• Precession (swiveling) of the Earth’s axis, with a period of 23 kiloyears
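To make the periods above concrete, here is a minimal sketch of a combined forcing signal: a sum of one sinusoid per orbital cycle. The equal unit amplitudes are placeholders for illustration only, not physical values.

```python
import math

# Periods of the three orbital cycles, in kiloyears (from the list above).
PERIODS_KYR = {"eccentricity": 100.0, "obliquity": 41.0, "precession": 23.0}

def insolation_anomaly(t_kyr, amplitudes=None):
    """Toy insolation anomaly at time t (in kiloyears): a sum of sinusoids,
    one per orbital cycle. Equal unit amplitudes are a placeholder."""
    amplitudes = amplitudes or {name: 1.0 for name in PERIODS_KYR}
    return sum(amplitudes[name] * math.sin(2 * math.pi * t_kyr / period)
               for name, period in PERIODS_KYR.items())
```

Because the three periods are incommensurate, the combined signal is quasi-periodic: its peaks line up only occasionally, which is part of what makes the timing question subtle.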

In the stochastic resonance hypothesis, the Milankovitch signal is amplified by random events to produce climate state changes. In more recent Milankovitch theories, a deterministic forcing mechanism is used. In a theory by Didier Paillard, the climate is modeled with three states, called interglacial, mild glacial and full glacial, and the state changes depend on the volume of ice as well as the insolation.


Milankovitch cycle, Azimuth Library.

Mathematics of the environment (part 10), John Baez. This gives an exposition of Paillard’s theory.

Bistable systems defined by a potential function

Any smooth function with two local minima can be used to define a bistable system. For instance, consider the function V(x) = x^4/4 - x^2/2:

To define the bistable system, construct a differential equation where the time derivative of x is set to the negative of the derivative of the potential at x:

dx/dt = -V'(x) = -x^3 + x = x(1 - x^2)

So, for instance, where the potential graph is sloping upward as x increases, -V'(x) is negative, and this sends x(t) ‘downhill’ towards the minimum.

The roots of V'(x) yield stable equilibria at 1 and -1, and an unstable equilibrium at 0. The latter separates the basins of attraction for the stable equilibria.
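These claims are easy to check numerically. Here is a small sketch that finds the equilibria as roots of V'(x) and classifies them by the sign of the second derivative (positive curvature means a local minimum of the potential, hence a stable equilibrium):

```python
# Potential V(x) = x^4/4 - x^2/2, so V'(x) = x^3 - x and V''(x) = 3x^2 - 1.
def V(x):   return x**4 / 4 - x**2 / 2
def dV(x):  return x**3 - x
def d2V(x): return 3 * x**2 - 1

equilibria = [-1.0, 0.0, 1.0]   # roots of V'(x) = x(x - 1)(x + 1)
for x0 in equilibria:
    kind = "stable" if d2V(x0) > 0 else "unstable"
    print(f"x = {x0:+.0f}: V'(x) = {dV(x0):.0f}, {kind}")
```

This prints that -1 and 1 are stable and 0 is unstable, matching the picture of two wells separated by a hump.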

Discrete stochastic resonance

Now let’s look at a discrete-time model which exhibits stochastic resonance. This is the model used in the Azimuth demo program.

We construct the discrete-time derivative, using the potential function, a sampled sine wave, and a normally distributed random number:

\Delta X_t = -V'(X_t)\,\Delta t + \mathrm{Wave}(t) + \mathrm{Noise}(t) = X_t (1 - X_t^2)\,\Delta t + \alpha \sin(\omega t) + \beta\,\mathrm{GaussianSample}(t)

where \Delta t is a constant and t is restricted to multiples of \Delta t.

This equation is the discrete-time counterpart to a continuous-time stochastic differential equation.
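The update rule above is straightforward to iterate. Here is a minimal sketch; the parameter values alpha, omega, beta, and dt are illustrative choices of mine, not the ones used in the Azimuth demo:

```python
import math
import random

def simulate(alpha=0.1, omega=0.05, beta=0.25, dt=0.1, steps=5000, seed=1):
    """Iterate Delta X = X(1 - X^2) dt + alpha sin(omega t) + beta N(0,1),
    starting in the basin of the stable equilibrium at x = 1."""
    rng = random.Random(seed)
    x, t, xs = 1.0, 0.0, []
    for _ in range(steps):
        x += x * (1 - x**2) * dt + alpha * math.sin(omega * t) \
             + beta * rng.gauss(0, 1)
        t += dt
        xs.append(x)
    return xs

xs = simulate()
# Digital-state transitions are sign changes of x (crossings of the
# unstable equilibrium at 0):
transitions = sum(1 for a, b in zip(xs, xs[1:]) if (a < 0) != (b < 0))
```

With beta = 0 and small alpha, no transitions occur at all; with a suitable amount of noise, the transition times tend to cluster near the peaks of the sine wave, which is the stochastic resonance effect.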

Next time, we will look into the Azimuth demo program itself.

2014 on Azimuth

31 December, 2013


Happy New Year! We’ve got some fun guest posts lined up for next year, including:

Marc Harper, Relative entropy in evolutionary dynamics.

Marc Harper uses ideas from information theory in his work on bioinformatics and evolutionary game theory. This article explains some of his new work. And as a warmup, it explains how relative entropy can serve as a Lyapunov function in evolution!

This includes answering the question:

“What is a Lyapunov function, and why should I care?”

The brief answer, in case you’re eager to know, is this. A Lyapunov function is something that always increases—or always decreases—as time goes on. Examples include entropy and free energy. So, a Lyapunov function can be a way of making the 2nd law of thermodynamics mathematically precise! It’s also a way to show things are approaching equilibrium.

The overall goal here is applying entropy and information theory to better understand the behavior of biological and ecological systems. And in April 2015, Marc Harper and I are helping run a workshop on this topic! We’re doing this with John Harte, an ecologist who uses maximum entropy methods to predict the distribution, abundance and energy usage of species. It should be really interesting!

But back to blog articles:

Manoj Gopalkrishnan, Lyapunov functions for complex-balanced systems.

Manoj Gopalkrishnan is a mathematician at the Tata Institute of Fundamental Research in Mumbai who works on problems coming from chemistry and biology. This post will explain his recent paper on a Lyapunov function for chemical reactions. This function is closely related to free energy, a concept from thermodynamics. So again, one of the overall goals is to apply entropy to better understand living systems.

Since some evolutionary games are isomorphic to chemical reaction networks, this post should be connected to Marc’s. But there’s some mental work left to make the connection—for me, at least. It should be really cool when it all fits together!

Alastair Jamieson-Lane, Stochastic cross impact balance analysis.

Alastair Jamieson-Lane is a mathematician in the master’s program at the University of British Columbia. Very roughly, this post is about a method for determining which economic scenarios are more likely. The likely scenarios get fed into things like the IPCC climate models, so this is important.

This blog article has an interesting origin. Vanessa Schweizer has a bachelor’s degree in physics, a master’s in environmental studies, and a PhD in engineering and public policy. She now works at the University of Waterloo on long-term decision-making problems.

A while back, I met Vanessa at a workshop called What Is Climate Change and What To Do About It?, at the Balsillie School of International Affairs, which is in Waterloo. She described her work with Alastair Jamieson-Lane and the physicist Matteo Smerlak on stochastic cross impact balance analysis. It sounded really interesting, something I’d like to work on. So I solicited some blog articles from them. I hope this is just the first!

So: Happy New Year, and good reading!

Also: we’re always looking for good guest posts here on Azimuth, and we have a system for helping you write them. So, if you know something interesting about environmental or energy issues, ecology, biology or chemistry, consider giving it a try!

If you read some posts here, especially guest posts, you’ll get an idea of what we’re looking for. David Tanzer, a software developer in New York who is very active in the Azimuth Project these days, made an organized list of Azimuth blog posts here:

Azimuth Blog Overview.

You can see the guest posts listed by author. This overview is also great for catching up on old posts!

Azimuth Blog Overview

6 September, 2013

We’ve got lots of series of articles on this blog. Some people say it’s a bit overwhelming. So David Tanzer of the Azimuth Project had a good idea: create an organized list of the articles on this blog, to make them easier to find. Here it is:

Azimuth Blog overview.

You can also find a link to this on top of the “ALSO READ THESE” list at the right-hand side of this blog!

Needless to say, this could be improved in many ways. Don’t say how: just do it!

What To Do? (Part 2)

28 August, 2013

Dear John,

If you could do anything to change the world what would you do? Many people haven’t had the opportunity to ponder that question because they have been busy studying what could be possible within a particular set of resource constraints. However, what if we push the limits? If all the barriers were removed, then what would you do?

The XXXXXXXXX Foundation has an open, aggressive, and entrepreneurial approach to philanthropy. Our goal is to produce substantial, widespread and lasting changes to society that will maximize opportunity and minimize injustice. We tap into the minds of fearless thinkers who have big, bold, transformational ideas, and work with them to invest in strategies designed to solve persistent problems.

Our team is reaching out to you because we believe you are the type of innovative thinker with ideas that just might change the world. While this is not a promise of grant funding, it is an invitation to share your ideas. You can learn more about the XXXXXXXXX Foundation by visiting our website. Thank you for your interest and I look forward to hearing your ideas.


I got this email yesterday. While I have some ideas, I really want to make the most of this chance. So: what would you do if you got this sort of opportunity? To keep things simple, let’s assume this is a legitimate email from a truly well-meaning organization—I’m checking that, and it seems to be so. Assume they could spend 1-10 million dollars on a really good idea, and assume you really want to help the world. What idea would you suggest?

Some ideas

Here are some comments from G+ to get your thoughts going. Heather Vandagriff wrote:

Hard core grassroots organization toward political involvement and education on climate issues. 

Jason Holt wrote:

Ideas are cheap.

Borislav Iordanov wrote:

I don’t agree that ideas are cheap. It could take a lifetime to have a really good one. However, one could argue that really good ideas are probably already funded. But if to maximize opportunity and to minimize injustice is the motivation, I say government transparency should be top priority. I can google the answer to almost any technical or scientific question, any historical fact, or pop culture, you name it. But I can’t know what my government is doing. And I’m not talking only, or even mostly, about things that governments hide. I’m talking about mundane day-to-day operations that are potentially not conducted in the best interest of the people, knowingly or unknowingly. I can easily find what are the upcoming concerts or movies, but it’s much harder to find out what, for instance, my local government is currently discussing so I can perhaps stop by the commissioner chamber and have my voice being heard (why aren’t there TV commercials about the public hearing of the next city ordinance?).

I realize this is not a concrete idea, but there are plenty of projects in that direction around the internet. And I don’t think such projects should come only from within government agencies because there is a conflict of interest.

Bottom line is that any sustainable, permanent change towards a better society has to involve the political process in some way, and the best (peaceful!) way to enact change there starts with real and consequential openness. Didn’t expect to write so much, sorry…

John Baez wrote:

Borislav Iordanov wrote:

But if to maximize opportunity and to minimize injustice is the motivation, I say government transparency should be top priority.

That’s a great idea… and in fact, this foundation already has a project to promote government transparency. So, I’ll either need to come up with a specific way to promote it that they haven’t thought about, or come up with something else.

Noon Silk wrote:

I guess the easy answer is some sort of education program; educating people in some way so-as to generate the skills necessary to do the thing that you really want to do. So I don’t know. Perhaps part of it could be some sort campaign to get a few coursera et al courses on climate maths, etc, and building some sort of innovative and exciting program around that.

Richard Lucas wrote:

Use existing corporate law (thanks, Capitalists!) to create collectives (maybe non-profits?) into which people could elect to participate. Participation would be contingent upon adoption of a certain set of standards for behavior impossible in the broader, geographical society in which we are immersed. Participants would enjoy a guaranteed minimum income, health care, etc – the goals of Communism, but in a limited scope, applied to participants who also exist in the general society. It’s just that participants would agree to share time, resources, and expertise with the collective. If collective living can’t be made to work in such an environment, where participation could be relatively selective up front, to include the honest and the committed…. well, then it can’t work. When the right formula is established, and the standard of living for participants is greater than for peers who are not “participants”, then you can expect more people to join. A tipping point would eventually be reached, where the majority of citizens in the broader, geographical society were also participants in an optional, voluntary, supersociety which does not respect geographic or national boundaries.

This is the only way it will work, and the beauty is that Communists and Objectivists equally hate this idea, because it breaks their frames, and because it is legal, and because if the larger society tried to block it, they would then have to justify the ability of crazy UFO cults and religions to do it. So, it can’t be stopped. There’s no theory to defend. You just do it.

Xah Lee wrote:

put the $10M to increase internet penetration, or in other ways enhance communication such as cell phone.

absolutely do not do anything that’s normally considered as good or helpful to humanity. such as help africa, women, the weak, the cripple, poor, vaccine, donation, institutionalized education etc.

even though, i’m still doubtful there’d be any improvement of humanity. $10M is like a peanut for this. One missile is $10M… 

John Baez wrote:

Xah Lee wrote:

even though, i’m still doubtful there’d be any improvement of humanity. $10M is like a peanut for this.

There are certain activities where the benefit is roughly proportional to the amount of money spent – like, giving people bed-netting that repels malaria-carrying mosquitos, or buying textbooks for students. For such activities, $10 million is often not enough to get the job done.

But there are other activities where $10 million is the difference between some good self-perpetuating phenomenon starting to happen, and it not starting to happen. This is the kind of thing I should suggest.

It’s the difference between pushing a large rock up a long hill, and starting a rock rolling down a hill.

By the way, this foundation plans to spend a lot more than $10M in total. I just want to suggest a project that will seem cheap to them, to increase the chance that they actually pursue it.

Piotr Migdal wrote:

I think that the thing that needs a drastic change in the education system. I suggest founding a “hacker university” (or “un-university”).

The educational system was designed for preparation of soldiers and factory workers. Now the job market is very different, and one cannot hope to work in one factory for his/her lifetime. Additionally, the optimal skill set is not necessarily the same for everyone (and it changes, as the World changes). But the worst thing is that schools teaches that “take no initiative, just obey” which stops working once one needs to find a job. Plus, for more creative tasks usually the top-down approach is the worst one (contrasting with the coordination tasks).

While changing the whole system may be hard, let’s think about universities; or a… un-university. Instead of attending predefined classes, let’s do the following:
• based on self-learning,
• lectures are because someone is willing to give them,
• everything voluntary (e.g. lectures and classes),
• own projects highly encouraged, starting from day one.

So basically, a collection of people who actually want to learn (!= earn a degree / prestige / position / fame), perform research which they consider the most fascinating (not merely doing science which is currently most fashionable and well-funded or “my advisor/grant/dean told so”) and undertake projects for greater good (startup-like freedom (unexperienceable in the current academia, at least – for the young) for things not necessarily giving monetary profit).

Sure, you may argue that there are more important goals (unemployment, bureaucracy, poverty, wars, ongoing destruction of natural environment – to name only a few in no particular order). But this one can be a nucleus for solving many other problems – wider in education and in general. And such a spark may yield in an unimaginable firestorm (a bad metaphor, it has to be about creation) seed can grow, flourish and make deserts blossom.


By founding I don’t mean paying for administration. Quite opposite – just rent a building, nothing more (so no tuition and no renting cost for students, to make it accessible regardless of the background). Almost all stuff (e.g. admission) in the first years based entirely on voluntary work.

John Baez wrote:

Noon Silk wrote: “I guess the easy answer is some sort of education program…”

That sounds good. The foundation already has a program to improve K-12 education in the United States. So, when it comes to education, I’d either need to give them ideas they haven’t tried in that realm, get them interested in education outside the US, or get them interested in post-secondary education. Piotr Migdal’s idea of a ‘hacker university’ might be one approach. It also seems the potential of free worldwide online courses has not yet been fully exploited. 

Piotr Migdal wrote:

The point is in going well beyond online courses (which, IMHO, are nice but not that revolutionary – textbooks are there for quite a few years; I consider things like Usenet, Wikipedia and StackExchange way more impactful for education) – by gathering a critical mass of intelligent and passionate people. But anyway, it may be the right time for innovations in education (and not only small patches).

Robert Byrne wrote:

Firstly, thanks for sharing this John! Secondly, congratulations on being chosen!

I would look into three aspects of this. 1) Who funds it, and whether you are comfortable with that, 2) do they choose candidates and generally have processes that make use of the experience of similar organizations such as MacArthur?, 3) what limits are there on using the grant — could you design your own prize to solve a problem using these funds?

But you’ve asked for ideas. The biggest problems that can be fixed/improved for $5 million! I’ll stick to education and technology. Here are some areas:
• Education reform in the U.S., think-tanks or writers who can create a model to switch away from municipal public education funding, with the aim of reducing disadvantage,
• Office, factory and home power efficiency technology, anything that needs $1 million to get to prototype,
• Solve the commute/car problem — e.g. how can more people work within the suburb in which they live? How can public transit be useful in sprawling suburbs?

John Baez wrote:

Robert Byrne wrote:

Firstly, thanks for sharing this John! Secondly, congratulations on being chosen!

Thanks! I’ve been chosen to give them ideas.

“I would look into three aspects of this. 1) Who funds it, and whether you are comfortable with that, 2) do they choose candidates and generally have processes that make use of the experience of similar organizations such as MacArthur?, 3) what limits are there on using the grant — could you design your own prize to solve a problem using these funds?”

Thanks – I definitely plan to look the gift horse in the mouth. They didn’t say anything about giving me a grant, except to say “this is not a promise of a grant”.

So, right now I’m treating this as an exercise in coming up with a really good idea that I’m happy to give away and let someone try. Naturally there’s a self-serving part of me that wants to pick an idea where my participation would be required. But knowing me, I’ll actually be happiest if I can catalyze something good in a limited amount of time and then think about other things.

“Solve the commute/car problem — e.g. how can more people work within the suburb in which they live? How can public transit be useful in sprawling suburbs?”

My wife Lisa raised this one. I would love to do something about this. But what can be done for just 1-10 million dollars? To do something good in this field with that amount of money, it seems we’d need to have a really smart idea: something where a small change can initiate some sort of chain reaction. Any specific ideas?

And so on…

In some ways this post is a followup to What To Do (Part 1), so if you haven’t read that, you might want to now.

Prospects for a Green Mathematics

15 February, 2013

contribution to the Mathematics of Planet Earth 2013 blog by John Baez and David Tanzer

It is increasingly clear that we are initiating a sequence of dramatic events across our planet. They include habitat loss, an increased rate of extinction, global warming, the melting of ice caps and permafrost, an increase in extreme weather events, gradually rising sea levels, ocean acidification, the spread of oceanic “dead zones”, a depletion of natural resources, and ensuing social strife.

These events are all connected. They come from a way of life that views the Earth as essentially infinite, human civilization as a negligible perturbation, and exponential economic growth as a permanent condition. Deep changes will occur as these idealizations bring us crashing into the brick wall of reality. If we do not muster the will to act before things get significantly worse, we will need to do so later. While we may plead that it is “too difficult” or “too late”, this doesn’t matter: a transformation is inevitable. All we can do is start where we find ourselves, and begin adapting to life on a finite-sized planet.

Where does math fit into all this? While the problems we face have deep roots, major transformations in society have always caused and been helped along by revolutions in mathematics. Starting near the end of the last ice age, the Agricultural Revolution eventually led to the birth of written numerals and geometry. Centuries later, the Enlightenment and Industrial Revolution brought us calculus and eventually a flowering of mathematics unlike any before. Now, as the 21st century unfolds, mathematics will become increasingly driven by our need to understand the biosphere and our role within it.

We refer to mathematics suitable for understanding the biosphere as green mathematics. Although it is just being born, we can already see some of its outlines.

Since the biosphere is a massive network of interconnected elements, we expect network theory will play an important role in green mathematics. Network theory is a sprawling field, just beginning to become organized, which combines ideas from graph theory, probability theory, biology, ecology, sociology and more. Computation plays an important role here, both because it has a network structure—think of networks of logic gates—and because it provides the means for simulating networks.

One application of network theory is to tipping points, where a system abruptly passes from one regime to another. Scientists need to identify nearby tipping points in the biosphere to help policy makers to head off catastrophic changes. Mathematicians, in turn, are challenged to develop techniques for detecting incipient tipping points. Another application of network theory is the study of shocks and resilience. When can a network recover from a major blow to one of its subsystems?

We claim that network theory is not just another name for biology, ecology, or any other existing science, because in it we can see new mathematical terrains. Here are two examples.

First, consider a leaf. In The Formation of a Tree Leaf by Qinglan Xia, we see a possible key to Nature’s algorithm for the growth of leaf veins. The vein system, which is a transport network for nutrients and other substances, is modeled by Xia as a directed graph with nodes for cells and edges for the “pipes” that connect the cells. Each cell gives a revenue of energy, and incurs a cost for transporting substances to and from it.

The total transport cost depends on the network structure. There are costs for each of the pipes, and costs for turning the fluid around the bends. For each pipe, the cost is proportional to the product of its length, its cross-sectional area raised to a power α, and the number of leaf cells that it feeds. The exponent α captures the savings from using a thicker pipe to transport materials together. Another parameter β expresses the turning cost.

Development proceeds through cycles of growth and network optimization. During growth, a layer of cells gets added, containing each potential cell with a revenue that would exceed its cost. During optimization, the graph is adjusted to find a local cost minimum. Remarkably, by varying α and β, simulations yield leaves resembling those of specific plants, such as maple or mulberry.
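The per-pipe cost described above is simple to write down. Here is a sketch for a hypothetical toy network; the pipe dimensions and the value alpha = 0.6 are arbitrary illustrative choices, not taken from Xia’s paper:

```python
def pipe_cost(length, area, cells_fed, alpha=0.6):
    """Cost of one pipe, as described above: proportional to its length,
    its cross-sectional area raised to the power alpha, and the number
    of leaf cells it feeds. alpha = 0.6 is an arbitrary example value."""
    return length * area**alpha * cells_fed

# A toy vein segment: three pipes, each (length, area, cells fed).
pipes = [(1.0, 2.0, 3), (0.5, 1.0, 1), (0.8, 1.0, 2)]
total = sum(pipe_cost(L, a, n) for (L, a, n) in pipes)
```

Since alpha < 1, area enters sublinearly, which is what rewards routing material through fewer, thicker pipes; the optimization phase of the algorithm searches for network adjustments that lower this total.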

A growing network

Unlike approaches that merely create pretty images resembling leaves, Xia presents an algorithmic model, simplified yet illuminating, of how leaves actually develop. It is a network-theoretic approach to a biological subject, and it is mathematics—replete with lemmas, theorems and algorithms—from start to finish.

A second example comes from stochastic Petri nets, which are a model for networks of reactions. In a stochastic Petri net, entities are designated by “tokens” and entity types by “places” which hold the tokens. “Reactions” remove tokens from their input places and deposit tokens at their output places. The reactions fire probabilistically, in a Markov chain where each reaction rate depends on the number of its input tokens.

A stochastic Petri net

Perhaps surprisingly, many techniques from quantum field theory are transferable to stochastic Petri nets. The key is to represent stochastic states by power series. Monomials represent pure states, which have a definite number of tokens at each place. Each variable in the monomial stands for a place, and its exponent indicates the token count. In a linear combination of monomials, each coefficient represents the probability of being in the associated state.

In quantum field theory, states are representable by power series with complex coefficients. The annihilation and creation of particles are cast as operators on power series. These same operators, when applied to the stochastic states of a Petri net, describe the annihilation and creation of tokens. Remarkably, the commutation relations between annihilation and creation operators, which are often viewed as a hallmark of quantum theory, make perfect sense in this classical probabilistic context.
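For a single place, these operators can be sketched in a few lines. A state is stored as a dictionary mapping token counts to coefficients (the monomial x^n becomes the key n); annihilation sends x^n to n x^(n-1) and creation sends x^n to x^(n+1). This is my own minimal illustration, not code from the network theory series:

```python
def annihilate(state):
    """a: x^n -> n x^(n-1), on a power series stored as {n: coeff}."""
    return {n - 1: n * c for n, c in state.items() if n > 0}

def create(state):
    """a-dagger: x^n -> x^(n+1)."""
    return {n + 1: c for n, c in state.items()}

# Check the commutation relation [a, a-dagger] = 1 on the monomial x^5:
x5 = {5: 1.0}
lhs = annihilate(create(x5))   # a a-dagger x^5 = 6 x^5
rhs = create(annihilate(x5))   # a-dagger a x^5 = 5 x^5
# lhs - rhs = x^5: the commutator acts as the identity.
```

The same bookkeeping works for any monomial: a a-dagger counts n+1 tokens while a-dagger a counts n, and the difference is always 1.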

Each stochastic Petri net has a “Hamiltonian” which gives its probabilistic law of motion. It is built from the annihilation and creation operators. Using this, one can prove many theorems about reaction networks, already known to chemists, in a compact and elegant way. See the Azimuth network theory series for details.

Conclusion: The life of a network, and the networks of life, are brimming with mathematical content.

We are pursuing these subjects in the Azimuth Project, an open collaboration between mathematicians, scientists, engineers and programmers trying to help save the planet. On the Azimuth Wiki and Azimuth Blog we are trying to explain the main environmental and energy problems the world faces today. We are also studying plans of action, network theory, climate cycles, the programming of climate models, and more.

If you would like to help, we need you and your special expertise. You can write articles, contribute information, pose questions, fill in details, write software, help with research, help with writing, and more. Just drop us a line.

This post appeared on the blog for Mathematics of Planet Earth 2013, an international project involving over 100 scientific societies, universities, research institutes, and organizations. They’re trying to have a new blog article every day, and you can submit articles as described here.

Here are a few of their other articles:

The mathematics of extreme climatic events—with links to videos.

From the Joint Mathematics Meetings: Conceptual climate models short course—with links to online course materials.

There will always be a Gulf Stream—an exercise in singular perturbation technique.

I’m Looking For Good Math Grad Students

11 December, 2012

I am looking for hardworking math grad students who:

1) know some category theory and ideally a bit of 2-category theory,

2) know some mathematical physics, stochastic processes and/or Bayesian network theory, and

3) want to apply these ideas to subjects like chemistry, biology, ecology and climate science.

If this is you, please email me and/or apply to the math Ph.D. program at U.C. Riverside. To apply, follow the directions here. For more information, go here. The deadline is January 5th.

We have very little money for foreign students, so this advertisement is mainly for students from the US and especially California. If you want to work with me, mention my name in your application.

I can’t promise to work with you, of course, until you’re accepted and I get to know you and decide we can work well together! Luckily there are other good professors in the department doing other interesting things.

I urge would-be students to come to my seminar, which meets once a week, and also my special sessions where we work on projects, which currently also occur once a week. I’ll pick students from among people who do these things. Right now there are 6 candidates. I can’t take this many new students every year, so I’ll pick the ones who show the most initiative and promise.

I’m working on network theory and information theory, and I’m also getting started on climate physics, especially glacial cycles. You can decide if these topics interest you by clicking on the links. I’m not taking students who want to do thesis work on my old interests (quantum gravity and n-categories).

The U.C.R. math building looks 2-dimensional in this shot, but I promise you’ll get a well-rounded education if you work with me.

Talk at Berkeley

15 November, 2012

This Friday, November 16, 2012, I’ll be giving the annual Lang Lecture at the math department of U. C. Berkeley. I’ll be speaking on The Mathematics of Planet Earth. There will be tea and cookies in 1015 Evans Hall from 3 to 4 pm. The talk itself will be in 105 Northgate Hall from 4 to 5 pm, with questions going on to 5:30 if people are interested.

You’re all invited!

The Mathematics of Planet Earth

31 October, 2012

Here’s a public lecture I gave yesterday, via videoconferencing, at the 55th annual meeting of the South African Mathematical Society:

Abstract: The International Mathematical Union has declared 2013 to be the year of The Mathematics of Planet Earth. The global warming crisis is part of a bigger transformation in which humanity realizes that the Earth is a finite system and that our population, energy usage, and the like cannot continue to grow exponentially. If civilization survives this transformation, it will affect mathematics—and be affected by it—just as dramatically as the agricultural revolution or industrial revolution. We cannot know for sure what the effect will be, but we can already make some guesses.

To watch the talk, click on the video above. To see slides of the talk, click here. To see the source of any piece of information in these slides, just click on it!

My host Bruce Bartlett, an expert on topological quantum field theory, was crucial in planning the event. He was the one who edited the video, and put it on YouTube. He also made this cute poster:

I was planning to fly there using my superpowers to avoid taking a plane and burning a ton of carbon. But it was early in the morning and I was feeling a bit tired, so I used Skype.

By the way: if you’re interested in science, energy and the environment, check out the Azimuth Project, which is a collaboration to create a focal point for scientists and engineers interested in saving the planet. We’ve got some interesting projects going. If you join the Azimuth Forum, you can talk to us, learn more, and help out as much or as little as you want. The only hard part about joining the Azimuth Forum is reading the instructions well enough that you choose your whole real name, with spaces between words, as your username.

Azimuth News (Part 2)

28 September, 2012

Last week I finished a draft of a book and left Singapore, returning to my home in Riverside, California. It’s strange and interesting, leaving the humid tropics for the dry chaparral landscape I know so well.

Now I’m back to my former life as a math professor at the University of California. I’ll be going back to the Centre for Quantum Technology next summer, and summers after that, too. But life feels different now: a 2-year period of no teaching allowed me to change my research direction, but now it’s time to teach people what I’ve learned!

It also happens to be a time when the Azimuth Project is about to do a lot of interesting things. So, let me tell you some news!

Programming with Petri nets

The Azimuth Project has a bunch of new members, who are bringing with them new expertise and lots of energy. One of them is David Tanzer, who was an undergraduate math major at U. Penn, and got a Ph.D. in computer science at NYU. Now he’s a software developer, and he lives in Brooklyn, New York.

He writes:

My areas of interest include:

• Queryable encyclopedias

• Machine representation of scientific theories

• Machine representation of conflicts between contending theories

• Social and technical structures to support group problem-solving activities

• Balkan music, Afro-Latin rhythms, and jazz guitar

To me, the most meaningful applications of science are to the myriad of problems that beset the human race. So the Azimuth Project is a good focal point for me.

And on Azimuth, he’s starting to write some articles on ‘programming with Petri nets’. We’ve talked about them a lot in the network theory series:

They’re a very general modelling tool in chemistry, biology and computer science, precisely the sort of tool we need for a deep understanding of the complex systems that keep our living planet going—though, let’s be perfectly clear about this, just one of many such tools, and one of the simplest. But as mathematical physicists, Jacob Biamonte and I have studied Petri nets in a highly theoretical way, somewhat neglecting the all-important problem of how you write programs that simulate Petri nets!

Such programs are commercially available, but it’s good to see how to write them yourself, and that’s what David Tanzer will tell us. He’ll use the language Python to write these programs in a nice modern object-oriented way. So, if you like coding, this is where the rubber meets the road.
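As a tiny preview, here is the basic ‘token game’ of a Petri net in Python. This is my own illustrative sketch, not David’s code, and the little H + OH ↔ H2O net is made up for the example:

```python
import random

# A Petri net: each transition consumes tokens from its input places
# and produces tokens in its output places.
transitions = {
    "combine": {"in": {"H": 1, "OH": 1}, "out": {"H2O": 1}},
    "split":   {"in": {"H2O": 1},        "out": {"H": 1, "OH": 1}},
}

def enabled(marking, name):
    """A transition may fire only if every input place has enough tokens."""
    return all(marking.get(p, 0) >= n
               for p, n in transitions[name]["in"].items())

def fire(marking, name):
    """Fire a transition: consume its input tokens, produce its outputs."""
    m = dict(marking)
    for p, n in transitions[name]["in"].items():
        m[p] -= n
    for p, n in transitions[name]["out"].items():
        m[p] = m.get(p, 0) + n
    return m

random.seed(1)
marking = {"H": 5, "OH": 5, "H2O": 0}
for _ in range(10):    # fire 10 randomly chosen enabled transitions
    choices = [t for t in transitions if enabled(marking, t)]
    if not choices:
        break
    marking = fire(marking, random.choice(choices))
print(marking)
```

A stochastic Petri net refines this by picking the next transition with probability proportional to a rate constant times the number of ways it can grab its input tokens—which is where the master equation comes in.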

I’m no expert on programming, but it seems the modularity of Python code nicely matches the modularity of Petri nets. This is something I’d like to get into more deeply someday, in my own effete theoretical way. I think the category-theoretic foundations of computer languages like Python are worth understanding, perhaps more interesting in fact than purely functional languages like Haskell, which are better understood. And I think they’ll turn out to be nicely related to the category-theoretic foundations of Petri nets and other networks I’m going to tell you about!

And I believe this will be important if we want to develop ‘ecotechnology’, where our machines and even our programming methodologies borrow ingenuity and wisdom from biological processes… and learn to blend with nature instead of fighting it.

Petri nets, systems biology, and beyond

Another new member of the Azimuth Project is Ken Webb. He has a BA in Cognitive Science from Carleton University in Ottawa, and an MSc in Evolutionary and Adaptive Systems from The University of Sussex in Brighton. Since then he’s worked for many years as a software developer and consultant, using many different languages and approaches.

He writes:

Things that I’m interested in include:

• networks of all types, hierarchical organization of network nodes, and practical applications

• climate change, and “saving the planet”

• programming code that anyone can run in their browser, and that anyone can edit and extend in their browser

• approaches to software development that allow independently-developed apps to work together

• the relationship between computer-science object-oriented (OO) concepts and math concepts

• how everything is connected

I’ve been paying attention to the Azimuth Project because it parallels my own interests, but with a more math focus (math is not one of my strong points). As learning exercises, I’ve reimplemented a few of the applications mentioned on Azimuth pages. Some of my online workbooks (blog-like entries that are my way of taking active notes) were based on content at the Azimuth Project.

He’s started building a Petri net modeling and simulation tool called Xholon. It’s written in Java and can be run online using Java Web Start (JNLP). Using this tool you can completely specify Petri net models using XML. You can see more details, and examples, on his Azimuth page. If I were smarter, or had more spare time, I would have already figured out how to include examples that actually run in an interactive way in blog articles here! But more on that later.

Soon I hope Ken will finish a blog entry in which he discusses how Petri nets fit into a bigger setup that can also describe ‘containers’, where molecules are held in ‘membranes’ and these membranes can allow chosen molecules through, and also split or merge—more like biology than inorganic chemistry. His outline is very ambitious:

This tutorial works through one simple example to demonstrate the commonality/continuity between a large number of different ways that people use to understand the structure and behavior of the world around us. These include chemical reaction networks, Petri nets, differential equations, agent-based modeling, mind maps, membrane computing, Unified Modeling Language, Systems Biology Markup Language, and Systems Biology Graphical Notation. The intended audience includes scientists, engineers, programmers, and other technically literate nonexperts. No math knowledge is required.

The Azimuth Server

With help from Glyn Adgie and Allan Erskine, Jim Stuttard has been setting up a server for Azimuth. All these folks are programmers, and Jim Stuttard, in particular, was a systems consultant and software applications programmer in C, C++ and Java until 2001. But he’s really interested in formal methods, and now he programs in Haskell.

I won’t say anything about the Azimuth server, since I’ll get it wrong, it’s not quite ready yet, and Jim wisely prefers to get it working a bit more before he talks about it. But you can get a feeling for what’s coming by going here.

How to find out more

You can follow what we’re doing by visiting the Azimuth Forum. Most of our conversations there are open to the world, but some can only be seen if you become a member. This is easy to do, except for one little thing.

Nobody, nobody, seems capable of reading the directions where I say, in boldface for easy visibility:

Use your whole real name as username. Spaces and capital letters are good. So, for example, a username like ‘Tim van Beek’ is good, ‘timvanbeek’ not so good, and ‘Tim’ or ‘tvb’ won’t be allowed.

The main point is that we want people involved with the Azimuth Project to have clear identities. The second, more minor point is that our software is not braindead, so you can choose a username that’s your actual name, like

Tim van Beek

instead of having to choose something silly.

But never mind me: I’m just a crotchety old curmudgeon. Come join the fun and help us save the planet by developing software that explains climate science, biology, and ecology—and, just maybe, speeds up the development of green mathematics and ecotechnology!

This Week’s Finds (Week 319)

13 April, 2012

This week I’m trying something new: including a climate model that runs on your web browser!

It’s not a realistic model; we’re just getting started. But some programmers in the Azimuth Project team are interested in making more such models—especially Allan Erskine (who made this one), Jim Stuttard (who helped me get it to work), Glyn Adgie and Staffan Liljegren. It could be a fun way for us to learn and explain climate physics. With enough of these models, we’d have a whole online course! If you want to help us out, please say hi.

Allan will say more about the programming challenges later. But first, a big puzzle: how can small changes in the Earth’s orbit lead to big changes in the Earth’s climate? As I mentioned last time, it seems hard to understand the glacial cycles of the last few million years without answering this.

Are there feedback mechanisms that can amplify small changes in temperature? Yes. Here are a few obvious ones:

Water vapor feedback. When it gets warmer, more water evaporates, and the air becomes more humid. But water vapor is a greenhouse gas, which causes additional warming. Conversely, when the Earth cools down, the air becomes drier, so the greenhouse effect becomes weaker, which tends to cool things down.

Ice albedo feedback. Snow and ice reflect more light than liquid oceans or soil. When the Earth warms up, snow and ice melt, so the Earth becomes darker, absorbs more light, and tends to get even warmer. Conversely, when the Earth cools down, more snow and ice form, so the Earth becomes lighter, absorbs less light, and tends to get even cooler.

Carbon dioxide solubility feedback. Cold water can hold more carbon dioxide than warm water: that’s why opening a warm can of soda can be so explosive. So, when the Earth’s oceans warm up, they release carbon dioxide. But carbon dioxide is a greenhouse gas, which causes additional warming. Conversely, when the oceans cool down, they absorb more carbon dioxide, so the greenhouse effect becomes weaker, which tends to cool things down.

Of course, there are also negative feedbacks: otherwise the climate would be utterly unstable! There are also complicated feedbacks whose overall effect is harder to evaluate:

Planck feedback. A hotter world radiates more heat, which cools it down. This is the big negative feedback that keeps all the positive feedbacks from making the Earth insanely hot or insanely cold.

Cloud feedback. A warmer Earth has more clouds, which reflect more light but also increase the greenhouse effect.

Lapse rate feedback. An increased greenhouse effect changes the vertical temperature profile of the atmosphere, which has effects of its own—but this works differently near the poles and near the equator.

See week302 for more on feedbacks and how big they’re likely to be.

On top of all these subtleties, any proposed solution to the puzzle of glacial cycles needs to keep a few other things in mind, too:

• A really good theory will explain, not just why we have glacial cycles now, but why we didn’t have them earlier. As I explained in week317, they got started around 5 million years ago, became much colder around 2 million years ago, and switched from a roughly 41,000 year cycle to a roughly 100,000 year cycle around 1 million years ago.

• Say we dream up a whopping big positive feedback mechanism that does a great job of keeping the Earth warm when it’s warm and cold when it’s cold. If this effect is strong enough, the Earth may be bistable: it will have two stable states, a warm one and a cold one. Unfortunately, if the effect is too strong, it won’t be easy for the Earth to pop back and forth between these two states!

The classic example of a bistable system is a switch—say for an electric light. When the light is on it stays on; when the light is off it stays off. If you touch the switch very gently, nothing will happen. But if you push on it hard enough, it will suddenly pop from on to off, or vice versa.

If we’re trying to model the glacial cycles using this idea, we need the switch to have a fairly dramatic effect, yet still be responsive to a fairly gentle touch. For this to work we need enough positive feedback… but not too much.

(We could also try a different idea: maybe the Earth keeps itself in its icy glacial state, or its warm interglacial state, using some mechanism that gradually uses something up. Then, when the Earth runs out of this stuff, whatever it is, the climate can easily flip to the other state.)

We must always remember that to a good approximation, the total amount of sunlight hitting the Earth each year does not change as the Earth’s orbit changes in the so-called ‘Milankovitch cycles’ that seem to be causing the ice ages. I explained why last time. What changes is not the total amount of sunlight, but something much subtler: the amount of sunlight at particular latitudes in particular seasons! In particular, Milankovitch claimed, and most scientists believe, that the Earth tends to get cold when there’s little sunlight hitting the far northern latitudes in summer.

For these and other reasons, any solution to the ice age puzzle is bound to be subtle. Instead of diving straight into this complicated morass, let’s try something much simpler. Let’s just think about how the ice albedo effect could, in theory, make the Earth bistable.

To do this, let’s look at the very simplest model in this great not-yet-published book:

• Gerald R. North, Simple Models of Global Climate.

This is a zero-dimensional energy balance model, meaning that it only involves the average temperature of the Earth, the average solar radiation coming in, and the average infrared radiation going out.

The average temperature will be T, measured in degrees Celsius. We’ll assume the Earth radiates power per square meter equal to

\displaystyle{ A + B T }

where A = 218 watts/meter2 and B = 1.90 watts/meter2 per degree Celsius. This is a linear approximation taken from satellite data on our Earth. In reality, the power emitted grows faster than linearly with temperature.

We’ll assume the Earth absorbs solar power per square meter equal to

Q c(T)


where Q is the average insolation: that is, the amount of solar power per square meter hitting the top of the Earth’s atmosphere, averaged over location and time of year. In reality Q is about 341.5 watts/meter2. This is one quarter of the solar constant, meaning the solar power per square meter that would hit a panel hovering in space above the Earth’s atmosphere and facing directly at the Sun. (Why a quarter? That’s a nice geometry puzzle: we worked it out at the Azimuth Blog once.)

c(T) is the coalbedo: the fraction of solar power that gets absorbed. The coalbedo depends on the temperature; we’ll have to say how.

Given all this, we get

\displaystyle{ C \frac{d T}{d t} = - A - B T + Q c(T(t)) }

where C is Earth’s heat capacity in joules per degree per square meter. Of course this is a funny thing, because heat energy is stored not only at the surface but also in the air and/or water, and the details vary a lot depending on where we are. But if we consider a uniform planet with dry air and no ocean, North says we may roughly take C equal to about half the heat capacity at constant pressure of the column of dry air over a square meter, namely 5 million joules per degree Celsius.
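As a small aside of my own (not from North’s book): if the coalbedo were held fixed, the equation above would be linear in T, so a temperature perturbation would decay exponentially with e-folding time C/B:

```python
C = 5.0e6     # heat capacity, J per degree Celsius per m^2
B = 1.90      # W per m^2 per degree Celsius
tau = C / B   # relaxation time, in seconds
print(round(tau / 86400, 1))   # about 30.5 days
```

So this toy Earth forgets a perturbation in about a month, which is one reason it makes sense to focus on equilibrium solutions.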

The easiest thing to do is find equilibrium solutions, meaning solutions where \frac{d T}{d t} = 0, so that

A + B T = Q c(T)

Now C doesn’t matter anymore! We’d like to solve for T as a function of the insolation Q, but it’s easier to solve for Q as a function of T:

\displaystyle{ Q = \frac{ A + B T } {c(T)} }

To go further, we need to guess some formula for the coalbedo c(T). The coalbedo, remember, is the fraction of sunlight that gets absorbed when it hits the Earth. It’s 1 minus the albedo, which is the fraction that gets reflected. Here’s a little chart of albedos:

If you get mixed up between albedo and coalbedo, just remember: coal has a high coalbedo.

Since we’re trying to keep things very simple right now, not model nature in all its glorious complexity, let’s just say the average albedo of the Earth is 0.65 when it’s very cold and there’s lots of snow. So, let

c_i = 1  - 0.65 =  0.35

be the ‘icy’ coalbedo, good for very low temperatures. Similarly, let’s say the average albedo drops to 0.3 when it’s very hot and the Earth is darker. So, let

c_f = 1 - 0.3 = 0.7

be the ‘ice-free’ coalbedo, good for high temperatures when the Earth is darker.

Then, we need a function of temperature that interpolates between c_i and c_f. Let’s try this:

c(T) = c_i + \frac{1}{2} (c_f-c_i) (1 + \tanh(\gamma T))

If you’re not a fan of the hyperbolic tangent function \tanh, this may seem scary. But don’t be intimidated!

The function \frac{1}{2}(1 + \tanh(\gamma T)) is just a function that goes smoothly from 0 at low temperatures to 1 at high temperatures. This ensures that the coalbedo is near its icy value c_i at low temperatures, and near its ice-free value c_f at high temperatures. But the fun part here is \gamma, a parameter that says how rapidly the coalbedo rises as the Earth gets warmer. Depending on this, we’ll get different effects!

The function c(T) rises fastest at T = 0, since that’s where \tanh (\gamma T) has the biggest slope. We’re just lucky that in Celsius T = 0 is the melting point of ice, so this makes a bit of sense.
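Here is a sketch in Python of the coalbedo and a forward-Euler integration of the equation of motion. This is my own illustration, not the JSXGraph code used on the page; the parameter values are the ones quoted above, while the time step and initial temperature are my choices:

```python
import math

# Parameter values quoted in the text
A, B = 218.0, 1.90        # outgoing radiation A + B*T, W/m^2 (T in Celsius)
Q = 341.5                 # average insolation, W/m^2
C = 5.0e6                 # heat capacity, J per degree Celsius per m^2
c_i, c_f = 0.35, 0.70     # icy and ice-free coalbedos
gamma = 0.05              # steepness of the coalbedo's rise

def coalbedo(T):
    """Interpolates smoothly from c_i (cold) to c_f (hot)."""
    return c_i + 0.5 * (c_f - c_i) * (1.0 + math.tanh(gamma * T))

# Forward Euler on  C dT/dt = -A - B*T + Q*c(T)
T, dt = 20.0, 3600.0      # start at 20 Celsius, one-hour steps
for _ in range(24 * 365 * 5):            # integrate for five years
    T += dt * (-A - B * T + Q * coalbedo(T)) / C
print(round(T, 1))        # cold equilibrium, roughly -51 Celsius
```

At the present-day insolation of about 341.5 W/m2 this run freezes over, which matches the discussion below: with \gamma = 0.05, only a cold state exists until the insolation reaches roughly 385 W/m2.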

Now Allan Erskine’s programming magic comes into play! Unfortunately it doesn’t work on this blog—yet!—so please hop over to the version of this article on my website to see it in action.

You can slide a slider to adjust the parameter \gamma to various values between 0 and 1.

You can then see how the coalbedo c(T) changes as a function of the temperature T. In this graph the temperature ranges from -50 °C to 50 °C; the graph depends on what value of \gamma you choose with the slider.

You can also see the insolation Q required to yield a given temperature T between -50 °C and 50 °C:

It’s easiest to solve for Q in terms of T. But it’s more intuitive to flip this graph over and see what equilibrium temperatures T are allowed for a given insolation Q between 200 and 500 watts per square meter.

The exciting thing is that when \gamma gets big enough, three different temperatures are compatible with the same amount of insolation! This means the Earth can be hot, cold or something intermediate even when the amount of sunlight hitting it is fixed. The intermediate state is unstable, it turns out. Only the hot and cold states are stable. So, we say the Earth is bistable in this simplified model.

Can you see how big \gamma needs to be for this bistability to kick in? It’s certainly there when \gamma = 0.05, since then we get a graph like this:

When the insolation is less than about 385 W/m2 there’s only a cold state. When it hits 385 W/m2, as shown by the green line, suddenly there are two possible temperatures: a cold one and a much hotter one. When the insolation is higher, as shown by the black line, there are three possible temperatures: a cold one, an unstable intermediate one, and a hot one. And when the insolation gets above 465 W/m2, as shown by the red line, there’s only a hot state!
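This count of equilibria is easy to check numerically. Here is a sketch of mine (the scan range and step size are arbitrary choices) that uses the relation Q = (A + B T)/c(T) and counts sign changes:

```python
import math

A, B = 218.0, 1.90
c_i, c_f, gamma = 0.35, 0.70, 0.05

def coalbedo(T):
    return c_i + 0.5 * (c_f - c_i) * (1.0 + math.tanh(gamma * T))

def insolation_needed(T):
    """The Q that makes temperature T an equilibrium: Q = (A + B*T)/c(T)."""
    return (A + B * T) / coalbedo(T)

def equilibria(Q, lo=-60.0, hi=100.0, step=0.01):
    """Count equilibrium temperatures by scanning for sign changes."""
    roots = []
    prev = Q - insolation_needed(lo)
    T = lo + step
    while T <= hi:
        cur = Q - insolation_needed(T)
        if prev * cur < 0.0:
            roots.append(round(T, 1))
        prev, T = cur, T + step
    return roots

print(len(equilibria(400.0)))   # 3: cold, unstable intermediate, hot
print(len(equilibria(480.0)))   # 1: only the hot state survives
```

At Q = 400 W/m2 the three equilibria come out near -40 °C, 6 °C and 29 °C.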

Why is the intermediate state unstable when it exists? Why are the other two equilibria stable? To answer these questions, we’d need to go back and study the original equation:

C \frac{d T}{d t} = - A - B T + Q c(T(t))

and see what happens when we push T slightly away from one of its equilibrium values. That’s really fun, but we won’t do it today. Instead, let’s draw some conclusions from what we’ve just seen. There are at least three morals: a mathematical moral, a climate science moral, and a software moral.

Mathematically, this model illustrates catastrophe theory. As we slowly turn up \gamma, we get different curves showing how temperature is a function of insolation… until suddenly the curve isn’t the graph of a function anymore: it becomes infinitely steep at one point! After that, we get bistability:

(Graphs of equilibrium temperature versus insolation for \gamma = 0.00, 0.01, 0.02, 0.03, 0.04 and 0.05.)

This is called a cusp catastrophe, and you can visualize these curves as slices of a surface in 3d, which looks roughly like this picture:

from here:

• Wolfram Mathworld, Cusp catastrophe. (Includes Mathematica package.)

The cusp catastrophe is ‘structurally stable’, meaning that small perturbations don’t change its qualitative behavior. This concept is made precise in catastrophe theory. It’s a useful concept, because it focuses our attention on robust features of models: features that don’t go away if the model is slightly wrong, as it always is.

As far as climate science goes, one moral is that it pays to spend some time making sure we understand simple models before we dive into more complicated ones. Right now we’re looking at a very simple model, but we’re already seeing some interesting phenomena. The kind of model we’re looking at now is called a Budyko-Sellers model. These have been studied since the late 1960’s:

• M. I. Budyko, On the origin of glacial epochs (in Russian), Meteor. Gidrol. 2 (1968), 3-8.

• M. I. Budyko, The effect of solar radiation variations on the climate of the earth, Tellus 21 (1969), 611-619.

• William D. Sellers, A global climatic model based on the energy balance of the earth-atmosphere system, J. Appl. Meteor. 8 (1969), 392-400.

• Carl Crafoord and Erland Källén, A note on the condition for existence of more than one steady state solution in Budyko-Sellers type models, J. Atmos. Sci. 35 (1978), 1123-1125.

• Gerald R. North, David Pollard and Bruce Wielicki, Variational formulation of Budyko-Sellers climate models, J. Atmos. Sci. 36 (1979), 255-259.

It also pays to compare our models to reality! For example, the graphs we’ve seen show some remarkably hot and cold temperatures for the Earth. That’s a bit unnerving. Let’s investigate. Suppose we set \gamma = 0 on our slider. Then the coalbedo of the Earth becomes independent of temperature: it’s 0.525, halfway between its icy and ice-free values. Then, when the insolation takes its actual value of 341.5 watts per square meter, the model says the Earth’s temperature is very chilly: about -20 °C!

Does that mean the model is fundamentally flawed? Maybe not! After all, it’s based on a very light-colored Earth. Suppose we use the actual albedo of the Earth. Of course that’s hard to define, much less determine. But let’s just look up some average value of the Earth’s albedo: supposedly it’s about 0.3. That gives a coalbedo of c = 0.7. If we plug that into our formula:

\displaystyle{ Q = \frac{ A + B T } {c} }

we get 11 °C. That’s not too far from the Earth’s actual average temperature, namely about 15 °C. So the chilly temperature of -20 °C seems to come from an Earth that’s a lot lighter in color than ours.
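This back-of-envelope check is easy to script (a sketch of mine, solving the equilibrium condition A + B T = Q c for a constant coalbedo c):

```python
A, B = 218.0, 1.90   # linearized outgoing radiation, W/m^2
Q = 341.5            # average insolation, W/m^2

def equilibrium_T(c):
    """Solve A + B*T = Q*c for T, holding the coalbedo fixed at c."""
    return (Q * c - A) / B

print(round(equilibrium_T(0.525), 1))   # light-colored Earth: about -20.4 Celsius
print(round(equilibrium_T(0.7), 1))     # realistic coalbedo: about 11.1 Celsius
```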

Our model includes the greenhouse effect, since the coefficients A and B were determined by satellite measurements of how much radiation actually escapes the Earth’s atmosphere and shoots out into space. As a further check on our model, we can look at an even simpler zero-dimensional energy balance model: a completely black Earth with no greenhouse effect. Another member of the Azimuth Project has written about this:

• Tim van Beek, Putting the Earth in a box, Azimuth, 19 June 2011.

• Tim van Beek, A quantum of warmth, Azimuth, 2 July 2011.

As he explains, this model gives the Earth a temperature of 6 °C. He also shows that in this model, lowering the albedo to a realistic value of 0.3 lowers the temperature to a chilly -18 °C. To get from that to something like our Earth, we must take the greenhouse effect into account.
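Tim’s numbers are easy to reproduce (my own sketch of the no-greenhouse balance Q c = σT^4, with T in kelvin; small differences in the assumed insolation and rounding account for getting 5.4 °C here rather than exactly 6 °C):

```python
SIGMA = 5.67e-8   # Stefan-Boltzmann constant, W/m^2/K^4
Q = 341.5         # average insolation, W/m^2

def no_greenhouse_T(c):
    """Equilibrium temperature in Celsius with no greenhouse effect."""
    return (Q * c / SIGMA) ** 0.25 - 273.15

print(round(no_greenhouse_T(1.0), 1))   # black Earth: about 5.4 Celsius
print(round(no_greenhouse_T(0.7), 1))   # albedo 0.3: about -18.3 Celsius
```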

This sort of fiddling around is the sort of thing we must do to study the flaws and virtues of a climate model. Of course, any realistic climate model is vastly more sophisticated than the little toy we’ve been looking at, so the ‘fiddling around’ must also be more sophisticated. With a more sophisticated model, we can also be more demanding. For example, when I said 11 °C “is not too far from the Earth’s actual average temperature, namely about 15 °C”, I was being very blasé about what’s actually a big discrepancy. I only took that attitude because the calculations we’re doing now are very preliminary.

Finally, here’s what Allan has to say about the software you’ve just seen, and some fancier software you’ll see in forthcoming weeks:

Your original question in the Azimuth Forum was “What’s the easiest way to write simple programs of this sort that could be accessed and operated by clueless people online?” A “simple program” for the climate model you proposed needed two elements: a means to solve the ODE (ordinary differential equation) describing the model, and a means to interact with and visualize the results for the (clearly) “clueless people online”.

Some good suggestions were made by members of the forum:

• use a full-fledged numerical computing package such as Sage or Matlab which come loaded to the teeth with ODE solvers and interactive charting;

• use a full-featured programming language like Java which has libraries available for ODE solving and charting, and which can be packaged as an applet for the web;

• do all the computation and visualization ourselves in Javascript.

While the first two suggestions were superior for computing the ODE solutions, I knew from bitter experience (as a software developer) that the truly clueless people were us bold forum members engaged in this new online enterprise: none of us were experts in this interactive/online math thing, and programming new software is almost always harder than you expect it to be.

Then actually releasing new software is harder still! Especially to an audience as large as your readership. To come up with an interactive solution that would work on many different computers/browsers, the most mundane and pedestrian suggestion of “do it all ourselves in Javascript and have them run it in the browser” was also the most likely to be a success.

The issue with Javascript was that not many people use it for numerical computation, and I was down on our chances of success until Staffan pointed out the excellent JSXGraph software. JSXGraph has many examples available to get up and running, has an ODE solver, and after a copy/paste or two and some tweaking on my part we were all set.

The true vindication for going all-Javascript though was that you were subsequently able to do some copy/pasting of your own directly into TWF without any servers needing to be configured, or even any help from me! The graphs ought to be viewable by your readership for as long as browsers support Javascript (a sign of a good software release is that you don’t have to think about it afterwards).

There are some improvements I would make to how we handle future projects which we have discussed in the Forum. Foremost, using Javascript to do all our numerical work is not going to attract the best and brightest minds from the forum (or elsewhere) to help with subsequent models. My personal hope is that we allow all the numerical work to be done in whatever language people feel productive with, and that we come up with a slick way for you to embed and interact with just the data from these models in your webpages. Glyn Adgie and Jim Stuttard seem to have some great momentum in this direction.

Or perhaps creating and editing interactive math online will eventually become as easy as wiki pages are today—I know Staffan had said the Sage developers were looking to make their online workbooks more interactive. Also the bright folks behind the new Julia language are discussing ways to run (and presumably interact with) Julia in the cloud. So perhaps we should just have dragged our feet on this project for a few years for all this cool stuff to help us out! (And let’s wait for the Singularity while we’re at it.)

No, let’s not! I hope you programmers out there can help us find good solutions to the problems Allan faced. And I hope some of you actually join the team.

By the way, Allan has a somewhat spiffier version of the same Budyko-Sellers model here.

For discussions of this issue of This Week’s Finds visit my blog, Azimuth. And if you want to get involved in creating online climate models, contact me and/or join the Azimuth Forum.

Thus, the present thermal regime and glaciations of the Earth prove to be characterized by high instability. Comparatively small changes of radiation—only by 1.0-1.5%—are sufficient for the development of ice cover on the land and oceans that reaches temperate latitudes.

— M. I. Budyko

