Talk at Berkeley

15 November, 2012

This Friday, November 16, 2012, I’ll be giving the annual Lang Lecture at the math department of U. C. Berkeley. I’ll be speaking on The Mathematics of Planet Earth. There will be tea and cookies in 1015 Evans Hall from 3 to 4 pm. The talk itself will be in 105 Northgate Hall from 4 to 5 pm, with questions going on to 5:30 if people are interested.

You’re all invited!


The Mathematics of Planet Earth

31 October, 2012

Here’s a public lecture I gave yesterday, via videoconferencing, at the 55th annual meeting of the South African Mathematical Society:

Abstract: The International Mathematical Union has declared 2013 to be the year of The Mathematics of Planet Earth. The global warming crisis is part of a bigger transformation in which humanity realizes that the Earth is a finite system and that our population, energy usage, and the like cannot continue to grow exponentially. If civilization survives this transformation, it will affect mathematics—and be affected by it—just as dramatically as the agricultural revolution or industrial revolution. We cannot know for sure what the effect will be, but we can already make some guesses.

To watch the talk, click on the video above. To see slides of the talk, click here. To see the source of any piece of information in these slides, just click on it!

My host Bruce Bartlett, an expert on topological quantum field theory, was crucial in planning the event. He was the one who edited the video, and put it on YouTube. He also made this cute poster:



I was planning to fly there using my superpowers to avoid taking a plane and burning a ton of carbon. But it was early in the morning and I was feeling a bit tired, so I used Skype.

By the way: if you’re interested in science, energy and the environment, check out the Azimuth Project, which is a collaboration to create a focal point for scientists and engineers interested in saving the planet. We’ve got some interesting projects going. If you join the Azimuth Forum, you can talk to us, learn more, and help out as much or as little as you want. The only hard part about joining the Azimuth Forum is reading the instructions well enough that you choose your whole real name, with spaces between words, as your username.


Azimuth News (Part 2)

28 September, 2012

Last week I finished a draft of a book and left Singapore, returning to my home in Riverside, California. It’s strange and interesting, leaving the humid tropics for the dry chaparral landscape I know so well.

Now I’m back to my former life as a math professor at the University of California. I’ll be going back to the Centre for Quantum Technology next summer, and summers after that, too. But life feels different now: a 2-year period of no teaching allowed me to change my research direction, but now it’s time to teach people what I’ve learned!

It also happens to be a time when the Azimuth Project is about to do a lot of interesting things. So, let me tell you some news!

Programming with Petri nets

The Azimuth Project has a bunch of new members, who are bringing with them new expertise and lots of energy. One of them is David Tanzer, who was an undergraduate math major at U. Penn, and got a Ph.D. in computer science at NYU. Now he’s a software developer, and he lives in Brooklyn, New York.

He writes:

My areas of interest include:

• Queryable encyclopedias

• Machine representation of scientific theories

• Machine representation of conflicts between contending theories

• Social and technical structures to support group problem-solving activities

• Balkan music, Afro-Latin rhythms, and jazz guitar

To me, the most meaningful applications of science are to the myriad of problems that beset the human race. So the Azimuth Project is a good focal point for me.

And on Azimuth, he’s starting to write some articles on ‘programming with Petri nets’. We’ve talked about them a lot in the network theory series.

They’re a very general modelling tool in chemistry, biology and computer science, precisely the sort of tool we need for a deep understanding of the complex systems that keep our living planet going—though, let’s be perfectly clear about this, just one of many such tools, and one of the simplest. But as mathematical physicists, Jacob Biamonte and I have studied Petri nets in a highly theoretical way, somewhat neglecting the all-important problem of how you write programs that simulate Petri nets!

Such programs are commercially available, but it’s good to see how to write them yourself, and that’s what David Tanzer will tell us. He’ll use the language Python to write these programs in a nice modern object-oriented way. So, if you like coding, this is where the rubber meets the road.
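To give a flavor of what such a program involves, here is a minimal sketch in Python. To be clear, this is my own toy illustration, not David’s code: the class name and the toy reaction are made up for this example. A Petri net is just a set of places holding tokens, plus transitions that fire by consuming tokens from their input places and producing tokens in their output places.

```python
import random

class PetriNet:
    """A Petri net: places hold tokens; transitions consume and produce them."""

    def __init__(self, places, transitions):
        self.marking = dict(places)        # place name -> token count
        self.transitions = transitions     # name -> (inputs dict, outputs dict)

    def enabled(self):
        """Transitions whose input places all hold enough tokens."""
        return [name for name, (inputs, _) in self.transitions.items()
                if all(self.marking[p] >= n for p, n in inputs.items())]

    def fire(self, name):
        """Consume input tokens and produce output tokens for one transition."""
        inputs, outputs = self.transitions[name]
        for p, n in inputs.items():
            self.marking[p] -= n
        for p, n in outputs.items():
            self.marking[p] += n

    def run(self, steps):
        """Fire random enabled transitions until none is enabled or steps run out."""
        for _ in range(steps):
            choices = self.enabled()
            if not choices:
                break
            self.fire(random.choice(choices))
        return self.marking

# A toy chemical reaction: 2 H2 + O2 -> 2 H2O
net = PetriNet(
    places={'H2': 4, 'O2': 2, 'H2O': 0},
    transitions={'combine': ({'H2': 2, 'O2': 1}, {'H2O': 2})},
)
print(net.run(10))   # {'H2': 0, 'O2': 0, 'H2O': 4}
```

With only one transition the run is deterministic: it fires twice and then stops, since no tokens are left to consume. With several competing transitions, the random choice of which enabled transition fires is the crudest possible stand-in for the stochastic dynamics we’ve discussed in the network theory series.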

I’m no expert on programming, but it seems the modularity of Python code nicely matches the modularity of Petri nets. This is something I’d like to get into more deeply someday, in my own effete theoretical way. I think the category-theoretic foundations of computer languages like Python are worth understanding, perhaps more interesting in fact than purely functional languages like Haskell, which are better understood. And I think they’ll turn out to be nicely related to the category-theoretic foundations of Petri nets and other networks I’m going to tell you about!

And I believe this will be important if we want to develop ‘ecotechnology’, where our machines and even our programming methodologies borrow ingenuity and wisdom from biological processes… and learn to blend with nature instead of fighting it.

Petri nets, systems biology, and beyond

Another new member of the Azimuth Project is Ken Webb. He has a BA in Cognitive Science from Carleton University in Ottawa, and an MSc in Evolutionary and Adaptive Systems from The University of Sussex in Brighton. Since then he’s worked for many years as a software developer and consultant, using many different languages and approaches.

He writes:

Things that I’m interested in include:

• networks of all types, hierarchical organization of network nodes, and practical applications

• climate change, and “saving the planet”

• programming code that anyone can run in their browser, and that anyone can edit and extend in their browser

• approaches to software development that allow independently-developed apps to work together

• the relationship between computer-science object-oriented (OO) concepts and math concepts

• how everything is connected

I’ve been paying attention to the Azimuth Project because it parallels my own interests, but with a more math focus (math is not one of my strong points). As learning exercises, I’ve reimplemented a few of the applications mentioned on Azimuth pages. Some of my online workbooks (blog-like entries that are my way of taking active notes) were based on content at the Azimuth Project.

He’s started building a Petri net modeling and simulation tool called Xholon. It’s written in Java and can be run online using Java Web Start (JNLP). Using this tool you can completely specify Petri net models using XML. You can see more details, and examples, on his Azimuth page. If I were smarter, or had more spare time, I would have already figured out how to include examples that actually run in an interactive way in blog articles here! But more on that later.

Soon I hope Ken will finish a blog entry in which he discusses how Petri nets fit into a bigger setup that can also describe ‘containers’, where molecules are held in ‘membranes’ and these membranes can allow chosen molecules through, and also split or merge—more like biology than inorganic chemistry. His outline is very ambitious:

This tutorial works through one simple example to demonstrate the commonality/continuity between a large number of different ways that people use to understand the structure and behavior of the world around us. These include chemical reaction networks, Petri nets, differential equations, agent-based modeling, mind maps, membrane computing, Unified Modeling Language, Systems Biology Markup Language, and Systems Biology Graphical Notation. The intended audience includes scientists, engineers, programmers, and other technically literate nonexperts. No math knowledge is required.


The Azimuth Server

With help from Glyn Adgie and Allan Erskine, Jim Stuttard has been setting up a server for Azimuth. All these folks are programmers, and Jim Stuttard, in particular, was a systems consultant and software applications programmer in C, C++ and Java until 2001. But he’s really interested in formal methods, and now he programs in Haskell.

I won’t say anything about the Azimuth server, since I’ll get it wrong, it’s not quite ready yet, and Jim wisely prefers to get it working a bit more before he talks about it. But you can get a feeling for what’s coming by going here.

How to find out more

You can follow what we’re doing by visiting the Azimuth Forum. Most of our conversations there are open to the world, but some can only be seen if you become a member. This is easy to do, except for one little thing.

Nobody, nobody, seems capable of reading the directions where I say, in boldface for easy visibility:

Use your whole real name as username. Spaces and capital letters are good. So, for example, a username like ‘Tim van Beek’ is good, ‘timvanbeek’ not so good, and ‘Tim’ or ‘tvb’ won’t be allowed.

The main point is that we want people involved with the Azimuth Project to have clear identities. The second, more minor point is that our software is not braindead, so you can choose a username that’s your actual name, like

Tim van Beek

instead of having to choose something silly like

timvanbeek

or

tim_van_beek

But never mind me: I’m just a crotchety old curmudgeon. Come join the fun and help us save the planet by developing software that explains climate science, biology, and ecology—and, just maybe, speeds up the development of green mathematics and ecotechnology!


This Week’s Finds (Week 319)

13 April, 2012

This week I’m trying something new: including a climate model that runs on your web browser!

It’s not a realistic model; we’re just getting started. But some programmers in the Azimuth Project team are interested in making more such models—especially Allan Erskine (who made this one), Jim Stuttard (who helped me get it to work), Glyn Adgie and Staffan Liljegren. It could be a fun way for us to learn and explain climate physics. With enough of these models, we’d have a whole online course! If you want to help us out, please say hi.

Allan will say more about the programming challenges later. But first, a big puzzle: how can small changes in the Earth’s orbit lead to big changes in the Earth’s climate? As I mentioned last time, it seems hard to understand the glacial cycles of the last few million years without answering this.

Are there feedback mechanisms that can amplify small changes in temperature? Yes. Here are a few obvious ones:

Water vapor feedback. When it gets warmer, more water evaporates, and the air becomes more humid. But water vapor is a greenhouse gas, which causes additional warming. Conversely, when the Earth cools down, the air becomes drier, so the greenhouse effect becomes weaker, which tends to cool things down.

Ice albedo feedback. Snow and ice reflect more light than liquid oceans or soil. When the Earth warms up, snow and ice melt, so the Earth becomes darker, absorbs more light, and tends to get even warmer. Conversely, when the Earth cools down, more snow and ice form, so the Earth becomes lighter, absorbs less light, and tends to get even cooler.

Carbon dioxide solubility feedback. Cold water can hold more carbon dioxide than warm water: that’s why opening a warm can of soda can be so explosive. So, when the Earth’s oceans warm up, they release carbon dioxide. But carbon dioxide is a greenhouse gas, which causes additional warming. Conversely, when the oceans cool down, they absorb more carbon dioxide, so the greenhouse effect becomes weaker, which tends to cool things down.

Of course, there are also negative feedbacks: otherwise the climate would be utterly unstable! There are also complicated feedbacks whose overall effect is harder to evaluate:

Planck feedback. A hotter world radiates more heat, which cools it down. This is the big negative feedback that keeps all the positive feedbacks from making the Earth insanely hot or insanely cold.

Cloud feedback. A warmer Earth has more clouds, which reflect more light but also increase the greenhouse effect.

Lapse rate feedback. An increased greenhouse effect changes the vertical temperature profile of the atmosphere, which has effects of its own—but this works differently near the poles and near the equator.

See week302 for more on feedbacks and how big they’re likely to be.

On top of all these subtleties, any proposed solution to the puzzle of glacial cycles needs to keep a few other things in mind, too:

• A really good theory will explain, not just why we have glacial cycles now, but why we didn’t have them earlier. As I explained in week317, they got started around 5 million years ago, became much colder around 2 million years ago, and switched from a roughly 41,000 year cycle to a roughly 100,000 year cycle around 1 million years ago.

• Say we dream up a whopping big positive feedback mechanism that does a great job of keeping the Earth warm when it’s warm and cold when it’s cold. If this effect is strong enough, the Earth may be bistable: it will have two stable states, a warm one and a cold one. Unfortunately, if the effect is too strong, it won’t be easy for the Earth to pop back and forth between these two states!

The classic example of a bistable system is a switch—say for an electric light. When the light is on it stays on; when the light is off it stays off. If you touch the switch very gently, nothing will happen. But if you push on it hard enough, it will suddenly pop from on to off, or vice versa.

If we’re trying to model the glacial cycles using this idea, we need the switch to have a fairly dramatic effect, yet still be responsive to a fairly gentle touch. For this to work we need enough positive feedback… but not too much.

(We could also try a different idea: maybe the Earth keeps itself in its icy glacial state, or its warm interglacial state, using some mechanism that gradually uses something up. Then, when the Earth runs out of this stuff, whatever it is, the climate can easily flip to the other state.)

We must always remember that to a good approximation, the total amount of sunlight hitting the Earth each year does not change as the Earth’s orbit changes in the so-called ‘Milankovitch cycles’ that seem to be causing the ice ages. I explained why last time. What changes is not the total amount of sunlight, but something much subtler: the amount of sunlight at particular latitudes in particular seasons! In particular, Milankovitch claimed, and most scientists believe, that the Earth tends to get cold when there’s little sunlight hitting the far northern latitudes in summer.

For these and other reasons, any solution to the ice age puzzle is bound to be subtle. Instead of diving straight into this complicated morass, let’s try something much simpler. Let’s just think about how the ice albedo effect could, in theory, make the Earth bistable.

To do this, let’s look at the very simplest model in this great not-yet-published book:

• Gerald R. North, Simple Models of Global Climate.

This is a zero-dimensional energy balance model, meaning that it only involves the average temperature of the earth, the average solar radiation coming in, and the average infrared radiation going out.

The average temperature will be T, measured in Celsius. We’ll assume the Earth radiates power per square meter equal to

\displaystyle{ A + B T }

where A = 218 watts/meter² and B = 1.90 watts/meter² per degree Celsius. This is a linear approximation taken from satellite data on our Earth. In reality, the power emitted grows faster than linearly with temperature.

We’ll assume the Earth absorbs solar power per square meter equal to

Q c(T)

Here:

Q is the average insolation: that is, the amount of solar power per square meter hitting the top of the Earth’s atmosphere, averaged over location and time of year. In reality Q is about 341.5 watts/meter². This is one quarter of the solar constant, meaning the solar power per square meter that would hit a panel hovering in space above the Earth’s atmosphere and facing directly at the Sun. (Why a quarter? That’s a nice geometry puzzle: we worked it out at the Azimuth Blog once.)

c(T) is the coalbedo: the fraction of solar power that gets absorbed. The coalbedo depends on the temperature; we’ll have to say how.

Given all this, we get

\displaystyle{ C \frac{d T}{d t} = - A - B T + Q c(T(t)) }

where C is Earth’s heat capacity in joules per degree per square meter. Of course this is a funny thing, because heat energy is stored not only at the surface but also in the air and/or water, and the details vary a lot depending on where we are. But if we consider a uniform planet with dry air and no ocean, North says we may roughly take C equal to about half the heat capacity at constant pressure of the column of dry air over a square meter, namely 5 million joules per degree Celsius.
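Before we specify the coalbedo, we can already see the dynamics in action. Here is a little forward-Euler integration of the equation above, with the coalbedo simply held fixed at 0.525—that constant is my simplifying assumption for this illustration; the temperature-dependent c(T) comes next.

```python
# Forward-Euler integration of  C dT/dt = -A - B T + Q c,
# with the coalbedo held fixed (an assumption just for this sketch).
A = 218.0       # W/m^2
B = 1.90        # W/m^2 per degree Celsius
C = 5.0e6       # J per degree Celsius per m^2, North's rough heat capacity
Q = 341.5       # average insolation, W/m^2
c = 0.525       # fixed coalbedo

T = 15.0                      # start at a balmy 15 °C
dt = 3600.0                   # one-hour time steps
for _ in range(24 * 365):     # a year of model time
    T += dt * (-A - B * T + Q * c) / C

print(round(T, 1))   # -20.4: the equilibrium (Q*c - A)/B
```

The relaxation time C/B is about 2.6 million seconds—roughly a month—so after a year of model time the temperature has settled down to its equilibrium value to many decimal places.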

The easiest thing to do is find equilibrium solutions, meaning solutions where \frac{d T}{d t} = 0, so that

A + B T = Q c(T)

Now C doesn’t matter anymore! We’d like to solve for T as a function of the insolation Q, but it’s easier to solve for Q as a function of T:

\displaystyle{ Q = \frac{ A + B T } {c(T)} }

To go further, we need to guess some formula for the coalbedo c(T). The coalbedo, remember, is the fraction of sunlight that gets absorbed when it hits the Earth. It’s 1 minus the albedo, which is the fraction that gets reflected. Here’s a little chart of albedos:

If you get mixed up between albedo and coalbedo, just remember: coal has a high coalbedo.

Since we’re trying to keep things very simple right now, not model nature in all its glorious complexity, let’s just say the average albedo of the Earth is 0.65 when it’s very cold and there’s lots of snow. So, let

c_i = 1 - 0.65 = 0.35

be the ‘icy’ coalbedo, good for very low temperatures. Similarly, let’s say the average albedo drops to 0.3 when it’s very hot and the Earth is darker. So, let

c_f = 1 - 0.3 = 0.7

be the ‘ice-free’ coalbedo, good for high temperatures when the Earth is darker.

Then, we need a function of temperature that interpolates between c_i and c_f. Let’s try this:

c(T) = c_i + \frac{1}{2} (c_f-c_i) (1 + \tanh(\gamma T))

If you’re not a fan of the hyperbolic tangent function \tanh, this may seem scary. But don’t be intimidated!

The function \frac{1}{2}(1 + \tanh(\gamma T)) is just a function that goes smoothly from 0 at low temperatures to 1 at high temperatures. This ensures that the coalbedo is near its icy value c_i at low temperatures, and near its ice-free value c_f at high temperatures. But the fun part here is \gamma, a parameter that says how rapidly the coalbedo rises as the Earth gets warmer. Depending on this, we’ll get different effects!

The function c(T) rises fastest at T = 0, since that’s where \tanh (\gamma T) has the biggest slope. We’re just lucky that in Celsius T = 0 is the melting point of ice, so this makes a bit of sense.
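In code, this interpolating coalbedo is a one-liner. Here’s a quick sanity check of my own, using γ = 0.05 (one of the values the slider below allows), confirming that c(T) really does run from the icy value to the ice-free value:

```python
from math import tanh

c_i, c_f = 0.35, 0.70   # icy and ice-free coalbedos from the text

def coalbedo(T, gamma):
    """Interpolate smoothly from c_i (very cold) to c_f (very hot)."""
    return c_i + 0.5 * (c_f - c_i) * (1 + tanh(gamma * T))

gamma = 0.05
print(round(coalbedo(-50.0, gamma), 3))   # 0.352: essentially icy
print(round(coalbedo(0.0, gamma), 3))     # 0.525: halfway, right at the melting point
print(round(coalbedo(50.0, gamma), 3))    # 0.698: essentially ice-free
```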

Now Allan Erskine’s programming magic comes into play! Unfortunately it doesn’t work on this blog—yet!—so please hop over to the version of this article on my website to see it in action.

You can slide a slider to adjust the parameter \gamma to various values between 0 and 1.

You can then see how the coalbedo c(T) changes as a function of the temperature T. In this graph the temperature ranges from -50 °C to 50 °C; the graph depends on what value of \gamma you choose with the slider.

You can also see the insolation Q required to yield a given temperature T between -50 °C and 50 °C:

It’s easiest to solve for Q in terms of T. But it’s more intuitive to flip this graph over and see what equilibrium temperatures T are allowed for a given insolation Q between 200 and 500 watts per square meter.

The exciting thing is that when \gamma gets big enough, three different temperatures are compatible with the same amount of insolation! This means the Earth can be hot, cold or something intermediate even when the amount of sunlight hitting it is fixed. The intermediate state is unstable, it turns out. Only the hot and cold states are stable. So, we say the Earth is bistable in this simplified model.

Can you see how big \gamma needs to be for this bistability to kick in? It’s certainly there when \gamma = 0.05, since then we get a graph like this:

When the insolation is less than about 385 W/m² there’s only a cold state. When it hits 385 W/m², as shown by the green line, suddenly there are two possible temperatures: a cold one and a much hotter one. When the insolation is higher, as shown by the black line, there are three possible temperatures: a cold one, an unstable intermediate one, and a hot one. And when the insolation gets above 465 W/m², as shown by the red line, there’s only a hot state!
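If you’d rather not read the answer off a slider, a brute-force sweep in Python will find both the critical γ and the bistable window. This is my own back-of-the-envelope check, not Allan’s code, and the exact endpoints depend a little on the temperature grid:

```python
from math import tanh

A, B = 218.0, 1.90        # radiation constants from the text
c_i, c_f = 0.35, 0.70     # icy and ice-free coalbedos

def Q(T, gamma):
    """Insolation needed to hold the Earth in equilibrium at temperature T (°C)."""
    c = c_i + 0.5 * (c_f - c_i) * (1 + tanh(gamma * T))
    return (A + B * T) / c

temps = [t / 10.0 for t in range(-500, 501)]   # -50 °C to 50 °C in steps of 0.1°

def bistable(gamma):
    """True if Q(T) ever decreases, so some insolations allow several equilibria."""
    qs = [Q(t, gamma) for t in temps]
    return any(later < earlier for earlier, later in zip(qs, qs[1:]))

# smallest gamma (in steps of 0.001) at which bistability kicks in
gamma_crit = next(g / 1000.0 for g in range(1, 100) if bistable(g / 1000.0))
print(gamma_crit)   # lands between the gamma = 0.02 and 0.03 graphs below

# bistable window at gamma = 0.05: local max on the cold side, local min on the warm side
qs = [Q(t, 0.05) for t in temps]
print(round(max(qs[:500])), round(min(qs[500:])))   # roughly 460 and 387 W/m^2
```

So the fold appears somewhere between γ = 0.02 and γ = 0.03 with these constants, and at γ = 0.05 the window of insolations with three equilibria runs from roughly 387 to 460 W/m², in reasonable agreement with the 385 and 465 read off the graph.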

Why is the intermediate state unstable when it exists? Why are the other two equilibria stable? To answer these questions, we’d need to go back and study the original equation:

C \frac{d T}{d t} = - A - B T + Q c(T(t))

and see what happens when we push T slightly away from one of its equilibrium values. That’s really fun, but we won’t do it today. Instead, let’s draw some conclusions from what we’ve just seen. There are at least three morals: a mathematical moral, a climate science moral, and a software moral.

Mathematically, this model illustrates catastrophe theory. As we slowly turn up \gamma, we get different curves showing how temperature is a function of insolation… until suddenly the curve isn’t the graph of a function anymore: it becomes infinitely steep at one point! After that, we get bistability:


\gamma = 0.00

\gamma = 0.01

\gamma = 0.02

\gamma = 0.03

\gamma = 0.04

\gamma = 0.05

This is called a cusp catastrophe, and you can visualize these curves as slices of a surface in 3d, which looks roughly like this picture:



from here:

• Wolfram MathWorld, Cusp catastrophe. (Includes Mathematica package.)

The cusp catastrophe is ‘structurally stable’, meaning that small perturbations don’t change its qualitative behavior. This concept is made precise in catastrophe theory. It’s a useful concept, because it focuses our attention on robust features of models: features that don’t go away if the model is slightly wrong, as it always is.

As far as climate science goes, one moral is that it pays to spend some time making sure we understand simple models before we dive into more complicated ones. Right now we’re looking at a very simple model, but we’re already seeing some interesting phenomena. The kind of model we’re looking at now is called a Budyko-Sellers model. These have been studied since the late 1960’s:

• M. I. Budyko, On the origin of glacial epochs (in Russian), Meteor. Gidrol. 2 (1968), 3-8.

• M. I. Budyko, The effect of solar radiation variations on the climate of the earth, Tellus 21 (1969), 611-619.

• William D. Sellers, A global climatic model based on the energy balance of the earth-atmosphere system, J. Appl. Meteor. 8 (1969), 392-400.

• Carl Crafoord and Erland Källén, A note on the condition for existence of more than one steady state solution in Budyko-Sellers type models, J. Atmos. Sci. 35 (1978), 1123-1125.

• Gerald R. North, David Pollard and Bruce Wielicki, Variational formulation of Budyko-Sellers climate models, J. Atmos. Sci. 36 (1979), 255-259.

It also pays to compare our models to reality! For example, the graphs we’ve seen show some remarkably hot and cold temperatures for the Earth. That’s a bit unnerving. Let’s investigate. Suppose we set \gamma = 0 on our slider. Then the coalbedo of the Earth becomes independent of temperature: it’s 0.525, halfway between its icy and ice-free values. Then, when the insolation takes its actual value of 341.5 watts per square meter, the model says the Earth’s temperature is very chilly: about -20 °C!

Does that mean the model is fundamentally flawed? Maybe not! After all, it’s based on a very light-colored Earth. Suppose we use the actual albedo of the Earth. Of course that’s hard to define, much less determine. But let’s just look up some average value of the Earth’s albedo: supposedly it’s about 0.3. That gives a coalbedo of c = 0.7. If we plug that into our formula:

\displaystyle{ Q = \frac{ A + B T } {c} }

we get 11 °C. That’s not too far from the Earth’s actual average temperature, namely about 15 °C. So the chilly temperature of -20 °C seems to come from an Earth that’s a lot lighter in color than ours.
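That arithmetic is worth a one-line check: with a temperature-independent coalbedo c, the equilibrium condition A + BT = Qc gives T = (Qc − A)/B. A quick sketch:

```python
A, B, Q = 218.0, 1.90, 341.5   # constants from the text

def equilibrium_T(c):
    """Solve A + B T = Q c for T, with the coalbedo c held fixed."""
    return (Q * c - A) / B

print(round(equilibrium_T(0.525), 1))   # -20.4 °C: the chilly, light-colored Earth
print(round(equilibrium_T(0.7), 1))     # 11.1 °C: using the Earth's actual coalbedo
```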

Our model includes the greenhouse effect, since the coefficients A and B were determined by satellite measurements of how much radiation actually escapes the Earth’s atmosphere and shoots out into space. As a further check to our model, we can look at an even simpler zero-dimensional energy balance model: a completely black Earth with no greenhouse effect. Another member of the Azimuth Project has written about this:

• Tim van Beek, Putting the Earth in a box, Azimuth, 19 June 2011.

• Tim van Beek, A quantum of warmth, Azimuth, 2 July 2011.

As he explains, this model gives the Earth a temperature of 6 °C. He also shows that in this model, lowering the albedo to a realistic value of 0.3 lowers the temperature to a chilly -18 °C. To get from that to something like our Earth, we must take the greenhouse effect into account.
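Tim’s figures are easy to reproduce. With no atmosphere, the Earth radiates like a gray body, so equilibrium means σT⁴ = cQ with T in kelvin. The check below is mine, not Tim’s code; it uses the 341.5 W/m² insolation from before, so I get about 5 °C for the black Earth where he rounds to 6:

```python
SIGMA = 5.670374419e-8   # Stefan-Boltzmann constant, W/m^2 per K^4
Q = 341.5                # average insolation, W/m^2

def no_greenhouse_T(coalbedo):
    """Equilibrium temperature (°C) of an Earth radiating sigma T^4, no atmosphere."""
    kelvin = (coalbedo * Q / SIGMA) ** 0.25
    return kelvin - 273.15

print(round(no_greenhouse_T(1.0)))   # about 5 °C: a perfectly black Earth
print(round(no_greenhouse_T(0.7)))   # about -18 °C: albedo 0.3, no greenhouse effect
```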

This sort of fiddling around is the sort of thing we must do to study the flaws and virtues of a climate model. Of course, any realistic climate model is vastly more sophisticated than the little toy we’ve been looking at, so the ‘fiddling around’ must also be more sophisticated. With a more sophisticated model, we can also be more demanding. For example, when I said 11 °C “is not too far from the Earth’s actual average temperature, namely about 15 °C”, I was being very blasé about what’s actually a big discrepancy. I only took that attitude because the calculations we’re doing now are very preliminary.

Finally, here’s what Allan has to say about the software you’ve just seen, and some fancier software you’ll see in forthcoming weeks:

Your original question in the Azimuth Forum was “What’s the easiest way to write simple programs of this sort that could be accessed and operated by clueless people online?” A “simple program” for the climate model you proposed needed two elements: a means to solve the ODE (ordinary differential equation) describing the model, and a means to interact with and visualize the results for the (clearly) “clueless people online”.

Some good suggestions were made by members of the forum:

• use a full-fledged numerical computing package such as Sage or Matlab which come loaded to the teeth with ODE solvers and interactive charting;

• use a full-featured programming language like Java which has libraries available for ODE solving and charting, and which can be packaged as an applet for the web;

• do all the computation and visualization ourselves in Javascript.

While the first two suggestions were superior for computing the ODE solutions, I knew from bitter experience (as a software developer) that the truly clueless people were us bold forum members engaged in this new online enterprise: none of us were experts in this interactive/online math thing, and programming new software is almost always harder than you expect it to be.

Then actually releasing new software is harder still! Especially to an audience as large as your readership. To come up with an interactive solution that would work on many different computers/browsers, the most mundane and pedestrian suggestion of “do it all ourselves in Javascript and have them run it in the browser” was also the most likely to be a success.

The issue with Javascript was that not many people use it for numerical computation, and I was down on our chances of success until Staffan pointed out the excellent JSXGraph software. JSXGraph has many examples available to get up and running, has an ODE solver, and after a copy/paste or two and some tweaking on my part we were all set.

The true vindication for going all-Javascript though was that you were subsequently able to do some copy/pasting of your own directly into TWF without any servers needing to be configured, etc., or even any help from me! The graphs ought to be viewable by your readership for as long as browsers support Javascript (a sign of a good software release is that you don’t have to think about it afterwards).

There are some improvements I would make to how we handle future projects which we have discussed in the Forum. Foremost, using Javascript to do all our numerical work is not going to attract the best and brightest minds from the forum (or elsewhere) to help with subsequent models. My personal hope is that we allow all the numerical work to be done in whatever language people feel productive with, and that we come up with a slick way for you to embed and interact with just the data from these models in your webpages. Glyn Adgie and Jim Stuttard seem to have some great momentum in this direction.

Or perhaps creating and editing interactive math online will eventually become as easy as wiki pages are today—I know Staffan had said the Sage developers were looking to make their online workbooks more interactive. Also the bright folks behind the new Julia language are discussing ways to run (and presumably interact with) Julia in the cloud. So perhaps we should just have dragged our feet on this project for a few years for all this cool stuff to help us out! (And let’s wait for the Singularity while we’re at it.)

No, let’s not! I hope you programmers out there can help us find good solutions to the problems Allan faced. And I hope some of you actually join the team.

By the way, Allan has a somewhat spiffier version of the same Budyko-Sellers model here.

For discussions of this issue of This Week’s Finds visit my blog, Azimuth. And if you want to get involved in creating online climate models, contact me and/or join the Azimuth Forum.


Thus, the present thermal regime and glaciations of the Earth prove to be characterized by high instability. Comparatively small changes of radiation—only by 1.0-1.5%—are sufficient for the development of ice cover on the land and oceans that reaches temperate latitudes.

M. I. Budyko


Energy, the Environment, and What We Can Do

5 April, 2012

A while ago I gave a talk at Google. Since I think we should cut unnecessary travel, I decided to stay here in Singapore and give the talk virtually, in the form of a robot:

I thank Mike Stay for arranging this talk at Google, and Trevor Blackwell and Suzanne Brocato at Anybots for letting me use one of their robots!

Here are the slides:

Energy, the Environment, and What We Can Do.

To see the source of any piece of information in these slides, just click on it!

And here’s me:

This talk was more ambitious than previous ones I’ve given—and not just because I was struggling to operate a robot, read my slides on my laptop, talk, and click the pages forward all at once! I said more about solutions to our problems this time. That’s where I want to head, but of course it’s infinitely harder to describe solutions than to list problems or even to convince people that they really are problems.


Azimuth on Google Plus (Part 6)

13 February, 2012

Lately the distribution of hits per hour on this blog has become very fat-tailed. In other words: the readership shoots up immensely now and then. I just noticed today’s statistics:

That spike on the right is what I’m talking about: 338 hits per hour, while before it was hovering in the low 80’s, as usual for the weekend. Why? Someone on Hacker News posted an item saying:

John Baez will give his Google Talk tomorrow in the form of a robot.

That’s true! If you’re near Silicon Valley on Monday the 13th and you want to see me in the form of a robot, come to the Google campus and listen to my talk Energy, the Environment and What We Can Do.

It starts at 4 pm in the Paramaribo Room (Building 42, Floor 2). You’ll need to check in 15 minutes before that at the main visitor’s lounge in Building 43, and someone will escort you to the talk.

But if you can’t attend, don’t worry! A video will appear on YouTube, and I’ll point you to it when it does.

I tested out the robot a few days ago from a hotel room in Australia—it’s a strange sensation! Suzanne Brocato showed me the ropes. To talk to me easily, she lowered my ‘head’ until I was just 4 feet tall. “You’re so short!” she laughed. I rolled around the offices of Anybots and met the receptionist, who was also in the form of a robot. Then we went to the office of the CEO, Trevor Blackwell, and planned out my talk a little. I need to practice more today.

But why did someone at Hacker News post that comment just then? I suspect it’s because I reminded people about my talk on Google+ last night.

The fat-tailed distribution of blog hits is also happening at the scale of days, not just hours:

The spikes happen when I talk about a ‘hot topic’. January 27th was my biggest day so far. Slashdot discovered my post about the Elsevier boycott, and sent 3468 readers my way. But a total of 6499 people viewed that post, so a bunch must have come from other sources.

January 31st was also big: 3271 people came to read about The Faculty of 1000. 2140 of them were sent over by Hacker News.
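What ‘fat-tailed’ means here can be seen in a little simulation. This is hypothetical traffic, not the blog's real data: both series below average about 80 hits per hour, but the heavy-tailed one throws occasional spikes far above anything the light-tailed one ever produces.

```python
import random

random.seed(1)
baseline = 80  # typical quiet-hour hit rate, as in the post

def simulate_hits(alpha, hours=1000):
    """Hourly hit counts drawn from a Pareto distribution, rescaled so
    the long-run mean is the baseline rate (a Pareto with shape alpha > 1
    has mean alpha/(alpha - 1))."""
    mean = alpha / (alpha - 1)
    return [baseline * random.paretovariate(alpha) / mean
            for _ in range(hours)]

fat = simulate_hits(alpha=1.5)   # heavy tail: occasional huge spikes
thin = simulate_hits(alpha=10)   # light tail: hours stay near baseline

print(f"fat tail : mean {sum(fat)/len(fat):5.0f}, max {max(fat):5.0f}")
print(f"thin tail: mean {sum(thin)/len(thin):5.0f}, max {max(thin):5.0f}")
```

The smaller the shape parameter, the fatter the tail: with shape 1.5 the variance is infinite, so the largest hour dwarfs the typical one.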

If I were trying to make money from advertising on this blog, I’d be pushed toward more posts about hot topics. Forget the mind-bending articles on quantropy, packed with complicated equations!

But as it is, I’m trying to do some mixture of having fun, figuring out stuff, and getting people to save the planet. (Open access publishing fits into that mandate: it’s tragic how climate crackpots post on popular blogs while experts on climate change publish their papers in journals hidden from public view!) So, I don’t want to maximize readership: what matters more is getting people to do good stuff.

Do you have any suggestions on how I could do this better, while still being me? I’m not going to get a personality transplant, so there are limits on what I’ll do.

One good idea would be to make sure every post on a ‘hot topic’ offers readers something they can do now.

Hmm, readership is still spiking:

But enough of this navel-gazing! Here are some recent Azimuth articles about energy on Google+.

Energy

1) In his State of the Union speech, Obama talked a lot about energy:

We’ve subsidized oil companies for a century. That’s long enough. It’s time to end the taxpayer giveaways to an industry that rarely has been more profitable, and double-down on a clean energy industry that never has been more promising.

He acknowledged that differences on Capitol Hill are “too deep right now” to pass a comprehensive climate bill, but he added that “there’s no reason why Congress shouldn’t at least set a clean-energy standard that creates a market for innovation.”

However, lest anyone think he actually wants to stop global warming, he also pledged “to open more than 75 percent of our potential offshore oil and gas resources.”

2) This paper claims a ‘phase change’ hit the oil markets around 2005:

• James Murray and David King, Climate policy: Oil’s tipping point has passed, Nature 481 (2011), 433–435.


They write:

In 2005, global production of regular crude oil reached about 72 million barrels per day. From then on, production capacity seems to have hit a ceiling at 75 million barrels per day. A plot of prices against production from 1998 to today shows this dramatic transition, from a time when supply could respond elastically to rising prices caused by increased demand, to when it could not (see ‘Phase shift’). As a result, prices swing wildly in response to small changes in demand. Other people have remarked on this step change in the economics of oil around the year 2005, but the point needs to be lodged more firmly in the minds of policy-makers.

3) Help out the famous climate blogger Joe Romm! He asks: What will the U.S. energy mix look like in 2050 if we cut CO2 emissions 80%?

How much total energy is consumed in 2050… How much coal, oil, and natural gas is being consumed (with carbon capture and storage of some coal and gas if you want to consider that)? What’s the price of oil? How much of our power is provided by nuclear power? How much by solar PV and how much by concentrated solar thermal? How much from wind power? How much from biomass? How much from other forms of renewable energy? What is the vehicle fleet like? How much electric? How much next-generation biofuels?

As he notes, there are lots of studies on these issues. Point him to the best ones!

4) Due to plunging prices for components, solar power prices in Germany dropped by half in the last 5 years. Now solar generates electricity at levels only slightly above what consumers pay. The subsidies will disappear entirely within a few years, when solar will be as cheap as conventional fossil fuels. Germany has added 14,000 megawatts of capacity in the last 2 years and now has 24,000 megawatts in total—enough green electricity to meet nearly 4% of the country’s power demand. That is expected to rise to 10% by 2020. Germany now has almost 10 times more installed capacity than the United States.
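The ‘nearly 4%’ figure passes a back-of-the-envelope check. The demand and capacity-factor numbers below are my own round assumptions, not figures from the article: roughly 540 terawatt-hours of annual German electricity consumption, and a 10% capacity factor for solar at Germany's latitude.

```python
capacity_gw = 24          # installed solar capacity, from the post
capacity_factor = 0.10    # assumed: cloudy, high-latitude Germany
demand_twh = 540          # assumed annual electricity consumption

hours_per_year = 8760
generation_twh = capacity_gw * hours_per_year * capacity_factor / 1000
share = generation_twh / demand_twh
print(f"~{generation_twh:.0f} TWh/yr, about {share:.1%} of demand")
# → ~21 TWh/yr, about 3.9% of demand
```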

That’s all great—but, umm, what about the other 90%? What’s their long-term plan? Will they keep using coal-fired power plants? Will they buy more nuclear power from France?

In May 2011, Britain claimed it would halve carbon emissions by 2025. Is Germany making equally bold claims or not? Of course what matters is deeds, not words, but I’m curious.

5) Stephen Lacey presents some interesting charts showing the progress and problems with sustainability in the US. For example, there’s been a striking drop in how much energy is being used per dollar of GNP:


Sorry for the archaic ‘British Thermal Units’: we no longer have a king, but for some reason the U.S. failed to throw off the old British system of measurement. A BTU is a bit more than a kilojoule.
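For the record, the conversion is exact: the International Table BTU is defined as 1055.05585262 joules. Here is the arithmetic, with a hypothetical energy-intensity figure thrown in to show the SI equivalent of the chart's units:

```python
BTU_IN_JOULES = 1055.05585262  # International Table BTU, in joules

print(f"1 BTU = {BTU_IN_JOULES / 1000:.3f} kJ")  # 1.055 kJ

# Energy intensity is often quoted in thousand BTU per dollar of GDP;
# the value here is hypothetical, just to illustrate the conversion.
kbtu_per_dollar = 10
mj_per_dollar = kbtu_per_dollar * 1000 * BTU_IN_JOULES / 1e6
print(f"{kbtu_per_dollar} kBTU/$ is about {mj_per_dollar:.1f} MJ/$")
```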

Despite these dramatic changes, Lacey says “we waste around 85% of the energy produced in the U.S.” But he doesn’t say how that number was arrived at. Does anyone know?

6) The American Council for an Energy-Efficient Economy (ACEEE) has a new report called The Long-Term Energy Efficiency Potential: What the Evidence Suggests. It describes some scenarios, including one where the US encourages a greater level of productive investments in energy efficiency so that by the year 2050, it reduces overall energy consumption by 40 to 60 percent. I’m very interested in how much efficiency can help. Some, but not all, of the improvements will be eaten up by the rebound effect.


I, Robot

24 January, 2012

On 13 February 2012, I will give a talk at Google in the form of a robot. I will look like this:


My talk will be about “Energy, the Environment and What We Can Do.” Since I think we should cut unnecessary travel, I decided to stay here in Singapore and use a telepresence robot instead of flying to California.

I thank Mike Stay for arranging this at Google, and I especially thank Trevor Blackwell and everyone else at Anybots for letting me use one of their robots!

I believe Google will film this event and make a video available. But I hope reporters attend, because it should be fun, and I plan to describe some ways we can slash carbon emissions.

More detail: I will give this talk at 4 pm Monday, February 13, 2012 in the Paramaribo Room on the Google campus (Building 42, Floor 2). Visitors and reporters are invited, but they need to check in at the main visitor’s lounge in Building 43, and they’ll need to be escorted to and from the talk, so someone will pick them up 10 or 15 minutes before the talk starts.

Energy, the Environment and What We Can Do

Abstract: Our heavy reliance on fossil fuels is causing two serious problems: global warming, and the decline of cheaply available oil reserves. Unfortunately the second problem will not cancel out the first. Each one individually seems extremely hard to solve, and taken together they demand a major worldwide effort starting now. After an overview of these problems, we turn to the question: what can we do about them?

I also need help from all of you reading this! I want to talk about solutions, not just problems—and given my audience, and the political deadlock in the US, I especially want to talk about innovative solutions that come from individuals and companies, not governments.

Can changing whole systems produce massive cuts in carbon emissions, in a way that spreads virally rather than being imposed through top-down directives? It’s possible. Curtis Faith has some inspiring thoughts on this:

I’ve been looking at various transportation and energy and environment issues for more than 5 years, and almost no one gets the idea that we can radically reduce consumption if we look at the complete systems. In economic terms, we currently have a suboptimal Nash Equilibrium with a diminishing pie when an optimal expanding pie equilibrium is possible. Just tossing around ideas at a very high level with back of the envelope estimates we can get orders of magnitude improvements with systemic changes that will make people’s lives better if we can loosen up the grip of the big corporations and government.

To borrow a physics analogy, the Nash Equilibrium is a bit like a multi-dimensional metastable state where the system is locked into a high energy configuration and any local attempts to make the change revert to the higher energy configuration locally, so it would require sufficient energy or energy in exactly the right form to move all the different metastable states off their equilibrium either simultaneously or in a cascade.

Ideally, we find the right set of systemic economic changes that can have a cascade effect, so that they are locally systemically optimal and can compete more effectively within the larger system where the Nash Equilibrium dominates. I hope I haven’t mixed up too many terms from too many fields and confused things. These terms all have overlapping and sometimes very different meaning in the different contexts as I’m sure is true even within math and science.

One great example is transportation. We assume we need electric cars or biofuel or some such thing. But the very assumption that a car is necessary is flawed. Why do people want cars? Give them a better alternative and they’ll stop wanting cars. Now, what might that be? Public transportation? No. All the money spent building a 2,000 kg vehicle to accelerate and decelerate a few hundred kg and then to replace that vehicle on a regular basis can be saved if we eliminate the need for cars.

The best alternative to cars is walking, or walking on inclined pathways up and down so we get exercise. Why don’t people walk? Not because they don’t want to but because our cities and towns have optimized for cars. Create walkable neighborhoods and give people jobs near their home and you eliminate the need for cars. I live in Savannah, GA in a very tiny place. I never use the car. Perhaps 5 miles a week. And even that wouldn’t be necessary with the right supplemental business structures to provide services more efficiently.

Or electricity for A/C. Everyone lives isolated in structures that are very inefficient to heat. Large community structures could be air conditioned naturally using various techniques and that could cut electricity demand by 50% for neighborhoods. Shade trees are better than insulation.

Or how about moving virtually entire cities to cooler climates during the hot months? That is what people used to do. Take a train North for the summer. If the destinations are low-resource destinations, this can be a huge reduction for the city. Again, getting to this state is hard without changing a lot of parts together.

These problems are not technical, or political, they are economic. We need the economic systems that support these alternatives. People want them. We’ll all be happier and use far less resources (and money). The economic system needs to be changed, and that isn’t going to happen with politics, it will happen with economic innovation. We tend to think of our current models as the way things are, but they aren’t. Most of the status quo is comprised of human inventions: money, fractional reserve banking, corporations, etc. They all brought specific improvements that made them more effective at the time they were introduced because of the conditions during those times. Our times too are different. Some new models will work much better for solving our current problems.

Your idea really starts to address the reason why people fly unnecessarily. This change in perspective is important. What if we went back to sailing ships? And instead of flying we took long leisurely educational seminar cruises on modern versions of sail yachts? What if we improved our trains? But we need to start from scratch and design new systems so they work together effectively. Why are we stuck with models of cities based on the 19th-century norms?

We aren’t, but too many people think we are because the scope of their job or academic career is just the piece of a system, not the system itself.

System level design thinking is the key to making the difference we need. Changes to the complete systems can have order of magnitude improvements. Changes to the parts will have us fighting for tens of percentages.

Do you know good references on ideas like this—preferably with actual numbers? I’ve done some research, but I feel I must be missing a lot of things.

This book, for example, is interesting:

• Michael Peters, Shane Fudge and Tim Jackson, editors, Low Carbon Communities: Imaginative Approaches to Combating Climate Change Locally, Edward Elgar Publishing Group, Cheltenham, UK, 2010.

but I wish it had more numbers on how much carbon emissions were cut by some of the projects they describe: Energy Conscious Households in Action, the HadLOW CARBON Community, the Transition Network, and so on.

