This will be the first talk of the workshop. Many participants are focused on diplomacy and economics. None are officially biologists or ecologists. So, I want to set the stage with a broad perspective that fits humans into the biosphere as a whole.
I claim that climate change is just one aspect of something bigger: a new geological epoch, the Anthropocene.
I start with evidence that human civilization is having such a big impact on the biosphere that we’re entering a new geological epoch.
Then I point out what this implies. Climate change is not an isolated ‘problem’ of the sort routinely ‘solved’ by existing human institutions. It is part of a shift from the exponential growth phase of human impact on the biosphere to a new, uncharted phase.
In this new phase, institutions and attitudes will change dramatically, like it or not:
• Before, we could treat ‘nature’ as distinct from ‘civilization’. Now, there is no nature separate from civilization.
• Before, we could imagine ‘economic growth’ as an almost unalloyed good, with many externalities disregarded. Now, many forms of growth have reached the point where they push the biosphere toward tipping points.
In a separate talk I’ll say a bit about ‘what we can do about it’. So, nothing about that here. You can click on words in blue to see sources for the information.
This fall they’re opening a new Centre for Quantum Mathematics and Computation at Oxford University. They’ll be working on diagrammatic methods for topology and quantum theory, quantum gravity, and computation. You’ll understand what this means if you know the work of the people involved:
• Samson Abramsky
• Bob Coecke
• Christopher Douglas
• Kobi Kremnitzer
• Steve Simon
• Ulrike Tillman
• Jamie Vicary
All these people are already at Oxford, so you may wonder what’s new about this center. I’m not completely sure, but they’ve gotten money from EPSRC (roughly speaking, the British NSF), and they’re already hiring a postdoc. Applications are due on March 11, so hurry up if you’re interested!
They’re having a conference October 1st to 4th to start things off. I’ll be speaking there, and they tell me that Steve Awodey, Alexander Beilinson, Lucien Hardy, Martin Hyland, Chris Isham, Dana Scott, and Anton Zeilinger have been invited too.
I’m really looking forward to seeing Chris Isham, since he’s one of the most honest and critical thinkers about quantum gravity and the big difficulties we have in understanding this subject—and he has trouble taking airplane flights, so it’s been a long time since I’ve seen him. It’ll also be great to see all the other people I know, and meet the ones I don’t.
For example, back in the 1990s, I used to spend summers in Cambridge talking about n-categories with Martin Hyland and his students Eugenia Cheng, Tom Leinster and Aaron Lauda (who had been an undergraduate at U.C. Riverside). And more recently I’ve been talking a lot with Jamie Vicary about categories and quantum computation—since he was in Singapore some of the time while I was there. (Indeed, I’m going back there this summer, and so will he.)
I’m not as big on n-categories and quantum gravity as I used to be, but I’m still interested in the foundations of quantum theory and how it’s connected to computation, so I think I can give a talk with some new ideas in it.
In December I went to the 2012 American Geophysical Union Fall Meeting. I’d like to tell you about the Tyndall lecture given by Ray Pierrehumbert, on “Successful Predictions”. You can watch the whole talk here:
But let me give you a summary, with some references.
Ray’s talk spanned 120 years of research on climate change. The key message is that science is a long, slow process of discovery, in which theories (and their predictions) tend to emerge long before they can be tested. We often learn just as much from the predictions that turned out to be wrong as we do from those that were right. But successful predictions eventually form the body of knowledge that we can be sure about, not just because they were successful, but because they build up into a coherent explanation of multiple lines of evidence.
Here are the successful predictions:
1896: Svante Arrhenius correctly predicts that increases in fossil fuel emissions would cause the earth to warm. At that time, much of the theory of how atmospheric heat transfer works was missing, but nevertheless, he got a lot of the process right. He was right that surface temperature is determined by the balance between incoming solar energy and outgoing infrared radiation, and that the balance that matters is the radiation budget at the top of the atmosphere. He knew that the absorption of infrared radiation was due to CO2 and water vapour, and he also knew that CO2 is a forcing while water vapour is a feedback. He understood the logarithmic relationship between CO2 concentrations in the atmosphere and surface temperature. However, he got a few things wrong too. His attempt to quantify the enhanced greenhouse effect was incorrect, because he worked with a 1-layer model of the atmosphere, which cannot capture the competition between water vapour and CO2, and doesn’t account for the role of convection in determining air temperatures. His calculations were incorrect because he had the wrong absorption characteristics of greenhouse gases. And he thought the problem would be centuries away, because he didn’t imagine an exponential growth in use of fossil fuels.
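(For orientation, here is what that logarithmic relationship looks like in modern notation. A widely used empirical fit for the radiative forcing from CO2, due to Myhre et al. in 1998 rather than to Arrhenius himself, is

$latex \Delta F \approx 5.35 \, \ln(C/C_0) \;\; \mathrm{W/m^2} $

where $latex C $ is the CO2 concentration and $latex C_0 $ is a reference concentration. Each doubling of CO2 adds roughly the same forcing, about 3.7 W/m², which is what the logarithmic relationship means in practice.)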
Arrhenius, as we now know, was way ahead of his time. Nobody really considered his work again for nearly 50 years, a period we might think of as the dark ages of climate science. The story perfectly illustrates Paul Hoffman’s tongue-in-cheek depiction of how scientific discoveries work: someone formulates the theory, other scientists then reject it, ignore it for years, eventually rediscover it, and finally accept it. These “dark ages” weren’t really dark, of course—much good work was done in this period. For example:
• 1900: Frank Very worked out the radiation balance, and hence the temperature, of the moon. His results were confirmed by Pettit and Nicholson in 1930.
• 1907: Robert Emden realized that a similar radiative-convective model could be applied to planets, and Gerard Kuiper and others applied this to astronomical observations of planetary atmospheres.
This work established the standard radiative-convective model of atmospheric heat transfer. This treats the atmosphere as two layers; in the lower layer, convection is the main heat transport, while in the upper layer, it is radiation. A planet’s outgoing radiation comes from this upper layer. However, up until the early 1930’s, there was no discussion in the literature of the role of carbon dioxide, despite occasional discussion of climate cycles. In 1928, George Simpson published a memoir on atmospheric radiation, which assumed water vapour was the only greenhouse gas, even though, as Richardson pointed out in a comment, there was evidence that even dry air absorbed infrared radiation.
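(The bare bones of this standard model fit in one line. The top-of-atmosphere energy balance fixes the temperature of the upper, radiating layer:

$latex (1 - \alpha) S / 4 = \sigma T_e^4 $

where $latex S $ is the solar constant, $latex \alpha $ is the planetary albedo, $latex \sigma $ is the Stefan-Boltzmann constant, and $latex T_e $ is the emission temperature. The surface below is warmer than $latex T_e $ because convection ties it to the radiating layer through the lapse rate. This is just the textbook summary, not a formula taken from the papers above.)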
1938: Guy Callendar is the first to link observed rises in CO2 concentrations with observed rises in surface temperatures. But Callendar failed to revive interest in Arrhenius’s work, and made a number of mistakes in things that Arrhenius had gotten right. Callendar’s calculations focused on the radiation balance at the surface, whereas Arrhenius had (correctly) focussed on the balance at the top of the atmosphere. Also, he neglected convective processes, which astrophysicists had already resolved using the radiative-convective model. In the end, Callendar’s work was ignored for another two decades.
1956: Gilbert Plass correctly predicts a depletion of outgoing radiation in the 15 micron band, due to CO2 absorption. This depletion was eventually confirmed by satellite measurements. Plass was one of the first to revisit Arrhenius’s work since Callendar, however his calculations of climate sensitivity to CO2 were also wrong, because, like Callendar, he focussed on the surface radiation budget, rather than the top of the atmosphere.
1961-2: Carl Sagan correctly predicts very thick greenhouse gases in the atmosphere of Venus, as the only way to explain the very high observed temperatures. His calculations showed that greenhouse gases must absorb around 99.5% of the outgoing surface radiation. The composition of Venus’s atmosphere was confirmed by NASA’s Venus probes in 1967-70.
1959: Bert Bolin and Erik Eriksson correctly predict the exponential increase in CO2 concentrations in the atmosphere as a result of rising fossil fuel use. At that time they did not have good data for atmospheric concentrations prior to 1958, so their hindcast back to 1900 was wrong, but despite this, their projection of changes forward to 2000 was remarkably good.
1967: Suki Manabe and Dick Wetherald correctly predict that warming in the lower atmosphere would be accompanied by stratospheric cooling. They had built the first completely correct radiative-convective implementation of the standard model applied to Earth, and used it to calculate a +2 °C equilibrium warming for doubling CO2, including the water vapour feedback, assuming constant relative humidity. The stratospheric cooling was confirmed in 2011 by Gillett et al.
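Just to make the arithmetic of ‘warming per doubling’ concrete, here is a back-of-the-envelope sketch in Python. It simply applies a stated sensitivity to a CO2 ratio; it is not Manabe and Wetherald’s radiative-convective calculation, and the concentrations are purely illustrative.

import math

# Back-of-the-envelope equilibrium warming for a given CO2 level, assuming a
# fixed warming per doubling of CO2. Purely illustrative, not a climate model.

def equilibrium_warming(c_ppm, c0_ppm=280.0, per_doubling=2.0):
    """Warming in degrees C for CO2 at c_ppm, relative to a baseline c0_ppm."""
    return per_doubling * math.log2(c_ppm / c0_ppm)

print(equilibrium_warming(560))   # one doubling: 2.0 degrees C
print(equilibrium_warming(400))   # about 1 degree C at 400 ppm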
1975: Suki Manabe and Dick Wetherald correctly predict that the surface warming would be much greater in the polar regions, and that there would be some upper-troposphere amplification in the tropics. This was the first coupled general circulation model (GCM), with an idealized geography. This model computed changes in humidity, rather than assuming them, as had been the case in earlier models. It showed polar amplification, and some vertical amplification in the tropics. The polar amplification was measured, and confirmed by Serreze et al. in 2009. However, the height gradient in the tropics hasn’t yet been confirmed (nor has it yet been falsified—see Thorne 2008 for an analysis).
1989: Ron Stouffer et al. correctly predict that the land surface will warm more than the ocean surface, and that the Southern Ocean warming would be temporarily suppressed due to the slower ocean heat uptake. These predictions are correct, although these models failed to predict the strong warming we’ve seen over the Antarctic Peninsula.
Of course, scientists often get it wrong:
1900: Knut Ångström incorrectly predicts that increasing levels of CO2 would have no effect on climate, because he thought the effect was already saturated. His laboratory experiments weren’t accurate enough to detect the actual absorption properties, and even if they were, the vertical structure of the atmosphere would still allow the greenhouse effect to grow as CO2 is added.
1971: Rasool and Schneider incorrectly predict that atmospheric cooling due to aerosols would outweigh the warming from CO2. However, their model had some important weaknesses, and was shown to be wrong by 1975. Rasool and Schneider fixed their model and moved on. Good scientists acknowledge their mistakes.
1993: Richard Lindzen incorrectly predicts that warming will dry the troposphere, according to his theory that a negative water vapour feedback keeps climate sensitivity to CO2 really low. Lindzen’s work attempted to resolve a long-standing conundrum in climate science. In 1981, the CLIMAP project reconstructed temperatures at the Last Glacial Maximum, and showed very little tropical cooling. This was inconsistent with the general circulation models (GCMs), which predicted substantial cooling in the tropics (e.g. see Broccoli & Manabe 1987). So everyone thought the models must be wrong. Lindzen attempted to explain the CLIMAP results via a negative water vapour feedback. But then the CLIMAP results started to unravel, and newer proxies demonstrated that the models had been getting it right all along: it was the CLIMAP data, and Lindzen’s theory, that were wrong. Unfortunately, bad scientists don’t acknowledge their mistakes; Lindzen keeps inventing ever more arcane theories to avoid admitting he was wrong.
In science, it’s okay to be wrong, because exploring why something is wrong usually advances the science. But sometimes, theories are published that are so bad, they are not even wrong:
2007: Courtillot et al. predicted a connection between cosmic rays and climate change. But they couldn’t even get the sign of the effect consistent across the paper. You can’t falsify a theory that’s incoherent! Scientists label this kind of thing as “Not even wrong”.
Finally, there are, of course, some things that scientists didn’t predict. The most important of these is probably the multi-decadal fluctuations in the warming signal. If you calculate the radiative effect of all greenhouse gases, and the delay due to ocean heating, you still can’t reproduce the flat period in the temperature trend that was observed in 1950–1970. While this wasn’t predicted, we ought to be able to explain it after the fact. Currently, there are two competing explanations. The first is that ocean heat uptake itself has decadal fluctuations, although models don’t show this; if climate sensitivity is at the low end of the likely range (say 2 °C per doubling of CO2), it’s possible we’re seeing a decadal fluctuation around a warming signal. The other explanation is that aerosols took some of the warming away from greenhouse gases. This explanation requires a higher value for climate sensitivity (say around 3 °C), but with a significant fraction of the warming counteracted by an aerosol cooling effect. If this explanation is correct, it’s a much more frightening world, because it implies much greater warming as CO2 levels continue to increase. The truth is probably somewhere between these two. (See Armour & Roe, 2011 for a discussion.)
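To see why the delay due to ocean heating matters, here is a toy zero-dimensional energy balance model in Python, the kind of thing one writes for back-of-the-envelope reasoning. It is nothing like a real general circulation model, and every number in it is illustrative rather than fitted to data.

import math

# Toy zero-dimensional energy balance model:  C dT/dt = F(t) - LAM * T
# The ocean mixed layer's heat capacity C delays the surface response to the
# greenhouse forcing F(t). All numbers are illustrative, not fitted to data.

SECONDS_PER_YEAR = 3.15e7
C = 4.2e8                 # heat capacity of ~100 m of ocean, J/(m^2 K)
F2X = 3.7                 # forcing per doubling of CO2, W/m^2
SENSITIVITY = 3.0         # assumed equilibrium warming per doubling, deg C
LAM = F2X / SENSITIVITY   # feedback parameter, W/(m^2 K)

def forcing(year, c0=280.0, growth=0.0025):
    """CO2 forcing for a concentration growing ~0.25% per year since 1900."""
    c = c0 * math.exp(growth * (year - 1900))
    return F2X * math.log2(c / c0)

T = 0.0
dt = SECONDS_PER_YEAR / 10          # time step: a tenth of a year
for step in range(1000):            # integrate from 1900 to 2000
    year = 1900 + step / 10.0
    T += dt * (forcing(year) - LAM * T) / C

print(f"realized warming by 2000 in this toy model: {T:.2f} deg C")
print(f"equilibrium warming for the 2000 forcing:   {forcing(2000) / LAM:.2f} deg C")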
To conclude, climate scientists have made many predictions about the effect of increasing greenhouse gases that have proven to be correct. They have earned a right to be listened to, but is anyone actually listening? If we fail to act upon the science, will future archaeologists wade through AGU abstracts and try to figure out what went wrong? There are signs of hope—in his re-election acceptance speech, President Obama revived his pledge to take action, saying “We want our children to live in an America that isn’t threatened by the destructive power of a warming planet.”
Here’s a public lecture I gave yesterday, via videoconferencing, at the 55th annual meeting of the South African Mathematical Society:
Abstract: The International Mathematical Union has declared 2013 to be the year of The Mathematics of Planet Earth. The global warming crisis is part of a bigger transformation in which humanity realizes that the Earth is a finite system and that our population, energy usage, and the like cannot continue to grow exponentially. If civilization survives this transformation, it will affect mathematics—and be affected by it—just as dramatically as the agricultural revolution or industrial revolution. We cannot know for sure what the effect will be, but we can already make some guesses.
To watch the talk, click on the video above. To see slides of the talk, click here. To see the source of any piece of information in these slides, just click on it!
My host Bruce Bartlett, an expert on topological quantum field theory, was crucial in planning the event. He was the one who edited the video, and put it on YouTube. He also made this cute poster:
I was planning to fly there using my superpowers to avoid taking a plane and burning a ton of carbon. But it was early in the morning and I was feeling a bit tired, so I used Skype.
By the way: if you’re interested in science, energy and the environment, check out the Azimuth Project, which is a collaboration to create a focal point for scientists and engineers interested in saving the planet. We’ve got some interesting projects going. If you join the Azimuth Forum, you can talk to us, learn more, and help out as much or as little as you want. The only hard part about joining the Azimuth Forum is reading the instructions well enough that you choose your whole real name, with spaces between words, as your username.
There will be three talks. Since I’m not famous, I must be either the ‘bit’ or the ‘quantum’. (Seriously, I have no idea what the title of this workshop means.)
• 3 pm. Immanuel Bloch (Max-Planck-Institut für Quantenoptik): Controlling and exploring quantum gases at the single atom level.
Abstract: Over the past years, ultracold quantum gases in optical lattices have offered remarkable opportunities to investigate static and dynamic properties of strongly correlated bosonic or fermionic quantum many-body systems. In this talk I will show how it has recently not only become possible to image such quantum gases with single atom sensitivity and single site resolution, but also how it is now possible to coherently control single atoms on individual lattice sites, how one can measure hidden order parameters and how one can follow the propagation of entangled quasiparticles in a many-body setting. In addition I will present recent results on the generation of strong effective magnetic fields for ultracold atoms in optical lattices, which has opened a new avenue for realizing fractional quantum Hall like states with atomic gases.
• 4.30 pm. Harry Buhrman (Centrum Wiskunde & Informatica & University of Amsterdam): Position-based cryptography.
Abstract: Position-based cryptography uses the geographic position of a party as its sole credential. Normally digital keys or biometric features are used. A central building block in position-based cryptography is that of position-verification. The goal is to prove to a set of verifiers that one is at a certain geographical location. Protocols typically assume that messages cannot travel faster than the speed of light. By responding to a verifier in a timely manner, one can guarantee that one is within a certain distance of that verifier. Quite recently it was shown that position-verification protocols based only on this relativistic principle can be broken by two attackers who simulate being at the claimed position while physically residing elsewhere in space. Because of the no-cloning property of quantum information (qubits), it was believed that with the use of quantum messages one could devise protocols that were resistant to such collaborative attacks. Several schemes were proposed that later turned out to be insecure. Finally it was shown that also in the quantum case no unconditionally secure scheme is possible. We will review the field of position-based quantum cryptography and highlight some of the research currently going on in order to develop, using reasonable assumptions on the capabilities of the attackers, protocols that are secure in practice.
• 6 pm. John Baez (U.C. Riverside & CQT): Probabilities versus amplitudes.
Abstract: Some ideas from quantum theory are just beginning to percolate back to classical probability theory. For example, there is a widely used and successful theory of “chemical reaction networks”, which describes the interactions of molecules in a stochastic rather than quantum way. If we look at it from the perspective of quantum theory, this turns out to involve creation and annihilation operators, coherent states and other well-known ideas, but with a few big differences. The stochastic analogue of quantum field theory is also used in population biology, and here the connection is well-known. But what does it mean to treat wolves as fermions or bosons?
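For readers who want a taste of what this looks like: in the formalism I’ll describe, the master equation of a chemical reaction network is written so that it formally resembles Schrödinger’s equation,

$latex \frac{d}{dt} \Psi(t) = H \Psi(t) $

except that $latex \Psi $ is a generating function whose coefficients are probabilities rather than amplitudes, and the operator $latex H $, built from creation and annihilation operators, must preserve total probability (be ‘infinitesimal stochastic’) instead of being self-adjoint.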
People who have been following my network theory course will know this stuff already. I’ll give a more detailed mini-course on network theory here:
• Expository Quantum Lecture Series 5 (EQuaLS5), Institute for Mathematical Research (INSPEM), Universiti Putra Malaysia, Malaysia, 9-13 January 2012.
This looks like fun because a number of people will be giving such mini-courses:
• Do Ngoc Diep (Inst of Math, Hanoi): A procedure for quantization of fields.
• Maurice de Gosson (Univ. of Vienna): The symplectic camel and quantum mechanics.
• Fredrik Stroemberg (Technical Univ. of Darmstadt): Arithmetic quantum chaos.
• S. Twareque Ali (Concordia University, Montreal): Coherent states: theory and applications.
The ‘symplectic camel’, in case you’re wondering, is an allusion to Mikhail Gromov’s result on classical mechanics limiting our ability to squeeze a region of phase space into a long and skinny shape. It’s like trying to squeeze a camel through the eye of a needle!
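(For the record, the precise statement is Gromov’s non-squeezing theorem: a ball of radius $latex r $ in the phase space $latex \mathbb{R}^{2n} $ admits a symplectic embedding into the cylinder $latex \{ (q,p) : q_1^2 + p_1^2 \le R^2 \} $ only if $latex r \le R $. The cylinder may be infinitely long in all the other directions, but that doesn’t help: you can’t beat this bound.)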
Later, I’ll give a version of my talk ‘Probabilities versus amplitudes’ at this workshop:
They’re also inviting mathematicians to organize workshops on this theme at the Banff International Research Station for Mathematical Innovation and Discovery, or BIRS. This is a famous and beautiful research center in the Canadian Rockies.
The deadline is coming up on September 30th, and I want to apply. If you’d like to join me, please drop me a note, either here on this blog or by email!
I’m open to all sorts of ideas, and I’d love help from biologists or climate scientists. If you don’t give me a better idea, I’ll probably do an application on network theory. It might look a bit like this:
Diagrammatic languages for describing complex networks made of interacting parts are used throughout ecology, biology, climate science, engineering, and many other fields. Examples include Systems Biology Graphical Notation, Petri nets in computer science, stochastic Petri nets and chemical reaction networks in chemistry and biochemistry, bond graphs in electrical, chemical and mechanical engineering, Bayesian networks in probabilistic reasoning, box models in climate science, and Harold Odum’s Energy Systems Language for systems ecology. Often these diagrammatic languages are invented by practitioners in a given field without reference to previous work in other fields. Recently mathematicians have set up the theoretical infrastructure needed to formalize, rigorously relate, and in some cases unify these various languages. Doing this will help interdisciplinary work of the sort that is becoming important in theoretical ecology, climate science and ‘the mathematics of planet Earth’. The goal of this workshop is to bring together experts on various diagrammatic languages and mathematicians who study the general theory of diagrammatic reasoning.
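To give a flavor of the most concrete of these languages: a stochastic Petri net, or equivalently a chemical reaction network with mass-action rates, can be simulated with a few lines of Gillespie’s algorithm. The Python sketch below is only an illustration, with made-up species and rate constants; it is not part of the proposal.

import random

# Gillespie simulation of a tiny stochastic Petri net / reaction network:
#   birth:      X -> 2X         rate b * x
#   predation:  X + Y -> 2Y     rate p * x * y
#   death:      Y -> (nothing)  rate d * y
# Species and rate constants are invented purely for illustration.

def gillespie(x, y, b=1.0, p=0.005, d=0.6, t_max=20.0):
    t, history = 0.0, [(0.0, x, y)]
    while t < t_max:
        rates = [b * x, p * x * y, d * y]
        total = sum(rates)
        if total == 0:
            break
        t += random.expovariate(total)      # waiting time to the next event
        r = random.uniform(0.0, total)      # pick which transition fires
        if r < rates[0]:
            x += 1                          # birth
        elif r < rates[0] + rates[1]:
            x -= 1
            y += 1                          # predation
        else:
            y -= 1                          # death
        history.append((t, x, y))
    return history

for t, x, y in gillespie(100, 20)[::50]:
    print(f"t = {t:6.2f}   X = {x:4d}   Y = {y:4d}")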
If you’d be interested in coming to a workshop on this subject, let me know. Banff provides accommodation, full board, and research facilities—but not, I believe, travel funds! So, “interested in coming” means “interested enough to pay for your own flight”.
Banff does “full workshops” with 42 people for 5 days, and “half workshops” with 20 people for 5 days. Part of why I’m asking you to express your interest is to gauge which seems more appropriate.
With a growing global population competing for the same global resources, an increased frequency and intensity of dramatic climatic events, and evidence pointing to more long-term patterns of general climate change, the pressure to comprehend nature and its trends is greater than ever. Leaders in politics, sociology and economics have begun to seriously take note of issues which before were confined to the natural sciences alone, and mathematical modeling is at the heart of much of the research undertaken. The year 2013 has thus been earmarked by mathematical sciences institutes around the world as a time for a special emphasis on the study of the “Mathematics of Planet Earth” (MPE 13). This theme is to be interpreted as broadly as possible, with the aim of creating new partnerships with related disciplines and casting new light on the many ways in which the mathematical sciences can help to comprehend and tackle some of the world’s most pressing problems.
The Banff International Research Station (BIRS) is a full partner in this important initiative, as the goals of MPE 13 are completely in line with the station’s commitment to pursuing excellence in a broad range of mathematical sciences and applications. BIRS has already planned to host five workshops in 2012 which deal with the themes of MPE 13:
BIRS also invites interested applicants to use the opportunities of its 2013 program and submit proposals in line with the MPE 2013 theme, in conjunction with BIRS’ regular format for programming. Proposals should be made using the BIRS online submission process.
There is something inherently distasteful about flying around the world to talk about global warming, given the large amount of carbon burnt to fuel air travel. I’ve already turned down a couple of requests to give talks in the US and Europe, since they’re so far from my current home in Singapore. Only later did I think of a better solution, one that was perfectly obvious in retrospect: accept all these invitations to speak, but only on the condition that I give my talk over video-link. That may nudge institutions a bit towards the post-carbon future. They can accept or not, but either way they’ll have to think about these issues.
But Iran is close enough to Singapore, and the opportunity to speak about these issues to Iranian mathematicians is unusual enough, and potentially important enough, that I feel this talk is a good idea.
Do you know anything interesting about what Iranians, especially mathematicians or physicists, are doing about environmental issues?
(Note: I’m not interested in talking about politics here.)
You need the word 'latex' right after the first dollar sign, and it needs a space after it. Double dollar signs don't work, and other limitations apply, some described here. You can't preview comments here, but I'm happy to fix errors.
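For example, to produce a nicely rendered E = mc², type this in your comment:

$latex E = mc^2$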