## Bridging the Greenhouse-Gas Emissions Gap

28 April, 2013

I could use some help here, finding organizations that can help cut greenhouse gas emissions. I’ll explain what I mean in a minute. But the big question is:

How can we bridge the gap between what we are doing about global warming and what we should be doing?

That’s what this paper is about:

• Kornelis Blok, Niklas Höhne, Kees van der Leun and Nicholas Harrison, Bridging the greenhouse-gas emissions gap, Nature Climate Change 2 (2012), 471–474.

According to the United Nations Environment Programme, we need to cut CO2 emissions by about 12 gigatonnes/year by 2020 to hold global warming to 2 °C.

After the UN climate conference in Copenhagen, many countries made pledges to reduce CO2 emissions. But by 2020 these pledges will cut emissions by at most 6 gigatonnes/year. Even worse, a lot of these pledges are contingent on other people meeting other pledges, and so on… so the confirmed value of all these pledges is only 3 gigatonnes/year.

The authors list 21 things that cities, large companies and individual citizens can do, which they claim will cut greenhouse gas emissions by the equivalent of 10 gigatonnes/year of CO2 by 2020. For each initiative on their list, they claim:

(1) there is a concrete starting position from which a significant up-scaling until the year 2020 is possible;

(2) there are significant additional benefits besides a reduction of greenhouse-gas emissions, so people can be driven by self-interest or internal motivation, not external pressure;

(3) there is an organization or combination of organizations that can lead the initiative;

(4) the initiative has the potential to reach an emission reduction by about 0.5 Gt CO2e by 2020.

### 21 Initiatives

Now I want to quote the paper and list the 21 initiatives. And here’s where I could use your help! For each of these, can you point me to one or more organizations that are in a good position to lead the initiative?

Some are already listed, but even for these I bet there are other good answers. I want to compile a list, and then start exploring what’s being done, and what needs to be done.

By the way, even if the UN estimate of the greenhouse-emissions gap is wrong, and even if all the numbers I’m about to quote are wrong, most of them are probably the right order of magnitude—and that’s all we need to get a sense of what needs to be done, and how we can do it.

#### Companies

1. Top 1,000 companies’ emission reductions. Many of the 1,000 largest greenhouse-gas-emitting companies already have greenhouse-gas emission-reduction goals to decrease their energy use and increase their long-term competitiveness, as well as to demonstrate their corporate social responsibility. An association such as the World Business Council for Sustainable Development could lead 30% of the top 1,000 companies to reduce energy-related emissions 10% below business as usual by 2020 and all companies to reduce their non-carbon dioxide greenhouse-gas emissions by 50%. Impact in 2020: up to 0.7 Gt CO2e.

2. Supply-chain emission reductions. Several companies already have social and environmental requirements for their suppliers, which are driven by increased competitiveness, corporate social responsibility and the ability to be a front-runner. An organization such as the Consumer Goods Forum could stimulate 30% of companies to require their supply chains to reduce emissions 10% below business as usual by 2020. Impact in 2020: up to 0.2 Gt CO2e.

3. Green financial institutions. More than 200 financial organizations are already members of the finance initiative of the United Nations Environment Programme (UNEP-FI). They are committed to environmental goals owing to corporate social responsibility, to gain investor certainty and to be placed well in emerging markets. UNEP-FI could lead the 20 largest banks to reduce the carbon footprint of 10% of their assets by 80%. Impact in 2020: up to 0.4 Gt CO2e.

4. Voluntary-offset companies. Many companies are already offsetting their greenhouse-gas emissions, mostly without explicit external pressure. A coalition between an organization with convening power, for example UNEP, and offset providers could motivate 20% of the companies in the light industry and commercial sector to calculate their greenhouse-gas emissions, apply emission-reduction measures and offset the remaining emissions (retiring the purchased credits). It is ensured that offset projects really reduce emissions by using the ‘gold standard’ for offset projects or another comparable mechanism. Governments could provide incentives by giving tax credits for offsetting, similar to those commonly given for charitable donations. Impact by 2020: up to 2.0 Gt CO2e.

#### Other actors

5. Voluntary-offset consumers. A growing number of individuals (especially with high income) already offset their greenhouse-gas emissions, mostly for flights, but also through carbon-neutral products. Environmental NGOs could motivate 10% of the richest 20% of individuals to offset their personal emissions from electricity use, heating and transport at a cost to them of around US\$200 per year. Impact in 2020: up to 1.6 Gt CO2e.

6. Major cities initiative. Major cities are large emitters of greenhouse gases and many have greenhouse-gas reduction targets. Cities are intrinsically highly motivated to act so as to improve local air quality, attractiveness and local job creation. Groups like the C40 Cities Climate Leadership Group and ICLEI — Local Governments for Sustainability could lead the 40 cities in C40 or an equivalent sample to reduce emissions 20% below business as usual by 2020, building on the thousands of emission-reduction activities already implemented by the C40 cities. Impact in 2020: up to 0.7 Gt CO2e.

7. Subnational governments. Several states in the United States and provinces in Canada have already introduced support mechanisms for renewable energy, emission-trading schemes, carbon taxes and industry regulation. As a result, they expect an increase in local competitiveness, jobs and energy security. Following the example set by states such as California, these ambitious US states and Canadian provinces could accept an emission-reduction target of 15–20% below business as usual by 2020, as some states already have. Impact in 2020: up to 0.6 Gt CO2e.

#### Energy efficiency

8. Building heating and cooling. New buildings, and increasingly existing buildings, are designed to be extremely energy efficient to realize net savings and increase comfort. The UN Secretary General’s Sustainable Energy for All Initiative could bring together the relevant players to realize 30% of the full reduction potential for 2020. Impact in 2020: up to 0.6 Gt CO2e.

9. Ban of incandescent lamps. Many countries already have phase-out schedules for incandescent lamps, as phasing them out provides net savings in the long term. The en.lighten initiative of UNEP and the Global Environment Facility already has a target to globally ban incandescent lamps by 2016. Impact in 2020: up to 0.2 Gt CO2e.

10. Electric appliances. Many international labelling schemes and standards already exist for energy efficiency of appliances, as efficient appliances usually give net savings in the long term. The Collaborative Labeling and Appliance Standards Program or the Super-efficient Equipment and Appliance Deployment Initiative could drive use of the most energy-efficient appliances on the market. Impact in 2020: up to 0.6 Gt CO2e.

11. Cars and trucks. All car and truck manufacturers put emphasis on developing vehicles that are more efficient. This fosters innovation and increases their long-term competitive position. The emissions of new cars in Europe fell by almost 20% in the past decade. A coalition of manufacturers and NGOs joined by the UNEP Partnership for Clean Fuels and Vehicles could agree to save one additional liter per 100 km globally by 2020 for cars, and equivalent reductions for trucks. Impact in 2020: up to 0.7 Gt CO2e.

#### Energy supply

12. Boost solar photovoltaic energy. Prices of solar photovoltaic systems have come down rapidly in recent years, and installed capacity has increased much faster than expected. This has created a new industry, an export market and local value added through, for example, roof installations. A coalition of progressive governments and producers could remove barriers by introducing good grid access and net metering rules, paving the way to add another 1,600 GW by 2020 (growth consistent with recent years). Impact in 2020: up to 1.4 Gt CO2e.

13. Wind energy. Cost levels for wind energy have come down dramatically, making wind economically competitive with fossil-fuel-based power generation in many cases. The Global Wind Energy Council could foster the global introduction of arrangements that lead to risk reduction for investments in wind energy, with, for example, grid access and guarantees. This could lead to an installation of 1,070 GW by 2020, which is 650 GW over a reference scenario. Impact in 2020: up to 1.2 Gt CO2e.

14. Access to energy through low-emission options. Strong calls and actions are already underway to provide electricity access to the 1.4 billion people at present without it, and so fulfill development goals. The UN Secretary General’s Sustainable Energy for All Initiative could ensure that all people without access to electricity get access through low-emission options. Impact in 2020: up to 0.4 Gt CO2e.

15. Phasing out subsidies for fossil fuels. This highly recognized option to reduce emissions would improve investment in clean energy, provide other environmental, health and security benefits, and generate income. The International Energy Agency could work with countries to phase out half of all fossil-fuel subsidies. Impact in 2020: up to 0.9 Gt CO2e.

#### Special sectors

16. International aviation and maritime transport. The aviation and shipping industries are seriously considering efficiency measures and biofuels to increase their competitive advantage. Leading aircraft and ship manufacturers could agree to design their vehicles to capture half of the technical mitigation potential. Impact in 2020: up to 0.2 Gt CO2e.

17. Fluorinated gases (hydrofluorocarbons, perfluorocarbons, SF6). Recent industry-led initiatives are already underway to reduce emissions of these gases originating from refrigeration, air-conditioning and industrial processes. Industry associations, such as Refrigerants, Naturally!, could work towards meeting half of the technical mitigation potential. Impact in 2020: up to 0.3 Gt CO2e.

18. Reduce deforestation. Some countries have already shown that it is possible to strongly reduce deforestation with an integrated approach that eliminates the drivers of deforestation. This has benefits for local air pollution and biodiversity, and can support the local population. Led by an individual with convening power, for example, the United Kingdom’s Prince of Wales or the UN Secretary General, such approaches could be rolled out to all the major countries with high deforestation emissions, halving global deforestation by 2020. Impact in 2020: up to 1.8 Gt CO2e.

19. Agriculture. Options to reduce emissions from agriculture often increase efficiency. The International Federation of Agricultural Producers could help to realize 30% of the technical mitigation potential. (Well, at least it could before it collapsed, after this paper was written.) Impact in 2020: up to 0.8 Gt CO2e.

#### Air pollutants

20. Enhanced reduction of air pollutants. Reduction of classic air pollutants including black carbon has been pursued for years owing to positive impacts on health and local air quality. UNEP’s Climate and Clean Air Coalition To Reduce Short-Lived Climate Pollutants already has significant political momentum and could realize half of the technical mitigation potential. Impact in 2020: a reduction in radiative forcing impact equivalent to an emission reduction of greenhouse gases on the order of 1 Gt CO2e, but outside of the definition of the gap.

21. Efficient cook-stoves. Cooking in rural areas is a source of carbon dioxide emissions. Furthermore, there are emissions of black carbon, which also lead to global warming. Replacing these cook-stoves would also significantly improve local air quality and reduce pressure on forests from fuel-wood demand. A global development organization such as the UN Development Programme could take the lead in scaling up the many already existing programs to eventually replace half of the existing cook-stoves. Impact in 2020: a reduction in radiative forcing impact equivalent to an emission reduction of greenhouse gases of up to 0.6 Gt CO2e, included in the effect of the above initiative and outside of the definition of the gap.
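By the way, the ‘up to’ figures for initiatives 1–19 add up to more than the paper’s combined estimate of 10 gigatonnes/year: the initiatives overlap, so their impacts can’t simply be added, and not all the maxima will be achieved. Here’s a quick tally, just to keep the arithmetic honest. The short labels are mine, and initiatives 20 and 21 are left out since they fall outside the definition of the gap:

```python
# Per-initiative maxima quoted above, in Gt CO2e/year (initiatives 1-19).
impacts = {
    "top 1000 companies": 0.7, "supply chains": 0.2, "green finance": 0.4,
    "company offsets": 2.0, "consumer offsets": 1.6, "major cities": 0.7,
    "subnational governments": 0.6, "buildings": 0.6, "lighting": 0.2,
    "appliances": 0.6, "cars and trucks": 0.7, "solar PV": 1.4,
    "wind": 1.2, "energy access": 0.4, "fossil-fuel subsidies": 0.9,
    "aviation and shipping": 0.2, "fluorinated gases": 0.3,
    "deforestation": 1.8, "agriculture": 0.8,
}
print(f"sum of the maxima: {sum(impacts.values()):.1f} Gt CO2e/year")  # 15.3
```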

### For more

For more, see the supplementary materials to this paper. The size of the emissions gap was calculated here:

• The Emissions Gap Report 2012, United Nations Environment Programme (UNEP).

If you’re in a rush, just read the executive summary.

## Energy and the Environment – What Physicists Can Do

25 April, 2013

The Perimeter Institute is a futuristic-looking place where over 250 physicists are thinking about quantum gravity, quantum information theory, cosmology and the like. Since I work on some of these things, I was recently invited to give the weekly colloquium there. But I took the opportunity to try to rally them into action:

Energy and the Environment: What Physicists Can Do. Watch the video or read the slides.

Abstract. The global warming crisis is part of a bigger transformation in which humanity realizes that the Earth is a finite system and that our population, energy usage, and the like cannot continue to grow exponentially. While politics and economics pose the biggest challenges, physicists are in a good position to help make this transition a bit easier. After a quick review of the problems, we discuss a few ways physicists can help.

On the video you can hear me say a lot of stuff that’s not on the slides: it’s more of a coherent story. The advantage of the slides is that anything in blue, you can click on to get more information. So for example, when I say that solar power capacity has been growing annually by 75% in recent years, you can see where I got that number.

I was pleased by the response to this talk. Naturally, it was not a case of physicists saying “okay, tomorrow I’ll quit working on the foundations of quantum mechanics and start trying to improve quantum dot solar cells.” It’s more about getting them to see that huge problems are looming ahead of us… and to see the huge opportunities for physicists who are willing to face these problems head-on, starting now. Work on energy technologies, the smart grid, and ‘ecotechnology’ is going to keep growing. I think a bunch of the younger folks, at least, could see this.

However, perhaps the best immediate outcome of this talk was that Lee Smolin introduced me to Manjana Milkoreit. She’s at the school of international affairs at the University of Waterloo, practically next door to the Perimeter Institute. She works on “climate change governance, cognition and belief systems, international security, complex systems approaches, especially threshold behavior, and the science-policy interface.”

So, she knows a lot about the all-important human and political side of climate change. Right now she’s interviewing diplomats involved in climate treaty negotiations, trying to see what they believe about climate change. And it’s very interesting!

In my next post, I’ll talk about something she pointed me to. Namely: what we can do to hold the temperature increase to 2 °C or less, given that the pledges made by various nations aren’t enough.

## Milankovitch Cycles and the Earth’s Climate

13 April, 2013

Here are the slides for a talk I’m giving at the Cal State Northridge Climate Science Seminar:

It’s a gentle introduction to these ideas, and it presents a lot of what Blake Pollard and I have said about Milankovitch cycles, in a condensed way. Of course when I give the talk, I’ll add more words, especially about the different famous ‘puzzles’.

If you have any corrections, please let me know!

I’m eager to visit Cal State Northridge and especially David Klein in their math department, since I’d like to incorporate some climate science in our math curriculum the way they’ve done there.

## Geoengineering Report

11 March, 2013

I think we should start serious research on geoengineering schemes, including actual experiments, not just calculations and simulations. I think we should do this with an open mind about whether we’ll decide that these schemes are good ideas or bad. Either way, we need to learn more about them. Simultaneously, we need an intelligent, well-informed debate about the many ethical, legal and political aspects.

Many express the fear that merely researching geoengineering schemes will automatically legitimate them, however hare-brained they are. There’s some merit to that fear. But I suspect that public opinion on geoengineering will suddenly tip from “unthinkable!” to “let’s do it now!” as soon as global warming becomes perceived as a real and present threat. This is especially true because oil, coal and gas companies have a big interest in finding solutions to global warming that don’t make them stop digging.

So if we don’t learn more about geoengineering schemes, and we start getting heat waves that threaten widespread famine, we should not be surprised if some big government goes it alone and starts doing something cheap and easy like putting tons of sulfur into the upper atmosphere… even if it’s been inadequately researched.

It’s hard to imagine a more controversial topic. But I think there’s one thing most of us should be able to agree on: we should pay attention to what governments are doing about geoengineering! So, let me quote a bit of this report prepared for the US Congress:

• Kelsi Bracmort and Richard K. Lattanzio, Geoengineering: Governance and Technology Policy, CRS Report for Congress, Congressional Research Service, 2 January 2013.

Kelsi Bracmort is a specialist in agricultural conservation and natural resources policy, and Richard K. Lattanzio is an analyst in environmental policy.

I will delete references to footnotes, since they’re huge and I’m too lazy to include them all here. So, go to the original text for those!

### Introduction

Climate change has received considerable policy attention in the past several years both internationally and within the United States. A major report released by the Intergovernmental Panel on Climate Change (IPCC) in 2007 found widespread evidence of climate warming, and many are concerned that climate change may be severe and rapid with potentially catastrophic consequences for humans and the functioning of ecosystems. The National Academies maintains that the climate change challenge is unlikely to be solved with any single strategy or by the people of any single country.

Policy efforts to address climate change use a variety of methods, frequently including mitigation and adaptation. Mitigation is the reduction of emissions of the principal greenhouse gas (GHG), carbon dioxide (CO2), and of other GHGs. Carbon dioxide is the dominant greenhouse gas, emitted naturally through the carbon cycle and through human activities like the burning of fossil fuels. Other commonly discussed GHGs include methane, nitrous oxide, hydrofluorocarbons, perfluorocarbons, and sulfur hexafluoride. Adaptation seeks to improve an individual’s or institution’s ability to cope with or avoid harmful impacts of climate change, and to take advantage of potential beneficial ones.

Some observers are concerned that current mitigation and adaptation strategies may not prevent change quickly enough to avoid extreme climate disruptions. Geoengineering has been suggested by some as a timely complement to mitigation and adaptation that could be included in climate change policy efforts. Geoengineering technologies, applied to the climate, aim to achieve large-scale and deliberate modifications of the Earth’s energy balance in order to reduce temperatures and counteract anthropogenic (i.e., human-made) climate change; these climate modifications would not be limited by country boundaries. As an unproven concept, geoengineering raises substantial environmental and ethical concerns for some observers. Others respond that the uncertainties of geoengineering may only be resolved through further scientific and technical examination.

Proposed geoengineering technologies vary greatly in terms of their technological characteristics and possible consequences. They are generally classified in two main groups:

• Solar radiation management (SRM) method: technologies that would increase the reflectivity, or albedo, of the Earth’s atmosphere or surface, and

• Carbon dioxide removal (CDR) method: technologies or practices that would remove CO2 and other GHGs from the atmosphere.

Much of the geoengineering technology discussion centers on SRM methods (e.g., enhanced albedo, aerosol injection). SRM methods could be deployed relatively quickly if necessary, and their impact on the climate would be more immediate than that of CDR methods. Because SRM methods do not remove GHGs from the atmosphere, global warming could resume at a rapid pace if a deployed SRM method fails or is terminated at any time. At least one relatively simple SRM method is already being deployed with government assistance. [Enhanced albedo is one SRM effort currently being undertaken by the U.S. Environmental Protection Agency. See the Enhanced Albedo section for more information.] Other proposed SRM methods are at the conceptualization stage. CDR methods include afforestation, ocean fertilization, and the use of biomass to capture and store carbon.

The 112th Congress did not take any legislative action on geoengineering. In 2009, the House Science and Technology Committee of the 111th Congress held hearings on geoengineering that examined the “potential environmental risks and benefits of various proposals, associated domestic and international governance issues, evaluation mechanisms and criteria, research and development (R&D) needs, and economic rationales supporting the deployment of geoengineering activities.”

Some foreign governments, including the United Kingdom’s, as well as scientists from Germany and India, have begun considering engaging in the research or deployment of geoengineering technologies because of concern over the slow progress of emissions reductions, the uncertainties of climate sensitivity, the possible existence of climate thresholds (or “tipping points”), and the political, social, and economic impact of pursuing aggressive GHG mitigation strategies.

Congressional interest in geoengineering has focused primarily on whether geoengineering is a realistic, effective, and appropriate tool for the United States to use to address climate change. However, if geoengineering technologies are deployed by the United States, another government, or a private entity, several new concerns are likely to arise related to government support for, and oversight of, geoengineering as well as the transboundary and long-term effects of geoengineering. Such was the case in the summer of 2012, when an American citizen conducted a geoengineering experiment, specifically ocean fertilization, off the west coast of Canada that some say violated two international conventions.

This report is intended as a primer on the policy issues, science, and governance of geoengineering technologies. The report will first set the policy parameters under which geoengineering technologies may be considered. It will then describe selected technologies in detail and discuss their status. The third section provides a discussion of possible approaches to governmental involvement in, and oversight of, geoengineering, including a summary of domestic and international instruments and institutions that may affect geoengineering projects.

### Geoengineering governance

Geoengineering technologies aim to modify the Earth’s energy balance in order to reduce temperatures and counteract anthropogenic climate change through large-scale and deliberate modifications. Implementation of some of the technologies may be controlled locally, while other technologies may require global input on implementation. Additionally, whether a technology can be controlled or not once implemented differs by technology type. Little research has been done on most geoengineering methods, and no major directed research programs are in place. Peer reviewed literature is scant, and deployment of the technology—either through controlled field tests or commercial enterprise—has been minimal.

Most interested observers agree that more research would be required to test the feasibility, effectiveness, cost, social and environmental impacts, and the possible unintended consequences of geoengineering before deployment; others reject exploration of the options as too risky. The uncertainties have led some policymakers to consider the need and the role for governmental oversight to guide research in the short term and to oversee potential deployment in the long term. Such governance structures, both domestic and international, could either support or constrain geoengineering activities, depending on the decisions of policymakers. As both technological development and policy considerations for geoengineering are in their early stages, several questions of governance remain in play:

• What risk factors and policy considerations enter into the debate over geoengineering activities and government oversight?

• At what point, if ever, should there be government oversight of geoengineering activities?

• If there is government oversight, what form should it take?

• If there is government oversight, who should be responsible for it?

• If there is publicly funded research and development, what should it cover and which disciplines should be engaged in it?

### Risk Factors

As a new and emerging set of technologies potentially able to address climate change, geoengineering carries many risk factors that must be taken into account in policy considerations. From a research perspective, the risk of geoengineering activities most often rests in the uncertainties of the new technology (i.e., the risk of failure, accident, or unintended consequences). However, many observers believe that the greater risk in geoengineering activities may lie in the social, ethical, legal, and political uncertainties associated with deployment. Given these risks, there is an argument that appropriate mechanisms for government oversight should be established before the federal government and its agencies take steps to promote geoengineering technologies and before new geoengineering projects are commenced. Yet, the uncertainty behind the technologies makes it unclear which methods, if any, may ever mature to the point of being deemed sufficiently effective, affordable, safe, and timely as to warrant potential deployment.

Some of the more significant risk factors associated with geoengineering are as follows:

Technology Control Dilemma. An analytical impasse inherent in all emerging technologies is that potential risks may be foreseen in the design phase but can only be proven and resolved through actual research, development, and demonstration. Ideally, appropriate safeguards are put in place during the early stages of conceptualization and development, but anticipating the evolution of a new technology can be difficult. By the time a technology is widely deployed, it may be impossible to build desirable oversight and risk management provisions without major disruptions to established interests. Flexibility is often required to both support investigative research and constrain potentially harmful deployment.

Reversibility. Risk mitigation relies on the ability to cease a technology program and terminate its adverse effects in a short period of time. In principle, all geoengineering options could be abandoned on short notice, with either an instant cessation of direct climate effects or a small time lag after abandonment.

However, the issue of reversibility applies to more than just the technologies themselves. Given the importance of internal adjustments and feedbacks in the climate system—still imperfectly understood—it is unlikely that all secondary effects from large-scale deployment would end immediately. Also, choices made regarding geoengineering methods may influence other social, economic, and technological choices regarding climate science. Advancing geoengineering options in lieu of effectively mitigating GHG emissions, for example, could result in a number of adverse effects, including ocean acidification, stresses on biodiversity, climate sensitivity shocks, and other irreversible consequences. Further, investing financially in the physical infrastructure to support geoengineering may create a strong economic resistance to reversing research and deployment activities.

Encapsulation. Risk mitigation also relies on whether a technology program is modular and contained or whether it involves the release of materials into the wider environment. The issue can be framed in the context of pollution (i.e., encapsulated technologies are often viewed as more “ethical” in that they are seen as non-polluting). Several geoengineering technologies are demonstrably non-encapsulated, and their release and deployment into the wider environment may lead to technical uncertainties, impacts on non-participants, and complex policy choices. But encapsulated technologies may still have localized environmental impacts, depending on the nature, size, and location of the application. The need for regulatory action may arise as much from the indirect impacts of activities on agro-forestry, species, and habitat as from the direct impacts of released materials in atmospheric or oceanic ecosystems.

Commercial Involvement. The role of private-sector engagement in the development and promotion of geoengineering may be debated. Commercial involvement, including competition, may be positive in that it mobilizes innovation and capital investment, which could lead to the development of more effective and less costly technologies at a faster rate than in the public sector.

However, commercial involvement could bypass or neglect social, economic, and environmental risk assessments in favor of what one commentator refers to as “irresponsible entrepreneurial behavior.” Private-sector engagement would likely require some form of public subsidies or GHG emission pricing to encourage investment, as well as additional considerations including ownership models, intellectual property rights, and trade and transfer mechanisms for the dissemination of the technologies.

Public Engagement. The consequences of geoengineering—including both benefits and risks discussed above—could affect people and communities across the world. Public attitudes toward geoengineering, and public engagement in the formation, development, and execution of proposed governance, could have a critical bearing on the future of the technologies. Perceptions of risks, levels of trust, transparency of actions, provisions for liabilities and compensation, and economies of investment could play a significant role in the political feasibility of geoengineering. Public acceptance may require a wider dialogue between scientists, policymakers, and the public.

## Tipping Points in Climate Systems

4 March, 2013

If you’ve just recently gotten a PhD, you can get paid to spend a week this summer studying tipping points in climate systems!

They’re having a program on this at ICERM: the Institute for Computational and Experimental Research in Mathematics, in Providence, Rhode Island. It’s happening from July 15th to 19th, 2013. But you have to apply soon, by the 15th of March!

For details, see below. But first, a word about tipping points… in case you haven’t thought about them much.

### Tipping Points

A tipping point occurs when adjusting some parameter of a system causes it to transition abruptly to a new state. The term refers to a well-known example: as you push more and more on a glass of water, it gradually leans over further until you reach the point where it suddenly falls over. Another familiar example is pushing on a light switch until it ‘flips’ and the light turns on.

In the Earth’s climate, a number of tipping points could cause abrupt climate change:

(Click to enlarge.) They include:

• Loss of Arctic sea ice.
• Melting of the Greenland ice sheet.
• Melting of the West Antarctic ice sheet.
• Permafrost and tundra loss, leading to the release of methane.
• Boreal forest dieback.
• Amazon rainforest dieback.
• West African monsoon shift.
• Indian monsoon chaotic multistability.
• Change in El Niño amplitude or frequency.
• Change in formation of Atlantic deep water.
• Change in the formation of Antarctic bottom water.

• T. M. Lenton, H. Held, E. Kriegler, J. W. Hall, W. Lucht, S. Rahmstorf, and H. J. Schellnhuber, Tipping elements in the Earth’s climate system, Proceedings of the National Academy of Sciences 105 (2008), 1786–1793.

Mathematicians are getting interested in how to predict when we’ll hit a tipping point:

• Peter Ashwin, Sebastian Wieczorek and Renato Vitolo, Tipping points in open systems: bifurcation, noise-induced and rate-dependent examples in the climate system, Phil. Trans. Roy. Soc. A 370 (2012), 1166–1184.

Abstract: Tipping points associated with bifurcations (B-tipping) or induced by noise (N-tipping) are recognized mechanisms that may potentially lead to sudden climate change. We focus here on a novel class of tipping points, where a sufficiently rapid change to an input or parameter of a system may cause the system to “tip” or move away from a branch of attractors. Such rate-dependent tipping, or R-tipping, need not be associated with either bifurcations or noise. We present an example of all three types of tipping in a simple global energy balance model of the climate system, illustrating the possibility of dangerous rates of change even in the absence of noise and of bifurcations in the underlying quasi-static system.
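To get a feel for the most familiar kind, B-tipping, here’s a toy sketch of my own (not the authors’ model, and all parameter values are illustrative assumptions): a zero-dimensional energy balance model with an ice-albedo feedback. Slowly sweeping the insolation parameter up and then down reveals a bistable range, with abrupt jumps at its ends:

```python
import numpy as np

SIGMA = 5.670e-8  # Stefan-Boltzmann constant (W m^-2 K^-4)
EPS = 0.61        # effective emissivity (illustrative assumption)
C = 5.0e8         # surface heat capacity (J m^-2 K^-1, illustrative)

def albedo(T):
    """Ice-albedo feedback: albedo near 0.7 when frozen, near 0.3 when warm."""
    return 0.5 - 0.2 * np.tanh((T - 265.0) / 10.0)

def equilibrate(T, Q, dt=3.0e5, steps=20000):
    """Integrate C dT/dt = Q(1 - albedo(T)) - EPS*SIGMA*T^4 to equilibrium."""
    for _ in range(steps):
        T += dt * (Q * (1.0 - albedo(T)) - EPS * SIGMA * T**4) / C
    return T

Qs = np.linspace(280.0, 460.0, 46)  # slowly varied insolation (W m^-2)
T, up = 230.0, []
for Q in Qs:                        # sweep Q upward, starting cold
    T = equilibrate(T, Q)
    up.append(T)
T, down = 310.0, []
for Q in Qs[::-1]:                  # sweep Q downward, starting warm
    T = equilibrate(T, Q)
    down.append(T)

# Where the two sweeps disagree the model is bistable; the abrupt jumps
# at the ends of that range are bifurcation-induced (B-)tipping points.
for Q, Tu, Td in zip(Qs, up, down[::-1]):
    print(f"Q = {Q:5.1f}: up-sweep T = {Tu:5.1f} K, down-sweep T = {Td:5.1f} K")
```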

We can test out these theories using actual data:

• J. M. T. Thompson and J. Sieber, Predicting climate tipping as a noisy bifurcation: a review, International Journal of Bifurcation and Chaos 21 (2011), 399–423.

Abstract: There is currently much interest in examining climatic tipping points, to see if it is feasible to predict them in advance. Using techniques from bifurcation theory, recent work looks for a slowing down of the intrinsic transient responses, which is predicted to occur before an instability is encountered. This is done, for example, by determining the short-term auto-correlation coefficient ARC in a sliding window of the time series: this stability coefficient should increase to unity at tipping. Such studies have been made both on climatic computer models and on real paleoclimate data preceding ancient tipping events. The latter employ reconstituted time series provided by ice cores, sediments, etc., and seek to establish whether the actual tipping could have been accurately predicted in advance. One such example is the end of the Younger Dryas event, about 11,500 years ago, when the Arctic warmed by 7 °C in 50 years. A second gives an excellent prediction for the end of ‘greenhouse’ Earth about 34 million years ago when the climate tipped from a tropical state into an icehouse state, using data from tropical Pacific sediment cores. This prediction science is very young, but some encouraging results are already being obtained. Future analyses will clearly need to embrace both real data from improved monitoring instruments, and simulation data generated from increasingly sophisticated predictive models.
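The sliding-window autocorrelation indicator in this abstract is easy to compute. Here’s a minimal sketch of my own (an illustration, not the authors’ code): the lag-1 autocorrelation of an AR(1) process whose memory is slowly ramped toward 1, mimicking the ‘critical slowing down’ near a tipping point:

```python
import numpy as np

def sliding_lag1_autocorr(x, window):
    """Lag-1 autocorrelation of x in a sliding window. Values rising toward
    1 are the early-warning signal described in the abstract above."""
    out = []
    for i in range(len(x) - window):
        w = x[i:i + window] - x[i:i + window].mean()
        out.append(np.dot(w[:-1], w[1:]) / np.dot(w, w))
    return np.array(out)

# Toy data: an AR(1) process x[n] = phi[n] * x[n-1] + noise, with phi
# ramped slowly toward 1 to mimic the approach to a tipping point.
rng = np.random.default_rng(0)
n = 2000
phi = np.linspace(0.2, 0.99, n)
x = np.zeros(n)
for i in range(1, n):
    x[i] = phi[i] * x[i - 1] + rng.normal()

arc = sliding_lag1_autocorr(x, window=200)
print(f"ARC near the start: {arc[0]:.2f}, near the end: {arc[-1]:.2f}")
```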

The next paper is interesting because it studies tipping points experimentally by manipulating a lake. Doing this lets us study another important question: when can you push a system back to its original state after it’s already tipped?

• S. R. Carpenter, J. J. Cole, M. L. Pace, R. Batt, W. A. Brock, T. Cline, J. Coloso, J. R. Hodgson, J. F. Kitchell, D. A. Seekell, L. Smith, and B. Weidel, Early warnings of regime shifts: a whole-ecosystem experiment, Science 332 (2011), 1079–1082.

Abstract: Catastrophic ecological regime shifts may be announced in advance by statistical early-warning signals such as slowing return rates from perturbation and rising variance. The theoretical background for these indicators is rich but real-world tests are rare, especially for whole ecosystems. We tested the hypothesis that these statistics would be early-warning signals for an experimentally induced regime shift in an aquatic food web. We gradually added top predators to a lake over three years to destabilize its food web. An adjacent lake was monitored simultaneously as a reference ecosystem. Warning signals of a regime shift were evident in the manipulated lake during reorganization of the food web more than a year before the food web transition was complete, corroborating theory for leading indicators of ecological regime shifts.

### IdeaLab program

If you’re seriously interested in this stuff, and you recently got a PhD, you should apply to IdeaLab 2013, which is a program happening at ICERM from the 15th to the 19th of July, 2013. Here’s the deal:

The Idea-Lab invites 20 early career researchers (postdoctoral candidates and assistant professors) to ICERM for a week during the summer. The program will start with brief participant presentations on their research interests in order to build a common understanding of the breadth and depth of expertise. Throughout the week, organizers or visiting researchers will give comprehensive overviews of their research topics. Organizers will create smaller teams of participants who will discuss, in depth, these research questions, obstacles, and possible solutions. At the end of the week, the teams will prepare presentations on the problems at hand and ideas for solutions. These will be shared with a broad audience including invited program officers from funding agencies.

Two Research Project Topics:

• Tipping Points in Climate Systems (MPE2013 program)

• Towards Efficient Homomorphic Encryption

IdeaLab Funding Includes:

• Travel support

• Six nights accommodations

• Meal allowance

The Application Process:

IdeaLab applicants should be at an early stage of their post-PhD career. Applications for the 2013 IdeaLab are being accepted through MathPrograms.org.

Application materials will be reviewed beginning March 15, 2013.

## Successful Predictions of Climate Science

5 February, 2013

guest post by Steve Easterbrook

In December I went to the 2012 American Geophysical Union Fall Meeting. I’d like to tell you about the Tyndall lecture given by Ray Pierrehumbert, on “Successful Predictions”. You can watch the whole talk here:

But let me give you a summary, with some references.

Ray’s talk spanned 120 years of research on climate change. The key message is that science is a long, slow process of discovery, in which theories (and their predictions) tend to emerge long before they can be tested. We often learn just as much from the predictions that turned out to be wrong as we do from those that were right. But successful predictions eventually form the body of knowledge that we can be sure about, not just because they were successful, but because they build up into a coherent explanation of multiple lines of evidence.

Here are the successful predictions:

1896: Svante Arrhenius correctly predicts that increases in fossil fuel emissions would cause the earth to warm. At that time, much of the theory of how atmospheric heat transfer works was missing, but nevertheless, he got a lot of the process right. He was right that surface temperature is determined by the balance between incoming solar energy and outgoing infrared radiation, and that the balance that matters is the radiation budget at the top of the atmosphere. He knew that the absorption of infrared radiation was due to CO2 and water vapour, and he also knew that CO2 is a forcing while water vapour is a feedback. He understood the logarithmic relationship between CO2 concentrations in the atmosphere and surface temperature. However, he got a few things wrong too. His attempt to quantify the enhanced greenhouse effect was incorrect, because he worked with a 1-layer model of the atmosphere, which cannot capture the competition between water vapour and CO2, and doesn’t account for the role of convection in determining air temperatures. His calculations were incorrect because he had the wrong absorption characteristics of greenhouse gases. And he thought the problem would be centuries away, because he didn’t imagine an exponential growth in use of fossil fuels.
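In modern terms (not Arrhenius’s own formulation), the logarithmic relationship he grasped is usually expressed as a radiative forcing

$\displaystyle{ \Delta F \approx 5.35 \, \ln\left( \frac{C}{C_0} \right) \; \mathrm{W/m}^2 }$

where $C$ is the CO2 concentration and $C_0$ a reference concentration. Each doubling of CO2 then adds the same forcing, about 3.7 W/m², and hence roughly the same equilibrium warming.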

Arrhenius, as we now know, was way ahead of his time. Nobody really considered his work again for nearly 50 years, a period we might think of as the dark ages of climate science. The story perfectly illustrates Paul Hoffman’s tongue-in-cheek depiction of how scientific discoveries work: someone formulates the theory, other scientists then reject it, ignore it for years, eventually rediscover it, and finally accept it. These “dark ages” weren’t really dark, of course—much good work was done in this period. For example:

• 1900: Frank Very worked out the radiation balance, and hence the temperature, of the moon. His results were confirmed by Pettit and Nicholson in 1930.

• 1902-14: Arthur Schuster and Karl Schwarzschild used a 2-layer radiative-convective model to explain the structure of the sun.

• 1907: Robert Emden realized that a similar radiative-convective model could be applied to planets, and Gerard Kuiper and others applied this to astronomical observations of planetary atmospheres.

This work established the standard radiative-convective model of atmospheric heat transfer. This treats the atmosphere as two layers; in the lower layer, convection is the main heat transport, while in the upper layer, it is radiation. A planet’s outgoing radiation comes from this upper layer. However, up until the early 1930s, there was no discussion in the literature of the role of carbon dioxide, despite occasional discussion of climate cycles. In 1928, George Simpson published a memoir on atmospheric radiation, which assumed water vapour was the only greenhouse gas, even though, as Richardson pointed out in a comment, there was evidence that even dry air absorbed infrared radiation.

1938: Guy Callendar is the first to link observed rises in CO2 concentrations with observed rises in surface temperatures. But Callendar failed to revive interest in Arrhenius’s work, and made a number of mistakes in things that Arrhenius had gotten right. Callendar’s calculations focused on the radiation balance at the surface, whereas Arrhenius had (correctly) focused on the balance at the top of the atmosphere. Also, he neglected convective processes, which astrophysicists had already resolved using the radiative-convective model. In the end, Callendar’s work was ignored for another two decades.

1956: Gilbert Plass correctly predicts a depletion of outgoing radiation in the 15 micron band, due to CO2 absorption. This depletion was eventually confirmed by satellite measurements. Plass was one of the first since Callendar to revisit Arrhenius’s work. However, his calculations of climate sensitivity to CO2 were also wrong because, like Callendar, he focused on the surface radiation budget rather than the top of the atmosphere.

1961-2: Carl Sagan correctly predicts very thick greenhouse gases in the atmosphere of Venus, as the only way to explain the very high observed temperatures. His calculations showed that greenhouse gases must absorb around 99.5% of the outgoing surface radiation. The composition of Venus’s atmosphere was confirmed by NASA’s Venus probes in 1967-70.

1959: Bert Bolin and Erik Eriksson correctly predict the exponential increase in CO2 concentrations in the atmosphere as a result of rising fossil fuel use. At that time they did not have good data for atmospheric concentrations prior to 1958, hence their hindcast back to 1900 was wrong, but despite this, their projection of changes forward to 2000 was remarkably good.

1967: Suki Manabe and Dick Wetherald correctly predict that warming in the lower atmosphere would be accompanied by stratospheric cooling. They had built the first completely correct radiative-convective implementation of the standard model applied to Earth, and used it to calculate a +2 °C equilibrium warming for doubling CO2, including the water vapour feedback, assuming constant relative humidity. The stratospheric cooling was confirmed in 2011 by Gillett et al.

1975: Suki Manabe and Dick Wetherald correctly predict that the surface warming would be much greater in the polar regions, and that there would be some upper troposphere amplification in the tropics. This was the first coupled general circulation model (GCM), with an idealized geography. This model computed changes in humidity, rather than assuming them, as had been the case in earlier models. It showed polar amplification, and some vertical amplification in the tropics. The polar amplification was measured, and confirmed by Serreze et al. in 2009. However, the height gradient in the tropics hasn’t yet been confirmed (nor has it yet been falsified—see Thorne 2008 for an analysis).

1989: Ron Stouffer et al. correctly predict that the land surface will warm more than the ocean surface, and that the southern ocean warming would be temporarily suppressed due to the slower ocean heat uptake. These predictions are correct, although these models failed to predict the strong warming we’ve seen over the Antarctic Peninsula.

Of course, scientists often get it wrong:

1900: Knut Ångström incorrectly predicts that increasing levels of CO2 would have no effect on climate, because he thought the effect was already saturated. His laboratory experiments weren’t accurate enough to detect the actual absorption properties, and even if they were, the vertical structure of the atmosphere would still allow the greenhouse effect to grow as CO2 is added.

1971: Rasool and Schneider incorrectly predict that atmospheric cooling due to aerosols would outweigh the warming from CO2. However, their model had some important weaknesses, and was shown to be wrong by 1975. Rasool and Schneider fixed their model and moved on. Good scientists acknowledge their mistakes.

1993: Richard Lindzen incorrectly predicts that warming will dry the troposphere, according to his theory that a negative water vapour feedback keeps climate sensitivity to CO2 really low. Lindzen’s work attempted to resolve a long-standing conundrum in climate science. In 1981, the CLIMAP project reconstructed temperatures at the Last Glacial Maximum, and showed very little tropical cooling. This was inconsistent with the general circulation models (GCMs), which predicted substantial cooling in the tropics (e.g. see Broccoli & Manabe 1987). So everyone thought the models must be wrong. Lindzen attempted to explain the CLIMAP results via a negative water vapour feedback. But then the CLIMAP results started to unravel, and newer proxies demonstrated that it was the CLIMAP data that was wrong, rather than the models. It eventually turned out the models were getting it right, and it was the CLIMAP data and Lindzen’s theories that were wrong. Unfortunately, bad scientists don’t acknowledge their mistakes; Lindzen keeps inventing ever more arcane theories to avoid admitting he was wrong.

1995: John Christy and Roy Spencer incorrectly calculate that the lower troposphere is cooling, rather than warming. Again, this turned out to be wrong, once errors in satellite data were corrected.

In science, it’s okay to be wrong, because exploring why something is wrong usually advances the science. But sometimes, theories are published that are so bad, they are not even wrong:

2007: Courtillot et al. predicted a connection between cosmic rays and climate change. But they couldn’t even get the sign of the effect consistent across the paper. You can’t falsify a theory that’s incoherent! Scientists label this kind of thing as “Not even wrong”.

Finally, there are, of course, some things that scientists didn’t predict. The most important of these is probably the multi-decadal fluctuations in the warming signal. If you calculate the radiative effect of all greenhouse gases, and the delay due to ocean heating, you still can’t reproduce the flat period in the temperature trend that was observed in 1950–1970. While this wasn’t predicted, we ought to be able to explain it after the fact. Currently, there are two competing explanations. The first is that ocean heat uptake itself has decadal fluctuations, although models don’t show this. In that case, if climate sensitivity is at the low end of the likely range (say 2 °C per doubling of CO2), it’s possible we’re seeing a decadal fluctuation around a warming signal. The other explanation is that aerosols took some of the warming away from GHGs. This explanation requires a higher value for climate sensitivity (say around 3 °C), but with a significant fraction of the warming counteracted by an aerosol cooling effect. If this explanation is correct, it’s a much more frightening world, because it implies much greater warming as CO2 levels continue to increase. The truth is probably somewhere between these two. (See Armour & Roe, 2011 for a discussion.)
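Schematically (my gloss, not Ray’s), you can see why the two explanations fit the same record but diverge in the future. Write the observed warming as

$\displaystyle{ \Delta T = \lambda \, (F_{\mathrm{GHG}} - F_{\mathrm{aer}}) }$

where $\lambda$ is the climate sensitivity parameter, $F_{\mathrm{GHG}}$ the greenhouse-gas forcing and $F_{\mathrm{aer}}$ the aerosol cooling. A small $\lambda$ with $F_{\mathrm{aer}} \approx 0$ and a large $\lambda$ with a large $F_{\mathrm{aer}}$ can match the same observed $\Delta T$ so far, but they predict very different warming once $F_{\mathrm{GHG}}$ keeps growing while aerosols level off.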

To conclude, climate scientists have made many predictions about the effect of increasing greenhouse gases that have proven to be correct. They have earned a right to be listened to, but is anyone actually listening? If we fail to act upon the science, will future archaeologists wade through AGU abstracts and try to figure out what went wrong? There are signs of hope—in his re-election acceptance speech, President Obama revived his pledge to take action, saying “We want our children to live in an America that isn’t threatened by the destructive power of a warming planet.”

## Milankovich vs the Ice Ages

30 January, 2013

guest post by Blake Pollard

Hi! My name is Blake S. Pollard. I am a physics graduate student working under Professor Baez at the University of California, Riverside. I studied Applied Physics as an undergraduate at Columbia University. As an undergraduate my research was more on the environmental side; working as a researcher at the Water Center, a part of the Earth Institute at Columbia University, I developed methods using time-series satellite data to keep track of irrigated agriculture over northwestern India for the past decade.

I am passionate about physics, but have the desire to apply my skills in more terrestrial settings. That is why I decided to come to UC Riverside and work with Professor Baez on some potentially more practical cross-disciplinary problems. Before starting work on my PhD I spent a year surfing in Hawaii, where I also worked in experimental particle physics at the University of Hawaii at Manoa. My current interests (besides passing my classes) lie in exploring potential applications of the analogy between information and entropy, as well as in understanding parallels between statistical, stochastic, and quantum mechanics.

Glacial cycles are one essential feature of the Earth’s climate dynamics over timescales on the order of 100s of kiloyears (kyr). It is often accepted as common knowledge that these glacial cycles are in some way forced by variations in the Earth’s orbit. In particular, many have argued that the approximate 100 kyr period of glacial cycles corresponds to variations in the Earth’s eccentricity. As we saw in Professor Baez’s earlier posts, while the variation of eccentricity does affect the total insolation arriving at Earth, this variation is small. Thus many have proposed the existence of a nonlinear mechanism by which such small variations become amplified enough to drive the glacial cycles. Others have proposed that eccentricity is not primarily responsible for the 100 kyr period of the glacial cycles.

Here is a brief summary of some time series analysis I performed in order to better understand the relationship between the Earth’s Ice Ages and the Milankovich cycles.

I used publicly available data on the Earth’s orbital parameters computed by André Berger (see below for all references). This data includes an estimate of the insolation derived from these parameters, which is plotted below against the Earth’s temperature, as estimated using deuterium concentrations in an ice core from a site in the Antarctic called EPICA Dome C:

As you can see, it’s a complicated mess, even when you click to enlarge it! However, I’m going to focus on the orbital parameters themselves, which behave more simply. Below you can see graphs of three important parameters:

• obliquity (tilt of the Earth’s axis),
• precession (direction the tilted axis is pointing),
• eccentricity (how much the Earth’s orbit deviates from being circular).

You can click on any of the graphs here to enlarge them:

Richard Muller and Gordon MacDonald have argued that another astronomical parameter is important: the angle between the plane of the Earth’s orbit and the ‘invariant plane’ of the solar system. This invariant plane depends on the angular momenta of the planets, but roughly coincides with the plane of Jupiter’s orbit, from what I understand. Here is a plot of the orbital plane inclination for the past 800 kyr:

One can see from these plots, or from some spectral analysis, that the main periodicities of the orbital parameters are:

• Obliquity ~ 42 kyr
• Precession ~ 21 kyr
• Eccentricity ~ 100 kyr
• Orbital plane ~ 100 kyr

Of course the curves clearly are not simple sine waves with those periods. Fourier transforms give information about the relative power of different frequencies in a time series, but no information remains about the time dependence of those frequencies, since time is integrated out in the Fourier transform.

The Gabor transform is a generalization of the Fourier transform, sometimes referred to as the ‘windowed’ Fourier transform. For the Fourier transform:

$\displaystyle{ F(\omega) = \dfrac{1}{\sqrt{2\pi}} \int_{-\infty}^{\infty} f(t) e^{-i\omega t} \, dt}$

one may think of $e^{-i\omega t}$, the ‘kernel function’, as the guy acting as your basis element in both spaces. For the Gabor transform instead of $e^{-i\omega t}$ one defines a family of functions,

$g_{(b,\omega)}(t) = e^{i\omega(t-b)}g(t-b)$

where $g \in L^{2}(\mathbb{R})$ is called the window function. Typical windows are square windows and triangular (Bartlett) windows, but the most common is the Gaussian:

$\displaystyle{ g(t)= e^{-kt^2} }$

which is used in the analysis below. The Gabor transform of a function $f(t)$ is then given by

$\displaystyle{ G_{f}(b,\omega) = \int_{-\infty}^\infty f(t) \overline{g(t-b)} e^{-i\omega(t-b)} \, dt }$

Note that the output of a Gabor transform, like the Fourier transform, is a complex function. The modulus of this function indicates the strength of a particular frequency in the signal, while the phase carries information about the… well, phase.

For example the modulus of the Gabor transform of

$\displaystyle{ f(t)=\sin(\dfrac{2\pi t}{100}) }$

is shown below. For these plots I used the R package Rwave, originally written in S by Rene Carmona and Bruno Torresani and ported to R by Brandon Whitcher.

You can see that the line centered at a frequency of .01 corresponds to the function’s period of 100 time units.
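If you’d like to experiment but don’t use R, here is a rough Python equivalent of this example (a sketch using numpy and matplotlib; Rwave’s conventions may differ in detail). It evaluates the modulus of the Gabor transform on a grid of window centers $b$ and frequencies:

```python
import numpy as np
import matplotlib.pyplot as plt

def gabor_modulus(f, t, freqs, centers, k=5e-4):
    """|Gabor transform| of samples f(t), with Gaussian window exp(-k t^2)."""
    dt = t[1] - t[0]
    G = np.empty((len(freqs), len(centers)))
    for j, b in enumerate(centers):
        g = np.exp(-k * (t - b) ** 2)            # window centered at b
        for i, nu in enumerate(freqs):
            kernel = g * np.exp(-2j * np.pi * nu * (t - b))
            G[i, j] = abs(np.sum(f * kernel) * dt)
    return G

t = np.arange(0.0, 800.0)                        # sampling rate: 1 time unit
f = np.sin(2 * np.pi * t / 100.0)                # period 100, frequency .01

freqs = np.linspace(0.002, 0.05, 120)            # cycles per time unit
centers = np.arange(0.0, 800.0, 5.0)             # window positions b
mod = gabor_modulus(f, t, freqs, centers)

plt.pcolormesh(centers, freqs, mod, shading="auto")
plt.xlabel("time b")
plt.ylabel("frequency")
plt.show()                                       # horizontal band at .01
```

For the linearly increasing frequency example below, you can swap in a chirp such as f = np.sin(2 * np.pi * t * (0.002 + 1e-5 * t)), whose instantaneous frequency rises linearly from .002 to .018 over the 800 time units.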

A Fourier transform would do okay for such a function, but consider now a sine wave whose frequency increases linearly. As you can see below, the Gabor transform of such a function shows the linear increase of frequency with time:

The window parameter in both of the above Gabor transforms is 100 time units. Adjusting this parameter affects the vertical blurriness of the Gabor transform. For example here is the same plot as above, but with window parameters of 300, 200, 100, and 50 time units:

You can see that as you make the window smaller the line gets sharper, but only to a point. When the window becomes smaller than about one period of the signal, the line starts to blur again. This makes sense, because you can’t know the frequency of a signal precisely at a precise moment in time… just like you can’t precisely know both the momentum and position of a particle in quantum mechanics! The math is related, in fact.
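For the Gaussian window $g(t) = e^{-kt^2}$ this trade-off can be made quantitative: the effective time width is $\Delta t = 1/(2\sqrt{k})$, while the width of its Fourier transform is $\Delta \omega = \sqrt{k}$, so

$\displaystyle{ \Delta t \, \Delta \omega = \frac{1}{2} }$

no matter how $k$ is chosen. Shrinking the window buys time resolution at the cost of frequency resolution, and the Gaussian is exactly the window for which this product is as small as the uncertainty principle allows.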

Now let’s look at the Earth’s temperature over the past 800 kyr, estimated from the EPICA ice core deuterium concentrations:

When you look at this, the first thing you notice is spikes occurring about every 100 kyr. You can also see that the last 5 of these spikes appear bigger and more dramatic than the ones occurring before 500 kyr ago. Roughly speaking, each spike corresponds to rapid warming of the Earth, followed by slightly less rapid cooling, and then a slow decrease in temperature until the next spike. These are the Earth’s glacial cycles.

At the bottom of the curve, where the temperature is about 4 °C cooler than the mean of this curve, glaciers are forming and extending down across the northern hemisphere. The relatively warm periods at the top of the spikes, about 10 °C hotter than the glacial periods, are called the interglacials. You can see that we are currently in the middle of an interglacial, so the Earth is relatively warm compared to the rest of the glacial cycles.

Now we’ll take a look at the windowed Fourier transform, or the Gabor transform, of this data. The window size for these plots is 300 kyr.
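If you’d like to redo this at home, the computation has roughly this shape. The file name and column names below are hypothetical stand-ins for the EPICA data linked in the references:

```r
# Hypothetical local copy of the EPICA Dome C temperature estimates.
epica <- read.table("epica_domeC.txt", header = TRUE)

# cgt() wants a regularly sampled series, so interpolate onto a 1 kyr grid.
grid <- approx(epica$age_kyr, epica$temperature, xout = 0:800)

# Gabor transform with a 300 kyr window, as in the plots here.
gt <- cgt(grid$y, nvoice = 50, freqstep = 0.001, scale = 300)
```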

Zooming in a bit, one can see a few interesting features in this plot:

We see one line at a frequency of about .024 which, given the sampling rate of 1 kyr, corresponds to a period of about 42 kyr, close to the period of obliquity. We also see a few things going on around a frequency of .01, corresponding to a 100 kyr period.

The band at .024 appears to be relatively horizontal, indicating an approximately constant frequency. Around the 100 kyr periods there is more going on. At a slightly higher frequency, about .015, there appears to be a band of slowly increasing frequency. Also, around .01 it’s hard to say what is really going on. It is possible that we see a combination of two frequency elements, one increasing, one decreasing, but almost symmetric. This may just be an artifact of the Gabor transform or the window and frequency parameters.

The window size for the plots below is slightly smaller, about 250 kyr. If we put the temperature and obliquity Gabor transforms side by side, we see this:

It’s clear the lines at .024 line up pretty well.

Doing the same with eccentricity:

Eccentricity does not line up well with temperature in this exercise, though both have bright bands above and below .01.

Now for temperature and orbital inclination:

One sees that the frequencies line up better for this than for eccentricity, but one has to keep in mind that there is a nonlinear transformation performed on the ‘raw’ orbital plane data to project this down into the ‘invariant plane’ of the solar system. While this is physically motivated, it surely nudges the spectrum.

The temperature data clearly has a component with a period of approximately 42 kyr, matching well with obliquity. If you tilt your head a bit you can also see an indication of a fainter response at a frequency a bit above .04, corresponding roughly to a period just below 25 kyr, close to that of precession.

As far as the 100 kyr period goes, which is the periodicity of the glacial cycles, this analysis confirms much of what is known: namely, that we can’t say for sure. Eccentricity seems to line up well with a periodicity of approximately 100 kyr, but on closer inspection there seem to be some discrepancies if you try to understand the glacial cycles as being forced by variations in eccentricity. The Gabor transform modulus of the orbital plane inclination resembles that of the temperature more closely than eccentricity’s does.

A good next step would be to look at the relative phases of the orbital parameters versus the temperature, but that’s all for now.

If you have any questions or comments or suggestions, please let me know!

### References

The orbital data used above is due to André Berger et al and can be obtained here:

Orbital variations and insolation database, NOAA/NCDC/WDC Paleoclimatology.

The temperature proxy is due to J. Jouzel et al, and it’s based on changes in deuterium concentrations from the EPICA Antarctic ice core dating back over 800 kyr. This data can be found here:

EPICA Dome C – 800 kyr deuterium data and temperature estimates, NOAA Paleoclimatology.

Here are the papers by Muller and MacDonald that I mentioned:

• Richard Muller and Gordon MacDonald, Glacial cycles and astronomical forcing, Science 277 (1997), 215–218.

• Richard Muller and Gordon MacDonald, Spectrum of 100-kyr glacial cycle: orbital inclination, not eccentricity, PNAS 94 (1997), 8329–8334.

They also have a book:

• Richard Muller and Gordon MacDonald, Ice Ages and Astronomical Causes, Springer, Berlin, 2002.

You can also get files of the data I used here:

Berger et al orbital parameter data, with explanatory text here.

Jouzel et al EPICA Dome C temperature data, with explanatory text here.

## Anasazi America (Part 2)

24 January, 2013

Last time I told you a story of the American Southwest, starting with the arrival of small bands of hunters around 10,000 BC. I focused on the Anasazi, or ‘ancient Pueblo people’, and I led up to the Late Basketmaker III Era, from 500 to 750 AD.

The big invention during this time was the bow and arrow. Before then, large animals were killed by darts thrown from slings, which required a lot more skill and luck. But even more important was the continuing growth of agriculture: the cultivation of corn, beans and squash. This fueled a period of dramatic population growth.

But this was just the start!

### The Pueblo I and II Eras

The Pueblo I Era began around 750 AD. At this time people started living in ‘pueblos’: houses with flat roofs held up by wooden poles. Towns became bigger, holding up to 600 people. But these towns typically lasted only 30 years or so. It seems people needed to move when conditions changed.

Starting around 800 AD, the ancient Pueblo people started building ‘great houses’: multi-storied buildings with high ceilings, rooms much larger than those in domestic dwellings, and elaborate subterranean rooms called ‘kivas’. And around 900 AD, people started building houses with stone roofs. We call this the start of the Pueblo II Era.

The center of these developments was the Chaco Canyon area in New Mexico:

Chaco Canyon is 125 kilometers east of Canyon de Chelly.
Unfortunately, I didn’t see it on my trip—I wanted to, but we didn’t have time.

By 950 AD, there were pueblos on every ridge and hilltop of the Chaco Canyon area. Due to the high population density and unpredictable rainfall, this area could no longer provide enough meat to sustain the needs of the local population. Apparently they couldn’t get enough fat, salt and minerals from a purely vegan diet—a shortcoming we have now overcome!

Yet the population continued to grow until 1000 AD. In his book Anasazi America, David Stuart wrote:

Millions of us buy mutual funds, believing the risk is spread among millions of investors and a large “basket” of fund stocks. Millions divert a portion of each hard-earned paycheck to purchase such funds for retirement. “Get in! Get in!” hawk the TV ads. “The market is going up. Historically, it always goes up in the long haul. The average rate of return this century is 9 percent per year!” Every one of us who does that is a Californian at heart, believing in growth, risk, power. It works—until an episode of too-rapid expansion in the market, combined with brutal business competition, threatens to undo it.

That is about what it was like, economically, at Chaco Canyon in the year 1000—rapid agricultural expansion, no more land to be gotten, and deepening competition. Don’t think of it as “romantic” or “primitive”. Think of it as just like 1999 in the United States, when the Dow Jones Industrial Average hit 11,000 and 30 million investors held their breath to see what would happen next.

### The Chaco phenomenon

In 1020 the rainfall became more predictable. There wasn’t more rain; it was simply less erratic. This was good for the ancient Pueblo people. At this point the ‘Chaco phenomenon’ began: an amazing flowering of civilization.

We see this in places like Pueblo Bonito, the largest great house in Chaco Canyon:

Pueblo Bonito was founded in the 800s. But starting in 1020 it grew immensely, and it kept growing until 1120. By this time it had 700 rooms, nearly half devoted to grain storage. It also had 33 kivas, which are the round structures you see here.

But Pueblo Bonito is just one of a dozen great houses built in Chaco Canyon by 1120. About 215 thousand ponderosa pine trees were cut down in this building spree! Stuart estimates that building these houses took over 2 million man-hours of work. They also built about 650 kilometers of roads! Most of these connect one great house to another… but some mysteriously seem to go to ‘nowhere’.

By 1080, however, the summer rainfall had started to decline. And by 1090 there were serious summer droughts lasting for five years. We know this sort of thing from tree rings: there are enough ponderosa logs and the like that archaeologists have built up a detailed year-by-year record.

Thanks to overpopulation and these droughts, Chaco Canyon civilization was in serious trouble at this point, but it charged ahead:

Parts of Chacoan society were already in deep trouble after AD 1050 as health and living conditions progressively eroded in the southern districts’ open farming communities. The small farmers in the south had first created reliable surpluses to be stored in the great houses. Ultimately, it was the increasingly terrible conditions of those farmers, the people who grew the corn, that had made Chacoan society so fatally vulnerable. They simply got back too little from their efforts to carry on.

[....]

Still, the great-house dwellers didn’t merely sit on their hands. As some farms failed, they used farm labor to expand roads, rituals, and great houses. This prehistoric version of a Keynesian growth model apparently alleviated enough of the stresses and strains to sustain growth through the 1070s. Then came the waning rainfall of the 1080s, followed by drought in the 1090s.

Circumstances in farming communities worsened quickly and dramatically with this drought; the very survival of many was at stake. The great-house elites at Chaco Canyon apparently responded with even more roads, rituals, and great houses. This was actually a period of great-house and road infrastructure “in-fill”, both in and near established open communities. In a few years, the rains returned. This could not help but powerfully reinforce the elites’ now well-established, formulaic response to problems.

But roads, rituals, and great houses simply did not do enough for the hungry farmers who produced corn and pottery. As the eleventh century drew to a close, even though the rains had come again, they walked away, further eroding the surpluses that had fueled the system. Imagine it: the elites must have believed the situation was saved, even as more farmers gave up in despair. Inexplicably, they never “exported” the modest irrigation system that had caught and diverted midsummer runoff from the mesa tops at Chaco Canyon and made local fields more productive. Instead, once again the elites responded with the sacred formula—more roads, more rituals, more great houses.

So, Stuart argues that the last of the Chaco Canyon building projects were “the desperate economic reactions of a fragile and frightened society”.

Regardless of whether this is true, we know that starting around 1100 AD, many of the ancient Pueblo people left the Chaco Canyon area. Many moved upland, to places with more rain and snow. Instead of great houses, many returned to building the simpler pit houses of old.

Tribes descended from the ancient Pueblo people still have myths about the decline of the Chaco civilization. While such tales should be taken with a huge grain of salt, they are too fascinating not to repeat. Here are two quotes:

In our history we talk of things that occurred a long time ago, of people who had enormous amounts of power, spiritual power and power over people. I think that those kinds of people lived here in Chaco…. Here at Chaco there were very powerful people who had a lot of spiritual power, and these people probably used their power in ways that caused things to change, and that may have been one of the reasons why the migrations were set to start again, because these people were causing changes that were never meant to occur.

My response to the canyon was that some sensibility other than my Pueblo ancestors had worked on the Chaco great houses. There were the familiar elements such as the nansipu (the symbolic opening into the underworld), kivas, plazas and earth materials, but they were overlain by a strictness and precision of design that was unfamiliar…. It was clear that the purpose of these great villages was not to restate their oneness with the earth but to show the power and specialness of humans… a desire to control human and natural resources… These were men who embraced a social-political-religious hierarchy and envisioned control and power over places, resources and people.

These quotes are from an excellent book on the changing techniques and theories of archaeologists of the American Southwest:

• Stephen H. Lekson, A History of the Ancient Southwest, School for Advanced Research, Santa Fe, New Mexico, 2008.

What these quotes show, I think, is that the sensibility of current-day Pueblo people is very different from that of the people who built the great houses of Chaco Canyon. According to David Stuart, the Chaco civilization was a ‘powerful’ culture, while their descendants became an ‘efficient’ culture:

… a powerful society (or organism) captures more energy and expends (metabolizes) it more rapidly than an efficient one. Such societies tend to be structurally more complex, more wasteful of energy, more competitive, and faster paced than an efficient one. Think of modern urban America as powerful, and you will get the picture. In contrast, an efficient society “metabolizes” its energy more slowly, and so it is structurally less complex, less wasteful, less competitive, and slower. Think of Amish farmers in Pennsylvania or contemporary Pueblo farms in the American Southwest.

In competitive terms, the powerful society has an enormous short-term advantage over the efficient one if enough energy is naturally available to “feed” it, or if its technology and trade can bring in energy rapidly enough to sustain it. But when energy (food, fuel and resources) becomes scarce, or when trade and technology fail, an efficient society is advantageous because its simpler, less wasteful structure is more easily sustained in times of scarcity.

### The Pueblo III Era, and collapse

By 1150 AD, some of the ancient Pueblo people began building cliff dwellings at higher elevations—like Mesa Verde in Colorado, shown above. This marks the start of the Pueblo III Era. But this era lasted a short time. By 1280, Mesa Verde was deserted!

Some of the ruins in Canyon de Chelly also date to the Pueblo III Era. For example, the White House Ruins were built around 1200. Here are some of my pictures of this marvelous place. Click to enlarge:

But again, they were deserted by the end of the Pueblo III Era.

Why did the ancient Pueblo people move to cliff dwellings? And why did they move out so soon?

Nobody is sure. Cliff dwellings are easy to defend against attack. Built into the south face of a cliff, they catch the sun in winter to stay warm—it gets cold here in winter!—but they stay cool when the sun is straight overhead in summer. These are good reasons to build cliff dwellings. But these reasons don’t explain why cliff dwellings were so popular from 1150 to 1280, and then were abandoned!

One important factor seems to be this: there was a series of severe droughts starting around 1275. There were also raids from other tribes: speakers of Na-Dené languages, who eventually became the current-day Navajo inhabitants of this area.

But drought alone may be unable to explain what happened. There have been some fascinating attempts to model the collapse of the Anasazi culture. One is called the Artificial Anasazi Project. It used ‘agent-based modeling’ to study what the ancient Pueblo people did in Long House Valley, Arizona, from 200 to 1300. The Villages Project, a collaboration of Washington State University and the Crow Canyon Archaeological Center, focused on the region near Mesa Verde.

Quoting Stephen Lekson’s book:

Both projects mirrored actual settlement patterns from 800 to 1250 with admirable accuracy. Problems rose, however, with the abandonments of the regions, in both cases after 1250. There were unexplained exceptions, misfits between the models and reality.

Those misfits were not minor. Neither model predicted complete abandonment. Yet it happened. That’s perplexing. In the Scientific American summary of the Long House Valley model, Kohler, Gumerman, and Reynolds write, “We can only conclude that sociopolitical, ideological or environmental factors not included in our model must have contributed to the total depopulation of the valley.” Similar conundrums beset the Villages Project: “None of our simulations terminated with a population decline as dramatic as what actually happened in the Mesa Verde region in the late 1200s.”

These simulation projects look interesting! Of course they leave out many factors, but that’s okay: it suggests that one of those factors could be important in understanding the collapse.

For more info, click on the links. Also try this short review by the author of a famous book on why civilizations collapse:

• Jared Diamond, Life with the artificial Anasazi, Nature 419 (2002), 567–569.

From this article, here are the simulated versus ‘actual’ populations of the ancient Pueblo people in Long House Valley, Arizona, from 800 to 1350 AD:

The so-called ‘actual’ population is estimated using the number of house sites that were active at a given time, assuming five people per house.

This graph gives a shocking and dramatic ending to our tale! Let’s hope our current-day tale doesn’t end so abruptly, because in abrupt transitions much gets lost. But of course the ancient Pueblo people didn’t disappear. They didn’t all die. They became an ‘efficient’ society: they learned to make do with diminished resources.

## Why It’s Getting Hot

22 January, 2013

The Berkeley Earth Surface Temperature project concludes: carbon dioxide concentration and volcanic activity suffice to explain most of the changes in earth’s surface temperature from 1751 to 2011. Carbon dioxide increase explains most of the warming; volcanic outbursts explain most of the bits of sudden cooling. The fit is not improved by the addition of a term for changes in the behavior of the Sun!

For details, see:

• Robert Rohde, Richard A. Muller, Robert Jacobsen, Elizabeth Muller, Saul Perlmutter, Arthur Rosenfeld, Jonathan Wurtele, Donald Groom and Charlotte Wickham, A new estimate of the average earth surface land temperature spanning 1753 to 2011, Geoinformatics & Geostatistics: An Overview 1 (2012).

The downward spikes are explained nicely by volcanic activity. For example, you can see the 1815 eruption of Tambora in Indonesia, which blanketed the atmosphere with ash. 1816 was called The Year Without a Summer: frost and snow were reported in June and July in both New England and Northern Europe! Average global temperatures dropped 0.4–0.7 °C, resulting in major food shortages across the Northern Hemisphere. Similarly, the dip in 1783–1785 seems to be due to Grímsvötn in Iceland.

(Carbon dioxide goes up a tiny bit in volcanic eruptions, but that’s mostly irrelevant. It’s the ash and sulfur dioxide, forming sulfuric acid droplets that help block incoming sunlight, that really matter for volcanoes!)

It’s worth noting that they get their best fit if each doubling of carbon dioxide concentration causes a 3.1 ± 0.3°C increase in land temperature. This is consistent with the 2007 IPCC report’s estimate of a 3 ± 1.5°C warming for land plus oceans when carbon dioxide doubles. This quantity is called climate sensitivity, and determining it is very important.
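To get a feel for what that sensitivity means, remember that the warming scales with the logarithm of the concentration. As a back-of-the-envelope example, going from the preindustrial level of roughly 280 ppm to 400 ppm gives

$\displaystyle{ \Delta T = 3.1 \times \log_2(400/280) \approx 1.6 \, {}^\circ\mathrm{C} }$

of eventual land warming under this simple model.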

They also get their best fit if each extra 100 gigatonnes of atmospheric sulfates (from volcanoes) cause 1.5 ± 0.5°C of cooling.
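At heart this is a two-predictor linear regression. Here’s a minimal sketch in R, assuming you’ve already built yearly series temp, co2 and sulfate; those variable names are mine, and this is not the BEST team’s actual code:

```r
# Minimal two-predictor fit in the spirit of the BEST analysis.
# 'temp' (°C), 'co2' (ppm) and 'sulfate' (Gt) are hypothetical yearly
# series you must load yourself.
fit <- lm(temp ~ log2(co2) + sulfate)
summary(fit)

# The coefficient on log2(co2) estimates the warming per doubling of CO2;
# the coefficient on 'sulfate' estimates the cooling per Gt of sulfates.
coef(fit)
```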

They also look at the left-over temperature variations that are not explained by this simple model: 3.1°C of warming with each doubling of carbon dioxide, and 1.5°C of cooling for each extra 100 gigatonnes of atmospheric sulfates. Here’s what they get:

The left-over temperature variations, or ‘residuals’, are shown in black, with error bars in gray. On top is the annual data; on the bottom, a 10-year moving average. The red line is an index of the Atlantic Multidecadal Oscillation, a fluctuation in the sea surface temperature of the North Atlantic Ocean with a rough ‘period’ of 70 years.

Apparently the BEST team places more weight on the Atlantic Multidecadal Oscillation than most climate scientists. Most consider the [El Niño Southern Oscillation](http://www.azimuthproject.org/azimuth/show/ENSO) to be more important in explaining global temperature variations! I haven’t seen why the BEST team prefers to focus attention on the Atlantic Multidecadal Oscillation. I’d like to see some more graphs…

## Anasazi America (Part 1)

20 January, 2013

A few weeks ago I visited Canyon de Chelly, which is home to some amazing cliff dwellings. I took a bunch of photos, like this picture of the so-called ‘First Ruin’. You can see them and read about my adventures starting here:

• John Baez, Diary, 21 December 2012.

Here I’d like to talk about what happened to the civilization that built these cliff dwellings! It’s a fascinating tale full of mystery… and it’s full of lessons for the problems we face today, involving climate change, agriculture, energy production, and advances in technology.

First let me set the stage! Canyon de Chelly is in the Navajo Nation, a huge region with its own laws and government, not exactly part of the United States, located at the corners of Arizona, New Mexico, and Utah:

The hole in the middle is the Hopi Reservation. The Hopi are descended from the people who built the cliff dwellings in Canyon de Chelly. Those people are often called the Anasazi, but these days the favored term is ‘ancient Pueblo peoples’.

The Hopi speak a Uto-Aztecan language, and so presumably did the Anasazi. Uto-Aztecan speakers were spread out like this shortly before the Europeans invaded:

with a bunch more down in what’s now Mexico. The Navajo are part of a different group, the Na-Dené language group:

So, the Navajo aren’t a big part of the story in this fascinating book:

• David E. Stuart, Anasazi America, U. of New Mexico Press, Albuquerque, New Mexico, 2000.

Let me summarize this story here!

### After the ice

The last Ice Age, called the Wisconsin glaciation, began around 70,000 BC. The glaciers reached their maximum extent about 18,000 BC, with ice sheets down to what are now the Great Lakes. In places the ice was over 1.6 kilometers thick!

Then it started warming up. By 16,000 BC people started cultivating plants and herding animals. Around 12,000 BC, before the land bridge connecting Siberia and Alaska was submerged, people from the so-called Clovis culture came to the Americas.

It seems likely that other people got to America earlier, moving down the Pacific coast before the inland glaciers melted. But even if the Clovis culture didn’t get there first, their arrival was a big deal. They can be traced by their distinctive and elegant spear tips, called Clovis points:

After they arrived, the Clovis people broke into several local cultures, roughly around the time of the Younger Dryas cold spell beginning around 10,800 BC. By 10,000 BC, small bands of hunters roamed the Southwest, first hunting mammoths, huge bison, camels, horses and elk, and later—perhaps because they killed off the really big animals—the more familiar bison, deer, elk and antelopes we see today.

For about 5000 years the population of current-day New Mexico probably fluctuated between 2 and 6 thousand people—a density of just one person per 50 to 150 square kilometers! Changes in culture and climate were slow.

### The Altithermal

Around 5,000 BC, the climate near Canyon de Chelly began to warm up, dry out, and become more strongly seasonal. This epoch is called the ‘Altithermal’. The lush grasslands that once supported huge herds of bison began to disappear in New Mexico, and those bison moved north. By 4,000 BC, the area near Canyon de Chelly had become very hot, with summers often reaching 45 °C, and sometimes 57 °C at the ground’s surface.

The people in this area responded in an interesting way: by focusing much more on gathering, and less on hunting. We know this from their improved tools for processing plants, especially yucca roots. The yucca is now the state flower of New Mexico. Here’s a picture taken by Stan Shebs:

David Stuart writes:

At first this might seem an unlikely response to unremitting heat and aridity. One could argue that the deteriorating climate might first have forced people to reduce their numbers by restricting sex, marriage, and child-bearing so that survivors would have enough game. That might well have been the short-term solution [....] When once-plentiful game becomes scarce, hunter-gatherers typically become extremely conservative about sex and reproduction. [...] But by early Archaic times, the change in focus to plant resources—undoubtedly by necessity—had actually produced a marginally growing population in the San Juan Basin and its margins in spite of climatic adversity.

[....]

Ecologically, these Archaic hunters and gatherers had moved one entire link down the food chain, thereby eliminating the approximately 90-percent loss in food value that occurs when one feeds on an animal that is a plant-eater.

[....]

This is sound ecological behavior—they could not have found a better basic strategy even if they had the advantage of a contemporary university education. Do I attribute this to their genius? No. It is simply that those who stubbornly clung to the traditional big game hunting of their Paleo-Indian ancestors could not prosper, so they left fewer descendants. Those more willing to experiment, or more desperate, fared better, so their behavior eventually became traditional among their more numerous descendants.

### The San Jose Period

By 3,000 BC the Altithermal was ending and big game was returning to the Southwest, yet the people retained their new-found agricultural skills. They also developed a new kind of dart for hunting, the ‘San Jose point’. So, this epoch is called the ‘San Jose period’. Populations rose to perhaps 15 to 30 thousand people in New Mexico, a vast increase over the earlier level of 2–6 thousand. But still, that’s just one person per 10 or 20 square kilometers!

The population increased until around 2,000 BC. At this point population pressures became acute… but two lucky things happened. First, the weather got wetter. Second, corn was introduced from Mexico. The first varieties had very small cobs, but gradually they were improved.

The wet weather lasted until around 500 BC. And at just about this time, beans were introduced, also from Mexico.

Their addition was critical. Corn alone is a costly food to metabolize. Its proteins are incomplete and hard to synthesize. Beans contain large amounts of lysine, the amino acid missing from corn and squash. In reasonable balance, corn, beans and squash together provide complementary amino acids and form the basis of a nearly complete diet. This diet lacks only the salt, fat and mineral nutrients found in most meats to be healthy and complete.

By 500 BC, nearly all the elements for accelerating cultural and economic changes were finally in place—a fairly complete diet that could, if rainfall cooperated, largely replace the traditional foraging one; several additional, modestly larger-cobbed varieties of corn that not only prospered under varying growing conditions but also provided a bigger harvest; a population large enough to invest the labor necessary to plant and harvest; nearly 10 centuries of increasing familiarity with cultigens; and enhanced food-processing and storage techniques. Lacking were compelling reasons to transform an Archaic society accustomed to earning a living with approximately 500 hours of labor a year into one willing to invest the 1,000 to 2,000 hours common to contemporary hand-tool horticulturalists.

Nature then stepped in with one persuasive, though not compelling, reason for people to make the shift.

Namely, droughts! Precipitation became very erratic for about 500 years. People responded in various ways. Some went back to the old foraging techniques. Others improved their agricultural skills, developing better breeds of corn, and tricks for storing water. The latter are the ones whose populations grew.

This led to the Basketmaker culture, where people started living in dugout ‘pit houses’ in small villages. More precisely, the Late Basketmaker II Era lasted from about 50 AD to 500 AD. New technologies included the baskets that gave this culture its name:

Pottery entered the scene around 300 AD. Have you ever thought about how important this is? Before pots, people had to cook corn and beans by putting rocks in fires and then transferring them to holes containing water!

Now, porridge and stews could be put to boil in a pot set directly into a central fire pit. The amount of heat lost and fuel used in the old cooking process—an endless cycle of collecting, heating, transferring, removing and replacing hot stones just to boil a few quarts of water—had always been enormous. By comparison, cooking with pots became quick, easy, and far more efficient. In a world more densely populated, firewood had to be gathered from greater distances. Now, less of it was needed. And there was newer fuel to supplement it—dried corncobs.

Not all the changes were good. Most adult skeletons from this period show damage from long periods spent stooping—either using a stone hoe to tend garden plots, or grinding corn while kneeling. And as they ate more corn and beans and fewer other vegetables, mineral deficiencies became common. Extreme osteoporosis afflicted many of these people: we find skulls that are porous, and broken bones. It reminds me a little of the plague of obesity, with its many side effects, afflicting modern Americans as we move to a culture where most people work sitting down.

On the other hand, there was a massive growth in population. The number of pit-house villages grew nine-fold from 200 AD to 700 AD!

It must have been an exciting time. In only some 25 generations, these folks had transformed themselves from foragers and hunters with a small economic sideline in corn, beans and squash into semisedentary villagers who farmed and kept up their foraging to fill in the economic gaps.

But this was just the beginning. By 1020, the ancient Pueblo people would begin to build housing complexes that would remain the biggest in North America until the 1880s! This happened in Chaco Canyon, 125 kilometers east of Canyon de Chelly.

Next time I’ll tell you the story of how that happened, and how later, around 1200, these people left Chaco Canyon and started to build cliff dwellings.

For now, I’ll leave you with some pictures I took of the most famous cliff dwelling in Canyon de Chelly: the ‘White House Ruins’. Click to enlarge: