The Case for Optimism on Climate Change

7 March, 2016

 

The video here is quite gripping: you should watch it!

Despite the title, Gore starts with a long and terrifying account of what climate change is doing. So what’s his case for optimism? A lot of it concerns solar power, though he also mentions nuclear power:

So the answer to the first question, “Must we change?” is yes, we have to change. Second question, “Can we change?” This is the exciting news! The best projections in the world 16 years ago were that by 2010, the world would be able to install 30 gigawatts of wind capacity. We beat that mark by 14 and a half times over. We see an exponential curve for wind installations now. We see the cost coming down dramatically. Some countries—take Germany, an industrial powerhouse with a climate not that different from Vancouver’s, by the way—one day last December, got 81 percent of all its energy from renewable resources, mainly solar and wind. A lot of countries are getting more than half on an average basis.

More good news: energy storage, from batteries particularly, is now beginning to take off because the cost has been coming down very dramatically to solve the intermittency problem. With solar, the news is even more exciting! The best projections 14 years ago were that we would install one gigawatt per year by 2010. When 2010 came around, we beat that mark by 17 times over. Last year, we beat it by 58 times over. This year, we’re on track to beat it 68 times over.

We’re going to win this. We are going to prevail. The exponential curve on solar is even steeper and more dramatic. When I came to this stage 10 years ago, this is where it was. We have seen a revolutionary breakthrough in the emergence of these exponential curves.

And the cost has come down 10 percent per year for 30 years. And it’s continuing to come down.

Now, the business community has certainly noticed this, because it’s crossing the grid parity point. Cheaper solar penetration rates are beginning to rise. Grid parity is understood as that line, that threshold, below which renewable electricity is cheaper than electricity from burning fossil fuels. That threshold is a little bit like the difference between 32 degrees Fahrenheit and 33 degrees Fahrenheit, or zero and one Celsius. It’s a difference of more than one degree, it’s the difference between ice and water. And it’s the difference between markets that are frozen up, and liquid flows of capital into new opportunities for investment. This is the biggest new business opportunity in the history of the world, and two-thirds of it is in the private sector. We are seeing an explosion of new investment. Starting in 2010, investments globally in renewable electricity generation surpassed fossils. The gap has been growing ever since. The projections for the future are even more dramatic, even though fossil energy is now still subsidized at a rate 40 times larger than renewables. And by the way, if you add the projections for nuclear on here, particularly if you assume that the work many are doing to try to break through to safer and more acceptable, more affordable forms of nuclear, this could change even more dramatically.

So is there any precedent for such a rapid adoption of a new technology? Well, there are many, but let’s look at cell phones. In 1980, AT&T, then Ma Bell, commissioned McKinsey to do a global market survey of those clunky new mobile phones that appeared then. “How many can we sell by the year 2000?” they asked. McKinsey came back and said, “900,000.” And sure enough, when the year 2000 arrived, they did sell 900,000—in the first three days. And for the balance of the year, they sold 120 times more. And now there are more cell connections than there are people in the world.

So, why were they not only wrong, but way wrong? I’ve asked that question myself, “Why?”

And I think the answer is in three parts. First, the cost came down much faster than anybody expected, even as the quality went up. And low-income countries, places that did not have a landline grid—they leap-frogged to the new technology. The big expansion has been in the developing countries. So what about the electricity grids in the developing world? Well, not so hot. And in many areas, they don’t exist. There are more people without any electricity at all in India than the entire population of the United States of America. So now we’re getting this: solar panels on grass huts and new business models that make it affordable. Muhammad Yunus financed this one in Bangladesh with micro-credit. This is a village market. Bangladesh is now the fastest-deploying country in the world: two systems per minute on average, night and day. And we have all we need: enough energy from the Sun comes to the Earth every hour to supply the full world’s energy needs for an entire year. It’s actually a little bit less than an hour. So the answer to the second question, “Can we change?” is clearly “Yes.” And it’s an ever-firmer “yes.”
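
Gore’s cost figure is worth unpacking: a 10 percent annual decline sustained for 30 years compounds to

0.9^{30} \approx 0.04

so, taken at face value, the claim is that solar electricity now costs about 4% of what it did three decades ago, roughly a 25-fold drop.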

Some people are much less sanguine about solar power, and they would point out all the things that Gore doesn’t mention here. For example, while Gore claims that “one day last December” Germany “got 81 percent of all its energy from renewable resources, mainly solar and wind”, the picture in general is not so good:



This is from 2014, the most recent I could easily find. At least back then, renewables were only slightly ahead of ‘brown coal’, or lignite—the dirtiest kind of coal. Furthermore, among renewables, burning ‘biomass’ produced about as much power as wind—and more than solar. And what’s ‘biomass’, exactly? A lot of it is wood pellets! Some is even imported:

• John Baez, The EU’s biggest renewable energy source, 18 September 2013.

So, for every piece of good news one can find a piece of bad news. But the drop in price of solar power is impressive, and photovoltaic solar power is starting to hit ‘grid parity’: the point at which it’s as cheap as the usual cost of electricity off the grid:


According to this map based on reports put out by Deutsche Bank (here and here), the green countries reached grid parity before 2014. The blue countries reached it after 2014. The olive countries have reached it only for peak grid prices. The orange regions are US states that were ‘poised to reach grid parity’ in 2015.
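
To make the notion of grid parity a bit more concrete, here is a minimal levelized-cost sketch in Python. Everything in it is a hypothetical placeholder rather than a number from the Deutsche Bank reports; the point is just to show what quantity gets compared with the grid price.

def lcoe(capex_per_kw, om_per_kw_yr, capacity_factor, discount_rate, lifetime_yrs):
    """Rough levelized cost of electricity in $/kWh.

    capex_per_kw: overnight capital cost ($ per kW of capacity)
    om_per_kw_yr: fixed operation and maintenance ($ per kW per year)
    capacity_factor: average output as a fraction of nameplate capacity
    discount_rate, lifetime_yrs: used to annualize the capital cost
    """
    # capital recovery factor: annual payment per $1 of upfront capital
    r, n = discount_rate, lifetime_yrs
    crf = r * (1 + r)**n / ((1 + r)**n - 1)
    annual_cost = capex_per_kw * crf + om_per_kw_yr   # $ per kW per year
    annual_kwh = capacity_factor * 8760               # kWh per kW per year
    return annual_cost / annual_kwh

# Hypothetical rooftop-solar inputs, for illustration only:
solar = lcoe(capex_per_kw=2000, om_per_kw_yr=20,
             capacity_factor=0.18, discount_rate=0.06, lifetime_yrs=25)
print(f"illustrative solar LCOE: ${solar:.3f}/kWh")

With these made-up inputs the script prints an LCOE of roughly $0.11/kWh; grid parity in the retail sense means that number falling below what local customers pay for electricity off the grid.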

But of course there are other issues: the intermittency of solar power, the difficulties of storing energy, etc. How optimistic should we be?


The Paris Agreement

23 December, 2015

The world has come together and agreed to do something significant about climate change:

• UN Framework Convention on Climate Change, Adoption of the Paris Agreement, 12 December 2015.

Not as much as I’d like: it’s estimated that if we do just what’s been agreed to so far, we can expect 2.7 °C of warming, and more pessimistic estimates range up to 3.5 °C. But still, something significant. Furthermore, the Paris Agreement set up a system that encourages nations to ‘ratchet up’ their actions over time. Even better, it helped strengthen a kind of worldwide social network of organizations devoted to tackling climate change!

This is a nice article summarizing what the Paris Agreement actually means:

• William Sweet, A surprising success at Paris, Bulletin of the Atomic Scientists, 21 December 2015.

Since it would take quite a bit of work to analyze this agreement and its implications, and I’m just starting to do this work, I’ll just quote a large chunk of this article.

Hollande, in his welcoming remarks, asked what would enable us to say the Paris agreement is good, even “great.” First, regular review and assessment of commitments, to get the world on a credible path to keep global warming in the range of 1.5–2.0 degrees Celsius. Second, solidarity of response, so that no state does nothing and yet none is “left alone.” Third, evidence of a comprehensive change in human consciousness, allowing eventually for introduction of much stronger measures, such as a global carbon tax.

UN Secretary-General Ban Ki-moon articulated similar but somewhat more detailed criteria: The agreement must be lasting, dynamic, respectful of the balance between industrial and developing countries, and enforceable, with critical reviews of pledges even before 2020. Ban noted that 180 countries had now submitted climate action pledges, an unprecedented achievement, but stressed that those pledges need to be progressively strengthened over time.

Remarkably, not only the major conveners of the conference were speaking in essentially the same terms, but civil society as well. Starting with its first press conference on opening day and at every subsequent one, representatives of the Climate Action Network, representing 900 nongovernment organizations, confined themselves to making detailed and constructive suggestions about how key provisions of the agreement might be strengthened. Though CAN could not possibly speak for every single one of its member organizations, the mainstream within the network clearly saw it was the group’s goal to obtain the best possible agreement, not to advocate for a radically different kind of agreement. The mainstream would not be taking to the streets.

This was the main thing that made Paris different, not just from Copenhagen, but from every previous climate meeting: Before, there always had been deep philosophical differences between the United States and Europe, between the advanced industrial countries and the developing countries, and between the official diplomats and civil society. At Paris, it was immediately obvious that everybody, NGOs included, was reading from the same book. So it was obvious from day one that an important agreement would be reached. National delegations would stake out tough positions, and there would be some hard bargaining. But at every briefing and in every interview, no matter how emphatic the stand, it was made clear that compromises would be made and nothing would be allowed to stand in the way of agreement being reached.

The Paris outcome

The two-part agreement formally adopted in Paris on December 12 represents the culmination of a 25-year process that began with the negotiations in 1990–91 that led to the adoption in 1992 of the Rio Framework Convention. That treaty, which would be ratified by all the world’s nations, called upon every country to take action to prevent “dangerous climate change” on the basis of common but differentiated responsibilities. Having enunciated those principles, nations were unable to agree in the next decades about just what they meant in practice. An attempt at Kyoto in 1997 foundered on the opposition of the United States to an agreement that required no action on the part of the major emitters among the developing countries. A second attempt at agreement in Copenhagen also failed.

Only with the Paris accords, for the first time, have all the world’s nations agreed on a common approach that rebalances and redefines respective responsibilities, while further specifying what exactly is meant by dangerous climate change. Paragraph 17 of the “Decision” (or preamble) notes that national pledges will have to be strengthened in the next decades to keep global warming below 2 degrees Celsius or close to 1.5 degrees, while Article 2 of the more legally binding “Agreement” says warming should be held “well below” 2 degrees and if possible limited to 1.5 degrees. Article 4 of the Agreement calls upon those countries whose emissions are still rising to have them peak “as soon as possible,” so “as to achieve a balance between anthropogenic emissions by sources and removals by sinks of greenhouse gases in the second half of this century”—a formulation that replaced a reference in Article 3 of the next-to-last draft calling for “carbon neutrality” by the second half of the century.

“The wheel of climate action turns slowly, but in Paris it has turned. This deal puts the fossil fuel industry on the wrong side of history,” commented Kumi Naidoo, executive director of Greenpeace International.

The Climate Action Network, in which Greenpeace is a leading member, along with organizations like the Union of Concerned Scientists, Friends of the Earth, the World Wildlife Fund, and Oxfam, would have preferred language that flatly adopted the 1.5 degree goal and that called for complete “decarbonization”— an end to all reliance on fossil fuels. But to the extent the network can be said to have common positions, it would be able to live with the Paris formulations, to judge from many statements made by leading members in CAN’s twice- or thrice-daily press briefings, and statements made by network leaders embracing the agreement.

Speaking for scientists, at an event anticipating the final accords, H. J. Schellnhuber, leader of the Potsdam Institute for Climate Impact Research, said with a shrug that the formulation calling for net carbon neutrality by mid-century would be acceptable. His opinion carried more than the usual weight because he is sometimes credited in the German press as the father of the 2-degree standard. (Schellnhuber told me that a Potsdam team had indeed developed the idea of limiting global warming to 2 degrees in total and 0.2 degrees per decade; and that while others were working along similar lines, he personally drew the Potsdam work to the attention of future German chancellor Angela Merkel in 1994, when she was serving as environment minister.)

As for the tighter 1.5-degree standard, this is a complicated issue that the Paris accords fudge a bit. The difference between impacts expected from a 1.5-degree world and a 2-degree world is not trivial. The Greenland ice sheet, for example, is expected to melt in its entirety in the 2-degree scenario, while in a 1.5-degree world the odds of a complete melt are only 70 percent, points out climatologist Niklas Höhne, of the Cologne-based NewClimate Institute, with a distinct trace of irony. But at the same time the scientific consensus is that it would be virtually impossible to meet the 1.5-degree goal because on top of the 0.8–0.9 degrees of warming that already has occurred, another half-degree is already in the pipeline, “hidden away in the oceans,” as Schellnhuber put it. At best we might be able to work our way back to 1.5 degrees in the 2030s or 2040s, after first overshooting it. Thus, though organizations like 350.org and scientists like James Hansen continue to insist that 1.5 degrees should be our objective, pure and simple, the scientific community and the Climate Action mainstream are reasonably comfortable with the Paris accords’ “close as possible” language.

‘Decision’ and ‘Agreement’

The main reason why the Paris accords consist of two parts, a long preamble called the “Decision,” and a legally binding part called the “Agreement,” is to satisfy the Obama administration’s concerns about having to take anything really sticky to Congress. The general idea, which was developed by the administration with a lot of expert legal advice from organizations like the Virginia-based Center for Climate and Energy Solutions, was to put really substantive matters, like how much the United States will actually do in the next decades to cut its greenhouse gas emissions, into the preamble, and to confine the treaty-like Agreement as much as possible to procedural issues like when in the future countries will talk about what.

Nevertheless, the distinction between the Decision and the Agreement is far from clear-cut. All the major issues that had to be balanced in the negotiations—not just the 1.5–2.0 degree target and the decarbonization language, but financial aid, adaptation and resilience, differentiation between rich and poor countries, reporting requirements, and review—are addressed in both parts. There is nothing unusual as such about an international agreement having two parts, a preamble and main text. What is a little odd about Paris, however, is that the preamble, at 19 pages, is considerably longer than the 11-page Agreement, as Chee Yoke Ling of the Third World Network, who is based in Beijing, pointed out. The length of the Decision, she explained, reflects not only US concerns about obtaining Senate ratification. It also arose from anxieties shared by developing countries about agreeing to legally binding provisions that might be hard to implement and politically dangerous.

In what are arguably the Paris accords’ most important provisions, the national pledges are to be collectively reassessed beginning in 2018–19, and then every five years after 2020. The general idea is to systematically exert peer group pressure on regularly scheduled occasions, so that everybody will ratchet up carbon-cutting ambitions. Those key requirements, which are very close to what CAN advocated and what diplomatic members of the so-called “high ambition” group wanted, are in the preamble, not the Agreement.

But an almost equally important provision, found in the Agreement, called for a global “stocktake” to be conducted in 2023, covering all aspects of the Agreement’s implementation, including its very contested provisions about financial aid and “loss and damage”—the question of support and compensation for countries and regions that may face extinction as a result of global warming. Not only carbon cutting efforts but obligations of the rich countries to the poor will be subject to the world’s scrutiny in 2023.

Rich and poor countries

On the critical issue of financial aid for developing countries struggling to reduce emissions and adapt to climate change, Paris affirms the Copenhagen promise of $100 billion by 2020 in the Decision (Paragraph 115) but not in the more binding Agreement—to the displeasure of the developing countries, no doubt. In the three previous draft versions of the accords, the $100 billion pledge was contained in the Agreement as well.

Somewhat similarly, the loss-and-damage language contained in the preamble does not include any reference to liability on the part of the advanced industrial countries that are primarily responsible for the climate change that has occurred up until now. This was a disappointment to representatives of the nations and regions most severely and imminently threatened by global warming, but any mention of liability would have been an absolute show-stopper for the US delegation. Still, the fact that loss and damage is broached at all represents a victory for the developing world and its advocates, who have been complaining for decades about the complete absence of the subject from the Rio convention and Kyoto Protocol.

The so-called Group of 77, which actually represents 134 developing countries plus China, appears to have played a shrewd and tough game here at Le Bourget. Its very able and engaging chairperson, South Africa’s Nozipho Mxakato-Diseko, sent a sharp shot across the prow of the rich countries on the third day of the conference, with a 17-point memorandum she e-mailed enumerating her group’s complaints.

“The G77 and China stresses that nothing under the [1992 Framework Convention] can be achieved without the provision of means of implementation to enable developing countries to play their part to address climate change,” she said, alluding to the fact that if developing countries are to do more to cut emissions growth, they need help. “However, clarity on the complete picture of the financial arrangements for the enhanced implementation of the Convention keeps on eluding us. … We hope that by elevating the importance of the finance discussions under the different bodies, we can ensure that the outcome meets Parties’ expectations and delivers what is required.”

Though the developing countries wanted stronger and more specific financial commitments and “loss-and-damage” provisions that would have included legal liability, there is evidence throughout the Paris Decision and Agreement of the industrial countries’ giving considerable ground to them. During the formal opening of the conference, President Obama met with leaders of AOSIS—the Alliance of Small Island States—and told them he understood their concerns as he, too, is “an island boy.” (Evidently that went over well.) The reference to the $100 billion floor for financial aid surely was removed from the Agreement partly because the White House at present cannot get Congress to appropriate money for any climate-related aid. But at least the commitment remained in the preamble, which was not a foregone conclusion.

Reporting and review

The one area in which the developing countries gave a lot of ground in Paris was in measuring, reporting, and verification. Under the terms of the Rio convention and Kyoto Protocol, only the advanced industrial countries—the so-called Annex 1 countries—were required to report their greenhouse gas emissions to the UN’s climate secretariat in Bonn. Extensive provisions in the Paris agreement call upon all countries to now report emissions, according to standardized procedures that are to be developed.

The climate pledges that almost all countries submitted to the UN in preparation for Paris, known as “Intended Nationally Determined Contributions,” provided a preview of what this will mean. The previous UN climate gathering, last year in Lima, had called for all the INDCs to be submitted by the summer and for the climate secretariat to do a net assessment of them by October 31, which seemed ridiculously late in the game. But when the results of that assessment were released, the secretariat’s head, Christiana Figueres, cited independent estimates that together the national declarations might put the world on a path to 2.7-degree warming. That result was a great deal better than most specialists following the procedure would have expected, this writer included. Though other estimates suggested the path might be more like 3.5 degrees, even this was a very great deal better than the business-as-usual path, which would be at least 4–5 degrees and probably higher than that by century’s end.

The formalized universal reporting requirements put into place by the Paris accords will lend a lot of rigor to the process of preparing, critiquing, and revising INDCs in the future. In effect the secretariat will be keeping score for the whole world, not just the Annex 1 countries. That kind of score-keeping can have a lot of bite, as we have witnessed in the secretariat’s assessment of Kyoto compliance.

Under the Kyoto Protocol, which the US government not only agreed to but virtually wrote, the United States was required to cut its emissions 7 percent by 2008–12, and Europe by 8 percent. From 1990, the baseline year established in the Rio treaty and its 1997 Kyoto Protocol, to 2012 (the final year in which initial Kyoto commitments applied), emissions of the 15 European countries that were party to the treaty decreased 17 percent—more than double what the protocol required of them. Emissions of the 28 countries that are now members of the EU decreased 21 percent. British emissions were down 27 percent in 2012 from 1990, and Germany’s were down 23 percent.

In the United States, which repudiated the protocol, emissions continued to rise until 2005, when they began to decrease, initially for reasons that had little or nothing to do with policy. That year, US emissions were about 15 percent above their 1990 level, while emissions of the 28 EU countries were down more than 9 percent and of the 15 European party countries more than 2 percent.


Why Google Gave Up

5 January, 2015

I was disappointed when Google gave up. In 2007, the company announced a bold initiative to fight global warming:

Google’s Goal: Renewable Energy Cheaper than Coal

Creates renewable energy R&D group and supports breakthrough technologies

Mountain View, Calif. (November 27, 2007) – Google (NASDAQ: GOOG) today announced a new strategic initiative to develop electricity from renewable energy sources that will be cheaper than electricity produced from coal. The newly created initiative, known as RE<C, will focus initially on advanced solar thermal power, wind power technologies, enhanced geothermal systems and other potential breakthrough technologies. RE<C is hiring engineers and energy experts to lead its research and development work, which will begin with a significant effort on solar thermal technology, and will also investigate enhanced geothermal systems and other areas. In 2008, Google expects to spend tens of millions on research and development and related investments in renewable energy. As part of its capital planning process, the company also anticipates investing hundreds of millions of dollars in breakthrough renewable energy projects which generate positive returns.

But in 2011, Google shut down the program. I never heard why. Recently two engineers involved in the project have given a good explanation:

• Ross Koningstein and David Fork, What it would really take to reverse climate change, 18 November 2014.

Please read it!

But the short version is this. They couldn’t find a way to accomplish their goal: producing a gigawatt of renewable power more cheaply than a coal-fired plant — and in years, not decades.

And since then, they’ve been reflecting on their failure and they’ve realized something even more sobering. Even if they’d been able to realize their best-case scenario — a 55% carbon emissions cut by 2050 — it would not bring atmospheric CO2 back below 350 ppm during this century.

This is not surprising to me.

What would we need to accomplish this? They say two things. First, a cheap dispatchable, distributed power source:

Consider an average U.S. coal or natural gas plant that has been in service for decades; its cost of electricity generation is about 4 to 6 U.S. cents per kilowatt-hour. Now imagine what it would take for the utility company that owns that plant to decide to shutter it and build a replacement plant using a zero-carbon energy source. The owner would have to factor in the capital investment for construction and continued costs of operation and maintenance—and still make a profit while generating electricity for less than $0.04/kWh to $0.06/kWh.

That’s a tough target to meet. But that’s not the whole story. Although the electricity from a giant coal plant is physically indistinguishable from the electricity from a rooftop solar panel, the value of generated electricity varies. In the marketplace, utility companies pay different prices for electricity, depending on how easily it can be supplied to reliably meet local demand.

“Dispatchable” power, which can be ramped up and down quickly, fetches the highest market price. Distributed power, generated close to the electricity meter, can also be worth more, as it avoids the costs and losses associated with transmission and distribution. Residential customers in the contiguous United States pay from $0.09/kWh to $0.20/kWh, a significant portion of which pays for transmission and distribution costs. And here we see an opportunity for change. A distributed, dispatchable power source could prompt a switchover if it could undercut those end-user prices, selling electricity for less than $0.09/kWh to $0.20/kWh in local marketplaces. At such prices, the zero-carbon system would simply be the thrifty choice.
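
The price targets Koningstein and Fork quote can be turned into a rough constraint on capital cost. Here is a hypothetical back-of-envelope calculation, not one from their article: suppose a zero-carbon plant ran at a 90% capacity factor, paid one cent per kilowatt-hour in operating costs, faced a 10% annual capital charge, and had to sell power at the $0.05/kWh midpoint of the wholesale range above. Then the most it could afford to cost up front would be roughly

0.9 \times 8760 \times (0.05 - 0.01) / 0.10 \approx 3150

dollars per kilowatt of capacity. Anything much more expensive than that cannot undercut the incumbent plant at those prices, which gives some feel for why the RE<C team found the target so hard to hit.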

But “dispatchable”, they say, means “not solar”.

Second, a lot of carbon sequestration:

While this energy revolution is taking place, another field needs to progress as well. As Hansen has shown, if all power plants and industrial facilities switch over to zero-carbon energy sources right now, we’ll still be left with a ruinous amount of CO2 in the atmosphere. It would take centuries for atmospheric levels to return to normal, which means centuries of warming and instability. To bring levels down below the safety threshold, Hansen’s models show that we must not only cease emitting CO2 as soon as possible but also actively remove the gas from the air and store the carbon in a stable form. Hansen suggests reforestation as a carbon sink. We’re all for more trees, and we also exhort scientists and engineers to seek disruptive technologies in carbon storage.

How to achieve these two goals? They say government and energy businesses should spend 10% of employee time on “strange new ideas that have the potential to be truly disruptive”.


Wind Power and the Smart Grid

18 June, 2014



Electric power companies complain about wind power because it’s intermittent: if suddenly the wind stops, they have to bring in other sources of power.

This is no big deal if we only use a little wind. Across the US, wind now supplies 4% of electric power; even in Germany it’s just 8%. The problem starts if we use a lot of wind. If we’re not careful, we’ll need big fossil-fuel-powered electric plants when the wind stops. And these need to be turned on, ready to pick up the slack at a moment’s notice!

So, a few years ago Xcel Energy, which supplies much of Colorado’s power, ran ads opposing a proposal that it use renewable sources for 10% of its power.

But now things have changed. Now Xcel gets about 15% of their power from wind, on average. And sometimes this spikes to much more!

What made the difference?

Every few seconds, hundreds of turbines measure the wind speed. Every 5 minutes, they send this data to high-performance computers 100 miles away at the National Center for Atmospheric Research in Boulder. NCAR crunches these numbers along with data from weather satellites, weather stations, and other wind farms – and creates highly accurate wind power forecasts.

With better prediction, Xcel can do a better job of shutting down idling backup plants on days when they’re not needed. Last year was a breakthrough year – better forecasts saved Xcel nearly as much money as they had in the three previous years combined.

It’s all part of the emerging smart grid—an intelligent network that someday will include appliances and electric cars. With a good smart grid, we could set our washing machine to run when power is cheap. Maybe electric cars could store solar power in the day, use it to power neighborhoods when electricity demand peaks in the evening – then recharge their batteries using wind power in the early morning hours. And so on.

References

I would love it if the Network Theory project could ever grow to the point of helping design the smart grid. So far we are doing much more ‘foundational’ work on control theory, along with a more applied project on predicting El Niños. I’ll talk about both of these soon! But I have big hopes and dreams, so I want to keep learning more about power grids and the like.

Here are two nice references:

• Kevin Bullis, Smart wind and solar power, from 10 breakthrough technologies, Technology Review, 23 April 2014.

• Keith Parks, Yih-Huei Wan, Gerry Wiener and Yubao Liu, Wind energy forecasting: a collaboration of the National Center for Atmospheric Research (NCAR) and Xcel Energy.

The first is fun and easy to read. The second has more technical details. It describes the software used (the picture on top of this article shows a bit of this), and also some of the underlying math and physics. Let me quote a bit:

High-resolution Mesoscale Ensemble Prediction Model (EPM)

It is known that atmospheric processes are chaotic in nature. This implies that even small errors in the model initial conditions combined with the imperfections inherent in the NWP model formulations, such as truncation errors and approximations in model dynamics and physics, can lead to a wind forecast with large errors for certain weather regimes. Thus, probabilistic wind prediction approaches are necessary for guiding wind power applications. Ensemble prediction is at present a practical approach for producing such probabilistic predictions. An innovative mesoscale Ensemble Real-Time Four Dimensional Data Assimilation (E-RTFDDA) and forecasting system that was developed at NCAR was used as the basis for incorporating this ensemble prediction capability into the Xcel forecasting system.

Ensemble prediction means that instead of a single weather forecast, we generate a probability distribution on the set of weather forecasts. The paper has references explaining this in more detail.
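
Here’s a toy illustration of that idea, with no connection to NCAR’s actual E-RTFDDA code: take a standard chaotic system, perturb its initial conditions slightly, run the model once per perturbation, and read off a distribution of outcomes rather than a single number.

import numpy as np

def lorenz_step(state, dt=0.01, sigma=10.0, rho=28.0, beta=8.0/3.0):
    # one explicit-Euler step of the Lorenz-63 system, a standard toy chaotic model
    x, y, z = state
    dx = sigma * (y - x)
    dy = x * (rho - z) - y
    dz = x * y - beta * z
    return state + dt * np.array([dx, dy, dz])

def forecast(state, steps=1500):
    for _ in range(steps):
        state = lorenz_step(state)
    return state

rng = np.random.default_rng(0)
base = np.array([1.0, 1.0, 1.0])

# Ensemble: many runs from slightly perturbed initial conditions
ensemble = [forecast(base + 1e-3 * rng.standard_normal(3)) for _ in range(100)]
xs = np.array([s[0] for s in ensemble])

print("single deterministic forecast of x:", forecast(base)[0])
print("ensemble mean and spread of x: %.2f +/- %.2f" % (xs.mean(), xs.std()))

Because the system is chaotic, the ensemble members spread out, and the spread itself is useful information: it tells you how much to trust the forecast, which is exactly what a utility scheduling backup plants wants to know.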

We had a nice discussion of wind power and the smart grid over on G+. Among other things, John Despujols mentioned the role of ‘smart inverters’ in enhancing grid stability:

• Smart solar inverters smooth out voltage fluctuations for grid stability, DigiKey article library.

A solar inverter converts the variable direct current output of a photovoltaic solar panel into alternating current usable by the electric grid. There’s a lot of math involved here—click the link for a Wikipedia summary. But solar inverters are getting smarter.

Wild fluctuations

While the solar inverter has long been the essential link between the photovoltaic panel and the electricity distribution network and converting DC to AC, its role is expanding due to the massive growth in solar energy generation. Utility companies and grid operators have become increasingly concerned about managing what can potentially be wildly fluctuating levels of energy produced by the huge (and still growing) number of grid-connected solar systems, whether they are rooftop systems or utility-scale solar farms. Intermittent production due to cloud cover or temporary faults has the potential to destabilize the grid. In addition, grid operators are struggling to plan ahead due to lack of accurate data on production from these systems as well as on true energy consumption.

In large-scale facilities, virtually all output is fed to the national grid or micro-grid, and is typically well monitored. At the rooftop level, although individually small, collectively the amount of energy produced has a significant potential. California estimated it has more than 150,000 residential rooftop grid-connected solar systems with a potential to generate 2.7 MW.

However, while in some systems all the solar energy generated is fed to the grid and not accessible to the producer, others allow energy generated to be used immediately by the producer, with only the excess fed to the grid. In the latter case, smart meters may only measure the net output for billing purposes. In many cases, information on production and consumption, supplied by smart meters to utility companies, may not be available to the grid operators.

Getting smarter

The solution according to industry experts is the smart inverter. Every inverter, whether at panel level or megawatt-scale, has a role to play in grid stability. Traditional inverters have, for safety reasons, become controllable, so that they can be disconnected from the grid at any sign of grid instability. It has been reported that sudden, widespread disconnects can exacerbate grid instability rather than help settle it.

Smart inverters, however, provide a greater degree of control and have been designed to help maintain grid stability. One trend in this area is to use synchrophasor measurements to detect and identify a grid instability event, rather than conventional ‘perturb-and-observe’ methods. The aim is to distinguish between a true island condition and a voltage or frequency disturbance which may benefit from additional power generation by the inverter rather than a disconnect.

Smart inverters can change the power factor. They can input or receive reactive power to manage voltage and power fluctuations, driving voltage up or down depending on immediate requirements. Adaptive volts-amps reactive (VAR) compensation techniques could enable ‘self-healing’ on the grid.

Two-way communications between smart inverter and smart grid not only allow fundamental data on production to be transmitted to the grid operator on a timely basis, but upstream data on voltage and current can help the smart inverter adjust its operation to improve power quality, regulate voltage, and improve grid stability without compromising safety. There are considerable challenges still to overcome in terms of agreeing and evolving national and international technical standards, but this topic is not covered here.

The benefits of the smart inverter over traditional devices have been recognized in Germany, Europe’s largest solar energy producer, where an initiative is underway to convert all solar energy producers’ inverters to smart inverters. Although the cost of smart inverters is slightly higher than traditional systems, the advantages gained in grid balancing and accurate data for planning purposes are considered worthwhile. Key features of smart inverters required by German national standards include power ramping and volt/VAR control, which directly influence improved grid stability.
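
To give a flavour of the volt/VAR control mentioned in the quote, here is a minimal sketch of the sort of droop curve a smart inverter might follow. The breakpoints and limits are invented for illustration, not taken from the German standards referred to above: the idea is simply that the inverter injects reactive power when the local voltage sags, absorbs it when the voltage rises, and does nothing inside a small deadband around nominal.

def reactive_power_command(v_pu, q_max=0.4, deadband=0.01, slope_band=0.04):
    """Piecewise-linear volt/VAR droop (illustrative only).

    v_pu: measured voltage in per-unit (1.0 = nominal)
    q_max: reactive power limit as a fraction of inverter rating
    Returns commanded reactive power: positive means injecting (boosts voltage),
    negative means absorbing (pulls voltage down).
    """
    dv = v_pu - 1.0
    if abs(dv) <= deadband:
        return 0.0                       # near nominal: do nothing
    # linear ramp from the edge of the deadband out to full output
    ramp = min(1.0, (abs(dv) - deadband) / slope_band)
    return -ramp * q_max if dv > 0 else ramp * q_max

for v in (0.94, 0.98, 1.00, 1.02, 1.06):
    print(v, round(reactive_power_command(v), 3))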


Civilizational Collapse (Part 1)

25 March, 2014

This story caught my attention, since a lot of people are passing it around:

• Nafeez Ahmed, NASA-funded study: industrial civilisation headed for ‘irreversible collapse’?, Earth Insight, blog on The Guardian, 14 March 2014.

Sounds dramatic! But notice the question mark in the title. The article says that “global industrial civilisation could collapse in coming decades due to unsustainable resource exploitation and increasingly unequal wealth distribution.” But with the word “could” in there, who could possibly argue? It’s certainly possible. What’s the actual news here?

It’s about a new paper that’s been accepted by the Elsevier journal Ecological Economics. Since this paper has not been published, and I don’t even know the title, it’s hard to get details yet. According to Nafeez Ahmed,

The research project is based on a new cross-disciplinary ‘Human And Nature DYnamical’ (HANDY) model, led by applied mathematician Safa Motesharrei of the US National Science Foundation-supported National Socio-Environmental Synthesis Center, in association with a team of natural and social scientists.

So I went to Safa Motesharrei‘s webpage. It says he’s a grad student getting his PhD at the Socio-Environmental Synthesis Center, working with a team of people including:

Eugenia Kalnay (atmospheric science)
James Yorke (mathematics)
Matthias Ruth (public policy)
Victor Yakovenko (econophysics)
Klaus Hubacek (geography)
Ning Zeng (meteorology)
Fernando Miralles-Wilhelm (hydrology).

I was able to find this paper draft:

• Safa Motesharrei, Jorge Rivas and Eugenia Kalnay, A minimal model for human and nature interaction, 13 November 2012.

I’m not sure how this is related to the paper discussed by Nafeez Ahmed, but it includes some (though not all) of the passages quoted by him, and it describes the HANDY model. It’s an extremely simple model, so I’ll explain it to you.

But first let me quote a bit more of the Guardian article, so you can see why it’s attracting attention:

By investigating the human-nature dynamics of these past cases of collapse, the project identifies the most salient interrelated factors which explain civilisational decline, and which may help determine the risk of collapse today: namely, Population, Climate, Water, Agriculture, and Energy.

These factors can lead to collapse when they converge to generate two crucial social features: “the stretching of resources due to the strain placed on the ecological carrying capacity”; and “the economic stratification of society into Elites [rich] and Masses (or “Commoners”) [poor].” These social phenomena have played “a central role in the character or in the process of the collapse,” in all such cases over “the last five thousand years.”

Currently, high levels of economic stratification are linked directly to overconsumption of resources, with “Elites” based largely in industrialised countries responsible for both:

“… accumulated surplus is not evenly distributed throughout society, but rather has been controlled by an elite. The mass of the population, while producing the wealth, is only allocated a small portion of it by elites, usually at or just above subsistence levels.”

The study challenges those who argue that technology will resolve these challenges by increasing efficiency:

“Technological change can raise the efficiency of resource use, but it also tends to raise both per capita resource consumption and the scale of resource extraction, so that, absent policy effects, the increases in consumption often compensate for the increased efficiency of resource use.”

Productivity increases in agriculture and industry over the last two centuries have come from “increased (rather than decreased) resource throughput,” despite dramatic efficiency gains over the same period.

Modelling a range of different scenarios, Motesharri and his colleagues conclude that under conditions “closely reflecting the reality of the world today… we find that collapse is difficult to avoid.” In the first of these scenarios, civilisation:

“…. appears to be on a sustainable path for quite a long time, but even using an optimal depletion rate and starting with a very small number of Elites, the Elites eventually consume too much, resulting in a famine among Commoners that eventually causes the collapse of society. It is important to note that this Type-L collapse is due to an inequality-induced famine that causes a loss of workers, rather than a collapse of Nature.”

Another scenario focuses on the role of continued resource exploitation, finding that “with a larger depletion rate, the decline of the Commoners occurs faster, while the Elites are still thriving, but eventually the Commoners collapse completely, followed by the Elites.”

In both scenarios, Elite wealth monopolies mean that they are buffered from the most “detrimental effects of the environmental collapse until much later than the Commoners”, allowing them to “continue ‘business as usual’ despite the impending catastrophe.” The same mechanism, they argue, could explain how “historical collapses were allowed to occur by elites who appear to be oblivious to the catastrophic trajectory (most clearly apparent in the Roman and Mayan cases).”

Applying this lesson to our contemporary predicament, the study warns that:

“While some members of society might raise the alarm that the system is moving towards an impending collapse and therefore advocate structural changes to society in order to avoid it, Elites and their supporters, who opposed making these changes, could point to the long sustainable trajectory ‘so far’ in support of doing nothing.”

However, the scientists point out that the worst-case scenarios are by no means inevitable, and suggest that appropriate policy and structural changes could avoid collapse, if not pave the way toward a more stable civilisation.

The two key solutions are to reduce economic inequality so as to ensure fairer distribution of resources, and to dramatically reduce resource consumption by relying on less intensive renewable resources and reducing population growth:

“Collapse can be avoided and population can reach equilibrium if the per capita rate of depletion of nature is reduced to a sustainable level, and if resources are distributed in a reasonably equitable fashion.”

The HANDY model

So what’s the model?

It’s 4 ordinary differential equations:

\dot{x}_C = \beta_C x_C - \alpha_C x_C

\dot{x}_E = \beta_E x_E - \alpha_E x_E

\dot{y} = \gamma y (\lambda - y) - \delta x_C y

\dot{w} = \delta x_C y - C_C - C_E

where:

x_C is the population of the commoners or masses

x_E is the population of the elite

y represents natural resources

w represents wealth

The authors say that

Natural resources exist in three forms: nonrenewable stocks (fossil fuels, mineral deposits, etc), renewable stocks (forests, soils, aquifers), and flows (wind, solar radiation, rivers). In future versions of HANDY, we plan to disaggregate Nature into these three different forms, but for simplification in this version, we have adopted a single formulation intended to represent an amalgamation of the three forms.

So, it’s possible that the paper to be published in Ecological Economics treats natural resources using three variables instead of just one.

Now let’s look at the equations one by one:

\dot{x}_C = \beta_C x_C - \alpha_C x_C

This looks weird at first, but \beta_C and \alpha_C aren’t both constants, which would be redundant. \beta_C is a constant birth rate for commoners, while \alpha_C, the death rate for commoners, is a function of wealth.

Similarly, in

\dot{x}_E = \beta_E x_E - \alpha_E x_E

\beta_E is a constant birth rate for the elite, while \alpha_E, the death rate for the elite, is a function of wealth. The death rate is different for the elite and commoners:

For both the elite and commoners, the death rate drops linearly with increasing wealth from its maximum value \alpha_M to its minimum value \alpha_m. But it drops faster for the elite, of course! For the commoners it reaches its minimum when the wealth w reaches some value w_{th}, but for the elite it reaches its minimum earlier, when w = w_{th}/\kappa, where \kappa is some number bigger than 1.
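
One way to write down the shape just described, reconstructing it from the wording above (the draft’s exact expressions may differ), is

\alpha_C = \alpha_m + \max(0, 1 - w/w_{th}) (\alpha_M - \alpha_m)

\alpha_E = \alpha_m + \max(0, 1 - \kappa w/w_{th}) (\alpha_M - \alpha_m)

so both death rates start at \alpha_M when wealth is zero, and \alpha_E hits its floor \alpha_m at the smaller wealth w_{th}/\kappa.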

Next, how do natural resources change?

\dot{y} = \gamma y (\lambda - y) - \delta x_C y

The first part of this equation:

\dot{y} = \gamma y (\lambda - y)

describes how natural resources renew themselves if left alone. This is just the logistic equation, famous in models of population growth. Here \lambda is the equilibrium level of natural resources, while \gamma is another number that helps say how fast the resources renew themselves. Solutions of the logistic equation look like this:
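
Explicitly, starting from an initial value y_0 the logistic equation has the solution

y(t) = \frac{\lambda}{1 + (\lambda/y_0 - 1) e^{-\gamma \lambda t}}

an S-shaped curve that grows at rate \gamma \lambda when y is small and levels off at the equilibrium value \lambda.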

But the whole equation

\dot{y} = \gamma y (\lambda - y) - \delta x_C y

has a term saying that natural resources get used up at a rate proportional to the population of commoners x_C times the amount of natural resources y. \delta is just a constant of proportionality.

It’s curious that the population of elites doesn’t affect the depletion of natural resources, and also that doubling the amount of natural resources doubles the rate at which they get used up. Regarding the first issue, the authors offer this explanation:

The depletion term includes a rate of depletion per worker, \delta, and is proportional to both Nature and the number of workers. However, the economic activity of Elites is modeled to represent executive, management, and supervisory functions, but not engagement in the direct extraction of resources, which is done by Commoners. Thus, only Commoners produce.

I didn’t notice a discussion of the second issue.

Finally, the change in the amount of wealth is described by this equation:

\dot{w} = \delta x_C y - C_C - C_E

The first term at right precisely matches the depletion of natural resources in the previous equation, but with the opposite sign: natural resources are getting turned into ‘wealth’. C_C describes consumption by commoners and C_E describes consumption by the elite. These are both functions of wealth, a bit like the death rates… but as you’d expect increasing wealth increases consumption:

For both the elite and commoners, consumption grows linearly with increasing wealth until wealth reaches the critical level w_{th}. But it grows faster for the elites, and reaches a higher level.
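
Reconstructing again from the verbal description, and writing s for the per-person consumption scale (a parameter not named in this post), the consumption terms have the form

C_C = s x_C \min(1, w/w_{th})

C_E = \kappa s x_E \min(1, w/w_{th})

so per person the elite consume \kappa times as much as the commoners once wealth is at or above the threshold. The draft paper may write these slightly differently.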

So, that’s the model… at least in this preliminary version of the paper.
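
For readers who want to play with the model, here is a minimal numerical sketch. The death-rate and consumption functions are the reconstructions given above, the wealth threshold is treated as a constant, and every parameter value is an arbitrary placeholder chosen just to make the code run, not a value from the paper.

import numpy as np

# Placeholder parameters -- illustrative only, not values from the paper
beta_C, beta_E = 0.03, 0.03        # birth rates of commoners and elites
alpha_m, alpha_M = 0.01, 0.07      # minimum and maximum death rates
gamma, lam = 0.01, 100.0           # regrowth rate and equilibrium level of nature
delta = 6.7e-6                     # depletion rate per commoner
s, kappa = 5e-4, 10.0              # per-person consumption scale; inequality factor
w_th = 50.0                        # wealth threshold (treated here as a constant)

def derivatives(x_C, x_E, y, w):
    omega = w / w_th
    C_C = s * x_C * min(1.0, omega)                                        # commoner consumption
    C_E = kappa * s * x_E * min(1.0, omega)                                # elite consumption
    alpha_C = alpha_m + max(0.0, 1 - omega) * (alpha_M - alpha_m)          # commoner death rate
    alpha_E = alpha_m + max(0.0, 1 - kappa * omega) * (alpha_M - alpha_m)  # elite death rate
    return np.array([
        beta_C * x_C - alpha_C * x_C,              # \dot{x}_C
        beta_E * x_E - alpha_E * x_E,              # \dot{x}_E
        gamma * y * (lam - y) - delta * x_C * y,   # \dot{y}
        delta * x_C * y - C_C - C_E,               # \dot{w}
    ])

# Crude Euler integration from arbitrary initial conditions
state = np.array([100.0, 1.0, lam, 50.0])          # x_C, x_E, y, w
dt, steps = 1.0, 1000
for _ in range(steps):
    state = np.maximum(state + dt * derivatives(*state), 0.0)
print("x_C, x_E, y, w after %d steps:" % steps, np.round(state, 2))

Changing the depletion rate \delta or the initial number of elites and rerunning it is the quickest way to explore the kinds of scenarios discussed below.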

Some solutions of the model

There are many parameters in this model, and many different things can happen depending on their values and the initial conditions. The paper investigates many different scenarios. I don’t have the energy to describe them all, so I urge you to skim it and look at the graphs.

I’ll just show you three. Here is one that Nafeez Ahmed mentioned, where civilization

appears to be on a sustainable path for quite a long time, but even using an optimal depletion rate and starting with a very small number of Elites, the Elites eventually consume too much, resulting in a famine among Commoners that eventually causes the collapse of society.

I can see why Ahmed would like to talk about this scenario: he’s written a book called A User’s Guide to the Crisis of Civilization and How to Save It. Clearly it’s worth putting some thought into risks of this sort. But how likely is this particular scenario compared to others? For that we’d need to think hard about how well this model matches reality.

It’s obviously a crude simplification of an immensely complex and unknowable system: the whole civilization on this planet. That doesn’t mean it’s fundamentally wrong! Its predictions could still be qualitatively correct. But to gain confidence in this, we’d need arguments that aren’t made in the draft paper I’ve seen. It says:

The scenarios most closely reflecting the reality of our world today are found in the third group of experiments (see section 5.3), where we introduced economic stratification. Under such conditions, we find that collapse is difficult to avoid.

But it would be nice to see a more careful approach to setting model parameters, justifying the simplifications built into the model, exploring what changes when some simplifications are reduced, and so on.

Here’s a happier scenario, where the parameters are chosen differently:

The main difference is that the depletion of resources per commoner, \delta, is smaller.

And here’s yet another, featuring cycles of prosperity, overshoot and collapse:

Tentative conclusions

I hope you see that I’m neither trying to ‘shoot down’ this model nor defend it. I’m just trying to understand it.

I think it’s very important—and fun—to play around with models like this, keep refining them, comparing them against each other, and using them as tools to help our thinking. But I’m not very happy that Nafeez Ahmed called this piece of work a “highly credible wake-up call” without giving us any details about what was actually done.

I don’t expect blog articles on the Guardian to feature differential equations! But it would be great if journalists who wrote about new scientific results would provide a link to the actual work, so people who want to dig deeper can do so. Don’t make us scour the internet looking for clues.

And scientists: if your results are potentially important, let everyone actually see them! If you think civilization could be heading for collapse, burying your evidence and your recommendations for avoiding this calamity in a closed-access Elsevier journal is not the optimal strategy to deal with the problem.

There’s been a whole side-battle over whether NASA actually funded this study:

• Keith Kloor, About that popular Guardian story on the collapse of industrial civilization, Collide-A-Scape, blog on Discover, 21 March 2014.

• Nafeez Ahmed, Did NASA fund ‘civilisation collapse’ study, or not?, Earth Insight, blog on The Guardian, 21 March 2014.

But that’s very boring compared to the fun of thinking about the model used in this study… and the challenging, difficult business of trying to think clearly about the risks of civilizational collapse.

Addendum

The paper is now freely available here:

• Safa Motesharrei, Jorge Rivas and Eugenia Kalnay, Human and nature dynamics (HANDY): modeling inequality and use of resources in the collapse or sustainability of societies, Ecological Economics 101 (2014), 90–102.


Markov Models of Social Change (Part 2)

5 March, 2014

guest post by Vanessa Schweizer

This is my first post to Azimuth. It’s a companion to the one by Alastair Jamieson-Lane. I’m an assistant professor at the University of Waterloo in Canada with the Centre for Knowledge Integration, or CKI. Through our teaching and research, the CKI focuses on integrating what appears, at first blush, to be drastically different fields in order to make the world a better place. The very topics I would like to cover today, which are mathematics and policy design, are an example of our flavour of knowledge integration. However, before getting into that, perhaps some background on how I got here would be helpful.

The conundrum of complex systems

For about eight years, I have focused on various problems related to long-term forecasting of social and technological change (long-term meaning in excess of 10 years). I became interested in these problems because they are particularly relevant to how we understand and respond to global environmental changes such as climate change.

In case you don’t know much about global warming or what the fuss is about, part of what makes the problem particularly difficult is that the feedback from the physical climate system to human political and economic systems is exceedingly slow. It is so slow, that under traditional economic and political analyses, an optimal policy strategy may appear to be to wait before making any major decisions – that is, wait for scientific knowledge and technologies to improve, or at least wait until the next election [1]. Let somebody else make the tough (and potentially politically unpopular) decisions!

The problem with waiting is that the greenhouse gases that scientists are most concerned about stay in the atmosphere for decades or centuries. They are also churned out by the gigatonne each year. Thus the warming trends that we have experienced for the past 30 years, for instance, are the cumulative result of emissions that happened not only recently but also long ago—in the case of carbon dioxide, as far back as the turn of the 20th century. The world in the 1910s was quainter than it is now, and as more economies around the globe industrialize and modernize, it is natural to wonder: how will we manage to power it all? Will we still rely so heavily on fossil fuels, which are the primary source of our carbon dioxide emissions?

Such questions are part of what makes climate change a controversial topic. Present-day policy decisions about energy use will influence the climatic conditions of the future, so what kind of future (both near-term and long-term) do we want?

Futures studies and trying to learn from the past

Many approaches can be taken to answer the question of what kind of future we want. An approach familiar to the political world is for a leader to espouse his or her particular hopes and concerns for the future, then work to convince others that those ideas are more relevant than someone else’s. Alternatively, economists do better by developing and investigating different simulations of economic developments over time; however, the predictive power of even these tools drops off precipitously beyond the 10-year time horizon.

The limitations of these approaches should not be too surprising, since any stockbroker will say that when making financial investments, past performance is not necessarily indicative of future results. We can expect the same problem with rhetorical appeals, or economic models, that are based on past performances or empirical (which also implies historical) relationships.

A different take on foresight

A different approach avoids the frustration of proving history to be a fickle tutor for the future. By setting aside the supposition that we must be able to explain why the future might play out a particular way (that is, to know the ‘history’ of a possible future outcome), alternative futures 20, 50, or 100 years hence can be conceptualized as different sets of conditions that may substantially diverge from what we see today and have seen before. This perspective is employed in cross-impact balance analysis, an algorithm that searches for conditions that can be demonstrated to be self-consistent [3].

Findings from cross-impact balance analyses have been informative for scientific assessments produced by the Intergovernmental Panel on Climate Change, or IPCC. To present a coherent picture of the climate change problem, the IPCC has coordinated scenario studies across economic and policy analysts as well as climate scientists since the 1990s. Prior to the development of the cross-impact balance method, these researchers had to do their best to identify appropriate ranges for rates of population growth, economic growth, energy efficiency improvements, etc. through their best judgment.

A retrospective using cross-impact balances on the first Special Report on Emissions Scenarios found that the researchers did a good job in many respects. However, they underrepresented the large number of alternative futures that would result in high greenhouse gas emissions in the absence of climate policy [4].

As part of the latest update to these coordinated scenarios, climate change researchers decided it would be useful to organize alternative futures according to socio-economic conditions that pose greater or fewer challenges to mitigation and adaptation. Mitigation refers to policy actions that decrease greenhouse gas emissions, while adaptation refers to reducing harms due to climate change or to taking advantage of benefits. Some climate change researchers argued that it would be sufficient to consider alternative futures where challenges to mitigation and adaptation co-varied, e.g. three families of futures where mitigation and adaptation challenges would be low, medium, or high.

Instead, cross-impact balances revealed that mixed-outcome futures—such as socio-economic conditions simultaneously producing fewer challenges to mitigation but greater challenges to adaptation—could not be completely ignored. This counter-intuitive finding, among others, brought the importance of quality of governance to the fore [5].

Although it is generally recognized that quality of governance—e.g. control of corruption and the rule of law—affects quality of life [6], many in the climate change research community have focused on technological improvements, such as drought-resistant crops, or economic incentives, such as carbon prices, for mitigation and adaptation. The cross-impact balance results underscored that should global patterns of quality of governance across nations take a turn for the worse, poor governance could stymie these efforts. This is because the influence of quality of governance is pervasive; where corruption is permitted at the highest levels of power, it may be permitted at other levels as well—including levels that are responsible for building schools, teaching literacy, maintaining roads, enforcing public order, and so forth.

The cross-impact balance study revealed this in the abstract, as summarized in the example matrices below. Alastair included a matrix like these in his post, where he explained that numerical judgments in such a matrix can be used to calculate the net impact of simultaneous influences on system factors. My purpose in presenting these matrices is a bit different, as the matrix structure can also explain why particular outcomes behave as system attractors.

In this example, a solid light gray square means that the row factor directly influences the column factor some amount, while white space means that there is no direct influence:

Dark gray squares along the diagonal have no meaning, since everything is perfectly correlated to itself. The pink squares highlight the rows for the factors “quality of governance” and “economy.” The importance of these rows is more apparent here; the matrix above is a truncated version of this more detailed one:


The pink rows are highlighted because of a striking property of these factors. They are the two most influential factors of the system, as you can see from how many solid squares appear in their rows. The direct influence of quality of governance is second only to the economy. (Careful observers will note that the economy directly influences quality of governance, while quality of governance directly influences the economy). Other scholars have meticulously documented similar findings through observations [7].
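
To make the matrix-reading concrete, here is a minimal Python sketch of the counting exercise just described: rank the row factors by how many other factors they directly influence. The factor names and the 0/1 entries are illustrative placeholders of my own, not the judgments from [5].

# Count how many other factors each row factor directly influences.
# The factor names and 0/1 entries are illustrative placeholders only.

factors = ["economy", "quality of governance", "population", "technology"]

# influence[i][j] = 1 if row factor i directly influences column factor j
influence = [
    [0, 1, 1, 1],   # economy
    [1, 0, 1, 0],   # quality of governance
    [0, 0, 0, 1],   # population
    [1, 0, 0, 0],   # technology
]

ranking = sorted(((sum(row), name) for name, row in zip(factors, influence)), reverse=True)
for count, name in ranking:
    print(f"{name}: directly influences {count} other factor(s)")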

As a method for climate policy analysis, cross-impact balances fill an important gap between genius forecasting (i.e., ideas about the far-off future espoused by one person) and scientific judgments that, in the face of deep uncertainty, are overconfident (i.e. neglecting the ‘fat’ or ‘long’ tails of a distribution).

Wanted: intrepid explorers of future possibilities

However, alternative visions of the future are only part of the information that’s needed to create the future that is desired. Descriptions of courses of action that are likely to get us there are also helpful. In this regard, the post by Jamieson-Lane describes early work on modifying cross-impact balances for studying transition scenarios rather than searching primarily for system attractors.

This is where you, as the mathematician or physicist, come in! I have been working with cross-impact balances as a policy analyst, and I can see the potential of this method to revolutionize policy discussions—not only for climate change but also for policy design in general. However, as pointed out by entrepreneurship professor Karl T. Ulrich, design problems are NP-complete. Those of us with lesser math skills can be easily intimidated by the scope of such search problems. For this reason, many analysts have resigned themselves to ad hoc explorations of the vast space of future possibilities. However, some analysts like me think it is important to develop methods that do better. I hope that some of you Azimuth readers may be up for collaborating with like-minded individuals on the challenge!

References

The graph of carbon emissions is from reference [2]; the pictures of the matrices are adapted from reference [5]:

[1] M. Granger Morgan, Milind Kandlikar, James Risbey and Hadi Dowlatabadi, Why conventional tools for policy analysis are often inadequate for problems of global change, Climatic Change 41 (1999), 271–281.

[2] T.F. Stocker et al., Technical Summary, in Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (2013), T.F. Stocker, D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex, and P.M. Midgley (eds.) Cambridge University Press, New York.

[3] Wolfgang Weimer-Jehle, Cross-impact balances: a system-theoretical approach to cross-impact analysis, Technological Forecasting & Social Change 73 (2006), 334–361.

[4] Vanessa J. Schweizer and Elmar Kriegler, Improving environmental change research with systematic techniques for qualitative scenarios, Environmental Research Letters 7 (2012), 044011.

[5] Vanessa J. Schweizer and Brian C. O’Neill, Systematic construction of global socioeconomic pathways using internally consistent element combinations, Climatic Change 122 (2014), 431–445.

[6] Daniel Kaufmann, Aart Kraay and Massimo Mastruzzi, Worldwide Governance Indicators (2013), The World Bank Group.

[7] Daron Acemoglu and James Robinson, Why Nations Fail: The Origins of Power, Prosperity, and Poverty. Website.


Markov Models of Social Change (Part 1)

24 February, 2014

guest post by Alastair Jamieson-Lane

The world is complex, and making choices in a complex world is sometimes difficult.

As any leader knows, decisions must often be made with incomplete information. To make matters worse, the experts and scientists who are meant to advise on these important matters are also doing so with incomplete information—usually limited to only one or two specialist fields. When decisions need to be made that are dependent on multiple real-world systems, and your various advisors find it difficult to communicate, this can be problematic!

The generally accepted approach is to listen to whichever advisor tells you the things you want to hear.

When such an approach fails (for whatever mysterious and inexplicable reason) it might be prudent to consider such approaches as Bayesian inference, analysis of competing hypotheses or cross-impact balance analysis.

Because these methods require experts to formalize their opinions in an explicit, discipline-neutral manner, we avoid many of the problems mentioned above. Also, if everything goes horribly wrong, you can blame the algorithm, and send the rioting public down to the local university to complain there.

In this blog article I will describe cross-impact balance analysis and a recent extension to this method, explaining its use, as well as some basic mathematical underpinnings. No familiarity with cross-impact balance analysis will be required.

Wait—who is this guy?

Since this is my first time writing a blog post here, I hear introductions are in order.

Hi. I’m Alastair.

I am currently a Master’s student at the University of British Columbia, studying mathematics. In particular, I’m aiming to use evolutionary game theory to study academic publishing and hiring practices… and from there hopefully move on to studying governments (we’ll see how the PhD goes). I figure that both those systems seem important to solving the problems we’ve built for ourselves, and both may be under increasing pressure in coming years.

But that’s not what I’m here for today! Today I’m here to tell the story of cross-impact balance analysis, a tool I was introduced to at the complex systems summer school in Santa Fe.

The story

Suppose (for example) that the local oracle has foretold that burning the forests will anger the nature gods

… and that if you do not put restrictions in place, your crops will wither and die.

Well, that doesn’t sound very good.

The merchants’ guild claims that such restrictions will cause all trade to grind to a halt.

Your most trusted generals point out that weakened trade will leave you vulnerable to invasion from all neighboring kingdoms.

The sailors’ guild adds that the wrath of Poseidon might make nautical trade more difficult.

The alchemists propose alternative sources of heat…

… while the druids propose special crops as a way of resisting the wrath of the gods…

… and so on.

Given this complex web of interaction, it might be a good time to consult the philosophers.

Overview of CIB

This brings us to the question of what CIB (Cross-Impact Balance) analysis is, and how to use it.

At its heart, CIB analysis demands this: first, you must consider what aspects of the world you are interested in studying. This could be environmental or economic status, military expenditure, or the laws governing genetic modification. These we refer to as “descriptors”. For each “descriptor” we must create a list of possible “states”.

For example, if the descriptor we are interested in were “global temperature change”, our states might be “+5 degrees”, “+4 degrees” and so on down to “-2 degrees”.

The states of a descriptor are not meant to be all-encompassing, or offer complete detail, and they need not be numerical. For example, the descriptor “Agricultural policy” might have such states as “Permaculture subsidy”, “Genetic engineering”, “Intensive farming” or “No policy”.

For each of these states, we ask our panel of experts whether such a state would increase or decrease the tendency for some other descriptor to be in a particular state.

For example, we might ask: “On a scale from -3 to 3, how much does the agricultural policy of Intensive farming increase the probability that we will see global temperature increases of +2 degrees?”

By combining the opinions of a variety of experts in each field, and weighting based on certainty and expertise, we are able to construct matrices, much like the one below:

The above matrix is a description of my ant farm. The health of my colony is determined by the population, income, and education levels of my ants. For a less ant-focused version of the above, please refer to:

• Elisabeth A. Lloyd and Vanessa J. Schweizer, Objectivity and a comparison of methodological scenario approaches for climate change research, Synthese (2013).

For any possible combination of descriptor states (referred to as a scenario) we can calculate the total impact on all possible descriptors. In the current scenario we have low population, high income and medium education (see highlighted rows).

Because the current scenario has high ant income, this strongly influences us to have low population (+3) and prevents a jump to high population (-3). This combined with the non-influence from education (zeros) leads to low population being the most favoured state for our population descriptor. Thus we expect no change. We say this is “consistent”.

Education, however, tells a different story. Here we have a strong influence towards high education levels (summing the column gives a total of 13). Thus our current state (medium education) is inconsistent, and we would expect the abundance of ant wealth to lead to an improvement in the ant schooling system.

Classical CIB analysis acts as a way to classify which hypothetical situations are consistent, and which are not.
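
To make the bookkeeping concrete, here is a minimal Python sketch of this consistency check. The descriptor names follow my ant farm, but the numerical judgments are invented placeholders rather than the values shown in the matrix above.

# A minimal sketch of the consistency check described above. Descriptor names
# follow the ant farm, but the numerical judgments are invented placeholders.

DESCRIPTORS = {
    "population": ["low", "medium", "high"],
    "income":     ["low", "medium", "high"],
    "education":  ["low", "medium", "high"],
}

# IMPACT[(row descriptor, row state)][(column descriptor, column state)]
# holds a judgment on the -3..+3 scale; missing entries mean no direct influence.
IMPACT = {
    ("income", "high"): {("population", "low"): 3, ("population", "high"): -3,
                         ("education", "high"): 2},
    ("education", "medium"): {("income", "high"): 1},
    # further judgments elicited from the expert panel would go here
}

def impact_on(scenario, descriptor, state):
    """Total impact the scenario's other descriptors exert on (descriptor, state)."""
    return sum(IMPACT.get((d, s), {}).get((descriptor, state), 0)
               for d, s in scenario.items() if d != descriptor)

def is_consistent(scenario):
    """True if every descriptor already sits in one of its most favoured states."""
    for d, chosen in scenario.items():
        scores = {s: impact_on(scenario, d, s) for s in DESCRIPTORS[d]}
        if scores[chosen] < max(scores.values()):
            return False
    return True

current = {"population": "low", "income": "high", "education": "medium"}
print(is_consistent(current))   # False: high income favours high education,
                                # so medium education is inconsistent, as above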

Now, it is all well and good to claim that some scenarios are stable, but the real use of such a tool is in predicting (and influencing) the future.

By applying a deterministic rule that determines how inconsistencies are resolved, we can produce a “succession rule”. The most straightforward example is to replace all descriptor states with whichever state is most favoured by the current scenario. In the example above we would switch to “low population, medium income, high education”. A generation later we would switch back to “low population, high income, medium education”, soon finding ourselves trapped in a loop.

All such rules will always lead to either a loop or a “sink”: a self consistent scenario which is succeeded only by itself.
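
Here is a sketch of that most straightforward succession rule in Python, iterated until a scenario repeats: a repeat of length one is a sink, anything longer is a loop. The 9×9 matrix of judgments is random placeholder data, not real elicitations.

import numpy as np

N = [3, 3, 3]                                    # states per descriptor
off = np.cumsum([0] + N[:-1])                    # start index of each descriptor's block of states
rng = np.random.default_rng(0)
cib = rng.integers(-3, 4, (sum(N), sum(N)))      # placeholder judgments on the -3..+3 scale
for d, n in enumerate(N):                        # a descriptor does not influence itself
    cib[off[d]:off[d] + n, off[d]:off[d] + n] = 0

def successor(scenario):
    """Move every descriptor to the state most favoured by `scenario`."""
    active = [off[d] + s for d, s in enumerate(scenario)]
    return tuple(
        int(np.argmax([cib[active, off[d] + s].sum() for s in range(n)]))
        for d, n in enumerate(N)
    )

scenario, seen = (0, 2, 1), []
while scenario not in seen:
    seen.append(scenario)
    scenario = successor(scenario)
cycle = seen[seen.index(scenario):]
print("sink" if len(cycle) == 1 else f"loop of length {len(cycle)}", cycle)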

So, how can we use this? How will this help us deal with the wrath of the gods (or ant farms)?

Firstly: we can identify loops and consistent scenarios which we believe are most favourable. It’s all well and good imagining some future utopia, but if it is inconsistent with itself and will immediately lead to a slide into less favourable scenarios, then we should not aim for it; instead we should find the most favourable realistic scenario and aim for that one.

Secondly: we can examine all our consistent scenarios and determine whose “basin of attraction” we find ourselves in: that is, which scenario we are likely to end up in.

Thirdly: suppose we could change our influence matrix slightly. How would we change it to favour the scenarios we most prefer? If you don’t like the rules, change the game—or at the very least find out WHAT we would need to change to have the best effect.

Concerns and caveats

So… what are the problems we might encounter? What are the drawbacks?

Well, first of all, we note that the real world does not tend to reach any form of eternal static scenario or perfect cycle. The fact that our model does might be regarded as reason for suspicion.

Secondly, although the classical method contains succession analysis, this analysis is not necessarily intended as a completely literal “prediction” of events. It gives a rough idea of the basins of attraction of our cycles and consistent scenarios, but is also somewhat arbitrary. What succession rule is most appropriate? Do all descriptors update simultaneously? Or only the one with the most “pressure”? Are our descriptors given in order of malleability, and only the fastest changing descriptor will change?

Thirdly, in collapsing our description of the world down into a finite number of states we are ignoring many tiny details. Most of these details are not important, but in assuming that our succession rules are deterministic, we imply that these details have no impact whatsoever.

If we instead treat succession as a somewhat random process, the first two of these problems can be solved, and the third somewhat reduced.

Stochastic succession

In the classical CIB succession analysis, some rule is selected which deterministically decides which scenario follows from the present. Stochastic succession analysis instead tells us the probability that a given scenario will lead to another.

The simplest example of a stochastic succession rule is to simply select a single descriptor at random each time step, and only consider updates that might happen to that descriptor. This we refer to as dice succession. This (in some ways) represents hidden information: two systems that might look identical on the surface from the point of view of our very blockish CIB analysis might be different enough underneath to lead to different outcomes. If we have a shaky agricultural system, but a large amount of up-and-coming research, then which of these two factors becomes important first is down to the luck of the draw. Rather than attempt to model this fine detail, we instead merely accept it and incorporate this uncertainty into our model.
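
A minimal sketch of dice succession, again over a placeholder matrix: roll for one descriptor, then move only that descriptor to the state the current scenario most favours.

import numpy as np

N = [3, 3, 3]                                    # states per descriptor
off = np.cumsum([0] + N[:-1])
rng = np.random.default_rng(1)
cib = rng.integers(-3, 4, (sum(N), sum(N)))      # placeholder judgments
for d, n in enumerate(N):                        # no self-influence
    cib[off[d]:off[d] + n, off[d]:off[d] + n] = 0

def dice_step(scenario):
    d = int(rng.integers(len(N)))                # the dice pick one descriptor
    active = [off[k] + s for k, s in enumerate(scenario)]
    scores = [cib[active, off[d] + s].sum() for s in range(N[d])]
    new = list(scenario)
    new[d] = int(np.argmax(scores))              # move it to its most favoured state
    return tuple(new)

scenario = (0, 2, 1)
for _ in range(20):
    scenario = dice_step(scenario)
print(scenario)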

Even this most simplistic change leads to dramatic effects on our system. Most importantly, almost all cycles vanish from our results, as forks in the road allow us to diverge from the path of the cycle.

We can take stochastic succession further and consider more exotic rules for our transitions, ones that allow any transition to take place, not merely those that are most favored. For example:

\displaystyle{ P(x,y) = A e^{I_x(y)/T} }

Here x is our current scenario, y is some possible future scenario, and I_x(y) is the total impact score of y from the perspective of x. A is a simple normalizing constant, and T is our system’s temperature. High temperature systems are dominated by random noise, while low temperature systems are dominated by the influences described by our experts.

Impact score is calculated by summing the impact of each state of our current scenario on each state of our target scenario. For example, for the matrix above, suppose we want to find I_x(y) when x is the given scenario “low population, high income, medium education” and y is the scenario “medium population, medium income, high education”. We consider all numbers that are in rows which are states of x and in columns that are states of y. This would give:

\displaystyle{ I_x(y) = (0+0+0) + (-2+0+10) + (6+7+0) = 21 }

Here each bracket refers to the sum of a particular column.

More generally, we can write the formula as:

\displaystyle{ I_x(y)= \sum_{i \subset x, \;j \subset y} M_{i,j} }

Here M_{i,j} refers to an entry in our cross-impact balance matrix, i and j are both states, and i \subset x reads as “i is a state of x”.

We refer to this function for computing transition probabilities as the Boltzmann succession law, due to its similarity to the Boltzmann distribution found in physics. We use it merely as an example, and by no means wish to imply that we expect the transitions for our true system to act in a precisely Boltzmann-like manner. Alternative functions can, and should, be experimented with. The Boltzmann succession law is, however, an effective example with a number of nice properties: P(x,y) is always positive, it is unchanged by adding a constant to every element of the cross-impact balance matrix, the temperature T gives us an adjustable parameter, and it can accommodate impact scores that are unbounded above.
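
As a concrete illustration, the sketch below applies the Boltzmann succession law to a small placeholder matrix: I_x(y) is computed by summing the entries whose rows are states of x and whose columns are states of y, and A is recovered by normalizing over all candidate scenarios. The matrix values and the temperature are arbitrary stand-ins.

import itertools
import numpy as np

N = [3, 3, 3]                                    # states per descriptor
off = np.cumsum([0] + N[:-1])
rng = np.random.default_rng(2)
cib = rng.integers(-3, 4, (sum(N), sum(N)))      # placeholder judgments

def impact(x, y):
    """I_x(y): sum entries in rows that are states of x and columns that are states of y."""
    rows = [off[d] + s for d, s in enumerate(x)]
    cols = [off[d] + s for d, s in enumerate(y)]
    return cib[np.ix_(rows, cols)].sum()

def boltzmann_probs(x, T=1.0):
    scenarios = list(itertools.product(*(range(n) for n in N)))
    weights = np.array([np.exp(impact(x, y) / T) for y in scenarios])
    return scenarios, weights / weights.sum()    # dividing by the sum plays the role of A

scenarios, probs = boltzmann_probs((0, 2, 1), T=2.0)
print(scenarios[int(np.argmax(probs))], float(probs.max()))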

The Boltzmann succession rule is what I will refer to as fully stochastic: it allows transitions even against our experts’ judgement (with low probability). This is in contrast to dice succession, which picks a direction at random but still contains scenarios from which our system cannot escape.

Effects of stochastic succession

‘Partially stochastic’ processes such as the dice rule have very limited effect on the long term behavior of the model. Aside from removing most cycles, they behave almost exactly like our deterministic succession rules. So, let us instead discuss the more interesting fully stochastic succession rules.

In the fully stochastic system we can ask “after a very long time, what is the probability we will be in scenario x?”

By asking this question we can get some idea of the relative importance of all our future scenarios and states.

For example, if the scenario “high population, low education, low income” has a 40% probability in the long term, while most other scenarios have a probability of 0.2%, we can see that this scenario is crucial to the understanding of our system. Often scenarios already identified by deterministic succession analysis are the ones with the greatest long term probability—but by looking at long term probability we also gain information about the relative importance of each scenario.

In addition, we can encounter scenarios which are themselves inconsistent, but form cycles and/or clusters of interconnected scenarios. We can also notice scenarios that, while technically ‘consistent’ under the deterministic rules, are only barely so, and have limited weight due to a limited basin of attraction. We might identify scenarios that seem familiar in the real world, but are apparently highly unlikely in our analysis, indicating either that we should expect change… or perhaps suggesting a missing descriptor or a cross-impact in need of tweaking.

Armed with such a model, we can investigate what we can do to increase the short term and long term likelihood of desirable scenarios, and decrease the likelihood of undesirable scenarios.
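
One way to compute such long-term probabilities is to assemble the full Boltzmann-style transition matrix for a small placeholder system and iterate it to its stationary distribution, as in the sketch below. Again, the judgments and the temperature are arbitrary stand-ins, not elicited values.

import itertools
import numpy as np

N = [3, 3, 3]                                    # states per descriptor
off = np.cumsum([0] + N[:-1])
rng = np.random.default_rng(3)
cib = rng.integers(-3, 4, (sum(N), sum(N)))      # placeholder judgments
scenarios = list(itertools.product(*(range(n) for n in N)))

def impact(x, y):
    rows = [off[d] + s for d, s in enumerate(x)]
    cols = [off[d] + s for d, s in enumerate(y)]
    return cib[np.ix_(rows, cols)].sum()

T = 2.0
P = np.array([[np.exp(impact(x, y) / T) for y in scenarios] for x in scenarios])
P /= P.sum(axis=1, keepdims=True)                # each row becomes a probability distribution

dist = np.full(len(scenarios), 1.0 / len(scenarios))
for _ in range(1000):                            # power iteration towards the stationary distribution
    dist = dist @ P

for i in np.argsort(dist)[::-1][:3]:             # the three most probable long-run scenarios
    print(scenarios[i], round(float(dist[i]), 3))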

Some further reading

As a last note, here are a few freely available resources that may prove useful. For a more formal introduction to CIB, try:

• Wolfgang Weimer-Jehle, Cross-impact balances: a system-theoretical approach to cross-impact analysis, Technological Forecasting & Social Change 73 (2006), 334–361.

• Wolfgang Weimer-Jehle, Properties of cross-impact balance analysis.

You can find free software for doing a classical CIB analysis here:

• ZIRIUS, ScenarioWizard.

ZIRIUS is the Research Center for Interdisciplinary Risk and Innovation Studies of the University of Stuttgart.

Here are some examples of CIB in action:

• Gerhard Fuchs, Ulrich Fahl, Andreas Pyka, Udo Staber, Stefan Voegele and Wolfgang Weimer-Jehle, Generating innovation scenarios using the cross-impact methodology, Department of Economics, University of Bremen, Discussion-Papers Series No. 007-2008.

• Ortwin Renn, Alexander Jäger, Jürgen Deuschle and Wolfgang Weimer-Jehle, A normative-functional concept of sustainability and its indicators, International Journal of Global Environmental Issues 9 (2008), 291–317.

Finally, this page contains a more complete list of articles, both practical and theoretical:

• ZIRIUS, Cross-impact balance analysis: publications.

