Wind Power and the Smart Grid

18 June, 2014



Electric power companies complain about wind power because it’s intermittent: if suddenly the wind stops, they have to bring in other sources of power.

This is no big deal if we only use a little wind. Across the US, wind now supplies 4% of electric power; even in Germany it’s just 8%. The problem starts if we use a lot of wind. If we’re not careful, we’ll need big fossil-fuel-powered electric plants when the wind stops. And these need to be turned on, ready to pick up the slack at a moment’s notice!

So, a few years ago Xcel Energy, which supplies much of Colorado’s power, ran ads opposing a proposal that it use renewable sources for 10% of its power.

But now things have changed. Now Xcel gets about 15% of its power from wind, on average. And sometimes this spikes to much more!

What made the difference?

Every few seconds, hundreds of turbines measure the wind speed. Every 5 minutes, they send this data to high-performance computers 100 miles away at the National Center for Atmospheric Research in Boulder. NCAR crunches these numbers along with data from weather satellites, weather stations, and other wind farms – and creates highly accurate wind power forecasts.

With better prediction, Xcel can do a better job of shutting down idling backup plants on days when they’re not needed. Last year was a breakthrough year – better forecasts saved Xcel nearly as much money as they had in the three previous years combined.

It’s all part of the emerging smart grid—an intelligent network that someday will include appliances and electric cars. With a good smart grid, we could set our washing machine to run when power is cheap. Maybe electric cars could store solar power in the day, use it to power neighborhoods when electricity demand peaks in the evening – then recharge their batteries using wind power in the early morning hours. And so on.

References

I would love it if the Network Theory project could ever grow to the point of helping design the smart grid. So far we are doing much more ‘foundational’ work on control theory, along with a more applied project on predicting El Niños. I’ll talk about both of these soon! But I have big hopes and dreams, so I want to keep learning more about power grids and the like.

Here are two nice references:

• Kevin Bullis, Smart wind and solar power, from 10 breakthrough technologies, Technology Review, 23 April 2014.

• Keith Parks, Yih-Huei Wan, Gerry Wiener and Yubao Liu, Wind energy forecasting: a collaboration of the National Center for Atmospheric Research (NCAR) and Xcel Energy.

The first is fun and easy to read. The second has more technical details. It describes the software used (the picture on top of this article shows a bit of this), and also some of the underlying math and physics. Let me quote a bit:

High-resolution Mesoscale Ensemble Prediction Model (EPM)

It is known that atmospheric processes are chaotic in nature. This implies that even small errors in the model initial conditions combined with the imperfections inherent in the NWP model formulations, such as truncation errors and approximations in model dynamics and physics, can lead to a wind forecast with large errors for certain weather regimes. Thus, probabilistic wind prediction approaches are necessary for guiding wind power applications. Ensemble prediction is at present a practical approach for producing such probabilistic predictions. An innovative mesoscale Ensemble Real-Time Four Dimensional Data Assimilation (E-RTFDDA) and forecasting system that was developed at NCAR was used as the basis for incorporating this ensemble prediction capability into the Xcel forecasting system.

Ensemble prediction means that instead of a single weather forecast, we generate a probability distribution on the set of weather forecasts. The paper has references explaining this in more detail.
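Here’s a toy illustration of the idea in Python. This is emphatically not NCAR’s E-RTFDDA system, just the bare concept: take a chaotic model (the Lorenz system, standing in for the atmosphere), run many copies of it from slightly perturbed initial conditions, and read off the spread of outcomes as a forecast distribution:

```python
# Toy ensemble prediction: integrate many copies of a chaotic model
# from slightly perturbed initial conditions and treat the spread of
# outcomes as a forecast distribution.
import numpy as np
from scipy.integrate import solve_ivp

def lorenz(t, s, sigma=10.0, rho=28.0, beta=8.0/3.0):
    """The Lorenz system, a standard example of chaotic dynamics."""
    x, y, z = s
    return [sigma * (y - x), x * (rho - z) - y, x * y - beta * z]

rng = np.random.default_rng(0)
analysis = np.array([1.0, 1.0, 1.0])   # our best guess at the current state
n_members = 100                        # ensemble size
t_final = 2.0                          # forecast horizon

# Each member starts from the analysis plus a small random perturbation,
# standing in for uncertainty in the initial conditions.
finals = []
for _ in range(n_members):
    perturbed = analysis + 1e-3 * rng.standard_normal(3)
    sol = solve_ivp(lorenz, [0.0, t_final], perturbed, rtol=1e-8, atol=1e-10)
    finals.append(sol.y[:, -1])
finals = np.array(finals)

# The ensemble mean and spread summarize the forecast distribution:
print("forecast mean:", finals.mean(axis=0))
print("forecast std: ", finals.std(axis=0))
```

In a real system the perturbations are chosen far more carefully, and data assimilation keeps nudging each ensemble member toward incoming observations.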

We had a nice discussion of wind power and the smart grid over on G+. Among other things, John Despujols mentioned the role of ‘smart inverters’ in enhancing grid stability:

• Smart solar inverters smooth out voltage fluctuations for grid stability, DigiKey article library.

A solar inverter converts the variable direct current output of a photovoltaic solar panel into alternating current usable by the electric grid. There’s a lot of math involved here—click the link for a Wikipedia summary. But solar inverters are getting smarter.

Wild fluctuations

While the solar inverter has long been the essential link between the photovoltaic panel and the electricity distribution network, converting DC to AC, its role is expanding due to the massive growth in solar energy generation. Utility companies and grid operators have become increasingly concerned about managing what can potentially be wildly fluctuating levels of energy produced by the huge (and still growing) number of grid-connected solar systems, whether they are rooftop systems or utility-scale solar farms. Intermittent production due to cloud cover or temporary faults has the potential to destabilize the grid. In addition, grid operators are struggling to plan ahead due to lack of accurate data on production from these systems as well as on true energy consumption.

In large-scale facilities, virtually all output is fed to the national grid or micro-grid, and is typically well monitored. At the rooftop level, although individually small, collectively the amount of energy produced has a significant potential. California estimated it has more than 150,000 residential rooftop grid-connected solar systems with a potential to generate 2.7 GW.

However, while in some systems all the solar energy generated is fed to the grid and not accessible to the producer, others allow energy generated to be used immediately by the producer, with only the excess fed to the grid. In the latter case, smart meters may only measure the net output for billing purposes. In many cases, information on production and consumption, supplied by smart meters to utility companies, may not be available to the grid operators.

Getting smarter

The solution according to industry experts is the smart inverter. Every inverter, whether at panel level or megawatt-scale, has a role to play in grid stability. Traditional inverters have, for safety reasons, become controllable, so that they can be disconnected from the grid at any sign of grid instability. It has been reported that sudden, widespread disconnects can exacerbate grid instability rather than help settle it.

Smart inverters, however, provide a greater degree of control and have been designed to help maintain grid stability. One trend in this area is to use synchrophasor measurements to detect and identify a grid instability event, rather than conventional ‘perturb-and-observe’ methods. The aim is to distinguish between a true island condition and a voltage or frequency disturbance which may benefit from additional power generation by the inverter rather than a disconnect.

Smart inverters can change the power factor. They can inject or absorb reactive power to manage voltage and power fluctuations, driving voltage up or down depending on immediate requirements. Adaptive volts-amps reactive (VAR) compensation techniques could enable ‘self-healing’ on the grid.

Two-way communications between smart inverter and smart grid not only allow fundamental data on production to be transmitted to the grid operator on a timely basis, but upstream data on voltage and current can help the smart inverter adjust its operation to improve power quality, regulate voltage, and improve grid stability without compromising safety. There are considerable challenges still to overcome in terms of agreeing and evolving national and international technical standards, but this topic is not covered here.

The benefits of the smart inverter over traditional devices have been recognized in Germany, Europe’s largest solar energy producer, where an initiative is underway to convert all solar energy producers’ inverters to smart inverters. Although the cost of smart inverters is slightly higher than traditional systems, the advantages gained in grid balancing and accurate data for planning purposes are considered worthwhile. Key features of smart inverters required by German national standards include power ramping and volt/VAR control, which directly influence improved grid stability.
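The volt/VAR control mentioned in the quote above is easy to illustrate. Here is a minimal sketch of a volt/VAR ‘droop’ curve of the kind a smart inverter might follow. All the breakpoints and limits are invented for illustration; real settings come from grid codes and utility requirements, not from me:

```python
# Minimal sketch of a volt/VAR droop curve for a smart inverter.
# The numbers are illustrative, not taken from any actual standard:
# within a deadband around nominal voltage the inverter does nothing;
# outside it, it injects (positive) or absorbs (negative) reactive
# power in proportion to the voltage deviation, up to a cap.

def volt_var_setpoint(v_pu, deadband=0.02, v_max_dev=0.08, q_max=0.44):
    """Reactive power setpoint (per unit of rated apparent power)
    as a function of local voltage (per unit of nominal)."""
    dev = v_pu - 1.0
    if abs(dev) <= deadband:
        return 0.0                    # voltage near nominal: do nothing
    # linear ramp from the edge of the deadband up to the cap
    slope = q_max / (v_max_dev - deadband)
    q = -slope * (abs(dev) - deadband) * (1 if dev > 0 else -1)
    return max(-q_max, min(q_max, q))

for v in [0.90, 0.95, 0.99, 1.00, 1.01, 1.05, 1.10]:
    print(f"V = {v:.2f} pu  ->  Q = {volt_var_setpoint(v):+.3f} pu")
```

The deadband keeps the inverter from chattering when the voltage is near nominal, and the cap reflects the fact that an inverter can only divert a limited share of its rated capacity to reactive power.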


Civilizational Collapse (Part 1)

25 March, 2014

This story caught my attention, since a lot of people are passing it around:

• Nafeez Ahmed, NASA-funded study: industrial civilisation headed for ‘irreversible collapse’?, Earth Insight, blog on The Guardian, 14 March 2014.

Sounds dramatic! But notice the question mark in the title. The article says that “global industrial civilisation could collapse in coming decades due to unsustainable resource exploitation and increasingly unequal wealth distribution.” But with the word “could” in there, who could possibly argue? It’s certainly possible. What’s the actual news here?

It’s about a new paper that’s been accepted by the Elsevier journal Ecological Economics. Since this paper has not been published, and I don’t even know the title, it’s hard to get details yet. According to Nafeez Ahmed,

The research project is based on a new cross-disciplinary ‘Human And Nature DYnamical’ (HANDY) model, led by applied mathematician Safa Motesharrei of the US National Science Foundation-supported National Socio-Environmental Synthesis Center, in association with a team of natural and social scientists.

So I went to Safa Motesharrei’s webpage. It says he’s a grad student getting his PhD at the Socio-Environmental Synthesis Center, working with a team of people including:

Eugenia Kalnay (atmospheric science)
James Yorke (mathematics)
Matthias Ruth (public policy)
Victor Yakovenko (econophysics)
Klaus Hubacek (geography)
Ning Zeng (meteorology)
Fernando Miralles-Wilhelm (hydrology).

I was able to find this paper draft:

• Safa Motesharrei, Jorge Rivas and Eugenia Kalnay, A minimal model for human and nature interaction, 13 November 2012.

I’m not sure how this is related to the paper discussed by Nafeez Ahmed, but it includes some (though not all) of the passages quoted by him, and it describes the HANDY model. It’s an extremely simple model, so I’ll explain it to you.

But first let me quote a bit more of the Guardian article, so you can see why it’s attracting attention:

By investigating the human-nature dynamics of these past cases of collapse, the project identifies the most salient interrelated factors which explain civilisational decline, and which may help determine the risk of collapse today: namely, Population, Climate, Water, Agriculture, and Energy.

These factors can lead to collapse when they converge to generate two crucial social features: “the stretching of resources due to the strain placed on the ecological carrying capacity”; and “the economic stratification of society into Elites [rich] and Masses (or “Commoners”) [poor].” These social phenomena have played “a central role in the character or in the process of the collapse,” in all such cases over “the last five thousand years.”

Currently, high levels of economic stratification are linked directly to overconsumption of resources, with “Elites” based largely in industrialised countries responsible for both:

“… accumulated surplus is not evenly distributed throughout society, but rather has been controlled by an elite. The mass of the population, while producing the wealth, is only allocated a small portion of it by elites, usually at or just above subsistence levels.”

The study challenges those who argue that technology will resolve these challenges by increasing efficiency:

“Technological change can raise the efficiency of resource use, but it also tends to raise both per capita resource consumption and the scale of resource extraction, so that, absent policy effects, the increases in consumption often compensate for the increased efficiency of resource use.”

Productivity increases in agriculture and industry over the last two centuries have come from “increased (rather than decreased) resource throughput,” despite dramatic efficiency gains over the same period.

Modelling a range of different scenarios, Motesharrei and his colleagues conclude that under conditions “closely reflecting the reality of the world today… we find that collapse is difficult to avoid.” In the first of these scenarios, civilisation:

“…. appears to be on a sustainable path for quite a long time, but even using an optimal depletion rate and starting with a very small number of Elites, the Elites eventually consume too much, resulting in a famine among Commoners that eventually causes the collapse of society. It is important to note that this Type-L collapse is due to an inequality-induced famine that causes a loss of workers, rather than a collapse of Nature.”

Another scenario focuses on the role of continued resource exploitation, finding that “with a larger depletion rate, the decline of the Commoners occurs faster, while the Elites are still thriving, but eventually the Commoners collapse completely, followed by the Elites.”

In both scenarios, Elite wealth monopolies mean that they are buffered from the most “detrimental effects of the environmental collapse until much later than the Commoners”, allowing them to “continue ‘business as usual’ despite the impending catastrophe.” The same mechanism, they argue, could explain how “historical collapses were allowed to occur by elites who appear to be oblivious to the catastrophic trajectory (most clearly apparent in the Roman and Mayan cases).”

Applying this lesson to our contemporary predicament, the study warns that:

“While some members of society might raise the alarm that the system is moving towards an impending collapse and therefore advocate structural changes to society in order to avoid it, Elites and their supporters, who opposed making these changes, could point to the long sustainable trajectory ‘so far’ in support of doing nothing.”

However, the scientists point out that the worst-case scenarios are by no means inevitable, and suggest that appropriate policy and structural changes could avoid collapse, if not pave the way toward a more stable civilisation.

The two key solutions are to reduce economic inequality so as to ensure fairer distribution of resources, and to dramatically reduce resource consumption by relying on less intensive renewable resources and reducing population growth:

“Collapse can be avoided and population can reach equilibrium if the per capita rate of depletion of nature is reduced to a sustainable level, and if resources are distributed in a reasonably equitable fashion.”

The HANDY model

So what’s the model?

It’s 4 ordinary differential equations:

\dot{x}_C = \beta_C x_C - \alpha_C x_C

\dot{x}_E = \beta_E x_E - \alpha_E x_E

\dot{y} = \gamma y (\lambda - y) - \delta x_C y

\dot{w} = \delta x_C y - C_C - C_E

where:

x_C is the population of the commoners or masses

x_E is the population of the elite

y represents natural resources

w represents wealth

The authors say that

Natural resources exist in three forms: nonrenewable stocks (fossil fuels, mineral deposits, etc), renewable stocks (forests, soils, aquifers), and flows (wind, solar radiation, rivers). In future versions of HANDY, we plan to disaggregate Nature into these three different forms, but for simplification in this version, we have adopted a single formulation intended to represent an amalgamation of the three forms.

So, it’s possible that the paper to be published in Ecological Economics treats natural resources using three variables instead of just one.

Now let’s look at the equations one by one:

\dot{x}_C = \beta_C x_C - \alpha_C x_C

This looks weird at first, but \beta_C and \alpha_C aren’t both constants, which would be redundant. \beta_C is a constant birth rate for commoners, while \alpha_C, the death rate for commoners, is a function of wealth.

Similarly, in

\dot{x}_E = \beta_E x_E - \alpha_E x_E

\beta_E is a constant birth rate for the elite, while \alpha_E, the death rate for the elite, is a function of wealth. The death rate is different for the elite and commoners:

For both the elite and commoners, the death rate drops linearly with increasing wealth from its maximum value \alpha_M to its minimum value \alpha_m. But it drops faster for the elite, of course! For the commoners it reaches its minimum when the wealth w reaches some value w_{th}, but for the elite it reaches its minimum earlier, when w = w_{th}/\kappa, where \kappa is some number bigger than 1.
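In formulas, my paraphrase of this description, writing the linear ramp with a max, would be:

\alpha_C = \alpha_m + \max(0, 1 - w/w_{th}) \, (\alpha_M - \alpha_m)

\alpha_E = \alpha_m + \max(0, 1 - \kappa w/w_{th}) \, (\alpha_M - \alpha_m)

so \alpha_C falls from \alpha_M at w = 0 to \alpha_m at w = w_{th}, while \alpha_E already reaches \alpha_m at w = w_{th}/\kappa.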

Next, how do natural resources change?

\dot{y} = \gamma y (\lambda - y) - \delta x_C y

The first part of this equation:

\dot{y} = \gamma y (\lambda - y)

describes how natural resources renew themselves if left alone. This is just the logistic equation, famous in models of population growth. Here \lambda is the equilibrium level of natural resources, while \gamma is another number that helps say how fast the resources renew themselves. Solutions of the logistic equation look like this:
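In fact the logistic equation can be solved in closed form. With initial value y(0) = y_0 the solution is

y(t) = \frac{\lambda y_0}{y_0 + (\lambda - y_0) e^{-\gamma \lambda t}}

which approaches the equilibrium y = \lambda as t \to \infty: from below if y_0 < \lambda, from above if y_0 > \lambda.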

But the whole equation

\dot{y} = \gamma y (\lambda - y) - \delta x_C y

has a term saying that natural resources get used up at a rate proportional to the population of commoners x_C times the amount of natural resources y. \delta is just a constant of proportionality.

It’s curious that the population of elites doesn’t affect the depletion of natural resources, and also that doubling the amount of natural resources doubles the rate at which they get used up. Regarding the first issue, the authors offer this explanation:

The depletion term includes a rate of depletion per worker, \delta, and is proportional to both Nature and the number of workers. However, the economic activity of Elites is modeled to represent executive, management, and supervisory functions, but not engagement in the direct extraction of resources, which is done by Commoners. Thus, only Commoners produce.

I didn’t notice a discussion of the second issue.

Finally, the change in the amount of wealth is described by this equation:

\dot{w} = \delta x_C y - C_C - C_E

The first term on the right precisely matches the depletion of natural resources in the previous equation, but with the opposite sign: natural resources are getting turned into ‘wealth’. C_C describes consumption by commoners and C_E describes consumption by the elite. These are both functions of wealth, a bit like the death rates… but, as you’d expect, increasing wealth increases consumption:

For both the elite and commoners, consumption grows linearly with increasing wealth until wealth reaches the critical level w_{th}. But it grows faster for the elites, and reaches a higher level.
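In symbols, again paraphrasing, and introducing a baseline per capita consumption rate s just for illustration:

C_C = \min(1, w/w_{th}) \, s \, x_C

C_E = \min(1, w/w_{th}) \, \kappa \, s \, x_E

so per capita consumption by the elite is \kappa times that of the commoners, and both saturate once the wealth w reaches w_{th}.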

So, that’s the model… at least in this preliminary version of the paper.
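If you want to play with it, here is a minimal sketch in Python that integrates the four equations, using the death-rate and consumption functions paraphrased above. The parameter values are invented for illustration, not taken from the paper, and I’ve treated the threshold w_{th} as a constant, whereas in the paper, I believe, it depends on the populations:

```python
# Minimal sketch of the HANDY model as described above.  All parameter
# values are illustrative guesses, not the ones used in the paper, and
# the wealth threshold w_th is treated as a constant for simplicity.
import numpy as np
from scipy.integrate import solve_ivp

beta_C, beta_E = 0.03, 0.03        # birth rates
alpha_m, alpha_M = 0.01, 0.07      # minimum and maximum death rates
gamma, lam = 0.01, 100.0           # regeneration rate and capacity of nature
delta = 1.0e-4                     # depletion per commoner
kappa = 10.0                       # inequality factor
s = 5.0e-4                         # baseline per capita consumption rate
w_th = 5.0                         # wealth threshold

def death_rate(w, scale):
    # Drops linearly from alpha_M to alpha_m as wealth grows; scale = 1
    # for commoners, kappa for the elite (who reach the minimum death
    # rate already at the lower wealth w_th/kappa).
    return alpha_m + max(0.0, 1.0 - scale * w / w_th) * (alpha_M - alpha_m)

def handy(t, state):
    x_C, x_E, y, w = state
    C_C = min(1.0, w / w_th) * s * x_C           # consumption, commoners
    C_E = min(1.0, w / w_th) * kappa * s * x_E   # consumption, elite
    return [beta_C * x_C - death_rate(w, 1.0) * x_C,
            beta_E * x_E - death_rate(w, kappa) * x_E,
            gamma * y * (lam - y) - delta * x_C * y,
            delta * x_C * y - C_C - C_E]

sol = solve_ivp(handy, [0.0, 1000.0], [100.0, 1.0, 100.0, 0.0], max_step=1.0)
print("final (x_C, x_E, y, w):", np.round(sol.y[:, -1], 2))
```

Varying \delta, \kappa and the initial populations is enough to get a feel for the range of behaviours described below.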

Some solutions of the model

There are many parameters in this model, and many different things can happen depending on their values and the initial conditions. The paper investigates many different scenarios. I don’t have the energy to describe them all, so I urge you to skim it and look at the graphs.

I’ll just show you three. Here is one that Nafeez Ahmed mentioned, where civilization

appears to be on a sustainable path for quite a long time, but even using an optimal depletion rate and starting with a very small number of Elites, the Elites eventually consume too much, resulting in a famine among Commoners that eventually causes the collapse of society.

I can see why Ahmed would like to talk about this scenario: he’s written a book called A User’s Guide to the Crisis of Civilization and How to Save It. Clearly it’s worth putting some thought into risks of this sort. But how likely is this particular scenario compared to others? For that we’d need to think hard about how well this model matches reality.

It’s obviously a crude simplification of an immensely complex and unknowable system: the whole civilization on this planet. That doesn’t mean it’s fundamentally wrong! Its predictions could still be qualitatively correct. But to gain confidence in this, we’d need arguments that are not made in the draft paper I’ve seen. It says:

The scenarios most closely reflecting the reality of our world today are found in the third group of experiments (see section 5.3), where we introduced economic stratification. Under such conditions, we find that collapse is difficult to avoid.

But it would be nice to see a more careful approach to setting model parameters, justifying the simplifications built into the model, exploring what changes when some simplifications are reduced, and so on.

Here’s a happier scenario, where the parameters are chosen differently:

The main difference is that the depletion of resources per commoner, \delta, is smaller.

And here’s yet another, featuring cycles of prosperity, overshoot and collapse:

Tentative conclusions

I hope you see that I’m neither trying to ‘shoot down’ this model nor defend it. I’m just trying to understand it.

I think it’s very important—and fun—to play around with models like this, keep refining them, comparing them against each other, and using them as tools to help our thinking. But I’m not very happy that Nafeez Ahmed called this piece of work a “highly credible wake-up call” without giving us any details about what was actually done.

I don’t expect blog articles on the Guardian to feature differential equations! But it would be great if journalists who wrote about new scientific results would provide a link to the actual work, so people who want to dig deeper can do so. Don’t make us scour the internet looking for clues.

And scientists: if your results are potentially important, let everyone actually see them! If you think civilization could be heading for collapse, burying your evidence and your recommendations for avoiding this calamity in a closed-access Elsevier journal is not the optimal strategy to deal with the problem.

There’s been a whole side-battle over whether NASA actually funded this study:

• Keith Kloor, About that popular Guardian story on the collapse of industrial civilization, Collide-A-Scape, blog on Discover, March 21, 2014.

• Nafeez Ahmed, Did NASA fund ‘civilisation collapse’ study, or not?, Earth Insight, blog on The Guardian, 21 March 2014.

But that’s very boring compared to the fun of thinking about the model used in this study… and the challenging, difficult business of trying to think clearly about the risks of civilizational collapse.

Addendum

The paper is now freely available here:

• Safa Motesharrei, Jorge Rivas and Eugenia Kalnay, Human and nature dynamics (HANDY): modeling inequality and use of resources in the collapse or sustainability of societies, Ecological Economics 101 (2014), 90–102.


Markov Models of Social Change (Part 2)

5 March, 2014

guest post by Vanessa Schweizer

This is my first post to Azimuth. It’s a companion to the one by Alastair Jamieson-Lane. I’m an assistant professor at the University of Waterloo in Canada with the Centre for Knowledge Integration, or CKI. Through our teaching and research, the CKI focuses on integrating what appears, at first blush, to be drastically different fields in order to make the world a better place. The very topics I would like to cover today, which are mathematics and policy design, are an example of our flavour of knowledge integration. However, before getting into that, perhaps some background on how I got here would be helpful.

The conundrum of complex systems

For about eight years, I have focused on various problems related to long-term forecasting of social and technological change (long-term meaning in excess of 10 years). I became interested in these problems because they are particularly relevant to how we understand and respond to global environmental changes such as climate change.

In case you don’t know much about global warming or what the fuss is about, part of what makes the problem particularly difficult is that the feedback from the physical climate system to human political and economic systems is exceedingly slow. It is so slow that, under traditional economic and political analyses, an optimal policy strategy may appear to be to wait before making any major decisions – that is, wait for scientific knowledge and technologies to improve, or at least wait until the next election [1]. Let somebody else make the tough (and potentially politically unpopular) decisions!

The problem with waiting is that the greenhouse gases that scientists are most concerned about stay in the atmosphere for decades or centuries. They are also churned out by the gigatonne each year. Thus the warming trends that we have experienced for the past 30 years, for instance, are the cumulative result of emissions that happened not only recently but also long ago—in the case of carbon dioxide, as far back as the turn of the 20th century. The world in the 1910s was quainter than it is now, and as more economies around the globe industrialize and modernize, it is natural to wonder: how will we manage to power it all? Will we still rely so heavily on fossil fuels, which are the primary source of our carbon dioxide emissions?

Such questions are part of what makes climate change a controversial topic. Present-day policy decisions about energy use will influence the climatic conditions of the future, so what kind of future (both near-term and long-term) do we want?

Futures studies and trying to learn from the past

Many approaches can be taken to answer the question of what kind of future we want. An approach familiar to the political world is for a leader to espouse his or her particular hopes and concerns for the future, then work to convince others that those ideas are more relevant than someone else’s. Alternatively, economists do better by developing and investigating different simulations of economic developments over time; however, the predictive power of even these tools drops off precipitously beyond the 10-year time horizon.

The limitations of these approaches should not be too surprising, since any stockbroker will say that when making financial investments, past performance is not necessarily indicative of future results. We can expect the same problem with rhetorical appeals, or economic models, that are based on past performances or empirical (which also implies historical) relationships.

A different take on foresight

A different approach avoids the frustration of proving history to be a fickle tutor for the future. By setting aside the supposition that we must be able to explain why the future might play out a particular way (that is, to know the ‘history’ of a possible future outcome), alternative futures 20, 50, or 100 years hence can be conceptualized as different sets of conditions that may substantially diverge from what we see today and have seen before. This perspective is employed in cross-impact balance analysis, an algorithm that searches for conditions that can be demonstrated to be self-consistent [3].

Findings from cross-impact balance analyses have been informative for scientific assessments produced by the Intergovernmental Panel on Climate Change, or IPCC. To present a coherent picture of the climate change problem, the IPCC has coordinated scenario studies across economic and policy analysts as well as climate scientists since the 1990s. Prior to the development of the cross-impact balance method, these researchers had to do their best to identify appropriate ranges for rates of population growth, economic growth, energy efficiency improvements, etc. through their best judgment.

A retrospective using cross-impact balances on the first Special Report on Emissions Scenarios found that the researchers did a good job in many respects. However, they underrepresented the large number of alternative futures that would result in high greenhouse gas emissions in the absence of climate policy [4].

As part of the latest update to these coordinated scenarios, climate change researchers decided it would be useful to organize alternative futures according to socio-economic conditions that pose greater or fewer challenges to mitigation and adaptation. Mitigation refers to policy actions that decrease greenhouse gas emissions, while adaptation refers to reducing harms due to climate change or to taking advantage of benefits. Some climate change researchers argued that it would be sufficient to consider alternative futures where challenges to mitigation and adaptation co-varied, e.g. three families of futures where mitigation and adaptation challenges would be low, medium, or high.

Instead, cross-impact balances revealed that mixed-outcome futures—such as socio-economic conditions simultaneously producing fewer challenges to mitigation but greater challenges to adaptation—could not be completely ignored. This counter-intuitive finding, among others, brought the importance of quality of governance to the fore [5].

Although it is generally recognized that quality of governance—e.g. control of corruption and the rule of law—affects quality of life [6], many in the climate change research community have focused on technological improvements, such as drought-resistant crops, or economic incentives, such as carbon prices, for mitigation and adaptation. The cross-impact balance results underscored that should global patterns of quality of governance across nations take a turn for the worse, poor governance could stymie these efforts. This is because the influence of quality of governance is pervasive; where corruption is permitted at the highest levels of power, it may be permitted at other levels as well—including levels that are responsible for building schools, teaching literacy, maintaining roads, enforcing public order, and so forth.

The cross-impact balance study revealed this in the abstract, as summarized in the example matrices below. Alastair included a matrix like these in his post, where he explained that numerical judgments in such a matrix can be used to calculate the net impact of simultaneous influences on system factors. My purpose in presenting these matrices is a bit different, as the matrix structure can also explain why particular outcomes behave as system attractors.

In this example, a solid light gray square means that the row factor directly influences the column factor some amount, while white space means that there is no direct influence:

Dark gray squares along the diagonal have no meaning, since everything is perfectly correlated to itself. The pink squares highlight the rows for the factors “quality of governance” and “economy.” The importance of these rows is more apparent here; the matrix above is a truncated version of this more detailed one:


The pink rows are highlighted because of a striking property of these factors. They are the two most influential factors of the system, as you can see from how many solid squares appear in their rows. The direct influence of quality of governance is second only to the economy. (Careful observers will note that the economy directly influences quality of governance, while quality of governance directly influences the economy). Other scholars have meticulously documented similar findings through observations [7].

As a method for climate policy analysis, cross-impact balances fill an important gap between genius forecasting (i.e., ideas about the far-off future espoused by one person) and scientific judgments that, in the face of deep uncertainty, are overconfident (i.e. neglecting the ‘fat’ or ‘long’ tails of a distribution).

Wanted: intrepid explorers of future possibilities

However, alternative visions of the future are only part of the information that’s needed to create the future that is desired. Descriptions of courses of action that are likely to get us there are also helpful. In this regard, the post by Jamieson-Lane describes early work on modifying cross-impact balances for studying transition scenarios rather than searching primarily for system attractors.

This is where you, as the mathematician or physicist, come in! I have been working with cross-impact balances as a policy analyst, and I can see the potential of this method to revolutionize policy discussions—not only for climate change but also for policy design in general. However, as pointed out by entrepreneurship professor Karl T. Ulrich, design problems are NP-complete. Those of us with lesser math skills can be easily intimidated by the scope of such search problems. For this reason, many analysts have resigned themselves to ad hoc explorations of the vast space of future possibilities. However, some analysts like me think it is important to develop methods that do better. I hope that some of you Azimuth readers may be up for collaborating with like-minded individuals on the challenge!

References

The graph of carbon emissions is from reference [2]; the pictures of the matrices are adapted from reference [5]:

[1] M. Granger Morgan, Milind Kandlikar, James Risbey and Hadi Dowlatabadi, Why conventional tools for policy analysis are often inadequate for problems of global change, Climatic Change 41 (1999), 271–281.

[2] T.F. Stocker et al., Technical Summary, in Climate Change 2013: The Physical Science Basis. Contribution of Working Group I to the Fifth Assessment Report of the Intergovernmental Panel on Climate Change (2013), T.F. Stocker, D. Qin, G.-K. Plattner, M. Tignor, S.K. Allen, J. Boschung, A. Nauels, Y. Xia, V. Bex, and P.M. Midgley (eds.) Cambridge University Press, New York.

[3] Wolfgang Weimer-Jehle, Cross-impact balances: a system-theoretical approach to cross-impact analysis, Technological Forecasting & Social Change 73 (2006), 334–361.

[4] Vanessa J. Schweizer and Elmar Kriegler, Improving environmental change research with systematic techniques for qualitative scenarios, Environmental Research Letters 7 (2012), 044011.

[5] Vanessa J. Schweizer and Brian C. O’Neill, Systematic construction of global socioeconomic pathways using internally consistent element combinations, Climatic Change 122 (2014), 431–445.

[6] Daniel Kaufmann, Aart Kraay and Massimo Mastruzzi, Worldwide Governance Indicators (2013), The World Bank Group.

[7] Daron Acemoglu and James Robinson, Why Nations Fail: The Origins of Power, Prosperity, and Poverty. Website.


Markov Models of Social Change (Part 1)

24 February, 2014

guest post by Alastair Jamieson-Lane

The world is complex, and making choices in a complex world is sometimes difficult.

As any leader knows, decisions must often be made with incomplete information. To make matters worse, the experts and scientists who are meant to advise on these important matters are also doing so with incomplete information—usually limited to only one or two specialist fields. When decisions need to be made that are dependent on multiple real-world systems, and your various advisors find it difficult to communicate, this can be problematic!

The generally accepted approach is to listen to whichever advisor tells you the things you want to hear.

When such an approach fails (for whatever mysterious and inexplicable reason) it might be prudent to consider such approaches as Bayesian inference, analysis of competing hypotheses or cross-impact balance analysis.

Because these methods require experts to formalize their opinions in an explicit, discipline neutral manner, we avoid many of the problems mentioned above. Also, if everything goes horribly wrong, you can blame the algorithm, and send the rioting public down to the local university to complain there.

In this blog article I will describe cross-impact balance analysis and a recent extension to this method, explaining its use, as well as some basic mathematical underpinnings. No familiarity with cross-impact balance analysis will be required.

Wait—who is this guy?

Since this is my first time writing a blog post here, I hear introductions are in order.

Hi. I’m Alastair.

I am currently a Master’s student at the University of British Columbia, studying mathematics. In particular, I’m aiming to use evolutionary game theory to study academic publishing and hiring practices… and from there hopefully move on to studying governments (we’ll see how the PhD goes). I figure that both those systems seem important to solving the problems we’ve built for ourselves, and both may be under increasing pressure in coming years.

But that’s not what I’m here for today! Today I’m here to tell the story of cross-impact balance analysis, a tool I was introduced to at the complex systems summer school in Santa Fe.

The story

Suppose (for example) that the local oracle has foretold that burning the forests will anger the nature gods

… and that if you do not put restrictions in place, your crops will wither and die.

Well, that doesn’t sound very good.

The merchant’s guild claims that such restrictions will cause all trade to grind to a halt.

Your most trusted generals point out that weakened trade will leave you vulnerable to invasion from all neighboring kingdoms.

The sailors’ guild adds that the wrath of Poseidon might make nautical trade more difficult.

The alchemists propose alternative sources of heat…

… while the druids propose special crops as a way of resisting the wrath of the gods…

… and so on.

Given this complex web of interaction, it might be a good time to consult the philosophers.

Overview of CIB

This brings us to the question of what CIB (Cross-Impact Balance) analysis is, and how to use it.

At its heart, CIB analysis demands this: first, you must consider what aspects of the world you are interested in studying. This could be environmental or economic status, military expenditure, or the laws governing genetic modification. These we refer to as “descriptors”. For each “descriptor” we must create a list of possible “states”.

For example, if the descriptor we are interested in were “global temperature change” our states might be “+5 degrees”, “+4 degrees” and so on down to “-2 degrees”.

The states of a descriptor are not meant to be all-encompassing, or offer complete detail, and they need not be numerical. For example, the descriptor “Agricultural policy” might have such states as “Permaculture subsidy”, “Genetic engineering”, “Intensive farming” or “No policy”.

For each of these states, we ask our panel of experts whether such a state would increase or decrease the tendency for some other descriptor to be in a particular state.

For example, we might ask: “On a scale from -3 to 3, how much does the agricultural policy of Intensive farming increase the probability that we will see global temperature increases of +2 degrees?”

By combining the opinions of a variety of experts in each field, and weighting based on certainty and expertise, we are able to construct matrices, much like the one below:

The above matrix is a description of my ant farm. The health of my colony is determined by the population, income, and education levels of my ants. For a less ant-focused version of the above, please refer to:

• Elisabeth A. Lloyd and Vanessa J. Schweizer, Objectivity and a comparison of methodological scenario approaches for climate change research, Synthese (2013).

For any possible combination of descriptor states (referred to as a scenario) we can calculate the total impact on all possible descriptors. In the current scenario we have low population, high income and medium education (see highlighted rows).

Because the current scenario has high ant income, this strongly influences us to have low population (+3) and prevents a jump to high population (-3). This, combined with the non-influence from education (zeros), leads to low population being the most favoured state for our population descriptor. Thus we expect no change. We say this is “consistent”.

Education, however, tells a different story. Here we have a strong influence towards high education levels (summing the column gives a total of 13). Thus our current state (medium education) is inconsistent, and we would expect the abundance of ant wealth to lead to an improvement in the ant schooling system.

Classical CIB analysis acts as a way to classify which hypothetical situations are consistent, and which are not.

Now, it is all well and good to claim that some scenarios are stable, but the real use of such a tool is in predicting (and influencing) the future.

By applying a deterministic rule that determines how inconsistencies are resolved, we can produce a “succession rule”. The most straightforward example is to replace all descriptor states with whichever state is most favoured by the current scenario. In the example above we would switch to “Low population, medium income, high education”. A generation later we would switch back to “Low population, High income, medium education”, soon finding ourselves trapped in a loop.

All such rules will always lead to either a loop or a “sink”: a self-consistent scenario which is succeeded only by itself.
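Here is a small sketch in Python of this deterministic succession. The three descriptors, their states and the judgment scores are invented for illustration (the full ant-farm matrix isn’t reproduced above), but the mechanics are exactly the rule just described: sum the impacts on each state, move every descriptor to its most favoured state, and repeat until the trajectory revisits a scenario:

```python
# Toy deterministic CIB succession.  Descriptors, states and scores are
# invented for illustration.  The rule: move each descriptor to its most
# favoured state, repeating until we hit a sink or a loop.
descriptors = {                     # descriptor -> its possible states
    "population": ["low", "high"],
    "income":     ["low", "high"],
    "education":  ["low", "high"],
}
states = [(d, s) for d, ss in descriptors.items() for s in ss]
idx = {st: k for k, st in enumerate(states)}

# M[i][j]: how strongly state i encourages state j (made-up judgments).
M = [[0] * len(states) for _ in states]
M[idx[("income", "high")]][idx[("population", "low")]] = 3
M[idx[("income", "high")]][idx[("education", "high")]] = 2
M[idx[("education", "high")]][idx[("income", "high")]] = 2
M[idx[("population", "high")]][idx[("income", "low")]] = 1

def impact(scenario, target):
    """Total impact of a scenario (dict: descriptor -> state) on one state."""
    return sum(M[idx[(d, s)]][idx[target]] for d, s in scenario.items())

def successor(scenario):
    """Replace each descriptor's state by the one most favoured now."""
    return {d: max(ss, key=lambda st: impact(scenario, (d, st)))
            for d, ss in descriptors.items()}

trajectory = []
scenario = {"population": "high", "income": "low", "education": "low"}
while scenario not in trajectory:   # stop once a scenario repeats
    trajectory.append(scenario)
    scenario = successor(scenario)
print("visited:", trajectory)
print("repeats:", scenario)
```

With these made-up scores the trajectory falls into an all-‘low’ sink after one step; changing the scores (or the update rule) changes which sinks and loops appear, which is rather the point.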

So, how can we use this? How will this help us deal with the wrath of the gods (or ant farms)?

Firstly: we can identify loops and consistent scenarios which we believe are most favourable. It’s all well and good imagining some future utopia, but if it is inconsistent with itself, and will immediately lead to a slide into less favourable scenarios, then we should not aim for it; we should find the most favourable realistic scenario and aim for that one.

Secondly: we can examine all our consistent scenarios, and determine whose “basin of attraction” we find ourselves in: that is, which scenario we are likely to end up in.

Thirdly: Suppose we could change our influence matrix slightly? How would we change it to favour scenarios we most prefer? If you don’t like the rules, change the game—or at the very least find out WHAT we would need to change to have the best effect.

Concerns and caveats

So… what are the problems we might encounter? What are the drawbacks?

Well, first of all, we note that the real world does not tend to reach any form of eternal static scenario or perfect cycle. The fact that our model does might be regarded as reason for suspicion.

Secondly, although the classical method contains succession analysis, this analysis is not necessarily intended as a completely literal “prediction” of events. It gives a rough idea of the basins of attraction of our cycles and consistent scenarios, but is also somewhat arbitrary. What succession rule is most appropriate? Do all descriptors update simultaneously? Or only the one with the most “pressure”? Are our descriptors given in order of malleability, and only the fastest changing descriptor will change?

Thirdly, in collapsing our description of the world down into a finite number of states we are ignoring many tiny details. Most of these details are not important, but in assuming that our succession rules are deterministic, we imply that these details have no impact whatsoever.

If we instead treat succession as a somewhat random process, the first two of these problems can be solved, and the third somewhat reduced.

Stochastic succession

In the classical CIB succession analysis, some rule is selected which deterministically decides which scenario follows from the present. Stochastic succession analysis instead tells us the probability that a given scenario will lead to another.

The simplest example of a stochastic succession rule is to simply select a single descriptor at random each time step, and only consider updates that might happen to that descriptor. This we refer to as dice succession. This (in some ways) represents hidden information: two systems that might look identical on the surface from the point of view of our very blockish CIB analysis might be different enough underneath to lead to different outcomes. If we have a shaky agricultural system, but a large amount of up-and-coming research, then which of these two factors becomes important first is down to the luck of the draw. Rather than attempt to model this fine detail, we instead merely accept it and incorporate this uncertainty into our model.

Even this most simplistic change leads to dramatic effects on our system. Most importantly, almost all cycles vanish from our results, as forks in the road allow us to diverge from the path of the cycle.

We can take stochastic succession further and consider more exotic rules for our transitions, ones that allow any transition to take place, not merely those that are most favored. For example:

P(x,y) = A e^{I_x(y)/T}

Here x is our current scenario, y is some possible future scenario, and I_x(y) is the total impact score of y from the perspective of x. A is a simple normalizing constant, and T is our system’s temperature. High temperature systems are dominated by random noise, while low temperature systems are dominated by the influences described by our experts.

Impact score is calculated by summing the impact of each state of our current scenario on each state of our target scenario. For example, for the above, suppose we want to find I_x(y) when x is the given scenario “Low population, High income, medium education” and y is the scenario “Medium population, medium income, High education”. We consider all numbers that are in rows which are states of x and in columns that are states of y. This would give:

I_x(y)= (0+0+0) + (-2 +0 +10) +(6+7+0) = 21

Here each bracket refers to the sum of a particular column. More generically we can write the formula as:

\displaystyle{ I_x(y)= \sum_{i \subset x, \;j \subset y} M_{i,j} }

Here M_{i,j} refers to an entry in our cross-impact balance matrix, i and j are both states, and i \subset x reads as “i is a state of x”.

We refer to this function for computing transition probabilities as the Boltzmann succession law, due to its similarity to the Boltzmann distribution found in physics. We use it merely as an example, and by no means wish to imply that we expect the transitions for our true system to act in a precisely Boltzmann-like manner. Alternative functions can, and should, be experimented with. The Boltzmann succession law is, however, an effective example, and it has a number of nice properties: P(x,y) is always positive, it is unchanged by adding a constant to every element of the cross-impact balance matrix, it contains adjustable parameters, and the underlying weight e^{I_x(y)/T} is unbounded above.

The Boltzmann succession rule is what I will refer to as fully stochastic: it allows transitions even against our experts’ judgement (with low probability). This is in contrast to dice succession, which picks a direction at random but still contains scenarios from which our system cannot escape.
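Continuing the toy example from the earlier sketch, here is the Boltzmann law in code; again, every number is invented. We enumerate all scenarios, compute the impact scores I_x(y), normalize e^{I_x(y)/T} into a transition matrix, and extract the stationary distribution of the resulting Markov chain, which is exactly the ‘after a very long time’ question discussed in the next section:

```python
# Boltzmann succession on a toy CIB matrix (all numbers invented).
# Build P(x,y) ~ exp(I_x(y)/T) over all scenarios, then compute the
# stationary distribution: the long-run probability of each scenario.
import itertools
import numpy as np

descriptors = [("population", ["low", "high"]),
               ("income",     ["low", "high"]),
               ("education",  ["low", "high"])]
states = [(d, s) for d, ss in descriptors for s in ss]
idx = {st: k for k, st in enumerate(states)}

M = np.zeros((len(states), len(states)))    # made-up expert judgments
M[idx[("income", "high")], idx[("population", "low")]] = 3
M[idx[("income", "high")], idx[("education", "high")]] = 2
M[idx[("education", "high")], idx[("income", "high")]] = 2
M[idx[("population", "high")], idx[("income", "low")]] = 1

names = [d for d, _ in descriptors]
scenarios = list(itertools.product(*[ss for _, ss in descriptors]))

def impact_score(x, y):
    """I_x(y): sum of matrix entries from states of x to states of y."""
    return sum(M[idx[(names[i], xi)], idx[(names[j], yj)]]
               for i, xi in enumerate(x) for j, yj in enumerate(y))

T = 1.0                                     # temperature: noise vs. expertise
P = np.array([[np.exp(impact_score(x, y) / T) for y in scenarios]
              for x in scenarios])
P /= P.sum(axis=1, keepdims=True)           # rows now sum to 1

# Stationary distribution = left eigenvector of P with eigenvalue 1.
vals, vecs = np.linalg.eig(P.T)
pi = np.real(vecs[:, np.argmax(np.real(vals))])
pi /= pi.sum()
for prob, sc in sorted(zip(pi, scenarios), reverse=True):
    print(f"{prob:.3f}", dict(zip(names, sc)))
```

Raising T flattens the distribution toward uniform noise; lowering it concentrates probability on the scenarios the experts’ judgments favour.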

Effects of stochastic succession

‘Partially stochastic’ processes such as the dice rule have very limited effect on the long term behavior of the model. Aside from removing most cycles, they behave almost exactly like our deterministic succession rules. So, let us instead discuss the more interesting fully stochastic succession rules.

In the fully stochastic system we can ask “after a very long time, what is the probability we will be in scenario x?”

By asking this question we can get some idea of the relative importance of all our future scenarios and states.

For example, if the scenario “high population, low education, low income” has a 40% probability in the long term, while most other scenarios have a probability of 0.2%, we can see that this scenario is crucial to the understanding of our system. Often scenarios already identified by deterministic succession analysis are the ones with the greatest long term probability—but by looking at long term probability we also gain information about the relative importance of each scenario.

In addition, we can encounter scenarios which are themselves inconsistent, but form cycles and/or clusters of interconnected scenarios. We can also notice scenarios that, while technically ‘consistent’ under the deterministic rules, are only barely so, and have limited weight due to a limited basin of attraction. We might identify scenarios that seem familiar in the real world, but are apparently highly unlikely in our analysis, indicating either that we should expect change… or perhaps suggesting a missing descriptor or a cross-impact in need of tweaking.

Armed with such a model, we can investigate what we can do to increase the short term and long term likelihood of desirable scenarios, and decrease the likelihood of undesirable scenarios.

Some further reading

As a last note, here are a few freely available resources that may prove useful. For a more formal introduction to CIB, try:

• Wolfgang Weimer-Jehle, Cross-impact balances: a system-theoretical approach to cross-impact analysis, Technological Forecasting & Social Change 73 (2006), 334–361.

• Wolfgang Weimer-Jehle, Properties of cross-impact balance analysis.

You can find free software for doing a classical CIB analysis here:

• ZIRIUS, ScenarioWizard.

ZIRIUS is the Research Center for Interdisciplinary Risk and Innovation Studies of the University of Stuttgart.

Here are some examples of CIB in action:

• Gerhard Fuchs, Ulrich Fahl, Andreas Pyka, Udo Staber, Stefan Voegele and Wolfgang Weimer-Jehle, Generating innovation scenarios using the cross-impact methodology, Department of Economics, University of Bremen, Discussion-Papers Series No. 007-2008.

• Ortwin Renn, Alexander Jäger, Jürgen Deuschle and Wolfgang Weimer-Jehle, A normative-functional concept of sustainability and its indicators, International Journal of Global Environmental Issues, 9 (2008), 291–317.

Finally, this page contains a more complete list of articles, both practical and theoretical:

• ZIRIUS, Cross-impact balance analysis: publications.


Global Climate Change Negotiations

28 October, 2013


There were many interesting talks at the Interdisciplinary Climate Change Workshop last week—too many for me to describe them all in detail. But I really must describe the talks by Radoslav Dimitrov. They were full of important things I didn’t know. Some are quite promising.

Radoslav S. Dimitrov is a professor at the Department of Political Science at Western University. What’s interesting is that he’s also been a delegate for the European Union at the UN climate change negotiations since 1990! His work documents the history of climate negotiations from behind closed doors.

Here are some things he said:

• In international diplomacy, there is no questioning the reality and importance of human-caused climate change. The question is just what to do about it.

• Governments go through every line of the IPCC reports twice. They cannot add anything to what the scientists have written, but they can delete things. All governments have veto power. This makes the IPCC reports more conservative than they otherwise would be: “considerably diluted”.

• The climate change negotiations have surprised political scientists in many ways:

1) There is substantial cooperation even without the USA taking the lead.

2) Developing countries are accepting obligations, with many overcomplying.

3) There has been action by many countries and subnational entities without any treaty obligations.

4) There have been repeated failures of negotiation despite policy readiness.

• In 2011, China and Saudi Arabia rejected the final agreement at Durban as inadequate. Only Canada, the United States and Australia had been resisting stronger action on climate change. Canada abandoned the Kyoto Protocol the day after the collapse of negotiations at Durban. They publicly blamed China, India and Brazil, even though Brazil had accepted dramatic emissions cuts and China had, for the first time, accepted limits on emissions. Only India had taken a “hardline” attitude. Publicly blaming some other country for the collapse of negotiations is a no-no in diplomacy, so the Chinese took this move by Canada as a slap in the face. In return, they blamed Canada and “the West” for the collapse of Durban.

• Dimitrov is studying the role of persuasion in diplomacy, recording and analyzing hundreds of hours of discussions. Countries try to change each other’s minds, not just behavior.

• The global elite do not see climate change negotiations as an environmental issue. Instead, they feel they are “negotiating the future economy”. They focus on the negative economic consequences of inaction, and the economic benefits of climate action.

• In particular, the EU has managed to persuade many countries that climate change is worth tackling now. They do this with economic, not environmental arguments. For example, they argue that countries who take the initiative will have an advantage in future employment, getting most of the “green jobs”. Results include China’s latest 5-year plan, which some have called “the most progressive legislation in history”, and also Japan’s plan for a 60-80% reduction of carbon emissions. The EU itself also expects big returns on investment in climate change.

I apologize for any oversimplifications or downright errors in my notes here.

References

You can see some slides for Dimitrov’s talks here:

• Radoslav S. Dimitrov, A climate of change.

For more, try reading this article, which is free online:

• Radoslav S. Dimitrov, Inside Copenhagen: the state of climate governance, Global Environmental Politics 10 (2010), 18–24.

and these more recent book chapters, which are apparently not as easy to get:

• Radoslav S. Dimitrov, Environmental diplomacy, in Handbook of Global Environmental Politics, edited by Paul Harris, Routledge, forthcoming as of 2013.

• Radoslav S. Dimitrov, International negotiations, in Handbook of Global Climate and Environmental Policy, edited by Robert Falkner, Wiley-Blackwell forthcoming as of 2013.

• Radoslav S. Dimitrov, Persuasion in world politics: The UN climate change negotiations, in Handbook of Global Environmental Politics, edited by Peter Dauvergne, Edward Elgar Publishing, Cheltenham, UK, 2012.

• Radoslav S. Dimitrov, American prosperity and the high politics of climate change, in Prospects for a Post-American World, edited by Sabrina Hoque and Sean Clark, University of Toronto Press, Toronto, 2012.


What To Do About Climate Change?

23 October, 2013

Here are the slides for my second talk in the Interdisciplinary Climate Change Workshop at the Balsillie School of International Affairs:

What To Do About Climate Change?

Like the first it’s just 15 minutes long, so it’s very terse.

I start by noting that slowing the rate of carbon burning won’t stop global warming: most carbon dioxide stays in the air over a century, though individual molecules come and go. Global warming is like a ratchet.

So, we will:

1) leave fossil fuels unburnt,

2) sequester carbon,

3) actively cool the Earth, and/or

4) live with a hotter climate.

Of course we may do a mix of these… though we’ll certainly do some of option 4), and we might do only this one. My goal in this short talk is not mainly to argue for a particular mix! I mainly want to present some information about the various options.

I do not say anything about the best ways to do option 4); I merely provide some arguments that we’ll wind up doing a lot of this one… because I’m afraid some of the participants in the workshop may be in denial about that.

I also argue that we should start doing research on option 3), because like it or not, I think people are going to become very interested in geoengineering, and without enough solid information about it, people are likely to make bad mistakes: for example, diving into ambitious projects out of desperation.

As usual, if you click on a phrase in blue in this talk, you can get more information.

I want to really thank everyone associated with Azimuth for helping find and compile the information used in this talk! It’s really been a team effort!


What is Climate Change?

21 October, 2013

Here are the slides for a 15-minute talk I’m giving on Friday for the Interdisciplinary Climate Change Workshop at the Balsillie School of International Affairs:

What is Climate Change?

This will be the first talk of the workshop. Many participants are focused on diplomacy and economics. None are officially biologists or ecologists. So, I want to set the stage with a broad perspective that fits humans into the biosphere as a whole.

I claim that climate change is just one aspect of something bigger: a new geological epoch, the Anthropocene.

I start with evidence that human civilization is having such a big impact on the biosphere that we’re entering a new geological epoch.

Then I point out what this implies. Climate change is not an isolated ‘problem’ of the sort routinely ‘solved’ by existing human institutions. It is part of a shift from the exponential growth phase of human impact on the biosphere to a new, uncharted phase.

In this new phase, institutions and attitudes will change dramatically, like it or not:

Before, we could treat ‘nature’ as distinct from ‘civilization’. Now, there is no nature separate from civilization.

Before, we might imagine ‘economic growth’ to be an almost unalloyed good, with many externalities disregarded. Now, many forms of growth have reached the point where they push the biosphere toward tipping points.

In a separate talk I’ll say a bit about ‘what we can do about it’. So, nothing about that here. You can click on words in blue to see sources for the information.


The EU’s Biggest Renewable Energy Source

18 September, 2013

Puzzle. The European Union has a goal of producing 20% of all its energy from renewable sources by 2020. Right now, which source of renewable energy does the EU use most?

1) wind
2) solar
3) hydropower
4) tides
5) geothermal
6) trash
7) wood
8) bureaucrats in hamster wheels
9) trolls

Think about it a bit before reading further!

The Economist writes:

Which source of renewable energy is most important to the European Union? Solar power, perhaps? (Europe has three-quarters of the world’s total installed capacity of solar photovoltaic energy.) Or wind? (Germany trebled its wind-power capacity in the past decade.) The answer is neither. By far the largest so-called renewable fuel used in Europe is wood.

In its various forms, from sticks to pellets to sawdust, wood (or to use its fashionable name, biomass) accounts for about half of Europe’s renewable-energy consumption. In some countries, such as Poland and Finland, wood meets more than 80% of renewable-energy demand. Even in Germany, home of the Energiewende (energy transformation) which has poured huge subsidies into wind and solar power, 38% of non-fossil fuel consumption comes from the stuff.

I haven’t yet found confirmation of this on the EU’s own websites, but this page:

• Eurostat, Renewable energy statistics.

says that in 2010, 67.6% of primary renewable energy production in the EU came from “biomass and waste”. This is at least compatible with The Economist’s claims. Hydropower accounted for 18.9%, wind for 7.7%, geothermal for 3.5% and solar for just 2.2%.
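
As a quick sanity check, these shares do leave room for The Economist’s claim that wood is about half of EU renewable energy: wood would then need to be roughly three quarters of the “biomass and waste” category. A tiny script makes the arithmetic explicit (keeping in mind that Eurostat measures production while The Economist talks about consumption):

```python
# Eurostat 2010 shares of primary renewable energy production in the EU.
shares = {"biomass and waste": 67.6, "hydropower": 18.9,
          "wind": 7.7, "geothermal": 3.5, "solar": 2.2}

print(f"total: {sum(shares.values()):.1f}%")  # 99.9%, so the list is complete

# If wood really is ~50% of renewable energy, as The Economist claims,
# it must make up this fraction of the biomass-and-waste category:
wood_claim = 50.0
print(f"wood as share of biomass and waste: "
      f"{wood_claim / shares['biomass and waste']:.0%}")  # ~74%, plausible
```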

It seems that because wood counts as renewable energy in the EU, and there are big incentives to increase the use of renewable energy, demand for wood is booming. According to The Economist, imports of wood pellets into the EU rose by 50% in 2010 alone. They say that thanks to Chinese as well as EU demand, global trade in these pellets could rise five- or sixfold, from 10-12 million tonnes a year now to 60 million tonnes by 2020.

Wood from tree farms may be approximately carbon-neutral, but turning it into pellets takes energy… and importing wood pellets takes more. The EU may be making a mistake here.

Or maybe not.
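
Here’s one way to weigh both sides, as a back-of-envelope sketch. Every number below is an assumption I’ve picked for illustration; none of them come from The Economist or Eurostat:

```python
# Rough carbon accounting for imported wood pellets (all assumed values).
ENERGY_PER_TONNE_GJ = 17.0    # assumed heating value of pellets
PROCESSING_FRACTION = 0.10    # assumed energy spent drying and milling
SHIP_G_CO2_PER_T_KM = 15.0    # assumed bulk-carrier emissions
DISTANCE_KM = 7000.0          # e.g. southeastern US to Rotterdam

delivered_gj = ENERGY_PER_TONNE_GJ * (1 - PROCESSING_FRACTION)
shipping_kg_co2 = SHIP_G_CO2_PER_T_KM * DISTANCE_KM / 1000.0

print(f"energy delivered per tonne: {delivered_gj:.1f} GJ")     # 15.3 GJ
print(f"shipping CO2 per tonne:     {shipping_kg_co2:.0f} kg")  # 105 kg

# Coal emits roughly 95 kg CO2 per GJ, so displacing coal would avoid
# about 15.3 * 95 = 1450 kg CO2 per tonne of pellets. On these
# assumptions the supply-chain overhead is small; the real question is
# whether the wood itself is carbon-neutral on the relevant timescale.
```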

Either way, it’s interesting that we always hear about the rising use of wind and solar in the EU, but not about wood.

Can you find more statistics or well-informed discussions about wood as a renewable energy source?

Here’s the article:

• Wood: the fuel of the future, The Economist, 6 April 2013.

If its facts are wrong, I’d like to know.


P.S. – This is the 400th post on this blog!


Localizing and Networking Basic Technology

8 May, 2013

guest post by Iuval Clejan

Natural philosophy (aka science) is distinguished from pure philosophy or mathematics by coupling theory to experiment. Engineering is distinguished from science in its focus on solving practical problems rather than merely coming up with more accurate models of the universe. Climate change will not be fixed by pure philosophy or argumentation. We need to use the methods of science and engineering to make progress towards a solution. The problem is complicated and involves not just climate dynamics and ecology, but psychology, economics and technology. Besides theory and experiment, we now have the tool of simulation. I propose a think-tank (or more properly, a think/do/simulate-tank) analogous to the Manhattan Project, which developed the first atomic bomb. However, this project would involve social and physical scientists, computer programmers, engineers, farmers and craftspeople who are trying to collaboratively solve the problem of how to provide food, shelter, water, clothes, medicine and recreation for a self-contained village in a sustainable way. Sustainability has psychological dimensions, not just ecological. For example, it implies that people would want to keep living in this village, or similar villages. If we are interested in sustainability beyond the initial village, then sustainability implies replicability—that the village would inspire many other people to live similarly.

Initial outputs of this project would be well-founded suggestions regarding what kinds of production skills are needed and how to effectively network them, how many people, how much land, how much time spent on production in order to achieve village-scale independence and sustainability. An eventual outcome would be an actual demonstration of a functional village.

Why village? The word village is used here to mean a group of people who are economically networked in isolation from the rest of the global economy. It also implies choosing a particular geographic location, so not all outputs would be transferable to other locations, though with the initial simulation stage many locations could be tried.

Why economic isolation? Without putting a boundary on the experiment, the problem is too complex, even for simulation. Entropy reduction is the same reason cells have membranes and scientists have labs. The membrane could be permeable to sunlight, wind (and emissions) and water, but at first it might be simpler to keep it impermeable to economic exchange. In addition, it is easy to externalize all unsustainable practices without a membrane. But the size of the membrane is not predetermined. One possible conclusion might be that the village has to be the size of the whole earth. Another reason for starting with a village is that changes in biological (and probably other complex) systems always proceed from small populations that can spread out by replication. It is more practical to achieve a global change in lifestyle and technology starting with a small group of willing people who can then inspire others by example, rather than try to impose a change on a large population, the way fascist and communist experiments have proceeded. Another reason for keeping things smaller and more local is that a stronger feedback between production and consumption may arise, which would regulate unsustainable consumption, because the environmental, social and psychological costs of production are visible in the village, as opposed to hidden or abstracted from the consumers. There are other reasons for localization (e.g. resilience, freedom, more meaningful employment for more people, better relations among people or between people and nature), less directly related to climate change, and more speculative.

This is probably the place to admit my main bias. I am a Gandhian Luddite (who has a PhD in physics and has worked as a semiconductor engineer and a molecular biologist), not the angry, machine-smashing kind, and I like not only to tinker with technology, but to think about how it affects people and nature. I don’t think all technology can be equated with progress. I call this project the Luddite Manhattan Project (or the Localizing and Networking Basic Technology project) for that reason and because it parallels the project that produced the nuclear bomb. I think that the craftspeople and farmers would contribute more to this project than the scientists and engineers. I think that in the multidimensional optimization of technology, we have focused too much on efficiency (disregarding other human values) and that the industrial revolution was largely a mistake (though some good things came out of it, like global communication). If we focus on other human values, we can optimize technology better. I think that localism of basic-needs production (when coupled to non-technological things like democracy) is a constraint from which many other good things such as sustainability, full, meaningful employment, freedom, and good social relations would follow, though it too can be taken to extremes. Given my bias, I suspect that the kind of technology network that would be most sustainable would be pre-industrial, with a few modern innovations. If we really did the book-keeping accurately, we would probably find that industrial production is unsustainable. Or rather, we would find that pre-industrial production can be sustainable, while current industrial production is not (I leave open the possibility that industrial production might be sustainable in the future, with new innovations, but even then it tramples too many human values). But these conclusions would be outputs of the project, not pre-assumptions or inputs of the project. I welcome some discussion of these ideas, followed by computation, testing and implementation.

The technical part of the project is basically a networking problem. It would allow initial imports into a specific location (in a way that preserves replicability—that is, the village must not hog a disproportionate fraction of resources) and then network existing technologies so that the system is self-sustaining. What one craftsperson produces, others in the village must use, so that the village can continue in perpetuity. A blacksmith needs some fuel, but also customers who need his products and can exchange stuff that he needs. A cooper is mostly useless in the current industrial economy, but would probably find some use in a local village economy, where people need ways to store water and other liquids.
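
To make the networking problem concrete, here is a minimal sketch of the kind of closure check the project’s simulation stage might start with. The crafts, goods and ‘natural inputs’ below are made-up examples of mine, not part of the actual proposal:

```python
# Each craft consumes some goods and produces others. The village is
# 'closed' if every needed good is made by some craft, or is a natural
# input like sunlight or rain that crosses the membrane for free.
crafts = {
    "farmer":     {"needs": {"tools", "barrels"},      "makes": {"food", "wood"}},
    "blacksmith": {"needs": {"food", "charcoal"},      "makes": {"tools"}},
    "cooper":     {"needs": {"food", "wood", "tools"}, "makes": {"barrels"}},
    "collier":    {"needs": {"food", "wood"},          "makes": {"charcoal"}},
}
natural_inputs = {"sunlight", "rain"}

produced = {good for c in crafts.values() for good in c["makes"]}
missing = {(name, need)
           for name, c in crafts.items()
           for need in c["needs"]
           if need not in produced | natural_inputs}

print("village is closed" if not missing else f"missing links: {missing}")
```

A real version would also track quantities, labor time and land, but even this boolean check would flag missing technology links of the kind the project worries about.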

Here are some typical challenges and questions the project would face: How can antibiotics be made on a village scale with no external inputs? What can’t be made, and can we find substitutes? Are there missing technology links, and can we invent them, or do we need to start with another scenario? What food needs to be produced to provide basic caloric needs to all inhabitants of the village? How much area is required? How can water be captured and transported without plastic or rubber? How much carbon is emitted in production of everything? Where does garbage go? How can metals be recycled? Can plastic be produced? Can electronics be produced? Is there enough time for art, science, scholarship and other forms of edifying human activity? What kind of economic systems work? Is there an optimal one as far as sustainability, or is it a matter of personal preference? These are all questions that can be tackled, if we face them with curiosity and realism, instead of with fear and the kind of magical thinking that most people have towards technology and other things they don’t understand. I’ve heard that Leonardo da Vinci was the last man to understand the technology of his age, but we have computers to help us.

It might be appropriate at this stage to mention that I do not advocate giving up entirely the industrial mode of production, or the global trade it requires. The Localizing and Networking Basic Technology project would address only food, shelter, water, medicine, all the subsidiary crafts necessary to sustain these, and a few edifying human activities like art, music and scholarship. Computers and internet hardware are almost certainly best left to industrial production, and so are cars, airplanes (but the need for these will drastically decrease if this project is successful), some of the parts for particle accelerators and fancy biotech equipment, etc.

The initial computational stage of the project could model itself on online multiplayer games like Warcraft and planning games like Sim City (I have tried to contact Will Wright, to no avail). I do not play these games (I prefer simple low tech games personally), but I see the usefulness of online collaboration and computation for this project, as a sort of in-silico evolution. Programmers and mathematicians could set up the software to allow both online collaboration and some central planning. I think the simplest solutions should be tried first, i.e. the most primitive technologies, like hunting and gathering. My educated guess is that they will be shown incapable of providing basic needs given the current world population. The same conclusion would probably follow for current industrial production, except the incapacity would be with regards to sustainability. I predict the sweet spot where both sustainability and capacity to “feed the world” (meaning provide a decent life) would be achieved by pre-industrial, agrarian and craft-based production.

I am totally willing to have this experiment prove me wrong about my anti-industrialization bias. With regards to scientific experimentation, there need to be well-posed hypotheses that can be proven wrong, and good controls. The engineering approach is an alternative. Who is willing to work on this project? Let’s make amends for unleashing the horror of the Bomb on the earth, tackle climate change realistically and have some technical fun. For further information please see:

• Iuval Clejan, Luddite Manhattan Project, first stage, 16 April 2012.

• Iuval Clejan, A proposal for funding a blueprint of a village-based technology ecosystem, 5 February 2012.


Energy and the Environment – What Physicists Can Do

25 April, 2013

The Perimeter Institute is a futuristic-looking place where over 250 physicists are thinking about quantum gravity, quantum information theory, cosmology and the like. Since I work on some of these things, I was recently invited to give the weekly colloquium there. But I took the opportunity to try to rally them into action:

Energy and the Environment: What Physicists Can Do. Watch the video or read the slides.

Abstract. The global warming crisis is part of a bigger transformation in which humanity realizes that the Earth is a finite system and that our population, energy usage, and the like cannot continue to grow exponentially. While politics and economics pose the biggest challenges, physicists are in a good position to help make this transition a bit easier. After a quick review of the problems, we discuss a few ways physicists can help.

On the video you can hear me say a lot of stuff that’s not on the slides: it’s more of a coherent story. The advantage of the slides is that you can click on anything in blue to get more information. So for example, when I say that solar power capacity has been growing annually by 75% in recent years, you can see where I got that number.
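
For example, the arithmetic behind sustained 75% annual growth is easy to check (assuming, unrealistically, that the rate stays constant):

```python
import math

# At a constant growth rate r, capacity multiplies by (1+r) each year,
# so it takes log(k)/log(1+r) years to grow by a factor of k.
r = 0.75
print(f"doubling time: {math.log(2) / math.log(1 + r):.1f} years")    # ~1.2
print(f"100x growth:   {math.log(100) / math.log(1 + r):.1f} years")  # ~8.2
# Growth this fast obviously can't continue forever, which is exactly
# the finite-Earth point the talk is making.
```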

I was pleased by the response to this talk. Naturally, it was not a case of physicists saying “okay, tomorrow I’ll quit working on the foundations of quantum mechanics and start trying to improve quantum dot solar cells.” It’s more about getting them to see that huge problems are looming ahead of us… and to see the huge opportunities for physicists who are willing to face these problems head-on, starting now. Work on energy technologies, the smart grid, and ‘ecotechnology’ is going to keep growing. I think a bunch of the younger folks, at least, could see this.

However, perhaps the best immediate outcome of this talk was that Lee Smolin introduced me to Manjana Milkoreit. She’s at the Balsillie School of International Affairs at the University of Waterloo, practically next door to the Perimeter Institute. She works on “climate change governance, cognition and belief systems, international security, complex systems approaches, especially threshold behavior, and the science-policy interface.”

So, she knows a lot about the all-important human and political side of climate change. Right now she’s interviewing diplomats involved in climate treaty negotiations, trying to see what they believe about climate change. And it’s very interesting!

In my next post, I’ll talk about something she pointed me to. Namely: what we can do to hold the temperature increase to 2 °C or less, given that the pledges made by various nations aren’t enough.

