John Harte

27 October, 2012

Earlier this week I gave a talk on the Mathematics of Planet Earth at the University of Southern California, and someone there recommended that I look into John Harte’s work on maximum entropy methods in ecology. He works at U.C. Berkeley.

I checked out his website and found that his goals resemble mine: save the planet and understand its ecosystems. He’s a lot further along than I am, since he comes from a long background in ecology while I’ve just recently blundered in from mathematical physics. I can’t really say what I think of his work since I’m just learning about it. But I thought I should point out its existence.

This free book is something a lot of people would find interesting:

• John and Mary Ellen Harte, Cool the Earth, Save the Economy: Solving the Climate Crisis Is EASY, 2008.

EASY? Well, it’s an acronym. Here’s the basic idea of the US-based plan described in this book:

Any proposed energy policy should include these two components:

Technical/Behavioral: What resources and technologies are to be used to supply energy? On the demand side, what technologies and lifestyle changes are being proposed to consumers?

Incentives/Economic Policy: How are the desired supply and demand options to be encouraged or forced? Here the options include taxes, subsidies, regulations, permits, research and development, and education.

And a successful energy policy should satisfy the AAA criteria:

Availability. The climate crisis will rapidly become costly to society if we do not take action expeditiously. We need to adopt now those technologies that are currently available, provided they meet the following two additional criteria:

Affordability. Because of the central role of energy in our society, its cost to consumers should not increase significantly. In fact, a successful energy policy could ultimately save consumers money.

Acceptability. All energy strategies have environmental, land use, and health and safety implications; these must be acceptable to the public. Moreover, while some interest groups will undoubtedly oppose any particular energy policy, political acceptability at a broad scale is necessary.

Our strategy for preventing climate catastrophe and achieving energy independence includes:

Energy Efficient Technology at home and at the workplace. Huge reductions in home energy use can be achieved with available technologies, including more efficient appliances such as refrigerators, water heaters, and light bulbs. Home retrofits and new home design features such as “smart” window coatings, lighter-colored roofs where there are hot summers, better home insulation, and passive solar designs can also reduce energy use. Together, energy efficiency in home and industry can save the U.S. up to approximately half of the energy currently consumed in those sectors, and at no net cost—just by making different choices. Sounds good, doesn’t it?

Automobile Fuel Efficiency. Phase in higher Corporate Average Fuel Economy (CAFE) standards for automobiles, SUVs and light trucks by requiring vehicles to go 35 miles per gallon of gas (mpg) by 2015, 45 mpg by 2020, and 60 mpg by 2030. This would rapidly wipe out our dependence on foreign oil and cut emissions from the vehicle sector by two-thirds. A combination of plug-in hybrid, lighter car body materials, re-design and other innovations could readily achieve these standards. This sounds good, too!
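A back-of-the-envelope check of that "two-thirds" claim, assuming emissions scale with gallons burned and miles driven stay constant. The ~20 mpg fleet baseline is my assumption, not a figure from the book:

```python
# Fuel burned per mile scales as 1/mpg, so the fractional cut in
# vehicle emissions from raising fleet economy is 1 - old/new.
# The 20 mpg baseline is an assumption for illustration.
baseline_mpg = 20.0
targets = {2015: 35.0, 2020: 45.0, 2030: 60.0}

for year, mpg in targets.items():
    cut = 1.0 - baseline_mpg / mpg   # fuel per mile falls as 1/mpg
    print(f"{year}: {mpg:.0f} mpg -> {cut:.0%} less fuel burned per mile")
```

With a 20 mpg baseline, the 60 mpg target indeed cuts fuel use per mile by two-thirds.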

Solar and Wind Energy. Rooftop photovoltaic panels and solar water heating units should be phased in over the next 20 years, with the goal of solar installation on 75% of U.S. homes and commercial buildings by 2030. (Not all roofs receive sufficient sunlight to make solar panels practical for them.) Large wind farms, solar photovoltaic stations, and solar thermal stations should also be phased in so that by 2030, all U.S. electricity demand will be supplied by existing hydroelectric, existing and possibly some new nuclear, and, most importantly, new solar and wind units. This will require investment in expansion of the grid to bring the new supply to the demand, and in research and development to improve overnight storage systems. Achieving this goal would reduce our dependence on coal to practically zero. More good news!

You are part of the answer. Voting wisely for leaders who promote the first three components is one of the most important individual actions one can make. Other actions help, too. Just as molecules make up mountains, individual actions taken collectively have huge impacts. Improved driving skills, automobile maintenance, reusing and recycling, walking and biking, wearing sweaters in winter and light clothing in summer, installing timers on thermostats and insulating houses, carpooling, paying attention to energy efficiency labels on appliances, and many other simple practices and behaviors hugely influence energy consumption. A major education campaign, both in schools for youngsters and by the media for everyone, should be mounted to promote these consumer practices.

No part of EASY can be left out; all parts are closely integrated. Some parts might create much larger changes—for example, more efficient home appliances and automobiles—but all parts are essential. If, for example, we do not achieve the decrease in electricity demand that can be brought about with the E of EASY, then it is extremely doubtful that we could meet our electricity needs with the S of EASY.

It is equally urgent that once we start implementing the plan, we aggressively export it to other major emitting nations. We can reduce our own emissions all we want, but the planet will continue to warm if we can’t convince other major global emitters to reduce their emissions substantially, too.

What EASY will achieve. If no actions are taken to reduce carbon dioxide emissions, in the year 2030 the U.S. will be emitting about 2.2 billion tons of carbon in the form of carbon dioxide. This will be an increase of 25% from today’s emission rate of about 1.75 billion tons per year of carbon. By following the EASY plan, the U.S. share in a global effort to solve the climate crisis (that is, prevent catastrophic warming) will result in U.S. emissions of only about 0.4 billion tons of carbon by 2030, which represents a little less than 25% of 2007 carbon dioxide emissions. Stated differently, the plan provides a way to eliminate 1.8 billion tons per year of carbon by that date.
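Here's a quick arithmetic check of the figures just quoted (all in billions of tons of carbon per year):

```python
# Consistency check of the emission figures in the EASY plan.
today = 1.75       # ~2007 U.S. emission rate, Gt C/yr
bau_2030 = 2.2     # business-as-usual projection for 2030
easy_2030 = 0.4    # projection for 2030 under the EASY plan

print(f"BAU growth over today:  {bau_2030 / today - 1:.0%}")   # roughly 25%
print(f"EASY as share of 2007:  {easy_2030 / today:.0%}")      # a bit under 25%
print(f"Carbon eliminated:      {bau_2030 - easy_2030:.1f} Gt C/yr")
```

The three claims hang together: 2.2 is about 25% above 1.75, 0.4 is a little under a quarter of 1.75, and the difference is 1.8.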

We must act urgently: in the 14 months it took us to write this book, atmospheric CO2 levels rose by several billion tons of carbon, and more climatic consequences have been observed. Let’s assume that we conserve our forests and other natural carbon reservoirs at our current levels, as well as maintain our current nuclear and hydroelectric plants (or replace them with more solar and wind generators). Here’s what implementing EASY will achieve, as illustrated by Figure 3.1 on the next page.

Please check out this book and help me figure out if the numbers add up! I could also use help understanding his research, for example:

• John Harte, Maximum Entropy and Ecology: A Theory of Abundance, Distribution, and Energetics, Oxford University Press, Oxford, 2011.

The book is not free but the first chapter is.

This paper looks really interesting too:

• J. Harte, T. Zillio, E. Conlisk and A. B. Smith, Maximum entropy and the state-variable approach to macroecology, Ecology 89 (2008), 2700–2711.

Again, it’s not freely available—tut tut. Ecologists should follow physicists and make their work free online; if you’re serious about saving the planet you should let everyone know what you’re doing! However, the abstract is visible to all, and of course I can use my academic superpowers to get ahold of the paper for myself:

Abstract: The biodiversity scaling metrics widely studied in macroecology include the species-area relationship (SAR), the scale-dependent species-abundance distribution (SAD), the distribution of masses or metabolic energies of individuals within and across species, the abundance-energy or abundance-mass relationship across species, and the species-level occupancy distributions across space. We propose a theoretical framework for predicting the scaling forms of these and other metrics based on the state-variable concept and an analytical method derived from information theory. In statistical physics, a method of inference based on information entropy results in a complete macro-scale description of classical thermodynamic systems in terms of the state variables volume, temperature, and number of molecules. In analogy, we take the state variables of an ecosystem to be its total area, the total number of species within any specified taxonomic group in that area, the total number of individuals across those species, and the summed metabolic energy rate for all those individuals. In terms solely of ratios of those state variables, and without invoking any specific ecological mechanisms, we show that realistic functional forms for the macroecological metrics listed above are inferred based on information entropy. The Fisher log series SAD emerges naturally from the theory. The SAR is predicted to have negative curvature on a log-log plot, but as the ratio of the number of species to the number of individuals decreases, the SAR becomes better and better approximated by a power law, with the predicted slope z in the range of 0.14-0.20. Using the 3/4 power mass-metabolism scaling relation to relate energy requirements and measured body sizes, the Damuth scaling rule relating mass and abundance is also predicted by the theory. 
We argue that the predicted forms of the macroecological metrics are in reasonable agreement with the patterns observed from plant census data across habitats and spatial scales. While this is encouraging, given the absence of adjustable fitting parameters in the theory, we further argue that even small discrepancies between data and predictions can help identify ecological mechanisms that influence macroecological patterns.
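To get a feel for the machinery, here is a minimal numerical sketch of the maximum entropy method in the spirit of the paper, not the full theory: find the abundance distribution that maximizes entropy subject only to a fixed mean abundance N/S. With just this one constraint the answer is exponential in n; the Fisher log series requires the additional energy constraint treated in the paper. The numbers are toy values:

```python
# Maximize entropy of P(n) over abundances n = 1..N subject to a
# fixed mean abundance N/S.  The maxent solution is P(n) ∝ exp(-λn),
# with the Lagrange multiplier λ fixed by the constraint; we solve
# for λ by bisection.
import numpy as np

N, S = 1000, 50               # individuals and species (toy numbers)
n = np.arange(1, N + 1)

def mean_abundance(lam):
    w = np.exp(-lam * n)
    return (n * w).sum() / w.sum()

# mean_abundance is decreasing in λ; bisect to hit the target N/S
lo, hi = 1e-6, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if mean_abundance(mid) > N / S:
        lo = mid
    else:
        hi = mid
lam = 0.5 * (lo + hi)

P = np.exp(-lam * n)
P /= P.sum()
print(f"lambda = {lam:.4f}, mean abundance = {(n * P).sum():.1f}")
```

The single multiplier plays the same role temperature plays in thermodynamics: a number inferred from a macroscopic constraint, not from any microscopic mechanism.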

High-Speed Finance

8 August, 2012


These days, a lot of buying and selling of stocks is done by computers—it’s called algorithmic trading. Computers can do it much faster than people. Watch how they’ve been going wild!

The date is at lower left. In 2000 it took several seconds for computers to make a trade. By 2010 the time had dropped to milliseconds… or even microseconds. And around this year, market activity started becoming much more intense.

I can’t even see the Flash Crash on May 6 of 2010—also known as The Crash of 2:45. The Dow Jones plummeted 9% in 5 minutes, then quickly bounced back. For fifteen minutes, the economy lost a trillion dollars. Then it reappeared.

But on August 5, 2011, when the credit rating of the US got downgraded, you’ll see the activity explode! And it’s been crazy ever since.

The movie above was created by Nanex, a company that provides market data to traders. The x axis shows the time of day, from 9:30 to 16:00. The y axis… well, it’s the amount of some activity per unit time, but they don’t say what. Do you know?

The folks at Nanex have something very interesting to say about this. It’s not high frequency trading or ‘HFT’ that they’re worried about—that’s actually gone down slightly from 2008 to 2012. What’s gone up is ‘high frequency quoting’, also known as ‘quote spam’ or ‘quote stuffing’.

Over on Google+, Sergey Ten explained the idea to me:

Quote spam is a well-known tactic. It is used by high-frequency traders to get a competitive advantage over other high-frequency traders. HF traders generate high-frequency quote spam using a pseudorandom (or otherwise structured) algorithm, with their computers coded to ignore it. Their competitors don’t know the generating algorithm and have to process each quote, thus increasing their load, consuming bandwidth and suffering a slight delay in processing.
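Here is a toy sketch of the tactic Sergey describes; the ticker, prices, and filtering scheme are all invented for illustration, and real feeds are far messier:

```python
# Toy quote-spam illustration: the spammer tags bogus quotes with a
# pseudorandom price stream whose seed only they know, so their own
# systems can regenerate the stream and discard the spam cheaply,
# while competitors must inspect every quote.
import random

SEED = 1234                      # known only to the spammer

def spam_quotes(count, seed=SEED):
    rng = random.Random(seed)
    # each bogus quote carries a price drawn from the seeded stream
    return [("XYZ", round(100 + rng.uniform(-1, 1), 2)) for _ in range(count)]

def is_own_spam(quotes, seed=SEED):
    rng = random.Random(seed)
    expected = [round(100 + rng.uniform(-1, 1), 2) for _ in quotes]
    return [q[1] == e for q, e in zip(quotes, expected)]

quotes = spam_quotes(5)
print(is_own_spam(quotes))       # the spammer recognizes all 5 instantly
```

Anyone without the seed sees only plausible-looking quotes and has to burn bandwidth and processing time on each one.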

A quote is an offer to buy or sell stock at a given price. For a clear and entertaining explanation of how this works and why traders are locked into a race for speed, try:

• Chris Stucchio, A high frequency trader’s apology, Part 1, 16 April 2012. Part 2, 25 April 2012.

I don’t know a great introduction to quote spam, but this paper isn’t bad:

• Jared F. Egginton, Bonnie F. Van Ness, and Robert A. Van Ness, Quote stuffing, 15 March 2012.

Toward the physical limits of speed

In fact, the battle for speed is so intense that trading has run up against the speed of light.

For example, by 2013 there will be a new transatlantic cable at the bottom of the ocean, the first in a decade. Why? Just to cut the communication time between US and UK traders by 5 milliseconds. The new fiber optic line will be straighter than existing ones:

“As a rule of thumb, each 62 miles that the light has to travel takes about 1 millisecond,” Thorvardarson says. “So by straightening the route between Halifax and London, we have actually shortened the cable by 310 miles, or 5 milliseconds.”
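We can sanity-check this rule of thumb against the speed of light in glass. Note that the figure only works out if the 62 miles refers to round-trip latency, which is what traders care about:

```python
# Light travels at ~186,000 miles/s in vacuum, slower in fiber by
# the refractive index of glass (~1.47), and a quote must make a
# round trip.  That is where "62 miles per millisecond" comes from.
c_miles_per_ms = 186.0           # speed of light in vacuum, miles per ms
n_glass = 1.47                   # approximate refractive index of fiber

one_way = c_miles_per_ms / n_glass    # ~127 miles of fiber per ms, one way
round_trip = one_way / 2              # ~63 miles per ms of round-trip latency
print(f"{round_trip:.0f} miles of cable per ms of round-trip latency")
print(f"310 miles shorter -> {310 / round_trip:.1f} ms saved")
```

So shortening the route by 310 miles saves almost exactly the quoted 5 milliseconds.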

Meanwhile, a London-based company called Fixnetix has developed a special computer chip that can prepare a trade in just 740 nanoseconds. But why stop at nanoseconds?

With the race for the lowest “latency” continuing, some market participants are even talking about picoseconds, trillionths of a second. At first the realm of physics and math and then computer science, the picosecond looms as the next time barrier.

Actions that take place in nanoseconds and picoseconds in some cases run up against the sheer limitations of physics, said Mark Palmer, chief executive of Lexington, Mass.-based StreamBase Systems.

Black swans and the ultrafast machine ecology

As high-frequency trading and high-frequency quoting leave slow-paced human reaction times in the dust, markets start to behave differently. Here’s a great paper about that:

• Neil Johnson, Guannan Zhao, Eric Hunsader, Jing Meng, Amith Ravindar, Spencer Carran and Brian Tivnan, Financial black swans driven by ultrafast machine ecology.

A black swan is an unexpectedly dramatic event, like a market crash or a stock bubble that bursts. But according to this paper, such events are now happening all the time at speeds beyond our perception!

Here’s one:

It’s a price spike in the stock of a company called Super Micro Computer, Inc. On October 1st, 2010, it shot up 26% and then crashed back down. But this all happened in 25 milliseconds!

These ultrafast black swans happen at least once a day. And they happen most of all to financial institutions.

Here’s a great blog article about this stuff:

• Mark Buchanan, Approaching the singularity—in global finance, The Physics of Finance, 13 February 2012.

I won’t try to outdo Buchanan’s analysis. I’ll just quote the abstract of the original paper:

Society’s drive toward ever faster socio-technical systems, means that there is an urgent need to understand the threat from ‘black swan’ extreme events that might emerge. On 6 May 2010, it took just five minutes for a spontaneous mix of human and machine interactions in the global trading cyberspace to generate an unprecedented system-wide Flash Crash. However, little is known about what lies ahead in the crucial sub-second regime where humans become unable to respond or intervene sufficiently quickly. Here we analyze a set of 18,520 ultrafast black swan events that we have uncovered in stock-price movements between 2006 and 2011. We provide empirical evidence for, and an accompanying theory of, an abrupt system-wide transition from a mixed human-machine phase to a new all-machine phase characterized by frequent black swan events with ultrafast durations (<650ms for crashes, <950ms for spikes). Our theory quantifies the systemic fluctuations in these two distinct phases in terms of the diversity of the system's internal ecology and the amount of global information being processed. Our finding that the ten most susceptible entities are major international banks, hints at a hidden relationship between these ultrafast 'fractures' and the slow 'breaking' of the global financial system post-2006. More generally, our work provides tools to help predict and mitigate the systemic risk developing in any complex socio-technical system that attempts to operate at, or beyond, the limits of human response times.

Trans-quantitative analysts?

When you get into an arms race of trying to write algorithms whose behavior other algorithms can’t predict, the math involved gets very tricky. Over on Google+, F. Lengvel pointed out something strange. In May 2010, Christian Marks claimed that financiers were hiring experts on large ordinals—crudely speaking, big infinite numbers!—to design algorithms that were hard to outwit.

I can’t confirm his account, but I can’t resist quoting it:

In an unexpected development for the depressed market for mathematical logicians, Wall Street has begun quietly and aggressively recruiting proof theorists and recursion theorists for their expertise in applying ordinal notations and ordinal collapsing functions to high-frequency algorithmic trading. Ordinal notations, which specify sequences of ordinal numbers of ever increasing complexity, are being used by elite trading operations to parameterize families of trading strategies of breathtaking sophistication.

The monetary advantage of the current strategy is rapidly exhausted after a lifetime of approximately four seconds — an eternity for a machine, but barely enough time for a human to begin to comprehend what happened. The algorithm then switches to another trading strategy of higher ordinal rank, and uses this for a few seconds on one or more electronic exchanges, and so on, while opponent algorithms attempt the same maneuvers, risking billions of dollars in the process.

The elusive and highly coveted positions for proof theorists on Wall Street, where they are known as trans-quantitative analysts, have not been advertised, to the chagrin of executive recruiters who work on commission. Elite hedge funds and bank holding companies have been discreetly approaching mathematical logicians who have programming experience and who are familiar with arcane software such as the ordinal calculator. A few logicians were offered seven figure salaries, according to a source who was not authorized to speak on the matter.

Is this for real? I like the idea of ‘trans-quantitative analysts’: it reminds me of ‘transfinite numbers’, which is another name for infinities. But it sounds a bit like a joke, and I haven’t been able to track down any references to trans-quantitative analysts, except people talking about Christian Marks’ blog article.

I understand a bit about ordinal notations, but I don’t think this is the time to go into that—not before I’m sure this stuff is for real. Instead, I’d rather reflect on a comment of Boris Borcic over on Google+:

Last week it occurred to me that LessWrong and OvercomingBias together might play a role to explain why Singularists don’t seem to worry about High Frequency Robot Trading as a possible pathway for Singularity-like developments. I mean IMO they should, the Singularity is about machines taking control, ownership is control, HFT involves slicing ownership in time-slices too narrow for humans to know themselves owners and possibly control.

The ensuing discussion got diverted to the question of whether algorithmic trading involved ‘intelligence’, but maybe intelligence is not the point. Perhaps algorithmic traders have become successful parasites on the financial system without themselves being intelligent, simply by virtue of their speed. And the financial system, in turn, seems to be a successful parasite on our economy. Regulatory capture—the control of the regulating agencies by the industry they’re supposed to be regulating—seems almost complete. Where will this lead?

The Living Smart Grid

7 April, 2012

guest post by Todd McKissick

The last few years, in energy circles, people have begun the public promotion of what they call the smart grid.  This is touted as providing better control, prediction and utilization of our nation’s electrical grid system.  However, it doesn’t provide benefits to anyone except the utilities.  It’s expected to cost much more and to actually take away some of the convenience of having all the power you want, when you want it.

We can do better.

Let’s investigate the benefits of some changes to their so-called smart grid.  If implemented, these changes will allow instant indirect control and balance of all local grid sections while automatically keeping supply in check with demand.  It can drastically cut the baseload utilization of existing transmission lines.  It can provide early benefits from running in pseudo-parallel mode with no changes at all, simply by publishing customer-specific real-time prices.  Once that gains some traction, a full implementation only requires adding smart meters to make it work.  Both of these stages can be adopted at any rate, and the benefits accrue in proportion to adoption.  Since it allows Demand Reduction (DR) and Distributed Generation (DG) from any small source to compete in price fairly with the big boys, it encourages tremendous competition between both generators and consumers.

To initiate this process, the real-time price must be determined for each customer.  This is easily done at the utility by breaking down their costs and overhead into three categories.  First, generation is monitored at its location.  Second, transmission is monitored for its contribution.  Both of these are being done already, so nothing new yet.  Third, distribution needs to be monitored at all the nodes and end points in the customer’s last leg of the chain.  Much of this is done and the rest is being done or planned through various smart meter movements.  Once all three of these prices are broken down, they can be applied to the various groups of customers and feeder segments.  This yields a total price to each customer that varies in real time with all the dynamics built in.  By simply publishing that price online, it signals the supply/demand imbalance that applies to them.

This is where the self correction aspect of the system comes into play.  If a transmission line goes down, the affected customers’ price will instantly spike, immediately causing loads to drop offline and storage systems and generation systems to boost their output.  This is purely price driven so no hard controls are sent to the customer equipment to make this happen.  Should a specific load be set to critical use, like a lifeline system for a person or business, they have less risk of losing power completely but will pay an increased amount for the duration of the event.  Even transmission rerouting decisions can be based on the price, allowing neighboring local grids to export their excess to aid a nearby shortfall.  Should an area find its price trending higher or lower over time, the economics will easily point to whatever and wherever something is needed to be added to the system.  This makes forecasting the need for new equipment easier at both the utility and the customer level.
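A toy sketch of this price-driven self-correction; the prices, thresholds, and load sizes are all invented for illustration:

```python
# When a fault spikes the published real-time price, discretionary
# loads drop out and storage boosts output, with no central command
# ever sent to customer equipment.  All numbers are hypothetical.
def customer_response(price, comfort_threshold=0.15, battery_threshold=0.20):
    """Return (load_kw, injection_kw) for one customer at a given $/kWh."""
    load = 3.0 if price < comfort_threshold else 1.0       # shed A/C etc.
    injection = 2.0 if price > battery_threshold else 0.0  # discharge storage
    return load, injection

for price in (0.10, 0.18, 0.35):     # normal, tight, and fault conditions
    load, inj = customer_response(price)
    print(f"price ${price:.2f}/kWh -> net draw {load - inj:.1f} kW")
```

Under fault-level prices this customer goes from drawing 3 kW to feeding 1 kW back into the grid, purely in response to the published price.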

If a CO2 or other emission charge were created, it could quickly be added to the cost of individual generators, allowing the rest of the system to re-balance around it automatically.

Once the price is published, people will begin tracking their home and heavy loading appliances to calculate their exact electrical bill.  When they learn they can adapt usage profiles and save money, they will create systems to automatically do so.  This will lead to intelligent and power saving appliances, a new generation of smart thermostats, short cycling algorithms in HVAC and even more home automation.  The result of these operations is to balance demand to supply.

When this process begins, the financial incentive becomes real for the customer, attracting them to request live billing.  This can happen as small as one customer at a time for anyone with a smart meter installed.  Both customer and utility benefit from their switchover.

A truly intelligent system like this eliminates the necessity of the full grid replacement that some people are proposing.  Instead, it focuses on making the existing one more stable.  Incrementally, and in proportion to adoption, grid stability and redundancy will naturally increase without further cost.  The appliance manufacturers already have many load-predictive products waiting for the market to call for them, so the cost to advance this whole system is already covered by the meter replacements that are happening or planned soon.  We need to ensure that the new meters have live rate capability.

This is the single biggest solution to our energy crisis. It will standardize grid interconnection which will entice distributed generation (DG).  As it stands now, most utilities view DG in a negative light with regards to grid stability.  Many issues such as voltage, frequency and phase regulation are often topics they cite.  In reality, however, the current inverter standards ensure that output is appropriately synchronized.  The same applies to power factor issues.  While reducing power sent via the grid directly reduces the load, it’s only half of the picture.

DG with storage and vehicle-to-grid hybrids both give the customer an opportunity to save up their excess and sell it to the grid when it earns the most.  By giving them the live prices, they will also be encouraged to grow their market.  It is an obvious outgrowth for them to buy and store power from the grid in the middle of the night and sell it back for a profit during afternoon peaks.  In fact this is already happening in some markets.
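A back-of-the-envelope version of that arbitrage; the prices and round-trip efficiency are hypothetical:

```python
# Storage or vehicle-to-grid arbitrage: buy at the overnight price,
# sell back at the afternoon peak.  All figures are illustrative.
night_price = 0.06    # $/kWh, off-peak
peak_price = 0.25     # $/kWh, afternoon peak
efficiency = 0.85     # round-trip storage efficiency
kwh_stored = 10.0     # energy bought each night

cost = kwh_stored * night_price
revenue = kwh_stored * efficiency * peak_price
print(f"nightly profit: ${revenue - cost:.2f}")
```

Even after storage losses, a 4-to-1 price spread leaves a profit, which is exactly the incentive that flattens the daily demand curve.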

Demand reduction (DR), or load shedding, acts the same as onsite generation in that it reduces the power sent via the grid.  It also acts similar to storage in that it can time shift loads to cheaper rate periods.  To best take advantage of this, people will utilize increasingly better algorithms for price prediction.  The net effect is thousands of individuals competing on prediction techniques to flatten out the peaks into the valleys of the grid’s daily profile.  This competition will be in direct proportion to the local grid instability in a given area.

According to Peter Mark Jansson and John Schmalzel [1]:

From material utilization perspectives significant hardware is manufactured and installed for this infrastructure often to be used at less than 20-40% of its operational capacity for most of its lifetime. These inefficiencies lead engineers to require additional grid support and conventional generation capacity additions when renewable technologies (such as solar and wind) and electric vehicles are to be added to the utility demand/supply mix. Using actual data from the PJM [PJM 2009] the work shows that consumer load management, real time price signals, sensors and intelligent demand/supply control offer a compelling path forward to increase the efficient utilization and carbon footprint reduction of the world’s grids. Underutilization factors from many distribution companies indicate that distribution feeders are often operated at only 70-80% of their peak capacity for a few hours per year, and on average are loaded to less than 30-40% of their capability.

At this time the utilities are limiting adoption rates to a couple of percent.  A well-known standard could replace that with a call for much more.  Instead of discouraging participation, it will encourage innovation and enhance forecasting, and do so without giving away control over how we wish to use our power.  Best of all, it is paid for by upgrades that are already being planned.  How’s that for a low-cost SMART solution?

[1] Peter Mark Jansson and John Schmalzel, Increasing utilization of US electric grids via smart technologies: integration of load management, real time pricing, renewables, EVs and smart grid sensors, The International Journal of Technology, Knowledge and Society 7, 47–60.

I, Robot

24 January, 2012

On 13 February 2012, I will give a talk at Google in the form of a robot. I will look like this:

My talk will be about “Energy, the Environment and What We Can Do.” Since I think we should cut unnecessary travel, I decided to stay here in Singapore and use a telepresence robot instead of flying to California.

I thank Mike Stay for arranging this at Google, and I especially thank Trevor Blackwell and everyone else at Anybots for letting me use one of their robots!

I believe Google will film this event and make a video available. But I hope reporters attend, because it should be fun, and I plan to describe some ways we can slash carbon emissions.

More detail: I will give this talk at 4 pm Monday, February 13, 2012 in the Paramaribo Room on the Google campus (Building 42, Floor 2). Visitors and reporters are invited, but they need to check in at the main visitor’s lounge in Building 43, and they’ll need to be escorted to and from the talk, so someone will pick them up 10 or 15 minutes before the talk starts.

Energy, the Environment and What We Can Do

Abstract: Our heavy reliance on fossil fuels is causing two serious problems: global warming, and the decline of cheaply available oil reserves. Unfortunately the second problem will not cancel out the first. Each one individually seems extremely hard to solve, and taken together they demand a major worldwide effort starting now. After an overview of these problems, we turn to the question: what can we do about them?

I also need help from all of you reading this! I want to talk about solutions, not just problems—and given my audience, and the political deadlock in the US, I especially want to talk about innovative solutions that come from individuals and companies, not governments.

Can changing whole systems produce massive cuts in carbon emissions, in a way that spreads virally rather than being imposed through top-down directives? It’s possible. Curtis Faith has some inspiring thoughts on this:

I’ve been looking at various transportation and energy and environment issues for more than 5 years, and almost no one gets the idea that we can radically reduce consumption if we look at the complete systems. In economic terms, we currently have a suboptimal Nash Equilibrium with a diminishing pie when an optimal expanding-pie equilibrium is possible. Just tossing around ideas at a very high level with back-of-the-envelope estimates, we can get orders of magnitude improvements with systemic changes that will make people’s lives better if we can loosen up the grip of the big corporations and government.

To borrow a physics analogy, the Nash Equilibrium is a bit like a multi-dimensional metastable state where the system is locked into a high energy configuration and any local attempts to make the change revert to the higher energy configuration locally, so it would require sufficient energy or energy in exactly the right form to move all the different metastable states off their equilibrium either simultaneously or in a cascade.

Ideally, we find the right set of systemic economic changes that can have a cascade effect, so that they are locally systemically optimal and can compete more effectively within the larger system where the Nash Equilibrium dominates. I hope I haven’t mixed up too many terms from too many fields and confused things. These terms all have overlapping and sometimes very different meaning in the different contexts as I’m sure is true even within math and science.

One great example is transportation. We assume we need electric cars or biofuel or some such thing. But the very assumption that a car is necessary is flawed. Why do people want cars? Give them a better alternative and they’ll stop wanting cars. Now, what might that be? Public transportation? No. All the money spent building a 2,000 kg vehicle to accelerate and decelerate a few hundred kg, and then to replace that vehicle on a regular basis, can be saved if we eliminate the need for cars.

The best alternative to cars is walking, or walking up and down inclined pathways so we get exercise. Why don’t people walk? Not because they don’t want to, but because our cities and towns have been optimized for cars. Create walkable neighborhoods and give people jobs near their homes and you eliminate the need for cars. I live in Savannah, GA in a very tiny place. I never use the car: perhaps 5 miles a week. And even that wouldn’t be necessary with the right supplemental business structures to provide services more efficiently.

Or electricity for A/C. Everyone lives isolated in structures that are very inefficient to heat and cool. Large community structures could be air-conditioned naturally using various techniques, and that could cut a neighborhood’s electricity demand by 50%. Shade trees are better than insulation.

Or how about moving virtually entire cities to cooler climates during the hot months? That is what people used to do: take a train north for the summer. If the destinations are low-resource places, this can be a huge reduction in the city’s energy use. Again, getting to this state is hard without changing a lot of parts together.

These problems are not technical or political; they are economic. We need economic systems that support these alternatives. People want them. We’ll all be happier and use far fewer resources (and less money). The economic system needs to change, and that isn’t going to happen through politics; it will happen through economic innovation. We tend to think of our current models as the way things are, but they aren’t. Most of the status quo is made up of human inventions: money, fractional-reserve banking, corporations, and so on. Each brought specific improvements that made it more effective when it was introduced, because of the conditions of those times. Our times, too, are different. Some new models will work much better for solving our current problems.

Your idea really starts to address why people fly unnecessarily. This change in perspective is important. What if we went back to sailing ships? And instead of flying we took long, leisurely educational seminar cruises on modern versions of sail yachts? What if we improved our trains? But we need to start from scratch and design new systems so they work together effectively. Why are we stuck with models of cities based on 19th-century norms?

We aren’t, but too many people think we are because the scope of their job or academic career is just the piece of a system, not the system itself.

System-level design thinking is the key to making the difference we need. Changes to complete systems can yield order-of-magnitude improvements. Changes to the parts will have us fighting for tens of percent.

Do you know good references on ideas like this—preferably with actual numbers? I’ve done some research, but I feel I must be missing a lot of things.

This book, for example, is interesting:

• Michael Peters, Shane Fudge and Tim Jackson, editors, Low Carbon Communities: Imaginative Approaches to Combating Climate Change Locally, Edward Elgar Publishing Group, Cheltenham, UK, 2010.

but I wish it had more numbers on how much carbon emissions were cut by some of the projects they describe: Energy Conscious Households in Action, the HadLOW CARBON Community, the Transition Network, and so on.


24 October, 2011

I always thought the opposite of a boycott was a girlcott. Turns out it’s a ‘buycott’.

In a boycott, a bunch of people punish a company they dislike by not buying their stuff. In a buycott, they reward one they like.

Here on Azimuth, Allan Erskine pointed out one organization pushing this idea: Carrotmob, founded by Brent Schulkin. Why ‘Carrotmob’? Well, while a boycott threatens a company with the ‘stick’ of economic punishment, a mob of customers serves as a ‘carrot’ to reward good behavior.

Carrotmob’s first buycott was local: they went to 23 convenience stores in San Francisco with a plan to transform one into the most environmentally-friendly store in the neighborhood, and promised to bring in a bunch of consumers to the winner to spend a bunch of money on one day. In order to receive the increased sales from this event, store owners were invited to place bids on what percentage of that revenue they’d spend on making their store more energy-efficient. The winning bid was 22%, by K & D Market. On the day of the campaign, hundreds of people arrived and spent over $9200. In exchange, the store took 22% of that revenue, and used it to do a full retrofit of their lighting system.

Can it be scaled up? Can these deals be enforced? Time will tell, but it seems like a good thing to try. For one thing, unlike a boycott, it spreads good vibes, because it’s a positive-sum game. On the other hand, over on Google+, Matt McIrvin wrote:

I’m a little skeptical that this kind of approach works over the long term, because it would have the effect of increasing the market price of “good” products through increased demand, which in turn means that anyone who doesn’t care about the attribute in question will be motivated to buy the lower-priced “bad” products instead. What you end up with is just a market sector of politically endorsed products that may do a good niche business but that most people ignore.

This is also the big problem with just telling people to go green instead of taxing or otherwise regulating environmental externalities.

Here are some good stories:

Ready? Set. Shop! One genius environmentalist puts the flash-mob phenomenon to high-minded use, San Francisco Magazine, June 2008.

Change we can profit from, The Economist, 29 January 2009.

For more, try the references here:

Carrotmob, Wikipedia.

What other innovative strategies could environmentalists use, that we should know about?

By the way, boycotts are named after Captain Charles Boycott. The story is sort of interesting…

The Network of Global Corporate Control

3 October, 2011

While protesters are trying to occupy Wall Street and spread their movement to other cities…

… others are trying to mathematically analyze the network of global corporate control:

• Stefania Vitali, James B. Glattfelder and Stefano Battiston, The network of global corporate control.

Here’s a little ‘directed graph’:

Very roughly, a directed graph consists of some vertices and some edges with arrows on them. Vitali, Glattfelder and Battiston built an enormous directed graph by taking 43,060 transnational corporations and seeing who owns a stake in whom:

If we zoom in on the financial sector, we can see the companies those protesters are upset about:

Zooming out again, we could check that the graph as a whole consists of many pieces. But the largest piece contains 3/4 of all the corporations studied, including all the top corporations by economic value, and accounts for 94.2% of the total operating revenue.

Within this there is a large ‘core’, containing 1347 corporations, each of which directly and/or indirectly owns shares in every other member of the core. On average, each member of the core has direct ties to 20 others. As a result, about 3/4 of the ownership of firms in the core remains in the hands of firms of the core itself. As the authors put it:

This core can be seen as an economic “super-entity” that raises new important issues both for researchers and policy makers.

If you’ve never thought much about modern global capitalism, the existence of this ‘core’ may seem shocking and scary… like an enormous invisible spiderweb wrapping around the globe, dominating us, controlling every move we make. Or maybe you can see a tremendous new business opportunity, waiting to be exploited!

But if you’ve already thought about these things, the existence of this core probably seems obvious. What’s new here is the use of certain ideas in math—graph theory, to be precise—to study it quantitatively.

So, let me say a bit more about the math! What’s a directed graph, exactly? It’s a set V and a subset E of V \times V. We call the elements of V vertices and the elements of E edges. Since an edge is an ordered pair of vertices, it has a ‘starting point’ and an ‘endpoint’—that’s why we call this kind of graph ‘directed’.

(Note that we can have an edge going from a vertex to itself, but we cannot have more than one edge going from some vertex v to some vertex v'. If you don’t like this, use some other kind of graph: there are many kinds!)
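To make the definition concrete, here is a minimal sketch in Python; the corporation names are invented purely for illustration:

```python
# A directed graph: a set V of vertices and a set E of ordered pairs.
# The names here are made up just to suggest the ownership application.
V = {"FundB", "BankA", "Exxon"}
E = {("FundB", "BankA"),   # an edge: FundB -> BankA
     ("BankA", "Exxon"),
     ("Exxon", "Exxon")}   # an edge from a vertex to itself is allowed

# E must be a subset of V x V, so there can be at most one edge
# from any vertex v to any vertex v'.
assert E <= {(v, w) for v in V for w in V}
```

Since edges form a *set* of ordered pairs, duplicate edges are impossible by construction, which is exactly the restriction noted above.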

I spoke about ‘pieces’ of a directed graph, but that’s not a precise term, since there are various kinds of pieces:

• A connected component is a maximal set of vertices such that we can get from any one to any other by an undirected path, meaning a path of edges where we don’t care which way the arrows point.

• A strongly connected component is a maximal set of vertices such that we can get from any one to any other by a directed path, meaning a path of edges where at each step we walk ‘forwards’, along the arrows.

I didn’t state these definitions very precisely, but I hope you can fill in the details. Maybe an example will help! This graph has three strongly connected components, shaded in blue, but just one connected component:
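Both kinds of component can be computed straight from the definitions. Here is a rough Python sketch on a made-up toy graph, finding strongly connected components by mutual reachability; this brute-force approach is only fine for small graphs, and real work would use Tarjan’s algorithm or a graph library:

```python
# Toy directed graph: 1->2->3->1 is a directed cycle; 3->4; 4<->5.
edges = {(1, 2), (2, 3), (3, 1), (3, 4), (4, 5), (5, 4)}
vertices = {v for e in edges for v in e}

def reachable(start, E):
    """All vertices reachable from `start` along edges in E."""
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for a, b in E:
            if a == v and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

# Strongly connected components: v and w lie in the same one
# exactly when each can reach the other by a directed path.
reach = {v: reachable(v, edges) for v in vertices}
sccs = {frozenset(w for w in vertices if v in reach[w] and w in reach[v])
        for v in vertices}
print(sorted(sorted(c) for c in sccs))   # [[1, 2, 3], [4, 5]]

# Connected components: the same idea, ignoring the arrows' directions.
undirected = edges | {(b, a) for a, b in edges}
components = {frozenset(reachable(v, undirected)) for v in vertices}
print(len(components))                   # 1: the whole graph hangs together
```

The toy graph mirrors the situation in the text: two strongly connected components, but only one connected component.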

So when I said this:

The graph consists of many pieces, but the largest contains 3/4 of all the corporations studied, including all the top corporations by economic value, and accounts for 94.2% of the total operating revenue.

I was really talking about the largest connected component. But when I said this:

Within this there is a large ‘core’ containing 1347 corporations, each of which directly and/or indirectly owns shares in every other member of the core.

I was really talking about a strongly connected component. When you look at random directed graphs, there often turns out to be one strongly connected component that’s a lot bigger than all the rest. This is called the core, or the giant strongly connected component.

In fact there’s a whole study of random directed graphs, which is relevant not only to corporations, but also to webpages! Webpages link to other webpages, giving a directed graph. (True, one webpage can link to another more than once, but we can either ignore that subtlety or use a different concept of graph that handles this.)

And it turns out that for various types of random directed graphs, we tend to get a so-called ‘bowtie structure’, like this:

In the middle you see the core, or giant strongly connected component, labelled SCC. (Yes, that’s where Exxon sits, like a spider in the middle of the web!)

Connected to this by paths going in, we have the left half of the bowtie, labelled IN. Connected to the core by paths going out, we have the right half of the bowtie, labelled OUT.

There are also usually some IN-tendrils going out of the IN region, and some OUT-tendrils going into the ‘OUT’ region.

There may also be tubes going from IN to OUT while avoiding the core.

All this is one connected component: the largest one. But finally, not shown here, there may be a bunch of other smaller connected components. Presumably if these are large enough they have a similar structure.
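The bowtie regions can be computed with the same reachability idea. Here is a sketch on a tiny invented graph: the core is the largest strongly connected component, IN is whatever reaches the core, OUT is whatever the core reaches, and tendrils and tubes are the leftovers:

```python
def reachable(start, E):
    """All vertices reachable from `start` along directed edges in E."""
    seen, stack = {start}, [start]
    while stack:
        v = stack.pop()
        for a, b in E:
            if a == v and b not in seen:
                seen.add(b)
                stack.append(b)
    return seen

# Tiny invented bowtie: core {3, 4}, with 2 -> 1 -> 3 feeding in
# and 4 -> 6 -> 7 flowing out.
edges = {(2, 1), (1, 3), (3, 4), (4, 3), (4, 6), (6, 7)}
vertices = {v for e in edges for v in e}
reach = {v: reachable(v, edges) for v in vertices}

# The core: the largest strongly connected component.
core = max(({w for w in vertices if v in reach[w] and w in reach[v]}
            for v in vertices), key=len)
# IN reaches the core; OUT is reached from it; tendrils and tubes
# would be whatever remains (none in this tiny example).
IN = {v for v in vertices - core if core & reach[v]}
OUT = {v for v in vertices - core if any(v in reach[c] for c in core)}
print(sorted(core), sorted(IN), sorted(OUT))   # [3, 4] [1, 2] [6, 7]
```

On a real dataset like the 43,060-corporation graph, one would run this classification only on the largest connected component, which is where the bowtie picture applies.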

Now: can we use this knowledge to do something good? Or is it all too obvious so far? After all, so far we’re just saying the network of global corporate control is a fairly ordinary sort of random directed graph. Maybe we need to go beyond this and think about ways in which it’s not ordinary. In fact, I should reread the paper with that in mind.

Or… well, maybe you have some ideas.

(By the way, I don’t think ‘overthrowing’ the network of global corporate control is a feasible or even desirable project. I’m not espousing any sort of revolutionary ideology, and I’m not interested in discussing politics here. I’m more interested in understanding the world and looking for some leverage points where we can gently nudge things in slightly better directions. If there were a way to do this by taking advantage of the power of corporations, that would be cool.)

Environmental News From China

13 August, 2011

I was unable to access this blog last week while I was in Changchun—sorry!

But I’m back in Singapore now, so here’s some news, mostly from the 2 August 2011 edition of China Daily, the government’s official English newspaper. As you’ll see, they’re pretty concerned about environmental problems. But to balance the picture, here’s a picture from Changbai Mountain, illustrating the awesome beauty of the parts of China that remain wild:

The Chinese have fallen in love with cars. Though less than 6% of Chinese own cars so far, that’s already 75 million cars, a market exceeded only by the US.

The price of real estate in China is shooting up, but as car ownership soars, you’ll have to pay a lot more if you want to buy a parking space for your apartment. The old apartments don’t have them. In Beijing the average price of a parking space is 140,000 yuan, which is about $22,000. In Shanghai it’s 150,000 yuan. But in fancy neighborhoods the price can be much higher: for example, up to 800,000 yuan in Beijing!

For comparison, the average salary in Beijing was 36,000 yuan in 2007—and the median is probably much lower, since there are lots of poor people and just a few rich ones. On top of that, I bet this figure doesn’t include the many undocumented people who have come from the countryside to work in Beijing. The big cities in China are much richer than the rest of the country: the average salary throughout the country was 11,000 yuan, and the average rural wage was just 3,600 yuan. This disparity is causing young people to flood into the cities, leaving behind villages mostly full of old folks.
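The figures quoted above make the disparities easy to quantify; a quick back-of-the-envelope sketch:

```python
# Figures quoted above (yuan; salaries are per year, Beijing's from 2007).
parking_space_beijing = 140_000
avg_salary_beijing = 36_000
avg_salary_national = 11_000
avg_wage_rural = 3_600

# An average Beijing parking space costs roughly 4 years of average pay...
print(round(parking_space_beijing / avg_salary_beijing, 1))   # 3.9
# ...and the average Beijing salary is 10 times the average rural wage.
print(avg_salary_beijing // avg_wage_rural)                   # 10
```

With the median salary well below the average, the real burden on a typical worker is even heavier than these ratios suggest.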

Thanks to intensive use of coal, increasing car ownership and often-ignored regulations, air quality is bad in most Chinese cities. In Changchun, a typical summer day resembles the very worst days in Los Angeles, where the air is yellowish-grey except for a small blue region directly overhead.

In a campaign to improve the air quality in Beijing, drivers are being subsidized to turn in cars made in 1995 or earlier. As usual, it’s the old clunkers that stink the worst: 27% of the cars in Beijing are over 8 years old, but they produce 60% of the air pollution. The government hopes to eliminate 400,000 old cars and cut emissions of nitrogen oxides by more than 10,000 tonnes per year by 2015.
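Those two percentages already pin down how much dirtier an old car is than a newer one, relative to the fleet average; a quick sketch:

```python
# 27% of Beijing's cars are over 8 years old, but they make 60% of the pollution.
old_share_cars, old_share_pollution = 0.27, 0.60

# Pollution per car, relative to the fleet average:
per_old = old_share_pollution / old_share_cars              # ~2.2x average
per_new = (1 - old_share_pollution) / (1 - old_share_cars)  # ~0.55x average

# So an old car emits roughly 4 times what a newer one does.
print(round(per_old / per_new, 1))
```

That factor of roughly 4 per vehicle is what makes targeting old clunkers such an efficient policy.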

But this policy is also supposed to stoke the market for new automobiles. That’s a bit strange, since Beijing is a huge city with massive traffic jams—some say the worst in the world! As a result, the government has taken strong steps to limit car sales in Beijing.

In Beijing, if you want to buy a car, you have to enter a lottery to get a license plate! Car sales have been capped at 240,000 this year, and for the first lottery people’s chances of winning were just one in ten:

• Louisa Lim, License plate lottery meant to curb Beijing traffic, Morning Edition, 26 January 2011.

Why is the government trying to stoke new car sales in Beijing while simultaneously trying to limit them? Maybe it’s just a rhetorical move to placate the car dealers, who hate the lottery system. Or maybe it’s because the government makes money from selling cars: it’s a state-controlled industry.

On another front, since July there has been a drought in the provinces of Gansu, Guizhou and Hunan, the Inner Mongolia autonomous region, and the Ningxia Hui autonomous region, which is home to many non-Han ethnic groups including the Hui. It’s caused water shortages for 4.3 million people. In some villages all the crops have died. Drought relief agencies are sending out more water pumps and delivering drinking water.

In Gansu province, at least, the current drought is part of a bigger desertification process.

Once they grew rice in Gansu, but then they moved to wheat:

• Tu Xin-Yi, Drought in Gansu, Tzu Chi, 5 January 2011.

China is among the nations that are experiencing severe desertification. One of the hardest hit areas is Gansu Province, deep in the nation’s heartland. The province, which includes parts of the Gobi, Badain Jaran, and Tengger Deserts, is suffering moisture drawdown year after year. As water goes up into the air, so do irrigation and agriculture. People can hardly make a living from the arid land.

But the land was once quite rich and hospitable to agriculture, a far cry from what greets the eye today. Ruoli, in central Gansu, epitomizes the big dry-up. The area used to be verdant farmland where, with abundant rainfall, all kinds of plants grew lush and dense; but now the land is dry and yields next to nothing. All this dramatic change has come about in just 50 years—lightning-fast, a mere blink of an eye in geological terms.

Rapid desertification is forcing many parties, including the government, to take action. Some residents have moved away to seek better livelihoods elsewhere, and the government offers incentives for people to relocate to the lowlands. Tzu Chi built a new village to accommodate some of these migrants.

Tzu Chi is a Buddhist organization with a strong interest in climate change. The dramatic change they speak of seems to be part of a longer-term drying trend in this region. Here is one of a series of watchtowers near Dunhuang, once a thriving city at the eastern end of the Silk Road. I don’t think this area was such a desert back then:

Meanwhile, down in southern China, the Guangxi Zhuang autonomous region is seeing its worst electricity shortage in two decades, with 30% of the demand for electric power unmet, and rolling blackouts. They blame the situation on a shortage of coal and on the local river not being deep enough to provide hydropower.

On the bright side, China is investing a lot in wind power. Their response to the financial crisis of 2009 included a $220 billion investment in renewable energy. The city of Baoding is now one of the world’s centers for producing wind turbines, and by 2020 China plans to have 100 gigawatts of peak wind power online.

That’s pretty good! Remember our discussion of Pacala and Socolow’s stabilization wedges? The world needs to reduce carbon emissions by roughly 10 gigatonnes per year by about 2050 to stay out of trouble. Pacala and Socolow call each 1-gigatonne slice of this carbon pie a ‘wedge’. We could reduce carbon emissions by one ‘wedge’ by switching 700 gigawatts of coal power to 2000 gigawatts of peak wind power. Why 700 of coal for 2000 of wind? Because unfortunately most of the time wind power doesn’t work at peak efficiency!

So, the Chinese plan to do 1/20 of a wedge of wind power by 2020. Multiply that effort by a factor of 200 worldwide by 2050, and we’ll be in okay shape. That’s quite a challenge! Of course we won’t do it all with wind.
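The arithmetic behind that ‘1/20 of a wedge’ and the factor of 200 is easy to check, using the Pacala and Socolow equivalence quoted above (2000 gigawatts of peak wind per wedge) and the rough 10-wedge target:

```python
wind_per_wedge_gw = 2000   # peak wind capacity that displaces one wedge of coal
china_plan_gw = 100        # China's 2020 target, quoted above
wedges_needed = 10         # ~10 gigatonnes/year of cuts needed by ~2050

fraction_of_wedge = china_plan_gw / wind_per_wedge_gw
print(fraction_of_wedge)                          # 0.05, i.e. 1/20 of a wedge

# How many times China's planned effort would cover all ten wedges:
print(round(wedges_needed / fraction_of_wedge))   # 200
```

The factor of ~3 between coal and peak wind capacity (700 GW vs 2000 GW) is the capacity factor penalty: wind rarely blows at peak strength.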

And while the US and Europe are worried about excessive government and private debt, China is struggling to figure out how to manage its vast savings. China has $3.2 trillion in foreign reserves, which is 30% of the world’s total. The fraction invested in US dollars has dropped from 71% in 1999 to 61% in 2010, but that’s still a lot of money, so any talk of the US defaulting, or of a drop in the dollar, makes the Chinese government very nervous. This article goes into a bit more detail:

• Zhang Monan, Dollar depreciation dilemma, China Daily, 2 August 2011.

In a move to keep the value of their foreign reserves and improve their ratio of return, an increasing number of countries have set up sovereign wealth funds in recent years, especially since the onset of the global financial crisis. So far, nearly 30 countries or regions have established sovereign wealth funds and the total assets at their disposal amounted to $3.98 trillion in early 2011.

Compared to its mammoth official foreign reserve, China has made much slower progress than many countries in the expansion of its sovereign wealth funds, especially in its stock investments. Currently, China has only three main sovereign wealth funds: One with assets of $347.1 billion is managed by the Hong Kong-based SAFE Investment Co Ltd; the second, with assets of $288.8 billion, is managed by the China Investment Corporation, a wholly State-owned enterprise engaging in foreign assets investment; the third fund of $146.5 billion is managed by the National Social Security Fund.

From the perspective of its investment structure, China’s sovereign wealth funds have long attached excessive importance to mobility and security. For example, the China Investment Corporation has invested 87.4 percent of its funds in cash assets and only 3.2 percent in stocks, in sharp contrast to the global average of 45 percent in stock investments.

What’s interesting to me is that on the one hand we have these big problems, like global warming, and on the other hand these people with tons of money struggling to find good ways to invest it. Is there a way to make each of these problems the solution to the other?