Ken Caldeira on What To Do

25 January, 2016

Famous climate scientist Ken Caldeira has a new article out:

• Ken Caldeira, Stop Emissions!, Technology Review, January/February 2016, 41–43.

Let me quote a bit:

Many years ago, I protested at the gates of a nuclear power plant. For a long time, I believed it would be easy to get energy from biomass, wind, and solar. Small is beautiful. Distributed power, not centralized.

I wish I could still believe that.

My thinking changed when I worked with Marty Hoffert of New York University on research that was first published in Nature in 1998. It was the first peer-reviewed study that examined the amount of near-zero-emission energy we would need in order to solve the climate problem. Unfortunately, our conclusions still hold. We need massive deployment of affordable and dependable near-zero-emission energy, and we need a major research and development program to develop better energy and transportation systems.

It’s true that wind and solar power have been getting much more attractive in recent years. Both have gotten significantly cheaper. Even so, neither wind nor solar is dependable enough, and batteries do not yet exist that can store enough energy at affordable prices to get a modern industrial society through those times when the wind is not blowing and the sun is not shining.

Recent analyses suggest that wind and solar power, connected by a continental-scale electric grid and using natural-gas power plants to provide backup, could reduce greenhouse-gas emissions from electricity production by about two-thirds. But generating electricity is responsible for only about one-third of total global carbon dioxide emissions, which are increasing by more than 2 percent a year. So even if we had this better electric sector tomorrow, within a decade or two emissions would be back where they are today.

We need to bring much, much more to bear on the climate problem. It can’t be solved unless it is addressed as seriously as we address national security. The politicians who go to the Paris Climate Conference are making commitments that fall far short of what would be needed to substantially reduce climate risk.

Daunting math

Four weeks ago, a hurricane-strength cyclone smashed into Yemen, in the Arabian Peninsula, for the first time in recorded history. Also this fall, a hurricane with the most powerful winds ever measured slammed into the Pacific coast of Mexico.

Unusually intense storms such as these are a predicted consequence of global warming, as are longer heat waves and droughts and many other negative weather-related events that we can expect to become more commonplace. Already, in the middle latitudes of the Northern Hemisphere, average temperatures are increasing at a rate that is equivalent to moving south about 10 meters (30 feet) each day. This rate is about 100 times faster than most climate change that we can observe in the geologic record, and it gravely threatens biodiversity in many parts of the world. We are already losing about two coral reefs each week, largely as a direct consequence of our greenhouse-gas emissions.

Recently, my colleagues and I studied what will happen in the long term if we continue pulling fossil carbon out of the ground and releasing it into the atmosphere. We found that it would take many thousands of years for the planet to recover from this insult. If we burn all available fossil-fuel resources and dump the resulting carbon dioxide waste in the sky, we can expect global average temperatures to be 9 °C (15 °F) warmer than today even 10,000 years into the future. We can expect sea levels to be about 60 meters (200 feet) higher than today. In much of the tropics, it is possible that mammals (including us) would not be able to survive outdoors in the daytime heat. Thus, it is essential to our long-term well-being that fossil-fuel carbon does not go into our atmosphere.

If we want to reduce the threat of climate change in the near future, there are actions to take now: reduce emissions of short-lived pollutants such as black carbon, cut emissions of methane from natural-gas fields and landfills, and so on. We need to slow and then reverse deforestation, adopt electric cars, and build solar, wind, and nuclear plants.

But while existing technologies can start us down the path, they can’t get us to our goal. Most analysts believe we should decarbonize electricity generation and use electricity for transportation, industry, and even home heating. (Using electricity for heating is wildly inefficient, but there may be no better solution in a carbon-constrained world.) This would require a system of electricity generation several times larger than the one we have now. Can we really use existing technology to scale up our system so dramatically while markedly reducing emissions from that sector?

Solar power is the only energy source that we know can power civilization indefinitely. Unfortunately, we do not have global-scale electricity grids that could wheel solar energy from day to night. At the scale of the regional electric grid, we do not have batteries that can balance daytime electricity generation with nighttime demand.

We should do what we know how to do. But all the while, we need to be thinking about what we don’t know how to do. We need to find better ways to generate, store, and transmit electricity. We also need better zero-carbon fuels for the parts of the economy that can’t be electrified. And most important, perhaps, we need better ways of using energy.

Energy is a means, not an end. We don’t want energy so much as we want what it makes possible: transportation, entertainment, shelter, and nutrition. Given United Nations estimates that the world will have at least 11 billion people by the end of this century (50 percent more than today), and given that we can expect developing economies to grow rapidly, demand for services that require energy is likely to increase by a factor of 10 or more over the next century. If we want to stabilize the climate, we need to reduce total emissions from today’s level by a factor of 10. Put another way, if we want to destroy neither our environment nor our economy, we need to reduce the emissions per energy service provided by a factor of 100. This requires something of an energy miracle.

The essay continues.

Near the end, he writes “despite all these reasons for despair, I’m hopeful”. He is hopeful that a collective change of heart is underway that will enable humanity to solve this problem. But he doesn’t claim to know any workable solution to the problem. In fact, he mostly lists reasons why various possible solutions won’t be enough.
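As a quick sanity check on the arithmetic in the passage above, here is a minimal back-of-envelope sketch (mine, not Caldeira’s; it uses only the figures he quotes: electricity is about a third of total emissions, the wind-plus-solar-plus-gas scenario cuts that third by about two-thirds, and total emissions grow by roughly 2 percent a year):

```python
import math

# Figures quoted in Caldeira's essay (approximate).
electricity_share = 1 / 3   # fraction of global CO2 emissions from electricity
electricity_cut = 2 / 3     # reduction achievable within that sector
growth_rate = 0.02          # overall emissions growth per year ("more than 2 percent")

# One-time cut to total emissions if only the electric sector is cleaned up.
total_cut = electricity_share * electricity_cut      # about 22%
remaining = 1 - total_cut

# Years of 2%/yr growth needed for total emissions to climb back to today's level.
years_to_rebound = math.log(1 / remaining) / math.log(1 + growth_rate)
print(f"one-time cut: {total_cut:.0%}; back to today's level in ~{years_to_rebound:.0f} years")

# The factor-of-10 / factor-of-100 claim: if demand for energy services grows
# tenfold while total emissions must fall tenfold, emissions per unit of
# service must fall by a factor of 10 * 10 = 100.
print("required drop in emissions per service:", 10 * 10)
```

The rebound time comes out at roughly a dozen years, which matches Caldeira’s “within a decade or two”.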


Underestimating Renewables

4 January, 2016

The International Energy Agency, or IEA for short, is an autonomous intergovernmental organization based in Paris. They were established in the 1970’s after the OPEC embargo sent oil prices to new highs. Their main job is to guess the future when it comes to energy production. They do this in their annual World Energy Outlook or WEO.

I’ve tended to trust their predictions, since they seem to have a lot of expertise and don’t seem to have a strong axe to grind. They believe global warming is a serious problem and they’ve outlined some plans for what to do.

However, I’m now convinced that they’ve consistently underestimated the growth of renewable energy. Not just a little—a lot.

This is bad news in a way: who can I trust now? But of course it’s mainly good news! I am now more optimistic about the potential of wind and solar power.

To explain what I mean, I’m just going to quote a chunk of this article:

• David Roberts, The International Energy Agency consistently underestimates wind and solar power. Why?, Vox, October 12, 2015.

Here goes:

David Roberts on the IEA

That the IEA has historically underestimated wind and solar is beyond dispute. The latest look at the issue comes from Energy Post editor Karel Beckman, who draws on a recent report from the Energy Watch Group (EWG), an independent Berlin-based think tank. The report analyzes the predictive success of previous WEOs.

Here’s the history of additions to electric generation capacity by renewables excluding big hydro, along with successive WEO projections:

[Chart from Energy Watch Group.]

As you can see, IEA keeps bumping up its projections, but never enough to catch up to reality. It’s only now getting close.

It gets even worse when you dig into the details. Here’s the bill of particulars:

• WEO 2010 projected 180 GW of installed solar PV capacity by 2024; that target was met in January 2015.

• Current installed PV capacity exceeds WEO 2010 projections for 2015 by threefold.

• Installed wind capacity in 2010 exceeded WEO 2002 and 2004 projections by 260 and 104 percent respectively.

• WEO 2002 projections for wind energy in 2030 were exceeded in 2010.

Other, independent analysts (like those at Bloomberg New Energy Finance and Citi) have come closer to accurately forecasting renewables. The only forecasts that match IEA’s inaccurate pessimism are those from the likes of BP, Shell, and Exxon Mobil.

Here are IEA’s wind and solar projections broken out, from a 2014 post by the folks at eco-consultancy Ecofys:


Back in 2013, energy analyst Adam Whitmore took a look at the IEA’s track record on renewables. He found it abysmal, like everyone else. This year, he returned to the WEO to see if it has improved and found that, well, it hasn’t.

Here he shows the rate of growth in annual installations of renewables, and what the IEA projects for the future:


(The dashed lines are the standard WEO projections, what happens if nothing changes. The dotted lines are from the “bridge scenario” in the WEO Special Report on Energy and Climate Change, which is supposed to represent some policy ambition.)

As Whitmore says, it’s possible that the rate of solar PV installations will suddenly plunge by some 40 percent and then enter a long steady-state period, but there’s no reason to think it’s particularly plausible.

For more

Roberts goes on to analyze various possible reasons for the IEA’s consistent underestimates. They’re worth reading, but none of them seems like an obvious smoking gun.

I suppose if I were very careful I would check all the graphs and numbers in Roberts’ article, but I’m inclined to trust them. He’s getting them from various sources; this is a factual issue that can be easily checked, and I haven’t seen anyone arguing the other side.

If you want to check some numbers yourself, you can download these free books:

• WEO 2011.

• WEO 2010.

• WEO 2009.

• WEO 2008.

• WEO 2007.

• WEO 2006.

The following report, mentioned above, goes into more detail about the IEA’s failures:

• Matthieu Metayer, Christian Breyer and Hans-Josef Fell, The projections for the future and quality in the past of the World Energy Outlook for solar PV and other renewable energy technologies, Energy Watch Group, 2015.

They write:

Summing up, the IEA keeps ignoring the exponential growth of new renewable energies such as solar and wind, and does not learn from its past mistakes.

But this leaves me with a question. Who is doing the best job of predicting energy trends? This is where we could really use a well-developed, easily accessed prediction market.


Why Google Gave Up

5 January, 2015

I was disappointed when Google gave up. In 2007, the company announced a bold initiative to fight global warming:

Google’s Goal: Renewable Energy Cheaper than Coal

Creates renewable energy R&D group and supports breakthrough technologies

Mountain View, Calif. (November 27, 2007) – Google (NASDAQ: GOOG) today announced a new strategic initiative to develop electricity from renewable energy sources that will be cheaper than electricity produced from coal. The newly created initiative, known as RE<C, will focus initially on advanced solar thermal power, wind power technologies, enhanced geothermal systems and other potential breakthrough technologies. RE<C is hiring engineers and energy experts to lead its research and development work, which will begin with a significant effort on solar thermal technology, and will also investigate enhanced geothermal systems and other areas. In 2008, Google expects to spend tens of millions on research and development and related investments in renewable energy. As part of its capital planning process, the company also anticipates investing hundreds of millions of dollars in breakthrough renewable energy projects which generate positive returns.

But in 2011, Google shut down the program. I never heard why. Recently two engineers involved in the project have given a good explanation:

• Ross Koningstein and David Fork, What it would really take to reverse climate change, 18 November 2014.

Please read it!

But the short version is this. They couldn’t find a way to accomplish their goal: producing a gigawatt of renewable power more cheaply than a coal-fired plant — and in years, not decades.

And since then, they’ve been reflecting on their failure and they’ve realized something even more sobering. Even if they’d been able to realize their best-case scenario — a 55% carbon emissions cut by 2050 — it would not bring atmospheric CO2 back below 350 ppm during this century.

This is not surprising to me.

What would we need to accomplish this? They say two things. First, a cheap, dispatchable, distributed power source:

Consider an average U.S. coal or natural gas plant that has been in service for decades; its cost of electricity generation is about 4 to 6 U.S. cents per kilowatt-hour. Now imagine what it would take for the utility company that owns that plant to decide to shutter it and build a replacement plant using a zero-carbon energy source. The owner would have to factor in the capital investment for construction and continued costs of operation and maintenance—and still make a profit while generating electricity for less than $0.04/kWh to $0.06/kWh.

That’s a tough target to meet. But that’s not the whole story. Although the electricity from a giant coal plant is physically indistinguishable from the electricity from a rooftop solar panel, the value of generated electricity varies. In the marketplace, utility companies pay different prices for electricity, depending on how easily it can be supplied to reliably meet local demand.

“Dispatchable” power, which can be ramped up and down quickly, fetches the highest market price. Distributed power, generated close to the electricity meter, can also be worth more, as it avoids the costs and losses associated with transmission and distribution. Residential customers in the contiguous United States pay from $0.09/kWh to $0.20/kWh, a significant portion of which pays for transmission and distribution costs. And here we see an opportunity for change. A distributed, dispatchable power source could prompt a switchover if it could undercut those end-user prices, selling electricity for less than $0.09/kWh to $0.20/kWh in local marketplaces. At such prices, the zero-carbon system would simply be the thrifty choice.

But “dispatchable”, they say, means “not solar”.

Second, a lot of carbon sequestration:

While this energy revolution is taking place, another field needs to progress as well. As Hansen has shown, if all power plants and industrial facilities switch over to zero-carbon energy sources right now, we’ll still be left with a ruinous amount of CO2 in the atmosphere. It would take centuries for atmospheric levels to return to normal, which means centuries of warming and instability. To bring levels down below the safety threshold, Hansen’s models show that we must not only cease emitting CO2 as soon as possible but also actively remove the gas from the air and store the carbon in a stable form. Hansen suggests reforestation as a carbon sink. We’re all for more trees, and we also exhort scientists and engineers to seek disruptive technologies in carbon storage.

How to achieve these two goals? They say government and energy businesses should spend 10% of employee time on “strange new ideas that have the potential to be truly disruptive”.


Wind Power and the Smart Grid

18 June, 2014



Electric power companies complain about wind power because it’s intermittent: if suddenly the wind stops, they have to bring in other sources of power.

This is no big deal if we only use a little wind. Across the US, wind now supplies 4% of electric power; even in Germany it’s just 8%. The problem starts if we use a lot of wind. If we’re not careful, we’ll need big fossil-fuel-powered electric plants when the wind stops. And these need to be turned on, ready to pick up the slack at a moment’s notice!

So, a few years ago Xcel Energy, which supplies much of Colorado’s power, ran ads opposing a proposal that it use renewable sources for 10% of its power.

But now things have changed. Now Xcel gets about 15% of their power from wind, on average. And sometimes this spikes to much more!

What made the difference?

Every few seconds, hundreds of turbines measure the wind speed. Every 5 minutes, they send this data to high-performance computers 100 miles away at the National Center for Atmospheric Research in Boulder. NCAR crunches these numbers along with data from weather satellites, weather stations, and other wind farms – and creates highly accurate wind power forecasts.

With better prediction, Xcel can do a better job of shutting down idling backup plants on days when they’re not needed. Last year was a breakthrough year – better forecasts saved Xcel nearly as much money as they had in the three previous years combined.
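I can’t resist sketching the basic step of turning a wind-speed forecast into a power forecast. This is not NCAR’s or Xcel’s actual code, just a minimal illustration of a turbine ‘power curve’: output grows roughly with the cube of wind speed between the cut-in and rated speeds, stays flat up to the cut-out speed, and is zero outside that range (all numbers below are made up for illustration):

```python
def turbine_power_mw(wind_speed_ms,
                     cut_in=3.0, rated_speed=12.0, cut_out=25.0, rated_mw=2.0):
    """Very rough power curve for a single turbine (illustrative numbers only)."""
    if wind_speed_ms < cut_in or wind_speed_ms >= cut_out:
        return 0.0                      # too little wind, or shut down for safety
    if wind_speed_ms >= rated_speed:
        return rated_mw                 # at or above rated speed: full output
    # Below rated speed, power scales roughly with the cube of wind speed.
    frac = (wind_speed_ms**3 - cut_in**3) / (rated_speed**3 - cut_in**3)
    return rated_mw * frac

# A forecast of hourly wind speeds (m/s) becomes a forecast of farm output.
forecast = [4.5, 7.0, 11.0, 13.5, 9.0, 2.0]
farm_output = [300 * turbine_power_mw(v) for v in forecast]   # a 300-turbine farm
print([round(p, 1) for p in farm_output])
```

Summing such curves over a wind farm, fed with probabilistic wind-speed forecasts, is the kind of calculation that lets a utility decide how much backup it really needs to keep spinning.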

It’s all part of the emerging smart grid—an intelligent network that someday will include appliances and electric cars. With a good smart grid, we could set our washing machine to run when power is cheap. Maybe electric cars could store solar power in the day, use it to power neighborhoods when electricity demand peaks in the evening – then recharge their batteries using wind power in the early morning hours. And so on.

References

I would love it if the Network Theory project could ever grow to the point of helping design the smart grid. So far we are doing much more ‘foundational’ work on control theory, along with a more applied project on predicting El Niños. I’ll talk about both of these soon! But I have big hopes and dreams, so I want to keep learning more about power grids and the like.

Here are two nice references:

• Kevin Bullis, Smart wind and solar power, from 10 breakthrough technologies, Technology Review, 23 April 2014.

• Keith Parks, Yih-Huei Wan, Gerry Wiener and Yubao Liu, Wind energy forecasting: a collaboration of the National Center for Atmospheric Research (NCAR) and Xcel Energy.

The first is fun and easy to read. The second has more technical details. It describes the software used (the picture on top of this article shows a bit of this), and also some of the underlying math and physics. Let me quote a bit:

High-resolution Mesoscale Ensemble Prediction Model (EPM)

It is known that atmospheric processes are chaotic in nature. This implies that even small errors in the model initial conditions combined with the imperfections inherent in the NWP model formulations, such as truncation errors and approximations in model dynamics and physics, can lead to a wind forecast with large errors for certain weather regimes. Thus, probabilistic wind prediction approaches are necessary for guiding wind power applications. Ensemble prediction is at present a practical approach for producing such probabilistic predictions. An innovative mesoscale Ensemble Real-Time Four Dimensional Data Assimilation (E-RTFDDA) and forecasting system that was developed at NCAR was used as the basis for incorporating this ensemble prediction capability into the Xcel forecasting system.

Ensemble prediction means that instead of a single weather forecast, we generate a probability distribution on the set of weather forecasts. The paper has references explaining this in more detail.
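Here is a toy version of ensemble prediction, just to make the idea concrete (my own sketch, not the E-RTFDDA system): perturb the initial conditions slightly, run the same forecast model on each perturbed state, and read off the spread of outcomes as a probability distribution:

```python
import random
import statistics

def forecast_model(initial_wind_ms, hours):
    """Stand-in for an NWP model: a noisy, drifting wind-speed forecast."""
    wind = initial_wind_ms
    for _ in range(hours):
        wind = max(0.0, wind + random.gauss(0.0, 0.8))  # crude chaotic-ish evolution
    return wind

observed_wind = 8.0      # m/s, today's (imperfect) measurement
n_members = 50
horizon_hours = 24

# Each ensemble member starts from a slightly perturbed initial condition.
members = [forecast_model(observed_wind + random.gauss(0.0, 0.5), horizon_hours)
           for _ in range(n_members)]

print(f"mean forecast: {statistics.mean(members):.1f} m/s, "
      f"spread (std dev): {statistics.stdev(members):.1f} m/s")
```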

We had a nice discussion of wind power and the smart grid over on G+. Among other things, John Despujols mentioned the role of ‘smart inverters’ in enhancing grid stability:

• Smart solar inverters smooth out voltage fluctuations for grid stability, DigiKey article library.

A solar inverter converts the variable direct current output of a photovoltaic solar panel into alternating current usable by the electric grid. There’s a lot of math involved here—click the link for a Wikipedia summary. But solar inverters are getting smarter.

Wild fluctuations

While the solar inverter has long been the essential link between the photovoltaic panel and the electricity distribution network and converting DC to AC, its role is expanding due to the massive growth in solar energy generation. Utility companies and grid operators have become increasingly concerned about managing what can potentially be wildly fluctuating levels of energy produced by the huge (and still growing) number of grid-connected solar systems, whether they are rooftop systems or utility-scale solar farms. Intermittent production due to cloud cover or temporary faults has the potential to destabilize the grid. In addition, grid operators are struggling to plan ahead due to lack of accurate data on production from these systems as well as on true energy consumption.

In large-scale facilities, virtually all output is fed to the national grid or micro-grid, and is typically well monitored. At the rooftop level, although individually small, collectively the amount of energy produced has a significant potential. California estimated it has more than 150,000 residential rooftop grid-connected solar systems with a potential to generate 2.7 MW.

However, while in some systems all the solar energy generated is fed to the grid and not accessible to the producer, others allow energy generated to be used immediately by the producer, with only the excess fed to the grid. In the latter case, smart meters may only measure the net output for billing purposes. In many cases, information on production and consumption, supplied by smart meters to utility companies, may not be available to the grid operators.

Getting smarter

The solution according to industry experts is the smart inverter. Every inverter, whether at panel level or megawatt-scale, has a role to play in grid stability. Traditional inverters have, for safety reasons, become controllable, so that they can be disconnected from the grid at any sign of grid instability. It has been reported that sudden, widespread disconnects can exacerbate grid instability rather than help settle it.

Smart inverters, however, provide a greater degree of control and have been designed to help maintain grid stability. One trend in this area is to use synchrophasor measurements to detect and identify a grid instability event, rather than conventional ‘perturb-and-observe’ methods. The aim is to distinguish between a true island condition and a voltage or frequency disturbance which may benefit from additional power generation by the inverter rather than a disconnect.

Smart inverters can change the power factor. They can input or receive reactive power to manage voltage and power fluctuations, driving voltage up or down depending on immediate requirements. Adaptive volts-amps reactive (VAR) compensation techniques could enable ‘self-healing’ on the grid.

Two-way communications between smart inverter and smart grid not only allow fundamental data on production to be transmitted to the grid operator on a timely basis, but upstream data on voltage and current can help the smart inverter adjust its operation to improve power quality, regulate voltage, and improve grid stability without compromising safety. There are considerable challenges still to overcome in terms of agreeing and evolving national and international technical standards, but this topic is not covered here.

The benefits of the smart inverter over traditional devices have been recognized in Germany, Europe’s largest solar energy producer, where an initiative is underway to convert all solar energy producers’ inverters to smart inverters. Although the cost of smart inverters is slightly higher than traditional systems, the advantages gained in grid balancing and accurate data for planning purposes are considered worthwhile. Key features of smart inverters required by German national standards include power ramping and volt/VAR control, which directly influence improved grid stability.
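The volt/VAR idea in the excerpt above can be stated in a few lines. Here is a minimal sketch (the dead band, slope, and limits are my own illustrative numbers, not any actual standard’s curve): when local voltage sags below nominal the inverter injects reactive power, when it rises the inverter absorbs reactive power, and nothing happens inside a small dead band around nominal:

```python
def reactive_power_setpoint(voltage_pu, q_max=0.4, dead_band=0.01, slope=10.0):
    """Volt/VAR droop: reactive power (per unit) as a function of voltage (per unit).

    Illustrative parameters only: +/-1% dead band around nominal voltage,
    linear droop outside it, clipped at 40% of the inverter's rating.
    """
    deviation = voltage_pu - 1.0
    if abs(deviation) <= dead_band:
        return 0.0                           # voltage near nominal: do nothing
    # Low voltage -> inject reactive power (positive Q); high voltage -> absorb.
    q = -slope * (deviation - dead_band * (1 if deviation > 0 else -1))
    return max(-q_max, min(q_max, q))

for v in (0.93, 0.98, 1.00, 1.02, 1.07):
    print(v, round(reactive_power_setpoint(v), 3))
```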


Carbon Emissions from Coal-Fired Power Plants

13 September, 2013

The 50 dirtiest electric power plants in the United States—all coal-fired—emit as much carbon dioxide as half of America’s 240 million cars.

The dirtiest 1% spew out a third of the carbon produced by US power plants.

And the 100 dirtiest plants—still a tiny fraction of the country’s 6,000 power plants—account for a fifth of all US carbon emissions.

According to this report, curbing the emissions of these worst offenders would be one of the best ways to cut US carbon emissions, reducing the risk that emissions will trigger dangerous climate change:

• Environment America Research and Policy Center, America’s dirtiest power plants: their oversized contribution to global warming and what we can do about it, 2013.

Some states in the US already limit carbon pollution from power plants. At the start of this year, California imposed a cap on carbon dioxide emissions, and in 2014 it will link with Quebec’s carbon market. Nine states from Maine to Maryland participate in the Regional Greenhouse Gas Initiative (RGGI), which caps emissions from power plants in the Northeast.

At the federal level, a big step forward was the 2007 Supreme Court decision saying the Environmental Protection Agency should develop plans to regulate carbon emissions. The EPA is now getting ready to impose carbon emission limits for all new power plants in the US. But some of the largest sources of carbon dioxide are existing power plants, so getting them to shape up or shut down could have big benefits.

What to do?

Here’s what the report suggests:

• The Obama Administration should set strong limits on carbon dioxide pollution from new power plants to prevent the construction of a new generation of dirty power plants, and force existing power plants to clean up by setting strong limits on carbon dioxide emissions from all existing power plants.

• New plants – The Environmental Protection Agency (EPA) should work to meet its September 2013 deadline for re-proposing a stringent emissions standard for new power plants. It should also set a deadline for finalizing these standards no later than June 2015.

• Existing plants – The EPA should work to meet the timeline put forth by President Obama for proposing and finalizing emissions standards for existing power plants. This timeline calls for limits on existing plants to be proposed by June 2014 and finalized by June 2015. The standards should be based on the most recent climate science and designed to achieve the emissions reduction targets that are necessary to avoid the worst impacts of global warming.

In addition to cutting pollution from power plants, the United States should adopt a suite of clean energy policies at the local, state, and federal levels to curb emissions of carbon dioxide from energy use in other sectors.

In particular, the United States should prioritize establishing a comprehensive, national plan to reduce carbon pollution from all sources – including transportation, industrial activities, and the commercial and residential sectors.

Other policies to curb emissions include:

• Retrofitting three-quarters of America’s homes and businesses for improved energy efficiency, and implementing strong building energy codes to dramatically reduce fossil fuel consumption in new homes and businesses.

• Adopting a federal renewable electricity standard that calls for 25 percent of America’s electricity to come from clean, renewable sources by 2025.

• Strengthening and implementing state energy efficiency resource standards that require utilities to deliver energy efficiency improvements in homes, businesses and industries.

• Installing more than 200 gigawatts of solar panels and other forms of distributed renewable energy at residential, commercial and industrial buildings over the next two decades.

• Encouraging the use of energy-saving combined heat-and-power systems in industry.

• Facilitating the deployment of millions of plug-in vehicles that operate partly or solely on electricity, and adopting clean fuel standards that require a reduction in the carbon intensity of transportation fuels.

• Ensuring that the majority of new residential and commercial development in metropolitan areas takes place in compact, walkable communities with access to a range of transportation options.

• Expanding public transportation service to double ridership by 2030, encouraging further ridership increases through better transit service, and reducing per-mile global warming pollution from transit vehicles. The U.S. should also build high-speed rail lines in 11 high-priority corridors by 2030.

• Strengthening and expanding the Regional Greenhouse Gas Initiative, which limits carbon dioxide pollution from power plants in nine northeastern states, and implementing California’s Global Warming Solutions Act (AB32), which places an economy-wide cap on the state’s greenhouse gas emissions.

Carbon emitted per power produced

An appendix to this report lists the power plants that emit the most carbon dioxide by name, along with estimates of their emissions. That’s great! But annoyingly, they do not seem to list the amounts of energy per year produced by these plants.

If carbon emissions were strictly proportional to the amount of energy produced, that would tend to undercut the notion that the biggest carbon emitters are especially naughty. But in fact there’s a lot of variability in the amount of carbon emitted per unit of energy generated. You can see that in this chart of theirs:

So, it would be good to see a list of the worst power plants in terms of CO2 emitted per unit of energy generated.

The people who prepared this report could probably create such a list without much extra work, since they write:

We obtained fuel consumption and electricity generation data for power plants operating in the United States from the U.S. Department of Energy’s Energy Information Administration (EIA) 2011 December EIA-923 Monthly Time Series.
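With the EIA-923 fuel consumption and net generation figures in hand, the intensity calculation is one division per plant. Here’s a minimal sketch (the plant names and numbers are invented for illustration; real values would come from the EIA data and the report’s appendix):

```python
# Hypothetical plants: (name, annual CO2 emissions in tons, annual net generation in MWh)
plants = [
    ("Plant A", 20_000_000, 18_000_000),
    ("Plant B", 15_000_000, 20_000_000),
    ("Plant C",  9_000_000,  5_000_000),
]

# Rank by emissions intensity (tons of CO2 per MWh) rather than total emissions.
by_intensity = sorted(
    ((name, co2 / mwh) for name, co2, mwh in plants),
    key=lambda pair: pair[1],
    reverse=True,
)

for name, intensity in by_intensity:
    print(f"{name}: {intensity:.2f} tons CO2 per MWh")
```

On this ranking the smallest plant can easily come out as the dirtiest, which is exactly the point of asking for intensity rather than totals.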


John Harte

27 October, 2012

Earlier this week I gave a talk on the Mathematics of Planet Earth at the University of Southern California, and someone there recommended that I look into John Harte’s work on maximum entropy methods in ecology. He works at U.C. Berkeley.

I checked out his website and found that his goals resemble mine: save the planet and understand its ecosystems. He’s a lot further along than I am, since he comes from a long background in ecology while I’ve just recently blundered in from mathematical physics. I can’t really say what I think of his work since I’m just learning about it. But I thought I should point out its existence.

This free book is something a lot of people would find interesting:

• John and Mary Ellen Harte, Cool the Earth, Save the Economy: Solving the Climate Crisis Is EASY, 2008.

EASY? Well, it’s an acronym. Here’s the basic idea of the US-based plan described in this book:

Any proposed energy policy should include these two components:

Technical/Behavioral: What resources and technologies are to be used to supply energy? On the demand side, what technologies and lifestyle changes are being proposed to consumers?

Incentives/Economic Policy: How are the desired supply and demand options to be encouraged or forced? Here the options include taxes, subsidies, regulations, permits, research and development, and education.

And a successful energy policy should satisfy the AAA criteria:

Availability. The climate crisis will rapidly become costly to society if we do not take action expeditiously. We need to adopt now those technologies that are currently available, provided they meet the following two additional criteria:

Affordability. Because of the central role of energy in our society, its cost to consumers should not increase significantly. In fact, a successful energy policy could ultimately save consumers money.

Acceptability. All energy strategies have environmental, land use, and health and safety implications; these must be acceptable to the public. Moreover, while some interest groups will undoubtedly oppose any particular energy policy, political acceptability at a broad scale is necessary.

Our strategy for preventing climate catastrophe and achieving energy independence includes:

Energy Efficient Technology at home and at the workplace. Huge reductions in home energy use can be achieved with available technologies, including more efficient appliances such as refrigerators, water heaters, and light bulbs. Home retrofits and new home design features such as “smart” window coatings, lighter-colored roofs where there are hot summers, better home insulation, and passive solar designs can also reduce energy use. Together, energy efficiency in home and industry can save the U.S. up to approximately half of the energy currently consumed in those sectors, and at no net cost—just by making different choices. Sounds good, doesn’t it?

Automobile Fuel Efficiency. Phase in higher Corporate Average Fuel Economy (CAFE) standards for automobiles, SUVs and light trucks by requiring vehicles to go 35 miles per gallon of gas (mpg) by 2015, 45 mpg by 2020, and 60 mpg by 2030. This would rapidly wipe out our dependence on foreign oil and cut emissions from the vehicle sector by two-thirds. A combination of plug-in hybrid, lighter car body materials, re-design and other innovations could readily achieve these standards. This sounds good, too!

Solar and Wind Energy. Rooftop photovoltaic panels and solar water heating units should be phased in over the next 20 years, with the goal of solar installation on 75% of U.S. homes and commercial buildings by 2030. (Not all roofs receive sufficient sunlight to make solar panels practical for them.) Large wind farms, solar photovoltaic stations, and solar thermal stations should also be phased in so that by 2030, all U.S. electricity demand will be supplied by existing hydroelectric, existing and possibly some new nuclear, and, most importantly, new solar and wind units. This will require investment in expansion of the grid to bring the new supply to the demand, and in research and development to improve overnight storage systems. Achieving this goal would reduce our dependence on coal to practically zero. More good news!

You are part of the answer. Voting wisely for leaders who promote the first three components is one of the most important individual actions one can make. Other actions help, too. Just as molecules make up mountains, individual actions taken collectively have huge impacts. Improved driving skills, automobile maintenance, reusing and recycling, walking and biking, wearing sweaters in winter and light clothing in summer, installing timers on thermostats and insulating houses, carpooling, paying attention to energy efficiency labels on appliances, and many other simple practices and behaviors hugely influence energy consumption. A major education campaign, both in schools for youngsters and by the media for everyone, should be mounted to promote these consumer practices.

No part of EASY can be left out; all parts are closely integrated. Some parts might create much larger changes—for example, more efficient home appliances and automobiles—but all parts are essential. If, for example, we do not achieve the decrease in electricity demand that can be brought about with the E of EASY, then it is extremely doubtful that we could meet our electricity needs with the S of EASY.

It is equally urgent that once we start implementing the plan, we aggressively export it to other major emitting nations. We can reduce our own emissions all we want, but the planet will continue to warm if we can’t convince other major global emitters to reduce their emissions substantially, too.

What EASY will achieve. If no actions are taken to reduce carbon dioxide emissions, in the year 2030 the U.S. will be emitting about 2.2 billion tons of carbon in the form of carbon dioxide. This will be an increase of 25% from today’s emission rate of about 1.75 billion tons per year of carbon. By following the EASY plan, the U.S. share in a global effort to solve the climate crisis (that is, prevent catastrophic warming) will result in U.S. emissions of only about 0.4 billion tons of carbon by 2030, which represents a little less than 25% of 2007 carbon dioxide emissions. Stated differently, the plan provides a way to eliminate 1.8 billion tons per year of carbon by that date.

We must act urgently: in the 14 months it took us to write this book, atmospheric CO2 levels rose by several billion tons of carbon, and more climatic consequences have been observed. Let’s assume that we conserve our forests and other natural carbon reservoirs at our current levels, as well as maintain our current nuclear and hydroelectric plants (or replace them with more solar and wind generators). Here’s what implementing EASY will achieve, as illustrated by Figure 3.1 on the next page.
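Here, for what it’s worth, is the quick arithmetic check on the emissions numbers in that passage, using only the figures quoted above (all values are billions of tons of carbon per year):

```python
today = 1.75                  # current US emissions, billion tons of carbon per year
business_as_usual_2030 = 2.2  # projected 2030 emissions with no action
easy_2030 = 0.4               # projected 2030 emissions under the EASY plan

growth = business_as_usual_2030 / today - 1       # ~0.26, the book's "increase of 25%"
share_of_today = easy_2030 / today                # ~0.23, "a little less than 25%"
eliminated = business_as_usual_2030 - easy_2030   # 1.8 billion tons of carbon per year

print(f"growth with no action: {growth:.0%}")
print(f"EASY 2030 emissions as a share of today's: {share_of_today:.0%}")
print(f"eliminated by 2030: {eliminated:.1f} billion tons C/yr")
```

Those three claims are at least mutually consistent: 2.2 is roughly 25% more than 1.75, 0.4 is a bit under a quarter of today’s emissions, and the difference is 1.8 billion tons per year.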

Please check out this book and help me figure out if the numbers add up! I could also use help understanding his research, for example:

• John Harte, Maximum Entropy and Ecology: A Theory of Abundance, Distribution, and Energetics, Oxford University Press, Oxford, 2011.

The book is not free but the first chapter is.

This paper looks really interesting too:

• J. Harte, T. Zillio, E. Conlisk and A. B. Smith, Maximum entropy and the state-variable approach to macroecology, Ecology 89 (2008), 2700–2711.

Again, it’s not freely available—tut tut. Ecologists should follow physicists and make their work free online; if you’re serious about saving the planet you should let everyone know what you’re doing! However, the abstract is visible to all, and of course I can use my academic superpowers to get ahold of the paper for myself:

Abstract: The biodiversity scaling metrics widely studied in macroecology include the species-area relationship (SAR), the scale-dependent species-abundance distribution (SAD), the distribution of masses or metabolic energies of individuals within and across species, the abundance-energy or abundance-mass relationship across species, and the species-level occupancy distributions across space. We propose a theoretical framework for predicting the scaling forms of these and other metrics based on the state-variable concept and an analytical method derived from information theory. In statistical physics, a method of inference based on information entropy results in a complete macro-scale description of classical thermodynamic systems in terms of the state variables volume, temperature, and number of molecules. In analogy, we take the state variables of an ecosystem to be its total area, the total number of species within any specified taxonomic group in that area, the total number of individuals across those species, and the summed metabolic energy rate for all those individuals. In terms solely of ratios of those state variables, and without invoking any specific ecological mechanisms, we show that realistic functional forms for the macroecological metrics listed above are inferred based on information entropy. The Fisher log series SAD emerges naturally from the theory. The SAR is predicted to have negative curvature on a log-log plot, but as the ratio of the number of species to the number of individuals decreases, the SAR becomes better and better approximated by a power law, with the predicted slope z in the range of 0.14-0.20. Using the 3/4 power mass-metabolism scaling relation to relate energy requirements and measured body sizes, the Damuth scaling rule relating mass and abundance is also predicted by the theory. We argue that the predicted forms of the macroecological metrics are in reasonable agreement with the patterns observed from plant census data across habitats and spatial scales. While this is encouraging, given the absence of adjustable fitting parameters in the theory, we further argue that even small discrepancies between data and predictions can help identify ecological mechanisms that influence macroecological patterns.
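To get a feel for the kind of calculation involved, here is a bare-bones maximum entropy computation (my own toy sketch, much simpler than Harte’s full framework, which adds further constraints and yields the log-series mentioned in the abstract): maximize entropy over abundances n = 1, …, N subject to a fixed mean abundance N/S, which gives an exponential distribution whose Lagrange multiplier is found numerically:

```python
import math
from scipy.optimize import brentq

S, N = 50, 5000          # state variables: number of species, number of individuals
mean_abundance = N / S   # constraint: average individuals per species

def constrained_mean(lam, n_max=N):
    """Mean of the MaxEnt distribution P(n) ~ exp(-lam * n), n = 1..n_max."""
    weights = [math.exp(-lam * n) for n in range(1, n_max + 1)]
    Z = sum(weights)
    return sum(n * w for n, w in zip(range(1, n_max + 1), weights)) / Z

# Find the Lagrange multiplier that makes the predicted mean equal N/S.
lam = brentq(lambda l: constrained_mean(l) - mean_abundance, 1e-6, 5.0)

weights = [math.exp(-lam * n) for n in range(1, N + 1)]
Z = sum(weights)
print(f"lambda = {lam:.4f}; P(n=1) = {weights[0]/Z:.4f}, P(n=100) = {weights[99]/Z:.6f}")
```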


The Living Smart Grid

7 April, 2012

guest post by Todd McKissick

In the last few years, people in energy circles have begun publicly promoting what they call the smart grid.  This is touted as providing better control, prediction and utilization of our nation’s electrical grid system.  However, it doesn’t provide more benefits to anyone except the utilities.  It’s expected to cost much more and to actually take away some of the convenience of having all the power you want, when you want it.

We can do better.

Let’s investigate the benefits of some changes to their so-called smart grid.  If implemented, these changes will allow instant indirect control and balance of all local grid sections while automatically keeping supply in check with demand.  It can drastically cut the baseload utilization of existing transmission lines.  It can provide early benefits from running in pseudo-parallel mode with no changes at all, by simply publishing customer-specific real-time prices.  Once that gains some traction, a full implementation only requires adding smart meters.  Both of these stages can be adopted at any pace, and the benefits scale with adoption.  Since it allows Demand Reduction (DR) and Distributed Generation (DG) from any small source to compete fairly on price with the big boys, it encourages tremendous competition between both generators and consumers.

To initiate this process, the real-time price must be determined for each customer.  This is easily done at the utility by breaking down their costs and overhead into three categories.  First, generation is monitored at its location.  Second, transmission is monitored for its contribution.  Both of these are being done already, so nothing new yet.  Third, distribution needs to be monitored at all the nodes and end points in the customer’s last leg of the chain.  Much of this is done and the rest is being done or planned through various smart meter movements.  Once all three of these prices are broken down, they can be applied to the various groups of customers and feeder segments.  This yields a total price to each customer that varies in real time with all the dynamics built in.  By simply publishing that price online, it signals the supply/demand imbalance that applies to them.

This is where the self correction aspect of the system comes into play.  If a transmission line goes down, the affected customers’ price will instantly spike, immediately causing loads to drop offline and storage systems and generation systems to boost their output.  This is purely price driven so no hard controls are sent to the customer equipment to make this happen.  Should a specific load be set to critical use, like a lifeline system for a person or business, they have less risk of losing power completely but will pay an increased amount for the duration of the event.  Even transmission rerouting decisions can be based on the price, allowing neighboring local grids to export their excess to aid a nearby shortfall.  Should an area find its price trending higher or lower over time, the economics will easily point to whatever and wherever something is needed to be added to the system.  This makes forecasting the need for new equipment easier at both the utility and the customer level.
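To make the proposal concrete, here is a minimal sketch of the pricing-and-response loop described in the last two paragraphs (all function names, thresholds, and prices are mine, purely for illustration): the utility publishes a per-customer price built from the generation, transmission and distribution components, and each load decides locally whether to run, shed, or ride through a spike:

```python
def realtime_price(generation, transmission, distribution):
    """Customer-specific price ($/kWh) as the sum of the three published components."""
    return generation + transmission + distribution

def load_decision(price, normal_threshold=0.15, critical=False):
    """Price-driven response: no control signal is sent, each load decides locally."""
    if critical:
        return "keep running (pay the spike)"   # lifeline loads ride through, at a cost
    if price > normal_threshold:
        return "shed load / discharge storage"  # price spike: back off or sell stored power
    return "run normally"

# Normal conditions vs. a transmission outage that spikes the local price.
normal = realtime_price(0.05, 0.02, 0.03)      # $0.10/kWh
outage = realtime_price(0.05, 0.20, 0.03)      # $0.28/kWh after a line goes down

for label, price in (("normal", normal), ("line outage", outage)):
    print(f"{label}: ${price:.2f}/kWh -> {load_decision(price)}")
```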

If CO2 or some other emission charge was created, it can quickly be added to the cost of individual generators, allowing the rest of the system to re-balance around it automatically.

Once the price is published, people will begin tracking their home and heavy loading appliances to calculate their exact electrical bill.  When they learn they can adapt usage profiles and save money, they will create systems to automatically do so.  This will lead to intelligent and power saving appliances, a new generation of smart thermostats, short cycling algorithms in HVAC and even more home automation.  The result of these operations is to balance demand to supply.

When this process begins, the financial incentive becomes real for the customer, attracting them to request live billing.  This can happen as small as one customer at a time for anyone with a smart meter installed.  Both customer and utility benefit from their switchover.

A truly intelligent system like this eliminates the necessity of the full grid replacement that some people are proposing.  Instead, it focuses on making the existing grid more stable.  Incrementally and in proportion to adoption, grid stability and redundancy will naturally increase without further cost.  The appliance manufacturers already have many load-predictive products waiting for the market to call for them, so the cost of advancing this whole system overlaps entirely with the cost of the replacement meters that are already being installed or planned.  We need to ensure that the new meters have live rate capability.

This is the single biggest solution to our energy crisis. It will standardize grid interconnection which will entice distributed generation (DG).  As it stands now, most utilities view DG in a negative light with regards to grid stability.  Many issues such as voltage, frequency and phase regulation are often topics they cite.  In reality, however, the current inverter standards ensure that output is appropriately synchronized.  The same applies to power factor issues.  While reducing power sent via the grid directly reduces the load, it’s only half of the picture.

DG with storage and vehicle-to-grid hybrids both give the customer an opportunity to save up their excess and sell it to the grid when it earns the most.  By giving them the live prices, they will also be encouraged to grow their market.  It is an obvious outgrowth for them to buy and store power from the grid in the middle of the night and sell it back for a profit during afternoon peaks.  In fact this is already happening in some markets.

Demand reduction (DR), or load shedding, acts the same as onsite generation in that it reduces the power sent via the grid.  It also acts similar to storage in that it can time shift loads to cheaper rate periods.  To best take advantage of this, people will utilize increasingly better algorithms for price prediction.  The net effect is thousands of individuals competing on prediction techniques to flatten out the peaks into the valleys of the grid’s daily profile.  This competition will be in direct proportion to the local grid instability in a given area.
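As a toy example of the price-prediction-and-shifting algorithms this would encourage (my own sketch; the price profile is invented), a flexible load can simply pick the cheapest contiguous window in the predicted prices:

```python
def cheapest_window(prices, hours_needed):
    """Pick the start hour of the cheapest contiguous block of the given length."""
    costs = [sum(prices[h:h + hours_needed])
             for h in range(len(prices) - hours_needed + 1)]
    return min(range(len(costs)), key=costs.__getitem__)

# Predicted prices ($/kWh) for the next 24 hours: cheap overnight, peak in the evening.
predicted = [0.06]*6 + [0.09]*6 + [0.12]*5 + [0.20]*3 + [0.10]*4
start = cheapest_window(predicted, hours_needed=3)
print(f"run the 3-hour load starting at hour {start}")   # lands in the overnight valley
```

Thousands of households running variations of this against the same published prices is exactly the peak-flattening effect described above.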

According to Peter Mark Jansson and John Schmalzel [1]:

From material utilization perspectives significant hardware is manufactured and installed for this infrastructure often to be used at less than 20-40% of its operational capacity for most of its lifetime. These inefficiencies lead engineers to require additional grid support and conventional generation capacity additions when renewable technologies (such as solar and wind) and electric vehicles are to be added to the utility demand/supply mix. Using actual data from the PJM [PJM 2009] the work shows that consumer load management, real time price signals, sensors and intelligent demand/supply control offer a compelling path forward to increase the efficient utilization and carbon footprint reduction of the world’s grids. Underutilization factors from many distribution companies indicate that distribution feeders are often operated at only 70-80% of their peak capacity for a few hours per year, and on average are loaded to less than 30-40% of their capability.

At this time the utilities are limiting adoption rates to a couple of percent.  A well-known standard could replace that with a call for much more.  Instead of discouraging participation, it will encourage innovation and enhance forecasting, and do so without giving away control over how we wish to use our power.  Best of all, it is paid for by upgrades that are already being planned.  How's that for a low-cost SMART solution?



[1] Peter Mark Jansson and John Schmalzel, Increasing utilization of US electric grids via smart technologies: integration of load management, real time pricing, renewables, EVs and smart grid sensors, The International Journal of Technology, Knowledge and Society 7, 47–60.

