A Double Conference

23 February, 2018

Here’s a cool way to cut carbon emissions: a double conference. The idea is to have a conference in two faraway locations connected by live video stream, to reduce the amount of long-distance travel!

Even better, it’s about a great subject:

• Higher algebra and mathematical physics, August 13–17, 2018, Perimeter Institute, Waterloo, Canada, and Max Planck Institute for Mathematics, Bonn, Germany.

Here’s the idea:

“Higher algebra” has become important throughout mathematics, physics, and mathematical physics, and this conference will bring together leading experts in higher algebra and its mathematical physics applications. In physics, the term “algebra” is used quite broadly: any time you can take two operators or fields, multiply them, and write the answer in some standard form, a physicist will be happy to call this an “algebra”. “Higher algebra” is characterized by the appearance of a hierarchy of multilinear operations (e.g. A-infinity and L-infinity algebras). These structures can be higher categorical in nature (e.g. derived categories, cohomology theories), and can involve mixtures of operations and co-operations (Hopf algebras, Frobenius algebras, etc.). Some of these notions are purely algebraic (e.g. algebra objects in a category), while others are quite geometric (e.g. shifted symplectic structures).

An early manifestation of higher algebra in high-energy physics was supersymmetry. Supersymmetry makes quantum field theory richer and thus more complicated, but at the same time many aspects become more tractable and many problems become exactly solvable. Since then, higher algebra has made numerous appearances in mathematical physics, both high- and low-energy.

Participation is limited. Some financial support is available for early-career mathematicians. For more information and to apply, please visit the conference website of the institute closer to you:

North America: http://www.perimeterinstitute.ca/HAMP
Europe: http://www.mpim-bonn.mpg.de/HAMP

If you have any questions, please write to double.conference.2018@gmail.com.

One of the organizers, Aaron Mazel-Gee, told me:

We are also interested in spreading the idea of double conferences more generally: we’re hoping that our own event’s success inspires other academic communities to organize their own double conferences. We’re hoping to eventually compile a sort of handbook to streamline the process for others, so that they can learn from our own experiences regarding the various unique challenges that organizing such an event poses. Anyways, all of this is just to say that I would be happy for you to publicize this event anywhere that it might reach these broader audiences.

So, if you’re interested in having a double conference, please contact the organizers of this one for tips on how to do it! I’m sure they’ll have better advice after they’ve actually done it. I’ve found that the technical details really matter for these things: it can be very frustrating when they don’t work correctly. Avoiding such problems requires testing everything ahead of time—under conditions that exactly match what you’re planning to do!


California’s “State of the State”

29 January, 2018

On January 25th, Jerry Brown, governor of California, gave his last annual State of the State speech. It’s about looking forward to the future: tackling hard problems now. I wish more politicians were focused on this.

You can see the whole speech annotated here. Here is the first part. The last line states the vision:

The bolder path is still our way forward.

State of the State (first part)

Good morning. As our Constitution requires, I’m here to report on the condition of our state.

Simply put, California is prospering. While it faces its share of difficulties, we should never forget the bounty and the endless opportunities bestowed on this special place—or the distance we’ve all traveled together these last few years.

It is now hard to visualize—or even remember—the hardships, the bankruptcies and the home foreclosures so many experienced during the Great Recession. Unemployment was above 12 percent and 1.3 million Californians lost their jobs.

The deficit was $27 billion in 2011. The New York Times, they called us: “The Coast of Dystopia.” The Wall Street Journal saw: “The Great California Exodus.” The Economist of London pronounced us: “The Ungovernable State.” And the Business Insider simply said: “California is Doomed.”

Even today, you will find critics who claim that the California dream is dead. But I’m used to that. Back in my first term, a prestigious report told us that California had the worst business climate in America. In point of fact, personal income in 1975, my first year as governor, was $154 billion. Today it has grown to $2.4 trillion. In just the last eight years alone, California’s personal income has grown $845 billion and 2.8 million new jobs have been created. Very few places in the world can match that record.

That is one of the reasons why confidence in the work that you are doing has risen so high. That contrasts sharply with the abysmal approval ratings given to the United States Congress. Certainly our on-time budgets are well received, thanks in large part to the lowering of the two-thirds vote to a simple majority to pass the budget.

But public confidence has also been inspired by your passing—with both Republican and Democratic votes:

• Pension reform—and don’t minimize that, that was a big pension reform. May not be the final one, but it was there and you did it, Republicans and Democrats;

• Workers’ Compensation reform, another vote with Republicans and Democrats there;

• The Water Bond;

• The Rainy Day Fund; and

• The Cap-and-Trade Program.

And by the way, you Republicans, as I look over here and I look over there, don’t worry, I’ve got your back!

All these programs are big and very important to our future. And their passage demonstrates that some American governments can actually get things done—even in the face of deepening partisan division.

The recent fires and mudslides show us how much we are affected by natural disasters and how we can rise to the occasion—at the local level, at the state level and with major help from the federal government. I want to especially thank all of the firefighters, first responders and volunteers. They answered the call to help their fellow neighbors, in some cases even when their own homes were burning. Here we see an example of people working together irrespective of party.

The president himself has given California substantial assistance and the congressional leadership is now sponsoring legislation to help California, as well as the other states that have suffered major disasters—Texas, Florida and the Commonwealth of Puerto Rico.

In this regard, we should never forget our dependency on the natural environment and the fundamental challenges it presents to the way we live. We can’t fight nature. We have to learn how to get along with her. And I want to say that again: We can’t fight nature. We have to learn how to get along with her.

And that’s not so easy. For thousands of years this land now called California supported no more than 300,000 people. That’s 300,000 people and they did that for thousands and thousands—some people say, as long as 20,000 years. Today, 40 million people live in the same place and their sheer impact on the soils, the forests and the entire ecosystem has no long-term precedent. That’s why we have to innovate constantly and create all manner of shelter, machines and creative technologies. That will continue, but only with ever greater public and private investment.

The devastating forest fires and the mudslides are a profound and growing challenge. Eight of the state’s most destructive fires have occurred in the last five years. Last year’s Thomas fire in Ventura and Santa Barbara counties was the largest in recorded history. The mudslides that followed were among the most lethal the state has ever encountered. In 2017, we had the highest average summer temperatures in recorded history. Over the last 40 years, California’s fire season has increased 78 days—and in some places it is nearly year-round.

So we have to be ready with the necessary firefighting capability and communication systems to warn residents of impending danger. We also have to manage our forests—and soils—much more intelligently.

Toward that end, I will convene a task force composed of scientists and knowledgeable forest practitioners to review thoroughly the way our forests are managed and suggest ways to reduce the threat of devastating fires. They will also consider how California can increase resiliency and carbon storage capacity. Trees in California should absorb CO2, not generate huge amounts of black carbon and greenhouse gas as they do today when forest fires rage across the land.

Despite what is widely believed by some of the most powerful people in Washington, the science of climate change is not in doubt. The national academies of science of every major country in the world—including Russia and China—have all endorsed the mainstream view that human-caused greenhouse gases are trapping heat in the oceans and in the atmosphere and that action must be taken to avert catastrophic changes in our weather systems. All nations agree except one and that is solely because of one man: our current president.

Here in California, we follow a different path. Enlightened by top scientists at the University of California, Stanford and Caltech, among others, our state has led the way. I’ll enumerate just how:

• Building and appliance efficiency standards;

• Renewable electricity—reaching 50 percent in just a few years;

• A powerful low-carbon fuel standard; incentives for zero-emission vehicles;

• Ambitious policies to reduce short-lived climate pollutants like methane and black carbon;

• A UN sponsored climate summit this September in San Francisco; and

• The nation’s only functioning cap-and-trade system.

I will shortly provide an expenditure plan for the revenues that the cap-and-trade auctions have generated. Your renewing this program on a bipartisan basis was a major achievement and will ensure that we will have substantial sums to invest in communities all across the state—both urban and agricultural.

The goal is to make our neighborhoods and farms healthier, our vehicles cleaner—zero emission the sooner the better—and all our technologies increasingly lowering their carbon output. To meet these ambitious goals, we will need five million zero-emission vehicles on the road by 2030. And we’re going to get there. Believe me. We only have 350,000 today, so we’ve all got a lot of work. And think of all the jobs and how much cleaner our air will be then.
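As an aside, it’s easy to work out what that target implies. Going from 350,000 zero-emission vehicles in 2018 to five million by 2030 means the fleet has to multiply by roughly fourteen in twelve years. Here’s a quick sanity check in Python (my own arithmetic, not from the speech):

```python
# Required compound annual growth rate to go from 350,000 ZEVs (2018)
# to 5,000,000 ZEVs (2030) -- a back-of-envelope check.
current = 350_000
target = 5_000_000
years = 2030 - 2018

growth_factor = (target / current) ** (1 / years)
print(f"required growth: {(growth_factor - 1) * 100:.0f}% per year")
# required growth: 25% per year
```

So the fleet would need to grow about 25% per year, every year, for twelve years straight.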

When you passed cap-and-trade legislation, you also passed a far-reaching air pollution measure that for the first time focuses on pollutants that disproportionately affect specific neighborhoods. Instead of just measuring pollutants over vast swaths of land, regulators will zero in on those communities which are particularly disadvantaged by trains, trucks or factories.

Along with clean air, clean water is a fundamental good that must be protected and made available on a sustainable basis. When droughts occur, conservation measures become imperative. In recent years, you have passed historic legislation to manage California’s groundwater, which local governments are now implementing.

In addition, you passed—and more than two-thirds of voters approved—a water bond that invests in safe drinking water, conservation and storage. As a result, we will soon begin expending funds on some of the storage we’ve needed for decades.

As the climate changes and more water arrives as rain instead of snow, it is crucial that we are able to capture the overflow in a timely and responsible way. That, together with recycling and rainwater recapture will put us in the best position to use water wisely and in the most efficient way possible. We are also restoring the Sacramento and San Joaquin watersheds to protect water supplies and improve California’s iconic salmon runs.

Finally, we have the California Waterfix, a long studied and carefully designed project to modernize our broken water system. I am convinced that it will conserve water, protect the fish and the habitat in the Delta and ensure the delivery of badly needed water to the millions of people who depend on California’s aqueducts. Local water districts—in both the North and South—are providing the leadership and the financing because they know it is vital for their communities, and for the whole state. That is true, and that is the reason why I have persisted.

Our economy, the sixth largest in the world, depends on mobility, which only a modern and efficient transportation system provides. The vote on the gas tax was not easy but it was essential, given the vast network of roads and bridges on which California depends and the estimated $67 billion in deferred maintenance on our infrastructure. Tens of millions of cars and trucks travel over 330 billion miles a year. The sun’s only 93 million miles away.

The funds that SB 1 makes available are absolutely necessary if we are going to maintain our roads and transit systems in good repair. Twenty-five other states have raised gas taxes. Even the U.S. Chamber of Commerce has called for a federal gas tax because the highway trust fund is nearly broke.

Government does what individuals can’t do, like build roads and bridges and support local bus and light rail systems. This is our common endeavor by which we pool our resources through the public sector and improve all of our lives. Fighting a gas tax may appear to be good politics, but it isn’t. I will do everything in my power to defeat any repeal effort that gets on the ballot. You can count on that.

I’m looking for that one Republican. A brave, brave man.

Since I have talked about tunnels and transportation, I will bring up one more item of infrastructure: high-speed rail. I make no bones about it. I like trains and I like high-speed trains even better. So did the voters in 2008 when they approved the bond. Look, 11 other countries have high-speed trains. They are now taken for granted all over Europe, in Japan and in China. President Reagan himself said in Japan on November 11, 1983 the following, and I quote: “The State of California is planning to build a rapid speed train that is adapted from your highly successful bullet train.” Yes, we were, and now we are actually building it. Takes a long time.

Like any big project, there are obstacles. There were for the Bay Area Rapid Transit System, for the Golden Gate Bridge and the Panama Canal. I’ll pass over in silence the Bay bridge, that was almost 20 years. And by the way, it was over budget by $6 billion on a $1 billion project. So that happens. But not with the high-speed rail, we’ve got that covered.

But build it they did and build it we will—America’s first high-speed rail system. One link between San Jose and San Francisco—an electrified Caltrain—is financed and ready to go. Another billion, with matching funds, will be invested in Los Angeles to improve Union Station as a major transportation hub and fix the Anaheim corridor.

The next step is completing the Valley segment and getting an operating system connected to San Jose. Yes, it costs lots of money but it is still cheaper and more convenient than expanding airports, which nobody wants to, and building new freeways, which landowners often object to. All of that is to meet the growing demand. It will be fast, quiet and powered by renewable electricity and last for a hundred years. After all you guys are gone.

Already, more than 1,500 construction workers are on the job at 17 sites and hundreds of California businesses are providing services, generating thousands of job years of employment. As the global economy puts more Americans out of work and lowers wages, infrastructure projects like this will be a key source of well-paid California jobs.

Difficulties challenge us but they can’t discourage or stop us. Whether it’s roads or trains or dams or renewable energy installations or zero-emission cars, California is setting the pace for the entire nation. Yes, there are critics, there are lawsuits and there are countless obstacles. But California was built on dreams and perseverance and the bolder path is still our way forward.

What’s next?

On January 26th, the governor’s office made this announcement:

Taking action to further California’s climate leadership, Governor Edmund G. Brown Jr. today signed an executive order to boost the supply of zero-emission vehicles and charging and refueling stations in California. The Governor also detailed the new plan for investing $1.25 billion in cap-and-trade auction proceeds to reduce carbon pollution and improve public health and the environment.

“This executive order aims to curb carbon pollution from cars and trucks and boost the number of zero-emission vehicles driven in California,” said Governor Brown. “In addition, the cap-and-trade investments will, in varying degrees, reduce California’s carbon footprint and improve the quality of life for all.”

Zero-Emission Vehicle Executive Order

California is taking action to dramatically reduce carbon emissions from transportation—a sector that accounts for 50 percent of the state’s greenhouse gas emissions and 80 percent of smog-forming pollutants.

To continue to meet California’s climate goals and clean air standards, California must go even further to accelerate the market for zero-emission vehicles. Today’s executive order implements the Governor’s call for a new target of 5 million ZEVs in California by 2030, announced in his State of the State address yesterday, and will help significantly expand vehicle charging infrastructure.

The Administration is also proposing a new eight-year initiative to continue the state’s clean vehicle rebates and spur more infrastructure investments. This $2.5 billion initiative will help bring 250,000 vehicle charging stations and 200 hydrogen fueling stations to California by 2025.

Today’s action builds on past efforts to boost zero-emission vehicles, including: legislation signed last year and in 2014 and 2013; adopting the 2016 Zero-Emission Vehicle Plan and the Advanced Clean Cars program; hosting a Zero-Emission Vehicle Summit; launching a multi-state ZEV Action Plan; co-founding the International ZEV Alliance; and issuing Executive Order B-16-12 in 2012 to help bring 1.5 million zero-emission vehicles to California by 2025.

In addition to today’s executive order, the Governor also released the 2018 plan for California’s Climate Investments—a statewide initiative that puts billions of cap-and-trade dollars to work reducing greenhouse gas emissions, strengthening the economy and improving public health and the environment—particularly in disadvantaged communities.

California Climate Investments projects include affordable housing, renewable energy, public transportation, zero-emission vehicles, environmental restoration, more sustainable agriculture and recycling, among other projects. At least 35 percent of these investments are made in disadvantaged and low-income communities.

The $1.25 billion climate investment plan can be found here.


Renewable Energy News

1 August, 2016

Some good news:

• Ed Crooks, Balance of power tilts from fossil fuels to renewable energy, Financial Times, 26 July 2016.

These are strange days in the energy business. Startling headlines are emerging from the sector that would have seemed impossible just a few years ago.

The Dubai Electricity and Water Authority said in May it had received bids to develop solar power projects that would deliver electricity costing less than three cents per kilowatt hour. This established a new worldwide low for the contracted cost of delivering solar power to the grid—and is priced well below the benchmark of what the emirate and other countries typically pay for electricity from coal-fired stations.

In the UK, renowned for its miserable overcast weather, solar panels contributed more power to the grid than coal plants for the month of May.

In energy-hungry Los Angeles, the electricity company AES is installing the world’s largest battery, with capacity to power hundreds of thousands of homes at times of high demand, replacing gas-fired plants which are often used at short notice to increase supply to the grid.

Trina Solar, the Chinese company that is the world’s largest solar panel manufacturer, said it had started selling in 20 new markets last year, from Poland to Mauritius and Nepal to Uruguay.

[…]

Some new energy technologies, meanwhile, are not making much progress, such as the development of power plants that capture and store the carbon dioxide they produce. It is commonly assumed among policymakers that carbon capture has become essential if humankind is to enjoy the benefits of fossil fuels while avoiding their polluting effects.

It is clear, too, that the growth of renewables and other low-carbon energy sources will not follow a straight line. Investment in “clean” energy has been faltering this year after hitting a record in 2015, according to Bloomberg New Energy Finance. For the first half of 2016, it is down 23 per cent from the equivalent period last year.

Even so, the elements are being put in place for what could be a quite sudden and far-reaching energy transition, which could be triggered by an unexpected and sustained surge in oil prices. If China or India were to make large-scale policy commitments to electric vehicles, they would have a dramatic impact on the outlook for oil demand.

I’m also interested in Elon Musk’s Gigafactory: a lithium-ion battery factory in Nevada with a projected capacity of 50 gigawatt-hours/year of battery packs in 2018, ultimately ramping up to 150 GWh/yr. These battery packs are mainly designed for Musk’s electric car company, Tesla.

So far, Tesla is having trouble making lots of cars: its Fremont, California plant theoretically has the capacity to make 500,000 cars per year, but last year it only built 50,000. For some of the reasons, see this:

• Matthew Debord, Tesla has to overcome a major problem for its massive new Gigafactory to succeed, Singapore Business Insider, 1 August 2016.

Basically, it’s hard to make cars as efficiently as traditional auto companies have learned to do, and as long as people don’t buy many electric cars, it’s hard to get better quickly.

Still, Musk has big dreams for his Gigafactory, which I can only applaud. Here’s what it should look like when it’s done:




The Case for Optimism on Climate Change

7 March, 2016

 

The video here is quite gripping: you should watch it!

Despite the title, Gore starts with a long and terrifying account of what climate change is doing. So what’s his case for optimism? A lot of it concerns solar power, though he also mentions nuclear power:

So the answer to the first question, “Must we change?” is yes, we have to change. Second question, “Can we change?” This is the exciting news! The best projections in the world 16 years ago were that by 2010, the world would be able to install 30 gigawatts of wind capacity. We beat that mark by 14 and a half times over. We see an exponential curve for wind installations now. We see the cost coming down dramatically. Some countries—take Germany, an industrial powerhouse with a climate not that different from Vancouver’s, by the way—one day last December, got 81 percent of all its energy from renewable resources, mainly solar and wind. A lot of countries are getting more than half on an average basis.

More good news: energy storage, from batteries particularly, is now beginning to take off because the cost has been coming down very dramatically to solve the intermittency problem. With solar, the news is even more exciting! The best projections 14 years ago were that we would install one gigawatt per year by 2010. When 2010 came around, we beat that mark by 17 times over. Last year, we beat it by 58 times over. This year, we’re on track to beat it 68 times over.

We’re going to win this. We are going to prevail. The exponential curve on solar is even steeper and more dramatic. When I came to this stage 10 years ago, this is where it was. We have seen a revolutionary breakthrough in the emergence of these exponential curves.

And the cost has come down 10 percent per year for 30 years. And it’s continuing to come down.
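A 10 percent annual decline compounds dramatically over 30 years. A one-line check (my arithmetic, not Gore’s):

```python
# If cost falls 10% per year, after 30 years it is 0.9**30 of the start.
fraction_remaining = 0.9 ** 30
print(f"{fraction_remaining:.3f}")
# 0.042
```

That is, the cost falls to about 4% of where it started—a roughly 96% drop.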

Now, the business community has certainly noticed this, because it’s crossing the grid parity point. Cheaper solar penetration rates are beginning to rise. Grid parity is understood as that line, that threshold, below which renewable electricity is cheaper than electricity from burning fossil fuels. That threshold is a little bit like the difference between 32 degrees Fahrenheit and 33 degrees Fahrenheit, or zero and one Celsius. It’s a difference of more than one degree, it’s the difference between ice and water. And it’s the difference between markets that are frozen up, and liquid flows of capital into new opportunities for investment. This is the biggest new business opportunity in the history of the world, and two-thirds of it is in the private sector. We are seeing an explosion of new investment. Starting in 2010, investments globally in renewable electricity generation surpassed fossils. The gap has been growing ever since. The projections for the future are even more dramatic, even though fossil energy is now still subsidized at a rate 40 times larger than renewables. And by the way, if you add the projections for nuclear on here, particularly if you assume that the work many are doing to try to break through to safer and more acceptable, more affordable forms of nuclear, this could change even more dramatically.

So is there any precedent for such a rapid adoption of a new technology? Well, there are many, but let’s look at cell phones. In 1980, AT&T, then Ma Bell, commissioned McKinsey to do a global market survey of those clunky new mobile phones that appeared then. “How many can we sell by the year 2000?” they asked. McKinsey came back and said, “900,000.” And sure enough, when the year 2000 arrived, they did sell 900,000—in the first three days. And for the balance of the year, they sold 120 times more. And now there are more cell connections than there are people in the world.

So, why were they not only wrong, but way wrong? I’ve asked that question myself, “Why?”

And I think the answer is in three parts. First, the cost came down much faster than anybody expected, even as the quality went up. And low-income countries, places that did not have a landline grid—they leap-frogged to the new technology. The big expansion has been in the developing countries. So what about the electricity grids in the developing world? Well, not so hot. And in many areas, they don’t exist. There are more people without any electricity at all in India than the entire population of the United States of America. So now we’re getting this: solar panels on grass huts and new business models that make it affordable. Muhammad Yunus financed this one in Bangladesh with micro-credit. This is a village market. Bangladesh is now the fastest-deploying country in the world: two systems per minute on average, night and day. And we have all we need: enough energy from the Sun comes to the Earth every hour to supply the full world’s energy needs for an entire year. It’s actually a little bit less than an hour. So the answer to the second question, “Can we change?” is clearly “Yes.” And it’s an ever-firmer “yes.”
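Gore’s closing claim is easy to sanity-check. Here’s a back-of-envelope calculation in Python, using rough round numbers I’ve supplied myself (about 1.7 × 10¹⁷ watts of sunlight intercepted by the Earth, and about 5.8 × 10²⁰ joules of world primary energy use per year):

```python
# Rough figures (my assumptions, not from the talk):
solar_power_on_earth = 1.7e17   # watts of sunlight intercepted by Earth
world_annual_energy = 5.8e20    # joules of primary energy used per year

# How many hours of intercepted sunlight match a year of energy use?
hours_needed = world_annual_energy / (solar_power_on_earth * 3600)
print(f"{hours_needed:.2f} hours")
```

With those figures the answer comes out just under one hour, consistent with Gore’s “a little bit less than an hour.”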

Some people are much less sanguine about solar power, and they would point out all the things that Gore doesn’t mention here. For example, while Gore claims that “one day last December” Germany “got 81 percent of all its energy from renewable resources, mainly solar and wind”, the picture in general is not so good:



This is from 2014, the most recent I could easily find. At least back then, renewables were only slightly ahead of ‘brown coal’, or lignite—the dirtiest kind of coal. Furthermore, among renewables, burning ‘biomass’ produced about as much power as wind—and more than solar. And what’s ‘biomass’, exactly? A lot of it is wood pellets! Some is even imported:

• John Baez, The EU’s biggest renewable energy source, 18 September 2013.

So, for every piece of good news one can find a piece of bad news. But the drop in price of solar power is impressive, and photovoltaic solar power is starting to hit ‘grid parity’: the point at which it’s as cheap as the usual cost of electricity off the grid:


According to this map based on reports put out by Deutsche Bank (here and here), the green countries reached grid parity before 2014. The blue countries reached it after 2014. The olive countries have reached it only for peak grid prices. The orange regions are US states that were ‘poised to reach grid parity’ in 2015.

But of course there are other issues: the intermittency of solar power, the difficulties of storing energy, etc. How optimistic should we be?


The Paris Agreement

23 December, 2015

The world has come together and agreed to do something significant about climate change:

• UN Framework Convention on Climate Change, Adoption of the Paris Agreement, 12 December 2015.

Not as much as I’d like: it’s estimated that if we do just what’s been agreed to so far, we can expect 2.7 °C of warming, and more pessimistic estimates range up to 3.5 °C. But still, something significant. Furthermore, the Paris Agreement set up a system that encourages nations to ‘ratchet up’ their actions over time. Even better, it helped strengthen a kind of worldwide social network of organizations devoted to tackling climate change!

This is a nice article summarizing what the Paris Agreement actually means:

• William Sweet, A surprising success at Paris, Bulletin of the Atomic Scientists, 21 December 2015.

Since it would take quite a bit of work to analyze this agreement and its implications, and I’m just starting to do this work, I’ll just quote a large chunk of this article.

Hollande, in his welcoming remarks, asked what would enable us to say the Paris agreement is good, even “great.” First, regular review and assessment of commitments, to get the world on a credible path to keep global warming in the range of 1.5–2.0 degrees Celsius. Second, solidarity of response, so that no state does nothing and yet none is “left alone.” Third, evidence of a comprehensive change in human consciousness, allowing eventually for introduction of much stronger measures, such as a global carbon tax.

UN Secretary-General Ban Ki-moon articulated similar but somewhat more detailed criteria: The agreement must be lasting, dynamic, respectful of the balance between industrial and developing countries, and enforceable, with critical reviews of pledges even before 2020. Ban noted that 180 countries had now submitted climate action pledges, an unprecedented achievement, but stressed that those pledges need to be progressively strengthened over time.

Remarkably, not only the major conveners of the conference were speaking in essentially the same terms, but civil society as well. Starting with its first press conference on opening day and at every subsequent one, representatives of the Climate Action Network, representing 900 nongovernment organizations, confined themselves to making detailed and constructive suggestions about how key provisions of the agreement might be strengthened. Though CAN could not possibly speak for every single one of its member organizations, the mainstream within the network clearly saw it was the group’s goal to obtain the best possible agreement, not to advocate for a radically different kind of agreement. The mainstream would not be taking to the streets.

This was the main thing that made Paris different, not just from Copenhagen, but from every previous climate meeting: Before, there always had been deep philosophical differences between the United States and Europe, between the advanced industrial countries and the developing countries, and between the official diplomats and civil society. At Paris, it was immediately obvious that everybody, NGOs included, was reading from the same book. So it was obvious from day one that an important agreement would be reached. National delegations would stake out tough positions, and there would be some hard bargaining. But at every briefing and in every interview, no matter how emphatic the stand, it was made clear that compromises would be made and nothing would be allowed to stand in the way of agreement being reached.

The Paris outcome

The two-part agreement formally adopted in Paris on December 12 represents the culmination of a 25-year process that began with the negotiations in 1990–91 that led to the adoption in 1992 of the Rio Framework Convention. That treaty, which would be ratified by all the world’s nations, called upon every country to take action to prevent “dangerous climate change” on the basis of common but differentiated responsibilities. Having enunciated those principles, nations were unable to agree in the next decades about just what they meant in practice. An attempt at Kyoto in 1997 foundered on the opposition of the United States to an agreement that required no action on the part of the major emitters among the developing countries. A second attempt at agreement in Copenhagen also failed.

Only with the Paris accords, for the first time, have all the world’s nations agreed on a common approach that rebalances and redefines respective responsibilities, while further specifying what exactly is meant by dangerous climate change. Paragraph 17 of the “Decision” (or preamble) notes that national pledges will have to be strengthened in the next decades to keep global warming below 2 degrees Celsius or close to 1.5 degrees, while Article 2 of the more legally binding “Agreement” says warming should be held “well below” 2 degrees and if possible limited to 1.5 degrees. Article 4 of the Agreement calls upon those countries whose emissions are still rising to have them peak “as soon as possible,” so “as to achieve a balance between anthropogenic emissions by sources and removals by sinks of greenhouse gases in the second half of this century”—a formulation that replaced a reference in Article 3 of the next-to-last draft calling for “carbon neutrality” by the second half of the century.

“The wheel of climate action turns slowly, but in Paris it has turned. This deal puts the fossil fuel industry on the wrong side of history,” commented Kumi Naidoo, executive director of Greenpeace International.

The Climate Action Network, in which Greenpeace is a leading member, along with organizations like the Union of Concerned Scientists, Friends of the Earth, the World Wildlife Fund, and Oxfam, would have preferred language that flatly adopted the 1.5 degree goal and that called for complete “decarbonization”— an end to all reliance on fossil fuels. But to the extent the network can be said to have common positions, it would be able to live with the Paris formulations, to judge from many statements made by leading members in CAN’s twice- or thrice-daily press briefings, and statements made by network leaders embracing the agreement.

Speaking for scientists, at an event anticipating the final accords, H. J. Schellnhuber, leader of the Potsdam Institute for Climate Impact Research, said with a shrug that the formulation calling for net carbon neutrality by mid-century would be acceptable. His opinion carried more than the usual weight because he is sometimes credited in the German press as the father of the 2-degree standard. (Schellnhuber told me that a Potsdam team had indeed developed the idea of limiting global warming to 2 degrees in total and 0.2 degrees per decade; and that while others were working along similar lines, he personally drew the Potsdam work to the attention of future German chancellor Angela Merkel in 1994, when she was serving as environment minister.)

As for the tighter 1.5-degree standard, this is a complicated issue that the Paris accords fudge a bit. The difference between impacts expected from a 1.5-degree world and a 2-degree world are not trivial. The Greenland ice sheet, for example, is expected to melt in its entirety in the 2-degree scenario, while in a 1.5-degree world the odds of a complete melt are only 70 percent, points out climatologist Niklas Höhne, of the Cologne-based NewClimate Institute, with a distinct trace of irony. But at the same time the scientific consensus is that it would be virtually impossible to meet the 1.5-degree goal because on top of the 0.8–0.9 degrees of warming that already has occurred, another half-degree is already in the pipeline, “hidden away in the oceans,” as Schellnhuber put it. At best we might be able to work our way back to 1.5 degrees in the 2030s or 2040s, after first overshooting it. Thus, though organizations like 350.org and scientists like James Hansen continue to insist that 1.5 degrees should be our objective, pure and simple, the scientific community and the Climate Action mainstream are reasonably comfortable with the Paris accords’ “close as possible” language.

‘Decision’ and ‘Agreement’

The main reason why the Paris accords consist of two parts, a long preamble called the “Decision,” and a legally binding part called the “Agreement,” is to satisfy the Obama administration’s concerns about having to take anything really sticky to Congress. The general idea, which was developed by the administration with a lot of expert legal advice from organizations like the Virginia-based Center for Climate and Energy Solutions, was to put really substantive matters, like how much the United States will actually do in the next decades to cut its greenhouse gas emissions, into the preamble, and to confine the treaty-like Agreement as much as possible to procedural issues like when in the future countries will talk about what.

Nevertheless, the distinction between the Decision and the Agreement is far from clear-cut. All the major issues that had to be balanced in the negotiations—not just the 1.5–2.0 degree target and the decarbonization language, but financial aid, adaptation and resilience, differentiation between rich and poor countries, reporting requirements, and review—are addressed in both parts. There is nothing unusual as such about an international agreement having two parts, a preamble and main text. What is a little odd about Paris, however, is that the preamble, at 19 pages, is considerably longer than the 11-page Agreement, as Chee Yoke Ling of the Third World Network, who is based in Beijing, pointed out. The length of the Decision, she explained, reflects not only US concerns about obtaining Senate ratification. It also arose from anxieties shared by developing countries about agreeing to legally binding provisions that might be hard to implement and politically dangerous.

In what are arguably the Paris accords’ most important provisions, the national pledges are to be collectively reassessed beginning in 2018–19, and then every five years after 2020. The general idea is to systematically exert peer group pressure on regularly scheduled occasions, so that everybody will ratchet up carbon-cutting ambitions. Those key requirements, which are very close to what CAN advocated and what diplomatic members of the so-called “high ambition” group wanted, are in the preamble, not the Agreement.

But an almost equally important provision, found in the Agreement, called for a global “stocktake” to be conducted in 2023, covering all aspects of the Agreement’s implementation, including its very contested provisions about financial aid and “loss and damage”—the question of support and compensation for countries and regions that may face extinction as a result of global warming. Not only carbon cutting efforts but obligations of the rich countries to the poor will be subject to the world’s scrutiny in 2023.

Rich and poor countries

On the critical issue of financial aid for developing countries struggling to reduce emissions and adapt to climate change, Paris affirms the Copenhagen promise of $100 billion by 2020 in the Decision (Paragraph 115) but not in the more binding Agreement—to the displeasure of the developing countries, no doubt. In the three previous draft versions of the accords, the $100 billion pledge was contained in the Agreement as well.

Somewhat similarly, the loss-and-damage language contained in the preamble does not include any reference to liability on the part of the advanced industrial countries that are primarily responsible for the climate change that has occurred up until now. This was a disappointment to representatives of the nations and regions most severely and imminently threatened by global warming, but any mention of liability would have been an absolute show-stopper for the US delegation. Still, the fact that loss and damage is broached at all represents a victory for the developing world and its advocates, who have been complaining for decades about the complete absence of the subject from the Rio convention and Kyoto Protocol.

The so-called Group of 77, which actually represents 134 developing countries plus China, appears to have played a shrewd and tough game here at Le Bourget. Its very able and engaging chairperson, South Africa’s Nozipho Mxakato-Diseko, sent a sharp shot across the prow of the rich countries on the third day of the conference, with a 17-point memorandum she e-mailed enumerating her group’s complaints.

“The G77 and China stresses that nothing under the [1992 Framework Convention] can be achieved without the provision of means of implementation to enable developing countries to play their part to address climate change,” she said, alluding to the fact that if developing countries are to do more to cut emissions growth, they need help. “However, clarity on the complete picture of the financial arrangements for the enhanced implementation of the Convention keeps on eluding us. … We hope that by elevating the importance of the finance discussions under the different bodies, we can ensure that the outcome meets Parties’ expectations and delivers what is required.”

Though the developing countries wanted stronger and more specific financial commitments and “loss-and-damage” provisions that would have included legal liability, there is evidence throughout the Paris Decision and Agreement of the industrial countries’ giving considerable ground to them. During the formal opening of the conference, President Obama met with leaders of AOSIS—the Alliance of Small Island States—and told them he understood their concerns as he, too, is “an island boy.” (Evidently that went over well.) The reference to the $100 billion floor for financial aid surely was removed from the Agreement partly because the White House at present cannot get Congress to appropriate money for any climate-related aid. But at least the commitment remained in the preamble, which was not a foregone conclusion.

Reporting and review

The one area in which the developing countries gave a lot of ground in Paris was in measuring, reporting, and verification. Under the terms of the Rio convention and Kyoto Protocol, only the advanced industrial countries—the so-called Annex 1 countries—were required to report their greenhouse gas emissions to the UN’s climate secretariat in Bonn. Extensive provisions in the Paris agreement call upon all countries to now report emissions, according to standardized procedures that are to be developed.

The climate pledges that almost all countries submitted to the UN in preparation for Paris, known as “Intended Nationally Determined Contributions,” provided a preview of what this will mean. The previous UN climate gathering, last year in Lima, had called for all the INDCs to be submitted by the summer and for the climate secretariat to do a net assessment of them by October 31, which seemed ridiculously late in the game. But when the results of that assessment were released, the secretariat’s head, Christiana Figueres, cited independent estimates that together the national declarations might put the world on a path to 2.7-degree warming. That result was a great deal better than most specialists following the procedure would have expected, this writer included. Though other estimates suggested the path might be more like 3.5 degrees, even this was a very great deal better than the business-as-usual path, which would be at least 4–5 degrees and probably higher than that by century’s end.

The formalized universal reporting requirements put into place by the Paris accords will lend a lot of rigor to the process of preparing, critiquing, and revising INDCs in the future. In effect the secretariat will be keeping score for the whole world, not just the Annex 1 countries. That kind of score-keeping can have a lot of bite, as we have witnessed in the secretariat’s assessment of Kyoto compliance.

Under the Kyoto Protocol, which the US government not only agreed to but virtually wrote, the United States was required to cut its emissions 7 percent by 2008–12, and Europe by 8 percent. From 1990, the baseline year established in the Rio treaty and its 1997 Kyoto Protocol, to 2012 (the final year in which initial Kyoto commitments applied), emissions of the 15 European countries that were party to the treaty decreased 17 percent—more than double what the protocol required of them. Emissions of the 28 countries that are now members of the EU decreased 21 percent. British emissions were down 27 percent in 2012 from 1990, and Germany’s were down 23 percent.

In the United States, which repudiated the protocol, emissions continued to rise until 2005, when they began to decrease, initially for reasons that had little or nothing to do with policy. That year, US emissions were about 15 percent above their 1990 level, while emissions of the 28 EU countries were down more than 9 percent and of the 15 European party countries more than 2 percent.


Why Google Gave Up

5 January, 2015

I was disappointed when Google gave up. In 2007, the company announced a bold initiative to fight global warming:

Google’s Goal: Renewable Energy Cheaper than Coal

Creates renewable energy R&D group and supports breakthrough technologies

Mountain View, Calif. (November 27, 2007) – Google (NASDAQ: GOOG) today announced a new strategic initiative to develop electricity from renewable energy sources that will be cheaper than electricity produced from coal. The newly created initiative, known as RE<C, will focus initially on advanced solar thermal power, wind power technologies, enhanced geothermal systems and other potential breakthrough technologies. RE<C is hiring engineers and energy experts to lead its research and development work, which will begin with a significant effort on solar thermal technology, and will also investigate enhanced geothermal systems and other areas. In 2008, Google expects to spend tens of millions on research and development and related investments in renewable energy. As part of its capital planning process, the company also anticipates investing hundreds of millions of dollars in breakthrough renewable energy projects which generate positive returns.

But in 2011, Google shut down the program. I never heard why. Recently two engineers involved in the project have given a good explanation:

• Ross Koningstein and David Fork, What it would really take to reverse climate change, 18 November 2014.

Please read it!

But the short version is this. They couldn’t find a way to accomplish their goal: producing a gigawatt of renewable power more cheaply than a coal-fired plant — and in years, not decades.

And since then, they’ve been reflecting on their failure and they’ve realized something even more sobering. Even if they’d been able to realize their best-case scenario — a 55% carbon emissions cut by 2050 — it would not bring atmospheric CO2 back below 350 ppm during this century.

This is not surprising to me.

What would we need to accomplish this? They say two things. First, a cheap, dispatchable, distributed power source:

Consider an average U.S. coal or natural gas plant that has been in service for decades; its cost of electricity generation is about 4 to 6 U.S. cents per kilowatt-hour. Now imagine what it would take for the utility company that owns that plant to decide to shutter it and build a replacement plant using a zero-carbon energy source. The owner would have to factor in the capital investment for construction and continued costs of operation and maintenance—and still make a profit while generating electricity for less than $0.04/kWh to $0.06/kWh.

That’s a tough target to meet. But that’s not the whole story. Although the electricity from a giant coal plant is physically indistinguishable from the electricity from a rooftop solar panel, the value of generated electricity varies. In the marketplace, utility companies pay different prices for electricity, depending on how easily it can be supplied to reliably meet local demand.

“Dispatchable” power, which can be ramped up and down quickly, fetches the highest market price. Distributed power, generated close to the electricity meter, can also be worth more, as it avoids the costs and losses associated with transmission and distribution. Residential customers in the contiguous United States pay from $0.09/kWh to $0.20/kWh, a significant portion of which pays for transmission and distribution costs. And here we see an opportunity for change. A distributed, dispatchable power source could prompt a switchover if it could undercut those end-user prices, selling electricity for less than $0.09/kWh to $0.20/kWh in local marketplaces. At such prices, the zero-carbon system would simply be the thrifty choice.

But “dispatchable”, they say, means “not solar”.
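The arithmetic in the passage above is simple enough to sketch in a few lines of Python. The price bands are the ones quoted in the article; the candidate source's cost of 8 cents/kWh is a made-up illustration:

```python
# The article's cost thresholds as a simple comparison. The price bands
# are from the article; the candidate source's cost is hypothetical.

WHOLESALE = (0.04, 0.06)   # $/kWh: generation cost of an old coal/gas plant
RETAIL    = (0.09, 0.20)   # $/kWh: what residential customers pay

def undercuts(cost_per_kwh, price_band):
    """True if a source beats even the low end of the price band."""
    low, _high = price_band
    return cost_per_kwh < low

# A hypothetical zero-carbon source at 8 cents/kWh:
cost = 0.08
print(undercuts(cost, WHOLESALE))  # False: can't displace existing plants
print(undercuts(cost, RETAIL))     # True: competitive at the electricity meter
```

This is why they emphasize distributed sources: a technology that can never beat 4 cents/kWh at the wholesale level might still win by selling behind the meter at retail prices.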

Second, a lot of carbon sequestration:

While this energy revolution is taking place, another field needs to progress as well. As Hansen has shown, if all power plants and industrial facilities switch over to zero-carbon energy sources right now, we’ll still be left with a ruinous amount of CO2 in the atmosphere. It would take centuries for atmospheric levels to return to normal, which means centuries of warming and instability. To bring levels down below the safety threshold, Hansen’s models show that we must not only cease emitting CO2 as soon as possible but also actively remove the gas from the air and store the carbon in a stable form. Hansen suggests reforestation as a carbon sink. We’re all for more trees, and we also exhort scientists and engineers to seek disruptive technologies in carbon storage.

How to achieve these two goals? They say government and energy businesses should spend 10% of employee time on “strange new ideas that have the potential to be truly disruptive”.


Wind Power and the Smart Grid

18 June, 2014



Electric power companies complain about wind power because it’s intermittent: if suddenly the wind stops, they have to bring in other sources of power.

This is no big deal if we only use a little wind. Across the US, wind now supplies 4% of electric power; even in Germany it’s just 8%. The problem starts if we use a lot of wind. If we’re not careful, we’ll need big fossil-fuel-powered electric plants when the wind stops. And these need to be turned on, ready to pick up the slack at a moment’s notice!

So, a few years ago Xcel Energy, which supplies much of Colorado’s power, ran ads opposing a proposal that it use renewable sources for 10% of its power.

But now things have changed. Now Xcel gets about 15% of their power from wind, on average. And sometimes this spikes to much more!

What made the difference?

Every few seconds, hundreds of turbines measure the wind speed. Every 5 minutes, they send this data to high-performance computers 100 miles away at the National Center for Atmospheric Research in Boulder. NCAR crunches these numbers along with data from weather satellites, weather stations, and other wind farms – and creates highly accurate wind power forecasts.

With better prediction, Xcel can do a better job of shutting down idling backup plants on days when they’re not needed. Last year was a breakthrough year – better forecasts saved Xcel nearly as much money as they had saved in the three previous years combined.

It’s all part of the emerging smart grid—an intelligent network that someday will include appliances and electric cars. With a good smart grid, we could set our washing machine to run when power is cheap. Maybe electric cars could store solar power in the day, use it to power neighborhoods when electricity demand peaks in the evening – then recharge their batteries using wind power in the early morning hours. And so on.
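The washing-machine idea is easy to make concrete. Here is a minimal sketch in Python, with a made-up hourly price forecast: scan for the cheapest contiguous window and run the appliance then.

```python
# Smart-grid load shifting, minimally: given an hourly price forecast
# (these prices are invented), schedule a 2-hour appliance cycle in the
# cheapest contiguous window of the day.

prices = [0.18, 0.15, 0.11, 0.09, 0.08, 0.10,   # hours 0-5 ($/kWh)
          0.14, 0.19, 0.22, 0.20, 0.17, 0.16,   # hours 6-11
          0.15, 0.14, 0.13, 0.14, 0.17, 0.21,   # hours 12-17
          0.24, 0.23, 0.20, 0.18, 0.16, 0.15]   # hours 18-23

def cheapest_window(prices, hours_needed):
    """Start hour that minimizes the total cost of a contiguous run."""
    costs = [sum(prices[h:h + hours_needed])
             for h in range(len(prices) - hours_needed + 1)]
    return costs.index(min(costs))

start = cheapest_window(prices, 2)
print(start)  # 3: hours 3-4 are the cheapest two-hour stretch
```

A real smart grid would do this with live price signals and many appliances at once, but the core of the idea is just this kind of search.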

References

I would love it if the Network Theory project could ever grow to the point of helping design the smart grid. So far we are doing much more ‘foundational’ work on control theory, along with a more applied project on predicting El Niños. I’ll talk about both of these soon! But I have big hopes and dreams, so I want to keep learning more about power grids and the like.

Here are two nice references:

• Kevin Bullis, Smart wind and solar power, from 10 breakthrough technologies, Technology Review, 23 April 2014.

• Keith Parks, Yih-Huei Wan, Gerry Wiener and Yubao Liu, Wind energy forecasting: a collaboration of the National Center for Atmospheric Research (NCAR) and Xcel Energy.

The first is fun and easy to read. The second has more technical details. It describes the software used (the picture on top of this article shows a bit of this), and also some of the underlying math and physics. Let me quote a bit:

High-resolution Mesoscale Ensemble Prediction Model (EPM)

It is known that atmospheric processes are chaotic in nature. This implies that even small errors in the model initial conditions combined with the imperfections inherent in the NWP model formulations, such as truncation errors and approximations in model dynamics and physics, can lead to a wind forecast with large errors for certain weather regimes. Thus, probabilistic wind prediction approaches are necessary for guiding wind power applications. Ensemble prediction is at present a practical approach for producing such probabilistic predictions. An innovative mesoscale Ensemble Real-Time Four Dimensional Data Assimilation (E-RTFDDA) and forecasting system that was developed at NCAR was used as the basis for incorporating this ensemble prediction capability into the Xcel forecasting system.

Ensemble prediction means that instead of a single weather forecast, we generate a probability distribution on the set of weather forecasts. The paper has references explaining this in more detail.
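Here is a toy illustration of the idea—not the E-RTFDDA system itself. Run many copies of a chaotic model from slightly perturbed initial conditions, and report the spread of the results instead of a single number. The logistic map stands in for a real weather model:

```python
# Toy ensemble prediction: a chaotic model (the logistic map, standing in
# for a real NWP model) run from 100 slightly perturbed initial conditions.
# A single run gives one number; the ensemble gives a distribution whose
# spread measures how much the forecast can be trusted.

import random
import statistics

def logistic_forecast(x0, steps, r=3.9):
    """Iterate the chaotic logistic map x -> r*x*(1-x)."""
    x = x0
    for _ in range(steps):
        x = r * x * (1 - x)
    return x

random.seed(0)
x0 = 0.500                       # the "observed" initial condition
members = [logistic_forecast(x0 + random.uniform(-1e-3, 1e-3), 30)
           for _ in range(100)]

# After 30 steps the tiny initial perturbations have been amplified
# enormously, so the ensemble mean and spread are far more informative
# than any single member.
print(round(statistics.mean(members), 3), round(statistics.stdev(members), 3))
```

The same logic, scaled up to a mesoscale atmospheric model with dozens of physics configurations, is what turns one wind forecast into a probability distribution over wind forecasts.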

We had a nice discussion of wind power and the smart grid over on G+. Among other things, John Despujols mentioned the role of ‘smart inverters’ in enhancing grid stability:

• Smart solar inverters smooth out voltage fluctuations for grid stability, DigiKey article library.

A solar inverter converts the variable direct current output of a photovoltaic solar panel into alternating current usable by the electric grid. There’s a lot of math involved here—click the link for a Wikipedia summary. But solar inverters are getting smarter.

Wild fluctuations

While the solar inverter has long been the essential link between the photovoltaic panel and the electricity distribution network, converting DC to AC, its role is expanding due to the massive growth in solar energy generation. Utility companies and grid operators have become increasingly concerned about managing what can potentially be wildly fluctuating levels of energy produced by the huge (and still growing) number of grid-connected solar systems, whether they are rooftop systems or utility-scale solar farms. Intermittent production due to cloud cover or temporary faults has the potential to destabilize the grid. In addition, grid operators are struggling to plan ahead due to lack of accurate data on production from these systems as well as on true energy consumption.

In large-scale facilities, virtually all output is fed to the national grid or micro-grid, and is typically well monitored. At the rooftop level, although individually small, collectively the amount of energy produced has a significant potential. California estimated it has more than 150,000 residential rooftop grid-connected solar systems with a potential to generate 2.7 GW.

However, while in some systems all the solar energy generated is fed to the grid and not accessible to the producer, others allow energy generated to be used immediately by the producer, with only the excess fed to the grid. In the latter case, smart meters may only measure the net output for billing purposes. In many cases, information on production and consumption, supplied by smart meters to utility companies, may not be available to the grid operators.

Getting smarter

The solution according to industry experts is the smart inverter. Every inverter, whether at panel level or megawatt-scale, has a role to play in grid stability. Traditional inverters have, for safety reasons, become controllable, so that they can be disconnected from the grid at any sign of grid instability. It has been reported that sudden, widespread disconnects can exacerbate grid instability rather than help settle it.

Smart inverters, however, provide a greater degree of control and have been designed to help maintain grid stability. One trend in this area is to use synchrophasor measurements to detect and identify a grid instability event, rather than conventional ‘perturb-and-observe’ methods. The aim is to distinguish between a true island condition and a voltage or frequency disturbance which may benefit from additional power generation by the inverter rather than a disconnect.

Smart inverters can change the power factor. They can input or receive reactive power to manage voltage and power fluctuations, driving voltage up or down depending on immediate requirements. Adaptive volts-amps reactive (VAR) compensation techniques could enable ‘self-healing’ on the grid.
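The power-factor remark can be unpacked with a little complex arithmetic. In phasor terms the apparent power is S = P + jQ, and the power factor is P/|S|; an inverter that injects or absorbs reactive power Q changes the power factor without touching the real power P. The numbers below are made up for illustration:

```python
# Power factor from real power P and reactive power Q, using the
# complex apparent-power phasor S = P + jQ. Values are illustrative.

P = 4.0                       # kW of real power delivered
for Q in (0.0, 1.0, 3.0):     # kVAR of reactive power injected
    S = complex(P, Q)         # apparent power phasor, in kVA
    pf = P / abs(S)           # power factor shrinks as |Q| grows
    print(f"Q = {Q:.1f} kVAR -> |S| = {abs(S):.2f} kVA, pf = {pf:.2f}")
# pf goes from 1.00 (Q = 0) down to 0.80 (Q = 3): the same real power,
# but the inverter is now also supplying voltage-supporting VARs.
```

This is the lever behind volt/VAR control: by sourcing or sinking Q, the inverter nudges the local voltage up or down while the panels keep delivering the same real power.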

Two-way communications between smart inverter and smart grid not only allow fundamental data on production to be transmitted to the grid operator on a timely basis, but upstream data on voltage and current can help the smart inverter adjust its operation to improve power quality, regulate voltage, and improve grid stability without compromising safety. There are considerable challenges still to overcome in terms of agreeing on and evolving national and international technical standards, but this topic is not covered here.

The benefits of the smart inverter over traditional devices have been recognized in Germany, Europe’s largest solar energy producer, where an initiative is underway to convert all solar energy producers’ inverters to smart inverters. Although the cost of smart inverters is slightly higher than traditional systems, the advantages gained in grid balancing and accurate data for planning purposes are considered worthwhile. Key features of smart inverters required by German national standards include power ramping and volt/VAR control, which directly influence improved grid stability.