Power Density

5 October, 2010

Today I’ve been thinking about “power density”, and I’ve got some questions for you.

But let’s start at the beginning!

In his 2009 talk at the Long Now Foundation, the engineer Saul Griffith made some claims that fill me with intense dread. Stewart Brand summarized the talk as follows:

The world currently runs on about 16 terawatts (trillion watts) of energy, most of it burning fossil fuels. To level off at 450 ppm of carbon dioxide, we will have to reduce the fossil fuel burning to 3 terawatts and produce all the rest with renewable energy, and we have to do it in 25 years or it’s too late. Currently about half a terawatt comes from clean hydropower and one terawatt from clean nuclear. That leaves 11.5 terawatts to generate from new clean sources.

That would mean the following. (Here I’m drawing on notes and extrapolations I’ve written up previously from discussion with Griffith):

“Two terawatts of photovoltaic would require installing 100 square meters of 15-percent-efficient solar cells every second, second after second, for the next 25 years. (That’s about 1,200 square miles of solar cells a year, times 25 equals 30,000 square miles of photovoltaic cells.) Two terawatts of solar thermal? If it’s 30 percent efficient all told, we’ll need 50 square meters of highly reflective mirrors every second. (Some 600 square miles a year, times 25.) Half a terawatt of biofuels? Something like one Olympic swimming pool of genetically engineered algae, installed every second. (About 15,250 square miles a year, times 25.) Two terawatts of wind? That’s a 300-foot-diameter wind turbine every 5 minutes. (Install 105,000 turbines a year in good wind locations, times 25.) Two terawatts of geothermal? Build three 100-megawatt steam turbines every day — 1,095 a year, times 25. Three terawatts of new nuclear? That’s a 3-reactor, 3-gigawatt plant every week — 52 a year, times 25”.

In other words, the land area dedicated to renewable energy (“Renewistan”) would occupy a space about the size of Australia to keep the carbon dioxide level at 450 ppm. To get to Hansen’s goal of 350 ppm of carbon dioxide, fossil fuel burning would have to be cut to ZERO, which means another 3 terawatts would have to come from renewables, expanding the size of Renewistan further by 26 percent.

The main scary part is the astounding magnitude of this project, and how far we are from doing anything remotely close. Griffith describes it as not like the Manhattan Project, but like World War II — only with everyone on the same side.

But another scary part is the amount of land that needs to get devoted to “Renewistan” in this scheme. This is where power density comes in.

The term power density is used in various ways, but in the work of Vaclav Smil it means the number of usable watts that can be produced per square meter of land (or water) by a given technology, and that’s how I’ll use it here.

Smil’s main point is that renewable forms of energy generally have a much lower power density than fossil fuels. As Griffith points out, this gap could have massive consequences for land use. Or consider the plan for England, Scotland and Wales on page 215 of David MacKay’s book Sustainable Energy – Without the Hot Air:



That’s a lot of land devoted to energy production!

Smil wrote an interesting paper about power density:

• Vaclav Smil, Power density primer: understanding the spatial dimension of the unfolding transition to renewable electricity generation

In it, he writes:

Energy density is easy – power density is confusing.

One look at energy densities of common fuels is enough to understand why we prefer coal over wood and oil over coal: air-dried wood is, at best, 17 MJ/kg, good-quality bituminous coal is 22-25 MJ/kg, and refined oil products are around 42 MJ/kg. And a comparison of volumetric energy densities makes it clear why shipping non-compressed, non-liquefied natural gas would never work while shipping crude oil is cheap: natural gas rates around 35 MJ/m3, crude oil has around 35 GJ/m3 and hence its volumetric energy density is a thousand times (three orders of magnitude) higher. An obvious consequence: without liquefied (or at least compressed) natural gas there can be no intercontinental shipments of that clean fuel.

Power density is a much more complicated variable. Engineers have used power densities as revealing measures of performance for decades – but several specialties have defined them in their own particular ways….

For the past 25 years I have favored a different, and a much broader, measure of power density as perhaps the most universal measure of energy flux: W/m2 of horizontal area of land or water surface rather than per unit of the working surface of a converter.
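As a quick sanity check of the volumetric comparison above, here is a minimal Python sketch. The energy densities are the ones Smil quotes; the crude-oil density of roughly 870 kg/m3 is my own assumed typical value, not a figure from his primer.

```python
# Rough check of Smil's oil-versus-gas volumetric energy densities (approximate figures).
MJ = 1e6
GJ = 1e9

oil_per_kg = 42 * MJ        # J/kg, refined oil products (from the passage above)
oil_density = 870           # kg/m^3, assumed typical density (my assumption)
oil_per_m3 = oil_per_kg * oil_density

gas_per_m3 = 35 * MJ        # J/m^3, non-compressed natural gas (from the passage above)

print(f"Oil:   {oil_per_m3 / GJ:.0f} GJ/m^3")          # roughly 37 GJ/m^3
print(f"Gas:   {gas_per_m3 / GJ:.3f} GJ/m^3")          # 0.035 GJ/m^3
print(f"Ratio: {oil_per_m3 / gas_per_m3:.0f} to 1")    # about 1000, i.e. three orders of magnitude
```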

Here are some of his results:

• No other mode of large-scale electricity generation occupies as little space as gas turbines: besides their compactness they do not need fly ash disposal or flue gas desulfurization. Mobile gas turbines generate electricity with power densities higher than 15,000 W/m2 and large (>100 MW) stationary set-ups can easily deliver 4,000-5,000 W/m2. (What about the area needed for mining?)

• Most large modern coal-fired power plants generate electricity with power densities ranging from 100 to 1,000 W/m2, including the area of the mine, the power plant, etcetera.

• Concentrating solar power (CSP) projects use fields of tracking mirrors to reflect and concentrate solar radiation onto a receiver (for example, a central receiver placed atop a high tower) in order to raise steam for a turbine. All facilities included, these deliver at most 10 W/m2.

• Photovoltaic panels are fixed in an optimal tilted south-facing position and hence receive more radiation than a unit of horizontal surface, but the average power densities of solar parks are low. Additional land is needed for spacing the panels for servicing, access roads, inverter and transformer facilities and service structures — and only 85% of a panel’s DC rating is transmitted from the park to the grid as AC power. All told, they deliver 4-9 W/m2.

• Wind turbines have fairly high power densities when the rate measures the flux of wind’s kinetic energy moving through the working surface: the area swept by blades. This power density is commonly above 400 W/m2 — but power density expressed as electricity generated per land area is much less! At best we can expect a peak power of 6.6 W/m2 and even a relatively high average capacity factor of 30% would bring that down to only about 2 W/m2.

• The energy density of dry wood (18-21 GJ/ton) is close to that of sub-bituminous coal. But if we were to supply a significant share of a nation’s electricity from wood we would have to establish extensive tree plantations. We could not expect harvests surpassing 20 tons/hectare, with 10 tons/hectare being more typical. Harvesting all above-ground tree mass and feeding it into chippers would allow for 95% recovery of the total field production, but even if the fuel’s average energy density were 19 GJ/ton, the plantation would yield no more than 190 GJ/hectare, resulting in a harvest power density of 0.6 W/m2.
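To see how such figures arise, here is a minimal Python sketch reproducing the wood-plantation and wind numbers above. It uses only the quantities quoted in the list; nothing else is assumed beyond the unit conversions.

```python
# Wood plantation: 10 tons/hectare/year of wood at 19 GJ/ton, averaged over a year.
SECONDS_PER_YEAR = 3.156e7
HECTARE = 1e4                    # m^2

harvest = 10 * 19e9              # J per hectare per year
wood_density = harvest / HECTARE / SECONDS_PER_YEAR
print(f"Wood plantation: {wood_density:.1f} W/m^2")      # about 0.6 W/m^2

# Wind farm: 6.6 W/m^2 peak per unit of land occupied, times a 30% capacity factor.
wind_density = 6.6 * 0.30
print(f"Wind farm: {wind_density:.1f} W/m^2")            # about 2 W/m^2
```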

Of course, power density is of limited value in making decisions regarding power generation, because:

1. The price of a square meter of land or water varies vastly depending on its location.

2. Using land for one purpose does not always prevent its use for others: e.g. solar panels on roofs, crops or solar panels between wind turbines.

Nonetheless, Smil’s basic point, that most forms of renewable energy will require us to devote larger areas of the Earth to energy production, seems fairly robust. (An arguable exception is breeder reactors, which in conjunction with extracting uranium from seawater might be considered a form of renewable energy. This is important.)

On the other hand, fans of solar energy argue that much smaller areas would be needed to supply the world’s power. There are two possible reasons, and I haven’t sorted them out yet:

1) They may be talking about electrical power, which is roughly one order of magnitude less than total power usage.

2) As Smil’s calculations show, solar power allows for significantly greater power density than wind or biofuels. Griffith’s area for ‘Renewistan’ may be large because it includes a significant amount of power from those other sources.
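Regarding point 2), here is a minimal back-of-envelope sketch comparing Griffith’s solar figure with Smil’s park-level densities. The unit conversions are mine, and the conclusion that Griffith’s figure appears to count cell area rather than whole-park area is an inference, not something either author states.

```python
SECONDS_PER_YEAR = 3.156e7
SQUARE_MILE = 2.59e6             # m^2

# Griffith: 100 m^2 of 15%-efficient PV cells installed every second for 25 years gives 2 TW.
pv_area = 100 * SECONDS_PER_YEAR * 25
print(f"PV cell area: {pv_area / SQUARE_MILE:,.0f} square miles")   # about 30,000, matching the quote

implied_density = 2e12 / pv_area                                    # W per m^2 of cell area
print(f"Implied power density: {implied_density:.0f} W/m^2")        # about 25 W/m^2

# Smil's whole-park estimate is only 4-9 W/m^2, so if access roads, panel spacing and
# service structures were included, the land requirement would be several times larger.
```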

What do you folks think? I’ve got a lot of questions:

• what’s the power density for nuclear power?

• what’s the power density for sea-based wind farms?

and some harder ones, like:

• how useful is the concept of power density?

• how much land area would be devoted to power production in a well-crafted carbon-neutral economy?

and that perennial favorite:

• what am I ignoring that I should be thinking about?

If Saul Griffith’s calculations are wrong, and keeping the world from exceeding 450 ppm of CO2 is easier than he thinks, we need to know!


Recommended Reading

2 October, 2010

The Azimuth Project is taking off! Today I woke up and found two new articles full of cool stuff I hadn’t known. Check them out:

• EROEI, about the idea of Energy Returned on Energy Invested.

• Peak phosphorus, about the crucial role of phosphorus as a fertilizer, and how the moment of peak phosphorus production may have already passed.

Both were initiated by David Tweed, but they’ve both already been polished by other people — Eric Forgy and Graham Jones, so far. So, it’s working!

Here’s the easiest way for you to help save the planet today:

1) Think of the most important book or article you’ve read about environmental problems, how to solve them, or some closely related topic.

2) Go to the Recommended reading page on Azimuth.

3) Click the button that says “Edit” at the bottom left.

4) Add your recommended reading! You’ll see items that look sort of like this:

### _The Necessary Revolution_

* Authors: Peter M. Senge, Bryan Smith, Nina Kruschwitz, Joe Laur and Sara Schley
* Publisher: Random House, New York, 2008
* Recommended by: [[Moneesha Mehta]]
* [Link](http://www.google.com/search?q=the+necessary+revolution)

**Summary:** I confess, I haven’t read the book, but I’ve listened to the abridged version on CD many times as I drive between Toronto and Ottawa. It never fails to inspire me. Peter Senge et al discuss how organizations, private, public, and non-profit, can all work together and build on their organizational strengths to create more sustainable operations.

Copy this format and add:

### _the name of your favorite article or book_

* Author(s): the author(s) name(s)
* Publisher: publisher and date
* Recommended by: [[your name]]
* [Link](a URL to help people find more information)

**Summary:** A little summary.

5) Type your name in the little box at the bottom of the page, and hit the Submit button.

Easy!

And if step 4 seems too complicated, don’t worry! Just enter the information in a paragraph of text — we’ll fix up the formatting.

Our ultimate goal is not a huge unsorted list of important articles and books about environmental issues. We’re trying to build a structure where it’s easy to find information — in fact, wisdom — on specific topics!

But right now we’re just getting started. We need, among other things, to rapidly accumulate relevant data. So — take 5 minutes today to help us out. And find out what other people think you’d enjoy reading!


This Week’s Finds (Week 303)

30 September, 2010

Now for the second installment of my interview with Nathan Urban, a colleague who started out in quantum gravity and now works on "global climate change from an Earth system perspective, with an emphasis on Bayesian data-model calibration, probabilistic prediction, risk assessment, and decision analysis".

But first, a word about Bali. One of the great things about living in Singapore is that it’s close to a lot of interesting places. My wife and I just spent a week in Ubud. This town is the cultural capital of Bali — full of dance, music, and crafts. It’s also surrounded by astounding terraced rice paddies:

In his book Whole Earth Discipline, Stewart Brand says "one of the finest examples of beautifully nuanced ecosystem engineering is the thousand-year-old terraced rice irrigation complex in Bali".

Indeed, when we took a long hike with a local guide, Made Dadug, we learned that all the apparent "weeds" growing in luxuriant disarray near the rice paddies were in fact carefully chosen plants: cacao, coffee, taro, ornamental flowers, and so on. "See this bush? It’s citronella — people working on the fields grab a pinch and use it for mosquito repellent." When a paddy loses its nutrients they plant sweet potatoes there instead of rice, to restore the soil.

Irrigation is managed by local farmers’ associations called "subaks", coordinated through a system of water temples. It’s not a top-down hierarchy: instead, each subak makes decisions in a more or less democratic way, while paying attention to what neighboring subaks do. Brand cites the work of Steve Lansing on this subject:

• J. Stephen Lansing, Perfect Order: Recognizing Complexity in Bali, Princeton U. Press, Princeton, New Jersey, 2006.

Physicists interested in the spontaneous emergence of order will enjoy this passage:

This book began with a question posed by a colleague. In 1992 I gave a lecture at the Santa Fe Institute, a recently created research center devoted to the study of "complex systems." My talk focused on a simulation model that my colleague James Kremer and I had created to investigate the ecological role of water temples. I need to explain a little about how this model came to be built; if the reader will bear with me, the relevance will soon become clear.

Kremer is a marine scientist, a systems ecologist, and a fellow surfer. One day on a California beach I told him the story of the water temples, and of my struggles to convince the consultants that the temples played a vital role in the ecology of the rice terraces. I asked Jim if a simulation model, like the ones he uses to study coastal ecology, might help to clarify the issue. It was not hard to persuade him to come to Bali to take a look. Jim quickly saw that a model of a single water temple would not be very useful. The whole point about water temples is that they interact. Bali is a steep volcanic island, and the rivers and streams are short and fast. Irrigation systems begin high up on the volcanoes, and follow one after another at short intervals all the way to the seacoast. The amount of water each subak gets depends less on rainfall than on how much water is used by its upstream neighbors. Water temples provide a venue for the farmers to plan their irrigation schedules so as to avoid shortages when the paddies need to be flooded. If pests are a problem, they can synchronize harvests and flood a block of terraces so that there is nothing for the pests to eat. Decisions about water taken by each subak thus inevitably affect its neighbors, altering both the availability of water and potential levels of pest infestations.

Jim proposed that we build a simulation model to capture all of these processes for an entire watershed. Having recently spent the best part of a year studying just one subak, the idea of trying to model nearly two hundred of them at once struck me as rather ambitious. But as Jim pointed out, the question is not whether flooding can control pests, but rather whether the entire collection of temples in a watershed can strike an optimal balance between water sharing and pest control.

We set to work plotting the location of all 172 subaks lying between the Oos and Petanu rivers in central Bali. We mapped the rivers and irrigation systems, and gathered data on rainfall, river flows, irrigation schedules, water uptake by crops such as rice and vegetables, and the population dynamics of the major rice pests. With these data Jim constructed a simulation model. At the beginning of each year the artificial subaks in the model are given a schedule of crops to plant for the next twelve months, which defines their irrigation needs. Then, based on historic rainfall data, we simulate rainfall, river flow, crop growth, and pest damage. The model keeps track of harvest data and also shows where water shortages or pest damage occur. It is possible to simulate differences in rainfall patterns or the growth of different kinds of crops, including both native Balinese rice and the new rice promoted by the Green Revolution planners. We tested the model by simulating conditions for two cropping seasons, and compared its predictions with real data on harvest yields for about half the subaks. The model did surprisingly well, accurately predicting most of the variation in yields between subaks. Once we knew that the model’s predictions were meaningful, we used it to compare different scenarios of water management. In the Green Revolution scenario, every subak tries to plant rice as often as possible and ignores the water temples. This produces large crop losses from pest outbreaks and water shortages, much like those that were happening in the real world. In contrast, the “water temple” scenario generates the best harvests by minimizing pests and water shortages.

Back at the Santa Fe Institute, I concluded this story on a triumphant note: consultants to the Asian Development Bank charged with evaluating their irrigation development project in Bali had written a new report acknowledging our conclusions. There would be no further opposition to management by water temples. When I finished my lecture, a researcher named Walter Fontana asked a question, the one that prompted this book: could the water temple networks self-organize? At first I did not understand what he meant by this. Walter explained that if he understood me correctly, Kremer and I had programmed the water temple system into our model, and shown that it had a functional role. This was not terribly surprising. After all, the farmers had had centuries to experiment with their irrigation systems and find the right scale of coordination. But what kind of solution had they found? Was there a need for a Great Designer or an Occasional Tinkerer to get the whole watershed organized? Or could the temple network emerge spontaneously, as one subak after another came into existence and plugged in to the irrigation systems? As a problem solver, how well could the temple networks do? Should we expect 10 percent of the subaks to be victims of water shortages at any given time because of the way the temple network interacts with the physical hydrology? Thirty percent? Two percent? Would it matter if the physical layout of the rivers were different? Or the locations of the temples?

Answers to most of these questions could only be sought if we could answer Walter’s first large question: could the water temple networks self-organize? In other words, if we let the artificial subaks in our model learn a little about their worlds and make their own decisions about cooperation, would something resembling a water temple network emerge? It turned out that this idea was relatively easy to implement in our computer model. We created the simplest rule we could think of to allow the subaks to learn from experience. At the end of a year of planting and harvesting, each artificial subak compares its aggregate harvests with those of its four closest neighbors. If any of them did better, copy their behavior. Otherwise, make no changes. After every subak has made its decision, simulate another year and compare the next round of harvests. The first time we ran the program with this simple learning algorithm, we expected chaos. It seemed likely that the subaks would keep flipping back and forth, copying first one neighbor and then another as local conditions changed. But instead, within a decade the subaks organized themselves into cooperative networks that closely resembled the real ones.

Lansing describes how attempts to modernize farming in Bali in the 1970s proved problematic:

To a planner trained in the social sciences, management by water temples looks like an arcane relic from the premodern era. But to an ecologist, the bottom-up system of control has some obvious advantages. Rice paddies are artificial aquatic ecosystems, and by adjusting the flow of water farmers can exert control over many ecological processes in their fields. For example, it is possible to reduce rice pests (rodents, insects, and diseases) by synchronizing fallow periods in large contiguous blocks of rice terraces. After harvest, the fields are flooded, depriving pests of their habitat and thus causing their numbers to dwindle. This method depends on a smoothly functioning, cooperative system of water management, physically embodied in proportional irrigation dividers, which make it possible to tell at a glance how much water is flowing into each canal and so verify that the division is in accordance with the agreed-on schedule.

Modernization plans called for the replacement of these proportional dividers with devices called "Romijn gates," which use gears and screws to adjust the height of sliding metal gates inserted across the entrances to canals. The use of such devices makes it impossible to determine how much water is being diverted: a gate that is submerged to half the depth of a canal does not divert half the flow, because the velocity of the water is affected by the obstruction caused by the gate itself. The only way to accurately estimate the proportion of the flow diverted by a Romijn gate is with a calibrated gauge and a table. These were not supplied to the farmers, although $55 million was spent to install Romijn gates in Balinese irrigation canals, and to rebuild some weirs and primary canals.

The farmers coped with the Romijn gates by simply removing them or raising them out of the water and leaving them to rust.

On the other hand, Made said that the people of the village really appreciated this modern dam:

Using gears, it takes a lot less effort to open and close than the old-fashioned kind:

Later in this series of interviews we’ll hear more about sustainable agriculture from Thomas Fischbacher.

But now let’s get back to Nathan!

JB: Okay. Last time we were talking about the things that altered your attitude about climate change when you started working on it. And one of them was how carbon dioxide stays in the atmosphere a long time. Why is that so important? And is it even true? After all, any given molecule of CO2 that’s in the air now will soon get absorbed by the ocean, or taken up by plants.

NU: The longevity of atmospheric carbon dioxide is important because it determines the amount of time over which our actions now (fossil fuel emissions) will continue to have an influence on the climate, through the greenhouse effect.

You have heard correctly that a given molecule of CO2 doesn’t stay in the atmosphere for very long. I think it’s about 5 years. This is known as the residence time or turnover time of atmospheric CO2. Maybe that molecule will go into the surface ocean and come back out into the air; maybe photosynthesis will bind it in a tree, in wood, until the tree dies and decays and the molecule escapes back to the atmosphere. This is a carbon cycle, so it’s important to remember that molecules can come back into the air even after they’ve been removed from it.

But the fate of an individual CO2 molecule is not the same as how long it takes for the CO2 content of the atmosphere to decrease back to its original level after new carbon has been added. The latter is the answer that really matters for climate change. Roughly, the former depends on the magnitude of the gross carbon sink, while the latter depends on the magnitude of the net carbon sink (the gross sink minus the gross source).

As an example, suppose that every year 100 units of CO2 are emitted to the atmosphere from natural sources (organic decay, the ocean, etc.), and each year (say with a 5 year lag), 100 units are taken away by natural sinks (plants, the ocean, etc). The 5 years actually doesn’t matter here; the system is in steady-state equilibrium, and the amount of CO2 in the air is constant. Now suppose that humans add an extra 1 unit of CO2 each year. If nothing else changes, then the amount of carbon in the air will increase every year by 1 unit, indefinitely. Far from the carbon being purged in 5 years, we end up with an arbitrarily large amount of carbon in the air.

Even if you only add carbon to the atmosphere for a finite time (e.g., by running out of fossil fuels), the CO2 concentration will ultimately reach, and then perpetually remain at, a level equivalent to the amount of new carbon added. Individual CO2 molecules may still get absorbed within 5 years of entering the atmosphere, and perhaps fewer of the carbon atoms that were once in fossil fuels will ultimately remain in the atmosphere. But if natural sinks are only removing an amount of carbon equal in magnitude to natural sources, and both are fixed in time, you can see that if you add extra fossil carbon the overall atmospheric CO2 concentration can never decrease, regardless of what individual molecules are doing.

In reality, natural carbon sinks tend to grow in proportion to how much carbon is in the air, so atmospheric CO2 doesn’t remain elevated indefinitely in response to a pulse of carbon into the air. This is kind of the biogeochemical analog to the "Planck feedback" in climate dynamics: it acts to restore the system to equilibrium. To first order, atmospheric CO2 decays or "relaxes" exponentially back to the original concentration over time. But this relaxation time (variously known as a "response time", "adjustment time", "recovery time", or, confusingly, "residence time") isn’t a function of the residence time of a CO2 molecule in the atmosphere. Instead, it depends on how quickly the Earth’s carbon removal processes react to the addition of new carbon. For example, how fast plants grow, die, and decay, or how fast surface water in the ocean mixes to greater depths, where the carbon can no longer exchange freely with the atmosphere. These are slower processes.
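To illustrate the distinction Nathan is drawing, here is a minimal toy model with made-up illustrative numbers (not a real carbon-cycle calculation). In the first case the sink stays fixed at the natural rate, so the excess accumulates for as long as emissions continue; in the second the sink grows in proportion to the excess, so the excess relaxes exponentially on a time scale set by the removal processes, not by the roughly 5-year residence time of any individual molecule.

```python
# Toy atmospheric carbon budget, in arbitrary units of carbon per year.
natural_source = 100.0
natural_sink = 100.0
human_emissions = 1.0

# Case 1: fixed sink. The excess grows without bound while emissions continue.
excess = 0.0
for year in range(50):
    excess += natural_source + human_emissions - natural_sink
print(f"Fixed sink, after 50 years of emissions: excess = {excess:.0f}")       # 50

# Case 2: the net sink grows in proportion to the excess (relaxation time tau, illustrative).
tau = 100.0        # years -- a made-up adjustment time, just to show the behavior
excess = 50.0      # emissions stop; watch the excess decay
for year in range(200):
    excess -= excess / tau
print(f"Proportional sink, 200 years after emissions stop: excess = {excess:.1f}")   # about 6.7
```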

There are actually a variety of response times, ranging from years to hundreds of thousands of years. The surface mixed layer of the ocean responds within a year or so; plants take decades to grow and take up carbon, or to return it to the atmosphere through rotting or burning. Deep ocean mixing and carbonate chemistry operate on longer time scales, centuries to millennia. And geologic processes like silicate weathering are even slower, tens of thousands of years. The removal dynamics are a superposition of all these processes, with a fair chunk taken out quickly by the fast processes, and slower processes removing the remainder more gradually.

To summarize, as David Archer put it, "The lifetime of fossil fuel CO2 in the atmosphere is a few centuries, plus 25 percent that lasts essentially forever." By "forever" he means "tens of thousands of years" — longer than the present age of human civilization. This inspired him to write this pop-sci book, taking a geologic view of anthropogenic climate change:

• David Archer, The Long Thaw: How Humans Are Changing the Next 100,000 Years of Earth’s Climate, Princeton University Press, Princeton, New Jersey, 2009.

A clear perspective piece on the lifetime of carbon is:

• Mason Inman, Carbon is forever, Nature Reports Climate Change, 20 November 2008.

which is based largely on this review article:

• David Archer, Michael Eby, Victor Brovkin, Andy Ridgwell, Long Cao, Uwe Mikolajewicz, Ken Caldeira, Katsumi Matsumoto, Guy Munhoven, Alvaro Montenegro, and Kathy Tokos, Atmospheric lifetime of fossil fuel carbon dioxide, Annual Review of Earth and Planetary Sciences 37 (2009), 117-134.

For climate implications, see:

• Susan Solomon, Gian-Kasper Plattner, Reto Knutti and Pierre Friedlingstein, Irreversible climate change due to carbon dioxide emissions, PNAS 106 (2009), 1704-1709.

• M. Eby, K. Zickfeld, A. Montenegro, D. Archer, K. J. Meissner and A. J. Weaver, Lifetime of anthropogenic climate change: millennial time scales of potential CO2 and surface temperature perturbations, Journal of Climate 22 (2009), 2501-2511.

• Long Cao and Ken Caldeira, Atmospheric carbon dioxide removal: long-term consequences and commitment, Environmental Research Letters 5 (2010), 024011.

For the very long term perspective (how CO2 may affect the glacial-interglacial cycle over geologic time), see:

• David Archer and Andrey Ganopolski, A movable trigger: Fossil fuel CO2 and the onset of the next glaciation, Geochemistry Geophysics Geosystems 6 (2005), Q05003.

JB: So, you’re telling me that even if we do something really dramatic like cut fossil fuel consumption by half in the next decade, we’re still screwed. Global warming will keep right on, though at a slower pace. Right? Doesn’t that make you feel sort of hopeless?

NU: Yes, global warming will continue even as we reduce emissions, although more slowly. That’s sobering, but not grounds for total despair. Societies can adapt, and ecosystems can adapt — up to a point. If we slow the rate of change, then there is more hope that adaptation can help. We will have to adapt to climate change, regardless, but the less we have to adapt, and the more gradual the adaptation necessary, the less costly it will be.

What’s even better than slowing the rate of change is to reduce the overall amount of it. To do that, we’d need to not only reduce carbon emissions, but to reduce them to zero before we consume all fossil fuels (or all of them that would otherwise be economically extractable). If we emit the same total amount of carbon, but more slowly, then we will get the same amount of warming, just more slowly. But if we ultimately leave some of that carbon in the ground and never burn it, then we can reduce the amount of final warming. We won’t be able to stop it dead, but even knocking a degree off the extreme scenarios would be helpful, especially if there are "tipping points" that might otherwise be crossed (like a threshold temperature above which a major ice sheet will disintegrate).

So no, I don’t feel hopeless that we can, in principle, do something useful to mitigate the worst effects of climate change, even though we can’t plausibly stop or reverse it on normal societal timescales. But sometimes I do feel hopeless that we lack the public and political will to actually do so. Or at least, that we will procrastinate until we start seeing extreme consequences, by which time it’s too late to prevent them. Well, it may not be too late to prevent future, even more extreme consequences, but the longer we wait, the harder it is to make a dent in the problem.

I suppose here I should mention the possibility of climate geoengineering, which is a proposed attempt to artificially counteract global warming through other means, such as reducing incoming sunlight with reflective particles in the atmosphere, or space mirrors. That doesn’t actually cancel all climate change, but it can negate a lot of the global warming. There are many risks involved, and I regard it as a truly last-ditch effort if we discover that we really are "screwed" and can’t bear the consequences.

There is also an extreme form of carbon cycle geoengineering, known as air capture and sequestration, which extracts CO2 from the atmosphere and sequesters it for long periods of time. There are various proposed technologies for this, but it’s highly uncertain whether this can feasibly be done on the necessary scales.

JB: Personally, I think society will procrastinate until we see extreme climate changes. Recently millions of Pakistanis were displaced by floods: a quarter of their country was covered by water. We can’t say for sure this was caused by global warming — but it’s exactly the sort of thing we should expect.

But you’ll notice, this disaster is nowhere near enough to make politicians talk about cutting fossil fuel usage! It’ll take a lot of disasters like this to really catch people’s attention. And by then we’ll be playing a desperate catch-up game, while people in many countries are struggling to survive. That won’t be easy. Just think how little attention the Pakistanis can spare for global warming right now.

Anyway, this is just my own cheery view. But I’m not hopeless, because I think there’s still a lot we can do to prevent a terrible situation from becoming even worse. Since I don’t think the human race will go extinct anytime soon, it would be silly to "give up".

Now, you’ve just started a position at the Woodrow Wilson School at Princeton. When I was an undergrad there, this school was the place for would-be diplomats. What’s a nice scientist like you doing in a place like this? I see you’re in the Program in Science, Technology and Environmental Policy, or "STEP program". Maybe it’s too early for you to give a really good answer, but could you say a bit about what they do?

NU: Let me pause to say that I don’t know whether the Pakistan floods are "exactly the sort of thing we should expect" to happen to Pakistan, specifically, as a result of climate change. Uncertainty in the attribution of individual events is one reason why people don’t pay attention to them. But it is true that major floods are examples of extreme events which could become more (or less) common in various regions of the world in response to climate change.

Returning to your question, the STEP program includes a number of scientists, but we are all focused on policy issues because the Woodrow Wilson School is for public and international affairs. There are physicists who work on nuclear policy, ecologists who study environmental policy and conservation biology, atmospheric chemists who look at ozone and air pollution, and so on. Obviously, climate change is intimately related to public and international policy. I am mostly doing policy-relevant science but may get involved in actual policy to some extent. The STEP program has ties to other departments such as Geosciences, interdisciplinary umbrella programs like the Atmospheric and Ocean Sciences program and the Princeton Environmental Institute, and NOAA’s nearby Geophysical Fluid Dynamics Laboratory, one of the world’s leading climate modeling centers.

JB: How much do you want to get into public policy issues? Your new boss, Michael Oppenheimer, used to work as chief scientist for the Environmental Defense Fund. I hadn’t known much about them, but I’ve just been reading a book called The Climate War. This book says a lot about the Environmental Defense Fund’s role in getting the US to pass cap-and-trade legislation to reduce sulfur dioxide emissions. That’s quite an inspiring story! Many of the same people then went on to push for legislation to reduce greenhouse gases, and of course that story is less inspiring, so far: no success yet. Can you imagine yourself getting into the thick of these political endeavors?

NU: No, I don’t see myself getting deep into politics. But I am interested in what we should be doing about climate change, specifically, the economic assessment of climate policy in the presence of uncertainties and learning. That is, how hard should we be trying to reduce CO2 emissions, accounting for the fact that we’re unsure what climate the future will bring, but expect to learn more over time. Michael is very interested in this question too, and the harder problem of "negative learning":

• Michael Oppenheimer, Brian C. O’Neill and Mort Webster, Negative learning, Climatic Change 89 (2008), 155-172.

"Negative learning" occurs if what we think we’re learning is actually converging on the wrong answer. How fast could we detect and correct such an error? It’s hard enough to give a solid answer to what we might expect to learn, let alone what we don’t expect to learn, so I think I’ll start with the former.

I am also interested in the value of learning. How will our policy change if we learn more? Can there be any change in near-term policy recommendations, or will we learn slowly enough that new knowledge will only affect later policies? Is it more valuable — in terms of its impact on policy — to learn more about the most likely outcomes, or should we concentrate on understanding better the risks of the worst-case scenarios? What will cause us to learn the fastest? Better surface temperature observations? Better satellites? Better ocean monitoring systems? What observables should we be looking at?

The question "How much should we reduce emissions" is, partially, an economic one. The safest course of action from the perspective of climate impacts is to immediately reduce emissions to a much lower level. But that would be ridiculously expensive. So some kind of cost-benefit approach may be helpful: what should we do, balancing the costs of emissions reductions against their climate benefits, knowing that we’re uncertain about both. I am looking at so-called "economic integrated assessment" models, which combine a simple model of the climate with an even simpler model of the world economy to understand how they influence each other. Some argue these models are too simple. I view them more as a way of getting order-of-magnitude estimates of the relative values of different uncertainty scenarios or policy options under specified assumptions, rather than something that can give us "The Answer" to what our emissions targets should be.

In a certain sense it may be moot to look at such cost-benefit analyses, since there is a huge difference between "what may be economically optimal for us to do" and "what we will actually do". We’re not yet even coming close to current policy recommendations, so what’s the point of generating new recommendations? That’s certainly a valid argument, but I still think it’s useful to have a sense of the gap between what we are doing and what we "should" be doing.

Economics can only get us so far, however (and maybe not far at all). Traditional approaches to economics have a very narrow way of viewing the world, and tend to ignore questions of ethics. How do you put an economic value on biodiversity loss? If we might wipe out polar bears, or some other species, or a whole lot of species, how much is it "worth" to prevent that? What is the Great Barrier Reef worth? Its value in tourism dollars? Its value in "ecosystem services" (the more nebulous economic activity which indirectly depends on its presence, such as fishing)? Does it have intrinsic value, and is it worth something (what?) to preserve, even if it has no quantifiable impact on the economy whatsoever?

You can continue on with questions like this. Does it make sense to apply standard economic discounting factors, which effectively value the welfare of future generations less than that of the current generation? See for example:

• John Quiggin, Stern and his critics on discounting and climate change: an editorial essay, Climatic Change 89 (2008), 195-205.

Economic models also tend to preserve present economic disparities. Otherwise, their "optimal" policy is to immediately transfer a lot of the wealth of developed countries to developing countries — and this is without any climate change — to maximize the average "well-being" of the global population, on the grounds that a dollar is worth more to a poor person than a rich person. This is not a realistic policy and arguably shouldn’t happen anyway, but you do have to be careful about hard-coding potential inequities into your models:

• Seth D. Baum and William E. Easterling, Space-time discounting in climate change adaptation, Mitigation and Adaptation Strategies for Global Change 15 (2010), 591-609.

More broadly, it’s possible for economics models to allow sea level rise to wipe out Bangladesh, or other extreme scenarios, simply because some countries have so little economic output that it doesn’t "matter" if they disappear, as long as other countries become even more wealthy. As I said, economics is a narrow lens.

After all that, it may seem silly to be thinking about economics at all. The main alternative is the "precautionary principle", which says that we shouldn’t take suspected risks unless we can prove them safe. After all, we have few geologic examples of CO2 levels rising as far and as fast as we are likely to increase them — to paraphrase Wally Broecker, we are conducting an uncontrolled and possibly unprecedented experiment on the Earth. This principle has some merits. The common argument, "We should do nothing unless we can prove the outcome is disastrous", is a strange burden of proof from a decision analytic point of view — it has little to do with the realities of risk management under uncertainty. Nobody’s going to say "You can’t prove the bridge will collapse, so let’s build it". They’re going to say "Prove it’s safe (to within a certain guarantee) before we build it". Actually, a better analogy to the common argument might be: you’re driving in the dark with broken headlights, and insist “You’ll have to prove there are no cliffs in front of me before I’ll consider slowing down.” In reality, people should slow down, even if it makes them late, unless they know there are no cliffs.

But the precautionary principle has its own problems. It can imply arbitrarily expensive actions in order to guard against arbitrarily unlikely hazards, simply because we can’t prove they’re safe, or precisely quantify their exact degree of unlikelihood. That’s why I prefer to look at quantitative cost-benefit analysis in a probabilistic framework. But it can be supplemented with other considerations. For example, you can look at stabilization scenarios: where you "draw a line in the sand" and say we can’t risk crossing that, and apply economics to find the cheapest way to avoid crossing the line. Then you can elaborate that to allow for some small but nonzero probability of crossing it, or to allow for temporary "overshoot", on the grounds that it might be okay to briefly cross the line, as long as we don’t stay on the other side indefinitely. You can tinker with discounting assumptions and the decision framework of expected utility maximization. And so on.

JB: This is fascinating stuff. You’re asking a lot of really important questions — I think I see about 17 question marks up there. Playing the devil’s advocate a bit, I could respond: do you know any answers? Of course I don’t expect "ultimate" answers, especially to profound questions like how much we should allow economics to guide our decisions, versus tempering it with other ethical considerations. But it would be nice to see an example where thinking about these issues turned up new insights that actually changed people’s behavior. Cases where someone said "Oh, I hadn’t thought of that…", and then did something different that had a real effect.

You see, right now the world as it is seems so far removed from the world as it should be that one can even start to doubt the usefulness of pondering the questions you’re raising. As you said yourself, "We’re not yet even coming close to current policy recommendations, so what’s the point of generating new recommendations?"

I think the cap-and-trade idea is a good example, at least as far as sulfur dioxide emissions go: the Clean Air Act Amendments of 1990 managed to reduce SO2 emissions in the US from about 19 million tons in 1980 to about 7.6 million tons in 2007. Of course this idea is actually a bunch of different ideas that need to work together in a certain way… but anyway, some example related to global warming would be a bit more reassuring, given our current problems with that.

NU: Climate change economics has been very influential in generating momentum for putting a price on carbon (through cap-and-trade or otherwise), in Europe and the U.S., by showing that such a policy has the potential to be a net benefit considering the risks of climate change. SO2 emissions markets are one relevant piece of this body of research, although the CO2 problem is much bigger in scope and presents more problems for such approaches. Climate economics has been an important synthesis of decision analysis and scientific uncertainty quantification, which I think we need more of. But to be honest, I’m not sure what immediate impact additional economic work may have on mitigation policy, unless we begin approaching current emissions targets. So from the perspective of immediate applications, I also ponder the usefulness of answering these questions.

That, however, is not the only perspective I think about. I’m also interested in how what we should do is related to what we might learn — if not today, then in the future. There are still important open questions about how well we can see something potentially bad coming, the answers to which could influence policies. For example, if a major ice sheet begins to substantially disintegrate within the next few centuries, would we be able to see that coming soon enough to step up our mitigation efforts in time to prevent it? In reality that’s a probabilistic question, but let’s pretend it’s a binary outcome. If the answer is "yes", that could call for increased investment in "early warning" observation systems, and a closer coupling of policy to the data produced by such systems. (Well, we should be investing more in those anyway, but people might get the point more strongly, especially if research shows that we’d only see it coming if we get those systems in place and tested soon.) If the answer is "no", that could go at least three ways. One way it could go is that the precautionary principle wins: if we think that we could put coastal cities under water, and we wouldn’t see it coming in time to prevent it, that might finally prompt more preemptive mitigation action. Another is that we start looking more seriously at last-ditch geoengineering approaches, or carbon air capture and sequestration. Or, if people give up on modifying the climate altogether, then it could prompt more research and development into adaptation. All of those outcomes raise new policy questions, concerning how much of what policy response we should aim for.

Which brings me to the next policy option. The U.S. presidential science advisor, John Holdren, has said that we have three choices for climate change: mitigate, adapt, or suffer. Regardless of what we do about the first, people will likely be doing some of the other two; the question is how much. If you’re interested in research that has a higher likelihood of influencing policy in the near term, adaptation is probably what you should work on. (That, or technological approaches like climate/carbon geoengineering, energy systems, etc.) People are already looking very seriously at adaptation (and in some cases are already putting plans into place). For example, the Port Authority of Los Angeles needs to know whether, or when, to fortify their docks against sea level rise, and whether a big chunk of their business could disappear if the Northwest Passage through the Arctic Ocean opens permanently. They have to make these investment decisions regardless of what may happen with respect to geopolitical emissions reduction negotiations. The same kinds of learning questions I’m interested in come into play here: what will we know, and when, and how should current decisions be structured knowing that we will be able to periodically adjust those decisions?

So, why am I not working on adaptation? Well, I expect that I will be, in the future. But right now, I’m still interested in a bigger question, which is how well can we bound the large risks and our ability to prevent disasters, rather than just finding the best way to survive them. What is the best and the worst that can happen, in principle? Also, I’m concerned that right now there is too much pressure to develop adaptation policies to a level of detail which we don’t yet have the scientific capability to develop. While global temperature projections are probably reasonable within their stated uncertainty ranges, we have a very limited ability to predict, for example, how precipitation may change over a particular city. But that’s what people want to know. So scientists are trying to give them an answer. But it’s very hard to say whether some of those answers right now are actionably credible. You have to choose your problems carefully when you work in adaptation. Right now I’m opting to look at sea level rise, partly because it is less affected by some of the details of local meteorology.

JB: Interesting. I think I’m going to cut our conversation here, because at this point it took a turn that will really force me to do some reading! And it’s going to take a while. But it should be fun!


The climatic impacts of releasing fossil fuel CO2 to the atmosphere will last longer than Stonehenge, longer than time capsules, longer than nuclear waste, far longer than the age of human civilization so far. – David Archer


Ashtekar on Black Hole Evaporation

29 September, 2010

This post is a bit different from the usual fare here. The relativity group at Louisiana State University runs an innovative series of talks, the International Loop Quantum Gravity Seminar, where participants worldwide listen and ask questions by telephone, and the talks are made available online. Great idea! Why fly the speaker’s body thousands of miles through the stratosphere from point A to point B when all you really want are their precious megabytes of wisdom?

This seminar is now starting up a blog, to go along with the talks. Jorge Pullin invited me to kick it off with a post. Following his instructions, I won’t say anything very technical. I’ll just provide an easy intro that anyone who likes physics can enjoy.

• Abhay Ashtekar, Quantum evaporation of 2-d black holes, 21 September 2010. PDF of the slides, and audio in either .wav (45MB) or .aif format (4MB).

Abhay Ashtekar has long been one of the leaders of loop quantum gravity. Einstein described gravity using a revolutionary theory called general relativity. In the mid-1980s, Ashtekar discovered a way to reformulate the equations of general relativity in a way that brings out their similarity to the equations describing the other forces of nature. Gravity has always been the odd man out, so this was very intriguing.

Shortly thereafter, Carlo Rovelli and Lee Smolin used this new formulation to tackle the problem of quantizing gravity: that is, combining general relativity with the insights from quantum mechanics. The result is called “loop quantum gravity” because in an early version it suggested that at tiny distance scales, the geometry of space was not smooth, but made of little knotted or intersecting loops.

Later work suggested a network-like structure, and still later time was brought into the game. The whole story is still very tentative and controversial, but it’s quite a fascinating business. Maybe this movie will give you a rough idea of the images that flicker through people’s minds when they think about this stuff:

… though personally I hear much cooler music in my head. Bach, or Eno — not these cheesy detective show guitar licks.

Now, one of the goals of any theory of quantum gravity must be to resolve certain puzzles that arise in naive attempts to blend general relativity and quantum mechanics. And one of the most famous is the so-called black hole information paradox. (I don’t think it’s actually a “paradox”, but that’s what people usually call it.)

The problem began when Hawking showed, by a theoretical calculation, that black holes aren’t exactly black. In fact he showed how to compute the temperature of a black hole, and found that it’s not zero. Anything whose temperature is above absolute zero will radiate light: visible light if it’s hot enough, infrared if it’s cooler, microwaves if it’s even cooler, and so on. So, black holes must ‘glow’ slightly.

Very slightly. The black holes that astronomers have actually detected, formed by collapsing stars, would have a ridiculously low temperature: for example, about 0.00000002 degrees Kelvin for a black hole that’s 3 times the mass of our Sun. So, nobody has actually seen the radiation from a black hole.

But Hawking’s calculations say that the smaller a black hole is, the hotter it is! Its temperature is inversely proportional to its mass. So, in principle, if we wait long enough, and keep stuff from falling into our black hole, it will ‘evaporate’. In other words: it will gradually radiate away energy, and thus lose mass (since E = mc²), and thus get hotter, and thus radiate more energy, and so on, in a vicious feedback loop. In the end, it will disappear in a big blast of gamma rays!
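Here is a minimal Python sketch of the numbers involved, using the standard formula T = ħc³/(8πGMk_B) for the Hawking temperature and the standard order-of-magnitude estimate for the evaporation time (which counts photon emission only). Both are textbook results; the code itself is just my illustration.

```python
import math

hbar = 1.055e-34      # J s
c = 2.998e8           # m/s
G = 6.674e-11         # m^3 / (kg s^2)
k_B = 1.381e-23       # J/K
M_sun = 1.989e30      # kg

def hawking_temperature(M):
    """Temperature of a Schwarzschild black hole of mass M, in kelvin."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

def evaporation_time_years(M):
    """Approximate evaporation time, counting photon emission only."""
    seconds = 5120 * math.pi * G**2 * M**3 / (hbar * c**4)
    return seconds / 3.156e7

M = 3 * M_sun
print(f"Temperature:      {hawking_temperature(M):.1e} K")      # about 2e-8 K, as in the text
print(f"Evaporation time: {evaporation_time_years(M):.1e} yr")  # about 6e68 years
```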

At least that’s what Hawking’s calculations say. These calculations were not based on a full-fledged theory of quantum gravity, so they’re probably just approximately correct. This may be the way out of the “black hole information paradox”.

But what’s the paradox? Patience — I’m gradually leading up to it. First, you need to know that in all the usual physical processes we see, information is conserved. If you’ve studied physics you’ve probably heard that various important quantities don’t change with time: they’re “conserved”. You’ve probably heard about conservation of energy, and momentum, and angular momentum and electric charge. But conservation of information is equally fundamental, or perhaps even more so: it says that if you know everything about what’s going on now, you can figure out everything about what’s going on later — and vice versa, too!

Actually, if you’ve studied physics a little but not very much, you may find my remarks puzzling. If so, don’t feel bad! Conservation of information is not usually mentioned in the courses that introduce the other conservation laws. The concept of information is fundamental to thermodynamics, but it appears in disguised form: “entropy”. There’s a minus sign lurking around here: while information is a precise measure of how much you do know, entropy measures how much you don’t know. And to add to the confusion, the first thing they tell you about entropy is that it’s not conserved. Indeed, the Second Law of Thermodynamics says that the entropy of a closed system tends to increase!

But after a few years of hard thinking and heated late-night arguments with your physics pals, it starts to make sense. Entropy as considered in thermodynamics is a measure of how much information you lack about a system when you only know certain things about it — things that are easily measured. For example, if you have a box of gas, you might measure its volume and energy. You’d still be ignorant about the details of all the molecules inside. The amount of information you lack is the entropy of the gas.

And as time passes, information tends to pass from easily measured forms to less easily measured forms, so entropy increases. But the information is still there in principle — it’s just hard to access. So information is conserved.

There’s a lot more to say here. For example: why does information tend to pass from easily measured forms to less easily measured forms, instead of the reverse? Does thermodynamics require a fundamental difference between future and past — a so-called “arrow of time”? Alas, I have to sidestep this question, because I’m supposed to be telling you about the black hole information paradox.

So: back to black holes!

Suppose you drop an encyclopedia into a black hole. The information in the encyclopedia seems to be gone. At the very least, it’s extremely hard to access! So, people say the entropy has increased. But could the information still be there in hidden form?

Hawking’s original calculations suggested the answer is no. Why? Because they said that as the black hole radiates and shrinks away, the radiation it emits contains no information about the encyclopedia you threw in — or at least, no information except a few basic things like its energy, momentum, angular momentum and electric charge. So no matter how clever you are, you can’t examine this radiation and use it to reconstruct the encyclopedia article on, say, Aardvarks. This information is lost to the world forever!

So what’s the black hole information paradox? Well, it’s not exactly a “paradox”. The problem is just that in every other process known to physicists, information is conserved — so it seems very unpalatable to allow any exception to this rule. But if you try to figure out a way to save information conservation in the case of black holes, it’s tough. Tough enough, in fact, to have bothered many smart physicists for decades.

Indeed, Stephen Hawking and the physicist John Preskill made a famous bet about this puzzle in 1997. Hawking bet that information wasn’t conserved; Preskill bet it was. In fact, they bet an encyclopedia!

In 2004 Hawking conceded the bet to Preskill. It happened at a conference in Dublin — I was there and blogged about it. Hawking conceded because he did some new calculations suggesting that information can gradually leak out of the black hole, thanks to the radiation. In other words: if you throw an encyclopedia in a black hole, a sufficiently clever physicist can indeed reconstruct the article on Aardvarks by carefully examining the radiation from the black hole. It would be incredibly hard, since the information would be highly scrambled. But it could be done in principle.

Unfortunately, Hawking’s calculation is very hand-wavy at certain crucial steps — in fact, more hand-wavy than certain calculations that had already been done with the help of string theory (or more precisely, the AdS/CFT conjecture). And neither approach makes it easy to see in detail how the information comes out in the radiation.

This finally brings us to Ashtekar’s talk. Despite what you might guess from my warmup, his talk was not about loop quantum gravity. Certainly everyone working on loop quantum gravity would love to see this theory resolve the black hole information paradox. I’m sure Ashtekar is aiming in that direction. But his talk was about a warmup problem, a “toy model” involving black holes in 2d spacetime instead of our real-world 4-dimensional spacetime.

The advantage of 2d spacetime is that the math becomes a lot easier there. There’s been a lot of work on black holes in 2d spacetime, and Ashtekar presented some new work on an existing model, the Callan-Giddings-Harvey-Strominger black hole. This new work is a mixture of analytical and numerical calculations done over the last 2 years by Ashtekar together with Frans Pretorius, Fethi Ramazanoglu, Victor Taveras and Madhavan Varadarajan.

I will not attempt to explain this work in detail! The main point is this: all the information that goes into the black hole leaks back out in the form of radiation as the black hole evaporates.

But the talk also covers many other interesting issues. For example, the final stages of black hole evaporation display interesting properties that are independent of the details of the black hole’s initial state. Physicists call this sort of phenomenon “universality”.

Furthermore, when the black hole finally shrinks to nothing, it sends out a pulse of gravitational radiation, but not enough to destroy the universe. It may seem very peculiar to imagine that the death spasms of a black hole could destroy the universe, but in fact some approximate “semiclassical” calculations of Hawking and Stewart suggested just that! They found that in 2d spacetime, the dying black hole emitted a pulse of infinite spacetime curvature — dubbed a “thunderbolt” — which made it impossible to continue spacetime beyond that point. But they suggested that a more precise calculation, taking quantum gravity fully into account, would eliminate this effect. And this seems to be the case.

For more, listen to Ashtekar’s talk while looking at the PDF file of his slides!


Jacob Biamonte on Tensor Networks

29 September, 2010

One of the unexpected pleasures of starting work at the Centre for Quantum Technologies was realizing that the math I learned in loop quantum gravity and category theory can also be used in quantum computation and condensed matter physics!

In loop quantum gravity I learned a lot about “spin networks”. When I sailed up to the abstract heights of category theory, I discovered that these were a special case of “string diagrams”. And now, going back down to earth, I see they have a special case called “tensor networks”.

Jacob Biamonte is a postdoc who splits his time between Oxford and the CQT, and he’s just finished a paper on tensor networks:

• Jacob Biamonte, Algebra and coalgebra on categorical tensor network states.

He’s eager to get your comments on this paper. So, if you feel you should be able to understand this paper but have trouble with something, or you spot a mistake, he’d like to hear from you.

Heck, he’d also like to hear from you if you love the paper! But as usual, the most helpful feedback is not just a pat on the back, but a suggestion for how to make things better.

Let me just get you warmed up here…

There’s a general theory of string diagrams, which are graphs with edges labelled by objects and vertices labelled by morphisms in some symmetric monoidal category with duals. Any string diagram determines a morphism in that category. If you hang out here, there’s a good chance you already know and love this stuff. I’ll assume you do.

To get examples, we can take any compact Lie group, and look at its category of finite-dimensional unitary representations. Then the string diagrams are called spin networks. When the group is SU(2), we get back the original spin networks considered by Penrose. These are important in loop quantum gravity.

But when the group is the trivial group, it turns out that our string diagrams are important in quantum computation and condensed matter physics! And now they go by a different name: tensor networks!

The idea, in a nutshell, is that you can draw a string diagram with all its edges labelled by some Hilbert space H, and vertices labelled by linear operators. Suppose you have a diagram with no edges coming in and n edges coming out. Then this diagram describes a linear operator

\psi : \mathbb{C} \to H^{\otimes n}

or in other words, a state

\psi \in H^{\otimes n}

This is called a tensor network state. Similarly, if we have a diagram with n edges coming in and m coming out, it describes a linear operator

T : H^{\otimes n} \to H^{\otimes m}

And the nice thing is, we can apply this operator to our tensor network state \psi and get a new tensor network state T \psi. Even better, we can do this all using pictures!
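Here is a minimal numerical sketch of this idea, just to make the pictures concrete. The array names and shapes are my own illustration, not anything from Jacob’s paper: each vertex becomes a numpy array, each edge a shared index, and gluing diagrams together is just index contraction.

import numpy as np

d = 2  # dimension of the Hilbert space H labelling each edge

# A vertex with no inputs and 2 outputs: a tensor network state psi in H (x) H
psi = np.random.rand(d, d) + 1j * np.random.rand(d, d)
psi /= np.linalg.norm(psi)  # normalize

# A vertex with 2 inputs and 2 outputs: an operator T from H (x) H to H (x) H
T = np.random.rand(d, d, d, d) + 1j * np.random.rand(d, d, d, d)

# Applying T to psi means contracting T's input legs with psi's legs
new_psi = np.einsum('abij,ij->ab', T, psi)  # the new tensor network state T psi

# Composing operators means gluing diagrams along matching edges
TT = np.einsum('abij,ijcd->abcd', T, T)  # the operator 'apply T twice' as one bigger network

Contracting such diagrams efficiently is where the real numerical work in tensor network methods goes; this sketch only shows how diagrams turn into arrays and contractions.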

Tensor networks have led to new algorithms in quantum computation, and new ways of describing time evolution operators in condensed matter physics. But most of this work makes no explicit reference to category theory. Jacob’s paper is, among other things, an attempt to bridge this cultural gap. It also tries to bridge the gap between tensor networks and Yves Lafont’s string diagrams for Boolean logic.

Here’s the abstract of Jacob’s paper:

We present a set of new tools which extend the problem solving techniques and range of applicability of network theory as currently applied to quantum many-body physics. We use this new framework to give a solution to the quantum decomposition problem. Specifically, given a quantum state S with k terms, we are now able to construct a tensor network with poly(k) rank three tensors that describes the state S. This solution became possible by synthesizing and tailoring several powerful modern techniques from higher mathematics: category theory, algebra and coalgebra and applicable results from classical network theory and graphical calculus. We present several examples (such as categorical MERA networks etc.) which illustrate how the established methods surrounding tensor network states arise as a special instance of this more general framework, which we call Categorical Tensor Network States.

Take a look! If you have comments, please make them over at my other blog, the n-Category Café — since I’ve also announced this paper there, and it’ll be less confusing if all the comments show up in one place.


Quantum Entanglement from Feedback Control

28 September, 2010

Now André Carvalho from the physics department at Australian National University in Canberra is talking about “Quantum feedback control for entanglement production”. He’s in a theory group with strong connections to the atom laser experimental group at ANU. This theory group works on measurement and control theory for Bose-Einstein condensates and atom lasers.

The good news: recent advances in real-time monitoring allow us to control quantum systems using feedback.

The big question: can we use feedback to design the system dynamics to produce and stabilize entangled states?

The answer: yes.

Start by considering two atoms in a cavity, interacting with a laser. Think of each atom as a 2-state system — so the Hilbert space of the pair of atoms is

\mathbb{C}^2 \otimes \mathbb{C}^2

We’ll say what the atoms are doing using not a pure state (a unit vector) but a mixed state (a density matrix). The atoms’ time evolution will be described by Lindbladian mechanics. This is a generalization of Hamiltonian mechanics that allows for dissipative processes — processes that increase entropy! A bit more precisely, we’re talking here about the quantum analogue of a Markov process. Even more precisely, we’re talking about the Lindblad equation: the most general equation describing a time evolution for density matrices that is time-translation-invariant, Markovian, trace preserving and completely positive.
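In case you haven’t seen it, here is the Lindblad equation in its standard form (with \hbar = 1); this formula isn’t from the talk, but it’s the general equation just described in words:

\frac{d\rho}{dt} = -i[H, \rho] + \sum_k \left( L_k \rho L_k^\dagger - \frac{1}{2} \left\{ L_k^\dagger L_k , \rho \right\} \right)

Here \rho is the density matrix, H is the Hamiltonian, and the ‘jump operators’ L_k describe the dissipative processes. Exactly which jump operators show up depends on the model; for atoms in a cavity they typically describe things like spontaneous emission.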

As time passes, an initially entangled 2-atom state will gradually ‘decohere’, losing its entanglement.

But next, introduce feedback. Can we do this in a way that makes the entanglement become large as time passes?

With ‘homodyne monitoring’, you can do pretty well. But with ‘photodetection monitoring’, you can do great! As time passes, every state will evolve to approach the maximally entangled state: the ‘singlet state’. This is the density matrix

| \psi \rangle \langle \psi |

corresponding to the pure state

|\psi \rangle = \frac{1}{\sqrt{2}} (\uparrow \otimes \downarrow - \downarrow \otimes \uparrow)

So: the system dynamics can be engineered using feedback to produce and stabilize highly entangled states. In fact this is true not just for 2-atom systems, but also for multi-atom systems! And at least for 2-atom systems, this scheme is robust against imperfections and detection inefficiencies. The question of robustness is still under study for multi-atom systems.
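To see in what sense the singlet state is ‘maximally entangled’, here is a tiny numpy check. This is my own illustration, not part of the talk: tracing out one atom leaves the other in the maximally mixed state, so the entanglement entropy takes its largest possible value, ln 2.

import numpy as np

# Singlet state of two 2-state atoms: (up,down - down,up)/sqrt(2)
up = np.array([1, 0], dtype=complex)
down = np.array([0, 1], dtype=complex)
psi = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)

rho = np.outer(psi, psi.conj())  # the density matrix |psi><psi|

# Reduced density matrix of atom 1: trace out atom 2
rho_1 = rho.reshape(2, 2, 2, 2).trace(axis1=1, axis2=3)
print(rho_1)  # 0.5 * identity: the maximally mixed state

# Entanglement entropy S = -Tr(rho_1 ln rho_1)
evals = np.linalg.eigvalsh(rho_1)
entropy = -sum(p * np.log(p) for p in evals if p > 1e-12)
print(entropy, np.log(2))  # both approximately 0.693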

For more details, try:

• A. R. R. Carvalho, A. J. S. Reid, and J. J. Hope, Controlling entanglement by direct quantum feedback.

Abstract:
We discuss the generation of entanglement between electronic states of two atoms in a cavity using direct quantum feedback schemes. We compare the effects of different control Hamiltonians and detection processes in the performance of entanglement production and show that the quantum-jump-based feedback proposed by us in Phys. Rev. A 76 010301(R) (2007) can protect highly entangled states against decoherence. We provide analytical results that explain the robustness of jump feedback, and also analyse the perspectives of experimental implementation by scrutinising the effects of imperfections and approximations in our model.

How do homodyne and photodetection feedback work? I’m not exactly sure, but this quote helps:

In the homodyne-based scheme, the detector registers a continuous photocurrent, and the feedback Hamiltonian is constantly applied to the system. Conversely, in the photocounting-based strategy, the absence of signal predominates and the control is only triggered after a detection click, i.e. a quantum jump, occurs.
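To make the photodetection strategy a bit more concrete, here is a toy quantum-jump trajectory for a single two-level atom, with a feedback unitary applied only when a click occurs. This is just my own cartoon of the general idea, not the two-atom scheme from the paper, and the decay rate, feedback angle and step size are made-up parameters.

import numpy as np

rng = np.random.default_rng(0)

# Two-level atom with basis |e> = (1,0), |g> = (0,1); lowering operator |g><e|
sm = np.array([[0, 0], [1, 0]], dtype=complex)

gamma = 1.0        # decay rate (arbitrary units)
dt = 0.001         # time step
theta = np.pi / 4  # feedback rotation angle (arbitrary choice)

# Feedback unitary applied right after each click: a rotation about sigma_y
sy = np.array([[0, -1j], [1j, 0]])
U_fb = np.cos(theta) * np.eye(2) - 1j * np.sin(theta) * sy

# Non-Hermitian effective Hamiltonian for the no-click evolution (atomic H = 0 here)
H_eff = -0.5j * gamma * (sm.conj().T @ sm)

psi = np.array([1, 0], dtype=complex)  # start in the excited state
clicks = 0
for _ in range(5000):
    p_jump = gamma * dt * np.vdot(psi, sm.conj().T @ sm @ psi).real
    if rng.random() < p_jump:
        # Detection click: quantum jump, then trigger the feedback unitary
        psi = sm @ psi
        psi /= np.linalg.norm(psi)
        psi = U_fb @ psi
        clicks += 1
    else:
        # No click: evolve with the effective Hamiltonian and renormalize
        psi = psi - 1j * dt * (H_eff @ psi)
        psi /= np.linalg.norm(psi)

print(clicks, np.abs(psi) ** 2)  # number of clicks and final level populations

In the homodyne case the control would instead be applied continuously at every step, whether or not anything was detected.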


The Azimuth Project

27 September, 2010

Here’s the long-promised wiki:

The Azimuth Project

We’re going to make this into the place where scientists and engineers will go when they’re looking for reliable information on environmental problems, or ideas for projects to work on.

We’ve got our work cut out for us. If you click the link today — September 27th, 2010 — you won’t see much there. But I promise to keep making it better, with bulldog determination. And I hope you join me.

In addition to the wiki there’s a discussion forum:

The Azimuth Forum

where we can discuss work in progress on the Azimuth Project, engage in collaborative research, and decide on Azimuth policies.

Anybody can read the stuff on the Azimuth Forum. But if you want to join the conversation, you need to become a member. To learn how, read this and carefully follow the steps.

You’ll see a few sample articles on the Azimuth Project wiki, but they’re really just stubs. My plan now is to systematically go through some big issues — starting with some we’ve already discussed here — and blog about them. The resulting blog posts, and your responses to them, will then be fed into the wiki. My goal is to generate:

• readable, reliable summaries of environmental challenges we face,

• pointers to scientists and engineers working on these challenges,

and lists of

• ideas these people have had,

• questions that they have,

• questions that they should be thinking about, but aren’t.

Let the games begin! Let’s try to keep this planet a wonderful place!

