Quantum Phase Measurement Via Flux Qubits

11 August, 2010

Yesterday at the Centre for Quantum Technologies, H. T. Ng from RIKEN in Japan gave a talk on “Quantum Phase Measurement in a Superconducting Circuit”. His goal is to develop a procedure for measuring the relative phase in a superposition of states of the electromagnetic field. Consider a single vibrational mode of the electromagnetic field in some cavity. Take a superposition of a state with 0 photons in this mode and one with N photons in this mode:

|0> + exp(-iθ)|N>

How can we measure the phase θ?

The approach is to let the electromagnetic field interact with a Josephson junction. The dream is to use this as part of a recipe for factoring integers, using the mathematics of so-called "Gauss sums":

• H. T. Ng, Franco Nori, Quantum phase measurement and Gauss sum factorization of large integers in a superconducting circuit.

The math of Gauss sums is an important branch of number theory, but I don’t understand it, so I’ll focus on the physics.

The trick is to use a Josephson junction. A Josephson junction consists of two superconductors separated by a very thin layer of insulating material – 3 nanometers or less – or a possibly thicker layer of conducting but not superconducting material. Electrons can tunnel through this barrier, so current can flow through it. As I mentioned here earlier, the London-Landau-Ginzburg theory says a superconductor is characterized by a ‘macroscopic wave function’ – a complex function with a phase that depends on position and time. This phase is different at the two sides of the barrier, and this phase difference is very important!

If we call this phase difference φ, the basic equations governing a Josephson junction say that:

• The voltage V across the junction is proportional to dφ/dt.

• The current I across the junction is proportional to sin φ.
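To get a feel for these two equations, here's a minimal numerical sketch of the 'AC Josephson effect' they imply: hold the voltage constant, and the first equation makes φ grow linearly in time, so the second makes the current oscillate at the Josephson frequency 2eV/h. I'm using the standard constants of proportionality (dφ/dt = 2eV/ħ, I = I_c sin φ); the bias voltage and critical current are made-up illustrative values.

```python
import numpy as np
from scipy.constants import e, h, hbar

# Josephson relations: dphi/dt = 2 e V / hbar,  I = I_c sin(phi)
V = 1e-6      # constant bias voltage: 1 microvolt (illustrative)
I_c = 1e-6    # critical current: 1 microamp (illustrative)

t = np.linspace(0, 1e-8, 2000)     # 10 nanoseconds of evolution
phi = (2 * e * V / hbar) * t       # phase difference grows linearly at fixed V
I = I_c * np.sin(phi)              # ... so the current oscillates

print(f"Josephson frequency at 1 microvolt: {2*e*V/h/1e6:.1f} MHz")
# -> about 483.6 MHz, i.e. the famous 483.6 MHz-per-microvolt conversion
```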

I’ve never studied Josephson junctions before, but here’s the impression I got from the talk together with some superficial web browsing. Please correct me if I’m wrong…

We can treat the phase difference φ and the voltage V as quantum observables that are canonically conjugate, up to some constant factor. In other words, φ is analogous to ‘position’, while V is analogous to ‘momentum’.

(The phase difference really lives on a circle – in other words, exp(iφ) is what really matters, not φ. So, the voltage should take on discrete evenly spaced values. Right? In math jargon: the dual of the circle group is the group of integers.)

This analogy is useful for understanding the dynamics of the Josephson junction. The junction acts like a quantum particle running around a circle, with the phase difference φ acting like the particle’s position. We can set up our Josephson junction so this particle moves in a potential. The potential is a function on the circle.

Clever experimentalists can make sure this potential has a nice deep local minimum. How? I don’t know. There were some questions from the audience about how the potential arises — what causes it, physically. But I’m still ignorant about this, so I’d appreciate help.

Anyway: if the phase difference φ stays near this local minimum, we can approximate the behavior of the Josephson junction by a harmonic oscillator. The discrete energy levels only become apparent at very low temperatures – less than 1 degree above absolute zero.

In physics, approximations reign supreme! If the potential is deep enough, we simplify the problem further and restrict attention to the two lowest-energy states of our harmonic oscillator, say |g> (the ground state) and |e> (the first excited state). Then the Josephson junction acts like a two-state system… or in modern jargon, a qubit!
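Here's a small numerical sketch of that story, assuming the textbook junction Hamiltonian H = 4E_C n² − E_J cos φ, where E_C is a charging energy, E_J the Josephson energy, and n the integer-valued variable conjugate to the angle φ (the circle's dual, as above). In the n basis, cos φ just shifts n by ±1, so H becomes a matrix we can diagonalize; the unequal spacing of the lowest levels is what lets experimentalists single out two of them as a qubit. The energies are illustrative, not taken from any real device.

```python
import numpy as np

# H = 4 E_C n^2 - E_J cos(phi), in the integer "charge" basis n = -N..N.
# cos(phi) = (e^{i phi} + e^{-i phi})/2 shifts n by -1 or +1.
E_C, E_J = 1.0, 20.0     # illustrative: a deep cosine well (E_J >> E_C)
N = 30                   # basis cutoff
n = np.arange(-N, N + 1)

H = np.diag(4.0 * E_C * n**2)
H -= (E_J / 2) * (np.eye(2*N + 1, k=1) + np.eye(2*N + 1, k=-1))

E0, E1, E2 = np.linalg.eigvalsh(H)[:3]
print("qubit splitting E1 - E0 =", round(E1 - E0, 3))
print("next splitting  E2 - E1 =", round(E2 - E1, 3))
# The two splittings differ: the well is only approximately harmonic,
# so the lowest two levels can be addressed on their own, as a qubit.
```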

I believe this is called a phase qubit. You can learn more here:

• Wikipedia, Phase qubit.

It’s been shown that by adjusting a coupling between two systems of this sort, we can get an iSWAP gate (more precisely, the map shown here is the ‘square root of iSWAP’):

|gg> ↦ |gg>

|ee> ↦ |ee>

|ge> ↦ (|ge> − i|eg>)/√2

|eg> ↦ (|eg> − i|ge>)/√2
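As a sanity check, here's this map as a matrix in the ordered basis {|gg>, |ge>, |eg>, |ee>}. Squaring it sends |ge> to −i|eg> and vice versa, which is why a map like this is called a 'square root of iSWAP' (the sign of the i is just a convention):

```python
import numpy as np

s = 1 / np.sqrt(2)
# Columns are the images of |gg>, |ge>, |eg>, |ee> under the map above.
U = np.array([[1,     0,     0,     0],
              [0,     s,     -1j*s, 0],
              [0,     -1j*s, s,     0],
              [0,     0,     0,     1]])

assert np.allclose(U.conj().T @ U, np.eye(4))  # the map is unitary
print(np.round(U @ U, 12))
# U^2 swaps |ge> and |eg> with a factor of -i: an iSWAP gate.
```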

Also, Andreas Wallraff and coworkers have shown how to couple a superconducting qubit (in their case a ‘Cooper pair box’) to a single vibrational mode of the electromagnetic field in a superconducting resonator!

• A. Wallraff, D. I. Schuster, A. Blais, L. Frunzio, R.-S. Huang, J. Majer, S. Kumar, S. M. Girvin and R. J. Schoelkopf, Circuit quantum electrodynamics: coherent coupling of a single photon to a Cooper pair box, extended version of Nature (London) 431 (2004), 162.

This system can be described by the Jaynes-Cummings model. A single mode of the electromagnetic field is nicely described by a quantum harmonic oscillator, and the Jaynes-Cummings model describes a quantum harmonic oscillator coupled to a qubit.

Back in 1996, Law and Eberly proposed a method to use such a coupling to create arbitrary states of a single mode of the electromagnetic field:

• C. K. Law and J. H. Eberly, Arbitrary control of a quantum electromagnetic field, Phys. Rev. Lett. 76 (1996), 1055–1058.

At U.C. Santa Barbara, Hofheinz and coworkers carried out this idea experimentally:

• Max Hofheinz, H. Wang, M. Ansmann, Radoslaw C. Bialczak, Erik Lucero, M. Neeley, A. D. O’Connell, D. Sank, J. Wenner, John M. Martinis, and A. N. Cleland, Synthesizing arbitrary quantum states in a superconducting resonator, Nature 459 (28 May 2009), 546-549.

They constructed quite general superpositions of multi-photon states of a single mode of the electromagnetic field in a superconducting resonator – a box full of microwave radiation. They used a phase qubit to pump photons into the resonator. Then they measured the state of these photons using another phase qubit, via a technique called “Wigner tomography”. They can measure the state of the qubit with 98% fidelity!

The challenge discussed by Ng is to start with a superposition of Fock states of this form:

|0> + exp(-iθ)|N>

and encode the phase information θ in a qubit. The goal is to do ‘state transfer’: letting the photon field interact with the qubit so that the above state winds up putting the qubit in the state

|g> + exp(-iθ)|e>

He described a strategy for doing this. It gets harder when N gets bigger. Then he spoke about using this to factor integers…
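Here's a rough numerical illustration of what such a state transfer looks like in the easiest case, N = 1, assuming a resonant Jaynes–Cummings interaction H = g(a σ₊ + a† σ₋) in the rotating frame. Waiting a quarter of a vacuum Rabi period, t = π/(2g), sends |1>⊗|g> to −i|0>⊗|e> while leaving |0>⊗|g> alone, so the field's phase θ lands on the qubit up to a known factor of −i. The coupling strength and Fock cutoff are arbitrary choices for the sketch; this is not Ng's actual protocol, just the textbook mechanism behind it.

```python
import numpy as np
from scipy.linalg import expm

n_max, g, theta = 5, 1.0, 0.7   # Fock-space cutoff, coupling, a test phase

a  = np.diag(np.sqrt(np.arange(1, n_max)), k=1)   # photon annihilation operator
sm = np.array([[0, 1], [0, 0]])                   # qubit lowering: |e> -> |g>
# Resonant Jaynes-Cummings interaction in the rotating frame:
H = g * (np.kron(a, sm.conj().T) + np.kron(a.conj().T, sm))

# Start: field in (|0> + e^{-i theta} |1>)/sqrt(2), qubit in |g>.
f0, f1 = np.eye(n_max)[0], np.eye(n_max)[1]
qg, qe = np.eye(2)[0], np.eye(2)[1]
psi0 = np.kron((f0 + np.exp(-1j*theta) * f1) / np.sqrt(2), qg)

# Evolve for a quarter vacuum Rabi period, t = pi / (2 g):
psi = expm(-1j * H * np.pi / (2 * g)) @ psi0

# Hoped-for result: field in |0>, qubit in (|g> - i e^{-i theta} |e>)/sqrt(2).
target = np.kron(f0, (qg - 1j * np.exp(-1j*theta) * qe) / np.sqrt(2))
print("overlap with target state:", abs(target.conj() @ psi))  # ~ 1.0
```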

I’m just beginning to learn about various ways to use superconductors to hold qubits. This looks like a good place to start:

• M. H. Devoret, A. Wallraff, and J. M. Martinis, Superconducting qubits: a short review.

Abstract: Superconducting qubits are solid state electrical circuits fabricated using techniques borrowed from conventional integrated circuits. They are based on the Josephson tunnel junction, the only non-dissipative, strongly non-linear circuit element available at low temperature. In contrast to microscopic entities such as spins or atoms, they tend to be well coupled to other circuits, which make them appealing from the point of view of readout and gate implementation. Very recently, new designs of superconducting qubits based on multi-junction circuits have solved the problem of isolation from unwanted extrinsic electromagnetic perturbations. We discuss in this review how qubit decoherence is affected by the intrinsic noise of the junction and what can be done to improve it.

You’ll note that this post was A Tale of Two Phases: the relative phase in a superposition of quantum states, and the phase difference across a Josephson junction. They’re quite different in character: the former is just a number, while we’re treating the latter as an operator! This may seem weird, so I thought I should emphasize it. I want to ponder the appearance of ‘phase operators’ in quantum optics and elsewhere… there should be some good math in here.


Curriki

8 August, 2010

Textbooks are expensive. They could be almost free, especially in subjects like trigonometry or calculus, which don’t change very fast.

I’m a radical when it comes to the dissemination of knowledge: I want to give as much away for free as I can! So if I weren’t doing Azimuth, I’d probably be working to push for open-source textbooks.

Luckily, someone much better at this sort of thing is already doing that. David Roberts — a mathematician you may have seen at the n-Category Café — recently pointed out this good news:

• Ashlee Vance, $200 Textbook vs. Free — You Do the Math, New York Times, July 31, 2010.

Scott McNealy, cofounder of Sun Microsystems, recently said goodbye to that company and started spearheading a push towards open-source textbooks:

Early this year, Oracle, the database software maker, acquired Sun for $7.4 billion, leaving Mr. McNealy without a job. He has since decided to aim his energy and some money at Curriki, an online hub for free textbooks and other course material that he spearheaded six years ago.

“We are spending $8 billion to $15 billion per year on textbooks” in the United States, Mr. McNealy says. “It seems to me we could put that all online for free.”

The nonprofit Curriki fits into an ever-expanding list of organizations that seek to bring the blunt force of Internet economics to bear on the education market. Even the traditional textbook publishers agree that the days of tweaking a few pages in a book just to sell a new edition are coming to an end.

Whenever it happens, it’ll be none too soon for me!

Let us hope that someday the Azimuth Project becomes part of this trend…


Thermodynamics and Wick Rotation

6 August, 2010

Having two blogs is a bit confusing. My student Mike Stay has some deep puzzles about physics, which I posted over at the n-Category Café:

• Mike Stay, Thermodynamics and Wick Rotation.

But maybe this blog already has some of its own readers, who don’t usually read the n-Café, but are interested in physics? I don’t know.

Anyway: if you’re interested in the mysterious notion of temperature as imaginary time, please click the link and help us figure it out. This should keep us entertained until I’m done with “week300” — the last issue of This Week’s Finds in Mathematical Physics.

No comments here, please — that would get really confusing.


Introduction to Climate Change

4 August, 2010

No, this post is not an introduction to climate change. It’s a question from Alex Hoffnung, who recently got his Ph.D. from U.C. Riverside after working with me on categorified Hecke algebras. Now he’s headed for a postdoc at the University of Ottawa. He’s a cool dude:

And he has a question that I’m sure many mathematicians and other scientists share, so I’ll make it a guest post here:


Have you come across anything like “Intro to Climate Change”? The big problem I have in following the issues surrounding climate change is getting a handle on what the issues are. How hard is it to objectively state some of the more important foundational issues without running into controversy?


How Hot Is Too Hot?

30 July, 2010

How hot is too hot? This interesting paper tackles that question:

• Steven C. Sherwood and Matthew Huber, An adaptability limit to climate change due to heat stress, Proceedings of the National Academy of Sciences, early edition 2010.

Abstract: Despite the uncertainty in future climate-change impacts, it is often assumed that humans would be able to adapt to any possible warming. Here we argue that heat stress imposes a robust upper limit to such adaptation. Peak heat stress, quantified by the wet-bulb temperature TW, is surprisingly similar across diverse climates today. TW never exceeds 31 °C. Any exceedence of 35 °C for extended periods should induce hyperthermia in humans and other mammals, as dissipation of metabolic heat becomes impossible. While this never happens now, it would begin to occur with global-mean warming of about 7 °C, calling the habitability of some regions into question. With 11–12 °C warming, such regions would spread to encompass the majority of the human population as currently distributed. Eventual warmings of 12 °C are possible from fossil fuel burning. One implication is that recent estimates of the costs of unmitigated climate change are too low unless the range of possible warming can somehow be narrowed. Heat stress also may help explain trends in the mammalian fossil record.

Huh? Temperatures going up by 12 degrees Celsius??? Well, this is a worst-case scenario — the sort of thing that’s only likely to kick in if we keep up ‘business as usual’ for a long, long time:

Recent studies have highlighted the possibility of large global warmings in the absence of strong mitigation measures, for example the possibility of over 7 °C of warming this century alone. Warming will not stop in 2100 if emissions continue. Each doubling of carbon dioxide is expected to produce 1.9–4.5 °C of warming at equilibrium, but this is poorly constrained on the high side and according to one new estimate has a 5% chance of exceeding 7.1 °C per doubling. Because combustion of all available fossil fuels could produce 2.75 doublings of CO2 by 2300, even a 4.5 °C sensitivity could eventually produce 12 °C of warming. Degassing of various natural stores of methane and/or CO2 in a warmer climate could increase warming further. Thus while central estimates of business-as-usual warming by 2100 are 3–4 °C, eventual warmings of 10 °C are quite feasible and even 20 °C is theoretically possible.
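(To spell out the arithmetic in that last step: 2.75 doublings × 4.5 °C per doubling ≈ 12.4 °C, which is where the 12 °C figure comes from.)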

A key notion in Sherwood and Huber’s paper is the concept of wet-bulb temperature. Apparently this term has several meanings, but Sherwood and Huber use it to mean “the temperature as measured by covering a standard thermometer bulb with a wetted cloth and fully ventilating it”.

This can be lower than the ‘dry-bulb temperature’, thanks to evaporative cooling. And that’s important, because we sweat to stay cool.

Indeed, this is the big difference between Riverside California (my permanent home) and Singapore (where I’m living now). It’s dry there, and humid here, so my sweat doesn’t evaporate so nicely here — so the wet-bulb temperature tends to be higher. In Riverside air conditioning seems like a bit of an indulgence much of the time, though it’s quite common for shops to let it run blasting until the air is downright frigid. In Singapore I’m afraid I really like it, though when I’m in control, I keep it set at 28 °C — perhaps more for dehumidification than cooling?

Sherwood and Huber write:

A resting human body generates ∼100 W of metabolic heat that (in addition to any absorbed solar heating) must be carried away via a combination of heat conduction, evaporative cooling, and net infrared radiative cooling. Net conductive and evaporative cooling can occur only if an object is warmer than the environmental wet-bulb temperature TW, measured by covering a standard thermometer bulb with a wetted cloth and fully ventilating it. The second law of thermodynamics does not allow an object to lose heat to an environment whose TW exceeds the object’s temperature, no matter how wet or well-ventilated. Infrared radiation under conditions of interest here will usually produce a small additional heating.

[…]

Humans maintain a core body temperature near 37 °C that varies slightly among individuals but does not adapt to local climate. Human skin temperature is strongly regulated at 35 °C or below under normal conditions, because the skin must be cooler than body core in order for metabolic heat to be conducted to the skin. Sustained skin temperatures above 35 °C imply elevated core body temperatures (hyperthermia), which reach lethal values (42–43 °C) for skin temperatures of 37–38 °C even for acclimated and fit individuals. We would thus expect sufficiently long periods of TW > 35 °C to be intolerable.

Now, temperatures of 35 °C (that’s 95 degrees Fahrenheit) are entirely routine during the day in Riverside. Of course, it’s much cooler in my un-air-conditioned home because we leave the windows open when it gets cool at night, and the concrete slab under the floor stays cool, and the house has great insulation. Still, after a few years of getting acclimated, walking around in 35 °C weather seems like no big deal. We only think it’s seriously hot when it reaches 40 °C.

But these are not wet-bulb temperatures: the humidity is usually really low! So what’s the wet-bulb temperature when it’s 35 °C and the relative humidity is, say, 20%? I should look it up… but maybe you know where to look?
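Here's a rough way to estimate it, using one empirical fit for wet-bulb temperature as a function of dry-bulb temperature and relative humidity (Stull's approximation; treat the exact coefficients, and their range of validity, roughly 5–99% relative humidity near sea level, as assumptions). It suggests that 35 °C at 20% relative humidity is a wet-bulb temperature of only about 19 °C, nowhere near the dangerous 35 °C wet-bulb threshold:

```python
import numpy as np

def wet_bulb(T, RH):
    """Approximate wet-bulb temperature in deg C, from air temperature T
    (deg C) and relative humidity RH (percent), using Stull's empirical
    fit (assumed valid near sea level for roughly 5-99% RH)."""
    return (T * np.arctan(0.151977 * np.sqrt(RH + 8.313659))
            + np.arctan(T + RH) - np.arctan(RH - 1.676331)
            + 0.00391838 * RH**1.5 * np.arctan(0.023101 * RH)
            - 4.686035)

print(wet_bulb(35, 20))   # hot, dry Riverside afternoon: about 19 deg C
print(wet_bulb(35, 90))   # same heat, tropical humidity: about 33.5 deg C
```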

If you look on page 2 of Sherwood and Huber’s paper you’ll see three graphs. The top graph is the world today. You’ll see histograms of the average temperature (in black), the average annual maximum temperature (in blue), and the average annual maximum wet-bulb temperature (in red). The interesting thing is how the red curve is sharply peaked between 15 °C and 30 °C, dropping off sharply above 31 °C.

The bottom graph shows an imagined world that’s about 12 °C warmer. It’s too hot.

As the authors note:

The highest instantaneous TW anywhere on Earth today is about 30 °C (with a tiny fraction of values reaching 31 °C). The most-common TW,max is 26–27 °C, only a few degrees lower. Thus, peak potential heat stress is surprisingly similar across many regions on Earth. Even though the hottest temperatures occur in subtropical deserts, relative humidity there is so low that TW,max is no higher than in the deep tropics. Likewise, humid mid-latitude regions such as the Eastern United States, China, southern Brazil, and Argentina experience TW,max during summer heat waves comparable to tropical ones, even though annual mean temperatures are significantly lower. The highest values of T in any given region also tend to coincide with low relative humidity.

But what if it gets a lot hotter?

Could humans survive TW > 35 °C? Periods of net heat storage can be endured, though only for a few hours, and with ample time needed for recovery. Unfortunately, observed extreme-TW events (TW > 26 °C) are long-lived: Adjacent nighttime minima of TW are typically within 2–3 °C of the daytime peak, and adjacent daily maxima are typically within 1 °C. Conditions would thus prove intolerable if the peak TW exceeded, by more than 1–2 °C, the highest value that could be sustained for at least a full day. Furthermore, heat dissipation would be very inefficient unless TW were at least 1–2 °C below skin temperature, so to sustain heat loss without dangerously elevated body temperature would require TW of 34 °C or lower. Taking both of these factors into account, we estimate that the survivability limit for peak six-hourly TW is probably close to 35 °C for humans, though this could be a degree or two off. Similar limits would apply to other mammals but at various thresholds depending on their core body temperature and mass.

I find the statement “Adjacent nighttime minima of TW are typically within 2–3 °C of the daytime peak” quite puzzling. Maybe it’s true in extremely humid climates, but in dry climates it tends to cool down significantly at night. Even here in Singapore there seems to be typically a 5 °C difference between day and night. But maybe it’s less during a heat wave.

The paper does not discuss behavioral adaptations, and that makes it a bit misleading. Even without fossil fuels people can do things like living underground during the day and using windcatchers to bring cool underground air into the house. Here’s a windcatcher that my friend Greg Egan photographed in Yazd during his trip to Iran:

But, of course, this sort of world would support far fewer people than live here now!

Another obvious doubt concerns the distant past, when it was a lot warmer than now. I’m talking about the Paleogene, which ended 23 million years ago. If you haven’t heard of the Paleogene — which is a term that came into play after I learned my geological time periods back in grade school — maybe you’ll be interested to hear that it’s the beginning of the Cenozoic, consisting of the Paleocene, Eocene, and Oligocene. Since then the Earth has been in a cooling phase:

How did mammals manage back then?

Mammals have survived past warm climates; does this contradict our conclusions? The last time temperatures approached values considered here is the Paleogene, when global-mean temperature was perhaps 10 °C and tropical temperature perhaps 5–6 °C warmer than modern, implying TW of up to 36 °C with a most-common TW,max of 32–33 °C. This would still leave room for the survival of mammals in most locations, especially if their core body temperatures were near the high end of those of today’s mammals (near 39 °C). Transient temperature spikes, such as during the PETM or Paleocene-Eocene Thermal Maximum, might imply intolerable conditions over much broader areas, but tropical terrestrial mammalian records are too sparse to directly test this. We thus find no inconsistency with our conclusions, but this should be revisited when more evidence is available.


High Temperature Superconductivity

29 July, 2010

Here at the physics department of the National University of Singapore, Tony Leggett is about to speak on “Cuprate superconductivity: the current state of play”. I’ll take notes and throw them on this blog in a rough form. As always, my goal is to start some interesting conversations. So, go ahead and ask questions, or fill in some more details. Not everything I write here is something I understand!

Certain copper oxide compounds can be superconductive at relatively high temperatures — for example, above the boiling point of liquid nitrogen, 77 kelvin. These compounds consist of checkerboard layers with four oxygen atoms at the corners of each square and one copper in the middle. It’s believed that the electrons move around in these layers in an essentially two-dimensional way. Two-dimensional physics allows for all sorts of exotic possibilities! But nobody is sure how these superconductors work. The topic has been around for about 25 years, but according to Leggett, there’s no one theory that commands the assent of more than 20% of the theorists.

Here’s the outline of Leggett’s talk:

1. What is superconductivity?

2. Brief overview of cuprate structure and properties.

3. What do we know for sure about high-temperature superconductivity (HTS) in the cuprates? That is, what do we know without relying on any microscopic model, since these models are all controversial?

4. Some existing models.

5. Are we asking the right questions?

1. What is superconductivity?

For starters, he asked: what is superconductivity? It involves at least two phenomena that don’t necessarily need to go together, but seem to always go together in practice, and are typically considered together. One: perfect diamagnetism — in the “Meissner effect”, the medium completely excludes magnetic fields. This is an equilibrium effect. Two: persistent currents — this is an incredibly stable metastable effect.

Note the difference: if we start with a ball of stuff in a magnetic field and slowly lower its temperature, once it becomes superconductive it will exclude the magnetic field. There are never any currents present, since we’re in thermodynamic equilibrium at any stage.

On the other hand, a ring of stuff with a current flowing around it is not in thermal equilibrium. It’s just a metastable state.

The London-Landau-Ginzburg theory of superconductivity is a ‘phenomenological’ theory: it doesn’t try to describe the underlying microscopic cause, just what seems to happen. Among other things, it says that a superconductor is characterized by a ‘macroscopic wave function’ ψ(r), a complex function with phase factor e^{iφ(r)}. The current is given by

J(r) ∝ |ψ(r)|² (∇φ(r) − e A(r))

where e is a charge (in fact the charge of an electron pair, as was later realized).

This theory explains the Meissner effect and also persistent currents, and it’s probably good for cuprate superconductors.
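Here's one way to see the Meissner effect come out of this formula. If |ψ| is constant, taking the curl of both sides gives the London equation, which in a one-dimensional geometry says B″(x) = B(x)/λ² inside the superconductor, for some penetration depth λ. So the field dies off exponentially over a distance λ. A quick finite-difference check, with λ set to 1 in arbitrary units:

```python
import numpy as np

# London equation in 1-D: B''(x) = B(x) / lam**2 inside the superconductor,
# with B equal to the external field at the surface and zero deep inside.
lam = 1.0
x = np.linspace(0, 10 * lam, 400)
h = x[1] - x[0]
n = len(x)

A = np.zeros((n, n))
b = np.zeros(n)
A[0, 0], b[0] = 1.0, 1.0      # boundary: B(0) = external field (set to 1)
A[-1, -1], b[-1] = 1.0, 0.0   # boundary: field fully expelled deep inside
for i in range(1, n - 1):     # interior: (B[i-1] - 2B[i] + B[i+1])/h^2 = B[i]/lam^2
    A[i, i-1] = A[i, i+1] = 1.0 / h**2
    A[i, i] = -2.0 / h**2 - 1.0 / lam**2

B = np.linalg.solve(A, b)
print(np.allclose(B, np.exp(-x / lam), atol=1e-3))  # True: exponential decay
```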

2. The structure and behavior of cuprate superconductors

The structure of a typical cuprate: there are n planes made of CuO2 and other atoms (typically alkaline earth), and then, between these, a material that serves as a ‘charge reservoir’.

He showed us the phase diagram for a typical cuprate as a function of temperature and the ‘doping’: that is, the number of extra ‘holes’ – missing electrons – per CuO2. No cuprate has yet been shown to have this phase diagram in its entirety! But different ones have been seen to have different parts, so we may guess the story is like this:

There’s an antiferromagnetic insulator phase at low doping. At higher doping there’s a strange ‘pseudogap’ phase. Nobody knows if this ‘pseudogap’ phase extends to zero temperature. At still higher dopings we see a superconductive phase at low temperature and a ‘strange metal’ phase above some temperature. This temperature reaches a max at a doping of about 0.16 — a more or less universal figure — but the value of this maximum temperature depends a lot on the material. At higher dopings the superconductive phase goes away.

There are over 200 superconducting cuprates, but there are some cuprates that can never be made superconducting — those with multilayers spaced by strontium or barium.

Both ‘normal’ and superconducting states are highly anisotropic. But the ‘normal’ states are actually very anomalous — hence the term ‘strange metal’. The temperature-dependence of various properties are very unusual. By comparison the behaviour of the superconducting phase is less strange!

Most (but not all) properties are approximately consistent with the hypothesis that at a given doping, the properties are universal.

The superconducting phase is highly sensitive to doping and pressure.

3. What do we know for sure about superconductivity in the cuprates?

There’s strong evidence that cuprate superconductivity is due to the formation of Cooper pairs, just as for ordinary superconductors.

The ‘universality’ of high-temperature superconductivity in cuprates with very different chemical compositions suggests that the main actors are the electrons in the CuO2 planes. Most researchers believe this.

There are a lot of NMR experiments suggesting that the spins of the electrons in the Cooper pairs are in the ‘singlet’ state:

up ⊗ down − down ⊗ up

Absence of substantial far-infrared absorption above the gap edge suggests that pairs are formed from time-reversed states (despite the work of Tahir–Kheli).

The ‘radius’ of the Cooper pairs is very small: only 3-10 angstroms, instead of thousands as in an ordinary superconductor!

In ordinary superconductor the wave function of a Cooper pair is in an s state (spherically symmetric state). In a cuprate superconductor it seems to have the symmetry of x^2 - y^2: that is, a d state that’s odd under 90 degree rotation in the plane of the cuprate (the x y plane), but even under reflection in either the x or y axis.
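To make that symmetry concrete: on the square lattice, the d_{x²−y²} pairing is usually modelled by a gap function Δ(k) ∝ cos k_x − cos k_y (that specific form is my illustrative assumption, not something from the talk). A two-line check confirms it's odd under a 90 degree rotation and even under the reflections:

```python
import numpy as np

def gap(kx, ky):
    # d_{x^2 - y^2} form factor on the square lattice (illustrative choice)
    return np.cos(kx) - np.cos(ky)

kx, ky = 0.3, 1.1   # an arbitrary point in the Brillouin zone
print(np.isclose(gap(-ky, kx), -gap(kx, ky)))  # odd under 90-degree rotation
print(np.isclose(gap(-kx, ky),  gap(kx, ky)))  # even under reflection in the y axis
print(np.isclose(gap(kx, -ky),  gap(kx, ky)))  # even under reflection in the x axis
```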

There’s good evidence that the pairs in different multilayers are effectively independent (despite the Anderson Interlayer Tunnelling Theory).

There isn’t a substantial dependence on the isotopes used to make the stuff, so it’s believed that phonons don’t play a major role.

At least 95% of the literature makes all of the above assumptions and a lot more. Most models are specific Hamiltonians that obey all these assumptions.

4. Models of high-temperature superconductivity in cuprates

How will we know when we have a ‘satisfactory’ theory? We should either be able to:

A) give a blueprint for building a room-temperature superconductor using cuprates, or

B) assert with confidence that we will never be able to do this, or at least

C) say exactly why we cannot do either A) or B).

No model can yet do this!

Here are some classes of models, from conservative to exotic:

1. Phonon-induced attraction – the good old BCS mechanism, which explains ordinary superconductors. These models have lots of problems when applied to cuprates, e.g. the fact that we don’t see an isotope effect.

2. Attraction induced by the exchange of some other boson: spin fluctuations, excitons, fluctuations of ‘stripes’ or still more exotic objects.

3. Theories starting from the single-band Hubbard model. These include theories based on the postulate of ‘exotic ordering’ in the ground state, e.g. charge-spin separation.

5. What are the right questions to ask?

The energy is the sum of 3 terms: the kinetic energy, the potential energy of the interaction between the conduction electrons and the static lattice, and the potential energy of the interaction of the conduction electrons among each other (both intra-plane and inter-plane). One of these must go down when Cooper pairs form! The third term is the obvious suspect.

Then Leggett wrote a lot of equations which I cannot copy fast enough… and concluded that there are two basic possibilities, “Eliashberg” and “overscreening”. The first is that electrons with opposite momentum and spin attract each other in the normal phase. The second is that there’s no attraction required in the normal phase, but the interaction is modified by pairing: pairing can cause “screening” of the Coulomb repulsion. Which one is it?

Another good question: Why does the critical temperature depend on the number of layers in a multilayer? There are various possible explanations. The “boring” explanation is that superconductivity is a single-plane phenomenon, but multi-layering affects properties of individual planes. The “interesting” explanations say that inter-plane effects are essential: for example, as in the Anderson inter-layer tunnelling model, or due to a Kosterlitz-Thouless effect, or due to inter-plane Coulomb interactions.

Leggett clearly likes the last possibility, with the energy saving coming from increased screening, and taking place predominantly at long wavelengths and mid-infrared frequencies. This gives a natural explanation of why all known high-temperature superconductors are strongly two-dimensional, and it explains many more of their properties, too. Moreover it’s unambiguously falsifiable by electron energy-loss spectroscopy experiments. He has proposed an experimental test, which will be carried out soon.

He bets there’s at least a 50% chance that some of the younger members of the audience will live to see room-temperature superconductors.


Overfishing

28 July, 2010

While climate change is the 800-pound gorilla of ecological issues, I don’t want it to completely dominate the conversation here. There are a lot of other issues to think about. For example, overfishing!

My friend the mathematician John Terilla says that ever since we had dinner together at a friend’s house, he can’t help thinking about overfishing — especially when he eats fish. I’m afraid I have that effect on people these days.

(In case you’re wondering, we didn’t have fish for dinner.)

Anyway, John just pointed out this book review:

• Elizabeth Kolbert, The scales fall: is there any hope for our overfished oceans?, New Yorker, August 2, 2010.

It’s short and very readable. It starts out talking about tuna. In the last 40 years, the numbers of bluefin tuna have dropped by roughly 80 percent. A big part of the problem is ICCAT, which either means the International Commission for the Conservation of Atlantic Tunas, or else the International Conspiracy to Catch All Tunas, depending on whom you ask. In 2008, ICCAT scientists recommended that the bluefin catch in the eastern Atlantic and the Mediterranean be limited to 8500-15,000 tons. ICCAT went ahead and adopted a quota of 22,000 tons! So it’s no surprise that we’re in trouble now.

But it’s not just tuna. Look at what happened to cod off the east coast of Newfoundland:



In fact, there’s evidence that the population of all kinds of big predatory fish has dropped 90% since 1950:

• Ransom A. Myers and Boris Worm, Rapid worldwide depletion of predatory fish communities, Nature 423 (15 May 2003).

Of course you’d expect someone with the name “Worm” to be against fishing, but Myers agrees: “From giant blue marlin to mighty bluefin tuna, and from tropical groupers to Antarctic cod, industrial fishing has scoured the global ocean. There is no blue frontier left. Since 1950, with the onset of industrialized fisheries, we have rapidly reduced the resource base to less than 10 percent—not just in some areas, not just for some stocks, but for entire communities of these large fish species from the tropics to the poles.”

In fact, we’re “fishing down the food chain”: now that the big fish are gone, we’re going after larger and larger numbers of smaller and smaller species, with former “trash fish” now available at your local market. It’s a classic tragedy of the commons: with nobody able to own fish, everyone is motivated to break agreements to limit fishing. Here’s a case where I think some intelligent applications of economics and game theory could work wonders. But who has the muscle to forge and enforce agreements? Clearly ICCAT and other existing bodies do not!

But there’s still hope. For starters, learn which fish to avoid eating. And think about this:

It is almost as though we use our military to fight the animals in the ocean. We are gradually winning this war to exterminate them. And to see this destruction happen, for nothing really – for no reason – that is a bit frustrating. Strangely enough, these effects are all reversible, all the animals that have disappeared would reappear, all the animals that were small would grow, all the relationships that you can’t see any more would re-establish themselves, and the system would re-emerge. So that’s one thing to be optimistic about. The oceans, much more so than the land, are reversible… – Daniel Pauly

