Trends in Quantum Information Processing

16 August, 2010

This week the scientific advisory board of the CQT is coming to town, so we’re having lots and lots of talks.

Right now Michele Mosca, deputy director of the Institute for Quantum Computing in Waterloo, Canada, is speaking about “Trends in Quantum Information Processing” from a computer science / mathematics perspective.

(Why mathematics? He says he will always consider himself a mathematician…)

Alice and Bob

Mosca began by noting some of the cultural / linguistic differences that become important in large interdisciplinary collaborations. For example, computer scientists like to talk about people named Alice and Bob playing bizarre games. These games are usually an attempt to distill some aspect of a hard problem into a simple story.

Impossibility

Mosca also made a remark on “impossibility”. Namely: proofs that things are impossible, often called “no-go theorems”, are very important, but we shouldn’t overinterpret them. Just because one approach to doing something doesn’t work, doesn’t mean we can’t do it! From an optimistic viewpoint, no-go theorems simply tell us what assumptions we need to get around. There is, however, a difference between optimism and insanity.

For example: “We can’t build a quantum computer because of decoherence…”

Knill, Laflamme and Milburn initially set out to prove that linear optics alone would not give universal quantum computation… but then they discovered that it could.

• E. Knill, R. Laflamme, and G. J. Milburn, A scheme for efficient quantum computation with linear optics, Nature 409 (2001), 46. Also see their paper Efficient linear optics quantum computation on the arXiv.

Another example: “Quantum black-box algorithms give at most a polynomial speedup for total functions…”

If this result — proved by whom? — had been discovered before Shor’s algorithm, it might have discouraged Shor from finding that algorithm. But his algorithm involves a partial function, so the no-go theorem doesn’t apply.

Yet another: “Nuclear magnetic resonance quantum computing is not scalable…”

So far each argument for this can be gotten around, though nobody knows how to get around all of them simultaneously. Attempts to do this have led to important discoveries.

From theory to practice

Next, Mosca showed a kind of flow chart. Abstract algorithms spawn various models of quantum computation: circuit models, measurement-based models, adiabatic models, topological models, continuous-time models, cellular automaton models, and so on. These spawn specific architectures for (so far hypothetical) quantum computers; a lot of work still needs to be done to develop fault-tolerant architectures. Then come specific physical realizations that might implement these architectures: involving trapped ions, photon qubits, superconducting circuit qubits, spin qubits and so on. We’re just at the beginning of a long path.

A diversity of quantum algorithms

There are still lots of people who say there are just two interesting quantum algorithms:

Shor’s algorithm (for factoring integers in a time that’s polynomial in their number of digits), and

Grover’s algorithm (for searching a list in a time proportional to the square root of the number of items in the list).

Mosca said:

The next time you hear someone say this, punch ’em in the head!
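Still, Grover’s square-root speedup is easy to see for yourself. Here’s a toy simulation of his algorithm (my own sketch, not something from the talk), using nothing but plain linear algebra:

```python
# Toy simulation of Grover's algorithm on 6 qubits (64 list items).
# We repeatedly apply the oracle reflection and "inversion about the mean";
# after about (pi/4) * sqrt(64) = 6 rounds the marked item dominates.
import numpy as np

n_qubits = 6
N = 2 ** n_qubits
marked = 42                                # the item we're searching for

psi = np.full(N, 1 / np.sqrt(N))           # uniform superposition
oracle = np.ones(N)
oracle[marked] = -1                        # phase-flip the marked item

for _ in range(round(np.pi / 4 * np.sqrt(N))):
    psi = oracle * psi                     # oracle reflection
    psi = 2 * psi.mean() - psi             # diffusion: inversion about the mean

print("probability of measuring the marked item:", psi[marked] ** 2)
# prints about 0.997, after only 6 queries instead of ~32 on average classically
```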

For examples of many different quantum algorithms, see Stephen Jordan’s ‘zoo’, and this paper:

• Michele Mosca, Quantum algorithms.

Then Mosca discussed three big trends…

1. Working with untrusted quantum apparatus

This trend started with:

• Artur Ekert, Quantum cryptography based on Bell’s theorem,
Phys. Rev. Lett. 67 (1991 Aug 5), 661-663.

Then:

• Dominic Mayers and Andrew Yao, Quantum cryptography with imperfect apparatus.

And then much more, thanks in large part to people at the CQT! The idea is that if you have a quantum system whose behavior you don’t entirely trust, you can still do useful stuff with it. This idea is related to the subject of multi-prover interactive proofs, though that subject is studied by a somewhat different community.

This year some of these ideas got implemented in an actual experiment:

• S. Pironio, A. Acin, S. Massar, A. Boyer de la Giroday, D. N. Matsukevich, P. Maunz, S. Olmschenk, D. Hayes, L. Luo, T. A. Manning, and C. Monroe, Random numbers certified by Bell’s theorem, Nature 464 (2010), 1021.

(Yay! Someone who had the guts to put a paper for Nature on the arXiv!)

2. Ideas from topological quantum computing

The seeds here were planted in the late 1990s by:

• Michael Freedman, P/NP, and the quantum field computer,
Proc. Natl. Acad. Sci. USA 95 (1998 January 6), 98–101.

This suggested that a topological quantum computer could efficiently compute the Jones polynomial at certain points — a #P-hard problem. That would mean quantum computers can solve NP problems in polynomial time! But it turned out that limitations in precision prevent this. With a great sigh of relief, computer scientists decided they didn’t need to learn topological quantum field theory… but then came:

• A. Kitaev, Fault-tolerant quantum computation by anyons, Ann. Phys. 303 (2003), 2-30.

This sort of computer may or may not ever be built, but it’s very interesting either way. And Raussendorf, Harrington and Goyal later used ideas from topological quantum computing to build fault-tolerance into another approach to quantum computing, based on “cluster states”:

• Robert Raussendorf, Jim Harrington, and Kovid Goyal, Topological fault-tolerance in cluster state quantum computation, 2007.

Subsequently there’s been a huge amount of work along these lines. (Physics junkies should check out the Majorana fermion codes of Bravyi, Leemhuis and Terhal.)

3. Semidefinite programming

The seeds here were planted about 10 years ago by Kitaev and Watrous. There’s been a wide range of applications since then. If you search quant-ph for abstracts containing the buzzword “semidefinite” you’ll get almost a hundred hits!

So what is semidefinite programming? It’s a relative of linear programming, an optimization method that’s widely used in microeconomics and the management of production, transportation and the like. I don’t understand it deeply, but since it optimizes over positive semidefinite matrices instead of vectors with nonnegative entries, I guess it’s like a quantum version of linear programming!

I can parrot the definition:

A semidefinite program is a triple (\Phi, a,b). Here \Phi is a linear map that sends linear operators on some Hilbert space X to linear operators on some Hilbert space Y. Such a linear map is called a superoperator, since it operates on operators! We assume \Phi takes self-adjoint operators to self-adjoint operators. What about a and b? They are self-adjoint operators on X and Y, respectively.

This data gives us two problems:

Primal problem: find x, a positive semidefinite operator on X that minimizes \langle a,x \rangle subject to the constraint \Phi(x) \ge b.

Dual problem: find y, a positive semidefinite operator on Y that maximizes \langle b, y\rangle subject to the constraint \Phi^*(y) \le a.

This should remind you of duality in linear programming. In linear programming, the dual of the dual problem is the original ‘primal’ problem! Also, if you know the solution to either problem, you know the solution to both. Is this stuff true for semidefinite programming too?
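To make this concrete, here’s a toy version of the primal problem in code. This is my own sketch, not anything from the talk: it uses the cvxpy library, takes \Phi to be the identity superoperator, and reads \langle a, x \rangle as the Hilbert-Schmidt inner product {\rm tr}(a x).

```python
# A toy semidefinite program: minimize <a,x> = tr(a x) over positive
# semidefinite x, subject to Phi(x) >= b with Phi taken to be the identity.
import numpy as np
import cvxpy as cp

n = 3
rng = np.random.default_rng(0)

c = rng.standard_normal((n, n))
a = c.T @ c + np.eye(n)                    # positive definite, so the problem is bounded
m = rng.standard_normal((n, n))
b = (m + m.T) / 2                          # an arbitrary self-adjoint operator

x = cp.Variable((n, n), PSD=True)          # the positive semidefinite unknown

objective = cp.Minimize(cp.trace(a @ x))   # <a,x>, the Hilbert-Schmidt pairing
constraints = [x - b >> 0]                 # Phi(x) >= b in the semidefinite order

problem = cp.Problem(objective, constraints)
problem.solve()
print("primal optimal value:", problem.value)
```

(As far as I know, the answers to my questions are: yes, the dual of the dual is again the primal; but the two optimal values are only guaranteed to agree under a mild regularity condition called Slater’s condition, whereas in linear programming no such condition is needed.)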

Example: given a bunch of density matrices, find an optimal bunch of measurements that distinguish them.

Example: ‘Quantum query algorithms’ are a generalization of Grover’s search algorithm. Reichardt used semidefinite programming to show there is always an optimal algorithm that uses just 2 reflections:

• Ben W. Reichardt, Reflections for quantum query algorithms.

There are lots of other trends, of course! These are just a few. Mosca apologized to anyone whose important work wasn’t mentioned, and I apologize even more, since I left out a lot of what he said….


The Geometry of Quantum Phase Transitions

13 August, 2010

Today at the CQT, Paolo Zanardi from the University of Southern California is giving a talk on “Quantum Fidelity and the Geometry of Quantum Criticality”. Here are my rough notes…

The motto from the early days of quantum information theory was “Information is physical.” You need to care about the physical medium in which information is encoded. But we can also turn it around: “Physics is informational”.

In a “classical phase transition”, thermal fluctuations play a crucial role. At zero temperature these go away, but there can still be different phases depending on other parameters. A transition between phases at zero temperature is called a quantum phase transition. One way to detect a quantum phase transition is simply to notice that the ground state depends very sensitively on the parameters near such a point. We can do this mathematically using a precise way of measuring distances between states: the Fubini-Study metric, which I’ll define below.

Suppose that M is a manifold parametrizing Hamiltonians for a quantum system, so each point x \in M gives a self-adjoint operator H(x) on some finite-dimensional Hilbert space, say \mathbb{C}^n. Of course in the thermodynamic limit (the limit of infinite volume) we expect our quantum system to be described by an infinite-dimensional Hilbert space, but let’s start out with a finite-dimensional one.

Furthermore, let’s suppose each Hamiltonian has a unique ground state, or at least a chosen ground state, say \psi(x). Here x does not indicate a point in space: it’s a point in M, our space of Hamiltonians!

This ground state \psi(x) is really defined only up to phase, so we should think of it as giving an element of the projective space \mathbb{C P}^{n-1}. There’s a god-given metric on projective space, called the Fubini-Study metric. Since we have a map from M to projective space, sending each point x to the state \psi(x) (modulo phase), we can pull back the Fubini-Study metric via this map to get a metric on M.

But, the resulting metric may not be smooth, because \psi(x) may not depend smoothly on x. The metric may have singularities at certain points, especially after we take the thermodynamic limit. We can think of these singular points as being ‘phase transitions’.

If what I said in the last two paragraphs makes no sense, perhaps a version in something more like plain English will be more useful. We’ve got a quantum system depending on some parameters, and there may be points where the ground state of this quantum system depends in a very drastic way on slight changes in the parameters.

But we can also make the math a bit more explicit. What’s the Fubini-Study metric? Given two unit vectors in a Hilbert space, say \psi and \psi', their Fubini-Study distance is just the angle between them:

d(\psi, \psi') = \cos^{-1}|\langle \psi, \psi' \rangle|

This is an honest Riemannian metric on the projective version of the Hilbert space. And in case you’re wondering about the term ‘quantum fidelity’ in the title of Zanardi’s talk, the quantity

|\langle \psi, \psi' \rangle|

is called the fidelity. The fidelity ranges between 0 and 1, and it’s 1 when two unit vectors are the same up to a phase. To convert this into a distance we take the arc-cosine.

When we pull the Fubini-Study metric back to M, we get a Riemannian metric away from the singular points, and in local coordinates this metric is given by the following cool formula:

g_{\mu \nu} = {\rm Re}\left(\langle \partial_\mu \psi, \partial_\nu \psi \rangle - \langle \partial_\mu \psi, \psi \rangle \langle \psi, \partial_\nu \psi \rangle \right)

where \partial_\mu \psi is the derivative of the ground state \psi(x) as we move x in the \mu-th coordinate direction.
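Where does this formula come from? Here’s the quick version, a standard calculation rather than something from the talk. Normalization \langle \psi, \psi \rangle = 1 forces \langle \psi, \partial_\mu \psi \rangle to be purely imaginary, and expanding the fidelity between ground states at nearby parameter values to second order gives

|\langle \psi(x), \psi(x + dx) \rangle| = 1 - \frac{1}{2} g_{\mu \nu} \, dx^\mu \, dx^\nu + \cdots

Since \cos^{-1}(1 - \epsilon) \approx \sqrt{2 \epsilon}, the squared Fubini-Study distance between nearby ground states is g_{\mu \nu} dx^\mu dx^\nu to lowest order, just as a metric should give.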

But Michael Berry came up with an even cooler formula for g_{\mu \nu}. Let’s call the eigenstates of the Hamiltonian \psi_n(x), so that

H(x) \psi_n(x) = E_n(x) \, \psi_n(x)

And let’s rename the ground state \psi_0(x), so

\psi(x) = \psi_0(x)

and

H(x) \psi_0(x) = E_0(x) \, \psi_0(x)

Then a calculation like those you’d see in first-order perturbation theory shows that

g_{\mu \nu} = {\rm Re} \sum_{n \neq 0} \frac{\langle \psi_0 , \partial_\mu H \; \psi_n \rangle \, \langle \partial_\nu H \; \psi_n, \psi_0 \rangle}{(E_n - E_0)^2}

This is nice because it shows g_{\mu \nu} is likely to become singular at points where the ground state becomes degenerate, i.e. where two different states both have minimal energy, so some difference E_n - E_0 becomes zero.

To illustrate these ideas, Zanardi did an example: the XY model in an external magnetic field. This is a ‘spin chain’: a bunch of spin-1/2 particles in a row, each interacting with their nearest neighbors. So, for a chain of length L, the Hilbert space is a tensor product of L copies of \mathbb{C}^2:

\mathbb{C}^2 \otimes \cdots \otimes \mathbb{C}^2

The Hamiltonian of the XY model depends on two real parameters \lambda and \gamma. The parameter \lambda describes a magnetic field pointing in the z direction:

H(\lambda, \gamma) = \sum_i \left(\frac{1+\gamma}{2}\right) \, \sigma^x_i \sigma^x_{i+1} \; + \; \left(\frac{1-\gamma}{2}\right) \, \sigma^y_i \sigma^y_{i+1} \; + \; \lambda \sigma_i^z

where the \sigma’s are the ever-popular Pauli matrices. The first term makes the x components of the spins of neighboring particles want to point in opposite directions when \gamma is big. The second term makes the y components of neighboring spins want to point in the same direction when \gamma is big. And the third term makes all the spins want to point up (resp. down) in the z direction when \lambda is big and negative (resp. positive).

What’s our poor spin chain to do, faced with such competing directives? At zero temperature it seeks the state of lowest energy. When \lambda is less than -1 all the spins get polarized in the spin-up state; when it’s bigger than 1 they all get polarized in the spin-down state. For \lambda in between, there is also some sort of phase transition at \gamma = 0. What’s this like? Some sort of transition between ferromagnetic and antiferromagnetic?

We can use a transformation (the Jordan-Wigner transformation) to express this as a fermionic system and solve it exactly. Physicists love exactly solvable systems, so there have been thousands of papers about the XY model. In the thermodynamic limit (L \to +\infty) the ground state can be computed explicitly, so we can explicitly work out the metric d on the parameter space that has \lambda, \gamma as coordinates!

I will not give the formulas — Zanardi did, but they’re too scary for me. I’ll skip straight to the punchline. Away from phase transitions, we see that for nearby values of parameters, say

x = (\lambda, \gamma)

and

x' = (\lambda', \gamma')

the ground states have

|\langle \psi(x), \psi(x') \rangle| \sim \exp(-c L)

for some constant c. That’s not surprising: even though the two ground states are locally very similar, the overall inner product is roughly a product of L single-site overlaps, each slightly less than 1, so it goes like \exp(-c L).

But at phase transitions, the inner product |\langle \psi(x), \psi(x') \rangle| decays even faster with L:

|\langle \psi(x), \psi(x') \rangle| \sim \exp(-c' L^2)

for some other constant c'.

This is called enhanced orthogonalization since it means the ground states at slightly different values of our parameters get close to orthogonal even faster as L grows. Or in other words: their distance as measured by the metric g_{\mu \nu} grows even faster.

This sort of phase transition is an example of a “quantum phase transition”. Note: we’re detecting this phase transition not by looking at the ground state expectation value of a given observable, but by how the ground state itself changes drastically as we change the parameters governing the Hamiltonian.

The exponent of L here — namely the 2 in L^2 — is ‘universal’: i.e., it’s robust with respect to changes in the parameters and even the detailed form of the Hamiltonian.
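If you want to see this orthogonalization with your own eyes, here’s a small numerical sketch (mine, not Zanardi’s): build the XY Hamiltonian above for a short chain by brute-force exact diagonalization, and watch the ground-state fidelity between nearby values of \lambda dip near the critical point \lambda = 1.

```python
# Ground-state fidelity for the XY chain, by exact diagonalization.
# Small L only: the Hilbert space dimension is 2^L.
import numpy as np
from functools import reduce

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
id2 = np.eye(2, dtype=complex)

def op_at(op, i, L):
    """Tensor product acting with `op` at site i and identities elsewhere."""
    return reduce(np.kron, [op if j == i else id2 for j in range(L)])

def xy_hamiltonian(lam, gamma, L):
    H = np.zeros((2**L, 2**L), dtype=complex)
    for i in range(L - 1):                      # open boundary conditions
        H += 0.5 * (1 + gamma) * op_at(sx, i, L) @ op_at(sx, i + 1, L)
        H += 0.5 * (1 - gamma) * op_at(sy, i, L) @ op_at(sy, i + 1, L)
    for i in range(L):
        H += lam * op_at(sz, i, L)
    return H

def ground_state(H):
    vals, vecs = np.linalg.eigh(H)
    return vecs[:, 0]

L, gamma, d = 8, 1.0, 0.01
for lam in [0.5, 0.9, 1.0, 1.1, 1.5]:
    f = abs(np.vdot(ground_state(xy_hamiltonian(lam, gamma, L)),
                    ground_state(xy_hamiltonian(lam + d, gamma, L))))
    print(f"lambda = {lam:.2f}   fidelity = {f:.6f}")
```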

Zanardi concluded with an argument showing that not every quantum phase transition can be detected by enhanced orthogonalization. For more details, try:

• Silvano Garnerone, N. Tobias Jacobson, Stephan Haas and Paolo Zanardi, Fidelity approach to the disordered quantum XY model.

• Silvano Garnerone, N. Tobias Jacobson, Stephan Haas and Paolo Zanardi, Scaling of the fidelity susceptibility in a disordered quantum spin chain.

For more on the basic concepts, start here:

• Lorenzo Campos Venuti and Paolo Zanardi, Quantum critical scaling of the geometric tensors, Phys. Rev. Lett. 99 (2007), 095701.

As a final little footnote, I should add that Paolo Zanardi said the metric g_{\mu \nu} defined as above is analogous to the Fisher information metric. So, David Corfield should like this…


Quantum Phase Measurement Via Flux Qubits

11 August, 2010

Yesterday at the Centre for Quantum Technologies, H. T. Ng from RIKEN in Japan gave a talk on “Quantum Phase Measurement in a Superconducting Circuit”. His goal is to develop a procedure for measuring the relative phase in a superposition of states of the electromagnetic field. Consider a single vibrational mode of the electromagnetic field in some cavity. Take a superposition of a state with 0 photons in this mode and one with N photons in this mode:

|0\rangle + e^{-i\theta}|N\rangle

How can we measure the phase θ?

The approach is to let the electromagnetic field interact with a Josephson junction. The dream is to use this as part of a recipe for factoring integers, using the mathematics of so-called "Gauss sums":

• H. T. Ng, Franco Nori, Quantum phase measurement and Gauss sum factorization of large integers in a superconducting circuit.

The math of Gauss sums is an important branch of number theory, but I don’t understand it, so I’ll focus on the physics.

The trick is to use a Josephson junction. A Josephson junction consists of two superconductors separated by a very thin layer of insulating material – 3 nanometers or less – or a possibly thicker layer of conducting but not superconducting material. Electrons can tunnel through this barrier, so current can flow through it. As I mentioned here earlier, the London-Landau-Ginzburg theory says a superconductor is characterized by a ‘macroscopic wave function’ – a complex function with a phase that depends on position and time. This phase is different at the two sides of the barrier, and this phase difference is very important!

If we call this phase difference φ, the basic equations governing a Josephson junction say that:

• The voltage V across the junction is proportional to dφ/dt.

• The current I across the junction is proportional to sin φ.
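For the record, here are those relations with the constants put in; this is the standard textbook form, not something from the talk:

V = \frac{\hbar}{2e} \, \frac{d\phi}{dt}, \qquad I = I_c \sin \phi

where I_c is the junction’s critical current, and the 2e appears because the charge carriers are Cooper pairs.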

I’ve never studied Josephson junctions before, but here’s the impression I got from the talk together with some superficial web browsing. Please correct me if I’m wrong…

We can treat the phase difference φ and the voltage V as quantum observables that are canonically conjugate, up to some constant factor. In other words, φ is analogous to ‘position’, while V is analogous to ‘momentum’.

(The phase difference really lives on a circle – in other words, exp(iφ) is what really matters, not φ. So, the voltage should take on discrete evenly spaced values. Right? In math jargon: the dual of the circle group is the group of integers.)

This analogy is useful for understanding the dynamics of the Josephson junction. The junction acts like a quantum particle running around a circle, with the phase difference φ acting like the particle’s position. We can set up our Josephson junction so this particle moves in a potential. The potential is a function on the circle.

Clever experimentalists can make sure this potential has a nice deep local minimum. How? I don’t know. There were some questions from the audience about how the potential arises — what causes it, physically. But I’m still ignorant about this, so I’d appreciate help.

Anyway: if the phase difference φ stays near this local minimum, we can approximate the behavior of the Josephson junction by a harmonic oscillator. The discrete energy levels only become apparent at very low temperatures – less than 1 degree above absolute zero.
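To make ‘approximate by a harmonic oscillator’ concrete, here is the standard textbook account (my gloss, not a detail from the talk): the Josephson coupling contributes a potential

U(\phi) = -E_J \cos \phi

and expanding around the minimum \phi = 0 gives U(\phi) \approx -E_J + \frac{1}{2} E_J \phi^2, a harmonic oscillator whose frequency is the so-called plasma frequency \omega_p = \sqrt{8 E_J E_C}/\hbar, with E_C the charging energy of the junction.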

In physics, approximations reign supreme! If the potential is deep enough, we simplify the problem further and restrict attention to the two lowest-energy states of our harmonic oscillator, say |g\rangle (the ground state) and |e\rangle (the first excited state). Then the Josephson junction acts like a two-state system… or in modern jargon, a qubit!

I believe this is called a phase qubit. You can learn more here:

• Wikipedia, Phase qubit.

It’s been shown that by adjusting a coupling between two systems of this sort, we can get a square root of the iSWAP gate:

|gg\rangle \mapsto |gg\rangle

|ee\rangle \mapsto |ee\rangle

|ge\rangle \mapsto \frac{1}{\sqrt{2}} \left(|ge\rangle -i|eg\rangle\right)

|eg\rangle \mapsto \frac{1}{\sqrt{2}} \left(|eg\rangle -i|ge\rangle \right)

Also, Andreas Wallraff and coworkers have shown how to couple a phase qubit to a single vibrational mode of the electromagnetic field in a superconducting resonator!

• A. Wallraff, D. I. Schuster, A. Blais, L. Frunzio, R.-S. Huang, J. Majer, S. Kumar, S. M. Girvin and R. J. Schoelkopf, Circuit quantum electrodynamics: coherent coupling of a single photon to a Cooper pair box, extended version of Nature (London) 431 (2004), 162.

This system can be described by the Jaynes-Cummings model. A single mode of the electromagnetic field is nicely described by a quantum harmonic oscillator, and the Jaynes-Cummings model describes a quantum harmonic oscillator coupled to a qubit.

Back in 1996, Law and Eberly proposed a method to use such a coupling to create arbitrary states of a single mode of the electromagnetic field:

• C. K. Law and J. H. Eberly, Arbitrary control of a quantum electromagnetic field, Phys. Rev. Lett. 76 (1996), 1055–1058.

At U.C. Santa Barbara, Hofheinz and coworkers carried out this idea experimentally:

• Max Hofheinz, H. Wang, M. Ansmann, Radoslaw C. Bialczak, Erik Lucero, M. Neeley, A. D. O’Connell, D. Sank, J. Wenner, John M. Martinis, and A. N. Cleland, Synthesizing arbitrary quantum states in a superconducting resonator, Nature 459 (28 May 2009), 546-549.

They constructed quite general superpositions of multi-photon states of a single mode of the electromagnetic field in a superconducting resonator – a box full of microwave radiation. They used a phase qubit to pump photons into the resonator. Then they measured the state of these photons using another phase qubit, via a technique called “Wigner tomography”. They can measure the state of the qubit with 98% fidelity!

The challenge discussed by Ng is to start with a superposition of Fock states of this form:

|0\rangle + e^{-i\theta}|N\rangle

and encode the phase information \theta onto a qubit. The goal is to do ‘state transfer’, letting the photon field interact with the qubit so that the above state winds up making the qubit have the state

|g\rangle + e^{-i\theta}|e\rangle

He described a strategy for doing this. It gets harder when N gets bigger. Then he spoke about using this to factor integers…
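For the simplest case N = 1, ‘state transfer’ is just half a vacuum Rabi oscillation in the Jaynes-Cummings model mentioned above. Here’s a toy simulation of that special case (my own sketch using the QuTiP library, not Ng’s actual scheme):

```python
# Phase transfer from a cavity to a qubit for N = 1, via the resonant
# Jaynes-Cummings interaction. Start with (|0> + e^{-i theta}|1>)/sqrt(2)
# in the cavity and the qubit in |g>; after a quarter Rabi period the
# relative phase sits on the qubit (up to a known factor of -i).
import numpy as np
from qutip import basis, destroy, qeye, sigmam, tensor, sesolve

n_ph = 5                                   # photon-number cutoff
g = 1.0                                    # qubit-cavity coupling
theta = 0.7                                # the phase to be transferred

a = tensor(destroy(n_ph), qeye(2))         # cavity annihilation operator
sm = tensor(qeye(n_ph), sigmam())          # qubit lowering operator

H = g * (a.dag() * sm + a * sm.dag())      # JC model in the rotating frame

cav0 = (basis(n_ph, 0) + np.exp(-1j * theta) * basis(n_ph, 1)).unit()
psi0 = tensor(cav0, basis(2, 1))           # basis(2,1) is |g> in this convention

t_swap = np.pi / (2 * g)                   # half a vacuum Rabi oscillation
result = sesolve(H, psi0, [0.0, t_swap])

rho_qubit = result.states[-1].ptrace(1)    # reduced density matrix of the qubit
print(rho_qubit)                           # off-diagonals carry the phase theta
```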

I’m just beginning to learn about various ways to use superconductors to hold qubits. This looks like a good place to start:

• M. H. Devoret, A. Wallraff, and J. M. Martinis, Superconducting qubits: a short review.

Abstract: Superconducting qubits are solid state electrical circuits fabricated using techniques borrowed from conventional integrated circuits. They are based on the Josephson tunnel junction, the only non-dissipative, strongly non-linear circuit element available at low temperature. In contrast to microscopic entities such as spins or atoms, they tend to be well coupled to other circuits, which make them appealing from the point of view of readout and gate implementation. Very recently, new designs of superconducting qubits based on multi-junction circuits have solved the problem of isolation from unwanted extrinsic electromagnetic perturbations. We discuss in this review how qubit decoherence is affected by the intrinsic noise of the junction and what can be done to improve it.

You’ll note that this post was A Tale of Two Phases: the relative phase in a superposition of quantum states, and the phase difference across a Josephson junction. They’re quite different in character: the former is just a number, while we’re treating the latter as an operator! This may seem weird, so I thought I should emphasize it. I want to ponder the appearance of ‘phase operators’ in quantum optics and elsewhere… there should be some good math in here.


High Temperature Superconductivity

29 July, 2010

Here at the physics department of the National University of Singapore, Tony Leggett is about to speak on “Cuprate superconductivity: the current state of play”. I’ll take notes and throw them on this blog in a rough form. As always, my goal is to start some interesting conversations. So, go ahead and ask questions, or fill in some more details. Not everything I write here is something I understand!

Certain copper oxide compounds can be superconductive at relatively high temperatures — for example, above the boiling point of liquid nitrogen, 77 kelvin. These compounds consist of checkerboard layers with four oxygen atoms at the corners of each square and one copper in the middle. It’s believed that the electrons move around in these layers in an essentially two-dimensional way. Two-dimensional physics allows for all sorts of exotic possibilities! But nobody is sure how these superconductors work. The topic has been around for about 25 years, but according to Leggett, there’s no one theory that commands the assent of more than 20% of the theorists.

Here’s the outline of Leggett’s talk:

1. What is superconductivity?

2. Brief overview of cuprate structure and properties.

3. What do we know for sure about high-temperature superconductivity (HTS) in the cuprates? That is, what do we know without relying on any microscopic model, since these models are all controversial?

4. Some existing models.

5. Are we asking the right questions?

1. What is superconductivity?

For starters, he asked: what is superconductivity? It involves at least two phenomena that don’t necessarily need to go together, but seem to always go together in practice, and are typically considered together. One: perfect diamagnetism — in the “Meissner effect”, the medium completely excludes magnetic fields. This is an equilibrium effect. Two: persistent currents — this is an incredibly stable metastable effect.

Note the difference: if we start with a ball of stuff in magnetic field and slowly lower its temperature, once it becomes superconductive it will exclude the magnetic field. There are never any currents present, since we’re in thermodynamic equilibrium at any stage.

On the other hand, a ring of stuff with a current flowing around it is not in thermal equilibrium. It’s just a metastable state.

The London-Landau-Ginzburg theory of superconductivity is a ‘phenomenological’ theory: it doesn’t try to describe the underlying microscopic cause, just what seems to happen. Among other things, it says that a superconductor is characterized by a ‘macroscopic wave function’: a complex function \psi(r) = |\psi(r)| \, e^{i \phi(r)} whose phase depends on position and time. The current is given by

J(r) \propto |\psi(r)|^2 (\nabla \phi(r) - e A(r))

where e is a charge (in fact the charge of an electron pair, as was later realized).

This theory explains the Meissner effect and also persistent currents, and it’s probably good for cuprate superconductors.
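Here’s a quick way to see the Meissner effect from this formula; it’s the standard London-equation argument, not something from the talk. Deep inside the superconductor |\psi(r)| is constant, so taking the curl of the current formula (and using \nabla \times \nabla \phi = 0) gives

\nabla \times J \propto - |\psi|^2 \, B

Combining this with the Maxwell equation \nabla \times B \propto J yields \nabla^2 B \propto B, so magnetic fields decay exponentially as you move into the superconductor.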

2. The structure and behavior of cuprate superconductors

The structure of a typical cuprate: there are n planes made of CuO2 and other atoms (typically alkaline earth), and then, between these, a material that serves as a ‘charge reservoir’.

He showed us the phase diagram for a typical cuprate as a function of temperature and the ‘doping’: that is, the number of extra ‘holes’ – missing electrons – per CuO2. No cuprate has yet been shown to have this phase diagram in its entirety! But different ones have been seen to have different parts, so we may guess the story is like this:

There’s an antiferromagnetic insulator phase at low doping. At higher doping there’s a strange ‘pseudogap’ phase. Nobody knows if this ‘pseudogap’ phase extends to zero temperature. At still higher dopings we see a superconductive phase at low temperature and a ‘strange metal’ phase above some temperature. This temperature reaches a max at a doping of about 0.16 — a more or less universal figure — but the value of this maximum temperature depends a lot on the material. At higher dopings the superconductive phase goes away.

There are over 200 superconducting cuprates, but there are some cuprates that can never be made superconducting — those with multilayers spaced by strontium or barium.

Both ‘normal’ and superconducting states are highly anisotropic. But the ‘normal’ states are actually very anomalous — hence the term ‘strange metal’. The temperature-dependence of various properties are very unusual. By comparison the behaviour of the superconducting phase is less strange!

Most (but not all) properties are approximately consistent with the hypothesis that at a given doping, the properties are universal.

The superconducting phase is highly sensitive to doping and pressure.

3. What do we know for sure about superconductivity in the cuprates?

There’s strong evidence that cuprate superconductivity is due to the formation of Cooper pairs, just as for ordinary superconductors.

The ‘universality’ of high-temperature superconductivity in cuprate with very different chemical compositions suggests that the main actors are the electrons in the CuO2 planes. Most researchers believe this.

There’s a lot of NMR experiments suggesting that the spins of the electrons in the Cooper pairs are in the ‘singlet’ state:

up ⊗ down – down ⊗ up

Absence of substantial far-infrared absorption above the gap edge suggests that pairs are formed from time-reversed states (despite the work of Tahir–Kheli).

The ‘radius’ of the Cooper pairs is very small: only 3-10 angstroms, instead of thousands as in an ordinary superconductor!

In ordinary superconductor the wave function of a Cooper pair is in an s state (spherically symmetric state). In a cuprate superconductor it seems to have the symmetry of x^2 - y^2: that is, a d state that’s odd under 90 degree rotation in the plane of the cuprate (the x y plane), but even under reflection in either the x or y axis.

There’s good evidence that the pairs in different multilayers are effectively independent (despite the Anderson Interlayer Tunnelling Theory).

There isn’t a substantial dependence on the isotopes used to make the stuff, so it’s believed that phonons don’t play a major role.

At least 95% of the literature makes all of the above assumptions and a lot more. Most models are specific Hamiltonians that obey all these assumptions.

4. Models of high-temperature superconductivity in cuprates

How will we know when we have a ‘satisfactory’ theory? We should either be able to:

A) give a blueprint for building a room-temperature superconductor using cuprates, or

B) assert with confidence that we will never be able to do this, or at least

C) say exactly why we cannot do either A) or B).

No model can yet do this!

Here are some classes of models, from conservative to exotic:

1. Phonon-induced attraction – the good old BCS mechanism, which explains ordinary superconductors. These models have lots of problems when applied to cuprates, e.g. the fact that we don’t see an isotope effect.

2. Attraction induced by the exchange of some other boson: spin fluctuations, excitons, fluctuations of ‘stripes’ or still more exotic objects.

3. Theories starting from the single-band Hubbard model. These include theories based on the postulate of ‘exotic ordering’ in the ground state, e.g. charge-spin separation.

5. What are the right questions to ask?

The energy is the sum of 3 terms: the kinetic energy, the potential energy of the interaction between the conduction electrons and the static lattice, and the potential energy of the interaction of the conduction electrons among each other (both intra-plane and inter-plane). One of these must go down when Cooper pairs form! The third term is the obvious suspect.

Then Leggett wrote a lot of equations which I cannot copy fast enough… and concluded that there are two basic possibilities, “Eliashberg” and “overscreening”. The first is that electrons with opposite momentum and spin attract each other in the normal phase. The second is that there’s no attraction required in the normal phase, but the interaction is modified by pairing: pairing can cause “screening” of the Coulomb repulsion. Which one is it?

Another good question: Why does the critical temperature depend on the number of layers in a multilayer? There are various possible explanations. The “boring” explanation is that superconductivity is a single-plane phenomenon, but multi-layering affects properties of individual planes. The “interesting” explanations say that inter-plane effects are essential: for example, as in the Anderson inter-layer tunnelling model, or due to a Kosterlitz-Thouless effect, or due to inter-plane Coulomb interactions.

Leggett clearly likes the last possibility, with the energy savings coming from increased screening, predominantly at long wavelengths and mid-infrared frequencies. This gives a natural explanation of why all known high-temperature superconductors are strongly two-dimensional, and it explains many more of their properties, too. Moreover it’s unambiguously falsifiable in electron energy-loss spectroscopy experiments. He has proposed an experimental test, which will be carried out soon.

He bets that, with at least a 50% chance, some of the younger members of the audience will live to see room-temperature superconductors.


Bose Statistics and Classical Fields

22 July, 2010

Right now Kazimierz Rzążewski from the Center for Theoretical Physics at the Polish Academy of Sciences is giving a talk on “Bose statistics and classical fields”.

Abstract: Statistical properties of quantum systems are the heart of quantum statistical physics. Probability distributions of Bose-Einstein condensate are well understood for an ideal gas. In the presence of interactions only crude approximations are available. In this talk I will argue that now we have a powerful computational tool to study the statistics of weakly interacting Bose gas which is based on the so-called classical field approximation.

For an ideal gas of bosonic atoms trapped in a 3d harmonic oscillator potential, the fraction of atoms in the ground state goes like

1 - cT^3

for T below a certain critical value, and 0 above that.

The grand canonical ensemble, where we assume the number of particles in our gas and its total energy are both variable, is a dubious method for Bose-Einstein condensates, because there’s no contact with a particle reservoir. The canonical ensemble, where we assume the particle number is fixed but the total energy is variable, is also fishy. Why? Because there’s no contact with a heat reservoir, either. The microcanonical ensemble, where the energy and number of particles are both fixed, is closest to experimental reality.

We see this when we compute the fluctuations of the number of particles in the ground state. For the grand canonical ensemble, the standard deviation of the number of particles in the ground state becomes infinite at temperatures below a certain value!

The fun starts when we move from the ideal gas to a weakly interacting gas. Most papers here consider particles trapped in a box, not in a harmonic oscillator — and they use the Bogoliubov approximation, which is exactly soluble for a box. This approximation involves a quadratic Hamiltonian that’s a sum of terms, one for each mode in the box. To set up this Hamiltonian we need to use the Bogoliubov-de Gennes equations.

As the temperature goes up, the Bogoliubov approximation breaks down… so we need a new approach.

Here is Rzążewski’s approach. A gas of bosons is described by a quantum field. But we can approximate the long-wavelength part of this quantum field by a classical field. Of course the basic idea here is not new: it’s familiar from electromagnetism, where it’s what lets us approximate the quantum electromagnetic field by a classical field obeying the classical Maxwell equations. But the new part is setting up a theory that keeps some of the virtues of the quantum description, while approximating it with a classical one at low frequencies (i.e., large distance scales).

So: for each mode below the cutoff we have a 2d classical phase space, while for modes above the cutoff we describe the system using annihilation and creation operators. But: how to put in a nice ‘cutoff’ where we make the transition from the quantum field to the classical field?

Testing this problem on an exactly soluble model is a good idea: for example, the 1-dimensional ideal gas!

It turns out that by choosing the cutoff in an optimal way, the approximation is very good — not just for the 1d ideal gas, but also the 3d case, in both a harmonic potential and in a box. There is an analytic form for this optimal cutoff.

But more significant is the nonideal gas, where the particles repel each other. Here it’s easiest to start with the 1d case of a gas trapped in a harmonic oscillator potential. Now it’s more complicated. But we can simulate it numerically using the Metropolis algorithm!
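I don’t know exactly what form Rzążewski’s simulations take, but here’s a generic sketch of what a Metropolis sampler for a classical field looks like: sample field configurations \psi on a grid with probability proportional to \exp(-\beta E[\psi]), where E is a discretized Gross-Pitaevskii-type energy. All the details here are my guesses, just to show the flavor of the method.

```python
# Generic Metropolis sampling of a 1d classical field in a harmonic trap,
# with energy = kinetic + trap + repulsive interaction terms.
# (A real calculation would, among other things, fix the particle number.)
import numpy as np

rng = np.random.default_rng(1)
n, beta, g_int = 64, 2.0, 0.1
x = np.linspace(-5, 5, n)
dx = x[1] - x[0]

def energy(psi):
    grad = np.gradient(psi, dx)
    kinetic = 0.5 * np.sum(np.abs(grad) ** 2) * dx
    trap = 0.5 * np.sum(x ** 2 * np.abs(psi) ** 2) * dx
    interaction = 0.5 * g_int * np.sum(np.abs(psi) ** 4) * dx
    return kinetic + trap + interaction

psi = np.exp(-x ** 2 / 2).astype(complex)        # initial field configuration
E = energy(psi)
for step in range(20000):
    trial = psi.copy()
    i = rng.integers(n)                          # perturb one grid point
    trial[i] += 0.1 * (rng.standard_normal() + 1j * rng.standard_normal())
    dE = energy(trial) - E
    # Metropolis rule: always accept downhill moves, sometimes uphill ones.
    if dE <= 0 or rng.random() < np.exp(-beta * dE):
        psi, E = trial, E + dE

print("sampled energy:", E)
```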

We can also study ‘quasicondensates’, where the coherence length is shorter than the size of the box, or the size of the cloud of atoms. (For example, in 2 dimensions, at temperatures above the Berezinskii-Kosterlitz-Thouless transition, there are lots of vortices in the gas, so the phase of the gas is nearly uniform only in small patches.)

Some papers:

• E. Witkowska, M. Gajda, and K. Rzążewski,
Bose statistics and classical fields, Phys. Rev. A 79 (2009), 033631.

• E. Witkowska, M. Gajda, and K. Rzążewski,
Monte Carlo method, classical fields and Bose statistics,
Opt. Comm. 283 (2010), 671-675.

• Z. Idziaszek, L. Zawitkowski, M. Gajda, and K. Rzążewski, Fluctuations of weakly interacting Bose-Einstein condensate, Europhysics Lett. 86 (2009), 10002.

As usual, I’d love it if an expert came along and explained anything more about these ideas. For example, I’m pretty vague about how exactly the Metropolis algorithm is used here.


Quantum Steganography

19 July, 2010

Besides talking about environmental issues, I’d also like to use this blog to talk about my day job at the Centre for Quantum Technologies. I hope this isn’t too distracting…

I’d like to try live-blogging a talk here. Today there’s a talk by Bilal Shaw of the University of Southern California about a paper he wrote with Todd Brun on Quantum Steganography.

“Steganography” is the art of hiding information by embedding it in a seemingly innocent message. In case you’re wondering – and I’ve got the kind of mind that can’t help wondering – the word “steganography” actually is etymologically related to the word “stegosaurus”. They both go back to words meaning “cover” or “roof”. Some other words with the same root are “thatch”, “deck”, and even “detect”, which is like “de-deck”: to take the lid off something!

Steganography is an ancient art, still thriving today. For example, that Russian spy ring they just caught were embedding secret data in publicly visible websites. The advantage of steganography over ordinary cryptography is that if you do it right, it doesn’t draw attention to itself. See this picture?

Remove all but the two least significant bits of each color component and you’ll get a picture that’s almost black. But then make that picture 85 times brighter and here’s what you’ll see:
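In code, the whole classical trick is just a few lines. Here’s a sketch (mine, assuming the Pillow library and a made-up filename); the factor of 85 is there because two bits range over 0 to 3, and 3 times 85 = 255:

```python
# Keep only the two least significant bits of each colour component,
# then rescale so the hidden picture becomes visible (3 * 85 = 255).
from PIL import Image
import numpy as np

img = np.array(Image.open("innocent.png").convert("RGB"))
hidden = (img & 0b11) * 85                 # two LSBs, brightened 85x
Image.fromarray(hidden.astype(np.uint8)).save("revealed.png")
```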

All this is purely classical, of course. But what fiendish tricks can we play using quantum mechanics? Can we hide Schrödinger’s cat in a seemingly innocent tree?

Bilal’s paper describes a few recipes for quantum steganography. Alas, I’m not good enough at cryptography and live-blogging to beautifully deliver an instant summary of how they work. But roughly, the idea is to fake the effects of a mildly “depolarizing” channel: one that introduces some errors into the qubits you’re transmitting, pushing pure states closer to the center of the Bloch sphere, where pure noise lives. You can’t introduce too many errors, since this would make the error rate suspiciously high to someone spying on your transmissions. So, there’s a kind of tradeoff here…
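For reference, here’s the depolarizing channel in formulas; this is the standard definition, not notation from the talk. With error probability p it sends a qubit state \rho to

\rho \mapsto (1 - p) \, \rho + p \, \frac{I}{2}

which shrinks the Bloch vector of \rho toward the maximally mixed state at the center of the Bloch sphere.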

I’d be happy for an expert to give a better description!

