Time Crystals

26 September, 2012

When water freezes and forms a crystal, it creates a periodic pattern in space. Could there be something that crystallizes to form a periodic pattern in time? Frank Wilczek, who won the Nobel Prize for helping explain why quarks and gluons trapped inside a proton or neutron act like freely moving particles when you examine them very close up, dreamt up this idea and called it a time crystal:

• Frank Wilczek, Classical time crystals.

• Frank Wilczek, Quantum time crystals.

‘Time crystals’ sound like something from Greg Egan’s Orthogonal trilogy, set in a universe where there’s no fundamental distinction between time and space. But Wilczek wanted to realize these in our universe.

Of course, it’s easy to make a system that behaves in an approximately periodic way while it slowly runs down: that’s how a clock works: tick tock, tick tock, tick tock… But a system that keeps ‘ticking away’ without using up any resource or running down would be a strange new thing. There’s no telling what weird stuff we might do with it.

It’s also interesting because physicists love symmetry. In ordinary physics there are two very important symmetries: spatial translation symmetry, and time translation symmetry. Spatial translation symmetry says that if you move an experiment any amount to the left or right, it works the same way. Time translation symmetry says that if you do an experiment any amount of time earlier or later, it works the same way.

Crystals are fascinating because they ‘spontaneously break’ spatial translation symmetry. Take a liquid, cool it until it freezes, and it forms a crystal which does not look the same if you move it any amount to the right or left. It only looks the same if you move it certain discrete steps to the right or left!

The idea of a ‘time crystal’ is that it’s a system that spontaneously breaks time translation symmetry.

Given how much physicists have studied time translation symmetry and spontaneous symmetry breaking, it’s sort of shocking that nobody before 2012 wrote about this possibility. Or maybe someone did—but I haven’t heard about it.

It takes real creativity to think of an idea so radical yet so simple. But Wilczek is famously creative. For example, he came up with anyons: a new kind of particle, neither boson nor fermion, that lives in a universe where space is 2-dimensional. And now we can make those in the lab.

Unfortunately, Wilczek didn’t know how to make a time crystal. But now a team at U.C. Berkeley including Xiang Zhang and Tongcang Li has a plan for how to do it.

Actually they propose a ring-shaped system that’s periodic in time and also in space. They call it a space-time crystal:

Here we propose a space-time crystal of trapped ions and a method to realize it experimentally by confining ions in a ring-shaped trapping potential with a static magnetic field. The ions spontaneously form a spatial ring crystal due to Coulomb repulsion. This ion crystal can rotate persistently at the lowest quantum energy state in magnetic fields with fractional fluxes. The persistent rotation of trapped ions produces the temporal order, leading to the formation of a space-time crystal. We show that these space-time crystals are robust for direct experimental observation. The proposed space-time crystals of trapped ions provide a new dimension for exploring many-body physics and emerging properties of matter.

The new paper is here:

• Tongcang Li, Zhe-Xuan Gong, Zhang-Qi Yin, H. T. Quan, Xiaobo Yin, Peng Zhang, L.-M. Duan and Xiang Zhang, Space-time crystals of trapped ions.

Alas, the press release put out by Lawrence Berkeley National Laboratory is very misleading. It describes the space-time crystal as a clock that

will theoretically persist even after the rest of our universe reaches entropy, thermodynamic equilibrium or “heat-death”.

NO!

First of all, ‘reaching entropy’ doesn’t mean anything. More importantly, as time goes by and things fall apart, this space-time crystal, assuming anyone can actually make it, will also fall apart.

I know what the person talking to the reporter was trying to say: the cool thing about this setup is that it gives a system that’s truly time-periodic, not gradually using up some resource and running down like an ordinary clock. But nonphysicist readers, seeing an article entitled ‘A Clock that Will Last Forever’, may be fooled into thinking this setup is, umm, a clock that will last forever. It’s not.

If this setup were the whole universe, it might keep ticking away forever. But in fact it’s just a small, carefully crafted portion of our universe, and it interacts with the rest of our universe, so it will gradually fall apart when everything else does… or in fact much sooner: as soon as the scientists running it turn off the experiment.

Classifying space-time crystals

What could we do with space-time crystals? It’s way too early to tell, at least for me. But since I’m a mathematician, I’d be happy to classify them. Over on Google+, William Rutiser asked if there are 4d analogues of the 3d crystallographic groups. And the answer is yes! Mathematicians with too much time on their hands have classified the analogues of crystallographic groups in 4 dimensions:

• Space group: classification in small dimensions, Wikipedia.

In general these groups are called space groups (see the article for the definition). In 1 dimension there are just two, namely the symmetry groups of this:

— o — o — o — o — o — o —

and this:

— > — > — > — > — > — > —

In 2 dimensions there are 17 and they’re called wallpaper groups. In 3 dimensions there are 230 and they are called crystallographic groups. In 4 dimensions there are 4894, in 5 dimensions there are… hey, Wikipedia leaves this space blank in their table!… and in 6 dimensions there are 28,934,974.

So, there is in principle quite a large subject to study here, if people can figure out how to build a variety of space-time crystals.

There’s already a book on this, if you’re interested:

• Harold Brown, Rolf Bülow, Joachim Neubüser, Hans Wondratschek and Hans Zassenhaus, Crystallographic Groups of Four-Dimensional Space, Wiley Monographs in Crystallography, 1978.


Quantizing Electrical Circuits

2 February, 2012

As you may know, there’s a wonderful and famous analogy between classical mechanics and electrical circuit theory. I explained it back in “week288”, so I won’t repeat that story now. If you don’t know what I’m talking about, take a look!

This analogy opens up the possibility of quantizing electrical circuits by straightforwardly copying the way we quantize classical mechanics problems. I’d often wondered if this would be useful.

It is, and people have done it:

• Michel H. Devoret, Quantum fluctuations in electrical circuits.

Michel Devoret, Rob Schoelkopf and others call this idea quantronics: the study of mesoscopic electronic effects in which collective degrees of freedom like currents and voltages behave quantum mechanically.

I just learned about this from a talk by Sean Barrett here in Coogee. There are lots of cool applications, but right now I’m mainly interested in how this extends the set of analogies between different physical theories.

One interesting thing is how they quantize circuits with resistors. Over in classical mechanics, this corresponds to systems with friction. These systems, called ‘dissipative’ systems, don’t have a conserved energy. More precisely, energy leaks out of the system under consideration and gets transferred to the environment in the form of heat. It’s hard to quantize systems where energy isn’t conserved, so people in quantronics model resistors as infinite chains of inductors and capacitors: see the ‘LC ladder circuit’ on page 15 of Devoret’s notes. This idea is also the basis of the Caldeira–Leggett model of a particle coupled to a heat bath made of harmonic oscillators: it amounts to including the environment as part of the system being studied.
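
To see how a purely reactive ladder can imitate a resistor, it helps to work out the input impedance of the semi-infinite ladder. Here’s a minimal sketch in Python (my own illustration, with made-up component values, not taken from Devoret’s notes). Demanding that adding one more LC stage leave the impedance unchanged gives a quadratic equation, and below the cutoff frequency its solution has a real part: the ladder absorbs energy like a resistor of sqrt(L/C) ohms, even though every element is lossless.

```python
import numpy as np

# Input impedance Z of a semi-infinite ladder of series inductors L and
# shunt capacitors C.  Self-consistency (one more stage changes nothing):
#     Z = i*w*L + 1 / (i*w*C + 1/Z)
# which solves to Z = i*w*L/2 + sqrt(L/C - (w*L)**2 / 4).
L, C = 1e-6, 1e-12   # hypothetical values: 1 microhenry, 1 picofarad

def ladder_impedance(w):
    return 1j * w * L / 2 + np.sqrt(L / C - (w * L) ** 2 / 4 + 0j)

print("sqrt(L/C) =", np.sqrt(L / C), "ohms")   # 1000 ohms
for w in [1e6, 1e9, 3e9]:                      # cutoff is 2/sqrt(L*C) = 2e9
    print(f"w = {w:.0e} rad/s: Z = {ladder_impedance(w):.1f}")
```

Below the cutoff the real part is nonzero, so energy flows irreversibly down the ladder; above it, Z is purely imaginary and nothing is absorbed. Getting a real, dissipative impedance out of infinitely many lossless elements is exactly the Caldeira–Leggett idea mentioned above.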


A Quantum Hammersley–Clifford Theorem

29 January, 2012

I’m at this workshop:

• Sydney Quantum Information Theory Workshop: Coogee 2012, 30 January – 2 February 2012, Coogee Bay Hotel, Coogee, Sydney, organized by Stephen Bartlett, Gavin Brennen, Andrew Doherty and Tom Stace.

Right now David Poulin is speaking about a quantum version of the Hammersley–Clifford theorem, which is a theorem about Markov networks. Let me quickly say a bit about what he proved! This will be a bit rough, since I’m doing it live…

The mutual information between two random variables A and B is

I(A:B) = S(A) + S(B) - S(A,B)

The conditional mutual information between two random variables A and B, given a third random variable C, is

I(A:B|C) = \sum_c p(C=c) I(A:B|C=c)

It’s the average amount of information about B you learn by measuring A when you already know C.

All this works for both classical (Shannon) and quantum (von Neumann) entropy, though in the quantum case there is no direct analogue of conditioning on an outcome C = c, so one takes the identity I(A:B|C) = S(A,C) + S(B,C) - S(A,B,C) - S(C) as the definition. So, when we say ‘random variable’ above, we could mean it in the traditional classical sense or in the quantum sense.

If I(A:B|C) = 0 then A, C, B have the following Markov property: if you know C, learning A tells you nothing new about B. In condensed matter physics, say a spin system, we get (quantum) random variables from measuring what’s going on in regions, and we have short-range entanglement if I(A:B|C) = 0 when C corresponds to some sufficiently thick region separating the regions A and B. We’ll get this in any Gibbs state of a spin chain with a local Hamiltonian.
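
Here’s a quick classical sanity check in Python (my own toy example, not from Poulin’s talk). Take C to be a fair coin and let A and B be independent noisy copies of C: then I(A:B) > 0, but I(A:B|C) = 0, computed with the identity above.

```python
import numpy as np

def S(p):
    """Shannon entropy (in bits) of a distribution given as an array."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Joint distribution p[a, b, c]: C is a fair coin, and A and B are
# independent copies of C, each flipped with probability eps.
eps = 0.1
p = np.zeros((2, 2, 2))
for a in range(2):
    for b in range(2):
        for c in range(2):
            pa = 1 - eps if a == c else eps
            pb = 1 - eps if b == c else eps
            p[a, b, c] = 0.5 * pa * pb

pAB, pAC, pBC = p.sum(axis=2), p.sum(axis=1), p.sum(axis=0)
pA, pB, pC = pAB.sum(axis=1), pAB.sum(axis=0), pBC.sum(axis=0)

print(S(pA) + S(pB) - S(pAB))          # I(A:B)   ~ 0.32 bits: positive
print(S(pAC) + S(pBC) - S(p) - S(pC))  # I(A:B|C) = 0 up to rounding
```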

A Markov network is a graph with random variables at vertices (and thus at subsets of vertices) such that I(A:B|C) = 0 whenever C is a subset of vertices that completely ‘shields’ the subset A from the subset B: any path from A to B goes through a vertex in C.

The Hammersley–Clifford theorem says that in the classical case we can get any Markov network from the Gibbs state

\exp(-\beta H)

of a local Hamiltonian H, and vice versa. Here a Hamiltonian is local if it is a sum of terms, one depending on the degrees of freedom in each clique in the graph:

H = \sum_{C \in \mathrm{cliques}} h_C
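
To see the easy direction in the smallest case, take the path graph A - C - B, whose cliques are its two edges. Here’s a toy check in Python (my own, with arbitrary coupling constants): the Gibbs distribution of a 3-spin Ising chain has I(A:B|C) = 0, so the middle spin shields the outer two.

```python
import numpy as np
from itertools import product

def S(p):
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# Gibbs state p = exp(-beta*H)/Z for the chain A - C - B, with one term
# of the Hamiltonian per clique (edge): H = -J*(sA*sC) - J*(sC*sB).
J, beta = 1.0, 0.7
p = np.zeros((2, 2, 2))                      # indices (a, c, b)
for a, c, b in product(range(2), repeat=3):
    sA, sC, sB = 2*a - 1, 2*c - 1, 2*b - 1
    p[a, c, b] = np.exp(beta * J * (sA*sC + sC*sB))
p /= p.sum()

# Conditional mutual information I(A:B|C) should vanish exactly.
cmi = S(p.sum(axis=2)) + S(p.sum(axis=0)) - S(p) - S(p.sum(axis=(0, 2)))
print(cmi)   # ~ 1e-16: zero up to floating-point error
```

Classically the two edge terms automatically commute; quantum mechanically they may not, which is where the converse breaks down, as explained below.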

Hayden, Jozsa, Petz and Winter gave a quantum generalization of one direction of this result to graphs that are just ‘chains’, like this:

o—o—o—o—o—o—o—o—o—o—o—o

Namely: for such graphs, any quantum Markov network is the Gibbs state of some local Hamiltonian. Now Poulin has shown the same for all graphs. But the converse is, in general, false. If the different terms h_C in a local Hamiltonian all commute, its Gibbs state will have the Markov property. But otherwise, it may not.

For some related material, see:

• David Poulin, Quantum graphical models and belief propagation.


Probabilities Versus Amplitudes

5 December, 2011

Here are the slides of the talk I’m giving at the CQT Annual Symposium on Wednesday afternoon, which is Tuesday morning for a lot of you. If you catch mistakes, I’d love to hear about them before then!

• Probabilities versus amplitudes.

Abstract: Some ideas from quantum theory are just beginning to percolate back to classical probability theory. For example, there is a widely used and successful theory of “chemical reaction networks”, which describes the interactions of molecules in a stochastic rather than quantum way. If we look at it from the perspective of quantum theory, this turns out to involve creation and annihilation operators, coherent states and other well-known ideas—but with a few big differences. The stochastic analogue of quantum field theory is also used in population biology, and here the connection is well-known. But what does it mean to treat wolves as fermions or bosons?


Liquid Light

28 November, 2011

Elisabeth Giacobino works at the École Normale Supérieure in Paris. Last week she gave a talk at the Centre for Quantum Technologies. It was about ‘polariton condensates’. You can see a video of her talk here.

What’s a polariton? It’s a strange particle: a blend of matter and light. Polaritons are mostly made of light… with just enough matter mixed in so they can form a liquid! This liquid can form eddies just like water. Giacobino and her team of scientists have actually gotten pictures.

Physicists call this liquid a ‘polariton condensate’, but normal people may better appreciate how wonderful it is if we call it liquid light. That’s not 100% accurate, but it’s close—you’ll see what I mean in a minute.

Here’s a picture of Elisabeth Giacobino (at right) and her coworkers in 2010—not exactly the same team who is working on liquid light, but the best I can find.

How to make liquid light

How do you make liquid light?

First, take a thin film of some semiconductor like gallium arsenide. It’s full of electrons roaming around, so imagine a sea of electrons, like water. If you knock out an electron with enough energy, you’ll get a ‘hole’ which can move around like a particle of its own. Yes, the absence of a thing can act like a thing. Imagine an air bubble in the sea.

All this so far is standard stuff. But now for something more tricky: if you knock an electron just a little, it won’t go far from the hole it left behind. They’ll be attracted to each other, so they’ll orbit each other!

What you’ve got now is like a hydrogen atom—but instead of an electron and a proton, it’s made from an electron and a hole! It’s called an exciton. In Giacobino’s experiments, the excitons are 200 times as big as hydrogen atoms.

Excitons are exciting, but not exciting enough for us. So next, put a mirror on each side of your thin film. Now light can bounce back and forth. The light will interact with the excitons. If you do it right, this lets a particle of light—called a photon—blend with an exciton and form a new particle called a polariton.

How does a photon ‘blend’ with an exciton? Umm, err… this involves quantum mechanics. In quantum mechanics you can take two possible situations and add them and get a new one, a kind of ‘blend’ called a ‘superposition’. ‘Schrödinger’s cat’ is what you get when you blend a live cat and a dead cat. People like to argue about why we don’t see half-live, half-dead cats. But never mind: we can see a blend of a photon and an exciton! Giacobino and her coworkers have done just that.

The polaritons they create are mostly light, with just a teeny bit of exciton blended in. Photons have no mass at all. So, perhaps it’s not surprising that their polaritons have a very small mass: about 10^{-5} times as heavy as an electron!

They don’t last very long: just about 4–10 picoseconds. A picosecond is a trillionth of a second, or 10^{-12} seconds. After that they fall apart. However, this is long enough for polaritons to do lots of interesting things.

For starters, polaritons interact with each other enough to form a liquid. But it’s not just any ordinary liquid: it’s often a superfluid, like very cold liquid helium. This means, among other things, that it has almost no viscosity.

So: it’s even better than liquid light: it’s superfluid light!

The flow of liquid light

What can you do with liquid light?

For starters, you can watch it flow around obstacles. Semiconductors have ‘defects’—little flaws in the crystal structure. These act as obstacles to the flow of polaritons. And Giacobino and her team have seen the flow of polaritons around defects in the semiconductor:

The two pictures at left are two views of the polariton condensate flowing smoothly around a defect. In these pictures the condensate is a superfluid.

The two pictures in the middle show a different situation. Here the polariton condensate is viscous enough so that it forms a trail of eddies as it flows past the defect. Yes, eddies of light!

And the two pictures at right show yet another situation. In every fluid, we can have waves of pressure. This is called… ‘sound’. Yes, this is how ordinary sound works in air, or under water. But we can also have sound in a polariton condensate!

That’s pretty cool: sound in liquid light! But wait. We haven’t gotten to the really cool part yet. Whenever you have a fluid moving past an obstacle faster than the speed of sound, you get a ‘shock wave’: the obstacle leaves an expanding trail of sound in its wake, behind it, because the sound can’t catch up. That’s why jets flying faster than sound leave a sonic boom behind them.

And that’s what you’re seeing in the pictures at right. The polariton condensate is flowing past the defect faster than the speed of sound, which happens to be around 850,000 meters per second in this experiment. We’re seeing the shock wave it makes. So, we’re seeing a sonic boom in liquid light!

It’s possible we’ll be able to use polariton condensates for interesting new technologies. Giacobino and her team are also considering using them to study Hawking radiation: the feeble glow that black holes emit according to Hawking’s predictions. There aren’t black holes in polariton condensates, but it may be possible to create a similar kind of radiation. That would be really cool!

But to me, just being able to make a liquid consisting mostly of light, and study its properties, is already a triumph: just for the beauty of it.

Scary technical details

All the pictures of polariton condensates flowing around a defect came from here:

• A. Amo, S. Pigeon, D. Sanvitto, V. G. Sala, R. Hivet, I. Carusotto, F. Pisanello, G. Leménager, R. Houdré, E. Giacobino, C. Ciuti, and A. Bramati, Hydrodynamic solitons in polariton superfluids.

and this is the paper to read for more details.

I tried to be comprehensible to ordinary folks, but there are a few more things I can’t resist saying.

First, there are actually many different kinds of polaritons. In general, polaritons are quasiparticles formed by the interaction of photons and matter. For example, in some crystals sound acts like it’s made of particles, and these quasiparticles are called ‘phonons’. But sometimes phonons can interact with light to form quasiparticles—and these are called ‘phonon-polaritons’. I’ve only been talking about ‘exciton-polaritons’.

If you know a bit about superfluids, you may be interested to hear that the wavy patterns show the phase of the order parameter ψ in the Landau-Ginzburg theory of superfluids.

If you know about quantum field theory, you may be interested to know that the Hamiltonian describing photon-exciton interactions involves terms roughly like

\alpha a^\dagger a + \beta b^\dagger b + \gamma (a^\dagger b + b^\dagger a)

where a is the annihilation operator for photons, b is the annihilation operator for excitons, the Greek letters are various constants, and the third term describes the interaction of photons and excitons. We can simplify this Hamiltonian by defining new particles that are linear combinations of photons and excitons. It’s just like diagonalizing a matrix; we get something like

\delta c^\dagger c + \epsilon d^\dagger d

where c and d are certain linear combinations of a and b. These act as annihilation operators for our new particles… and one of these new particles is the very light ‘polariton’ I’ve been talking about!
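
If you’d like to see that last step concretely, here is a tiny Python sketch (illustrative numbers only, not from Giacobino’s experiments). Since each term conserves the total number of quanta, it’s enough to diagonalize the 2×2 one-particle matrix built from the constants above:

```python
import numpy as np

# One-particle matrix of the Hamiltonian in the basis (photon a, exciton b):
# diagonal entries alpha, beta; off-diagonal coupling gamma.
alpha, beta, gamma = 1.0, 1.2, 0.1   # made-up energies and coupling
M = np.array([[alpha, gamma],
              [gamma, beta]])

energies, modes = np.linalg.eigh(M)  # the constants delta, epsilon above
print("polariton energies delta, epsilon:", energies)
print("lower polariton c as a (photon, exciton) mix:", modes[:, 0])
```

The eigenvector components say how much photon and how much exciton goes into each new particle. A polariton that is mostly photon with a dash of exciton, like the ones in these experiments, corresponds to an eigenvector dominated by its photon component.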


Is Life Improbable?

31 May, 2011

Mine? Yes. And maybe you’ve wondered just how improbable your life is. But that’s not really the question today…

Here at the Centre for Quantum Technologies, Dagomir Kaszlikowski asked me to give a talk on this paper:

• John Baez, Is life improbable?, Foundations of Physics 19 (1989), 91-95.

This was the second paper I wrote, right after my undergraduate thesis. Nobody ever seemed to care about it, so it’s strange—but nice—to finally be giving a talk on it.

My paper does not try to settle the question its title asks. Rather, it tries to refute the argument here:

• Eugene P. Wigner, The probability of the existence of a self-reproducing unit, Symmetries and Reflections, Indiana University Press, Bloomington, 1967, pp. 200-208.

According to Wigner, his argument

purports to show that, according to standard quantum mechanical theory, the probability is zero for the existence of self-reproducing states, i.e., organisms.

Given how famous Eugene Wigner is (he won a Nobel prize, after all) and how earth-shattering his result would be if true, it’s surprising how little criticism his paper has received. David Bohm mentioned it approvingly in 1969. In 1974 Hubert Yockey cited it saying

for all physics has to offer, life should never have appeared and if it ever did it would soon die out.

As you’d expect, there are some websites mentioning Wigner’s argument as evidence that some supernatural phenomenon is required to keep life going. Wigner himself believed it was impossible to formulate quantum theory in a fully consistent way without referring to consciousness. Since I don’t believe either of these claims, I think it’s good to understand the flaw in Wigner’s argument.

So, let me start by explaining his argument. Very roughly, it purports to show that if there are many more ways a chunk of matter can be ‘dead’ than ‘living’, the chance is zero that we can choose some definition of ‘living’ and a suitable ‘nutrient’ state such that every ‘living’ chunk of matter can interact with this ‘nutrient’ state to produce two ‘living’ chunks.

In making this precise, Wigner considers more than just two chunks of matter: he also allows there to be an ‘environment’. So, he considers a quantum system made of three parts, and described by a Hilbert space

H = H_1 \otimes H_1 \otimes H_2

Here the first H_1 corresponds to a chunk of matter. The second H_1 corresponds to another chunk of matter. The space H_2 corresponds to the ‘environment’. Suppose we wait for a certain amount of time and see what the system does; this will be described by some unitary operator

S: H \to H

Wigner asks: if we pick this operator S in a random way, what’s the probability that there’s some n-dimensional subspace of ‘living organism’ states in H_1, and some ‘nutrient plus environment’ state in H_1 \otimes H_2, such that the time evolution sends any living organism together with the nutrient plus environment to two living organisms and some state of the environment?

A bit more precisely: suppose we pick S in a random way. Then what’s the probability that there exists an n-dimensional subspace

V \subseteq H_1

and a state

w \in H_1 \otimes H_2

such that S maps every vector in V \otimes \langle w \rangle to a vector in V \otimes V \otimes H_2? Here \langle w \rangle means the 1-dimensional subspace spanned by the vector w.

And his answer is: if

\mathrm{dim}(H_1) \gg n

then this probability is zero.

You may need to reread the last few paragraphs a couple times to understand Wigner’s question, and his answer. In case you’re still confused, I should say that V \subseteq H_1 is what I’m calling the space of ‘living organism’ states of our chunk of matter, while w \in H_1 \otimes H_2 is the ‘nutrient plus environment’ state.

Now, Wigner did not give a rigorous proof of his claim, nor did he say exactly what he meant by ‘probability’: he didn’t specify a probability measure on the space of unitary operators on H. But if we use the obvious choice (called ‘normalized Haar measure’) his argument can most likely be turned into a proof.

So, I don’t want to argue with his math. I want to argue with his interpretation of the math. He concludes that

the chances are nil for the existence of a set of ‘living’ states for which one can find a nutrient of such nature that interaction always leads to multiplication.

The problem is that he fixed the decomposition of the Hilbert space H as a tensor product

H = H_1 \otimes H_1 \otimes H_2

before choosing the time evolution operator S. There is no good reason to do that. It only makes sense to split up a physical system into parts this way after we have some idea of what the dynamics is. An abstract Hilbert space doesn’t come with a favored decomposition as a tensor product into three parts!

If we let ourselves pick this decomposition after picking the operator S, the story changes completely. My paper shows:

Theorem 1. Let H, H_1 and H_2 be finite-dimensional Hilbert spaces with H \cong H_1 \otimes H_1 \otimes H_2. Suppose S : H \to H is any unitary operator, suppose V is any subspace of H_1, and suppose w is any unit vector in H_1 \otimes H_2. Then there is a unitary isomorphism

U: H \to H_1 \otimes H_1 \otimes H_2

such that if we identify H with H_1 \otimes H_1 \otimes H_2 using U, the operator S maps V \otimes \langle w \rangle into V \otimes V \otimes H_2.

In other words, if we allow ourselves to pick the decomposition after picking S, we can always find a ‘living organism’ subspace of any dimension we like, together with a ‘nutrient plus environment’ state that allows our living organism to reproduce.

However, if you look at the proof in my paper, you’ll see it’s based on a kind of cheap trick (as I forthrightly admit). Namely, I pick the ‘nutrient plus environment’ state to lie in V \otimes H_2, so the nutrient actually consists of another organism!

This goes to show that you have to be very careful about theorems like this. To prove that life is improbable, you need to find some necessary conditions for what counts as life, and show that these are improbable (in some sense, and of course it matters a lot what that sense is). Refuting such an argument does not prove that life is probable: for that you need some sufficient conditions for what counts as life. And either way, if you prove a theorem using a ‘cheap trick’, it probably hasn’t gotten to grips with the real issues.

I also show that as the dimension of H approaches infinity, the probability approaches 1 that we can get reproduction with a 1-dimensional ‘living organism’ subspace V and a ‘nutrient plus environment’ state that lies in V^\perp \otimes H_2. In other words, the ‘nutrient’ is not just another organism sitting there all ready to go!

More precisely:

Theorem 2. Let H, H_1 and H_2 be finite-dimensional Hilbert spaces with \mathrm{dim}(H) = \mathrm{dim}(H_1)^2 \cdot \mathrm{dim}(H_2). Let \mathbf{S'} be the set of unitary operators S: H \to H with the following property: there’s a unit vector v \in H_1, a unit vector w \in V^\perp \otimes H_2 (where V = \langle v \rangle), and a unitary isomorphism

U: H \to H_1 \otimes H_1 \otimes H_2

such that if we identify H with H_1 \otimes H_1 \otimes H_2 using U, the operator S maps v \otimes w into \langle v\rangle \otimes \langle v \rangle \otimes H_2. Then the normalized Haar measure of \mathbf{S'} approaches 1 as \mathrm{dim}(H) \to \infty.

Here V^\perp is the orthogonal complement of V \subseteq H_1; that is, the space of all vectors perpendicular to V.

I won’t include the proofs of these theorems, since you can see them in my paper.

Just to be clear: I certainly don’t think these theorems prove that life is probable! You can’t have theorems without definitions, and I think that coming up with a good general definition of ‘life’, or even supposedly simpler concepts like ‘entity’ and ‘reproduction’, is extremely tough. The formalism discussed here is oversimplified for dozens of reasons, a few of which are listed at the end of my paper. So far we’re only in the first fumbling stages of addressing some very hard questions.

All my theorems do is point out that Wigner’s argument has a major flaw: he’s choosing a way to divide the world into chunks of matter and the environment before choosing his laws of physics. This doesn’t make much sense, and reversing the order dramatically changes the conclusions.

By the way: I just started looking for post-1989 discussions of Wigner’s paper. So far I haven’t found any interesting ones. Here’s a more recent paper that’s somewhat related, which doesn’t mention Wigner’s work:

• Indranil Chakrabarty and Prashant, Non existence of quantum mechanical self replicating machine, 2005.

The considerations here seem more closely related to the Wootters–Zurek no-cloning theorem.


Quantum Information Processing 2011 (Day 2)

12 January, 2011

Here are some very fragmentary notes on the second day of QIP 2011. You can see arXiv references, slides, and videos of the talks here. I’ll just give links to the slides, and again I’ll only mention 3 talks.

Stephanie Wehner gave a talk on the relation between the uncertainty principle and nonlocality in quantum theory. There’s a general framework for physical theories, called “generalized probabilistic theories”, which includes classical and quantum mechanics as special cases. In this framework we can see that while quantum theory is “nonlocal” in the sense made famous by John Bell, even more nonlocal theories are logically possible!

For example, while quantum theory violates the Clauser–Horne–Shimony–Holt inequality, which is obeyed by any local hidden variables theory, it doesn’t violate it to the maximum possible extent. There is a logically conceivable gadget, the Popescu–Rohrlich box, which violates the CHSH inequality to the maximum extent allowed by a theory that prohibits faster-than-light signalling. However, such a gadget would give us implausibly godlike computational powers.

In her talk, Wehner explained another reason not to hope for more nonlocality than quantum theory provides. Namely, given the “steering” ability we have in quantum theory — that is, our ability to prepare a state at one location by doing a measurement at another — the theory cannot be more nonlocal than it is while still obeying the uncertainty principle.

Jérémie Roland gave a talk on generalizations of Grover’s search algorithm. Grover’s algorithm is one of the implausibly godlike powers that quantum computers might give us, if only we could build the bloody things: it’s a way to search a list of N items for a given item in a time that’s only on the order of N^{1/2}. This algorithm assumes we can freely jump from place to place on the list, so instead of a linearly ordered “list” it’s probably better to visualize this data structure as a complete graph with N vertices. Roland’s talk explained a way to generalize this idea to arbitrary graphs.
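
Here’s a quick numerical illustration of the basic square-root speedup (my own sketch, not from Roland’s talk). For N = 1024, about (π/4)·N^{1/2} ≈ 25 ‘reflection’ steps pile essentially all the amplitude onto the marked item, whereas classically you’d expect to examine about 512 items:

```python
import numpy as np

N, marked = 1024, 137                   # list size and the item we seek
psi = np.ones(N) / np.sqrt(N)           # start in the uniform superposition
steps = round(np.pi / 4 * np.sqrt(N))   # ~ 25 iterations
for _ in range(steps):
    psi[marked] *= -1                   # oracle: flip the marked amplitude
    psi = 2 * psi.mean() - psi          # diffusion: reflect about the mean
print(steps, psi[marked] ** 2)          # 25 steps, success probability ~ 0.9997
```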

He began by considering a Markov chain on the graph — that is, a way to blunder randomly from vertex to vertex along the graph, where you can only go from one vertex to another in one step if there’s an edge connecting them. He assumed that it’s reversible and ergodic. Then, starting from this, he described how to fashion a quantum process that finds the desired vertex (or vertices) in about the square root of the time that the Markov chain would take.

Robin Kothari gave a talk on using quantum computation to efficiently detect certain properties of graphs. He considered “minor-closed properties” of graphs. Let me just tell you what those properties are, and tell you about a fascinating older result about them.

The word graph means many slightly different things, but in this blog entry I mean a finite set V whose elements are called vertices, together with a set E of unordered pairs of distinct vertices, which we call edges. So, these are undirected finite graphs without self-loops or multiple edges connecting vertices.

A minor of a graph is another graph that can be obtained from the first one by repeatedly:

1) removing edges,

2) removing vertices that have no edges connected to them, and

3) contracting edges, that is, shrinking them to nothing and then identifying the vertices at both ends.

For example, the triangle is a minor of any cycle, however long: just keep contracting edges until only three vertices are left.

A property of graphs is minor-closed if whenever one graph has it, all its minors also have it.

What are some minor-closed properties? An obvious one is lacking cycles, that is, being a forest. You can get rid of cycles by getting rid of edges and vertices, or contracting edges, but you can’t create them!

A more interesting minor-closed property is planarity. If you can draw a graph on the plane, you can clearly also draw the graph you get by removing an edge, or removing a lone vertex, or contracting an edge.

Now, Kuratowski showed that planar graphs are precisely those that don’t have the complete graph K_5 (five vertices, each joined to each) or the complete bipartite graph K_{3,3} (the ‘three houses, three utilities’ graph) as minors. (Strictly speaking, Kuratowski’s theorem concerns subdivisions; this formulation in terms of minors is due to Wagner.)
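
Detecting one of these properties is easy to try out: the networkx Python library (assuming you have it installed) has a planarity test, and it confirms that Kuratowski’s two forbidden graphs are not planar:

```python
import networkx as nx

# Kuratowski's two forbidden graphs are not planar...
print(nx.check_planarity(nx.complete_graph(5))[0])               # False: K_5
print(nx.check_planarity(nx.complete_bipartite_graph(3, 3))[0])  # False: K_{3,3}
# ...while, say, a hexagon is:
print(nx.check_planarity(nx.cycle_graph(6))[0])                  # True
```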

Similarly, graphs that lack cycles are precisely those that don’t have a triangle as a minor!

So, we could ask if this pattern generalizes. Given a minor-closed property of graphs, is there always some finite set of graphs such that having the property is equivalent to having none of them as a minor?

This is called Wagner’s conjecture, after Klaus Wagner. He published it in 1970.

Wagner’s conjecture is true! It was proved by Neil Robertson and Paul Seymour in 2004. But the really interesting thing, to me, is that their proof takes about 400 or 500 pages!

I find this quite surprising… but then, I wouldn’t have guessed the four-color theorem was so hard to prove, either.

To make sure you understand Wagner’s conjecture, check that if we dropped the word “finite”, it would have a one-sentence proof.