Network Theory I

2 March, 2014


Here’s a video of a talk I gave last Tuesday—part of a series. You can see the slides here:

Network Theory I: electrical circuits and signal-flow graphs.

Click on items in blue, or pictures, for more information.

One reason I’m glad I gave this talk is because afterwards Jamie Vicary pointed out some very interesting consequences of the relations among signal-flow diagrams listed in my talk. It turns out they imply equations familiar from the theory of complementarity in categorical quantum mechanics!

This is the kind of mathematical surprise that makes life worthwhile for me. It seemed utterly shocking at first, but I think I’ve figured out why it happens. Now is not the time to explain… but I’ll have to do it soon, both here and in the paper that Jason Erbele and I are writing about control theory.

For now, besides the slides, the best place to read more about this program is here:

• Brendan Fong, A compositional approach to control theory.


The Elitzur–Vaidman Bomb-Testing Method

24 August, 2013

Quantum mechanics forces us to refine our attitude to counterfactual conditionals: questions about what would have happened if we had done something, even though we didn’t.

“What would the position of the particle have been if I’d measured it… when actually I measured its momentum?” Here you’ll usually get no definite answer.

But sometimes you can use quantum mechanics to find out what would have happened if you’d done something… when classically it seems impossible!

Suppose you have a bunch of bombs. Some have a sensor that will absorb a photon you shine on it, making the bomb explode! Others have a broken sensor that won’t interact with the photon at all.

Can you pick out some working bombs? You can tell if a bomb works by shining a photon on it. But if it works, it blows up—and then it doesn’t work anymore!

So, it sounds impossible. But with quantum mechanics you can do it. You can find some bombs that would have exploded if you had shone photons at them!

Here’s how:

Put a light that emits a single photon at A. Have the photon hit the half-silvered mirror at lower left, so it has a 50% chance of going through to the right, and a 50% chance of reflecting and going up. But in quantum mechanics, it sort of does both!

Put a bomb at B. Recombine the photon’s paths using two more mirrors. Have the two paths meet at a second half-silvered mirror at upper right. You can make it so that if the bomb doesn’t work, the photon interferes with itself and definitely goes to C, not D.

But if the bomb works, it absorbs the photon and explodes unless the photon takes the top route… in which case, when it hits the second half-silvered mirror, it has a 50% chance of going to C and a 50% chance of going to D.

So:

• If the bomb doesn’t work, the photon has a 100% chance of going to C.

• If the bomb works, there’s a 50% chance that it absorbs the photon and explodes. There’s also a 50% chance that the bomb does not explode—and then the photon is equally likely to go to either C or D. So, the photon has a 25% chance of reaching C and a 25% chance of reaching D.

So: if you see a photon at D, you know you have a working bomb… but the bomb has not exploded!

For each working bomb there’s:

• a 50% chance that it explodes,
• a 25% chance that it doesn’t explode but you can’t tell if it works,
• a 25% chance that it doesn’t explode but you can tell that it works.
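If you want to check these numbers, here’s a minimal Python sketch of the interferometer. It treats the two paths as a two-mode state, assumes the standard symmetric beamsplitter convention where reflection picks up a factor of i, and models a working bomb as a measurement of the lower path; the ordinary mirrors only contribute an overall phase, so they’re left out.

```python
import numpy as np

# 50/50 beamsplitter, symmetric convention: reflection picks up a factor of i.
# Mode 0 = lower path (through the bomb), mode 1 = upper path.
B = np.array([[1, 1j], [1j, 1]]) / np.sqrt(2)

photon = np.array([1, 0], dtype=complex)   # photon enters at A

# Dud bomb: the photon sails through both beamsplitters and interferes with itself.
out = B @ (B @ photon)
print("dud bomb:     P(C) =", abs(out[1])**2, " P(D) =", abs(out[0])**2)

# Working bomb: it measures whether the photon took the lower path.
mid = B @ photon
p_boom = abs(mid[0])**2                    # photon absorbed: explosion
survived = np.array([0, mid[1]])           # unnormalized 'no explosion' branch,
out = B @ survived                         # so these outputs are joint probabilities
print("working bomb: P(explode) =", p_boom,
      " P(C) =", abs(out[1])**2, " P(D) =", abs(out[0])**2)
```

Running it gives a 100% chance of C for a dud, and 50%/25%/25% for explosion/C/D with a working bomb, just as claimed above.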

This is the Elitzur–Vaidman bomb-testing method. It was invented by Avshalom Elitzur and Lev Vaidman in 1993. One year later, physicists actually did an experiment to show this idea works… but alas, not using actual bombs!

In 1996, Kwiat showed that using more clever methods, you can reduce the percentage of wasted working bombs as close to zero as you like. And pushing the idea even further, Graeme Mitchison and Richard Jozsa showed in 1999 that you can get a quantum computer to do a calculation for you without even turning it on!

This sounds amazing, but it’s really no more amazing than the bomb-testing method I’ve already described.

References

For details, read these:

• Avshalom Elitzur and Lev Vaidman, Quantum mechanical interaction-free measurements, Found. Phys. 23 (1993), 987–997.

• Paul G. Kwiat, H. Weinfurter, T. Herzog, A. Zeilinger, and M. Kasevich, Experimental realization of “interaction-free” measurements.

• Paul G. Kwiat, Interaction-free measurements.

• Graeme Mitchison and Richard Jozsa, Counterfactual computation, Proc. Roy. Soc. Lond. A457 (2001), 1175–1194.

The picture is from the Wikipedia article, which also has other references:

Elitzur–Vaidman bomb tester, Wikipedia.

Bas Spitters pointed out this category-theoretic analysis of the issue:

• Robert Furber and Bart Jacobs, Towards a categorical account of conditional probability.


Centre for Quantum Mathematics and Computation

6 March, 2013

This fall they’re opening a new Centre for Quantum Mathematics and Computation at Oxford University. They’ll be working on diagrammatic methods for topology and quantum theory, quantum gravity, and computation. You’ll understand what this means if you know the work of the people involved:

• Samson Abramsky
• Bob Coecke
• Christopher Douglas
• Kobi Kremnitzer
• Steve Simon
• Ulrike Tillmann
• Jamie Vicary

All these people are already at Oxford, so you may wonder what’s new about this center. I’m not completely sure, but they’ve gotten money from EPSRC (roughly speaking, the British NSF), and they’re already hiring a postdoc. Applications are due on March 11, so hurry up if you’re interested!

They’re having a conference October 1st to 4th to start things off. I’ll be speaking there, and they tell me that Steve Awodey, Alexander Beilinson, Lucien Hardy, Martin Hyland, Chris Isham, Dana Scott, and Anton Zeilinger have been invited too.

I’m really looking forward to seeing Chris Isham, since he’s one of the most honest and critical thinkers about quantum gravity and the big difficulties we have in understanding this subject—and he has trouble taking airplane flights, so it’s been a long time since I’ve seen him. It’ll also be great to see all the other people I know, and meet the ones I don’t.

For example, back in the 1990s, I used to spend summers in Cambridge talking about n-categories with Martin Hyland and his students Eugenia Cheng, Tom Leinster and Aaron Lauda (who had been an undergraduate at U.C. Riverside). And more recently I’ve been talking a lot with Jamie Vicary about categories and quantum computation—since he was in Singapore some of the time while I was there. (Indeed, I’m going back there this summer, and so will he.)

I’m not as big on n-categories and quantum gravity as I used to be, but I’m still interested in the foundations of quantum theory and how it’s connected to computation, so I think I can give a talk with some new ideas in it.


Quantum Computing Position at U.C. Riverside

6 October, 2012


Here at U.C. Riverside, Alexander Korotkov wants to hire a postdoc in quantum measurement and quantum computing with superconducting qubits.

He writes:

The work will be mainly related to quantum feedback of superconducting qubits. The first experiment was published in Nature today. (Some News & Views discussion can be seen here.) The theory is still rather simple and needs improvement.



Time Crystals

26 September, 2012

When water freezes and forms a crystal, it creates a periodic pattern in space. Could there be something that crystallizes to form a periodic pattern in time? Frank Wilczek, who won the Nobel Prize for helping explain why quarks and gluons trapped inside a proton or neutron act like freely moving particles when you examine them very close up, dreamt up this idea and called it a time crystal:

• Frank Wilczek, Classical time crystals.

• Frank Wilczek, Quantum time crystals.

‘Time crystals’ sound like something from Greg Egan’s Orthogonal trilogy, set in a universe where there’s no fundamental distinction between time and space. But Wilczek wanted to realize these in our universe.

Of course, it’s easy to make a system that behaves in an approximately periodic way while it slowly runs down: that’s how a clock works: tick tock, tick tock, tick tock… But a system that keeps ‘ticking away’ without using up any resource or running down would be a strange new thing. There’s no telling what weird stuff we might do with it.

It’s also interesting because physicists love symmetry. In ordinary physics there are two very important symmetries: spatial translation symmetry, and time translation symmetry. Spatial translation symmetry says that if you move an experiment any amount to the left or right, it works the same way. Time translation symmetry says that if you do an experiment any amount of time earlier or later, it works the same way.

Crystals are fascinating because they ‘spontaneously break’ spatial translation symmetry. Take a liquid, cool it until it freezes, and it forms a crystal which does not look the same if you move it any amount to the right or left. It only looks the same if you move it certain discrete steps to the right or left!

The idea of a ‘time crystal’ is that it’s a system that spontaneously breaks time translation symmetry.

Given how much physicists have studied time translation symmetry and spontaneous symmetry breaking, it’s sort of shocking that nobody before 2012 wrote about this possibility. Or maybe someone did—but I haven’t heard about it.

It takes real creativity to think of an idea so radical yet so simple. But Wilczek is famously creative. For example, he came up with anyons: a new kind of particle, neither boson nor fermion, that lives in a universe where space is 2-dimensional. And now we can make those in the lab.

Unfortunately, Wilczek didn’t know how to make a time crystal. But now a team including Xiang Zhang and Tongcang Li at U.C. Berkeley has a plan for how to do it.

Actually they propose a ring-shaped system that’s periodic in time and also in space, as shown in the picture. They call it a space-time crystal:

Here we propose a space-time crystal of trapped ions and a method to realize it experimentally by confining ions in a ring-shaped trapping potential with a static magnetic field. The ions spontaneously form a spatial ring crystal due to Coulomb repulsion. This ion crystal can rotate persistently at the lowest quantum energy state in magnetic fields with fractional fluxes. The persistent rotation of trapped ions produces the temporal order, leading to the formation of a space-time crystal. We show that these space-time crystals are robust for direct experimental observation. The proposed space-time crystals of trapped ions provide a new dimension for exploring many-body physics and emerging properties of matter.

The new paper is here:

• Tongcang Li, Zhe-Xuan Gong, Zhang-Qi Yin, H. T. Quan, Xiaobo Yin, Peng Zhang, L.-M. Duan and Xiang Zhang, Space-time crystals of trapped ions.

Alas, the press release put out by Lawrence Berkeley National Laboratory is very misleading. It describes the space-time crystal as a clock that

will theoretically persist even after the rest of our universe reaches entropy, thermodynamic equilibrium or “heat-death”.

NO!

First of all, ‘reaching entropy’ doesn’t mean anything. More importantly, as time goes by and things fall apart, this space-time crystal, assuming anyone can actually make it, will also fall apart.

I know what the person talking to the reporter was trying to say: the cool thing about this setup is that it gives a system that’s truly time-periodic, not gradually using up some resource and running down like an ordinary clock. But nonphysicist readers, seeing an article entitled ‘A Clock that Will Last Forever’, may be fooled into thinking this setup is, umm, a clock that will last forever. It’s not.

If this setup were the whole universe, it might keep ticking away forever. But in fact it’s just a small, carefully crafted portion of our universe, and it interacts with the rest of our universe, so it will gradually fall apart when everything else does… or in fact much sooner: as soon as the scientists running it turn off the experiment.

Classifying space-time crystals

What could we do with space-time crystals? It’s way too early to tell, at least for me. But since I’m a mathematician, I’d be happy to classify them. Over on Google+, William Rutiser asked if there are 4d analogs of the 3d crystallographic groups. And the answer is yes! Mathematicians with too much time on their hands have classified the analogues of crystallographic groups in 4 dimensions:

Space group: classification in small dimensions, Wikipedia.

In general these groups are called space groups (see the article for the definition). In 1 dimension there are just two, namely the symmetry groups of this:

— o — o — o — o — o — o —

and this:

— > — > — > — > — > — > —

In 2 dimensions there are 17 and they’re called wallpaper groups. In 3 dimensions there are 230 and they are called crystallographic groups. In 4 dimensions there are 4894, in 5 dimensions there are… hey, Wikipedia leaves this space blank in their table!… and in 6 dimensions there are 28,934,974.

So, there is in principle quite a large subject to study here, if people can figure out how to build a variety of space-time crystals.

There’s already a book on this, if you’re interested:

• Harold Brown, Rolf Bülow, Joachim Neubüser, Hans Wondratschek and Hans Zassenhaus, Crystallographic Groups of Four-Dimensional Space, Wiley Monographs in Crystallography, 1978.


Quantizing Electrical Circuits

2 February, 2012

As you may know, there’s a wonderful and famous analogy between classical mechanics and electrical circuit theory. I explained it back in “week288”, so I won’t repeat that story now. If you don’t know what I’m talking about, take a look!

This analogy opens up the possibility of quantizing electrical circuits by straightforwardly copying the way we quantize classical mechanics problems. I’d often wondered if this would be useful.

It is, and people have done it:

• Michel H. Devoret, Quantum fluctuations in electrical circuits.

Michel Devoret, Rob Schoelkopf and others call this idea quantronics: the study of mesoscopic electronic effects in which collective degrees of freedom like currents and voltages behave quantum mechanically.

I just learned about this from a talk by Sean Barrett here in Coogee. There are lots of cool applications, but right now I’m mainly interested in how this extends the set of analogies between different physical theories.

One interesting thing is how they quantize circuits with resistors. Over in classical mechanics, this corresponds to systems with friction. These systems, called ‘dissipative’ systems, don’t have a conserved energy. More precisely, energy leaks out of the system under consideration and gets transferred to the environment in the form of heat. It’s hard to quantize systems where energy isn’t conserved, so people in quantronics model resistors as infinite chains of inductors and capacitors: see the ‘LC ladder circuit’ on page 15 of Devoret’s notes. This idea is also the basis of the Caldeira–Leggett model of a particle coupled to a heat bath made of harmonic oscillators: it amounts to including the environment as part of the system being studied.
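To see how easy things are before resistors enter the picture, here’s a minimal sketch, with invented component values: an LC circuit is just a harmonic oscillator with angular frequency \omega = 1/\sqrt{LC}, so quantizing it gives evenly spaced energy levels E_n = \hbar \omega (n + 1/2).

```python
import numpy as np

hbar = 1.054571817e-34       # reduced Planck constant, in J*s

L, C = 1e-9, 1e-12           # invented values: a 1 nH inductor and a 1 pF capacitor
omega = 1 / np.sqrt(L * C)   # resonant angular frequency of the LC circuit

# Quantized, the circuit is a harmonic oscillator: E_n = hbar * omega * (n + 1/2)
for n in range(3):
    print(f"E_{n} = {hbar * omega * (n + 0.5):.3e} J")
```

Resistors are precisely what spoils this picture, since they make the energy leak away—hence the LC ladder trick described above.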


A Quantum Hammersley–Clifford Theorem

29 January, 2012

I’m at this workshop:

Sydney Quantum Information Theory Workshop: Coogee 2012, 30 January – 2 February 2012, Coogee Bay Hotel, Coogee, Sydney, organized by Stephen Bartlett, Gavin Brennen, Andrew Doherty and Tom Stace.

Right now David Poulin is speaking about a quantum version of the Hammersley–Clifford theorem, which is a theorem about Markov networks. Let me quickly say a bit about what he proved! This will be a bit rough, since I’m doing it live…

The mutual information between two random variables is

I(A:B) = S(A) + S(B) - S(A,B)

The conditional mutual information between two random variables A and B, given a third random variable C, is

I(A:B|C) = \sum_c p(C=c) I(A:B|C=c)

It’s the average amount of information about B learned by measuring A when you already knew C.

All this works for both classical (Shannon) and quantum (von Neumann) entropy. So, when we say ‘random variable’ above, we could mean it in the traditional classical sense or in the quantum sense.

If I(A:B|C) = 0 then A, C, B has the following Markov property: if you know C, learning A tells you nothing new about B. In condensed matter physics, say a spin system, we get (quantum) random variables from measuring what’s going on in regions, and we have short range entanglement if I(A:B|C) = 0 when C corresponds to some sufficiently thick region separating the regions A and B. We’ll get this in any Gibbs state of a spin chain with a local Hamiltonian.
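Here’s a quick numerical illustration of the classical case, with invented transition probabilities: for a little Markov chain A → C → B, the conditional mutual information I(A:B|C) comes out exactly zero.

```python
import numpy as np

def H(p):
    """Shannon entropy (in bits) of a flattened probability table."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

# A toy chain A -> C -> B: C is a noisy copy of A, and B a noisy copy of C.
pA = np.array([0.5, 0.5])
T1 = np.array([[0.9, 0.1], [0.2, 0.8]])   # p(c|a)
T2 = np.array([[0.7, 0.3], [0.4, 0.6]])   # p(b|c)

p = np.einsum('a,ac,cb->acb', pA, T1, T2)  # joint distribution p(a,c,b)

# I(A:B|C) = H(A,C) + H(C,B) - H(C) - H(A,C,B)
cmi = (H(p.sum(axis=2).ravel()) + H(p.sum(axis=0).ravel())
       - H(p.sum(axis=(0, 2)).ravel()) - H(p.ravel()))
print("I(A:B|C) =", cmi)   # zero up to rounding: C shields A from B
```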

A Markov network is a graph with random variables at vertices (and thus subsets of vertices) such that I(A:B|C) = 0 whenever C is a subset of vertices that completely ‘shields’ the subset A from the subset B: any path from A to B goes through a vertex in C.

The Hammersley–Clifford theorem says that in the classical case we can get any Markov network from the Gibbs state

\exp(-\beta H)

of a local Hamiltonian H, and vice versa. Here a Hamiltonian is local if it is a sum of terms, one depending on the degrees of freedom in each clique in the graph:

H = \sum_{C \in \mathrm{cliques}} h_C

Hayden, Jozsa, Petz and Winter gave a quantum generalization of one direction of this result to graphs that are just ‘chains’, like this:

o—o—o—o—o—o—o—o—o—o—o—o

Namely: for such graphs, any quantum Markov network is the Gibbs state of some local Hamiltonian. Now Poulin has shown the same for all graphs. But the converse is, in general, false. If the different terms h_C in a local Hamiltonian all commute, its Gibbs state will have the Markov property. But otherwise, it may not.
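Here’s a small numerical check of that last point on a 3-vertex chain of qubits, with invented couplings: with commuting ZZ terms, the Gibbs state has I(A:B|C) = 0, while adding a transverse field on the middle qubit, so the terms no longer commute, makes it positive.

```python
import numpy as np
from scipy.linalg import expm

I2 = np.eye(2)
Z = np.diag([1.0, -1.0])
X = np.array([[0.0, 1.0], [1.0, 0.0]])

def kron(*ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

def S(rho):
    """von Neumann entropy (in nats)."""
    w = np.linalg.eigvalsh(rho)
    w = w[w > 1e-12]
    return -np.sum(w * np.log(w))

def ptrace(rho, keep, n=3):
    """Trace out the qubits of an n-qubit state not listed in `keep`."""
    t = rho.reshape([2] * (2 * n))
    for q in sorted(set(range(n)) - set(keep), reverse=True):
        m = t.ndim // 2
        t = np.trace(t, axis1=q, axis2=q + m)
    d = 2 ** len(keep)
    return t.reshape(d, d)

def gibbs(H, beta=1.0):
    rho = expm(-beta * H)
    return rho / np.trace(rho)

def cmi(rho):
    """I(A:B|C) with A = qubit 0, C = qubit 1, B = qubit 2."""
    return (S(ptrace(rho, [0, 1])) + S(ptrace(rho, [1, 2]))
            - S(ptrace(rho, [1])) - S(rho))

H0 = kron(Z, Z, I2) + kron(I2, Z, Z)    # commuting terms on the chain A - C - B
H1 = H0 + 0.7 * kron(I2, X, I2)         # transverse field: terms no longer commute

print("commuting terms:     I(A:B|C) =", cmi(gibbs(H0)))   # ~0: Markov
print("non-commuting terms: I(A:B|C) =", cmi(gibbs(H1)))   # positive: not Markov
```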

For some related material, see:

• David Poulin, Quantum graphical models and belief propagation.


Probabilities Versus Amplitudes

5 December, 2011

Here are the slides of the talk I’m giving at the CQT Annual Symposium on Wednesday afternoon, which is Tuesday morning for a lot of you. If you catch mistakes, I’d love to hear about them before then!

Probabilities versus amplitudes.

Abstract: Some ideas from quantum theory are just beginning to percolate back to classical probability theory. For example, there is a widely used and successful theory of “chemical reaction networks”, which describes the interactions of molecules in a stochastic rather than quantum way. If we look at it from the perspective of quantum theory, this turns out to involve creation and annihilation operators, coherent states and other well-known ideas—but with a few big differences. The stochastic analogue of quantum field theory is also used in population biology, and here the connection is well-known. But what does it mean to treat wolves as fermions or bosons?


Liquid Light

28 November, 2011

Elisabeth Giacobino works at the École Normale Supérieure in Paris. Last week she gave a talk at the Centre for Quantum Technologies. It was about ‘polariton condensates’. You can see a video of her talk here.

What’s a polariton? It’s a strange particle: a blend of matter and light. Polaritons are mostly made of light… with just enough matter mixed in so they can form a liquid! This liquid can form eddies just like water. Giacobino and her team of scientists have actually gotten pictures:

Physicists call this liquid a ‘polariton condensate’, but normal people may better appreciate how wonderful it is if we call it liquid light. That’s not 100% accurate, but it’s close—you’ll see what I mean in a minute.

Here’s a picture of Elisabeth Giacobino (at right) and her coworkers in 2010—not exactly the same team that’s working on liquid light, but the best I can find:

How to make liquid light

How do you make liquid light?

First, take a thin film of some semiconductor like gallium arsenide. It’s full of electrons roaming around, so imagine a sea of electrons, like water. If you knock out an electron with enough energy, you’ll get a ‘hole’ which can move around like a particle of its own. Yes, the absence of a thing can act like a thing. Imagine an air bubble in the sea.

All this so far is standard stuff. But now for something more tricky: if you knock an electron just a little, it won’t go far from the hole it left behind. They’ll be attracted to each other, so they’ll orbit each other!

What you’ve got now is like a hydrogen atom—but instead of an electron and a proton, it’s made from an electron and a hole! It’s called an exciton. In Giacobino’s experiments, the excitons are 200 times as big as hydrogen atoms.

Excitons are exciting, but not exciting enough for us. So next, put a mirror on each side of your thin film. Now light can bounce back and forth. The light will interact with the excitons. If you do it right, this lets a particle of light—called a photon—blend with an exciton and form a new particle called a polariton.

How does a photon ‘blend’ with an exciton? Umm, err… this involves quantum mechanics. In quantum mechanics you can take two possible situations and add them and get a new one, a kind of ‘blend’ called a ‘superposition’. ‘Schrödinger’s cat’ is what you get when you blend a live cat and a dead cat. People like to argue about why we don’t see half-live, half-dead cats. But never mind: we can see a blend of a photon and an exciton! Giacobino and her coworkers have done just that.

The polaritons they create are mostly light, with just a teeny bit of exciton blended in. Photons have no mass at all. So, perhaps it’s not surprising that their polaritons have a very small mass: about 10^{-5} times as heavy as an electron!

They don’t last very long: just about 4–10 picoseconds. A picosecond is a trillionth of a second, or 10^{-12} seconds. After that they fall apart. However, this is long enough for polaritons to do lots of interesting things.

For starters, polaritons interact with each other enough to form a liquid. But it’s not just any ordinary liquid: it’s often a superfluid, like very cold liquid helium. This means, among other things, that it has almost no viscosity.

So: it’s even better than liquid light: it’s superfluid light!

The flow of liquid light

What can you do with liquid light?

For starters, you can watch it flow around obstacles. Semiconductors have ‘defects’—little flaws in the crystal structure. These act as obstacles to the flow of polaritons. And Giacobino and her team have seen the flow of polaritons around defects in the semiconductor:

The two pictures at left are two views of the polariton condensate flowing smoothly around a defect. In these pictures the condensate is a superfluid.

The two pictures in the middle show a different situation. Here the polariton condensate is viscous enough so that it forms a trail of eddies as it flows past the defect. Yes, eddies of light!

And the two pictures at right show yet another situation. In every fluid, we can have waves of pressure. This is called… ‘sound’. Yes, this is how ordinary sound works in air, or under water. But we can also have sound in a polariton condensate!

That’s pretty cool: sound in liquid light! But wait. We haven’t gotten to the really cool part yet. Whenever you have a fluid moving past an obstacle faster than the speed of sound, you get a ‘shock wave’: the obstacle leaves an expanding trail of sound in its wake, behind it, because the sound can’t catch up. That’s why jets flying faster than sound leave a sonic boom behind them.

And that’s what you’re seeing in the pictures at right. The polariton condensate is flowing past the defect faster than the speed of sound, which happens to be around 850,000 meters per second in this experiment. We’re seeing the shock wave it makes. So, we’re seeing a sonic boom in liquid light!

It’s possible we’ll be able to use polariton condensates for interesting new technologies. Giacobino and her team are also considering using them to study Hawking radiation: the feeble glow that black holes emit according to Hawking’s predictions. There aren’t black holes in polariton condensates, but it may be possible to create a similar kind of radiation. That would be really cool!

But to me, just being able to make a liquid consisting mostly of light, and study its properties, is already a triumph: just for the beauty of it.

Scary technical details

All the pictures of polariton condensates flowing around a defect came from here:

• A. Amo, S. Pigeon, D. Sanvitto, V. G. Sala, R. Hivet, I. Carusotto, F. Pisanello, G. Leménager, R. Houdré, E. Giacobino, C. Ciuti, and A. Bramati, Hydrodynamic solitons in polariton superfluids.

and this is the paper to read for more details.

I tried to be comprehensible to ordinary folks, but there are a few more things I can’t resist saying.

First, there are actually many different kinds of polaritons. In general, polaritons are quasiparticles formed by the interaction of photons and matter. For example, in some crystals sound acts like it’s made of particles, and these quasiparticles are called ‘phonons’. But sometimes phonons can interact with light to form quasiparticles—and these are called ‘phonon-polaritons’. I’ve only been talking about ‘exciton-polaritons’.

If you know a bit about superfluids, you may be interested to hear that the wavy patterns show the phase of the order parameter ψ in the Landau–Ginzburg theory of superfluids.

If you know about quantum field theory, you may be interested to know that the Hamiltonian describing photon-exciton interactions involves terms roughly like

\alpha a^\dagger a + \beta b^\dagger b + \gamma (a^\dagger b + b^\dagger a)

where a is the annihilation operator for photons, b is the annihilation operator for excitons, the Greek letters are various constants, and the third term describes the interaction of photons and excitons. We can simplify this Hamiltonian by defining new particles that are linear combinations of photons and excitons. It’s just like diagonalizing a matrix; we get something like

\delta c^\dagger c + \epsilon d^\dagger d

where c and d are certain linear combinations of a and b. These act as annihilation operators for our new particles… and one of these new particles is the very light ‘polariton’ I’ve been talking about!
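If you want to see this in numbers, here’s a one-mode sketch with invented constants: diagonalizing the 2×2 coefficient matrix gives the two polariton energies and the photon/exciton content of each new particle.

```python
import numpy as np

# Invented constants, in arbitrary energy units:
alpha, beta, gamma = 1.00, 1.05, 0.04   # photon energy, exciton energy, coupling

# In the (photon, exciton) basis, the Hamiltonian above is the matrix
M = np.array([[alpha, gamma],
              [gamma, beta]])

energies, modes = np.linalg.eigh(M)     # the constants delta, epsilon and the new modes
print("polariton energies:", energies)
print("lower polariton = %.3f photon + %.3f exciton" % (modes[0, 0], modes[1, 0]))
```

With a small coupling gamma, the lower mode comes out almost entirely photon—the very light polariton of the story above.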


Is Life Improbable?

31 May, 2011

Mine? Yes. And maybe you’ve wondered just how improbable your life is. But that’s not really the question today…

Here at the Centre for Quantum Technologies, Dagomir Kaszlikowski asked me to give a talk on this paper:

• John Baez, Is life improbable?, Foundations of Physics 19 (1989), 91–95.

This was the second paper I wrote, right after my undergraduate thesis. Nobody ever seemed to care about it, so it’s strange—but nice—to finally be giving a talk on it.

My paper does not try to settle the question its title asks. Rather, it tries to refute the argument here:

• Eugene P. Wigner, The probability of the existence of a self-reproducing unit, Symmetries and Reflections, Indiana University Press, Bloomington, 1967, pp. 200–208.

According to Wigner, his argument

purports to show that, according to standard quantum mechanical theory, the probability is zero for the existence of self-reproducing states, i.e., organisms.

Given how famous Eugene Wigner is (he won a Nobel prize, after all) and how earth-shattering his result would be if true, it’s surprising how little criticism his paper has received. David Bohm mentioned it approvingly in 1969. In 1974 Hubert Yockey cited it, saying

for all physics has to offer, life should never have appeared and if it ever did it would soon die out.

As you’d expect, there are some websites mentioning Wigner’s argument as evidence that some supernatural phenomenon is required to keep life going. Wigner himself believed it was impossible to formulate quantum theory in a fully consistent way without referring to consciousness. Since I don’t believe either of these claims, I think it’s good to understand the flaw in Wigner’s argument.

So, let me start by explaining his argument. Very roughly, it purports to show that if there are many more ways a chunk of matter can be ‘dead’ than ‘living’, the chance is zero that we can choose some definition of ‘living’ and a suitable ‘nutrient’ state such that every ‘living’ chunk of matter can interact with this ‘nutrient’ state to produce two ‘living’ chunks.

In making this precise, Wigner considers more than just two chunks of matter: he also allows there to be an ‘environment’. So, he considers a quantum system made of three parts, and described by a Hilbert space

H = H_1 \otimes H_1 \otimes H_2

Here the first H_1 corresponds to a chunk of matter. The second H_1 corresponds to another chunk of matter. The space H_2 corresponds to the ‘environment’. Suppose we wait for a certain amount of time and see what the system does; this will be described by some unitary operator

S: H \to H

Wigner asks: if we pick this operator S in a random way, what’s the probability that there’s some n-dimensional subspace of ‘living organism’ states in H_1, and some ‘nutrient plus environment’ state in H_1 \otimes H_2, such that the time evolution sends any living organism together with the nutrient plus environment to two living organisms and some state of the environment?

A bit more precisely: suppose we pick S in a random way. Then what’s the probability that there exists an n-dimensional subspace

V \subseteq H_1

and a state

w \in H_1 \otimes H_2

such that S maps every vector in V \otimes \langle w \rangle to a vector in V \otimes V \otimes H_2? Here \langle w \rangle means the 1-dimensional subspace spanned by the vector w.

And his answer is: if

\mathrm{dim}(H_1) \gg n

then this probability is zero.

You may need to reread the last few paragraphs a couple times to understand Wigner’s question, and his answer. In case you’re still confused, I should say that V \subseteq H_1 is what I’m calling the space of ‘living organism’ states of our chunk of matter, while w \in H_1 \otimes H_2 is the ‘nutrient plus environment’ state.

Now, Wigner did not give a rigorous proof of his claim, nor did he say exactly what he meant by ‘probability’: he didn’t specify a probability measure on the space of unitary operators on H. But if we use the obvious choice (called ‘normalized Haar measure’) his argument can most likely be turned into a proof.

So, I don’t want to argue with his math. I want to argue with his interpretation of the math. He concludes that

the chances are nil for the existence of a set of ‘living’ states for which one can find a nutrient of such nature that interaction always leads to multiplication.

The problem is that he fixed the decomposition of the Hilbert space H as a tensor product

H = H_1 \otimes H_1 \otimes H_2

before choosing the time evolution operator S. There is no good reason to do that. It only makes sense to split up a physical system into parts this way after we have some idea of what the dynamics is. An abstract Hilbert space doesn’t come with a favored decomposition as a tensor product into three parts!

If we let ourselves pick this decomposition after picking the operator S, the story changes completely. My paper shows:

Theorem 1. Let H, H_1 and H_2 be finite-dimensional Hilbert spaces with H \cong H_1 \otimes H_1 \otimes H_2. Suppose S : H \to H is any unitary operator, suppose V is any subspace of H_1, and suppose w is any unit vector in H_1 \otimes H_2. Then there is a unitary isomorphism

U: H \to H_1 \otimes H_1 \otimes H_2

such that if we identify H with H_1 \otimes H_1 \otimes H_2 using U, the operator S maps V \otimes \langle w \rangle into V \otimes V \otimes H_2.

In other words, if we allow ourselves to pick the decomposition after picking S, we can always find a ‘living organism’ subspace of any dimension we like, together with a ‘nutrient plus environment’ state that allows our living organism to reproduce.

However, if you look at the proof in my paper, you’ll see it’s based on a kind of cheap trick (as I forthrightly admit). Namely, I pick the ‘nutrient plus environment’ state to lie in V \otimes H_2, so the nutrient actually consists of another organism!

This goes to show that you have to be very careful about theorems like this. To prove that life is improbable, you need to find some necessary conditions for what counts as life, and show that these are improbable (in some sense, and of course it matters a lot what that sense is). Refuting such an argument does not prove that life is probable: for that you need some sufficient conditions for what counts as life. And either way, if you prove a theorem using a ‘cheap trick’, it probably hasn’t gotten to grips with the real issues.

I also show that as the dimension of H approaches infinity, the probability approaches 1 that we can get reproduction with a 1-dimensional ‘living organism’ subspace V and a ‘nutrient plus environment’ state that lies in V^\perp \otimes H_2. In other words, the ‘nutrient’ is not just another organism sitting there all ready to go!

More precisely:

Theorem 2. Let H, H_1 and H_2 be finite-dimensional Hilbert spaces with \mathrm{dim}(H) = \mathrm{dim}(H_1)^2 \cdot \mathrm{dim}(H_2). Let \mathbf{S'} be the set of unitary operators S: H \to H with the following property: there’s a unit vector v \in H_1, a unit vector w \in V^\perp \otimes H_2 (where V = \langle v \rangle), and a unitary isomorphism

U: H \to H_1 \otimes H_1 \otimes H_2

such that if we identify H with H_1 \otimes H_1 \otimes H_2 using U, the operator S maps v \otimes w into \langle v\rangle \otimes \langle v \rangle \otimes H_2. Then the normalized Haar measure of \mathbf{S'} approaches 1 as \mathrm{dim}(H) \to \infty.

Here V^\perp is the orthogonal complement of V \subseteq H_1; that is, the space of all vectors perpendicular to V.

I won’t include the proofs of these theorems, since you can see them in my paper.

Just to be clear: I certainly don’t think these theorems prove that life is probable! You can’t have theorems without definitions, and I think that coming up with a good general definition of ‘life’, or even supposedly simpler concepts like ‘entity’ and ‘reproduction’, is extremely tough. The formalism discussed here is oversimplified for dozens of reasons, a few of which are listed at the end of my paper. So far we’re only in the first fumbling stages of addressing some very hard questions.

All my theorems do is point out that Wigner’s argument has a major flaw: he’s choosing a way to divide the world into chunks of matter and the environment before choosing his laws of physics. This doesn’t make much sense, and reversing the order dramatically changes the conclusions.

By the way: I just started looking for post-1989 discussions of Wigner’s paper. So far I haven’t found any interesting ones. Here’s a more recent paper that’s somewhat related, which doesn’t mention Wigner’s work:

• Indranil Chakrabarty and Prashant, Non existence of quantum mechanical self replicating machine, 2005.

The considerations here seem more closely related to the Wootters–Zurek no-cloning theorem.

