Quantum Foundations Mailing List

9 December, 2010

Bob Coecke and Jamie Vicary have started a mailing list on “quantum foundations”.

They write:

It was agreed by many that the existence of a quantum foundations mailing list, with a wide scope and involving the broad international community, was long overdue. This moderated list (to avoid spam or abuse) will mainly distribute announcements of conferences and other international events in the area, as well as other relevant adverts such as jobs in the area. It is set up at Oxford University, which should provide a guarantee of stability and sustainability. The scope ranges from the mathematical end of quantum foundations research to the purely philosophical issues.

(UN)SUBSCRIBING INSTRUCTIONS:

To subscribe to the list, send a blank email to
quantum-foundations-subscribe@maillist.ox.ac.uk

To unsubscribe from the list, send a blank email to
quantum-foundations-unsubscribe@maillist.ox.ac.uk

Any complaints etc. can be sent to Bob Coecke and Jamie Vicary.

I have deleted their email addresses here, along with the address for posting articles to the list, to lessen the amount of spam these addresses get. But it’s easy enough to find Bob and Jamie’s addresses, and presumably when you subscribe you’ll be told how to post messages!


Solèr’s Theorem

1 December, 2010

Here’s another post on the foundations of quantum theory:

Solèr’s Theorem.

It’s about an amazing result, due to Maria Pia Solèr, which singles out real, complex and quaternionic Hilbert spaces as special. If you want to talk about it, please join the conversation over on the n-Category Café.

All these recent highly mathematical blog posts are a kind of spinoff of a paper I’m writing on quantum theory and division algebras. That paper is almost done. Then our normal programming will continue: I’ll keep going through Pacala and Socolow’s “stabilization wedges”, and also do a This Week’s Finds where I interview Tim Palmer.


State-Observable Duality

25 November, 2010

It’s confusing having two blogs if you only have one life. I post about my work at the Centre for Quantum Technology here. I post about abstract algebra at the n-Category Café. But what do I do when my work at the Centre for Quantum Technology starts using a lot of abstract algebra?

I guess this time I’ll do posts over there, but link to them here:

State-Observable Duality (Part 1).

State-Observable Duality (Part 2).

State-Observable Duality (Part 3).

This is a 3-part series on the foundations of quantum theory, leading up to a discussion of a concept I call ‘state-observable duality’. The first part talks about normed division algebras. The second talks about the Jordan-von Neumann-Wigner paper on Jordan algebras in quantum theory. The third talks about state-observable duality and the Koecher-Vinberg theorem.

I think I’ll take comments over there, so our discussion of environmental issues here doesn’t get interrupted!


Entropy and Uncertainty

19 October, 2010

I was going to write about a talk at the CQT, but I found a preprint lying on a table in the lecture hall, and it was so cool I’ll write about that instead:

• Mario Berta, Matthias Christandl, Roger Colbeck, Joseph M. Renes, Renato Renner, The uncertainty principle in the presence of quantum memory, Nature Physics, July 25, 2010.

Actually I won’t talk about the paper per se, since it’s better if I tell you a more basic result that I first learned from reading this paper: the entropic uncertainty principle!

Everyone loves the concept of entropy, and everyone loves the uncertainty principle. Even folks who don’t understand ’em still love ’em. They just sound so mysterious and spooky and dark. I love ’em too. So, it’s nice to see a mathematical relation between them.

I explained entropy back here, so let me say a word about the uncertainty principle. It’s a limitation on how accurately you can measure two things at once in quantum mechanics. Sometimes you can only know a lot about one thing if you don’t know much about the other. This happens when those two things “fail to commute”.

Mathematically, the usual uncertainty principle says this:

\Delta A \cdot \Delta B \ge \frac{1}{2} |\langle [A,B] \rangle |

In plain English: the uncertainty in A times the uncertainty in B is at least half the absolute value of the expected value of their commutator

[A,B] = A B - B A

Whoops! That started off as plain English, but it degenerated into plain gibberish near the end… which is probably why most people don’t understand the uncertainty principle. I don’t think I’m gonna cure that today, but let me just nail down the math a bit.

Suppose A and B are observables — and to keep things really simple, by observable I’ll just mean a self-adjoint n \times n matrix. Suppose \psi is a state: that is, a unit vector in \mathbb{C}^n. Then the expected value of A in the state \psi is the average answer you get when you measure that observable in that state. Mathematically it’s equal to

\langle A \rangle = \langle \psi, A \psi \rangle

Sorry, there are a lot of angle brackets running around here: the ones at right stand for the inner product in \mathbb{C}^n, which I’m assuming you understand, while the ones at left are being defined by this equation. They’re just a shorthand.

Once we can compute averages, we can compute standard deviations, so we define the standard deviation of an observable A in the state \psi to be \Delta A where

(\Delta A)^2 = \langle A^2 \rangle - \langle A \rangle^2

Got it? Just like in probability theory. So now I hope you know what every symbol here means:

\Delta A \cdot \Delta B \ge \frac{1}{2} |\langle [A,B] \rangle |

and if you’re a certain sort of person you can have fun going home and proving this. Hint: it takes an inequality to prove an inequality. Other hint: what’s the most important inequality in the universe?
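
If you’re more the computational sort, here’s a tiny numerical check (my own throwaway sketch, not from any of the papers mentioned here) that picks a random state and two random observables and verifies the inequality:

import numpy as np

# Check Delta A . Delta B >= (1/2)|<[A,B]>| for random observables A, B
# and a random state psi in C^n.
rng = np.random.default_rng(0)
n = 4

def random_observable(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2          # self-adjoint n x n matrix

A, B = random_observable(n), random_observable(n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)               # unit vector: our state

def expect(M):
    return np.vdot(psi, M @ psi)         # <psi, M psi>

dA = np.sqrt(expect(A @ A).real - expect(A).real ** 2)
dB = np.sqrt(expect(B @ B).real - expect(B).real ** 2)
bound = 0.5 * abs(expect(A @ B - B @ A))
print(dA * dB >= bound - 1e-12)          # True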

But now for the fun part: entropy!

Whenever you have an observable A and a state \psi, you get a probability distribution: the distribution of outcomes when you measure that observable in that state. And this probability distribution has an entropy! Let’s call the entropy S(A). I’ll define it a bit more carefully later.

But the point is: this entropy is really a very nice way to think about our uncertainty, or ignorance, of the observable A. It’s better, in many ways, than the standard deviation. For example, it doesn’t change if we multiply A by 2. The standard deviation doubles, but we’re not twice as ignorant!

Entropy is invariant under lots of transformations of our observables. So we should want an uncertainty principle that only involves entropy. And here it is, the entropic uncertainty principle:

S(A) + S(B) \ge \mathrm{log} \, \frac{1}{c}

Here c is defined as follows. To keep things simple, suppose that A is nondegenerate, meaning that all its eigenvalues are distinct. If it’s not, we can tweak it a tiny bit and it will be. Let its eigenvectors be called \phi_i. Similarly, suppose B is nondegenerate and call its eigenvectors \chi_j. Then we let

c = \mathrm{max}_{i,j} |\langle \phi_i, \chi_j \rangle|^2

Note this becomes 1 when there’s an eigenvector of A that’s also an eigenvector of B. In this case it’s possible to find a state where we know both observables precisely, and in this case also

\mathrm{log}\, \frac{1}{c} = 0

And that makes sense: in this case S(A) + S(B), which measures our ignorance of both observables, is indeed zero.

But if there’s no eigenvector of A that’s also an eigenvector of B, then c is smaller than 1, so

\mathrm{log} \, \frac{1}{c} > 0

so the entropic uncertainty principle says we really must have some ignorance about either A or B (or both).

So the entropic uncertainty principle makes intuitive sense. But let me define the entropy S(A), to make the principle precise. If \phi_i are the eigenvectors of A, the probabilities of getting various outcomes when we measure A in the state \psi are

p_i = |\langle \phi_i, \psi \rangle|^2

So, we define the entropy by

S(A) = - \sum_i p_i \; \mathrm{log}\, p_i

Here you can use any base for your logarithm, as long as you’re consistent. Mathematicians and physicists use e, while computer scientists, who prefer integers, settle for the best known integer approximation: 2.

Just kidding! Darn — now I’ve insulted all the computer scientists. I hope none of them reads this.
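
Anyway, if you’d like to see the entropic uncertainty principle in action, here’s a little sketch you can run (my own code, using natural logarithms; not from any of the papers below) that computes S(A), S(B) and the constant c for random observables and checks the bound:

import numpy as np

rng = np.random.default_rng(1)
n = 4

def random_observable(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

A, B = random_observable(n), random_observable(n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi /= np.linalg.norm(psi)

# Columns of phi and chi are the eigenvectors of A and B.
phi = np.linalg.eigh(A)[1]
chi = np.linalg.eigh(B)[1]

def entropy(eigvecs):
    p = np.abs(eigvecs.conj().T @ psi) ** 2   # p_i = |<phi_i, psi>|^2
    p = p[p > 1e-12]                          # drop zero-probability outcomes
    return -np.sum(p * np.log(p))

c = np.max(np.abs(phi.conj().T @ chi) ** 2)   # max_{i,j} |<phi_i, chi_j>|^2
print(entropy(phi) + entropy(chi) >= np.log(1 / c))   # True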

Who came up with this entropic uncertainty principle? I’m not an expert on this, so I’ll probably get this wrong, but I gather it came from an idea of Deutsch:

• David Deutsch, Uncertainty in quantum measurements, Phys. Rev. Lett. 50 (1983), 631-633.

Then it got improved and formulated as a conjecture by Kraus:

• K. Kraus, Complementary observables and uncertainty relations, Phys. Rev. D 35 (1987), 3070-3075.

and then that conjecture was proved here:

• H. Maassen and J. B. Uffink, Generalized entropic uncertainty relations, Phys. Rev. Lett. 60 (1988), 1103-1106.

The paper I found in the lecture hall proves a more refined version where the system being measured — let’s call it X — is entangled with the observer’s memory apparatus — let’s call it O. In this situation they show

S(A|O) + S(B|O) \ge S(X|O) + \mathrm{log} \, \frac{1}{c}

where I’m using a concept of “conditional entropy”: the entropy of something given something else. Here’s their abstract:

The uncertainty principle, originally formulated by Heisenberg, clearly illustrates the difference between classical and quantum mechanics. The principle bounds the uncertainties about the outcomes of two incompatible measurements, such as position and momentum, on a particle. It implies that one cannot predict the outcomes for both possible choices of measurement to arbitrary precision, even if information about the preparation of the particle is available in a classical memory. However, if the particle is prepared entangled with a quantum memory, a device that might be available in the not-too-distant future, it is possible to predict the outcomes for both measurement choices precisely. Here, we extend the uncertainty principle to incorporate this case, providing a lower bound on the uncertainties, which depends on the amount of entanglement between the particle and the quantum memory. We detail the application of our result to witnessing entanglement and to quantum key distribution.

By the way, on a really trivial note…

My wisecrack about 2 being the best known integer approximation to e made me wonder: since 3 is actually closer to e, are there some applications where ternary digits would theoretically be better than binary ones? I’ve heard of “trits” but I don’t actually know any applications where they’re optimal.

Oh — here’s one.


Quantum Entanglement from Feedback Control

28 September, 2010

Now André Carvalho from the physics department at Australian National University in Canberra is talking about “Quantum feedback control for entanglement production”. He’s in a theory group with strong connections to the atom laser experimental group at ANU. This theory group works on measurement and control theory for Bose-Einstein condensates and atom lasers.

The good news: recent advances in real-time monitoring allow the control of quantum systems using feedback.

The big question: can we use feedback to design the system dynamics to produce and stabilize entangled states?

The answer: yes.

Start by considering two atoms in a cavity, interacting with a laser. Think of each atom as a 2-state system — so the Hilbert space of the pair of atoms is

\mathbb{C}^2 \otimes \mathbb{C}^2

We’ll say what the atoms are doing using not a pure state (a unit vector) but a mixed state (a density matrix). The atoms’ time evolution will be described by Lindbladian mechanics. This is a generalization of Hamiltonian mechanics that allows for dissipative processes — processes that increase entropy! A bit more precisely, we’re talking here about the quantum analogue of a Markov process. Even more precisely, we’re talking about the Lindblad equation: the most general equation describing a time evolution for density matrices that is time-translation-invariant, Markovian, trace preserving and completely positive.
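
For the curious, here is the equation itself; this is the general textbook form, not anything specific to Carvalho’s talk. If \rho is the density matrix, H is the Hamiltonian, and the L_k are “jump operators” describing the dissipative processes, then

\frac{d \rho}{d t} = -i [H, \rho] + \sum_k \left( L_k \rho L_k^\dagger - \frac{1}{2} \{ L_k^\dagger L_k , \rho \} \right)

If all the L_k vanish we’re back to ordinary unitary time evolution; the extra terms are what allow entropy to increase.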

As time passes, an initially entangled 2-atom state will gradually ‘decohere’, losing its entanglement.

But next, introduce feedback. Can we do this in a way that makes the entanglement become large as time passes?

With ‘homodyne monitoring’, you can do pretty well. But with ‘photodetection monitoring’, you can do great! As time passes, every state will evolve to approach the maximally entangled state: the ‘singlet state’. This is the density matrix

| \psi \rangle \langle \psi |

corresponding to the pure state

|\psi \rangle = \frac{1}{\sqrt{2}} (\uparrow \otimes \downarrow - \downarrow \otimes \uparrow)

So: the system dynamics can be engineered using feedback to produce and stabilize highly entangled states. In fact this is true not just for 2-atom systems, but also for multi-atom systems! And at least for 2-atom systems, this scheme is robust against imperfections and detection inefficiencies. The question of robustness is still under study for multi-atom systems.
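
If you want to convince yourself that the singlet state really is maximally entangled, here’s a quick computation you can run (my own illustration, not part of the talk): trace out one atom and you’re left with the maximally mixed state, so the entanglement entropy is \mathrm{log} \, 2.

import numpy as np

up, down = np.array([1.0, 0.0]), np.array([0.0, 1.0])
psi = (np.kron(up, down) - np.kron(down, up)) / np.sqrt(2)   # singlet state
rho = np.outer(psi, psi.conj())                              # |psi><psi|

# Partial trace over the second atom.
rho1 = np.einsum('ikjk->ij', rho.reshape(2, 2, 2, 2))
print(rho1)                                 # [[0.5, 0], [0, 0.5]]: maximally mixed

p = np.linalg.eigvalsh(rho1)
print(-np.sum(p * np.log(p)), np.log(2))    # entanglement entropy = log 2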

For more details, try:

• A. R. R. Carvalho, A. J. S. Reid, and J. J. Hope, Controlling entanglement by direct quantum feedback.

Abstract:
We discuss the generation of entanglement between electronic states of two atoms in a cavity using direct quantum feedback schemes. We compare the effects of different control Hamiltonians and detection processes in the performance of entanglement production and show that the quantum-jump-based feedback proposed by us in Phys. Rev. A 76 010301(R) (2007) can protect highly entangled states against decoherence. We provide analytical results that explain the robustness of jump feedback, and also analyse the perspectives of experimental implementation by scrutinising the effects of imperfections and approximations in our model.

How do homodyne and photodetection feedback work? I’m not exactly sure, but this quote helps:

In the homodyne-based scheme, the detector registers a continuous photocurrent, and the feedback Hamiltonian is constantly applied to the system. Conversely, in the photocounting-based strategy, the absence of signal predominates and the control is only triggered after a detection click, i.e. a quantum jump, occurs.


Quantum Optics with Quantum Dots

7 September, 2010

Here at the CQT, Alexia Auffèves from the Institut Néel is talking about “Revisiting cavity quantum electrodynamics with quantum dots and semiconducting cavities (when decoherence becomes a resource)”.

She did her graduate work doing experimental work on Rydberg atoms — that is, atoms in highly excited states, which can be much larger than normal atoms. But then — according to her collaborator Marcelo Santos, who introduced her — she “went over to the dark side” and became a theorist. She now works at a quantum optics group at Institut Néel in Grenoble, France.

This group does a lot of quantum optics with quantum dots. If you’ve never heard about quantum optics or quantum dots, I’ve got to tell you about them: they’re really quite cool! So, the next section will be vastly less sophisticated than Auffèves’ actual talk. Experts can hold their noses and skip straight to the section after that.

An elementary digression

Quantum optics is the branch of optics where we take into account the fact that light obeys the rules of quantum theory. So, light energy comes in discrete packets, called “photons”. You shouldn’t visualize a photon as a tiny pellet: light comes in waves, which can be very smeared out, but the strength of any particular wave comes in discrete amounts: no photons, one photon, two photons, etc. To really understand photons, you need to learn a theory called quantum electrodynamics, or QED for short.

A quantum dot is a very tiny piece of semiconductor, often stuck onto a semiconductor made of some different material. Here are a bunch of quantum dots made by an outfit called Essential Research:

What’s a semiconductor? It’s a material in which electrons and holes like to run around. Any matter made of atoms has a lot of electrons in it, of course. But a semiconductor can have some extra electrons, not attached to any atom, which roam around freely. It can also have some missing electrons, and these so-called holes can also roam around freely, just as if they were particles.

Now, let quantum optics meet semiconductor! If you hit a semiconductor with a photon, you can create an electron-hole pair: an extra electron here, and a missing one there. If you think about it, this is just a fancy way of talking about knocking an electron off one of the atoms! But it’s a useful way of thinking.

Imagine, for example, a line of kids each holding one apple in each hand. You knock an apple out of one kid’s hand and another kid catches it. Now you’ve got a “hole” in your line of apples, but also a kid with an extra apple further down the line. As the kids try to correct your disturbance by passing their apples around, you will see the extra apple move along, and also perhaps see the hole move along, until they meet and — poof! — annihilate each other.

To strain your powers of visualization: just like a photon, an electron or hole is not really a little pellet. Quantum mechanics applies, so every particle is a wave.

To add to the fun, electrons and holes often attract each other and sort of orbit each other before they annihilate. An electron-hole pair engaged in such a dance is called an exciton — and intriguingly, an exciton can itself roam around like a particle!

But in a quantum dot, it cannot. A quantum dot is too small for an exciton to “roam around”: it can only sit, trapped there, vibrating.

Next, let quantum optics meet quantum dot! If a quantum dot absorbs a photon, an exciton may form. Conversely, when the exciton decays — the electron and hole annihilating each other — the quantum dot may emit a photon.

Put this setup in a very, very tiny box with an open door — a “cavity” — and you can do all sorts of fun things.

Back to business

The quantum optics group at the Institut Néel does both experimental and theoretical work. Four members of this group have come to visit the CQT. There are three main topics studied by this group:

• Cavity QED with quantum dots and optical semiconducting cavities. There are interesting similarities and differences between quantum dots and isolated atoms.

• One-dimensional solid-state atoms. This kind of system can operate at “giant optical nonlinearity”, and it can be stimulated with single photons.

• “Broad” atomic ensembles coupled to cavities, and their potential for solid-state quantum memories.

She will only talk about the first!

The simplest sort of cavity QED involves a 2-level system — for example, an atom that can hop between two energy levels — coupled to the electromagnetic field in a cavity.

But instead of an atom, Alexia Auffèves will consider a quantum dot made of one semiconducting material sitting on some other semiconducting material. An electron-hole pair created in the dot wants to stay in the dot, since it has less energy there. Like an atom, a quantum dot may be approximated by a 2-level system. But now the two “levels” are the state with nothing there, and the state with an electron-hole pair. The electron-hole pair has an energy of about 1 eV more than the state with nothing there.

Next, let’s put our quantum dot in a cavity. We want an ultrasmall cavity that has a high Q factor. Remember: when you’ve got a damped harmonic oscillator, a high Q factor means not much damping, so you get a tall, sharp resonance. For a cavity to have a high Q factor, we need light bouncing around inside to leak out slowly. That way, the cavity emits photons at quite sharply defined frequencies.
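
To make that quantitative (this is the standard definition, not something specific to the talk): for a resonance at frequency \omega_0 with linewidth \Delta \omega, the quality factor is

Q = \frac{\omega_0}{\Delta \omega}

Equivalently Q = \omega_0 \tau, where \tau is the energy decay time, so light in a high-Q cavity bounces around for roughly Q / 2 \pi optical cycles before leaking out.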

There are various ways to make tiny cavities with a Q factor from 1000 to 100,000. But the trick is getting a quantum dot to sit in the right place in the cavity!

Now, a quantum dot acts differently than an isolated atom: after all, it’s attached to a hunk of semiconductor. So, our quantum dot interacts with electrons and holes and phonons in this stuff. This causes a lowering of its Q factor, hence a broadening of its spectral lines. But we can adjust how this works, so the dot acts like a 2-level system with a tunable environment.

This lets us probe a new regime for cavity QED! The theorists’ game: replace a 2-level atom by a quantum dot, and see what happens to standard cavity QED results.

For example, look at spontaneous emission by a quantum dot in a cavity.

For an atom in a cavity, the atomic spectral lines are usually much narrower than the cavity resonance modes. Then the atom emits light at essentially its natural frequencies, with a strength affected by the cavity resonance modes.

But with quantum dots, we can make the quantum dot spectral lines much wider than the cavity resonance modes! Then the dot seems to emit white light, as far as the cavity is concerned. But, the dot emits more photons at the cavity frequency: this is called “cavity feeding”. People have been working on understanding this since 2007.

I think I’ll stop here, though this is where the real meat of Auffèves’ talk actually starts! You can get a bit more of a sense of it from her abstract:

Abstract: Thanks to technological progresses in the field of solid-state physics, a wide range of quantum optics experiments previously restricted to atomic physics, can now be implemented using quantum dots (QDs) and semi-conducting cavities. Still, a QD is far from being an isolated two-level atom. As a matter of fact, solid state emitters are intrinsically coupled to the matrix they are embedded in, leading to decoherence processes that unavoidably broaden any transition between the discrete states of these artificial atoms. At the same time, very high quality factors and ultra small modal volumes are achieved for state of the art cavities. These new conditions open an unexplored regime for cavity quantum electrodynamics (CQED) so far, where the emitter’s linewidth can be of the same order of magnitude, or even broader than the cavity mode one. In this kind of exotic regime, unusual phenomena can be observed. In particular, we have shown [1] that photons spontaneously emitted by a QD coupled to a detuned cavity can efficiently be emitted at the cavity frequency, even if the detuning is large; whereas if the QD is continuously pumped, decoherence can induce lasing [2]. These effects clearly show that decoherence, far from being a drawback, is a fundamental resource in solid-state cavity quantum electrodynamics, offering appealing perspectives in the context of advanced nano-photonic devices.

And for more details, read the references:

• [1] Alexia Auffèves, Jean-Michel Gérard, and Jean-Philippe Poizat, Pure emitter’s dephasing: a resource for advanced single photon sources, Phys. Rev. A 79, 053838 (2009).

• [2] A. Auffèves, D. Gerace, J. M. Gérard, M. Franca Santos, L. C. Andreani, and J. P. Poizat, Controlling the dynamics of a coupled atom-cavity system by pure dephasing: basics and applications in nanophotonics, Phys. Rev. B 81, 245419 (2010).


Control of Cold Molecular Ions

22 August, 2010

On Tuesday, Dzmitry Matsukevich gave a talk on “Control and Manipulation of Cold Molecular Ions”. He just arrived here at the CQT, coming from Christopher Monroe’s Trapped Ion Quantum Information Group at the University of Maryland.

Cold molecules can be used to study:

• Quantum information
• Precision measurements
• Quantum chemistry
• Strongly interacting degenerate gases

But most work uses neutral molecules; work on cold molecular ions is a bit new. The advantage of working with ions is that since they’re electrically charged, they can be trapped in radio-frequency Paul traps. This allows them to be isolated from the environment for days or weeks, and methods developed for ion trap quantum computations can be applied to them. On the downside, it’s hard to find spectroscopic data on most molecular ions.

Molecules and molecular ions can wiggle in various ways, with different characteristic frequencies:

Vibrational modes: about 30 terahertz

Rotational modes: about 10-100 gigahertz

Hyperfine modes: about 1 gigahertz

Room temperature corresponds to a frequency of about 6.25 terahertz, so lots of modes are excited at this temperature. To make molecular ions easier to understand and manipulate, we’d prefer them to jump around between just a few modes: those of least energy. For this, we need to cool them down.
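
If you want to check that number: the frequency corresponding to a temperature T is k T / h, where k is Boltzmann’s constant and h is Planck’s constant. At T = 300 kelvin this gives

\frac{k T}{h} \approx \frac{(1.38 \times 10^{-23} \, \mathrm{J/K}) (300 \, \mathrm{K})}{6.63 \times 10^{-34} \, \mathrm{J \cdot s}} \approx 6.25 \times 10^{12} \, \mathrm{Hz}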

How can we do this? Use sympathetic cooling: our molecules can lose energy by interacting with trapped atomic ions that we keep cool using a laser!

(It might not be obvious that you can use a laser to cool something, but you can. The most popular method is called Doppler cooling. It’s basically a trick to make moving atoms more likely to absorb (and then re-emit) photons than atoms that are standing still.)

Once our molecular ions are cold, how can we get them into specific desired states? Use a mode locked pulsed laser to drive stimulated Raman transitions.

Huh? As far as I can tell, this means “blast our molecular ion with an extremely brief pulse of light: it can then absorb a photon and emit a photon of a different energy, while itself jumping to a state of higher or lower energy.”

Here “extremely brief” can mean anywhere from picoseconds (10^{-12} seconds) to femtoseconds (10^{-15} seconds).

Once we’ve got our molecular ion in a specific state, it’ll get entangled with neighboring atomic ions thanks to their collective motion. This lets us try to implement quantum logic operations. There’s a large available Hilbert space: many qubits can be stored in a single molecule.

This paper shows how to use stimulated Raman transitions to create entangled atomic qubits:

• D. Hayes, D. N. Matsukevich, P. Maunz, D. Hucul, Q. Quraishi, S. Olmschenk, W. Campbell, J. Mizrahi, C. Senko, and C. Monroe, Entanglement of atomic qubits using an optical frequency comb, Phys. Rev. Lett. 104 (2010) 140501.

Precision control of molecular ions also lets us do precision measurements! Hyperfine modes depend on the mass of the electron and the fine structure constant. Vibrational and rotational modes depend on the mass of the proton. This allows accurate measurement of the ratio of the proton and electron mass.

(People looking at quasars in different parts of the sky see different drifts in the fine structure constant. One observation in the Northern hemisphere sees the fine structure constant changing, while one in the Southern hemisphere sees that it’s not. It’ll probably turn out nothing real is happening — at least that’s my conservative opinion — but it’s worth studying.)

What molecular ions are good to use?

• ionized silicon oxide, SiO+: convenient transition wavelengths, no hyperfine structure, and… umm… almost diagonal Franck-Condon factors.

• ionized molecular chlorine, Cl2+: the fine structure splitting is close to the vibrational splitting, so this is good for precision measurements of the variation of the fine structure constant.

Matsukevich’s goals in the next 3 years:

• build an ion trap apparatus for simultaneously trapping Yb+ atomic ions and SiO+ molecular ions: the ytterbium lets us do sympathetic cooling.

• develop methods to load them, cool them, and do cool things with them!

