This year’s session of the big conference on quantum computation, quantum cryptography, and so on is being held in Singapore:
• QIP 2011, the 14th Workshop on Quantum Information Processing, 8-14 January 2011, Singapore.
Because the battery on my laptop is old and not very energetic, and I can’t find any sockets in the huge ballroom where the talks are being delivered, I can’t live-blog the talks. So, my reports here will be quite incomplete.
Here are microscopic summaries of just three talks from Monday’s session. You can see arXiv references, slides, and videos of the talks here. I’ll just give links to the slides.
Christian Kurtsiefer gave a nice talk on how to exploit the physics of photodetectors to attack quantum key distribution systems! By cutting the optical fiber and shining a lot of light down both directions, evil Eve can ‘blind’ Alice and Bob’s photodetectors. Then, by shining a quick, even brighter pulse of light, she can fool one of their photodetectors into thinking it’s seen a single photon. She can even fool them into thinking they’ve seen a violation of Bell’s inequality, by purely classical means, thanks to the fact that only a small percentage of photons make it down the cable in the first place. Christian and his collaborators have actually done this trick in an experiment here at the CQT!
Tzu-Chieh Wei and Akimasa Miyake gave a two-part joint talk on how the AKLT ground state is universal for measurement-based quantum computation. The AKLT ground state works like this: you’ve got a hexagonal lattice with three spin-1/2 particles at each vertex. Think of each particle as attached to one of the three edges coming out of that vertex. In the ground state, you start by putting the pair of particles at the ends of each edge in the spin-0 (also known as “singlet”, or antisymmetrized) state, and then you project the three particles at each vertex down to the spin-3/2 (completely symmetrized) state. This is indeed the ground state of a cleverly chosen antiferromagnetic Hamiltonian. But has anyone ever prepared this sort of system in the lab?
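The two ingredients of the AKLT construction described above, singlets along edges and a projection onto the completely symmetric subspace at each vertex, can be sketched numerically. This is just an illustrative check, not code from the talk; the function names are my own:

```python
import numpy as np
from itertools import permutations

# The two-qubit singlet (spin-0) state placed along each edge of the lattice:
singlet = np.array([0, 1, -1, 0]) / np.sqrt(2)   # (|01> - |10>)/sqrt(2)

def permutation_matrix(perm, n=3):
    """Matrix that permutes the n tensor factors of (C^2)^{x n} by perm."""
    P = np.zeros((2**n, 2**n))
    for i in range(2**n):
        bits = [(i >> (n - 1 - k)) & 1 for k in range(n)]        # bits of i, MSB first
        j = sum(bits[perm[k]] << (n - 1 - k) for k in range(n))  # permuted bits
        P[j, i] = 1
    return P

# Averaging over all permutations of the three spin-1/2s at a vertex gives the
# projector onto the completely symmetric (spin-3/2) subspace:
P_sym = sum(permutation_matrix(p) for p in permutations(range(3))) / 6

print(np.allclose(P_sym @ P_sym, P_sym))       # True: it's a projector
print(int(round(float(np.trace(P_sym)))))      # 4: the spin-3/2 subspace is 4-dimensional
```

The trace check confirms that three symmetrized spin-1/2s form the 4-dimensional spin-3/2 representation, which is why each vertex of the lattice carries a spin-3/2 particle.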
David Poulin gave a talk on how to efficiently compute time evolution given a time-dependent quantum Hamiltonian. The trickiness arises from Hamiltonians that change very rapidly with time: a naive evenly spaced discretization of the time-ordered exponential would then require lots of tiny time steps to get a good approximation. But using a clever randomly chosen discretization you can avoid this problem, at least for uniformly bounded Hamiltonians, those obeying
$\|H(t)\| \le \Lambda$
for all times $t$. The key is that the high-frequency part of a time-dependent Hamiltonian only couples faraway energy levels, but a uniformly bounded Hamiltonian doesn’t have faraway energy levels.
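To see why a rapidly oscillating Hamiltonian is troublesome, here is a small numerical sketch. This is *not* Poulin’s randomized scheme, just the naive evenly spaced product formula it improves upon, applied to a uniformly bounded but fast-wiggling single-qubit Hamiltonian; the names and the particular $H(t)$ are my own:

```python
import numpy as np

X = np.array([[0., 1.], [1., 0.]])
Z = np.array([[1., 0.], [0., -1.]])

def expm_i(H, dt):
    """exp(-i H dt) for Hermitian H, via eigendecomposition."""
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-1j * w * dt)) @ V.conj().T

# A uniformly bounded but rapidly oscillating Hamiltonian, ||H(t)|| <= 2:
def H(t, omega=50.0):
    return Z + np.cos(omega * t) * X

def evolve(n_steps, T=1.0):
    """Naive evenly spaced product formula for the time-ordered exponential."""
    dt = T / n_steps
    U = np.eye(2, dtype=complex)
    for k in range(n_steps):
        U = expm_i(H((k + 0.5) * dt), dt) @ U
    return U

U_ref = evolve(50000)   # many tiny steps, standing in for the exact evolution
for n in (100, 1000):
    print(n, np.linalg.norm(evolve(n) - U_ref, 2))
```

The error shrinks only as the steps become small compared to the oscillation period $2\pi/\omega$, which is exactly the cost the randomized discretization is supposed to sidestep.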
A couple more things — really just notes to myself:
Steve Flammia told me about this paper relating the Cramer-Rao bound (which involves Fisher information) to the time-energy uncertainty principle:
• Sergio Boixo, Steven T. Flammia, Carlton M. Caves, and J.M. Geremia, Generalized limits for single-parameter quantum estimation.
Markus Müller told me about a paper mentioning relations between Maxwell’s demon and algorithmic entropy. I need to get some references on this work — it might help me make progress on algorithmic thermodynamics. It’s probably one of these:
• Markus Müller, Quantum Kolmogorov complexity and the quantum Turing machine (PhD thesis).
• Markus Müller, On the quantum Kolmogorov complexity of classical strings, Int. J. Quant. Inf. 7 (2009), 701-711.
Hmm — the first one says:
A concrete proposal for an application of quantum Kolmogorov complexity is to analyze a quantum version of the thought experiment of Maxwell’s demon. In one of the versions of this thought experiment, some microscopic device tries to decrease the entropy of some gas in a box, without the expense of energy, by intelligently opening or closing some little door that separates both halves of the box. It is clear that a device like this cannot work as described, since its existence would violate the second law of thermodynamics. But then, the question is what prevents such a little device (or “demon”) from operating.
Roughly, the answer is that the demon has to make observations to decide whether to close or open the door, and these observations accumulate information. From time to time, the demon must erase this additional information, which is only possible at the expense of energy, due to Landauer’s principle. In Li and Vitanyi’s book An Introduction to Kolmogorov Complexity and Its Applications, this cost of energy is analyzed under very weak assumptions with the help of Kolmogorov complexity. Basically, the energy that the demon can extract from the gas is limited by the difference of the entropy of the gas, plus the difference of the Kolmogorov complexity of the demon’s memory before and after the demon’s actions. The power of this analysis is that it even covers the case where the demon has a computer to do clever calculations, e.g. to compress the accumulated information before erasing it.
So, I guess I need to reread Li and Vitanyi’s book!
If somebody told me I had to invent a relation between Maxwell’s Demon and algorithmic notions of information, I’d say something like the following:
Imagine a long chain of Szilard engines in a thermal bath at temperature $T$. If we know one bit of information about each Szilard engine, i.e., which side of the cylinder’s partition the atom is on, we can extract $kT \ln 2$ per engine worth of energy from the thermal bath. (If we don’t know where the atom is inside the cylinder, we end up putting energy in as often as we get it out, so on average we get nowhere.) Let “0” denote the case where the atom is on the left side of the cylinder, and let “1” denote the case where it is on the right; then, the configuration of the chain of Szilard engines is the binary expansion of some natural number $n$. The Kolmogorov complexity of $n$ is then the length of the shortest program capable of extracting the maximum energy possible from the “Szilard reactor” of configuration $n$. Levin tells us that this is not so different from the algorithmic information defined by summing over all programs which output $n$.
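Just to put a number on the $kT \ln 2$ per engine figure, here’s the arithmetic for a chain of engines whose configuration we know perfectly (a back-of-the-envelope sketch, with names of my own choosing):

```python
import math

k_B = 1.380649e-23  # Boltzmann's constant in joules per kelvin

def szilard_work(n_known_bits, temperature):
    """Maximum isothermal work extractable from a chain of Szilard engines:
    k_B * T * ln 2 per engine whose atom's side of the partition is known."""
    return n_known_bits * k_B * temperature * math.log(2)

# 1000 engines at room temperature:
print(szilard_work(1000, 300.0))  # about 2.87e-18 joules
```

Even a thousand known bits buy only attojoules of work at room temperature, which is why Maxwell’s demon is a thought experiment rather than a power plant.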
Markus Müller was telling me about a situation where Maxwell’s demon is trying to erase a string of bits, but tries to reduce the work this takes by compressing this bit string down to the shortest program that prints out these bits, and then erasing that. I think I’ve tracked down this idea to Li and Vitanyi’s book — see the new improved remarks at the end of my blog entry above.
It’s possible that you’re talking about the same thing in reverse, or something like that. I’m too burnt out after another day of talks to think about this now. I’ll wait until I’ve freed up some space in my brain… right now I need to erase some information.
The Landauer gedankenexperiment of erasing a message is basically the inverse of the Szilard–Bennett gedankenexperiment of extracting energy from a heat bath using isothermal operations and information about the locations of atoms inside the cylinders. In the “erasure” experiment, you start with a message stored in some physical medium (like a long row of Szilard cylinders) and ask how much energy is necessary to reset all its bits to 0. The more you know about the bit string, the more energy-efficient your erasure operation can be; in the sort of limit which appeals to people who study Carnot engines, the erasure can be made at zero cost, if one has perfect information about the message contents. On the other hand, if you have a long row of Szilard cylinders with all the internal atoms on the 0 side, then it’s easy to extract maximal energy — and that extraction operation leaves the atoms randomly distributed between the 0 and 1 positions.
(Feynman’s Lectures on Computation has a typically engaging treatment, which is where I first read about all this stuff. And I think I need to reread Li and Vitanyi, too!)
Landauer was an excellent physicist. Although during his last couple of decades he was obsessed almost exclusively (one can have multiple obsessions) with reversible computing, including the billiard-ball example, his focus on keeping the physics in his analyses is most commendable. In particular, he insisted on operational definitions with all processes made explicit.
Although he did study quantum computation, almost entirely reversible computation with photons, he was convinced that the quantum computers theoretically studied in the 1990s were irretrievably flawed because they had no discernible physical components.
Vaccaro and Barnett just published “Information erasure without an energy cost” in the Proceedings of the Royal Society A.
Presumably the physical problem is the assumed decoupling within the spin and heat bath, while somehow still maintaining the ability to “return to equilibrium”.
Thanks for the reference. Mike Stay also pointed it out to me, when it hit the arXiv:
• Joan A. Vaccaro, Stephen M. Barnett, Information erasure without an energy cost.
I also got an email from John Norton, pointing out another attack on the orthodoxy:
• John Norton, No go result for the thermodynamics of computation.
I hope that the Entropy Club, here at the CQT, figures out what’s going on with all these arguments.
The link behind “David Poulin” leads me to the slides on the AKLT ground state.
Huh? I did some work on numerics on time dependent evolution equations in my diploma thesis but have never heard of this ansatz…
Which ansatz? Uniform boundedness? Of course physically realistic Hamiltonians on infinite-dimensional Hilbert spaces are hardly ever bounded. But this talk was probably mainly about finite-dimensional Hilbert spaces, since that’s what quantum information theory people like to think about — and then it’s a quite reasonable assumption.
Thanks for catching the problem with the link to Poulin’s talk. I fixed it.
By the way, in case anyone was interested in that paper Steve Flammia mentioned, relating Fisher information to the time-energy uncertainty relation, I’ve added a link to that at the end of the blog entry.
A “randomly chosen” discretization. Of course the whole story becomes much more interesting in infinite dimensions…
Oh. The randomly chosen discretization is apparently crucial here. People tell me that efficient numerical computation of high-dimensional integrals seems to require random choices, and this looks sort of similar.
Haven’t looked at the talk yet, but the “efficient computation” with random choices is quite easy to understand. First of all, it is very easy to implement a Monte-Carlo integration, but that is not the point here. More important is that the rate of convergence of a Monte-Carlo integration is, theoretically, independent of the dimension of the space you integrate over. The rate of convergence for deterministic integration schemes depends on the number of base points (the points where the scheme evaluates the function to be integrated), and that number depends on the dimension. This is a very naive argument, of course, and needs some refinement.
But very naively, if you need 10 base points in every dimension in a deterministic scheme, your computational effort will scale like $10^d$ with the dimension $d$, while $d$ does not matter for the error rate of a Monte-Carlo integration, which theoretically will always be $O(1/\sqrt{m})$, where $m$ is the number of randomly chosen base points.
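The dimension-independence is easy to watch in practice. Here’s a minimal sketch with a toy integrand of my own choosing (the exact integral is known, so we can track the error directly):

```python
import numpy as np

rng = np.random.default_rng(0)

def f(x):
    """Toy integrand on the unit cube [0,1]^d; its exact integral is 2^(-d)."""
    return np.prod(x, axis=-1)

def mc_integrate(d, m):
    """Monte-Carlo estimate from m random base points; the error shrinks
    like 1/sqrt(m) regardless of the dimension d."""
    return f(rng.random((m, d))).mean()

d = 10
exact = 0.5 ** d
for m in (1000, 100000):
    print(m, abs(mc_integrate(d, m) - exact))

# A deterministic tensor-product grid with 10 base points per dimension
# would already need 10**d = ten billion function evaluations at d = 10.
```

One hundred times more samples cuts the error by roughly a factor of ten, independent of $d$, exactly the $1/\sqrt{m}$ scaling described above.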
Let’s see if this is the kind of argument that can be generalized to time dependent Hamiltonians…
Shoot, I don’t understand the slides of David Poulin’s talk and there is no reference to any published paper, but I’m pretty sure that the talk is not about Hamiltonians that are bounded from above: Poulin talks about n-particle Hamiltonians that contain interaction terms with at most k interacting particles, so that the number of interacting particles is bounded, but not the norm of the Hamiltonian.
Quantum information processing focuses on finite-dimensional Hilbert spaces. So, maybe Poulin is doing what people in this subject often do: considering a lattice with a finite-dimensional Hilbert space at each site, and a Hamiltonian that’s a sum of ‘n-particle Hamiltonians’, i.e., operators on Hilbert spaces of the form $H \otimes \cdots \otimes H$ with $n$ factors.
For example, if we have a square lattice and each site only interacts with itself and its nearest neighbors, we’ll have just 1-particle and 2-particle terms in the sum. A famous example like this is the quantum Heisenberg model.
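To make the “sum of 2-particle terms” idea concrete, here is a sketch building the spin-1/2 Heisenberg Hamiltonian on a small open chain (a chain rather than a square lattice, to keep the matrix small; the helper names are my own):

```python
import numpy as np
from functools import reduce

X = np.array([[0, 1], [1, 0]], dtype=complex)
Y = np.array([[0, -1j], [1j, 0]])
Z = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def two_local(op_a, op_b, i, j, n):
    """Embed op_a on site i and op_b on site j of an n-site chain,
    tensoring with the identity everywhere else."""
    factors = [I2] * n
    factors[i], factors[j] = op_a, op_b
    return reduce(np.kron, factors)

def heisenberg_chain(n, J=1.0):
    """H = J * sum_i (X_i X_{i+1} + Y_i Y_{i+1} + Z_i Z_{i+1}):
    a sum of 2-particle terms on an open chain of n spin-1/2 sites."""
    H = np.zeros((2**n, 2**n), dtype=complex)
    for i in range(n - 1):
        for P in (X, Y, Z):
            H += J * two_local(P, P, i, i + 1, n)
    return H

H4 = heisenberg_chain(4)
print(H4.shape)                        # (16, 16)
print(np.linalg.eigvalsh(H4).min())    # the ground-state energy
```

Each term in the sum acts nontrivially only on a 4-dimensional tensor factor, even though the full Hamiltonian lives on a $2^n$-dimensional space; that’s the locality structure Poulin’s setting presumably exploits.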
For a fancier example involving a hexagonal lattice, see page 4 of the slides for the talk about the AKLT state.
But anyway: yes, it would be easier to understand the details if there were a paper to read.
[…] more update: John Baez has a post on a few talks. […]
“only a small percentage of photons make it down the cable in the first place”
Probably an easily-answered question, but where do the photons that don’t make it go? Are they somehow annihilated?
Where does light go when it gets absorbed? At a fundamental level, there’s a reaction where a charged particle like an electron can absorb a photon. People write it like this:
$e^- + \gamma \to e^-$
or draw it using a Feynman diagram where two particles come in and just one comes out. And this sort of reaction is taking place whenever ordinary everyday matter absorbs light.
I was surprised at how few photons make it very far down an optical fiber without being absorbed. I’m guessing that in ordinary applications of optical fibers one can use tricks to keep ‘reboosting’ the light, but these don’t work well for quantum cryptography, because they don’t preserve the quantum states of individual photons.
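The attrition in the fiber is easy to quantify: fiber loss is quoted in dB/km, and the surviving fraction falls off exponentially with length. This is a general fact about telecom fiber rather than anything from the talk; the 0.2 dB/km figure is a typical value for standard fiber at 1550 nm:

```python
def transmission(length_km, alpha_db_per_km=0.2):
    """Fraction of photons surviving a fiber of the given length, for an
    attenuation of alpha dB/km (0.2 dB/km is typical telecom fiber at 1550 nm)."""
    return 10 ** (-alpha_db_per_km * length_km / 10)

print(transmission(10))   # about 0.63: most photons survive 10 km
print(transmission(100))  # 0.01: only 1% survive 100 km
```

So over metropolitan distances most single photons already get absorbed, which is why quantum key distribution, unable to amplify without destroying the quantum state, is so range-limited.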
As to the Müller paper, you (JB) write: “Roughly, the answer is that the demon has to make observations to decide whether to close or open the door, and these observations accumulate information. From time to time, the demon must erase this additional information,”
Maybe the demon categorizes the information to create rules and after a while evolves a complete set of rules and perhaps even a rule for the outliers, too, cf. http://www.gocomics.com/calvinandhobbes/2009/07/05/ which doesn’t necessitate the “erasing” of information. You can have all the observations you want, but unless you somehow systematize them, you don’t have any useful information.
Once you know that photons can interact strongly with the fiber, you may not be surprised that the effect has been exploited for years. The application with the most use is distributed temperature sensing, although distributed vibration sensing has the longest history.
OK, the quantum version of the Szilárd engine is officially too weird for me to think about right now.
Thanks, Blake! Wonmin Son of the Entropy Club also pointed this out.
Whenever a physics paper contains a figure showing a demon wearing a joker’s cap, you know you’re in trouble.