This week the scientific advisory board of the CQT is coming to town, so we’re having lots and lots of talks.
Right now Michele Mosca, deputy director of the Institute for Quantum Computing in Waterloo, Canada, is speaking about “Trends in Quantum Information Processing” from a computer science / mathematics perspective.
(Why mathematics? He says he will always consider himself a mathematician…)
Alice and Bob
Mosca began by noting some of the cultural / linguistic differences that become important in large interdisciplinary collaborations. For example, computer scientists like to talk about people named Alice and Bob playing bizarre games. These games are usually an attempt to distill some aspect of a hard problem into a simple story.
Mosca also made a remark on “impossibility”. Namely: proofs that things are impossible, often called “no-go theorems”, are very important, but we shouldn’t overinterpret them. Just because one approach to doing something doesn’t work, doesn’t mean we can’t do it! From an optimistic viewpoint, no-go theorems simply tell us what assumptions we need to get around. There is, however, a difference between optimism and insanity.
For example: “We can’t build a quantum computer because of decoherence…”
Knill, Laflamme and Milburn initially set out to prove that linear optics alone would not give a universal quantum computer… but then they discovered that it could.
• E. Knill, R. Laflamme, and G. J. Milburn, A scheme for efficient quantum computation with linear optics, Nature 409 (2001), 46. Also see their paper Efficient linear optics quantum computation on the arXiv.
Another example: “Quantum black-box algorithms give at most a polynomial speedup for total functions…”
If this result — proved by Beals, Buhrman, Cleve, Mosca and de Wolf — had been discovered before Shor’s algorithm, it might have discouraged Shor from looking for it. But the theorem doesn’t apply to his algorithm, which is based on a partial function: period finding comes with the promise that the function really is periodic.
Yet another: “Nuclear magnetic resonance quantum computing is not scalable…”
So far each argument for this can be gotten around, though nobody knows how to get around all of them simultaneously. Attempts to do this have led to important discoveries.
From theory to practice
Next, Mosca showed a kind of flow chart. Abstract algorithms spawn various models of quantum computation: circuit models, measurement-based models, adiabatic models, topological models, continuous-time models, cellular automaton models, and so on. These spawn specific architectures for (so far hypothetical) quantum computers; a lot of work still needs to be done to develop fault-tolerant architectures. Then come specific physical realizations that might implement these architectures: involving trapped ions, photon qubits, superconducting circuit qubits, spin qubits and so on. We’re just at the beginning of a long path.
A diversity of quantum algorithms
There are still lots of people who say there are just two interesting quantum algorithms:
• Shor’s algorithm, which factors integers in a time that’s polynomial in the number of digits, and
• Grover’s algorithm, which searches an unstructured list of N items in a time proportional to √N, beating the linear time any classical algorithm needs.
The next time you hear someone say this, punch ’em in the head!
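To make Shor’s algorithm a little more concrete: its quantum part finds the period (order) of a number modulo N, and a purely classical reduction then turns that period into factors. Here’s a toy sketch of that reduction — my own illustration, with the period found by brute force rather than by a quantum computer:

```python
from math import gcd

def find_order(a, N):
    # Smallest r > 0 with a^r ≡ 1 (mod N) — the step a quantum computer
    # does in polynomial time; here we just brute-force it.
    r, x = 1, a % N
    while x != 1:
        x = (x * a) % N
        r += 1
    return r

def shor_classical_part(a, N):
    # Given the order r of a mod N, recover nontrivial factors of N.
    r = find_order(a, N)
    if r % 2 == 1:
        return None          # unlucky base: odd order
    y = pow(a, r // 2, N)
    if y == N - 1:
        return None          # unlucky base: a^(r/2) ≡ -1 (mod N)
    return gcd(y - 1, N), gcd(y + 1, N)

factors = shor_classical_part(7, 15)
print(factors)   # (3, 5)
```

For a = 7 and N = 15 the order is 4, and gcd(7² ± 1, 15) yields the factors 3 and 5.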
For examples of many different quantum algorithms, see Stephen Jordan’s ‘zoo’, and this paper:
• Michele Mosca, Quantum algorithms.
Then Mosca discussed three big trends…
1. Working with untrusted quantum apparatus
This trend started with:
• Artur Ekert, Quantum cryptography based on Bell’s theorem, Phys. Rev. Lett. 67 (1991), 661–663.
• Dominic Mayers and Andrew Yao, Quantum cryptography with imperfect apparatus.
And then much more, thanks in large part to people at the CQT! The idea is that if you have a quantum system whose behavior you don’t entirely trust, you can still do useful stuff with it. This idea is related to the subject of multi-prover interactive proofs, though those are studied by a somewhat different community.
This year some of these ideas got implemented in an actual experiment:
• S. Pironio, A. Acin, S. Massar, A. Boyer de la Giroday, D. N. Matsukevich, P. Maunz, S. Olmschenk, D. Hayes, L. Luo, T. A. Manning, and C. Monroe, Random numbers certified by Bell’s theorem, Nature 464 (2010), 1021.
(Yay! Someone who had the guts to put a paper for Nature on the arXiv!)
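The certification in that experiment rests on violating a Bell (CHSH) inequality: any classical, locally deterministic devices obey |S| ≤ 2, while quantum measurements on an entangled pair reach 2√2, so a violation certifies genuine randomness. Here’s a toy NumPy check of that quantum value — my own illustration, not the paper’s protocol:

```python
import numpy as np

# CHSH test on the maximally entangled state (|00> + |11>)/sqrt(2).
Z = np.array([[1.0, 0.0], [0.0, -1.0]])
X = np.array([[0.0, 1.0], [1.0, 0.0]])
phi = np.array([1.0, 0.0, 0.0, 1.0]) / np.sqrt(2)

def obs(theta):
    # Measurement of cos(theta) Z + sin(theta) X (eigenvalues ±1)
    return np.cos(theta) * Z + np.sin(theta) * X

def E(a, b):
    # Correlator <phi| A(a) ⊗ B(b) |phi>
    return phi @ np.kron(obs(a), obs(b)) @ phi

# Standard angle choices saturating Tsirelson's bound 2*sqrt(2)
S = E(0, np.pi/4) + E(0, -np.pi/4) + E(np.pi/2, np.pi/4) - E(np.pi/2, -np.pi/4)
print(S)   # ≈ 2.828, above the classical bound of 2
```

The point of the device-independent story is that the violation alone — without trusting the inner workings of the boxes — is what does the certifying.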
2. Ideas from topological quantum computing
The seeds here were planted in the late 1990s by:
• Michael Freedman, P/NP, and the quantum field computer, Proc. Natl. Acad. Sci. USA 95 (1998), 98–101.
This suggested that a topological quantum computer could efficiently compute the Jones polynomial at certain points — a #P-hard problem, at least as hard as everything in NP. That would mean quantum computers can solve NP problems in polynomial time! But it turned out that the computer can only approximate the Jones polynomial, and the limited precision blocks this conclusion. With a great sigh of relief, computer scientists decided they didn’t need to learn topological quantum field theory… but then came:
• A. Kitaev, Fault-tolerant quantum computation by anyons, Ann. Phys. 303 (2003), 2-30.
This sort of computer may or may not ever be built, but it’s very interesting either way. And in 2005, Raussendorf, Harrington and Goyal used ideas from topological quantum computing to build fault-tolerance into another approach to quantum computing, based on “cluster states”:
• Robert Raussendorf, Jim Harrington, and Kovid Goyal, Topological fault-tolerance in cluster state quantum computation, 2007.
Subsequently there’s been a huge amount of work along these lines. (Physics junkies should check out the Majorana fermion codes of Bravyi, Leemhuis and Terhal.)
3. Semidefinite programming
The seeds here were planted about 10 years ago by Kitaev and Watrous. There’s been a wide range of applications since then. If you search quant-ph for abstracts containing the buzzword “semidefinite” you’ll get almost a hundred hits!
So what is semidefinite programming? It’s a relative of linear programming, an optimization method that’s widely used in microeconomics and the management of production, transportation and the like. I don’t deeply understand it, but roughly speaking it replaces the nonnegative vectors of linear programming with positive semidefinite matrices — which makes it a natural fit for quantum theory, where states are described by such matrices!
I can parrot the definition:
A semidefinite program is a triple $(\Phi, A, B)$. Here $\Phi$ is a linear map that sends linear operators on some Hilbert space $\mathcal{X}$ to linear operators on some Hilbert space $\mathcal{Y}$. Such a linear map is called a superoperator, since it operates on operators! We assume $\Phi$ takes self-adjoint operators to self-adjoint operators. What about $A$ and $B$? They are self-adjoint operators on $\mathcal{X}$ and $\mathcal{Y}$, respectively.
This data gives us two problems:
Primal problem: find $X$, a positive semidefinite operator on $\mathcal{X}$, that minimizes $\mathrm{tr}(AX)$ subject to the constraint $\Phi(X) = B$.
Dual problem: find $Y$, a self-adjoint operator on $\mathcal{Y}$, that maximizes $\mathrm{tr}(BY)$ subject to the constraint $\Phi^*(Y) \le A$, where $\Phi^*$ is the adjoint of $\Phi$.
This should remind you of duality in linear programming. In linear programming, the dual of the dual problem is the original ‘primal’ problem! Also, if you know the solution to either problem, you know the solution to both. Is this stuff true for semidefinite programming too? Partially: the dual of the dual is again the primal, and weak duality always holds, but the two optimal values are guaranteed to match only under extra conditions, such as strict feasibility.
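To see the primal–dual pairing in action, here’s a toy instance — my own choice of Φ, A and B, not from the talk — where Φ(X) = tr(X) and B = 1. The primal then computes the smallest eigenvalue of A, and the dual attains the same value:

```python
import numpy as np

# Toy SDP: minimize tr(AX) subject to tr(X) = 1, X positive semidefinite.
# Here Phi(X) = tr(X), so Phi*(y) = y * I, and the dual reads:
# maximize y subject to y * I <= A — whose optimum is the smallest
# eigenvalue of A.
A = np.array([[2.0, 1.0],
              [1.0, 3.0]])

eigvals, eigvecs = np.linalg.eigh(A)   # eigenvalues in ascending order
v = eigvecs[:, 0]                      # eigenvector for the smallest eigenvalue
X = np.outer(v, v)                     # primal optimum: rank-one projector, tr(X) = 1
y = eigvals[0]                         # dual optimum

primal_value = np.trace(A @ X)
dual_value = y
print(primal_value, dual_value)        # equal: strong duality holds here
```

Both values come out to (5 − √5)/2 ≈ 1.382, the smallest eigenvalue of A — a tiny example of the primal and dual optima coinciding.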
Example: given a bunch of density matrices, find an optimal bunch of measurements that distinguish them.
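For the special case of just two states with equal priors, this optimization has a closed-form answer, the Helstrom bound: the best success probability is 1/2 + (1/4)‖ρ₀ − ρ₁‖₁. A small NumPy check of that formula — my own illustration of the two-state case, not the general SDP:

```python
import numpy as np

def helstrom(rho0, rho1):
    # Helstrom bound for distinguishing rho0 and rho1, each given with
    # probability 1/2: success = 1/2 + (1/4) * trace norm of (rho0 - rho1).
    diff_eigs = np.linalg.eigvalsh(rho0 - rho1)
    trace_norm = np.abs(diff_eigs).sum()
    return 0.5 + 0.25 * trace_norm

rho0 = np.array([[1.0, 0.0], [0.0, 0.0]])   # |0><0|
rho1 = np.array([[0.5, 0.5], [0.5, 0.5]])   # |+><+|
p = helstrom(rho0, rho1)
print(p)   # ≈ 0.854 = cos^2(pi/8), the familiar answer for |0> vs |+>
```

For more than two states there is no such closed form in general, which is where the semidefinite programming formulation earns its keep.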
Example: ‘Quantum query algorithms’ are a generalization of Grover’s search algorithm. Reichardt used semidefinite programming to show there is always an optimal algorithm that uses just 2 reflections:
• Ben W. Reichardt, Reflections for quantum query algorithms.
There are lots of other trends, of course! These are just a few. Mosca apologized to anyone whose important work wasn’t mentioned, and I apologize even more, since I left out a lot of what he said….