I was going to write about a talk at the CQT, but I found a preprint lying on a table in the lecture hall, and it was so cool I’ll write about that instead:
• Mario Berta, Matthias Christandl, Roger Colbeck, Joseph M. Renes, Renato Renner, The uncertainty principle in the presence of quantum memory, Nature Physics, July 25, 2010.
Actually I won’t talk about the paper per se, since it’s better if I tell you a more basic result that I first learned from reading this paper: the entropic uncertainty principle!
Everyone loves the concept of entropy, and everyone loves the uncertainty principle. Even folks who don’t understand ’em still love ’em. They just sound so mysterious and spooky and dark. I love ’em too. So, it’s nice to see a mathematical relation between them.
I explained entropy back here, so let me say a word about the uncertainty principle. It’s a limitation on how accurately you can measure two things at once in quantum mechanics. Sometimes you can only know a lot about one thing if you don’t know much about the other. This happens when those two things “fail to commute”.
Mathematically, the usual uncertainty principle says this:

$$ \Delta A \, \Delta B \;\ge\; \tfrac{1}{2}\,\bigl|\langle [A,B] \rangle\bigr| $$

In plain English: the uncertainty in $A$ times the uncertainty in $B$ is at least half the absolute value of the expected value of their commutator $[A,B] = AB - BA$.
Whoops! That started off as plain English, but it degenerated into plain gibberish near the end… which is probably why most people don’t understand the uncertainty principle. I don’t think I’m gonna cure that today, but let me just nail down the math a bit.
Suppose $A$ and $B$ are observables — and to keep things really simple, by observable I'll just mean a self-adjoint $n \times n$ matrix. Suppose $\psi$ is a state: that is, a unit vector in $\mathbb{C}^n$. Then the expected value of $A$ in the state $\psi$ is the average answer you get when you measure that observable in that state. Mathematically it's equal to

$$ \langle A \rangle = \langle \psi, A \psi \rangle $$

Sorry, there are a lot of angle brackets running around here: the ones at right stand for the inner product in $\mathbb{C}^n$, which I'm assuming you understand, while the ones at left are being defined by this equation. They're just a shorthand.
Once we can compute averages, we can compute standard deviations, so we define the standard deviation of an observable $A$ in the state $\psi$ to be

$$ \Delta A = \bigl\langle (A - \langle A \rangle)^2 \bigr\rangle^{1/2} $$

where

$$ \langle A \rangle = \langle \psi, A \psi \rangle $$

Got it? Just like in probability theory. So now I hope you know what every symbol here means:

$$ \Delta A \, \Delta B \;\ge\; \tfrac{1}{2}\,\bigl|\langle [A,B] \rangle\bigr| $$
and if you’re a certain sort of person you can have fun going home and proving this. Hint: it takes an inequality to prove an inequality. Other hint: what’s the most important inequality in the universe?
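If you'd rather see the inequality in action than prove it, here's a minimal numerical sketch in Python with NumPy. The dimension, the random choices, and the function names are all mine — nothing canonical — but it checks the inequality above for a random pair of self-adjoint matrices and a random state:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4  # dimension of our toy Hilbert space (arbitrary choice)

def random_hermitian(n):
    """A random self-adjoint n x n matrix."""
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

def random_state(n):
    """A random unit vector in C^n."""
    psi = rng.normal(size=n) + 1j * rng.normal(size=n)
    return psi / np.linalg.norm(psi)

def expect(A, psi):
    """Expected value <A> = <psi, A psi>."""
    return np.vdot(psi, A @ psi)

def std_dev(A, psi):
    """Standard deviation: Delta A = <(A - <A>)^2>^{1/2} = (<A^2> - <A>^2)^{1/2}."""
    return np.sqrt(expect(A @ A, psi).real - expect(A, psi).real ** 2)

A, B, psi = random_hermitian(n), random_hermitian(n), random_state(n)
lhs = std_dev(A, psi) * std_dev(B, psi)
rhs = 0.5 * abs(expect(A @ B - B @ A, psi))
print(lhs, ">=", rhs, lhs >= rhs)
```

Run it with a few different seeds: the left side always comes out at least as big as the right, usually with room to spare.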
But now for the fun part: entropy!
Whenever you have an observable $A$ and a state $\psi$, you get a probability distribution: the distribution of outcomes when you measure that observable in that state. And this probability distribution has an entropy! Let's call the entropy $S(A)$. I'll define it a bit more carefully later.

But the point is: this entropy is really a very nice way to think about our uncertainty, or ignorance, of the observable $A$. It's better, in many ways, than the standard deviation. For example, it doesn't change if we multiply $A$ by 2. The standard deviation doubles, but we're not twice as ignorant!
Entropy is invariant under lots of transformations of our observables. So we should want an uncertainty principle that only involves entropy. And here it is, the entropic uncertainty principle:

$$ S(A) + S(B) \;\ge\; \log \frac{1}{c} $$
Here $c$ is defined as follows. To keep things simple, suppose that $A$ is nondegenerate, meaning that all its eigenvalues are distinct. If it's not, we can tweak it a tiny bit and it will be. Let its eigenvectors be called $a_i$. Similarly, suppose $B$ is nondegenerate and call its eigenvectors $b_j$. Then we let

$$ c = \max_{i,j} |\langle a_i, b_j \rangle|^2 $$

Note this becomes 1 when there's an eigenvector of $A$ that's also an eigenvector of $B$. In this case it's possible to find a state where we know both observables precisely, and in this case also

$$ \log \frac{1}{c} = 0 $$

And that makes sense: in this case $S(A) + S(B)$, which measures our ignorance of both observables, is indeed zero.
But if there's no eigenvector of $A$ that's also an eigenvector of $B$, then $c$ is smaller than 1, so

$$ \log \frac{1}{c} > 0 $$

so the entropic uncertainty principle says we really must have some ignorance about either $A$ or $B$ (or both).
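To get a concrete feel for $c$, take the qubit observables $\sigma_x$ and $\sigma_z$ — a standard example of my own choosing, not one from the paper. Every eigenvector of one has squared overlap $1/2$ with every eigenvector of the other, so $c = 1/2$ and the bound says $S(A) + S(B) \ge \log 2$. A quick sketch in Python:

```python
import numpy as np

# Pauli matrices: a standard pair of "maximally non-commuting" qubit observables
sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

_, vx = np.linalg.eigh(sx)   # columns: eigenvectors of sigma_x
_, vz = np.linalg.eigh(sz)   # columns: eigenvectors of sigma_z

c = np.max(np.abs(vx.conj().T @ vz) ** 2)
print(c)   # 0.5, so log(1/c) = log 2: a full bit of unavoidable ignorance (in base 2)
```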
So the entropic uncertainty principle makes intuitive sense. But let me define the entropy $S(A)$, to make the principle precise. If $a_i$ are the eigenvectors of $A$, the probabilities of getting various outcomes when we measure $A$ in the state $\psi$ are

$$ p_i = |\langle a_i, \psi \rangle|^2 $$

So, we define the entropy by

$$ S(A) = -\sum_i p_i \log p_i $$
Here you can use any base for your logarithm, as long as you’re consistent. Mathematicians and physicists use e, while computer scientists, who prefer integers, settle for the best known integer approximation: 2.
Just kidding! Darn — now I’ve insulted all the computer scientists. I hope none of them reads this.
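Anyway, here's a small numerical check of the entropic uncertainty principle — again just a sketch in Python/NumPy with my own function names, using the convention for $c$ (maximum squared overlap) from above and natural logarithms throughout:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 4  # dimension (arbitrary)

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

def random_state(n):
    psi = rng.normal(size=n) + 1j * rng.normal(size=n)
    return psi / np.linalg.norm(psi)

def measurement_entropy(A, psi):
    """Entropy S(A) of the outcome distribution p_i = |<a_i, psi>|^2, in nats."""
    _, eigvecs = np.linalg.eigh(A)           # columns are the eigenvectors a_i
    p = np.abs(eigvecs.conj().T @ psi) ** 2
    p = p[p > 1e-12]                          # convention: 0 log 0 = 0
    return -np.sum(p * np.log(p))

def overlap_constant(A, B):
    """c = max_{i,j} |<a_i, b_j>|^2."""
    _, va = np.linalg.eigh(A)
    _, vb = np.linalg.eigh(B)
    return np.max(np.abs(va.conj().T @ vb) ** 2)

A, B, psi = random_hermitian(n), random_hermitian(n), random_state(n)
lhs = measurement_entropy(A, psi) + measurement_entropy(B, psi)
rhs = np.log(1 / overlap_constant(A, B))
print(lhs, ">=", rhs, lhs >= rhs)
```

If you replace A by 2*A you'll also see that measurement_entropy doesn't budge, while the standard deviation from the earlier sketch doubles — the invariance I mentioned above.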
Who came up with this entropic uncertainty principle? I’m not an expert on this, so I’ll probably get this wrong, but I gather it came from an idea of Deutsch:
• David Deutsch, Uncertainty in quantum measurements, Phys. Rev. Lett. 50 (1983), 631-633.
Then it got improved and formulated as a conjecture by Kraus:
• K. Kraus, Complementary observables and uncertainty relations, Phys. Rev. D 35 (1987), 3070-3075.
and then that conjecture was proved here:
• H. Maassen and J. B. Uffink, Generalized entropic uncertainty relations, Phys. Rev. Lett. 60 (1988), 1103-1106.
The paper I found in the lecture hall proves a more refined version where the system being measured — let's call it $X$ — is entangled with the observer's memory apparatus — let's call it $M$. In this situation they show

$$ S(A|M) + S(B|M) \;\ge\; \log \frac{1}{c} + S(X|M) $$

where I’m using a concept of “conditional entropy”: the entropy of something given something else. Here’s their abstract:
The uncertainty principle, originally formulated by Heisenberg, clearly illustrates the difference between classical and quantum mechanics. The principle bounds the uncertainties about the outcomes of two incompatible measurements, such as position and momentum, on a particle. It implies that one cannot predict the outcomes for both possible choices of measurement to arbitrary precision, even if information about the preparation of the particle is available in a classical memory. However, if the particle is prepared entangled with a quantum memory, a device that might be available in the not-too-distant future, it is possible to predict the outcomes for both measurement choices precisely. Here, we extend the uncertainty principle to incorporate this case, providing a lower bound on the uncertainties, which depends on the amount of entanglement between the particle and the quantum memory. We detail the application of our result to witnessing entanglement and to quantum key distribution.
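The key point hiding in that abstract is that the conditional entropy $S(X|M) = S(XM) - S(M)$ can be negative when $X$ and $M$ are entangled, which pulls the right-hand side of the refined bound below $\log(1/c)$. Here's a toy calculation of my own (not from the paper) for a maximally entangled pair of qubits:

```python
import numpy as np

def von_neumann_entropy(rho):
    """S(rho) = -tr(rho log2 rho), in bits."""
    eigs = np.linalg.eigvalsh(rho)
    eigs = eigs[eigs > 1e-12]
    return -np.sum(eigs * np.log2(eigs))

# Maximally entangled two-qubit state (|00> + |11>)/sqrt(2),
# with basis ordering |x m>: x is the system qubit, m the memory qubit.
psi = np.zeros(4, dtype=complex)
psi[0] = psi[3] = 1 / np.sqrt(2)
rho_XM = np.outer(psi, psi.conj())                    # joint state of X and M

# Partial trace over X to get the memory's reduced state
rho_M = rho_XM.reshape(2, 2, 2, 2).trace(axis1=0, axis2=2)

S_XM = von_neumann_entropy(rho_XM)   # 0: the joint state is pure
S_M = von_neumann_entropy(rho_M)     # 1 bit: the memory alone looks maximally mixed
print("S(X|M) =", S_XM - S_M)        # -1 bit: negative, a signature of entanglement
```

For complementary qubit observables like $\sigma_x$ and $\sigma_z$ we saw $c = 1/2$, so $\log_2(1/c)$ is 1 bit; adding $S(X|M) = -1$ bit brings the bound down to zero — matching the abstract's claim that with a quantum memory you can predict both measurement outcomes precisely.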
By the way, on a really trivial note…
My wisecrack about 2 being the best known integer approximation to e made me wonder: since 3 is actually closer to e, are there some applications where ternary digits would theoretically be better than binary ones? I’ve heard of "trits" but I don’t actually know any applications where they’re optimal.
Oh — here’s one.
Posted by John Baez 
