So far in this series of posts I’ve been explaining a paper by Gavin Crooks. Now I want to go ahead and explain a little research of my own.
I’m not claiming my results are new — indeed I have no idea whether they are, and I’d like to hear from any experts who might know. I’m just claiming that this is some work I did last weekend.
People sometimes worry that if they explain their ideas before publishing them, someone will ‘steal’ them. But I think this overestimates the value of ideas, at least in esoteric fields like mathematical physics. The problem is not people stealing your ideas: the hard part is giving them away. And let’s face it, people in love with math and physics will do research unless you actively stop them. I’m reminded of this scene from the Marx Brothers movie where Harpo and Chico, playing wandering musicians, walk into a hotel and offer to play:
Groucho: What do you fellows get an hour?
Chico: Oh, for playing we getta ten dollars an hour.
Groucho: I see…What do you get for not playing?
Chico: Twelve dollars an hour.
Groucho: Well, clip me off a piece of that.
Chico: Now, for rehearsing we make special rate. Thatsa fifteen dollars an hour.
Groucho: That’s for rehearsing?
Chico: Thatsa for rehearsing.
Groucho: And what do you get for not rehearsing?
Chico: You couldn’t afford it.
So, I’m just rehearsing in public here — but of course I hope to write a paper about this stuff someday, once I get enough material.
Remember where we were. We had considered a manifold — let’s finally give it a name, say $M$ — that parametrizes Gibbs states of some physical system. By Gibbs state, I mean a state that maximizes entropy subject to constraints on the expected values of some observables. And we had seen that in favorable cases, we get a Riemannian metric on $M$! It looks like this:
$$ g_{ij} = \mathrm{Re}\,\big\langle\, (X_i - \langle X_i\rangle)\,(X_j - \langle X_j\rangle)\,\big\rangle $$
where $X_i$ are our observables, and the angle bracket means ‘expected value’.
All this applies to both classical and quantum mechanics. Crooks wrote down a beautiful formula for this metric in the classical case. But since I’m at the Centre for Quantum Technologies, not the Centre for Classical Technologies, I redid his calculation in the quantum case. The big difference is that in quantum mechanics, observables don’t commute! But in the calculations I did, that didn’t seem to matter much — mainly because I took a lot of traces, which imposes a kind of commutativity:
$$ \mathrm{tr}(AB) = \mathrm{tr}(BA) $$
In fact, if I’d wanted to show off, I could have done the classical and quantum cases simultaneously by replacing all operators by elements of any von Neumann algebra equipped with a trace. Don’t worry about this much: it’s just a general formalism for treating classical and quantum mechanics on an equal footing. One example is the algebra of bounded operators on a Hilbert space, with the usual concept of trace. Then we’re doing quantum mechanics as usual. But another example is the algebra of suitably nice functions on a suitably nice space, where taking the trace of a function means integrating it. And then we’re doing classical mechanics!
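To make the slogan ‘taking the trace of a function means integrating it’ concrete, here is a small numerical sketch. The particular operator, the function $f(x) = x^2$, and the grid are all made up for illustration:

```python
import numpy as np

# Quantum case: the trace of an operator is the sum of its eigenvalues.
H = np.array([[1.0, 0.5],
              [0.5, 2.0]])
quantum_trace = np.trace(H)                              # = 3.0

# Classical case: the 'trace' of a function on a space is its integral.
# Discretize f(x) = x^2 on [0, 1]; the trace becomes a weighted sum.
x = np.linspace(0.0, 1.0, 100001)
f = x**2
dx = x[1] - x[0]
classical_trace = np.sum(0.5 * (f[:-1] + f[1:])) * dx    # trapezoid rule

print(quantum_trace)      # 3.0
print(classical_trace)    # ≈ 1/3, the integral of x^2 over [0, 1]
```

In both cases the ‘trace’ is a linear functional that sums up a ‘diagonal’: eigenvalues in the quantum case, function values in the classical case.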
For example, I showed you how to derive a beautiful formula for the metric I wrote down a minute ago:
$$ g_{ij} = \mathrm{Re}\,\mathrm{tr}\left(\rho\;\frac{\partial \ln\rho}{\partial x^i}\;\frac{\partial \ln\rho}{\partial x^j}\right) $$
But if we want to do the classical version, we can say Hey, presto! and write it down like this:
$$ g_{ij} = \int_\Omega p\;\frac{\partial \ln p}{\partial x^i}\;\frac{\partial \ln p}{\partial x^j}\;d\omega $$
What did I do just now? I changed the trace to an integral over some space $\Omega$. I rewrote $\rho$ as $p$ to make you think ‘probability distribution’. And I don’t need to take the real part anymore, since everything is already real when we’re doing classical mechanics. Now this metric is the Fisher information metric that statisticians know and love!
In what follows, I’ll keep talking about the quantum case, but in the back of my mind I’ll be using von Neumann algebras, so everything will apply to the classical case too.
So what am I going to do? I’m going to fix a big problem with the story I’ve told so far.
Here’s the problem: so far we’ve only studied a special case of the Fisher information metric. We’ve been assuming our states are Gibbs states, parametrized by the expectation values of some observables $X_i$. Our manifold was really just some open subset of $\mathbb{R}^n$: a point in here was a list of expectation values.
But people like to work a lot more generally. We could look at any smooth function $\rho$ from a smooth manifold $M$ to the set of density matrices for some quantum system. We can still write down the metric
$$ g_{ij} = \mathrm{Re}\,\mathrm{tr}\left(\rho\;\frac{\partial \ln\rho}{\partial x^i}\;\frac{\partial \ln\rho}{\partial x^j}\right) $$
in this more general situation. Nobody can stop us! But it would be better if we could derive this formula, as before, starting from a formula like the one we had before:
$$ g_{ij} = \mathrm{Re}\,\big\langle\, (X_i - \langle X_i\rangle)\,(X_j - \langle X_j\rangle)\,\big\rangle $$
The challenge is that now we don’t have observables to start with. All we have is a smooth function $\rho$ from some manifold $M$ to some set of states. How can we pull observables out of thin air?
Well, you may remember that last time we had
$$ \rho = \frac{1}{Z}\, e^{-\sum_i \lambda^i X_i} $$
where $\lambda^i$ were some functions on our manifold and
$$ Z = \mathrm{tr}\left( e^{-\sum_i \lambda^i X_i} \right) $$
was the partition function. Let’s copy this idea.
So, we’ll start with our density matrix $\rho$, but then write it as
$$ \rho = \frac{1}{Z}\, e^{-A} $$
where $A$ is some self-adjoint operator and
$$ Z = \mathrm{tr}\left(e^{-A}\right) $$
(Note that $A$, like $\rho$, is really an operator-valued function on $M$. So, I should write something like $A(x)$ to denote its value at a particular point $x \in M$, but I won’t usually do that. As usual, I expect some intelligence on your part!)
Now we can repeat some calculations I did last time. As before, let’s take the logarithm of $\rho$:
$$ \ln\rho = -A - \ln Z $$
and then differentiate it. Suppose $x^i$ are local coordinates near some point of $M$. Then
$$ \frac{\partial}{\partial x^i}\,\ln\rho = -\frac{\partial A}{\partial x^i} - \frac{\partial}{\partial x^i}\,\ln Z $$
Last time we had nice formulas for both terms on the right-hand side above. To get similar formulas now, let’s define operators
$$ X_i = \frac{\partial A}{\partial x^i} $$
This gives a nice name to the first term on the right-hand side above. What about the second term? We can calculate it out:
$$ \frac{\partial}{\partial x^i}\,\ln Z \;=\; \frac{1}{Z}\,\frac{\partial Z}{\partial x^i} \;=\; \frac{1}{Z}\,\frac{\partial}{\partial x^i}\,\mathrm{tr}\left(e^{-A}\right) \;=\; -\frac{1}{Z}\,\mathrm{tr}\left(e^{-A}\,\frac{\partial A}{\partial x^i}\right) $$
where in the last step we use the chain rule. Next, use the definitions of $\rho$ and $X_i$, and get:
$$ \frac{\partial}{\partial x^i}\,\ln Z = -\,\mathrm{tr}(\rho\, X_i) = -\langle X_i \rangle $$
This is just what we got last time! Ain’t it fun to calculate when it all works out so nicely?
So, putting both terms together, we see
$$ \frac{\partial}{\partial x^i}\,\ln\rho = -\big( X_i - \langle X_i\rangle \big) $$
This is a nice formula for the ‘fluctuation’ of the observables $X_i$, meaning how much they differ from their expected values. And it looks exactly like the formula we had last time! The difference is that last time we started out assuming we had a bunch of observables $X_i$, and defined $\rho$ to be the state maximizing the entropy subject to constraints on the expectation values of all these observables.
Now we’re starting with $\rho$ and working backwards.
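The step $\partial_i \ln Z = -\langle X_i\rangle$ is worth stress-testing numerically, since $A$ and its derivative need not commute — the identity survives precisely because everything sits under a trace. Here is a sketch using a made-up one-parameter family $A(\theta) = B + \theta C$ with $B$, $C$ random Hermitian matrices (so $X = \partial A/\partial\theta = C$):

```python
import numpy as np

rng = np.random.default_rng(0)

def random_hermitian(n):
    M = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return (M + M.conj().T) / 2

def exp_neg(H):
    # e^{-H} for Hermitian H, via eigendecomposition
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-w)) @ V.conj().T

B, C = random_hermitian(3), random_hermitian(3)   # generically non-commuting
A = lambda t: B + t * C                           # so X = dA/dt = C

def lnZ(t):
    return np.log(np.trace(exp_neg(A(t))).real)

t, h = 0.4, 1e-5
rho = exp_neg(A(t)) / np.trace(exp_neg(A(t))).real
lhs = (lnZ(t + h) - lnZ(t - h)) / (2 * h)   # d(ln Z)/dt by central difference
rhs = -np.trace(rho @ C).real               # -<X>
print(lhs, rhs)                             # the two agree closely
```

The agreement holds despite $[B, C] \neq 0$, because the derivative of $\mathrm{tr}(e^{-A})$ can be cycled under the trace.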
From here on out, it’s easy. As before, we can define $g_{ij}$ to be the real part of the covariance matrix:
$$ g_{ij} = \mathrm{Re}\,\big\langle\, (X_i - \langle X_i\rangle)\,(X_j - \langle X_j\rangle)\,\big\rangle $$
Using the formula
$$ \frac{\partial}{\partial x^i}\,\ln\rho = -\big( X_i - \langle X_i\rangle \big) $$
we can rewrite this as
$$ g_{ij} = \mathrm{Re}\,\mathrm{tr}\left(\rho\;\frac{\partial \ln\rho}{\partial x^i}\;\frac{\partial \ln\rho}{\partial x^j}\right) $$
When this matrix is positive definite at every point, we get a Riemannian metric on $M$. Last time I said this is what people call the ‘Bures metric’ — though frankly, now that I examine the formulas, I’m not so sure. But in the classical case, it’s called the Fisher information metric.
Differential geometers like to use $\partial_i$ as a shorthand for $\frac{\partial}{\partial x^i}$, so they’d write down our metric in a prettier way:
$$ g_{ij} = \mathrm{Re}\,\mathrm{tr}\big(\rho\,\partial_i \ln\rho\;\partial_j \ln\rho\big) $$
Differential geometers like coordinate-free formulas, so let’s also give a coordinate-free formula for our metric. Suppose $x$ is a point in our manifold $M$, and suppose $v, w$ are tangent vectors at this point. Then
$$ g(v, w) = \mathrm{Re}\,\mathrm{tr}\big(\rho\,(v\ln\rho)(w\ln\rho)\big) $$
Here $\ln\rho$ is a smooth operator-valued function on $M$, and $v\ln\rho$ means the derivative of this function in the direction $v$ at the point $x$.
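Here is a sketch that evaluates this metric for a made-up two-parameter qubit family $A(\theta^1, \theta^2) = \theta^1\sigma_x + \theta^2\sigma_z$, using finite differences for $\partial_i \ln\rho$, and checks that the resulting $g$ is symmetric and positive definite at a sample point:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def rho(t1, t2):
    # Gibbs-type state rho = e^{-A} / tr(e^{-A}) with A = t1*sx + t2*sz
    w, V = np.linalg.eigh(t1 * sx + t2 * sz)
    E = (V * np.exp(-w)) @ V.conj().T
    return E / np.trace(E).real

def logm_pd(R):
    # logarithm of a positive definite density matrix (all eigenvalues > 0)
    w, V = np.linalg.eigh(R)
    return (V * np.log(w)) @ V.conj().T

def metric(t1, t2, h=1e-5):
    R = rho(t1, t2)
    d1 = (logm_pd(rho(t1 + h, t2)) - logm_pd(rho(t1 - h, t2))) / (2 * h)
    d2 = (logm_pd(rho(t1, t2 + h)) - logm_pd(rho(t1, t2 - h))) / (2 * h)
    d = [d1, d2]
    # g_ij = Re tr(rho (d_i ln rho)(d_j ln rho))
    return np.array([[np.trace(R @ d[i] @ d[j]).real for j in range(2)]
                     for i in range(2)])

g = metric(0.7, 0.3)
print(g)                           # a symmetric 2x2 matrix
print(np.linalg.eigvalsh(g))       # both eigenvalues positive
```

Note that `logm_pd` only makes sense because this family of states is positive definite everywhere — a point I’ll return to below.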
So, this is all very nice. To conclude, two more points: a technical one, and a more important philosophical one.
First, the technical point. When I said $\rho$ could be any smooth function from a smooth manifold to some set of states, I was actually lying. That’s an important pedagogical technique: the brazen lie.
We can’t really take the logarithm of every density matrix. Remember, we take the log of a density matrix by taking the log of all its eigenvalues. These eigenvalues are ≥ 0, but if one of them is zero, we’re in trouble! The logarithm of zero is undefined.
On the other hand, there’s no problem taking the logarithm of our density-matrix-valued function $\rho$ when it’s positive definite at each point of $M$. You see, a density matrix is positive definite iff its eigenvalues are all > 0. In this case it has a unique self-adjoint logarithm.
So, we must assume $\rho$ is positive definite. But what’s the physical significance of this ‘positive definiteness’ condition? Well, any density matrix can be diagonalized using some orthonormal basis. It can then be seen as a probabilistic mixture — not a quantum superposition! — of pure states taken from this basis. Its eigenvalues are the probabilities of finding the mixed state to be in one of these pure states. So, saying that its eigenvalues are all > 0 amounts to saying that all the pure states in this orthonormal basis show up with nonzero probability! Intuitively, this means our mixed state is ‘really mixed’. For example, it can’t be a pure state. In math jargon, it means our mixed state is in the interior of the convex set of mixed states.
Second, the philosophical point. Instead of starting with the density matrix $\rho$, I took $A$ as fundamental. But different choices of $A$ give the same $\rho$. After all,
$$ \rho = \frac{e^{-A}}{\mathrm{tr}\left(e^{-A}\right)} $$
where we cleverly divide by the normalization factor
$$ Z = \mathrm{tr}\left(e^{-A}\right) $$
to get $\mathrm{tr}(\rho) = 1$. So, if we multiply $e^{-A}$ by any positive constant, or indeed any positive function on our manifold $M$, $\rho$ will remain unchanged!
So we have added a little extra information when switching from $\rho$ to $A$. You can think of this as ‘gauge freedom’, because I’m saying we can do any transformation like
$$ A \mapsto A + f $$
where $f$ is a smooth real-valued function on $M$. This doesn’t change $\rho$, so arguably it doesn’t change the ‘physics’ of what I’m doing. It does change $Z$. It also changes the observables:
$$ X_i \mapsto X_i + \frac{\partial f}{\partial x^i} $$
But it doesn’t change their ‘fluctuations’
$$ X_i - \langle X_i \rangle $$
so it doesn’t change the metric $g_{ij}$.
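This gauge invariance is easy to check numerically. Here is a sketch with a made-up one-parameter qubit family $A(\theta) = \theta\sigma_x + 0.3\,\sigma_z$ and the arbitrary gauge function $f(\theta) = \sin\theta$, verifying that $\rho$ and the fluctuations are unchanged while $Z$ is not:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2, dtype=complex)

def exp_neg(H):
    # e^{-H} for Hermitian H
    w, V = np.linalg.eigh(H)
    return (V * np.exp(-w)) @ V.conj().T

A  = lambda t: t * sx + 0.3 * sz          # one choice of A
A2 = lambda t: A(t) + np.sin(t) * I2      # gauge-transformed: A + f(t)*identity

def state_and_Z(Afun, t):
    E = exp_neg(Afun(t))
    Z = np.trace(E).real
    return E / Z, Z

t, h = 0.5, 1e-6
rho1, Z1 = state_and_Z(A, t)
rho2, Z2 = state_and_Z(A2, t)
print(np.allclose(rho1, rho2))    # True: rho is gauge invariant
print(np.isclose(Z1, Z2))         # False: Z is not

# The observables X = dA/dt shift by f'(t), but their fluctuations don't:
X1 = (A(t + h) - A(t - h)) / (2 * h)
X2 = (A2(t + h) - A2(t - h)) / (2 * h)
fluct1 = X1 - np.trace(rho1 @ X1).real * I2
fluct2 = X2 - np.trace(rho2 @ X2).real * I2
print(np.allclose(fluct1, fluct2))   # True: so the metric is unchanged
```

The shift in $X$ is a multiple of the identity, so it drops out of $X - \langle X\rangle$ exactly — which is why the metric can’t see the gauge choice.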
This gauge freedom is interesting, and I want to understand it better. It’s related to something very simple yet mysterious. In statistical mechanics the partition function begins life as ‘just a normalizing factor’. If you change the physics so that $Z$ gets multiplied by some number, the Gibbs state doesn’t change. But then the partition function takes on an incredibly significant role as something whose logarithm you differentiate to get lots of physically interesting information! So in some sense the partition function doesn’t matter much… but changes in the partition function matter a lot.
This is just like the split personality of phases in quantum mechanics. On the one hand they ‘don’t matter’: you can multiply a unit vector by any phase and the pure state it defines doesn’t change. But on the other hand, changes in phase matter a lot.
Indeed the analogy here is quite deep: it’s the analogy between probabilities in statistical mechanics and amplitudes in quantum mechanics, the analogy between $\frac{1}{kT}$ in statistical mechanics and $\frac{it}{\hbar}$ in quantum mechanics, and so on. This is part of a bigger story about ‘rigs’ which I told back in the Winter 2007 quantum gravity seminar, especially in week13. So, it’s fun to see it showing up yet again… even though I don’t completely understand it here.
[Note: in the original version of this post, I omitted the real part in my definition of $g_{ij}$, giving a ‘Riemannian metric’ that was neither real nor symmetric in the quantum case. Most of the comments below are based on that original version, not the new fixed one.]