Today at the CQT, Paolo Zanardi from the University of Southern California is giving a talk on “Quantum Fidelity and the Geometry of Quantum Criticality”. Here are my rough notes…

The motto from the early days of quantum information theory was “Information is physical.” You need to care about the physical medium in which information is encoded. But we can also turn it around: “Physics is informational”.

In a “classical phase transition”, thermal fluctuations play a crucial role. At zero temperature these go away, but there can still be different phases depending on other parameters. A transition between phases at zero temperature is called a quantum phase transition. One way to detect a quantum phase transition is simply to notice that the ground state depends very sensitively on the parameters near such a point. We can do this mathematically using a precise way of measuring distances between states: the Fubini-Study metric, which I’ll define below.

Suppose that $M$ is a manifold parametrizing Hamiltonians for a quantum system, so each point $x \in M$ gives a self-adjoint operator $H(x)$ on some finite-dimensional Hilbert space, say $\mathcal{H}$. Of course in the thermodynamic limit (the limit of infinite volume) we expect our quantum system to be described by an *infinite-dimensional* Hilbert space, but let’s start out with a finite-dimensional one.

Furthermore, let’s suppose each Hamiltonian has a unique ground state, or at least a chosen ground state, say $\psi(x)$. Here $x$ does *not* indicate a point in space: it’s a point in $M$, our space of Hamiltonians!

This ground state is really defined only up to phase, so we should think of it as giving an element of the projective space $P\mathcal{H}$. There’s a god-given metric on projective space, called the Fubini-Study metric. Since we have a map from $M$ to projective space, sending each point $x$ to the state $\psi(x)$ (modulo phase), we can pull back the Fubini-Study metric via this map to get a metric on $M$.

But, the resulting metric *may not be smooth*, because $\psi(x)$ may not depend smoothly on $x$. The metric may have singularities at certain points, especially after we take the thermodynamic limit. We can think of these singular points as being ‘phase transitions’.

If what I said in the last two paragraphs makes no sense, perhaps a version in something more like plain English will be more useful. We’ve got a quantum system depending on some parameters, and there may be points where the ground state of this quantum system depends in a very drastic way on slight changes in the parameters.

But we can also make the math a bit more explicit. What’s the Fubini-Study metric? Given two unit vectors in a Hilbert space, say $\psi$ and $\phi$, their **Fubini-Study distance** is just the angle between them:

$$d(\psi, \phi) = \arccos \, |\langle \psi | \phi \rangle|$$
This is an honest Riemannian metric on the projective version of the Hilbert space. And in case you’re wondering about the term ‘quantum fidelity’ in the title of Zanardi’s talk, the quantity

$$|\langle \psi | \phi \rangle|$$

is called the **fidelity**. The fidelity ranges between 0 and 1, and it’s 1 when the two unit vectors are the same up to a phase. To convert this into a distance we take the arc-cosine.
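As a quick numerical illustration (my own, not from the talk), here’s the fidelity and Fubini-Study distance for a pair of qubit states:

```python
import numpy as np

def fidelity(psi, phi):
    """Overlap |<psi|phi>| between two unit vectors."""
    return abs(np.vdot(psi, phi))

def fubini_study_distance(psi, phi):
    """The angle between the rays through psi and phi."""
    return np.arccos(np.clip(fidelity(psi, phi), 0.0, 1.0))

psi = np.array([1.0, 0.0])               # spin up
phi = np.array([1.0, 1.0]) / np.sqrt(2)  # equal superposition

print(fidelity(psi, phi))                    # 1/sqrt(2) ≈ 0.7071
print(fubini_study_distance(psi, phi))       # pi/4 ≈ 0.7854
print(fubini_study_distance(psi, 1j * psi))  # 0.0: phases don't matter
```

The `clip` just guards against roundoff pushing the overlap a hair above 1 before the arc-cosine.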

When we pull the Fubini-Study metric back to $M$, we get a Riemannian metric away from the singular points, and in local coordinates $x^i$ this metric is given by the following cool formula:

$$g_{ij} = \mathrm{Re} \left( \langle \partial_i \psi | \partial_j \psi \rangle - \langle \partial_i \psi | \psi \rangle \langle \psi | \partial_j \psi \rangle \right)$$

where $\partial_i \psi$ is the derivative of the ground state $\psi(x)$ as we move in the $i$th coordinate direction.

But Michael Berry came up with an even cooler formula for $g_{ij}$. Let’s call the eigenstates of the Hamiltonian $H(x)$ $\psi_n(x)$, so that

$$H(x) \, \psi_n(x) = E_n(x) \, \psi_n(x)$$

And let’s rename the ground state $\psi_0(x)$, so

$$\psi(x) = \psi_0(x)$$

and

$$E_0(x) \le E_n(x) \textrm{ for all } n$$

Then a calculation like those you’d see in first-order perturbation theory shows that

$$g_{ij} = \sum_{n \ne 0} \frac{\langle \psi_0 | \partial_i H | \psi_n \rangle \langle \psi_n | \partial_j H | \psi_0 \rangle}{(E_n - E_0)^2}$$

This is nice because it shows $g_{ij}$ is likely to become singular at points where the ground state becomes degenerate, i.e. where two different states both have minimal energy, so some energy difference $E_n - E_0$ becomes zero.
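Here’s a sanity check of that formula on a toy two-level family $H(x) = \sigma^z + x\,\sigma^x$ (my own example, not from the talk): the metric from the perturbation-theory sum should match the one extracted from overlaps of nearby ground states.

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

def H(x):
    return sz + x * sx  # toy one-parameter family of Hamiltonians

def spectrum(x):
    # eigh returns eigenvalues in ascending order, so column 0 is the ground state
    return np.linalg.eigh(H(x))

x = 0.0
vals, vecs = spectrum(x)
dH = sx  # derivative of H(x) with respect to x

# Perturbation-theory sum (only one excited state for a qubit):
g_spectral = abs(vecs[:, 1].conj() @ dH @ vecs[:, 0])**2 / (vals[1] - vals[0])**2

# Overlap of nearby ground states: |<psi(x)|psi(x+dx)>| ≈ 1 - g dx^2 / 2
dx = 1e-4
_, vecs2 = spectrum(x + dx)
g_overlap = 2 * (1 - abs(np.vdot(vecs[:, 0], vecs2[:, 0]))) / dx**2

print(g_spectral)  # 0.25
print(g_overlap)   # ≈ 0.25
```

The two numbers agree, and the metric blows up if you shrink the gap $E_1 - E_0$, matching the remark about degeneracies.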

To illustrate these ideas, Zanardi did an example: the XY model in an external magnetic field. This is a ‘spin chain’: a bunch of spin-1/2 particles in a row, each interacting with their nearest neighbors. So, for a chain of length $L$, the Hilbert space is a tensor product of $L$ copies of $\mathbb{C}^2$:

$$\mathcal{H} = (\mathbb{C}^2)^{\otimes L}$$
The Hamiltonian of the XY model depends on two real parameters $\gamma$ and $\lambda$. The parameter $\lambda$ describes a magnetic field pointing in the $z$ direction:

$$H(\gamma, \lambda) = \sum_j \left( \frac{1+\gamma}{2} \, \sigma^x_j \sigma^x_{j+1} + \frac{1-\gamma}{2} \, \sigma^y_j \sigma^y_{j+1} + \lambda \, \sigma^z_j \right)$$

where the $\sigma$’s are the ever-popular Pauli matrices. The first term makes the $x$ components of the spins of neighboring particles want to point in opposite directions when $\gamma$ is big. The second term makes the $y$ components of neighboring spins want to point in the same direction when $\gamma$ is big. And the third term makes all the spins want to point up (resp. down) in the $z$ direction when $\lambda$ is big and negative (resp. positive).

What’s our poor spin chain to do, faced with such competing directives? At zero temperature it seeks the state of lowest energy. When $\lambda$ is less than $-1$ all the spins get polarized in the spin-up state; when it’s bigger than $1$ they all get polarized in the spin-down state. For $\lambda$ in between, there is also some sort of phase transition at $\gamma = 0$. What’s this like? Some sort of transition between ferromagnetic and antiferromagnetic?

We can use a Jordan–Wigner transformation to express this as a fermionic system and solve it exactly. Physicists love exactly solvable systems, so there have been thousands of papers about the XY model. In the thermodynamic limit ($L \to \infty$) the ground state can be computed explicitly, so we can explicitly work out the metric on the parameter space that has $\gamma$ and $\lambda$ as coordinates!
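Even without the exact solution, the fidelity drop near the critical point is easy to see by brute-force diagonalization at small sizes. Here’s a quick sketch (my own, not Zanardi’s calculation), using an open chain of $L = 8$ spins at the Ising point $\gamma = 1$, with the sign conventions I guessed above:

```python
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]])
sz = np.array([[1, 0], [0, -1]], dtype=complex)
I2 = np.eye(2)

def site_op(op, j, L):
    """op acting on site j of an L-site chain, identity elsewhere."""
    out = np.array([[1.0 + 0j]])
    for k in range(L):
        out = np.kron(out, op if k == j else I2)
    return out

def xy_hamiltonian(gam, lam, L):
    H = np.zeros((2**L, 2**L), dtype=complex)
    for j in range(L - 1):  # open boundary conditions
        H += (1 + gam) / 2 * site_op(sx, j, L) @ site_op(sx, j + 1, L)
        H += (1 - gam) / 2 * site_op(sy, j, L) @ site_op(sy, j + 1, L)
    for j in range(L):
        H += lam * site_op(sz, j, L)
    return H

def ground(gam, lam, L):
    return np.linalg.eigh(xy_hamiltonian(gam, lam, L))[1][:, 0]

L, gam, dlam = 8, 1.0, 0.05
lams = np.arange(0.2, 2.0, 0.1)
fids = np.array([abs(np.vdot(ground(gam, l, L), ground(gam, l + dlam, L)))
                 for l in lams])
print(lams[np.argmin(fids)])  # the fidelity dips near lambda = 1
```

At such a tiny size the dip is broad and not exactly at $\lambda = 1$, but it sharpens as $L$ grows.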

I will not give the formulas — Zanardi did, but they’re too scary for me. I’ll skip straight to the punchline. Away from phase transitions, we see that for nearby values of the parameters, say $x$ and $x + \delta x$, the ground states have

$$|\langle \psi(x) | \psi(x + \delta x) \rangle| \simeq 1 - c \, L \, \delta x^2$$

for some constant $c$. That’s not surprising: even though the two ground states are locally very similar, since we have a total of $L$ spins in our spin chain, the overall inner product goes like $(1 - c \, \delta x^2)^L \approx 1 - c L \, \delta x^2$.
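In other words, if each of the $L$ spins contributes an overlap factor like $1 - c\,\delta x^2$, the total inner product is $(1 - c\,\delta x^2)^L \approx e^{-c L \delta x^2}$, which is $1 - c L\,\delta x^2$ to first order. A two-line check (with made-up constants):

```python
import numpy as np

c, dx = 0.3, 0.01            # illustrative constants, not from the XY model
per_site = 1 - c * dx**2     # overlap contributed by a single site
for L in (100, 1000, 10000):
    print(L, per_site**L, np.exp(-c * L * dx**2))  # the two columns agree
```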

But at phase transitions, the inner product decays even faster with $L$:

$$|\langle \psi(x) | \psi(x + \delta x) \rangle| \simeq 1 - c' \, L^2 \, \delta x^2$$

for some other constant $c'$.

This is called **enhanced orthogonalization**, since it means the ground states at slightly different values of our parameters get close to orthogonal *even faster* as $L$ grows. Or in other words: their distance as measured by the metric $g$ *grows* even faster.

This sort of phase transition is an example of a “quantum phase transition”. Note: we’re detecting this phase transition not by looking at the ground state expectation value of a given observable, but by how the ground state *itself* changes drastically as we change the parameters governing the Hamiltonian.

The exponent of $L$ here — namely the 2 in $L^2$ — is ‘universal’: i.e., it’s robust with respect to changes in the parameters and even the detailed form of the Hamiltonian.

Zanardi concluded with an argument showing that not every quantum phase transition can be detected by enhanced orthogonalization. For more details, try:

• Silvano Garnerone, N. Tobias Jacobson, Stephan Haas and Paolo Zanardi, Fidelity approach to the disordered quantum XY model.

• Silvano Garnerone, N. Tobias Jacobson, Stephan Haas and Paolo Zanardi, Scaling of the fidelity susceptibility in a disordered quantum spin chain.

For more on the basic concepts, start here:

• Lorenzo Campos Venuti and Paolo Zanardi, Quantum critical scaling of the geometric tensors, *Phys. Rev. Lett.* **99** (2007), 095701. DOI: 10.1103/PhysRevLett.99.095701.

As a final little footnote, I should add that Paolo Zanardi said the metric defined as above was analogous to the Fisher information metric. So, David Corfield should like this…

Gosh, was that really me four years ago? I would have loved to get to the bottom of information geometry.

Regarding your post, there is a quantum version of information geometry; see, e.g., this Fields Institute meeting. I remember Ray Streater worked on it.

Hmm, I see people are maintaining interest in the field.

Cambridge University Press has recently released a volume with the title “Algebraic and Geometric Methods in Statistics”, with two contributions by Ray Streater about information geometry; the web page is here:

http://www.cambridge.org/catalogue/catalogue.asp?isbn=9780521896191

Let’s see if I can post the link:

geometric statistics.

Isn’t there a ψ missing in the first term of the formula for the Fubini-Study metric?

Yes, thanks. Fixed.

By the way, besides this metric:

$$g_{ij} = \mathrm{Re} \left( \langle \partial_i \psi | \partial_j \psi \rangle - \langle \partial_i \psi | \psi \rangle \langle \psi | \partial_j \psi \rangle \right)$$

it’s also very interesting to look at this closed 2-form:

$$\omega_{ij} = \mathrm{Im} \left( \langle \partial_i \psi | \partial_j \psi \rangle - \langle \partial_i \psi | \psi \rangle \langle \psi | \partial_j \psi \rangle \right)$$

This is the curvature of a connection that describes the change in phase of the ground state as you parallel transport it around a loop — the so-called Berry phase or geometric phase.

The point is that the projective space has a Kähler structure on it: the complex-valued analogue of a Riemannian metric. The real part of this is a Riemannian metric and the imaginary part is a symplectic structure. Pulling these back to our parameter space $M$, we get $g$ and $\omega$ as above. Note that while I said $g$ is a ‘Riemannian metric’, it could be degenerate at some points and it could blow up at some points. Similarly, $\omega$ is a closed 2-form, but it could be degenerate at some points and it could blow up at some points.

So, $(g, \omega)$ could be a Kähler structure on $M$ in some cases, but it’s often something a bit more general.
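Here’s a finite-difference check of both pieces on the textbook example of a spin-1/2 aligned with a unit magnetic field in the $(\theta, \phi)$ direction (my own sketch; the explicit ground state below is the standard one). The real part gives the round metric on the sphere scaled by 1/4, and the imaginary part gives the familiar Berry curvature $\sin\theta/2$, whose integral over the sphere is $2\pi$:

```python
import numpy as np

def psi(theta, phi):
    # Ground state of a spin-1/2 aligned with the direction (theta, phi)
    return np.array([np.cos(theta / 2), np.exp(1j * phi) * np.sin(theta / 2)])

def geometric_tensor(theta, phi, h=1e-5):
    """Q_ij = <d_i psi|d_j psi> - <d_i psi|psi><psi|d_j psi> by central differences.
    Then g = Re Q, and the Berry curvature is (up to sign convention) 2 Im Q."""
    p = psi(theta, phi)
    d = [(psi(theta + h, phi) - psi(theta - h, phi)) / (2 * h),   # d/dtheta
         (psi(theta, phi + h) - psi(theta, phi - h)) / (2 * h)]   # d/dphi
    Q = np.empty((2, 2), dtype=complex)
    for i in range(2):
        for j in range(2):
            Q[i, j] = np.vdot(d[i], d[j]) - np.vdot(d[i], p) * np.vdot(p, d[j])
    return Q

theta = np.pi / 3
Q = geometric_tensor(theta, 0.0)
print(Q[0, 0].real)      # g_theta,theta = 1/4
print(Q[1, 1].real)      # g_phi,phi = sin(theta)^2 / 4
print(2 * Q[0, 1].imag)  # Berry curvature = sin(theta)/2 ≈ 0.433
```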

Hi John, great exposition of Zanardi’s talk. To be precise, the exponent of the orthogonalization at a phase transition is 2 for the case of the XY model. In general one has:

$$\left|\langle\psi(x)|\psi(x+\delta x)\rangle\right| = 1 - \mathrm{const.} \, \delta x^{2} L^{2(d+z-\Delta)} + O\left(\delta x^{3}\right)$$

where $d$ is the spatial dimension, $z$ the dynamical critical exponent, and $\Delta$ is the scaling dimension of the operator driving the transition. The formula is valid when the size of the system is much larger than the correlation length ($L \gg \xi$).

Best

Lorenzo

Thanks! I’m sorry I didn’t get to the really interesting result you just mentioned. It was the climax of Paolo’s talk, but my laptop had run out of power!

Anyone who wants to see slides that go into this material in more detail should try:

• Lorenzo Campos Venuti, The fidelity approach, criticality, and boundary-CFT.

Because you nobly resisted suggesting that I refer to your work, I’ve added a reference to your work with Paolo in my blog entry. I would have added it before if I’d noticed it.


Sorry guys, I’m new to WordPress. There are a couple of typos in the post above. The formula for the scalar product is actually

$$\left|\langle\psi(x)|\psi(x+\delta x)\rangle\right| = 1 - \mathrm{const.} \, \delta x^{2} L^{2(d+z-\Delta)} + O\left(\delta x^{3}\right)$$

The formula is valid in the quasi-critical region, i.e. close to the critical point, when the size of the system is (much) *shorter* than the correlation length ($L \ll \xi$).

Hi John,

Great exposition! Now I’d definitely like to learn more about this topic…so Wikipedia says that a statistical manifold is a manifold whose points are probability measures on a common probability space, and about the Fisher information metric: “The distance between two points on a statistical differential manifold is the amount of information between them, i.e. the informational difference between them.”

Hm, I don’t think I get that. Is there a toy example that illustrates this point? And what is the analogy to the metric defined for the XY system?

And last and least, some nitpicking (plus I get to try some latex): the equation above contains a typo. The index on the right side is $n$, but should be $0$, correct?

Is there a way to indicate that $\psi(x)$ is not a wavefunction evaluated at the space point $x$, but a ground state depending on a generic parameter $x$, for the unwary?

Tim wrote:

“Is there a toy example that illustrates this point?”

Sure! Just take your favorite manifold that parametrizes probability distributions. Since I’m feeling simplistic this morning, I’ll take the unit interval, $[0,1]$. Each point $p$ in here describes a coin that has a probability $p$ of landing heads up. Mathematically, it describes a certain probability distribution on the space

$$\{\mathrm{heads}, \mathrm{tails}\}$$

So, you can crank through those Fisher metric formulas from the Wikipedia article and get a metric on the unit interval. The distance between two points $p$ and $q$ describes how much true information you have about the behavior of coin $p$ if you *think* it’s the coin $q$. If you’re having trouble getting this, maybe it’ll be good to think about Fisher information.
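To make the coin example concrete, here’s a little computation (my own sketch). For a Bernoulli($p$) coin the Fisher information works out to $1/(p(1-p))$, and integrating $\sqrt{I(p)}\,dp$ gives a closed form for the distance between two coins:

```python
import numpy as np

def fisher_info(p):
    """E[(d/dp log P(X|p))^2]: the score is 1/p on heads, -1/(1-p) on tails."""
    return p * (1 / p)**2 + (1 - p) * (-1 / (1 - p))**2  # simplifies to 1/(p(1-p))

def fisher_rao_distance(p, q):
    """Arc length of ds = sqrt(I(p)) dp, integrated in closed form."""
    return 2 * abs(np.arcsin(np.sqrt(q)) - np.arcsin(np.sqrt(p)))

print(fisher_info(0.5))               # 4.0 (I(p) is minimized at the fair coin)
print(fisher_rao_distance(0.4, 0.6))  # ≈ 0.403
print(fisher_rao_distance(0.8, 1.0))  # ≈ 0.927: same gap in p, but farther apart
```

So coins near the ends of the interval are ‘stretched apart’: a small change in $p$ near $p = 1$ is much easier to detect than the same change near $p = 1/2$.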

The fidelity

$$|\langle \psi | \phi \rangle|$$

is analogous to Fisher information because it’s a way of measuring how much you know about a system in state $\psi$ when you *think* it’s in the state $\phi$. But the analogy is an imperfect one, because the fidelity goes down when you’re ‘further away’. Taking the arc-cosine cures that — that’s the Fubini-Study distance — but we still don’t have any logarithms in the formula, as is typical of formulas involving information. So, I’d say it’s a slightly loose but still useful analogy.

(I’m sure it’s possible to change the formulas a bit to get a *perfect* analogy, because classical probability theory is a sub-theory of quantum probability theory. Don’t know if anyone’s done this.)

Thanks, I fixed that mistake. You forgot to put the word ‘latex’ after the dollar sign when writing … I fixed that. It’s really easy to forget to keep putting in ‘latex’, but only I have the magic ability to fix that mistake, around here.

Yeah, I’ll explain that more clearly. But I refuse to avoid using the letter $x$ just because some rigid fool thinks the letter $x$ *must* denote a spacetime point. There are only 26 letters — 52 when you count lower-case and upper-case — so people who want to learn more than 52 concepts need to get used to reusing the same letter for different concepts.

That’s exactly what I thought, too (ugh, we need more letters).

Ok, so I looked up Fisher information, let me just briefly wrap it up in a way I understand, in terms of classical statistics (sorry, this will be a little bit cryptic):

Let’s say we have a parametrized family of probability distributions $p_\theta$ and can obtain iid (independent identically distributed) samples $X_1, \ldots, X_n$ from it, and would like to use these to estimate $\theta$.

What we would like to have is a uniformly minimum variance unbiased estimator (UMVUE). Now, the Cramér–Rao lower bound gives a lower bound on the variance of all unbiased estimators; it is proportional to the inverse of the Fisher information matrix $I(\theta)$.

For example, for the Gaussian normal distribution with known variance $\sigma^2$ and unknown mean value $\mu$, the Fisher information matrix has one entry, and that is

$$I(\mu) = \frac{n}{\sigma^2},$$

where $n$ is the sample size. So, we can increase our ability to estimate the mean by increasing the sample size or by having a situation with lower variance. So, there is the connection to the main blog post and something I’m a little bit familiar with.
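A quick Monte Carlo check of that entry (my own, with made-up numbers): the sample mean is the UMVUE here, and its variance should sit right at the Cramér–Rao bound $1/I(\mu) = \sigma^2/n$:

```python
import numpy as np

rng = np.random.default_rng(0)
mu, sigma, n, trials = 1.0, 2.0, 50, 20_000

# Each row is one experiment: n iid N(mu, sigma^2) samples; estimator = sample mean
estimates = rng.normal(mu, sigma, size=(trials, n)).mean(axis=1)

bound = sigma**2 / n  # Cramér–Rao bound, 1/I(mu)
print(estimates.var())  # ≈ 0.08
print(bound)            # 0.08
```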

Another fun fact I read about (it’s mentioned by Wikipedia, too), is that the Fisher-Rao metric is unique in the following sense:

Definition: A stochastic map is a linear map on the algebra of random variables that preserves positivity and takes 1 to itself.

If the metric measures the ability to distinguish states (a state in this context is synonymous to a fixed probability distribution, i.e. a point on a given statistical manifold, people seem to use this to stress the analogy to the quantum situation), then the distance between two states should be reduced by any stochastic map, that is, a stochastic map should “muddy the waters” and reduce the ability to distinguish states.

It is possible to prove that the Fisher-Rao metric is the only one (up to multiples) that has this property:

Čencov, N. N. (1982). Statistical Decision Rules and Optimal Inference. Providence, RI: American Mathematical Society.

But this does not extend to the quantum situation, that is, how to “quantize” the Fisher-Rao metric seems to be ambiguous, much like in the “quantization procedures” of classical mechanics.

But my lecturer said that zero temperature is impossible.

Good point!

That’s true – but we can compute mathematically what *would* happen at zero temperature, and do experiments that compare these results to what we see at low but nonzero temperatures. This is enough to mathematically define a ‘quantum phase transition’ and then study them experimentally: at zero temperature, we would have an actual phase transition, meaning that expectation values of certain observables depend in a non-analytic way on parameters like the external magnetic field. At small but nonzero temperatures, the observables are analytic – but their first, second, etc. derivatives become very large.

Ugh, here is a really stupid question, doesn’t (citing the “phase transition” page from Wikipedia)

and

confuse analytic and smooth?

In practice, expectation values of most observables in statistical mechanics tend to be real-analytic functions except at certain points where they blow up, or have a jump discontinuity, or their nth derivative fails to be continuous for some n. Any of these ‘bad points’ gets called a phase transition.

It’s mathematically possible for the expectation value of some observable to depend smoothly but non-analytically on some parameters, and I bet one could even dream up physical systems where this happens, but I’ve never actually seen people talk about them.

Anyway, I think that of the two quotes you showed me, the second one is mathematically more precise. I don’t know what ‘degree of non-analyticity’ means, except in a handwavy way where it just means ‘how bad a function is at some point’.

I will fix it.

Yes John (and Tim),

Non-analytic but $C^\infty$ phase transitions are possible, and indeed even quite common to a certain degree (in 1-dimensional zero-temperature quantum mechanics). The kind of transition I am talking about is the Berezinskii–Kosterlitz–Thouless (BKT) transition. It takes place for instance in the 1D Hubbard model at U=0 or in the XXZ Heisenberg model at δ=1. For instance, for the Hubbard model the exact form of the energy density is known, and it is indeed a non-analytic function: its Taylor expansion at U=0 does not converge, but the function is smooth at any order. Not surprisingly, the asymptotic expansion (i.e. the non-convergent formal Taylor expansion at U=0) of the energy coincides with the one obtained by perturbation theory.

Thanks, Lorenzo! I clearly need to learn more condensed matter physics. I didn’t know about these smooth but nonanalytic phase transitions!

maybe you have Gevrey classes in mind?

http://eom.springer.de/g/g120040.htm

I don’t know if Gevrey classes are relevant here… I’d never heard of them.

By the way, I’m not fond of fake email addresses. I don’t think they’re necessary: I’m pretty sure the only person who can see them is me, and I’d only send you email for a darn good reason. (I don’t sell Viagra, for example.) But in the future, if you want to use a fake email address when posting here, please don’t use *my* email address. For some reason it makes my picture show up next to your comment, even though you listed your name as “tice”.

So, I changed your email address to “tice@math.ucr.edu”. If you use this in the future, the same little picture, or “gravatar”, will appear by your comments.

Please look at the Chapter “The Geometry of Quantum Phase Transitions” of the book “Understanding Quantum Phase Transitions” edited by Lincoln Carr.

Thanks very much! Of course arXiv papers are infinitely more useful than book chapters. Books are expensive and hard to find. arXiv papers are free and instantly available anywhere with internet access. Is this chapter on the arXiv?

Good morning,

It is not on the arXiv. This chapter is a review of several papers written on the geometry of quantum phase transitions. Most of those original papers are on the arXiv.

I can send you a pdf of the chapter if you send an email to ortizg@indiana.edu.