Information Geometry (Part 7)

Today, I want to describe how the Fisher information metric is related to relative entropy. I’ve explained both these concepts separately (click the links for details); now I want to put them together.

But first, let me explain what this whole series of blog posts is about. Information geometry, obviously! But what’s that?

Information geometry is the geometry of ‘statistical manifolds’. Let me explain that concept twice: first vaguely, and then precisely.

Vaguely speaking, a statistical manifold is a manifold whose points are hypotheses about some situation. For example, suppose you have a coin. You could have various hypotheses about what happens when you flip it. For example: you could hypothesize that the coin will land heads up with probability x, where x is any number between 0 and 1. This makes the interval [0,1] into a statistical manifold. Technically this is a manifold with boundary, but that’s okay.

Or, you could have various hypotheses about the IQ’s of American politicians. For example: you could hypothesize that they’re distributed according to a Gaussian probability distribution with mean x and standard deviation y. This makes the space of pairs (x,y) into a statistical manifold. Of course we require y \ge 0, which gives us a manifold with boundary. We might also want to assume x \ge 0, which would give us a manifold with corners, but that’s okay too. We’re going to be pretty relaxed about what counts as a ‘manifold’ here.
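If you like seeing things concretely, here is a minimal sketch in Python of these two examples (my illustration only, using scipy.stats purely for convenience): a statistical manifold is just a map from parameters to probability distributions.

```python
# A minimal sketch: a statistical manifold as a map from parameters to
# probability distributions, using scipy.stats for convenience.
from scipy.stats import bernoulli, norm

def coin_hypothesis(x):
    """Point x in [0,1] |-> probability distribution on Omega = {H, T} (coded as 1, 0)."""
    return bernoulli(x)

def iq_hypothesis(x, y):
    """Point (x, y) with y >= 0 |-> Gaussian on Omega = R with mean x, standard deviation y."""
    return norm(loc=x, scale=y)

# Each hypothesis assigns probabilities to events:
print(coin_hypothesis(0.55).pmf(1))      # chance of heads under the hypothesis x = 0.55
print(iq_hypothesis(0, 100).pdf(120.0))  # density at IQ = 120 under the Gaussian hypothesis
```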

If we have a manifold whose points are hypotheses about some situation, we say the manifold ‘parametrizes’ these hypotheses. So, the concept of statistical manifold is fundamental to the subject known as parametric statistics.

Parametric statistics is a huge subject! You could say that information geometry is the application of geometry to this subject.

But now let me go ahead and make the idea of ‘statistical manifold’ more precise. There’s a classical and a quantum version of this idea. I’m working at the Centre for Quantum Technologies, so I’m being paid to be quantum—but today I’m in a classical mood, so I’ll only describe the classical version. Let’s say a classical statistical manifold is a smooth function p from a manifold M to the space of probability distributions on some measure space \Omega.

We should think of \Omega as a space of events. In our first example, it’s just \{H, T\}: we flip a coin and it lands either heads up or tails up. In our second it’s \mathbb{R}: we measure the IQ of an American politician and get some real number.

We should think of M as a space of hypotheses. For each point x \in M, we have a probability distribution p_x on \Omega. This is a hypothesis about the events in question: for example “when I flip the coin, there’s a 55% chance that it will land heads up”, or “when I measure the IQ of an American politician, the answer will be distributed according to a Gaussian with mean 0 and standard deviation 100.”

Now, suppose someone hands you a classical statistical manifold (M,p). Each point in M is a hypothesis. Apparently some hypotheses are more similar than others. It would be nice to make this precise. So, you might like to define a metric on M that says how ‘far apart’ two hypotheses are. People know lots of ways to do this; the challenge is to find ways that have clear meanings.

Last time I explained the concept of relative entropy. Suppose we have two probability distributions on \Omega, say p and q. Then the entropy of p relative to q is the amount of information you gain when you start with the hypothesis q but then discover that you should switch to the new improved hypothesis p. It equals:

\int_\Omega \; \frac{p}{q} \; \ln(\frac{p}{q}) \; q d \omega

You could try to use this to define a distance between points x and y in our statistical manifold, like this:

S(x,y) =  \int_\Omega \; \frac{p_x}{p_y} \; \ln(\frac{p_x}{p_y}) \; p_y d \omega

This is definitely an important function. Unfortunately, as I explained last time, it doesn’t obey the axioms that a distance function should! Worst of all, it doesn’t obey the triangle inequality.
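To see how badly it fails, here is a quick numerical sketch (just an illustration, using Python and the coin example from above): for three Bernoulli hypotheses, the direct relative entropy exceeds the sum of relative entropies through an intermediate hypothesis.

```python
import numpy as np

def rel_entropy(x, y):
    """S(x,y) for the coin manifold: information gained switching from
    hypothesis y to hypothesis x, i.e. the integral of (p_x/p_y) ln(p_x/p_y) p_y."""
    p_x = np.array([x, 1 - x])
    p_y = np.array([y, 1 - y])
    return float(np.sum(p_x * np.log(p_x / p_y)))

a, b, c = 0.9, 0.7, 0.5
direct  = rel_entropy(a, c)                      # one big jump
two_hop = rel_entropy(a, b) + rel_entropy(b, c)  # via the intermediate hypothesis b
print(direct, two_hop)   # roughly 0.37 versus 0.20: the triangle inequality fails
```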

Can we ‘fix’ it? Yes, we can! And when we do, we get the Fisher information metric, which is actually a Riemannian metric on M. Suppose we put local coordinates on some patch of M containing the point x. Then the Fisher information metric is given by:

g_{ij}(x) = \int_\Omega  \partial_i (\ln p_x) \; \partial_j (\ln p_x) \; p_x d \omega
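For example, on the coin manifold this formula gives g(x) = 1/x + 1/(1-x) = 1/(x(1-x)). Here is a tiny symbolic sketch of that computation (purely as a check, using sympy):

```python
# A small symbolic check of the Fisher information metric for the coin manifold,
# where Omega = {H, T} and the hypothesis x gives p_x = (x, 1 - x).
import sympy as sp

x = sp.symbols('x', positive=True)
p = [x, 1 - x]   # the probability distribution p_x on Omega

# g(x) = sum over Omega of d/dx(ln p_x) * d/dx(ln p_x) * p_x
g = sum(sp.diff(sp.log(prob), x)**2 * prob for prob in p)
print(sp.simplify(g))   # mathematically equal to 1/(x*(1 - x))
```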

You can think of my whole series of articles so far as an attempt to understand this funny-looking formula. I’ve shown how to get it from a few different starting-points, most recently back in Part 3. But now let’s get it starting from relative entropy!

Fix any point in our statistical manifold and choose local coordinates for which this point is the origin, 0. The amount of information we gain if we move to some other point x is the relative entropy S(x,0). But what’s this like when x is really close to 0? We can imagine doing a Taylor series expansion of S(x,0) to answer this question.

Surprisingly, to first order the answer is always zero! Mathematically:

\partial_i S(x,0)|_{x = 0} = 0

In plain English: if you change your mind slightly, you learn a negligible amount — not an amount proportional to how much you changed your mind.

This must have some profound significance. I wish I knew what. Could it mean that people are reluctant to change their minds except in big jumps?

Anyway, if you think about it, this fact makes it obvious that S(x,y) can’t obey the triangle inequality. S(x,y) could be pretty big, but if we draw a curve from x to y, and mark n closely spaced points x_i on this curve, then S(x_{i+1}, x_i) is zero to first order, so it must be of order 1/n^2, so if the triangle inequality were true we’d have

S(x,y) \le \sum_i S(x_{i+1},x_i) \le \mathrm{const} \, n \cdot \frac{1}{n^2}

for all n, which is a contradiction.
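Here is a quick numerical illustration of that scaling (again just a sketch, on the coin manifold): chop the path from x = 0.5 to x = 0.9 into n little steps and watch the total relative entropy shrink like 1/n.

```python
import numpy as np

def rel_entropy(x, y):
    """S(x,y) for the coin manifold."""
    p_x, p_y = np.array([x, 1 - x]), np.array([y, 1 - y])
    return float(np.sum(p_x * np.log(p_x / p_y)))

for n in (1, 10, 100, 1000):
    xs = np.linspace(0.5, 0.9, n + 1)          # n closely spaced steps from 0.5 to 0.9
    total = sum(rel_entropy(xs[i + 1], xs[i]) for i in range(n))
    print(n, total)   # each step is O(1/n^2), so the total is O(1/n)
```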

In plain English: if you change your mind in one big jump, the amount of information you gain is more than the sum of the amounts you’d gain if you change your mind in lots of little steps! This seems pretty darn strange, but the paper I mentioned in part 1 helps:

• Gavin E. Crooks, Measuring thermodynamic length.

You’ll see he takes a curve and chops it into lots of little pieces as I just did, and explains what’s going on.

Okay, so what about second order? What’s

\partial_i \partial_j S(x,0)|_{x = 0} ?

Well, this is the punchline of this blog post: it’s the Fisher information metric:

\partial_i \partial_j S(x,0)|_{x = 0} = g_{ij}

And since the Fisher information metric is a Riemannian metric, we can then apply the usual recipe and define distances in a way that obeys the triangle inequality. Crooks calls this distance thermodynamic length in the special case that he considers, and he explains its physical meaning.
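Here is a quick sanity check of the punchline before the proof (a numerical sketch on the coin manifold): at x = 1/2 the Fisher metric is g = 1/(x(1-x)) = 4, so the second derivative of S(x, 1/2) at x = 1/2 should come out to 4 as well.

```python
import numpy as np

def rel_entropy(x, y):
    """S(x,y) for the coin manifold."""
    p_x, p_y = np.array([x, 1 - x]), np.array([y, 1 - y])
    return float(np.sum(p_x * np.log(p_x / p_y)))

x0 = 0.5                       # g(x0) = 1/(x0*(1 - x0)) = 4 for the coin manifold
for h in (1e-1, 1e-2, 1e-3):
    # second central difference approximates the second derivative of S(x, x0) at x = x0
    second_deriv = (rel_entropy(x0 + h, x0) - 2 * rel_entropy(x0, x0)
                    + rel_entropy(x0 - h, x0)) / h**2
    print(h, second_deriv)     # tends to 4 as h shrinks, matching g(x0)
```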

Now let me prove that

\partial_i S(x,0)|_{x = 0} = 0

and

\partial_i \partial_j S(x,0)|_{x = 0} = g_{ij}

This can be somewhat tedious if you do it by straightforwardly grinding it out—I know, I did it. So let me show you a better way, which requires more conceptual acrobatics but less brute force.

The trick is to work with the universal statistical manifold for the measure space \Omega. Namely, we take M to be the space of all probability distributions on \Omega! This is typically an infinite-dimensional manifold, but that’s okay: we’re being relaxed about what counts as a manifold here. In this case, we don’t need to write p_x for the probability distribution corresponding to the point x \in M. In this case, a point of M just is a probability distribution on \Omega, so we’ll just call it p.

If we can prove the formulas for this universal example, they’ll automatically follow for every other example, by abstract nonsense. Why? Because any statistical manifold with measure space \Omega is the same as a manifold with a smooth map to the universal statistical manifold! So, geometrical structures on the universal one ‘pull back’ to give structures on all the rest. The Fisher information metric and the function S can be defined as pullbacks in this way! So, to study them, we can just study the universal example.

(If you’re familiar with ‘classifying spaces for bundles’ or other sorts of ‘classifying spaces’, all this should seem awfully familiar. It’s a standard math trick.)

So, let’s prove that

\partial_i S(x,0)|_{x = 0} = 0

by proving it in the universal example. Given any probability distribution q, and taking a nearby probability distribution p, we can write

\frac{p}{q} = 1 + f

where f is some small function. We only need to show that S(p,q) is zero to first order in f. And this is pretty easy. By definition:

S(p,q) =  \int_\Omega \; \frac{p}{q} \, \ln(\frac{p}{q}) \; q d \omega

or in other words,

S(p,q) =  \int_\Omega \; (1 + f) \, \ln(1 + f) \; q d \omega

We can calculate this to first order in f and show we get zero. But let’s actually work it out to second order, since we’ll need that later:

\ln (1 + f) = f - \frac{1}{2} f^2 + \cdots

so

(1 + f) \, \ln (1+ f) = f + \frac{1}{2} f^2 + \cdots

so

\begin{aligned} S(p,q) &= \int_\Omega \; (1 + f) \; \ln(1 + f) \; q d \omega \\ &= \int_\Omega f \, q d \omega + \frac{1}{2} \int_\Omega f^2\, q d \omega + \cdots \end{aligned}

Why does this vanish to first order in f? It’s because p and q are both probability distributions and p/q = 1 + f, so

\int_\Omega (1 + f) \, q d\omega = \int_\Omega p d\omega = 1

but also

\int_\Omega q d\omega = 1

so subtracting we see

\int_\Omega f \, q d\omega = 0

So, S(p,q) vanishes to first order in f. Voilà!
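If you’d like a machine to double-check that series bookkeeping, here is a tiny symbolic sketch (using sympy):

```python
# A tiny symbolic check of the expansion used above.
import sympy as sp

f = sp.symbols('f')
print(sp.series((1 + f) * sp.log(1 + f), f, 0, 3))
# f + f**2/2 + O(f**3): there is no constant term, and the first-order term f
# integrates to zero against q d(omega), as we just saw.
```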

Next let’s prove the more interesting formula:

\partial_i \partial_j S(x,0)|_{x = 0} = g_{ij}

which relates relative entropy to the Fisher information metric. Since both sides are symmetric matrices, it suffices to show their diagonal entries agree in any coordinate system:

\partial^2_i S(x,0)|_{x = 0} = g_{ii}

Devoted followers of this series of posts will note that I keep using this trick, which takes advantage of the polarization identity.

To prove

\partial^2_i S(x,0)|_{x = 0} = g_{ii}

it’s enough to consider the universal example. We take the origin to be some probability distribution q and take x to be a nearby probability distribution p which is pushed a tiny bit in the ith coordinate direction. As before we write p/q = 1 + f. We look at the second-order term in our formula for S(p,q):

\frac{1}{2} \int_\Omega f^2\, q d \omega

Using the usual second-order Taylor’s formula, which has a \frac{1}{2} built into it, we can say

\partial^2_i S(x,0)|_{x = 0} = \int_\Omega f^2\, q d \omega

On the other hand, our formula for the Fisher information metric gives

g_{ii} = \left. \int_\Omega  \partial_i \ln p \; \partial_i \ln p \; q d \omega \right|_{p=q}

The right hand sides of the last two formulas look awfully similar! And indeed they agree, because we can show that

\left. \partial_i \ln p \right|_{p = q} = f

How? Well, we assumed that p is what we get by taking q and pushing it a little bit in the ith coordinate direction; we have also written that little change as

p/q = 1 + f

for some small function f. So,

\partial_i (p/q) = f

and thus:

\partial_i p = f q

and thus:

\partial_i \ln p = \frac{\partial_i p}{p} = \frac{fq}{p}

so

\left. \partial_i \ln p \right|_{p=q} = f

as desired.
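If you’d rather see numbers than infinitesimals, here is a small sketch of this second-order statement on a three-outcome event space (my own toy example, in Python): push q a little bit in a direction v whose entries sum to zero, and compare S(p,q), the second-order term, and the Fisher metric evaluated on v.

```python
# A small numerical sketch of the second-order statement, on a
# three-outcome event space Omega = {1, 2, 3}.
import numpy as np

q = np.array([0.2, 0.3, 0.5])       # the base hypothesis
v = np.array([0.1, -0.3, 0.2])      # a direction; entries sum to zero, so q + t*v stays normalized

def S(p, q):
    """Relative entropy: integral of (p/q) ln(p/q) q d(omega)."""
    return float(np.sum(p * np.log(p / q)))

g_vv = float(np.sum(v**2 / q))      # Fisher metric evaluated on the direction v at q

for t in (1e-1, 1e-2, 1e-3):
    p = q + t * v
    f = p / q - 1                   # the small function f with p/q = 1 + f
    print(t,
          2 * S(p, q) / t**2,                 # approaches g(v, v) as t shrinks
          float(np.sum(f**2 * q)) / t**2)     # the second-order term, also equal to g(v, v)
print(g_vv)                         # compare: the sum of v_i^2 / q_i
```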

This argument may seem a little hand-wavy and nonrigorous, with words like ‘a little bit’. If you’re used to taking arguments involving infinitesimal changes and translating them into calculus (or differential geometry), it should make sense. If it doesn’t, I apologize. It’s easy to make it more rigorous, but only at the cost of more annoying notation, which doesn’t seem good in a blog post.

Boring technicalities

If you’re actually the kind of person who reads a section called ‘boring technicalities’, I’ll admit to you that my calculations don’t make sense if the integrals diverge, or we’re dividing by zero in the ratio p/q. To avoid these problems, here’s what we should do. Fix a \sigma-finite measure space (\Omega, d\omega). Then, define the universal statistical manifold to be the space P(\Omega,d \omega) consisting of all probability measures that are equivalent to d\omega, in the usual sense of measure theory. By Radon-Nikodym, we can write any such measure as q d \omega where q \in L^1(\Omega, d\omega). Moreover, given two of these guys, say p d \omega and q d\omega, they are absolutely continuous with respect to each other, so we can write

p d \omega = \frac{p}{q} \; q d \omega

where the ratio p/q is well-defined almost everywhere and lies in L^1(\Omega, q d\omega). This is enough to guarantee that we’re never dividing by zero, and I think it’s enough to make sure all my integrals converge.

We do still need to make P(\Omega,d \omega) into some sort of infinite-dimensional manifold, to justify all the derivatives. There are various ways to approach this issue, all of which start from the fact that L^1(\Omega, d\omega) is a Banach space, which is about the nicest sort of infinite-dimensional manifold one could imagine. Sitting in L^1(\Omega, d\omega) is the hyperplane consisting of functions q with

\int_\Omega q d\omega = 1

and this is a Banach manifold. To get P(\Omega,d \omega) we need to take a subspace of that hyperplane. If this subspace were open, then P(\Omega,d \omega) would be a Banach manifold in its own right. I haven’t checked whether it is, for a couple of reasons.

For one thing, there’s a nice theory of ‘diffeological spaces’, which generalize manifolds. Every Banach manifold is a diffeological space, and every subset of a diffeological space is again a diffeological space. For many purposes we don’t need our ‘statistical manifolds’ to be manifolds: diffeological spaces will do just fine. This is one reason why I’m being pretty relaxed here about what counts as a ‘manifold’.

For another, I know that people have worked out a lot of this stuff, so I can just look things up when I need to. And so can you! This book is a good place to start:

• Paolo Gibilisco, Eva Riccomagno, Maria Piera Rogantin and Henry P. Wynn, Algebraic and Geometric Methods in Statistics, Cambridge U. Press, Cambridge, 2009.

I find the chapters by Raymond Streater especially congenial. For the technical issue I’m talking about now it’s worth reading section 14.2, “Manifolds modelled by Orlicz spaces”, which tackles the problem of constructing a universal statistical manifold in a more sophisticated way than I’ve just done. And in chapter 15, “The Banach manifold of quantum states”, he tackles the quantum version!

25 Responses to Information Geometry (Part 7)

  1. Blake Stacey says:

    Interesting stuff!

    (On the copy-editing front: it’s missing a “latex” in $L^1(\Omega, d\omega)$.)

  2. David Corfield says:

    And you can retrieve the Fisher information metric from other divergences, as discussed on p. 4 of Snoussi’s The geometry of prior selection.

    • John Baez says:

      Thanks! I’m guessing the ‘δ-divergence’ defined on top of this page (in equation 2) is the Rényi version of relative entropy.

      This is a nice paper! I’m still not at the point of understanding all this stuff, but I’m getting a lot closer.

  3. tomate says:

    It would be interesting to know under which conditions Fisher’s metric defines a flat Levi-Civita connection.

    • John Baez says:

      Nice question!

      We can make a bit of progress on this in the simple case where our space of events \Omega has a finite number of points, say

      \Omega = \{1,2,\dots,n\}

      Then the universal statistical manifold is the space of all probability distributions on \Omega. This is the simplex:

      \{ p \in [0,1]^n \; \; : \; \; \sum_{i = 1}^n p_i = 1 \}

      So for example for n = 3 it’s a triangle.

      But the Fisher metric does not make this simplex flat! Instead, it turns out to be isometric to a piece of a sphere:

      \{ x \in [0,1]^n \; \; : \; \; \sum_{i = 1}^n x_i^2 = 1 \}

      So, for example, when n = 3 it’s shaped like an eighth of a sphere.

      Knowing this, we can begin to understand when the Fisher metric on a general statistical manifold is flat (at least when \Omega is finite.) We need to map that manifold into the universal statistical manifold so that its image is a flat submanifold sitting inside this piece of a sphere!

      • tomate says:

        mmm, I’ll have to think about this. It’s not straightforward to me that Fisher’s curvature should be that of a sphere…

        (as I process information I elaborate it on a new blog, you might want to take a look at it from time to time)

  4. John F says:

    One important concept for interpretation is that on any statistical manifold events shouldn’t be just points but rather collections of points of various densities. One reason is independent events are interpenetrating point clouds.

    • John Baez says:

      John F wrote:

      One important concept for interpretation is that on any statistical manifold events shouldn’t be just points…

      Just to clear up any possible miscommunication: in my setup, I’m calling points in the statistical manifold ‘hypotheses’, not ‘events’.

      There are two spaces floating around here: the statistical manifold M whose points are hypotheses, and the measure space \Omega whose points are events. Each point in M gives rise to a probability distribution on \Omega. So, each hypothesis is a probability distribution of events.

      E.g., when \Omega = \{H, T\}, a typical hypothesis is “the coin will land heads-up with probability 60%”.

      • Graham says:

        Your setup is confusing if you are used to ordinary (say Bayesian) statistical inference! I think it translates like this.

        A Bayesian prior is a distribution over hypotheses. You don’t jump from one hypothesis to another, but weight them differently as the likelihood changes as you get more data. Usually, as you get more data, the posterior becomes more like a multivariate gaussian. To estimate the (inverse of the) covariance matrix, you can take second derivatives of -log(posterior), evaluated at the mode of the posterior. The result is the Fisher information matrix (not metric).

        In statistical inference, you are therefore interested in how the curvature of log(posterior) changes at a single point as more data becomes available. You could say: data tells the log(posterior) how to curve.

      • John F says:

        It’s not miscommunication, just stupidity on my part. I thought we were discussing how far apart two distributions are, instead it is “how far apart two hypotheses are”?

        I can kind of imagine smoothly mapping points in M to distributions in Ω. I have trouble imagining smoothly mapping independent event distributions to points. As Graham suggests I think in practice there is fuzziness with points in M being special cases of pointy distributions in M.

        But even with points, what about inference as parameter estimation? Probably we need homework examples. Is this an example:

        Suppose during the week I’m likely to have to stop by a store to get milk, and/or bread, and/or dog food, and/or Coke, depending on the day and time -say for my family more likely to need dog food later in the week. Then an event is me picking up some set of those, and an hypothesis is the set of probabilities of me picking each up? Then in this case different times have different hypotheses, so you can guess the time from what I got.

  5. Tomate discusses Fisher information, starting from a rework of John Baez’s post Information Geometry (Part 7), but then moving on to new ideas.

  6. Barbaresco Frederic says:

    In France, “Information Geometry” is part of a larger mathematical domain, “Geometric Science of Information”, which is discussed in the Brillouin Seminar, launched in 2009:

    http://repmus.ircam.fr/brillouin/past-events

    http://repmus.ircam.fr/brillouin/home

    You can register for Brillouin Seminar News:

    http://listes.ircam.fr/wws/info/informationgeometry

    Recently, the Brillouin Seminar has organized:
    – In 2011, a French-Indian workshop on “Matrix Information Geometries” at Ecole Polytechnique. Proceedings will be published by Springer in 2012. Slides and abstracts are available on the website:

    http://www.informationgeometry.org/MIG/

    – In 2012, a symposium on “Information Geometry and Optimal Transport Theory” at the Institut Henri Poincaré in Paris, with GDR CNRS MSPC. All slides are available on the website:

    http://www.ceremade.dauphine.fr/~peyre/mspc/mspc-thales-12/

    You can find a very recent French PhD dissertation in English on this subject, by Yang Le and supervised by Marc Arnaudon:
    Medians of probability measures in Riemannian manifolds

    http://tel.archives-ouvertes.fr/tel-00664188

  7. Treating the Fisher information metric as an infinitesimal Kullback-Leibler divergence is interesting, but it raises the question: Given probability distributions P and Q (parameterized by some lambda), what is the geodesic distance between the two?

    Well, I suppose this is a rhetorical question, it’s of course

    \displaystyle{ \int\sqrt{\frac{d\lambda^j}{dt}g_{jk}\frac{d\lambda^k}{dt}} \;dt}

    for some path \lambda(t) connecting P and Q; we know this from textbooks. But what is this, intuitively?

    The point is that since we have a metric, we then also have geodesics connecting any two points (assuming a simply connected, convex manifold, etc.). Since the KL doesn’t satisfy the triangle inequality, it clearly cannot be measuring the geodesic. So what is it measuring?

    Well, the partial answer is that KL does not need to make an assumption of a parameter lambda, or of a “path” connecting P and Q, and so is “general” in this way. But still, I have no particular intuition here…

    • Err, well, clearly I mangled things a bit above, and can’t figure out how to edit to fix it. So, please “understand what I mean, not what I say”.

      [Moderator's note: I fixed your comment. There is no ability for you to retroactively edit comments. On this blog, as on most WordPress blogs, you need to type

      $latex x = y$

      to get the equation

      x = y

      The ‘latex’ must occur directly after the dollar sign, with no space in between. There must be a space after it. And double dollar signs don’t work, at least on this blog.]

      • Oh, hmm, right. Silly me. The geodesic length is the square root of the Jensen-Shannon divergence, as pointed out by Gavin Crooks in the quoted article. I’m not masochistic enough to try to verify this explicitly, although it would be a good exercise…

        Next question: in principle, one could calculate the Levi-Civita symbols, the curvature tensor, the Ricci tensor, “simply” by plugging in the expression of the Fisher metric, and turning the crank. My question then is: does one get lucky, do these expressions simplify, or do they remain a big mess?

        Again, since Jensen-Shannon does follow the geodesic, and since it’s a symmetrized form of Kullback-Leibler, I guess this means that KL somehow follows a path that just starts in the wrong direction, and then curves around? Well, of course it does, but intuitively, it’s..?

      • John Baez says:

        Hi, Linas! Good to see you here! You wrote:

        The geodesic length is the square root of the Jensen–Shannon divergence, as pointed out by Gavin Crooks in the quoted article.

        Since I forget what the Jensen–Shannon divergence is, I imagine many other readers will too, so let’s remind ourselves by peeking at the Wikipedia article

        Oh, okay. It’s a well-known way of making the relative entropy (also called the ‘Kullback-Leibler divergence’ by people who enjoy obscure technical terms) into an actual metric on the space of probability distributions.

        Given two probability distributions p and q on a measure space (\Omega, d\omega) the entropy of p relative to q is

        \displaystyle{ S(p,q) = \int_\Omega \; p\, \ln(\frac{p}{q}) \, d \omega }

        To make this symmetrical, we define a new probability distribution

        m = \displaystyle{ \frac{1}{2}(p+q)}

        that’s the midpoint of p and q in some obvious naive sense, and define the Jensen–Shannon distance to be

        \displaystyle{ JSD(p,q) = \frac{1}{2} \left( S(p,m) + S(q,m) \right) }

        Unlike the relative entropy, this is obviously symmetric:

        JSD(p,q) = JSD(q,p)

        And unlike the relative entropy, but less obviously, its square root also obeys the triangle inequality!

        \sqrt{JSD(p,r)} \le \sqrt{JSD(p,q)} + \sqrt{JSD(q,r)}

        Nonetheless, I always thought this idea was a bit of a ‘trick’, and thus not worth studying.

        For a second you made me think the Jensen–Shannon distance was just the square of the geodesic distance as measured by the Fisher information metric. And that would mean it’s not just a trick!

        But unfortunately that’s not true… and probably not even what you were trying to say. The truth is here, and it’s less pretty.

          • Yellow caution alert: Over the last few days, I quadrupled the size of the Wikipedia article on the Fisher information metric, synthesizing from the Crooks paper, some of your posts, and other bits and pieces culled via Google. i.e. you’re reading what I wrote…

          (Disclaimer: I learn things by expanding WP articles. They’re not peer-reviewed. Mistakes can creep in. They’re sometimes sloppy and disorganized; it doesn’t pay to be a perfectionist on WP).

      • John Baez says:

        Linas wrote:

        Next question: in principle, one could calculate the Levi-Civita symbols, the curvature tensor, the Ricci tensor, “simply” by plugging in the expression of the Fisher metric, and turning the crank. My question then is: does one get lucky, do these expressions simplify, or do they remain a big mess?

        In the end it’s all incredibly beautiful. I think the right path was sketched earlier here. First, observe that the space of probability distributions on an n-element set is an (n-1)-simplex. Then, show that the Fisher information metric makes this simplex isometric to a portion of a round (n-1)-sphere!

        Once we know this, it’s clear that the curvature will be very nice and simple.

        But last week Simon Willerton and I were trying to check that the simplex with its Fisher information metric really is isometric to a portion of a sphere. We got stuck, mainly because the ‘obvious’ map from the simplex to the sphere, namely the one that sends a point

        (p_1 , \dots, p_n)

        such that p_i \ge 0 and p_1 + \cdots + p_n = 1 to the point

        (p_1 , \dots, p_n) /\sqrt{p_1^2 + \cdots + p_n^2}

        on the unit sphere, didn’t seem to work. We couldn’t guess the right one, and we couldn’t find a mistake in our calculation. I tried to cheat by looking up the answer, but I couldn’t quickly find it, and then I got distracted.

        It’s all well-known stuff, to those who know it well. In a recent paper, Gromov hints that this relation between the Fisher information metric on the simplex and the round metric on the sphere is secretly a link between probability theory and quantum theory (where states lie on a sphere, but in a complex Hilbert space). But he doesn’t elaborate, and he might have been joking.

          • Perhaps Gromov is hinting at the random Gaussian unitary ensemble? I’ve only gotten 3-4 pages into his paper.

          I imagine there should be all sorts of ‘secret links’, not just for simplexes and Hilbert spaces, but for generic homogeneous spaces. At the risk of stating ‘obvious’ things you understand better than I … Whenever one’s got some collection of operators acting on some space, you can ask what happens if you start picking out operators ‘at random’ (after specifying a measure that defines what ‘random’ means). By starting with something that’s homogeneous, you get gobs of symmetry, so measures and metrics are … um, ahem, cough cough ‘sphere-like’. Generalizations of Legendre polynomials and Clebsch–Gordan coefficients of good ol’ su(2) to various wild hypergeometric series. There’s a vast ocean of neato results coming out of this: there’s also gobs of hyperbolic behaviour, lots of ergodic trajectories, and so analogs of the Riemann zeta that magically seem to obey the Riemann hypothesis (anything ergodic/hyperbolic seems to generalize something or other from number theory, this seems to be a general rule). I’m sure Gromov is quite aware of these things, given his history…

          … wish there were more hours in the day, it feels like I’m in a candy store, sometimes…

      • John Baez says:

        Sometimes jokes turn out to be true.

        I got motivated again to cheat and look up the answer, and it’s very fascinating! The trick is to think of an amplitude as being like the square root of a probability—as usual in quantum mechanics!

        We can map a probability distribution

        (p_1 , \dots, p_n)

        to a point on the sphere of radius 1, namely

        (p_1^{1/2}, \dots, p_n^{1/2})

        And then the Fisher information metric on the simplex gets sent to the usual round metric on a portion of the sphere… up to a constant factor.

        If you want the metrics to match exactly, you should map the simplex to the sphere of radius 2, via

        (p_1 , \dots, p_n) \mapsto 2  (p_1^{1/2}, \dots, p_n^{1/2})

        I’m not sure how important that is: perhaps it simply means the Fisher information metric was defined slightly wrong, and should include a factor of 1/2 out front.
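        Here’s a quick way to see why the factor of 2 does the job: if we set x_i = 2 p_i^{1/2}, then d x_i = p_i^{-1/2} \, d p_i, so

        \displaystyle{ \sum_i d x_i^2 = \sum_i \frac{d p_i^2}{p_i} }

        and the right-hand side is the Fisher information metric on the simplex, since \partial_j (\ln p_i) \, \partial_k (\ln p_i) \, p_i = \partial_j p_i \, \partial_k p_i / p_i.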

        But what’s the deep inner meaning of this relation between probability distributions and points on the sphere? It seems to be saying that the ‘right’ way to measure distances between probability distributions is to treat them as coming from quantum states, and measure the distance between those!

          • Turns out that trick you provide above leads directly to the Fubini-Study metric. Set the phase part of the complex wave-function to zero, apply the Fubini-Study metric, and you get (four times) the Fisher information metric. The Bures metric is identical to the Fubini-Study metric, except that it’s normally written for mixed states, while the Fubini-Study metric is normally expressed with pure states. The differences in notation obscure a lot. Wikipedia knows all, including references.

          The part that awes me is that log p and the wave-function phase alpha put together give a symplectic form. This is surely somehow important, but I don’t know why.

        • John Baez says:

          Very intriguing stuff! I’m glad you’re trying to improve the Wikipedia article on Fisher information by adding some of these known but obscure facts. Someday some kid will read this stuff, put all the puzzle pieces together, and unify classical mechanics, quantum mechanics and probability theory in a new way. Unless we get there first, of course.

        • The other part that is weird is that log p materializes as if it were a vector in a tangent space. This suggests that the Shannon entropy has some geometric interpretation, but I can’t tell what it is.

  8. Huh. Will ponder. If I manage to clean this up, will insert into the WP article :-) A quickie Google search for ‘fisher information metric’ and ‘curvature’ brings up papers complicated enough to suggest that few are aware of this: I guess a brute-force attack must get mired.
