Quantropy (Part 4)

There’s a new paper on the arXiv:

• John Baez and Blake Pollard, Quantropy.

Blake is a physics grad student at U. C. Riverside who plans to do his thesis with me.

If you have carefully read all my previous posts on quantropy (Part 1, Part 2 and Part 3), there’s only a little new stuff here. But still, it’s better organized, and less chatty.

And in fact, Blake came up with a lot of new stuff for this paper! He studied the quantropy of the harmonic oscillator, and tweaked the analogy between statistical mechanics and quantum mechanics in an interesting way. Unfortunately, we needed to put a version of this paper on the arXiv by a deadline, and our writeup of this new work wasn’t quite ready (my fault). So, we’ll put that other stuff in a new version—or, I’m thinking now, a separate paper.

But here are two new things.

First, putting this paper on the arXiv had the usual good effect of revealing some existing work on the same topic. Joakim Munkhammar emailed me and pointed out this paper, which is free online:

• Joakim Munkhammar, Canonical relational quantum mechanics from information theory, Electronic Journal of Theoretical Physics 8 (2011), 93–108.

You’ll see it cites Garrett Lisi’s paper and pushes forward in various directions. There seems to be a typo where he writes the path integral

Z = \displaystyle{ \int e^{-\alpha S(q) } D q}

and says

In order to fit the purpose Lisi concluded that the Lagrange multiplier value \alpha \equiv 1/i \hbar. In similarity with Lisi’s approach we shall also assume that the arbitrary scaling-part of the constant \alpha is in fact 1/\hbar.

I’m pretty sure he means 1/i\hbar, given what he writes later. However, he speaks of ‘maximizing entropy’, which is not quite right for a complex-valued quantity; Blake and I prefer to give this new quantity a new name, and speak of ‘finding a stationary point of quantropy’.

But in a way these are small issues; being a mathematician, I’m quicker to spot tiny technical defects than to absorb significant new ideas, and it will take a while to really understand Munkhammar’s paper.

Second, while writing our paper, Blake and I noticed another similarity between the partition function of a classical ideal gas and the partition function of a quantum free particle. Both are given by an integral like this:

Z = \displaystyle{\int e^{-\alpha S(q) } D q }

where S is a quadratic function of q \in \mathbb{R}^n. Here n is the number of degrees of freedom for the particles in the ideal gas, or the number of time steps for a free particle on a line (where we are discretizing time). The only big difference is that

\alpha = 1/kT

for the ideal gas, but

\alpha = 1/i \hbar

for the free particle.
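
Since S is quadratic, both integrals are Gaussian. For reference, here is the standard Gaussian formula, written for Lebesgue measure D q = dq_1 \cdots dq_n and S(q) = \frac{1}{2} q \cdot M q with M a positive definite matrix:

Z = \displaystyle{ \int_{\mathbb{R}^n} e^{-\frac{\alpha}{2} \, q \cdot M q} \; dq_1 \cdots dq_n = \left( \frac{2 \pi}{\alpha} \right)^{n/2} (\det M)^{-1/2} }

(For \alpha = 1/i\hbar this becomes an oscillatory integral and needs more care, but the same formula applies.)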

In both cases there’s an ambiguity in the answer! The reason is that to do this integral, we need to pick a measure D q. The obvious guess is Lebesgue measure

dq = dq_1 \cdots dq_n

on \mathbb{R}^n. But this can’t be right, on physical grounds!

The reason is that the partition function Z needs to be dimensionless, but d q has units. To correct this, we need to divide dq by some dimensionful quantity to get D q.

In the case of the ideal gas, this dimensionful quantity involves the ‘thermal de Broglie wavelength’ of the particles in the gas. And this brings Planck’s constant into the game, even though we’re not doing quantum mechanics: we’re studying the statistical mechanics of a classical ideal gas!

That’s weird and interesting. It’s not the only place where we see that classical statistical mechanics is incomplete or inconsistent, and we need to introduce some ideas from quantum physics to get sensible answers. The most famous one is the ultraviolet catastrophe. What are all the rest?

In the case of the free particle, we need to divide by a quantity with dimensions of \mathrm{length}^n to make

dq = dq_1 \cdots dq_n

dimensionless, since each dq_i has dimensions of length. The easiest way is to introduce a length scale \Delta x and divide each dq_i by that. This is commonly done when people study the free particle. This length scale drops out of the final answer for the questions people usually care about… but not for the quantropy.

Similarly, Planck’s constant drops out of the final answer for some questions about the classical ideal gas, but not for its entropy!
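
Here is a tiny numerical sketch of that last point, using a toy model of my own rather than the free particle from our paper: one degree of freedom, a quadratic ‘action’ S(q) = q^2/2, and a real value of \alpha, so everything reduces to ordinary Gaussian integrals. Here dx plays the role of \Delta x, and Q = \alpha \langle S \rangle + \ln Z is the real-\alpha analogue of quantropy:

import numpy as np

def logZ(alpha, dx):
    # Z = (1/dx) * integral of exp(-alpha q^2/2) dq = sqrt(2*pi/alpha) / dx,
    # so ln Z = (1/2) ln(2*pi/alpha) - ln(dx)
    return 0.5 * np.log(2 * np.pi / alpha) - np.log(dx)

alpha, h = 1.3, 1e-6
for dx in (0.1, 1.0, 10.0):
    # expected action <S> = -d(ln Z)/d(alpha), via a central finite difference
    S_mean = -(logZ(alpha + h, dx) - logZ(alpha - h, dx)) / (2 * h)
    # real-alpha analogue of quantropy: Q = alpha <S> + ln Z
    Q = alpha * S_mean + logZ(alpha, dx)
    print(f"dx = {dx:5.1f}   <S> = {S_mean:.6f}   Q = {Q:.6f}")

The expected action \langle S \rangle comes out the same for every dx, while Q shifts by -\ln(dx): the length scale drops out of the usual answers, but not out of the quantropy.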

So there’s an interesting question here, about what this new length scale \Delta x means, if anything. One might argue that quantropy is a bad idea, and the need for this new length scale to make it unambiguous is just proof of that. However, the mathematical analogy to quantum mechanics is so precise that I think it’s worth going a bit further out on this limb, and thinking a bit more about what’s going on.

Some weird sort of déjà vu phenomenon seems to be going on. Once upon a time, people tried to calculate the partition functions of classical systems. They discovered they were infinite or ambiguous until they introduced Planck’s constant, and eventually quantum mechanics. Then Feynman introduced the path integral approach to quantum mechanics. In this approach one is again computing partition functions, but now with a new meaning, and with complex rather than real exponentials. But these partition functions are again infinite or ambiguous… for very similar mathematical reasons! And at least in some cases, we can remove the ambiguity using the same trick as before: introducing a new constant. But then… what?

Are we stuck in an infinite loop here? What, if anything, is the meaning of this ‘second Planck’s constant’? Does this have anything to do with second quantization? (I don’t see how, but I can’t resist asking.)

71 Responses to Quantropy (Part 4)

  1. Phil Gossett says:

    There seems to be a typo in your typo:

    There seems to be a typo where he writes the path integral

    Formula does not parse

    and another here:

    Both are given by an integral like this:

    Formula does not parse

    where $S$

    and finally here:

    Does this have anything to do with second quantization (http://en.wikipedia.org/wiki/Second_quantization)?

  2. Phil Gossett says:

    Sorry. Never mind. They all seem good now. Not sure what happened there…

  3. Tobias Fritz says:

    So why does the partition function Z have to be dimensionless? Why not regard it as a quantity of dimension \mathrm{length}^n? For example, consider the probability that the system is at a certain position q,

    P(q) = \frac{1}{Z} e^{-\alpha S(q)}.

    This P should be a probability density, and hence Z better have dimension \mathrm{length}^n!

    Also, equations like

    \langle S\rangle = -\frac{\partial\log Z}{\partial\alpha} = -\frac{1}{Z}\cdot\frac{\partial Z}{\partial\alpha},

    still make sense with dimensionful Z. So what’s the problem with this?

    Apologies if this has already been discussed elsewhere; I haven’t seen it explained in the paper.

    • John Baez says:

      Good question! There are two slightly different questions: does Z need to be dimensionless in statistical mechanics, and does it need to be so in quantum mechanics?

      In statistical mechanics everyone seems to agree that Z should be dimensionless. For example, the free energy is

      F = -kT \ln Z

      where T is temperature and k is Boltzmann’s constant. k T has dimensions of energy so \ln Z should be dimensionless, which means that Z is dimensionless.

      Indeed, the usual rules give no consistent way of assigning dimensions to \ln Z unless Z is dimensionless. You cleverly note that the derivative of \ln Z can still make dimensional sense, but in statistical mechanics we also really care about \ln Z itself.

      (By the way, in the paper we use a system where temperature has units of energy and k = 1. This doesn’t affect the conclusions here.)

      In quantum mechanics we could, I suppose, argue that the ‘free action’ -i \hbar \ln Z is not physically interesting so we don’t need \ln Z to make dimensional sense. But instead, we are hoping that the whole analogy between stat mech and quant mech works as seamlessly as possible. So for us, the same argument that forced Z to be dimensionless in stat mech also applies to quant mech.

      For example, consider the probability that the system is at a certain position q,

      P(q) = \frac{1}{Z} e^{-\alpha S(q)}.

      This P should be a probability density, and hence Z better have dimension \mathrm{length}^n!

      I’ll have to think about this when I have some more time. It sounds like you have me trapped, but I will try to escape… or at least try to figure out what’s going on. Thanks!

      • Tobias Fritz says:

        Aha! So if I hypothetically used a dimensionful partition function and had a value of e.g. Z=7 m^2, the free energy would end up being

        F=-kT\ln 7 - 2 kT \ln m,

        which indeed seems odd — since what could taking the logarithm of a unit possibly mean? On the other hand, the second term is like a choice of zero for F. Since observable quantities typically (or always?) involve only free energy differences, this choice of zero doesn’t have observational consequences.

      • John Baez says:

        That’s a reasonable argument and maybe it’s true in some fundamental sense. However, when people compute the free energy of a classical ideal gas, they actually want to know the answer ‘on the nose’, not just up to a constant. The reason, perhaps, is that in this subject everyone assumes that the free energy of a box full of vacuum is zero. So a zero of energy has already been fixed.

        The calculation turns out to be very interesting. You can see one version here:

        • S. M. Tan, Statistical Physics, Chapter 4: the Classical Ideal Gas.

        where the final answer is in equation (4.31). One strange thing about this particular version of the calculation is that it starts as a quantum calculation and then takes the classical limit. You will see that the volume of the box divided by the ‘thermal de Broglie wavelength’ of the gas molecules shows up in the answer—see eq. (4.36) for an explanation of what I mean.

        Thus, the answer involves Planck’s constant, even in the classical limit!

        You can also try to compute the free energy of the classical ideal gas purely using classical mechanics. People must have tried this before quantum mechanics was invented. This is closer to what we’re talking about here. In this approach you naively start by computing

        Z = \displaystyle{\int_{\mathbb{R}^{3n} \times B^{n}} e^{-E(p)/kT} \; d^{3n} p \, d^{3n} q }

        where p \in \mathbb{R}^{3n} describes the momentum of n particles in a 3-dimensional box B \subseteq \mathbb{R}^3, q \in B^n describes the position of those particles, and E(p) is the kinetic energy of those particles, a quadratic function of p.

        However, this formula for Z is ‘wrong’, because Z is not dimensionless! It has units of momentum times position to the 3nth power: that is, action to the 3nth power. To make it dimensionless we need to divide the measure

        d^{3n} p \, d^{3n} q

        by something with units of action to the 3nth power… and this is where Planck’s constant, or more precisely \hbar^{3n}, shows up! In this approach, it enters in a somewhat ad hoc way.
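
        Explicitly, taking E(p) = |p|^2/2m, doing the Gaussian momentum integrals, and dividing by h^{3n} (using h rather than \hbar; the factors of 2\pi just shift the additive constant), we get

        Z = \displaystyle{ \frac{V^n \, (2 \pi m k T)^{3n/2}}{h^{3n}} = \left( \frac{V}{\lambda^3} \right)^n }

        where V is the volume of the box and \lambda = h/\sqrt{2 \pi m k T} is the thermal de Broglie wavelength mentioned above. (I’m ignoring the factor of n! for identical particles, which doesn’t matter for this point.)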

        In other words, we’re seeing that to compute the free energy of a classical ideal gas ‘on the nose’, not just up to a constant, we’re forced to introduce a quantity with dimensions of action. This quantity later turns out to equal Planck’s constant.

        So, we’re seeing a way that quantum mechanics pushes its nose under the door even when you didn’t invite it, like a camel that you wish would stay outside your tent.

      • John Baez says:

        Tobias wrote:

        Since observable quantities typically (or always?) involve only free energy differences, this choice of zero doesn’t have observational consequences.

        I want to more clearly admit that to a large extent this must be the right answer to the puzzle.

        In statistical mechanics, adding a constant to the Hamiltonian doesn’t change the probability of being in any state. It multiplies the partition function by a constant. It adds a constant to the free energy. But for most purposes, it should be considered as a kind of ‘gauge transformation’ – a symmetry that doesn’t affect anything we can measure.

        Similarly, in the path integral approach to quantum mechanics, adding a constant to the action doesn’t change the amplitude of any history. It multiplies the partition function by a constant. It adds a constant to the free action. But for most purposes, it should be considered as a kind of ‘gauge transformation’.

        Nonetheless, in statistical mechanics there are some situations where people want a specific answer for the free energy, not just ‘up to a constant’. To get this, it seems we need to introduce some ideas from quantum statistical mechanics. This has the effect of introducing Planck’s constant, which allows us to make the partition function dimensionless and also (it seems) choose a specific ‘correct value’ for it!

        Similarly—following the analogy I’m always using here—there could be some situations in quantum mechanics where we want a specific answer for the free action, not just ‘up to a constant’. To get this, it seems we need to introduce some ideas from… some new subject! This has the effect—at least in the example Blake and I looked at in our paper—of introducing a fundamental length scale, which allows us to make the partition function dimensionless and also choose a specific value for it.

        I want to think about this more, even if it’s rather peculiar. Pushing analogies until they break is often interesting.

        (Naturally the idea of choosing a fundamental length scale makes me think of quantum gravity and the Planck length. Could quantum gravity solve some problems of quantum mechanics just as quantum mechanics solved some problems of statistical mechanics? This is a somewhat far-out idea. I’m not claiming it makes sense. But I want to be the first to mention it, just in case it turns out to be brilliant instead of stupid.)

        • domenico says:

          It is very, very interesting.
          In statistical mechanics (canonical and grand canonical) the partition function is evaluated using the number of quantum states of a simple system (for example, particles confined in a box, or a quantum harmonic oscillator), with a volume for each distinct quantum state.
          If there is an analogy here, then there is a number of trajectories between the initial state and the final state that can give the correct partition function.
          I am thinking of two possibilities.
          The first (improbable) is a simple constraint on the possible trajectories, for example optical fibers (for photons) or multiple slits (for particles), to reduce the number of trajectories.
          The second is a particle-in-a-box experiment, where a single particle spreads out in the initial box, followed by a slow movement of a septum until a final state is reached: if it is possible to calculate the number of trajectories between the initial state and the final state, then it is possible to evaluate the correct partition function for this system (the denominator of Z). I am thinking that if the movement is slow (a momentum constraint), the trajectory is unique; when the movement is quick, the number of trajectories grows.
          If a quantum partition function exists, then there exists a number of trajectories for each volume in phase space (a constraint on the possible trajectories), and this is the first time that I have read something like this.

        • Tobias Fritz says:

          Thanks! So abstractly, maybe there is a sense in which a Hamiltonian or action actually takes values not in the real numbers, but in an affine line. And likewise for the free energy. I’m quite confused about what to make of this, though…

          In general, I find it important to stress that there is no consistency problem with classical statistical mechanics without quantum theory. Physicists often seem confused about this kind of issue and summon one theory to the “rescue” of another, even if the latter is perfectly consistent. I wonder if this applies e.g. to Bohr’s general relativity argument in the Bohr-Einstein debates, but this is a can of worms that I’d rather keep closed.

        • John Baez says:

          Tobias wrote:

          In general, I find it important to stress that there is no consistency problem with classical statistical mechanics without quantum theory.

          As an abstract general framework it’s consistent and also extremely powerful. However, there are some important classical Hamiltonians H on infinite-dimensional vector spaces V for which

          Z = \displaystyle{ \int_V \exp(-H(x)/kT) \, dx }

          is ill-defined, yet the corresponding quantized analogue

          Z = \displaystyle{ \mathrm{tr} \exp(-\hat{H}/kT)}

          is well-defined. Namely, the Hamiltonian for a box of electromagnetic radiation. This is the ultraviolet catastrophe, and this is how Planck stumbled into quantum mechanics. My point now is that even in situations where the integral is well-defined, sometimes it’s not dimensionless until we divide dx by a power of \hbar, or some arbitrarily chosen constant with units of action.
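
          To spell out the standard example: a single field mode of frequency \omega, with its zero-point energy dropped, has quantum partition function

          Z_{\mathrm{mode}} = \displaystyle{ \sum_{n=0}^\infty e^{-n \hbar \omega/kT} = \frac{1}{1 - e^{-\hbar \omega/kT}} }

          so its mean energy is \hbar \omega/(e^{\hbar \omega/kT} - 1), which goes to zero at high frequencies. Classically, equipartition would assign every mode mean energy kT, and summing over the infinitely many modes in the box gives infinity.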

          Physicists have psychological reasons for wanting to find ways that new theories can ‘save’ old ones, instead of continuing to pursue the old ones as self-contained self-consistent objects of study. They get paid for discovering new theories.

        • domenico says:

          Excuse me: there was an overlap of the two definitions while I was writing.
          I wrote the quantum partition function by analogy with the path integral.
          If a path integral exists (which has a form analogous to a partition function), then there exists a number of trajectories for each volume in phase space (a constraint on the possible trajectories), and this is the first time that I have read something like this.

        • Bruce Smith says:

          This has the effect … of introducing a fundamental length scale, which allows us to make the partition function dimensionless and also choose a specific value for it.

          By “it” I think you mean the length scale. Can you elaborate on how this allows you to choose a specific value for it? I don’t see why, in the pure classical case, it’s insufficient to just choose an arbitrary value for it.

        • John Baez says:

          John wrote:

          This has the effect … of introducing a fundamental length scale, which allows us to make the partition function dimensionless and also choose a specific value for it.

          Bruce wrote:

          By “it” I think you mean the length scale.

          No, I meant the partition function.

          It might be helpful to recall the more familiar but completely analogous situation in statistical mechanics, where we need to choose a quantity with dimensions of action to make the partition function Z dimensionless, and also give the partition function a specific value.

          This is important, for example, when we try to compute the free energy of a classical ideal gas, which is

          -k T \ln Z

          If we only know the partition function up to a constant multiple, we only know the free energy up to an additive constant. And if a quantity isn’t dimensionless, it’s usually considered bad to take its logarithm (though Tobias has argued above that it can be okay).

          Earlier I wrote:

          When people compute the free energy of a classical ideal gas, they actually want to know the answer ‘on the nose’, not just up to a constant. The reason, perhaps, is that in this subject everyone assumes that the free energy of a box full of vacuum is zero. So a zero of energy has already been fixed.

          You can see one version here:

          • S. M. Tan, Statistical Physics, Chapter 4: the Classical Ideal Gas.

          where the final answer is in equation (4.31). One strange thing about this particular version of the calculation is that it starts as a quantum calculation and then takes the classical limit. You will see that the volume of the box divided by the ‘thermal de Broglie wavelength’ of the gas molecules shows up in the answer—see eq. (4.36) for an explanation of what I mean.

          Thus, the answer involves Planck’s constant, even in the classical limit!

          You can also try to compute the free energy of the classical ideal gas purely using classical mechanics. People must have tried this before quantum mechanics was invented. This is closer to what we’re talking about here. In this approach you naively start by computing

          Z = \displaystyle{\int_{\mathbb{R}^{3n} \times B^{n}} e^{-E(p)/kT} \; d^{3n} p \, d^{3n} q }

          where p \in \mathbb{R}^{3n} describes the momentum of n particles in a 3-dimensional box B \subseteq \mathbb{R}^3, q \in B^n describes the position of those particles, and E(p) is the kinetic energy of those particles, a quadratic function of p.

          However, this formula for Z is ‘wrong’, because Z is not dimensionless! It has units of momentum times position to the 3nth power: that is, action to the 3nth power. To make it dimensionless we need to divide the measure

          d^{3n} p \, d^{3n} q

          by something with units of action to the 3nth power… and this is where Planck’s constant, or more precisely \hbar^{3n}, shows up! In this approach, it enters in a somewhat ad hoc way.

          In other words, we’re seeing that to compute the free energy of a classical ideal gas ‘on the nose’, not just up to a constant, we’re forced to introduce a quantity with dimensions of action. This quantity later turns out to equal Planck’s constant.

          So, we’re seeing a way that quantum mechanics pushes its nose under the door even when you didn’t invite it, like a camel that you wish would stay outside your tent.

        • Bruce Smith says:

          Right — I understand this part:

          to compute the free energy of a classical ideal gas ‘on the nose’, not just up to a constant, we’re forced to introduce a quantity with dimensions of action.

          But in the absence of any desire to go beyond the classical situation, I don’t get this part:

          This quantity later turns out to equal Planck’s constant.

          That is, sticking to classical ideas, I don’t see why that “quantity with dimensions of action” can’t have an arbitrary value.

        • John Baez says:

          Bruce wrote:

          That is, sticking to classical ideas, I don’t see why that “quantity with dimensions of action” can’t have an arbitrary value.

          Yes, it can have any value you want. But its value affects the free energy you compute for the classical ideal gas. If you want to get the “right answer”—the answer we now believe to be close to the answer for an actual gas—you need to pick this quantity with dimensions of action to be Planck’s constant.

          But suppose we didn’t have quantum mechanics! Would we be in serious trouble?

          Probably not. Since only energy differences are measurable, one can argue, as Tobias did earlier in this long conversation, that an additive constant ambiguity in our definition of free energy doesn’t affect any physical predictions. That sounds correct to me.

          But there’s more. The ambiguity also affects the entropy you compute for the classical ideal gas! This is more surprising, because—at least nowadays—we’re less likely to think of entropy as being defined only up to an additive constant.

          But in fact, if you try to measure the entropy of a substance (as opposed to computing it), you’ll see that we typically fix this constant using the Third Law of Thermodynamics:

          The entropy of a perfect crystal at absolute zero is exactly equal to zero.

          So, we measure the entropy of a gas at a given temperature by first chilling it down to absolute zero, or close, and then keeping careful track of how much heat energy it takes to warm it up to the given temperature. If we can’t afford to do this experiment, entropy is defined only up to a constant.
          (And indeed, we can never afford to get all the way down to absolute zero: we have to hope that close is good enough.)

          But a classical ideal gas never freezes and forms a crystal! I believe its entropy just keeps dropping more and more as we go closer and closer to absolute zero! More precisely, it goes to -\infty: at low temperatures it goes roughly like \log(T), up to some fudge factors.
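
          To make this explicit: the Sackur–Tetrode formula for the entropy of an ideal gas of N particles is

          S = \displaystyle{ k N \left[ \ln \left( \frac{V}{N \lambda^3} \right) + \frac{5}{2} \right] }

          where \lambda = h/\sqrt{2 \pi m k T} is the thermal de Broglie wavelength. Since \lambda grows like T^{-1/2} as the gas cools, S goes like \frac{3}{2} k N \ln T \to -\infty as T \to 0… and the formula also displays the dependence on Planck’s constant.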

          In short, the free energy and entropy of a classical ideal gas are ambiguous up to additive constants, and the latter behaves in a rather annoying way. If there were no quantum mechanics we could learn to live with this, but in fact physicists were irritated by this problem until quantum mechanics came along.

          I found this article quite helpful:

          • S. M. Tan, Statistical Physics, Chapter 4: the Classical Ideal Gas.

          though they should have come out and said what I just said.

      • Alek says:

        The problem with the units of Z comes from the dimensionally invalid formula for differential entropy used in obtaining the Gibbs–Boltzmann distribution. Instead of integrating p(q)\log p(q), one should calculate p(q)\log(p(q)/m(q)) – where m(q) is a “base measure”, i.e. dq^n – thus dimensionful. After solving the optimization problem the base measure will appear in the numerator…

  4. Blake Stacey says:

    Typo on p. 2: “The kinetic energy is often, though not always, has a minimum at velocity zero.” And p. 15: “In the subject of computation, there is a superficiallly different notion”.

  5. crabel2013 says:

    In the approach that you took, is a(q) a path density function? How does this approach differ from Maximum Caliber?

    I’m a bit confused later on (page 6): how can the path density not be conserved?

    There is a similarity between what you do here and how Gibbs derived the index of probability, using Gibbs’ notation. The index of probability \eta is defined as \mathrm{Log}P, where P is the coefficient of probability, a conserved quantity coming from the conservation of the density in phase (continuity).

    Gibbs then did a first-order expansion of \eta and arrived at the canonical distribution, where \eta=\frac{\psi-\epsilon}{\theta}

    Where I am confused (this is due to my ignorance) is that the partition function, at least as Gibbs defines it in Chapter 8, seems to be a means of converting from the internal coordinate system to that of energy alone. Can this approach be applied to what you are doing here?

    Also, the modulus of the action paths is constant, 1/\lambda? Why would the variability of paths be fixed? Here I am really showing my ignorance…

    Thank you for your patience and your interesting paper.

  6. Mark Meckes says:

    While the paper sounds very interesting, I must say that the lead-off “There’s a new paper on the arXiv” sounds less than newsworthy.

  7. Uncle Al says:

    Claude Shannon showed entropy is a measure of information. Information sources quantum mechanics, arxiv.org:1208.0493. Perhaps entropy measures are default quantized.

    Is there a Noetherian symmetry enforcing conservation of information?

  8. jessemckeown says:

    Just to chase the analogy from the other end, the ultraviolet catastrophe can be read: supposing one initially has a classical field oscillating with finite energy, and a stochastic process redistributing that energy among the field modes at random and in a suitably conservative, symmetric and transitive way; then the bandwidth of the system will tend to infinity in a particular, central-limit-theorem kind of way (even while the signal strength in any subband tends to zero).

    From the Fourier-dual point of view of resolution, that means the discernible features of the oscillating field will tend towards very small size. And this is (when I get sloppy and provocative all at once) amusingly like what we see in the observable universe: the discernible structures (galaxy local groups etc.) are getting smaller when compared to the universe-as-a-whole — although we usually say it the other way around, that the observable universe is stretching apart.

  9. Joan Vazquez says:

    Sorry for this very basic question, but I don’t really understand the analogy T \mapsto i\hbar, basically because I don’t see how i \hbar is a variable.

    Why isn’t i \hbar a constant?

    • John Baez says:

      Of course i \hbar is a constant. But when you write down formulas in quantum mechanics involving \hbar, you can treat it as a variable. This is widely done. For example, people study the ‘classical limit’ of quantum mechanics by doing calculations that include \hbar and then taking the limit \hbar \to 0. Then you get back to classical mechanics.

      A more refined version of this idea gives the ‘loop expansion’ in quantum field theory. We write answers to physical questions as power series in \hbar, and we find the coefficients of these power series in the usual way, using Taylor series: treating the answer as a function of \hbar and repeatedly differentiating it!

      In short: it works, and it’s illuminating. So, it’s good to do, even if it raises questions we can’t answer yet. That’s often the case in physics.

      However, it would be nice to understand this mystery better. Maybe we could improve the analogy between statistical mechanics and quantum mechanics by making T analogous to something we can actually adjust in the lab!

      In our next paper Blake and I hope to do that. We were going to include this in our first paper, but it turned out to be harder than we thought.

  10. garrett says:

    Congratulations to you and Blake on getting this paper out! Good to see this stuff getting more attention.

    There is an issue troubling me though. In your post, you criticize Joakim Munkhammar for referring to quantropy incorrectly as entropy, because it is complex. And, more seriously, in your paper, you choose to refer to the “amplitude”, a(x) — but, in calculations, such as at the bottom of p5, you use it not as an amplitude but as a probability density. This clearly troubles you too, since, on the following line, you admit it’s “a bit odd.” But I think it’s worse than a bit odd, I think it’s terribly wrong to refer to a(x) as “amplitude,” because that has a different and very strong meaning to physicists, implying p(x) = a^* a , which you do not mean.

    I realize this may seem like bandying about over what we call things, but it’s obscuring a potentially fascinating subject. What we are doing here is generalizing to complex probabilities, and using that to derive QM. The relevant quote, I think from Hadamard, is “The shortest path between two truths in the real domain often passes through the complex domain.” You and Blake are, perhaps wisely, being hesitant about tackling anything as crazy as complex probability. But I think that does need to happen here, and tiptoeing around the issue will sadly delay the fun.

    • lee bloomquist says:

      Axioms for possibility and impossibility

      (Information and Impossibilities, Jon Barwise, Notre Dame J. Formal Logic Volume 38, Number 4 (1997), 488-515.)

      What kind of numbers would mean “possibility”?

      Something is either possible or it’s not, so it seems no possibility could rationally be compared to any other by a “greater-than”/“less-than” relation. Every one of them just is a possibility or is not. There is no half-way point in being a possibility.

      Then, for example, there’s the possibility of my being in the universe; it just depends on whether I am alive or not. Next imagine a divided universe, with part A and part B. The possibility of me being in the universe could then be either (a) the possibility of being in part A or (b) the possibility of being in part B. In the case where I make the A part of the universe vanish, the possibility that I’m in the universe is then just the possibility that I will be in part B. And vice versa.

      thePossibilityOfBeingInTheUniverse = A + B = B + A = thePossibilityOfBeingInTheUniverse

      Possibility seems to be that kind of number.

      But I also experience possibility in another way. Say that I’m planning a BBQ, and I want to invite Homer. But Homer hates, I mean hates, to BBQ in the rain. So initially in my plans for the party, I consider for planning purposes the conjunction of two possibilities: (a) the possibility it will rain “R” and (b) the possibility that Homer will come “H”. If both possibilities occur we have to move the party into the garage as quickly as possible, to avoid Homer’s whining. But if either possibility vanishes, my worry about quickly moving the party into the garage vanishes. It doesn’t matter which possibility I think of first.

      thePossibilityOfUsingGarage = R*H = H*R = thePossibilityOfUsingGarage

      So possibility must be this kind of number as well.

      I’ve also had another kind of experience. Say that Homer suddenly turns into a complete ass, which is not an unusual experience for him. I quickly factor out any consideration I once had at all for Homer’s baby man worries about the rain. I factor that consideration completely out from my planning process, and what the hell, if it rains, it rains and we’ll think of something. That’s part of the fun down here in the islands!

      BEFORE:

      ScopeOfPlanning: thePossibilityOfUsingGarage = R*H

      AFTER:

      ScopeOfPlanning: thePossibilityOfUsingGarage/H = R

      Factoring away from the possibility of using the garage any consideration of the possibility that Homer will attend results in me considering only the possibility it will rain, and heck I think that’s part of the fun. Sort of an adaptive team exercise, especially in the islands.

      This narrows down the suspects. The kind of number that means “possibility” must be one of the four kinds of numbers that are in the division algebras. Check it out: the above constraints mean a complex number means possibility, and if it vanishes so does the possibility…

      ?

    • John Baez says:

      Garrett wrote, modulo tiny changes in notation:

      … I think it’s terribly wrong to refer to a(q) as “amplitude,” because that has a different and very strong meaning to physicists […] You and Blake are, perhaps wisely, being hesitant about tackling anything as crazy as complex probability. But I think that does need to happen here, and tiptoeing around the issue will sadly delay the fun.

      I’m not being hesitant. I think a(q) is an amplitude.

      When you use the path integral approach to compute the wavefunction \psi(t,x) for a particle that starts at a specific position x_0 at a specific time t_0, you integrate a(q) over all paths starting at x_0 at time t_0 and ending at x at time t. You can then prove that \psi(t,x) obeys Schrödinger’s equation.

      Since \psi(t,x) is an amplitude, I claim that a(q) should also be considered an amplitude.

      Then we can say: to figure out the amplitude for a particle to get somewhere, we work out the amplitude that it takes any specific path, and then integrate that over all paths that get there.

      In a double slit experiment, we see constructive and destructive interference effects, thanks to the fact that these numbers a(q) are complex.

      I think what I’m saying fits quite nicely with how Feynman explained path integrals, and how physicists use them.

      There’s more to say, but this is enough to get the conversation going.

      • garrett says:

        Well, yes, that establishes that a is related to an amplitude, \psi , but IS a an amplitude? I’m not sure of this, but I think a is always used as a probability. It’s normalized as a probability:

        \int_X a dx =1

        And it’s used to compute expected values as a probability:

        \langle A \rangle = \int_X a A \, dx

        For a to be a probability amplitude, and not a probability, I think it’s necessary that it be used to compute a probability as p = a^* a . Is this ever the case?

        Maybe we need a new word for a generalized probability that can be complex?

      • John Baez says:

        I figure: if the complex number a(q) associated to a path q were a probability density and we integrated it over the set of paths from one spacetime point to another, we’d get the probability of going from the first point to the second. But we don’t: we get the amplitude of going from the first point to the second. So I feel fine calling a(q) an amplitude density, or amplitude for short.

        I don’t mind you calling it a complex probability, though.

        I completely agree that there’s something remarkable about how we normalize a(q), not by requiring that the integral of |a(q)|^2 equal 1, but by requiring that the integral of a(q) equal 1.

        This is something I want to understand. Of course this is needed for Wick rotation—the replacement of time by imaginary time, reinterpreted as i times inverse temperature—to relate amplitudes in quantum mechanics to probabilities in statistical mechanics. But I want to understand the ‘deep inner meaning’ of Wick rotation. That’s what this ‘quantropy’ work is about.

        Deciding what to call a(q) isn’t that important to me. When we understand what’s going on, the terminology may fall into place. There are some things Blake is starting to do that I’m hoping will make real progress. And the stuff we’ve done so far opens up lots of new questions—like the one mentioned in this blog article, the need for a new dimensionful parameter, analogous to Planck’s constant, to make the partition function Z dimensionless.

        • garrett says:

          When you integrate over paths that go from a first spacetime point to a second, you get an amplitude, \psi , because you have manifestly instituted time ordering. An amplitude is half a probability (or, somewhat more accurately, a square root). To maintain reversal symmetry, you need also include paths from the second point to a third. Combining these amplitudes gives a probability, p = \psi^* \psi . Amplitudes come from factoring probability in this way. And what you are calling a is a probability and not an amplitude.

        • John Baez says:

          Garrett wrote:

          When you integrate over paths that go from a first spacetime point to a second, you get an amplitude, \psi, because you have manifestly instituted time ordering.

          I’m not sure what you mean by that. In my previous comment, when I integrated a(q) over all paths q with endpoints (t_0, x_0) and (t_1,x_1), I said the result was the amplitude for a particle that starts at the spacetime point (t_0, x_0) to end up at (t_1,x_1). But it’s also the amplitude for a particle that ends up at the point (t_1, x_1) to start at (t_0,x_0). It’s only my language—“starts at”, “ends up at”—that breaks time reversal symmetry. And I’m only talking that way because that’s how people tend to talk. If t_0 < t_1 we say the particle “starts at” (t_0, x_0) and “ends up at” (t_1,x_1), but I don’t need to assume t_0 < t_1 and I don’t need to talk in a time-asymmetric way.

          To maintain reversal symmetry, you need also include paths from the second point to a third. Combining these amplitudes gives a probability, p = \psi^* \psi. Amplitudes come from factoring probability in this way.

          I don’t know about this “you need” business. But I’m perfectly happy to consider this other setup if it helps shed light on the relation between probabilities and amplitudes.

          It sounds interesting, but it’s not shedding light for me yet. Let me try to understand what you’re saying. Say I have three spacetime points z_0, z_1, z_2. I let \psi_1 be the integral of a(q) over all paths with endpoints z_0 and z_1. I let \psi_2 be the integral of a(q) over all paths with endpoints z_1 and z_2. I multiply \psi_1 and \psi_2, or maybe \psi_1 and \psi_2^*. What does this mean? You seem to be saying it’s a probability.

          I think that if z_2 = z_0, then what I’m calling \psi_2 will often equal \psi_1^*, so that

          \psi_1 \psi_2 = |\psi_1|^2

          is the probability of a particle starting at z_0 reaching z_1.

          I don’t feel a strong sense of enlightenment yet. In particular I’m not sure why you mentioned a ‘third point’ if this third one is the first one.

          And what you are calling a is a probability and not an amplitude.

          Repeatedly saying this isn’t going to speed up information transmission between us. It’s very unlikely that after saying this four times, saying it a fifth time will teach me anything. I heard you say it each time.

        • garrett says:

          Ha, sorry, I’ll make an effort to be less sphexish. This is what I meant, more precisely, about the probability factoring:

          p(x',t') = \int{Dx \, \delta(x(t') - x') \, p(x)}
          = \left( \int^{x(t')=x'}{Dx \, p^{t'}(x)} \right) \left( \int_{x(t') = x'}{Dx \, p_{t'}(x)} \right)
          = \psi(x',t') \, \psi^*(x',t')

          in which

          \psi(x',t') = \int^{x(t')=x'}{Dx \, p^{t'}(x)}
          = \frac{1}{\sqrt{Z}} \int^{x(t')=x'}{Dx \, e^{ - \alpha S^{t'}}}

          The quantum wavefunction, \psi(x',t'), is the amplitude for paths with t < t' arriving at x', and \psi^*(x',t') is the amplitude for paths with t > t' leaving from x' — these amplitudes multiply to give the probability of the system passing through x'(t'). This should also work if we presume other, possibly identical, start and end points. And it requires a time-independent Lagrangian.

        • John Baez says:

          It took me quite a while to fix the LaTeX on your comment. The problem turned out to be that you used the fancy curly quote ’ instead of the straight quote ' in various places. These are different characters and only the latter is digestible by LaTeX here. After I recover I’ll think about what you actually said.

        • garrett says:

          Thank you! I tested it through my LaTeX compiler and it worked, so I was sad to see the errors here. Comment preview would be good. Or maybe there’s a sandbox?

        • John Baez says:

          Comment preview would be good, but so far I’ve been too lazy to pay the money to get these extra features. Sorry.

          You can use my blog as a sandbox if you want, e.g. the “About” page; I’ll just delete stuff after a while.

          I also don’t mind fixing people’s TeX; I just mentioned your case because the bug was fiendishly subtle.

          You had to do something a bit unusual to put right single quote marks in two different fonts in your comment. If I type x’, everything is fine in TeX here: it gives

          x'

          You did that most of the time, but your comment also had some instances of the fancy curly quote ’, which I can’t type. I can only cut-and-paste it. If I use that and put x’ in TeX it gives

          x’

          So, whatever weird thing you did, don’t.

        • lee bloomquist says:

          garrett says:

          17 November, 2013 at 12:55 pm

          …I think it’s necessary that it be used to compute a probability as p = a a^* . Is this ever the case?…

          (If not I wonder: are we seeing a subtle physical distinction?)

          John Baez says:
          17 November, 2013 at 5:36 pm

          “… I completely agree that there’s something remarkable about how we normalize a(x), not by requiring that the integral of |a(x)|^2 equal 1, but by requiring that the integral of a(x) equal 1. . . .”

          I can think of two games, one called probability learning from the Handbook of Experimental Economics, the other a kind of iterated shell game, where in both cases these formulas hold:

          q = a a^*

          \ln(1/p) + \ln(1/(1-q)) = \ln(1/q) + \ln(1/(1-p))

          Where q = a a^* as in the Born rule, this equation is an equilibrium law constraining certain kinds of entropic force in mathematical games (as Professor Baez showed in a previous post).

          It is called Born’s Rule if p=q. But in these games it’s a law governing an equilibrium between two opposing entropic forces, a game whose result is the Rule. Neither force can destroy the other, and since the iterated games occur in finite situations that can be counted by finite integers, the observed percentage rates dither around the above law of probability learning.

          In a category where it’s not the case that q = a a^* , it would be unthinkable to play these games.

  11. amarashiki says:

    John… I have believed for some time that if duality is implemented at the level your analogies seem to point to, the “new” Planck constant should be related to the MAXimum action, not to the minimum action (which is what the first Planck constant is)…

  12. Scott says:

    Hi there,

    A couple questions concerning some points that are confusing me:

    1. As discussed elsewhere (http://en.wikipedia.org/wiki/Path_integral_formulation), the complex partition function Z defined in the context of ‘quantropy’ discussed here seems to be exactly the quantum propagator K. Given that this is true, the propagator is known exactly (see for example Hagen Kleinert’s book) for a variety of systems including the free particle, the harmonic oscillator, the hydrogen atom etc. Yet, the Z discussed here for the free particle doesn’t seem to correspond to the propagator for the free particle. Am I mistaken then that the complex Z discussed here is not in fact identical to the propagator?

    On a related note,

    2. The dimensionful constant \Delta x used in calculating Z seems to be identical to the normalization constant used for the Gaussian integrations in calculations of exact propagators. It turns out that the latter normalization constants, when generalized to higher-dimensional systems and/or to the relativistic particle on a curved manifold, are related to the Van Vleck determinant, which in turn relates to the semiclassical approximation and the Hamilton–Jacobi equation. So, am I also mistaken that the normalization constants discussed here are in fact equivalent?

    Thanks ahead of time for clarification of these points!
    Cheers,
    Scott

    • John Baez says:

      Hi! I’m grading homeworks, so I’ll answer your first question now and tackle the second one when I need another break.

      Am I mistaken then that the complex Z discussed here is not in fact identical to the propagator?

      The partition function we discuss is not the propagator.

      The partition function Z is, in principle, the integral of the exponentiated action

      \exp(-i S(q) / \hbar)

      over all paths q that start at some time t_0 and end at some time t_1.

      The propagator K(t_0, x_0; t_1, x_1) is the integral of the exponentiated action over all paths that start at position x_0 at time t_0 and end at position x_1 at time t_1.

      When I say ‘in principle’, it’s for this reason. In our paper we study the partition function of a free particle. This has space translation symmetry: if you push a path 2 meters to the right it has the same action. So, if you integrate over all paths that start at some time t_0 and end at some time t_1, you get infinity.

      To deal with this, we use a standard trick called ‘gauge-fixing’. This means that we only integrate over paths that start at some fixed position x_0 at time t_0.

      This makes the partition function look more like the propagator than it really should! The only difference now is that in the partition function we let the path end at whatever position it wants at time t_1, while in the propagator we make it end at the position x_1.

      If we were studying the harmonic oscillator, we would not need to do this gauge-fixing. Blake is working on that case now, and I hope we’ll talk about it soon.

      In any event, the partition function can be obtained from the propagator by doing a further integral: integrating over the allowed endpoints of the path.

      • Scott says:

        Hi there,
        Thanks for the clarification!

        So, the partition function is the propagator integrated over all future end points;

        Z \equiv \int_{ - \infty}^{+ \infty} dx_{1} K\left(t_{0},x_{0};t_{1},x_{1}\right)

        which explains why they appear so similar to functional integrals! Further, given the exact expressions known for K\left(t_{0},x_{0};t_{1},x_{1}\right) for various systems, it shouldn’t be difficult to calculate the complex partition function for those systems.

        In terms of \Delta x, the more I looked at things the more it seems (although I could be mistaken) that this is Feynman’s ‘proportionality factor’. In the folklore, Dirac’s original paper postulated that the short-time propagator ‘goes like’

        \exp{i {\cal L} dt/\hbar}

        Feynman and Herbert Jehle supposedly considered this in a bar one night and discussed what ‘goes like’ could mean. The next day Feynman realized, by mental gymnastics, that in order for this to agree with Schrödinger’s equation in the limit dt \rightarrow 0, ‘goes like’ really means ‘is proportional to’. The constant of proportionality is

        \displaystyle{ C\left(dt\right) = \sqrt{\frac{2 \pi \hbar dt}{-i m}} }

        in the case of the free particle in one dimension. Here, the path integral is written

        \displaystyle{ {\cal D}x\left(t\right) = \frac{1}{C} \frac{dx_{1}}{C}  \frac{dx_{2}}{C} \ldots \frac{dx_{N-1}}{C} }

        If I am not mistaken, \Delta x = C\left(dt\right) is required to make the short-time propagator satisfy the heat-kernel initial condition

        \lim_{t_{1} \to t_{0}} K(t_{0},x_{0};t_{1},x_{1}) = \delta \left(x_{0}-x_{1}\right)
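
        Indeed, a standard Fresnel integral shows these fit together: with this C\left(dt\right), the short-time kernel integrates to one,

        \displaystyle{ \frac{1}{C\left(dt\right)} \int_{-\infty}^{+\infty} \exp\left[ \frac{i m \left(x_{1}-x_{0}\right)^{2}}{2 \hbar \, dt} \right] \, dx_{1} = 1 }

        since \int e^{i m u^2/(2 \hbar dt)} \, du = \sqrt{2 \pi i \hbar \, dt/m} = C\left(dt\right), which is what the delta-function initial condition requires.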

        Cheers,

        Scott

        • John Baez says:

          Where did you read this story about Feynman and Jehle? And where did you read about Feynman’s ‘proportionality factor’? I’d like to read more about the history of these ideas, because it might contain some useful clues. Embarrassingly, I don’t know Feynman and Hibbs’s book on path integrals very well.

        • Blake Stacey says:

          Looking it up in Gleick’s biography, I found that Feynman tells the story in his Nobel lecture.

        • Scott says:

          Hi there,
          Sorry about the ‘dog’s breakfast’ I made with my LaTeX; thanks for making various corrections!

          Feynman discusses in his Nobel lecture (http://www.nobelprize.org/nobel_prizes/physics/laureates/1965/feynman-lecture.html) how Jehle pointed out Dirac’s paper concerning the role of the classical action when they were at a beer party at the Nassau Tavern in Princeton. This is also described in more detail in Gleick’s book ‘Genius’ and Silvan Schweber’s book ‘QED and the Men Who Made It: Dyson, Feynman, Schwinger and Tomonaga’ (and other papers: http://www.rpgroup.caltech.edu/courses/aph105c/2006/articles/Schweber1986.pdf).

          Cheers,
          Scott

        • Scott says:

          Discussion of the proportionality factor, as it figures in establishing the path integral as a solution of Schrödinger’s equation, can be found in Feynman and Hibbs’s or Kleinert’s books. Peskin and Schroeder’s ‘Intro to QFT’ and Zinn-Justin’s ‘QFT and Critical Phenomena’ also show the derivation.

          An online discussion (by Zinn-Justin) is also available at Scholarpedia (http://www.scholarpedia.org/article/Path_integral). In that article, equations (3), (18), and (19) include the normalization discussed above.

          Hope this is more helpful

    • John Baez says:

      So, in case anyone is too lazy to read Feynman’s Nobel lecture, here’s the story:

      […] when I was struggling with this problem, I went to a beer party in the Nassau Tavern in Princeton. There was a gentleman, newly arrived from Europe (Herbert Jehle) who came and sat next to me. Europeans are much more serious than we are in America because they think that a good place to discuss intellectual matters is a beer party. So, he sat by me and asked, “what are you doing” and so on, and I said, “I’m drinking beer.” Then I realized that he wanted to know what work I was doing and I told him I was struggling with this problem, and I simply turned to him and said, “listen, do you know any way of doing quantum mechanics, starting with action – where the action integral comes into the quantum mechanics?” “No”, he said, “but Dirac has a paper in which the Lagrangian, at least, comes into quantum mechanics. I will show it to you tomorrow.”

      Next day we went to the Princeton Library, they have little rooms on the side to discuss things, and he showed me this paper. What Dirac said was the following: There is in quantum mechanics a very important quantity which carries the wave function from one time to another, besides the differential equation but equivalent to it, a kind of a kernel, which we might call K(x', x), which carries the wave function \psi(x) known at time t, to the wave function \psi(x') at time t + \epsilon. Dirac points out that this function K was analogous to the quantity in classical mechanics that you would calculate if you took the exponential of i \epsilon, multiplied by the Lagrangian imagining that these two positions x,x' corresponded to times t and t + \epsilon. In other words,

      K(x',x) is analogous to \displaystyle{e^{i\epsilon L((x'-x)/\epsilon, x)/\hbar}}

      Professor Jehle showed me this, I read it, he explained it to me, and I said, “what does he mean, they are analogous; what does that mean, analogous? What is the use of that?” He said, “you Americans! You always want to find a use for everything!” I said that I thought that Dirac must mean that they were equal. “No”, he explained, “he doesn’t mean they are equal.” “Well”, I said, “let’s see what happens if we make them equal.”

      So I simply put them equal, taking the simplest example where the Lagrangian is

      \frac{1}{2} mv^2 - V(x)

      but soon found I had to put a constant of proportionality A in, suitably adjusted. When I substituted for K to get

      \psi(x', t+\epsilon) = A \displaystyle{\int \exp\left[\frac{i \epsilon}{\hbar} L\left((x'-x)/\epsilon, x\right)\right] \, \psi(x,t) \, dx}

      and just calculated things out by Taylor series expansion, out came the Schrödinger equation. So, I turned to Professor Jehle, not really understanding, and said, “well, you see Professor Dirac meant that they were proportional.”

      Professor Jehle’s eyes were bugging out – he had taken out a little notebook and was rapidly copying it down from the blackboard, and said, “no, no, this is an important discovery. You Americans are always trying to find out how something can be used. That’s a good way to discover things!” So, I thought I was finding out what Dirac meant, but, as a matter of fact, had made the discovery that what Dirac thought was analogous, was, in fact, equal. I had then, at least, the connection between the Lagrangian and quantum mechanics, but still with wave functions and infinitesimal times.

      So it’s a great story, and illuminating, but it doesn’t have much to say about the “constant of proportionality A, suitably adjusted.” That’s what I’m interested in now!

      • lee bloomquist says:

        Professor Baez, I should probably try to clarify my comment about the constraints on the kind of number to represent “possibility.” (No greater-than, less-than relation, A+B= B+A. AB = BA. AB/B=A.) There are lots of other models for “possibility.” But here’s why I like Jon Barwise’s model from his paper “Information and Impossibilities.”

        First here’s a link to a perhaps motivating quote: what a student of Max Born, Herbert S. Green, wrote about “possibility” in Matrix Mechanics.

        (https://docs.google.com/file/d/0B9LMgeIAqlIEUnhiTHFyNWRTU2c/edit?usp=docslist_api)

        The constraints are a way of trying to abstract from this a minimal structure.

        Next from Barwise: “The Inverse Relationship Principle: Whenever there is an increase in available information there is a corresponding decrease in possibilities, and vice versa.”

        Factoring away a possibility (AB/B=A) means fewer possibilities and, for example in a slit experiment, more information about where the particle will land. Multiplying (AB) means increasing the possibilities and therefore less information about where the particle will land.

        Green’s statement has for its semantic context the Heisenberg picture, where probability is constant and not a function of time, moreover in an experiment measured over a finite (not infinite or infinitesimal) situation. But when probability becomes a function of time, the time-varying probability could for example be a sine wave which over the finite situation averages to the very same constant probability. For each higher range of frequencies there could exist a theory with time-varying probability which averages to the same, constant probability of the finite situation. Does “possibility” also work like this? For higher ranges of frequencies, the theory for each range expresses time-varying possibilities which “average” to the same constant possibility as in the finite situation. If so, is there an equation using quantropy?

  13. domenico says:

    It is only a thought from a walk.
    If the path integral has an analogy with the partition function, then there is a number of trajectories (connecting space points an infinitesimal distance apart) for each four-dimensional volume.
    Using the free particle propagator, it seems that the “volume” of a single trajectory is
    \frac{i \hbar dt}{m}
    so the number of trajectories in a “volume” is:
    N=\frac{m dx}{\hbar dt}
    It seems that the Schrödinger equation, or the equivalent path integral, says that the four-dimensional volume is quantized (like an electron in an atom) when the space is small (there is a minimum volume for a single trajectory), and that the movement is like an intersection of flux tubes in space; this doesn’t give new information, because it contains only old physics.

    • John Baez says:

      For a quantum particle moving in 1d space we need a quantity \Delta x with dimensions of length to make the partition function dimensionless. In 3d space we’d need something with dimensions of length cubed (which we could get from something with dimensions of length). I don’t think we’d need something with dimensions of 4d volume.
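
      To make that bookkeeping concrete, here is a minimal numerical sketch (not from our paper) of the discretized free particle with two time steps. I’ve replaced the oscillatory 1/i\hbar by a real Euclidean constant \alpha so the integrals converge, and all the parameter values are arbitrary illustrative choices.

      ```python
      import numpy as np
      from scipy import integrate

      # Discretized free particle, starting point pinned at q0, two time steps.
      # A real 'alpha' stands in for 1/(i*hbar) so the Gaussian integrals converge.
      m, dt, dx, alpha, n = 1.3, 0.5, 0.7, 2.0, 2
      q0 = 0.0

      def boltzmann(q2, q1):
          S = m*((q1 - q0)**2 + (q2 - q1)**2)/(2*dt)  # discretized kinetic action
          return np.exp(-alpha*S)

      # Integrate over q1 and q2, dividing by dx once per integration variable,
      # i.e. using the measure Dq = dq1 dq2 / dx^2.
      Z_raw, _ = integrate.dblquad(boltzmann, -np.inf, np.inf,
                                   lambda q1: -np.inf, lambda q1: np.inf)
      Z = Z_raw / dx**n

      Z_closed = (2*np.pi*dt/(alpha*m))**(n/2) / dx**n  # two decoupled Gaussians
      print(Z, Z_closed)  # agree to quadrature accuracy
      ```

      The point is just that Z depends on the choice of the length \Delta x: Lebesgue measure alone gives a dimensionful, hence meaningless, answer.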

      But anyway, your thoughts are interesting, though a bit mysterious.

  14. westy31 says:

    Interesting stuff!
    Instead of looking at it from the statistical thermodynamics point of view, you could also look at it from the phenomenological thermodynamics point of view. There, you have the relation

    dS=dQ/T

    I am always intrigued by this formula: it holds exactly, yet you need to know nothing about quantum mechanics or statistical mechanics. In fact, I believe it is much older than the discovery of the atom.

    Translating this to quantum mechanics (S is now action instead of entropy):

    dQ=dS/(ih)

    But in contrast to T in thermodynamics, ih is a constant. So:

    Q = S/ih + constant.

    Hmm… is quantropy just a constant times action?

    Gerard

    • westy31,
      Or perhaps S/quantropy is uncertainty in the amount of action?

      That one brings us back to the starting point of Heisenberg.

    • John Baez says:

      From a statistical mechanics point of view you can see the relation between quantropy and action in our paper:

      \displaystyle{ Q = \frac{1}{i \hbar} \langle A \rangle + \ln Z }

      Here Q is the quantropy, \langle A \rangle is the expected value of the action, and Z is the partition function. The ‘free action’, the quantity analogous to free energy in the analogy we’re discussing, is -i \hbar \ln Z.

      So, I think Gerard’s clever argument that quantropy is proportional to (the expected value of the) action must be somewhat flawed. I’m not sure what the flaw is, but it could be that in the thermodynamic formula dQ = T dS we are considering how Q (which now means heat) and S (entropy) vary as we change the temperature, which is analogous to \hbar, while later in the argument Gerard treats \hbar as fixed. I think we need to treat \hbar as variable to get the analogy to work. This may be upsetting to some, but it’s mathematically valid… and it invalidates Gerard’s step where he treats \hbar as constant. Maybe this is why he’s getting

      \displaystyle{ Q = \frac{1}{i \hbar} \langle A \rangle }

      in my notation.

      Just a guess.
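
      By the way, the identity above is easy to check numerically for a finite set of histories. Here is a small sketch; the three action values are made up, and kept small so the complex logarithm stays on its principal branch.

      ```python
      import numpy as np

      hbar = 1.0
      A = np.array([0.1, 0.3, 0.7])        # made-up actions of three histories

      w = np.exp(-A/(1j*hbar))             # e^{-A/(i*hbar)}
      Z = w.sum()                          # partition function
      a = w/Z                              # complex 'amplitudes', summing to 1

      Q_direct = -(a*np.log(a)).sum()      # quantropy: -sum_i a_i ln a_i
      Q_formula = A.dot(a)/(1j*hbar) + np.log(Z)  # (1/i*hbar)<A> + ln Z
      print(Q_direct, Q_formula)           # equal up to rounding
      ```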

      • westy31 says:

        I can think of a thermodynamic system that has a constant temperature (T), just like the quantum analogy, where ih is constant: an evaporating liquid with a very low specific heat, but with a considerable heat of evaporation (H). If n were the number of molecules in the vapour phase, the energy would be nH, and the entropy would be nH/T_boil.

        This is a bit like a quantum system with a collection of spins, which can be UP or DOWN, with different energies. I wonder whether the total expected action of such a system could be proportional to the number of particles in the UP state.

        About the formula with the partition function (Z).
        Could it be that ln(Z) is also a constant? In that case our formulas would agree. ln(Z) could be interpreted as the vacuum quantropy. With the thermodynamic system, you could also add a zero point entropy. This term would be arbitrary in phenomenological thermodynamics, but in statistical mechanics it would be something like the number of ways of rearranging the vacuum.

        I do not have a good intuition of what it means to add action to a quantum system, but maybe this discussion is a good exercise for that!

        Gerard

        • John Baez says:

          Gerard wrote:

          Could it be that ln(Z) is also a constant?

          No, it’s not. I suggest reading our paper—it’s easy and fun! In thermodynamics, -T \ln(Z) is the free energy. In quantum mechanics, -i \hbar \ln(Z) is the ‘free action’.

        • westy31 says:

          Reply to John’s answer:

          I did sort of read your article (spent a couple of hours on it).

          It seems to me that Z is a sum over all microstates, weighted by exp(-E/kT) or exp(-S/ih) respectively. What puzzles me is whether this is ‘constant’. By constant I mean the same for a vacuum as for an excited field. I thought Z was a property of the system, regardless of its actual state, whereas \langle A \rangle depends on some way of adding action, for example by creating particles. This is really something I would like to understand.

          Gerard

        • John Baez says:

          Gerard wrote:

          It seems to me that Z is a sum over all microstates, weighted by exp(-E/kT) or exp(-S/ih) respectively.

          In statistical mechanics Z is a sum over all microstates, weighted by \exp(-E/kT).

          In quantum mechanics Z is a sum over all histories, weighted by \exp(-A/i\hbar). (I’m going to continue writing A for action and S for entropy, because otherwise things will get very confusing.)

          Microstates are ways a system can be at a given time. Histories are ways a system can be over all time, or a duration of time.

          What puzzles me is whether this is ‘constant’. With constant I mean the same for a vacuum as for an excited field.

          I’d say it’s constant—so constant that your last statement barely makes sense. Z is not defined ‘for a vacuum’ or ‘for an excited field’. Z is just a property of the system, depending only on kT (in statistical mechanics) or Planck’s constant \hbar (in quantum mechanics).

          We can change the definition of Z so it depends on other things, like the volume of a box containing the system. This is very useful. But our paper doesn’t talk about that.

        • westy31 says:

          I’d say it’s constant—so constant that your last statement barely makes sense. Z is not defined ‘for a vacuum’ or ‘for an excited field’. Z is just a property of the system […]

          Aha. Actually, in a previous post I suggested that it *was* constant. Perhaps we should make clearer what it is constant with respect to.
          I am trying to understand d(Quantropy) = d(Action)/ih.
          This would imply Quantropy = (Action)/ih + Constant.
          It would all make sense if the constant were ln(Z).

          ‘Constant’ in the thermodynamic case would mean it does not depend on temperature. For a constant-temperature system you have:
          d(Entropy) = d(Energy)/kT
          -> Entropy = Energy/kT + Constant
          I gave an example of a constant-temperature system two posts ago; this formula would make sense for such a system.
          The constant in this case is the entropy at zero temperature.

          So in the quantum case, ln(Z) is ‘constant’ in the same sense: it does not depend on the amount of action added to the vacuum.
          I assume that an excited system has a different action than the vacuum, and that you can go from the vacuum to the excited state by successive excitations.
          That is how I interpret
          d(Quantropy) = d(Action)/ih

          Gerard

    • Frederik De Roo says:

      Isn’t dQ = T dS only valid for a reversible process in closed systems? I haven’t thought this through at all, but perhaps the ‘closed’ assumption doesn’t hold in the quantropy analogy.

      • John Baez says:

        Frederik wrote:

        Isn’t dQ = T dS only valid for a reversible process in closed systems?

        That sounds true. In a general quantum system, I think the only way I can change the analogues of Q and S is by changing \hbar (which is analogous to 1/kT). I can check to see whether the relation analogous to dQ = T dS holds when I do this. The analogy to statistical mechanics is very good, so it should.

        In a more specific quantum system I would adjust more parameters, just as in a more specific thermodynamic system, like a gas in a piston, I can change the volume as well as the temperature.
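
        Here is the sort of check I mean, in a toy model with three histories. Varying \hbar by a small finite difference, d\langle A \rangle should equal i \hbar \, dQ, the analogue of dQ = T dS. (The action values are made up; this is a sketch, not a calculation from our paper.)

        ```python
        import numpy as np

        A = np.array([0.1, 0.4, 0.9])      # made-up, hbar-independent actions

        def stats(hbar):
            alpha = 1/(1j*hbar)
            w = np.exp(-alpha*A)
            Z = w.sum()
            a = w/Z                        # complex amplitudes summing to 1
            expA = A.dot(a)                # <A>
            Q = alpha*expA + np.log(Z)     # quantropy
            return Q, expA

        h = 1e-6
        Q_lo, A_lo = stats(1.0 - h)
        Q_hi, A_hi = stats(1.0 + h)
        print(A_hi - A_lo)                 # d<A>
        print(1j*1.0*(Q_hi - Q_lo))        # i*hbar*dQ: matches to O(h^2)
        ```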

  15. Jason says:

    The name is “Munkhammar” (the “r” is missing everywhere).

  16. Tilman Neumann says:

    Hi John,

    I have been working for some years, in private, on entropic priors, albeit from the Bayesian inference point of view. It’s one of my two favorite math subjects.

    So I remembered that Saul Youssef wrote a couple of papers on getting quantum physics from “exotic” probabilities, and he has a page of references related to it (including his own papers):
    http://physics.bu.edu/~youssef/quantum/quantum_refs.html

    If I understand it correctly, there is a difference from your approach: he does not use the maximum entropy principle, but directly applies the Cox axioms (frequently used to derive the maximum entropy principle) to real numbers, complex numbers and even quaternions, and he claims that he gets statistical mechanics, quantum theory and Dirac theory from them, respectively.

    By the way, have you not been tempted to work out the analogy with black hole thermodynamics? It’s fun; you get meaningful things like the area of the event horizon, and so on…

    Best wishes
    Till

  17. John Baez says:

    This Nobel prize winner has a fun piece on the future of physics:

    • Frank Wilczek, Physics in 100 years.

    The first part is about SO(10) grand unification, etc.—good stuff to know, but not new. Then he says something that reminds me of my paper with Blake on quantropy, and my claim that all minimum principles may boil down to some sort of generalization of Occam’s razor (minimizing algorithmic complexity for hypotheses):

    The “higher”, integrated forms of dynamics are more constrained than the lower, derived forms. Thus force fields derived from energy principles must be conservative, and dynamical equations that follow from action principles must be capable of being written in canonical, Hamiltonian form. Two of the Maxwell equations (the magnetic Gauss law and Faraday’s law of induction) become identities when we introduce potentials. Is it possible to go further in this direction?

    Leaving that interesting question open, let us consider more closely the foundational quantity in present-day physics: action. Our fundamental laws are most powerfully expressed using Feynman path integrals. Within that framework action appears directly and prominently, supplying the measure. And it is at the level of action that our local symmetry principles take a simple form, as statements of invariance.

    Given Planck’s constant as a unit, action becomes a purely numerical (dimensionless) quantity. The world-action is therefore a specific numerical quantity that governs the basic operation of the physical world. One would like for such a basic quantity to have profound independent meaning.

    Information is another dimensionless quantity that plays a large and increasing role in our description of the world. Many of the terms that arise naturally in discussions of information have a distinctly physical character. For example we commonly speak of density of information and flow of information. Going deeper, we find far-reaching analogies between information and (negative) entropy, as noted already in Shannon’s original work. Nowadays many discussions of the microphysical origin of entropy, and of foundations of statistical mechanics in general, start from discussions of information and ignorance. I think it is fair to say that there has been a unification fusing the physical quantity (negative) entropy and the conceptual quantity information.

    A strong formal connection between entropy and action arises through the Euclidean, imaginary-time path integral formulation of partition functions. Indeed, in that framework the expectation value of the Euclideanized action essentially is the entropy. The identification of entropy with Euclideanized action has been used, among other things, to motivate an algebraically simple (but deeply mysterious) “derivation” of black hole entropy.

    If one could motivate the imaginary-time path integral directly and insightfully, rather than indirectly through the apparatus of energy eigenvalues, Boltzmann factors, and so forth, then one would have progressed toward this general prediction of unification:

    Fundamental action principles, and thus the laws of physics, will be reinterpreted as statements about information and its transformations.

    • Garrett Lisi says:

      The relation of fundamental physics to information is certainly a tightly woven web. I also was interested by what Frank had to say in that article about Gravi-GUT unification:

      “Although general relativity is based on broadly the same principle of local symmetry that guides us to the other interactions, its implementation of that principle is significantly different. The near-equality of unified coupling strengths, as we just discussed, powerfully suggests that there should be a unified theory including all four forces, but that fact in itself does not tell us how to achieve it.
      String theory may offer a framework in which such four-force unification can be achieved. This is not the event, and I am not the person, to review the vast amount of work that has been done in that direction, so far with inconclusive results. It would be disappointing if string theory does not, in future years, make more direct contact with empirical reality. There are many possibilities, including some hint of additional spatial dimensions (i.e., a useful larger broken symmetry SO(1, N) → SO(1, 3))…”

      If he continues that line, he’ll see that he needs to use at least spin(11,3) to include gravity and the gauge fields acting on a fermion generation, or spin(12,4) if he wishes to have de Sitter spacetime, at which point he’ll be looking at e8(-24). I’ll have to ask him about that at some point. I do hope Frank will reconsider his strong favorable stance on supersymmetry once he has to pay up on a bet on it in a few months.

  18. Simon Burton says:

    I would quite like to see some of these quantropy calculations in the context of the harmonic oscillator. I feel I have some chance of understanding a calculation done in a countable dimensional vector space.

    Or is this left as an exercise for the reader?

    • John Baez says:

      All the Hilbert spaces used in physics are countable dimensional. But anyway, I agree that after the free particle, the harmonic oscillator is the next example to study!

      My student Blake Pollard was once planning to write a paper on the quantropy of the harmonic oscillator. He did a bunch of calculations, but he got distracted by other things (like trying to write a thesis). I doubt he’ll come back to that project. So, anyone who wants should give it a try and report back.
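
      As a warm-up, here is one way to organize the calculation, sketched under some simplifying assumptions (a Euclidean \alpha instead of 1/i\hbar, endpoints clamped to zero, and the measure dq/\Delta x at each time step). The discretized action is S = \frac{1}{2} q^T M q for a tridiagonal matrix M, so \ln Z reduces to a determinant.

      ```python
      import numpy as np

      # Sketch: discretized harmonic oscillator with Euclidean alpha,
      # endpoints clamped at zero, measure dq/dx per time step.
      m, k, dt, dx, alpha, n = 1.0, 1.0, 0.1, 1.0, 2.0, 50

      main = 2*m/dt + k*dt      # diagonal of M
      off = -m/dt               # nearest-neighbour coupling
      M = (np.diag(np.full(n, main))
           + np.diag(np.full(n-1, off), 1)
           + np.diag(np.full(n-1, off), -1))

      # Gaussian integral: int e^{-(alpha/2) q^T M q} d^n q
      #                  = (2*pi/alpha)^(n/2) / sqrt(det M)
      sign, logdet = np.linalg.slogdet(M)
      logZ = 0.5*n*np.log(2*np.pi/alpha) - 0.5*logdet - n*np.log(dx)

      # log Z depends on alpha only through the prefactor, so
      # <A> = -d(log Z)/d(alpha) = n/(2*alpha), and Q = alpha*<A> + log Z.
      expA = n/(2*alpha)
      Q = alpha*expA + logZ
      print(logZ, expA, Q)
      ```

      The interesting physics, and the hard work, starts when you rotate \alpha back to 1/i\hbar and take the continuum limit \Delta t \to 0.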

    • Back then I did some calculations on the harmonic oscillator. If I remember correctly, they got out of hand pretty quickly and were filed away with the comment “hard work or idea needed”.

      I would also like to see some progression here.

  19. Herb says:

    Roger Balian tried to mix equilibrium statistical mechanics with quantum states in his work “Justification of the Maximum Entropy Criterion in Quantum Mechanics”. https://link.springer.com/chapter/10.1007/978-94-015-7860-8_9

  20. Fulvio Bergadano says:

    Dear Mr. Baez and Mr. Pollard,
    I’ve read your article and found it profoundly interesting. I would be honored if you would answer a couple of questions that came to mind while I was studying it.
    There is one point I don’t really get: why do you choose to speak of histories instead of paths, and then feel entitled to use standard Lebesgue integrals instead of path integrals?
    I tried to reformulate the article using path integrals and noticed no serious additional complications.
    Moreover, I had the feeling that an important property of quantropy was missing if we want a real analogy with entropy: additivity. I’ve worked out a proof myself and would be glad to show it to you (a numerical illustration is sketched below).
    Thank you for your attention,
    Fulvio Bergadano
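
    Here is a quick numerical illustration of the additivity I have in mind, for two independent finite systems. It is only a sketch, with made-up action values kept small so the complex logarithms stay on their principal branches.

    ```python
    import numpy as np

    hbar = 1.0
    A1 = np.array([0.1, 0.3])            # made-up actions, system 1
    A2 = np.array([0.2, 0.5, 0.6])       # made-up actions, system 2

    def amplitudes(A):
        w = np.exp(-A/(1j*hbar))
        return w/w.sum()

    def quantropy(a):
        return -(a*np.log(a)).sum()

    a1, a2 = amplitudes(A1), amplitudes(A2)
    joint = np.outer(a1, a2).ravel()     # independent systems: amplitudes multiply
    print(quantropy(joint))              # quantropy of the combined system
    print(quantropy(a1) + quantropy(a2)) # Q1 + Q2: the same, up to rounding
    ```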
