This post is a bit different from the usual fare here. The relativity group at Louisiana State University runs an innovative series of talks, the International Loop Quantum Gravity Seminar, where participants worldwide listen and ask questions by telephone, and the talks are made available online. Great idea! Why fly the speaker’s body thousands of miles through the stratosphere from point A to point B when all you really want are their precious megabytes of wisdom?

This seminar is now starting up a blog, to go along with the talks. Jorge Pullin invited me to kick it off with a post. Following his instructions, I won’t say anything very technical. I’ll just provide an easy intro that anyone who likes physics can enjoy.

• Abhay Ashtekar, Quantum evaporation of 2-d black holes, 21 September 2010. PDF of the slides, and audio in either .wav (45MB) or .aif format (4MB).

Abhay Ashtekar has long been one of the leaders of loop quantum gravity. Einstein described gravity using a revolutionary theory called general relativity. In the mid-1980s, Ashtekar discovered a way to reformulate the equations of general relativity in a way that brings out their similarity to the equations describing the other forces of nature. Gravity has always been the odd man out, so this was very intriguing.

Shortly thereafter, Carlo Rovelli and Lee Smolin used this new formulation to tackle the problem of *quantizing* gravity: that is, combining general relativity with the insights from quantum mechanics. The result is called “loop quantum gravity” because in an early version it suggested that at tiny distance scales, the geometry of space was not smooth, but made of little knotted or intersecting loops.

Later work suggested a network-like structure, and still later *time* was brought into the game. The whole story is still very tentative and controversial, but it’s quite a fascinating business. Maybe this movie will give you a rough idea of the images that flicker through people’s minds when they think about this stuff:

… though personally I hear much cooler music in my head. Bach, or Eno — not these cheesy detective show guitar licks.

Now, one of the goals of any theory of quantum gravity must be to resolve certain puzzles that arise in naive attempts to blend general relativity and quantum mechanics. And one of the most famous is the so-called black hole information paradox. (I don’t think it’s actually a “paradox”, but that’s what people usually call it.)

The problem began when Hawking showed, by a theoretical calculation, that black holes aren’t exactly black. In fact he showed how to compute the temperature of a black hole, and found that it’s not zero. Anything whose temperature is above absolute zero will radiate light: visible light if it’s hot enough, infrared if it’s cooler, microwaves if it’s even cooler, and so on. So, black holes must ‘glow’ slightly.

*Very* slightly. The black holes that astronomers have actually detected, formed by collapsing stars, would have a ridiculously low temperature: for example, about 0.00000002 degrees Kelvin for a black hole that’s 3 times the mass of our Sun. So, nobody has actually seen the radiation from a black hole.

But Hawking’s calculations say that the smaller a black hole is, the hotter it is! Its temperature is inversely proportional to its mass. So, in principle, if we wait long enough, and keep stuff from falling into our black hole, it will ‘evaporate’. In other words: it will gradually radiate away energy, and thus lose mass (since E = mc^{2}), and thus get hotter, and thus radiate more energy, and so on, in a vicious feedback loop. In the end, it will disappear in a big blast of gamma rays!
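The numbers above are easy to check yourself. Here is a quick back-of-the-envelope calculation in Python using the standard semiclassical formula for the Hawking temperature, T = ħc³/(8πGMk_B), with CODATA values for the constants (the formula and figures are textbook material, not taken from the talk):

```python
import math

# Physical constants in SI units
hbar = 1.054571817e-34   # reduced Planck constant, J s
c = 2.99792458e8         # speed of light, m/s
G = 6.67430e-11          # Newton's constant, m^3 / (kg s^2)
k_B = 1.380649e-23       # Boltzmann's constant, J/K
M_sun = 1.989e30         # solar mass, kg

def hawking_temperature(M):
    """Hawking temperature (kelvin) of a black hole of mass M (kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

# A black hole of 3 solar masses:
T = hawking_temperature(3 * M_sun)
print(f"{T:.1e} K")   # about 2.1e-8 K, matching the figure quoted above

# Temperature is inversely proportional to mass,
# so halving the mass doubles the temperature:
assert abs(hawking_temperature(M_sun / 2) - 2 * hawking_temperature(M_sun)) < 1e-12
```

The inverse proportionality in the last line is exactly the “vicious feedback loop” described below: losing mass makes the hole hotter, which makes it radiate faster.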

At least that’s what Hawking’s calculations say. These calculations were not based on a full-fledged theory of quantum gravity, so they’re probably just *approximately* correct. This may be the way out of the “black hole information paradox”.

But what’s the paradox? Patience — I’m gradually leading up to it. First, you need to know that in all the usual physical processes we see, information is conserved. If you’ve studied physics you’ve probably heard that various important quantities don’t change with time: they’re “conserved”. You’ve probably heard about conservation of energy, and momentum, and angular momentum and electric charge. But conservation of information is equally fundamental, or perhaps even more so: it says that if you know everything about what’s going on now, you can figure out everything about what’s going on later — and vice versa, too!
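Here is a toy illustration of what conservation of information means (my own sketch, not from the post): if the dynamics is an invertible map on states, then the present state determines both the future and the past.

```python
# A toy deterministic dynamics on six states, given by a bijection.
# Invertibility is exactly what conservation of information requires:
# knowing the state now determines the state later -- and vice versa.
step = {0: 3, 1: 4, 2: 5, 3: 1, 4: 2, 5: 0}
assert sorted(step.values()) == list(range(6))  # it really is a bijection

# Because step is a bijection, it has an inverse: we can run time backwards.
inverse = {new: old for old, new in step.items()}

x = 2
later = step[step[x]]                 # evolve two steps into the future...
assert inverse[inverse[later]] == x   # ...and recover the starting state exactly
```

If `step` sent two different states to the same state, the inverse would not exist and information about the past would be destroyed; that is precisely what Hawking’s original calculation seemed to say black holes do.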

Actually, if you’ve studied physics a little but not very much, you may find my remarks puzzling. If so, don’t feel bad! Conservation of information is not usually mentioned in the courses that introduce the other conservation laws. The concept of information is fundamental to thermodynamics, but it appears in disguised form: “entropy”. There’s a minus sign lurking around here: while information is a precise measure of how much you *do* know, entropy measures how much you *don’t* know. And to add to the confusion, the first thing they tell you about entropy is that it’s *not* conserved. Indeed, the Second Law of Thermodynamics says that the entropy of a closed system tends to increase!

But after a few years of hard thinking and heated late-night arguments with your physics pals, it starts to make sense. Entropy as considered in thermodynamics is a measure of how much information you *lack* about a system when you only know certain things about it — things that are easily measured. For example, if you have a box of gas, you might measure its volume and energy. You’d still be ignorant about the details of all the molecules inside. The amount of information you lack is the entropy of the gas.
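In information-theoretic terms (this is the standard textbook identification, not something specific to this post), the entropy of the gas is the Shannon entropy of your probability distribution over the microstates you cannot see:

```python
import math

def missing_information(probs):
    """Shannon entropy in bits: how much you *don't* know about the microstate."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

# If the gas could be in any of 8 microstates, all equally likely,
# you are missing 3 bits of information:
assert abs(missing_information([1/8] * 8) - 3.0) < 1e-12

# If you somehow knew the exact microstate, you'd be missing nothing:
assert missing_information([1.0]) == 0.0
```

The minus sign mentioned above is visible in the formula: information you gain shows up as entropy you lose.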

And as time passes, information tends to pass from easily measured forms to less easily measured forms, so entropy increases. But the information is still there in principle — it’s just hard to access. So information is conserved.

There’s a lot more to say here. For example: why does information tend to pass from easily measured forms to less easily measured forms, instead of the reverse? Does thermodynamics require a fundamental difference between future and past — a so-called “arrow of time”? Alas, I have to sidestep this question, because I’m supposed to be telling you about the black hole information paradox.

So: back to black holes!

Suppose you drop an encyclopedia into a black hole. The information in the encyclopedia seems to be gone. At the very least, it’s extremely hard to access! So, people say the entropy has increased. But could the information still be there in hidden form?

Hawking’s original calculations suggested the answer is *no*. Why? Because they said that as the black hole radiates and shrinks away, the radiation it emits contains no information about the encyclopedia you threw in — or at least, no information except a few basic things like its energy, momentum, angular momentum and electric charge. So no matter how clever you are, you can’t examine this radiation and use it to reconstruct the encyclopedia article on, say, Aardvarks. This information is *lost to the world forever!*

So what’s the black hole information paradox? Well, it’s not exactly a “paradox”. The problem is just that in every other process known to physicists, information is conserved — so it seems very unpalatable to allow any exception to this rule. But if you try to figure out a way to *save* information conservation in the case of black holes, it’s tough. Tough enough, in fact, to have bothered many smart physicists for decades.

Indeed, Stephen Hawking and the physicist John Preskill made a famous bet about this puzzle in 1997. Hawking bet that information wasn’t conserved; Preskill bet it was. In fact, they bet an encyclopedia!

In 2004 Hawking conceded the bet to Preskill, as shown above. It happened at a conference in Dublin — I was there and blogged about it. Hawking conceded because he did some new calculations suggesting that information *can* gradually leak out of the black hole, thanks to the radiation. In other words: if you throw an encyclopedia in a black hole, a sufficiently clever physicist *can indeed* reconstruct the article on Aardvarks by carefully examining the radiation from the black hole. It would be incredibly hard, since the information would be highly scrambled. But it could be done in principle.

Unfortunately, Hawking’s calculation is very hand-wavy at certain crucial steps — in fact, more hand-wavy than certain calculations that had already been done with the help of string theory (or more precisely, the AdS-CFT conjecture). And, neither approach makes it easy to see in detail *how* the information comes out in the radiation.

This finally brings us to Ashtekar’s talk. Despite what you might guess from my warmup, his talk was *not* about loop quantum gravity. Certainly everyone working on loop quantum gravity would love to see this theory resolve the black hole information paradox. I’m sure Ashtekar is aiming in that direction. But his talk was about a warmup problem, a “toy model” involving black holes in 2d spacetime instead of our real-world 4-dimensional spacetime.

The advantage of 2d spacetime is that the math becomes a lot easier there. There’s been a lot of work on black holes in 2d spacetime, and Ashtekar is presenting some new work on an existing model, the Callan-Giddings-Harvey-Strominger black hole. This new work is a mixture of analytical and numerical calculations done over the last 2 years by Ashtekar together with Frans Pretorius, Fethi Ramazanoglu, Victor Taveras and Madhavan Varadarajan.

I will not attempt to explain this work in detail! The main point is this: **all the information that goes into the black hole leaks back out in the form of radiation as the black hole evaporates**.

But the talk also covers many other interesting issues. For example, the final stages of black hole evaporation display interesting properties that are *independent of the details of its initial state*. Physicists call this sort of phenomenon “universality”.

Furthermore, when the black hole finally shrinks to nothing, it sends out a pulse of gravitational radiation, but *not enough to destroy the universe*. It may seem very peculiar to imagine that the death spasms of a black hole *could* destroy the universe, but in fact some approximate “semiclassical” calculations of Hawking and Stewart suggested just that! They found that in 2d spacetime, the dying black hole emitted a pulse of infinite spacetime curvature — dubbed a “thunderbolt” — which made it impossible to continue spacetime beyond that point. But they suggested that a more precise calculation, taking quantum gravity fully into account, would eliminate this effect. And this seems to be the case.

For more, listen to Ashtekar’s talk while looking at the PDF file of his slides!

John: “conservation of information is equally fundamental, or perhaps even more so: it says that if you know everything about what’s going on now, you can figure out everything about what’s going on later — and vice versa, too! ”

Information is not conserved. All we can figure out are probabilities; we cannot figure out precise details.

For example, hook up some radioactive element with a Geiger counter in such a way that when decay is detected, the time from the start of the experiment in microseconds is recorded in computer memory. This is a trivial matter to accomplish.

Now, if information is really conserved, please explain where the information contained in the memory after running this experiment comes from – try evolving the setup back in time and show where each bit originated. After all, if information is truly conserved, then those bits were already encoded in the state of reality back then, so the question is: where?

Unless decay is actually deterministic, information cannot be conserved; it is being created the moment the decay takes place.

This is a controversial issue, because it’s related to the question of how we should understand the so-called “collapse of the wavefunction”, if indeed such a thing does happen.

It looks as if nature randomly chooses a specific time for the Geiger counter to click. If this were true, time evolution would not be unitary and information would indeed not be conserved.

But I believe in fact that the universe goes into a superposition of states with all possible times for the click. I believe time evolution is unitary and information is conserved.

This may seem implausible. There are, however, several reasons that it’s actually plausible:

1) It explains the observed data equally well (though people love to argue about this).

2) Alternative theories, in which the universe evolves via some combination of unitary time evolution and nonunitary wave function collapse, are inelegant and have problems of their own. (For example, objective collapse theories, of which the most carefully worked out is the GRW theory.)

3) In situations where a measurement can be ‘undone’ by being ‘completely forgotten’, the apparent wavefunction collapse goes away. (For example, in the delayed choice quantum eraser experiment.)

I don’t expect to convince you, just to let you know my point of view.

Thanks for your reply, it’s appreciated but as I am not a fan of MWI myself, as you expected, I am not convinced.

Here’s why:

MWI is the most preposterous speculation ever seriously considered – inventing an infinite number of invisible parallel universes just to make an explanation of one physical phenomenon slightly more mathematically appealing. Where is the logic here? The other interpretations don’t require such excess and still explain the results just as well; to me, even admitting that we simply don’t know what happens would be so much better.

I consider MWI the most brutal violation of Ockham’s razor one can imagine – “entities must not be multiplied beyond necessity.” How is all the fantastic and ill-defined excess of MWI necessary when everything can be explained using the ensemble or Bohmian or other interpretations?

MWI is like a regression to prescientific times when everything was explained by invoking an invisible universe inhabited by ghosts, spirits and gods.

And besides such an interpretation of conservation of information turns it into a completely meaningless statement of faith.

Meaningless because you can make anything you want conserved (or not conserved) by invoking parallel invisible universes which carry a part of it.

For example your distance from the equator is conserved because when you move away from it in this universe you move toward it by exactly the same amount in another parallel one.

And a statement of faith because just like the “equator law” above the MWI variant of conservation of information cannot be refuted by science – there is no way to prove that alternative universes do exist or to test what happens in them so who knows if the information does indeed add up.

So, yeah, I am not convinced ;)

I think we’ve both had these conversations before. So, I’ll be brief.

First, the conservation of information is built into the mathematical fabric of quantum mechanics: if pure states evolve into pure states via a unitary operator $U$, the entropy of any mixed state (or more precisely, density matrix) $\rho$ is conserved:

$$S(\rho') = S(\rho)$$

where the new state is given in terms of the old one by

$$\rho' = U \rho U^\dagger$$

This is just a calculation:

$$S(\rho') = -\mathrm{tr}(\rho' \ln \rho') = -\mathrm{tr}\big(U \rho U^\dagger \, U (\ln \rho) U^\dagger\big) = -\mathrm{tr}(\rho \ln \rho) = S(\rho)$$

using $\ln(U \rho U^\dagger) = U (\ln \rho) U^\dagger$ and the cyclic property of the trace.

So, regardless of their beliefs concerning the *interpretation* of quantum mechanics, physicists routinely use this fact. Hawking’s original calculation shocked the world because it seemed to show that in the evaporation of black holes, pure states did *not* evolve to pure states via a unitary operator: instead, they turned into mixed states!

But his original calculation was approximate: it treated gravity classically while treating the radiation it evaporated into using quantum mechanics. Hawking later redid the calculation in a ‘fully quantum’ way, and got a different answer: this time, he found that black hole evaporation is described by a unitary operator. That’s why he conceded that bet with John Preskill.
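The invariance of entropy under unitary evolution is easy to check numerically. Here is a quick sketch with NumPy — a random density matrix and a random unitary, nothing specific to black holes:

```python
import numpy as np

rng = np.random.default_rng(42)

# Build a random 4x4 density matrix: Hermitian, positive, trace one.
A = rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4))
rho = A @ A.conj().T
rho /= np.trace(rho).real

# A random unitary, from the QR decomposition of a complex Gaussian matrix.
U, _ = np.linalg.qr(rng.normal(size=(4, 4)) + 1j * rng.normal(size=(4, 4)))

def von_neumann_entropy(r):
    """S(rho) = -tr(rho ln rho), computed from the eigenvalues of rho."""
    p = np.linalg.eigvalsh(r)
    p = p[p > 1e-12]           # drop numerically-zero eigenvalues
    return float(-(p * np.log(p)).sum())

# Unitary time evolution: rho -> U rho U^dagger leaves the entropy unchanged.
rho_later = U @ rho @ U.conj().T
assert abs(von_neumann_entropy(rho) - von_neumann_entropy(rho_later)) < 1e-9
```

This works because a unitary conjugation permutes nothing but the eigenbasis: the eigenvalues of the density matrix, and hence the entropy, are untouched.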

Unfortunately, even by the standards of quantum field theory, his new calculation is only heuristic — it would make any mathematician’s eyeballs fall out. So, people are still very interested to do this calculation in more careful ways — and that’s what Ashtekar and his collaborators have done in the 2d case.

As for the interpretations of quantum mechanics, I think it’s safe to say that none pleases everyone. You mentioned the “many worlds interpretation”. I don’t believe in that interpretation. But I also don’t believe in any interpretation that features “wave function collapse” as an objective physical process. Anyone who believes that the wave function collapses has a duty to say *when, where, and how* — otherwise they haven’t proposed a precise theory, and in physics of this sort, it’s better to be precise and provably wrong than vague and irrefutable.

The GRW theory has the great merit of being a precise and (in principle) testable theory of objective wave function collapse. I don’t believe it, but I respect it.

No, that’s not tinnitus. It’s John Baez ringing up 10 points on his own Crackpot Index. Because this sentence:

“Anyone who believes that the wave function collapses has a duty to say when, where, and how — otherwise they haven’t proposed a precise theory, and in physics of this sort, it’s better to be precise and provably wrong than vague and irrefutable.”

pretty clearly activates rule No. 17. (Unless, of course, the scare quotes around “mechanism” mean you have to use that actual word.)

When the sun shines on our world, you could have a photon which was pretty well described by Maxwell and had a wave function ranging across a big chunk of the inner solar system quite abruptly turning into an excitement of an electron orbital. This abrupt transition is the experimental fact, and an orthodoxy built on unitary evolution alternating with wave function collapse has worked well enough to dominate quantum physics for about three quarters of a century. Sorry, but collapse happens whether you want to call it that or not.

Information loss has been understood since the work of Rolf Landauer, almost fifty years ago. I suppose it could be that the poster child of irreversible change, the black hole, is the only exception to information loss. But I’m not believing it.

And as for the Argument From The Purity Of Black Holes, isn’t it just as paradoxical for impure stuff like stars and dust to become Pure as it is for Pure Things like Black Holes to evolve to mixed states? Cuz, isn’t unitary evolution time reversible?

I’ll just clarify a few things.

1) When I said a theory of wavefunction collapse needs to describe “when, where and how” the wavefunction collapses, I did *not* mean that the theory needs to propose a “mechanism”. We don’t need to posit little gears and springs, or invisible elves, that do the collapsing. I meant that the theory should provide a specific formula for computing the probability that the wavefunction collapses under any given conditions, and the probability that it collapses to a given new wavefunction.

The GRW theory *does* say “when, where and how” — for a quick description, read this.

But if — for example — we merely say “the wavefunction collapses when an observer makes a measurement”, we are *not* saying “when, where and how”. This “theory” is too vague without a further specification of what counts as a “measurement” or an “observer”.

So, I’m saying we’re free to believe in wavefunction collapse, but then our job of science has just begun: the next step is to formulate an adequate *theory* of wavefunction collapse.

2) I’m not saying the black hole works differently from everything else. I’m saying it works the *same* as everything else. Hawking’s original calculation said it worked differently from everything else; later he decided it was the same. Unitary time evolution is standard in quantum mechanics; Hawking made waves by arguing that it failed in the case of black holes, but upon conceding that bet with Preskill, he agreed that even black holes evolve in a unitary way.

If you think something I’m saying sounds funny, it’s almost surely because you adopt a different interpretation of quantum mechanics than I do — not because I believe black holes behave in some unusual way.

3) Though you speak of “the Argument From The Purity Of Black Holes”, I have never attempted to conclude anything about the interpretation of quantum mechanics from anything about black holes.

It seems that Hawking’s pure quantum black hole is identified with Wheeler’s hairless black hole. The hairless hole got that way by emitting gravity waves and whatnot, and I suppose that the impurity went with it. So much for *my* impurity argument.

So we have a hole that’s either pure and doesn’t radiate or it’s impure and it does. Either way information is lost. Hawking does say something about correlations within the radiation and something about tunnelling, but I don’t see any motivation for either of these except a desire to preserve information.

Information loss is real physics. There are formulas with Boltzmann’s constant and everything. It’s how the second law of thermodynamics manifests in quantum information theory. It’s the irreversible change of information. (Logically, irreversible info change could have gone two ways: forgetting, or info creation. But when we have forgetting, info creation is not irreversible.)

BTW, I’m getting quite annoyed with physics prima donnas contradicting one of the fundamental tenets of quantum info, a field in which I intend to become a dilettante.

I favor John Cramer’s Transactional Interpretation. TI involves taking the physics formalism seriously and not taking on unnecessary intellectual baggage like consciousness and multiple universes. However, it does require radical nonlocality. It can’t be helped. We live in a nonlocal universe. (Please note that I don’t agree with some of Cramer’s more recent statements.)

In TI, wavefunction collapse involves waves moving forward and backward in time and partially cancelling in a process called a transaction. (Did I mention the radical nonlocality?)

Finally, rationalize it how you want. I got you good on rule 17.

Michael said:

What is a “pure” black hole? And what is an “impure” black hole?

The textbook explanation of the second law of thermodynamics, based on Boltzmann’s ideas, does not imply any irreversible change of information.

QFT in curved spacetime overstretches both frameworks a little bit (QFT and GR), but I think it is safe to say that Hawking did not *tailor* his calculations in order to disprove himself.

“Irreversible” in the given context means that the microscopic systems do not evolve according to an evolution equation with a unitary time translation operator. It does not mean that there is no way to “undo” what has happened :-) You can have information loss and information gain in your theory *and* irreversibility, that’s not a contradiction :-)

Do you have a reference of some kind?

Um, denouncing special relativity raises item 18 on the index.

Hi Tim van Beek

Consider my sentence about the black hole radiating to read:

“So we have a hole that’s either in a stationary state and doesn’t radiate or it’s in a mixed state and it does.”

For your questions with regard to the thermodynamics of information loss, see the Stanford Encyclopedia of Philosophy:

http://plato.stanford.edu/entries/information-entropy/

You can skip over the equations and still get the gist.

With regard to the compatibility of information gain and information loss, I recall that time with the blonde-who-only-rode-the-bus-until-her-car-was-fixed, when I realized I had gained information about her favorite song but lost information about her phone number.

About Hawking’s paper –

I do not intend any aspersion to the integrity of Stephen Hawking. If I gave that impression I regret it and I apologize. I used the word “motivate” in the sense of bolstering a scientific argument.

I do think that Hawking fell into the Feynman trap of fooling himself. Even the best of us are subject to that.

Finally, with regard to questioning relativity, Einstein’s theory is alive and kicking. It continues to govern the motion of matter, of energy, and of information. Nonlocality manifests itself in correlations of quantum noise in measurements at separate locations which can only be explained by supposing that settings of the measuring instrument at one location affect the measurement results at the other. In some circumstances this can indeed be seen as defying relativity. (Also, while there is good evidence for nonlocality, I acknowledge that I did not present any in my post.)

The link on “this” is mal-formed.

Thanks, Blake — I fixed the link.

The wavefunction collapse and Many Worlds fuss etc. reminds me of one of my favorite puzzlements: It seems nobody seriously considers Rovelli’s Relational Quantum Mechanics. (Also no mention at the nLab – but it smells like category stuff to me.)

(Sorry for off-topic. But I needed to get this off my chest: One day I want to seriously learn QM – without the fuss.)

Hi Tim van Beek

Concerning your questions about my deleted comment.

Regarding Hawking’s paper:

Change my sentence about the black hole radiating to read:

“So we have a hole that’s either in a stationary state and doesn’t radiate or it’s in a mixed state and it does.”

The only thing thermal radiation tells us is the black hole temperature, so either way information is lost.

Hawking talks about tunneling as a means for info to get out of the black hole, but nothing in the paper justifies that.

For your questions with regard to the thermodynamics of information loss, see the Stanford Encyclopedia of Philosophy:

http://plato.stanford.edu/entries/information-entropy/

You can skip over the equations and still get the gist.

With regard to the compatibility of information gain and information loss, I recall that time with the blonde-who-only-rode-the-bus-until-her-car-was-fixed, when I realized I had gained information about her favorite song but lost information I had had about her phone number.

Finally, with regard to questioning relativity and Crackpot Index rule 18, Einstein’s theory is alive and kicking. It continues to govern the propagation of matter, of energy, and of information. Nonlocality manifests itself in correlations of quantum noise in measurements at separate locations, which can only be explained by supposing that settings of the measuring instrument at one location affect the measurement results at the other. In some circumstances this can indeed be seen as defying relativity. (Also, while there is good evidence for nonlocality, I acknowledge that I did not present any in my post.)

Florifulgator said:

This is about an interpretation of QM; the truly hot topics in theoretical physics are about theories that could eventually lead to predictions or descriptions of observational phenomena :-)

Learning QM means one first has to learn how to calculate some toy examples like the spectrum of hydrogen. There is no use in trying to learn about different interpretations of the theory before you can calculate anything.

Now, before you can start with the real stuff, you have to understand Hamiltonian mechanics and Hilbert spaces – which may require some effort, depending on your background.

But there are a lot of good introductions to quantum computing that explain systems of finite dimension, and in order to understand this, all you need is linear algebra and complex numbers :-) Maybe that’s a good place to start.

Michael said:

Was there a glitch? Now both your comments are displayed here :-)

A question to our host: Should we curtail the discussion and take it elsewhere, because it is off-topic?

Sorry, I still don’t get it, both “stationary state” and “mixed state” have some narrow precise technical meaning to me in this context:

1. “stationary” means “invariant under time translation”. A light bulb that’s radiating, connected to an infinite electricity reservoir and does not suffer any aging processes is in a stationary state.

2. “mixed state” is the opposite of a “pure state” in QM.

Now, if you believe in Hawking radiation, then *every* black hole radiates. If you don’t, then *no* black hole radiates.

QFT unifies both special relativity and the kind of “nonlocality” of quantum correlations that you mention, therefore there is at least one established framework that tells us that there is no contradiction :-) The trap that catches the unwary from time to time is that “nonlocal” is used in two very different connotations. In a certain sense QM is nonlocal, but it is not acausal, and only that would contradict special relativity.

I see, but Hawking’s reasoning seems to be that the information “destroyed” when an object “crosses” the event horizon can be reconstructed, in principle, from the radiation of the black hole. This is a result that does not contradict any statement made in the encyclopedia, or does it? (Which one?)

Tim wrote:

Michael thought I was going to delete his comment and replace it with a new one. But since you two have developed a conversation about the comment to be deleted, I can’t delete it now without making that conversation even more confusing to readers than it already is.

1. I’m very happy to have you discuss the black hole entropy puzzle.

2. I will put up with you discussing the interpretation of quantum mechanics.

3. I’d rather you wouldn’t discuss why I didn’t delete a comment in time to prevent precisely such a discussion.

JB said:

1. Ok, I’m mostly interested in AQFT on curved spacetimes and started to write about it a while back, when I had more time, on the nLab: AQFT on curved spacetimes. Remarkably, Robert Wald presents a derivation of Hawking radiation in his textbook mentioned there. I did not have a look at Hawking’s latest calculations, but I doubt that their translation to the AQFT formulation can be done, given the state of things. It would be nice to know the opinion of the AQFT community about this…

2. My interest in interpretations of QM is quite limited, but I’m happy to join a discussion if I can contribute. I’d guess that there are more appropriate places to discuss this anyway. BTW: It’s a basic anchorman technique to curtail an off-topic discussion by telling people “you will have the opportunity to discuss this over there, later, but meanwhile let’s get back to our main topic.”

3. Sorry.

Dear AI,

“if information is truly conserved then those bits where already encoded in the state of reality back then so the question is where?”

I think that you have a point, but at the same time I think that John is right too, when he said that the evolution is unitary. Here is how I think the quantum world works:

The evolution is unitary. The solution to Schrödinger’s equation is determined by some initial conditions. The difference is that the initial conditions are not completely established “at the beginning of time”. At each moment of time we have a bunch of solutions. (This bunch of solutions is not necessarily MWI, only a set of possible solutions.) Each new measurement adds new information, reducing the set of solutions. This means that indeed the information recorded in the computer’s memory pre-exists. The initial condition is “delayed” until the measurement is performed. Think of this as a “delayed initial conditions interpretation”.

One can view the initial conditions as distributed at various times and places, not necessarily at the beginning of the universe. Perhaps the highest amount of these conditions was set back then, but not all. Quantum observations fill the gaps in the initial conditions. Think of the EPR experiment: two spin measurements at different places. They act as if they establish the spin back in time, up to the moment when the two electrons interacted. At that point, the interaction conserves the spin, and this imposes the Bell correlations.

It may seem strange to think that the initial conditions are delayed, but I think this captures the essence of quantum mechanics while still allowing unitary evolution.

Dear John,

I think that the problem of information in black hole singularities has its origin (and solution) at the purely classical level, in general relativity.

If we try to evolve a classical field through a semi-Riemannian singularity, we can’t, because at the singularity the Cauchy surface becomes singular. My opinion is that it becomes singular because we make it so by an implicit assumption: that the topology is generated by the metric. The Penrose-Hawking theorems only show that the distances between some points in the spacelike hypersurfaces become zero. The idea that this means the points become identified is the ingredient that made so many people say “general relativity predicts its own breakdown at the singularity”. But this is not necessarily true. Instead of Einstein’s equations we can use the Palatini, Plebanski, or Ashtekar formalism, which allows a degenerate metric. By doing so, the Penrose-Hawking theorems no longer force us to admit a breakdown of the spacetime structure. Of course, the spacetime continues to be geodesically incomplete, but the geodesics and the classical fields can be continued through the regions where the metric is degenerate.

Best regards,

Cristi

Somebody (in Japan or Korea?) had a paper in the mid-1980s that uniquely evolved quantum fields (I think SUSY) through metric singularities using Gribov’s ideas. I have long since lost the reference. But it is amusing to explore the consequences of having an essential singularity sitting on the identity.

A version of the above blog post has now appeared on the International Loop Quantum Gravity Seminar blog… in English, but also en Español!

Nice post! It’s pretty nice that it is available in more than one language. Do you know if the post was fed into a translator or if someone took the time to translate it?

Jorge Pullin translated it himself. I never tried to tell you much about loop quantum gravity, since by the time you came along I was working on different things. But anyway: Jorge Pullin is one of the founding fathers of this subject, and he’s from Argentina, and he studied with Rodolfo Gambini, a Uruguayan who was one of the first to take seriously the idea of studying connections using their holonomies around loops, and formulating gauge theories in these terms. There’s also a community of Mexican physicists doing loop quantum gravity. So, I guess Jorge wants to translate some of these loop quantum gravity blog entries into Spanish. Alas, even though my dad’s first language was Spanish, I don’t speak it.

Automatic translation doesn’t work very well for scientific text unless you have a scientist come along afterward and fix the mistakes. Here’s how my blog entry looks translated into Spanish and then back into English via Babelfish:

Okay until near the end: in particular, “black hole” turned into “jail”.

(Of course, automatically translating a text *twice* squares the problem, so I’m not really being fair to Babelfish.)

Ugh, of *course* automatic translation does not work, computers are stupid! But translation by experts does not work either :-)

Here is the story: there is a famous short poem by Goethe that was translated into Japanese in 1902, translated into French in 1911, and back into German (people assumed that it was an original Japanese poem).

There is no similarity with the original at all :-)

Reference

Poetry has been defined as “that which is lost in translation”.

Of course, simple everyday sentences like “I want to buy a fish” are easier to translate than poetry. And while the average *passive* vocabulary of a German speaker has been estimated at 8 000 words, Johann Wolfgang von Goethe’s *active* vocabulary has been estimated at 80 000 words.

But in this case the poem in question was really quite simple :-)

(I’d like to know how “peaks” became “jade pavilion” and “treetops” became “snowed-over cherry trees in the moonlight” :-)

Sometimes I think the general theory of relativity ‘looks like’ the theory of electrodynamics.

The vector potential is like the Levi-Civita connection.

But I don’t know how to treat the Levi-Civita connection as a more fundamental quantity.

The metric is more fundamental in Gravity theory.
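The analogy can be spelled out explicitly (standard formulas, not from the comment itself): the electromagnetic field strength is built from the vector potential in the same way the Riemann curvature is built from the Levi-Civita connection, except that the connection carries extra indices and its terms do not commute:

```latex
F_{\mu\nu} = \partial_\mu A_\nu - \partial_\nu A_\mu,
\qquad
R^\rho{}_{\sigma\mu\nu} = \partial_\mu \Gamma^\rho_{\nu\sigma} - \partial_\nu \Gamma^\rho_{\mu\sigma}
  + \Gamma^\rho_{\mu\lambda}\Gamma^\lambda_{\nu\sigma}
  - \Gamma^\rho_{\nu\lambda}\Gamma^\lambda_{\mu\sigma}.
```

The asymmetry the comment points out is that in electromagnetism $A$ is fundamental and $F$ derived, while in general relativity the metric $g_{\mu\nu}$ is fundamental and $\Gamma^\rho_{\mu\nu}$ is derived from it.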

You might like the Palatini approach to gravity, which treats the connection and the ‘frame field’ as independent, equally fundamental quantities. The frame field determines the metric. You can read about the Palatini approach in Wald’s book, or in mine. Ashtekar’s approach, which led to loop quantum gravity, is a further development along these lines. I also explain all that in my book.
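As a rough sketch of what this looks like (my notation, and up to normalization conventions): in the Palatini formulation one varies a coframe field $e^I$ and a Lorentz connection $\omega^{IJ}$ independently, and the action is built from the connection’s curvature much as in a gauge theory:

```latex
S[e,\omega] = \frac{1}{16\pi G}\int \epsilon_{IJKL}\; e^I \wedge e^J \wedge F^{KL}[\omega],
\qquad
F^{IJ}[\omega] = d\omega^{IJ} + \omega^I{}_K \wedge \omega^{KJ}.
```

The metric is recovered from the frame field as $g_{\mu\nu} = \eta_{IJ}\, e^I_\mu e^J_\nu$; varying $\omega$ forces it to be the Levi-Civita connection when $e$ is invertible, and varying $e$ then gives Einstein’s equations. Note that the action involves no inverse metric, which is why it still makes sense when the metric is degenerate, as in Cristi’s comment above.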

For a quick *free* introduction to all this stuff and how it gets used in loop quantum gravity, you could try this. But it’s a bit mathematical.

Thank you very much!

I stopped reading your book a few months ago, because the exercises in the chapter on fibre bundles looked difficult to me; though the results seem to be true, I just couldn’t prove them rigorously.

I had read some of the chapter on semi-Riemannian geometry, but I stopped.

You shouldn’t stop reading my book merely because you can’t do all the exercises. If I had only read books where I could do all the exercises, I would have read very few books. Usually I’m satisfied if the results “seem true”.

There are many different levels of understanding, and reading is not a linear process. I usually skip a few later chapters, re-read chapters after some time, write down questions that I cannot answer (yet), try some exercises, fail, find another book about the same topic, start reading that, etc.

When I was a student I was under the impression that one should learn the material as it is presented in the classroom, understand everything, pass the exams, and then turn to the next topic. But once you stop attending classes after graduation, you’ll see that this is just one of many ways, and maybe not the best one for everyone.

I assume JB refers to “Gauge Fields, Knots, and Gravity”, listed as no. 3 under books here.

The problem with this kind of question is that some physics faculties don’t have professors that know enough about both QFT and GR to explain these kinds of connections.

Yes, that’s the one.

Another nice (but more mathematically sophisticated) introduction to the Palatini formalism is the thesis of my student Derek Wise, starting around page 155. His approach uses a number of tricks I learned after writing my book, together with some tricks of his own.

Yes. I mean the book “Gauge Fields, Knots, and Gravity”.

It’s nice.

Hi Tim van Beek,

It appears that my abuse of jargon is confusing – primarily to myself.

With regard to the black hole that either radiates or it doesn’t, pretty obviously, I should have said it was in a pure state or a mixed state. So we have two possibilities:

1 – The hole is in a pure state.

Then, as John Baez instructs us, it will not radiate. So any information about the past of the hole is locked away. The information is effectively lost.

2 – The hole is in a mixed state.

It radiates as a black body. All we learn from the blackbody radiation is the temperature of the hole and, according to Hawking and Bekenstein, also the mass. All other information is lost to us.

Regarding special relativity and nonlocality, I suppose I should have used the phrase “relativistic causality”. I can readily Lorentz transform spacelike lines, which would represent faster-than-light motion, and so in that sense relativity is compatible with nonlocality. I get causality within this framework by connecting events by timelike lines in their historical order. Relativistic causality rules matter, energy, and information. But the correlated noise is effectively acausal.

Re: the Stanford Encyclopedia of Philosophy. It tells us that information loss is associated with entropy gain. This is compatible with the enormous entropy that a black hole possesses. If the information of the matter-energy within the hole could be restored, what would happen to the entropy created by its loss? Removing this entropy would be a violation of thermodynamic law.

Regarding information coming from within an event horizon: this would violate causality, because inside the horizon, inwards is a timelike direction. But that doesn’t matter, since the horizon takes literally forever to form, unless you take the viewpoint of someone falling in.