Algorithmic Thermodynamics (Part 1)

12 October, 2010

My grad student Mike Stay and I put a new paper on the arXiv:

• John Baez and Mike Stay, Algorithmic thermodynamics.

Mike has a master’s degree in computer science, and he’s working for Google on a project called Caja. This is a system for letting people write programs in JavaScript while protecting the end users from dirty tricks the programmers might have tried. With me, he’s mainly working on the intersection of computer science and category theory, trying to bring 2-categories into the game. But Mike also knows a lot about algorithmic information theory, a subject with fascinating connections to thermodynamics. So, it was natural for us to work on that too.

Let me just tell you a little about what we did.



Around 1948, the electrical engineer Claude Shannon came up with a mathematical theory of information. Here is a quote that gives a flavor of his thoughts:

The whole point of a message is that it should contain something new.

Say you have a source of information, for example a mysterious radio transmission from an extraterrestrial civilization. Suppose every day you get a signal sort of like this:

00101101011111101010101011011101110

How much information are you getting? If the message always looks like this, presumably not much:

00000000000000000000000000000000000

Shannon came up with a precise formula for the information. But beware: it’s not really a formula for the information of a particular message. It’s a formula for the average information of a message chosen from some probability distribution. It’s this:

- \sum_i p_i \;\mathrm{log}(p_i)

where we sum over all possible messages, and p_i is the probability of the ith message.

So, for example, suppose you keep getting the same message. Then every message has probability 0 except for one message, which has probability 1. Each term in the sum vanishes: either p_i is zero (and by convention 0 \, \mathrm{log}(0) = 0), or p_i equals 1 and \mathrm{log}(p_i) is zero. So the information is zero.

That seems vaguely plausible in the example where every day we get this message:

000000000000000000000000000000000000000

It may seem less plausible if every day we get this message:

011010100010100010100010000010100000100

It looks like the aliens are trying to tell us which numbers are prime! 1 is not prime, 2 is, 3 is, 4 is not, 5 is, and so on. Aren’t we getting some information?

Maybe so: this could be considered a defect of Shannon information. On the other hand, you might be willing to admit that if we keep getting the same message every day, the second time we get it we’re not getting any new information. Or, you might not be willing to admit this — it’s actually a subtle issue, and I don’t feel like arguing.

But at the very least, you’ll probably admit that the second time you get the same message, you get less new information than the first time. The third time you get even less, and so on. So it’s quite believable that in the long run, the average amount of new information per message approaches 0 in this case. For Shannon, information means new information.

On the other hand, suppose we are absolutely unable to predict each new bit we get from the aliens. Suppose our ability to predict the next bit is no better than our ability to predict whether a fair coin comes up heads or tails. Then Shannon’s formula says we are getting the same amount of new information with every bit: namely, \mathrm{log}(2).

If we take the logarithm using base 2 here, we get 1 — so we say we’re getting one bit of information. If we take it using base e, as physicists prefer, we get \ln(2) — and we say we’re getting \ln(2) nats of information. One bit equals \ln(2) nats.
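If you like to see formulas in action, here is a minimal Python sketch of Shannon’s formula (the distributions are made up purely for illustration). It checks that a source whose message is certain carries no information, while a fair coin carries \ln(2) nats, that is, one bit, per message:

import math

def shannon_information(probs, base=math.e):
    """Shannon's formula -sum_i p_i log(p_i), with the convention 0 log 0 = 0."""
    return -sum(p * math.log(p, base) for p in probs if p > 0)

# One message is certain: no new information per message.
print(shannon_information([1.0, 0.0]))           # 0.0 nats

# A perfectly unpredictable bit, like a fair coin:
print(shannon_information([0.5, 0.5]))           # 0.693... = ln(2) nats
print(shannon_information([0.5, 0.5], base=2))   # 1.0 bit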

There’s a long road from these reflections to a full justification of the specific formula for Shannon information! To begin marching down that road you can read his original paper, A mathematical theory of communication.

Anyway: it soon became clear that Shannon’s formula was almost the same as the usual formula for “entropy”, which goes back to Josiah Willard Gibbs. Gibbs was actually the first engineer to get a Ph.D. in the United States, back in 1863… but he’s mainly famous as a mathematician, physicist and chemist.



Entropy is a measure of disorder. Suppose we have a box of stuff — solid, liquid, gas, whatever. There are many possible states this stuff can be in: for example, the atoms can be in different positions, and have different velocities. Suppose we only know the probability that the stuff is in any one of the allowed states. If the ith state is occupied with probability p_i, Gibbs said the entropy of our box of stuff is

- k \sum_i p_i \; \mathrm{log}(p_i)

Here k is a constant called Boltzmann’s constant.

There’s a wonderful story here, which I don’t have time to tell in detail. The way I wrote Shannon’s formula for information and Gibbs’ formula for entropy, you’d think only a moron would fail to instantly grasp that they’re basically the same. But historically, it took some work.

The appearance of Boltzmann’s constant hints at why. It shows up because people had ideas about entropy, and the closely related concept of temperature, long before they realized the full mathematical meaning of these concepts! So entropy traditionally came in units of “joules/kelvin”, and physicists had a visceral understanding of it. But dividing by Boltzmann’s constant, we can translate that notion of entropy into the modern, more abstract way of thinking of entropy as information!

Henceforth I’ll work in units where k = 1, as modern mathematical physicists do, and treat entropy and information as the same concept.

Closely related to information and entropy is a third concept, which I will call Kolmogorov complexity, though it was developed by many people — including Martin-Löf, Solomonoff, Kolmogorov, Levin, and Chaitin — and it has many names, including descriptive complexity, Kolmogorov-Chaitin complexity, algorithmic entropy, and program-size complexity. You may be intimidated by all these names, but you shouldn’t be: when lots of people keep rediscovering something and giving it new names, it’s usually because this thing is pathetically simple.

So, what is Kolmogorov complexity? It’s a way of measuring the information in a single message rather than a probability distribution of messages. And it’s just the length of the shortest computer program that prints out this message.

I suppose some clarification may be needed here:

1) I’m only talking about programs without any input, which either compute for a while, print out a message, and halt… or compute forever and never halt.

2) Of course the length of the shortest program that prints out the desired message depends on the programming language. But there are theorems saying it doesn’t depend “too much” on the language. So don’t worry about it: just pick your favorite language and stick with that.

If you think about it, Kolmogorov complexity is a really nice concept. The Kolmogorov complexity of a string of a million 0’s is a lot less than a million: you can write a short program that says “print a million 0’s”. But the Kolmogorov complexity of a highly unpredictable string of a million 0’s and 1’s is about a million: you basically need to include that string in your program and then say “print this string”. Those are the two extremes, but in general the complexity will be somewhere in between.
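Kolmogorov complexity itself turns out to be uncomputable, but a general-purpose compressor gives a crude upper bound with the same flavor: the compressed file, together with a decompressor, is a program that prints the string. Here is a rough Python sketch along those lines, using zlib purely for illustration:

import random
import zlib

# A million predictable bits (all 0) versus a million coin-flip bits,
# each packed 8 bits per byte.
predictable = (0).to_bytes(125_000, "big")
random.seed(0)
unpredictable = random.getrandbits(1_000_000).to_bytes(125_000, "big")

# Compressed length is a crude stand-in for "length of the shortest
# program that prints this string".
print(len(zlib.compress(predictable, 9)))     # a few hundred bytes: far less than a million bits
print(len(zlib.compress(unpredictable, 9)))   # about 125,000 bytes: essentially a million bits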

Various people — the bigshots listed above, and others too — soon realized that Kolmogorov complexity is deeply connected to Shannon information. They have similar properties, but they’re also directly related. It’s another great story, and I urge you to learn it. For that, I recommend:

• Ming Li and Paul Vitanyi, An Introduction to Kolmogorov Complexity and Its Applications, Springer, Berlin, 2008.

To understand the relation a bit better, Mike and I started thinking about probability measures on the set of programs. People had already thought about this — but we thought about it a bit more the way physicists do.

Physicists like to talk about something called a Gibbs ensemble. Suppose we have a set X and a function

F: X \to \mathbb{R}

Then the Gibbs ensemble is the probability distribution on X that maximizes entropy subject to the condition that F has some specified average, or “expected value”.

So, to find the Gibbs ensemble, we need to find a probability distribution p : X \to \mathbb{R} that maximizes

- \sum_{i \in X} p_i \; \mathrm{log}(p_i)

subject to the constraint that

\sum_{i \in X} p_i \; F(i) = \langle F \rangle

where \langle F \rangle is some number, the expected value of F. Finding the probability distribution that does the job is an exercise in Lagrange multipliers. I won’t do it. There’s a nice formula for the answer, but we won’t need it here.
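For the record, the Lagrange multiplier calculation gives the familiar Boltzmann form p_i \propto e^{-\beta F(i)}, with \beta tuned so the constraint holds. Here is a small numerical sketch, with a made-up finite set X and observable F, that finds \beta by bisection and checks that the mean comes out right:

import numpy as np

F = np.array([0.0, 1.0, 2.0, 3.0, 4.0])   # a made-up observable on a 5-element set X
target_mean = 1.5                          # the prescribed expected value <F>

def gibbs(beta):
    """The maximum-entropy distribution: p_i proportional to exp(-beta * F_i)."""
    weights = np.exp(-beta * F)
    return weights / weights.sum()

def mean_F(beta):
    return gibbs(beta) @ F

# <F> decreases as beta increases, so plain bisection finds the right multiplier.
lo, hi = -50.0, 50.0
for _ in range(200):
    mid = 0.5 * (lo + hi)
    lo, hi = (mid, hi) if mean_F(mid) > target_mean else (lo, mid)

beta = 0.5 * (lo + hi)
p = gibbs(beta)
print(beta, p, p @ F)   # the Gibbs ensemble with <F> = 1.5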

What you really need to know is something more important: why Gibbs invented the Gibbs ensemble! He invented it to solve some puzzles that sound completely impossible at first.

For example, suppose you have a box of stuff and you don’t know which state it’s in. Suppose you only know the expected value of its energy, say \langle E \rangle. What’s the probability that this stuff is in its ith state?

This may sound like an insoluble puzzle: how can we possibly know? But Gibbs proposed an answer! He said, basically: find the probability distribution p that maximizes entropy subject to the constraint that the mean value of energy is \langle E \rangle. Then the answer is p_i.

In other words: use the Gibbs ensemble.

Now let’s come back to Kolmogorov complexity.

Imagine randomly picking a program out of a hat. What’s the probability that you pick a certain program? Again, this sounds like an impossible puzzle. But you can answer it if you know the expected value of the length of this program! Then you just use the Gibbs ensemble.
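To make that hat a bit more concrete, here is a toy calculation. I will cheat and treat “programs” as arbitrary binary strings rather than programs for a genuine prefix-free machine. The Gibbs ensemble then gives each string of length n a probability proportional to e^{-\beta n}, and since there are 2^n strings of each length, the normalizing sum \sum_n 2^n e^{-\beta n} is a geometric series that converges only when \beta > \mathrm{log}(2). (That is a small taste of the convergence issues mentioned in the abstract below.)

import math

def expected_length(beta, n_max=10_000):
    """Mean string length under the toy Gibbs ensemble where a string of
    length n has probability proportional to exp(-beta * n), counting all
    2^n binary strings of length n.  Only makes sense for beta > log 2."""
    if beta <= math.log(2):
        raise ValueError("partition function diverges for beta <= log(2)")
    weights = [math.exp(n * (math.log(2) - beta)) for n in range(n_max)]
    Z = sum(weights)                                    # the partition function
    return sum(n * w for n, w in enumerate(weights)) / Z

print(expected_length(0.7))   # beta just above log(2) ~ 0.693: long programs are likely
print(expected_length(2.0))   # beta well above log(2): short programs dominate

Knowing the expected length pins down \beta, just as knowing \langle E \rangle pins down the temperature in Gibbs’ original setting.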

What does this have to do with Kolmogorov complexity? Here I’ll be a bit fuzzy, because the details are in our paper, and I want you to read that.

Suppose we start out with the Gibbs ensemble I just mentioned. In other words, we have a program in a hat, but all you know is the expected value of its length.

But now suppose I tell you the message that this program prints out. Now you know more. How much more information do you have now? The Kolmogorov complexity of the message — that’s how much!

(Well, at least this is correct up to some error bounded by a constant.)

The main fuzzy thing in what I just said is “how much more information do you have?” You see, I’ve explained information, but I haven’t explained “information gain”. Information is a quantity you compute from one probability distribution. Information gain is a quantity you compute from two. More precisely,

\sum_i p_i \; \mathrm{log}(p_i/q_i)

is the information you gain when you thought the probability distribution was q, but then someone comes along and tells you it’s p.
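This quantity is also known as the relative entropy, or Kullback-Leibler divergence, of p relative to q. Here it is in code, with made-up distributions purely for illustration; note that it is zero when the news p matches your prior q, and positive otherwise:

import math

def information_gain(p, q):
    """Relative entropy  sum_i p_i log(p_i / q_i), in nats: the information
    gained on learning the distribution is p when your prior was q."""
    return sum(pi * math.log(pi / qi) for pi, qi in zip(p, q) if pi > 0)

prior = [0.5, 0.5]                           # you expected a fair coin
print(information_gain([0.5, 0.5], prior))   # 0.0: no surprise, nothing gained
print(information_gain([0.9, 0.1], prior))   # about 0.37 nats
print(information_gain([1.0, 0.0], prior))   # ln(2) ~ 0.69 nats: one full bit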

In fact, we argue that information gain is more fundamental than information. This is a Bayesian idea: q is your “prior”, the probability distribution you thought was true, and the information you gain upon hearing that the distribution is really p should be defined relative to this prior. When people think they’re talking about information without any prior, they are really using a prior that’s so bland that they don’t notice it’s there: a so-called “uninformative prior”.

But I digress. To help you remember the story so far, let me repeat myself. Up to a bounded error, the Kolmogorov complexity of a message is the information gained when you start out only knowing the expected length of a program, and then learn which message the program prints out.

But this is just the beginning of the story. We’ve seen how Kolmogorov complexity is related to Gibbs ensembles. Now that we have Gibbs ensembles running around, we can go ahead and do thermodynamics! We can talk about quantities analogous to temperature, pressure, and so on… and all the usual thermodynamic relations hold! We can even take the ideas about steam engines and apply them to programs!

But for that, please read our paper. Here’s the abstract, which says what we really do:

Algorithmic entropy can be seen as a special case of entropy as studied in statistical mechanics. This viewpoint allows us to apply many techniques developed for use in thermodynamics to the subject of algorithmic information theory. In particular, suppose we fix a universal prefix-free Turing machine and let X be the set of programs that halt for this machine. Then we can regard X as a set of ‘microstates’, and treat any function on X as an ‘observable’. For any collection of observables, we can study the Gibbs ensemble that maximizes entropy subject to constraints on expected values of these observables. We illustrate this by taking the log runtime, length, and output of a program as observables analogous to the energy E, volume V and number of molecules N in a container of gas. The conjugate variables of these observables allow us to define quantities which we call the ‘algorithmic temperature’ T, ‘algorithmic pressure’ P and ‘algorithmic potential’ μ, since they are analogous to the temperature, pressure and chemical potential. We derive an analogue of the fundamental thermodynamic relation dE = T dS − P dV + μ dN, and use it to study thermodynamic cycles analogous to those for heat engines. We also investigate the values of T, P and μ for which the partition function converges. At some points on the boundary of this domain of convergence, the partition function becomes uncomputable. Indeed, at these points the partition function itself has nontrivial algorithmic entropy.


Ashtekar on Black Hole Evaporation

29 September, 2010

This post is a bit different from the usual fare here. The relativity group at Louisiana State University runs an innovative series of talks, the International Loop Quantum Gravity Seminar, where participants worldwide listen and ask questions by telephone, and the talks are made available online. Great idea! Why fly the speaker’s body thousands of miles through the stratosphere from point A to point B when all you really want are their precious megabytes of wisdom?

This seminar is now starting up a blog, to go along with the talks. Jorge Pullin invited me to kick it off with a post. Following his instructions, I won’t say anything very technical. I’ll just provide an easy intro that anyone who likes physics can enjoy.

• Abhay Ashtekar, Quantum evaporation of 2-d black holes, 21 September 2010. PDF of the slides, and audio in either .wav (45MB) or .aif format (4MB).

Abhay Ashtekar has long been one of the leaders of loop quantum gravity. Einstein described gravity using a revolutionary theory called general relativity. In the mid-1980s, Ashtekar discovered a way to reformulate the equations of general relativity in a way that brings out their similarity to the equations describing the other forces of nature. Gravity has always been the odd man out, so this was very intriguing.

Shortly thereafter, Carlo Rovelli and Lee Smolin used this new formulation to tackle the problem of quantizing gravity: that is, combining general relativity with the insights from quantum mechanics. The result is called “loop quantum gravity” because in an early version it suggested that at tiny distance scales, the geometry of space was not smooth, but made of little knotted or intersecting loops.

Later work suggested a network-like structure, and still later time was brought into the game. The whole story is still very tentative and controversial, but it’s quite a fascinating business. Maybe this movie will give you a rough idea of the images that flicker through people’s minds when they think about this stuff:

… though personally I hear much cooler music in my head. Bach, or Eno — not these cheesy detective show guitar licks.

Now, one of the goals of any theory of quantum gravity must be to resolve certain puzzles that arise in naive attempts to blend general relativity and quantum mechanics. And one of the most famous is the so-called black hole information paradox. (I don’t think it’s actually a “paradox”, but that’s what people usually call it.)

The problem began when Hawking showed, by a theoretical calculation, that black holes aren’t exactly black. In fact he showed how to compute the temperature of a black hole, and found that it’s not zero. Anything whose temperature is above absolute zero will radiate light: visible light if it’s hot enough, infrared if it’s cooler, microwaves if it’s even cooler, and so on. So, black holes must ‘glow’ slightly.

Very slightly. The black holes that astronomers have actually detected, formed by collapsing stars, would have a ridiculously low temperature: for example, about 0.00000002 degrees Kelvin for a black hole that’s 3 times the mass of our Sun. So, nobody has actually seen the radiation from a black hole.

But Hawking’s calculations say that the smaller a black hole is, the hotter it is! Its temperature is inversely proportional to its mass. So, in principle, if we wait long enough, and keep stuff from falling into our black hole, it will ‘evaporate’. In other words: it will gradually radiate away energy, and thus lose mass (since E = mc^2), and thus get hotter, and thus radiate more energy, and so on, in a vicious feedback loop. In the end, it will disappear in a big blast of gamma rays!
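If you want to check that figure, and see the inverse proportionality explicitly, the standard formula for the Hawking temperature of a non-rotating, uncharged black hole of mass M is T = \hbar c^3 / (8 \pi G M k). A quick sketch:

import math

hbar  = 1.054571817e-34   # J s
c     = 2.99792458e8      # m/s
G     = 6.67430e-11       # m^3 / (kg s^2)
k_B   = 1.380649e-23      # J/K
M_sun = 1.989e30          # kg

def hawking_temperature(M):
    """Hawking temperature (in kelvin) of a Schwarzschild black hole of mass M (in kg)."""
    return hbar * c**3 / (8 * math.pi * G * M * k_B)

print(hawking_temperature(3 * M_sun))    # about 2e-8 K, the figure quoted above
print(hawking_temperature(30 * M_sun))   # ten times the mass, one tenth the temperature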

At least that’s what Hawking’s calculations say. These calculations were not based on a full-fledged theory of quantum gravity, so they’re probably just approximately correct. This may be the way out of the “black hole information paradox”.

But what’s the paradox? Patience — I’m gradually leading up to it. First, you need to know that in all the usual physical processes we see, information is conserved. If you’ve studied physics you’ve probably heard that various important quantities don’t change with time: they’re “conserved”. You’ve probably heard about conservation of energy, and momentum, and angular momentum and electric charge. But conservation of information is equally fundamental, or perhaps even more so: it says that if you know everything about what’s going on now, you can figure out everything about what’s going on later — and vice versa, too!

Actually, if you’ve studied physics a little but not very much, you may find my remarks puzzling. If so, don’t feel bad! Conservation of information is not usually mentioned in the courses that introduce the other conservation laws. The concept of information is fundamental to thermodynamics, but it appears in disguised form: “entropy”. There’s a minus sign lurking around here: while information is a precise measure of how much you do know, entropy measures how much you don’t know. And to add to the confusion, the first thing they tell you about entropy is that it’s not conserved. Indeed, the Second Law of Thermodynamics says that the entropy of a closed system tends to increase!

But after a few years of hard thinking and heated late-night arguments with your physics pals, it starts to make sense. Entropy as considered in thermodynamics is a measure of how much information you lack about a system when you only know certain things about it — things that are easily measured. For example, if you have a box of gas, you might measure its volume and energy. You’d still be ignorant about the details of all the molecules inside. The amount of information you lack is the entropy of the gas.

And as time passes, information tends to pass from easily measured forms to less easily measured forms, so entropy increases. But the information is still there in principle — it’s just hard to access. So information is conserved.

There’s a lot more to say here. For example: why does information tend to pass from easily measured forms to less easily measured forms, instead of the reverse? Does thermodynamics require a fundamental difference between future and past — a so-called “arrow of time”? Alas, I have to sidestep this question, because I’m supposed to be telling you about the black hole information paradox.

So: back to black holes!

Suppose you drop an encyclopedia into a black hole. The information in the encyclopedia seems to be gone. At the very least, it’s extremely hard to access! So, people say the entropy has increased. But could the information still be there in hidden form?

Hawking’s original calculations suggested the answer is no. Why? Because they said that as the black hole radiates and shrinks away, the radiation it emits contains no information about the encyclopedia you threw in — or at least, no information except a few basic things like its energy, momentum, angular momentum and electric charge. So no matter how clever you are, you can’t examine this radiation and use it to reconstruct the encyclopedia article on, say, Aardvarks. This information is lost to the world forever!

So what’s the black hole information paradox? Well, it’s not exactly a “paradox”. The problem is just that in every other process known to physicists, information is conserved — so it seems very unpalatable to allow any exception to this rule. But if you try to figure out a way to save information conservation in the case of black holes, it’s tough. Tough enough, in fact, to have bothered many smart physicists for decades.

Indeed, Stephen Hawking and the physicist John Preskill made a famous bet about this puzzle in 1997. Hawking bet that information wasn’t conserved; Preskill bet it was. In fact, they bet an encyclopedia!

In 2004 Hawking conceded the bet to Preskill. It happened at a conference in Dublin — I was there and blogged about it. Hawking conceded because he did some new calculations suggesting that information can gradually leak out of the black hole, thanks to the radiation. In other words: if you throw an encyclopedia in a black hole, a sufficiently clever physicist can indeed reconstruct the article on Aardvarks by carefully examining the radiation from the black hole. It would be incredibly hard, since the information would be highly scrambled. But it could be done in principle.

Unfortunately, Hawking’s calculation is very hand-wavy at certain crucial steps — in fact, more hand-wavy than certain calculations that had already been done with the help of string theory (or more precisely, the AdS-CFT conjecture). And, neither approach makes it easy to see in detail how the information comes out in the radiation.

This finally brings us to Ashtekar’s talk. Despite what you might guess from my warmup, his talk was not about loop quantum gravity. Certainly everyone working on loop quantum gravity would love to see this theory resolve the black hole information paradox. I’m sure Ashtekar is aiming in that direction. But his talk was about a warmup problem, a “toy model” involving black holes in 2d spacetime instead of our real-world 4-dimensional spacetime.

The advantage of 2d spacetime is that the math becomes a lot easier there. There’s been a lot of work on black holes in 2d spacetime, and Ashtekar is presenting some new work on an existing model, the Callan-Giddings-Harvey-Strominger black hole. This new work is a mixture of analytical and numerical calculations done over the last 2 years by Ashtekar together with Frans Pretorius, Fethi Ramazanoglu, Victor Taveras and Madhavan Varadarajan.

I will not attempt to explain this work in detail! The main point is this: all the information that goes into the black hole leaks back out in the form of radiation as the black hole evaporates.

But the talk also covers many other interesting issues. For example, the final stages of black hole evaporation display interesting properties that are independent of the details of its initial state. Physicists call this sort of phenomenon “universality”.

Furthermore, when the black hole finally shrinks to nothing, it sends out a pulse of gravitational radiation, but not enough to destroy the universe. It may seem very peculiar to imagine that the death spasms of a black hole could destroy the universe, but in fact some approximate “semiclassical” calculations of Hawking and Stewart suggested just that! They found that in 2d spacetime, the dying black hole emitted a pulse of infinite spacetime curvature — dubbed a “thunderbolt” — which made it impossible to continue spacetime beyond that point. But they suggested that a more precise calculation, taking quantum gravity fully into account, would eliminate this effect. And this seems to be the case.

For more, listen to Ashtekar’s talk while looking at the PDF file of his slides!


Jacob Biamonte on Tensor Networks

29 September, 2010

One of the unexpected pleasures of starting work at the Centre for Quantum Technologies was realizing that the math I learned in loop quantum gravity and category theory can also be used in quantum computation and condensed matter physics!

In loop quantum gravity I learned a lot about “spin networks”. When I sailed up to the abstract heights of category theory, I discovered that these were a special case of “string diagrams”. And now, going back down to earth, I see they have a special case called “tensor networks”.

Jacob Biamonte is a postdoc who splits his time between Oxford and the CQT, and he’s just finished a paper on tensor networks:

• Jacob Biamonte, Algebra and coalgebra on categorical tensor network states.

He’s eager to get your comments on this paper. So, if you feel you should be able to understand this paper but have trouble with something, or you spot a mistake, he’d like to hear from you.

Heck, he’d also like to hear from you if you love the paper! But as usual, the most helpful feedback is not just a pat on the back, but a suggestion for how to make things better.

Let me just get you warmed up here…

There’s a general theory of string diagrams, which are graphs with edges labelled by objects and vertices labelled by morphisms in some symmetric monoidal category with duals. Any string diagram determines a morphism in that category. If you hang out here, there’s a good chance you already know and love this stuff. I’ll assume you do.

To get examples, we can take any compact Lie group, and look at its category of finite-dimensional unitary representations. Then the string diagrams are called spin networks. When the group is SU(2), we get back the original spin networks considered by Penrose. These are important in loop quantum gravity.

But when the group is the trivial group, it turns out that our string diagrams are important in quantum computation and condensed matter physics! And now they go by a different name: tensor networks!

The idea, in a nutshell, is that you can draw a string diagram with all its edges labelled by some Hilbert space H, and vertices labelled by linear operators. Suppose you have a diagram with no edges coming in and n edges coming out. Then this diagram describes a linear operator

\psi : \mathbb{C} \to H^{\otimes n}

or in other words, a state

\psi \in H^{\otimes n}

This is called a tensor network state. Similarly, if we have a diagram with n edges coming in and m coming out, it describes a linear operator

T : H^{\otimes n} \to H^{\otimes m}

And the nice thing is, we can apply this operator to our tensor network state \psi and get a new tensor network state T \psi. Even better, we can do this all using pictures!
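Here is a tiny numpy illustration of this bookkeeping, with H = \mathbb{C}^2 and a made-up diagram, just to show that gluing diagrams really does amount to contracting indices:

import numpy as np

# H = C^2.  A vertex with no edges in and 2 edges out is a tensor psi[i, j]:
# a state in H tensor H.  Here, the entangled state (|00> + |11>)/sqrt(2).
psi = np.zeros((2, 2))
psi[0, 0] = psi[1, 1] = 1 / np.sqrt(2)

# A vertex with 2 edges in and 2 edges out is a tensor T[i, j, k, l]:
# an operator from H tensor H to itself.  Here, the swap operator.
T = np.zeros((2, 2, 2, 2))
for i in range(2):
    for j in range(2):
        T[i, j, j, i] = 1.0   # (T psi)[i, j] = psi[j, i]

# Gluing the output edges of psi to the input edges of T means
# contracting the corresponding indices.
new_state = np.einsum("ijkl,kl->ij", T, psi)

print(np.allclose(new_state, psi.T))   # True: the swap acted as expected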

Tensor networks have led to new algorithms in quantum computation, and new ways of describing time evolution operators in condensed matter physics. But most of this work makes no explicit reference to category theory. Jacob’s paper is, among other things, an attempt to bridge this cultural gap. It also tries to bridge the gap between tensor networks and Yves Lafont’s string diagrams for Boolean logic.

Here’s the abstract of Jacob’s paper:

We present a set of new tools which extend the problem solving techniques and range of applicability of network theory as currently applied to quantum many-body physics. We use this new framework to give a solution to the quantum decomposition problem. Specifically, given a quantum state S with k terms, we are now able to construct a tensor network with poly(k) rank three tensors that describes the state S. This solution became possible by synthesizing and tailoring several powerful modern techniques from higher mathematics: category theory, algebra and coalgebra and applicable results from classical network theory and graphical calculus. We present several examples (such as categorical MERA networks etc.) which illustrate how the established methods surrounding tensor network states arise as a special instance of this more general framework, which we call Categorical Tensor Network States.

Take a look! If you have comments, please make them over at my other blog, the n-Category Café — since I’ve also announced this paper there, and it’ll be less confusing if all the comments show up in one place.


Quantum Control Theory

16 August, 2010

The environmental thrust of this blog will rise to the surface again soon, I promise. I’m just going to a lot of talks on quantum technology, condensed matter physics, and the like. Ultimately the two threads should merge in a larger discourse that ranges from highly theoretical to highly practical. But right now you’re probably just confused about the purpose of this blog — it’s smeared out all across the intellectual landscape.

Anyway, to add to the confusion: I just got a nice email from Giampiero Campa, who in week294 had pointed me to the fascinating papers on control theory by Jan Willems. Control theory is the art of getting open systems — systems that interact with their environment — to behave in ways you want.

Since complex systems like ecosystems or the entire Earth are best understood as made of many interacting open systems, and/or being open systems themselves, I think ideas from control theory could become very important in understanding the Earth and how our actions affect it. But I’m also fascinated by control theory because of how it combines standard ideas in physics with new ideas that are best expressed using category theory — a branch of math I happen to know and like. (See week296 and subsequent issues for more on this.) And quantum control theory — the art of getting quantum systems to do what you want — is the sort of thing people here at the CQT may find interesting.

In short, control theory seems like a promising meeting-place for some of my disparate interests. Not necessarily the most important thing for ‘saving the planet’, by any means! But the kind of thing I can’t resist thinking about.

In his email, Campa pointed me to two new papers on this subject:

• Anthony M. Bloch, Roger W. Brockett, and Chitra Rangan, Finite controllability of infinite-dimensional quantum systems, IEEE Transactions on Automatic Control 55 (August 2010), 1797-1805.

• Matthew James and John E. Gough, Quantum dissipative systems and feedback control design by interconnection, IEEE Transactions on Automatic Control 55 (August 2010), 1806-1821.

The second one is related to the ideas of Jan Willems:

Abstract: The purpose of this paper is to extend J.C. Willems’ theory of dissipative systems to open quantum systems described by quantum noise models. This theory, which combines ideas from quantum physics and control theory, provides useful methods for analysis and design of dissipative quantum systems. We describe the interaction of the plant and a class of external systems, called exosystems, in terms of feedback networks of interconnected open quantum systems. Our results include an infinitesimal characterization of the dissipation property, which generalizes the well-known Positive Real and Bounded Real Lemmas, and is used to study some properties of quantum dissipative systems. We also show how to formulate control design problems using network models for open quantum systems, which implements Willems’ “control by interconnection” for open quantum systems. This control design formulation includes, for example, standard problems of stabilization, regulation, and robust control.

I don’t have anything intelligent to say about these papers yet. Does anyone out there know if ideas from quantum control theory have been used to tackle the problems that decoherence causes in quantum computation? The second article makes me wonder about this:

In the physics literature, methods have been developed to model energy loss and decoherence (loss of quantum coherence) arising from the interaction of a system with an environment. These models may be expressed using tools which include completely positive maps, Lindblad generators, and master equations. In the 1980s it became apparent that a wide range of open quantum systems, such as those found in quantum optics, could be described within a new unitary framework of quantum stochastic differential equations, where quantum noise is used to represent the influence of large heat baths and boson fields (which includes optical and phonon fields). Completely positive maps, Lindblad generators, and master equations are obtained by taking expectations.

Quantum noise models cover a wide range of situations involving light and matter. In this paper, we use quantum noise models for boson fields, as occur in quantum optics, mesoscopic superconducting circuits, and nanomechanical systems, although many of the ideas could be extended to other contexts. Quantum noise models can be used to describe an optical cavity, which consists of a pair of mirrors (one of which is partially transmitting) supporting a trapped mode of light. This cavity mode may interact with a free external optical field through the partially transmitting mirror. The external field consists of two components: the input field, which is the field before it has interacted with the cavity mode, and the output field, being the field after interaction. The output field may carry away energy, and in this way the cavity system dissipates energy. This quantum system is in some ways analogous to the RLC circuit discussed above, which stores electromagnetic energy in the inductor and capacitor, but loses energy as heat through the resistor. The cavity also stores electromagnetic energy, quantized as photons, and these may be lost to the external field…


Thermodynamics and Wick Rotation

6 August, 2010

Having two blogs is a bit confusing. My student Mike Stay has some deep puzzles about physics, which I posted over at the n-Category Café:

• Mike Stay, Thermodynamics and Wick Rotation.

But maybe this blog already has some of its own readers, who don’t usually read the n-Café, but are interested in physics? I don’t know.

Anyway: if you’re interested in the mysterious notion of temperature as imaginary time, please click the link and help us figure it out. This should keep us entertained until I’m done with “week300” — the last issue of This Week’s Finds in Mathematical Physics.

No comments here, please — that would get really confusing.

