Network Theory (Part 18)

21 November, 2011

joint with Jacob Biamonte

Last time we explained how ‘reaction networks’, as used in chemistry, are just another way of talking about Petri nets. We stated an amazing result, the deficiency zero theorem, which settles quite a number of questions about chemical reactions. Now let’s illustrate it with an example.

Our example won’t show how powerful this theorem is: it’s too simple. But it’ll help explain the ideas involved.

Diatomic molecules

A diatomic molecule consists of two atoms of the same kind, stuck together:

At room temperature there are 5 elements that are diatomic gases: hydrogen, nitrogen, oxygen, fluorine, chlorine. Bromine is a diatomic liquid, but easily evaporates into a diatomic gas:

Iodine is a crystal at room temperature:

but if you heat it a bit, it becomes a diatomic liquid and then a gas:

so people often list it as a seventh member of the diatomic club.

When you heat any diatomic gas enough, it starts becoming a ‘monatomic’ gas as molecules break down into individual atoms. However, just as a diatomic molecule can break apart into two atoms:

A_2 \to A + A

two atoms can recombine to form a diatomic molecule:

A + A \to A_2

So in equilibrium, the gas will be a mixture of diatomic and monatomic forms. The exact amount of each will depend on the temperature and pressure, since these affect the likelihood that two colliding atoms stick together, or a diatomic molecule splits apart. The detailed nature of our gas also matters, of course.

But we don’t need to get into these details here! Instead, we can just write down the ‘rate equation’ for the reactions we’re talking about. All the details we’re ignoring will be hiding in some constants called ‘rate constants’. We won’t try to compute these; we’ll leave that to our chemist friends.

A reaction network

To write down our rate equation, we start by drawing a ‘reaction network’. For this, we can be a bit abstract and call the diatomic molecule B instead of A_2. Then it looks like this:

We could write down the same information using a Petri net:

But today let’s focus on the reaction network! Staring at this picture, we can read off various things:

Species. The species are the different kinds of atoms, molecules, etc. In our example the set of species is S = \{A, B\}.

Complexes. A complex is a finite sum of species, like A, or A + A, or for a fancier example using more efficient notation, 2 A + 3 B. So, we can think of a complex as a vector v \in \mathbb{R}^S. The complexes that actually show up in our reaction network form a set C \subseteq \mathbb{R}^S. In our example, C = \{A+A, B\}.

Reactions. A reaction is an arrow going from one complex to another. In our example we have two reactions: A + A \to B and B \to A + A.

Chemists define a reaction network to be a triple (S, C, T) where S is a set of species, C is the set of complexes that appear in the reactions, and T is the set of reactions v \to w where v, w \in C. (Stochastic Petri net people call reactions transitions, hence the letter T.)

So, in our example we have:

• Species S = \{A,B\}.

• Complexes: C= \{A+A, B\}.

• Reactions: T =  \{A+A\to B, B\to A+A\}.

To get the rate equation, we also need one more piece of information: a rate constant r(\tau) for each reaction \tau \in T. This is a nonnegative real number that affects how fast the reaction goes. All the details of how our particular diatomic gas behaves at a given temperature and pressure are packed into these constants!

The rate equation

The rate equation says how the expected numbers of the various species—atoms, molecules and the like—change with time. This equation is deterministic. It’s a good approximation when the numbers are large and any fluctuations in these numbers are negligible by comparison.

Here’s the general form of the rate equation:

\displaystyle{ \frac{d}{d t} x_i =  \sum_{\tau\in T} r(\tau) \, (n_i(\tau)-m_i(\tau)) \, x^{m(\tau)}  }

Let’s take a closer look. The quantity x_i is the expected population of the ith species. So, this equation tells us how that changes. But what about the right hand side? As you might expect, it’s a sum over reactions. And:

• The term for the reaction \tau is proportional to the rate constant r(\tau).

• Each reaction \tau goes between two complexes, so we can write it as m(\tau) \to n(\tau). Among chemists the input m(\tau) is called the reactant complex, and the output is called the product complex. The difference n_i(\tau)-m_i(\tau) tells us how many items of species i get created, minus how many get destroyed. So, it’s the net amount of this species that gets produced by the reaction \tau. The term for the reaction \tau is proportional to this, too.

• Finally, the law of mass action says that the rate of a reaction is proportional to the product of the concentrations of the species that enter as inputs. More precisely, if we have a reaction \tau whose input is the complex m(\tau), we define x^{m(\tau)} = x_1^{m_1(\tau)} \cdots x_k^{m_k(\tau)}. The law of mass action says the term for the reaction \tau is proportional to this, too!
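If you like seeing formulas as code, here’s a little Python sketch of the general rate equation. Everything here—the function name, the convention of encoding a complex as a tuple of species coefficients—is our own made-up notation for illustration, not standard chemistry software:

```python
# A reaction is encoded as (m, n, r): reactant complex m, product complex n,
# each a tuple of species coefficients, plus a rate constant r.
# For our network the species are (A, B), and the reactions are
# A + A -> B with rate beta, and B -> A + A with rate alpha.

def rate_equation_rhs(x, reactions):
    """Right-hand side of dx_i/dt = sum_tau r(tau) (n_i - m_i) x^m(tau)."""
    dxdt = [0.0] * len(x)
    for m, n, r in reactions:
        # law of mass action: x^m = prod_i x_i^{m_i}
        flux = r
        for xi, mi in zip(x, m):
            flux *= xi ** mi
        for i in range(len(x)):
            dxdt[i] += (n[i] - m[i]) * flux
    return dxdt

alpha, beta = 1.0, 1.0
reactions = [((2, 0), (0, 1), beta),   # A + A -> B
             ((0, 1), (2, 0), alpha)]  # B -> A + A

# At x = (1, 1) the creation and destruction terms cancel exactly:
print(rate_equation_rhs([1.0, 1.0], reactions))  # -> [0.0, 0.0]
```

Note how the conserved quantity is already visible here: each reaction destroys two A’s for every B it creates, or vice versa.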

Let’s see what this says for the reaction network we’re studying:

Let’s write x_1(t) for the number of A atoms and x_2(t) for the number of B molecules. Let the rate constant for the reaction B \to A + A be \alpha, and let the rate constant for A + A \to B be \beta. Then the rate equation is this:

\displaystyle{\frac{d}{d t} x_1 =  2 \alpha x_2 - 2 \beta x_1^2 }

\displaystyle{\frac{d}{d t} x_2 = -\alpha x_2 + \beta x_1^2 }

This is a bit intimidating. However, we can solve it in closed form thanks to something very precious: a conserved quantity.

We’ve got two species, A and B. But remember, B is just an abbreviation for a molecule made of two A atoms. So, the total number of A atoms is conserved by the reactions in our network. This is the number of A‘s plus twice the number of B‘s: x_1 + 2x_2. So, this should be a conserved quantity: it should not change with time. Indeed, by adding the first equation above to twice the second, we see:

\displaystyle{\frac{d}{d t} \left( x_1 + 2x_2 \right) = 0 }

As a consequence, any solution will stay on a line

x_1 + 2 x_2 = c

for some constant c. We can use this fact to rewrite the rate equation just in terms of x_1:

\displaystyle{ \frac{d}{d t} x_1 = \alpha (c - x_1) - 2 \beta x_1^2 }

This is a separable differential equation, so we can solve it if we can figure out how to do this integral

\displaystyle{ t = \int \frac{d x_1}{\alpha (c - x_1) - 2 \beta x_1^2 }  }

and then solve for x_1.

This sort of trick won’t work for more complicated examples.
But the idea remains important: the numbers of atoms of various kinds—hydrogen, helium, lithium, and so on—are conserved by chemical reactions, so a solution of the rate equation can’t roam freely in \mathbb{R}^S. It will be trapped in some affine subspace, called a ‘stoichiometric compatibility class’. And this is very important.

We don’t feel like doing the integral required to solve our rate equation in closed form, because this idea doesn’t generalize too much. On the other hand, we can always solve the rate equation numerically. So let’s try that!

For example, suppose we set \alpha = \beta = 1. We can plot the solutions for three different choices of initial conditions, say (x_1,x_2) = (0,3), (4,0), and (3,3). We get these graphs:

It looks like the solution always approaches an equilibrium. We seem to be getting different equilibria for different initial conditions, and the pattern is a bit mysterious. However, something nice happens when we plot the ratio x_1^2 / x_2:

Apparently it always converges to 1. Why should that be? It’s not terribly surprising. With both rate constants equal to 1, the reaction A + A \to B proceeds at a rate equal to the square of the number of A‘s, namely x_1^2. The reverse reaction proceeds at a rate equal to the number of B‘s, namely x_2. So in equilibrium, we should have x_1^2 = x_2.
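You can redo this experiment yourself without any special software. Here’s a rough Python sketch using a plain Euler scheme; the step size and number of steps are our own ad hoc choices, good enough for this simple equation:

```python
# Integrate dx1/dt = 2 a x2 - 2 b x1^2, dx2/dt = -a x2 + b x1^2 by Euler's
# method and watch the ratio x1^2/x2 approach 1 when a = b = 1.

def integrate(x1, x2, a=1.0, b=1.0, dt=1e-3, steps=20000):
    for _ in range(steps):
        dx1 = 2*a*x2 - 2*b*x1**2
        dx2 = -a*x2 + b*x1**2
        x1, x2 = x1 + dt*dx1, x2 + dt*dx2
    return x1, x2

for init in [(0.0, 3.0), (4.0, 0.0), (3.0, 3.0)]:
    x1, x2 = integrate(*init)
    c = init[0] + 2*init[1]
    assert abs(x1 + 2*x2 - c) < 1e-6      # the conserved quantity stays put
    assert abs(x1**2 / x2 - 1.0) < 1e-3   # the equilibrium ratio
print("all three initial conditions give x1^2/x2 -> 1")
```

A pleasant detail: since dx1/dt + 2 dx2/dt = 0 exactly, Euler’s method conserves x_1 + 2x_2 exactly too, up to floating-point rounding.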

But why is the equilibrium stable? In this example we could see that using the closed-form solution, or maybe just common sense. But it also follows from a powerful theorem that handles a lot of reaction networks.

The deficiency zero theorem

It’s called Feinberg’s deficiency zero theorem, and we saw it last time. Very roughly, it says that if our reaction network is ‘weakly reversible’ and has ‘deficiency zero’, the rate equation will have equilibrium solutions that behave about as nicely as you could want.

Let’s see how this works. We need to remember some jargon:

Weakly reversible. A reaction network is weakly reversible if for every reaction v \to w in the network, there exists a path of reactions in the network starting at w and leading back to v.

Reversible. A reaction network is reversible if for every reaction v \to w in the network, w \to v is also a reaction in the network. Any reversible reaction network is weakly reversible. Our example is reversible, since it consists of reactions A + A \to B, B \to A + A.

But what about ‘deficiency zero’? We defined that concept last time, but let’s review:

Connected component. A reaction network gives a kind of graph with complexes as vertices and reactions as edges. Two complexes lie in the same connected component if we can get from one to the other by a path of reactions, where at each step we’re allowed to go either forward or backward along a reaction. Chemists call a connected component a linkage class. In our example there’s just one:

Stoichiometric subspace. The stoichiometric subspace is the subspace \mathrm{Stoch} \subseteq \mathbb{R}^S spanned by the vectors of the form w - v for all reactions v \to w in our reaction network. This subspace describes the directions in which a solution of the rate equation can move. In our example, it’s spanned by B - 2 A and 2 A - B, or if you prefer, (-2,1) and (2,-1). These vectors are linearly dependent, so the stoichiometric subspace has dimension 1.

Deficiency. The deficiency of a reaction network is the number of complexes, minus the number of connected components, minus the dimension of the stoichiometric subspace. In our example there are 2 complexes, 1 connected component, and the dimension of the stoichiometric subspace is 1. So, our reaction network has deficiency 2 - 1 - 1 = 0.
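For bigger networks this computation is easy to automate. Here’s a small Python sketch; the encoding of complexes as tuples of species coefficients is our own convention:

```python
from fractions import Fraction

def rank(vectors):
    """Rank of a list of integer vectors, via exact Gaussian elimination."""
    rows = [[Fraction(entry) for entry in vec] for vec in vectors]
    r, col = 0, 0
    while r < len(rows) and col < len(rows[0]):
        pivot = next((i for i in range(r, len(rows)) if rows[i][col] != 0), None)
        if pivot is None:
            col += 1
            continue
        rows[r], rows[pivot] = rows[pivot], rows[r]
        for i in range(r + 1, len(rows)):
            factor = rows[i][col] / rows[r][col]
            rows[i] = [a - factor * b for a, b in zip(rows[i], rows[r])]
        r += 1
        col += 1
    return r

def deficiency(complexes, reactions):
    """Complexes, minus connected components, minus dim of the
    stoichiometric subspace.

    complexes: list of tuples of species coefficients.
    reactions: list of (v, w) pairs of indices into `complexes`.
    """
    # connected components of the reaction graph, via union-find
    parent = list(range(len(complexes)))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for v, w in reactions:
        parent[find(v)] = find(w)
    components = len({find(i) for i in range(len(complexes))})
    # the stoichiometric subspace is spanned by w - v for each reaction
    diffs = [tuple(cw - cv for cv, cw in zip(complexes[v], complexes[w]))
             for v, w in reactions]
    return len(complexes) - components - rank(diffs)

# Our example: complexes A+A = (2,0) and B = (0,1), with reactions both ways.
print(deficiency([(2, 0), (0, 1)], [(0, 1), (1, 0)]))  # -> 0
```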

So, the deficiency zero theorem applies! What does it say? To understand it, we need a bit more jargon. First of all, a vector x \in \mathbb{R}^S tells us how much we’ve got of each species: the amount of species i \in S is the number x_i. And then:

Stoichiometric compatibility class. Given a vector v\in \mathbb{R}^S, its stoichiometric compatibility class is the subset of all vectors that we could reach using the reactions in our reaction network:

\{ v + w \; : \; w \in \mathrm{Stoch} \}

In our example, where the stoichiometric subspace is spanned by (2,-1), the stoichiometric compatibility class of the vector (v_1,v_2) is the line consisting of points

(x_1, x_2) = (v_1,v_2) + s(2,-1)

where the parameter s ranges over all real numbers. Notice that this line can also be written as

x_1 + 2x_2 = c

We’ve already seen that if we start with initial conditions on such a line, the solution will stay on this line. And that’s how it always works: as time passes, any solution of the rate equation stays in the same stoichiometric compatibility class!

In other words: the stoichiometric compatibility class is defined by a bunch of linear equations, one for each linear conservation law that all the reactions in our network obey.

Here a linear conservation law is a law saying that some linear combination of the numbers of species does not change.

Next:

Positivity. A vector in \mathbb{R}^S is positive if all its components are positive; this describes a container of chemicals where all the species are actually present. The positive stoichiometric compatibility class of x\in \mathbb{R}^S consists of all positive vectors in its stoichiometric compatibility class.

We finally have enough jargon in our arsenal to state the zero deficiency theorem. We’ll only state the part we need today:

Zero Deficiency Theorem (Feinberg). If a reaction network has deficiency zero and is weakly reversible, and the rate constants are positive, the rate equation has exactly one equilibrium solution in each positive stoichiometric compatibility class. Any sufficiently nearby solution that starts in the same class will approach this equilibrium as t \to +\infty.

In our example, this theorem says there’s just one positive
equilibrium (x_1,x_2) in each line

x_1 + 2x_2 = c

We can find it by setting the time derivatives to zero:

\displaystyle{ \frac{d}{d t}x_1 =  2 \alpha x_2 - 2 \beta x_1^2 = 0 }

\displaystyle{ \frac{d}{d t} x_2 = -\alpha x_2 + \beta x_1^2 = 0 }

Solving these, we get

\displaystyle{ \frac{x_1^2}{x_2} = \frac{\alpha}{\beta} }

So, these are our equilibrium solutions. It’s easy to verify that indeed, there’s one of these in each stoichiometric compatibility class x_1 + 2x_2 = c. And the zero deficiency theorem also tells us that any sufficiently nearby solution that starts in the same class will approach this equilibrium as t \to \infty.
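To make the verification concrete: substituting x_2 = (c - x_1)/2 into \alpha x_2 = \beta x_1^2 gives the quadratic 2\beta x_1^2 + \alpha x_1 - \alpha c = 0, which has exactly one positive root whenever \alpha, \beta, c > 0. Here’s a quick Python check of that claim; the sample values of \alpha, \beta, c are arbitrary:

```python
from math import sqrt

# One positive equilibrium per compatibility class x1 + 2 x2 = c:
# solve 2 b x1^2 + a x1 - a c = 0 for its unique positive root.

def equilibrium(alpha, beta, c):
    x1 = (-alpha + sqrt(alpha**2 + 8*alpha*beta*c)) / (4*beta)
    x2 = (c - x1) / 2
    return x1, x2

for alpha, beta, c in [(1.0, 1.0, 4.0), (2.0, 3.0, 5.0), (0.5, 2.0, 10.0)]:
    x1, x2 = equilibrium(alpha, beta, c)
    assert x1 > 0 and x2 > 0                      # positive equilibrium
    assert abs(x1 + 2*x2 - c) < 1e-12             # in the right class
    assert abs(alpha*x2 - beta*x1**2) < 1e-9      # both time derivatives vanish
print("one positive equilibrium per class, as the theorem predicts")
```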

This partially explains what we saw before in our graphs. It shows that in the case \alpha = \beta = 1, any solution that starts out approximately satisfying

\displaystyle{\frac{x_1^2}{x_2} = 1}

will actually have

\displaystyle{\lim_{t \to +\infty} \frac{x_1^2}{x_2} = 1 }

But in fact, in this example we don’t even need to start near the equilibrium for our solution to approach the equilibrium! What about in general? We don’t know, but just to get the ball rolling, we’ll risk the following wild guess:

Conjecture. If a reaction network has deficiency zero and is weakly reversible, and the rate constants are positive, the rate equation has exactly one equilibrium solution in each positive stoichiometric compatibility class, and any positive solution that starts in the same class will approach this equilibrium as t \to +\infty.

If anyone knows a proof or counterexample, we’d be interested. If this result were true, it would really clarify the dynamics of reaction networks in the zero deficiency case.

Next time we’ll talk about this same reaction network from a stochastic point of view, where we think of the atoms and molecules as reacting in a probabilistic way. And we’ll see how the conservation laws we’ve been talking about today are related to Noether’s theorem for Markov processes!


Network Theory (Part 16)

4 November, 2011

We’ve been comparing two theories: stochastic mechanics and quantum mechanics. Last time we saw that any graph gives us an example of both theories! It’s a bit peculiar, but today we’ll explore the intersection of these theories a little further, and see that it has another interpretation. It’s also the theory of electrical circuits made of resistors!

That’s nice, because I’m supposed to be talking about ‘network theory’, and electrical circuits are perhaps the most practical networks of all:

I plan to talk a lot about electrical circuits. I’m not quite ready to dive in, but I can’t resist dipping my toe in the water today. Why don’t you join me? It’s not too cold!

Dirichlet operators

Last time we saw that any graph gives us an operator called the ‘graph Laplacian’ that’s both infinitesimal stochastic and self-adjoint. That means we get both:

• a Markov process describing the random walk of a classical particle on the graph.

and

• a 1-parameter unitary group describing the motion of a quantum particle on the graph.

That’s sort of neat, so it’s natural to wonder which operators are both infinitesimal stochastic and self-adjoint. They’re called ‘Dirichlet operators’, and at least in the finite-dimensional case we’re considering, they’re easy to understand completely. Even better, it turns out they describe electrical circuits made of resistors!

Today let’s take a lowbrow attitude and think of a linear operator H : \mathbb{C}^n \to \mathbb{C}^n as an n \times n matrix with entries H_{i j}. Then:

H is self-adjoint if it equals the conjugate of its transpose:

H_{i j} = \overline{H}_{j i}

H is infinitesimal stochastic if its columns sum to zero and its off-diagonal entries are real and nonnegative:

\displaystyle{ \sum_i H_{i j} = 0 }

i \ne j \Rightarrow H_{i j} \ge 0

H is a Dirichlet operator if it’s both self-adjoint and infinitesimal stochastic.

What are Dirichlet operators like? Suppose H is a Dirichlet operator. Then its off-diagonal entries are \ge 0, and since

\displaystyle{ \sum_i H_{i j} = 0}

its diagonal entries obey

\displaystyle{ H_{j j} = - \sum_{ i \ne j} H_{i j} \le 0 }

So all the entries of the matrix H are real, which in turn implies it’s symmetric:

H_{i j} = \overline{H}_{j i} = H_{j i}

So, we can build any Dirichlet operator H as follows:

• Choose the entries above the diagonal, H_{i j} with i < j, to be arbitrary nonnegative real numbers.

• The entries below the diagonal, H_{i j} with i > j, are then forced on us by the requirement that H be symmetric: H_{i j} = H_{j i}.

• The diagonal entries are then forced on us by the requirement that the columns sum to zero: H_{j j} = - \sum_{i \ne j} H_{i j}.

Note that because the entries are real, we can think of a Dirichlet operator as a linear operator H : \mathbb{R}^n \to \mathbb{R}^n. We’ll do that for the rest of today.
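The three-step recipe above is easy to turn into code. Here’s a Python sketch; the function name and the choice of random entries are ours:

```python
import random

# Build a random Dirichlet operator: fill the above-diagonal entries with
# arbitrary nonnegative numbers, mirror them below the diagonal, and let
# the diagonal entries make every column sum to zero.

def random_dirichlet_operator(n, rng=random.Random(0)):
    H = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(i + 1, n):
            H[i][j] = H[j][i] = rng.uniform(0.0, 1.0)
    for j in range(n):
        H[j][j] = -sum(H[i][j] for i in range(n) if i != j)
    return H

H = random_dirichlet_operator(4)
n = len(H)
assert all(H[i][j] == H[j][i] for i in range(n) for j in range(n))          # self-adjoint
assert all(abs(sum(H[i][j] for i in range(n))) < 1e-12 for j in range(n))   # columns sum to zero
assert all(H[i][j] >= 0 for i in range(n) for j in range(n) if i != j)      # off-diagonal >= 0
print("H is a Dirichlet operator")
```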

Circuits made of resistors

Now for the fun part. We can easily draw any Dirichlet operator! To do this, we draw n dots, connect each pair of distinct dots with an edge, and label the edge connecting the ith dot to the jth with any number H_{i j} \ge 0:

This contains all the information we need to build our Dirichlet operator. To make the picture prettier, we can leave out the edges labelled by 0:

Like last time, the graphs I’m talking about are simple: undirected, with no edges from a vertex to itself, and at most one edge from one vertex to another. So:

Theorem. Any finite simple graph with edges labelled by positive numbers gives a Dirichlet operator, and conversely.

We already talked about a special case last time: if we label all the edges by the number 1, our operator H is called the graph Laplacian. So, now we’re generalizing that idea by letting the edges have more interesting labels.

What’s the meaning of this trick? Well, we can think of our graph as an electrical circuit where the edges are wires. What do the numbers labelling these wires mean? One obvious possibility is to put a resistor on each wire, and let that number be its resistance. But that doesn’t make sense, since we’re leaving out wires labelled by 0. If we leave out a wire, that’s not like having a wire of zero resistance: it’s like having a wire of infinite resistance! No current can go through when there’s no wire. So the number labelling an edge should be the conductance of the resistor on that wire. Conductance is the reciprocal of resistance.

So, our Dirichlet operator above gives a circuit like this:

Here Ω is the symbol for an ‘ohm’, a unit of resistance… but the upside-down version, namely ℧, is the symbol for a ‘mho’, a unit of conductance that’s the reciprocal of an ohm.

Let’s see if this cute idea leads anywhere. Think of a Dirichlet operator H : \mathbb{R}^n \to \mathbb{R}^n as a circuit made of resistors. What could a vector \psi \in \mathbb{R}^n mean? It assigns a real number to each vertex of our graph. The only sensible option is for this number to be the electric potential at that point in our circuit. So let’s try that.

Now, what’s

\langle \psi, H \psi \rangle  ?

In quantum mechanics this would be a very sensible thing to look at: it would give us the expected value of the Hamiltonian H in a state \psi. But what does it mean in the land of electrical circuits?

Up to a constant fudge factor, it turns out to be the power consumed by the electrical circuit!

Let’s see why. First, remember that when a current flows along a wire, power gets consumed. In other words, electrostatic potential energy gets turned into heat. The power consumed is

P = V I

where V is the voltage across the wire and I is the current flowing along the wire. If we assume our wire has resistance R we also have Ohm’s law:

I = V / R

so

\displaystyle{ P = \frac{V^2}{R} }

If we write this using the conductance instead of the resistance R, we get

P = \textrm{conductance} \; V^2

But our electrical circuit has lots of wires, so the power it consumes will be a sum of terms like this. We’re assuming H_{i j} is the conductance of the wire from the ith vertex to the jth, or zero if there’s no wire connecting them. And by definition, the voltage across this wire is the difference in electrostatic potentials at the two ends: \psi_i - \psi_j. So, the total power consumed is

\displaystyle{ P = \sum_{i \ne j}  H_{i j} (\psi_i - \psi_j)^2 }

This is nice, but what does it have to do with \langle \psi , H \psi \rangle?

The answer is here:

Theorem. If H : \mathbb{R}^n \to \mathbb{R}^n is any Dirichlet operator, and \psi \in \mathbb{R}^n is any vector, then

\displaystyle{ \langle \psi , H \psi \rangle = -\frac{1}{2} \sum_{i \ne j}  H_{i j} (\psi_i - \psi_j)^2 }

Proof. Let’s start with the formula for power:

\displaystyle{ P = \sum_{i \ne j}  H_{i j} (\psi_i - \psi_j)^2 }

Note that this sum includes the condition i \ne j, since we only have wires going between distinct vertices. But the summand is zero if i = j, so we also have

\displaystyle{ P = \sum_{i, j}  H_{i j} (\psi_i - \psi_j)^2 }

Expanding the square, we get

\displaystyle{ P = \sum_{i, j}  H_{i j} \psi_i^2 - 2 H_{i j} \psi_i \psi_j + H_{i j} \psi_j^2 }

The middle term looks promisingly similar to \langle \psi, H \psi \rangle, but what about the other two terms? Because H_{i j} = H_{j i}, they’re equal:

\displaystyle{ P = \sum_{i, j} - 2 H_{i j} \psi_i \psi_j + 2 H_{i j} \psi_j^2  }

And in fact they’re zero! Since H is infinitesimal stochastic, we have

\displaystyle{ \sum_i H_{i j} = 0 }

so

\displaystyle{ \sum_i H_{i j} \psi_j^2 = 0 }

and it’s still zero when we sum over j. We thus have

\displaystyle{ P = - 2 \sum_{i, j} H_{i j} \psi_i \psi_j }

But since \psi_i is real, this is -2 times

\displaystyle{ \langle \psi, H \psi \rangle  = \sum_{i, j}  H_{i j} \overline{\psi}_i \psi_j }

So, we’re done.   █
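If you don’t feel like checking the algebra, you can at least spot-check the theorem numerically. Here’s a little Python computation for a triangle of wires, with conductances chosen arbitrarily by us (1, 2 and 3 mhos, say):

```python
# Spot-check: for a small Dirichlet operator H built from a triangle of
# wires, compare <psi, H psi> with -1/2 sum_{i != j} H_ij (psi_i - psi_j)^2.

n = 3
conductance = {(0, 1): 1.0, (0, 2): 2.0, (1, 2): 3.0}
H = [[0.0] * n for _ in range(n)]
for (i, j), c in conductance.items():
    H[i][j] = H[j][i] = c
for j in range(n):
    H[j][j] = -sum(H[i][j] for i in range(n) if i != j)

psi = [0.7, -1.2, 2.5]   # an arbitrary vector of electric potentials
lhs = sum(psi[i] * H[i][j] * psi[j] for i in range(n) for j in range(n))
rhs = -0.5 * sum(H[i][j] * (psi[i] - psi[j]) ** 2
                 for i in range(n) for j in range(n) if i != j)
assert abs(lhs - rhs) < 1e-12
assert lhs <= 0   # a Dirichlet operator always has <psi, H psi> <= 0
print(lhs)
```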

An instant consequence of this theorem is that a Dirichlet operator has

\langle \psi , H \psi \rangle \le 0

for all \psi. Actually most people use the opposite sign convention in defining infinitesimal stochastic operators. This makes H_{i j} \le 0, which is mildly annoying, but it gives

\langle \psi , H \psi \rangle \ge 0

which is nice. When H is a Dirichlet operator, defined with this opposite sign convention, \langle \psi , H \psi \rangle is called a Dirichlet form.

The big picture

Maybe it’s a good time to step back and see where we are.

So far we’ve been exploring the analogy between stochastic mechanics and quantum mechanics. Where do networks come in? Well, they’ve actually come in twice so far:

1) First we saw that Petri nets can be used to describe stochastic or quantum processes where things of different kinds randomly react and turn into other things. A Petri net is a kind of network like this:

The different kinds of things are the yellow circles; we called them states, because sometimes we think of them as different states of a single kind of thing. The reactions where things turn into other things are the blue squares: we called them transitions. We label the transitions with numbers to specify the rates at which they occur.

2) Then we looked at stochastic or quantum processes where in each transition a single thing turns into a single thing. We can draw these as Petri nets where each transition has just one state as input and one state as output. But we can also draw them as directed graphs with edges labelled by numbers:

Now the dark blue boxes are states and the edges are transitions!

Today we looked at a special case of the second kind of network: the Dirichlet operators. For these the ‘forward’ transition rate H_{i j} equals the ‘reverse’ rate H_{j i}, so our graph can be undirected: no arrows on the edges. And for these the rates H_{i i} are determined by the rest, so we can omit the edges from vertices to themselves:

The result can be seen as an electrical circuit made of resistors! So we’re building up a little dictionary:

• Stochastic mechanics: \psi_i is a probability and H_{i j} is a transition rate (probability per time).

• Quantum mechanics: \psi_i is an amplitude and H_{i j} is a transition rate (amplitude per time).

• Circuits made of resistors: \psi_i is a voltage and H_{i j} is a conductance.

This dictionary may seem rather odd—especially the third item, which looks completely different than the first two! But that’s good: when things aren’t odd, we don’t get many new ideas. The whole point of this ‘network theory’ business is to think about networks from many different viewpoints and let the sparks fly!

Actually, this particular oddity is well-known in certain circles. We’ve been looking at the discrete version, where we have a finite set of states. But in the continuum, the classic example of a Dirichlet operator is the Laplacian H = \nabla^2. And then we have:

• The heat equation:

\frac{d}{d t} \psi = \nabla^2 \psi

is fundamental to stochastic mechanics.

• The Schrödinger equation:

\frac{d}{d t} \psi = -i \nabla^2 \psi

is fundamental to quantum mechanics.

• The Poisson equation:

\nabla^2 \psi = -\rho

is fundamental to electrostatics.

Briefly speaking, electrostatics is the study of how the electric potential \psi depends on the charge density \rho. The theory of electrical circuits made of resistors can be seen as a special case, at least when the current isn’t changing with time.

I’ll say a lot more about this… but not today! If you want to learn more, this is a great place to start:

• P. G. Doyle and J. L. Snell, Random Walks and Electric Networks, Mathematical Association of America, Washington DC, 1984.

This free online book explains, in a really fun informal way, how random walks on graphs are related to electrical circuits made of resistors. To dig deeper into the continuum case, try:

• M. Fukushima, Dirichlet Forms and Markov Processes, North-Holland, Amsterdam, 1980.


Network Theory (Part 15)

26 October, 2011

Last time we saw how to get a graph whose vertices are states of a molecule and whose edges are transitions between states. We focused on two beautiful but not completely realistic examples that both give rise to the same highly symmetrical graph: the ‘Desargues graph’.

Today I’ll start with a few remarks about the Desargues graph. Then I’ll get to work showing how any graph gives:

• A Markov process, namely a random walk on the graph.

• A quantum process, where instead of having a probability to hop from vertex to vertex as time passes, we have an amplitude.

The trick is to use an operator called the ‘graph Laplacian’, a discretized version of the Laplacian which happens to be both infinitesimal stochastic and self-adjoint. As we saw in Part 12, such an operator will give rise both to a Markov process and a quantum process (that is, a one-parameter unitary group).

The most famous operator that’s both infinitesimal stochastic and self-adjoint is the Laplacian, \nabla^2. Because it’s both, the Laplacian shows up in two important equations: one in stochastic mechanics, the other in quantum mechanics.

• The heat equation:

\displaystyle{ \frac{d}{d t} \psi = \nabla^2 \psi }

describes how the probability \psi(x) of a particle being at the point x smears out as the particle randomly walks around:

The corresponding Markov process is called ‘Brownian motion’.

• The Schrödinger equation:

\displaystyle{ \frac{d}{d t} \psi = -i \nabla^2 \psi }

describes how the amplitude \psi(x) of a particle being at the point x wiggles about as the particle ‘quantumly’ walks around.

Both these equations have analogues where we replace space by a graph, and today I’ll describe them.

Drawing the Desargues graph

First I want to show you a nice way to draw the Desargues graph. For this it’s probably easiest to go back to our naive model of an ethyl cation:

Even though ethyl cations don’t really look like this, and we should be talking about some trigonal bipyramidal molecule instead, it won’t affect the math to come. Mathematically, the two problems are isomorphic! So let’s stick with this nice simple picture.

We can be a bit more abstract, though. A state of the ethyl cation is like having 5 balls, with 3 in one pile and 2 in the other. And we can focus on the first pile and forget the second, because whatever isn’t in the first pile must be in the second.

Of course a mathematician calls a pile of things a ‘set’, and calls those things ‘elements’. So let’s say we’ve got a set with 5 elements. Draw a red dot for each 2-element subset, and a blue dot for each 3-element subset. Draw an edge between a red dot and a blue dot whenever the 2-element subset is contained in the 3-element subset. We get the Desargues graph.

That’s true by definition. But I never proved that any of the pictures I showed you are correct! For example, this picture shows the Desargues graph:

but I never really proved this fact—and I won’t now, either.

To draw a picture we know is correct, it’s actually easier to start with a big graph that has vertices for all the subsets of our 5-element set. If we draw an edge whenever an n-element subset is contained in an (n+1)-element subset, the Desargues graph will be sitting inside this big graph.

Here’s what the big graph looks like:

This graph has 2^5 vertices. It’s actually a picture of a 5-dimensional hypercube! The vertices are arranged in columns. There’s

• one 0-element subset,

• five 1-element subsets,

• ten 2-element subsets,

• ten 3-element subsets,

• five 4-element subsets,

• one 5-element subset.

So, the numbers of vertices in each column go like this:

1 \quad 5 \quad 10 \quad 10 \quad 5 \quad 1

which is a row in Pascal’s triangle. We get the Desargues graph if we keep only the vertices corresponding to 2- and 3-element subsets, like this:

It’s less pretty than our earlier picture, but at least there’s no mystery to it. Also, it shows that the Desargues graph can be generalized in various ways. For example, there’s a theory of bipartite Kneser graphs H(n,k). The Desargues graph is H(5,2).
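If you want to convince yourself that this subset construction gives a graph of the right shape, it’s quick to check by computer. Here’s a Python sketch; it only verifies the vertex count, edge count and 3-regularity, not the full isomorphism with the Desargues graph:

```python
from itertools import combinations

# Vertices: the 2- and 3-element subsets of {0,...,4}.
# Edges: whenever the 2-element subset sits inside the 3-element one.

small = [frozenset(s) for s in combinations(range(5), 2)]
big = [frozenset(s) for s in combinations(range(5), 3)]
edges = [(s, b) for s in small for b in big if s <= b]

assert len(small) + len(big) == 20   # 10 + 10 vertices
assert len(edges) == 30              # each 2-subset lies in three 3-subsets

degree = {v: 0 for v in small + big}
for s, b in edges:
    degree[s] += 1
    degree[b] += 1
assert all(d == 3 for d in degree.values())
print("20 vertices, 30 edges, 3-regular")
```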

Desargues’ theorem

I can’t resist answering this question: why is it called the ‘Desargues graph’? This name comes from Desargues’ theorem, a famous result in projective geometry. Suppose you have two triangles ABC and abc, like this:

Suppose the lines Aa, Bb, and Cc all meet at a single point, the ‘center of perspectivity’. Then the point of intersection of ab and AB, the point of intersection of ac and AC, and the point of intersection of bc and BC all lie on a single line, the ‘axis of perspectivity’. The converse is true too. Quite amazing!

The Desargues configuration consists of all the actors in this drama:

• 10 points: A, B, C, a, b, c, the center of perspectivity, and the three points on the axis of perspectivity

and

• 10 lines: Aa, Bb, Cc, AB, AC, BC, ab, ac, bc and the axis of perspectivity

Given any configuration of points and lines, we can form a graph called its Levi graph by drawing a vertex for each point or line, and drawing edges to indicate which points lie on which lines. And now for the punchline: the Levi graph of the Desargues configuration is the ‘Desargues-Levi graph’!—or Desargues graph, for short.

Alas, I don’t know how this is relevant to anything I’ve discussed. For now it’s just a tantalizing curiosity.

A random walk on the Desargues graph

Back to business! I’ve been telling you about the analogy between quantum mechanics and stochastic mechanics. This analogy becomes especially interesting in chemistry, which lies on the uneasy borderline between quantum and stochastic mechanics.

Fundamentally, of course, atoms and molecules are described by quantum mechanics. But sometimes chemists describe chemical reactions using stochastic mechanics instead. When can they get away with this? Apparently whenever the molecules involved are big enough and interacting with their environment enough for ‘decoherence’ to kick in. I won’t attempt to explain this now.

Let’s imagine we have a molecule of iron pentacarbonyl with—here’s the unrealistic part, but it’s not really too bad—distinguishable carbonyl groups:

Iron pentacarbonyl is liquid at room temperatures, so as time passes, each molecule will bounce around and occasionally do a maneuver called a ‘pseudorotation’:

We can approximately describe this process as a random walk on a graph whose vertices are states of our molecule, and whose edges are transitions between states—namely, pseudorotations. And as we saw last time, this graph is the Desargues graph:

Note: all the transitions are reversible here. And thanks to the enormous amount of symmetry, the rates of all these transitions must be equal.

Let’s write V for the set of vertices of the Desargues graph. A probability distribution of states of our molecule is a function

\displaystyle{ \psi : V \to [0,\infty) }

with

\displaystyle{ \sum_{x \in V} \psi(x) = 1 }

We can think of these probability distributions as living in this vector space:

L^1(V) = \{ \psi: V \to \mathbb{R} \}

I’m calling this space L^1 because of the general abstract nonsense explained in Part 12: probability distributions on any measure space live in a vector space called L^1. Today that notation is overkill, since every function on V lies in L^1. But please humor me.

The point is that we’ve got a general setup that applies here. There’s a Hamiltonian:

H : L^1(V) \to L^1(V)

describing the rate at which the molecule randomly hops from one state to another… and the probability distribution \psi \in L^1(V) evolves in time according to the equation:

\displaystyle{ \frac{d}{d t} \psi(t) = H \psi(t) }

But what’s the Hamiltonian H? It’s very simple, because it’s equally likely for the state to hop from any vertex to any other vertex that’s connected to that one by an edge. Why? Because the problem has so much symmetry that nothing else makes sense.

So, let’s write E for the set of edges of the Desargues graph. We can think of this as a subset of V \times V by saying (x,y) \in E when x is connected to y by an edge. Then

\displaystyle{ (H \psi)(x) =  \sum_{y \,\, \textrm{such that} \,\, (x,y) \in E} \!\!\!\!\!\!\!\!\!\!\! \psi(y) \quad - \quad 3 \psi(x) }

We’re subtracting 3 \psi(x) because there are 3 edges coming out of each vertex x, so this is the rate at which the probability of staying at x decreases. We could multiply this Hamiltonian by a constant if we wanted the random walk to happen faster or slower… but let’s not.

The next step is to solve this discretized version of the heat equation:

\displaystyle{ \frac{d}{d t} \psi(t) = H \psi(t) }

Abstractly, the solution is easy:

\psi(t) = \exp(t H) \psi(0)

But to actually compute \exp(t H), we might want to diagonalize the operator H. In this particular example, we could take advantage of the enormous symmetry of the Desargues graph. Its symmetry group includes the permutation group S_5, so we could take the vector space L^1(V) and break it up into irreducible representations of S_5. Each of these will give an eigenspace of H, so by this method we can diagonalize H. I’d sort of like to try this… but it’s a big digression, so I won’t. At least, not today!
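While I won't carry out the representation-theoretic diagonalization, we can at least do the computation numerically. Here's a sketch of my own (using `networkx` and `scipy`, neither of which appears in the post): build the Desargues graph, form H = A - 3I where A is the adjacency matrix, diagonalize it, and check that \exp(t H) is stochastic and that the random walk approaches the uniform distribution.

```python
# Numerical random walk on the Desargues graph.
import numpy as np
import networkx as nx
from scipy.linalg import expm

G = nx.desargues_graph()                  # 20 vertices, 3-regular
A = nx.to_numpy_array(G)                  # adjacency matrix
H = A - 3 * np.eye(20)                    # Hamiltonian: hop along edges, leak out at rate 3

evals = np.linalg.eigvalsh(H)             # H is symmetric, so we can diagonalize it
U = expm(1.0 * H)                         # time evolution for t = 1
col_sums = U.sum(axis=0)                  # stochastic: each column should sum to 1

psi0 = np.zeros(20)
psi0[0] = 1.0                             # start at a single vertex...
psi = expm(50.0 * H) @ psi0               # ...and evolve for a long time
```

The largest eigenvalue of H is 0, with the uniform distribution as eigenvector, and all other eigenvalues are negative, which is why the walk flattens out to probability 1/20 at each vertex.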

Graph Laplacians

The Hamiltonian we just saw is an example of a ‘graph Laplacian’. We can write down such a Hamiltonian for any graph, but it gets a tiny bit more complicated when different vertices have different numbers of edges coming out of them.

The word ‘graph’ means lots of things, but right now I’m talking about simple graphs. Such a graph has a set of vertices V and a set of edges E \subseteq V \times V, such that

(x,y) \in E \implies (y,x) \in E

which says the edges are undirected, and

(x,x) \notin E

which says there are no loops. Let d(x) be the degree of the vertex x, meaning the number of edges coming out of it.

Then the graph Laplacian is this operator on L^1(V):

\displaystyle{ (H \psi)(x) =  \sum_{y \,\, \textrm{such that} \, \,(x,y) \in E} \!\!\!\!\!\!\!\!\!\!\! \psi(y) \quad - \quad d(x) \psi(x) }

There is a huge amount to say about graph Laplacians! If you want, you can get started here:

• Michael William Newman, The Laplacian Spectrum of Graphs, Master’s thesis, Department of Mathematics, University of Manitoba, 2000.

But for now, let’s just say that the operators \exp(t H) define a Markov process describing a random walk on the graph, where hopping from one vertex to any neighboring vertex has unit probability per unit time. We can make the hopping faster or slower by multiplying H by a constant. And this is a good time to admit that most people use a graph Laplacian that’s the negative of ours, and write time evolution as \exp(-t H). The advantage is that then the eigenvalues of the Laplacian are \ge 0.

But what matters most is this. We can write the operator H as a matrix whose entry H_{x y} is 1 when there’s an edge from x to y and 0 otherwise, except when x = y, in which case the entry is -d(x). And then:

Puzzle 1. Show that for any finite graph, the graph Laplacian H is infinitesimal stochastic, meaning that:

\displaystyle{ \sum_{x \in V} H_{x y} = 0 }

and

x \ne y \implies  H_{x y} \ge 0

This fact implies that for any t \ge 0, the operator \exp(t H) is stochastic—just what we need for a Markov process.
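If you'd like a numerical warm-up for Puzzle 1 (an illustration, not a proof—and my own addition, using `networkx` and `scipy`), here is the graph Laplacian of a graph whose vertices have different degrees, with checks that it is infinitesimal stochastic and that \exp(t H) is stochastic:

```python
# Graph Laplacian of a non-regular graph, in the sign convention of the post.
import numpy as np
import networkx as nx
from scipy.linalg import expm

G = nx.path_graph(5)                       # degrees 1, 2, 2, 2, 1: not regular
A = nx.to_numpy_array(G)
D = np.diag([G.degree(v) for v in G.nodes()])
H = A - D                                  # off-diagonal 1 on edges, -d(x) on the diagonal

U = expm(2.0 * H)                          # time evolution for t = 2
```

The infinitesimal stochastic conditions say each column of H sums to zero and its off-diagonal entries are nonnegative; exponentiating then gives columns summing to one and nonnegative entries.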

But we could also use H as a Hamiltonian for a quantum system, if we wanted. Now we think of \psi(x) as the amplitude for being in the state x \in V. But now \psi is a function

\psi : V \to \mathbb{C}

with

\displaystyle{ \sum_{x \in V} |\psi(x)|^2 = 1 }

We can think of this function as living in the Hilbert space

L^2(V) = \{ \psi: V \to \mathbb{C} \}

where the inner product is

\langle \phi, \psi \rangle = \displaystyle{ \sum_{x \in V} \overline{\phi(x)} \psi(x) }

Puzzle 2. Show that for any finite graph, the graph Laplacian H: L^2(V) \to L^2(V) is self-adjoint, meaning that:

H_{x y} = \overline{H}_{y x}

This implies that for any t \in \mathbb{R}, the operator \exp(-i t H) is unitary—just what we need for a one-parameter unitary group. So, we can take this version of Schrödinger’s equation:

\displaystyle{ \frac{d}{d t} \psi = -i H \psi }

and solve it:

\displaystyle{ \psi(t) = \exp(-i t H) \psi(0) }

and we’ll know that time evolution is unitary!
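We can check this numerically too. The following sketch (mine, not from the post, using `networkx` and `scipy`) reuses the Desargues graph Laplacian as a quantum Hamiltonian and verifies that \exp(-i t H) is unitary and preserves the L^2 norm:

```python
# Quantum walk generated by the same graph Laplacian.
import numpy as np
import networkx as nx
from scipy.linalg import expm

G = nx.desargues_graph()
H = nx.to_numpy_array(G) - 3 * np.eye(20)       # real symmetric, hence self-adjoint

U = expm(-1j * 0.7 * H)                         # evolution for t = 0.7, say

rng = np.random.default_rng(0)
psi = rng.normal(size=20) + 1j * rng.normal(size=20)
psi /= np.linalg.norm(psi)                      # a random normalized quantum state
phi = U @ psi                                   # its time evolution
```

Since H is real and symmetric, \exp(-i t H) is automatically unitary, and the norm of phi comes out equal to 1 up to rounding error.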

So, we’re in a dream world where we can do stochastic mechanics and quantum mechanics with the same Hamiltonian. I’d like to exploit this somehow, but I’m not quite sure how. Of course physicists like to use a trick called Wick rotation where they turn quantum mechanics into stochastic mechanics by replacing time by imaginary time. We can do that here. But I’d like to do something new, special to this context.

Maybe I should learn more about chemistry and graph theory. Of course, graphs show up in at least two ways: first for drawing molecules, and second for drawing states and transitions, as I’ve been doing. These books are supposed to be good:

• Danail Bonchev and D.H. Rouvray, eds., Chemical Graph Theory: Introduction and Fundamentals, Taylor and Francis, 1991.

• Nenad Trinajstic, Chemical Graph Theory, CRC Press, 1992.

• R. Bruce King, Applications of Graph Theory and Topology in Inorganic Cluster Coordination Chemistry, CRC Press, 1993.

The second is apparently the magisterial tome of the subject. The prices of these books are absurd: for example, Amazon sells the first for $300, and the second for $222. Luckily the university here should have them…


A Bet Concerning Neutrinos (Part 3)

19 October, 2011

As you’ve probably heard, an experiment called OPERA measured how fast neutrinos go from a particle accelerator in Switzerland to a detector in Italy. They got a speed slightly faster than light. This got a lot of people excited.

As a conservative old fart, I made a bet with Frederik De Roo saying that no, neutrinos do not go faster than light.

Since then, various reports have been zipping across the internet at near light-speed, claiming that neutrinos don’t go faster than light. But I think they’re a bit premature. Much as I’d like to win, I don’t think I’ve won just yet.

For example, last week someone who works on artificial intelligence at a university in the Netherlands said that the OPERA team made a mistake in their use of special relativity—a mistake that explains away their result:

• Ronald A.J. van Elburg, Times of flight between a source and a detector observed from a GPS satellite, 12 October 2011.

Two days later, a pseudonymous blogger who works for MIT’s Technology Review said the argument was “convincing”:

• KentuckyFC, Faster-than-light neutrino puzzle claimed solved by special relativity, The Physics arXiv Blog, 14 October 2011.

The popular news media got all excited! But they may have been getting ahead of themselves. After all, the OPERA team includes a bunch of particle physicists. Special relativity is child’s play for them. Would they really screw up that bad, after years of checking and rechecking their work? Chad Orzel suggests not:

• Chad Orzel, Experimentalists aren’t idiots: The neutrino saga continues, Uncertain Principles, 16 October 2011.

And none of the physicists I know find van Elburg’s argument very convincing.

But that’s not all! A couple weeks earlier, Cohen and Glashow did a calculation:

• Andrew G. Cohen, Sheldon L. Glashow, New constraints on neutrino velocities, 29 September 2011.

According to this, faster-than-light neutrinos would lose energy by emitting lots of electron-positron pairs, a bit like how a supersonic jet makes a sonic boom. Two days ago, another team of physicists doing experiments on neutrinos at Gran Sasso claimed that together with their experiment, this result refutes the existence of faster-than-light neutrinos:

• ICARUS team, A search for the analogue to Cherenkov radiation by high energy neutrinos at superluminal speeds in ICARUS, 17 October 2011.

At least one good physics blogger has taken this work as “definitive”:

• Tommaso Dorigo, ICARUS refutes Opera’s superluminal neutrinos, A Quantum Diaries Survivor, 18 October 2011.

He says:

The saga of the superluminal neutrinos took a dramatic turn today, with the publication of a very simple yet definitive study by ICARUS…

And so, the news media are getting excited again, saying that now the OPERA result is really dead, like a vampire with two stakes through its heart.

But how “definitive” is this result, really? I’m a bit disappointed that the Cohen–Glashow paper doesn’t clearly state the assumptions that go into their argument. They zip through the calculation in an offhand way that suggests they’re using standard principles of physics to their heart’s content—in particular, special relativity. Normally that’s fine. But not here. After all, if faster-than-light neutrinos were signalling a breakdown of any of these principles, their calculation might be invalid.

Of course I don’t believe neutrinos are going faster than light: that’s why I made that bet! If you don’t want to believe it either, that’s fine. But if you want to entertain this possibility, in order to disprove it, you’d better be clear on the logic you’re using.

Without actually measuring the speed of neutrinos, the best you can hope for is something like this: “If theoretical principles X and Y and Z are true, then our experiment shows neutrinos don’t go faster than light.” So neutrinos could still go faster than light… but only if X or Y or Z is false.

Maybe X, Y and Z are principles we hold sacred—maybe even more sacred than the principle that nothing goes faster than light! But shocking discoveries can have shocking consequences. Sacred truths can fall like dominoes.

Given this, I think the only truly definitive way to hammer the nail in the coffin of the OPERA experiment is to either

1) find a mistake in the experiment that convincingly explains its result

or

2) do more measurements of the speed of neutrinos.

And maybe Dorigo acknowledges this, in a way. He says:

So, forget superluminal neutrinos. Or maybe not: what remains to be seen is whether other experiments will find results consistent with v=c or not. That’s right: regardless of the tight ICARUS bound, every nerd with a neutrino detector in his or her garage is already set up to produce an independent confirmation of the startling OPERA result… We’ll soon see measurements by MINOS and Borexino, for instance. Interesting times to be a neutrino expert are these!

(Emphasis mine.)

So, I’m going to wait and see what happens. I want to win my bet fair and square.


Network Theory (Part 13)

11 October, 2011

Unlike some recent posts, this will be very short. I merely want to show you the quantum and stochastic versions of Noether’s theorem, side by side.

Having made my sacrificial offering to the math gods last time by explaining how everything generalizes when we replace our finite set X of states by an infinite set or an even more general measure space, I’ll now relax and state Noether’s theorem only for a finite set. If you’re the sort of person who finds that unsatisfactory, you can do the generalization yourself.

Two versions of Noether’s theorem

Let me write the quantum and stochastic Noether’s theorem so they look almost the same:

Theorem. Let X be a finite set. Suppose H is a self-adjoint operator on L^2(X), and let O be an observable. Then

[O,H] = 0

if and only if for all states \psi(t) obeying Schrödinger’s equation

\displaystyle{ \frac{d}{d t} \psi(t) = -i H \psi(t) }

the expected value of O in the state \psi(t) does not change with t.

Theorem. Let X be a finite set. Suppose H is an infinitesimal stochastic operator on L^1(X), and let O be an observable. Then

[O,H] =0

if and only if for all states \psi(t) obeying the master equation

\displaystyle{ \frac{d}{d t} \psi(t) = H \psi(t) }

the expected values of O and O^2 in the state \psi(t) do not change with t.

This makes the big difference stick out like a sore thumb: in the quantum version we only need the expected value of O, while in the stochastic version we need the expected values of O and O^2!
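The "if" direction of the stochastic theorem is easy to see in a small example. Here's one of my own devising (using `networkx` and `scipy`, not from the post): take the Laplacian of a disconnected graph and let O be constant on each connected component. Then [O, H] = 0, and both the expected value of O and of O^2 stay constant in time:

```python
# Stochastic Noether, positive direction: commuting observable, conserved moments.
import numpy as np
import networkx as nx
from scipy.linalg import expm

G = nx.disjoint_union(nx.cycle_graph(3), nx.cycle_graph(4))   # triangle + square
A = nx.to_numpy_array(G)
H = A - np.diag(A.sum(axis=1))                # graph Laplacian, our sign convention
O = np.diag([2.0] * 3 + [5.0] * 4)            # observable: 2 on the triangle, 5 on the square

psi0 = np.full(7, 1 / 7)                      # uniform initial distribution
times = (0.0, 1.0, 5.0)
exp_O = [np.diag(O) @ (expm(t * H) @ psi0) for t in times]
exp_O2 = [np.diag(O)**2 @ (expm(t * H) @ psi0) for t in times]
```

Since no probability flows between components, the probability of being in the triangle is conserved, and with it every moment of O.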

Brendan Fong proved the stochastic version of Noether’s theorem in Part 11. Now let’s do the quantum version.

Proof of the quantum version

My statement of the quantum version was silly in a couple of ways. First, I spoke of the Hilbert space L^2(X) for a finite set X, but any finite-dimensional Hilbert space will do equally well. Second, I spoke of the “self-adjoint operator” H and the “observable” O, but in quantum mechanics an observable is the same thing as a self-adjoint operator!

Why did I talk in such a silly way? Because I was attempting to emphasize the similarity between quantum mechanics and stochastic mechanics. But they’re somewhat different. For example, in stochastic mechanics we have two very different concepts: infinitesimal stochastic operators, which generate symmetries, and functions on our set X, which are observables. But in quantum mechanics something wonderful happens: self-adjoint operators both generate symmetries and are observables! So, my attempt was a bit strained.

Let me state and prove a less silly quantum version of Noether’s theorem, which implies the one above:

Theorem. Suppose H and O are self-adjoint operators on a finite-dimensional Hilbert space. Then

[O,H] = 0

if and only if for all states \psi(t) obeying Schrödinger’s equation

\displaystyle{ \frac{d}{d t} \psi(t) = -i H \psi(t) }

the expected value of O in the state \psi(t) does not change with t:

\displaystyle{ \frac{d}{d t} \langle \psi(t), O \psi(t) \rangle = 0 }

Proof. The trick is to compute the time derivative I just wrote down. Using Schrödinger’s equation, the product rule, and the fact that H is self-adjoint we get:

\begin{array}{ccl}  \displaystyle{ \frac{d}{d t} \langle \psi(t), O \psi(t) \rangle } &=&   \langle -i H \psi(t) , O \psi(t) \rangle + \langle \psi(t) , O (- i H \psi(t)) \rangle \\  \\  &=& i \langle \psi(t) , H O \psi(t) \rangle -i \langle \psi(t) , O H \psi(t)) \rangle \\  \\  &=& - i \langle \psi(t), [O,H] \psi(t) \rangle  \end{array}

So, if [O,H] = 0, clearly the above time derivative vanishes. Conversely, if this time derivative vanishes for all states \psi(t) obeying Schrödinger’s equation, we know

\langle \psi, [O,H] \psi \rangle = 0

for all states \psi and thus all vectors in our Hilbert space. Does this imply [O,H] = 0? Yes, because i times the commutator of two self-adjoint operators is self-adjoint, and for any self-adjoint operator A we have

\forall \psi  \; \; \langle \psi, A \psi \rangle = 0 \qquad \Rightarrow \qquad A = 0

This is a well-known fact whose proof goes like this. Assume \langle \psi, A \psi \rangle = 0 for all \psi. Then to show A = 0, it is enough to show \langle \phi, A \psi \rangle = 0 for all \phi and \psi. But we have a marvelous identity:

\begin{array}{ccl} \langle \phi, A \psi \rangle &=& \frac{1}{4} \left( \langle \phi + \psi, \, A (\phi + \psi) \rangle \; - \; \langle \psi - \phi, \, A (\psi - \phi) \rangle \right. \\ && \left. +i \langle \psi + i \phi, \, A (\psi + i \phi) \rangle \; - \; i\langle \psi - i \phi, \, A (\psi - i \phi) \rangle \right) \end{array}

and all four terms on the right vanish by our assumption.   █

The marvelous identity up there is called the polarization identity. In plain English, it says: if you know the diagonal entries of a self-adjoint matrix in every basis, you can figure out all the entries of that matrix in every basis.
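The identity is easy to verify numerically. Here's a quick check of my own (not from the post) with a random self-adjoint matrix and random vectors, using the inner product that is conjugate-linear in the first slot, as above:

```python
# Numerical check of the polarization identity as stated in the post.
import numpy as np

rng = np.random.default_rng(1)
n = 6
B = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
A = B + B.conj().T                          # a random self-adjoint matrix

def ip(x, y):
    return np.vdot(x, y)                    # <x, y>, conjugate-linear in x

phi = rng.normal(size=n) + 1j * rng.normal(size=n)
psi = rng.normal(size=n) + 1j * rng.normal(size=n)

q = lambda chi: ip(chi, A @ chi)            # the "diagonal" values <chi, A chi>
rhs = 0.25 * (q(phi + psi) - q(psi - phi)
              + 1j * q(psi + 1j * phi) - 1j * q(psi - 1j * phi))
lhs = ip(phi, A @ psi)
```

Expanding each quadratic form q on the right and cancelling terms gives exactly 4 \langle \phi, A \psi \rangle, so lhs and rhs agree to rounding error.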

Why is it called the ‘polarization identity’? I think because it shows up in optics, in the study of polarized light.

Comparison

In both the quantum and stochastic cases, the time derivative of the expected value of an observable O is expressed in terms of its commutator with the Hamiltonian. In the quantum case we have

\displaystyle{ \frac{d}{d t} \langle \psi(t), O \psi(t) \rangle = - i \langle \psi(t), [O,H] \psi(t) \rangle }

and for the right side to always vanish, we need [O,H] = 0, thanks to the polarization identity. In the stochastic case, a perfectly analogous equation holds:

\displaystyle{ \frac{d}{d t} \int O \psi(t) = \int [O,H] \psi(t) }

but now the right side can always vanish even without [O,H] = 0. We saw a counterexample in Part 11. There is nothing like the polarization identity to save us! To get [O,H] = 0 we need a supplementary hypothesis, for example the vanishing of

\displaystyle{ \frac{d}{d t} \int O^2 \psi(t) }
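Here's a tiny counterexample in the same spirit as the one in Part 11 (this is my reconstruction, not necessarily the same example, and the code uses `scipy`): three states 0, 1, 2, where state 1 hops to 0 or 2 with equal rates, and O(x) = x. The mean of O is conserved by symmetry, yet [O,H] \ne 0, and the mean of O^2 grows:

```python
# Stochastic Noether, negative direction: conserved mean without [O,H] = 0.
import numpy as np
from scipy.linalg import expm

# From state 1, hop to state 0 or state 2 at unit rate; states 0, 2 are absorbing.
H = np.array([[0., 1., 0.],
              [0., -2., 0.],
              [0., 1., 0.]])
O = np.diag([0., 1., 2.])                   # observable O(x) = x

psi0 = np.array([0., 1., 0.])               # start at the middle state
times = (0.0, 0.5, 2.0)
mean = [np.diag(O) @ (expm(t * H) @ psi0) for t in times]
mean_sq = [np.diag(O)**2 @ (expm(t * H) @ psi0) for t in times]
comm = O @ H - H @ O
```

Probability spreads symmetrically to both sides, so the mean stays at 1 while the variance grows: exactly the situation the supplementary hypothesis on O^2 rules out.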

Okay! Starting next time we’ll change gears and look at some more examples of stochastic Petri nets and Markov processes, including some from chemistry. After some more of that, I’ll move on to networks of other sorts. There’s a really big picture here, and I’m afraid I’ve been getting caught up in the details of a tiny corner.


Network Theory (Part 12)

9 October, 2011

Last time we proved a version of Noether’s theorem for stochastic mechanics. Now I want to compare that to the more familiar quantum version.

But to do this, I need to say more about the analogy between stochastic mechanics and quantum mechanics. And whenever I try, I get pulled toward explaining some technical issues involving analysis: whether sums converge, whether derivatives exist, and so on. I’ve been trying to avoid such stuff—not because I dislike it, but because I’m afraid you might. But the more I put off discussing these issues, the more they fester and make me unhappy. In fact, that’s why it’s taken so long for me to write this post!

So, this time I will gently explore some of these issues. But don’t be scared: I’ll mainly talk about some simple big ideas. Next time I’ll discuss Noether’s theorem. I hope that by getting the technicalities out of my system, I’ll feel okay about hand-waving whenever I want.

And if you’re an expert on analysis, maybe you can help me with a question.

Stochastic mechanics versus quantum mechanics

First, we need to recall the analogy we began sketching in Part 5, and push it a bit further. The idea is that stochastic mechanics differs from quantum mechanics in two big ways:

• First, instead of complex amplitudes, stochastic mechanics uses nonnegative real probabilities. The complex numbers form a ring; the nonnegative real numbers form a mere rig, which is a ‘ring without negatives’. Rigs are much neglected in the typical math curriculum, but unjustly so: they’re almost as good as rings in many ways, and there are lots of important examples, like the natural numbers \mathbb{N} and the nonnegative real numbers, [0,\infty). For probability theory, we should learn to love rigs.

But there are, alas, situations where we need to subtract probabilities, even when the answer comes out negative: namely when we’re taking the time derivative of a probability. So sometimes we need \mathbb{R} instead of just [0,\infty).

• Second, while in quantum mechanics a state is described using a ‘wavefunction’, meaning a complex-valued function obeying

\int |\psi|^2 = 1

in stochastic mechanics it’s described using a ‘probability distribution’, meaning a nonnegative real function obeying

\int \psi = 1

So, let’s try our best to present the theories in close analogy, while respecting these two differences.

States

We’ll start with a set X whose points are states that a system can be in. Last time I assumed X was a finite set, but this post is so mathematical I might as well let my hair down and assume it’s a measure space. A measure space lets you do integrals, but a finite set is a special case, and then these integrals are just sums. So, I’ll write things like

\int f

and mean the integral of the function f over the measure space X, but if X is a finite set this just means

\sum_{x \in X} f(x)

Now, I’ve already defined the word ‘state’, but both quantum and stochastic mechanics need a more general concept of state. Let’s call these ‘quantum states’ and ‘stochastic states’:

• In quantum mechanics, the system has an amplitude \psi(x) of being in any state x \in X. These amplitudes are complex numbers with

\int | \psi |^2 = 1

We call \psi: X \to \mathbb{C} obeying this equation a quantum state.

• In stochastic mechanics, the system has a probability \psi(x) of being in any state x \in X. These probabilities are nonnegative real numbers with

\int \psi = 1

We call \psi: X \to [0,\infty) obeying this equation a stochastic state.

In quantum mechanics we often use this abbreviation:

\langle \phi, \psi \rangle = \int \overline{\phi} \psi

so that a quantum state has

\langle \psi, \psi \rangle = 1

Similarly, we could introduce this notation in stochastic mechanics:

\langle \psi \rangle = \int \psi

so that a stochastic state has

\langle \psi \rangle = 1

But this notation is a bit risky, since angle brackets of this sort often stand for expectation values of observables. So, I’ve been writing \int \psi, and I’ll keep on doing this.

In quantum mechanics, \langle \phi, \psi \rangle is well-defined whenever both \phi and \psi live in the vector space

L^2(X) = \{ \psi: X \to \mathbb{C} \; : \; \int |\psi|^2 < \infty \}

In stochastic mechanics, \langle \psi \rangle is well-defined whenever \psi lives in the vector space

L^1(X) =  \{ \psi: X \to \mathbb{R} \; : \; \int |\psi| < \infty \}

You’ll notice I wrote \mathbb{R} rather than [0,\infty) here. That’s because in some calculations we’ll need functions that take negative values, even though our stochastic states are nonnegative.

Observables

A state is a way our system can be. An observable is something we can measure about our system. They fit together: we can measure an observable when our system is in some state. If we repeat this we may get different answers, but there’s a nice formula for average or ‘expected’ answer.

• In quantum mechanics, an observable is a self-adjoint operator A on L^2(X). The expected value of A in the state \psi is

\langle \psi, A \psi \rangle

Here I’m assuming that we can apply A to \psi and get a new vector A \psi \in L^2(X). This is automatically true when X is a finite set, but in general we need to be more careful.

• In stochastic mechanics, an observable is a real-valued function A on X. The expected value of A in the state \psi is

\int A \psi

Here we’re using the fact that we can multiply A and \psi and get a new vector A \psi \in L^1(X), at least if A is bounded. Again, this is automatic if X is a finite set, but not otherwise.

Symmetries

Besides states and observables, we need ‘symmetries’, which are transformations that map states to states. We use these to describe how our system changes when we wait a while, for example.

• In quantum mechanics, an isometry is a linear map U: L^2(X) \to L^2(X) such that

\langle U \phi, U \psi \rangle = \langle \phi, \psi \rangle

for all \psi, \phi \in L^2(X). If U is an isometry and \psi is a quantum state, then U \psi is again a quantum state.

• In stochastic mechanics, a stochastic operator is a linear map U: L^1(X) \to L^1(X) such that

\int U \psi = \int \psi

and

\psi \ge 0 \; \; \Rightarrow \; \; U \psi \ge 0

for all \psi \in L^1(X). If U is stochastic and \psi is a stochastic state, then U \psi is again a stochastic state.

In quantum mechanics we are mainly interested in invertible isometries, which are called unitary operators. There are lots of these, and their inverses are always isometries. There are, however, very few stochastic operators whose inverses are stochastic:

Puzzle 1. Suppose X is a finite set. Show that every isometry U: L^2(X) \to L^2(X) is invertible, and its inverse is again an isometry.

Puzzle 2. Suppose X is a finite set. Which stochastic operators U: L^1(X) \to L^1(X) have stochastic inverses?
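Without giving away the answer to Puzzle 2, here's a numerical hint of my own (not from the post). A permutation matrix is stochastic with a stochastic inverse, while a stochastic matrix that genuinely mixes probability has an inverse with negative entries, hence not stochastic:

```python
# Invertibility of stochastic matrices (columns sum to 1, entries >= 0).
import numpy as np

# A permutation matrix: stochastic, and its inverse (the transpose) is too.
P = np.array([[0., 1., 0.],
              [0., 0., 1.],
              [1., 0., 0.]])

# A mixing stochastic matrix: columns sum to 1 and entries are nonnegative...
U = np.array([[0.5, 0.25, 0.],
              [0.5, 0.5,  0.],
              [0.,  0.25, 1.]])
Uinv = np.linalg.inv(U)
# ...but its inverse has negative entries, so it fails to be stochastic.
```

Mixing loses information, and undoing it requires subtracting probability—which a stochastic operator cannot do.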

This is why we usually think of time evolution as being reversible in quantum mechanics, but not in stochastic mechanics! In quantum mechanics we often describe time evolution using a ‘1-parameter group’, while in stochastic mechanics we describe it using a 1-parameter semigroup… meaning that we can run time forwards, but not backwards.

But let’s see how this works in detail!

Time evolution in quantum mechanics

In quantum mechanics there’s a beautiful relation between observables and symmetries, which goes like this. Suppose that for each time t we want a unitary operator U(t) :  L^2(X) \to L^2(X) that describes time evolution. Then it makes a lot of sense to demand that these operators form a 1-parameter group:

Definition. A collection of linear operators U(t) (t \in \mathbb{R}) on some vector space forms a 1-parameter group if

U(0) = 1

and

U(s+t) = U(s) U(t)

for all s,t \in \mathbb{R}.

Note that these conditions force all the operators U(t) to be invertible.

Now suppose our vector space is a Hilbert space, like L^2(X). Then we call a 1-parameter group a 1-parameter unitary group if the operators involved are all unitary.

It turns out that 1-parameter unitary groups are either continuous in a certain way, or so pathological that you can’t even prove they exist without the axiom of choice! So, we always focus on the continuous case:

Definition. A 1-parameter unitary group is strongly continuous if U(t) \psi depends continuously on t for all \psi, in this sense:

t_i \to t \;\; \Rightarrow \; \;\|U(t_i) \psi - U(t) \psi \| \to 0

Then we get a classic result proved by Marshall Stone back in the early 1930s. You may not know him, but he was so influential at the University of Chicago during this period that it’s often called the “Stone Age”. And here’s one reason why:

Stone’s Theorem. There is a one-to-one correspondence between strongly continuous 1-parameter unitary groups on a Hilbert space and self-adjoint operators on that Hilbert space, given as follows. Given a strongly continuous 1-parameter unitary group U(t) we can always write

U(t) = \exp(-i t H)

for a unique self-adjoint operator H. Conversely, any self-adjoint operator determines a strongly continuous 1-parameter group this way. For all vectors \psi for which H \psi is well-defined, we have

\displaystyle{ \left.\frac{d}{d t} U(t) \psi \right|_{t = 0} = -i H \psi }

Moreover, for any of these vectors, if we set

\psi(t) = \exp(-i t H) \psi

we have

\displaystyle{ \frac{d}{d t} \psi(t) = - i H \psi(t) }

When U(t) = \exp(-i t H) describes the evolution of a system in time, H is called the Hamiltonian, and it has the physical meaning of ‘energy’. The equation I just wrote down is then called Schrödinger’s equation.

So, simply put, in quantum mechanics we have a correspondence between observables and nice one-parameter groups of symmetries. Not surprisingly, our favorite observable, energy, corresponds to our favorite symmetry: time evolution!

However, if you were paying attention, you noticed that I carefully avoided explaining how we define \exp(- i t H). I didn’t even say what a self-adjoint operator is. This is where the technicalities come in: they arise when H is unbounded, and not defined on all vectors in our Hilbert space.

Luckily, these technicalities evaporate for finite-dimensional Hilbert spaces, such as L^2(X) for a finite set X. Then we get:

Stone’s Theorem (Baby Version). Suppose we are given a finite-dimensional Hilbert space. In this case, a linear operator H on this space is self-adjoint iff it’s defined on the whole space and

\langle \phi , H \psi \rangle = \langle H \phi, \psi \rangle

for all vectors \phi, \psi. Given a strongly continuous 1-parameter unitary group U(t) we can always write

U(t) = \exp(- i t H)

for a unique self-adjoint operator H, where

\displaystyle{ \exp(-i t H) \psi = \sum_{n = 0}^\infty \frac{(-i t H)^n}{n!} \psi }

with the sum converging for all \psi. Conversely, any self-adjoint operator on our space determines a strongly continuous 1-parameter group this way. For all vectors \psi in our space we then have

\displaystyle{ \left.\frac{d}{d t} U(t) \psi \right|_{t = 0} = -i H \psi }

and if we set

\psi(t) = \exp(-i t H) \psi

we have

\displaystyle{ \frac{d}{d t} \psi(t) = - i H \psi(t) }

Time evolution in stochastic mechanics

We’ve seen that in quantum mechanics, time evolution is usually described by a 1-parameter group of operators that comes from an observable: the Hamiltonian. Stochastic mechanics is different!

First, since stochastic operators aren’t usually invertible, we typically describe time evolution by a mere ‘semigroup’:

Definition. A collection of linear operators U(t) (t \in [0,\infty)) on some vector space forms a 1-parameter semigroup if

U(0) = 1

and

U(s+t) = U(s) U(t)

for all s, t \ge 0.

Now suppose this vector space is L^1(X) for some measure space X. We want to focus on the case where the operators U(t) are stochastic and depend continuously on t in the same sense we discussed earlier.

Definition. A 1-parameter strongly continuous semigroup of stochastic operators U(t) : L^1(X) \to L^1(X) is called a Markov semigroup.

What’s the analogue of Stone’s theorem for Markov semigroups? I don’t know a fully satisfactory answer! If you know, please tell me.

Later I’ll say what I do know—I’m not completely clueless—but for now let’s look at the ‘baby’ case where X is a finite set. Then the story is neat and complete:

Theorem. Suppose we are given a finite set X. In this case, a linear operator H on L^1(X) is infinitesimal stochastic iff it’s defined on the whole space,

\int H \psi = 0

for all \psi \in L^1(X), and the matrix of H in terms of the obvious basis obeys

H_{i j} \ge 0

for all j \ne i. Given a Markov semigroup U(t) on L^1(X), we can always write

U(t) = \exp(t H)

for a unique infinitesimal stochastic operator H, where

\displaystyle{ \exp(t H) \psi = \sum_{n = 0}^\infty \frac{(t H)^n}{n!} \psi }

with the sum converging for all \psi. Conversely, any infinitesimal stochastic operator on our space determines a Markov semigroup this way. For all \psi \in L^1(X) we then have

\displaystyle{ \left.\frac{d}{d t} U(t) \psi \right|_{t = 0} = H \psi }

and if we set

\psi(t) = \exp(t H) \psi

we have the master equation:

\displaystyle{ \frac{d}{d t} \psi(t) = H \psi(t) }

In short, time evolution in stochastic mechanics is a lot like time evolution in quantum mechanics, except it’s typically not invertible, and the Hamiltonian is typically not an observable.
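The finite-set case is concrete enough to check on a computer. Here's a small numerical sketch in Python with NumPy (the particular matrix H, the times, and the series-based matrix exponential are our own choices for illustration): we build an infinitesimal stochastic operator on a 2-point set, exponentiate it, and verify that we get stochastic operators obeying the semigroup law and the master equation.

```python
import numpy as np

def expm_series(A, terms=60):
    """Matrix exponential via the power series sum_n A^n / n!."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        result = result + term
    return result

# An infinitesimal stochastic matrix on a 2-point set:
# off-diagonal entries >= 0, and each column sums to zero.
H = np.array([[-2.0,  1.0],
              [ 2.0, -1.0]])

t = 0.7
U = expm_series(t * H)

# U(t) is stochastic: nonnegative entries, columns summing to 1.
assert np.all(U >= 0)
assert np.allclose(U.sum(axis=0), 1.0)

# Semigroup property: U(s + t) = U(s) U(t).
s = 0.3
assert np.allclose(expm_series((s + t) * H), expm_series(s * H) @ U)

# Master equation d/dt psi(t) = H psi(t), checked by central differences.
psi = np.array([0.25, 0.75])   # a probability distribution on our 2-point set
h = 1e-6
deriv = (expm_series((t + h) * H) @ psi - expm_series((t - h) * H) @ psi) / (2 * h)
assert np.allclose(deriv, H @ (U @ psi), atol=1e-5)
```

Note that probabilities are conserved automatically: since the columns of H sum to zero, the columns of exp(tH) sum to one.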

Why not? Because we defined an observable to be a function A: X \to \mathbb{R}. We can think of this as giving an operator on L^1(X), namely the operator of multiplication by A. That’s a nice trick, which we used to good effect last time. However, at least when X is a finite set, this operator will be diagonal in the obvious basis consisting of functions that equal 1 at one point of X and zero elsewhere. So, it can only be infinitesimal stochastic if it’s zero!

Puzzle 3. If X is a finite set, show that any operator on L^1(X) that’s both diagonal and infinitesimal stochastic must be zero.

The Hille–Yosida theorem

I’ve now told you everything you really need to know… but not everything I want to say. What happens when X is not a finite set? What are Markov semigroups like then? I can’t abide letting this question go unresolved! Unfortunately I only know a partial answer.

We can get a certain distance using the Hille–Yosida theorem, which is much more general.

Definition. A Banach space is a vector space with a norm such that any Cauchy sequence converges.


Examples include Hilbert spaces like L^2(X) for any measure space, but also other spaces like L^1(X) for any measure space!

Definition. If V is a Banach space, a 1-parameter semigroup of operators U(t) : V \to V is called a contraction semigroup if it’s strongly continuous and

\| U(t) \psi \| \le \| \psi \|

for all t \ge 0 and all \psi \in V.

Examples include strongly continuous 1-parameter unitary groups, but also Markov semigroups!

Puzzle 4. Show any Markov semigroup is a contraction semigroup.
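Without giving away the proof, we can at least test the claim in Puzzle 4 numerically on a finite set (again the matrix H and the random vectors here are our own illustrative choices): a stochastic operator never increases the L^1 norm of any vector, not just of probability distributions.

```python
import numpy as np

def expm_series(A, terms=60):
    """Matrix exponential via the power series sum_n A^n / n!."""
    result = np.eye(A.shape[0])
    term = np.eye(A.shape[0])
    for n in range(1, terms):
        term = term @ A / n
        result = result + term
    return result

# Another infinitesimal stochastic matrix on a 2-point set.
H = np.array([[-1.0,  3.0],
              [ 1.0, -3.0]])
U = expm_series(0.5 * H)

# Contraction property: ||U(t) psi||_1 <= ||psi||_1 for arbitrary psi,
# including vectors with negative entries.
rng = np.random.default_rng(0)
for _ in range(100):
    psi = rng.normal(size=2)
    assert np.linalg.norm(U @ psi, 1) <= np.linalg.norm(psi, 1) + 1e-12
```

The key point the code illustrates: since U has nonnegative entries with columns summing to one, the triangle inequality gives ∑_i |∑_j U_{ij} ψ_j| ≤ ∑_j |ψ_j|.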

The Hille–Yosida theorem generalizes Stone’s theorem to contraction semigroups. In my misspent youth, I spent a lot of time carrying around Yosida’s book Functional Analysis. Furthermore, Einar Hille was the advisor of my thesis advisor, Irving Segal. Segal generalized the Hille–Yosida theorem to nonlinear operators, and I used this generalization a lot back when I studied nonlinear partial differential equations. So, I feel compelled to tell you this theorem:

Hille–Yosida Theorem. Given a contraction semigroup U(t) we can always write

U(t) = \exp(t H)

for some densely defined operator H such that H - \lambda I has an inverse and

\displaystyle{ \| (H - \lambda I)^{-1} \psi \| \le \frac{1}{\lambda} \| \psi \| }

for all \lambda > 0 and \psi \in V. Conversely, any such operator determines a contraction semigroup. For all vectors \psi for which H \psi is well-defined, we have

\displaystyle{ \left.\frac{d}{d t} U(t) \psi \right|_{t = 0} = H \psi }

Moreover, for any of these vectors, if we set

\psi(t) = U(t) \psi

we have

\displaystyle{ \frac{d}{d t} \psi(t) = H \psi(t) }

If you like, you can take the stuff at the end of this theorem to be what we mean by saying U(t) = \exp(t H). When U(t) = \exp(t H), we say that H generates the semigroup U(t).

But now suppose V = L^1(X). Besides the conditions in the Hille–Yosida theorem, what extra conditions on H are necessary and sufficient for it to generate a Markov semigroup? In other words, what’s a definition of ‘infinitesimal stochastic operator’ that’s suitable not only when X is a finite set, but an arbitrary measure space?

I asked this question on Mathoverflow a few months ago, and so far the answers have not been completely satisfactory.

Some people mentioned the Hille–Yosida theorem, which is surely a step in the right direction, but not the full answer.

Others discussed the special case when \exp(t H) extends to a bounded self-adjoint operator on L^2(X). When X is a finite set, this special case happens precisely when the matrix H_{i j} is symmetric: the probability of hopping from j to i equals the probability of hopping from i to j. This is a fascinating special case, not least because when H is both infinitesimal stochastic and self-adjoint, we can use it as a Hamiltonian for both stochastic mechanics and quantum mechanics! Someday I want to discuss this. However, it’s just a special case.

After grabbing people by the collar and insisting that I wanted to know the answer to the question I actually asked—not some vaguely similar question—the best answer seems to be Martin Gisser’s reference to this book:

• Zhi-Ming Ma and Michael Röckner, Introduction to the Theory of (Non-Symmetric) Dirichlet Forms, Springer, Berlin, 1992.

This book provides a very nice self-contained proof of the Hille–Yosida theorem. On the other hand, it does not answer my question in general, but only when the skew-symmetric part of H is dominated (in a certain sense) by the symmetric part.

So, I’m stuck on this front, but that needn’t bring the whole project to a halt. We’ll just sidestep this question.

For a good well-rounded introduction to Markov semigroups and what they’re good for, try:

• Ryszard Rudnicki, Katarzyna Pichór and Marta Tyran-Kamińska, Markov semigroups and their applications.


A Bet Concerning Neutrinos (Part 2)

5 October, 2011

We negotiated it, and now we’ve agreed:

This bet concerns whether neutrinos can go faster than light. John Baez bets they cannot. For the sake of the environment and out of scientific curiosity, Frederik De Roo bets that they can.

At any time before October 2021, either John or Frederik can claim they have won this bet. When that happens, they will try to agree whether it’s true beyond a reasonable doubt, false beyond a reasonable doubt, or uncertain that neutrinos can (under some conditions) go faster than light. If they cannot agree, the situation counts as uncertain.

If they decide it’s true, John is only allowed to take one round-trip airplane trip during one of the next 5 years. John is allowed to choose which year this is. He can make his choice at any time (before 4 years have passed).

If they decide it’s false, Frederik has to produce 10 decent Azimuth Library articles during one of the next 5 years—where ‘decent’ means ‘deserving of three thumbs up emoticons on the Azimuth Forum’. He is allowed to choose which year this is. He can make his choice at any time (before 4 years have passed).

If they decide it’s uncertain, they can renegotiate the bet (or just decide not to continue it).

