Network Theory (Part 16)

4 November, 2011

We’ve been comparing two theories: stochastic mechanics and quantum mechanics. Last time we saw that any graph gives us an example of both theories! It’s a bit peculiar, but today we’ll explore the intersection of these theories a little further, and see that it has another interpretation. It’s also the theory of electrical circuits made of resistors!

That’s nice, because I’m supposed to be talking about ‘network theory’, and electrical circuits are perhaps the most practical networks of all:

I plan to talk a lot about electrical circuits. I’m not quite ready to dive in, but I can’t resist dipping my toe in the water today. Why don’t you join me? It’s not too cold!

Dirichlet operators

Last time we saw that any graph gives us an operator called the ‘graph Laplacian’ that’s both infinitesimal stochastic and self-adjoint. That means we get both:

• a Markov process describing the random walk of a classical particle on the graph.

and

• a 1-parameter unitary group describing the motion of a quantum particle on the graph.

That’s sort of neat, so it’s natural to wonder what are all the operators that are both infinitesimal stochastic and self-adjoint. They’re called ‘Dirichlet operators’, and at least in the finite-dimensional case we’re considering, they’re easy to completely understand. Even better, it turns out they describe electrical circuits made of resistors!

Today let’s take a lowbrow attitude and think of a linear operator H : \mathbb{C}^n \to \mathbb{C}^n as an n \times n matrix with entries H_{i j}. Then:

H is self-adjoint if it equals the conjugate of its transpose:

H_{i j} = \overline{H}_{j i}

H is infinitesimal stochastic if its columns sum to zero and its off-diagonal entries are real and nonnegative:

\displaystyle{ \sum_i H_{i j} = 0 }

i \ne j \Rightarrow H_{i j} \ge 0

H is a Dirichlet operator if it’s both self-adjoint and infinitesimal stochastic.

What are Dirichlet operators like? Suppose H is a Dirichlet operator. Then its off-diagonal entries are \ge 0, and since

\displaystyle{ \sum_i H_{i j} = 0}

its diagonal entries obey

\displaystyle{ H_{i i} = - \sum_{j \ne i} H_{j i} \le 0 }

So all the entries of the matrix H are real, which in turn implies it’s symmetric:

H_{i j} = \overline{H}_{j i} = H_{j i}

So, we can build any Dirichlet operator H as follows:

• Choose the entries above the diagonal, H_{i j} with i < j, to be arbitrary nonnegative real numbers.

• The entries below the diagonal, H_{i j} with i > j, are then forced on us by the requirement that H be symmetric: H_{i j} = H_{j i}.

• The diagonal entries are then forced on us by the requirement that the columns sum to zero: H_{i i} = - \sum_{j \ne i} H_{j i}.

Note that because the entries are real, we can think of a Dirichlet operator as a linear operator H : \mathbb{R}^n \to \mathbb{R}^n. We’ll do that for the rest of today.
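This recipe is easy to carry out on a computer. Here's a small sketch in Python with NumPy (the particular random entries are arbitrary), following the three steps above and then checking the defining properties of a Dirichlet operator:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# Step 1: choose arbitrary nonnegative entries above the diagonal.
H = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        H[i, j] = rng.uniform(0, 1)

# Step 2: symmetry forces the entries below the diagonal.
H = H + H.T

# Step 3: the diagonal entries are forced by making each column sum to zero.
for i in range(n):
    H[i, i] = -sum(H[j, i] for j in range(n) if j != i)

# Check the two defining properties of a Dirichlet operator:
assert np.allclose(H, H.T)            # self-adjoint (real and symmetric)
assert np.allclose(H.sum(axis=0), 0)  # columns sum to zero
assert all(H[i, j] >= 0 for i in range(n)
           for j in range(n) if i != j)  # off-diagonal entries nonnegative
```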

Circuits made of resistors

Now for the fun part. We can easily draw any Dirichlet operator! To do this, we draw n dots, connect each pair of distinct dots with an edge, and label the edge connecting the ith dot to the jth with any number H_{i j} \ge 0:

This contains all the information we need to build our Dirichlet operator. To make the picture prettier, we can leave out the edges labelled by 0:

Like last time, the graphs I’m talking about are simple: undirected, with no edges from a vertex to itself, and at most one edge from one vertex to another. So:

Theorem. Any finite simple graph with edges labelled by positive numbers gives a Dirichlet operator, and conversely.

We already talked about a special case last time: if we label all the edges by the number 1, our operator H is called the graph Laplacian. So, now we’re generalizing that idea by letting the edges have more interesting labels.

What’s the meaning of this trick? Well, we can think of our graph as an electrical circuit where the edges are wires. What do the numbers labelling these wires mean? One obvious possibility is to put a resistor on each wire, and let that number be its resistance. But that doesn’t make sense, since we’re leaving out wires labelled by 0. If we leave out a wire, that’s not like having a wire of zero resistance: it’s like having a wire of infinite resistance! No current can go through when there’s no wire. So the number labelling an edge should be the conductance of the resistor on that wire. Conductance is the reciprocal of resistance.

So, our Dirichlet operator above gives a circuit like this:

Here Ω is the symbol for an ‘ohm’, a unit of resistance… but the upside-down version, namely ℧, is the symbol for a ‘mho’, a unit of conductance that’s the reciprocal of an ohm.

Let’s see if this cute idea leads anywhere. Think of a Dirichlet operator H : \mathbb{R}^n \to \mathbb{R}^n as a circuit made of resistors. What could a vector \psi \in \mathbb{R}^n mean? It assigns a real number to each vertex of our graph. The only sensible option is for this number to be the electric potential at that point in our circuit. So let’s try that.

Now, what’s

\langle \psi, H \psi \rangle  ?

In quantum mechanics this would be a very sensible thing to look at: it would give us the expected value of the Hamiltonian H in the state \psi. But what does it mean in the land of electrical circuits?

Up to a constant fudge factor, it turns out to be the power consumed by the electrical circuit!

Let’s see why. First, remember that when a current flows along a wire, power gets consumed. In other words, electrostatic potential energy gets turned into heat. The power consumed is

P = V I

where V is the voltage across the wire and I is the current flowing along the wire. If we assume our wire has resistance R we also have Ohm’s law:

I = V / R

so

\displaystyle{ P = \frac{V^2}{R} }

If we write this using the conductance instead of the resistance R, we get

P = \textrm{conductance} \; V^2

But our electrical circuit has lots of wires, so the power it consumes will be a sum of terms like this. We’re assuming H_{i j} is the conductance of the wire from the ith vertex to the jth, or zero if there’s no wire connecting them. And by definition, the voltage across this wire is the difference in electrostatic potentials at the two ends: \psi_i - \psi_j. So, the total power consumed is

\displaystyle{ P = \sum_{i \ne j}  H_{i j} (\psi_i - \psi_j)^2 }

This is nice, but what does it have to do with \langle \psi , H \psi \rangle?

The answer is here:

Theorem. If H : \mathbb{R}^n \to \mathbb{R}^n is any Dirichlet operator, and \psi \in \mathbb{R}^n is any vector, then

\displaystyle{ \langle \psi , H \psi \rangle = -\frac{1}{2} \sum_{i \ne j}  H_{i j} (\psi_i - \psi_j)^2 }

Proof. Let’s start with the formula for power:

\displaystyle{ P = \sum_{i \ne j}  H_{i j} (\psi_i - \psi_j)^2 }

Note that this sum includes the condition i \ne j, since we only have wires going between distinct vertices. But the summand is zero if i = j, so we also have

\displaystyle{ P = \sum_{i, j}  H_{i j} (\psi_i - \psi_j)^2 }

Expanding the square, we get

\displaystyle{ P = \sum_{i, j}  H_{i j} \psi_i^2 - 2 H_{i j} \psi_i \psi_j + H_{i j} \psi_j^2 }

The middle term looks promisingly similar to \langle \psi, H \psi \rangle, but what about the other two terms? Because H_{i j} = H_{j i}, they’re equal:

\displaystyle{ P = \sum_{i, j} - 2 H_{i j} \psi_i \psi_j + 2 H_{i j} \psi_j^2  }

And in fact they’re zero! Since H is infinitesimal stochastic, we have

\displaystyle{ \sum_i H_{i j} = 0 }

so

\displaystyle{ \sum_i H_{i j} \psi_j^2 = 0 }

and it’s still zero when we sum over j. We thus have

\displaystyle{ P = - 2 \sum_{i, j} H_{i j} \psi_i \psi_j }

But since \psi_i is real, this is -2 times

\displaystyle{ \langle \psi, H \psi \rangle  = \sum_{i, j}  H_{i j} \overline{\psi}_i \psi_j }

So, we’re done.   █
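If you'd like a sanity check on the algebra, here's a quick numerical test of the theorem in Python with NumPy. It builds a random Dirichlet operator by the recipe above and compares the two sides of the identity (this is just a spot check, not a proof):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# Build a random Dirichlet operator: nonnegative symmetric off-diagonal
# part, with the diagonal chosen so each column sums to zero.
H = np.triu(rng.uniform(0, 1, size=(n, n)), k=1)
H = H + H.T
np.fill_diagonal(H, -H.sum(axis=0))

psi = rng.normal(size=n)

# Left side: the quadratic form <psi, H psi>.
lhs = psi @ H @ psi
# Right side: -1/2 times the power consumed by the circuit.
rhs = -0.5 * sum(H[i, j] * (psi[i] - psi[j]) ** 2
                 for i in range(n) for j in range(n) if i != j)

assert np.isclose(lhs, rhs)
assert lhs <= 0   # so a Dirichlet operator is negative semidefinite
```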

An instant consequence of this theorem is that a Dirichlet operator has

\langle \psi , H \psi \rangle \le 0

for all \psi. Actually most people use the opposite sign convention in defining infinitesimal stochastic operators. This makes the off-diagonal entries H_{i j} \le 0, which is mildly annoying, but it gives

\langle \psi , H \psi \rangle \ge 0

which is nice. When H is a Dirichlet operator, defined with this opposite sign convention, \langle \psi , H \psi \rangle is called a Dirichlet form.

The big picture

Maybe it’s a good time to step back and see where we are.

So far we’ve been exploring the analogy between stochastic mechanics and quantum mechanics. Where do networks come in? Well, they’ve actually come in twice so far:

1) First we saw that Petri nets can be used to describe stochastic or quantum processes where things of different kinds randomly react and turn into other things. A Petri net is a kind of network like this:

The different kinds of things are the yellow circles; we called them states, because sometimes we think of them as different states of a single kind of thing. The reactions where things turn into other things are the blue squares: we called them transitions. We label the transitions by numbers to say the rates at which they occur.

2) Then we looked at stochastic or quantum processes where in each transition a single thing turns into a single thing. We can draw these as Petri nets where each transition has just one state as input and one state as output. But we can also draw them as directed graphs with edges labelled by numbers:

Now the dark blue boxes are states and the edges are transitions!

Today we looked at a special case of the second kind of network: the Dirichlet operators. For these the ‘forward’ transition rate H_{i j} equals the ‘reverse’ rate H_{j i}, so our graph can be undirected: no arrows on the edges. And for these the rates H_{i i} are determined by the rest, so we can omit the edges from vertices to themselves:

The result can be seen as an electrical circuit made of resistors! So we’re building up a little dictionary:

• Stochastic mechanics: \psi_i is a probability and H_{i j} is a transition rate (probability per time).

• Quantum mechanics: \psi_i is an amplitude and H_{i j} is a transition rate (amplitude per time).

• Circuits made of resistors: \psi_i is a voltage and H_{i j} is a conductance.

This dictionary may seem rather odd—especially the third item, which looks completely different from the first two! But that’s good: when things aren’t odd, we don’t get many new ideas. The whole point of this ‘network theory’ business is to think about networks from many different viewpoints and let the sparks fly!

Actually, this particular oddity is well-known in certain circles. We’ve been looking at the discrete version, where we have a finite set of states. But in the continuum, the classic example of a Dirichlet operator is the Laplacian H = \nabla^2. And then we have:

• The heat equation:

\frac{d}{d t} \psi = \nabla^2 \psi

is fundamental to stochastic mechanics.

• The Schrödinger equation:

\frac{d}{d t} \psi = -i \nabla^2 \psi

is fundamental to quantum mechanics.

• The Poisson equation:

\nabla^2 \psi = -\rho

is fundamental to electrostatics.

Briefly speaking, electrostatics is the study of how the electric potential \psi depends on the charge density \rho. The theory of electrical circuits made of resistors can be seen as a special case, at least when the current isn’t changing with time.

I’ll say a lot more about this… but not today! If you want to learn more, this is a great place to start:

• P. G. Doyle and J. L. Snell, Random Walks and Electric Networks, Mathematical Association of America, Washington DC, 1984.

This free online book explains, in a really fun informal way, how random walks on graphs are related to electrical circuits made of resistors. To dig deeper into the continuum case, try:

• M. Fukushima, Dirichlet Forms and Markov Processes, North-Holland, Amsterdam, 1980.


Network Theory (Part 15)

26 October, 2011

Last time we saw how to get a graph whose vertices are states of a molecule and whose edges are transitions between states. We focused on two beautiful but not completely realistic examples that both give rise to the same highly symmetrical graph: the ‘Desargues graph’.

Today I’ll start with a few remarks about the Desargues graph. Then I’ll get to work showing how any graph gives:

• A Markov process, namely a random walk on the graph.

• A quantum process, where instead of having a probability to hop from vertex to vertex as time passes, we have an amplitude.

The trick is to use an operator called the ‘graph Laplacian’, a discretized version of the Laplacian which happens to be both infinitesimal stochastic and self-adjoint. As we saw in Part 12, such an operator will give rise both to a Markov process and a quantum process (that is, a one-parameter unitary group).

The most famous operator that’s both infinitesimal stochastic and self-adjoint is the Laplacian, \nabla^2. Because it’s both, the Laplacian shows up in two important equations: one in stochastic mechanics, the other in quantum mechanics.

• The heat equation:

\displaystyle{ \frac{d}{d t} \psi = \nabla^2 \psi }

describes how the probability \psi(x) of a particle being at the point x smears out as the particle randomly walks around:

The corresponding Markov process is called ‘Brownian motion’.

• The Schrödinger equation:

\displaystyle{ \frac{d}{d t} \psi = -i \nabla^2 \psi }

describes how the amplitude \psi(x) of a particle being at the point x wiggles about as the particle ‘quantumly’ walks around.

Both these equations have analogues where we replace space by a graph, and today I’ll describe them.

Drawing the Desargues graph

First I want to show you a nice way to draw the Desargues graph. For this it’s probably easiest to go back to our naive model of an ethyl cation:

Even though ethyl cations don’t really look like this, and we should be talking about some trigonal bipyramidal molecule instead, it won’t affect the math to come. Mathematically, the two problems are isomorphic! So let’s stick with this nice simple picture.

We can be a bit more abstract, though. A state of the ethyl cation is like having 5 balls, with 3 in one pile and 2 in the other. And we can focus on the first pile and forget the second, because whatever isn’t in the first pile must be in the second.

Of course a mathematician calls a pile of things a ‘set’, and calls those things ‘elements’. So let’s say we’ve got a set with 5 elements. Draw a red dot for each 2-element subset, and a blue dot for each 3-element subset. Draw an edge between a red dot and a blue dot whenever the 2-element subset is contained in the 3-element subset. We get the Desargues graph.

That’s true by definition. But I never proved that any of the pictures I showed you are correct! For example, this picture shows the Desargues graph:

but I never really proved this fact—and I won’t now, either.

To draw a picture we know is correct, it’s actually easier to start with a big graph that has vertices for all the subsets of our 5-element set. If we draw an edge whenever an n-element subset is contained in an (n+1)-element subset, the Desargues graph will be sitting inside this big graph.

Here’s what the big graph looks like:

This graph has 2^5 vertices. It’s actually a picture of a 5-dimensional hypercube! The vertices are arranged in columns. There’s

• one 0-element subset,

• five 1-element subsets,

• ten 2-element subsets,

• ten 3-element subsets,

• five 4-element subsets,

• one 5-element subset.

So, the numbers of vertices in each column go like this:

1 \quad 5 \quad 10 \quad 10 \quad 5 \quad 1

which is a row in Pascal’s triangle. We get the Desargues graph if we keep only the vertices corresponding to 2- and 3-element subsets, like this:

It’s less pretty than our earlier picture, but at least there’s no mystery to it. Also, it shows that the Desargues graph can be generalized in various ways. For example, there’s a theory of bipartite Kneser graphs H(n,k). The Desargues graph is H(5,2).
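The whole construction is concrete enough to check by machine. Here's a sketch in Python using only the standard library: it builds the big graph of all subsets of a 5-element set, confirms the Pascal's-triangle column counts, and then extracts the Desargues graph as the layer of 2- and 3-element subsets:

```python
from itertools import combinations
from math import comb

S = range(5)
# Vertices of the big graph: all subsets of a 5-element set.
subsets = [frozenset(c) for k in range(6) for c in combinations(S, k)]
assert len(subsets) == 2 ** 5   # the 5-dimensional hypercube

# Column sizes form a row of Pascal's triangle: 1 5 10 10 5 1.
assert [sum(1 for s in subsets if len(s) == k) for k in range(6)] == \
       [comb(5, k) for k in range(6)]

# Edges: an n-element subset contained in an (n+1)-element subset.
edges = [(a, b) for a in subsets for b in subsets
         if len(b) == len(a) + 1 and a < b]

# Keep only the 2- and 3-element subsets: the Desargues graph.
verts = [s for s in subsets if len(s) in (2, 3)]
des_edges = [(a, b) for (a, b) in edges if len(a) == 2 and len(b) == 3]
assert len(verts) == 20
assert len(des_edges) == 30
for v in verts:                  # every vertex has degree 3
    assert sum(1 for e in des_edges if v in e) == 3
```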

Desargues’ theorem

I can’t resist answering this question: why is it called the ‘Desargues graph’? This name comes from Desargues’ theorem, a famous result in projective geometry. Suppose you have two triangles ABC and abc, like this:

Suppose the lines Aa, Bb, and Cc all meet at a single point, the ‘center of perspectivity’. Then the point of intersection of ab and AB, the point of intersection of ac and AC, and the point of intersection of bc and BC all lie on a single line, the ‘axis of perspectivity’. The converse is true too. Quite amazing!

The Desargues configuration consists of all the actors in this drama:

• 10 points: A, B, C, a, b, c, the center of perspectivity, and the three points on the axis of perspectivity

and

• 10 lines: Aa, Bb, Cc, AB, AC, BC, ab, ac, bc and the axis of perspectivity

Given any configuration of points and lines, we can form a graph called its Levi graph by drawing a vertex for each point or line, and drawing edges to indicate which points lie on which lines. And now for the punchline: the Levi graph of the Desargues configuration is the ‘Desargues-Levi graph’!—or Desargues graph, for short.

Alas, I don’t know how this is relevant to anything I’ve discussed. For now it’s just a tantalizing curiosity.

A random walk on the Desargues graph

Back to business! I’ve been telling you about the analogy between quantum mechanics and stochastic mechanics. This analogy becomes especially interesting in chemistry, which lies on the uneasy borderline between quantum and stochastic mechanics.

Fundamentally, of course, atoms and molecules are described by quantum mechanics. But sometimes chemists describe chemical reactions using stochastic mechanics instead. When can they get away with this? Apparently whenever the molecules involved are big enough and interacting with their environment enough for ‘decoherence’ to kick in. I won’t attempt to explain this now.

Let’s imagine we have a molecule of iron pentacarbonyl with—here’s the unrealistic part, but it’s not really too bad—distinguishable carbonyl groups:

Iron pentacarbonyl is liquid at room temperatures, so as time passes, each molecule will bounce around and occasionally do a maneuver called a ‘pseudorotation’:

We can approximately describe this process as a random walk on a graph whose vertices are states of our molecule, and whose edges are transitions between states—namely, pseudorotations. And as we saw last time, this graph is the Desargues graph:

Note: all the transitions are reversible here. And thanks to the enormous amount of symmetry, the rates of all these transitions must be equal.

Let’s write V for the set of vertices of the Desargues graph. A probability distribution of states of our molecule is a function

\displaystyle{ \psi : V \to [0,\infty) }

with

\displaystyle{ \sum_{x \in V} \psi(x) = 1 }

We can think of these probability distributions as living in this vector space:

L^1(V) = \{ \psi: V \to \mathbb{R} \}

I’m calling this space L^1 because of the general abstract nonsense explained in Part 12: probability distributions on any measure space live in a vector space called L^1. Today that notation is overkill, since every function on V lies in L^1. But please humor me.

The point is that we’ve got a general setup that applies here. There’s a Hamiltonian:

H : L^1(V) \to L^1(V)

describing the rate at which the molecule randomly hops from one state to another… and the probability distribution \psi \in L^1(V) evolves in time according to the equation:

\displaystyle{ \frac{d}{d t} \psi(t) = H \psi(t) }

But what’s the Hamiltonian H? It’s very simple, because it’s equally likely for the state to hop from any vertex to any other vertex that’s connected to that one by an edge. Why? Because the problem has so much symmetry that nothing else makes sense.

So, let’s write E for the set of edges of the Desargues graph. We can think of this as a subset of V \times V by saying (x,y) \in E when x is connected to y by an edge. Then

\displaystyle{ (H \psi)(x) =  \sum_{y \,\, \textrm{such that} \,\, (x,y) \in E} \!\!\!\!\!\!\!\!\!\!\! \psi(y) \quad - \quad 3 \psi(x) }

We’re subtracting 3 \psi(x) because there are 3 edges coming out of each vertex x, so this is the rate at which the probability of staying at x decreases. We could multiply this Hamiltonian by a constant if we wanted the random walk to happen faster or slower… but let’s not.

The next step is to solve this discretized version of the heat equation:

\displaystyle{ \frac{d}{d t} \psi(t) = H \psi(t) }

Abstractly, the solution is easy:

\psi(t) = \exp(t H) \psi(0)

But to actually compute \exp(t H), we might want to diagonalize the operator H. In this particular example, we could take advantage of the enormous symmetry of the Desargues graph. Its symmetry group includes the permutation group S_5, so we could take the vector space L^1(V) and break it up into irreducible representations of S_5. Each of these will give an eigenspace of H, so by this method we can diagonalize H. I’d sort of like to try this… but it’s a big digression, so I won’t. At least, not today!

Graph Laplacians

The Hamiltonian we just saw is an example of a ‘graph Laplacian’. We can write down such a Hamiltonian for any graph, but it gets a tiny bit more complicated when different vertices have different numbers of edges coming out of them.

The word ‘graph’ means lots of things, but right now I’m talking about simple graphs. Such a graph has a set of vertices V and a set of edges E \subseteq V \times V, such that

(x,y) \in E \implies (y,x) \in E

which says the edges are undirected, and

(x,x) \notin E

which says there are no loops. Let d(x) be the degree of the vertex x, meaning the number of edges coming out of it.

Then the graph Laplacian is this operator on L^1(V):

\displaystyle{ (H \psi)(x) =  \sum_{y \,\, \textrm{such that} \, \,(x,y) \in E} \!\!\!\!\!\!\!\!\!\!\! \psi(y) \quad - \quad d(x) \psi(x) }

There is a huge amount to say about graph Laplacians! If you want, you can get started here:

• Michael William Newman, The Laplacian Spectrum of Graphs, Master’s thesis, Department of Mathematics, University of Manitoba, 2000.

But for now, let’s just say that \exp(t H) is a Markov process describing a random walk on the graph, where hopping from one vertex to any neighboring vertex has unit probability per unit time. We can make the hopping faster or slower by multiplying H by a constant. And here is a good time to admit that most people use a graph Laplacian that’s the negative of ours, and write time evolution as \exp(-t H). The advantage is that then the eigenvalues of the Laplacian are \ge 0.

But what matters most is this. We can write the operator H as a matrix whose entry H_{x y} is 1 when there’s an edge from x to y and 0 otherwise, except when x = y, in which case the entry is -d(x). And then:

Puzzle 1. Show that for any finite graph, the graph Laplacian H is infinitesimal stochastic, meaning that:

\displaystyle{ \sum_{x \in V} H_{x y} = 0 }

and

x \ne y \implies  H_{x y} \ge 0

This fact implies that for any t \ge 0, the operator \exp(t H) is stochastic—just what we need for a Markov process.
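While Puzzle 1 asks for a proof, it's easy to check numerically on a sample graph, along with the consequence that \exp(t H) is stochastic. Here's a sketch in Python with NumPy; the particular 5-vertex graph is an arbitrary choice:

```python
import numpy as np

# An arbitrary simple graph on 5 vertices, given as an edge set.
edges = {(0, 1), (1, 2), (2, 3), (3, 4), (4, 0), (0, 2)}

n = 5
H = np.zeros((n, n))
for (x, y) in edges:
    H[x, y] = H[y, x] = 1
np.fill_diagonal(H, -H.sum(axis=0))   # diagonal entry is -d(x)

# Infinitesimal stochastic: columns sum to zero, off-diagonal >= 0.
assert np.allclose(H.sum(axis=0), 0)
assert all(H[x, y] >= 0 for x in range(n) for y in range(n) if x != y)

# Hence exp(tH) is stochastic for t >= 0: columns sum to 1, entries >= 0.
eigvals, Q = np.linalg.eigh(H)        # H is symmetric, so diagonalize
U = Q @ np.diag(np.exp(0.7 * eigvals)) @ Q.T   # exp(tH) at t = 0.7
assert np.allclose(U.sum(axis=0), 1)
assert np.all(U >= -1e-12)
```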

But we could also use H as a Hamiltonian for a quantum system, if we wanted. Now we think of \psi(x) as the amplitude for being in the state x \in V. But now \psi is a function

\psi : V \to \mathbb{C}

with

\displaystyle{ \sum_{x \in V} |\psi(x)|^2 = 1 }

We can think of this function as living in the Hilbert space

L^2(V) = \{ \psi: V \to \mathbb{C} \}

where the inner product is

\langle \phi, \psi \rangle = \displaystyle{ \sum_{x \in V} \overline{\phi(x)} \psi(x) }

Puzzle 2. Show that for any finite graph, the graph Laplacian H: L^2(V) \to L^2(V) is self-adjoint, meaning that:

H_{x y} = \overline{H}_{y x}

This implies that for any t \in \mathbb{R}, the operator \exp(-i t H) is unitary—just what we need for a one-parameter unitary group. So, we can take this version of Schrödinger’s equation:

\displaystyle{ \frac{d}{d t} \psi = -i H \psi }

and solve it:

\displaystyle{ \psi(t) = \exp(-i t H) \psi(0) }

and we’ll know that time evolution is unitary!
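We can verify this unitarity numerically too. Here's a sketch in Python with NumPy, using the graph Laplacian of the complete graph on 3 vertices (an arbitrary small example) and the same diagonalize-and-exponentiate trick:

```python
import numpy as np

# Graph Laplacian of the complete graph K3: each vertex has degree 2.
H = np.array([[-2.,  1.,  1.],
              [ 1., -2.,  1.],
              [ 1.,  1., -2.]])

# H is self-adjoint, so exp(-itH) is unitary: diagonalize and exponentiate.
eigvals, Q = np.linalg.eigh(H)
t = 1.3
U = Q @ np.diag(np.exp(-1j * t * eigvals)) @ Q.conj().T

assert np.allclose(U.conj().T @ U, np.eye(3))   # unitarity

# Unitary time evolution preserves the norm of any quantum state.
psi0 = np.array([1.0, 0.0, 0.0], dtype=complex)
psi = U @ psi0
assert np.isclose(np.linalg.norm(psi), 1.0)
```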

So, we’re in a dream world where we can do stochastic mechanics and quantum mechanics with the same Hamiltonian. I’d like to exploit this somehow, but I’m not quite sure how. Of course physicists like to use a trick called Wick rotation where they turn quantum mechanics into stochastic mechanics by replacing time by imaginary time. We can do that here. But I’d like to do something new, special to this context.

Maybe I should learn more about chemistry and graph theory. Of course, graphs show up in at least two ways: first for drawing molecules, and second for drawing states and transitions, as I’ve been doing. These books are supposed to be good:

• Danail Bonchev and D.H. Rouvray, eds., Chemical Graph Theory: Introduction and Fundamentals, Taylor and Francis, 1991.

• Nenad Trinajstic, Chemical Graph Theory, CRC Press, 1992.

• R. Bruce King, Applications of Graph Theory and Topology in Inorganic Cluster Coordination Chemistry, CRC Press, 1993.

The second is apparently the magisterial tome of the subject. The prices of these books are absurd: for example, Amazon sells the first for $300, and the second for $222. Luckily the university here should have them…


Network Theory (Part 14)

15 October, 2011

We’ve been doing a lot of hard work lately. Let’s take a break and think about a fun example from chemistry!

The ethyl cation

Suppose you start with a molecule of ethane, which has 2 carbons and 6 hydrogens arranged like this:

Then suppose you remove one hydrogen. The result is a positively charged ion, or ‘cation’. When I was a kid, I thought the opposite of a cation should be called a ‘dogion’. Alas, it’s not.

This particular cation, formed from removing one hydrogen from an ethane molecule, is called an ‘ethyl cation’. People used to think it looked like this:

They also thought a hydrogen could hop from the carbon with 3 hydrogens attached to it to the carbon with 2. So, they drew a graph with a vertex for each way the hydrogens could be arranged, and an edge for each hop. It looks really cool:

The red vertices come from arrangements where the first carbon has 2 hydrogens attached to it, and the blue vertices come from those where the second carbon has 2 hydrogens attached to it. So, each edge goes between a red vertex and a blue vertex.

This graph has 20 vertices, which are arrangements or ‘states’ of the ethyl cation. It has 30 edges, which are hops or ‘transitions’. Let’s see why those numbers are right.

First I need to explain the rules of the game. The rules say that the 2 carbon atoms are distinguishable: there’s a ‘first’ one and a ‘second’ one. The 5 hydrogen atoms are also distinguishable. But, all we care about is which carbon atom each hydrogen is bonded to: we don’t care about further details of its location. And we require that 2 of the hydrogens are bonded to one carbon, and 3 to the other.

If you’re a physicist, you may wonder why the rules work this way: after all, at a fundamental level, identical particles aren’t really distinguishable. I’m afraid I can’t give a fully convincing explanation right now: I’m just reporting the rules as they were told to me!

Given these rules, there are 2 choices of which carbon has two hydrogens attached to it. Then there are

\displaystyle{ \binom{5}{2} = \frac{5 \times 4}{2 \times 1} = 10}

choices of which two hydrogens are attached to it. This gives a total of 2 × 10 = 20 states. These are the vertices of our graph: 10 red and 10 blue.

The edges of the graph are transitions between states. Any hydrogen in the group of 3 can hop over to the group of 2. There are 3 choices for which hydrogen atom makes the jump. So, starting from any vertex in the graph there are 3 edges. This means there are 3 \times 20 / 2 = 30 edges.

Why divide by 2? Because each edge touches 2 vertices. We have to avoid double-counting them.
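This counting argument can be checked by brute force. Here's a sketch in Python using only the standard library: it enumerates the states and transitions exactly as described above and confirms we get 20 vertices and 30 edges, each edge joining a red vertex to a blue one:

```python
from itertools import combinations

hydrogens = set(range(5))

# A state: which carbon (0 or 1) carries the 2-hydrogen group,
# together with the set of 2 hydrogens in that group.
states = [(c, frozenset(p)) for c in (0, 1)
          for p in combinations(hydrogens, 2)]
assert len(states) == 20            # 2 choices x C(5,2) = 20 vertices

# A transition: one of the 3 hydrogens in the big group hops over.
# Afterwards the *other* carbon has the 2-hydrogen group, namely the
# old 3-hydrogen group minus the hydrogen that jumped.
def neighbors(state):
    c, pair = state
    rest = hydrogens - pair
    return [(1 - c, frozenset(rest - {h})) for h in rest]

edges = {frozenset({s, t}) for s in states for t in neighbors(s)}
assert all(len(neighbors(s)) == 3 for s in states)   # 3 hops per state
assert len(edges) == 30             # 3 x 20 / 2 = 30 edges

# Every edge goes between a red (c = 0) and a blue (c = 1) vertex.
assert all({a[0], b[0]} == {0, 1}
           for (a, b) in (tuple(e) for e in edges))
```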

The Desargues graph

The idea of using this graph in chemistry goes back to this paper:

• A. T. Balaban, D. Fǎrcaşiu and R. Bǎnicǎ, Graphs of multiple 1,2-shifts in carbonium ions and related systems, Rev. Roum. Chim. 11 (1966), 1205.

This paper is famous because it was the first to use graphs in chemistry to describe molecular transitions, as opposed to using them as pictures of molecules!

But this particular graph was already famous for other reasons. It’s called the Desargues-Levi graph, or Desargues graph for short:

Desargues graph, Wikipedia.

Later I’ll say why it’s called this.

There are lots of nice ways to draw the Desargues graph. For example:

The reason why we can draw such pretty pictures is that the Desargues graph is very symmetrical. Clearly any permutation of the 5 hydrogens acts as a symmetry of the graph, and so does any permutation of the 2 carbons. This gives a symmetry group S_5 \times S_2, which has 5! \times 2! = 240 elements. And in fact this turns out to be the full symmetry group of the Desargues graph.

The Desargues graph, its symmetry group, and its applications to chemistry are discussed here:

• Milan Randić, Symmetry properties of graphs of interest in chemistry: II: Desargues-Levi graph, Int. J. Quantum Chem. 15 (1979), 663-682.

The ethyl cation, revisited

We can try to describe the ethyl cation using probability theory. If at any moment its state corresponds to some vertex of the Desargues graph, and it hops randomly along edges as time goes by, it will trace out a random walk on the Desargues graph. This is a nice example of a Markov process!

We could also try to describe the ethyl cation using quantum mechanics. Then, instead of having a probability of hopping along an edge, it has an amplitude of doing so. But as we’ve seen, a lot of similar math will still apply.

It should be fun to compare the two approaches. But I bet you’re wondering which approach is correct. This is a somewhat tricky question, at least for me. The answer would seem to depend on how much the ethyl cation is interacting with its environment—for example, bouncing off other molecules. When a system is interacting a lot with its environment, a probabilistic approach seems to be more appropriate. The relevant buzzword is ‘environmentally induced decoherence’.

However, there’s something much more basic I have to tell you about.

After the paper by Balaban, Fǎrcaşiu and Bǎnicǎ came out, people gradually realized that the ethyl cation doesn’t really look like the drawing I showed you! It’s what chemists call a ‘nonclassical’ ion. What they mean is this: its actual structure is not what you get by taking the traditional ball-and-stick model of an ethane molecule and ripping off a hydrogen. The ethyl cation really looks like this:

For more details, and pictures that you can actually rotate, see:

• Stephen Bacharach, Ethyl cation, Computational Organic Chemistry.

So, if we stubbornly insist on applying the Desargues graph to realistic chemistry, we need to find some other molecule to apply it to.

Trigonal bipyramidal molecules

Luckily, there are lots of options! They’re called trigonal bipyramidal molecules. They look like this:

The 5 balls on the outside are called ‘ligands’: they could be atoms or bunches of atoms. In chemistry, ‘ligand’ just means something that’s stuck onto a central thing. For example, in phosphorus pentachloride the ligands are chlorine atoms, all attached to a central phosphorus atom:

It’s a colorless solid, but as you might expect, it’s pretty nasty stuff: it’s not flammable, but it reacts with water or heat to produce toxic chemicals like hydrogen chloride.

Another example is iron pentacarbonyl, where 5 carbon-oxygen ligands are attached to a central iron atom:

You can make this stuff by letting powdered iron react with carbon monoxide. It’s a straw-colored liquid with a pungent smell!

Whenever you’ve got a molecule of this shape, the ligands come in two kinds. There are the 2 ‘axial’ ones, and the 3 ‘equatorial’ ones:

And the molecule has 20 states… but only if we count the states a certain way. We have to treat all 5 ligands as distinguishable, but think of two arrangements of them as the same if we can rotate one to get the other. The trigonal bipyramid has a rotational symmetry group with 6 elements. So, there are 5! / 6 = 20 states.

The transitions between states are devilishly tricky. They’re called pseudorotations, and they look like this:

If you look very carefully, you’ll see what’s going on. First the 2 axial ligands move towards each other to become equatorial.
Now the equatorial ones are no longer in the horizontal plane: they’re in the plane facing us! Then 2 of the 3 equatorial ones swing out to become axial. This fancy dance is called the Berry pseudorotation mechanism.

To get from one state to another this way, we have to pick 2 of the 3 equatorial ligands to swing out and become axial. There are 3 choices here. So, if we draw a graph with states as vertices and transitions as edges, it will have 20 vertices and 20 × 3 / 2 = 30 edges. That sounds suspiciously like the Desargues graph!

Puzzle 1. Show that the graph with states of a trigonal bipyramidal molecule as vertices and pseudorotations as edges is indeed the Desargues graph.
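If you don’t feel like doing Puzzle 1 by hand, here’s a computational sanity check in Python—a sketch, not a proof, and the encoding of states is my own: a state is an axial pair of ligands together with a chirality bit, and a Berry pseudorotation replaces the axial pair by two of the three equatorial ligands while flipping the chirality. Forgetting chirality, two axial pairs are adjacent exactly when they’re disjoint, which gives the Petersen graph—and the bipartite double cover of the Petersen graph is the Desargues graph.

```python
from itertools import combinations

ligands = frozenset(range(5))

# A state: which pair of ligands is axial (10 choices), plus a chirality bit
# (2 choices) -- giving the 5!/6 = 20 states.
states = [(frozenset(p), c) for p in combinations(ligands, 2) for c in (0, 1)]
assert len(states) == 20

# A Berry pseudorotation: pick a pivot among the 3 equatorial ligands; the
# other 2 equatorial ligands become axial, and the chirality flips.
edges = set()
for axial, c in states:
    equatorial = ligands - axial
    for pivot in equatorial:
        new_axial = equatorial - {pivot}
        edges.add(frozenset({(axial, c), (new_axial, 1 - c)}))

assert len(edges) == 30                       # 20 * 3 / 2 edges
degree = {s: 0 for s in states}
for e in edges:
    for s in e:
        degree[s] += 1
assert all(d == 3 for d in degree.values())   # 3-regular, like Desargues

# Forgetting chirality, two axial pairs are adjacent iff they are disjoint:
# that's the Petersen graph, and our graph is its bipartite double cover,
# which is the Desargues graph.
pair_edges = set()
for e in edges:
    (a, _), (b, _) = tuple(e)
    pair_edges.add(frozenset({a, b}))
assert len(pair_edges) == 15                  # Petersen has 15 edges
for pe in pair_edges:
    a, b = tuple(pe)
    assert a.isdisjoint(b)
print("20 vertices, 30 edges, 3-regular: the double cover of Petersen")
```

Since chirality flips at every step, the chirality bit also exhibits the bipartition of the Desargues graph.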

I think this fact was first noticed here:

• Paul C. Lauterbur and Fausto Ramirez, Pseudorotation in trigonal-bipyramidal molecules, J. Am. Chem. Soc. 90 (1968), 6722–6726.

Okay, enough for now! Next time I’ll say more about the Markov process or quantum process corresponding to a random walk on the Desargues graph. But since the Berry pseudorotation mechanism is so hard to visualize, I’ll pretend that the ethyl cation looks like this:

and I’ll use this picture to help us think about the Desargues graph.

That’s okay: everything we’ll figure out can easily be translated to apply to the real-world situation of a trigonal bipyramidal molecule. The virtue of math is that when two situations are ‘mathematically the same’, or ‘isomorphic’, we can talk about either one, and the results automatically apply to the other. This is true even if the one we talk about doesn’t actually exist in the real world!


Network Theory (Part 13)

11 October, 2011

Unlike some recent posts, this will be very short. I merely want to show you the quantum and stochastic versions of Noether’s theorem, side by side.

Having made my sacrificial offering to the math gods last time by explaining how everything generalizes when we replace our finite set X of states by an infinite set or an even more general measure space, I’ll now relax and state Noether’s theorem only for a finite set. If you’re the sort of person who finds that unsatisfactory, you can do the generalization yourself.

Two versions of Noether’s theorem

Let me write the quantum and stochastic Noether’s theorem so they look almost the same:

Theorem. Let X be a finite set. Suppose H is a self-adjoint operator on L^2(X), and let O be an observable. Then

[O,H] = 0

if and only if for all states \psi(t) obeying Schrödinger’s equation

\displaystyle{ \frac{d}{d t} \psi(t) = -i H \psi(t) }

the expected value of O in the state \psi(t) does not change with t.

Theorem. Let X be a finite set. Suppose H is an infinitesimal stochastic operator on L^1(X), and let O be an observable. Then

[O,H] =0

if and only if for all states \psi(t) obeying the master equation

\displaystyle{ \frac{d}{d t} \psi(t) = H \psi(t) }

the expected values of O and O^2 in the state \psi(t) do not change with t.

This makes the big difference stick out like a sore thumb: in the quantum version we only need the expected value of O, while in the stochastic version we need the expected values of O and O^2!

Brendan Fong proved the stochastic version of Noether’s theorem in Part 11. Now let’s do the quantum version.

Proof of the quantum version

My statement of the quantum version was silly in a couple of ways. First, I spoke of the Hilbert space L^2(X) for a finite set X, but any finite-dimensional Hilbert space will do equally well. Second, I spoke of the “self-adjoint operator” H and the “observable” O, but in quantum mechanics an observable is the same thing as a self-adjoint operator!

Why did I talk in such a silly way? Because I was attempting to emphasize the similarity between quantum mechanics and stochastic mechanics. But they’re somewhat different. For example, in stochastic mechanics we have two very different concepts: infinitesimal stochastic operators, which generate symmetries, and functions on our set X, which are observables. But in quantum mechanics something wonderful happens: self-adjoint operators both generate symmetries and are observables! So, my attempt was a bit strained.

Let me state and prove a less silly quantum version of Noether’s theorem, which implies the one above:

Theorem. Suppose H and O are self-adjoint operators on a finite-dimensional Hilbert space. Then

[O,H] = 0

if and only if for all states \psi(t) obeying Schrödinger’s equation

\displaystyle{ \frac{d}{d t} \psi(t) = -i H \psi(t) }

the expected value of O in the state \psi(t) does not change with t:

\displaystyle{ \frac{d}{d t} \langle \psi(t), O \psi(t) \rangle = 0 }

Proof. The trick is to compute the time derivative I just wrote down. Using Schrödinger’s equation, the product rule, and the fact that H is self-adjoint we get:

\begin{array}{ccl}  \displaystyle{ \frac{d}{d t} \langle \psi(t), O \psi(t) \rangle } &=&   \langle -i H \psi(t) , O \psi(t) \rangle + \langle \psi(t) , O (- i H \psi(t)) \rangle \\  \\  &=& i \langle \psi(t) , H O \psi(t) \rangle -i \langle \psi(t) , O H \psi(t) \rangle \\  \\  &=& - i \langle \psi(t), [O,H] \psi(t) \rangle  \end{array}

So, if [O,H] = 0, clearly the above time derivative vanishes. Conversely, if this time derivative vanishes for all states \psi(t) obeying Schrödinger’s equation, we know

\langle \psi, [O,H] \psi \rangle = 0

for all states \psi and thus all vectors in our Hilbert space. Does this imply [O,H] = 0? Yes, because i times the commutator of two self-adjoint operators is self-adjoint, and for any self-adjoint operator A we have

\forall \psi  \; \; \langle \psi, A \psi \rangle = 0 \qquad \Rightarrow \qquad A = 0

This is a well-known fact whose proof goes like this. Assume \langle \psi, A \psi \rangle = 0 for all \psi. Then to show A = 0, it is enough to show \langle \phi, A \psi \rangle = 0 for all \phi and \psi. But we have a marvelous identity:

\begin{array}{ccl} \langle \phi, A \psi \rangle &=& \frac{1}{4} \left( \langle \phi + \psi, \, A (\phi + \psi) \rangle \; - \; \langle \psi - \phi, \, A (\psi - \phi) \rangle \right. \\ && \left. +i \langle \psi + i \phi, \, A (\psi + i \phi) \rangle \; - \; i\langle \psi - i \phi, \, A (\psi - i \phi) \rangle \right) \end{array}

and all four terms on the right vanish by our assumption.   █

The marvelous identity up there is called the polarization identity. In plain English, it says: if you know the diagonal entries of a self-adjoint matrix in every basis, you can figure out all the entries of that matrix in every basis.

Why is it called the ‘polarization identity’? I think because it shows up in optics, in the study of polarized light.
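If you like, you can also check the polarization identity numerically. Here’s a little Python sketch with an arbitrarily chosen self-adjoint matrix and vectors, using \langle \phi, \psi \rangle = \sum_i \overline{\phi_i} \psi_i as in this post:

```python
def inner(a, b):
    """The inner product <a, b> = sum conj(a_i) b_i."""
    return sum(x.conjugate() * y for x, y in zip(a, b))

def apply(M, v):
    return [sum(M[i][j] * v[j] for j in range(len(v))) for i in range(len(M))]

A = [[1.0, 2.0 + 1.0j],
     [2.0 - 1.0j, -3.0]]            # self-adjoint (arbitrary values)
phi = [0.3 + 0.1j, -0.7j]
psi = [1.0 - 0.2j, 0.4 + 0.5j]

def Q(v):                            # the 'diagonal' quantity <v, A v>
    return inner(v, apply(A, v))

add = [p + q for p, q in zip(phi, psi)]
sub = [q - p for p, q in zip(phi, psi)]
ip  = [q + 1j * p for p, q in zip(phi, psi)]
im  = [q - 1j * p for p, q in zip(phi, psi)]

# The polarization identity: recover <phi, A psi> from diagonal quantities.
rhs = 0.25 * (Q(add) - Q(sub) + 1j * Q(ip) - 1j * Q(im))
assert abs(inner(phi, apply(A, psi)) - rhs) < 1e-12
print("polarization identity verified")
```

The same check works with any matrix A, in fact—self-adjointness isn’t needed for the identity itself.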

Comparison

In both the quantum and stochastic cases, the time derivative of the expected value of an observable O is expressed in terms of its commutator with the Hamiltonian. In the quantum case we have

\displaystyle{ \frac{d}{d t} \langle \psi(t), O \psi(t) \rangle = - i \langle \psi(t), [O,H] \psi(t) \rangle }

and for the right side to always vanish, we need [O,H] = 0, thanks to the polarization identity. In the stochastic case, a perfectly analogous equation holds:

\displaystyle{ \frac{d}{d t} \int O \psi(t) = \int [O,H] \psi(t) }

but now the right side can always vanish even without [O,H] = 0. We saw a counterexample in Part 11. There is nothing like the polarization identity to save us! To get [O,H] = 0 we need a supplementary hypothesis, for example the vanishing of

\displaystyle{ \frac{d}{d t} \int O^2 \psi(t) }
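Here’s a small Python illustration of this phenomenon, in the spirit of the Part 11 counterexample—the specific rates below are made up for illustration. The expected value of O is conserved for every state, even though [O,H] \ne 0; it’s the expected value of O^2 that gives the game away.

```python
# Only the middle state hops, and it hops to O-values 0 and 2 at equal rates,
# so the expected value of O (which is 1 there) never changes.
H = [[0.0,  1.0, 0.0],
     [0.0, -2.0, 0.0],
     [0.0,  1.0, 0.0]]    # infinitesimal stochastic: columns sum to zero
O = [0.0, 1.0, 2.0]       # the observable, as a function on the 3 states

# d/dt of int O psi is sum_i O_i (H psi)_i; check its coefficient on each
# basis state j:
for j in range(3):
    dO = sum(O[i] * H[i][j] for i in range(3))
    assert abs(dO) < 1e-12          # <O> is conserved for every state

# ...but [O,H] != 0, since ([O,H])_{ij} = (O_i - O_j) H_{ij}:
comm = [[(O[i] - O[j]) * H[i][j] for j in range(3)] for i in range(3)]
assert any(abs(comm[i][j]) > 0 for i in range(3) for j in range(3))

# ...and the expected value of O^2 is not conserved, starting from state 2:
dO2 = sum(O[i] ** 2 * H[i][1] for i in range(3))
assert abs(dO2) > 0                 # here dO2 = 0*1 + 1*(-2) + 4*1 = 2
print("<O> conserved, [O,H] nonzero, <O^2> not conserved")
```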

Okay! Starting next time we’ll change gears and look at some more examples of stochastic Petri nets and Markov processes, including some from chemistry. After some more of that, I’ll move on to networks of other sorts. There’s a really big picture here, and I’m afraid I’ve been getting caught up in the details of a tiny corner.


Network Theory (Part 12)

9 October, 2011

Last time we proved a version of Noether’s theorem for stochastic mechanics. Now I want to compare that to the more familiar quantum version.

But to do this, I need to say more about the analogy between stochastic mechanics and quantum mechanics. And whenever I try, I get pulled toward explaining some technical issues involving analysis: whether sums converge, whether derivatives exist, and so on. I’ve been trying to avoid such stuff—not because I dislike it, but because I’m afraid you might. But the more I put off discussing these issues, the more they fester and make me unhappy. In fact, that’s why it’s taken so long for me to write this post!

So, this time I will gently explore some of these issues. But don’t be scared: I’ll mainly talk about some simple big ideas. Next time I’ll discuss Noether’s theorem. I hope that by getting the technicalities out of my system, I’ll feel okay about hand-waving whenever I want.

And if you’re an expert on analysis, maybe you can help me with a question.

Stochastic mechanics versus quantum mechanics

First, we need to recall the analogy we began sketching in Part 5, and push it a bit further. The idea is that stochastic mechanics differs from quantum mechanics in two big ways:

• First, instead of complex amplitudes, stochastic mechanics uses nonnegative real probabilities. The complex numbers form a ring; the nonnegative real numbers form a mere rig, which is a ‘ring without negatives’. Rigs are much neglected in the typical math curriculum, but unjustly so: they’re almost as good as rings in many ways, and there are lots of important examples, like the natural numbers \mathbb{N} and the nonnegative real numbers, [0,\infty). For probability theory, we should learn to love rigs.

But there are, alas, situations where we need to subtract probabilities, even when the answer comes out negative: namely when we’re taking the time derivative of a probability. So sometimes we need \mathbb{R} instead of just [0,\infty).

• Second, while in quantum mechanics a state is described using a ‘wavefunction’, meaning a complex-valued function obeying

\int |\psi|^2 = 1

in stochastic mechanics it’s described using a ‘probability distribution’, meaning a nonnegative real function obeying

\int \psi = 1

So, let’s try our best to present the theories in close analogy, while respecting these two differences.

States

We’ll start with a set X whose points are states that a system can be in. Last time I assumed X was a finite set, but this post is so mathematical I might as well let my hair down and assume it’s a measure space. A measure space lets you do integrals, but a finite set is a special case, and then these integrals are just sums. So, I’ll write things like

\int f

and mean the integral of the function f over the measure space X, but if X is a finite set this just means

\sum_{x \in X} f(x)

Now, I’ve already defined the word ‘state’, but both quantum and stochastic mechanics need a more general concept of state. Let’s call these ‘quantum states’ and ‘stochastic states’:

• In quantum mechanics, the system has an amplitude \psi(x) of being in any state x \in X. These amplitudes are complex numbers with

\int | \psi |^2 = 1

We call \psi: X \to \mathbb{C} obeying this equation a quantum state.

• In stochastic mechanics, the system has a probability \psi(x) of being in any state x \in X. These probabilities are nonnegative real numbers with

\int \psi = 1

We call \psi: X \to [0,\infty) obeying this equation a stochastic state.

In quantum mechanics we often use this abbreviation:

\langle \phi, \psi \rangle = \int \overline{\phi} \psi

so that a quantum state has

\langle \psi, \psi \rangle = 1

Similarly, we could introduce this notation in stochastic mechanics:

\langle \psi \rangle = \int \psi

so that a stochastic state has

\langle \psi \rangle = 1

But this notation is a bit risky, since angle brackets of this sort often stand for expectation values of observables. So, I’ve been writing \int \psi, and I’ll keep on doing this.

In quantum mechanics, \langle \phi, \psi \rangle is well-defined whenever both \phi and \psi live in the vector space

L^2(X) = \{ \psi: X \to \mathbb{C} \; : \; \int |\psi|^2 < \infty \}

In stochastic mechanics, \langle \psi \rangle is well-defined whenever \psi lives in the vector space

L^1(X) =  \{ \psi: X \to \mathbb{R} \; : \; \int |\psi| < \infty \}

You’ll notice I wrote \mathbb{R} rather than [0,\infty) here. That’s because in some calculations we’ll need functions that take negative values, even though our stochastic states are nonnegative.

Observables

A state is a way our system can be. An observable is something we can measure about our system. They fit together: we can measure an observable when our system is in some state. If we repeat this we may get different answers, but there’s a nice formula for average or ‘expected’ answer.

• In quantum mechanics, an observable is a self-adjoint operator A on L^2(X). The expected value of A in the state \psi is

\langle \psi, A \psi \rangle

Here I’m assuming that we can apply A to \psi and get a new vector A \psi \in L^2(X). This is automatically true when X is a finite set, but in general we need to be more careful.

• In stochastic mechanics, an observable is a real-valued function A on X. The expected value of A in the state \psi is

\int A \psi

Here we’re using the fact that we can multiply A and \psi and get a new vector A \psi \in L^1(X), at least if A is bounded. Again, this is automatic if X is a finite set, but not otherwise.

Symmetries

Besides states and observables, we need ‘symmetries’, which are transformations that map states to states. We use these to describe how our system changes when we wait a while, for example.

• In quantum mechanics, an isometry is a linear map U: L^2(X) \to L^2(X) such that

\langle U \phi, U \psi \rangle = \langle \phi, \psi \rangle

for all \psi, \phi \in L^2(X). If U is an isometry and \psi is a quantum state, then U \psi is again a quantum state.

• In stochastic mechanics, a stochastic operator is a linear map U: L^1(X) \to L^1(X) such that

\int U \psi = \int \psi

and

\psi \ge 0 \; \; \Rightarrow \; \; U \psi \ge 0

for all \psi \in L^1(X). If U is stochastic and \psi is a stochastic state, then U \psi is again a stochastic state.

In quantum mechanics we are mainly interested in invertible isometries, which are called unitary operators. There are lots of these, and their inverses are always isometries. There are, however, very few stochastic operators whose inverses are stochastic:

Puzzle 1. Suppose X is a finite set. Show that every isometry U: L^2(X) \to L^2(X) is invertible, and its inverse is again an isometry.

Puzzle 2. Suppose X is a finite set. Which stochastic operators U: L^1(X) \to L^1(X) have stochastic inverses?

This is why we usually think of time evolution as being reversible in quantum mechanics, but not in stochastic mechanics! In quantum mechanics we often describe time evolution using a ‘1-parameter group’, while in stochastic mechanics we describe it using a 1-parameter semigroup… meaning that we can run time forwards, but not backwards.

But let’s see how this works in detail!

Time evolution in quantum mechanics

In quantum mechanics there’s a beautiful relation between observables and symmetries, which goes like this. Suppose that for each time t we want a unitary operator U(t) :  L^2(X) \to L^2(X) that describes time evolution. Then it makes a lot of sense to demand that these operators form a 1-parameter group:

Definition. A collection of linear operators U(t) (t \in \mathbb{R}) on some vector space forms a 1-parameter group if

U(0) = 1

and

U(s+t) = U(s) U(t)

for all s,t \in \mathbb{R}.

Note that these conditions force all the operators U(t) to be invertible.

Now suppose our vector space is a Hilbert space, like L^2(X). Then we call a 1-parameter group a 1-parameter unitary group if the operators involved are all unitary.

It turns out that 1-parameter unitary groups are either continuous in a certain way, or so pathological that you can’t even prove they exist without the axiom of choice! So, we always focus on the continuous case:

Definition. A 1-parameter unitary group is strongly continuous if U(t) \psi depends continuously on t for all \psi, in this sense:

t_i \to t \;\; \Rightarrow \; \;\|U(t_i) \psi - U(t) \psi \| \to 0

Then we get a classic result proved by Marshall Stone back in the early 1930s. You may not know him, but he was so influential at the University of Chicago during this period that it’s often called the “Stone Age”. And here’s one reason why:

Stone’s Theorem. There is a one-to-one correspondence between strongly continuous 1-parameter unitary groups on a Hilbert space and self-adjoint operators on that Hilbert space, given as follows. Given a strongly continuous 1-parameter unitary group U(t) we can always write

U(t) = \exp(-i t H)

for a unique self-adjoint operator H. Conversely, any self-adjoint operator determines a strongly continuous 1-parameter group this way. For all vectors \psi for which H \psi is well-defined, we have

\displaystyle{ \left.\frac{d}{d t} U(t) \psi \right|_{t = 0} = -i H \psi }

Moreover, for any of these vectors, if we set

\psi(t) = \exp(-i t H) \psi

we have

\displaystyle{ \frac{d}{d t} \psi(t) = - i H \psi(t) }

When U(t) = \exp(-i t H) describes the evolution of a system in time, H is called the Hamiltonian, and it has the physical meaning of ‘energy’. The equation I just wrote down is then called Schrödinger’s equation.

So, simply put, in quantum mechanics we have a correspondence between observables and nice one-parameter groups of symmetries. Not surprisingly, our favorite observable, energy, corresponds to our favorite symmetry: time evolution!

However, if you were paying attention, you noticed that I carefully avoided explaining how we define \exp(- i t H). I didn’t even say what a self-adjoint operator is. This is where the technicalities come in: they arise when H is unbounded, and not defined on all vectors in our Hilbert space.

Luckily, these technicalities evaporate for finite-dimensional Hilbert spaces, such as L^2(X) for a finite set X. Then we get:

Stone’s Theorem (Baby Version). Suppose we are given a finite-dimensional Hilbert space. In this case, a linear operator H on this space is self-adjoint iff it’s defined on the whole space and

\langle \phi , H \psi \rangle = \langle H \phi, \psi \rangle

for all vectors \phi, \psi. Given a strongly continuous 1-parameter unitary group U(t) we can always write

U(t) = \exp(- i t H)

for a unique self-adjoint operator H, where

\displaystyle{ \exp(-i t H) \psi = \sum_{n = 0}^\infty \frac{(-i t H)^n}{n!} \psi }

with the sum converging for all \psi. Conversely, any self-adjoint operator on our space determines a strongly continuous 1-parameter group this way. For all vectors \psi in our space we then have

\displaystyle{ \left.\frac{d}{d t} U(t) \psi \right|_{t = 0} = -i H \psi }

and if we set

\psi(t) = \exp(-i t H) \psi

we have

\displaystyle{ \frac{d}{d t} \psi(t) = - i H \psi(t) }
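Here’s a numerical sanity check of the baby version, with an arbitrarily chosen self-adjoint 2×2 matrix: summing the power series for \exp(-i t H) really does give a unitary operator, so it preserves \langle \psi, \psi \rangle.

```python
def mat_mul(A, B):
    n = len(A)
    return [[sum(A[i][k] * B[k][j] for k in range(n)) for j in range(n)]
            for i in range(n)]

def expm(A, terms=60):
    """exp(A) via the power series sum_n A^n / n! (fine for small matrices)."""
    n = len(A)
    result = [[1.0 if i == j else 0.0 for j in range(n)] for i in range(n)]
    power = [row[:] for row in result]
    fact = 1.0
    for k in range(1, terms):
        power = mat_mul(power, A)
        fact *= k
        result = [[result[i][j] + power[i][j] / fact for j in range(n)]
                  for i in range(n)]
    return result

# A self-adjoint H: it equals the conjugate of its transpose.
H = [[1.0, 2.0 - 1.0j],
     [2.0 + 1.0j, -1.0]]
t = 0.7
U = expm([[-1j * t * h for h in row] for row in H])   # U(t) = exp(-i t H)

# U should be unitary: U† U = 1.
Udag = [[U[j][i].conjugate() for j in range(2)] for i in range(2)]
I2 = mat_mul(Udag, U)
assert all(abs(I2[i][j] - (1.0 if i == j else 0.0)) < 1e-10
           for i in range(2) for j in range(2))

# So evolving a quantum state preserves <psi, psi>:
psi = [0.6, 0.8j]
Upsi = [sum(U[i][j] * psi[j] for j in range(2)) for i in range(2)]
norm2 = sum(abs(z) ** 2 for z in Upsi)
assert abs(norm2 - 1.0) < 1e-10
print("exp(-i t H) is unitary")
```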

Time evolution in stochastic mechanics

We’ve seen that in quantum mechanics, time evolution is usually described by a 1-parameter group of operators that comes from an observable: the Hamiltonian. Stochastic mechanics is different!

First, since stochastic operators aren’t usually invertible, we typically describe time evolution by a mere ‘semigroup’:

Definition. A collection of linear operators U(t) (t \in [0,\infty)) on some vector space forms a 1-parameter semigroup if

U(0) = 1

and

U(s+t) = U(s) U(t)

for all s, t \ge 0.

Now suppose this vector space is L^1(X) for some measure space X. We want to focus on the case where the operators U(t) are stochastic and depend continuously on t in the same sense we discussed earlier.

Definition. A 1-parameter strongly continuous semigroup of stochastic operators U(t) : L^1(X) \to L^1(X) is called a Markov semigroup.

What’s the analogue of Stone’s theorem for Markov semigroups? I don’t know a fully satisfactory answer! If you know, please tell me.

Later I’ll say what I do know—I’m not completely clueless—but for now let’s look at the ‘baby’ case where X is a finite set. Then the story is neat and complete:

Theorem. Suppose we are given a finite set X. In this case, a linear operator H on L^1(X) is infinitesimal stochastic iff it’s defined on the whole space,

\int H \psi = 0

for all \psi \in L^1(X), and the matrix of H in terms of the obvious basis obeys

H_{i j} \ge 0

for all j \ne i. Given a Markov semigroup U(t) on L^1(X), we can always write

U(t) = \exp(t H)

for a unique infinitesimal stochastic operator H, where

\displaystyle{ \exp(t H) \psi = \sum_{n = 0}^\infty \frac{(t H)^n}{n!} \psi }

with the sum converging for all \psi. Conversely, any infinitesimal stochastic operator on our space determines a Markov semigroup this way. For all \psi \in L^1(X) we then have

\displaystyle{ \left.\frac{d}{d t} U(t) \psi \right|_{t = 0} = H \psi }

and if we set

\psi(t) = \exp(t H) \psi

we have the master equation:

\displaystyle{ \frac{d}{d t} \psi(t) = H \psi(t) }
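Again we can check this numerically, with an arbitrarily chosen infinitesimal stochastic matrix: summing the power series for \exp(t H) gives a stochastic operator, so probabilities stay nonnegative and keep summing to 1.

```python
def expm(A, terms=60):
    """exp(A) via its power series, for a small real matrix A."""
    n = len(A)
    R = [[float(i == j) for j in range(n)] for i in range(n)]
    P = [row[:] for row in R]
    fact = 1.0
    for k in range(1, terms):
        P = [[sum(P[i][m] * A[m][j] for m in range(n)) for j in range(n)]
             for i in range(n)]
        fact *= k
        R = [[R[i][j] + P[i][j] / fact for j in range(n)] for i in range(n)]
    return R

# Infinitesimal stochastic: off-diagonal entries >= 0, columns sum to zero.
# (These rates are made up for illustration.)
H = [[-2.0,  1.0,  0.5],
     [ 1.0, -3.0,  0.5],
     [ 1.0,  2.0, -1.0]]
assert all(abs(sum(H[i][j] for i in range(3))) < 1e-12 for j in range(3))

U = expm([[0.3 * h for h in row] for row in H])   # U = exp(t H) at t = 0.3
for j in range(3):
    assert abs(sum(U[i][j] for i in range(3)) - 1.0) < 1e-8  # probability conserved
    assert all(U[i][j] > -1e-12 for i in range(3))           # no negative entries
print("exp(t H) is a stochastic operator")
```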

In short, time evolution in stochastic mechanics is a lot like time evolution in quantum mechanics, except it’s typically not invertible, and the Hamiltonian is typically not an observable.

Why not? Because we defined an observable to be a function A: X \to \mathbb{R}. We can think of this as giving an operator on L^1(X), namely the operator of multiplication by A. That’s a nice trick, which we used to good effect last time. However, at least when X is a finite set, this operator will be diagonal in the obvious basis consisting of functions that equal 1 at one point of X and zero elsewhere. So, it can only be infinitesimal stochastic if it’s zero!

Puzzle 3. If X is a finite set, show that any operator on L^1(X) that’s both diagonal and infinitesimal stochastic must be zero.

The Hille–Yosida theorem

I’ve now told you everything you really need to know… but not everything I want to say. What happens when X is not a finite set? What are Markov semigroups like then? I can’t abide letting this question go unresolved! Unfortunately I only know a partial answer.

We can get a certain distance using the Hille-Yosida theorem, which is much more general.

Definition. A Banach space is a vector space with a norm such that any Cauchy sequence converges.

Examples include Hilbert spaces like L^2(X) for any measure space, but also other spaces like L^1(X) for any measure space!

Definition. If V is a Banach space, a 1-parameter semigroup of operators U(t) : V \to V is called a contraction semigroup if it’s strongly continuous and

\| U(t) \psi \| \le \| \psi \|

for all t \ge 0 and all \psi \in V.

Examples include strongly continuous 1-parameter unitary groups, but also Markov semigroups!

Puzzle 4. Show any Markov semigroup is a contraction semigroup.
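For the finite-set case, here’s a quick numerical check of Puzzle 4, with a made-up stochastic matrix: applying it never increases the L^1 norm, even on functions with mixed signs.

```python
# A stochastic matrix: each column sums to 1 and all entries are >= 0.
# (The particular values are arbitrary.)
U = [[0.5, 0.1, 0.3],
     [0.2, 0.8, 0.3],
     [0.3, 0.1, 0.4]]

def l1(v):
    """The L^1 norm: the integral of |psi|, here just a sum."""
    return sum(abs(x) for x in v)

for psi in ([1.0, -2.0, 0.5], [0.0, 1.0, -1.0], [3.0, 3.0, 3.0]):
    Upsi = [sum(U[i][j] * psi[j] for j in range(3)) for i in range(3)]
    assert l1(Upsi) <= l1(psi) + 1e-12   # never increases the L^1 norm
print("a stochastic operator is an L1 contraction")
```

The reason is the triangle inequality: |(U\psi)_i| \le \sum_j U_{i j} |\psi_j|, and summing over i uses the fact that columns sum to 1.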

The Hille–Yosida theorem generalizes Stone’s theorem to contraction semigroups. In my misspent youth, I spent a lot of time carrying around Yosida’s book Functional Analysis. Furthermore, Einar Hille was the advisor of my thesis advisor, Irving Segal. Segal generalized the Hille–Yosida theorem to nonlinear operators, and I used this generalization a lot back when I studied nonlinear partial differential equations. So, I feel compelled to tell you this theorem:

Hille-Yosida Theorem. Given a contraction semigroup U(t) we can always write

U(t) = \exp(t H)

for some densely defined operator H such that H - \lambda I has an inverse and

\displaystyle{ \| (H - \lambda I)^{-1} \psi \| \le \frac{1}{\lambda} \| \psi \| }

for all \lambda > 0 and \psi \in V. Conversely, any such operator determines a contraction semigroup. For all vectors \psi for which H \psi is well-defined, we have

\displaystyle{ \left.\frac{d}{d t} U(t) \psi \right|_{t = 0} = H \psi }

Moreover, for any of these vectors, if we set

\psi(t) = U(t) \psi

we have

\displaystyle{ \frac{d}{d t} \psi(t) = H \psi(t) }

If you like, you can take the stuff at the end of this theorem to be what we mean by saying U(t) = \exp(t H). When U(t) = \exp(t H), we say that H generates the semigroup U(t).

But now suppose V = L^1(X). Besides the conditions in the Hille–Yosida theorem, what extra conditions on H are necessary and sufficient for it to generate a Markov semigroup? In other words, what’s a definition of ‘infinitesimal stochastic operator’ that’s suitable not only when X is a finite set, but an arbitrary measure space?

I asked this question on MathOverflow a few months ago, and so far the answers have not been completely satisfactory.

Some people mentioned the Hille–Yosida theorem, which is surely a step in the right direction, but not the full answer.

Others discussed the special case when \exp(t H) extends to a bounded self-adjoint operator on L^2(X). When X is a finite set, this special case happens precisely when the matrix H_{i j} is symmetric: the rate of hopping from j to i equals the rate of hopping from i to j. This is a fascinating special case, not least because when H is both infinitesimal stochastic and self-adjoint, we can use it as a Hamiltonian for both stochastic mechanics and quantum mechanics! Someday I want to discuss this. However, it’s just a special case.

After grabbing people by the collar and insisting that I wanted to know the answer to the question I actually asked—not some vaguely similar question—the best answer seems to be Martin Gisser’s reference to this book:

• Zhi-Ming Ma and Michael Röckner, Introduction to the Theory of (Non-Symmetric) Dirichlet Forms, Springer, Berlin, 1992.

This book provides a very nice self-contained proof of the Hille-Yosida theorem. On the other hand, it does not answer my question in general, but only when the skew-symmetric part of H is dominated (in a certain sense) by the symmetric part.

So, I’m stuck on this front, but that needn’t bring the whole project to a halt. We’ll just sidestep this question.

For a good well-rounded introduction to Markov semigroups and what they’re good for, try:

• Ryszard Rudnicki, Katarzyna Pichór and Marta Tyran-Kamínska, Markov semigroups and their applications.


Network Theory (Part 11)

4 October, 2011

jointly written with Brendan Fong

Noether proved lots of theorems, but when people talk about Noether’s theorem, they always seem to mean her result linking symmetries to conserved quantities. Her original result applied to classical mechanics, but today we’d like to present a version that applies to ‘stochastic mechanics’—or in other words, Markov processes.

What’s a Markov process? We’ll say more in a minute—but in plain English, it’s a physical system where something hops around randomly from state to state, where its probability of hopping anywhere depends only on where it is now, not its past history. Markov processes include, as a special case, the stochastic Petri nets we’ve been talking about.

Our stochastic version of Noether’s theorem is copied after a well-known quantum version. It’s yet another example of how we can exploit the analogy between stochastic mechanics and quantum mechanics. But for now we’ll just present the stochastic version. Next time we’ll compare it to the quantum one.

Markov processes

We should and probably will be more general, but let’s start by considering a finite set of states, say X. To describe a Markov process we then need a matrix of real numbers H = (H_{i j})_{i, j \in X}. The idea is this: suppose right now our system is in the state j. Then the probability of being in some state i changes as time goes by—and H_{i j} is defined to be the time derivative of this probability right now.

So, if \psi_i(t) is the probability of being in the state i at time t, we want the master equation to hold:

\displaystyle{ \frac{d}{d t} \psi_i(t) = \sum_{j \in X} H_{i j} \psi_j(t) }

This motivates the definition of ‘infinitesimal stochastic’, which we recall from Part 5:

Definition. Given a finite set X, a matrix of real numbers H = (H_{i j})_{i, j \in X} is infinitesimal stochastic if

i \ne j \implies H_{i j} \ge 0

and

\displaystyle{ \sum_{i \in X} H_{i j} = 0 }

for all j \in X.

The inequality says that if we start in the state j, the probability of being found in some other state i, which starts at 0, can’t go down, at least initially. The equation says that the probability of being somewhere or other doesn’t change. Together, these facts imply that:

H_{i i} \le 0

That makes sense: the probability of being in the state i, which starts at 1, can’t go up, at least initially.

Using the magic of matrix multiplication, we can rewrite the master equation as follows:

\displaystyle{\frac{d}{d t} \psi(t) = H \psi(t) }

and we can solve it like this:

\psi(t) = \exp(t H) \psi(0)

If H is an infinitesimal stochastic operator, we will call \exp(t H) a Markov process, and H its Hamiltonian.

(Actually, most people call \exp(t H) a Markov semigroup, and reserve the term Markov process for another way of looking at the same idea. So, be careful.)

Noether’s theorem is about ‘conserved quantities’, that is, observables whose expected values don’t change with time. To understand this theorem, you need to know a bit about observables. In stochastic mechanics an observable is simply a function assigning a number O_i to each state i \in X.

However, in quantum mechanics we often think of observables as matrices, so it’s nice to do that here, too. It’s easy: we just create a matrix whose diagonal entries are the values of the function O. And just to confuse you, we’ll also call this matrix O. So:

O_{i j} = \left\{ \begin{array}{ccl}  O_i & \textrm{if} & i = j \\ 0 & \textrm{if} & i \ne j  \end{array} \right.

One advantage of this trick is that it lets us ask whether an observable commutes with the Hamiltonian. Remember, the commutator of matrices is defined by

[O,H] = O H - H O

Noether’s theorem will say that [O,H] = 0 if and only if O is ‘conserved’ in some sense. What sense? First, recall that a stochastic state is just our fancy name for a probability distribution \psi on the set X. Second, the expected value of an observable O in the stochastic state \psi is defined to be

\displaystyle{ \sum_{i \in X} O_i \psi_i }

In Part 5 we introduced the notation

\displaystyle{ \int \phi = \sum_{i \in X} \phi_i }

for any function \phi on X. The reason is that later, when we generalize X from a finite set to a measure space, the sum on the right will become an integral over X. Indeed, a sum is just a special sort of integral!

Using this notation and the magic of matrix multiplication, we can write the expected value of O in the stochastic state \psi as

\int O \psi

We can calculate how this changes in time if \psi obeys the master equation… and we can write the answer using the commutator [O,H]:

Lemma. Suppose H is an infinitesimal stochastic operator and O is an observable. If \psi(t) obeys the master equation, then

\displaystyle{ \frac{d}{d t} \int O \psi(t) = \int [O,H] \psi(t) }

Proof. Using the master equation we have

\displaystyle{ \frac{d}{d t} \int O \psi(t) = \int O \frac{d}{d t} \psi(t) = \int O H \psi(t) } \qquad \qquad \qquad \; (1)

But since H is infinitesimal stochastic,

\displaystyle{ \sum_{i \in X} H_{i j} = 0  }

so for any function \phi on X we have

\displaystyle{ \int H \phi = \sum_{i, j \in X} H_{i j} \phi_j = 0 }

and in particular

\int H O \psi(t) = 0 \qquad \qquad \qquad \qquad \qquad \qquad \qquad \qquad (2)

Since [O,H] = O H - H O , we conclude from (1) and (2) that

\displaystyle{ \frac{d}{d t} \int O \psi(t) = \int [O,H] \psi(t) }

as desired.   █
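Here's a little numerical sanity check of the Lemma, if you enjoy that sort of thing. We cook up a random infinitesimal stochastic H, a random observable and a random stochastic state, and verify that \int O H \psi equals \int [O,H] \psi — the difference is \int H O \psi, which vanishes because the columns of H sum to zero. (Again, NumPy is just my choice of tool.)

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# A random infinitesimal stochastic H: nonnegative off-diagonal entries,
# with each diagonal entry chosen so its column sums to zero
H = rng.random((n, n))
np.fill_diagonal(H, 0.0)
np.fill_diagonal(H, -H.sum(axis=0))

O = np.diag(rng.random(n))   # a random observable, as a diagonal matrix
psi = rng.random(n)
psi /= psi.sum()             # a random stochastic state

# d/dt ∫ O ψ(t) = ∫ O H ψ(t), and the Lemma says this equals ∫ [O,H] ψ(t)
lhs = np.sum(O @ H @ psi)
rhs = np.sum((O @ H - H @ O) @ psi)
print(abs(lhs - rhs))        # ~ 0, since ∫ H O ψ = 0
```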

The commutator doesn’t look like it’s doing much here, since we also have

\displaystyle{ \frac{d}{d t} \int O \psi(t) = \int O H \psi(t) }

which is even simpler. But the commutator will become useful when we get to Noether’s theorem!

Noether’s theorem

Here’s a version of Noether’s theorem for Markov processes. It says an observable commutes with the Hamiltonian iff the expected values of that observable and its square don’t change as time passes:

Theorem. Suppose H is an infinitesimal stochastic operator and O is an observable. Then

[O,H] =0

if and only if

\displaystyle{ \frac{d}{d t} \int O\psi(t) = 0 }

and

\displaystyle{ \frac{d}{d t} \int O^2\psi(t) = 0 }

for all \psi(t) obeying the master equation.

If you know Noether’s theorem from quantum mechanics, you might be surprised that in this version we need not only the observable but also its square to have an unchanging expected value! We’ll explain this, but first let’s prove the theorem.

Proof. The easy part is showing that if [O,H]=0 then \frac{d}{d t} \int O\psi(t) = 0 and \frac{d}{d t} \int O^2\psi(t) = 0. In fact there’s nothing special about these two powers of O; we’ll show that

\displaystyle{ \frac{d}{d t} \int O^n \psi(t) = 0 }

for all n. The point is that since H commutes with O, it commutes with all powers of O:

[O^n, H] = 0

So, applying the Lemma to the observable O^n, we see

\displaystyle{ \frac{d}{d t} \int O^n \psi(t) =  \int [O^n, H] \psi(t) = 0 }

The backward direction is a bit trickier. We now assume that

\displaystyle{ \frac{d}{d t} \int O\psi(t) = \frac{d}{d t} \int O^2\psi(t) = 0 }

for all solutions \psi(t) of the master equation. This implies

\int O H\psi(t) = \int O^2 H\psi(t) = 0

or since this holds for all solutions,

\displaystyle{ \sum_{i \in X} O_i H_{i j} = \sum_{i \in X} O_i^2 H_{i j} = 0 } \qquad \qquad \qquad \; (3)

We wish to show that [O,H]= 0.

First, recall that we can think of O as a diagonal matrix with:

O_{i j} = \left\{ \begin{array}{ccl}  O_i & \textrm{if} & i = j \\ 0 & \textrm{if} & i \ne j  \end{array} \right.

So, we have

\begin{array}{ccl} [O,H]_{i j} &=& \displaystyle{ \sum_{k \in X} (O_{i k}H_{k j} - H_{i k} O_{k j}) } \\ \\ &=& O_i H_{i j} - H_{i j}O_j \\ \\ &=& (O_i-O_j)H_{i j} \end{array}

To show this is zero for each pair of elements i, j \in X, it suffices to show that when H_{i j} \ne 0, then O_j = O_i. That is, we need to show that if the system can move from state j to state i, then the observable takes the same value on these two states.

In fact, it’s enough to show that this sum is zero for any j \in X:

\displaystyle{ \sum_{i \in X} (O_j-O_i)^2 H_{i j} }

Why? When i = j, O_j-O_i = 0, so that term in the sum vanishes. But when i \ne j, (O_j-O_i)^2 and H_{i j} are both non-negative—the latter because H is infinitesimal stochastic. So if they sum to zero, they must each be individually zero. Thus for all i \ne j, we have (O_j-O_i)^2H_{i j}=0. But this means that either O_i = O_j or H_{i j} = 0, which is what we need to show.

So, let’s take that sum and expand it:

\displaystyle{ \sum_{i \in X} (O_j-O_i)^2 H_{i j} = \sum_i (O_j^2 H_{i j}- 2O_j O_i H_{i j} +O_i^2 H_{i j}) }

which in turn equals

\displaystyle{  O_j^2\sum_i H_{i j} - 2O_j \sum_i O_i H_{i j} + \sum_i O_i^2 H_{i j} }

The three terms here are each zero: the first because H is infinitesimal stochastic, and the latter two by equation (3). So, we’re done!   █
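To see the 'forward direction' of the theorem in action, here's a quick NumPy check. We pick an observable that's constant on two blocks of states and a Hamiltonian H that only allows hops within each block — so the system can move from j to i only when O_i = O_j. Then O commutes with H, and the expected value of every power of O is conserved. (The specific numbers are just made up for illustration.)

```python
import numpy as np

# An observable constant on {0,1} and on {2,3}: O = (1,1,5,5)
O = np.diag([1.0, 1.0, 5.0, 5.0])

# An infinitesimal stochastic H allowing hops only within each block,
# so H_{ij} ≠ 0 implies O_i = O_j
H = np.array([[-1.0,  2.0,  0.0,  0.0],
              [ 1.0, -2.0,  0.0,  0.0],
              [ 0.0,  0.0, -3.0,  1.0],
              [ 0.0,  0.0,  3.0, -1.0]])

psi = np.array([0.1, 0.2, 0.3, 0.4])   # some stochastic state

# O commutes with H...
print(np.allclose(O @ H, H @ O))       # True

# ...so d/dt ∫ Oⁿ ψ(t) = ∫ Oⁿ H ψ(t) = 0 for every power n
for n in (1, 2, 3):
    print(np.sum(np.linalg.matrix_power(O, n) @ H @ psi))   # ~ 0
```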

Markov chains

So that’s the proof… but why do we need both O and its square to have an expected value that doesn’t change with time to conclude [O,H] = 0? There’s an easy counterexample if we leave out the condition involving O^2. However, the underlying idea is clearer if we work with Markov chains instead of Markov processes.

In a Markov process, time passes by continuously. In a Markov chain, time comes in discrete steps! We get a Markov process by forming \exp(t H) where H is an infinitesimal stochastic operator. We get a Markov chain by forming the operator U, U^2, U^3, \dots where U is a ‘stochastic operator’. Remember:

Definition. Given a finite set X, a matrix of real numbers U = (U_{i j})_{i, j \in X} is stochastic if

U_{i j} \ge 0

for all i, j \in X and

\displaystyle{ \sum_{i \in X} U_{i j} = 1 }

for all j \in X.

The idea is that U describes a random hop, with U_{i j} being the probability of hopping to the state i if you start at the state j. These probabilities are nonnegative and sum to 1.

Any stochastic operator gives rise to a Markov chain U, U^2, U^3, \dots . And in case it’s not clear, that’s how we’re defining a Markov chain: the sequence of powers of a stochastic operator. There are other definitions, but they’re equivalent.
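In code, a Markov chain is about as simple as it sounds: pick a stochastic matrix and keep applying it. Here's a small sketch (the particular matrix is invented for illustration) showing that each step preserves the property of being a probability distribution:

```python
import numpy as np

# A stochastic operator on a 3-element set: U_{ij} is the probability of
# hopping to state i from state j, so each column sums to 1
U = np.array([[0.9, 0.2, 0.0],
              [0.1, 0.5, 0.3],
              [0.0, 0.3, 0.7]])

# The Markov chain is the sequence U, U², U³, …
psi = np.array([1.0, 0.0, 0.0])   # start with certainty in state 0
for step in range(3):
    psi = U @ psi                 # evolve one time step
    print(psi, psi.sum())         # stays a probability distribution
```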

We can draw a Markov chain by drawing a bunch of states and arrows labelled by transition probabilities, which are the matrix elements U_{i j}:

Here is Noether’s theorem for Markov chains:

Theorem. Suppose U is a stochastic operator and O is an observable. Then

[O,U] =0

if and only if

\displaystyle{  \int O U \psi = \int O \psi }

and

\displaystyle{ \int O^2 U \psi = \int O^2 \psi }

for all stochastic states \psi.

In other words, an observable commutes with U iff the expected values of that observable and its square don’t change when we evolve our state one time step using U.

You can probably prove this theorem by copying the proof for Markov processes:

Puzzle. Prove Noether’s theorem for Markov chains.

But let’s see why we need the condition on the square of the observable! That’s the intriguing part. Here’s a nice little Markov chain:

where we haven’t drawn arrows labelled by 0. So, state 1 has a 50% chance of hopping to state 0 and a 50% chance of hopping to state 2; the other two states just sit there. Now, consider the observable O with

O_i = i

It’s easy to check that the expected value of this observable doesn’t change with time:

\displaystyle{  \int O U \psi = \int O \psi }

for all \psi. The reason, in plain English, is this. Nothing at all happens if you start at states 0 or 2: you just sit there, so the expected value of O doesn’t change. If you start at state 1, the observable equals 1. You then have a 50% chance of going to a state where the observable equals 0 and a 50% chance of going to a state where it equals 2, so its expected value doesn’t change: it still equals 1.

On the other hand, we do not have [O,U] = 0 in this example, because we can hop between states where O takes different values. Furthermore,

\displaystyle{  \int O^2 U \psi \ne \int O^2 \psi }

After all, if you start at state 1, O^2 equals 1 there. You then have a 50% chance of going to a state where O^2 equals 0 and a 50% chance of going to a state where it equals 4, so its expected value changes!

So, that’s why \int O U \psi = \int O \psi for all \psi is not enough to guarantee [O,U] = 0. The same sort of counterexample works for Markov processes, too.
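The counterexample is easy to check on a computer, too. Here's the three-state chain just described, in NumPy: states 0 and 2 sit still, while state 1 hops to 0 or 2 with probability 1/2 each. The expected value of O is conserved, but O doesn't commute with U and the expected value of O^2 is not conserved:

```python
import numpy as np

# Columns give hopping probabilities: states 0 and 2 are fixed,
# state 1 goes to 0 or 2 with probability 1/2 each
U = np.array([[1.0, 0.5, 0.0],
              [0.0, 0.0, 0.0],
              [0.0, 0.5, 1.0]])

O = np.diag([0.0, 1.0, 2.0])     # the observable O_i = i
psi = np.array([0.0, 1.0, 0.0])  # start at state 1

# The expected value of O is conserved...
print(np.sum(O @ U @ psi), np.sum(O @ psi))           # 1.0 and 1.0

# ...but O does not commute with U, and O² is not conserved:
print(np.allclose(O @ U, U @ O))                      # False
print(np.sum(O @ O @ U @ psi), np.sum(O @ O @ psi))   # 2.0 and 1.0
```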

Finally, we should add that there’s nothing terribly sacred about the square of the observable. For example, we have:

Theorem. Suppose H is an infinitesimal stochastic operator and O is an observable. Then

[O,H] =0

if and only if

\displaystyle{ \frac{d}{d t} \int f(O) \psi(t) = 0 }

for all smooth f: \mathbb{R} \to \mathbb{R} and all \psi(t) obeying the master equation.

Theorem. Suppose U is a stochastic operator and O is an observable. Then

[O,U] =0

if and only if

\displaystyle{  \int f(O) U \psi = \int f(O) \psi }

for all smooth f: \mathbb{R} \to \mathbb{R} and all stochastic states \psi.

These make the ‘forward direction’ of Noether’s theorem stronger… and in fact, the forward direction, while easier, is probably more useful! However, if we ever use Noether’s theorem in the ‘reverse direction’, it might be easier to check a condition involving only O and its square.


The Network of Global Corporate Control

3 October, 2011

While protesters are trying to occupy Wall Street and spread their movement to other cities…

… others are trying to mathematically analyze the network of global corporate control:

• Stefania Vitali, James B. Glattfelder and Stefano Battiston, The network of global corporate control.

Here’s a little ‘directed graph’:

Very roughly, a directed graph consists of some vertices and some edges with arrows on them. Vitali, Glattfelder and Battiston built an enormous directed graph by taking 43,060 transnational corporations and seeing who owns a stake in whom:


If we zoom in on the financial sector, we can see the companies those protestors are upset about:


Zooming out again, we could check that the graph as a whole consists of many pieces. But the largest piece contains 3/4 of all the corporations studied, including all the top corporations by economic value, and accounting for 94.2% of the total operating revenue.

Within this there is a large ‘core’, containing 1347 corporations each of whom owns directly and/or indirectly shares in every other member of the core. On average, each member of the core has direct ties to 20 others. As a result, about 3/4 of the ownership of firms in the core remains in the hands of firms of the core itself. As the authors put it:

This core can be seen as an economic “super-entity” that raises new important issues both for researchers and policy makers.

If you’ve never thought much about modern global capitalism, the existence of this ‘core’ may seem shocking and scary… like an enormous invisible spiderweb wrapping around the globe, dominating us, controlling every move we make. Or maybe you can see a tremendous new business opportunity, waiting to be exploited!

But if you’ve already thought about these things, the existence of this core probably seems obvious. What’s new here is the use of certain ideas in math—graph theory, to be precise—to study it quantitatively.

So, let me say a bit more about the math! What’s a directed graph, exactly? It’s a set V and a subset E of V \times V. We call the elements of V vertices and the elements of E edges. Since an edge is an ordered pair of vertices, it has a ‘starting point’ and an ‘endpoint’—that’s why we call this kind of graph ‘directed’.

(Note that we can have an edge going from a vertex to itself, but we cannot have more than one edge going from some vertex v to some vertex v'. If you don’t like this, use some other kind of graph: there are many kinds!)

I spoke about ‘pieces’ of a directed graph, but that’s not a precise term, since there are various kinds of pieces:

• A connected component is a maximal set of vertices such that we can get from any one to any other by an undirected path, meaning a path of edges where we don’t care which way the arrows point.

• A strongly connected component is a maximal set of vertices such that we can get from any one to any other by a directed path, meaning a path of edges where at each step we walk ‘forwards’, along the arrow.

I didn’t state these definitions very precisely, but I hope you can fill in the details. Maybe an example will help! This graph has three strongly connected components, shaded in blue, but just one connected component:

So when I said this:

The graph consists of many pieces, but the largest contains 3/4 of all the corporations studied, including all the top corporations by economic value, and accounting for 94.2% of the total operating revenue.

I was really talking about the largest connected component. But when I said this:

Within this there is a large ‘core’ containing 1347 corporations each of whom owns directly and/or indirectly shares in every other member of the core.

I was really talking about a strongly connected component. When you look at random directed graphs, there often turns out to be one strongly connected component that’s a lot bigger than all the rest. This is called the core, or the giant strongly connected component.
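Finding the strongly connected components of a directed graph is a standard algorithmic task. Here's a sketch in plain Python using Kosaraju's algorithm — two depth-first passes, one on the graph and one on its reverse — run on a miniature 'bowtie' of the sort discussed below. (The graph and the helper function are my own invention for illustration.)

```python
from collections import defaultdict

def strongly_connected_components(vertices, edges):
    """Kosaraju's algorithm: two depth-first passes over a directed graph."""
    fwd, rev = defaultdict(list), defaultdict(list)
    for u, v in edges:
        fwd[u].append(v)
        rev[v].append(u)

    # First pass: record vertices in order of completed DFS (iteratively)
    seen, order = set(), []
    for start in vertices:
        if start in seen:
            continue
        seen.add(start)
        stack = [(start, iter(fwd[start]))]
        while stack:
            node, it = stack[-1]
            advanced = False
            for nxt in it:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, iter(fwd[nxt])))
                    advanced = True
                    break
            if not advanced:
                order.append(node)
                stack.pop()

    # Second pass: DFS on the reversed graph, in reverse finishing order;
    # each tree found this way is one strongly connected component
    assigned, components = set(), []
    for start in reversed(order):
        if start in assigned:
            continue
        assigned.add(start)
        component, stack = set(), [start]
        while stack:
            node = stack.pop()
            component.add(node)
            for nxt in rev[node]:
                if nxt not in assigned:
                    assigned.add(nxt)
                    stack.append(nxt)
        components.append(component)
    return components

# A tiny 'bowtie': vertex 0 feeds in, {1,2,3} form a directed cycle
# (the core), and vertex 4 hangs off the way out
edges = [(0, 1), (1, 2), (2, 3), (3, 1), (3, 4)]
sccs = strongly_connected_components(range(5), edges)
print(sorted(sorted(c) for c in sccs))   # [[0], [1, 2, 3], [4]]
```

The cycle {1, 2, 3} is the only nontrivial strongly connected component: the vertices feeding in and hanging out each form components of their own, even though the whole graph is one connected component.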

In fact there’s a whole study of random directed graphs, which is relevant not only to corporations, but also to webpages! Webpages link to other webpages, giving a directed graph. (True, one webpage can link to another more than once, but we can either ignore that subtlety or use a different concept of graph that handles this.)

And it turns out that for various types of random directed graphs, we tend to get a so-called ‘bowtie structure’, like this:

In the middle you see the core, or giant strongly connected component, labelled SCC. (Yes, that’s where Exxon sits, like a spider in the middle of the web!)

Connected to this by paths going in, we have the left half of the bowtie, labelled IN. Connected to the core by paths going out, we have the right half of the bowtie, labelled OUT.

There are also usually some IN-tendrils going out of the IN region, and some OUT-tendrils going into the ‘OUT’ region.

There may also be tubes going from IN to OUT while avoiding the core.

All this is one connected component: the largest one. But finally, not shown here, there may be a bunch of other smaller connected components. Presumably if these are large enough they have a similar structure.

Now: can we use this knowledge to do something good? Or is it all too obvious so far? After all, so far we’re just saying the network of global corporate control is a fairly ordinary sort of random directed graph. Maybe we need to go beyond this, and think about ways in which it’s not ordinary. In fact, I should reread the paper with that in mind.

Or… well, maybe you have some ideas.

(By the way, I don’t think ‘overthrowing’ the network of global corporate control is a feasible or even desirable project. I’m not espousing any sort of revolutionary ideology, and I’m not interested in discussing politics here. I’m more interested in understanding the world and looking for some leverage points where we can gently nudge things in slightly better directions. If there were a way to do this by taking advantage of the power of corporations, that would be cool.)
