Network Theory (Part 16)

We’ve been comparing two theories: stochastic mechanics and quantum mechanics. Last time we saw that any graph gives us an example of both theories! It’s a bit peculiar, but today we’ll explore the intersection of these theories a little further, and see that it has another interpretation. It’s also the theory of electrical circuits made of resistors!

That’s nice, because I’m supposed to be talking about ‘network theory’, and electrical circuits are perhaps the most practical networks of all:

I plan to talk a lot about electrical circuits. I’m not quite ready to dive in, but I can’t resist dipping my toe in the water today. Why don’t you join me? It’s not too cold!

Dirichlet operators

Last time we saw that any graph gives us an operator called the ‘graph Laplacian’ that’s both infinitesimal stochastic and self-adjoint. That means we get both:

• a Markov process describing the random walk of a classical particle on the graph.

and

• a 1-parameter unitary group describing the motion of a quantum particle on the graph.

That’s sort of neat, so it’s natural to wonder what are all the operators that are both infinitesimal stochastic and self-adjoint. They’re called ‘Dirichlet operators’, and at least in the finite-dimensional case we’re considering, they’re easy to completely understand. Even better, it turns out they describe electrical circuits made of resistors!
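If you'd like to see this concretely, here's a minimal numerical sketch (Python with numpy and scipy; the triangle graph is just a made-up example) showing that the graph Laplacian generates both a stochastic semigroup and a unitary group:

```python
import numpy as np
from scipy.linalg import expm

# adjacency matrix of a triangle graph on 3 vertices
A = np.array([[0., 1., 1.],
              [1., 0., 1.],
              [1., 1., 0.]])

# graph Laplacian: nonnegative off-diagonal entries, columns summing to zero
H = A - np.diag(A.sum(axis=0))

t = 0.7

# stochastic side: exp(tH) has nonnegative entries and columns summing to 1,
# so it maps probability distributions to probability distributions
U_stoch = expm(t * H)
print(U_stoch.sum(axis=0))      # ~ [1. 1. 1.]
print((U_stoch >= 0).all())     # True

# quantum side: exp(-itH) is unitary, since H is self-adjoint
U_quant = expm(-1j * t * H)
print(np.allclose(U_quant @ U_quant.conj().T, np.eye(3)))   # True
```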

Today let’s take a lowbrow attitude and think of a linear operator H : \mathbb{C}^n \to \mathbb{C}^n as an n \times n matrix with entries H_{i j}. Then:

H is self-adjoint if it equals the conjugate of its transpose:

H_{i j} = \overline{H}_{j i}

H is infinitesimal stochastic if its columns sum to zero and its off-diagonal entries are real and nonnegative:

\displaystyle{ \sum_i H_{i j} = 0 }

i \ne j \Rightarrow H_{i j} \ge 0

H is a Dirichlet operator if it’s both self-adjoint and infinitesimal stochastic.

What are Dirichlet operators like? Suppose H is a Dirichlet operator. Then its off-diagonal entries are \ge 0, and since

\displaystyle{ \sum_i H_{i j} = 0}

its diagonal entries obey

\displaystyle{ H_{i i} = - \sum_{j \ne i} H_{j i} \le 0 }

So all the entries of the matrix H are real, and then self-adjointness implies it's symmetric:

H_{i j} = \overline{H}_{j i} = H_{j i}

So, we can build any Dirichlet operator H as follows (see the sketch after this list):

• Choose the entries above the diagonal, H_{i j} with i < j, to be arbitrary nonnegative real numbers.

• The entries below the diagonal, H_{i j} with i > j, are then forced on us by the requirement that H be symmetric: H_{i j} = H_{j i}.

• The diagonal entries are then forced on us by the requirement that the columns sum to zero: H_{i i} = - \sum_{j \ne i} H_{i j}.
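Here's a quick numerical sketch of this recipe (Python with numpy; the random entries above the diagonal are arbitrary):

```python
import numpy as np

rng = np.random.default_rng(0)
n = 4

# step 1: arbitrary nonnegative entries above the diagonal
H = np.zeros((n, n))
upper = np.triu_indices(n, k=1)
H[upper] = rng.random(len(upper[0]))

# step 2: symmetry forces the entries below the diagonal
H = H + H.T

# step 3: the diagonal is forced by 'columns sum to zero'
H -= np.diag(H.sum(axis=0))

print(np.allclose(H, H.T))             # self-adjoint (real symmetric): True
print(np.allclose(H.sum(axis=0), 0))   # infinitesimal stochastic: True
```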

Note that because the entries are real, we can think of a Dirichlet operator as a linear operator H : \mathbb{R}^n \to \mathbb{R}^n. We’ll do that for the rest of today.

Circuits made of resistors

Now for the fun part. We can easily draw any Dirichlet operator! To do this, we draw n dots, connect each pair of distinct dots with an edge, and label the edge connecting the ith dot to the jth with any number H_{i j} \ge 0:

This contains all the information we need to build our Dirichlet operator. To make the picture prettier, we can leave out the edges labelled by 0:

Like last time, the graphs I’m talking about are simple: undirected, with no edges from a vertex to itself, and at most one edge from one vertex to another. So:

Theorem. Any finite simple graph with edges labelled by positive numbers gives a Dirichlet operator, and conversely.

We already talked about a special case last time: if we label all the edges by the number 1, our operator H is called the graph Laplacian. So, now we’re generalizing that idea by letting the edges have more interesting labels.

What’s the meaning of this trick? Well, we can think of our graph as an electrical circuit where the edges are wires. What do the numbers labelling these wires mean? One obvious possibility is to put a resistor on each wire, and let that number be its resistance. But that doesn’t make sense, since we’re leaving out wires labelled by 0. If we leave out a wire, that’s not like having a wire of zero resistance: it’s like having a wire of infinite resistance! No current can go through when there’s no wire. So the number labelling an edge should be the conductance of the resistor on that wire. Conductance is the reciprocal of resistance.

So, our Dirichlet operator above gives a circuit like this:

Here Ω is the symbol for an ‘ohm’, a unit of resistance… but the upside-down version, namely ℧, is the symbol for a ‘mho’, a unit of conductance that’s the reciprocal of an ohm.

Let’s see if this cute idea leads anywhere. Think of a Dirichlet operator H : \mathbb{R}^n \to \mathbb{R}^n as a circuit made of resistors. What could a vector \psi \in \mathbb{R}^n mean? It assigns a real number to each vertex of our graph. The only sensible option is for this number to be the electric potential at that point in our circuit. So let’s try that.

Now, what’s

\langle \psi, H \psi \rangle  ?

In quantum mechanics this would be a very sensible thing to look at: it would give us the expected value of the Hamiltonian H in a state \psi. But what does it mean in the land of electrical circuits?

Up to a constant fudge factor, it turns out to be the power consumed by the electrical circuit!

Let’s see why. First, remember that when a current flows along a wire, power gets consumed. In other words, electrostatic potential energy gets turned into heat. The power consumed is

P = V I

where V is the voltage across the wire and I is the current flowing along the wire. If we assume our wire has resistance R we also have Ohm’s law:

I = V / R

so

\displaystyle{ P = \frac{V^2}{R} }

If we write this using the conductance instead of the resistance R, we get

P = \textrm{conductance} \; V^2

But our electrical circuit has lots of wires, so the power it consumes will be a sum of terms like this. We’re assuming H_{i j} is the conductance of the wire from the ith vertex to the jth, or zero if there’s no wire connecting them. And by definition, the voltage across this wire is the difference in electrostatic potentials at the two ends: \psi_i - \psi_j. So, the total power consumed is

\displaystyle{ P = \sum_{i \ne j}  H_{i j} (\psi_i - \psi_j)^2 }

(Strictly speaking this sum counts each wire twice, once as (i,j) and once as (j,i), so it's really twice the physical power. But we already agreed to work up to a constant fudge factor.)

This is nice, but what does it have to do with \langle \psi , H \psi \rangle?

The answer is here:

Theorem. If H : \mathbb{R}^n \to \mathbb{R}^n is any Dirichlet operator, and \psi \in \mathbb{R}^n is any vector, then

\displaystyle{ \langle \psi , H \psi \rangle = -\frac{1}{2} \sum_{i \ne j}  H_{i j} (\psi_i - \psi_j)^2 }

Proof. Let’s start with the formula for power:

\displaystyle{ P = \sum_{i \ne j}  H_{i j} (\psi_i - \psi_j)^2 }

Note that this sum includes the condition i \ne j, since we only have wires going between distinct vertices. But the summand is zero if i = j, so we also have

\displaystyle{ P = \sum_{i, j}  H_{i j} (\psi_i - \psi_j)^2 }

Expanding the square, we get

\displaystyle{ P = \sum_{i, j} \left( H_{i j} \psi_i^2 - 2 H_{i j} \psi_i \psi_j + H_{i j} \psi_j^2 \right) }

The middle term looks promisingly similar to \langle \psi, H \psi \rangle, but what about the other two terms? Because H_{i j} = H_{j i}, they’re equal:

\displaystyle{ P = \sum_{i, j} - 2 H_{i j} \psi_i \psi_j + 2 H_{i j} \psi_j^2  }

And in fact they’re zero! Since H is infinitesimal stochastic, we have

\displaystyle{ \sum_i H_{i j} = 0 }

so

\displaystyle{ \sum_i H_{i j} \psi_j^2 = 0 }

and it’s still zero when we sum over j. We thus have

\displaystyle{ P = - 2 \sum_{i, j} H_{i j} \psi_i \psi_j }

But since \psi_i is real, this is -2 times

\displaystyle{ \langle \psi, H \psi \rangle  = \sum_{i, j}  H_{i j} \overline{\psi}_i \psi_j }

So, we’re done.   █
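If you'd rather let the computer check the algebra, here's a numerical sanity check of this identity (a sketch in Python, building a random Dirichlet operator by the recipe above):

```python
import numpy as np

rng = np.random.default_rng(1)
n = 5

# a random Dirichlet operator, built as in the recipe above
H = np.zeros((n, n))
upper = np.triu_indices(n, k=1)
H[upper] = rng.random(len(upper[0]))
H = H + H.T
H -= np.diag(H.sum(axis=0))

psi = rng.standard_normal(n)

lhs = psi @ H @ psi
rhs = -0.5 * sum(H[i, j] * (psi[i] - psi[j])**2
                 for i in range(n) for j in range(n) if i != j)
print(np.isclose(lhs, rhs))   # True
```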

An instant consequence of this theorem is that a Dirichlet operator has

\langle \psi , H \psi \rangle \le 0

for all \psi. Actually most people use the opposite sign convention in defining infinitesimal stochastic operators. This makes H_{i j} \le 0 for i \ne j, which is mildly annoying, but it gives

\langle \psi , H \psi \rangle \ge 0

which is nice. When H is a Dirichlet operator, defined with this opposite sign convention, \langle \psi , H \psi \rangle is called a Dirichlet form.

The big picture

Maybe it’s a good time to step back and see where we are.

So far we’ve been exploring the analogy between stochastic mechanics and quantum mechanics. Where do networks come in? Well, they’ve actually come in twice so far:

1) First we saw that Petri nets can be used to describe stochastic or quantum processes where things of different kinds randomly react and turn into other things. A Petri net is a kind of network like this:

The different kinds of things are the yellow circles; we called them states, because sometimes we think of them as different states of a single kind of thing. The reactions where things turn into other things are the blue squares: we called them transitions. We label the transitions by numbers to specify the rates at which they occur.

2) Then we looked at stochastic or quantum processes where in each transition a single thing turns into a single thing. We can draw these as Petri nets where each transition has just one state as input and one state as output. But we can also draw them as directed graphs with edges labelled by numbers:

Now the dark blue boxes are states and the edges are transitions!

Today we looked at a special case of the second kind of network: the Dirichlet operators. For these the ‘forward’ transition rate H_{i j} equals the ‘reverse’ rate H_{j i}, so our graph can be undirected: no arrows on the edges. And for these the rates H_{i i} are determined by the rest, so we can omit the edges from vertices to themselves:

The result can be seen as an electrical circuit made of resistors! So we’re building up a little dictionary:

• Stochastic mechanics: \psi_i is a probability and H_{i j} is a transition rate (probability per time).

• Quantum mechanics: \psi_i is an amplitude and H_{i j} is a transition rate (amplitude per time).

• Circuits made of resistors: \psi_i is a voltage and H_{i j} is a conductance.

This dictionary may seem rather odd—especially the third item, which looks completely different than the first two! But that’s good: when things aren’t odd, we don’t get many new ideas. The whole point of this ‘network theory’ business is to think about networks from many different viewpoints and let the sparks fly!

Actually, this particular oddity is well-known in certain circles. We’ve been looking at the discrete version, where we have a finite set of states. But in the continuum, the classic example of a Dirichlet operator is the Laplacian H = \nabla^2. And then we have:

• The heat equation:

\frac{d}{d t} \psi = \nabla^2 \psi

is fundamental to stochastic mechanics.

• The Schrödinger equation:

\frac{d}{d t} \psi = -i \nabla^2 \psi

is fundamental to quantum mechanics.

• The Poisson equation:

\nabla^2 \psi = -\rho

is fundamental to electrostatics.

Briefly speaking, electrostatics is the study of how the electric potential \psi depends on the charge density \rho. The theory of electrical circuits made of resistors can be seen as a special case, at least when the current isn’t changing with time.
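Here's a toy discrete analogue of that last point (a sketch in Python; the path graph and charge density are made up): the Dirichlet operator plays the role of \nabla^2, and after grounding one node we can solve the discrete Poisson equation H \psi = -\rho.

```python
import numpy as np

# path graph on 5 nodes with unit conductances: H is the graph Laplacian
n = 5
A = np.diag(np.ones(n - 1), 1) + np.diag(np.ones(n - 1), -1)
H = A - np.diag(A.sum(axis=0))

rho = np.array([0.0, 1.0, 0.0, -1.0, 0.0])   # a made-up charge density

# H is singular (constants are in its kernel), so ground node 0 by fixing
# psi_0 = 0 and solving the reduced system; node 0 absorbs the residual
psi = np.zeros(n)
psi[1:] = np.linalg.solve(H[1:, 1:], -rho[1:])
print(psi)
```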

I’ll say a lot more about this… but not today! If you want to learn more, this is a great place to start:

• P. G. Doyle and J. L. Snell, Random Walks and Electrical Circuits, Mathematical Association of America, Washington DC, 1984.

This free online book explains, in a really fun informal way, how random walks on graphs are related to electrical circuits made of resistors. To dig deeper into the continuum case, try:

• M. Fukushima, Dirichlet Forms and Markov Processes, North-Holland, Amsterdam, 1980.

66 Responses to Network Theory (Part 16)

  1. Frederik De Roo says:

    Nice! But I believe that G is the symbol for conductance, and C is used for capacitance instead. In addition, though not as funny as the ‘mho’, the sievert (S) is the SI unit of conductance.

  2. Arrow says:

    Siemens is the unit of conductance, sievert is radiation dose.

One thing I keep wondering about is whether there are any practical applications of such an approach to electrical circuits. Networks of resistors are not particularly interesting; can, for example, the first two circuits pictured on this page be described in this manner? Or if that is too much, perhaps complex circuits with ideal capacitance and inductance in addition to resistance?

    • John Baez says:

      Electrical circuits made of resistors are incredibly interesting—there’s a whole book about them, which I referred to in this post, and it’s well worth reading. (It’s free, too.)

      However, electrical circuits made only of resistors don’t display any interesting time-dependent behavior; they’re just a nice discretization of electrostatics. If you want to build an oscillator, say, you need more circuit elements.

Luckily, I’ll show (and it’s well-known) that the mathematical framework described here for circuits made of resistors can be extended to include capacitors and inductors.

As for ‘practical applications’, I’ll admit I’m theoretically driven: I’m mainly trying to organize all known approaches to complex systems made of interacting parts into a clear unified whole, not (yet) make advances in any one of these approaches. Those advances will come quite naturally when all the different theories start talking to each other in a clear framework. For example, once I saw how stochastic mechanics and quantum mechanics were related, it only took a few weeks to realize there should be a version of Noether’s theorem relating symmetries and conserved quantities (famous in quantum mechanics) for stochastic mechanics… and Brendan Fong was able to prove it in about a week. Now that I see how electrical circuits made of resistors relate to those other two subjects, other ideas will emerge. Either people know something about circuits made of resistors that can shed new light on quantum mechanics and stochastic mechanics, or vice versa, or both. And this sort of ‘cross-talk’ between subjects will grow as time goes on. I’ve learned over the years that trying to push straight to applications is not a good working style for me (though it’s great for many other people); instead, I do better when I patiently put the puzzle pieces together and figure out what’s going on. There’s a ‘problem-solving’ style of mathematics and a ‘theory-building’ style, and I follow the latter.

      I learned, while writing this article, that G is the official symbol for conductance, and that ‘siemens’ is more official than ‘mho’. However, ‘G’ seemed less memorable, and more likely to raise annoying questions (“why G?”) than ‘C’, and ‘siemens’ seemed less fun than ‘mho’.

    • Frederik De Roo says:

      And luckily the symbol for sievert is Sv.

      That was a slip of my fingers, recorded here for eternity (or until the world wide web breaks down) to remind me I should reread before I post…

      @John, I think that anyone who has ever worked with capacitance would be more comfortable with G than C.

    • Blake Stacey says:

      Why G instead of C? Well, when we get to circuits with inductors and capacitors next week, what will we use to stand for capacitance? :-)

      I don’t think we’ve used K for anything yet. Maybe we should start defining circuits with konductance or kapacitance. Or, maybe we could use K for the amplitude of an AC oscillation. If anyone asks, we can reply, “We learned this material out of German books, and K is the first letter in the German word for power station.”

      • Frederik De Roo says:

        Wouldn’t it be nice to follow standard terminology and to use Y?

        @John: if you have another student ;) a theorem that comes to mind when talking about electrical circuits is Thévenin’s theorem

      • John Baez says:

        Now I think I’ll use the standard symbol for ‘admittance’, since eventually I’ll generalize all this stuff to circuits containing capacitors and inductors.

        • Eric says:

          Not sure if that absolves you. You’ll still want to say things like, “The admittance of a capacitor is Y = i ω C.”

        • John Baez says:

          What I’m saying is that I’ll never use C for conductance, only capacitance. I’ll use Y for admittance, and when my circuit involves only resistors, that’ll be the same as conductance.

          I’ve decided that for this post, since I only need a symbol for conductance in one equation, it’ll be least confusing if I call it…

          \textrm{conductance}

  3. Eugene says:

    P. G. Doyle and J. L. Snell, “Random Walks and Electrical Circuits,”

    was published as “Random Walks and Electrical Networks,”

  4. kunegis says:

    Hi John–

    I’ve been enjoying your series ‘network theory’ from the beginning, and this entry really touched home for me. I can’t help but give the following quiz:

    If the network has edges with negative weights, how can a Dirichlet operator be defined, and what is its interpretation in terms of electrical circuits, stochastic mechanics and quantum mechanics?

    • John Baez says:

      I’m glad you’re enjoying the show!

There’s no problem with negative weights for edges in quantum mechanics, since the recipe I gave still defines a self-adjoint operator. But for stochastic mechanics it makes me nervous to have negative transition probabilities. Formally, I suppose I could make a new version of probability theory where I drop the restriction that probabilities be nonnegative. But I’m not sure what use it would be. (It’s different from real quantum mechanics, which I understand reasonably well.)

      I can also formally work with electrical circuits built from resistors whose conductances can be arbitrary real numbers. But again I’m not sure what use this would be. There’s a causality condition in the theory of linear electrical circuits that breaks down when you build a circuit from inductors, capacitors and resistors whose conductances can be negative.

      I’m eager to hear your thoughts on these subjects!

      • westy31 says:

        I really should take more time to catch up with your network discussion!
        Negative resistances come up in various ways. One way is in discretization based on triangulations of space with obtuse triangles.
        Another is in network equivalents of higher order differential equations, such as the beam equation.
        http://westy31.home.xs4all.nl/Electric.html#Higher_order
        Also, in ‘space time circuits’, as shown on the same page.
        Imaginary capacitances show up in the electric equivalent for the Schrödinger equation.
        Mathematically, there is no problem with negative resistors. In real life, they would violate thermodynamics, by converting heat into electric energy. But that’s just the time reversal of dissipation.

        Gerard

      • Hi,

        To finally reply: in the past I’ve been studying rating networks, i.e. bipartite networks between persons and things (e.g. movies) that people can like or dislike. To predict ratings in these networks one idea is to use the resistance distance, which can in fact be extended to signed networks by slightly changing the definition of the graph Laplacian, using the sum of absolute edge weights as diagonal values. The resulting matrix is still positive semidefinite, and even positive definite when every connected component contains a cycle with an odd number of negative edges. The same signed variant of the graph Laplacian also arises when drawing signed graphs in the “obvious” way, i.e. by placing nodes at the mean coordinates of their positive neighbors.
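A minimal numpy sketch of this signed Laplacian (the triangle with one negative edge is a made-up example):

```python
import numpy as np

# signed adjacency matrix: a triangle with one negative edge
A = np.array([[ 0.,  1., -1.],
              [ 1.,  0.,  1.],
              [-1.,  1.,  0.]])

# signed graph Laplacian: diagonal = sums of ABSOLUTE edge weights
L = np.diag(np.abs(A).sum(axis=0)) - A

print(np.linalg.eigvalsh(L))   # [1. 1. 4.]: positive definite, since the
                               # triangle is a cycle with an odd number
                               # (one) of negative edges
```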

        I’d like to hear your take on this!

        BTW, I now have a blog entry written up about this.

This is pretty exciting stuff. I just did a few calculations related to this example and wanted to post them here. First, the matrix for the example is

    H=\left( \begin{array}{ccccc}  0 & \frac{1}{2} & \frac{1}{2} & \frac{1}{2} & 0 \\  \frac{1}{2} & 0 & 1 & \frac{1}{2} & 0 \\  \frac{1}{2} & 1 & 0 & 0 & 1 \\  \frac{1}{2} & \frac{1}{2} & 0 & 0 & 0 \\  0 & 0 & 1 & 0 & 0 \end{array} \right)

From this matrix, we can define a graph and label the edges with the entries in the matrix. Of course this is not shocking news since we determined the matrix from a graph to begin with! But here is where I’m going with this. First, the graph of H is not directed (i.e. all edges go both ways) since the matrix is symmetric. We call a graph strongly connected if for each pair of nodes there is a sequence of directed edges leading from any node to any other. This is the case for H .

It turns out that H is an irreducible matrix if and only if the corresponding graph is strongly connected. This means that the so-called Perron-Frobenius theory applies (see Wikipedia). In fact, since the matrix is irreducible, the following (fairly strong) conditions hold.

    1. There exists a real maximal eigenvalue r > 0.

    2. The corresponding eigenspace of r is simple. That is, it is one dimensional, and is hence spanned by a single eigenvector v.

3. The entries of v are all strictly positive real numbers.

There are more conditions too, but these already give us enough to bring up some interesting points. First, this largest-eigenvalue condition means that there is a steady state! The network would then be fault tolerant, as small perturbations k from this steady state (v + k) would evolve back to the steady state (v). There is another condition that is perhaps more interesting.

    4. The eigenvector v is the only eigenvector that does not have at least one negative entry!

Hold on! Is that even possible? We know that for stochastic mechanics, we live in an L^1 space. We can’t have negative probabilities! So for any such (strongly connected) stochastic operator, we are either in the unique steady state, or we are in a sum of at least two states (to make sure all entries in the state of the system are positive—a requirement for a valid probability distribution).

    So let’s pretend that we don’t yet trust Perron and Frobenius, but we trust computers. We would then use Mathematica to make sure this is true. Here are the eigenvalues

    \{1.76601, -1.48396, 0.559161, -0.5, -0.341207\}

    and…sure enough, we have r = 1.76601 as expected. For the eigenvectors, first we have

    v=\{1.15274,1.54241,1.76601,0.763062,1\}

    and as a reminder, we indeed confirm that all entries are strictly positive. Now what about the rest of the eigenvectors? To match the theory, we should confirm that v is the only eigenvector whose components are all positive.

Here are the other four eigenvectors, so we might check for negative entries directly:

    \{0.298709,1.05278,-1.48396,-0.455367,1\}

    \{-0.634202,-0.370238,0.559161,-0.898166,1\}

    \{-2,1,0,1,0\}

    \{0.127202,-0.947178,-0.341207,1.20158,1\}

So we confirm that the remaining eigenvectors are exactly as expected: they do contain negative entries! Everything agrees with the Perron–Frobenius theory of non-negative matrices, as expected.

    So this is all very interesting! The spooky part occurs when we elevate H to a quantum operator. Quantum mechanically, the state of a system could be in a unique eigenstate such as this

    \psi = \{0.298709,1.05278,-1.48396,-0.455367,1\}

    since quantum mechanics lives in an L^2 space. In stochastic mechanics this is out of the question, since negative probabilities are not allowed. In stochastic mechanics, the best we could do is have a valid state in a mixture of \psi and some other eigenvectors—to ensure that all entries are positive. For example, a valid (though not normalised) state in stochastic mechanics would be

    0.84 v + \psi = \{1.26734, 2.34885, 0, 0.185826, 1.84029\}

    But \psi alone is forbidden!
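For readers without Mathematica, here's a quick numpy sketch that double-checks these numbers (see also Greg Egan's caveat below that this H is not itself infinitesimal stochastic):

```python
import numpy as np

H = np.array([[0. , 0.5, 0.5, 0.5, 0. ],
              [0.5, 0. , 1. , 0.5, 0. ],
              [0.5, 1. , 0. , 0. , 1. ],
              [0.5, 0.5, 0. , 0. , 0. ],
              [0. , 0. , 1. , 0. , 0. ]])

vals, vecs = np.linalg.eigh(H)
i = np.argmax(vals)
v = vecs[:, i]
print(vals[i])                            # ~ 1.76601, the Perron root
print(np.all(v > 0) or np.all(v < 0))     # Perron vector has a single sign

# every other eigenvector mixes positive and negative entries:
others = np.delete(vecs, i, axis=1)
print([(c > 0).any() and (c < 0).any() for c in others.T])   # all True
```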

    • Note that for stochastic and quantum operators, if H v = r v then k \cdot v is an eigenvector of H also with eigenvalue r. In addition, if H v = r v then k \cdot H has eigenvalue k \cdot r. This means that we could have scaled H to be 4 H instead of 2 H which I did by mistake in the last comment. There is no harm in scaling operators like this — physically it will change only the time scales of the problem.

    • Greg Egan says:

      The matrix H here isn’t infinitesimal stochastic. What you’ve described is interesting, and maybe it’s related somehow to Dirichlet operators, but the Dirichlet operator for the graph associated with your matrix H would need some negative diagonal entries so that every column summed to zero.

      • John Baez says:

        When we’re computing the power used by an electrical circuit, namely

        \displaystyle{ P = \sum_{i, j} H_{i j} (\psi_i - \psi_j)^2 }

        we can leave out the diagonal (i = j) terms without changing the answer. But when we’re using H to describe a Markov process, the diagonal terms really matter—though of course they’re determined by the other terms, by the constraint that H is infinitesimal stochastic.

        By the way, the formula I just wrote for P makes it blitheringly obvious that P doesn’t depend on the diagonal entries of H. The same fact seems almost paradoxical if we use this other formula:

\displaystyle{ P = - 2 \sum_{i , j} H_{i j} \psi_i \psi_j }

        But the point is that to derive this other formula, we needed to use the fact that H is self-adjoint and infinitesimal stochastic!

In my previous comment, I wanted to give a quick example of something I found rather perplexing: the strange oddity where you end up with forbidden eigenstates in stochastic mechanics; moreover, these forbidden states are perfectly fine in quantum mechanics! These eigenstates have negative entries in the expansion of their coefficients. Because of the L^1 structure of measurements in classical mechanics, eigenstates with negative coefficients are not allowed to be states of a valid physical system. Quantum mechanics enjoys the structure of an L^2 theory. In quantum mechanics, all eigenstates of a Hamiltonian operator are allowed.

In contrast to quantum mechanics, it is on the other hand impossible for a stochastic system to be in an eigenstate which has negative entries. Such states are forbidden in stochastic mechanics as they are not valid probability distributions.

What’s more, from the Perron-Frobenius theory of non-negative matrices we can identify certain general conditions that predict the existence of a unique eigenstate that will be the only classically allowed probability distribution. All other eigenstates would contain at least one negative coefficient. In other words, in stochastic mechanics, it is possible that there would be only one valid physical eigenstate of a system. I aim to give an example of these circumstances.

Such an example is something rather interesting, worthy of its own complete blog post and not just a comment. Also, Greg is clearly correct, and the last example was slightly outside the class of operators that I was considering. Just to quickly review what I was after:

          H is an intensity matrix or infinitesimally stochastic if

          \sum_j H_{ij}= 0

          \forall i\neq j, ~H_{ij}\geq 0

          In addition to this, H will be a valid quantum operator if

          H = H^\dagger

          Now I wanted to give an example of an operator where the Perron-Frobenius Theory applies, to find some of these perplexing eigenstates. To do this I will consider operators H that are infinitesimally stochastic with the following form.

          H = H' - kI

where I is the identity matrix and k is real and positive. Note that subtracting k I from H' only shifts the spectrum by -k and in particular does not change the eigenvectors. Now we can place some conditions on H'.

(i) \forall i, ~H'_{ii} = 0

          (ii) H' is strongly connected

(iii) \sum_j H'_{ij}= k

Now all I need to do is show that such operators exist! Hmmmn. It turns out to be straightforward to find one in every dimension d. Let Q be the matrix with all entries 1, then

          H' = Q - I

H' is the adjacency matrix of the complete graph with all edges having weight 1, and so H' is strongly connected. Its rows sum to k = d - 1, so in this case

H = H' - (d-1)\cdot I

          and H is readily shown to be infinitesimally stochastic. There are also other possibilities. For instance, if we let

          H'=\left( \begin{array}{ccccc}  0 & 1 & 1 & 0 & 0 \\  1 & 0 & 0 & 1 & 0 \\  1 & 0 & 0 & 0 & 1 \\  0 & 1 & 0 & 0 & 1 \\  0 & 0 & 1 & 1 & 0 \end{array} \right)

          then H' -2 I is infinitesimally stochastic. The corresponding graph of H' is strongly connected:

          and so Perron-Frobenius Theory applies. We find that the Perron root or maximal eigenvalue is given as 2 and the corresponding eigenvector as \{1, 1, 1, 1, 1\} . All other eigenvalues are less than 2. All other eigenstates are confirmed to contain at least one negative entry. The other eigenstates individually are forbidden states of the corresponding stochastic system. This was all a bit fast, but soon I’ll post a more complete and detailed discussion of these interesting operators.
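A quick numpy check of this example (a sketch):

```python
import numpy as np

Hp = np.array([[0., 1., 1., 0., 0.],
               [1., 0., 0., 1., 0.],
               [1., 0., 0., 0., 1.],
               [0., 1., 0., 0., 1.],
               [0., 0., 1., 1., 0.]])

H = Hp - 2 * np.eye(5)
print(np.allclose(H.sum(axis=0), 0))   # infinitesimally stochastic: True

vals, vecs = np.linalg.eigh(Hp)
print(vals.max())                      # Perron root: 2.0
print(vecs[:, np.argmax(vals)])        # proportional to (1,1,1,1,1),
                                       # up to sign and normalization
```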

  6. David Corfield says:

    Minor point, but when you write of a complex matrix

    H is infinitesimal stochastic if its columns sum to zero and its off-diagonal entries are nonnegative,

    is it conventionally understood that if a complex number is said to be nonnegative, that it is real and nonnegative? I know the order relation makes no sense on the complex numbers as a whole, but the wording seems odd to me.

    • John Baez says:

      David wrote:

      Is it conventionally understood that if a complex number is said to be nonnegative, that it is real and nonnegative?

      I thought so. Anyway, that’s what I meant.

      Perhaps this usage is more common when applied to linear operators on complex Hilbert spaces: for example, everyone agrees a linear operator

      T: \mathbb{C}^n \to \mathbb{C}^n

      is ‘nonnegative’ if

      \langle \psi, T \psi \rangle \ge 0

      for all \psi \in \mathbb{C}^n.

Taking n = 1, you see that a 1×1 matrix, i.e. a complex number, is ‘nonnegative’ if it’s real and nonnegative. However, I was trying to be clear, not show off my sophistication! So, I’ll change ‘nonnegative’ to ‘real and nonnegative’.

Thévenin’s theorem? Interesting point! In the theory of (non-)deterministic automata (http://en.wikipedia.org/wiki/Nondeterministic_finite_automaton) there is a construction that gets you an equivalent deterministic machine (the so-called power set construction). Such input-output equivalences seem to be a common theme, and it might indeed be worth checking whether they have a place somewhere in your theory.

    • John Baez says:

In a paper I’m writing on electrical circuits, I construct a category where the morphisms are electrical circuits of a certain sort. I also construct a category where the morphisms are ‘input-output equivalence classes’ of such circuits. In the latter, you treat an electrical circuit as a ‘black box’: you don’t care about what’s inside, just what it does. There’s a ‘forgetful functor’ from the first category to the second, where we forget the details inside the black box.

      So, Thévenin’s theorem and Norton’s theorem can be seen as theorems about the functor from the first category to the second: any morphism in the image of some class of circuits is actually in the image of some smaller class.

      In case someone here forgot:

      Thévenin’s theorem says that any combination of voltage sources, current sources, and resistors with two terminals is electrically equivalent to a single voltage source and a single resistor in series. Norton’s theorem says that any collection of voltage sources, current sources, and resistors with two terminals is electrically equivalent to a single current source and a single resistor in parallel.

      And yes, all this should be an example of a very common theme! For any sort of gizmo made out of parts, we should have a choice as to whether we care about its inner workings or treat it as a ‘black box’. This should give two different categories C and D and a forgetful functor F : C \to D, and we can ask when morphisms in the image of some class of morphisms of C are actually in the image of some smaller class.

      It would be nice to collect lots of known theorems of this sort, and see if lots of them are special cases of a few general results. That’s the kind of stuff I want to do.

      A very simple theorem of this sort, which applies to what I’m talking about today—circuits made only of resistors!—lies behind the Y-Δ transform. It says any circuit like the one at left is equivalent, ‘as a black box’, to a circuit like the one at right:

      This can be seen as a theorem about Dirichlet operators!

  8. jamievicary says:

    Not much to say here, other than that I’m enjoying following along!

    It surprised me briefly that there was no interesting dynamics for charges moving through resistors. I suppose the reason for that is that we have no inductance in our circuits, so charge can reach its equilibrium configuration infinitely quickly. But I would expect it’s still interesting to consider how much energy would be dissipated as an un-equilibrated charge density redistributes itself.

    • John Baez says:

      Jamie wrote:

      It surprised me briefly that there was no interesting dynamics for charges moving through resistors.

      Yes, it puzzled me too—and in a way it still does, since a Dirichlet operator H, which describes a circuit made of resistors, can be used to formulate an equation like the heat equation

      \displaystyle{ \frac{d}{d t} \psi = H \psi }

      thinking of H as a discrete analogue of the Laplacian.

      However, as you note, we need inductance in our circuits to make current want to keep flowing in them, and get an equation like the above to hold.

      I posed a version of this puzzle back in ‘week294’. I said, suppose you have a loop of wire with a very small resistance, and some current flowing in it: what will happen? Some answers start here.

  9. This reference is doubly outdated:

    • M. Fukushima, Dirichlet Forms and Markov Processes, North-Holland, Amsterdam, 1980.

    • M. Fukushima, Y. Oshima, M. Takeda, Dirichlet Forms and Symmetric Markov Processes, 2nd rev. and ext. ed. (1st ed. 1994), De Gruyter, 2010.

    • John Baez says:

      I found the newer editions to be more cluttered and less clear than the old one! There’s a Zen saying “first thought best thought”, which is sometimes true for books. But thanks: our readers should be allowed to choose for themselves.

  10. John Baez says:

    Massimo Ostilli visited the CQT yesterday and spoke about complex networks, which for him (and many people) means the study of large random graphs. He also pointed out that we can use a random walk to compute \exp(t H) for any matrix H with nonnegative off-diagonal entries, essentially by writing

    H = A + B

    where A is infinitesimal stochastic and B is diagonal. To compute a matrix element \exp(t H)_{i j}, we can use A to define a random walk and sum over paths from j to i, weighted by their probability of occurrence, but also weighted by a factor of \exp(s B_{k k}) for each time the walk sits at the vertex k for a time s.
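Here's a rough Monte Carlo sketch of this recipe (Python with numpy/scipy; the function name and the little 2×2 test matrix are my own illustration, not from the paper):

```python
import numpy as np
from scipy.linalg import expm

def exp_tH_column(H, t, j, samples=100_000, seed=0):
    """Estimate column j of exp(tH), for H with nonnegative off-diagonal
    entries, via the random walk generated by its stochastic part."""
    rng = np.random.default_rng(seed)
    n = H.shape[0]
    A = H.copy()
    np.fill_diagonal(A, 0.0)         # jump rates: rate of k -> i is H_{ik}
    rates = A.sum(axis=0)            # total rate of leaving each state
    b = np.diag(H) + rates           # diagonal part B = H - A; b[k] = B_{kk}
    est = np.zeros(n)
    for _ in range(samples):
        k, s, w = j, t, 1.0
        while True:
            tau = rng.exponential(1 / rates[k]) if rates[k] > 0 else np.inf
            if tau >= s:             # the walk sits at k until time t
                w *= np.exp(b[k] * s)
                break
            w *= np.exp(b[k] * tau)  # weight for a sojourn of length tau at k
            s -= tau
            k = rng.choice(n, p=A[:, k] / rates[k])   # jump to a new vertex
        est[k] += w
    return est / samples

H = np.array([[-1.0, 2.0],
              [ 1.0, -0.5]])         # nonnegative off-diagonal entries
print(exp_tH_column(H, 0.5, 0))      # Monte Carlo estimate
print(expm(0.5 * H)[:, 0])           # exact column, for comparison
```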

    According to Ostilli this idea is called ‘the exact probabilistic representation of lattice quantum systems’, and it’s described here:

    • Matteo Beccaria, Carlo Presilla, Gian Fabrizio De Angelis and Giovanni Jona-Lasinio, An exact representation of the fermion dynamics in terms of Poisson processes and its connection with Monte Carlo algorithms, Europhys. Lett. 48 (1999), 243-249.

This idea was apparently generalized to matrices with some negative off-diagonal entries, here:

    • Matteo Beccaria, Carlo Presilla, Gian Fabrizio De Angelis and Giovanni Jona-Lasinio, Evolution of fermionic systems as an expectation over Poisson processes, in Recent Progress in Many-Body Theories, Series on Advances in Quantum Many-Body Theory – Vol. 3, edited by R. F. Bishop, K. A. Gernoth, N. R. Walet, Y. Xian (World Scientific, Singapore, 2000), pp. 461-464.

    • Matteo Beccaria, Carlo Presilla, Gian Fabrizio De Angelis, Giovanni Jona-Lasinio, Probabilistic representation of fermionic lattice systems, Nucl. Phys. B – Proceed. Suppl. 83-84 (2000), 911-913.

  11. […] In Part 16 of John Baez’ series on Network Theory, he discussed electrical networks. […]

  12. We recall from Part 16 that there is a class of operators called ‘Dirichlet operators’ that are valid Hamiltonians for both stochastic and quantum mechanics […]

  13. amarashiki says:

    Three questions:

1st. How could you include the Dirac equation, the Maxwell equations and the Einstein field equations on the same footing as you did with the Schrödinger, Poisson and heat equations?

    2nd. What about a discrete nonlinear foundation of memdevices related to the whole picture?

    3rd. Graph theory, network theory, matroid theory and categories. What about hypergraphs of Petri nets?

    • John Baez says:

      These questions are very big. I can’t answer them except to say that 1) I’m slowly working to unify everything that naturally fits into ‘network theory’, so stay tuned to this series of posts, and 2) I know how Maxwell theory fits into this game, but these days I’m more interested in ecology, biology and chemistry than so-called ‘fundamental’ physics, so I probably won’t try to bring the Dirac equation and Einstein’s equations into this story.

  14. amarashiki says:

I think I DO know how to make the Maxwell equations fit into the game as well…I read long ago the discrete EM talk of one student you had…And moreover, I have read how a Schrödinger-like equation for E and B can serve as a guess for the Maxwell equations…Einstein’s field equations seem to be a harder task but also possible. And Dirac’s equation should also be built from the known root-of-laplacian procedure…

About your answer to the other 2 questions, I am only a little disturbed, ;). But yes, I know you have moved away from your fundamental issues…Not me, … I am a theoretical physicist, I don’t know how to move away from it…

I AM tuned to your blog…Just subscribed long ago ;) So, you have an unbiased follower …I think I mentioned before that I have followed you since my undergraduate times (your TWF will always be remembered and admired). Keep doing this superb blogging job John, it is not easy at all.

    Finally, concerning the third question, and your mysterious comment about using network theory as a global tool/framework…Have you thought about network theory using hypergraph theory?

    • John Baez says:

      I’m glad you enjoyed This Week’s Finds in Mathematical Physics, and also my new stuff.

      If you haven’t read about the Feynman checkerboard discretization of the Dirac equation in 1+1 dimensions, and its extensions to higher dimensions, you might be interested in those.

      In 3+1 dimensions, it’s nice to use a hypercubic lattice in which each edge is lightlike. This lattice gives a coordinate system in which the Minkowski metric is

      \left( \begin{array}{cccc} 0 & 1 & 1 & 1 \\  1 & 0 & 1 & 1 \\  1 & 1 & 0 & 1 \\  1 & 1 & 1 & 0 \end{array} \right)

      There’s something hilarious about this, but it’s also very pretty. We get zeroes down the diagonal because as we move forward in time, 4 edges move outwards from each vertex at the speed of light. And they move outwards like the corners of an expanding regular tetrahedron, so there’s perfect symmetry: that’s why all the off-diagonal entries are equal. A recent paper is here.

      I don’t know anything about hypergraphs! Maybe I should fix that someday.

  15. David Tanzer says:

    This is great stuff!

For a voltage vector V in \mathbb{R}^n, you show that \langle H(V), V \rangle is (up to sign and a constant factor) the power consumed by the circuit.

    We can also give a substantial interpretation of H(V) itself: it is the vector I of currents that are induced by the voltage vector. Here the signs are oriented so that I_j is the net flow of current into the jth node of the circuit.

    Let’s illustrate with an example with three nodes x_1, x_2, x_3, with a 1 ohm resistor connecting each pair of points.

    Let the voltage vector be (a,b,c).

    And let

    H = \left( \begin{array}{ccc} -2 & 1 & 1 \\  1 & -2 & 1 \\   1 & 1 & -2 \end{array} \right)

    Then:

H(a,b,c) =
(-2a + b + c, \; a - 2b + c, \; a + b - 2c) =
((b-a) + (c-a), \; (a-b) + (c-b), \; (a-c) + (b-c)) =
(\mathrm{inflow \; into \;} x_1, \; \mathrm{inflow \; into \;} x_2, \; \mathrm{inflow \; into \;} x_3) = \mathrm{the \; current \; vector \;} I.

Now, given that H(V) = I, it follows as a natural corollary that \langle V, H(V) \rangle is (up to sign and a factor of 2) the power consumed by the network.
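A quick numpy check of this example (a sketch, with made-up voltages):

```python
import numpy as np

H = np.array([[-2.,  1.,  1.],
              [ 1., -2.,  1.],
              [ 1.,  1., -2.]])

V = np.array([3., 1., 2.])    # made-up voltages (a, b, c)
I = H @ V                     # net current flowing INTO each node
print(I)                      # [-3.  3.  0.]
print(I.sum())                # 0.0: total inflow balances total outflow
```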

    • David Tanzer says:

      .. which leads me to the following question.

      As an “observable,” this electrical Hamiltonian H should be described as “power,” because for a state V, \langle H(V), V \rangle = \langle I, V \rangle = the power.

      So for the observable H called power, H(V) is the current.

      Can this perspective be carried back into quantum mechanics, so that for other observables O, we can give a physical interpretation of O(x) as something analogous to the “current” that is induced by the “potential” x?

      Here is the root question. If x is a function that assigns quantum mechanical amplitudes to conditions, and O is an observable, then what can be said about the general interpretation of O(x)?

    • John Baez says:

      I fixed your first comment following your corrections, put it into LaTeX (in a rather ugly and lazy way), and deleted your corrections.

      The idea about H(V) being the current associated to the voltage vector V is well-known in certain circles; I can give you references if you want. But it’s definitely worth saying!

      This, however, seems new to me:

      Can this perspective be carried back into quantum mechanics, so that for other observables O, we can give a physical interpretation of O(x) as something analogous to the “current” that is induced by the “potential” x?

      I guess my only answer is: let’s try it and see if we can extract anything interesting from this viewpoint. It sounds like a nice idea! But, we’ll need to do something with it, to see what it can do for us.

      Here is the root question. If x is a function that assigns quantum mechanical amplitudes to conditions, and O is an observable, then what can be said about the general interpretation of O(x)?

      Right!

      In general physicists are fairly close-mouthed about this. When as a student you hear that an observable in quantum mechanics is a self-adjoint operator O, the first thing you want to know is what you get when you apply this operator to a state x. That is, what’s the physical interpretation of Ox? But usually your teachers side-step this question. At least, all of mine did!

      This is especially frustrating because when you have two operators that don’t commute, like position Q and momentum P, it would be nice to know what it means that

      P Q x \ne Q P x

As a student, you can’t help but ask: is this telling us something about what happens if we first measure position and then momentum, versus first measuring momentum and then position? But a clear answer is rarely forthcoming.

Here’s what I can say. Any observable O gives a 1-parameter family of symmetries \exp(i s O), that is, unitary operators depending on a real parameter s. For example, if O is momentum, \exp(i s O) describes spatial translation by a distance s. Thus the quantity

      \exp(i s O) x

      has a clear meaning: it’s the state we get by applying the symmetry \exp(i s O) to the state x. For example, if x is the state of an atom and O is momentum, \exp(i s O) x is the state of the atom after it’s been moved a distance s.

Then

      O x = -i \frac{d}{d s} \exp(i s O) x \vert_{s= 0}

is basically the rate at which the state x changes as we apply the symmetry \exp(i s O) and change s. E.g., the rate at which the atom’s state changes as we move it along.

      That’s the best I can do to explain the meaning of O x. Maybe someone else can do better. Of course we all know what \langle x, O x \rangle means: it’s the expected value of the observable O in the state x. But we’re wondering about O x all by itself!
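A small numerical illustration of that last formula (a sketch; the random observable and state are made up):

```python
import numpy as np
from scipy.linalg import expm

rng = np.random.default_rng(0)
M = rng.standard_normal((4, 4)) + 1j * rng.standard_normal((4, 4))
O = (M + M.conj().T) / 2                  # a random self-adjoint observable
x = rng.standard_normal(4) + 1j * rng.standard_normal(4)

s = 1e-6
deriv = (expm(1j * s * O) @ x - x) / s    # finite difference in s at s = 0
print(np.allclose(-1j * deriv, O @ x, atol=1e-4))   # True
```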

      So now it’s your turn: see if you can understand the relation between x and Ox as somehow analogous to the relation between voltage and current!

  16. David Tanzer says:

It is physically clear that any voltage vector V_0 which is constant on each of the connected components will produce zero current. Hence V_0 is a null vector of H.

  17. David Tanzer says:

To give a full physical interpretation of the Hamiltonian dynamics on this network of resistors, we should connect a capacitor between each node and a ground point. Give them all the same unit capacitance. This is what will make the current H(V)(j) into node j actually produce a rising voltage V(j) at node j – this condition is needed to fulfill the equation dV/dt = H(V).

Intuitively, we see that from any given initial state of the network, given by a voltage vector V, the network will asymptotically charge/discharge to an equilibrium state where, for each connected component of the graph, all voltages are the same.

For each connected component C, the final voltage is easily calculated, by dividing the total charge in C by the number of nodes in C:

FinalVoltage(C) = \sum_{j \in C} V(j) / |C| = mean(V over C).

The capacitor + resistor circuit gives a physical picture of eigenvectors and non-eigenvectors. An eigenvector is a voltage vector V that heads in a “straight line” towards the equilibrium vector. A non-eigenvector does some “turning” as it heads towards the equilibrium vector.
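A numpy/scipy sketch of this relaxation picture (the path graph and initial voltages are made up):

```python
import numpy as np
from scipy.linalg import expm

# connected path graph on 4 nodes, unit conductances and capacitances
A = np.diag(np.ones(3), 1) + np.diag(np.ones(3), -1)
H = A - np.diag(A.sum(axis=0))

V0 = np.array([4., 0., 1., -1.])   # made-up initial voltages
V = expm(10.0 * H) @ V0            # evolve dV/dt = H V for a long time
print(V)                           # ~ [1. 1. 1. 1.]
print(V0.mean())                   # 1.0, the predicted final voltage
```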

  18. nad says:

    John wrote:

    The idea about H(V) being the current associated to the voltage vector V is well-known in certain circles; I can give you references if you want. But it’s definitely worth saying!

The entries \psi_i of \psi are electric potentials, so it seems that H_{ij}(\psi_i - \psi_j) would be the current along the edge ij.

Moreover I feel a little uneasy calling the above an electrical circuit; that is mainly because I haven’t seen a discussion of Kirchhoff’s laws, but maybe I have missed that.

    By Kirchhoff’s second law one would have the constraints:

    \sum_{loop}(\psi_i - \psi_j)=0

    and for those configurations of \psi‘s which satisfy the second law it seems one would need to demand for Kirchhoffs first law that:

    \sum_j H_{ij}(\psi_i - \psi_j) =0,

which gives a constraint on the entries of H, which may eventually not be preserved during a time evolution of the sort indicated above. ????

    Warning: my physics diploma was 22 years ago and I haven’t done too much physics in the meantime.

    • nad says:

      sorry I also forgot the latex.

      • John Baez says:

        I’ll fix it. This is a good time to remind everyone that there’s a big warning when you type in a comment here, saying:


        You can use HTML in your comments. You can also use LaTeX, like this: $latex E = m c^2 $. The word ‘latex’ comes right after the first dollar sign, with a space after it.

        • nad says:

          I’ll fix it. This is a good time to remind everyone that there’s a big warning when you type in a comment here, saying:

Since there are so many commercials on webpages, my brain starts blanking out even big buttons.

          Thanks for the link to your paper on electrical circuits. You wrote:

          But since any real Hilbert space is equipped with a canonical isomorphism to its dual, we get isomorphisms
          r:C_0(\Gamma) \to C^0(\Gamma) \qquad r:C_1(\Gamma) \to C^1(\Gamma)

          I understand what r:C_1(\Gamma) \to C^1(\Gamma) means physically, but I haven’t yet understood what r:C_0(\Gamma) \to C^0(\Gamma) means physically. Or in other words: Given a real engineering-like electrical circuit I thought one is usually only given r:C_1(\Gamma)\to C^1(\Gamma) and the boundary conditions. ?

        • John Baez says:

Hi! By the way, your LaTeX comments didn’t work until I fixed them because you included the UNICODE symbol

→

inside dollar signs; LaTeX doesn’t know how to handle that. The LaTeX symbol for a right-pointing arrow is

          \to

          so I replaced → with \to everywhere and now it’s fine.

Since there are so many commercials on webpages, my brain starts blanking out even big buttons.

          Even ones you helped create! I see now why you wanted it to be orange.

          I haven’t yet understood what r:C_0(\Gamma) \to C^0(\Gamma) means physically.

          It’s hard to say what it means, because it’s very boring: it doesn’t involve any arbitrary choices. The vector space C_0(\Gamma) has a basis given by the vertices of the graph \Gamma. In my notes, I give C_0(\Gamma) an inner product where this basis is orthonormal. As usual, an inner product gives an isomorphism between a finite-dimensional vector space and its dual. The dual of C_0(\Gamma) is called C^0(\Gamma). So, we get an isomorphism

          C_0(\Gamma) \cong C^0(\Gamma)

          which I call

          r:C_0(\Gamma) \to C^0(\Gamma)

          This is very different than the case of

          r:C_1(\Gamma) \to C^1(\Gamma)

which has an interesting physical meaning. This again comes from an inner product on C_1(\Gamma). But to define the inner product on C_1(\Gamma), we start by arbitrarily choosing for each edge of the graph \Gamma a positive number called its resistance. To define the inner product on C_0(\Gamma), we don’t choose anything!

          This may seem weird, but it works wonderfully—and by the way, it’s not something I invented. It was Hermann Weyl who first applied ideas of homology, cohomology and Hodge theory to electrical circuits made of resistors, and lots of other people have developed these ideas ever since. I give lots of references.

        • Nad says:

          Even ones you helped create! I see now why you wanted it to be orange.

orange was a suggestion. In fact such a color depends also on the overall style of a blog; moreover, if you want to keep a high attention factor you would probably need to change this color regularly. That’s one reason why the fashion industry keeps changing the colors which are en vogue.

I meanwhile sometimes cover parts of my screen, because otherwise I couldn’t read a text because of all those blinking commercials. If this commercial flooding trend continues then I could imagine that they place blinks inside texts, and then I would probably cease to read those texts, like I did with TV when they started to heavily interrupt films with commercials; for over 20 years now I haven’t had a TV, and thus I rarely watch it even at friends’ places. I understand that one likes to use ads for financing, however things can be overdone.

          In my notes, I give C_0(\Gamma) an inner product where this basis is orthonormal.

yes, but this seems to be a choice. That is – I haven’t really investigated the whole thing thoroughly, but it seems to me that you could also use non-orthonormal bases. This would of course enter the Laplacian via d^* and it seems at first glance to be related to a gauge transformation? In particular I wonder to what extent this could be used to include magnetism in that framework**? Did Weyl write something on that? I didn’t find a free copy of the paper and I currently don’t have a library account (apart from the fact that my Spanish is not overly good…)

          I wish I had known about that work of Weyl and that I could have read a text like yours already during my studies. I actually used the discrete one-forms on graphs in one of my articles. The geometers in Berlin treated the cohomology theory of graphs as a kind of mathematical folklore, but I always wondered where it came from, so I actually cited Vladimir Arnold’s book on classical mechanics (one of the few textbooks I actually own) as a source, because that was the oldest resource I could find on that issue (at least in a decent time).

          However Arnold didn’t seem to have cited Weyl. Do you know where this cohomology theory on graphs was treated earliest? Did Weyl cite someone there?

          **I am also not sure whether you really want comments on that – it seemed to me that you wanted to develop something like this for yourself. (I usually don’t like it if I try to develop something and then someone is constantly saying you could do it in such and such way, that’s also why things like the polymath project do not look very attractive to me, but people are different.)

        • John Baez says:

          Nad wrote:

          yes, but this seems to be a choice.

          Everything we do in math is a choice, but what I meant is that this particular recipe works regardless of which circuit one is considering, without requiring any additional information like the resistance of the edges.

          I agree, though, that it would be fun to examine other choices, and see if there’s an interesting concept analogous to resistance that applies to the vertices of the electrical circuit! The concept obviously exists—I can see that now, thanks to you—but I don’t know what it means or how interesting it is.

          This would of course enter the Laplacian via d^* and it seems on a first glance to be related to a gauge transformation?

          Hmm, it seems this sort of transformation would have the effect of changing the electrostatic potential \phi(x) to a new one f(x) \phi(x) where f is a positive function of the vertices of the graph. This would be some funny sort of ‘gauge transformation’ that’s not usually considered in electromagnetism.

          I am also not sure whether you really want comments on that – it seemed to me that you wanted to develop something like this for yourself.

          I blog because I want comments! Of course I don’t feel obliged to do what people suggest. I have this paper worked out in my mind and don’t plan to change it much—I just want to finish the darn thing and publish it. I’ve been sitting on it for over 2 years. But often comments affect what I do. Maybe this one will too, eventually. But first I’d need to understand the physical meaning of picking different inner products on C_0(\Gamma). I doubt I’ll figure that out before finishing this paper—it feels like the subject of some other paper.

          Do you know where this cohomology theory on graphs was treated earliest?

          Cohomology of graphs is probably very old, but I don’t know when it was first studied. I haven’t actually found Weyl’s paper, so I don’t know who he referred to. All the interesting references I know on cohomology and electrical circuits are included in my article.

        • nad says:

          where f is a positive function of the vertices of the graph.

I don’t see at the moment why f(x) needs to be positive. In particular one could perhaps include complex transformations, which I could imagine could mimic, in a linearized version, the term i \cdot e \cdot \mathrm{vector \; potential}. But as said this is just a quick guess.

          I blog because I want comments!

I usually want comments only if I ask for them. I usually (knock on wood) have no problems getting new ideas. For me the hard part is often rather to sort out which ideas to follow, whether I should try to get better ideas (which might be strenuous), and to sit down and work things out. It is even often so that the pressure to let ideas come out is so big that this process hinders the working-out process. When I was a child people rather often said: “Die Fantasie geht schon wieder mit ihr durch” (“Fantasy is again bolting her”).

I think this is also a reason why I don’t always like people to comment without being asked: often I already had the same idea anyway, and if people comment directly to you they somehow set a claim that they could work on the same thing. Especially if this is about the development of novelties, it depends really on the person whether this ends up in a rat race. I often rather prefer to stop working on something in this case (if only for the reason that I am slow in working out and that I like to do other things in parallel). Here, by the way, the incentives also play a role. If novelty is a strong incentive to work something out then a set claim may be a killer.

          I have also had cases where people took an idea of mine, immediately sat down, worked it out overnight, and then presented it as their own work the next day. And in fact, once an idea is there, it is often hard to judge how hard it was to come up with it and how much one should attribute to the person who had it. I usually try to cite people’s ideas if they do not appear too trivial (but as said, that is of course a quite debatable issue). If people have made their work public, then they have set a claim, and depending on how serious this claim is, you have to take that into account.

          And of course often enough you may not even know about such a publication (or you may not have understood it because it is written in some obscure language). Here it is often important that the way you came up with your work is documented, because this may make clear that you were not just copying someone’s work.

          That’s why I think it would be good to have something like a closable archive, where the development process can be better evaluated, and where you could decide for yourself whether you constantly want to set claims (and, depending on the potentials, possibly enter a rat race) or whether you stay silent and let others work on the same issues in parallel. Having different treatments of a subject might be beneficial, as one can see from your paper on electrical circuits.

          So it seems:

          • H. Weyl, Repartición de corriente en una red conductora, Rev. Mat. Hisp. Amer. 5 (1923), 153-164.

          can only be found buried in some library. I hope there are still some copies left.

          I am actually rather interested in this discrete treatment of physical quantities (my Ph.D. thesis had a similar background), and I had seen that you have some work on discrete Maxwell’s equations with Derek Wise. Is this going to be included in your current paper on electrical circuits, or in a follow-up?

    • John Baez says:

      Nad wrote:

      Moreover I feel a little uneasy calling the above an electrical circuit; that is mainly because I haven’t seen a discussion of Kirchhoff’s laws, but maybe I have missed that.

      Good point!

      Kirchhoff’s voltage law

      \displaystyle{ \sum_{\mathrm{loop}} (\psi_i - \psi_j) = 0 }

      is automatically true in this formalism: it just says things like

      (\psi_1 - \psi_2) + (\psi_2 - \psi_3) + (\psi_3 - \psi_1) = 0

      Kirchhoff’s current law follows automatically when the voltage minimizes the power

      -2 \langle \psi, H \psi \rangle

      subject to constraints on the voltage \psi at certain vertices called the ‘terminals’ of our circuit. More precisely, Kirchhoff’s current law will hold at all vertices except these terminals: current flows in and out of these terminals.

      To see a proof of this (known) fact, try my paper on electrical circuits. It’s just a draft, and it’s not very easy to read, but someday I’ll make it nicer and start blogging about it here.

      There’s a lot more to say…

  19. Blake Stacey says:

    I think this is where the article was reprinted in Weyl’s collected works.

    • nad says:

      Thanks Blake, my Google algorithm unfortunately didn’t reveal that result… I should probably write a complaint letter to the head of the Google Books search algorithm division :)

      So it seems that if one doesn’t have access to a library, one has to pay:
      439 euros!

      There go the summer holidays.

      The Weyl book seems to be quite a cash cow for Springer, given that this book series has been on the market for almost 50 years and that probably every scientific library in the world has bought it (or should have?).

      What was wrong with Hilbert and Weyl?

    • Blake Stacey says:

      There’s an interesting anecdote about that work of Weyl:

      [Robert Kotiuga]: Beno, there is something I really don’t understand about Hermann Weyl.

      [Beno Eckmann]: What is it?

      RK: Well, in his collected works, there are two papers about electrical circuit theory and topology dating from 1922/3. They are written in Spanish and published in an obscure Mexican mathematics journal. They are also the only papers he ever wrote in Spanish, the only papers published in a relatively obscure place, and just about the only expository papers he ever wrote on algebraic topology. It would seem that he didn’t want his colleagues to read these papers.

      BE: Exactly!

      RK: What do you mean?

      BE: Because topology was not respectable!

      RK: Why was topology not respectable?

      BE: Hilbert!

  20. John Baez says:

    Nadja wrote:

    I don’t see at the moment why f(x) needs to be positive.

    I assume the resistance of each edge is positive because then they make the space of 1-chains C_1(\Gamma) into an inner product space. Similarly I would assume this function of each vertex is positive so that they make the space of 0-chains C_0(\Gamma) into an inner product space. Why? Because if I have a chain complex of inner product spaces I can hit it with the usual tools of Hodge theory, as I’ve done in this paper of mine. One could try to generalize but I feel no desire to.
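
    Concretely, given the power interpretation that comes up below, the inner product on 1-chains presumably takes the form

    \displaystyle{ \langle I, I' \rangle = \sum_e R_e I_e I'_e }

    so that \langle I, I \rangle = \sum_e R_e I_e^2 is the total power dissipated by the current I; this is positive definite exactly when every resistance R_e is positive. (This formula is my own spelling-out, not a quote from the paper.)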

    I usually want comments only if I ask for them.

    I always want comments even if only 10% are useful to me. When a bunch of smart people start talking, they get to know each other and start trading ideas and attracting other smart people… and it becomes easier for me to find out answers to questions, and find good collaborators.

    You’re right that having a crowd of people talking about similar things causes various problems. In a way this is why I quit working on n-categories: too many very smart people were starting to work on them, and I needed more ‘open space’ and solitude. But right now I’m trying to learn lots of things and invent a slightly new branch of math that borrows ideas from dozens of existing subjects, so it’s good for me to hear lots of people’s ideas – people from different intellectual communities.

    I had seen that you had some work on discrete Maxwell’s equations with Derek Wise, is this going to be included in your current or a follow up paper on electrical circuit work?

    The electrical circuit project could easily expand to include work on discretized Maxwell equations—Derek’s work with me on chain field theory is always on my mind these days—but I don’t think I’ll go in that direction anytime soon: I’m more interested in the relation between electrical circuits and Markov processes. I’m trying to set up a theory of networks that includes lots of very practical applications.

    • nad says:

      I wrote:

      I usually want comments only if I ask for them.

      It seems I was unclear. I meant comments on work in progress. Comments on other things are of course OK. (I thought this was clear from the context of the rest of the comment, but it seems not.)

      I assume the resistance of each edge is positive because then they make the space of 1-chains C_1(\Gamma) into an inner product space.

      I don’t know enough about Hodge theory to be able to judge to what extent it would fail if one used Hermitian forms as the inner product. There must be some trouble (the need for symmetry?), because otherwise someone would already have formulated such a generalized Hodge theory. So I understand that you do not feel like going through those troubles.

      What is “p-form electromagnetism”? Is this article about some discretization of QED?

      I had actually meant some other work, which was somewhere on your homepage, about discrete Maxwell equations.

      You’re right that having a crowd of people talking about similar things causes various problems. In a way this is why I quit working on n-categories: too many very smart people were starting to work on them, and I needed more ‘open space’ and solitude.

      Most people at the n-category cafe speak in a language which is totally foreign to me.

      • John Baez says:

        Nad wrote:

        I meant here comments to work in progress. Comments to other things are of course OK. (I thought this was clear by the context of the rest of the comment, but it seems not.)

        It was clear. In my last comment I was talking about why I like to blog about work in progress, like this ‘network theory’ project. One reason I blog about work in progress is that I like to build up a community of experts who tell me when I’m making mistakes or overlooking interesting facts.

        John wrote:

        I assume the resistance of each edge is positive because then they make the space of 1-chains C_1(\Gamma) into an inner product space. […] Why? Because if I have a chain complex of inner product spaces I can hit it with the usual tools of Hodge theory, as I’ve done in this paper of mine.

        Nad wrote:

        I don’t know enough about Hodge theory in order to be able to judge how much this Hodge theory wouldn’t be applicable when using hermitian forms as an inner product. There must be some troubles (need of symmetry?), because otherwise someone would have formulated such a generalized Hodge theory already.

        It’s not the symmetry, but the positive definiteness of the inner product that would go away if we let our resistances be negative numbers. Remember, the inner product \langle I, I \rangle is the power consumed by a circuit when we run the current I \in C_1(\Gamma) along the wires. The power consumed by each wire equals the square of the current along that wire times the resistance of that wire. If the resistance were negative this power could be negative! Then \langle I , I \rangle could be negative.

        This would not only be unphysical, it would mess up the Hodge theory. If you have an ‘inner product’ that’s not positive definite on your p-chains, the inner product on p-cochains won’t be either. And then you can’t really do most of the interesting things people do with Hodge theory, like show that a harmonic p-cochain \mu is both closed and coclosed:

        (dd^* + d^* d) \mu = 0 \implies d \mu = 0 \; \textrm{and} \; d^* \mu = 0

        (Think of \mu as a p-form and dd^* + d^* d as the Laplacian.) This basic lemma of Hodge theory goes away, as far as I can tell, when the inner product isn’t positive definite! After all, the usual proof is

        (dd^* + d^* d) \mu = 0 \implies

        \langle \mu,  (dd^* + d^* d) \mu \rangle = 0  \implies

        \langle d^* \mu, d^* \mu \rangle + \langle d \mu, d \mu \rangle = 0 \implies

        \langle d^* \mu, d^* \mu \rangle = 0 \; \textrm{and} \;  \langle d \mu, d \mu \rangle = 0 \implies

        d \mu = 0 \; \textrm{and} \; d^* \mu = 0

        but the last two steps assume the inner product is positive definite! This is why Hodge theory works so much better on Riemannian manifolds than on Lorentzian ones.
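
        To see the failure in the smallest possible example, here is a sketch of my own (it uses two parallel wires, i.e. a multigraph rather than one of the simple graphs above, purely to exhibit the problem): with conductances +1 and -1 the Laplacian vanishes identically, so a non-constant potential is ‘harmonic’ without being closed.

        import numpy as np

        # Two parallel wires from vertex 0 to vertex 1, with conductances
        # +1 and -1, so the pairing on 1-cochains is NOT positive definite.
        d = np.array([[-1.0, 1.0],       # wire 1
                      [-1.0, 1.0]])      # wire 2
        W = np.diag([1.0, -1.0])         # indefinite edge pairing

        # On 0-cochains, dd^* + d^*d reduces to d^*d = d^T W d (taking the
        # standard inner product on vertices).
        Lap = d.T @ W @ d
        psi = np.array([0.0, 1.0])       # a non-constant potential

        print(Lap @ psi)                 # [0, 0]: psi is 'harmonic'...
        print(d @ psi)                   # [1, 1]: ...yet d psi != 0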

        What is “p-form electromagnetism” ?

        p-form electromagnetism is the generalization of Maxwell’s equations where the electromagnetic vector potential A is a p-form instead of a 1-form. 2-form electromagnetism shows up in string theory, and it’s the simplest example of a higher gauge theory. Just as point particles like to couple to a 1-form, strings like to couple to a 2-form.

        Is this article about some discretization of QED?

        Yes, in a sense. Derek was working with me when I was interested in higher gauge theory, so he wrote this paper on the quantization of vacuum p-form electromagnetism on discretized spacetimes (for example, simplicial complexes), where you can work it all out rigorously. By ‘vacuum’ I mean that he only considers the electromagnetic field, no charged particles (or strings, or 2-branes, or….). This makes the theory linear and exactly solvable.

        If you only care about ordinary electromagnetism, set p = 1.

        • nad says:

          It was clear. In my last comment I was talking about why I like to blog about work in progress, like this ‘network theory’ project. One reason I blog about work in progress is that I like to build up a community of experts who tell me when I’m making mistakes or overlooking interesting facts.

          There are of course comments which are welcome even while a work is in progress, for example when someone points out relevant literature or bad mistakes. However, if a commenting situation is used for stealing ideas, or if comments are of the sort “I think you should do this in that and that way” (where it is obvious that one should do it in that and that way), or “the whole thing you are doing here is bullshit” (without specifying exactly WHAT is bullshit), then this can be rather unpleasant.

          I am not inventing these comments. I actually got exactly this “bullshit” comment after giving an approximately two-hour presentation on some work in my Ph.D. thesis, and this evaluation was one reason why I hesitated to publish the work. (I published it nonetheless, but only because someone else said I should. It is somewhat irritating if someone tells you that you are doing bullshit and you do not know what the bullshit is or what to do about it.)

          I was once at a talk with a rather big audience, where the presenter was told that what he had been doing in his work (he talked about his work) had already been thoroughly studied, and that such-and-such didn’t work because of so on and so forth. The presenter (who was, if I remember correctly, already in some solid position in France) broke into tears and couldn’t even talk further. The whole situation was rather disturbing. There was a similar situation at another talk, where luckily the presenter got some defense from within the audience, but nonetheless the accusations were fierce enough that this presenter also broke into tears. I think one should avoid such situations. I also have rather troubling recollections about Andreas Floer in this context.

          Of course if something is wrong then this has to be pointed out, but first, there are different ways to point out a mistake, and secondly, if the mistake had been detected at an earlier stage, the presenter would not have ended up in such a situation. So here, commenting while a work is in progress would actually be essential.

          So when I say I prefer not to get comments on work in progress, I actually mean these latter bullshit types of comments. But it is usually rather complicated to explain what types of comments one wants, and since it seems that the bullshit comments are more frequent than the other types, I prefer to just say: please, no comments.

          There is also the problem that if you are working (almost) in parallel on something, i.e. if you are possibly in competition for novelty, then things can get very tricky. Often enough you would rather not know what the other person is doing, and in order not to put a competitor into the bad situation of possibly needing to help you, it could actually be a nice thing to say: no, you don’t need to comment.

          The power consumed by each wire equals the square of the current along that wire times the resistance of that wire. If the resistance were negative this power could be negative!

          But I was talking about C_0(\Gamma) and C^0(\Gamma) and not C_1(\Gamma) and C^1(\Gamma). There could be different inner products on these spaces.

          Think of \mu as a p-form and dd^* + d^* d as the Laplacian.

          I don’t know why you suddenly define the Laplacian in this way, because in your circuit paper it is just d^* d, which would be enough to ensure closedness for a “harmonic” form (I assume you mean here a form for which \Delta \mu = 0): that follows from d^* d \, \mu = 0 and positive definiteness, as you had already pointed out above. This is of course rather important with respect to Kirchhoff’s laws. But why do I need coclosedness?

          p-form electromagnetism is the generalization of Maxwell’s equations where the electromagnetic vector potential A is a p-form instead of a 1-form. 2-form electromagnetism shows up in string theory, and it’s the simplest example of a higher gauge theory.

          Aha. I am still trying to get an image in my head of what people mean by higher gauge theory and string theory. I blurrily understand it as some kind of generalization of a connection (i.e. a prescription for how vector fields are to be transformed) that somehow allows for nonlocal transformations?

          What is the physical interpretation of a magnetic p-form? Some multipole magnetism? Is p=0 (monopole) also allowed?!?!

        • John Baez says:

          By the way, Nad: in this particular conversation, please try to post your comments so they appear below the comment you’re replying to. You keep posting them so they show up far above it, and I keep fixing them.

          John wrote:

          The power consumed by each wire equals the square of the current along that wire times the resistance of that wire. If the resistance were negative this power could be negative!

          Nad wrote:

          But I was talking about C_0(\Gamma) and C^0(\Gamma) and not C_1(\Gamma) and C^1(\Gamma). There could be different inner products on these spaces.

          True. I was just pointing out the physical reason why I assume the inner product on C_1(\Gamma)—and thus dually on the isomorphic space C^1(\Gamma)—is positive definite. If you can discover or invent a physical interpretation of the inner product on C_0(\Gamma) and C^0(\Gamma), you will be able to decide whether you want those to be positive definite too.

          John wrote:

          Think of \mu as a p-form and dd^* + d^* d as the Laplacian.

          Nad replied:

          I don’t know why you suddenly define the Laplacian in this way, because in your circuit paper it is just d^*d

          dd^* + d^* d is, up to a sign, the usual definition of the Laplacian on p-forms. But when dealing with circuits, all the spaces in our cochain complex

          C^0 \to C^1 \to C^2 \to \cdots

          are zero-dimensional except C^0 and C^1: there are just vertices and edges in our graph, no higher-dimensional stuff. So in this particular case d^* vanishes on 0-cochains, and thus

          (dd^* + d^* d) \mu = d^* d \mu

          for any 0-cochain \mu. I took advantage of this simplification in my paper. But now we’re talking about more general things, such as electromagnetism and p-form electromagnetism, where we need 2-cochains, 3-cochains, etc.

          Indeed, the basic lemma of Hodge theory

          (dd^* + d^* d) \mu = 0 \implies d \mu = 0 \; \textrm{and} \; d^* \mu = 0

          is completely trivial in the case of electrical circuits! In that case, either \mu is a 0-cochain and d^* \mu is always zero, or \mu is a 1-cochain and d \mu is always zero. So the answer to this question of yours:

          But why do I need coclosedness?

          is simply that you don’t “need” it, you have it: 0-cochains are automatically coclosed: d^* \mu is zero for any 0-cochain \mu, unless we’re in a weird situation where there exist nonzero (-1)-cochains!

          I am still trying to get an image in my head what people may mean with higher gauge theory and string theory.

          The basic idea is just this: you can integrate a 1-form A over the worldline of a particle, and that gives you the action for a particle coupled to the electromagnetic field. Similarly, you can integrate a 2-form B over the worldsheet of a string, and that gives you the action for a string coupled to the ‘higher electromagnetic field’, which is usually called the Kalb-Ramond field. It’s all just the usual story with the dimensions increased by one.
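
          In formulas (the standard couplings, spelled out by me rather than quoted from above): the particle’s action picks up a term q \int_\gamma A along its worldline \gamma, while the string’s action picks up a term \int_\Sigma B over its worldsheet \Sigma.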

          My paper with John Huerta, An invitation to higher gauge theory, was supposed to be a friendly way to start learning about this stuff. But I’ve switched over to working on quite different subjects, so I’m not really interested in talking about it anymore. I mainly keep Derek Wise’s work in mind because it uses a lot of the same math that I’m now applying to electrical circuits.

        • nad says:

          By the way, Nad: in this particular conversation, please try to post your comments so they appear below the comment you’re replying to. You keep posting them so they show up far above it, and I keep fixing them.

          I am sorry. I thought I always clicked on the reply button right next to your answers, but maybe not; let’s see where this one ends up.

          First of all, thanks a lot for giving me such detailed answers. It is very rare to find someone whom you are allowed to ask “holes into the stomach” (Löcher in den Bauch; probably meaning that you ask so much that the person you ask gets really hungry). Just as a warning: during my studies I was told by a person that one of the professors had already complained to this person about my “stupid questions”; in that conversation this professor also said that I am “no big light” (“Sie ist keine grosse Leuchte”). So your answers are very likely pearls before breakfast, unless someone else reads this here too and learns from your answers, or is inspired by my stupid questions (like an artist or so…).

          … is simply that you don’t “need” it, you have it: 0-cochains are automatically coclosed: d^* \mu is zero for any 0-cochain \mu, unless we’re in a weird situation where there exist nonzero (-1)-cochains!

          So you say that in a theory of electricity which behaves with integrity and discretion d^* \mu must be zero.

          But if you postulate this anyway, then less than ever do I understand why you would need the positive definiteness of the inner product on C_0(\Gamma) or C^0(\Gamma)?!

          dd^* + d^* d is, up to a sign, the usual definition of the Laplacian on p-forms.

          What is the problem with just taking d^* d? The possibly negative spectrum on a compact manifold?

          By the way, in this context may I ask something I always wanted to know. If you look at the Weitzenböck tensor (the difference between your usual Laplacian and the rough Laplacian), then this looks “almost” like the Einstein tensor. Can one formulate the “Einstein operator” (here I mean Einstein tensor + cosmological term – energy-momentum tensor, or maybe just the Einstein tensor for simplicity) as some linear combination of the Laplacians of two different (co)derivatives? If this were the case, then it could possibly be a useful ingredient for the discretization of the field equations (on which I believe tons of people have been working; at least when I was still a student there were already a couple of people interested in this).

        • John Baez says:

          Nad wrote:

          So you say that in a theory of electricity which behaves with integrity and discretion d^* \mu must be zero.

          That’s your interpretation… I prefer to repeat what I actually said, which sounds much less interesting, but has the advantage of being surely true:

          If d^* maps 0-forms to (-1)-forms, and all (-1)-forms are zero, and \mu is a 0-form, then d^* \mu = 0.

          (Some people say “there are no (-1)-forms”, but a mathematically more sophisticated approach is to say all (-1)-forms are zero, and that’s the approach I usually take.)

          But if you postulate this anyway, then less than ever do I understand why you would need the positive definiteness of the inner product on C_0(\Gamma) or C^0(\Gamma)?!

          I will let you worry about this; I don’t feel like it. What I know is that:

          1) I have a specific formula for the inner product on C^0(\Gamma) which works nicely for electrical circuits; this inner product is positive definite.

          2) There’s a standard formula for the inner product on \Omega^0(M) (0-forms on a compact manifold), which for Riemannian manifolds is positive definite.

          3) We need the inner product on zero-forms to be positive definite for this standard calculation in Hodge theory to work:

          (dd^* + d^* d) \mu = 0 \implies

          \langle \mu,  (dd^* + d^* d) \mu \rangle = 0  \implies

          \langle d^* \mu, d^* \mu \rangle + \langle d \mu, d \mu \rangle = 0 \implies

          \langle d^* \mu, d^* \mu \rangle = 0 \; \textrm{and} \;  \langle d \mu, d \mu \rangle = 0 \implies

          d \mu = 0 \; \textrm{and} \; d^* \mu = 0

          in the case where \mu is a 1-form.

          What is the problem with just taking d^* d? The possibly negative spectrum on a compact manifold?

          No, that’s not the problem:

          \langle \mu , d^* d \mu \rangle = \langle d \mu, d \mu \rangle \ge 0

          so in any situation where we’ve worked out the details carefully enough that d^* d is self-adjoint, we can see that its spectrum is nonnegative.

          The problem with d^* d is that it’s not the Laplacian.

          For example, we have a pre-existing definition of the Laplacian of a vector field on Euclidean \mathbb{R}^n, and thus, using the metric to turn a vector field into a 1-form, we get a definition of the Laplacian of a 1-form on \mathbb{R}^n. It’s just the obvious thing: take the Laplacian of each component using the standard basis of 1-forms dx_i.

          But this Laplacian does not agree with d^* d! For example take n = 1. In this case d^* d of any 1-form is zero!

          On the other hand, this pre-existing definition of the Laplacian of a 1-form on \mathbb{R}^n does agree with d d^* + d^* d.
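
          To make the signs explicit, here is the n = 1 computation (my own, using the convention d^* = - \star d \star for 1-forms on \mathbb{R} with its standard metric). For \mu = f \, dx we have d \mu = 0, since all 2-forms on \mathbb{R} are zero, while d^* \mu = -f'. So

          \displaystyle{ (d d^* + d^* d) \mu = d(-f') = -f'' \, dx }

          which is the componentwise Laplacian of \mu, up to the geometers’ sign convention.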

          There are lots of other reasons why everyone defines d d^* + d^* d to be the Laplacian. Mostly, it makes Hodge theory work (just read the section on ‘Riemannian Hodge theory’, which is nice and short and lists the key results).

          If you look at the Weitzenböck tensor (the difference between your usual Laplacian and the rough Laplacian) then this looks “almost” like the Einstein tensor. Can one formulate the “Einstein operator” (here I mean Einstein tensor + cosmological term – energy-momentum tensor, or maybe just the Einstein tensor for simplicity) as the difference of the Laplacians of two different (co)derivatives?

          I’ve never thought about this. I needed to look up the Weitzenböck identity to think about this. Maybe some trickery could turn the operator this reference calls A—I guess that’s your ‘Weitzenböck tensor’—into the Einstein tensor. But I don’t instantly see how to do it. Real experts on general relativity should know if it’s possible. It’s a nice idea.

        • nad says:

          No, that’s not the problem:
          \langle \mu , d^* d \mu \rangle = \langle d \mu, d \mu \rangle \ge 0

          so in any situation where we’ve worked out the details carefully enough that d^* d is self-adjoint, we can see that its spectrum is nonnegative.

          If you assume positive definiteness and your other definitions, yes; but as said, one could try to generalize that. Possibly it would be enough to use a Hermitian form; then you could keep your positive definiteness. But it seems that in order to include magnetism, some generalizations may possibly need to be made.

          The problem with d^* d is that it’s not the Laplacian.

          It is not the Hodge Laplacian.

          I’ve never thought about this. I needed to look up the Weitzenböck identity to think about this. Maybe some trickery could turn the operator this reference calls A—I guess that’s your ‘Weitzenböck tensor’—into the Einstein tensor.

          Yes I meant that operator A.

          I called it the Weitzenböck tensor because I don’t know what this thing is usually called in the various dialects. And I called it a tensor because the Wikipedia entry said:

          A is a linear operator of order zero involving only the curvature.

          So I understood this as saying that A is at least locally linear, i.e. that it is a tensor. But maybe this interpretation is wrong. I might have been lured into it by the fact that the difference of two covariant derivatives is a tensor, if I remember correctly ( : O) ). By the way, since I haven’t thought about this stuff for quite a while, it feels quite awkward to talk about it in public. It feels really quite like a public Nadja-Geometry-Amnesia test. : O

          Maybe some trickery could turn the operator this reference calls A..

          If it is a tensor, then locally (let’s assume that one has invertibility) it seems, again if I remember correctly, that one could just compose one tensor (as a linear map) with the inverse of the other and get a linear map between them. So if this “tensor” interpretation is correct, then locally this seems to always work (apart from divergences due to noninvertibility etc.), but what about globally? I mean, apart from that plus/minus sign, and modulo the fact that I don’t know exactly what is meant by this alternation map and this universal derivation inverse to θ on 1-forms, and what they could spoil, this Weitzenböck “tensor” really looks like the Einstein tensor.

          If I remember correctly ( : O ), the Einstein tensor was constructed as a generalization of the Laplacian in the Poisson equation for gravity, so I was hesitating to assume that it could be related to the difference of two Laplacians; that’s what was meant by my “reply stammer”. So I wonder whether there exists some way to express the Einstein tensor as a linear expression (meaning here a difference or a mean or something in that sense) of two different Laplacians, just as the Weitzenböck “tensor” is the difference between the Hodge Laplacian and the rough Laplacian.

        • John Baez says:

          Nad wrote:

          So I understood this as saying that A is at least locally linear, i.e. that it is a tensor.

          Yes, it’s a tensor, built from the Riemann tensor somehow. The Wikipedia formula for it is indeed poorly explained. Their so-called “alternation map”

          \theta: T^* M \otimes \Omega^p(M) \to \Omega^{p+1}(M)

          is obviously just taking the wedge product of a 1-form and a p-form to get a (p+1)-form—what else could possibly make sense here? Some weird person just decided to call it the “alternation map” instead of its usual name, “wedge product” or “exterior multiplication”.

          Similarly, their so-called “universal derivation inverse to \theta on 1-forms”

          \#:\Omega^{p+1}(M)\rightarrow T^*M\otimes\Omega^p(M)

          has got to come from interior multiplication:

          i: T M \otimes \Omega^{p+1} (M) \to \Omega^p(M)

          by using duality to turn it into a map

          \#:\Omega^{p+1}(M)\rightarrow T^*M\otimes\Omega^p(M)

          Again, there’s nothing else it could reasonably be.

          I’m getting tired of this Weitzenböck stuff, so I’ll stop here. By the way, this article claims to “demystify” the Weitzenböck tensor, but it would take a little while to read. Also by the way, I’ve never heard the term “rough Laplacian” before.

        • John Baez says:

          There’s some problem going on with the blog: comments keep appearing higher up than they should. I hope it’s just a disturbance in the Earth’s magnetic field that will go away on its own. I have moved a bunch of comments down so that the conversation makes more sense, and I’ve taken the liberty of deleting some comments by Nad and Blake that talk only about stuff like this. I hope that if we keep hitting ‘reply’ at the very bottom of this list of comments, nothing terrible will happen.

  21. John Baez says:

    Thanks for offering an explanation of why that paper is in Spanish, Blake! It’s very weird trying to read something by Weyl in Spanish. I can only see the first few pages on Google; maybe if I’m feeling energetic I’ll try to get ahold of his collected works, scan in this paper, and distribute it to the world.

  22. nad says:

    I’m getting tired of this Weitzenböck stuff, so I’ll stop here.

    Thanks anyway. It was probably the longest discussion about Laplacians I have ever had in my life.

    This article claims to “demystify” the Weitzenböck tensor, but it would take a little while to read.

    This seems to be at most a draft, not an article; for example, on page 3 it starts out:

    1. Lichnerowicz Laplacians

    \displaystyle{ (\nabla^* T)(X_2, \dots, X_k) = -\sum_i (\nabla_{E_i} T)(E_i, X_2, \dots, X_k) }

    In this section …. Conventions

    and given the sparseness of definitions it is impossible for me to follow the paper. And from a very brief look it seems it wouldn’t help too much anyway, since it seems to mention only different Laplacians constructed from one connection, but I meant different Laplacians which could be constructed from different connections.

  23. nad says:

    but I meant different Laplacians which could be constructed from different connections.

    Take this A operator, whose form I do not understand (among other things because I do not understand how this alternation map works, and the whole terminology there is alien to me; there are also no further links on that Wikipedia page), but which looks at first glance like the Einstein operator, just with a different sign in front of the one-half scalar curvature. Then this Lichnerowicz formula suggests that one could write the Einstein tensor as some linear combination of the usual A operator and the “A operator on spin manifolds” (which seems to be the scalar curvature, according to Wikipedia). Or in other words: if this A is what it looks like, then subtracting the Einstein tensor from the A operator could give the scalar curvature (i.e. the “A operator on spin manifolds”).

    Anyway, I am aware that these are very, very vague guesses; I just wanted to drop them here because they might possibly catalyze some more guesses.

    • John Baez says:

      Most of the terminology on this page is standard in differential geometry, so it makes sense to me, though I don’t know why all the claims are true.

      If you have a question or two about what terms mean, I’ll be glad to answer them. If you have lots of questions, you may want to read some books on differential geometry. I really enjoyed Choquet-Bruhat et al.’s book Analysis, Manifolds and Physics.

      The term ‘alternation operator’ is a bad choice because I’ve never seen anyone use that term, but it’s still clear (to me) what it must be, because there’s only one interesting operator

      T^* M \otimes \Omega^p M \to \Omega^{p+1} M

      An element of T^*_x M is called a 1-form at the point x \in M. An element of \Omega^p_x M is called a p-form at the point x \in M. We can take the wedge product (also called ‘exterior product’) of a 1-form and a p-form and get a (p+1)-form. So, we get a bilinear map

      T^*_x M \times \Omega^p_x M \to \Omega^{p+1}_x M

      and thus a linear map

      T^*_x M \otimes \Omega^p_x M \to \Omega^{p+1}_x M

      Since we can do this at any point x, we get a vector bundle map

      T^* M \otimes \Omega^p M \to \Omega^{p+1} M

      and that’s what the ‘alternation map’ must be.
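
      Spelling this out on decomposable elements (my own example, not from the comment above): the map sends \alpha \otimes \omega to \alpha \wedge \omega. For example, with p = 1 it sends dx \otimes dy to dx \wedge dy, and dx \otimes dx to 0.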
