Network Theory (Part 30)

The network theory series is back! You may have thought it died out, but in fact it’s just getting started. Over the last year my grad students have made huge strides in working out the math of networks. Now it’s time to explain what they’ve done.

In the last three episodes I explained how electrical circuits made of resistors, inductors and capacitors are a great metaphor for many kinds of complex systems made of interacting parts. And it’s not just a metaphor; there are mathematically rigorous analogies—in other words, isomorphisms—between such electrical circuits and various other kinds of ‘passive linear networks’.

I showed you a chart of these analogies last time:

|  | displacement: q | flow: \dot q | momentum: p | effort: \dot p |
|---|---|---|---|---|
| Mechanics: translation | position | velocity | momentum | force |
| Mechanics: rotation | angle | angular velocity | angular momentum | torque |
| Electronics | charge | current | flux linkage | voltage |
| Hydraulics | volume | flow | pressure momentum | pressure |
| Thermal Physics | entropy | entropy flow | temperature momentum | temperature |
| Chemistry | moles | molar flow | chemical momentum | chemical potential |

But what do I mean by a ‘passive linear network’? Let me explain this very roughly at first, since we’ll be painfully precise later on.

Right now by ‘network’ I mean a graph with gizmos called ‘components’ on the edges. For example:

In a network there is some kind of ‘flow’ running along each edge, and also some kind of ‘effort’ across that edge. For example, in electronics the flow is electrical current and the effort is voltage. The chart shows the meaning of flow and effort in other examples.

‘Passivity’ means roughly that none of the components put out energy that didn’t earlier come in. For example, resistors lose energy (which goes into heat, which we’re not counting). Capacitors can store energy and later release it. So, resistors and capacitors are passive—and so are inductors. But batteries and current sources actually put out energy, so we won’t allow them in our networks yet. For now, we’re just studying how passive components respond to a source of flow or effort.
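For example, in the electrical case the power absorbed by a resistor at any instant is

V I = R I^2 \ge 0

(using Ohm’s law, which we’ll recall in a moment), so a resistor can only ever absorb energy. A capacitor absorbs power

\displaystyle{ V I = C V \frac{d V}{d t} = \frac{d}{d t} \Big( \tfrac{1}{2} C V^2 \Big) }

so everything it absorbs goes into the stored energy \tfrac{1}{2} C V^2, which it can hand back later.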

For some subtleties that show up when you try to make the concept of passivity precise, try:

Passivity (engineering), Wikipedia.

Finally, ‘linearity’ means that the flow along each edge of our network is linearly related to the effort across that edge. Here are the key examples:

• For electrical resistors, linearity is captured by Ohm’s law. If an edge e in our network is labelled by a resistor of resistance R, usually drawn like this:

then Ohm’s law says:

V = R I

where V is the voltage across that edge and I is the current along that edge.

• If our edge e is labelled by an inductor of inductance L:

we have

\displaystyle{ V = L \frac{d I}{d t} }

Here we need to think of the voltage and current as functions of time.

• If our edge e is labelled by a capacitor of capacitance C:

we write the equation

\displaystyle{ I = C \frac{d V}{d t} }

where again we think of the voltage and current as functions of time.
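To see these three laws, and passivity, in action, here is a minimal Python sketch. It is purely illustrative—the component values and the driving current I(t) = \sin t are arbitrary choices of mine, not anything from a paper. It computes the voltage across each component from the law above, then checks that the cumulative energy \int_0^t V I \, dt absorbed by each component never becomes negative, up to discretization error.

```python
import numpy as np

# A toy illustration of the three component laws above (not from the paper):
# drive each component with the current I(t) = sin(t) and compute the voltage
# across it.  The component values are arbitrary.
R, L, C = 2.0, 0.5, 0.1          # resistance, inductance, capacitance
t = np.linspace(0, 10, 1001)
dt = t[1] - t[0]
I = np.sin(t)                    # current along the edge

V_resistor  = R * I                           # V = R I
V_inductor  = L * np.gradient(I, dt)          # V = L dI/dt
V_capacitor = (1.0 / C) * np.cumsum(I) * dt   # V(t) = (1/C) * integral of I, with V(0) = 0

# Passivity check: the cumulative energy each component has absorbed,
# the integral of V*I, never goes negative (up to discretization error).
for name, V in [('resistor', V_resistor), ('inductor', V_inductor), ('capacitor', V_capacitor)]:
    absorbed = np.cumsum(V * I) * dt
    print(name, 'minimum cumulative energy absorbed:', absorbed.min())
```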

Both linearity and passivity are simplifying assumptions that we eventually want to drop. If we include batteries or current sources, we’re dropping passivity. And if we include transistors, we’re dropping linearity. Obviously both of these are important!

However, there is a lot to do even with these simplifying assumptions. And now it’s time to get started!

In what follows, I will not use the terms ‘flow’ and ‘effort’, which are chosen to be neutral and subject-independent. Instead, I’ll use the vocabulary from electronics, e.g. ‘current’ and ‘voltage’. The reason is that we’ve all heard of resistors, capacitors, Ohm’s law and Kirchhoff’s laws, and while these have analogues in every row of the chart, it seems pointless to make up weird new ‘neutral’ terms for all these concepts.

But don’t be fooled by the jargon! We’re not merely studying electrical circuits. We’ll be studying passive linear networks in full generality… with the help of category theory.

Linear passive networks as morphisms

To get going, let’s think about circuits made of resistors. We can do this without harm, because we’ll later include capacitors and inductors using a simple effortless trick. Namely, we’ll generalize the ‘resistance’ of a resistor, which is a real number, to something called ‘impedance’, which is an element of some larger field. Everything will be so abstract that replacing resistances with impedances will be as easy as snapping our fingers.
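(If you know some electrical engineering you can already guess the trick: in the standard frequency-domain bookkeeping, writing s for the frequency variable, a resistor, an inductor and a capacitor have impedances R, L s and 1/(C s) respectively, and the three component laws above all collapse into the single linear law V = Z I. We won’t need these formulas for a while; they just indicate why the generalization costs nothing.)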

Right now I want to define a category where the morphisms are circuits made of resistors. Any morphism will go from some ‘inputs’ to some ‘outputs’, like this:

So a morphism is roughly a graph with edges labelled by numbers called ‘resistances’, with some special nodes called ‘inputs’ and some special nodes called ‘outputs’.

What can we do with morphisms? Compose them! So, suppose we have a second morphism whose inputs match the outputs of the first:

Then we can compose them, attaching the outputs of the first to the inputs of the second. We get this morphism as the result:

So, composing morphisms is a way to build big electrical circuits—or other ‘linear passive networks’—out of little ones.

This seems pretty simple, but let’s try to formalize it and see why we have a category. In fact it takes a bit of thought. To check that we get a category, we need to check that composition is associative:

(fg)h = f(gh)

and that each object x has an identity morphism 1_x : x \to x that does what an identity should:

f 1_x = f

1_x g = g

All these equations seem obviously true in our example… until you try to prove them.

You might think an identity morphism should be a bunch of straight pieces of wire—a bunch of edges each with an input node and an output node—but that doesn’t quite work, since sticking an extra edge onto a graph gives a new graph with an extra edge!

Also, we are composing circuits by ‘sticking them together’. This process is formalized in category theory using a pushout, and pushouts are only defined ‘up to canonical isomorphism’. The very simplest example is the disjoint union of two sets. We all know what it means, but if you examine it carefully, you’ll see it’s only defined up to canonical isomorphism, because it involves a choice of how we make the two sets disjoint, and this choice is somewhat arbitrary.
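Since pushouts of finite sets will do the real gluing work for us, here is a small Python sketch of the construction (the function name pushout and the union-find bookkeeping are my own illustration, not anything from the paper). Given functions f : A \to X and g : A \to Y, it forms the disjoint union of X and Y and then identifies f(a) with g(a) for each a \in A. Note the arbitrary tags 0 and 1 used to make X and Y disjoint: that choice is exactly why the result is only defined up to canonical isomorphism.

```python
# A toy pushout of finite sets (my own illustration).  Given f: A -> X and
# g: A -> Y, the pushout is the disjoint union of X and Y with f(a) glued to
# g(a) for every a in A.  We make X and Y disjoint by tagging their elements
# with 0 and 1 -- an arbitrary choice, which is why the result is only
# defined up to canonical isomorphism.

def pushout(A, X, Y, f, g):
    """Return (equivalence classes, inclusion of X, inclusion of Y)."""
    tagged = [(0, x) for x in X] + [(1, y) for y in Y]   # disjoint union

    parent = {e: e for e in tagged}                      # simple union-find
    def find(e):
        while parent[e] != e:
            e = parent[e]
        return e
    def union(e1, e2):
        parent[find(e1)] = find(e2)

    for a in A:                                          # glue f(a) ~ g(a)
        union((0, f[a]), (1, g[a]))

    classes = {}                                         # representative -> class
    for e in tagged:
        classes.setdefault(find(e), set()).add(e)

    incl_X = {x: find((0, x)) for x in X}                # map of X into the pushout
    incl_Y = {y: find((1, y)) for y in Y}                # map of Y into the pushout
    return classes, incl_X, incl_Y

# Glue two 'wires' X = {x1, x2} and Y = {y1, y2} along one shared node:
classes, iX, iY = pushout({'a'}, {'x1', 'x2'}, {'y1', 'y2'},
                          f={'a': 'x2'}, g={'a': 'y1'})
print(len(classes), 'nodes after gluing')   # 3: x1, the glued x2 ~ y1, and y2
```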

All this means the category we’re after is a bit subtler than you might at first expect; in fact, it’s most naturally thought of as a bicategory, meaning roughly that all the equations above hold only ‘up to canonical isomorphism’.

So, we proceed like this.

First we define a concept of ‘labelled graph’, where (for now) only the edges are labelled. We do this because we want our circuits to have edges labelled by ‘resistances’, which are real numbers. But we do it in greater generality because later we’ll want the edges to be labelled by ‘impedances’, which are elements of some other field. And since we’re studying electrical circuits just as examples of networks, later still we will probably want graphs whose edges are labelled in still other ways.

So:

Definition. A graph consists of a finite set E of edges, a finite set N of nodes, and two functions

s,t : E \to N

Thus each edge e will have some node s(e) as its source and some node t(e) as its target:

Definition. Given a set L, we define an L-labelled graph to be a graph together with a function r : E \to L. This assigns to each edge e \in E its label r(e) \in L. We call L the label set.

We use the letter r because for circuits of resistors we will take the label set to be

L = (0,\infty) \subset \mathbb{R}

the positive real numbers, and r(e) will be the resistance of the edge e. For circuits that also contain inductors and capacitors we will take the label set to be the positive elements of some larger field… but more about that later!
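If you like to see definitions as data structures, here is one possible Python rendering (a sketch with names of my own choosing, not notation from the paper):

```python
from dataclasses import dataclass
from typing import Dict, Hashable, Set

@dataclass
class LabelledGraph:
    """An L-labelled graph as defined above: finite sets E and N, source and
    target maps s, t : E -> N, and a labelling r : E -> L.  For circuits of
    resistors the labels are positive real numbers (resistances)."""
    E: Set[Hashable]
    N: Set[Hashable]
    s: Dict[Hashable, Hashable]   # source of each edge
    t: Dict[Hashable, Hashable]   # target of each edge
    r: Dict[Hashable, float]      # label (e.g. resistance) of each edge

    def __post_init__(self):
        # s, t and r must be total functions defined exactly on E, landing in N.
        assert set(self.s) == self.E == set(self.t) == set(self.r)
        assert set(self.s.values()) <= self.N and set(self.t.values()) <= self.N

# A single 2-ohm resistor between two nodes:
one_resistor = LabelledGraph(E={'e'}, N={'n1', 'n2'},
                             s={'e': 'n1'}, t={'e': 'n2'}, r={'e': 2.0})
```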

Now we want to give our L-labelled graph a set of nodes called ‘inputs’ and a set of nodes called ‘outputs’. You might think the set of inputs should be disjoint from the set of outputs, but that’s a tactical error! It turns out an identity morphism in our category should have the inputs being exactly the same as the outputs… and no edges at all:

To handle this nicely, we need to make a category of L-labelled graphs. This works in the obvious way, if you’re used to this stuff. A morphism from one L-labelled graph to another sends edges to edges, nodes to nodes, and preserves everything in sight:

Definition. Given L-labelled graphs \Gamma = (E,N,s,t,r) and \Gamma' = (E',N',s',t',r'), a morphism of L-labelled graphs from \Gamma to \Gamma' is a pair of functions

\epsilon: E \to E'

\nu : N \to N'

such that the following diagrams commute, which says concretely that \nu \circ s = s' \circ \epsilon, \nu \circ t = t' \circ \epsilon and r' \circ \epsilon = r:

There is a category L\mathrm{Graph} where the objects are L-labelled graphs and the morphisms are as we’ve just defined them.
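Continuing the Python sketch above (again just my own illustration), checking that a pair of maps is a morphism of L-labelled graphs amounts to checking those three equations edge by edge:

```python
# A morphism of labelled graphs G -> H is a pair (eps, nu) making the squares
# commute; concretely we check the three equations edge by edge.  This reuses
# the LabelledGraph sketch above.

def is_morphism(G, H, eps, nu):
    return all(nu[n] in H.N for n in G.N) and all(
        H.s[eps[e]] == nu[G.s[e]]      # nu . s  =  s' . eps
        and H.t[eps[e]] == nu[G.t[e]]  # nu . t  =  t' . eps
        and H.r[eps[e]] == G.r[e]      # r' . eps = r
        for e in G.E
    )

# Include the one-resistor graph into a two-resistor chain a --e1--> b --e2--> c:
chain = LabelledGraph(E={'e1', 'e2'}, N={'a', 'b', 'c'},
                      s={'e1': 'a', 'e2': 'b'}, t={'e1': 'b', 'e2': 'c'},
                      r={'e1': 2.0, 'e2': 3.0})
print(is_morphism(one_resistor, chain,
                  eps={'e': 'e1'}, nu={'n1': 'a', 'n2': 'b'}))   # True
```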

Warning: the morphisms in L\mathrm{Graph} are not the morphisms of the kind we really want, the ones that look like this:

They are just a step along the way. A morphism of the kind we really want is a diagram like this in L\mathrm{Graph}:

where \Gamma is an L-labelled graph and I, O are L-labelled graphs with no edges!

You see, if I and O have no edges, all they have is nodes. We call the nodes of I the inputs and those of O the outputs. The morphisms i: I \to \Gamma and o : O \to \Gamma say how these nodes are included in \Gamma. \Gamma is our circuit made of resistors.

In general, any diagram shaped like this is called a cospan:

If we turned the arrows around it would be called a span. Cospans are good whenever you have a thing with an ‘input end’ and an ‘output end’, and you want to describe how the ends are included in the thing. So, they’re precisely what we need for describing a circuit made of resistors, like this:

This makes us want to cook up a category L\mathrm{Circ} where:

• an object I is an L-labelled graph with no edges. We can alternatively think of it as a finite set: a set of nodes.

• a morphism from I to O is a cospan of L-labelled graphs:

We still need to say how to compose these morphisms. We know it will amount to attaching the outputs of one circuit to the inputs of the next—that’s all there is to it! But we need to make this precise and prove we get a category. And as I’ve hinted, we will actually get something bigger and better: a bicategory! This will come as no surprise if you’re familiar with span and cospan bicategories, but it may induce a heart attack otherwise.
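Here is how that gluing looks at the level of node sets, in the same toy Python setting as before (my own names; edges and their labels are omitted, since they just get carried along into the pushout). A morphism is a cospan I \to N_1 \leftarrow O, and we compose it with a cospan O \to N_2 \leftarrow P by pushing out along O:

```python
# Node-level sketch of composition in LCirc (my own names; edges and labels
# are omitted, since they just get carried along into the pushout).  A cospan
# I -> N1 <- O composes with O -> N2 <- P by gluing N1 and N2 along O, using
# the pushout function sketched earlier.

def compose_cospans(i1, o1, N1, i2, o2, N2, O):
    # o1 : O -> N1 and i2 : O -> N2 are the legs we glue along.
    classes, incl1, incl2 = pushout(O, N1, N2, o1, i2)
    new_input  = {x: incl1[i1[x]] for x in i1}    # I -> glued nodes
    new_output = {y: incl2[o2[y]] for y in o2}    # P -> glued nodes
    return new_input, new_output, set(classes)

# Two one-edge circuits composed in series, sharing the terminal 'mid':
inp, outp, nodes = compose_cospans(i1={'in': 'n1'},  o1={'mid': 'n2'}, N1={'n1', 'n2'},
                                   i2={'mid': 'm1'}, o2={'out': 'm2'}, N2={'m1', 'm2'},
                                   O={'mid'})
print(len(nodes))   # 3 nodes: n1, the glued node n2 ~ m1, and m2
```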

This bicategory can then be ‘watered down’ to give our category L\mathrm{Circ}. And when we take

L = (0,\infty)

we’ll get the category where morphisms are circuits made of resistors! We’ll call this \mathrm{ResCirc}.

Then I’ll explain what we can do with this category! There’s no end of things we could do with it. But the main thing Brendan does is study the ‘black-boxing’ operation, where we take a circuit, forget its inner details, and only keep track of what it does. This turns out to be quite interesting.

References

I thank Brendan Fong for drawing some of the pictures of circuits here. For the details of what I’m starting to explain here, read our paper:

• John Baez and Brendan Fong, A compositional framework for passive linear networks.

You can learn more about the underlying ideas here:

• Dean C. Karnopp, Donald L. Margolis and Ronald C. Rosenberg, System Dynamics: a Unified Approach, Wiley, New York, 1990.

• Forbes T. Brown, Engineering System Dynamics: a Unified Graph-Centered Approach, CRC Press, Boca Raton, 2007.

• Francois E. Cellier, Continuous System Modelling, Springer, Berlin, 1991.

38 Responses to Network Theory (Part 30)

  1. David Winsemius says:

    This looks very much like a course I took at Berkeley my last year as an undergraduate entitled “Non-equilibrium Thermodynamics”. It was mostly out of the EE Dept and all of the electronic analogies (and isomorphisms) were utilized. I was very enthralled with the generalization possible for modeling biological systems such as membranes where there were physical boundaries across which voltages and chemical potentials could be modeled. I think you need to look into Onsager relations. (If I remember correctly a Nobel was given to Onsager.)

  2. Joerg Hanisch says:

    What is the advantage of using the impedance analogy rather than the mobility analogy, which does not preserve the topology of the system?

    • John Baez says:

      I don’t know what the “mobility analogy” is, or what it means to say that it doesn’t preserve the topology of the system. Could you explain?

      Anyway, you’ll see that the setup we’re beginning to discuss here works quite nicely. There could easily be others that work nicely too!

      • Joerg Hanisch says:

        The “mobility analogy” is the name of the “Firestone analogy” in wikipedia: http://en.wikipedia.org/wiki/Mobility_analogy
        The “impedance analogy” is the “old analogy”:
        http://en.wikipedia.org/wiki/Impedance_analogy
        Topology preserving means that components which are parallel in the original system stay parallel in the analogous system, rather than becoming serial. That is only true in the mobility/Firestone analogy.
        In (Part 29) you discuss a mechanical “auto-analogy”, which describes the dualizing process between the two possibilities.
        I wonder if only the Firestone analogies should be called an “analogy”, while the assignment velocity-current resp. force-voltage should be called a “dualogy”, on the basis of the topology of the system.

  3. Harvey Brown says:

    I don’t know if this is relevant but in 1948 a new approach to studying water transport in trees was introduced based on an Ohmic analogy. It involved studying the soil-plant-atmosphere continuum with a phenomenological model using the notions of resistance, capacitance and water potential. This approach dominated the field for decades. A brief account can be found in section 7.2 in the following paper on the rise of sap in trees (a subject that I think came up some time ago on this blog) which also contains in section 3 a brief account of the role of trees in climate change:
    http://arxiv.org/abs/1404.3066

    • John Baez says:

      Hi, thanks! You used the past tense: has this approach fallen out of favor?

      I wonder if someone has developed a detailed Ohmic analogy for circulatory systems, e.g. veins in a plant leaf. This would be interesting because those veins clearly form a network.

      • Harvey Brown says:

        Since the late sixties and early seventies a new paradigm called Hydraulic Architecture has dominated the literature, one which combines elements of the Ohmic analogy with elements of the original Cohesion-Tension theory of the rise of sap which dates back to the end of the 19th century.

        You might be interested in the following papers on leaf structure:
        L. Sack and N. M. Holbrook, “Leaf Hydraulics”, Annu. Rev. Plant Biol. 57 (2006), 361–381.
        A. McKown et al., “Decoding Leaf Hydraulics with a Spatially Explicit Model: Principles of Venation Architecture and Implications for Its Evolution”, The American Naturalist 175(4) (2010), 447–460.

      • Derek Wise says:

        John, I saw this post coincidentally just after thinking about how leaves are little networks. I was thinking about it because of this leaf we found in the garden, which makes this very clear:

  4. davetweed says:

    Since a previous article has sensitized us about the spelling of Kirchhoff….

  5. Steve Wenner says:

    I once had the task of implementing an isomorphism between the 1st & 3rd rows of your table in hardware, for a particular case. As a summer engineering intern at Bell Labs in the pre-transistor era, I assigned myself the job of simulating the mechanical behavior of a telephone switching relay with the resistors, capacitors and inductors of an analogue computer (digital computers were not yet up to this task, but were getting there). I just checked with Google and I am amazed that the model of relay (the “BJ3”) that I worked on is still available; the backstop contact tab may be just a little bit heavier on that relay as a result of my summer work to understand the “chatter” behavior better.

    • John Baez says:

      Cool! These analogies are sure pervasive. Anyone who wants to dive into them should try:

      • Dean C. Karnopp, Donald L. Margolis and Ronald C. Rosenberg, System Dynamics: a Unified Approach, Wiley, New York, 1990.

      • Forbes T. Brown, Engineering System Dynamics: a Unified Graph-Centered Approach, CRC Press, Boca Raton, 2007.

  6. David Winsemius says:

    Further detail on my exposure to this approach to integrating the mathematical modeling of physics, chemistry, and biology as a coherent whole:

    https://robot.bolink.org/ebooks/NetworkThermo.pdf

    There are numerous alternate citations to this compact summary published in Nature regarding the collaboration of George Oster, Alan Perelson, and Aharon Katchalsky. Just use the search strategy of a concatenation of their names and “Network Thermodynamics”.

    • John Baez says:

      Thanks, David! I’d seen this paper a long time ago, but it’s time to refresh my memory and see exactly how far this approach has gone. In Borg-like fashion, we want to assimilate all different approaches to network theory, adding their distinctive qualities to the Collective.

  7. Bruce Smith says:

    Is there a good reason to rule out resistance of exactly 0?

    • David Winsemius says:

      With a resistance of 0, the edge effectively changes the graph: the nodes at either end become coincident.

    • John Baez says:

      Bruce wrote:

      Is there a good reason to rule out resistance of exactly 0?

      This is one of many ‘design decisions’ that we struggled with. But…

      Soon we’ll want to fix the potentials at input and output nodes and see what the potentials at other nodes do. These are determined by a nice rule: they minimize the total power consumed by the resistors! The power consumed by a single resistor is the voltage across that resistor times the current through that resistor. But the current is the voltage divided by the resistance! So, this formalism gets a bit ugly when the resistance can be zero.

      (Try sticking a perfectly conductive piece of wire into the two holes of an electrical socket, and this ugliness will be made manifest.)

      Luckily, if you have an edge of zero resistance, you can achieve the same effect by ‘collapsing’ that edge: deleting it, and identifying the two nodes at its ends. That’s roughly what David Winsemius said.

      You’ll note that I said an ‘identity morphism’ in \mathrm{ResCirc} looks like this:

      If we allowed edges of zero resistance, we might instead try to let an identity morphism be a graph with zero-resistance edges connecting the inputs and outputs. But since zero-resistance edges make life a bit difficult, and since there’s no law against graphs without edges, we might as well put them to use as identity morphisms!

      As you can begin to see, there are lots of slightly different ways to do things, which should ultimately be ‘equivalent’ in some sense that needs to be made precise. Jamie Vicary has proposed a project to Brendan, which I’ll remind him of now: give a nice category-theoretic characterization of the category (or bicategory) \mathrm{ResCirc}. If we do this, we might be in a position to prove that different ways of making the ‘design decisions’ give equivalent categories (or bicategories)… either in the standard sense of equivalence for these structures, or some looser sense.

  8. domenico says:

    It is a pity that the linear passive network is not a general directed graph because of the Kirchhoff’s law, so that I think that it cannot realize each possible mathematical morphism.
    I am thinking that if the graph has a diode on each edge, then the constraint gives an arrow on the edge, but this doesn’t solve the problem because of the conservation laws at the nodes.
    It is possible to replace this definition of direction with another one, different from the current direction, chosen arbitrarily.
    I am thinking that if there is an alternating current, then the direction of the current in each edge changes every half-period, so that there are two morphisms in a passive network (a flip between pushout and pullback).

    • John Baez says:

      Domenico wrote:

      It is a pity that the linear passive network is not a general directed graph because of the Kirchhoff’s law, so that I think that it cannot realize each possible mathematical morphism.

      I don’t know what you mean. Any directed graph can be made into a linear passive network by assigning an arbitrary resistance (or more generally, impedance) to each edge.

      I don’t know what “each possible mathematical morphism” means: it’s too vague, since you have to specify a category before you can talk about morphisms. Which category are you talking about?

      • domenico says:

        I thought that the first part of the post represent a correspondence between category and electric circuit, and this enlightened me in the overlapping.
        In my Valkenburg book on the net, there is an arbitrary arrow in each branch, but this direction is not constant between contiguous loops (I think, to simplify the calculus), but it is possible to assign an arbitrary arrow between two nodes to solve the circuit; I think that it is possible, after a solution, to assign the arrow in each branch using the current direction (a natural choice with an arbitrariness for null currents), so that the function between input nodes and output nodes is a morphism, and it is not possible (in the natural choice) to assign arbitrary directions to the arrows; but rereading the post I understand that there may be a problem with the identity morphism …

        • John Baez says:

          Domenico wrote:

          I thought that the first part of the post represent a correspondence between category and electric circuit…

          I’m not sure what you mean by that. The goal of the post is to construct a specific category, \mathrm{ResCirc}, in which a morphism is an electrical circuit made of resistors. After some review of why this could be interesting, I start by showing what a morphism in \mathrm{ResCirc} looks like:

          Then I describe these morphisms in a precise way. \mathrm{ResCirc} has the following objects and morphisms:

          • an object I is a (0,\infty)-labelled graph with no edges. We can alternatively think of it as a finite set: a set of nodes.

          • a morphism from an object I to an object O is a diagram like this in the category of (0,\infty)-labelled graphs:

          Then I say that next time I’ll explain how we compose these morphisms.

          You seem to be noting that the use of directed graphs in this formalism—graphs with arrows on the edges—is a bit tricky.

          I think that it is possible, after a solution, to assign the arrow in each branch using the current direction (a natural choice with an arbitrariness for null currents),

          I can’t do that, first because I can’t allow any arbitrary choices (it destroys mathematics to make arbitrary choices), but second and more importantly because I want my circuit to be the same mathematical entity regardless of what the currents are.

          The use of directed graphs is actually unnecessary; it seemed convenient to me but right now I’m doubting it’s a good idea.

          rereading the post I understand that there may be a problem with the identity morphism …

          I said there’s a problem with identity morphisms unless we allow graphs with no edges. But graphs with no edges are perfectly valid graphs! So, we can take identity morphisms to look like this:

          I feel the post communicated very little of what I was intending to say, perhaps because I was trying to say too many different things. But your comments are very helpful because I’m teaching a class on this today, and you’ve reminded me how important it is to say a few things and say them clearly.

  9. John Baez says:

    A draft of our paper is out!

    • John Baez and Brendan Fong, A compositional framework for passive linear networks.

    For the next few posts, starting with this one, I’ll be summarizing some of the ideas in this paper. If you find mistakes in the paper or things that are unclear, let us know! We will be working to improve it.

    • Eugene says:

      Hi John, are you guys planning to put in references to proofs in section 6.3? The earliest I know are probably in Weinstein’s CBMS lectures. One textbook reference is S. Benenti, “Hamiltonian structures and generating families.”

  10. Eugene says:

    I was a little confused in my comment. The idea for lagrangian correspondences is due to Weinstein. But it first appeared in print (with credit to Weinstein) in a paper of Guillemin and Sternberg:

    Some problems in integral geometry and some related problems in microlocal analysis, Am. J. Math. 101 (1979), 915–955, MR 0536046.

    Weinstein seems to have published the idea only in 1981:

    Symplectic geometry, Bull. Amer. Math. Soc. (N.S.) 5 (1981), no. 1, 1–13.

  11.  

    With luck, this video will be the first of a series. I’m giving a seminar on network theory at U.C. Riverside this fall. I’ll start by sketching the results in this new paper:

    • John Baez and Brendan Fong, A compositional framework for passive linear networks.

    It’s a big paper, and I also want to talk about other papers, so I certainly won’t explain everything in here—just enough to help you get started! If you have questions, don’t be shy about asking them.

    In this first seminar, I start with a quick overview of network theory, and then begin building a category where the morphisms are electrical circuits. Here are some lecture notes to go along with the video:

    • Network theory (part 30).

  12. […] Last time we came up with a category of labelled graphs and described circuits as ‘cospans’ in this category. […]

  13. Eugene says:

    One thing that puzzles me about your (bi)category of labelled graphs is that you don’t require inputs to map to leaves and you don’t require the outputs to map to roots. Is my understanding correct? By a root I mean a vertex with no outgoing edges, and by a leaf a vertex with no incoming edges (since everything is directed, this makes sense).

    This suggests to me that your resulting category will be quite different from the various operads of wiring diagrams that David has been studying. He has several.

    • John Baez says:

      Eugene wrote:

      Is my understanding correct?

      Yes, that’s right.

      There are many things to say here, but I’ll just say three.

      1) For the circuits we’re talking about now, the directions on the edges are not really important, since resistors, capacitors and inductors don’t have a built-in directionality. For diodes the direction would be important: if you turn around a diode on an edge you get a different circuit. But it would never make sense, for electrical circuits, to require that output terminals have no outgoing edges and input terminals have no incoming edges. For diodes this restriction would actually eliminate some interesting circuits.

      2) For the circuits we’re talking about now, the distinction between inputs and outputs is purely conventional; the categories we’ll be talking about are all dagger-categories, meaning we can switch what we call the input and what we call the output. Later in the seminar, when we get to control theory, we may meet some categories where the inputs are truly different from the outputs.

      3) It’s tempting to use zero-resistance wires as identity morphisms. However, we’re avoiding zero-resistance wires in our formalism because they behave in a singular way (a ‘short circuit’). We can avoid zero-resistance wires without loss of generality because whenever you have a zero-resistance wire in an electrical circuit, you can collapse that edge in your graph and get rid of it, obtaining a circuit that behaves the same way.

      But if you allow zero-resistance wires, you can take any electrical circuit, add extra zero-resistance wires at the input and output terminals, and get an electrical circuit that behaves in the same way but obeys the restriction you mention.

      (Later I’ll make the concept of ‘behavior’ precise; for now people can read about it in the paper Brendan and I are writing.)

  14. Enon Harris says:

    Looking at the factors of the different classes of units lets you see more of the consistent analogies between different physical quantities, particularly when you notice that most of the classes of units factor into two parts, {length and (time or frequency)} and one of these five: { unity, mass, charge, mass/charge, mass/charge^2 }.

    Using the number of length and time factors in a type of unit to place that unit type horizontally and vertically respectively in a table, and creating a table for each of the five other factors, about fifty classes of units or physical quantities can be arranged in a stack of tables whose geometric relationships define algebraic relations and possible classes of dimensionally valid physical equations.

    Here are two single-page versions of this arrangement of physical unit classes:
    Physical Units Factor Tables / Large Print
    Physical Units Factor Tables

    Thermal units don’t fit neatly into this scheme, but I have noted some of the more important ones in a text box.

    The names of the unit classes are from Alan Eliasen’s physically typed language / Swiss-Army-chainsaw desk calculator Frink. Names in parentheses are not defined in Frink, but still work in calculations.
