Network Theory (Part 27)

This quarter my graduate seminar at UCR will be about network theory. I have a few students starting work on this, so it seems like a good chance to think harder about the foundations of the subject. I’ve decided that bicategories of spans play a basic role, so I want to talk about those.

If you haven’t read the series up to now, don’t worry! Nothing I do for a while will rely on that earlier stuff. I want a fresh start. But just for a minute, I want to talk about the big picture: how the new stuff will relate to the old stuff.

So far this series has been talking about three closely related kinds of networks:

Markov processes
stochastic Petri nets
stochastic reaction networks

but there are many other kinds of networks, and I want to bring some more into play:

circuit diagrams
bond graphs
signal-flow graphs

These come from the world of control theory and engineering—especially electrical engineering, but also mechanical, hydraulic and other kinds of engineering.

My goal is not to tour different formalisms, but to integrate them into a single framework, so we can easily take ideas and theorems from one discipline and apply them to another.

For example, in Part 16 we saw that a special class of Markov processes can also be seen as a special class of circuit diagrams: namely, electrical circuits made of resistors. Also, in Part 17 we saw that stochastic Petri nets and stochastic reaction networks are just two different ways of talking about the same thing. This allows us to take results from chemistry—where they like stochastic reaction networks, which they call ‘chemical reaction networks’—and apply them to epidemiology, where they like stochastic Petri nets, which they call ‘compartmental models’.

As you can see, fighting through the thicket of terminology is half the battle here! The problem is that people in different applied subjects keep reinventing the same mathematics, using terminologies specific to their own interests… making it harder to see how generally applicable their work actually is. But we can’t blame them for doing this. It’s the job of mathematicians to step in, learn all this stuff, and extract the general ideas.

We can see a similar thing happening when writing was invented in ancient Mesopotamia, around 3000 BC. Different trades invented their own numbering systems! A base-60 system, the S system, was used to count most discrete objects, such as sheep or people. But for ‘rations’ such as cheese or fish, they used a base 120 system, the B system. Another system, the ŠE system, was used to measure quantities of grain. There were about a dozen such systems! Only later did they get standardized.

Circuit diagrams

But enough chit-chat; let’s get to work. I want to talk about circuit diagrams—diagrams of electrical circuits. They can get really complicated:

This is a 10-watt audio amplifier with bass boost. It looks quite intimidating. But I’ll start with a simple class of circuit diagrams, made of just a few kinds of parts:

• resistors,
• inductors,
• capacitors,
• voltage sources

and maybe some others later on. I’ll explain how you can translate any such diagram into a system of differential equations that describes how the voltages and currents along the wires change with time.

This is something you’d learn in a basic course on electrical engineering, at least back in the old days before analogue circuits had been largely replaced by digital ones. But my goal is different. I’m not mainly interested in electrical circuits per se: to me the important thing is how circuit diagrams provide a pictorial way of reasoning about differential equations… and how we can use the differential equations to describe many kinds of systems, not just electrical circuits.

So, I won’t spend much time explaining why electrical circuits do what they do—see the links for that. I’ll focus on the math of circuit diagrams, and how they apply to many different subjects, not just electrical circuits.

Let’s start with an example:

This describes a current flowing around a loop of wire with 4 elements on it: a resistor, an inductor, a capacitor, and a voltage source—for example, a battery. Each of these elements is designated by a cute symbol, and each has a real number associated to it:

• This is a resistor:

and it comes with a number R, called its resistance.

• This is an inductor:

and it comes with a number L, called its inductance.

• This is a capacitor:

and it comes with a number C, called its capacitance.

• This is a voltage source:

and it comes with a number V, called its voltage.

You may wonder why inductance got called L instead of I. Well, it’s probably because I stands for ‘current’. And then you’ll ask why current is called I instead of C. I don’t know: maybe because C stands for ‘capacitance’. If every word started with its own unique letter, we wouldn’t have these problems. But then we wouldn’t need words.

Here’s another example:

This example has two new features. First, it has places where wires meet, drawn as black dots. These dots are often called nodes, or sometimes vertices. Since ‘vertex’ starts with V and so does ‘voltage’, let’s call the dots ‘nodes’. Roughly speaking, a graph is a thing with nodes and edges, like this:

This suggests that in our circuit, the wires with elements on them should be seen as edges of a graph. Or perhaps just the wires should be seen as edges, and the elements should be seen as nodes! This is an example of a ‘design decision’ we have to make when formalizing the theory of circuit diagrams. There are also various different precise definitions of ‘graph’, and we need to try to choose the best one.

A second new feature of this example is that it has some white dots called terminals, where wires end. Mathematically these terminals are also nodes in our graph, but they play a special role: they are places where we are allowed to connect this circuit to another circuit. You’ll notice this circuit doesn’t have a voltage source. So, it’s like a piece of electrical equipment without its own battery. We need to plug it in for it to do anything interesting!

This is very important. Big complicated electrical circuits are often made by hooking together smaller ones. The pieces are best thought of as ‘open systems’: that is, physical systems that interact with the outside world. Traditionally, a lot of physics focuses on ‘closed systems’, which don’t interact with the outside world—the part of the world we aren’t modeling. But network theory is all about how we can connect open systems together to form larger open systems (or closed systems). And this is one place where category theory shows up. As we’ll see, we can think of an open system as a ‘morphism’ going from some inputs to some outputs, and we can ‘compose’ morphisms to get new morphisms by hooking them together.

Differential equations from circuit diagrams

Let me sketch how to get a bunch of ordinary differential equations from a circuit diagram. These equations will say what the circuit does.

We start with a graph having some set N of nodes and some set E of edges. To say how much current is flowing along each edge it will be helpful to give each edge a direction, like this:

So, define a graph to consist of two functions

s,t : E \to N

Then each edge e will have some node s(e) as its source, or starting-point, and some node t(e) as its target, or endpoint:

(This kind of graph is often called a directed multigraph or quiver, to distinguish it from other kinds, but I’ll just say ‘graph’.)

Next, each edge is labelled by one of four elements: resistor, capacitor, inductor or voltage source. It’s also labelled by a real number, which we call the resistance, capacitance, inductance or voltage of that element. We will make this part prettier later on, so we can easily introduce more kinds of elements without any trouble.

Finally, we specify a subset T \subseteq N and call these nodes terminals.
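
If you like seeing definitions as code, here is one way to encode all this data in Python. This is only a minimal sketch, with names chosen for illustration rather than taken from any standard library:

from dataclasses import dataclass
from enum import Enum

class Element(Enum):
    RESISTOR = 'R'
    INDUCTOR = 'L'
    CAPACITOR = 'C'
    VOLTAGE_SOURCE = 'V'

@dataclass
class Edge:
    source: int       # the node s(e)
    target: int       # the node t(e)
    element: Element  # which of the four elements labels this edge
    value: float      # its resistance, inductance, capacitance or voltage

@dataclass
class Circuit:
    nodes: set[int]      # the set N
    edges: list[Edge]    # the set E, with s, t and the labels built in
    terminals: set[int]  # the subset T of N

# A loop with a 1-ohm resistor from node 0 to node 1,
# and a 1-volt voltage source from node 1 back to node 0:
example = Circuit(
    nodes={0, 1},
    edges=[Edge(0, 1, Element.RESISTOR, 1.0),
           Edge(1, 0, Element.VOLTAGE_SOURCE, 1.0)],
    terminals=set())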

Our goal now is to write down some ordinary differential equations that say how a bunch of variables change with time. These variables come in two kinds:

• Each edge e has a current running along it, which is a function of time denoted I_e. So, for each e \in E we have a function

I_e : \mathbb{R} \to \mathbb{R}

• Each edge e also has a voltage across it, which is a function of time denoted V_e. So, for each e \in E we have a function

V_e : \mathbb{R} \to \mathbb{R}

We now write down a bunch of equations obeyed by these currents and voltages. First there are some equations called Kirchhoff’s laws:

Kirchhoff’s current law says that for each node that is not a terminal, the total current flowing into that node equals the total current flowing out. In other words:

\displaystyle{ \sum_{e: t(e) = n} I_e = \sum_{e: s(e) = n} I_e }

for each node n \in N - T. We don’t impose Kirchhoff’s current law at terminals, because we want to allow current to flow in or out there!

Kirchhoff’s voltage law says that we can choose for each node a potential \phi_n, which is a function of time:

\phi_n : \mathbb{R} \to \mathbb{R}

such that

V_e = \phi_{s(e)} - \phi_{t(e)}

for each e \in E. In other words, the voltage across each edge is the difference of potentials at the two ends of this edge. This is a slightly nonstandard way to state Kirchhoff’s voltage law, but it’s equivalent to the usual one.
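
To make this concrete, here is a sketch of how one might generate Kirchhoff’s laws symbolically from the Python encoding above, using the sympy library. Again, this is just an illustration under those made-up conventions, not canonical code:

import sympy as sp

t = sp.symbols('t')

def kirchhoff_equations(circuit):
    # One current I_e and one voltage V_e per edge, as functions of time...
    I = [sp.Function(f'I_{i}')(t) for i in range(len(circuit.edges))]
    V = [sp.Function(f'V_{i}')(t) for i in range(len(circuit.edges))]
    # ...and one potential phi_n per node.
    phi = {n: sp.Function(f'phi_{n}')(t) for n in circuit.nodes}

    equations = []
    # Current law: flow in = flow out, imposed only at non-terminal nodes.
    for n in circuit.nodes - circuit.terminals:
        flow_in = sum(I[i] for i, e in enumerate(circuit.edges) if e.target == n)
        flow_out = sum(I[i] for i, e in enumerate(circuit.edges) if e.source == n)
        equations.append(sp.Eq(flow_in, flow_out))
    # Voltage law: V_e = phi_{s(e)} - phi_{t(e)} for every edge.
    for i, e in enumerate(circuit.edges):
        equations.append(sp.Eq(V[i], phi[e.source] - phi[e.target]))
    return I, V, equations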

In addition to Kirchhoff’s laws, there’s an equation for each edge, relating the current and voltage on that edge. The details of this equation depend on the element labelling that edge, so we consider the four cases in turn (a code sketch pulling them together follows the list):

• If our edge e is labelled by a resistor of resistance R:

we write the equation

V_e = R I_e

This is called Ohm’s law.

• If our edge e is labelled by an inductor of inductance L:

we write the equation

\displaystyle{ V_e = L \frac{d I_e}{d t} }

I don’t know a name for this equation, but you can read about it here.

• If our edge e is labelled by a capacitor of capacitance C:

we write the equation

\displaystyle{ I_e = C \frac{d V_e}{d t} }

I don’t know a name for this equation, but you can read about it here.

• If our edge e is labelled by a voltage source of voltage V:

we write the equation

V_e = V

This explains the term ‘voltage source’.
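
Continuing the sympy sketch above, the four cases become one more equation per edge. Together with Kirchhoff’s laws, these equations say what the circuit does:

import sympy as sp

t = sp.symbols('t')

def component_equations(circuit, I, V):
    # One equation per edge, depending on which element labels it.
    equations = []
    for i, e in enumerate(circuit.edges):
        if e.element == Element.RESISTOR:        # Ohm's law: V = R I
            equations.append(sp.Eq(V[i], e.value * I[i]))
        elif e.element == Element.INDUCTOR:      # V = L dI/dt
            equations.append(sp.Eq(V[i], e.value * sp.diff(I[i], t)))
        elif e.element == Element.CAPACITOR:     # I = C dV/dt
            equations.append(sp.Eq(I[i], e.value * sp.diff(V[i], t)))
        else:                                    # voltage source: V_e = V
            equations.append(sp.Eq(V[i], e.value))
    return equations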

Puzzles

Next time we’ll look at some examples and see how we can start polishing up this formalism into something prettier. But you can get to work now:

Puzzle 1. Starting from the rules above, write down and simplify the equations for this circuit:

Puzzle 2. Do the same for this circuit:

Puzzle 3. If we added a fifth kind of element, our rules for getting equations from circuit diagrams would have more symmetry between voltages and currents. What is this extra element?

37 Responses to Network Theory (Part 27)

  1. HK says:

    You need a current source.

    I am absolutely besotted with your idea of touring different formalisms. Please do let me know where I can buy this in book format, imho it would be something worth reading and rereading.

    From my admittedly limited perspective, it also rather naturally leads to information theory and then to Kolmogorov complexity, and possibly to Dr. Strogatz’s work over at MIT.

    Really looking forward to further posts. Thank you so much for sharing.

    • John Baez says:

      Thanks! I admit that I enjoy touring different formalisms, and it’s good to learn them, so we can talk to more people… but I also want to do some ‘data compression’, and reduce the number of required formalisms down to some bare minimum, and understand how they’re related.

      Parts 2-26 of this network theory series will become a book fairly soon, and you can download a draft here. I’m not sure what I’ll do with the forthcoming parts yet. Maybe someday I’ll write a bigger, better book. But one book at a time!

  2. Problem 1.

    Apply Kirchhoff’s voltage law:

    V = V_R + V_C + V_L

    Take a time derivative:

    \dot V = \dot V_R + \dot V_C + \dot V_L

    The current I is constant on the full loop. The dynamics of the components obey their own equations:

    V_R = I R

    \displaystyle{  I = C \frac{d V_C}{dt} }

    \displaystyle{ V_L = L \frac{d I}{dt} }

    Therefore,

    \dot V = L  \ddot I + R \dot I + \frac{1}{C} I

    from which the time dependence of the three components can be found, given V.
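
    Here is a quick numerical sanity check of this equation, just a sketch with made-up component values: for a constant V the driving term \dot V vanishes, so the current should simply ring down.

    from scipy.integrate import solve_ivp

    # For constant V:  L I'' + R I' + I/C = dV/dt = 0.
    L, R, C = 1.0, 0.2, 1.0   # made-up component values

    def rhs(t, y):
        I, dI = y             # state: the current and its time derivative
        return [dI, -(R * dI + I / C) / L]

    # Start with current 1 and zero slope, and integrate for a while:
    sol = solve_ivp(rhs, (0.0, 30.0), [1.0, 0.0])
    print(sol.y[0])           # a damped oscillation, ringing at roughly 1/sqrt(LC)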

    • John Baez says:

      Great! And if we think of the current as the time derivative of something else—namely the charge Q on the capacitor’s plate:

      I = \dot{Q}

      then we can rewrite this as

      L \ddot Q + R \dot Q + \frac{1}{C} Q = V

      which should remind us intensely of the equation describing the height q of a particle of mass m hanging on a massless spring with spring constant k and damping coefficient c, being pushed on by some force F:

      m \ddot q + c \dot q + k q = F

      And this is part of the grand set of analogies people have developed between electrical circuits, mechanics, and many other subjects: here the inductance L plays the role of the mass m, the resistance R the damping coefficient c, the inverse capacitance 1/C the spring constant k, the voltage V the force F, and the charge Q the height q.

  3. westy31 says:

    I recently found some nice stuff in electric circuits:

    http://www.quantum-chemistry-history.com/Kron_Dat/Kron-1945/Kron-JAP-1945/Kron-JAP-1945.htm

    These elaborate circuit equivalents were not only used as intellectual exercise, but as actual analog computers for solving differential equations, before digital computers.

    Gerard

    • John Baez says:

      Thanks for the link! It’s nice to hear from you again! Of course I know your website:

      • Gerard Westendorp, Electric circuit diagram equivalents of fields.

      I’m interested in these things again: not using electric circuits as analogue computers, but using ideas in electric circuit theory to help develop a common category-theoretic formulation of many kinds of systems: first linear ones, and then nonlinear ones.

      You can get an idea of what I mean from this book:

      • Forbes T. Brown, Engineering System Dynamics: A Unified Graph-Centered Approach, CRC Press, 2007.

      It’s huge (1078 pages, for only $131), very nice, but it uses bond graphs instead of electrical circuit diagrams. They’re roughly equivalent formalisms, and I’ll try to talk about both of them in this series.

      • westy31 says:

        Hi John,

        Glad you still like my website, I needed a bit of a pep talk. I’ve been a bit inactive on the web lately. I’ve been working on electric circuits, but unable to finalize it on the website yet; it’s kind of a tough chapter. It turns out there is a cool relation between (n+1)+1 dimensional space, Cayley-Menger matrices and electric circuits on triangulated grids. Related also to the Einstein equations, but as I said, progress is slow. But discussions like these might speed things up.

        As an engineer, I have used electric circuits for thermal and acoustic applications. Especially the acoustics got me interested in electric circuits. For example, it is very convenient to model what happens acoustically when you drill a small hole somewhere in a long pipe, things like that. Maybe I should write a bit more on it, although there is a book by Beranek that is kind of a classic on the subject.

        Looking forward to more discussions on this. I am convinced the subject is by no means exhausted. I know bond graphs, but somehow I tend to prefer circuits.

        Gerard

        • John Baez says:

          I’m not convinced that bond graphs are ‘better’, though the people who work deeply on these analogies seem to like them. I will try to state a mathematical theorem relating them to circuit diagrams in a precise way—that’s probably more important than deciding which one is ‘better’.

        • Daniel Mahler says:

          It seems to me that circuits are more general than bond graphs, since I do not know of a way to represent lattice-shaped circuits as n-port networks and thus as bond graphs.

        • John Baez says:

          Yes, circuits are a bit more general than bond graphs, and that’s why I’m focusing on them here. Later I’ll describe a category where circuits are morphisms, and bond graphs will give an important subcategory.

    • Daniel Mahler says:

      Nice to see somebody interested in Kron. I am particularly interested in possible connections between Kron’s work & Information Geometry. I am fairly confident that the connection exists. Amari’s supervisor, Kazuo Kondo, was deeply influenced by Kron. Kondo’s research group, the RAAG, was largely dedicated to extending Kron’s geometrical ideas. Amari’s master’s and doctoral theses were titled Topological and Information-Theoretical Foundation of Diakoptics and Codiakoptics and Diakoptics of Information Spaces, respectively, “diakoptics” being the term Kron coined for his methods. Unfortunately, my understanding of both Information Geometry and Diakoptics is currently too limited to say anything useful about the connection. However, I think it should help tie together a number of threads in this forum, particularly given that Kron, Kondo & the RAAG formulated much of their ideas in terms of tensor geometry.

      • John Baez says:

        ‘Diakoptics’ sounds a bit weird, but the Wikipedia article on it says:

        Gabriel Kron’s Diakoptics (Greek dia– through + kopto– cut, tear) or Method of Tearing involves breaking a (usually physical) problem down into subproblems which can be solved independently before being joined back together to obtain a solution to the whole problem.

        Gabriel Kron was an unconventional Engineer who worked for GE in the US until his death in 1968. He was responsible for the first load flow (electricity) distribution system in New York.

        He was perhaps most famous for his Method of Tearing, a technique for splitting up physical problems into subproblems, solving each individual subproblem and then recombining to give an (unexpectedly) exact overall solution. The technique is efficient on sequential computers, but is particularly so on parallel architectures. Whether this holds for quantum parallelism is as yet unknown. It is peculiar as a decomposition method, in that it involves taking values on the “intersection layer” (the boundary between subsystems) into account. The method has been rediscovered by the parallel processing community recently under the name “Domain Decomposition”.

        A multilevel hierarchical version of the Method, in which the subsystems are recursively torn into subsubsystems etc., was published by Keith Bowden in 1991.

        I’ve heard of the ‘method of tearing’ before in Jan Willems’ expository articles on control theory. I don’t really know how this method works, but one of my main concerns these days is how to use category theory to describe the process of building a physical system out of interacting parts. ‘Tearing’ sounds like the reverse operation. I can’t help but think it’s logically secondary. But I need to get a sense of how people use it in practice.

        • I think that the whole point of diakoptics/tearing is to solve the parts and then glue the solutions back together. So one tears systems, but glues solutions. This reminds me of Joseph Goguen’s & Grant Malcolm’s work on modelling (de)composition of systems using sheaves in category theory. Malcolm also has interests in biological modelling and diagrams (the broad sense of the word, not just categories).

          Another interesting aspect of Kron’s work is how he treated moving systems like EM motors & generators using nonlinear coordinate transformations, making real use of tensors and differential geometry.

  4. Problem 2.

    Here

    V = V_C = V_R + V_L

    and

    V_R = I_R R

    \displaystyle{ V_L = L \frac{d I_L}{dt} }

    since I_R = I_L =: I, therefore

    \displaystyle{  V = I R + L \frac{d I}{dt} }

    and

    \displaystyle{  I_C = C \dot V }

    • John Baez says:

      Thanks! I think one fun thing to do with this problem is to work out the total current through the circuit:

      \mathbf{I} := I_R + I_C = I_L + I_C

      in terms of the voltage across the two terminals,

      \mathbf{V} := V_C = V_R + V_L

      in the case where the voltage \mathbf{V} varies sinusoidally with time, for example

      \mathbf{V} = \exp(i \omega t)

      for some frequency \omega. (Here I’m using complex voltages, which makes the math simpler.) We should see that this circuit acts as a band-pass filter. In other words, when

      \mathbf{V} = \exp(i \omega t)

      we should see that

      \mathbf{I} = f(\omega)  \exp(i \omega t)

      solves the equations for some function f, which we can work out. Moreover, we should see that f is biggest when \omega is near a certain frequency, and smaller for frequencies far away from that. So, we can use this circuit to filter a signal, amplifying the part whose frequency is near this special frequency, and damping out the rest.

      I haven’t actually done the calculation yet…

      • In the case you suggest,
        I_{total} = I_R + I_C = I_L + I_C

        We will define I := I_R = I_L, and we will find I by summing the voltages around the loop containing the resistor and inductor.

        e^{i \omega t} = I R + L \dot I

        To solve the above, we can use an integrating factor, yielding

        I = \frac{e^{i\omega t}}{R+i\omega L}

        Returning to the expression for the total current, and using

        I_C = C \dot V_C

        results in

        \frac{e^{i\omega t}}{R+ i \omega L} + i \omega C e^{i \omega t}

        The voltage appears in each term, so we will consider the constant factor (John’s f(\omega) above):

        (R+i\omega L)^{-1} + i \omega C

        = \frac{R - i \omega L}{R^2 + \omega^2 L^2} + i \omega C

        For the voltage and current to be in phase, the complex part of this expression must vanish. In other words,

        \omega^2 = \frac{1}{LC} - \frac{R^2}{L^2}

        • Jason Erbele says:

          I was curious to see what would happen using an extremization principle instead of assuming the voltage and current must be in phase. \frac{df(\omega)}{d\omega} is marginally easier to deal with as f(\omega) = (R+i\omega L)^{-1} + i\omega C than after the denominator is made real, so it was at that point that I diverged from your calculation.
          \frac{df}{d\omega} = -iL(R+i\omega L)^{-2} + iC = 0
          ends up having two pure imaginary solutions:
          \omega = i(\frac{R}{L} \pm (LC)^{-1/2}).
          Their product is real, and looks somewhat familiar…
          \omega_1 \omega_2 = \frac{1}{LC} - \frac {R^2}{L^2}.
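
          A quick numerical check of that in-phase frequency, just a sketch with made-up values chosen so that 1/(LC) > R^2/L^2:

          import numpy as np

          R, L, C = 1.0, 1.0, 0.1   # made-up values: 1/(LC) = 10 > R^2/L^2 = 1

          def f(omega):
              # John's f(omega): the total current per unit applied exp(i omega t)
              return 1 / (R + 1j * omega * L) + 1j * omega * C

          omega_star = np.sqrt(1 / (L * C) - R**2 / L**2)
          print(omega_star)           # 3.0
          print(f(omega_star).imag)   # ~0: current and voltage in phase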

  5. Samuel Vidal says:

    Hey! Love your work.
    In Puzzle 3, are you talking about the memristor of Leon Chua?

    • John Baez says:

      No, though I know why you’re saying that.

      In the four elements I describe, there’s almost a symmetry which:

      • interchanges voltage V and current I

      • interchanges inductance L and capacitance C

      • interchanges resistance R and conductance G = 1/R

      But to make this symmetry complete we need a fifth element.

      By the way, there’s a movie about this puzzle.

      • David Lyon says:

        Would you perhaps be summoning the fabled ‘current source’ from outer space to protect mankind from cosmic evil?

      • Samuel Vidal says:

      Thx for the explanation. I was thinking of the (constant) current source also, but I couldn’t resist throwing in the idea of the memristor, ha ha. It is a relatively new idea that opens the door to fabulous technology in the years to come. I think we should get used to seeing it around. In my opinion it should definitely belong in the picture.

        • John Baez says:

          The memristor also fits into the picture, but it’s more subtle. I talked about it in week294 of This Week’s Finds. This was part of a series of articles that was sort of a warmup for this course.

  6. I think this is a beautiful intro to electrical circuits.

    In general, if you have a graph with N edges you end up with 2N unknowns (a current and a voltage for each edge), but you also have 2N equations (N component-law equations, one for each edge, and N from the Kirchhoff rules), which allow you to actually solve the system of ODEs.

    Regarding the “fifth element”, I think a current source (I_e = I) is definitely needed for symmetry, but perhaps you mean something else.

  7. martopix says:

    The I for current comes from “intensity of current”.

  8. davetweed says:

    This may well be something you might be expanding on later in the course, but: both the Kirchoff laws are “linear”. (E.g., it’s the sum of the incoming currents that equals the sum of the outgoing currents, whereas at least potentially one could imagine a rule that the sum of squares of incoming “flow” must equal the sum of squares of outgoing “flow”, say.) Obviously physical electricity turns out to work like that, but is there a more abstract reason why the key rules are “linear”, and would things not work/be inconsistent otherwise?

    (Kirchoff’s laws are used in some more abstract stuff on networks and I haven’t yet seen anything I recognise that suggests why this form is the most useful/appropriate.)

    • John Baez says:

      Kirchhoff’s laws are part of a mathematical subject called homology theory, and linearity is built into this subject in a really deep way. I’m definitely going to talk about that further down the line.

      But I’m not sure I’ll ever answer your question, because it sounds like you’re looking for a clear reason why things won’t work otherwise. I don’t know about that. Nor do I know about attempts to generalize homology theory to cases where the ‘boundary operator’ is nonlinear. If we could do that in a nice way, we might get a way to make Kirchhoff’s laws nonlinear.
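
      To give a tiny taste of it: the graph has an incidence matrix, Kirchhoff’s current law says this matrix kills the vector of currents (at least for a closed circuit, with no terminals), and Kirchhoff’s voltage law says the vector of voltages comes from its transpose. Here is a minimal numerical sketch for a triangle, with made-up numbers:

      import numpy as np

      # Triangle with edges 0->1, 1->2, 0->2.
      # Incidence matrix: D[n, e] = +1 if n = t(e), -1 if n = s(e).
      D = np.array([[-1,  0, -1],
                    [ 1, -1,  0],
                    [ 0,  1,  1]])

      I = np.array([1, 1, -1])          # a current circulating around the loop
      print(D @ I)                      # [0 0 0]: the current law holds

      phi = np.array([5.0, 2.0, -1.0])  # arbitrary potentials at the nodes
      V = -D.T @ phi                    # V_e = phi_{s(e)} - phi_{t(e)}
      print(V @ I)                      # 0.0: a baby case of Tellegen's theorem

      Currents obeying the current law are cycles, voltages coming from potentials are coboundaries, and their pairing vanishes: that orthogonality is where the homology begins.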

      By the way, ‘Kirchhoff’ has two h’s. I always mixed this up myself, until I reminded myself sternly that the name comes from ‘Kirch’ (church) and ‘hoff’ (courtyard, though actually that’s spelled ‘Hof’, so I guess this mnemonic is slightly flawed).

  9. […] Last time I left you with some puzzles. One was to use the laws of electrical circuits to work out what this one does […]

  10. John Baez says:

    I fixed the treatment of Kirchhoff’s current law in this blog entry. We don’t want to impose this law at nodes that are ‘terminals’.

  11. Daniel W says:

    A minor quibble from an old electrical engineer… In Problems 1 and 2 the initial charge on the capacitor and the initial flux in the inductor are ignored or unspecified. In full generality, these should also be considered when writing out the integro-differential equations that characterize the circuit, by including the initial voltage across the capacitor and the initial current through the inductor.

    As for Problem 3, I was going to suggest a transformer, which steps up and down voltages and currents symmetrically, but I looked up your week 294 notes and saw your mention of a nullor, consisting of a nullator and a norator. I never heard of such a thing when I was studying EE, so I’m glad to be learning something new every day…

  12. k says:

    Hey brother Baez, can you explain that sigma notation? I just don’t quite get what it means. And if I_e is the current of the edge e, but it’s also a function of time, why is it not labelled I_e(t) or something?

    • John Baez says:

      Sigma is the notation for ‘sum’. So,

      \displaystyle{ \sum_{e: t(e) = n} I_e = \sum_{e: s(e) = n} I_e }

      says that for each node n that is not a terminal, the sum of the currents I_e over all edges having n as their target (e such that t(e) = n) equals the sum of the currents I_e over all edges having n as their source (e such that s(e) = n).

      Or, in plain English, it says what I said:

      the total current flowing into that node equals the total current flowing out.

      Also:

      if I_e is the current of the edge e, but it’s also a function of time, why is it not labelled I_e(t) or something?

      Do that if you like. Physicists often write simply v to be the velocity of something even if the velocity depends on time. Mathematicians would say that v stands for the function of time, while v(t) stands for the value of that function at a particular time. Either way, they’re happy leaving out the t.

  13. Remember from Part 27 that when we put it to work, our circuit has a current flowing along each edge […]

  14. […] use of such networks for modeling dynamics is taught by many, including via the tutorials of Professor John Baez of University of California, Davis. They are more general than it may seem, being both equivalent to linear differential equations […]
