## A Compositional Framework for Passive Linear Networks

Here’s a new paper on network theory:

• John Baez and Brendan Fong, A compositional framework for passive linear networks, Theory and Applications of Categories 33 (2018), 1158–1222.

While my paper with Jason Erbele studies signal flow diagrams, this one focuses on circuit diagrams. The two are different, but closely related.

I’ll explain their relation at the Turin workshop in May. For now, let me just talk about this paper with Brendan. There’s a lot in here, but let me just try to explain the main result. It’s all about ‘black boxing’: hiding the details of a circuit and only remembering its behavior as seen from outside.

### The idea

In the late 1940s, just as Feynman was developing his diagrams for processes in particle physics, Eilenberg and Mac Lane initiated their work on category theory. Over the subsequent decades, and especially in the work of Joyal and Street in the 1980s, it became clear that these developments were profoundly linked: monoidal categories have a precise graphical representation in terms of string diagrams, and conversely monoidal categories provide an algebraic foundation for the intuitions behind Feynman diagrams. The key insight is the use of categories where morphisms describe physical processes, rather than structure-preserving maps between mathematical objects.

In work on fundamental physics, the cutting edge has moved from categories to higher categories. But the same techniques have filtered into more immediate applications, particularly in computation and quantum computation. Our paper is part of a new program of applying string diagrams to engineering, with the aim of giving diverse diagram languages a unified foundation based on category theory.

Indeed, even before physicists began using Feynman diagrams, various branches of engineering were using diagrams that in retrospect are closely related. Foremost among these are the ubiquitous electrical circuit diagrams. Although less well-known, similar diagrams are used to describe networks consisting of mechanical, hydraulic, thermodynamic and chemical systems. Further work, pioneered in particular by Forrester and Odum, applies similar diagrammatic methods to biology, ecology, and economics.

As discussed in detail by Olsen, Paynter and others, there are mathematically precise analogies between these different systems. In each case, the system’s state is described by variables that come in pairs, with one variable in each pair playing the role of ‘displacement’ and the other playing the role of ‘momentum’. In engineering, the time derivatives of these variables are sometimes called ‘flow’ and ‘effort’.

| | displacement: $q$ | flow: $\dot q$ | momentum: $p$ | effort: $\dot p$ |
|---|---|---|---|---|
| Mechanics: translation | position | velocity | momentum | force |
| Mechanics: rotation | angle | angular velocity | angular momentum | torque |
| Electronics | charge | current | flux linkage | voltage |
| Hydraulics | volume | flow | pressure momentum | pressure |
| Thermal physics | entropy | entropy flow | temperature momentum | temperature |
| Chemistry | moles | molar flow | chemical momentum | chemical potential |

In classical mechanics, this pairing of variables is well understood using symplectic geometry. Thus, any mathematical formulation of the diagrams used to describe networks in engineering needs to take symplectic geometry as well as category theory into account.

While diagrams of networks have been independently introduced in many disciplines, we do not expect formalizing these diagrams to immediately help the practitioners of these disciplines. At first the flow of information will mainly go in the other direction: by translating ideas from these disciplines into the language of modern mathematics, we can provide mathematicians with food for thought and interesting new problems to solve. We hope that in the long run mathematicians can return the favor by bringing new insights to the table.

Although we keep the broad applicability of network diagrams in the back of our minds, our paper talks in terms of electrical circuits, for the sake of familiarity. We also consider a somewhat limited class of circuits. We only study circuits built from ‘passive’ components: that is, those that do not produce energy. Thus, we exclude batteries and current sources. We only consider components that respond linearly to an applied voltage. Thus, we exclude components such as nonlinear resistors or diodes. Finally, we only consider components with one input and one output, so that a circuit can be described as a graph with edges labeled by components. Thus, we also exclude transformers. The most familiar components our framework covers are linear resistors, capacitors and inductors.

While we want to expand our scope in future work, the class of circuits made from these components has appealing mathematical properties, and is worthy of deep study. Indeed, these circuits have been studied intensively for many decades by electrical engineers. Even circuits made exclusively of resistors have inspired work by mathematicians of the caliber of Weyl and Smale!

Our work relies on this research. All we are adding is an emphasis on symplectic geometry and an explicitly ‘compositional’ framework, which clarifies the way a larger circuit can be built from smaller pieces. This is where monoidal categories become important: the main operations for building circuits from pieces are composition and tensoring.

Our strategy is most easily illustrated for circuits made of linear resistors. Such a resistor dissipates power, turning useful energy into heat at a rate determined by the voltage across the resistor. However, a remarkable fact is that a circuit made of these resistors always acts to minimize the power dissipated this way. This ‘principle of minimum power’ can be seen as the reason symplectic geometry becomes important in understanding circuits made of resistors, just as the principle of least action leads to the role of symplectic geometry in classical mechanics.

Here is a circuit made of linear resistors:

The wiggly lines are resistors, and their resistances are written beside them: for example, $3\Omega$ means 3 ohms, an ‘ohm’ being a unit of resistance. To formalize this, define a circuit of linear resistors to consist of:

• a set $N$ of nodes,
• a set $E$ of edges,
• maps $s,t : E \to N$ sending each edge to its source and target node,
• a map $r: E \to (0,\infty)$ specifying the resistance of the resistor labelling each edge,
• maps $i : X \to N,$ $o : Y \to N$ specifying the inputs and outputs of the circuit.

When we run electric current through such a circuit, each node $n \in N$ gets a potential $\phi(n).$ The voltage across an edge $e \in E$ is defined as the change in potential as we move from the source of $e$ to its target, $\phi(t(e)) - \phi(s(e)).$ The power dissipated by the resistor on this edge is then

$\displaystyle{ \frac{1}{r(e)}\big(\phi(t(e))-\phi(s(e))\big)^2 }$

The total power dissipated by the circuit is therefore twice

$\displaystyle{ P(\phi) = \frac{1}{2}\sum_{e \in E} \frac{1}{r(e)}\big(\phi(t(e))-\phi(s(e))\big)^2 }$

The factor of $\frac{1}{2}$ is convenient in some later calculations.
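As a concrete illustration, here is a small Python sketch (my own, not from the paper) of this setup: a circuit of linear resistors stored as a list of labelled edges, together with the functional $P$. All names here are hypothetical.

```python
# A minimal sketch, assuming a circuit is given as a list of labelled edges.

def power(edges, phi):
    """P(phi) = (1/2) * sum over edges e of (1/r_e) * (phi(t_e) - phi(s_e))^2.

    `edges` is a list of (source, target, resistance) triples;
    `phi` is a dict assigning a potential to each node.
    """
    return 0.5 * sum((phi[t] - phi[s]) ** 2 / r for (s, t, r) in edges)

# Example: two nodes joined by a 2-ohm resistor, held at 0 and 4 volts.
# The power dissipated is V^2/R = 16/2 = 8 watts, so P is half that:
print(power([("a", "b", 2.0)], {"a": 0.0, "b": 4.0}))  # prints 4.0
```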

Note that $P$ is a nonnegative quadratic form on the vector space $\mathbb{R}^N.$ However, not every nonnegative quadratic form on $\mathbb{R}^N$ arises in this way from some circuit of linear resistors with $N$ as its set of nodes. The quadratic forms that do arise are called Dirichlet forms. They have been extensively investigated, and they play a major role in our work.

We write

$\partial N = i(X) \cup o(Y)$

for the set of terminals: that is, nodes corresponding to inputs or outputs. The principle of minimum power says that if we fix the potential at the terminals, the circuit will choose the potential at other nodes to minimize the total power dissipated. An element $\psi$ of the vector space $\mathbb{R}^{\partial N}$ assigns a potential to each terminal. Thus, if we fix $\psi,$ the total power dissipated will be twice

$Q(\psi) = \min_{\substack{ \phi \in \mathbb{R}^N \\ \phi\vert_{\partial N} = \psi}} \; P(\phi)$

The function $Q : \mathbb{R}^{\partial N} \to \mathbb{R}$ is again a Dirichlet form. We call it the power functional of the circuit.
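To make this concrete, here is a rough numerical sketch (mine; the names are hypothetical, and this is not code from the paper) of computing $Q$ by minimizing $P$ over the interior potentials. Since $P(\phi) = \tfrac{1}{2}\phi^T L \phi$ with $L$ the conductance-weighted graph Laplacian, the minimization reduces to a linear solve for the potentials at the non-terminal nodes.

```python
# A sketch, assuming nodes are numbered 0..n-1 and edges carry resistances.
import numpy as np

def laplacian(n, edges):
    """Conductance-weighted graph Laplacian of a resistor network."""
    L = np.zeros((n, n))
    for s, t, r in edges:
        c = 1.0 / r  # conductance
        L[s, s] += c; L[t, t] += c
        L[s, t] -= c; L[t, s] -= c
    return L

def Q(n, edges, boundary, psi):
    """Minimum of P(phi) over all phi agreeing with psi on the boundary."""
    interior = [v for v in range(n) if v not in boundary]
    L = laplacian(n, edges)
    psi = np.asarray(psi, dtype=float)
    phi = np.zeros(n)
    phi[boundary] = psi
    if interior:
        # Setting the gradient to zero on the interior gives a linear system.
        LII = L[np.ix_(interior, interior)]
        LIB = L[np.ix_(interior, boundary)]
        phi[interior] = np.linalg.solve(LII, -LIB @ psi)
    return 0.5 * phi @ L @ phi

# A 2-ohm and a 3-ohm resistor in series (0 -- 1 -- 2), terminals {0, 2}:
# they behave as 5 ohms, so Q should be (1/2)(10)^2 / 5 = 10.
print(Q(3, [(0, 1, 2.0), (1, 2, 3.0)], [0, 2], [0.0, 10.0]))  # ≈ 10.0
```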

Now, suppose we are unable to see the internal workings of a circuit, and can only observe its ‘external behavior’: that is, the potentials at its terminals and the currents flowing into or out of these terminals. Remarkably, this behavior is completely determined by the power functional $Q.$ The reason is that the current at any terminal can be obtained by differentiating $Q$ with respect to the potential at this terminal, and relations of this form are all the relations that hold between potentials and currents at the terminals.

The Laplace transform allows us to generalize this immediately to circuits that can also contain linear inductors and capacitors, simply by changing the field we work over, replacing $\mathbb{R}$ by the field $\mathbb{R}(s)$ of rational functions of a single real variable, and talking of impedance where we previously talked of resistance. We obtain a category $\mathrm{Circ}$ where an object is a finite set, a morphism $f : X \to Y$ is a circuit with input set $X$ and output set $Y,$ and composition is given by identifying the outputs of one circuit with the inputs of the next, and taking the resulting union of labelled graphs. Each such circuit gives rise to a Dirichlet form, now defined over $\mathbb{R}(s),$ and this Dirichlet form completely describes the externally observable behavior of the circuit.

We can take equivalence classes of circuits, where two circuits count as the same if they have the same Dirichlet form. We wish for these equivalence classes of circuits to form a category. Although there is a notion of composition for Dirichlet forms, we find that it lacks identity morphisms or, equivalently, it lacks morphisms representing ideal wires of zero impedance. To address this we turn to Lagrangian subspaces of symplectic vector spaces. These generalize quadratic forms via the map

$\Big(Q: \mathbb{F}^{\partial N} \to \mathbb{F}\Big) \longmapsto$

$\mathrm{Graph}(dQ) = \{(\psi, dQ_\psi) \mid \psi \in \mathbb{F}^{\partial N} \} \; \subseteq \; \mathbb{F}^{\partial N} \oplus (\mathbb{F}^{\partial N})^\ast$

taking a quadratic form $Q$ on the vector space $\mathbb{F}^{\partial N}$ over the field $\mathbb{F}$ to the graph of its differential $dQ.$ Here we think of the symplectic vector space $\mathbb{F}^{\partial N} \oplus (\mathbb{F}^{\partial N})^\ast$ as the state space of the circuit, and the subspace $\mathrm{Graph}(dQ)$ as the subspace of attainable states, with $\psi \in \mathbb{F}^{\partial N}$ describing the potentials at the terminals, and $dQ_\psi \in (\mathbb{F}^{\partial N})^\ast$ the currents.

This construction is well-known in classical mechanics, where the principle of least action plays a role analogous to that of the principle of minimum power here. The set of Lagrangian subspaces is actually an algebraic variety, the Lagrangian Grassmannian, which serves as a compactification of the space of quadratic forms. The Lagrangian Grassmannian has already played a role in Sabot’s work on circuits made of resistors. For us, its importance is that we can find identity morphisms for the composition of Dirichlet forms by taking circuits made of parallel resistors and letting their resistances tend to zero: the limit is not a Dirichlet form, but it exists in the Lagrangian Grassmannian.

Indeed, there exists a category $\mathrm{LagrRel}$ with finite dimensional symplectic vector spaces as objects and Lagrangian relations as morphisms: that is, linear relations from $V$ to $W$ that are given by Lagrangian subspaces of $\overline{V} \oplus W,$ where $\overline{V}$ is the symplectic vector space conjugate to $V$—that is, with the sign of the symplectic structure switched.

To move from the Lagrangian subspace defined by the graph of the differential of the power functional to a morphism in the category $\mathrm{LagrRel}$—that is, to a Lagrangian relation— we must treat seriously the input and output functions of the circuit. These express the circuit as built upon a cospan:

Applicable far more broadly than this present formalization of circuits, cospans model systems with two ‘ends’, an input and output end, albeit without any connotation of directionality: we might just as well exchange the role of the inputs and outputs by taking the mirror image of the above diagram. The role of the input and output functions, as we have discussed, is to mark the terminals we may glue onto the terminals of another circuit, and the pushout of cospans gives formal precision to this gluing construction.

One upshot of this cospan framework is that we may consider circuits with elements of $N$ that are both inputs and outputs, such as this one:

This corresponds to the identity morphism on the finite set with two elements. Another is that some points may be considered an input or output multiple times, like here:

This lets us connect two distinct outputs to the above double input.

Given a set $X$ of inputs or outputs, we understand the electrical behavior on this set by considering the symplectic vector space $\mathbb{F}^X \oplus {(\mathbb{F}^X)}^\ast,$ the direct sum of the space $\mathbb{F}^X$ of potentials and the space ${(\mathbb{F}^X)}^\ast$ of currents at these points. A Lagrangian relation specifies which states of the output space $\mathbb{F}^Y \oplus {(\mathbb{F}^Y)}^\ast$ are allowed for each state of the input space $\mathbb{F}^X \oplus {(\mathbb{F}^X)}^\ast.$ Turning the Lagrangian subspace $\mathrm{Graph}(dQ)$ of a circuit into this information requires that we understand the ‘symplectification’

$Sf: \mathbb{F}^B \oplus {(\mathbb{F}^B)}^\ast \to \mathbb{F}^A \oplus {(\mathbb{F}^A)}^\ast$

and ‘twisted symplectification’

$S^tf: \mathbb{F}^B \oplus {(\mathbb{F}^B)}^\ast \to \overline{\mathbb{F}^A \oplus {(\mathbb{F}^A)}^\ast}$

of a function $f: A \to B$ between finite sets. In particular we need to understand how these apply to the input and output functions with codomain restricted to $\partial N$; abusing notation, we also write these $i: X \to \partial N$ and $o: Y \to \partial N.$

The symplectification $Sf$ is a Lagrangian relation, and the catch phrase is that it ‘copies voltages’ and ‘splits currents’. More precisely, for any given potential-current pair $(\psi,\iota)$ in $\mathbb{F}^B \oplus {(\mathbb{F}^B)}^\ast,$ its image under $Sf$ consists of all elements of $(\psi', \iota')$ in $\mathbb{F}^A \oplus {(\mathbb{F}^A)}^\ast$ such that the potential at $a \in A$ is equal to the potential at $f(a) \in B,$ and such that, for each fixed $b \in B,$ collectively the currents at the $a \in f^{-1}(b)$ sum to the current at $b.$ We use the symplectification $So$ of the output function to relate the state on $\partial N$ to that on the outputs $Y.$
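Here is a small sketch (my own; the encoding as dicts is hypothetical) of the symplectification viewed as a relation: a $B$-state and an $A$-state are related exactly when potentials are copied along $f$ and the currents over each fibre $f^{-1}(b)$ sum to the current at $b$.

```python
# A sketch, assuming f : A -> B is a dict and a state is a pair of dicts
# (potentials, currents) indexed by the relevant finite set.

def related_by_Sf(f, state_B, state_A, tol=1e-12):
    """Is (state_B, state_A) in the relation Sf, for f : A -> B?"""
    psi_B, iota_B = state_B
    psi_A, iota_A = state_A
    # 'copies voltages': the potential at a equals the potential at f(a)
    if any(abs(psi_A[a] - psi_B[f[a]]) > tol for a in f):
        return False
    # 'splits currents': currents over the fibre f^{-1}(b) sum to iota_B(b)
    return all(abs(sum(iota_A[a] for a in f if f[a] == b) - iota_B[b]) <= tol
               for b in iota_B)

# Two terminals x1, x2 both attached to the same node n: the potential is
# copied to both, and their currents must sum to the current at n.
f = {"x1": "n", "x2": "n"}
state_n = ({"n": 5.0}, {"n": 3.0})
assert related_by_Sf(f, state_n, ({"x1": 5.0, "x2": 5.0}, {"x1": 1.0, "x2": 2.0}))
assert not related_by_Sf(f, state_n, ({"x1": 5.0, "x2": 4.0}, {"x1": 1.0, "x2": 2.0}))
```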

As our current framework is set up to report the current out of each node, to describe input currents we define the twisted symplectification:

$S^tf: \mathbb{F}^B \oplus {(\mathbb{F}^B)}^\ast \to \overline{\mathbb{F}^A \oplus {(\mathbb{F}^A)}^\ast}$

almost identically to the above, except that we flip the sign of the currents $\iota' \in (\mathbb{F}^A)^\ast.$ This again gives a Lagrangian relation. We use the twisted symplectification $S^ti$ of the input function to relate the state on $\partial N$ to that on the inputs.

The Lagrangian relation corresponding to a circuit then comprises exactly a list of the potential-current pairs that are possible electrical states of the inputs and outputs of the circuit. In doing so, it identifies distinct circuits. A simple example of this is the identification of a single 2-ohm resistor:

with two 1-ohm resistors in series:

Our inability to access the internal workings of a circuit in this representation inspires us to call this process black boxing: you should imagine encasing the circuit in an opaque black box, leaving only the terminals accessible. Fortunately, this information is enough to completely characterize the external behavior of a circuit, including how it interacts when connected with other circuits!
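For instance, the identification above can be checked in a few lines (a sketch of mine, using the fact that the minimizing interior potential is the conductance-weighted average of its neighbours):

```python
# A sketch checking that a single 2-ohm resistor and two 1-ohm resistors
# in series have the same power functional, hence the same black box.

def Q_single(psi0, psi1, r=2.0):
    return 0.5 * (psi1 - psi0) ** 2 / r

def Q_series(psi0, psi1, r1=1.0, r2=1.0):
    # Minimizing P over the middle potential phi gives the
    # conductance-weighted average, recovering the series rule r1 + r2.
    phi = (psi0 / r1 + psi1 / r2) / (1 / r1 + 1 / r2)
    return 0.5 * ((phi - psi0) ** 2 / r1 + (psi1 - phi) ** 2 / r2)

for a, b in [(0.0, 1.0), (2.0, -3.0), (1.5, 1.5)]:
    assert abs(Q_single(a, b) - Q_series(a, b)) < 1e-12
print("same power functional, so the same black box")
```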

Put more precisely, the black boxing process is functorial: we can compute the black-boxed version of a circuit made of parts by computing the black-boxed versions of the parts and then composing them. In fact we shall prove that $\mathrm{Circ}$ and $\mathrm{LagrRel}$ are dagger compact categories, and the black box functor preserves all this extra structure:

Theorem. There exists a symmetric monoidal dagger functor, the black box functor

$\blacksquare: \mathrm{Circ} \to \mathrm{LagrRel}$

mapping a finite set $X$ to the symplectic vector space $\mathbb{F}^X \oplus (\mathbb{F}^X)^\ast$ it generates, and a circuit $\big((N,E,s,t,r),i,o\big)$ to the Lagrangian relation

$\bigcup_{v \in \mathrm{Graph}(dQ)} S^ti(v) \times So(v) \subseteq \overline{\mathbb{F}^X \oplus (\mathbb{F}^X)^\ast} \oplus \mathbb{F}^Y \oplus (\mathbb{F}^Y)^\ast$

where $Q$ is the circuit’s power functional.

The goal of this paper is to prove and explain this result. The proof is trickier than one might expect, but our approach involves concepts that should be useful throughout the study of networks, such as ‘decorated cospans’ and ‘corelations’.

Give it a read, and let us know if you have questions or find mistakes!

### 46 Responses to A Compositional Framework for Passive Linear Networks

1. Tobias Fritz says:

Can you use $\mathrm{Circ}$ to come up with a presentation of $\mathrm{LagrRel}$, similar to how you derived a presentation of $\mathrm{FinRel}_k$?

More precisely, the black box functor is essentially surjective: every finite-dimensional symplectic vector space is isomorphic to one of the form $\mathbf{F}^X\oplus (\mathbf{F}^X)^*$. Is the functor also full? This question should be relevant for engineering-type problems where one wants to realize a desired Lagrangian relation in terms of a circuit.

If the black box functor is full, then do you know which relations one needs to impose on the generators of $\mathrm{Circ}$ in order to turn it into a presentation of $\mathrm{LagrRel}$?

• John Baez says:

I’m almost sure the functor is not full. A dense set of Lagrangian subspaces arise as the graphs of quadratic forms in the manner described above. But not every quadratic form on $\mathbb{R}^n$—not even every nonnegative one—comes from a circuit made of resistors: only the Dirichlet forms do. Similarly, I believe not every Lagrangian subspace of $\mathbb{R}^n \oplus (\mathbb{R}^n)^*$ comes from a circuit made of resistors and perfectly conductive wires. There should be a concept of ‘Dirichlet’ Lagrangian subspace, and we should have talked about it in our paper. These should give the image of the black box functor.

There should be a nice presentation of a category $\mathrm{DirLagrRel}$ which has symplectic vector spaces as objects and Dirichlet Lagrangian subspaces as morphisms.

On the other hand, it would also be very nice to give a presentation of the larger category $\mathrm{LagrRel}.$

I didn’t know about Dirichlet forms until I started studying electrical circuits in earnest; they seem a bit underpublicized. But they have lots of nice equivalent characterizations, and we give a few on page 17 of our paper. There’s also a very nice theory of Dirichlet forms in the infinite-dimensional case, which people use to study electric potentials in substances with spatially varying electrical conductivity—or if you prefer a more mathematical motivation, random walks and the Laplace equation!

The idea of Dirichlet forms, as opposed to general nonnegative quadratic forms, is that they capture the idea of locality.

• Tobias Fritz says:

So let me try to understand why there are nonnegative quadratic forms on $\mathbb{R}^n$ that are not Dirichlet forms. The most general Dirichlet form looks like this:

$\frac{1}{2}\sum_{i,j=1}^n \frac{1}{r(i,j)} (\phi(i) - \phi(j))^2$

where the nodes are enumerated by $i,j=1,\ldots,n$. This is the most general case because we can assume without loss of generality that the circuit does not have any resistors in parallel; and we can also assume that there is a resistor between any two nodes, because we can put $r(i,j)=r(j,i)=\infty$ for those $i,j$ that we do not want to connect directly by a resistor.

In other words, the Dirichlet forms on $\mathbb{R}^n$ are parametrized by the symmetric matrix $r(i,j)$, where the values on the diagonal are irrelevant since the resulting terms cancel: using a resistor to connect some node to itself has no effect! So the space of Dirichlet forms is at most $\tfrac{n(n-1)}{2}$-dimensional. On the other hand, the space of nonnegative quadratic forms on $\mathbb{R}^n$ is $\frac{n(n+1)}{2}$-dimensional. So the Dirichlet forms live on a subspace of codimension $n$, and most nonnegative quadratic forms are not Dirichlet forms…

The idea of Dirichlet forms, as opposed to general nonnegative quadratic forms, is that they capture the idea of locality.

Ah, nice! They sure look a lot like a kinetic term in a quantum field theory, with the resistance playing the role of the squared length of an edge.

In fact, as mentioned in your paper, the Dirichlet form is essentially the Laplace operator on the circuit. So now I suspect that the Dirichlet forms are precisely those nonnegative quadratic forms which are infinitesimal stochastic when expressed in the standard basis. This seems intuitive, as a current is nothing but a bunch of electrons performing a random walk!

• John Baez says:

Tobias wrote:

They sure look a lot like a kinetic term in a quantum field theory, with the resistance playing the role of the squared length of an edge.

Exactly!

So now I suspect that the Dirichlet forms are precisely those nonnegative quadratic forms which are infinitesimal stochastic when expressed in the standard basis.

That’s right! We didn’t say it in our paper because we didn’t think it would help most people. But my book with Jacob has a lot of stuff on Dirichlet operators: that is, infinitesimal stochastic self-adjoint operators. You can turn a Dirichlet form into a Dirichlet operator, or vice versa, using the inner product.
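For what it’s worth, this correspondence can be checked on a small example (a sketch of mine, not code from the book or paper): the matrix $L$ of a Dirichlet form built from resistances is a weighted graph Laplacian, and $H = -L$ is infinitesimal stochastic in the sense that its off-diagonal entries are nonnegative and its columns sum to zero.

```python
# A sketch, assuming a triangle of resistors on nodes 0, 1, 2.

def laplacian(n, edges):
    """edges: (i, j, resistance) triples on nodes 0..n-1; weight 1/r."""
    L = [[0.0] * n for _ in range(n)]
    for i, j, r in edges:
        c = 1.0 / r
        L[i][i] += c; L[j][j] += c
        L[i][j] -= c; L[j][i] -= c
    return L

L = laplacian(3, [(0, 1, 2.0), (1, 2, 3.0), (0, 2, 6.0)])
H = [[-x for x in row] for row in L]
# Infinitesimal stochastic: nonnegative off-diagonal, columns sum to zero.
assert all(H[i][j] >= 0 for i in range(3) for j in range(3) if i != j)
assert all(abs(sum(H[i][j] for i in range(3))) < 1e-12 for j in range(3))
```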

So, another moral about Dirichlet operators, or Dirichlet forms, is that they generalize the beautiful interplay between stochastic and quantum phenomena that you see when you stick an $i$ in the heat equation and get Schrödinger’s equation.

I keep hoping physics will make a lot of progress when we actually understand this well-known but mysterious fact: the nicest Markov processes become quantum processes when you rotate time 90°.

• Tobias Fritz says:

Great, thanks! This discussion has increased the connectivity of my mental network of scientific concepts significantly ;)

• John Baez says:

Great! I always love it when I can compress my knowledge and make room for new facts.

2. Eugene Lerman says:

This is not exactly on topic, but something about the table you have early in your post bothers me. More specifically it’s the first row:

position velocity momentum force

In geometrical mechanics, where do these things live?

Position is a point in a manifold (configuration space), velocity is a tangent vector, momentum is a covector and force is a map from the tangent bundle to the cotangent bundle. This is because forces depend on position and velocity and take values in covectors (for various reasons; for example, forces can be integrated along curves to do work). So a force does not at all look like an element of the tangent bundle of the cotangent bundle of the configuration space. So how could a force be the derivative of momentum?

• John Baez says:

All these analogies are traditionally studied for linear systems, and our paper too is about linear systems. So take your configuration space to be a finite-dimensional vector space. This may seem déclassé to a geometer, but a surprisingly large amount of engineering focuses on this case. That’s why you see this sort of chart in lots of engineering books.

When you get to networks made of systems whose configuration spaces are general manifolds, or whose phase spaces are general symplectic manifolds, or Poisson manifolds, or… whatever… there’s a lot more left to do.

• Eugene Lerman says:

It’s really not surprising that a lot of engineering papers deal with systems that live in ${\mathbb R}^n$ (or open subsets of ${\mathbb R}^n$ or whatever). I know that my comment is a bit off topic. But what bothers me is this:

Is there something deep going on or is this just an artifact of a bunch of simplifying assumptions?

• John Baez says:

A large portion of control theory assumes linearity because you’re trying to keep the system near an equilibrium point, you can linearize near that equilibrium, and if you let the system drift so far from equilibrium that the nonlinearity matters it means you’ve done a bad job and deserve all the suffering you get!

The classic example is balancing a pendulum upside-down by moving a cart back and forth. You assume $\theta$ is small enough that $\sin \theta$ is close to $\theta$:

This is a nice reflection of the basic difference between physics and engineering. In physics you try to understand nature in all its crazy glory. In engineering you try to create systems that behave in ways you can understand.

However, nonlinear control theory is also important, and you could say it’s fundamental to living systems. I just want to understand the mathematics of linear networks a bit before moving on to the nonlinear ones… mainly because there’s basic stuff to understand that hasn’t been figured out yet.

• Eugene Lerman says:

I have no issues with you wanting to understand linear systems or, more generally, systems living on (open) subsets of vector spaces.

My comments were more along the lines of: I don’t understand the geometry of pairings of “flows” and “efforts.” And I would love to know if such pairings exist in a nonlinear setting.

As far as linearising near equilibria goes, linearisation is not very helpful if you are trying to prove stability of an equilibrium of a conservative system.
As you probably know, if an equilibrium of a conservative system is stable, the eigenvalues of the linearised system have to be purely imaginary. However, the converse doesn’t hold: the eigenvalues can be purely imaginary and the system can be unstable, because the nonlinearities take over right away in the case of purely imaginary eigenvalues.

• domenico says:

I think that each system is Hamiltonianizable, and that these Hamiltonians are linear:
$0=F(y,\dot y,\ddot y, \cdots)$
$0=\frac{d}{dt} F(y,\dot y,\ddot y, \cdots)$
$0= \dot y \partial_y F+ y^{(2)} \partial_{y^{(1)}} F+\cdots$
$y^{(n)} = -\frac{ \dot y \partial_y F+ \cdots }{ \partial_{y^{(n-1)}} F } = G( y^{(1)}, \cdots )$
$\dot y_j = y_{j-1}$
$\dot y_1 = y_{0} = y$
$H = p_N G+\sum_j p_j y_{j+1}$
$p_j = -\frac{\partial H}{\partial y_j}$
$\dot y_N = G = y^{(N)}$
so that each dynamics can be written as a projection of a Hamiltonian dynamics.
The funny thing is that classical and quantum mechanics are then on the same footing.

• John Baez says:

Eugene wrote:

As far as linearising near equilibria goes, linearisation is not very helpful if you are trying to prove stability of an equilibrium of a conservative system.

You probably know this, but in control theory you aren’t seeking to prove equilibria of Hamiltonian systems are stable; you’re trying to stabilize equilibria that are unstable by measuring the system and actively applying forces to it that depend on your measurements.

Again, a great example is the upside-down pendulum on the cart:

It’s obviously unstable, but you can still keep it standing by measuring $\theta$ and applying a force $\vec F$ that depends on $\theta$ in a suitable way… and you can make this dependence linear, so it’s a purely linear problem. The approximation $\sin \theta \approx \theta$ is being used to achieve this linearity, but the approximation is okay, since deviations from it get corrected by your active stabilization!
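To make this concrete, here is a toy simulation (my own sketch; the gains and the simplified model $\ddot\theta = (g/\ell)\theta + u$ are illustrative, not the full cart-pendulum dynamics): linear feedback $u = -k_p\theta - k_d\dot\theta$ stabilizes the linearized upside-down pendulum whenever $k_p > g/\ell$ and $k_d > 0$, since the closed loop is then a damped oscillator.

```python
# A sketch: stabilize theta'' = (g/l)*theta + u with linear feedback.
g_over_l = 9.8
kp, kd = 20.0, 4.0   # hypothetical gains; any kp > g/l and kd > 0 work

theta, omega, dt = 0.1, 0.0, 0.001   # small initial tilt, 1 ms time steps
for _ in range(10000):               # simulate 10 seconds
    u = -kp * theta - kd * omega     # the control force, linear in the state
    omega += (g_over_l * theta + u) * dt
    theta += omega * dt

print(abs(theta) < 1e-3)  # True: the pendulum has settled upright
```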

Again, I’m not trying to say nonlinear issues are uninteresting. They’re incredibly interesting: the linear theory is just the tip of the iceberg. However, there are big fat textbooks on control theory that consider nothing but linear systems—and they’re both very practical, and mathematically interesting, and ripe for a category-theoretic treatment. So that’s what Jason is working on.

By the way, anyone who wants to know more can click on the picture. For even more, I really recommend this book:

• Bernard Friedland, Control System Design: An Introduction to State-Space Methods, Courier Dover Publications, 2012.

3. Eugene Lerman says:

You probably know this, but in control theory one isn’t seeking to prove equilibria are stable; one is trying to stabilize equilibria that are unstable by measuring the system and applying forces to it that depend on what you measure.

That’s certainly one very common approach. There is also a passivity-based approach. See this book, for example. Or these slides.

• John Baez says:

Thanks, I’ll check those out. I don’t know about it, but I like the sound of “passive control”. It sounds very wu wei.

• Eugene Lerman says:

Port-Hamiltonian systems are all supposed to be about passive control…

• John Baez says:

The slides contain this quote:

As engineers, we are not really concerned in knowing what Nature does, but rather in forcing Nature to do what we want.

That’s sort of what I was trying to say… but I think “forcing” should be replaced by something a bit more gentle and cooperative.

4. Last time I talked about a new paper I wrote with Brendan Fong. Our paper uses a formalism that Brendan developed here:

• Brendan Fong, Decorated cospans.

5. linasv says:

OK, so I understood everything right up to the point of symplectification. Intuitively, I see what you are trying to do, and why, but I cannot quite get back to formulas I can manipulate. So, for example, consider a tank: an LC circuit, an inductor and capacitor in parallel. We know the impedance is a combination of $i\omega L$ and $1 / i\omega C$. For terminals, we can have one end which is hooked to ground, for both input and output, and another hooked to an antenna as input that drives it, and again as output from which I siphon energy. A more complex example might be a transmission line or a Butterworth filter. We know that, as a black box, such circuits are described by transfer functions.

OK, so transfer functions are traditionally written in the frequency domain, whereas the symplectic vector space is in the “time domain”. I guess that means that $(\psi , dQ_\psi )$ is the symplectic form, and that I should be calling it the transfer matrix. You are saying that it is the symplectification of a function $f:A \to B$ between finite sets, but I just cannot visualize what $f$ is supposed to be for a tank circuit.

I’m also confused by (because I can’t recall them off the top of my head, and am too lazy to crack open a standard textbook) how to get from a time-domain transfer matrix to the frequency-domain function with its poles and zeros, but maybe never mind about that.

The reason I want to see the worked example(s) is less that I am interested in electronics, but rather, because of two or three other directions:

— the infinite-dimensional transfer matrices are transfer operators, and they can be used to describe the transition to chaos, or at least a transition to ergodic behavior.

— The role of entropy in this transition. So David Ruelle has made a life-work of the transfer operator, but I cannot recall seeing a symplectic conjugate temperature anywhere in the introduction-to-chaos texts I’ve read. Was I not paying attention? Or is it because the symplectic conjugate is not a temperature, but a map from tangent to cotangent space, as Eugene points out? Is it the Kullback-Leibler divergence? How? Huh??

— A better understanding of “information geometry”. There’s a lot of talk in (for example) ecology about non-equilibrium thermodynamics, and various appeals to minimum/maximum entropy principles.

So, for example: is the following described by a “passive linear network”? Consider the diurnal solar heating of a body of water, and the evaporation from it. I think it’s more or less linear, and I think it’s passive (the time-varying heating of the sun being like a time-varying voltage applied to a passive circuit) but the symplectic nature of this system is mysterious to me. This is one possible example of “non-equilibrium thermodynamics”; does the symplectic-Lagrangian least-action principle apply, and how, exactly? Details? Is it really the “principle of maximum entropy generation” in disguise?

Apologies in advance for the long digressive post.

• John Baez says:

Thanks for the comment! There’s a lot to talk about here, but for starters:

so transfer functions are traditionally written in the frequency domain, whereas the symplectic vector space is in the “time domain”.

No, it’s in the frequency domain. As I all-too-briefly noted, this is why we replace resistances by ‘impedances’, which can be functions of the frequency variable $s:$

The Laplace transform allows us to generalize [everything] immediately to circuits that can also contain linear inductors and capacitors, simply by changing the field we work over, replacing $\mathbb{R}$ by the field $\mathbb{R}(s)$ of rational functions of a single real variable, and talking of impedance where we previously talked of resistance.

Our paper explains it in more detail:

Although inductors and capacitors impose a linear relationship if we involve the derivatives of current and voltage, to mimic the above work on resistors we wish to have a constant of proportionality between functions representing the current and voltage themselves. Various integral transforms perform just this role; electrical engineers typically use the Laplace transform. This lets us write a function of time $t$ instead as a function of frequencies $s,$ and in doing so turns differentiation with respect to $t$ into multiplication by $s,$ and integration with respect to $t$ into division by $s.$

In detail, given a function

$f: [0, \infty) \to \mathbb{R}$

we define the Laplace transform of $f$ by

$\mathfrak{L}\{f\}(s) = \int_{0}^\infty f(t) e^{-st} dt$

We also use the notation $\mathfrak{L}\{f\}(s) = F(s),$ denoting the Laplace transform of a function in upper case, and refer to the Laplace transforms as lying in the frequency domain or $s$-domain. For us, the three crucial properties of the Laplace transform are then:

• linearity: $\mathfrak{L}\{af+bg\}(s) = aF(s)+bG(s)$ for $a,b\in \mathbb{R}$;

• differentiation: $\mathfrak{L}\{\dot{f}\}(s) = s F(s) - f(0)$;

• integration: if $g(t) = \int_0^t f(\tau)d\tau$ then

$\displaystyle{ G(s) = \frac{1}{s} F(s) }$
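(These three properties are easy to spot-check with a computer algebra system. Here’s a minimal sketch using sympy; the sample functions $e^{-2t}$, $e^{-3t}$ and the constants are arbitrary illustrative choices.)

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)
f = sp.exp(-2*t)   # sample function, with f(0) = 1

F = sp.laplace_transform(f, t, s, noconds=True)   # F(s) = 1/(s + 2)

# linearity, with numeric constants a = 3, b = 5 and g(t) = exp(-3t):
g2 = sp.exp(-3*t)
G2 = sp.laplace_transform(g2, t, s, noconds=True)
lin = sp.laplace_transform(3*f + 5*g2, t, s, noconds=True)
assert sp.simplify(lin - (3*F + 5*G2)) == 0

# differentiation: L{f'}(s) = s F(s) - f(0)
dF = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)
assert sp.simplify(dF - (s*F - f.subs(t, 0))) == 0

# integration: if g(t) = int_0^t f(tau) dtau, then G(s) = F(s)/s
g = sp.integrate(f.subs(t, tau), (tau, 0, t))
assert sp.simplify(sp.laplace_transform(g, t, s, noconds=True) - F/s) == 0
```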

Writing $V(s)$ and $I(s)$ for the Laplace transform of the voltage $v(t)$ and current $i(t)$ across a component respectively, and recalling that by assumption $v(t) = i(t) = 0$ for $t < 0,$ the $s$-domain behaviors of components become, for a resistor of resistance $R$:

$V(s) = RI(s)$

for an inductor of inductance $L$:

$V(s) = sLI(s)$

and for a capacitor of capacitance $C$:

$\displaystyle{ V(s) = \frac1{sC} I(s) }$

Note that for each component the voltage equals the current times a rational function of the real variable $s,$ called the ‘impedance’ and in general denoted by $Z.$ Note also that the impedance is a ‘positive real function’, meaning that it lies in the set

$\mathbb{R}(s)^+ = \{ Z \in \mathbb{R}(s) : \forall s \in \mathbb{C} \;\; \mathrm{Re}(s) > 0 \implies \mathrm{Re}(Z(s)) > 0 \}$

While $Z$ is a quotient of polynomials with real coefficients, in this definition we are applying it to complex values of $s,$ and demanding that its real part be positive on the open right half-plane. Positive real functions were introduced by Otto Brune in 1931, and they play a basic role in circuit theory.

Indeed, Brune convincingly argued that for any conceivable passive linear component we have this generalization of Ohm’s law:

$V(s)=Z(s)I(s)$

where $I \in \mathbb{R}(s)$ is the current, $V \in \mathbb{R}(s)$ is the voltage and $Z \in \mathbb{R}(s)^+$ is the impedance of the component. As we shall see, generalizing from circuits of linear resistors to arbitrary passive linear circuits is just a matter of formally replacing resistances by impedances. This amounts to replacing the field $\mathbb{R}$ by the larger field $\mathbb{R}(s),$ and replacing the set of positive reals, $\mathbb{R}^+ = (0,\infty),$ by the set of positive real functions, $\mathbb{R}(s)^+.$ From a mathematical perspective we might as well work with any field with a mildly well-behaved notion of ‘positive element’, and we do this in the next section.
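(As a sanity check, one can numerically verify the positive-real property for the three component impedances above by sampling points in the open right half-plane. This is just a spot-check, not a proof, and the component values below are arbitrary.)

```python
import numpy as np

# sample 1000 points s with Re(s) > 0
rng = np.random.default_rng(0)
s = rng.uniform(0.1, 10.0, 1000) + 1j * rng.uniform(-10.0, 10.0, 1000)

R, L, C = 50.0, 1e-3, 1e-6   # illustrative component values

for Z in (lambda s: R + 0*s,    # resistor:  Z(s) = R
          lambda s: s*L,        # inductor:  Z(s) = sL
          lambda s: 1/(s*C)):   # capacitor: Z(s) = 1/(sC)
    assert np.all(Z(s).real > 0)   # Re(Z(s)) > 0 on the right half-plane
```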

• John Baez says:

Linus Vepstas wrote:

Intuitively, I see what you are trying to do, and why, but I cannot quite get back to formulas I can manipulate.

Instead of tackling the very specific examples you mentioned, it’ll be easier for me to do a general circuit with just one input and one output. This includes some of the examples you mentioned.

Suppose the circuit has impedance $Z.$ We’re using the Laplace transform to work in the frequency domain, as explained in my last comment. So, $Z$ and every quantity I mention will be some function of the Laplace transform variable $s.$ This means they’re all elements of some field $\mathbb{F}$: some field of functions of $s.$ The detailed nature of this field doesn’t matter much.

Saying that the impedance is $Z$ means that

$V = Z I$

where $V$ is the voltage across the circuit and $I$ is the current through the circuit. This is a glorified version of Ohm’s law. I think you’re asking: how do we see this relation as a Lagrangian relation between symplectic vector spaces?

First, what are the symplectic vector spaces?

They are both $\mathbb{F}^2.$ The input wire has a potential $\phi_1$ and current $I_1$, and these are ‘canonically conjugate variables’ just like the position and momentum of a particle, so we want to think of the pair

$(\phi_1, I_1) \in \mathbb{F}^2$

as a point in a symplectic vector space. The symplectic structure is the usual 2-form

$\omega : \mathbb{F}^2 \times \mathbb{F}^2 \to \mathbb{F}$

we see in classical mechanics, but with potential and current replacing position and momentum:

$\omega((\phi, I), (\phi', I')) = \phi I' - \phi' I$

The output wire has potential $\phi_2$ and current $I_2$, and these again live in the symplectic vector space $\mathbb{F}^2,$ with the same symplectic structure.

Second, what is the Lagrangian relation?

It’s just the linear relationship that holds between the input and output potentials and currents:

$I_1 = I_2$

$\phi_2 - \phi_1 = Z I_1$

The first equation says that the current flowing in equals the current flowing out; the second one is the glorified version of Ohm’s law, using the fact that the voltage across our circuit is

$V = \phi_2 - \phi_1$

and the current through it is

$I = I_1 = I_2.$

These two linear equations

$I_1 = I_2$

$\phi_2 - \phi_1 = Z I_1$

pick out a 2-dimensional linear subspace of the direct sum of our symplectic vector spaces, $\mathbb{F}^2 \oplus \mathbb{F}^2.$ And, in fact, this is a Lagrangian relation.
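Here’s a sketch, using sympy, of how one might check this claim symbolically. By the usual convention, a linear relation is Lagrangian when the subspace it picks out is isotropic of half dimension with respect to the difference of the two symplectic forms; the variable names below are illustrative.

```python
import sympy as sp

Z, p1, i1, q1, j1 = sp.symbols('Z phi1 I1 psi1 J1')

def point(phi, I):
    # a general point (phi_1, I_1, phi_2, I_2) on the relation:
    # I_2 = I_1 and phi_2 = phi_1 + Z*I_1
    return (phi, I, phi + Z*I, I)

def omega(a, b):
    # omega_2 - omega_1 on F^2 (+) F^2, where on each summand
    # omega((phi, I), (phi', I')) = phi*I' - phi'*I
    return (a[2]*b[3] - b[2]*a[3]) - (a[0]*b[1] - b[0]*a[1])

v, w = point(p1, i1), point(q1, j1)

# the relation is a 2-dimensional isotropic subspace of the 4-dimensional
# space F^2 (+) F^2, hence Lagrangian
assert sp.simplify(omega(v, w)) == 0
```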

• John Baez says:

Linus wrote:

So, for example: is the following described by a “passive linear network”? Consider the diurnal solar heating of a body of water, and the evaporation from it? I think it’s more or less linear, and I think it’s passive (the time-varying heating of the sun being like a time-varying voltage applied to a passive circuit) but the symplectic nature of this system is mysterious to me.

Suppose you can approximately model this system with a network of linear resistors, inductors and capacitors, except for the incoming sunlight, which you can model as an external time-dependent current source or voltage source. Then our formalism will apply.

Your model will consist of a graph where the edges are labelled by impedances, with some input and output nodes. Each end of each edge will have an associated ‘potential’ and ‘current’. The list of all potentials and currents will be an element of a symplectic vector space $\mathbb{F}^{2 n}.$

There are lots of systems for which this kind of model can be made. Henry Paynter, the inventor of bond graphs, drew this picture of one:

Our paper doesn’t mention bond graphs at all, so let’s not talk about those now—there’s too much to talk about already. Click the picture if you want more on bond graphs. My point is just that systems that fit into our formalism are very common.

This is one possible example of “non-equilibrium thermodynamics”; does the symplectic-Lagrangian least-action principle apply, and how, exactly? Details? Is it really the “principle of maximum entropy generation” in disguise?

Our paper explains the ‘principle of minimum power’ in quite a bit of detail for circuits made of resistors. This is actually a principle of minimum entropy generation, since the ‘power’ of such a circuit is actually the power dissipated, that is, the rate at which useful energy is turned into heat.

When we introduce inductors and capacitors we show that a formal analogue of the principle of minimum power still applies! However, now the power is not a real number: it’s an element of the field $\mathbb{F}$ of rational functions of one real variable, the Laplace transform variable $s.$

I have not yet succeeded in finding a regime in which the ‘principle of maximum entropy generation’ applies! Sometime I’ll put up a link to a video of a talk by Roderick Dewar, a proponent of the principle of maximum entropy generation.

He essentially admitted that his attempts to derive such a principle have not yet succeeded.

• For ENSO, the analogous LC circuit is described on the Azimuth Forum here:
http://forum.azimuthproject.org/discussion/comment/14572/#Comment_14572

And it actually works!

• Graham Jones says:

On maximum entropy production. I guess that an amplifier with positive feedback does maximum entropy production. It chooses the frequency at which it can make the biggest noise it is capable of. But I also guess that ‘passive’ rules out feedback and ‘linear’ rules out amplification.

6. linasv says:

And one last question: what is the equivalent of the canonical 1-form (Liouville form) for a passive network? What is its interpretation, in electrical terms? What about the solder form?

• John Baez says:

The important form on a symplectic vector space is the symplectic 2-form, and above I explained what that was in our setup—at least in an example.

7. In work related to decorated cospans (such as our paper on circuits or John and Jason’s work on signal flow diagrams), our semantics is usually constructed from a field of values—not a physicist’s ‘field’, but an algebraist’s sort of ‘field’, where you can add, multiply, subtract and divide.

8. I have been (very) slowly working through this paper. It is lots of fun. I am just a working engineer who reads things like this for the entertainment value.

A couple of questions. I didn’t see you excluding shorts from the resistor networks. Was resistance not equal to zero a condition I missed? I kept seeing reference to non-negative values for r’s. Did I miss something?

I’m only on section 2. What I have read so far on Equivalent Circuits calls to mind the Thevenin/Norton theorems. I did some scratching around on paper to see if there was a similar ‘canonical’ innards to any black-box resistor network. Is it correct that you could take any resistor network with $n$ inputs and $m$ outputs and replace it by a canonical equivalent network in which each input $i$ is connected to each output $j$ by exactly one resistor $R_{ij}$? I recall reading somewhere that such a setup could be thought of as an old-fashioned analog computer. Pull all the outputs to ground. Then the input voltages $v_1, \dots, v_n$ are mapped to $i_1, \dots, i_m$ via matrix multiplication (a vector form of Ohm’s law). I think that is right. I know I read about this at some point but searching for a reference on the web I came up empty.

Anyway, thanks for putting this together. It is very interesting.

• John Baez says:

Rob Macdonald wrote:

I have been (very) slowly working through this paper. It is lots of fun.

Great!

I am just a working engineer who reads things like this for the entertainment value.

Don’t say “just”.

A couple of questions. I didn’t see you excluding shorts from the resistor networks. Was resistance not equal to zero a condition I missed?

We don’t allow zero resistance, and this plays a huge role in our work. On the one hand a resistor of resistance zero would suck up infinite power if the voltage across it were nonzero, and this would destroy our use of the ‘power functional’. (Mathematically, we’d be dividing by zero.) On the other hand we want something like a perfectly conductive wire to serve as an identity morphism in our category. We cleverly navigate between Scylla and Charybdis by allowing a circuit where the input terminal is the output terminal, and there’s no wire and no resistor.

We begin seriously discussing this in Section 3.5, where we write:

Dirichlet forms with large values of $k$—corresponding to resistors with resistance close to zero—act as approximate identities.

In this way we might interpret the identities we wish to introduce into this category as the behaviors of idealized components with zero resistance: perfectly conductive wires. Unfortunately, the power functional of a purely conductive wire is undefined: the formula for it involves division by zero. In real life, coming close to this situation leads to the disaster that electricians call a ‘short circuit’: a huge amount of power dissipated for even a small voltage. This is why we have fuses and circuit breakers.

Nonetheless, we have most of the structure required for a category. A category ‘without identity morphisms’ is called a semicategory, so we see:

There is a semicategory where:

• the objects are finite sets,

• a morphism from $T$ to $S$ is a Dirichlet form $Q \in D(S,T)$,

• composition of morphisms is given by the principle of minimum power:

$(R \circ Q)(\gamma, \alpha) = \min_{\beta \in \mathbb{R}^T} \left( Q(\gamma, \beta) + R(\beta, \alpha) \right)$

We would like to make this into a category. One easy way to do this is to formally adjoin identity morphisms; this trick works for any semicategory. However, we obtain a better category if we include more morphisms: more behaviors corresponding to circuits made of perfectly conductive wires. As the expression for the extended power functional includes the reciprocals of impedances, such circuits cannot be expressed within the framework we have developed thus far. Indeed, for these idealized circuits there is no function taking boundary potentials to boundary currents: the vanishing impedance would imply that any difference in potentials at the boundary induces ‘infinite’ currents. To deal with this issue, we generalize Dirichlet forms to Lagrangian relations. First, however, we develop a category-theoretic framework, based around decorated cospans, to define the category of circuits itself and understand its basic properties.

These remarks set the tone for the next section.
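(As an aside: the composition rule quoted above can be checked symbolically in the simplest case. Composing two one-resistor Dirichlet forms by minimizing the total power over the internal potential $\beta$ should recover the series-conductance law $c_1 c_2/(c_1 + c_2)$. A sketch with sympy, with all names illustrative:)

```python
import sympy as sp

c1, c2 = sp.symbols('c1 c2', positive=True)                      # conductances
alpha, beta, gamma = sp.symbols('alpha beta gamma', real=True)   # potentials

# Q(gamma, beta) + R(beta, alpha): power of two resistors sharing node beta
total = c1*(gamma - beta)**2 + c2*(beta - alpha)**2

# minimize over the internal potential beta (interior critical point)
beta_min = sp.solve(sp.diff(total, beta), beta)[0]
composed = sp.simplify(total.subs(beta, beta_min))

# the result is the series-conductance Dirichlet form
series = c1*c2/(c1 + c2) * (gamma - alpha)**2
assert sp.simplify(composed - series) == 0
```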

I kept seeing reference to non-negative values for r’s. Did I miss something?

Maybe there are some typos where we say “non-negative”, but we should say “positive”. If you can point them out, I can see if they need to be fixed!

More later.

• John Baez says:

I kept seeing reference to non-negative values for r’s.

Where? I haven’t been able to find those. We talk a lot about nonnegative quadratic forms, but that’s just because the power used by a circuit is $\ge 0$, and it can be zero.

In the Introduction we define a circuit of resistors to consist of

• a set $N$ of nodes,
• a set $E$ of edges,
• maps $s,t : E \to N$ sending each edge to its source and target node,
• a map $r: E \to (0,\infty)$ specifying the resistance of the resistor labelling each edge,
• maps $i: X \to N$, $o:Y \to N$ specifying the inputs and outputs of the circuit.

So here the resistances have to be positive!

We study circuits of resistors using Dirichlet forms, and we say

Definition. Given a finite set $S$, a Dirichlet form on $S$ is a quadratic form $Q: \mathbb{R}^S \to \mathbb{R}$ given by the formula

$Q(\psi) = \displaystyle{ \sum_{i,j} c_{i j} (\psi_i - \psi_j)^2 }$

for some nonnegative real numbers $c_{i j}$.

However, these numbers are more closely related to conductivities than resistances. After all, the power used by a circuit is the Dirichlet form

$P(\phi) = \displaystyle{ \frac{1}{2} \sum_{e \in E} \frac{1}{r(e)}\big(\phi(t(e))-\phi(s(e))\big)^2 }$

where $\phi$ is the potential and $r(e)$ is the resistance of a wire (or ‘edge’) going from the node $s(e)$ to the node $t(e)$. You’ll notice that we’re taking the reciprocal of the resistance here, to get the conductivity. So, the resistance can’t be zero.
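For concreteness, here is a minimal numerical sketch of this power functional for a small resistor network; the triangle of resistors and all the values below are arbitrary illustrative choices.

```python
def power(edges, r, phi):
    """P(phi) = (1/2) * sum over edges e of (phi(t(e)) - phi(s(e)))^2 / r(e)."""
    return 0.5 * sum((phi[t] - phi[s])**2 / r[e]
                     for e, (s, t) in enumerate(edges))

# a triangle of resistors on nodes 0, 1, 2
edges = [(0, 1), (1, 2), (0, 2)]
r = {0: 1.0, 1: 2.0, 2: 4.0}          # resistances, all strictly positive
phi = {0: 0.0, 1: 1.0, 2: 3.0}        # node potentials

# P = (1/2) * (1/1 + 4/2 + 9/4) = 2.625
assert abs(power(edges, r, phi) - 2.625) < 1e-12
```

Note that the formula divides by each resistance, which is exactly why zero resistance is disallowed.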

• Thanks, that clears it up. I was confusing the $c_{i j}$’s with the $r(e)$’s. But I understand now. Appreciate it!

• John Baez says:

Rob wrote:

Is it correct that you could take any resistor network with inputs $n$ and outputs $m$ and replace it by a canonical equivalent network in which each input $i$ was connected to each output $j$ by exactly one resistor $r_{ij}$?

Almost!

1) You also need to allow wires going between inputs, and wires going between outputs.

2) You also need to allow ‘wires of infinite resistance’—which we treat simply as ‘missing’ wires.

3) We also allow something like ‘wires of zero resistance’. But not really! To be precise: we allow an input to also be an output, or allow two inputs to be the same node, or allow two outputs to be the same node. (See my previous discussion of zero resistance, to see why we do this.)

If we disallow case 3), then every circuit of resistors is equivalent to one where there is at most one wire with a resistor on it going from any input or output to any other input or output.

This is why we use ‘Dirichlet forms’ as power functionals and write a general Dirichlet form as

$Q(\psi) = \displaystyle{ \sum_{i,j} c_{i j} (\psi_i - \psi_j)^2 }$

You can think of this as the power emitted by a circuit with a single wire of conductance $c_{i j}$ going from node $i$ to node $j$ if $c_{i j} > 0$… or no wire going from $i$ to $j$ if $c_{i j} = 0.$

9. Earlier, Brendan and I introduced a way to ‘black box’ a circuit and define the relation it determines between potential-current pairs at the input and output terminals. This relation describes the circuit’s external behavior as seen by an observer who can only perform measurements at the terminals.

An important fact is that black boxing is ‘compositional’: if one builds a circuit from smaller pieces, the external behavior of the whole circuit can be determined from the external behaviors of the pieces. For category theorists, this means that black boxing is a functor!

Our new paper with Blake develops a similar ‘black box functor’ for detailed balanced Markov processes, and relates it to the earlier one for circuits.

10. Rob says:

I have been slowly looking at this paper. Good thing I don’t get paid to understand its contents or I would be homeless! Anyhow, I was trying to understand the relationship between corelations and pushouts and I found myself wondering, is there a standard algorithm for computing a pushout in finite set?

I asked the question here on stackexchange. I’m not sure the answer isn’t obvious, but I’m also not sure it’s not not obvious, if you get me.

http://math.stackexchange.com/questions/1422056/is-there-an-algorithm-for-computing-pushouts-in-sf-finset

If anyone reading this paper has any interest in this question you might have a look there. Some of the comments indicated that it was trivial (like asking for an algorithm to compute the sum of two objects in finite set). If it is obvious, I should be able to program a computer to do it, which, given the time I had to devote to it, I couldn’t.

• John Baez says:

I hope you’re enjoying the lengthy delights of this paper.

It’s quite easy to work out a pushout in the category of finite sets. Say you have two functions $f: X \to Y$, $g : X \to Z$ and you want to take their pushout. First you form the disjoint union $Y + Z$. Then you identify each element $f(x) \in Y$ with the corresponding element $g(x) \in Z$.

In general, in any category, a pushout can be formed by taking first a coproduct and then a coequalizer. That’s what I’m doing above.
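If it helps, here is a sketch of that coproduct-then-coequalizer recipe as a small program: form the disjoint union $Y + Z$ and glue $f(x)$ to $g(x)$ with a union-find. Everything here (names, example data) is illustrative.

```python
def pushout(X, Y, Z, f, g):
    """Pushout of f: X -> Y, g: X -> Z in FinSet, returned as a set of blocks.
    f and g are dicts; elements of the disjoint union Y + Z are tagged pairs."""
    parent = {('Y', y): ('Y', y) for y in Y}
    parent.update({('Z', z): ('Z', z) for z in Z})

    def find(a):                      # union-find without path compression
        while parent[a] != a:
            a = parent[a]
        return a

    for x in X:                       # coequalize: glue f(x) ~ g(x)
        parent[find(('Y', f[x]))] = find(('Z', g[x]))

    blocks = {}
    for a in parent:
        blocks.setdefault(find(a), set()).add(a)
    return {frozenset(b) for b in blocks.values()}

# an example where everything gets identified, so the pushout is a single point:
X, Y, Z = ['x1', 'x2', 'x3', 'x4'], ['y1', 'y2'], ['z1', 'z2', 'z3']
f = {'x1': 'y1', 'x2': 'y1', 'x3': 'y2', 'x4': 'y2'}
g = {'x1': 'z1', 'x2': 'z2', 'x3': 'z2', 'x4': 'z3'}
assert len(pushout(X, Y, Z, f, g)) == 1
```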

• rob macdonald says:

I think Odin stuck a knife in his eye so that he could see. That seems extreme to me. And my deductible is so high, I would have to okay it with my accountant first. So I will just ask…

I thought the idea of a pushout was “the finest equivalence…” Here is an example: say X = {x1,x2}, Y = {y1, y2}, Z = {z1}. Say f:X->Y is defined by (x1,y1) “x1 maps to y1”, (x2, y2), and g:X->Z is the only map of sets from X to Z. Then the pushout P is a one point set, with the maps from Y and Z to it the only maps… Correct? So even though x1 and x2 are mapped to different things in Y, they are pushed to the same element in P.

Here is another (contrived) example. say X has 4 elements, Y has 2 and Z has 3, labeled in the obvious ways, {x1,…,x4}, etc

define the map f as (x1, y1), (x2, y1), (x3, y2), (x4, y2)

define the map g as (x1, z1), (x2, z2), (x3, z2), (x4, z3)

I gather that the pushout P has 1 element in it. Correct? Or I need to stick a hot poker in my eye, because I don’t understand what a pushout is. Nothing dies on the internet, so for all time my descendants can read this and suffer the shame of knowing their great great great grandfather didn’t know how to compute a pushout in finite set. Is that obvious? As obvious as, say, the sum of two objects? Is it obvious that z3 and y1 get mapped to the same element of P even though they are not in the pre-image of the same element of X? Or do I just not understand what a pushout is?

I tried to write a program to compute the pushout of two maps in finset so I could understand this paper. I think I have it. I wrote it in R. (Parenthetically, that is how I ended up here: Engineer needs to tally all the reasons the light bulbs broke, uses R, finds out R is a “functional programming language”, doesn’t know there was such a thing as a non-functional programming language, reads about Haskell, ends up on the n-cafe on his lunch hour, when he should be filling out Return Material Authorizations for said broken light bulbs, reads about how some physicists are studying first year electrical engineering because it will unlock the secrets of the universe… Now is trying to understand category theory. The internet is a beautiful thing.)

When I first set out to write this program I assumed that it was all old hat. That there was an existing set of database operations (like Union, Join, etc…) which would encompass the idea. Maybe there is, and I just haven’t gotten it yet. The closest functional relative I found was “group-by”, which is a sort of primitive for “equivalence” in the world of data-tables. I still “feel” like there “should” be an existing operation on data that is the “push-out” or “coequalizer”.

Pushout seems like something we would use on data as a sort of “group by across multiple traits”, but I haven’t found the construction coded, thus the question. (Again, forgive me if I just don’t understand the basic idea.)

Here is an artificial example I was thinking of. Let’s say P is a set of people. Let’s say that T is a set of mutations of the (mitochondrial) genes, and similarly for S. There are maps f:P->T, where you are mapped to the particular mutation you have, and g:P->S, similarly. Then there are some interesting things to compute. There is a map m:P->P, “mother”. There are also natural sections from T->P and S->P, where we pick out the first person to have the particular mutation. There is also a series of pushouts associated with g and f, starting with P, then m(P), m(m(P))…

Let’s say the pushouts are Q_i… Then isn’t it interesting if Q_{i+1} < Q_i? It would mean that you had a genetically isolated population, by those two genes.

This seems artificial. (We don’t know who everyone’s mother is, ad nauseam.)

11. Of course circuits made solely of conductive wires are not very exciting for electrical engineers. But in an earlier paper, Brendan introduced corelations as an important stepping-stone toward more general circuits:

• John Baez and Brendan Fong, A compositional framework for passive linear networks. (Blog article here.)

The key point is simply that you use conductive wires to connect resistors, inductors, capacitors, batteries and the like and build interesting circuits—so if you don’t fully understand the math of conductive wires, you’re limited in your ability to understand circuits in general!

12. I’ve got seven grad students working on this project—or actually eight, if you count Brendan Fong: I’ve been helping him on his dissertation, but he’s actually a student at Oxford.

Brendan was the first to join the project. I wanted him to work on electrical circuits, which are a nice familiar kind of network, a good starting point. But he went much deeper: he developed a general category­-theoretic framework for studying networks. We then applied it to electrical circuits, and other things as well.

14. At some point I gave Brendan Fong a project: describe the category whose morphisms are electrical circuits. He took up the challenge much more ambitiously than I’d ever expected, developing powerful general frameworks to solve not only this problem but also many others. He did this in a number of papers, most of which I’ve already discussed […]

15. People who have read Fong’s thesis, or his paper with me on electric circuits:

• John Baez and Brendan Fong, A compositional framework for passive linear networks. (Blog article here.)

or my paper with Blake Pollard on reaction networks:

• John Baez and Blake Pollard, A compositional framework for reaction networks.

will find many of Darbo’s ideas eerily similar.

16. Two students in the Applied Category Theory 2018 school wrote a blog article about Brendan Fong’s theory of decorated cospans:

• Jonathan Lorand and Fabrizio Genovese, Hypergraph categories of cospans, The n-Category Café, 28 February 2018.

What’s especially interesting to me is that both Jonathan and Fabrizio know some mathematical physics, and they’re part of a group who will be working with me on some problems as part of the Applied Category Theory 2018 school! Brendan and Blake Pollard and I used symplectic geometry and decorated cospans to study the black-boxing of electrical circuits and Markov processes… maybe we should try to go further with that project!

17. In this paper, we study the props for various kinds of electrical circuits:

• John Baez, Brandon Coya and Franciscus Rebro, Props in network theory.

We illustrate the usefulness of props by giving a new, shorter proof of the ‘black-boxing theorem’ proved here:

• John Baez and Brendan Fong, A compositional framework for passive linear networks. (Blog article here.)

18. John Baez says:

Yay! Our paper was finally published today!

• John Baez and Brendan Fong, A compositional framework for passive linear networks, Theory and Applications of Categories 33 (2018), 1158–1222.

The referee took a long time, but they caught some serious errors and also demanded a major restructuring of the paper. We rewrote it using Brendan’s ideas on decorated corelations, and it’s become a much deeper paper, but also shorter.

Brendan is coming by this Sunday—Thanksgiving weekend—and we’ll celebrate the end of a project that started in 2013.

• Frank Gammon says:

Awesome! Congratulations.
