Here’s a new paper on network theory:
• John Baez and Brendan Fong, A compositional framework for passive linear networks.
While my paper with Jason Erbele studies signal flow diagrams, this one focuses on circuit diagrams. The two are different, but closely related.
I’ll explain their relation at the Turin workshop in May. For now, let me just talk about this paper with Brendan. There’s a lot in here, but let me just try to explain the main result. It’s all about ‘black boxing’: hiding the details of a circuit and only remembering its behavior as seen from outside.
The idea
In the late 1940s, just as Feynman was developing his diagrams for processes in particle physics, Eilenberg and Mac Lane initiated their work on category theory. Over the subsequent decades, and especially in the work of Joyal and Street in the 1980s, it became clear that these developments were profoundly linked: monoidal categories have a precise graphical representation in terms of string diagrams, and conversely monoidal categories provide an algebraic foundation for the intuitions behind Feynman diagrams. The key insight is the use of categories where morphisms describe physical processes, rather than structure-preserving maps between mathematical objects.
In work on fundamental physics, the cutting edge has moved from categories to higher categories. But the same techniques have filtered into more immediate applications, particularly in computation and quantum computation. Our paper is part of a new program of applying string diagrams to engineering, with the aim of giving diverse diagram languages a unified foundation based on category theory.
Indeed, even before physicists began using Feynman diagrams, various branches of engineering were using diagrams that in retrospect are closely related. Foremost among these are the ubiquitous electrical circuit diagrams. Although less well-known, similar diagrams are used to describe networks consisting of mechanical, hydraulic, thermodynamic and chemical systems. Further work, pioneered in particular by Forrester and Odum, applies similar diagrammatic methods to biology, ecology, and economics.
As discussed in detail by Olsen, Paynter and others, there are mathematically precise analogies between these different systems. In each case, the system’s state is described by variables that come in pairs, with one variable in each pair playing the role of ‘displacement’ and the other playing the role of ‘momentum’. In engineering, the time derivatives of these variables are sometimes called ‘flow’ and ‘effort’.
| | displacement: $q$ | flow: $\dot{q}$ | momentum: $p$ | effort: $\dot{p}$ |
|---|---|---|---|---|
| Mechanics: translation | position | velocity | momentum | force |
| Mechanics: rotation | angle | angular velocity | angular momentum | torque |
| Electronics | charge | current | flux linkage | voltage |
| Hydraulics | volume | flow | pressure momentum | pressure |
| Thermal Physics | entropy | entropy flow | temperature momentum | temperature |
| Chemistry | moles | molar flow | chemical momentum | chemical potential |
In classical mechanics, this pairing of variables is well understood using symplectic geometry. Thus, any mathematical formulation of the diagrams used to describe networks in engineering needs to take symplectic geometry as well as category theory into account.
While diagrams of networks have been independently introduced in many disciplines, we do not expect formalizing these diagrams to immediately help the practitioners of these disciplines. At first the flow of information will mainly go in the other direction: by translating ideas from these disciplines into the language of modern mathematics, we can provide mathematicians with food for thought and interesting new problems to solve. We hope that in the long run mathematicians can return the favor by bringing new insights to the table.
Although we keep the broad applicability of network diagrams in the back of our minds, our paper talks in terms of electrical circuits, for the sake of familiarity. We also consider a somewhat limited class of circuits. We only study circuits built from ‘passive’ components: that is, those that do not produce energy. Thus, we exclude batteries and current sources. We only consider components that respond linearly to an applied voltage. Thus, we exclude components such as nonlinear resistors or diodes. Finally, we only consider components with one input and one output, so that a circuit can be described as a graph with edges labeled by components. Thus, we also exclude transformers. The most familiar components our framework covers are linear resistors, capacitors and inductors.
While we want to expand our scope in future work, the class of circuits made from these components has appealing mathematical properties, and is worthy of deep study. Indeed, these circuits have been studied intensively for many decades by electrical engineers. Even circuits made exclusively of resistors have inspired work by mathematicians of the caliber of Weyl and Smale!
Our work relies on this research. All we are adding is an emphasis on symplectic geometry and an explicitly ‘compositional’ framework, which clarifies the way a larger circuit can be built from smaller pieces. This is where monoidal categories become important: the main operations for building circuits from pieces are composition and tensoring.
Our strategy is most easily illustrated for circuits made of linear resistors. Such a resistor dissipates power, turning useful energy into heat at a rate determined by the voltage across the resistor. However, a remarkable fact is that a circuit made of these resistors always acts to minimize the power dissipated this way. This ‘principle of minimum power’ can be seen as the reason symplectic geometry becomes important in understanding circuits made of resistors, just as the principle of least action leads to the role of symplectic geometry in classical mechanics.
Here is a circuit made of linear resistors:
The wiggly lines are resistors, and their resistances are written beside them: for example, a label of 3 means 3 ohms, an ‘ohm’ being a unit of resistance. To formalize this, define a circuit of linear resistors to consist of:
• a set $N$ of nodes,
• a set $E$ of edges,
• maps $s, t \colon E \to N$ sending each edge to its source and target node,
• a map $r \colon E \to (0,\infty)$ specifying the resistance of the resistor labelling each edge,
• maps $i \colon I \to N$ and $o \colon O \to N$ specifying the inputs and outputs of the circuit.
When we run electric current through such a circuit, each node $n \in N$ gets a potential $\phi(n).$ The voltage across an edge $e$ is defined as the change in potential as we move from the source of $e$ to its target:

$\phi(t(e)) - \phi(s(e))$

The power dissipated by the resistor on this edge is then

$\displaystyle \frac{(\phi(t(e)) - \phi(s(e)))^2}{r(e)}$

The total power dissipated by the circuit is therefore twice

$\displaystyle P(\phi) = \frac{1}{2} \sum_{e \in E} \frac{(\phi(t(e)) - \phi(s(e)))^2}{r(e)}$

The factor of $\frac{1}{2}$ is convenient in some later calculations.
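To make the formula concrete, here is a tiny Python sketch computing $P(\phi)$ and the total dissipated power. The circuit, resistances and potentials are invented purely for illustration:

```python
# A toy circuit of linear resistors: edges are (source, target, resistance in ohms).
edges = [("a", "b", 1.0), ("b", "c", 2.0), ("a", "c", 3.0)]

# A choice of potential (in volts) at each node.
phi = {"a": 5.0, "b": 3.0, "c": 0.0}

def P(phi, edges):
    """P(phi) = (1/2) * sum over edges of (potential difference)^2 / resistance."""
    return 0.5 * sum((phi[t] - phi[s]) ** 2 / r for (s, t, r) in edges)

total_power = 2 * P(phi, edges)  # the total power dissipated is twice P(phi)
```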
Note that $P$ is a nonnegative quadratic form on the vector space $\mathbb{R}^N.$ However, not every nonnegative definite quadratic form on $\mathbb{R}^N$ arises in this way from some circuit of linear resistors with $N$ as its set of nodes. The quadratic forms that do arise are called Dirichlet forms. They have been extensively investigated, and they play a major role in our work.
We write

$\partial N = i(I) \cup o(O)$

for the set of terminals: that is, nodes corresponding to inputs or outputs. The principle of minimum power says that if we fix the potential at the terminals, the circuit will choose the potential at other nodes to minimize the total power dissipated. An element $\psi$ of the vector space $\mathbb{R}^{\partial N}$ assigns a potential to each terminal. Thus, if we fix $\psi,$ the total power dissipated will be twice

$\displaystyle Q(\psi) = \min_{\phi|_{\partial N} = \psi} P(\phi)$
The function $Q$ is again a Dirichlet form. We call it the power functional of the circuit.
Now, suppose we are unable to see the internal workings of a circuit, and can only observe its ‘external behavior’: that is, the potentials at its terminals and the currents flowing into or out of these terminals. Remarkably, this behavior is completely determined by the power functional $Q.$ The reason is that the current at any terminal can be obtained by differentiating $Q$ with respect to the potential at this terminal, and relations of this form are all the relations that hold between potentials and currents at the terminals.
The Laplace transform allows us to generalize this immediately to circuits that can also contain linear inductors and capacitors, simply by changing the field we work over, replacing $\mathbb{R}$ by the field $\mathbb{R}(s)$ of rational functions of a single real variable $s,$ and talking of impedance where we previously talked of resistance. We obtain a category $\mathrm{Circ}$ where an object is a finite set, a morphism is a circuit with input set $I$ and output set $O,$ and composition is given by identifying the outputs of one circuit with the inputs of the next, and taking the resulting union of labelled graphs. Each such circuit gives rise to a Dirichlet form, now defined over $\mathbb{R}(s),$ and this Dirichlet form completely describes the externally observable behavior of the circuit.
We can take equivalence classes of circuits, where two circuits count as the same if they have the same Dirichlet form. We wish for these equivalence classes of circuits to form a category. Although there is a notion of composition for Dirichlet forms, we find that it lacks identity morphisms or, equivalently, it lacks morphisms representing ideal wires of zero impedance. To address this we turn to Lagrangian subspaces of symplectic vector spaces. These generalize quadratic forms via the map

$Q \mapsto \mathrm{Graph}(dQ) = \{ (\psi, dQ_\psi) : \psi \in V \}$

taking a quadratic form $Q$ on the vector space $V$ over the field $F$ to the graph of its differential $dQ.$ Here we think of the symplectic vector space $V \oplus V^\ast$ as the state space of the circuit, and the subspace $\mathrm{Graph}(dQ)$ as the subspace of attainable states, with $\psi \in V$ describing the potentials at the terminals, and $dQ_\psi \in V^\ast$ the currents.
This construction is well-known in classical mechanics, where the principle of least action plays a role analogous to that of the principle of minimum power here. The set of Lagrangian subspaces is actually an algebraic variety, the Lagrangian Grassmannian, which serves as a compactification of the space of quadratic forms. The Lagrangian Grassmannian has already played a role in Sabot’s work on circuits made of resistors. For us, its importance is that we can find identity morphisms for the composition of Dirichlet forms by taking circuits made of parallel resistors and letting their resistances tend to zero: the limit is not a Dirichlet form, but it exists in the Lagrangian Grassmannian.
Indeed, there exists a category $\mathrm{LagrRel}$ with finite-dimensional symplectic vector spaces as objects and Lagrangian relations as morphisms: that is, linear relations from $V$ to $W$ that are given by Lagrangian subspaces of $\overline{V} \oplus W,$ where $\overline{V}$ is the symplectic vector space conjugate to $V$: that is, with the sign of the symplectic structure switched.
To move from the Lagrangian subspace defined by the graph of the differential of the power functional to a morphism in the category $\mathrm{LagrRel}$ (that is, to a Lagrangian relation) we must treat seriously the input and output functions of the circuit. These express the circuit as built upon a cospan:

$I \xrightarrow{\;i\;} N \xleftarrow{\;o\;} O$
Applicable far more broadly than this present formalization of circuits, cospans model systems with two ‘ends’, an input and output end, albeit without any connotation of directionality: we might just as well exchange the role of the inputs and outputs by taking the mirror image of the above diagram. The role of the input and output functions, as we have discussed, is to mark the terminals we may glue onto the terminals of another circuit, and the pushout of cospans gives formal precision to this gluing construction.
One upshot of this cospan framework is that we may consider circuits with elements of $N$ that are both inputs and outputs, such as this one:
This corresponds to the identity morphism on the finite set with two elements. Another is that some points may be considered an input or output multiple times, like here:
This lets us connect two distinct outputs to the above double input.
Given a finite set $T$ of inputs or outputs, we understand the electrical behavior on this set by considering the symplectic vector space $F^T \oplus (F^T)^\ast,$ the direct sum of the space $F^T$ of potentials and the space $(F^T)^\ast$ of currents at these points. A Lagrangian relation specifies which states of the output space $F^O \oplus (F^O)^\ast$ are allowed for each state of the input space $F^I \oplus (F^I)^\ast.$ Turning the Lagrangian subspace of a circuit into this information requires that we understand the ‘symplectification’

$S f \colon F^B \oplus (F^B)^\ast \to F^A \oplus (F^A)^\ast$

and ‘twisted symplectification’

$S^t f \colon \overline{F^B \oplus (F^B)^\ast} \to \overline{F^A \oplus (F^A)^\ast}$

of a function $f \colon A \to B$ between finite sets. In particular we need to understand how these apply to the input and output functions with codomain restricted to $\partial N$; abusing notation, we also write these $i \colon I \to \partial N$ and $o \colon O \to \partial N.$
The symplectification $S f$ is a Lagrangian relation, and the catch phrase is that it ‘copies voltages’ and ‘splits currents’. More precisely, for any given potential–current pair $(\psi, \iota)$ in $F^B \oplus (F^B)^\ast,$ its image under $S f$ consists of all elements $(\psi', \iota')$ of $F^A \oplus (F^A)^\ast$ such that the potential at $a \in A$ is equal to the potential at $f(a),$ and such that, for each fixed $b \in B,$ collectively the currents at the $a \in f^{-1}(b)$ sum to the current at $b.$ We use the symplectification $S o$ of the output function to relate the state on $\partial N$ to that on the outputs $O.$
As our current framework is set up to report the current out of each node, to describe input currents we define the twisted symplectification

$S^t f \colon \overline{F^B \oplus (F^B)^\ast} \to \overline{F^A \oplus (F^A)^\ast}$

almost identically to the above, except that we flip the sign of the currents. This again gives a Lagrangian relation. We use the twisted symplectification $S^t i$ of the input function to relate the state on $\partial N$ to that on the inputs $I.$
The Lagrangian relation corresponding to a circuit then comprises exactly a list of the potential-current pairs that are possible electrical states of the inputs and outputs of the circuit. In doing so, it identifies distinct circuits. A simple example of this is the identification of a single 2-ohm resistor:
with two 1-ohm resistors in series:
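This identification is easy to check numerically: fix the potentials at the two terminals, minimize the dissipated power over any internal nodes, and compare the resulting power functionals. A NumPy sketch, where the helper `power_functional` and all values are ours, purely for illustration:

```python
import numpy as np

def power_functional(edges, n, terminals, psi):
    """Minimize P(phi) = (1/2) phi^T L phi over phi agreeing with psi on terminals."""
    L = np.zeros((n, n))
    for i, j, r in edges:           # build the weighted graph Laplacian
        c = 1.0 / r
        L[i, i] += c; L[j, j] += c
        L[i, j] -= c; L[j, i] -= c
    internal = [k for k in range(n) if k not in terminals]
    phi = np.zeros(n)
    phi[terminals] = psi
    if internal:                    # principle of minimum power: solve the interior rows
        A = L[np.ix_(internal, internal)]
        b = -L[np.ix_(internal, terminals)] @ np.asarray(psi)
        phi[internal] = np.linalg.solve(A, b)
    return 0.5 * phi @ L @ phi

psi = [1.0, 0.0]
q_single = power_functional([(0, 1, 2.0)], 2, [0, 1], psi)               # one 2-ohm resistor
q_series = power_functional([(0, 2, 1.0), (2, 1, 1.0)], 3, [0, 1], psi)  # two 1-ohm in series
# q_single == q_series: black boxing cannot tell these circuits apart.
```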
Our inability to access the internal workings of a circuit in this representation inspires us to call this process black boxing: you should imagine encasing the circuit in an opaque black box, leaving only the terminals accessible. Fortunately, this information is enough to completely characterize the external behavior of a circuit, including how it interacts when connected with other circuits!
Put more precisely, the black boxing process is functorial: we can compute the black-boxed version of a circuit made of parts by computing the black-boxed versions of the parts and then composing them. In fact we shall prove that $\mathrm{Circ}$ and $\mathrm{LagrRel}$ are dagger compact categories, and the black box functor preserves all this extra structure:
Theorem. There exists a symmetric monoidal dagger functor, the black box functor

$\blacksquare \colon \mathrm{Circ} \to \mathrm{LagrRel},$

mapping a finite set $X$ to the symplectic vector space $F^X \oplus (F^X)^\ast$ it generates, and a circuit to the Lagrangian relation

$\displaystyle \bigcup_{v \in \mathrm{Graph}(dQ)} S^t i(v) \times S o(v),$

where $Q$ is the circuit’s power functional.
The goal of this paper is to prove and explain this result. The proof is trickier than one might first expect, but our approach involves concepts that should be useful throughout the study of networks, such as ‘decorated cospans’ and ‘corelations’.
Give it a read, and let us know if you have questions or find mistakes!
Can you use the black box functor $\blacksquare$ to come up with a presentation of $\mathrm{LagrRel}$, similar to how you derived a presentation of $\mathrm{FinRel}_k$?
More precisely, the black box functor is essentially surjective: every finite-dimensional symplectic vector space is isomorphic to one of the form $F^X \oplus (F^X)^\ast$. Is the functor also full? This question should be relevant for engineering-type problems where one wants to realize a desired Lagrangian relation in terms of a circuit.
If the black box functor is full, then do you know which relations one needs to impose on the generators of $\mathrm{Circ}$ in order to turn it into a presentation of $\mathrm{LagrRel}$?
I’m almost sure the functor is not full. A dense set of Lagrangian subspaces arise as the graphs of quadratic forms in the manner described above. But not every quadratic form—not even every nonnegative one—comes from a circuit made of resistors: only the Dirichlet forms do. Similarly, I believe not every Lagrangian subspace comes from a circuit made of resistors and perfectly conductive wires. There should be a concept of ‘Dirichlet’ Lagrangian subspace, and we should have talked about it in our paper. These should give the image of the black box functor.
There should be a nice presentation of a category which has symplectic vector spaces as objects and Dirichlet Lagrangian subspaces as morphisms.
On the other hand, it would also be very nice to give a presentation of the larger category $\mathrm{LagrRel}$.
I didn’t know about Dirichlet forms until I started studying electrical circuits in earnest; they seem a bit underpublicized. But they have lots of nice equivalent characterizations, and we give a few on page 17 of our paper. There’s also a very nice theory of Dirichlet forms in the infinite-dimensional case, which people use to study electric potentials in substances with spatially varying electrical conductivity—or if you prefer a more mathematical motivation, random walks and the Laplace equation!
The idea of Dirichlet forms, as opposed to general nonnegative quadratic forms, is that they capture the idea of locality.
So let me try to understand why there are nonnegative quadratic forms on $\mathbb{R}^n$ that are not Dirichlet forms. The most general Dirichlet form looks like this:

$\displaystyle Q(\phi) = \frac{1}{2} \sum_{i < j} \frac{(\phi_i - \phi_j)^2}{r(i,j)}$

where the nodes are enumerated by $i = 1, \dots, n$. This is the most general case because we can assume without loss of generality that the circuit does not have any resistors in parallel; and we can also assume that there is a resistor between any two nodes, because we can put $r(i,j) = \infty$ for those that we do not want to connect directly by a resistor.

In other words, the Dirichlet forms on $\mathbb{R}^n$ are parametrized by the symmetric matrix $r(i,j)$, where the values on the diagonal are irrelevant since the resulting terms vanish: using a resistor to connect some node to itself has no effect! So the space of Dirichlet forms is at most $\frac{n(n-1)}{2}$-dimensional. On the other hand, the space of nonnegative quadratic forms on $\mathbb{R}^n$ is $\frac{n(n+1)}{2}$-dimensional. So the Dirichlet forms live in a subspace of codimension $n$, and most nonnegative quadratic forms are not Dirichlet forms…
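This dimension count can be double-checked for small $n$ with a bit of linear algebra: Dirichlet forms are nonnegative combinations of the ‘edge forms’ $(\phi_i - \phi_j)^2$, so their linear span has one dimension per unordered pair of nodes, while all quadratic forms correspond to symmetric matrices. A NumPy sketch for $n = 3$ (the helper names are ours):

```python
import numpy as np
from itertools import combinations

n = 3

def edge_laplacian(i, j, n):
    """Matrix of the quadratic form (phi_i - phi_j)^2 on R^n."""
    m = np.zeros((n, n))
    m[i, i] = m[j, j] = 1.0
    m[i, j] = m[j, i] = -1.0
    return m

# One edge form per unordered pair: their span has dimension n(n-1)/2.
edge_forms = np.array([edge_laplacian(i, j, n).ravel()
                       for i, j in combinations(range(n), 2)])
dirichlet_dim = np.linalg.matrix_rank(edge_forms)

# Symmetric matrices (all quadratic forms) have dimension n(n+1)/2.
sym_basis = []
for i in range(n):
    for j in range(i, n):
        m = np.zeros((n, n))
        m[i, j] = m[j, i] = 1.0
        sym_basis.append(m.ravel())
sym_dim = np.linalg.matrix_rank(np.array(sym_basis))
# dirichlet_dim == 3 and sym_dim == 6: codimension n = 3, as claimed.
```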
Ah, nice! They sure look a lot like a kinetic term in a quantum field theory, with the resistance playing the role of the squared length of an edge.
In fact, as mentioned in your paper, the Dirichlet form is essentially the Laplace operator on the circuit. So now I suspect that the Dirichlet forms are precisely those nonnegative quadratic forms which are infinitesimal stochastic when expressed in the standard basis. This seems intuitive, as a current is nothing but a bunch of electrons performing a random walk!
Tobias wrote:
Exactly!
That’s right! We didn’t say it in our paper because we didn’t think it would help most people. But my book with Jacob has a lot of stuff on Dirichlet operators: that is, infinitesimal stochastic self-adjoint operators. You can turn a Dirichlet form into a Dirichlet operator, or vice versa, using the inner product.
So, another moral about Dirichlet operators, or Dirichlet forms, is that they generalize the beautiful interplay between stochastic and quantum phenomena that you see when you stick an $i$ in the heat equation and get Schrödinger’s equation.
I keep hoping physics will make a lot of progress when we actually understand this well-known but mysterious fact: the nicest Markov processes become quantum processes when you rotate time 90°.
Great, thanks! This discussion has increased the connectivity of my mental network of scientific concepts significantly ;)
Great! I always love it when I can compress my knowledge and make room for new facts.
This is not exactly on topic, but something about the table you have early in your post bothers me. More specifically it’s the first row:
position velocity momentum force
In geometrical mechanics, where do these things live?
Position is a point in a manifold (configuration space), velocity is a tangent vector, momentum is a covector and force is a map from the tangent bundle to the cotangent bundle. This is because forces depend on position and velocity and take values in covectors (for various reasons; for example, forces can be integrated along curves to do work). So a force does not at all look like an element of the tangent bundle of the cotangent bundle of the configuration space. So how could a force be the derivative of momentum?
All these analogies are traditionally studied for linear systems, and our paper too is about linear systems. So take your configuration space to be a finite-dimensional vector space. This may seem déclassé to a geometer, but a surprisingly large amount of engineering focuses on this case. That’s why you see this sort of chart in lots of engineering books.
When you get to networks made of systems whose configuration spaces are general manifolds, or whose phase spaces are general symplectic manifolds, or Poisson manifolds, or… whatever… there’s a lot more left to do.
It’s really not surprising that a lot of engineering papers deal with systems that live in $\mathbb{R}^n$ (or open subsets of $\mathbb{R}^n,$ or whatever). I know that my comment is a bit off topic. But what bothers me is this:
Is there something deep going on or is this just an artifact of a bunch of simplifying assumptions?
A large portion of control theory assumes linearity because you’re trying to keep the system near an equilibrium point, you can linearize near that equilibrium, and if you let the system drift so far from equilibrium that the nonlinearity matters it means you’ve done a bad job and deserve all the suffering you get!
The classic example is balancing a pendulum upside-down by moving a cart back and forth. You assume the pendulum’s angle $\theta$ from vertical is small enough that $\sin \theta$ is close to $\theta$:
This is a nice reflection of the basic difference between physics and engineering. In physics you try to understand nature in all its crazy glory. In engineering you try to create systems that behave in ways you can understand.
However, nonlinear control theory is also important, and you could say it’s fundamental to living systems. I just want to understand the mathematics of linear networks a bit before moving on to the nonlinear ones… mainly because there’s basic stuff to understand that hasn’t been figured out yet.
I have no issues with you wanting to understand linear systems or, more generally, systems living on (open) subsets of vector spaces.
My comments were more along the lines of: I don’t understand the geometry of pairings of “flows” and “efforts.” And I would love to know if such pairings exist in a nonlinear setting.
As far as linearising near equilibria goes, linearisation is not very helpful if you are trying to prove stability of an equilibrium of a conservative system.
As you probably know, if an equilibrium of a conservative system is stable, the eigenvalues of the linearised system have to be purely imaginary. However, the converse doesn’t hold: the eigenvalues can be purely imaginary and the system can be unstable — the nonlinearities take over right away in the case of purely imaginary eigenvalues.
I think that each system is Hamiltonianizable, and that these Hamiltonians are linear, so that each dynamics can be written as a projection of a Hamiltonian dynamics. The funny thing is that classical and quantum mechanics then coincide.
Eugene wrote:
You probably know this, but in control theory you aren’t seeking to prove equilibria of Hamiltonian systems are stable: you’re trying to stabilize equilibria that are unstable by measuring the system and actively applying forces to it that depend on your measurements.
Again, a great example is the upside-down pendulum on the cart:
It’s obviously unstable, but you can still keep it standing by measuring $\theta$ and applying a force that depends on $\theta$ in a suitable way… and you can make this dependence linear, so it’s a purely linear problem. The approximation $\sin \theta \approx \theta$ is being used to achieve this linearity, but the approximation is okay, since deviations from it get corrected by your active stabilization!
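For the record, here is a minimal numerical sketch of that claim, with invented pendulum parameters and feedback gains: the linearized upright pendulum $\ddot\theta = (g/\ell)\,\theta + u$ is unstable, but the linear feedback $u = -k_p \theta - k_d \dot\theta$ moves both eigenvalues into the left half-plane:

```python
import numpy as np

g, ell = 9.8, 1.0      # gravity and pendulum length (illustrative values)
kp, kd = 20.0, 4.0     # feedback gains: u = -kp*theta - kd*thetadot

# State (theta, thetadot). Without feedback: thetadd = (g/ell)*theta.
A_open = np.array([[0.0, 1.0],
                   [g / ell, 0.0]])
# With feedback: thetadd = (g/ell - kp)*theta - kd*thetadot.
A_closed = np.array([[0.0, 1.0],
                     [g / ell - kp, -kd]])

unstable = max(np.linalg.eigvals(A_open).real)    # positive: the upright pendulum falls
stable = max(np.linalg.eigvals(A_closed).real)    # negative: the feedback stabilizes it
```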
Again, I’m not trying to say nonlinear issues are uninteresting. They’re incredibly interesting: the linear theory is just the tip of the iceberg. However, there are big fat textbooks on control theory that consider nothing but linear systems—and they’re very practical, mathematically interesting, and ripe for a category-theoretic treatment. So that’s what Jason is working on.
By the way, anyone who wants to know more can click on the picture. For even more, I really recommend this book:
• Bernard Friedland, Control System Design: An Introduction to State-Space Methods, Courier Dover Publications, 2012.
That’s certainly one very common approach. There is also a passivity-based approach. See this book, for example. Or these slides.
Thanks, I’ll check those out. I don’t know about it, but I like the sound of “passive control”. It sounds very wu wei.
port-Hamiltonian systems are all supposed to be about passive control…
The slides contain this quote:
That’s sort of what I was trying to say… but I think “forcing” should be replaced by something a bit more gentle and cooperative.
Last time I talked about a new paper I wrote with Brendan Fong. Our paper uses a formalism that Brendan developed here:
• Brendan Fong, Decorated cospans.
OK, so I understood everything right up to the point of symplectification. Intuitively, I see what you are trying to do, and why, but I cannot quite get back to formulas I can manipulate. So, for example, consider a tank: an LC circuit, an inductor and capacitor in parallel. We know the impedance is a combination of $L$ and $C$. For terminals, we can have one end which is hooked to ground, for both input and output, and another, hooked to an antenna as input that drives it, and again, as output from which I siphon energy. A more complex example might be a transmission line or a Butterworth filter. We know that, as a black box, such circuits are described by transfer functions.
OK, so transfer functions are traditionally written in the frequency domain, whereas the symplectic vector space is in the “time domain”. I guess that means that $\omega$ is the symplectic form, and that I should be calling it the transfer matrix. You are saying that it is the symplectification of a function $f$ from finite set to finite set, but I just cannot visualize what $f$ is supposed to be for a tank circuit.
I’m also confused by (because I can’t recall them off the top of my head, and am too lazy to crack open a standard textbook) how to get from a time-domain transfer matrix to the frequency-domain function with its poles and zeros, but maybe never mind about that.
The reason I want to see the worked example(s) is not so much that I am interested in electronics, but rather because of two or three other directions:
— the infinite-dimensional transfer matrices are transfer operators, and they can be used to describe the transition to chaos, or at least a transition to ergodic behavior.
— The role of entropy in this transition. So David Ruelle has made a life-work of the transfer operator, but I cannot recall seeing a symplectic conjugate temperature anywhere in the introduction-to-chaos texts I’ve read. Was I not paying attention? Or is it because the symplectic conjugate is not a temperature, but a map from tangent to cotangent space, as Eugene points out? Is it the Kullback-Leibler divergence? How? Huh??
— A better understanding of “information geometry”. There’s a lot of talk in (for example) ecology about non-equilibrium thermodynamics, and various appeals to minimum/maximum entropy principles.
So, for example: is the following described by a “passive linear network”? Consider the diurnal solar heating of a body of water, and the evaporation from it. I think it’s more or less linear, and I think it’s passive (the time-varying heating of the sun being like a time-varying voltage applied to a passive circuit) but the symplectic nature of this system is mysterious to me. This is one possible example of “non-equilibrium thermodynamics”; does the symplectic-Lagrangian least-action principle apply, and how, exactly? Details? Is it really the “principle of maximum entropy generation” in disguise?
Apologies in advance for the long digressive post.
Thanks for the comment! There’s a lot to talk about here, but for starters:
No, it’s in the frequency domain. As I all-too-briefly noted, this is why we replace resistances by ‘impedances’, which can be functions of the frequency variable $s.$

Our paper explains it in more detail:

Although inductors and capacitors impose a linear relationship if we involve the derivatives of current and voltage, to mimic the above work on resistors we wish to have a constant of proportionality between functions representing the current and voltage themselves. Various integral transforms perform just this role; electrical engineers typically use the Laplace transform. This lets us write a function of time $t$ instead as a function of frequencies $s,$ and in doing so turns differentiation with respect to $t$ into multiplication by $s$ and integration with respect to $t$ into division by $s.$
In detail, given a function

$f \colon [0, \infty) \to \mathbb{R}$

we define the Laplace transform of $f$ by

$\displaystyle (\mathcal{L}f)(s) = \int_0^\infty f(t) e^{-st} \, dt$

We also use the notation $F = \mathcal{L}f,$ denoting the Laplace transform of a function in upper case, and refer to the Laplace transforms as lying in the frequency domain or $s$-domain. For us, the three crucial properties of the Laplace transform are then:

• linearity: $\mathcal{L}(af + bg) = a \mathcal{L}f + b \mathcal{L}g$ for $a, b \in \mathbb{R}$;
• differentiation: $\mathcal{L}\left(\frac{df}{dt}\right)(s) = s (\mathcal{L}f)(s) - f(0)$;
• integration: if $g(t) = \int_0^t f(\tau) \, d\tau,$ then $(\mathcal{L}g)(s) = \frac{1}{s} (\mathcal{L}f)(s).$
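These properties can be checked symbolically; here is a SymPy sketch using the example $f(t) = e^{-at}$ (our choice, purely for illustration):

```python
import sympy as sp

t, s, tau = sp.symbols('t s tau', positive=True)
a = sp.Symbol('a', positive=True)

f = sp.exp(-a * t)                               # example function
F = sp.laplace_transform(f, t, s, noconds=True)  # its Laplace transform, 1/(s + a)

# Differentiation rule: L(df/dt)(s) = s*F(s) - f(0)
lhs = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)
assert sp.simplify(lhs - (s * F - f.subs(t, 0))) == 0

# Integration rule: L(int_0^t f) = F(s)/s
g = sp.integrate(f.subs(t, tau), (tau, 0, t))
assert sp.simplify(sp.laplace_transform(g, t, s, noconds=True) - F / s) == 0
```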
Writing $V = \mathcal{L}v$ and $I = \mathcal{L}i$ for the Laplace transforms of the voltage and current across a component respectively, and recalling that by assumption $v(t) = i(t) = 0$ for $t \le 0,$ the $s$-domain behaviors of components become, for a resistor of resistance $R$:

$V(s) = R \, I(s)$

for an inductor of inductance $L$:

$V(s) = sL \, I(s)$

and for a capacitor of capacitance $C$:

$\displaystyle V(s) = \frac{1}{sC} \, I(s)$
Note that for each component the voltage equals the current times a rational function of the real variable $s,$ called the ‘impedance’ and in general denoted by $Z.$ Note also that the impedance is a ‘positive real function’, meaning that it lies in the set

$\{ Z \in \mathbb{R}(s) : \mathrm{Re}(Z(s)) > 0 \text{ whenever } \mathrm{Re}(s) > 0 \}$

While $Z$ is a quotient of polynomials with real coefficients, in this definition we are applying it to complex values of $s,$ and demanding that its real part be positive in the open right half-plane. Positive real functions were introduced by Otto Brune in 1931, and they play a basic role in circuit theory.
Indeed, Brune convincingly argued that for any conceivable passive linear component we have this generalization of Ohm’s law:

$V(s) = Z(s) \, I(s)$

where $I$ is the current, $V$ is the voltage and $Z$ is the impedance of the component. As we shall see, generalizing from circuits of linear resistors to arbitrary passive linear circuits is just a matter of formally replacing resistances by impedances. This amounts to replacing the field $\mathbb{R}$ by the larger field $\mathbb{R}(s),$ and replacing the set $(0,\infty)$ of positive reals by the set of positive real functions. From a mathematical perspective we might as well work with any field with a mildly well-behaved notion of ‘positive element’, and we do this in the next section.
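To see impedances behaving as elements of $\mathbb{R}(s),$ here is a small SymPy sketch (the series/parallel helpers are ours) computing the impedance of an inductor and capacitor in parallel, the ‘tank’ circuit asked about above:

```python
import sympy as sp

s, L, C, R = sp.symbols('s L C R', positive=True)

# Impedances of the basic passive components, as rational functions of s:
Z_R = R            # resistor
Z_L = s * L        # inductor
Z_C = 1 / (s * C)  # capacitor

def series(z1, z2):
    """Impedances in series add, just like resistances."""
    return sp.simplify(z1 + z2)

def parallel(z1, z2):
    """Impedances in parallel combine harmonically, just like resistances."""
    return sp.simplify(z1 * z2 / (z1 + z2))

Z_tank = parallel(Z_L, Z_C)   # the LC 'tank': s*L/(s**2*L*C + 1)
```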
Linus Vepstas wrote:
Instead of tackling the very specific examples you mentioned, it’ll be easier for me to do a general circuit with just one input and one output. This includes some of the examples you mentioned.
Suppose the circuit has impedance $Z.$ We’re using the Laplace transform to work in the frequency domain, as explained in my last comment. So, $Z$ and every quantity I mention will be some function of the Laplace transform variable $s.$ This means they’re all elements of some field $F$: some field of functions of $s.$ The detailed nature of this field doesn’t matter much.
Saying that the impedance is $Z$ means that

$V = Z I$

where $V$ is the voltage across the circuit and $I$ is the current through the circuit. This is a glorified version of Ohm’s law. I think you’re asking: how do we see this relation as a Lagrangian relation between symplectic vector spaces?
First, what are the symplectic vector spaces?
They are both $F^2.$ The input wire has a potential $\phi_1$ and current $I_1,$ and these are ‘canonically conjugate variables’ just like the position and momentum of a particle, so we want to think of the pair

$(\phi_1, I_1)$

as a point in a symplectic vector space. The symplectic structure is the usual 2-form

$\omega = dq \wedge dp$

we see in classical mechanics, but with potential and current replacing position and momentum:

$\omega = d\phi_1 \wedge dI_1$

The output wire has potential $\phi_2$ and current $I_2,$ and these again live in the symplectic vector space $F^2$ with the same symplectic structure.
Second, what is the Lagrangian relation?
It’s just the linear relationship that holds between the input and output potentials and currents:

$$I_1 = I_2, \qquad \phi_1 - \phi_2 = Z I_1$$

The first equation says the current flowing in equals the current flowing out; the second one is the glorified version of Ohm’s law, using the fact that the voltage across our circuit is

$$V = \phi_1 - \phi_2$$

and the current through it is

$$I = I_1 = I_2$$

These two linear equations pick out a 2-dimensional linear subspace of the direct sum of our symplectic vector spaces. And, in fact, this is a Lagrangian relation: a Lagrangian subspace of $\overline{F^2} \oplus F^2$, where the overline means we switch the sign of the symplectic structure on the first summand.
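Here is a quick numerical sanity check (my own sketch, not from the paper). Working over the real numbers with a made-up value of $Z$, the subspace cut out by these two equations is isotropic for the symplectic structure on the direct sum with the sign flipped on the input copy; being 2-dimensional in a 4-dimensional symplectic space, it is therefore Lagrangian.

```python
# Sketch: check that the behavior of a one-port with impedance Z is a
# Lagrangian subspace.  We work over the reals with a fixed numerical Z
# (in the paper, F is a field of functions of the Laplace variable).
Z = 2.0

# Symplectic form on F^2: omega((phi, I), (phi', I')) = phi I' - phi' I
def omega(v, w):
    phi, I = v
    phi2, I2 = w
    return phi * I2 - phi2 * I

# Form on the direct sum, with the sign flipped on the input copy:
def Omega(v, w):
    # v = (phi_in, I_in, phi_out, I_out)
    return omega(v[2:], w[2:]) - omega(v[:2], w[:2])

# Basis of the subspace { I_in = I_out, phi_in - phi_out = Z * I_in }:
v1 = (1.0, 0.0, 1.0, 0.0)   # equal potentials, no current
v2 = (Z, 1.0, 0.0, 1.0)     # unit current, potential drop Z

# The form vanishes on the subspace, so it is isotropic -- and since it
# is 2-dimensional in a 4-dimensional space, it is Lagrangian.
assert Omega(v1, v2) == 0.0
```

Note that the sign flip matters: with the untwisted form on the plain direct sum, $\Omega(v_1, v_2)$ would equal 2, not 0.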
Linus wrote:
Suppose you can approximately model this system with a network of linear resistors, inductors and capacitors, except for the incoming sunlight, which you can model as an external time-dependent current source or voltage source. Then our formalism will apply.
Your model will consist of a graph where the edges are labelled by impedances, with some input and output nodes. Each end of each edge will have an associated ‘potential’ and ‘current’. The list of all potentials and currents will be an element of a big symplectic vector space.
There are lots of systems for which this kind of model can be made. Henry Paynter, the inventor of bond graphs, drew this picture of one:
Our paper doesn’t mention bond graphs at all, so let’s not talk about those now—there’s too much to talk about already. Click the picture if you want more on bond graphs. My point is just that systems that fit into our formalism are very common.
Our paper explains the ‘principle of minimum power’ in quite a bit of detail for circuits made of resistors. This is actually a principle of minimum entropy generation, since the ‘power’ of such a circuit is actually the power dissipated, that is, the rate at which useful energy is turned into heat.
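A toy illustration of the principle (my own, with made-up numbers): one interior node between two boundary nodes held at fixed potentials. Minimizing the dissipated power over the interior potential recovers Kirchhoff’s current law at that node.

```python
# Toy check of the principle of minimum power (illustrative, not from
# the paper): one interior node at potential phi between boundary nodes
# held at v1 and v2, through resistances r1 and r2.
v1, v2 = 10.0, 4.0
r1, r2 = 2.0, 3.0

def power(phi):
    # total power dissipated as heat in the two resistors
    return (phi - v1) ** 2 / r1 + (phi - v2) ** 2 / r2

# Setting dP/dphi = 0 gives the conductance-weighted average:
phi_star = (v1 / r1 + v2 / r2) / (1 / r1 + 1 / r2)

# At the minimum, Kirchhoff's current law holds at the interior node:
net_current = (v1 - phi_star) / r1 + (v2 - phi_star) / r2
assert abs(net_current) < 1e-12

# And nearby potentials dissipate strictly more power:
assert power(phi_star) < power(phi_star + 0.1)
assert power(phi_star) < power(phi_star - 0.1)
```

So ‘current in equals current out’ at interior nodes is exactly the first-order condition for minimizing entropy generation.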
When we introduce inductors and capacitors we show that a formal analogue of the principle of minimum power still applies! However, now the power is not a real number: it’s an element of the field of rational functions of one real variable, the Laplace transform variable $s$.
I have not yet succeeded in finding a regime in which the ‘principle of maximum entropy generation’ applies! Sometime I’ll put up a link to a video of this talk by Roderick Dewar, a proponent of the principle of maximum entropy generation:
• Roderick Dewar, Maximum entropy and maximum entropy production in biological systems: survival of the likeliest?
He essentially admitted that his attempts to derive such a principle have not yet succeeded.
For ENSO, the analogous LC circuit is described on the Azimuth Forum here:
http://forum.azimuthproject.org/discussion/comment/14572/#Comment_14572
And it actually works!
On maximum entropy production. I guess that an amplifier with positive feedback does maximum entropy production. It chooses the frequency at which it can make the biggest noise it is capable of. But I also guess that ‘passive’ rules out feedback and ‘linear’ rules out amplification.
And one last question: what is the equivalent of the canonical 1-form (Liouville form) for a passive network? What is its interpretation, in electrical terms? What about the solder form?
The important form on a symplectic vector space is the symplectic 2-form, and above I explained what that was in our setup—at least in an example.
In work related to decorated cospans (such as our paper on circuits or John and Jason’s work on signal flow diagrams), our semantics usually is constructed from a field of values—not a physicist’s ‘field’, but an algebraist’s sort of ‘field’, where you can add, multiply, subtract and divide.
I have been (very) slowly working through this paper. It is lots of fun. I am just a working engineer who reads things like this for the entertainment value.
A couple of questions. I didn’t see you excluding shorts from the resistor networks. Was resistance not equal to zero a condition I missed? I kept seeing reference to non-negative values for r’s. Did I miss something?
I’m only on section 2. What I have read so far on Equivalent Circuits calls to mind the Thevenin/Norton theorems. I did some scratching around on paper to see if there was a similar ‘canonical’ innards to any black-boxed resistor network. Is it correct that you could take any resistor network with $n$ inputs and $m$ outputs and replace it by a canonical equivalent network in which each input $i$ is connected to each output $j$ by exactly one resistor $R_{ij}$? I recall reading somewhere that such a setup could be thought of as an old-fashioned analog computer. Pull all the outputs to ground. Then the input voltages $v_1, \dots, v_n$ are mapped to $i_1, \dots, i_m$ via matrix multiplication (a vector form of Ohm’s law). I think that is right. I know I read about this at some point, but searching for a reference on the web I came up empty.
Anyway, thanks for putting this together. It is very interesting.
Rob Macdonald wrote:
Great!
Don’t say “just”.
We don’t allow zero resistance, and this plays a huge role in our work. On the one hand a resistor of resistance zero would suck up infinite power if the voltage across it were nonzero, and this would destroy our use of the ‘power functional’. (Mathematically, we’d be dividing by zero.) On the other hand we want something like a perfectly conductive wire to serve as an identity morphism in our category. We cleverly navigate between Scylla and Charybdis by allowing a circuit where the input terminal is the output terminal, and there’s no wire and no resistor.
We begin seriously discussing this in Section 3.5, where we write:
These remarks set the tone for the next section.
Maybe there are some typos where we say “non-negative”, but we should say “positive”. If you can point them out, I can see if they need to be fixed!
More later.
Where? I haven’t been able to find those. We talk a lot about nonnegative quadratic forms, but that’s just because the power used by a circuit is a nonnegative quadratic form in the potentials, and it can be zero.
In the Introduction we define a circuit of resistors to consist of a graph with edges labelled by resistances, which are positive real numbers.
So here the resistances have to be positive!
We study circuits of resistors using Dirichlet forms. However, the numbers appearing in these forms are more closely related to conductivities than resistances. After all, the power used by a circuit is the Dirichlet form

$$P(\phi) = \sum_{e} \frac{1}{r(e)} \big(\phi(t(e)) - \phi(s(e))\big)^2$$

where $\phi$ is the potential and $r(e)$ is the resistance of a wire (or ‘edge’) $e$ going from the node $s(e)$ to the node $t(e)$. You’ll notice that we’re taking the reciprocal of the resistance here, to get the conductivity. So, the resistance can’t be zero.
Thanks, that clears it up. I was confusing the $c_{ij}$’s with the $r(e)$’s. But I understand now. Appreciate it!
Rob wrote:
Almost!
1) You also need to allow wires going between inputs, and wires going between outputs.
2) You also need to allow ‘wires of infinite resistance’—which we treat simply as ‘missing’ wires.
3) We also allow something like ‘wires of zero resistance’. But not really! To be precise: we allow an input to also be an output, or allow two inputs to be the same node, or allow two outputs to be the same node. (See my previous discussion of zero resistance, to see why we do this.)
If we disallow case 3), then every circuit of resistors is equivalent to one where there is at most one wire with a resistor on it going from any input or output to any other input or output.
This is why we use ‘Dirichlet forms’ as power functionals and write a general Dirichlet form as

$$P(\phi) = \sum_{i \neq j} c_{ij} (\phi_i - \phi_j)^2$$

You can think of this as the power emitted by a circuit with a single wire of conductance $c_{ij}$ going from node $i$ to node $j$ if $c_{ij} \neq 0$… or no wire going from $i$ to $j$ if $c_{ij} = 0$.
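Here is a minimal sketch in Python of a Dirichlet form used as a power functional, following the discussion above; the dictionary encoding of the conductances and the function name are my own choices, not the paper’s.

```python
# Sketch: a Dirichlet form as a power functional.  c[(i, j)] is the
# conductance of the wire from node i to node j; missing pairs mean
# no wire (i.e. infinite resistance).
def dirichlet(c, phi):
    """Power P(phi) = sum over wires of c_ij * (phi_i - phi_j)**2."""
    return sum(cij * (phi[i] - phi[j]) ** 2 for (i, j), cij in c.items())

# Two nodes joined by a single resistor of resistance r = 2 ohms,
# i.e. conductance 1/2:
c = {(0, 1): 0.5}
phi = [3.0, 1.0]          # potentials; voltage across the wire is 2 volts
print(dirichlet(c, phi))  # V^2 / r = 4 / 2 = 2.0 watts
```

Note how naturally ‘missing’ wires are handled: leaving a pair out of the dictionary is the same as setting its conductance to zero.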
Earlier, Brendan and I introduced a way to ‘black box’ a circuit and define the relation it determines between potential-current pairs at the input and output terminals. This relation describes the circuit’s external behavior as seen by an observer who can only perform measurements at the terminals.
An important fact is that black boxing is ‘compositional’: if one builds a circuit from smaller pieces, the external behavior of the whole circuit can be determined from the external behaviors of the pieces. For category theorists, this means that black boxing is a functor!
Our new paper with Blake develops a similar ‘black box functor’ for detailed balanced Markov processes, and relates it to the earlier one for circuits.
I have been slowly looking at this paper. Good thing I don’t get paid to understand its contents or I would be homeless! Anyhow, I was trying to understand the relationship between corelations and pushouts and I found myself wondering, is there a standard algorithm for computing a pushout in finite set?
I asked the question here on stackexchange. I’m not sure the answer isn’t obvious, but I’m also not sure it’s not not obvious, if you get me.
http://math.stackexchange.com/questions/1422056/is-there-an-algorithm-for-computing-pushouts-in-sf-finset
If anyone reading this paper has any interest in this question you might have a look there. Some of the comments indicated that it was trivial (like asking for an algorithm to compute the sum of two objects in finite set). If it is obvious, I should be able to program a computer to do it, which, given the time I had to devote to it, I couldn’t.
I hope you’re enjoying the lengthy delights of this paper.
It’s quite easy to work out a pushout in the category of finite sets. Say you have two functions $f : X \to Y$ and $g : X \to Z$, and you want to take their pushout. First you form the disjoint union $Y + Z$. Then you identify each element $f(x) \in Y$ with the corresponding element $g(x) \in Z$—more precisely, you take the quotient of $Y + Z$ by the equivalence relation these identifications generate.
In general, in any category, a pushout can be formed by taking first a coproduct and then a coequalizer. That’s what I’m doing above.
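For what it’s worth, here is a short Python sketch of that recipe (the function names and encoding are mine): form the disjoint union, then merge $f(x)$ with $g(x)$ for each $x$, using union-find to track the merging.

```python
# Sketch: pushout of f: X -> Y and g: X -> Z in FinSet, computed by
# forming the disjoint union Y + Z and merging f(x) with g(x).
def pushout(f, g, Y, Z):
    """f, g: dicts with the same key set X; Y, Z: lists of elements.
    Returns the blocks (equivalence classes) of the quotient of Y + Z."""
    # Tag elements so the union is disjoint even if Y and Z share names.
    parent = {('Y', y): ('Y', y) for y in Y}
    parent.update({('Z', z): ('Z', z) for z in Z})

    def find(a):                      # follow parent pointers to the root
        while parent[a] != a:
            a = parent[a]
        return a

    for x in f:                       # identify f(x) ~ g(x) for each x in X
        parent[find(('Y', f[x]))] = find(('Z', g[x]))

    blocks = {}                       # group elements by their root
    for a in parent:
        blocks.setdefault(find(a), set()).add(a)
    return list(blocks.values())

# Example: X = {x1, x2}, Y = {y1, y2}, Z = {z1},
# f sends x1 to y1 and x2 to y2, g sends both to z1.
f = {'x1': 'y1', 'x2': 'y2'}
g = {'x1': 'z1', 'x2': 'z1'}
P = pushout(f, g, ['y1', 'y2'], ['z1'])
print(len(P))  # 1 -- y1 ~ z1 and y2 ~ z1, so everything collapses
```

The identifications can chain together, which is exactly the coequalizer step: two elements of $Y$ end up identified whenever they’re linked through $Z$, and vice versa.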
I think Odin stuck a knife in his eye so that he could see. That seems extreme to me. And my deductible is so high, I would have to okay it with my accountant first. So I will just ask…
I thought the idea of a pushout was “the finest equivalence…” Here is an example. Say X = {x1, x2}, Y = {y1, y2}, Z = {z1}. Say f: X -> Y is defined by (x1, y1) “x1 maps to y1” and (x2, y2), and g: X -> Z is the only map of sets from X to Z. Then the pushout P is a one-point set, with the maps from Y and Z to it the only maps… Correct? So even though x1 and x2 are mapped to different things in Y, they are pushed to the same element in P.
Here is another (contrived) example. say X has 4 elements, Y has 2 and Z has 3, labeled in the obvious ways, {x1,…,x4}, etc
define the map f as (x1, y1), (x2, y1), (x3, y2), (x4, y2)
define the map g as (x1, z1), (x2, z2), (x3, z2), (x4, z3)
I gather that the pushout P has 1 element in it. Correct? Or I need to stick a hot poker in my eye, because I don’t understand what a pushout is. Nothing dies on the internet, so for all time my descendants can read this and suffer the shame of knowing their great-great-great-grandfather didn’t know how to compute a pushout in finite set. Is that obvious? As obvious as, say, the sum of two objects? Is it obvious that z3 and y1 get mapped to the same element of P even though they are not in the pre-image of the same element of X? Or do I just not understand what a pushout is?
I tried to write a program to compute the pushout of two maps in finset so I could understand this paper. I think I have it. I wrote it in R. (Parenthetically, that is how I ended up here: Engineer needs to tally all the reasons the light bulbs broke, uses R, finds out R is a “functional programming language”, doesn’t know there was such a thing as a non-functional programming language, reads about Haskell, ends up on the n-cafe on his lunch hour, when he should be filling out Return Material Authorizations for said broken light bulbs, reads about how some physicists are studying first year electrical engineering because it will unlock the secrets of the universe… Now is trying to understand category theory. The internet is a beautiful thing.)
When I first set out to write this program I assumed that it was all old hat—that there was an existing set of database operations (like Union, Join, etc.) which would encompass the idea. Maybe there is, and I just haven’t gotten it yet. The closest functional relative I found was “group-by”, which is a sort of primitive for “equivalence” in the world of data-tables. I still “feel” like there “should” be an existing operation on data that is the “pushout” or “coequalizer”.
Pushout, seems like something we would use on data as a sort of “group by across multiple traits” but I haven’t found the construction coded, thus the question. (again, forgive me if I just don’t understand the basic idea).
Here is an artificial example I was thinking of. Let’s say P is a set of people. Let’s say that T is a set of mutations of the (mitochondrial) genes, and similarly for S. There are maps f: P -> T, where you are mapped to the particular mutation you have, and g: P -> S, similarly. Then there are some interesting things to compute. There is a map m: P -> P, “mother”. There are also natural sections from T -> P and S -> P, where we pick out the first person to have the particular mutation. There is also a series of pushouts associated with g and f, starting with P, then m(P), m(m(P))…
Let’s say the pushouts are Q_i… Then isn’t it interesting if Q_{i+1} has fewer elements than Q_i? It would mean that you had a genetically isolated population, by those two genes.

This seems artificial, though. (We don’t know who everyone’s mother is, ad nauseam.)
Of course circuits made solely of conductive wires are not very exciting for electrical engineers. But in an earlier paper, Brendan introduced corelations as an important stepping-stone toward more general circuits:
• John Baez and Brendan Fong, A compositional framework for passive linear networks. (Blog article here.)
The key point is simply that you use conductive wires to connect resistors, inductors, capacitors, batteries and the like and build interesting circuits—so if you don’t fully understand the math of conductive wires, you’re limited in your ability to understand circuits in general!
I’ve got seven grad students working on this project—or actually eight, if you count Brendan Fong: I’ve been helping him on his dissertation, but he’s actually a student at Oxford.
Brendan was the first to join the project. I wanted him to work on electrical circuits, which are a nice familiar kind of network, a good starting point. But he went much deeper: he developed a general category-theoretic framework for studying networks. We then applied it to electrical circuits, and other things as well.