
One reason I’m glad I gave this talk is because afterwards Jamie Vicary pointed out some very interesting consequences of the relations among signal-flow diagrams listed in my talk. It turns out they imply equations familiar from the theory of complementarity in categorical quantum mechanics!

This is the kind of mathematical surprise that makes life worthwhile for me. It seemed utterly shocking at first, but I think I’ve figured out why it happens. Now is not the time to explain… but I’ll have to do it soon, both here and in the paper that Jason Eberle and I are writing about control theory.

For now, besides the slides, the best place to read more about this program is here:

Couldn’t you describe “cup” as coduplication followed by throwing-away? And “cap” as co-throwing-away followed by duplication? Or am I missing something?

So, it seems to me like you can get away with one less generator on slide 39, where you list {scalar-multiplication, addition, duplication, deletion, zero, cup, cap}. Suppose instead the generators are {scalar-multiplication, addition, duplication, coduplication, deletion, codeletion}. Then you can generate cup and cap as above. Zero is a little trickier, but it seems like you can get zero by codeletion, followed by duplication, followed by multiplying one duplicate by -1 and leaving the other alone, followed by addition.
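If it helps to see this concretely, here is a quick computational check of that derivation of zero, with relations over the finite field Z/5 encoded as Python sets of input/output pairs. (The encoding, and the symbol '*' for the unit object's single element, are my own conventions, not anything from the talk.)

```python
F = range(5)          # the field Z/5, standing in for the vector space
STAR = '*'            # the single element of the unit object

def compose(R, S):
    """Relational composition: first R, then S."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

codeletion   = {(STAR, x) for x in F}        # relates * to every element
duplication  = {(x, (x, x)) for x in F}      # x ~> (x, x)
scale_and_id = {((a, b), ((-a) % 5, b))      # multiply one duplicate by -1,
                for a in F for b in F}       # leave the other alone
addition     = {((a, b), (a + b) % 5) for a in F for b in F}

zero = {(STAR, 0)}                           # the generator being derived

composite = compose(compose(compose(codeletion, duplication),
                            scale_and_id), addition)
print(composite == zero)                     # True
```

Since a relation is just the set of pairs it allows, composing the four steps and comparing with zero is a one-liner.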

This feels like a more natural basis since it has deletion/codeletion and duplication/coduplication as symmetric pairs which are independent of the particular sets in play, and scalar-multiplication and addition as the specific operators germane to vector spaces. As we move forward and start dealing with objects that are maybe not finite vector spaces over a field, duplication/coduplication and deletion/codeletion are probably going to stay the same, and maybe we can just drop domain-relevant operators in to complete the basis. (For instance, this might help with non-linear control theory…)

I suppose all this really does is dispose of “zero” as a generator, and nearly every set we might care about has a distinguished point that we’d probably want to call “zero” or something like it. But in a certain sense it’s more interesting to derive zero from its characterizing property: it’s the unique element of a vector space equal to x - x for every other x in the vector space.

You could alternatively derive zero from its property as the (additive) identity. That diagram is a little too complicated for me to attempt describing with text, but here is a picture:

It is actually possible to eliminate both Zero and Deletion as generators for FinRel. Five seems to be the minimum number of generators needed, but getting rid of too many generators, willy-nilly, seems to muddy up the picture.

The simplest way to eliminate Deletion is by using Duplication! That is, Duplication followed by a cup. The simplest way I’ve found to make Zero redundant is by using Scalar-multiplication… by zero. With a Codeletion on top. Your first Zero is equivalent to this when you use the relation involving (b+c) on the slide that shows you get the rig structure of back. Your picture at the end is very nice, though. I had not thought of trying to make Zero redundant without the use of Scalar-multiplication.
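Jason's two redundancies can be checked the same way. Here is a sketch over Z/5 with relations written as sets of pairs ('*' again marks the unit object's element; the encoding is mine, not from the slides):

```python
F = range(5)
STAR = '*'

def compose(R, S):
    """Relational composition: first R, then S."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

duplication = {(x, (x, x)) for x in F}
cup         = {((x, x), STAR) for x in F}     # relates equal pairs to *
codeletion  = {(STAR, x) for x in F}
scale_by_0  = {(x, 0) for x in F}             # scalar multiplication by 0

deletion = {(x, STAR) for x in F}
zero     = {(STAR, 0)}

print(compose(duplication, cup) == deletion)  # True: Duplication then cup
print(compose(codeletion, scale_by_0) == zero)  # True: Codeletion on top
```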

I agree that it can be helpful to separate those features that will continue to be important in nonlinear control theory—where we might have a manifold replacing a vector space—from those that are linear: addition, zero and their dualized versions, coaddition and cozero:

(Coaddition could be called ‘splitting’, since it relates a vector to all pairs of vectors that sum to it. Cozero relates a vector to the unit if that vector is zero, and to nothing at all otherwise.)

The simplest nonlinear setting analogous to ours is the category of finite sets and relations. I guess cartesian product of sets provides the correct symmetric monoidal structure (though one could make a case for using disjoint union).

Here one has duplication, deletion, cup and their dualized versions:

and

though as you note, these are redundant:

Are all relations generated from functions of these sorts using the symmetric monoidal category structure? I don’t see the functions

and

showing up, though I may not be looking hard enough. These functions arise very naturally from disjoint unions, since they’re examples of the natural maps

(I would like to write the last exclamation mark upside-down, but I’m finding it impossible and I’ve wasted enough time on it. Outside LaTeX equations it’s easy enough: ¡.)

One observation i have is that if you allow signals to be vectors, then the multiplication C is a matrix multiplication, and in some sense both addition and duplication become particular cases of multiplication.

I have more questions but i want to finish looking at the second half of the talk before raising them …

This fact that addition in a vector space is the transpose of duplication is very nice, and it’s reflected in the notation we use for those operations, one being the upside-down version of the other, with black turned to white (or in these slides, light blue). Similarly, zero

is the transpose of deletion.

As I mention in the talk, zero and addition make our vector space into a commutative monoid, while deletion and duplication make it into a cocommutative comonoid. That’s a bunch of algebra jargon, but among other things it means that they share very similar properties, but ‘turned upside down’, thanks to each pair being the transpose of the other!
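The matrix point of view makes these transpose facts easy to check by hand. Here is a NumPy sketch of my own (the talk uses diagrams, not code); note that the degenerate shapes like (0, 1) for the map into the zero-dimensional space are legal NumPy arrays:

```python
import numpy as np

addition    = np.array([[1.0, 1.0]])   # shape (1, 2): (a, b) |-> a + b
duplication = np.array([[1.0], [1.0]]) # shape (2, 1): a |-> (a, a)

zero     = np.zeros((1, 0))            # the unique linear map R^0 -> R
deletion = np.zeros((0, 1))            # the unique linear map R -> R^0

print(np.array_equal(addition, duplication.T))  # True
print(np.array_equal(zero, deletion.T))         # True
```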

I hope my talk explains it a bit better than I’m managing here, with the help of more pictures and hand-waving.

Yes i think this part is clear both in the post above and in the talk. While i am at it, if you allow f to be a nonlinear map, then you can go pretty far in terms of describing nonlinear systems with signal flow diagrams, e.g. you can describe the dynamical system:

(which is pretty general) with just 3 blocks, one for the function f, one for the function g, and an “integrator” block. The problem is that in this context the integrator can’t be described by a simple multiplication by 1/s, but it has to be a time-domain integration operation. Anyway, perhaps you mention something like this in the part of the talk i didn’t see yet. So let me see the complete talk before i say anything more.
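To make the three-block picture concrete, here is a minimal simulation sketch. It is entirely my own: the particular f and g are arbitrary placeholders, and forward Euler stands in for the time-domain integrator block.

```python
def f(x, u):                     # dynamics block: dx/dt = f(x, u)
    return -x**3 + u

def g(x, u):                     # output block: y = g(x, u)
    return x**2

def simulate(u, x0=0.0, dt=1e-3, steps=5000):
    """Wire the three blocks together: f feeds the integrator, whose
    state x feeds back into f and out through g."""
    x, ys = x0, []
    for _ in range(steps):
        ys.append(g(x, u))
        x = x + dt * f(x, u)     # the "integrator" block, in the time domain
    return x, ys

x_final, outputs = simulate(u=1.0)
print(round(x_final, 3))         # settles near the equilibrium of -x^3 + u = 0
```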

I’m focusing on linear systems in this talk but the new ideas—the category-theoretic ones—can be adapted to the nonlinear context. It seems best to start by exploring their consequences in the well-understood and very pretty linear case.

Ok, so this is what i would love to understand better, let’s see how well i can formulate the question …

First, i am assuming that given a signal flow diagram, you can rotate it, stretch it, rotate each block with respect to the diagram, and deform the “wires” as much as i want without affecting the meaning (that is, the underlying equations) of the diagram one bit. This assumption comes naturally to me, but i am not sure it’s shared (especially when i look at your talk from 42:30 to 43:30, where it looks like for you the objects on the left and right are indeed different). Also, you talk about “flipping things upside down” to obtain different objects, which implies that your diagrams have a preferred direction.

Now, assuming that my assumption (invariance under rotations and deformations) holds, i can look at slide 8 of your talk (min 8:20) and think that nothing prevents me from connecting the output q to the input F and arranging the closed diagram into an equilateral triangle that represents the (unstable) system (with q(0) = starting position). And i can stretch and deform that triangle any way i want and it would still be exactly the same thing (because the underlying equation would be the same).

Now in this equilateral triangle, the 3 connections between one function and the following (that is the a,v,q “wires”) are just identity morphisms to me. Obviously they might be deformed to fit a triangle but that shouldn’t matter, they are still identities.

Now i expect you’ll come back to me saying “no, you cannot do that, you can’t have 3 identity morphisms in a closed loop, you will need some cup and cap morphism along the way” (where exactly? and why there?), to which my answer would be … why can’t i do that?

The first way out could be that my original assumption is right and also i can actually do the triangle without cups or caps.

Another way out is that my original assumption does not hold.

Or … there is something i am missing …

Hopefully I understand your question well enough to shed some light. If I misunderstood, I apologize in advance.

For a given signal flow diagram, one may certainly stretch the components without changing the meaning… as long as the vertical order of generators (and braidings) is kept the same. Rotating is a bit touchier. An upside-down version of addition is not addition, but coaddition. There are a couple of different ways we can handle this – we can make coaddition an extra generator, or we can define it in terms of addition, cups and caps. The latter is what is going on in the diagram on the whiteboard during 42:30 to 43:30. An advantage to this tactic is that only two new generators need to be added to make a co- for everything we had in FinVect, instead of five.

So it is by virtue of cup and cap that we can rotate parts of a signal flow diagram by an arbitrary amount, as long as the overall topology is preserved. And as long as the individual components have all input wires above the component, and all output wires below the component. Wires sticking out sideways are ambiguous, so purely horizontal wires are the only thing technically forbidden. I think that’s what’s happening in your equilateral triangle idea – about the only way I can see to avoid cups and caps is by drawing one of the wires horizontally.

I can think of one other way to view your triangle that might seem to avoid cups and caps. If the relations are at the corners of the triangle and no horizontal lines exist, there will be one relation above the other two. That top relation will have two outputs, no inputs. Multiplication (in the extended field, ) has one input, one output, so the only way to reconcile is if the top relation is a composition of multiplication and cap. You certainly can have three identity morphisms in your triangle, but the only ways I can see that appear to get rid of cups and caps in your triangle example don’t actually get rid of them – they merely get swept under the rug. They may be hidden, but they are still there.

I asked myself that same question today. The answer I came up with is that he wants every “network diagram” to, overall, denote a morphism (which is a composition of the morphisms expressed by each “tier” of the network diagram as you move from “top” to “bottom”). And yet, I have the same intuition as you: I feel like I ought to be able to draw a “1/s” triangle and an “s” triangle that feed into each other, which would denote the “claim” that those triangles are inverses. What does this mean? Well, I suppose it asserts that if I were to expand out the diagram, replacing one of the identity morphisms by a duplication (declaring that point the “top” of the diagram) and replacing another line by a coduplication (declaring that point the “bottom”), the resulting diagram would denote a relation which is actually a proper (total) function.

But suppose I have three transformers. Maybe I’m feeding the output of multiply-by-2 into multiply-by-4 and feeding the output of multiply-by-4 into multiply-by-0.125 and the output of multiply-by-0.125 back into multiply-by-2. Now I have a bunch of places to put the “top” and the “bottom” of the diagram (six, to be precise). No matter which I choose, the relation denoted should still be a proper function. Whereas, if I change one of the scalars to 3, the relation denoted should become {(0,0)}, no matter where I put the top and bottom.
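As a sanity check of that claim (my own encoding, using exact fractions so that 0.125 is represented honestly): going once around the loop multiplies a signal by the product of the scalars, so the loop’s constraint is x = (product) · x. With product 1 every x works; with the scalar changed to 3 only x = 0 does.

```python
from fractions import Fraction

def loop_solutions(scalars, candidates):
    """Values consistent with a closed loop of multiply-by-s blocks."""
    prod = 1
    for s in scalars:
        prod *= s
    return {x for x in candidates if prod * x == x}

xs = {Fraction(n, 2) for n in range(-6, 7)}       # a small sample of values

ok  = loop_solutions([2, 4, Fraction(1, 8)], xs)  # product = 1
bad = loop_solutions([2, 4, 3],              xs)  # product = 24

print(ok == xs)       # True: the loop constrains nothing
print(bad == {0})     # True: only (0, 0) survives, as claimed
```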

It seems to me like there ought to be a formal sense in which we can talk about diagrams where the top and bottom are unspecified. That is, there should be certain conditions/contexts under which the choice of top and bottom “don’t matter”.

In fact, given that we switched from maps to relations, it’s not clear what purpose the distinction between “inputs” and “outputs” serves. We could just call everything an input. The space of relations doesn’t change. If this makes sense, then it might be easier to talk about diagrams with no inputs: instead of choosing a “top” and a “bottom”, we only need to choose a “top”: replace one line with a duplication and now we have a relation with a single input, i.e., a subset of the field. Then, the condition I proposed above would just become “the subset is the entire field”. (Though, in some contexts, we might also want to assert other properties of such a subset, like, “it contains elements distinct from 0”. A parallel with universal and existential quantification.)

Ok, thanks to both of you for your answers. I see now that the components are not invariant under rotation.

This is a big departure from the way control engineers are used to thinking about flow diagrams, where each component has input and output labels, and as long as outputs are connected to inputs the diagram represents some dynamical system (or anyway, just a system of equations).

Arrows are there just as an additional help, but really the directionality is only given by the components, which are seen as functions (not relations) which process inputs to produce outputs. John said something about that here.

In these kinds of flow diagrams one can just rotate (and stretch) anything and nothing changes (provided that the same outputs are connected to the same inputs), because the underlying equations are the same.

When you introduce relations into the mix then you probably gain a lot of power to represent more things in a more compact way, but i think that the rules about what is an input and what is an output and especially which you can connect to which, become a little less obvious, at least from my point of view.

I think that this (the introduction of relations) is probably why in this formalism the vertical ordering of generators is important, and, as Jason said, you need to preserve it, and you need to make sure that “individual components have all input wires above the component, and all output wires below the component”.

So i think that if you end up talking to control engineers it might be important to stress these differences …

Thanks for helping us start thinking about what we’ll need to say to be comprehensible to control theorists. I think the way we’re doing things is less different from the ‘usual’ way than you believe; in fact I feel we’re just formalizing the usual way. In particular, I believe that ordinary signal flow diagrams give, in general, relations between inputs and outputs rather than just functions. But to explain this requires examples and pictures… so instead of trying to convince you here, I’ll make sure we write the paper in a way that includes these!

I am sure it’s just me just not fully getting the “rules of the game”, probably because i’m starting from my own assumptions.

Maybe what’s going on is that blocks are really rotation invariant but you are using similar symbols for different things (e.g. addition and coaddition) so you have to use the vertical ordering to eliminate the ambiguity of which one you are referring to.

And the fact that there are no clear input and output labels and no arrows does not help, i think.

For example, if you have a relation F: U ~> V, would you say that there is a fixed direction from U to V (so u is always the input and v always the output), or not?

If i want to think of u as output and v as input, can i use the same relation, and the same name and symbol in the flow diagram (and just connect it backwards), or do i have to use the inverse relation F^-1 : V ~> U (and maybe call it coF or something)?

Related to this, i kind of assume it’s forbidden to connect two inputs directly by an identity morphism, but i might be wrong on that one too.

Sorry about the many questions, hopefully they at least make some sense to you, anyway … just trying to get to the bottom of this.

I’m busy travelling from Cambridge to Erlangen today so I’ll answer your questions tomorrow! The issue is that I’m trying to describe signal flow diagrams as morphisms in a category, and this automatically forces certain ways of thinking—ways that I’m completely used to, but apparently go against how practitioners of control theory (or at least you) like to think!

I think the way we’re doing things is less different from the ‘usual’ way than you believe

I think that if you see things from afar then there are no differences and it’s clearly the same thing.

However looking closely i feel that there are a few small differences (either in the conventions used or in the underlying formalization, or both) which i haven’t been able to pinpoint exactly yet.

It looks like there might be slight differences in the set of assumptions regarding inputs, outputs, the signals’ direction of propagation, and a preferential direction in which components are drawn … anyway, all interesting stuff.

A non-mathematical ‘context’ point: often, a control system is needed not because the ‘system’ (perhaps better called ‘effector’?) is likely to destroy itself, but because it is too expensive or even impossible to calculate how much effect is needed to get/keep the perception to/at the needed setting (e.g. whether a bird is sitting on the end of the telescope or not, and if so, how heavy it is …). This could perhaps be added to the picture by adding a ‘disturbance’ to the output of the effector before transforming it into the perception (Powers 1973, _Behavior: The Control of Perception_).

I have a question. I like to play with tensor networks (for quantum many-body systems), and we frequently use tensor network diagrams as a tool for representing quantum states, operators, etc. These live in a symmetric monoidal category using the tensor product (and also a dagger). This lets us deal with tensor contractions – generally speaking, multiplication.

But sometimes I want to add things. Perhaps the superposition of two wavefunctions or the sum of two operators. Or perhaps it is useful to perform the direct sum of two vector spaces (for instance, we take advantage of symmetries by breaking up the Hilbert space into its different symmetry sectors). However, in these cases I find that drawing diagrams becomes more difficult. I didn’t realize until today that we can have a symmetric monoidal category with parallel wires being direct sums (actually I had been using diagrams for these recently without realizing it – that seems to be the beauty of category theory).

Is there some nice category that includes BOTH direct sums and tensor products? Has a nice, graphical language been developed to deal with this? I think that could be quite helpful…

There are a number of related notions here, and it takes a little effort to see which one you need for a given purpose, but very often what you want is a symmetric monoidal category with finite colimits such that taking the tensor product with any object gives a functor that distributes over colimits. This is my own personal favorite definition of 2-rig. Examples include and , where the addition in both cases is the coproduct, a special case of finite colimits.
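Spelled out in symbols (my paraphrase of the condition just stated, not a quote), the distributivity requirement says:

```latex
x \otimes \bigl(\mathrm{colim}_i\, y_i\bigr) \;\cong\; \mathrm{colim}_i\,(x \otimes y_i),
\qquad \text{in particular} \qquad
x \otimes (y + z) \;\cong\; (x \otimes y) + (x \otimes z),
\qquad x \otimes 0 \;\cong\; 0,
```

where + is the coproduct and 0 the initial object.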

Has a nice, graphical language been developed to deal with this? I think that could be quite helpful…

The obvious graphical notation here gets a bit awkward so people don’t seem to use it much. The problem is that if we denote using parallel wires, we need to make up some slightly awkward notation for and vice versa. At least, nobody has found a notation where the distributive law is ‘visually obvious’ and also easy to draw. You can use 3d diagrams with going in one direction and in another, but it seems to be more pain than it’s worth.


Couldn’t you describe “cup” as coduplication followed by throwing-away? And “cap” as co-throwing-away followed by duplication? Or am I missing something?

Yes, coduplication followed by deletion is the cup! Jason Eberle has worked out all these relations, but I didn’t get around to discussing them.

So other people can understand what we’re yammering on about:

For any set coduplication is a relation

where we check to see if two elements are equal, and if so we keep one—if not, the result is undefined. In other words:

Deletion is the unique function from to your favorite one-element set:

Composing these (as relations), we indeed get the cup

(Here I’m using straight arrows for functions, wiggly ones for relations).

In short,

Taking daggers, which means reversing relations, we get the other fact you mentioned:

Or as I’d say it, codeletion followed by duplication equals the cap!

Cool!
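Both identities can be machine-checked with the same set-of-pairs encoding used in the comments above (a convention of mine, with '*' standing for the unit object's element):

```python
F = range(5)
STAR = '*'

def compose(R, S):
    """Relational composition: first R, then S."""
    return {(a, c) for (a, b) in R for (b2, c) in S if b == b2}

coduplication = {((x, x), x) for x in F}   # keep one copy iff the inputs agree
deletion      = {(x, STAR) for x in F}
codeletion    = {(STAR, x) for x in F}
duplication   = {(x, (x, x)) for x in F}

cup = {((x, x), STAR) for x in F}
cap = {(STAR, (x, x)) for x in F}

print(compose(coduplication, deletion) == cup)   # True
print(compose(codeletion, duplication) == cap)   # True
```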


The link you provided to Brendan Fong’s work-in-progress is broken. I think you need capital letters in each word of the filename.

Whoops! You’re right—fixed. Here it is:

• Brendan Fong, A compositional approach to control theory.

It’s got a nice introduction to the overall philosophy.

One observation i have is that if you allow signals to be vectors (that is then the multiplication C is a matrix multiplication and in some sense both addition and duplication become particular cases of multiplication:

and ).

I have more questions but i want to finish looking at the second half of the talk before raising them …

This fact that addition in a vector space

is the transpose of duplication

is very nice, and it’s reflected in the notation we use for those operations, one being the upside-down version of the other, with black turned to white (or in these slides, light blue). Similarly, zero

is the transpose of deletion

As I mention in the talk, zero and addition make into a commutative monoid, while deletion and duplication make it into a cocommutative comonoid. That’s a bunch of algebra jargon, but among other things it means that they share very similar properties, but ‘turned upside down’, thanks to one pair being the transpose of the others!

I hope my talk explains it a bit better than I’m managing here, with the help of more pictures and hand-waving.

Yes i think this part is clear both in the post above and in the talk. While i am at it, if you allow f to be a nonlinear map, then you can go pretty far in terms of describing nonlinear systems with signal flow diagrams, e.g. you can describe the dynamical system:

(which is pretty general) with just 3 blocks, one for the function f, one for the function g, and an “integrator” block. The problem is that in this context the integrator can’t be described by a simple multiplication by 1/s, but it has to be a time-domain integration operation. Anyway, perhaps you mention something like this in the part of the talk i didn’t see yet. So let me see the complete talk before i say anything more.

I’m focusing on linear systems in this talk but the new ideas—the category-theoretic ones—can be adapted to the nonlinear context. It seems best to start by exploring their consequences in the well-understood and very pretty linear case.

Ok, so this is what i would love to understand better, let’s see how well i can formulate the question …

First, i am assuming that given a signal flow diagram, you can rotate it, stretch it, rotate each block with respect to the diagram, and deform the “wires” as much as i want without affecting one bit the meaning (that is the underlying equations) of the diagram. This assumption comes natural to me but i am not sure it’s shared, (especially when i look at your talk from 42:30 to 43:30 where it looks like for you the objects on the left and right are indeed different). Also you talk about “flipping things upside down” to obtain

differentobjects, which implies that your diagrams have a preferred direction.Now assuming that my assumption (invariance under rotations and deformations) holds, then i can look at slide 8 of your talk (min 8:20), and think that nothing prevents me to connect the output q to the input F and arrange the closed diagram into an equilateral triangle that represents the (unstable) system (with q(0) = starting position). And i can stretch and deform that triangle anyway i want and it would still be exactly the same thing (because the underlying equation would be the same).

Now in this equilateral triangle, the 3 connections between one function and the following (that is the a,v,q “wires”) are just identity morphisms to me. Obviously they might be deformed to fit a triangle but that shouldn’t matter, they are still identities.

Now i expect you’ll come back to me saying “no, you cannot do that, you can’t have 3 identity morphisms in a closed loop, you will need some cup and cap morphism along the way” (where exactly?, and why there ?), to which my answers would be … why can’t i do that ?

The first way out could be that my original assumption is right and also i can actually do the triangle without cups or caps.

Another way out is that my original assumption does not hold.

Or … there is something i am missing …

Hopefully I understand your question well enough to shed some light. If I misunderstood, I apologize in advance.

For a given signal flow diagram, one may certainly stretch the components without changing the meaning… as long as the vertical order of generators (and braidings) is kept the same. Rotating is a bit touchier. An upside-down version of addition is not addition, but coaddition. There are a couple of different ways we can handle this – we can make coaddition an extra generator, or we can define it in terms of addition, cups and caps. The latter is what is going on in the diagram on the whiteboard during 42:30 to 43:30. An advantage to this tactic is that only two new generators need to be added to make a co- for everything we had in FinVect, instead of five.

So it is by virtue of cup and cap that we can rotate parts of a signal flow diagram by an arbitrary amount, as long as the overall topology is preserved. And as long as the individual components have all input wires above the component, and all output wires below the component. Wires sticking out sideways are ambiguous, so purely horizontal wires are the only thing technically forbidden. I think that’s what’s happening in your equilateral triangle idea – about the only way I can see to avoid cups and caps is by drawing one of the wires horizontally.

I can think of one other way to view your triangle that might seem to avoid cups and caps. If the relations are at the corners of the triangle and no horizontal lines exist, there will be one relation above the other two. That top relation will have two outputs, no inputs. Multiplication (in the extended field, ) has one input, one output, so the only way to reconcile is if the top relation is a composition of multiplication and cap. You certainly can have three identity morphisms in your triangle, but the only ways I can see that appear to get rid of cups and caps in your triangle example don’t actually get rid of them – they merely get swept under the rug. They may be hidden, but they are still there.

I asked myself that same question today. The answer I came up with is that he wants every “network diagram” to, overall, denote

amorphism (which is a composition of the morphisms expressed by each “tier” of the network diagram as you move from “top” to “bottom”). And yet, I have the same intuition as you: I feel like I ought to be able to draw a “1/s” triangle and an “s” triangle that feed into each other, which would denote the “claim” that those triangles are inverses. What does this mean? Well, I suppose it asserts that if I were to expand out the diagram, replacing one of the identity morphisms by a duplication (and declaring that the “top” of the diagram) and replacing another line by a coduplication (declaring that the “bottom” of the diagram), the resulting diagram would denote a relation which is actually a proper (total) function.But suppose I have three transformers. Maybe I’m feeding the output of multiply-by-2 into multiply-by-4 and feeding the output of multiply-by-4 into multiply-by-0.125 and the output of multiply-by-0.125 back into multiply-by-2. Now I have a bunch of places to put the “top” and the “bottom” of the diagram (six, to be precise). No matter which I choose, the relation denoted should still be a proper function. Whereas, if I change one of the scalars to 3, the relation denoted should become {(0,0)}, no matter where I put the top and bottom.

It seems to me like there ought to be a formal sense in which we can talk about diagrams where the top and bottom are unspecified. That is, there should be certain conditions/contexts under which the choice of top and bottom “don’t matter”.

In fact, given that we switched from maps to relations, it’s not clear what purpose the distinction between “inputs” and “outputs” serves. We could just call everything an input. The space of relations doesn’t change. If this makes sense, then it might be easier to talk about diagrams with no inputs: instead of choosing a “top” and a “bottom”, we only need to choose a “top”: replace one line with a duplication and now we have a relation with a single input, i.e., a subset of the field. Then, the condition I proposed above would just become “the subset is the entire field”. (Though, in some contexts, we might also want to assert other properties of such a subset, like, “it contains elements distinct from 0”. A parallel with universal and existential quantification.)
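The “diagram with only a top” idea above can be made concrete on a finite sample of the field; this is my own toy illustration of the proposed condition, not anything from the talk:

```python
# A closed diagram with one wire opened up denotes a subset of the field
# (here sampled on a small finite grid of integers).  The condition proposed
# above, "the relation is a proper function", becomes "the subset is the
# entire field" -- here, the entire sample.

SAMPLE = set(range(-5, 6))

def loop_subset(total_gain):
    # x = total_gain * x has every x as a solution iff total_gain == 1
    return {x for x in SAMPLE if x == total_gain * x}

print(loop_subset(2 * 4 * 0.125) == SAMPLE)  # gain 1: the whole sample
print(loop_subset(3 * 4 * 0.125))            # gain 1.5: only zero survives
```

The last line also illustrates the parallel with quantification: “contains elements distinct from 0” fails here, while it holds in the gain-1 case.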

Ok, thanks to both of you for your answers. I see now that the components are not invariant under rotation.

This is a big departure from the way control engineers are used to thinking about flow diagrams, where each component has input and output labels, and as long as outputs are connected to inputs the diagram represents some dynamical system (or anyway just a system of equations).

Arrows are there just as an additional help, but really the directionality is only given by the components, which are seen as functions (not relations) that process inputs to produce outputs. John said something about that here. In these kinds of flow diagrams one can rotate (and stretch) anything and nothing changes (provided the same outputs are connected to the same inputs), because the underlying equations are the same.

When you introduce relations into the mix, you probably gain a lot of power to represent more things in a more compact way, but I think the rules about what is an input and what is an output, and especially which you can connect to which, become a little less obvious, at least from my point of view.

I think that this (the introduction of relations) is probably why in this formalism the vertical ordering of generators is important, and, as Jason said, you need to preserve it and make sure that “individual components have all input wires above the component, and all output wires below the component”.

So I think that if you end up talking to control engineers it might be important to stress these differences…

Thanks

Thanks for helping us start thinking about what we’ll need to say to be comprehensible to control theorists. I think the way we’re doing things is less different from the ‘usual’ way than you believe; in fact I feel we’re just formalizing the usual way. In particular, I believe that ordinary signal flow diagrams give, in general, relations between inputs and outputs rather than just functions. But to explain this requires examples and pictures… so instead of trying to convince you here, I’ll make sure we write the paper in a way that includes these!

I am sure it’s just me not fully getting the “rules of the game”, probably because I’m starting from my own assumptions.

Maybe what’s going on is that blocks really are rotation-invariant, but you are using similar symbols for different things (e.g. addition and coaddition), so you have to use the vertical ordering to eliminate the ambiguity about which one you are referring to.

And the fact that there are no clear input and output labels and no arrows does not help, I think.

For example, if you have a relation F: U ~> V, would you say that there is a fixed direction from U to V (so u is always the input and v always the output), or not?

If I want to think of u as the output and v as the input, can I use the same relation, with the same name and symbol, in the flow diagram (and just connect it backwards), or do I have to use the inverse relation F^-1 : V ~> U (and maybe call it coF or something)?

Related to this, I kind of assume it’s forbidden to connect two inputs directly by an identity morphism, but I might be wrong on that one too.
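One way to see why the direction of a relation is largely a matter of bookkeeping: encode relations as bare sets of pairs. This is my own toy illustration, not an official answer to the questions above:

```python
# A relation F: U ~> V is just a set of pairs.  "Connecting it backwards"
# amounts to taking the converse relation, and chaining components is
# relational composition.

def compose(F, G):
    """Relational composition: first F, then G."""
    return {(u, w) for (u, v1) in F for (v2, w) in G if v1 == v2}

def converse(F):
    """Same set of pairs, read in the other direction."""
    return {(v, u) for (u, v) in F}

# the graph of multiply-by-2 on a few sample points
double = {(x, 2 * x) for x in range(5)}

print(converse(double))                     # reads as multiply-by-1/2
print(compose(double, converse(double)))    # the identity on the sample
```

So the “inverse relation F^-1” always exists as a set of pairs, whether or not F is an invertible function.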

Sorry about the many questions; hopefully they at least make some sense to you. Anyway… just trying to get to the bottom of this.

I’m busy travelling from Cambridge to Erlangen today so I’ll answer your questions tomorrow! The issue is that I’m trying to describe signal flow diagrams as morphisms in a category, and this automatically forces certain ways of thinking—ways that I’m completely used to, but apparently go against how practitioners of control theory (or at least you) like to think!

I think that if you see things from afar then there are no differences and it’s clearly the same thing.

However, looking closely, I feel that there are a few small differences (either in the conventions used or in the underlying formalization, or both) which I haven’t been able to pinpoint exactly yet.

It looks like there might be slight differences in the set of assumptions regarding inputs, outputs, the signals’ direction of propagation, and a preferential direction in which components are drawn… anyway, all interesting stuff.

A non-mathematical ‘context’ point: often, a control system is needed not because the ‘system’ (perhaps better called ‘effector’?) is likely to destroy itself, but because it is too expensive or even impossible to calculate how much effect is needed to get/keep the perception to/at the needed setting (e.g. whether a bird is sitting on the end of the telescope or not, and if so, how heavy it is…). This could perhaps be added to the picture by adding a ‘disturbance’ to the output of the effector before transforming it into the perception (Powers 1973, _Behavior: the Control of Perception_).
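The point about an unknown disturbance can be made concrete with a tiny simulation. This is my own minimal sketch (not taken from Powers’ book): an integral feedback loop drives the perception to the setpoint without ever knowing the disturbance.

```python
# An unknown disturbance d is added to the effector's output before it
# becomes the perception.  Simple integral feedback accumulates the error
# and still settles the perception at the setpoint, with no model of d.

def run(setpoint, disturbance, steps=200, gain=0.5):
    u = 0.0                            # effector output
    perception = disturbance
    for _ in range(steps):
        perception = u + disturbance   # disturbance corrupts the output
        error = setpoint - perception
        u += gain * error              # integral control: accumulate the error
    return perception

print(run(10.0, disturbance=3.0))      # settles at ~10.0 despite d = 3
print(run(10.0, disturbance=-7.5))     # settles at ~10.0 despite d = -7.5
```

The error shrinks geometrically (by the factor 1 − gain each step), so after a couple hundred steps it is far below floating-point noise.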

I have a question. I like to play with tensor networks (for quantum many-body systems), and we frequently use tensor network diagrams as a tool for representing quantum states, operators, etc. These diagrams live in a symmetric monoidal category whose monoidal structure is the tensor product (with a dagger as well). This lets us deal with tensor contractions – generally speaking, multiplication.

But sometimes I want to add things. Perhaps the superposition of two wavefunctions or the sum of two operators. Or perhaps it is useful to perform the direct sum of two vector spaces (for instance, we take advantage of symmetries by breaking up the Hilbert space into its different symmetry sectors). However, in these cases I find that drawing diagrams becomes more difficult. I didn’t realize until today that we can have a symmetric monoidal category with parallel wires being direct sums (actually I had been using diagrams for these recently without realizing it – that seems to be the beauty of category theory).

Is there some nice category that includes BOTH direct sums and tensor products? Has a nice, graphical language been developed to deal with this? I think that could be quite helpful…

Andy wrote:

Yes: read about 2-rigs and rig categories on the nLab.

There are a number of related notions here, and it takes a little effort to see which one you need for a given purpose, but very often what you want is a symmetric monoidal category with finite colimits such that taking the tensor product with any object gives a functor that distributes over colimits. This is my own personal favorite definition of 2-rig. Examples include and , where the addition in both cases is the coproduct, a special case of finite colimits.

The obvious graphical notation here gets a bit awkward so people don’t seem to use it much. The problem is that if we denote using parallel wires, we need to make up some slightly awkward notation for and vice versa. At least, nobody has found a notation where the distributive law is ‘visually obvious’ and also easy to draw. You can use 3d diagrams with going in one direction and in another, but it seems to be more pain than it’s worth.