*To understand ecosystems, ultimately will be to understand networks.* – B. C. Patten and M. Witkamp

A while back I decided one way to apply my math skills to help save the planet was to start pushing toward green mathematics: a kind of mathematics that can interact with biology and ecology just as fruitfully as traditional mathematics interacts with physics. As usual with math, the payoffs will come slowly, but they may be large. It’s not a substitute for doing other, more urgent things—but if mathematicians don’t do this, who will?

As a first step in this direction, I decided to study *networks*.

This May, a small group of mathematicians is meeting in Turin for a workshop on the categorical foundations of network theory, organized by Jacob Biamonte. I’m trying to get us mentally prepared for this. We all have different ideas, yet they should fit together somehow.

Tobias Fritz, Eugene Lerman and David Spivak have all written articles here about their work, though I suspect Eugene will have a lot of completely new things to say, too. Now it’s time for me to say what my students and I have been doing.

Despite my ultimate aim of studying biological and ecological networks, I decided to start by clarifying the math of networks that appear in chemistry and engineering, since these are simpler, better understood, useful in their own right, and probably a good warmup for the grander goal. I’ve been working with Brendan Fong on electrical circuits, and with Jason Erbele on control theory. Let me talk about this paper:

• John Baez and Jason Erbele, Categories in control.

Control theory is the branch of engineering that focuses on manipulating open systems—systems with inputs and outputs—to achieve desired goals. In control theory, signal-flow diagrams are used to describe linear ways of manipulating signals, for example smooth real-valued functions of time. Here’s a real-world example; click the picture for more details:

For a category theorist, at least, it is natural to treat signal-flow diagrams as string diagrams in a symmetric monoidal category. This forces some small changes of perspective, which I’ll explain, but more important is the question: *which symmetric monoidal category?*

We argue that the answer is: the category of finite-dimensional vector spaces over a certain field, but with *linear relations* rather than linear maps as morphisms, and *direct sum* rather than tensor product providing the symmetric monoidal structure. We use the field $\mathbb{R}(s)$ consisting of rational functions in one real variable $s$. This variable has the meaning of differentiation. A linear relation from $k^m$ to $k^n$ is thus a system of linear constant-coefficient ordinary differential equations relating $m$ ‘input’ signals and $n$ ‘output’ signals.

Our main goal in this paper is to provide a complete ‘generators and relations’ picture of this symmetric monoidal category, with the generators being familiar components of signal-flow diagrams. It turns out that the answer has an intriguing but mysterious connection to ideas that are familiar in the diagrammatic approach to quantum theory! Quantum theory also involves linear algebra, but it uses linear maps between Hilbert spaces as morphisms, and the tensor product of Hilbert spaces provides the symmetric monoidal structure.

We hope that the category-theoretic viewpoint on signal-flow diagrams will shed new light on control theory. However, in this paper we only lay the groundwork.

### Signal flow diagrams

There are several basic operations that one wants to perform when manipulating signals. The simplest is multiplying a signal by a scalar. A signal can be amplified by a constant factor:

$$f(t) \mapsto c f(t), \qquad c \in \mathbb{R}.$$

We can write this as a string diagram:

Here the labels $f$ and $cf$ on top and bottom are just for explanatory purposes and not really part of the diagram. Control theorists often draw arrows on the wires, but this is unnecessary from the string diagram perspective. Arrows on wires are useful to distinguish objects from their duals, but ultimately we will obtain a compact closed category where each object is its own dual, so the arrows can be dropped. What we really need is for the box denoting scalar multiplication to have a clearly defined input and output. This is why we draw it as a triangle. Control theorists often use a rectangle or circle, using arrows on wires to indicate which carries the input and which the output.

A signal can also be integrated with respect to the time variable:

Mathematicians typically take differentiation as fundamental, but engineers sometimes prefer integration, because it is more robust against small perturbations. In the end it will not matter much here. We can again draw integration as a string diagram:

Since this looks like the diagram for scalar multiplication, it is natural to extend $\mathbb{R}$ to $\mathbb{R}(s)$, the field of rational functions of a variable $s$ which stands for differentiation. Then differentiation becomes a special case of scalar multiplication, namely multiplication by $s$, and integration becomes multiplication by $1/s$. Engineers accomplish the same effect with Laplace transforms, since differentiating a signal $f$ is equivalent to multiplying its Laplace transform

$$F(s) = \int_0^\infty f(t) \, e^{-st} \, dt$$

by the variable $s$. Another option is to use the Fourier transform: differentiating $f$ is equivalent to multiplying its Fourier transform

$$\hat{f}(\omega) = \int_{-\infty}^\infty f(t) \, e^{-i \omega t} \, dt$$

by $i\omega$. Of course, the function needs to be sufficiently well-behaved to justify calculations involving its Laplace or Fourier transform. At a more basic level, it also requires some work to treat integration as the two-sided inverse of differentiation. Engineers do this by considering signals that vanish for $t \le 0$, and choosing the antiderivative that vanishes under the same condition. Luckily all these issues can be side-stepped in a formal treatment of signal-flow diagrams: we can simply treat signals as living in an unspecified vector space over the field $\mathbb{R}(s)$. The field $\mathbb{C}(s)$ would work just as well, and control theory relies heavily on complex analysis. In our paper we work over an arbitrary field $k$.
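As a quick sanity check of the ‘differentiation is multiplication by $s$’ dictionary, here is a small computation using the sympy library; the signal $f(t) = e^{-2t}$ is just an illustrative choice:

```python
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.exp(-2*t)  # an illustrative signal

F = sp.laplace_transform(f, t, s, noconds=True)                # Laplace transform of f
dF = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)   # Laplace transform of f'

# Differentiation becomes multiplication by s, up to the initial value f(0):
assert sp.simplify(dF - (s*F - f.subs(t, 0))) == 0
```

For signals vanishing at $t = 0$ the correction term $f(0)$ disappears, and differentiation is exactly multiplication by $s$.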

The simplest possible signal processor is a rock, which takes the ‘input’ given by the force $F$ on the rock and produces as ‘output’ the rock’s position $q$. Thanks to Newton’s second law $F = ma$, we can describe this using a signal-flow diagram:

Here composition of morphisms is drawn in the usual way, by attaching the output wire of one morphism to the input wire of the next.
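If the rock has mass $m$, this composite amounts to multiplying by $1/m$ and then integrating twice, i.e. scalar multiplication by $1/(ms^2)$. Here is a sketch of that claim in sympy; the input force $F(t) = t$ is an arbitrary illustrative choice:

```python
import sympy as sp

t, u, s = sp.symbols('t u s', positive=True)
m = sp.Symbol('m', positive=True)

force = t                                         # illustrative input signal F(t) = t
accel = force / m                                 # Newton's second law: a = F/m
vel = sp.integrate(accel.subs(t, u), (u, 0, t))   # first integration (vanishes at t = 0)
pos = sp.integrate(vel.subs(t, u), (u, 0, t))     # second integration gives the position

# The same signal via scalar multiplication by 1/(m s^2) in the Laplace domain:
F = sp.laplace_transform(force, t, s, noconds=True)
q = sp.inverse_laplace_transform(F / (m * s**2), s, t)

assert sp.simplify(pos - q) == 0
```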

To build more interesting machines we need more building blocks, such as addition:

and duplication:

When these linear maps are written as matrices, their matrices are transposes of each other. This is reflected in the string diagrams for addition and duplication:

The second is essentially an upside-down version of the first. However, we draw addition as a dark triangle and duplication as a light one because we will later want another way to ‘turn addition upside-down’ that does *not* give duplication. As an added bonus, a light upside-down triangle resembles the Greek letter $\Delta$, the usual symbol for duplication.
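Concretely, with respect to the standard bases, addition and duplication are given by the matrices

```latex
+ = \begin{pmatrix} 1 & 1 \end{pmatrix} \colon k^2 \to k,
\qquad
\Delta = \begin{pmatrix} 1 \\ 1 \end{pmatrix} \colon k \to k^2,
```

which are indeed transposes of each other.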

While they are typically not considered worthy of mention in control theory, for completeness we must include two other building blocks. One is the zero map from the zero-dimensional vector space $\{0\}$ to our field $k$, which we denote as $0$ and draw as follows:

The other is the zero map from $k$ to $\{0\}$, sometimes called ‘deletion’, which we denote as $!$ and draw thus:

Just as the matrices for addition and duplication are transposes of each other, so are the matrices for zero and deletion, though they are rather degenerate, being $1 \times 0$ and $0 \times 1$ matrices, respectively. Addition and zero make $k$ into a **commutative monoid**, meaning that the following relations hold:

The equation at right is the commutative law, and the crossing of strands is the braiding:

by which we switch two signals. In fact this braiding is a symmetry, so it does not matter which strand goes over which:

Dually, duplication and deletion make $k$ into a cocommutative **comonoid**. This means that if we reflect the equations obeyed by addition and zero across the horizontal axis and turn dark operations into light ones, we obtain another set of valid equations:

There are also relations between the monoid and comonoid operations. For example, adding two signals and then duplicating the result gives the same output as duplicating each signal and then adding the results:

This diagram is familiar in the theory of Hopf algebras, or more generally bialgebras. Here it is an example of the fact that the monoid operations on $k$ are comonoid homomorphisms—or equivalently, the comonoid operations are monoid homomorphisms.
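As an equation of linear maps from $k^2$ to $k^2$, this law reads (with $\sigma$ the braiding that switches the two middle factors):

```latex
\Delta \circ (+)
\;=\;
\big( (+) \oplus (+) \big)
\circ
\big( 1_k \oplus \sigma \oplus 1_k \big)
\circ
(\Delta \oplus \Delta).
```

Both sides send $(x, y)$ to $(x + y, \, x + y)$.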

We summarize this situation by saying that $k$ is a **bimonoid**. These are all the bimonoid laws, drawn as diagrams:

The last equation means we can actually make the diagram at left disappear, since it equals the identity morphism on the 0-dimensional vector space, which is drawn as *nothing*.

So far all our string diagrams denote linear maps. We can treat these as morphisms in the category $\mathrm{FinVect}_k$, where objects are finite-dimensional vector spaces over a field $k$ and morphisms are linear maps. This category is equivalent to the category where the only objects are the vector spaces $k^n$ for $n \ge 0$, and then morphisms can be seen as matrices. The space of signals is a vector space $V$ over $k$ which may not be finite-dimensional, but this does not cause a problem: an $n \times m$ matrix with entries in $k$ still defines a linear map from $V^m$ to $V^n$ in a functorial way.

In applications of string diagrams to quantum theory, we make $\mathrm{FinVect}_k$ into a symmetric monoidal category using the tensor product of vector spaces. In control theory, we instead make $\mathrm{FinVect}_k$ into a symmetric monoidal category using the *direct sum* of vector spaces. In Lemma 1 of our paper we prove that for any field $k$, $\mathrm{FinVect}_k$ with direct sum is generated as a symmetric monoidal category by the one object $k$ together with these morphisms:

where $c \in k$ is arbitrary.

However, these generating morphisms obey some unexpected relations! For example, we have:

It thus becomes important to find a complete set of relations obeyed by these generating morphisms, obtaining a presentation of $\mathrm{FinVect}_k$ as a symmetric monoidal category. We do this in Theorem 2. In brief, these relations say:

(1) $k$, equipped with addition, zero, duplication and deletion, is a bicommutative bimonoid;

(2) the rig operations of $k$ can be recovered from the generating morphisms;

(3) all the generating morphisms commute with scalar multiplication.

Here item (2) means that $+$, $\cdot$, $0$ and $1$ in the field $k$ can be expressed in terms of signal-flow diagrams as follows:

Multiplicative inverses cannot be so expressed, so our signal-flow diagrams so far do not know that $k$ is a field. Additive inverses also cannot be expressed in this way. So, we expect that a version of Theorem 2 will hold whenever $k$ is a mere rig: that is, a ‘ring without negatives’, like the natural numbers $\mathbb{N}$. The one change is that instead of working with vector spaces, we should work with finitely presented free $k$-modules.

Item (3), the fact that all our generating morphisms commute with scalar multiplication, amounts to these diagrammatic equations:

While Theorem 2 is a step towards understanding the category-theoretic underpinnings of control theory, it does not treat signal-flow diagrams that include ‘feedback’. Feedback is one of the most fundamental concepts in control theory because a control system without feedback may be highly sensitive to disturbances or unmodeled behavior. Feedback allows these uncontrolled behaviors to be mollified. As a string diagram, a basic feedback system might look schematically like this:

The user inputs a ‘reference’ signal, which is fed into a controller, whose output is fed into a system, which control theorists call a ‘plant’, which in turn produces its own output. But then the system’s output is duplicated, and one copy is fed into a sensor, whose output is added to (or, if we prefer, subtracted from) the reference signal.

In string diagrams—unlike in the usual thinking on control theory—it is essential to be able to read any diagram from top to bottom as a composite of tensor products of generating morphisms. Thus, to incorporate the idea of feedback, we need two more generating morphisms. These are the ‘cup’:

and ‘cap’:

These are not maps: they are relations. The cup imposes the relation that its two inputs be equal, while the cap does the same for its two outputs. This is a way of describing how a signal flows around a bend in a wire.

To make this precise, we use a category called $\mathrm{FinRel}_k$. An object of this category is a finite-dimensional vector space over $k$, while a morphism from $U$ to $V$, denoted $L \colon U \rightsquigarrow V$, is a **linear relation**, meaning a linear subspace $L \subseteq U \oplus V$.

In particular, when $k = \mathbb{R}(s)$, a linear relation from $k^m$ to $k^n$ is just an arbitrary system of constant-coefficient linear ordinary differential equations relating $m$ input variables and $n$ output variables.

Since the direct sum $U \oplus V$ is also the cartesian product of $U$ and $V$, a linear relation is indeed a relation in the usual sense, but with the property that if $u$ is related to $v$ and $u'$ is related to $v'$, then $cu + c'u'$ is related to $cv + c'v'$ whenever $c, c' \in k$.

We compose linear relations $L \colon U \rightsquigarrow V$ and $L' \colon V \rightsquigarrow W$ as follows:

$$L' L = \{ (u, w) \colon \exists \, v \in V \;\; (u, v) \in L \text{ and } (v, w) \in L' \}.$$
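This composition rule is easy to implement. Here is a hypothetical helper of my own devising (the name and representation are not from the paper): a relation is stored as a sympy matrix whose columns span the subspace, and composition matches up the middle coordinates via a nullspace computation.

```python
import sympy as sp

def compose(L, Lp, du, dv, dw):
    """Compose linear relations L ⊆ k^du ⊕ k^dv and Lp ⊆ k^dv ⊕ k^dw.
    Each relation is a Matrix whose columns span the subspace; the result
    is a Matrix whose columns span the composite in k^du ⊕ k^dw."""
    Gu, Gv = L[:du, :], L[du:, :]       # (u, v)-parts of the generators of L
    Hv, Hw = Lp[:dv, :], Lp[dv:, :]     # (v, w)-parts of the generators of Lp
    # Coefficient pairs (x, y) with Gv*x = Hv*y, i.e. matching middle values:
    matched = sp.Matrix.hstack(Gv, -Hv).nullspace()
    cols = [sp.Matrix.vstack(Gu * n[:L.cols, 0], Hw * n[L.cols:, 0])
            for n in matched]
    return sp.Matrix.hstack(*cols) if cols else sp.zeros(du + dw, 1)

# The graph of 'multiply by 2' composed with the graph of 'multiply by 3'
# spans the graph of 'multiply by 6':
times2 = sp.Matrix([[1], [2]])
times3 = sp.Matrix([[1], [3]])
print(compose(times2, times3, 1, 1, 1))  # a column proportional to (1, 6)
```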

Any linear map $f \colon U \to V$ gives a linear relation $F \colon U \rightsquigarrow V$, namely the graph of that map:

$$F = \{ (u, f(u)) \colon u \in U \}.$$

Composing linear maps thus becomes a special case of composing linear relations, so $\mathrm{FinVect}_k$ becomes a subcategory of $\mathrm{FinRel}_k$. Furthermore, we can make $\mathrm{FinRel}_k$ into a monoidal category using direct sums, and it becomes symmetric monoidal using the braiding already present in $\mathrm{FinVect}_k$.

In these terms, the **cup** is the linear relation

$$\cup \colon k^2 \rightsquigarrow \{0\}$$

given by

$$\cup = \{ (x, x) \colon x \in k \} \subseteq k^2 \oplus \{0\},$$

while the **cap** is the linear relation

$$\cap \colon \{0\} \rightsquigarrow k^2$$

given by

$$\cap = \{ (x, x) \colon x \in k \} \subseteq \{0\} \oplus k^2.$$

These obey the **zigzag relations**:

Thus, they make $\mathrm{FinRel}_k$ into a compact closed category where $k$, and thus every object, is its own dual.
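Written as composites rather than pictures, the zigzag relations say (suppressing associators, with $1_k$ the identity on $k$):

```latex
(1_k \oplus \cup) \circ (\cap \oplus 1_k) = 1_k,
\qquad
(\cup \oplus 1_k) \circ (1_k \oplus \cap) = 1_k.
```

For example, in the first composite an input $x$ is related to $(y, y, x)$ by $\cap \oplus 1_k$, and then $1_k \oplus \cup$ forces $y = x$, so the composite relates $x$ to $x$.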

Besides feedback, one of the things that make the cap and cup useful is that they allow any morphism to be ‘plugged in backwards’ and thus ‘turned around’. For instance, turning around integration:

we obtain differentiation. In general, using caps and cups we can turn around any linear relation $L \colon U \rightsquigarrow V$ and obtain a linear relation $L^\dagger \colon V \rightsquigarrow U$, called the **adjoint** of $L$, which turns out to be given by

$$L^\dagger = \{ (v, u) \colon (u, v) \in L \}.$$

For example, if $c \ne 0$, the adjoint of scalar multiplication by $c$ is multiplication by $c^{-1}$:

Thus, caps and cups allow us to express multiplicative inverses in terms of signal-flow diagrams! One might think that a problem arises when $c = 0$, but no: the adjoint of scalar multiplication by $0$ is the perfectly well-defined linear relation $\{ (0, x) \colon x \in k \} \subseteq k \oplus k$:

In Lemma 3 we show that $\mathrm{FinRel}_k$ is generated, as a symmetric monoidal category, by these morphisms:

where $c \in k$ is arbitrary.

In Theorem 4 we find a complete set of relations obeyed by these generating morphisms, thus giving a presentation of $\mathrm{FinRel}_k$ as a symmetric monoidal category. To describe these relations, it is useful to work with adjoints of the generating morphisms. We have already seen that the adjoint of scalar multiplication by $c$ is scalar multiplication by $c^{-1}$, except when $c = 0$. Taking adjoints of the other four generating morphisms, we obtain four important but perhaps unfamiliar linear relations. We draw these as ‘turned around’ versions of the original generating morphisms:

• **Coaddition** is a linear relation from $k$ to $k^2$ that holds when the two outputs sum to the input:

• **Cozero** is a linear relation from $k$ to $\{0\}$ that holds when the input is zero:

• **Coduplication** is a linear relation from $k^2$ to $k$ that holds when the two inputs both equal the output:

• **Codeletion** is a linear relation from $\{0\}$ to $k$ that holds always:

Since coaddition, cozero, coduplication and codeletion automatically obey turned-around versions of the relations obeyed by addition, zero, duplication and deletion, we see that $k$ acquires a *second* bicommutative bimonoid structure when considered as an object in $\mathrm{FinRel}_k$.

Moreover, the four dark operations make $k$ into a Frobenius monoid: $k$ is a monoid via addition and zero, a comonoid via coaddition and cozero, and the **Frobenius relation** holds:

All three expressions in this equation are linear relations saying that the sum of the two inputs equals the sum of the two outputs.
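Explicitly, each of the three composites is the same linear relation from $k^2$ to $k^2$:

```latex
\{\, (x_1, x_2, y_1, y_2) \in k^2 \oplus k^2 \;:\; x_1 + x_2 = y_1 + y_2 \,\}.
```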

The operation sending each linear relation $L$ to its adjoint $L^\dagger$ extends to a contravariant functor

$$\dagger \colon \mathrm{FinRel}_k \to \mathrm{FinRel}_k$$

which obeys a list of properties that are summarized by saying that $\mathrm{FinRel}_k$ is a **†-compact category**. Because two of the operations in the Frobenius monoid are adjoints of the other two, it is a **†-Frobenius monoid**.

This Frobenius monoid is also **special**, meaning that comultiplication (in this case coaddition) followed by multiplication (in this case addition) equals the identity:

This Frobenius monoid is also commutative—and cocommutative, but for Frobenius monoids this follows from commutativity.

Starting around 2008, commutative special †-Frobenius monoids have become important in the categorical foundations of quantum theory, where they can be understood as ‘classical structures’ for quantum systems. The category $\mathrm{FinHilb}$ of finite-dimensional Hilbert spaces and linear maps is a †-compact category, where any linear map $f \colon U \to V$ has an adjoint $f^\dagger \colon V \to U$ given by

$$\langle f^\dagger v, u \rangle = \langle v, f u \rangle$$

for all $u \in U$ and $v \in V$. A commutative special †-Frobenius monoid in $\mathrm{FinHilb}$ is then the same as a Hilbert space with a chosen orthonormal basis. The reason is that given an orthonormal basis $e_i$ for a finite-dimensional Hilbert space $H$, we can make $H$ into a commutative special †-Frobenius monoid with multiplication given by

$$e_i \otimes e_j \mapsto \begin{cases} e_i & i = j \\ 0 & i \ne j \end{cases}$$

and unit given by

$$1 \mapsto \sum_i e_i.$$

The comultiplication duplicates basis states:

$$e_i \mapsto e_i \otimes e_i.$$

Conversely, any commutative special †-Frobenius monoid in $\mathrm{FinHilb}$ arises this way.
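The ‘special’ law for this Frobenius monoid is easy to check numerically. A minimal numpy sketch for dimension 3 (the convention that $e_i \otimes e_j$ is column $in + j$ of the flattened tensor square is my own choice):

```python
import numpy as np

n = 3

# Multiplication m: H ⊗ H → H sends e_i ⊗ e_j to e_i if i = j, and to 0 otherwise.
m = np.zeros((n, n * n))
for i in range(n):
    m[i, i * n + i] = 1.0

# Comultiplication is the adjoint: it duplicates basis states, e_i ↦ e_i ⊗ e_i.
delta = m.T

# 'Special' means comultiplication followed by multiplication is the identity:
assert np.allclose(m @ delta, np.eye(n))
```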

Considerably earlier, around 1995, commutative Frobenius monoids were recognized as important in topological quantum field theory. The reason, ultimately, is that the free symmetric monoidal category on a commutative Frobenius monoid is the category with 2-dimensional oriented cobordisms as morphisms. But the free symmetric monoidal category on a commutative *special* Frobenius monoid was worked out even earlier: it is the category with finite sets as objects, where a morphism is an isomorphism class of cospans $X \to S \leftarrow Y$.

This category can be made into a †-compact category in an obvious way, and then the 1-element set becomes a commutative special †-Frobenius monoid.

For all these reasons, it is interesting to find a commutative special †-Frobenius monoid lurking at the heart of control theory! However, the Frobenius monoid here has yet another property, which is more unusual. Namely, the unit followed by the counit is the identity:

We call a special Frobenius monoid that also obeys this extra law **extra-special**. One can check that the free symmetric monoidal category on a commutative extra-special Frobenius monoid is the category with finite sets as objects, where a morphism from $X$ to $Y$ is an equivalence relation on the disjoint union $X \sqcup Y$, and we compose $R \colon X \to Y$ and $S \colon Y \to Z$ by letting $R$ and $S$ generate an equivalence relation on $X \sqcup Y \sqcup Z$ and then restricting this to $X \sqcup Z$.
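This composition rule is exactly the union-find algorithm. Here is a small sketch (the encoding is mine, not from the text: elements of $X$, $Y$, $Z$ are numbered consecutively, and a morphism is any list of pairs generating the equivalence relation):

```python
def compose_corelations(R, S, nX, nY, nZ):
    """Compose morphisms in the category where a morphism X -> Y is an
    equivalence relation on X ⊔ Y.  Elements of X ⊔ Y are numbered
    0..nX+nY-1 (X first); likewise for Y ⊔ Z in S.  R and S are lists of
    pairs generating the relations; returns the blocks of the composite
    restricted to X ⊔ Z."""
    total = nX + nY + nZ
    parent = list(range(total))

    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]   # path compression
            a = parent[a]
        return a

    def union(a, b):
        parent[find(a)] = find(b)

    for a, b in R:          # pairs live on X ⊔ Y: indices 0..nX+nY-1
        union(a, b)
    for a, b in S:          # pairs live on Y ⊔ Z: shift by nX to line Y up
        union(a + nX, b + nX)
    # Restrict the generated equivalence relation to X ⊔ Z:
    keep = list(range(nX)) + list(range(nX + nY, total))
    blocks = {}
    for e in keep:
        blocks.setdefault(find(e), []).append(e)
    return sorted(blocks.values())

# X, Y, Z each a one-element set; R glues x ~ y and S glues y ~ z.
# The composite glues x (index 0) to z (index 2):
print(compose_corelations([(0, 1)], [(0, 1)], 1, 1, 1))   # → [[0, 2]]
```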

As if this were not enough, the light operations share many properties with the dark ones. In particular, these operations make $k$ into a commutative extra-special †-Frobenius monoid in a second way. In summary:

• $k$ is a bicommutative bimonoid with addition, zero, duplication and deletion;

• $k$ is a bicommutative bimonoid with coduplication, codeletion, coaddition and cozero;

• $k$ is a commutative extra-special †-Frobenius monoid with addition, zero, coaddition and cozero;

• $k$ is a commutative extra-special †-Frobenius monoid with duplication, deletion, coduplication and codeletion.

It should be no surprise that with all these structures built in, signal-flow diagrams are a powerful method of designing processes.

However, it is surprising that most of these structures are present in a seemingly very different context: the so-called ZX calculus, a diagrammatic formalism for working with complementary observables in quantum theory. This arises naturally when one has an $n$-dimensional Hilbert space $H$ with two orthonormal bases $e_1, \dots, e_n$ and $f_1, \dots, f_n$ that are mutually unbiased, meaning that

$$|\langle e_i, f_j \rangle|^2 = \frac{1}{n}$$

for all $1 \le i, j \le n$. Each orthonormal basis makes $H$ into a commutative special †-Frobenius monoid in $\mathrm{FinHilb}$. Moreover, the multiplication and unit of either one of these Frobenius monoids fits together with the comultiplication and counit of the other to form a bicommutative bimonoid. So, we have all the structure present in the list above—except that these Frobenius monoids are only extra-special if $H$ is 1-dimensional.

The field $k$ is also a 1-dimensional vector space, but this is a red herring: in $\mathrm{FinRel}_k$ *every* finite-dimensional vector space naturally acquires all four structures listed above, since addition, zero, duplication and deletion are well-defined and obey all the relations we have discussed. Jason and I focus on $k$ in our paper simply because it generates all the objects via direct sum.

Finally, in $\mathrm{FinRel}_k$ the cap and cup are related to the light and dark operations as follows:

Note the curious factor of $-1$ in the second equation, which breaks some of the symmetry we have seen so far. This equation says that two elements $x, y \in k$ sum to zero if and only if $y = -x$. Using the zigzag relations, the two equations above give:

We thus see that in $\mathrm{FinRel}_k$, both additive and multiplicative inverses can be expressed in terms of the generating morphisms used in signal-flow diagrams.

Theorem 4 of our paper gives a presentation of $\mathrm{FinRel}_k$ based on the ideas just discussed. Briefly, it says that $\mathrm{FinRel}_k$ is equivalent to the symmetric monoidal category generated by an object $k$ and these morphisms:

• addition

• zero

• duplication

• deletion

• scalar multiplication by $c$, for any $c \in k$

• cup

• cap

obeying these relations:

(1) $k$ is a bicommutative bimonoid with addition, zero, duplication and deletion;

(2) the cup and cap obey the zigzag equations;

(3) $k$ is a commutative extra-special †-Frobenius monoid with addition, zero, coaddition and cozero;

(4) $k$ is a commutative extra-special †-Frobenius monoid with duplication, deletion, coduplication and codeletion;

(5) the field operations of $k$ can be recovered from the generating morphisms;

(6) the generating morphisms in items (1)–(4) commute with scalar multiplication.

Note that item (2) makes $\mathrm{FinRel}_k$ into a †-compact category, allowing us to mention the adjoints of generating morphisms in the subsequent relations. Item (5) means that $+$, $\cdot$, $0$, $1$, and also additive and multiplicative inverses in the field $k$, can be expressed in terms of signal-flow diagrams in the manner we have explained.

So, we have a good categorical understanding of the linear algebra used in signal flow diagrams!

Now Jason is moving ahead to apply this to some interesting problems… but that’s another story, for later.

Great stuff! I’m probably too tired to have any sensible thoughts right now, but let me try anyway.

The zero generator looks a bit redundant, because you can write it as codeletion followed by scalar multiplication by $0$. Can you similarly write addition in terms of the other generators? (I guess not, but how do you prove this?)

The diagrams used here remind me of linear network coding. There, one also likes to draw diagrams in $\mathrm{FinVect}_k$, considered as a monoidal category under direct sum, where $k$ is some finite field.

I don’t know how to write addition in terms of other generators, and like you I doubt it’s possible. I guess one should be able to figure out the subcategory generated without addition.

You noted that in $\mathrm{FinRel}_k$ the generator ‘zero’

is the composite of codeletion

and scalar multiplication by zero

It should be a fun puzzle to derive this from the relations we listed, just to check that Jason found enough relations!

But I also think the generator ‘deletion’

should be the composite of scalar multiplication by zero

and cozero

So in the end I don’t think zero is any more or less redundant than deletion. We’re more interested in a symmetrical and beautiful set of generators than a minimal set.

It takes a while to get used to coaddition, cozero, codeletion and coduplication, since they forgot to tell us about those in school.

I was looking into finding minimal sets of generators at one point, so I will make a couple of comments on the matter.

Deletion can be written as a composite using only duplication and cup. Similarly, the generator ‘zero’ can be written as a composite using addition, scalar multiplication by -1, and cap. That one could be considered cheating if you don’t want to take scalar multiplication by -1 to be fundamental, but if you Frankenstein stuff together, you might be able to get the ‘zero’ generator from cap, cup, duplication, and addition.

Notwithstanding comment 1, it is plain that the list of generators, which includes one scalar multiplication generator for each element of the field $k$, has some (extreme, in some cases) redundancy. None of the infinitely many scalar multiplication generators are necessary for $k = \mathbb{Q}$, for instance. We can get them all “for free” because $\mathbb{Q}$ is the smallest field that contains the rig $\mathbb{N}$, which we get “for free” from duplication and addition of identity morphisms. Scalar multiplication generators only become necessary when you do something crazy like take $k$ to be the field $\mathbb{R}$. There you really do need scalar multiplication by $\pi$ (or some rational multiple thereof) to be in your set of generators.

Summing up comments 1 and 2, any linear transformation of vector spaces over $\mathbb{Q}$ (probably)* only requires a very short list of generators. It would not be nice to read or draw.

I can think of another way to make a list of only four generators that does the same job and definitely works, but I will leave that as a puzzle.

*I’m approximately 95% confident ‘zero’ doesn’t need to be on the list I give here.

Here’s a new paper on network theory:

• John Baez and Brendan Fong, A compositional framework for passive linear networks.

While my paper with Jason Erbele, Categories in control, studies signal-flow diagrams, this one focuses on circuit diagrams. The two are different, but closely related. I’ll talk about their relation at the Turin workshop in May. For now, let me just talk about this paper with Brendan.

Over on the n-Category Café, Tim Campion wrote:


If your head doesn’t spin, you’re not doing enough math!

But it might help to say this. In diagrammatic algebra we love commutative monoids:

and we love their upside-down version, cocommutative comonoids:

In linear algebra addition gives us a (dark) commutative monoid, and duplication gives us a (light) cocommutative comonoid, so we are very happy.

But when we work with linear relations we can ‘reflect’ any morphism to get one pointing the other way, so we get another (light) commutative monoid and a (dark) cocommutative comonoid… so we are twice as happy! There are two main ways a monoid and a comonoid can fit together. They can form a Frobenius monoid:

and indeed that’s what happens with the dark operations… and also with the light operations:

Or, they can form a bimonoid:

and that’s how the dark operations interact with the light ones… but it’s also how the light ones interact with the dark ones:

So, we’re really in a maximally pleasant context.

There’s a lot to say about why a Frobenius monoid and a bimonoid are the two main ways for a monoid and a comonoid to fit together, but I’ll leave that as a puzzle. Tim wrote:

Indeed the category of linear relations is ‘poset-enriched’, which makes it a 2-category of a specially simple sort.

I haven’t thought much about this. Some people have looked at this idea much more generally: for any regular category you can define a category of relations, which is poset-enriched, and indeed forms a pleasant sort of category called an ‘allegory’. But there should be extra beautiful features in the special case we’re considering, or maybe in any abelian category.

Jason Erbele is studying controllability, observability and stability for signal flow networks. We’ll get back to you on that!

Grothendieck pioneered using the paradigm of category theory to study mathematical structure. Lawvere, in the 60s, worked out some deep implications regarding logic. Since then, applications have popped up all over: in linguistics, cognitive science, philosophy, functional programming, dynamical systems, theoretical physics, control theory, to name a few.

If you don’t understand all these hieroglyphics, please reread our blog about categories in control theory, and ask some questions!

Over on the n-Category Café, Rob MacDonald wrote:

Someday I hope category theory will be of use to (at least a nonempty set of) electrical engineers. So far the flow of information is going the other way: by trying to understand what electrical engineers are doing, we’re getting new ideas about category theory.

I haven’t thought much about digital circuits or even nonlinear analog circuits. A bunch of the abstract framework we’re developing is applicable to those kinds of circuits. But I find it frustrating to develop abstract frameworks without applying them to concrete examples, so I’ve chosen linear analog continuous-time circuits as my primary example to start with. Once I understand that example, I’ll be eager to work on other examples.

So, thanks for giving me something to learn about and think about: how a flip-flop can be used to store information! It should be tractable, since the math is “already understood” to the satisfaction of practitioners. Yet there should still be interesting things to learn, by studying it with the help of category theory and other mathematical power tools.

Hi John,

I am late to the party and you may have answered this already:

where do you need finite dimensionality?

Or, to put it differently: if you allow infinite dimensional vector spaces, how much of the structure is there?

We’re focusing on the 1-dimensional vector space over a field $k$, say $k$ itself, and saying “the category of finite-dimensional vector spaces is the free symmetric monoidal category on an object with the following structures…” The ‘tensor product’ in this symmetric monoidal category is direct sum. So, we’re using the fact that every finite-dimensional vector space is a finite direct sum of copies of $k$. In the infinite-dimensional case we would need a suitable notion of ‘infinite direct sums’, e.g. coproducts. But if we used coproducts we could save some work, because coproducts automatically come with some of the structures that we’re listing ‘by hand’ here. Anyway, here’s the good news.

*Every* vector space has the structures we listed here. They all come with addition and duplication maps:

They all come with zero and deletion (the unique linear maps from and to the zero-dimensional vector space):

They all come with the ‘cap’ and ‘cup’ linear relations:

which let you turn around any linear relation $L \colon U \rightsquigarrow V$ and get one $L^\dagger \colon V \rightsquigarrow U$. In other words, any linear subspace of $U \oplus V$ gives one of $V \oplus U$.

And these obey all the axioms I listed!

• $V$ is a bicommutative bimonoid with addition, zero, duplication and deletion;

• $V$ is a bicommutative bimonoid with coduplication, codeletion, coaddition and cozero;

• $V$ is a commutative extra-special †-Frobenius monoid with addition, zero, coaddition and cozero;

• $V$ is a commutative extra-special †-Frobenius monoid with duplication, deletion, coduplication and codeletion.

So, infinite-dimensionality does not hurt.

Hi John,

Are you familiar with Ronald K. Pearson? His book, Discrete-Time Dynamic Models (1999), has a chapter on category theory which I found very insightful. It is an interesting book, in part because it is the only engineering reference I know of that utilizes category theory.

The book emphasizes nonlinear control systems and the relations between different model classes (linear, bilinear, “NARMAX,” etc). He makes a category theoretic distinction between “structural” vs “behavioral” models as well as continuous-time vs discrete-time.

It might be an interesting read for you. Just thought I’d bring it up since R.K. Pearson appears only moderately well-known, having spent most of his career in industry.

I don’t know about Pearson or this book! I should read it! Thanks.

We’re big on categories where the control systems are *morphisms* in a symmetric monoidal category, so that you can compose and tensor them. I’d be simultaneously excited and depressed if Pearson had already done this. That book sounds like something I should read, too. It seems to be available online through the UCR library, and apparently downloadable for 14 days.

I wonder if it has an auto-destruct sequence when you download it that causes smoke to come out of your computer or tablet when it reaches the end of the fortnight, like the tapes in Mission Impossible…

Skimming through chapter 7 (the chapter on category theory), Pearson gives many examples of categories describing a great number of things, where the control systems are morphisms. I did not find him using *monoidal* categories, so there should be great potential just in recasting his examples in the light of symmetric monoidal categories. I like what he has to say in the chapter summary, after the recap of the various topics that he categorified:

“Given the general difficulty of analyzing the qualitative behavior of nonlinear dynamic models, any analytical tool this flexible would seem to merit further examination.”

Sounds interesting! I’ll have to read this book.

By the way, I don’t use ‘categorified’ the way you just did, meaning ‘study using categories’. I consider categorification to be a semi-systematic process of boosting mathematical structures up the ladder of n-categories. For example, we can take the fact that addition of natural numbers is commutative and associative, and categorify it to obtain the fact that the category of finite sets is a symmetric monoidal category under disjoint union.

When I say categorification is ‘semi-systematic’, what I mean is this. The actual systematic process is ‘decategorification’, where we take a category and convert it into a set, namely its set of isomorphism classes of objects. A functor between categories gives a function between their decategorifications. For example, the coproduct

$$+ \colon \mathrm{FinSet} \times \mathrm{FinSet} \to \mathrm{FinSet}$$

gives the function

$$+ \colon \mathbb{N} \times \mathbb{N} \to \mathbb{N}.$$

Categorification is the attempt to reverse this systematic process: to take structures in set-based mathematics and see them as decategorifications of structures in category-based mathematics.
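Decategorification in miniature: the disjoint union of finite sets descends to addition of natural numbers on isomorphism classes. A toy illustration (the tagging is just my way of keeping the two copies disjoint):

```python
def disjoint_union(X, Y):
    # Tag elements so that common members of X and Y stay distinct.
    return [('x', a) for a in X] + [('y', b) for b in Y]

X, Y = ['a', 'b'], ['b', 'c', 'd']
assert len(disjoint_union(X, Y)) == len(X) + len(Y)   # |X ⊔ Y| = |X| + |Y|
```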

Unfortunately (or maybe fortunately) you came to UCR a few years after I was really interested in categorification. But as my grad student I expect you to spread the word! The beginning of this paper is supposed to be fun:

• John Baez and James Dolan, Categorification.

For example, in a multigraph category every object has the structure of a ‘special commutative Frobenius algebra’. That’s a bit of a mouthful, but John defined it a while back, and examples include categories where morphisms are electrical circuits, and categories where morphisms are signal flow diagrams.

Jason Erbele wrote a paper on categories and control theory. Independently, Bonchi, Sobociński and Zanasi did some closely related work:

• Filippo Bonchi, Paweł Sobociński and Fabio Zanasi, Interacting Hopf algebras.

• Filippo Bonchi, Paweł Sobociński and Fabio Zanasi, A categorical semantics of signal flow graphs.

I’ll explain some of the ideas at the Turin meeting on the categorical foundations of network theory. But I also want to talk about this new paper that Simon Wadsley of Cambridge University wrote with my student Nick Woods:

• Simon Wadsley and Nick Woods, PROPs for linear systems.

Commutative Frobenius monoids obey the commutative monoid laws, the cocommutative comonoid laws and the Frobenius laws. They play a fundamental role in 2d topological quantum field theory. Special Frobenius monoids are also well-known. But the ‘extra’ law, which says that a little piece of wire not connected to anything can be thrown away with no effect, is rather new. Jason Erbele and I gave it that name in our work on control theory:

• John Baez and Jason Erbele, Categories in control. (Blog article here.)

My other students are in the math department at U. C. Riverside. Jason Erbele is working on “control theory”, a branch of engineering where you try to design feedback loops to make sure processes run in a stable way. Control theory uses networks called “signal flow diagrams”, and Jason has worked out how to understand these using category theory.

Jason isn’t using Brendan’s framework: he’s using a different one, called PROPs, which were developed a long time ago for use in algebraic topology. My student Franciscus Rebro has been developing it further, for use in our project. It gives a nice way to describe networks in terms of their basic building blocks. It also illuminates the similarity between signal flow diagrams and Feynman diagrams! They’re very similar, but there’s a big difference: in signal flow diagrams the signals are classical, while Feynman diagrams are quantum-mechanical.
