Abstract. To describe systems composed of interacting parts, scientists and engineers draw diagrams of networks: flow charts, electrical circuit diagrams, signal-flow graphs, Feynman diagrams and the like. In principle all these different diagrams fit into a common framework: the mathematics of monoidal categories. This has been known for some time. However, the details are more challenging, and ultimately more rewarding, than this basic insight. Here we explain how various applications of reaction networks and Petri nets fit into this framework.

If you see typos or other problems please let me know now!

I hope to blog a bit about the workshop… it promises to be very interesting.

Thanks! But since it’s a conference on reaction networks, I don’t think explaining the term “reaction networks” is a high priority!

On the other hand, I expect that most of the audience won’t know what a monoidal category is.

(By the way, chemists usually call them “chemical reaction networks”, but since they’re good for other things I call them “reaction networks”. I hope this will help get non-chemists interested. For some reason the organizers of this conference chose, in their title, to call them “chemical networks”.)

Is it known if neural networks also fit into the framework of monoidal categories? Despite some improvement in pattern recognition by convolutional neural networks (deep learning), decisive breakthroughs in the area seem to be lacking since the 80s and 90s, so other theoretical approaches, like the one through category theory, could potentially be useful.

Sorry for the delayed reply. Neural networks of various kinds can indeed be seen as morphisms in monoidal categories of various kinds. These are actually symmetric monoidal categories, and the use of ‘feedback’ means these categories are compact closed or at least traced.

I have been very slowly learning about neural networks to see if there’s anything interesting I can say about them. Simply describing the relevant monoidal categories is not quite interesting enough for me!

I’m sure plenty of my readers know more about neural networks than I do. Thanks to ‘deep learning’, neural networks are big business now!

A good illustration of this is Google’s TensorFlow project. It’s an “open-source software library for machine intelligence”, which makes it easy for people to write their own software using neural networks.

The word “TensorFlow” makes me think of monoidal categories and string diagrams—and I don’t think I’m being silly. Here is their summary:

TensorFlow™ is an open source software library for numerical computation using data flow graphs. Nodes in the graph represent mathematical operations, while the graph edges represent the multidimensional data arrays (tensors) communicated between them.

So there are definitely monoidal categories lurking in the background here! But since this is a highly popular area, I only want to get involved if I can do something interesting: these days I dislike bandwagons, so I only want to jump aboard one if I can take it in a slightly new direction.
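The data-flow-graph idea in that summary is easy to sketch without TensorFlow itself. Here is a toy three-node graph in plain numpy, where the particular operations and numbers are purely illustrative:

```python
import numpy as np

# A tiny data flow graph in the TensorFlow spirit: nodes are operations,
# edges carry multidimensional arrays. (Plain numpy here, not the
# TensorFlow API.)
x = np.array([1.0, 2.0])
W = np.array([[0.5, -1.0],
              [2.0,  0.0]])
b = np.array([0.1, 0.1])

matmul = W @ x                   # node 1: matrix multiply
add = matmul + b                 # node 2: bias add
relu = np.maximum(add, 0.0)      # node 3: nonlinearity
print(relu)                      # [0.  2.1]
```

Each intermediate array is an edge of the graph; composing the nodes end to end is exactly composition of morphisms, which is why the string-diagram picture fits so naturally.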

On Slide 32, your transition boxes are labeled, but the comment is "becomes a Petri net with rates if we choose r1, r2 > 0". Seems like the boxes should be labeled 'r' from this point on.

Thanks! I’m giving the talk in a couple of hours, and it’s already uploaded to the organizers’ computer, so it’s too late to do anything about that, but I can fix it on the web later if I decide to.

Indeed, it’s probably wise for me to use diagrams with boxes labelled by rate constants like r1 and r2 when discussing Petri nets with rates.

One thing that comes up repeatedly when we reason using diagrams is a theorem of the following form:

The generating function over all diagrams is the exponential of the generating function over connected diagrams.

For example, this occurs in probability theory, when we relate moments and cumulants. Analogous, but more sophisticated-seeming, results hold in statistical physics and quantum field theory. A variation of the idea arises in connection with the Bell numbers, and in the coefficients that emerge from iterating the chain rule. A long time ago, I wondered if there was a general “linked cluster theorem” that could be formulated in category-theoretic terms, somehow capturing the basic notions: “a diagram yields a number”; “when we write diagrams beside each other, their numbers multiply”; “we can add clutches of diagrams”. But I could never turn that feeling into something solid and useful.
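The probability-theory instance can be checked numerically. Here is a minimal sketch using the standard recursion relating moments to cumulants; taking every cumulant equal to 1 (the Poisson distribution with mean 1) recovers the Bell numbers mentioned above:

```python
from math import comb

def moments_from_cumulants(kappa, n):
    """Moments m_1..m_n from cumulants kappa_1..kappa_n via the
    standard recursion m_j = sum_{k=1}^j C(j-1, k-1) kappa_k m_{j-k}."""
    m = [1]  # m_0 = 1
    for j in range(1, n + 1):
        m.append(sum(comb(j - 1, k - 1) * kappa[k - 1] * m[j - k]
                     for k in range(1, j + 1)))
    return m[1:]

# Poisson(1): every cumulant equals 1, and the moments are the Bell
# numbers, counting set partitions -- a tiny linked cluster theorem.
print(moments_from_cumulants([1] * 5, 5))  # [1, 2, 5, 15, 52]
```

The exponential relating the two generating functions is hiding in this recursion: each moment is assembled from all ways of partitioning a diagram into connected pieces.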

(Likewise, in supersymmetric quantum mechanics, there’s a “shape invariance criterion” that’s kind of the grown-up version of the commutator for creation and annihilation operators—or for multiplication and differentiation by a formal variable in the theory of generating functions. I wondered if the shape invariance criterion could be categorified, or groupoidified, or something; but I could never take that idea anywhere either.)

Diagrams of various kinds, like the one at the top of this page, or Feynman diagrams of various kinds, are typically morphisms in some monoidal category or other—that’s what I started out explaining in my talk. Whenever you have a monoidal category and a monoidal functor from it to the category of vector spaces, your feeling will become a theorem.

More precisely, ‘closed’ diagrams (those with no inputs or outputs, i.e. no ‘external legs’) will be sent by this functor to linear maps from a 1-dimensional vector space to itself: that is, numbers. A closed diagram formed by setting two closed diagrams side by side will receive as its number the product of the numbers for its two parts. We may not be able to add diagrams in our original monoidal category, but we can always form a monoidal category in which we can! Its morphisms are simply formal linear combinations of morphisms in the original category, so our functor extends uniquely to a monoidal functor on this larger category which is linear on morphisms.
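A minimal concrete instance of this: take the monoidal category whose morphisms are matrices, with matrix multiplication as composition and the Kronecker product as tensoring. Closed diagrams then correspond to 1×1 matrices, i.e. numbers, and setting them side by side multiplies those numbers:

```python
import numpy as np

# A toy monoidal category: objects are dimensions, morphisms are
# matrices. Composition is matrix multiplication; tensoring is the
# Kronecker product. A 'closed diagram' is a morphism from the unit
# object (dimension 1) to itself: a 1x1 matrix, i.e. a number.
f = np.array([[3.0]])
g = np.array([[5.0]])

side_by_side = np.kron(f, g)   # tensoring two closed diagrams...
print(side_by_side)            # [[15.]] ...multiplies their numbers
```

Linear combinations of matrices also make sense here, which is exactly the extra structure the formal-linear-combination construction supplies in general.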

Everyone should learn this stuff, but where? I guess I’ve not been explaining it loudly and clearly enough!

Thanks! That does help clarify things. (I haven’t been completely absent—I did smash a light mill a few weeks ago—just busy, distracted and intermittently ill.) Sometimes, I learn mathematics very slowly. This typically happens because I find an idea interesting and try to read about it, but every exposition presumes background that I do not have. Or worse yet, it is not clear from the reading what is new and important, what is new and pedantic detail, and what is the standard background I could get out of a fifty-year-old book, if I only knew which book to retrieve.

I was figuring that the solidification of my idea would require monoidal categories, since composition of diagrams calls for something that is like a tensor product. Taking formal linear combinations of morphisms in one category to obtain another (and seeing what properties the new category inherits from the old as a result) is still a touch too abstract for me to see immediately as the thing I should try. Hope you’re feeling healthier these days!

composition of diagrams calls for something that is like a tensor product.

Actually composition of diagrams, i.e. attaching the outputs of one to the inputs of the next, calls for composition. It’s an example of composing morphisms in a category:

Setting diagrams side by side is what calls for tensoring. Tensoring is something you can do with morphisms in a monoidal category:

And then there are linear combinations:

Taking formal linear combinations of morphisms in one category to obtain another (and seeing what properties the new category inherits from the old as a result) is still a touch too abstract for me to see immediately as the thing I should try.

You’re not alone! Everyone is familiar with composing operators: for example, if you fry something and then dump it into a pot of soup, you’re composing operators. But taking linear combinations of operators is much harder to visualize: for example, the counterintuitive ‘half-alive, half-dead cat’ that Schrödinger made famous is the result of applying a linear combination of operators to a cat.
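The cat example can even be sketched numerically; here is a toy two-state version, where the state space and operators are of course just illustrative:

```python
import numpy as np

# Toy 2-dimensional state space with basis states |alive> and |dead>.
alive = np.array([1.0, 0.0])

keep = np.eye(2)                 # operator: leave the cat alone
flip = np.array([[0.0, 1.0],
                 [1.0, 0.0]])    # operator: swap alive <-> dead

# Composing operators is easy to picture; a *linear combination* of
# them is what produces the famous half-alive, half-dead superposition:
half_and_half = (keep + flip) / np.sqrt(2)
print(half_and_half @ alive)     # [0.707... 0.707...]
```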

Bob Coecke has emphasized this point: when it comes to quantum mechanics, the superposition principle is the hard part to visualize, while composition and tensoring are easy. So, it’s good to start by emphasizing the monoidal category, and only later see what you can do with linear combinations of morphisms:

• Bob Coecke, Quantum picturalism.

Abstract. The quantum mechanical formalism doesn’t support our intuition, nor does it elucidate the key concepts that govern the behaviour of the entities that are subject to the laws of quantum physics. The arrays of complex numbers are kin to the arrays of 0s and 1s of the early days of computer programming practice. In this review we present steps towards a diagrammatic ‘high-level’ alternative for the Hilbert space formalism, one which appeals to our intuition. It allows for intuitive reasoning about interacting quantum systems, and trivialises many otherwise involved and tedious computations. It clearly exposes limitations such as the no-cloning theorem, and phenomena such as quantum teleportation. As a logic, it supports ‘automation’. It allows for a wider variety of underlying theories, and can be easily modified, having the potential to provide the required step-stone towards a deeper conceptual understanding of quantum theory, as well as its unification with other physical theories. Specific applications discussed here are purely diagrammatic proofs of several quantum computational schemes, as well as an analysis of the structural origin of quantum non-locality. The underlying mathematical foundation of this high-level diagrammatic formalism relies on so-called monoidal categories, a product of a fairly recent development in mathematics. These monoidal categories do not only provide a natural foundation for physical theories, but also for proof theory, logic, programming languages, biology, cooking,…. The challenge is to discover the necessary additional pieces of structure that allow us to predict genuine quantum phenomena.

After I posted my comment, I realized that “composition” was not the word I should have used for the act of writing diagrams side-by-side. It was applicable in the looser, colloquial sense of the word, but not the technical one. But by the time I got back here to correct myself, you’d already made the point better than I could—so thanks! :-)

Not till slide 14 did I find out why the word “reaction”.

Inputs and Outputs: (starting at slide 4) The arrows all seem to flow inward.

Right! That’s perfectly fine, and I wanted to make that point because it’s a bit counterintuitive.

Hi John, I wonder if any of your readers are more into neural networks.

Very clear exposition.

Hi! Long time no “see”!

I figured it would also help other folks to see pictures of composition vs. tensoring.

I didn’t see any response to the comment about neural networks??

Hi, Jim. I just now replied to the comment on neural networks. Thanks for the reminder.

L.S.

This is to let you know that Johan E. Mebius (jemebius@xs4all.nl) died on August 24, 2017.

Yours truly,

Rienk Mebius.