*guest post by David Spivak*

• Part 2: Creating a knowledge network.

### From parts to wholes

Remember where we were. Ologs, linguistically-enhanced sketches, just weren’t doing justice to the idea that each step in a recipe is itself a recipe. But the idea seemed ripe for mathematical formulation.

Thus, I returned to a question I’d wondered about in the very beginning: how is macro-understanding built from micro-understanding? How can multiple individual humans come together, like cells in a multicellular organism, to make a whole that is itself a surviving decision-maker?

There were, and continue to be, a lot of “open-to-Spivak” questions one can ask: How are stories about events built from sub-stories about sub-events? How is macro-economics built from micro-economics? Are large-scale phenomena always based on, and relatable to, interactions between smaller-scale phenomena? For example, I still want to understand, in very basic terms, how classical (large-scale) phenomena are a manifestation of quantum phenomena.

Neuroscience professor Michael Gazzaniga has a similar question: How does cognition arise from the interaction of tiny event-noticers, and how does society emerge from, and affect, individual brains? As he put it in the last paragraph of his book *Who’s In Charge?*, we are in need of a language by which to understand the interfaces of “our layered hierarchical existence”, because doing so “holds the answer to our quest for understanding mind/brain relationships.” He goes on:

> Understanding how to develop a vocabulary for those layered interactions, for me, constitutes the scientific problem of this century.

I tend to be infatuated with this same kind of idea: cognition emerging from interactions between sub-cognitive pieces. This is what got me interested in what I now call “operadic modularity”. Luckily again, my Office of Naval Research hero (now at the Air Force Office of Scientific Research) granted me a chance to study it.

The idea is this: modularity is about arranging many modules into a single whole, which is another module, usable as part of a larger system. Each system of modules is a module of its own, and we see the nesting phenomenon. Operads can model the language of *nestable interface arrangements*, and their algebras can model how interfaces are filled in with the required sorts of modules.

Here, by operad, I mean symmetric colored operad, or symmetric multicategory. Operads are like categories—they have objects, morphisms, identities, and a unital and associative composition formula—the difference is that the domain of a morphism is a finite set of objects (in an operad) rather than a single object (as in a category). So morphisms in an operad are written like $f\colon (x_1,\ldots,x_n)\to y$; we call such a morphism *n*-ary.
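To make the shape of operadic composition concrete, here is a minimal Python sketch of the endomorphism operad of a set: there is a single object (the set itself), an $n$-ary morphism is an $n$-argument function, and composition substitutes operations into the argument slots of another. The names `compose`, `add`, etc. are mine, not from the post or any library.

```python
# Sketch of the endomorphism operad of a set: an n-ary morphism is an
# n-argument function, and operadic composition plugs each "inner"
# operation into one argument slot of an "outer" operation.

def compose(outer, inners):
    """Operadic composition.  `inners` is a list whose length equals
    the arity of `outer`; the composite's arity is the sum of the
    arities of the inner operations."""
    def composite(*args):
        # Split args into consecutive groups, one group per inner op.
        results, i = [], 0
        for g in inners:
            k = g.__code__.co_argcount
            results.append(g(*args[i:i + k]))
            i += k
        return outer(*results)
    return composite

# Some operations on numbers:
add = lambda x, y: x + y      # 2-ary
double = lambda x: 2 * x      # 1-ary
triple = lambda x: 3 * x      # 1-ary

h = compose(add, [double, triple])   # h(x, y) = 2x + 3y
```

Associativity and unitality of this composition are exactly the operad axioms, specialized to functions.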

An early example, formulated operadically by Peter May (the inventor of operads), is called the little 2-cubes operad, denoted $E_2$. It has only one object, say a square ⬜, and an *n*-ary morphism

⬜, …, ⬜ ⟶ ⬜

is any arrangement of *n* non-overlapping squares in a larger square. These arrangements clearly display a nesting property.
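A minimal sketch of this nesting in code, under the simplifying assumption that the little squares are axis-aligned and recorded as $(x, y, \text{side})$ triples inside the unit square; `substitute` (my name, not standard) is operadic composition at one input.

```python
# A morphism in (a discrete version of) the little 2-cubes operad:
# a list of non-overlapping axis-aligned squares (x, y, side), each
# sitting inside the unit square.

def substitute(outer, i, inner):
    """Operadic composition: nest the arrangement `inner` inside the
    i-th little square of `outer`, rescaling it accordingly."""
    X, Y, S = outer[i]
    nested = [(X + S * x, Y + S * y, S * s) for (x, y, s) in inner]
    return outer[:i] + nested + outer[i + 1:]

# Two half-size squares side by side:
two_side_by_side = [(0.0, 0.25, 0.5), (0.5, 0.25, 0.5)]

# Nest a copy of the arrangement inside its own first square:
arr = substitute(two_side_by_side, 0, two_side_by_side)
# arr == [(0.0, 0.375, 0.25), (0.25, 0.375, 0.25), (0.5, 0.25, 0.5)]
```

Iterating `substitute` gives arbitrarily deep nestings, which is the nesting property in the text.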

Another source of examples comes from the fact that every monoidal category $(\mathcal{C},\otimes)$ has an underlying operad $\mathcal{O}_{\mathcal{C}}$ with

$\mathcal{O}_{\mathcal{C}}(x_1,\ldots,x_n;\,y) := \mathrm{Hom}_{\mathcal{C}}(x_1\otimes\cdots\otimes x_n,\ y).$

(Either $\mathcal{C}$ was symmetric monoidal to begin with, or you can add in symmetries, roughly by multiplying each hom-set by the symmetric group $S_n$.) The operad underlying the cartesian monoidal category $(\mathbf{Set},\times)$ of sets is an example I’ll use later.

If you want to think about operads as modeling modularity—building one thing out of many—the first trick is to imagine the codomain object as the exterior and all the domain objects as sitting inside it, on the interior. May’s little 2-cubes operad gives the idea: squares *in* a square. From now on, if I speak of many little objects arranged inside one big object, I always mean it this way: the interior objects constitute the domain, the exterior object is the codomain, and the arrangement itself is the morphism. These arrangements can be nested inside one another, corresponding to composition in the operad.

What are other types of nested phenomena, which we might be able to think about operadically? How about circles wired together in a larger circle? An object in this operad is a circle with some number of wires sticking out; let’s call it a ported-circle. A morphism from *n*-many ported-circles to one ported-circle is any connection pattern involving—i.e., wiring together of—the ports. This description can be interpreted in a few different ways; I usually mean the underlying operad of the monoidal category of “sets and cospans under disjoint union”, but the “spaghetti and meatballs operad” of circular planar arc diagrams is another good interpretation.
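Here is one way to make the cospan reading concrete in Python: a wiring pattern is the map sending each port (inner or outer) to an element of the cospan’s apex (a “wire”), and two ports are connected exactly when they hit the same wire. The tuple encoding of ports is just an illustrative convention of mine.

```python
# A wiring of n ported-circles into one outer ported-circle, modeled
# as the map-to-apex leg of a cospan: every port is sent to a wire,
# and ports soldered together share a wire.

def connected(wiring, p, q):
    """Two ports are wired together iff they map to the same apex wire."""
    return wiring[p] == wiring[q]

# Two 2-ported circles wired in series inside a 2-ported outer circle:
series = {
    ("out", "L"): "w0",
    ("inner", 0, "a"): "w0",   # outer port L soldered to circle 0's port a
    ("inner", 0, "b"): "w1",
    ("inner", 1, "a"): "w1",   # circle 0's port b soldered to circle 1's port a
    ("inner", 1, "b"): "w2",
    ("out", "R"): "w2",
}
```

Composing such wirings amounts to composing cospans by pushout, identifying wires that share a port; that identification is the nesting operation of this operad.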

Once you have an operad, you have a kind of calculus for nestable arrangements. As I’ve been saying, I often think of the morphisms in an operad in terms of pictures, such as wiring diagrams or squares-in-a-square. If you say you want these pictures to “mean something”, you’re probably looking for an algebra $A\colon \mathcal{O}\to\mathbf{Set}$. This operad functor, which acts like a lax functor between monoidal categories, would tell you the set of fillers, or **fills**, that can be placed into each object in the picture.

I often think of the operad $\mathcal{O}$ as a *picture language*, and the $\mathcal{O}$-algebra $A$ as its *intended semantics*. Not only does such a set-valued functor on $\mathcal{O}$ give you a set $A(x)$ of fills for each object $x$, it also gives you a formula for producing a large-scale fill (an element of $A(y)$) from any arrangement of small-scale fills (elements of the $A(x_i)$).

For example, given a pointed space $(X, p)$, you can ask for the set of based 2-spheres

$A(⬜) := \{\, f\colon ⬜ \to X \mid f(\partial ⬜) = p \,\}$

in it. Here, a based 2-sphere $f$ is any element of $A(⬜)$: think of it as a continuous map from the filled-in square ⬜ to $X$ that sends the boundary of the square to the basepoint $p$ of $X$. Given $n$ such spheres in $X$, an arrangement of non-overlapping squares in a square prescribes a new based sphere in $X$. The idea is that you send all the unused space in the big exterior square to the basepoint of $X$, and follow the instructions of the $i$th sphere when you get to the $i$th little square inside. In this way, any “2-fold loop space” gives an algebra of May’s little 2-cubes operad.
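A discrete cartoon of this composition formula, with no topology: take a fill for the square to be a finite set of points in the unit square, and let an arrangement act by shrinking each small fill into its little square and taking the union. This toy algebra is my own illustration, not one from the post.

```python
# A toy algebra for the squares-in-a-square picture: a "fill" for the
# square object is a finite set of points in the unit square.  An
# arrangement of little squares (x, y, side) acts on fills by
# rescaling each small fill into its little square and uniting them --
# the algebra's formula for a large-scale fill from small-scale ones.

def act(arrangement, fills):
    """Apply an arrangement to one fill per little square."""
    big = set()
    for (x, y, s), pts in zip(arrangement, fills):
        big |= {(x + s * px, y + s * py) for (px, py) in pts}
    return big

arr = [(0.0, 0.0, 0.5), (0.5, 0.5, 0.5)]                 # two little squares
fills = [{(0.5, 0.5)}, {(0.0, 0.0), (1.0, 1.0)}]          # one fill for each
result = act(arr, fills)
# result == {(0.25, 0.25), (0.5, 0.5), (1.0, 1.0)}
```

The “unused space” between the little squares simply contributes no points, just as it is collapsed to the basepoint in the loop-space example.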

So recently, I’ve been thinking a lot about operadic modularity, i.e., cases in which a thing can be built out of a bunch of simpler things. Note that not all cases of “nesting” have such a clear picture language. For example, context-free grammars are modular: you build [postal-address] out of [name-part], [street-address], and [zip-part], and you build each of these, e.g., [name-part], in any of several ways (there is an optional suffix part and the option to abbreviate your first name using an initial). The point is, you build things out of smaller parts, which are themselves built out of still smaller parts. Seeing context-free grammars as free operads is one of the things Hermida, Makkai, and Power explained in their paper on higher-dimensional multigraphs.

The operadic notion of modularity can also be applied to building hierarchical protein materials. Like context-free grammars, the operad of such materials doesn’t come with a nice picture language. However, it can be formalized as an operad nonetheless. That is, there is a grammar of actions that one can apply to a bunch of polypeptides, actions such as “attach”, “overlay”, “rigidMotion”, “helix”, and “makeArray”. From these actions, starting with a simple vocabulary of 20 amino acids, you can build quite complex proteins. I’ve joined forces with Tristan Giesa and Ravi Jagadeesan to make such a program. The software package, called *Matriarch*, for “Materi-als Arch-itecture”, should be available soon as an open-source Python library.
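Since *Matriarch* wasn’t yet released at the time of writing, the following is only a hypothetical toy in its spirit, not its actual API: polypeptides as strings over the single-letter amino-acid alphabet, with made-up stand-ins for the “attach” and “makeArray” actions.

```python
# Toy "grammar of actions" on polypeptides (NOT the real Matriarch
# API): sequences are strings of single-letter amino-acid codes, and
# actions build bigger building blocks from smaller ones.

def attach(p, q):
    """Attach two polypeptides end to end."""
    return p + q

def make_array(p, n):
    """Tile n copies of a building block -- a crude 'makeArray'."""
    return attach(p, make_array(p, n - 1)) if n > 1 else p

# A collagen-like block: the Gly-Pro-Pro motif repeated six times.
collagen_like = make_array("GPP", 6)
# collagen_like == "GPP" repeated six times
```

Each action takes a list of building blocks to a new building block, which can in turn feed into further actions; that closure under nesting is what makes the operadic formalization possible.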

There are lots of operads whose morphisms look like string diagrams of various sorts. These operads, which generalize a set-theoretic version of May’s topological little 2-cubes, have clear picture languages. The algebras on such “visualizable” operads can model things like databases and dynamical systems. Over the past year or so, I’ve been writing a series of “worked example” papers, such as those above, in which I explain various picture languages and semantics for them.

I find that operads provide a nice language in which to discuss string diagrams of various sorts. String-diagrammatic languages exist for many different “doctrines”, such as categories, categories without identities, monoidal categories, cartesian monoidal categories, traced monoidal categories, operads, etc. For example, Dylan Rupel and I realized that traced monoidal categories are (well, if you have enough equipment and an expert like Patrick Schultz around) algebras on the operad of oriented 1-cobordisms. It seems to me that the other doctrines above are similarly associated to operads that are “nearby” Cob, e.g., sub-operads of Cob, operads under Cob, etc. Maps between these various operads should induce known adjunctions between the corresponding doctrines.

That brings us to present day. There will be a workshop in Turin in a couple of months, and I think it’ll be a lot of fun:

• Categorical Foundations of Network Theory, May 25-28, ISI Foundation, Turin, Italy.

I’m looking forward to hearing from John Baez, Jacob Biamonte, Eugene Lerman, Tobias Fritz and others, about what they’ve been thinking about recently. I think we’ll find interesting common ground. If there’s interest, I’d be happy to talk about categorical models for how information is communicated throughout a network, and whether this gives any insight that can lead to better decision-making by the larger whole.

A small nit:

If I understand your construction correctly, then this is not the same as the previous one, but you are now referring to the space of basepoint-preserving maps $\mathrm{Map}_*(S^2, X)$. Is that right?

I found this section confusing, even though I understand n-fold loop spaces and the little n-cubes operad. The notation seemed a bit mysterious. Maybe this is part of why.

Oops, that was a typo. I think I meant to leave the $A$ off here, and just say something like,

‘In this way, any “2-fold loop space” gives an algebra of May’s little 2-cubes operad.’

John, would it be possible to fix that?

Sure, I’ll fix it. By the way, if anyone wants to use LaTeX in comments here, just read the directions that appear right above the box where you type your comment. That makes the difference between typing $A$ and seeing a nicely rendered A.

When I read “emerge” it clicked and I recalled one of my most expensive never-studied books:

• A.C. Ehresmann and J.-P. Vanbremeersch, *Memory Evolutive Systems: Hierarchy, Emergence, Cognition*, Elsevier, 2007.

Another category-theoretic theory of Life and all… Do you have a comment on this?

They have a theory of “emergentist reductionism” based on a “multiplicity principle”, which says something like “there’s more than one way to do it”. And from this they picture emergence going all the way down, to the necessity of a quantum-like world.

That was my most expensive category theory book too. I think it’s interesting stuff, but I didn’t get a sense of where exactly it leads.

What I’m realizing is that people in science aren’t going to think much about category theory until it solves something, rather than just gives a language. In math, we find it not just to be a nice language; we’re able to use that language (and whatever else CT is) to prove new theorems. Theorems are our currency, so it works for us.

I think what scientists want to see is new results, new testable predictions, which are eventually validated. I don’t follow too closely, but I’m not sure Memory Evolutive Systems ever got to this next step. Indeed, it’s pretty hard to do so; I would like to get much further than I have in that respect.

When I helped run the Dagstuhl workshop Categorical methods at the crossroads last spring, we invited Andrée Ehresmann to speak about this work. I later spent time talking to her in Paris, where she’s working with some music theorists at IRCAM. I even blogged about her on Google+:

But I digress! I think I roughly understand her approach of describing emergence by starting with a category and building up a new category where objects are diagrams of a specified type in the original category… and repeating this process.

To the limited extent that I understand her work, I sympathize with it. I am very happy to see one of the pioneers of category theory continuing to break new ground in an ambitious way. But as far as I can tell, the material in this book needs to be supplemented by many other ideas to really capture the details of complex phenomena. Some of these other ideas will be very general, while others will be specific to a particular problem or class of problems.

Coming from a mathematical physics background, I’m happiest when I see things applied to examples. If I don’t see all the details of how my general formalisms handle specific examples, I feel like I’m floating in clouds. That’s why a lot of my work on networks so far has dealt with specific well-known kinds of networks: chemical reaction networks, electrical circuits, bond graphs, and signal flow diagrams. I’m trying to draw general lessons about symmetric monoidal bicategories and the like, but my formalisms also incorporate ideas like Kirchhoff’s laws, equilibrium for chemical reactions, and so on.

1) Do people at the Santa Fe Institute do related work? Based on a dated book I recently read, they too were big on the parts-to-wholes theme.

2) If you really are bucking for Hari Seldon’s job (as appears to be the case), once you get this all sorted out you will need to translate it into natural language for the rest of us:-)

arch1 wrote:

Yes, they recently invited me to this workshop:

• Kinetic networks: from topology to design, 17-19 September 2014, Santa Fe Institute. Organized by Yoav Kallus, Pablo Damasceno, and Sidney Redner.

Unfortunately I can’t go, because I’m coming back from Singapore to California on the 18th.

This is just one of many interesting things people are doing at the Santa Fe Institute.

I hope David Spivak isn’t trying to ‘predict the history of the galaxy’ like Hari Seldon did. And I think getting network theory sorted out will take quite a while—enough time for you to learn category theory and operads.

Interesting. Thanks for your reply John (and a premature welcome back)!

Yeah, I’ve talked with some people from SFI. I don’t think they use operads in this way, but they’re definitely interested in complex systems.

Last time we talked about understanding types of networks as categories of decorated cospans. Earlier, David Spivak told us about understanding networks as algebras of an operad. Both these frameworks aim at capturing notions of modularity and interconnection. Are they related, then? How?