## Complex Adaptive System Design (Part 3)

It’s been a long time since I’ve blogged about the Complex Adaptive System Composition and Design Environment or CASCADE project run by John Paschkewitz. For a reminder, read these:

Complex adaptive system design (part 1), Azimuth, 2 October 2016.

Complex adaptive system design (part 2), Azimuth, 18 October 2016.

A lot has happened since then, and I want to explain it.

I’m working with Metron Scientific Solutions to develop new techniques for designing complex networks.

The particular problem we began cutting our teeth on is a search and rescue mission where a bunch of boats, planes and drones have to locate and save people who fall overboard during a boat race in the Caribbean Sea. Subsequently the Metron team expanded the scope to other search and rescue tasks. But the real goal is to develop very generally applicable new ideas on designing and ‘tasking’ networks of mobile agents—that is, designing these networks and telling the agents what to do.

We’re using the mathematics of ‘operads’, in part because Spivak’s work on operads has drawn a lot of attention and raised a lot of hopes.

An operad is a bunch of operations for sticking together smaller things to create bigger ones—I’ll explain this in detail later, but that’s the core idea. Spivak described some specific operads called ‘operads of wiring diagrams’ and illustrated some of their potential applications. But when we got going on our project, we wound up using a different class of operads, which I’ll call ‘network operads’.

Here’s our dream, which we’re still trying to make into a reality:

Network operads should make it easy to build a big network from smaller ones and have every agent know what to do. You should be able to ‘slap together’ a network, throwing in more agents and more links between them, and automatically have it do something reasonable. This should be more flexible than an approach where you need to know ahead of time exactly how many agents you have, and how they’re connected, before you can tell them what to do.

You don’t want a network to malfunction horribly because you forgot to hook it up correctly. You want to focus your attention on optimizing the network, not getting it to work at all. And you want everything to work so smoothly that it’s easy for the network to adapt to changing conditions.

To achieve this we’re using network operads, which are certain special ‘typed operads’. So before getting into the details of our approach, I should say a bit about typed operads. And I think that will be enough for today’s post: I don’t want to overwhelm you with too much information at once.

In general, a ‘typed operad’ describes ways of sticking together things of various types to get new things of various types. An ‘algebra’ of the operad gives a particular specification of these things and the results of sticking them together. For now I’ll skip the full definition of a typed operad and only highlight the most important features. A typed operad $O$ has:

• a set $T$ of types.

• collections of operations $O(t_1, \dots, t_n; t)$ where $t_i, t \in T$. Here $t_1, \dots, t_n$ are the types of the inputs, while $t$ is the type of the output.

• ways to compose operations. Given an operation
$f \in O(t_1,\dots,t_n ;t)$ and $n$ operations

$g_1 \in O(t_{11},\dots,t_{1 k_1}; t_1),\dots, g_n \in O(t_{n1},\dots,t_{n k_n};t_n)$

we can compose them to get

$f \circ (g_1,\dots,g_n) \in O(t_{11}, \dots, t_{nk_n};t)$

These must obey some rules.
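If it helps to see this data in code, here is a minimal sketch of the three bullet points above: abstract operations with typed inputs and a typed output, and a composition that checks types and concatenates inputs. All the names here (`Operation`, `compose`, `round_sum`, and so on) are my own illustration, not anything from the project.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass(frozen=True)
class Operation:
    """An abstract operation f in O(t1, ..., tn; t)."""
    name: str
    inputs: Tuple[str, ...]   # the input types t1, ..., tn
    output: str               # the output type t

def compose(f: Operation, gs: Tuple[Operation, ...]) -> Operation:
    """Form f ∘ (g1, ..., gn) by feeding each gi's output into f's i-th input."""
    if len(gs) != len(f.inputs):
        raise ValueError("need exactly one g_i for each input of f")
    for t_i, g in zip(f.inputs, gs):
        if g.output != t_i:
            raise TypeError(f"output type {g.output!r} of {g.name} "
                            f"does not match input type {t_i!r} of {f.name}")
    # The composite's inputs are all the inputs of the g_i, in order.
    new_inputs = tuple(t for g in gs for t in g.inputs)
    new_name = f"{f.name}∘({', '.join(g.name for g in gs)})"
    return Operation(new_name, new_inputs, f.output)

f = Operation("round_sum", ("real", "real"), "integer")
g1 = Operation("sqrt", ("nonneg_real",), "real")
g2 = Operation("exp", ("real",), "real")
h = compose(f, (g1, g2))
print(h.inputs, "->", h.output)   # ('nonneg_real', 'real') -> integer
```

Note that nothing here says what the operations *do*: that is the job of an algebra, which comes later.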

But if you haven’t seen operads before, you’re probably reeling in horror—so I need to rush in and save you by showing you the all-important pictures that help explain what’s going on!

First of all, you should visualize an operation $f \in O(t_1, \dots, t_n; t)$ as a little gizmo like this:

It has $n$ inputs at top and one output at bottom. Each input, and the output, has a ‘type’ taken from the set $T.$ So, for example, if your operation takes two real numbers, adds them and spits out the closest integer, both input types would be ‘real’, while the output type would be ‘integer’.

The main thing we do with operations is compose them. Given an operation $f \in O(t_1,\dots,t_n ;t),$ we can compose it with $n$ operations

$g_1 \in O(t_{11},\dots,t_{1 k_1}; t_1), \quad \dots, \quad g_n \in O(t_{n1},\dots,t_{n k_n};t_n)$

by feeding their outputs into the inputs of $f,$ like this:

The result is an operation we call

$f \circ (g_1, \dots, g_n)$

Note that the input types of $f$ have to match the output types of the $g_i$ for this to work! This is the whole point of types: they forbid us from composing operations in ways that don’t make sense.

This avoids certain stupid mistakes. For example, you can take the square root of a positive number, but you may not want to take the square root of a negative number, and you definitely don’t want to take the square root of a hamburger. While you can land a plane on an airstrip, you probably don’t want to land a plane on a person.

The operations in an operad are quite abstract: they aren’t really operating on anything. To render them concrete, we need another idea: operads have ‘algebras’.

An algebra $A$ of the operad $O$ specifies a set of things of each type $t \in T$ such that the operations of $O$ act on these sets. A bit more precisely, an algebra consists of:

• for each type $t \in T,$ a set $A(t)$ of things of type $t$

• an action of $O$ on $A,$ that is, a collection of maps

$\alpha : O(t_1, \dots, t_n; t) \times A(t_1) \times \cdots \times A(t_n) \to A(t)$

obeying some rules.

In other words, an algebra turns each operation $f \in O(t_1, \dots, t_n; t)$ into a function that eats things of types $t_1, \dots, t_n$ and spits out a thing of type $t.$
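As a hedged sketch of what an algebra does, here is the earlier ‘add two reals, round to the nearest integer’ operation implemented as an actual function. The dictionary of carriers and the name `alpha_round_sum` are illustrative assumptions on my part, not notation from the post.

```python
# An algebra assigns each type t a set A(t) of actual things...
carriers = {
    "real": float,
    "integer": int,
}

# ...and turns the abstract operation round_sum ∈ O(real, real; integer)
# into an actual function eating two reals and spitting out an integer:
def alpha_round_sum(x: float, y: float) -> int:
    """Add two things of type 'real', return the closest thing of type 'integer'."""
    return round(x + y)

result = alpha_round_sum(1.2, 2.4)
print(result, isinstance(result, carriers["integer"]))  # 4 True
```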

When we get to designing systems with operads, the fact that the same operad can have many algebras will be useful. Our operad will have operations describing abstractly how to hook up networks to form larger networks. An algebra will give a specific implementation of these operations. We can use one algebra that’s fairly fine-grained and detailed about what the operations actually do, and another that’s less detailed. There will then be a map from the first algebra to the second, called an ‘algebra homomorphism’, that forgets some fine-grained details.
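Here is one way to sketch that idea in code, under the assumption (mine, not the post’s) that a single ‘merge two networks’ operation acts in a fine-grained algebra, where a network is the list of its agents’ names, and in a coarse algebra, where a network is just its agent count. The homomorphism forgets the names.

```python
# Fine-grained algebra: a network is the list of its agents' names.
def merge_fine(net1, net2):
    return net1 + net2

# Coarse algebra: a network is just its number of agents.
def merge_coarse(n1, n2):
    return n1 + n2

# The algebra homomorphism sends a detailed network to its agent count,
# forgetting the fine-grained details (the names).
def forget(net):
    return len(net)

# The homomorphism condition: forgetting details and then merging
# gives the same answer as merging and then forgetting details.
x, y = ["boat1", "drone1"], ["boat2"]
assert forget(merge_fine(x, y)) == merge_coarse(forget(x), forget(y))
print(forget(merge_fine(x, y)))  # 3
```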

There’s a lot more to say—all this is just the mathematical equivalent of clearing my throat before a speech—but I’ll stop here for now.

And as I do—since it also takes me time to stop talking—I should make it clear yet again that I haven’t even given the full definition of typed operads and their algebras! Besides the laws I didn’t write down, there’s other stuff I omitted. Most notably, there’s a way to permute the inputs of an operation in an operad, and operads have identity operations, one for each type.

To see the full definition of an ‘untyped’ operad, which is really an operad with just one type, go here:

They just call it an ‘operad’. Note that they first explain ‘non-symmetric operads’, where you can’t permute the inputs of operations, and then explain operads, where you can.

If you’re mathematically sophisticated, you can easily guess the laws obeyed by a typed operad just by looking at this article and inserting the missing types. You can also see the laws written down in Spivak’s paper, but with some different terminology: he calls types ‘objects’, he calls operations ‘morphisms’, and he calls typed operads ‘symmetric colored operads’—or once he gets going, just ‘operads’.

You can also see the definition of a typed operad in Section 2.1 here:

• Donald Yau, Operads of wiring diagrams.

What I would call a typed operad with $S$ as its set of types, he calls an ‘$S$-colored operad’.

I guess it’s already evident, but I’ll warn you that the terminology in this subject varies quite a lot from author to author: for example, a certain community calls typed operads ‘symmetric multicategories’. This is annoying at first but once you understand the subject it’s as ignorable as the fact that mathematicians have many different accents. The main thing to remember is that operads come in four main flavors, since they can either be typed or untyped, and they can either let you permute inputs or not. I’ll always be working with typed operads where you can permute inputs.

Finally, I’ll say that while the definition of operad looks lengthy and cumbersome at first, it becomes lean and elegant if you use more category theory.

Next time I’ll give you an example of an operad: the simplest ‘network operad’.

Some posts in this series:

Part 2. Metron’s software for system design.

Part 3. Operads: the basic idea.

Part 4. Network operads: an easy example.

Part 5. Algebras of network operads: some easy examples.

Part 6. Network models.

Part 7. Step-by-step compositional design and tasking using commitment networks.

Part 8. Compositional tasking using category-valued network models.

Part 9. Network models from Petri nets with catalysts.

### 10 Responses to Complex Adaptive System Design (Part 3)

1. James Smith says:

Hi,

I’m not sure about the wisdom of permuting inputs. It seems as if it might complicate things horribly. Can you perhaps justify it a little? I guess types would restrict the number of permutations but even then, why bother? Why not just have a different operad with the different inputs?

• John Baez says:

You need symmetries to express the fact that certain operations are unchanged when you permute their inputs… or to express how certain operations change when you permute their inputs.

For example, there is a nonsymmetric operad whose algebras are precisely sets equipped with an associative multiplication. This operad has one type and a binary operation $m$ obeying the law

$m \circ (m, 1) = m \circ (1, m)$

However, there’s no nonsymmetric operad whose algebras are precisely sets with a commutative associative operation. To express that law you need the ability to permute inputs, so you need a symmetric operad.
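A tiny illustrative check (my sketch, using string concatenation as the sample operation): the associativity law compares two composites that consume the inputs in the same order, while commutativity compares an operation with itself acting on permuted inputs.

```python
def m(a, b):
    """A sample binary operation in some algebra: string concatenation."""
    return a + b

# Associativity, m∘(m,1) = m∘(1,m): both sides read the inputs (a, b, c)
# in the same order, so no permutation of inputs is ever needed.
a, b, c = "x", "y", "z"
assert m(m(a, b), c) == m(a, m(b, c))

# Commutativity would compare m(a, b) with m(b, a): the inputs get swapped,
# which is exactly what a symmetric operad lets you express.
print(m(a, b) == m(b, a))  # False: concatenation is not commutative
```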

It appears that for our applications, we want the ability to say things like ‘connecting boat 1 to boat 2 with a radio channel is the same as connecting boat 2 to boat 1 with a radio channel’.

Most mathematicians use symmetric operads, because in many applications they’re necessary. We also know it’s pretty easy to switch to nonsymmetric operads when they suffice. (Technically: there’s a forgetful functor from the category of symmetric operads to the category of nonsymmetric ones, and it has a left adjoint.) When it comes to writing software, it’s possible that symmetric operads are a bit of a drag and will be avoided unless absolutely necessary.

Why not just have a different operad with the different inputs?

I suspect you meant ‘operation’ here, not ‘operad’.

• James Smith says:

yes sorry i’ve been getting my operations and operads mixed up…

• John Baez says:

Just think of it this way: an operad is a bunch of operations.

2. James Smith says:

I thought of an answer to my question immediately after posting it, although I’m not sure it adds anything. You want your networks to exhibiting certain kinds of fuzziness or robustness (although these aren’t very good words to use) and one way you might do this is permuting inputs. You, might, say, have operads that have the same outputs regardless of how you permute their inputs? Maybe I should be talking of operations rather than operads, too.

• John Baez says:

James wrote:

You, might, say, have operads that have the same outputs regardless of how you permute their inputs? Maybe I should be talking of operations rather than operads, too.

You definitely mean ‘operations’ here. But the answer to your question here is ‘yes’, as I explained in my previous comment. To assert that an operation has the same output regardless of how you permute its inputs, or even to say precisely how the output changes when you permute the inputs, you need to be able to permute the inputs.

Mathematics has a huge hierarchy of formalisms of varying expressive power, with operads near the bottom—you can only say very simple things with an operad. Symmetric operads are a wee bit more expressive than nonsymmetric ones. When you want to say more sophisticated things, you need a more expressive formalism, and you pay the price in increased complication. The trick is to choose the formalism that’s just expressive enough for your current needs, and know how to hop nimbly between formalisms as your needs change.

3. James Smith says:

All good, thank you.

4. Last time I introduced typed operads. A typed operad has a bunch of operations for putting together things of various types and getting new things of various types. This is a very general idea! But in the CASCADE project we’re interested in something more specific: networks. So we want operations for putting together networks and getting new networks. […]

5. Remember from Part 3: an operad has operations, which are abstract ways of sticking things together. An algebra makes these operations concrete: it specifies some sets of actual things, and how the operations in the operad get implemented as actual ways to stick these things together.

So, an operad $O$ can have one algebra in which things are described in a bare-bones, simplified way, and another algebra in which things are described in more detail. Indeed it will typically have many algebras, corresponding to many levels of detail, but let’s just think about two for a minute.
