Categories in Control

23 April, 2015


To understand ecosystems, ultimately will be to understand networks. – B. C. Patten and M. Witkamp

A while back I decided one way to apply my math skills to help save the planet was to start pushing toward green mathematics: a kind of mathematics that can interact with biology and ecology just as fruitfully as traditional mathematics interacts with physics. As usual with math, the payoffs will come slowly, but they may be large. It’s not a substitute for doing other, more urgent things—but if mathematicians don’t do this, who will?

As a first step in this direction, I decided to study networks.

This May, a small group of mathematicians is meeting in Turin for a workshop on the categorical foundations of network theory, organized by Jacob Biamonte. I’m trying to get us mentally prepared for this. We all have different ideas, yet they should fit together somehow.

Tobias Fritz, Eugene Lerman and David Spivak have all written articles here about their work, though I suspect Eugene will have a lot of completely new things to say, too. Now it’s time for me to say what my students and I have been doing.

Despite my ultimate aim of studying biological and ecological networks, I decided to start by clarifying the math of networks that appear in chemistry and engineering, since these are simpler, better understood, useful in their own right, and probably a good warmup for the grander goal. I’ve been working with Brendan Fong on electrical circuits, and with Jason Erbele on control theory. Let me talk about this paper:

• John Baez and Jason Erbele, Categories in control.

Control theory is the branch of engineering that focuses on manipulating open systems—systems with inputs and outputs—to achieve desired goals. In control theory, signal-flow diagrams are used to describe linear ways of manipulating signals, for example smooth real-valued functions of time. Here’s a real-world example; click the picture for more details:



For a category theorist, at least, it is natural to treat signal-flow diagrams as string diagrams in a symmetric monoidal category. This forces some small changes of perspective, which I’ll explain, but more important is the question: which symmetric monoidal category?

We argue that the answer is: the category \mathrm{FinRel}_k of finite-dimensional vector spaces over a certain field k, but with linear relations rather than linear maps as morphisms, and direct sum rather than tensor product providing the symmetric monoidal structure. We use the field k = \mathbb{R}(s) consisting of rational functions in one real variable s. This variable has the meaning of differentiation. A linear relation from k^m to k^n is thus a system of linear constant-coefficient ordinary differential equations relating m ‘input’ signals and n ‘output’ signals.

Our main goal in this paper is to provide a complete ‘generators and relations’ picture of this symmetric monoidal category, with the generators being familiar components of signal-flow diagrams. It turns out that the answer has an intriguing but mysterious connection to ideas that are familiar in the diagrammatic approach to quantum theory! Quantum theory also involves linear algebra, but it uses linear maps between Hilbert spaces as morphisms, and the tensor product of Hilbert spaces provides the symmetric monoidal structure.

We hope that the category-theoretic viewpoint on signal-flow diagrams will shed new light on control theory. However, in this paper we only lay the groundwork.

Signal flow diagrams

There are several basic operations that one wants to perform when manipulating signals. The simplest is multiplying a signal by a scalar. A signal can be amplified by a constant factor:

f \mapsto cf

where c \in \mathbb{R}. We can write this as a string diagram:

Here the labels f and c f on top and bottom are just for explanatory purposes and not really part of the diagram. Control theorists often draw arrows on the wires, but this is unnecessary from the string diagram perspective. Arrows on wires are useful to distinguish objects from their duals, but ultimately we will obtain a compact closed category where each object is its own dual, so the arrows can be dropped. What we really need is for the box denoting scalar multiplication to have a clearly defined input and output. This is why we draw it as a triangle. Control theorists often use a rectangle or circle, using arrows on wires to indicate which carries the input f and which the output c f.

A signal can also be integrated with respect to the time variable:

f \mapsto \int f

Mathematicians typically take differentiation as fundamental, but engineers sometimes prefer integration, because it is more robust against small perturbations. In the end it will not matter much here. We can again draw integration as a string diagram:

Since this looks like the diagram for scalar multiplication, it is natural to extend \mathbb{R} to \mathbb{R}(s), the field of rational functions of a variable s which stands for differentiation. Then differentiation becomes a special case of scalar multiplication, namely multiplication by s, and integration becomes multiplication by 1/s. Engineers accomplish the same effect with Laplace transforms, since differentiating a signal f is equivalent to multiplying its Laplace transform

\displaystyle{  (\mathcal{L}f)(s) = \int_0^\infty f(t) e^{-st} \,dt  }

by the variable s. Another option is to use the Fourier transform: differentiating f is equivalent to multiplying its Fourier transform

\displaystyle{   (\mathcal{F}f)(\omega) = \int_{-\infty}^\infty f(t) e^{-i\omega t}\, dt  }

by -i\omega. Of course, the function f needs to be sufficiently well-behaved to justify calculations involving its Laplace or Fourier transform. At a more basic level, it also requires some work to treat integration as the two-sided inverse of differentiation. Engineers do this by considering signals that vanish for t < 0, and choosing the antiderivative that vanishes under the same condition. Luckily all these issues can be side-stepped in a formal treatment of signal-flow diagrams: we can simply treat signals as living in an unspecified vector space over the field \mathbb{R}(s). The field \mathbb{C}(s) would work just as well, and control theory relies heavily on complex analysis. In our paper we work over an arbitrary field k.
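
Here is a quick computational illustration of the boundary subtlety just mentioned, using SymPy’s Laplace transform (a minimal sketch, assuming SymPy is available; this is not code from the paper). For a signal with f(0) = 0 the boundary term drops out, and differentiation really is multiplication by s:

```python
# Check the identity  L{f'}(s) = s (Lf)(s) - f(0)  for a sample signal.
import sympy as sp

t, s = sp.symbols('t s', positive=True)
f = sp.exp(-2*t)

F      = sp.laplace_transform(f, t, s, noconds=True)            # 1/(s + 2)
Fprime = sp.laplace_transform(sp.diff(f, t), t, s, noconds=True)

print(sp.simplify(Fprime - (s*F - f.subs(t, 0))))   # 0
```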

The simplest possible signal processor is a rock, which takes the 'input' given by the force F on the rock and produces as 'output' the rock's position q. Thanks to Newton's second law F=ma, we can describe this using a signal-flow diagram:

Here composition of morphisms is drawn in the usual way, by attaching the output wire of one morphism to the input wire of the next.
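
Concretely, the rock’s box is scalar multiplication by 1/(m s^2) in k = \mathbb{R}(s): with zero initial conditions (the vanishing-for-t < 0 convention above), taking m = 2 and F = \sin t, partial fractions give \hat{q} = \tfrac{1}{2}(1/s^2 - 1/(s^2+1)), i.e. q = (t - \sin t)/2. A small SymPy check of that prediction, purely for illustration (SymPy assumed):

```python
# Solve m q'' = F directly and compare with the answer predicted by
# 'multiplication by 1/(m s^2)' in the Laplace picture.
import sympy as sp

t = sp.symbols('t', positive=True)
q = sp.Function('q')
m, F = sp.Integer(2), sp.sin(t)

sol = sp.dsolve(sp.Eq(m * q(t).diff(t, 2), F), q(t),
                ics={q(0): 0, q(t).diff(t).subs(t, 0): 0})
predicted = (t - sp.sin(t)) / 2                 # from 1/(2 s^2 (s^2+1))
print(sp.simplify(sol.rhs - predicted))         # 0
```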

To build more interesting machines we need more building blocks, such as addition:

+ : (f,g) \mapsto f + g

and duplication:

\Delta :  f \mapsto (f,f)

When these linear maps are written as matrices, their matrices are transposes of each other. This is reflected in the string diagrams for addition and duplication:

The second is essentially an upside-down version of the first. However, we draw addition as a dark triangle and duplication as a light one because we will later want another way to ‘turn addition upside-down’ that does not give duplication. As an added bonus, a light upside-down triangle resembles the Greek letter \Delta, the usual symbol for duplication.

While they are typically not considered worthy of mention in control theory, for completeness we must include two other building blocks. One is the zero map from the zero-dimensional vector space \{0\} to our field k, which we denote as 0 and draw as follows:

The other is the zero map from k to \{0\}, sometimes called ‘deletion’, which we denote as ! and draw thus:

Just as the matrices for addition and duplication are transposes of each other, so are the matrices for zero and deletion, though they are rather degenerate, being 1 \times 0 and 0 \times 1 matrices, respectively. Addition and zero make k into a commutative monoid, meaning that the following relations hold:

The equation at right is the commutative law, and the crossing of strands is the braiding:

B : (f,g) \mapsto (g,f)

by which we switch two signals. In fact this braiding is a symmetry, so it does not matter which strand goes over which:

Dually, duplication and deletion make k into a cocommutative comonoid. This means that if we reflect the equations obeyed by addition and zero across the horizontal axis and turn dark operations into light ones, we obtain another set of valid equations:

There are also relations between the monoid and comonoid operations. For example, adding two signals and then duplicating the result gives the same output as duplicating each signal and then adding the results:

This diagram is familiar in the theory of Hopf algebras, or more generally bialgebras. Here it is an example of the fact that the monoid operations on k are comonoid homomorphisms—or equivalently, the comonoid operations are monoid homomorphisms.

We summarize this situation by saying that k is a bimonoid. These are all the bimonoid laws, drawn as diagrams:


The last equation means we can actually make the diagram at left disappear, since it equals the identity morphism on the 0-dimensional vector space, which is drawn as nothing.

So far all our string diagrams denote linear maps. We can treat these as morphisms in the category \mathrm{FinVect}_k, where objects are finite-dimensional vector spaces over a field k and morphisms are linear maps. This category is equivalent to the category where the only objects are vector spaces k^n for n \ge 0, and then morphisms can be seen as n \times m matrices. The space of signals is a vector space V over k which may not be finite-dimensional, but this does not cause a problem: an n \times m matrix with entries in k still defines a linear map from V^m to V^n in a functorial way.

In applications of string diagrams to quantum theory, we make \mathrm{FinVect}_k into a symmetric monoidal category using the tensor product of vector spaces. In control theory, we instead make \mathrm{FinVect}_k into a symmetric monoidal category using the direct sum of vector spaces. In Lemma 1 of our paper we prove that for any field k, \mathrm{FinVect}_k with direct sum is generated as a symmetric monoidal category by the one object k together with these morphisms:

where c \in k is arbitrary.
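
In concrete terms each generator is a small matrix: + is the 1 × 2 matrix (1 1), \Delta is its transpose, 0 and ! are the empty 1 × 0 and 0 × 1 matrices, and scalar multiplication by c is a 1 × 1 matrix. Here is a minimal numpy sketch of this semantics (my own toy encoding, not code from the paper), with a `direct_sum` helper for the monoidal product:

```python
# Matrix semantics for the generators: a morphism k^m -> k^n is an
# n x m matrix, composition is matrix product, and the monoidal
# product is the block-diagonal direct sum.
import numpy as np

def direct_sum(A, B):
    out = np.zeros((A.shape[0] + B.shape[0], A.shape[1] + B.shape[1]))
    out[:A.shape[0], :A.shape[1]] = A
    out[A.shape[0]:, A.shape[1]:] = B
    return out

add  = np.array([[1.0, 1.0]])    # + : k^2 -> k
dup  = np.array([[1.0], [1.0]])  # Delta : k -> k^2
zero = np.zeros((1, 0))          # 0 : {0} -> k   (a 1 x 0 matrix)
dele = np.zeros((0, 1))          # ! : k -> {0}   (a 0 x 1 matrix)
c    = 5.0 * np.eye(1)           # scalar multiplication by c = 5

# Composite Delta then +: duplicating and adding is multiplication by 2.
print(add @ dup)                 # [[2.]]
print(direct_sum(add, dup))      # the morphism + (+) Delta : k^3 -> k^3
```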

However, these generating morphisms obey some unexpected relations! For example, we have:

Thus, it is important to find a complete set of relations obeyed by these generating morphisms, thereby obtaining a presentation of \mathrm{FinVect}_k as a symmetric monoidal category. We do this in Theorem 2. In brief, these relations say:

(1) (k, +, 0, \Delta, !) is a bicommutative bimonoid;

(2) the rig operations of k can be recovered from the generating morphisms;

(3) all the generating morphisms commute with scalar multiplication.

Here item (2) means that +, \cdot, 0 and 1 in the field k can be expressed in terms of signal-flow diagrams as follows:

Multiplicative inverses cannot be so expressed, so our signal-flow diagrams so far do not know that k is a field. Additive inverses also cannot be expressed in this way. So, we expect that a version of Theorem 2 will hold whenever k is a mere rig: that is, a ‘ring without negatives’, like the natural numbers. The one change is that instead of working with vector spaces, we should work with finitely generated free k-modules.

Item (3), the fact that all our generating morphisms commute with scalar multiplication, amounts to these diagrammatic equations:

While Theorem 2 is a step towards understanding the category-theoretic underpinnings of control theory, it does not treat signal-flow diagrams that include ‘feedback’. Feedback is one of the most fundamental concepts in control theory because a control system without feedback may be highly sensitive to disturbances or unmodeled behavior. Feedback allows these uncontrolled behaviors to be mollified. As a string diagram, a basic feedback system might look schematically like this:

The user inputs a ‘reference’ signal, which is fed into a controller, whose output is fed into a system, which control theorists call a ‘plant’, which in turn produces its own output. But then the system’s output is duplicated, and one copy is fed into a sensor, whose output is added to (or, if we prefer, subtracted from) the reference signal.

In string diagrams—unlike in the usual thinking on control theory—it is essential to be able to read any diagram from top to bottom as a composite of tensor products of generating morphisms. Thus, to incorporate the idea of feedback, we need two more generating morphisms. These are the ‘cup’:

and ‘cap’:

These are not maps: they are relations. The cup imposes the relation that its two inputs be equal, while the cap does the same for its two outputs. This is a way of describing how a signal flows around a bend in a wire.

To make this precise, we use a category called \mathrm{FinRel}_k. An object of this category is a finite-dimensional vector space over k, while a morphism from U to V, denoted L : U \rightharpoonup V, is a linear relation, meaning a linear subspace

L \subseteq U \oplus V

In particular, when k = \mathbb{R}(s), a linear relation L : k^m \rightharpoonup k^n is just an arbitrary system of constant-coefficient linear ordinary differential equations relating m input variables and n output variables.

Since the direct sum U \oplus V is also the cartesian product of U and V, a linear relation is indeed a relation in the usual sense, but with the property that if u \in U is related to v \in V and u' \in U is related to v' \in V then cu + c'u' is related to cv + c'v' whenever c,c' \in k.

We compose linear relations L : U \rightharpoonup V and L' : V \rightharpoonup W as follows:

L'L = \{(u,w) \colon \; \exists\; v \in V \;\; (u,v) \in L \textrm{ and } (v,w) \in L'\}
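
In coordinates this composition is just linear algebra: present each relation by a matrix of constraints, solve the stacked system in (u, v, w), and project away v. Here is a rough SymPy sketch under that encoding (the `compose` helper and the constraint-matrix presentation are choices of mine, not notation from the paper):

```python
# A relation L : k^m -|-> k^n is stored as a matrix C of constraints:
# L = { (u,v) : C*(u;v) = 0 }, with the u coordinates listed before v.
import sympy as sp

def compose(C1, m, n, C2, p):
    """Compose L : k^m -|-> k^n with L' : k^n -|-> k^p; return a
    spanning set for L'L inside k^m (+) k^p."""
    top = C1.row_join(sp.zeros(C1.rows, p))      # C1 constrains (u, v)
    bot = sp.zeros(C2.rows, m).row_join(C2)      # C2 constrains (v, w)
    sols = top.col_join(bot).nullspace()         # all triples (u, v, w)
    return [x[:m, :].col_join(x[m + n:, :]) for x in sols]   # forget v

# Duplication Delta : k -|-> k^2 and addition + : k^2 -|-> k:
Delta = sp.Matrix([[1, -1, 0], [1, 0, -1]])      # x = y1 and x = y2
Plus  = sp.Matrix([[1, 1, -1]])                  # y1 + y2 = z
print(compose(Delta, 1, 2, Plus, 1))   # spans {(x, 2x)}: mult. by 2
```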

Any linear map f : U \to V gives a linear relation F : U \rightharpoonup V, namely the graph of that map:

F = \{ (u,f(u)) : u \in U \}

Composing linear maps thus becomes a special case of composing linear relations, so \mathrm{FinVect}_k becomes a subcategory of \mathrm{FinRel}_k. Furthermore, we can make \mathrm{FinRel}_k into a monoidal category using direct sums, and it becomes symmetric monoidal using the braiding already present in \mathrm{FinVect}_k.

In these terms, the cup is the linear relation

\cup : k^2 \rightharpoonup \{0\}

given by

\cup \; = \; \{ (x,x,0) : x \in k   \} \; \subseteq \; k^2 \oplus \{0\}

while the cap is the linear relation

\cap : \{0\} \rightharpoonup k^2

given by

\cap \; = \; \{ (0,x,x) : x \in k   \} \; \subseteq \; \{0\} \oplus k^2

These obey the zigzag relations:

Thus, they make \mathrm{FinRel}_k into a compact closed category where k, and thus every object, is its own dual.

Besides feedback, one of the things that make the cap and cup useful is that they allow any morphism L : U \rightharpoonup V to be ‘plugged in backwards’ and thus ‘turned around’. For instance, turning around integration:

we obtain differentiation. In general, using caps and cups we can turn around any linear relation L : U \rightharpoonup V and obtain a linear relation L^\dagger : V \rightharpoonup U, called the adjoint of L, which turns out to be given by

L^\dagger = \{(v,u) : (u,v) \in L \}

For example, if c \in k is nonzero, the adjoint of scalar multiplication by c is multiplication by c^{-1}:

Thus, caps and cups allow us to express multiplicative inverses in terms of signal-flow diagrams! One might think that a problem arises when c = 0, but no: the adjoint of scalar multiplication by 0 is

\{(0,x) : x \in k \} \subseteq k \oplus k

In Lemma 3 we show that \mathrm{FinRel}_k is generated, as a symmetric monoidal category, by these morphisms:

where c \in k is arbitrary.

In Theorem 4 we find a complete set of relations obeyed by these generating morphisms, thus giving a presentation of \mathrm{FinRel}_k as a symmetric monoidal category. To describe these relations, it is useful to work with adjoints of the generating morphisms. We have already seen that the adjoint of scalar multiplication by c is scalar multiplication by c^{-1}, except when c = 0. Taking adjoints of the other four generating morphisms of \mathrm{FinVect}_k, we obtain four important but perhaps unfamiliar linear relations. We draw these as ‘turned around’ versions of the original generating morphisms:

Coaddition is a linear relation from k to k^2 that holds when the two outputs sum to the input:

+^\dagger : k \rightharpoonup k^2

+^\dagger = \{(x,y,z)  : \; x = y + z \} \subseteq k \oplus k^2

Cozero is a linear relation from k to \{0\} that holds when the input is zero:

0^\dagger : k \rightharpoonup \{0\}

0^\dagger = \{ (0,0)\} \subseteq k \oplus \{0\}

Coduplication is a linear relation from k^2 to k that holds when the two inputs both equal the output:

\Delta^\dagger : k^2 \rightharpoonup k

\Delta^\dagger = \{(x,y,z)  : \; x = y = z \} \subseteq k^2 \oplus k

Codeletion is a linear relation from \{0\} to k that holds always:

!^\dagger : \{0\} \rightharpoonup k

!^\dagger = \{(0,x) : x \in k \} \subseteq \{0\} \oplus k

Since +^\dagger,0^\dagger,\Delta^\dagger and !^\dagger automatically obey turned-around versions of the relations obeyed by +,0,\Delta and !, we see that k acquires a second bicommutative bimonoid structure when considered as an object in \mathrm{FinRel}_k.

Moreover, the four dark operations make k into a Frobenius monoid. This means that (k,+,0) is a monoid, (k,+^\dagger, 0^\dagger) is a comonoid, and the Frobenius relation holds:

All three expressions in this equation are linear relations saying that the sum of the two inputs equals the sum of the two outputs.

The operation sending each linear relation to its adjoint extends to a contravariant functor

\dagger : \mathrm{FinRel}_k \to \mathrm{FinRel}_k,

which obeys a list of properties that are summarized by saying that \mathrm{FinRel}_k is a †-compact category. Because two of the operations in the Frobenius monoid (k, +,0,+^\dagger,0^\dagger) are adjoints of the other two, it is a †-Frobenius monoid.

This Frobenius monoid is also special, meaning that comultiplication (in this case +^\dagger) followed by multiplication (in this case +) equals the identity:

This Frobenius monoid is also commutative—and cocommutative, but for Frobenius monoids this follows from commutativity.

Starting around 2008, commutative special †-Frobenius monoids have become important in the categorical foundations of quantum theory, where they can be understood as ‘classical structures’ for quantum systems. The category \mathrm{FinHilb} of finite-dimensional Hilbert spaces and linear maps is a †-compact category, where any linear map f : H \to K has an adjoint f^\dagger : K \to H given by

\langle f^\dagger \phi, \psi \rangle = \langle \phi, f \psi \rangle

for all \psi \in H, \phi \in K . A commutative special †-Frobenius monoid in \mathrm{FinHilb} is then the same as a Hilbert space with a chosen orthonormal basis. The reason is that given an orthonormal basis \psi_i for a finite-dimensional Hilbert space H, we can make H into a commutative special †-Frobenius monoid with multiplication m : H \otimes H \to H given by

m (\psi_i \otimes \psi_j ) = \left\{ \begin{array}{cl}  \psi_i & i = j \\                                                                 0 & i \ne j  \end{array}\right.

and unit i : \mathbb{C} \to H given by

i(1) = \sum_i \psi_i

The comultiplication m^\dagger duplicates basis states:

m^\dagger(\psi_i) = \psi_i \otimes \psi_i

Conversely, any commutative special †-Frobenius monoid in \mathrm{FinHilb} arises this way.
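
For a concrete feel, here is what this looks like in coordinates for n = 3 with the standard basis: the multiplication m is an n × n² matrix, its adjoint is the transpose, and the ‘special’ and unit laws can be verified numerically. A small numpy sketch of my own, assuming numpy is available:

```python
# The Frobenius monoid of an orthonormal basis, in coordinates:
# m sends e_i (x) e_j to e_i if i = j and to 0 otherwise.
import numpy as np

n = 3
m = np.zeros((n, n * n))
for i in range(n):
    m[i, i * n + i] = 1.0                 # m(e_i (x) e_i) = e_i

unit = np.ones((n, 1))                    # i(1) = sum of basis vectors
comult = m.T                              # m† duplicates basis vectors

print(np.allclose(m @ comult, np.eye(n)))          # True: 'special' law
print(np.allclose(m @ np.kron(unit, np.eye(n)),    # unit law:
                  np.eye(n)))                      # m(i(1) (x) v) = v
```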

Considerably earlier, around 1995, commutative Frobenius monoids were recognized as important in topological quantum field theory. The reason, ultimately, is that the free symmetric monoidal category on a commutative Frobenius monoid is 2\mathrm{Cob}, the category with 2-dimensional oriented cobordisms as morphisms. But the free symmetric monoidal category on a commutative special Frobenius monoid was worked out even earlier: it is the category with finite sets as objects, where a morphism f : X \to Y is an isomorphism class of cospans

X \longrightarrow S \longleftarrow Y

This category can be made into a †-compact category in an obvious way, and then the 1-element set becomes a commutative special †-Frobenius monoid.

For all these reasons, it is interesting to find a commutative special †-Frobenius monoid lurking at the heart of control theory! However, the Frobenius monoid here has yet another property, which is more unusual. Namely, the unit 0 : \{0\} \rightharpoonup k followed by the counit 0^\dagger : k \rightharpoonup \{0\} is the identity:

We call a special Frobenius monoid that also obeys this extra law extra-special. One can check that the free symmetric monoidal category on a commutative extra-special Frobenius monoid is the category with finite sets as objects, where a morphism f : X \to Y is an equivalence relation on the disjoint union X \sqcup Y, and we compose f : X \to Y and g : Y \to Z by letting f and g generate an equivalence relation on X \sqcup Y \sqcup Z and then restricting this to X \sqcup Z.
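
This composition rule is easy to implement: an equivalence relation is exactly what a union-find structure tracks, and restriction just forgets the middle set. A minimal Python sketch (my own encoding: finite sets are ranges of integers, and a ‘corelation’ is a list of merged pairs):

```python
class UnionFind:
    def __init__(self, n):
        self.p = list(range(n))
    def find(self, a):
        while self.p[a] != a:
            self.p[a] = self.p[self.p[a]]    # path halving
            a = self.p[a]
        return a
    def union(self, a, b):
        self.p[self.find(a)] = self.find(b)

def compose(f, g, nx, ny, nz):
    """f: merged pairs on X ⊔ Y (X = 0..nx-1, then Y); g: merged pairs
    on Y ⊔ Z.  Generate an equivalence relation on X ⊔ Y ⊔ Z, then
    restrict it to X ⊔ Z (Z's indices are shifted by nx + ny)."""
    uf = UnionFind(nx + ny + nz)
    for a, b in f:
        uf.union(a, b)
    for a, b in g:
        uf.union(a + nx, b + nx)             # reindex Y ⊔ Z
    keep = list(range(nx)) + list(range(nx + ny, nx + ny + nz))
    return {(a, b) for a in keep for b in keep
            if a < b and uf.find(a) == uf.find(b)}

# X = {0}, Y = two points, Z = {0}: glue X's point to both points of
# Y, and both points of Y to Z's point.
f = [(0, 1), (0, 2)]                         # pairs on X ⊔ Y
g = [(0, 2), (1, 2)]                         # pairs on Y ⊔ Z
print(compose(f, g, 1, 2, 1))                # {(0, 3)}: X's point ~ Z's
```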

As if this were not enough, the light operations share many properties with the dark ones. In particular, these operations make k into a commutative extra-special †-Frobenius monoid in a second way. In summary:

(k, +, 0, \Delta, !) is a bicommutative bimonoid;

(k, \Delta^\dagger, !^\dagger, +^\dagger, 0^\dagger) is a bicommutative bimonoid;

(k, +, 0, +^\dagger, 0^\dagger) is a commutative extra-special †-Frobenius monoid;

(k, \Delta^\dagger, !^\dagger, \Delta, !) is a commutative extra-special †-Frobenius monoid.

It should be no surprise that with all these structures built in, signal-flow diagrams are a powerful method of designing processes.

However, it is surprising that most of these structures are present in a seemingly very different context: the so-called ZX calculus, a diagrammatic formalism for working with complementary observables in quantum theory. This arises naturally when one has an n-dimensional Hilbert space H with two orthonormal bases \psi_i, \phi_i that are mutually unbiased, meaning that

|\langle \psi_i, \phi_j \rangle|^2 = \displaystyle{\frac{1}{n}}

for all 1 \le i, j \le n. Each orthonormal basis makes H into a commutative special †-Frobenius monoid in \mathrm{FinHilb}. Moreover, the multiplication and unit of either one of these Frobenius monoids fits together with the comultiplication and counit of the other to form a bicommutative bimonoid. So, we have all the structure present in the list above—except that these Frobenius monoids are only extra-special if H is 1-dimensional.
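
The standard example of a mutually unbiased pair is the computational basis together with the discrete Fourier basis of \mathbb{C}^n, whose change-of-basis matrix has all entries of absolute value 1/\sqrt{n}. A quick numpy check, purely for illustration:

```python
# The Fourier basis is mutually unbiased with the standard basis:
# |<e_i, phi_j>|^2 = 1/n for all i, j.
import numpy as np

n = 4
fourier = np.array([[np.exp(2j * np.pi * i * j / n) / np.sqrt(n)
                     for j in range(n)] for i in range(n)])
# <e_i, phi_j> is just the (i, j) entry of the Fourier matrix:
print(np.allclose(np.abs(fourier) ** 2, 1 / n))   # True
```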

The field k is also a 1-dimensional vector space, but this is a red herring: in \mathrm{FinRel}_k every finite-dimensional vector space naturally acquires all four structures listed above, since addition, zero, duplication and deletion are well-defined and obey all the relations we have discussed. Jason and I focus on k in our paper simply because it generates all the objects of \mathrm{FinRel}_k via direct sum.

Finally, in \mathrm{FinRel}_k the cap and cup are related to the light and dark operations as follows:

Note the curious factor of -1 in the second equation, which breaks some of the symmetry we have seen so far. This equation says that two elements x, y \in k sum to zero if and only if -x = y. Using the zigzag relations, the two equations above give

We thus see that in \mathrm{FinRel}_k, both additive and multiplicative inverses can be expressed in terms of the generating morphisms used in signal-flow diagrams.

Theorem 4 of our paper gives a presentation of \mathrm{FinRel}_k based on the ideas just discussed. Briefly, it says that \mathrm{FinRel}_k is equivalent to the symmetric monoidal category generated by an object k and these morphisms:

• addition +: k^2 \rightharpoonup k
• zero 0 : \{0\} \rightharpoonup k
• duplication \Delta: k\rightharpoonup k^2
• deletion ! : k \rightharpoonup \{0\}
• scalar multiplication c: k\rightharpoonup k for any c\in k
• cup \cup : k^2 \rightharpoonup \{0\}
• cap \cap : \{0\} \rightharpoonup k^2

obeying these relations:

(1) (k, +, 0, \Delta, !) is a bicommutative bimonoid;

(2) \cap and \cup obey the zigzag equations;

(3) (k, +, 0, +^\dagger, 0^\dagger) is a commutative extra-special †-Frobenius monoid;

(4) (k, \Delta^\dagger, !^\dagger, \Delta, !) is a commutative extra-special †-Frobenius monoid;

(5) the field operations of k can be recovered from the generating morphisms;

(6) the first four generating morphisms in the list above (addition, zero, duplication and deletion) commute with scalar multiplication.

Note that item (2) makes \mathrm{FinRel}_k into a †-compact category, allowing us to mention the adjoints of generating morphisms in the subsequent relations. Item (5) means that +, \cdot, 0, 1 and also additive and multiplicative inverses in the field k can be expressed in terms of signal-flow diagrams in the manner we have explained.

So, we have a good categorical understanding of the linear algebra used in signal-flow diagrams!

Now Jason is moving ahead to apply this to some interesting problems… but that’s another story, for later.


Resource Convertibility (Part 3)

13 April, 2015

guest post by Tobias Fritz

In Part 1 and Part 2, we learnt about ordered commutative monoids and how they formalize theories of resource convertibility and combinability. In this post, I would like to say a bit about the applications that have been explored so far. First, the study of resource theories has become a popular subject in quantum information theory, and many of the ideas in my paper actually originate there. I’ll list some references at the end. So I hope that the toolbox of ordered commutative monoids will turn out to be useful for this. But here I would like to talk about an example application that is much easier to understand, but no less difficult to analyze: graph theory and the resource theory of zero-error communication.

A graph consists of a bunch of nodes connected by a bunch of edges, for example like this:

This particular graph is the pentagon graph or 5-cycle. To give it some resource-theoretic interpretation, think of it as the distinguishability graph of a communication channel, where the nodes are the symbols that can be sent across the channel, and two symbols share an edge if and only if they can be unambiguously decoded. For example, the pentagon graph roughly corresponds to the distinguishability graph of my handwriting, when restricted to five letters only:

So my ‘w’ is distinguishable from my ‘u’, but it may be confused for my ‘m’. In order to communicate unambiguously, it looks like I should restrict myself to using only two of those letters in writing, since any third letter may be mistaken for one of the chosen two. But alternatively, I could use a block code to create context around each letter which allows for perfect disambiguation. This is what happens in practice: I write in natural language, where an entire word is usually not ambiguous.

One can now also consider graph homomorphisms, which are maps like this:

The numbers on the nodes indicate where each node on the left gets mapped to. Formally, a graph homomorphism is a function taking nodes to nodes such that adjacent nodes get mapped to adjacent nodes. If a homomorphism G\to H exists between graphs G and H, then we also write H\geq G; in terms of communication channels, we can interpret this as saying that H simulates G, since the homomorphism provides a map between the symbols which preserves distinguishability. A ‘code’ for a communication channel is then just a homomorphism from the complete graph in which all nodes share an edge to the graph which describes the channel. With this ordering structure, the collection of all finite graphs forms an ordered set. This ordered set has an intricate structure which is intimately related to some big open problems in graph theory.

We can also combine two communication channels to form a compound one. Going back to the handwriting example, we can consider the new channel in which the symbols are pairs of letters. Two such pairs are distinguishable if and only if either the first letters of each pair are distinguishable or the second letters are,

(a,b) \sim (a',b') \:\Leftrightarrow\: a\sim a' \:\lor\: b\sim b'

When generalized to arbitrary graphs, this yields the definition of disjunctive product of graphs. It is not hard to show that this equips the ordered set of graphs with a binary operation compatible with the ordering, so that we obtain an ordered commutative monoid denoted Grph. It mathematically formalizes the resource theory of zero-error communication.
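
For readers who want to experiment, the disjunctive product is easy to build by hand; as far as I know networkx has no built-in for it, so here is a minimal sketch (the helper name is my own):

```python
# Disjunctive product: (a,b) ~ (a',b') iff a ~ a' or b ~ b',
# where edges stand for distinguishability.
import networkx as nx
from itertools import product

def disjunctive_product(G, H):
    P = nx.Graph()
    P.add_nodes_from(product(G.nodes, H.nodes))
    for (a, b), (a2, b2) in product(list(P.nodes), list(P.nodes)):
        if (a, b) != (a2, b2) and (G.has_edge(a, a2) or H.has_edge(b, b2)):
            P.add_edge((a, b), (a2, b2))
    return P

C5 = nx.cycle_graph(5)                        # the pentagon
P = disjunctive_product(C5, C5)
print(P.number_of_nodes(), P.number_of_edges())   # 25 200
```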

Using the toolbox of ordered commutative monoids combined with some concrete computations on graphs, one can show that Grph is not cancellative: if K_{11} is the complete graph on 11 nodes, then 3C_5\not\geq K_{11}, but there exists a graph G such that

3 C_5 + G \geq K_{11} + G

The graph G turns out to have 136 nodes. This result seems to be new. But if you happen to have seen something like this before, please let me know!

Last time, we also talked about rates of conversion. In Grph, it turns out that some of these correspond to famous graph invariants! For example, the rate of conversion from a graph G to the single-edge graph K_2 is the Shannon capacity \Theta(\overline{G}), where \overline{G} is the complement graph. This is no surprise, since \Theta was originally defined by Shannon with precisely this rate in mind, although he did not use the language of ordered commutative monoids. In any case, the Shannon capacity \Theta(\overline{G}) is a graph invariant notorious for its complexity: it is not known whether there exists an algorithm to compute it! But an application of the Rate Theorem from Part 2 gives us a formula for the Shannon capacity:

\Theta(\overline{G}) = \inf_f f(G)

where f ranges over all graph invariants which are monotone under graph homomorphisms, multiplicative under disjunctive product, and normalized such that f(K_2) = 2. Unfortunately, this formula still does not produce an algorithm for computing \Theta. But it nonconstructively proves the existence of many new graph invariants f which approximate the Shannon capacity from above.
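
One classical data point behind all this can be checked by brute force: Shannon’s lower bound \Theta(C_5) \geq \sqrt{5} comes from a 5-element independent set in the strong square of the pentagon, and Lovász later proved equality. Since C_5 is self-complementary, the complement convention above makes no difference in this case. A short networkx check, feasible because the graph has only 25 nodes:

```python
# Independence number of the strong square of the 5-cycle, computed
# as a maximum clique of the complement graph (Bron-Kerbosch).
import networkx as nx

C5 = nx.cycle_graph(5)
square = nx.strong_product(C5, C5)
alpha = max(len(c) for c in nx.find_cliques(nx.complement(square)))
print(alpha)            # 5, giving the bound Theta(C5) >= 5 ** 0.5
```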

Although my story ends here, I also feel that the whole project has barely started. There are lots of directions to explore! For example, it would be great to fit Shannon’s noisy channel coding theorem into this framework, but this has turned out to be technically challenging. If you happen to be familiar with rate-distortion theory and you want to help out, please get in touch!

References

Here is a haphazard selection of references on resource theories in quantum information theory and related fields:

• Igor Devetak, Aram Harrow and Andreas Winter, A resource framework for quantum Shannon theory.

• Gilad Gour, Markus P. Müller, Varun Narasimhachar, Robert W. Spekkens and Nicole Yunger Halpern, The resource theory of informational nonequilibrium in thermodynamics.

• Fernando G.S.L. Brandão, Michał Horodecki, Nelly Huei Ying Ng, Jonathan Oppenheim and Stephanie Wehner, The second laws of quantum thermodynamics.

• Iman Marvian and Robert W. Spekkens, The theory of manipulations of pure state asymmetry: basic tools and equivalence classes of states under symmetric operations.

• Elliott H. Lieb and Jakob Yngvason, The physics and mathematics of the second law of thermodynamics.


Resource Convertibility (Part 2)

10 April, 2015

guest post by Tobias Fritz

In Part 1, I introduced ordered commutative monoids as a mathematical formalization of resources and their convertibility. Today I’m going to say something about what to do with this formalization. Let’s start with a quick recap!

Definition: An ordered commutative monoid is a set A equipped with a binary relation \geq, a binary operation +, and a distinguished element 0 such that the following hold:

• + and 0 equip A with the structure of a commutative monoid;

• \geq equips A with the structure of a partially ordered set;

• addition is monotone: if x\geq y, then also x + z \geq y + z.

Recall also that we think of the x,y\in A as resource objects such that x+y represents the object consisting of x and y together, and x\geq y means that the resource object x can be converted into y.

When confronted with an abstract definition like this, many people ask: so what is it useful for? The answer to this is twofold: first, it provides a language which we can use to guide our thoughts in any application context. Second, the definition itself is just the very start: we can now also prove theorems about ordered commutative monoids, which can be instantiated in any particular application context. So the theory of ordered commutative monoids will provide a useful toolbox for talking about concrete resource theories and studying them. In the remainder of this post, I’d like to say a bit about what this toolbox contains. For more, you’ll have to read the paper!

To start, let’s consider catalysis as one of the resource-theoretic phenomena neatly captured by ordered commutative monoids. Catalysis is the phenomenon that certain conversions become possible only due to the presence of a catalyst, which is an additional resource object which does not get consumed in the process of the conversion. For example, we have

\text{timber + nails}\not\geq \text{table},

\text{timber + nails + saw + hammer} \geq \text{table + saw + hammer}

because making a table from timber and nails requires a saw and a hammer as tools. So in this example, ‘saw + hammer’ is a catalyst for the conversion of ‘timber + nails’ into ‘table’. In mathematical language, catalysis occurs precisely when the ordered commutative monoid is not cancellative, which means that x + z\geq y + z sometimes holds even though x\geq y does not. So, the notion of catalysis perfectly matches up with a very natural and familiar notion from algebra.
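
Here is a toy implementation of this idea, with resource objects as multisets and convertibility generated by a list of rules. It is a rough sketch of my own: the rule set and the decision to allow discarding leftovers are modeling choices, not part of the formalism.

```python
# Resource objects are multisets (collections.Counter); + is multiset
# union; x >= y iff y can be reached from x by applying rules, where
# leftovers may be discarded.
from collections import Counter

RULES = [(Counter(['timber', 'nails', 'saw', 'hammer']),
          Counter(['table', 'saw', 'hammer']))]

def geq(x, y, depth=3):
    if all(x[k] >= v for k, v in y.items()):      # y is contained in x
        return True
    if depth == 0:
        return False
    for lhs, rhs in RULES:
        if all(x[k] >= v for k, v in lhs.items()):
            if geq(x - lhs + rhs, y, depth - 1):
                return True
    return False

timber_nails = Counter(['timber', 'nails'])
tools = Counter(['saw', 'hammer'])
print(geq(timber_nails, Counter(['table'])))            # False
print(geq(timber_nails + tools, Counter(['table'])))    # True: catalysis
```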

One can continue along these lines and study those ordered commutative monoids which are cancellative. It turns out that every ordered commutative monoid can be made cancellative in a universal way; in the resource-theoretic interpretation, this boils down to replacing the convertibility relation by catalytic convertibility, in which x is declared to be convertible into y as soon as there exists a catalyst which achieves this conversion. Making an ordered commutative monoid cancellative like this is a kind of ‘regularization': it leads to a mathematically more well-behaved structure. As it turns out, there are several additional steps of regularization that can be performed, and all of these are both mathematically natural and have an appealing resource-theoretic interpretation. These regularizations successively take us from the world of ordered commutative monoids to the realm of linear algebra and functional analysis, where powerful theorems are available. For now, let me not go into the details, but only try to summarize one of the consequences of this development. This requires a bit of preparation.

In many situations, it is not just of interest to convert a single copy of some resource object x into a single copy of some y; instead, one may be interested in converting many copies of x into many copies of y all together, and thereby maximizing (or minimizing) the ratio of the resulting number of y’s compared to the number of x’s that get consumed. This ratio is measured by the maximal rate:

\displaystyle{ R_{\mathrm{max}}(x\to y) = \sup \left\{ \frac{m}{n} \:|\: nx \geq my \right\} }

Here, m and n are natural numbers, and nx stands for the n-fold sum x+\cdots+x, and similarly for my. So this maximal rate quantifies how many y’s we can get out of one copy of x, when working in a ‘mass production’ setting. There is also a notion of regularized rate, which has a slightly more complicated definition that I don’t want to spell out here, but is similar in spirit. The toolbox of ordered commutative monoids now provides the following result:

Rate Theorem: If x\geq 0 and y\geq 0 in an ordered commutative monoid A which satisfies a mild technical assumption, then the maximal regularized rate from x to y can be computed like this:

\displaystyle{ R_{\mathrm{max}}^{\mathrm{reg}}(x\to y) = \inf_f \frac{f(y)}{f(x)} }

where f ranges over all functionals on A with f(y)\neq 0.

Wait a minute, what’s a ‘functional’? It’s defined to be a map f:A\to\mathbb{R} which is monotone,

x\geq y \:\Rightarrow\: f(x)\geq f(y)

and additive,

f(x+y) = f(x) + f(y)

In economic terms, we can think of a functional as a consistent assignment of prices to all resource objects. If x is at least as useful as y, then the price of x should be at least as high as the price of y; and the price of two objects together should be the sum of their individual prices. So the f in the rate formula above ranges over all ‘markets’ on which resource objects can be ‘traded’ at consistent prices. The term ‘functional’ is supposed to hint at a relation to functional analysis. In fact, the proof of the theorem crucially relies on the Hahn–Banach Theorem.

The mild technical assumption mentioned in the Rate Theorem is that the ordered commutative monoid needs to have a generating pair. This turns out to hold in the applications that I have considered so far, and I hope that it will turn out to hold in most others as well. For the full gory details, see the paper.
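
To make the definition of the (non-regularized) maximal rate concrete, here is a tiny brute-force sketch. The oracle `geq(n, m)`, deciding whether nx \geq my, is hypothetical, and the search is cut off at a finite bound, so this only approximates the supremum:

```python
# Brute-force the supremum in R_max(x -> y) = sup { m/n : n x >= m y }.
from fractions import Fraction

def max_rate(geq, bound=20):
    best = Fraction(0)
    for n in range(1, bound + 1):
        for m in range(1, bound + 1):
            if geq(n, m):
                best = max(best, Fraction(m, n))
    return best

# Toy monoid: x is worth 3 units, y is worth 2, and n x >= m y
# exactly when 3n >= 2m; the maximal rate is then 3/2.
print(max_rate(lambda n, m: 3 * n >= 2 * m))   # 3/2
```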

So this provides some idea of what kinds of gadgets one can find in the toolbox of ordered commutative monoids. Next time, I’ll show some applications to graph theory and zero-error communication and say a bit about where this project might be going next.


Resource Convertibility (Part 1)

7 April, 2015

guest post by Tobias Fritz

Hi! I am Tobias Fritz, a mathematician at the Perimeter Institute for Theoretical Physics in Waterloo, Canada. I like to work on all sorts of mathematical structures which pop up in probability theory, information theory, and other sorts of applied math. Today I would like to tell you about my latest paper:

The mathematical structure of theories of resource convertibility, I.

It should be of interest to Azimuth readers as it forms part of what John likes to call ‘green mathematics’. So let’s get started!

Resources and their management are an essential part of our everyday life. We deal with the management of time or money pretty much every day. We also consume natural resources in order to afford food and amenities for (some of) the 7 billion people on our planet. Many of the objects that we deal with in science and engineering can be considered as resources. For example, a communication channel is a resource for sending information from one party to another. But for now, let’s stick with a toy example: timber and nails constitute a resource for making a table. In mathematical notation, this looks like so:

\mathrm{timber} + \mathrm{nails} \geq \mathrm{table}

We interpret this inequality as saying that “given timber and nails, we can make a table”. I like to write it as an inequality like this, which I think of as stating that having timber and nails is at least as good as having a table, because the timber and nails can always be turned into a table whenever one needs a table.

To be more precise, we should also take into account that making the table requires some tools. These tools do not get consumed in the process, so we also get them back out:

\text{timber} + \text{nails} + \text{saw} + \text{hammer} \geq \text{table} + \text{hammer} + \text{saw}

Notice that this kind of equation is analogous to a chemical reaction equation like this:

2 \mathrm{H}_2 + \mathrm{O}_2 \geq 2 \mathrm{H}_2\mathrm{O}

So given two hydrogen molecules and an oxygen molecule, we can let them react so as to form two molecules of water. In chemistry, this kind of equation would usually be written with an arrow ‘\rightarrow’ instead of an ordering symbol ‘\geq’, but here we interpret the equation slightly differently. As with the timber and nails above, the inequality says that if we have two hydrogen molecules and an oxygen molecule, then we can let them react to form two molecules of water, but we don’t have to. In this sense, having two hydrogen molecules and an oxygen molecule is at least as good as having two molecules of water.

So what’s going on here, mathematically? In all of the above equations, we have a bunch of stuff on each side and an inequality ‘\geq’ in between. The stuff on each side consists of a bunch of objects tacked together via ‘+’ . With respect to these two pieces of structure, the collection of all our resource objects forms an ordered commutative monoid:

Definition: An ordered commutative monoid is a set A equipped with a binary relation \geq, a binary operation +, and a distinguished element 0 such that the following hold:

• + and 0 equip A with the structure of a commutative monoid;

• \geq equips A with the structure of a partially ordered set;

• addition is monotone: if x\geq y, then also x + z \geq y + z.

Here, the third axiom is the most important, since it tells us how the additive structure interacts with the ordering structure.

Ordered commutative monoids are the mathematical formalization of resource convertibility and combinability as follows. The elements x,y\in A are the resource objects, corresponding to the ‘collections of stuff’ in our earlier examples, such as x = \text{timber} + \text{nails} or y = 2 \text{H}_2 + \text{O}_2. Then the addition operation simply joins up collections of stuff into bigger collections of stuff. The ordering relation \geq is what formalizes resource convertibility, as in the examples above. The third axiom states that if we can convert x into y, then we can also convert x together with z into y together with z for any z, for example by doing nothing to z.

A mathematically minded reader might object that requiring A to form a partially ordered set under \geq is too strong a requirement, since it requires two resource objects to be equal as soon as they are mutually interconvertible: x \geq y and y \geq x implies x = y. However, I think that this is not an essential restriction, because we can regard this implication as the definition of equality: ‘x = y’ is just a shorthand notation for ‘x\geq y and y\geq x’ which formalizes the perfect interconvertibility of resource objects.

We could now go back to the original examples and try to model carpentry and chemistry in terms of ordered commutative monoids. But as a mathematician, I needed to start out with something mathematically precise and rigorous as a testing ground for the formalism. This helps ensure that the mathematics is sensible and useful before diving into real-world applications. So, the main example in my paper is the ordered commutative monoid of graphs, which has a resource-theoretic interpretation in terms of zero-error information theory. As graph theory is a difficult and traditional subject, this application constitutes the perfect training camp for the mathematics of ordered commutative monoids. I will get to this in Part 3.

In Part 2, I will say something about what one can do with ordered commutative monoids. In the meantime, I’d be curious to know what you think about what I’ve said so far!



A Networked World (Part 3)

2 April, 2015

guest post by David Spivak

Part 1: The problem.

Part 2: Creating a knowledge network.

From parts to wholes

Remember where we were. Ologs, linguistically-enhanced sketches, just weren’t doing justice to the idea that each step in a recipe is itself a recipe. But the idea seemed ripe for mathematical formulation.

Thus, I returned to a question I’d wondered about in the very beginning: how is macro-understanding built from micro-understanding? How can multiple individual humans come together, like cells in a multicellular organism, to make a whole that is itself a surviving decision-maker?

There were, and continue to be, a lot of “open-to-Spivak” questions one can ask: How are stories about events built from sub-stories about sub-events? How is macro-economics built from micro-economics? Are large-scale phenomena always based on, and relatable to, interactions between smaller-scale phenomena? For example, I still want to understand, in very basic terms, how classical (large-scale) phenomena are a manifestation of quantum phenomena.

Neuroscience professor Michael Gazzaniga has a similar question: How does cognition arise from the interaction of tiny event-noticers, and how does society emerge and affect individual brains? As he put it in the last paragraph of his book Who’s in Charge?, we are in need of a language by which to understand the interfaces of “our layered hierarchical existence”, because doing so “holds the answer to our quest for understanding mind/brain relationships.” He goes on:

Understanding how to develop a vocabulary for those layered interactions, for me, constitutes the scientific problem of this century.

I tend to be infatuated with this same kind of idea: cognition emerging from interactions between sub-cognitive pieces. This is what got me interested in what I now call “operadic modularity”. Luckily again, my Office of Naval Research hero (now at the Air Force Office of Scientific Research) granted me a chance to study it.

The idea is this: modularity is about arranging many modules into a single whole, which is another module, usable as part of a larger system. Each system of modules is a module of its own, and we see the nesting phenomenon. Operads can model the language of nestable interface arrangements, and their algebras can model how interfaces are filled in with the required sorts of modules.

Here, by operad, I mean symmetric colored operad, or symmetric multicategory. Operads are like categories—they have objects, morphisms, identities, and a unital and associative composition formula—the difference is that the domain of a morphism is a finite set of objects (in an operad) rather than a single object (as in a category). So morphisms in an operad are written like \varphi\colon X_1,\ldots,X_n\to Y; we call such a morphism n-ary.

An early example, formulated operadically by Peter May (the inventor of operads), is called the little 2-cubes operad, denoted E_2. It has only one object, say a square ⬜, and an n-ary morphism

⬜, …, ⬜ \longrightarrow ⬜

is any arrangement of n non-overlapping squares in a larger square. These arrangements clearly display a nesting property.

Another source of examples comes from the fact that every monoidal category (C,\otimes) has an underlying operad \mathcal{O}_C, with

\mathrm{Hom}_{\mathcal{O}_C}(X_1,\ldots,X_n;Y):=\mathrm{Hom}_{C}(X_1\otimes\cdots\otimes X_n,Y)

(Either C was symmetric monoidal to begin with or you can add in symmetries, roughly by multiplying each hom-set by n!.) The operad \mathbf{Set} underlying the cartesian monoidal category (\mathbf{Set},\times,\{1\}) of sets is an example I’ll use later.

If you want to think about operads as modeling modularity—building one thing out of many—the first trick is to imagine the codomain object as the exterior and all the domain objects as sitting inside it, on the interior. May’s little 2-cubes operad gives the idea: squares in a square. From now on, if I speak of many little objects arranged inside one big object, I always mean it this way: the interior objects constitute the domain, the exterior object is the codomain, and the arrangement itself is the morphism. These arrangements can be nested inside one another, corresponding to composition in the operad.

What are other types of nested phenomena, which we might be able to think about operadically? How about circles wired together in a larger circle? An object in this operad is a circle with some number of wires sticking out; let’s call it a ported-circle. A morphism from n-many ported-circles to one ported-circle is any connection pattern involving—i.e., wiring together of—the ports. This description can be interpreted in a few different ways; I usually mean the underlying operad of the monoidal category of “sets and cospans under disjoint union”, but the “spaghetti and meatballs operad” of circular planar arc diagrams is another good interpretation.

Once you have an operad \mathcal{O}, you have a kind of calculus for nestable arrangements. As I’ve been saying, I often think of the morphisms in an operad in terms of pictures, such as wiring diagrams or squares-in-a-square. If you say you want these pictures to “mean something”, you’re probably looking for an algebra F:\mathcal{O}\to\mathbf{Set}. This operad functor F, which acts like a lax functor between monoidal categories, would tell you the set F(X) of fillers or fills that can be placed into each object X\in\mathcal{O} in the picture.

I often think of the operad \mathcal{O} as a picture language, and the \mathcal{O}-algebra F its intended semantics. Not only does such a set-valued functor on \mathcal{O} give you a set of fills for each object X\in\mathcal{O}, it would also give you a formula for producing a large-scale fill (element of F(Y)) from any arrangement \varphi\colon X_1,\ldots,X_n\to Y of small-scale fills (element of F(X_1)\times\cdots\times F(X_n)).

For example, given a pointed space A, you can ask for the set of based 2-spheres

L_A(⬜)=\{\ell\colon S^2\to A\}

in it. Here, \ell is any element of L_A(⬜). Think of a based sphere in A as a continuous map from the filled-in square to A that sends the boundary of the square to the basepoint of A. Given n spheres \ell_1,\ldots, \ell_n in A, an arrangement \varphi of non-overlapping squares in a square prescribes a new based sphere L_A(\varphi)(\ell_1,\ldots, \ell_n)\in L_A(⬜). The idea is that you send all the unused space in the big exterior square to the basepoint of A, and follow the instructions \ell_i when you get to the ith little square inside. Thus any “2-fold loop space” gives an algebra of May’s little 2-cubes operad.

So recently, I’ve been thinking a lot about operadic modularity, i.e., cases in which a thing can be built out of a bunch of simpler things. Note that not all cases of “nesting” have such a clear picture language. For example, context-free grammars are modular: you build [postal-address] out of [name-part], [street-address] and [zip-part], you build each of these, e.g., [name-part], in any of several ways (there is an optional suffix part and the option to abbreviate your first name using an initial). The point is, you build things out of smaller parts, nested inside still smaller parts. Seeing context-free grammars as free operads is one of the things Hermida, Makkai, and Power explained in their paper on higher dimensional multigraphs.
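
A cheap way to see grammar-style nesting as operadic composition is substitution of templates into templates: an n-ary operation is a template with n holes, and composing means plugging smaller templates into the holes. A toy Python sketch, entirely my own illustration, using str.format as the substitution:

```python
# Substitution of templates: associativity of plugging-in is exactly
# the operad composition law.
def plug(template, *fillers):
    """Plug the i-th filler into the i-th hole '{}' of the template."""
    return template.format(*fillers)

postal = "{}\n{}\n{}"                     # 3-ary: name, street, zip
name   = plug("{} {}", "J.", "Doe")       # a 2-ary template, filled
print(plug(postal, name, "12 Main St", "02139"))
```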

The operadic notion of modularity can also be applied to building hierarchical protein materials. Like context-free grammars, the operad of such materials doesn’t come with a nice picture language. However, it can be formalized as an operad nonetheless. That is, there is a grammar of actions that one can apply to a bunch of polypeptides, actions such as “attach”, “overlay”, “rigidMotion”, “helix”, “makeArray”. From these you can build proteins that are quite complex from a simple vocabulary of 20 amino acids. I’ve joined forces with Tristan Giesa and Ravi Jagadeesan to make such a program. The software package, called Matriarch, for “Materi-als Arch-itecture”, should be available soon as an open source Python library.

There are lots of operads whose morphisms look like string diagrams of various sorts. These operads, which generalize a set-theoretic version of May’s topological little 2-cubes, have clear picture languages. The algebras on such “visualizable” operads can model things like databases and dynamical systems. Over the past year or so, I’ve been writing a series of “worked example” papers, such as those above, in which I explain various picture languages and semantics for them.

I find that operads provide a nice language in which to discuss string diagrams of various sorts. String-diagrammatic languages exist for many different “doctrines”, such as categories, categories without identities, monoidal categories, cartesian monoidal categories, traced monoidal categories, operads, etc. For example, Dylan Rupel and I realized that traced monoidal categories are (well, if you have enough equipment and an expert like Patrick Schultz around) algebras on the operad of oriented 1-cobordisms. It seems to me that the other doctrines above are similarly associated to operads that are “nearby” Cob, e.g., sub-operads of Cob, operads under Cob, etc. Maps between these various operads should induce known adjunctions between the corresponding doctrines.

That brings us to present day. There will be a workshop in Turin in a couple of months, and I think it’ll be a lot of fun:

• Categorical Foundations of Network Theory, May 25-28, ISI Foundation, Turin, Italy.

I’m looking forward to hearing from John Baez, Jacob Biamonte, Eugene Lerman, Tobias Fritz and others, about what they’ve been thinking about recently. I think we’ll find interesting common ground. If there’s interest, I’d be happy to talk about categorical models for how information is communicated throughout a network, and whether this gives any insight that can lead to better decision-making by the larger whole.


A Networked World (Part 2)

30 March, 2015

guest post by David Spivak

Part 1: The problem.

Creating a knowledge network

In 2007, I asked myself: as mathematically as possible, what can formally ground meaningful information, including both its successful communication and its role in decision-making? I believed that category theory could be useful in formalizing the type of object that we call information, and the type of relationship that we call communication.

Over the next few years, I worked on this project. I tried to understand what information is, how it is stored, and how it can be transferred between entities that think differently. Since databases store information, I wanted to understand databases category-theoretically. I eventually decided that databases are basically just categories \mathcal{C}, corresponding to a collection of meaningful concepts and connections between them, and that these categories are equipped with functors \mathcal{C}\to\mathbf{Set}. Such a functor assigns to each meaningful concept a set of examples and connects them as dictated by the morphisms of \mathcal{C}. I later found out that this “databases as categories” idea was not original; it is due to Rosebrugh and others. My view on the subject has matured a bit since then, but I still like this basic conception of databases.
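
In miniature, the idea looks like this: a schema lists concepts and arrows between them, and an instance assigns a set of rows to each concept and a function to each arrow. A small Python sketch of my own devising (just the flavor, not Rosebrugh’s formalism):

```python
# 'Databases as categories' in miniature: check that an instance is
# functorial, i.e. each arrow's function goes between the right sets.
schema = {'objects': ['Person', 'Mother'],
          'arrows':  {'has_mother': ('Person', 'Mother')}}

instance = {'Person': {'alice', 'bob'},
            'Mother': {'carol'},
            'has_mother': {'alice': 'carol', 'bob': 'carol'}}

for arrow, (src, tgt) in schema['arrows'].items():
    f = instance[arrow]
    assert set(f) == instance[src], f'{arrow} not total on {src}'
    assert set(f.values()) <= instance[tgt], f'{arrow} misses {tgt}'
print('instance is functorial')
```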

If we model a person’s knowledge as a database (interconnected tables of examples of things and relationships), then the network of knowledgeable humans could be conceptualized as a simplicial complex equipped with a sheaf of databases. Here, a vertex represents an individual, with her database of knowledge. An edge represents a pair of individuals and a common ground database relating their individual databases. For example, you and your brother have a database of concepts and examples from your history. The common-ground database is like the intersection of the two databases, but it could be smaller (if the people don’t yet know they agree on something). In a simplicial complex, there are not only vertices and edges, but also triangles (and so on). These would represent databases held in common between three or more people.

I wanted “regular people” to actually make such a knowledge network, i.e., to share their ideas in the form of categories and link them together with functors. Of course, most people don’t know categories and functors, so I thought I’d make things easier for them by equipping categories with linguistic structures: text boxes for objects, labeled arrows for morphisms. For example, “a person has a mother” would be a morphism from the “person” object, to the “mother” object. I called such a linguistic category an olog, playing on the word blog. The idea (originally inspired during a conversation with my friend Ralph Hutchison) was that I wanted people, especially scientists, to blog their ontologies, i.e., to write “onto-logs” like others make web-logs.

Ologs codify knowledge. They are like concept webs, except with more rules that allow them to simultaneously serve as database schemas. By introducing ologs, I hoped I could get real people to upload their ideas into what is now called the cloud, and make the necessary knowledge network. I tried to write my papers to engage an audience of intelligent lay-people rather than for an audience of mathematicians. It was a risk, but to me it was the only honest approach to the larger endeavor.

(For students who might want to try going out on a limb like this, you should know that I was offered zero jobs after my first postdoc at University of Oregon. The risk was indeed risky, and one has to be ok with that. I personally happened to be the beneficiary of good luck and was offered a grant, out of the clear blue sky, by a former PhD in algebraic geometry, who worked at the Office of Naval Research at the time. That, plus the helping hands of Haynes Miller and many other brilliant and wonderful people, can explain how I lived to tell the tale.)

So here’s how the simplicial complex of ologs would ideally help humanity steer. Suppose we say that in order for one person to learn from another, the two need to find a common language and align some ideas. This kind of (usually tacit) agreement on, or alignment of, an initial common-ground vocabulary and concept-set is important to get their communication onto a proper footing.

For two vertices in such a simplicial network, the richer their common-ground olog (i.e., the database corresponding to the edge between them) is, the more quickly and accurately the vertices can share new ideas. As ideas are shared over a simplex, all participating databases can be updated, hence making the communication between them richer. In around 2010, Mathieu Anel and I worked out a formal way this might occur; however, we have not yet written it up. The basic idea can be found here.

In this setup, the simplicial complex of human knowledge would grow organically. Scientists, businesspeople, and others might find benefit in ologging their ideas and conceptions, and using them to learn from their peers. I imagined a network organizing itself, where simplices of like-minded people could share information with neighboring groups across common faces.

I later wrote a book called Category Theory for the Sciences, available free online, to help scientists learn how category theory could apply to familiar situations like taxonomies, graphs, and symmetries. Category theory, simply explained, becomes a wonderful key to the whole world of pure mathematics. It’s the closest thing we have to a universal language of thought, and therefore an appropriate language for forming connections.

My working hypothesis for the knowledge network was this. The information held by people whose worldview is more true—more accurate—would have better predictive power, i.e., better results. This is by definition: I define one’s knowledge to be accurate to the extent that, when he uses this knowledge to direct his actions, he has good luck handling his worldly affairs. As Louis Pasteur said, “Chance favors the prepared mind.” It follows that if someone has a track record of success, others will value finding broad connections into his olog. However, to link up with someone you must find a part of your olog that aligns with his—a functorial connection—and you can only receive meaningful information from him to the extent that you’ve found such common ground.

Thus, people who like to live in fictional worlds would find it difficult to connect, except to other like-minded “Obama’s a Christian”-type people. To the extent you are embedded in a fictional—less accurate, less predictive—part of the network, you will find it difficult to establish functorial connections to regions of more accurate knowledge, and therefore you can’t benefit from the predictive and conceptual value of that knowledge.

In other words, people would be naturally inclined to try to align their understanding with people who are better informed. I felt hopeful that this kind of idea could lead to a system in which honesty and accuracy were naturally rewarded. At the very least, those who used it could share information much more effectively than they do now. This was my plan; I just had to make it real.

I had a fun idea for publicizing ologs. The year was 2008, and I remember thinking it would be fantastic if I could olog the political platforms and worldviews of Barack Obama and Sarah Palin. I wished I could sit down with them and other politicians and help them write ologs about what they believed and wanted for the country. I imagined that some politicians might have ologs that look like a bunch of disconnected text boxes—like a brain with neurons but no synapses—a collection of talking points but no real substantive ideas.

Anyway, there I was, trying to understand everything this way: all information was categories (or perhaps sketches) and presheaves. I would work with interested people from any academic discipline, such as materials science, to make ologs about whatever information they wanted to record category-theoretically. Ologs weren’t a theory of everything, but instead, as Jack Morava put it, a theory of anything.

One day I was working on a categorical sketch to model processes within processes, but somehow it really wasn’t working properly. The idea was simple: each step in a recipe is a mini-recipe of its own. Chopping the carrots, for example, means getting out a knife and cutting board, putting a carrot on the board, and bringing the knife down successively along it. You can keep zooming into any of these steps and see it as its own process. So there is some kind of nested, fractal-like behavior here. The olog I made could model the idea of steps in a recipe, but I found it difficult to encode the fact that each step was itself a recipe.
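Here is a minimal sketch, entirely my own and not part of the olog formalism, of the recursive structure that was hard to encode: a recipe whose steps are themselves recipes.

# Processes within processes: each step is either atomic or its own recipe.
from dataclasses import dataclass, field

@dataclass
class Recipe:
    name: str
    steps: list["Recipe"] = field(default_factory=list)  # no steps = atomic

chop_carrots = Recipe("chop the carrots", [
    Recipe("get out a knife and cutting board"),
    Recipe("put a carrot on the board"),
    Recipe("bring the knife down successively along it"),
])

soup = Recipe("make soup", [chop_carrots, Recipe("boil the broth")])

def depth(r: Recipe) -> int:
    # How many levels of sub-recipes we can zoom into.
    return 1 + max((depth(s) for s in r.steps), default=0)

print(depth(soup))  # 3: recipe -> step -> sub-step

The self-reference in the type (a Recipe contains Recipes) is exactly the fractal nesting described above, and it is awkward to capture in a fixed, flat database schema.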

This nesting thing seemed like an idea that mathematics should treat beautifully, and ologs weren’t doing it justice. It was then that I finally admitted that there might be other fish in the mathematical sea.

Part 3: From parts to wholes.


A Networked World (Part 1)

27 March, 2015

guest post by David Spivak

The problem

The idea that’s haunted me, and motivated me, for the past seven years or so came to me while reading a book called The Moment of Complexity: Our Emerging Network Culture, by Mark C. Taylor. It was a fascinating book about how our world is becoming increasingly networked—wired up and connected—and that this is leading to a dramatic increase in complexity. I’m not sure if it was stated explicitly there, but I got the idea that with the advent of the World Wide Web in 1991, a new neural network had been born. The lights had been turned on, and planet Earth now had a brain.

I wondered how far this idea could be pushed. Is the world alive? Is it a single living thing? If it is, in the sense I meant, then its primary job is to survive, and to survive it’ll have to make decisions. So there I was in my living room thinking, “oh my god, we’ve got to steer this thing!”

Taylor pointed out that as complexity increases, it’ll become harder to make sense of what’s going on in the world. That seemed to me like a big problem on the horizon, because in order to make good decisions, we need to have a good grasp on what’s occurring. I became obsessed with the idea of helping my species through this time of unprecedented complexity. I wanted to understand what was needed in order to help humanity make good decisions.

What seemed important as a first step is that we humans need to unify our understanding—to come to agreement—on matters of fact. For example, humanity still doesn’t know whether global warming is happening. Sure, almost all credible scientists have agreed that it is happening, but does that agreement steer money into programs that will slow it or mitigate its effects? This isn’t an issue of what course to take to solve a given problem; it’s about whether the problem even exists! It’s like when people were talking about Obama being a Muslim, born in Kenya, etc., and some people were denying it, saying he was born in Hawaii. If that’s true, why did he repeatedly refuse to show his birth certificate?

It is important, as a first step, to improve the extent to which we agree on the most obvious facts. This kind of “sanity check” is a necessary foundation for discussions about what course we should take. If we want to steer the ship, we have to make committed choices, like “we’re turning left now,” and we need to do so as a group. That is, there needs to be some amount of agreement about the way we should steer, so we’re not fighting ourselves.

Luckily there are many cases of a group that needs to, and is able to, steer itself as a whole. For example, as a human, my neural brain works with my cells to steer my body. Similarly, corporations steer themselves based on boards of directors, and based on flows of information, which run bureaucratically and/or informally between different parts of the company. Note that in neither case is there any suggestion that each part—cell, employee, or corporate entity—is “rational”; they’re all just doing their thing. What we do see in these cases is that the group members work together in a context where information and internal agreement are valued and often attained.

It seemed to me that intelligent, group-directed steering is possible. It does occur. But what’s the mechanism by which it happens, and how can we think about it? I figured that the way we steer, i.e., make decisions, is by using information.

I should be clear: whenever I say information, I never mean it “in the sense of Claude Shannon”. As beautiful as Shannon’s notion of information is, he’s not talking about the kind of information I mean. He explicitly said in his seminal paper that information in his sense is not concerned with meaning:

Frequently the messages have meaning; that is they refer to or are correlated according to some system with certain physical or conceptual entities. These semantic aspects of communication are irrelevant to the engineering problem. The significant aspect is that the actual message is one selected from a set of possible messages.

In contrast, I’m interested in the semantic stuff, which flows between humans, and which makes possible decisions about things like climate change. Shannon invented a very useful quantitative measure of information, but one that deliberately ignores meaning: it depends only on a probability distribution over possible messages.
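For reference, Shannon’s measure assigns to a probability distribution p = (p_1, \ldots, p_n) over possible messages the entropy

H(p) = -\sum_{i=1}^n p_i \log_2 p_i,

which quantifies how unpredictable the messages are while saying nothing about what any of them mean.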

That’s not the kind of information I’m talking about. When I say “I want to know what information is”, I’m saying I want to formulate the notion of human-usable semantic meaning, in as mathematical a way as possible.

Back to my problem: we need to steer the ship, and to do so we need to use information properly. Unfortunately, I had no idea what information is, nor how it’s used to make decisions (let alone to make good ones), nor how it’s obtained from our interaction with the world. Moreover, I didn’t have a clue how the minute information-handling at the micro-level, e.g., done by cells inside a body or employees inside a corporation, would yield information-handling at the macro (body or corporate) level.

I set out to try to understand what information is and how it can be communicated. What kind of stuff is information? It seems to follow rules: facts can be put together to form new facts, but only in certain ways. I was once explaining this idea to Dan Kan, and he agreed, saying, “Yes, information is inherently a combinatorial affair.” What is the combinatorics of information?

Communication is similarly difficult to understand, once you dig into it. For example, my brain somehow enables me to use information, and so does yours. But when you look closely, our brains are wired up in personal and ad hoc ways, a bit like a fingerprint or retinal scan. I found it fascinating that two highly personalized semantic networks could interface well enough to collaborate effectively.

There are two issues that I wanted to understand, and by “understand” I mean “make mathematical to my own satisfaction.” The first is what information is, as structured stuff, and what communication is, as a transfer of structured stuff. The second is how communication at micro-levels can create, or be, understanding at macro-levels, i.e., how a group can steer as a singleton.

Looking back on this endeavor now, I remain concerned. Things are getting increasingly complex, in the sorts of ways predicted by Mark C. Taylor in his book, and we seem to be losing some control: of the NSA, of privacy, of people 3D printing guns or germs, of drones, of big financial institutions, etc.

Can we expect or hope that our species as a whole will make decisions that are healthy, like keeping the temperature down, given the information we have available? Are we in the driver’s seat, or is our ship currently in the process of spiraling out of our control?

Let’s assume that we don’t want to panic but that we do want to participate in helping the human community to make appropriate decisions. A possible first step could be to formalize the notion of “using information well”. If we could do this rigorously, it would go a long way toward helping humanity get onto a healthy course. Further, mathematics is one of humanity’s best inventions. Using this tool to improve our ability to use information properly is a non-partisan approach to addressing the issue. It’s not about fighting, it’s about figuring out what’s happening, and weighing all our options in an informed way.

So, I ask: What kind of mathematics might serve as a formal ground for the notion of meaningful information, including both its successful communication and its role in decision-making?

Part 2: Creating a knowledge network.

Part 3: From parts to wholes.

