Open Petri Nets (Part 3)

19 August, 2018

I’ve been talking about my new paper with Jade Master:

• John Baez and Jade Master, Open Petri nets.

In Part 1 we saw the double category of open Petri nets; in Part 2 we saw the reachability semantics for open Petri nets as a double functor. Now I’d like to wrap up by showing you the engine beneath the hood of our results.

I fell in love with Petri nets when I realized that they were really just presentations of free symmetric monoidal categories. If you like category theory, this turns Petri nets from something mysterious into something attractive.

In any category you can compose morphisms f\colon X \to Y and g\colon Y \to Z and get a morphism gf \colon X \to Z. In a monoidal category you can also tensor morphisms f \colon X \to X' and g \colon Y \to Y' and get a morphism f \otimes g \colon X \otimes Y \to X' \otimes Y'. This of course relies on your ability to tensor objects. In a symmetric monoidal category you also have X \otimes Y \cong Y \otimes X. And of course, there is more to it than this. But this is enough to get started.

A Petri net has ‘places’ and also ‘transitions’ going between multisets of places:

From this data we can try to generate a symmetric monoidal category whose objects are built from the places and whose morphisms are built from the transitions. So, for example, the above Petri net would give a symmetric monoidal category with an object

2 susceptible + infected

and a morphism from this to the object

susceptible + 2 infected

(built using the transition infection), and a morphism
from this to the object

susceptible + infected + resistant

(built using the transition recovery) and so on. Here we are using + to denote the tensor product in our symmetric monoidal category, as usual in chemistry.
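To make this concrete, here is a minimal Python sketch (my own illustration, not code from the paper) in which a marking—an object of the free symmetric monoidal category—is a multiset of places, and the transitions infection and recovery are read off from the example above:

from collections import Counter

# The epidemic Petri net described above: a marking is a multiset of places,
# and each transition has a source multiset and a target multiset.
transitions = {
    'infection': (Counter({'susceptible': 1, 'infected': 1}),  # source
                  Counter({'infected': 2})),                   # target
    'recovery':  (Counter({'infected': 1}),
                  Counter({'resistant': 1})),
}

def fire(marking, name):
    """Apply one transition to a marking, if enough tokens are present."""
    src, tgt = transitions[name]
    if any(marking[p] < n for p, n in src.items()):
        raise ValueError("not enough tokens to fire " + name)
    return marking - src + tgt

m = Counter({'susceptible': 2, 'infected': 1})  # 2 susceptible + infected
m = fire(m, 'infection')                        # susceptible + 2 infected
m = fire(m, 'recovery')                         # susceptible + infected + resistant
print(m)

Firing a transition just subtracts its source multiset and adds its target multiset; the free symmetric monoidal category keeps track of how such firings can be composed and tensored.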

When we do this sort of construction, the resulting symmetric monoidal category is ‘free’. That is, we are not imposing any really interesting equations: the objects are freely generated by the places in our Petri net by tensoring, and the morphisms are freely generated by the transitions by tensoring and composition.

That’s the basic idea. The problem is making this idea precise!

Many people have tried in many different ways. I like this approach the best:

• José Meseguer and Ugo Montanari, Petri nets are monoids, Information and Computation 88 (1990), 105–155.

but I think it can be simplified a bit, so let me describe what Jade and I did in our paper.

The problem is that there are different notions of symmetric monoidal category, and also different notions of morphism between Petri nets. We take the maximally strict approach, and work with ‘commutative’ monoidal categories. These are just commutative monoid objects in \mathrm{Cat}, so their associator:

\alpha_{A,B,C} \colon (A + B) + C \stackrel{\sim}{\longrightarrow} A + (B + C)

their left and right unitor:

\lambda_A \colon 0 + A \stackrel{\sim}{\longrightarrow} A

\rho_A \colon A + 0 \stackrel{\sim}{\longrightarrow} A

and even—disturbingly—their braiding:

\sigma_{A,B} \colon A + B \stackrel{\sim}{\longrightarrow} B + A

are all identity morphisms.

The last would ordinarily be seen as ‘going too far’, since while every symmetric monoidal category is equivalent to one with trivial associator and unitors, this ceases to be true if we also require the braiding to be trivial. However, it seems that Petri nets most naturally serve to present symmetric monoidal categories of this very strict sort. There just isn’t enough information in a Petri net to make it worthwhile giving them a nontrivial braiding

\sigma_{A,B} \colon A + B \stackrel{\sim}{\longrightarrow} B+A

It took me a while to accept this, but now it seems obvious. If you want a nontrivial braiding, you should be using something a bit fancier than a Petri net.

Thus, we construct adjoint functors between a category of Petri nets, which we call \textrm{Petri}, and a category of ‘commutative monoidal categories’, which we call \textrm{CMC}.

An object of \textrm{Petri} is a Petri net: that is, a set S of places, a set T of transitions, and source and target functions

s, t \colon T \to \mathbb{N}[S]

where \mathbb{N}[S] is the underlying set of the free commutative monoid on S.

More concretely, \mathbb{N}[S] is the set of formal finite linear combinations of elements of S with natural number coefficients. The set S naturally includes in \mathbb{N}[S], and for any function

f \colon S \to S'

there is a unique monoid homomorphism

\mathbb{N}[f] \colon \mathbb{N}[S] \to \mathbb{N}[S']

extending f.
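In code, an element of \mathbb{N}[S] is just a finitely supported multiset. Here is a small Python sketch of the inclusion S \to \mathbb{N}[S] and of the homomorphism \mathbb{N}[f], purely as an illustration of the definitions:

from collections import Counter

def include(s):
    """The inclusion S -> N[S]: a place becomes a one-element multiset."""
    return Counter({s: 1})

def N(f):
    """Extend f : S -> S' to the monoid homomorphism N[f] : N[S] -> N[S']."""
    def Nf(multiset):
        out = Counter()
        for s, n in multiset.items():
            out[f(s)] += n
        return out
    return Nf

f = {'a': 'x', 'b': 'x', 'c': 'y'}.get
print(N(f)(Counter({'a': 2, 'b': 1, 'c': 3})))  # Counter({'x': 3, 'y': 3})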

A Petri net morphism from a Petri net

s, t \colon T \to \mathbb{N}[S]

to a Petri net

s', t' \colon T' \to \mathbb{N}[S']

is a pair of functions

f \colon T \to T'

g \colon S \to S'

making the two obvious diagrams commute: explicitly, \mathbb{N}[g] \circ s = s' \circ f and \mathbb{N}[g] \circ t = t' \circ f.

There is a category \textrm{Petri} with Petri nets as objects and Petri net morphisms as morphisms.
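Continuing the sketches above, the commuting diagrams amount to a simple check. Here a Petri net is stored, as in the first snippet, as a dictionary sending each transition to its (source, target) pair of multisets, and N is the extension function from the previous snippet:

def is_petri_morphism(P, Q, f, g):
    """Check that f (on transitions) and g (on places) commute with source
    and target: N[g](s(tau)) == s'(f(tau)) and N[g](t(tau)) == t'(f(tau)).
    Here P and Q map each transition to its (source, target) pair of
    multisets, and N is the extension function from the previous snippet."""
    Ng = N(g)
    return all(Ng(src) == Q[f(tau)][0] and Ng(tgt) == Q[f(tau)][1]
               for tau, (src, tgt) in P.items())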

On the other hand, a commutative monoidal category is a commutative monoid object in \mathrm{Cat}. Explicitly, it’s a strict monoidal category (C,+,0) such that for all objects A and B we have

A + B = B + A

and for all morphisms f and g

f + g = g + f

Note that a commutative monoidal category is the same as a strict symmetric monoidal category where the symmetry isomorphisms

\sigma_{A,B} \colon A + B \stackrel{\sim}{\longrightarrow} B+A

are all identity morphisms. Every strict monoidal functor between commutative monoidal categories is automatically a strict symmetric monoidal functor. So, we let \mathrm{CMC} be the category whose objects are commutative monoidal categories and whose morphisms are strict monoidal functors.

There’s a functor

U \colon \mathrm{CMC} \to \mathrm{Petri}

sending any commutative monoidal category C to its underlying Petri net. This Petri net has the set of objects \mathrm{Ob}(C) as its set of places and the set of morphisms \mathrm{Mor}(C) as its set of transitions, and

s, t \colon \mathrm{Mor}(C) \to \mathrm{Ob}(C) \hookrightarrow \mathbb{N}[\mathrm{Ob}(C)]

as its source and target maps.

Proposition. The functor U \colon \mathrm{CMC} \to \mathrm{Petri} has a left adjoint

F \colon \mathrm{Petri} \to \mathrm{CMC}

This is Proposition 10 in our paper, and we give an explicit construction of this left adjoint.
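Roughly speaking (the paper spells this out carefully), the free commutative monoidal category F(P) on a Petri net P has the commutative monoid \mathbb{N}[S] as its monoid of objects, one generating morphism

\tau \colon s(\tau) \to t(\tau)

for each transition \tau \in T, and all the morphisms you can build from these and identities using + and composition, imposing only the equations that hold in any commutative monoidal category. This is the precise sense in which a Petri net presents a free symmetric monoidal category of this very strict sort.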

So that’s our conception of the free commutative monoidal category on a Petri net. It’s pretty simple. How could anyone have done anything else?

Montanari and Meseguer do almost the same thing, but our category of Petri nets is a subcategory of theirs: our morphisms of Petri nets send places to places, while they allow more general maps that send a place to a formal linear combination of places. On the other hand, they consider a full subcategory of our \mathrm{CMC} containing only commutative monoidal categories whose objects form a free commutative monoid.

Other papers do a variety of more complicated things. I don’t have the energy to explain them all, but you can see some here:

• Pierpaolo Degano, José Meseguer and Ugo Montanari, Axiomatizing net computations and processes, in Logic in Computer Science 1989, IEEE, New Jersey, 1989, pp. 175–185.

• Vladimiro Sassone, Strong concatenable processes: an approach to the category of Petri net computations, BRICS Report Series, Dept. of Computer Science, U. Aarhus, 1994.

• Vladimiro Sassone, On the category of Petri net computations, in Colloquium on Trees in Algebra and Programming, Springer, Berlin, 1995.

• Vladimiro Sassone, An axiomatization of the algebra of Petri net concatenable processes, Theoretical Computer Science 170 (1996), 277–296.

• Vladimiro Sassone and Pavel Sobociński, A congruence for Petri nets, Electronic Notes in Theoretical Computer Science 127 (2005), 107–120.

Getting the free commutative monoidal category on a Petri net right is key to developing the reachability semantics for open Petri nets in a nice way. But to see that, you’ll have to read our paper!


Part 1: the double category of open Petri nets.

Part 2: the reachability semantics for open Petri nets.

Part 3: the free symmetric monoidal category on a Petri net.


Open Petri Nets (Part 2)

18 August, 2018

I’d like to continue talking about this paper:

• John Baez and Jade Master, Open Petri nets.

Last time I explained, in a sketchy way, the double category of open Petri nets. This time I’d like to describe a ‘semantics’ for open Petri nets.

In his famous thesis Functorial Semantics of Algebraic Theories, Lawvere introduced the idea that semantics, as a map from expressions to their meanings, should be a functor between categories. This has been generalized in many directions, and the same idea works for double categories. So, we describe our semantics for open Petri nets as a map

\blacksquare \colon \mathbb{O}\mathbf{pen}(\mathrm{Petri}) \to \mathbb{R}\mathbf{el}

from our double category of open Petri nets to a double category of relations. This map sends any open Petri net to its ‘reachability relation’.

In Petri net theory, a marking of a set X is a finite multisubset of X. We can think of this as a way of placing finitely many ‘tokens’—little black dots—on the elements of X. A Petri net lets us start with some marking of its places and then repeatedly change the marking by moving tokens around, using the transitions. This is how Petri nets describe processes!

For example, here’s a Petri net from chemistry:

Here’s a marking of its places:

But using the transitions, we can repeatedly change the marking. We started with one atom of carbon, one molecule of oxygen, one molecule of sodium hydroxide and one molecule of hydrochloric acid. But they can turn into one molecule of carbon dioxide, one molecule of sodium hydroxide and one molecule of hydrochloric acid:

These can then turn into one molecule of sodium bicarbonate and one molecule of hydrochloric acid:

Then these can turn into one molecule of carbon dioxide, one molecule of water and one molecule of sodium chloride:

People say one marking is reachable from another if you can get it using a finite sequence of transitions in this manner. (Our paper explains this well-known notion more formally.) In this example every marking has 0 or 1 tokens in each place. But that’s not typical: in general we could have any natural number of tokens in each place, so long as the total number of tokens is finite.
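Here is a small Python sketch of this notion of reachability (my own illustration, not code from the paper). Since the set of reachable markings can be infinite, the brute-force search below only explores markings reachable within a given number of steps:

from collections import Counter

def enabled(marking, src):
    return all(marking[p] >= n for p, n in src.items())

def reachable(net, start, max_steps=10):
    """All markings reachable from `start` by firing at most `max_steps`
    transitions.  `net` maps each transition to a (source, target) pair of
    Counters, as in the earlier sketches."""
    key = lambda m: tuple(sorted(m.items()))
    seen = {key(start)}
    frontier, found = [start], [start]
    for _ in range(max_steps):
        next_frontier = []
        for m in frontier:
            for src, tgt in net.values():
                if enabled(m, src):
                    m2 = m - src + tgt
                    if key(m2) not in seen:
                        seen.add(key(m2))
                        next_frontier.append(m2)
                        found.append(m2)
        frontier = next_frontier
    return found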

Our paper adapts the concept of reachability to open Petri nets. Let \mathbb{N}[X] denote the set of markings of the set X. Given an open Petri net P \colon X \nrightarrow Y there is a reachability relation

\blacksquare P \subseteq \mathbb{N}[X] \times \mathbb{N}[Y]

This relation holds when we can take a given marking of X, feed those tokens into the Petri net P, move them around using transitions in P, and have them pop out and give a certain marking of Y, leaving no tokens behind.
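Concretely, if i \colon X \to S and o \colon Y \to S pick out the input and output places, the relation holds between a marking a of X and a marking b of Y exactly when the marking \mathbb{N}[i](a) can be carried by transitions to \mathbb{N}[o](b), matching the informal description above (see the paper for the official definition). A short sketch building on the earlier snippets:

def open_reachability(net, i, o, a, b, max_steps=10):
    """Does the open Petri net relate the marking `a` of X to the marking
    `b` of Y?  `i` and `o` send input and output points to places; we just
    ask whether the pushed-forward marking N[i](a) can reach N[o](b),
    reusing N and reachable from the previous sketches."""
    start, goal = N(i)(a), N(o)(b)
    return any(m == goal for m in reachable(net, start, max_steps))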

For example, consider this open Petri net P \colon X \nrightarrow Y:

Here is a marking of X:

We can feed these tokens into P and move them around using transitions in P:

They can then pop out into Y, leaving none behind:

This gives a marking of Y that is ‘reachable’ from the original marking of X.

The main result of our paper is that the map sending an open Petri net P to its reachability relation \blacksquare P extends to a ‘lax double functor’

\blacksquare \colon \mathbb{O}\mathbf{pen}(\mathrm{Petri}) \to \mathbb{R}\mathbf{el}

where \mathbb{O}\mathbf{pen}(\mathrm{Petri}) is a double category having open Petri nets as horizontal 1-cells and \mathbb{R}\mathbf{el} is a double category having relations as horizontal 1-cells.

I can give you a bit more detail on those double categories, and also give you a clue about what ‘lax’ means, without it becoming too stressful.

Last time I said the double category \mathbb{O}\mathbf{pen}(\mathrm{Petri}) has:

• sets X, Y, Z, \dots as objects,

• functions f \colon X \to Y as vertical 1-morphisms,

• open Petri nets P \colon X \nrightarrow Y as horizontal 1-cells—they look like this:



• morphisms between open Petri nets as 2-morphisms—an example would be the visually obvious map from this open Petri net:

to this one:

What about \mathbb{R}\mathbf{el}? This double category has

• sets X, Y, Z, \dots as objects,

• functions f \colon X \to Y as vertical 1-morphisms,

• relations R \subseteq X \times Y as horizontal 1-cells from X to Y, and

• maps between relations as 2-morphisms. Here a map between relations is a square

that obeys

(f \times g) R \subseteq S
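Spelled out, the condition (f \times g) R \subseteq S says that whenever R relates x to y, S must relate f(x) to g(y). With relations stored as sets of pairs, the check is a one-liner in Python (an illustration only):

def is_rel_2morphism(R, S, f, g):
    """(f x g)(R) is contained in S: whenever R relates x to y,
    S must relate f(x) to g(y)."""
    return all((f(x), g(y)) in S for (x, y) in R)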

So, the idea of the reachability semantics is that it maps:

• any set X to the set \mathbb{N}[X] consisting of all markings of that set.

• any function f \colon X \to Y to the obvious function

\mathbb{N}[f] \colon \mathbb{N}[X] \to \mathbb{N}[Y]

(Yes, \mathbb{N} is really a functor.)

• any open Petri net P \colon X \nrightarrow Y to its reachability relation

\blacksquare P \colon \mathbb{N}[X] \to \mathbb{N}[Y]

• any morphism between Petri nets to the obvious map between their reachability relations.

Especially if you draw some examples, all this seems quite reasonable and nice. But it’s important to note that \blacksquare is a lax double functor. This means that it does not send a composite open Petri net QP to the composite of the reachability relations for P and Q. So, we do not have

\blacksquare Q \; \blacksquare P = \blacksquare (QP)

Instead, we just have

\blacksquare Q \; \blacksquare P \subseteq \blacksquare (QP)

It’s easy to see why. Take P \colon X \nrightarrow Y to be this open Petri net:

and take Q \colon Y \nrightarrow Z to be this one:

Then their composite QP \colon X \nrightarrow Z is this:

It’s easy to see that \blacksquare Q \; \blacksquare P is a proper subset of \blacksquare (QP). In QP a token can move all the way from point 1 to point 5. But it does not do so by first moving through P and then moving through Q! It has to take a more complicated zig-zag path where it first leaves P and enters Q, then comes back into P, and then goes to Q.

In our paper, Jade and I conjecture that we get

\blacksquare Q \; \blacksquare P = \blacksquare (QP)

if we restrict the reachability semantics to a certain specific sub-double category of \mathbb{O}\mathbf{pen}(\mathrm{Petri}) consisting of ‘one-way’ open Petri nets.

Finally, besides showing that

\blacksquare \colon \mathbb{O}\mathbf{pen}(\mathrm{Petri}) \to \mathbb{R}\mathbf{el}

is a lax double functor, we also show that it’s symmetric monoidal. This means that the reachability semantics works as you’d expect when you run two open Petri nets ‘in parallel’.

In a way, the most important thing about our paper is that it illustrates some methods to study semantics for symmetric monoidal double categories. Kenny Courser and I will describe these methods more generally in our paper “Structured cospans.” They can be applied to timed Petri nets, colored Petri nets, and various other kinds of Petri nets. One can also develop a reachability semantics for open Petri nets that are glued together along transitions as well as places.

I hear that the company Statebox wants these and other generalizations. We aim to please—so we’d like to give it a try.

Next time I’ll wrap up this little series of posts by explaining how Petri nets give symmetric monoidal categories.


Part 1: the double category of open Petri nets.

Part 2: the reachability semantics for open Petri nets.

Part 3: the free symmetric monoidal category on a Petri net.


Open Petri Nets (Part 1)

15 August, 2018

Jade Master and I have just finished a paper on open Petri nets:

• John Baez and Jade Master, Open Petri nets.

Abstract. The reachability semantics for Petri nets can be studied using open Petri nets. For us an ‘open’ Petri net is one with certain places designated as inputs and outputs via a cospan of sets. We can compose open Petri nets by gluing the outputs of one to the inputs of another. Open Petri nets can be treated as morphisms of a category, which becomes symmetric monoidal under disjoint union. However, since the composite of open Petri nets is defined only up to isomorphism, it is better to treat them as morphisms of a symmetric monoidal double category \mathbb{O}\mathbf{pen}(\mathrm{Petri}). Various choices of semantics for open Petri nets can be described using symmetric monoidal double functors out of \mathbb{O}\mathbf{pen}(\mathrm{Petri}). Here we describe the reachability semantics, which assigns to each open Petri net the relation saying which markings of the outputs can be obtained from a given marking of the inputs via a sequence of transitions. We show this semantics gives a symmetric monoidal lax double functor from \mathbb{O}\mathbf{pen}(\mathrm{Petri}) to the double category of relations. A key step in the proof is to treat Petri nets as presentations of symmetric monoidal categories; for this we use the work of Meseguer, Montanari, Sassone and others.

I’m excited about this, especially because our friends at Statebox are planning to use open Petri nets in their software. They’ve recently come out with a paper too:

• Fabrizio Romano Genovese and Jelle Herold, Executions in (semi-)integer Petri nets are compact closed categories.

Petri nets are widely used to model open systems in subjects ranging from computer science to chemistry. There are various kinds of Petri net, and various ways to make them ‘open’, and my paper with Jade only handles the simplest. But our techniques are flexible, so they can be generalized.

What’s an open Petri net? For us, it’s a thing like this:



The yellow circles are called ‘places’ (or in chemistry, ‘species’). The aqua rectangles are called ‘transitions’ (or in chemistry, ‘reactions’). There can in general be lots of places and lots of transitions. The bold arrows from places to transitions and from transitions to places complete the structure of a Petri net. There are also arbitrary functions from sets X and Y into the set of places. This makes our Petri net into an ‘open’ Petri net.

We can think of open Petri nets as morphisms between finite sets. There’s a way to compose them! Suppose we have an open Petri net P from X to Y, where now I’ve given names to the points in these sets:

We write this as P \colon X \nrightarrow Y for short, where the funky arrow reminds us this isn’t a function between sets. Given another open Petri net Q \colon Y \nrightarrow Z, for example this:

the first step in composing P and Q is to put the pictures together:

At this point, if we ignore the sets X,Y,Z, we have a new Petri net whose set of places is the disjoint union of those for P and Q.

The second step is to identify a place of P with a place of Q whenever both are images of the same point in Y. We can then stop drawing everything involving Y, and get an open Petri net QP \colon X \nrightarrow Z, which looks like this:

Formalizing this simple construction leads us into a bit of higher category theory. The process of taking the disjoint union of two sets of places and then quotienting by an equivalence relation is a pushout. Pushouts are defined only up to canonical isomorphism: for example, the place labeled C in the last diagram above could equally well have been labeled D or E. This is why to get a category, with composition strictly associative, we need to use isomorphism classes of open Petri nets as morphisms. But there are advantages to avoiding this and working with open Petri nets themselves. Basically, it’s better to work with things than mere isomorphism classes of things! If we do this, we obtain not a category but a bicategory with open Petri nets as morphisms.
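Concretely, this pushout of place sets is the disjoint union of the places of P and Q, with two places identified whenever they are images of the same point of Y. Here is a rough Python sketch of just that step, using a small union-find structure; the function names and data layout are made up for the illustration:

def glue_places(places_P, places_Q, Y, into_P, into_Q):
    """Pushout of the place sets of P and Q along Y: disjoint union of the
    places, with into_P(y) identified with into_Q(y) for every y in Y.
    Returns a dict sending each tagged place to a chosen representative."""
    parent = {('P', p): ('P', p) for p in places_P}
    parent.update({('Q', q): ('Q', q) for q in places_Q})

    def find(x):
        while parent[x] != x:
            x = parent[x]
        return x

    for y in Y:
        a, b = find(('P', into_P[y])), find(('Q', into_Q[y]))
        if a != b:
            parent[a] = b
    return {x: find(x) for x in parent}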

However, this bicategory is equipped with more structure. Besides composing open Petri nets, we can also ‘tensor’ them via disjoint union: this describes Petri nets being run in parallel rather than in series. The result is a symmetric monoidal bicategory. Unfortunately, the axioms for a symmetric monoidal bicategory are cumbersome to check directly. Double categories turn out to be more convenient.

Double categories were introduced in the 1960s by Charles Ehresmann. More recently they have found their way into applied mathematics. They’ve been used to study various things, including open dynamical systems:

• Eugene Lerman and David Spivak, An algebra of open continuous time dynamical systems and networks.

open electrical circuits and chemical reaction networks:

• Kenny Courser, A bicategory of decorated cospans, Theory and Applications of Categories 32 (2017), 995–1027.

open discrete-time Markov chains:

• Florence Clerc, Harrison Humphrey and P. Panangaden, Bicategories of Markov processes, in Models, Algorithms, Logics and Tools, Lecture Notes in Computer Science 10460, Springer, Berlin, 2017, pp. 112–124.

and coarse-graining for open continuous-time Markov chains:

• John Baez and Kenny Courser, Coarse-graining open Markov processes. (Blog article here.)

As noted by Shulman, the easiest way to get a symmetric monoidal bicategory is often to first construct a symmetric monoidal double category:

• Mike Shulman, Constructing symmetric monoidal bicategories.

The theory of ‘structured cospans’ gives a systematic way to build symmetric monoidal double categories—Kenny Courser and I are writing a paper on this—and Jade and I use this to construct the symmetric monoidal double category of open Petri nets.

A 2-morphism in a double category can be drawn as a square like this:

We call X_1,X_2,Y_1 and Y_2 ‘objects’, f and g ‘vertical 1-morphisms’, M and N ‘horizontal 1-cells’, and \alpha a ‘2-morphism’. We can compose vertical 1-morphisms to get new vertical 1-morphisms and compose horizontal 1-cells to get new horizontal 1-cells. We can compose the 2-morphisms in two ways: horizontally and vertically. (This is just a quick sketch of the ideas, not the full definition.)

In our paper, Jade and I start by constructing a symmetric monoidal double category \mathbb{O}\mathbf{pen}(\textrm{Petri}) with:

• sets X, Y, Z, \dots as objects,

• functions f \colon X \to Y as vertical 1-morphisms,

• open Petri nets P \colon X \nrightarrow Y as horizontal 1-cells,

• morphisms between open Petri nets as 2-morphisms.

(Since composition of horizontal 1-cells is associative only up to an invertible 2-morphism, this is technically a pseudo double category.)

What are the morphisms between open Petri nets like? A simple example may help give a feel for this. There is a morphism from this open Petri net:

to this one:

mapping both primed and unprimed symbols to unprimed ones. This describes a process of ‘simplifying’ an open Petri net. There are also morphisms that include simple open Petri nets in more complicated ones, etc.

This is just the start. Our real goal is to study the semantics of open Petri nets: that is, how they actually describe processes! And for that, we need to think about the free symmetric monoidal category on a Petri net. You can read more about those things in Part 2 and Part 3 of this series.


Part 1: the double category of open Petri nets.

Part 2: the reachability semantics for open Petri nets.

Part 3: the free symmetric monoidal category on a Petri net.


Applied Category Theory Course: Collaborative Design

13 July, 2018


Our course on applied category theory is now starting the fourth chapter of Fong and Spivak’s book Seven Sketches. Chapter 4 is about collaborative design: building big projects from smaller parts. This is based on work by Andrea Censi:

• Andrea Censi, A mathematical theory of co-design.

The main mathematical content of this chapter is the theory of enriched profunctors. We’ll mainly talk about enriched profunctors between categories enriched in monoidal preorders. The picture above shows what one of these looks like!

Here are the lectures:

Lecture 55 – Chapter 4: Enriched Profunctors and Collaborative Design
Lecture 56 – Chapter 4: Feasibility Relations
Lecture 57 – Chapter 4: Feasibility Relations
Lecture 58 – Chapter 4: Composing Feasibility Relations
Lecture 59 – Chapter 4: Cost-Enriched Profunctors
Lecture 60 – Chapter 4: Closed Monoidal Preorders
Lecture 61 – Chapter 4: Closed Monoidal Preorders
Lecture 62 – Chapter 4: Constructing Enriched Categories
Lecture 63 – Chapter 4: Composing Enriched Profunctors
Lecture 64 – Chapter 4: The Category of Enriched Profunctors
Lecture 65 – Chapter 4: Collaborative Design
Lecture 66 – Chapter 4: Collaborative Design
Lecture 67 – Chapter 4: Feedback in Collaborative Design
Lecture 68 – Chapter 4: Feedback in Collaborative Design
Lecture 69 – Chapter 4: Feedback in Collaborative Design
Lecture 70 – Chapter 4: Tensoring Enriched Profunctors
Lecture 71 – Chapter 4: Caps and Cups for Enriched Profunctors
Lecture 72 – Chapter 4: Monoidal Categories
Lecture 73 – Chapter 4: String Diagrams and Strictification
Lecture 74 – Chapter 4: Compact Closed Categories
Lecture 75 – Chapter 4: The Grand Synthesis
Lecture 76 – Chapter 4: The Grand Synthesis
Lecture 77 – Chapter 4: The End? No, the Beginning!


Applied Category Theory Course: Databases

6 June, 2018

 

In my online course on applied category theory we’re now into the third chapter of Fong and Spivak’s book Seven Sketches. Now we’re talking about databases!

To some extent this is just an excuse to (finally) introduce categories, functors, natural transformations, adjoint functors and Kan extensions. Great stuff, and databases are a great source of easy examples.

But it’s also true that Spivak helps run a company called Categorical Informatics that actually helps design databases using category theory! And his partner, Ryan Wisnesky, would be happy to talk to people about it. If you’re interested, click the link: he’s attending my course.

To read and join discussions on Chapter 3 go here:

Chapter 3

You can also do exercises and puzzles, and see other people’s answers to these.

Here are the lectures I’ve given so far:

Lecture 34 – Chapter 3: Categories
Lecture 35 – Chapter 3: Categories versus Preorders
Lecture 36 – Chapter 3: Categories from Graphs
Lecture 37 – Chapter 3: Presentations of Categories
Lecture 38 – Chapter 3: Functors
Lecture 39 – Chapter 3: Databases
Lecture 40 – Chapter 3: Relations
Lecture 41 – Chapter 3: Composing Functors
Lecture 42 – Chapter 3: Transforming Databases
Lecture 43 – Chapter 3: Natural Transformations
Lecture 44 – Chapter 3: Categories, Functors and Natural Transformations
Lecture 45 – Chapter 3: Composing Natural Transformations
Lecture 46 – Chapter 3: Isomorphisms
Lecture 47 – Chapter 3: Adjoint Functors
Lecture 48 – Chapter 3: Adjoint Functors
Lecture 49 – Chapter 3: Kan Extensions
Lecture 50 – Chapter 3: Kan Extensions
Lecture 51 – Chapter 3: Right Kan Extensions
Lecture 52 – Chapter 3: The Hom-Functor
Lecture 53 – Chapter 3: Free and Forgetful Functors
Lecture 54 – Chapter 3: Tying Up Loose Ends


Workshop on Compositional Approaches

24 May, 2018

This looks great too:

Workshop on Compositional Approaches in Physics, Natural Language Processing, and Social Sciences, 2 September 2018, Nice, France.

Compositional Approaches for Physics, NLP, and Social Sciences (CAPNS 2018) will be colocated with QI 2018. The workshop is a continuation and extension of the Workshop on Semantic Spaces at the Intersection of NLP, Physics and Cognitive Science held in June 2016.

AIMS AND SCOPE
The ability to compose parts to form a more complex whole, and to analyze a whole as a combination of elements, is desirable across disciplines. In this workshop we bring together researchers applying compositional approaches to NLP, Physics, Cognitive Science, and Game Theory. The interplay between these disciplines will foster theoretically motivated approaches to understanding how meanings of words interact in sentences and discourse, how concepts develop, and how complex games can be analyzed. Commonalities between the compositional mechanisms employed may be extracted, and applications and phenomena traditionally thought of as ‘non-compositional’ will be examined.

Topics of interests include (but are not restricted to):
Applications of quantum logic in natural language processing and cognitive science
Compositionality in vector space models of meaning
Compositionality in conceptual spaces
Compositional approaches to game theory
Reasoning in vector spaces and conceptual spaces
Conceptual spaces in linguistics
Game-theoretic models of language and conceptual change
Diagrammatic reasoning for natural language processing, cognitive science, and game theory
Compositional explanations of so-called ‘non-compositional’ phenomena such as metaphor

IMPORTANT DATES:
June 30th: Paper submission
July 15th: Notification to contributors
September 2nd: Workshop date

CONFIRMED SPEAKERS:
Gerhard Jäger, Professor of General Linguistics, University of Tübingen
Paul Smolensky, Principal Researcher, Microsoft Research, and Krieger-Eisenhower Professor of Cognitive Science, Johns Hopkins University

SUBMISSIONS:
We invite:
Original contributions (up to 12 pages) of previously unpublished work. Submission of substantial, albeit partial results of work in progress is welcomed.

Extended abstracts (3 pages) of previously published work that is recent and relevant to the workshop. These should include a link to a separately published paper or preprint.

Contributions should be submitted at:
https://easychair.org/conferences/?conf=capns2018

PROGRAMME COMMITTEE:
Peter Bruza, Queensland University of Technology
Trevor Cohen, University of Texas
Fredrik Nordvall Forsberg, University of Strathclyde
Liane Gabora, University of British Columbia
Peter Gärdenfors, Lund University
Helle Hvid Hansen, TU Delft
Chris Heunen, University of Edinburgh
Peter Hines, University of York
Alexander Kurz, University of Leicester
Antonio Lieto, University of Turin
Glyn Morrill, Universitat Politècnica de Catalunya
Dusko Pavlovic, University of Hawaii
Taher Pilehvar, University of Cambridge
Emmanuel Pothos, City, University of London
Matthew Purver, Queen Mary University of London
Mehrnoosh Sadrzadeh, Queen Mary University of London
Marta Sznajder, Munich Center for Mathematical Philosophy
Pawel Sobocinski, University of Southampton
Dominic Widdows, Grab Technologies
Geraint Wiggins, Vrije Universiteit Brussel
Victor Winschel, OICOS GmbH
Philipp Zahn, University of St. Gallen
Frank Zenker, University of Konstanz

ORGANIZATION:
Bob Coecke, University of Oxford
Jules Hedges, University of Oxford
Dimitri Kartsaklis, University of Cambridge
Martha Lewis, ILLC, University of Amsterdam
Dan Marsden, University of Oxford


RChain

12 May, 2018

guest post by Christian Williams

Mike Stay has been doing some really cool stuff since earning his doctorate. He’s been collaborating with Greg Meredith, who studied the π-calculus with Abramsky, and then conducted impactful research and design in the software industry before some big ideas led him into the new frontier of decentralization. They and a great team are developing RChain, a distributed computing infrastructure based on the reflective higher-order π-calculus, the ρ-calculus.

They’ve made significant progress in the first year, and on April 17-18 they held the RChain Developer Conference in Boulder, Colorado. Just five months ago, the first conference was a handful of people; now this one drew well over a hundred. Programmers, venture capitalists, blockchain enthusiasts, experts in software, finance, and mathematics: myriad perspectives from around the globe came to join in the dawn of a new internet. Let’s just say, it’s a lot to take in. This project is the real deal – the idea is revolutionary, the language is powerful, the architecture is elegant; the ambition is immense and skilled developers are actually bringing it to reality. There’s no need for hype: you’re gonna be hearing about RChain.

Documentation, GitHub, Architecture

Here’s something from the documentation:

The open-source RChain project is building a decentralized, economic, censorship-resistant, public compute infrastructure and blockchain. It will host and execute programs popularly referred to as “smart contracts”. It will be trustworthy, scalable, concurrent, with proof-of-stake consensus and content delivery.

The decentralization movement is ambitious and will provide awesome opportunities for new social and economic interactions. Decentralization also provides a counterbalance to abuses and corruption that occasionally occur in large organizations where power is concentrated. Decentralization supports self-determination and the rights of individuals to self-organize. Of course, the realities of a more decentralized world will also have its challenges and issues, such as how the needs of international law, public good, and compassion will be honored.

We admire the awesome innovations of Bitcoin, Ethereum, and other platforms that have dramatically advanced the state of decentralized systems and ushered in this new age of cryptocurrency and smart contracts. However, we also see symptoms that those projects did not use the best engineering and formal models for scaling and correctness in order to support mission-critical solutions. The ongoing debates about scaling and reliability are symptomatic of foundational architectural issues. For example, is it a scalable design to insist on an explicit serialized processing order for all of a blockchain’s transactions conducted on planet earth?

To become a blockchain solution with industrial-scale utility, RChain must provide content delivery at the scale of Facebook and support transactions at the speed of Visa. After due diligence on the current state of many blockchain projects, after deep collaboration with other blockchain developers, and after understanding their respective roadmaps, we concluded that the current and near-term blockchain architectures cannot meet these requirements. In mid-2016, we resolved to build a better blockchain architecture.

Together with the blockchain industry, we are still at the dawn of this decentralized movement. Now is the time to lay down a solid architectural foundation. The journey ahead for those who share this ambitious vision is as challenging as it is worthwhile, and this document summarizes that vision and how we seek to accomplish it.

We began by admitting the following minimal requirements:

  • Dynamic, responsive, and provably correct smart contracts.
  • Concurrent execution of independent smart contracts.
  • Data separation to reduce unnecessary data replication of otherwise independent tokens and smart contracts.
  • Dynamic and responsive node-to-node communication.
  • Computationally non-intensive consensus/validation protocol.

Building quality software is challenging. It is easier to build “clever” software; however, the resulting software is often of poor quality, riddled with bugs, difficult to maintain, and difficult to evolve. Inheriting and working on such software can be hellish for development teams, not to mention their customers. When building an open-source system to support a mission-critical economy, we reject a minimal-success mindset in favor of end-to-end correctness.

To accomplish the requirements above, our design approach is committed to:

  • A computational model that assumes fine-grained concurrency and dynamic network topology.
  • A composable and dynamic resource addressing scheme.
  • The functional programming paradigm, as it more naturally accommodates distributed and parallel processing.
  • Formally verified, correct-by-construction protocols which leverage model checking and theorem proving.
  • The principles of intension and compositionality.

RChain is light years ahead of the industry. Why? It is upholding the principle of correct by construction with the depth and rigor of mathematics. For years, Mike and Greg have been developing original ideas for distributed computation: in particular, logic as a distributive law is an “algorithm for deriving a spatial-behavioral type system from a formal presentation of a computational calculus.” This is a powerful way to integrate operational semantics into a language, and prove soundness with a single natural transformation; it also provides an extremely expressive query language, with which you could search the entire world to find “code that does x”. Mike’s strong background in higher category theory has enabled the formalization of Greg’s ideas, which he has developed over decades of thinking deeply and comprehensively about the world of computing. Of all of these, there is one concept which is the heart and pulse of RChain, which unifies the system as a rational whole: the ρ-calculus.

So what’s the big idea? First, some history: back in the late 80s, Greg developed a classification of computational models called “the 4 C’s”:

completeness,
compositionality,
(a good notion of)
complexity, and
concurrency.

He found that there was none which had all four, and predicted the existence of one. Just a few years later, Milner invented the π-calculus, and since then it has reigned as the natural language of network computing. It presents a totally different way of thinking: instead of representing sequential instructions for a single machine, the π-calculus is fundamentally concurrent—processes or agents communicate over names or channels, and computation occurs through the interaction of processes. The language is simple yet remarkably powerful; it is deeply connected with game semantics and linear logic, and has become an essential tool in systems engineering and biocomputing: see mobile process calculi for programming the blockchain.


Here is the basic syntax. The variables x,y are names, and P,Q are processes:

P,Q := 0 | (νx)P | x?(y).P | x!(y).P | P|Q

(do nothing | create new x; run P | receive on x and bind to y; run P | send value y on x; run P | run P and Q in parallel)

The computational engine, the basic reduction analogous to beta-reduction of lambda calculus, is the communication rule:

COMM : x!(y).P|x?(z).Q → P|Q[y/z]

(given parallel output and input processes along the same channel, the value is transferred from the output to the input, and is substituted for all occurrences of the input variable in the continuation process)

The definition of a process calculus must specify structural congruence: these express the equivalences between processes—for example, ({P},|,0) forms a commutative monoid.
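To get a feel for the COMM rule, here is a toy ‘soup’ interpreter in Python, a drastically simplified illustration that is in no way RChain code: processes are a list of pending sends and receives, and a reduction step matches a send and a receive on the same channel and substitutes the sent value into the receiver’s continuation.

# A toy process soup, just enough to run the COMM rule.
# A send is ('send', channel, value); a receive is ('recv', channel, var, continuation),
# where the continuation is a list of further sends and receives that may mention var.

def substitute(proc, var, value):
    if proc[0] == 'send':
        _, ch, v = proc
        return ('send', value if ch == var else ch, value if v == var else v)
    _, ch, bound, cont = proc
    ch = value if ch == var else ch
    if bound == var:                        # the binder shadows var
        return ('recv', ch, bound, cont)
    return ('recv', ch, bound, [substitute(p, var, value) for p in cont])

def step(soup):
    """Apply COMM once, if possible:  x!(y) | x?(z).P  ->  P[y/z]."""
    for s in [p for p in soup if p[0] == 'send']:
        for r in [p for p in soup if p[0] == 'recv']:
            if s[1] == r[1]:                # same channel
                soup.remove(s)
                soup.remove(r)
                soup.extend(substitute(p, r[2], s[2]) for p in r[3])
                return True
    return False

# x!(hello) | x?(z).( y!(z) )   reduces to   y!(hello)
soup = [('send', 'x', 'hello'), ('recv', 'x', 'z', [('send', 'y', 'z')])]
while step(soup):
    pass
print(soup)   # [('send', 'y', 'hello')]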


The π-calculus reforms computation, on the most basic level, to be a cooperative activity. Why is this important? To have a permanently free internet, we have to be able to support it without reliance on centralized powers. This is one of the simplest points, but there are many deeper reasons which I am not yet knowledgeable enough to express. It’s all about the philosophy of openness which is characteristic of applied category theory: historically, we have developed theories and practices which are isolated from each other and the world, and had to fabricate their interrelation and cooperation ad hoc; this leaves us needlessly struggling with inadequate systems, and limits our thought and action.

Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it. — John Backus, 1977 ACM Turing Award

There have been various mitigations to these kind of problems, but the cognitive limitation remains, and a total renewal is necessary; the π-calculus completely reimagines the nature of computation in society, and opens vast possibility. We can begin to conceive of the many interwoven strata of computation as a coherent, fluid whole. While I was out of my depth in many talks during the conference, I began to appreciate that this was a truly comprehensive innovation: RChain reforms almost every aspect of computing, from the highest abstraction all the way down to the metal. Coalescing the architecture, as mentioned earlier, is the formal calculus as the core guiding principle. There was some concern that because the ρ-calculus is so different from traditional languages, there may be resistance to adoption; but our era is a paradigm shift, the call is for a new way of thinking, and we must adapt.

So why are we using the reflective higher-order π-calculus, the ρ-calculus? Because there’s just one problem with the conventional π-calculus: it presupposes a countably infinite collection of atomic names. These are not only problematic to generate and manage, but the absence of structure is a massive waste. In this regard, the π-calculus was incomplete, until Greg realized that you can “close the loop” with reflection, a powerful form of self-reference:

Code ←→ Data

The mantra is that names are quoted processes; this idea pervades and guides the design of RChain. There is no need to import infinitely many opaque, meaningless symbols—the code itself is nothing but clear, meaningful syntax. If there is an intrinsic method of reference and dereference, or “quoting and unquoting”, code can be turned into data, sent as a message, and then turned back into code; known as “code mobility”, one can communicate big programs as easily as emails. This allows for metaprogramming: on an industrial level, not only people write programs—programs write programs. This is essential to creating a robust virtual infrastructure.

So, how can the π-calculus be made reflective? By solving for the least fixed point of a recursive equation, which parametrizes processes by names:

P[x] = 0 | x?(x).P[x] | x!(P[x]) | P[x]|P[x] | @P[x] | *x

RP = P[RP]

This is reminiscent of how the Y combinator enables recursion by giving the fixed point of any function, Yf = f(Yf). The last two terms of the syntax are reference and dereference, which turn code into data and data into code. Notice that we did not include a continuation for output: the ρ-calculus is asynchronous, meaning that the sender does not get confirmation that the message has been received; this is important for efficient parallel computation and corresponds to polarised linear logic. We adopt the convention that names are output and processes are input. The last two modifications to complete the official ρ-calculus syntax are multi-input and pattern-matching:


P,Q := 0                                 null process
     | for( p1←x1, … , pn←xn ).P         input guarded process
     | x!( @Q )                          output a name
     | *x                                dereference, evaluate code
     | P|Q                               parallel composition

x,p := @P                                name or quoted process

(each p_i is a “pattern” or formula to collect terms on channel x_i—this is extremely useful and general, and enables powerful functionality throughout the system)


Simple. Of course, this is not really a programming language yet, though it is more usable than the pure λ-calculus. Rholang, the actual language of RChain, adds some essential features:

ρ-calculus + variables + useful ground terms + new name construction + arithmetic + pattern matching = Rholang

Here’s the specification, syntax and semantics, and a visualization; explore code and examples in GitHub and learn the contract design in the documentation—you can even try coding on rchain.cloud! For those who don’t like clicking all these links, let’s see just one concrete example of a contract, the basic program in Rholang: a process with persistent state, associated code, and associated addresses. This is a Cell, which stores a value until it is accessed or updated:


contract Cell( get, set, state ) = {
  select {
    case rtn <- get; v <- state => {
      rtn!( *v ) | state!( *v ) | Cell( get, set, state )
    }
    case newValue <- set; v <- state => {
      state!( *newValue ) | Cell( get, set, state )
    }
  }
}


The parameters are the channels on which the contract communicates. Cell selects from two possibilities: either it is being accessed, i.e. there is data (the return channel) to receive on get, then it outputs on rtn and maintains its state and call; or it is being updated, i.e. there is data (the new value) to receive on set, then it updates state and calls itself again. This shows how the ontology of the language enables natural recursion, and thereby persistent storage: state is Cell’s way of “talking to itself”—since the sequential aspect of Rholang is functional, one “cycles” data to persist. The storage layer uses a similar idea; the semantics may be related to traced monoidal categories.
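For readers who think more easily in mainstream languages, here is a rough Python analogue of the same ‘cycle your state to persist it’ idea, using an asyncio queue in place of Rholang’s channels and merging get and set into a single tagged request channel for brevity. It is only an analogy: Rholang’s channels, patterns and concurrency are far more general.

import asyncio

async def cell(requests, initial):
    """Toy analogue of the Rholang Cell: a value reachable only by messages.
    A request is ('get', reply_queue) or ('set', new_value); the state
    persists by being handed to the next turn of the loop, much as the
    Rholang Cell re-sends its state to itself."""
    state = initial
    while True:
        msg = await requests.get()
        if msg[0] == 'get':
            await msg[1].put(state)
        else:
            state = msg[1]

async def main():
    requests = asyncio.Queue()
    asyncio.create_task(cell(requests, 0))
    await requests.put(('set', 42))
    reply = asyncio.Queue()
    await requests.put(('get', reply))
    print(await reply.get())   # 42

asyncio.run(main())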

Curiously, the categorical semantics of the ρ-calculus has proven elusive. There is the general ideology that λ:sequential :: π:concurrent, that the latter is truly fundamental, but the Curry-Howard-Lambek isomorphism has not yet been generalized canonically—though there has been partial success, involving functor-category denotational semantics, linear logic, and session types. Despite its great power and universality, the ρ-calculus remains a bit of a mystery in mathematics: this fact should intrigue anyone who cares about logic, types, and categories as the foundations of abstract thought.

Now, the actual system—the architecture consists of five interwoven layers (all better explained in the documentation):


Storage: based on Special K – “a pattern language for the web.” This layer stores both key-value pairs and continuations through an essential duality of database and query—if you don’t find what you’re looking for, leave what you would have done in its place, and when it arrives, the process will continue. Greg characterizes this idea as the computational equivalent of the law of excluded middle.

[channel, pattern, data, continuation]

RChain has a refined, multidimensional view of resources – compute, memory, storage, and network—and accounts for their production and consumption linearly.

Execution: a Rho Virtual Machine instance is a context for ρ-calculus reduction of storage elements. The entire state of the blockchain is one big Rholang term, which is updated by a transaction: a receive invokes a process which changes key values, and the difference must be verified by consensus. Keys permanently maintain the entire history of state transitions. While currently based on the Java VM, it will be natively hosted.

Namespace: a set of channels, i.e. resources, organized by a logic for access and authority. The primary significance is scalability – a user does not need to deal with the whole chain, only pertinent namespaces. ‘A namespace definition may control the interactions that occur in the space, for example, by specifying: accepted addresses, namespaces, or behavioral types; maximum or minimum data size; or input-output structure.’ These handle nondeterminism of the two basic “race conditions”, contention for resources:

x!(@Q1) | for(ptrn <- x){P} | x!(@Q2)

for(ptrn <- x){P1} | x!(@Q) | for(ptrn <- x){P2}

Contrasted with flat public keys of other blockchains, domains work with DNS and extend them by a compositional tree structure. Each node as a named channel is itself a namespace, and hence definitions can be built up inductively, with precise control.

Consensus: verify partial orders of changes to the one-big-Rholang-term state; the block structure should persist as a directed acyclic graph. The algorithm is Proof of Stake – the capacity to validate in a namespace is tied to the “stake” one holds in it. Greg explains via tangles, and how the complex CASPER protocol works naturally with RChain.

Contracts: ‘An RChain contract is a well-specified, well-behaved, and formally verified program that interacts with other such programs.’ (K Framework) ; (DAO attack) ‘A behavioral type is a property of an object that binds it to a discrete range of action patterns. Behavioral types constrain not only the structure of input and output, but the permitted order of inputs and outputs among communicating and (possibly) concurrent processes under varying conditions… The Rholang behavioral type system will iteratively decorate terms with modal logical operators, which are propositions about the behavior of those terms. Ultimately properties [such as] data information flow, resource access, will be concretized in a type system that can be checked at compile-time. The behavioral type systems Rholang will support make it possible to evaluate collections of contracts against how their code is shaped and how it behaves. As such, Rholang contracts elevate semantics to a type-level vantage point, where we are able to scope how entire protocols can safely interface.’ (LADL)

So what can you build on RChain? Anything.

Decentralized applications: identity, reputation, tokens, timestamping, financial services, content delivery, exchanges, social networks, marketplaces, (decentralized autonomous) organizations, games, oracles, (Ethereum dApps), … new forms of code yet to be imagined. It’s much more than a better internet: RChain is a potential abstract foundation for a rational global society. The system is a minimalist framework of universally principled design; it is a canvas with which we can begin to explore how the world should really work. If we are open and thoughtful, if we care enough, we can learn to do things right.

The project is remarkably unknown for its magnitude, and building widespread adoption may be one of RChain’s greatest challenges. Granted, it is new; but education will not be easy. It’s too big a reformation for a public mindset which thinks of (technological) progress as incrementally better specs or added features; this is conceptual progression, an alien notion to many. That’s why the small but growing community is vital. This article is nothing; I’m clearly unqualified—click links, read papers, watch videos.  The scale of ambition, the depth of insight, the lucidity of design, the unity of theory and practice—it’s something to behold. And it’s real. Mercury will be complete in December. It’s happening, right now, and you can be a part of it. Learn, spread the word. Get involved; join the discussion or even the development—the RChain website has all the resources you need.

40% of the world population lives within 100km of the ocean. Greg pointed out that if we can’t even handle today’s refugee crises, what will possibly happen when the waters rise? At the very least, we desperately need better large-scale coordination systems. Will we make it to the next millennium? We can—just a matter of will.

Thank you for reading. You are great.