Higher-Dimensional Rewriting in Warsaw (Part 1)

This summer there will be a conference on higher-dimensional algebra and rewrite rules in Warsaw. They want people to submit papers! I’ll give a talk about presentations of symmetric monoidal categories that arise in electrical engineering and control theory. This is part of the network theory program, which we talk about so often here on Azimuth.

There should also be interesting talks about combinatorial algebra, homotopical aspects of rewriting theory, and more:

Higher-Dimensional Rewriting and Applications, 28-29 June 2015, Warsaw, Poland. Co-located with the RDP, RTA and TLCA conferences. Organized by Yves Guiraud, Philippe Malbos and Samuel Mimram.

Description

Over recent years, rewriting methods have been generalized from strings and terms to richer algebraic structures such as operads, monoidal categories, and more generally higher dimensional categories. These extensions of rewriting fit in the general scope of higher-dimensional rewriting theory, which has emerged as a unifying algebraic framework. This approach allows one to perform homotopical and homological analysis of rewriting systems (Squier theory). It also provides new computational methods in combinatorial algebra (Artin-Tits monoids, Coxeter and Garside structures), in homotopical and homological algebra (construction of cofibrant replacements, Koszulness property). The workshop is open to all topics concerning higher-dimensional generalizations and applications of rewriting theory, including

• higher-dimensional rewriting: polygraphs / computads, higher-dimensional generalizations of string/term/graph rewriting systems, etc.

• homotopical invariants of rewriting systems: homotopical and homological finiteness properties, Squier theory, algebraic Morse theory, coherence results in algebra and higher-dimensional category theory, etc.

• linear rewriting: presentations and resolutions of algebras and operads, Gröbner bases and generalizations, homotopy and homology of algebras and operads, Koszul duality theory, etc.

• applications of higher-dimensional and linear rewriting and their interactions with other fields: calculi for quantum computations, algebraic lambda-calculi, proof nets, topological models for concurrency, homotopy type theory, combinatorial group theory, etc.

• implementations: the workshop will also be interested in implementation issues in higher-dimensional rewriting and will allow demonstrations of prototypes of existing and new tools in higher-dimensional rewriting.

Submitting

Important dates:

• Submission: April 15, 2015

• Notification: May 6, 2015

• Final version: May 20, 2015

• Conference: 28-29 June, 2015

Submissions should consist of an extended abstract, approximately 5 pages long, in standard article format, in PDF. The page for uploading those is here. The accepted extended abstracts will be made available electronically before the workshop.

Organizers

Program committee:

• Vladimir Dotsenko (Trinity College, Dublin)

• Yves Guiraud (INRIA / Université Paris 7)

• Jean-Pierre Jouannaud (École Polytechnique)

• Philippe Malbos (Université Claude Bernard Lyon 1)

• Paul-André Melliès (Université Paris 7)

• Samuel Mimram (École Polytechnique)

• Tim Porter (University of Wales, Bangor)

• Femke van Raamsdonk (VU University, Amsterdam)

4 Responses to Higher-Dimensional Rewriting in Warsaw (Part 1)

  1. linasv says:

    I would be very thrilled to see more work on non-symmetric monoidal categories. Bob Coecke has done some work here, but I’d like to see more.

    The non-symmetric aspect appears to be very important for natural language (the word order in a sentence matters). Coecke rediscovers (and puts on a category-theoretic foundation) the linguistic theory of “dependency grammar”, and specifically “link grammar”, although he does not seem to know this, or to have pursued it any further. So this first step is exciting, but more is needed.

    To explain the “higher-dimensional” aspect, I need some background. We know that the dependency parse of an English sentence is a directed tree whose links are typed. So: like an electrical circuit, but most of the wires are diodes, having a direction, and the diodes come in dozens of different types. Oh, and some links are better “conductors” than others, so that there is a preferred flow through the path of least “resistance” or “cost”; a “minimal spanning tree” tends to be the preferred structure. Oh, and the dependency “tree” need not actually be a tree: cycles are allowed and can be important, serving as constraints.

    The electrical analogy breaks down here. Although “diodes” connect words, the words themselves are interesting and unique: some words have a valency of 2 or 3 (verbs connect to subjects, objects and adverbial modifiers), and some have a valency of one (determiners: the, a, this, that connect to just one noun). The “valency” is analogous to “tensor rank”, so a valency of 3 means a word is like a tensor with three indices that need to be contracted with something to form a valid expression. Electrical-circuit-wise, it’s like a transistor with three wires. Unlike electrical circuits, the links are typed, which means you cannot hook anything to anything, but only to other, compatible types (wires). The types are a constraint. (Reminiscent of electrical safety building codes: electricians must never ever connect a black wire to a white wire, and neither of those to a green wire.)
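    A minimal sketch of that valency-and-type constraint in Python, with invented words and link types (a toy illustration, not link-grammar’s actual connector vocabulary):

        # Toy model of typed linkage: each word exposes typed "connectors"
        # (its valency), and a link may only join two words that both
        # expose that type. Words and types here are invented.
        WORDS = {
            "the":    ["D"],            # determiner: valency one, links to a noun
            "cat":    ["D", "S"],       # noun: takes a determiner, serves as subject
            "chased": ["S", "O"],       # verb: valency two, subject and object links
            "mouse":  ["D", "O"],
        }

        def can_link(word_a, word_b, link_type):
            """Two words may be joined only via a type they both expose."""
            return link_type in WORDS[word_a] and link_type in WORDS[word_b]

        # "the cat chased the mouse", as a list of typed links:
        parse = [("the", "cat", "D"), ("cat", "chased", "S"),
                 ("chased", "mouse", "O"), ("the", "mouse", "D")]
        assert all(can_link(a, b, t) for a, b, t in parse)
        assert not can_link("the", "chased", "D")  # black wire to white wire: rejected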

    The higher-dimensional aspect is this, then: consider the two phrases “X is next to Y” and “X borders on Y”. Here X and Y are the uncontracted tensor indices (the free variables), and all the other links between the various words are bound. The trick, then, is recognizing synonymous phrases, possibly with a rewrite mechanism.

    So here’s where I’m at: I have a natural language parser called “link-grammar” that is more or less isomorphic with Bob Coecke’s non-symmetric monoidal categories (if you squint a little bit). I have two graph-rewriting systems: one, called RelEx, applies graph rewrite rules to extract certain quasi-semantic features and subgraphs from purely syntactic link-grammar graphs; the other is a general-purpose (hyper-)graph rewriting system within OpenCog (the graphs are actually hypergraphs, because hypergraphs are a better fit for AI, and the “edges” are typed, in the type-theory sense of type).

    Here’s what I want: a system that, given large quantities of text, can automatically discern the graphs, the relations, the valencies, the rewrite rules, presumably using some maximum-entropy principle. I’ve gotten part-way down this path: the category theory and the graphs are fairly clear. It is very confusing, however, to correctly compute the entropy for such a graph, because the edges are typed and can range over only certain types; the types mean that only certain vertices can connect and not others. What’s the entropy or mutual information of such a graph? This is very vexing.
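    As a strawman for that entropy question, here is the naive starting point: the Shannon entropy of the empirical distribution of edge types, computed on made-up toy data. It deliberately ignores the typing constraints, which is exactly the hard part:

        import math
        from collections import Counter

        # Naive strawman: entropy of the edge-type distribution in a parsed
        # corpus, ignoring which vertices the types permit to connect.
        # The edge labels below are invented toy data.
        edge_types = ["S", "O", "D", "D", "S", "D", "MV", "D", "S", "O"]

        counts = Counter(edge_types)
        total = sum(counts.values())
        entropy = -sum((n / total) * math.log2(n / total) for n in counts.values())
        print(f"naive edge-type entropy: {entropy:.3f} bits")

        # Untouched here: conditioning on the type constraints, so that
        # impossible links contribute no probability mass.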

    A related, maybe simpler (???) problem is the automated discovery of graph rewrite rules for synonymous graphs: e.g., given the “X borders on Y” example above, discover the rewrite rules that correctly transform the one phrase into the other, without allowing incorrect transformations.
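    One naive way to make that concrete is to represent a rewrite rule as a pair of triple patterns over the free variables X and Y; the relation names below are placeholders, not the output of any real parser:

        # A rewrite rule as a pair of graph patterns over free variables.
        # Each pattern is a set of (head, relation, tail) triples.
        LHS = {("X", "subj", "borders"), ("borders", "on", "Y")}
        RHS = {("X", "subj", "is"), ("is", "next-to", "Y")}

        def apply_rule(graph, lhs, rhs, binding):
            """Replace one instance of lhs (under a variable binding) with rhs."""
            def ground(pattern):
                return {(binding.get(h, h), rel, binding.get(t, t))
                        for h, rel, t in pattern}
            instance = ground(lhs)
            if not instance <= graph:
                return graph              # no match: leave the graph alone
            return (graph - instance) | ground(rhs)

        g = {("France", "subj", "borders"), ("borders", "on", "Spain")}
        print(apply_rule(g, LHS, RHS, {"X": "France", "Y": "Spain"}))
        # -> {("France", "subj", "is"), ("is", "next-to", "Spain")}

    The discovery problem, finding such rules automatically, is of course the hard part; this only shows what a candidate rule would look like once found.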

    Anyway, it’s all very exciting stuff (to me). I have absolutely no clue whether any of the other “buzzword” algebras and dualities and homologies can provide any insight into the structure of language, but I would very much like to know.

    • John Baez says:

      All this is very interesting. I’m not deep enough into linguistics to quickly offer interesting suggestions. I understand all the buzzwords that I provided links to—operads, homological algebra, homotopical algebra, cofibrant replacement, Koszulness, etc.—but I don’t see instant applications to what you’re saying. I’ll think about it! I want to start applying more of these ideas to network theory, but only after I get the more ‘applied’ side figured out a bit better, so I know better what problems I’m trying to solve.

  2. Mike Stay says:

    Greg Meredith and I intend to submit a paper on a 2-categorical interpretation of pi calculus to this conference. The idea is that we have two generating objects, N for names and P for processes. The 2-category is cartesian closed and has coproducts, so we can talk about lists of names rather than just tuples of a fixed length.

    Morphisms into P are called “processes”. P is a strict commutative monoid object: there’s a “do-nothing” process that’s the unit and “running concurrently” is the monoidal product, denoted by the “pipe” symbol |.

    There are three generating processes beyond the built-ins:

    speak: N × 1/(1-N) -> P takes a name to speak to and a list of names to say. (Here 1/(1-N) is the type of lists of names, the formal geometric series 1 + N + N² + ⋯.)

    listen: N × (1/(1-N) ⊸ P) -> P takes the name of the listener and a function that gets a list of names and produces a process.

    comm: 1 -> P is a catalyst that enables synchronization on a name as I’ll describe below.
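    Here is a term-level sketch of these generators in Python, as stand-ins for the categorical morphisms (the class names are mine, not notation from our paper):

        from dataclasses import dataclass
        from typing import Callable, Tuple

        @dataclass(frozen=True)
        class Nil:                 # the do-nothing process, the monoid unit
            pass

        @dataclass(frozen=True)
        class Par:                 # running concurrently: p | q
            left: object
            right: object

        @dataclass(frozen=True)
        class Speak:               # speak: N × 1/(1-N) -> P
            channel: str
            message: Tuple[str, ...]

        @dataclass(frozen=True)
        class Listen:              # listen: N × (1/(1-N) ⊸ P) -> P
            channel: str
            body: Callable         # takes a list of names, returns a process

        @dataclass(frozen=True)
        class Comm:                # comm: 1 -> P, the synchronization catalyst
            pass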

    Lambda calculus is confluent, which means it doesn’t matter in what order term rewrites happen. Because of this, Lambek was able to mod out by rewriting, and there are nice 2-categorical models of lambda calculus studied by guys like Hilken and Hirschowitz.
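    (A standard illustration: in the term (λx. x x)((λy. y) z) one can reduce the inner redex first, giving (λx. x x) z and then z z, or the outer redex first, giving ((λy. y) z)((λy. y) z) and then z z; both orders meet at the same normal form.)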

    Pi calculus models things like race conditions and contention for resources. It matters very much in what order you open a window and stick your head through it! The pi calculus process listen(x, T) must not allow T to be rewritten until a message has been heard by a guy named x. All rewriting happens in the topmost scope.

    2-categories obey a sort of “principle of relativity” where 2-morphisms can be applied at any point to a composition of 1-morphisms; there’s no way to tell when you’re at the topmost scope without introducing a marker for it. We use the marker “- | comm”.

    There’s a single generating 2-morphism. The left-hand side takes a name, copies it, and feeds one copy to “speak” and one to “listen”. The outputs of “speak”, “listen”, and “comm” are all multiplied together with pipe.

    The right-hand side deletes the name and evaluates the function at the list of names, then multiplies that with the output of “comm”.
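    Continuing the sketch above, the generating 2-morphism becomes a one-step reduction that fires only when comm is present in the top-level parallel composition:

        def flatten(p):
            """View a process as the list of its parallel components."""
            if isinstance(p, Par):
                return flatten(p.left) + flatten(p.right)
            return [] if isinstance(p, Nil) else [p]

        def step(p):
            """speak(x, ys) | listen(x, f) | comm  ~>  f(ys) | comm"""
            procs = flatten(p)
            if not any(isinstance(q, Comm) for q in procs):
                return p               # no comm marker: not the topmost scope
            for s in procs:
                if not isinstance(s, Speak):
                    continue
                for l in procs:
                    if isinstance(l, Listen) and l.channel == s.channel:
                        rest = list(procs)
                        rest.remove(s)            # delete the spoken message
                        rest.remove(l)
                        rest.append(l.body(s.message))  # evaluate at the names
                        out = rest[0]
                        for q in rest[1:]:
                            out = Par(out, q)     # multiply back together with pipe
                        return out
            return p

        # x speaks the name "y"; the listener on x speaks on whatever it hears.
        example = Par(Speak("x", ("y",)),
                      Par(Listen("x", lambda msg: Speak(msg[0], ())), Comm()))
        print(step(example))   # -> Par(Comm(), Speak("y", ()))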

    The standard presentation of pi calculus includes term constructors for declaring a new name and for replicating a process. Greg has a presentation where names are “quoted processes”, which instead adds morphisms quote:P -> N and unquote:N -> P satisfying some relations. Both of these can simulate each other, but Greg’s presentation is easier to reason about with category theory and has some other nice properties.

