RChain

12 May, 2018

guest post by Christian Williams

Mike Stay has been doing some really cool stuff since earning his doctorate. He’s been collaborating with Greg Meredith, who studied the π-calculus with Abramsky, and then conducted impactful research and design in the software industry before some big ideas led him into the new frontier of decentralization. They and a great team are developing RChain, a distributed computing infrastructure based on the reflective higher-order π-calculus, the ρ-calculus.

They’ve made significant progress in the first year, and on April 17-18 they held the RChain Developer Conference in Boulder, Colorado. Just five months ago, the first conference drew only a handful of people; this one brought in well over a hundred. Programmers, venture capitalists, blockchain enthusiasts, experts in software, finance, and mathematics: myriad perspectives from around the globe came to join in the dawn of a new internet. Let’s just say, it’s a lot to take in. This project is the real deal – the idea is revolutionary, the language is powerful, the architecture is elegant; the ambition is immense and skilled developers are actually bringing it to reality. There’s no need for hype: you’re gonna be hearing about RChain.

Documentation, GitHub, Architecture

Here’s something from the documentation:

The open-source RChain project is building a decentralized, economic, censorship-resistant, public compute infrastructure and blockchain. It will host and execute programs popularly referred to as “smart contracts”. It will be trustworthy, scalable, concurrent, with proof-of-stake consensus and content delivery.

The decentralization movement is ambitious and will provide awesome opportunities for new social and economic interactions. Decentralization also provides a counterbalance to abuses and corruption that occasionally occur in large organizations where power is concentrated. Decentralization supports self-determination and the rights of individuals to self-organize. Of course, the realities of a more decentralized world will also have their challenges and issues, such as how the needs of international law, public good, and compassion will be honored.

We admire the awesome innovations of Bitcoin, Ethereum, and other platforms that have dramatically advanced the state of decentralized systems and ushered in this new age of cryptocurrency and smart contracts. However, we also see symptoms that those projects did not use the best engineering and formal models for scaling and correctness in order to support mission-critical solutions. The ongoing debates about scaling and reliability are symptomatic of foundational architectural issues. For example, is it a scalable design to insist on an explicit serialized processing order for all of a blockchain’s transactions conducted on planet earth?

To become a blockchain solution with industrial-scale utility, RChain must provide content delivery at the scale of Facebook and support transactions at the speed of Visa. After due diligence on the current state of many blockchain projects, after deep collaboration with other blockchain developers, and after understanding their respective roadmaps, we concluded that the current and near-term blockchain architectures cannot meet these requirements. In mid-2016, we resolved to build a better blockchain architecture.

Together with the blockchain industry, we are still at the dawn of this decentralized movement. Now is the time to lay down a solid architectural foundation. The journey ahead for those who share this ambitious vision is as challenging as it is worthwhile, and this document summarizes that vision and how we seek to accomplish it.

We began by admitting the following minimal requirements:

  • Dynamic, responsive, and provably correct smart contracts.
  • Concurrent execution of independent smart contracts.
  • Data separation to reduce unnecessary data replication of otherwise independent tokens and smart contracts.
  • Dynamic and responsive node-to-node communication.
  • Computationally non-intensive consensus/validation protocol.

Building quality software is challenging. It is easier to build “clever” software; however, the resulting software is often of poor quality, riddled with bugs, difficult to maintain, and difficult to evolve. Inheriting and working on such software can be hellish for development teams, not to mention their customers. When building an open-source system to support a mission-critical economy, we reject a minimal-success mindset in favor of end-to-end correctness.

To accomplish the requirements above, our design approach is committed to:

  • A computational model that assumes fine-grained concurrency and dynamic network topology.
  • A composable and dynamic resource addressing scheme.
  • The functional programming paradigm, as it more naturally accommodates distributed and parallel processing.
  • Formally verified, correct-by-construction protocols which leverage model checking and theorem proving.
  • The principles of intension and compositionality.

RChain is light years ahead of the industry. Why? It upholds the principle of correctness by construction with the depth and rigor of mathematics. For years, Mike and Greg have been developing original ideas for distributed computation: in particular, logic as a distributive law is an “algorithm for deriving a spatial-behavioral type system from a formal presentation of a computational calculus.” This is a powerful way to integrate operational semantics into a language and prove soundness with a single natural transformation; it also provides an extremely expressive query language, with which you could search the entire world to find “code that does x”. Mike’s strong background in higher category theory has enabled the formalization of Greg’s ideas, which Greg has developed over decades of thinking deeply and comprehensively about the world of computing. Of all of these, there is one concept that is the heart and pulse of RChain, unifying the system into a rational whole: the ρ-calculus.

So what’s the big idea? First, some history: back in the late 80s, Greg developed a classification of computational models called “the 4 C’s”:

  • completeness,
  • compositionality,
  • (a good notion of) complexity, and
  • concurrency.

He found that no existing model had all four, and predicted the existence of one that did. Just a few years later, Milner invented the π-calculus, and since then it has reigned as the natural language of network computing. It presents a totally different way of thinking: instead of representing sequential instructions for a single machine, the π-calculus is fundamentally concurrent—processes or agents communicate over names or channels, and computation occurs through the interaction of processes. The language is simple yet remarkably powerful; it is deeply connected with game semantics and linear logic, and has become an essential tool in systems engineering and biocomputing: see mobile process calculi for programming the blockchain.


Here is the basic syntax. The variables x,y are names, and P,Q are processes:

P,Q := 0 | (νx)P | x?(y).P | x!(y).P | P|Q

(do nothing | create new x; run P | receive on x and bind to y; run P | send value y on x; run P | run P and Q in parallel)

The computational engine, the basic reduction analogous to beta-reduction of lambda calculus, is the communication rule:

COMM : x!(y).P|x?(z).Q → P|Q[y/z]

(given parallel output and input processes along the same channel, the value is transferred from the output to the input, and is substituted for all occurrences of the input variable in the continuation process)
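
For example, here is a minimal concrete instance of the rule, with v and w standing in for whatever names are being communicated:

x!(v).0 | x?(z).z!(w).0 → 0 | v!(w).0

(the receiver was waiting to send w on whichever channel it received; having received v, it now sends w on v)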

The definition of a process calculus must also specify a structural congruence: a set of equations saying when two processes are to be identified—for example, ({P}, |, 0) forms a commutative monoid.
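
Spelled out, the commutative-monoid laws for parallel composition are:

P|0 ≡ P      P|Q ≡ Q|P      (P|Q)|R ≡ P|(Q|R)

(α-equivalence of bound names and scope-extrusion laws for ν are typically included as well)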


The π-calculus reforms computation, on the most basic level, to be a cooperative activity. Why is this important? To have a permanently free internet, we have to be able to support it without reliance on centralized powers. This is one of the simplest points, but there are many deeper reasons which I am not yet knowledgeable enough to express. It’s all about the philosophy of openness which is characteristic of applied category theory: historically, we have developed theories and practices which are isolated from each other and the world, and had to fabricate their interrelation and cooperation ad hoc; this leaves us needlessly struggling with inadequate systems, and limits our thought and action.

Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it. — John Backus, 1977 ACM Turing Award

There have been various mitigations of this kind of problem, but the cognitive limitation remains, and a total renewal is necessary; the π-calculus completely reimagines the nature of computation in society, and opens up vast possibilities. We can begin to conceive of the many interwoven strata of computation as a coherent, fluid whole. While I was out of my depth in many talks during the conference, I began to appreciate that this was a truly comprehensive innovation: RChain reforms almost every aspect of computing, from the highest abstraction all the way down to the metal. Coalescing the architecture, as mentioned earlier, is the formal calculus as the core guiding principle. There was some concern that because the ρ-calculus is so different from traditional languages, there may be resistance to adoption; but our era is a paradigm shift, the call is for a new way of thinking, and we must adapt.

So why are we using the reflective higher-order π-calculus, the ρ-calculus? Because there’s just one problem with the conventional π-calculus: it presupposes a countably infinite collection of atomic names. Not only are these names problematic to generate and manage; their lack of structure is also a massive waste. In this regard, the π-calculus was incomplete, until Greg realized that you can “close the loop” with reflection, a powerful form of self-reference:

Code ←→ Data

The mantra is that names are quoted processes; this idea pervades and guides the design of RChain. There is no need to import infinitely many opaque, meaningless symbols—the code itself is nothing but clear, meaningful syntax. If there is an intrinsic method of reference and dereference, or “quoting and unquoting”, code can be turned into data, sent as a message, and then turned back into code; this is known as “code mobility”, and it means one can communicate big programs as easily as emails. It also allows for metaprogramming: at an industrial level, not only do people write programs—programs write programs. This is essential to creating a robust virtual infrastructure.

So, how can the π-calculus be made reflective? By solving for the least fixed point of a recursive equation, which parametrizes processes by names:

P[x] = 0 | x?(x).P[x] | x!(P[x]) | P[x]|P[x] | @P[x] | *x

RP = P[RP]

This is reminiscent of how the Y combinator enables recursion by giving the fixed point of any function, Yf = f(Yf). The last two terms of the syntax are reference and dereference, which turn code into data and data into code. Notice that we did not include a continuation for output: the ρ-calculus is asynchronous, meaning that the sender does not get confirmation that the message has been received; this is important for efficient parallel computation and corresponds to polarised linear logic. We adopt the convention that names are output and processes are input. The last two modifications to complete the official ρ-calculus syntax are multi-input and pattern-matching:


P,Q := 0                               null process
     | for( p1 ← x1, …, pn ← xn ).P    input-guarded process
     | x!( @Q )                        output a name
     | *x                              dereference: evaluate the code a name refers to
     | P|Q                             parallel composition

x,p := @P                              names and patterns are quoted processes

(each ‘pi’ is a “pattern” or formula to collect terms on channel ‘xi’—this is extremely useful and general, and enables powerful functionality throughout the system)
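
Communication in the ρ-calculus works much like the π-calculus COMM rule above. Roughly (in the single-input case, and glossing over the details of pattern matching), the sent name is substituted for the bound pattern in the continuation:

x!( @Q ) | for( y ← x ).P → P[@Q/y]

In particular, x!( @Q ) | for( y ← x ).*y reduces to *@Q, which evaluates to Q. The process Q is quoted, shipped along x as a name, and unquoted on the other side; this is the “code mobility” described earlier, in a single reduction step.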


Simple. Of course, this is not really a programming language yet, though it is more usable than the pure λ-calculus. Rholang, the actual language of RChain, adds some essential features:

ρ-calculus + variables + useful ground terms + new name construction + arithmetic + pattern matching = Rholang

Here’s the specification, syntax and semantics, and a visualization; explore code and examples in GitHub and learn the contract design in the documentation—you can even try coding on rchain.cloud! For those who don’t like clicking all these links, let’s see just one concrete example of a contract, the basic program in Rholang: a process with persistent state, associated code, and associated addresses. This is a Cell, which stores a value until it is accessed or updated:


contract Cell( get, set, state ) = {
  select {
    case rtn <- get; v <- state => {
      rtn!( *v ) | state!( *v ) | Cell( get, set, state )
    }
    case newValue <- set; v <- state => {
      state!( *newValue ) | Cell( get, set, state )
    }
  }
}


The parameters are the channels on which the contract communicates. Cell selects between two possibilities. Either it is being accessed, i.e. there is data (a return channel) to receive on get, in which case it sends the stored value on rtn, re-sends its state, and calls itself again; or it is being updated, i.e. there is data (a new value) to receive on set, in which case it replaces its state with the new value and calls itself again. This shows how the ontology of the language enables natural recursion, and thereby persistent storage: state is Cell’s way of “talking to itself”; since the sequential aspect of Rholang is functional, one “cycles” data in order to persist it. The storage layer uses a similar idea; the semantics may be related to traced monoidal categories.
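
To make the flow concrete, here is a rough usage sketch in the same draft syntax, assuming the Cell definition above is in scope; the channel names, the initial value 0, and the update 42 are illustrative, and early Rholang drafts differed in surface details, so treat this as annotated pseudocode:

new get, set, state, rtn in {
  Cell( get, set, state ) |   // start an instance of the cell
  state!( 0 ) |               // seed its state with an initial value
  set!( 42 ) |                // update the stored value
  get!( rtn )                 // request the current value; the cell replies on rtn
}

Note that the set and the get race for the cell’s state, so the value returned on rtn may be either 0 or 42; this is exactly the kind of contention for resources discussed under namespaces below.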

Curiously, the categorical semantics of the ρ-calculus has proven elusive. There is the general ideology that λ:sequential :: π:concurrent, that the latter is truly fundamental, but the Curry-Howard-Lambek isomorphism has not yet been generalized canonically—though there has been partial success, involving functor-category denotational semantics, linear logic, and session types. Despite its great power and universality, the ρ-calculus remains a bit of a mystery in mathematics: this fact should intrigue anyone who cares about logic, types, and categories as the foundations of abstract thought.

Now, the actual system—the architecture consists of five interwoven layers (all better explained in the documentation):


Storage: based on Special K – “a pattern language for the web.” This layer stores both key-value pairs and continuations through an essential duality of database and query—if you don’t find what you’re looking for, leave what you would have done in its place, and when it arrives, the process will continue. Greg characterizes this idea as the computational equivalent of the law of excluded middle.

[channel, pattern, data, continuation]
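
Roughly, the duality works as follows (a sketch in the notation of the calculus above, not the actual storage API, with “–” marking an empty slot):

x!( @Q ) arrives and no matching receive is waiting at x   ⇒   store [ x, –, @Q, – ] as data
for( ptrn ← x ).P arrives and no matching send is waiting at x   ⇒   store [ x, ptrn, –, P ] as a continuation
a later arrival matches at x   ⇒   remove the stored entry and run P with the data bound according to ptrn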

RChain has a refined, multidimensional view of resources – compute, memory, storage, and network—and accounts for their production and consumption linearly.

Execution: a Rho Virtual Machine instance is a context for ρ-calculus reduction of storage elements. The entire state of the blockchain is one big Rholang term, which is updated by a transaction: a receive invokes a process which changes key values, and the difference must be verified by consensus. Keys permanently maintain the entire history of state transitions. The RhoVM is currently based on the Java VM, but will eventually be hosted natively.

Namespace: a set of channels, i.e. resources, organized by a logic for access and authority. The primary significance is scalability – a user does not need to deal with the whole chain, only pertinent namespaces. ‘A namespace definition may control the interactions that occur in the space, for example, by specifying: accepted addresses, namespaces, or behavioral types; maximum or minimum data size; or input-output structure.’ Namespaces also handle the nondeterminism of the two basic “race conditions”, where processes contend for resources. In the first example below, two outputs compete for one input; in the second, two inputs compete for one output:

x!(@Q1) | for(ptrn <- x){P} | x!(@Q2)

for(ptrn <- x){P1} | x!(@Q) | for(ptrn <- x){P2}

In contrast to the flat public-key addresses of other blockchains, domains work with DNS and extend it with a compositional tree structure. Each node, as a named channel, is itself a namespace, so definitions can be built up inductively, with precise control.

Consensus: verify partial orders of changes to the one-big-Rholang-term state; the block structure should persist as a directed acyclic graph. The algorithm is Proof of Stake – the capacity to validate in a namespace is tied to the “stake” one holds in it. Greg explains the approach via “tangles”, and how the complex Casper protocol works naturally with RChain.

Contracts: ‘An RChain contract is a well-specified, well-behaved, and formally verified program that interacts with other such programs.’ (K Framework) ; (DAO attack) ‘A behavioral type is a property of an object that binds it to a discrete range of action patterns. Behavioral types constrain not only the structure of input and output, but the permitted order of inputs and outputs among communicating and (possibly) concurrent processes under varying conditions… The Rholang behavioral type system will iteratively decorate terms with modal logical operators, which are propositions about the behavior of those terms. Ultimately properties [such as] data information flow, resource access, will be concretized in a type system that can be checked at compile-time. The behavioral type systems Rholang will support make it possible to evaluate collections of contracts against how their code is shaped and how it behaves. As such, Rholang contracts elevate semantics to a type-level vantage point, where we are able to scope how entire protocols can safely interface.’ (LADL)

So what can you build on RChain? Anything.

Decentralized applications: identity, reputation, tokens, timestamping, financial services, content delivery, exchanges, social networks, marketplaces, (decentralized autonomous) organizations, games, oracles, (Ethereum dApps), … new forms of code yet to be imagined. It’s much more than a better internet: RChain is a potential abstract foundation for a rational global society. The system is a minimalist framework of universally principled design; it is a canvas with which we can begin to explore how the world should really work. If we are open and thoughtful, if we care enough, we can learn to do things right.

The project is remarkably unknown for its magnitude, and building widespread adoption may be one of RChain’s greatest challenges. Granted, it is new; but education will not be easy. It’s too big a reformation for a public mindset which thinks of (technological) progress as incrementally better specs or added features; this is conceptual progression, an alien notion to many. That’s why the small but growing community is vital. This article is nothing; I’m clearly unqualified—click links, read papers, watch videos.  The scale of ambition, the depth of insight, the lucidity of design, the unity of theory and practice—it’s something to behold. And it’s real. Mercury will be complete in December. It’s happening, right now, and you can be a part of it. Learn, spread the word. Get involved; join the discussion or even the development—the RChain website has all the resources you need.

40% of the world population lives within 100km of the ocean. Greg pointed out that if we can’t even handle today’s refugee crises, what will possibly happen when the waters rise? At the very least, we desperately need better large-scale coordination systems. Will we make it to the next millennium? We can—just a matter of will.

Thank you for reading. You are great.


Symposium on Compositional Structures

4 May, 2018

As I type this, sitting in a lecture hall at the Lorentz Center, Jamie Vicary, University of Birmingham and University of Oxford, is announcing a new series of meetings:

Symposium on Compositional Structures.

The website, which will probably change, currently says this:

Symposium on Compositional Structures (SYCO)

The Symposium on Compositional Structures is a new interdisciplinary meeting aiming to support the growing community of researchers interested in the phenomenon of compositionality, from both applied and abstract perspectives, and in particular where category theory serves as a unifying common language.

We welcome submissions from researchers across computer science, mathematics, physics, philosophy, and beyond, with the aim of fostering discussion, disseminating new ideas, and spreading knowledge of open problems between fields. Submission is encouraged for both mature research and work in progress, and by both established academics and junior researchers, including students. The meeting does not have proceedings.

While no list of topics could be exhaustive, SYCO welcomes submissions with a compositional focus related to any of the following areas, in particular from the perspective of category theory:

logical methods in computer science, including quantum and classical programming, concurrency, natural language processing and machine learning;

graphical calculi, including string diagrams, Petri nets and reaction networks;

languages and frameworks, including process algebras, proof nets, type theory and game semantics;

abstract algebra and pure category theory, including monoidal category theory, higher category theory, operads, polygraphs, and relationships to homotopy theory;

quantum algebra, including quantum computation and representation theory;

tools and techniques, including rewriting, formal proofs and proof assistants;

industrial applications, including case studies and real-world problem descriptions.

Meetings

Meetings will involve both invited and contributed talks. The first meeting is planned for Autumn 2018, with more details to follow soon.

Funding

Some funding may be available to support travel and subsistence, especially for junior researchers who are speaking at the meeting.

Steering committee

The symposium is managed by the following people:

Ross Duncan, University of Strathclyde.
Chris Heunen, University of Edinburgh.
Aleks Kissinger, Radboud University Nijmegen.
Samuel Mimram, École Polytechnique.
Mehrnoosh Sadrzadeh, Queen Mary.
Pawel Sobocinski, University of Southampton.
Jamie Vicary, University of Birmingham and University of Oxford.


Applied Category Theory at NIST (Part 2)

18 April, 2018

Here are links to the slides and videos for most of the talks from this workshop:

Applied Category Theory: Bridging Theory & Practice, March 15–16, 2018, NIST, Gaithersburg, Maryland, USA. Organized by Spencer Breiner and Eswaran Subrahmanian.

They give a pretty good picture of what went on. Spencer Breiner put them up here; what follows is just a copy of what’s on his site.

Unfortunately, the end of Dusko Pavlovic’s talk, as well as Ryan Wisnesky’s and Steve Huntsman’s talks, was lost due to a technical error. You can also find a YouTube playlist with all of the videos here.

Introduction to NIST:

Ram Sriram – NIST and Category Theory

 

Spencer Breiner – Introduction

Invited talks:

Bob Coecke – From quantum foundations to cognition via pictures

 

Dusko Pavlovic – Security Science in string diagrams (partial video)

 

John Baez – Compositional design and tasking of networks (part 1)

 

John Foley – Compositional design and tasking of networks (part 2)

 

David Spivak – A higher-order temporal logic for dynamical systems

 

Lightning Round Talks:

Ryan Wisnesky – Categorical databases (no video)

Steve Huntsman – Towards an operad of goals (no video)

 

Bill Regli – Disrupting interoperability (no slides)

 

Evan Patterson – Applied category theory in data science

 

Brendan Fong – Data structures for network languages

 

Stephane Dugowson – A short introduction to a general theory of interactivity

 

Michael Robinson – Sheaf methods for inference

 

Cliff Joslyn – Seeking a categorical systems theory via the category of hypergraphs

 

Emilie Purvine – A category-theoretical investigation of the type hierarchy for heterogeneous sensor integration

 

Helle Hvid Hansen – Long-term values in Markov decision processes, corecursively

 

Alberto Speranzon – Localization and planning for autonomous systems via (co)homology computation

 

Josh Tan – Indicator frameworks (no slides)

Breakout round report


Applied Category Theory 2018 Schedule

13 April, 2018

Here’s the schedule of the ACT2018 workshop:

[Workshop schedule image]

They put me on last, either because my talk will be so boring that it’s okay if everyone has already left, or because my talk will be so exciting that nobody will want to leave. I haven’t dared ask the organizers which one.

On the other hand, they’ve put me on first for the “school” which occurs one week before the workshop. Here’s the schedule for the ACT 2018 Adjoint School:


A Double Conference

23 February, 2018

Here’s a cool way to cut carbon emissions: a double conference. The idea is to have a conference in two faraway locations connected by live video stream, to reduce the amount of long-distance travel!

Even better, it’s about a great subject:

• Higher algebra and mathematical physics, August 13–17, 2018, Perimeter Institute, Waterloo, Canada, and Max Planck Institute for Mathematics, Bonn, Germany.

Here’s the idea:

“Higher algebra” has become important throughout mathematics, physics, and mathematical physics, and this conference will bring together leading experts in higher algebra and its mathematical physics applications. In physics, the term “algebra” is used quite broadly: any time you can take two operators or fields, multiply them, and write the answer in some standard form, a physicist will be happy to call this an “algebra”. “Higher algebra” is characterized by the appearance of a hierarchy of multilinear operations (e.g. A-infinity and L-infinity algebras). These structures can be higher categorical in nature (e.g. derived categories, cohomology theories), and can involve mixtures of operations and co-operations (Hopf algebras, Frobenius algebras, etc.). Some of these notions are purely algebraic (e.g. algebra objects in a category), while others are quite geometric (e.g. shifted symplectic structures).

An early manifestation of higher algebra in high-energy physics was supersymmetry. Supersymmetry makes quantum field theory richer and thus more complicated, but at the same time many aspects become more tractable and many problems become exactly solvable. Since then, higher algebra has made numerous appearances in mathematical physics, both high- and low-energy.

Participation is limited. Some financial support is available for early-career mathematicians. For more information and to apply, please visit the conference website of the institute closer to you:

North America: http://www.perimeterinstitute.ca/HAMP
Europe: http://www.mpim-bonn.mpg.de/HAMP

If you have any questions, please write to double.conference.2018@gmail.com.

One of the organizers, Aaron Mazel-Gee, told me:

We are also interested in spreading the idea of double conferences more generally: we’re hoping that our own event’s success inspires other academic communities to organize their own double conferences. We’re hoping to eventually compile a sort of handbook to streamline the process for others, so that they can learn from our own experiences regarding the various unique challenges that organizing such an event poses. Anyways, all of this is just to say that I would be happy for you to publicize this event anywhere that it might reach these broader audiences.

So, if you’re interested in having a double conference, please contact the organizers of this one for tips on how to do it! I’m sure they’ll have better advice after they’ve actually done it. I’ve found that the technical details really matter for these things: it can be very frustrating when they don’t work correctly. Avoiding such problems requires testing everything ahead of time—under conditions that exactly match what you’re planning to do!


Applied Category Theory at NIST (Part 1)

17 February, 2018

I think it’s really cool how applied category theory is catching on. My former student Blake Pollard is working at the National Institute of Standards and Technology on applications of category theory to electrical engineering. He’s working with Spencer Breiner… and now Breiner is helping run a workshop on this stuff:

• Applied Category Theory: Bridging Theory & Practice, March 15–16, 2018, NIST, Gaithersburg, Maryland, USA. Organized by Spencer Breiner and Eswaran Subrahmanian.

It’s by invitation only, but I can’t resist mentioning its existence. Here’s the idea:

What: The Information Technology Laboratory at NIST is pleased to announce a workshop on Applied Category Theory to be held at NIST’s Gaithersburg, Maryland campus on March 15 & 16, 2018. The meeting will focus on practical avenues for introducing methods from category theory into real-world applications, with an emphasis on examples rather than theorems.

Who: The workshop aims to bring together two distinct groups. First, category theorists interested in pursuing applications outside of the usual mathematical fields. Second, domain experts and research managers from industry, government, science and engineering who have in mind potential domain applications for categorical methods.

Intended Outcomes: A proposed landscape of potential CT applications and the infrastructure needed to realize them, together with a 5-10 year roadmap for developing the field of applied category theory. This should include perspectives from industry, academia and government as well as major milestones, potential funding sources, avenues for technology transfer and necessary improvements in tool support and methodology. Exploratory collaborations between category theorists and domain experts. We will ask that each group come prepared to meet the other side. Mathematicians should be prepared with concrete examples that demonstrate practical applications of CT in an intuitive way. Domain experts should bring to the table specific problems to which they can devote time and/or funding as well as some reasons about why they think CT might be relevant to this application.

Invited Speakers:
John Baez (University of California at Riverside) and John Foley (Metron Scientific Solutions).
Bob Coecke (University of Oxford).
Dusko Pavlovic (University of Hawaii).

Some other likely participants include Chris Boner (Metron), Arquimedes Canedo (Siemens at Princeton), Stephane Dugowson (Supméca), William Edmonson (North Carolina A&T), Brendan Fong (MIT), Mark Fuge (University of Maryland), Jack Gray (Penumbra), Helle Hansen (Delft), Jelle Herold (Statebox), Marisa Hughes (Johns Hopkins), Steve Huntsman (BAE Systems), Patrick Johnson (Dassault Systèmes), Al Jones (NIST), Cliff Joslyn (Pacific Northwest National Laboratory), Richard Malek (NSF), Tom Mifflin (Metron), Ira Monarch (Carnegie Mellon), John Paschkewitz (DARPA), Evan Patterson (Stanford), Blake Pollard (NIST), Emilie Purvine (Pacific Northwest National Laboratory), Mark Raugas (Pacific Northwest National Laboratory), Bill Regli (University of Maryland), Michael Robinson (American U.), Alberto Speranzon (Honeywell Aerospace), David Spivak (MIT), Eswaran Subrahmanian (Carnegie Mellon), Jamie Vicary (Birmingham and Oxford), and Ryan Wisnesky (Categorical Informatics).

A bunch of us will stay on into the weekend and talk some more. I hope we make a lot of progress—and I plan to let you know how it goes!

I’ll be giving a joint talk with John Foley about our work using operads to design networks. This work is part of the Complex Adaptive System Composition and Design Environment project being done by Metron Scientific Solutions and managed by John Paschkewitz at DARPA.


Applied Category Theory at UCR (Part 3)

13 November, 2017

We had a special session on applied category theory here at UCR:

Applied category theory, Fall Western Sectional Meeting of the AMS, 4-5 November 2017, U.C. Riverside.

A bunch of people stayed for a few days afterwards, and we had a lot of great discussions. I wish I could explain everything that happened, but I’m too busy right now. Luckily, even if you couldn’t come here, you can now see slides of almost all the talks… and videos of many!

Click on talk titles to see abstracts. For multi-author talks, the person whose name is in boldface is the one who gave the talk. For videos, go here: I haven’t yet created links to all the videos.

Saturday November 4, 2017

9:00 a.m.
A higher-order temporal logic for dynamical systems (talk slides).
David I. Spivak, MIT.

10:00 a.m.
Algebras of open dynamical systems on the operad of wiring diagrams (talk slides).
Dmitry Vagner, Duke University
David I. Spivak, MIT
Eugene Lerman, University of Illinois at Urbana-Champaign

10:30 a.m.
Abstract dynamical systems (talk slides).
Christina Vasilakopoulou, UCR
David Spivak, MIT
Patrick Schultz, MIT

3:00 p.m.
Decorated cospans (talk slides).
Brendan Fong, MIT

4:00 p.m.
Compositional modelling of open reaction networks (talk slides).
Blake S. Pollard, UCR
John C. Baez, UCR

4:30 p.m.
A bicategory of coarse-grained Markov processes (talk slides).
Kenny Courser, UCR

5:00 p.m.
A bicategorical syntax for pure state qubit quantum mechanics (talk slides).
Daniel M. Cicala, UCR

5:30 p.m.
Open systems in classical mechanics (talk slides).
Adam Yassine, UCR

Sunday November 5, 2017

9:00 a.m.
Controllability and observability: diagrams and duality (talk slides).
Jason Erbele, Victor Valley College

9:30 a.m.
Frobenius monoids, weak bimonoids, and corelations (talk slides).
Brandon Coya, UCR

10:00 a.m.
Compositional design and tasking of networks.
John D. Foley, Metron, Inc.
John C. Baez, UCR
Joseph Moeller, UCR
Blake S. Pollard, UCR

10:30 a.m.
Operads for modeling networks (talk slides).
Joseph Moeller, UCR
John Foley, Metron Inc.
John C. Baez, UCR
Blake S. Pollard, UCR

2:00 p.m.
Reeb graph smoothing via cosheaves (talk slides).
Vin de Silva, Department of Mathematics, Pomona College

3:00 p.m.
Knowledge representation in bicategories of relations (talk slides).
Evan Patterson, Stanford University, Statistics Department

3:30 p.m.
The multiresolution analysis of flow graphs (talk slides).
Steve Huntsman, BAE Systems

4:00 p.m.
Data modeling and integration using the open source tool Algebraic Query Language (AQL) (talk slides).
Peter Y. Gates, Categorical Informatics
Ryan Wisnesky, Categorical Informatics