Circuits, Bond Graphs, and Signal-Flow Diagrams

19 May, 2018

 

My student Brandon Coya finished his thesis, and successfully defended it last Tuesday!

• Brandon Coya, Circuits, Bond Graphs, and Signal-Flow Diagrams: A Categorical Perspective, Ph.D. thesis, U. C. Riverside, 2018.

It’s about networks in engineering. He uses category theory to study the diagrams engineers like to draw, and functors to understand how these diagrams are interpreted.

His thesis raises some really interesting pure mathematical questions about the category of corelations and a ‘weak bimonoid’ that can be found in this category. Weak bimonoids were invented by Pastro and Street in their study of ‘quantum categories’, a generalization of quantum groups. So, it’s fascinating to see a weak bimonoid that plays an important role in electrical engineering!

However, in what follows I’ll stick to less fancy stuff: I’ll just explain the basic idea of Brandon’s thesis, say a bit about circuits and ‘bond graphs’, and outline his main results. What follows is heavily based on the introduction of his thesis, but I’ve baezified it a little.

The basic idea

People, and especially scientists and engineers, are naturally inclined to draw diagrams and pictures when they want to better understand a problem. One example is when Feynman introduced his famous diagrams in 1949; particle physicists have been using them ever since. But some other diagrams introduced by engineers are far more important to the functioning of the modern world and its technology. It’s outrageous, but sociologically understandable, that mathematicians have figured out more about Feynman diagrams than these other kinds: circuit diagrams, bond graphs and signal-flow diagrams. This is the problem Brandon aims to fix.

I’ve been unable to track down the early history of circuit diagrams, so if you know about that please tell me! But in the 1940s, Harry Olson pointed out analogies in electrical, mechanical, thermodynamic, hydraulic, and chemical systems, which allowed circuit diagrams to be applied to a wide variety of fields. On April 24, 1959, Henry Paynter woke up and invented the diagrammatic language of bond graphs to study generalized versions of voltage and current, called ‘effort’ and ‘flow,’ which are implicit in the analogies found by Olson. Bond graphs are now widely used in engineering. On the other hand, control theorists use diagrams of a different kind, called ‘signal-flow diagrams’, to study linear open dynamical systems.

Although category theory predates some of these diagrams, it was not until the 1980s that Joyal and Street showed that string diagrams can be used to reason about morphisms in any symmetric monoidal category. This motivates Brandon’s first goal: viewing electrical circuits, signal-flow diagrams, and bond graphs as string diagrams for morphisms in symmetric monoidal categories.

This lets us study networks from a compositional perspective. That is, we can study a big network by describing how it is composed of smaller pieces. Treating networks as morphisms in a symmetric monoidal category lets us build larger ones from smaller ones by composing and tensoring them: this makes the compositional perspective into precise mathematics. To study a network in this way we must first define a notion of ‘input’ and ‘output’ for the network diagram. Then gluing diagrams together, so long as the outputs of one match the inputs of the other, defines the composition for a category.

Network diagrams are typically assigned data, such as the potential and current associated to a wire in an electrical circuit. Since the relation between the data tells us how a network behaves, we call this relation the ‘behavior’ of a network. The way in which we assign behavior to a network comes from first treating a network as a ‘black box’, which is a system with inputs and outputs whose internal mechanisms are unknown or ignored. A simple example is the lock on a doorknob: one can insert a key and try to turn it; it either opens the door or not, and it fulfills this function without us needing to know its inner workings. We can treat a system as a black box through the process called ‘black-boxing’, which forgets its inner workings and records only the relation it imposes between its inputs and outputs.

Since systems with inputs and outputs can be seen as morphisms in a category we expect black-boxing to be a functor out of a category of this sort. Assigning each diagram its behavior in a functorial way is formalized by functorial semantics, first introduced in Lawvere’s thesis in 1963. This consists of using categories with specific extra structure as ‘theories’ whose ‘models’ are structure-preserving functors into other such categories. We then think of the diagrams as a syntax, while the behaviors are the semantics. Thus black-boxing is actually an example of functorial semantics. This leads us to another goal: to study the functorial semantics, i.e. black-boxing functors, for electrical circuits, signal-flow diagrams, and bond graphs.

Brendan Fong and I began this type of work by showing how to describe circuits made of wires, resistors, capacitors, and inductors as morphisms in a category using ‘decorated cospans’. Jason Erbele and I, and separately Bonchi, Sobociński and Zanasi, studied signal flow diagrams as morphisms in a category. In other work Brendan Fong, Blake Pollard and I looked at Markov processes, while Blake and I studied chemical reaction networks using decorated cospans. In all of these cases, we also studied the functorial semantics of these diagram languages.

Brandon’s main tool is the framework of ‘props’, also called ‘PROPs’, introduced by Mac Lane in 1965. The acronym stands for “products and permutations”, and these operations roughly describe what a prop can do. More precisely, a prop is a strict symmetric monoidal category equipped with a distinguished object X such that every object is a tensor power X^{\otimes n}. Props arise because very often we think of a network as going between some set of input nodes and some set of output nodes, where the nodes are indistinguishable from each other. Thus we typically think of a network as simply having some natural number as an input and some natural number as an output, so that the network is actually a morphism in a prop.
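
For example, the category \textrm{FinCospan} appearing below can be viewed as a prop: take the distinguished object X to be a one-element set, so that X^{\otimes n} is an n-element set and a morphism from m to n is a cospan of finite sets

m \rightarrow p \leftarrow n

This is just to fix intuition; the precise statements are in Chapter 2 of the thesis.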

Circuits and bond graphs

Now let’s take a quick tour of circuits and bond graphs. Much more detail can be found in Brandon’s thesis, but this may help you know what to picture when you hear terminology from electrical engineering.

Here is an electrical circuit made of only perfectly conductive wires:

This is just a graph, consisting of a set N of nodes, a set E of edges, and maps s,t\colon E\to N sending each edge to its source and target node. We refer to the edges as perfectly conductive wires and say that wires go between nodes. Then associated to each perfectly conductive wire in an electrical circuit is a pair of real numbers called ‘potential’, \phi, and ‘current’, I.

Typically each node gets a potential, but in the above case the potential at either end of a wire would be the same so we may as well associate the potential to the wire. Current and potential in circuits like these obey two laws due to Kirchhoff. First, at any node, the sum of currents flowing into that node is equal to the sum of currents flowing out of that node. The other law states that any connected wires must have the same potential.

We say that the above circuit is closed as opposed to being open because it does not have any inputs or outputs. In order to talk about open circuits and thereby bring the ‘compositional perspective’ into play we need a notion for inputs and outputs of a circuit. We do this using two maps i\colon X\to N and o\colon Y \to N that specify the inputs and outputs of a circuit. Here is an example:

We call the sets X, Y, and the disjoint union X + Y the inputs, outputs, and terminals of the circuit, respectively. To each terminal we associate a potential and current. In total this gives a space of allowed potentials and currents on the terminals and we call this space the ‘behavior’ of the circuit. Since we do this association without knowing the potentials and currents inside the rest of the circuit we call this process ‘black-boxing’ the circuit. This process hides the internal workings of the circuit and just tells us the relation between inputs and outputs. In fact this association is functorial, but to understand the functoriality first requires that we say how to compose these kinds of circuits. We save this for later.

There are also electrical circuits that have ‘components’ such as resistors, inductors, voltage sources, and current sources. These are graphs as above, but with edges now labelled by elements in some set L. Here is one for example:

We call this an L-circuit. We may also black-box an L-circuit to get a space of allowed potentials and currents, i.e. the behavior of the L-circuit, and this process is functorial as well. The components in a circuit determine the possible potential and current pairs because they impose additional relationships. For example, a resistor between two nodes has a resistance R and is drawn as:

In an L-circuit this would be an edge labelled by some positive real number R. For a resistor like this Kirchhoff’s current law says I_1=I_2 and Ohm’s Law says \phi_2-\phi_1 =I_1R. This tells us how to construct the black-boxing functor that extracts the right behavior.
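
For instance, these two equations already determine what black-boxing does to a lone resistor: its behavior is the 2-dimensional subspace of potentials and currents at its two terminals (a quick worked example in the conventions above, not a quotation from the thesis):

\{ (\phi_1, I_1, \phi_2, I_2) \in \mathbb{R}^4 \, : \, I_1 = I_2, \; \phi_2 - \phi_1 = I_1 R \}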

Engineers often work with wires that come in pairs where the current on one wire is the negative of the current on the other wire. In such a case engineers care about the difference in potential more than each individual potential. For such pairs of perfectly conductive wires:

we call V=\phi_2-\phi_1 the ‘voltage’ and I=I_1=-I_2 the ‘current’. Note the word current is used for two different, yet related concepts. We call a pair of wires like this a ‘bond’ and a pair of nodes like this a ‘port’. To summarize we say that bonds go between ports, and in a ‘bond graph’ we draw a bond as follows:

Note that engineers do not explicitly draw ports at the ends of bonds; we follow this notation and simply draw a bond as a thickened edge. Engineers who work with bond graphs often use the terms ‘effort’ and ‘flow’ instead of voltage and current. Thus a bond between two ports in a bond graph is drawn equipped with an effort and flow, rather than a voltage and current, as follows:

A bond graph consists of bonds connected together using ‘1-junctions’ and ‘0-junctions’. These two types of junctions impose equations between the efforts and flows on the attached bonds. The flows on bonds connected together with a 1-junction are all equal, while the efforts sum to zero, after sprinkling in some signs depending on how we orient the bonds. For 0-junctions it works the other way: the efforts are all equal while the flows sum to zero! The duality here is well-known to engineers but perhaps less so to mathematicians. This is one topic Brandon’s thesis explores.
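
Concretely, with a sign convention where m bonds point into a junction and n bonds point out (one common choice; conventions vary), the equations imposed are:

1-junction: \quad f_1 = \cdots = f_{m+n}, \qquad e_1 + \cdots + e_m = e_{m+1} + \cdots + e_{m+n}

0-junction: \quad e_1 = \cdots = e_{m+n}, \qquad f_1 + \cdots + f_m = f_{m+1} + \cdots + f_{m+n}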

Brandon explains bond graphs in more detail in Chapter 5 of his thesis, but here is an example:

The arrow at the end of a bond indicates which direction of current flow counts as positive, while the bar is called the ‘causal stroke’. These are unnecessary for Brandon’s work, so he adopts a simplified notation without the arrow or bar. In engineering it’s also important to attach general circuit components, but Brandon doesn’t consider these.

Outline

In Chapter 2 of his thesis, Brandon provides the necessary background for studying four categories as props:

• the category of finite sets and spans: \textrm{FinSpan}

• the category of finite sets and relations: \textrm{FinRel}

• the category of finite sets and cospans: \textrm{FinCospan}

• the category of finite sets and corelations: \textrm{FinCorel}.

In particular, \textrm{FinCospan} and \textrm{FinCorel} are crucial to the study of networks.

In Corollary 2.3.4 he notes that any prop has a presentation in terms of generators and equations. Then he recalls the known presentations for \textrm{FinSpan}, \textrm{FinCospan}, and \textrm{FinRel}. Proposition 2.3.7 lets us build props as quotients of other props.

He begins Chapter 3 by showing that \textrm{FinCorel} is ‘the prop for extraspecial commutative Frobenius monoids’, based on a paper he wrote with Brendan Fong. This result also gives a presentation for \textrm{FinCorel}.

Then he defines an “L-circuit” as a graph with specified inputs and outputs, together with a labelling set for the edges of the graph. L-circuits are morphisms in the prop \textrm{Circ}_L. In Proposition 3.2.8 he uses a result of Rosebrugh, Sabadini and Walters to show that \textrm{Circ}_L can be viewed as the coproduct of \textrm{FinCospan} and the free prop on the set L of labels.

Brandon then defines \textrm{Circ} to be the prop \textrm{Circ}_L where L consists of a single element. This example is important, because \textrm{Circ} can be seen as the category whose morphisms are circuits made of only perfectly conductive wires! From any morphism in \textrm{Circ} he extracts a cospan of finite sets and then turns the cospan into a corelation. These two processes are functorial, so he gets a method for sending a circuit made of only perfectly conductive wires to a corelation:

\textrm{Circ} \stackrel{H'}{\longrightarrow} \textrm{FinCospan} \stackrel{H}{\longrightarrow} \textrm{FinCorel}

There is also a functor

K\colon \textrm{FinCorel} \to \textrm{FinRel}_k

where \textrm{FinRel}_k is the category whose objects are finite dimensional vector spaces and whose morphisms R\colon U\to V are linear relations, that is, linear subspaces R\subseteq U \oplus V. By composing with the above functors H' and H he associates a linear relation R to any circuit made of perfectly conductive wires. On the other hand he gets a subspace for any such circuit by first assigning potential and current to each terminal, and then subjecting these variables to the appropriate physical laws.

It turns out that these two ways of assigning a subspace to a morphism in \textrm{Circ} are the same. So, he calls the linear relation associated to a circuit using the composite KHH' the “behavior” of the circuit and defines the “black-boxing” functor

\blacksquare \colon \textrm{Circ}\to \textrm{FinRel}_k

to be the composite of these:

\textrm{Circ} \stackrel{H'}{\longrightarrow} \textrm{FinCospan} \stackrel{H}{\longrightarrow} \textrm{FinCorel} \stackrel{K}{\longrightarrow} \textrm{FinRel}_k

Note that the underlying corelation of a circuit made of perfectly conductive wires completely determines the behavior of the circuit via the functor K.

In Chapter 4 he reinterprets the black-boxing functor \blacksquare as a morphism of props. He does this by introducing the category \textrm{LagRel}_k, whose objects are “symplectic” vector spaces and whose morphisms are “Lagrangian” relations. In Proposition 4.1.6 he proves that the functor K\colon \textrm{FinCorel} \to \textrm{FinRel}_k actually picks out a Lagrangian relation for any corelation and thus determines a morphism of props. So, he redefines K to be this morphism

K\colon \mathrm{FinCorel} \to \mathrm{LagRel}_k

and reinterprets black-boxing as the composite

\mathrm{Circ} \stackrel{H'}{\longrightarrow} \mathrm{FinCospan} \stackrel{H}{\longrightarrow} \mathrm{FinCorel} \stackrel{K}{\longrightarrow} \mathrm{LagRel}_k

After doing all this hard work for circuits made of perfectly conductive wires—a warmup exercise that engineers might scoff at—Brandon shows the power of his results by easily extending the black-boxing functor to circuits with arbitrary label sets in Theorem 4.2.1. He applies this result to a prop whose morphisms are circuits made of resistors, inductors, and capacitors. Then he considers a more general and mathematically more natural approach to linear circuits using the prop \textrm{Circ}_k. The morphisms here are open circuits with wires labelled by elements of some chosen field k. In Theorem 4.2.4 he proves the existence of a morphism of props

\blacksquare \colon \textrm{Circ}_k \to \textrm{LagRel}_k

that describes the black-boxing of circuits built from arbitrary linear components.

Brandon then picks up where Jason Erbele’s thesis left off, and recalls how control theorists use “signal-flow diagrams” to draw linear relations. These diagrams make up the category \textrm{SigFlow}_k, which is the free prop generated by the same generators as \textrm{FinRel}_k. Similarly he defines the prop \widetilde{\mathrm{Circ}}_k as the free prop generated by the same generators as \textrm{Circ}_k. Then there is a strict symmetric monoidal functor T\colon \widetilde{\mathrm{Circ}}_k \to \textrm{SigFlow}_k giving a commutative square:

Of course, circuits made of perfectly conductive wires are a special case of linear circuits. We can express this fact using another commutative square:

Combining the diagrams so far, Brandon gets a commutative diagram summarizing the relationship between linear circuits, cospans, corelations, and signal-flow diagrams:

Brandon concludes Chapter 4 by extending his work to circuits with voltage and current sources. These types of circuits define affine relations instead of linear relations. The prop framework lets Brandon extend black-boxing to these types of circuits by showing that affine Lagrangian relations are morphisms in a prop \textrm{AffLagRel}_k. This leads to Theorem 4.4.5, which says that for any field k and label set L there is a unique morphism of props

\blacksquare \colon \textrm{Circ}_L \to \textrm{AffLagRel}_k

extending the other black-boxing functor and sending each element of L to an arbitrarily chosen affine Lagrangian relation between potentials and currents.

In Chapter 5, Brandon studies bond graphs as morphisms in a category. His goal is to define a category \textrm{BondGraph}, whose morphisms are bond graphs, and then assign a space of efforts and flows as behavior to any bond graph using a functor. He also constructs a functor that assigns a space of potentials and currents to any bond graph, which agrees with the way that potential and current relate to effort and flow.

The subtle way he defines \textrm{BondGraph} comes from two different approaches to studying bond graphs, and the problems inherent in each approach. The first approach leads him to a subcategory \textrm{FinCorel}^\circ of \textrm{FinCorel}, while the second leads him to a subcategory \textrm{LagRel}_k^\circ of \textrm{LagRel}_k. There isn’t a commutative square relating these four categories, but Brandon obtains a pentagon that commutes up to a natural transformation by inventing a new category \textrm{BondGraph}:

This category is a way of formalizing Paynter’s idea of bond graphs.

In his first approach, Brandon views a bond graph as an electrical circuit. He takes advantage of his earlier work on circuits and corelations by taking \textrm{FinCorel} to be the category whose morphisms are circuits made of perfectly conductive wires. In this approach a terminal is the object 1 and a wire is the identity corelation from 1 to 1, while a circuit from m terminals to n terminals is a corelation from m to n.

In this approach Brandon thinks of a port as the object 2, since a port is a pair of nodes. Then he thinks of a bond as a pair of wires and hence the identity corelation from 2 to 2. Lastly, the two junctions are two different ways of connecting ports together, and thus specific corelations from 2m to 2n. It turns out that by following these ideas he can equip the object 2 with two different Frobenius monoid structures, which behave very much like 1-junctions and 0-junctions in bond graphs!

It would be great if the morphisms built from these two Frobenius monoids corresponded perfectly to bond graphs. Unfortunately there are some equations which hold between morphisms made from these Frobenius monoids that do not hold for corresponding bond graphs. So, Brandon defines a category \textrm{FinCorel}^\circ using the morphisms that come from these two Frobenius monoids and moves on to a second attempt at defining \textrm{BondGraph}.

Since bond graphs impose Lagrangian relations between effort and flow, this second approach starts by looking back at \textrm{LagRel}_k. The relations associated to a 1-junction make k\oplus k into yet another Frobenius monoid, while the relations associated to a 0-junction make k\oplus k into a different Frobenius monoid. These two Frobenius monoid structures interact to form a bimonoid! Unfortunately, a bimonoid has some equations between morphisms that do not correspond to equations between bond graphs, so this approach also does not result in morphisms that are bond graphs. Nonetheless, Brandon defines a category \textrm{LagRel}_k^\circ using the two Frobenius monoid structures k\oplus k.

Since it turns out that \textrm{FinCorel}^\circ and \textrm{LagRel}_k^\circ have corresponding generators, Brandon defines \textrm{BondGraph} as a prop that also has corresponding generators, but with only the equations found in both \textrm{FinCorel}^\circ and \textrm{LagRel}_k^\circ. By defining \textrm{BondGraph} in this way he automatically gets two functors

F\colon \textrm{BondGraph} \to \textrm{LagRel}_k^\circ

and

G\colon \textrm{BondGraph} \to \textrm{FinCorel}^\circ

The functor F associates effort and flow to a bond graph, while the functor G lets us associate potential and current to a bond graph using the previous work done on \textrm{FinCorel}. Then the Lagrangian subspace relating effort, flow, potential, and current:

\{(V,I,\phi_1,I_1,\phi_2,I_2) | V = \phi_2-\phi_1, I = I_1 = -I_2\}

defines a natural transformation in the following diagram:

Putting this together with the diagram we saw before, Brandon gets a giant diagram which encompasses the relationships between circuits, signal-flow diagrams, bond graphs, and their behaviors in category theoretic terms:

This diagram is a nice quick road map of his thesis. Of course, you need to understand all the categories in this diagram, all the functors, and also their applications to engineering, to fully appreciate what he has accomplished! But his thesis explains that.

To learn more

Coya’s thesis has lots of references, but if you want to see diagrams at work in actual engineering, here are some good textbooks on bond graphs:

• D. C. Karnopp, D. L. Margolis and R. C. Rosenberg, System Dynamics: A Unified Approach, Wiley, New York, 1990.

• F. T. Brown, Engineering System Dynamics: A Unified Graph-Centered Approach, Taylor and Francis, New York, 2007.

and here’s a good one on signal-flow diagrams:

• B. Friedland, Control System Design: An Introduction to State-Space Methods, S. W. Director (ed.), McGraw–Hill Higher Education, 1985.


RChain

12 May, 2018

guest post by Christian Williams

Mike Stay has been doing some really cool stuff since earning his doctorate. He’s been collaborating with Greg Meredith, who studied the π-calculus with Abramsky, and then conducted impactful research and design in the software industry before some big ideas led him into the new frontier of decentralization. They and a great team are developing RChain, a distributed computing infrastructure based on the reflective higher-order π-calculus, the ρ-calculus.

They’ve made significant progress in the first year, and on April 17-18 they held the RChain Developer Conference in Boulder, Colorado. Just five months ago, the first conference drew only a handful of people; this one drew well over a hundred. Programmers, venture capitalists, blockchain enthusiasts, experts in software, finance, and mathematics: myriad perspectives from around the globe came to join in the dawn of a new internet. Let’s just say, it’s a lot to take in. This project is the real deal – the idea is revolutionary, the language is powerful, the architecture is elegant; the ambition is immense and skilled developers are actually bringing it to reality. There’s no need for hype: you’re gonna be hearing about RChain.

Documentation, GitHub, Architecture

Here’s something from the documentation:

The open-source RChain project is building a decentralized, economic, censorship-resistant, public compute infrastructure and blockchain. It will host and execute programs popularly referred to as “smart contracts”. It will be trustworthy, scalable, concurrent, with proof-of-stake consensus and content delivery.

The decentralization movement is ambitious and will provide awesome opportunities for new social and economic interactions. Decentralization also provides a counterbalance to abuses and corruption that occasionally occur in large organizations where power is concentrated. Decentralization supports self-determination and the rights of individuals to self-organize. Of course, the realities of a more decentralized world will also have its challenges and issues, such as how the needs of international law, public good, and compassion will be honored.

We admire the awesome innovations of Bitcoin, Ethereum, and other platforms that have dramatically advanced the state of decentralized systems and ushered in this new age of cryptocurrency and smart contracts. However, we also see symptoms that those projects did not use the best engineering and formal models for scaling and correctness in order to support mission-critical solutions. The ongoing debates about scaling and reliability are symptomatic of foundational architectural issues. For example, is it a scalable design to insist on an explicit serialized processing order for all of a blockchain’s transactions conducted on planet earth?

To become a blockchain solution with industrial-scale utility, RChain must provide content delivery at the scale of Facebook and support transactions at the speed of Visa. After due diligence on the current state of many blockchain projects, after deep collaboration with other blockchain developers, and after understanding their respective roadmaps, we concluded that the current and near-term blockchain architectures cannot meet these requirements. In mid-2016, we resolved to build a better blockchain architecture.

Together with the blockchain industry, we are still at the dawn of this decentralized movement. Now is the time to lay down a solid architectural foundation. The journey ahead for those who share this ambitious vision is as challenging as it is worthwhile, and this document summarizes that vision and how we seek to accomplish it.

We began by admitting the following minimal requirements:

  • Dynamic, responsive, and provably correct smart contracts.
  • Concurrent execution of independent smart contracts.
  • Data separation to reduce unnecessary data replication of otherwise independent tokens and smart contracts.
  • Dynamic and responsive node-to-node communication.
  • Computationally non-intensive consensus/validation protocol.

Building quality software is challenging. It is easier to build “clever” software; however, the resulting software is often of poor quality, riddled with bugs, difficult to maintain, and difficult to evolve. Inheriting and working on such software can be hellish for development teams, not to mention their customers. When building an open-source system to support a mission-critical economy, we reject a minimal-success mindset in favor of end-to-end correctness.

To accomplish the requirements above, our design approach is committed to:

  • A computational model that assumes fine-grained concurrency and dynamic network topology.
  • A composable and dynamic resource addressing scheme.
  • The functional programming paradigm, as it more naturally accommodates distributed and parallel processing.
  • Formally verified, correct-by-construction protocols which leverage model checking and theorem proving.
  • The principles of intension and compositionality.

RChain is light years ahead of the industry. Why? It is upholding the principle of correct by construction with the depth and rigor of mathematics. For years, Mike and Greg have been developing original ideas for distributed computation: in particular, logic as a distributive law is an “algorithm for deriving a spatial-behavioral type system from a formal presentation of a computational calculus.” This is a powerful way to integrate operational semantics into a language, and prove soundness with a single natural transformation; it also provides an extremely expressive query language, with which you could search the entire world to find “code that does x”. Mike’s strong background in higher category theory has enabled the formalization of Greg’s ideas, which he has developed over decades of thinking deeply and comprehensively about the world of computing. Of all of these, there is one concept which is the heart and pulse of RChain, which unifies the system as a rational whole: the ρ-calculus.

So what’s the big idea? First, some history: back in the late 80s, Greg developed a classification of computational models called “the 4 C’s”:

completeness,
compositionality,
(a good notion of)
complexity, and
concurrency.

He found that there was none which had all four, and predicted the existence of one. Just a few years later, Milner invented the π-calculus, and since then it has reigned as the natural language of network computing. It presents a totally different way of thinking: instead of representing sequential instructions for a single machine, the π-calculus is fundamentally concurrent—processes or agents communicate over names or channels, and computation occurs through the interaction of processes. The language is simple yet remarkably powerful; it is deeply connected with game semantics and linear logic, and has become an essential tool in systems engineering and biocomputing: see mobile process calculi for programming the blockchain.


Here is the basic syntax. The variables x,y are names, and P,Q are processes:

P,Q := 0 | (νx)P | x?(y).P | x!(y).P | P|Q

(do nothing | create new x; run P | receive on x and bind to y; run P | send value y on x; run P | run P and Q in parallel)

The computational engine, the basic reduction analogous to beta-reduction of lambda calculus, is the communication rule:

COMM : x!(y).P|x?(z).Q → P|Q[y/z]

(given parallel output and input processes along the same channel, the value is transferred from the output to the input, and is substituted for all occurrences of the input variable in the continuation process)
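
For example, writing a, b, c, d for names:

a!(b).P | a?(c).c!(d).0 → P | b!(d).0

The name b is sent along the channel a, received and bound to c, and substituted for c in the receiver’s continuation, which then goes on to send d along b.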

The definition of a process calculus must specify structural congruences: these express the equivalences between processes—for example, ({P},|,0) forms a commutative monoid.


The π-calculus reforms computation, on the most basic level, to be a cooperative activity. Why is this important? To have a permanently free internet, we have to be able to support it without reliance on centralized powers. This is one of the simplest points, but there are many deeper reasons which I am not yet knowledgeable enough to express. It’s all about the philosophy of openness which is characteristic of applied category theory: historically, we have developed theories and practices which are isolated from each other and the world, and had to fabricate their interrelation and cooperation ad hoc; this leaves us needlessly struggling with inadequate systems, and limits our thought and action.

Surely there must be a less primitive way of making big changes in the store than by pushing vast numbers of words back and forth through the von Neumann bottleneck. Not only is this tube a literal bottleneck for the data traffic of a problem, but, more importantly, it is an intellectual bottleneck that has kept us tied to word-at-a-time thinking instead of encouraging us to think in terms of the larger conceptual units of the task at hand. Thus programming is basically planning and detailing the enormous traffic of words through the von Neumann bottleneck, and much of that traffic concerns not significant data itself, but where to find it. — John Backus, 1977 ACM Turing Award

There have been various mitigations of these kinds of problems, but the cognitive limitation remains, and a total renewal is necessary; the π-calculus completely reimagines the nature of computation in society, and opens vast possibility. We can begin to conceive of the many interwoven strata of computation as a coherent, fluid whole. While I was out of my depth in many talks during the conference, I began to appreciate that this was a truly comprehensive innovation: RChain reforms almost every aspect of computing, from the highest abstraction all the way down to the metal. Coalescing the architecture, as mentioned earlier, is the formal calculus as the core guiding principle. There was some concern that because the ρ-calculus is so different from traditional languages, there may be resistance to adoption; but our era is a paradigm shift, the call is for a new way of thinking, and we must adapt.

So why are we using the reflective higher-order π-calculus, the ρ-calculus? Because there’s just one problem with the conventional π-calculus: it presupposes a countably infinite collection of atomic names. These are not only problematic to generate and manage, but the absence of structure is a massive waste. In this regard, the π-calculus was incomplete, until Greg realized that you can “close the loop” with reflection, a powerful form of self-reference:

Code ←→ Data

The mantra is that names are quoted processes; this idea pervades and guides the design of RChain. There is no need to import infinitely many opaque, meaningless symbols—the code itself is nothing but clear, meaningful syntax. If there is an intrinsic method of reference and dereference, or “quoting and unquoting”, code can be turned into data, sent as a message, and then turned back into code; known as “code mobility”, one can communicate big programs as easily as emails. This allows for metaprogramming: on an industrial level, not only people write programs—programs write programs. This is essential to creating a robust virtual infrastructure.

So, how can the π-calculus be made reflective? By solving for the least fixed point of a recursive equation, which parametrizes processes by names:

P[x] = 0 | x?(x).P[x] | x!(P[x]) | P[x]|P[x] | @P[x] | *x

RP = P[RP]

This is reminiscent of how the Y combinator enables recursion by giving the fixed point of any function, Yf = f(Yf). The last two terms of the syntax are reference and dereference, which turn code into data and data into code. Notice that we did not include a continuation for output: the ρ-calculus is asynchronous, meaning that the sender does not get confirmation that the message has been received; this is important for efficient parallel computation and corresponds to polarised linear logic. We adopt the convention that names are output and processes are input. The last two modifications to complete the official ρ-calculus syntax are multi-input and pattern-matching:


P,Q := 0                               null process
     | for(p1←x1,…,pn←xn).P            input guarded process
     | x!(@Q)                          output a name
     | *x                              dereference, evaluate code
     | P|Q                             parallel composition

x,p := @P                              name or quoted process

(each ‘pi’ is a “pattern” or formula to collect terms on channel ‘xi’—this is extremely useful and general, and enables powerful functionality throughout the system)


Simple. Of course, this is not really a programming language yet, though it is more usable than the pure λ-calculus. Rholang, the actual language of RChain, adds some essential features:

ρ-calculus + variables + useful ground terms + new name construction + arithmetic + pattern matching = Rholang

Here’s the specification, syntax and semantics, and a visualization; explore code and examples in GitHub and learn the contract design in the documentation—you can even try coding on rchain.cloud! For those who don’t like clicking all these links, let’s see just one concrete example of a contract, the basic program in Rholang: a process with persistent state, associated code, and associated addresses. This is a Cell, which stores a value until it is accessed or updated:


contract Cell( get, set, state ) = {
  select {
    case rtn <- get; v <- state => {
      rtn!( *v ) | state!( *v ) | Cell( get, set, state )
    }
    case newValue <- set; v <- state => {
      state!( *newValue ) | Cell( get, set, state )
    }
  }
}


The parameters are the channels on which the contract communicates. Cell selects from two possibilities: either it is being accessed, i.e. there is data (the return channel) to receive on get, in which case it outputs on rtn, re-sends its state, and calls itself again; or it is being updated, i.e. there is data (the new value) to receive on set, in which case it stores the new value as its state and calls itself again. This shows how the ontology of the language enables natural recursion, and thereby persistent storage: state is Cell’s way of “talking to itself”—since the sequential aspect of Rholang is functional, one “cycles” data to persist. The storage layer uses a similar idea; the semantics may be related to traced monoidal categories.

Curiously, the categorical semantics of the ρ-calculus has proven elusive. There is the general ideology that λ:sequential :: π:concurrent, that the latter is truly fundamental, but the Curry-Howard-Lambek isomorphism has not yet been generalized canonically—though there has been partial success, involving functor-category denotational semantics, linear logic, and session types. Despite its great power and universality, the ρ-calculus remains a bit of a mystery in mathematics: this fact should intrigue anyone who cares about logic, types, and categories as the foundations of abstract thought.

Now, the actual system—the architecture consists of five interwoven layers (all better explained in the documentation):


Storage: based on Special K – “a pattern language for the web.” This layer stores both key-value pairs and continuations through an essential duality of database and query—if you don’t find what you’re looking for, leave what you would have done in its place, and when it arrives, the process will continue. Greg characterizes this idea as the computational equivalent of the law of excluded middle.

[channel, pattern, data, continuation]

RChain has a refined, multidimensional view of resources – compute, memory, storage, and network—and accounts for their production and consumption linearly.

Execution: a Rho Virtual Machine instance is a context for ρ-calculus reduction of storage elements. The entire state of the blockchain is one big Rholang term, which is updated by a transaction: a receive invokes a process which changes key values, and the difference must be verified by consensus. Keys permanently maintain the entire history of state transitions. While currently based on the Java VM, it will be natively hosted.

Namespace: a set of channels, i.e. resources, organized by a logic for access and authority. The primary significance is scalability – a user does not need to deal with the whole chain, only pertinent namespaces. ‘A namespace definition may control the interactions that occur in the space, for example, by specifying: accepted addresses, namespaces, or behavioral types; maximum or minimum data size; or input-output structure.’ These handle nondeterminism of the two basic “race conditions”, contention for resources:

x!(@Q1) | for(ptrn <- x){P} | x!(@Q2)

for(ptrn <- x){P1} | x!(@Q) | for(ptrn <- x){P2}

Contrasted with the flat public keys of other blockchains, domains work with DNS and extend it by a compositional tree structure. Each node as a named channel is itself a namespace, and hence definitions can be built up inductively, with precise control.

Consensus: verify partial orders of changes to the one-big-Rholang-term state; the block structure should persist as a directed acyclic graph. The algorithm is Proof of Stake – the capacity to validate in a namespace is tied to the “stake” one holds in it. Greg explains via tangles, and how the complex CASPER protocol works naturally with RChain.

Contracts: ‘An RChain contract is a well-specified, well-behaved, and formally verified program that interacts with other such programs.’ (K Framework) ; (DAO attack) ‘A behavioral type is a property of an object that binds it to a discrete range of action patterns. Behavioral types constrain not only the structure of input and output, but the permitted order of inputs and outputs among communicating and (possibly) concurrent processes under varying conditions… The Rholang behavioral type system will iteratively decorate terms with modal logical operators, which are propositions about the behavior of those terms. Ultimately properties [such as] data information flow, resource access, will be concretized in a type system that can be checked at compile-time. The behavioral type systems Rholang will support make it possible to evaluate collections of contracts against how their code is shaped and how it behaves. As such, Rholang contracts elevate semantics to a type-level vantage point, where we are able to scope how entire protocols can safely interface.’ (LADL)

So what can you build on RChain? Anything.

Decentralized applications: identity, reputation, tokens, timestamping, financial services, content delivery, exchanges, social networks, marketplaces, (decentralized autonomous) organizations, games, oracles, (Ethereum dApps), … new forms of code yet to be imagined. It’s much more than a better internet: RChain is a potential abstract foundation for a rational global society. The system is a minimalist framework of universally principled design; it is a canvas with which we can begin to explore how the world should really work. If we are open and thoughtful, if we care enough, we can learn to do things right.

The project is remarkably unknown for its magnitude, and building widespread adoption may be one of RChain’s greatest challenges. Granted, it is new; but education will not be easy. It’s too big a reformation for a public mindset which thinks of (technological) progress as incrementally better specs or added features; this is conceptual progression, an alien notion to many. That’s why the small but growing community is vital. This article is nothing; I’m clearly unqualified—click links, read papers, watch videos.  The scale of ambition, the depth of insight, the lucidity of design, the unity of theory and practice—it’s something to behold. And it’s real. Mercury will be complete in December. It’s happening, right now, and you can be a part of it. Learn, spread the word. Get involved; join the discussion or even the development—the RChain website has all the resources you need.

40% of the world population lives within 100km of the ocean. Greg pointed out that if we can’t even handle today’s refugee crises, what will possibly happen when the waters rise? At the very least, we desperately need better large-scale coordination systems. Will we make it to the next millennium? We can—just a matter of will.

Thank you for reading. You are great.


Applied Category Theory Course: Resource Theories

12 May, 2018

 

My course on applied category theory is continuing! After a two-week break where the students did exercises, I’m back to lecturing about Fong and Spivak’s book Seven Sketches. Now we’re talking about “resource theories”. Resource theories help us answer questions like this:

  1. Given what I have, is it possible to get what I want?
  2. Given what I have, how much will it cost to get what I want?
  3. Given what I have, how long will it take to get what I want?
  4. Given what I have, what is the set of ways to get what I want?

Resource theories in their modern form were arguably born in these papers:

• Bob Coecke, Tobias Fritz and Robert W. Spekkens, A mathematical theory of resources.

• Tobias Fritz, Resource convertibility and ordered commutative monoids.

We are lucky to have Tobias in our course, helping the discussions along! He’s already posted some articles on resource theory here on this blog:

• Tobias Fritz, Resource convertibility (part 1), Azimuth, 7 April 2015.

• Tobias Fritz, Resource convertibility (part 2), Azimuth, 10 April 2015.

• Tobias Fritz, Resource convertibility (part 3), Azimuth, 13 April 2015.

We’re having fun bouncing between the relatively abstract world of monoidal preorders and their very concrete real-world applications to chemistry, scheduling, manufacturing and other topics. Here are the lectures so far:

Lecture 18 – Chapter 2: Resource Theories
Lecture 19 – Chapter 2: Chemistry and Scheduling
Lecture 20 – Chapter 2: Manufacturing
Lecture 21 – Chapter 2: Monoidal Preorders
Lecture 22 – Chapter 2: Symmetric Monoidal Preorders
Lecture 23 – Chapter 2: Commutative Monoidal Posets
Lecture 24 – Chapter 2: Pricing Resources
Lecture 25 – Chapter 2: Reaction Networks
Lecture 26 – Chapter 2: Monoidal Monotones
Lecture 27 – Chapter 2: Adjoints of Monoidal Monotones
Lecture 28 – Chapter 2: Ignoring Externalities

 


Effective Thermodynamics for a Marginal Observer

8 May, 2018

guest post by Matteo Polettini

Suppose you receive an email from someone who claims “here is the project of a machine that runs forever and ever and produces energy for free!” Obviously he must be a crackpot. But he may be well-intentioned. You opt for not being rude, roll up your sleeves, and put your hands into the dirt, holding the Second Law as your lodestar.

Keep in mind that there are two fundamental sources of error: either he is not considering certain input currents (“hey, what about that tiny hidden cable entering your machine from the electrical power line?!”, “uh, ah, that’s just to power the “ON” LED”, “mmmhh, you sure?”), or else he is not measuring the energy input correctly (“hey, why are you using a Geiger counter to measure input voltages?!”, “well, sir, I ran out of voltmeters…”).

In other words, the observer might only have partial information about the setup, either in quantity or quality. Because he has been marginalized by society (most crackpots believe they are misunderstood geniuses) we will call such an observer “marginal,” which incidentally is also the word that mathematicians use when they focus on the probability of a subset of stochastic variables.

In fact, our modern understanding of thermodynamics as embodied in statistical mechanics and stochastic processes is founded (and funded) on ignorance: we never really have “complete” information. If we actually did, all energy would look alike, it would not come in “more refined” and “less refined” forms, there would be no differentials of order/disorder (using Paul Valéry’s beautiful words), and that would end thermodynamic reasoning, the energy problem, and generous research grants altogether.

Even worse, within this statistical approach we might be missing chunks of information because some parts of the system are invisible to us. But then, what warrants that we are doing things right, and he (our correspondent) is the crackpot? Couldn’t it be the other way around? Here I would like to present some recent ideas I’ve been working on together with some collaborators on how to deal with incomplete information about the sources of dissipation of a thermodynamic system. I will do this in a quite theoretical manner, but somehow I will mimic the guidelines suggested above for debunking crackpots. My three buzzwords will be: marginal, effective, and operational.

“Complete” thermodynamics: an out-of-the-box view

The laws of thermodynamics that I address are:

• The good ol’ Second Law (2nd)

• The Fluctuation-Dissipation Relation (FDR), and the Reciprocal Relation (RR) close to equilibrium.

• The more recent Fluctuation Relation (FR)1 and its corollary the Integral Fluctuation Relation (IFR), which have been discussed on this blog in a remarkable post by Matteo Smerlak.

The list above is all in the “area of the second law”. How about the other laws? Well, thermodynamics has for long been a phenomenological science, a patchwork. So-called stochastic thermodynamics is trying to put some order in it by systematically grounding thermodynamic claims in (mostly Markov) stochastic processes. But it’s not an easy task, because the different laws of thermodynamics live in somewhat different conceptual planes. And it’s not even clear if they are theorems, prescriptions, or habits (a bit like in jurisprudence2).

Within stochastic thermodynamics, the Zeroth Law is so easy nobody cares to formulate it (I do, so stay tuned…). The Third Law: no idea, let me know. As regards the First Law (or, better, “laws”, as many as there are conserved quantities across the system/environment interface…), we will assume that all related symmetries have been exploited from the outset to boil down the description to a minimum.


This minimum is as follows. We identify a system that is well separated from its environment. The system evolves in time, the environment is so large that its state does not evolve within the timescales of the system3. When tracing out the environment from the description, an uncertainty falls upon the system’s evolution. We assume the system’s dynamics to be described by a stochastic Markovian process.

How exactly the system evolves and what is the relationship between system and environment will be described in more detail below. Here let us take an “out of the box” view. We resolve the environment into several reservoirs labeled by index \alpha. Each of these reservoirs is “at equilibrium” on its own (whatever that means4). Now, the idea is that each reservoir tries to impose “its own equilibrium” on the system, and that their competition leads to a flow of currents across the system/environment interface. Each time an amount of the reservoir’s resource crosses the interface, a “thermodynamic cost” has to be paid or gained (be it a chemical potential difference for a molecule to go through a membrane, or a temperature gradient for photons to be emitted/absorbed, etc.).

The fundamental quantities of stochastic thermodynamic modeling thus are:

• On the “-dynamic” side: the time-integrated currents \Phi^t_\alpha, independent among themselves5. Currents are stochastic variables distributed with joint probability density

P(\{\Phi_\alpha\}_\alpha)

• On the “thermo-” side: The so-called thermodynamic forces or “affinities”6 \mathcal{A}_\alpha (collectively denoted \mathcal{A}). These are tunable parameters that characterize reservoir-to-reservoir gradients, and they are not stochastic. For convenience, we conventionally take them all positive.

Dissipation is quantified by the entropy production:

\sum \mathcal{A}_\alpha \Phi^t_\alpha

We are finally in the position to state the main results. Be warned that in the following expressions the exact treatment of time and its scaling would require a lot of specifications, but keep in mind that all these relations hold true in the long-time limit, and that all cumulants scale linearly with time.

FR: The probability of observing positive currents is exponentially favoured with respect to negative currents according to

P(\{\Phi_\alpha\}_\alpha) / P(\{-\Phi_\alpha\}_\alpha) = \exp \sum \mathcal{A}_\alpha \Phi^t_\alpha

Comment: This is not trivial, it follows from the explicit expression of the path integral, see below.

IFR: The average of the exponential of minus the entropy production is unity

\big\langle  \exp - \sum \mathcal{A}_\alpha \Phi^t_\alpha  \big\rangle_{\mathcal{A}} =1

Homework: Derive this relation from the FR in one line.

2nd Law: The average entropy production is not negative

\sum \mathcal{A}_\alpha \left\langle \Phi^t_\alpha \right\rangle_{\mathcal{A}} \geq 0

Homework: Derive this relation using Jensen’s inequality.

Equilibrium: Average currents vanish if and only if affinities vanish:

\left\langle \Phi^t_\alpha \right\rangle_{\mathcal{A}} \equiv 0, \forall \alpha \iff  \mathcal{A}_\alpha \equiv 0, \forall \alpha

Homework: Derive this relation taking the first derivative w.r.t. {\mathcal{A}_\alpha} of the IFR. Notice that also the average depends on the affinities.

S-FDR: At equilibrium, it is impossible to tell whether a current is due to a spontaneous fluctuation (quantified by its variance) or to an external perturbation (quantified by the response of its mean). In a symmetrized (S-) version:

\left.  \frac{\partial}{\partial \mathcal{A}_\alpha}\left\langle \Phi^t_{\alpha'} \right\rangle \right|_{0} + \left.  \frac{\partial}{\partial \mathcal{A}_{\alpha'}}\left\langle \Phi^t_{\alpha} \right\rangle \right|_{0} = \left. \left\langle \Phi^t_{\alpha} \Phi^t_{\alpha'} \right\rangle \right|_{0}

Homework: Derive this relation taking the mixed second derivatives w.r.t. {\mathcal{A}_\alpha} of the IFR.

RR: The reciprocal response of two different currents to a perturbation of the reciprocal affinities close to equilibrium is symmetrical:

\left.  \frac{\partial}{\partial \mathcal{A}_\alpha}\left\langle \Phi^t_{\alpha'} \right\rangle \right|_{0} - \left.  \frac{\partial}{\partial \mathcal{A}_{\alpha'}}\left\langle \Phi^t_{\alpha} \right\rangle \right|_{0} = 0

Homework: Derive this relation taking the mixed second derivatives w.r.t. {\mathcal{A}_\alpha} of the FR.

Notice the implication scheme: FR ⇒ IFR ⇒ 2nd, IFR ⇒ S-FDR, FR ⇒ RR.

“Marginal” thermodynamics (still out-of-the-box)

Now we assume that we can only measure a marginal subset of currents \{\Phi_\mu^t\}_\mu \subset \{\Phi_\alpha^t\}_\alpha (index \mu always has a smaller range than \alpha), distributed with joint marginal probability

P(\{\Phi_\mu\}_\mu) = \int \prod_{\alpha \neq \mu} d\Phi_\alpha \, P(\{\Phi_\alpha\}_\alpha)


Notice that a state where these marginal currents vanish might not be an equilibrium, because other currents might still be whirling around. We call this a stalling state.

\mathrm{stalling:} \qquad \langle \Phi_\mu \rangle \equiv 0,  \quad \forall \mu

My central question is: can we associate to these currents some effective affinity \mathcal{Q}_\mu in such a way that at least some of the results above still hold true? And, are all definitions involved just a fancy mathematical construct, or are they operational?

First the bad news: In general the FR is violated for all choices of effective affinities:

P(\{\Phi_\mu\}_\mu) / P(\{-\Phi_\mu\}_\mu) \neq \exp \sum \mathcal{Q}_\mu \Phi^t_\mu

This is not surprising and nobody would expect that. How about the IFR?

Marginal IFR: There are effective affinities such that

\left\langle \exp - \sum \mathcal{Q}_\mu \Phi^t_\mu \right\rangle_{\mathcal{A}} =1

Mmmhh. Yeah. Take a closer look at this expression: can you see why there actually exists an infinite choice of “effective affinities” that would make that average cross 1? Which on the other hand is just a number, so who even cares? So this can’t be the point.

The fact is, the IFR per se is hardly of any practical interest, as are all “absolutes” in physics. What matters is “relatives”: in our case, response. But then we need to specify how the effective affinities depend on the “real” affinities. And here steps in a crucial technicality, whose precise argumentation is a pain. Based on reasonable assumptions7, we demonstrate that the IFR holds for the following choice of effective affinities:

\mathcal{Q}_\mu = \mathcal{A}_\mu - \mathcal{A}^{\mathrm{stalling}}_\mu,

where \mathcal{A}^{\mathrm{stalling}} is the set of values of the affinities that make marginal currents stall. Notice that this latter formula gives an operational definition of the effective affinities that could in principle be reproduced in laboratory (just go out there and tune the tunable until everything stalls, and measure the difference). Obviously:

Stalling: Marginal currents vanish if and only if effective affinities vanish:

\left\langle \Phi^t_\mu \right\rangle_{\mathcal{A}} \equiv 0, \forall \mu \iff \mathcal{Q}_\mu \equiv 0, \forall \mu

Now, according to the inference scheme illustrated above, we can also prove that:

Effective 2nd Law: The average marginal entropy production is not negative

\sum \mathcal{Q}_\mu \left\langle \Phi^t_\mu \right\rangle_{\mathcal{A}} \geq 0

S-FDR at stalling:

\left. \frac{\partial}{\partial \mathcal{A}_\mu}\left\langle \Phi^t_{\mu'} \right\rangle \right|_{\mathcal{A}^{\mathrm{stalling}}} + \left. \frac{\partial}{\partial \mathcal{A}_{\mu'}}\left\langle \Phi^t_{\mu} \right\rangle \right|_{\mathcal{A}^{\mathrm{stalling}}} = \left. \left\langle \Phi^t_{\mu} \Phi^t_{\mu'} \right\rangle \right|_{\mathcal{A}^{\mathrm{stalling}}}

Notice instead that the RR is gone at stalling. This is a clear-cut prediction of the theory that can be tested with basically the same apparatus with which response theory has been studied experimentally so far (not that I actually know what these apparatus are…): at stalling states, unlike at equilibrium states, the S-FDR still holds, but the RR does not.

Into the box

You’ve definitely had enough at this point, and you can stop here. Please exit through the gift shop.

If you’re stubborn, let me tell you what’s inside the box. The system’s dynamics is modeled as a continuous-time, discrete configuration-space Markov “jump” process. The state space can be described by a graph G=(I, E), where I is the set of configurations, E is the set of possible transitions or “edges”, and there is an incidence relation between edges and pairs of configurations. The process is determined by the rates w_{i \gets j} of jumping from one configuration to another.
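
To make this concrete, here is a minimal Python sketch of such a setup (not from the paper): a toy configuration space with made-up jump rates collected into a matrix w[i, j] = w_{i <- j}, from which the edge set and the exit rates can be read off.

```python
import numpy as np

# Toy configuration space I = {0, 1, 2, 3}; w[i, j] = w_{i <- j} is the rate of
# jumping from configuration j to configuration i (all numbers are made up).
w = np.array([
    [0.0, 2.0, 1.0, 0.0],
    [3.0, 0.0, 0.5, 1.0],
    [1.0, 0.5, 0.0, 2.0],
    [0.0, 1.0, 2.0, 0.0],
])
n = len(w)

# Edge set E: unordered pairs {i, j} connected by a nonzero rate in either direction.
edges = [(i, j) for i in range(n) for j in range(i + 1, n)
         if w[i, j] > 0 or w[j, i] > 0]

# Exit rates w_i = sum_k w_{k <- i}: total rate of leaving configuration i.
exit_rates = w.sum(axis=0)

print("edges:", edges)
print("exit rates:", exit_rates)
```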

We choose these processes because they allow some nice network analysis and because the path integral is well defined! A single realization of such a process is a trajectory

\omega^t = (i_0,\tau_0) \to (i_1,\tau_1) \to \ldots \to (i_N,\tau_N)

A “Markovian jumper” waits at some configuration i_n for a time \tau_n distributed with the exponentially decaying probability density w_{i_n} e^{- w_{i_n} \tau_n}, where w_i = \sum_k w_{k \gets i} is the exit rate out of configuration i; it then instantaneously jumps to a new configuration i_{n+1} with transition probability w_{i_{n+1} \gets i_n}/w_{i_n}. The overall probability density of a single trajectory is given by

P(\omega^t) = \delta \left(t - \sum_n \tau_n \right) e^{- w_{i_N}\tau_N} \prod_{n=0}^{N-1} w_{i_{n+1} \gets i_n} e^{- w_{i_n} \tau_n}

One can in principle obtain the probability distribution function of any observable defined along the trajectory by taking the marginal of this measure (though in most cases this is technically impossible). Where does this expression come from? For a formal derivation, see the very beautiful review paper by Weber and Frey; but notice that it is exactly what one would come up with intuitively if one had to simulate the process with the Gillespie algorithm.
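
For the curious, here is a minimal sketch of such a Gillespie-style simulation in Python. The rate matrix, initial configuration, and time horizon are all made up for illustration; the point is only the waiting-time/jump structure just described.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical rate matrix, w[i, j] = w_{i <- j}, with zero diagonal.
w = np.array([
    [0.0, 2.0, 1.0, 0.0],
    [3.0, 0.0, 0.5, 1.0],
    [1.0, 0.5, 0.0, 2.0],
    [0.0, 1.0, 2.0, 0.0],
])
exit_rate = w.sum(axis=0)                  # w_i = sum_k w_{k <- i}

def gillespie(i0, t_max):
    """One trajectory (i_0, tau_0) -> (i_1, tau_1) -> ... up to total time t_max."""
    t, i, trajectory = 0.0, i0, []
    while True:
        tau = rng.exponential(1.0 / exit_rate[i])     # waiting time ~ w_i e^{-w_i tau}
        if t + tau > t_max:
            trajectory.append((i, t_max - t))         # truncated final sojourn
            return trajectory
        trajectory.append((i, tau))
        # jump to configuration k with probability w_{k <- i} / w_i
        i = rng.choice(len(exit_rate), p=w[:, i] / exit_rate[i])
        t += tau

print(gillespie(i0=0, t_max=5.0))
```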

The dynamics of the Markov process can also be described by the probability of being at some configuration i at time t, which evolves via the master equation

\dot{p}_i(t) = \sum_j \left[ w_{i \gets j}\, p_j(t) - w_{j \gets i}\, p_i(t) \right].

We call this probability distribution the system’s state, and we assume that the system relaxes to a uniquely defined steady state p = \lim_{t \to \infty} p(t).
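
Numerically, the steady state can be obtained from the kernel of the generator of the master equation. A small sketch, again with a made-up rate matrix:

```python
import numpy as np

# Hypothetical rate matrix, w[i, j] = w_{i <- j}.
w = np.array([
    [0.0, 2.0, 1.0, 0.0],
    [3.0, 0.0, 0.5, 1.0],
    [1.0, 0.5, 0.0, 2.0],
    [0.0, 1.0, 2.0, 0.0],
])

# Generator L of the master equation, dp/dt = L p:
# off-diagonal L[i, j] = w_{i <- j}, diagonal L[i, i] = -sum_k w_{k <- i}.
L = w - np.diag(w.sum(axis=0))

# The steady state spans the kernel of L: take the eigenvector whose eigenvalue
# is (numerically) zero and normalize it to a probability distribution.
eigvals, eigvecs = np.linalg.eig(L)
p_ss = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
p_ss = p_ss / p_ss.sum()

print("steady state:", p_ss)
```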

A time-integrated current along a single trajectory is a linear combination of the net number of jumps \#^t between configurations in the network:

\Phi^t_\alpha = \sum_{ij} C^{ij}_\alpha \left[ \#^t(i \gets j) - \#^t(j\gets i) \right]

The idea here is that one or several transitions within the system occur because of the “absorption” or the “emission” of some environmental degrees of freedom, each with a different intensity. However, for the moment let us simplify the picture and require that only one transition contributes to a given current, that is, that there exist i_\alpha, j_\alpha such that

C^{ij}_\alpha = \delta^i_{i_\alpha} \delta^j_{j_\alpha}.
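
In code, extracting such a single-edge time-integrated current from a trajectory is just a matter of counting net jumps along that edge. A sketch, with a hand-written toy trajectory of (configuration, waiting time) pairs:

```python
# A trajectory as a list of (configuration, waiting time) pairs, e.g. as produced
# by a Gillespie-style simulation; the one below is written by hand for illustration.
trajectory = [(0, 0.3), (1, 0.1), (2, 0.7), (1, 0.2), (0, 0.4), (1, 0.5)]

def edge_current(trajectory, i_alpha, j_alpha):
    """Net number of jumps i_alpha <- j_alpha minus jumps j_alpha <- i_alpha."""
    net = 0
    for (i_new, _), (i_old, _) in zip(trajectory[1:], trajectory):
        if (i_new, i_old) == (i_alpha, j_alpha):
            net += 1
        elif (i_new, i_old) == (j_alpha, i_alpha):
            net -= 1
    return net

# Current supported on the single edge with (i_alpha, j_alpha) = (1, 0).
print(edge_current(trajectory, i_alpha=1, j_alpha=0))
```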

Now, what does it mean for such a set of currents to be “complete”? Here we get inspiration from Kirchhoff’s Current Law in electrical circuits: the continuity of the trajectory at each configuration of the network implies that after a sufficiently long time, cycle or loop or mesh currents completely describe the steady state. There is a standard procedure to identify a set of cycle currents: take a spanning tree T of the network; then the currents flowing along the edges E\setminus T left out from the spanning tree form a complete set.
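
Here is a sketch of that standard procedure in Python: grow a spanning tree T by breadth-first search and keep the leftover edges E \setminus T as the chords supporting the cycle currents. The graph below is a made-up example.

```python
from collections import deque

# A made-up configuration graph G = (I, E).
nodes = [0, 1, 2, 3]
edges = [(0, 1), (0, 2), (1, 2), (1, 3), (2, 3)]

def spanning_tree(nodes, edges):
    """Grow a spanning tree T by breadth-first search from the first node."""
    adjacency = {i: [] for i in nodes}
    for i, j in edges:
        adjacency[i].append(j)
        adjacency[j].append(i)
    visited, tree = {nodes[0]}, set()
    queue = deque([nodes[0]])
    while queue:
        i = queue.popleft()
        for j in adjacency[i]:
            if j not in visited:
                visited.add(j)
                tree.add(frozenset((i, j)))
                queue.append(j)
    return tree

T = spanning_tree(nodes, edges)
chords = [e for e in edges if frozenset(e) not in T]
# The currents along these chord edges E \ T form a complete set of cycle currents.
print("tree edges:", [tuple(sorted(e)) for e in T])
print("chords:", chords)
```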

The last ingredient you need to know about is the affinities. They can be constructed as follows. Consider the Markov process on the network G' = (I, T) obtained by removing the edges that carry the currents (the chords). Calculate the steady state of its associated master equation, (p^{\mathrm{eq}}_i)_i, which is necessarily an equilibrium (since there cannot be cycle currents in a tree…). Then the affinities are given by

\mathcal{A}_\alpha = \log \, w_{i_\alpha \gets j_\alpha} p^{\mathrm{eq}}_{j_\alpha} / w_{j_\alpha \gets i_\alpha} p^{\mathrm{eq}}_{i_\alpha}.
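
Putting the last two steps together, here is a sketch of how one might compute the affinities numerically: zero out the rates along the chord edges, compute the equilibrium of the resulting tree process, and evaluate the formula above. The rate matrix and the choice of chords are made up; the chords are chosen so that the remaining edges do form a spanning tree.

```python
import numpy as np

# Hypothetical rates w[i, j] = w_{i <- j} on a 4-configuration graph with edges
# (0,1), (0,2), (1,2), (1,3), (2,3).
w = np.array([
    [0.0, 2.0, 1.0, 0.0],
    [3.0, 0.0, 0.5, 1.0],
    [1.0, 0.5, 0.0, 2.0],
    [0.0, 1.0, 2.0, 0.0],
])
# Suppose the chords left out of the chosen spanning tree are these two edges;
# the remaining edges (0,1), (1,2), (2,3) then form the tree T.
chords = [(0, 2), (1, 3)]

# Markov process restricted to the tree: zero out the rates along the chords.
w_tree = w.copy()
for i, j in chords:
    w_tree[i, j] = w_tree[j, i] = 0.0

# Equilibrium of the tree process, from the kernel of its generator.
L_tree = w_tree - np.diag(w_tree.sum(axis=0))
eigvals, eigvecs = np.linalg.eig(L_tree)
p_eq = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
p_eq = p_eq / p_eq.sum()

# Affinity of each chord (i, j): A = log[ w_{i<-j} p_eq_j / (w_{j<-i} p_eq_i) ].
for i, j in chords:
    A = np.log(w[i, j] * p_eq[j] / (w[j, i] * p_eq[i]))
    print(f"affinity of chord ({i}, {j}): {A:.3f}")
```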

Now you have all that is needed to formulate the complete theory and prove the FR.

Homework: (Difficult!) With the above definitions, prove the FR.

How about the marginal theory? To define the effective affinities, take the set E_{\mathrm{mar}} = \{i_\mu j_\mu, \forall \mu\} of edges along which the observable currents run. Notice that its complement, the hidden edge set E_{\mathrm{hid}} = E \setminus E_{\mathrm{mar}} obtained by removing the observable edges, is not in general a spanning tree: there might be cycles that are not accounted for by our observations. However, we can still consider the Markov process on the hidden edge set, calculate its stalling steady state p^{\mathrm{st}}_i, and ta-taaa: the effective affinities are given by

\mathcal{Q}_\mu = \log \, w_{i_\mu \gets j_\mu} p^{\mathrm{st}}_{j_\mu} / w_{j_\mu \gets i_\mu} p^{\mathrm{st}}_{i_\mu}.
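
The numerical recipe is the same as for the complete affinities, except that only the observable edges are removed and the hidden process need not be a tree. A sketch, with a made-up rate matrix and a single hypothetical observable edge:

```python
import numpy as np

# Hypothetical rates w[i, j] = w_{i <- j}; suppose the only observable edge is (0, 1).
w = np.array([
    [0.0, 2.0, 1.0, 0.0],
    [3.0, 0.0, 0.5, 1.0],
    [1.0, 0.5, 0.0, 2.0],
    [0.0, 1.0, 2.0, 0.0],
])
observable_edges = [(0, 1)]

# Hidden process: remove only the observable edges. The remaining graph may still
# contain cycles, so its steady state is a stalling state, not an equilibrium.
w_hidden = w.copy()
for i, j in observable_edges:
    w_hidden[i, j] = w_hidden[j, i] = 0.0

L_hidden = w_hidden - np.diag(w_hidden.sum(axis=0))
eigvals, eigvecs = np.linalg.eig(L_hidden)
p_st = np.real(eigvecs[:, np.argmin(np.abs(eigvals))])
p_st = p_st / p_st.sum()

# Effective affinity of each observable edge (i, j):
# Q = log[ w_{i<-j} p_st_j / (w_{j<-i} p_st_i) ].
for i, j in observable_edges:
    Q = np.log(w[i, j] * p_st[j] / (w[j, i] * p_st[i]))
    print(f"effective affinity of edge ({i}, {j}): {Q:.3f}")
```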

Proving the marginal IFR is far more complicated than proving the complete FR. In fact, very often in my field we do not work with the currents’ probability density itself; we prefer to take its bilateral Laplace transform and work with the currents’ cumulant generating function. There things take a quite different and more elegant look.

Many other questions and possibilities open up now. The most important one left open is: can we generalize the theory to the (physically relevant) case where a current is supported on several edges, for example a current defined as \Phi^t = 5 \Phi^t_{12} + 7 \Phi^t_{34}? Well, it depends: the theory holds provided that the stalling state is not “internally alive”, meaning that if the observable current vanishes on average, then \Phi^t_{12} and \Phi^t_{34} should also vanish separately. This turns out to be a physically meaningful but quite strict condition.

Is all of thermodynamics “effective”?

Let me conclude with some more of those philosophical considerations that sadly I have to leave out of papers…

Stochastic thermodynamics strongly depends on the identification of physical and information-theoretic entropies — something that I did not openly talk about, but that lurks behind the whole construction. Throughout my short experience as a researcher I have been pursuing a program of “relativization” of thermodynamics, making the role of the observer more and more evident and movable. Inspired by Einstein’s Gedankenexperimente, I have also tried to make the theory operational. This program may raise eyebrows here and there: many thermodynamicians embrace a naive materialistic world-view whereby the only things that matter are “real” physical quantities like temperature and pressure, while all the rest of the information-theoretic discourse is at best mathematical speculation or a fascinating analogy with no fundamental bearing. According to some, information as a physical concept lingers alarmingly close to certain extreme postmodern claims in the social sciences that “reality” does not exist unless observed, a position deemed dangerous at times when the authoritativeness of science is threatened by all sorts of anti-scientific waves.

I think, on the contrary, that making concepts relative and effective, and summoning the observer explicitly, is a secular and prudent position that serves as an antidote to radical subjectivity. The other way around—clinging to the objectivity of a preferred observer, which is implied in any materialistic interpretation of thermodynamics, e.g. by assuming that the most fundamental degrees of freedom are the positions and velocities of a gas’s molecules—is the dangerous position, especially when the role of such a preferred observer is passed around from the scientist to the technician and eventually to the technocrat, who would be induced to believe there are simple technological fixes to complex social problems.

How do we reconcile observer-dependency with the laws of physics? The object and the subject? On the one hand, much as the position of an object depends on the reference frame, so too entropy and entropy production depend on the observer and on the particular apparatus he controls or experiment he is involved with. On the other hand, much as motion is ultimately independent of position and is agreed upon by all observers who share compatible measurement protocols, so too the laws of thermodynamics are independent of any particular observer’s quantification of entropy and entropy production (e.g., the effective Second Law holds independently of how much the marginal observer knows about the system, provided he operates according to our phenomenological protocol…). This is the case even in everyday thermodynamics as practiced by energy engineers et al., where there are lots of gauge choices to be made, and there is no external warrant that the amount of dissipation being quantified is the “true” one (whatever that means…)—there can only be trust in one’s own good practices and methodology.

So in this sense, I like to think that all observers are marginal, that this effective theory serves as a dictionary by which different observers practice and communicate thermodynamics, and that we should not revere the laws of thermodynamics as “true” idols, but rather treat them as tools of good scientific practice.

References

• M. Polettini and M. Esposito, Effective fluctuation and response theory, arXiv:1803.03552.

In this work we give the complete theory and numerous references to work of other people that was along the same lines. We employ a “spiral” approach to the presentation of the results, inspired by the pedagogical principle of Albert Baez.

• M. Polettini and M. Esposito, Effective thermodynamics for a marginal observer, Phys. Rev. Lett. 119 (2017), 240601, arXiv:1703.05715.

This is a shorter version of the story.

• B. Altaner, M. Polettini and M. Esposito, Fluctuation-dissipation relations far from equilibrium, Phys. Rev. Lett. 117 (2016), 180601, arXiv:1604.0883.

An early version of the story, containing the FDR results but not the full-fledged FR.

• G. Bisker, M. Polettini, T. R. Gingrich and J. M. Horowitz, Hierarchical bounds on entropy production inferred from partial information, J. Stat. Mech. (2017), 093210, arXiv:1708.06769.

Some extras.

• M. F. Weber and E. Frey, Master equations and the theory of stochastic path integrals, Rep. Progr. Phys. 80 (2017), 046601, arXiv:1609.02849.

Great reference if one wishes to learn about path integrals for master equation systems.

Footnotes

1 There are as many so-called “Fluctuation Theorems” as there are authors working on them, so I decided not to call them by any name. Furthermore, notice I prefer to distinguish between a relation (a formula) and a theorem (a line of reasoning). I lingered more on this here.

2 “Just so you know, nobody knows what energy is.”—Richard Feynman.

I cannot help but mention here the beautiful book by Shapin and Schaffer, Leviathan and the Air-Pump, about the Boyle vs. Hobbes controversy over what constitutes a “matter of fact,” and Bruno Latour’s interpretation of it in We Have Never Been Modern. Latour argues that “modernity” is a process of separation of the human and natural spheres, and within each of these spheres a process of purification of the unit facts of knowledge and the unit facts of politics, of the object and the subject. At the same time we live in a world where these two spheres are never truly separated, a world of “hybrids” that are at the same time necessary “for all practical purposes” and inconceivable according to the myths that sustain the narration of science, of the State, and even of religion. In fact, despite these myths, we cannot conceive of a scientific fact outside the contextual “network” where that fact is produced and replicated, nor can we conceive of society apart from the material needs that shape it: so in this sense “we have never been modern”, and we are not so different from all those societies that we take pleasure in studying with the tools of anthropology. Within the scientific community Latour is widely despised; probably he is also misread. While it is really difficult to see how his analysis applies to, say, high-energy physics, I find that thermodynamics and its ties to the industrial revolution perfectly embody this tension between the natural and the artificial, the matter of fact and the matter of concern. Such great thinkers as Einstein and Ehrenfest thought of the Second Law as the only physical law that would never be replaced, and I believe this is revelatory. A second thought on the Second Law, a systematic and precise definition of all its terms and circumstances, reveals that the only formulations that make sense are phenomenological statements such as Kelvin–Planck’s or similar, which require a lot of contingent definitions regarding the operation of the engine, while fetishized and universal statements (such as that masterwork of confusion, “the entropy of the Universe cannot decrease”) are nonsensical. In this respect, it is neither a purely natural law—as the moderns argue—nor a purely social construct—as the postmoderns argue. One simply has to renounce operating this separation. While I do not have a definite answer to this problem, I like to think of the Second Law as a practice, a consistency check of the thermodynamic discourse.

3 This assumption really belongs to a time, the XIXth century, when resources were virtually infinite on planet Earth…

4 As we will see shortly, we define equilibrium as that state where there are no currents at the interface between the system and the environment, so what is the environment’s own definition of equilibrium?!

5 This because we have already exploited the First Law.

6 This nomenclature comes from alchemy, via chemistry (think of Goethe’s The Elective Affinities…); it propagated in the XXth century via De Donder and Prigogine, and it is still present in the language in Luxembourg because in some way we come from the “late Brussels school”.

7 Basically, we ask that the tunable parameters are environmental properties, such as temperatures, chemical potentials, etc. and not internal properties, such as the energy landscape or the activation barriers between configurations.


Compositionality

6 May, 2018

A new journal! We’ve been working on it for a long time, but we finished sorting out some details at ACT2018, and now we’re ready to tell the world!

It’s free to read, free to publish in, and it’s about building big things from smaller parts. Here’s the top of the journal’s home page right now:

Here’s the official announcement:

We are pleased to announce the launch of Compositionality, a new diamond open-access journal for research using compositional ideas, most notably of a category-theoretic origin, in any discipline. Topics may concern foundational structures, an organizing principle, or a powerful tool. Example areas include but are not limited to: computation, logic, physics, chemistry, engineering, linguistics, and cognition. To learn more about the scope and editorial policies of the journal, please visit our website at http://www.compositionality-journal.org.

Compositionality is the culmination of a long-running discussion by many members of the extended category theory community, and the editorial policies, look, and mission of the journal have yet to be finalized. We would love to get your feedback about our ideas on the forum we have established for this purpose:

http://reddit.com/r/compositionality

Lastly, the journal is currently receiving applications to serve on the editorial board; submissions are due May 31 and will be evaluated by the members of our steering board: John Baez, Bob Coecke, Kathryn Hess, Steve Lack, and Valeria de Paiva.

https://tinyurl.com/call-for-editors

We will announce a call for submissions in mid-June.

We’re looking forward to your ideas and submissions!

Best regards,

Brendan Fong, Nina Otter, and Joshua Tan

http://www.compositionality-journal.org/


Symposium on Compositional Structures

4 May, 2018

As I type this, sitting in a lecture hall at the Lorentz Center, Jamie Vicary, University of Birmingham and University of Oxford, is announcing a new series of meetings:

Symposium on Compositional Structures.

The website, which will probably change, currently says this:

Symposium on Compositional Structures (SYCO)

The Symposium on Compositional Structures is a new interdisciplinary meeting aiming to support the growing community of researchers interested in the phenomenon of compositionality, from both applied and abstract perspectives, and in particular where category theory serves as a unifying common language.

We welcome submissions from researchers across computer science, mathematics, physics, philosophy, and beyond, with the aim of fostering discussion, disseminating new ideas, and spreading knowledge of open problems between fields. Submission is encouraged for both mature research and work in progress, and by both established academics and junior researchers, including students. The meeting does not have proceedings.

While no list of topics could be exhaustive, SYCO welcomes submissions with a compositional focus related to any of the following areas, in particular from the perspective of category theory:

logical methods in computer science, including quantum and classical programming, concurrency, natural language processing and machine learning;

graphical calculi, including string diagrams, Petri nets and reaction networks;

languages and frameworks, including process algebras, proof nets, type theory and game semantics;

abstract algebra and pure category theory, including monoidal category theory, higher category theory, operads, polygraphs, and relationships to homotopy theory;

quantum algebra, including quantum computation and representation theory;

tools and techniques, including rewriting, formal proofs and proof assistants;

industrial applications, including case studies and real-world problem descriptions.

Meetings

Meetings will involve both invited and contributed talks. The first meeting is planned for Autumn 2018, with more details to follow soon.

Funding

Some funding may be available to support travel and subsistence, especially for junior researchers who are speaking at the meeting.

Steering committee

The symposium is managed by the following people:

Ross Duncan, University of Strathclyde.
Chris Heunen, University of Edinburgh.
Aleks Kissinger, Radboud University Nijmegen.
Samuel Mimram, École Polytechnique.
Mehrnoosh Sadrzadeh, Queen Mary.
Pawel Sobocinski, University of Southampton.
Jamie Vicary, University of Birmingham and University of Oxford.


Thermodynamics of Computation

4 May, 2018

David Wolpert of the Santa Fe Institute has set up a website on the thermodynamics of computation:

Thermodynamics of Computation Wiki.

Here’s the idea:

This website is the result of a successful meeting at SFI which brought together researchers from diverse disciplines including biology, computer science, physics, bioinformatics, and chemistry to discuss overlapping interests in thermodynamics and computation.

The thermodynamic restrictions on all systems that perform computation provide major challenges to modern design of computers. For example, at present ~5% of US energy consumption is used to run computers. Similarly, ~50% of the lifetime budget of a modern high-performance computing center is to pay the energy bill. As another example, Google has placed some of their data servers next to rivers in order to use the water in the river to remove the heat they generate. As a final example, one of the major challenges facing current efforts to build exascale computers is how to keep them from melting.

The thermodynamic costs of performing computation also play a major role in biological computation. For example, ~ 20% of the calories burned by a human are used by their brain — which does nothing but compute. This is a massive reproductive fitness penalty. Indeed, one of the leading contenders for what development in hominin evolution allowed us to develop into homo sapiens is the discovery of fire, since that allowed us to cook meat and tubers, and thereby for the first time release enough calories from our food sources to power our brains. Despite this huge fitness cost though, no current evolution theory accounts for the thermodynamic consequences of computation. It also seems that minimizing the thermodynamic costs of computation has been a major factor in evolution’s selection of the chemical networks in all biological organisms, not just hominin brains. In particular, simply dividing the rate of computation in the entire biosphere by the rate of free energy incident on the earth in sunlight shows that the biosphere as a whole performs computation with a thermodynamic efficiency orders of magnitude greater than our current supercomputers.

In the 1960s and 1970s, Rolf Landauer, Charlie Bennett and collaborators performed the first, pioneering analysis of the fundamental thermodynamic costs involved in bit erasure, perhaps the simplest example of a computation. Unfortunately, their physics was semi-formal, initially with no equations at all. This is because when they did their work, nonequilibrium statistical physics was in its infancy, and so they simply did not have the tools for a formal analysis of the thermodynamics of computation.

Moreover, only a trivially small portion of computer science theory (CS) is concerned with the number of erasure operations needed to perform a given computation. At its core, much of CS is concerned with unavoidable resource / time tradeoffs in running computation. That is the basis of all computational complexity theory, many approaches to characterizing the algorithmic power of different kinds of computers, etc.

Fortunately, the last two decades have seen a revolution in nonequilibrium statistical physics. This has resulted in some extremely powerful tools for analyzing the fundamental thermodynamic properties of far-from-equilibrium dynamic systems – like computers. These tools have already clarified that there are unavoidable thermodynamic tradeoffs in computation, in addition to the resource/time tradeoffs of conventional CS theory. These thermodynamic tradeoffs relate the (physical) speed of a computation, the noise level, and whether the computational system is thermodynamically “tailored” for one specific environment or is general purpose, to name just a few. Interestingly, some of these tradeoffs are also addressed in modern computer engineering, for example in the techniques of approximate computing, adaptive slowing, etc. However, this work is being done in an ad hoc manner, driven entirely by phenomenology.

As a result, the time is ripe to pursue a new field of science and engineering: a modern thermodynamics of computation. This would combine the resource/time tradeoffs of concern in conventional CS with the thermodynamic tradeoffs in computation that are now being revealed. In this way we should be able to develop the tools necessary both for analyzing thermodynamic costs in biological systems and for engineering next-generation computers.

We hope this website will serve as a gathering place and hub for all interested.

Wolpert writes:

This website contains

1) An initial list of researchers working on relating topics, grouped by scientific field:

i) Nonequilibrium statistical physics
ii) Stochastic thermodynamics
iii) Chemical reaction networks
iv) Computer science theory
v) Computer science engineering to address energy costs
vi) Thermodynamics of neurobiology
vii) Thermodynamics of single cells
viii) Artificial biological computation
ix) Naturally occurring biological computation
x) Quantum thermodynamics and information processing

2) An initial list of relevant papers, grouped by the same fields, with added meta-data of keywords, citation scores, and click-through counts.

3) The researchers are cross-referenced with any of the papers they are authors on.

4) A job board

5) An events page

6) A discussion forums page.

Coming soon are more elaborate search functions, a dedicated page for funding opportunities, one for video presentations, and automated special announcement notifications.

Please note that this website is a wiki. That means that everyone who is registered can edit it. Please register (under “request an account”) and then edit the site.

In particular, the current list of researchers and papers is only a starting point. There are likely many omissions, misfilings of researchers and papers in the wrong subject area, and outright mistakes.

Also, to receive monthly notices of additions to the website, please sign up for notifications.