Here are the slides of my talk at the workshop on compositionality at the Simons Institute for the Theory of Computing next week:
• John Baez, Compositionality in network theory, 6 December 2016.
Abstract. To describe systems composed of interacting parts, scientists and engineers draw diagrams of networks: flow charts, Petri nets, electrical circuit diagrams, signal-flow graphs, chemical reaction networks, Feynman diagrams and the like. In principle all these different diagrams fit into a common framework: the mathematics of symmetric monoidal categories. This has been known for some time. However, the details are more challenging, and ultimately more rewarding, than this basic insight. Two complementary approaches are presentations of symmetric monoidal categories using generators and relations (which are more algebraic in flavor) and decorated cospan categories (which are more geometrical). In this talk we focus on the latter.
This talk assumes considerable familiarity with category theory. For a much gentler talk on the same theme, see:
• Monoidal categories of networks.
Here are the slides of Blake Pollard’s talk at the Santa Fe Institute workshop on Statistical Physics, Information Processing and Biology:
• Blake Pollard, Compositional frameworks for open systems, 17 November 2016.
He gave a really nice introduction to how we can use categories to study open systems, with his main example being ‘open Markov processes’, where probability can flow in and out of the set of states. People liked it a lot!
I’ve been thinking hard about climate change since at least 2010. That’s why I started this blog. But the last couple years I’ve focused on basic research in network theory as a preliminary step toward green mathematics. Basic research is what I’m best at, and there are plenty of people working on the more immediate, more urgent aspects of climate change.
Indeed, after the Paris Agreement, I started hoping that politicians were taking this issue seriously and that we’d ultimately deal with it—even though I knew this agreement was not itself enough to keep warming below 2° C:
There is a troubling paradox at the heart of climate policy. On the one hand, nobody can doubt the historic success of the Paris Agreement. On the other hand, everybody willing to look can see the impact of our changing climate. People already face rising seas, expanding desertification and coastal erosion. They take little comfort from agreements to adopt mitigation measures and finance adaptation in the future. They need action today.
That is why the Emissions Gap Report tracks our progress in restricting global warming to 1.5 – 2 degrees Celsius above preindustrial levels by the end of this century. This year’s data shows that overall emissions are still rising, but more slowly, and in the case of carbon dioxide, hardly at all. The report foresees further reductions in the short term and increased ambition in the medium term. Make no mistake; the Paris Agreement will slow climate change. The recent Kigali Amendment to the Montreal Protocol will do the same.
But not enough: not nearly enough and not fast enough. This report estimates we are actually on track for global warming of up to 3.4 degrees Celsius. Current commitments will reduce emissions by no more than a third of the levels required by 2030 to avert disaster. The Kigali Amendment will take off 0.5 degrees Celsius, although not until well after 2030. Action on short-lived climate pollutants, such as black carbon, can take off a further 0.5 degrees Celsius. This means we need to find another one degree from somewhere to meet the stronger, and safer, target of 1.5 degrees Celsius warming.
So, we must take urgent action. If we don’t, we will mourn the loss of biodiversity and natural resources. We will regret the economic fallout. Most of all, we will grieve over the avoidable human tragedy; the growing numbers of climate refugees hit by hunger, poverty, illness and conflict will be a constant reminder of our failure to deliver.
That’s from an annual report put out by the United Nations Environment Programme, or UNEP:
• United Nations Environment Programme, The Emissions Gap Report 2016.
As this report makes clear, we can bridge the gap and keep global warming below 2° C, if we work very hard.
But my limited optimism was shaken by the US presidential election, and especially by the choice of Myron Ebell to head the ‘transition team’ for the Environmental Protection Agency. For the US government to dismantle the Clean Power Plan and abandon the Paris Agreement would seriously threaten the fight against climate change.
Luckily, people already recognize that even with the Paris Agreement, a lot of work must happen at the ‘subnational’ level. This work will go on even if the US federal government gives up. So I want to learn more about it, and get involved somehow.
This is where the Under2 Coalition comes in.
California, Connecticut, Minnesota, New Hampshire, New York, Oregon, Rhode Island, Vermont and Washington have signed onto a spinoff of the Paris Climate Agreement. It’s called the Under2 Memorandum of Understanding, or Under2 MOU for short.
“Under 2” stands for two goals:
• under 2 degrees Celsius of global warming, and
• under 2 tonnes of carbon dioxide emitted per person per year.
These states have agreed to cut greenhouse gas emissions to 80–95% below 1990 levels by 2050. They’ve also agreed to share technology and scientific research, expand use of zero-emission vehicles, etc., etc.
And it’s not just US states that are involved in this! A total of 165 jurisdictions in 33 countries on six continents have signed or endorsed the Under2 MOU. Together, they form the Under2 Coalition. They represent more than 1.08 billion people and $25.7 trillion in GDP, more than a third of the global economy.
I’ll list the members, starting with ones near the US. If you go to the link you can find out exactly what each of these ‘subnational entities’ are promising to do. In a future post, I’ll say more about the details, since I want Riverside to join this coalition. Jim Stuttard has already started a page about a city in the UK which is not a member of the Under2 Coalition, but has done a lot of work to figure out how to cut carbon emissions:
• Azimuth Wiki, Birmingham Green Commission.
This sort of information will be useful for other cities.
UNITED STATES
Austin
California
Connecticut
Los Angeles
Massachusetts
Minnesota
New Hampshire
New York City
New York State
Oakland City
Oregon
Portland City
Rhode Island
Sacramento
San Francisco
Seattle
Vermont
Washington
CANADA
British Columbia
Northwest Territories
Ontario
Québec
Vancouver City
MEXICO
Baja California
Chiapas
Hidalgo
Jalisco
Mexico City
Mexico State
Michoacán
Quintana Roo
Tabasco
Yucatán
BRAZIL
Acre
Amazonas
Mato Grosso
Pernambuco
Rondônia
São Paulo City
São Paulo State
Tocantins
CHILE
Santiago City
COLOMBIA
Guainía
Guaviare
PERU
Loreto
San Martín
Ucayali
AUSTRIA
Lower Austria
FRANCE
Alsace
Aquitaine
Auvergne-Rhône-Alpes
Bas-Rhin
Midi-Pyrénées
Pays de la Loire
GERMANY
Baden-Württemberg
Bavaria
Hesse
North Rhine-Westphalia
Schleswig-Holstein
Thuringia
HUNGARY
Budapest
ITALY
Abruzzo
Basilicata
Emilia-Romagna
Lombardy
Piedmont
Sardinia
Veneto
THE NETHERLANDS
Drenthe
North Brabant
North Holland
South Holland
PORTUGAL
Azores
Madeira
SPAIN
Andalusia
Basque Country
Catalonia
Navarra
SWEDEN
Jämtland Härjedalen
SWITZERLAND
Basel-Landschaft
Basel-Stadt
UNITED KINGDOM
Bristol
Greater Manchester
Scotland
Wales
AUSTRALIA
Australian Capital Territory (ACT)
South Australia
CHINA
Alliance of Peaking Pioneer Cities (represents 23 cities)
Jiangsu Province
Sichuan
Zhenjiang City
INDIA
Telangana
INDONESIA
East Kalimantan
South Sumatra
West Kalimantan
JAPAN
Gifu
NEPAL
Kathmandu Valley
KENYA
Laikipia County
IVORY COAST
Assemblée des Régions de Côte d’Ivoire (represents 33 subnationals)
NIGERIA
Cross River State
MOZAMBIQUE
Nampula
SENEGAL
Guédiawaye
Here at the Santa Fe Institute we’re having a workshop on Statistical Physics, Information Processing and Biology. Unfortunately the talks are not being videotaped, so it’s up to me to spread the news of what’s going on here.
Christopher Jarzynski is famous for discovering the Jarzynski equality. It says

$$\langle e^{-W/kT} \rangle = e^{-\Delta F/kT}$$

where $k$ is Boltzmann’s constant and $T$ is the temperature of a system that’s in equilibrium before some work is done on it. Here $\Delta F$ is the change in free energy, $W$ is the amount of work, and the angle brackets represent an average over the possible options for what takes place—this sort of process is typically nondeterministic.
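For the special case of an instantaneous quench, where the system has no time to respond and the work is just the change in energy at a fixed state, the equality can be checked numerically. Here is a quick sketch for a made-up two-level system; the energy levels and temperature are arbitrary illustrative choices, not from the talk:

```python
import math
import random

random.seed(42)
beta = 1.0  # inverse temperature 1/kT

# Energy levels before (E0) and after (E1) an instantaneous quench.
E0 = [0.0, 1.0]
E1 = [0.5, 2.0]

Z0 = sum(math.exp(-beta * e) for e in E0)
Z1 = sum(math.exp(-beta * e) for e in E1)
dF = -math.log(Z1 / Z0) / beta  # exact free energy change

# Sample states from the initial equilibrium distribution; the work done
# by the quench on a system in state x is W = E1[x] - E0[x].
p0 = math.exp(-beta * E0[0]) / Z0
total = 0.0
N = 200_000
for _ in range(N):
    x = 0 if random.random() < p0 else 1
    W = E1[x] - E0[x]
    total += math.exp(-beta * W)

estimate = total / N
print(estimate, math.exp(-beta * dF))  # the two numbers should agree closely
```

Even though some individual work values exceed $\Delta F$ and some fall below it, the exponential average lands exactly on $e^{-\Delta F/kT}$, which is the content of the equality.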
We’ve seen a good quick explanation of this equation here on Azimuth:
• Eric Downes, Crooks’ Fluctuation Theorem, Azimuth, 30 April 2011.
We’ve also gotten a proof, where it was called the ‘integral fluctuation theorem’:
• Matteo Smerlak, The mathematical origin of irreversibility, Azimuth, 8 October 2012.
It’s a fundamental result in nonequilibrium statistical mechanics—a subject where inequalities are so common that this equation is called an ‘equality’.
Two days ago, Jarzynski gave an incredibly clear hour-long tutorial on this subject, starting with the basics of thermodynamics and zipping forward to modern work. With his permission, you can see the slides here:
• Christopher Jarzynski, A brief introduction to the delights of nonequilibrium statistical physics.
Also try this review article:
• Christopher Jarzynski, Equalities and inequalities: irreversibility and the Second Law of thermodynamics at the nanoscale, Séminaire Poincaré XV Le Temps (2010), 77–102.
This is my talk for the Santa Fe Institute workshop on Statistical Mechanics, Information Processing and Biology:
• Computation and thermodynamics.
It’s about the link between computation and entropy. I take the idea of a Turing machine for granted, but starting with that I explain recursive functions, the Church–Turing thesis, Kolmogorov complexity, the relation between Kolmogorov complexity and Shannon entropy, the uncomputability of Kolmogorov complexity, the ‘complexity barrier’, Levin’s computable version of complexity, and finally my work with Mike Stay on algorithmic thermodynamics.
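One way to get a concrete feel for Kolmogorov complexity: while the complexity $K(s)$ of a string $s$ is uncomputable, the length of any compressed encoding of $s$ is a computable upper bound on it, up to an additive constant. A small illustration using zlib (my own example, not from the talk):

```python
import random
import zlib

random.seed(0)

# A highly regular string has a short description, so its Kolmogorov
# complexity is tiny; a typical random string has no short description.
regular = b'ab' * 5000
noisy = bytes(random.getrandbits(8) for _ in range(10000))

# Compressed length is a computable upper bound on Kolmogorov complexity.
print(len(zlib.compress(regular)))  # far below 10000
print(len(zlib.compress(noisy)))    # close to, or even above, 10000
```

The catch, and the reason uncomputability bites, is that no compressor can certify a lower bound: a string that looks random to zlib might still have a short description it failed to find.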
In my talk slides I mention the ‘complexity barrier’, and state this theorem:
Theorem. Choose your favorite set of axioms for math. If it’s finite and consistent, there exists C ≥ 0, the complexity barrier, such that for no natural number n can you prove the Kolmogorov complexity of n exceeds C.
For a sketch of the proof of this result, go here:
• Chaitin’s incompleteness theorem.
In my talk I showed a movie related to this: an animated video created in 2009 using a program less than 4 kilobytes long that runs on a Windows XP machine.
For more details, read our paper:
• John Baez and Mike Stay, Algorithmic thermodynamics, Math. Struct. Comp. Sci. 22 (2012), 771–787.
or these blog articles:
• Algorithmic thermodynamics (part 1).
• Algorithmic thermodynamics (part 2).
They all emphasize slightly different aspects!
• Monoidal categories of networks.
Nature and the world of human technology are full of networks. People like to draw diagrams of networks: flow charts, electrical circuit diagrams, chemical reaction networks, signal-flow graphs, Bayesian networks, food webs, Feynman diagrams and the like. Far from mere informal tools, many of these diagrammatic languages fit into a rigorous framework: category theory. I will explain a bit of how this works and discuss some applications.
There I will be using the vaguer, less scary title ‘The mathematics of networks’. In fact, all the monoidal categories I discuss are symmetric monoidal, but I decided that too many definitions will make people unhappy.
The main new thing in this talk is my work with Blake Pollard on symmetric monoidal categories where the morphisms are ‘open Petri nets’. This allows us to describe ‘open’ chemical reactions, where chemicals flow in and out. Composing these morphisms then corresponds to sticking together open Petri nets to form larger open Petri nets.
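To make the gluing concrete, here is a toy sketch of how one might represent an open Petri net and compose two of them along a shared boundary. The data structure and naming scheme are my own illustration, not the formalism from the paper:

```python
from dataclasses import dataclass

@dataclass
class OpenPetriNet:
    places: set            # all places in the net
    transitions: list      # (name, input places, output places)
    left: tuple            # boundary places exposed on the left
    right: tuple           # boundary places exposed on the right

def compose(f, g):
    """Glue the right boundary of f to the left boundary of g."""
    assert len(f.right) == len(g.left)
    glue = dict(zip(g.left, f.right))
    # glued places keep f's names; tag g's other places to avoid clashes
    def r(p):
        return glue[p] if p in glue else ('g', p)
    places = f.places | {r(p) for p in g.places}
    transitions = f.transitions + [
        (name, tuple(r(p) for p in ins), tuple(r(p) for p in outs))
        for name, ins, outs in g.transitions
    ]
    return OpenPetriNet(places, transitions, f.left, tuple(r(p) for p in g.right))

# Two one-transition nets: A turns x into y, B turns u into v.
A = OpenPetriNet({'x', 'y'}, [('a', ('x',), ('y',))], left=('x',), right=('y',))
B = OpenPetriNet({'u', 'v'}, [('b', ('u',), ('v',))], left=('u',), right=('v',))

# Gluing y to u gives a net in which x can become y and then become v.
C = compose(A, B)
print(C.transitions)
```

The point of the categorical setup is that this gluing is associative and has identities, so open Petri nets really are the morphisms of a category.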
The Santa Fe Institute, in New Mexico, is a place for studying complex systems. I’ve never been there! Next week I’ll go there to give a colloquium on network theory, and also to participate in this workshop:
• Statistical Mechanics, Information Processing and Biology, November 16–18, Santa Fe Institute. Organized by David Krakauer, Michael Lachmann, Manfred Laubichler, Peter Stadler, and David Wolpert.
Abstract. This workshop will address a fundamental question in theoretical biology: Does the relationship between statistical physics and the need of biological systems to process information underpin some of their deepest features? It recognizes that a core feature of biological systems is that they acquire, store and process information (i.e., perform computation). However to manipulate information in this way they require a steady flux of free energy from their environments. These two, interrelated attributes of biological systems are often taken for granted; they are not part of standard analyses of either the homeostasis or the evolution of biological systems. In this workshop we aim to fill in this major gap in our understanding of biological systems, by gaining deeper insight in the relation between the need for biological systems to process information and the free energy they need to pay for that processing.
The goal of this workshop is to address these issues by focusing on a set of three specific questions: 1) How has the fraction of free energy flux on earth that is used by biological computation changed with time? 2) What is the free energy cost of biological computation or functioning? 3) What is the free energy cost of the evolution of biological computation or functioning? In all of these cases we are interested in the fundamental limits that the laws of physics impose on various aspects of living systems as expressed by these three questions.
I think it’s not open to the public, but I will try to blog about it. The speakers include a lot of experts on information theory, statistical mechanics, and biology. Here they are:
Wednesday November 16: Chris Jarzynski, Seth Lloyd, Artemy Kolchinski, John Baez, Manfred Laubichler, Harold de Vladar, Sonja Prohaska, Chris Kempes.
Thursday November 17: Phil Ball, Matina C. Donaldson-Matasci, Sebastian Deffner, David Wolpert, Daniel Polani, Christoph Flamm, Massimiliano Esposito, Hildegard Meyer-Ortmanns, Blake Pollard, Mikhail Prokopenko, Peter Stadler, Ben Machta.
Friday November 18: Jim Crutchfield, Sara Walker, Hyunju Kim, Takahiro Sagawa, Michael Lachmann, Wojciech Zurek, Christian Van den Broeck, Susanne Still, Chris Stephens.
Here’s an interesting story about the rise of wind energy in Texas:
• Richard Martin, The one and only Texas wind boom, Technology Review, 3 October 2016.
I’ll quote the start:
Rolan Petty stabbed at the dirt with a boot toe and looked up at the broiling west Texas sun. “I call it farming on faith,” he said of his unirrigated cotton farm. “You just have faith that the rain is gonna come.”
If it doesn’t come, Petty has a backup income stream: leasing fees. All around us, towering 150 feet over Petty’s combine and the scrubby-looking cotton plants in neat rows, stood a forest of wind turbines that stretched to the horizon. Petty’s land on the arid plain of west Texas lies on the edge of the vast Horse Hollow wind farm, with 430 turbines spread over 73 square miles. It was the largest wind farm in the world when it was completed, in 2006. Petty’s family leases land to Horse Hollow and another wind farm in the area, making about $7,500 a year on each of the several dozen turbines on their property. Wind power has become a big windfall for the Pettys, as it has for many landowners in Texas—allowing Rolan and his parents and three brothers to make hundreds of thousands of dollars every year whether the rains come or not. And the Petty farm is just a small player in the largest renewable-energy boom the United States has ever seen.
With nearly 18,000 megawatts of capacity, Texas, if it were a country, would be the sixth-largest generator of wind power in the world, right behind Spain. Now Texas is preparing to add several thousand megawatts more—roughly equal to the wind capacity that can be found in all of California. Most of these turbines are in west Texas, one of the most desolate and windy regions in the continental United States. Fifteen years ago, when the groundwork for this boom was being set, this area had little but cotton and grain farms, oil fields, scrub and dry riverbeds, and small towns that were mostly withering.
Today it’s a land of spindly white turbines that line the highways—and the pockets of landowners. At night, when the wind blows strongest and steadiest, if you stand out in one of the fields you can hear the great blades make a ghostly shoop-shoop sound as they turn. Wind power has brought prosperity to towns that were literally drying up less than a generation ago. “In the 2011 drought a lot of people around here would have filed for bankruptcy if not for the turbines,” said Russ Petty, one of Rolan’s brothers, who was giving me a driving tour of the property. “What it’s done is helped keep this land in the family.”
It has also shown that a big state can get a substantial amount of its power from renewable sources without significant disruptions, given the right policies and the right infrastructure investments. The U.S. Department of Energy’s 2015 report Wind Vision set a goal of getting 35 percent of all electricity in the country from wind in 2050, up from 4.5 percent today. In Texas, at times, that number has already been exceeded: on several windy days last winter, wind power briefly supplied more than 40 percent of the state’s electricity. For wind power advocates, Texas is a model for the rest of the country.
But it also reveals what wind power can’t achieve. Overall, wind still represents less than 20 percent of the state’s generation capacity—a number that dips into the low single digits on calm, hot summer days. And even with the wind power boom, the state’s total estimated carbon emissions were the highest in the nation in 2013, the most recent year for which data is available—up 5 percent from the previous year.
What’s more, the conditions that have spurred Texas’s boom may not be easily duplicated. Not only is Texas scoured by usually steady winds, but it has something most other places lack: a gigantic transmission system that was built to bring electricity from the desolate western and northern parts of the state to the big cities of the south and east, including Dallas, Austin, San Antonio, and Houston. Under a program known as Competitive Renewable Energy Zones, or CREZ, the power lines were approved in 2007 and cost nearly $7 billion to build. They have added a few dollars a month to residential electricity bills, but they now look like a farsighted infrastructure investment that other states are unwilling or unable to make.
I drove nearly 1,200 miles, from Abilene to Amarillo and many places in between, this summer to explore the wind explosion in Texas. I wanted to understand what was driving this ongoing boom, and what the ultimate limit might be. How much wind power can the Texas grid absorb, economically and physically? And can other states, and other nations, achieve what Texas has, or are there conditions here that will be difficult or impossible to reproduce anywhere else?
Read the rest here.
I’m excited! In early December I’m going to a workshop on ‘compositionality’, meaning how big complex things can be built by sticking together smaller, simpler parts:
• Compositionality, December 5–9, workshop at the Simons Institute for the Theory of Computing, Berkeley. Organized by Samson Abramsky, Lucien Hardy and Michael Mislove.
In 2007 Jim Simons, the guy who helped invent Chern–Simons theory and then went on to make billions using math to run a hedge fund, founded a research center for geometry and physics on Long Island. More recently he’s also set up this institute for theoretical computer science, in Berkeley. I’ve never been there before.
‘Compositionality’ sounds like an incredibly broad topic, but since it’s part of a semester-long program on Logical structures in computation, this workshop will be aimed at theoretical computer scientists, who have specific ideas about compositionality. And these theoretical computer scientists tend to like category theory. After all, category theory is about morphisms, which you can compose.
Here’s the idea:
The compositional description of complex objects is a fundamental feature of the logical structure of computation. The use of logical languages in database theory and in algorithmic and finite model theory provides a basic level of compositionality, but establishing systematic relationships between compositional descriptions and complexity remains elusive. Compositional models of probabilistic systems and languages have been developed, but inferring probabilistic properties of systems in a compositional fashion is an important challenge. In quantum computation, the phenomenon of entanglement poses a challenge at a fundamental level to the scope of compositional descriptions. At the same time, compositionality has been proposed as a fundamental principle for the development of physical theories. This workshop will focus on the common structures and methods centered on compositionality that run through all these areas.
So, some physics and quantum computation will get into the mix!
A lot of people working on categories and computation will be at this workshop.
• Brendan Fong, The Algebra of Open and Interconnected Systems, Ph.D. thesis, Department of Computer Science, University of Oxford, 2016.
This material is close to my heart, since I’ve informally served as Brendan’s advisor since 2011, when he came to Singapore to work with me on chemical reaction networks. We’ve been collaborating intensely ever since. I just looked at our correspondence, and I see it consists of 880 emails!
At some point I gave him a project: describe the category whose morphisms are electrical circuits. He took up the challenge much more ambitiously than I’d ever expected, developing powerful general frameworks to solve not only this problem but also many others. He did this in a number of papers, most of which I’ve already discussed:
• Brendan Fong, Decorated cospans, Th. Appl. Cat. 30 (2015), 1096–1120. (Blog article here.)
• Brendan Fong and John Baez, A compositional framework for passive linear circuits. (Blog article here.)
• Brendan Fong, John Baez and Blake Pollard, A compositional framework for Markov processes. (Blog article here.)
• Brendan Fong and Brandon Coya, Corelations are the prop for extraspecial commutative Frobenius monoids. (Blog article here.)
• Brendan Fong, Paolo Rapisarda and Paweł Sobociński, A categorical approach to open and interconnected dynamical systems.
But Brendan’s thesis is the best place to see a lot of this material in one place, integrated and clearly explained.
I wanted to write a summary of his thesis. But since he did that himself very nicely in the preface, I’m going to be lazy and just quote that! (I’ll leave out the references, which are crucial in scholarly prose but a bit off-putting in a blog.)
This is a thesis in the mathematical sciences, with emphasis on the mathematics. But before we get to the category theory, I want to say a few words about the scientific tradition in which this thesis is situated.
Mathematics is the language of science. Twinned so intimately with physics, over the past centuries mathematics has become a superb—indeed, unreasonably effective—language for understanding planets moving in space, particles in a vacuum, the structure of spacetime, and so on. Yet, while Wigner speaks of the unreasonable effectiveness of mathematics in the natural sciences, equally eminent mathematicians, not least Gelfand, speak of the unreasonable ineffectiveness of mathematics in biology and related fields. Why such a difference?
A contrast between physics and biology is that while physical systems can often be studied in isolation—the proverbial particle in a vacuum—biological systems are necessarily situated in their environment. A heart belongs in a body, an ant in a colony. One of the first to draw attention to this contrast was Ludwig von Bertalanffy, biologist and founder of general systems theory, who articulated the difference as one between closed and open systems:
Conventional physics deals only with closed systems, i.e. systems which are considered to be isolated from their environment. […] However, we find systems which by their very nature and definition are not closed systems. Every living organism is essentially an open system. It maintains itself in a continuous inflow and outflow, a building up and breaking down of components, never being, so long as it is alive, in a state of chemical and thermodynamic equilibrium but maintained in a so-called ‘steady state’ which is distinct from the latter.
While the ambitious generality of general systems theory has proved difficult, von Bertalanffy’s philosophy has had great impact in his home field of biology, leading to the modern field of systems biology. Half a century later, Dennis Noble, another great pioneer of systems biology and the originator of the first mathematical model of a working heart, describes the shift as one from reduction to integration.
Systems biology […] is about putting together rather than taking apart, integration rather than reduction. It requires that we develop ways of thinking about integration that are as rigorous as our reductionist programmes, but different. It means changing our philosophy, in the full sense of the term.
In this thesis we develop rigorous ways of thinking about integration or, as we refer to it, interconnection.
Interconnection and openness are tightly related. Indeed, openness implies that a system may be interconnected with its environment. But what is an environment but comprised of other systems? Thus the study of open systems becomes the study of how a system changes under interconnection with other systems.
To model this, we must begin by creating language to describe the interconnection of systems. While reductionism hopes that phenomena can be explained by reducing them to “elementary units investigable independently of each other” (in the words of von Bertalanffy), this philosophy of integration introduces as an additional and equal priority the investigation of the way these units are interconnected. As such, this thesis is predicated on the hope that the meaning of an expression in our new language is determined by the meanings of its constituent expressions together with the syntactic rules combining them. This is known as the principle of compositionality.
Also commonly known as Frege’s principle, the principle of compositionality both dates back to Ancient Greek and Vedic philosophy, and is still the subject of active research today. More recently, through the work of Montague in natural language semantics and Strachey and Scott in programming language semantics, the principle of compositionality has found formal expression as the dictum that the interpretation of a language should be given by a homomorphism from an algebra of syntactic representations to an algebra of semantic objects. We too shall follow this route.
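The homomorphism picture is easiest to see in a toy example of my own (not from the thesis): take the algebra of syntactic representations to be arithmetic expression trees, the algebra of semantic objects to be the numbers, and the interpretation to be compositional, so the meaning of an expression is computed from the meanings of its parts.

```python
def meaning(expr):
    """Compositional interpretation: syntax trees (nested tuples) to numbers."""
    if isinstance(expr, (int, float)):
        return expr
    op, left, right = expr
    combine = {'+': lambda a, b: a + b, '*': lambda a, b: a * b}[op]
    # the meaning of the whole is determined by the meanings of the parts,
    # together with the rule combining them
    return combine(meaning(left), meaning(right))

print(meaning(('+', 2, ('*', 3, 4))))  # prints 14
```

The thesis plays the same game with diagrams in place of expression trees: a functor from a category of diagrams to a category of behaviours is exactly such a homomorphism.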
The question then arises: what do we mean by algebra? This mathematical question leads us back to our scientific objectives: what do we mean by system? Here we must narrow, or at least define, our scope. We give some examples. The investigations of this thesis began with electrical circuits and their diagrams, and we will devote significant time to exploring their compositional formulation. We discussed biological systems above, and our notion of system includes these, modelled say in the form of chemical reaction networks or Markov processes, or the compartmental models of epidemiology, population biology, and ecology. From computer science, we consider Petri nets, automata, logic circuits, and the like. More abstractly, our notion of system encompasses matrices and systems of differential equations.
Drawing together these notions of system are well-developed diagrammatic representations based on network diagrams—that is, topological graphs. We call these network-style diagrammatic languages. In abstract, by ‘system’ we shall simply mean that which can be represented by a box with a collection of terminals, perhaps of different types, through which it interfaces with the surroundings. Concretely, one might envision a circuit diagram with terminals.
The algebraic structure of interconnection is then simply the structure that results from the ability to connect terminals of one system with terminals of another. This graphical approach motivates our language of interconnection: indeed, these diagrams will be the expressions of our language.
We claim that the existence of a network-style diagrammatic language to represent a system implies that interconnection is inherently important in understanding the system. Yet, while each of these example notions of system is well-studied in and of itself, their compositional, or algebraic, structure has received scant attention. In this thesis, we study an algebraic structure called a ‘hypergraph category’, and argue that this is the relevant algebraic structure for modelling interconnection of open systems.
Given these preexisting diagrammatic formalisms and our visual intuition, constructing algebras of syntactic representations is thus rather straightforward. The semantics and their algebraic structure are more subtle.
In some sense our semantics is already given to us too: in studying these systems as closed systems, scientists have already formalised the meaning of these diagrams. But we have shifted from a closed perspective to an open one, and we need our semantics to also account for points of interconnection.
Taking inspiration from Willems’ behavioural approach and Deutsch’s constructor theory, in this thesis I advocate the following position. First, at each terminal of an open system we may make measurements appropriate to the type of terminal. Given a collection of terminals, the universum is then the set of all possible measurement outcomes. Each open system has a collection of terminals, and hence a universum. The semantics of an open system is the subset of measurement outcomes on the terminals that are permitted by the system. This is known as the behaviour of the system.
For example, consider a resistor of resistance $r$. This has two terminals—the two ends of the resistor—and at each terminal, we may measure the potential and the current. Thus the universum of this system is the set $\mathbb{R}^2 \oplus \mathbb{R}^2$, where the summands represent respectively the potentials and currents at each of the two terminals. The resistor is governed by Kirchhoff’s current law, or conservation of charge, and Ohm’s law. Conservation of charge states that the current flowing into one terminal must equal the current flowing out of the other terminal, while Ohm’s law states that this current will be proportional to the potential difference, with constant of proportionality $\tfrac{1}{r}$. Thus the behaviour of the resistor is the set

$$\Big\{ (\phi_1, \phi_2, i_1, i_2) \in \mathbb{R}^2 \oplus \mathbb{R}^2 \ \Big|\ i_1 = \tfrac{1}{r}(\phi_2 - \phi_1) = -i_2 \Big\}.$$
Note that in this perspective a law such as Ohm’s law is a mechanism for partitioning behaviours into possible and impossible behaviours.
Interconnection of terminals then asserts the identification of the variables at the identified terminals. Fixing some notion of open system and subsequently an algebra of syntactic representations for these systems, our approach, based on the principle of compositionality, requires this to define an algebra of semantic objects and a homomorphism from syntax to semantics. The first part of this thesis develops the mathematical tools necessary to pursue this vision for modelling open systems and their interconnection.
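This identification of variables can be checked in a toy computation of my own (assuming the sign conventions in the resistor example above): gluing two resistor behaviours at a shared terminal means giving that terminal a single potential and requiring the currents there to cancel, and the composite then behaves like one resistor of resistance $r_1 + r_2$.

```python
def series_current(r1, r2, phi1, phi3):
    """Current into terminal 1 when two resistors are glued at a middle node."""
    # Identifying the shared terminal gives it one potential phi2, and
    # current conservation there: (phi2 - phi1)/r1 + (phi2 - phi3)/r2 = 0.
    phi2 = (phi1 / r1 + phi3 / r2) / (1 / r1 + 1 / r2)
    return (phi2 - phi1) / r1  # i1, using the convention i1 = (phi2 - phi1)/r1

i1 = series_current(2.0, 3.0, phi1=10.0, phi3=0.0)
print((10.0 - 0.0) / -i1)  # effective resistance: prints 5.0, i.e. r1 + r2
```

The composite behaviour is again cut out by a law of the same shape as Ohm’s law, which is exactly the compositionality one hopes for.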
The next goal is to demonstrate the efficacy of this philosophy in applications. At its core, this work is done in the faith that the right language allows deeper insight into the underlying structure. Indeed, after setting up such a language for open systems there are many questions to be asked: Can we find a sound and complete logic for determining when two syntactic expressions have the same semantics? Suppose we have systems that have some property, for example controllability. In what ways can we interconnect controllable systems so that the combined system is also controllable? Can we compute the semantics of a large system more quickly by computing the semantics of subsystems and then composing them? If we want a given system to achieve a specified trajectory, can we interconnect another system to make it do so? How do two different notions of system, such as circuit diagrams and signal flow graphs, relate to each other? Can we find homomorphisms between their syntactic and semantic algebras? In the second part of this thesis we explore some applications in depth, providing answers to questions of the above sort.
The thesis is divided into two parts. Part I, comprising
Chapters 1 to 4, focuses on mathematical foundations. In it we develop the theory of hypergraph categories and a powerful tool for constructing and manipulating them: decorated corelations. Part II, comprising Chapters 5 to 7, then discusses applications of this theory to examples of open systems.
The central refrain of this thesis is that the syntax and semantics of network-style diagrammatic languages can be modelled by hypergraph categories. These are introduced in Chapter 1. Hypergraph categories are symmetric monoidal categories in which every object is equipped with the structure of a special commutative Frobenius monoid in a way compatible with the monoidal product. As we will rely heavily on properties of monoidal categories, their functors, and their graphical calculus, we begin with a whirlwind review of these ideas. We then provide a definition of hypergraph categories and their functors, a strictification theorem, and an important example: the category of cospans in a category with finite colimits.
A cospan is a pair of morphisms $X \rightarrow N \leftarrow Y$
with a common codomain. In Chapter 2 we introduce the idea of a ‘decorated cospan’, which equips the apex with extra structure. Our motivating example is cospans of finite sets decorated by graphs, as in this picture:
Here graphs are a proxy for expressions in a network-style diagrammatic language. To give a bit more formal detail, let $\mathcal{C}$ be a category with finite colimits, writing its coproduct as $+$, and let $(\mathcal{D}, \otimes)$ be a braided monoidal category. Decorated cospans provide a method of producing a hypergraph category from a lax braided monoidal functor $F\colon (\mathcal{C}, +) \to (\mathcal{D}, \otimes)$.
The objects of these categories are simply the objects of $\mathcal{C}$, while the morphisms are pairs comprising a cospan $X \rightarrow N \leftarrow Y$ in $\mathcal{C}$ together with a morphism $I \to FN$ in $\mathcal{D}$—the so-called decoration. We will also describe how to construct hypergraph functors between decorated cospan categories. In particular, this provides a useful tool for constructing a hypergraph category that captures the syntax of a network-style diagrammatic language.
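As an illustration of how composition works in such categories, cospans of finite sets compose by pushout: the two apexes are glued along the common foot. Under a hypothetical encoding (finite sets as initial segments $\{0,\dots,n-1\}$ of the naturals, maps as lists), this is a short union-find computation:

```python
def compose(c1, c2):
    """Compose two composable cospans of finite sets by pushout.

    Hypothetical encoding for this sketch: a cospan X -> N <- Y is a
    dict {'left': f, 'right': g, 'apex': n} where the apex is the set
    {0, ..., n-1} and f, g are lists sending foot elements to apex
    elements.  c1's right foot must equal c2's left foot."""
    f1, g1, n1 = c1['left'], c1['right'], c1['apex']
    f2, g2, n2 = c2['left'], c2['right'], c2['apex']
    # Pushout apex: (N1 + N2) / ~, glueing g1(y) ~ f2(y) for each y in Y.
    parent = list(range(n1 + n2))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]   # path halving
            i = parent[i]
        return i
    for y in range(len(g1)):
        parent[find(g1[y])] = find(n1 + f2[y])
    # Renumber the equivalence classes 0, ..., k-1.
    classes = {}
    rep = [classes.setdefault(find(i), len(classes)) for i in range(n1 + n2)]
    return {'left':  [rep[a] for a in f1],
            'right': [rep[n1 + b] for b in g2],
            'apex':  len(classes)}
```

For instance, glueing two "wires" end to end along a one-element foot yields a single longer wire: `compose({'left': [0, 1], 'right': [1], 'apex': 2}, {'left': [0], 'right': [1], 'apex': 2})` gives a cospan with a three-element apex.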
Having developed a method to construct a category where the morphisms are expressions in a diagrammatic language, we turn our attention to categories of semantics. This leads us to the notion of a corelation, to which we devote Chapter 3. Given a factorisation system $(\mathcal{E}, \mathcal{M})$ on a category $\mathcal{C}$, we define a corelation to be a cospan $X \rightarrow N \leftarrow Y$ such that the copairing of the two maps, a map $X + Y \to N$, is a morphism in $\mathcal{E}$. Factorising maps $X + Y \to N$ using the factorisation system leads to a notion of equivalence on cospans, and this helps us describe when two diagrams are equivalent. Like cospans, corelations form hypergraph categories.
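With the epi-mono factorisation system on finite sets, for instance, passing from a cospan to its underlying corelation simply discards the apex elements hit by neither foot. A sketch under a hypothetical encoding (apex as $\{0,\dots,n-1\}$, foot maps as lists):

```python
def corelation_of(f, g, n):
    """Given a cospan X -> N <- Y of finite sets (apex N = {0, ..., n-1},
    with f and g lists sending foot elements to apex elements), epi-mono
    factorise the copairing [f, g]: X + Y -> N.  The epi part is the
    corelation: the same cospan with its apex cut down to the image of
    [f, g], so apex elements touched by neither foot are discarded."""
    image = sorted(set(f) | set(g))
    renumber = {a: i for i, a in enumerate(image)}
    return [renumber[a] for a in f], [renumber[a] for a in g], len(image)

# A cospan whose apex has an element (number 2) touching neither foot:
corelation_of([0, 0], [1], 3)   # -> ([0, 0], [1], 2)
```

Two cospans that differ only in such unreachable apex elements yield the same corelation; this is the kind of equivalence on diagrams that corelations are designed to capture.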
In Chapter 4 we decorate corelations. Like decorated cospans,
decorated corelations are corelations together with some additional structure on the apex. We again use a lax braided monoidal functor to specify the sorts of extra structure allowed. Moreover, decorated corelations too form the morphisms of a hypergraph category. The culmination of our theoretical work is to show that every hypergraph category and every hypergraph functor can be constructed using decorated corelations. This implies that we can use decorated corelations to construct a semantic hypergraph category for any network-style diagrammatic language, as well as a hypergraph functor from its syntactic category that interprets each diagram. We also discuss how the intuitions behind decorated corelations guide construction of these categories and functors.
Having developed these theoretical tools, in the second part we turn to demonstrating that they have useful applications. Chapter 5 uses corelations to formalise signal flow diagrams representing linear time-invariant discrete dynamical systems as morphisms in a category. Our main result gives an intuitive sound and fully complete equational theory for reasoning about these linear time-invariant systems. Using this framework, we derive a novel structural characterisation of controllability, and consequently provide a methodology for analysing the controllability of networked and interconnected systems.
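For orientation, the classical state-space notion of controllability that Chapter 5 re-examines can be checked with the standard Kalman rank criterion. This is textbook material, not the structural characterisation developed in the thesis:

```python
import numpy as np

def controllable(A, B, tol=1e-9):
    """Kalman rank test for the discrete LTI system x[t+1] = A x[t] + B u[t]:
    the system is controllable iff the controllability matrix
    [B, AB, ..., A^(n-1) B] has full row rank n."""
    n = A.shape[0]
    blocks = [B]
    for _ in range(n - 1):
        blocks.append(A @ blocks[-1])
    return np.linalg.matrix_rank(np.hstack(blocks), tol=tol) == n

# A chain of two states driven at one end: the input reaches both states
# through the dynamics, so the system is controllable.
A = np.array([[1.0, 1.0],
              [0.0, 1.0]])
B = np.array([[0.0],
              [1.0]])
controllable(A, B)   # -> True
```

A question of the compositional kind asked above is then: when does interconnecting two systems passing this test yield a combined system that still passes it?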
Chapter 6 studies passive linear networks. Passive linear
networks are used in a wide variety of engineering applications, but the best studied are electrical circuits made of resistors, inductors and capacitors. The goal is to construct what we call the ‘black box functor’, a hypergraph functor from a category of open circuit diagrams to a category of behaviours of circuits. We construct the former as a decorated cospan category, with each morphism a cospan of finite sets decorated by a circuit diagram on the apex. In this category, composition describes the process of attaching the outputs of one circuit to the inputs of another. The behaviour of a circuit is the relation it imposes between currents and potentials at its terminals. The space of these currents and potentials naturally has the structure of a symplectic vector space, and the relation imposed by a circuit is a Lagrangian linear relation. Thus, the black box functor goes from our category of circuits to the category of symplectic vector spaces and Lagrangian linear relations. Decorated corelations provide a critical tool for constructing these hypergraph categories and the black box functor.
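To make the target category concrete: a subspace of the space of potentials and currents is Lagrangian when it has half the ambient dimension and the symplectic form vanishes on it, and this is easy to verify numerically. The coordinate ordering and the sign of the form below are assumptions of this sketch:

```python
import numpy as np

def is_lagrangian(V, tol=1e-9):
    """Check that the column span of V is a Lagrangian subspace of R^(2n),
    coordinates ordered as (potentials, currents), with symplectic form
    omega((phi, i), (phi', i')) = phi . i' - i . phi'.
    Lagrangian = dimension n and omega vanishing on the subspace."""
    n = V.shape[0] // 2
    Omega = np.block([[np.zeros((n, n)), np.eye(n)],
                      [-np.eye(n), np.zeros((n, n))]])
    half_dim = np.linalg.matrix_rank(V, tol=tol) == n
    isotropic = np.allclose(V.T @ Omega @ V, 0, atol=tol)
    return half_dim and isotropic

# Behaviour of a resistor R in coordinates (phi1, phi2, i1, i2):
# phi1 - phi2 = R * i1 and i2 = -i1.  Columns span the behaviour.
R = 2.0
V = np.array([[1.0, R],      # phi1
              [1.0, 0.0],    # phi2
              [0.0, 1.0],    # i1
              [0.0, -1.0]])  # i2
is_lagrangian(V)   # -> True
```

So the resistor's behaviour really is a point of the semantic category: a Lagrangian linear relation between its two terminals.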
Finally, in Chapter 7 we mention two further research directions. The first is the idea of a ‘bound colimit’, which aims to describe why epi-mono factorisation systems are useful for constructing corelation categories of semantics for open systems. The second research direction pertains to applications of the black box functor for passive linear networks, discussing the work of Jekel on the inverse problem for electric circuits and the work of Baez, Fong, and Pollard on open Markov processes.