Modelling Interconnected Systems with Decorated Corelations

9 December, 2016

Here at the Simons Institute workshop on compositionality, my talk on network theory introduced ‘decorated cospans’ as a general model of open systems. These were invented by Brendan Fong, and are nicely explained in his thesis:

• Brendan Fong, The Algebra of Open and Interconnected Systems. (Blog article here.)

But he went further: to understand the externally observable behavior of an open system we often want to simplify a decorated cospan and get another sort of structure, which he calls a ‘decorated corelation’. His talk here explains decorated corelations and what they’re good for:


Abstract. Hypergraph categories are monoidal categories in which every object is equipped with a special commutative Frobenius monoid. Morphisms in a hypergraph category can hence be represented by string diagrams in which strings can branch and split: diagrams that are reminiscent of electrical circuit diagrams. As such they provide a framework for formalising the syntax and semantics of circuit-type diagrammatic languages. In this talk I will introduce decorated corelations as a tool for building hypergraph categories and hypergraph functors, drawing examples from linear algebra and dynamical systems.
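
If you haven’t met them before, here is what the first sentence amounts to (my quick gloss, not part of Brendan’s abstract). A special commutative Frobenius monoid on an object X of a symmetric monoidal category consists of a multiplication \mu \colon X \otimes X \to X, a unit \eta \colon I \to X, a comultiplication \delta \colon X \to X \otimes X and a counit \epsilon \colon X \to I, obeying the usual commutative monoid and cocommutative comonoid axioms together with the Frobenius law

(1_X \otimes \mu)(\delta \otimes 1_X) \; = \; \delta \mu \; = \; (\mu \otimes 1_X)(1_X \otimes \delta)

and the ‘special’ law \mu \delta = 1_X. These laws imply that any connected string diagram built from these four operations depends only on its number of inputs and outputs, which is exactly what lets wires branch and merge freely in circuit-like diagrams.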


Semantics for Physicists

7 December, 2016

I once complained that my student Brendan Fong said ‘semantics’ too much. You see, I’m in a math department, but he was actually in the computer science department at Oxford: I was his informal supervisor. Theoretical computer scientists love talking about syntax versus semantics—that is, written expressions versus what those expressions actually mean, or programs versus what those programs actually do. So Brendan was very comfortable with that distinction. But my other grad students, coming from a math department, didn’t understand it… and he was mentioning it in practically every other sentence.

In 1963, in his PhD thesis, Bill Lawvere figured out a way to talk about syntax versus semantics that even mathematicians—well, even category theorists—could understand. It’s called ‘functorial semantics’. The idea is that things you write are morphisms in some category X, while their meanings are morphisms in some other category Y. There’s a functor F \colon X \to Y which sends things you write to their meanings. This functor sends syntax to semantics!
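
To make this concrete, here is the standard example (my gloss, not Lawvere’s exact words). Let X be the ‘theory of groups’: a category with finite products whose objects are x^0, x^1, x^2, \dots and whose morphisms are formally generated by the group operations, for example a multiplication

m \colon x^2 \to x

This is pure syntax. A model, or semantics, is a product-preserving functor F \colon X \to \mathrm{Set}: it sends x to an actual set and sends m to an actual function

F(m) \colon F(x) \times F(x) \to F(x)

obeying the group axioms. So each such functor is a group, and the functor is exactly the thing that assigns meanings to the symbols.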

But physicists may not enjoy this idea unless they see it at work in physics. In physics, too, the distinction is important! But it takes a while to understand. I hope Prakash Panangaden’s talk at the start of the Simons Institute workshop on compositionality is helpful. Check it out:


Compositionality in Network Theory

29 November, 2016

Next week I’m speaking at the workshop on compositionality at the Simons Institute for the Theory of Computing. I decided to talk about some new work with Blake Pollard. You can see the slides here:

• John Baez, Compositionality in network theory, 6 December 2016.

and a video here:

Abstract. To describe systems composed of interacting parts, scientists and engineers draw diagrams of networks: flow charts, Petri nets, electrical circuit diagrams, signal-flow graphs, chemical reaction networks, Feynman diagrams and the like. In principle all these different diagrams fit into a common framework: the mathematics of symmetric monoidal categories. This has been known for some time. However, the details are more challenging, and ultimately more rewarding, than this basic insight. Two complementary approaches are presentations of symmetric monoidal categories using generators and relations (which are more algebraic in flavor) and decorated cospan categories (which are more geometrical). In this talk we focus on the latter.

This talk assumes considerable familiarity with category theory. For a much gentler talk on the same theme, see:

Monoidal categories of networks.



Monoidal Categories of Networks

12 November, 2016

Here are the slides of my colloquium talk at the Santa Fe Institute at 11 am on Tuesday, November 15th. I’ll explain some not-yet-published work with Blake Pollard on a monoidal category of ‘open Petri nets’:

Monoidal categories of networks.

Nature and the world of human technology are full of networks. People like to draw diagrams of networks: flow charts, electrical circuit diagrams, chemical reaction networks, signal-flow graphs, Bayesian networks, food webs, Feynman diagrams and the like. Far from mere informal tools, many of these diagrammatic languages fit into a rigorous framework: category theory. I will explain a bit of how this works and discuss some applications.

There I will be using the vaguer, less scary title ‘The mathematics of networks’. In fact, all the monoidal categories I discuss are symmetric monoidal, but I decided that too many definitions would make people unhappy.

The main new thing in this talk is my work with Blake Pollard on symmetric monoidal categories where the morphisms are ‘open Petri nets’. This allows us to describe ‘open’ chemical reactions, where chemicals flow in and out. Composing these morphisms then corresponds to sticking together open Petri nets to form larger open Petri nets.
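
If you like to see ideas in code, here is a toy version of the gluing operation: just a quick sketch of mine in Python, not the actual decorated-cospan construction. An open Petri net is a Petri net with chosen lists of input and output places, and composition identifies the outputs of one net with the inputs of the next (assuming, for simplicity, disjoint place names and interfaces of equal length):

from dataclasses import dataclass
from typing import Dict, List, Tuple

Multiset = Dict[str, int]   # place name -> multiplicity

@dataclass
class OpenPetriNet:
    places: set                                   # species
    transitions: List[Tuple[Multiset, Multiset]]  # each reaction: (inputs, outputs)
    inputs: List[str]                             # boundary places exposed as inputs
    outputs: List[str]                            # boundary places exposed as outputs

def compose(f: OpenPetriNet, g: OpenPetriNet) -> OpenPetriNet:
    # Glue g after f by identifying f's i-th output place with g's i-th input
    # place.  (Simplification: assumes the place names of f and g are disjoint;
    # the real construction is a pushout of finite sets.)
    assert len(f.outputs) == len(g.inputs)
    rename = dict(zip(g.inputs, f.outputs))
    r = lambda p: rename.get(p, p)
    relabel = lambda ms: {r(p): n for p, n in ms.items()}
    return OpenPetriNet(
        places=f.places | {r(p) for p in g.places},
        transitions=f.transitions + [(relabel(i), relabel(o)) for i, o in g.transitions],
        inputs=f.inputs,
        outputs=[r(p) for p in g.outputs],
    )

# Example: the reaction A + B -> C, composed with C -> D along the place C:
net1 = OpenPetriNet({'A', 'B', 'C'}, [({'A': 1, 'B': 1}, {'C': 1})], ['A', 'B'], ['C'])
net2 = OpenPetriNet({'X', 'D'}, [({'X': 1}, {'D': 1})], ['X'], ['D'])
print(compose(net1, net2))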


Compositionality Workshop

1 November, 2016

I’m excited! In early December I’m going to a workshop on ‘compositionality’, meaning how big complex things can be built by sticking together smaller, simpler parts:

Compositionality, December 5-9, workshop at the Simons Institute for the Theory of Computing, Berkeley. Organized by Samson Abramsky, Lucien Hardy and Michael Mislove.

In 2007 Jim Simons, the guy who helped invent Chern–Simons theory and then went on to make billions using math to run a hedge fund, founded a research center for geometry and physics on Long Island. More recently he’s also set up this institute for theoretical computer science, in Berkeley. I’ve never been there before.

‘Compositionality’ sounds like an incredibly broad topic, but since it’s part of a semester-long program on Logical structures in computation, this workshop will be aimed at theoretical computer scientists, who have specific ideas about compositionality. And these theoretical computer scientists tend to like category theory. After all, category theory is about morphisms, which you can compose.

Here’s the idea:

The compositional description of complex objects is a fundamental feature of the logical structure of computation. The use of logical languages in database theory and in algorithmic and finite model theory provides a basic level of compositionality, but establishing systematic relationships between compositional descriptions and complexity remains elusive. Compositional models of probabilistic systems and languages have been developed, but inferring probabilistic properties of systems in a compositional fashion is an important challenge. In quantum computation, the phenomenon of entanglement poses a challenge at a fundamental level to the scope of compositional descriptions. At the same time, compositionality has been proposed as a fundamental principle for the development of physical theories. This workshop will focus on the common structures and methods centered on compositionality that run through all these areas.

So, some physics and quantum computation will get into the mix!

A lot of people working on categories and computation will be at this workshop. Here’s what I know about the talks so far; abstracts are available for most of them on the workshop’s web page.

The program

The workshop runs from Monday, December 5th through Friday, December 9th, 2016, with talks each morning and afternoon, discussion sessions rounding out several of the days, a reception on Monday evening, and Wednesday afternoon left free.

Open and Interconnected Systems

23 October, 2016

Brendan Fong finished his thesis a while ago, and here it is!

• Brendan Fong, The Algebra of Open and Interconnected Systems, Ph.D. thesis, Department of Computer Science, University of Oxford, 2016.

This material is close to my heart, since I’ve informally served as Brendan’s advisor since 2011, when he came to Singapore to work with me on chemical reaction networks. We’ve been collaborating intensely ever since. I just looked at our correspondence, and I see it consists of 880 emails!

At some point I gave him a project: describe the category whose morphisms are electrical circuits. He took up the challenge much more ambitiously than I’d ever expected, developing powerful general frameworks to solve not only this problem but also many others. He did this in a number of papers, most of which I’ve already discussed:

• Brendan Fong, Decorated cospans, Th. Appl. Cat. 30 (2015), 1096–1120. (Blog article here.)

• Brendan Fong and John Baez, A compositional framework for passive linear circuits. (Blog article here.)

• Brendan Fong, John Baez and Blake Pollard, A compositional framework for Markov processes. (Blog article here.)

• Brendan Fong and Brandon Coya, Corelations are the prop for extraspecial commutative Frobenius monoids. (Blog article here.)

• Brendan Fong, Paolo Rapisarda and Paweł Sobociński, A categorical approach to open and interconnected dynamical systems.

But Brendan’s thesis is the best place to see a lot of this material in one place, integrated and clearly explained.

I wanted to write a summary of his thesis. But since he did that himself very nicely in the preface, I’m going to be lazy and just quote that! (I’ll leave out the references, which are crucial in scholarly prose but a bit off-putting in a blog.)

Preface

This is a thesis in the mathematical sciences, with emphasis on the mathematics. But before we get to the category theory, I want to say a few words about the scientific tradition in which this thesis is situated.

Mathematics is the language of science. Twinned so intimately with physics, over the past centuries mathematics has become a superb—indeed, unreasonably effective—language for understanding planets moving in space, particles in a vacuum, the structure of spacetime, and so on. Yet, while Wigner speaks of the unreasonable effectiveness of mathematics in the natural sciences, equally eminent mathematicians, not least Gelfand, speak of the unreasonable ineffectiveness of mathematics in biology and related fields. Why such a difference?

A contrast between physics and biology is that while physical systems can often be studied in isolation—the proverbial particle in a vacuum—biological systems are necessarily situated in their environment. A heart belongs in a body, an ant in a colony. One of the first to draw attention to this contrast was Ludwig von Bertalanffy, biologist and founder of general systems theory, who articulated the difference as one between closed and open systems:

Conventional physics deals only with closed systems, i.e. systems which are considered to be isolated from their environment. […] However, we find systems which by their very nature and definition are not closed systems. Every living organism is essentially an open system. It maintains itself in a continuous inflow and outflow, a building up and breaking down of components, never being, so long as it is alive, in a state of chemical and thermodynamic equilibrium but maintained in a so-called ‘steady state’ which is distinct from the latter.

While the ambitious generality of general systems theory has proved difficult, von Bertalanffy’s philosophy has had great impact in his home field of biology, leading to the modern field of systems biology. Half a century later, Denis Noble, another great pioneer of systems biology and the originator of the first mathematical model of a working heart, describes the shift as one from reduction to integration.

Systems biology […] is about putting together rather than taking apart, integration rather than reduction. It requires that we develop ways of thinking about integration that are as rigorous as our reductionist programmes, but different. It means changing our philosophy, in the full sense of the term.

In this thesis we develop rigorous ways of thinking about integration or, as we refer to it, interconnection.

Interconnection and openness are tightly related. Indeed, openness implies that a system may be interconnected with its environment. But what is an environment but comprised of other systems? Thus the study of open systems becomes the study of how a system changes under interconnection with other systems.

To model this, we must begin by creating language to describe the interconnection of systems. While reductionism hopes that phenomena can be explained by reducing them to “elementary units investigable independently of each other” (in the words of von Bertalanffy), this philosophy of integration introduces as an additional and equal priority the investigation of the way these units are interconnected. As such, this thesis is predicated on the hope that the meaning of an expression in our new language is determined by the meanings of its constituent expressions together with the syntactic rules combining them. This is known as the principle of compositionality.

Also commonly known as Frege’s principle, the principle of compositionality both dates back to Ancient Greek and Vedic philosophy, and is still the subject of active research today. More recently, through the work of Montague in natural language semantics and Strachey and Scott in programming language semantics, the principle of compositionality has found formal expression as the dictum that the interpretation of a language should be given by a homomorphism from an algebra of syntactic representations to an algebra of semantic objects. We too shall follow this route.

The question then arises: what do we mean by algebra? This mathematical question leads us back to our scientific objectives: what do we mean by system? Here we must narrow, or at least define, our scope. We give some examples. The investigations of this thesis began with electrical circuits and their diagrams, and we will devote significant time to exploring their compositional formulation. We discussed biological systems above, and our notion of system includes these, modelled say in the form of chemical reaction networks or Markov processes, or the compartmental models of epidemiology, population biology, and ecology. From computer science, we consider Petri nets, automata, logic circuits, and the like. More abstractly, our notion of system encompasses matrices and systems of differential equations.

Drawing together these notions of system are well-developed diagrammatic representations based on network diagrams—that is, topological graphs. We call these network-style diagrammatic languages. In abstract, by ‘system’ we shall simply mean that which can be represented by a box with a collection of terminals, perhaps of different types, through which it interfaces with the surroundings. Concretely, one might envision a circuit diagram with terminals.

The algebraic structure of interconnection is then simply the structure that results from the ability to connect terminals of one system with terminals of another. This graphical approach motivates our language of interconnection: indeed, these diagrams will be the expressions of our language.

We claim that the existence of a network-style diagrammatic language to represent a system implies that interconnection is inherently important in understanding the system. Yet, while these example notions of system are each well-studied in and of themselves, their compositional, or algebraic, structure has received scant attention. In this thesis, we study an algebraic structure called a ‘hypergraph category’, and argue that this is the relevant algebraic structure for modelling interconnection of open systems.

Given these pre-existing diagrammatic formalisms and our visual intuition, constructing algebras of syntactic representations is thus rather straightforward. The semantics and their algebraic structure are more subtle.

In some sense our semantics is already given to us too: in studying these systems as closed systems, scientists have already formalised the meaning of these diagrams. But we have shifted from a closed perspective to an open one, and we need our semantics to also account for points of interconnection.

Taking inspiration from Willems’ behavioural approach and Deutsch’s constructor theory, in this thesis I advocate the following position. First, at each terminal of an open system we may make measurements appropriate to the type of terminal. Given a collection of terminals, the universum is then the set of all possible measurement outcomes. Each open system has a collection of terminals, and hence a universum. The semantics of an open system is the subset of measurement outcomes on the terminals that are permitted by the system. This is known as the behaviour of the system.

For example, consider a resistor of resistance r. This has two terminals—the two ends of the resistor—and at each terminal, we may measure the potential and the current. Thus the universum of this system is the set \mathbb{R}\oplus\mathbb{R}\oplus\mathbb{R}\oplus\mathbb{R}, where the summands represent respectively the potentials and currents at each of the two terminals. The resistor is governed by Kirchhoff’s current law, or conservation of charge, and Ohm’s law. Conservation of charge states that the current flowing into one terminal must equal the current flowing out of the other terminal, while Ohm’s law states that this current will be proportional to the potential difference, with constant of proportionality 1/r. Thus the behaviour of the resistor is the set

\displaystyle{   \big\{\big(\phi_1,\phi_2,     -\tfrac1r(\phi_2-\phi_1),\tfrac1r(\phi_2-\phi_1)\big)\,\big\vert\,     \phi_1,\phi_2 \in \mathbb{R}\big\} }

Note that in this perspective a law such as Ohm’s law is a mechanism for partitioning behaviours into possible and impossible behaviours.

Interconnection of terminals then asserts the identification of the variables at the identified terminals. Fixing some notion of open system and subsequently an algebra of syntactic representations for these systems, our approach, based on the principle of compositionality, requires this to define an algebra of semantic objects and a homomorphism from syntax to semantics. The first part of this thesis develops the mathematical tools necessary to pursue this vision for modelling open systems and their interconnection.
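
A tiny worked example of this recipe (an aside of mine, not part of Brendan’s preface): interconnect terminal 2 of a resistor r_1 with terminal 1 of a resistor r_2, identifying the potentials at the joined terminals and requiring the two currents there to sum to zero, which is just Kirchhoff’s current law at the now-internal node. This forces

\tfrac{1}{r_1}(\phi_2-\phi_1) \;=\; \tfrac{1}{r_2}(\phi_3-\phi_2), \qquad \textrm{so} \qquad \phi_2 \;=\; \frac{r_2\phi_1+r_1\phi_3}{r_1+r_2}

Substituting back, the current at the outer terminal 1 is -\tfrac{1}{r_1+r_2}(\phi_3-\phi_1) and at terminal 3 it is \tfrac{1}{r_1+r_2}(\phi_3-\phi_1), which is exactly the behaviour of a single resistor of resistance r_1+r_2.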

The next goal is to demonstrate the efficacy of this philosophy in applications. At core, this work is done in the faith that the right language allows deeper insight into the underlying structure. Indeed, after setting up such a language for open systems there are many questions to be asked: Can we find a sound and complete logic for determining when two syntactic expressions have the same semantics? Suppose we have systems that have some property, for example controllability. In what ways can we interconnect controllable systems so that the combined system is also controllable? Can we compute the semantics of a large system quicker by computing the semantics of subsystems and then composing them? If I want a given system to achieve a specified trajectory, can we interconnect another system to make it do so? How do two different notions of system, such as circuit diagrams and signal flow graphs, relate to each other? Can we find homomorphisms between their syntactic and semantic algebras? In the second part of this thesis we explore some applications in depth, providing answers to questions of the above sort.

Outline of the thesis

The thesis is divided into two parts. Part I, comprising Chapters 1 to 4, focuses on mathematical foundations. In it we develop the theory of hypergraph categories and a powerful tool for constructing and manipulating them: decorated corelations. Part II, comprising Chapters 5 to 7, then discusses applications of this theory to examples of open systems.

The central refrain of this thesis is that the syntax and semantics of network-style diagrammatic languages can be modelled by hypergraph categories. These are introduced in Chapter 1. Hypergraph categories are symmetric monoidal categories in which every object is equipped with the structure of a special commutative Frobenius monoid in a way compatible with the monoidal product. As we will rely heavily on properties of monoidal categories, their functors, and their graphical calculus, we begin with a whirlwind review of these ideas. We then provide a definition of hypergraph categories and their functors, a strictification theorem, and an important example: the category of cospans in a category with finite colimits.

A cospan is a pair of morphisms

X \to N \leftarrow Y

with a common codomain. In Chapter 2 we introduce the idea of a ‘decorated cospan’, which equips the apex N with extra structure. Our motivating example is cospans of finite sets decorated by graphs, as in this picture:

Here graphs are a proxy for expressions in a network-style diagrammatic language. To give a bit more formal detail, let \mathcal C be a category with finite colimits, writing its coproduct as +, and let (\mathcal D, \otimes) be a braided monoidal category. Decorated cospans provide a method of producing a hypergraph category from a lax braided monoidal functor

F\colon (\mathcal C,+) \to (\mathcal D, \otimes)

The objects of these categories are simply the objects of \mathcal C, while the morphisms are pairs comprising a cospan X \rightarrow N \leftarrow Y in \mathcal C together with an element I \to FN in \mathcal D—the so-called decoration. We will also describe how to construct hypergraph functors between decorated cospan categories. In particular, this provides a useful tool for constructing a hypergraph category that captures the syntax of a network-style diagrammatic language.
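
To tie the formal description back to the motivating example (my paraphrase, not a quote from the thesis): take \mathcal{C} = \mathbf{FinSet} and let F send a finite set N to the set of all graphs with vertex set N, with the lax structure given by disjoint union of graphs. A decoration I \to FN then just picks out one graph on the apex N, so a morphism is a cospan of finite sets whose apex carries a graph, which is exactly the sort of decorated cospan described above.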

Having developed a method to construct a category where the morphisms are expressions in a diagrammatic language, we turn our attention to categories of semantics. This leads us to the notion of a corelation, to which we devote Chapter 3. Given a factorisation system (\mathcal{E},\mathcal{M}) on a category \mathcal{C}, we define a corelation to be a cospan X \to N \leftarrow Y such that the copairing of the two maps, a map X+Y \to N, is a morphism in \mathcal{E}. Factorising maps X+Y \to N using the factorisation system leads to a notion of equivalence on cospans, and this helps us describe when two diagrams are equivalent. Like cospans, corelations form hypergraph categories.
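
A concrete case may help here, and again this gloss is mine rather than a quote from the thesis: take \mathcal{C} = \mathbf{FinSet} with its (epi, mono) factorisation system. A corelation from X to Y is then a cospan whose copairing X+Y \to N is surjective, which up to isomorphism is just a partition of X+Y. Composing corelations merges any blocks that become connected through the shared middle boundary and then discards the middle, so a corelation records which terminals are wired together while forgetting exactly how.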

In Chapter 4 we decorate corelations. Like decorated cospans, decorated corelations are corelations together with some additional structure on the apex. We again use a lax braided monoidal functor to specify the sorts of extra structure allowed. Moreover, decorated corelations too form the morphisms of a hypergraph category. The culmination of our theoretical work is to show that every hypergraph category and every hypergraph functor can be constructed using decorated corelations. This implies that we can use decorated corelations to construct a semantic hypergraph category for any network-style diagrammatic language, as well as a hypergraph functor from its syntactic category that interprets each diagram. We also discuss how the intuitions behind decorated corelations guide construction of these categories and functors.

Having developed these theoretical tools, in the second part we turn to demonstrating that they have useful applications. Chapter 5 uses corelations to formalise signal flow diagrams representing linear time-invariant discrete dynamical systems as morphisms in a category. Our main result gives an intuitive sound and fully complete equational theory for reasoning about these linear time-invariant systems. Using this framework, we derive a novel structural characterisation of controllability, and consequently provide a methodology for analysing controllability of networked and interconnected systems.

Chapter 6 studies passive linear networks. Passive linear networks are used in a wide variety of engineering applications, but the best studied are electrical circuits made of resistors, inductors and capacitors. The goal is to construct what we call the ‘black box functor’, a hypergraph functor from a category of open circuit diagrams to a category of behaviours of circuits. We construct the former as a decorated cospan category, with each morphism a cospan of finite sets decorated by a circuit diagram on the apex. In this category, composition describes the process of attaching the outputs of one circuit to the inputs of another. The behaviour of a circuit is the relation it imposes between currents and potentials at its terminals. The space of these currents and potentials naturally has the structure of a symplectic vector space, and the relation imposed by a circuit is a Lagrangian linear relation. Thus, the black box functor goes from our category of circuits to the category of symplectic vector spaces and Lagrangian linear relations. Decorated corelations provide a critical tool for constructing these hypergraph categories and the black box functor.

Finally, in Chapter 7 we mention two further research directions. The first is the idea of a ‘bound colimit’, which aims to describe why epi-mono factorisation systems are useful for constructing corelation categories of semantics for open systems. The second research direction pertains to applications of the black box functor for passive linear networks, discussing the work of Jekel on the inverse problem for electric circuits and the work of Baez, Fong, and Pollard on open Markov processes.


Struggles with the Continuum (Part 8)

25 September, 2016

We’ve been looking at how the continuum nature of spacetime poses problems for our favorite theories of physics—problems with infinities. Last time we saw a great example: general relativity predicts the existence of singularities, like black holes and the Big Bang. I explained exactly what these singularities really are. They’re not points or regions of spacetime! They’re more like ways for a particle to ‘fall off the edge of spacetime’. Technically, they are incomplete timelike or null geodesics.

The next step is to ask whether these singularities rob general relativity of its predictive power. The ‘cosmic censorship hypothesis’, proposed by Penrose in 1969, claims they do not.

In this final post I’ll talk about cosmic censorship, and conclude with some big questions… and a place where you can get all these posts in a single file.

Cosmic censorship

To say what we want to rule out, we must first think about what behaviors we consider acceptable. Consider first a black hole formed by the collapse of a star. According to general relativity, matter can fall into this black hole and ‘hit the singularity’ in a finite amount of proper time, but nothing can come out of the singularity.

The time-reversed version of a black hole, called a ‘white hole’, is often considered more disturbing. White holes have never been seen, but they are mathematically valid solutions of Einstein’s equation. In a white hole, matter can come out of the singularity, but nothing can fall in. Naively, this seems to imply that the future is unpredictable given knowledge of the past. Of course, the same logic applied to black holes would say the past is unpredictable given knowledge of the future.

Big Bang cosmology

If white holes are disturbing, perhaps the Big Bang should be more so. In the usual solutions of general relativity describing the Big Bang, all matter in the universe comes out of a singularity! More precisely, if one follows any timelike geodesic back into the past, it becomes undefined after a finite amount of proper time. Naively, this may seem a massive violation of predictability: in this scenario, the whole universe ‘sprang out of nothing’ about 14 billion years ago.

However, in all three examples so far—astrophysical black holes, their time-reversed versions and the Big Bang—spacetime is globally hyperbolic. I explained what this means last time. In simple terms, it means we can specify initial data at one moment in time and use the laws of physics to predict the future (and past) throughout all of spacetime. How is this compatible with the naive intuition that a singularity causes a failure of predictability?

For any globally hyperbolic spacetime M, one can find a smoothly varying family of Cauchy surfaces S_t (t \in \mathbb{R}) such that each point of M lies on exactly one of these surfaces. This amounts to a way of chopping spacetime into ‘slices of space’ for various choices of the ‘time’ parameter t. For an astrophysical black hole, the singularity is in the future of all these surfaces. That is, an incomplete timelike or null geodesic must go through all these surfaces S_t before it becomes undefined. Similarly, for a white hole or the Big Bang, the singularity is in the past of all these surfaces. In either case, the singularity cannot interfere with our predictions of what occurs in spacetime.

A more challenging example is posed by the Kerr–Newman solution of Einstein’s equation coupled to the vacuum Maxwell equations. When

e^2 + (J/m)^2 < m^2

this solution describes a rotating charged black hole with mass m, charge e and angular momentum J in units where c = G = 1. However, an electron violates this inequality. In 1968, Brandon Carter pointed out that if the electron were described by the Kerr–Newman solution, it would have a gyromagnetic ratio of g = 2, much closer to the true answer than a classical spinning sphere of charge, which gives g = 1. But since

e^2 + (J/m)^2 > m^2

this solution gives a spacetime that is not globally hyperbolic: it has closed timelike curves! It also contains a ‘naked singularity’. Roughly speaking, this is a singularity that can be seen by arbitrarily faraway observers in a spacetime whose geometry asymptotically approaches that of Minkowski spacetime. The existence of a naked singularity implies a failure of global hyperbolicity.
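
Just to see how drastically the electron violates this inequality, here is a quick back-of-the-envelope check: a rough sketch of mine using SI constants converted to geometrized units (lengths in metres), not part of any careful argument.

from math import pi, sqrt

G, c, eps0 = 6.674e-11, 2.998e8, 8.854e-12               # SI constants
m_SI, q_SI, J_SI = 9.109e-31, 1.602e-19, 1.055e-34 / 2   # electron mass, charge, spin hbar/2

m = G * m_SI / c**2                           # geometrized mass, ~ 7e-58 m
e = sqrt(G / (4 * pi * eps0)) * q_SI / c**2   # geometrized charge, ~ 1e-36 m
a = J_SI / (m_SI * c)                         # J/m in geometrized units, ~ 2e-13 m

print(e**2 + a**2, 'versus', m**2)   # the left side is bigger by a factor of roughly 10^89
print(e**2 + a**2 < m**2)            # False: no horizon, so a naked singularity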

The cosmic censorship hypothesis comes in a number of forms. The original version due to Penrose is now called ‘weak cosmic censorship’. It asserts that in a spacetime whose geometry asymptotically approaches that of Minkowski spacetime, gravitational collapse cannot produce a naked singularity.

In 1991, Preskill and Thorne made a bet against Hawking in which they claimed that weak cosmic censorship was false. Hawking conceded this bet in 1997 when a counterexample was found. This features finely-tuned infalling matter poised right on the brink of forming a black hole. It almost creates a region from which light cannot escape—but not quite. Instead, it creates a naked singularity!

Given the delicate nature of this construction, Hawking did not give up. Instead he made a second bet, which says that weak cosmic censorship holds ‘generically’ — that is, for an open dense set of initial conditions.

In 1999, Christodoulou proved that for spherically symmetric solutions of Einstein’s equation coupled to a massless scalar field, weak cosmic censorship holds generically. While spherical symmetry is a very restrictive assumption, this result is a good example of how, with plenty of work, we can make progress in rigorously settling the questions raised by general relativity.

Indeed, Christodoulou has been a leader in this area. For example, the vacuum Einstein equations have solutions describing gravitational waves, much as the vacuum Maxwell equations have solutions describing electromagnetic waves. However, gravitational waves can actually form black holes when they collide. This raises the question of the stability of Minkowski spacetime. Must sufficiently small perturbations of the Minkowski metric go away in the form of gravitational radiation, or can tiny wrinkles in the fabric of spacetime somehow amplify themselves and cause trouble—perhaps even a singularity? In 1993, together with Klainerman, Christodoulou proved that Minkowski spacetime is indeed stable. Their proof fills a 514-page book.

In 2008, Christodoulou completed an even longer rigorous study of the formation of black holes. This can be seen as a vastly more detailed look at questions which Penrose’s original singularity theorem addressed in a general, preliminary way. Nonetheless, there is much left to be done to understand the behavior of singularities in general relativity.

Conclusions

In this series of posts, we’ve seen that in every major theory of physics, challenging mathematical questions arise from the assumption that spacetime is a continuum. The continuum threatens us with infinities! Do these infinities threaten our ability to extract predictions from these theories—or even our ability to formulate these theories in a precise way?

We can answer these questions, but only with hard work. Is this a sign that we are somehow on the wrong track? Is the continuum as we understand it only an approximation to some deeper model of spacetime? Only time will tell. Nature is providing us with plenty of clues, but it will take patience to read them correctly.

For more

To delve deeper into singularities and cosmic censorship, try this delightful book, which is free online:

• John Earman, Bangs, Crunches, Whimpers and Shrieks: Singularities and Acausalities in Relativistic Spacetimes, Oxford U. Press, Oxford, 1993.

To read this whole series of posts in one place, with lots more references and links, see:

• John Baez, Struggles with the continuum.