Open Systems in Classical Mechanics

5 August, 2020

I think we need a ‘compositional’ approach to classical mechanics. A classical system is typically built from parts, and we describe the whole system by describing its parts and then saying how they are put together. But this aspect of classical mechanics is typically left informal. You learn how it works in a physics class by doing lots of homework problems, but the rules are never completely spelled out, which is one reason physics is hard.

I want an approach that makes the compositionality of classical mechanics formal: a category (or categories) where the morphisms are open classical systems—that is, classical systems with the ability to interact with the outside world—and composing these morphisms describes putting together open systems to form larger open systems.

There are actually two main approaches to classical mechanics: the Lagrangian approach, which describes the state of a system in terms of position and velocity, and the Hamiltonian approach, which describes the state of a system in terms of position and momentum. There’s a way to go from the first approach to the second, called the Legendre transformation. So we should have at least two categories, one for Lagrangian open systems and one for Hamiltonian open systems, and a functor from the first to the second.

That’s what this paper provides:

• John C. Baez, David Weisbart and Adam Yassine, Open systems in classical mechanics.

The basic idea is not new—but there are some twists! I like treating open systems as cospans with extra structure. But in this case it makes more sense to use spans, since the space of states of a classical system maps to the space of states of any subsystem. We’ll compose these spans using pullbacks.

For example, suppose you have a spring with rocks at both ends:

[Figure: a spring with a rock at each end]

If it’s in 1-dimensional space, and we only care about the position and momentum of the two rocks (not vibrations of the spring), we can say the phase space of this system is the cotangent bundle T^\ast \mathbb{R}^2.

But this system has some interesting subsystems: the rocks at the ends! So we get a span. We could draw it like this:

[Figure: the two-rock system with arrows to each of its rocks]

but what I really mean is that we have a span of phase spaces:

[Figure: the corresponding span of phase spaces]

Here the left-hand arrow maps the state of the whole system to the state of the left-hand rock, and the right-hand arrow maps the state of the whole system to the state of the right-hand rock. These maps are smooth maps between manifolds, but they’re better than that! They are Poisson maps between symplectic manifolds: that’s where the physics comes in. They’re also surjective.
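
To see concretely why these legs are Poisson maps (a quick unpacking of the claim): write a point of T^\ast \mathbb{R}^2 as (q_1, q_2, p_1, p_2), where the subscripts label the two rocks. The left-hand leg is then the projection

(q_1, q_2, p_1, p_2) \mapsto (q_1, p_1)

to the phase space T^\ast \mathbb{R} of the left-hand rock. This is clearly smooth and surjective, and it preserves Poisson brackets: for functions f, g depending only on q_1 and p_1, the bracket \{f, g\} computed on T^\ast \mathbb{R}^2 agrees with the one computed on T^\ast \mathbb{R}, since all the derivatives with respect to q_2 and p_2 vanish.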

Now suppose we have two such open systems. We can compose them, or ‘glue them together’, by identifying the right-hand rock of one with the left-hand rock of the other. We can draw this as follows:

[Figure: two rock-and-spring systems sharing a middle rock]

Now we have a big three-rock system on top, whose states map to states of our original two-rock systems, and then down to states of the individual rocks. This picture really stands for the following commutative diagram:

[Figure: the corresponding commutative diagram of phase spaces]

Here the phase space of the big three-rock system on top is obtained as a pullback: that’s how we formalize the process of gluing together two open systems! We can then discard some information and get a span:

[Figure: the composite span]

Bravo! We’ve managed to build a more complicated open system by gluing together two simpler ones! Or in mathematical terms: we’ve taken two spans of symplectic manifolds, where the maps involved are surjective Poisson maps, and composed them to get another such span.
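
In this example we can say exactly what the pullback is. Each two-rock system has phase space T^\ast \mathbb{R}^2, and both map onto the phase space T^\ast \mathbb{R} of the shared middle rock. The pullback consists of pairs of states that agree on that rock:

T^\ast \mathbb{R}^2 \times_{T^\ast \mathbb{R}} T^\ast \mathbb{R}^2 \cong T^\ast \mathbb{R}^3

which is just the phase space of the three-rock system. The composite span has this as its apex, with legs projecting down to the phase spaces of the leftmost and rightmost rocks.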

Since we can compose them, it shouldn’t be surprising that there’s a category whose morphisms are such spans—or more precisely, isomorphism classes of such spans. But we can go further! We can equip all the symplectic manifolds in this story with Hamiltonians, to describe dynamics. And we get a category whose morphisms are open Hamiltonian systems, which we call \mathsf{HamSy}. This is Theorem 4.2 of our paper.

But be careful: to describe one of these open Hamiltonian systems, we need to choose a Hamiltonian not only on the symplectic manifold at the apex of the span, but also on the two symplectic manifolds at the bottom—its ‘feet’. We need this to be able to compute the new Hamiltonian we get when we compose, or glue together, two open Hamiltonian systems. If we just added Hamiltonians for two subsystems, we’d ‘double-count’ the energy when we glued them together.
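
Roughly speaking, the bookkeeping works like inclusion–exclusion. If H_1 and H_2 are the Hamiltonians on the two apexes and H_0 is the Hamiltonian on the shared foot, the Hamiltonian on the composite apex is something like

H = H_1 \circ \pi_1 + H_2 \circ \pi_2 - H_0 \circ \pi_0

where the \pi's are the maps from the pullback to the two original apexes and to the shared foot. Subtracting the H_0 term is what stops us from counting the energy of the shared subsystem twice. (This is just a sketch of the idea; the paper has the precise construction.)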

This takes us further from the decorated cospan or structured cospan frameworks I’ve been talking about repeatedly on this blog. Using spans instead of cospans is not a big deal: a span in some category is just a cospan in the opposite category. What’s a bigger deal is that we’re decorating not just the apexes of our spans with extra data, but also their feet—and when we compose our spans, we need this data on the feet to compute the data for the apex of the new composite span.

Furthermore, doing pullbacks is subtler in categories of manifolds than in the categories I’d been using for decorated or structured cospans. To handle this nicely, my coauthors wrote a whole separate paper!

• David Weisbart and Adam Yassine, Constructing span categories from categories without pullbacks.

Anyway, in our present paper we get not only a category \mathsf{HamSy} of open Hamiltonian systems, but also a category \mathsf{LagSy} of open Lagrangian systems. So we can do both Hamiltonian and Lagrangian mechanics with open systems.

Moreover, they’re compatible! In classical mechanics we use the Legendre transformation to turn Lagrangian systems into their Hamiltonian counterparts. Now this becomes a functor:

\mathcal{L} \colon \mathsf{LagSy} \to \mathsf{HamSy}

That’s Theorem 5.5.

So, classical mechanics is becoming ‘compositional’. We can convert the Lagrangian descriptions of a bunch of little open systems into their Hamiltonian descriptions and then glue the results together, and we get the same answer as if we did that conversion on the whole big system. Thus, we’re starting to formalize the way physicists think about physical systems ‘one piece at a time’.


Linear Logic and Petri Nets

28 July, 2020

Wow! Elena Di Lavore and Xiaoyan Li explained how to make a category of Petri nets that’s a model of linear logic! I consider myself a sort of expert on Petri nets, but I didn’t know this stuff:

• Elena Di Lavore and Xiaoyan Li, Linear logic flavoured composition of Petri nets, The n-Category Café, 27 July 2020.

It has great pictures, too. Let me summarize a tiny bit.

A Petri net is a very simple thing. Here’s a Petri net that shows how healthy white blood cells (H), individual viruses (V) and infected white blood cells (I) interact when someone gets AIDS:

The yellow boxes are different kinds of things, called species; the aqua boxes are processes, called transitions.

There are different ways to form categories using Petri nets. Jade Master and I have focused on two:

1) How to turn a Petri net into a category where the morphisms say what the Petri net can do.

2) How to make a category with ‘open’ Petri nets as morphisms. Composing these lets you build big Petri nets from smaller pieces.

[Figure: composing open Petri nets]

Di Lavore and Li instead explain:

3) How to make a category with elementary Petri nets as objects: a morphism from one elementary Petri net to another turns each transition of the first into a transition of the second.

An elementary Petri net is one where each transition uses each species at most once as an input and at most once as an output:

[Figure: examples of elementary Petri nets]
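
To make the condition completely concrete, here is a little Python sketch—my own toy example, not one from their article—representing a Petri net as a set of species together with transitions given by input and output multisets, and checking whether it is elementary:

from collections import Counter

class PetriNet:
    def __init__(self, species, transitions):
        # species: a set of species names
        # transitions: dict mapping each transition name to a pair
        # (input multiset, output multiset), each a Counter of species
        self.species = set(species)
        self.transitions = transitions

    def is_elementary(self):
        # Elementary: every transition uses each species at most once
        # as an input and at most once as an output.
        return all(
            all(n <= 1 for n in ins.values()) and all(n <= 1 for n in outs.values())
            for ins, outs in self.transitions.values()
        )

# A made-up net in the spirit of the HIV example above:
# 'infection' consumes one H and one V and produces one I.
hiv = PetriNet({'H', 'V', 'I'},
               {'infection': (Counter({'H': 1, 'V': 1}), Counter({'I': 1}))})
print(hiv.is_elementary())      # True

# A transition consuming two copies of the same species is not elementary.
pairing = PetriNet({'X', 'Y'},
                   {'pair': (Counter({'X': 2}), Counter({'Y': 1}))})
print(pairing.is_elementary())  # False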

The category they get has lots of interesting structure! Like products, shown here:

[Figure: a product of elementary Petri nets]

In fact it has products, coproducts, two other monoidal structures, and exponentials—all fitting together in a wonderful way, as described by intuitionistic linear logic! To prove this, the key is to use Valeria de Paiva’s work on “Dialectica categories”. They explain how.

This is not original research: Elena Di Lavore and Xiaoyan Li wrote this blog article for the ACT2020 Adjoint School, and they’re explaining Carolyn Brown and Doug Gurr’s paper “A categorical linear framework for Petri nets”.

It’s worth comparing this paper:

• Uffe Engberg and Glynn Winskel, Petri nets as models of linear logic, in Colloquium on Trees in Algebra and Programming, Springer, Berlin, 1990, pp. 147–161.

Engberg and Winskel get a model of linear logic—or to be precise, a ‘commutative quantale’—by taking the category you get from a single Petri net as in item 1) and massaging it a bit. I explained it here:

• John Baez, Quantales from Petri nets, Azimuth, 6 October 2019.


Open Markov Processes

4 July, 2020

I gave a talk on some work I did with Kenny Courser. You can see slides of the talk, and also a video and the papers it’s based on:

Coarse-graining open Markov processes.

Abstract. We illustrate some new paradigms in applied category theory with the example of coarse-graining open Markov processes. Coarse-graining is a standard method of extracting a simpler Markov process from a more complicated one by identifying states. Here we extend coarse-graining to ‘open’ Markov processes: that is, those where probability can flow in or out of certain states called ‘inputs’ and ‘outputs’. One can build up an ordinary Markov process from smaller open pieces in two basic ways: composition, where we identify the outputs of one open Markov process with the inputs of another, and tensoring, where we set two open Markov processes side by side. These constructions make open Markov processes into the morphisms of a symmetric monoidal category. But we can go further and construct a symmetric monoidal double category where the 2-morphisms include ways of coarse-graining open Markov processes. We can describe the behavior of open Markov processes using double functors out of this double category.

For more, look at these:

• John Baez, Brendan Fong and Blake Pollard, A compositional framework for Markov processes. (Blog article here.)

• John Baez and Kenny Courser, Coarse-graining open Markov processes. (Blog article here.)

• Kenny Courser, Open Systems: A Double Categorical Perspective.


Getting to the Bottom of Noether’s Theorem

29 June, 2020

Most of us have been staying holed up at home lately. I spent the last month holed up writing a paper that expands on my talk at a conference honoring the centennial of Noether’s 1918 paper on symmetries and conservation laws. This made my confinement a lot more bearable. It was good getting back to this sort of mathematical physics after a long time spent on applied category theory. It turns out I really missed it.

While everyone at the conference kept emphasizing that Noether’s 1918 paper had two big theorems in it, my paper is just about the easy one—the one physicists call Noether’s theorem:

Getting to the bottom of Noether’s theorem.

People often summarize this theorem by saying “symmetries give conservation laws”. And that’s right, but it’s only true under some assumptions: for example, that the equations of motion come from a Lagrangian.

This leads to some interesting questions. For which types of physical theories do symmetries give conservation laws? What are we assuming about the world, if we assume it is described by a theory of this type? It’s hard to get to the bottom of these questions, but it’s worth trying.

We can prove versions of Noether’s theorem relating symmetries to conserved quantities in many frameworks. While a differential geometric framework is truer to Noether’s original vision, my paper studies the theorem algebraically, without mentioning Lagrangians.

Now, Atiyah said:

…algebra is to the geometer what you might call the Faustian offer. As you know, Faust in Goethe’s story was offered whatever he wanted (in his case the love of a beautiful woman), by the devil, in return for selling his soul. Algebra is the offer made by the devil to the mathematician. The devil says: I will give you this powerful machine, it will answer any question you like. All you need to do is give me your soul: give up geometry and you will have this marvellous machine.

While this is sometimes true, algebra is more than a computational tool: it allows us to express concepts in a very clear and distilled way. Furthermore, the geometrical framework developed for classical mechanics is not sufficient for quantum mechanics. An algebraic approach emphasizes the similarity between classical and quantum mechanics, while also clarifying their differences.

In talking about Noether’s theorem I keep using an interlocking trio of concepts for describing physical systems: ‘states’, ‘observables’ and ‘generators’. A physical system has a convex set of states, where convex linear combinations let us describe probabilistic mixtures of states. An observable is a real-valued quantity whose value depends—perhaps with some randomness—on the state. More precisely: an observable maps each state to a probability measure on the real line. A generator, on the other hand, is something that gives rise to a one-parameter group of transformations of the set of states—or dually, of the set of observables.

It’s easy to mix up observables and generators, but I want to distinguish them. When we say ‘the energy of the system is 7 joules’, we are treating energy as an observable: something you can measure. When we say ‘the Hamiltonian generates time translations’, we are treating the Hamiltonian as a generator.

In both classical mechanics and ordinary complex quantum mechanics we usually say the Hamiltonian is the energy, because we have a way to identify them. But observables and generators play distinct roles—and in some theories, such as real or quaternionic quantum mechanics, they are truly different. In all the theories I consider in my paper the set of observables is a Jordan algebra, while the set of generators is a Lie algebra. (Don’t worry, I explain what those are.)

When we can identify observables with generators, we can state Noether’s theorem as the following equivalence:


The generator a generates transformations that leave the observable b fixed.

\Updownarrow

The generator b generates transformations that leave the observable a fixed.

In this beautifully symmetrical statement, we switch from thinking of a as the generator and b as the observable in the first part to thinking of b as the generator and a as the observable in the second part. Of course, this statement is true only under some conditions, and the goal of my paper is to better understand these conditions. But the most fundamental condition, I claim, is the ability to identify observables with generators.

In classical mechanics we treat observables as being the same as generators, by treating them as elements of a Poisson algebra, which is both a Jordan algebra and a Lie algebra. In quantum mechanics observables are not quite the same as generators. They are both elements of something called a ∗-algebra. Observables are self-adjoint, obeying

a^* = a

while generators are skew-adjoint, obeying

a^* = -a

The self-adjoint elements form a Jordan algebra, while the skew-adjoint elements form a Lie algebra.

In ordinary complex quantum mechanics we use a complex ∗-algebra. This lets us turn any self-adjoint element into a skew-adjoint one by multiplying it by \sqrt{-1}. Thus, the complex numbers let us identify observables with generators! In real and quaternionic quantum mechanics this identification is impossible, so the appearance of complex numbers in quantum mechanics is closely connected to Noether’s theorem.
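
The computation behind this is one line: if a^* = a, then (ia)^* = -i a^* = -ia, so multiplying a self-adjoint element by i gives a skew-adjoint one, and multiplying by -i takes us back. In real quantum mechanics there is no \sqrt{-1} to multiply by, and in quaternionic quantum mechanics no square root of -1 is central, so in neither case does this trick work.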

In short, classical mechanics and ordinary complex quantum mechanics fit together in this sort of picture:

[Figure: how classical mechanics and ordinary complex quantum mechanics fit together]

To dig deeper, it’s good to examine generators on their own: that is, Lie algebras. Lie algebras arise very naturally from the concept of ‘symmetry’. Any Lie group gives rise to a Lie algebra, and any element of this Lie algebra then generates a one-parameter family of transformations of that very same Lie algebra. This lets us state a version of Noether’s theorem solely in terms of generators:


The generator a generates transformations that leave the generator b fixed.

\Updownarrow

The generator b generates transformations that leave the generator a fixed.

And when we translate these statements into equations, their equivalence follows directly from this elementary property of the Lie bracket:


[a,b] = 0

\Updownarrow

[b,a] = 0
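
To spell out the translation: the generator a gives the one-parameter group of transformations e^{t\,\mathrm{ad}_a}, where \mathrm{ad}_a(b) = [a,b]. These transformations leave b fixed precisely when [a,b] = 0: differentiating e^{t\,\mathrm{ad}_a} b at t = 0 gives [a,b], and conversely [a,b] = 0 forces e^{t\,\mathrm{ad}_a} b = b for all t. Since [b,a] = -[a,b], this condition is symmetric in a and b.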

Thus, Noether’s theorem is almost automatic if we forget about observables and work solely with generators. The only questions left are: why should symmetries be described by Lie groups, and what is the meaning of this property of the Lie bracket?

In my paper I tackle both these questions, and point out that the Lie algebra formulation of Noether’s theorem comes from a more primitive group formulation, which says that whenever you have two group elements g and h,


g commutes with h.

\Updownarrow

h commutes with g.

That is: whenever you’ve got two ways of transforming a physical system, the first transformation is ‘conserved’ by the second if and only if the second is conserved by the first!

However, observables are crucial in physics. Working solely with generators in order to make Noether’s theorem a tautology would be another sort of Faustian bargain. So, to really get to the bottom of Noether’s theorem, we need to understand the map from observables to generators. In ordinary quantum mechanics this comes from multiplication by i. But this just pushes the mystery back a notch: why should we be using the complex numbers in quantum mechanics?

For this it’s good to spend some time examining observables on their own: that is, Jordan algebras. Those of greatest importance in physics are the unital JB-algebras, which are unfortunately named not after me, but after Jordan and Banach. These allow a unified approach to real, complex and quaternionic quantum mechanics, along with some more exotic theories. So, they let us study how the role of complex numbers in quantum mechanics is connected to Noether’s theorem.

Any unital JB-algebra O has a partial ordering: that is, we can talk about one observable being greater than or equal to another. With the help of this we can define states on O, and prove that any observable maps each state to a probability measure on the real line.

More surprisingly, any JB-algebra also gives rise to two Lie algebras. The smaller of these, say L, has elements that generate transformations of O that preserve all the structure of this unital JB-algebra. They also act on the set of states. Thus, elements of L truly deserve to be considered ‘generators’.

In a unital JB-algebra there is not always a way to reinterpret observables as generators. However, Alfsen and Shultz have defined the notion of a ‘dynamical correspondence’ for such an algebra, which is a well-behaved map

\psi \colon O \to L

One of the two conditions they impose on this map implies a version of Noether’s theorem. They prove that any JB-algebra with a dynamical correspondence gives a complex ∗-algebra where the observables are self-adjoint elements, the generators are skew-adjoint, and we can convert observables into generators by multiplying them by i.

This result is important, because the definition of JB-algebra does not involve the complex numbers, nor does the concept of dynamical correspondence. Rather, the role of the complex numbers in quantum mechanics emerges from a map from observables to generators that obeys conditions including Noether’s theorem!

To be a bit more precise, Alfsen and Shultz’s first condition on the map \psi \colon O \to L says that every observable a \in O generates transformations that leave a itself fixed. I call this the self-conservation principle. It implies Noether’s theorem.

However, in their definition of dynamical correspondence, Alfsen and Shultz also impose a second, more mysterious condition on the map \psi. I claim that this condition is best understood in terms of the larger Lie algebra associated to a unital JB-algebra. As a vector space this is the direct sum

A = O \oplus L

but it’s equipped with a Lie bracket such that

[-,-] \colon L \times L \to L    \qquad [-,-] \colon L \times O \to O

[-,-] \colon O \times L \to O    \qquad [-,-] \colon O \times O \to L
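
In ordinary quantum mechanics you can check the last of these bracket types by hand: if a^* = a and b^* = b, then

[a,b]^* = (ab - ba)^* = b^* a^* - a^* b^* = ba - ab = -[a,b]

so the commutator of two observables is skew-adjoint—a generator, not an observable. That is the pattern [-,-] \colon O \times O \to L above.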

As I mentioned, elements of L generate transformations of O that preserve all the structure on this unital JB-algebra. Elements of O also generate transformations of O, but these only preserve its vector space structure and partial ordering.

What’s the meaning of these other transformations? I claim they’re connected to statistical mechanics.

For example, consider ordinary quantum mechanics and let O be the unital JB-algebra of all bounded self-adjoint operators on a complex Hilbert space. Then L is the Lie algebra of all bounded skew-adjoint operators on this Hilbert space. There is a dynamical correspondence sending any observable H \in O to the generator \psi(H) = iH \in L, which then generates a one-parameter group of transformations of O like this:

a \mapsto e^{itH/\hbar} \, a \, e^{-itH/\hbar}  \qquad \forall t \in \mathbb{R}, a \in O

where \hbar is Planck’s constant. If H is the Hamiltonian of some system, this is the usual formula for time evolution of observables in the Heisenberg picture. But H also generates a one-parameter group of transformations of O as follows:

a \mapsto  e^{-\beta H/2} \, a \, e^{-\beta H/2}  \qquad \forall \beta \in \mathbb{R}, a \in O

Writing \beta = 1/kT where T is temperature and k is Boltzmann’s constant, I claim that these are ‘thermal transformations’. Acting on a state in thermal equilibrium at some temperature, these transformations produce states in thermal equilibrium at other temperatures (up to normalization).
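
Here is a quick numerical sanity check of that claim, with a made-up three-level Hamiltonian (my own sketch, not from the paper):

import numpy as np
from scipy.linalg import expm

H = np.diag([0.0, 1.0, 2.5])          # a toy Hamiltonian

def gibbs(beta):
    # Gibbs state at inverse temperature beta
    rho = expm(-beta * H)
    return rho / np.trace(rho)

beta0, beta = 0.7, 0.4
K = expm(-beta * H / 2)

# Thermal transformation acting on the Gibbs state at beta0, then renormalized:
rho = K @ gibbs(beta0) @ K
rho = rho / np.trace(rho)

print(np.allclose(rho, gibbs(beta0 + beta)))   # True: equilibrium at beta0 + beta

So, up to normalization, the thermal transformation at inverse temperature \beta carries the equilibrium state at \beta_0 to the equilibrium state at \beta_0 + \beta.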

The analogy between it/\hbar and 1/kT is often summarized by saying “inverse temperature is imaginary time”. The second condition in Alfsen and Shultz’s definition of dynamical correspondence captures this principle in a way that does not explicitly mention the complex numbers. Thus, we may very roughly say their result explains the role of complex numbers in quantum mechanics starting from three assumptions:

• observables form a Jordan algebra of a nice sort (a unital JB-algebra)

• the self-conservation principle (and thus Noether’s theorem)

• the relation between time and inverse temperature.

I still want to understand all of this more deeply, but the way statistical mechanics entered the game was surprising to me, so I feel I made a little progress.

I hope the paper is half as fun to read as it was to write! There’s a lot more in it than described here.


ACT2020 Program

27 June, 2020


Applied Category Theory 2020 is coming up soon! After the Tutorial Day on Sunday July 5th, there will be talks from Monday July 6th to Friday July 10th. All talks will be live on Zoom and on YouTube. Recorded versions will appear on YouTube later.

Here is the program—click on it to download a more readable version:


Here are the talks! They come in three kinds: keynotes, regular presentations and short industry presentations. Within each I’ve listed them in alphabetical order by speaker: I believe the first author is the speaker.

This is gonna be fun.

Keynote presentations (35 minutes)

• Henry Adams, Johnathan Bush and Joshua Mirth, Operations on metric thickenings.

• Nicolas Blanco and Noam Zeilberger: Bifibrations of polycategories and classical linear logic.

• Bryce Clarke, Derek Elkins, Jeremy Gibbons, Fosco Loregian, Bartosz Milewski, Emily Pillmore and Mario Román: Profunctor optics, a categorical update.

• Tobias Fritz, Tomáš Gonda, Paolo Perrone and Eigil Rischel: Distribution functors, second-order stochastic dominance and the Blackwell–Sherman–Stein Theorem in categorical probability.

• Micah Halter, Evan Patterson, Andrew Baas and James Fairbanks: Compositional scientific computing with Catlab and SemanticModels.

• Joachim Kock: Whole-grain Petri nets and processes.

• Andre Kornell, Bert Lindenhovius and Michael Mislove: Quantum CPOs.

• Martha Lewis: Towards logical negation in compositional distributional semantics.

• Jade Master and John Baez: Open Petri nets.

• Lachlan McPheat, Mehrnoosh Sadrzadeh, Hadi Wazni and Gijs Wijnholds, Categorical vector space semantics for Lambek calculus with a relevant modality.

• David Jaz Myers: Double categories of open dynamical systems.

• Toby St Clere Smithe, Cyber Kittens, or first steps towards categorical cybernetics.

Regular presentations (20 minutes)

• Robert Atkey, Bruno Gavranović, Neil Ghani, Clemens Kupke, Jeremy Ledent and Fredrik Nordvall Forsberg: Compositional game theory, compositionally.

• John Baez and Kenny Courser: Coarse-graining open Markov processes.

• Georgios Bakirtzis, Christina Vasilakopoulou and Cody Fleming, Compositional cyber-physical systems modeling.

• Marco Benini, Marco Perin, Alexander Schenkel and Lukas Woike: Categorification of algebraic quantum field theories.

• Daniel Cicala: Rewriting structured cospans.

• Bryce Clarke: A diagrammatic approach to symmetric lenses.

• Bob Coecke, Giovanni de Felice, Konstantinos Meichanetzidis, Alexis Toumi, Stefano Gogioso and Nicolo Chiappori: Quantum natural language processing.

• Geoffrey Cruttwell, Jonathan Gallagher and Dorette Pronk: Categorical semantics of a simple differential programming language.

• Swaraj Dash and Sam Staton: A monad for probabilistic point processes.

• Giovanni de Felice, Elena Di Lavore, Mario Román and Alexis Toumi: Functorial language games for question answering.

• Giovanni de Felice, Alexis Toumi and Bob Coecke: DisCoPy: monoidal categories in Python.

• Brendan Fong, David Jaz Myers and David I. Spivak: Behavioral mereology: a modal logic for passing constraints.

• Rocco Gangle, Gianluca Caterina and Fernando Tohme, A generic figures reconstruction of Peirce’s existential graphs (alpha).

• Jules Hedges and Philipp Zahn: Open games in practice.

• Jules Hedges: Non-compositionality in categorical systems theory.

• Michael Johnson and Robert Rosebrugh, The more legs the merrier: A new composition for symmetric (multi-)lenses.

• Joe Moeller, John Baez and John Foley: Petri nets with catalysts.

• John Nolan and Spencer Breiner, Symmetric monoidal categories with attributes.

• Joseph Razavi and Andrea Schalk: Gandy machines made easy via category theory.

• Callum Reader: Measures and enriched categories.

• Mario Román: Open diagrams via coend calculus.

• Luigi Santocanale, Dualizing sup-preserving endomaps of a complete lattice.

• Dan Shiebler: Categorical stochastic processes and likelihood.

• Richard Statman, Products in a category with only one object.

• David I. Spivak: Poly: An abundant categorical setting for mode-dependent dynamics.

• Christine Tasson and Martin Hyland, The linear-non-linear substitution 2-monad.

• Tarmo Uustalu, Niccolò Veltri and Noam Zeilberger: Proof theory of partially normal skew monoidal categories.

• Dmitry Vagner, David I. Spivak and Evan Patterson: Wiring diagrams as normal forms for computing in symmetric monoidal categories.

• Matthew Wilson, James Hefford, Guillaume Boisseau and Vincent Wang: The safari of update structures: visiting the lens and quantum enclosures.

• Paul Wilson and Fabio Zanasi: Reverse derivative ascent: a categorical approach to learning Boolean circuits.

• Vladimir Zamdzhiev: Computational adequacy for substructural lambda calculi.

• Gioele Zardini, David I. Spivak, Andrea Censi and Emilio Frazzoli: A compositional sheaf-theoretic framework for event-based systems.

Industry presentations (8 minutes)

• Arquimedes Canedo (Siemens Corporate Technology).

• Brendan Fong (Topos Institute).

• Jelle Herold (Statebox): Industrial strength CT.

• Steve Huntsman (BAE): Inhabiting the value proposition for category theory.

• Ilyas Khan (Cambridge Quantum Computing).

• Alan Ransil (Protocol Labs): Compositional data structures for the decentralized web.

• Alberto Speranzon (Honeywell).

• Ryan Wisnesky (Conexus): Categorical informatics at scale.


ACT2020 Tutorial Day

17 June, 2020

If you’re wanting to learn some applied category theory, register for the tutorials that are taking place on July 5, 2020 as part of ACT2020!

Applied category theory offers a rigorous mathematical language and toolset for relating different concepts from across math, science, and technology. For example, category theory finds common patterns between geometry (shapes), algebra (equations), numbers, logic, probability, etc. Applied category theory (ACT) looks for how those very same patterns extend outward to data, programs, processes, physics, linguistics, and so on—things we see in the real world. The field is currently growing, as new applications and common patterns are being found all the time. When you understand these ideas, more of your intuitions about the world can be made rigorous and thus be communicated at a larger scale. This in turn gives our community a chance to solve larger and more complex scientific, technological, and maybe even societal problems.

This year’s international applied category theory conference ACT2020 is having a tutorial day, meant to introduce newcomers to applied category theory. Tutorial day will take place on July 5 and will include a few main topics that will be taught semi-traditionally (via presentation, exercises, and discussion) over Zoom, as well as mentors who will be available throughout the day to work with smaller groups and/or individuals. We invite you to sign up here if you’re interested, so we can keep you posted. Hope to see you there!

The four courses will be roughly as follows:

• David Spivak: categorical databases for introducing sets, functions, categories, and functors.

• Fabrizio Genovese: string diagrams as a graphical language for category theory.

• Emily Riehl: the Yoneda lemma in the context of matrices.

• Paolo Perrone: monads and comonads.


Thermalization

12 June, 2020

I’m wondering if people talk about this. Maybe you know?

Given a self-adjoint operator H that’s bounded below and a density matrix D on some Hilbert space, we can define for any \beta > 0 a new density matrix

\displaystyle{ D_\beta = \frac{e^{-\beta H/2} \, D \, e^{-\beta H/2}}{\mathrm{tr}(e^{-\beta H/2} \, D \, e^{-\beta H/2})} }

I would like to call this the thermalization of D when H is a Hamiltonian and \beta = 1/kT where T is the temperature and k is Boltzmann’s constant.

For example, in the finite-dimensional case we can take D to be the identity matrix, normalized to have trace 1. Then D_\beta is the Gibbs state at temperature T: that is, the state of thermal equilibrium at temperature T.
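
For concreteness, here is how the formula plays out numerically in a made-up finite-dimensional example (mine, just to illustrate):

import numpy as np
from scipy.linalg import expm

H = np.diag([0.0, 1.0, 3.0])      # toy self-adjoint operator, bounded below
beta = 2.0                        # beta = 1/kT

def thermalize(D, beta):
    K = expm(-beta * H / 2)
    out = K @ D @ K
    return out / np.trace(out)

# The identity matrix, normalized to trace 1, thermalizes to the Gibbs state:
D = np.eye(3) / 3
gibbs = expm(-beta * H) / np.trace(expm(-beta * H))
print(np.allclose(thermalize(D, beta), gibbs))      # True

# Some other density matrix, say a pure state not diagonal in the energy basis:
v = np.array([1.0, 1.0, 1.0]) / np.sqrt(3)
print(np.trace(thermalize(np.outer(v, v), beta)))   # 1.0 (up to rounding)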

But I want to know if you’ve seen people do this thermalization trick starting from some other density matrix D.


Categorical Statistics Group

10 June, 2020

As a spinoff of the workshop Categorical Probability and Statistics, Oliver Shetler has organized a reading group on category theory applied to statistics. The first meeting is Saturday June 27th at 17:00 UTC.

You can sign up for the group here, and also read more about it there. We’re discussing the group on the Category Theory Community Server, so if you want to join the reading group you should probably also join that.

Here is a reading list. I’m sure the group won’t cover all these papers—we’ll start with the first one and see how it goes from there. But it’s certainly helpful to have a list like this.

• McCullagh, What is a statistical model?

• Morse and Sacksteder, Statistical isomorphism.

• Simpson, Probability sheaves and the Giry monad.

• Fritz, A synthetic approach to Markov kernels, conditional independence and theorems on sufficient statistics.

• Jacobs, Probabilities, distribution monads, and convex categories.

• Keimel, The monad of probability measures over compact ordered spaces and its Eilenberg-Moore algebras.

• McCullagh, Di Nardo and Senato, Natural statistics for spectral samples.

• Perrone, Categorical Probability and Stochastic Dominance in Metric Spaces. (Ph.D. thesis)

• Patterson, The Algebra and Machine Representation of Statistical Models. (Ph.D. thesis)

• Tuyeras, A category theoretical argument for causal inference.

• Culbertson and Sturtz, A categorical foundation for Bayesian probability.

• Fong, Causal Theories: A Categorical Perspective on Bayesian Networks. (Masters thesis)

• Fritz and Perrone, A probability monad as the colimit of spaces of finite samples.

• Fritz and Perrone, Bimonoidal structure of probability monads.

• Fritz, A presentation of the category of stochastic matrices.

• Jacobs and Furber, Towards a categorical account of conditional probability.

• Bradley, At the Interface of Algebra and Statistics. (Ph.D. thesis)

• Bradley, Stoudenmire and Terilla, Modeling sequences with quantum states.

• Jacobs, Categorical aspects of parameter learning.

• Jacobs, Parameters and parameterization in specification, using distributive categories.

• Parzygnat, Inverses, disintegrations, and Bayesian inversion in quantum Markov categories.

• Jacobs, A channel-based perspective on conjugate priors.


ACT@UCR Seminar (Part 2)

7 June, 2020

The spring 2020 seminar on applied category theory at U.C. Riverside is done! Here you can see videos of all the talks, along with talk slides, discussions and more:

• John Baez: Structured cospans and double categories.

• Prakash Panangaden: A categorical view of conditional expectation.

• Jules Hedges: Open games: the long road to practical applications.

• Michael Shulman: Star-autonomous envelopes.

• Gershom Bazerman: A localic approach to the semantics of dependency, conflict, and concurrency.

• Sarah Rovner-Frydman: Separation logic through a new lens.

• Tai-Danae Bradley: Formal concepts vs. eigenvectors of density operators.

• Gordon Plotkin: A complete axiomatisation of partial differentiation.

• Simon Willerton: The Legendre–Fenchel transform from a category theoretic perspective.

• Nina Otter: Values and inclusivity in the applied category theory community.

Thanks to everyone for participating in this!


Values and Inclusivity in the ACT Community

27 May, 2020

In the tenth and final talk of this spring’s ACT@UCR seminar, Nina Otter led a discussion about diversity in the applied category theory community, with these speakers:

• Nina Otter: introduction, and some potential initiatives

• Jade Master: Experience in setting up an online research community for minorities in ACT

• Brendan Fong: Statement of values for ACT community

• Emily Riehl: Experience at MATRIX institute

• Christian Williams: Quick overview of ACT server

This is a change from her originally scheduled talk, due to the killing of George Floyd and ensuing events.

The discussion took place at the originally scheduled time on Wednesday June 3rd. Afterwards we had discussions at the Category Theory Community Server, here:

https://categorytheory.zulipchat.com/#narrow/stream/229966-ACT.40UCR-seminar/topic/June.203rd.3A.20Nina.20Otter

You can join the conversation there if you sign in.

You can see her slides here, or download a video here, or watch the video here:

• Nina Otter, Values and inclusivity in the applied category theory community.

Abstract. Saddened by the current events, we are taking this opportunity to pause and reflect on what we can do to change the status quo and try to bring about real and long-lasting change. Thus, we are holding a discussion aimed at finding concrete solutions to make the Applied Category Theory community more inclusive, and also to reflect about the values that our community would like to stand for and endorse, in particular, in terms of which sources of funding go against our values. While this discussion is specific to the applied category theory community, we believe that many of the topics will be of interest also to people in other fields, and thus we welcome anybody with an interest to attend. The discussion will consist of two parts: we will have first several people give short talks to discuss common issues that we need to address, as well as present specific plans for initiatives that we could take. We believe that the current pandemic, and the fact that all activities are now taking place remotely, gives us the opportunity to involve people who would otherwise find it difficult to travel, because of disabilities, financial reasons or care-taking responsibilities. Thus, now we have the opportunity to come up with new types of mentoring, collaborations, and many other initiatives that might have been difficult to envision until just a couple of months ago. The second part of the discussion will take place on the category theory community server, and its purpose is to allow for a broader participation in the discussion, and ideally during this part we will be able to flesh out in detail the specific initiatives that have been proposed in the talks.