## Open Systems in Classical Mechanics

I think we need a ‘compositional’ approach to classical mechanics. A classical system is typically built from parts, and we describe the whole system by describing its parts and then saying how they are put together. But this aspect of classical mechanics is typically left informal. You learn how it works in a physics class by doing lots of homework problems, but the rules are never completely spelled out, which is one reason physics is hard.

I want an approach that makes the compositionality of classical mechanics formal: a category (or categories) where the morphisms are open classical systems—that is, classical systems with the ability to interact with the outside world—and composing these morphisms describes putting together open systems to form larger open systems.

There are actually two main approaches to classical mechanics: the Lagrangian approach, which describes the state of a system in terms of position and velocity, and the Hamiltonian approach, which describes the state of a system in terms of position and momentum. There’s a way to go from the first approach to the second, called the Legendre transformation. So we should have at least two categories, one for Lagrangian open systems and one for Hamiltonian open systems, and a functor from the first to the second.

That’s what this paper provides:

• John C. Baez, David Weisbart and Adam Yassine, Open systems in classical mechanics.

The basic idea is by now not new—but there are some twists! I like treating open systems as cospans with extra structure. But in this case it makes more sense to use spans, since the space of states of a classical system maps to the space of states of any subsystem. We’ll compose these spans using pullbacks.

For example, suppose you have a spring with rocks at both ends:

If it’s in 1-dimensional space, and we only care about the position and momentum of the two rocks (not vibrations of the spring), we can say the phase space of this system is the cotangent bundle $T^\ast \mathbb{R}^2.$

But this system has some interesting subsystems: the rocks at the ends! So we get a span. We could draw it like this:

but what I really mean is that we have a span of phase spaces:
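In symbols (writing $\pi_L$ and $\pi_R$ for the two projection maps; this is my notation, not necessarily the paper’s), the span looks like this:

```latex
T^\ast \mathbb{R} \xleftarrow{\ \pi_L\ } T^\ast \mathbb{R}^2 \xrightarrow{\ \pi_R\ } T^\ast \mathbb{R}
```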

Here the left-hand arrow maps the state of the whole system to the state of the left-hand rock, and the right-hand arrow maps the state of the whole system to the state of the right-hand rock. These maps are smooth maps between manifolds, but they’re better than that! They are Poisson maps between symplectic manifolds: that’s where the physics comes in. They’re also surjective.
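Here is a minimal sketch of these two projections in code, representing a state in $T^\ast \mathbb{R}^2$ as a tuple $(q_1, p_1, q_2, p_2)$. The function names are my own, chosen for illustration:

```python
# A state of the two-rock system lives in T*R^2, represented here as a
# tuple (q1, p1, q2, p2) of positions and momenta.  The two legs of the
# span project a whole-system state onto each rock's phase space T*R.

def left_leg(state):
    """Map a whole-system state to the state of the left rock."""
    q1, p1, q2, p2 = state
    return (q1, p1)

def right_leg(state):
    """Map a whole-system state to the state of the right rock."""
    q1, p1, q2, p2 = state
    return (q2, p2)

state = (0.0, 1.5, 2.0, -1.5)
print(left_leg(state))   # (0.0, 1.5)
print(right_leg(state))  # (2.0, -1.5)
```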

Now suppose we have two such open systems. We can compose them, or ‘glue them together’, by identifying the right-hand rock of one with the left-hand rock of the other. We can draw this as follows:

Now we have a big three-rock system on top, whose states map to states of our original two-rock systems, and then down to states of the individual rocks. This picture really stands for the following commutative diagram:

Here the phase space of the big three-rock system on top is obtained as a pullback: that’s how we formalize the process of gluing together two open systems! We can then discard some information and get a span:

Bravo! We’ve managed to build a more complicated open system by gluing together two simpler ones! Or in mathematical terms: we’ve taken two spans of symplectic manifolds, where the maps involved are surjective Poisson maps, and composed them to get another such span.
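The gluing step can be sketched concretely: a state of the composite three-rock system is a pair of two-rock states that agree on the shared middle rock. This is a toy illustration with my own names and conventions, not code from the paper:

```python
# Composing two spans by pullback: a composite state exists exactly when
# the two subsystem states assign the same state to the shared rock.

def compose(state_a, state_b, tol=1e-12):
    """Glue a state (q1,p1,q2,p2) of the left two-rock system to a state
    (q2',p2',q3,p3) of the right one, provided both assign the same
    state to the middle rock."""
    q1, p1, q2, p2 = state_a
    q2b, p2b, q3, p3 = state_b
    if abs(q2 - q2b) > tol or abs(p2 - p2b) > tol:
        raise ValueError("states disagree on the shared rock")
    return (q1, p1, q2, p2, q3, p3)

glued = compose((0.0, 1.0, 2.0, -1.0), (2.0, -1.0, 4.0, 0.5))
print(glued)  # (0.0, 1.0, 2.0, -1.0, 4.0, 0.5)
```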

Since we can compose them, it shouldn’t be surprising that there’s a category whose morphisms are such spans—or more precisely, isomorphism classes of such spans. But we can go further! We can equip all the symplectic manifolds in this story with Hamiltonians, to describe dynamics. And we get a category whose morphisms are open Hamiltonian systems, which we call $\mathsf{HamSy}.$ This is Theorem 4.2 of our paper.

But be careful: to describe one of these open Hamiltonian systems, we need to choose a Hamiltonian not only on the symplectic manifold at the apex of the span, but also on the two symplectic manifolds at the bottom—its ‘feet’. We need this to be able to compute the new Hamiltonian we get when we compose, or glue together, two open Hamiltonian systems. If we just added Hamiltonians for two subsystems, we’d ‘double-count’ the energy when we glued them together.
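Numerically, the rule looks like this. I pick toy Hamiltonians of my own (unit masses and spring constants, not the paper’s examples) and build the composite Hamiltonian by adding the two apex Hamiltonians and subtracting the one on the shared foot:

```python
# Toy Hamiltonians: each two-rock system has the kinetic energy of both
# rocks plus a spring potential; the shared foot is a single free rock.

def H_two_rock(q1, p1, q2, p2, m=1.0, k=1.0):
    return p1**2 / (2 * m) + p2**2 / (2 * m) + 0.5 * k * (q2 - q1)**2

def H_rock(q, p, m=1.0):
    return p**2 / (2 * m)

def H_three_rock(q1, p1, q2, p2, q3, p3):
    """Composite Hamiltonian on the glued three-rock system."""
    return (H_two_rock(q1, p1, q2, p2)
            + H_two_rock(q2, p2, q3, p3)
            - H_rock(q2, p2))  # subtract to avoid double-counting rock 2

print(H_three_rock(0.0, 1.0, 1.0, 1.0, 2.0, 1.0))  # 2.5
```

The result is the kinetic energy of all three rocks counted once each, plus both spring potentials, exactly as the gluing rule intends.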

This takes us further from the decorated cospan or structured cospan frameworks I’ve been talking about repeatedly on this blog. Using spans instead of cospans is not a big deal: a span in some category is just a cospan in the opposite category. What’s a bigger deal is that we’re decorating not just the apex of each span with extra data, but also its feet—and when we compose our spans, we need this data on the feet to compute the data for the apex of the new composite span.

Furthermore, doing pullbacks is subtler in categories of manifolds than in the categories I’d been using for decorated or structured cospans. To handle this nicely, my coauthors wrote a whole separate paper!

• David Weisbart and Adam Yassine, Constructing span categories from categories without pullbacks.

Anyway, in our present paper we get not only a category $\mathsf{HamSy}$ of open Hamiltonian systems, but also a category $\mathsf{LagSy}$ of open Lagrangian systems. So we can do both Hamiltonian and Lagrangian mechanics with open systems.

Moreover, they’re compatible! In classical mechanics we use the Legendre transformation to turn Lagrangian systems into their Hamiltonian counterparts. Now this becomes a functor:

$\mathcal{L} \colon \mathsf{LagSy} \to \mathsf{HamSy}$

That’s Theorem 5.5.
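Concretely, for Lagrangians of the kinetic-minus-potential kind considered here, the Legendre transformation is the familiar one (I’m sketching the standard textbook formulas; the paper’s precise statement may differ in details). It defines the momenta as derivatives of the Lagrangian with respect to the velocities, and the Hamiltonian as

```latex
p_i = \frac{\partial L}{\partial \dot q^i}, \qquad
H(q, p) = \sum_i p_i \, \dot q^i - L(q, \dot q).
```

For $L = \tfrac{1}{2} g_{ij} \dot q^i \dot q^j - V(q)$ this gives $H = \tfrac{1}{2} g^{ij} p_i p_j + V(q)$: kinetic plus potential energy.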

So, classical mechanics is becoming ‘compositional’. We can convert the Lagrangian descriptions of a bunch of little open systems into their Hamiltonian descriptions and then glue the results together, and we get the same answer as if we did that conversion on the whole big system. Thus, we’re starting to formalize the way physicists think about physical systems ‘one piece at a time’.

### 4 Responses to Open Systems in Classical Mechanics

1. Frederic Barbaresco says:

Port-Hamiltonian approaches are another attempt to glue Hamiltonian descriptions of unit systems in order to capture the behavior of a complex one.
See the last talks of Bernhard Maschke at the Les Houches Summer Week SPIGL’20.

2. Seems like there’s an information channel here between situations with different possibilities, as in–

(link: day5_1.pdf)

3. John Baez says:

On the Category Theory Community Server, Morgan Rogers and I are having a nice conversation about this paper. He wrote:

It’s been a while since I did any hands-on physics, and I’m a little lazy to take a full dive into your paper, so forgive me if my question seems silly. When you say “We can equip all the symplectic manifolds in this story with Hamiltonians”, and then say it’s a big deal that you’re doing this for the feet of the spans as well as the apex, why is that? When equipping Hamiltonians, don’t you do so on the underlying category before taking spans? That is, if one is composing spans with Hamiltonians, shouldn’t the Hamiltonians of the intermediate system match up for that to make sense, and shouldn’t the Hamiltonian on the apex be constrained to be compatible with the Hamiltonians on the feet?

Another way of saying what I’m trying to ask is, why isn’t “HamSy” an exactly analogous construction to the category of spans of Poisson maps, but on a slightly richer category? I gather that taking pullbacks is not straightforward in the richer setting; is that the root of it?

I wrote:

When you say “We can equip all the symplectic manifolds in this story with Hamiltonians”, and then say it’s a big deal that you’re doing this for the feet of the spans as well as the apex, why is that?

Just for people who didn’t read my blog article: I didn’t exactly say it was a “big deal”. It’s only “big” in a rather technical sense, namely that this prevents our theory of open classical systems from fitting into the “decorated cospan” or “structured cospan” frameworks that some of us have been working on for a while. Nobody who wasn’t closely following these technical developments would care much.

When equipping manifolds with Hamiltonians, don’t you do so on the underlying category before taking spans?

No, because we don’t want to require that the legs of the spans are morphisms “preserving” the Hamiltonians. Perhaps more importantly, when we compose two spans to get the Hamiltonian on the new apex, we add the Hamiltonians on the apices of the spans we’re composing (after pulling them back) and subtract the Hamiltonian on the common foot (after pulling it back).

This tricky-sounding rule says that when you glue together two physical systems by identifying a piece of one with an isomorphic piece of the other, the energy of the resulting system is the sum of the energies of the two systems you glued together, minus the energy in the piece you’ve identified. The subtraction is necessary to avoid “double-counting”.

Another way of saying what I’m trying to ask is, why isn’t “HamSy” an exactly analogous construction to the category of spans of Poisson maps, but on a slightly richer category?

I think not; I think $\mathsf{HamSy}$ is not a decorated or structured cospan category, thanks to the sneaky rule for getting the Hamiltonian of a composite system.

I should try to prove it’s not.

I gather that taking pullbacks is not straightforward in the richer setting; is that the root of it?

That’s a somewhat different issue that we also have to deal with.

He wrote:

When equipping manifolds with Hamiltonians, don’t you do so on the underlying category before taking spans?

No, because we don’t want to require that the legs of the spans are morphisms “preserving” the Hamiltonians. Perhaps more importantly, when we compose two spans to get the Hamiltonian on the new apex, we add the Hamiltonians on the apices of the spans we’re composing (after pulling them back) and subtract the Hamiltonian on the common foot (after pulling it back).

Aren’t there constraints on the Hamiltonians of subsystems of a common system? If not, why not, and if so, shouldn’t there be morphisms (possibly in the opposite direction to the Poisson maps) expressing this? The formula you describe does sound a lot like the formula you would get for a pushout, after all.

I wrote:

Aren’t there constraints on the Hamiltonians of subsystems of a common system?

Not sure what you mean. When we compose open systems, which are spans with extra structure, we demand that the Hamiltonians on the feet match. But we don’t impose any relation between the Hamiltonians on the feet of a span and the Hamiltonian on its apex.

If not, why not?

Because there’s no way to determine the energy of a subsystem just from the energy of the whole system, or vice versa; in general they obey no relation whatsoever.

He wrote:

In the example of particles on ends of a spring you give in the blog post, the kinetic energy of either particle surely makes an additive contribution to the energy of the system, doesn’t it? Although the calculation for potentials seems more difficult…

As another example, if I have (a classical model of) two atoms in a molecule (such as the two hydrogens in a water molecule), their individual energies are not determined by the energy of the molecule, nor do their energies determine the energy of the entire system since there is another big piece attached, but there must surely be constraints in each direction?

I wrote:

In the example of particles on ends of a spring you give in the blog post, the kinetic energy of either particle surely makes an additive contribution to the energy of the system, doesn’t it?

Kinetic energy is additive in a certain sense. But when you’re doing Hamiltonian mechanics with general symplectic manifolds, there’s no concept of “kinetic” versus “potential” energy: energy is just an arbitrary smooth function on your symplectic manifold of states.

In this situation there’s no general relation that holds between the energy of a whole system and the energies of two randomly chosen subsystems of that system, so we specify all three.

We also have a setup for open Lagrangian systems, which is formally analogous.

In the Lagrangian approach there’s a distinction between kinetic and potential energy built in – at least at the level of generality we work at (which is not maximally general). The configuration space (= space of possible “positions”) is a Riemannian manifold. A point in the tangent bundle specifies a position and velocity. The Riemannian metric lets us compute the kinetic energy from the velocity. The potential energy is an arbitrary smooth function on the Riemannian manifold. The Lagrangian is the kinetic energy minus the potential energy.
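As a toy instance of this setup (my own example, not one from the paper): take configuration space $\mathbb{R}$ with the standard metric and a quadratic potential.

```python
# Configuration space R with the standard metric; a point in the tangent
# bundle is a (position, velocity) pair.  Kinetic energy comes from the
# metric, the potential is an arbitrary smooth function of position, and
# the Lagrangian is their difference.

def lagrangian(q, v, m=1.0, k=1.0):
    kinetic = 0.5 * m * v**2      # from the Riemannian metric
    potential = 0.5 * k * q**2    # arbitrary smooth function of position
    return kinetic - potential

print(lagrangian(1.0, 2.0))  # 1.5
```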

Now our open systems are spans of Riemannian manifolds where the apex and feet are equipped with potential functions. When we compose these spans we do the same trick of adding the potentials for the apices of the two spans and subtracting off the potential for their common foot—again to prevent “double-counting”.

4. These explanations of the same system in terms of two different languages– Lagrangian and Hamiltonian– seem like the explanation of information flow around the flashlight example, page 6 of ‘Information Flow: The Logic of Distributed Systems’ by Barwise and Seligman, cited in the above-linked presentation:

“Stepping below the casual uniformity of talk about information [in the flashlight example], we see a great disunity of theoretical principles and modes of explanation. Psychology, physiology, physics, linguistics, and telephone engineering are very different disciplines. They use different mathematical models (if any), and it is far from clear how the separate models may be linked to account for the whole story.”

In the flashlight example, the language of electrical engineering describes the circuitry of the flashlight. The language of mechanical engineering describes the mechanical workings of the switch, and so on. Yet the flashlight is one system– just as the physical system, above, is one system, even though it can be described in the two different languages of the Hamiltonian and the Lagrangian.
