“Stepping below the casual uniformity of talk about information [in the flashlight example], we see a great disunity of theoretical principles and modes of explanation. Psychology, physiology, physics, linguistics, and telephone engineering are very different disciplines. They use different mathematical models (if any), and it is far from clear how the separate models may be linked to account for the whole story.”

In the flashlight example, the language of electrical engineering describes the circuitry of the flashlight. The language of mechanical engineering describes the mechanical workings of the switch, and so on. Yet the flashlight is one system – just as the physical system, above, is one system, even though it can be described in the two different languages of the Hamiltonian and the Lagrangian.
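For readers who want the dictionary between those two languages: the standard Legendre transform relates a Lagrangian description to a Hamiltonian one. This is textbook material, not specific to the paper under discussion:

```latex
% Given a Lagrangian L(q, \dot q), define the conjugate momenta
% and obtain the Hamiltonian via the Legendre transform:
p_i = \frac{\partial L}{\partial \dot q^i}, \qquad
H(q, p) = \sum_i p_i \, \dot q^i - L(q, \dot q).
```

When the Legendre transform is invertible, the two descriptions carry the same information about the one system, just packaged differently.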

It’s been a while since I did any hands-on physics, and I’m a little lazy to take a full dive into your paper, so forgive me if my question seems silly. When you say “We can equip all the symplectic manifolds in this story with Hamiltonians”, and then say it’s a big deal that you’re doing this for the feet of the spans as well as the apex, why is that? When equipping Hamiltonians, don’t you do so on the underlying category before taking spans? That is, if one is composing spans with Hamiltonians, shouldn’t the Hamiltonians of the intermediate system match up for that to make sense, and shouldn’t the Hamiltonian on the apex be constrained to be compatible with the Hamiltonians on the feet?

Another way of saying what I’m trying to ask is, why isn’t “HamSy” an exactly analogous construction to the category of spans of Poisson maps, but on a slightly richer category? I gather that taking pullbacks is not straightforward in the richer setting; is that the root of it?

I wrote:

When you say “We can equip all the symplectic manifolds in this story with Hamiltonians”, and then say it’s a big deal that you’re doing this for the feet of the spans as well as the apex, why is that?

Just for people who didn’t read my blog article: I didn’t exactly say it was a “big deal”. It’s only “big” in a rather technical sense, namely that this prevents our theory of open classical systems from fitting into the “decorated cospan” or “structured cospan” frameworks that some of us have been working on for a while. Nobody who wasn’t closely following these technical developments would care much.

When equipping manifolds with Hamiltonians, don’t you do so on the underlying category before taking spans?

No, because we don’t want to require that the legs of the spans are morphisms “preserving” the Hamiltonians, and perhaps more importantly when we compose two spans to get the Hamiltonian on the new apex we add the Hamiltonians on the apices of the spans we’re composing (after pulling them back) and subtract the Hamiltonian on the common foot (after pulling it back).

This tricky-sounding rule says that when you glue together two physical systems by identifying a piece of one with an isomorphic piece of the other, the energy of the resulting system is the sum of the energies of the two systems you glued together, minus the energy in the piece you’ve identified. The subtraction is necessary to avoid “double-counting”.
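Concretely, the rule can be written out as follows. The notation here is mine, not necessarily the paper’s: suppose we compose spans $A \xleftarrow{f_1} M_1 \xrightarrow{g_1} B$ and $B \xleftarrow{f_2} M_2 \xrightarrow{g_2} C$, with apex Hamiltonians $H_{M_1}, H_{M_2}$ and foot Hamiltonian $H_B$ on the common foot:

```latex
% The composite apex is the pullback M_1 \times_B M_2,
% with projections p_1 and p_2 to the two original apices.
% The Hamiltonian on the composite apex adds the pulled-back
% apex Hamiltonians and subtracts the pulled-back Hamiltonian
% on the common foot, to avoid double-counting:
H_{M_1 \times_B M_2}
  = p_1^* H_{M_1} + p_2^* H_{M_2} - \pi^* H_B,
\qquad \pi = g_1 \circ p_1 = f_2 \circ p_2.
```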

Another way of saying what I’m trying to ask is, why isn’t “HamSy” an exactly analogous construction to the category of spans of Poisson maps, but on a slightly richer category?

I think not; I think it is not a decorated or structured cospan category, thanks to the sneaky rule for getting the Hamiltonian of a composite system.

I should try to prove it’s not.

I gather that taking pullbacks is not straightforward in the richer setting; is that the root of it?

That’s a somewhat different issue that we also have to deal with.

He wrote:

When equipping manifolds with Hamiltonians, don’t you do so on the underlying category before taking spans?

No, because we don’t want to require that the legs of the spans are morphisms “preserving” the Hamiltonians, and perhaps more importantly when we compose two spans to get the Hamiltonian on the new apex we add the Hamiltonians on the apices of the spans we’re composing (after pulling them back) and subtract the Hamiltonian on the common foot (after pulling it back).

Aren’t there constraints on the Hamiltonians of subsystems of a common system? If not, why not, and if so, shouldn’t there be morphisms (possibly in the opposite direction to the Poisson maps) expressing this? The formula you describe does sound a lot like the formula you would get for a pushout, after all.

I wrote:

Aren’t there constraints on the Hamiltonians of subsystems of a common system?

Not sure what you mean. When we compose open systems, which are cospans with extra structure, we demand that the Hamiltonians on the feet match. But we don’t impose any relation between the Hamiltonian on the feet of a span and the Hamiltonian on its apex.

If not, why not?

Because there’s no way to determine the energy of a subsystem just from the energy of the whole system, or vice versa; in general they obey no relation whatsoever.

He wrote:

In the example of particles on ends of a spring you give in the blog post, the kinetic energy of either particle surely makes an additive contribution to the energy of the system, doesn’t it? Although the calculation for potentials seems more difficult…

As another example, if I have (a classical model of) two atoms in a molecule (such as the two hydrogens in a water molecule), their individual energies are not determined by the energy of the molecule, nor do their energies determine the energy of the entire system since there is another big piece attached, but there must surely be constraints in each direction?

I wrote:

In the example of particles on ends of a spring you give in the blog post, the kinetic energy of either particle surely makes an additive contribution to the energy of the system, doesn’t it?

Kinetic energy is additive in a certain sense. But when you’re doing Hamiltonian mechanics with general symplectic manifolds, there’s no concept of “kinetic” versus “potential” energy: energy is just an arbitrary smooth function on your symplectic manifold of states.

In this situation there’s no general relation that holds between the energy of a whole system and the energies of two randomly chosen subsystems of that system, so we specify all three.

We also have a setup for open Lagrangian systems, which is formally analogous.

In the Lagrangian approach there’s a distinction between kinetic and potential energy built in – at least at the level of generality we work at (which is not maximally general). The configuration space (= space of possible “positions”) is a Riemannian manifold. A point in the tangent bundle specifies a position and velocity. The Riemannian metric lets us compute the kinetic energy from the velocity. The potential energy is an arbitrary smooth function on the Riemannian manifold. The Lagrangian is the kinetic energy minus the potential energy.

Now our open systems are spans of Riemannian manifolds where the apex and feet are equipped with potential functions. When we compose these spans we do the same trick of adding the potentials for the apices of the two spans and subtracting off the potential for their common foot – again to prevent “double-counting”.
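Spelling this out in formulas (again with my own labels, not necessarily the paper’s): on a Riemannian configuration space $(Q, g)$ with potential $V \colon Q \to \mathbb{R}$, the Lagrangian and the composition rule for potentials look like this:

```latex
% Lagrangian on the tangent bundle TQ: kinetic energy from the
% Riemannian metric, minus the potential:
L(q, \dot q) = \tfrac{1}{2}\, g_q(\dot q, \dot q) - V(q).

% Composing spans of Riemannian manifolds with apex potentials
% V_{M_1}, V_{M_2} and common-foot potential V_B: the same
% add-and-subtract rule as in the Hamiltonian case applies,
% where p_1, p_2 are the pullback projections and
% \pi is the map from the composite apex to the common foot:
V_{M_1 \times_B M_2} = p_1^* V_{M_1} + p_2^* V_{M_2} - \pi^* V_B.
```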

See the last talks by Bernhard Maschke at the Les Houches summer week SPIGL’20:
