Networks of Dynamical Systems

18 March, 2014

guest post by Eugene Lerman

Hi, I’m Eugene Lerman. I met John back in the mid 1980s when John and I were grad students at MIT. John was doing mathematical physics and I was studying symplectic geometry. We never talked about networks. Now I teach in the math department at the University of Illinois at Urbana, and we occasionally talk about networks on his blog.

A few years ago a friend of mine who studies locomotion in humans and other primates asked me if I knew of any math that could be useful to him.

I remember coming across an expository paper on ‘coupled cell networks’:

• Martin Golubitsky and Ian Stewart, Nonlinear dynamics of networks: the groupoid formalism, Bull. Amer. Math. Soc. 43 (2006), 305–364.

In this paper, Golubitsky and Stewart used the study of animal gaits, and of models for the hypothetical neural networks called ‘central pattern generators’ that give rise to these gaits, to motivate the study of networks of ordinary differential equations with symmetry. In particular they were interested in ‘synchrony’. When a horse trots, or canters, or gallops, its limbs move in an appropriate pattern, with different pairs of legs moving in synchrony:


They explained that synchrony (and the patterns) could arise when the differential equations have finite group symmetries. They also proposed several systems of symmetric ordinary differential equations that could generate the appropriate patterns.

Later on Golubitsky and Stewart noticed that there are systems of ODEs that have no group symmetries but whose solutions nonetheless exhibit certain synchrony. They found an explanation: these ODEs were ‘groupoid invariant’. I thought that it would be fun to understand what ‘groupoid invariant’ meant and why such invariance leads to synchrony.

I talked my colleague Lee DeVille into joining me on this adventure. At the time Lee had just arrived at Urbana after a postdoc at NYU. After a few years of thinking about these networks Lee and I realized that strictly speaking one doesn’t really need groupoids for these synchrony results and it’s better to think of the social life of networks instead. Here is what we figured out—a full and much too precise story is here:

• Eugene Lerman and Lee DeVille, Dynamics on networks of manifolds.

Let’s start with an example of a class of ODEs with a mysterious property:

Example. Consider this ordinary differential equation for a function \vec{x} : \mathbb{R} \to {\mathbb{R}}^3

\begin{array}{rcl}  \dot{x}_1&=& f(x_1,x_2)\\  \dot{x}_2&=& f(x_2,x_1)\\  \dot{x}_3&=& f(x_3, x_2)  \end{array}

for some function f:{\mathbb{R}}^2 \to {\mathbb{R}}. It is easy to see that a function x(t) solving

\displaystyle{  \dot{x} = f(x,x)  }

gives a solution of these equations if we set

\vec{x}(t) = (x(t),x(t),x(t))

You can think of the differential equations in this example as describing the dynamics of a complex system built out of three interacting subsystems. Then any solution of the form

\vec{x}(t) = (x(t),x(t),x(t))

may be thought of as a synchronization: the three subsystems are evolving in lockstep.

One can also view the result geometrically: the diagonal

\displaystyle{  \Delta = \{(x_1,x_2, x_3)\in {\mathbb{R}}^3 \mid x_1 =x_2 = x_3\}  }

is an invariant subsystem of the continuous-time dynamical system defined by the differential equations. Remarkably enough, such an invariant subsystem exists for any choice of the function f.
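Here is a quick numerical check of this, in Python (the particular choice of f and the use of scipy are mine, purely for illustration): if we start on the diagonal, the three coordinates stay equal for as long as we integrate.

```python
import numpy as np
from scipy.integrate import solve_ivp

def f(x, y):
    # an arbitrary choice of f : R^2 -> R; any smooth function works here
    return np.sin(y) - x

def vector_field(t, x):
    x1, x2, x3 = x
    return [f(x1, x2), f(x2, x1), f(x3, x2)]

# start on the diagonal x1 = x2 = x3 and integrate
sol = solve_ivp(vector_field, (0.0, 10.0), [1.0, 1.0, 1.0])

# the trajectory stays on the diagonal, up to numerical error
print(np.max(np.abs(sol.y[0] - sol.y[1])))
print(np.max(np.abs(sol.y[1] - sol.y[2])))
```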

Where does such a synchronization or invariant subsystem come from? There is no apparent symmetry of {\mathbb{R}}^3 that preserves the differential equations and fixes the diagonal \Delta, and thus could account for this invariant subsystem. It turns out that what matters is the structure of the mutual dependencies of the three subsystems making up the big system. That is, the evolution of x_1 depends only on x_1 and x_2, the evolution of x_2 depends only on x_2 and x_1, and the evolution of x_3 depends only on x_3 and x_2.

These dependencies can be conveniently pictured as a directed graph:

The graph G has nodes 1, 2, 3 and arrows 2\to 1, 1\to 2 and 2\to 3, with an arrow b\to a recording that the evolution of x_a depends on x_b. This graph has no symmetries. Nonetheless, the existence of the invariant subsystem living on the diagonal \Delta can be deduced from certain properties of the graph G. The key is the existence of a surjective map of graphs

\varphi :G\to G'

from our graph G to a graph G' with exactly one node, call it a, and one arrow. It is also crucial that \varphi has the following lifting property: there is a unique way to lift the one and only arrow of G' to an arrow of G once we specify the target node of the lift.
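To make the lifting property concrete, here is one way to check it in code (the encoding of the graphs and of \varphi is mine, just for illustration): for every node a of G and every arrow of G' ending at \varphi(a), there should be exactly one arrow of G ending at a that maps to it.

```python
# Arrows of G as (source, target) pairs: 2 -> 1, 1 -> 2, 2 -> 3 (all distinct).
G_edges  = [(2, 1), (1, 2), (2, 3)]
# G' has a single node 'a' and a single arrow from 'a' to itself.
Gp_edges = [('a', 'a')]

phi_node = {1: 'a', 2: 'a', 3: 'a'}            # phi on nodes
phi_edge = {e: ('a', 'a') for e in G_edges}    # phi on arrows

def has_unique_lifts(G_edges, Gp_edges, phi_node, phi_edge):
    nodes = {n for e in G_edges for n in e}
    for a in nodes:
        for ep in Gp_edges:
            if ep[1] != phi_node[a]:
                continue                       # only arrows of G' ending at phi(a) need lifts
            lifts = [e for e in G_edges if e[1] == a and phi_edge[e] == ep]
            if len(lifts) != 1:                # the lift must exist and be unique
                return False
    return True

print(has_unique_lifts(G_edges, Gp_edges, phi_node, phi_edge))   # True
```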

We now formally define the notion of a ‘network of phase spaces’ and of a continuous-time dynamical system on such a network. Equivalently, we define a ‘network of continuous-time dynamical systems’. We start with a directed graph

G=\{G_1\rightrightarrows G_0\}

Here G_1 is the set of edges, G_0 is the set of nodes, and the two arrows assign to an edge its source and target, respectively. To each node we attach a phase space (or more formally a manifold, perhaps with boundary or corners). Here ‘attach’ means that we choose a function {\mathcal P}:G_0 \to {\mathsf{PhaseSpace}}; it assigns to each node a\in G_0 a phase space {\mathcal P}(a).

In our running example, to each node of the graph G we attach the real line {\mathbb{R}}. (If we think of the set G_0 as a discrete category and {\mathsf{PhaseSpace}} as a category of manifolds and smooth maps, then {\mathcal P} is simply a functor.)

Thus a network of phase spaces is a pair (G,{\mathcal P}), where G is a directed graph and {\mathcal P} is an assignment of phase spaces to the nodes of G.

We think of the collection \{{\mathcal P}(a)\}_{a\in G_0} as the collection of phase spaces of the subsystems constituting the network (G, {\mathcal P}). We refer to {\mathcal P} as a phase space function. Since the state of the network should be determined completely and uniquely by the states of its subsystems, it is reasonable to take the total phase space of the network to be the product

\displaystyle{  {\mathbb{P}}(G, {\mathcal P}):= \bigsqcap_{a\in G_0} {\mathcal P}(a).  }

In the example the total phase space of the network (G,{\mathcal P}) is {\mathbb{R}}^3, while the phase space of the network (G', {\mathcal P}') is the real line {\mathbb{R}}.
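In code one might record such a network as a directed graph together with a dictionary assigning a phase space to each node. Here is a minimal sketch of the running example (the encoding is mine, and I record each phase space only by its dimension):

```python
# the running example as a network (G, P): the graph G with the real line attached to each node
network = {
    'nodes': [1, 2, 3],
    'edges': [(2, 1), (1, 2), (2, 3)],        # arrows as (source, target)
    'phase_space_dim': {1: 1, 2: 1, 3: 1},    # P(a) = R for every node a
}

# the total phase space is the product of the node phase spaces,
# so its dimension is the sum of the node dimensions: here it is R^3
total_dim = sum(network['phase_space_dim'][a] for a in network['nodes'])
print(total_dim)   # 3
```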

Finally we need to interpret the arrows. An arrow b\xrightarrow{\gamma}a in a graph G should encode the fact that the dynamics of the subsystem associated to the node a depends on the states of the subsystem associated to the node b. To make this precise requires the notion of an ‘open system’, or ‘control system’, which we define below. It also requires a way to associate an open system to the set of arrows coming into a node/vertex a.

To a first approximation an open system (or control system; I use the two terms interchangeably) is a system of ODEs depending on parameters. I like to think of a control system geometrically: a control system on a phase space M controlled by the phase space U is a map

F: U\times M \to TM

where TM is the tangent bundle of the space M, so that for all (u,m)\in U\times M, F(u,m) is a vector tangent to M at the point m. Given phase spaces U and M the set of all corresponding control systems forms a vector space which we denote by

\displaystyle{ \mathsf{Ctrl}(U\times M \to M)}

(More generally one can talk about the space of control systems associated with a surjective submersion Q\to M. For us, submersions of the form U\times M \to M are general enough.)

To encode the incoming arrows, we introduce the input tree I(a) (this is a very short tree, a corolla if you like). The input tree of a node a of a graph G is a directed graph whose arrows are precisely the arrows of G coming into the vertex a, but any two parallel arrows of G with target a will have disjoint sources in I(a). In the example the input tree I(a) of the one node a of G' is the tree with two nodes and a single arrow between them: a copy of a as the target, and a separate node serving as the source of the lifted loop.

There is always a map of graphs \xi:I(a) \to G. For instance for the input tree in the example we just discussed, \xi is the map

Consequently if (G,{\mathcal P}) is a network and I(a) is an input tree of a node of G, then (I(a), {\mathcal P}\circ \xi) is also a network. This allows us to talk about the phase space {\mathbb{P}} I(a) of an input tree. In our running example,

{\mathbb{P}} I(a) = {\mathbb{R}}^2
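Computationally, the input tree of a node is just its list of incoming arrows, kept with multiplicity, and the phase space of the input tree is the product of the phase spaces of its nodes. A small sketch for the graph G of the running example (my own encoding again); note that every node of G has input phase space {\mathbb{R}}^2, matching the computation above.

```python
edges = [(2, 1), (1, 2), (2, 3)]              # arrows of G as (source, target)
dims  = {1: 1, 2: 1, 3: 1}                    # P(a) = R for every node

def input_tree(a, edges):
    # the incoming arrows of the node a, kept with multiplicity
    return [e for e in edges if e[1] == a]

def input_phase_space_dim(a, edges, dims):
    # P I(a): one factor for the node a itself, one for the source of each incoming arrow
    return dims[a] + sum(dims[src] for src, _ in input_tree(a, edges))

print(input_tree(1, edges))                   # [(2, 1)]
print(input_phase_space_dim(1, edges, dims))  # 2, i.e. P I(1) = R^2
```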

Given a network (G,{\mathcal P}), there is a vector space \mathsf{Ctrl}({\mathbb{P}} I(a)\to {\mathbb{P}} a) of open systems associated with every node a of G.

In our running example, the vector space associated to the one node a of (G',{\mathcal P}') is

\mathsf{Ctrl}({\mathbb{R}}^2 \to {\mathbb{R}})  \simeq C^\infty({\mathbb{R}}^2, {\mathbb{R}})

In the same example, the network (G,{\mathcal P}) has three nodes and we associate the same vector space C^\infty({\mathbb{R}}^2, {\mathbb{R}}) to each one of them.

We then construct an interconnection map

\displaystyle{  {\mathcal{I}}: \bigsqcap_{a\in G_0} \mathsf{Ctrl}({\mathbb{P}} I(a)\to {\mathbb{P}} a) \to \Gamma (T{\mathbb{P}}(G, {\mathcal P})) }

from the product of spaces of all control systems to the space of vector fields

\Gamma (T{\mathbb{P}} (G, {\mathcal P}))

on the total phase space of the network. (We use the standard notation to denote the tangent bundle of a manifold R by TR and the space of vector fields on R by \Gamma (TR)). In our running example the interconnection map for the network (G',{\mathcal P}') is the map

\displaystyle{  {\mathcal{I}}: C^\infty({\mathbb{R}}^2, {\mathbb{R}}) \to C^\infty({\mathbb{R}}, {\mathbb{R}}), \quad f(x,u) \mapsto f(x,x).  }

The interconnection map for the network (G,{\mathcal P}) is the map

\displaystyle{  {\mathcal{I}}: C^\infty({\mathbb{R}}^2, {\mathbb{R}})^3 \to C^\infty({\mathbb{R}}^3, {\mathbb{R}}^3)}

given by

\displaystyle{  ({\mathcal{I}}(f_1,f_2, f_3))\,(x_1,x_2, x_3) = (f_1(x_1,x_2), f_2(x_2,x_1),  f_3(x_3,x_2)).  }
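Concretely, the interconnection map for the running example just plugs the right coordinates into the right slots of the open systems. Here is a sketch in Python (the names and the sample f are mine):

```python
import numpy as np

def interconnect(f1, f2, f3):
    # the interconnection map for the running example:
    # three open systems on R (each with one external input) give a vector field on R^3
    def vector_field(x):
        x1, x2, x3 = x
        return np.array([f1(x1, x2), f2(x2, x1), f3(x3, x2)])
    return vector_field

# for example, three copies of the same open system
f = lambda x, u: np.sin(u) - x
X = interconnect(f, f, f)
print(X(np.array([1.0, 1.0, 1.0])))   # all three components agree: a vector tangent to the diagonal
```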

To summarize: a dynamical system on a network of phase spaces is the data (G, {\mathcal P}, (w_a)_{a\in G_0} ) where G=\{G_1\rightrightarrows G_0\} is a directed graph, {\mathcal P}:G_0\to {\mathsf{PhaseSpace}} is a phase space function and (w_a)_{a\in G_0} is a collection of open systems compatible with the graph and the phase space function. The corresponding vector field on the total phase space of the network is obtained by interconnecting the open systems.

Dynamical systems on networks can be made into the objects of a category. Carrying this out gives us a way to associate maps of dynamical systems to combinatorial data.

The first step is to form the category of networks of phase spaces, which we call {\mathsf{Graph}}/{\mathsf{PhaseSpace}}. In this category, by definition, a morphism from a network (G,{\mathcal P}) to a network (G', {\mathcal P}') is a map of directed graphs \varphi:G\to G' which is compatible with the phase space functions:

\displaystyle{  {\mathcal P}'\circ \varphi = {\mathcal P}.  }

Using the universal properties of products it is easy to show that a map of networks \varphi: (G,{\mathcal P})\to (G',{\mathcal P}') defines a map {\mathbb{P}}\varphi of total phase spaces in the opposite direction:

\displaystyle{  {\mathbb{P}} \varphi: {\mathbb{P}} G' \to {\mathbb{P}} G.  }

In the category theory language the total phase space assignment extends to a contravariant functor

\displaystyle{ {\mathbb{P}}:  {({\mathsf{Graph}}/{\mathsf{PhaseSpace}})}^{\mathrm{op}} \to  {\mathsf{PhaseSpace}}.  }

We call this functor the total phase space functor. In our running example, the map

{\mathbb{P}}\varphi:{\mathbb{R}} = {\mathbb{P}}(G',{\mathcal P}') \to  {\mathbb{R}}^3 = {\mathbb{P}} (G,{\mathcal P})

is given by

\displaystyle{  {\mathbb{P}} \varphi (x) = (x,x,x).  }

Continuous-time dynamical systems also form a category, which we denote by \mathsf{DS}. The objects of this category are pairs consisting of a phase space and a vector field on this phase space. A morphism in this category is a smooth map of phase spaces that intertwines the two vector fields. That is:

\displaystyle{  \mathrm{Hom}_\mathsf{DS} ((M,X), (N,Y))   = \{f:M\to N \mid Df \circ X = Y\circ f\}  }

for any pair of objects (M,X), (N,Y) in \mathsf{DS}.

In general, morphisms in this category are difficult to determine explicitly. For example, if (M, X) = ((a,b), \frac{d}{dt}) then a morphism from (M,X) to some dynamical system (N,Y) is simply a piece of an integral curve of the vector field Y defined on the interval (a,b). And if (M, X) = (S^1, \frac{d}{d\theta}), the circle with its standard constant vector field, then a morphism from (M,X) to (N,Y) is a periodic orbit of Y. Proving that a given dynamical system has a periodic orbit is usually hard.

Consequently, given a map of networks

\varphi:(G,{\mathcal P})\to (G',{\mathcal P}')

and a collection of open systems

\{w'_{a'}\}_{a'\in G'_0}

on (G',{\mathcal P}') we expect it to be very difficult if not impossible to find a collection of open systems \{w_a\}_{a\in G_0} so that

\displaystyle{  {\mathbb{P}} \varphi: ({\mathbb{P}} G', {\mathcal{I}}' (w'))\to ({\mathbb{P}} G, {\mathcal{I}} (w))  }

is a map of dynamical systems.

It is therefore somewhat surprising that there is a class of maps of graphs for which the above problem has an easy solution! The graph maps of this class are known by several different names. Following Boldi and Vigna we refer to them as graph fibrations: maps of graphs \varphi: G \to G' with the unique lifting property we met earlier, so that every arrow of G' ending at \varphi(a) has exactly one lift to an arrow of G ending at a. Note that despite what the name suggests, graph fibrations in general are not required to be surjective on nodes or edges. For example, the inclusion

is a graph fibration. We say that a map of networks

\varphi:(G,{\mathcal P})\to (G',{\mathcal P}')

is a fibration of networks if \varphi:G\to G' is a graph fibration. With some work one can show that a fibration of networks induces a pullback map

\displaystyle{  \varphi^*: \bigsqcap_{b\in G'_0} \mathsf{Ctrl}({\mathbb{P}} I(b)\to {\mathbb{P}} b) \to  \bigsqcap_{a\in G_0} \mathsf{Ctrl}({\mathbb{P}} I(a)\to {\mathbb{P}} a)  }

on the sets of tuples of the associated open systems. The main result of Dynamics on networks of manifolds is a proof that for a fibration of networks \varphi:(G,{\mathcal P})\to (G',{\mathcal P}') the maps \varphi^*, {\mathbb{P}} \varphi and the two interconnection maps {\mathcal{I}} and {\mathcal{I}}' are compatible:

Theorem. Let \varphi:(G,{\mathcal P})\to (G',{\mathcal P}') be a fibration of networks of manifolds. Then the pullback map

\displaystyle{ \varphi^*: \mathsf{Ctrl}(G',{\mathcal P}')\to \mathsf{Ctrl}(G,{\mathcal P})  }

is compatible with the interconnection maps

\displaystyle{  {\mathcal{I}}': \mathsf{Ctrl}(G',{\mathcal P}') \to \Gamma (T{\mathbb{P}} G') }

and

\displaystyle{  {\mathcal{I}}: \mathsf{Ctrl}(G,{\mathcal P}) \to \Gamma (T{\mathbb{P}} G)  }

Namely for any collection w'\in \mathsf{Ctrl}(G',{\mathcal P}') of open systems on the network (G', {\mathcal P}') the diagram

commutes. In other words,

\displaystyle{ {\mathbb{P}} \varphi: ({\mathbb{P}}  (G',{\mathcal P}'), {\mathcal{I}}' (w'))\to ({\mathbb{P}} (G,  {\mathcal P}), {\mathcal{I}} (\varphi^* w')) }

is a map of continuous-time dynamical systems, a morphism in \mathsf{DS}.
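For the running example the theorem is easy to check numerically. The pullback \varphi^* sends a single open system w' = f to the triple (f,f,f), and the claim is that the diagonal map {\mathbb{P}}\varphi sends integral curves of {\mathcal{I}}'(w') on {\mathbb{R}} to integral curves of {\mathcal{I}}(\varphi^* w') on {\mathbb{R}}^3. A quick sketch (the particular f, the time interval and the use of scipy are my choices):

```python
import numpy as np
from scipy.integrate import solve_ivp

f = lambda x, u: np.sin(u) - x   # a single open system w' on the one-node network (G', P')

# I'(w'): a vector field on R;  I(phi^* w'): the corresponding vector field on R^3
Y = lambda t, x: [f(x[0], x[0])]
X = lambda t, x: [f(x[0], x[1]), f(x[1], x[0]), f(x[2], x[1])]

t = np.linspace(0.0, 10.0, 200)
sol1 = solve_ivp(Y, (0.0, 10.0), [1.0], t_eval=t)              # integral curve in R
sol3 = solve_ivp(X, (0.0, 10.0), [1.0, 1.0, 1.0], t_eval=t)    # integral curve in R^3

# P(phi) sends x to (x, x, x), so the two curves should match up to solver error
err = np.max(np.abs(sol3.y - np.vstack([sol1.y[0]] * 3)))
print(err)   # small
```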

At this point the pure mathematician in me is quite happy: I have a theorem! Better yet, the theorem solves the puzzle at the beginning of this post. But if you have read this far, you may well be wondering: “Ok, you told us about your theorem. Very nice. Now what?”

There is plenty to do. On the practical side of things, continuous-time dynamical systems are much too limited for contemporary engineers. Most of the engineers I know care a lot more about hybrid systems. These kinds of systems exhibit both continuous-time behavior and abrupt transitions, hence the name. For example, the anti-lock brakes on a car are just that kind of system: if a sensor detects that the car is skidding while the brake pedal is pressed, the system starts pulsing the brakes. This is not your father’s continuous-time dynamical system! Hybrid dynamical systems are very hard to understand. They have been all the rage in the engineering literature for the last 10–15 years. Sadly, most pure mathematicians have never heard of them. It would be interesting to extend the theorem I am bragging about to hybrid systems.

Here is another problem that may be worth thinking about: how much of the theorem holds up under numerical simulation? My feeling is that any explicit numerical integration method will behave well, but I am not sure about implicit methods.

And finally a more general issue. John has been talking about networks quite a bit on this blog. But his networks and my networks look like very different mathematical structures. What do they have in common besides nodes and arrows? What mathematical structure are they glimpses of?


Network Theory II

12 March, 2014

 

Chemists are secretly doing applied category theory! When chemists list a bunch of chemical reactions like

C + O₂ → CO₂

they are secretly describing a ‘category’.

That shouldn’t be surprising. A category is simply a collection of things called objects together with things called morphisms going from one object to another, often written

f: x → y

The rules of a category say:

1) we can compose a morphism f: x → y and another morphism g: y → z to get an arrow gf: x → z,

2) (hg)f = h(gf), so we don’t need to bother with parentheses when composing arrows,

3) every object x has an identity morphism 1ₓ: x → x that obeys 1ₓ f = f and f 1ₓ = f.

Whenever we have a bunch of things (objects) and processes (arrows) that take one thing to another, we’re likely to have a category. In chemistry, the objects are bunches of molecules and the arrows are chemical reactions. But we can ‘add’ bunches of molecules and also add reactions, so we have something more than a mere category: we have something called a symmetric monoidal category.
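Here is a toy illustration of that viewpoint in Python (the encoding is entirely mine, just for fun): objects are bags of molecules, morphisms are reactions with a source bag and a target bag, and composition is defined exactly when the bags match up.

```python
from collections import Counter

# objects: bunches of molecules; morphisms: reactions with a source bunch and a target bunch
burn  = ('burn',  Counter({'C': 1, 'O2': 1}), Counter({'CO2': 1}))
split = ('split', Counter({'CO2': 1}),        Counter({'C': 1, 'O2': 1}))   # a formal reverse reaction, just for illustration

def compose(g, f):
    # g after f, defined only when the target of f equals the source of g
    name_f, src_f, tgt_f = f
    name_g, src_g, tgt_g = g
    assert tgt_f == src_g, "not composable"
    return (name_g + ' . ' + name_f, src_f, tgt_g)

print(compose(split, burn))
# ('split . burn', Counter({'C': 1, 'O2': 1}), Counter({'C': 1, 'O2': 1}))
```

‘Adding’ two bunches of molecules just adds the Counters, and that extra operation is what upgrades this from a mere category to a symmetric monoidal one.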

My talk here, part of a series, is an explanation of this viewpoint and how we can use it to take ideas from elementary particle physics and apply them to chemistry! For more details try this free book:

• John Baez and Jacob Biamonte, A Course on Quantum Techniques for Stochastic Mechanics.

as well as this paper on the Anderson–Craciun–Kurtz theorem (discussed in my talk):

• John Baez and Brendan Fong, Quantum techniques for studying equilibrium in reaction networks.

You can also see the slides of this talk. Click on any picture in the slides, or any text in blue, and get more information!


Network Theory I

2 March, 2014

 

Here’s a video of a talk I gave last Tuesday—part of a series. You can see the slides here:

Network Theory I: electrical circuits and signal-flow graphs.

Click on items in blue, or pictures, for more information.

One reason I’m glad I gave this talk is because afterwards Jamie Vicary pointed out some very interesting consequences of the relations among signal-flow diagrams listed in my talk. It turns out they imply equations familiar from the theory of complementarity in categorical quantum mechanics!

This is the kind of mathematical surprise that makes life worthwhile for me. It seemed utterly shocking at first, but I think I’ve figured out why it happens. Now is not the time to explain… but I’ll have to do it soon, both here and in the paper that Jason Erbele and I are writing about control theory.

For now, besides the slides, the best place to read more about this program is here:

• Brendan Fong, A compositional approach to control theory.


Categories in Control – Erlangen Talk

6 February, 2014

 

I’m visiting Erlangen from now until the end of May, since my wife got a grant to do research here. I’m trying to get a lot of papers finished. But today I’m giving a talk in the math department of the university here, which with Germanic brevity is called the Friedrich-Alexander-Universität Erlangen-Nürnberg.

You can see my slides here, or maybe even come to my talk:

Categories in control, Thursday 6 February 2014, 16:15–18:00, Mathematics Department of the FAU, in Übungsraum 1.

The title is a pun. It’s about categories in control theory, the branch of engineering that studies dynamical systems with inputs and outputs, and how to optimize their behavior.

Control theorists often describe these systems using signal-flow graphs. Here is a very rough schematic signal-flow graph, describing the all-important concept of a ‘feedback loop’:

Here is a detailed one, describing a specific device called a servo:

The device is shown on top, and the signal-flow graph describing its behavior is at bottom. For details, click on the picture.

Now, if you have a drop of category-theorist’s blood in your veins, you’ll look at this signal-flow graph and think my god, that’s a string diagram for a morphism in a monoidal category!

And you’d be right. But if you want to learn what that means, and why it matters, read my talk slides!

The slides should make sense if you’re a mathematician, but maybe not otherwise. So, here’s the executive summary. The same sort of super-abstract math that handles things like Feynman diagrams:

also handles signal-flow graphs. The details are different in important and fascinating ways, and this is what I’m mainly concerned with. But we now understand how signal-flow graphs fit into the general theory of networks. This means we can proceed to use modern math to study them—and their relation to other kinds of networks, like electrical circuit diagrams:

More talks

Thanks to the Azimuth Project team, my graduate students and many other folks, the dream of network theory as a step toward ‘green mathematics’ seems to be coming true! There’s a vast amount left to be done, so I’d have trouble convincing a skeptic, but I feel the project has turned a corner. I now feel in my bones that it’s going to work: we’ll eventually develop a language for biology and ecology based in part on category theory.

So, I think it’s a good time to explain all the various aspects of this project that have been cooking away—some quite visibly, but others on secret back burners:

• Jacob Biamonte and I have written a book on Petri nets and chemical reaction networks. You may have seen parts of this on the blog. We started this project at the Centre for Quantum Technologies, but now he’s working at the Institute for Scientific Interchange, in Turin—and collaborating with people there on various aspects of network theory.

• Brendan Fong is working with me on electrical circuits. You may know him for his posts here on chemical reaction networks. He’s now a grad student in computer science at Oxford.

• Jason Erbele, a math grad student at U.C. Riverside, is working with me on control theory. This work is the main topic of my talk—but I also sketch how it ties together with what Brendan is doing. There’s a lot more to say here.

• Tobias Fritz, a postdoc at the Perimeter Institute, is working with me on category-theoretic aspects of information theory. We published a paper on entropy with Tom Leinster, and we’ve got a followup on relative entropy that’s almost done. I should be working on it right this instant! But for now, read the series of posts here on Azimuth: Relative Entropy Part 1, Part 2 and Part 3.

• Brendan Fong has also done some great work on Bayesian networks, using ideas that connect nicely to what Tobias and I are doing.

• Tu Pham and Franciscus Rebro are working on the math that underlies all these projects: bicategories of spans.

The computer science department at Oxford is a great place for category theory and diagrammatic reasoning, thanks to the presence of Samson Abramsky, Bob Coecke and others. I’m going to visit them from February 21 to March 14. It seems like a good time to give a series of talks on this stuff. So, stay tuned! I’ll try to make slides available here.


Wormholes and Entanglement

20 January, 2014

 

An apparent contradiction in what most physicists believe about black holes—the ‘firewall problem’—is making some very good physicists reach for some very crazy-sounding ideas to find a way out. In particular, Maldacena and Susskind have come up with the idea that any pair of quantum-entangled particles is actually connected by a wormhole.

Entanglement is a spooky way for far-away particles to be correlated, but you can’t use it to communicate faster than light. It’s been seen in the lab, but it’s only possible thanks to quantum mechanics. The first people to make a fuss over entanglement were Einstein, Podolsky and Rosen, back in 1935.

A wormhole is a spooky way for far-away regions of space to be connected by a kind of ‘tunnel’—but you probably can’t use it to communicate faster than light. Nobody has ever seen one, but they’re theoretically possible thanks to general relativity. The first people to make a fuss over wormholes were Einstein and Rosen, back in 1935.

So, superficially, it makes sense that there should be a connection between wormholes and entanglement. But when you learn enough physics, you’ll see that Maldacena and Susskind’s proposal sounds completely hare-brained.

But when you learn more physics—maybe more than enough?—you might decide there’s some merit to this idea after all. At the Centre for Quantum Technologies last summer, Jamie Vicary and I noticed some interesting connections between wormholes and quantum entanglement. We now have a paper out!

In it, we study quantum gravity in a universe where space is just 2-dimensional, not 3-dimensional like ours. It’s not realistic, but it has one huge advantage: there’s a working theory of what quantum gravity could be like when space is 2-dimensional, so you can calculate stuff!

So, we calculate what happens when a wormhole forms, and we show the ends look like a particle and its antiparticle (this was already known), and we note that this particle-antiparticle pair is entangled. In fact it’s completely entangled: any piece of information you might want to know about one can also be found in the other.

However, in a sense that Jamie and I make precise, this entanglement is ‘fake’. The reason is that the two ends of the wormhole are not independent things. They’re just two views of the same thing… and, technically, it doesn’t count as entanglement when something is ‘entangled with itself’. This fact is crucial to how Maldacena and Susskind want to get around the firewall problem.

For more details, try this:

Wormholes and entanglement, The n-Category Café.

This has links to other stuff, including our paper, but also some blog articles explaining the firewall problem, the paper by Maldacena and Susskind, and the original Einstein–Podolsky–Rosen and Einstein–Rosen papers (in English).

Since this quantum gravity stuff is more suited to the n-Category Café than here, I won’t enable comments here. If you want to talk, please go there. Sorry!


Quantropy (Part 4)

11 November, 2013

There’s a new paper on the arXiv:

• John Baez and Blake Pollard, Quantropy.

Blake is a physics grad student at U. C. Riverside who plans to do his thesis with me.

If you have carefully read all my previous posts on quantropy (Part 1, Part 2 and Part 3), there’s only a little new stuff here. But still, it’s better organized, and less chatty.

And in fact, Blake came up with a lot of new stuff for this paper! He studied the quantropy of the harmonic oscillator, and tweaked the analogy between statistical mechanics and quantum mechanics in an interesting way. Unfortunately, we needed to put a version of this paper on the arXiv by a deadline, and our writeup of this new work wasn’t quite ready (my fault). So, we’ll put that other stuff in a new version—or, I’m thinking now, a separate paper.

But here are two new things.

First, putting this paper on the arXiv had the usual good effect of revealing some existing work on the same topic. Joakim Munkhammar emailed me and pointed out this paper, which is free online:

• Joakim Munkhammar, Canonical relational quantum mechanics from information theory, Electronic Journal of Theoretical Physics 8 (2011), 93–108.

You’ll see it cites Garrett Lisi’s paper and pushes forward in various directions. There seems to be a typo where he writes the path integral

Z = \displaystyle{ \int e^{-\alpha S(q) } D q}

and says

In order to fit the purpose Lisi concluded that the Lagrange multiplier value \alpha \equiv 1/i \hbar. In similarity with Lisi’s approach we shall also assume that the arbitrary scaling-part of the constant \alpha is in fact 1/\hbar.

I’m pretty sure he means 1/i\hbar, given what he writes later. However, he speaks of ‘maximizing entropy’, which is not quite right for a complex-valued quantity; Blake and I prefer to give this new quantity a new name, and speak of ‘finding a stationary point of quantropy’.

But in a way these are small issues; being a mathematician, I’m more quick to spot tiny technical defects than to absorb significant new ideas, and it will take a while to really understand Munkhammar’s paper.

Second, while writing our paper, Blake and I noticed another similarity between the partition function of a classical ideal gas and the partition function of a quantum free particle. Both are given by an integral like this:

Z = \displaystyle{\int e^{-\alpha S(q) } D q }

where S is a quadratic function of q \in \mathbb{R}^n. Here n is the number of degrees of freedom for the particles in the ideal gas, or the number of time steps for a free particle on a line (where we are discretizing time). The only big difference is that

\alpha = 1/kT

for the ideal gas, but

\alpha = 1/i \hbar

for the free particle.

In both cases there’s an ambiguity in the answer! The reason is that to do this integral, we need to pick a measure D q. The obvious guess is Lebesgue measure

dq = dq_1 \cdots dq_n

on \mathbb{R}^n. But this can’t be right, on physical grounds!

The reason is that the partition function Z needs to be dimensionless, but d q has units. To correct this, we need to divide dq by some dimensionful quantity to get D q.
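For instance, here is a quick check of the units in one dimension, taking S(q) = c q^2/2 (my notation, just for illustration). For the exponent to be dimensionless, \alpha c must have dimensions of 1/\mathrm{length}^2, so the Gaussian integral

\displaystyle{ \int_{-\infty}^{\infty} e^{-\alpha c q^2/2} \, dq = \sqrt{\frac{2\pi}{\alpha c}} }

has dimensions of length, and the analogous integral over \mathbb{R}^n has dimensions of \mathrm{length}^n. So we must divide by something with dimensions of \mathrm{length}^n.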

In the case of the ideal gas, this dimensionful quantity involves the ‘thermal de Broglie wavelength’ of the particles in the gas. And this brings Planck’s constant into the game, even though we’re not doing quantum mechanics: we’re studying the statistical mechanics of a classical ideal gas!

That’s weird and interesting. It’s not the only place where we see that classical statistical mechanics is incomplete or inconsistent, and we need to introduce some ideas from quantum physics to get sensible answers. The most famous one is the ultraviolet catastrophe. What are all the rest?

In the case of the free particle, we need to divide by a quantity with dimensions of \mathrm{length}^n to make

dq = dq_1 \cdots dq_n

dimensionless, since each dq_i has dimensions of length. The easiest way is to introduce a length scale \Delta x and divide each dq_i by that. This is commonly done when people study the free particle. This length scale drops out of the final answer for the questions people usually care about… but not for the quantropy.

Similarly, Planck’s constant drops out of the final answer for some questions about the classical ideal gas, but not for its entropy!

So there’s an interesting question here, about what this new length scale \Delta x means, if anything. One might argue that quantropy is a bad idea, and the need for this new length scale to make it unambiguous is just proof of that. However, the mathematical analogy to quantum mechanics is so precise that I think it’s worth going a bit further out on this limb, and thinking a bit more about what’s going on.

Some weird sort of déjà vu phenomenon seems to be going on. Once upon a time, people tried to calculate the partition functions of classical systems. They discovered they were infinite or ambiguous until they introduced Planck’s constant, and eventually quantum mechanics. Then Feynman introduced the path integral approach to quantum mechanics. In this approach one is again computing partition functions, but now with a new meaning, and with complex rather than real exponentials. But these partition functions are again infinite or ambiguous… for very similar mathematical reasons! And at least in some cases, we can remove the ambiguity using the same trick as before: introducing a new constant. But then… what?

Are we stuck in an infinite loop here? What, if anything, is the meaning of this ‘second Planck’s constant’? Does this have anything to do with second quantization? (I don’t see how, but I can’t resist asking.)


The Elitzur–Vaidman Bomb-Testing Method

24 August, 2013

Quantum mechanics forces us to refine our attitude to counterfactual conditionals: questions about what would have happened if we had done something, even though we didn’t.

“What would the position of the particle be if I’d measured that… when actually I measured its momentum?” Here you’ll usually get no definite answer.

But sometimes you can use quantum mechanics to find out what would have happened if you’d done something… when classically it seems impossible!

Suppose you have a bunch of bombs. Some have a sensor that will absorb a photon you shine on it, and make the bomb explode! Others have a broken sensor, that won’t interact with the photon at all.

Can you find some working bombs? You can tell if a bomb works by shining a photon on it. But if it works, it blows up—and then it doesn’t work anymore!

So, it sounds impossible. But with quantum mechanics you can do it. You can find some bombs that would have exploded if you had shone photons at them!

Here’s how:

Put a light that emits a single photon at A. Have the photon hit the half-silvered mirror at lower left, so it has a 50% chance of going through to the right, and a 50% chance of reflecting and going up. But in quantum mechanics, it sort of does both!

Put a bomb at B. Recombine the photon’s paths using two more mirrors. Have the two paths meet at a second half-silvered mirror at upper right. You can make it so that if the bomb doesn’t work, the photon interferes with itself and definitely goes to C, not D.

But if the bomb works, it absorbs the photon and explodes unless the photon takes the top route… in which case, when it hits the second half-silvered mirror, it has a 50% chance of going to C and a 50% chance of going to D.

So:

• If the bomb doesn’t work, the photon has a 100% chance of going to C.

• If the bomb works, there’s a 50% chance that it absorbs the photon and explodes. There’s also a 50% chance that the bomb does not explode—and then the photon is equally likely to go to either C or D. So, the photon has a 25% chance of reaching C and a 25% chance of reaching D.

So: if you see a photon at D, you know you have a working bomb… but the bomb has not exploded!

For each working bomb there’s:

• a 50% chance that it explodes,
• a 25% chance that it doesn’t explode but you can’t tell if it works,
• a 25% chance that it doesn’t explode but you can tell that it works.
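If you want to see where these numbers come from, here is the two-beam-splitter arithmetic in a few lines of Python. (I’m assuming one standard convention for a 50/50 half-silvered mirror, where reflection picks up a factor of i; the port I call C is just the one that receives all the light when the bomb is broken.)

```python
import numpy as np

# 50/50 beam splitter in one standard convention: reflection picks up a factor of i
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)

photon = np.array([1, 0], dtype=complex)    # mode 0 = lower arm (past the bomb at B), mode 1 = upper arm

# Broken bomb: the photon passes freely through both arms and interferes at the second mirror.
out = BS @ BS @ photon
print("broken bomb :", np.abs(out)**2)      # [0, 1]: everything exits one port; call that port C

# Working bomb: it acts as a which-path measurement right after the first mirror.
mid = BS @ photon
p_boom = abs(mid[0])**2                     # photon found in the bomb arm: explosion, probability 1/2
upper = np.array([0, 1], dtype=complex)     # otherwise the state collapses onto the upper arm
out = BS @ upper
p_D, p_C = (1 - p_boom) * np.abs(out)**2    # port 1 is C, port 0 is D (see the broken-bomb run)
print("working bomb: explode", p_boom, " C", p_C, " D", p_D)   # 0.5, 0.25, 0.25
```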

This is the Elitzur–Vaidman bomb-testing method. It was invented by Avshalom Elitzur and Lev Vaidman in 1993. One year later, physicists actually did an experiment to show this idea works… but alas, not using actual bombs!

In 1996, Kwiat showed that, using more clever methods, you can reduce the percentage of wasted working bombs to as close to zero as you like. And pushing the idea even further, Graeme Mitchison and Richard Jozsa showed in 1999 that you can get a quantum computer to do a calculation for you without even turning it on!

This sounds amazing, but it’s really no more amazing than the bomb-testing method I’ve already described.

References

For details, read these:

• A. Elitzur and L. Vaidman, Quantum mechanical interaction-free measurements, Found. Phys. 23 (1993), 987–997.

• Paul G. Kwiat, H. Weinfurter, T. Herzog, A. Zeilinger, and M. Kasevich, Experimental realization of “interaction-free” measurements.

• Paul G. Kwiat, Interaction-free measurements.

• Graeme Mitchison and Richard Jozsa, Counterfactual computation, Proc. Roy. Soc. Lond. A457 (2001), 1175–1194.

The picture is from the Wikipedia article, which also has other references:

Elitzur–Vaidman bomb tester, Wikipedia.

Bas Spitters pointed out this category-theoretic analysis of the issue:

• Robert Furber and Bart Jacobs, Towards a categorical account of conditional probability.

