guest post by Eugene Lerman
Hi, I’m Eugene Lerman. I met John back in the mid 1980s when John and I were grad students at MIT. John was doing mathematical physics and I was studying symplectic geometry. We never talked about networks. Now I teach in the math department at the University of Illinois at Urbana, and we occasionally talk about networks on his blog.
A few years ago a friend of mine who studies locomotion in humans and other primates asked me if I knew of any math that could be useful to him.
I remember coming across an expository paper on ‘coupled cell networks’:
• Martin Golubitsky and Ian Stewart, Nonlinear dynamics of networks: the groupoid formalism, Bull. Amer. Math. Soc. 43 (2006), 305–364.
In this paper, Golubitsky and Stewart used the study of animal gaits and models for the hypothetical neural networks called ‘central pattern generators’ that give rise to these gaits to motivate the study of networks of ordinary differential equations with symmetry. In particular they were interested in ‘synchrony’. When a horse trots, or canters, or gallops, its limbs move in an appropriate pattern, with different pairs of legs moving in synchrony:
They explained that synchrony (and the patterns) could arise when the differential equations have finite group symmetries. They also proposed several systems of symmetric ordinary differential equations that could generate the appropriate patterns.
Later on Golubitsky and Stewart noticed that there are systems of ODEs that have no group symmetries but whose solutions nonetheless exhibit certain synchrony. They found an explanation: these ODEs were ‘groupoid invariant’. I thought that it would be fun to understand what ‘groupoid invariant’ meant and why such invariance leads to synchrony.
I talked my colleague Lee DeVille into joining me on this adventure. At the time Lee had just arrived at Urbana after a postdoc at NYU. After a few years of thinking about these networks Lee and I realized that strictly speaking one doesn’t really need groupoids for these synchrony results and it’s better to think of the social life of networks instead. Here is what we figured out—a full and much too precise story is here:
• Eugene Lerman and Lee DeVille, Dynamics on networks of manifolds.
Let’s start with an example of a class of ODEs with a mysterious property:
Example. Consider this ordinary differential equation for a function $x = (x_1, x_2, x_3) \colon \mathbb{R} \to \mathbb{R}^3$:

$\dot{x}_1 = f(x_1, x_3), \qquad \dot{x}_2 = f(x_2, x_1), \qquad \dot{x}_3 = f(x_3, x_1),$

for some function $f \colon \mathbb{R}^2 \to \mathbb{R}$. It is easy to see that a function $s \colon \mathbb{R} \to \mathbb{R}$ solving

$\dot{s} = f(s, s)$

gives a solution of these equations if we set

$x_1(t) = x_2(t) = x_3(t) = s(t).$

You can think of the differential equations in this example as describing the dynamics of a complex system built out of three interacting subsystems. Then any solution of the form

$x_1(t) = x_2(t) = x_3(t)$

may be thought of as a synchronization: the three subsystems are evolving in lockstep.
One can also view the result geometrically: the diagonal

$\Delta = \{(x_1, x_2, x_3) \in \mathbb{R}^3 \mid x_1 = x_2 = x_3\}$

is an invariant subsystem of the continuous-time dynamical system defined by the differential equations. Remarkably enough, such a subsystem exists for any choice of a function $f \colon \mathbb{R}^2 \to \mathbb{R}$.
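To see this invariance concretely, here is a minimal numerical sketch in Python (my own illustration, not code from the paper; the coupling $f(u,v) = \sin(v) - u$ is an arbitrary choice, and the dependency pattern is the one of the running example as reconstructed above). Starting on the diagonal, the three subsystems stay in lockstep and agree with the reduced one-dimensional equation $\dot{s} = f(s,s)$.

```python
import numpy as np

def f(u, v):
    # an arbitrary smooth coupling; any choice of f works
    return np.sin(v) - u

def network_field(x):
    # the vector field of the running example on R^3
    x1, x2, x3 = x
    return np.array([f(x1, x3), f(x2, x1), f(x3, x1)])

def euler(field, x0, dt=1e-3, steps=5000):
    # crude forward-Euler integrator, good enough for an illustration
    x = np.array(x0, dtype=float)
    for _ in range(steps):
        x = x + dt * field(x)
    return x

x = euler(network_field, [0.7, 0.7, 0.7])              # start on the diagonal
s = euler(lambda s: np.array([f(s[0], s[0])]), [0.7])  # reduced equation ds/dt = f(s,s)

print(x)                 # the three components remain equal
print(abs(x[0] - s[0]))  # and agree with the reduced 1-d solution
```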
Where does such a synchronization or invariant subsystem come from? There is no apparent symmetry of $\mathbb{R}^3$ that preserves the differential equations and fixes the diagonal $\Delta$, and thus could account for this invariant subsystem. It turns out that what matters is the structure of the mutual dependencies of the three subsystems making up the big system. That is, the evolution of $x_1$ depends only on $x_1$ and $x_3$, the evolution of $x_2$ depends only on $x_2$ and $x_1$, and the evolution of $x_3$ depends only on $x_3$ and $x_1$.
These dependencies can be conveniently pictured as a directed graph:

[Figure: the directed graph $G$ with nodes $1$, $2$, $3$ and arrows $3 \to 1$, $1 \to 2$ and $1 \to 3$, one arrow into each node.]
The graph $G$ has no symmetries. Nonetheless, the existence of the invariant subsystem living on the diagonal $\Delta$ can be deduced from certain properties of the graph $G$. The key is the existence of a surjective map of graphs

$\varphi \colon G \to G'$

from our graph $G$ to a graph $G'$ with exactly one node, call it $b$, and one arrow. It is also crucial that $\varphi$ has the following lifting property: there is a unique way to lift the one and only arrow of $G'$ to an arrow of $G$ once we specify the target node of the lift.
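In combinatorial terms the lifting property says: the map is a graph homomorphism and, for every node, it restricts to a bijection between the arrows coming into that node and the arrows coming into its image. Here is a small Python sketch of that check (my own illustration; the graphs $G$ and $G'$ are encoded as reconstructed above).

```python
def is_fibration(G_edges, H_edges, node_map, edge_map):
    """G_edges, H_edges: dicts  edge name -> (source, target).
    node_map, edge_map: the candidate map of graphs G -> H.
    Returns True iff the map is a graph fibration."""
    # 1. the map must be a graph homomorphism
    for e, (s, t) in G_edges.items():
        if (node_map[s], node_map[t]) != H_edges[edge_map[e]]:
            return False
    # 2. for every node a of G, the incoming arrows of a must map
    #    bijectively onto the incoming arrows of node_map[a]
    for a in node_map:
        incoming = [e for e, (_, t) in G_edges.items() if t == a]
        incoming_of_image = [e for e, (_, t) in H_edges.items() if t == node_map[a]]
        if sorted(edge_map[e] for e in incoming) != sorted(incoming_of_image):
            return False
    return True

# G: nodes 1, 2, 3 with arrows 3 -> 1, 1 -> 2, 1 -> 3 (the running example as drawn above)
G_edges = {'e1': (3, 1), 'e2': (1, 2), 'e3': (1, 3)}
# G': a single node '*' with a single arrow (a loop)
H_edges = {'loop': ('*', '*')}
node_map = {1: '*', 2: '*', 3: '*'}
edge_map = {'e1': 'loop', 'e2': 'loop', 'e3': 'loop'}

print(is_fibration(G_edges, H_edges, node_map, edge_map))  # True
```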
We now formally define the notion of a ‘network of phase spaces’ and of a continuous-time dynamical system on such a network. Equivalently, we define a ‘network of continuous-time dynamical systems’. We start with a directed graph

$G = \{E \rightrightarrows V\}.$

Here $E$ is the set of edges, $V$ is the set of nodes, and the two arrows assign to an edge its source and target, respectively. To each node we attach a phase space (or more formally a manifold, perhaps with boundary or corners). Here ‘attach’ means that we choose a function $\mathcal{P}$ from $V$ to the collection of manifolds: it assigns to each node $a \in V$ a phase space $\mathcal{P}(a)$.

In our running example, to each node of the graph we attach the real line $\mathbb{R}$. (If we think of the set $V$ as a discrete category and of manifolds and smooth maps as a category $\mathsf{Man}$, then $\mathcal{P}$ is simply a functor.)
Thus a network of phase spaces is a pair $(G, \mathcal{P})$, where $G$ is a directed graph and $\mathcal{P}$ is an assignment of phase spaces to the nodes of $G$.

We think of the collection $\{\mathcal{P}(a)\}_{a \in V}$ as the collection of phase spaces of the subsystems constituting the network $(G, \mathcal{P})$. We refer to $\mathcal{P}$ as a phase space function. Since the state of the network should be determined completely and uniquely by the states of its subsystems, it is reasonable to take the total phase space of the network to be the product

$\mathbb{P}(G, \mathcal{P}) := \prod_{a \in V} \mathcal{P}(a).$

In the example the total phase space of the network $(G, \mathcal{P})$ is $\mathbb{P}(G, \mathcal{P}) = \mathbb{R}^3$, while the total phase space of the network $(G', \mathcal{P}')$ (the one-node graph $G'$ with $\mathbb{R}$ attached to its single node) is the real line $\mathbb{R}$.
Finally we need to interpret the arrows. An arrow in a graph $G$ going from a node $a$ to a node $b$ should encode the fact that the dynamics of the subsystem associated to the node $b$ depends on the states of the subsystem associated to the node $a$. To make this precise requires the notion of an ‘open system’, or ‘control system’, which we define below. It also requires a way to associate an open system to the set of arrows coming into a node/vertex $b$.
To a first approximation an open system (or control system; I use the two terms interchangeably) is a system of ODEs depending on parameters. I like to think of a control system geometrically: a control system on a phase space $M$ controlled by the phase space $U$ is a map

$F \colon U \times M \to TM,$

where $TM$ is the tangent bundle of the space $M$, so that for all $(u, m) \in U \times M$, $F(u, m)$ is a vector tangent to $M$ at the point $m$.

Given phase spaces $U$ and $M$, the set of all corresponding control systems forms a vector space which we denote by

$\mathrm{Crl}(U \times M, M).$

(More generally one can talk about the space of control systems associated with a surjective submersion $Q \to M$. For us, submersions of the form $U \times M \to M$ are general enough.)
To encode the incoming arrows, we introduce the input tree $I(a)$ of a node $a$ (this is a very short tree, a corolla if you like). The input tree of a node $a$ of a graph $G$ is a directed graph whose arrows are precisely the arrows of $G$ coming into the vertex $a$, but any two parallel arrows of $G$ with target $a$ will have disjoint sources in $I(a)$. In the example the input tree $I(b)$ of the one node $b$ of $G'$ is the tree

[Figure: the input tree $I(b)$: a single arrow from a copy of $b$ into $b$.]
There is always a map of graphs $\xi \colon I(a) \to G$. For instance, for the input tree in the example we just discussed, $\xi$ is the map that sends both nodes of $I(b)$ to the single node $b$ of $G'$ and the single arrow of $I(b)$ to the single arrow of $G'$.
Consequently if $(G, \mathcal{P})$ is a network and $\xi \colon I(a) \to G$ is the input tree of a node $a$ of $G$, then $(I(a), \mathcal{P} \circ \xi)$ is also a network. This allows us to talk about the phase space $\mathbb{P} I(a)$ of an input tree. In our running example every node of $G$ has exactly one incoming arrow, so $\mathbb{P} I(a) = \mathbb{R} \times \mathbb{R}$ for each node $a$.
Given a network $(G, \mathcal{P})$, there is a vector space $\mathrm{Crl}(\mathbb{P} I(a), \mathcal{P}(a))$ of open systems associated with every node $a$ of $G$. In our running example, the vector space associated to the one node $b$ of $G'$ is $\mathrm{Crl}(\mathbb{R} \times \mathbb{R}, \mathbb{R})$. In the same example, the network $(G, \mathcal{P})$ has three nodes and we associate the same vector space $\mathrm{Crl}(\mathbb{R} \times \mathbb{R}, \mathbb{R})$ to each one of them.
We then construct an interconnection map

$\mathcal{I} \colon \mathrm{Crl}(G, \mathcal{P}) \to \mathcal{X}(\mathbb{P}(G, \mathcal{P}))$

from the product

$\mathrm{Crl}(G, \mathcal{P}) := \prod_{a \in V} \mathrm{Crl}(\mathbb{P} I(a), \mathcal{P}(a))$

of the spaces of all control systems to the space of vector fields on the total phase space of the network. (We use the standard notation: we denote the tangent bundle of a manifold $M$ by $TM$ and the space of vector fields on $M$ by $\mathcal{X}(M)$.) In our running example the interconnection map for the network $(G', \mathcal{P}')$ is the map

$\mathcal{I}' \colon \mathrm{Crl}(\mathbb{R} \times \mathbb{R}, \mathbb{R}) \to \mathcal{X}(\mathbb{R}), \qquad \big(\mathcal{I}'(f)\big)(x) = f(x, x).$

The interconnection map for the network $(G, \mathcal{P})$ is the map

$\mathcal{I} \colon \mathrm{Crl}(\mathbb{R} \times \mathbb{R}, \mathbb{R})^3 \to \mathcal{X}(\mathbb{R}^3)$

given by

$\big(\mathcal{I}(f_1, f_2, f_3)\big)(x_1, x_2, x_3) = \big(f_1(x_1, x_3),\ f_2(x_2, x_1),\ f_3(x_3, x_1)\big).$
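Here is what these two interconnection maps and the pullback along the fibration look like in code (a sketch under the same assumptions as above: the coupling pattern of the running example, with an arbitrary choice of $f$). The final check is exactly the content of the theorem below, specialized to this example: the diagonal map $x \mapsto (x, x, x)$ intertwines the two interconnected vector fields.

```python
import math

def f(x, y):
    # one open system in Crl(R x R, R): a function of (own state, upstream state)
    return math.sin(y) - x

def interconnect_Gprime(g):
    # interconnection map for (G', P'): one node with a self-loop
    return lambda x: g(x, x)                       # a vector field on R

def interconnect_G(g1, g2, g3):
    # interconnection map for (G, P): nodes 1, 2, 3 with arrows 3->1, 1->2, 1->3
    return lambda x1, x2, x3: (g1(x1, x3), g2(x2, x1), g3(x3, x1))  # a vector field on R^3

def pullback(g):
    # pullback along the fibration G -> G': every node receives the same open system
    return (g, g, g)

X_small = interconnect_Gprime(f)
X_big = interconnect_G(*pullback(f))

# P(phi): R -> R^3 is the diagonal map x |-> (x, x, x); check that it relates the two fields
for x in (-1.0, 0.3, 2.5):
    assert X_big(x, x, x) == (X_small(x),) * 3
print("the diagonal map intertwines the two interconnected vector fields")
```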
To summarize: a dynamical system on a network of phase spaces is the data $(G, \mathcal{P}, (w_a)_{a \in V})$ where $G$ is a directed graph, $\mathcal{P}$ is a phase space function and $(w_a)_{a \in V}$ is a collection of open systems compatible with the graph and the phase space function, that is, $w_a \in \mathrm{Crl}(\mathbb{P} I(a), \mathcal{P}(a))$ for every node $a$. The corresponding vector field on the total phase space of the network is obtained by interconnecting the open systems.
Dynamical systems on networks can be made into the objects of a category. Carrying this out gives us a way to associate maps of dynamical systems to combinatorial data.
The first step is to form the category of networks of phase spaces. In this category, by definition, a morphism from a network $(G, \mathcal{P})$ to a network $(G', \mathcal{P}')$ is a map of directed graphs $\varphi \colon G \to G'$ which is compatible with the phase space functions: $\mathcal{P}' \circ \varphi = \mathcal{P}$.

Using the universal properties of products it is easy to show that a map of networks $\varphi \colon (G, \mathcal{P}) \to (G', \mathcal{P}')$ defines a map $\mathbb{P}\varphi$ of total phase spaces in the opposite direction:

$\mathbb{P}\varphi \colon \mathbb{P}(G', \mathcal{P}') \to \mathbb{P}(G, \mathcal{P}).$

In the language of category theory, the total phase space assignment extends to a contravariant functor from the category of networks of phase spaces to the category of manifolds. We call this functor the total phase space functor. In our running example, the map

$\mathbb{P}\varphi \colon \mathbb{R} \to \mathbb{R}^3$

is given by

$\mathbb{P}\varphi(x) = (x, x, x).$
Continuous-time dynamical systems also form a category, which we denote by $\mathsf{DS}$. The objects of this category are pairs $(M, X)$ consisting of a phase space $M$ and a vector field $X$ on this phase space. A morphism in this category is a smooth map of phase spaces that intertwines the two vector fields. That is, a smooth map $h \colon M \to N$ is a morphism from $(M, X)$ to $(N, Y)$ if

$Dh \circ X = Y \circ h.$

In general, morphisms in this category are difficult to determine explicitly. For example, if $\frac{d}{dt}$ is the standard vector field on an interval $(a, b)$, then a morphism from $\big((a, b), \frac{d}{dt}\big)$ to some dynamical system $(N, Y)$ is simply a piece of an integral curve of the vector field $Y$ defined on an interval $(a, b)$. And if $\frac{d}{d\theta}$ is the constant vector field on the circle $S^1$, then a morphism from $\big(S^1, \frac{d}{d\theta}\big)$ to $(N, Y)$ is a periodic orbit of $Y$. Proving that a given dynamical system has a periodic orbit is usually hard.
Consequently, given a map of networks $\varphi \colon (G, \mathcal{P}) \to (G', \mathcal{P}')$ and a collection of open systems on $(G', \mathcal{P}')$, we expect it to be very difficult, if not impossible, to find a collection of open systems on $(G, \mathcal{P})$ so that $\mathbb{P}\varphi$ is a map of dynamical systems.
It is therefore somewhat surprising that there is a class of maps of graphs for which the above problem has an easy solution! The graph maps of this class are known by several different names. Following Boldi and Vigna we refer to them as graph fibrations. Note that despite what the name suggests, graph fibrations in general are not required to be surjective on nodes or edges. For example, the inclusion

[Figure: an inclusion of a smaller graph into a larger graph that is a graph fibration.]
is a graph fibration. We say that a map of networks $\varphi \colon (G, \mathcal{P}) \to (G', \mathcal{P}')$ is a fibration of networks if the underlying map of graphs $\varphi \colon G \to G'$ is a graph fibration. With some work one can show that a fibration of networks induces a pullback map

$\varphi^* \colon \mathrm{Crl}(G', \mathcal{P}') \to \mathrm{Crl}(G, \mathcal{P})$

on the sets of tuples of the associated open systems.
on the sets of tuples of the associated open systems. The main result of Dynamics on networks of manifolds is a proof that for a fibration of networks the maps
and the two interconnection maps
and
are compatible:
Theorem. Let $\varphi \colon (G, \mathcal{P}) \to (G', \mathcal{P}')$ be a fibration of networks of manifolds. Then the pullback map

$\varphi^* \colon \mathrm{Crl}(G', \mathcal{P}') \to \mathrm{Crl}(G, \mathcal{P})$

is compatible with the interconnection maps

$\mathcal{I}' \colon \mathrm{Crl}(G', \mathcal{P}') \to \mathcal{X}(\mathbb{P}(G', \mathcal{P}'))$

and

$\mathcal{I} \colon \mathrm{Crl}(G, \mathcal{P}) \to \mathcal{X}(\mathbb{P}(G, \mathcal{P})).$

Namely, for any collection $w'$ of open systems on the network $(G', \mathcal{P}')$ the diagram

[Diagram: the square expressing that the vector fields $\mathcal{I}'(w')$ and $\mathcal{I}(\varphi^* w')$ are $\mathbb{P}\varphi$-related, that is, $T(\mathbb{P}\varphi) \circ \mathcal{I}'(w') = \mathcal{I}(\varphi^* w') \circ \mathbb{P}\varphi$.]

commutes. In other words,

$\mathbb{P}\varphi \colon \big(\mathbb{P}(G', \mathcal{P}'),\ \mathcal{I}'(w')\big) \to \big(\mathbb{P}(G, \mathcal{P}),\ \mathcal{I}(\varphi^* w')\big)$

is a map of continuous-time dynamical systems, a morphism in $\mathsf{DS}$.
At this point the pure mathematician in me is quite happy: I have a theorem! Better yet, the theorem solves the puzzle at the beginning of this post. But if you have read this far, you may well be wondering: “Ok, you told us about your theorem. Very nice. Now what?”
There is plenty to do. On the practical side of things, continuous-time dynamical systems are much too limited for contemporary engineers. Most of the engineers I know care a lot more about hybrid systems. These kinds of systems exhibit both continuous-time behavior and abrupt transitions, hence the name. For example, the anti-lock brakes on a car are just that kind of system: if a sensor detects that the car is skidding and the foot brake is pressed, it starts pulsing the brakes. This is not your father's continuous-time dynamical system! Hybrid dynamical systems are very hard to understand. They have been all the rage in the engineering literature for the last 10-15 years. Sadly, most pure mathematicians have never heard of them. It would be interesting to extend the theorem I am bragging about to hybrid systems.
Here is another problem that may be worth thinking about: how much of the theorem holds up to numerical simulation? My feeling is that any explicit numerical integration method will behave well. Implicit methods I am not sure about.
And finally a more general issue. John has been talking about networks quite a bit on this blog. But his networks and my networks look like very different mathematical structures. What do they have in common besides nodes and arrows? What mathematical structure are they glimpses of?
Eugene wrote:
Good question! They don’t look so different to me, since I’m working with about 6 different kinds of networks (reaction networks, Markov processes, electrical circuits, bond graphs, signal flow graphs, and Bayesian networks), and this looks like another member of the family. However, there’s a key point I’d need to resolve to feel sure of this.
How do you stick together two of your ‘dynamical systems on a network of phase spaces’? This is a crucial question because if I have a machine made of parts, I use this operation to stick together the parts and build the whole machine.
The point of networks, in my mind, is that they’re mainly graphs with labelled vertices and edges, so we can attach two of them by gluing vertices together, as long as the vertices have labels that match. This is simply a pushout of labelled graphs

$X \cup_S Y,$

where the labelled graph $S$ has no edges, only vertices, and it’s included both in $X$ and $Y$:

$X \hookleftarrow S \hookrightarrow Y.$
You define a network of phase spaces to be a graph with vertices labelled by phase spaces, so I see how to stick together two of these using the procedure I just described, and it makes perfect sense. Agreed?
But how do I stick together two ‘dynamical systems on a network of phase spaces’?
If I fully understood what you wrote, this question would probably be easy. I’d just look at one of your dynamical systems on a network of phase spaces, and see how to think of it as assembled from pieces.
John wrote:
I don’t see how to do it. The issue is that these dynamical systems are closed and it doesn’t quite make sense to interconnect two closed systems.
While networks are “directed graphs with labelled vertices and edges” the meaning of vertices and edges varies quite a bit from one context to another.
For example, some of the vertices in Petri nets (places) have states, while others (reactions) don’t. If you take a Petri net and turn it into a system of ODEs as you explained way back when, you get a dynamical system on a network, but the graph associated with this new network is quite different from the original Petri net and has a different meaning…
Eugene wrote:
Okay. But you mentioned ‘open systems’. So there should be some way to generalize your setup a bit so we can have a graph with phase spaces at vertices and some special vertices called ‘inputs’ and ‘outputs’ that behave differently than the rest, while the other vertices are handled as you do now. Only in the case when there are no inputs and no outputs will we get a closed system.
A simple case might be something like this:
A—>—B—>—C
Let’s say A is an input, C is an output and B is neither. All three get labelled with phase spaces—I’ll use the same letters for these phase spaces. Maybe the state at A is to be thought of as an arbitrary function of time… so no chosen vector field on its phase space. The state at B is controlled by the state at A… so we get a map from A × B to TB, such that (a,b) is mapped to a tangent vector at b. The state at C is controlled by B, I guess… so maybe we get a map from B × C to TC such that (b,c) is mapped to a tangent vector at c.
I don’t like the massive asymmetry between inputs and outputs here, but I do think we could take this setup and a similar setup
C—>—D—>—E
and ‘glue them together’ to get a similar setup
A—>—B—>—C—>—D—>—E
with A as input and E as output. Right?
I’m not claiming I have it perfected, but something vaguely like this has got to work, and for me it’s very important. A network is a system made of smaller parts, but to make this precise we need a rule for gluing together little networks to make bigger ones. And then networks will become morphisms in a category (or really a bicategory, but never mind: that stuff will take care of itself, once the basic idea is worked out).
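Here is one naive way to make that gluing rule concrete, just as a sketch (nothing from the papers under discussion; the names and data structures are made up for illustration): a chain is a list of phase-space labels together with one coupling map for each non-input node, and two chains are glued by matching the last label of the first with the first label of the second.

```python
def make_chain(labels, couplings):
    """labels: e.g. ['A', 'B', 'C'];  couplings[i] drives labels[i + 1]:
    it maps (state at labels[i], state at labels[i + 1]) to a velocity there."""
    assert len(couplings) == len(labels) - 1
    return {'labels': labels, 'couplings': couplings}

def glue(chain1, chain2):
    # glue the output node of chain1 to the input node of chain2
    assert chain1['labels'][-1] == chain2['labels'][0], "phase space labels must match"
    return make_chain(chain1['labels'] + chain2['labels'][1:],
                      chain1['couplings'] + chain2['couplings'])

# A ->- B ->- C  glued with  C ->- D ->- E  gives  A ->- B ->- C ->- D ->- E
g_AB = lambda a, b: a - b       # placeholder couplings, purely for illustration
g_BC = lambda b, c: b - c
g_CD = lambda c, d: c - d
g_DE = lambda d, e: d - e

left = make_chain(['A', 'B', 'C'], [g_AB, g_BC])
right = make_chain(['C', 'D', 'E'], [g_CD, g_DE])
print(glue(left, right)['labels'])   # ['A', 'B', 'C', 'D', 'E']
```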
Right, but that’s okay. I believe that many kinds of networks give bicategories where the morphisms are directed graphs with labelled vertices and edges, certain vertices being ‘inputs’ and ‘outputs’. The allowed labellings vary, and the meanings of these morphisms vary a lot. I’m developing various bicategories of this sort and functors relating them. So far the ones I understand best are those where the morphisms are just graphs, those where the morphisms are electrical circuits, those where the morphisms are signal flow diagrams, and those where the morphisms are bond graphs. Markov processes and Bayesian networks are next in line. The whole collection of all these bicategories, and the functors relating them, is my idea of ‘network theory’.
Hi John, I am thinking about your question and will post something in 4-5 hours. It’s complicated.
I agree: given a bunch of (continuous time) open systems and a collection of appropriate “wires” connecting outputs to inputs, we can get a new open system by interconnection.
This suggests that there is a symmetric colored operad floating around, something like the operad of wiring diagrams of Dylan Rupel and David Spivak (http://arxiv.org/abs/1307.6894v1). It’s not exactly this operad, for various technical reasons. David and I have been talking about this not-yet-existing operad for the last few weeks. The objects of this operad are boxes with wires (inputs and outputs), and the morphisms are given by putting small boxes in a big box and interconnecting the internal wires.
We would then think of the collections of open systems as an algebra over this operad. The objects of this algebra would be vector spaces and the interconnections would go to linear maps.
Every network of phase spaces in the sense of my paper with Lee would then define a morphism in this operad from the collection of the input trees of the network, wired appropriately (the wiring is induced by the underlying graph), to a closed box — the box with no inputs. This box would represent vector fields on a manifold.
But there is more: fibrations of networks of phase spaces give rise to a different kind of arrow in the operad. On the algebra side these arrows should go to something more general than linear maps. So we are getting a two-dimensional categorical structure that sounds a bit like a bicategory. To me it looks more like an algebra over a “double multicategory” (is there such a thing?).
Eugene wrote:
Great!
That’s probably true, but I prefer the formalism of categories for studying networks, so I would say “so there’s a symmetric monoidal category that has networks of continuous-time open systems with certain vertices denoted “inputs” and “outputs” as its morphisms.” And I would like to study this category! With you!
Mind you, I like operads for certain applications, but the framework of symmetric monoidal categories seems simpler and more powerful than the framework of “algebras of some operad that’s like the operad for wiring diagrams, but not exactly this, for technical reasons”. Once we’ve got a symmetric monoidal category we instantly know a lot of stuff.
(Of course, symmetric monoidal categories with a fixed set of objects are algebras of a certain colored operad! So in some sense it’s all a matter of taste. But I’ve got my taste and I’m gonna keep it.)
I haven’t heard about them yet, but people keep making up more things, so it’s just a matter of time.
Personally, I’d like to include “maps between networks of continuous-time dynamical systems” (and in particular fibrations) as 2-morphisms in my aforementioned symmetric monoidal category, boosting it to a symmetric monoidal bicategory. And luckily, my student Mike Stay has set up the necessary machinery to make it possible—and I believe even easy—to prove that this really is a symmetric monoidal bicategory!
John wrote:
I would be happy to work with you.
Great! Does what I’m saying make sense? Does it seem true to you? Obvious? Mysterious?
It makes sense to me in a vague kind of way. I would guess that the objects are open systems (or maybe more generally multisets of open systems) and morphisms are interconnections. The tensor on objects is forming an unordered list of them, or, more or less equivalently, forming products. One mystery: interconnecting looks strictly associative to me, since we are just composing functions. So why bicategories?
Something has been bugging me about this, but I was having a very hard time articulating exactly what it is that doesn’t feel right. Here, I think, is the issue in a nutshell: the functor that assigns to each manifold $M$ the vector space $\mathcal{X}(M)$ of vector fields on $M$ is not monoidal: the space of vector fields on a product $M \times N$ is not a product/direct sum of the spaces of vector fields on the factors $M$ and $N$.

By the way, it is a covariant functor. Fun puzzle: what’s the target category? Hint: it’s not Vect.
It would take more explanation for me to see precisely what problem you’re worrying about. When I take the product of two compact manifolds $M$ and $N$, I take the tensor product of their commutative algebras of smooth functions, $C^\infty(M)$ and $C^\infty(N)$, so $C^\infty$ is a contravariant symmetric monoidal functor.
Then, the $C^\infty(M \times N)$-module of vector fields on $M \times N$ is the ‘external’ tensor product of the $C^\infty(M)$-module of vector fields on $M$ and the $C^\infty(N)$-module of vector fields on $N$. I don’t know if ‘external’ is the right word, but I’m referring to the standard way to take a module over a commutative algebra $A$ and a module over a commutative algebra $B$ and tensor them to get a module over the commutative algebra $A \otimes B$, which is different from the usual way of taking two $A$-modules and tensoring them to get a new $A$-module. This gives a functor

$\boxtimes \colon \mathrm{Mod}(A) \times \mathrm{Mod}(B) \to \mathrm{Mod}(A \otimes B).$

There’s a way to take a tensor product of categories with finite colimits, sometimes called the ‘box tensor product’, and then I believe the above functor gives rise to an equivalence

$\mathrm{Mod}(A) \boxtimes \mathrm{Mod}(B) \simeq \mathrm{Mod}(A \otimes B),$

which is what I’m calling the ‘external tensor product’. So, I believe ‘Mod’ is also a symmetric monoidal (2-)functor in a suitable sense.
You may be more familiar with all of this in the geometric picture where we take a vector bundle over $M$ and a vector bundle over $N$ and tensor them to get a vector bundle over $M \times N$. This ‘external’ tensor product of vector bundles is different than (but closely related to) the perhaps more commonly discussed recipe where we take two vector bundles over a manifold and tensor them to get another vector bundle over the same manifold. The external tensor product of the tangent bundles of $M$ and $N$ is the tangent bundle of $M \times N$.

In short, I believe everything is ‘as monoidal as it should be’.
Eugene wrote:
I’m forgetting the terminology here (I forget exactly what’s an “open system” versus an “interconnection”), but I thought we agreed an object would be a collection of vertices labelled by phase spaces, while a morphism would be a directed graph with:
• each vertex labelled by a phase space
• each edge A—>—B labelled by a map from A × B to TB, such that (a,b) is mapped to a tangent vector at b
• some set of vertices denoted ‘inputs’ (these form the source of our morphism)
• some set of vertices denoted ‘outputs’ (these form the target of our morphism)
I gave the smallest nontrivial example here:
But it’s quite possible I’m doing something wrong, so go ahead and correct me if you think I might be!
You write:
Actually in the ‘category’ I just described, composition is only associative up to isomorphism. Indeed, even if we ignore all the phase spaces and maps A × B → TB, and form a ‘category’ where objects are collections of vertices and morphisms are graphs going from one set of vertices to another, composition is only associative up to isomorphism.
However, that sort of “associativity up to isomorphism” is fairly dull, so without much loss we can eliminate it by decategorifying: that is, taking isomorphism classes of morphisms, to squash the bicategory down to a category. That’s what Brendan Fong and I do in our study of electrical circuits: we start by using the work of Mike Stay to get a bicategory where morphisms are labelled graphs, but then we decategorify it, because we’re not really interested in the graph isomorphisms (yet). Indeed, everything so far is just reusing tricks from what I’m doing with Brendan.
The real reason for working with a bicategory is more interesting: you want to study “maps between networks of continuous-time dynamical systems”, in particular “fibrations”. Since the networks themselves give morphisms, these maps are 2-morphisms! I believe the technology developed by Mike Stay still applies.
Sorry, a mistake! I said a morphism in our category is a graph with:

But the second item is wrong. We want each ‘input tree’, consisting of all edges going from nodes labelled with phase spaces $A_1, \dots, A_n$ to the node labelled $B$, to be labelled by a map from $A_1 \times \cdots \times A_n \times B$ to $TB$, such that $(a_1, \dots, a_n, b)$ is mapped to a tangent vector at $b$. And in fact even this is not quite right: if the node labelled $B$ is one of those denoted an ‘input’, we do not want this extra data!
John wrote:
Sorry John, I don’t think we agree yet. Or, if you prefer, we are thinking of different algebraic structures. In fact I suspect you are thinking of two similar but different structures at the same time. After all at some point you said
Isn’t it usually the case that when you ignore something, you are writing down a functor?
The operadic approach of Rupel and Spivak suggests that the objects of whatever algebraic structure we are thinking of should be “spiders:”
little graphs consisting of one special vertex (which represents the phase space of an open system), a bunch of arrows coming in (inputs) and a bunch of arrows coming out (outputs or measurements).
This looks like the right choice to me. I suppose we should label everything with phase spaces (the inputs, the main node and the outputs).
Such a spider is part of our syntax. Given a finite collection of such spiders (for Rupel and Spivak my spider is a box trailing two types of wires: input wires and output wires), we can interconnect them by connecting an output wire/leg of one spider to the input wire/leg of another spider provided the phase space labels on the wires match. We get a bunch of interconnected spiders. We think of it as a new spider with one node, which is labelled by the product of the node labels of the constituent spiders. We think of going from several spiders to one big fat spider as a morphism in our colored operad. That’s the syntax.
The semantics is obtained by assigning to each spider the space of all possible open systems with the given inputs, given outputs and given internal phase space. The interconnection of spiders then corresponds to a map of sets. We then get an algebra over an operad.
But even if you don’t agree with what I wrote, could you possibly go into the nitty-gritty and explain where the non-associativity in your approach is coming from? I feel like I may be missing something important.
Eugene wrote:
Is this true even taking into account my correction to the post you replied to? Obviously it’s wrong to have wanted each edge $A \to B$ to be labelled by a map from $A \times B$ to $TB$; we need to use input trees, as described in my correction.
I suspect this is not what’s bugging you, though, since you write:
Whenever you form a category where the morphisms are ‘pieces of stuff’ (e.g. labelled graphs) and you compose them by ‘sticking them together’, you face this issue. ‘Sticking things together’ is taking a pushout, and pushouts are defined only up to a canonical isomorphism. So, there’s no way that sticking things together will give a strictly associative composition, unless we force it by declaring certain isomorphic things to be ‘equal’.
A bit more technically speaking, I’m talking about bicategories where a morphism from $X$ to $Y$ is a cospan, i.e. a diagram

$X \to S \leftarrow Y$

in some category $\mathcal{C}$. We think of $S$ as a ‘piece of stuff’, while $X$ and $Y$ are its ‘input end’ and ‘output end’.
For example, $\mathcal{C}$ could be a category of labelled graphs, so that $S$ is a labelled graph with ‘input end’ $X$ and ‘output end’ $Y$. We can demand that $X$ and $Y$ have only vertices, no edges. This is what I do in my work with Brendan Fong on electrical circuits.
Whenever $\mathcal{C}$ is a category with pushouts, we get a bicategory $\mathrm{Cospan}(\mathcal{C})$ where the objects are those of $\mathcal{C}$, the morphisms are cospans, and cospans are composed via pushout. Indeed this is one of the first nontrivial examples of a bicategory studied by Bénabou when he defined that concept! You can read a bit more about this idea here:
• Span, nLab.
where the dual concept is discussed: a span in a category with pullbacks. A span in $\mathcal{C}$ is the same as a cospan in $\mathcal{C}^{\mathrm{op}}$, so this is no big deal.
I keep pointing you to Mike Stay’s paper because he showed that $\mathrm{Cospan}(\mathcal{C})$ is symmetric monoidal when $\mathcal{C}$ has coproducts as well as pushouts. This is true in all the examples I’m talking about. My approach to network theory is based on his result.
Yes: we get a cospan bicategory when we start with the category of graphs, and another when we start with the category of graphs having some kind of labelling. There is a forgetful functor from the latter to the former when we forget the labelling. I thought that forgetting the labellings for a minute would make the nonassociativity of composition easier to visualize.
This is fine except that in collapsing an interesting graph with lots of nodes down to a spider with just one node, you’re forgetting the interesting structure of this graph. For example the graph may have symmetries, and there can be interesting maps from one graph to another, e.g. the fibrations you like. These show up as 2-morphisms in the bicategory I’m talking about.
John, thank you for explaining the main philosophy of your approach. (By the way, you seem to be flipping back and forth between the words “span” and “cospan.” Am I again missing something?)
I need to think about how the three approaches to networks (yours, Rupel–Spivak and mine with Lee DeVille) fit together. I think each of them must have its strengths and weaknesses, and it would be good to understand what those are. Personally I find it hard to understand somebody else’s framework, but David has been very good at explaining his, including what it’s supposed to do (rigorously define modularity) and where it’s coming from (finite state machines). Mind you, this is me garbling his explanations. Looks like you want me to go and read Stay’s paper. That may take some time.
On a slightly different subject: I don’t think what you call a mistake is actually a mistake: a node labelled with the phase space $B$ may have several inputs labelled with the same phase space $A$, and that’s OK. Moreover, it may be necessary for some purposes to allow such things.
(These reply columns are getting really skinny and it’s getting very hard for me to reply to the right comment. Is there a better venue to continue this discussion? An nLab page, perhaps?)
The synchronization is interesting; it seems that different time shifts in the coordinates do not affect the synchronization (I am thinking of Lissajous curves).

It seems like an inverse problem: usually a symmetry gives a geometrical constraint on the motion, while here an a posteriori constraint on the motion (on a surface or on a line) gives a constraint on the functions (which must be equal); so each symmetry of the motion (motion on a surface, or existence of invariants) should give a constraint on the functions (the inverse of Noether’s theorem).

I found some references online on the inversion of Noether’s theorem in the Lagrangian setting, but this could be true for any differential equation.

If I understand correctly, the synchronization condition is necessary for control in an open system, where a simpler system controls the complex one.
Not sure if I am parsing it correctly, but time shifts do matter. After all, what gets mapped around are integral curves of vector fields.
I don’t think of the theorem I stated as a converse of Noether’s theorem. For me it’s a weak variant: surjective fibrations give rise to invariant subsystems, which is a kind of conservation law.
And you are right. Of course, in the stable case it holds well, as the system converges to an equilibrium point on the diagonal.

But I have done a few quick simulations of unstable cases (I can send the pics/code if anyone is interested) and in fact it seems to hold surprisingly well even for unstable systems.
Upon a quick further investigation one can see that if the system is linear, then you can write it as

$\dot{x} = A x,$

where $x = (x_1, x_2, x_3)$, and it turns out that the $A$ matrix has the following structure: taking $f(u, v) = \alpha u + \beta v$ in the running example,

$A = \begin{pmatrix} \alpha & 0 & \beta \\ \beta & \alpha & 0 \\ \beta & 0 & \alpha \end{pmatrix},$

so every row has the same sum $\alpha + \beta$, and one of the eigenvectors of such a matrix is always along the diagonal, which means that the diagonal is invariant.
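A quick check of that observation (assuming the coupling pattern of the running example as reconstructed in this post, with the linear coupling $f(u, v) = \alpha u + \beta v$): the all-ones vector is an eigenvector of $A$, so the diagonal is invariant for the linear system.

```python
import numpy as np

alpha, beta = -1.0, 0.5    # arbitrary coefficients of the linear coupling f(u, v) = alpha*u + beta*v

# A encodes x1' = f(x1, x3),  x2' = f(x2, x1),  x3' = f(x3, x1)
A = np.array([[alpha, 0.0,   beta],
              [beta,  alpha, 0.0],
              [beta,  0.0,   alpha]])

ones = np.ones(3)
print(A @ ones)                                        # equals (alpha + beta) * ones
print(np.allclose(A @ ones, (alpha + beta) * ones))    # True: the diagonal is A-invariant
```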
For nonlinear systems I only have some geometric intuition about what might happen to preserve the structure along the diagonal.

However, perhaps by expressing the nonlinear functions as Taylor series in a neighborhood of the diagonal one might be able to show that the terms that would alter the structure cancel out (or don’t even exist). Sorry about the fuzzy language; I hope I conveyed the main idea.
I was being too cagey when I said
I have a proof written down that the analogue of the main theorem I described holds for discrete-time dynamical systems. Since explicit numerical methods, such as RK4, are discrete-time dynamical systems, they should, in theory, preserve the diagonal in the example. I say “in theory” for two reasons: I don’t understand the errors introduced by rounding off and, more generally, “while in theory there is no difference between theory and practice, in practice things work a little differently.” :)
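For what it’s worth, here is a small sketch of that claim for the running example (my own code, with an arbitrary choice of $f$ and the coupling pattern used throughout this post): on the diagonal, one classical RK4 step applies identical floating-point updates to all three components, so the diagonal is preserved exactly.

```python
import math

def f(u, v):
    return math.sin(v) - u            # arbitrary smooth coupling

def field(x):
    x1, x2, x3 = x
    return (f(x1, x3), f(x2, x1), f(x3, x1))

def rk4_step(x, h):
    # one step of the classical fourth-order Runge-Kutta method
    k1 = field(x)
    k2 = field(tuple(xi + 0.5 * h * ki for xi, ki in zip(x, k1)))
    k3 = field(tuple(xi + 0.5 * h * ki for xi, ki in zip(x, k2)))
    k4 = field(tuple(xi + h * ki for xi, ki in zip(x, k3)))
    return tuple(xi + h / 6.0 * (a + 2 * b + 2 * c + d)
                 for xi, a, b, c, d in zip(x, k1, k2, k3, k4))

x = (0.7, 0.7, 0.7)                   # start on the diagonal
for _ in range(1000):
    x = rk4_step(x, 0.01)
print(x)
print(x[0] == x[1] == x[2])           # True: the diagonal is preserved exactly
```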
On the related subject: do you know of an example of an implicit numerical scheme that doesn’t preserve the diagonal in this example?
Actually, it seems that implicit solvers (either fixed or variable step) can’t preserve the diagonal in the unstable case.

I have tried ode23t, ode23s and ode15s, and they all diverge from the diagonal to a noticeable extent compared with non-stiff solvers.
Thanks. That’s interesting and helpful.
Eugene wrote:
Sure, I hope it’s starting to make sense. I plan to write a long series of blog articles about it as part of the network theory series, but I haven’t gotten started yet.
Whoops—I caught and fixed a couple of typos in my latest comment, where I wrote ‘span’ when I should have written ‘cospan’. But you’ll still see the word ‘span’ a lot in what I wrote, because a lot of the mathematical literature focuses on spans.
It’s no big deal, because any general result about spans is also a result about cospans. As you probably know, a cospan in $\mathcal{C}$ is exactly the same thing as a span in $\mathcal{C}^{\mathrm{op}}$. Composing cospans via pushout in $\mathcal{C}$ is exactly the same thing as composing spans via pullback in $\mathcal{C}^{\mathrm{op}}$.
When I think about a chunk of stuff that has two ends, I think of it as a cospan

$X \to S \leftarrow Y,$

since the ends are included in the chunk of stuff. But that means I can often restrict the state of some chunk of stuff to its ends, and that gives me a span.
Whee! Contravariant functors are fun—I never know whether I’m coming or going!
I prefer to have my conversations here, where more people will see them and join the revolution. If you want a fatter reply column, just go down to the very very bottom of the blog and reply there, as I’m doing now.
On the other hand, if we actually start writing a paper, the nLab or an Azimuth Wiki experiment page would be one way to do that. They’re technically almost identical; they’re both run by Andrew Stacey.
Have you come across the “port-Hamiltonian systems” formalism from control engineering? I’m only just getting started exploring it, but as I understand it, it is a method to decompose dynamical systems into networks of open systems while preserving physical invariants like conservation of energy, so you get to keep a lot of the nice geometry of Hamiltonian systems: http://www.icm2006.org/proceedings/Vol_III/contents/ICM_Vol_3_65.pdf
There are links to thermodynamics and entropy in the theory as well: http://arxiv.org/abs/1308.1213
I know about them, and Eugene knows even more about them. Eugene’s formalism presented here is more general, but it doesn’t incorporate conservation of energy, which one certainly wants in some situations. You could say he’s studying networks of general dynamical systems, but we also want symplectic and Poisson versions of this theory. Port-hamiltonian systems are a big step in that direction, but I would love to see a more mathematical treatment that really treats these systems as networks. They should have an underlying graph, there should be rules for sticking them together, etc.
To add to John’s comment: I don’t understand the global geometric structure behind port-Hamiltonian formalism. It is said in many sources that the structure is a Dirac structure (more careful practitioners would say “Courant algebroids”) but that’s not quite right, mainly because there is dissipation. I have spent quite a bit of time being confused by that — how could a Dirac structure have dissipation? In the port-Hamiltonian formalism dissipation is modeled by adding a metric-like structure. I don’t know enough physics to judge if this is a good way to go. My impression is that dissipative systems are notoriously hard to model well. I’d love to be persuaded that the so-called metri-plectic structures (i.e. metric and symplectic) and their Courant algebroid analogues are good models of reality.
While listening to this talk, I thought the way in which one CRN emulates another in Cardelli’s formalism looks awfully similar to the way one dynamical system emulates another in Eugene Lerman’s formalism…
Eugene Lerman is especially interested in classical mechanics and networked dynamical systems, and he wrote an introductory article about them here on the Azimuth […]
This May, a small group of mathematicians is going to a workshop on the categorical foundations of network theory, organized by Jacob Biamonte. I’m trying to get us mentally prepared for this. We all have different ideas, yet they should fit together somehow.
Tobias Fritz, Eugene Lerman and David Spivak have all written articles here about their work, though I suspect Eugene will have a lot of completely new things to say, too. Now it’s time for me to say what my students and I have been doing.
I’m way late to this party, but thank you to Professor Lerman for writing this! I’ve been trying to work my way through the iterations of this paper in detail for several years now and the overview is a great help.
A couple of points worth making from the applied side (if anyone actually returns to the thread!):
First, Stewart showed – and I hope I’m translating this correctly into the Boldi/Vigna terminology – that the epimorphic fibrations of a given graph form a complete lattice. Since this lattice is isomorphic to a lattice of synchrony subspaces for the system, it makes sense to consider the subspace complements as well: these are spaces (for instance, the complement of the diagonal in the running example) that I’m calling ‘cosynchrony subspaces’. It’s unclear to me whether there’s a connection to graph morphisms (cofibrations?) for this dual lattice.
Second, exponential synchronization (see e.g. Pham’s and Slotine’s work) is just the contraction of a corresponding cosynchrony subspace, so we can also consider contraction of synchrony spaces themselves (‘cosynchronization’). In both cases, we can assert stability of the subspace through simple operations on the dynamical system’s Jacobian matrix.
In other words, for an explicit but freely parametrized system, e.g. a network of model neurons with unknown connection weights, we can 1) find interesting subspaces combinatorially, via graph fibrations and subspace complementarity, and 2) find parametric conditions that guarantee dynamical convergence onto one or another of those subspaces.
Finally, I believe H. Ruan has been working on the question of “what happens when we add more things to an existing network” and also has a much more detailed picture of network structure’s influence on things like phase-locking.
Thanks for the comment! I informed Eugene about it.