Network Theory (Part 29)

I’m talking about electrical circuits, but I’m interested in them as models of more general physical systems. Last time we started seeing how this works. We developed an analogy between electrical circuits and physical systems made of masses and springs, with friction:

| Electronics | Mechanics |
|---|---|
| charge: Q | position: q |
| current: I = \dot{Q} | velocity: v = \dot{q} |
| flux linkage: \lambda | momentum: p |
| voltage: V = \dot{\lambda} | force: F = \dot{p} |
| inductance: L | mass: m |
| resistance: R | damping coefficient: r |
| inverse capacitance: 1/C | spring constant: k |

But this is just the first of a large set of analogies. Let me list some, so you can see how wide-ranging they are!

More analogies

People in system dynamics often use effort as a term to stand for anything analogous to force or voltage, and flow as a general term to stand for anything analogous to velocity or electric current. They call these variables e and f.

To me it’s important that force is the time derivative of momentum, and velocity is the time derivative of position. Following physicists, I write momentum as p and position as q. So, I’ll usually write effort as \dot{p} and flow as \dot{q}.

Of course, ‘position’ is a term special to mechanics; it’s nice to have a general term, applicable in any context, for the thing whose time derivative is flow. People in system dynamics seem to use displacement as that general term.

It would also be nice to have a general term for the thing whose time derivative is effort… but I don’t know one. So, I’ll use the word momentum.

Now let’s see the analogies! Let’s see how displacement q, flow \dot{q}, momentum p and effort \dot{p} show up in several subjects:

| | displacement: q | flow: \dot{q} | momentum: p | effort: \dot{p} |
|---|---|---|---|---|
| Mechanics: translation | position | velocity | momentum | force |
| Mechanics: rotation | angle | angular velocity | angular momentum | torque |
| Electronics | charge | current | flux linkage | voltage |
| Hydraulics | volume | flow | pressure momentum | pressure |
| Thermal physics | entropy | entropy flow | temperature momentum | temperature |
| Chemistry | moles | molar flow | chemical momentum | chemical potential |
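
If you like to tinker, here is this chart as a tiny data structure. This is just a sketch of my own: the row and column names come from the chart above, but the encoding and the translate helper are invented for illustration.

```python
# The analogy chart as a Python dictionary: each row maps the four general
# roles (displacement, flow, momentum, effort) to that subject's own names.
CHART = {
    "mechanics (translation)": dict(displacement="position", flow="velocity",
                                    momentum="momentum", effort="force"),
    "mechanics (rotation)": dict(displacement="angle", flow="angular velocity",
                                 momentum="angular momentum", effort="torque"),
    "electronics": dict(displacement="charge", flow="current",
                        momentum="flux linkage", effort="voltage"),
    "hydraulics": dict(displacement="volume", flow="flow",
                       momentum="pressure momentum", effort="pressure"),
    "thermal physics": dict(displacement="entropy", flow="entropy flow",
                            momentum="temperature momentum", effort="temperature"),
    "chemistry": dict(displacement="moles", flow="molar flow",
                      momentum="chemical momentum", effort="chemical potential"),
}

def translate(quantity: str, source: str, target: str) -> str:
    """Carry a quantity across the analogy, e.g. 'force' -> 'voltage'."""
    role = next(role for role, name in CHART[source].items() if name == quantity)
    return CHART[target][role]

print(translate("force", "mechanics (translation)", "electronics"))  # voltage
```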

We’d been considering mechanics of systems that move along a line, via translation, but we can also consider mechanics for systems that turn round and round, via rotation. So, there are two rows for mechanics here.

There’s a row for electronics, and then a row for hydraulics, which is closely analogous. In this analogy, a pipe is like a wire. The flow of water plays the role of current. Water pressure plays the role of electrostatic potential. The difference in water pressure between two ends of a pipe is like the voltage across a wire. When water flows through a pipe, the power equals the flow times this pressure difference—just as in an electrical circuit the power is the current times the voltage across the wire.
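
A quick unit check confirms that this product really is a power:

\displaystyle{ \frac{\mathrm{m}^3}{\mathrm{s}} \times \frac{\mathrm{N}}{\mathrm{m}^2} = \frac{\mathrm{N} \cdot \mathrm{m}}{\mathrm{s}} = \mathrm{watts} }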

A resistor is like a narrowed pipe:

An inductor is like a heavy turbine placed inside a pipe: this makes the water tend to keep flowing at the same rate it’s already flowing! In other words, it provides a kind of ‘inertia’ analogous to mass.

A capacitor is like a tank with pipes coming in from both ends, and a rubber sheet dividing it in two lengthwise:

When studying electrical circuits as a kid, I was shocked when I first learned that capacitors don’t let the electrons through: it didn’t seem likely you could do anything useful with something like that! But of course you can. Similarly, this gizmo doesn’t let the water through.

A voltage source is like a compressor set up to maintain a specified pressure difference between the input and output:

Similarly, a current source is like a pump set up to maintain a specified flow.

Finally, just as voltage is the time derivative of a fairly obscure quantity called ‘flux linkage’, pressure is the time derivative of an even more obscure quantity which has no standard name. I’m calling it ‘pressure momentum’, thanks to the analogy

momentum: force :: pressure momentum: pressure

Just as pressure has units of force per area, pressure momentum has units of momentum per area!
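
To see why, note that momentum is the time integral of force, so integrating pressure over time gives

\displaystyle{ \int P \, dt = \int \frac{F}{A} \, dt = \frac{1}{A} \int F \, dt = \frac{p}{A} }

at least in this quick check, where the area A is held constant.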

People invented this analogy back when they were first struggling to understand electricity, before electrons had been observed:

• Hydraulic analogy, Wikipedia.

The famous electrical engineer Oliver Heaviside pooh-poohed this analogy, calling it the “drain-pipe theory”. I think he was making fun of William Henry Preece. Preece was another electrical engineer, who liked the hydraulic analogy and disliked Heaviside’s fancy math. In his inaugural speech as president of the Institution of Electrical Engineers in 1893, Preece proclaimed:

True theory does not require the abstruse language of mathematics to make it clear and to render it acceptable. All that is solid and substantial in science and usefully applied in practice, have been made clear by relegating mathematic symbols to their proper store place—the study.

According to the judgement of history, Heaviside made more progress in understanding electromagnetism than Preece. But there’s still a nice analogy between electronics and hydraulics. And I’ll eventually use the abstruse language of mathematics to make it very precise!

But now let’s move on to the row called ‘thermal physics’. We could also call this ‘thermodynamics’. It works like this. Say you have a physical system in thermal equilibrium and all you can do is heat it up or cool it down ‘reversibly’—that is, while keeping it in thermal equilibrium all along. For example, imagine a box of gas that you can heat up or cool down. If you put a tiny amount dE of energy into the system in the form of heat, then its entropy increases by a tiny amount dS. And they’re related by this equation:

dE = TdS

where T is the temperature.

Another way to say this is

\displaystyle{ \frac{dE}{dt} = T \frac{dS}{dt} }

where t is time. On the left we have the power put into the system in the form of heat. But since power should be ‘effort’ times ‘flow’, on the right we should have ‘effort’ times ‘flow’. It makes some sense to call dS/dt the ‘entropy flow’. So temperature, T, must play the role of ‘effort’.

This is a bit weird. I don’t usually think of temperature as a form of ‘effort’ analogous to force or torque. Stranger still, our analogy says that ‘effort’ should be the time derivative of some kind of ‘momentum’. So, we need to introduce temperature momentum: the integral of temperature over time. I’ve never seen people talk about this concept, so it makes me a bit nervous.

But when we have a more complicated physical system like a piston full of gas in thermal equilibrium, we can see the analogy working. Now we have

dE = TdS - PdV

The change in energy dE of our gas now has two parts. There’s the change in heat energy TdS, which we saw already. But now there’s also the change in energy due to compressing the piston! When we change the volume of the gas by a tiny amount dV, we put in energy -PdV.

Now look back at the first chart I drew! It says that pressure is a form of ‘effort’, while volume is a form of ‘displacement’. If you believe that, the equation above should help convince you that temperature is also a form of effort, while entropy is a form of displacement.

But what about the minus sign? That’s no big deal: it’s the result of some arbitrary conventions. P is defined to be the outward pressure of the gas on our piston. If this is positive, reducing the volume of the gas takes a positive amount of energy, so we need to stick in a minus sign. I could eliminate this minus sign by changing some conventions—but if I did, the chemistry professors at UCR would haul me away and increase my heat energy by burning me at the stake.

Speaking of chemistry: here’s how the chemistry row in the analogy chart works. Suppose we have a piston full of gas made of different kinds of molecules, and there can be chemical reactions that change one kind into another. Now our equation gets fancier:

\displaystyle{ dE = TdS - PdV + \sum_i  \mu_i dN_i }

Here N_i is the number of molecules of the ith kind, while \mu_i is a quantity called a chemical potential. The chemical potential simply says how much energy it takes to increase the number of molecules of a given kind. So, we see that chemical potential is another form of effort, while number of molecules is another form of displacement.

But chemists are too busy to count molecules one at a time, so they count them in big bunches called ‘moles’. A mole is the number of atoms in 12 grams of carbon-12. That’s roughly

602,214,150,000,000,000,000,000

atoms. This is called Avogadro’s constant. If we used 1 gram of hydrogen, we’d get a very close number called ‘Avogadro’s number’, which leads to lots of jokes:

(He must be desperate because he looks so weird… sort of like a mole!)

So, instead of saying that the displacement in chemistry is called ‘number of molecules’, you’ll sound more like an expert if you say ‘moles’. And the corresponding flow is called molar flow.

The truly obscure quantity in this row of the chart is the one whose time derivative is chemical potential! I’m calling it chemical momentum simply because I don’t know another name.

Why are linear and angular momentum so famous compared to pressure momentum, temperature momentum and chemical momentum?

I suspect it’s because the laws of physics are symmetrical under translations and rotations. When the assumptions of Noether’s theorem hold, this guarantees that the total momentum and angular momentum of a closed system are conserved. Apparently the laws of physics lack the symmetries that would make the other kinds of momentum be conserved.

This suggests that we should dig deeper and try to understand more deeply how this chart is connected to ideas in classical mechanics, like Noether’s theorem or symplectic geometry. I will try to do that sometime later in this series.

More generally, we should try to understand what gives rise to a row in this analogy chart. Are there lots of rows I haven’t talked about yet, or just a few? There are probably lots. But are there lots of practically important rows that I haven’t talked about—ones that can serve as the basis for new kinds of engineering? Or does something about the structure of the physical world limit the number of such rows?

Mildly defective analogies

Engineers care a lot about dimensional analysis. So, they often make a big deal about the fact that while effort and flow have different dimensions in different rows of the analogy chart, the following four things are always true:

pq has dimensions of action (= energy × time)
\dot{p} q has dimensions of energy
p \dot{q} has dimensions of energy
\dot{p} \dot{q} has dimensions of power (= energy / time)

In fact any one of these things implies all the rest.
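
If you want to check this sort of bookkeeping by machine, here is a minimal sketch using the pint units library. The choice of library and the sample rows are mine, purely for illustration:

```python
# Check that effort x flow has dimensions of power in several rows of the
# chart, using the pint units library (pip install pint).
import pint

ureg = pint.UnitRegistry()

rows = {
    "mechanics":   (1 * ureg.newton, 1 * ureg.meter / ureg.second),    # force, velocity
    "electronics": (1 * ureg.volt,   1 * ureg.ampere),                 # voltage, current
    "hydraulics":  (1 * ureg.pascal, 1 * ureg.meter**3 / ureg.second), # pressure, flow
}

for name, (effort, flow) in rows.items():
    # .to() raises a DimensionalityError if the product is not a power
    print(f"{name}: effort x flow = {(effort * flow).to(ureg.watt)}")
```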

These facts are important when designing ‘mixed systems’, which combine different rows in the chart. For example, in mechatronics, we combine mechanical and electronic elements in a single circuit! And in a hydroelectric dam, power is converted from hydraulic to mechanical and then electric form:

One goal of network theory should be to develop a unified language for studying mixed systems! Engineers have already done most of the hard work. And they’ve realized that thanks to conservation of energy, working with pairs of flow and effort variables whose product has dimensions of power is very convenient. It makes it easy to track the flow of energy through these systems.

However, people have tried to extend the analogy chart to include ‘mildly defective’ examples where effort times flow doesn’t have dimensions of power. The two most popular are these:

| | displacement: q | flow: \dot{q} | momentum: p | effort: \dot{p} |
|---|---|---|---|---|
| Heat flow | heat | heat flow | temperature momentum | temperature |
| Economics | inventory | product flow | economic momentum | product price |

The heat flow analogy comes up because people like to think of heat flow as analogous to electrical current, and temperature as analogous to voltage. Why? Because an insulated wall acts a bit like a resistor! The current flowing through a resistor is a function of the voltage across it. Similarly, the heat flowing through an insulated wall is approximately proportional to the difference in temperature between the inside and the outside.

However, there’s a difference. Current times voltage has dimensions of power. Heat flow times temperature does not have dimensions of power. In fact, heat flow by itself already has dimensions of power! So, engineers feel somewhat guilty about this analogy.

Being a mathematical physicist, I see a possible way out: use units where temperature is dimensionless! In fact such units are pretty popular in some circles. But I don’t know if this solution is a real one, or whether it causes some sort of trouble.

In the economic example, ‘energy’ has been replaced by ‘money’. In other words, ‘inventory’ times ‘product price’ has units of money. And so does ‘product flow’ times ‘economic momentum’! I’d never heard of economic momentum before I started studying these analogies, but I didn’t make up that term. It’s the thing whose time derivative is ‘product price’. Apparently economists have noticed a tendency for rising prices to keep rising, and falling prices to keep falling… a tendency toward ‘conservation of momentum’ that doesn’t fit into their models of rational behavior.

I’m suspicious of any attempt to make economics seem like physics. Unlike elementary particles or rocks, people don’t seem to be very well modelled by simple differential equations. However, some economists have used the above analogy to model economic systems. And I can’t help but find that interesting—even if intellectually dubious when taken too seriously.

An auto-analogy

Beside the analogy I’ve already described between electronics and mechanics, there’s another one, called ‘Firestone’s analogy’:

• F.A. Firestone, A new analogy between mechanical and electrical systems, Journal of the Acoustical Society of America 4 (1933), 249–267.

Alain Bossavit pointed this out in the comments to Part 27. The idea is to treat current as analogous to force instead of velocity… and treat voltage as analogous to velocity instead of force!

In other words, switch your p’s and q’s:

| Electronics | Mechanics (usual analogy) | Mechanics (Firestone’s analogy) |
|---|---|---|
| charge | position: q | momentum: p |
| current | velocity: \dot{q} | force: \dot{p} |
| flux linkage | momentum: p | position: q |
| voltage | force: \dot{p} | velocity: \dot{q} |

This new analogy is not ‘mildly defective’: the product of effort and flow variables still has dimensions of power. But why bother with another analogy?

It may be helpful to recall this circuit from last time:

It’s described by this differential equation:

L \ddot{Q} + R \dot{Q} + C^{-1} Q = V

We used the ‘usual analogy’ to translate it into a classical mechanics problem, and we got a problem where an object of mass L hangs from a spring with spring constant 1/C and damping coefficient R, and feels an additional external force F:

m \ddot{q} + r \dot{q} + k q = F
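
Here is a small numerical sketch of this translation (my own illustration, with made-up component values): it integrates both equations with SciPy and confirms that the charge Q(t) and the position q(t) trace out the same curve once we set m = L, r = R, k = 1/C and F = V.

```python
# Integrate the LRC circuit equation and its mass-spring-damper translation,
# and check that the trajectories coincide. Component values are arbitrary.
import numpy as np
from scipy.integrate import solve_ivp

L_, R_, C_ = 1.0, 0.5, 2.0          # inductance, resistance, capacitance
V = lambda t: np.sin(t)             # applied voltage

def circuit(t, y):
    Q, Qdot = y                     # charge and current
    return [Qdot, (V(t) - R_ * Qdot - Q / C_) / L_]

m, r, k = L_, R_, 1.0 / C_          # mass, damping coefficient, spring constant
F = V                               # external force

def mass_spring(t, y):
    q, qdot = y                     # position and velocity
    return [qdot, (F(t) - r * qdot - k * q) / m]

t_eval = np.linspace(0, 20, 201)
sol_c = solve_ivp(circuit, (0, 20), [0.0, 0.0], t_eval=t_eval)
sol_m = solve_ivp(mass_spring, (0, 20), [0.0, 0.0], t_eval=t_eval)
print(np.max(np.abs(sol_c.y[0] - sol_m.y[0])))   # ~0: same trajectory
```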

And that’s fine. But there’s an intuitive sense in which all three forces are acting ‘in parallel’ on the mass, rather than in series. In other words, all side by side, instead of one after the other.

Using Firestone’s analogy, we get a different classical mechanics problem, where the three forces are acting in series. The spring is connected to a source of friction, which in turn is connected to an external force.

This may seem a bit mysterious. But instead of trying to explain it, I’ll urge you to read his paper, which is short and clearly written. Here I want to make a somewhat different point: we can take a mechanical system, convert it to an electrical one following the usual analogy, and then convert back to a mechanical one using Firestone’s analogy. This gives us an ‘auto-analogy’ between mechanics and itself, which switches p and q.

And although I haven’t been able to figure out why from Firestone’s paper, I have other reasons for feeling sure this auto-analogy should contain a minus sign. For example:

p \mapsto q, \qquad q \mapsto -p

In other words, it should correspond to a 90° rotation in the (p,q) plane. There’s nothing sacred about whether we rotate clockwise or counterclockwise; we can equally well do this:

p \mapsto -q, \qquad q \mapsto p

But we need the minus sign to get a so-called symplectic transformation of the (p,q) plane. And from my experience with classical mechanics, I’m pretty sure we want that. If I’m wrong, please let me know!
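
Here is the check. Under p \mapsto q, q \mapsto -p the symplectic form is preserved:

dp \wedge dq \mapsto dq \wedge d(-p) = -\, dq \wedge dp = dp \wedge dq

while without the minus sign we would get dp \wedge dq \mapsto dq \wedge dp = -\, dp \wedge dq, which flips the sign instead of preserving it.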

I have a feeling we should revisit this issue when we get more deeply into the symplectic aspects of circuit theory. So, I won’t go on now.

References

The analogies I’ve been talking about are studied in a branch of engineering called system dynamics. You can read more about it here:

• Dean C. Karnopp, Donald L. Margolis and Ronald C. Rosenberg, System Dynamics: a Unified Approach, Wiley, New York, 1990.

• Forbes T. Brown, Engineering System Dynamics: a Unified Graph-Centered Approach, CRC Press, Boca Raton, 2007.

• Francois E. Cellier, Continuous System Modelling, Springer, Berlin, 1991.

System dynamics already uses lots of diagrams of networks. One of my goals in weeks to come is to explain the category theory lurking behind these diagrams.

37 Responses to Network Theory (Part 29)

  1. amarashiki says:

    Hi John:

    You say

    “(…)we need to introduce temperature momentum: the integral of temperature over time. I’ve never seen people talk about this concept, so it makes me a bit nervous.(…)”

    That is imprecise: I have talked about integrals of classical variables in mechanics. I don’t know if you have read one of the best posts, I think ;) (since it is one of the most visited entries), I have ever written on my blog:

    LOG#053. Derivatives of position.

    You can easily understand that I have thought about that issue in the “mirror space” of Mechanics. The integral of temperature w.r.t. time can be seen as the thermodynamical analogue of the “mechanical absement”!

    Usually, people believe that the only usual kinematical/dynamical variables are the n-th derivatives (especially the 1st and 2nd, which are the most important according to Galilean/Newtonian/relativistic or even quantum theories). However, some “more interesting” variables do appear when you perform “integrals”. Absement is one; the time integral of the reciprocal of position (something like a quantum momentum p=1/x with h=1) is the presement! Perhaps the invention of infinitesimal calculus, and the fact that we focus (generally speaking) only on the first, second or third derivative of “position”, is just an accident of choosing “position” as the main variable. I have my own suspicions about these facts…

    Imagine an exotic exoplanet where the ETI discover calculus not by differentiation but by “integration”. How would the ETI write the classical equations of motion? And the quantum counterparts?

    With respect to the interpretation of

    A=\int T dt

    beyond being interpreted as “a momentum”, you can also interpret it as the “farness” of temperature during some time interval. Moreover, note that if you introduce the Boltzmann constant (natural units) there, then you get something that is pretty much like an action/angular momentum!

    In fact, making the full analogy with the mechanical definitions, another interesting magnitude would be the thermodynamical analogue of the presement, something like

    B=\int \dfrac{1}{T}dt

    Essentially, introducing again a Boltzmann constant (in the denominator of course this time), we would get the time integral of the celebrated \beta=(k_BT)^{-1}. In this case, you have something like \sim \mbox{TIME}/\mbox{ENERGY}=\mbox{POWER}^{-1}. This magnitude would measure the “nearness” in the thermodynamical sense!

    In summary, we get:

    1) Your idea of the integral of temperature with respect to time is the analogue of the mechanical absement.

    2) That “momentum” of temperature somehow measures the “farness” from thermodynamical equilibrium as t flows!

    3) The analogue of presement can also be worked out. It shows that the nearness to thermodynamical equilibrium is somehow related to the inverse of the energy flow, a.k.a. power.

    What do you think?

  2. Aaron Denney says:

    Wouldn’t H -> -H reverse time, and thus give a sign flip to momentum?

  3. Joerg Paul says:

    Very interesting article! I wonder if it is possible to find dualities in the other parts of physics, like Firestone’s analogy. In rotational mechanics this is easy: just swap angle and angular momentum. But are there useful dualities in hydraulics, thermal physics or chemistry?

  4. Bossavit says:

    The idea [of Firestone’s analogy] is to treat current as analogous to force instead of velocity… and treat voltage as analogous to velocity instead of force!

    I believe your mechanical ‘auto-analogy’ will prove essential to understand this puzzle. But reasoning on *lumped* dynamical systems may hide things that would look clearer in the context of the mechanics of continua. I mean this:

    – Calling q the charge density and j the current density, we have

    (1) d_t q + div j = 0 (charge conservation)

    – Calling p the density of momentum and s the stress tensor, we have (in the absence of body forces, that would otherwise appear in the right-hand side)

    (2) d_t p + div s = 0 (momentum conservation)

    (The sign convention I use in (2) is not the standard one: The standard stress tensor is minus s. Doing this will enhance the analogy.) The difference between (1) and (2) lies in q being scalar whereas p is vector-valued (or rather, covector-valued, since the force field f = d_t p is better conceived as a field of covectors). But apart from this difference, (1) and (2) are strikingly parallel: Integrating them over a bounded domain D will give a balance of “stuff”, electric charge in the case of (1), momentum in the case of (2).

    Now, take the integral form of (1) and (2) and let D collapse to a point (“lumping”). One obtains

    (1′) d_t [charge inside] = [current flowing in]

    (2′) d_t [momentum inside] = [force exerted by outside matter]

    and there we are: “current analogous to force”, indeed, and charge analogous to momentum.

  5. Frederik De Roo says:

    P is defined to be the outward pressure of the gas on our piston. If this is positive, reducing the volume of the gas takes a positive amount of energy, so we need to stick in a minus sign. I could eliminate this minus sign by changing some conventions

    If you use tension instead of pressure, you might get rid of the minus sign without provoking colleagues at UCR?

    Actually, even though you mention “on our piston”, I find the term ‘outward pressure’ somewhat confusing, because I would define outward and inward with respect to the (outward) normal of a small element inside the gas; so for me pressure would point inward and tension outward.

  6. Arrow says:

    My bet would be the analogies are a consequence of the fact all kinds of motion have the same fundamental origin – whatever it is.

    • John Baez says:

      Yes, I agree. I’d say it’s mostly about Hamiltonian and Lagrangian mechanics, which are the usual ways of understanding motion at the classical (i.e. non-quantum) level… but with a few big extra twists: we’re studying open systems, we’re treating them as networks, and we’re allowing dissipation. What I’m doing is warming up to present a mathematical theory of this stuff… which I’m still busy trying to learn/invent.

      • amarashiki says:

        I agree too, but with an addition… I believe that there is something BIG in all this analogy, and I think the key idea is to consider, or better understand, the origin of those kinds of motion. And my conjecture is that there is something else beyond classical Hamiltonian/Lagrangian mechanics, even if we consider “dissipation”… If the origin of the analogies is that the origin of the motions is the same, what kind of physical principle “catches” all that? I mean, it can hardly be something related to classical mechanics, since it “extends” it. What kind of symmetry or invariance transformation is playing here?

        By the way, I have some questions for John… What about these “generalized momenta” of temperature:

        A^{(n)} = \int \int \cdots \int T \, d^n t

        or

        B^{(n)} = \int \int \cdots \int (1/T) \, d^n t ?

        (1) What happens in the limit n\longrightarrow \infty? Can it make some sense?

        (2) What would happen in the case of the n-th jet derivatives d^n T/dt^n?

        Remark: In Cosmology, the statefinder variables in the Hubble luminosity distance relation relate important variables like the Hubble parameter,or some densities with the derivatives of the scale factor! So it seems that higher order derivatives make sense too in a “cosmomechanical” framework!

      • domenico says:

        I wrote this result on another blog, so it is a repeat of one of my theories (I try to write only new ideas here).

        Each differential equation has a Hamiltonian, if we double the number of variables:

        0 = F(y, \dot y, \ddot y, \cdots)

        0 = \frac{d}{dt} F(y, \dot y, \ddot y, \cdots) = \dot y \, \partial_y F + \ddot y \, \partial_{\dot y} F + \cdots

        \left\{ \begin{array}{l} y = y_0 \\ \dot y_{j} = y_{j+1} \\ \dot y_n = G(y_0, y_1, \cdots, y_{n-1}) \end{array} \right.

        H = p_n G(y_0, y_1, \cdots, y_{n-1}) + \sum^{n-1}_{j=0} p_j y_{j+1}

        \left\{ \begin{array}{l} \dot y_0 = \frac{\partial H}{\partial p_0} = y_{1} \\ \dot p_0 = -\frac{\partial H}{\partial y_0} = -p_{n} \, \partial_{0} G(y_0, y_1, \cdots, y_{n-1}) \\ \dot y_j = \frac{\partial H}{\partial p_j} = y_{j+1} \\ \dot p_j = -\frac{\partial H}{\partial y_j} = -p_{j-1} - p_{n} \, \partial_j G(y_0, y_1, \cdots, y_{n-1}) \\ \dot y_n = \frac{\partial H}{\partial p_n} = G(y_0, y_1, \cdots, y_{n-1}) \\ \dot p_n = -\frac{\partial H}{\partial y_n} = -p_{n-1} \end{array} \right.

        So it is possible to write a Hamiltonian for each differential equation, and this can describe a dissipative system.

        The interesting thing is that the Hamilton–Jacobi equation and the Schrödinger equation are equal for this differential equation (classical system = quantum system):

        0 = \partial_t \Psi + H(t, y_j, \partial_j \Psi)

  7. John Baez says:

    Over on G+, Alex Golden wrote:

    This reminds me of the study of Dirac structures, a sort of generalization of Hamiltonian systems that allow I/O relationships.

    I replied:

    Yes, that’s a topic I plan to talk about! I found this book pretty helpful:

    • Vincent Duindam, Alessandro Macchelli, Stefano Stramigioli and Herman Bruyninckx, eds., Modeling and Control of Complex Physical Systems: The Port-Hamiltonian Approach, Springer, Berlin, 2009.

    If you know other good sources, I’d be happy to hear about them.

    I turned up this free paper:

    • Arjan van der Schaft and J. Cervera, Composition of Dirac structures and control of port-Hamiltonian systems, http://www3.nd.edu/~mtns/papers/10432_1.pdf

    and Jess Robertson turned up an easier paper by one of the same authors:

    • Arjan van der Schaft, Port-Hamiltonian systems: an introductory survey.

    • Eugene says:

      I would treat the claim that port-Hamiltonian systems have Dirac geometry under the hood with caution. Until last summer I more or less accepted this claim at face value and spent some time and effort to understand it. For conservative systems with non-holonomic constraints, Dirac structures look like the right framework, but may be overkill. But for nonlinear systems with external forces and phase spaces with nontrivial topology, I don’t think it works. For a bit I hoped that Courant algebroids would be enough, but again, I don’t see how to fit in external forces/dissipation nicely.

      Way back in the 1980s, Brockett suggested that there should be something called a Hamiltonian control system. But in practice these systems are of the form H + \mu_1 G_1 + \ldots + \mu_n G_n, where H is your Hamiltonian, the G_i‘s are other functions on your phase space, and the \mu_i‘s are the control variables. A more general geometric definition doesn’t seem to exist.

    • John Baez says:

      Eugene wrote:

      I would treat the claim that port-Hamiltonian systems have Dirac geometry under the hood with caution. Until last summer I more or less accepted this claim at face value and spent some time and effort to understand it.

      Thanks for the warning! Much as it may seem I’ve forgotten, I have your work in mind and I’m slowly creeping toward the point of studying it more carefully and discussing it here. I am going to start by doing some things with 1) electrical networks built from only linear resistors and 2) electrical networks built from only linear ‘passive’ elements, e.g. resistors, inductors and capacitors. I get the impression that Dirac structures are able to handle these cases, even though the resistors introduce dissipation. Do you agree?

      I think there’s lots of fun left in linear systems, but I eventually want to do nonlinear ones, and then I’ll really need your help.

      • Eugene says:

        I should add that what confused me were differences in terminology. Dirac structures were defined by Ted Courant and Alan Weinstein in the early 1990s as a simultaneous generalization of Poisson and symplectic geometry. But in the engineering/applied math literature “Dirac structure” is used more loosely. In particular it seems to include Riemannian metrics and combinations of metrics and Poisson tensors. In particular, whenever you see dissipation there is a symmetric tensor involved (or, more precisely, a sum of a(n almost) Poisson tensor and a metric), and this is not a Dirac structure in the sense of Courant–Weinstein. The terms ‘metriplectic’ and ‘Leibniz’ also get thrown around and seem to roughly mean the same thing, as far as I can tell.

      • John Baez says:

        Thanks. In the review article I cited, in Section 2.3.3, van der Schaft uses Dirac structures to describe a class of port-Hamiltonian systems with dissipation. I’m hoping this formalism is appropriate for describing linear systems with dissipation, like circuits made of linear capacitors, inductors and resistors. Do you know if it is?

        • Eugene says:

          I used to be very confused by this paper. I still am. The paper does prominently feature Dirac structures (as defined by Courant and Weinstein, and as commonly understood in Poisson geometry), but then there is equation (29) on p. 1349, which has this funny R(x) term. And it is not something you would/should see in a Hamiltonian system defined by a Dirac structure. Another confusing thing about the survey is that flows and efforts are not generalized velocities and momenta; they don’t live in the so-called Pontryagin bundle. They live as sections of a certain vector bundle and its dual (a trivial vector bundle in van der Schaft’s setup). So what is really being talked about, or so it seems to me, is a generalization of a Courant algebroid-like structure with a metric term. The closest description I have seen in the literature is that of a Leibniz–Dirac structure (cf. arXiv:1210.1042 [math.DG]). But the geometry seems to be that of a Leibniz–Courant algebroid, as yet undefined in the literature. Looks like you’ll have to invent it!

  8. Uncle Al says:

    I suspect it’s because the laws of physics are symmetrical under translations and rotations. When the assumptions of Noether’s theorem hold, this guarantees that the total momentum and angular momentum of a closed system are conserved. Symmetry under rotation, vacuum isotropy, is rigorously true for massless boson photons. They detect no vacuum anisotropy, refraction, dispersion, dichroism, gyrotropy (arxiv:1208.5288, 0912.5057, 0905.1929, 0706.2031, 1006.1376, 1106.1068). Observation suggests the vacuum is trace chiral anisotropic toward fermionic matter:

    Baryogenesis Sakharov conditions obtain if cosmic inflation was pseudoscalar false vacuum decay, resolving the Weak interaction. Inflated spacetime is trace chiral anisotropic toward fermionic matter. 1) Parity “violations” are intrinsic. 2) Noetherian connection between vacuum isotropy and angular momentum conservation leaks, hence MOND’s 1.2×10^(-10) m/s^2 Milgrom acceleration, ending dark matter. 3) SUSY and quantum gravitation, despite rigorous persuasive mathematics, empirically fail as written. Strop Occam’s razor with observation.

    Opposite shoes fit into trace chiral vacuum with different energies. They locally vacuum free fall along trace non-identical minimum action trajectories, violating the Equivalence Principle. Crystallography’s opposite shoes are chemically and macroscopically identical, single crystal test masses in enantiomorphic space groups: P3(1)21 versus P3(2)21 alpha-quartz or P3(1) versus P3(2) gamma-glycine. Run geometric Eötvös experiments.

    Microwave spectrometers are more accessible. Racemic chiral molecular rotors launched at identical spin temperatures diverge spin temperatures when moving through a vacuum chiral background. Vacuum supersonic expand helium-entrained vapor of racemic D_3-trishomocuban-4-one, an intensely geometrically chiral rigid rotor, to initial <5 kelvin rotation temperature in a chirped pulse FT microwave spectrometer. If enantiomers' rotation temperature spectra diverge, Einstein-Cartan-Sciama-Kibble gravitation's chiral spacetime torsion is validated. Somebody should look. The worst they can do is succeed, explaining everything.


    DOI: 10.1055/s-0031-1289708

  9. dcorfield says:

    “Preece…disliked Heaviside’s fancy math”. Which just goes to show how relative such judgements are. Heaviside didn’t like the ‘fancy math’ of quaternions, describing them as “antiphysical and unnatural” in opposition to proponents such as Tait.

  10. Daniel Mahler says:

    I wonder if there is an analogy like this for information theory? It might relate to the Thermal Physics analogy via entropy. This could tie in with Charles Bennett’s work on thermodynamics of computation and the cost of erasing information. Maybe the Cramer-Rao bound would turn up as well.

    • Daniel Mahler says:

      What applying this analogy to information theory would mean is unclear, but that is part of the question (I am fishing :)). Here is one thought on how it might play out in machine learning. Suppose we have a space of data and a space of models; then the model parameters would be the positions and the data variables would be the forces, i.e. the data provides information on how to improve the model. The loss function being optimized might then play the role of something like energy.

    • John Baez says:

      Information is proportional to entropy, and this analogy is used in machine learning, especially in MaxEnt approaches. In particular, when we choose the probability distribution that maximizes entropy subject to some constraints, we are finding a Gibbs state, and that instantly gives us equations that generalize the one I mentioned in this blog article:

      \displaystyle{ d E = T d S - P d V + \sum_i \mu_i dN_i }

      So, I think we could adapt the ‘Thermal Physics’ row of the chart to include a lot of ideas from machine learning. My two posts on Classical Mechanics versus Thermodynamics, and my series on Information Geometry, should give some clues as to how this works.

      So yes, we should pursue this aspect of the analogy! I’ve been invited to teach a tutorial on information geometry at NIPS 2013, a conference on Neural Information Processing Systems that takes place at Lake Tahoe this December. So I’ve got a great excuse to think about how networks and information theory fit together.

  11. Marcus Urruh says:

    Dear John Baez, I am very curious what you would think about this work by Thomas Etter, “Dynamical Markov States and the Quantum Core”, where he claims he can very simply produce full quantum density matrix formalism from pure statistics of Markov Processes.

    Or did you already consider these things?

    It may be the missing link for your theory, if it is not flawed somewhere.

    Please have a look and tell me if you can make something out of this.

    Slides from a talk:

    Click to access Dynamical_Markov.pdf

    Here is a longer paper elaborating on some of his basic ideas.

    PROCESS, SYSTEM, CAUSALITY, AND QUANTUM MECHANICS
    A Psychoanalysis of Animal Faith
    Tom Etter

    Click to access PSCQM.pdf

    ABSTRACT
    I shall argue in this paper that a central piece of modern physics does not really belong to physics at all but to elementary probability theory. Given a joint probability distribution D on a set of random variables containing x and y, define a link between x and y to be the condition x=y on D. Define the state S of a link x=y as the joint probability distribution matrix on x and y without the link. The two core laws of quantum mechanics are the Born probability rule, and the unitary dynamical law whose best known form is the Schrödinger’s equation. Von Neumann formulated these two laws in the language of Hilbert space as prob(P) = trace(PS) and S’T = TS respectively, where P is a projection, S and S’ are density matrices, and T is a unitary transformation. We’ll see that if we regard link states as density matrices, the algebraic forms of these two core laws occur as completely general theorems about links. When we extend probability theory by allowing cases to count negatively, we find that the Hilbert space framework of quantum mechanics proper emerges from the assumption that all S’s are symmetrical in rows and columns. On the other hand, Markovian systems emerge when we assume that one of every linked variable pair has a uniform probability distribution. By representing quantum and Markovian structure in this way, we see clearly both how they differ, and also how they can coexist in natural harmony with each other, as they must in quantum measurement, which we’ll examine in some detail. Looking beyond quantum mechanics, we see how both structures have their special places in a much larger continuum of formal systems that we have yet to look for in nature.

    All the best,
    Marcus

  12. Hamilton says:

    Hi John!

    Thanks for the excellent article! Have you seen the work of Gabriel Kron, who applied similar reasoning to model the Schrödinger equation using circuits?

    • Gabriel Kron, Electric circuit models of the Schrödinger equation, Phys. Rev. 67 (1945), 39–43.

  13. Jacques says:

    Here is a great article by Rosen about analogous systems.

    http://link.springer.com/article/10.1007/BF02476608

    He seems to take the view that it is always possible to construct an analogy (at least between any physical system and any other subsystem), and that such analogies are not unique and not “special”, in the sense that there is no universal implication about nature in our analogies :(

  14. Ali Moharrer says:

    As I understood Peter Rowlands (reading his Zero to Infinity physics book), he argued for a way to bridge our understanding of the limits of conservative systems (and classical fields) by allowing conservative and non-conservative systems (also fields) to form a dual, as part of a non-dual representation of Nature. It appears that Nature allows for a counter-intuitive co-existence of both measurable and non-measurable characterizations. What is the break in the symmetries that differentiates an ideal flow (described by the Euler equation) from a real one (the Navier–Stokes equations)? Can these two systems (a conservative and a dissipative one) somehow possess other kinds of unifying features, with symmetry principles being special cases in that framework?

  15. In this paper, we study the props for various kinds of electrical circuits:

    • John Baez, Brandon Coya and Franciscus Rebro, Props in network theory.

    Engineers have been using diagrammatic methods for circuits since time immemorial. And in the 1940s, Olson explained how to apply circuit diagrams to networks of mechanical, hydraulic, thermodynamic and chemical components:

    • Harry F. Olson, Dynamical Analogies, Van Nostrand, New York, 1943.

    By 1961, Paynter had made the analogies between these various systems mathematically precise using ‘bond graphs’:

    • Henry M. Paynter, Analysis and Design of Engineering Systems, MIT Press, Cambridge, Massachusetts, 1961.

    Here he shows a picture of a hydroelectric power plant, and the bond graph that abstractly describes it:

    By 1963, Forrester was using circuit diagrams in economics:

    • Jay Wright Forrester, Industrial Dynamics, MIT Press, Cambridge, Massachusetts, 1961.

    In 1984, Odum published a beautiful and influential book on their use in biology and ecology:

    • Howard T. Odum, Ecological and General Systems: An Introduction to Systems Ecology, Wiley, New York, 1984.

    Energy Systems Symbols

    We can use props to study circuit diagrams of all these kinds! The underlying mathematics is similar in each case, so we focus on just one example: electrical circuits. For other examples, take a look at this:

    • John Baez, Network theory (part 29), Azimuth, 23 April 2013.

