Network Theory (Part 28)

Last time I left you with some puzzles. One was to use the laws of electrical circuits to work out what this one does:

If we do this puzzle, and keep our eyes open, we’ll see an analogy between electrical circuits and classical mechanics! And this is the first of a huge set of analogies. The same math shows up in many different subjects, whenever we study complex systems made of interacting parts. So, it should become part of any general theory of networks.

This simple circuit is very famous: it’s called a series RLC circuit, because it has a resistor of resistance R, an inductor of inductance L, and a capacitor of capacitance C, all hooked up ‘in series’, meaning one after another. But to understand this circuit, it’s good to start with an even simpler one, where we leave out the voltage source:

This has three edges, so reading from top to bottom there are 3 voltages V_1, V_2, V_3, and 3 currents I_1, I_2, I_3, one for each edge. The white and black dots are called ‘nodes’, and the white ones are called ‘terminals’: current can flow in or out of those.

The voltages and currents obey a bunch of equations:

• Kirchhoff’s current law says the current flowing into each node that’s not a terminal equals the current flowing out:

I_1 = I_2 = I_3

• Kirchhoff’s voltage law says there are potentials \phi_0, \phi_1, \phi_2, \phi_3, one for each node, such that:

V_1 = \phi_0 - \phi_1

V_2 = \phi_1 - \phi_2

V_3 = \phi_2 - \phi_3

In this particular problem, Kirchhoff’s voltage law doesn’t say much, since we can always find potentials obeying this, given the voltages. But in other problems it can be important. And even here it suggests that the sum V_1 + V_2 + V_3 will be important; this is the ‘total voltage across the circuit’.

Next, we get one equation for each circuit element:

• The law for a resistor says:

V_1 = R I_1

• The law for an inductor says:

\displaystyle{ V_2 = L \frac{d I_2}{d t} }

• The law for a capacitor says:

\displaystyle{ I_3 = C \frac{d V_3}{d t} }

These are all our equations. What should we do with them? Since I_1 = I_2 = I_3, it makes sense to call all these currents simply I and solve for each voltage in terms of this. Here’s what we get:

V_1 = R I

\displaystyle{ V_2 = L \frac{d I}{d t} }

\displaystyle {V_3 = C^{-1} \int I \, dt }

So, if we know the current flowing through the circuit we can work out the voltage across each circuit element!

Well, not quite: in the case of the capacitor we only know it up to a constant, since there’s a constant of integration. This may seem like a minor objection, but it’s worth taking seriously. The point is that the charge on the capacitor’s plate is proportional to the voltage across the capacitor:

\displaystyle{V_3 = C^{-1} Q }

When electrons move on or off the plate, this charge changes, and we get a current:

\displaystyle{I = \frac{d Q}{d t} }

So, we can work out the time derivative of V_3 from the current I, but to work out V_3 itself we need the charge Q.

Treat these as definitions if you like, but they’re physical facts too! And they let us rewrite our trio of equations:

V_1 = R I

\displaystyle{ V_2 = L \frac{d I}{d t} }

\displaystyle{V_3 = C^{-1} \int I \, dt }

in terms of the charge, as follows:

V_1 = R \dot{Q}

V_2 = L \ddot{Q}

V_3 = C^{-1} Q

Then if we add these three equations, we get

V_1 + V_2 + V_3 = L \ddot Q + R \dot Q + C^{-1} Q

So, if we define the total voltage by

V = V_1 + V_2 + V_3 = \phi_0 - \phi_3

we get

L \ddot Q + R \dot Q + C^{-1} Q = V

And this is great!

Why? Because this equation is famous! If you’re a mathematician, you know it as the most general second-order linear ordinary differential equation with constant coefficients. But if you’re a physicist, you know it as the damped driven oscillator.
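
As a sanity check on this equation, here’s a minimal numerical sketch (Python, standard library only; the component values L = 1, R = 0.5, C = 1 and the RK4 integrator are illustrative choices, not from the post): with the voltage source switched off, the charge on an underdamped circuit should oscillate and decay.

```python
# Illustrative component values (not from the post): an underdamped circuit.
L, R, C = 1.0, 0.5, 1.0

def V(t):
    return 0.0  # no voltage source: watch the free oscillation decay

# State y = (Q, I): dQ/dt = I and, from L*Q'' + R*Q' + Q/C = V,
# dI/dt = (V - R*I - Q/C) / L.
def deriv(t, y):
    Q, I = y
    return (I, (V(t) - R * I - Q / C) / L)

def rk4_step(t, y, h):
    k1 = deriv(t, y)
    k2 = deriv(t + h / 2, tuple(a + h / 2 * b for a, b in zip(y, k1)))
    k3 = deriv(t + h / 2, tuple(a + h / 2 * b for a, b in zip(y, k2)))
    k4 = deriv(t + h, tuple(a + h * b for a, b in zip(y, k3)))
    return tuple(a + h / 6 * (p + 2 * q + 2 * r + s)
                 for a, p, q, r, s in zip(y, k1, k2, k3, k4))

y, t, h = (1.0, 0.0), 0.0, 0.01  # unit charge on the capacitor, no current
for _ in range(2000):            # integrate out to t = 20
    y = rk4_step(t, y, h)
    t += h
# With R > 0 the charge should have decayed almost completely by now.
```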

The analogy between electronics and mechanics

Here’s an example of a damped driven oscillator:

We’ve got an object hanging from a spring with some friction, and an external force pulling it down. Here the external force is gravity, so it’s constant in time, but we can imagine fancier situations where it’s not. So in a general damped driven oscillator:

• the object has mass m (and the spring is massless),

• the spring constant is k (this says how strong the spring force is),

• the damping coefficient is r (this says how much friction there is),

• the external force is F (in general a function of time).

Then Newton’s law says

m \ddot{q} + r \dot{q} + k q = F

And apart from the use of different letters, this is exactly like the equation for our circuit! Remember, that was

L \ddot Q + R \dot Q + C^{-1} Q = V

So, we get a wonderful analogy relating electronics and mechanics! It goes like this:

Electronics                Mechanics
charge: Q                  position: q
current: I = \dot{Q}       velocity: v = \dot{q}
voltage: V                 force: F
inductance: L              mass: m
resistance: R              damping coefficient: r
inverse capacitance: 1/C   spring constant: k

If you understand mechanics, you can use this to get intuition about electronics… or vice versa. I’m more comfortable with mechanics, so when I see this circuit:

I imagine a current of electrons whizzing along, ‘forced’ by the voltage across the circuit, getting slowed by the ‘friction’ of the resistor, wanting to continue their motion thanks to the inertia or ‘mass’ of the inductor, and getting stuck on the plate of the capacitor, where their mutual repulsion pushes back against the flow of current—just like a spring fights back when you pull on it! This lets me know how the circuit will behave: I can use my mechanical intuition.

The only mildly annoying thing is that the inverse of the capacitance C is like the spring constant k. But this makes perfect sense. A capacitor is like a spring: you ‘pull’ on it with voltage and it ‘stretches’ by building up electric charge on its plate. If its capacitance is high, it’s like an easily stretchable spring. But this means the corresponding spring constant is low.

Besides letting us transfer intuition and techniques, the other great thing about analogies is that they suggest ways of extending themselves. For example, we’ve seen that current is the time derivative of charge. But if we hadn’t, we could still have guessed it, because current is like velocity, which is the time derivative of something important.

Similarly, force is analogous to voltage. But force is the time derivative of momentum! We don’t have momentum on our chart. Our chart is also missing the thing whose time derivative is voltage. This thing is called flux linkage, and is sometimes denoted \lambda. So we should add this, and momentum, to our chart:

Electronics                 Mechanics
charge: Q                   position: q
current: I = \dot{Q}        velocity: v = \dot{q}
flux linkage: \lambda       momentum: p
voltage: V = \dot{\lambda}  force: F = \dot{p}
inductance: L               mass: m
resistance: R               damping coefficient: r
inverse capacitance: 1/C    spring constant: k

Fourier transforms

But before I get carried away talking about analogies, let’s try to solve the equation for our circuit:

L \ddot Q + R \dot Q + C^{-1} Q = V

This instantly tells us the voltage V as a function of time if we know the charge Q as a function of time. So, ‘solving’ it means figuring out Q if we know V. You may not care about Q—it’s the charge of the electrons stuck on the capacitor—but you should certainly care about the current I = \dot{Q}, and figuring out Q will get you that.

Besides, we’ll learn something good from solving this equation.

We could solve it using either the Laplace transform or the Fourier transform. They’re very similar. For some reason electrical engineers prefer the Laplace transform—does anyone know why? But I think the Fourier transform is conceptually preferable, slightly, so I’ll use that.

The idea is to write any function of time as a linear combination of oscillating functions \exp(i\omega t) with different frequencies \omega. More precisely, we write our function f as an integral

\displaystyle{ f(t) = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty \hat{f}(\omega) e^{i\omega t} \, d\omega }

Here the function \hat{f} is called the Fourier transform of f, and it’s given by

\displaystyle{ \hat{f}(\omega) = \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty f(t) e^{-i\omega t} \, dt }

There is a lot one could say about this, but all I need right now is that differentiating a function has the effect of multiplying its Fourier transform by i\omega. To see this, we simply take the Fourier transform of \dot{f}:

\begin{array}{ccl}  \hat{\dot{f}}(\omega) &=& \displaystyle{  \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty \frac{df(t)}{dt} \, e^{-i\omega t} \, dt } \\  \\  &=& \displaystyle{ -\frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty f(t) \frac{d}{dt} e^{-i\omega t} \, dt } \\  \\  &=& \displaystyle{ i\omega \; \frac{1}{\sqrt{2 \pi}} \int_{-\infty}^\infty f(t) e^{-i\omega t} \, dt } \\  \\  &=& i\omega \hat{f}(\omega) \end{array}

where in the second step we integrate by parts. So,

\hat{\dot{f}}(\omega) = i\omega \hat{f}(\omega)
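
If you want to see this derivative rule concretely, here is a small sketch using a naive discrete Fourier transform (Python, standard library only; the sample count and frequency are arbitrary choices, not from the post): sampling the pure oscillation e^{3it} over one period, the Fourier coefficient of its derivative is i\omega times the coefficient of the function itself.

```python
import cmath

N, T = 64, 2 * cmath.pi                       # samples over one period
ts = [n * T / N for n in range(N)]
f  = [cmath.exp(3j * t) for t in ts]          # pure oscillation, omega_0 = 3
df = [3j * cmath.exp(3j * t) for t in ts]     # its exact derivative

def dft(xs):
    """Naive DFT, normalized so a pure tone gives coefficient 1."""
    n = len(xs)
    return [sum(x * cmath.exp(-2j * cmath.pi * k * m / n)
                for m, x in enumerate(xs)) / n
            for k in range(n)]

F, dF = dft(f), dft(df)
# Only the k = 3 coefficient is (essentially) nonzero, and dF[3] = 3i * F[3]:
# differentiating multiplied the transform by i*omega, as claimed.
```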

The Fourier transform is linear, too, so we can start with our differential equation:

L \ddot Q + R \dot Q + C^{-1} Q = V

and take the Fourier transform of each term, getting

\displaystyle{ \left((i\omega)^2 L + (i\omega) R + C^{-1}\right) \hat{Q}(\omega) = \hat{V}(\omega) }

We can now solve for the charge in a completely painless way:

\displaystyle{  \hat{Q}(\omega) =  \frac{1}{((i\omega)^2 L + (i\omega) R + C^{-1})} \, \hat{V}(\omega) }

Well, we actually solved for \hat{Q} in terms of \hat{V}. But if we’re good at taking Fourier transforms, this is good enough. And it has a deep inner meaning.

To see its inner meaning, note that the Fourier transform of an oscillating function \exp(i \omega_0 t) is a delta function at the frequency \omega = \omega_0. This says that this oscillating function is purely of frequency \omega_0, like a laser beam of one pure color, or a sound of one pure pitch.

Actually there’s a little fudge factor due to how I defined the Fourier transform: if

f(t) = e^{i\omega_0 t}

then

\displaystyle{ \hat{f}(\omega) = \sqrt{2 \pi} \, \delta(\omega - \omega_0) }

But it’s no big deal. (You can define your Fourier transform so the 2\pi doesn’t show up here, but it’s bound to show up somewhere.)

Also, you may wonder how the complex numbers got into the game. What would it mean to say the voltage is \exp(i \omega t)? The answer is: don’t worry, everything in sight is linear, so we can take the real or imaginary part of any equation and get one that makes physical sense.

Anyway, what does our relation

\displaystyle{  \hat{Q}(\omega) =  \frac{1}{((i\omega)^2 L + (i\omega) R + C^{-1})} \hat{V}(\omega) }

mean? It means that if we put an oscillating voltage of frequency \omega_0 across our circuit, like this:

V(t) = e^{i \omega_0 t}

then we’ll get an oscillating charge at the same frequency, like this:

\displaystyle{  Q(t) =  \frac{1}{((i\omega_0)^2 L + (i\omega_0) R + C^{-1})}  e^{i \omega_0 t}  }

To see this, just use the fact that the Fourier transform of \exp(i \omega_0 t) is essentially a delta function at \omega_0, and juggle the equations appropriately!
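
Here is a quick check of that claim by direct substitution (Python, standard library only; the component values and drive frequency are arbitrary test values, not from the post). For Q(t) = A e^{i\omega_0 t} we have \dot{Q} = i\omega_0 Q and \ddot{Q} = -\omega_0^2 Q, so plugging into the circuit equation should reproduce V(t) exactly.

```python
import cmath

L, R, C = 2.0, 0.3, 0.5   # arbitrary test values
w0 = 1.7                  # drive frequency
A = 1 / ((1j * w0) ** 2 * L + (1j * w0) * R + 1 / C)  # claimed amplitude of Q

t = 0.42                  # any instant will do
V = cmath.exp(1j * w0 * t)
Q = A * cmath.exp(1j * w0 * t)

# For Q(t) = A e^{i w0 t}: dQ/dt = i*w0*Q and d^2Q/dt^2 = -(w0**2)*Q,
# so the left side of L*Q'' + R*Q' + Q/C = V becomes:
lhs = L * (-(w0 ** 2)) * Q + R * (1j * w0) * Q + Q / C
# lhs agrees with V, confirming the claimed solution.
```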

But the magnitude and phase of this oscillating charge Q(t) depends on the function

\displaystyle{ \frac{1}{((i\omega_0)^2 L + (i\omega_0) R + C^{-1})}  }

For example, Q(t) will be big when \omega_0 is near a pole of this function! We can use this to study the resonant frequency of our circuit.
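
For instance, here’s a sketch (Python, standard library only; L = C = 1 and R = 0.1 are illustrative values) that scans the magnitude of this response function over a grid of frequencies. With light damping, the peak should sit very near the natural frequency 1/\sqrt{LC} = 1.

```python
L, R, C = 1.0, 0.1, 1.0   # illustrative values: light damping

def gain(w):
    """Magnitude of 1 / ((i w)^2 L + (i w) R + 1/C)."""
    return 1 / abs((1j * w) ** 2 * L + (1j * w) * R + 1 / C)

ws = [0.01 * k for k in range(1, 300)]   # frequencies 0.01 .. 2.99
w_peak = max(ws, key=gain)
# For small R the response peaks very close to 1/sqrt(L*C) = 1;
# the exact charge-response peak is at sqrt(1/(L*C) - R**2/(2*L**2)).
```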

The same idea works for many more complicated circuits, and other things too. The function up there is an example of a transfer function: it describes the response of a linear, time-invariant system to an input of a given frequency. Here the ‘input’ is the voltage and the ‘response’ is the charge.


Taking this idea to its logical conclusion, we can see inductors and capacitors as being resistors with a frequency-dependent, complex-valued resistance! This generalized resistance is called ‘impedance’. Let’s see how it works.

Suppose we have an electrical circuit. Consider any edge e of this circuit:

• If our edge e is labelled by a resistor of resistance R:


we have

V_e = R I_e

Taking Fourier transforms, we get

\hat{V}_e = R \hat{I}_e

so nothing interesting here: our resistor acts like a resistor of resistance R no matter what the frequency of the voltage and current are!

• If our edge e is labelled by an inductor of inductance L:


we have

\displaystyle{ V_e = L \frac{d I_e}{d t} }

Taking Fourier transforms, we get

\hat{V}_e = (i\omega L) \hat{I}_e

This is interesting: our inductor acts like a resistor of resistance i \omega L when the frequency of the current and voltage is \omega. So, we say the ‘impedance’ of the inductor is i \omega L.

• If our edge e is labelled by a capacitor of capacitance C:

we have

\displaystyle{ I_e = C \frac{d V_e}{d t} }

Taking Fourier transforms, we get

\hat{I}_e = (i\omega C) \hat{V}_e

or

\displaystyle{ \hat{V}_e = \frac{1}{i \omega C} \hat{I}_e }

So, our capacitor acts like a resistor of resistance 1/(i \omega C) when the frequency of the current and voltage is \omega. We say the ‘impedance’ of the capacitor is 1/(i \omega C).

It doesn’t make sense to talk about the impedance of a voltage source or current source, since these circuit elements don’t give a linear relation between voltage and current. But whenever an element is linear and its properties don’t change with time, the Fourier transformed voltage will be some function of frequency times the Fourier transformed current. And in this case, we call that function the impedance of the element. The symbol for impedance is Z, so we have

\hat{V}_e(\omega) = Z(\omega) \hat{I}_e(\omega)

or

\hat{V}_e = Z \hat{I}_e

for short.
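
Since the three elements in our series circuit share the same current and their voltages add, their impedances simply add, and we can check the resonance numerically (Python, standard library only; the component values are illustrative): at \omega = 1/\sqrt{LC} the inductor and capacitor impedances cancel, leaving the pure resistance R.

```python
import math

L, R, C = 1.0, 0.2, 4.0   # illustrative values

def Z_series(w):
    # V = V_1 + V_2 + V_3 with a common current, so impedances add.
    return R + 1j * w * L + 1 / (1j * w * C)

w_res = 1 / math.sqrt(L * C)   # 0.5 for these values
# At w_res the inductor term i*w*L and capacitor term 1/(i*w*C) cancel,
# so Z_series(w_res) is just R: the circuit looks purely resistive.
```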

The big picture

In case you’re getting lost in the details, here are the big lessons for today:

• There’s a detailed analogy between electronics and mechanics, which we’ll later extend to many other systems.

• The study of linear time-independent elements can be reduced to the study of resistors if we generalize resistance to impedance by letting it be a complex-valued function instead of a real number.

One thing we’re doing is preparing for a general study of linear time-independent open systems. We’ll use linear algebra, but the field—the number system in our linear algebra—will consist of complex-valued functions, rather than real numbers.


Let’s not forget our original problem:

This is closely related to the problem we just solved. All the equations we derived still hold! But if you do the math, or use some intuition, you’ll see the voltage source ensures that the voltage we’ve been calling V is a constant. So, the current I flowing around the wire obeys the same equation we got before:

L \ddot Q + R \dot Q + C^{-1} Q = V

where \dot Q = I. The only difference is that now V is constant.

Puzzle. Solve this equation for Q(t).

There are lots of ways to do this. You could use a Fourier transform, which would give a satisfying sense of completion to this blog article. Or, you could do it some other way.
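
Without giving the whole game away, here is one way to check a candidate answer numerically (Python, standard library only; the component values, the constant V, and the particular decaying mode tested are illustrative choices, not the official solution). In the underdamped case, Q(t) = CV plus a decaying oscillation should satisfy the equation and settle down to Q = CV.

```python
import math

L, R, C, V = 1.0, 0.5, 1.0, 2.0            # illustrative values, V constant
alpha = R / (2 * L)                        # decay rate of the transient
wd = math.sqrt(1 / (L * C) - alpha ** 2)   # underdamped: 1/(LC) > (R/2L)^2

def Q(t):
    # Particular solution C*V plus one decaying oscillatory mode.
    return C * V + math.exp(-alpha * t) * math.cos(wd * t + 0.3)

def lhs(t, h=1e-4):
    """L*Q'' + R*Q' + Q/C via central differences."""
    Qpp = (Q(t + h) - 2 * Q(t) + Q(t - h)) / h ** 2
    Qp = (Q(t + h) - Q(t - h)) / (2 * h)
    return L * Qpp + R * Qp + Q(t) / C

# lhs(t) should equal V at any time, and Q(t) -> C*V as t grows.
```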

50 Responses to Network Theory (Part 28)

  1. amarashiki says:

Awesome post, John! :) I now know how to solve the puzzles, but let me remark that (non-linear) network theory has striking points of “connection” with graph and hypergraph theory.

In fact, we can see electrical networks as hypergraphs, but I have not found (yet) a useful application of hypergraph techniques to networks… I am not an expert there; as a physicist I am completely ignorant about that, despite the fact that I feel it is important somehow!

    The flux linkage stuff is important too, and well, I prefer non-linear network theory to appear at the end of this wonderful thread! ;)

    Off-topic: how did you do those beautiful tables? Did you use html code+LaTeX instead the pure LaTeX that sometimes gives me problems when I want to “make” my own tables?

    • John Baez says:

      I’m glad you liked this post. I want to study nonlinear circuits, because when we study other kinds of networks we’ll need to allow nonlinearity. However, assuming linearity will let me develop certain new ideas without getting distracted by technicalities. So, I’ll do linear circuits first, and then nonlinear ones… just as you want.

      Off-topic: how did you do those beautiful tables? Did you use html code+LaTeX instead the pure LaTeX that sometimes gives me problems when I want to “make” my own tables?

      Yes, I used the HTML method of making tables, and put some LaTeX math inside the tables. Here’s the idea:

<table border="1">
<tr>
<td>charge: Q</td>
<td>position: q</td>
</tr>
<tr>
<td>current: I</td>
<td>velocity: v</td>
</tr>
</table>


      Electronics Mechanics
      charge: Q position: q
      current: I velocity: v

I haven’t included LaTeX in this example, to focus on how the table itself works. Changing the border width from "1" to "10" gives something a bit weird:

      Electronics Mechanics
      charge: Q position: q
      current: I velocity: v

      You can also have more columns, of course:

      <table border="1">

      gives this:

      Electronics Mechanics Hydraulics
      charge position volume
      current velocity flow
    • rumpelstiltskin says:

      The last three entries in your tables for the analogies between mechanics and electronics are transposed.

    • davidtweed says:

Electrical circuit theory turns up in a “new” (in the sense of only 3 years old!) approach to the max-flow/min-cut problem that’s often a building block in various algorithms/techniques on general network graphs, as described in this paper, but it’s presented in a way as if electrical circuit theory is the obvious thing that should be used, while it seems very mysterious to me. (This is one of the things that prompted my question in the previous part of the Network Theory posts.)

      • John Baez says:

        My colleague Larry Harper in the math department at UCR has been working on aspects of the max-flow/min-cut problem for decades, and he recently came out with a paper where he applies similar techniques to networks of resistors:

        • Larry Harper, Morphisms for resistive networks.

        I doubt this paper will instantly solve your problems, but I bet a conversation with him might help.

        Does the mystery lie in the fact that in max-flow/min-cut problems we assume each ‘pipe’ has a maximum allowed flow through it, while we don’t assume this for the wires in electrical circuits?

        • davidtweed says:

I don’t entirely follow the paper yet, but it’s not that the “electrical network” is used separately from the conventional “capacitated flow”. The idea is to repeatedly take an existing, probably suboptimal flow and use a function that, amongst other things, takes the ratio of flow to capacity and maps it to a “resistance value”, then view the network as an electrical circuit, solve for the current along each edge, and use these currents as multipliers to scale the original “capacitated flows” in the graph. Repeat this multiple times until you’ve figured out a very, very tight approximation to the maximum flow.

          What’s mysterious is the “big picture” of why the combination of the mapping into resistances and then using Kirchhoff’s rules gives a good set of scalings. I can work through individual assertions in the paper without getting a sense of why. It’s certainly an interesting example that might appeal to Amarashiki.

          (Incidentally, this is one of those things that would only be worthwhile on much bigger graphs than I’m interested in directly; I came across this paper while researching and remain intrigued precisely because it’s seems so mysterious to me…)

  2. John Baez says:

    In class it became clear that if we impose Kirchoff’s current law at every node in our graph, we get nonsense. For example, in this example:

    we’d get

    I_1 = I_2 = I_3 = 0

    since there’s just one edge coming out of the top node and into the bottom one!

    The solution is to pick a subset of our nodes, called terminals, and impose Kirchhoff’s law only at the other nodes. Then later, when we start building bigger circuits by gluing together smaller ones, we will glue them together at the terminals.

    Eventually we will get a category where the objects are collections of terminals, and the morphisms are circuits.

    • Todd Trimble says:

      if we impose Kirchoff’s

      Oops — didn’t you just admonish someone about the spelling? :-)

      • John Baez says:

        If I always remembered to spell this right, I wouldn’t have gone on about it—I normally just fix misspellings. But I keep tripping over this one… as you can see.

    • westy31 says:

To get an analog of a non-driven oscillator, connect the circuit into a loop:

      If we use the current=velocity analogy, then Kirchhoff’s current law now says that all velocities are equal, but can be any value. That makes sense, there is a single mass moving, and the oscillation can have any amplitude.

      Kirchhoff’s voltage law says that:

      I(j\omega L + 1/j\omega C + R) = 0

Either I = 0 or we have an eigenmode, a special case of \omega for which the impedances sum to zero. It is complex valued because of the damping. If R = 0, then \omega^2 = 1/(LC).


      • westy31 says:

        I inserted a picture, but it does not show up. The address is

        • John Baez says:

          Thanks for discussing this case! This was Puzzle 1 in Part 28, in this special case where the voltage V of the voltage source is zero.

          Unfortunately pictures from other people don’t appear in comments. So, the advice I keep telling people (whenever this happens) is to post a URL for the picture, and let me turn it into an actual picture. I’ve done that here.

          I also turned your equations into LaTeX. It’s pretty easy: for example, to get

          I(j\omega L + 1/j\omega C + R) = 0

          you just need to type

          $latex I(j\omega L + 1/j\omega C + R) = 0$

          A reminder appears right over the box where you input your comment!

          Mathematicians will be interested to see that here we have a true engineer: someone who writes j rather than i for the square root of minus one, because i reminds them too much of current.

Mathematicians use i; engineers use j; to unify them, Hamilton needed to invent the quaternions, which have both of these square roots of -1. It took him years to realize that to do this, we need a third square root of -1, k = ij. After he realized this he was able to predict that someday another academic discipline would arise, which is the product of math and engineering.

  3. Alan Cooper says:

    One reason for liking the Laplace Transform approach may be the way it builds in the initial conditions.

    • John Baez says:

      Yes, good point. I guess this is more about how the Laplace transform involves an integral over a half-line than about the fact that it uses the kernel \exp(-s t) instead of \exp(- i s t). Formally I believe we could use either in the Laplace transform, but of course the integrals will converge better with \exp(-s t).

  4. For some reason electrical engineers prefer the Laplace transform—does anyone know why?

I think the Laplace transform is more general, because the added exponential in the transform causes the integral to converge for a wider class of signals. It might be a little harder to understand initially, but its additional expressive power comes in handy later on.

For example, if you used Laplace you could have solved the same example without having to use the Dirac delta function (which calls for distribution theory, and anyway it might be a little troublesome conceptually). In fact, if you evaluate the Laplace transform along the imaginary axis, you obtain exactly the “frequency response” of the circuit, which you wrote earlier as the relationship between V(j\omega) and Q(j\omega).

Even more to the point, if you have an unstable system (e.g. the resistance in your RLC circuit is negative), you can still solve it and describe the solution in terms of Laplace transforms, but I don’t think you can with the Fourier transform. Can you?

    • John Baez says:

      I don’t mind distributions at all: as a physicist I love the Dirac delta and its derivatives, and as a mathematician I spent happy years learning about Schwartz’s theory of distributions, Fréchet spaces and all that stuff. Admittedly, not everyone wants to learn this! But having learned it, I like it.

      But you’re right that the Fourier transform of something like \exp(t) is rather scary… it’s not even a distribution! I guess it’s a hyperfunction. So, maybe this is enough reason for people to prefer Laplace transforms.

  5. Joerg Paul says:

    Interesting Article! What do you think about a second analogy, which I found in the net:
    electronics – mechanics
    charge – momentum
    current – force
    voltage – velocity
    mass – capacitance
    spring constant – inverse inductance
    Is this a symmetry or a duality?

    • John Baez says:

      Very thought-provoking comment!

      It becomes clear, if we compare the mechanics columns in the two analogies, that we’re seeing a duality where we switch position and momentum in classical mechanics. Concepts come in pairs like this:

position: q               momentum: p
velocity: v = \dot{q}     force: F = \dot{p}
mass: m                   inverse spring constant: 1/k
damping coefficient: r    inverse damping coefficient: 1/r

This is evident in Hamilton’s equations:

\displaystyle{ \dot{q} = \frac{\partial H}{\partial p}, \qquad \dot{p} = -\frac{\partial H}{\partial q} }

      and it’s the basic insight behind symplectic geometry. In quantum mechanics, the duality that switches position and momentum is the Fourier transform. We’ve seen that sneaking into our lectures already, but only subtly: I was using the Fourier transform to switch time and frequency!

      There’s a minus sign built into this duality: when we perform the switch, we should replace p by q and q by - p. This seems weird, but it’s explained by symplectic geometry. In simple terms, we’re doing a 90° rotation in the (p,q) plane.

      For example, momentum is related to velocity by

      p = m \dot{q}

      but if we follow the chart and replace p by q, m by 1/k and \dot{q} by -\dot{p} we get

      \displaystyle{ q = -\frac{1}{k} \dot{p} }

      which is Hooke’s law for the force of a spring:

      \dot{p} = - k q

      I hinted at this duality in Puzzle 3 last time, where it showed up as a symmetry that switched capacitors and inductors. We needed to reinvent the idea of ‘current source’ to make our formalism symmetrical, since that’s the symmetrical partner of ‘voltage source’.

      So, I partially understand this stuff. There is, however, plenty left to think about! For example, friction is not normally included in Hamiltonian mechanics, except at a fundamental level where we keep track of the motion of all the atoms. There’s no Hamiltonian that gives the equations of motion for a damped oscillator. That’s why I left out the damping coefficient in my duality chart above. However, damping, and resistors, play an important role in the analogy between mechanics and electronics. I need to understand this better.

      • Alan Cooper says:

        And comparing the electronics columns for the same mechanical system makes it clearer to me what you were hinting at in that Puzzle 3. The DE for Voltage across a parallel circuit in terms of applied Current is the same as for Current through a series circuit in terms of applied Voltage if the C and L values are swapped and the R is inverted.

        For the corresponding symmetry between force and velocity (or position and momentum) we also need to swap compliance and mass and switch from series to parallel linkages – which ties together the physical geometry of the network with the symplectic geometry of the mechanics. But the need to reciprocate the resistance seems to make this fail in exactly the conservative case of Hamiltonian systems. I’m puzzled!

        • Alan Cooper says:

Actually the reciprocal R business is fine. Removing the resistance from a series circuit is basically changing the R value to 0, and removing a resistance in parallel is equivalent to making it infinite, so the non-dissipative LC circuits should both model the undamped mechanical oscillator. Still have to think about the difference between series and parallel connection for a simple mass and spring system, though!

    • westy31 says:

      Some comments on force force=current versus force=voltage:

      In hydraulics and acoustics, pressure is a scalar quantity, like voltage. It makes sense to put this at vertices. At each vertex, you can connect hydraulic hoses or acoustic pipes. The sum of the volume velocities (velocity times area) is either zero, or if the node is compressible, proportional to the time derivative of pressure. It is intuitive to connect edges, as if they were hoses. It is possible to see velocities as vectors, because you can lay them out in directions as if they were geometrical dimensions. However, I am still trying to figure out if there is something like conservation of momentum related to this, I am not sure. You can fold hydraulic hoses in other directions without changing any dynamics.

      In mechanics however, force is a vector, just like velocity. It is no longer obvious which should intuitively be the vertex and which the edge. It is often useful to do force=current, because you add forces connected to a ´node´ of mass, as if the mass were a vertex.

      But note that we can only model 1-dimensional systems using ordinary circuits in this way, since we cannot distinguish between forces in 2 different dimensions. We would like both force and velocity to be located on edges, so they are vector-like, but we need to stick one of them onto vertices.

One way forward is to use a generalisation of a circuit called an n-complex. [This is my own work, not necessarily mainstream.] A 1-complex is like a circuit: it has vertices (0-cells) and edges (1-cells). But a 2-complex also contains loops (2-cells). Of course, a circuit always contains loops. But the loop gets physical meaning when we put a mesh impedance in it. This modifies Kirchhoff’s voltage law, in the sense that the sum of voltage differences across the impedances enclosing a loop is not necessarily zero, but the mesh current times the mesh impedance.

      An example is the Maxwell analogy on my website

      The mechanical analog of the Maxwell equations in vacuum is waves in an incompressible elastic medium. The electric field can be related to the force vectors. The velocity field is not so directly a vector, it is a bit hidden in the mesh velocities. This stuff probably drove the aether people of the 19th century like Kelvin and Heaviside crazy.

      A full model for mechanical vibrations in >1 dimensions is not so simple…


      • John Baez says:

        What you’re calling ‘loops’ are actually quite mainstream in electrical circuit theory. As you hint, they’re also called ‘meshes’:

        Mesh analysis, Wikipedia.

        though here they want to restrict the term ‘mesh’ to circuits that can be drawn on a plane. (I wouldn’t restrict it that way, and the word ‘loop’ means something else in math, so I actually prefer ‘mesh’ or ‘face’ for a 2d thing.)

        The theory of n-complexes is pretty important in topology, where people use several different kinds: CW complexes and piecewise-linear CW complexes and cubical complexes and simplicial complexes and simplicial sets… all of which give rise to chain complexes, which are the purely algebraic structures that we really need to write down Maxwell’s equations and some of their generalizations.

        If I had time I would talk about all these things, but I clearly don’t. I’m mostly trying to keep this ‘network theory’ series focused on 1-dimensional entities, like graphs. I may however bring 2-dimensional complexes into the game for a little bit, because the current/voltage duality is nicely discussed this way.

        I’m also fond of 2-dimensional complexes because I used them as part of the definition of ‘spin foam’ in my work on quantum gravity.

        • westy31 says:

          The mesh analysis described in the Wikipedia article is a technique for solving conventional circuits. But I want to add a ‘mesh impedance’, which modifies Kirchhoff’s voltage law, making our 2-complex something inequivalent to a 1-complex.

          According to the Wikipedia article on mesh analysis, the word ‘loop’ is also in use:
          “A more general technique, called loop analysis (with the corresponding network variables called loop currents) can be applied to any circuit, planar or not.”

          Maybe 2-cell?


        • John Baez says:

          Mathematicians would say ’2-cell’.

  6. Bossavit says:

    This is Firestone’s analogy:

    • F.A. Firestone, A new analogy between mechanical and electrical systems, Journal of the Acoustical Society of America 4 (1933), 249–267.

    Prof. Baez, I’m very eager to read more about the duality that you mention!

    • John Baez says:

      Thanks, that reference is really helpful! I added links to both a free version of the paper and the official journal version. (As usual here, the first is gotten by clicking the paper’s title, the second by clicking the journal name.)

      To clarify for people who don’t look at the paper: Firestone’s ‘new analogy’ is the one that treats velocity as analogous to voltage and force as analogous to current. The ‘old analogy’ is the one discussed in my blog article, which goes the other way around: it treats velocity as analogous to current and force as analogous to voltage.

      These two analogies are related by the duality I mentioned in my comment above.

      What’s exciting to me is this remark by Firestone, which points out a nuisance I’d noticed but hadn’t fully grappled with yet. In the old analogy:

      Mechanical elements in series must be represented by electrical elements in parallel, and vice versa.

      You can see that in my blog article! I studied a circuit with three elements in series, but using the old analogy it was analogous to three forces acting in parallel on an object.

      So here’s the point: the duality I mentioned also switches the concepts of ‘series’ and ‘parallel’!

      As we’ll see later, this switch between series and parallel corresponds in graph theory to something called Poincaré duality.

      So, the overall picture keeps getting bigger and more unified.

    • Alan Cooper says:

      Apparently Firestone was more interested in using electronics to model mechanics than vice versa – and he even seems to accept less generality going the other way, since he says (at the bottom of p. 254) that masses can only be connected in parallel (what looks like a rigid series connector actually keeps the velocities = voltages equal, and each mass absorbs a proportional amount of force = current), and so the corresponding capacitors must all be grounded.

      Am I right in guessing that in order to mechanically model a series connection of capacitors he would need the corresponding masses to be connected by something like a hydraulic system with piston areas inversely proportional to masses?

      • alQpr says:

        While the hydraulic thingy between masses might work for modelling capacitors in series in Firestone’s mobility analogy (or inductances in parallel in the traditional impedance analogy), I couldn’t see how to use it to connect the mass in “series” with anything else (i.e. with its velocity added to, rather than being the same as, that of whatever it is connected to). So I struggled for some time to come up with a mechanical model which corresponds in the Firestone sense to a circuit with all three elements in series (or in the traditional sense to all three in parallel). But then it struck me that the forces could be made equal (as opposed to complementary) by running the connector over a pulley. I think this works – and it also gives the mass a second “terminal” rather than invoking an imaginary connection to the rest frame.

        (Of course this must have all been worked out years ago!)

  7. domenico says:

    I am thinking about approximating the Milankovitch cycle with an electric network (it is only a mathematical game, with no applications).
    If the Earth’s temperature is a function of the insolation, then I think one can write down an electric network that connects insolation to temperature:

    T(s) = Z(s) I(s)

    then it is always possible to write an approximation of Z:

    \displaystyle{ Z(s) = \frac{\prod_n (s-s_n) }{\prod_n (s-w_n)} }

    so with the right number of components, it is possible to build an electric network that approximates the Milankovitch cycle.
    This impedance corresponds to a differential equation (which has a mechanical analogue) connecting insolation and temperature, and it is an analog computer for evaluating the trajectories of the Milankovitch cycle.
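    As an illustration of evaluating such a rational impedance Z(s) on the imaginary axis (my own sketch, using a series RLC circuit with made-up values rather than anything fitted to climate data): at the resonant frequency the impedance of a series RLC is purely real and equals R.

```python
import math

# Z(s) for a series RLC circuit: a simple rational function of s.
R, L, C = 2.0, 1.0, 0.25            # illustrative component values

def Z(s):
    return R + L * s + 1.0 / (C * s)

omega0 = 1.0 / math.sqrt(L * C)     # resonant angular frequency, 1/sqrt(LC)
Z_res = Z(1j * omega0)              # at resonance the reactances cancel: Z = R
```

Sweeping omega and plotting |Z(j·omega)| would show the familiar resonance dip of the series circuit.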

    • domenico says:

      I am thinking (I am not an electronics expert) that a Milankovitch cycle could be generated by overlapping four parallel direct digital synthesizers (if the signals can be synchronized to obtain the Fourier series approximation of the insolation); then the admittance can be evaluated to obtain an output current equal to the mean temperature (using the Laplace transform of the Fourier series that approximates the Milankovitch cycle).
      The circuit is symmetric, so from the Milankovitch cycle current (the temperature) it is possible to obtain the insolation voltage: the difference between the geological temperature measurements and the circuit’s output could then represent the increase in temperature due to the CO2 accumulating in the atmosphere.

    • John Baez says:

      The idea of modelling how climate cycles are influenced by Milankovitch cycles using a linear circuit is interesting… but I believe it’s interesting because we can use it to show that the climate is inherently nonlinear.

      The reason I believe this is that Giampiero Campa did a ‘best linear fit’ to the climate data and saw that it doesn’t fit the temperature peaks very well. In a reply to Blake Pollard’s post he wrote:

      Thanks, I got the data and performed some (mostly linear) system identification to see if you could find a linear differential equation that could approximate temperature well when the other 4 data are given as inputs (forcing terms in the equation).

      The very preliminary answer seems to be “not really”, as the best system somehow reproduces the temperature baseline of the validation data (from obliquity and eccentricity alone) but is not able to really reproduce the peaks:

      So it certainly looks like nonlinearity (if not even other input variables) does play an important role in the final temperature history. I might try something more on the nonlinear identification front in the next few weeks.

      I think using this type of idea to quantify the amount of nonlinearity in a system could be interesting. Probably it’s already been done: I’m just starting to learn control theory and ‘system identification’.

      Of course you can only use this type of idea to quantify the amount of nonlinearity in a system if you assume there are no other input variables besides the ones you’re considering… or at least assume some bound on their ‘strength’.

      • domenico says:

        These are only free thoughts during a walk (they may need correction).
        I think there always exists a generalized impedance Z that transforms each function I(t) into a function V(t).
        The generalized network Z contains the symbols of the Laplace transforms of the elements in the network: R for a resistance, 1/s for a capacitor and s for an inductor, plus other powers of s.
        An approximation of I(t) and V(t) by some optimal series in the time domain is always possible, so the Laplace transforms of I(t) and V(t) always exist, and therefore Z(s)=V(s)/I(s) exists.
        An asymptotic expansion of Z(s) is possible, and from it one obtains the generalized impedance.
        Only if the number of parameters of the network is low is there a correlation between the signals.
        In a few weeks I could approximate the temperature and insolation by Fourier series with free frequencies, in the time domain, up to the start of the human presence on the Earth; then I could evaluate Z(s)=T(s)/I(s), and evaluate the temperature without the human presence:
        T(t)=\int^t_0 d\tau\, Z(t-\tau)\, I(\tau)
        This integral is simple to evaluate using the inverse Laplace transform: there are programs that perform these transformations.
        I think that the difference between this temperature and the measured temperature could be an estimate of the human influence on the climate.
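        Numerically, the convolution T(t) = ∫ Z(t−τ) I(τ) dτ is straightforward to discretize. A small sketch (my own, with an illustrative kernel Z(t) = e^{-t} and a constant unit input, so the exact answer is 1 − e^{-t}):

```python
import math

# Discretize T(t) = integral_0^t Z(t - tau) I(tau) dtau as a Riemann sum.
dt = 0.001
N = 5000                                          # simulate up to t = 5
Zker = [math.exp(-n * dt) for n in range(N)]      # impulse response Z(t) = e^{-t}
I = [1.0] * N                                     # constant input current

def convolve_at(n):
    """T(t_n) approximated by a discrete convolution sum."""
    return sum(Zker[n - k] * I[k] for k in range(n + 1)) * dt

T5 = convolve_at(N - 1)                           # T(5); exact value is 1 - e^{-5}
```

The discretization error here is of order dt, so T5 agrees with 1 − e^{-5} to about three decimal places.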

        • domenico says:

          I obtained an approximation of the Earth’s temperature using a Fourier series with free frequencies.
          I am starting to search for the minimum of the error function.
          There is a sea of relative numerical minima, so I am building a quick program with L-BFGS that restarts when the error reduction is low, with a restarted minimization of a single amplitude (minimization of one component of the gradient), and a restart in a shell neighbourhood.
          The Fourier series is:
          T(t)=\alpha+\sum_n \beta_n \cos(\omega_n t)+\sum_n \gamma_n \sin(\omega_n t)
          which is the solution of the differential equation
          \prod_n \left(\frac{d^2}{dt^2}+\omega^2_n \right)T(t) = 0
          Only when \omega_n = n\omega is this the usual Fourier series; when the frequencies are free, forecasting the temperature becomes possible.
          The free Fourier series is generally not periodic, because the angular frequencies have irrational ratios.
          The data set has large Gaussian noise; it seems to me like a tide measurement without a stilling well, or a measurement of local temperature with many little local temperature variations.
          I think a statistical mean of measurements obtained at different places on Earth could give a less noisy data set.
          An analogy exists between tides (correlated with the Sun’s position) and Milankovitch cycles, and further analogies in the measurement of the data set and in the forecasting.
          My last result is (I keep the exponential notation to avoid any mistake in the transcription):
          This is only a step, but if the forecast is good, then the Laplace transform is not necessary.
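          A toy version of such a free-frequency fit (my own sketch: a coarse grid search over the free frequency, with the amplitude obtained by least-squares projection, standing in for the L-BFGS minimization described above; the signal and its parameters are made up):

```python
import math

# Synthetic signal with one free (non-harmonic) frequency.
w_true, b_true = 0.73, 2.5
ts = [0.1 * n for n in range(1000)]
T = [b_true * math.cos(w_true * t) for t in ts]

def fit_error(w):
    """Project T onto cos(w t); return (squared error, fitted amplitude)."""
    c = [math.cos(w * t) for t in ts]
    beta = sum(Ti * ci for Ti, ci in zip(T, c)) / sum(ci * ci for ci in c)
    err = sum((Ti - beta * ci) ** 2 for Ti, ci in zip(T, c))
    return err, beta

# Coarse grid search over the free frequency (a stand-in for L-BFGS restarts,
# which would be needed once many frequencies and noisy data are involved).
w_best = min((w0 / 1000.0 for w0 in range(100, 1500, 2)),
             key=lambda w: fit_error(w)[0])
beta_best = fit_error(w_best)[1]
```

With noiseless data the search recovers the true frequency and amplitude; with real, noisy data the error surface develops the "sea of relative minima" described above.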

        • domenico says:

          I am thinking that the low-frequency (~100,000-year period) components of the Milankovitch cycle (like thermal tides) cannot have an Earthly origin (geological or meteorological); these components (if all this is true) could give an indirect measure of changes in the Sun’s emission due to inner solar dynamics, so it might be possible to adjust the solar models to obtain the right oscillation.

  8. David Lyon says:

    The units don’t work out in the original table, but if you do add units you get equations that seem to be related to how charge density waves respond in a quantum electronic circuit. Charge density waves carry electric current. Adding the usual symbol for charge density “λ” gives the following table with the same physical units on both sides:


    charge: Q ↔ position: q\lambda

    current: I ↔ velocity: v\lambda

    flux linkage: \lambda\lambda ↔ momentum: p

    voltage: V\lambda ↔ force: \dot{p}

    inductance: L\lambda^2 ↔ mass: m

    resistance: R\lambda^2 ↔ damping coefficient: r

    inverse capacitance: \frac{\lambda^2}{C} ↔ spring constant: k

    The fourth row is the Lorentz force equation.
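    Since the point of the table is that both sides of each row carry the same physical units, this can be checked mechanically. Here is a small sketch (my own check, not from the comment), with SI dimensions entered by hand as (kg, m, s, A) exponent tuples:

```python
# Dimensional check of the analogy table: each quantity is a tuple of SI
# exponents (kg, m, s, A); multiplying quantities adds the exponents.
def mul(a, b):
    return tuple(x + y for x, y in zip(a, b))

volt      = (1, 2, -3, -1)    # V   = kg m^2 s^-3 A^-1
henry     = (1, 2, -2, -2)    # H   = V s / A
ohm       = (1, 2, -3, -2)    # ohm = V / A
inv_farad = (1, 2, -4, -2)    # 1/F
weber     = (1, 2, -2, -1)    # flux linkage, V s
lam       = (0, -1, 1, 1)     # linear charge density, C/m = A s / m
lam2      = mul(lam, lam)

mass      = (1, 0, 0, 0)      # kg
force     = (1, 1, -2, 0)     # N
momentum  = (1, 1, -1, 0)     # kg m / s
damping   = (1, 0, -1, 0)     # N s / m
spring_k  = (1, 0, -2, 0)     # N / m

checks = {
    "V * lam = force":          mul(volt, lam) == force,
    "flux * lam = momentum":    mul(weber, lam) == momentum,
    "L * lam^2 = mass":         mul(henry, lam2) == mass,
    "R * lam^2 = damping":      mul(ohm, lam2) == damping,
    "lam^2 / C = spring const": mul(inv_farad, lam2) == spring_k,
}
all_ok = all(checks.values())
```

All five nontrivial rows of the table check out dimensionally.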

    • John Baez says:

      Yes, this is interesting!

      By the way, some of your LaTeX didn’t parse at first because you wrote things like

      $latex \frac{λ^2}{C}$

      instead of

      $latex \frac{\lambda^2}{C}$

      You can’t put fancy Unicode characters in LaTeX here.

  9. Marco Tulio Angulo says:

    Another interesting aspect of this problem is the following: given a transfer function, find an electrical (or mechanical) network that has the desired transfer function.

    Of special interest is the case where the electrical system uses only passive components (i.e. resistors, capacitors and inductors). The answer to this problem was provided by Raoul Bott, in his Ph.D. thesis “Electrical Network Theory”, and later published in the now-classical paper “Impedance Synthesis without Use of Transformers”.

    It is interesting to quote his own words from a 2000 interview (see the Wikipedia page): he saw “networks as discrete versions of harmonic theory”, so his experience with network synthesis and electronic filter topology introduced him to algebraic topology.

    From this result it was possible to construct an electrical network with any desired transfer function. Given the analogy between mechanical and electrical systems, one might imagine that a similar construction should be possible for mechanical systems. In fact, this was not entirely the case: a new mechanical element was needed, the analog of a capacitor without one terminal connected to ground. This motivated the construction of the so-called “inerter”; see the following reference:

    In fact, the construction of the inerter was motivated by the design of better suspensions for Formula One racing cars:

    • alQpr says:

      Thanks for that “inerter” link. It solves the problem I was worrying about – including both compressive and tensile forces with a rigid connector and storing the energy compactly in a flywheel rather than having loose masses flying all over the place.

  10. Greg Egan says:

    A small typo: the three integrals following the sentence “To see this, we simply take the Fourier transform” should be with respect to time, not frequency.

  11. I’ve been talking about electrical circuits, but I’m interested in them as models of more general physical systems. Last time we started seeing how this works. We developed an analogy between electrical circuits and physical [...]

  12. Larry Hedding says:

    John, I’m a retired electrical engineer who reads your blogs and some of your math ‘stuff’ as a hobby. On the topic of electrical-mechanical analogies, I was wondering if you had any comments or thoughts regarding ‘memristance’: the relationship between charge and flux linkage, or, by analogy, momentum and position. Leon Chua is cited as the ‘father’ of the study, concept, popularization, etc. of ‘memristance’ (or its inverse, ‘memductance’).

    • John Baez says:

      Sorry to take so long to reply. We discussed memristors starting here, in the comments to “week290” of This Week’s Finds, which is when I first started talking about bond graphs and analogies between different systems that could be described in terms of p,q,\dot{p},\dot{q}.

      At that point I just reached the point of understanding memristors as one of the natural possibilities for circuit elements that impose a relation between two of those variables p,q,\dot{p},\dot{q}.

      I really should get back to them… but probably when I talk about bond graphs in this network theory series!
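      For readers who haven’t met them: a charge-controlled memristor imposes V = M(q) I with dq/dt = I, i.e. a relation between flux linkage and charge, just as Chua proposed. A minimal simulation sketch (my own, with a purely illustrative linear memristance M(q) = M0 + kq and a sinusoidal drive current):

```python
import math

# Charge-controlled memristor: V = M(q) * I, with q the integral of the current.
M0, k = 1.0, 0.5                    # illustrative parameters

def M(q):
    return M0 + k * q               # memristance depends on accumulated charge

dt = 1e-4
steps = 62832                       # one period of the drive I(t) = sin(t)
q = 0.0
V_at_start = M(q) * math.sin(0.0)   # I = 0 here, so V = 0: the 'pinched' point
for n in range(steps):
    I = math.sin(n * dt)
    q += I * dt                     # running integral of the current
q_final = q                         # net charge over a full drive period is ~0
```

Plotting V against I over the cycle would give the pinched hysteresis loop that is the memristor’s signature: the curve always passes through the origin, since V vanishes whenever I does.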

      • Larry Hedding says:

        Thanks for your reply, John; I know you are hyper-busy. Bond graphs versus control-system block-diagram algebra and transfer functions became of great interest to me after someone gave me a ‘mechatronics’ book 20 years ago. I’m not a fan of the name ‘mechatronics’, but it is a mainstream term and a curriculum at some schools. Thanks again!

        • John Baez says:

          I hope to say a lot more about bond graphs and the ‘signal flow diagrams’ used in control theory in a while… so stay tuned!

  13. westy31 says:

    I just finished working out a discrete-time harmonic oscillator that conserves energy.

    Rather than using inductors and capacitors to encode time dependence, it uses resistors that are negative-valued in the time direction.
    The link:
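    westy31’s negative-resistor construction isn’t reproduced here, but for comparison, one standard discretization that conserves the oscillator’s energy exactly (not just approximately) is the implicit midpoint rule; for the linear oscillator the implicit step can be solved in closed form. A sketch under that assumption:

```python
# Implicit midpoint rule for x'' = -w^2 x. For this linear system the implicit
# update has a closed form, and E = (v^2 + w^2 x^2)/2 is conserved exactly:
# the update is a Cayley transform, i.e. a rotation in scaled phase space.
w, dt = 1.3, 0.05                   # illustrative frequency and time step
x, v = 1.0, 0.0
E0 = 0.5 * (v * v + w * w * x * x)

a = (w * dt) ** 2 / 4.0
for _ in range(10000):
    v_new = ((1.0 - a) * v - w * w * dt * x) / (1.0 + a)
    x_new = x + dt * (v + v_new) / 2.0
    x, v = x_new, v_new

E_final = 0.5 * (v * v + w * w * x * x)   # equals E0 up to roundoff
```

After 10,000 steps the discrete energy still matches the initial energy to machine precision, in contrast to explicit Euler, whose energy grows without bound.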
