There’s been some new progress on graph Laplacians!
As a mathematical physicist, I’ve always been in love with the Laplacian:

$\nabla^2 = \dfrac{\partial^2}{\partial x^2} + \dfrac{\partial^2}{\partial y^2} + \dfrac{\partial^2}{\partial z^2}$
It shows up in many of the most fundamental equations of physics: the wave equation, the heat equation, Schrödinger’s equation… and Poisson’s equation:

$\nabla^2 \phi = 4 \pi G \rho$

which says how the density of matter $\rho$ affects the gravitational potential $\phi$.
As I’ve grown interested in network theory, I’ve gotten more and more interested in ‘graph Laplacians’. These are discretized versions of the Laplacian, where we replace 3-dimensional space by a ‘graph’: a bunch of vertices connected by edges.
You can get a lot of interesting information about a graph from its Laplacian. You can also set up discretized versions of all the famous equations I mentioned.
The new progress is a simple algorithm for very efficiently solving Poisson’s equation for graph Laplacians:
• Jonathan A. Kelner, Lorenzo Orecchia, Aaron Sidford, Zeyuan Allen Zhu, A simple, combinatorial algorithm for solving SDD systems in nearly-linear time.
Here’s a very clear explanation of the general idea, conveying some sense of why it’s so important, without any nasty equations:
• Larry Hardesty, Short algorithm, long-range consequences, MIT News, 1 March 2013.
It begins:
In the last decade, theoretical computer science has seen remarkable progress on the problem of solving graph Laplacians — the esoteric name for a calculation with hordes of familiar applications in scheduling, image processing, online product recommendation, network analysis, and scientific computing, to name just a few. Only in 2004 did researchers first propose an algorithm that solved graph Laplacians in “nearly linear time,” meaning that the algorithm’s running time didn’t increase exponentially with the size of the problem.
At this year’s ACM Symposium on the Theory of Computing, MIT researchers will present a new algorithm for solving graph Laplacians that is not only faster than its predecessors, but also drastically simpler.
This animation shows two different “spanning trees” for a simple graph, a grid like those used in much scientific computing. The speedups promised by a new MIT algorithm require “low-stretch” spanning trees (green), in which the paths between neighboring nodes don’t become excessively long (red).
I can’t beat this article at its own game… except to clarify that ‘solving graph Laplacians’ means solving Poisson’s equation with a graph Laplacian replacing the usual Laplacian.
So, let me just supplement this article with the nasty equations saying what a graph Laplacian actually is. Start with a graph. More precisely, start with a simple graph. Such a graph has a set $V$ of vertices and a set $E \subseteq V \times V$ of edges such that

$(x,y) \in E \implies (y,x) \in E$

which says the edges are undirected, and

$(x,x) \notin E$

which says there are no loops.
The graph Laplacian is an operator that takes a function on the vertices of our graph,

$\psi \colon V \to \mathbb{R}$

and gives a new such function as follows:

$(\Delta \psi)(x) = \displaystyle{\sum_{y \,:\, (x,y) \in E}} \big( \psi(y) - \psi(x) \big)$

The version of Poisson’s equation for this graph Laplacian is thus

$\Delta \psi = \rho$

But I should warn you: this operator has eigenvalues that are less than or equal to zero, like the usual Laplacian $\nabla^2$.
People often insert a minus sign to make the eigenvalues ≥ 0.
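For readers who want to play with this, here is a minimal sketch in Python with NumPy, on a small made-up example graph (the graph, the source $\rho$, and the code are my own illustration, not anything from the paper or the post). It assembles the graph Laplacian as a matrix and solves the Poisson equation above; since the constant functions span the kernel of $\Delta$ on a connected graph, $\rho$ has to sum to zero and $\psi$ is only determined up to an additive constant.

```python
import numpy as np

# Small made-up example graph: a 4-cycle with one chord.
edges = [(0, 1), (1, 2), (2, 3), (3, 0), (0, 2)]
n = 4

# Graph Laplacian with the sign convention of the post:
# (Delta psi)(x) = sum over neighbours y of ( psi(y) - psi(x) ),
# i.e. Delta = A - D, whose eigenvalues are <= 0.
Delta = np.zeros((n, n))
for x, y in edges:
    Delta[x, y] += 1.0
    Delta[y, x] += 1.0
    Delta[x, x] -= 1.0
    Delta[y, y] -= 1.0

# A source rho that sums to zero, so Poisson's equation is solvable.
rho = np.array([1.0, -1.0, 2.0, -2.0])

# Solve Delta psi = rho; psi is only fixed up to an additive constant,
# so take the least-squares solution and shift it to have mean zero.
psi, *_ = np.linalg.lstsq(Delta, rho, rcond=None)
psi -= psi.mean()

print(psi)
print(np.allclose(Delta @ psi, rho))   # check it really solves Poisson's equation
```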
There is a huge amount to say about graph Laplacians! If you want, you can learn more here:
• Michael William Newman, The Laplacian Spectrum of Graphs, Master’s thesis, Department of Mathematics, University of Manitoba, 2000.
I’ve been learning about some of their applications here:
• Ernesto Estrada, The Structure of Complex Networks: Theory and Applications, Oxford University Press, Oxford, 2011.
I hope sometime to summarize a bit of this material and push the math forward a bit. So, it was nice to see new progress on efficient algorithms for doing computations with graph Laplacians!
Maybe there is a tiny typo?
(Not \Psi, but \psi)
Thanks—fixed. I cut and pasted some of this equation from Part 15 of my network theory series, where I discussed how graph Laplacians are both self-adjoint and infinitesimal stochastic, meaning they’re good both for describing Schrödinger’s equation on a graph (where time evolution is unitary) and a random walk on that graph (where time evolution is stochastic). So, that’s another place to read more about what graph Laplacians are good for!
In
shouldn’t
be
?
Yes, thanks—fixed.
I’m confused. I see a text by an Ernesto Estrada called “The Structure of Complex Networks: Theory and Applications”. Are they the same person?
Typo. Fixed! I have a friend on G+ named Daniel Estrada.
One area where these Laplacians end up being applied is in preconditioning more general equations over graphs, as described in the overview here. The idea is roughly to use a Laplacian system to approximate a more complicated linear system, solve that quickly, and then “subtract off in an appropriate way” that solution to get a smaller-magnitude general system that remains. This can then be approximated by a Laplacian and solved again, and so on. I can see how this would work with general matrix/graph equations, but it apparently also has connections with network flows.
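For what it’s worth, one hedged way to read that “subtract off what remains” idea is as generic iterative refinement: solve approximately, compute the residual, solve for the residual, and repeat. The sketch below is my own toy version of that loop (with ordinary diagonal scaling standing in for the fast approximate solver), not the actual construction from the paper or the overview.

```python
import numpy as np

def refine(A, b, approx_solve, iters=30):
    """Solve A x = b by repeatedly solving for the residual with a fast
    approximate solver and adding the correction back in."""
    x = np.zeros_like(b)
    for _ in range(iters):
        r = b - A @ x              # the smaller-magnitude system that remains
        x = x + approx_solve(r)    # approximate solve of A (dx) = r
    return x

# Toy usage: diagonal (Jacobi) scaling plays the role of the fast solver;
# in the schemes discussed above it would be a graph-Laplacian solve.
A = np.array([[4.0, 1.0],
              [1.0, 3.0]])
b = np.array([1.0, 2.0])
x = refine(A, b, lambda r: r / np.diag(A))
print(x, A @ x)   # A @ x should be (very nearly) b
```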
That article by Erica Klarreich on the Simons Foundation web site that David Tweed linked to is great! Coincidentally, on Google+ John linked to a more recent article by the same writer, about Zhang’s prime gap result. I’m not sure if Klarreich is a professional mathematician or a professional science writer, but she certainly does a great job covering the subject.
Yes, she does a good job of getting ideas across. She describes herself as a journalist with training in mathematics.
http://www.ericaklarreich.com/aboutme.html
I especially enjoyed the article on three-manifold topology and the proof of the virtual Haken conjecture
https://www.simonsfoundation.org/quanta/20121002-getting-into-shapes-from-hyperbolic-geometry-to-cube-complexes-and-back/
Trivial point, but I believe

$(x,x) \notin E$

normally says there are no self-terminations, not that “there are no loops”.
I think I’m using the word ‘loop’ in an official standard way, but one can say ‘self-loop’ if one wants to emphasize that we mean an edge from a vertex to itself, not a general path of edges that ends where it starts.
There is a simple way to solve Laplacians on graphs. For each vertex, you simply set the potential equal to the weighted average of its neighbours’ potentials, with the admittances as weight factors. If you repeat this a number of times, you generally converge to a solution.
(This is easy to try, for example in a spreadsheet.)
The number of iterations is probably roughly proportional to the number of vertices times the number of neighbours 2 vertices are apart on average as. I guess the authors have found a smart way to use a spanning tree backbone to speed things up. I wonder if it works for non-linear graphs too, this could be important in fluid dynamics.
Gerard
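Here is a small Python rendering of the relaxation Gerard describes, the kind of thing one would otherwise set up in a spreadsheet. The admittances and the two fixed “source” nodes are my own made-up example, not data from the comment: each sweep replaces every free node’s potential by the admittance-weighted average of its neighbours, until the values stop changing.

```python
# Made-up 5-node circuit: admittance[i][j] = Y_ij between nodes i and j.
admittance = {
    0: {1: 1.0, 2: 2.0},
    1: {0: 1.0, 2: 1.0, 3: 1.0},
    2: {0: 2.0, 1: 1.0, 4: 1.0},
    3: {1: 1.0, 4: 1.0},
    4: {2: 1.0, 3: 1.0},
}
fixed = {0: 1.0, 4: 0.0}            # nodes held at known potentials
V = {i: 0.0 for i in admittance}    # initial guess everywhere else
V.update(fixed)

for sweep in range(1000):
    change = 0.0
    for i, nbrs in admittance.items():
        if i in fixed:
            continue
        # admittance-weighted average of the neighbouring potentials
        new = sum(Y * V[j] for j, Y in nbrs.items()) / sum(nbrs.values())
        change = max(change, abs(new - V[i]))
        V[i] = new
    if change < 1e-12:
        break

print(V)
```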
Hmm…, for a tree-shaped network, i.e. with no loops, you can solve the Laplacian in a single iteration, by working up the branches toward the main stem. So if you combine that with the method I described above, that might indeed work. Interesting!
Gerard
In the article I linked, it explains that a tree is indeed very easily and efficiently soluble, but unfortunately its solution is significantly different from that of the general Laplacian. One of the keys to this approach is apparently adding back in just enough of the edges completing loops that significantly affect the solution, so as to get a very good approximation while keeping the extra work (beyond what is required for solving a tree) small.
When the popular article says “solving graph Laplacians”, they are actually referring to solving Poisson’s equation

$\Delta \psi = \rho$

for $\psi$ given $\rho$, where $\Delta$ is a graph Laplacian. You seem to be talking about solving the Laplace equation

$\Delta \psi = 0$
Does your method generalize to that? There should be some pretty easy way…
Sorry, I’m not sure I understand that sentence: it reminds me of how Germans put prepositions at the end, like “I woke this morning up.” Maybe you mean that the number of iterations is roughly proportional to the number of vertices times the average distance between two vertices, where the distance between two vertices is defined to be the length of the shortest path connecting them?
Yes, generalizing the method to the Poisson equation is possible.

You use (my first latex):

$V_i \rightarrow \sum_j V_j Y_{ji} \Big/ \sum_j Y_{ji}$

But you need also

$V_i \rightarrow \Big( \sum_j V_j Y_{ji} - \rho_i \Big) \Big/ \sum_j Y_{ji}$

Otherwise the
You are also right about the sentence I mixed up.
Gerard
Retry Latex:
$V_i \rightarrow (\sum(V_j Y_{ji})-\rho_ii)/\sum(Y_{ji})) $
Retry2 Latex:
latex$ V_i \rightarrow (\sum(V_j Y_{ji})-\rho_ii)/\sum(Y_{ji})) $
Your LaTeX is converging to correctness! I’m glad you’re trying to use LaTeX.
If you look directly above the box you type into here, you’ll see these directions:
I’ve attempted to fix your LaTeX, but you also have a sentence that doesn’t reach the end. I think I get your overall point, though.
Sorry about my messy replies. The unfinished sentence is about the fact that in the circuit, there is no net influx. Interestingly, this means that $\rho$ integrated over space is zero.
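And here, for concreteness, is a small Python version of the update rule quoted above, $V_i \rightarrow (\sum_j V_j Y_{ji} - \rho_i)/\sum_j Y_{ji}$, applied to a made-up graph. The damping factor is my own addition (it keeps the sweep from oscillating on bipartite graphs), and, as noted, $\rho$ has to sum to zero for the graph Poisson equation to have a solution, with the answer determined only up to an additive constant.

```python
import numpy as np

Y = np.array([[0, 1, 1, 0],
              [1, 0, 1, 1],
              [1, 1, 0, 1],
              [0, 1, 1, 0]], dtype=float)   # symmetric admittance matrix (made up)
rho = np.array([1.0, -0.5, 0.5, -1.0])      # source term, sums to zero
deg = Y.sum(axis=1)                          # sum_j Y_ji at each vertex

V = np.zeros(4)
for _ in range(500):
    update = (Y @ V - rho) / deg             # Gerard's update, all vertices at once
    V = 0.5 * V + 0.5 * update               # mild damping

# Check:  sum_j Y_ij (V_j - V_i) = rho_i, i.e. the graph Poisson equation.
print(np.allclose(Y @ V - deg * V, rho))
```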
I find the definition of the graph Laplacian peculiar. If I were to write the discretized version of the ordinary Laplacian, it would involve the function at three discretization points, whereas an ordinary partial derivative would involve just the difference between the function at two discretization points, which would resemble your graph Laplacian more.

Can one define such a thing as a partial derivative on a graph, and how would that differ from the Laplacian?
Anonymous wrote:

“If I were to write the discretized version of the ordinary Laplacian, it would involve the function at three discretization points…”
This definition gives the usual discretization of the ordinary Laplacian if our graph is a lattice, such as a square grid in the plane, or something similar in any other dimension. For example, in the 1-dimensional case, where we have a chain of vertices with each vertex $n$ connected to its neighboring vertices $n-1$ and $n+1$ by edges, this definition gives

$(\Delta \psi)(n) = \psi(n+1) - 2\psi(n) + \psi(n-1)$

which is (up to a constant factor) the usual discretization of the 2nd derivative.
Right! I can’t believe I missed that. Thank you for clarifying this.
While working on geometry and circuits, I obtained a nice side result last week: a formula for the N-circumsphere of an N-simplex in terms of squared distances only:
http://westy31.home.xs4all.nl/Circumsphere/ncircumsphere.htm
[…] There are other matrices that are defined by the graph. Perhaps the most familiar is the Laplacian, which has recently been a topic on this blog (see parts 15, 16 and 20 of the series, and this recent post). […]