Quantum Network Theory (Part 2)

guest post by Tomi Johnson

Last time I told you how a random walk called the ‘uniform escape walk’ could be used to analyze a network. In particular, Google uses it to rank nodes. For the case of an undirected network, the steady state of this random walk tells us the degrees of the nodes—that is, how many edges come out of each node.

Now I’m going to prove this to you. I’ll also exploit the connection between this random walk and a quantum walk, also introduced last time. In particular, I’ll connect the properties of this quantum walk to the degrees of a network by exploiting its relationship with the random walk.

This is pretty useful, considering how tricky these quantum walks can be. As the parts of the world that we model using quantum mechanics get bigger and have more complicated structures, like biological networks, we need all the help we can get in understanding quantum walks. So I’d better start!


Starting with any (simple, connected) graph, we can get an old-fashioned ‘stochastic’ random walk on this graph, but also a quantum walk. The first is the uniform escape stochastic walk, where the walker has an equal probability per time of walking along any edge leaving the node they are standing at. The second is the related quantum walk we’re going to study now. These two walks are generated by two matrices, which we called S and Q. The good thing is that these matrices are similar, in the technical sense.

We studied this last time, and everything we learned is summarized here:

Diagram outlining the main concepts (again)


G is a simple graph that specifies

A the adjacency matrix (the generator of a quantum walk) with elements A_{i j} equal to unity if nodes i and j are connected, and zero otherwise (A_{i i} = 0), which subtracted from

D the diagonal matrix of degrees D_{i i} = \sum_j A_{i j} gives

L = D - A the symmetric Laplacian (generator of stochastic and quantum walks), which when normalized by D returns both

S = L D^{-1} the generator of the uniform escape stochastic walk and

Q = D^{-1/2} L D^{-1/2} the quantum walk generator to which it is similar!
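To make this concrete, here is a short numerical sketch in Python with NumPy. The 4-node graph is an assumed toy example of my own, not the graph pictured in this post; the code just builds A, D, L, S and Q as defined above and confirms that S and Q, being similar, share a spectrum:

```python
import numpy as np

# Assumed toy graph (not the one pictured in the post):
# a 4-node path 0-1-2-3 plus the extra edge 1-3.
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0            # adjacency matrix

D = np.diag(A.sum(axis=1))             # diagonal matrix of degrees
L = D - A                              # symmetric Laplacian
S = L @ np.linalg.inv(D)               # uniform escape stochastic generator
D_half_inv = np.diag(1.0 / np.sqrt(np.diag(D)))
Q = D_half_inv @ L @ D_half_inv        # quantum walk generator

# Similar matrices share eigenvalues; Q is symmetric while S is not.
eig_S = np.sort(np.linalg.eigvals(S).real)
eig_Q = np.sort(np.linalg.eigvalsh(Q))
print(np.allclose(eig_S, eig_Q))       # True
```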

Now I hope you remember where we are. Next I’ll talk you through the mathematics of the uniform escape stochastic walk S and how it connects to the degrees of the nodes in the large-time limit. Then I’ll show you how this helps us solve aspects of the quantum walk generated by Q.

Stochastic walk

The uniform escape stochastic walk generated by S is popular because it has a really useful stationary state.

To recap from Part 20 of the network theory series, a stationary state of a stochastic walk is one that does not change in time. By the master equation

\displaystyle{ \frac{d}{d t} \psi(t) = -S \psi(t)}

the stationary state must be an eigenvector of S with eigenvalue 0.

A fantastic pair of theorems holds:

• There is always a unique (up to multiplication by a positive number) positive eigenvector \pi of S with eigenvalue 0. That is, there is a unique stationary state \pi.

• Regardless of the initial state \psi(0), any solution of the master equation approaches this stationary state \pi in the large-time limit:

\displaystyle{ \lim_{t \rightarrow \infty} \psi(t) = \pi }

To find this unique stationary state, consider the Laplacian L, which is both infinitesimal stochastic and symmetric. Among other things, this means the rows of L sum to zero:

\displaystyle{ \sum_j L_{i j} = 0 }

Thus, the ‘all ones’ vector \mathbf{1} is an eigenvector of L with zero eigenvalue:

L \mathbf{1} = 0

Inserting the identity I = D^{-1} D into this equation we then find D \mathbf{1} is a zero eigenvector of S:

L \mathbf{1} =  ( L D^{-1} ) ( D \mathbf{1} ) = S ( D \mathbf{1} ) = 0

Therefore we just need to normalize this to get the large-time stationary state of the walk:

\displaystyle{ \pi = \frac{D \mathbf{1}}{\sum_i D_{i i}} }

If we write i for the basis vector that is 1 at the ith node of our graph and zero elsewhere, the inner product \langle i , \pi \rangle is the large-time probability of finding a walker at that node. The equation above implies this is proportional to the degree D_{i i} of node i.
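Here is a quick numerical confirmation, again on an assumed 4-node toy graph of my own (not the graph illustrated in this post): the normalized degree vector is annihilated by S, and an arbitrary initial distribution relaxes to it at large times.

```python
import numpy as np
from scipy.linalg import expm

# Assumed toy graph: 4-node path 0-1-2-3 plus the edge 1-3.
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
D = np.diag(A.sum(axis=1))
S = (D - A) @ np.linalg.inv(D)

pi = np.diag(D) / np.diag(D).sum()     # pi = D1 / sum_i D_ii

print(np.allclose(S @ pi, 0))          # pi is a zero eigenvector of S

# Start the walker at node 0 and evolve for a long time.
psi0 = np.array([1.0, 0.0, 0.0, 0.0])
psi_late = expm(-S * 50.0) @ psi0
print(np.allclose(psi_late, pi))       # the walk has relaxed to pi
```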

We can check this for the following graph:

Illustration of a simple graph

We find that \pi is

\displaystyle{ \left( \begin{matrix} 1/6 \\ 1/6 \\ 1/4 \\ 1/4 \\ 1/6 \end{matrix} \right) }

which implies large-time probability 1/6 for nodes 1, 2 and 5, and 1/4 for nodes 3 and 4. Comparing this to the original graph, this exactly reflects the arrangement of degrees, as we knew it must.

Math works!

The quantum walk

Next up is the quantum walk generated by Q. Not a lot is known about quantum walks on networks of arbitrary geometry, but below we’ll see how some analytical results can be obtained by exploiting the similarity of S and Q.

Where to start? Well, let’s start at the bottom, with what quantum physicists call the ground state. In contrast to stochastic walks, for a quantum walk every eigenvector \phi_k of Q is a stationary state of the quantum walk. (In Puzzle 5, at the bottom of this page, I ask you to prove this.) The stationary state \phi_0 is of particular interest physically and mathematically. Physically, since eigenvectors of Q correspond to states of well-defined energy equal to the associated eigenvalue, \phi_0 is the state of lowest energy, energy zero, hence the name ‘ground state’. (In Puzzle 3, I ask you to prove that all eigenvalues of Q are non-negative, so zero really does correspond to the ground state.)

Mathematically, the relationship between eigenvectors implied by the similarity of S and Q means

\phi_0 \propto D^{-1/2} \pi \propto  D^{1/2} \mathbf{1}

So in the ground state, the probability of our quantum walker being found at node i is

| \langle i , \phi_0 \rangle |^2 \propto | \langle i , D^{1/2} \mathbf{1} \rangle |^2 = D_{i i}

Amazingly, this probability is proportional to the degree and so is exactly the same as \langle i , \pi \rangle, the probability in the stationary state \pi of the stochastic walk!

In short: a zero energy quantum walk Q leads to exactly the same distribution of the walker over the nodes as in the large-time limit of the uniform escape stochastic walk S. The classically important notion of degree distribution also plays a role in quantum walks!
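A numerical check of this correspondence, once more on my assumed 4-node toy graph: the vector D^{1/2} \mathbf{1}, normalized, is a zero energy eigenvector of Q, and its squared components reproduce the degree distribution.

```python
import numpy as np

# Assumed toy graph: 4-node path 0-1-2-3 plus the edge 1-3.
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=1)
D_half_inv = np.diag(1.0 / np.sqrt(deg))
Q = D_half_inv @ (np.diag(deg) - A) @ D_half_inv

# Ground state: phi_0 proportional to D^{1/2} 1, normalized to unit length.
phi0 = np.sqrt(deg) / np.linalg.norm(np.sqrt(deg))

print(np.allclose(Q @ phi0, 0))                  # zero energy
print(np.allclose(phi0**2, deg / deg.sum()))     # |<i, phi_0>|^2 = degree / total degree
```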

This is already pretty exciting. What else can we say? If you are someone who feels faint at the sight of quantum mechanics, well done for getting this far, but watch out for what’s coming next.

What if the walker starts in some other initial state? Is there some quantum walk analogue of the unique large-time state of a stochastic walk?

In fact, the quantum walk in general does not converge to a stationary state. But there is a probability distribution that can be thought of as characterizing the quantum walk in the same way that the large-time state characterizes the stochastic walk. It’s the large-time average probability vector P.

If you didn’t know the time that had passed since the beginning of a quantum walk, then the best estimate for the probability of your measuring the walker to be at node i would be the large-time average probability

\displaystyle{ \langle i , P \rangle = \lim_{T \rightarrow \infty} \frac{1}{T} \int_0^T | \psi_i (t) |^2 d t }

There’s a bit that we can do to simplify this expression. As usual in quantum mechanics, let’s start with the trick of diagonalizing Q. This amounts to writing

\displaystyle{  Q= \sum_k \epsilon_k \Phi_k }

where \Phi_k are projectors onto the eigenvectors \phi_k of Q, and \epsilon_k are the corresponding eigenvalues of Q. If we insert this equation into

\psi(t)  = e^{-i Q t} \psi(0)

we get

\displaystyle{  \psi(t)  = \sum_k e^{-i \epsilon_k t} \Phi_k \psi(0) }

and thus

\displaystyle{ \langle i , P \rangle = \lim_{T \rightarrow \infty} \frac{1}{T} \int_0^T | \sum_k e^{-i \epsilon_k t} \langle i, \Phi_k \psi (0) \rangle |^2 d t }

Due to the integral over all time, the interference between terms corresponding to different eigenvalues averages to zero, leaving:

\displaystyle{ \langle i , P \rangle = \sum_k | \langle i, \Phi_k \psi(0) \rangle |^2 }

The large-time average probability is then the sum of terms contributed by the projections of the initial state onto each eigenspace.
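The formula above is easy to evaluate numerically. The sketch below does so for my assumed 4-node toy graph with the walker starting at node 0; it assumes the spectrum of Q is non-degenerate, so that each projector \Phi_k is the outer product of a single eigenvector with itself.

```python
import numpy as np

# Assumed toy graph: 4-node path 0-1-2-3 plus the edge 1-3.
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=1)
D_half_inv = np.diag(1.0 / np.sqrt(deg))
Q = D_half_inv @ (np.diag(deg) - A) @ D_half_inv

eps, phi = np.linalg.eigh(Q)           # eigenvalues and orthonormal eigenvectors
psi0 = np.zeros(n); psi0[0] = 1.0      # walker starts at node 0

# <i, P> = sum_k |<i, Phi_k psi(0)>|^2  (non-degenerate spectrum assumed)
P = np.zeros(n)
for k in range(n):
    P += np.abs(phi[:, k] * (phi[:, k] @ psi0))**2

print(np.isclose(P.sum(), 1.0))        # P is a probability distribution
```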

So we have a distribution that characterizes a quantum walk for a general initial state, but it’s a complicated beast. What can we say about it?

Our best hope of understanding the large-time average probability is through the term | \langle i, \Phi_0 \psi (0) \rangle |^2 associated with the zero energy eigenspace, since we know everything about this space.

For example, we know the zero energy eigenspace is one-dimensional and spanned by the eigenvector \phi_0. This means that the projector is just the usual outer product

\Phi_0 = | \phi_0 \rangle \langle \phi_0 | = \phi_0 \phi_0^\dagger

where we have normalized \phi_0 according to the inner product \langle \phi_0, \phi_0\rangle = 1. (If you’re wondering why I’m using all these angled brackets, well, they’re a notation named after Dirac that is adored by quantum physicists.)

The zero eigenspace contribution to the large-time average probability then breaks nicely into two:

\begin{array}{ccl} | \langle i, \Phi_0 \psi (0) \rangle |^2 &=& | \langle i, \phi_0\rangle \; \langle \phi_0,  \psi (0) \rangle |^2  \\  \\  &=& | \langle i, \phi_0\rangle |^2 \; | \langle \phi_0 , \psi (0) \rangle |^2 \\   \\  &=& \langle i ,  \pi \rangle \; | \langle \phi_0 ,  \psi (0) \rangle |^2 \end{array}

This is just the product of two probabilities:

• first, the probability \langle i , \pi \rangle for a quantum state in the zero energy eigenspace to be at node i, as we found above,


• second, the probability | \langle \phi_0, \psi (0)\rangle |^2 of being in this eigenspace to begin with. (Remember, in quantum mechanics the probability of measuring the system to have an energy is the modulus squared of the projection of the state onto the associated eigenspace, which for the one-dimensional zero energy eigenspace means just the inner product with the ground state.)

This is all we need to say something interesting about the large-time average probability for all states. We’ve basically shown that we can break the large-time probability vector P into a sum of two normalized probability vectors:

P = (1- \eta) \pi + \eta \Omega

the first \pi being the stochastic stationary state associated with the zero energy eigenspace, and the second \Omega associated with the higher energy eigenspaces, with

\displaystyle{ \langle i , \Omega \rangle = \frac{ \sum_{k\neq 0} | \langle i, \Phi_k  \psi (0) \rangle |^2  }{ \eta} }

The weight of each term is governed by the parameter

\eta =  1 - | \langle \phi_0, \psi (0)\rangle |^2

which you could think of as the quantumness of the result. This is one minus the probability of the walker being in the zero energy eigenspace, or equivalently the probability of the walker being outside the zero energy eigenspace.

So even if we don’t know \Omega, we know its importance is controlled by a parameter \eta that governs how close the large-time average distribution P of the quantum walk is to the corresponding stochastic stationary distribution \pi.
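A numerical sketch of this decomposition, for my assumed 4-node toy graph with the walker starting at node 0 (non-degenerate spectrum assumed): the zero energy contribution equals (1 - \eta) \pi, the remainder is \eta times a probability vector \Omega, and the total variation between P and \pi indeed stays below 2 \eta.

```python
import numpy as np

# Assumed toy graph: 4-node path 0-1-2-3 plus the edge 1-3.
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=1)
D_half_inv = np.diag(1.0 / np.sqrt(deg))
Q = D_half_inv @ (np.diag(deg) - A) @ D_half_inv

eps, phi = np.linalg.eigh(Q)
psi0 = np.zeros(n); psi0[0] = 1.0                  # walker starts at node 0

pi = deg / deg.sum()                               # stochastic stationary state
phi0 = np.sqrt(deg) / np.linalg.norm(np.sqrt(deg)) # quantum ground state
eta = 1.0 - (phi0 @ psi0)**2                       # quantumness

P = np.zeros(n)
for k in range(n):
    P += np.abs(phi[:, k] * (phi[:, k] @ psi0))**2

term0 = phi0**2 * (phi0 @ psi0)**2                 # zero energy contribution
Omega = (P - term0) / eta                          # higher energy part

print(np.allclose(term0, (1.0 - eta) * pi))        # zero energy term is (1 - eta) pi
print(np.isclose(Omega.sum(), 1.0))                # Omega is normalized
print(np.abs(P - pi).sum() <= 2.0 * eta)           # the bound of Puzzle 1
```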

What do we mean by ‘close’? Find out for yourself:

Puzzle 1. Show, using a triangle inequality, that the trace distance between the two characteristic stochastic and quantum distributions \{ \langle i , P \rangle \}_i and \{ \langle i , \pi \rangle \}_i is upper-bounded by 2 \eta.

Can we say anything physical about when the quantumness \eta is big or small?

Because the eigenvalues of Q have a physical interpretation in terms of energy, the answer is yes. The quantumness \eta is the probability of being outside the zero energy eigenspace. Call the next lowest eigenvalue \Delta = \min_{k \neq 0} \epsilon_k the energy gap. If the quantum walk is not in the zero energy eigenspace, it must be in an eigenspace of energy greater than or equal to \Delta. Therefore the expected energy E of the quantum walker bounds the quantumness: E \ge \eta \Delta, or equivalently \eta \le E / \Delta.

This tells us that a quantum walk with a low energy is similar to a stochastic walk in the large-time limit. We already knew this was exactly true in the zero energy limit, but this result goes further.
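The bound is also easy to check numerically on my assumed 4-node toy graph: with the walker starting at node 0, the (conserved) expected energy \langle \psi(0), Q \psi(0) \rangle indeed exceeds \eta \Delta.

```python
import numpy as np

# Assumed toy graph: 4-node path 0-1-2-3 plus the edge 1-3.
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
deg = A.sum(axis=1)
D_half_inv = np.diag(1.0 / np.sqrt(deg))
Q = D_half_inv @ (np.diag(deg) - A) @ D_half_inv

eps = np.linalg.eigvalsh(Q)
Delta = eps[eps > 1e-9].min()                  # energy gap

psi0 = np.zeros(n); psi0[0] = 1.0              # walker starts at node 0
phi0 = np.sqrt(deg) / np.linalg.norm(np.sqrt(deg))
eta = 1.0 - (phi0 @ psi0)**2                   # quantumness
E = psi0 @ Q @ psi0                            # expected energy, conserved in time

print(E >= eta * Delta)                        # True
```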

So little is known about quantum walks on networks of arbitrary geometry that we were very pleased to find this result. It says there is a special case in which the walk is characterized by the degree distribution of the network, and a clear physical parameter that bounds how far the walk is from this special case.

Also, in finding it we learned that the difficulties posed by the dependence on the initial state, compounded by the lack of convergence to a stationary state, can be overcome for a quantum walk, and that the relationships between quantum and stochastic walks extend beyond those with shared generators.

What next?

That’s all for the latest bit of idea sharing at the interface between stochastic and quantum systems.

I hope I’ve piqued your interest about quantum walks. There’s so much still left to work out about this topic, and your help is needed!

Other questions we have include: What can be said analytically about the form of the quantum correction? Numerically, it is known that the so-called quantum correction \Omega tends to enhance the probability of being found on nodes of low degree compared to \pi. Can someone explain why? What happens if a small amount of stochastic noise is added to a quantum walk? Or a lot of noise?

It’s difficult to know who is best placed to answer these questions: experts in quantum physics, graph theory, complex networks or stochastic processes? I suspect it’ll take a bit of help from everyone.

Background reading

A couple of textbooks with comprehensive sections on non-negative matrices and continuous-time stochastic processes are:

• Peter Lancaster and Miron Tismenetsky, The Theory of Matrices: with Applications, 2nd edition, Academic Press, San Diego, 1985.

• James R. Norris, Markov Chains, Cambridge University Press, Cambridge, 1997.

There is, of course, the book that arose from the Azimuth network theory series, which considers several relationships between quantum and stochastic processes on networks:

• John Baez and Jacob Biamonte, A Course on Quantum Techniques for Stochastic Mechanics, 2012.

Another couple of books on complex networks are:

• Mark Newman, Networks: An Introduction, Oxford University Press, Oxford, 2010.

• Ernesto Estrada, The Structure of Complex Networks: Theory and Applications, Oxford University Press, Oxford, 2011. Note that the first chapter is available free online.

There are plenty more useful references in our article on this topic:

• Mauro Faccin, Tomi Johnson, Jacob Biamonte, Sabre Kais and Piotr Migdał, Degree distribution in quantum walks on complex networks.

Puzzles for the enthusiastic

Sadly I didn’t have space to show proofs of all the theorems I used. So here are a few puzzles that guide you to doing the proofs for yourself:

Stochastic walks and stationary states

Puzzle 2. (For the hard core.) Prove there is always a unique positive eigenvector for a stochastic walk generated by S. You’ll need the assumption that the graph G is connected. It’s not simple, and you’ll probably need help from a book, perhaps one of those above by Lancaster and Tismenetsky, and Norris.

Puzzle 3. Show that the eigenvalues of S (and therefore Q) are non-negative. A good way to start this proof is to apply the Perron-Frobenius theorem to the non-negative matrix M = - S + I \max_i S_{i i}. This implies that M has a positive eigenvalue r equal to its spectral radius

r = \max_k | \lambda_k |

where \lambda_k are the eigenvalues of M, and the associated eigenvector v is positive. Since S = - M + I \max_i S_{i i}, it follows that S shares the eigenvectors of M and the associated eigenvalues are related by inverted translation:

\epsilon_k = - \lambda_k + \max_i S_{i i}

Puzzle 4. Prove that regardless of the initial state \psi(0), the zero eigenvector \pi is obtained in the large-time limit \lim_{t \rightarrow \infty} \psi(t) = \pi of the walk generated by S. This breaks down into two parts:

(a) Using the approach from Puzzle 3, show that S v  = \epsilon_v v, the positivity of v, and the infinitesimal stochastic property \sum_i S_{i j} = 0 together imply \epsilon_v = 0, and thus that v = \pi is actually the unique zero eigenvector and stationary state of S (its uniqueness follows from Puzzle 2, so you don’t need to re-prove it).

(b) By inserting the decomposition S = \sum_k \epsilon_k \Pi_k into e^{-S t} and using the result of Puzzle 3, complete the proof.

(Though I ask you to use the diagonalizability of S, the main results still hold if the generator is irreducible but not diagonalizable.)

Quantum walks

Here are a couple of extra puzzles for those of you interested in quantum mechanics:

Puzzle 5. In quantum mechanics, probabilities are given by the moduli squared of amplitudes, so multiplying a state by a number of modulus unity has no physical effect. By inserting

\displaystyle{ Q= \sum_k \epsilon_k \Phi_k }

into the quantum time evolution matrix e^{-i Q t}, show that if

\psi(0) = \phi_k

then

\psi(t)  = e^{ - i \epsilon_k t} \psi(0)

hence \phi_k is a stationary state in the quantum sense, as probabilities don’t change in time.

Puzzle 6. By expanding the initial state \psi(0) in terms of the complete orthogonal basis vectors \phi_k show that for a quantum walk \psi(t) never converges to a stationary state unless it began in one.

21 Responses to Quantum Network Theory (Part 2)

  1. amarashiki says:

    Wonderful post again, Tomi Johnson! :)

  2. Alexander Golden says:

    Is it possible for the mathematics that describe “quantum walks” to apply to systems that are not physically “quantum” in the sense of the classical action S >> \hbar ?

    • Tomi Johnson says:

      That’s an interesting point. Here we’ve gone for a formulation of quantum mechanics that makes use of the Schroedinger equation. It would be interesting to instead use the Lagrangian formalism, as you suggest, whereupon taking \hbar \rightarrow 0 would give the classical path, i.e., that of extreme action.

  3. arch1 says:

    Tomi (all),
    Is it possible to give a plain English (or English + elementary math, skipping the matrix algebra and QM-ese) description of the quantum walk, along the lines of the one you provide for a uniform escape stochastic walk? In any case, thanks for the interesting posts.

    • John Baez says:

      This sounds like a fun puzzle! Let’s see if I get this right. First let me do the stochastic walk.

      In the stochastic walk you have a fixed probability per time of walking from the node you’re at to some neighboring node… and you’re equally likely to go to any neighboring node.

      So, for example, if you’re standing at some node, each 1/1000 of a second you have approximately a probability of 1/1000 of walking to some neighboring node in the graph. And if there are 6 neighboring nodes, you have an equal chance of walking to each of these nodes… so, each millisecond you have approximately a 1/6000 chance of walking to any specific one of these nodes.

      (Take the limit where you replace 1/1000 by smaller and smaller numbers to get the precise description of what’s going on. This is a continuous-time random walk, but it’s easier to imagine time ticking away in small discrete increments!)

      I got this by trying to translate the formula for the transition probability per time into words:

      S = L D^{-1}

      Here -S_{ij}, for i \ne j, is the probability per time of hopping from node j to node i. D is the diagonal matrix where D_{ii} is the number of edges incident to the ith node. Finally, L = D - A, where A_{ij} is 1 if there’s an edge between i and j and zero otherwise.

      So, the above formula says the probability per time of hopping from node j to a neighboring node i is

      -S_{ij} = A_{ij} / D_{jj}

      In plain English, this says you have a fixed probability (namely 1) per second to walk from the node you’re at to some neighboring node… and you’re equally likely to go to any neighboring node.

      • domenico says:

        I am thinking that if an output state is ever observed, then the adjacency matrix is asymmetric for that connection (only the chemical change from state k to state j can happen).
        So the stationary state, obtained in the infinite-time limit, has occupation only of state j, so that state k is empty; but then each state that connects to k is empty, and so on.
        It seems plausible that the continuous observation of a quantum state leads to the complete organic synthesis.

      • arch1 says:

        Thanks John! I see now how the transition probabilities relate to the matrix algebra (at least, to *that part* of the matrix algebra you cover here).

    • John Baez says:

      Now let me try the quantum walk, which is what arch1 actually asked about. Now instead of probabilities we need to talk about amplitudes.

      What’s the amplitude per time to walk from a node i to a neighboring node j? It’s 1 divided by:

      the square root of the number of neighbors of node i, times the square root of the number of neighbors of node j.

      For example, suppose you are at a node with 6 neighbors. What’s the amplitude that in the next 1/1000 second you’ll walk to a specific neighboring node? It depends on how many neighbors this node has! Suppose it has 5. Then the amplitude is

      \displaystyle{  \frac{1}{\sqrt{5}} \, \frac{1}{\sqrt{6}} \, \frac{1}{1000} }

      Compare this with the stochastic walk! In the same situation, in the stochastic walk, what’s the probability that in the next 1/1000 second you’ll walk to a specific neighboring node? It’s

      \displaystyle{  \frac{1}{6} \, \frac{1}{1000} }

      So, as usual, quantum mechanics has a symmetry between inputs and outputs that stochastic mechanics lacks. In quantum mechanics, the amplitude per time to hop from i to j is always equal to the amplitude per time to hop from j to i (at least up to a phase). In stochastic mechanics, the probability per time to hop from i to j need not equal the probability per time to hop from j to i.

      Also, since you need to take the absolute value of an amplitude and square it to get a probability, we see that our quantum walk is subject to the quantum Zeno effect. As Alan Turing wrote:

      It is easy to show using standard theory that if a system starts in an eigenstate of some observable, and measurements are made of that observable N times a second, then, even if the state is not a stationary one, the probability that the system will be in the same state after, say, one second, tends to one as N tends to infinity; that is, that continual observations will prevent motion …

  4. arch1 says:

    Thanks again! Given the transition probability behavior you describe I understand your comment about symmetry and Turing’s comment about the Zeno effect (which I’d have called the watched-pot effect:-)

    The squaring is weird. Taking what I think you’re saying at face value, it seems that a quantum walker must be (2^2-1^2)=3 times as likely to walk during its 2nd millisecond spent at a given node than during its 1st millisecond, so its walking amplitude must be sqrt(3) times larger for the 2nd ms than for the 1st ms. But this makes the concept ‘amplitude [to walk] per time’ seem incoherent or at least poorly named.

    ..so I must be confused (saying this just to acknowledge the obvious, not to beg more help as you already have both day and night jobs:-)

    • arch1 says:

      My 15 Aug 8:58pm comment was in reply to John’s 15 Aug 6:41am comment on quantum walks. Sorry for any confusion.

    • Tomi Johnson says:

      In the classical case, starting at some node, the probability of the walker being at a neighboring node initially increases linearly in time t. However, for a quantum walker it initially increases quadratically as t^2. So yes, I agree, one can’t think of rates of flow of probability in the quantum case. Things are linear, but for amplitudes, not probabilities. So talking about ‘amplitude per unit time’ is ok, but ‘probability per unit time’ not so. This is one of those things that makes quantum mechanics hard sometimes!

      As John said, one implication of this is the quantum Zeno effect. If you add some on-site dephasing (equivalent to a spy observing which node the walker is on but not telling you what he sees – just destroying the inter-site coherence), this will never turn the quantum walk into a classical walk. Rather it will just, for strong enough dephasing (frequent enough observation by the spy), cause the walker to stop.

    • John Baez says:

      Arch1 wrote:

      The squaring is weird. Taking what I think you’re saying at face value, it seems that a quantum walker must be (2^2-1^2)=3 times as likely to walk during its 2nd millisecond spent at a given node than during its 1st millisecond, so its walking amplitude must be sqrt(3) times larger for the 2nd ms than for the 1st ms. But this makes the concept ‘amplitude [to walk] per time’ seem incoherent or at least poorly named.

      …so I must be confused.

      Let’s look at it a bit more carefully.

      Say there are just two nodes. Say the amplitude to stay at the first node is a while the amplitude to hop to the second one is b. Since we take the absolute value squared to get from an amplitude to a probability, and probabilities sum to 1, we must have

      |a|^2 + |b|^2 = 1.

      So, |a| and |b| are the coordinates of a point on a circle! If the ‘amplitude to walk per time’ equals 1, this point goes around the circle at speed 1. The simplest option is this:

      a = \cos t

      b = \sin t

      This works just fine:

      |a|^2 + |b|^2 = \cos^2 t + \sin^2 t = 1

      Now suppose we wait just a short amount of time, so that t is small. The amplitude to be at the first node at t is

      a = \cos t \approx 1 - t^2/2

      The amplitude to be at the second node is:

      b = \sin t \approx t

      This increases linearly with time (for small times), so I feel justified in talking about the ‘amplitude to walk per time’: it’s 1.

      However, the probability to be at the second node at time t is

      |b|^2 \approx t^2

      So yes, the probability to be there after 2 milliseconds is 4 times the probability to be there after 1 millisecond.

      I think maybe you were mixing up the change of the square of something and the square of the change.

      Philosophically, part of the point is you can only tell if the quantum walker has stepped from one node to another if you look. If you look often enough, you can reduce the probability of it doing this as small as you like. “The watched pot never boils.” But if you don’t look, it doesn’t make much sense to ask whether the walker made the step in the first millisecond or the second—since there is no way to answer this question!

      This is the kind of thing that makes some people think quantum mechanics is strange. But these people are stuck in an old worldview, which ignores the danger of counterfactual questions like “what would I have seen if I’d looked, even though I didn’t?”
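The two-node cos/sin picture above can be checked numerically. Here is a tiny sketch (the value of t is an arbitrary short time): the probabilities always sum to one, the amplitude to be at the second node grows linearly for small t, but the probability grows quadratically, which is exactly the factor-of-4 effect discussed above.

```python
import numpy as np

# a(t) = cos t, b(t) = sin t for the two-node walk with unit amplitude per time.
t = 1e-3                                   # a short time, "one millisecond"
a, b = np.cos(t), np.sin(t)

print(np.isclose(a**2 + b**2, 1.0))        # probabilities sum to 1
print(np.isclose(b, t, rtol=1e-5))         # amplitude grows linearly for small t
# ...but the probability grows quadratically: 4x larger after twice the time.
print(np.isclose(np.sin(2*t)**2, 4 * np.sin(t)**2, rtol=1e-4))
```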

  5. pendantry says:

    I think my brain exploded midway through the fourth paragraph. Thank you anyway, for the attempt to enlighten.

    PS Happy Earth Overshoot Day.

  6. I think this post is also very well written and generally very clear, but there are just a few points that i am not sure i am getting right, mostly regarding the stochastic walk:

    Given the definition of \psi (its elements represent the probability of the walker being found on that node) it seems we are implicitly considering only those vectors in the simplex \sum_i \psi_i = 1.

    If that is true then both the initial condition \psi(0) and the stationary state \pi must lie on this surface, but this constraint does not seem (to me) to be enforced by the evolution equations. Perhaps this is the reason why you say that you “need” to normalize the zero eigenvector D\mathbf{1}? I am puzzled by why you do this normalization otherwise.

    In general, if we could choose any \psi(0) independent of any constraint, then I would expect the “arrival” state to be some kind of projection of \psi(0) along the direction spanned by the unit vector \pi, so in that case the initial condition \psi(0) would still matter.

    I don’t know, I am sure i am getting something wrong, (sorry if this was somewhat covered previously, i haven’t been able to find anything about it).

    Other than that, I think that the big difference in behavior between the stochastic and quantum walks must be due to the imaginary number that multiplies the Hamiltonian in the evolution equation, which completely changes the game. That, and the fact that now squared amplitudes, not just amplitudes, represent probabilities, so the constraint now has a different shape, namely a sphere. Hopefully I am getting at least this part somewhat right :-).

    • Tomi Johnson says:

      Hi Giampiero,

      Thanks for reading and commenting!

      I think you understand what’s going on very well.

      We are indeed considering only vectors in the simplex \sum_i \psi_i (t) = 1 (and \psi_i \geq 0).

      The evolution equations do actually have the property that if \psi (0) is in the simplex then so is \psi (t) for all t.

      This property is ensured whenever the classical Hamiltonian H is infinitesimal stochastic (its columns sum to zero, its off-diagonals are non-positive, and diagonals are non-negative). You can then show that the evolution operator \mathrm{exp}(-Ht) is stochastic (its elements are non-negative and its columns add to unity). The stochastic map \psi (t) = \mathrm{exp}(-Ht) \psi (0) then preserves the normalization and non-negativity of \psi.

      If you want, I can provide proofs that an infinitesimal stochastic H leads to a stochastic \mathrm{exp}(-Ht).

      Also you can check that S is infinitesimal stochastic.

      I think this was explained more in Part 1 of the series, in case you missed that.

      Let me know if anything is still unclear!

      P.S. Yes, that factor of i makes a big difference doesn’t it!? So does the fact that in the quantum case the Hamiltonian is Hermitian rather than infinitesimal stochastic – again this is required to preserve probability, which as you point out are different for quantum.
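For a concrete check of the claims in this reply, here is a sketch on an assumed 4-node toy graph (my own example, not from the post): the columns of S sum to zero, while the columns of \mathrm{exp}(-St) are non-negative and sum to one.

```python
import numpy as np
from scipy.linalg import expm

# Assumed toy graph: 4-node path 0-1-2-3 plus the edge 1-3.
edges = [(0, 1), (1, 2), (2, 3), (1, 3)]
n = 4
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
D = np.diag(A.sum(axis=1))
S = (D - A) @ np.linalg.inv(D)

print(np.allclose(S.sum(axis=0), 0))     # infinitesimal stochastic: columns sum to 0
U = expm(-S * 2.7)                       # evolution operator at an arbitrary time
print(np.allclose(U.sum(axis=0), 1.0))   # stochastic: columns sum to 1
print(np.all(U >= -1e-12))               # entries non-negative
```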

  7. Hi Tomi, thanks for answering. OK, now I see what “infinitesimal stochastic” and “stochastic” really mean. You actually said it in a sentence above, but I guess there were probably too many things at once for me :-)

    If you want, I can provide proofs that an infinitesimal stochastic H leads to a stochastic \mathrm{exp}(-Ht).

    No, I trust you on that one, and I think that intuitively it sort of makes sense, given that exp(0) = 1.

    Also, the other missing piece that I didn’t see before is that if all the columns of a stochastic matrix add up to one, then any convex combination of these columns must also add up to one. As a consequence, if \psi (0) is in the simplex so must be \psi (t), and the simplex is invariant.

  8. In this blog post I will introduce some basics of quantum mechanics, with the emphasis on why a particle being in a few places at once behaves measurably differently from a particle whose position we just don’t know. It’s a kind of continuation of the “Quantum Network Theory” series by Tomi Johnson about our work in Jake Biamonte’s group at the ISI Foundation in Turin. […]

  9. […] The relation between quantum walks on complex networks and node degree, Quantum Network Theory Part 2  […]
