• The Philosophy and Physics of Noether’s Theorems, 5-6 October 2018, Fischer Hall, 1-4 Suffolk Street, London, UK. Organized by Bryan W. Roberts (LSE) and Nicholas Teh (Notre Dame).

Thanks.

While I really do like this exploration of the analogy between stochastic and quantum, I do think that the randomness in the stochastic case is a much different beast. Measurements are very different in the two cases. Obviously in both cases measuring changes future evolution. The difference crops up in that even unconditioned measurements change final distributions in quantum mechanics. In stochastic mechanics we can imagine that we “peek” at every time step, without changing anything. In that sense, considering every step random gives no observable differences. This is just not possible in quantum mechanics.

(You’re well aware of all this of course, but it’s something that regularly gets swept under the rug, and I think is worth pointing out when making analogies of this sort.)
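The contrast can be made concrete numerically. Below is a minimal sketch (the two-state chain and the Hadamard-style quantum evolution are illustrative choices, not anything from the discussion above): for the Markov chain, "peeking" at the intermediate state and then averaging over what you saw reproduces the blind two-step marginal exactly, while in the quantum case an *unconditioned* intermediate measurement changes the final distribution.

```python
import math

# Markov chain: peeking at the intermediate state leaves the final
# marginal unchanged (law of total probability).
P = [[0.9, 0.1],
     [0.4, 0.6]]          # row-stochastic transition matrix (assumed rates)
p0 = [1.0, 0.0]           # start in state 0

def step(p, M):
    return [sum(p[i] * M[i][j] for i in range(2)) for j in range(2)]

no_peek = step(step(p0, P), P)          # evolve two steps blindly
peek = [0.0, 0.0]                       # condition on the midpoint, then average
for mid in range(2):
    prob_mid = step(p0, P)[mid]
    cond = [0.0, 0.0]; cond[mid] = 1.0  # collapse onto the observed state
    out = step(cond, P)
    peek = [peek[j] + prob_mid * out[j] for j in range(2)]
print(no_peek, peek)                    # the two marginals agree

# Quantum: an unconditioned intermediate measurement *does* change things.
h = 1 / math.sqrt(2)
H = [[h, h], [h, -h]]                   # Hadamard evolution step
psi0 = [1.0, 0.0]

def apply(U, v):
    return [sum(U[i][j] * v[j] for j in range(2)) for i in range(2)]

psi = apply(H, apply(H, psi0))          # H twice, no measurement
no_measure = [abs(a) ** 2 for a in psi]  # essentially [1, 0]
measure = [0.0, 0.0]
mid_probs = [abs(a) ** 2 for a in apply(H, psi0)]
for mid in range(2):
    basis = [0.0, 0.0]; basis[mid] = 1.0  # collapse; outcome not recorded
    out = apply(H, basis)
    measure = [measure[j] + mid_probs[mid] * abs(out[j]) ** 2 for j in range(2)]
print(no_measure, measure)              # roughly [1, 0] versus [0.5, 0.5]
```

The quantum half is just the familiar interference story: without the intermediate measurement the amplitudes recombine, with it they cannot, even though nobody looks at the measurement record.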

Yeah, my point was that in both quantum mechanics and stochastic mechanics, the state of the system at time $t$ is a deterministic function of the state at time zero, but this state describes the *probability distribution* of results you get when you measure any observable. So if you say time evolution is random in a so-called ‘random walk’, maybe you should say the same for time evolution in quantum mechanics too; and if you don’t say it there, you shouldn’t say it here. Anyway, my first sentence here is what I meant to say, so let me fix my blog post so it says that!

I side with Garrett on this one. To get the 10% value, we need to assume that there are a finite number of jewels in the box. In that case the Markov inequality, applied to the counting statistics, gives the 10% bound.

But say we had no knowledge of the number of jewels in the box. Given our ignorance of that number, we would have to assume a uniform distribution ranging anywhere from 0 up to some very large number. The mean of a uniform distribution is simply 1/2 of that very large number, which would certainly be larger than 10, correct?
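For what it’s worth, here is a quick numerical sketch of the two ingredients just mentioned, under an assumed setup: the jewel count $X$ is uniform on $\{0,\dots,N\}$ (the cutoff $N = 1000$ is arbitrary, standing in for “a very large number”), and Markov’s inequality says $P(X \ge a) \le E[X]/a$ for nonnegative $X$.

```python
import random

# Assumed prior: jewel count X uniform on {0, ..., N}.
random.seed(0)
N = 1000                                   # "a very large number" (arbitrary here)
samples = [random.randint(0, N) for _ in range(100_000)]
mean = sum(samples) / len(samples)         # close to N/2, far bigger than 10
a = 10
frac = sum(x >= a for x in samples) / len(samples)
# Markov's inequality: P(X >= a) <= E[X] / a  (trivially true here, since
# the bound E[X]/a is about N/20, far above 1).
print(f"mean ~ {mean:.1f}, P(X >= {a}) ~ {frac:.3f}, Markov bound {mean / a:.1f}")
```

With this prior the mean really is about $N/2$, so the Markov bound at $a = 10$ says nothing useful, which is the point of the comment: the 10% figure only emerges once you pin down the counting statistics.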

Based on that, I would assume the value is closer to Garrett’s 0.000045 number than 0.1.

Thanks for a word problem that makes one think.

Sorry. Of course I meant that the expectations are constant. I was in a rush.

My calculations are certainly based, in part, on Brendan’s original. I simply took his ideas and reformulated them using discrete calculus.

Eric wrote:

I vaguely remember that if the expectation of $O$ and $O^2$ is zero, then it means $O$ itself must be constant on all connected components.

If the expectation of $O^2$ is zero, then $O$ has to be *zero* wherever the probability distribution is supported.

Maybe you meant something like this:

**Theorem.** Given a Markov process on a finite set of states $X$, and an observable $O$ that has the property that the expectation values of $O$ and $O^2$ don’t change with time regardless of the initial conditions, then $O$ is constant on each connected component of the ‘transition graph’ of that Markov process.

This is part of our Theorem 1. We prove it for continuous-time Markov processes, but the result is also true for discrete-time ones (also known as ‘Markov chains’).
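To see why the $O^2$ condition earns its keep, here is a toy numerical check (the three-state branching process and the Euler integration are my own illustrative choices): state 1 decays to states 0 and 2 at equal unit rates, and the observable $O = (0, 1/2, 1)$ has $\langle O \rangle$ conserved for *every* initial distribution, even though $O$ is not constant on the connected transition graph. The giveaway is that $\langle O^2 \rangle$ is not conserved.

```python
# Toy check (assumed example): state 1 decays to states 0 and 2 at
# equal unit rates.  H[i][j] = rate of j -> i jumps, with columns
# summing to zero (infinitesimal stochastic).
H = [[0, 1, 0],
     [0, -2, 0],
     [0, 1, 0]]
O = [0.0, 0.5, 1.0]      # not constant, though the transition graph is connected

def evolve(psi, t, steps=20000):
    """Euler-integrate the master equation d(psi)/dt = H psi."""
    dt = t / steps
    for _ in range(steps):
        psi = [psi[i] + dt * sum(H[i][j] * psi[j] for j in range(3))
               for i in range(3)]
    return psi

def expect(f, psi):
    return sum(fi * pi for fi, pi in zip(f, psi))

psi0 = [0.0, 1.0, 0.0]   # start in the unstable state
psi1 = evolve(psi0, 1.0)
print(expect(O, psi0), expect(O, psi1))      # <O> stays at 0.5
O2 = [o * o for o in O]
print(expect(O2, psi0), expect(O2, psi1))    # <O^2> drifts away from 0.25
```

So conserving $\langle O \rangle$ alone is not enough to force $O$ to be constant on connected components; this $O$ fails the theorem’s hypotheses precisely because $\langle O^2 \rangle$ moves.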

Morally speaking the situation is the same when the state space is an infinite measure space, but technically speaking it’s a lot more subtle.

Btw, I’ll mention to readers who care about priority that your calculations are based to some extent on Brendan’s original calculations which went into the proof of Theorem 1. I’ve already had trouble publishing one paper because I talked about it on this blog a bunch and a referee somehow concluded that meant the paper contained nothing new!

Network Theory and Discrete Calculus – Noether’s Theorem

I vaguely remember that if the expectation of $O$ and $O^2$ is zero, then it means $O$ itself must be constant on all connected components.

In other words, if $[H, O] = 0$, it means $O$ itself is constant.

This seems significantly less interesting than the quantum version. The stuff I looked at was discrete in time as well, so maybe things are more interesting in continuous time (?).

Since this paper is an offshoot of the network theory project, we’re mainly thinking about Markov processes coming from chemical reaction networks, where a state consists of a collection of molecules, and we have conserved quantities like ‘the number of nitrogen atoms’.

But thanks to you, I’m now trying to imagine geometrical examples. The only really nice ones that leap to mind come from foliations of Riemannian manifolds, where we require that a particle stay on a given leaf while doing its random walk.

Argh. Stupid me, I should have checked the converse first: a bounded harmonic function does not necessarily commute with the Brownian motion semigroup! Luckily, if it does, it must be locally constant.
