• The Philosophy and Physics of Noether’s Theorems, 5-6 October 2018, Fischer Hall, 1-4 Suffolk Street, London, UK. Organized by Bryan W. Roberts (LSE) and Nicholas Teh (Notre Dame).

While I really do like this exploration of the analogy between stochastic and quantum mechanics, I do think that the randomness in the stochastic case is a much different beast. Measurements are very different in the two cases. Obviously, in both cases measuring changes future evolution. The difference is that in quantum mechanics even unconditioned measurements change final distributions. In stochastic mechanics we can imagine that we “peek” at every time step without changing anything; in that sense, considering every step random gives no observable differences. This is just not possible in quantum mechanics.

(You’re well aware of all this of course, but it’s something that regularly gets swept under the rug, and I think is worth pointing out when making analogies of this sort.)
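The “peek” claim can be made concrete with a small simulation. This is only a sketch, on a hypothetical two-state chain made up for illustration: conditioning on an intermediate observation and then averaging over its outcomes reproduces the unconditioned two-step distribution exactly.

```python
# Hedged sketch: in a Markov chain, an unconditioned "peek" at an
# intermediate time leaves the final distribution unchanged.
# The two-state transition matrix below is invented for illustration.

T = [[0.9, 0.1],   # T[i][j] = P(next = j | now = i)
     [0.4, 0.6]]

def step(p):
    """Evolve a distribution one step: p' = p T."""
    return [sum(p[i] * T[i][j] for i in range(2)) for j in range(2)]

p0 = [1.0, 0.0]

# No peek: evolve two steps directly.
no_peek = step(step(p0))

# Peek: after one step, observe the state (collapsing the distribution
# onto each outcome), evolve each branch, and average over outcomes.
p1 = step(p0)
branches = [step([1.0 if i == k else 0.0 for i in range(2)]) for k in range(2)]
peek = [sum(p1[k] * branches[k][j] for k in range(2)) for j in range(2)]

print(no_peek)  # ≈ [0.85, 0.15]
print(peek)     # the same distribution, up to rounding
```

Nothing like this works in the quantum case, where collapsing onto a basis and averaging destroys the interference terms.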

But say we had no knowledge of the number of jewels in the box. Given our ignorance of that number, we have to assume a uniform distribution ranging anywhere from 0 to some very large number. The mean of a uniform distribution is simply half of that very large number, which would certainly be larger than 10, correct?
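As a quick sanity check on that arithmetic (with a made-up cap `Nmax` standing in for the “very large number”):

```python
# Minimal sketch: the mean of a uniform prior on {0, 1, ..., Nmax}
# is Nmax / 2, which dwarfs 10 once Nmax is at all large.
# Nmax = 1000 is an arbitrary placeholder, not a number from the problem.

Nmax = 1000
mean = sum(range(Nmax + 1)) / (Nmax + 1)
print(mean)  # 500.0, i.e. Nmax / 2
```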

Based on that, I would assume the value is closer to Garrett’s 0.000045 number than 0.1.

Thanks for a word problem that makes one think.

My calculations are certainly based, in part, on Brendan’s original. I simply took his ideas and reformulated them using discrete calculus.

I vaguely remember that if the expectation of $O$ and $O^2$ is zero, then it means $O$ itself must be constant on all connected components.

If the expectation of $O^2$ is zero, then $O$ has to be *zero* wherever the probability distribution is supported.

Maybe you meant something like this:

**Theorem.** Given a Markov process on a finite set of states $X$, and an observable $O$ with the property that the expectation values of $O$ and $O^2$ don’t change with time regardless of the initial conditions, then $O$ is constant on each connected component of the ‘transition graph’ of that Markov process.

This is part of our Theorem 1. We prove it for continuous-time Markov processes, but the result is also true for discrete-time ones (also known as ‘Markov chains’).
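A minimal numerical sketch of the discrete-time version (not taken from the paper; the 3-state chain below is hypothetical, with two connected components $\{0,1\}$ and $\{2\}$): an observable constant on each component commutes with the transition matrix, so the expectations of $O$ and $O^2$ stay fixed for any initial distribution.

```python
# Sketch of Theorem 1, discrete-time version, on an invented 3-state
# Markov chain whose transition graph has components {0, 1} and {2}.

T = [[0.7, 0.3, 0.0],   # T[i][j] = P(next = j | now = i)
     [0.2, 0.8, 0.0],
     [0.0, 0.0, 1.0]]

O = [5.0, 5.0, -2.0]    # constant on each connected component

def evolve(p):
    """One step of the Markov chain: p' = p T."""
    return [sum(p[i] * T[i][j] for i in range(3)) for j in range(3)]

def expect(f, p):
    return sum(f[i] * p[i] for i in range(3))

# O (viewed as a diagonal matrix) commutes with T:
# T[i][j] * (O[i] - O[j]) = 0, since T[i][j] > 0 only within a component.
assert all(T[i][j] * (O[i] - O[j]) == 0 for i in range(3) for j in range(3))

p = [0.5, 0.2, 0.3]     # arbitrary initial distribution
e1 = expect(O, p)
e2 = expect([o * o for o in O], p)
for _ in range(10):
    p = evolve(p)

print(expect(O, p) - e1)                    # 0.0, up to rounding
print(expect([o * o for o in O], p) - e2)   # 0.0, up to rounding
```

The reason the expectations are conserved is visible in the assertion: probability mass never crosses between components, so the weight multiplying each constant value of $O$ is fixed.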

Morally speaking the situation is the same when the state space is an infinite measure space, but technically speaking it’s a lot more subtle.

Btw, I’ll mention to readers who care about priority that your calculations are based to some extent on Brendan’s original calculations which went into the proof of Theorem 1. I’ve already had trouble publishing one paper because I talked about it on this blog a bunch and a referee somehow concluded that meant the paper contained nothing new!

Network Theory and Discrete Calculus – Noether’s Theorem

I vaguely remember that if the expectation of $O$ and $O^2$ is zero, then it means $O$ itself must be constant on all connected components.

In other words, if the expectations of $O$ and $O^2$ don’t change with time, it means $O$ itself is constant.

This seems significantly less interesting than the quantum version. The stuff I looked at was discrete in time as well, so maybe things are more interesting in continuous time (?).

But thanks to you, I’m now trying to imagine geometrical examples. The only really nice ones that leap to mind come from foliations of Riemannian manifolds, where we require that a particle stay on a given leaf while doing its random walk.
