If an infinitesimal transformation of the potential produces only a small variation of the trajectory, it might be possible to find an (infinitesimal) coordinate transformation that yields an invariant of the trajectory: that is, an invariant connecting variations of the coordinates with the variation of the potential (as in field theory), but for a classical system.

The spectral theorem in its grown-up, infinite-dimensional form tells you how to deal with this. Needless to say, physicists *pretend* the infinite-dimensional case is just like the finite-dimensional case… but Reed and Simon give a rigorous treatment of this crucial issue.

Now suppose the operator $H = -\nabla^2 + c/r^2$ acts in 4 dimensions. I checked my answers with Barry Simon, the champ of mathematical physics.

And I was happy to discover that my answers were all correct! Moreover, a lot of this material will appear in his forthcoming book *A Comprehensive Course in Analysis, Part 4: Operator Theory*.

So, here’s the story:

• When $c \ge 0$, in this case $H$ is essentially self-adjoint on $C_0^\infty(\mathbb{R}^4 \setminus \{0\})$. So, it admits a unique self-adjoint extension and there’s no ambiguity about this case.

• When $c < 0$, in this case $H$ is *not* essentially self-adjoint on $C_0^\infty(\mathbb{R}^4 \setminus \{0\})$.

• When $-1 \le c < 0$, in this case the expected energy $\langle \psi, H \psi \rangle$ is bounded below for $\psi \in C_0^\infty(\mathbb{R}^4 \setminus \{0\})$. It thus has a canonical ‘best choice’ of self-adjoint extension, the Friedrichs extension.

• When $c < -1$, in this case the expected energy is not bounded below, so we don’t have the Friedrichs extension to help us choose which self-adjoint extension is ‘best’.

In short, it’s the same as in 3 dimensions, but with the numbers 0 and -1 replacing 3/4 and -1/4. The situation is ‘better’ than in 3 dimensions.
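For concreteness, here is how I’d summarize that case analysis in code. This is just a restatement of the four bullets above with the 4-dimensional thresholds 0 and −1, nothing more:

```python
# Classify H = -laplacian + c/r^2 in 4 dimensions, summarizing the
# case analysis above (thresholds 0 and -1); an illustrative sketch.

def extension_status(c: float) -> str:
    """Status of H on smooth functions supported away from the origin,
    in 4 dimensions."""
    if c >= 0:
        return "essentially self-adjoint: unique self-adjoint extension"
    if c >= -1:
        return "not essentially self-adjoint, but bounded below: Friedrichs extension"
    return "not essentially self-adjoint, unbounded below: no canonical choice"

for c in (1.0, -0.5, -2.0):
    print(c, "->", extension_status(c))
```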

Thanks! The notion of extension of an operator is clear to me now, and also that a typical case involves relaxing boundary conditions of the functions on which it operates. I also understand now that changing the domain can change the set of “candidate adjoints”, ideally to a singleton set containing only the operator itself. (Did I get this right?)

All that is exactly right! So, I’ll reward (or punish) you with a bit more information.

Often it is difficult or annoying to describe the domain of a self-adjoint operator. So, we often settle for an **essentially self-adjoint** operator: one that has a unique self-adjoint extension.

Any self-adjoint operator is essentially self-adjoint, with its unique self-adjoint extension being *itself*. But essential self-adjointness is a very useful generalization.

For example, here is an essentially self-adjoint operator:

$H \psi = -\frac{d^2 \psi}{dx^2}$

with the domain consisting of all smooth functions $\psi$ on the unit interval obeying

$\psi(0) = \psi(1) = 0.$

The unique self-adjoint extension of $H$ is the operator

$H' \psi = -\frac{d^2 \psi}{dx^2}$

with the larger domain consisting of all functions $\psi \in L^2[0,1]$ with

$\psi(0) = \psi(1) = 0$

and with second derivative lying in $L^2[0,1]$.

As you can see, saying “second derivative lying in $L^2[0,1]$” sounds more technical than saying “smooth” (infinitely differentiable). We need to be technical because we’re trying to describe the “exactly correct” domain instead of something that’s “almost right”, but a bit smaller. And in more complicated examples, the exactly correct domain is almost impossible to describe explicitly.
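If you want to see an operator like this “in the flesh”, a finite-difference approximation is a quick sanity check (my own illustration, not part of the discussion above): discretizing $-d^2/dx^2$ with the boundary conditions $\psi(0) = \psi(1) = 0$ gives a symmetric matrix whose low eigenvalues approach the familiar values $(k\pi)^2$:

```python
import numpy as np

# Discretize H = -d^2/dx^2 on [0,1] with Dirichlet conditions
# psi(0) = psi(1) = 0 on a uniform grid of N interior points.
N = 400
h = 1.0 / (N + 1)
main = 2.0 / h**2 * np.ones(N)
off = -1.0 / h**2 * np.ones(N - 1)
H = np.diag(main) + np.diag(off, 1) + np.diag(off, -1)

evals = np.linalg.eigvalsh(H)
print(evals[:3])                       # ~ pi^2, 4 pi^2, 9 pi^2
print((np.pi * np.arange(1, 4))**2)
```

Of course a finite symmetric matrix is always self-adjoint; the subtleties about domains only show up in the infinite-dimensional limit.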

Anyway, it’s all explained very nicely near the end of Reed and Simon. Good luck on that, and don’t hesitate to ask questions, especially in public forums where I can have the pleasure of showing off in my reply! I like analysis and hardly ever get to talk about it these days.

I’m really glad about the advice on Wightman/Streater and Reed/Simon. It had dawned on me that my next step should be studying a certain bunch of maths, which miraculously agrees with what Reed/Simon seems to cover. So after finishing Tao’s book I’ll just pick up Reed/Simon.

• Barry Simon, Essential self-adjointness of Schrödinger operators with singular potentials, *Arch. Rational Mech. Anal.* *52* (1973), 44–48.

proves:

Theorem 2: Let $V = V_1 + V_2$ with $V_1 \in L^2_{loc}(\mathbb{R}^n \setminus \{0\})$ and $V_2 \in L^\infty(\mathbb{R}^n)$, and suppose that

$V_1(x) \ge \left( \frac{3}{4} - \frac{(n-1)(n-3)}{4} \right) \frac{1}{|x|^2}.$

Then $-\nabla^2 + V$ is essentially self-adjoint on $C_0^\infty(\mathbb{R}^n \setminus \{0\})$.

Here I believe we must have $\nabla^2 = \sum_i \partial^2/\partial x_i^2$, though people often use the opposite sign convention, writing $\Delta = -\nabla^2$ to make it a nonnegative operator. This matters here, but I’m going to assume I’m right.

Anyway, the function $1/r^2$ is in $L^2_{loc}(\mathbb{R}^n \setminus \{0\})$, and we can take $V_1 = c/r^2$ and $V_2 = 0.$ So, the key condition is that

$c \ge \frac{3}{4} - \frac{(n-1)(n-3)}{4}$

and when $n = 4$ we have

$\frac{3}{4} - \frac{(n-1)(n-3)}{4} = 0$

so this theorem only says that $H = -\nabla^2 + c/r^2$ is essentially self-adjoint on $C_0^\infty(\mathbb{R}^4 \setminus \{0\})$ when

$c \ge 0.$

That’s not useless… but it’s useless for attractive potentials.

Let me just see how this theorem fares in 3 and 5 dimensions, where I already know stuff.

When $n = 3$ we have

$\frac{3}{4} - \frac{(n-1)(n-3)}{4} = \frac{3}{4}$

so this theorem says that $H$ is essentially self-adjoint on $C_0^\infty(\mathbb{R}^3 \setminus \{0\})$ when

$c \ge \frac{3}{4}.$

Good, this matches what’s in my blog article!

When $n = 5$ we have

$\frac{3}{4} - \frac{(n-1)(n-3)}{4} = -\frac{5}{4}$

so this theorem says that $H$ is essentially self-adjoint on $C_0^\infty(\mathbb{R}^5 \setminus \{0\})$ when

$c \ge -\frac{5}{4}.$

Good! This matches what I said in an earlier comment!
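To summarize the pattern: if I’ve read off the dimension dependence correctly, the threshold in $n$ dimensions is $\frac{3}{4} - \frac{(n-1)(n-3)}{4}$, and a two-line script reproduces the three values above. (This consolidation is mine, so treat the formula with the same caution as the rest of this comment.)

```python
# Threshold on c for essential self-adjointness of -laplacian + c/r^2
# in dimension n, assuming the constant 3/4 - (n-1)(n-3)/4 is right.

def esa_threshold(n: int) -> float:
    return 3 / 4 - (n - 1) * (n - 3) / 4

for n in (3, 4, 5):
    print(n, esa_threshold(n))   # 3 -> 0.75, 4 -> 0.0, 5 -> -1.25
```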

Hi John, thanks for your elaborate reply! I’ll boldly display my ignorance and ask: what exactly is meant by “extension” of a self-adjoint operator?

Actually I was talking about taking an operator that’s not self-adjoint and trying to extend it to make it self-adjoint. So your real question is: *what’s an extension of an operator?*

And the answer is: *an operator is a kind of function, and I think you know what it means to extend a function*. But there’s more to say.

Let’s consider the simple Hamiltonian from your reply. Does “extension” mean extending its domain […]

Yes, if you’d stopped there your answer would be correct. The linear operators that physicists like are rarely operators that map *all* of a Hilbert space to itself. They’re usually defined on just part of the Hilbert space. So, they are linear operators $T \colon D \to H$ where $D \subseteq H$ is a linear subspace called the **domain** of $T.$ To **extend** $T$ means to find a new operator $T' \colon D' \to H$ where $D'$ is a bigger domain, such that $T'$ equals $T$ on the original domain:

$D \subseteq D'$ and $T' \psi = T \psi$ for $\psi \in D.$

Anyway…

Does “extension” mean extending its domain to smooth functions on an interval larger than the unit interval, while still requiring that the functions vanish at 0 and 1?

That’s almost never how it works. A more typical example would be this. We start with

$H \psi = -\frac{d^2 \psi}{dx^2}$

defined on the domain $D$ consisting of all smooth functions on the unit interval that vanish at the endpoints. This operator is symmetric:

$\langle \phi, H \psi \rangle = \langle H \phi, \psi \rangle$

for all $\phi, \psi \in D,$

but it’s not self-adjoint. So, we might try to extend this operator to get something self-adjoint. For example, we could take an operator

$H' \psi = -\frac{d^2 \psi}{dx^2}$

which looks just like $H$ except it has a bigger domain $D'$: all functions $\psi \in L^2[0,1]$ with second derivative in $L^2[0,1]$ obeying ‘periodic boundary conditions’:

$\psi(0) = \psi(1), \qquad \psi'(0) = \psi'(1).$
This operator is self-adjoint! Since I haven’t defined ‘self-adjoint’ that’s not so easy to check, but it’s easy to check that it’s still symmetric:

$\langle \phi, H' \psi \rangle = \langle H' \phi, \psi \rangle$

for all $\phi, \psi \in D'.$
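By the way, symmetry is something you can test numerically. Here’s a quick check (my own, with arbitrarily chosen test functions) for the original operator $H$ on functions vanishing at the endpoints, computing the inner products with Simpson’s rule:

```python
import math

# Check that <phi, H psi> = <H phi, psi> for H psi = -psi'',
# using phi(x) = sin(pi x) and psi(x) = x(1-x), both of which
# vanish at the endpoints of [0,1].

def simpson(f, a, b, n=2000):        # composite Simpson's rule, n even
    h = (b - a) / n
    s = f(a) + f(b)
    s += 4 * sum(f(a + (2 * i - 1) * h) for i in range(1, n // 2 + 1))
    s += 2 * sum(f(a + 2 * i * h) for i in range(1, n // 2))
    return s * h / 3

phi  = lambda x: math.sin(math.pi * x)
Hphi = lambda x: math.pi**2 * math.sin(math.pi * x)   # -phi''
psi  = lambda x: x * (1 - x)
Hpsi = lambda x: 2.0                                  # -psi''

lhs = simpson(lambda x: phi(x) * Hpsi(x), 0.0, 1.0)
rhs = simpson(lambda x: Hphi(x) * psi(x), 0.0, 1.0)
print(lhs, rhs)    # both ~ 4/pi
```

Both integrals come out equal (to $4/\pi$, as an exact computation confirms), just as integration by parts predicts when the boundary terms vanish.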

As you can see, this stuff is a bit technical. But it’s important, because the ‘same-looking’ operator can actually be different self-adjoint operators, describing different physics, depending on the domain! For example, if I changed only this equation in what I said above:

$\psi(0) = -\psi(1)$

we’d have a new domain $D''$ and a new self-adjoint operator $H''$ with this new domain… and $H''$ has quite different properties than $H'.$ It describes particles that pick up a phase of −1 when they hit one end of the unit interval and pop out the other end!

Both $H'$ and $H''$ are self-adjoint extensions of $H.$
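To see concretely that these really are different operators, one can again discretize (my own sketch, not from the discussion): imposing periodic versus ‘phase −1’ boundary conditions on the same difference matrix produces visibly different spectra.

```python
import numpy as np

# Finite-difference version of -d^2/dx^2 on a grid of N points,
# with the wrap-around entries carrying a phase of +1 (periodic)
# or -1 ('phase -1', i.e. antiperiodic) boundary conditions.

N = 400
h = 1.0 / N

def minus_second_derivative(sign):
    H = (np.diag(2.0 * np.ones(N)) - np.diag(np.ones(N - 1), 1)
         - np.diag(np.ones(N - 1), -1)) / h**2
    H[0, -1] = -sign / h**2      # psi_{-1} = sign * psi_{N-1}
    H[-1, 0] = -sign / h**2
    return H

periodic = np.linalg.eigvalsh(minus_second_derivative(+1))
antiperiodic = np.linalg.eigvalsh(minus_second_derivative(-1))
print(periodic[:2])        # lowest eigenvalue ~ 0 (constant function)
print(antiperiodic[:2])    # lowest eigenvalue ~ pi^2, doubly degenerate
```

The periodic operator has a zero-energy constant state, while with the phase −1 conditions the lowest energy is about $\pi^2$: same differential expression, different physics.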

I was trying to inch my way towards QFT, and an acquaintance recommended the book “PCT, Spin and Statistics, and All That” by Streater and Wightman. I started reading and realized I wasn’t quite ready for it.

Yikes, that’s a hard way to start learning QFT—your acquaintance must have been a sadist. This book assumes you know quantum field theory and are eager to make it mathematically rigorous. At the very least, all the stuff I explained just now about operators should be familiar to you before you try to climb that mountain. But I see you backed down.

That led me into measure theory. Then I found Terry Tao’s book “An Introduction to Measure Theory”.

Okay, good! If you’re still interested in analysis after that, I’d suggest Reed and Simon’s *Methods of Modern Mathematical Physics I: Functional Analysis*. This leads you rapidly but (I think) clearly through analysis from topology and measure theory, through distributions, up to self-adjoint operators on Hilbert space, all the while paying a lot of attention to their applications in physics. I really loved it when I was learning this stuff.