Quantum Superposition

guest post by Piotr Migdał

In this blog post I will introduce some basics of quantum mechanics, with an emphasis on why a particle being in a few places at once behaves measurably differently from a particle whose position we just don’t know. It’s a kind of continuation of the “Quantum Network Theory” series (Part 1, Part 2) by Tomi Johnson about our work in Jake Biamonte’s group at the ISI Foundation in Turin. My goal is to explain quantum community detection. Before that, I need to introduce the relevant basics of quantum mechanics, and of classical community detection.

But before I start, let me introduce myself, as it’s my first post to Azimuth.

I just finished my quantum optics theory Ph.D. in Maciej Lewenstein’s group at The Institute of Photonic Sciences in Castelldefels, a beach town near Barcelona. My scientific interests range from quantum physics, through complex networks, to data-driven approaches to pretty much anything—and now I work as a data science freelancer. I enjoy doing data visualizations (for example of relations between topics in mathematics), I am a big fan of Rényi entropy (largely thanks to Azimuth), and I’m a believer in open science. If you think that there are too many off-topic side projects here, you are absolutely right!

In my opinion, quantum mechanics is easy. Based on my gifted-education experience, it takes roughly 9 intense hours to introduce entanglement to students with only a very basic linear algebra background. What is more, I believe that it is possible to get familiar with quantum mechanics just by playing with it—so I am developing a Quantum Game!

Quantum weirdness

In quantum mechanics a particle can be in a few places at once. It sounds strange. So strange, that some pioneers of quantum mechanics (including, famously, Albert Einstein) didn’t want to believe in it: not because of any disagreement with experiment, not because of any lack of mathematical beauty, just because it didn’t fit their philosophical view of physics.

It went further: in the Soviet Union the idea that an electron can be in many places at once (resonance bonds) was considered to oppose materialism. Later, in California, hippies investigated quantum mechanics as a basis for parapsychology—which, arguably, gave birth to the field of quantum information.

As Griffiths put it in his Introduction to Quantum Mechanics (Chapter 4.4.1):

To the layman, the philosopher, or the classical physicist, a statement of the form “this particle doesn’t have a well-defined position” [...] sounds vague, incompetent, or (worst of all) profound. It is none of these.

In this guest blog post I will try to show that not only can a particle be in many places at once, but also that if it were not in many places at once then it would cause problems. That is, phenomena as fundamental as atoms forming chemical bonds, or a particle moving in a vacuum, require it.

As in many other cases, the simplest non-trivial example is perfect for explaining the idea, as it covers the most important phenomena while being easy to analyze, visualize and comprehend. Quantum mechanics is no exception—let us start with a system of two states.

A two state system

Let us study a simplified model of the hydrogen molecular ion \mathrm{H}_2^+, that is, a system of two protons and one electron (see Feynman Lectures on Physics, Vol. III, Chapter 10.1). Since the protons are heavy and slow, we treat them as fixed. We focus on the electron moving in the electric field created by the protons.

In quantum mechanics we describe the state of a system using a complex vector. In simple terms, this is a list of complex numbers called ‘probability amplitudes’. For an electron that can be near one proton or another, we use a list of two numbers:

|\psi\rangle =      \begin{bmatrix}          \alpha \\ \beta      \end{bmatrix}

In this state the electron is near the first proton with probability |\alpha|^2, and near the second one with probability |\beta|^2.

Note that

|\psi \rangle = \alpha \begin{bmatrix}          1 \\ 0      \end{bmatrix} + \beta \begin{bmatrix}          0 \\ 1      \end{bmatrix}

So, we say the electron is in a ‘linear combination’ or ‘superposition’ of the two states

|1\rangle =      \begin{bmatrix}          1 \\ 0      \end{bmatrix}

(where it’s near the first proton) and the state

|2\rangle =      \begin{bmatrix}          0 \\ 1      \end{bmatrix}

(where it’s near the second proton).
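To make this concrete, here is a minimal numpy sketch of such a two-state vector. The amplitudes below are hypothetical (not from the post itself), chosen so that each position has probability 1/2:

```python
import numpy as np

# Hypothetical amplitudes alpha and beta, chosen so that
# |alpha|^2 = |beta|^2 = 1/2:
alpha, beta = (1 + 1j) / 2, 1j / np.sqrt(2)
psi = np.array([alpha, beta])

# Probabilities are the squared moduli of the amplitudes:
probs = np.abs(psi) ** 2
print(probs)        # ≈ [0.5 0.5]

# |psi> really is the superposition alpha |1> + beta |2>:
ket1 = np.array([1, 0])
ket2 = np.array([0, 1])
assert np.allclose(psi, alpha * ket1 + beta * ket2)
```

Note that the two amplitudes here are different complex numbers, yet they give the same probabilities—exactly the point made below about ‘square roots’.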

Why do we denote unit vectors in strange brackets looking like

| \mathrm{something} \rangle ?

Well, this is called Dirac notation (or bra-ket notation) and it is immensely useful in quantum mechanics. We won’t go into it in detail here; merely note that | \cdot \rangle stands for a column vector and \langle \cdot | stands for a row vector, while \psi is a traditional symbol for a quantum state.

Amplitudes can be thought of as ‘square roots’ of probabilities. We can force an electron to localize by performing a classical measurement, for example by moving the protons apart and measuring which of them is electrically neutral (that being the one coupled to the electron). Then, we get probability |\alpha|^2 of finding it near the first proton and |\beta|^2 of finding it near the second. So, we require that

|\alpha|^2 + |\beta|^2 = 1

Note that as amplitudes are complex, for a given probability there are many possible amplitudes. For example

1 = |1|^2 = |-1|^2 = |i|^2 = \left| \tfrac{1+i}{\sqrt{2}} \right|^2 = \cdots

where i is the imaginary unit, with i^2 = -1.

We will now show that the electron ‘wants’ to be spread out. Electrons don’t really have desires, so this is physics slang for saying that the electron will have less energy if its probability of being near the first proton is equal to its probability of being near the second proton: namely, 50%.

In quantum mechanics, a Hamiltonian is a matrix that describes the relation between the energy and evolution (i.e. how the state changes in time). The expected value of the energy of any state | \psi \rangle is

E = \langle \psi | H | \psi \rangle

Here the row vector \langle \psi | is the column vector | \psi\rangle after transposition and complex conjugation (i.e. changing i to -i), and

\langle \psi | H | \psi \rangle

means we are doing matrix multiplication on \langle \psi |, H and | \psi \rangle to get a number.

For the electron in the \mathrm{H}_2^+ molecule the Hamiltonian can be written as the following 2 \times 2 matrix with real, positive entries:

H =      \begin{bmatrix}          E_0 & \Delta \\          \Delta & E_0      \end{bmatrix},

where E_0 is the energy of the electron being either in state |1\rangle or state |2\rangle, and \Delta is the ‘tunneling amplitude’, which describes how easy it is for the electron to move from neighborhood of one proton to that of the other.

The expected value—physicists call it the ‘expectation value’—of the energy of a given state |\psi\rangle is:

E = \langle \psi | H | \psi \rangle \equiv      \begin{bmatrix}          \alpha^* & \beta^*      \end{bmatrix}      \begin{bmatrix}          E_0 & \Delta \\          \Delta & E_0      \end{bmatrix}      \begin{bmatrix}          \alpha \\ \beta      \end{bmatrix}.

The star symbol denotes complex conjugation. If you are unfamiliar with complex numbers, just work with real numbers, on which this operation does nothing.
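As a numerical sketch of this matrix product, here is how a state turns into an energy (the values of E_0 and \Delta below are made up for illustration; the post does not fix them):

```python
import numpy as np

# Hypothetical values of E0 and Delta, just for illustration:
E0, Delta = -1.0, 0.3
H = np.array([[E0, Delta],
              [Delta, E0]])

def energy(psi):
    """Expectation value <psi| H |psi>; real, since H is Hermitian."""
    return np.real(np.conj(psi) @ H @ psi)

# A normalized state with alpha = 0.8, beta = 0.6 (0.8^2 + 0.6^2 = 1):
psi = np.array([0.8, 0.6])
print(energy(psi))   # E0 + 2*0.8*0.6*Delta ≈ -0.712
```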

Exercise 1. Find \alpha and \beta with

|\alpha|^2 + |\beta|^2 = 1

that minimize or maximize the expectation value of energy \langle \psi | H | \psi \rangle for

|\psi\rangle =      \begin{bmatrix}          \alpha \\ \beta      \end{bmatrix}

Exercise 2. What’s the expectation value of the energy for the states | 1 \rangle and | 2 \rangle?

Or if you are lazy, just read the answer! It is straightforward to check that

E = (\alpha^* \alpha + \beta^* \beta) E_0 + (\alpha^* \beta + \beta^* \alpha) \Delta

The coefficient of E_0, namely \alpha^* \alpha + \beta^* \beta, is 1, while the coefficient of \Delta, namely \alpha^* \beta + \beta^* \alpha, ranges from -1 to 1. So the minimal energy is E_0 - \Delta and the maximal energy is E_0 + \Delta. The states achieving these energies are spread out:

| \psi_- \rangle =      \begin{bmatrix}          1/\sqrt{2} \\ -1/\sqrt{2}      \end{bmatrix},      \quad \text{with} \quad      E = E_0 - \Delta

and

| \psi_+ \rangle =      \begin{bmatrix}          1/\sqrt{2} \\ 1/\sqrt{2}      \end{bmatrix},      \quad \text{with} \quad      E = E_0 + \Delta

The energies of these states are below and above the energy E_0, and \Delta says how much.
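These claims are easy to verify numerically: |\psi_-\rangle and |\psi_+\rangle are precisely the eigenvectors of H, with eigenvalues E_0 \mp \Delta. A sketch, again with hypothetical values of E_0 and \Delta:

```python
import numpy as np

E0, Delta = -1.0, 0.3   # hypothetical values, for illustration only
H = np.array([[E0, Delta],
              [Delta, E0]])

# eigh returns eigenvalues in ascending order, with orthonormal
# eigenvectors in the columns of the second output:
energies, states = np.linalg.eigh(H)
print(energies)       # [E0 - Delta, E0 + Delta], here ≈ [-1.3 -0.7]
print(states[:, 0])   # proportional to [1, -1]/sqrt(2), i.e. |psi_->
print(states[:, 1])   # proportional to [1, 1]/sqrt(2), i.e. |psi_+>
```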

So, the electron is ‘happier’ (electrons don’t have moods either) in the state |\psi_-\rangle than localized near only one of the protons. In other words—and this is Chemistry 101—atoms like to share electrons, and that sharing bonds them. Also, they like to share electrons in a particular, symmetric way.

For reference, |\psi_+ \rangle is called the ‘antibonding state’. If the electron is in this state, the atoms repel each other—and so much for the molecule!

How to classically add quantum things

How can we tell the difference between an electron being in a superposition of two states and our just not knowing its ‘real’ position? Well, first we need to devise a way to describe probabilistic mixtures.

It looks simple—if we have an electron in the state |1\rangle or |2\rangle with probability 1/2 each, we may be tempted to write

|\psi\rangle = \tfrac{1}{\sqrt{2}} |1\rangle + \tfrac{1}{\sqrt{2}} |2\rangle

We’re getting the right probabilities, so it looks legit. But there is something strange about the energy: we have obtained the state |\psi_+\rangle, with energy E_0+\Delta, by mixing two states that each have energy E_0!

Moreover, we could have used different amplitudes with |\alpha|^2=|\beta|^2=1/2 and gotten different energies. So, we need a way to avoid guessing amplitudes. All in all, we used quotation marks around ‘square roots’ for a reason!

It turns out that to describe statistical mixtures we can use density matrices.

The states we’ve been looking at so far are described by vectors like this:

| \psi \rangle =      \begin{bmatrix}          \alpha \\ \beta      \end{bmatrix}

These are called ‘pure states’. For a pure state, here is how we create a density matrix:

\rho = | \psi \rangle \langle \psi |      \equiv      \begin{bmatrix}          \alpha \alpha^* & \alpha \beta^*\\          \beta \alpha^* & \beta \beta^*       \end{bmatrix}

On the diagonal we get probabilities (|\alpha|^2 and |\beta|^2), whereas the off-diagonal terms (\alpha \beta^* and its complex conjugate) are related to the presence of quantum effects. For example, for |\psi_-\rangle we get

\rho =      \begin{bmatrix}          1/2 & -1/2\\          -1/2 & 1/2       \end{bmatrix}

For an electron in the state |1\rangle we get

\rho =      \begin{bmatrix}          1 & 0\\          0 & 0       \end{bmatrix}.

To calculate the energy, the recipe is the following:

E = \mathrm{tr}[H \rho]

where \mathrm{tr} is the ‘trace’: the sum of the diagonal entries. For an n \times n square matrix with entries A_{ij} the trace is

\mathrm{tr}(A) = A_{11} + A_{22} + \ldots + A_{nn}

Exercise 3. Show that this formula for energy, and the previous one, give the same result on pure states.
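If you would like a numeric sanity check for this exercise, here is a sketch (with made-up values of E_0 and \Delta) comparing \mathrm{tr}[H \rho] with \langle \psi | H | \psi \rangle on a pure state:

```python
import numpy as np

E0, Delta = -1.0, 0.3   # hypothetical values
H = np.array([[E0, Delta],
              [Delta, E0]])

psi = np.array([1, -1]) / np.sqrt(2)   # the state |psi_->
rho = np.outer(psi, psi.conj())        # rho = |psi><psi|
print(rho)                             # [[ 0.5 -0.5] [-0.5  0.5]]

# Both energy formulas give E0 - Delta:
assert np.isclose(np.trace(H @ rho), psi.conj() @ H @ psi)
print(np.trace(H @ rho))               # ≈ -1.3
```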

I advertised that density matrices allow us to mix quantum states. How do they do that? Very simple: just by adding density matrices, multiplied by the respective probabilities:

\rho = p_1 \rho_1 + p_2 \rho_2 + \cdots + p_n \rho_n

This is exactly how we would mix probability vectors. Indeed, the diagonals are probability vectors!

So, let’s say that our co-worker was drunk and we are not sure whether they said that the state is |\psi_-\rangle or |1\rangle. However, we think that the probabilities are 1/3 and 2/3, respectively. We get the density matrix:

\rho =      \begin{bmatrix}          5/6 & -1/6\\          -1/6 & 1/6       \end{bmatrix}
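A quick numpy check of this arithmetic, mixing the two density matrices with weights 1/3 and 2/3:

```python
import numpy as np

psi_minus = np.array([1, -1]) / np.sqrt(2)
rho_minus = np.outer(psi_minus, psi_minus)   # pure state |psi_-><psi_-|
ket1 = np.array([1.0, 0.0])
rho_1 = np.outer(ket1, ket1)                 # electron surely near proton 1

# The drunk co-worker's mixture: |psi_-> with p = 1/3, |1> with p = 2/3.
rho = (1 / 3) * rho_minus + (2 / 3) * rho_1
print(rho)   # [[ 5/6 -1/6] [-1/6  1/6]]
```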

So, how about its energy?

Exercise 4. Show that calculating energy using density matrix gives the same result as averaging energy over component pure states.

I may have given the impression that the density matrix is an artificial thing, at best a practical trick, and that what we ‘really’ have are pure states (vectors), each with a given probability. If so, the next exercise is for you:

Exercise 5. Show that a 50%-50% mixture of |1\rangle and |2\rangle is the same as a 50%-50% mixture of |\psi_+\rangle and |\psi_-\rangle.

This is different from statistical mechanics, or statistics, where we can always think of probability distributions as uniquely defined statistical mixtures of possible states. Here, as we see, it can be a bit more tricky.
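Exercise 5 can also be checked in a few lines of numpy; both 50%-50% mixtures collapse to the same, maximally mixed density matrix:

```python
import numpy as np

def pure(psi):
    """Density matrix |psi><psi| of a pure state."""
    psi = np.asarray(psi, dtype=complex)
    return np.outer(psi, psi.conj())

s = 1 / np.sqrt(2)
mix_sites = 0.5 * pure([1, 0]) + 0.5 * pure([0, 1])   # mix |1> and |2>
mix_pm = 0.5 * pure([s, s]) + 0.5 * pure([s, -s])     # mix |psi_+> and |psi_->

# The off-diagonal terms of |psi_+> and |psi_-> cancel on mixing:
assert np.allclose(mix_sites, mix_pm)
print(mix_sites.real)   # the identity matrix divided by 2
```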

As we said, on the diagonal things work as for classical probabilities. But there is more: when we add probabilities we also add the off-diagonal terms, and these can cancel, depending on their signs. That is why mixing quantum states may make them lose their quantum properties.

The value of the off-diagonal term is related to the so-called ‘coherence’ between the states |1\rangle and |2\rangle. Its absolute value is bounded by the respective probabilities:

\left| \rho_{12} \right| \leq \sqrt{\rho_{11}\rho_{22}} = \sqrt{p_1 p_2}

where for pure states we get equality.

If the value is zero, there are no quantum effects between two positions: this means that the electron is sure to be at one place or the other, though we might be uncertain at which place. This is fundamentally different from a superposition (non-zero \rho_{12}), where we are uncertain at which site a particle is, but it can no longer be thought to be at one site or the other: it must be in some way associated with both simultaneously.

Exercise 6. For each c \in [-1,1] propose how to obtain a mixed state described by the density matrix

\rho =       \begin{bmatrix}          1/2 & c/2\\          c/2 & 1/2       \end{bmatrix}

by mixing pure states of your choice.

A spatial wavefunction

A similar thing works for position. Instead of a two-level system, let’s take a particle in one dimension. The analogue of a state vector is a wavefunction, a complex-valued function on a line:

\psi(x)

In this continuous variant, p(x) = |\psi(x)|^2 is the probability density of finding the particle in a given place.

We construct the density matrix (or rather, ‘density operator’) in a way that is analogous to what we did for the two-level system:

\rho(x, x') = \psi(x) \psi^*(x')

Instead of a 2×2 matrix, it is a complex function of two real variables. The probability density is given by its diagonal values, i.e.

p(x) = \rho(x,x)
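On a discretized line this is easy to play with. A sketch with a hypothetical Gaussian wavefunction:

```python
import numpy as np

# Discretize the line and pick a (hypothetical) Gaussian wavefunction:
x = np.linspace(-5, 5, 201)
dx = x[1] - x[0]
psi = np.exp(-x**2 / 2).astype(complex)
psi /= np.sqrt(np.sum(np.abs(psi)**2) * dx)   # normalize: sum |psi|^2 dx = 1

# The density operator rho(x, x') = psi(x) psi*(x') becomes a big matrix:
rho = np.outer(psi, psi.conj())

# Its diagonal is the probability density p(x) = |psi(x)|^2:
assert np.allclose(np.diag(rho).real, np.abs(psi)**2)
print(np.sum(np.diag(rho).real) * dx)   # ≈ 1.0, total probability
```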

Again, we may wonder if the particle energetically favors being in many places at once. Well, it does.

PICTURE: Density matrices for a classical and a quantum state. They yield the same probability distributions (for positions). However, their off-diagonal values (i.e. x \neq x') are different. The classical state is just a probabilistic mixture of a particle being in a particular place.

What would happen if we had a mixture of perfectly localized particles? Due to Heisenberg’s uncertainty principle we have

\Delta x \Delta p \geq \frac{\hbar}{2},

that is, that the product of standard deviations of position and momentum is at least some value.

If we know the position exactly, then the uncertainty of the momentum becomes infinite. (The same holds if we don’t know the position, but it could be known, even in principle. Quantum mechanics couldn’t care less whether the particle’s position is known by us, by our friend, by our detector or by a dust particle.)

The Hamiltonian represents energy; the energy of a free particle in a continuous system is

H=p^2/(2m)

where m is its mass, and p is its momentum: that is, mass times velocity. So, if the particle is completely localized:

• its energy is infinite,
• its velocity is infinite, so in no time its wavefunction will spread everywhere.

Infinite energies sometimes happen in physics. But if we get infinite velocities we see that something is wrong. So a particle needs to be spread out, or ‘delocalized’, to some degree, to have finite energy.

As a side note, to consider high energies we would need to employ special relativity. In fact, one cannot localize a massive particle too much: once the energy related to its momentum uncertainty is comparable to the energy related to its mass, a soup of particles and antiparticles gets created; see the Darwin term in the fine structure.

Moreover, depending on the degree of its delocalization, its behavior is different. For example, a statistical mixture of highly localized particles would spread a lot faster than a single wavefunction with the same p(x). The density matrix of the former would be in between that of the pure state (a ‘circular’ Gaussian function) and that of the classical state (a ‘linear’ Gaussian). That is, it would be an ‘oval’ Gaussian, with off-diagonal values smaller than for the pure state.

Let us look at two Gaussian wavefunctions, with a varying level of coherent superposition between them. That is, each Gaussian is already a superposition, but when we combine the two we allow ourselves a superposition, a mixture, or something in between. For a perfect superposition of the Gaussians, we would have the density matrix

\rho(x,x') = \frac{1}{2} \left( \phi(x+\tfrac{d}{2}) + \phi(x-\tfrac{d}{2}) \right) \left( \phi(x'+\tfrac{d}{2}) + \phi(x'-\tfrac{d}{2}) \right)

where \phi(x) is a normalized Gaussian function. For a statistical mixture between these Gaussians split by a distance of d, we would have:

\rho(x,x') = \frac{1}{2} \phi(x+\tfrac{d}{2}) \phi(x'+\tfrac{d}{2})  +  \frac{1}{2} \phi(x-\tfrac{d}{2}) \phi(x'-\tfrac{d}{2})

And in general,

\begin{array}{ccl}  \rho(x,x') &=& \frac{1}{2} \left( \phi(x+\tfrac{d}{2}) \phi(x'+\tfrac{d}{2})  +  \phi(x-\tfrac{d}{2}) \phi(x'-\tfrac{d}{2})\right) + \\ \\  && \frac{c}{2} \left( \phi(x+\tfrac{d}{2}) \phi(x'-\tfrac{d}{2})  + \phi(x-\tfrac{d}{2}) \phi(x'+\tfrac{d}{2}) \right)  \end{array}

for some |c| \leq 1.
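The whole family above can be tabulated on a grid. In this sketch (unit-width Gaussians and a hypothetical separation d = 4, both my own choices) the diagonal part changes only slightly with c (the small overlap between well-separated Gaussians), while the off-diagonal ‘coherence’ lobes scale with c:

```python
import numpy as np

x = np.linspace(-8, 8, 321)
dx = x[1] - x[0]
d = 4.0                                  # hypothetical separation

def phi(y):
    """Normalized Gaussian wavefunction on the grid (unit width)."""
    g = np.exp(-y**2 / 2)
    return g / np.sqrt(np.sum(g**2) * dx)

plus, minus = phi(x + d / 2), phi(x - d / 2)

def rho(c):
    """Density matrix: mixture for c = 0, coherent superposition for c = ±1."""
    return 0.5 * (np.outer(plus, plus) + np.outer(minus, minus)) \
         + 0.5 * c * (np.outer(plus, minus) + np.outer(minus, plus))

i = np.argmin(np.abs(x + d / 2))   # grid index of x = -d/2
j = np.argmin(np.abs(x - d / 2))   # grid index of x = +d/2
print(rho(0.0)[i, j])   # ~ 0: no coherence between the two humps
print(rho(1.0)[i, j])   # large: full coherence between the humps
```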

PICTURE: Two Gaussian wavefunctions (centered at -2 and +2) in a coherent superposition with each other (the first and the last plot) and in a statistical mixture (the middle plot); the 2nd and 4th plots show intermediate states. The superposition can come with different phases, much like in the hydrogen example. Color represents absolute value, and hue represents phase; here red is for positive numbers and teal is for negative.


Conclusion

We have learnt the difference between a quantum superposition and a statistical mixture of states. In particular, while both of these descriptions may give the same probabilities, their predictions about the physical properties of states differ. For example, we need an electron to be delocalized in a specific way to describe chemical bonds; and we need delocalization of any particle to predict its movement.

We used density matrices to put both quantum superposition and (classical) lack of knowledge on the same footing. We identified the off-diagonal terms as the ones related to quantum coherence.

But what if there were not only two states, but many? So, instead of \mathrm{H}_2^+ (we were not even considering the full hydrogen atom, but only its ionized version), how about an electronic excitation of something bigger? Not even \mathrm{C}_2\mathrm{H}_5\mathrm{OH} or some sugar, but a protein complex!

So, this will be your homework (cf. this homework on topology). Just joking, there will be another blog post.

26 Responses to Quantum Superposition

  1. If anything is unclear, feel invited to ask! (I am aware that it’s a crash course, so don’t feel shy!)

  2. arch1 says:

    Attempting Ex. 1 pretty mechanically I seem to get an energy expectation value of
    E_0 + 2 \Delta * Re(\alpha) * Re(\beta)
    which given the constraint is(?) maximized when
    \alpha = \beta = 1/\sqrt2 + 0i
    and minimized when
    \alpha = -\beta = 1/\sqrt2 + 0i
    and based on what you said these are each states in which the electron has a 50% probability of being near each proton (I guess maybe when I read further I’ll learn the significance of the sign difference).

    • Your result is right; but I am not sure what you find problematic here – as

      |1/\sqrt{2}|^2=|-1/\sqrt{2}|^2=1/2

      so in both states the probabilities of finding the electron at a given spot are 50%-50%.

      • arch1 says:

        Thanks for the feedback Piotr. Nothing problematic yet (I’ll try exuding more confidence next time so as not to miscommunicate:-)

        • My omission: there is one thing you should correct – it is not \mathrm{Re}(\alpha) \mathrm{Re}(\beta) (hint: calculate it for \alpha=\beta=i).

        • John Baez says:

          By the way: to get LaTeX to work in your comments, follow the instructions that appear above the box in which you type your comment.

          In brief: don’t type

          $x^2$

          type

          $latex x^2$

          with a space after the “latex”.

        • arch1 says:

          Thanks for the correction Piotr. For the expectation energy I now get E_0 + 2\Delta(\alpha_x \beta_x + \alpha_y \beta_y) which under the given constraint I think is maximized (minimized) whenever |\alpha| = 1/\sqrt 2 and \beta and \alpha are equal (opposite).

  3. domenico says:

    I don’t understand, but it is interesting.
    It seems simpler to analyze (with spectroscopy) a complex ionized molecule (like C2H5OH) rather than the neutral molecule.
    It is as if there were an electron that is shared in the ionized molecules, with tunneling, and this gives a spectral emission (or absorption) for each tunneling.
    It is similar to matrix mechanics (which is diagonal in the energy, and knowledge of the elements permits describing the system), but here there are non-diagonal elements; is a complete knowledge of the molecular structure possible, given the complete emission (or absorption) spectrum? Does it simplify the search for the solution of the structure?

    • In general, the absorption spectrum alone is not enough to recover the molecular structure (at least from the practical perspective; from the theoretical one, I think it isn’t either). If you abstract the molecule to a matrix with a finite number of transitions, then its eigenvalues (called a spectrum for a reason) are its energies. However, there are many matrices giving the same spectrum.


    • domenico says:

      I am thinking (I don’t know if it is true) that single-photon spectroscopy could be possible.
      If there is quantum tunneling, then there is a creation of a bound state (photon-electron) for a time given by Heisenberg’s uncertainty principle, so that there is a time delay in the photon trajectory; for example, one could measure the phase difference between two photons (one that interacts with the charged molecule, and the other in free space), and the time delay could give information on the difference in molecular energy levels, like usual spectroscopy does (using the time delay).

  5. nad says:

    Based on my gifted education experience it takes roughly 9 intense hours to introduce entanglement to students having only a very basic linear algebra background.

    I was reading today an interesting interview with Ha Vinh Tho in the german newspaper “Berliner Zeitung.” As I understood, Ha Vinh Tho is the current manager of the Gross National Happiness Centre where he seems to tend to think as I understood that emotional intelligence is mostly a skill:

    “Soziale Kompetenz oder emotionale Intelligenz zum Beispiel sind Fähigkeiten, die man sich aneignen kann, die geschult werden können.”

    Translation without guarantee:

    “Social Competence or emotional intelligence, for example, are skills which can be learned and taught.”

    I was wondering how much of those skills can be learned, or how much may eventually be inherited, analogous to the heritability of IQ, which has though also a strong component of socioeconomic status, which can of course get rather visible for example in lead exposure of child workers (whose existence may of course cause quite some social distress in itself).

    So I was wondering about relations between IQ to (aspects of) emotional intelligence.
    It seems there is a bigger correlation between empathy and giftedness, shown for example in the study:
    “Prosocial reasoning and empathy in gifted children”

    Logistic regression analysis confirmed that ability
    predicted the highest level of prosocial reasoning, and the means indicated that gifted students tended to use a higher level of prosocial reasoning than their age peers.

    Links between giftedness and emotional intelligence seem to be though not uncontroversial, as the study “Assessing emotional intelligence in gifted and non-gifted high school students: Outcomes depend on the measure” suggests:

    Findings suggest that individual differences are measure dependent, with the profile of scores variable across Emotional Intelligence assessment procedures.

    So it would be interesting to hear if you have made any experiences which could indicate towards a correlation between giftedness and aspects of emotional intelligence.

    • My observation is that many (though certainly not all) students in gifted education are nerdy. Some of them directly exhibit mild autistic traits.

      So, in short, I would expect interpersonal emotional intelligence to be lower than average – reading others’ emotions and understanding implicit communication is one of the main difficulties for people on the autism spectrum. (With a caveat that maybe I have a non-statistical sample, or maybe it’s higher variance rather than a lower mean, etc.)

      At the same time, for things related to analyzing one’s own emotions, and controlling them, I think it might be higher than average. Certainly, they are more mature in terms of behavior (with the practical consequence of not that many incidents).

      When it comes to giftedness and the autism spectrum (or Asperger syndrome, defined as autism without cognitive delay), I recommend:

      Simon Baron-Cohen et al., The Autism-Spectrum Quotient (AQ): Evidence from Asperger Syndrome/High-Functioning Autism, Males and Females, Scientists and Mathematicians, Journal of Autism and Developmental Disorders, 31, 5-17 (2001).

      [Autism-Spectrum Quotient] The students in Cambridge University did not differ from the randomly selected control group, but scientists (including mathematicians) scored significantly higher than both humanities and social sciences students, confirming an earlier study that autistic conditions are associated with scientific skills. Within the sciences, mathematicians scored highest. This was replicated in Group 4, the Mathematics Olympiad winners scoring significantly higher than the male Cambridge humanities students.

      See: my links on Asperger syndrome and research papers on the relation of Asperger syndrome to intellectual traits. I even wrote a paper on it (for a students’ conference, and unfortunately it remains in Polish, but you can look up the references).

  6. nad says:

    (With a caveat that maybe I have a non-statistical sample, or maybe it’s higher variance rather than mean, etc.)

    If you got the impression that:

    So, in short, I would expect interpersonal emotional intelligence to be lower than average

    Interesting. As I understood this was a class for gifted children/young adults with a special interest in math/physics. Let’s assume that your observation would reflect the average emotional intelligence within math/physics groups and in particular that affective empathy and prosocial behaviour would also be lower than average in math/physics crowds. If one now looks at the sole aspect of empathy and prosocial behaviour (as being a trait of emotional intelligence) and if the study in the above mentioned article:
    “Prosocial reasoning and empathy in gifted children”
    reveals an approximate correct assessment of empathy in high IQ children (and if one assumes that this trait is kept until adulthood) then this seems to indicate that quite a share of gifted children with high empathy and prosocial behaviour traits are not going into math/physics.

    • I wouldn’t make such statements. Especially since we may be talking about different aspects of empathy. I don’t see in the linked tests anything about testing theory of mind (i.e. realizing what others are feeling and wanting, and how one’s actions are going to affect them); with respect to altruistic traits, I certainly don’t think that my students have them any lower. Plus, I would take any anecdotal evidence (here: mine) with a grain of salt.

      BTW: For an example of a person being very bad at reading other’s thoughts, but very loyal and good, I recommend reading The Strangest Man: The Hidden Life of Paul Dirac, Quantum Genius by Graham Farmelo (a review; there are also tons of other reasons to read it, especially if you are interested in quantum mechanics or history of physics).

  7. nad says:

    I wouldn’t make such statements. Especially we may be talking about different aspects of empathy.

    I wrote: “Let’s assume that your observation would reflect the average emotional intelligence within math/physics groups and in particular that affective empathy and prosocial behaviour would also be lower than average in math/physics crowds.” because the article “Prosocial reasoning and empathy in gifted children” speaks about prosocial behaviour and affective empathy.
    So I so to say left what seems to be called “cognitive empathy” and “theory of mind” out and as I wrote it is an assumption.

    I would take any anecdotal evidence (here: mine) with a grain of salt.

    Sure a personal impression is certainly not a study. Apart from that
    the article “Prosocial reasoning and empathy in gifted children” mentions that:

    Despite the promising indications that the academic gifted students used higher levels of prosocial reasoning and that they had higher levels of affective empathy, further research needs to be conducted to confirm these results.

    Moreover the article Who Cares? Revisiting Empathy in Asperger Syndrome mentions that (as I understood, in contrast to people with autism) people with Asperger syndrome may have no deficit in affective empathy (but then the study was performed only with 21 persons and 21 controls). Moreover, as the study “Assessing emotional intelligence in gifted and non-gifted high school students: Outcomes depend on the measure” says, the judgements seem to depend rather highly on which aspect one is looking for. Nevertheless it seems (that’s also what I heard from non-mathematicians) that especially math departments may often be perceived as rather “beyond the nerd stage”, even if there may also be rather “normal” types running around, and some of them may even just be pretending. I think this may certainly be something which should be taken into consideration; likewise, assessing what types of “extraordinary behaviour” may be expected is eventually also helpful, especially for the young students.

    Thanks for the links, I found that strange statement of Marcin Kotowski quite funny:

    ….blunting of one’s ambition is the first symptom of senility.

    Anyways another question.
    You wrote:

    —and now I work as a data science freelancer. I enjoy doing data visualizations (for example of relations between topics in mathematics)

    Are you back in Poland? If yes, is it easy to make a living from data science and scientific visualization in Poland? Where in Poland are you (I am living about 60 km from the Polish border)?

    • It goes a bit off-topic. (Feel free to e-mail me – my address can be found easily.) BTW: Click “reply” under the post you want to respond to, rather than starting a separate message – it makes it easier to keep this discussion grouped rather than scattered.

      Data science is a good career. In Poland there are some data science jobs (not many in data visualization, though), but it is possible to work remotely. Most of my contracts are from people in other places (especially California). Though, I still need to learn how to juggle my scientific interests and earning a living (it may be wiser to ask me in a year or so).

  8. Berényi Péter says:

    How do you attach labels (like “first” and “second”) to protons?

    • John Baez says:

      Heh, a good puzzle.

    • Protons, as such, are indistinguishable. But once they are localized in certain positions we can treat them as if they were distinguishable. For example, a proton in my head is indistinguishable from a proton on Mars. Yet, as long as they don’t interfere with each other, it doesn’t matter whether they are distinguishable or not.

      But, for the sake of this post, you can think of it as the electron being to the right of both protons vs to the left of them. That is:

      p+ p+ e-

      vs

      e- p+ p+

      (makes sense even if protons are indistinguishable).

      • John Baez says:

        It’s also amusing to note that the two eigenstates you derived for the electron, \psi_+ and \psi_-, don’t change in any observable way if we exchange the role of the two protons.

  9. Berényi Péter says:

    My first head on collision with quantum weirdness was a paper basket.

    Electrons are famous for having a nearly inexplicable property. They do not return to the same state after a 360° rotation, as ordinary objects do. They need two full rotations, 720°, to accomplish that.

    And here comes the paper basket. You have to attach it by four elastic strings to the four vertices of a tetrahedron: like pegs in the opposite corners of a room, two at the ceiling and two more at the ends of the other diagonal on the floor.

    If you rotate the paper basket by 360°, it may get back to its original position, but the strings become hopelessly entangled, so its state can clearly be distinguished from its former one. However, another full rotation does a miracle: in this case, with some moderate skill, you can disentangle the strings completely, so the state of the entire configuration becomes indistinguishable from its original one.

    Question: is there a deeper connection between electrons and paper baskets held in place by strings, or is it just a curious coincidence?

    • John Baez says:

      It’s not just a curious coincidence: the same underlying mathematics is at work in both cases. Namely, the fact that the rotation group SO(3) is not simply connected, but has a simply-connected double cover. Here’s an introduction:

      Plate trick, Wikipedia.

      I call it the coffee cup trick, since it’s easy to illustrate with a coffee cup.

      This page goes a bit deeper:

      Orientation entanglement, Wikipedia.

  10. […] post follows my browsing of Piotr Migdal’s guest post on John Baez’ blog, here [^]. Migdal’s aim is make QM simple to understand. He somehow begins with Dirac’s […]

