## Quantum Frontiers in Network Science

6 May, 2014

guest post by Jacob Biamonte

There’s going to be a workshop on quantum network theory in Berkeley this June. The event is being organized by some of my collaborators and will be a satellite of the biggest annual network science conference, NetSci.

A theme of the Network Theory series here on Azimuth has been to merge ideas appearing in quantum theory with other disciplines. Remember the first post by John which outlined the goal of a general theory of networks? Well, everyone’s been chipping away at this stuff for a few years now and I think you’ll agree that this workshop seems like an excellent way to push these topics even further, particularly as they apply to complex networks.

The event is being organized by Mauro Faccin, Filippo Radicchi and Zoltán Zimborás. You might recall when Tomi Johnson first explained to us some ideas connecting quantum physics with the concepts of complex networks (see Quantum Network Theory Part 1 and Part 2). Tomi’s going to be speaking at this event. I understand there is still a little space left to contribute talks and to attend. Those interested can sort this out by emailing the organizers or by following the instructions to submit an abstract.

They have named their event Quantum Frontiers in Network Science or QNET for short. Here’s their call.

Quantum Frontiers in Network Science

This year the biggest annual network science conference, NetSci, will take place in Berkeley, California on 2–6 June. We are organizing a one-day satellite workshop on Quantum Frontiers in Network Science (QNET).

A grand challenge in contemporary complex network science is to reconcile the staple “statistical mechanics based approach” with a theory based on quantum physics. When considering networks where quantum coherence effects play a non-trivial role, the predictive power of complex network science has been shown to break down. A new theory, based on quantum physics from first principles, is now being developed. Network theory is a diverse subject which developed independently in several disciplines, relying on graphs with additional structure to model complex systems. Network science has of course played a significant role in quantum theory, for example in topics such as tensor network states, chiral quantum walks on complex networks, categorical tensor networks, and categorical models of quantum circuits, to name only a few. However, the ideas of complex network science are only now starting to be united with modern quantum theory. In this respect, one aim of the workshop is to put in contact two big and generally not very well connected scientific communities: statistical and quantum physicists.

The topic of network science underwent a revolution when it was realized that systems such as social or transport networks could be interrelated through common network properties, but what are the relevant properties to consider when facing quantum systems? This question is particularly timely as there has been a recent push towards studying increasingly larger quantum mechanical systems, where the analysis is only beginning to undergo a shift towards embracing the concepts of complex networks.

For example, theoretical and experimental attention has turned to explaining transport in photosynthetic complexes comprising tens to hundreds of molecules and thousands of atoms using quantum mechanics. Likewise, in condensed matter physics using the language of “chiral quantum walks”, the topological structure of the interconnections comprising complex materials strongly affects their transport properties.

An ultimate goal is a mathematical theory and formal description which pinpoints the similarities and differences between the use of networks throughout the quantum sciences. This would give rise to a theory of networks augmenting the current statistical mechanics approach to complex network structure, evolution, and process with a new theory based on quantum mechanics.

Topics of special interest to the satellite include

• Quantum transport and chiral quantum walks on complex networks
• Detecting community structure in quantum systems
• Tensor algebra and multiplex networks
• Quantum information measures (such as entropy) applied to complex networks
• Quantum critical phenomena in complex networks
• Quantum models of network growth
• Quantum techniques for reaction networks
• Quantum algorithms for problems in complex network science
• Foundations of quantum theory in relation to complex networks and processes thereon
• Quantum inspired mathematics as a foundation for network science

Info

QNET will be held at the NetSci conference venue at the Clark Kerr Campus of the University of California, Berkeley, on the morning of June 2nd (8 am–1 pm).

• Main conference page: NetSci2014
• Call for abstracts and the program

It sounds interesting! You’ll notice that the list of topics seems reminiscent of some of the things we’ve been talking about right here on Azimuth! A general theme of the Network Theory series has been to develop frameworks that describe networked systems in a common language, and then to map tools and results across disciplines. It seems like a great place to talk about these ideas. Oh, and here’s a current list of the speakers:

Leonardo Banchi (UCL, London)
Ginestra Bianconi (London)
Silvano Garnerone (IQC, Waterloo)
Laetitia Gauvin (ISI Foundation)
Marco Javarone (Sassari)
Tomi Johnson (Oxford)

and again, the organizers are

Mauro Faccin (ISI Foundation)
Filippo Radicchi (Indiana University)
Zoltán Zimborás (UCL)

From the call, we can notice that a central discussion topic at QNET will be about contrasting stochastic and quantum mechanics. Here on Azimuth we like this stuff. You might remember that stochastic mechanics was formulated in the network theory series to mathematically resemble quantum theory (see e.g. Part 12). This formalism was then employed to produce several results, including a stochastic version of Noether’s theorem by John and Brendan in Parts 11 and 13—recently Ville has also written Noether’s Theorem: Quantum vs Stochastic. Several other results were produced by relating quantum field theory to Petri nets from population biology and to chemical reaction networks in chemistry (see the Network Theory homepage). It seems to me that people attending QNET will be interested in these sorts of things, as well as other related topics.

One of the features of complex network science is that it is often numerically based and geared directly towards interesting real-world applications. I suspect some interesting results should stem from the discussions that will take place at this workshop.

By the way, here’s a view of downtown San Francisco at dusk, seen from the Berkeley Hills, from the NetSci homepage:

## Network Theory I

2 March, 2014

Here’s a video of a talk I gave last Tuesday—part of a series. You can see the slides here:

One reason I’m glad I gave this talk is because afterwards Jamie Vicary pointed out some very interesting consequences of the relations among signal-flow diagrams listed in my talk. It turns out they imply equations familiar from the theory of complementarity in categorical quantum mechanics!

This is the kind of mathematical surprise that makes life worthwhile for me. It seemed utterly shocking at first, but I think I’ve figured out why it happens. Now is not the time to explain… but I’ll have to do it soon, both here and in the paper on control theory that Jason Erbele and I are writing.

• Brendan Fong, A compositional approach to control theory.

## The Elitzur–Vaidman Bomb-Testing Method

24 August, 2013

Quantum mechanics forces us to refine our attitude to counterfactual conditionals: questions about what would have happened if we had done something, even though we didn’t.

“What would the position of the particle be if I’d measured that… when actually I measured its momentum?” Here you’ll usually get no definite answer.

But sometimes you can use quantum mechanics to find out what would have happened if you’d done something… when classically it seems impossible!

Suppose you have a bunch of bombs. Some have a sensor that will absorb a photon you shine on it, and make the bomb explode! Others have a broken sensor, that won’t interact with the photon at all.

Can you choose some working bombs? You can tell if a bomb works by shining a photon on it. But if it works, it blows up—and then it doesn’t work anymore!

So, it sounds impossible. But with quantum mechanics you can do it. You can find some bombs that would have exploded if you had shone photons at them!

Here’s how:

Put a light source that emits a single photon at A. Have the photon hit the half-silvered mirror at lower left, so it has a 50% chance of going through to the right, and a 50% chance of reflecting and going up. But in quantum mechanics, it sort of does both!

Put a bomb at B. Recombine the photon’s paths using two more mirrors. Have the two paths meet at a second half-silvered mirror at upper right. You can make it so that if the bomb doesn’t work, the photon interferes with itself and definitely goes to C, not D.

But if the bomb works, it absorbs the photon and explodes unless the photon takes the top route… in which case, when it hits the second half-silvered mirror, it has a 50% chance of going to C and a 50% chance of going to D.

So:

• If the bomb doesn’t work, the photon has a 100% chance of going to C.

• If the bomb works, there’s a 50% chance that it absorbs the photon and explodes. There’s also a 50% chance that the bomb does not explode—and then the photon is equally likely to go to either C or D. So, the photon has a 25% chance of reaching C and a 25% chance of reaching D.

So: if you see a photon at D, you know you have a working bomb… but the bomb has not exploded!

For each working bomb there’s:

• a 50% chance that it explodes,
• a 25% chance that it doesn’t explode but you can’t tell if it works,
• a 25% chance that it doesn’t explode but you can tell that it works.
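These probabilities can be checked with a few lines of linear algebra. Here’s a minimal sketch (my own illustration, not from the original papers), using one common convention for a 50/50 beam splitter in which reflection picks up a factor of $i$:

```python
import numpy as np

# Amplitudes are (lower path, upper path). After the second splitter,
# port 0 is detector D and port 1 is detector C (a labeling choice).
BS = np.array([[1, 1j],
               [1j, 1]]) / np.sqrt(2)   # 50/50 beam splitter

photon = np.array([1, 0])  # photon enters on the lower path

# Dud bomb: both paths survive; the photon just passes two beam splitters.
amp = BS @ BS @ photon
p_D_dud, p_C_dud = np.abs(amp) ** 2    # interference sends everything to C

# Working bomb: it measures the lower path after the first splitter.
amp1 = BS @ photon
p_explode = abs(amp1[0]) ** 2          # photon found on lower path: boom
survived = np.array([0, amp1[1]])      # unnormalized state if it doesn't;
amp2 = BS @ survived                   # keeping it unnormalized gives joint probabilities
p_D_live, p_C_live = np.abs(amp2) ** 2

print(round(p_C_dud, 2), round(p_D_dud, 2))    # 1.0 0.0
print(round(p_explode, 2))                     # 0.5
print(round(p_C_live, 2), round(p_D_live, 2))  # 0.25 0.25
```

The output reproduces the breakdown above: a dud sends the photon to C with certainty, while a working bomb explodes half the time and otherwise splits the remaining probability 25%/25% between C and D.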

This is the Elitzur–Vaidman bomb-testing method. It was invented by Avshalom Elitzur and Lev Vaidman in 1993. One year later, physicists actually did an experiment to show this idea works… but alas, not using actual bombs!

In 1996, Kwiat showed that using more clever methods, you can reduce the percentage of wasted working bombs as close to zero as you like. And pushing the idea even further, Graeme Mitchison and Richard Jozsa showed in 1999 that you can get a quantum computer to do a calculation for you without even turning it on!

This sounds amazing, but it’s really no more amazing than the bomb-testing method I’ve already described.

### References

• Avshalom Elitzur and Lev Vaidman, Quantum mechanical interaction-free measurements, Found. Phys. 23 (1993), 987–997.

• Paul G. Kwiat, H. Weinfurter, T. Herzog, A. Zeilinger, and M. Kasevich, Experimental realization of “interaction-free” measurements.

• Paul G. Kwiat, Interaction-free measurements.

• Graeme Mitchison and Richard Jozsa, Counterfactual computation, Proc. Roy. Soc. Lond. A457 (2001), 1175–1194.

The picture is from the Wikipedia article, which also has other references:

Elitzur–Vaidman bomb tester, Wikipedia.

Bas Spitters pointed out this category-theoretic analysis of the issue:

• Robert Furber and Bart Jacobs, Towards a categorical account of conditional probability.

## Centre for Quantum Mathematics and Computation

6 March, 2013

This fall they’re opening a new Centre for Quantum Mathematics and Computation at Oxford University. They’ll be working on diagrammatic methods for topology and quantum theory, quantum gravity, and computation. You’ll understand what this means if you know the work of the people involved:

• Samson Abramsky
• Bob Coecke
• Christopher Douglas
• Kobi Kremnitzer
• Steve Simon
• Ulrike Tillman
• Jamie Vicary

All these people are already at Oxford, so you may wonder what’s new about this center. I’m not completely sure, but they’ve gotten money from EPSRC (roughly speaking, the British NSF), and they’re already hiring a postdoc. Applications are due on March 11, so hurry up if you’re interested!

They’re having a conference October 1st to 4th to start things off. I’ll be speaking there, and they tell me that Steve Awodey, Alexander Beilinson, Lucien Hardy, Martin Hyland, Chris Isham, Dana Scott, and Anton Zeilinger have been invited too.

I’m really looking forward to seeing Chris Isham, since he’s one of the most honest and critical thinkers about quantum gravity and the big difficulties we have in understanding this subject—and he has trouble taking airplane flights, so it’s been a long time since I’ve seen him. It’ll also be great to see all the other people I know, and meet the ones I don’t.

For example, back in the 1990s, I used to spend summers in Cambridge talking about n-categories with Martin Hyland and his students Eugenia Cheng, Tom Leinster and Aaron Lauda (who had been an undergraduate at U.C. Riverside). And more recently I’ve been talking a lot with Jamie Vicary about categories and quantum computation—since he was in Singapore some of the time while I was there. (Indeed, I’m going back there this summer, and so will he.)

I’m not as big on n-categories and quantum gravity as I used to be, but I’m still interested in the foundations of quantum theory and how it’s connected to computation, so I think I can give a talk with some new ideas in it.

## Quantum Computing Position at U.C. Riverside

6 October, 2012

Here at U.C. Riverside, Alexander Korotkov wants to hire a postdoc in quantum measurement and quantum computing with superconducting qubits.

He writes:

The work will be mainly related to quantum feedback of superconducting qubits. The first experiment was published in Nature today. (Some News & Views discussion can be seen here.) The theory is still rather simple and needs improvement.

## Time Crystals

26 September, 2012

When water freezes and forms a crystal, it creates a periodic pattern in space. Could there be something that crystallizes to form a periodic pattern in time? Frank Wilczek, who won the Nobel Prize for helping explain why quarks and gluons trapped inside a proton or neutron act like freely moving particles when you examine them very close up, dreamt up this idea and called it a time crystal:

• Frank Wilczek, Classical time crystals.

• Frank Wilczek, Quantum time crystals.

‘Time crystals’ sound like something from Greg Egan’s Orthogonal trilogy, set in a universe where there’s no fundamental distinction between time and space. But Wilczek wanted to realize these in our universe.

Of course, it’s easy to make a system that behaves in an approximately periodic way while it slowly runs down: that’s how a clock works: tick tock, tick tock, tick tock… But a system that keeps ‘ticking away’ without using up any resource or running down would be a strange new thing. There’s no telling what weird stuff we might do with it.

It’s also interesting because physicists love symmetry. In ordinary physics there are two very important symmetries: spatial translation symmetry, and time translation symmetry. Spatial translation symmetry says that if you move an experiment any amount to the left or right, it works the same way. Time translation symmetry says that if you do an experiment any amount of time earlier or later, it works the same way.

Crystals are fascinating because they ‘spontaneously break’ spatial translation symmetry. Take a liquid, cool it until it freezes, and it forms a crystal which does not look the same if you move it any amount to the right or left. It only looks the same if you move it certain discrete steps to the right or left!

The idea of a ‘time crystal’ is that it’s a system that spontaneously breaks time translation symmetry.

Given how much physicists have studied time translation symmetry and spontaneous symmetry breaking, it’s sort of shocking that nobody before 2012 wrote about this possibility. Or maybe someone did—but I haven’t heard about it.

It takes real creativity to think of an idea so radical yet so simple. But Wilczek is famously creative. For example, he came up with anyons: a new kind of particle, neither boson nor fermion, that lives in a universe where space is 2-dimensional. And now we can make those in the lab.

Unfortunately, Wilczek didn’t know how to make a time crystal. But now a team including Xiang Zhang and Tongcang Li at U.C. Berkeley has a plan for how to do it.

Actually they propose a ring-shaped system that’s periodic in time and also in space, as shown in the picture. They call it a space-time crystal:

Here we propose a space-time crystal of trapped ions and a method to realize it experimentally by confining ions in a ring-shaped trapping potential with a static magnetic field. The ions spontaneously form a spatial ring crystal due to Coulomb repulsion. This ion crystal can rotate persistently at the lowest quantum energy state in magnetic fields with fractional fluxes. The persistent rotation of trapped ions produces the temporal order, leading to the formation of a space-time crystal. We show that these space-time crystals are robust for direct experimental observation. The proposed space-time crystals of trapped ions provide a new dimension for exploring many-body physics and emerging properties of matter.

The new paper is here:

• Tongcang Li, Zhe-Xuan Gong, Zhang-Qi Yin, H. T. Quan, Xiaobo Yin, Peng Zhang, L.-M. Duan and Xiang Zhang, Space-time crystals of trapped ions.

Alas, the press release put out by Lawrence Berkeley National Laboratory is very misleading. It describes the space-time crystal as a clock that

will theoretically persist even after the rest of our universe reaches entropy, thermodynamic equilibrium or “heat-death”.

NO!

First of all, ‘reaching entropy’ doesn’t mean anything. More importantly, as time goes by and things fall apart, this space-time crystal, assuming anyone can actually make it, will also fall apart.

I know what the person talking to the reporter was trying to say: the cool thing about this setup is that it gives a system that’s truly time-periodic, not gradually using up some resource and running down like an ordinary clock. But nonphysicist readers, seeing an article entitled ‘A Clock that Will Last Forever’, may be fooled into thinking this setup is, umm, a clock that will last forever. It’s not.

If this setup were the whole universe, it might keep ticking away forever. But in fact it’s just a small, carefully crafted portion of our universe, and it interacts with the rest of our universe, so it will gradually fall apart when everything else does… or in fact much sooner: as soon as the scientists running it turn off the experiment.

### Classifying space-time crystals

What could we do with space-time crystals? It’s way too early to tell, at least for me. But since I’m a mathematician, I’d be happy to classify them. Over on Google+, William Rutiser asked if there are 4d analogs of the 3d crystallographic groups. And the answer is yes! Mathematicians with too much time on their hands have classified the analogues of crystallographic groups in 4 dimensions:

Space group: classification in small dimensions, Wikipedia.

In general these groups are called space groups (see the article for the definition). In 1 dimension there are just two, namely the symmetry groups of this:

— o — o — o — o — o — o —

and this:

— > — > — > — > — > — > —

In 2 dimensions there are 17 and they’re called wallpaper groups. In 3 dimensions there are 230 and they are called crystallographic groups. In 4 dimensions there are 4894, in 5 dimensions there are… hey, Wikipedia leaves this space blank in their table!… and in 6 dimensions there are 28,934,974.

So, there is in principle quite a large subject to study here, if people can figure out how to build a variety of space-time crystals.

There’s already a book on this, if you’re interested:

• Harold Brown, Rolf Bülow, Joachim Neubüser, Hans Wondratschek and Hans Zassenhaus, Crystallographic Groups of Four-Dimensional Space, Wiley Monographs in Crystallography, 1978.

## Quantizing Electrical Circuits

2 February, 2012

As you may know, there’s a wonderful and famous analogy between classical mechanics and electrical circuit theory. I explained it back in “week288”, so I won’t repeat that story now. If you don’t know what I’m talking about, take a look!

This analogy opens up the possibility of quantizing electrical circuits by straightforwardly copying the way we quantize classical mechanics problems. I’d often wondered if this would be useful.

It is, and people have done it:

• Michel H. Devoret, Quantum fluctuations in electrical circuits.

Michel Devoret, Rob Schoelkopf and others call this idea quantronics: the study of mesoscopic electronic effects in which collective degrees of freedom like currents and voltages behave quantum mechanically.

I just learned about this from a talk by Sean Barrett here in Coogee. There are lots of cool applications, but right now I’m mainly interested in how this extends the set of analogies between different physical theories.

One interesting thing is how they quantize circuits with resistors. Over in classical mechanics, this corresponds to systems with friction. These systems, called ‘dissipative’ systems, don’t have a conserved energy. More precisely, energy leaks out of the system under consideration and gets transferred to the environment in the form of heat. It’s hard to quantize systems where energy isn’t conserved, so people in quantronics model resistors as infinite chains of inductors and capacitors: see the ‘LC ladder circuit’ on page 15 of Devoret’s notes. This idea is also the basis of the Caldeira–Leggett model of a particle coupled to a heat bath made of harmonic oscillators: it amounts to including the environment as part of the system being studied.
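Before any of the dissipative subtleties, the basic dictionary already works for a single LC circuit: inductance plays the role of mass, inverse capacitance the role of a spring constant, so the quantized circuit is a harmonic oscillator with $\omega = 1/\sqrt{LC}$ and energy levels $E_n = \hbar\omega(n + \tfrac{1}{2}).$ A quick numerical sketch, with invented but typical superconducting-circuit component values:

```python
import numpy as np

hbar = 1.054571817e-34  # reduced Planck constant, J*s

# Illustrative component values (typical orders of magnitude for
# superconducting circuits; not taken from Devoret's notes):
L = 1e-9    # inductance, 1 nH
C = 1e-12   # capacitance, 1 pF

# Mass <-> L, spring constant <-> 1/C gives omega = 1/sqrt(L*C).
omega = 1 / np.sqrt(L * C)

# First few harmonic-oscillator levels E_n = hbar*omega*(n + 1/2).
levels = hbar * omega * (np.arange(5) + 0.5)

print(f"omega / 2pi = {omega / (2 * np.pi) / 1e9:.2f} GHz")
for n, E in enumerate(levels):
    print(f"E_{n} = {E:.3e} J")
```

With these values the oscillator sits at a few GHz, which is why such circuits are driven and read out with microwaves.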

## A Quantum Hammersley–Clifford Theorem

29 January, 2012

I’m at this workshop:

Sydney Quantum Information Theory Workshop: Coogee 2012, 30 January – 2 February 2012, Coogee Bay Hotel, Coogee, Sydney, organized by Stephen Bartlett, Gavin Brennen, Andrew Doherty and Tom Stace.

Right now David Poulin is speaking about a quantum version of the Hammersley–Clifford theorem, which is a theorem about Markov networks. Let me quickly say a bit about what he proved! This will be a bit rough, since I’m doing it live…

The mutual information between two random variables is

$I(A:B) = S(A) + S(B) - S(A,B)$

The conditional mutual information of $A$ and $B,$ given a third random variable $C,$ is

$I(A:B|C) = \sum_c p(C=c) I(A:B|C=c)$

It’s the average amount of information about $B$ learned by measuring $A$ when you already knew $C.$

All this works for both classical (Shannon) and quantum (von Neumann) entropy. So, when we say ‘random variable’ above, we could mean it in the traditional classical sense or in the quantum sense.

If $I(A:B|C) = 0$ then $A, C, B$ has the following Markov property: if you know $C,$ learning $A$ tells you nothing new about $B.$ In condensed matter physics, say a spin system, we get (quantum) random variables from measuring what’s going on in regions, and we have short range entanglement if $I(A:B|C) = 0$ when $C$ corresponds to some sufficiently thick region separating the regions $A$ and $B.$ We’ll get this in any Gibbs state of a spin chain with a local Hamiltonian.

A Markov network is a graph with random variables at vertices (and thus at subsets of vertices) such that $I(A:B|C) = 0$ whenever $C$ is a subset of vertices that completely ‘shields’ the subset $A$ from the subset $B$: any path from $A$ to $B$ goes through a vertex in $C.$

The Hammersley–Clifford theorem says that in the classical case we can get any Markov network from the Gibbs state

$\exp(-\beta H)$

of a local Hamiltonian $H,$ and vice versa. Here a Hamiltonian is local if it is a sum of terms, one depending on the degrees of freedom in each clique in the graph:

$H = \sum_{C \in \mathrm{cliques}} h_C$

Hayden, Jozsa, Petz and Winter gave a quantum generalization of one direction of this result to graphs that are just ‘chains’, like this:

o—o—o—o—o—o—o—o—o—o—o—o

Namely: for such graphs, any quantum Markov network is the Gibbs state of some local Hamiltonian. Now Poulin has shown the same for all graphs. But the converse is, in general, false. If the different terms $h_C$ in a local Hamiltonian all commute, its Gibbs state will have the Markov property. But otherwise, it may not.
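The classical Markov property is easy to check numerically. Here’s a small illustration of my own (not from the talk): for any chain $A \to C \to B$ built from arbitrary conditional distributions, the conditional mutual information vanishes.

```python
import numpy as np

def entropy(p):
    """Shannon entropy (in bits) of a joint distribution given as an array."""
    p = p[p > 0]
    return -np.sum(p * np.log2(p))

rng = np.random.default_rng(0)

# Random binary Markov chain A -> C -> B:  p(a,c,b) = p(a) p(c|a) p(b|c)
pA = rng.dirichlet(np.ones(2))
pC_given_A = rng.dirichlet(np.ones(2), size=2)  # row a: distribution of C
pB_given_C = rng.dirichlet(np.ones(2), size=2)  # row c: distribution of B
p = np.einsum('a,ac,cb->acb', pA, pC_given_A, pB_given_C)

# I(A:B|C) = S(A,C) + S(C,B) - S(C) - S(A,C,B)
I = (entropy(p.sum(axis=2)) + entropy(p.sum(axis=0))
     - entropy(p.sum(axis=(0, 2))) - entropy(p))

print(abs(I) < 1e-12)   # True: C shields A from B
```

The same entropy combination, with von Neumann entropies of reduced density matrices, defines the quantum conditional mutual information used above.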

For some related material, see:

• David Poulin, Quantum graphical models and belief propagation.

## Probabilities Versus Amplitudes

5 December, 2011

Here are the slides of the talk I’m giving at the CQT Annual Symposium on Wednesday afternoon, which is Tuesday morning for a lot of you. If you catch mistakes, I’d love to hear about them before then!

Abstract: Some ideas from quantum theory are just beginning to percolate back to classical probability theory. For example, there is a widely used and successful theory of “chemical reaction networks”, which describes the interactions of molecules in a stochastic rather than quantum way. If we look at it from the perspective of quantum theory, this turns out to involve creation and annihilation operators, coherent states and other well-known ideas—but with a few big differences. The stochastic analogue of quantum field theory is also used in population biology, and here the connection is well-known. But what does it mean to treat wolves as fermions or bosons?

## Liquid Light

28 November, 2011

Elisabeth Giacobino works at the École Normale Supérieure in Paris. Last week she gave a talk at the Centre for Quantum Technologies. It was about ‘polariton condensates’. You can see a video of her talk here.

What’s a polariton? It’s a strange particle: a blend of matter and light. Polaritons are mostly made of light… with just enough matter mixed in so they can form a liquid! This liquid can form eddies just like water. Giacobino and her team of scientists have actually gotten pictures:

Physicists call this liquid a ‘polariton condensate’, but normal people may better appreciate how wonderful it is if we call it liquid light. That’s not 100% accurate, but it’s close—you’ll see what I mean in a minute.

Here’s a picture of Elisabeth Giacobino (at right) and her coworkers in 2010—not exactly the same team who is working on liquid light, but the best I can find:

#### How to make liquid light

How do you make liquid light?

First, take a thin film of some semiconductor like gallium arsenide. It’s full of electrons roaming around, so imagine a sea of electrons, like water. If you knock out an electron with enough energy, you’ll get a ‘hole’ which can move around like a particle of its own. Yes, the absence of a thing can act like a thing. Imagine an air bubble in the sea.

All this so far is standard stuff. But now for something more tricky: if you knock an electron just a little, it won’t go far from the hole it left behind. They’ll be attracted to each other, so they’ll orbit each other!

What you’ve got now is like a hydrogen atom—but instead of an electron and a proton, it’s made from an electron and a hole! It’s called an exciton. In Giacobino’s experiments, the excitons are 200 times as big as hydrogen atoms.
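That factor of roughly 200 comes from hydrogen-like scaling: the exciton Bohr radius is the hydrogen one, blown up by the dielectric constant and shrunk-mass correction. A back-of-envelope check, using textbook gallium arsenide parameters that I’m assuming here (not figures from the talk):

```python
# Wannier-exciton scaling: a_exc = a_0 * eps_r / (mu / m_e),
# where mu is the reduced electron-hole effective mass.
a0 = 0.0529    # hydrogen Bohr radius, nm
eps_r = 12.9   # static dielectric constant of GaAs (textbook estimate)
mu = 0.058     # reduced effective mass in units of the electron mass

a_exc = a0 * eps_r / mu
print(f"exciton Bohr radius = {a_exc:.1f} nm, "
      f"about {a_exc / a0:.0f} times the hydrogen atom")
```

With these numbers the exciton comes out around 12 nanometers across, a bit over 200 times the size of hydrogen, consistent with the figure quoted above.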

Excitons are exciting, but not exciting enough for us. So next, put a mirror on each side of your thin film. Now light can bounce back and forth. The light will interact with the excitons. If you do it right, this lets a particle of light—called a photon—blend with an exciton and form a new particle called a polariton.

How does a photon ‘blend’ with an exciton? Umm, err… this involves quantum mechanics. In quantum mechanics you can take two possible situations and add them and get a new one, a kind of ‘blend’ called a ‘superposition’. ‘Schrödinger’s cat’ is what you get when you blend a live cat and a dead cat. People like to argue about why we don’t see half-live, half-dead cats. But never mind: we can see a blend of a photon and an exciton! Giacobino and her coworkers have done just that.

The polaritons they create are mostly light, with just a teeny bit of exciton blended in. Photons have no mass at all. So, perhaps it’s not surprising that their polaritons have a very small mass: about $10^{-5}$ times as heavy as an electron!

They don’t last very long: just 4–10 picoseconds. A picosecond is a trillionth of a second, or $10^{-12}$ seconds. After that they fall apart. However, this is long enough for polaritons to do lots of interesting things.

For starters, polaritons interact with each other enough to form a liquid. But it’s not just any ordinary liquid: it’s often a superfluid, like very cold liquid helium. This means, among other things, that it has almost no viscosity.

So: it’s even better than liquid light: it’s superfluid light!

#### The flow of liquid light

What can you do with liquid light?

For starters, you can watch it flow around obstacles. Semiconductors have ‘defects’—little flaws in the crystal structure. These act as obstacles to the flow of polaritons. And Giacobino and her team have seen the flow of polaritons around defects in the semiconductor:

The two pictures at left are two views of the polariton condensate flowing smoothly around a defect. In these pictures the condensate is a superfluid.

The two pictures in the middle show a different situation. Here the polariton condensate is viscous enough so that it forms a trail of eddies as it flows past the defect. Yes, eddies of light!

And the two pictures at right show yet another situation. In every fluid, we can have waves of pressure. This is called… ‘sound’. Yes, this is how ordinary sound works in air, or under water. But we can also have sound in a polariton condensate!

That’s pretty cool: sound in liquid light! But wait. We haven’t gotten to the really cool part yet. Whenever you have a fluid moving past an obstacle faster than the speed of sound, you get a ‘shock wave’: the obstacle leaves an expanding trail of sound in its wake, behind it, because the sound can’t catch up. That’s why jets flying faster than sound leave a sonic boom behind them.

And that’s what you’re seeing in the pictures at right. The polariton condensate is flowing past the defect faster than the speed of sound, which happens to be around 850,000 meters per second in this experiment. We’re seeing the shock wave it makes. So, we’re seeing a sonic boom in liquid light!

It’s possible we’ll be able to use polariton condensates for interesting new technologies. Giacobino and her team are also considering using them to study Hawking radiation: the feeble glow that black holes emit according to Hawking’s predictions. There aren’t black holes in polariton condensates, but it may be possible to create a similar kind of radiation. That would be really cool!

But to me, just being able to make a liquid consisting mostly of light, and study its properties, is already a triumph: just for the beauty of it.

#### Scary technical details

All the pictures of polariton condensates flowing around a defect came from here:

• A. Amo, S. Pigeon, D. Sanvitto, V. G. Sala, R. Hivet, I. Carusotto, F. Pisanello, G. Lemenager, R. Houdre, E. Giacobino, C. Ciuti, and A. Bramati, Hydrodynamic solitons in polariton superfluids.

and this is the paper to read for more details.

I tried to be comprehensible to ordinary folks, but there are a few more things I can’t resist saying.

First, there are actually many different kinds of polaritons. In general, polaritons are quasiparticles formed by the interaction of photons and matter. For example, in some crystals sound acts like it’s made of particles, and these quasiparticles are called ‘phonons’. But sometimes phonons can interact with light to form quasiparticles—and these are called ‘phonon-polaritons’. I’ve only been talking about ‘exciton-polaritons’.

If you know a bit about superfluids, you may be interested to hear that the wavy patterns show the phase of the order parameter $\psi$ in the Landau–Ginzburg theory of superfluids:

If you know about quantum field theory, you may be interested to know that the Hamiltonian describing photon-exciton interactions involves terms roughly like

$\alpha a^\dagger a + \beta b^\dagger b + \gamma (a^\dagger b + b^\dagger a)$

where $a$ is the annihilation operator for photons, $b$ is the annihilation operator for excitons, the Greek letters are various constants, and the third term describes the interaction of photons and excitons. We can simplify this Hamiltonian by defining new particles that are linear combinations of photons and excitons. It’s just like diagonalizing a matrix; we get something like

$\delta c^\dagger c + \epsilon d^\dagger d$

where $c$ and $d$ are certain linear combinations of $a$ and $b$. These act as annihilation operators for our new particles… and one of these new particles is the very light ‘polariton’ I’ve been talking about!
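As a toy numerical version of that diagonalization (the constants below are invented for illustration; real values depend on the cavity and the sample), one can restrict to the single-particle sector, where the Hamiltonian is just a $2\times 2$ matrix:

```python
import numpy as np

# Single-particle coupling matrix for the quadratic Hamiltonian
#   alpha a†a + beta b†b + gamma (a†b + b†a)
# Illustrative values only, in meV: photon, exciton, coupling.
alpha, beta, gamma = 1480.0, 1485.0, 5.0

M = np.array([[alpha, gamma],
              [gamma, beta]])

# Diagonalizing M gives the energies of the new particles c and d
# (the lower and upper polariton branches) and their mode mixtures.
energies, modes = np.linalg.eigh(M)

print("lower/upper polariton energies (meV):", energies)

# Each column of `modes` gives the (photon, exciton) amplitudes of one
# branch; squaring the photon amplitude gives its "photon fraction".
photon_fraction = modes[0, :] ** 2
print("photon fraction of each branch:", photon_fraction)
```

The lower-energy eigenvector with a photon fraction close to 1 is the mostly-light, very light-mass polariton described above; shrinking gamma pushes that fraction toward 1.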