Applied Category Theory 2021 — Call for Papers

16 April, 2021


The deadline for submitting papers is coming up soon: May 10th.

Fourth Annual International Conference on Applied Category Theory (ACT 2021), July 12–16, 2021, online and at the Computer Laboratory of the University of Cambridge.

Plans to run ACT 2021 as one of the first physical conferences post-lockdown are progressing well. Consider going to Cambridge! Financial support is available for students and junior researchers.

Applied category theory is a topic of interest for a growing community of researchers, interested in studying many different kinds of systems using category-theoretic tools. These systems are found across computer science, mathematics, and physics, as well as in social science, linguistics, cognition, and neuroscience. The background and experience of our members is as varied as the systems being studied. The goal of the Applied Category Theory conference series is to bring researchers together, disseminate the latest results, and facilitate further development of the field.

We accept submissions of both original research papers and work accepted/submitted/published elsewhere. Accepted original research papers will be invited for publication in a proceedings volume. The keynote addresses will be drawn from the best accepted papers. The conference will include an industry showcase event.

We hope to run the conference as a hybrid event, with physical attendees present in Cambridge, and other participants taking part online. However, due to the state of the pandemic, the possibility of in-person attendance is not yet confirmed. Please do not book your travel or hotel accommodation yet.

Financial support

We are able to offer financial support to PhD students and junior researchers. Full guidance is on the webpage.

Important dates (all in 2021)

• Submission Deadline: Monday 10 May
• Author Notification: Monday 7 June
• Financial Support Application Deadline: Monday 7 June
• Financial Support Notification: Tuesday 8 June
• Priority Physical Registration Opens: Wednesday 9 June
• Ordinary Physical Registration Opens: Monday 13 June
• Reserved Accommodation Booking Deadline: Monday 13 June
• Adjoint School: Monday 5 to Friday 9 July
• Main Conference: Monday 12 to Friday 16 July

Submissions

The following two types of submissions are accepted:

Proceedings Track. Original contributions of high-quality work consisting of an extended abstract, up to 12 pages, that provides evidence of results of genuine interest, and with enough detail to allow the program committee to assess the merits of the work. Submission of work-in-progress is encouraged, but it must be more substantial than a research proposal.

Non-Proceedings Track. Descriptions of high-quality work submitted or published elsewhere will also be considered, provided the work is recent and relevant to the conference. The work may be of any length, but the program committee members may only look at the first 3 pages of the submission, so you should ensure that these pages contain sufficient evidence of the quality and rigour of your work.

Papers in the two tracks will be reviewed against the same standards of quality. Since ACT is an interdisciplinary conference, we use two tracks to accommodate the publishing conventions of different disciplines. For example, those from a Computer Science background may prefer the Proceedings Track, while those from a Mathematics, Physics or other background may prefer the Non-Proceedings Track. However, authors from any background are free to choose the track that they prefer, and submissions may be moved from the Proceedings Track to the Non-Proceedings Track at any time at the request of the authors.

Contributions must be submitted in PDF format. Submissions to the Proceedings Track must be prepared with LaTeX, using the EPTCS style files available at http://style.eptcs.org.

The submission link will soon be available on the ACT2021 web page: https://www.cl.cam.ac.uk/events/act2021

Program Committee

Chair:

• Kohei Kishida, University of Illinois, Urbana-Champaign

Members:

• Richard Blute, University of Ottawa
• Spencer Breiner, NIST
• Daniel Cicala, University of New Haven
• Robin Cockett, University of Calgary
• Bob Coecke, Cambridge Quantum Computing
• Geoffrey Cruttwell, Mount Allison University
• Valeria de Paiva, Samsung Research America and University of Birmingham
• Brendan Fong, Massachusetts Institute of Technology
• Jonas Frey, Carnegie Mellon University
• Tobias Fritz, Perimeter Institute for Theoretical Physics
• Fabrizio Romano Genovese, Statebox
• Helle Hvid Hansen, University of Groningen
• Jules Hedges, University of Strathclyde
• Chris Heunen, University of Edinburgh
• Alex Hoffnung, Bridgewater
• Martti Karvonen, University of Ottawa
• Kohei Kishida, University of Illinois, Urbana-Champaign (chair)
• Martha Lewis, University of Bristol
• Bert Lindenhovius, Johannes Kepler University Linz
• Ben MacAdam, University of Calgary
• Dan Marsden, University of Oxford
• Jade Master, University of California, Riverside
• Joe Moeller, NIST
• Koko Muroya, Kyoto University
• Simona Paoli, University of Leicester
• Daniela Petrisan, Université de Paris, IRIF
• Mehrnoosh Sadrzadeh, University College London
• Peter Selinger, Dalhousie University
• Michael Shulman, University of San Diego
• David Spivak, MIT and Topos Institute
• Joshua Tan, University of Oxford
• Dmitry Vagner
• Jamie Vicary, University of Cambridge
• John van de Wetering, Radboud University Nijmegen
• Vladimir Zamdzhiev, Inria, LORIA, Université de Lorraine
• Maaike Zwart


Black Dwarf Supernovae

14 April, 2021

“Black dwarf supernovae”. They sound quite dramatic! And indeed, they may be the last really exciting events in the Universe.

It’s too early to be sure. There could be plenty of things about astrophysics we don’t understand yet—and intelligent life may throw up surprises even in the very far future. But there’s a nice scenario here:

• M. E. Caplan, Black dwarf supernova in the far future, Monthly Notices of the Royal Astronomical Society 497 (2020), 4357–4362.

First, let me set the stage. What happens in the short run: say, the first 10^23 years or so?

For a while, galaxies will keep colliding. These collisions seem to destroy spiral galaxies: they fuse into bigger elliptical galaxies. We can already see this happening here and there—and our own Milky Way may have a near collision with Andromeda in only 3.85 billion years or so, well before the Sun becomes a red giant. If this happens, a bunch of new stars will be born from the shock waves due to colliding interstellar gas.

In about 7 billion years we expect that Andromeda and the Milky Way will merge and form a large elliptical galaxy. Unfortunately, elliptical galaxies lack spiral arms, which seem to be a crucial part of the star formation process, so star formation may cease even before the raw materials run out.

Of course, no matter what happens, the birth of new stars must eventually cease, since there’s a limited amount of hydrogen, helium, and other stuff that can undergo fusion.

This means that all the stars will eventually burn out. The longest lived are the red dwarf stars, the smallest stars capable of supporting fusion today, with a mass about 0.08 times that of the Sun. These will run out of hydrogen about 10 trillion years from now, and since they cannot burn heavier elements, they will then slowly cool down.

(I’m deliberately ignoring what intelligent life may do. We can imagine civilizations that develop the ability to control stars, but it’s hard to predict what they’ll do so I’m leaving them out of this story.)

A star becomes a white dwarf—and eventually a black dwarf when it cools—if its core, made of highly compressed matter, has a mass less than 1.4 solar masses. In this case the core can be held up by the ‘electron degeneracy pressure’ caused by the Pauli exclusion principle, which works even at zero temperature. But if the core is heavier than this, it collapses! It becomes a neutron star if it’s between 1.4 and 2 solar masses, and a black hole if it’s more massive.
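
Just to keep these thresholds straight, here is the classification above written as a tiny Python function; the cutoff values are simply the ones quoted in the previous paragraph, and real stellar evolution is of course messier.

def remnant(core_mass_in_solar_masses):
    # Fate of a stellar core, using the rough thresholds quoted above.
    if core_mass_in_solar_masses < 1.4:
        return "white dwarf, eventually cooling to a black dwarf"
    elif core_mass_in_solar_masses <= 2:
        return "neutron star"
    else:
        return "black hole"

print(remnant(1.0))   # white dwarf, eventually cooling to a black dwarf
print(remnant(1.7))   # neutron star
print(remnant(5.0))   # black hole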

In about 100 trillion years, all normal star formation processes will have ceased, and the universe will have a population of stars consisting of about 55% white dwarfs, 45% brown dwarfs, and a smaller number of neutron stars and black holes. Star formation will continue at a very slow rate due to collisions between brown and/or white dwarfs.

The black holes will suck up some of the other stars they encounter. This is especially true for the big black holes at the galactic centers, which power radio galaxies if they swallow stars at a sufficiently rapid rate. But most of the stars, as well as interstellar gas and dust, will eventually be hurled into intergalactic space. This happens to a star whenever it accidentally reaches escape velocity through its random encounters with other stars. It’s a slow process, but computer simulations show that about 90% of the mass of the galaxies will eventually ‘boil off’ this way — while the rest becomes a big black hole.

How long will all this take? Well, the white dwarfs will cool to black dwarfs in about 100 quadrillion years, and the galaxies will boil away by about 10 quintillion years. Most planets will have already been knocked off their orbits by then, thanks to random disturbances which gradually take their toll over time. But any that are still orbiting stars will spiral in thanks to gravitational radiation in about 100 quintillion years.

I think the numbers are getting a bit silly. 100 quintillion is 10^20, and let’s use scientific notation from now on.

Then what? Well, in about 10^23 years the dead stars will actually boil off from the galactic clusters, not just the galaxies, so the clusters will disintegrate. At this point the cosmic background radiation will have cooled to about 10^-13 Kelvin, and most things will be at about that temperature unless proton decay or some other such process keeps them warmer.

Okay: so now we have a bunch of isolated black holes, neutron stars, and black dwarfs together with lone planets, asteroids, rocks, dust grains, molecules and atoms of gas, photons and neutrinos, all very close to absolute zero.

I had a dream, which was not all a dream.
The bright sun was extinguish’d, and the stars
Did wander darkling in the eternal space,
Rayless, and pathless, and the icy earth
Swung blind and blackening in the moonless air.

— Lord Byron

So what happens next?

We expect that black holes evaporate due to Hawking radiation: a solar-mass one should do so in 10^67 years, and a really big one, comparable to the mass of a galaxy, should take about 10^99 years. Small objects like planets and asteroids may eventually ‘sublimate’: that is, slowly dissipate by losing atoms due to random processes. I haven’t seen estimates on how long this will take. For larger objects, like neutron stars, this may take a very long time.
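
To see where a number like 10^67 years comes from, here is a minimal Python sketch using the standard Hawking evaporation estimate t ≈ 5120 π G² M³ / (ħ c⁴); the constants and the one-solar-mass example are my own inputs, not numbers taken from Caplan’s paper.

import math

# Physical constants in SI units (approximate)
G = 6.674e-11        # gravitational constant
hbar = 1.055e-34     # reduced Planck constant
c = 2.998e8          # speed of light
year = 3.156e7       # seconds per year

def hawking_evaporation_time(mass_kg):
    # Standard estimate for a non-rotating black hole evaporating purely
    # by Hawking radiation: t = 5120 * pi * G^2 * M^3 / (hbar * c^4)
    return 5120 * math.pi * G**2 * mass_kg**3 / (hbar * c**4)

solar_mass = 1.989e30   # kg
print(hawking_evaporation_time(solar_mass) / year)   # about 2e67 years, consistent with the figure above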

But I want to focus on stars lighter than 1.2 solar masses. As I mentioned, these will become white dwarfs held up by their electron degeneracy pressure, and by about 10^17 years they will cool down to become very cold black dwarfs. Their cores will crystallize!


Then what? If a proton can decay into other particles, for example a positron and a neutral pion, black dwarfs may slowly shrink away to nothing due to this process, emitting particles as they fade away! Right now we know that the lifetime of the proton to decay via such processes is at least 10^32 years. It could be much longer.

But suppose the proton is completely stable. Then what happens? In this scenario, a very slow process of nuclear fusion will slowly turn black dwarfs into iron! It’s called pycnonuclear fusion. The idea is that due to quantum tunneling, nuclei next to each other in the crystal lattice within a black dwarf will occasionally get ‘right on top of each other’ and fuse into a heavier nucleus! Since iron-56 is the most stable nucleus, eventually iron will predominate.

Iron is more dense than lighter elements, so as this happens the black dwarf will shrink. It may eventually shrink down to being so dense that electron pressure will no longer hold it up. If this happens, the black dwarf will suddenly collapse, just like heavier stars. It will release a huge amount of energy and explode as gravitational potential energy gets converted into heat. This is a black dwarf supernova.

When will black dwarf supernovae first happen, assuming proton decay or some other unknown processes don’t destroy the black dwarfs first?

This is what Matt Caplan calculated:

We now consider the evolution of a white dwarf toward an iron black dwarf and the circumstances that result in collapse. Going beyond the simple order of magnitude estimates of Dyson (1979), we know pycnonuclear fusion rates are strongly dependent on density so they are greatest in the core of the black dwarf and slowest at the surface. Therefore, the internal structure of a black dwarf evolving toward collapse can be thought of as an astronomically slowly moving ‘burning’ front growing outward from the core toward the surface. This burning front grows outward much more slowly than any hydrodynamical or nuclear timescale, and the star remains at approximately zero temperature for this phase. Furthermore, in contrast to traditional thermonuclear stellar burning, the later reactions with higher Z parents take significantly longer due to the larger tunneling barriers for fusion.

Here “later reactions with higher Z parents” means fusion reactions involving heavier nuclei. The very last step, for example, is when two silicon nuclei fuse to form a nucleus of iron. In an ordinary star these later reactions happen much faster than those involving light nuclei, but for black dwarfs this pattern is reversed—and everything happens at a ridiculously slow rate, at a temperature near absolute zero.

He estimates a black dwarf of 1.24 solar masses will collapse and go supernova after about 10^1600 years, when roughly half its mass has turned to iron.

Lighter ones will take much longer. A black dwarf of 1.16 solar masses could take 10^32000 years to go supernova.

These black dwarf supernovae could be the last really energetic events in the Universe.

It’s downright scary to think how far apart these black dwarfs will be when they explode. As I mentioned, galaxies and clusters will have long since boiled away, so every black dwarf will be completely alone in the depths of space. Distances between them will be doubling every 12 billion years according to the current standard model of cosmology, the ΛCDM model. But 12 billion years is peanuts compared to the time scales I’m talking about now!

So, by the time black dwarfs start to explode, the distances between these stars will be expanded by a factor of roughly

\displaystyle{ e^{10^{1000}} }

compared to their distances today. That’s a very rough estimate, but it means that each black dwarf supernova will be living in its own separate world.


The Expansion of the Universe

9 April, 2021

We can wait a while to explore the Universe, but we shouldn’t wait too long. If the Universe continues its accelerating expansion as predicted by the usual model of cosmology, it will eventually expand by a factor of 2 every 12 billion years. So if we wait too long, we can’t ever reach a distant galaxy.

In fact, after 150 billion years, all galaxies outside our Local Group will become completely inaccessible, in principle, to any form of transportation not faster than light!
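
Here is a rough sketch of mine, not Ord’s calculation, of why exponential expansion puts galaxies permanently out of reach. Suppose the scale factor simply doubles every T = 12 billion years, and ignore the earlier decelerating era. Then the total comoving distance a light signal sent today can ever cover is

\displaystyle{ \int_0^\infty \frac{c\,dt}{2^{t/T}} = \frac{cT}{\ln 2} \approx 17 \text{ billion light years} }

Anything farther away than this finite distance can never be reached, no matter how long the signal travels. And if we wait until a time t_0 to set out, the reachable distance shrinks to (cT/\ln 2)\, 2^{-t_0/T}, which is why waiting too long is fatal.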

For an explanation, read this:

• Toby Ord, The edges of our Universe.

This is where I got the table.

150 billion years sounds like a long time, but the smallest stars powered by fusion—the red dwarf stars, which are very plentiful—are expected to last much longer: about 10 trillion years!  So, we can imagine a technologically advanced civilization that has managed to spread over the Local Group and live near red dwarf stars, which eventually regrets that it has waited too long to expand through more of the Universe.  

The Local Group is a collection of roughly 50 nearby galaxies containing about 2 trillion stars, so there’s certainly plenty to do here. It’s held together by gravity, so it won’t get stretched out by the expansion of the Universe—not, at least, until its stars slowly “boil off” due to some of them randomly picking up high speeds. But that will happen much, much later: in more than 10 quintillion years, that is, 10^19 years.

For more, see this article of mine:

• The end of the Universe.


The Koide Formula

4 April, 2021

There are three charged leptons: the electron, the muon and the tau. Let m_e, m_\mu and m_\tau be their masses. Then the Koide formula says

\displaystyle{ \frac{m_e + m_\mu + m_\tau}{\big(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau}\big)^2} = \frac{2}{3} }

There’s no known reason for this formula to be true! But if you plug in the experimentally measured values of the electron, muon and tau masses, it’s accurate within the current experimental error bars:

\displaystyle{ \frac{m_e + m_\mu + m_\tau}{\big(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau}\big)^2} = 0.666661 \pm 0.000007 }
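
If you want to check this yourself, here is a minimal Python sketch; the lepton masses below, in MeV/c², are approximate measured values that I am supplying, not numbers quoted in this post.

from math import sqrt

# Approximate measured charged lepton masses in MeV/c^2 (rounded; my inputs, not from the post)
m_e, m_mu, m_tau = 0.511, 105.658, 1776.86

koide_ratio = (m_e + m_mu + m_tau) / (sqrt(m_e) + sqrt(m_mu) + sqrt(m_tau))**2
print(koide_ratio)   # roughly 0.66666, strikingly close to 2/3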

Is this significant or just a coincidence? Will it fall apart when we measure the masses more accurately? Nobody knows.

Here’s something fun, though:

Puzzle. Show that no matter what the electron, muon and tau masses might be—that is, any positive numbers whatsoever—we must have

\displaystyle{ \frac{1}{3} \le \frac{m_e + m_\mu + m_\tau}{\big(\sqrt{m_e} + \sqrt{m_\mu} + \sqrt{m_\tau}\big)^2} \le 1}

For some reason this ratio turns out to be almost exactly halfway between the lower bound and upper bound!

Koide came up with his formula in 1982 before the tau’s mass was measured very accurately.  At the time, using the observed electron and muon masses, his formula predicted the tau’s mass was

m_\tau = 1776.97 MeV/c²

while the observed mass was

m_\tau = 1784.2 ± 3.2 MeV/c²

Not very good.

In 1992 the tau’s mass was measured much more accurately and found to be

m_\tau = 1776.99 ± 0.28 MeV/c²

Much better!

Koide has some more recent thoughts about his formula:

• Yoshio Koide, What physics does the charged lepton mass relation tell us?, 2018.

He points out how difficult it is to explain a formula like this, given how masses depend on an energy scale in quantum field theory.


Vincenzo Galilei

3 April, 2021

I’ve been reading about early music. I ran into Vincenzo Galilei, an Italian lute player, composer, and music theorist who lived during the late Renaissance and helped start the Baroque era. Of course anyone interested in physics will know Galileo Galilei. And it turns out Vincenzo was Galileo’s dad!

The really interesting part is that Vincenzo did a lot of experiments—and he got Galileo interested in the experimental method!

Vincenzo started out as a lutenist, but in 1563 he met Gioseffo Zarlino, the most important music theorist of the sixteenth century, and began studying with him. Vincenzo became interested in tuning and keys, and in 1584 he anticipated Bach’s Well-Tempered Clavier by composing 24 groups of dances, one for each of the 12 major and 12 minor keys.

He also studied acoustics, especially vibrating strings and columns of air. He discovered that while the frequency of the sound produced by a vibrating string varies inversely with the length of the string, it’s also proportional to the square root of the tension applied. For example, weights suspended from strings of equal length need to be in a ratio of 9:4 to produce a perfect fifth, which is the frequency ratio 3:2.
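
As a quick sanity check on that relationship, here is a minimal Python sketch; the proportionality constant is arbitrary, since only ratios matter here.

from math import sqrt

def frequency(tension, length, k=1.0):
    # Vincenzo's observation: frequency is proportional to sqrt(tension) / length
    return k * sqrt(tension) / length

# Strings of equal length whose tensions are in the ratio 9:4
print(frequency(9.0, 1.0) / frequency(4.0, 1.0))   # 1.5, the 3:2 frequency ratio of a perfect fifth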

Galileo later told a biographer that Vincenzo introduced him to the idea of systematic testing and measurement. The basement of their house was strung with lute strings of different materials and lengths, with different weights attached. Some say this drew Galileo’s attention away from pure mathematics to physics!

You can see books by Vincenzo Galilei here:

• Internet Archive, Vincenzo Galilei, c. 1520 – 2 July 1591.

Unfortunately for me they’re in Italian, but the title of his Dialogo della Musica Antica et Della Moderna reminds me of his son’s Dialogo sopra i Due Massimi Sistemi del Mondo (Dialog Concerning the Two Chief World Systems).

Speaking of dialogs, here’s a nice lute duet by Vincenzo Galilei, played by Evangelina Mascardi and Frédéric Zigante:

It’s from his book Fronimo Dialogo, an instruction manual for the lute which includes many compositions, including the 24 dances illustrating the 24 keys. “Fronimo” was an imaginary expert in the lute—in ancient Greek, phronimo means sage—and the book apparently consists of dialogs between Fronimo and a student, Eumazio (meaning “he who learns well”).

So, I now suspect that Galileo got his fondness for dialogs from his dad, too! Or maybe everyone was writing them back then?


Can We Understand the Standard Model Using Octonions?

31 March, 2021


I gave two talks in Latham Boyle and Kirill Krasnov’s Perimeter Institute workshop Octonions and the Standard Model.

The first talk was on Monday April 5th at noon Eastern Time. The second was exactly one week later, on Monday April 12th at noon Eastern Time.

Here they are:

Can we understand the Standard Model? (video, slides)

Abstract. 40 years of trying to go beyond the Standard Model hasn’t yet led to any clear success. As an alternative, we could try to understand why the Standard Model is the way it is. In this talk we review some lessons from grand unified theories and also from recent work using the octonions. The gauge group of the Standard Model and its representation on one generation of fermions arises naturally from a process that involves splitting 10d Euclidean space into 4+6 dimensions, but also from a process that involves splitting 10d Minkowski spacetime into 4d Minkowski space and 6 spacelike dimensions. We explain both these approaches, and how to reconcile them.

The second talk:

Can we understand the Standard Model using octonions? (video, slides)

Abstract. Dubois-Violette and Todorov have shown that the Standard Model gauge group can be constructed using the exceptional Jordan algebra, consisting of 3×3 self-adjoint matrices of octonions. After an introduction to the physics of Jordan algebras, we ponder the meaning of their construction. For example, it implies that the Standard Model gauge group consists of the symmetries of an octonionic qutrit that restrict to symmetries of an octonionic qubit and preserve all the structure arising from a choice of unit imaginary octonion. It also sheds light on why the Standard Model gauge group acts on 10d Euclidean space, or Minkowski spacetime, while preserving a 4+6 splitting.

You can see all the slides and videos and also some articles with more details here.


Offshore Wind Power in the US

30 March, 2021

More good news about wind power! Earlier this month I mentioned progress on Vineyard Wind, a planned offshore wind farm that should generate 800 megawatts when it’s finally running. Now the Biden administration has begun a broader initiative. They want to get sixteen construction plans for offshore wind worked out by 2025, for 19 gigawatts of wind power. And their bigger goal is 30 gigawatts of wind power by 2030.

• Juliet Eilperin and Brady Dennis, Biden administration launches major push to expand offshore wind power, Washington Post, 29 March 2021.

The White House on Monday detailed an ambitious plan to expand wind farms along the East Coast and jump-start the country’s nascent offshore wind industry, saying it hoped to trigger a massive clean-energy effort in the fight against climate change.

The plan would generate 30 gigawatts of offshore wind power by the end of the decade — enough to power more than 10 million American homes and cut 78 million metric tons of carbon dioxide emissions. To accomplish that, the Biden administration said, it would speed permitting for projects off the East Coast, invest in research and development, provide low-interest loans to industry and fund changes to U.S. ports.

“We are ready to rock-and-roll,” national climate adviser Gina McCarthy told reporters in a phone call Monday. She framed the effort as being as much about jobs as about clean energy. Offshore wind power will generate “thousands of good-paying union jobs. This is all about creating great jobs in the ocean and in our port cities and in our heartland,” she said.

The initiative represents a major stretch for the United States. The country has only one offshore wind project online at this time, generating 30 megawatts, off Rhode Island.

Administration officials said they would speed up offshore wind development by setting concrete deadlines for reviewing and approving permit applications; establish a new wind energy area in the waters between Long Island and the New Jersey coast; invest $230 million to upgrade U.S. ports; and provide $3 billion in potential loans for the offshore wind industry through the Energy Department.

The program also instructs the National Oceanic and Atmospheric Administration to share data with Orsted, a Danish offshore wind development firm, about the U.S. waters where it holds leases. NOAA will grant $1 million to help study the impact of offshore wind operations on fishing operators as well as coastal communities.

The National Offshore Wind Research and Development Consortium, a joint project of the Energy Department and the New York State Energy Research and Development Authority, will give $8 million in research grants to 15 offshore wind research and development projects.

[….]

Although offshore wind represents the fastest-growing sector in renewable power, the country remains far behind Europe.

Europe already has 24 gigawatts of operational capacity, and Britain alone aims to have 40 gigawatts online by 2030, said Vegard Wiik Vollset, vice president of renewable energy at Rystad Energy, which analyzes the energy sector.

“Compared to Europe, the U.S. is very much in its infancy,” he said.

But wind power is poised to take off along the East Coast, with recent commitments from several states — Connecticut, Maryland, Massachusetts, New Jersey, New York and Virginia — to buy at least 25,000 megawatts of offshore electricity by 2035, according to the American Clean Power Association.

As part of Monday’s announcement, the Interior Department’s Bureau of Ocean Energy Management said it will start preparing an environmental-impact statement for Ocean Wind, a New Jersey project that has 1,100 megawatts of capacity.


The Joy of Condensed Matter

24 March, 2021

I published a slightly different version of this article in Nautilus on February 24, 2021.


Everyone seems to be talking about the problems with physics: Woit’s book Not Even Wrong, Smolin’s The Trouble With Physics and Hossenfelder’s Lost in Math leap to mind, and they have started a wider conversation. But is all of physics really in trouble, or just some of it?

If you actually read these books, you’ll see they’re about so-called “fundamental physics”. Some other parts of physics are doing just fine, and I want to tell you about one. It’s called “condensed matter physics”, and it’s the study of solids and liquids. We are living in the golden age of condensed matter physics.

But first, what is “fundamental” physics? It’s a tricky term. You might think any truly revolutionary development in physics counts as fundamental. But in fact physicists use this term in a more precise, narrowly delimited way. One of the goals of physics is to figure out some laws that, at least in principle, we could use to predict everything that can be predicted about the physical universe. The search for these laws is fundamental physics.

The fine print is crucial. First: “at least in principle”. In principle we can use the fundamental physics we know to calculate the boiling point of water to immense accuracy—but nobody has done it yet, because the calculation is hard. Second: “everything that can be predicted”. As far as we can tell, quantum mechanics says there’s inherent randomness in things, which makes some predictions impossible, not just impractical, to carry out with certainty. And this inherent quantum randomness sometimes gets amplified over time, by a phenomenon called chaos. For this reason, even if we knew everything about the universe now, we couldn’t predict the weather precisely a year from now.

So even if fundamental physics succeeded perfectly, it would be far from giving the answer to all our questions about the physical world. But it’s important nonetheless, because it gives us the basic framework in which we can try to answer these questions.

As of now, research in fundamental physics has given us the Standard Model (which seeks to describe matter and all the forces except gravity) and General Relativity (which describes gravity). These theories are tremendously successful, but we know they are not the last word. Big questions remain unanswered—like the nature of dark matter, or whatever is fooling us into thinking there’s dark matter. Unfortunately, progress on these questions has been very slow since the 1990s.

Luckily fundamental physics is not all of physics, and today it is no longer the most exciting part of physics. There is still plenty of mind-blowing new physics being done. And a lot of it—though by no means all—is condensed matter physics.

Traditionally, the job of condensed matter physics was to predict the properties of solids and liquids found in nature. Sometimes this can be very hard: for example, computing the boiling point of water. But now we know enough fundamental physics to design strange new materials—and then actually make these materials, and probe their properties with experiments, testing our theories of how they should work. Even better, these experiments can often be done on a table top. There’s no need for enormous particle accelerators here.

Let’s look at an example. We’ll start with the humble “hole”. A crystal is a regular array of atoms, each with some electrons orbiting it. When one of these electrons gets knocked off somehow, we get a “hole”: an atom with a missing electron. And this hole can actually move around like a particle! When an electron from some neighboring atom moves to fill the hole, the hole moves to the neighboring atom. Imagine a line of people all wearing hats except for one whose head is bare: if their neighbor lends them their hat, the bare head moves to the neighbor. If this keeps happening, the bare head will move down the line of people. The absence of a thing can act like a thing!

The famous physicist Paul Dirac came up with the idea of holes in 1930. He correctly predicted that since electrons have negative electric charge, holes should have positive charge. Dirac was working on fundamental physics: he hoped the proton could be explained as a hole. That turned out not to be true. Later physicists found another particle that could: the “positron”. It’s just like an electron with the opposite charge. And thus antimatter—particles like ordinary matter particles, with the same mass but with the opposite charge—was born. But that’s another story.

In 1931, Heisenberg applied the idea of holes to condensed matter physics. He realized that just as electrons create an electrical current as they move along, so do holes—but because they’re positively charged, their electrical current goes in the other direction! It became clear that holes carry electrical current in some of the materials called “semiconductors”: for example, silicon with a bit of aluminum added to it. After many further developments, in 1948 the physicist William Shockley patented transistors that use both holes and electrons to form a kind of switch. He later won the Nobel prize for this, and now transistors are widely used in computer chips.

Holes in semiconductors are not really particles in the sense of fundamental physics. They are really just a convenient way of thinking about the motion of electrons. But any sufficiently convenient abstraction takes on a life of its own. The equations that describe the behavior of holes are just like the equations that describe the behavior of particles. So, we can treat holes as if they were particles. We’ve already seen that a hole is positively charged. But because it takes energy to get a hole moving, a hole also acts like it has a mass. And so on: the properties we normally attribute to particles also make sense for holes.

Physicists have a name for things that act like particles even though they’re really not: “quasiparticles”. There are many kinds: holes are just one of the simplest. The beauty of quasiparticles is that we can practically make them to order, with a vast variety of properties. As Michael Nielsen put it, we now live in the era of “designer matter”.

For example, consider the “exciton”. Since an electron is negatively charged and a hole is positively charged, they attract each other. And if the hole is much heavier than the electron—remember, a hole has a mass—an electron can orbit a hole much as an electron orbits a proton in a hydrogen atom. Thus, they form a kind of artificial atom called an exciton. It’s a ghostly dance of presence and absence!


This is how an exciton moves through a crystal.
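
To get a feel for how literally the hydrogen-atom analogy can be taken, here is a minimal Python sketch of a hydrogen-like model of the exciton; the effective mass and dielectric constant are rough illustrative values for a semiconductor such as gallium arsenide, chosen by me rather than taken from this article.

RYDBERG_EV = 13.6   # binding energy of hydrogen, in electron volts

def exciton_binding_energy(reduced_mass_ratio, dielectric_constant):
    # Hydrogen-like model: scale the hydrogen binding energy by the reduced
    # effective mass (in units of the electron mass) and screen it by the
    # square of the dielectric constant of the crystal.
    return RYDBERG_EV * reduced_mass_ratio / dielectric_constant**2

# Rough illustrative values for gallium arsenide (my assumptions)
mu = 0.06    # reduced effective mass of the electron-hole pair
eps = 13.0   # static dielectric constant

print(1000 * exciton_binding_energy(mu, eps), "meV")   # a few meV: thousands of times more weakly bound than hydrogen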

The idea of excitons goes back all the way to 1931. By now we can make excitons in large quantities in certain semiconductors. They don’t last for long: the electron quickly falls back into the hole. It can take between 1 and 10 trillionths of a second for this to happen. But that’s enough time to do some interesting things.

For example: if you can make an artificial atom, can you make an artificial molecule? Sure! Just as two atoms of hydrogen can stick together and form a molecule, two excitons can stick together and form a “biexciton”. An exciton can stick to another hole and form a “trion”. An exciton can even stick to a photon—a particle of light—and form something called a “polariton”. It’s a blend of matter and light!

Can you make a gas of artificial atoms? Yes! At low densities and high temperatures, excitons zip around very much like atoms in a gas. Can you make a liquid? Again, yes: at higher densities, and colder temperatures, excitons bump into each other enough to act like a liquid. At even colder temperatures, excitons can even form a “superfluid”, with almost zero viscosity: if you could somehow get it swirling around, it would go on practically forever.

This is just a small taste of what researchers in condensed matter physics are doing these days. Besides excitons, they are studying a host of other quasiparticles. A “phonon” is a quasiparticle of sound formed from vibrations moving through a crystal. A “magnon” is a quasiparticle of magnetization: a pulse of electrons in a crystal whose spins have flipped. The list goes on, and becomes ever more esoteric.

But there is also much more to the field than quasiparticles. Physicists can now create materials in which the speed of light is much slower than usual, say 40 miles an hour. They can create materials called “hyperbolic metamaterials” in which light moves as if there were two space dimensions and two time dimensions, instead of the usual three dimensions of space and one of time! Normally we think that time can go forward in just one direction, but in these substances light acts as if there’s a whole circle of directions that count as “forward in time”. The possibilities are limited only by our imagination and the fundamental laws of physics.

At this point, usually some skeptic comes along and questions whether these things are useful. Indeed, some of these new materials are likely to be useful. In fact a lot of condensed matter physics, while less glamorous than what I have just described, is carried out precisely to develop new improved computer chips—and also technologies like “photonics,” which uses light instead of electrons. The fruits of photonics are ubiquitous—it saturates modern technology, like flat-screen TVs—but physicists are now aiming for more radical applications, like computers that process information using light.

Then typically some other kind of skeptic comes along and asks if condensed matter physics is “just engineering”. Of course the very premise of this question is insulting: there is nothing wrong with engineering! Trying to build useful things is not only important in itself, it’s a great way to raise deep new questions about physics. For example the whole field of thermodynamics, and the idea of entropy, arose in part from trying to build better steam engines. But condensed matter physics is not just engineering. Large portions of it are blue-sky research into the possibilities of matter, like I’ve been talking about here.

These days, the field of condensed matter physics is just as full of rewarding new insights as the study of elementary particles or black holes. And unlike fundamental physics, progress in condensed matter physics is rapid—in part because experiments are comparatively cheap and easy, and in part because there is more new territory to explore.

So, when you see someone bemoaning the woes of fundamental physics, take them seriously—but don’t let it get you down. Just find a good article on condensed matter physics and read that. You’ll cheer up immediately.


Language Complexity (Part 7)

23 March, 2021

David A. Tanzer

Higher complexity classes

In Part 6, we saw a program with quadratic complexity. The collection of all languages that can be decided in O(n^k) time for some k is called P, for polynomial time complexity.

Now let’s consider languages that appear to require time that is exponential in the size of the input, and hence lie outside of P.

Here is a decision problem that is believed to be of this sort. Say you are given a description of a boolean circuit, involving AND, OR and NOT gates, which has N inputs and one output. Is there a combination of input values that causes the output to be True?

It appears that any general decision procedure for this problem must resort to some form of searching all the possible input combinations. For N inputs, that’s on the order of 2^N combinations to be tried. So the computation takes time that is exponential in the number of inputs.

There is a related decision problem for languages. Consider the language of Boolean formulas like (not(X7) and (X1 or not(X2 and X1))). The question is whether there is an assignment of True/False to the variables which satisfies the formula, i.e., which makes it evaluate to True.

Note that each Boolean formula is equivalent to a Boolean circuit, and a satisfying assignment to the variables is tantamount to an input combination which causes the circuit to output True.

Let SAT be the language consisting of Boolean formulas which are satisfiable, i.e., for which a satisfying assignment exists. For example, the formula (X1 and X2) belongs to SAT, because it is satisfied by the assignment X1=True, X2=True. On the other hand, (X1 and not(X1)) has no satisfying assignment, and so it does not belong to SAT.

Apparently, a decider for SAT must end up resorting to trying an exponential number of combinations. Now the number of variables in a formula of length n is O(n), so a brute-force search through all possible assignments means exponential time complexity.
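
To make the brute-force idea concrete, here is a minimal Python sketch. It assumes the formula has already been turned into a Python function of Boolean arguments, a hypothetical encoding chosen for illustration rather than the representation used in this series.

from itertools import product

def is_satisfiable(formula, num_vars):
    # Try all 2^num_vars assignments of True/False to the variables.
    for assignment in product([False, True], repeat=num_vars):
        if formula(*assignment):
            return True
    return False

# (X1 and X2) is satisfiable; (X1 and not(X1)) is not.
print(is_satisfiable(lambda x1, x2: x1 and x2, 2))   # True
print(is_satisfiable(lambda x1: x1 and not x1, 1))   # False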

Could some clever person figure out a better method, which runs in polynomial time? Not if the widely believed conjecture that P != NP holds true.

Reposted from the Signal Beat, Copyright © 2021, All Rights Reserved.


Language Complexity (Part 6)

18 March, 2021

David A. Tanzer

Quadratic complexity

In Part 5 we introduced big O notation for describing linear complexity. Now let’s look at a function with greater than linear complexity:

def square_length(text):
    # compute the square of the length of text
    # FIXME: not the most elegant or efficient approach
    n = len(text)
    counter = 0
    for i in range(n):
        for j in range(n):
            counter = counter + 1
    return counter
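
For instance, a quick usage check of the version above (an example added here, not part of the original post):

print(square_length("hello"))   # prints 25, the square of the length 5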

Here, due to the suboptimal implementation, the number of steps is proportional to the square of the size of the input.

Let f(n) = MaxSteps(\mathit{square\_length}, n).

Then f(n) = O(n^2).

This says that f becomes eventually bounded by some quadratic function. On the other hand, it is not the case that f(n) = O(n): since f(n) grows like n^2, and n^2 exceeds r n as soon as n > r, f(n) will eventually exceed any linear function.

Here is the general definition of big O notation:

Definition.   f(n) = O(g(n)) means that for some r > 0 and n_1, we have that n > n_1 \implies |f(n)| < r g(n).

Any function which is eventually bounded by a linear function must also be eventually bounded by a quadratic function, i.e., linear functions are “smaller than” quadratic functions. So f(n) = O(n) makes a stronger statement than f(n) = O(n^2). Generally, we try to make the strongest statement possible about the complexity of a function.

Reposted from the Signal Beat, Copyright © 2021, All Rights Reserved.