Can We Understand the Standard Model?

16 March, 2021

I’m giving a talk in Latham Boyle and Kirill Krasnov’s Perimeter Institute workshop Octonions and the Standard Model on Monday April 5th at noon Eastern Time.

This talk will be a review of some facts about the Standard Model. Later I’ll give one that says more about the octonions.

Can we understand the Standard Model?

Abstract. 40 years of trying to go beyond the Standard Model hasn’t yet led to any clear success. As an alternative, we could try to understand why the Standard Model is the way it is. In this talk we review some lessons from grand unified theories and also from recent work using the octonions. The gauge group of the Standard Model and its representation on one generation of fermions arise naturally from a process that involves splitting 10d Euclidean space into 4+6 dimensions, but also from a process that involves splitting 10d Minkowski spacetime into 4d Minkowski space and 6 spacelike dimensions. We explain both these approaches, and how to reconcile them.

You can see the slides here, and later a video of my talk will appear. You can register to attend the talk at the workshop’s website.

Here’s a puzzle, just for fun. As I’ll recall in my talk, there’s a normal subgroup of \mathrm{U}(1) \times \mathrm{SU}(2) \times \mathrm{SU}(3) that acts trivially on all known particles, and this fact is very important. The ‘true’ gauge group of the Standard Model is the quotient of \mathrm{U}(1) \times \mathrm{SU}(2) \times \mathrm{SU}(3) by this normal subgroup.

This normal subgroup is isomorphic to \mathbb{Z}_6 and it consists of all the elements

(\zeta^n, (-1)^n, \omega^n )  \in \mathrm{U}(1) \times \mathrm{SU}(2) \times \mathrm{SU}(3)

where

\zeta = e^{2 \pi i / 6}

is my favorite primitive 6th root of unity, -1 is my favorite primitive square root of unity, and

\omega = e^{2 \pi i / 3}

is my favorite primitive cube root of unity. (I’m a primitive kind of guy, in touch with my roots.)

Here I’m turning the numbers (-1)^n into elements of \mathrm{SU}(2) by multiplying them by the 2 \times 2 identity matrix, and turning the numbers \omega^n into elements of \mathrm{SU}(3) by multiplying them by the 3 \times 3 identity matrix.

But in fact there are a bunch of normal subgroups of \mathrm{U}(1) \times \mathrm{SU}(2) \times \mathrm{SU}(3) isomorphic to \mathbb{Z}_6. By my count there are 12 of them! So you have to be careful that you’ve got the right one, when you’re playing with some math and trying to make it match the Standard Model.

Puzzle 1. Are there really exactly 12 normal subgroups of \mathrm{U}(1) \times \mathrm{SU}(2) \times \mathrm{SU}(3) that are isomorphic to \mathbb{Z}_6?

Puzzle 2. Which ones give quotients isomorphic to the true gauge group of the Standard Model, which is \mathrm{U}(1) \times \mathrm{SU}(2) \times \mathrm{SU}(3) modulo the group of elements (\zeta^n, (-1)^n, \omega^n)?

To help you out, it helps to know that every normal subgroup of \mathrm{SU}(2) is a subgroup of its center, which consists of the matrices \pm 1. Similarly, every normal subgroup of \mathrm{SU}(3) is a subgroup of its center, which consists of the matrices 1, \omega and \omega^2. So, the center of \mathrm{U}(1) \times \mathrm{SU}(2) \times \mathrm{SU}(3) is \mathrm{U}(1) \times \mathbb{Z}_2 \times \mathbb{Z}_3.

Here, I believe, are the 12 normal subgroups of \mathrm{U}(1) \times \mathrm{SU}(2) \times \mathrm{SU}(3) isomorphic to \mathbb{Z}_6. I could easily have missed some, or gotten something else wrong!

  1. The group consisting of all elements (1, (-1)^n, \omega^n).
  2. The group consisting of all elements ((-1)^n, 1, \omega^n).
  3. The group consisting of all elements ((-1)^n, (-1)^n, \omega^n).
  4. The group consisting of all elements (\omega^n, (-1)^n, 1).
  5. The group consisting of all elements (\omega^n, (-1)^n, \omega^n).
  6. The group consisting of all elements (\omega^n, (-1)^n, \omega^{-n}).
  7. The group consisting of all elements (\zeta^n , 1, 1).
  8. The group consisting of all elements (\zeta^n , (-1)^n, 1).
  9. The group consisting of all elements (\zeta^n , 1, \omega^n).
  10. The group consisting of all elements (\zeta^n , 1, \omega^{-n}).
  11. The group consisting of all elements (\zeta^n , (-1)^n, \omega^n).
  12. The group consisting of all elements (\zeta^n , (-1)^n, \omega^{-n}).
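If you want to check this count by brute force, here is a small sketch in Python (my addition, not part of the original argument). It verifies that each of the 12 generators listed above has order 6, and that the 12 cyclic subgroups they generate are pairwise distinct:

```python
import cmath

zeta = cmath.exp(2j * cmath.pi / 6)   # primitive 6th root of unity
omega = cmath.exp(2j * cmath.pi / 3)  # primitive cube root of unity

def subgroup(gen):
    # cyclic subgroup generated by a triple of complex numbers, stored as a
    # frozenset of rounded triples so subgroups can be compared for equality
    elems, g = set(), (1 + 0j, 1 + 0j, 1 + 0j)
    for _ in range(12):
        elems.add(tuple(complex(round(z.real, 9), round(z.imag, 9)) for z in g))
        g = tuple(a * b for a, b in zip(g, gen))
    return frozenset(elems)

# the 12 candidate generators, in the same order as the list above
generators = [
    (1, -1, omega), (-1, 1, omega), (-1, -1, omega),
    (omega, -1, 1), (omega, -1, omega), (omega, -1, omega.conjugate()),
    (zeta, 1, 1), (zeta, -1, 1), (zeta, 1, omega), (zeta, 1, omega.conjugate()),
    (zeta, -1, omega), (zeta, -1, omega.conjugate()),
]

subgroups = {subgroup(g) for g in generators}
print(len(subgroups), all(len(s) == 6 for s in subgroups))  # 12 True
```

Of course this only confirms that these 12 subgroups are distinct and isomorphic to \mathbb{Z}_6; Puzzle 1 asks whether there are any others.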

Mathematics in the 21st Century

16 March, 2021

I’m giving a talk in the Topos Institute Colloquium on Thursday March 25, 2021 at 18:00 UTC. That’s 11:00 am Pacific Time.

I’ll say a bit about the developments we might expect if mathematicians could live happily in an ivory tower and never come down for the rest of the century. But my real focus will be on how math will interact with the world outside mathematics.

Mathematics in the 21st Century

Abstract. The climate crisis is part of a bigger transformation in which humanity realizes that the Earth is a finite system and that no physical quantity can grow exponentially forever. This transformation may affect mathematics—and be affected by it—just as dramatically as the agricultural and industrial revolutions. After a review of the problems, we discuss how mathematicians can help make this transformation a bit easier, and some ways in which mathematics may change.

You can see my slides here, and click on links in dark brown for more information. You can watch the talk on YouTube here, either live or recorded later:

You can also watch the talk live on Zoom. Only Zoom lets you ask questions. The password for Zoom can be found on the Topos Institute Colloquium website.

Vineyard Wind

13 March, 2021

Massachusetts law requires that the state get 3,200 megawatts of offshore wind power by 2035. This would be about 20% of the electricity they consume. But so far they get only about 120 megawatts from wind, offshore and onshore. Most of the potential wind energy areas off the Massachusetts coast have not yet been developed.

This week there’s some promising news about Vineyard Wind, a planned offshore wind farm that should generate 800 megawatts when it’s finally running. This project had been stalled for years by the federal government. But no more! It just passed an environmental review. The final decision about whether it can go ahead should be made in April.

• Kelsey Tamborrino, Biden administration gives major push to giant offshore wind farm, Politico, 8 March 2021.

The Interior Department said on Monday it had completed its environmental review for a massive wind farm off the coast of Massachusetts, a key step toward final approval of the long-stalled project that will play a prominent role in President Joe Biden’s effort to expand renewable energy in the U.S.

The completion of the review is a breakthrough for the U.S. offshore wind industry, which has lagged behind its European counterparts and the U.S. onshore industry that has grown rapidly, even during the pandemic. It also marks a key acceleration for the Biden administration that has advocated renewables growth on public lands and waters.

“This is a really significant step forward in the process for moving toward more offshore wind development in the United States,” Bureau of Ocean Energy Management Director Amanda Lefton told reporters.

"This is the day the U.S. offshore wind industry has been anxiously awaiting for years. Today’s announcement provides the regulatory greenlight the industry needs to attract investments and move projects forward," said Liz Burdock, head of the non-profit group Business Network for Offshore Wind.

The proposed 800-megawatt project, called Vineyard Wind, would be located approximately 12 nautical miles off the coast of Martha’s Vineyard and would be the first commercial-scale offshore wind project in the country. Two other small offshore projects have been built off the coasts of Rhode Island and Virginia, but at 30 MW and 12 MW, respectively, are a fraction of the size of the Vineyard Wind project, which needs a final record of decision before construction can begin. That decision could come this spring.

The BOEM analysis’ preferred alternative would allow up to 84 turbines to be installed in 100 of the 106 proposed blocks for the facility. It would prohibit the installation of wind turbine generators in six locations in the northernmost section of the development area and the wind turbine generators would also be required to be arranged in a north-south and east-west orientation, with at least 1 nautical mile between each turbine.

Emerging Researchers in Category Theory

11 March, 2021


Eugenia Cheng is an expert on giving clear, fun math talks.

Now you can take a free class from her on how to give clear, fun math talks!

You need to be a grad student in category theory—and priority will be given to those who aren’t at fancy schools, etc.

Her course is called the Emerging Researchers in Category Theory Virtual Seminar, or Em-Cats for short. You can apply for it here:

The first round of applications is due April 30th. It looks pretty cool, and knowing Eugenia, you’ll get a lot of help on giving talks.


The aims are, broadly:

• Help the next generation of category theorists become wonderful speakers.
• Make use of the virtual possibilities, and give opportunities to graduate students in places where there is not a category theory group or local seminar they can usefully speak in.
• Give an opportunity to graduate students to have a global audience, especially giving more visibility to students from less famous/large groups.
• Make a general opportunity for community among category theorists who are more isolated than those with local groups.
• Make a series of truly intelligible talks, which we hope students and researchers around the world will enjoy and appreciate.

Talk Preparation and Guidelines

Eugenia Cheng has experience with training graduate students in giving talks, from when she ran a similar seminar for graduate students at the University of Sheffield. Everyone did indeed give an excellent talk.

We ask that all Em-Cats speakers are willing to work with Eugenia and follow her advice. The guidelines document outlines what she believes constitutes a good talk. We acknowledge that this is to some extent a matter of opinion, but these are the guidelines for this particular seminar. Eugenia is confident that with her assistance everyone who wishes to do so will be able to give an excellent, accessible talk, and that this will benefit both the speaker and the community.

Language Complexity (Part 5)

10 March, 2021

David A. Tanzer

Big O analysis

Recall our function f(n) from Part 4, which gives the values 2, 13, 14, 25, 26, 37, …

Using ‘big O’ notation, we write f(n) = O(n) to say that f is linearly bounded.

This means that f(n) will eventually become less than some linear function.

As we said, f(n) has a “rough slope” of 6. So f could never be bounded by e.g. the linear function 2n. On the other hand it looks like f should be bounded by any linear function with slope greater than 6, e.g., g(n) = 10n. However, g is not a perfect bound on f, as f(0)=2 > g(0)=0, and f(1)=13 > g(1)=10.

But once n reaches 2 we have that f(n) < g(n). So we say that f is eventually bounded by g.

Now let’s recap.

Definition.   f(n) = O(n) means that for some r > 0 and n_1, we have that n > n_1 \implies |f(n)| < r  n.
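As a quick sanity check (my own sketch, not from the original post), we can compute f(n) directly from its pattern of increments and confirm that r = 10 and n_1 = 1 satisfy the definition:

```python
def f(n):
    # worst-case step count of approximately_linear from Part 4:
    # starts at 2, then grows by 11 at each odd n and by 1 at each even n
    total = 2
    for i in range(1, n + 1):
        total += 11 if i % 2 == 1 else 1
    return total

def g(n):
    return 10 * n  # candidate linear bound with slope 10

print([f(n) for n in range(6)])                  # [2, 13, 14, 25, 26, 37]
print(all(f(n) < g(n) for n in range(2, 1000)))  # True: f eventually bounded by g
```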

Now let’s apply big O to the analysis of this function from the previous post:

def approximately_linear(text):
    counter = 0
    for i = 1 to length(text):
        if i is odd:
            for j = 1 to 10:
                counter = counter + 1
    return counter

The function approximately_linear has definite linear time complexity because:

f(n) \equiv \text{MaxSteps}(\mathit{approximately\_linear}, n) = O(n)

Reposted from the Signal Beat, Copyright © 2021, All Rights Reserved.

Magic Numbers

9 March, 2021

Working in the Manhattan Project, Maria Goeppert Mayer discovered in 1948 that nuclei with certain numbers of protons and/or neutrons are more stable than others. In 1963 she won the Nobel prize for explaining this discovery with her ‘nuclear shell model’.

Nuclei with 2, 8, 20, 28, 50, or 82 protons are especially stable, and also nuclei with 2, 8, 20, 28, 50, 82 or 126 neutrons. Eugene Wigner called these magic numbers, and it’s a fun challenge to explain them.

For starters one can imagine a bunch of identical fermions in a harmonic oscillator potential. In one-dimensional space we have evenly spaced energy levels, each of which holds one state if we ignore spin. I’ll write this as

1, 1, 1, 1, ….

But if we have spin-1/2 fermions, each of these energy levels can hold two spin states, so the numbers double:

2, 2, 2, 2, ….

In two-dimensional space, ignoring spin, the pattern changes to

1, 1+1, 1+1+1, 1+1+1+1, ….

or in other words

1, 2, 3, 4, ….

That is: there’s one state of the lowest possible energy, 2 states of the next energy, and so on. Including spin the numbers double:

2, 4, 6, 8, ….

In three-dimensional space the pattern changes to this if we ignore spin:

1, 1+2, 1+2+3, 1+2+3+4, ….

or in other words

1, 3, 6, 10, ….

So, we’re getting triangular numbers! Here’s a nice picture of these states, drawn by J. G. Moxness:

Including spin the numbers double:

2, 6, 12, 20, ….

So, there are 2 states of the lowest energy, 2+6 = 8 states of the first two energies, 2+6+12 = 20 states of the first three energies, and so on. We’ve got the first 3 magic numbers right! But then things break down: next we get 2+6+12+20 = 40, while the next magic number is just 28.

Wikipedia has a nice explanation of what goes wrong and how to fix it to get the next few magic numbers right:

Nuclear shell model.

We need to take two more effects into account. First, ‘spin-orbit interactions’ decrease the energy of a state when some spins point in the opposite direction from the orbital angular momentum. Second, the harmonic oscillator potential gets flattened out at large distances, so states of high angular momentum have less energy than you’d expect. I won’t attempt to explain the details, since Wikipedia does a pretty good job and I’m going to want breakfast soon. Here’s a picture that cryptically summarizes the analysis:

The notation is old-fashioned, from spectroscopy—you may know it if you’ve studied atomic physics, or chemistry. If you don’t know it, don’t worry about it! The main point is that the energy levels in the simple story I just told change a bit. They don’t change much until we hit the fourth magic number; then 8 of the next 20 energy levels get lowered so much that this magic number is 2+6+12+8 = 28 instead of 2+6+12+20 = 40. Things go on from there.

But here’s something cute: our simplified calculation of the magic numbers actually matches the count of states in each energy level for a four-dimensional harmonic oscillator! In four dimensions, if we ignore spin, the number of states in each energy level goes like this:

1, 1+3, 1+3+6, 1+3+6+10, …

These are the tetrahedral numbers:

Doubling them to take spin into account, we get the first three magic numbers right! Then, alas, we get 40 instead of 28.

But we can understand some interesting features of the world using just the first three magic numbers: 2, 8, and 20.

For example, helium-4 has 2 protons and 2 neutrons, so it’s ‘doubly magic’ and very stable. It’s the second most common substance in the universe! And in radioactive decays, often a helium nucleus gets shot out. Before anyone knew what it was, it was called an ‘alpha particle’… and the name stuck.

Oxygen-16, with 8 protons and 8 neutrons, is also doubly magic. So is calcium-40, with 20 protons and 20 neutrons. This is the heaviest stable nuclide with the same number of protons and neutrons! After that, the repulsive electric charge of the protons needs to be counteracted by a greater number of neutrons.

A wilder example is helium-10, with 2 protons and 8 neutrons. It’s doubly magic, but not stable. It just barely clings to existence, helped by all that magic.

Here’s one thing I didn’t explain yet, which is actually pretty easy. Why is it true that—ignoring the spin—the number of states of the harmonic oscillator in the nth energy level follows this pattern in one-dimensional space:

1, 1, 1, 1, ….

and this pattern in two-dimensional space:

1, 1+1 = 2, 1+1+1 = 3, 1+1+1+1 = 4, …

and this pattern in three-dimensional space:

1, 1+2 = 3, 1+2+3 = 6, 1+2+3+4 = 10, ….

and this pattern in four-dimensional space:

1, 1+3 = 4, 1+3+6 = 10, 1+3+6+10 = 20, ….

and so on?

To see this we need to know two things. First, the allowed energies for a harmonic oscillator in one-dimensional space are equally spaced. So, if we say the lowest energy allowed is 0, by convention, and choose units where the next allowed energy is 1, then the allowed energies are the natural numbers:

0, 1, 2, 3, 4, ….

Second, a harmonic oscillator in n-dimensional space is just like n independent harmonic oscillators in one-dimensional space. In particular, its energy is just the sum of their energies.

So, the number of states of energy E for an n-dimensional oscillator is just the number of ways of writing E as a sum of a list of n natural numbers! The order of the list matters here: writing 3 as 1+2 counts as different than writing it as 2+1.
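Here is a brute-force check of this count, written as a short Python sketch (my addition): enumerate the ordered tuples of natural numbers summing to E.

```python
from itertools import product

def states(E, d):
    # number of ordered d-tuples of natural numbers summing to E, i.e.
    # harmonic-oscillator states of energy E in d dimensions, ignoring spin
    return sum(1 for t in product(range(E + 1), repeat=d) if sum(t) == E)

print([states(E, 2) for E in range(4)])  # [1, 2, 3, 4]
print([states(E, 3) for E in range(4)])  # [1, 3, 6, 10]   triangular numbers
print([states(E, 4) for E in range(4)])  # [1, 4, 10, 20]  tetrahedral numbers

# doubling for spin and summing successive shells in 3d gives 2, 8, 20, 40
shells = [2 * states(E, 3) for E in range(4)]
print([sum(shells[:k + 1]) for k in range(4)])  # [2, 8, 20, 40]
```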

This leads to the patterns we’ve seen. For example, consider a harmonic oscillator in two-dimensional space. It has 1 state of energy 0, namely

0+0
It has 2 states of energy 1, namely

1+0 and 0+1

It has 3 states of energy 2, namely

2+0 and 1+1 and 0+2

and so on.

Next, consider a harmonic oscillator in three-dimensional space. This has 1 state of energy 0, namely

0+0+0
It has 3 states of energy 1, namely

1+0+0 and 0+1+0 and 0+0+1

It has 6 states of energy 2, namely

2+0+0 and 1+1+0 and 1+0+1 and 0+2+0 and 0+1+1 and 0+0+2

and so on. You can check that we’re getting triangular numbers: 1, 3, 6, etc. The easiest way is to note that to get a state of energy E, the first of the three independent oscillators can have any natural number j from 0 to E as its energy, and then there are E - j + 1 ways to choose the energies of the other two oscillators so that they sum to E - j. This gives a total of

(E+1) + E + (E-1) + \cdots + 1

states, and this is a triangular number.

The pattern continues in a recursive way: in four-dimensional space the same sort of argument gives us tetrahedral numbers because these are sums of triangular numbers, and so on. We’re getting the diagonals of Pascal’s triangle, otherwise known as binomial coefficients.

We often think of the binomial coefficient

\displaystyle{\binom{n}{k} }

as the number of ways of choosing a k-element subset of an n-element set. But here we are seeing it’s the number of ways of choosing an ordered (k+1)-tuple of natural numbers that sum to n - k. You may enjoy finding a quick proof that these two things are equal!
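Before hunting for the proof, you can let a computer make the claim plausible. This sketch (my addition) checks that \binom{n}{k} equals the number of ordered (k+1)-tuples of natural numbers summing to n - k:

```python
from itertools import product
from math import comb

def tuples_summing_to(m, parts):
    # count ordered tuples of `parts` natural numbers summing to m
    return sum(1 for t in product(range(m + 1), repeat=parts) if sum(t) == m)

# binomial(n, k) = number of ordered (k+1)-tuples of naturals summing to n - k
ok = all(comb(n, k) == tuples_summing_to(n - k, k + 1)
         for n in range(1, 8) for k in range(n + 1))
print(ok)  # True
```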


Hypernuclei

6 March, 2021

A baryon is a particle made of 3 quarks. The most familiar are the proton, which consists of two up quarks and a down quark, and the neutron, made of two downs and an up. Baryons containing strange quarks were discovered later, since the strange quark is more massive and soon decays to an up or down quark. A hyperon is a baryon that contains one or more strange quarks, but none of the still more massive quarks.

The first hyperon to be found was the Λ, or lambda baryon. It’s made of an up quark, a down quark and a strange quark. You can think of it as a ‘heavy neutron’ in which one down quark was replaced by a strange quark. The strange quark has the same charge as the down, so like the neutron the Λ is neutral.

The Λ baryon was discovered in October 1950 by V. D. Hopper and S. Biswas of the University of Melbourne: these particles were produced naturally when cosmic rays hit the upper atmosphere, and they were detected in photographic emulsions flown in a balloon. Imagine discovering a new elementary particle using a balloon! Those were the good old days.

The Λ has a mean life of just 0.26 nanoseconds, but that’s actually a long time in this business. The strange quark can only decay using the weak force, which, as its name suggests, is weak—so this happens slowly compared to decays involving the electromagnetic or strong forces.

For comparison, the Δ+ baryon is made of two ups and a down, just like a proton, but it has spin 3/2 instead of spin 1/2. So, you can think of it as a ‘fast-spinning proton’. It decays very quickly via the strong force: it has a mean life of just 5.6 × 10^{-23} seconds! When you get used to things like this, a nanosecond seems like an eternity.

The unexpectedly long lifetime of the Λ and some other particles was considered ‘strange’, and this eventually led people to dream up a quantity called ‘strangeness’, which is conserved by the strong and electromagnetic interactions but not by the weak interaction, so that strange particles decay on time scales of roughly nanoseconds. In 1962 Murray Gell-Mann realized that strangeness is simply the number of strange antiquarks in a particle, minus the number of strange quarks.

So, what’s a ‘hypernucleus’?

A hypernucleus is a nucleus containing one or more hyperons along with the usual protons and neutrons. Since nuclei are held together by the strong force, they do things on time scales of 10^{-23} seconds—so an extra hyperon, which lasts for many billion times longer, can be regarded as a stable particle of a new kind when you’re doing nuclear physics! It lets you build new kinds of nuclei.

One well-studied hypernucleus is the hypertriton. Remember, an ordinary triton consists of a proton and two neutrons: it’s the nucleus of tritium, the radioactive isotope of hydrogen used in hydrogen bombs, also known as hydrogen-3. To get a hypertriton, we replace one of the neutrons with a Λ. So, it consists of a proton, a neutron, and a Λ.

In a hypertriton, the Λ behaves almost like a free particle. So, the lifetime of a hypertriton should be almost the same as that of a Λ by itself. Remember, the lifetime of the Λ is 0.26 nanoseconds. The lifetime of the hypertriton is a bit less: 0.24 nanoseconds. Predicting this lifetime, and even measuring it accurately, has taken a lot of work:

• Hypertriton lifetime puzzle nears resolution, CERN Courier, 20 December 2019.

Hypernuclei get more interesting when they have more protons and neutrons. In a nucleus the protons form ‘shells’: due to the Pauli exclusion principle, you can only put one proton in each state. The neutrons form their own shells. So the situation is a bit like chemistry, where the electrons form shells, but now you have two kinds of shells. For example in helium-4 we have two protons, one spin-up and one spin-down, in the lowest energy level, also known as the first shell—and also two neutrons in their lowest energy level.

If you add an extra neutron to your helium-4, to get helium-5, it has to occupy a higher energy level. But if you add a hyperon, since it’s different from both the proton and neutron, it too can occupy the lowest energy level.

Indeed, no matter how big your nucleus is, if you add a hyperon it goes straight to the lowest energy level! You can roughly imagine it as falling straight to the center of the nucleus—though everything is quantum-mechanical, so these mental images have to be taken with a grain of salt.

One reason for studying hypernuclei is that in some neutron stars, the inner core may contain hyperons! The point is that by weaseling around the Pauli exclusion principle, we can get more particles in low-energy states, producing dense forms of nuclear matter that have less energy. But nobody knows if this ‘strange nuclear matter’ is really stable. So this is an active topic of research. Hypernuclei are one of the few ways to learn useful information about this using experiments in the lab.

For a lot more, try this:

• A. Gal, E. V. Hungerford and D. J. Millener, Strangeness in nuclear physics, Reviews of Modern Physics 88 (2016), 035004.

You can see some hyperons in the baryon octet, which consists of spin-1/2 baryons made of up, down and strange quarks:

and the baryon decuplet which consists of spin-3/2 baryons made of up, down and strange quarks:

In these charts I_3 is proportional to the number of up quarks minus the number of down quarks, Q is the electric charge, and S is the strangeness.

Gell-Mann and other physicists realized that mathematically, the baryon octet and the baryon decuplet are both irreducible representations of SU(3). But that’s another tale!

Physics History Puzzle

3 March, 2021

Which famous physicist once gave a lecture attended by a secret agent with a pistol, who would kill him if he said the wrong thing?

Language Complexity (Part 4)

2 March, 2021

David A. Tanzer

Summarizing computational complexity

In Part 3 we defined, for each program P, a detailed function P'(n) that gives the worst case number of steps that P must perform when given some input of size n. Now we want to summarize P into general classes, such as linear, quadratic, etc.

What’s in a step?

But we should clarify something before proceeding: what is meant by a ‘step’ of a program? Do we count it in units of machine language, or in terms of higher level statements? Before, we said that each iteration of the loop for the ALL_A decision procedure counted as a step. But in a more detailed view, each iteration of the loop includes multiple steps: comparing the input character to ‘A’, incrementing a counter, performing a test.

All these interpretations are plausible. Fortunately, provided that the definition of a program step is ‘reasonable’, all of them will lead to the same general classification of the program’s time complexity. Here, by reasonable I mean that the definition of a step should be such that, on a given computer, there is an absolute bound on the amount of clock time needed for the processor to complete one step.

Approximately linear functions

The classification of programs into complexity classes such as linear, quadratic, etc., is a generalization which doesn’t require that the time complexity be exactly linear, quadratic, etc. For an example, consider the following code:

def approximately_linear(text):
    # perform a silly computation
    counter = 0
    for i = 1 to length(text):
        if i is odd:
            for j = 1 to 10:
                counter = counter + 1
    return counter

Here are the number of steps it performs, as a function of the input length:

f(0) = 2
f(1) = 13
f(2) = 14
f(3) = 25
f(4) = 26
f(5) = 37

The value increases alternately by 11 and then by 1. Since it increases by 12 as n increases by 2, we could say that f is “approximately linear,” with slope “roughly equal” to 6. But in fine detail, the graph looks like a sawtooth.
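To reproduce the table above, here is one plausible step accounting, written as runnable Python (my own sketch, not part of the original post): 2 fixed steps, plus 1 step per outer-loop iteration, plus 10 more on odd iterations.

```python
def steps(n):
    # step count for approximately_linear on input of length n, under one
    # plausible accounting: 2 fixed steps, 1 per outer-loop iteration,
    # and 10 extra for the inner loop when i is odd
    count = 2
    for i in range(1, n + 1):
        count += 1
        if i % 2 == 1:
            count += 10
    return count

print([steps(n) for n in range(6)])  # [2, 13, 14, 25, 26, 37]
```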

Soon, we will explain how this function gets definitively classified as having linear complexity.

Appendix: Python machines versus Turing machines

Here we are programming and measuring complexity on a Python-like machine, rather than a pure Turing machine. This surfaces, for example, in the fact that without further ado we called a function length(text) to count the number of characters, and will regard this as a single step of the computation. On a true Turing machine, however, counting the length of the string takes N steps, as this operation requires that the tape be advanced one character at a time until the end of the string is detected.

This is a point which turns out not to substantially affect the complexity classification of a language. Assuming that steps are counted reasonably, any optimal decision procedure for a language of strings, whether written in Turing machine language, Python, C# or what have you, will end up with the same complexity classification.

The length function in Python really does take a bounded amount of time, so it is fair to count it as a single step. The crux of this matter is that, in a higher level language, a string is more than a sequence of characters, as it is a data structure containing length information as well. So there is order N work that is implied just by the existence of a string. But this can be folded into the up-front cost of merely reading the input, which is a general precondition for a language decider.

But, you may ask, what about languages which can be decided without even reading all of the input? For example, the language of strings that begin with the prefix “abc”. Ok, so you got me.

Still, as a practical matter, anything with linear or sub-linear complexity can be considered excellent and simple. The real challenges have to do with complexity which is greater than linear, and which represents a real practical issue: software performance. So, for all intents and purposes, we may treat any implied order N costs as essentially zero, as long as they can be incurred on a one-time, up-front basis, e.g., the order N work involved in constructing a string object.

Reposted from the Signal Beat, Copyright © 2021, All Rights Reserved.

Theoretical Physics in the 21st Century

1 March, 2021

I gave a talk at the Zürich Theoretical Physics Colloquium for Sustainability Week 2021. I was excited to get a chance to speak both about the future of theoretical physics and the climate crisis.

You can see a video of my talk, and also my slides: links in blue on my slides lead to more information.

Title: Theoretical Physics in the 21st Century.

Time: Monday, 8 March 2021, 15:45 UTC (that is, Greenwich Mean Time).

Abstract: The 20th century was the century of physics. What about the 21st? Though progress on some old problems is frustratingly slow, exciting new questions are emerging in condensed matter physics, nonequilibrium thermodynamics and other fields. And most of all, the 21st century is the dawn of the Anthropocene, in which we will adapt to the realities of life on a finite-sized planet. How can physicists help here?

Hosts: Niklas Beisert, Anna Knörr.