Hypergraph Categories of Cospans

11 March, 2018


Two students in the Applied Category Theory 2018 school wrote a blog article about Brendan Fong’s theory of decorated cospans:

• Jonathan Lorand and Fabrizio Genovese, Hypergraph categories of cospans, The n-Category Café, 28 February 2018.

Jonathan Lorand is a math grad student at the University of Zurich working on symplectic and Poisson geometry with Alberto Cattaneo. Fabrizio Genovese is a grad student in computer science at the University of Oxford, working with Bob Coecke and Dan Marsden on categorical quantum mechanics, quantum field theory and the like.

Brendan was my student, so it’s nice to see newer students writing a clear summary of some of his thesis work, namely this paper:

• Brendan Fong, Decorated cospans, Theory and Applications of Categories 30 (2015), 1096–1120.

I wrote a summary of it myself, so I won’t repeat it here:

• John Baez, Decorated cospans, Azimuth, 1 May 2015.

What’s especially interesting to me is that both Jonathan and Fabrizio know some mathematical physics, and they’re part of a group who will be working with me on some problems as part of the Applied Category Theory 2018 school! Brendan and Blake Pollard and I used symplectic geometry and decorated cospans to study the black-boxing of electrical circuits and Markov processes… maybe we should try to go further with that project!

An Upper Bound on Reidemeister Moves

9 March, 2018


Graham’s number is famous for being the largest number to have ever shown up in a proof. The true story is more complicated, as I discovered by asking Graham. But here’s a much smaller but still respectable number that showed up in knot theory:

2 \uparrow \uparrow (10 \uparrow 1,000,000)

It’s 2 to the 2 to the 2 to the 2… where we go on for 10^{1,000,000} times. It appears in a paper by Coward and Lackenby, posted in 2011 and published in 2014. It shows up in their upper bound on how many steps it can take to wiggle around one picture of a link until you get another picture of the same link.

This upper bound is ridiculously large. But because this upper bound is computable, it follows that we can decide, in a finite amount of time, whether two pictures show the same link or not. We know when to give up. This had previously been unknown!

Here’s the paper:

• Alexander Coward and Marc Lackenby, An upper bound on Reidemeister moves, American Journal of Mathematics 136 (2014), 1023–1066.

Let me spell out the details a tiny bit more.

A link is a collection of circles embedded in 3-dimensional Euclidean space. We count two links as ‘the same’, or ‘ambient isotopic’, if we can carry one to another by a smooth motion where no circle ever crosses another. (This can be made more precise.) We can draw links in the plane:

and we can get between any two diagrams of the same link by distorting the plane and also doing a sequence of ‘Reidemeister moves’. There are 3 kinds of Reidemeister moves, shown above and also here:

Coward and Lackenby found an upper bound on how many Reidemeister moves it takes to get between two diagrams of the same link. Let n be the total number of crossings in both diagrams. Then we need at most 2 to the 2 to the 2 to the 2 to the 2… Reidemeister moves, where the number of 2’s in this tower is c^n, where c = 10^{1,000,000}.
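To get a feel for how fast such towers grow, here’s a quick sketch in Python (my own illustration, nothing from the paper):

    # A tower of k twos: tower(k) = 2^(2^(...^2)), i.e. 2↑↑k.
    def tower(k):
        result = 1
        for _ in range(k):
            result = 2 ** result
        return result

    for k in range(1, 6):
        t = tower(k)
        print(k, t if t < 10**6 else '%d digits' % len(str(t)))
    # 1 2
    # 2 4
    # 3 16
    # 4 65536
    # 5 19729 digits

Already tower(6) is hopelessly too big to print, and the tower in Coward and Lackenby’s bound has c^n twos.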

It’s fun to look at the paper and see how they get such a terrible upper bound. I’m sure they could have done much better with a bit of work, but that wasn’t the point. All they wanted was a computable upper bound.

Subsequently, Lackenby proved a polynomial upper bound on how many Reidemeister moves it takes to reduce a diagram of the unknot to a circle, like this:

If the original diagram has n crossings, he proved it takes at most (236 n)^{11} Reidemeister moves. Because this is a polynomial, it follows that recognizing whether a knot diagram is a diagram of the unknot is in NP. As far as I know, it remains an open question whether this problem is in P.

• Marc Lackenby, A polynomial upper bound on Reidemeister moves, Annals of Mathematics 182 (2015), 491–564.
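For contrast, the polynomial bound is tame enough to evaluate directly (again, just my own quick check):

    # Lackenby's bound (236 n)^11 for a few crossing numbers n:
    for n in (1, 10, 100):
        print(n, '%.3g' % float((236 * n) ** 11))
    # 1 1.26e+26
    # 10 1.26e+37
    # 100 1.26e+48

Enormous, but you can actually write these numbers down, unlike the towers above.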

As a challenge, can you tell if this diagram depicts the unknot?

If you get stuck, read Lackenby’s paper!

To learn more about any of the pictures here, click on them. For example, this unknotting process:

showed up in this paper:

• Louis Kauffman and Sofia Lambropoulou, Hard unknots and collapsing tangles, in Introductory Lectures On Knot Theory: Selected Lectures Presented at the Advanced School and Conference on Knot Theory and Its Applications to Physics and Biology, 2012, pp. 187–247.

I bumped into Coward and Lackenby’s theorem here:

• Evelyn Lamb, Laura Taalman’s Favorite Theorem, Scientific American, 8 March 2018.

It says:

Taalman’s favorite theorem gives a way to know for sure whether a knot is equivalent to the unknot, a simple circle. It shows that if the knot is secretly the unknot, there is an upper bound, based on the number of crossings in a diagram of the knot, to the number of Reidemeister moves you will have to do to reduce the knot to a circle. If you try every possible sequence of moves that is at least that long and your diagram never becomes a circle, you know for sure that the knot is really a knot and not an unknot. (Say that ten times fast.)

Taalman loves this theorem not only because it was the first explicit upper bound for the question but also because of how extravagant the upper bound is. In the original paper proving this theorem, Joel Hass and Jeffrey Lagarias got a bound of

2^{n 10^{11}}

where n is the number of crossings in the diagram. That’s 2 to the n hundred billionth power. Yikes! When you try to put that number into the online calculator Wolfram Alpha, even for a very small number of crossings, the calculator plays dead.

Dr. Taalman also told us about another paper, this one by Alexander Coward and Marc Lackenby, that bounds the number of Reidemeister moves needed to show whether any two given knot diagrams are equivalent. That bound involves towers of powers that also get comically large incredibly quickly. They’re too big for me to describe how big they are.

So, I wanted to find out how big they are!

If you want a more leisurely introduction to the Hass–Lagarias result, try the podcast available at Evelyn Lamb’s article, or this website:

• Kevin Knudson, My favorite theorem: Laura Taalman, Episode 14.

Coarse-Graining Open Markov Processes

4 March, 2018

Kenny Courser and I have been working hard on this paper for months:

• John Baez and Kenny Courser, Coarse-graining open Markov processes.

It may be almost done. So, it would be great if people here could take a look and comment on it! It’s a cool mix of probability theory and double categories. I’ve posted a similar but non-isomorphic article on the n-Category Café, where people know a lot about double categories. But maybe some of you here know more about Markov processes!

‘Coarse-graining’ is a standard method of extracting a simple Markov process from a more complicated one by identifying states. We extend coarse-graining to open Markov processes. An ‘open’ Markov process is one where probability can flow in or out of certain states called ‘inputs’ and ‘outputs’. One can build up an ordinary Markov process from smaller open pieces in two basic ways:

• composition, where we identify the outputs of one open Markov process with the inputs of another,


• tensoring, where we set two open Markov processes side by side.

A while back, Brendan Fong, Blake Pollard and I showed that these constructions make open Markov processes into the morphisms of a symmetric monoidal category:

• John Baez, A compositional framework for Markov processes, Azimuth, 12 January 2016.

Here Kenny and I go further by constructing a symmetric monoidal double category where the 2-morphisms include ways of coarse-graining open Markov processes. We also extend the previously defined ‘black-boxing’ functor from the category of open Markov processes to this double category.

But before you dive into the paper, let me explain all this stuff a bit more….

Very roughly speaking, a ‘Markov process’ is a stochastic model describing a sequence of transitions between states in which the probability of a transition depends only on the current state. But the only Markov processes we’ll talk about are continuous-time Markov processes with a finite set of states. These can be drawn as labeled graphs:

where the number labeling each edge describes the probability per time of making a transition from one state to another.

An ‘open’ Markov process is a generalization in which probability can also flow in or out of certain states designated as ‘inputs’ and ‘outputs’:

Open Markov processes can be seen as morphisms in a category, since we can compose two open Markov processes by identifying the outputs of the first with the inputs of the second. Composition lets us build a Markov process from smaller open parts—or conversely, analyze the behavior of a Markov process in terms of its parts.
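Here’s a minimal computational sketch of composition in the simplest case, where the outputs of one process are glued bijectively to the inputs of the other. I’m representing each process by its finite set of states and its matrix of transition rates (the ‘Hamiltonian’ described later in this post); gluing the transition graphs side by side amounts to transporting both rate matrices to the glued state set and adding them. The names here are my own, and the general definition in our paper uses pushouts of finite sets:

    import numpy as np

    def compose(H1, out1, H2, in2):
        """Glue two open Markov processes, identifying output out1[k]
        of the first process with input in2[k] of the second, and
        adding the transported rate matrices."""
        n1, n2 = H1.shape[0], H2.shape[0]
        keep2 = [j for j in range(n2) if j not in in2]
        n = n1 + len(keep2)
        m1 = {i: i for i in range(n1)}            # states of process 1
        m2 = {j: n1 + k for k, j in enumerate(keep2)}
        for o, i in zip(out1, in2):
            m2[i] = m1[o]                         # identified states
        H = np.zeros((n, n))
        for a in range(n1):
            for b in range(n1):
                H[m1[a], m1[b]] += H1[a, b]
        for a in range(n2):
            for b in range(n2):
                H[m2[a], m2[b]] += H2[a, b]
        return H

The inputs of the composite are the inputs of the first process, and its outputs are the outputs of the second.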

In this paper, Kenny and I extend the study of open Markov processes to include coarse-graining. ‘Coarse-graining’ is a widely studied method of simplifying a Markov process by mapping its set of states X onto some smaller set X' in a manner that respects the dynamics. Here we introduce coarse-graining for open Markov processes. And we show how to extend this notion to the case of maps p: X \to X' that are not surjective, obtaining a general concept of morphism between open Markov processes.
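As a toy illustration of the simplest situation, here’s the classical ‘strong lumping’ construction in Python: a surjection p : X → X' respects the dynamics when the column sums of H over each fiber of p depend only on the fiber of the starting state. (This is just to convey the flavor; the notion of morphism in our paper is more flexible than strong lumpability.)

    import numpy as np

    def lump(H, p, m):
        """Coarse-grain the Hamiltonian H along the surjection
        p : X -> X', given as a list with p[i] in {0, ..., m-1}.
        Raises ValueError if H is not strongly lumpable along p."""
        n = len(p)
        P = np.zeros((m, n))
        for i in range(n):
            P[p[i], i] = 1.0      # pushforward of probability distributions
        PH = P @ H
        Hp = np.zeros((m, m))
        for jp in range(m):
            fiber = [j for j in range(n) if p[j] == jp]
            cols = PH[:, fiber]
            if not np.allclose(cols, cols[:, :1]):
                raise ValueError('not strongly lumpable along p')
            Hp[:, jp] = cols[:, 0]
        return Hp                 # then Hp @ P equals P @ H

When this works, pushing forward a probability distribution that obeys the master equation for H gives one that obeys the master equation for the coarse-grained Hamiltonian.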

Since open Markov processes are already morphisms in a category, it is natural to treat morphisms between them as morphisms between morphisms, or ‘2-morphisms’. We can do this using double categories!

Double categories were first introduced by Ehresmann around 1963. Since then, they’ve been used in topology and other branches of pure math—but more recently they’ve been used to study open dynamical systems and open discrete-time Markov chains. So, it should not be surprising that they are also useful for open Markov processes.

A 2-morphism in a double category looks like this:
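In bare LaTeX notation, it’s a square:

\begin{array}{ccc}
A & \xrightarrow{\;M\;} & B \\
{\scriptstyle f}\,\big\downarrow & \Downarrow \alpha & \big\downarrow\,{\scriptstyle g} \\
C & \xrightarrow{\;N\;} & D
\end{array}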

While a mere category has only objects and morphisms, here we have a few more types of things. We call A, B, C and D ‘objects’, f and g ‘vertical 1-morphisms’, M and N ‘horizontal 1-cells’, and \alpha a ‘2-morphism’. We can compose vertical 1-morphisms to get new vertical 1-morphisms and compose horizontal 1-cells to get new horizontal 1-cells. We can compose the 2-morphisms in two ways: horizontally by setting squares side by side, and vertically by setting one on top of the other. The ‘interchange law’ relates vertical and horizontal composition of 2-morphisms.

In a ‘strict’ double category all these forms of composition are associative. In a ‘pseudo’ double category, horizontal 1-cells compose in a weakly associative manner: that is, the associative law holds only up to an invertible 2-morphism, the ‘associator’, which obeys a coherence law. All this is just a sketch; for details on strict and pseudo double categories try the paper by Grandis and Paré.

Kenny and I construct a double category \mathbb{M}\mathbf{ark} with:

  1. finite sets as objects,
  2. maps between finite sets as vertical 1-morphisms,
  3. open Markov processes as horizontal 1-cells,
  4. morphisms between open Markov processes as 2-morphisms.

I won’t give the definition of item 4 here; you gotta read our paper for that! Composition of open Markov processes is only weakly associative, so \mathbb{M}\mathbf{ark} is a pseudo double category.

This is how our paper goes. In Section 2 we define open Markov processes and steady state solutions of the open master equation. In Section 3 we introduce coarse-graining first for Markov processes and then open Markov processes. In Section 4 we construct the double category \mathbb{M}\mathbf{ark} described above. We prove this is a symmetric monoidal double category in the sense defined by Mike Shulman. This captures the fact that we can not only compose open Markov processes but also ‘tensor’ them by setting them side by side.

For example, if we compose this open Markov process:

with the one I showed you before:

we get this open Markov process:

But if we tensor them, we get this:

As compared with an ordinary Markov process, the key new feature of an open Markov process is that probability can flow in or out. To describe this we need a generalization of the usual master equation for Markov processes, called the ‘open master equation’.

This is something that Brendan, Blake and I came up with earlier. In this equation, the probabilities at input and output states are arbitrary specified functions of time, while the probabilities at other states obey the usual master equation. As a result, the probabilities are not necessarily normalized. We interpret this by saying probability can flow either in or out at both the input and the output states.

If we fix constant probabilities at the inputs and outputs, there typically exist solutions of the open master equation with these boundary conditions that are constant as a function of time. These are called ‘steady states’. Often these are nonequilibrium steady states, meaning that there is a nonzero net flow of probabilities at the inputs and outputs. For example, probability can flow through an open Markov process at a constant rate in a nonequilibrium steady state. It’s like a bathtub where water is flowing in from the faucet, and flowing out of the drain, but the level of the water isn’t changing.

Brendan, Blake and I studied the relation between probabilities and flows at the inputs and outputs that holds in steady state. We called the process of extracting this relation from an open Markov process ‘black-boxing’, since it gives a way to forget the internal workings of an open system and remember only its externally observable behavior. We showed that black-boxing is compatible with composition and tensoring. In other words, we showed that black-boxing is a symmetric monoidal functor.
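Here’s a small numerical sketch of this (my own toy example, with made-up rates and my own sign convention): fix the probabilities at the boundary states, solve for the internal probabilities that make the master equation vanish there, and read off the net flows needed at the boundary.

    import numpy as np

    # A made-up infinitesimal stochastic Hamiltonian on 4 states,
    # with state 0 as input, state 3 as output, and states 1, 2 internal.
    H = np.array([[-2.0,  1.0,  0.0,  0.0],
                  [ 2.0, -3.0,  1.0,  0.0],
                  [ 0.0,  2.0, -2.0,  1.0],
                  [ 0.0,  0.0,  1.0, -1.0]])
    boundary, internal = [0, 3], [1, 2]

    def steady_state(H, boundary, internal, p_boundary):
        """Hold the boundary probabilities fixed and solve for internal
        probabilities making (H p)_i = 0 at every internal state i."""
        A = H[np.ix_(internal, internal)]
        b = -H[np.ix_(internal, boundary)] @ p_boundary
        p = np.zeros(H.shape[0])
        p[boundary] = p_boundary
        p[internal] = np.linalg.solve(A, b)
        return p

    p = steady_state(H, boundary, internal, np.array([0.3, 0.1]))
    flows = -(H @ p)[boundary]   # net inflow needed at each boundary state
    print(p, flows)

Black-boxing remembers exactly the relation consisting of all such pairs of boundary probabilities and boundary flows.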

In Section 5 of our new paper, Kenny and I show that black-boxing is compatible with morphisms between open Markov processes. To make this idea precise, we prove that black-boxing gives a map from the double category \mathbb{M}\mathbf{ark} to another double category, called \mathbb{L}\mathbf{inRel}, which has:

  1. finite-dimensional real vector spaces U,V,W,\dots as objects,
  2. linear maps f : V \to W as vertical 1-morphisms from V to W,
  3. linear relations R \subseteq V \oplus W as horizontal 1-cells from V to W,
  4. squares

    obeying (f \oplus g)R \subseteq S as 2-morphisms.

Here a ‘linear relation’ from a vector space V to a vector space W is a linear subspace R \subseteq V \oplus W. Linear relations can be composed in the usual way we compose relations. The double category \mathbb{L}\mathbf{inRel} becomes symmetric monoidal using direct sum as the tensor product, but unlike \mathbb{M}\mathbf{ark} it is a strict double category: that is, composition of linear relations is associative.
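Linear relations and their composition are quite concrete, which gives some feeling for \mathbb{L}\mathbf{inRel}. Here’s a sketch in Python (the representation and names are my own) where a relation R \subseteq V \oplus W is stored as a matrix whose column space is R:

    import numpy as np
    from scipy.linalg import null_space

    def compose(A, dimV, B, dimW):
        """Compose linear relations R from V to W and S from W to U.

        R is the column space of A, a matrix with dimV + dimW rows;
        S is the column space of B, a matrix with dimW + dimU rows.
        Returns a matrix whose column space is the composite SR,
        a linear subspace of V (+) U."""
        A_V, A_W = A[:dimV], A[dimV:]
        B_W, B_U = B[:dimW], B[dimW:]
        # (v, u) lies in the composite iff some coefficient vectors x, y
        # satisfy A_W x = B_W y; these pairs (x, y) form a null space:
        K = null_space(np.hstack([A_W, -B_W]))
        x, y = K[:A.shape[1]], K[A.shape[1]:]
        return np.vstack([A_V @ x, B_U @ y])

    # Sanity check: composing the graphs of two linear maps f and g
    # should give the graph of the composite map gf.
    f = np.array([[1.0, 2.0]])             # f: R^2 -> R
    g = np.array([[3.0], [4.0]])           # g: R -> R^2
    graph_f = np.vstack([np.eye(2), f])    # subspace of R^2 (+) R
    graph_g = np.vstack([np.eye(1), g])    # subspace of R (+) R^2
    print(compose(graph_f, 2, graph_g, 1))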

Our main result, Theorem 5.5, says that black-boxing gives a symmetric monoidal double functor

\blacksquare : \mathbb{M}\mathbf{ark} \to \mathbb{L}\mathbf{inRel}

As you’ll see if you check out our paper, there’s a lot of nontrivial content hidden in this short statement! The proof requires a lot of linear algebra and also a reasonable amount of category theory. For example, we needed this fact: if you’ve got a commutative cube in the category of finite sets:

and the top and bottom faces are pushouts, and the two left-most faces are pullbacks, and the two left-most arrows on the bottom face are monic, then the two right-most faces are pullbacks. I think it’s cool that this is relevant to Markov processes!

Finally, in Section 6 we state a conjecture. First we use a technique invented by Mike Shulman to construct symmetric monoidal bicategories \mathbf{Mark} and \mathbf{LinRel} from the symmetric monoidal double categories \mathbb{M}\mathbf{ark} and \mathbb{L}\mathbf{inRel}. We conjecture that our black-boxing double functor determines a functor between these symmetric monoidal bicategories. This has got to be true. However, double categories seem to be a simpler framework for coarse-graining open Markov processes.

Finally, let me talk a bit about some related work. As I already mentioned, Brendan, Blake and I constructed a symmetric monoidal category where the morphisms are open Markov processes. However, we formalized such Markov processes in a slightly different way than Kenny and I do now. We defined a Markov process to be one of the pictures I’ve been showing you: a directed multigraph where each edge is assigned a positive number called its ‘rate constant’. In other words, we defined it to be a diagram

where X is a finite set of vertices or ‘states’, E is a finite set of edges or ‘transitions’ between states, the functions s,t : E \to X give the source and target of each edge, and r : E \to (0,\infty) gives the rate constant for each transition. We explained how from this data one can extract a matrix of real numbers (H_{i j})_{i,j \in X} called the ‘Hamiltonian’ of the Markov process, with two properties that are familiar in this game:

H_{i j} \geq 0 if i \neq j,

\sum_{i \in X} H_{i j} = 0 for all j \in X.

A matrix with these properties is called ‘infinitesimal stochastic’, since these conditions are equivalent to \exp(t H) being stochastic for all t \ge 0.

In our new paper, Kenny and I skip the directed multigraphs and work directly with the Hamiltonians! In other words, we define a Markov process to be a finite set X together with an infinitesimal stochastic matrix (H_{ij})_{i,j \in X}. This allows us to work more directly with the Hamiltonian and the all-important ‘master equation’

\displaystyle{    \frac{d p(t)}{d t} = H p(t)  }

which describes the evolution of a time-dependent probability distribution

p(t) : X \to \mathbb{R}
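Here’s a quick sketch of all this with made-up data: extract H from a labeled graph as in our older formalism, check that it’s infinitesimal stochastic, and solve the master equation with a matrix exponential. The states and rates are invented for illustration:

    import numpy as np
    from scipy.linalg import expm

    # A made-up Markov process with states X = {0, 1, 2} and three
    # transitions: 0 -> 1 at rate 2, 1 -> 2 at rate 1, 2 -> 0 at rate 3.
    edges = [(0, 1, 2.0), (1, 2, 1.0), (2, 0, 3.0)]  # (source, target, rate)
    n = 3

    # Extract the Hamiltonian: H[i][j] for i != j is the total rate of
    # transitions from j to i, and the diagonal makes each column sum to 0.
    H = np.zeros((n, n))
    for s, t, r in edges:
        H[t, s] += r
        H[s, s] -= r

    assert np.allclose(H.sum(axis=0), 0)           # columns sum to zero
    assert (H - np.diag(np.diag(H)) >= 0).all()    # off-diagonal entries >= 0

    # Solve the master equation: p(t) = exp(tH) p(0).
    p0 = np.array([1.0, 0.0, 0.0])
    for t in (0.0, 0.5, 2.0):
        pt = expm(t * H) @ p0
        print(t, pt, pt.sum())    # total probability stays 1

Since this is an ordinary Markov process rather than an open one, \exp(tH) is stochastic and total probability is conserved; in the open case the flows at the inputs and outputs spoil this, as discussed above.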

Clerc, Humphrey and Panangaden have constructed a bicategory with finite sets as objects, ‘open discrete labeled Markov processes’ as morphisms, and ‘simulations’ as 2-morphisms. They use the word ‘open’ in a pretty similar way to me. But their open discrete labeled Markov processes are also equipped with a set of ‘actions’ which represent interactions between the Markov process and the environment, such as an outside entity acting on a stochastic system. A ‘simulation’ is then a function between the state spaces that maps the inputs, outputs and set of actions of one open discrete labeled Markov process to the inputs, outputs and set of actions of another.

Another compositional framework for Markov processes was discussed by de Francesco Albasini, Sabadini and Walters. They constructed an algebra of ‘Markov automata’. A Markov automaton is a family of matrices with non-negative real coefficients that is indexed by elements of a binary product of sets, where one set represents the ‘signals on the left interface’ of the Markov automaton and the other the signals on the right interface.

So, double categories are gradually invading the theory of Markov processes… as part of the bigger trend toward applied category theory. They’re natural things; scientists should use them.

Nonstandard Integers as Complex Numbers

3 March, 2018


I just read something cool:

• Joel David Hamkins, Nonstandard models of arithmetic arise in the complex numbers, 3 March 2018.

Let me try to explain it in a simplified way. I think all cool math should be known more widely than it is. Getting this to happen requires a lot of explanations at different levels.

Here goes:

The Peano axioms are a nice set of axioms describing the natural numbers. But thanks to Gödel’s incompleteness theorem, these axioms can’t completely nail down the structure of the natural numbers. So, there are lots of different ‘models’ of Peano arithmetic.

These are often called ‘nonstandard’ models. If you take a model of Peano arithmetic—say, your favorite ‘standard’ model—you can get other models by throwing in extra natural numbers, larger than all the standard ones. These nonstandard models can be countable or uncountable. For more, try this:

• Nonstandard models of arithmetic, Wikipedia.

Starting with any of these models you can define integers in the usual way (as differences of natural numbers), and then rational numbers (as ratios of integers). So, there are lots of nonstandard versions of the rational numbers. Any one of these will be a field: you can add, subtract, multiply and divide your nonstandard rationals, in ways that obey all the usual basic rules.

Now for the cool part: if your nonstandard model of the natural numbers is small enough, your field of nonstandard rational numbers can be found somewhere in the standard field of complex numbers!

In other words, your nonstandard rationals are a subfield of the usual complex numbers: a subset that’s closed under addition, subtraction, multiplication and division by things that aren’t zero.

This is counterintuitive at first, because we tend to think of nonstandard models of Peano arithmetic as spooky and elusive things, while we tend to think of the complex numbers as well-understood.

However, the field of complex numbers is actually very large, and it has room for many spooky and elusive things inside it. This is well-known to experts, and we’re just seeing more evidence of that.

I said that all this works if your nonstandard model of the natural numbers is small enough. But what is “small enough”? Just the obvious thing: your nonstandard model needs to have a cardinality smaller than that of the complex numbers. So if it’s countable, that’s definitely small enough.

This fact was recently noticed by Alfred Dolich at a pub after a logic seminar at the City University of New York. The proof is very easy if you know this result: any field of characteristic zero whose cardinality is less than or equal to that of the continuum is isomorphic to some subfield of the complex numbers. So, unsurprisingly, it turned out to have been repeatedly discovered before.

And the result I just mentioned follows from this: any two algebraically closed fields of characteristic zero that have the same uncountable cardinality must be isomorphic. So, say someone hands you a field F of characteristic zero whose cardinality is smaller than that of the continuum. You can take its algebraic closure by throwing in roots to all polynomials, and its cardinality won’t get bigger. Then you can throw in even more elements, if necessary, to get a field whose cardinality is that of the continuum. The resulting field must be isomorphic to the complex numbers. So, F is isomorphic to a subfield of the complex numbers.

To round this off, I should say a bit about why nonstandard models of Peano arithmetic are considered spooky and elusive. Tennenbaum’s theorem says that for any countable nonstandard model of Peano arithmetic there is no way to code the elements of the model as standard natural numbers such that either the addition or multiplication operation of the model is a computable function on the codes.

We can, however, say some things about what these countable nonstandard models are like as ordered sets. They can be linearly ordered in a way compatible with addition and multiplication. And then they consist of one copy of the standard natural numbers, followed by a lot of copies of the standard integers, which are packed together in a dense way: that is, for any two distinct copies, there’s another distinct copy between them. Furthermore, for any of these copies, there’s another copy before it, and another after it.
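In the usual notation for order types, that description says a countable nonstandard model is ordered like

\mathbb{N} + \mathbb{Z} \cdot \mathbb{Q}

where \mathbb{Z} \cdot \mathbb{Q} means one copy of \mathbb{Z} for each rational number.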

I should also say what’s good about algebraically closed fields of characteristic zero: they are uncountably categorical. In other words, any two algebraically closed fields of characteristic zero with the same uncountable cardinality must be isomorphic. (This is not true for countable models: the algebraic closures of \mathbb{Q}, \mathbb{Q}(x), \mathbb{Q}(x,y) and so on are countable algebraically closed fields of characteristic zero, and no two of them are isomorphic. They are not spooky and elusive.)

So, any algebraically closed field whose cardinality is that of the continuum is isomorphic to the complex numbers!

For more on the logic of complex numbers, written at about the same level as this, try this post of mine:

• John Baez, The logic of real and complex numbers, Azimuth, 8 September 2014.

Cartesian Bicategories

1 March, 2018

Two students in the Applied Category Theory 2018 school have blogged about a classic paper in category theory:

• Daniel Cicala and Jules Hedges, Cartesian bicategories, The n-Category Café, 19 February 2018.

Jules Hedges is a postdoc in the computer science department at Oxford who is applying category theory to game theory and economics. Daniel Cicala is a grad student working with me on a compositional approach to graph rewriting, which is about stuff like this:

This picture shows four ‘open graphs’: graphs with inputs and outputs. The vertices are labelled with operations. The top of the picture shows a ‘rewrite rule’ where one open graph is turned into another: the operation of multiplying by 2 is replaced by the operation of adding something to itself. The bottom of the picture shows one way we can ‘apply’ this rule: this takes us from the open graph at bottom left to the open graph at bottom right.

So, we can use graph rewriting to think about ways to transform a computer program into another, perhaps simpler, computer program that does the same thing.

How do we formalize this?

A computer program wants to be a morphism, since it’s a process that turns some input into some output. Rewriting wants to be a 2-morphism, since it’s a ‘meta-process’ that turns some program into some other program. So, there should be some bicategory with computer programs (or labelled open graphs!) as morphisms and rewrites as 2-morphisms. In fact there should be a bunch of such bicategories, since there are a lot of details that one can tweak.

Together with my student Kenny Courser, Daniel has been investigating these bicategories:

• Daniel Cicala, Spans of cospans, Theory and Applications of Categories 33 (2018), 131–147.

Abstract. We discuss the notion of a span of cospans and define, for them, horizontal and vertical composition. These compositions satisfy the interchange law if working in a topos C and if the span legs are monic. A bicategory is then constructed from C-objects, C-cospans, and doubly monic spans of C-cospans. The primary motivation for this construction is an application to graph rewriting.

• Daniel Cicala, Spans of cospans in a topos, Theory and Applications of Categories 33 (2018), 1–22.

Abstract. For a topos T, there is a bicategory MonicSp(Csp(T)) whose objects are those of T, morphisms are cospans in T, and 2-morphisms are isomorphism classes of monic spans of cospans in T. Using a result of Shulman, we prove that MonicSp(Csp(T)) is symmetric monoidal, and moreover, that it is compact closed in the sense of Stay. We provide an application which illustrates how to encode double pushout rewrite rules as 2-morphisms inside a compact closed sub-bicategory of MonicSp(Csp(Graph)).

This stuff sounds abstract and esoteric when they talk about it, but it’s really all about things like the picture above—and it’s an important part of network theory!

Recently Daniel Cicala has noticed that some of the bicategories he’s getting are ‘cartesian bicategories’ in the sense of this paper:

• Aurelio Carboni and Robert F. C. Walters, Cartesian bicategories I, Journal of Pure and Applied Algebra 49 (1987), 11–32.

And that’s the paper he’s blogging about now with Jules Hedges!

Insect Population Crash

25 February, 2018

Scary news from Australia:

• Marc Rigby, Insect population decline leaves Australian scientists scratching for solutions, ABC Far North, 23 February 2018.

I’ll quote the start:

A global crash in insect populations has found its way to Australia, with entomologists across the country reporting lower than average numbers of wild insects.

University of Sydney entomologist Dr. Cameron Webb said researchers around the world widely acknowledge that insect populations are in decline, but are at a loss to determine the cause.

“On one hand it might be the widespread use of insecticides, on the other hand it might be urbanisation and the fact that we’re eliminating some of the plants where it’s really critical that these insects complete their development,” Dr Webb said.

“Add in to the mix climate change and sea level rise and it’s incredibly difficult to predict exactly what it is. It’s left me dumbfounded.”

Entomologist and owner of the Australian Insect Farm, near Innisfail in far north Queensland, Jack Hasenpusch is usually able to collect swarms of wild insects at this time of year.

“I’ve been wondering for the last few years why some of the insects have been dropping off and put it down to lack of rainfall,” Mr. Hasenpusch said.

“This year has really taken the cake with the lack of insects, it’s left me dumbfounded, I can’t figure out what’s going on.”

Mr Hasenpusch said entomologists he had spoken to from Sydney, Brisbane, Perth and even as far away as New Caledonia and Italy all had similar stories.

The Australian Butterfly Sanctuary in Kuranda, west of Cairns, has had difficulty breeding the far north’s iconic Ulysses butterfly for more than two years.

“We’ve had [the problem] checked by scientists, the University of Queensland was involved, Biosecurity Queensland was involved but so far we haven’t found anything unusual in the bodies [of caterpillars] that didn’t survive,” said breeding laboratory supervisor Tina Kupke.

“We’ve had some short successes but always failed in the second generation.”

Ms. Kupke said the problem was not confined to far north Queensland, or even Australia. “Some of our pupae go overseas from some of our breeders here and they’ve all had the same problem,” she said. “And the Melbourne Zoo has been trying for quite a while with the same problems.”

Limited lifecycle prefaces population plummet

Dr. Webb, who primarily researches mosquitoes, said numbers were also in decline across New South Wales this year, which was indicative of the situation in other insect populations.

“We’ve had a really strange summer; it’s been very dry, sometimes it’s been brutally hot but sometimes it’s been cooler than average,” he said.

“Mosquito populations, much like a lot of other insects, rely on the combination of water, humidity and temperature to complete their lifecycle. When you mix around any one of those three components you can really change the local population dynamics.”

All this reminds me of a much more detailed study showing a dramatic insect population decline in Germany over a much longer time period:

• Gretchen Vogel, Where have all the insects gone?, Science, 10 May 2017.

I’ll just quote a bit of this article:

Now, a new set of long-term data is coming to light, this time from a dedicated group of mostly amateur entomologists who have tracked insect abundance at more than 100 nature reserves in western Europe since the 1980s.

Over that time the group, the Krefeld Entomological Society, has seen the yearly insect catches fluctuate, as expected. But in 2013 they spotted something alarming. When they returned to one of their earliest trapping sites from 1989, the total mass of their catch had fallen by nearly 80%. Perhaps it was a particularly bad year, they thought, so they set up the traps again in 2014. The numbers were just as low. Through more direct comparisons, the group—which had preserved thousands of samples over 3 decades—found dramatic declines across more than a dozen other sites.

It also mentions a similar phenomenon in Scotland:

Since 1968, scientists at Rothamsted Research, an agricultural research center in Harpenden, U.K., have operated a system of suction traps—12-meter-long suction tubes pointing skyward. Set up in fields to monitor agricultural pests, the traps capture all manner of insects that happen to fly over them; they are “effectively upside-down Hoovers running 24/7, continually sampling the air for migrating insects,” says James Bell, who heads the Rothamsted Insect Survey.

Between 1970 and 2002, the biomass caught in the traps in southern England did not decline significantly. Catches in southern Scotland, however, declined by more than two-thirds during the same period. Bell notes that overall numbers in Scotland were much higher at the start of the study. “It might be that much of the [insect] abundance in southern England had already been lost” by 1970, he says, after the dramatic postwar changes in agriculture and land use.

Here’s the actual research paper:

• Caspar A. Hallmann, Martin Sorg, Eelke Jongejans, Henk Siepel, Nick Hofland, Heinz Schwan, Werner Stenmans, Andreas Müller, Hubert Sumser, Thomas Hörren, Dave Goulson and Hans de Kroon, More than 75 percent decline over 27 years in total flying insect biomass in protected areas, PLOS One, 18 October 2017.

Abstract. Global declines in insects have sparked wide interest among scientists, politicians, and the general public. Loss of insect diversity and abundance is expected to provoke cascading effects on food webs and to jeopardize ecosystem services. Our understanding of the extent and underlying causes of this decline is based on the abundance of single species or taxonomic groups only, rather than changes in insect biomass which is more relevant for ecological functioning. Here, we used a standardized protocol to measure total insect biomass using Malaise traps, deployed over 27 years in 63 nature protection areas in Germany (96 unique location-year combinations) to infer on the status and trend of local entomofauna. Our analysis estimates a seasonal decline of 76%, and mid-summer decline of 82% in flying insect biomass over the 27 years of study. We show that this decline is apparent regardless of habitat type, while changes in weather, land use, and habitat characteristics cannot explain this overall decline. This yet unrecognized loss of insect biomass must be taken into account in evaluating declines in abundance of species depending on insects as a food source, and ecosystem functioning in the European landscape.

It seems we are heading into strange times.

A Double Conference

23 February, 2018

Here’s a cool way to cut carbon emissions: a double conference. The idea is to have a conference in two faraway locations connected by live video stream, to reduce the amount of long-distance travel!

Even better, it’s about a great subject:

• Higher algebra and mathematical physics, August 13–17, 2018, Perimeter Institute, Waterloo, Canada, and Max Planck Institute for Mathematics, Bonn, Germany.

Here’s the idea:

“Higher algebra” has become important throughout mathematics, physics, and mathematical physics, and this conference will bring together leading experts in higher algebra and its mathematical physics applications. In physics, the term “algebra” is used quite broadly: any time you can take two operators or fields, multiply them, and write the answer in some standard form, a physicist will be happy to call this an “algebra”. “Higher algebra” is characterized by the appearance of a hierarchy of multilinear operations (e.g. A-infinity and L-infinity algebras). These structures can be higher categorical in nature (e.g. derived categories, cohomology theories), and can involve mixtures of operations and co-operations (Hopf algebras, Frobenius algebras, etc.). Some of these notions are purely algebraic (e.g. algebra objects in a category), while others are quite geometric (e.g. shifted symplectic structures).

An early manifestation of higher algebra in high-energy physics was supersymmetry. Supersymmetry makes quantum field theory richer and thus more complicated, but at the same time many aspects become more tractable and many problems become exactly solvable. Since then, higher algebra has made numerous appearances in mathematical physics, both high- and low-energy.

Participation is limited. Some financial support is available for early-career mathematicians. For more information and to apply, please visit the conference website of the institute closer to you:

North America: http://www.perimeterinstitute.ca/HAMP
Europe: http://www.mpim-bonn.mpg.de/HAMP

If you have any questions, please write to double.conference.2018@gmail.com.

One of the organizers, Aaron Mazel-Gee, told me:

We are also interested in spreading the idea of double conferences more generally: we’re hoping that our own event’s success inspires other academic communities to organize their own double conferences. We’re hoping to eventually compile a sort of handbook to streamline the process for others, so that they can learn from our own experiences regarding the various unique challenges that organizing such an event poses. Anyways, all of this is just to say that I would be happy for you to publicize this event anywhere that it might reach these broader audiences.

So, if you’re interested in having a double conference, please contact the organizers of this one for tips on how to do it! I’m sure they’ll have better advice after they’ve actually done it. I’ve found that the technical details really matter for these things: it can be very frustrating when they don’t work correctly. Avoiding such problems requires testing everything ahead of time—under conditions that exactly match what you’re planning to do!