The Physics of Butterfly Wings

11 August, 2015



Some butterflies have shiny, vividly colored wings. From different angles you see different colors. This effect is called iridescence. How does it work?

It turns out these butterfly wings are made of very fancy materials! Light bounces around inside these materials in a tricky way. Sunlight of different colors winds up reflecting off these materials in different directions.

We’re starting to understand the materials and make similar substances in the lab. They’re called photonic crystals. They have amazing properties.

Here at the Centre for Quantum Technologies we have people studying exotic materials of many kinds. Next door, there’s a lab completely devoted to studying graphene: crystal sheets of carbon in which electrons can move as if they were massless particles! Graphene has a lot of potential for building new technologies—that’s why Singapore is pumping money into researching it.

Some physicists at MIT just showed that one of the materials in butterfly wings might act like a 3d form of graphene. In graphene, electrons can only move easily in 2 directions. In this new material, electrons could move in all 3 directions, acting as if they had no mass.

The pictures here show the microscopic structure of two materials found in butterfly wings:

The picture at left actually shows a sculpture made by the mathematical artist Bathsheba Grossman. But it’s a piece of a gyroid: a surface with a very complicated shape, which repeats forever in 3 directions. It’s called a minimal surface because you can’t shrink its area by tweaking it just a little. It divides space into two regions.

The gyroid was discovered in 1970 by a mathematician, Alan Schoen. It’s a triply periodic minimal surface, meaning one that repeats itself in 3 different directions in space, like a crystal.


Schoen was working for NASA, and his idea was to use the gyroid for building ultra-light, super-strong structures. But that didn’t happen. Research doesn’t move in predictable directions.

In 1983, people discovered that in some mixtures of oil and water, the oil naturally forms a gyroid. The sheets of oil try to minimize their area, so it’s not surprising that they form a minimal surface. Something else makes this surface be a gyroid—I’m not sure what.

Butterfly wings are made of a hard material called chitin. Around 2008, people discovered that the chitin in some iridescent butterfly wings is made in a gyroid pattern! The spacing in this pattern is very small, about one wavelength of visible light. This makes light move through this material in a complicated way, which depends on the light’s color and the direction it’s moving.

So: butterflies have naturally evolved a photonic crystal based on a gyroid!

The universe is awesome, but it’s not magic. A mathematical pattern is beautiful if it’s a simple solution to at least one simple problem. This is why beautiful patterns naturally bring themselves into existence: they’re the simplest ways for certain things to happen. Darwinian evolution helps out: it scans through trillions of possibilities and finds solutions to problems. So, we should expect life to be packed with mathematically beautiful patterns… and it is.

The picture at right above shows a ‘double gyroid’. Here it is again:

This is actually two interlocking surfaces, shown in red and blue. You can get them by writing the gyroid as a level surface:

f(x,y,z) = 0

and taking the two nearby surfaces

f(x,y,z) = \pm c

for some small value of c.
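
If you want to play with these surfaces yourself, the gyroid is well approximated by a level set of a simple trigonometric function, and the double gyroid by the pair of level sets at \pm c. Here is a minimal sketch in Python; the function g below is the standard approximation to Schoen’s gyroid, not the exact minimal surface:

    import numpy as np

    def g(x, y, z):
        # standard trigonometric approximation: g = 0 is (nearly) the gyroid
        return (np.sin(x) * np.cos(y)
                + np.sin(y) * np.cos(z)
                + np.sin(z) * np.cos(x))

    # sample one periodic cell on a grid
    t = np.linspace(0, 2 * np.pi, 60)
    X, Y, Z = np.meshgrid(t, t, t, indexing='ij')
    G = g(X, Y, Z)

    c, eps = 0.8, 0.05
    single = np.abs(G) < eps              # grid points near the gyroid g = 0
    double = np.abs(np.abs(G) - c) < eps  # points near the double gyroid g = ±c
    print(single.sum(), double.sum())

Feeding either boolean mask to a marching-cubes routine produces renderable surfaces.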

It turns out that while they’re still growing, some butterflies have a double gyroid pattern in their wings. This turns into a single gyroid when they grow up!

The new research at MIT studied how an electron would move through a double gyroid pattern. They calculated its dispersion relation: how the speed of the electron would depend on its energy and the direction it’s moving.

An ordinary particle moves faster if it has more energy. But a massless particle, like a photon, moves at the same speed no matter what energy it has. The MIT team showed that an electron in a double gyroid pattern moves at a speed that doesn’t depend much on its energy. So, in some ways this electron acts like a massless particle.
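
In formulas, and as a standard textbook comparison rather than anything specific to the MIT paper: a nonrelativistic particle of mass m has

\displaystyle{ E = \frac{p^2}{2m}, \qquad v = \frac{dE}{dp} = \frac{p}{m} }

so its speed grows with energy, while a massless particle has

\displaystyle{ E = c|p|, \qquad v = \frac{dE}{dp} = c }

so its speed is the same at every energy.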

But it’s quite different than a photon. It’s actually more like a neutrino! You see, unlike photons, electrons and neutrinos are spin-1/2 particles. Neutrinos are almost massless. A massless spin-1/2 particle can have a built-in handedness, spinning in only one direction around its axis of motion. Such a particle is called a Weyl spinor. The MIT team showed that an electron moving through a double gyroid acts approximately like a Weyl spinor!

How does this work? Well, the key fact is that the double gyroid has a built-in handedness, or chirality. It comes in a left-handed and right-handed form. You can see the handedness quite clearly in Grossman’s sculpture of the ordinary gyroid:

Beware: nobody has actually made electrons act like Weyl spinors in the lab yet. The MIT team just found a way that should work. Someday someone will actually make it happen, probably in less than a decade. And later, someone will do amazing things with this ability. I don’t know what. Maybe the butterflies know!

References and more

For a good introduction to the physics of gyroids, see:

• James A. Dolan, Bodo D. Wilts, Silvia Vignolini, Jeremy J. Baumberg, Ullrich Steiner and Timothy D. Wilkinson, Optical properties of gyroid structured materials: from photonic crystals to metamaterials, Advanced Optical Materials 3 (2015), 12–32.

For some of the history and math of gyroids, see Alan Schoen’s webpage:

• Alan Schoen, Triply-periodic minimal surfaces.

For more on gyroids in butterfly wings, see:

• K. Michielsen and D. G. Stavenga, Gyroid cuticular structures in butterfly wing scales: biological photonic crystals.

• Vinodkumar Saranathan et al, Structure, function, and self-assembly of single network gyroid (I4132) photonic crystals in butterfly wing scales, PNAS 107 (2010), 11676–11681.

The paper by Michielsen and Stavenga is free online! They say the famous ‘blue Morpho’ butterfly shown in the picture at the top of this article does not use a gyroid; it uses a “two-dimensional photonic crystal slab consisting of arrays of rectangles formed by lamellae and microribs.” But they find gyroids in four other species: Callophrys rubi, Cyanophrys remus, Parides sesostris and Teinopalpus imperialis. They compare transmission electron microscope pictures of slices of their iridescent patches with computer-generated slices of gyroids. The comparison looks pretty good to me:

For the evolution of iridescence, see:

• Melissa G. Meadows et al, Iridescence: views from many angles, J. Roy. Soc. Interface 6 (2009).

For the new research at MIT, see:

• Ling Lu, Liang Fu, John D. Joannopoulos and Marin Soljačić, Weyl points and line nodes in gapless gyroid photonic crystals.

• Ling Lu, Zhiyu Wang, Dexin Ye, Lixin Ran, Liang Fu, John D. Joannopoulos and Marin Soljačić, Experimental observation of Weyl points, Science 349 (2015), 622–624.

Again, the first is free online. There’s a lot of great math lurking inside, most of which is too mind-blowing to explain quickly. Let me just paraphrase the start of the paper, so at least experts can get the idea:

Two-dimensional (2d) electrons and photons at the energies and frequencies of Dirac points exhibit extraordinary features. As the best example, almost all the remarkable properties of graphene are tied to the massless Dirac fermions at its Fermi level. Topologically, Dirac cones are not only the critical points for 2d phase transitions but also the unique surface manifestation of a topologically gapped 3d bulk. In a similar way, it is expected that if a material could be found that exhibits a 3d linear dispersion relation, it would also display a wide range of interesting physics phenomena. The associated 3d linear point degeneracies are called “Weyl points”. In the past year, there have been a few studies of Weyl fermions in electronic systems. The associated Fermi-arc surface states, quantum Hall effect, novel transport properties and a realization of the Adler–Bell–Jackiw anomaly are also expected. However, no observation of Weyl points has been reported. Here, we present a theoretical discovery and detailed numerical investigation of frequency-isolated Weyl points in perturbed double-gyroid photonic crystals along with their complete phase diagrams and their topologically protected surface states.

Also a bit for the mathematicians:

Weyl points are topologically stable objects in the 3d Brillouin zone: they act as monopoles of Berry flux in momentum space, and hence are intimately related to the topological invariant known as the Chern number. The Chern number can be defined for a single bulk band or a set of bands, where the Chern numbers of the individual bands are summed, on any closed 2d surface in the 3d Brillouin zone. The difference of the Chern numbers defined on two surfaces, of all bands below the Weyl point frequencies, equals the sum of the chiralities of the Weyl points enclosed in between the two surfaces.

This is a mix of topology and physics jargon that may be hard for pure mathematicians to understand, but I’ll be glad to translate if there’s interest.

For starters, a ‘monopole of Berry flux in momentum space’ is a poetic way of talking about a twisted complex line bundle over the space of allowed energy-momenta of the electron in the double gyroid. We get a twist at every ‘Weyl point’, meaning a point where the dispersion relations look locally like those of a Weyl spinor when its energy-momentum is near zero. Near such a point, the dispersion relations are a Fourier-transformed version of the Weyl equation.
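
Concretely, and as the standard local model rather than a claim about the full band structure: near a Weyl point the effective Hamiltonian is, after a linear change of coordinates,

\displaystyle{ H(\vec{k}) = \pm v \, \vec{\sigma} \cdot \vec{k} }

where \vec{\sigma} is the vector of Pauli matrices. Its dispersion relation E = \pm v |\vec{k}| is linear in the momentum \vec{k}, and the sign is the chirality of the Weyl point, which matches the charge of the ‘monopole of Berry flux’ sitting there.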


The Game of Googol

20 July, 2015

Here’s a puzzle from a recent issue of Quanta, an online science magazine:

Puzzle 1: I write down two different numbers that are completely unknown to you, and hold one in my left hand and one in my right. You have absolutely no idea how I generated these two numbers. Which is larger?

You can point to one of my hands, and I will show you the number in it. Then you can decide to either select the number you have seen or switch to the number you have not seen, held in the other hand. Is there a strategy that will give you a greater than 50% chance of choosing the larger number, no matter which two numbers I write down?

At first it seems the answer is no. Whatever number you see, the other number could be larger or smaller. There’s no way to tell. So obviously you can’t get a better than 50% chance of picking the hand with the larger number—even if you’ve seen one of those numbers!

But “obviously” is not a proof. Sometimes “obvious” things are wrong!

It turns out that, amazingly, the answer to the puzzle is yes! You can find a strategy to do better than 50%. But the strategy uses randomness. So, this puzzle is a great illustration of the power of randomness.

If you want to solve it yourself, stop now or read Quanta magazine for some clues—they offered a small prize for the best answer:

• Pradeep Mutalik, Can information rise from randomness?, Quanta, 7 July 2015.

Greg Egan gave a nice solution in the comments to this magazine article, and I’ll reprint it below along with two followup puzzles. So don’t look down there unless you want a spoiler.

I should add: the most common mistake among educated readers seems to be assuming that the first player, the one who chooses the two numbers, chooses them according to some probability distribution. Don’t assume that. They are simply arbitrary numbers.

The history of this puzzle

I’d seen this puzzle before—do you know who invented it? On G+, Hans Havermann wrote:

I believe the origin of this puzzle goes back to (at least) John Fox and Gerald Marnie’s 1958 betting game ‘Googol’. Martin Gardner mentioned it in his February 1960 column in Scientific American. Wikipedia mentions it under the heading ‘Secretary problem’. Gardner suggested that a variant of the game was proposed by Arthur Cayley in 1875.

Actually the game of Googol is a generalization of the puzzle that we’ve been discussing. Martin Gardner explained it thus:

Ask someone to take as many slips of paper as he pleases, and on each slip write a different positive number. The numbers may range from small fractions of 1 to a number the size of a googol (1 followed by a hundred 0s) or even larger. These slips are turned face down and shuffled over the top of a table. One at a time you turn the slips face up. The aim is to stop turning when you come to the number that you guess to be the largest of the series. You cannot go back and pick a previously turned slip. If you turn over all the slips, then of course you must pick the last one turned.

So, the puzzle I just showed you is the special case when there are just 2 slips of paper. I seem to recall that Gardner incorrectly dismissed this case as trivial!

There’s been a lot of work on Googol. Julien Berestycki writes:

I heard about this puzzle a few years ago from Sasha Gnedin. He has a very nice paper about this:

• Alexander V. Gnedin, A solution to the game of Googol, Annals of Probability (1994), 1588–1595.

One of the many beautiful ideas in this paper is that it asks what is the best strategy for the guy who writes the numbers! It also cites a paper by Gnedin and Berezowskyi (of oligarchic fame). 

Egan’s solution

Okay, here is Greg Egan’s solution, paraphrased a bit:

Pick some function f : \mathbb{R} \to \mathbb{R} such that:

\displaystyle{ \lim_{x \to -\infty} f(x) = 0 }

\displaystyle{ \lim_{x \to +\infty} f(x) = 1 }

f is strictly increasing: if x > y then f(x) > f(y)

There are lots of functions like this, for example

\displaystyle{f(x) = \frac{e^x}{e^x + 1} }

Next, pick one of the first player’s hands at random. If the number you are shown is a, compute f(a). Then generate a uniformly distributed random number z between 0 and 1. If z is less than or equal to f(a), guess that a is the larger number; if z is greater than f(a), guess that the larger number is in the other hand.

The probability of guessing correctly can be calculated as the probability of seeing the larger number initially and then, correctly, sticking with it, plus the probability of seeing the smaller number initially and then, correctly, choosing the other hand.

Say the larger number is x and the smaller one is y. Then the probability of guessing correctly is

\frac{1}{2} f(x) + \frac{1}{2} (1 - f(y)) =  \frac{1}{2} + \frac{1}{2} (f(x) - f(y))

This is strictly greater than \frac{1}{2} since x > y so f(x) - f(y) > 0.

So, you have a more than 50% chance of winning! But as you play the game, there’s no way to tell how much more than 50%. If the numbers in the first player’s hands are very large, or very small, your chance will be just slightly more than 50%.
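
Here’s a quick simulation of Egan’s strategy in Python, using the logistic function above; the empirical win rate is always above 50%, but only barely when the two numbers are close together:

    import math, random

    def f(x):
        # strictly increasing, tends to 0 at -infinity and 1 at +infinity
        return 1 / (1 + math.exp(-x))

    def win_rate(x, y, trials=100_000):
        # x > y are the two hidden numbers
        wins = 0
        for _ in range(trials):
            shown, other = random.choice([(x, y), (y, x)])
            z = random.random()
            guess = shown if z <= f(shown) else other
            wins += (guess == x)
        return wins / trials

    print(win_rate(3.0, -1.0))      # about 0.84, i.e. 1/2 + (f(3) - f(-1))/2
    print(win_rate(10.5, 10.0))     # theoretically 0.500009: a near coin flip

If you replace the fresh random z by one fixed number z_0, the first player can pick both numbers on the same side of f^{-1}(z_0); the advantage then vanishes and the win rate drops to exactly 50%, which is the point of Puzzle 3 below.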

Followup puzzles

Here are two more puzzles:

Puzzle 2: Prove that no deterministic strategy can guarantee you have a more than 50% chance of choosing the larger number.

Puzzle 3: There are perfectly specific but ‘algorithmically random’ sequences of bits, which can’t be predicted well by any program. If we use these to generate a uniform algorithmically random number between 0 and 1, and use the strategy Egan describes, will our chance of choosing the larger number be more than 50%, or not?

But watch out—here come Egan’s solutions to those!

Solutions

Egan writes:

Here are my answers to your two puzzles on G+.

Puzzle 2: Prove that no deterministic strategy can guarantee you have a more than 50% chance of choosing the larger number.

Answer: If we adopt a deterministic strategy, that means there is a function S: \mathbb{R} \to \{0,1\} that tells us whether or not we stick with the number x when we see it. If S(x)=1 we stick with it, if S(x)=0 we swap it for the other number.

If the two numbers are x and y, with x > y, then the probability of success will be:

P = 0.5 + 0.5(S(x)-S(y))

This is exactly the same as the formula we obtained when we stuck with x with probability f(x), but we have specialised to functions S valued in \{0,1\}.

We can only guarantee a more than 50% chance of choosing the larger number if S is monotonically increasing everywhere, i.e. S(x) > S(y) whenever x > y. But this is impossible for a function valued in \{0,1\}. To prove this, define x_0 to be any number in [1,2] such that S(x_0)=0; such an x_0 must exist, otherwise S would be constant on [1,2] and hence not monotonically increasing. Similarly define x_1 to be any number in [-2,-1] such that S(x_1) = 1. We then have x_0 > x_1 but S(x_0) < S(x_1).

Puzzle 3: There are perfectly specific but ‘algorithmically random’ sequences of bits, which can’t be predicted well by any program. If we use these to generate a uniform algorithmically random number between 0 and 1, and use the strategy Egan describes, will our chance of choosing the larger number be more than 50%, or not?

Answer: As Philip Gibbs noted, a deterministic pseudo-random number generator is still deterministic. Using a specific sequence of algorithmically random bits

(b_1, b_2, \dots )

to construct a number z between 0 and 1 means z takes on the specific value:

z_0 = \sum_i b_i 2^{-i}

So rather than sticking with x with probability f(x) for our monotonically increasing function f, we end up always sticking with x if z_0 \le f(x), and always swapping if z_0 > f(x). This is just using a function S:\mathbb{R} \to \{0,1\} as in Puzzle 2, with:

S(x) = 0 if x < f^{-1}(z_0)

S(x) = 1 if x \ge f^{-1}(z_0)

So all the same consequences as in Puzzle 2 apply, and we cannot guarantee a more than 50% chance of choosing the larger number.

Puzzle 3 emphasizes the huge gulf between ‘true randomness’, where we only have a probability distribution of numbers z, and the situation where we have a specific number z_0, generated by any means whatsoever.

We could generate z_0 using a pseudorandom number generator, radioactive decay of atoms, an oracle whose randomness is certified by all the Greek gods, or whatever. No matter how randomly z_0 is generated, once we have it, we know there exist choices for the first player that will guarantee our defeat!

This may seem weird at first, but if you think about simple games of luck you’ll see it’s completely ordinary. We can have a more than 50% chance of winning such a game even if for any particular play we make the other player has a move that ensures our defeat. That’s just how randomness works.


Trends in Reaction Network Theory (Part 2)

1 July, 2015

Here in Copenhagen we’ll soon be having a bunch of interesting talks on chemical reaction networks:

Workshop on Mathematical Trends in Reaction Network Theory, 1-3 July 2015, Department of Mathematical Sciences, University of Copenhagen. Organized by Elisenda Feliu and Carsten Wiuf.

Looking through the abstracts, here are a couple that strike me.

First of all, Gheorghe Craciun claims to have proved the biggest open conjecture in this field: the Global Attractor Conjecture!

• Gheorghe Craciun, Toric differential inclusions and a proof of the global attractor conjecture.

This famous old conjecture says that for a certain class of chemical reactions, the ones coming from ‘complex balanced reaction networks’, the chemicals will approach equilibrium no matter what their initial concentrations are. Here’s what Craciun says:

Abstract. In a groundbreaking 1972 paper Fritz Horn and Roy Jackson showed that a complex balanced mass-action system must have a unique locally stable equilibrium within any compatibility class. In 1974 Horn conjectured that this equilibrium is a global attractor, i.e., all solutions in the same compatibility class must converge to this equilibrium. Later, this claim was called the Global Attractor Conjecture, and it was shown that it has remarkable implications for the dynamics of large classes of polynomial and power-law dynamical systems, even if they are not derived from mass-action kinetics. Several special cases of this conjecture have been proved during the last decade. We describe a proof of the conjecture in full generality. In particular, it will follow that all detailed balanced mass action systems and all deficiency zero mass-action systems have the global attractor property. We will also discuss some implications for biochemical mechanisms that implement noise filtering and cellular homeostasis.

Manoj Gopalkrishnan wrote a great post explaining the concept of complex balanced reaction network here on Azimuth, so if you want to understand the conjecture you could start there.
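
To get a feel for the statement, here’s a toy example in Python (my own illustration, not anything from Craciun’s proof): the reversible reaction A ↔ B with mass-action kinetics is complex balanced, the compatibility classes are the sets where a + b is constant, and every initial condition in a class converges to the same equilibrium.

    # mass-action kinetics for A <-> B with rate constants k1 (forward), k2 (backward)
    k1, k2 = 2.0, 1.0

    def step(a, b, dt=1e-3):
        rate = k1 * a - k2 * b        # net rate of A -> B
        return a - rate * dt, b + rate * dt

    a, b = 5.0, 0.0                    # a + b = 5 is conserved by every step
    for _ in range(20_000):
        a, b = step(a, b)

    # equilibrium: k1 a = k2 b, i.e. (a, b) = (5/3, 10/3) on this class
    print(a, b)                        # ~ (1.667, 3.333), for any initial split of 5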

Even better, Manoj is talking here about a way to do statistical inference with chemistry! His talk is called ‘Statistical inference with a chemical soup’:

Abstract. The goal is to design an “intelligent chemical soup” that can do statistical inference. This may have niche technological applications in medicine and biological research, as well as provide fundamental insight into the workings of biochemical reaction pathways. As a first step towards our goal, we describe a scheme that exploits the remarkable mathematical similarity between log-linear models in statistics and chemical reaction networks. We present a simple scheme that encodes the information in a log-linear model as a chemical reaction network. Observed data is encoded as initial concentrations, and the equilibria of the corresponding mass-action system yield the maximum likelihood estimators. The simplicity of our scheme suggests that molecular environments, especially within cells, may be particularly well suited to performing statistical computations.

It’s based on this paper:

• Manoj Gopalkrishnan, A scheme for molecular computation of maximum likelihood estimators for log-linear models.

I’m not sure, but this idea may exploit existing analogies between the approach to equilibrium in chemistry, the approach to equilibrium in evolutionary game theory, and statistical inference. You may have read Marc Harper’s post about that stuff!

David Doty is giving a broader review of ‘Computation by (not about) chemistry’:

Abstract. The model of chemical reaction networks (CRNs) is extensively used throughout the natural sciences as a descriptive language for existing chemicals. If we instead think of CRNs as a programming language for describing artificially engineered chemicals, what sorts of computations are possible for these chemicals to achieve? The answer depends crucially on several formal choices:

1) Do we treat matter as infinitely divisible (real-valued concentrations) or atomic (integer-valued counts)?

2) How do we represent the input and output of the computation (e.g., Boolean presence or absence of species, positive numbers directly represented by counts/concentrations, positive and negative numbers represented indirectly by the difference between counts/concentrations of a pair of species)?

3) Do we assume mass-action rate laws (reaction rates proportional to reactant counts/concentrations) or do we insist the system works correctly under a broader class of rate laws?

The talk will survey several recent results and techniques. A primary goal of the talk is to convey the “programming perspective”: rather than asking “What does chemistry do?”, we want to understand “What could chemistry do?” as well as “What can chemistry provably not do?”

I’m really interested in chemical reaction networks that appear in biological systems, and there will be lots of talks about that. For example, Ovidiu Radulescu will talk about ‘Taming the complexity of biochemical networks through model reduction and tropical geometry’. Model reduction is the process of simplifying complicated models while preserving at least some of their good features. Tropical geometry is a cool version of algebraic geometry that uses the real numbers with minimization as addition and addition as multiplication. This number system underlies the principle of least action, or the principle of minimum energy. Here is Radulescu’s abstract:

Abstract. Biochemical networks are used as models of cellular physiology with diverse applications in biology and medicine. In the absence of objective criteria to detect essential features and prune secondary details, networks generated from data are too big and therefore out of the applicability of many mathematical tools for studying their dynamics and behavior under perturbations. However, under circumstances that we can generically denote by multi-scaleness, large biochemical networks can be approximated by smaller and simpler networks. Model reduction is a way to find these simpler models that can be more easily analyzed. We discuss several model reduction methods for biochemical networks with polynomial or rational rate functions and propose as their common denominator the notion of tropical equilibration, meaning finite intersection of tropical varieties in algebraic geometry. Using tropical methods, one can strongly reduce the number of variables and parameters of biochemical network. For multi-scale networks, these reductions are computed symbolically on orders of magnitude of parameters and variables, and are valid in wide domains of parameter and phase spaces.
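
Here’s what tropical arithmetic looks like concretely, in Python (my own illustration, unrelated to the talk’s specifics): with minimization as addition and addition as multiplication, tropical matrix multiplication computes shortest paths, which hints at why this semiring appears wherever something is being minimized.

    from functools import reduce

    INF = float('inf')

    def trop_add(a, b):   # tropical addition is minimization
        return min(a, b)

    def trop_mul(a, b):   # tropical multiplication is ordinary addition
        return a + b

    def trop_matmul(A, B):
        n = len(A)        # square matrices, for simplicity
        return [[reduce(trop_add, (trop_mul(A[i][k], B[k][j]) for k in range(n)))
                 for j in range(n)] for i in range(n)]

    # D[i][j] = length of the edge i -> j in a weighted graph (INF if absent)
    D = [[0, 1, 4],
         [INF, 0, 2],
         [INF, INF, 0]]

    D2 = trop_matmul(D, D)
    print(D2[0][2])   # 3: the path 0 -> 1 -> 2 beats the direct edge of length 4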

I’m talking about the analogy between probabilities and quantum amplitudes, and how this makes chemistry analogous to particle physics. You can see two versions of my talk here, but I’ll be giving the ‘more advanced’ version, which is new:

Probabilities versus amplitudes.

Abstract. Some ideas from quantum theory are just beginning to percolate back to classical probability theory. For example, the master equation for a chemical reaction network describes the interactions of molecules in a stochastic rather than quantum way. If we look at it from the perspective of quantum theory, this formalism turns out to involve creation and annihilation operators, coherent states and other well-known ideas—but with a few big differences.

Anyway, there are a lot more talks, but if I don’t have breakfast and walk over to the math department, I’ll miss them!

You can learn more about individual talks in the comments here (see below) and also in Matteo Polettini’s blog:

• Matteo Polettini, Mathematical trends in reaction network theory: part 1 and part 2, Out of Equilibrium, 1 July 2015.


PROPs for Linear Systems

18 May, 2015

Eric Drexler likes to say: engineering is dual to science, because science tries to understand what the world does, while engineering is about getting the world to do what you want. I think we need a slightly less ‘coercive’, more ‘cooperative’ approach to the world in order to develop ‘ecotechnology’, but it’s still a useful distinction.

For example, classical mechanics is the study of what things do when they follow Newton’s laws. Control theory is the study of what you can get them to do.

Say you have an upside-down pendulum on a cart. Classical mechanics says what it will do. But control theory says: if you watch the pendulum and use what you see to move the cart back and forth correctly, you can make sure the pendulum doesn’t fall over!

Control theorists do their work with the help of ‘signal-flow diagrams’. For example, here is the signal-flow diagram for an inverted pendulum on a cart:

When I take a look at a diagram like this, I say to myself: that’s a string diagram for a morphism in a monoidal category! And it’s true. Jason Erbele wrote a paper explaining this. Independently, Bonchi, Sobociński and Zanasi did some closely related work:

• John Baez and Jason Erbele, Categories in control.

• Filippo Bonchi, Paweł Sobociński and Fabio Zanasi, Interacting Hopf algebras.

• Filippo Bonchi, Paweł Sobociński and Fabio Zanasi, A categorical semantics of signal flow graphs.

I’ll explain some of the ideas at the Turin meeting on the categorical foundations of network theory. But I also want to talk about this new paper that Simon Wadsley of Cambridge University wrote with my student Nick Woods:

• Simon Wadsley and Nick Woods, PROPs for linear systems.

This makes the picture neater and more general!

You see, Jason and I used signal flow diagrams to give a new description of the category of finite-dimensional vector spaces and linear maps. This category plays a big role in the control theory of linear systems. Bonchi, Sobociński and Zanasi gave a closely related description of an equivalent category, \mathrm{Mat}(k), where:

• objects are natural numbers, and

• a morphism f : m \to n is an n \times m matrix with entries in the field k,

and composition is given by matrix multiplication.

But Wadsley and Woods generalized all this work to cover \mathrm{Mat}(R) whenever R is a commutative rig. A rig is a ‘ring without negatives’—like the natural numbers. We can multiply matrices valued in any rig, and this includes some very useful examples… as I’ll explain later.
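
Here’s what composition in \mathrm{Mat}(R) looks like over an arbitrary commutative rig, sketched in Python; the rig is passed in as its addition, multiplication and zero, and no subtraction is ever used (my own illustration of the definition):

    def mat_mul(R, A, B):
        """Compose morphisms of Mat(R): an n x m matrix A after an m x p matrix B."""
        add, mul, zero = R
        n, m, p = len(A), len(B), len(B[0])
        C = [[zero] * p for _ in range(n)]
        for i in range(n):
            for j in range(p):
                acc = zero
                for k in range(m):
                    acc = add(acc, mul(A[i][k], B[k][j]))
                C[i][j] = acc
        return C

    # the rig of natural numbers: ordinary +, *, 0
    NAT = (lambda a, b: a + b, lambda a, b: a * b, 0)

    print(mat_mul(NAT, [[1, 2]], [[3], [4]]))   # [[11]]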

Wadsley and Woods proved:

Theorem. Whenever R is a commutative rig, \mathrm{Mat}(R) is the PROP for bicommutative bimonoids over R.

This result is quick to state, but it takes a bit of explaining! So, let me start by bringing in some definitions.

Bicommutative bimonoids

We will work in any symmetric monoidal category, and draw morphisms as string diagrams.

A commutative monoid is an object equipped with a multiplication:

and a unit:

obeying these laws:

For example, suppose \mathrm{FinVect}_k is the symmetric monoidal category of finite-dimensional vector spaces over a field k, with direct sum as its tensor product. Then any object V \in \mathrm{FinVect}_k is a commutative monoid where the multiplication is addition:

(x,y) \mapsto x + y

and the unit is zero: that is, the unique map from the zero-dimensional vector space to V.

Turning all this upside down, a cocommutative comonoid has a comultiplication:

and a counit:

obeying these laws:

For example, consider our vector space V \in \mathrm{FinVect}_k again. It’s a cocommutative comonoid where the comultiplication is duplication:

x \mapsto (x,x)

and the counit is deletion: that is, the unique map from V to the zero-dimensional vector space.

Given an object that’s both a commutative monoid and a cocommutative comonoid, we say it’s a bicommutative bimonoid if these extra axioms hold:

You can check that these are true for our running example of a finite-dimensional vector space V. The most exciting one is the top one, which says that adding two vectors and then duplicating the result is the same as duplicating each one, then adding them appropriately.

Our example has some other properties, too! Each element c \in k defines a morphism from V to itself, namely scalar multiplication by c:

x \mapsto c x

We draw this as follows:

These morphisms are compatible with the ones so far:

Moreover, all the ‘rig operations’ in k—that is, addition, multiplication, 0 and 1, but not subtraction or division—can be recovered from what we have so far:

We summarize this by saying our vector space V is a bicommutative bimonoid ‘over k’.

More generally, suppose we have a bicommutative bimonoid A in a symmetric monoidal category. Let \mathrm{End}(A) be the set of bicommutative bimonoid homomorphisms from A to itself. This is actually a rig: there’s a way to add these homomorphisms, and also a way to ‘multiply’ them (namely, compose them).

Suppose R is any commutative rig. Then we say A is a bicommutative bimonoid over R if it’s equipped with a rig homomorphism

\Phi : R \to \mathrm{End}(A)

This is a way of summarizing the diagrams I just showed you! You see, each c \in R gives a morphism from A to itself, which we write as

The fact that this is a bicommutative bimonoid endomorphism says precisely this:

And the fact that \Phi is a rig homomorphism says precisely this:

So sometimes the right word is worth a dozen pictures!

What Jason and I showed is that for any field k, \mathrm{FinVect}_k is the free symmetric monoidal category on a bicommutative bimonoid over k. This means that the above rules, which are rules for manipulating signal flow diagrams, completely characterize the world of linear algebra!

Bonchi, Sobociński and Zanasi used ‘PROPs’ to prove a similar result where the field is replaced by a sufficiently nice commutative ring. And Wadsley and Woods used PROPs to generalize even further to the case of an arbitrary commutative rig!

But what are PROPs?

PROPs

A PROP is a particularly tractable sort of symmetric monoidal category: a strict symmetric monoidal category where the objects are natural numbers and the tensor product of objects is given by ordinary addition. The symmetric monoidal category \mathrm{FinVect}_k is equivalent to the PROP \mathrm{Mat}(k), where a morphism f : m \to n is an n \times m matrix with entries in k, composition of morphisms is given by matrix multiplication, and the tensor product of morphisms is the direct sum of matrices.

We can define a similar PROP \mathrm{Mat}(R) whenever R is a commutative rig, and Wadsley and Woods gave an elegant description of the ‘algebras’ of \mathrm{Mat}(R). Suppose C is a PROP and D is a strict symmetric monoidal category. Then the category of algebras of C in D is the category of strict symmetric monoidal functors F : C \to D and natural transformations between these.

If for every choice of D the category of algebras of C in D is equivalent to the category of algebraic structures of some kind in D, we say C is the PROP for structures of that kind. This explains the theorem Wadsley and Woods proved:

Theorem. Whenever R is a commutative rig, \mathrm{Mat}(R) is the PROP for bicommutative bimonoids over R.

The fact that an algebra of \mathrm{Mat}(R) is a bicommutative bimonoid is equivalent to all this stuff:

The fact that \Phi(c) is a bimonoid homomorphism for all c \in R is equivalent to this stuff:

And the fact that \Phi is a rig homomorphism is equivalent to this stuff:

This is a great result because it includes some nice new examples.

First, the commutative rig of natural numbers gives a PROP \mathrm{Mat}(\mathbb{N}). This is equivalent to the symmetric monoidal category \mathrm{FinSpan}, where morphisms are isomorphism classes of spans of finite sets, with disjoint union as the tensor product. Steve Lack had already shown that \mathrm{FinSpan} is the PROP for bicommutative bimonoids. But this also follows from the result of Wadsley and Woods, since every bicommutative bimonoid V is automatically equipped with a unique rig homomorphism

\Phi : \mathbb{N} \to \mathrm{End}(V)

Second, the commutative rig of booleans

\mathbb{B} = \{F,T\}

with ‘or’ as addition and ‘and’ as multiplication gives a PROP \mathrm{Mat}(\mathbb{B}). This is equivalent to the symmetric monoidal category \mathrm{FinRel} where morphisms are relations between finite sets, with disjoint union as the tensor product. Samuel Mimram had already shown that this is the PROP for special bicommutative bimonoids, meaning those where comultiplication followed by multiplication is the identity:

But again, this follows from the general result of Wadsley and Woods!
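
As a sanity check on the equivalence between \mathrm{Mat}(\mathbb{B}) and \mathrm{FinRel}, here’s how a boolean matrix product composes relations, in Python (a small made-up example of the stated correspondence):

    # a relation from an m-element set to an n-element set as an n x m boolean
    # matrix: M[j][i] = True exactly when i is related to j
    def compose(S, R):
        # matrix product over the booleans: 'or' as addition, 'and' as multiplication
        return [[any(S[k][j] and R[j][i] for j in range(len(R)))
                 for i in range(len(R[0]))] for k in range(len(S))]

    R = [[True, False],
         [True, True]]
    S = [[False, True],
         [True, False]]

    print(compose(S, R))   # [[True, True], [True, False]]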

Finally, taking the commutative ring of integers \mathbb{Z}, Wadsley and Woods showed that \mathrm{Mat}(\mathbb{Z}) is the PROP for bicommutative Hopf monoids. The key here is that scalar multiplication by -1 obeys the axioms for an antipode—the extra morphism that makes a bimonoid into a Hopf monoid. Here are those axioms:

More generally, whenever R is a commutative ring, the presence of -1 \in R guarantees that a bimonoid over R is automatically a Hopf monoid over R. So, when R is a commutative ring, Wadsley and Woods’ result implies that \mathrm{Mat}(R) is the PROP for Hopf monoids over R.

Earlier, in their paper on ‘interacting Hopf algebras’, Bonchi, Sobociński and Zanasi had given an elegant and very different proof that \mathrm{Mat}(R) is the PROP for Hopf monoids over R whenever R is a principal ideal domain. The advantage of their argument is that they build up the PROP for Hopf monoids over R from smaller pieces, using some ideas developed by Steve Lack. But the new argument by Wadsley and Woods has its own charm.

In short, we’re getting the diagrammatics of linear algebra worked out very nicely, providing a solid mathematical foundation for signal flow diagrams in control theory!


Resource Theories

12 May, 2015

by Brendan Fong

Hugo Nava-Kopp and I have a new paper on resource theories:

• Brendan Fong and Hugo Nava-Kopp, Additive monotones for resource theories of parallel-combinable processes with discarding.

A mathematical theory of resources is Tobias Fritz’s current big project. He’s already explained how ordered monoids can be viewed as theories of resource convertibility in a three part series on this blog.

Ordered monoids are great, and quite familiar in network theory: for example, a Petri net can be viewed as a presentation for an ordered commutative monoid. But this line of work started with symmetric monoidal categories, developed by Tobias together with my (Oxford) supervisor Bob Coecke and Rob Spekkens.

The main idea is this: think of the objects of your symmetric monoidal category as resources, and your morphisms as ways to convert one resource into another. The monoidal product or ‘tensor product’ in your category allows you to talk about collections of your resources. So, for example, in the resource theory of chemical reactions, our objects are molecules like oxygen O2, hydrogen H2, and water H2O, and our morphisms are things like the electrolysis of water:

2H2O → O2 + 2H2

This is a categorification of the ordered commutative monoid of resource convertibility: we now keep track of how we convert resources into one another, instead of just whether we can convert them.
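
Here’s a decategorified toy version in Python (a sketch of my own, not code from the paper): resources are multisets, reactions are input/output pairs, and a breadth-first search answers the mere convertibility question, which is the ordered-monoid shadow of the resource theory.

    from collections import Counter

    # reactions as (inputs, outputs); electrolysis and its reverse
    reactions = [
        (Counter({'H2O': 2}), Counter({'O2': 1, 'H2': 2})),
        (Counter({'O2': 1, 'H2': 2}), Counter({'H2O': 2})),
    ]

    def convertible(start, goal, max_steps=10):
        """Can the multiset `start` be turned into one containing `goal`?"""
        frontier, seen = [Counter(start)], set()
        for _ in range(max_steps):
            new_frontier = []
            for state in frontier:
                if all(state[r] >= n for r, n in goal.items()):
                    return True
                for ins, outs in reactions:
                    if all(state[r] >= n for r, n in ins.items()):
                        nxt = state - ins + outs
                        key = frozenset(nxt.items())
                        if key not in seen:
                            seen.add(key)
                            new_frontier.append(nxt)
            frontier = new_frontier
        return False

    print(convertible(Counter({'H2O': 4}), Counter({'H2': 4})))   # True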

Categorically, I find the other direction easier to state: being a category, the resource theory is enriched over \mathrm{Set}, while a poset is enriched over the poset of truth values or ‘booleans’ \mathbb{B} = \{0,1\}. If we ‘partially decategorify’ by changing the base of enrichment along the functor \mathrm{Set} \to \mathbb{B} that maps the empty set to 0 and any nonempty set to 1, we obtain the ordered monoid corresponding to the resource theory.

But the research programme didn’t start at resource theories either. The starting point was ‘partitioned process theories’.

Here’s an example that guided the definitions. Suppose we have a bunch of labs with interacting quantum systems, separated in space. With enough cooperation and funding, they can do big joint operations on their systems, like create entangled pairs between two locations. For ‘free’, however, they’re limited to classical communication between the locations, although they can do the full range of quantum operations on their local system. So you’ve got a symmetric monoidal category with objects quantum systems and morphisms quantum operations, together with a wide (all-object-including) symmetric monoidal subcategory that contains the morphisms you can do with local quantum operations and classical communication (known as LOCC operations).

This general structure: a symmetric monoidal category (or SMC for short) with a wide symmetric monoidal subcategory, is called a partitioned process theory. We call the morphisms in the SMC processes, and those in the subSMC free processes.

There are a number of methods for building a resource theory (i.e. an SMC) from a partitioned process theory. The unifying idea, though, is that your new SMC has the processes f,g as objects, and a morphism f \to g is a way of using the free processes to build g from f.

But we don’t have to go to fancy sounding quantum situations to find examples of partitioned process theories. Instead, just look at any SMC in which each object is equipped with an algebraic structure. Then the morphisms defining this structure can be taken as our ‘free’ processes.

For example, in a multigraph category every object has the structure of a ‘special commutative Frobenius algebra’. That’s a bit of a mouthful, but John defined it a while back, and examples include categories where morphisms are electrical circuits, and categories where morphisms are signal flow diagrams.

So these categories give partitioned process theories! This idea of partitioning the morphisms into ‘free’ ones and ‘costly’ ones is reminiscent of what I was saying earlier about the operad of wiring diagrams: it’s useful to separate behavioural structure from interconnection structure.

This suggests that we can also view the free processes as generating some sort of operad, that describes the ways we allow ourselves to use free processes to turn processes into other processes. If we really want to roll a big machine out to play with this stuff, framed bicategories may also be interesting; Spivak is already using them to get at questions about operads. But that’s all conjecture, and a bit of a digression.

To get back to the point, this was all just to say that if you find yourself with a bunch of resistors, and you ask ‘what can I build?’, then you’re after the resource theory apparatus.

You can read more about this stuff here:

• Bob Coecke, Tobias Fritz and Rob W. Spekkens, A mathematical theory of resources.

• Tobias Fritz, The mathematical structure of theories of resource convertibility I.


Cospans, Wiring Diagrams, and the Behavioral Approach

5 May, 2015

joint with Brendan Fong

We’re getting ready for the Turin workshop on the Categorical Foundations of Network Theory. So, we’re trying to get our thoughts in order.

Last time we talked about understanding types of networks as categories of decorated cospans. Earlier, David Spivak told us about understanding networks as algebras of an operad. Both these frameworks aim to capture notions of modularity and interconnection. Are they then related? How?

In this post we want to discuss some similarities between decorated cospan categories and algebras for Spivak’s operad of wiring diagrams. The main idea is that the two approaches are ‘essentially’ equivalent, but that compared to decorated cospans, Spivak’s operad formalism puts greater emphasis on the distinction between the ‘duplication’ and ‘deletion’ morphisms and other morphisms in our category.

The precise details are still to be worked out—jump in and help us!

Operads

We begin with a bit about operads in general. Recall that an operad is similar to a category, except that instead of a set \mathrm{hom}(x,y) of morphisms from any object x to any object y, you have a set \mathrm{hom}(x_1,\dots,x_n;y) of operations from any finite list of objects x_1,...,x_n to any object y . If we have an operation f \in \mathrm{hom}(x_1,\dots,x_n;y), we can call x_1, \dots, x_n the inputs of f and call y the output of f .

We can compose operations in an operad. To understand how, it’s easiest to use pictures. We draw an operation in \mathrm{hom}(x_1,\dots,x_n;y) as a little box with n wires coming in and one wire coming out:

The input wires should be labelled with the objects x_1, \dots, x_n and the output wire with the object y, but I haven’t done this.

We are allowed to compose these operations as follows:

as long as the outputs of the operations g_1,\dots,g_n match the inputs of the operation f . The result is a new operation which we call f \circ (g_1,\dots,g_n) .

We demand that there be unary operations 1_x \in \mathrm{hom}(x;x) serving as identities for composition, and we impose an associative law that makes a composite of composites like this well-defined:

So far this is the definition of an operad without permutations. In a full-fledged permutative operad, we can also permute the inputs of an operation f and get a new operation:

which we call f \sigma if \sigma is the permutation of the inputs. We demand that (f \sigma) \sigma' = f (\sigma \sigma') . And finally, we demand that permutations act in a way that is compatible with composition. For example:

Here we see that (f \sigma) \circ (g_1, \dots, g_n) is equal to some obvious other thing.

Finally, there is a law saying

f \circ (g_1 \sigma_1, \dots, g_n \sigma_n) = (f \circ (g_1 , \dots, g_n)) \sigma

for some choice of \sigma that you can cook up from the permutations \sigma_i in an obvious way. We leave it as an exercise to work out the details. By the way, one well-known book on operads accidentally omits this law, so here’s a rather more lengthy exercise: read this book, see which theorems require this law, and correct their proofs!

Operads are similar to symmetric monoidal categories. The idea is that in a symmetric monoidal category you can just form the tensor product x_1 \otimes \dots \otimes x_n and talk about the set of morphisms x_1 \otimes \cdots \otimes x_n \to y . Indeed any symmetric monoidal category gives an operad in this way: just define \mathrm{hom}(x_1,...,x_n;y) to be \mathrm{hom}(x_1 \otimes \cdots \otimes x_n, y) . If we do this with Set, which is a symmetric monoidal category using the usual cartesian product of sets, we get an operad called \mathrm{Set}.
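
To make this concrete, here’s the operad \mathrm{Set} in Python (my own sketch): an operation with n inputs is a function of n arguments, and operadic composition plugs the outputs of the g_i into f, with the arguments divided up according to the arities of the g_i.

    def operad_compose(f, gs, arities):
        """Form f o (g_1, ..., g_n) in the operad Set.

        f takes len(gs) arguments; gs[i] takes arities[i] arguments;
        the composite takes sum(arities) arguments."""
        def composite(*args):
            outs, pos = [], 0
            for g, k in zip(gs, arities):
                outs.append(g(*args[pos:pos + k]))
                pos += k
            return f(*outs)
        return composite

    add = lambda x, y: x + y
    mul = lambda x, y: x * y
    neg = lambda x: -x

    h = operad_compose(add, [mul, neg], [2, 1])   # h(w, x, y) = w*x + (-y)
    print(h(2, 3, 4))                             # 2*3 - 4 = 2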

An algebra for an operad O is an operad homomorphism O \to \mathrm{Set}. We haven’t said what an operad homomorphism is, but you can probably figure it out yourself. The point is this: an algebra for O turns the abstract operations in O into actual operations on sets!

Finally, we should warn you that operads come in several flavors, and we’ve been talking about ‘typed permutative operads’. ‘Typed’ means that there’s more than one object; ‘permutative’ means that we have the ability to permute the input wires. When people say ‘operad’, they often mean an untyped permutative operad. For that, just specialize down to the case where there’s only one object x.

You can see a fully precise definition of untyped permutative operads here:

• Operad theory, Wikipedia.

along with the definition of an untyped operad without permutations.

The operad of wiring diagrams

Spivak’s favorite operad is the operad of wiring diagrams. The operad of wiring diagrams WD is an operad version of \mathrm{Cospan}(\mathrm{FinSet}), constructed in the vein suggested above: the objects are finite sets, and an operation from a list of sets X_1,...,X_n to a set Y is a cospan

X_1+ \cdots +X_n \rightarrow S \leftarrow Y

Spivak draws such a thing as a big circle with n small circles cut out from the interior:

The outside of the big circle has a set Y of terminals marked on it, and each small circle has a set X_i of terminals marked on it. Then in the interior of this shape there are wires connecting these terminals. This is what he calls a wiring diagram.

You compose these wiring diagrams by pasting other wiring diagrams into each of the small circles.

The relationship with our Frobenius monoid diagrams is pretty simple: we draw our ‘wiring diagrams’ X \to Y in a square, with the X terminals on the left and Y terminals on the right. To get a Spivak-approved wiring diagram, glue the top and bottom edges of this square together, then flatten the cylinder you get down into an annulus, with the X-side on the inside and Y-side on the outside. If X = X_1+X_2 you can imagine gluing opposite edges of the inside circle together to divide it into two small circles accordingly, and so on.

Relational algebras of type A

Algebras for wiring diagrams tell you what components you have available to wire together with your diagrams. An algebra for the operad of wiring diagrams is an operad homomorphism

WD \to \mathrm{Set}

What does this look like? Just as with a functor between categories, it assigns to each natural number a set, and to each wiring diagram a function.

In work related to decorated cospans (such as our paper on circuits or John and Jason’s work on signal flow diagrams), our semantics is usually constructed from a field of values—not a physicist’s ‘field’, but an algebraist’s sort of ‘field’, where you can add, multiply, subtract and divide. For example, we like being able to assign a real number like a velocity, or potential, or current to a variable. This gives us vector spaces and a bunch of nice linear-algebraic structures.

Spivak works more generally: he’s interested in the structure when you just have a set of values. While this means we can’t do some of the nice things we could do with a field, it also means this framework can do things like talk about logic gates, where the variables are boolean ones, or number theoretic questions, where you’re interested in the natural numbers.

So to discuss semantics we pick a set A of values, such as the real numbers or natural numbers or booleans or colors. We imagine then associating elements of this set to each wire in a wiring diagram. More technically, the algebra

\mathrm{Rel}A: WD \to \mathrm{Set}

then maps each finite set X to the power set \mathcal{P}(A^X) of the set A^X of functions X \to A .

On the morphisms (the wiring diagrams themselves), this functor behaves as follows. Note that a function X \to A can be thought of as an ‘X-vector’ (a_1,\dots,a_{|X|}) of ‘A-coordinates’. A wiring diagram X \to Y is just a cospan

X \to N  \leftarrow Y

in \mathrm{FinSet}, so it can be thought of as some compares

X \to N

followed by some copies

N \to Y

Thus, given a wiring diagram X \to Y, we get a partial function: it takes an X-vector, performs the compares, and if the vector passes them, performs the copies and returns the resulting Y-vector; if not, it returns ‘undefined’. We can then define a map \mathcal{P}(A^X) \to \mathcal{P}(A^Y) which takes a set of X-vectors to its image under this partial function.
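
In Python, the action of a wiring diagram on a set of X-vectors looks like this (my own sketch; it assumes every node of N is hit by the left leg, so that ‘compare then copy’ really is a partial function as described):

    def act(i, o, vectors):
        """Act by the cospan X --i--> N <--o-- Y on a set of X-vectors.

        i: list sending each element of X to a node of N (assumed surjective)
        o: list sending each element of Y to a node of N
        vectors: set of tuples indexed by X, with entries in the value set A"""
        result = set()
        for v in vectors:
            node_val, ok = {}, True
            for x, node in enumerate(i):
                if node in node_val and node_val[node] != v[x]:
                    ok = False    # fails a compare: one node, two different values
                    break
                node_val[node] = v[x]
            if ok:
                result.add(tuple(node_val[node] for node in o))   # the copies
        return result

    # merge both input terminals into one node, then read it out twice
    print(act([0, 0], [0, 0], {(1, 1), (1, 2)}))   # {(1, 1)}: (1, 2) fails the compare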

This semantics is called the relational WD-algebra of type A. We can think of it as being like the ‘light operations’ fragment of the signal flow calculus. By ‘light operations’, we mean the operations of duplication and deletion, which form a cocommutative comonoid:

and their time-reversed versions, ‘coduplication’ and ‘codeletion’, which form a commutative monoid:

These fit together to form a Frobenius monoid, meaning that these equations hold:

And it’s actually extra-special, meaning that these equations hold:

(If you don’t understand these hieroglyphics, please reread our post about categories in control theory, and ask some questions!)

Note that we can’t do the ‘dark operations’, because we only have a set A of values, not a field, and the dark operations involve addition and zero!

Operads and the behavioral approach

In formulating Frobenius monoids this way, Spivak achieves something that we’ve been working hard to find ways to achieve: a separation of the behavioral structure from the interconnection structure.

What do I mean by this? In his ‘behavioral approach’, Willems makes the point that for all their elaborate and elegant formulation, in the end physical laws just come down to dividing the set of what might a priori be possible (the ‘universum’) into the set of things that actually are possible (the ‘behavior’) and the set of things that aren’t. Here the universum is the set A^X: a priori, on each of the wires in X, we might associate any value of A . For example, to the two wires at the ends of a resistor, we might a priori associate any pair of currents. But physical law, here Kirchhoff’s current law, says otherwise: the currents must be equal and opposite. So the ‘behavior’ is the subset \{(i,-i) : i \in \mathbb{R}\} of the universum \mathbb{R}^2.

So you can say that to each object X in the operad of wiring diagrams the relational algebra of type A associates the set \mathcal{P}(A^X) of possible behaviors—the universum is A^X . (\mathcal{P}(A^X) forms some sort of meta-universum, where you can discuss physical laws about physical laws, commonly called ‘principles’.)

The second key aspect of the behavioral approach is that the behaviors of larger systems can be constructed from the behaviors of its subsystems, if we understand the so-called ‘interconnection structure’ well enough. This is a key principle in engineering: we build big, complicated systems from a much smaller set of components, whether it be electronics from resistors and inductors, or mechanical devices from wheels and rods and ropes, or houses from Lego bricks. The various interconnection structures here are the wiring diagrams, and our relational algebras say they act by what Willems calls ‘variable sharing’.

This division between behavior and interconnection motivates the decorated cospan construction (where the decorations are the ‘components’, the cospans the ‘interconnection’) and also the multigraph categories discussed by Aleks Kissinger (where morphisms are the ‘components’, and the Frobenius monoid operations are the ‘interconnection’):

• Aleks Kissinger, Finite matrices are complete for (dagger-)multigraph categories.

So it’s good to have this additional way of thinking about things in our repertoire: operads describe ‘interconnection’, their algebras ‘behaviors’.

The separation Spivak achieves, however, seems to me to come at the cost of neat ways to talk about individual components, and perhaps this can be seen as the essential difference between the two approaches. By including our components as morphisms, we can talk more carefully about them and additional structure individual components have. On the other hand, by lumping all the components into the objects, Spivak can talk more carefully about how the interconnection structure acts on all behaviors at once.

Other operads of wiring diagrams

One advantage of the operad approach is that you can easily tweak your operad to talk about different sorts of network structure. Sometimes you can make similar adjustments with decorated cospans too, such as working over the category of typed finite sets, rather than just finite sets, to discuss networks in which wires have types, and only wires of the same types can be connected together. A physical example is a model of a hydroelectric power plant, where you don’t want to connect a water pipe with an electrical cable! This is also a common technique in computer science, where you don’t want to try to multiply two strings of text, or try to interpret a telephone number as a truth value.

But some modifications are harder to do with decorated cospans. In some other papers, Spivak employs a more restricted operad of wiring diagrams, in which joining wires and terminating wires is not allowed, among other things. He uses this to formalise graphical languages for certain types of discrete-time processes and open dynamical systems, including mode-dependent ones.

For more detail, read these:

• Brendan Fong, Decorated cospans.

• David Spivak, The operad of wiring diagrams: formalizing a graphical language for databases, recursion, and plug-and-play circuits.


Decorated Cospans

1 May, 2015

Last time I talked about a new paper I wrote with Brendan Fong. It’s about electrical circuits made of ‘passive’ components, like resistors, inductors and capacitors. We showed these circuits are morphisms in a category. Moreover, there’s a functor sending each circuit to its ‘external behavior’: what it does, as seen by someone who can only measure voltages and currents at the terminals.

Our paper uses a formalism that Brendan developed here:

• Brendan Fong, Decorated cospans.

The idea here is we may want to take something like a graph with edges labelled by positive numbers:

and say that some of its nodes are ‘inputs’, while others are ‘outputs’:

This lets us treat our labelled graph as a ‘morphism’ from the set X to the set Y.

The point is that we can compose such morphisms. For example, suppose we have another one of these things, going from Y to Z:

Since the points of Y are sitting in both things:

we can glue them together and get a thing going from X to Z:

That’s how we compose these morphisms.
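
Concretely, the gluing is a pushout of finite sets, which we can compute with union–find. Here’s a minimal sketch in Python (names and encodings are mine, chosen for illustration):

    def compose_cospans(f1, g1, n1, f2, g2, n2):
        """Compose X --f1--> N1 <--g1-- Y with Y --f2--> N2 <--g2-- Z.

        Maps are lists of node indices; n1, n2 are the sizes of N1, N2.
        Returns the composite cospan X --f--> N <--g-- Z with N of size n."""
        parent = list(range(n1 + n2))          # nodes of N1 + N2, with N2 shifted

        def find(a):
            while parent[a] != a:
                parent[a] = parent[parent[a]]  # path compression
                a = parent[a]
            return a

        for y in range(len(g1)):               # glue g1(y) to f2(y): the pushout
            parent[find(g1[y])] = find(f2[y] + n1)

        reps = sorted({find(a) for a in range(n1 + n2)})
        index = {r: i for i, r in enumerate(reps)}
        f = [index[find(a)] for a in f1]
        g = [index[find(b + n1)] for b in g2]
        return f, g, len(reps)

    # gluing a single wire to a single wire gives a single wire
    print(compose_cospans([0], [0], 1, [0], [0], 1))   # ([0], [0], 1)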

Note how we’re specifying some nodes of our original thing as inputs and outputs:

We’re using maps from two sets X and Y to the set N of nodes of our graph. And a bit surprisingly, we’re not demanding that these maps be one-to-one. That turns out to be useful—and in general, when doing math, it’s dumb to make your definitions forbid certain possibilities unless you really need to.

So, our thing is really a cospan of finite sets—that is, a diagram of finite sets and functions like this:

together with some extra structure on the set N. This extra structure is what Brendan calls a decoration, and it makes the cospan into a ‘decorated cospan’. In this particular example, a decoration on N is a way of making it into the nodes of a graph with edges labelled by positive numbers. But one could consider many other kinds of decorations: this idea is very general.

To formalize the idea of ‘a kind of decoration’, Brendan uses a functor

F: \mathrm{FinSet} \to \mathrm{Set}

sending each finite set N to a set F(N). This set F(N) is the set of decorations of the given kind that we can put on N.

So, for any such functor F, a decorated cospan of finite sets is a cospan of finite sets:

together with an element of F(N).

But in fact, Brendan goes further. He’s not content to use a functor

F: \mathrm{FinSet} \to \mathrm{Set}

to decorate his cospans.

First, there’s no need to limit ourselves to cospans of finite sets: we can replace \mathrm{FinSet} with some other category! If C is any category with finite colimits, there’s a category \mathrm{Cospan}(C) with:

• objects of C as its objects,
• isomorphism classes of cospans between these as morphisms.

Second, there’s no need to limit ourselves to decorations that are elements of a set: we can replace \mathrm{Set} with some other category! If D is any symmetric monoidal category, we can define an element of an object d \in D to be a morphism

f: I \to d

where I is the unit for the tensor product in D.

So, Brendan defines decorated cospans at this high level of generality, and shows that under some mild conditions we can compose them, just as in the pictures we saw earlier.

Here’s one of the theorems Brendan proves:

Theorem. Suppose C is a category with finite colimits, and make C into a symmetric monoidal category with its coproduct as the tensor product. Suppose D is a symmetric monoidal category, and suppose F: C \to D is a lax symmetric monoidal functor. Define an F-decorated cospan to be a cospan

in C together with an element of F(N). Then there is a category with

• objects of C as its objects,
• isomorphism classes of F-decorated cospans as its morphisms.

This is called the F-decorated cospan category, FCospan. This category becomes symmetric monoidal in a natural way. It is then a dagger compact category.

(You may not know all this jargon, but ‘lax symmetric monoidal’, for example, talks about how we can take decorations on two things and get a decoration on their disjoint union, or ‘coproduct’. We need to be able to do this—as should be obvious from the pictures I drew. Also, a ‘dagger compact category’ is the kind of category whose morphisms can be drawn as networks.)

Brendan also explains how to get functors between decorated cospan categories. We need this in our paper on electrical circuits, because we consider several categories where a morphism is a circuit, or something that captures some aspect of a circuit. Most of these categories are decorated cospan categories. We want to get functors between them. And often we can just use Brendan’s general results to get the job done! No fuss, no muss: all the hard work has been done ahead of time.

I expect to use this technology a lot in my work on network theory.

