The Physics of Butterfly Wings

11 August, 2015



Some butterflies have shiny, vividly colored wings. From different angles you see different colors. This effect is called iridescence. How does it work?

It turns out these butterfly wings are made of very fancy materials! Light bounces around inside these materials in a tricky way. Sunlight of different colors winds up reflecting off these materials in different directions.

We’re starting to understand the materials and make similar substances in the lab. They’re called photonic crystals. They have amazing properties.

Here at the Centre for Quantum Technologies we have people studying exotic materials of many kinds. Next door, there’s a lab completely devoted to studying graphene: crystal sheets of carbon in which electrons can move as if they were massless particles! Graphene has a lot of potential for building new technologies—that’s why Singapore is pumping money into researching it.

Some physicists at MIT just showed that one of the materials in butterfly wings might act like a 3d form of graphene. In graphene, electrons can only move easily in 2 directions. In this new material, electrons could move in all 3 directions, acting as if they had no mass.

The pictures here show the microscopic structure of two materials found in butterfly wings:

The picture at left actually shows a sculpture made by the mathematical artist Bathsheba Grossman. But it’s a piece of a gyroid: a surface with a very complicated shape, which repeats forever in 3 directions. It’s called a minimal surface because you can’t shrink its area by tweaking it just a little. It divides space into two regions.

The gyroid was discovered in 1970 by a mathematician, Alan Schoen. It’s a triply periodic minimal surface, meaning one that repeats itself in 3 different directions in space, like a crystal.


Schoen was working for NASA, and his idea was to use the gyroid for building ultra-light, super-strong structures. But that didn’t happen. Research doesn’t move in predictable directions.

In 1983, people discovered that in some mixtures of oil and water, the oil naturally forms a gyroid. The sheets of oil try to minimize their area, so it’s not surprising that they form a minimal surface. Something else makes this surface be a gyroid—I’m not sure what.

Butterfly wings are made of a hard material called chitin. Around 2008, people discovered that the chitin in some iridescent butterfly wings is made in a gyroid pattern! The spacing in this pattern is very small, about one wavelength of visible light. This makes light move through this material in a complicated way, which depends on the light’s color and the direction it’s moving.

So: butterflies have naturally evolved a photonic crystal based on a gyroid!

The universe is awesome, but it’s not magic. A mathematical pattern is beautiful if it’s a simple solution to at least one simple problem. This is why beautiful patterns naturally bring themselves into existence: they’re the simplest ways for certain things to happen. Darwinian evolution helps out: it scans through trillions of possibilities and finds solutions to problems. So, we should expect life to be packed with mathematically beautiful patterns… and it is.

The picture at right above shows a ‘double gyroid’. Here it is again:

This is actually two interlocking surfaces, shown in red and blue. You can get them by writing the gyroid as a level surface:

f(x,y,z) = 0

and taking the two nearby surfaces

f(x,y,z) = \pm c

for some small value of c.
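If you want to play with this yourself, here’s a little sketch (my own, using the standard trigonometric approximation to the gyroid rather than the exact minimal surface): take g(x,y,z) = sin x cos y + sin y cos z + sin z cos x, whose zero set closely approximates the gyroid, and extract the pair of level sets g = ±c to get the two sheets of the double gyroid. The surface extraction uses marching cubes from scikit-image.

# A minimal sketch: the trigonometric approximation to the gyroid and the
# two offset level sets that make up a double gyroid.
import numpy as np
from skimage import measure  # marching cubes, for extracting level surfaces

n = 80                                    # grid resolution per axis
t = np.linspace(0, 2 * np.pi, n)          # one periodic cell
x, y, z = np.meshgrid(t, t, t, indexing='ij')
g = np.sin(x)*np.cos(y) + np.sin(y)*np.cos(z) + np.sin(z)*np.cos(x)

c = 0.8                                   # small offset: the two sheets of the double gyroid
for level in (+c, -c):
    verts, faces, _, _ = measure.marching_cubes(g, level=level)
    print(f"level {level:+.1f}: {len(verts)} vertices, {len(faces)} triangles")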

It turns out that while they’re still growing, some butterflies have a double gyroid pattern in their wings. This turns into a single gyroid when they grow up!

The new research at MIT studied how an electron would move through a double gyroid pattern. They calculated its dispersion relation: how the speed of the electron would depend on its energy and the direction it’s moving.

An ordinary particle moves faster if it has more energy. But a massless particle, like a photon, moves at the same speed no matter what energy it has. The MIT team showed that an electron in a double gyroid pattern moves at a speed that doesn’t depend much on its energy. So, in some ways this electron acts like a massless particle.

But it’s quite different from a photon. It’s actually more like a neutrino! You see, unlike photons, electrons and neutrinos are spin-1/2 particles. Neutrinos are almost massless. A massless spin-1/2 particle can have a built-in handedness, spinning in only one direction around its axis of motion. Such a particle is called a Weyl spinor. The MIT team showed that an electron moving through a double gyroid acts approximately like a Weyl spinor!
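To see what ‘acting like a Weyl spinor’ means concretely, here’s a tiny sketch (mine, not the MIT calculation): near a Weyl point the effective Hamiltonian is H(k) = v(σ_x k_x + σ_y k_y + σ_z k_z), whose eigenvalues E = ±v|k| are linear in momentum, so the speed dE/dk is the constant v no matter what the energy is.

# A toy check that the Weyl Hamiltonian gives a linear, energy-independent dispersion.
import numpy as np

sx = np.array([[0, 1], [1, 0]], dtype=complex)
sy = np.array([[0, -1j], [1j, 0]], dtype=complex)
sz = np.array([[1, 0], [0, -1]], dtype=complex)

v = 1.0                                    # the effective 'speed of light' of the quasiparticle
for k in ([0.1, 0.0, 0.0], [0.3, 0.2, 0.1], [1.0, -0.5, 0.7]):
    kx, ky, kz = k
    H = v * (kx*sx + ky*sy + kz*sz)
    E = np.linalg.eigvalsh(H)              # two bands, E = -v|k| and E = +v|k|
    print(k, np.round(E, 4), "vs ±v|k| =", round(v * np.linalg.norm(k), 4))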

How does this work? Well, the key fact is that the double gyroid has a built-in handedness, or chirality. It comes in a left-handed and right-handed form. You can see the handedness quite clearly in Grossman’s sculpture of the ordinary gyroid:

Beware: nobody has actually made electrons act like Weyl spinors in the lab yet. The MIT team just found a way that should work. Someday someone will actually make it happen, probably in less than a decade. And later, someone will do amazing things with this ability. I don’t know what. Maybe the butterflies know!

References and more

For a good introduction to the physics of gyroids, see:

• James A. Dolan, Bodo D. Wilts, Silvia Vignolini, Jeremy J. Baumberg, Ullrich Steiner and Timothy D. Wilkinson, Optical properties of gyroid structured materials: from photonic crystals to metamaterials, Advanced Optical Materials 3 (2015), 12–32.

For some of the history and math of gyroids, see Alan Schoen’s webpage:

• Alan Schoen, Triply-periodic minimal surfaces.

For more on gyroids in butterfly wings, see:

• K. Michielsen and D. G. Stavenga, Gyroid cuticular structures in butterfly wing scales: biological photonic crystals.

• Vinodkumar Saranathan et al., Structure, function, and self-assembly of single network gyroid (I4132) photonic crystals in butterfly wing scales, PNAS 107 (2010), 11676–11681.

The paper by Michielsen and Stavenga is free online! They say the famous ‘blue Morpho’ butterfly shown in the picture at the top of this article does not use a gyroid; it uses a “two-dimensional photonic crystal slab consisting of arrays of rectangles formed by lamellae and microribs.” But they find gyroids in four other species: Callophrys rubi, Cyanophrys remus, Parides sesostris and Teinopalpus imperialis. They compare transmission electron microscope pictures of slices of their iridescent patches with computer-generated slices of gyroids. The comparison looks pretty good to me:

For the evolution of iridescence, see:

• Melissa G. Meadows et al, Iridescence: views from many angles, J. Roy. Soc. Interface 6 (2009).

For the new research at MIT, see:

• Ling Lu, Liang Fu, John D. Joannopoulos and Marin Soljačić, Weyl points and line nodes in gapless gyroid photonic crystals.

• Ling Lu, Zhiyu Wang, Dexin Ye, Lixin Ran, Liang Fu, John D. Joannopoulos and Marin Soljačić, Experimental observation of Weyl points, Science 349 (2015), 622–624.

Again, the first is free online. There’s a lot of great math lurking inside, most of which is too mind-blowing to explain quickly. Let me just paraphrase the start of the paper, so at least experts can get the idea:

Two-dimensional (2d) electrons and photons at the energies and frequencies of Dirac points exhibit extraordinary features. As the best example, almost all the remarkable properties of graphene are tied to the massless Dirac fermions at its Fermi level. Topologically, Dirac cones are not only the critical points for 2d phase transitions but also the unique surface manifestation of a topologically gapped 3d bulk. In a similar way, it is expected that if a material could be found that exhibits a 3d linear dispersion relation, it would also display a wide range of interesting physics phenomena. The associated 3D linear point degeneracies are called “Weyl points”. In the past year, there have been a few studies of Weyl fermions in electronics. The associated Fermi-arc surface states, quantum Hall effect, novel transport properties and a realization of the Adler–Bell–Jackiw anomaly are also expected. However, no observation of Weyl points has been reported. Here, we present a theoretical discovery and detailed numerical investigation of frequency-isolated Weyl points in perturbed double-gyroid photonic crystals along with their complete phase diagrams and their topologically protected surface states.

Also a bit for the mathematicians:

Weyl points are topologically stable objects in the 3d Brillouin zone: they act as monopoles of Berry flux in momentum space, and hence are intimately related to the topological invariant known as the Chern number. The Chern number can be defined for a single bulk band or a set of bands, where the Chern numbers of the individual bands are summed, on any closed 2d surface in the 3d Brillouin zone. The difference of the Chern numbers defined on two surfaces, of all bands below the Weyl point frequencies, equals the sum of the chiralities of the Weyl points enclosed in between the two surfaces.

This is a mix of topology and physics jargon that may be hard for pure mathematicians to understand, but I’ll be glad to translate if there’s interest.

For starters, a ‘monopole of Berry flux in momentum space’ is a poetic way of talking about a twisted complex line bundle over the space of allowed energy-momenta of the electron in the double gyroid. We get a twist at every ‘Weyl point’, meaning a point where the dispersion relations look locally like those of a Weyl spinor when its energy-momentum is near zero. Near such a point, the dispersion relations are a Fourier-transformed version of the Weyl equation.


Trends in Reaction Network Theory (Part 2)

1 July, 2015

Here in Copenhagen we’ll soon be having a bunch of interesting talks on chemical reaction networks:

Workshop on Mathematical Trends in Reaction Network Theory, 1-3 July 2015, Department of Mathematical Sciences, University of Copenhagen. Organized by Elisenda Feliu and Carsten Wiuf.

Looking through the abstracts, here are a couple that strike me.

First of all, Gheorghe Craciun claims to have proved the biggest open conjecture in this field: the Global Attractor Conjecture!

• Gheorghe Craciun, Toric differential inclusions and a proof of the global attractor conjecture.

This famous old conjecture says that for a certain class of chemical reactions, the ones coming from ‘complex balanced reaction networks’, the chemicals will approach equilibrium no matter what their initial concentrations are. Here’s what Craciun says:

Abstract. In a groundbreaking 1972 paper Fritz Horn and Roy Jackson showed that a complex balanced mass-action system must have a unique locally stable equilibrium within any compatibility class. In 1974 Horn conjectured that this equilibrium is a global attractor, i.e., all solutions in the same compatibility class must converge to this equilibrium. Later, this claim was called the Global Attractor Conjecture, and it was shown that it has remarkable implications for the dynamics of large classes of polynomial and power-law dynamical systems, even if they are not derived from mass-action kinetics. Several special cases of this conjecture have been proved during the last decade. We describe a proof of the conjecture in full generality. In particular, it will follow that all detailed balanced mass action systems and all deficiency zero mass-action systems have the global attractor property. We will also discuss some implications for biochemical mechanisms that implement noise filtering and cellular homeostasis.
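To get a feeling for what the conjecture says, here’s the most trivial example I can think of (mine, not from Craciun’s talk): the reversible reaction A ⇌ B with mass-action kinetics is detailed balanced, the total a + b is conserved, and every initial condition with the same total converges to the same equilibrium.

# Toy example: A <-> B with rate constants k1, k2. Every trajectory in the same
# compatibility class (same total a + b) converges to one equilibrium.
import numpy as np
from scipy.integrate import solve_ivp

k1, k2 = 2.0, 1.0
def rhs(t, y):
    a, b = y
    return [-k1*a + k2*b, k1*a - k2*b]

total = 3.0                                 # the conserved quantity fixing the compatibility class
for a0 in (0.0, 1.0, 2.5, 3.0):
    sol = solve_ivp(rhs, (0, 10), [a0, total - a0])
    print(f"a0 = {a0}: a(10) ≈ {sol.y[0, -1]:.4f}   (equilibrium a* = {k2*total/(k1+k2):.4f})")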

Manoj Gopalkrishnan wrote a great post explaining the concept of complex balanced reaction network here on Azimuth, so if you want to understand the conjecture you could start there.

Even better, Manoj is talking here about a way to do statistical inference with chemistry! His talk is called ‘Statistical inference with a chemical soup’:

Abstract. The goal is to design an “intelligent chemical soup” that can do statistical inference. This may have niche technological applications in medicine and biological research, as well as provide fundamental insight into the workings of biochemical reaction pathways. As a first step towards our goal, we describe a scheme that exploits the remarkable mathematical similarity between log-linear models in statistics and chemical reaction networks. We present a simple scheme that encodes the information in a log-linear model as a chemical reaction network. Observed data is encoded as initial concentrations, and the equilibria of the corresponding mass-action system yield the maximum likelihood estimators. The simplicity of our scheme suggests that molecular environments, especially within cells, may be particularly well suited to performing statistical computations.

It’s based on this paper:

• Manoj Gopalkrishnan, A scheme for molecular computation of maximum likelihood estimators for log-linear models.

I’m not sure, but this idea may exploit existing analogies between the approach to equilibrium in chemistry, the approach to equilibrium in evolutionary game theory, and statistical inference. You may have read Marc Harper’s post about that stuff!

David Doty is giving a broader review of ‘Computation by (not about) chemistry’:

Abstract. The model of chemical reaction networks (CRNs) is extensively used throughout the natural sciences as a descriptive language for existing chemicals. If we instead think of CRNs as a programming language for describing artificially engineered chemicals, what sorts of computations are possible for these chemicals to achieve? The answer depends crucially on several formal choices:

1) Do we treat matter as infinitely divisible (real-valued concentrations) or atomic (integer-valued counts)?

2) How do we represent the input and output of the computation (e.g., Boolean presence or absence of species, positive numbers directly represented by counts/concentrations, positive and negative numbers represented indirectly by the difference between counts/concentrations of a pair of species)?

3) Do we assume mass-action rate laws (reaction rates proportional to reactant counts/concentrations) or do we insist the system works correctly under a broader class of rate laws?

The talk will survey several recent results and techniques. A primary goal of the talk is to convey the “programming perspective”: rather than asking “What does chemistry do?”, we want to understand “What could chemistry do?” as well as “What can chemistry provably not do?”
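As a toy illustration of the ‘atomic counts’ semantics in point 1 (my own example, not from the talk, though it’s a standard one in this literature): the single reaction X1 + X2 → Y drives the count of Y to min(x1, x2), so this tiny network ‘computes’ the minimum of its two inputs.

# Gillespie-style simulation of X1 + X2 -> Y, which computes Y = min(x1, x2).
import random

def simulate_min(x1, x2):
    y, t = 0, 0.0
    while x1 > 0 and x2 > 0:
        propensity = x1 * x2               # mass-action propensity of the only reaction
        t += random.expovariate(propensity)
        x1, x2, y = x1 - 1, x2 - 1, y + 1  # one firing: consume one X1, one X2, make one Y
    return y, t

print(simulate_min(17, 5))                 # Y ends at min(17, 5) = 5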

I’m really interested in chemical reaction networks that appear in biological systems, and there will be lots of talks about that. For example, Ovidiu Radulescu will talk about ‘Taming the complexity of biochemical networks through model reduction and tropical geometry’. Model reduction is the process of simplifying complicated models while preserving at least some of their good features. Tropical geometry is a cool version of algebraic geometry that uses the real numbers with minimization as addition and addition as multiplication (there’s a tiny illustration of this arithmetic after the abstract below). This number system underlies the principle of least action, or the principle of minimum energy. Here is Radulescu’s abstract:

Abstract. Biochemical networks are used as models of cellular physiology with diverse applications in biology and medicine. In the absence of objective criteria to detect essential features and prune secondary details, networks generated from data are too big and therefore out of the applicability of many mathematical tools for studying their dynamics and behavior under perturbations. However, under circumstances that we can generically denote by multi-scaleness, large biochemical networks can be approximated by smaller and simpler networks. Model reduction is a way to find these simpler models that can be more easily analyzed. We discuss several model reduction methods for biochemical networks with polynomial or rational rate functions and propose as their common denominator the notion of tropical equilibration, meaning finite intersection of tropical varieties in algebraic geometry. Using tropical methods, one can strongly reduce the number of variables and parameters of biochemical network. For multi-scale networks, these reductions are computed symbolically on orders of magnitude of parameters and variables, and are valid in wide domains of parameter and phase spaces.
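And here’s the promised tiny illustration (mine) of min-plus arithmetic: ‘addition’ is minimization, ‘multiplication’ is ordinary addition, and a tropical polynomial becomes a piecewise-linear function.

# Tropical (min-plus) arithmetic: the 'sum' is min, the 'product' is ordinary +.
INF = float('inf')        # the tropical 'zero': taking the min with it changes nothing

def t_add(a, b):
    return min(a, b)

def t_mul(a, b):
    return a + b

# The tropical polynomial x^2 ⊕ 3x ⊕ 5 becomes min(2x, x + 3, 5):
def trop_poly(x):
    return t_add(t_add(t_mul(x, x), t_mul(3, x)), 5)

for x in [-2, 0, 1, 2, 4, 10]:
    print(x, trop_poly(x))   # piecewise-linear in x, with 'corners' where two terms tie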

I’m talking about the analogy between probabilities and quantum amplitudes, and how this makes chemistry analogous to particle physics. You can see two versions of my talk here, but I’ll be giving the ‘more advanced’ version, which is new:

• Probabilities versus amplitudes.

Abstract. Some ideas from quantum theory are just beginning to percolate back to classical probability theory. For example, the master equation for a chemical reaction network describes the interactions of molecules in a stochastic rather than quantum way. If we look at it from the perspective of quantum theory, this formalism turns out to involve creation and annihilation operators, coherent states and other well-known ideas—but with a few big differences.
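To make ‘the master equation for a chemical reaction network’ concrete, here’s a minimal sketch (mine, not from the talk) for the simplest reaction A → B with rate constant r: the state is the number n of A molecules left, probability flows from n to n − 1 at rate rn, and we can integrate the resulting linear equation with a matrix exponential.

# Master equation for A -> B: dP/dt = H P, integrated with a matrix exponential.
import numpy as np
from scipy.linalg import expm

N, r = 10, 1.0
H = np.zeros((N + 1, N + 1))
for n in range(1, N + 1):
    H[n, n]     -= r * n           # probability leaving state n
    H[n - 1, n] += r * n           # ...and arriving at state n - 1

P0 = np.zeros(N + 1); P0[N] = 1.0  # start with exactly N molecules of A
for t in (0.0, 0.5, 1.0, 2.0):
    P = expm(t * H) @ P0
    mean = np.dot(np.arange(N + 1), P)
    print(f"t = {t}: <n_A> = {mean:.3f}   (exact: {N * np.exp(-r * t):.3f})")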

Anyway, there are a lot more talks, but if I don’t have breakfast and walk over to the math department, I’ll miss those talks!

You can learn more about individual talks in the comments here (see below) and also in Matteo Polettini’s blog:

• Matteo Polettini, Mathematical trends in reaction network theory: part 1 and part 2, Out of Equilibrium, 1 July 2015.


Information and Entropy in Biological Systems (Part 7)

6 June, 2015

In 1961, Rolf Landauer argued that the least possible amount of energy required to erase one bit of information stored in memory at temperature T is kT \ln 2, where k is Boltzmann’s constant.

This is called the Landauer limit, and it came after many decades of arguments concerning Maxwell’s demon and the relation between information and entropy.
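Just to have a number in mind (my own back-of-envelope check): at room or body temperature the Landauer limit is a few zeptojoules per bit, or about 0.02 electron volts.

# kT ln 2 at room temperature and body temperature.
import math

k = 1.380649e-23                 # Boltzmann's constant, J/K
eV = 1.602176634e-19             # joules per electron volt
for T in (300.0, 310.0):
    E = k * T * math.log(2)      # least energy needed to erase one bit, in joules
    print(f"T = {T} K:  kT ln 2 = {E:.3e} J  =  {E/eV:.4f} eV")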

In fact, these arguments are still not finished. For example, here’s an argument that the Landauer limit is not as solid as widely believed:

• John D. Norton, Waiting for Landauer, Studies in History and Philosophy of Modern Physics 42 (2011), 184–198.

But something like the Landauer limit almost surely holds under some conditions! And if it holds, it puts some limits on what organisms can do. That’s what David Wolpert spoke about at our workshop! You can see his slides here:

• David Wolpert, The Landauer limit and thermodynamics of biological organisms.

You can also watch a video:


Information and Entropy in Biological Systems (Part 6)

1 June, 2015

The resounding lack of comment to this series of posts confirms my theory that a blog post that says “go somewhere else and read something” will never be popular. Even if it’s “go somewhere else and watch a video”, this is too much like saying

Hi! Want to talk? Okay, go into that other room and watch TV, then come back when you’re done and we’ll talk about it.

But no matter: our workshop on Information and Entropy in Biological Systems was really exciting! I want to make it available to the world as much as possible. I’m running around too much to create lovingly hand-crafted summaries of each talk—and I know you’re punishing me for that, with your silence. But I’ll keep on going, just to get the material out there.

Marc Harper spoke about information in evolutionary game theory, and we have a nice video of that. I’ve been excited about his work for quite a while, because it shows that the analogy between ‘evolution’ and ‘learning’ can be made mathematically precise. I summarized some of his ideas in my information geometry series, and I’ve also gotten him to write two articles for this blog:

• Marc Harper, Relative entropy in evolutionary dynamics, Azimuth, 22 January 2014.

• Marc Harper, Stationary stability in finite populations, Azimuth, 24 March 2015.

Here are the slides and video of his talk:

• Marc Harper, Information transport and evolutionary dynamics.
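If you want a quick taste of the ‘evolution as learning’ idea, here’s a toy numerical check (mine, not from Marc’s talk): for replicator dynamics with an interior evolutionarily stable state x*, the relative entropy D(x* || x(t)) is a Lyapunov function, so it only decreases as the population evolves.

# Replicator dynamics for a rock-paper-scissors game where wins outweigh losses;
# the relative entropy to the interior stable state decreases along the trajectory.
import numpy as np
from scipy.integrate import solve_ivp

A = np.array([[ 0, -1,  2],
              [ 2,  0, -1],
              [-1,  2,  0]], dtype=float)   # payoff matrix
x_star = np.array([1/3, 1/3, 1/3])          # interior evolutionarily stable state

def replicator(t, x):
    f = A @ x                               # fitness of each strategy
    return x * (f - x @ f)                  # replicator equation

sol = solve_ivp(replicator, (0, 30), [0.7, 0.2, 0.1], t_eval=np.linspace(0, 30, 7))
for t, x in zip(sol.t, sol.y.T):
    D = np.sum(x_star * np.log(x_star / x)) # relative entropy D(x* || x)
    print(f"t = {t:5.1f}   x = {np.round(x, 3)}   D = {D:.5f}")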


Information and Entropy in Biological Systems (Part 5)

30 May, 2015

John Harte of U. C. Berkeley spoke about the maximum entropy method as a method of predicting patterns in ecology. Annette Ostling of the University of Michigan spoke about some competing theories, such as the ‘neutral model’ of biodiversity—a theory that sounds much too simple to be right, yet fits the data surprisingly well!

We managed to get a video of Ostling’s talk, but not Harte’s. Luckily, you can see the slides of both. You can also see a summary of Harte’s book Maximum Entropy and Ecology:

• John Baez, Maximum entropy and ecology, Azimuth, 21 February 2013.

Here are his talk slides and abstract:

• John Harte, Maximum entropy as a foundation for theory building in ecology.

Abstract. Constrained maximization of information entropy (MaxEnt) yields least-biased probability distributions. In statistical physics, this powerful inference method yields classical statistical mechanics/thermodynamics under the constraints imposed by conservation laws. I apply MaxEnt to macroecology, the study of the distribution, abundance, and energetics of species in ecosystems. With constraints derived from ratios of ecological state variables, I show that MaxEnt yields realistic abundance distributions, species-area relationships, spatial aggregation patterns, and body-size distributions over a wide range of taxonomic groups, habitats and spatial scales. I conclude with a brief summary of some of the major opportunities at the frontier of MaxEnt-based macroecological theory.
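In case MaxEnt itself is unfamiliar, here’s a tiny illustration (mine, with a made-up constraint, not Harte’s ecological state variables): among all distributions on n = 1,…,K with a prescribed mean, the maximum-entropy distribution has the exponential form p(n) ∝ exp(−λn), and we just solve numerically for the multiplier λ that matches the mean.

# Maximum entropy with one constraint: fix the mean, get an exponential distribution.
import numpy as np
from scipy.optimize import brentq

K, target_mean = 50, 5.0
n = np.arange(1, K + 1)

def mean_given(lam):
    w = np.exp(-lam * n)
    p = w / w.sum()
    return np.dot(n, p)

lam = brentq(lambda l: mean_given(l) - target_mean, 1e-6, 10.0)  # solve for the multiplier
p = np.exp(-lam * n); p /= p.sum()
print(f"lambda = {lam:.4f}, mean = {np.dot(n, p):.4f}, entropy = {-(p * np.log(p)).sum():.4f} nats")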

Here is a video of Ostling’s talk, as well as her slides and some papers she recommended:

• Annette Ostling, The neutral theory of biodiversity and other competitors to maximum entropy.

Abstract: I am a bit of the odd man out in that I will not talk that much about information and entropy, but instead about neutral theory and niche theory in ecology. My interest in coming to this workshop is in part out of an interest in what greater insights we can get into neutral models and stochastic population dynamics in general using entropy and information theory.

I will present the niche and neutral theories of the maintenance of diversity of competing species in ecology, and explain the dynamics included in neutral models in ecology. I will also briefly explain how one can derive a species abundance distribution from neutral models. I will present the view that neutral models have the potential to serve as more process-based null models than previously used in ecology for detecting the signature of niches and habitat filtering. However, tests of neutral theory in ecology have not as of yet been as useful as tests of neutral theory in evolutionary biology, because they leave open the possibility that pattern is influenced by “demographic complexity” rather than niches. I will mention briefly some of the work I’ve been doing to try to construct better tests of neutral theory.

Finally I’ll mention some connections that have been made so far between predictions of entropy theory and predictions of neutral theory in ecology and evolution.

These papers present interesting relations between ecology and statistical mechanics. Check out the nice ‘analogy chart’ in the second one!

• M. G. Bowler, Species abundance distributions, statistical mechanics and the priors of MaxEnt, Theoretical Population Biology 92 (2014), 69–77.

Abstract. The methods of Maximum Entropy have been deployed for some years to address the problem of species abundance distributions. In this approach, it is important to identify the correct weighting factors, or priors, to be applied before maximising the entropy function subject to constraints. The forms of such priors depend not only on the exact problem but can also depend on the way it is set up; priors are determined by the underlying dynamics of the complex system under consideration. The problem is one of statistical mechanics and it is the properties of the system that yield the correct MaxEnt priors, appropriate to the way the problem is framed. Here I calculate, in several different ways, the species abundance distribution resulting when individuals in a community are born and die independently. In the usual formulation the prior distribution for the number of species over the number of individuals is 1/n; the problem can be reformulated in terms of the distribution of individuals over species classes, with a uniform prior. Results are obtained using master equations for the dynamics and separately through the combinatoric methods of elementary statistical mechanics; the MaxEnt priors then emerge a posteriori. The first object is to establish the log series species abundance distribution as the outcome of per capita guild dynamics. The second is to clarify the true nature and origin of priors in the language of MaxEnt. Finally, I consider how it may come about that the distribution is similar to log series in the event that filled niches dominate species abundance. For the general ecologist, there are two messages. First, that species abundance distributions are determined largely by population sorting through fractional processes (resulting in the 1/n factor) and secondly that useful information is likely to be found only in departures from the log series. For the MaxEnt practitioner, the message is that the prior with respect to which the entropy is to be maximised is determined by the nature of the problem and the way in which it is formulated.

• Guy Sella and Aaron E. Hirsh, The application of statistical physics to evolutionary biology, Proc. Nat. Acad. Sci. 102 (2005), 9541–9546.

Abstract. A number of fundamental mathematical models of the evolutionary process exhibit dynamics that can be difficult to understand analytically. Here we show that a precise mathematical analogy can be drawn between certain evolutionary and thermodynamic systems, allowing application of the powerful machinery of statistical physics to analysis of a family of evolutionary models. Analytical results that follow directly from this approach include the steady-state distribution of fixed genotypes and the load in finite populations. The analogy with statistical physics also reveals that, contrary to a basic tenet of the nearly neutral theory of molecular evolution, the frequencies of adaptive and deleterious substitutions at steady state are equal. Finally, just as the free energy function quantitatively characterizes the balance between energy and entropy, a free fitness function provides an analytical expression for the balance between natural selection and stochastic drift.


Information and Entropy in Biological Systems (Part 4)

21 May, 2015

I kicked off the workshop on Information and Entropy in Biological Systems with a broad overview of the many ways information theory and entropy get used in biology:

• John Baez, Information and entropy in biological systems.

Abstract. Information and entropy are being used in biology in many different ways: for example, to study biological communication systems, the ‘action-perception loop’, the thermodynamic foundations of biology, the structure of ecosystems, measures of biodiversity, and evolution. Can we unify these? To do this, we must learn to talk to each other. This will be easier if we share some basic concepts which I’ll sketch here.

The talk is full of links, in blue. If you click on these you can get more details. You can also watch a video of my talk:


Information and Entropy in Biological Systems (Part 3)

20 May, 2015

We had a great workshop on information and entropy in biological systems, and now you can see what it was like. I think I’ll post these talks one at a time, or maybe a few at a time, because they’d be overwhelming taken all at once.

So, let’s dive into Chris Lee’s exciting ideas about organisms as ‘information evolving machines’ that may provide ‘disinformation’ to their competitors. Near the end of his talk, he discusses some new results on an ever-popular topic: the Prisoner’s Dilemma. You may know about this classic book:

• Robert Axelrod, The Evolution of Cooperation, Basic Books, New York, 1984. Some passages available free online.

If you don’t, read it now! He showed that the simple ‘tit for tat’ strategy did very well in some experiments where the game was played repeatedly and strategies that did well got to ‘reproduce’ themselves. This result was very exciting, so a lot of people have done research on it. More recently a paper on this subject by William Press and Freeman Dyson received a lot of hype. I think this is a good place to learn about that:

• Mike Shulman, Zero determinant strategies in the iterated Prisoner’s Dilemma, The n-Category Café, 19 July 2012.
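If you’ve never played with this yourself, here’s a toy iterated Prisoner’s Dilemma (my own sketch, not Axelrod’s actual tournament): with the standard payoffs, tit-for-tat loses only slightly to always-defect and does just as well as always-cooperate against cooperators, which is roughly why it did so well overall.

# Iterated Prisoner's Dilemma with the usual payoffs T=5, R=3, P=1, S=0.
PAYOFF = {('C', 'C'): (3, 3), ('C', 'D'): (0, 5),
          ('D', 'C'): (5, 0), ('D', 'D'): (1, 1)}

def tit_for_tat(opp_history):          # cooperate first, then copy the opponent's last move
    return opp_history[-1] if opp_history else 'C'

def all_defect(opp_history):
    return 'D'

def all_cooperate(opp_history):
    return 'C'

def play(s1, s2, rounds=200):
    h1, h2, score1, score2 = [], [], 0, 0
    for _ in range(rounds):
        m1, m2 = s1(h2), s2(h1)        # each strategy sees only the opponent's history
        p1, p2 = PAYOFF[(m1, m2)]
        score1 += p1; score2 += p2
        h1.append(m1); h2.append(m2)
    return score1, score2

for a, b in [(tit_for_tat, all_defect), (tit_for_tat, all_cooperate),
             (all_defect, all_cooperate)]:
    print(a.__name__, 'vs', b.__name__, '->', play(a, b))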

Chris Lee’s new work on the Prisoner’s Dilemma is here, cowritten with two other people who attended the workshop:

• The art of war: beyond memory-one strategies in population games, PLOS One, 24 March 2015.

Abstract. We show that the history of play in a population game contains exploitable information that can be successfully used by sophisticated strategies to defeat memory-one opponents, including zero determinant strategies. The history allows a player to label opponents by their strategies, enabling a player to determine the population distribution and to act differentially based on the opponent’s strategy in each pairwise interaction. For the Prisoner’s Dilemma, these advantages lead to the natural formation of cooperative coalitions among similarly behaving players and eventually to unilateral defection against opposing player types. We show analytically and empirically that optimal play in population games depends strongly on the population distribution. For example, the optimal strategy for a minority player type against a resident tit-for-tat (TFT) population is ‘always cooperate’ (ALLC), while for a majority player type the optimal strategy versus TFT players is ‘always defect’ (ALLD). Such behaviors are not accessible to memory-one strategies. Drawing inspiration from Sun Tzu’s the Art of War, we implemented a non-memory-one strategy for population games based on techniques from machine learning and statistical inference that can exploit the history of play in this manner. Via simulation we find that this strategy is essentially uninvadable and can successfully invade (significantly more likely than a neutral mutant) essentially all known memory-one strategies for the Prisoner’s Dilemma, including ALLC (always cooperate), ALLD (always defect), tit-for-tat (TFT), win-stay-lose-shift (WSLS), and zero determinant (ZD) strategies, including extortionate and generous strategies.

And now for the talk! Click on the talk title here for Chris Lee’s slides, or go down and watch the video:

• Chris Lee, Empirical information, potential information and disinformation as signatures of distinct classes of information evolving machines.

Abstract. Information theory is an intuitively attractive way of thinking about biological evolution, because it seems to capture a core aspect of biology—life as a solution to “information problems”—in a fundamental way. However, there are non-trivial questions about how to apply that idea, and whether it has actual predictive value. For example, should we think of biological systems as being actually driven by an information metric? One idea that can draw useful links between information theory, evolution and statistical inference is the definition of an information evolving machine (IEM) as a system whose elements represent distinct predictions, and whose weights represent an information (prediction power) metric, typically as a function of sampling some iterative observation process. I first show how this idea provides useful results for describing a statistical inference process, including its maximum entropy bound for optimal inference, and how its sampling-based metrics (“empirical information”, Ie, for prediction power; and “potential information”, Ip, for latent prediction power) relate to classical definitions such as mutual information and relative entropy. These results suggest classification of IEMs into several distinct types:

1. Ie machine: e.g. a population of competing genotypes evolving under selection and mutation is an IEM that computes an Ie equivalent to fitness, and whose gradient (Ip) acts strictly locally, on mutations that it actually samples. Its transition rates between steady states will decrease exponentially as a function of evolutionary distance.

2. “Ip tunneling” machine: a statistical inference process summing over a population of models to compute both Ie and Ip can directly detect “latent” information in the observations (not captured by its model), which it can follow to “tunnel” rapidly to a new steady state.

3. disinformation machine (multiscale IEM): an ecosystem of species is an IEM whose elements (species) are themselves IEMs that can interact. When an attacker IEM can reduce a target IEM’s prediction power (Ie) by sending it a misleading signal, this “disinformation dynamic” can alter the evolutionary landscape in interesting ways, by opening up paths for rapid co-evolution to distant steady-states. This is especially true when the disinformation attack targets a feature of high fitness value, yielding a combination of strong negative selection for retention of the target feature, plus strong positive selection for escaping the disinformation attack. I will illustrate with examples from statistical inference and evolutionary game theory. These concepts, though basic, may provide useful connections between diverse themes in the workshop.

