Information and Entropy in Biological Systems (Part 7)

6 June, 2015

In 1961, Rolf Landauer argued that the least possible amount of energy required to erase one bit of information stored in memory at temperature T is kT \ln 2, where k is Boltzmann’s constant.

This is called the Landauer limit, and it came after many decades of arguments concerning Maxwell’s demon and the relation between information and entropy.
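
To get a sense of scale, here is a quick numerical check of the bound (my own back-of-the-envelope sketch; the 310 K body-temperature figure is just an illustrative choice):

```python
import math

k = 1.380649e-23   # Boltzmann's constant in J/K
T = 310.0          # roughly body temperature in kelvin (illustrative choice)

E_bit = k * T * math.log(2)   # Landauer bound: minimum energy to erase one bit
print(f"kT ln 2 at {T} K = {E_bit:.3e} J = {E_bit / 1.602176634e-19:.4f} eV")
# About 3e-21 joules per bit: many orders of magnitude below what
# present-day electronics dissipate per bit operation.
```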

In fact, these arguments are still not finished. For example, here’s an argument that the Landauer limit is not as solid as widely believed:

• John D. Norton, Waiting for Landauer, Studies in History and Philosophy of Modern Physics 42 (2011), 184–198.

But something like the Landauer limit almost surely holds under some conditions! And if it holds, it puts some limits on what organisms can do. That’s what David Wolpert spoke about at our workshop! You can see his slides here:

• David Wolpert, The Landauer limit and thermodynamics of biological organisms.

You can also watch a video:


Information and Entropy in Biological Systems (Part 6)

1 June, 2015

The resounding lack of comment to this series of posts confirms my theory that a blog post that says “go somewhere else and read something” will never be popular. Even if it’s “go somewhere else and watch a video”, this is too much like saying

Hi! Want to talk? Okay, go into that other room and watch TV, then come back when you’re done and we’ll talk about it.

But no matter: our workshop on Information and Entropy in Biological Systems was really exciting! I want to make it available to the world as much as possible. I’m running around too much to create lovingly hand-crafted summaries of each talk—and I know you’re punishing me for that, with your silence. But I’ll keep on going, just to get the material out there.

Marc Harper spoke about information in evolutionary game theory, and we have a nice video of that. I’ve been excited about his work for quite a while, because it shows that the analogy between ‘evolution’ and ‘learning’ can be made mathematically precise. I summarized some of his ideas in my information geometry series, and I’ve also gotten him to write two articles for this blog:

• Marc Harper, Relative entropy in evolutionary dynamics, Azimuth, 22 January 2014.

• Marc Harper, Stationary stability in finite populations, Azimuth, 24 March 2015.

Here are the slides and video of his talk:

• Marc Harper, Information transport and evolutionary dynamics.
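
To see the ‘evolution as learning’ idea in miniature, here is a toy sketch of my own (not code from Marc’s talk): the replicator dynamics for a Hawk-Dove game with an interior evolutionarily stable state x*, where the relative entropy D(x*||x) acts as a Lyapunov function and decreases as the population ‘learns’ its way to x*.

```python
import numpy as np

# Hawk-Dove payoff matrix (V = 2, C = 4); the interior ESS is x* = (1/2, 1/2).
A = np.array([[-1.0, 2.0],
              [ 0.0, 1.0]])
x_star = np.array([0.5, 0.5])

def relative_entropy(p, q):
    """D(p || q) = sum_i p_i ln(p_i / q_i)."""
    return float(np.sum(p * np.log(p / q)))

x = np.array([0.9, 0.1])   # initial population state
dt = 0.01
prev_D = relative_entropy(x_star, x)
for _ in range(2000):
    fitness = A @ x                              # payoff to each strategy
    mean_fitness = x @ fitness
    x = x + dt * x * (fitness - mean_fitness)    # Euler step of the replicator equation
    x /= x.sum()                                 # guard against floating-point drift
    D = relative_entropy(x_star, x)
    assert D <= prev_D + 1e-9                    # D(x*||x) never increases along the flow
    prev_D = D

print("final state:", x, "  D(x*||x) =", prev_D)   # x tends to (0.5, 0.5), D tends to 0
```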


Information and Entropy in Biological Systems (Part 5)

30 May, 2015

John Harte of U. C. Berkeley spoke about the maximum entropy method as a method of predicting patterns in ecology. Annette Ostling of the University of Michigan spoke about some competing theories, such as the ‘neutral model’ of biodiversity—a theory that sounds much too simple to be right, yet fits the data surprisingly well!

We managed to get a video of Ostling’s talk, but not Harte’s. Luckily, you can see the slides of both. You can also see a summary of Harte’s book Maximum Entropy and Ecology:

• John Baez, Maximum entropy and ecology, Azimuth, 21 February 2013.

Here are his talk slides and abstract:

• John Harte, Maximum entropy as a foundation for theory building in ecology.

Abstract. Constrained maximization of information entropy (MaxEnt) yields least-biased probability distributions. In statistical physics, this powerful inference method yields classical statistical mechanics/thermodynamics under the constraints imposed by conservation laws. I apply MaxEnt to macroecology, the study of the distribution, abundance, and energetics of species in ecosystems. With constraints derived from ratios of ecological state variables, I show that MaxEnt yields realistic abundance distributions, species-area relationships, spatial aggregation patterns, and body-size distributions over a wide range of taxonomic groups, habitats and spatial scales. I conclude with a brief summary of some of the major opportunities at the frontier of MaxEnt-based macroecological theory.
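
As a toy illustration of the basic MaxEnt recipe (not Harte’s METE itself, just the core inference step, with made-up numbers): maximizing Shannon entropy over abundances n = 1, …, N subject to a fixed mean abundance gives p(n) proportional to e^{-λn}, with the Lagrange multiplier λ chosen to satisfy the constraint.

```python
import numpy as np
from scipy.optimize import brentq

N = 1000            # largest abundance considered (illustrative truncation)
target_mean = 10.0  # the constraint: mean abundance per species (made-up number)
n = np.arange(1, N + 1)

def maxent(lam):
    """MaxEnt distribution with a mean constraint: p(n) proportional to exp(-lam*n)."""
    w = np.exp(-lam * n)
    return w / w.sum()

# Choose the Lagrange multiplier so the constraint is satisfied.
lam = brentq(lambda l: maxent(l) @ n - target_mean, 1e-6, 10.0)
p = maxent(lam)
print(f"lambda = {lam:.4f}, mean = {p @ n:.2f}, entropy = {-(p * np.log(p)).sum():.3f}")
```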

Here is a video of Ostling’s talk, as well as her slides and some papers she recommended:

• Annette Ostling, The neutral theory of biodiversity and other competitors to maximum entropy.

Abstract: I am a bit of the odd man out in that I will not talk that much about information and entropy, but instead about neutral theory and niche theory in ecology. My interest in coming to this workshop is in part out of an interest in what greater insights we can get into neutral models and stochastic population dynamics in general using entropy and information theory.

I will present the niche and neutral theories of the maintenance of diversity of competing species in ecology, and explain the dynamics included in neutral models in ecology. I will also briefly explain how one can derive a species abundance distribution from neutral models. I will present the view that neutral models have the potential to serve as more process-based null models than previously used in ecology for detecting the signature of niches and habitat filtering. However, tests of neutral theory in ecology have not as of yet been as useful as tests of neutral theory in evolutionary biology, because they leave open the possibility that pattern is influenced by “demographic complexity” rather than niches. I will mention briefly some of the work I’ve been doing to try to construct better tests of neutral theory.

Finally I’ll mention some connections that have been made so far between predictions of entropy theory and predictions of neutral theory in ecology and evolution.

These papers present interesting relations between ecology and statistical mechanics. Check out the nice ‘analogy chart’ in the second one!

• M. G. Bowler, Species abundance distributions, statistical mechanics and the priors of MaxEnt, Theoretical Population Biology 92 (2014), 69–77.

Abstract. The methods of Maximum Entropy have been deployed for some years to address the problem of species abundance distributions. In this approach, it is important to identify the correct weighting factors, or priors, to be applied before maximising the entropy function subject to constraints. The forms of such priors depend not only on the exact problem but can also depend on the way it is set up; priors are determined by the underlying dynamics of the complex system under consideration. The problem is one of statistical mechanics and it is the properties of the system that yield the correct MaxEnt priors, appropriate to the way the problem is framed. Here I calculate, in several different ways, the species abundance distribution resulting when individuals in a community are born and die independently. In the usual formulation the prior distribution for the number of species over the number of individuals is 1/n; the problem can be reformulated in terms of the distribution of individuals over species classes, with a uniform prior. Results are obtained using master equations for the dynamics and separately through the combinatoric methods of elementary statistical mechanics; the MaxEnt priors then emerge a posteriori. The first object is to establish the log series species abundance distribution as the outcome of per capita guild dynamics. The second is to clarify the true nature and origin of priors in the language of MaxEnt. Finally, I consider how it may come about that the distribution is similar to log series in the event that filled niches dominate species abundance. For the general ecologist, there are two messages. First, that species abundance distributions are determined largely by population sorting through fractional processes (resulting in the 1/n factor) and secondly that useful information is likely to be found only in departures from the log series. For the MaxEnt practitioner, the message is that the prior with respect to which the entropy is to be maximised is determined by the nature of the problem and the way in which it is formulated.
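
To make the role of the 1/n prior concrete: maximizing entropy relative to a prior q(n) proportional to 1/n, subject to a mean-abundance constraint, gives the Fisher log-series p(n) proportional to x^n / n, with 0 < x < 1 fixed by the constraint. A small sketch of my own (the target mean is an arbitrary made-up number):

```python
import numpy as np
from scipy.optimize import brentq

N = 10_000          # truncation of the abundance range (illustrative)
target_mean = 20.0  # made-up mean abundance per species
n = np.arange(1, N + 1)

def log_series(x):
    """MaxEnt distribution with a 1/n prior and a mean constraint: p(n) ~ x**n / n."""
    w = x**n / n
    return w / w.sum()

# Fix x so that the mean abundance matches the constraint.
x = brentq(lambda x: log_series(x) @ n - target_mean, 1e-6, 1 - 1e-9)
p = log_series(x)
print(f"x = {x:.5f}, mean abundance = {p @ n:.2f}, fraction of singleton species = {p[0]:.3f}")
```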

• Guy Sella and Aaron E. Hirsh, The application of statistical physics to evolutionary biology, Proc. Nat. Acad. Sci. 102 (2005), 9541–9546.

Abstract. A number of fundamental mathematical models of the evolutionary process exhibit dynamics that can be difficult to understand analytically. Here we show that a precise mathematical analogy can be drawn between certain evolutionary and thermodynamic systems, allowing application of the powerful machinery of statistical physics to analysis of a family of evolutionary models. Analytical results that follow directly from this approach include the steady-state distribution of fixed genotypes and the load in finite populations. The analogy with statistical physics also reveals that, contrary to a basic tenet of the nearly neutral theory of molecular evolution, the frequencies of adaptive and deleterious substitutions at steady state are equal. Finally, just as the free energy function quantitatively characterizes the balance between energy and entropy, a free fitness function provides an analytical expression for the balance between natural selection and stochastic drift.


Information and Entropy in Biological Systems (Part 4)

21 May, 2015

I kicked off the workshop on Information and Entropy in Biological Systems with a broad overview of the many ways information theory and entropy get used in biology:

• John Baez, Information and entropy in biological systems.

Abstract. Information and entropy are being used in biology in many different ways: for example, to study biological communication systems, the ‘action-perception loop’, the thermodynamic foundations of biology, the structure of ecosystems, measures of biodiversity, and evolution. Can we unify these? To do this, we must learn to talk to each other. This will be easier if we share some basic concepts which I’ll sketch here.

The talk is full of links, in blue. If you click on these you can get more details. You can also watch a video of my talk:


Information and Entropy in Biological Systems (Part 3)

20 May, 2015

We had a great workshop on information and entropy in biological systems, and now you can see what it was like. I think I’ll post these talks one at a time, or maybe a few at a time, because they’d be overwhelming taken all at once.

So, let’s dive into Chris Lee’s exciting ideas about organisms as ‘information evolving machines’ that may provide ‘disinformation’ to their competitors. Near the end of his talk, he discusses some new results on an ever-popular topic: the Prisoner’s Dilemma. You may know about this classic book:

• Robert Axelrod, The Evolution of Cooperation, Basic Books, New York, 1984. Some passages available free online.

If you don’t, read it now! Axelrod showed that the simple ‘tit for tat’ strategy did very well in some experiments where the game was played repeatedly and strategies that did well got to ‘reproduce’ themselves. This result was very exciting, so a lot of people have done research on it. More recently a paper on this subject by William Press and Freeman Dyson received a lot of hype. I think this is a good place to learn about that:

• Mike Shulman, Zero determinant strategies in the iterated Prisoner’s Dilemma, The n-Category Café, 19 July 2012.

Chris Lee’s new work on the Prisoner’s Dilemma is here, cowritten with two other people who attended the workshop:

• The art of war: beyond memory-one strategies in population games, PLOS One, 24 March 2015.

Abstract. We show that the history of play in a population game contains exploitable information that can be successfully used by sophisticated strategies to defeat memory-one opponents, including zero determinant strategies. The history allows a player to label opponents by their strategies, enabling a player to determine the population distribution and to act differentially based on the opponent’s strategy in each pairwise interaction. For the Prisoner’s Dilemma, these advantages lead to the natural formation of cooperative coalitions among similarly behaving players and eventually to unilateral defection against opposing player types. We show analytically and empirically that optimal play in population games depends strongly on the population distribution. For example, the optimal strategy for a minority player type against a resident tit-for-tat (TFT) population is ‘always cooperate’ (ALLC), while for a majority player type the optimal strategy versus TFT players is ‘always defect’ (ALLD). Such behaviors are not accessible to memory-one strategies. Drawing inspiration from Sun Tzu’s the Art of War, we implemented a non-memory-one strategy for population games based on techniques from machine learning and statistical inference that can exploit the history of play in this manner. Via simulation we find that this strategy is essentially uninvadable and can successfully invade (significantly more likely than a neutral mutant) essentially all known memory-one strategies for the Prisoner’s Dilemma, including ALLC (always cooperate), ALLD (always defect), tit-for-tat (TFT), win-stay-lose-shift (WSLS), and zero determinant (ZD) strategies, including extortionate and generous strategies.
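
For readers new to this jargon: a memory-one strategy is just a rule that chooses this round’s move from the two players’ moves in the previous round, so it can be written as an opening move plus four cooperation probabilities. Here is a minimal illustration of my own (not code from the paper) pitting some of the classic memory-one strategies named in the abstract against each other in the iterated Prisoner’s Dilemma:

```python
import random

# Prisoner's Dilemma payoffs to the row player: (my move, their move) -> payoff.
PAYOFF = {('C', 'C'): 3, ('C', 'D'): 0, ('D', 'C'): 5, ('D', 'D'): 1}

# A memory-one strategy: (opening move, {(my last move, their last move): P(cooperate now)}).
STRATEGIES = {
    'ALLC': ('C', {k: 1.0 for k in PAYOFF}),
    'ALLD': ('D', {k: 0.0 for k in PAYOFF}),
    'TFT':  ('C', {('C', 'C'): 1, ('C', 'D'): 0, ('D', 'C'): 1, ('D', 'D'): 0}),
    'WSLS': ('C', {('C', 'C'): 1, ('C', 'D'): 0, ('D', 'C'): 0, ('D', 'D'): 1}),
}

def play(name1, name2, rounds=200, seed=0):
    """Average per-round payoffs when two memory-one strategies meet."""
    rng = random.Random(seed)
    (m1, rule1), (m2, rule2) = STRATEGIES[name1], STRATEGIES[name2]
    s1 = s2 = 0
    for _ in range(rounds):
        s1 += PAYOFF[(m1, m2)]
        s2 += PAYOFF[(m2, m1)]
        m1, m2 = ('C' if rng.random() < rule1[(m1, m2)] else 'D',
                  'C' if rng.random() < rule2[(m2, m1)] else 'D')
    return s1 / rounds, s2 / rounds

for a in STRATEGIES:
    for b in STRATEGIES:
        print(f"{a:4} vs {b:4}:", play(a, b))
```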

And now for the talk! Click on the talk title here for Chris Lee’s slides, or go down and watch the video:

• Chris Lee, Empirical information, potential information and disinformation as signatures of distinct classes of information evolving machines.

Abstract. Information theory is an intuitively attractive way of thinking about biological evolution, because it seems to capture a core aspect of biology—life as a solution to “information problems”—in a fundamental way. However, there are non-trivial questions about how to apply that idea, and whether it has actual predictive value. For example, should we think of biological systems as being actually driven by an information metric? One idea that can draw useful links between information theory, evolution and statistical inference is the definition of an information evolving machine (IEM) as a system whose elements represent distinct predictions, and whose weights represent an information (prediction power) metric, typically as a function of sampling some iterative observation process. I first show how this idea provides useful results for describing a statistical inference process, including its maximum entropy bound for optimal inference, and how its sampling-based metrics (“empirical information”, Ie, for prediction power; and “potential information”, Ip, for latent prediction power) relate to classical definitions such as mutual information and relative entropy. These results suggest classification of IEMs into several distinct types:

1. Ie machine: e.g. a population of competing genotypes evolving under selection and mutation is an IEM that computes an Ie equivalent to fitness, and whose gradient (Ip) acts strictly locally, on mutations that it actually samples. Its transition rates between steady states will decrease exponentially as a function of evolutionary distance.

2. “Ip tunneling” machine: a statistical inference process summing over a population of models to compute both Ie and Ip; such a machine can directly detect “latent” information in the observations (not captured by its model), which it can follow to “tunnel” rapidly to a new steady state.

3. disinformation machine (multiscale IEM): an ecosystem of species is an IEM whose elements (species) are themselves IEMs that can interact. When an attacker IEM can reduce a target IEM’s prediction power (Ie) by sending it a misleading signal, this “disinformation dynamic” can alter the evolutionary landscape in interesting ways, by opening up paths for rapid co-evolution to distant steady-states. This is especially true when the disinformation attack targets a feature of high fitness value, yielding a combination of strong negative selection for retention of the target feature, plus strong positive selection for escaping the disinformation attack. I will illustrate with examples from statistical inference and evolutionary game theory. These concepts, though basic, may provide useful connections between diverse themes in the workshop.


Information and Entropy in Biological Systems (Part 2)

6 April, 2015

I think you can watch live streaming video of our workshop on Information and Entropy in Biological Systems, which runs Wednesday April 8th to Friday April 10th. Later, videos will be made available in a permanent location.

To watch the workshop live, go here. Go down to where it says

Investigative Workshop: Information and Entropy in Biological Systems

Then click where it says live link. There’s nothing there now, but I’m hoping there will be when the show starts!

Below you can see the schedule of talks and a list of participants. The hours are in Eastern Daylight Time: add 4 hours to get Greenwich Mean Time. The talks start at 10 am EDT, which is 2 pm GMT.

Schedule

There will be 1½ hours of talks in the morning and 1½ hours in the afternoon for each of the 3 days, Wednesday April 8th to Friday April 10th. The rest of the time will be for discussions on different topics. We’ll break up into groups, based on what people want to discuss.

Each invited speaker will give a 30-minute talk summarizing the key ideas in some area, not their latest research so much as what everyone should know to start interesting conversations. After that, 15 minutes for questions and/or coffee.

Here’s the schedule. For the talks that have links, you can already see slides or other material!

Wednesday April 8

• 9:45-10:00 — the usual introductory fussing around.
• 10:00-10:30 — John Baez, Information and entropy in biological systems.
• 10:30-11:00 — questions, coffee.
• 11:00-11:30 — Chris Lee, Empirical information, potential information and disinformation.
• 11:30-11:45 — questions.

• 11:45-1:30 — lunch, conversations.

• 1:30-2:00 — John Harte, Maximum entropy as a foundation for theory building in ecology.
• 2:00-2:15 — questions, coffee.
• 2:15-2:45 — Annette Ostling, The neutral theory of biodiversity and other competitors to the principle of maximum entropy.
• 2:45-3:00 — questions, coffee.
• 3:00-5:30 — break up into groups for discussions.

• 5:30 — reception.

Thursday April 9

• 10:00-10:30 — David Wolpert, The Landauer limit and thermodynamics of biological organisms.
• 10:30-11:00 — questions, coffee.
• 11:00-11:30 — Susanne Still, Efficient computation and data modeling.
• 11:30-11:45 — questions.

• 11:45-1:30 — group photo, lunch, conversations.

• 1:30-2:00 — Matina Donaldson-Matasci, The fitness value of information in an uncertain environment.
• 2:00-2:15 — questions, coffee.
• 2:15-2:45 — Roderick Dewar, Maximum entropy and maximum entropy production in biological systems: survival of the likeliest?
• 2:45-3:00 — questions, coffee.
• 3:00-6:00 — break up into groups for discussions.

Friday April 10

• 10:00-10:30 — Marc Harper, Information transport and evolutionary dynamics.
• 10:30-11:00 — questions, coffee.
• 11:00-11:30 — Tobias Fritz, Characterizations of Shannon and Rényi entropy.
• 11:30-11:45 — questions.

• 11:45-1:30 — lunch, conversations.

• 1:30-2:00 — Christina Cobbold, Biodiversity measures and the role of species similarity.
• 2:00-2:15 — questions, coffee.
• 2:15-2:45 — Tom Leinster, Maximizing biological diversity.
• 2:45-3:00 — questions, coffee.
• 3:00-6:00 — break up into groups for discussions.

Participants

Here are the confirmed participants. This list may change a little bit:

• John Baez – mathematical physicist.

• Romain Brasselet – postdoc in cognitive neuroscience knowledgeable about information-theoretic methods and methods of estimating entropy from samples of probability distributions.

• Katharina Brinck – grad student at the Centre for Complexity Science at Imperial College; did her master’s in John Harte’s lab, where she extended his Maximum Entropy Theory of Ecology (METE) to trophic food webs, to study how entropy maximization on the macro scale, together with maximum entropy production (MEP) on the scale of individuals, drives the structural development of model ecosystems.

• Christina Cobbold – mathematical biologist, has studied the role of species similarity in measuring biodiversity.

• Troy Day – mathematical biologist, works with population dynamics, host-parasite dynamics, etc.; influential and could help move population dynamics to a more information-theoretic foundation.

• Roderick Dewar – physicist who studies the principle of maximal entropy production.

• Barrett Deris – MIT postdoc studying the factors that influence evolvability of drug resistance in bacteria.

• Charlotte de Vries – a biology master’s student who studied particle physics to the master’s level at Oxford and the Perimeter Institute. Interested in information theory.

• Matina Donaldson-Matasci – a biologist who studies information, uncertainty and collective behavior.

• Chris Ellison – a postdoc who worked with James Crutchfield on “information-theoretic measures of structure and memory in stationary, stochastic systems – primarily, finite state hidden Markov models”. He coauthored “Intersection Information based on Common Randomness”, http://arxiv.org/abs/1310.1538. The idea: “The introduction of the partial information decomposition generated a flurry of proposals for defining an intersection information that quantifies how much of “the same information” two or more random variables specify about a target random variable. As of yet, none is wholly satisfactory.” Works on mutual information between organisms and environment (along with David Krakauer and Jessica Flack), and also entropy rates.

• Cameron Freer – MIT postdoc in Brain and Cognitive Sciences working on maximum entropy production principles, algorithmic entropy etc.

• Tobias Fritz – a physicist who has worked on “resource theories” and on characterizations of Shannon and Rényi entropy.

• Dashiell Fryer – works with Marc Harper on information geometry and evolutionary game theory.

• Michael Gilchrist – an evolutionary biologist studying how errors and costs of protein translation affect the codon usage observed within a genome. Works at NIMBioS.

• Manoj Gopalkrishnan – an expert on chemical reaction networks who understands entropy-like Lyapunov functions for these systems.

• Marc Harper – works on evolutionary game theory using ideas from information theory, information geometry, etc.

• John Harte – an ecologist who uses the maximum entropy method to predict the structure of ecosystems.

• Ellen Hines – studies habitat modeling and mapping for marine endangered species and ecosystems, sea level change scenarios, and documentation of human use and values. Her lab has used MaxEnt methods.

• Elizabeth Hobson – behavior ecology postdoc developing methods to quantify social complexity in animals. Works at NIMBioS.

• John Jungk – works on graph theory and biology.

• Chris Lee – in bioinformatics and genomics; applies information theory to experiment design and evolutionary biology.

• Maria Leites – works on dynamics, bifurcations and applications of coupled systems of non-linear ordinary differential equations with applications to ecology, epidemiology, and transcriptional regulatory networks. Interested in information theory.

• Tom Leinster – a mathematician who applies category theory to study various concepts of ‘magnitude’, including biodiversity and entropy.

• Timothy Lezon – a systems biologist in the Drug Discovery Institute at Pitt, who has used entropy to characterize phenotypic heterogeneity in populations of cultured cells.

• Maria Ortiz Mancera – statistician working at CONABIO, the National Commission for Knowledge and Use of Biodiversity, in Mexico.

• Yajun Mei – statistician who uses Kullback-Leibler divergence and studies how to efficiently compute entropy for two-state hidden Markov models.

• Robert Molzon – mathematical economist who has studied deterministic approximation of stochastic evolutionary dynamics.

• David Murrugarra – works on discrete models in mathematical biology; interested in learning about information theory.

• Annette Ostling – studies community ecology, focusing on the influence of interspecific competition on community structure, and what insights patterns of community structure might provide about the mechanisms by which competing species coexist.

• Connie Phong – grad student at Chicago’s Institute for Genomics and Systems Biology, working on how “certain biochemical network motifs are more attuned than others at maintaining strong input to output relationships under fluctuating conditions.”

• Petr Plechak – works on information-theoretic tools for estimating and minimizing errors in coarse-graining stochastic systems. Wrote “Information-theoretic tools for parametrized coarse-graining of non-equilibrium extended systems”.

• Blake Pollard – physics grad student working with John Baez on various generalizations of Shannon and Rényi entropy, and how these entropies change with time in Markov processes and open Markov processes.

• Timothee Poisot – works on species interaction networks; developed a “new suite of tools for probabilistic interaction networks”.

• Richard Reeve – works on biodiversity studies and the spread of antibiotic resistance. Ran a program on entropy-based biodiversity measures at a mathematics institute in Barcelona.

• Rob Shaw – works on entropy and information in biotic and pre-biotic systems.

• Matteo Smerlak – postdoc working on nonequilibrium thermodynamics and its applications to biology, especially population biology and cell replication.

• Susanne Still – a computer scientist who studies the role of thermodynamics and information theory in prediction.

• Alexander Wissner-Gross – Institute Fellow at the Harvard University Institute for Applied Computational Science and Research Affiliate at the MIT Media Laboratory, interested in lots of things.

• David Wolpert – works at the Santa Fe Institute on i) information theory and game theory, ii) the second law of thermodynamics and dynamics of complexity, iii) multi-information source optimization, iv) the mathematical underpinnings of reality, v) evolution of organizations.

• Matthew Zefferman – works on evolutionary game theory, institutional economics and models of gene-culture co-evolution. No work on information, but a postdoc at NIMBioS.


Thermodynamics with Continuous Information Flow

21 March, 2015

guest post by Blake S. Pollard

Over a century ago James Clerk Maxwell created a thought experiment that has helped shape our understanding of the Second Law of Thermodynamics: the law that says entropy can never decrease.

Maxwell’s proposed experiment was simple. Suppose you had a box filled with an ideal gas at equilibrium at some temperature. You stick in an insulating partition, splitting the box into two halves. These two halves are isolated from one another except for one important caveat: somewhere along the partition resides a being capable of opening and closing a door, allowing gas particles to flow between the two halves. This being is also capable of observing the velocities of individual gas particles. Every time a particularly fast molecule is headed towards the door the being opens it, letting it fly into the other half of the box. When a slow particle heads towards the door the being keeps it closed. After some time, fast molecules would build up on one side of the box, meaning half the box would heat up! To an observer it would seem like the box, originally at a uniform temperature, had for some reason started splitting up into a hot half and a cold half. This seems to violate the Second Law (as well as all our experience with boxes of gas).

Of course, this apparent violation probably has something to do with positing the existence of intelligent microscopic doormen. This being, and the thought experiment itself, are typically referred to as Maxwell’s demon.

[Image: Maxwell’s demon. Photo credit: Peter MacDonald, Edmonds, UK]

When people cook up situations that seem to violate the Second Law there is typically a simple resolution: you have to consider the whole system! In the case of Maxwell’s demon, while the entropy of the box decreases, the entropy of the system as a whole, demon included, goes up. Precisely quantifying how Maxwell’s demon doesn’t violate the Second Law has led people to a better understanding of the role of information in thermodynamics.

At the American Physical Society March Meeting in San Antonio, Texas, I had the pleasure of hearing some great talks on entropy, information, and the Second Law. Jordan Horowitz, a postdoc at Boston University, gave a talk on his work with Massimiliano Esposito, a researcher at the University of Luxembourg, on how one can understand situations like Maxwell’s demon (and a whole lot more) by analyzing the flow of information between subsystems.

Consider a system made up of two parts, X and Y. Each subsystem has a discrete set of states, and makes transitions among those states. These dynamics can be modeled as Markov processes. Horowitz and Esposito are interested in modeling the thermodynamics of information flow between subsystems. To this end they consider a bipartite system, meaning that either X transitions or Y transitions, never both at the same time. The probability distribution p(x,y) of the whole system evolves according to the master equation:

\displaystyle{ \frac{dp(x,y)}{dt} = \sum_{x', y'} H_{x,x'}^{y,y'}p(x',y') - H_{x',x}^{y',y}p(x,y) }

where H_{x,x'}^{y,y'} is the rate at which the system transitions from (x',y') \to (x,y). The ‘bipartite’ condition means that H has the form

H_{x,x'}^{y,y'} = \left\{ \begin{array}{cc} H_{x,x'}^y & x \neq x'; y=y' \\   H_x^{y,y'} & x=x'; y \neq y' \\  0 & \text{otherwise.} \end{array} \right.
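
Here is a small numerical sketch of my own (a toy example, not code from Horowitz and Esposito’s paper) of such a bipartite master equation: two subsystems with a couple of states each, random rates obeying the bipartite condition, and a simple Euler integration. The total probability stays equal to 1, as it should:

```python
import numpy as np

rng = np.random.default_rng(1)
nx, ny = 2, 2   # number of states of X and of Y (toy sizes)

# H[x, xp, y, yp] = rate for the transition (xp, yp) -> (x, y).
# Bipartite condition: nonzero only for X jumps (x != xp, y == yp)
# or Y jumps (x == xp, y != yp).
H = np.zeros((nx, nx, ny, ny))
for y in range(ny):
    for x in range(nx):
        for xp in range(nx):
            if x != xp:
                H[x, xp, y, y] = rng.uniform(0.1, 1.0)   # X transitions at fixed y
for x in range(nx):
    for y in range(ny):
        for yp in range(ny):
            if y != yp:
                H[x, x, y, yp] = rng.uniform(0.1, 1.0)   # Y transitions at fixed x

def pdot(p):
    """Right-hand side of the master equation for dp(x,y)/dt."""
    gain = np.einsum('abcd,bd->ac', H, p)     # sum over x', y' of H_{x,x'}^{y,y'} p(x',y')
    loss = np.einsum('badc->ac', H) * p       # sum over x', y' of H_{x',x}^{y',y} p(x,y)
    return gain - loss

p = np.full((nx, ny), 1.0 / (nx * ny))   # start from the uniform distribution
dt = 0.01
for _ in range(5000):
    p = p + dt * pdot(p)

print(p)
print("total probability:", p.sum())   # remains 1 up to floating-point error
```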

The joint system is an open system that satisfies the second law of thermodynamics:

\displaystyle{ \frac{dS_i}{dt} = \frac{dS_{XY}}{dt} + \frac{dS_e}{dt} \geq 0 }

where

\displaystyle{ S_{XY} = - \sum_{x,y} p(x,y) \ln ( p(x,y) ) }

is the Shannon entropy of the system, satisfying

\displaystyle{ \frac{dS_{XY} }{dt} = \sum_{x,y} \left[ H_{x,x'}^{y,y'}p(x',y') - H_{x',x}^{y',y}   p(x,y) \right] \ln \left( \frac{p(x',y')}{p(x,y)} \right) }

and

\displaystyle{ \frac{dS_e}{dt}  = \sum_{x,y} \left[ H_{x,x'}^{y,y'}p(x',y') - H_{x',x}^{y',y} p(x,y) \right] \ln \left( \frac{ H_{x,x'}^{y,y'} } {H_{x',x}^{y',y} } \right) }

is the entropy change of the environment.

We want to investigate how the entropy production of the whole system relates to entropy production in the bipartite pieces X and Y. To this end they define a new flow, the information flow, as the time rate of change of the mutual information

\displaystyle{ I = \sum_{x,y} p(x,y) \ln \left( \frac{p(x,y)}{p(x)p(y)} \right) }

Its time derivative can be split up as

\displaystyle{ \frac{dI}{dt} = \frac{dI^X}{dt} + \frac{dI^Y}{dt}}

where

\displaystyle{ \frac{dI^X}{dt} = \sum_{x,y} \left[ H_{x,x'}^{y} p(x',y) - H_{x',x}^{y}p(x,y) \right] \ln \left( \frac{ p(y|x) }{p(y|x')} \right) }

and

\displaystyle{ \frac{dI^Y}{dt} = \sum_{x,y} \left[ H_{x}^{y,y'}p(x,y') - H_{x}^{y',y}p(x,y) \right] \ln \left( \frac{p(x|y)}{p(x|y')} \right) }

are the information flows associated with the subsystems X and Y respectively.

When

\displaystyle{ \frac{dI^X}{dt} > 0}

a transition in X increases the mutual information I, meaning that X ‘knows’ more about Y and vice versa.
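
Here is a self-contained numerical sketch of my own (again a toy model, not the paper’s code) that computes the mutual information I for a small bipartite process and splits its time derivative into the X and Y information flows, by feeding only the X-type or only the Y-type transitions into the time derivative of p(x,y). The two pieces add up to a finite-difference estimate of dI/dt:

```python
import numpy as np

rng = np.random.default_rng(2)
nx, ny = 2, 3

# Bipartite rates, stored separately: H[x, xp, y, yp] = rate for (xp, yp) -> (x, y).
HX = np.zeros((nx, nx, ny, ny))   # X-type transitions (y fixed)
HY = np.zeros((nx, nx, ny, ny))   # Y-type transitions (x fixed)
for y in range(ny):
    for x in range(nx):
        for xp in range(nx):
            if x != xp:
                HX[x, xp, y, y] = rng.uniform(0.1, 1.0)
for x in range(nx):
    for y in range(ny):
        for yp in range(ny):
            if y != yp:
                HY[x, x, y, yp] = rng.uniform(0.1, 1.0)

def pdot(H, p):
    """Part of dp/dt generated by the transitions in H."""
    return np.einsum('abcd,bd->ac', H, p) - np.einsum('badc->ac', H) * p

def mutual_info(p):
    px = p.sum(axis=1, keepdims=True)
    py = p.sum(axis=0, keepdims=True)
    return float(np.sum(p * np.log(p / (px * py))))

p = rng.uniform(0.5, 1.0, (nx, ny))
p /= p.sum()

# Information flows: the contributions of X-type and Y-type transitions to dI/dt.
log_ratio = np.log(p / (p.sum(axis=1, keepdims=True) * p.sum(axis=0, keepdims=True)))
dIX = float(np.sum(pdot(HX, p) * log_ratio))
dIY = float(np.sum(pdot(HY, p) * log_ratio))

# Check against a finite-difference derivative of I under the full dynamics.
dt = 1e-6
dI_fd = (mutual_info(p + dt * pdot(HX + HY, p)) - mutual_info(p)) / dt
print(f"dI^X/dt = {dIX:.6f}  dI^Y/dt = {dIY:.6f}  sum = {dIX + dIY:.6f}  finite diff = {dI_fd:.6f}")
```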

We can rewrite the entropy production entering into the second law in terms of these information flows as

\displaystyle{ \frac{dS_i}{dt} = \frac{dS_i^X}{dt} + \frac{dS_i^Y}{dt} }

where

\displaystyle{ \frac{dS_i^X}{dt} = \sum_{x,y} \left[ H_{x,x'}^y p(x',y) - H_{x',x}^y p(x,y) \right] \ln \left( \frac{H_{x,x'}^y p(x',y) } {H_{x',x}^y p(x,y) } \right) \geq 0 }

and similarly for \frac{dS_i^Y}{dt}. This gives the following decomposition of entropy production in each subsystem:

\displaystyle{ \frac{dS_i^X}{dt} = \frac{dS^X}{dt} + \frac{dS^X_e}{dt} - \frac{dI^X}{dt} \geq 0 }

\displaystyle{ \frac{dS_i^Y}{dt} = \frac{dS^Y}{dt} + \frac{dS^Y_e}{dt} - \frac{dI^Y}{dt} \geq 0},

where the inequalities hold for each subsystem. To see this, write out the left-hand side of each inequality: it is a sum of terms of the form

\displaystyle{ \left[ a - b \right] \ln \left( \frac{a}{b} \right) }

which is non-negative whenever a, b \geq 0.

The interaction between the subsystems is contained entirely in the information flow terms. Neglecting these terms gives rise to situations like Maxwell’s demon where a subsystem seems to violate the second law.

Lots of Markov processes have boring equilibria, where \frac{dp}{dt} = 0 and there is no net flow among the states. But Markov processes also admit non-equilibrium steady states, where \frac{dp}{dt} = 0 and yet there is a constant circulation of probability, and with it a constant flow of information. In such a steady state all explicit time derivatives are zero, including the net information flow:

\displaystyle{ \frac{dI}{dt} = 0 }

which implies that \frac{dI^X}{dt} = - \frac{dI^Y}{dt}. In this situation the above inequalities become

\displaystyle{ \frac{dS^X_i}{dt} = \frac{dS_e^X}{dt} - \frac{dI^X}{dt} \geq 0 }

and

\displaystyle{ \frac{dS^Y_i}{dt} = \frac{dS_e^Y}{dt} + \frac{dI^X}{dt} \geq 0 }.

If

\displaystyle{ \frac{dI^X}{dt} > 0 }

then X is learning something about Y, or acting as a sensor. The first inequality, \frac{dS_e^X}{dt} \geq \frac{dI^X}{dt}, quantifies the minimum amount of energy X must supply to do this sensing. Similarly, the second, -\frac{dS_e^Y}{dt} \leq \frac{dI^X}{dt}, bounds the amount of useful energy available to Y as a result of this information transfer.

In their paper Horowitz and Esposito explore a few other examples and show the utility of this simple breakup of a system into two interacting subsystems in explaining various interesting situations in which the flow of information has thermodynamic significance.

For the whole story, read their paper!

• Jordan Horowitz and Massimiliano Esposito, Thermodynamics with continuous information flow, Phys. Rev. X 4 (2014), 031015.

