Nonequilibrium Thermodynamics in Biology (Part 2)

16 June, 2021

Larry Li, Bill Cannon and I ran a session on non-equilibrium thermodynamics in biology at SMB2021, the annual meeting of the Society for Mathematical Biology. You can see talk slides here!

Here’s the basic idea:

Since Lotka, physical scientists have argued that living things belong to a class of complex and orderly systems that exist not despite the second law of thermodynamics, but because of it. Life and evolution, through natural selection of dissipative structures, are based on non-equilibrium thermodynamics. The challenge is to develop an understanding of what the relevant physical laws can tell us about flows of energy and matter in living systems, and about growth, death and selection. This session addresses current challenges, including understanding emergence, regulation and control across scales, and entropy production, from metabolism in microbes to evolving ecosystems.

Click on the links to see slides for most of the talks:

Persistence, permanence, and global stability in reaction network models: some results inspired by thermodynamic principles
Gheorghe Craciun, University of Wisconsin–Madison

The standard mathematical model for the dynamics of concentrations in biochemical networks is called mass-action kinetics. We describe mass-action kinetics and discuss the connection between special classes of mass-action systems (such as detailed balanced and complex balanced systems) and the Boltzmann equation. We also discuss the connection between the ‘global attractor conjecture’ for complex balanced mass-action systems and Boltzmann’s H-theorem. Finally, we describe some implications for biochemical mechanisms that implement noise filtering and cellular homeostasis.
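To make the notion concrete, here is a minimal sketch of mass-action kinetics (a toy example of my own, not taken from the talk) for the reversible isomerization A ⇌ B, integrated until it relaxes to its detailed-balanced equilibrium:

```python
# Mass-action kinetics for A <=> B with rate constants k1 (forward)
# and k2 (backward): da/dt = -k1*a + k2*b, db/dt = k1*a - k2*b.
k1, k2 = 2.0, 1.0
a, b = 1.0, 0.0         # initial concentrations
dt = 0.001
for _ in range(20000):  # integrate to t = 20 by explicit Euler
    flux = k1 * a - k2 * b   # net rate of A -> B
    a -= flux * dt
    b += flux * dt

# At the detailed-balanced equilibrium k1*a = k2*b, so a/b = k2/k1.
print(round(a / b, 3))   # 0.5
print(round(a + b, 6))   # 1.0  (total concentration is conserved)
```

Detailed balance here just means the one reaction’s forward and backward fluxes cancel; for networks with cycles, complex balance is the weaker condition the abstract refers to.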

The principle of maximum caliber of nonequilibria
Ken Dill, Stony Brook University

Maximum Caliber is a principle for inferring pathways and rate distributions of kinetic processes. The structure and foundations of MaxCal are much like those of Maximum Entropy for static distributions. We have explored how MaxCal may serve as a general variational principle for nonequilibrium statistical physics: it recovers well-known results near equilibrium, such as the Green–Kubo relations, Onsager’s reciprocal relations and Prigogine’s Minimum Entropy Production principle, but it is also applicable far from equilibrium. I will also discuss some applications, such as finding reaction coordinates in molecular simulations, non-linear dynamics in gene circuits, power-law-tail distributions in ‘social-physics’ networks, and others.

Nonequilibrium biomolecular information processes
Pierre Gaspard, Université libre de Bruxelles

Nearly 70 years have passed since the discovery of DNA structure and its role in coding genetic information. Yet, the kinetics and thermodynamics of genetic information processing in DNA replication, transcription, and translation remain poorly understood. These template-directed copolymerization processes run away from equilibrium, being powered by extracellular energy sources. Recent advances show that their kinetic equations can be exactly solved in terms of so-called iterated function systems. Remarkably, iterated function systems can determine the effects of genome sequence on replication errors, up to a million times faster than kinetic Monte Carlo algorithms. With these new methods, fundamental links can be established between molecular information processing and the second law of thermodynamics, shedding new light on genetic drift, mutations, and evolution.

Nonequilibrium dynamics of disturbed ecosystems
John Harte, University of California, Berkeley

The Maximum Entropy Theory of Ecology (METE) predicts the shapes of macroecological metrics in relatively static ecosystems, across spatial scales, taxonomic categories, and habitats, using constraints imposed by static state variables. In disturbed ecosystems with time-varying state variables, however, its predictions often fail. We extend macroecological theory from static to dynamic by combining the MaxEnt inference procedure with explicit mechanisms governing disturbance. In the static limit, the resulting theory, DynaMETE, reduces to METE but also predicts a new scaling relationship among static state variables. Under disturbances, expressed as shifts in demographic, ontogenic growth, or migration rates, DynaMETE predicts the time trajectories of the state variables as well as the time-varying shapes of macroecological metrics such as the species abundance distribution and the distribution of metabolic rates over individuals. An iterative procedure for solving the dynamic theory is presented. Characteristic signatures of the deviation from static predictions of macroecological patterns are shown to result from different kinds of disturbance. By combining MaxEnt inference with explicit dynamical mechanisms of disturbance, DynaMETE is a candidate theory of macroecology for ecosystems responding to anthropogenic or natural disturbances.
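The MaxEnt inference step can be illustrated in miniature (this is a toy of my own, not DynaMETE): maximizing entropy over abundances n = 1, …, N subject to a fixed mean abundance yields p(n) ∝ exp(−λn), loosely analogous to how METE derives a species abundance distribution from its constraints. The Lagrange multiplier λ can be found by bisection:

```python
import math

N, target_mean = 1000, 10.0

def mean_of(lam):
    """Mean abundance under the MaxEnt distribution p(n) ∝ exp(-lam*n)."""
    w = [math.exp(-lam * n) for n in range(1, N + 1)]
    Z = sum(w)
    return sum(n * wn for n, wn in zip(range(1, N + 1), w)) / Z

# mean_of is decreasing in lam, so bisect for the lam matching the constraint.
lo, hi = 1e-6, 5.0
for _ in range(100):
    mid = (lo + hi) / 2
    if mean_of(mid) > target_mean:
        lo = mid
    else:
        hi = mid
lam = (lo + hi) / 2

print(abs(mean_of(lam) - target_mean) < 1e-6)  # True: constraint satisfied
```

The same machinery, with more constraints and time-dependence, is what makes the dynamic extension hard.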

Stochastic chemical reaction networks
Supriya Krishnamurthy, Stockholm University

The study of chemical reaction networks (CRNs) is a very active field. Earlier well-known results (Feinberg, Chem. Eng. Sci. 42, 2229 (1987); Anderson et al., Bull. Math. Biol. 72, 1947 (2010)) identify a topological quantity called deficiency, easy to compute for CRNs of any size, which, when exactly equal to zero, leads to a unique factorized (non-equilibrium) steady state for these networks. However, no general results exist for the steady states of non-zero-deficiency networks. In recent work, we show how to write the full moment hierarchy for any non-zero-deficiency CRN obeying mass-action kinetics, in terms of equations for the factorial moments. Using these, we can recursively predict values for lower moments from higher moments, reversing the procedure usually used to solve moment hierarchies. We show, for non-trivial examples, that in this manner we can predict any moment of interest, for CRNs with non-zero deficiency and non-factorizable steady states. It is however an open question how scalable these techniques are for large networks.
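For concreteness, deficiency is δ = n − ℓ − s, where n is the number of complexes, ℓ the number of linkage classes, and s the rank of the stoichiometric subspace. Here is a small sketch of the computation (my own illustration, for the toy network A ⇌ B ⇌ C, not one of the talk’s examples):

```python
# Deficiency delta = n - ell - s for the network A <=> B, B <=> C.
species = ["A", "B", "C"]
complexes = [(1, 0, 0), (0, 1, 0), (0, 0, 1)]  # A, B, C as stoichiometric vectors
reactions = [(0, 1), (1, 0), (1, 2), (2, 1)]   # pairs of indices into `complexes`

n = len(complexes)

# ell = number of linkage classes (connected components), via union-find.
parent = list(range(n))
def find(i):
    while parent[i] != i:
        parent[i] = parent[parent[i]]
        i = parent[i]
    return i
for i, j in reactions:
    parent[find(i)] = find(j)
ell = len({find(i) for i in range(n)})

# s = rank of the span of the reaction vectors, by Gaussian elimination.
vectors = [[complexes[j][k] - complexes[i][k] for k in range(len(species))]
           for i, j in reactions]
s = 0
for col in range(len(species)):
    pivot = next((r for r in range(s, len(vectors)) if vectors[r][col]), None)
    if pivot is None:
        continue
    vectors[s], vectors[pivot] = vectors[pivot], vectors[s]
    for r in range(len(vectors)):
        if r != s and vectors[r][col]:
            f = vectors[r][col] / vectors[s][col]
            vectors[r] = [x - f * y for x, y in zip(vectors[r], vectors[s])]
    s += 1

print(n, ell, s, "-> deficiency", n - ell - s)  # 3 1 2 -> deficiency 0
```

Since δ = 0 here, the deficiency-zero theorem guarantees this toy network has the factorized steady state the abstract mentions.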

Heat flows adjust local ion concentrations in favor of prebiotic chemistry
Christof Mast, Ludwig-Maximilians-Universität München

Prebiotic reactions often require certain initial concentrations of ions. For example, the activity of RNA enzymes requires high concentrations of divalent magnesium salt, whereas too much monovalent sodium salt reduces enzyme function. However, it is known from leaching experiments that prebiotically relevant geomaterials such as basalt release mainly sodium and only little magnesium. A natural solution to this problem is heat flux through thin rock fractures, in which magnesium is actively enriched and sodium is depleted by thermogravitational convection and thermophoresis. This process establishes suitable conditions for ribozyme function from a basaltic leach. It can take place in a spatially distributed system of rock cracks and is therefore particularly robust to natural fluctuations and disturbances.

Deficiency of chemical reaction networks and thermodynamics
Matteo Polettini, University of Luxembourg

Deficiency is a topological property of a chemical reaction network linked to important dynamical features, in particular of its deterministic fixed points and stochastic stationary states. Here we link it to thermodynamics: in particular, we discuss the validity of a strong vs. weak zeroth law, the existence of time-reversed mass-action kinetics, and the possibility of formulating marginal fluctuation relations. Finally, we illustrate some subtleties of the Python module we created for MCMC stochastic simulation of CRNs, soon to be made public.

Large deviations theory and emergent landscapes in biological dynamics
Hong Qian, University of Washington

The mathematical theory of large deviations provides a nonequilibrium thermodynamic description of complex biological systems that consist of heterogeneous individuals. In terms of the notions of stochastic elementary reactions and pure kinetic species, the continuous-time, integer-valued Markov process dictates a thermodynamic structure that generalizes (i) Gibbs’ microscopic chemical thermodynamics of equilibrium matter to nonequilibrium small systems such as living cells and tissues; and (ii) Gibbs’ potential function to the landscapes of biological dynamics, such as those of C. H. Waddington and S. Wright.

Using the maximum entropy production principle to understand and predict microbial biogeochemistry
Joseph Vallino, Marine Biological Laboratory, Woods Hole

Natural microbial communities contain billions of individuals per liter and can exceed a trillion cells per liter in sediments, as well as harbor thousands of species in the same volume. The high species diversity contributes to extensive metabolic functional capabilities to extract chemical energy from the environment, such as methanogenesis, sulfate reduction, anaerobic photosynthesis, chemoautotrophy, and many others, most of which are only expressed by bacteria and archaea.

Reductionist modeling of natural communities is problematic, as we lack knowledge of growth kinetics for most organisms and have even less understanding of the mechanisms governing predation, viral lysis, and predator avoidance in these systems. As a result, existing models that describe microbial communities contain dozens to hundreds of parameters, and state variables are extensively aggregated. Overall, the models are little more than non-linear parameter-fitting exercises with limited to no extrapolation potential, as there are few principles governing the organization and function of complex self-assembling systems.

Over the last decade, we have been developing a systems approach that models microbial communities as a distributed metabolic network, focusing on metabolic function rather than describing individuals or species. We use an optimization approach to determine which metabolic functions in the network should be upregulated and which downregulated, based on the non-equilibrium thermodynamics principle of maximum entropy production (MEP). Derived from statistical mechanics, MEP proposes that steady-state systems will likely organize to maximize the rate of free energy dissipation. We have extended this conjecture to non-steady-state systems and have proposed that living systems maximize entropy production integrated over time and space, while non-living systems maximize instantaneous entropy production.
Our presentation will provide a brief overview of the theory and approach, as well as present several examples of applying MEP to describe the biogeochemistry of microbial systems in laboratory experiments and natural ecosystems.

Reduction and the quasi-steady state approximation
Carsten Wiuf, University of Copenhagen

Chemical reactions often occur on different time scales. In applications of chemical reaction network theory it is often desirable to reduce a reaction network to a smaller one by eliminating fast species or fast reactions. Various techniques exist for doing so, e.g. the Quasi-Steady-State Approximation (QSSA) or the Rapid Equilibrium Approximation. However, these methods are not always mathematically justifiable. Here, a method is presented for which (so-called) non-interacting species are eliminated by means of QSSA. It is argued that this method is mathematically sound. Various examples are given (the Michaelis–Menten mechanism, the two-substrate mechanism, …) and older related techniques from the 1950s and 60s are briefly discussed.
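As a concrete instance of the QSSA (the standard textbook reduction, sketched by me rather than taken from the talk), one can compare the full mass-action Michaelis–Menten mechanism with its reduced rate law dS/dt = −Vmax S/(Km + S):

```python
# Full mechanism E + S <=> ES -> E + P with mass-action rates,
# versus the QSSA-reduced Michaelis-Menten rate law.
k1, km1, k2 = 100.0, 1.0, 1.0
E0, S0 = 0.01, 1.0             # QSSA is valid when E0 << S0 + Km
dt, steps = 1e-4, 50_000       # integrate to t = 5 by explicit Euler

# Full system: e, s, c, p = [E], [S], [ES], [P].
e, s, c, p = E0, S0, 0.0, 0.0
for _ in range(steps):
    bind = k1 * e * s - km1 * c   # net rate of E + S -> ES
    cat = k2 * c                  # rate of ES -> E + P
    e += (-bind + cat) * dt
    s += -bind * dt
    c += (bind - cat) * dt
    p += cat * dt

# Reduced system: dS/dt = -Vmax*S/(Km+S), Km = (km1+k2)/k1, Vmax = k2*E0.
Km, Vmax = (km1 + k2) / k1, k2 * E0
s_q = S0
for _ in range(steps):
    s_q -= Vmax * s_q / (Km + s_q) * dt
p_q = S0 - s_q

print(abs(p - p_q) < 0.01 * S0)   # True: the reduction tracks the full system
```

When E0 is not small the agreement degrades, which is exactly the regime where the talk’s question of mathematical justification becomes delicate.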


Jacob Obrecht

15 June, 2021

This is a striking portrait of the “outsider genius” Jacob Obrecht:

Obrecht, ~1457–1505, was an important composer in the third generation of the Franco-Flemish school. While he was overshadowed by the superstar Josquin, I’m currently finding him more interesting—mainly on the basis of one long piece called Missa Maria zart.

Obrecht was very bold and experimental in his younger years. He would do wild stuff like play themes backwards, or take the notes in a melody, rearrange them in order of how long they were played, and use that as a new melody. Paraphrasing Wikipedia:

Combining modern and archaic elements, Obrecht’s style is multi-dimensional. Perhaps more than those of the mature Josquin, the masses of Obrecht display a profound debt to the music of Johannes Ockeghem in the wide-arching melodies and long musical phrases that typify the latter’s music. Obrecht’s style is an example of the contrapuntal extravagance of the late 15th century. He often used a cantus firmus technique for his masses: sometimes he divided his source material up into short phrases; at other times he used retrograde (backwards) versions of complete melodies or melodic fragments. He once even extracted the component notes and ordered them by note value, long to short, constructing new melodic material from the reordered sequences of notes. Clearly to Obrecht there could not be too much variety, particularly during the musically exploratory period of his early twenties. He began to break free from conformity to formes fixes (standard forms) especially in his chansons (songs). However, he much preferred composing Masses, where he found greater freedom. Furthermore, his motets reveal a wide variety of moods and techniques.

But I haven’t heard any of these far-out pieces yet. Instead, I’ve been wallowing in his masterpiece: Missa Maria zart, an hour-long mass he wrote one year before he died of the bubonic plague. Here is the Tallis Scholars version, with a score:

It’s harmonically sweet: it seems to avoid the pungent leading-tones that Dufay or even Ockeghem lean on. It’s highly non-repetitive: while the same themes get reused in endless variations, there’s little if any exact repetition of anything that came before. And it’s very homogeneous: nothing stands out very dramatically. So it’s a bit like a beautiful large stone with all its rough edges smoothed down by water: hard to get a handle on. And I’m the sort of guy who finds this irresistibly attractive. After about a dozen listens, it reveals itself.

The booklet in the Tallis Scholars version, written by Peter Phillips, explains it better:

To describe Obrecht’s Missa Maria zart (‘Mass for gentle Mary’) as a ‘great work’ is true in two respects. It is a masterpiece of sustained and largely abstract musical thought; and it is possibly the longest polyphonic setting of the Mass Ordinary ever written, over twice the length of the more standard examples by Palestrina and Josquin. How it was possible for Obrecht to conceive something so completely outside the normal experience of his time is one of the most fascinating riddles in Renaissance music.

Jacob Obrecht (1457/8–1505) was born in Ghent and died in Ferrara. If the place of death suggests that he was yet another Franco-Flemish composer who received his training in the Low Countries and made his living in Italy, this is inaccurate. For although Obrecht was probably the most admired living composer alongside Josquin des Prés, he consistently failed to find employment in the Italian Renaissance courts. The reason for this may have been that he could not sing well enough: musicians at that time were primarily required to perform, to which composing took second place. Instead he was engaged by churches in his native land—in Utrecht, Bergen op Zoom, Cambrai, Bruges and Antwerp—before he finally decided in 1504 to take the risk and go to the d’Este court in Ferrara. Within a few months of arriving there he had contracted the plague. He died as the leading representative of Northern polyphonic style, an idiom which his Missa Maria zart explores to the full.

This Mass has inevitably attracted a fair amount of attention. The most recent writer on the subject is Rob Wegman (Born for the Muses: The Life and Masses of Jacob Obrecht by Rob C Wegman (Oxford 1994) pp.322–330. Wegman, Op.cit., p.284, is referring to H Besseler’s article ‘Von Dufay bis Josquin, ein Literaturbericht’, Zeitschrift für Musikwissenschaft, 11 (1928/9), p.18): ‘Maria zart is the sphinx among Obrecht’s Masses. It is vast. Even the sections in reduced scoring … are unusually extended. Two successive duos in the Gloria comprise over 100 bars, two successive trios in the Credo close to 120; the Benedictus alone stretches over more than 100 bars’; ‘Maria zart has to be experienced as the whole, one-hour-long sound event that it is, and it will no doubt evoke different responses in each listener … one might say that the composer retreated into a sound world all his own’; ‘Maria zart is perhaps the only Mass that truly conforms to Besseler’s description of Obrecht as the outsider genius of the Josquin period.’

The special sound world of Maria zart was not in fact created by anything unusual in its choice of voices. Many four-part Masses of the later fifteenth century were written for a similar grouping: low soprano, as here, or high alto as the top part; two roughly equal tenor lines, one of them normally carrying the chant when it is quoted in long notes; and bass. The unusual element is to a certain extent the range of the voices—they are all required to sing at extremes of their registers and to make very wide leaps—but more importantly the actual detail of the writing: the protracted sequences against the long chant notes, the instrumental-like repetitions and imitations.

It is this detail which explains the sheer length of this Mass. At thirty-two bars the melody of Maria zart is already quite long as a paraphrase model (the Western Wind melody, for example, is twenty-two bars long) and it duly becomes longer when it is stated in very protracted note-lengths. This happens repeatedly in all the movements, the most substantial augmentation being times twelve (for example, ‘Benedicimus te’ and ‘suscipe deprecationem nostram’ in the Gloria; ‘visibilium’ and ‘Et ascendit’ in the Credo). But what ultimately makes the setting so extremely elaborate is Obrecht’s technique of tirelessly playing with the many short phrases of this melody, quoting snippets of it in different voices against each other, constantly varying the extent of the augmentation even within a single statement, taking motifs from it which can then be turned into other melodies and sequences, stating the phrases in antiphony between different voices. By making a kaleidoscope of the melody in these ways he literally saturated all the voice-parts in all the sections with references to it. To identify them all would be a near impossible task. The only time that Maria zart is quoted in full from beginning to end without interruption, fittingly, is at the conclusion of the Mass, in the soprano part of the third Agnus Dei (though even here Obrecht several times introduced unscheduled octave leaps).

At the same time as constantly quoting from the Maria zart melody Obrecht developed some idiosyncratic ways of adorning it. Perhaps the first thing to strike the ear is that the texture of the music is remarkably homogeneous. There are none of the quick bursts of vocal virtuosity one may find in Ockeghem, or the equally quick bursts of triple-time metre in duple beloved of Dufay and others. The calmer, more consistent world of Josquin is suggested (though it is worth remembering that Josquin may well have learnt this technique in the first place from Obrecht). This sound is partly achieved by use of motifs, often derived from the tune, which keep the rhythmic stability of the original but go on to acquire a life of their own. Most famously these motifs become sequences—an Obrecht special—some of them with a dazzling number of repetitions (nine at ‘miserere’ in the middle of Agnus Dei I; six of the much more substantial phrase at ‘qui ex Patre’ in the Credo; nine in the soprano part alone at ‘Benedicimus te’ in the Gloria. This number is greatly increased by imitation in the other non-chant parts). Perhaps this method is at its most beautiful at the beginning of the Sanctus. In addition the motifs are used in imitation between the voices, sometimes so presented that the singers have to describe leaps of anything up to a twelfth to take their place in the scheme (as in the passage beginning ‘Benedicimus te’ in the Gloria mentioned above). It is the impression which Obrecht gives of having had an inexhaustible supply of these motifs and melodic ideas, free or derived, that gives this piece so much of its vitality. The mesmerizing effect of these musical snippets unceasingly passing back and forth around the long notes of the central melody is at the heart of the particular sound world of this great work.

When Obrecht wrote his Missa Maria zart is not certain. Wegman concludes that it is a late work—possibly his last surviving Mass setting—on the suggestion that Obrecht was in Innsbruck, on his way to Italy, at about the time that some other settings of the Maria zart melody are known to have been written. These, by Ludwig Senfl and others, appeared between 1500 and 1504–6; the melody itself, a devotional monophonic song, was probably written in the Tyrol in the late fifteenth century. The idea that this Mass, stylistically at odds with much of Obrecht’s other known late works and anyway set apart from all his other compositions, was something of a swansong is particularly appealing. We shall never know exactly what Obrecht was hoping to prove in it, but by going to the extremes he did he set his contemporaries a challenge in a certain kind of technique which they proved unable or unwilling to rival.

This Gramophone review of the Tallis Scholars performance, by David Fallows, is also helpful:

This is a bizarre and fascinating piece: and the disc is long-awaited, because The Tallis Scholars have been planning it for some years. It may be the greatest challenge they have faced so far. Normally a Renaissance Mass cycle lasts from 20 to 30 minutes; in the present performance, this one lasts 69 minutes. No ‘liturgical reconstruction’ with chants or anything to flesh out the disc: just solid polyphony the whole way. It seems, in fact, to be the longest known Renaissance Mass.

It is a work that has long held the attention of musicologists: Marcus van Crevel’s famous edition was preceded by 160 pages of introduction discussing its design and numerology. And nobody has ever explained why it survives in only a single source—a funny print by a publisher who produced no other known music book. However, most critics agree that this is one of Obrecht’s last and most glorious works, even if it leaves them tongue-tied. Rob C. Wegman’s recent masterly study of Obrecht’s Masses put it in a nutshell: “Forget the imitation, it seems to tell us, be still, and listen”.

There is room for wondering whether all of it needs to be quite so slow: an earlier record, by the Prague Madrigal Singers (Supraphon, 6/72 – nla), got through it in far less time. Moreover, Obrecht is in any case a very strange composer, treating his dissonances far more freely than most of his contemporaries, sometimes running sequential patterns beyond their limit, making extraordinary demands of the singers in terms of range and phrase-length. That is, there may be ways of making the music run a little more fluidly, so that the irrational dissonances do not come across as clearly as they do here. But in most ways it is hard to fault Peter Phillips’s reading of this massive work.

With only eight singers on the four voices, he takes every detail seriously. And they sing with such conviction and skill that there is hardly a moment when the ear is inclined to wander. As we have come to expect, The Tallis Scholars are technically flawless and constantly alive. Briefly, the disc is a triumph. But, more than that, it is a major contribution to the catalogue, unflinchingly presenting both the beauties and the apparent flaws of this extraordinary work. Phew!

My ear must be too jaded by modern music to notice the dissonances.


Data Visualization Course

10 June, 2021

Are you a student interested in data analysis and sustainability? Or maybe you know some students interested in these things?

Then check this out: my former student Nina Otter, who now teaches at UCLA and Leipzig, is offering a short course on how to analyze and present data using modern methods like topological data analysis—with sustainable fishing as an example!

Students who apply before June 15 have a chance to learn a lot of cool stuff and get paid for it!

Call for Applications

We are advertising the following bootcamp, which will take place remotely on 22-25 June 2021.

If you are interested in participating, please apply here:

FishEthoBase data visualisation bootcamp: this is a 4-day bootcamp, organised by the DeMoS Institute, whose aim is to study ways to visualise scores and criteria from a fish ethology database. The database (http://fishethobase.net/) is an initiative led by the non-profits fair-fish international (http://www.fair-fish.net/what/) and FishEthoGroup (https://fishethogroup.net/). The database is publicly accessible and stores all currently available ethological knowledge on fish, with a specific focus on species farmed in aquaculture, with the goal of improving the welfare of fish.

The bootcamp will take place virtually on 22-25 June 2021, and will involve a maximum of eight students selected through an open call during the first half of June. The students will be guided by researchers in statistics and topological data analysis. During the first day of the bootcamp there will be talks given by researchers from FishEthoBase, as well as from the mentors. The next three days will be devoted to focused work in groups, with each day starting and ending with short presentations given by students about the progress of their work; after the presentations there will also be time for feedback and discussions from FishEthoBase researchers, and the mentors. Towards the end of August there will be a 2-hour follow-up meeting to discuss the implementation of the results from the bootcamp.

Target audience: we encourage applications from advanced undergraduate, master, and PhD students from a variety of backgrounds, including, but not limited to, computer science, mathematics, statistics, data analysis, computational biology, maritime sciences, and zoology.

Inclusivity: we encourage especially students from underrepresented groups to apply to this bootcamp.

Remuneration: The students who will be selected to participate in the bootcamp will be remunerated with a salary of 1400 euros.

When: 22-25 June 2021, approximately 11-18 CET each day

Where: remotely, on Zoom

I think it’s really cool that Nina Otter has started the DeMoS Institute. Here is the basic idea:

The institute carries out research on topics related to anti-democratic tendencies in our society, as well as on meta-scientific questions about how to make the scientific system more democratic. We believe that research must be done in the presence of those who bear its consequences. Therefore, we perform our research while directly implementing practices that promote inclusivity and interdisciplinarity, in active engagement with society at large.


Symmetric Monoidal Categories: a Rosetta Stone

28 May, 2021

The Topos Institute is in business! I’m really excited about visiting there this summer and working on applied category theory.

They recently had a meeting called Finding the Right Abstractions, organized by Scott Garrabrant, David Spivak, and Andrew Critch, with some people concerned about AI risks. I gave a gentle introduction to the uses of symmetric monoidal categories:

• Symmetric monoidal categories: a Rosetta Stone.

To describe systems composed of interacting parts, scientists and engineers draw diagrams of networks: flow charts, Petri nets, electrical circuit diagrams, signal-flow graphs, chemical reaction networks, Feynman diagrams and the like. All these different diagrams fit into a common framework: the mathematics of symmetric monoidal categories. While originally the morphisms in such categories were mainly used to describe processes, we can also use them to describe open systems.
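The two basic operations of such a category, sequential composition (plugging outputs into inputs) and parallel composition (placing processes side by side), can be sketched in code (a toy of my own, not from the talk), with morphisms modeled as functions on tuples of values:

```python
def compose(f, g):
    """Sequential composition: run f, then g (read left to right)."""
    return lambda xs: g(f(xs))

def tensor(f, g, n):
    """Parallel (monoidal) composition: f acts on the first n wires, g on the rest."""
    return lambda xs: f(xs[:n]) + g(xs[n:])

def swap(xs):
    """The symmetry: two wires crossing."""
    return (xs[1], xs[0])

double = lambda xs: (2 * xs[0],)       # one wire in, one wire out
add    = lambda xs: (xs[0] + xs[1],)   # two wires in, one wire out

# The diagram (double ⊗ double) ; add : double each input, then sum.
circuit = compose(tensor(double, double, 1), add)
print(circuit((3, 4)))             # (14,)

# The symmetry lets wires cross without changing this result.
print(compose(swap, add)((3, 4)))  # (7,)
```

String diagrams are exactly pictures of such pipelines; the point of the Rosetta Stone is that the same algebra covers circuits, Petri nets, and the rest.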

You can see the slides here, and watch a video here:

For a lot more detail on these ideas, see:

• John Baez and Mike Stay, Physics, topology, logic and computation: a Rosetta Stone, in New Structures for Physics, ed. Bob Coecke, Lecture Notes in Physics vol. 813, Springer, Berlin, 2011, pp. 95–174.


Compositional Robotics (Part 2)

27 May, 2021

Very soon we’re having a workshop on applications of category theory to robotics:

2021 Workshop on Compositional Robotics: Mathematics and Tools, online, Monday 31 May 2021.

You’re invited! As of today it’s not too late to register and watch the talks online, and registration is free. Go here to register:

https://forms.gle/9v52EXgDFFGu3h9Q6

Here’s the schedule. All times are in UTC, so the show starts at 9:15 am Pacific Time:

Time (UTC)    Speaker and title

16:15–16:30   Intro and plan of the workshop
16:30–17:10   Jonathan Lorand: Category Theory Basics
17:20–18:00   John Baez: Category Theory and Systems
              Breakout rooms
18:30–19:10   Andrea Censi & Gioele Zardini: Categories for Co-Design
19:20–20:00   David Spivak: Dynamic Interaction Patterns
              Breakout rooms
20:30–21:15   Aaron Ames: A Categorical Perspective on Robotics
21:30–22:15   Daniel Koditschek: Toward a Grounded Type Theory for Robot Task Composition
22:30–00:30   Selected speakers: Talks from open submissions

For more information go to the workshop website or my previous blog post on this workshop:

Compositional robotics (part 1).


Category Theory and Systems

27 May, 2021

I’m giving a talk on Monday the 31st of May, 2021 at 17:20 UTC, which happens to be 10:20 am Pacific Time for me. You can see my slides here:

Category theory and systems.

I’ll talk about how to describe open systems as morphisms in symmetric monoidal categories, and how to use ‘functorial semantics’ to describe the behavior of open systems.

It’s part of the 2021 Workshop on Compositional Robotics: Mathematics and Tools, and if you click the link you can see how to attend!  If you stick around for the rest of the workshop you’ll hear more concrete talks from people who really work on robotics. 


Court Orders Deep Carbon Cuts for Shell

26 May, 2021

Whoa! Today a Dutch court ordered Shell to reduce its carbon emissions by 45% by 2030 from 2019 levels!

“The court orders Royal Dutch Shell, by means of its corporate policy, to reduce its CO2 emissions by 45% by 2030 with respect to the level of 2019 for the Shell group and the suppliers and customers of the group,” the judge said.

Including customers—people who buy gasoline and other products from Shell and burn the stuff—means that Shell has to sell less of that stuff.

This is the first time a court has ruled a company needs to reduce its carbon emissions. It was possible because a Dutch court had earlier ruled that failing to slow global warming will lead to human rights violations.

Earlier this year Shell set out one of the sector’s most ambitious climate strategies. It has a target to cut the carbon intensity of its products by at least 6% by 2023, by 20% by 2030, by 45% by 2035 and by 100% by 2050 from 2016 levels.

But the court said that Shell’s climate policy was “not concrete and is full of conditions…that’s not enough.”

“The conclusion of the court is therefore that Shell is in danger of violating its obligation to reduce. And the court will therefore issue an order upon RDS,” the judge said.

The court ordered Shell to reduce its absolute levels of carbon emissions, while Shell’s intensity-based targets could allow the company to grow its output in theory.

“This is arguably the most significant climate change related judgment yet, which emphasises that companies and not just governments may be the target of strategic litigation which seeks to drive changes in behaviour,” said Tom Cummins, dispute resolution partner at law firm Ashurst.

Shell said that it would appeal the verdict and that it has set out its plan to become a net-zero emissions energy company by 2050.

That’s a quote from here:

• Bart Meijer, Dutch court orders Shell to deepen carbon cuts in landmark ruling, Reuters, 26 May 2021.

Here’s what Friends of the Earth Netherlands said on April 5, 2019, when they brought this case against Shell:

Donald Pols, Director of Friends of the Earth Netherlands said:

“Shell’s directors still do not want to say goodbye to oil and gas. They would pull the world into the abyss. The judge can prevent this from happening.”

In the court summons, Friends of the Earth Netherlands outlines why it is bringing this groundbreaking climate litigation case against Shell, highlighting the company’s early knowledge of climate change and its own role in causing it. Despite acknowledging that the fossil fuel industry has a responsibility to act on climate change, and claiming to “strongly support” the Paris Agreement, Shell continues to lobby against climate policy and to invest billions in further oil and gas extraction. This is incompatible with global climate goals.

The 2018 Intergovernmental Panel on Climate Change report, a key piece of evidence in this case, underlines the importance of limiting global warming to 1.5 degrees for the protection of ecosystems and human lives, and outlines the devastating and potentially irreversible impacts of any “extra bit of warming”.

The court summons proves that Shell’s current climate ambitions do not guarantee any emissions reductions, but would in fact contribute to a huge overshoot of 1.5 degrees of global warming. The plaintiffs argue that Shell is violating its duty of care and threatening human rights by knowingly undermining the world’s chances to stay below 1.5C.

In addition, the plaintiffs argue that Shell is violating Articles 2 and 8 of the European Convention on Human Rights: the right to life and the right to family life. In the historic Urgenda case against the Dutch state, the Dutch Appeals court created a precedent by ruling that a failure to achieve climate goals leads to human rights violations. The court ordered the Dutch state to cut its greenhouse gas emissions by at least 25% by the end of 2020.

Roger Cox, who initially represented Urgenda, is now leading Friends of the Earth’s case against Shell. Roger said:

“If successful, the uniqueness of the case would be that Shell, as one of the largest multinational corporations in the world would be legally obligated to change its business operations. We also expect that this would have an effect on other fossil fuel companies, raising the pressure on them to change.”

If successful, the court case would rule that Shell must reduce its CO2 emissions by 45% by 2030 compared to 2010 levels, and to zero by 2050, in line with the Paris Climate Agreement. This would have major implications, requiring Shell to move away from fossil fuels.


Electrostatics and the Gauss–Lucas Theorem

24 May, 2021

Say you know the roots of a polynomial P and you want to know the roots of its derivative. You can do it using physics! Namely, electrostatics in 2d space, viewed as the complex plane.

To keep things simple, let us assume P does not have repeated roots. Then the procedure works as follows.

Put equal point charges at each root of P, then see where the resulting electric field vanishes. Those are the roots of P’.

I’ll explain why this is true a bit later. But first, we use this trick to see something cool.

There’s no way the electric field can vanish outside the convex hull of your set of point charges. After all, if all the charges are positive, the electric field must point out of that region. So, the roots of P’ must lie in the convex hull of the roots of P!



This cool fact is called the Gauss–Lucas theorem. It always seemed mysterious to me. Now, thanks to this ‘physics proof’, it seems completely obvious!

Of course, it relies on my first claim: that if we put equal point charges at the roots of P, the electric field they generate will vanish at the roots of P’. Why is this true?

By multiplying by a constant if necessary, we can assume

\displaystyle{   P(z) = \prod_{i = 1}^n  (z - a_i) }

Thus

\displaystyle{  \ln |P(z)| = \sum_{i = 1}^n \ln|z - a_i| }

This function is the electric potential created by equal point charges at the points ai in the complex plane. The corresponding electric field is minus the gradient of the potential, so it vanishes at the critical points of this function. Equivalently, it vanishes at the critical points of the exponential of this function, namely |P|. Apart from one possible exception, these points are the same as the critical points of P, namely the roots of P’. So, we’re almost done!

The exception occurs when P has a critical point where P vanishes. |P| is not smooth where P vanishes, so in this case we cannot say the critical point of P is a critical point of |P|.

However, when P has a critical point where P vanishes, then this point is a repeated root of P, and I already said I’m assuming P has no repeated roots. So, we’re done—given this assumption.
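Here's a quick numerical check of this claim, done as a minimal sketch with a hypothetical cubic whose roots are 0, 4 and 2 + 3i. Since the logarithmic derivative of P is P′(z)/P(z) = Σᵢ 1/(z − aᵢ), and the electric field is the conjugate of that sum, the field should vanish exactly at the roots of P′:

```python
# Numerical check: the field of equal unit charges at the roots of P
# vanishes at the roots of P'. Example cubic (hypothetical values):
# P(z) = z (z - 4) (z - (2+3i)).
import cmath

roots = [0, 4, 2 + 3j]  # roots of P

# P'(z) = 3 z^2 - 2 e1 z + e2, where e1, e2 are elementary symmetric functions
e1 = sum(roots)
e2 = roots[0]*roots[1] + roots[0]*roots[2] + roots[1]*roots[2]

# Roots of P' via the quadratic formula (complex square root from cmath)
disc = cmath.sqrt((2*e1)**2 - 12*e2)
critical_points = [(2*e1 + disc) / 6, (2*e1 - disc) / 6]

# At each root of P', the sum of 1/(z - a_i) — the conjugate of the
# electric field — should be zero up to floating-point error.
for w in critical_points:
    field = sum(1 / (w - a) for a in roots)
    assert abs(field) < 1e-9
```

Swapping in any other set of distinct roots works just as well, since nothing in the computation depends on the particular values chosen.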

Everything gets a bit more complicated when our polynomial has repeated roots. Greg Egan explored this, and also the case where its derivative has repeated roots.

However, the Gauss–Lucas theorem still applies to polynomials with repeated roots, and this proof explains why:

• Wikipedia, Gauss–Lucas theorem.

Alternatively, it should be possible to handle the case of a polynomial with repeated roots by thinking of it as a limit of polynomials without repeated roots.

By the way, in my physics proof of the Gauss–Lucas theorem I said the electric field generated by a bunch of positive point charges cannot vanish outside the convex hull of these point charges because the field ‘points out’ of this region. Let me clarify that.

It’s true even if the positive point charges aren’t all equal; they just need to have the same sign. The rough idea is that each charge creates an electric field that points radially outward, so these electric fields can’t cancel at a point that’s not ‘between’ several charges—in other words, at a point that’s not in the convex hull of the charges.

But let’s turn this idea into a rigorous argument.

Suppose z is some point outside the convex hull of the points ai. Then, by the hyperplane separation theorem, we can draw a line with z on one side and all the points ai on the other side. Let v be a vector normal to this line and pointing toward the z side. Then

v \cdot (z - a_i) > 0

for all i. Since the electric field created by the ith point charge is a positive multiple of z – ai at the point z, the total electric field at z has a positive dot product with v. So, it can’t be zero!
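The convex-hull conclusion can also be checked directly. In this sketch, again using the hypothetical cubic with roots 0, 4 and 2 + 3i, I compute the roots of P′ and verify that each has nonnegative barycentric coordinates with respect to the triangle formed by the roots of P, i.e. lies in their convex hull:

```python
# Check the Gauss-Lucas conclusion: roots of P' lie in the convex hull
# (here a triangle) of the roots of P. Hypothetical example values.
import cmath

p1, p2, p3 = 0 + 0j, 4 + 0j, 2 + 3j  # roots of P

# Roots of P'(z) = 3 z^2 - 2 e1 z + e2
e1 = p1 + p2 + p3
e2 = p1*p2 + p1*p3 + p2*p3
disc = cmath.sqrt((2*e1)**2 - 12*e2)
critical_points = [(2*e1 + disc) / 6, (2*e1 - disc) / 6]

def barycentric(w):
    """Barycentric coordinates of w with respect to the triangle (p1, p2, p3)."""
    d = (p2.imag - p3.imag)*(p1.real - p3.real) + (p3.real - p2.real)*(p1.imag - p3.imag)
    l1 = ((p2.imag - p3.imag)*(w.real - p3.real) + (p3.real - p2.real)*(w.imag - p3.imag)) / d
    l2 = ((p3.imag - p1.imag)*(w.real - p3.real) + (p1.real - p3.real)*(w.imag - p3.imag)) / d
    return l1, l2, 1 - l1 - l2

# All coordinates nonnegative (up to rounding) means "inside or on the triangle"
for w in critical_points:
    assert all(l > -1e-12 for l in barycentric(w))
```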

Credits

The picture of a convex hull is due to Robert Laurini.


Parallel Line Masses and Marden’s Theorem

22 May, 2021

Here’s an idea I got from Albert Chern on Twitter. He did all the hard work, and I think he also drew the picture I’m going to use. I’ll just express the idea in a different way.

Here’s a strange fact about Newtonian gravity.

Consider three parallel ‘line masses’ that have a constant mass per length—the same constant for each one. Choose a plane orthogonal to these lines. There will typically be two points on this plane, say a and b, where a mass can sit in equilibrium, with the gravitational pull from all three line masses cancelling out. This will be an unstable equilibrium.

Put a mass at point a. Remove the three line masses—but keep in mind the triangle they formed where they pierced your plane!

You can now orbit a test particle in an elliptical orbit around the mass at a in such a way that:

• one focus of this ellipse is a,
• the other focus is b, and
• the ellipse fits inside the triangle, just touching the midpoint of each side of the triangle.

Even better, this ellipse has the largest possible area of any ellipse contained in the triangle!

Here is Chern’s picture:




The triangle’s corners are the three points where the line masses pierce your chosen plane. These line masses create a gravitational potential, and the contour lines are level curves of this potential.

You can see that the points a and b are at saddle points of the potential. Thus, a mass placed at either a or b will be in an unstable equilibrium.

You can see the ellipse with a and b as its foci, snugly fitting into the triangle.

You can sort of see that the ellipse touches the midpoints of the triangle’s edges.

What you can’t see is that this ellipse has the largest possible area for any ellipse fitting into the triangle!

Now let me explain the math. While the gravitational potential of a point mass in 3d space is proportional to 1/r, the gravitational potential of a line mass in 3d space is proportional to \log r, which is also the gravitational potential of a point mass in 2d space.

So, if we have three equal line masses, which are parallel and pierce an orthogonal plane at points p_1, p_2 and p_3, then their gravitational potential, as a function on this plane, will be proportional to

\phi(z) = \log|z - p_1| + \log|z - p_2| + \log|z - p_3|

Here I’m using z as our name for an arbitrary point on this plane, because the next trick is to think of this plane as the complex plane!

Where are the critical points (in fact saddle points) of this potential? They are just points where the gradient of \phi vanishes. To find these points, we can just take the exponential of \phi and see where the gradient of that vanishes. This is a nice idea because

e^{\phi(z)} = |(z-p_1)(z-p_2)(z-p_3)|

The gradient of this function will vanish whenever

P'(z) = 0

where

P(z) = (z-p_1)(z-p_2)(z-p_3)

Since P is a cubic polynomial, P' is a quadratic, hence proportional to

(z - a)(z - b)

for some a and b. Now we use

Marden’s theorem. Suppose the zeros p_1, p_2, p_3 of a cubic polynomial P are non-collinear. Then there is a unique ellipse inscribed in the triangle with vertices p_1, p_2, p_3 and tangent to the sides at their midpoints. The foci of this ellipse are the zeroes of the derivative of P.
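The midpoint-tangency part of the theorem is easy to test numerically. Since an ellipse with foci f₁, f₂ is the locus where |z − f₁| + |z − f₂| is constant, the three edge midpoints should all give the same sum of distances to the roots of P′. A minimal sketch, assuming a hypothetical triangle with vertices 0, 4 and 2 + 3i:

```python
# Numerical check of Marden's theorem: the roots of P' are the foci of an
# ellipse through the three edge midpoints of the triangle of roots of P.
# Hypothetical example triangle.
import cmath

p1, p2, p3 = 0 + 0j, 4 + 0j, 2 + 3j  # zeros of the cubic P

# Roots of P'(z) = 3 z^2 - 2 e1 z + e2: the predicted foci
e1 = p1 + p2 + p3
e2 = p1*p2 + p1*p3 + p2*p3
disc = cmath.sqrt((2*e1)**2 - 12*e2)
f1, f2 = (2*e1 + disc) / 6, (2*e1 - disc) / 6

# Sum of distances to the foci, evaluated at each edge midpoint
midpoints = [(p1 + p2)/2, (p2 + p3)/2, (p1 + p3)/2]
sums = [abs(m - f1) + abs(m - f2) for m in midpoints]

# All three midpoints lie on the same ellipse with foci f1, f2
assert max(sums) - min(sums) < 1e-9
```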

For a short proof of this theorem go here:

Carlson’s proof of Marden’s theorem.

This ellipse is called the Steiner inellipse of the triangle:

• Wikipedia, Steiner inellipse.

The proof that it has the largest area of any ellipse inscribed in the triangle goes like this. Using a linear transformation of the plane you can map any triangle to an equilateral triangle. It’s obvious that there’s a circle inscribed in any equilateral triangle, touching each of the triangle’s midpoints. It’s at least very plausible that this circle is the ellipse of largest area contained in the triangle. If we can prove this we’re done.

Why? Because linear transformations map circles to ellipses, and map midpoints of line segments to midpoints of line segments, and simply rescale areas by a constant factor. So applying the inverse linear transformation to the circle inscribed in the equilateral triangle, we get an ellipse inscribed in our original triangle, which will touch this triangle’s midpoints, and have the maximum possible area of any ellipse contained in this triangle!


Non-Equilibrium Thermodynamics in Biology (Part 1)

11 May, 2021

Together with William Cannon and Larry Li, I’m helping run a minisymposium as part of SMB2021, the annual meeting of the Society for Mathematical Biology:

• Non-equilibrium Thermodynamics in Biology: from Chemical Reaction Networks to Natural Selection, Monday June 14, 2021, beginning 9:30 am Pacific Time.

You can register for free here before May 31st, 11:59 pm Pacific Time. You need to register to watch the talks live on Zoom. I think the talks will be recorded.

Here’s the idea:

Abstract: Since Lotka, physical scientists have argued that living things belong to a class of complex and orderly systems that exist not despite the second law of thermodynamics, but because of it. Life and evolution, through natural selection of dissipative structures, are based on non-equilibrium thermodynamics. The challenge is to develop an understanding of what the respective physical laws can tell us about flows of energy and matter in living systems, and about growth, death and selection. This session will address current challenges including understanding emergence, regulation and control across scales, and entropy production, from metabolism in microbes to evolving ecosystems.

It’s exciting to me because I want to get back into work on thermodynamics and reaction networks, and we’ll have some excellent speakers on these topics. I think the talks will be in this order… later I will learn the exact schedule.

Christof Mast, Ludwig-Maximilians-Universität München

Coauthors: T. Matreux, K. LeVay, A. Schmid, P. Aikkila, L. Belohlavek, Z. Caliskanoglu, E. Salibi, A. Kühnlein, C. Springsklee, B. Scheu, D. B. Dingwell, D. Braun, H. Mutschler.

Title: Heat flows adjust local ion concentrations in favor of prebiotic chemistry

Abstract: Prebiotic reactions often require certain initial concentrations of ions. For example, the activity of RNA enzymes requires a lot of divalent magnesium salt, whereas too much monovalent sodium salt leads to a reduction in enzyme function. However, it is known from leaching experiments that prebiotically relevant geomaterial such as basalt releases mainly a lot of sodium and only little magnesium. A natural solution to this problem is heat fluxes through thin rock fractures, through which magnesium is actively enriched and sodium is depleted by thermogravitational convection and thermophoresis. This process establishes suitable conditions for ribozyme function from a basaltic leach. It can take place in a spatially distributed system of rock cracks and is therefore particularly stable to natural fluctuations and disturbances.

Supriya Krishnamurthy, Stockholm University

Coauthors: Eric Smith

Title: Stochastic chemical reaction networks

Abstract: The study of chemical reaction networks (CRNs) is a very active field. Earlier well-known results (Feinberg Chem. Eng. Sci. 42 2229 (1987), Anderson et al Bull. Math. Biol. 72 1947 (2010)) identify a topological quantity called deficiency, easy to compute for CRNs of any size, which, when exactly equal to zero, leads to a unique factorized (non-equilibrium) steady-state for these networks. No general results exist however for the steady states of non-zero-deficiency networks. In recent work, we show how to write the full moment-hierarchy for any non-zero-deficiency CRN obeying mass-action kinetics, in terms of equations for the factorial moments. Using these, we can recursively predict values for lower moments from higher moments, reversing the procedure usually used to solve moment hierarchies. We show, for non-trivial examples, that in this manner we can predict any moment of interest, for CRNs with non-zero deficiency and non-factorizable steady states. It is however an open question how scalable these techniques are for large networks.

Pierre Gaspard, Université libre de Bruxelles

Title: Nonequilibrium biomolecular information processes

Abstract: Nearly 70 years have passed since the discovery of DNA structure and its role in coding genetic information. Yet, the kinetics and thermodynamics of genetic information processing in DNA replication, transcription, and translation remain poorly understood. These template-directed copolymerization processes are running away from equilibrium, being powered by extracellular energy sources. Recent advances show that their kinetic equations can be exactly solved in terms of so-called iterated function systems. Remarkably, iterated function systems can determine the effects of genome sequence on replication errors, up to a million times faster than kinetic Monte Carlo algorithms. With these new methods, fundamental links can be established between molecular information processing and the second law of thermodynamics, shedding a new light on genetic drift, mutations, and evolution.

Carsten Wiuf, University of Copenhagen

Coauthors: Elisenda Feliu, Sebastian Walcher, Meritxell Sáez

Title: Reduction and the Quasi-Steady State Approximation

Abstract: Chemical reactions often occur at different time-scales. In applications of chemical reaction network theory it is often desirable to reduce a reaction network to a smaller reaction network by elimination of fast species or fast reactions. There exist various techniques for doing so, e.g. the Quasi-Steady-State Approximation or the Rapid Equilibrium Approximation. However, these methods are not always mathematically justifiable. Here, a method is presented for which (so-called) non-interacting species are eliminated by means of QSSA. It is argued that this method is mathematically sound. Various examples are given (Michaelis–Menten mechanism, two-substrate mechanism, …) and older related techniques from the 1950s and 60s are briefly discussed.

Matteo Polettini, University of Luxembourg

Coauthor: Tobias Fishback

Title: Deficiency of chemical reaction networks and thermodynamics

Abstract: Deficiency is a topological property of a Chemical Reaction Network linked to important dynamical features, in particular of deterministic fixed points and of stochastic stationary states. Here we link it to thermodynamics: in particular we discuss the validity of a strong vs. weak zeroth law, the existence of time-reversed mass-action kinetics, and the possibility to formulate marginal fluctuation relations. Finally we illustrate some subtleties of the Python module we created for MCMC stochastic simulation of CRNs, soon to be made public.

Ken Dill, Stony Brook University

Title: The principle of maximum caliber of nonequilibria

Abstract: Maximum Caliber is a principle for inferring pathways and rate distributions of kinetic processes. The structure and foundations of MaxCal are much like those of Maximum Entropy for static distributions. We have explored how MaxCal may serve as a general variational principle for nonequilibrium statistical physics – giving well-known results, such as the Green–Kubo relations, Onsager’s reciprocal relations and Prigogine’s Minimum Entropy Production principle near equilibrium, but is also applicable far from equilibrium. I will also discuss some applications, such as finding reaction coordinates in molecular simulations, non-linear dynamics in gene circuits, power-law-tail distributions in “social-physics” networks, and others.

Joseph Vallino, Marine Biological Laboratory, Woods Hole

Coauthors: Ioannis Tsakalakis, Julie A. Huber

Title: Using the maximum entropy production principle to understand and predict microbial biogeochemistry

Abstract: Natural microbial communities contain billions of individuals per liter and can exceed a trillion cells per liter in sediments, as well as harbor thousands of species in the same volume. The high species diversity contributes to extensive metabolic functional capabilities to extract chemical energy from the environment, such as methanogenesis, sulfate reduction, anaerobic photosynthesis, chemoautotrophy, and many others, most of which are only expressed by bacteria and archaea. Reductionist modeling of natural communities is problematic, as we lack knowledge on growth kinetics for most organisms and have even less understanding on the mechanisms governing predation, viral lysis, and predator avoidance in these systems. As a result, existing models that describe microbial communities contain dozens to hundreds of parameters, and state variables are extensively aggregated. Overall, the models are little more than non-linear parameter fitting exercises that have limited, to no, extrapolation potential, as there are few principles governing organization and function of complex self-assembling systems. Over the last decade, we have been developing a systems approach that models microbial communities as a distributed metabolic network that focuses on metabolic function rather than describing individuals or species. We use an optimization approach to determine which metabolic functions in the network should be up regulated versus those that should be down regulated based on the non-equilibrium thermodynamics principle of maximum entropy production (MEP). Derived from statistical mechanics, MEP proposes that steady state systems will likely organize to maximize free energy dissipation rate. We have extended this conjecture to apply to non-steady state systems and have proposed that living systems maximize entropy production integrated over time and space, while non-living systems maximize instantaneous entropy production. 
Our presentation will provide a brief overview of the theory and approach, as well as present several examples of applying MEP to describe the biogeochemistry of microbial systems in laboratory experiments and natural ecosystems.

Gheorghe Craciun, University of Wisconsin–Madison

Title: Persistence, permanence, and global stability in reaction network models: some results inspired by thermodynamic principles

Abstract: The standard mathematical model for the dynamics of concentrations in biochemical networks is called mass-action kinetics. We describe mass-action kinetics and discuss the connection between special classes of mass-action systems (such as detailed balanced and complex balanced systems) and the Boltzmann equation. We also discuss the connection between the “global attractor conjecture” for complex balanced mass-action systems and Boltzmann’s H-theorem. We also describe some implications for biochemical mechanisms that implement noise filtering and cellular homeostasis.

Hong Qian, University of Washington

Title: Large deviations theory and emergent landscapes in biological dynamics

Abstract: The mathematical theory of large deviations provides a nonequilibrium thermodynamic description of complex biological systems that consist of heterogeneous individuals. In terms of the notions of stochastic elementary reactions and pure kinetic species, the continuous-time, integer-valued Markov process dictates a thermodynamic structure that generalizes (i) Gibbs’ macroscopic chemical thermodynamics of equilibrium matters to nonequilibrium small systems such as living cells and tissues; and (ii) Gibbs’ potential function to the landscapes for biological dynamics, such as that of C. H. Waddington’s and S. Wright’s.

John Harte, University of California, Berkeley

Coauthors: Micah Brush, Kaito Umemura

Title: Nonequilibrium dynamics of disturbed ecosystems

Abstract: The Maximum Entropy Theory of Ecology (METE) predicts the shapes of macroecological metrics in relatively static ecosystems, across spatial scales, taxonomic categories, and habitats, using constraints imposed by static state variables. In disturbed ecosystems, however, with time-varying state variables, its predictions often fail. We extend macroecological theory from static to dynamic, by combining the MaxEnt inference procedure with explicit mechanisms governing disturbance. In the static limit, the resulting theory, DynaMETE, reduces to METE but also predicts a new scaling relationship among static state variables. Under disturbances, expressed as shifts in demographic, ontogenic growth, or migration rates, DynaMETE predicts the time trajectories of the state variables as well as the time-varying shapes of macroecological metrics such as the species abundance distribution and the distribution of metabolic rates over individuals. An iterative procedure for solving the dynamic theory is presented. Characteristic signatures of the deviation from static predictions of macroecological patterns are shown to result from different kinds of disturbance. By combining MaxEnt inference with explicit dynamical mechanisms of disturbance, DynaMETE is a candidate theory of macroecology for ecosystems responding to anthropogenic or natural disturbances.