The Periodic Table

21 January, 2022

 

I like many kinds of periodic table, but hate this one. See the problem?

Element 57 is drawn right next to element 72, replacing the element that should be there: element 71. So lutetium, element 71, is being denied its rightful place as a transition metal and is classified as a rare earth. Meanwhile lanthanum, element 57, which really is a rare earth, is drawn separately from all the rest! This is especially ironic because those rare earths are called ‘lanthanoids’ or ‘lanthanides’.

Similarly, element 89 is next to element 104, instead of the element that should be there: element 103. So lawrencium, element 103, is also being denied its rightful place as a transition metal. Meanwhile actinium, element 89, is banished from the row of ‘actinoids’, or ‘actinides’ — even though it gave them their name in the first place. How cruel!

Here Wikipedia does it right. Element 71 is a transition metal — not element 57. Similarly element 103 is a transition metal, not element 89.

This stuff is not just an arbitrary convention. Transition metals are chemically different from lanthanides and actinides. You can’t just stick them wherever you want.

In simple terms, as we move across the transition metals, they put 1, 2, 3, … , 10 electrons into their outermost d subshell. Similarly, as we move across the lanthanides or actinides, they put 1, 2, 3, … , 14 electrons into their outermost f subshell. I wrote about this here a while ago:

• John Baez, The Madelung rules, Azimuth, December 8, 2021.

There are some exceptions to the Madelung rules, but the bad periodic tables are not motivated by those exceptions. The Wikipedia periodic table accurately reflects the chemistry. The Encyclopedia Britannica table completely ruins the story by arbitrarily sticking lanthanum and actinium in amongst the transition metals instead of the elements that should be there: lutetium and lawrencium. I see no good reason for doing this.

Here’s another common kind of periodic table that I hate. It cuts a hole into the bottom two rows of the transition metals, and moves the metals that should be there — elements 71 and 103 — into the rare earths and actinides.

This amounts to claiming that there are 15 rare earths and 15 actinides, and just 9 transition metals in those two rows. That’s crazy: the fact that the d subshell holds 10 electrons and the f subshell holds 14 is dictated by group representation theory. Subshells hold 2, 6, 10, 14, … electrons — twice the odd numbers.

The periodic table is a marvelous thing: it shows how quantum mechanics and math predict patterns in the elements. Have fun making up new designs — but if you’re going to use the old kind, use the good one!

If you don’t believe me, listen to this guy:

But unlike him, I don’t think experiments were necessary to realize that the bad periodic tables were messed up. It’s not as if they were designed based on some alternative theory about which elements are transition metals.

Interestingly, the International Union of Pure and Applied Chemistry was supposed to meet at the end of last year to settle this issue. What did they decide? If you find out, please let me know!


Transition Metals

9 December, 2021


The transition metals are more complicated than lighter elements.

Why?

Because they’re the first whose electron wavefunctions are described by quadratic functions of x, y, and z — not just linear or constant ones. These are called ‘d orbitals’, and they look sort of like this:

More precisely: the wavefunctions of electrons in atoms depend on the distance r from the nucleus and also the angles \theta, \phi. The angular dependence is described by ‘spherical harmonics’, certain functions on the sphere. These are gotten by taking certain polynomials in x,y,z and restricting them to the unit sphere. Chemists have their own jargon for this:

• constant polynomial: s orbital

• linear polynomial: p orbital

• quadratic polynomial: d orbital

• cubic polynomial: f orbital

and so on.

To be even more precise, a spherical harmonic is an eigenfunction of the Laplacian on the sphere. Any such function is the restriction to the sphere of some homogeneous polynomial in x,y,z whose Laplacian in 3d space is zero. This polynomial can be constant, linear, etc.

The dimension of the space of spherical harmonics goes like 1, 3, 5, 7,… as we increase the degree of the polynomial starting from 0:

• constant: 1

• linear: x, y, z

• quadratic: xy, xz, yz, x^2 - y^2, x^2 - z^2

etcetera. So, we get one s orbital, three p orbitals, five d orbitals and so on. Here I’ve arbitrarily chosen a basis of the space of quadratic polynomials with vanishing Laplacian, and I’m not claiming this matches the d orbitals in the pictures!
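If you want to check this with a computer, here’s a minimal sketch (mine, not part of the post) using sympy: it verifies that the five quadratics above are harmonic and really do span a 5-dimensional space.

```python
# A minimal check with sympy: verify that the five quadratics listed
# above have vanishing Laplacian and span a 5-dimensional space.
from sympy import symbols, diff, Matrix

x, y, z = symbols('x y z')

def laplacian(f):
    """The Laplacian of f in 3d Euclidean space."""
    return diff(f, x, 2) + diff(f, y, 2) + diff(f, z, 2)

quadratics = [x*y, x*z, y*z, x**2 - y**2, x**2 - z**2]

# Each of these polynomials should be harmonic:
assert all(laplacian(f) == 0 for f in quadratics)

# Linear independence: expand each in the monomial basis and check
# that the matrix of coefficients has rank 5.
monomials = [x**2, y**2, z**2, x*y, x*z, y*z]
coefficients = Matrix([[f.expand().coeff(m) for m in monomials]
                       for f in quadratics])
print(coefficients.rank())  # prints 5
```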

The transition metals are the first to use the d orbitals. This is why they’re so different from lighter elements.

There are 5 d orbitals, and an electron occupying such an orbital can have spin up or down. This is why there are 10 transition metals per row!

This chart doesn’t show the last row of highly radioactive transition metals, just the ones you’re likely to see:

Look: 10 per row, all because there’s a 5-dimensional space of quadratic polynomials in x,y,z with vanishing Laplacian. Math becomes matter.

The Madelung rules

Can we understand why the first transition element, scandium, has 21 electrons? Yes, if we’re willing to use the ‘Madelung rules’ explained last time. Let me review them rapidly here.

You’ll notice this chart has axes called n and \ell.

As I just explained, the angular dependence of an orbital is determined by a homogeneous polynomial with vanishing Laplacian. In the above chart, the degree of this polynomial is called \ell. The space of such polynomials has dimension 2\ell + 1.

But an orbital has an additional radial dependence, described using a number called n. The math, which I won’t go into, requires that 0 \le \ell \le n - 1. That gives the above chart its roughly triangular appearance.

The letters s, p, d, f are just chemistry jargon for \ell = 0,1,2,3.

Thanks to spin and the Pauli exclusion principle, we can pack at most 2(2\ell + 1) electrons into the orbitals with a given choice of n and \ell. This bunch of orbitals is called a ‘subshell’.

The Madelung rules say the order in which subshells get filled:

  1. Electrons are assigned to subshells in order of increasing values of n + \ell.
  2. For subshells with the same value of n + \ell, electrons are assigned first to the subshell with lower n.
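Here’s a minimal sketch in Python (mine, just for fun): it generates the filling order straight from these two rules, by sorting subshells lexicographically by the pair (n + \ell, n), and reports when each subshell gets filled.

```python
# Generate the Madelung filling order: sort subshells (n, l) by the
# pair (n + l, n), then list each subshell's capacity 2(2l + 1).
LETTERS = 'spdf'  # chemists' letters for l = 0, 1, 2, 3

def madelung_order(max_n):
    """All subshells (n, l) with n <= max_n, in Madelung order."""
    subshells = [(n, l) for n in range(1, max_n + 1) for l in range(n)]
    return sorted(subshells, key=lambda s: (s[0] + s[1], s[0]))

total = 0
for n, l in madelung_order(4)[:7]:       # 1s through 3d
    capacity = 2 * (2 * l + 1)           # room for 2(2l+1) electrons
    total += capacity
    print(f'{n}{LETTERS[l]}: holds {capacity} electrons, full at atomic number {total}')

# The last line printed is '3d: holds 10 electrons, full at atomic number 30':
# the 3d subshell takes us from scandium (21 electrons) to zinc (30).
```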

So let’s see what happens. Only when we hit \ell = 2 will we get transition metals!

\boxed{n + \ell = 1}

n = 1, \ell = 0

This is called the 1s subshell, and we can put 2 electrons in here. First we get hydrogen with 1 electron, then helium with 2. At this point all the n = 1 subshells are full, so the ‘1st shell’ is complete, and helium is called a ‘noble gas’.

\boxed{n + \ell = 2}

n = 2, \ell = 0

This is called the 2s subshell, and we can put 2 more electrons in here. We get lithium with 3 electrons, and then beryllium with 4.

\boxed{n + \ell = 3}

n = 2, \ell = 1

This is called the 2p subshell, and we can put 6 more electrons in here. We get:

◦ boron with 5 electrons,
◦ carbon with 6,
◦ nitrogen with 7,
◦ oxygen with 8,
◦ fluorine with 9,
◦ neon with 10.

At this point all the n = 2 subshells are full, so the 2nd shell is complete and neon is another noble gas.

n = 3, \ell = 0

This is called the 3s subshell, and we can put 2 more electrons in here. We get sodium with 11 electrons, and magnesium with 12.

\boxed{n + \ell = 4}

n = 3, \ell = 1

This is called the 3p subshell, and we can put 6 more electrons in here. We get:

◦ aluminum with 13 electrons,
◦ silicon with 14,
◦ phosphorus with 15,
◦ sulfur with 16,
◦ chlorine with 17,
◦ argon with 18.

At this point all the n = 3 subshells are full, so the 3rd shell is complete and argon is another noble gas.

n = 4, \ell = 0

This is called the 4s subshell, and we can put 2 more electrons in here. We get potassium with 19 electrons and calcium with 20.

\boxed{n + \ell = 5}

n = 3, \ell = 2

This is called the 3d subshell, and we can put 10 more electrons in here. Since now we’ve finally hit \ell = 2, and thus a d subshell, these are transition metals! We get:

◦ scandium with 21 electrons,
◦ titanium with 22,
◦ vanadium with 23,
◦ chromium with 24,
◦ manganese with 25,
◦ iron with 26,
◦ cobalt with 27,
◦ nickel with 28,
◦ copper with 29,
◦ zinc with 30.

And the story continues—but at least we’ve seen why the first batch of transition elements starts where it does!

The scandal of scandium

For a strong attack on the Madelung rules, see:

• Eric Scerri, The problem with the Aufbau principle for finding electronic configurations, 24 June 2012.

But it’s important to realize that he’s attacking a version of the Madelung rules that is different from, and stronger than, the version stated above. My version concerned only neutral atoms, not ions. The stronger version claims that you can use the Madelung rules not only to determine the ground state of an atom, but also those of the positive ions obtained by taking that atom and removing some electrons!

This stronger version breaks down if you consider scandium with one electron removed. As we’ve just seen, scandium has the same electrons as argon together with three more: two in the 4s orbital and one in the 3d orbital. This conforms to the Madelung rules.

But when you ionize scandium and remove one electron, it’s not the 3d electron that leaves—it’s one of the 4s electrons! This breaks the stronger version of the Madelung rules.

The weaker version of the Madelung rules also breaks down, but later in the transition metals. The first problem is with chromium; the second is with copper:

By the Madelung rules, chromium should have 2 electrons in the 4s subshell and 4 in the 3d subshell. But in fact it has just 1 in the 4s and 5 in the 3d.

Similarly, by the Madelung rules copper should have 2 electrons in the 4s subshell and 9 in the 3d. But in fact it has just 1 in the 4s and 10 in the 3d.

There are also other breakdowns in heavier transition metals, listed here:

• Wikipedia, Aufbau principle: exceptions in the d block.

These subtleties can only be understood by digging a lot deeper into how the electrons in an atom interact with each other. That’s above my pay grade right now. If you know a good place to learn more about this, let me know! I’m only interested in atoms here, not molecules.

Oxidation states of transition metals

Transition metals get some of their special properties because the electrons in the d subshell are easily removed. For example, this is why the transition metals conduct electricity.

Also, when reacting chemically with other elements, they lose different numbers of electrons. The different possibilities are called ‘oxidation states’.

For example, scandium has all the electrons of argon (Ar) plus two in an s orbital and one in a d orbital. It can easily lose 3 electrons, giving an oxidation state called Sc3+. Titanium has one more electron, so it can lose 4 and form Ti4+. And so on:



This accounts for the most obvious pattern in the chart below: the diagonal lines sloping up.



The red dots are common oxidation states, while the white dots are rarer ones. For example, iron (Fe) commonly loses 2 or 3 electrons, and more rarely 4, 5, or 6.

The diagonal lines sloping up come from the simple fact that as we move along a row of transition metals, there are more and more electrons in the d subshell, so more can easily be removed. But everything is complicated by the fact that electrons interact! So the trend doesn’t go on forever: manganese gives up as many as 7 electrons, but iron doesn’t easily give up 8, only at most 6. And there’s much more going on, too.

Note also that the two charts above don’t actually agree: the chart in color includes more rare oxidation states.
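Here’s a tiny sketch in Python (mine, with the atomic numbers hard-coded): it counts each metal’s electrons beyond the argon core, which matches the top of the diagonal trend from scandium through manganese and then fails at iron.

```python
# Count valence electrons beyond the argon core for the first few
# transition metals: two 4s electrons plus however many 3d electrons.
ARGON = 18
metals = {'Sc': 21, 'Ti': 22, 'V': 23, 'Cr': 24, 'Mn': 25, 'Fe': 26}

for symbol, Z in metals.items():
    valence = Z - ARGON
    print(f'{symbol}: {valence} electrons beyond argon, so up to {symbol}{valence}+')

# This prints Sc3+, Ti4+, V5+, Cr6+, Mn7+ ... and then Fe8+, which is
# where the naive counting fails: iron stops at Fe6+ in practice.
```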

References

For a bit more, read:

• Wikipedia, Transition metals.

• Oxidation states of transition metals, Chemistry LibreTexts.

The colored chart of oxidation states in this post is from Wikicommons, made by Felix Wan, corrected to include the two most common oxidation states of ruthenium. The black-and-white chart is from the Chemistry LibreTexts webpage.


The Madelung Rules

8 December, 2021

I’ve been thinking about chemistry lately. I’m always amazed by how far we can get in the study of multi-electron atoms using ideas from the hydrogen atom, which has just one electron.

If we ignore the electron’s spin and special relativity, and use just Schrödinger’s equation, the hydrogen atom is exactly solvable—and a key part of any thorough introduction to quantum mechanics. So I’ll zip through that material now, saying just enough to introduce the ‘Madelung rules’, which are two incredibly important rules of thumb for how electrons behave in the ground states of multi-electron atoms.

If we take the Hilbert space \mathbf{H} of bound states of the hydrogen atom, ignoring electron spin, we can decompose it as a direct sum of subspaces:

\displaystyle{ \mathbf{H} = \bigoplus_{n = 1}^\infty \mathbf{H}_n }

where the energy equals -1/n^2, in suitable units, in the subspace \mathbf{H}_n.

Since the hydrogen atom has rotational symmetry, each subspace \mathbf{H}_n further decomposes into irreducible representations of the rotation group \mathrm{SO}(3). Any such representation is classified by a natural number \ell = 0, 1, 2, \dots. Physically this number describes angular momentum; mathematically it describes the dimension of the representation. The angular momentum is \sqrt{\ell(\ell + 1)}, while the dimension is 2\ell + 1. Concretely, we can think of this representation as the space of homogeneous polynomials of degree \ell with vanishing Laplacian. I’ll say a bit more about this next time.

The subspace \mathbf{H}_n decomposes as a direct sum of irreducible representations with 0 \le \ell \le n - 1, like this:

\displaystyle{\mathbf{H}_n = \bigoplus_{\ell = 0}^{n-1} \mathbf{H}_{n,\ell} }

So, in the subspace \mathbf{H}_{n,\ell} the electron has energy -1/n^2 and angular momentum \sqrt{\ell(\ell + 1)}.

I’ve been ignoring the electron’s spin, but we shouldn’t ignore it completely, because it’s very important for understanding the periodic table and chemistry in general. To take it into account, all I will do is tensor \mathbf{H} with \mathbb{C}^2, to account for the electron’s two spin states. So, the true Hilbert space of bound states of a hydrogen atom is \mathbf{H} \otimes \mathbb{C}^2.

By what I’ve said, we can decompose this Hilbert space into subspaces

\mathbf{H}_n \otimes \mathbb{C}^2

called shells, getting this:

\displaystyle{ \mathbf{H} \otimes \mathbb{C}^2 = \bigoplus_{n = 1}^\infty \mathbf{H}_n \otimes \mathbb{C}^2 }

We can then decompose the shells further into subspaces

\mathbf{H}_{n,\ell} \otimes \mathbb{C}^2

called subshells, obtaining our final result:

\displaystyle{ \mathbf{H} \otimes \mathbb{C}^2 = \bigoplus_{n = 1}^\infty \bigoplus_{\ell = 0}^{n-1} \mathbf{H}_{n , \ell} \otimes \mathbb{C}^2 }

Since

\dim(\mathbf{H}_{n,\ell}) = 2\ell + 1

the dimensions of the subshells are

\dim(\mathbf{H}_{n,\ell} \otimes \mathbb{C}^2) = 2(2\ell + 1)

or in other words, twice the odd numbers:

2, 6, 10, 14, \dots

This lets us calculate the dimensions of the shells:

\displaystyle{ \dim(\mathbf{H}_{n} \otimes \mathbb{C}^2) = \sum_{\ell = 0}^{n-1} \dim(\mathbf{H}_{n,\ell} \otimes \mathbb{C}^2)  = 2\sum_{\ell = 0}^{n-1} (2\ell + 1) = 2n^2}

since the sum of the first n odd numbers is n^2. So, the dimensions of the shells are twice the perfect squares:

2, 8, 18, 32, \dots

Okay, so much for hydrogen!

Now, the ‘Aufbau principle’ says that as we keep going through the periodic table, looking at elements with more and more electrons, these electrons will act approximately as if each one occupies a subshell. Thanks to the Pauli exclusion principle, we can’t put more electrons into some subshell than the dimension of that subshell. So the big question is: which subshell does each electron go into?

This is the question that the Madelung rules answer! Here’s what they say:

  1. Electrons are assigned to subshells in order of increasing values of n + \ell.
  2. For subshells with the same value of n + \ell, electrons are assigned first to the subshell with lower n—or equivalently, higher \ell.

They aren’t always right, but they’re damned good, given that in reality we’ve got a bunch of electrons interacting with each other—and not even described by separate wavefunctions, but really one big fat ‘entangled’ wavefunction.

Here’s what the Madelung rules predict:

So, subshells get filled in order of increasing n + \ell, and for any choice of n + \ell we start with the biggest possible \ell and work down.

This chart uses some old-fashioned but still very popular notation for the subshells. We say the number n, but instead of saying the number \ell we use a letter:

\ell = 0: s
\ell = 1: p
\ell = 2: d
\ell = 3: f

The reasons for these letters would make for a long and thrilling story, but not today.

The history of the Madelung rules

At this point I should go through the periodic table and show you how well the Madelung rules predict what’s going on. Basically, as we fill a particular subshell we get a bunch of related elements, and then as we go on up to the next subshell we get another bunch—and we can understand a lot about their properties! But there are also plenty of deviations from the Madelung rules, which are also interesting. I’ll talk about all these things next time.

I should also say a bit about why the Madelung rules work as well as they do! For example, what’s the importance of n + \ell?

But that’s not what I have lined up for today. Sorry! Instead, I want to talk about something much less important: the historical origin of the Madelung rules. According to Wikipedia they have many names:

• the Madelung rule (after Erwin Madelung)
• the Janet rule (after Charles Janet)
• the Klechkowsky rule (after Vsevolod Klechkovsky)
• Wiswesser’s rule (after William Wiswesser)
• the Aufbau approximation
• the diagonal rule, and
• the Uncle Wiggly path.

Seriously! Uncle Wiggly!

Understanding the history of these rules is going to be difficult. You can read a lot about their prehistory here:

• Wikipedia, Aufbau principle.

Bohr and Sommerfeld played a big role in setting up the theory of shells and subshells, and perhaps the whole idea of ‘Aufbau’: German for ‘building up’ atoms one electron at a time.

But I was confused about whether Madelung discovered both rules or just one, especially because a lot of people say ‘Madelung rule’ in the singular, when there are really two. So I asked on History of Science and Mathematics Stackexchange:

There are two widely used rules of thumb to determine which subshells are filled in a neutral atom in its ground state:

• Electrons are assigned to subshells in order of increasing value of n + \ell.

• For subshells with the same value of n + \ell, electrons are assigned first to the subshell with lower n.

These rules don’t always hold, but that’s not my concern here. My question is: which of these rules did Erwin Madelung discover? (Both? Just one?)

Wikipedia seems to say Charles Janet discovered the first in 1928:

A periodic table in which each row corresponds to one value of n + \ell (where the values of n and \ell correspond to the principal and azimuthal quantum numbers respectively) was suggested by Charles Janet in 1928, and in 1930 he made explicit the quantum basis of this pattern, based on knowledge of atomic ground states determined by the analysis of atomic spectra. This table came to be referred to as the left-step table. Janet “adjusted” some of the actual n + \ell values of the elements, since they did not accord with his energy ordering rule, and he considered that the discrepancies involved must have arisen from measurement errors. In the event, the actual values were correct and the n + \ell energy ordering rule turned out to be an approximation rather than a perfect fit, although for all elements that are exceptions the regularised configuration is a low-energy excited state, well within reach of chemical bond energies.

It then goes on to say:

In 1936, the German physicist Erwin Madelung proposed this as an empirical rule for the order of filling atomic subshells, and most English-language sources therefore refer to the Madelung rule. Madelung may have been aware of this pattern as early as 1926.

Is “this” still rule 1?

It then goes on to say:

In 1945 William Wiswesser proposed that the subshells are filled in order of increasing values of the function

W(n,\ell) = n + \ell - \frac{\ell}{\ell + 1}

This is equivalent to the combination of rules 1 and 2. So, this article seems to suggest that rule 2 was discovered by Wiswesser. But I have my doubts. Goudsmit and Richards write:

Madelung discovered a simple empirical rule for neutral atoms. It consists of two parts.

and then they list both 1 and 2.

I have not yet managed to get ahold of Erwin Madelung’s work, e.g.

• E. Madelung, Die mathematischen Hilfsmittel des Physikers, Springer, Berlin, 1936.

I got a helpful reply from M. Farooq:

The only relevant section in E. Madelung’s edited book, Die mathematischen Hilfsmittel des Physikers, Springer: Berlin, 1936 is a small paragraph. It seems Madelung just called this for electron book-keeping and he had no justification for his proposition. He calls it a lexicographic order (lexikographische Ordnung).

I was searching for it some other reasons several months ago: Question in SE Physics. The original and the (machine but edited) translations is shared.

15 Atomic structure (electron catalog) (to p. 301).

The eigenfunction of an atom, consisting of Z electrons and Z-times positively charged nucleus, can be constructed in the case of removed degeneracy in first approximation as a product of Z hydrogen eigenfunctions (cf. p. 356), each of which is defined by four quantum numbers n, \ell, m, s defined by n>0, n-1 \geqq \ell \geqq 0, s=\pm \frac{1}{2}, m \leqq l. According to Pauli’s principle, no two of these functions may coincide in all four quantum numbers. According to the Bohr principle of structure, an atom with Z electrons is formed from an atom with (Z-1) electrons by adding another one (and increasing the nuclear charge by 1) without changing the quantum numbers of the already existing electrons. Therefore a catalog can be set up, from whose in each case Z first positions the atom is built up in the basic state (cf. the table p. 360).

The ordering principle of this catalog is a lexicographic order according to the numbers (n+\ell), n, s, m. A theoretical justification of just this arrangement is not yet available. One reads from it:

1. The periodic system of the elements. Two atoms are homologous if in each case their “last electron” in the l, m, s coincides.

2. The spectroscopic character of the basic term, entered in column 10. Namely |\Sigma m| = 0, 1, 2, 3, \ldots gives the characters S, P, D, F, G, H, I, \ldots and (2|\Sigma s|+1) gives the multiplicity.

3. The possibilities for excited states (possible terms), where not all Z electrons are in the first Z positions of the catalog.

The catalog is the representation form of an empirical rule. It idealizes the experience, because in some cases deviations are observed.

So, it’s clear that Madelung knew and efficiently stated both rules in 1936! By the way, ‘lexicographic order’ is the standard mathematical term for how we’re ordering the pairs (n+\ell, n) in the Madelung rules.
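Just to double-check that Wiswesser’s function gives the same ordering as this lexicographic order, here’s a minimal sketch in Python (mine):

```python
# Check that sorting subshells lexicographically by (n + l, n), as
# Madelung did, agrees with sorting by Wiswesser's function
# W(n, l) = n + l - l/(l + 1).
subshells = [(n, l) for n in range(1, 9) for l in range(n)]

madelung = sorted(subshells, key=lambda s: (s[0] + s[1], s[0]))
wiswesser = sorted(subshells, key=lambda s: s[0] + s[1] - s[1] / (s[1] + 1))

assert madelung == wiswesser  # the two orderings agree
print(madelung[:8])
# [(1, 0), (2, 0), (2, 1), (3, 0), (3, 1), (4, 0), (3, 2), (4, 1)]
# i.e. 1s, 2s, 2p, 3s, 3p, 4s, 3d, 4p.
```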

References

I took the handsome chart illustrating the Madelung rules from here:

• Aufbau principle, Chemistry God, 18 February 2020.

This blog article is definitely worth reading!


Benzene

30 November, 2021

The structure of benzene is fascinating. Look at all these different attempts to depict it! Let me tell you a tiny bit of the history.

In 1865, August Kekulé argued that benzene is a ring of carbon atoms with alternating single and double bonds. Later, at a conference celebrating the 25th anniversary of this discovery, he said he realized this after having a daydream of a snake grabbing its own tail.

Kekulé’s model was nice, because before this it was hard to see how 6 carbons and 6 hydrogens could form a reasonable molecule with each carbon having 4 bonds and each hydrogen having one. But this model led to big problems, which were only solved with quantum mechanics.

For example, if benzene looked like Kekulé’s model, there would be 4 ways to replace two hydrogens with chlorine! You could have two chlorines next to each other with a single bond between them as shown here… or with a double bond between them. But there aren’t 4, just 3.

In 1872 Kekulé tried to solve this problem by saying benzene rapidly oscillates between two forms. Below is his original picture of those two forms. The single bonds and double bonds trade places.

But there was still a problem: benzene has less energy than if it had alternating single and double bonds.

The argument continued until 1933, when Linus Pauling and George Wheland used quantum mechanics to tackle benzene. Here’s the first sentence in their paper:

As you can see, there were models much stranger than Kekulé’s.

What was Pauling and Wheland’s idea? Use the quantum superposition principle! A superposition of a live and dead cat is theoretically possible in quantum mechanics… but a superposition of two structures of a molecule can have lower energy than either structure alone, and then this is what we actually see! Here’s what Pauling said later, in 1946:

But in reality, benzene is much subtler than just a quantum superposition of Kekulé’s two structures. For example: 6 of its electrons become ‘delocalized’, their wavefunction forming two rings above and below the plane containing the carbon nuclei!

Benzene is far from the only molecule with this property: for example, all the ‘anthocyanins’ I talked about last time also have rings with delocalized electrons:

In general, such molecules are called aromatic, because some of the first to be discovered have strong odors. Aromaticity is an important concept in chemistry, and people still fight over its precise definition:

• Wikipedia, Aromaticity.

One thing is for sure: the essence of aromaticity is not the aroma. It’s more about having rings of carbons in a plane, with delocalized electrons in so-called pi bonds, which protrude at right angles to this plane.

Another typical feature of aromatic compounds is that they sustain ‘aromatic ring currents’. Let me illustrate this with the example of benzene:

When you turn on a magnetic field (shown in red here), a benzene molecule will automatically line up at right angles to the field, and these electrons start moving around! This current loop creates its own magnetic field (shown in purple).

What does this current loop look like, exactly? To understand this, you have to remember that the benzene’s 6 delocalized electrons lie above and below the plane of the benzene molecule.

So, if you compute the electric current above or below the plane of the benzene molecule, it goes around and around like this:

But if you compute the electric current in the plane of the benzene molecule—where the nuclei of the carbon atoms are—you get a much more complicated pattern. Some current even flows backward, against the overall flow!

This current creates its own magnetic field. Outside the benzene molecule, this points the same way as the externally imposed magnetic field, reinforcing it. So, a magnetic field gets strengthened when it goes through benzene. This is called ‘antishielding’—or ‘deshielding’ in this picture from Organic Spectroscopy International:

I want to understand aromaticity and aromatic ring currents better. If I have the energy, I’ll say more in future articles. For example, I want to tell you about ‘Hückel theory’: a simplified mathematical model of aromatic compounds that’s a lot of fun if you like graph theory and matrices.
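As a small teaser, here’s a minimal sketch in Python (my own illustration, not from the post) of the Hückel computation for benzene itself. It uses the standard convention that the pi orbital energies are \alpha + \beta x, where \alpha and \beta < 0 are empirical constants and x ranges over the eigenvalues of the adjacency matrix of the hexagon:

```python
# Hueckel theory for benzene: the pi orbital energies are alpha + beta*x,
# where x ranges over the eigenvalues of the adjacency matrix of the
# 6-cycle, one vertex per carbon atom.
import numpy as np

N = 6
adjacency = np.zeros((N, N))
for i in range(N):
    adjacency[i, (i + 1) % N] = 1   # each carbon bonds to the next,
    adjacency[(i + 1) % N, i] = 1   # around the ring

x = np.linalg.eigvalsh(adjacency)
print(np.round(x, 6))  # [-2. -1. -1.  1.  1.  2.], i.e. 2 cos(2 pi k / 6)

# Since beta < 0, the lowest orbitals correspond to the largest x:
# energies alpha + 2 beta, alpha + beta, alpha + beta. These three
# orbitals hold exactly benzene's 6 pi electrons: a closed shell.
```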

References

Please click on the pictures to see where I got them. You can learn more that way! Some came from Wikicommons, via these Wikipedia articles:

• Wikipedia, Benzene.

• Wikipedia, Aromatic ring current.

including the pictures of current vector fields in benzene, created by ‘Hoferaanderl’.


Anthocyanins

28 November, 2021

 

As the chlorophyll wanes, now is the heyday of the xanthophylls, carotenoids and anthocyanins. These contain carbon rings and chains whose electrons become delocalized… their wavefunctions resonating at different frequencies, absorbing some colors of light and leaving the yellows, oranges and reds for us to see!

Yes, it’s fall. I’m enjoying it.

I wrote about two xanthophylls in my May 27, 2014 diary entry: I explained how they get their color from the resonance of delocalized electrons that spread all over a carbon chain with alternating single and double bonds:

I discussed chlorophyll, which also has such a chain, in my May 29th entry. I wrote about some carotenoids in my July 2, 2006 entry: these too have long chains of carbons with alternating single and double bonds.

I haven’t discussed anthocyanins yet! These have rings rather than chains of carbon, but the basic mechanism is similar: it’s the delocalization of electrons that makes them able to resonate at frequencies in the visual range. They are often blue or purple, but they contribute to the color of many red leaves:



Click on these two graphics for more details! I got them from a website called Science Notes, and it says:

Some leaves make flavonoids. Anthocyanins are flavonoids which vary in color depending on pH. Anthocyanins are not usually present in leaves during the growing season. Instead, plants produce them as temperatures drop. They act as a natural sunscreen and protect against cold damage. Anthocyanins also deter some insects that like to overwinter on plants and discourage new seedlings from sprouting too close to the parent plant. Plants need energy from light to make anthocyanins. So, vivid red and purple fall colors only appear if there are several sunny autumn days in a row.

This raises a lot of questions, like: how do anthocyanins protect leaves from cold, and why do some leaves make them only shortly before they die? Or are they there all along, hidden behind the chlorophyll? Maybe this paper would help:

• D. Lee and K. Gould, Anthocyanins in leaves and other vegetative organs: an introduction, Advances in Botanical Research 37 (2002), 1–16.

Thinking about anthocyanins has led me to ponder the mystery of aromaticity. Roughly, a compound is aromatic if it contains one or more rings with pi electrons delocalized over the whole ring. But people fight over the exact definition.

I may write more about this if I ever solve some puzzles that are bothering me, like the mathematical origin of Hückel’s rule, which says a planar ring of carbon atoms is aromatic if it has 4n + 2 pi electrons. I want to know where the formula 4n + 2 comes from, and I’m getting close.
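For the record, here’s the standard counting argument, sketched in the language of Hückel theory; it takes the form of the energy levels as given, so to my mind it only pushes the question back a step. The pi orbital energies of a planar ring of N carbons are

E_k = \alpha + 2\beta \cos(2\pi k/N), \qquad k = 0, 1, \dots, N - 1

The level k = 0 is nondegenerate, while the levels k and N - k are degenerate in pairs. So a closed shell holds 2 electrons in the bottom level plus 4 in each of n filled degenerate pairs: 4n + 2 electrons in all.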

An early paper by Linus Pauling discusses the resonance of electrons in anthocyanins and other compounds with rings of carbon. This one is freely available, and it’s pretty easy to read:

• Linus Pauling, Recent work on the configuration and electronic structure of molecules; with some applications to natural products, in Fortschritte der Chemie Organischer Naturstoffe, 1939, Springer, Vienna, pp. 203–235.


The Ideal Monatomic Gas

15 July, 2021

Today at the Topos Institute, Sophie Libkind, Owen Lynch and I spent some time talking about thermodynamics, Carnot engines and the like. As a result, I want to work out for myself some basic facts about the ideal gas. This stuff is all well-known, but I’m having trouble finding exactly what I want—and no more, thank you—collected in one place.

Just for background, the Carnot cycle looks roughly like this:


This is actually a very inaccurate picture, but it gets the point across. We have a container of gas, and we make it execute a cyclic motion, so its pressure P and volume V trace out a loop in the plane. As you can see, this loop consists of four curves:

• In the first, from a to b, we put a container of gas in contact with a hot medium. Then we make it undergo isothermal expansion: that is, expansion at a constant temperature.

• In the second, from b to c, we insulate the container and let the gas undergo adiabatic reversible expansion: that is, expansion while no heat enters or leaves. The temperature drops, but merely because the container expands, not because heat leaves. It reaches a lower temperature. Then we remove the insulation.

• In the third, from c to d, we put the container in contact with a cold medium that matches its temperature. Then we make it undergo isothermal contraction: that is, contraction at a constant temperature.

• In the fourth, from d to a, we insulate the container and let the gas undergo adiabatic reversible contraction: that is, contraction while no heat enters or leaves. The temperature increases until it matches that of the hot medium. Then we remove the insulation.

The Carnot cycle is historically important because it’s an example of a heat engine that’s as efficient as possible: it gives you the most work possible for the given amount of heat transferred from the hot medium to the cold medium. But I don’t want to get into that. I just want to figure out formulas for everything that’s going on here—including formulas for the four curves in this picture!

To get specific formulas, I’ll consider an ideal monatomic gas, meaning a gas made of individual atoms, like helium. Some features of an ideal gas, like the formula for energy as a function of temperature, depend on whether it’s monatomic.

As a quirky added bonus, I’d like to highlight how certain properties of the ideal monatomic gas depend on the dimension of space. There’s a certain chunk of the theory that doesn’t depend on the dimension of space, as long as you interpret ‘volume’ to mean the n-dimensional analogue of volume. But the number 3 shows up in the formula for the energy of the ideal monatomic gas. And this is because space is 3-dimensional! So just for fun, I’ll do the whole analysis in n dimensions.

There are four basic formulas we need to know.

First, we have the ideal gas law:

PV = NkT

where

P is the pressure.
V is the n-dimensional volume.
N is the number of molecules in a container of gas.
k is a constant called Boltzmann’s constant.
T is the temperature.

Second, we have a formula for the energy, or more precisely the internal energy, of a monatomic ideal gas:

U = \frac{n}{2} NkT

where

U is the internal energy.
n is the dimension of space.

The factor of n/2 shows up thanks to the equipartition theorem: classically, each quadratic term in the energy of a system at temperature T has expected value \frac{1}{2} kT. A free atom in n dimensions has n quadratic terms in its kinetic energy, one for each direction it can move.

Third, we have a relation between internal energy, work and heat:

dU = \delta W + \delta Q

Here

dU is the differential of internal energy.
\delta W is the infinitesimal work done to the gas.
\delta Q is the infinitesimal heat transferred to the gas.

The intuition is simple: to increase the energy of some gas you can do work to it or transfer heat to it. But the math may seem a bit murky, so let me explain.

I emphasize ‘to’ because it affects the sign: for example, the work done by the gas is minus the work done to the gas. Work done to the gas increases its internal energy, while work done by it reduces its internal energy. Similarly for heat.

But what is this ‘infinitesimal’ stuff, and these weird \delta symbols?

In a minute I’m going to express everything in terms of P and V. So, T, N and U will be functions on the plane with coordinates P and V. dU will be a 1-form on this plane: it’s the differential of the function U.

But \delta W and \delta Q are not differentials of functions W and Q. There are no functions on the plane called W and Q. You cannot take a box of gas and measure its work, or heat! There are just 1-forms called \delta W and \delta Q describing the change in work or heat. These are not exact 1-forms: that is, they’re not differentials of functions.

Fourth and finally:

\delta W = - P dV

This should be intuitive. The work done by the gas on the outside world by changing its volume a little equals the pressure times the change in volume. So, the work done to the gas is minus the pressure times the change in volume.

One nice feature of the 1-form \delta W = -P d V is this: as we integrate it around a simple closed curve going counterclockwise, we get the area enclosed by that curve. So, the area of this region:


is the work done by our container of gas during the Carnot cycle. (There are a lot of minus signs to worry about here, but don’t worry, I’ve got them under control. Our curve is going clockwise, so the work done to our container of gas is negative, and it’s minus the area in the region.)

Okay, now that we have our four basic equations, we can play with them and derive consequences. Let’s suppose the number N of atoms in our container of gas is fixed—a constant. Then we think of everything as a function of two variables: P and V.

First, since PV = NkT we have

\displaystyle{ T = \frac{PV}{Nk} }

So temperature is proportional to pressure times volume.

Second, since PV = NkT and U = \frac{n}{2}NkT we have

U = \frac{n}{2} P V

So, like the temperature, the internal energy of the gas is proportional to pressure times volume—but it depends on the dimension of space!

From this we get

dU = \frac{n}{2} d(PV) = \frac{n}{2}( V dP + P dV)

From this and our formulas dU = \delta W + \delta Q, \delta W = -PdV we get

\begin{array}{ccl}  \delta Q &=& dU - \delta W \\  \\  &=& \frac{n}{2}( V dP + P dV) + P dV \\ \\  &=& \frac{n}{2} V dP + \frac{n+2}{2} P dV   \end{array}

That’s basically it!
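If you want a computer to double-check this 1-form bookkeeping, here’s a minimal sketch (mine) using sympy, treating dP and dV as formal symbols, so it just verifies the coefficient algebra above:

```python
# Verify that dU - deltaW equals the formula for deltaQ derived above,
# treating dP and dV as formal symbols.
from sympy import symbols, expand, simplify, Rational

n, P, V, dP, dV = symbols('n P V dP dV')

# U = (n/2) P V, so dU = (n/2)(V dP + P dV), written out by hand:
dU = Rational(1, 2) * n * (V * dP + P * dV)
deltaW = -P * dV                      # infinitesimal work done to the gas

deltaQ = expand(dU - deltaW)
expected = Rational(1, 2) * n * V * dP + Rational(1, 2) * (n + 2) * P * dV

assert simplify(deltaQ - expected) == 0
print(deltaQ)   # the expanded form of (n/2) V dP + ((n+2)/2) P dV
```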

But now we know how to figure out everything about the Carnot cycle. I won’t do it all here, but I’ll work out formulas for the curves in this cycle:


The isothermal curves are easy, since we’ve seen temperature is proportional to pressure times volume:

\displaystyle{ T = \frac{PV}{Nk} }

So, an isothermal curve is any curve with

P \propto V^{-1}

The adiabatic reversible curves, or ‘adiabats’ for short, are a lot more interesting. A curve C in the (P, V) plane is an adiabat if, when the container of gas changes pressure and volume while moving along this curve, no heat gets transferred to or from the gas. That is:

\delta Q \Big|_C = 0

where the funny symbol means I’m restricting a 1-form to the curve and getting a 1-form on that curve (which happens to be zero).

Let’s figure out what an adiabat looks like! By our formula for \delta Q we have

(\frac{n}{2} V dP + \frac{n+2}{2} P dV) \Big|_C = 0

or

\frac{n}{2} V dP \Big|_C = -\frac{n+2}{2} P dV \Big|_C

or

\frac{dP}{P} \Big|_C = - \frac{n+2}{n} \frac{dV}{V}\Big|_C

Now, we can integrate both sides along a portion of the curve C and get

\ln P = - \frac{n+2}{n} \ln V + \mathrm{constant}

or

P \propto V^{-(n+2)/n}

So in 3-dimensional space, as you let a gas expand adiabatically—say by putting it in an insulated cylinder so heat can’t get in or out—its pressure drops as its volume increases. But for a monatomic gas it drops in this peculiar specific way: the pressure goes like the volume to the -5/3 power.
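Here’s a minimal numerical sanity check (an Euler integration I cooked up, not part of the derivation): it integrates dP/P = -\frac{n+2}{n} dV/V step by step in 3 dimensions and compares with the closed form:

```python
# Integrate the adiabat equation dP/P = -((n+2)/n) dV/V with Euler
# steps in n = 3 dimensions and compare with P ~ V^(-5/3).
import numpy as np

n = 3
gamma = (n + 2) / n                  # 5/3 in three dimensions

V = np.linspace(1.0, 2.0, 100_001)   # expand the gas from V = 1 to V = 2
P = np.empty_like(V)
P[0] = 1.0
for i in range(len(V) - 1):
    dV = V[i + 1] - V[i]
    P[i + 1] = P[i] * (1 - gamma * dV / V[i])   # Euler step for dP = -gamma P dV/V

exact = V ** (-gamma)
print(abs(P[-1] - exact[-1]))   # a tiny Euler error, around 1e-5
```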

In any dimension, the pressure of the monatomic gas drops more steeply when the container expands adiabatically than when it expands at constant temperature. Why? Because V^{-(n+2)/n} drops more rapidly than V^{-1} since

\frac{n+2}{n} > 1

But as n \to \infty,

\frac{n+2}{n} \to 1

so the adiabats become closer and closer to the isothermal curves in high dimensions. This is not important for understanding the conceptually significant features of the Carnot cycle! But it’s curious, and I’d like to improve my understanding by thinking about it until it seems obvious. It doesn’t yet.


Nonequilibrium Thermodynamics in Biology (Part 2)

16 June, 2021

Larry Li, Bill Cannon and I ran a session on non-equilibrium thermodynamics in biology at SMB2021, the annual meeting of the Society for Mathematical Biology. You can see talk slides here!

Here’s the basic idea:

Since Lotka, physical scientists have argued that living things belong to a class of complex and orderly systems that exist not despite the second law of thermodynamics, but because of it. Life and evolution, through natural selection of dissipative structures, are based on non-equilibrium thermodynamics. The challenge is to develop an understanding of what the respective physical laws can tell us about flows of energy and matter in living systems, and about growth, death and selection. This session addresses current challenges including understanding emergence, regulation and control across scales, and entropy production, from metabolism in microbes to evolving ecosystems.

Click on the links to see slides for most of the talks:

Persistence, permanence, and global stability in reaction network models: some results inspired by thermodynamic principles
Gheorghe Craciun, University of Wisconsin–Madison

The standard mathematical model for the dynamics of concentrations in biochemical networks is called mass-action kinetics. We describe mass-action kinetics and discuss the connection between special classes of mass-action systems (such as detailed balanced and complex balanced systems) and the Boltzmann equation. We also discuss the connection between the ‘global attractor conjecture’ for complex balanced mass-action systems and Boltzmann’s H-theorem. We also describe some implications for biochemical mechanisms that implement noise filtering and cellular homeostasis.

The principle of maximum caliber of nonequilibria
Ken Dill, Stony Brook University

Maximum Caliber is a principle for inferring pathways and rate distributions of kinetic processes. The structure and foundations of MaxCal are much like those of Maximum Entropy for static distributions. We have explored how MaxCal may serve as a general variational principle for nonequilibrium statistical physics—giving well-known results, such as the Green-Kubo relations, Onsager’s reciprocal relations and Prigogine’s Minimum Entropy Production principle near equilibrium, but is also applicable far from equilibrium. I will also discuss some applications, such as finding reaction coordinates in molecular simulations, non-linear dynamics in gene circuits, power-law-tail distributions in ‘social-physics’ networks, and others.

Nonequilibrium biomolecular information processes
Pierre Gaspard, Université libre de Bruxelles

Nearly 70 years have passed since the discovery of DNA structure and its role in coding genetic information. Yet, the kinetics and thermodynamics of genetic information processing in DNA replication, transcription, and translation remain poorly understood. These template-directed copolymerization processes are running away from equilibrium, being powered by extracellular energy sources. Recent advances show that their kinetic equations can be exactly solved in terms of so-called iterated function systems. Remarkably, iterated function systems can determine the effects of genome sequence on replication errors, up to a million times faster than kinetic Monte Carlo algorithms. With these new methods, fundamental links can be established between molecular information processing and the second law of thermodynamics, shedding a new light on genetic drift, mutations, and evolution.

Nonequilibrium dynamics of disturbed ecosystems
John Harte, University of California, Berkeley

The Maximum Entropy Theory of Ecology (METE) predicts the shapes of macroecological metrics in relatively static ecosystems, across spatial scales, taxonomic categories, and habitats, using constraints imposed by static state variables. In disturbed ecosystems, however, with time-varying state variables, its predictions often fail. We extend macroecological theory from static to dynamic, by combining the MaxEnt inference procedure with explicit mechanisms governing disturbance. In the static limit, the resulting theory, DynaMETE, reduces to METE but also predicts a new scaling relationship among static state variables. Under disturbances, expressed as shifts in demographic, ontogenic growth, or migration rates, DynaMETE predicts the time trajectories of the state variables as well as the time-varying shapes of macroecological metrics such as the species abundance distribution and the distribution of metabolic rates over
individuals. An iterative procedure for solving the dynamic theory is presented. Characteristic signatures of the deviation from static predictions of macroecological patterns are shown to result from different kinds of disturbance. By combining MaxEnt inference with explicit dynamical mechanisms of disturbance, DynaMETE is a candidate theory of macroecology for ecosystems responding to anthropogenic or natural disturbances.

Stochastic chemical reaction networks
Supriya Krishnamurthy, Stockholm University

The study of chemical reaction networks (CRNs) is a very active field. Earlier well-known results (Feinberg Chem. Eng. Sci. 42 2229 (1987), Anderson et al Bull. Math. Biol. 72 1947 (2010)) identify a topological quantity called deficiency, easy to compute for CRNs of any size, which, when exactly equal to zero, leads to a unique factorized (non-equilibrium) steady-state for these networks. No general results exist however for the steady states of non-zero-deficiency networks. In recent work, we show how to write the full moment-hierarchy for any non-zero-deficiency CRN obeying mass-action kinetics, in terms of equations for the factorial moments. Using these, we can recursively predict values for lower moments from higher moments, reversing the procedure usually used to solve moment hierarchies. We show, for non-trivial examples, that in this manner we can predict any moment of interest, for CRNs with non-zero deficiency and non-factorizable steady states. It is however an open question how scalable these techniques are for large networks.

Heat flows adjust local ion concentrations in favor of prebiotic chemistry
Christof Mast, Ludwig-Maximilians-Universität München

Prebiotic reactions often require certain initial concentrations of ions. For example, the activity of RNA enzymes requires a lot of divalent magnesium salt, whereas too much monovalent sodium salt leads to a reduction in enzyme function. However, it is known from leaching experiments that prebiotically relevant geomaterial such as basalt releases mainly a lot of sodium and only little magnesium. A natural solution to this problem is heat fluxes through thin rock fractures, through which magnesium is actively enriched and sodium is depleted by thermogravitational convection and thermophoresis. This process establishes suitable conditions for ribozyme function from a basaltic leach. It can take place in a spatially distributed system of rock cracks and is therefore particularly stable to natural fluctuations and disturbances.

Deficiency of chemical reaction networks and thermodynamics
Matteo Polettini, University of Luxembourg

Deficiency is a topological property of a Chemical Reaction Network linked to important dynamical features, in particular of deterministic fixed points and of stochastic stationary states. Here we link it to thermodynamics: in particular we discuss the validity of a strong vs. weak zeroth law, the existence of time-reversed mass-action kinetics, and the possibility to formulate marginal fluctuation relations. Finally we illustrate some subtleties of the Python module we created for MCMC stochastic simulation of CRNs, soon to be made public.

Large deviations theory and emergent landscapes in biological dynamics
Hong Qian, University of Washington

The mathematical theory of large deviations provides a nonequilibrium thermodynamic description of complex biological systems that consist of heterogeneous individuals. In terms of the notions of stochastic elementary reactions and pure kinetic species, the continuous-time, integer-valued Markov process dictates a thermodynamic structure that generalizes (i) Gibbs’ microscopic chemical thermodynamics of equilibrium matters to nonequilibrium small systems such as living cells and tissues; and (ii) Gibbs’ potential function to the landscapes for biological dynamics, such as that of C. H. Waddington and S. Wright.

Using the maximum entropy production principle to understand and predict microbial biogeochemistry
Joseph Vallino, Marine Biological Laboratory, Woods Hole

Natural microbial communities contain billions of individuals per liter and can exceed a trillion cells per liter in sediments, as well as harbor thousands of species in the same volume. The high species diversity contributes to extensive metabolic functional capabilities to extract chemical energy from the environment, such as methanogenesis, sulfate reduction, anaerobic photosynthesis, chemoautotrophy, and many others, most of which are only expressed by bacteria and archaea. Reductionist modeling of natural communities is problematic, as we lack knowledge on growth kinetics for most organisms and have even less understanding on the mechanisms governing predation, viral lysis, and predator avoidance in these systems. As a result, existing models that describe microbial communities contain dozens to hundreds of parameters, and state variables are extensively aggregated. Overall, the models are little more than non-linear parameter fitting exercises that have limited, to no, extrapolation potential, as there are few principles governing organization and function of complex self-assembling systems. Over the last decade, we have been developing a systems approach that models microbial communities as a distributed metabolic network that focuses on metabolic function rather than describing individuals or species. We use an optimization approach to determine which metabolic functions in the network should be up regulated versus those that should be down regulated based on the non-equilibrium thermodynamics principle of maximum entropy production (MEP). Derived from statistical mechanics, MEP proposes that steady state systems will likely organize to maximize free energy dissipation rate. We have extended this conjecture to apply to non-steady state systems and have proposed that living systems maximize entropy production integrated over time and space, while non-living systems maximize instantaneous entropy production. Our presentation will provide a brief overview of the theory and approach, as well as present several examples of applying MEP to describe the biogeochemistry of microbial systems in laboratory experiments and natural ecosystems.

Reduction and the quasi-steady state approximation
Carsten Wiuf, University of Copenhagen

Chemical reactions often occur at different time-scales. In applications of chemical reaction network theory it is often desirable to reduce a reaction network to a smaller reaction network by elimination of fast species or fast reactions. There exist various techniques for doing so, e.g. the Quasi-Steady-State Approximation or the Rapid Equilibrium Approximation. However, these methods are not always mathematically justifiable. Here, a method is presented for which (so-called) non-interacting species are eliminated by means of QSSA. It is argued that this method is mathematically sound. Various examples are given (Michaelis-Menten mechanism, two-substrate mechanism, …) and older related techniques from the 50s and 60s are briefly discussed.


Non-Equilibrium Thermodynamics in Biology (Part 1)

11 May, 2021

Together with William Cannon and Larry Li, I’m helping run a minisymposium as part of SMB2021, the annual meeting of the Society for Mathematical Biology:

• Non-equilibrium Thermodynamics in Biology: from Chemical Reaction Networks to Natural Selection, Monday June 14, 2021, beginning 9:30 am Pacific Time.

You can register for free here before May 31st, 11:59 pm Pacific Time. You need to register to watch the talks live on Zoom. I think the talks will be recorded.

Here’s the idea:

Abstract: Since Lotka, physical scientists have argued that living things belong to a class of complex and orderly systems that exist not despite the second law of thermodynamics, but because of it. Life and evolution, through natural selection of dissipative structures, are based on non-equilibrium thermodynamics. The challenge is to develop an understanding of what the respective physical laws can tell us about flows of energy and matter in living systems, and about growth, death and selection. This session will address current challenges including understanding emergence, regulation and control across scales, and entropy production, from metabolism in microbes to evolving ecosystems.

It’s exciting to me because I want to get back into work on thermodynamics and reaction networks, and we’ll have some excellent speakers on these topics. I think the talks will be in this order… later I will learn the exact schedule.

Christof Mast, Ludwig-Maximilians-Universität München

Coauthors: T. Matreux, K. LeVay, A. Schmid, P. Aikkila, L. Belohlavek, Z. Caliskanoglu, E. Salibi, A. Kühnlein, C. Springsklee, B. Scheu, D. B. Dingwell, D. Braun, H. Mutschler.

Title: Heat flows adjust local ion concentrations in favor of prebiotic chemistry

Abstract: Prebiotic reactions often require certain initial concentrations of ions. For example, the activity of RNA enzymes requires a lot of divalent magnesium salt, whereas too much monovalent sodium salt leads to a reduction in enzyme function. However, it is known from leaching experiments that prebiotically relevant geomaterial such as basalt releases mainly a lot of sodium and only little magnesium. A natural solution to this problem is heat fluxes through thin rock fractures, through which magnesium is actively enriched and sodium is depleted by thermogravitational convection and thermophoresis. This process establishes suitable conditions for ribozyme function from a basaltic leach. It can take place in a spatially distributed system of rock cracks and is therefore particularly stable to natural fluctuations and disturbances.

Supriya Krishnamurthy, Stockholm University

Coauthors: Eric Smith

Title: Stochastic chemical reaction networks

Abstract: The study of chemical reaction networks (CRNs) is a very active field. Earlier well-known results (Feinberg Chem. Eng. Sci. 42 2229 (1987), Anderson et al Bull. Math. Biol. 72 1947 (2010)) identify a topological quantity called deficiency, easy to compute for CRNs of any size, which, when exactly equal to zero, leads to a unique factorized (non-equilibrium) steady-state for these networks. No general results exist however for the steady states of non-zero-deficiency networks. In recent work, we show how to write the full moment-hierarchy for any non-zero-deficiency CRN obeying mass-action kinetics, in terms of equations for the factorial moments. Using these, we can recursively predict values for lower moments from higher moments, reversing the procedure usually used to solve moment hierarchies. We show, for non-trivial examples, that in this manner we can predict any moment of interest, for CRNs with non-zero deficiency and non-factorizable steady states. It is however an open question how scalable these techniques are for large networks.

Pierre Gaspard, Université libre de Bruxelles

Title: Nonequilibrium biomolecular information processes

Abstract: Nearly 70 years have passed since the discovery of DNA structure and its role in coding genetic information. Yet, the kinetics and thermodynamics of genetic information processing in DNA replication, transcription, and translation remain poorly understood. These template-directed copolymerization processes are running away from equilibrium, being powered by extracellular energy sources. Recent advances show that their kinetic equations can be exactly solved in terms of so-called iterated function systems. Remarkably, iterated function systems can determine the effects of genome sequence on replication errors, up to a million times faster than kinetic Monte Carlo algorithms. With these new methods, fundamental links can be established between molecular information processing and the second law of thermodynamics, shedding a new light on genetic drift, mutations, and evolution.

Carsten Wiuf, University of Copenhagen

Coauthors: Elisenda Feliu, Sebastian Walcher, Meritxell Sáez

Title: Reduction and the Quasi-Steady State Approximation

Abstract: Chemical reactions often occur at different time-scales. In applications of chemical reaction network theory it is often desirable to reduce a reaction network to a smaller reaction network by elimination of fast species or fast reactions. There exist various techniques for doing so, e.g. the Quasi-Steady-State Approximation or the Rapid Equilibrium Approximation. However, these methods are not always mathematically justifiable. Here, a method is presented for which (so-called) non-interacting species are eliminated by means of QSSA. It is argued that this method is mathematically sound. Various examples are given (Michaelis-Menten mechanism, two-substrate mechanism, …) and older related techniques from the 50s and 60s are briefly discussed.

Matteo Polettini, University of Luxembourg

Coauthor: Tobias Fishback

Title: Deficiency of chemical reaction networks and thermodynamics

Abstract: Deficiency is a topological property of a Chemical Reaction Network linked to important dynamical features, in particular properties of deterministic fixed points and of stochastic stationary states. Here we link it to thermodynamics: in particular we discuss the validity of a strong vs. weak zeroth law, the existence of time-reversed mass-action kinetics, and the possibility of formulating marginal fluctuation relations. Finally we illustrate some subtleties of the Python module we created for MCMC stochastic simulation of CRNs, soon to be made public.
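
Their Python module was not yet public at the time of the talk, so as a stand-in here is a minimal Gillespie-style stochastic simulation of a CRN, written by me purely for illustration: the birth-death network ∅ → A, A → ∅, whose stationary distribution is Poisson with mean a/b.

    import random

    # Birth-death CRN: 0 -> A at rate a, A -> 0 at rate b per molecule.
    a, b = 1.0, 0.1
    x, t, t_end = 0, 0.0, 100.0

    while t < t_end:
        rates = [a, b*x]                      # propensities of the two reactions
        total = sum(rates)
        t += random.expovariate(total)        # exponential waiting time
        if random.random()*total < rates[0]:  # pick a reaction by propensity
            x += 1
        else:
            x -= 1

    print("molecules of A at t = 100:", x)    # fluctuates around a/b = 10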

Ken Dill, Stony Brook University

Title: The principle of maximum caliber of nonequilibria

Abstract: Maximum Caliber is a principle for inferring pathways and rate distributions of kinetic processes. The structure and foundations of MaxCal are much like those of Maximum Entropy for static distributions. We have explored how MaxCal may serve as a general variational principle for nonequilibrium statistical physics – giving well-known results, such as the Green–Kubo relations, Onsager’s reciprocal relations and Prigogine’s Minimum Entropy Production principle near equilibrium, while also being applicable far from equilibrium. I will also discuss some applications, such as finding reaction coordinates in molecular simulations, non-linear dynamics in gene circuits, power-law-tail distributions in “social-physics” networks, and others.
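
Since MaxCal is structured like Maximum Entropy, a tiny static example conveys the flavor. Here is a sketch of my own (not from the talk): find the distribution on states 0, …, 5 with maximum entropy subject to a prescribed mean, which takes the exponential form p_i ∝ exp(−λi) with λ fixed by the constraint.

    import numpy as np
    from scipy.optimize import brentq

    states = np.arange(6)      # states 0, ..., 5
    target_mean = 1.5          # the single constraint

    def mean_of(lam):          # mean of p_i proportional to exp(-lam*i)
        p = np.exp(-lam*states)
        p /= p.sum()
        return p @ states

    # Solve for the Lagrange multiplier that matches the constraint.
    lam = brentq(lambda l: mean_of(l) - target_mean, -10.0, 10.0)
    p = np.exp(-lam*states)
    p /= p.sum()
    print(p)                   # the MaxEnt distribution with mean 1.5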

Joseph Vallino, Marine Biological Laboratory, Woods Hole

Coauthors: Ioannis Tsakalakis, Julie A. Huber

Title: Using the maximum entropy production principle to understand and predict microbial biogeochemistry

Abstract: Natural microbial communities contain billions of individuals per liter and can exceed a trillion cells per liter in sediments, as well as harbor thousands of species in the same volume. The high species diversity contributes to extensive metabolic functional capabilities to extract chemical energy from the environment, such as methanogenesis, sulfate reduction, anaerobic photosynthesis, chemoautotrophy, and many others, most of which are only expressed by bacteria and archaea.

Reductionist modeling of natural communities is problematic, as we lack knowledge on growth kinetics for most organisms and have even less understanding on the mechanisms governing predation, viral lysis, and predator avoidance in these systems. As a result, existing models that describe microbial communities contain dozens to hundreds of parameters, and state variables are extensively aggregated. Overall, the models are little more than non-linear parameter fitting exercises that have limited, to no, extrapolation potential, as there are few principles governing organization and function of complex self-assembling systems.

Over the last decade, we have been developing a systems approach that models microbial communities as a distributed metabolic network that focuses on metabolic function rather than describing individuals or species. We use an optimization approach to determine which metabolic functions in the network should be up regulated versus those that should be down regulated based on the non-equilibrium thermodynamics principle of maximum entropy production (MEP). Derived from statistical mechanics, MEP proposes that steady state systems will likely organize to maximize free energy dissipation rate. We have extended this conjecture to apply to non-steady state systems and have proposed that living systems maximize entropy production integrated over time and space, while non-living systems maximize instantaneous entropy production.

Our presentation will provide a brief overview of the theory and approach, as well as present several examples of applying MEP to describe the biogeochemistry of microbial systems in laboratory experiments and natural ecosystems.

Gheorghe Craciun, University of Wisconsin-Madison

Title: Persistence, permanence, and global stability in reaction network models: some results inspired by thermodynamic principles

Abstract: The standard mathematical model for the dynamics of concentrations in biochemical networks is called mass-action kinetics. We describe mass-action kinetics and discuss the connection between special classes of mass-action systems (such as detailed balanced and complex balanced systems) and the Boltzmann equation. We also discuss the connection between the “global attractor conjecture” for complex balanced mass-action systems and Boltzmann’s H-theorem. We also describe some implications for biochemical mechanisms that implement noise filtering and cellular homeostasis.
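
For a taste of mass-action kinetics in the simplest complex balanced case, here is a toy integration of my own (an illustration, not the speaker's example): the reversible network A ↔ B, whose trajectories converge to the unique positive equilibrium, as the theory predicts.

    import numpy as np
    from scipy.integrate import solve_ivp

    k_ab, k_ba = 2.0, 1.0            # rate constants for A -> B and B -> A

    def rhs(t, x):                   # mass-action kinetics for A <-> B
        a, b = x
        flux = k_ab*a - k_ba*b
        return [-flux, flux]

    sol = solve_ivp(rhs, (0.0, 10.0), [1.0, 0.0])
    print(sol.y[:, -1])              # tends to (1/3, 2/3), where k_ab*a = k_ba*b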

Hong Qian, University of Washington

Title: Large deviations theory and emergent landscapes in biological dynamics

Abstract: The mathematical theory of large deviations provides a nonequilibrium thermodynamic description of complex biological systems that consist of heterogeneous individuals. In terms of the notions of stochastic elementary reactions and pure kinetic species, the continuous-time, integer-valued Markov process dictates a thermodynamic structure that generalizes (i) Gibbs’ macroscopic chemical thermodynamics of equilibrium matter to nonequilibrium small systems such as living cells and tissues; and (ii) Gibbs’ potential function to landscapes for biological dynamics, such as those of C. H. Waddington and S. Wright.
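
A toy example of such an emergent landscape (my own illustration, not from the talk): for a birth-death process with constant birth rate a and per-capita death rate b, the stationary distribution is Poisson with mean a/b, and −log π_n plays the role of a landscape whose minimum marks the stable state.

    import numpy as np

    a, b, N = 10.0, 1.0, 40          # birth rate, per-capita death rate, cutoff
    pi = np.zeros(N)
    pi[0] = 1.0
    for n in range(1, N):
        pi[n] = pi[n-1]*a/(b*n)      # detailed balance: a*pi[n-1] = b*n*pi[n]
    pi /= pi.sum()

    landscape = -np.log(pi)          # the 'landscape' from large deviations
    print(landscape.argmin())        # minimum near n = a/b = 10, the stable state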

John Harte, University of California, Berkeley

Coauthors: Micah Brush, Kaito Umemura

Title: Nonequilibrium dynamics of disturbed ecosystems

Abstract: The Maximum Entropy Theory of Ecology (METE) predicts the shapes of macroecological metrics in relatively static ecosystems, across spatial scales, taxonomic categories, and habitats, using constraints imposed by static state variables. In disturbed ecosystems, however, with time-varying state variables, its predictions often fail. We extend macroecological theory from static to dynamic, by combining the MaxEnt inference procedure with explicit mechanisms governing disturbance. In the static limit, the resulting theory, DynaMETE, reduces to METE but also predicts a new scaling relationship among static state variables. Under disturbances, expressed as shifts in demographic, ontogenic growth, or migration rates, DynaMETE predicts the time trajectories of the state variables as well as the time-varying shapes of macroecological metrics such as the species abundance distribution and the distribution of metabolic rates over individuals. An iterative procedure for solving the dynamic theory is presented. Characteristic signatures of the deviation from static predictions of macroecological patterns are shown to result from different kinds of disturbance. By combining MaxEnt inference with explicit dynamical mechanisms of disturbance, DynaMETE is a candidate theory of macroecology for ecosystems responding to anthropogenic or natural disturbances.


Talc

21 February, 2021

 

Talc is one of the softest minerals—its hardness defines a ‘1’ on the Mohs scale of hardness. I just learned its structure at the molecular level, and I can’t resist showing it to you.

Talc is layers of octahedra sandwiched in tetrahedra!

Since these sandwiches form separate parallel sheets, I bet they can easily slide past each other. That must be why talc is so soft.

The octahedra are magnesium oxide and the tetrahedra are silicon oxide… with some hydroxyl groups attached to liven things up. The overall formula is Mg3Si4O10(OH)2. It’s called ‘hydrated magnesium silicate’.

The image here was created by Materialscientist and placed on Wikicommons under a Creative Commons Attribution-Share Alike 3.0 Unported license.


Stretched Water

3 October, 2020

The physics of water is endlessly fascinating. The phase diagram of water at positive temperature and pressure is already remarkably complex, as shown in this diagram by Martin Chaplin:

Click for a larger version. And read this post of mine for more:

Ice.

But it turns out there’s more: water is also interesting at negative pressure.

I don’t know why I never wondered about this! But people study stretched water: essentially, they put water under tension with a piston and measure its properties.

You probably know one weird thing about water: ice floats. Unlike most liquids, water at standard pressure reaches its maximum density above the freezing point, at about 4 °C. And for any fixed pressure, you can try to find the temperature at which water reaches its maximum density. You get a curve of density maxima in the pressure-temperature plane. And with stretched water experiments, you can even study this curve for negative pressures:

This graph is from here:

• Gaël Pallares, Miguel A. Gonzalez, Jose Luis F. Abascal, Chantal Valeriani, and Frédéric Caupin, Equation of state for water and its line of density maxima down to -120 MPa, Physical Chemistry Chemical Physics 18 (2016), 5896–5900.

-120 MPa is about -1200 times atmospheric pressure, since atmospheric pressure is roughly 0.1 MPa.
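
To make the idea of a density maximum concrete, here is a small Python sketch of my own, using rough textbook values for the density of water at 1 atm: fit a polynomial and find where its derivative vanishes.

    import numpy as np

    T   = np.array([0.0, 4.0, 10.0, 15.0, 20.0])              # temperature, °C
    rho = np.array([999.84, 999.97, 999.70, 999.10, 998.21])  # density, kg/m^3

    coeffs = np.polyfit(T, rho, 3)          # cubic fit to the tabulated values
    crit = np.roots(np.polyder(coeffs))     # critical points of the fit
    Tmax = [r.real for r in crit if abs(r.imag) < 1e-9 and 0 < r.real < 20]
    print(Tmax)                             # roughly 4 °C, the density maximum

At each fixed pressure, the analogous maximum traces out one point on the curve of density maxima.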

This is just the tip of the iceberg. I’m reading some papers and discovering lots of amazing things that I barely understand:

• Stacey L. Meadley and C. Austen Angell, Water and its relatives: the stable, supercooled and particularly the stretched, regimes.

• Jeremy C. Palmer, Peter H. Poole, Francesco Sciortino and Pablo G. Debenedetti, Advances in computational studies of the liquid–liquid transition in water and water-like models, Chemical Reviews 118 (2018), 9129–9151.

I would like to learn about some of these things and explain them. But for now, let me just quote the second paper to illustrate how strange water actually is:

Water is ubiquitous and yet also unusual. It is central to life, climate, agriculture, and industry, and an understanding of its properties is key in essentially all of the disciplines of the natural sciences and engineering. At the same time, and despite its apparent molecular simplicity, water is a highly unusual substance, possessing bulk properties that differ greatly, and often qualitatively, from those of other compounds. As a consequence, water has long been the subject of intense scientific scrutiny.

In this review, we describe the development and current status of the proposal that a liquid−liquid transition (LLT) occurs in deeply supercooled water. The focus of this review is on computational work, but we also summarize the relevant experimental and theoretical background. Since first proposed in 1992, this hypothesis has generated considerable interest and debate. In particular, in the past few years several works have challenged the evidence obtained from computer simulations of the ST2 model of water that support in principle the existence of an LLT, proposing instead that what was previously interpreted as an LLT is in fact ice crystallization. This challenge to the LLT hypothesis has stimulated a significant amount of new work aimed at resolving the controversy and to better understand the nature of an LLT in water-like computer models.

Unambiguously resolving this debate, it has been shown recently that the code used in the studies that most sharply challenge the LLT hypothesis contains a serious conceptual error that prevented the authors from properly characterizing the phase behavior of the ST2 water model. Nonetheless, the burst of renewed activity focusing on simulations of an LLT in water has yielded considerable new insights. Here, we review this recent work, which clearly demonstrates that an LLT is a well-defined and readily observed phenomenon in computer simulations of water-like models and is unambiguously distinguished from the crystal−liquid phase transition.

Yes, you heard that right: a phase transition between two phases of liquid water below the freezing point!

Both these phases are metastable: pretty quickly the water will freeze. But apparently it still makes some sense to talk about phases, and a phase transition between them!

What does this have to do with stretched water? I’m not sure, but apparently understanding this stuff is connected to understanding water at negative pressures. It also involves the ‘liquid-vapor spinodal’.

Huh?

The liquid-vapor spinodal is another curve in the pressure-temperature plane. As far as I can tell, it works like this: when the pressure drops below this curve, the liquid—which was already metastable, since it would evaporate given enough time—becomes outright unstable and suddenly forms bubbles of vapor.

At negative pressures the liquid-vapor spinodal has a pretty intuitive meaning: if you stretch water too much, it breaks!

There’s a conjecture due to a guy named Robin J. Speedy, which implies the liquid-vapor spinodal intersects the curve of density maxima! And it does so at negative pressures. I don’t really understand the significance of this, but it sounds cool. Super-cool.

Here’s what Palmer, Poole, Sciortino and Debenedetti have to say about this:

The development of a thermodynamically self-consistent picture of the behavior of the deeply supercooled liquid that correctly predicts these experimental observations remains at the center of research on water. While a number of competing scenarios have been advanced over the years, the fact that consensus continues to be elusive demonstrates the complexity of the theoretical problem and the difficulty of the experiments required to distinguish between scenarios.

One of the first of these scenarios, Speedy’s “stability limit conjecture” (SLC), exemplifies the challenge. As formulated by Speedy, and comprehensively analyzed by Debenedetti and D’Antonio, the SLC proposes that water’s line of density maxima in the P−T plane intersects the liquid−vapor spinodal at negative pressure. At such an intersection, thermodynamics requires that the spinodal pass through a minimum and reappear in the positive pressure region under deeply supercooled conditions. Interestingly, this scenario has recently been observed in a numerical study of model colloidal particles. The apparent power law behavior of water’s response functions is predicted by the SLC in terms of the approach to the line of thermodynamic singularities found at the spinodal.

Although the SLC has recently been shown to be thermodynamically incompatible with other features of the supercooled water phase diagram, it played a key role in the development of new scenarios. The SLC also pointed out the importance of considering the behavior of “stretched” water at negative pressure, a regime in which the liquid is metastable with respect to the nucleation of bubbles of the vapor phase. The properties of stretched water have been probed directly in several innovative experiments which continue to generate results that may help discriminate among the competing scenarios that have been formulated to explain the thermodynamic behavior of supercooled water.