Surveillance Publishing

5 December, 2021

Björn Brembs recently explained how

“massive over-payment of academic publishers has enabled them to buy surveillance technology covering the entire workflow that can be used not only to be combined with our private data and sold, but also to make algorithmic (aka ‘evidence-led’) employment decisions.”

Reading about this led me to this article:

• Jefferson D. Pooley, Surveillance publishing.

It’s all about what publishers are doing to make money by collecting data on the habits of their readers. Let me quote a bunch!

After a general introduction to surveillance capitalism, Pooley turns to “surveillance publishing”. Their prime example: Elsevier. I’ll delete the scholarly footnotes here:

Consider Elsevier. The Dutch publishing house was founded in the late nineteenth century, but it wasn’t until the 1970s that the firm began to launch and acquire journal titles at a frenzied pace. Elsevier’s model was Pergamon, the postwar science-publishing venture established by the brash Czech-born Robert Maxwell. By 1965, around the time that Garfield’s Science Citation Index first appeared, Pergamon was publishing 150 journals. Elsevier followed Maxwell’s lead, growing at a rate of 35 titles a year by the late 1970s. Both firms hiked their subscription prices aggressively, making huge profits off the prestige signaling of Garfield’s Journal Impact Factor. Maxwell sold Pergamon to Elsevier in 1991, months before his lurid death.

Elsevier was just getting started. The firm acquired The Lancet the same year, when the company piloted what would become ScienceDirect, its Web-based journal delivery platform. In 1993 the Dutch publisher merged with Reed International, a UK paper-maker turned media conglomerate. In 2015, the firm changed its name to RELX Group, after two decades of acquisitions, divestitures, and product launches—including Scopus in 2004, Elsevier’s answer to ISI’s Web of Science. The “shorter, more modern name,” RELX explained, is a nod to the company’s “transformation” from publisher to a “technology, content and analytics driven business.” RELX’s strategy? The “organic development of increasingly sophisticated information-based analytics and decisions tools”. Elsevier, in other words, was to become a surveillance publisher.

Since then, by acquisition and product launch, Elsevier has moved to make good on its self-description. By moving up and down the research lifecycle, the company has positioned itself to harvest behavioral surplus at every stage. Tracking lab results? Elsevier has Hivebench, acquired in 2016. Citation and data-sharing software? Mendeley, purchased in 2013. Posting your working paper or preprint? SSRN and Bepress, 2016 and 2017, respectively. Elsevier’s “solutions” for the post-publication phase of the scholarly workflow are anchored by Scopus and its 81 million records.

Curious about impact? Plum Analytics, an altmetrics company, acquired in 2017. Want to track your university’s researchers and their work? There’s the Pure “research information management system,” acquired in 2012. Measure researcher performance? SciVal, spun off from Scopus in 2009, which incorporates media monitoring service Newsflo, acquired in 2015.

Elsevier, to repurpose a computer science phrase, is now a full-stack publisher. Its products span the research lifecycle, from the lab bench through to impact scoring, and even—by way of Pure’s grant-searching tools—back to the bench, to begin anew. Some of its products are, you might say, services with benefits: Mendeley, for example, or even the ScienceDirect journal-delivery platform, provide reference management or journal access for customers and give off behavioral data to Elsevier. Products like SciVal and Pure, up the data chain, sell the processed data back to researchers and their employers, in the form of “research intelligence.”

It’s a good business for Elsevier. Facebook, Google, and Bytedance have to give away their consumer-facing services to attract data-producing users. If you’re not paying for it, the Silicon Valley adage has it, then you’re the product. For Elsevier and its peers, we’re the product and we’re paying (a lot) for it. Indeed, it’s likely that windfall subscription-and-APC profits in Elsevier’s “legacy” publishing business have financed its decade-long acquisition binge in analytics.

As Björn Brembs recently Tweeted:

“massive over-payment of academic publishers has enabled them to buy surveillance technology covering the entire workflow that can be used not only to be combined with our private data and sold, but also to make algorithmic (aka ‘evidence-led’) employment decisions.”

This is insult piled on injury: Fleece us once only to fleece us all over again, first in the library and then in the assessment office. Elsevier’s prediction products sort and process mined data in a variety of ways. The company touts what it calls its Fingerprint® Engine, which applies machine learning techniques to an ocean’s worth of scholarly texts—article abstracts, yes, but also patents, funding announcements, and proposals. Presumably trained on human-coded examples (scholar-designated article keywords?), the model assigns keywords (e.g., “Drug Resistance”) to documents, together with what amounts to a weighted score (e.g., 73%). The list of terms and scores is, the company says, a “Fingerprint.” The Engine is used in a variety of products, including Expert Lookup (to find reviewers), the company’s Journal Finder, and its Pure university-level research-management software. In the latter case, it’s scholars who get Fingerprinted:

“Pure applies semantic technology and 10 different research-specific keyword vocabularies to analyze a researcher’s publications and grant awards and transform them into a unique Fingerprint—a distinct visual index of concepts and a weighted list of structured terms.”

But it’s not just Elsevier:

The machine learning techniques that Elsevier is using are of a piece with RELX’s other predictive-analytics businesses aimed at corporate and legal customers, including LexisNexis Risk Solutions. Though RELX doesn’t provide specific revenue figures for its academic prediction products, the company’s 2020 SEC disclosures indicate that over a third of Elsevier’s revenue comes from databases and electronic reference products–a business, the company states, in which “we continued to drive good growth through content development and enhanced machine learning and natural language processing based functionality”.

Many of Elsevier’s rivals appear to be rushing into the analytics market, too, with a similar full research-stack data harvesting strategy. Taylor & Francis, for example, is a unit of Informa, a UK-based conglomerate whose roots can be traced to Lloyd’s List, the eighteenth-century maritime-intelligence journal. In its 2020 annual report, the company wrote that it intends to “more deeply use and analyze the first party data” sitting in Taylor & Francis and other divisions, to “develop new services based on hard data and behavioral data insights.”

Last year Informa acquired the Faculty of 1000, together with its OA F1000Research publishing platform. Not to be outdone, Wiley bought Hindawi, a large independent OA publisher, along with its Phenom platform. The Hindawi purchase followed Wiley’s 2016 acquisition of Atypon, a researcher-facing software firm whose online platform, Literatum, Wiley recently adopted across its journal portfolio. “Know thy reader,” Atypon writes of Literatum. “Construct reports on the fly and get visualization of content usage and users’ site behavior in real time.” Springer Nature, to cite a third example, sits under the same Holtzbrinck corporate umbrella as Digital Science, which incubates startups and launches products across the research lifecycle, including the Web of Science/Scopus competitor Dimensions, data repository Figshare, impact tracker Altmetric, and many others.

So, the definition of ‘diamond open access’ should include: no surveillance.


Benzene

30 November, 2021

The structure of benzene is fascinating. Look at all these different attempts to depict it! Let me tell you a tiny bit of the history.

In 1865, August Kekulé argued that benzene is a ring of carbon atoms with alternating single and double bonds. Later, at a conference celebrating the 25th anniversary of this discovery, he said he realized this after having a daydream of a snake grabbing its own tail.

Kekulé’s model was nice, because before this it was hard to see how 6 carbons and 6 hydrogens could form a reasonable molecule with each carbon having 4 bonds and each hydrogen having one. But this model led to big problems, which were only solved with quantum mechanics.

For example, if benzene looked like Kekulé’s model, there would be 4 ways to replace two hydrogens with chlorine! You could have two chlorines next to each other with a single bond between them as shown here… or with a double bond between them. But there aren’t 4, just 3.

In 1872 Kekulé tried to solve this problem by saying benzene rapidly oscillates between two forms. Below is his original picture of those two forms. The single bonds and double bonds trade places.

But there was still a problem: benzene has less energy than if it had alternating single and double bonds.

The argument continued until 1933, when Linus Pauling and George Wheland used quantum mechanics to tackle benzene. Here’s the first sentence in their paper:

As you can see, there were models much stranger than Kekulé’s.

What was Pauling and Wheland’s idea? Use the quantum superposition principle! A superposition of a live and dead cat is theoretically possible in quantum mechanics… but a superposition of two structures of a molecule can have lower energy than either structure alone, and then this is what we actually see! Here’s what Pauling said later, in 1946:

But in reality, benzene is much subtler than just a quantum superposition of Kekulé’s two structures. For example: 6 of its electrons become ‘delocalized’, their wavefunction forming two rings above and below the plane containing the carbon nuclei!

Benzene is far from the only molecule with this property: for example, all the ‘anthocyanins’ I talked about last time also have rings with delocalized electrons:

In general, such molecules are called aromatic, because some of the first to be discovered have strong odors. Aromaticity is an important concept in chemistry, and people still fight over its precise definition:

• Wikipedia, Aromaticity.

One thing is for sure: the essence of aromaticity is not the aroma. It’s more about having rings of carbons in a plane, with delocalized electrons in so-called pi bonds, which protrude at right angles to this plane.

Another typical feature of aromatic compounds is that they sustain ‘aromatic ring currents’. Let me illustrate this with the example of benzene:

When you turn on a magnetic field (shown in red here), a benzene molecule will automatically line up at right angles to the field, and these electrons start moving around! This current loop creates its own magnetic field (shown in purple).

What does this current loop look like, exactly? To understand this, you have to remember that benzene’s 6 delocalized electrons lie above and below the plane of the benzene molecule.

So, if you compute the electric current above or below the plane of the benzene molecule, it goes around and around like this:

But if you compute the electric current in the plane of the benzene molecule—where the nuclei of the carbon atoms are—you get a much more complicated pattern. Some current even flows backward, against the overall flow!

Outside the benzene molecule it points the same way as the externally imposed magnetic field, reinforcing it. So the magnetic field is strengthened where it passes near the edge of the benzene molecule. This is called ‘antishielding’—or ‘deshielding’ in this picture from Organic Spectroscopy International:

I want to understand aromaticity and aromatic ring currents better. If I have the energy, I’ll say more in future articles. For example, I want to tell you about ‘Hückel theory’: a simplified mathematical model of aromatic compounds that’s a lot of fun if you like graph theory and matrices.
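If you want a quick taste of the graph theory and matrices, here is a tiny Hückel-style calculation for benzene in Python. This is only my own sketch of the standard textbook setup, not anything from the references here: the pi-system Hamiltonian is taken to be α I + β A, where A is the adjacency matrix of the ring of carbons, and the numbers α = 0 and β = −1 are arbitrary units.

```python
import numpy as np

# Toy Hückel calculation for benzene: the pi-system Hamiltonian is
# alpha*I + beta*A, where A is the adjacency matrix of the 6-cycle of
# carbons. alpha and beta are in arbitrary units.
N = 6
alpha, beta = 0.0, -1.0

A = np.zeros((N, N))
for i in range(N):                 # each carbon bonded to its two ring neighbors
    A[i, (i + 1) % N] = 1
    A[(i + 1) % N, i] = 1

H = alpha * np.eye(N) + beta * A
print(np.sort(np.linalg.eigvalsh(H)))       # [-2, -1, -1, 1, 1, 2] in units of |beta|

# Analytic answer for an N-membered ring: alpha + 2*beta*cos(2*pi*j/N)
print(np.sort(alpha + 2 * beta * np.cos(2 * np.pi * np.arange(N) / N)))
```

The six pi electrons of benzene fill the three lowest of these levels: one nondegenerate level and one degenerate pair. That kind of level counting, with one nondegenerate bottom level and the rest coming in degenerate pairs, is the sort of thing behind Hückel’s 4n + 2 rule.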

References

Please click on the pictures to see where I got them. You can learn more that way! Some came from Wikicommons, via these Wikipedia articles:

• Wikipedia, Benzene.

• Wikipedia, Aromatic ring current.

including the pictures of current vector fields in benzene, created by ‘Hoferaanderl’.


Anthocyanins

28 November, 2021

 

As the chlorophyll wanes, now is the heyday of the xanthophylls, carotenoids and anthocyanins. These contain carbon rings and chains whose electrons become delocalized… their wavefunctions resonating at different frequencies, emitting photons of yellow, orange and red!

Yes, it’s fall. I’m enjoying it.

I wrote about two xanthophylls in my May 27, 2014 diary entry: I explained how they get their color from the resonance of delocalized electrons that spread all over a carbon chain with alternating single and double bonds:

I discussed chlorophyll, which also has such a chain, in my May 29th entry. I wrote about some carotenoids in my July 2, 2006 entry: these too have long chains of carbons with alternating single and double bonds.

I haven’t discussed anthocyanins yet! These have rings rather than chains of carbon, but the basic mechanism is similar: it’s the delocalization of electrons that makes them able to resonate at frequencies in the visual range. They are often blue or purple, but they contribute to the color of many red leaves:



Click on these two graphics for more details! I got them from a website called Science Notes, and it says:

Some leaves make flavonoids. Anthocyanins are flavonoids which vary in color depending on pH. Anthocyanins are not usually present in leaves during the growing season. Instead, plants produce them as temperatures drop. They act as a natural sunscreen and protect against cold damage. Anthocyanins also deter some insects that like to overwinter on plants and discourage new seedlings from sprouting too close to the parent plant. Plants need energy from light to make anthocyanins. So, vivid red and purple fall colors only appear if there are several sunny autumn days in a row.

This raises a lot of questions, like: how do anthocyanins protect leaves from cold, and why do some leaves make them only shortly before they die? Or are they there all along, hidden behind the chlorophyll? Maybe this paper would help:

• D. Lee and K. Gould, Anthocyanins in leaves and other vegetative organs: an introduction, Advances in Botanical Research 37 (2002), 1–16.

Thinking about anthocyanins has led me to ponder the mystery of aromaticity. Roughly, a compound is aromatic if it contains one or more rings with pi electrons delocalized over the whole ring. But people fight over the exact definition.

I may write more about this if I ever solve some puzzles that are bothering me, like the mathematical origin of Hückel’s rule, which says a planar ring of carbon atoms is aromatic if it has 4n + 2 pi electrons. I want to know where the formula 4n + 2 comes from, and I’m getting close.

An early paper by Linus Pauling discusses the resonance of electrons in anthocyanins and other compounds with rings of carbon. This one is freely available, and it’s pretty easy to read:

• Linus Pauling, Recent work on the configuration and electronic structure of molecules; with some applications to natural products, in Fortschritte der Chemie Organischer Naturstoffe, 1939, Springer, Vienna, pp. 203–235.


Compositional Thermostatics

22 November, 2021

At the Topos Institute this summer, a group of folks started talking about thermodynamics and category theory. It probably started because Spencer Breiner and my former student Joe Moeller, both working at NIST, were talking about thermodynamics with some people there. But I’ve been interested in thermodynamics for quite a while now, and Owen Lynch, a grad student visiting from the University of Utrecht, wanted to do his master’s thesis on the subject. He’s now working with me. Sophie Libkind, David Spivak and David Jaz Myers also joined in: they’re especially interested in open systems and how they interact.

Prompted by these conversations, a subset of us eventually wrote a paper on the foundations of equilibrium thermodynamics:

• John Baez, Owen Lynch and Joe Moeller, Compositional thermostatics.

The idea here is to describe classical thermodynamics, classical statistical mechanics and quantum statistical mechanics in a unified framework based on entropy maximization. This framework can also handle ‘generalized probabilistic theories’ of the sort studied in quantum foundations—that is, theories like quantum mechanics, but more general.

To unify all these theories, we define a ‘thermostatic system’ to be any convex space X of ‘states’ together with a concave function

S \colon X \to [-\infty, \infty]

assigning to each state an ‘entropy’.

Whenever several such systems are combined and allowed to come to equilibrium, the new equilibrium state maximizes the total entropy subject to constraints. We explain how to express this idea using an operad. Intuitively speaking, the operad we construct has as operations all possible ways of combining thermostatic systems. For example, there is an operation that combines two gases in such a way that they can exchange energy and volume, but not particles—and another operation that lets them exchange only particles, and so on.
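To make the entropy-maximization idea concrete, here is a tiny numerical sketch, in Python, of composing two systems that can exchange only energy. The entropy functions below are invented purely for illustration and are not taken from the paper; the point is just that the composite entropy at total energy E is obtained by maximizing S_1(E_1) + S_2(E_2) over splittings E_1 + E_2 = E.

```python
import numpy as np

# Toy illustration of combining two thermostatic systems that exchange energy:
# the composite entropy at total energy E is the maximum of S1(E1) + S2(E2)
# over splittings E1 + E2 = E. The entropy functions are made up.

def S1(E1):
    return 1.5 * np.log(E1 + 1e-12)      # defined for E1 >= 0; concave

def S2(E2):
    return 3.0 * np.log(E2 + 1e-12)      # defined for E2 >= 0; concave

def composed_entropy(E, n=100001):
    E1 = np.linspace(0.0, E, n)          # all ways of splitting the energy
    return np.max(S1(E1) + S2(E - E1))   # entropy maximization

print(composed_entropy(10.0))
# The maximum occurs where dS1/dE1 = dS2/dE2, i.e. at equal 'temperatures':
# for these entropies, one third of the energy ends up in system 1.
```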

It is crucial to use a sufficiently general concept of ‘convex space’, which need not be a convex subset of a vector space. Luckily there has been a lot of work on this, so we can just grab a good definition off the shelf:

Definition. A convex space is a set X with an operation c_\lambda \colon X \times X \to X for each \lambda \in [0, 1] such that the following identities hold:

1) c_1(x, y) = x

2) c_\lambda(x, x) = x

3) c_\lambda(x, y) = c_{1-\lambda}(y, x)

4) c_\lambda(c_\mu(x, y) , z) = c_{\lambda'}(x, c_{\mu'}(y, z)) for all 0 \le \lambda, \mu, \lambda', \mu' \le 1 satisfying \lambda\mu = \lambda' and 1-\lambda = (1-\lambda')(1-\mu').

To understand these axioms, especially the last, you need to check that any vector space is a convex space with

c_\lambda(x, y) = \lambda x + (1-\lambda)y

So, these operations c_\lambda describe ‘convex linear combinations’.
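Explicitly, with this choice of c_\lambda, both sides of axiom 4 expand into convex combinations of x, y and z (this is just the routine check, spelled out):

\displaystyle{ c_\lambda(c_\mu(x, y), z) = \lambda\mu \, x + \lambda(1-\mu) \, y + (1-\lambda) \, z }

\displaystyle{ c_{\lambda'}(x, c_{\mu'}(y, z)) = \lambda' \, x + (1-\lambda')\mu' \, y + (1-\lambda')(1-\mu') \, z }

The conditions \lambda\mu = \lambda' and 1-\lambda = (1-\lambda')(1-\mu') say that the coefficients of x and of z agree, and since the three coefficients on each side sum to 1, the coefficients of y must then agree as well.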

Indeed, any subset of a vector space closed under convex linear combinations is a convex space! But there are other examples too.

In 1949, the famous mathematician Marshall Stone invented ‘barycentric algebras’. These are convex spaces satisfying one extra axiom: the cancellation axiom, which says that whenever \lambda \ne 0,

c_\lambda(x,y) = c_\lambda(x',y) \implies x = x'

He proved that any barycentric algebra is isomorphic to a convex subset of a vector space. Later Walter Neumann noted that a convex space, defined as above, is isomorphic to a convex subset of a vector space if and only if the cancellation axiom holds.

Dropping the cancellation axiom has convenient formal consequences, since the resulting more general convex spaces can then be defined as algebras of a finitary commutative monad, giving the category of convex spaces very good properties.

But dropping this axiom is no mere formal nicety. In our definition of ‘thermostatic system’, we need the set of possible values of entropy to be a convex space. One obvious candidate is the set [0,\infty). However, for a well-behaved formalism based on entropy maximization, we want the supremum of any set of entropies to be well-defined. This forces us to consider the larger set [0,\infty], which does not obey the cancellation axiom.

But even that is not good enough! In thermodynamics you often read about the ‘heat bath’, an idealized system that can absorb or emit an arbitrarily large amount of energy while keeping a fixed temperature. We want to treat the ‘heat bath’ as a thermostatic system on an equal footing with any other. To do this, we need to allow negative entropies—not because the heat bath can have negative entropy, but because it acts as an infinite reservoir of entropy, and the change in entropy from its default state can be positive or negative.

This suggests letting entropies take values in the convex space \mathbb{R}. But then the requirement that any set of entropies have a supremum (including empty and unbounded sets) forces us to use the larger convex space [-\infty,\infty].

This does not obey the cancellation axiom, so there is no way to think of it as a convex subset of a vector space. In fact, it’s not even immediately obvious how to make it into a convex space at all! After all, what do you get when you take a nontrivial convex linear combination of \infty and -\infty? You’ll have to read our paper for the answer, and the justification.

We then define a thermostatic system to be a convex space X together with a concave function

S \colon X \to [-\infty, \infty]

where concave means that

S(c_\lambda(x,y)) \ge c_\lambda(S(x), S(y))

We give lots of examples from classical thermodynamics, classical and quantum statistical mechanics, and beyond—including our friend the ‘heat bath’.

For example, suppose X is the set of probability distributions on an n-element set, and suppose S \colon X \to [-\infty, \infty] is the Shannon entropy

\displaystyle{ S(p) = - \sum_{i = 1}^n p_i \log p_i }

Then given two probability distributions p and q, we have

S(\lambda p + (1-\lambda) q) \ge \lambda S(p) + (1-\lambda) S(q)

for all \lambda \in [0,1]. So this entropy function is concave, and S \colon X \to [-\infty, \infty] defines a thermostatic system. But in this example the entropy only takes nonnegative values, and is never infinite, so you need to look at other examples to see why we want to let entropy take values in [-\infty,\infty].
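If you want to see this inequality in action, here is a quick numerical check in Python. It is only an illustration with randomly chosen distributions, not code from the paper:

```python
import numpy as np

# Numerical check of the concavity inequality for Shannon entropy,
# using two randomly chosen probability distributions p and q.

def shannon_entropy(p):
    p = np.asarray(p, dtype=float)
    nz = p > 0                        # convention: 0 log 0 = 0
    return -np.sum(p[nz] * np.log(p[nz]))

rng = np.random.default_rng(0)
n = 5
p = rng.random(n); p /= p.sum()
q = rng.random(n); q /= q.sum()

for lam in np.linspace(0, 1, 11):
    lhs = shannon_entropy(lam * p + (1 - lam) * q)
    rhs = lam * shannon_entropy(p) + (1 - lam) * shannon_entropy(q)
    assert lhs >= rhs - 1e-12         # S(mixture) >= mixture of entropies
print("concavity holds at all sampled lambdas")
```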

After looking at examples of thermostatic systems, we define an operad whose operations are convex-linear relations from a product of convex spaces to a single convex space. And then we prove that thermostatic systems give an algebra for this operad: that is, we can really stick together thermostatic systems in all these ways. The trick is computing the entropy function of the new composed system from the entropy functions of its parts: this is where entropy maximization comes in.

For a nice introduction to these ideas, check out Owen’s blog article:

• Owen Lynch, Compositional thermostatics, Topos Institute Blog, 9 September 2021.

And then comes the really interesting part: checking that this adequately captures many of the examples physicists have thought about!

The picture at the top of this post shows one that we discuss: two cylinders of ideal gas with a movable divider between them that’s permeable to heat. Yes, this is an operation in an operad—and if you tell us the entropy function of each cylinder of gas, our formalism will automatically compute the entropy function of the resulting combination of these two cylinders.

There are many other examples. Did you ever hear of the ‘canonical ensemble’, the ‘microcanonical ensemble’, or the ‘grand canonical ensemble’? Those are famous things in statistical mechanics. We show how our formalism recovers these.

I’m sure there’s much more to be done. But I feel happy to see modern math being put to good use: making the foundations of thermodynamics more precise. Vladimir Arnol’d once wrote:

Every mathematician knows that it is impossible to understand any elementary course in thermodynamics.

I’m not sure our work will help with that—and indeed, it’s possible that once the mathematicians finally understand thermodynamics, physicists won’t understand what the mathematicians are talking about! But at least we’re clearly seeing some more of the mathematical structures that are hinted at, but not fully spelled out, in such an elementary course.

I expect that our work will interact nicely with Simon Willerton’s work on the Legendre transform. The Legendre transform of a concave (or convex) function is widely used in thermostatics, and Simon describes this for functions valued in [-\infty,\infty] using enriched profunctors:

• Simon Willerton, Enrichment and the Legendre–Fenchel transform I, The n-Category Café, April 16, 2014.

• Simon Willerton, Enrichment and the Legendre–Fenchel transform II, The n-Category Café, May 22, 2014.

He also has a paper on this, and you can see him talk about it on YouTube.


The Kuramoto–Sivashinsky Equation (Part 7)

3 November, 2021

I have a lot of catching up to do. I want to share a bunch of work by Steve Huntsman. I’ll start with some older material. A bit of this may be ‘outdated’ by his later work, but I figure it’s all worth recording.

One goal here is to define ‘stripes’ for the Kuramoto–Sivashinsky equation in a way that lets us count them, their births, their mergers, and so on. We need a good definition to test the conjectures I made in Part 1.

While I originally formulated my conjectures for the ‘integral form’ of the Kuramoto–Sivashinsky equation:

h_t + h_{xx} + h_{xxxx} + \frac{1}{2} (h_x)^2 = 0

Steve has mostly been working with the derivative form:

u_t + u_{xx} + u_{xxxx} + u u_x = 0

so you can assume that unless I say otherwise. He’s using periodic boundary conditions such that

u(t,x) = u(t,x+L)

for some length L. The length depends on the particular experiment he’s doing.
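If you want to generate data like this yourself, here is a minimal pseudo-spectral solver for the derivative form with these periodic boundary conditions. It is only my own sketch, using a first-order exponential (Lawson) Euler step; the grid size, time step, domain length and initial data below are arbitrary choices, not the settings Steve used:

```python
import numpy as np

# Minimal pseudo-spectral solver for u_t + u_xx + u_xxxx + u u_x = 0
# with u(t, x) = u(t, x + L). A sketch only: first-order in time.
L, N, dt, steps = 100.0, 256, 0.05, 4000
x = L * np.arange(N) / N
k = 2 * np.pi * np.fft.fftfreq(N, d=L / N)
lin = k**2 - k**4                    # Fourier symbol of -(d_xx + d_xxxx)
E = np.exp(dt * lin)                 # exact propagator for the linear part

def nonlinear(uhat):
    u = np.real(np.fft.ifft(uhat))
    return -0.5j * k * np.fft.fft(u * u)   # -u u_x = -(1/2) d/dx (u^2)

uhat = np.fft.fft(0.1 * np.cos(2 * np.pi * x / L) + 0.01 * np.random.randn(N))
snapshots = []
for n in range(steps):
    uhat = E * (uhat + dt * nonlinear(uhat))   # exponential (Lawson) Euler step
    if n % 10 == 0:
        snapshots.append(np.real(np.fft.ifft(uhat)))
U = np.array(snapshots)              # rows are time slices, columns are x
```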

First, a plot of stripes. It looks like L = 100 here:



Births and deaths are shown as green and red dots, respectively. But to see them, you may need to click on the picture to enlarge it!

According to my conjecture there should be no red dots. The red dots at the top and the bottom of the image don’t count: they mostly arise because this program doesn’t take the periodic boundary conditions into account. There are two other red dots, which are worth thinking about.

Nice! But how are stripes being defined here? He describes how:

The stripe definition is mostly pretty simple and not image processy at all, and the trick to improve it is limited to removing little blobs and is easily explained.

Let u(t,x) be the solution to the KSE. Then let

v(t,x) := u(t,x)- u(t,x+a)

where a is the average integer offset (maybe I’m missing a minus sign a la -a) that maximizes the cross-correlation between u(t,x) and -u(t,x+a). Now anywhere v exceeds its median is part of a stripe.

The image processing trick is that I delete little stripes (and I use what image processors would call 4-connectivity to define simply connected regions—this is the conservative idea that a pixel should have a neighbor to the north, south, east, or west to be connected to that neighbor, instead of the aggressive 8-connectivity that allows NE, NW, SE, SW too) whose area is less than 1000 grid points. So it uses lots of image processing machinery to actually do its job, but the definition is simple and easily explained mathematically.

An obvious fix that removes the two nontrivial deaths in the picture I sent is to require a death to be sufficiently far away from another stripe: here I am guessing that the characteristic radius of a stripe will work just fine.
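For what it’s worth, here is a rough Python rendering of that recipe. It is my paraphrase, not Steve’s code: scipy’s connected-component labelling stands in for the image-processing step, the offset search is done by brute force, and the blob-size threshold of 1000 grid points is the one quoted above.

```python
import numpy as np
from scipy import ndimage

# Rough rendering of the stripe definition quoted above. U is an array whose
# rows are time slices u(t, x) on a periodic grid (e.g. the U from the solver
# sketch earlier in this post).

def stripe_mask(U, min_area=1000):
    nt, nx = U.shape
    # Circular offset a maximizing the cross-correlation of u(t, x) with
    # -u(t, x + a), averaged over time.
    scores = [np.mean(U * -np.roll(U, -a, axis=1)) for a in range(nx)]
    a = int(np.argmax(scores))
    v = U - np.roll(U, -a, axis=1)            # v(t, x) = u(t, x) - u(t, x + a)
    mask = v > np.median(v)                   # above-median points are stripe candidates
    # Delete little blobs, using 4-connectivity (scipy's default 2D structure).
    labels, nlab = ndimage.label(mask)
    areas = ndimage.sum(mask, labels, index=np.arange(1, nlab + 1))
    keep = np.zeros(nlab + 1, dtype=bool)
    keep[1:] = areas >= min_area
    return keep[labels]

# Example: mask = stripe_mask(U); births, deaths and mergers can then be read
# off from how connected components appear and disappear as t increases.
```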


Learn Applied Category Theory!

27 October, 2021

Do you like the idea of learning applied category theory by working on a project, as part of a team led by an expert? If you’re an early career researcher you can apply to do that now!

Mathematical Research Community: Applied Category Theory, meeting 2022 May 29–June 4. Details on how to apply: here. Deadline to apply: Tuesday 2022 February 15 at 11:59 Eastern Time.

After working with your team online, you’ll take an all-expenses-paid trip to a conference center in upstate New York for a week in the summer. There will be a pool, bocci, lakes with canoes, woods to hike around in, campfires at night… and also whiteboards, meeting rooms, and coffee available 24 hours a day to power your research!

Later you’ll get invited to the 2023 Joint Mathematics Meetings in Boston.

There will be three projects to choose from:

Valeria de Paiva (Topos Institute) will lead a study in the context of computer science that investigates indexed containers and partial compilers using lenses and Dialectica categories.

Nina Otter (Queen Mary University of London) will lead a study of social networks using simplicial complexes.

John Baez (University of California, Riverside) will lead a study of chemical reaction networks using category theoretic methods such as structured cospans.

The whole thing is being organized by Daniel Cicala of the University of New Haven and Simon Cho of Two Six Technologies.

I should add that this is just one of four ‘Mathematical Research Communities’ run by the American Mathematical Society in 2022, and you may prefer another. The applied category theory session will be held at the same time and place as one on data science! Then there are two more:

• Week 1a: Applied Category Theory

Organizers: John Baez, University of California, Riverside; Simon Cho, Two Six Technologies; Daniel Cicala, University of New Haven; Nina Otter, Queen Mary University of London; Valeria de Paiva, Topos Institute.

• Week 1b: Data Science at the Crossroads of Analysis, Geometry, and Topology

Organizers: Marina Meila, University of Washington; Facundo Mémoli, The Ohio State University; Jose Perea, Northeastern University; Nicolas Garcia Trillos, University of Wisconsin-Madison; Soledad Villar, Johns Hopkins University.

• Week 2a: Models and Methods for Sparse (Hyper)Network Science

Organizers: Sinan G. Aksoy, Pacific Northwest National Laboratory; Aric Hagberg, Los Alamos National Laboratory; Cliff Joslyn, Pacific Northwest National Laboratory; Bill Kay, Oak Ridge National Laboratory; Emilie Purvine, Pacific Northwest National Laboratory; Stephen J. Young, Pacific Northwest National Laboratory; Jennifer Webster, Pacific Northwest National Laboratory.

• Week 2b: Trees in Many Contexts

Organizers: Miklós Bóna, University of Florida; Éva Czabarka, University of South Carolina; Heather Smith Blake, Davidson College; Stephan Wagner, Uppsala University; Hua Wang, Georgia Southern University.

Applicants should be ready to engage in collaborative research and should be “early career”—either expecting to earn a PhD within two years or having completed a PhD within five years of the date of the summer conference. Exceptions to this limit on the career stage of an applicant may be made on a case-by-case basis. The Mathematical Research Community (MRC) program is open to individuals who are US citizens as well as to those who are affiliated with US institutions and companies/organizations. A few international participants may be accepted. Depending on space and other factors, a small number of self-funded participants may be admitted. Individuals who have once previously been an MRC participant will be considered for admission, and their applications must include a rationale for repeating. Please note that individuals cannot participate in the MRC program more than twice: applications from individuals who have twice been MRC participants will not be considered.

We seek individuals who will both contribute to and benefit from the MRC experience, and the goal is to create a collaborative research community that is vibrant, productive, and diverse. We welcome applicants from academic institutions of all types, as well as from private industry and government laboratories and agencies. Women and under-represented minorities are especially encouraged to apply.

All participants are expected to be active in the full array of MRC activities—the summer conference, special sessions at the Joint Mathematics Meetings, and follow-up collaborations.


The Kuramoto–Sivashinsky Equation (Part 6)

25 October, 2021

guest post by Theodore Kolokolnikov

I coded up a simple dynamical system with the following rules (loosely motivated by theory of motion of spikes in reaction-diffusion systems, see e.g. appendix of this paper, as well as this paper):

• insert a particle if inter-particle distance is more than some maxdist
• merge any two particles that collide
• otherwise evolve particles according to the ODE

\displaystyle{ x'_k(t) = \sum_{j=1}^N G_x(x_k, x_j) }

Here, G is a Green’s function that satisfies

G_{xx}-\lambda^2 G = -\delta(x,y)

inside the interval [-L,L] with Neumann boundary conditions G_x(\pm L, y)=0. Explicitly,

\displaystyle{ G(x,y) = \frac{\cosh((x+y)\lambda)+\cosh((2L-|x-y|) \lambda )}{2 \lambda \sinh(2L \lambda ) }  }

and

\displaystyle{ G_x(x,y)= \frac{\sinh((x+y) \lambda )+ \sinh((|x-y|-2L) \lambda ) \mbox{sign}(x-y) } {2 \sinh(2L\lambda)} }

where sign(0) is taken to be zero so that

\displaystyle{ G_x(y,y) := \frac{G_x(y^+,y)+ G_x(y^-,y)}{2} }

In particular, for large \lambda, one has

\displaystyle{ G(x,y)\sim\frac{e^{-\lambda | x-y|}}{2\lambda} }

and

\displaystyle{ G_x(x,y)\sim-\frac{e^{-\lambda | x-y|}} {2} \mbox{sign}(x-y), ~~\lambda \gg 1 }

Here are some of the resulting simulations, with different \lambda (including complex \lambda). This is mainly just for fun but there is a wide range of behaviours. In particular, I think the large-lambda limit should be able to capture analogous dynamics in the Keller–Segel model with logistic growth (Hillen et al.), see e.g. figures in this paper.
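For anyone who wants to play with this, here is a bare-bones Python sketch of the three rules above, using the explicit formula for G_x. It is my own reconstruction, not Theodore’s code, and all the parameter values are arbitrary:

```python
import numpy as np

# Sketch of the particle rules above: insert when a gap exceeds maxdist,
# merge particles that (nearly) collide, otherwise evolve by the ODE
# x_k' = sum_j G_x(x_k, x_j). All parameters are arbitrary choices.

L, lam = 5.0, 2.0
maxdist, mergedist, dt, T = 2.0, 0.05, 1e-3, 10.0

def Gx(x, y):
    s = np.sign(x - y)               # sign(0) = 0, as in the convention above
    return (np.sinh((x + y) * lam)
            + np.sinh((np.abs(x - y) - 2 * L) * lam) * s) / (2 * np.sinh(2 * L * lam))

x = np.sort(np.random.uniform(-L, L, 8))
t = 0.0
while t < T:
    vel = np.array([np.sum(Gx(xk, x)) for xk in x])     # ODE right-hand side
    x = np.sort(np.clip(x + dt * vel, -L, L))            # Euler step, kept in [-L, L]
    keep = np.concatenate(([True], np.diff(x) > mergedist))
    x = x[keep]                                           # merge: drop one of a close pair
    gaps = np.diff(x)
    if np.any(gaps > maxdist):                            # insert at midpoints of wide gaps
        mids = (x[:-1] + x[1:])[gaps > maxdist] / 2
        x = np.sort(np.concatenate([x, mids]))
    t += dt
print(x)
```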














The Kuramoto–Sivashinsky Equation (Part 5)

24 October, 2021

In Parts 3 and 4, I showed some work of Cheyne Weis on the ‘derivative form’ of the Kuramoto–Sivashinsky equation, namely

u_t + u_{xx} + u_{xxxx} + u u_x = 0

Steve Huntsman’s picture of a solution above gives you a good feel for how this works.

Now let’s turn to the ‘integral form’, namely

h_t + h_{xx} + h_{xxxx} + \frac{1}{2} (h_x)^2 = 0

This has rather different behavior, though it’s closely related, since if h is any solution of the integral form then

u = h_x

is a solution of the derivative form.

Cheyne drew a solution of the integral form:



You’ll immediately see the most prominent feature: it slopes down! I’ll show later that the average of h over space can never increase with time, and it decreases unless h is constant as a function of space. By contrast, we saw in Part 2 that the average of u over space never changes with time.

However, we can subtract off the average of h over space to eliminate this dramatic but rather boring effect. The result looks like this:





Now it’s very easy to see the ‘stripes’ I’m so obsessed with: they are the ridges in these pictures. You can see how as time increases from left to right these stripes are born and merge, but never die or split.

But how can we mathematically define these stripes, to make it possible to state precise conjectures about them? We could try defining them to be points where u is locally maximized as a function of x at any time t. With this definition, Cheyne gets stripes like this:



The previous picture shows up in the lower right hand corner of this one.

These stripes look pretty good, but you’ll see some gaps where they momentarily disappear and then reappear. I don’t think these invalidate my conjecture that stripes never ‘die’. I just think this definition of stripe is not quite right. (Of course I would think that, wouldn’t I? I want the conjecture to be true!)

Cheyne thought that maybe overlaying maxima in time would help:



This fills in some gaps, but there are still stripes that momentarily die, only to be shortly reborn. It might be good to define stripes to be points where this function— u minus its average over space—exceeds a certain cutoff.
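Here is what those two candidate definitions might look like in Python. This is only a sketch of mine, not Cheyne’s code; the cutoff value is a placeholder:

```python
import numpy as np
from scipy.signal import find_peaks

# Two candidate stripe definitions: local maxima in x of the mean-subtracted
# field, or points where it exceeds a cutoff. H has rows h(t, x) on a
# periodic grid.

def stripes_as_local_maxima(H):
    """Mark, in each time slice, local maxima of h minus its spatial mean."""
    mask = np.zeros_like(H, dtype=bool)
    for i, row in enumerate(H - H.mean(axis=1, keepdims=True)):
        peaks, _ = find_peaks(row)           # interior local maxima in x
        mask[i, peaks] = True
    return mask

def stripes_as_superlevel_set(H, cutoff=0.5):
    """Mark points where h minus its spatial mean exceeds a cutoff."""
    return (H - H.mean(axis=1, keepdims=True)) > cutoff

# With either mask, one can track connected components in the (t, x) plane
# to look for births, mergers, and (conjecturally absent) deaths.
```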

Let’s conclude by proving that the average of h over space can never increase with time. To prove this, just take the time derivative of the integral of h over space, and show it’s \le 0. Remember that we’re assuming h(t,x) is periodic in x with period L, so ‘space’ is the interval [0,L] with its endpoints identified to form a circle. So, we get

\begin{array}{ccl}  \displaystyle{ \frac{d}{d t} \int_0^L h(t,x) \, dx } &=& \displaystyle{ \int_0^L h_t(t,x) \, dx } \\ \\  &=& \displaystyle{ -\int_0^L \left( h_{xx} + h_{xxxx} + \frac{1}{2} (h_x)^2 \right) \, dx } \\ \\  &=& \displaystyle{  -\left( h_x + h_{xxx} \right) \Big|_0^L - \frac{1}{2} \int_0^L (h_x)^2 \, dx } \\ \\  &=& \displaystyle{ - \frac{1}{2} \int_0^L (h_x)^2 \, dx }  \end{array}

This is \le 0, as desired. Moreover, it’s zero iff h is constant as a function on space!


The Kuramoto–Sivashinsky Equation (Part 4)

23 October, 2021

Here is some more work by Cheyne Weis. Last time I explained that Cheyne and Steve Huntsman were solving the ‘derivative form’ of the Kuramoto–Sivashinsky equation, namely this:

u_t + u_{xx} + u_{xxxx} + u u_x = 0

Above is one of Steve’s pictures of a typical solution with its characteristic ‘stripes’. Cheyne started trying to identify these stripes as locations where du/dx \le c for some cutoff c, for example c = -0.7. He made some nice 3d views of du/dx illustrating the problems with this. As he explains:



So there is the tradeoff that if I make the cutoff too high, the sections that look greenish are getting identified as stripes. If I make it too low, the points near where two stripes merge disappear. I attached some 3D plots showing the landscape of du/dx reiterating this point.

The disappearance of the stripes as they merge is unavoidable to a certain extent. The plots in the PowerPoint show how even when there is no cutoff, there is some gap between the merging stripes. There may be some characteristic length scale needed to qualify them as merging.









You can click on these pictures to make them bigger.


The Kuramoto–Sivashinsky Equation (Part 3)

23 October, 2021

I’ve been getting a lot of help from Steve Huntsman and also Cheyne Weis, who is a physics grad student at the University of Chicago. You can see a lot, but far from all, of Steve’s work as comments on part 1. Here are some things Cheyne has been doing.

Cheyne started out working with the ‘derivative form’ of the Kuramoto–Sivashinsky equation, meaning this:

u_t + u_{xx} + u_{xxxx} + u u_x = 0

and he soon noticed what Steve made clear in the image above: the ‘stripes’ in solutions of this equation aren’t ‘bumps’ (regions where u is large) but regions where the solution is rapidly changing from positive to negative. This suggests a way to define stripes: look for where du/dx < c for some negative c. It seems c = -0.7 is a pretty good choice.
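In code this definition is very simple. Here is a sketch (mine, not Cheyne’s): compute du/dx with periodic central differences and keep the points where it falls below the cutoff.

```python
import numpy as np

# Sketch of the du/dx-threshold idea: mark points where the spatial derivative
# falls below a negative cutoff c. U has rows u(t, x) on a periodic grid of
# spacing dx; c = -0.7 is the value suggested above.

def stripes_from_derivative(U, dx, c=-0.7):
    dudx = (np.roll(U, -1, axis=1) - np.roll(U, 1, axis=1)) / (2 * dx)
    return dudx < c
```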

I thought maybe it would be better to use the derivative of the PDE’s solution (du/dx) to define the stripes. You can find an image of this in the attached PowerPoint.



The second slide has another image where the lines represent the minima of du/dx (as a function of x) that are below a certain threshold c. You can see these lines appearing and combining as apparent in Thien An’s animation. Hopefully this is some progress on the definition of a “bump”. If you agree, I could use this to test some of your other conjectures.





Here are the results for a range of alternative choices of c. The problem, if we’re seeking a definition of ‘stripe’ where stripes never die as time passes, is the presence of short ‘ministripes’ that die shortly after they appear. What’s really going on, I believe, is that when small stripes merge with larger ones, the derivative du/dx becomes smaller in absolute value, thus going above the cutoff c. In short, merging is being misinterpreted as death.