Information Processing and Biology

7 November, 2016


The Santa Fe Institute, in New Mexico, is a place for studying complex systems. I’ve never been there! Next week I’ll go there to give a colloquium on network theory, and also to participate in this workshop:

Statistical Mechanics, Information Processing and Biology, November 16–18, Santa Fe Institute. Organized by David Krakauer, Michael Lachmann, Manfred Laubichler, Peter Stadler, and David Wolpert.

Abstract. This workshop will address a fundamental question in theoretical biology: Does the relationship between statistical physics and the need of biological systems to process information underpin some of their deepest features? It recognizes that a core feature of biological systems is that they acquire, store and process information (i.e., perform computation). However, to manipulate information in this way they require a steady flux of free energy from their environments. These two interrelated attributes of biological systems are often taken for granted; they are not part of standard analyses of either the homeostasis or the evolution of biological systems. In this workshop we aim to fill in this major gap in our understanding of biological systems, by gaining deeper insight into the relation between the need for biological systems to process information and the free energy they need to pay for that processing.

The goal of this workshop is to address these issues by focusing on a set of three specific questions: 1) How has the fraction of free energy flux on earth that is used by biological computation changed with time? 2) What is the free energy cost of biological computation or functioning? 3) What is the free energy cost of the evolution of biological computation or functioning? In all of these cases we are interested in the fundamental limits that the laws of physics impose on various aspects of living systems as expressed by these three questions.

I think it’s not open to the public, but I will try to blog about it. The speakers include a lot of experts on information theory, statistical mechanics, and biology. Here they are:

Wednesday November 16: Chris Jarzynski, Seth Lloyd, Artemy Kolchinski, John Baez, Manfred Laubichler, Harold de Vladar, Sonja Prohaska, Chris Kempes.

Thursday November 17: Phil Ball, Matina C. Donaldson-Matasci, Sebastian Deffner, David Wolpert, Daniel Polani, Christoph Flamm, Massimiliano Esposito, Hildegard Meyer-Ortmanns, Blake Pollard, Mikhail Prokopenko, Peter Stadler, Ben Machta.

Friday November 18: Jim Crutchfield, Sara Walker, Hyunju Kim, Takahiro Sagawa, Michael Lachmann, Wojciech Zurek, Christian Van den Broeck, Susanne Still, Chris Stephens.


Frigatebirds

18 July, 2016

 

Frigatebirds are amazing!

They have the largest ratio of wing area to body weight of any bird. This lets them fly very long distances while only rarely flapping their wings. They often stay in the air for weeks at a time. And one being tracked by satellite in the Indian Ocean stayed aloft for two months.

Surprisingly for seabirds, they don’t go into the water. Their feathers aren’t waterproof. They are true creatures of the air. They snatch fish from the ocean surface using their long, hooked bills—and they often eat flying fish! They clean themselves in flight, flying low and wetting themselves at the water’s surface before preening.

They live a long time: often over 35 years.

But here’s the cool new discovery:

Since the frigatebird spends most of its life at sea, its habits outside of when it breeds on land aren’t well-known—until researchers started tracking them around the Indian Ocean. What the researchers discovered is that the birds’ flying ability almost defies belief.

Ornithologist Henri Weimerskirch put satellite tags on a couple of dozen frigatebirds, as well as instruments that measured body functions such as heart rate. When the data started to come in, he could hardly believe how high the birds flew.

“First, we found, ‘Whoa, 1,500 meters. Wow. Excellent, fantastique,’ ” says Weimerskirch, who is with the National Center for Scientific Research in Paris. “And after 2,000, after 3,000, after 4,000 meters — OK, at this altitude they are in freezing conditions, especially surprising for a tropical bird.”

Four thousand meters is more than 12,000 feet, or as high as parts of the Rocky Mountains. “There is no other bird flying so high relative to the sea surface,” he says.

Weimerskirch says that kind of flying should take a huge amount of energy. But the instruments monitoring the birds’ heartbeats showed that the birds weren’t even working up a sweat. (They wouldn’t, actually, since birds don’t sweat, but their heart rate wasn’t going up.)

How did they do it? By flying into a cloud.

“It’s the only bird that is known to intentionally enter into a cloud,” Weimerskirch says. And not just any cloud—a fluffy, white cumulus cloud. Over the ocean, these clouds tend to form in places where warm air rises from the sea surface. The birds hitch a ride on the updraft, all the way up to the top of the cloud.

[…]

“Absolutely incredible,” says Curtis Deutsch, an oceanographer at the University of Washington. “They’re doing it right through these cumulus clouds. You know, if you’ve ever been on an airplane, flying through turbulence, you know it can be a little bit nerve-wracking.”

One of the tagged birds soared 40 miles without a wing-flap. Several covered more than 300 miles a day on average, and flew continuously for weeks.

• Christopher Joyce, Nonstop flight: how the frigatebird can soar for weeks without stopping, All Things Considered, National Public Radio, 30 June 2016.

Frigatebirds aren’t admirable in every way. They’re kleptoparasites—now there’s a word you don’t hear every day! That’s a name for animals that steal food:

Frigatebirds will rob other seabirds such as boobies, particularly the red-footed booby, tropicbirds, shearwaters, petrels, terns, gulls and even ospreys of their catch, using their speed and maneuverability to outrun and harass their victims until they regurgitate their stomach contents. They may either assail their targets after they have caught their food or circle high over seabird colonies waiting for parent birds to return laden with food.

Frigatebird, Wikipedia.


Bleaching of the Great Barrier Reef

22 April, 2016


The chatter of gossip distracts us from the really big story, the Anthropocene: the new geological era we are bringing about. Here’s something that should be dominating the headlines: Most of the Great Barrier Reef, the world’s largest coral reef system, now looks like a ghostly graveyard.

Most corals are colonies of tiny genetically identical animals called polyps. Over centuries, their skeletons build up reefs, which are havens for many kinds of sea life. Some polyps catch their own food using stingers. But most get their food by symbiosis! They cooperate with single-celled organisms called zooxanthellae. Zooxanthellae get energy from the sun’s light. They actually live inside the polyps, and provide them with food. Most of the color of a coral reef comes from these zooxanthellae.

When a polyp is stressed, the zooxanthellae living inside it may decide to leave. This can happen when the sea water gets too hot. Without its zooxanthellae, the polyp is transparent and the coral’s white skeleton is revealed—as you see here. We say the coral is bleached.

After they bleach, the polyps begin to starve. If conditions return to normal fast enough, the zooxanthellae may come back. If they don’t, the coral will die.

The Great Barrier Reef, off the northeast coast of Australia, contains over 2,900 reefs and 900 islands. It’s huge: 2,300 kilometers long, with an area of about 340,000 square kilometers. It can be seen from outer space!

With global warming, this reef has been starting to bleach. Parts of it bleached in 1998 and again in 2002. But this year, with a big El Niño pushing world temperatures to new record highs, is the worst.

Scientists have been flying over the Great Barrier Reef to study the damage, and divers have looked at some of the reefs in detail. Of the 522 reefs surveyed in the northern sector, over 80% are severely bleached and less than 1% are not bleached at all. The damage is less severe further south, where the water is cooler—but most of the reefs are in the north:



The top expert on coral reefs in Australia, Terry Hughes, wrote:

I showed the results of aerial surveys of bleaching on the Great Barrier Reef to my students. And then we wept.

Imagine devoting your life to studying and trying to protect coral reefs, and then seeing this.

Some of the bleached reefs may recover. But as oceans continue to warm, the prospects look bleak. The last big El Niño was in 1998. With a lot of hard followup work, scientists showed that in the end, 16% of the world’s corals died in that event.

This year is quite a bit hotter.

So, global warming is not a problem for the future: it’s a problem now. It’s not good enough to cut carbon emissions eventually. We’ve got to get serious now.

I need to recommit myself to this. For example, I need to stop flying around to conferences. I’ve cut back, but I need to do much better. Future generations, living in the damaged world we’re creating, will not have much sympathy for our excuses.


Statistical Laws of Darwinian Evolution

18 April, 2016

guest post by Matteo Smerlak

Biologists such as Stephen Jay Gould like to emphasize that evolution is unpredictable. They have a point: there is absolutely no way an alien visiting the Earth 400 million years ago could have said:

Hey, I know what’s gonna happen here. Some descendants of those ugly fish will grow wings and start flying in the air. Others will walk the surface of the Earth for a few million years, but they’ll get bored and they’ll eventually go back to the oceans; when they do, they’ll be able to chat across thousands of kilometers using ultrasound. Yet others will grow arms, legs, fur, they’ll climb trees and invent BBQ, and, sooner or later, they’ll start wondering “why all this?”.

Nor can we tell if, a week from now, the flu virus will mutate, become highly pathogenic and forever remove the furry creatures from the surface of the Earth.

Evolution isn’t gravity—we can’t tell in which direction things will fall.

One reason we can’t predict the outcomes of evolution is that genomes evolve in a super-high dimensional combinatorial space, with a ginormous number of possible turns at every step. Another is that living organisms interact with one another in a massively non-linear way, with feedback loops, tipping points and all that jazz.

Life’s a mess, if you want my physicist’s opinion.

But that doesn’t mean that nothing can be predicted. Think of statistics. Nobody can predict who I’ll vote for in the next election, but it’s easy to tell what the distribution of votes in the country will be like. For instance, for continuous variables which arise as sums of large numbers of independent components, the central limit theorem tells us that the distribution will always be approximately normal. Or take extreme events: the maximum of N independent random variables is distributed according to a member of a one-parameter family of so-called “extreme value distributions”: this is the content of the famous Fisher–Tippett–Gnedenko theorem.
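Here’s a quick numerical illustration of the extreme-value statement (a little sketch of my own, not needed for what follows): the maximum of N independent exponential random variables sits near \ln N, with order-one fluctuations following the Gumbel law—the Fisher–Tippett–Gnedenko attractor for light-tailed variables. Its mean is the Euler–Mascheroni constant \gamma\approx 0.5772 and its variance is \pi^2/6\approx 1.6449:

```python
import numpy as np

rng = np.random.default_rng(0)
N, trials = 2000, 2000

# Maxima of N i.i.d. exponential(1) variables, over many independent trials.
maxima = rng.exponential(1.0, size=(trials, N)).max(axis=1)

# Fisher–Tippett–Gnedenko: max - ln N converges to a standard Gumbel law,
# whose mean is the Euler–Mascheroni constant (≈ 0.5772) and whose
# variance is π²/6 (≈ 1.6449).
centered = maxima - np.log(N)
print(centered.mean(), centered.var())
```

For exponential variables the convergence is fast, so already at N = 2000 the sample mean and variance of the centered maxima land close to the Gumbel values.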

So this is the problem I want to think about in this blog post: is evolution ruled by statistical laws? Or, in physics terms: does it exhibit some form of universality?

Fitness distributions are the thing

One lesson from statistical physics is that, to uncover universality, you need to focus on relevant variables. In the case of evolution, it was Darwin’s great contribution to figure out the relevant variable: the average number of viable offspring, aka fitness, of an organism. Other features—physical strength, metabolic efficiency, you name it—matter only insofar as they are correlated with fitness. If we further assume that fitness is (approximately) heritable, meaning that descendants have the same fitness as their ancestors, we get a simple yet powerful dynamical principle called natural selection: in a given population, the lineage with the highest fitness eventually dominates, i.e. its fraction goes to one over time. This principle is very general: it applies to genes and species, but also to non-living entities such as algorithms, firms or languages. The general relevance of natural selection as an evolutionary force is sometimes referred to as “Universal Darwinism”.

The general idea of natural selection is pictured below (reproduced from this paper):

It’s not hard to write down an equation which expresses natural selection in general terms. Consider an infinite population in which each lineage grows with some rate x. (This rate is called the log-fitness or Malthusian fitness to contrast it with the number of viable offspring w=e^{x\Delta t} with \Delta t the lifetime of a generation. It’s more convenient to use x than w in what follows, so we’ll just call x “fitness”). Then the distribution of fitness at time t satisfies the equation

\displaystyle{ \frac{\partial p_t(x)}{\partial t} =\left(x-\int d y\, y\, p_t(y)\right)p_t(x) }

whose explicit solution in terms of the initial fitness distribution p_0(x):

\displaystyle{ p_t(x)=\frac{e^{x t}p_0(x)}{\int d y\, e^{y t}p_0(y)} }

is called the Cramér transform of p_0(x) in large deviations theory. That is, viewed as a flow in the space of probability distributions, natural selection is nothing but a time-dependent exponential tilt. (These equations and the results below can be generalized to include the effect of mutations, which are critical to maintain variation in the population, but we’ll skip this here to focus on pure natural selection. See my paper referenced below for more information.)

An immediate consequence of these equations is that the mean fitness \mu_t=\int dx\, x\, p_t(x) grows monotonically in time, with a rate of growth given by the variance \sigma_t^2=\int dx\, (x-\mu_t)^2\, p_t(x):

\displaystyle{ \frac{d\mu_t}{dt}=\sigma_t^2\geq 0 }
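This identity is easy to check numerically. Here’s a minimal sketch of my own (an ad hoc initial distribution, discretized on a grid) that evolves p_0 by the exponential-tilt solution above and compares a finite-difference derivative of the mean with the variance:

```python
import numpy as np

# Grid discretization of the selection flow: p_t(x) ∝ e^{xt} p_0(x).
x = np.linspace(-10.0, 10.0, 200001)
dx = x[1] - x[0]
logp0 = -x**2 / 2 - 0.1 * x**4   # an arbitrary light-tailed initial shape

def mean_var(t):
    """Mean and variance of the tilted distribution at time t."""
    logw = logp0 + x * t
    p = np.exp(logw - logw.max())   # log-stabilized tilt
    p /= p.sum() * dx
    mu = np.sum(x * p) * dx
    var = np.sum((x - mu)**2 * p) * dx
    return mu, var

# Check dμ_t/dt = σ_t² by central finite differences at t = 1.
t, h = 1.0, 1e-4
dmu_dt = (mean_var(t + h)[0] - mean_var(t - h)[0]) / (2 * h)
var_t = mean_var(t)[1]
print(dmu_dt, var_t)   # the two numbers agree
```

The agreement is exact up to discretization and finite-difference error, since the mean of the tilted distribution is the derivative of the cumulant generating function of p_0.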

The great geneticist Ronald Fisher (yes, the one in the extreme value theorem!) was very impressed with this relationship. He thought it amounted to a biological version of the second law of thermodynamics, writing in his 1930 monograph:

Professor Eddington has recently remarked that “The law that entropy always increases—the second law of thermodynamics—holds, I think, the supreme position among the laws of nature”. It is not a little instructive that so similar a law should hold the supreme position among the biological sciences.

Unfortunately, this excitement hasn’t been shared by the biological community, notably because Fisher’s “fundamental theorem of natural selection” isn’t predictive: the mean fitness \mu_t grows according to the fitness variance \sigma_t^2, but what determines the evolution of \sigma_t^2? I can’t use the identity above to predict the speed of evolution in any sense. Geneticists say it’s “dynamically insufficient”.

Two limit theorems

But the situation isn’t as bad as it looks. The evolution of p_t(x) may be decomposed into the evolution of its mean \mu_t, of its variance \sigma_t^2, and of its shape or type

\overline{p}_t(x)=\sigma_t p_t(\sigma_t x+\mu_t).

(We also call \overline{p}_t(x) the “standardized fitness distribution”.) With Ahmed Youssef we showed that:

• If p_0(x) is supported on the whole real line and decays at infinity as

-\ln\int_x^{\infty}p_0(y)d y\underset{x\to\infty}{\sim} x^{\alpha}

for some \alpha > 1, then \mu_t\sim t^{\overline{\alpha}-1}, \sigma_t^2\sim t^{\overline{\alpha}-2} and \overline{p}_t(x) converges to the standard normal distribution as t\to\infty. Here \overline{\alpha} is the conjugate exponent to \alpha, i.e. 1/\overline{\alpha}+1/\alpha=1.

• If p_0(x) has a finite right-end point x_+ with

p_0(x)\underset{x\to x_+}{\sim} (x_+-x)^\beta

for some \beta\geq0, then x_+-\mu_t\sim t^{-1}, \sigma_t^2\sim t^{-2} and \overline{p}_t(x) converges to the flipped gamma distribution

\displaystyle{ p^*_\beta(x)= \frac{(1+\beta)^{(1+\beta)/2}}{\Gamma(1+\beta)}\, \Theta\big[(1+\beta)^{1/2}-x\big]\, e^{-(1+\beta)^{1/2}\left[(1+\beta)^{1/2}-x\right]} \left[(1+\beta)^{1/2}-x\right]^\beta }

Here and below the symbol \sim means “asymptotically equivalent up to a positive multiplicative constant”; \Theta(x) is the Heaviside step function. Note that p^*_\beta(x) becomes Gaussian in the limit \beta\to\infty, i.e. the attractors of cases 1 and 2 form a continuous line in the space of probability distributions; the other extreme case, \beta\to0, corresponds to a flipped exponential distribution.
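Both theorems can be checked with a few lines of code. The sketch below is mine, not from the paper; it just discretizes the exponential-tilt solution on a grid. For case 1 it takes p_0(x) \propto e^{-|x|^4}, so \alpha = 4 and \overline{\alpha} = 4/3; for case 2 it takes p_0(x) \propto (x_+ - x) with \beta = 1 and x_+ = 0, and reads off the predicted exponents:

```python
import numpy as np

def tilted_stats(logp0, x, t):
    """Mean, variance, skewness of p_t(x) ∝ e^{xt} p_0(x) on a grid."""
    dx = x[1] - x[0]
    logw = logp0 + x * t
    p = np.exp(logw - logw.max())   # log-stabilized tilt
    p /= p.sum() * dx
    mu = np.sum(x * p) * dx
    var = np.sum((x - mu)**2 * p) * dx
    skew = np.sum((x - mu)**3 * p) * dx / var**1.5
    return mu, var, skew

# Case 1: stretched-exponential tail, p_0(x) ∝ e^{-|x|^4}, so abar = 4/3.
x = np.linspace(-6.0, 6.0, 60001)
logp0 = -np.abs(x)**4
mu1, var1, skew1 = tilted_stats(logp0, x, 40.0)
mu2, var2, skew2 = tilted_stats(logp0, x, 80.0)
mu_exp = np.log2(mu2 / mu1)     # ≈ abar - 1 = 1/3
var_exp = np.log2(var2 / var1)  # ≈ abar - 2 = -2/3
_, _, skew_early = tilted_stats(logp0, x, 5.0)  # shape flows toward the normal

# Case 2: bounded fitness, p_0(x) ∝ (x_+ - x)^β with x_+ = 0, β = 1.
xb = np.linspace(-5.0, 0.0, 200001)
logq0 = np.log(np.maximum(-xb, 1e-300))
g1, v1, _ = tilted_stats(logq0, xb, 50.0)    # gap x_+ - μ_t is -g here
g2, v2, _ = tilted_stats(logq0, xb, 100.0)

print(mu_exp, var_exp)     # ≈ 0.33 and ≈ -0.67
print(g1 / g2, v1 / v2)    # gap ~ 1/t, variance ~ 1/t²: ratios ≈ 2 and ≈ 4
```

In case 1 the standardized skewness shrinks as t grows, as it must if the normal is the attractor; in case 2 the tilted distribution near the endpoint is essentially a flipped gamma, so x_+ - \mu_t \approx (1+\beta)/t.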

The one-parameter family of attractors p_\beta^*(x) is plotted below:

These results achieve two things. First, they resolve the dynamical insufficiency of Fisher’s fundamental theorem by giving estimates of the speed of evolution in terms of the tail behavior of the initial fitness distribution. Second, they show that natural selection is indeed subject to a form of universality, whereby the relevant statistical structure turns out to be finite dimensional, with only a handful of “conserved quantities” (the \alpha and \beta exponents) controlling the late-time behavior of natural selection. This amounts to a large reduction in complexity and, concomitantly, an enhancement of predictive power.

(For the mathematically-oriented reader, the proof of the theorems above involves two steps: first, translate the selection equation into an equation for (cumulant) generating functions; second, use a suitable Tauberian theorem—the Kasahara theorem—to relate the behavior of generating functions at large values of their arguments to the tail behavior of p_0(x). Details in our paper.)

It’s useful to consider the convergence of fitness distributions to the attractors p_\beta^*(x) for 0\leq\beta\leq \infty in the skewness-kurtosis plane, i.e. in terms of the third and fourth cumulants of p_t(x).

The red curve is the family of attractors, with the normal at the bottom right and the flipped exponential at the top left, and the dots correspond to numerical simulations performed with the classical Wright–Fisher model and with a simple genetic algorithm solving a linear programming problem. The attractors attract!

Conclusion and a question

Statistics is useful because limit theorems (the central limit theorem, the extreme value theorem) exist. Without them, we wouldn’t be able to make any population-level predictions. Same with statistical physics: it’s only because matter consists of large numbers of atoms, and limit theorems hold (the H-theorem, the second law), that macroscopic physics is possible in the first place. I believe the same perspective is useful in evolutionary dynamics: it’s true that we can’t predict how many wings birds will have in ten million years, but we can tell what shape fitness distributions should have if natural selection is true.

I’ll close with an open question for you, the reader. In the central limit theorem as well as in the second law of thermodynamics, convergence is driven by a Lyapunov function, namely entropy. (In the case of the central limit theorem, it’s a relatively recent result by Artstein et al.: the entropy of the normalized sum of n i.i.d. random variables, when it’s finite, is a monotonically increasing function of n.) In the case of natural selection for unbounded fitness, it’s clear that entropy will also be eventually monotonically increasing—the normal is the distribution with largest entropy at fixed variance and mean.

Yet it turns out that, in our case, entropy isn’t monotonic at all times; in fact, the closer the initial distribution p_0(x) is to the normal distribution, the later the entropy of the standardized fitness distribution starts to increase. Or, equivalently, the closer the initial distribution p_0(x) is to the normal, the later its relative entropy with respect to the normal starts to decrease. Why is this? And what’s the actual Lyapunov function for this process (i.e., what functional of the standardized fitness distribution is monotonic at all times under natural selection)?

In the plots above the blue, orange and green lines correspond respectively to

\displaystyle{ p_0(x)\propto e^{-x^2/2-x^4}, \quad p_0(x)\propto e^{-x^2/2-0.01x^4}, \quad p_0(x)\propto e^{-x^2/2-0.001x^4} }
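One can watch the delayed entropy increase numerically. Here’s a rough sketch of my own, using the third distribution above: it tracks the differential entropy of the standardized fitness distribution along the selection flow, which starts near the Gaussian maximum, dips as the tilt pushes mass into the quartic tail, and then climbs back toward the Gaussian value:

```python
import numpy as np

# The third initial condition above: p_0(x) ∝ exp(-x²/2 - 0.001 x⁴).
x = np.linspace(-6.0, 40.0, 460001)
dx = x[1] - x[0]
logp0 = -x**2 / 2 - 0.001 * x**4

def standardized_entropy(t):
    """Differential entropy of the standardized fitness distribution at time t."""
    logw = logp0 + x * t
    p = np.exp(logw - logw.max())   # log-stabilized tilt p_t ∝ e^{xt} p_0
    p /= p.sum() * dx
    mu = np.sum(x * p) * dx
    var = np.sum((x - mu)**2 * p) * dx
    mask = p > 0
    h = -np.sum(p[mask] * np.log(p[mask])) * dx
    return h - 0.5 * np.log(var)    # standardizing rescales by σ_t

h0, h_mid, h_late = (standardized_entropy(t) for t in (0.0, 10.0, 100.0))
print(h0, h_mid, h_late)
```

The entropy differences are tiny for an initial condition this close to the normal (a few times 10^{-4} nats), which is exactly the point: the closer p_0 is to the normal, the shallower and later the dip, so entropy can’t be the Lyapunov function at all times.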

References

• S. J. Gould, Wonderful Life: The Burgess Shale and the Nature of History, W. W. Norton & Co., New York, 1989.

• M. Smerlak and A. Youssef, Limiting fitness distributions in evolutionary dynamics, 2015.

• R. A. Fisher, The Genetical Theory of Natural Selection, Oxford University Press, Oxford, 1930.

• S. Artstein, K. Ball, F. Barthe and A. Naor, Solution of Shannon’s problem on the monotonicity of entropy, J. Am. Math. Soc. 17 (2004), 975–982.


Interview (Part 2)

21 March, 2016

Greg Bernhardt runs an excellent website for discussing physics, math and other topics, called Physics Forums. He recently interviewed me there. Since I used this opportunity to explain a bit about the Azimuth Project and network theory, I thought I’d reprint the interview here. Here is Part 2.

 

Tell us about your experience with past projects like “This Week’s Finds in Mathematical Physics”.

I was hired by U.C. Riverside back in 1989. I was lonely and bored, since Lisa was back on the other coast. So, I spent a lot of evenings on the computer.

We had the internet back then—this was shortly after stone tools were invented—but the world-wide web hadn’t caught on yet. So, I would read and write posts on “newsgroups” using a program called a “news server”. You have to imagine me sitting in front of an old green-on-black cathode ray tube monitor with a large floppy disk drive, firing up the old modem to hook up to the internet.

In 1993, I started writing a series of posts on the papers I’d read. I called it “This Week’s Finds in Mathematical Physics”, which was a big mistake, because I couldn’t really write one every week. After a while I started using it to explain lots of topics in math and physics. I wrote 300 issues. Then I quit in 2010, when I started taking climate change seriously.

Share with us a bit about your current projects like Azimuth and the n-Café.

The n-Category Café is a blog I started with Urs Schreiber and the philosopher David Corfield back in 2006, when all three of us realized that n-categories are the big wave that math is riding right now. We have a bunch more bloggers on the team now. But the n-Café lost some steam when I quit working on n-categories and Urs started putting most of his energy into two related projects: a wiki called the nLab and a discussion group called the nForum.

In 2010, when I noticed that global warming was like a huge wave crashing down on our civilization, I started the Azimuth Project. The goal was to create a focal point for scientists and engineers interested in saving the planet. It consists of a team of people, a blog, a wiki and a discussion group. It was very productive for a while: we wrote a lot of educational articles on climate science and energy issues. But lately I’ve realized I’m better at abstract math. So, I’ve been putting more time into working with my grad students.

What about climate change has captured your interest?

That’s like asking: “What about that huge tsunami rushing toward us has captured your interest?”

Around 2004 I started hearing news that sent chills up my spine—and what really worried me is how few people were talking about this news, at least in the US.

I’m talking about how we’re pushing the Earth’s climate out of the glacial cycle we’ve been in for over a million years, into brand new territory. I’m talking about things like how it takes hundreds or thousands of years for CO2 to exit the atmosphere after it’s been put in. And I’m talking about how global warming is just part of a bigger phenomenon: the Anthropocene. That’s a new geological epoch, in which the biosphere is rapidly changing due to human influences. It’s not just the temperature:

• About 1/4 of all chemical energy produced by plants is now used by humans.

• The rate of species going extinct is 100–1000 times the usual background rate.

• Populations of large ocean fish have declined 90% since 1950.

• Humans now take more nitrogen from the atmosphere and convert it into nitrates than all other processes combined.

• 8–9 times as much phosphorus is flowing into the oceans as the natural background rate.

This doesn’t necessarily spell the end of our civilization, but it is something that we’ll all have to deal with.

So, I felt the need to alert people and try to dream up strategies to do something. That’s why in 2010 I quit work on n-categories and started the Azimuth Project.

Carbon Dioxide Variations

You have life experience on both US coasts. Which do you prefer and why?

There are some differences between the coasts, but they’re fairly minor. The West Coast is part of the Pacific Rim, so there’s more Asian influence here. The seasons are less pronounced here, because winds in the northern hemisphere blow from west to east, and the oceans serve as a temperature control system. Down south in Riverside it’s a semi-desert, so we can eat breakfast in our back yard in January! But I live here not because I like the West Coast more. This just happens to be where my wife Lisa and I managed to get jobs.

What I really like is getting out of the US and seeing the rest of the world. When you’re at a cremation ritual in Bali, or a Hmong festival in Laos, the difference between regions of the US starts seeming pretty small.

But I wasn’t a born traveler. When I spent my first summer in England, I was very apprehensive about making a fool of myself. The British have different manners, and their old universities are full of arcane customs and subtle social distinctions that even the British find terrifying. But after a few summers there I got over it. First, all around the world, being American gives you a license to be clueless. If you behave any better than the worst stereotypes, people are impressed. Second, I spend most of my time with mathematicians, who are incredibly forgiving of bad social behavior as long as you know interesting theorems.

By now I’ve gotten to feel very comfortable in England. The last couple of years I’ve spent time at the quantum computation group at Oxford—the group run by Bob Coecke and Samson Abramsky. I like talking to Jamie Vicary about n-categories and physics, and also my old friend Minhyong Kim, who is a number theorist there.

I was also very apprehensive when I first visited Paris. Everyone talks about how the waiters are rude, and so on. But I think that’s an exaggeration. Yes, if you go to cafés packed with boorish tourists, the waiters will treat you like a boorish tourist—so don’t do that. If you go to quieter places and behave politely, most people are friendly. Luckily Lisa speaks French and has some friends in Paris; that opens up a lot of opportunities. I don’t speak French, so I always feel like a bit of an idiot, but I’ve learned to cope. I’ve spent a few summers there working with Paul-André Melliès on category theory and logic.

Yau Ma Tei Market - Hong Kong


I was also intimidated when I first spent a summer in Hong Kong—and even more so when I spent a summer in Shanghai. Lisa speaks Chinese too: she’s more cultured than me, and she drags me to interesting places. My first day walking around Shanghai left me completely exhausted: everything was new! Walking down the street you see people selling frogs in a bucket, strange fungi and herbs, then a little phone shop where telephone numbers with lots of 8’s cost more, and so on: it’s a kind of cognitive assault.

But again, I came to enjoy it. And coming back to California, everything seemed a bit boring. Why is there so much land that’s not being used? Where are all the people? Why is the food so bland?

I’ve spent the most time outside the US in Singapore. Again, that’s because my wife and I both got job offers there, not because it’s the best place in the world. Compared to China it’s rather sterile and manicured. But it’s still a fascinating place. They’ve pulled themselves up from a British colonial port town to a multicultural country that’s in some ways more technologically advanced than the US. The food is great: it’s a mix of Chinese, Indian, Malay and pretty much everything else. There’s essentially no crime: you can walk around in the darkest alley in the worst part of town at 3 am and still feel safe. It’s interesting to live in a country where people from very different cultures are learning to live together and prosper. The US considers itself a melting pot, but in Singapore they have four national languages: English, Mandarin, Malay and Tamil.

Most of all, it’s great to live in places where the culture and politics is different than where I grew up. But I’m trying to travel less, because it’s bad for the planet.

You’ve gained some fame for your “crackpot index”. What were your motivations for developing it? Any new criteria you’d add?

After the internet first caught on, a bunch of us started using it to talk about physics on the usenet newsgroup sci.physics.

And then, all of a sudden, crackpots around the world started joining in!

Before this, I don’t think anybody realized how many people had their own personal theories of physics. You might have a crazy uncle who spent his time trying to refute special relativity, but you didn’t realize there were actually thousands of these crazy uncles.

As I’m sure you know here at Physics Forums, crackpots naturally tend to drive out more serious conversations. If you have some people talking about the laws of black hole thermodynamics, and some guy jumps in and says that the universe is a black hole, everyone will drop what they’re doing and argue with that guy. It’s irresistible. It reminds me of how when someone brings a baby to a party, everyone will start cooing to the baby. But it’s worse.

When physics crackpots started taking over the usenet newsgroup sci.physics, I discovered that they had a lot of features in common. The Crackpot Index summarizes these common features. Whenever I notice a new pattern, I add it.

For example: if someone starts comparing themselves to Galileo and says the physics establishment is going after them like the Inquisition, I guarantee you that they’re a crackpot. Their theories could be right—but unfortunately, they’ve got delusions of grandeur and a persecution complex.

It’s not being wrong that makes someone a crackpot. Being a full-fledged crackpot is the endpoint of a tragic syndrome. Someone starts out being a bit too confident that they can revolutionize physics without learning it first. In fact, many young physicists go through this stage! But the good ones react to criticism by upping their game. The ones who become crackpots just brush it off. They come up with an idea that they think is great, and when nobody likes it, they don’t say “okay, I need to learn more.” Instead, they make up excuses: nobody understands me, maybe there’s a conspiracy at work, etc. The excuses get more complicated with each rebuff, and it gets harder and harder for them to back down and say “whoops, I was wrong”.

When I wrote the Crackpot Index, I thought crackpots were funny. Alexander Abian claimed all the world’s ills would be cured if we blew up the Moon. Archimedes Plutonium thinks the Universe is a giant plutonium atom. These ideas are funny. But now I realize how sad it is that someone can start with a passion for physics and end up in this kind of trap. They almost never escape.

Who are some of your math and physics heroes of the past and of today?

Wow, that’s a big question! I think every scientist needs to have heroes. I’ve had a lot.

Marie Curie


When I was a kid, I was in love with Marie Curie. I wanted to marry a woman like her: someone who really cared about science. She overcame huge obstacles to get a degree in physics, discovered not one but two new elements, often doing experiments in her own kitchen—and won not one but two Nobel prizes. She was a tragic figure in many ways. Her beloved husband Pierre, a great physicist in his own right, slipped and was run over by a horse-drawn cart, dying instantly when the wheels ran over his skull. She herself probably died from her experiments with radiation. But this made me love her all the more.

Later my big hero was Einstein. How could any physicist not have Einstein as a hero? First he came up with the idea that light comes in discrete quanta: photons. Then, two months later, he used Brownian motion to figure out the size of atoms. One month after that: special relativity, unifying space and time! Three months later, the equivalence between mass and energy. And all this was just a warmup for his truly magnificent theory of general relativity, explaining gravity as the curvature of space and time. He truly transformed our vision of the Universe. And then, in his later years, the noble and unsuccessful search for a unified field theory. As a friend of mine put it, what matters here is not that he failed: what matters is that he set physics a new goal, more ambitious than any goal it had before.

Later it was Feynman. As I mentioned, my uncle gave me Feynman’s Lectures on Physics. This is how I first learned Maxwell’s equations, special relativity, quantum mechanics. His way of explaining things with a minimum of jargon, getting straight to the heart of every issue, is something I really admire. Later I enjoyed his books like Surely You’re Joking, Mr. Feynman! Still later I learned enough to be impressed by his work on QED.

But when you read his autobiographical books, you can see that he was a bit too obsessed with pretending to be a fun-loving ordinary guy. A fun-loving ordinary guy who just happens to be smarter than everyone else. In short, a self-absorbed showoff. He could also be pretty mean to women—and in that respect, Einstein was even worse. So our heroes should not be admired uncritically.

Alexander Grothendieck

A good example is Alexander Grothendieck. I guess he’s my main math hero these days. To solve concrete problems like the Weil conjectures, he avoided brute force techniques and instead developed revolutionary new concepts that gently dissolved those problems. And these new concepts turned out to be much more important than the problems that motivated him. I’m talking about abelian categories, schemes, topoi, stacks, things like that. Everyone who really wants to understand math at a deep level has got to learn these concepts. They’re beautiful and wonderfully simple—but not easy to master. You have to really change your world view to understand them, just like general relativity or quantum mechanics. You have to rewire your neurons.

At his peak, Grothendieck seemed almost superhuman. It seems he worked almost all day and all night, bouncing his ideas off the other amazing French algebraic geometers. Apparently 20,000 pages of his writings remain unpublished! But he became increasingly alienated from the mathematical establishment and eventually disappeared completely, hiding in a village near the Pyrenees.

Which groundbreaking advances in science and math are you most looking forward to?

I’d really like to see progress in figuring out the fundamental laws of physics. Ideally, I’d like to know the Theory of Everything. Of course, we don’t even know that there is one! There could be an endless succession of deeper and deeper realizations to be had about the laws of physics, with no final answer.

If we ever do discover the Theory of Everything, that won’t be the end of the story. It could be just the beginning. For example, next we could ask why this particular theory governs our Universe. Is it necessary, or contingent? People like to chat about this puzzle already, but I think it’s premature. I think we should find the Theory of Everything first.

Unfortunately, right now fundamental physics is in a phase of being “stuck”. I don’t expect to see the Theory of Everything in my lifetime. I’d be happy to see any progress at all! There are dozens of very basic things we don’t understand.

When it comes to math, I expect that people will have their hands full this century redoing the foundations using ∞-categories, and answering some of the questions that come up when you do this. The crowd working on “homotopy type theory” is making good progress–but so far they’re mainly thinking about ∞-groupoids, which are a very special sort of ∞-category. When we do all of math using ∞-categories, it will be a whole new ballgame.

And then there’s the question of whether humanity will figure out a way to keep from ruining the planet we live on. And the question of whether we’ll succeed in replacing ourselves with something more intelligent—or even wiser.

The Milky Way and Andromeda Nebula after their first collision, 4 billion years from now

Here’s something cool: red dwarf stars will keep burning for 10 trillion years. If we, or any civilization, can settle down next to one of those, there will be plenty of time to figure things out. That’s what I hope for.

But some of my friends think that life always uses up resources as fast as possible. So one of my big questions is whether intelligent life will develop the patience to sit around and think interesting thoughts, or whether it will burn up red dwarf stars and every other source of energy as fast as it can, as we’re doing now with fossil fuels.

What does the future hold for John Baez? What are your goals?

What the future holds for me, primarily, is death.

That’s true of all of us—or at least most of us. While some hope that technology will bring immortality, or at least a much longer life, I bet most of us are headed for death fairly soon. So I try to make the most of the time I have.

I’m always re-evaluating what I should do. I used to spend time thinking about quantum gravity and n-categories. But quantum gravity feels stuck, and n-category theory is shooting forward so fast that my help is no longer needed.

Climate change is hugely important, and nobody really knows what to do about it. Lots of people are trying lots of different things. Unfortunately I’m no better than the rest when it comes to the most obvious strategies—like politics, or climate science, or safer nuclear reactors, or better batteries and photocells.

The trick is finding things you can do better than other people. Right now for me that means thinking about networks and biology in a very abstract way. I’m inspired by this remark by Patten and Witkamp:

To understand ecosystems, ultimately will be to understand networks.

So that’s my goal for the next five years or so. It’s probably not the best thing anyone could do to prepare for the Middle Anthropocene. But it may be the best thing I can do: use the math I know to help people understand the biosphere.

It may seem like I keep jumping around: from quantum gravity to n-categories to biology. But I keep wanting to think about networks, and how they change in time.

At some point I hope to retire and become a bit more of a self-indulgent wastrel. I could write a fun book about group theory in geometry and physics, and a fun book about the octonions. I might even get around to spending more time on music!

John Baez in Namo Gorge, Gansu


Among the Bone Eaters

30 January, 2016

Anthropologists sometimes talk about the virtues and dangers of ‘going native’: doing the same things as the people they’re studying, adopting their habits and lifestyles—and perhaps even their beliefs and values. The same applies to field biologists: you sometimes see it happen to people who study gorillas or chimpanzees.

It’s more impressive to see someone go native with a pack of hyenas:

• Marcus Baynes-Rock, Among the Bone Eaters: Encounters with Hyenas in Harar, Penn State University Press, 2015.

I’ve always been scared of hyenas, perhaps because they look ill-favored and ‘mean’ to me, or perhaps because their jaws have incredible bone-crushing force:

This is a spotted hyena, the species of hyena that Marcus Baynes-Rock befriended in the Ethiopian city of Harar. Their bite force has been estimated at 220 pounds!

(As a scientist I should say 985 newtons, but I have trouble imagining what it’s like to have teeth pressing into my flesh with a force of 985 newtons. If you don’t have a feeling for ‘pounds’, just imagine a 100-kilogram man standing on a hyena tooth that is pressing into your leg.)

So, you don’t want to annoy a hyena, or look too edible. However, the society of hyenas is founded on friendship! It’s the bonds of friendship that will make one hyena rush in to save another from an attacking lion. So, if you can figure out how to make hyenas befriend you, you’ve got some heavy-duty pals who will watch your back.

In Harar, people have been associating with spotted hyenas for a long time. At first they served as ‘trash collectors’, but later the association deepened. According to Wikipedia:

Written records indicate that spotted hyenas have been present in the walled Ethiopian city of Harar for at least 500 years, where they sanitise the city by feeding on its organic refuse.

The practice of regularly feeding them did not begin until the 1960s. The first to put it into practice was a farmer who began to feed hyenas in order to stop them attacking his livestock, with his descendants having continued the practice. Some of the hyena men give each hyena a name they respond to, and call to them using a “hyena dialect”, a mixture of English and Oromo. The hyena men feed the hyenas by mouth, using pieces of raw meat provided by spectators. Tourists usually organize to watch the spectacle through a guide for a negotiable rate. As of 2002, the practice is considered to be on the decline, with only two practicing hyena men left in Harar.


Hyena man — picture by Gusjer

According to local folklore, the feeding of hyenas in Harar originated during a 19th-century famine, during which the starving hyenas began to attack livestock and humans. In one version of the story, a pure-hearted man dreamed of how the Hararis could placate the hyenas by feeding them porridge, and successfully put it into practice, while another credits the revelation to the town’s Muslim saints convening on a mountaintop. The anniversary of this pact is celebrated every year on the Day of Ashura, when the hyenas are provided with porridge prepared with pure butter. It is believed that during this occasion, the hyenas’ clan leaders taste the porridge before the others. Should the porridge not be to the lead hyenas’ liking, the other hyenas will not eat it, and those in charge of feeding them make the requested improvements. The manner in which the hyenas eat the porridge on this occasion is believed to have oracular significance; if the hyena eats more than half the porridge, then it is seen as portending a prosperous new year. Should the hyena refuse to eat the porridge or eat all of it, then the people will gather in shrines to pray, in order to avert famine or pestilence.

Marcus Baynes-Rock went to Harar to learn about this. He wound up becoming friends with a pack of hyenas:

He would play with them and run with them through the city streets at night. In the end he ‘went native’: he would even be startled, like the hyenas, when they came across a human being!

To get a feeling for this, I think you have to either read his book or listen to this:

In a city that welcomes hyenas, an anthropologist makes friends, Here and Now, National Public Radio, 18 January 2016.

Nearer the beginning of this quest, he wrote this:

The Old Town of Harar in eastern Ethiopia is enclosed by a wall built 500 years ago to protect the town’s inhabitants from hostile neighbours after a religious conflict that destabilised the region. Historically, the gates would be opened every morning to admit outsiders into the town to buy and sell goods and perhaps worship at one of the dozens of mosques in the Muslim city. Only Muslims were allowed to enter. And each night, non-Hararis would be evicted from the town and the gates locked. So it is somewhat surprising that this endogamous, culturally exclusive society incorporated holes into its defensive wall, through which spotted hyenas from the surrounding hills could access the town at night.

Spotted hyenas could be considered the most hated mammal in Africa. Decried as ugly and awkward, associated with witches and sorcerers and seen as contaminating, spotted hyenas are a public relations challenge of the highest order. Yet in Harar, hyenas are not only allowed into the town to clean the streets of food scraps, they are deeply embedded in the traditions and beliefs of the townspeople. Sufism predominates in Harar and at last count there were 121 shrines in and near the town dedicated to the town’s saints. These saints are said to meet on Mt Hakim every Thursday to discuss any pressing issues facing the town and it is the hyenas who pass the information from the saints on to the townspeople via intermediaries who can understand hyena language. Etymologically, the Harari word for hyena, ‘waraba’ comes from ‘werabba’ which translates literally as ‘news man’. Hyenas are also believed to clear the streets of jinn, the unseen entities that are a constant presence for people in the town, and hyenas’ spirits are said to be like angels who fight with bad spirits to defend the souls of spiritually vulnerable people.

[…]

My current research in Harar is concerned with both sides of the relationship. First is the collection of stories, traditions, songs and proverbs of which there are many and trying to understand how the most hated mammal in Africa can be accommodated in an urban environment; to understand how a society can tolerate the presence of a potentially dangerous species. Second is to understand the hyenas themselves and their participation in the relationship. In other parts of Ethiopia, and even within walking distance of Harar, hyenas are dangerous animals and attacks on people are common. Yet, in the old town of Harar, attacks are unheard of and it is not unusual to see hyenas, in search of food scraps, wandering past perfectly edible people sleeping in the streets. This localised immunity from attack is reassuring for a researcher spending nights alone with the hyenas in Harar’s narrow streets and alleys.

But this sounds like it was written before he went native!

Social networks

By the way: people have even applied network theory to friendships among spotted hyenas:

• Amiyaal Ilany, Andrew S. Booms and Kay E. Holekamp, Topological effects of network structure on long-term social network dynamics in a wild mammal, Ecology Letters, 18 (2015), 687–695.

The paper is not open-access, but there’s an easy-to-read summary here:

Scientists puzzled by ‘social network’ of spotted hyenas, Sci-News.com, 18 May 2015.

The scientists collected more than 55,000 observations of social interactions of spotted hyenas (also known as laughing hyenas) over a 20-year period in Kenya, making this one of the largest studies to date of social network dynamics in any non-human species.

They found that cohesive clustering of the kind where an individual bonds with friends of friends, something scientists call ‘triadic closure,’ was the most consistent factor influencing the long-term dynamics of the social structure of these mammals.

Individual traits, such as sex and social rank, and environmental effects, such as the amount of rainfall and the abundance of prey, also matter, but the ability of individuals to form and maintain social bonds in triads was key.

“Cohesive clusters can facilitate efficient cooperation and hence maximize fitness, and so our study shows that hyenas exploit this advantage. Interestingly, clustering is something done in human societies, from hunter-gatherers to Facebook users,” said Dr Ilany, who is the lead author on the study published in the journal Ecology Letters.

Hyenas, which can live up to 22 years, typically live in large, stable groups known as clans, which can comprise more than 100 individuals.

According to the scientists, hyenas can discriminate maternal and paternal kin from unrelated hyenas and are selective in their social choices, tending to not form bonds with every hyena in the clan, rather preferring the friends of their friends.

They found that hyenas follow a complex set of rules when making social decisions. Males follow rigid rules in forming bonds, whereas females tend to change their preferences over time. For example, a female might care about social rank at one time, but then later choose based on rainfall amounts.

“In spotted hyenas, females are the dominant sex and so they can be very flexible in their social preferences. Females also remain in the same clan all their lives, so they may know the social environment better,” said study co-author Dr Kay Holekamp of Michigan State University.

“In contrast, males disperse to new clans after reaching puberty, and after they disperse they have virtually no social control because they are the lowest ranking individuals in the new clan, so we can speculate that perhaps this is why they are obliged to follow stricter social rules.”

If you like math, you might like this way of measuring ‘triadic closure’:

Triadic closure, Wikipedia.

For a program to measure triadic closure, click on the picture:
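If you'd like to play with this concept yourself, here's a minimal sketch in Python of one standard way to quantify triadic closure: the global clustering coefficient, the fraction of connected triples of nodes that close up into triangles. The little friendship graph below is a made-up toy example of mine, not hyena data:

```python
from itertools import combinations

def transitivity(adj):
    """Global clustering coefficient of an undirected graph.

    `adj` maps each node to the set of its neighbors. The result is
    (number of closed triples) / (number of connected triples), where a
    connected triple is a node together with a pair of its neighbors.
    """
    # Every pair of neighbors of a node forms one connected triple.
    triples = sum(len(nbrs) * (len(nbrs) - 1) // 2 for nbrs in adj.values())
    # A triple is closed when the two neighbors are themselves linked;
    # each triangle gets counted once for each of its three vertices.
    closed = sum(1 for nbrs in adj.values()
                 for a, b in combinations(nbrs, 2) if b in adj[a])
    return closed / triples if triples else 0.0

# Toy friendship graph: A, B, C form a triangle; D is a friend of C only.
friends = {
    'A': {'B', 'C'},
    'B': {'A', 'C'},
    'C': {'A', 'B', 'D'},
    'D': {'C'},
}
print(transitivity(friends))   # 3 closed triples out of 5, i.e. 0.6
```

A society with lots of triadic closure, like the hyena clans above, would score closer to 1.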


Glycolysis (Part 2)

18 January, 2016

Glycolysis is a way that organisms can get free energy from glucose without needing oxygen. Animals like you and me can do glycolysis, but we get more free energy from oxidizing glucose. Other organisms are anaerobic: they don’t need oxygen. And some, like yeast, survive mainly by doing glycolysis!

If you put yeast cells in water containing a constant low concentration of glucose, they convert it into alcohol at a constant rate. But if you increase the concentration of glucose something funny happens. The alcohol output starts to oscillate!

It’s not that the yeast is doing something clever and complicated. If you break down the yeast cells, killing them, this effect still happens. People think these oscillations are inherent to the chemical reactions in glycolysis.

I learned this after writing Part 1, thanks to Alan Rendall. I first met Alan when we were both working on quantum gravity. But last summer I met him in Copenhagen, where we were both attending the workshop Trends in Reaction Network Theory. It turned out that now he’s deep into the mathematics of biochemistry, especially chemical oscillations! He has a blog, and he’s written some great articles on glycolysis:

• Alan Rendall, Albert Goldbeter and glycolytic oscillations, Hydrobates, 21 January 2012.

• Alan Rendall, The Higgins–Selkov oscillator, Hydrobates, 14 May 2014.

In case you’re wondering, Hydrobates is the name of a kind of sea bird, the storm petrel. Alan is fond of sea birds. Since the ultimate goal of my work is to improve our relationship with nature, this post is dedicated to the storm petrel:

The basics

Last time I gave a summary description of glycolysis:

glucose + 2 NAD+ + 2 ADP + 2 phosphate →
2 pyruvate + 2 NADH + 2 H+ + 2 ATP + 2 H2O

The idea is that a single molecule of glucose:

gets split into two molecules of pyruvate:

The free energy released from this process is used to take two molecules of adenosine diphosphate or ADP:

and attach a phosphate group to each one. Free phosphate is typically found as phosphoric acid:

thus producing two molecules of adenosine triphosphate or ATP:

along with 2 molecules of water.

But in the process, something else happens too! 2 molecules of nicotinamide adenine dinucleotide NAD get reduced. That is, they change from the oxidized form called NAD+:

to the reduced form called NADH, along with two protons: that is, 2 H+.

Puzzle 1. Why does NAD+ have a little plus sign on it, despite the two O’s in the picture above?

Left alone in water, ATP spontaneously converts back to ADP and phosphate:

ATP + H2O → ADP + phosphate

This process gives off 30.5 kilojoules of energy per mole. The cell harnesses this to do useful work by coupling this reaction to others. Thus, ATP serves as ‘energy currency’, and making it is the main point of glycolysis.

The cell can also use NADH to do interesting things. It generally has more free energy than NAD+, so it can power things while turning back into NAD+. Just how much more free energy it has depends a lot on conditions in the cell: for example, on the pH.

Puzzle 2. There is often roughly 700 times as much NAD+ as NADH in the cytoplasm of mammals. In these conditions, what is the free energy difference between NAD+ and NADH? I think this is something you’re supposed to be able to figure out.
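As a hint for Puzzle 2, here's a sketch in Python of the concentration-dependent part of the calculation: a 700:1 ratio shifts the free energy by RT ln 700. The temperature here is my assumption (roughly mammalian body temperature), and this term alone is not the full answer—you still need the standard free energy change of the NAD+/NADH half-reaction, which I'm not supplying:

```python
import math

R = 8.314        # gas constant, J/(mol K)
T = 310.0        # roughly mammalian body temperature, K (assumed)
ratio = 700.0    # [NAD+] / [NADH] in the cytoplasm, from the puzzle

# Concentration-dependent shift in free energy: RT ln(ratio)
shift_kJ = R * T * math.log(ratio) / 1000.0
print(f"{shift_kJ:.1f} kJ/mol")   # about 16.9 kJ/mol
```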

Nothing in what I’ve said so far gives any clue about why glycolysis might exhibit oscillations. So, we have to dig deeper.

Some details

Glycolysis actually consists of 10 steps, each mediated by its own enzyme. Click on this picture to see all these steps:


If your eyes tend to glaze over when looking at this, don’t feel bad—so do mine. There’s a lot of information here. But if you look carefully, you’ll see that the 1st and 3rd stages of glycolysis actually convert 2 ATP’s to ADP, while the 7th and 10th convert 4 ADP’s to ATP. So, the early steps require free energy, while the later ones double this investment. As the saying goes, “it takes money to make money”.
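In code, the ATP bookkeeping from the paragraph above is just this (a trivial sketch, with the counts taken from the diagram):

```python
# ATP bookkeeping for one glucose molecule passing through glycolysis.
invested = 2   # the 1st and 3rd steps each spend one ATP
produced = 4   # the 7th and 10th steps together make four ATP
               # (two each, since the glucose has split into two halves)
net = produced - invested
print(net)     # 2 ATP profit per glucose: the investment is doubled
```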

This nuance makes it clear that if a cell starts with no ATP, it won’t be able to make ATP by glycolysis. And if it has just a small amount of ATP, it won’t be very good at making it this way.

In short, this affects the dynamics in an important way. But I don’t see how it could explain oscillations in how much ATP is manufactured from a constant supply of glucose!

We can look up the free energy changes for each of the 10 reactions in glycolysis. Here they are, named by the enzymes involved:


I got this from here:

• Leslie Frost, Glycolysis.

I think these are her notes on Chapter 14 of Voet, Voet, and Pratt’s Fundamentals of Biochemistry. But again, I don’t think these explain the oscillations. So we have to look elsewhere.

Oscillations

By some careful detective work—by replacing the input of glucose by an input of each of the intermediate products—biochemists figured out which step causes the oscillations. It’s the 3rd step, where fructose-6-phosphate is converted into fructose-1,6-bisphosphate, powered by the conversion of ATP into ADP. The enzyme responsible for this step is called phosphofructokinase or PFK. And it turns out that PFK works better when there is ADP around!

In short, the reaction network shown above is incomplete: ADP catalyzes its own formation in the 3rd step.

How does this lead to oscillations? The Higgins–Selkov model is a scenario for how it might happen. I’ll explain this model, offering no evidence that it’s correct. And then I’ll take you to a website where you can see this model in action!

Suppose that fructose-6-phosphate is being produced at a constant rate. And suppose there’s some other reaction, which we haven’t mentioned yet, that uses up ADP at a constant rate. Suppose also that it takes two ADP’s to catalyze the 3rd step. So, we have these reactions:

→ fructose-6-phosphate
fructose-6-phosphate + 2 ADP → 3 ADP
ADP →

Here the blanks mean ‘nothing’, or more precisely ‘we don’t care’. The fructose-6-phosphate is coming in from somewhere, but we don’t care where it’s coming from. The ADP is going away, but we don’t care where. We’re also ignoring the ATP that’s required for the second reaction, and the fructose-1,6-bisphosphate that’s produced by this reaction. All these features are irrelevant to the Higgins–Selkov model.

Now suppose there’s initially a lot of ADP around. Then the fructose-6-phosphate will quickly be used up, creating even more ADP. So we get even more ADP!

But as this goes on, the amount of fructose-6-phosphate sitting around will drop. So, eventually the production of ADP will drop. Thus, since we’re positing a reaction that uses up ADP at a constant rate, the amount of ADP will start to drop.

Eventually there will be very little ADP. Then it will be very hard for fructose-6-phosphate to get used up. So, the amount of fructose-6-phosphate will start to build up!

Of course, whatever ADP is still left will help use up this fructose-6-phosphate and turn it into ADP. This will increase the amount of ADP. So eventually we will have a lot of ADP again.

We’re back where we started. And so, we’ve got a cycle!

Of course, this story doesn’t prove anything. We should really take our chemical reaction network and translate it into some differential equations for the amount of fructose-6-phosphate and the amount of ADP. In the Higgins–Selkov model people sometimes write just ‘S’ for fructose-6-phosphate and ‘P’ for ADP. (In case you’re wondering, S stands for ‘substrate’ and P stands for ‘product’.) So, our chemical reaction network becomes

→ S
S + 2P → 3P
P →

and using the law of mass action we get these equations:

\displaystyle{ \frac{d S}{dt} = v_0 - k_1 S P^2 }

\displaystyle{ \frac{d P}{dt} = k_1 S P^2 - k_2 P }

where S and P stand for how much S and P we have, respectively, and v_0, k_1, k_2 are some constants.

Now we can solve these differential equations and see if we get oscillations. The answer depends on the constants v_0, k_1, k_2 and also perhaps the initial conditions.
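If you'd rather not rely on a website, here's a minimal sketch in Python that integrates these equations with a hand-rolled fourth-order Runge–Kutta scheme. The constants and initial conditions are illustrative choices of mine, not the ones on Martin's site:

```python
def higgins_selkov(S, P, v0=0.5, k1=1.0, k2=1.0):
    """Right-hand side of dS/dt = v0 - k1 S P^2, dP/dt = k1 S P^2 - k2 P."""
    rate = k1 * S * P * P
    return v0 - rate, rate - k2 * P

def rk4_step(S, P, dt):
    """One fourth-order Runge-Kutta step for the pair (S, P)."""
    a1, b1 = higgins_selkov(S, P)
    a2, b2 = higgins_selkov(S + 0.5 * dt * a1, P + 0.5 * dt * b1)
    a3, b3 = higgins_selkov(S + 0.5 * dt * a2, P + 0.5 * dt * b2)
    a4, b4 = higgins_selkov(S + dt * a3, P + dt * b3)
    return (S + dt * (a1 + 2 * a2 + 2 * a3 + a4) / 6,
            P + dt * (b1 + 2 * b2 + 2 * b3 + b4) / 6)

# Integrate from an arbitrary starting point and record the trajectory,
# so you can plot P against t and look for oscillations.
S, P, dt = 1.0, 1.0, 0.005
trajectory = [(S, P)]
for _ in range(4000):          # t running from 0 to 20
    S, P = rk4_step(S, P, dt)
    trajectory.append((S, P))
```

One reassuring check: adding the two equations gives d(S+P)/dt = v_0 - k_2 P, so the total S + P can grow at most linearly in time and the trajectory can't blow up in finite time. Whether you actually see a limit cycle depends on the constants.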

To see what actually happens, try this website:

• Mike Martin, Glycolytic oscillations: the Higgins–Selkov model.

If you run it with the constants and initial conditions given to you, you’ll get oscillations. You’ll also get this vector field on the S,P plane, showing how the system evolves in time:


This is called a phase portrait, and it’s a standard tool for studying first-order differential equations where two variables depend on time.

This particular phase portrait shows an unstable fixed point and a limit cycle. That’s jargon for saying that in these conditions, the system will tend to oscillate. But if you adjust the constants, the limit cycle will go away! The appearance or disappearance of a limit cycle like this is called a Hopf bifurcation.
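You can also check the stability of the fixed point by hand: setting both time derivatives to zero gives P^* = v_0/k_2 and S^* = k_2^2/(k_1 v_0), and then you linearize there. Here's a sketch in Python, with illustrative constants of my own choosing; for these values the eigenvalues of the Jacobian are a complex pair with positive real part, the same qualitative situation as the unstable spiral in the phase portrait above:

```python
import math

v0, k1, k2 = 0.5, 1.0, 1.0   # illustrative rate constants (my choice)

# Fixed point: solve v0 - k1 S P^2 = 0 and k1 S P^2 - k2 P = 0.
P_star = v0 / k2
S_star = k2**2 / (k1 * v0)

# Jacobian of (dS/dt, dP/dt) at the fixed point. Using k1 S* P* = k2:
#   [ -k1 P*^2   -2 k2 ]
#   [  k1 P*^2     k2  ]
a = k1 * P_star**2
J = [[-a, -2 * k2],
     [ a,      k2]]
trace = J[0][0] + J[1][1]
det = J[0][0] * J[1][1] - J[0][1] * J[1][0]

# Eigenvalues are (trace ± sqrt(trace^2 - 4 det)) / 2.
disc = trace**2 - 4 * det
if disc >= 0:
    real_parts = [(trace + math.sqrt(disc)) / 2,
                  (trace - math.sqrt(disc)) / 2]
else:
    real_parts = [trace / 2, trace / 2]   # complex pair: spiral behavior
print(real_parts)   # positive real parts mean an unstable fixed point
```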

For details, see:

• Alan Rendall, Dynamical systems, Chapter 11: Oscillations.

He shows that the Higgins–Selkov model has a unique stationary solution (i.e. fixed point), which he describes. By linearizing it, he finds that this fixed point is stable when v_0 (the inflow of S) exceeds a certain value, and unstable when it is less than that value.

In the unstable case, if the solutions are all bounded as t \to \infty there must be a periodic solution. In the course notes he shows this for a simpler model of glycolysis, the Schnakenberg model. In some separate notes he shows it for the Higgins–Selkov model, at least for certain values of the parameters:

• Alan Rendall, The Higgins–Selkov oscillator.