Network Theory (Part 7)

27 April, 2011

guest post by Jacob Biamonte

This post is part of a series on what John and I like to call Petri net field theory. Stochastic Petri nets can be used to model everything from vending machines to chemical reactions. Chemists have proven some powerful theorems about when these systems have equilibrium states. We’re trying to fold these old ideas into our fancy framework, in hopes that quantum field theory techniques could also be useful in this deep subject. We’ll describe the general theory later; today we’ll do an example from population biology.

Those of you following this series should know that I’m the calculation bunny for this project, with John playing the role of the wolf. If I don’t work quickly, drawing diagrams and trying to keep up with John’s space-bending quasar of information, I’ll be eaten alive! It’s no joke, so please try to respond and pretend to enjoy anything you read here. This will keep me alive for longer. If I didn’t take notes during our meetings, lots of this stuff would never have made it here, so I hope you enjoy it.

Amoeba reproduction and competition

Here’s a stochastic Petri net:

It shows a world with one state, amoeba, and two transitions:

reproduction, where one amoeba turns into two. Let’s call the rate constant for this transition \alpha.

competition, where two amoebas battle for resources and only one survives. Let’s call the rate constant for this transition \beta.

We are going to analyse this example in several ways. First we’ll study the deterministic dynamics it describes: we’ll look at its rate equation, which turns out to be the logistic equation, familiar in population biology. Then we’ll study the stochastic dynamics, meaning its master equation. That’s where the ideas from quantum field theory come in.

The rate equation

If P(t) is the population of amoebas at time t, we can follow the rules explained in Part 3 and crank out this rate equation:

\displaystyle{ \frac{d P}{d t} = \alpha P - \beta P^2}

We can rewrite this as

\displaystyle{\frac{d P }{d t}= k P(1-\frac{P}{Q}) }

where

\displaystyle{ Q = \frac{\alpha}{\beta} , \qquad k = \alpha}

What’s the meaning of Q and k?

Q is the carrying capacity, that is, the maximum sustainable population the environment can support.

k is the growth rate describing the approximately exponential growth of population when P(t) is small.

It’s a rare treat to find such an important differential equation that can be solved by analytical methods. Let’s enjoy solving it.

We start by separating variables and integrating both sides:

\displaystyle{\int \frac{d P}{P (1-P/Q)} = \int k d t}

We need to use partial fractions on the left side above, resulting in

\displaystyle{\int \frac{d P}{P}} + \displaystyle{\int \frac{d P}{Q-P} } = \displaystyle{\int k d t}

Doing the integrals, we get

\displaystyle{ \ln P - \ln (Q-P) = k t + C }

where C is a constant of integration. Exponentiating and setting

A= \pm e^{-C}

we can rearrange things as

\displaystyle{\frac{Q-P}{P}=A e^{-k t} }

so the population as a function of time becomes

\displaystyle{ P(t) = \frac{Q}{1+A e^{-k t}}}

Setting t=0, we can determine A in terms of the initial population. Writing P_0 := P(0), we get

\displaystyle{ A = \frac{Q-P_0}{P_0}}

The model now becomes very intuitive. Let’s set Q = k=1 and make a plot for various values of A:

We arrive at three distinct cases:

equilibrium (A=0). The horizontal blue line corresponds to the case where the initial population P_0 exactly equals the carrying capacity. In this case the population is constant.

dieoff (A < 0). The three decaying curves above the horizontal blue line correspond to cases where initial population is higher than the carrying capacity. The population dies off over time and approaches the carrying capacity.

growth (A > 0). The four increasing curves below the horizontal blue line represent cases where the initial population is lower than the carrying capacity. Now the population grows over time and approaches the carrying capacity.
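If you want to reproduce this plot yourself, here is a minimal sketch in Python (assuming you have numpy and matplotlib handy) that draws P(t) = Q/(1+A e^{-k t}) with Q = k = 1 for a few values of A in each of the three cases:

```python
import numpy as np
import matplotlib.pyplot as plt

Q, k = 1.0, 1.0                     # carrying capacity and growth rate
t = np.linspace(0, 10, 200)

# A = (Q - P_0)/P_0: negative means we start above the carrying capacity,
# zero means we start exactly at it, positive means we start below it.
for A in [-0.5, -0.3, -0.1, 0.0, 1.0, 3.0, 9.0, 30.0]:
    P = Q / (1 + A * np.exp(-k * t))
    plt.plot(t, P, label=f"A = {A}")

plt.xlabel("t")
plt.ylabel("P(t)")
plt.legend()
plt.show()
```

The particular values of A here are just examples; any A > -1 gives a positive initial population P_0 = Q/(1+A).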

The master equation

Next, let us follow the rules explained in Part 6 to write down the master equation for our example. Remember, now we write:

\displaystyle{\Psi(t) = \sum_{n = 0}^\infty \psi_n(t) z^n }

where \psi_n(t) is the probability of having n amoebas at time t, and z is a formal variable. The master equation says

\displaystyle{\frac{d}{d t} \Psi(t) = H \Psi(t)}

where H is an operator on formal power series called the Hamiltonian. To get the Hamiltonian we take each transition in our Petri net and build an operator from creation and annihilation operators, as follows. Reproduction works like this:

while competition works like this:

Here a is the annihilation operator, a^\dagger is the creation operator and N = a^\dagger a is the number operator. Last time John explained precisely how the N’s arise. So the theory is already in place, and we arrive at this Hamiltonian:

H = \alpha (a^\dagger a^\dagger a - N) \;\; + \;\; \beta(a^\dagger a a - N(N-1))

Remember, \alpha is the rate constant for reproduction, while \beta is the rate constant for competition.

The master equation can be solved: it’s equivalent to

\frac{d}{d t}(e^{-t H}\Psi(t))=0

so that e^{-t H}\Psi(t) is constant, and so

\Psi(t) = e^{t H}\Psi(0)

and that’s it! We can calculate the time evolution starting from any initial probability distribution of populations. Maybe everyone is already used to this, but I find it rather remarkable.
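If you would like to see this concretely on a computer, here is a minimal numerical sketch (assuming numpy and scipy; the rate constants and the truncation size are just example values). We truncate the state space at some maximum population, write a, a^\dagger and N as matrices in the basis 1, z, z^2, \dots, and hit \Psi(0) with the matrix exponential:

```python
import numpy as np
from scipy.linalg import expm

D = 30                      # truncate at a maximum population of D - 1 amoebas
alpha, beta = 2.0, 1.0      # reproduction and competition rate constants

# Matrices acting on coefficient vectors (psi_0, ..., psi_{D-1}):
# a z^n = n z^(n-1),  a^dagger z^n = z^(n+1),  N z^n = n z^n.
a = np.diag(np.arange(1.0, D), k=1)
adag = np.diag(np.ones(D - 1), k=-1)
N = np.diag(np.arange(D, dtype=float))

H = alpha * (adag @ adag @ a - N) + beta * (adag @ a @ a - N @ (N - np.eye(D)))

psi0 = np.zeros(D)
psi0[1] = 1.0               # start with exactly one amoeba: Psi(0) = z

psi_t = expm(5.0 * H) @ psi0                           # Psi(5) = exp(5H) Psi(0)

print("total probability:", psi_t.sum())               # ~1, up to truncation error
print("expected population:", np.arange(D) @ psi_t)    # expected number of amoebas at t = 5
```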

Here’s how it works. We pick a population, say n amoebas at t=0. This would mean \Psi(0) = z^n. We then evolve this state using e^{t H}. We expand this operator as

\begin{array}{ccl} e^{t H} &=&\displaystyle{ \sum_{n=0}^\infty \frac{t^n H^n}{n!} }  \\  \\  &=& \displaystyle{ 1 + t H + \frac{1}{2}t^2 H^2 + \cdots }\end{array}

This operator contains the full information for the evolution of the system. It contains the histories of all possible amoeba populations—an amoeba mosaic if you will. From this, we can construct amoeba Feynman diagrams.

To do this, we work out each of the H^n terms in the expansion above. The first-order terms correspond to the Hamiltonian acting once. These are proportional to either \alpha or \beta. The second-order terms correspond to the Hamiltonian acting twice. These are proportional to either \alpha^2, \alpha\beta or \beta^2. And so on.

This is where things start to get interesting! To illustrate how it works, we will consider two possibilities for the second-order terms:

1) We start with a lone amoeba, so \Psi(0) = z. It reproduces and splits into two. In the battle of the century, the resulting amoebas compete and one dies. At the end we have:

\frac{\alpha \beta}{2}  (a^\dagger a a)(a^\dagger a^\dagger a) z

We can draw this as a Feynman diagram:

You might find this tale grim, and you may not like the odds either. It’s true, the odds could be better, but people are worse off than amoebas! The great Japanese swordsman Miyamoto Musashi quoted the survival odds of fair sword duels as 1/3, seeing that 1/3 of the time both participants die. A remedy is to cheat, but these amoebas are competing honestly.

2) We start with two amoebas, so the initial state is \Psi(0) = z^2. One of these amoebas splits into two. One of these then gets into an argument with the original amoeba over the Azimuth blog. The amoeba who solved all John’s puzzles survives. At the end we have

\frac{\alpha \beta}{2} (a^\dagger a a)(a^\dagger a^\dagger a) z^2

with corresponding Feynman diagram:

This should give an idea of how this all works. The exponential of the Hamiltonian gives all possible histories, and each of these can be translated into a Feynman diagram. In a future blog entry, we might explain this theory in detail.

An equilibrium state

We’ve seen the equilibrium solution for the rate equation; now let’s look for equilibrium solutions of the master equation. This paper:

• D. F. Anderson, G. Craciun and T.G. Kurtz, Product-form stationary distributions for deficiency zero chemical reaction networks, arXiv:0803.3042.

proves that for a large class of stochastic Petri nets, there exists an equilibrium solution of the master equation where the number of things in each state is distributed according to a Poisson distribution. Even more remarkably, these probability distributions are independent, so knowing how many things are in one state tells you nothing about how many are in another!

Here’s a nice quote from this paper:

The surprising aspect of the deficiency zero theorem is that the assumptions of the theorem are completely related to the network of the system whereas the conclusions of the theorem are related to the dynamical properties of the system.

The ‘deficiency zero theorem’ is a result of Feinberg, which says that for a large class of stochastic Petri nets, the rate equation has a unique equilibrium solution. Anderson showed how to use this fact to get equilibrium solutions of the master equation!

We will consider this in future posts. For now, we need to talk a bit about ‘coherent states’.

These are all over the place in quantum theory. Legend (or at least Wikipedia) has it that Erwin Schrödinger himself discovered coherent states when he was looking for states of a quantum system that look ‘as classical as possible’. Suppose you have a quantum harmonic oscillator. Then the uncertainty principle says that

\Delta p \Delta q \ge \hbar/2

where \Delta p is the uncertainty in the momentum and \Delta q is the uncertainty in position. Suppose we want to make \Delta p \Delta q as small as possible, and suppose we also want \Delta p = \Delta q. Then we need our particle to be in a ‘coherent state’. That’s the definition. For the quantum harmonic oscillator, there’s a way to write quantum states as formal power series

\displaystyle{ \Psi = \sum_{n = 0}^\infty \psi_n z^n}

where \psi_n is the amplitude for having n quanta of energy. A coherent state then looks like this:

\displaystyle{ \Psi = e^{c z} = \sum_{n = 0}^\infty \frac{c^n}{n!} z^n}

where c can be any complex number. Here we have omitted a constant factor necessary to normalize the state.

We can also use coherent states in classical stochastic systems like collections of amoebas! Now the coefficient of z^n tells us the probability of having n amoebas, so c had better be real. And probabilities should sum to 1, so we really should normalize \Psi as follows:

\displaystyle{  \Psi = \frac{e^{c z}}{e^c} = e^{-c} \sum_{n = 0}^\infty \frac{c^n}{n!} z^n }

Now, the probability distribution

\displaystyle{\psi_n = e^{-c} \; \frac{c^n}{n!}}

is called a Poisson distribution. So, for starters you can think of a ‘coherent state’ as an over-educated way of talking about a Poisson distribution.

Let’s work out the expected number of amoebas in this Poisson distribution. In the answers to the puzzles in Part 6, we started using this abbreviation:

\displaystyle{ \sum \Psi = \sum_{n = 0}^\infty \psi_n }

We also saw that the expected number of amoebas in the probability distribution \Psi is

\displaystyle{  \sum N \Psi }

What does this equal? Remember that N = a^\dagger a. The annihilation operator a is just \frac{d}{d z}, so

\displaystyle{ a \Psi = c \Psi}

and we get

\displaystyle{ \sum N \Psi = \sum a^\dagger a \Psi = c \sum a^\dagger \Psi }

But we saw in Part 5 that a^\dagger is stochastic, meaning

\displaystyle{  \sum a^\dagger \Psi = \sum \Psi }

for any \Psi. Furthermore, our \Psi here has

\displaystyle{ \sum \Psi = 1}

since it’s a probability distribution. So:

\displaystyle{  \sum N \Psi = c \sum a^\dagger \Psi = c \sum \Psi = c}

The expected number of amoebas is just c.

Puzzle 1. This calculation must be wrong if c is negative: there can’t be a negative number of amoebas. What goes wrong then?

Puzzle 2. Use the same tricks to calculate the standard deviation of the number of amoebas in the Poisson distribution \Psi.
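By the way, it’s easy to check the expected value numerically. Here is a tiny sketch (assuming numpy and scipy; c = 3.7 is just an example value):

```python
import numpy as np
from scipy.special import factorial

c = 3.7
n = np.arange(60)                        # enough terms that the tail is negligible
psi = np.exp(-c) * c**n / factorial(n)   # Poisson distribution: psi_n = e^{-c} c^n / n!

print(psi.sum())        # ~1: the probabilities sum to 1
print((n * psi).sum())  # ~3.7: the expected number of amoebas is c
```

Computing \sum_n n^2 \psi_n the same way lets you check your answer to Puzzle 2.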

Now let’s return to our problem and consider the initial amoeba state

\displaystyle{ \Psi = e^{c z}}

Here we aren’t bothering to normalize it, because we’re going to look for equilibrium solutions to the master equation, meaning solutions where \Psi(t) doesn’t change with time. So, we want to solve

\displaystyle{  H \Psi = 0}

Since this equation is linear, the normalization of \Psi doesn’t really matter.

Remember,

\displaystyle{  H\Psi = \alpha (a^\dagger a^\dagger a - N)\Psi + \beta(a^\dagger a a - N(N-1)) \Psi }

Let’s work this out. First consider the two \alpha terms:

\displaystyle{ a^\dagger a^\dagger a \Psi = c z^2 \Psi }

and

\displaystyle{ -N \Psi = -a^\dagger a\Psi = -c z \Psi}

Likewise for the \beta terms we find

\displaystyle{ a^\dagger a a\Psi=c^2 z \Psi}

and

\displaystyle{ -N(N-1)\Psi = -a^\dagger a^\dagger a a \Psi = -c^2 z^2\Psi }

Here I’m using something John showed in Part 6: the product a^\dagger a^\dagger a a equals the ‘falling power’ N(N-1).

The sum of all four terms must vanish. This happens whenever

\displaystyle{ \alpha(c z^2 - c z)+\beta(c^2 z-c^2 z^2) = 0}

which is satisfied for

\displaystyle{ c= \frac{\alpha}{\beta}}

Yippee! We’ve found an equilibrium solution, since we found a value for c that makes H \Psi = 0. Even better, we’ve seen that the expected number of amoebas in this equilibrium state is

\displaystyle{ \frac{\alpha}{\beta}}

This is just the same as the equilibrium population we saw for the rate equation—that is, the carrying capacity for the logistic equation! That’s pretty cool, but it’s no coincidence: in fact, Anderson proved it works like this for lots of stochastic Petri nets.
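If you would rather let a computer do the algebra, here is a quick check (assuming numpy and scipy, with example rate constants) that the coherent state with c = \alpha/\beta really is annihilated by the Hamiltonian, up to rounding and truncation:

```python
import numpy as np
from scipy.special import factorial

D = 40                                   # truncate at a maximum of D - 1 amoebas
alpha, beta = 2.0, 1.0
c = alpha / beta                         # the claimed equilibrium value

a = np.diag(np.arange(1.0, D), k=1)      # a z^n = n z^(n-1)
adag = np.diag(np.ones(D - 1), k=-1)     # a^dagger z^n = z^(n+1)
N = np.diag(np.arange(D, dtype=float))

H = alpha * (adag @ adag @ a - N) + beta * (adag @ a @ a - N @ (N - np.eye(D)))

n = np.arange(D)
psi = np.exp(-c) * c**n / factorial(n)   # coherent state = Poisson distribution with mean c

print(np.abs(H @ psi).max())             # ~0: an equilibrium solution of the master equation
print(n @ psi)                           # ~alpha/beta: the expected population is the carrying capacity
```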

I’m not sure what’s up next or what’s in store, since I’m blogging at gun point from inside a rabbit cage:

Give me all your blog posts, get out of that rabbit cage and reach for the sky!

I’d imagine we’re going to work out the theory behind this example and prove the existence of equilibrium solutions for master equations more generally. One idea John had was to have me start a night shift—that way you’ll get Azimuth posts 24 hours a day.


Equinox Summit

25 April, 2011

In response to my post asking What To Do?, Lee Smolin pointed out this conference on energy technologies:

Equinox Summit, 5-9 June 2011, Perimeter Institute/Waterloo University, Waterloo, Canada.

The idea:

The Equinox Summit will bring together leading top scientists in low-carbon technologies with a panel of industry and policy experts and the next generation of world leaders to pool their expertise and create a realistic roadmap from the energy challenges of today to a sustainable future by 2030.

These visionary researchers and decision makers will collaborate both in closed-door sessions and in free public presentations about the next generation of low-carbon energy solutions.

The public events are free but a ticket is required. Confirmed participants include these people:

• CERN researcher Yacine Kadi, who is leading efforts to build next-generation nuclear reactors that eat their own waste.
• Linda Nazar, Canada Research Chair in Solid State Materials, who is researching new nanomaterials that could store more energy and deliver it faster.
• Harvard chemist Alan Aspuru-Guzik, recognized as one of the “Top 35 Under 35 Young Innovators” by the MIT Technology Review in 2010.
• Australian science agency chief Cathy Foley, whose research into superconductivity could lead to technological leaps in transportation and energy production.
• University of Toronto Electrical and Computer Engineering professor Ted Sargent, who has devised paint-on solar cell technology that harvests infrared energy from the Sun. His 2005 book “The Dance of the Molecules: How Nanotechnology is Changing our Lives” has been translated into French, Spanish, Italian, Korean, and Arabic.

Summit advisors and speakers include:

• Robin Batterham, President of the Australian Academy of Technological Sciences and Engineering (ATSE), former Chief Scientist of Australia and former Chief Scientist of Rio Tinto.
• Vaclav Smil, author of “Energy Myths and Realities: Bringing Science to the Energy Policy Debate” and “Transforming the Twentieth Century: Technical Innovations and Their Consequences” – the first non-American to receive the American Association for the Advancement of Science’s Award for Public Understanding of Science and Technology.

These descriptions of participants are from the conference website, so they’re a bit more gushy than anything I’d write, but it looks like an interesting crew! If you go there and learn something cool, try to remember to drop a line here.


What To Do? (Part 1)

24 April, 2011

In a comment on my last interview with Yudkowsky, Eric Jordan wrote:

John, it would be great if you could follow up at some point with your thoughts and responses to what Eliezer said here. He’s got a pretty firm view that environmentalism would be a waste of your talents, and it’s obvious where he’d like to see you turn your thoughts instead. I’m especially curious to hear what you think of his argument that there are already millions of bright people working for the environment, so your personal contribution wouldn’t be as important as it would be in a less crowded field.

I’ve been thinking about this a lot.

Indeed, the reason I quit work on my previous area of interest—categorification and higher gauge theory—was the feeling that more and more people were moving into it. When I started, it seemed like a lonely but exciting quest. By now there are plenty of conferences on it, attended by plenty of people. It would be a full-time job just keeping up, much less doing something truly new. That made me feel inadequate—and worse, unnecessary. Helping start a snowball roll downhill is fun… but what’s the point in chasing one that’s already rolling?

The people working in this field include former grad students of mine and other youngsters I helped turn on to the subject. At first this made me a bit frustrated. It’s as if I engineered my own obsolescence. If only I’d spent less time explaining things, and more time proving theorems, maybe I could have stayed at the forefront!

But by now I’ve learned to see the bright side: it means I’m free to do other things. As I get older, I’m becoming ever more conscious of my limited lifespan and the vast number of things I’d like to try.

But what to do?

This is a big question. It’s a bit self-indulgent to discuss it publicly… or maybe not. It is, after all, a question we all face. I’ll talk about me, because I’m not up to tackling this question in its universal abstract form. But it could be you asking this, too.

For me this question was brought into sharp focus when I got a research position where I was allowed—nay, downright encouraged!—to follow my heart and work on what I consider truly important. In the ordinary course of life we often feel too caught up in the flow of things to do more than make small course corrections. Suddenly I was given a burst of freedom. What to do with it?

In my earlier work, I’d always taken the attitude that I should tackle whatever questions seemed most beautiful and profound… subject to the constraint that I had a good chance of making some progress on them. I realized that this attitude assumes other people will do most of the ‘dirty work’, whatever that may be. But I figured I could get away with it. I figured that if I were ever called to account—by my own conscience, say—I could point to the fact that I’d worked hard to understand the universe and also spent a lot of time teaching people, both in my job and in my spare time. Surely that counts for something?

I had, however, for decades been observing the slow-motion train wreck that our civilization seems to be engaged in. Global warming, ocean acidification and habitat loss may be combining to cause a mass extinction event, and perhaps—in conjunction with resource depletion—a serious setback to human civilization. Now is not the time to go over all the evidence: suffice it to say that I think we may be heading for serious trouble.

It’s hard to know just how much trouble. If it were just routine ‘misery as usual’, I’ll admit I’d be happy to sit back and let everyone else deal with these problems. But the more I study them, the more that seems untenable… especially since so many people are doing just that: sitting back and letting everyone else deal with them.

I’m not sure this complex of problems rises to the level of an ‘existential risk’—which Nick Bostrom defines as one where an adverse outcome would either annihilate intelligent life originating on Earth or permanently and drastically curtail its potential. But I see scenarios where we clobber ourselves quite seriously. They don’t even seem unlikely, and they don’t seem very far-off, and I don’t see people effectively rising to the occasion. So, just as I’d move to put out a fire if I saw smoke coming out of the kitchen and everyone else was too busy watching TV to notice, I feel I have to do something.

But the question remains: what to do?

Eliezer Yudkowsky had some unabashed advice:

I honestly don’t see how a rationalist can avoid this conclusion: At this absolutely critical hinge in the history of the universe—Earth in the 21st century—rational altruists should devote their marginal attentions to risks that threaten to terminate intelligent life or permanently destroy a part of its potential. Those problems, which Nick Bostrom named ‘existential risks’, have got all the scope. And when it comes to marginal impact, there are major risks outstanding that practically no one is working on. Once you get the stakes on a gut level it’s hard to see how doing anything else could be sane.

So how do you go about protecting the future of intelligent life? Environmentalism? After all, there are environmental catastrophes that could knock over our civilization… but then if you want to put the whole universe at stake, it’s not enough for one civilization to topple, you have to argue that our civilization is above average in its chances of building a positive galactic future compared to whatever civilization would rise again a century or two later. Maybe if there were ten people working on environmentalism and millions of people working on Friendly AI, I could see sending the next marginal dollar to environmentalism. But with millions of people working on environmentalism, and major existential risks that are completely ignored… if you add a marginal resource that can, rarely, be steered by expected utilities instead of warm glows, devoting that resource to environmentalism does not make sense.

Similarly with other short-term problems. Unless they’re little-known and unpopular problems, the marginal impact is not going to make sense, because millions of other people will already be working on them. And even if you argue that some short-term problem leverages existential risk, it’s not going to be perfect leverage and some quantitative discount will apply, probably a large one. I would be suspicious that the decision to work on a short-term problem was driven by warm glow, status drives, or simple conventionalism.

With that said, there’s also such a thing as comparative advantage—the old puzzle of the lawyer who works an hour in the soup clinic instead of working an extra hour as a lawyer and donating the money. Personally I’d say you can work an hour in the soup clinic to keep yourself going if you like, but you should also be working extra lawyer-hours and donating the money to the soup clinic, or better yet, to something with more scope. (See “Purchase Fuzzies and Utilons Separately” on Less Wrong.) Most people can’t work effectively on Artificial Intelligence (some would question if anyone can, but at the very least it’s not an easy problem). But there’s a variety of existential risks to choose from, plus a general background job of spreading sufficiently high-grade rationality and existential risk awareness. One really should look over those before going into something short-term and conventional. Unless your master plan is just to work the extra hours and donate them to the cause with the highest marginal expected utility per dollar, which is perfectly respectable.

Where should you go in life? I don’t know exactly, but I think I’ll go ahead and say “not environmentalism”. There’s just no way that the product of scope, marginal impact, and John Baez’s comparative advantage is going to end up being maximal at that point.

When I heard this, one of my first reactions was: “Of course I don’t want to do anything ‘conventional’, something that ‘millions of people’ are already doing”. After all, my sense of being just another guy in the crowd was a big factor in leaving work on categorification and higher gauge theory—and most people have never even heard of those subjects!

I think so far the Azimuth Project is proceeding in a sufficiently unconventional way that while it may fall flat on its face, it’s at least trying something new. Though I always want more people to join in, we’ve already got some good projects going that take advantage of my ‘comparative advantage’: the ability to do math and explain stuff.

The most visible here is the network theory project, which is a step towards the kind of math I think we need to understand a wide variety of complex systems. I’ve been putting most of my energy into that lately, and coming up with ideas faster than I can explain them. On top of that, Eric Forgy, Tim van Beek, Staffan Liljgeren, Matt Reece, David Tweed and others have other interesting projects cooking behind the scenes on the Azimuth Forum. I’ll be talking about those soon, too.

I don’t feel satisfied, though. I’m happy enough—that’s never a problem these days—but once you start trying to do things to help the world, instead of just have fun, it’s very tricky to determine the best way to proceed.

One can, of course, easily fool oneself into thinking one knows.


Stabilization Wedges (Part 5)

21 April, 2011

In 2004, Pacala and Socolow laid out a list of ways we can battle global warming using current technologies. They said that to avoid serious trouble, we need to choose seven ‘stabilization wedges’: that is, seven ways to cut carbon emissions by 1 gigatonne per year within 50 years. They listed 15 wedges to choose from, and I’ve told you about them here:

Part 1 – efficiency and conservation.

Part 2 – shifting from coal to natural gas, carbon capture and storage.

Part 3 – nuclear power and renewable energy.

Part 4 – reforestation, good soil management.

According to Pacala:

The message was a very positive one: “gee, we can solve this problem: there are lots of ways to solve it, and lots of ways for the marketplace to solve it.”

I find that interesting, because to me each wedge seems like a gargantuan enterprise—and taken together, they seem like the Seven Labors of Hercules. They’re technically feasible, but who has the stomach for them? I fear things need to get worse before we come to our senses and take action at the scale that’s required.

Anyway, that’s just me. But three years ago, Pacala publicly reconsidered his ideas for a very different reason. Based on new evidence, he gave a talk at Stanford where he said:

It’s at least possible that we’ve already let this thing go too far, and that the biosphere may start to fall apart on us, even if we do all this. We may have to fall back on some sort of dramatic Plan B. We have to stay vigilant as a species.

You can watch his talk here:

It’s pretty damned interesting: he’s a good speaker.

Here’s a dry summary of a few key points. I won’t try to add caveats: I’m sure he would add some himself in print, but I’d rather keep the message simple. I also won’t try to update his information! Not in this blog entry, anyway. But I’ll ask some questions, and I’ll be delighted if you help me out on those.

Emissions targets

First, Pacala’s review of different carbon emissions targets.

The old scientific view, circa 1998: if we could keep the CO2 from doubling from its preindustrial level of 280 parts per million, that would count as a success. Namely, most of the ‘monsters behind the door’ would not come out: continental ice sheets falling into the sea and swamping coastal cities, the collapse of the Atlantic ocean circulation, a drought in the Sahel region of Africa, etcetera.

Many experts say we’d be lucky to get away with CO2 merely doubling. At current burn rates we’ll double it by 2050, and quadruple it by the end of this century. We’ve got enough fossil fuels to send it to seven times its preindustrial levels.

Doubling it would take us to 560 parts per million. A lot of people think that’s too high to be safe. But going for lower levels gets harder:

• In Pacala and Socolow’s original paper, they talked about keeping CO2 below 500 ppm. This would require keeping CO2 emissions constant until 2050. This could be achieved by a radical decarbonization of the economies of rich countries, while allowing carbon emissions in poor countries to grow almost freely until that time.

• For a long time the IPCC and many organizations advocated keeping CO2 below 450 ppm. This would require cutting CO2 emissions by 50% by 2050, which could be achieved by a radical decarbonization in rich countries, and moderate decarbonization in poor countries.

• But by 2008 the IPCC and many groups wanted a cap of 2°C global warming, or keeping CO2 below 430 ppm. This would mean cutting CO2 emissions by 80% by 2050, which would require a radical decarbonization in both rich and poor countries.

The difference here is what poor people have to do. The rich countries need to radically cut carbon emissions in all these scenarios. In the USA, the Lieberman-Warner bill would have forced the complete decarbonization of the economy by 2050.

Then, Pacala spoke about 3 things that make him nervous:

1. Faster emissions growth

A 2007 paper by Canadell et al pointed out that starting in 2000, fossil fuel emissions started growing at 3% per year instead of the earlier figure of 1.5%. This could be due to China’s industrialization. Will this keep up in years to come? If so, the original Pacala-Socolow plan won’t work.

(How much, exactly, did the economic recession change this story?)

2. The ocean sink

Each year fossil fuel burning puts about 8 gigatonnes of carbon in the atmosphere. The ocean absorbs about 2 gigatonnes and the land absorbs about 2, leaving about 4 gigatonnes in the atmosphere.

However, as CO2 emissions rise, the oceanic CO2 sink has been growing less than anticipated. This seems to be due to a change in wind patterns, itself a consequence of global warming.

(What’s the latest story here?)

3. The land sink

As the CO2 levels go up, people expected plants to grow better and suck up more CO2. In the third IPCC report, models predicted that by 2050, plants will be drawing down 6 gigatonnes more carbon per year than they do now! The fourth IPCC report was similar.

This is huge: remember that right now we emit about 8 gigatonnes per year. Indeed, this effect, called CO2 fertilization, could be the difference between the land being a big carbon sink and a big carbon source. Why a carbon source? For one thing, without the plants sucking up CO2, temperatures will rise faster, and the Amazon rainforest may start to die, and permafrost in the Arctic may release more greenhouse gases (especially methane) as it melts.

In a simulation run by Pacala, where he deliberately assumed that plants fail to suck up more carbon dioxide, these effects happened and the biosphere dumped a huge amount of extra CO2 into the atmosphere: the equivalent of 26 stabilization wedges.

So, plans based on the IPCC models are essentially counting on plants to save us from ourselves.

But is there any reason to think plants might not suck up CO2 at the predicted rates?

Maybe. First, people have actually grown forests in doubled CO2 conditions to see how much faster plants grow then. But the classic experiment along these lines used young trees. In 2005, Körner et al did an experiment using mature trees… and they didn’t see them growing any faster!

Second, models in the third IPCC report assumed that as plants grew faster, they’d have no trouble getting all the nitrogen they need. But Hungate et al have argued otherwise. On the other hand, Alexander Barron discovered that some tropical plants were unexpectedly good at ramping up the rate at which they grab ahold of nitrogen from the atmosphere. But on the third hand, that only applies to the tropics. And on the fourth hand—a complicated problem like this requires one of those Indian gods with lots of hands—nitrogen isn’t the only limiting factor to worry about: there’s also phosphorus, for example.

Pacala goes on and discusses even more complicating factors. But his main point is simple. The details of CO2 fertilization matter a lot. It could make the difference between their original plan being roughly good enough… and being nowhere near good enough!

(What’s the latest story here?)


Network Theory (Part 6)

16 April, 2011

Now for the fun part. Let’s see how tricks from quantum theory can be used to describe random processes. I’ll try to make this post self-contained. So, even if you skipped a bunch of the previous ones, this should make sense.

You’ll need to know a bit of math: calculus, a tiny bit of probability theory, and linear operators on vector spaces. You don’t need to know quantum theory, though you’ll have more fun if you do. What we’re doing here is very similar… but also strangely different—for reasons I explained last time.

Rabbits and quantum mechanics

Suppose we have a population of rabbits in a cage and we’d like to describe its growth in a stochastic way, using probability theory. Let \psi_n be the probability of having n rabbits. We can borrow a trick from quantum theory, and summarize all these probabilities in a formal power series like this:

\Psi = \sum_{n = 0}^\infty \psi_n z^n

The variable z doesn’t mean anything in particular, and we don’t care if the power series converges. See, in math ‘formal’ means “it’s only symbols on the page, just follow the rules”. It’s like if someone says a party is ‘formal’, so you need to wear a white tie: you’re not supposed to ask what the tie means.

However, there’s a good reason for this trick. We can define two operators on formal power series, called the annihilation operator:

a \Psi = \frac{d}{d z} \Psi

and the creation operator:

a^\dagger \Psi = z \Psi

They’re just differentiation and multiplication by z, respectively. So, for example, suppose we start out being 100% sure we have n rabbits for some particular number n. Then \psi_n = 1, while all the other probabilities are 0, so:

\Psi = z^n

If we then apply the creation operator, we obtain

a^\dagger \Psi = z^{n+1}

Voilà! One more rabbit!

The annihilation operator is more subtle. If we start out with n rabbits:

\Psi = z^n

and then apply the annihilation operator, we obtain

a \Psi = n z^{n-1}

What does this mean? The z^{n-1} means we have one fewer rabbit than before. But what about the factor of n? It means there were n different ways we could pick a rabbit and make it disappear! This should seem a bit mysterious, for various reasons… but we’ll see how it works soon enough.

The creation and annihilation operators don’t commute:

(a a^\dagger - a^\dagger a) \Psi = \frac{d}{d z} (z \Psi) - z \frac{d}{d z} \Psi = \Psi

so for short we say:

a a^\dagger - a^\dagger a = 1

or even shorter:

[a, a^\dagger] = 1

where the commutator of two operators is

[S,T] = S T - T S

The noncommutativity of operators is often claimed to be a special feature of quantum physics, and the creation and annihilation operators are fundamental to understanding the quantum harmonic oscillator. There, instead of rabbits, we’re studying quanta of energy, which are peculiarly abstract entities obeying rather counterintuitive laws. So, it’s cool that the same math applies to purely classical entities, like rabbits!

In particular, the equation [a, a^\dagger] = 1 just says that there’s one more way to put a rabbit in a cage of rabbits, and then take one out, than to take one out and then put one in.
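If you like to compute, here is a minimal sketch (assuming numpy; the helper names are arbitrary) of these operators acting on formal power series, stored as arrays of coefficients \psi_0, \psi_1, \psi_2, \dots:

```python
import numpy as np

def annihilate(psi):
    """a = d/dz: the coefficient of z^(n-1) in a Psi is n times the coefficient of z^n in Psi."""
    n = np.arange(len(psi))
    return (n * psi)[1:]

def create(psi):
    """a^dagger = multiplication by z: shift every coefficient up by one power of z."""
    return np.concatenate(([0.0], psi))

# Start with exactly 3 rabbits: Psi = z^3, i.e. coefficients (0, 0, 0, 1).
psi = np.array([0.0, 0.0, 0.0, 1.0])

print(create(psi))       # [0. 0. 0. 0. 1.]  that is z^4: one more rabbit
print(annihilate(psi))   # [0. 0. 3.]  that is 3 z^2: three ways to remove a rabbit

# The commutation relation [a, a^dagger] = 1, checked on this state:
print(annihilate(create(psi)) - create(annihilate(psi)))   # [0. 0. 0. 1.]  equals psi
```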

But how do we actually use this setup? We want to describe how the probabilities \psi_n change with time, so we write

\Psi(t) = \sum_{n = 0}^\infty \psi_n(t) z^n

Then, we write down an equation describing the rate of change of \Psi:

\frac{d}{d t} \Psi(t) = H \Psi(t)

Here H is an operator called the Hamiltonian, and the equation is called the master equation. The details of the Hamiltonian depend on our problem! But we can often write it down using creation and annihilation operators. Let’s do some examples, and then I’ll tell you the general rule.

Catching rabbits

Last time I told you what happens when we stand in a river and catch fish as they randomly swim past. Let me remind you of how that works. But today let’s use rabbits.

So, suppose an inexhaustible supply of rabbits are randomly roaming around a huge field, and each time a rabbit enters a certain area, we catch it and add it to our population of caged rabbits. Suppose that on average we catch one rabbit per unit time. Suppose the chance of catching a rabbit during any interval of time is independent of what happened before. What is the Hamiltonian describing the probability distribution of caged rabbits, as a function of time?

There’s an obvious dumb guess: the creation operator! However, we saw last time that this doesn’t work, and we saw how to fix it. The right answer is

H = a^\dagger - 1

To see why, suppose for example that at some time t we have n rabbits, so:

\Psi(t) = z^n

Then the master equation says that at this moment,

\frac{d}{d t} \Psi(t) = (a^\dagger - 1) \Psi(t) =  z^{n+1} - z^n

Since \Psi = \sum_{n = 0}^\infty \psi_n(t) z^n, this implies that the coefficients of our formal power series are changing like this:

\frac{d}{d t} \psi_{n+1}(t) = 1
\frac{d}{d t} \psi_{n}(t) = -1

while all the rest have zero derivative at this moment. And that’s exactly right! See, \psi_{n+1}(t) is the probability of having one more rabbit, and this is going up at rate 1. Meanwhile, \psi_n(t) is the probability of having n rabbits, and this is going down at the same rate.

Puzzle 1. Show that with this Hamiltonian and any initial conditions, the master equation predicts that the expected number of rabbits grows linearly.
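If you want to check this prediction numerically before proving it, here is a sketch (assuming numpy and scipy) that integrates the master equation on a truncated state space:

```python
import numpy as np
from scipy.linalg import expm

D = 40                                  # truncate at a maximum of D - 1 rabbits
adag = np.diag(np.ones(D - 1), k=-1)    # creation operator on coefficient vectors
H = adag - np.eye(D)                    # H = a^dagger - 1

psi0 = np.zeros(D)
psi0[0] = 1.0                           # start with zero rabbits, for definiteness

n = np.arange(D)
for t in [0.0, 1.0, 2.0, 3.0]:
    psi_t = expm(t * H) @ psi0          # solve the master equation up to time t
    print(t, n @ psi_t)                 # the expected number of rabbits grows like t
```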

Dying rabbits

Don’t worry: no rabbits are actually injured in the research that Jacob Biamonte is doing here at the Centre for Quantum Technologies. He’s keeping them well cared for in a big room on the 6th floor. This is just a thought experiment.

Suppose a mean nasty guy had a population of rabbits in a cage and didn’t feed them at all. Suppose that each rabbit has a unit probability of dying per unit time. And as always, suppose the probability of this happening in any interval of time is independent of what happens before that time.

What is the Hamiltonian? Again there’s a dumb guess: the annihilation operator! And again this guess is wrong, but it’s not far off. As before, the right answer includes a ‘correction term’:

H = a - N

This time the correction term is famous in its own right. It’s called the number operator:

N = a^\dagger a

The reason is that if we start with n rabbits, and apply this operator, it amounts to multiplication by n:

N z^n = z \frac{d}{d z} z^n = n z^n

Let’s see why this guess is right. Again, suppose that at some particular time t we have n rabbits, so

\Psi(t) = z^n

Then the master equation says that at this time

\frac{d}{d t} \Psi(t) = (a - N) \Psi(t) = n z^{n-1} - n z^n

So, our probabilities are changing like this:

\frac{d}{d t} \psi_{n-1}(t) = n
\frac{d}{d t} \psi_n(t) = -n

while the rest have zero derivative. And this is good! We’re starting with n rabbits, and each has a unit probability per unit time of dying. So, the chance of having one less should be going up at rate n. And the chance of having the same number we started with should be going down at the same rate.

Puzzle 2. Show that with this Hamiltonian and any initial conditions, the master equation predicts that the expected number of rabbits decays exponentially.

Breeding rabbits

Suppose we have a strange breed of rabbits that reproduce asexually. Suppose that each rabbit has a unit probability per unit time of having a baby rabbit, thus effectively duplicating itself.

As you can see from the cryptic picture above, this ‘duplication’ process takes one rabbit as input and has two rabbits as output. So, if you’ve been paying attention, you should be ready with a dumb guess for the Hamiltonian: a^\dagger a^\dagger a. This operator annihilates one rabbit and then creates two!

But you should also suspect that this dumb guess will need a ‘correction term’. And you’re right! As always, the correction term makes the probability of things staying the same go down at exactly the rate that the probability of things changing goes up.

You should guess the correction term… but I’ll just tell you:

H = a^\dagger a^\dagger a - N

We can check this in the usual way, by seeing what it does when we have n rabbits:

H z^n =  z^2 \frac{d}{d z} z^n - n z^n = n z^{n+1} - n z^n

That’s good: since there are n rabbits, the rate of rabbit duplication is n. This is the rate at which the probability of having one more rabbit goes up… and also the rate at which the probability of having n rabbits goes down.

Puzzle 3. Show that with this Hamiltonian and any initial conditions, the master equation predicts that the expected number of rabbits grows exponentially.

Dueling rabbits

Let’s do some stranger examples, just so you can see the general pattern.

Here each pair of rabbits has a unit probability per unit time of fighting a duel with only one survivor. You might guess the Hamiltonian a^\dagger a a, but in fact:

H = a^\dagger a a - N(N-1)

Let’s see why this is right! Let’s see what it does when we have n rabbits:

H z^n = z \frac{d^2}{d z^2} z^n - n(n-1)z^n = n(n-1) z^{n-1} - n(n-1)z^n

That’s good: since there are n(n-1) ordered pairs of rabbits, the rate at which duels take place is n(n-1). This is the rate at which the probability of having one less rabbit goes up… and also the rate at which the probability of having n rabbits goes down.

(If you prefer unordered pairs of rabbits, just divide the Hamiltonian by 2. We should talk about this more, but not now.)

Brawling rabbits

Now each triple of rabbits has a unit probability per unit time of getting into a fight with only one survivor! I don’t know the technical term for a three-way fight, but perhaps it counts as a small ‘brawl’ or ‘melee’. In fact the Wikipedia article for ‘melee’ shows three rabbits in suits of armor, fighting it out:

Now the Hamiltonian is:

H = a^\dagger a^3 - N(N-1)(N-2)

You can check that:

H z^n = n(n-1)(n-2) z^{n-2} - n(n-1)(n-2) z^n

and this is good, because n(n-1)(n-2) is the number of ordered triples of rabbits. You can see how this number shows up from the math, too:

a^3 z^n = \frac{d^3}{d z^3} z^n = n(n-1)(n-2) z^{n-3}

The general rule

Suppose we have a process taking k rabbits as input and having j rabbits as output:

I hope you can guess the Hamiltonian I’ll use for this:

H = {a^{\dagger}}^j a^k - N(N-1) \cdots (N-k+1)

This works because

a^k z^n = \frac{d^k}{d z^k} z^n = n(n-1) \cdots (n-k+1) z^{n-k}

so that if we apply our Hamiltonian to n rabbits, we get

H z^n =  n(n-1) \cdots (n-k+1) (z^{n+j-k} - z^n)

See? As the probability of having n+j-k rabbits goes up, the probability of having n rabbits goes down, at an equal rate. This sort of balance is necessary for H to be a sensible Hamiltonian in this sort of stochastic theory (an ‘infinitesimal stochastic operator’, to be precise). And the rate is exactly the number of ordered k-tuples taken from a collection of n rabbits. This is called the kth falling power of n, and written as follows:

n^{\underline{k}} = n(n-1) \cdots (n-k+1)

Since we can apply functions to operators as well as numbers, we can write our Hamiltonian as:

H = {a^{\dagger}}^j a^k - N^{\underline{k}}
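Here is the general rule in code, as a small sketch (assuming numpy, with example values of j, k and the truncation size): we build H for a transition that eats k rabbits and spits out j rabbits, as a matrix on the truncated basis 1, z, \dots, z^{D-1}, and check that its columns sum to zero, which is what makes it an infinitesimal stochastic operator:

```python
import numpy as np

D = 12                                   # truncate the state space at D - 1 rabbits

a = np.diag(np.arange(1.0, D), k=1)      # a z^n = n z^(n-1)
adag = np.diag(np.ones(D - 1), k=-1)     # a^dagger z^n = z^(n+1)
N = np.diag(np.arange(D, dtype=float))

def falling_power(M, k):
    """M(M-1)(M-2)...(M-k+1) for a square matrix M."""
    result = np.eye(len(M))
    for i in range(k):
        result = result @ (M - i * np.eye(len(M)))
    return result

def hamiltonian(j, k):
    """H for a transition eating k rabbits and producing j rabbits, with unit rate constant."""
    return np.linalg.matrix_power(adag, j) @ np.linalg.matrix_power(a, k) - falling_power(N, k)

H = hamiltonian(j=2, k=1)                # for example, breeding: one rabbit in, two rabbits out

# Every column sums to zero, so probability is conserved, except in the very last column:
# there z^(D-1) would be sent to z^D, which the truncation throws away.
print(H.sum(axis=0))
```

With this convention, catching rabbits is hamiltonian(1, 0), dying is hamiltonian(0, 1), breeding is hamiltonian(2, 1), dueling is hamiltonian(1, 2) and brawling is hamiltonian(1, 3).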

Kissing rabbits

Let’s do one more example just to test our understanding. This time each pair of rabbits has a unit probability per unit time of bumping into one another, exchanging a friendly kiss and walking off. This shouldn’t affect the rabbit population at all! But let’s follow the rules and see what they say.

According to our rules, the Hamiltonian should be:

H = {a^{\dagger}}^2 a^2 - N(N-1)

However,

{a^{\dagger}}^2 a^2 z^n = z^2 \frac{d^2}{dz^2} z^n = n(n-1) z^n = N(N-1) z^n

and since the z^n form a ‘basis’ for the formal power series, we see that:

{a^{\dagger}}^2 a^2 = N(N-1)

so in fact:

H = 0

That’s good: if the Hamiltonian is zero, the master equation will say

\frac{d}{d t} \Psi(t) = 0

so the population, or more precisely the probability of having any given number of rabbits, will be constant.

There’s another nice little lesson here. Copying the calculation we just did, it’s easy to see that:

{a^{\dagger}}^k a^k = N^{\underline{k}}

This is a cute formula for falling powers of the number operator in terms of annihilation and creation operators. It means that for the general transition we saw before:

we can write the Hamiltonian in two equivalent ways:

H = {a^{\dagger}}^j a^k - N^{\underline{k}} =  {a^{\dagger}}^j a^k - {a^{\dagger}}^k a^k

Okay, that’s it for now! We can, and will, generalize all this stuff to stochastic Petri nets where there are things of many different kinds—not just rabbits. And we’ll see that the master equation we get matches the answer to the puzzle in Part 4. That’s pretty easy. But first, we’ll have a guest post by Jacob Biamonte, who will explain a more realistic example from population biology.


The Genetic Code

14 April, 2011

Certain mathematical physicists can’t help wondering why the genetic code works exactly the way it does. As you probably know, DNA is a helix bridged by pairs of bases, which come in 4 kinds:

adenine (A)
thymine (T)
cytosine (C)
guanine (G)

Because of how they’re shaped, A can only connect to T:

while C can only connect to G:

When DNA is copied to ‘messenger RNA’ as part of the process of making proteins, the T gets copied to uracil, or U. The other three bases stay the same.

A protein is made of lots of amino acids. A sequence of three bases forms a ‘codon’, which codes for a single amino acid. Here’s some messenger RNA with the codons indicated:

But here’s where it gets tricky: while there are 4³ = 64 codons, they code for only 20 amino acids. Typically more than one codon codes for the same amino acid. There are two exceptions. One is the amino acid tryptophan, which is encoded only by UGG. The other is methionine, which is encoded only by AUG. AUG is also the ‘start codon’, which tells the cell where the code for a protein starts. So, methionine shows up at the start of every protein (or maybe just most?), at least at first. It’s usually removed later in the protein manufacture process.

There are also three ‘stop codons’, which mark the end of a protein. They have cute names:

• UAG (‘amber’)
• UAA (‘ochre’)
• UGA (‘opal’)

But look at the actual pattern of which codons code for which amino acids:

It looks sort of regular… but also sort of irregular! Note how:

• Almost all amino acids either have 4 codons coding for them, or 2.
• If 4 codons code for the same amino acid, it’s because we can change the last base without any effect.
• If 2 codons code for the same amino acid, it’s because we can change the last base from U to C or from A to G without any effect.
• The amino acid tryptophan, with just one codon coding for it, is right next to the 3 stop codons.

And so on…
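If you want to play with these patterns yourself, here is a small Python sketch that tabulates the standard code and counts how many codons map to each amino acid:

```python
from collections import Counter
from itertools import product

# The standard genetic code packed into a 64-character string: codons run in the order
# UUU, UUC, UUA, UUG, UCU, ... with bases ordered U, C, A, G, and '*' marking the stops.
bases = "UCAG"
aas = ("FFLLSSSSYY**CC*W"   # first base U
       "LLLLPPPPHHQQRRRR"   # first base C
       "IIIMTTTTNNKKSSRR"   # first base A
       "VVVVAAAADDEEGGGG")  # first base G

code = {''.join(codon): aa for codon, aa in zip(product(bases, repeat=3), aas)}

print(code["AUG"], code["UGG"], code["UGA"])   # M (methionine), W (tryptophan), * (a stop)

degeneracy = Counter(code.values())
del degeneracy['*']                            # set aside the three stop codons

print(sorted(degeneracy.values()))
# [1, 1, 2, 2, 2, 2, 2, 2, 2, 2, 2, 3, 4, 4, 4, 4, 4, 6, 6, 6]
```

So two amino acids (methionine and tryptophan) have one codon each, isoleucine has three, leucine, serine and arginine have six, and the remaining fourteen have two or four.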

This is what attracts the mathematical physicists I’m talking about. They’re wondering what the pattern is here! Saying the patterns are coincidental—a “frozen accident of history”—won’t please these people.

Though I certainly don’t vouch for their findings, I sympathize with the impulse to find order amid chaos. Here are some papers I’ve seen:

• José Eduardo M. Hornos, Yvone M. M. Hornos and Michael Forger, Symmetry and symmetry breaking, algebraic approach to the genetic code, International Journal of Modern Physics B, 13 (1999), 2795-2885.

After a very long review of symmetry in physics, starting with the Big Bang and moving up through the theory of Lie algebras and Cartan’s classification of simple Lie algebras, the authors describe their program:

The first step in the search for symmetries in the genetic code consists in selecting a simple Lie algebra and an irreducible representation of this Lie algebra on a vector space of dimension 64: such a representation will in the following be referred to as a codon representation.

There turn out to be 11 choices. Then they look at Lie subalgebras of these Lie algebras that have codon representations, and try to organize the codons for the same amino acid into irreducible representations of these subalgebras. This follows the ‘symmetry breaking’ strategy that particle physicists use to organize particles into families (but with less justification, it seems to me). They show:

There is no symmetry breaking pattern through chains of subalgebras capable of reproducing exactly the degeneracies of the genetic code.

This is not the end of the paper, however!

Here’s another paper, which seems to focus on how the genetic code might be robust against small errors:

• Miguel A. Jimenez-Montano, Carlos R. de la Mora-Basanez, and Thorsten Poeschel, The hypercube structure of the genetic code explains conservative and non-conservative amino acid substitutions in vivo and in vitro.

And here’s another:

• S. Petoukhov, The genetic code, 8-dimensional hypercomplex numbers and dyadic shifts.

But these three papers seem rather ‘Platonic’ in inspiration: they don’t read like biology papers. What papers on the genetic code do biologists like best? I know there’s a lot of research on the origin of this code.

Maybe some of these would be interesting. I haven’t read any of them! But they seem a bit more mainstream than the ones I just listed:

• T. A. Ronneberg, L. F. Landweber, S. J. Freeland, Testing a biosynthetic theory of the genetic code: fact or artifact?, Proc. Natl. Acad. Sci. U.S.A. 97 (2000), 13690–13695.

It has long been conjectured that the canonical genetic code evolved from a simpler primordial form that encoded fewer amino acids (e.g. Crick 1968). The most influential form of this idea, “code coevolution” (Wong 1975) proposes that the genetic code coevolved with the invention of biosynthetic pathways for new amino acids. It further proposes that a comparison of modern codon assignments with the conserved metabolic pathways of amino acid biosynthesis can inform us about this history of code expansion. Here we re-examine the biochemical basis of this theory to test the validity of its statistical support. We show that the theory’s definition of “precursor-product” amino acid pairs is unjustified biochemically because it requires the energetically unfavorable reversal of steps in extant metabolic pathways to achieve desired relationships. In addition, the theory neglects important biochemical constraints when calculating the probability that chance could assign precursor-product amino acids to contiguous codons. A conservative correction for these errors reveals a surprisingly high 23% probability that apparent patterns within the code are caused purely by chance. Finally, even this figure rests on post hoc assumptions about primordial codon assignments, without which the probability rises to 62% that chance alone could explain the precursor-product pairings found within the code. Thus we conclude that coevolution theory cannot adequately explain the structure of the genetic code.

• Pavel V. Baranov, Maxime Venin and Gregory Provan, Codon size reduction as the origin of the triplet genetic code, PLoS ONE 4 (2009), e5708.

The genetic code appears to be optimized in its robustness to missense errors and frameshift errors. In addition, the genetic code is near-optimal in terms of its ability to carry information in addition to the sequences of encoded proteins. As evolution has no foresight, optimality of the modern genetic code suggests that it evolved from less optimal code variants. The length of codons in the genetic code is also optimal, as three is the minimal nucleotide combination that can encode the twenty standard amino acids. The apparent impossibility of transitions between codon sizes in a discontinuous manner during evolution has resulted in an unbending view that the genetic code was always triplet. Yet, recent experimental evidence on quadruplet decoding, as well as the discovery of organisms with ambiguous and dual decoding, suggest that the possibility of the evolution of triplet decoding from living systems with non-triplet decoding merits reconsideration and further exploration. To explore this possibility we designed a mathematical model of the evolution of primitive digital coding systems which can decode nucleotide sequences into protein sequences. These coding systems can evolve their nucleotide sequences via genetic events of Darwinian evolution, such as point-mutations. The replication rates of such coding systems depend on the accuracy of the generated protein sequences. Computer simulations based on our model show that decoding systems with codons of length greater than three spontaneously evolve into predominantly triplet decoding systems. Our findings suggest a plausible scenario for the evolution of the triplet genetic code in a continuous manner. This scenario suggests an explanation of how protein synthesis could be accomplished by means of long RNA-RNA interactions prior to the emergence of the complex decoding machinery, such as the ribosome, that is required for stabilization and discrimination of otherwise weak triplet codon-anticodon interactions.

What’s the “recent experimental evidence on quadruplet decoding”, and what organisms have “ambiguous” or “dual” decoding?

• Tsvi Tlusty, A model for the emergence of the genetic code as a transition in a noisy information channel, J. Theor. Bio. 249 (2007), 331–342.

The genetic code maps the sixty-four nucleotide triplets (codons) to twenty amino-acids. Some argue that the specific form of the code with its twenty amino-acids might be a ‘frozen accident’ because of the overwhelming effects of any further change. Others see it as a consequence of primordial biochemical pathways and their evolution. Here we examine a scenario in which evolution drives the emergence of a genetic code by selecting for an amino-acid map that minimizes the impact of errors. We treat the stochastic mapping of codons to amino-acids as a noisy information channel with a natural fitness measure. Organisms compete by the fitness of their codes and, as a result, a genetic code emerges at a supercritical transition in the noisy channel, when the mapping of codons to amino-acids becomes nonrandom. At the phase transition, a small expansion is valid and the emergent code is governed by smooth modes of the Laplacian of errors. These modes are in turn governed by the topology of the error-graph, in which codons are connected if they are likely to be confused. This topology sets an upper bound – which is related to the classical map-coloring problem – on the number of possible amino-acids. The suggested scenario is generic and may describe a mechanism for the formation of other error-prone biological codes, such as the recognition of DNA sites by proteins in the transcription regulatory network.

• Tsvi Tlusty, A colorful origin for the genetic code: Information theory, statistical mechanics and the emergence of molecular codes, Phys. Life. Rev. 7 (2010), 362–376.

• S. J. Freeland, T. Wu and N. Keulmann, The case for an error minimizing standard genetic code, Orig. Life Evol. Biosph. 33 (2009), 457–477.

• G. Sella and D. Ardell, The coevolution of genes and genetic codes: Crick’s frozen accident revisited, J. Mol. Evol. 63 (2006), 297–313.


The Three-Fold Way

13 April, 2011

I just finished a series of blog posts about doing quantum theory using the real numbers, the complex numbers and the quaternions… and how Nature seems to use all three. Mathematically, they fit together in a structure that Freeman Dyson called The Three-Fold Way.

You can read all those blog posts here:

State-observable duality – Part 1: a review of normed division algebras.

State-observable duality – Part 2: the theorem by Jordan, von Neumann and Wigner classifying ‘finite-dimensional formally real Jordan algebras’.

State-observable duality – Part 3: the Koecher–Vinberg classification of self-dual homogeneous convex cones, and its relation to state-observable duality.

Solèr’s Theorem: Maria Pia Solèr’s amazing theorem from 1995, which characterizes Hilbert spaces over the real numbers, complex numbers and quaternions.

The Three-Fold Way – Part 1: two problems with real and quaternionic quantum mechanics.

The Three-Fold Way – Part 2: why irreducible unitary representations on complex Hilbert spaces come in three flavors: real, complex and quaternionic.

The Three-Fold Way – Part 3: why the “q” in “qubit” stands for “quaternion”.

The Three-Fold Way – Part 4: how turning a particle around 180 degrees is related to making it go backwards in time, and what all this has to do with real numbers and quaternions.

The Three-Fold Way – Part 5: a triangle of functors relating the categories of real, complex and quaternionic Hilbert spaces.

The Three-Fold Way – Part 6: how the three-fold way solves two problems with real and quaternionic quantum mechanics.

All these blog posts are based on the following paper… but they’ve got a lot more jokes, digressions and silly pictures thrown in, so personally I recommend the blog posts:

Quantum theory and division algebras.

And if you’re into normed division algebras and physics, you might like this talk I gave on my work with John Huerta, which also brings the octonions into the game:

Higher gauge theory, division algebras and superstrings.

Finally, around May, John and I will come out with a Scientific American article explaining the same stuff in a less technical way. It’ll be called “The strangest numbers in string theory”.

Whew! I think that’s enough division algebras for now. I’ve long been on a quest to save the quaternions and octonions from obscurity and show the world just how great they are. It’s time to declare victory and quit. There’s a more difficult quest ahead: the search for green mathematics, whatever that might be.

