Network Theory (Part 2)

31 March, 2011

Today I’d like to start telling you about some research Jacob Biamonte and I are doing on stochastic Petri nets, quantum field theory, and category theory. It’ll take a few blog entries to cover this story, which is part of a larger story about network theory.

Stochastic Petri nets are one of many different diagrammatic languages people have evolved to study complex systems. We’ll see how they’re used in chemistry, molecular biology, population biology and queuing theory, which is roughly the science of waiting in line. If you’re a faithful reader of this blog, you’ve already seen an example of a Petri net taken from chemistry:

It shows some chemicals and some reactions involving these chemicals. To make it into a stochastic Petri net, we’d just label each reaction by a nonnegative real number: the reaction rate constant, or rate constant for short.

In general, a Petri net will have a set of states, which we’ll draw as yellow circles, and a set of transitions, which we’ll draw as blue rectangles. Here’s a Petri net from population biology:

Now, instead of different chemicals, the states are different species. And instead of chemical reactions, the transitions are processes involving our species.

This Petri net has two states: rabbit and wolf. It has three transitions:

• In birth, one rabbit comes in and two go out. This is a caricature of reality: these bunnies reproduce asexually, splitting in two like amoebas.

• In predation, one wolf and one rabbit come in and two wolves go out. This is a caricature of how predators need to eat prey to reproduce.

• In death, one wolf comes in and nothing goes out. Note that we’re pretending rabbits don’t die unless they’re eaten by wolves.

If we labelled each transition with a rate constant, we’d have a stochastic Petri net.

To make this Petri net more realistic, we’d have to make it more complicated. I’m trying to explain general ideas here, not realistic models of specific situations. Nonetheless, this Petri net already leads to an interesting model of population dynamics: a special case of the so-called ‘Lotka-Volterra predator-prey model’. We’ll see the details soon.

More to the point, this Petri net illustrates some possibilities that our previous example neglected. Every transition has some ‘input’ states and some ‘output’ states. But a state can show up more than once as the output (or input) of some transition. And as we see in ‘death’, we can have a transition with no outputs (or inputs) at all.

But let me stop beating around the bush, and give you the formal definitions. They’re simple enough:

Definition. A Petri net consists of a set S of states and a set T of transitions, together with a function

i : S \times T \to \mathbb{N}

saying how many copies of each state show up as input for each transition, and a function

o: S \times T \to \mathbb{N}

saying how many copies of each state show up as output.

Definition. A stochastic Petri net is a Petri net together with a function

r: T \to [0,\infty)

giving a rate constant for each transition.
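
If you think in code, here’s one way to flesh out these definitions. This is just an illustrative sketch in Python, not any standard Petri net library, and the rate constants for the rabbit/wolf net are invented:

```python
from dataclasses import dataclass

@dataclass
class StochasticPetriNet:
    states: list        # the set S
    transitions: list   # the set T
    inputs: dict        # i : S x T -> N, stored as {(state, transition): count}
    outputs: dict       # o : S x T -> N, likewise; missing pairs count as 0
    rates: dict         # r : T -> [0, infinity)

# The rabbit/wolf net from above, with invented rate constants:
net = StochasticPetriNet(
    states=['rabbit', 'wolf'],
    transitions=['birth', 'predation', 'death'],
    inputs={('rabbit', 'birth'): 1,
            ('rabbit', 'predation'): 1,
            ('wolf', 'predation'): 1,
            ('wolf', 'death'): 1},
    outputs={('rabbit', 'birth'): 2,
             ('wolf', 'predation'): 2},
    rates={'birth': 1.0, 'predation': 0.02, 'death': 0.5},
)
```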

Starting from any stochastic Petri net, we can get two things. First:

• The master equation. This says how the probability that we have a given number of things in each state changes with time.

Since stochastic means ‘random’, the master equation is what gives stochastic Petri nets their name. The master equation is the main thing I’ll be talking about in future blog entries. But not right away!

Why not?

In chemistry, we typically have a huge number of things in each state. For example, a gram of water contains about 3 \times 10^{22} water molecules, and a smaller but still enormous number of hydroxide ions (OH−), hydronium ions (H3O+), and other scarier things. These things blunder around randomly, bump into each other, and sometimes react and turn into other things. There’s a stochastic Petri net describing all this, as we’ll eventually see. But in this situation, we don’t usually want to know the probability that there are, say, exactly 3,101,849,578,476,264 hydronium ions. That would be too much information! We’d be quite happy knowing the expected value of the number of hydronium ions, so we’d be delighted to have a differential equation that says how this changes with time.

And luckily, such an equation exists—and it’s much simpler than the master equation. So, today we’ll talk about:

• The rate equation. This says how the expected number of things in each state changes with time.

But first, I hope you get the overall idea. The master equation is stochastic: at each time the number of things in each state is a random variable taking values in \mathbb{N}, the set of natural numbers. The rate equation is deterministic: at each time the expected number of things in each state is a non-random variable taking values in [0,\infty), the set of nonnegative real numbers. If the master equation is the true story, the rate equation is only approximately true—but the approximation becomes good in some limit where the expected value of the number of things in each state is large, and the standard deviation is comparatively small.
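
For the impatient, here is a tiny preview of what ‘stochastic’ means in practice. The sketch below simulates the rabbit/wolf net one random event at a time, in the style of Gillespie’s algorithm: each transition fires at a rate equal to its rate constant times the product of the numbers of things in its input states. The constants and initial populations are invented:

```python
import random

beta, gamma, delta = 1.0, 0.02, 0.5      # invented rate constants
x, y, t = 40, 10, 0.0                    # integer token counts, and time

while t < 20.0:
    rates = [beta * x, gamma * x * y, delta * y]   # birth, predation, death
    total = sum(rates)
    if total == 0:                       # nothing can happen any more
        break
    t += random.expovariate(total)       # waiting time to the next event
    event = random.choices(['birth', 'predation', 'death'], weights=rates)[0]
    if event == 'birth':
        x += 1                           # rabbit -> rabbit + rabbit
    elif event == 'predation':
        x -= 1; y += 1                   # rabbit + wolf -> wolf + wolf
    else:
        y -= 1                           # wolf -> nothing

print(f"at time {t:.2f}: {x} rabbits, {y} wolves")
```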

If you’ve studied physics, this should remind you of other things. The master equation should remind you of the quantum harmonic oscillator, where energy levels are discrete, and probabilities are involved. The rate equation should remind you of the classical harmonic oscillator, where energy levels are continuous, and everything is deterministic.

When we get to the ‘original research’ part of our story, we’ll see this analogy is fairly precise! We’ll take a bunch of ideas from quantum mechanics and quantum field theory, and tweak them a bit, and show how we can use them to describe the master equation for a stochastic Petri net.

Indeed, the random processes that the master equation describes can be drawn as pictures:

This looks like a Feynman diagram, with animals instead of particles! It’s pretty funny, but the resemblance is no joke: the math will back it up.

I’m dying to explain all the details. But just as classical field theory is easier than quantum field theory, the rate equation is simpler than the master equation. So we should start there.

The rate equation

If you hand me a stochastic Petri net, I can write down its rate equation. Instead of telling you the general rule, which sounds rather complicated at first, let me do an example. Take the Petri net we were just looking at:

We can make it into a stochastic Petri net by choosing a number for each transition:

• the birth rate constant \beta

• the predation rate constant \gamma

• the death rate constant \delta

Let x(t) be the number of rabbits and let y(t) be the number of wolves at time t. Then the rate equation looks like this:

\frac{d x}{d t} = \beta x - \gamma x y

\frac{d y}{d t} = \gamma x y - \delta y

It’s really a system of equations, but I’ll call the whole thing “the rate equation” because later we may get smart and write it as a single equation.

See how it works?

• We get a term \beta x in the equation for rabbits, because rabbits are born at a rate equal to the number of rabbits times the birth rate constant \beta.

• We get a term - \delta y in the equation for wolves, because wolves die at a rate equal to the number of wolves times the death rate constant \delta.

• We get a term - \gamma x y in the equation for rabbits, because rabbits die at a rate equal to the number of rabbits times the number of wolves times the predation rate constant \gamma.

• We also get a term \gamma x y in the equation for wolves, because wolves are born at a rate equal to the number of rabbits times the number of wolves times \gamma.
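
If you’d like to watch these equations in action, here’s a crude numerical sketch in Python (invented constants, plain Euler steps). With these numbers the populations cycle around the equilibrium at x = \delta/\gamma, y = \beta/\gamma:

```python
beta, gamma, delta = 1.0, 0.02, 0.5   # invented birth, predation, death rates
x, y = 40.0, 10.0                     # initial rabbit and wolf populations
dt = 0.001                            # a small step, since Euler is crude

for step in range(30_001):
    if step % 5_000 == 0:
        print(f"t = {step * dt:5.1f}: rabbits = {x:7.2f}, wolves = {y:7.2f}")
    dx = beta * x - gamma * x * y     # births minus predation losses
    dy = gamma * x * y - delta * y    # predation gains minus deaths
    x += dx * dt
    y += dy * dt
```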

Of course I’m not claiming that this rate equation makes any sense biologically! For example, think about predation. The \gamma x y terms in the above equation would make sense if rabbits and wolves roamed around randomly, and whenever a wolf and a rabbit came within a certain distance, the wolf had a certain probability of eating the rabbit and giving birth to another wolf. At least it would make sense in the limit of large numbers of rabbits and wolves, where we can treat x and y as varying continuously rather than discretely. That’s a reasonable approximation to make sometimes. Unfortunately, rabbits and wolves don’t roam around randomly, and a wolf doesn’t spit out a new wolf each time it eats a rabbit.

Despite that, the equations

\frac{d x}{d t} = \beta x - \gamma x y

\frac{d y}{d t} = \gamma x y - \delta y

are actually studied in population biology. As I said, they’re a special case of the Lotka-Volterra predator-prey model, which looks like this:

\frac{d x}{d t} = \beta x - \gamma x y

\frac{d y}{d t} = \epsilon x y - \delta y

The point is that while these models are hideously oversimplified and thus quantitatively inaccurate, they exhibit interesting qualitative behavior that’s fairly robust. Depending on the rate constants, these equations can show either a stable equilibrium or stable periodic behavior. And as we go from one regime to the other, we see a kind of catastrophe called a “Hopf bifurcation”. I explained all this in week308 and week309. There I was looking at some other equations, not the Lotka-Volterra equations. But their qualitative behavior is the same!

If you want stochastic Petri nets that give quantitatively accurate models, it’s better to retreat to chemistry. Compared to animals, molecules come a lot closer to roaming around randomly and having a chance of reacting when they come within a certain distance. So in chemistry, rate equations can be used to make accurate predictions.

But I’m digressing. I should be explaining the general recipe for getting a rate equation from a stochastic Petri net! You might not be able to guess it from just one example. But I sense that you’re getting tired. So let’s stop now. Next time I’ll do more examples, and maybe even write down a general formula. But if you’re feeling ambitious, you can try this now:

Puzzle. Can you write down a stochastic Petri net whose rate equation is the Lotka-Volterra predator-prey model:

\frac{d x}{d t} = \beta x - \gamma x y

\frac{d y}{d t} = \epsilon x y - \delta y

for arbitrary \beta, \gamma, \delta, \epsilon \ge 0? If not, for which values of these rate constants can you do it?

References

If you want to study a bit on your own, here are some great online references on stochastic Petri nets and their rate equations:

• Peter J. E. Goss and Jean Peccoud, Quantitative modeling of stochastic systems in molecular biology by using stochastic Petri nets, Proc. Natl. Acad. Sci. USA 95 (June 1998), 6750–6755.

• Jeremy Gunawardena, Chemical reaction network theory for in-silico biologists.

• Martin Feinberg, Lectures on reaction networks.

I should admit that the first two talk about ‘chemical reaction networks’ instead of Petri nets. That’s no big deal: any chemical reaction network gives a Petri net in a pretty obvious way. You can probably figure out how; if you get stuck, just ask.

Also, I should admit that Petri net people say place where I’m saying state.

Here are some other references, which aren’t free unless you have an online subscription or access to a library:

• Peter J. Haas, Stochastic Petri Nets: Modelling, Stability, Simulation, Springer, Berlin, 2002.

• F. Horn and R. Jackson, General mass action kinetics, Archive for Rational Mechanics and Analysis 47 (1972), 81–116.

• Ina Koch, Petri nets – a mathematical formalism to analyze chemical reaction networks, Molecular Informatics 29 (2010), 838–843.

• Darren James Wilkinson, Stochastic Modelling for Systems Biology, Taylor & Francis, New York, 2006.


Network Theory (Part 1)

4 March, 2011

As a mathematician who has gotten interested in the problems facing our planet, I’ve been trying to cook up some new projects to work on. Over the decades I’ve spent a lot of time studying quantum field theory, quantum gravity, n-categories, and numerous pretty topics in pure math. My accumulated knowledge doesn’t seem terribly relevant to my new goals. But I don’t feel like doing a complete ‘brain dump’ and starting from scratch. And my day job still requires that I prove theorems.

Green Mathematics

I wish there were a branch of mathematics—in my dreams I call it green mathematics—that would interact with biology and ecology just as fruitfully as traditional mathematics interacts with physics. If the 20th century was the century of physics, while the 21st is the century of biology, shouldn’t mathematics change too? As we struggle to understand and improve humanity’s interaction with the biosphere, shouldn’t mathematicians have some role to play?

Of course, it’s possible that when you study truly complex systems—from a living cell to the Earth as a whole—mathematics loses the unreasonable effectiveness it so famously has when it comes to simple things like elementary particles. So, maybe there is no ‘green mathematics’.

Or maybe ‘green mathematics’ can only be born after we realize it needs to be fundamentally different than traditional mathematics. For starters, it may require massive use of computers, instead of the paper-and-pencil methods that work so well in traditional math. Simulations might become more important than proofs. That’s okay with me. Mathematicians like things to be elegant—but one can still have elegant definitions and elegant models, even if one needs computer simulations to see how the models behave.

Perhaps ‘green mathematics’ will require a radical shift of viewpoint that we can barely begin to imagine.

It’s also possible that ‘green mathematics’ already exists in preliminary form, scattered throughout many different fields: mathematical biology, quantitative ecology, bioinformatics, artificial life studies, and so on. Maybe we just need more mathematicians to learn these fields and seek to synthesize them.

I’m not sure what I think about this ‘green mathematics’ idea. But I think I’m getting a vague feel for it. This may sound corny, but I feel it should be about structures that are more like this:

than this:

I’ve spent a long time exploring the crystalline beauty of traditional mathematics, but now I’m feeling an urge to study something slightly more earthy.

Network Theory

When dreaming of grand syntheses, it’s easy to get bogged down in vague generalities. Let’s start with something smaller and more manageable.

Network theory, and the use of diagrams, have emerged independently in many fields of science. In particle physics we have Feynman diagrams:


In the humbler but more practical field of electronics we have circuit diagrams:

[Figure: amplifier with bass boost]

Throughout engineering we also have various other styles of diagram, such as bond graphs:


I’ve already told you about Petri nets, which are popular in computer science… but also nice for describing chemical reactions:

[Figure: Petri net]

‘Chemical reaction networks’ do a similar job, in a more primitive way:

[Figure: chemical reaction network]

Chemistry shades imperceptibly into biology, and biology uses so many styles of diagram that an organization has tried to standardize them:

• Systems Biology Graphical Notation (SBGN) homepage.

SBGN is made up of three different languages, representing different visions of biological systems. Each language involves a comprehensive set of symbols with precise semantics, together with detailed syntactic rules for how maps are to be interpreted:

1) The Process Description language shows the temporal course of biochemical interactions in a network.


[Figure: Process Description (PD) example]

2) The Entity Relationship language lets you see all the relationships in which a given entity participates, regardless of the temporal aspects.


[Figure: Entity Relationship (ER) example]

3) The Activity Flow language depicts the flow of information between biochemical entities in a network.


[Figure: Activity Flow (AF) example]

Biology shades into ecology, and in the 1950s, Howard T. Odum developed the ‘Energy Systems Language’ while studying tropical forests. Odum is now considered to be the founder of ‘systems ecology’. If you can get ahold of this big fat book, you’ll see it’s packed with interesting diagrams describing the flow of energy through ecosystems:

• Howard T. Odum, Systems Ecology: an Introduction, Wiley-Interscience, New York, 1983.

His language is sometimes called ‘Energese’, for short:

[Figure: Energy Systems Symbols]

The list goes on and on, and I won’t try for completeness… but we shouldn’t skip probability theory, statistics and machine learning! A Bayesian network, also known as a “belief network”, is a way to represent knowledge about some domain: it consists of a graph where the nodes are labelled by random variables and the edges represent probabilistic dependencies between these random variables. Various styles of diagrams have been used for these:


[Figure: structural equation modeling]

And don’t forget neural networks!

What Mathematicians Can Do

It’s clear that people from different subjects are reinventing the same kinds of diagrams. It’s also clear that diagrams are being used in a number of fundamentally different ways. So, there’s a lot to sort out.

I already mentioned one attempt to straighten things out: Systems Biology Graphical Notation. But that’s not the only one. For example, in 2001 the International Council on Systems Engineering set up a committee to customize the existing Unified Modeling Language and create something called the Systems Modeling Language. This features nine types of diagrams!

So, people are already trying to systematize the use of diagrams. But mathematicians should join the fray.

Why? Because mathematicians are especially good at soaring above the particulars and seeing general patterns. Also, they know ways to think of diagrams, not just as handy tools, but as rigorously defined structures that you can prove theorems about… with the help of category theory.

I’ve written a bit about diagrams already, but not their ‘green’ applications. Instead, I focused on their applications to traditional subjects like topology, physics, logic and computation:

• John Baez and Aaron Lauda, A prehistory of n-categorical physics, to appear in Deep Beauty: Mathematical Innovation and the Search for an Underlying Intelligibility of the Quantum World, ed. Hans Halvorson, Cambridge U. Press.

• John Baez and Mike Stay, Physics, topology, logic and computation: a Rosetta Stone, in New Structures for Physics, ed. Bob Coecke, Lecture Notes in Physics vol. 813, Springer, Berlin, 2011, pp. 95-174.

It would be good to expand this circle of ideas to include chemistry, biology, ecology, statistics, and so on. There should be a mathematical theory underlying the use of networks in all these disciplines.

I’ve started a project on this with Jacob Biamonte, who works on two other applications of diagrams, namely to quantum computation and condensed matter physics:

• Jacob D. Biamonte, Stephen R. Clark and Dieter Jaksch, Categorical tensor networks.

So far we’ve focused on one aspect: stochastic Petri nets, which are used to describe chemical reactions and also certain predator-prey models in quantitative ecology. In the posts to come, I want to show how ideas from quantum field theory can be used in studying stochastic Petri nets, and how this relates to the ‘categorification’ of Feynman diagram theory.


Petri Nets

18 January, 2011

I’m trying to build bridges between mathematics and practical subjects like ecology and engineering — subjects that might help us save the planet.

As part of this, I have a project to explain how some ideas from electrical engineering, control theory, systems ecology, systems biology and the like can be formalized — and to some extent unified — using “symmetric monoidal categories”. Whenever you have a setup with:

• abstract gadgets that have ‘input wires’ and ‘output wires’,

and you can

• hook up these gadgets by connecting outputs to inputs,

and

• the wires can cross over each other, and

• it doesn’t matter whether one wire crosses over or under another

then you’ve probably got a symmetric monoidal category! For a precise definition, and lots of examples, try:

• John Baez and Mike Stay, Physics, topology, logic and computation: a Rosetta Stone, in New Structures for Physics, ed. Bob Coecke, Lecture Notes in Physics vol. 813, Springer, Berlin, 2011, pp. 95-174.

Back then I was excited about examples from ‘topological quantum field theory’, and ‘linear logic’, and ‘programming semantics’, and ‘quantum circuit theory’. But now I’m trying to come down to earth and think about examples of a more everyday nature. And they turn out to be everywhere!

For example, when you see these diagrams:

you may see a 10-watt amplifier with bass boost, and a 16-volt power supply with a power-on indicator light. But I see morphisms in symmetric monoidal categories!

Back in week296 I explained a symmetric monoidal category where the morphisms are electrical circuits made only from linear resistors. It’s easy to generalize this to ‘passive linear circuits’ where we include linear inductors and capacitors… and we can keep going further in this direction. I’m writing a paper on this stuff now.

But today I want to head in another direction, and tell you about something I’m beginning to work on with Jacob Biamonte: something called ‘Petri nets’.

Before I do, I have to answer a question that I can see forming in your forehead: what’s the use of this stuff?

I don’t actually think electrical engineers are going to embrace category theory — at least not until it’s routinely taught as part of college math. And I don’t even think they should! I don’t claim that category theory is going to help them do anything they want to do.

What it will do is help organize our understanding of systems made of parts.

In mathematics, whenever you see a pattern that keeps showing up in different places, you should try to precisely define it and study it — and whenever you see it showing up somewhere, you should note that fact. Eventually, over time, your store of patterns grows, and you start seeing connections that weren’t obvious before. And eventually, really cool things will happen!

It’s hard to say what these things will be before they happen. It’s almost not worth trying. For example, the first people who started trying to compute the perimeter of an ellipse could never have guessed that the resulting math would, a century or so later, be great for cryptography. The necessary chain of ideas was too long and twisty to possibly envision ahead of time.

But in the long run, having mathematicians develop and investigate a deep repertoire of patterns tends to pay off.

Petri nets – the definition

So: what’s a Petri net? Here’s my quick explanation, using graphics that David Tweed kindly provided for this article:

• Petri net, Azimuth Project.

A Petri net is a kind of diagram for describing processes that arise in systems with many components. They were invented in 1939 by Carl Adam Petri — when he was 13 years old — in order to describe chemical reactions. They are a widely used model of concurrent processes in theoretical computer science. They are also used to describe interactions between organisms (e.g. predation, death, and birth), manufacturing processes, supply chains, and so on.

Here is an example from chemistry:

The circles in a Petri net denote so-called states, which in this example are chemical compounds. The rectangular boxes denote transitions, which in this example are chemical reactions. Every transition has a finite set of input states, with wires coming in from those, and a finite set of output states, with wires going out. All this information can equally well be captured by the usual notation for chemical reactions, as follows:

C + O2 → CO2

CO2 + NaOH → NaHCO3

NaHCO3 + HCl → H2O + NaCl + CO2

One advantage of Petri nets is that we can also label each state by some number (0,1,2,3,…) of black dots, called tokens. In our example, each black dot represents a molecule of a given kind. Thus

describes a situation where one atom of carbon, one molecule of oxygen, one molecule of sodium hydroxide, and one molecule of hydrochloric acid are present. No molecules of any other kind are present at this time.

We can then describe processes that occur with the passage of time by moving the tokens around. If the carbon reacts with oxygen we obtain:

If the carbon dioxide combines with the sodium hydroxide to form sodium bicarbonate, we then obtain:

Finally, if the sodium bicarbonate and hydrochloric acid react, we get:

Note that in each case the following rule holds: for any given transition, we can delete one token for each of its input states and simultaneously add one token for each of its output states.
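
In code, this rule is a one-liner per state. Here’s an illustrative Python sketch of the three reactions above; the transition names and the little fire function are made up for this post, not any standard Petri net API:

```python
# Each transition is a pair (inputs, outputs); a state would appear several
# times in a list if the transition consumed or produced several copies of it.
transitions = {
    'r1': (['C', 'O2'],       ['CO2']),                 # C + O2 -> CO2
    'r2': (['CO2', 'NaOH'],   ['NaHCO3']),              # CO2 + NaOH -> NaHCO3
    'r3': (['NaHCO3', 'HCl'], ['H2O', 'NaCl', 'CO2']),  # NaHCO3 + HCl -> ...
}

tokens = {'C': 1, 'O2': 1, 'NaOH': 1, 'HCl': 1}         # the initial marking

def fire(name):
    """Delete one token per input state, add one token per output state."""
    ins, outs = transitions[name]
    if any(tokens.get(s, 0) < ins.count(s) for s in set(ins)):
        raise ValueError(f"{name} is not enabled")
    for s in ins:
        tokens[s] -= 1
    for s in outs:
        tokens[s] = tokens.get(s, 0) + 1

for name in ['r1', 'r2', 'r3']:
    fire(name)

print(tokens)   # H2O, NaCl and CO2 each end up with one token; the rest have 0
```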

Here is a simpler example, taken from this article:

• Petri net, Wikipedia.



Here the transitions are denoted by black rectangles. This example is somewhat degenerate, because there are no transitions with more than one input or more than one output. However, it illustrates the possibility of having more than one token in a given state. It also illustrates the possibility of a transition with no inputs, or no outputs. In chemistry this is useful for modelling a process where molecules of a given sort are added to or removed from the environment.

Symmetric monoidal categories

Now, what about symmetric monoidal categories? If you really want the definition, either read the Rosetta Stone paper or the nLab article. For now, you can probably fake it.

A symmetric monoidal category is, very roughly, a category with a tensor product. If we ignore the tokens, a Petri net is a way of specifying a symmetric monoidal category by giving a set of objects x_1, \dots, x_n and a set of morphisms between tensor products of these objects, for example

f_1 : x_1 \otimes x_2 \to x_2 \otimes x_2

f_2 : x_2 \to 1

f_3 : 1 \to x_1

where 1 denotes the tensor product of no objects. For example, the objects might be molecules and the morphisms might be chemical reactions; in this case the tensor product symbol is written ‘+‘ rather than ‘\otimes‘.
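
For example, under this reading the rabbit/wolf net from Part 2 above gives a symmetric monoidal category with two generating objects, rabbit and wolf, and three generating morphisms (my translation of that net, of course):

\mathrm{birth} : \mathrm{rabbit} \to \mathrm{rabbit} \otimes \mathrm{rabbit}

\mathrm{predation} : \mathrm{rabbit} \otimes \mathrm{wolf} \to \mathrm{wolf} \otimes \mathrm{wolf}

\mathrm{death} : \mathrm{wolf} \to 1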

The kind of symmetric monoidal category we get this way is a ‘free strict symmetric monoidal category’. It’s ‘free’ in the sense that it’s ‘freely generated’ from some objects and morphisms, without any relations. It’s ‘strict’ because we’re assuming the tensor product is precisely associative, not just associative ‘up to isomorphism’.

For more on how Petri nets describe free symmetric monoidal categories, see:

• Vladimiro Sassone, On the category of Petri net computations, 6th International Conference on Theory and Practice of Software Development, Proceedings of TAPSOFT ’95, Lecture Notes in Computer Science 915, Springer, Berlin, pp. 334-348.

Here Sassone describes a category of Petri nets and sketches the description of a functor from that category to the category of strict symmetric monoidal categories. Sassone also has some other papers on Petri nets and category theory.

As I mentioned, Petri nets have been put to work in many ways. I won’t even start trying to explain that today — for that, try the references on the Azimuth Project page. My only goal today was to convince you that Petri nets are a pathetically simple idea, nothing to be scared of. And if you happen to be a category theorist, it should also be pathetically simple to see how they describe free strict symmetric monoidal categories. If you’re not… well, never mind!

