The implementation is a hack, and can largely be characterized as using reflection to re-target Python’s own compilation engine. The syntax @petrinet is what’s called a decorator (if I recall correctly, the syntax was borrowed from Java around Python 2.4), and it works like this: the result of compiling the function definition that follows gets passed to the function petrinet, and the return value is bound to the name of the defined function in the outer scope, in place of the original.
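As a minimal sketch of the decorator mechanism just described (using a hypothetical toy decorator, not the petrinet code): the defined function is passed to the decorator, and whatever the decorator returns is what the function’s name ends up bound to.

```python
# Toy decorator: 'greet' below ends up bound to 'wrapper', not to the
# original function -- exactly the rebinding @petrinet relies on.
def shout(fun):
    def wrapper():
        return fun().upper()
    return wrapper

@shout
def greet():
    return "hello"

print(greet())  # prints HELLO -- 'greet' now names the wrapper
```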

The function petrinet() takes apart the compiled function object, which allows it to create bindings for all unbound variables meant to name species: each is bound to a singleton Bag containing one exemplar of that species token. The code of the function is then evaluated, while the Bag class of these bound values provides the appropriate definitions for the algebraic syntax of transitions. The flakiest part is the cheap way transition names get associated with transitions: evaluations of the >> operator in the sandboxed function are simply mapped, in order of occurrence, onto the names of local variables listed by the compiled function object. This boils down to the unenforced requirement that the decorated petri net function consist of simple assignment statements to distinct variables, of the form
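The introspection trick described above can be sketched in isolation (toy names throughout): unbound names a function refers to show up in its code object’s co_names, its local assignment targets in co_varnames, and eval() can run the code object with a custom globals dict supplying the unbound names.

```python
# Record each evaluation of >> in order, as the real implementation does.
registry = []

class Token:
    def __rshift__(self, other):
        registry.append((self, other))  # one entry per >> evaluation
        return self

def net():
    t1 = a >> b   # 'a' and 'b' are deliberately unbound here

code = net.__code__
env = {n: Token() for n in code.co_names}   # bind a, b to Token instances
eval(code, env, {})                         # runs the body; __rshift__ fires
print(code.co_varnames)                     # ('t1',)
print(len(registry))                        # 1
```

Zipping co_varnames with the registry, in order, is what associates the name t1 with the recorded transition.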

transition_name = (expression involving a single call to >>)

that is, with the RHS an expression involving exactly one evaluation of the >> operator. Also, the function must be declared with an empty parameter list or things will break.

The function petrirun is there to demonstrate that the intended semantics is captured (in a more realistic setting the procedural interpretation of the petri net would be better decoupled). petrirun() gets instantiated by petrinet() for each petri net as the executable object. It is actually a generator, because it returns values with a yield statement, which allows subsequent values to be pulled out of it on demand, for instance as the generator in a for-loop as demonstrated above. The list of transitions and the list of species are attached to it as user function attributes, for later user access. The petrirun instance should also be renamed to the decorated function name (so that it appears under that name in the REPL or in tracebacks), but that was left out. petrirun() is also designed to exploit named-parameter function call syntax to serve the stipulation of the initial labelling of the run.
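The generator-plus-attributes pattern petrirun uses can be sketched with a trivial example: a function containing yield returns a generator whose values are pulled on demand, and arbitrary attributes can be attached to the function object afterwards.

```python
# A generator function: calling it returns a generator object; values are
# produced lazily, one per iteration of the consuming loop.
def countdown(n):
    while n > 0:
        yield n
        n -= 1

countdown.species = ["example"]   # user function attribute, as with petrirun

for value in countdown(3):
    print(value)   # 3, then 2, then 1
```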

I.e., how does it work? I am not familiar with these 2.7 language constructs.

Looking it over some more, I’m getting a hazy picture of it. The syntax “@petrinet ___” is a way to pass the defined set of transitions to the function “petrinet.”

What is the type of the defined objects like “chemical_reaction”? Are they something like rulesets? And what is the general role of the symbols |, _ and >>? Yet you call the object AIDS like a function.

Above and beyond these language specifics, can you post here a small dissection of the code? Saying essentially: here is what I wanted to achieve, and these are the mechanisms I used, which work together correctly because of XYZ. Thanks.

@petrinet
def AIDS() :
    α = 0.1 | _ >> healthy
    β = healthy+virion >> infected
    γ = 0.2 | healthy >> _
    δ = 0.3 | infected >> _
    ε = infected >> infected+virion
    ζ = 0.2 | virion >> _

@petrinet
def rabbits_and_wolves() :
    birth = rabbit >> 2*rabbit
    predation = rabbit+wolf >> 2*wolf
    death = 0.3 | wolf >> _

@petrinet
def chemical_reaction() :
    split = H2O >> 2*H+O
    combine = 2*H+O >> H2O

The optional number and vertical bar at the beginning of a rule express a weight, or unnormalized rate, different from unity.
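A sketch of how such weights come into play in a stochastic step (function name hypothetical): an enabled transition is picked with probability proportional to its weight.

```python
import random

def weighted_pick(transitions):
    """transitions: list of (name, weight) pairs; returns a chosen name."""
    pick = random.random() * sum(w for _, w in transitions)
    for name, w in transitions:
        pick -= w
        if pick <= 0:
            return name
    return transitions[-1][0]   # guard against floating-point leftovers

print(weighted_pick([("beta", 1), ("gamma", 0.2)]))  # "beta" about 5/6 of the time
```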

The code is written for Python 3.3 but, except for the use of Greek letters in the “AIDS” petri net, should also run on Python 2.7. The implementation of the syntax in 65 LOC is minimalistic, as no step is taken to ensure graceful errors when the DSL code strays from the syntax illustrated.

The way to invoke the code is

>>> for step,transition,labelling in AIDS(healthy=5,virion=5):
...     print(transition,"==>",labelling)
...     if step>10 : break
...
β ==> 4*healthy+1*infected+4*virion
ε ==> 4*healthy+1*infected+5*virion
β ==> 3*healthy+2*infected+4*virion
β ==> 2*healthy+3*infected+3*virion
ε ==> 2*healthy+3*infected+4*virion
β ==> 1*healthy+4*infected+3*virion
ε ==> 1*healthy+4*infected+4*virion
β ==> 5*infected+3*virion
ε ==> 5*infected+4*virion
ε ==> 5*infected+5*virion
ε ==> 5*infected+6*virion

And here is the code. I just hope the sourcecode quoting works as advertised :)

# -*- coding: utf-8 -*-
from __future__ import print_function
from collections import Counter

class Bag(Counter) :
    weight=1
    transitions=[]
    output={}
    def __rmul__(self, other) :
        return Bag({s:n*other for s,n in self.items()})
    def __add__(self,other) :
        return Bag(Counter.__add__(self,other))
    def __rshift__(self,other) :
        self.output=other
        self.transitions.append(self)
        return self
    def __ror__(self,other) :
        self.weight=other
        return self
    def __call__(self,other) :
        return Bag(other-self+self.output)
    def __repr__(self) :
        return '+'.join("%s*%s" % (n,s) for s,n in sorted(self.items()) if n) or '{}'

def petrinet(fun) :
    code = getattr(fun,'__code__',0) or fun.func_code
    species={ n : Bag([n]) for n in code.co_names }
    species['_'] = Bag()
    Bag.transitions[:]=[]
    eval(code,species,{})
    transitions=Bag.transitions[:]
    for n,t in zip(code.co_varnames,transitions):
        t.name=n
    del species['__builtins__']
    del species['_']
    def petrirun(**labelling) :
        from random import random
        from itertools import count
        labelling=Bag(labelling)
        for step in count(1) :
            possible=[t for t in transitions if t == t & labelling]
            if not possible : break
            pick=random()*sum(t.weight for t in possible)
            for k,transition in enumerate(possible) :
                pick-=transition.weight
                if pick<=0 :
                    labelling=transition(labelling)
                    yield step,transition.name,labelling
                    break
    petrirun.species=species
    petrirun.transitions=transitions
    return petrirun
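To see what the operators do in isolation, here is a single rule built and applied by hand: ‘+’ merges species multisets, ‘>>’ records the output side and registers the transition, ‘|’ attaches a weight, and calling the transition on a labelling applies it. (A trimmed copy of the Bag class above is re-listed so this snippet runs on its own.)

```python
from collections import Counter

class Bag(Counter):
    weight = 1
    transitions = []
    output = {}
    def __add__(self, other):
        return Bag(Counter.__add__(self, other))
    def __rshift__(self, other):          # lhs >> rhs: record output, register
        self.output = other
        self.transitions.append(self)
        return self
    def __ror__(self, other):             # weight | transition
        self.weight = other
        return self
    def __call__(self, other):            # apply transition to a labelling
        return Bag(other - self + self.output)

healthy, virion, infected = Bag(['healthy']), Bag(['virion']), Bag(['infected'])
rule = 0.5 | (healthy + virion >> infected)   # weight 0.5
state = Bag({'healthy': 2, 'virion': 1})
print(sorted(rule(state).items()))   # [('healthy', 1), ('infected', 1)]
```

One healthy and one virion were consumed and one infected produced, exactly the multiset rewriting the DSL encodes.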

Wow. I sort of get the idea, but not well.

If the stochastic pi-machine is pi-calculus in action, then that is proof that the calculus is cool and useful.

So I’m trying to make sense of the code that we have for the sample reactions. Here is a document that explains some of the examples in the Silverlight reaction simulator:

It includes radioactive decay, and Lotka reactions. Here is the Lotka reaction logic:

directive sample 5.0 1000
directive plot Y()

new c1@5.0:chan
new c2@0.0025:chan

let X() = ?c1; X()

let Y() =
  do !c1; (Y() | Y())
  or !c2
  or ?c2

run (X() | 10 of Y())

The following questions are directed generally to anyone here. How much of this is pure pi-calculus, and how much is in the stochastic extension? The channels have rate constants associated with them, but is that the extent of the extension?

Can anyone give a detailed analysis and explanation of this code, to explain how it is achieving the marvelous simulation result? Remember, I don’t really know what pi-calculus is, so you’ll have to start from a point of few assumptions.

Here is the language definition document for stochastic pi-calculus:

Thanks

Hmmm… My point wasn’t really about petri nets, and more about the relevance of the specific programming language. However, now that I am confronted with the problem, I suppose I should put my money where my mouth is. How would I implement petri-nets?

Call it Graham’s principle that the structure visible in a program should correspond to the problem, and not to the programming language. The ideal for this is usually Lisp (for which see Paul Graham passim), or, say, for large scale linear algebra, APL (alas only once upon a time) or (today) Matlab. For working with relations on tuples I think Prolog is likely to provide a clean map between concrete and abstract.

A petri-net is a bipartite graph G plus a set of counters on one of the types of nodes, T, so that the vector of these counters defines a state. G defines a relation R, of transitions from state to state; thus a particular run of a simulation of a petri-net has the form

T1 -R-> T2 -R-> T3 -R-> T4 -R-> …

Prolog allows me to model tuple transitions very easily. If I model numbers ‘peano-style’, as s(s(…(0))), and T has the form

t(n1, n2, n3, ….)

Then I can model a rule directly in Prolog as:

r(rulename,
  t(n1.old, n2.old, n3.old, ….),
  t(n1.new, n2.new, n3.new, ….)).

which is simply a declared relation between states (I’ve also included the name – which gives me a bit more control and documentation) and I am done.

So how does this work in practice? Well, I can write down the ‘Chemical reaction’ net as:

% t(O, H, H2O).
r(split,
  t(  O ,    H  , s(H2O)),
  t(s(O), s(s(H)),   H2O )).
r(combine,
  t(s(O), s(s(H)),   H2O ),
  t(  O ,    H  , s(H2O))).

Note that rules are directly declarative and I have completely separated them from any other interpretive framework – I can modify my non-deterministic model, or implement breadth-first or bounded depth-first search on the space easily as extensions. I can even run my model backwards, should I so want.

A basic simulator (with a bound on the number of transitions) then looks like:

simulate(T, T, [], 0).
simulate(Tinit, Tfinal, [(Tinit, R) | L], N) :-
    choose(R),
    r(R, Tinit, Tnext),
    PrecN is N - 1,
    simulate(Tnext, Tfinal, L, PrecN).
simulate(T, T, [], N) :-
    not r(_, T, _),
    N > 0.

choose is a bit messier, since it should enumerate the possible rules in random order. A state-specific version (we can do much better – more general – with a bit more space) would be

choose(R) :-
    ( 0 =:= random(2) ->
        ( R = split ; R = combine )
    ;   ( R = combine ; R = split ) ).

A call to the simulator, with a bound of 20 transitions, is then:

simulate(t(s(s(s(s(s(0))))), s(s(s(0))), s(s(s(s(0))))), Tfinal, L, 20).
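Since the author warns of possible typos, here is a hedged Python transcription of the same sketch (plain ints stand in for the Peano numerals): a rule is a (name, pre, post) triple, and simulate applies randomly chosen enabled rules up to a bound, starting from the same O=5, H=3, H2O=4 state as the call above.

```python
import random

RULES = [
    # split:   one H2O     ->  one O + two H
    ("split",   {"O": 0, "H": 0, "H2O": 1}, {"O": 1, "H": 2, "H2O": 0}),
    # combine: one O + two H  ->  one H2O
    ("combine", {"O": 1, "H": 2, "H2O": 0}, {"O": 0, "H": 0, "H2O": 1}),
]

def enabled(state, pre):
    """A rule fires only if the state covers its pre-multiset."""
    return all(state[s] >= n for s, n in pre.items())

def simulate(state, bound):
    trace = []
    for _ in range(bound):
        possible = [r for r in RULES if enabled(state, r[1])]
        if not possible:
            break
        name, pre, post = random.choice(possible)
        state = {s: state[s] - pre[s] + post[s] for s in state}
        trace.append((name, dict(state)))
    return state, trace

final, trace = simulate({"O": 5, "H": 3, "H2O": 4}, 20)
print(final)
```

Note that O + H2O and H + 2·H2O are conserved by both rules, so any run preserves them (here 9 and 11 respectively).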

Not sure there is enough code here to count as a program (<= 19 lines). I don’t have a Prolog interpreter to hand – my employers are very unenthusiastic about my installing my own software – so there may be typos above, but I hope the idea is clear.

Note the argument here is not in favor of Prolog (a similar-flavored – though not quite identical and not quite so concise – implementation would be easy in a language like Scheme), but in favor of choosing your tools so as to move you close to the problem rather than trying to move the problem closer to your tools; and thus, in this case, no objects/classes. If you were designing a user interface, then an object/class model would be close to your problem.

David Tanzer asked:

Bistable and periodic refer to the solutions to the rate equation, right?

Right.

We could think of the state space of a Petri net with k species either as the non-negative region of $\mathbb{R}^k$, or as $\mathbb{N}^k$, depending on whether you use a continuous approximation or not. In the continuous case we use a differential equation, the rate equation.

Right. My book with Jacob Biamonte wound up spending a lot of time on the rate equation. We explained that when a Petri net has ‘deficiency zero’ and is ‘weakly reversible’, people have a good understanding of solutions of the rate equation. There are just as many equilibrium solutions as you’d expect, no more and no less. They’re all stable, and there are no periodic solutions.

Even better, in this case, the ‘Anderson–Craciun–Kurtz theorem’ lets you quickly obtain equilibrium solutions of the *master* equation from equilibrium solutions of the rate equation.

But this case is, in a sense, the *boring* case!

(Is there a discrete analog?)

Yes, but people haven’t studied this as much. In fact, I don’t know if I’ve seen it anywhere, but I could easily describe it. It might be fun to generalize the zero deficiency theorem to this case, if nobody has yet. I think I could do it.

The solutions to the equations give the trajectories in state space, given an initial state. So we can look at attractor points and basins of attraction.

What I’m calling a ‘stable equilibrium’ is exactly the same as what you’re calling an attractor. When the ‘zero deficiency’ condition holds, we have a good handle on the attractors, and there’s only one attractor in each ‘stoichiometric compatibility class’.

Does bistable mean that there are two attractors and the whole non-negative region of $\mathbb{R}^k$ is divided into the basins of attraction for these points?

No, but close. It means there are two attractors in some stoichiometric compatibility class.

I’m reluctant to explain ‘stoichiometric compatibility class’ in general, having already done so here, but if you think about the water formation and dissociation example you described, you’ll quickly get the idea.

There’s one stoichiometric compatibility class for each choice of these two quantities:

1) number of H_{2}O’s plus the number of O’s

2) twice the number of H_{2}O’s plus the number of H’s

These quantities can never change, since hydrogen and oxygen atoms can’t be created or destroyed by the reactions in this Petri net. So, a solution of the rate equation *or* master equation can only wander around in one stoichiometric compatibility class.

Thanks to the deficiency zero theorem, the rate equation has one attractor in each stoichiometric compatibility class. Where this attractor is depends on the rate constants for the formation and dissociation of water.
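The two conserved quantities above can be checked mechanically; a quick sketch for the water Petri net: neither reaction changes H2O + O or H + 2·H2O.

```python
def invariants(state):
    # (number of H2O plus number of O,  twice H2O plus number of H)
    return (state["H2O"] + state["O"], 2 * state["H2O"] + state["H"])

def dissociate(state):   # H2O -> 2 H + O
    return {"O": state["O"] + 1, "H": state["H"] + 2, "H2O": state["H2O"] - 1}

def form(state):         # 2 H + O -> H2O
    return {"O": state["O"] - 1, "H": state["H"] - 2, "H2O": state["H2O"] + 1}

s = {"O": 3, "H": 4, "H2O": 2}
assert invariants(dissociate(s)) == invariants(s)   # both quantities unchanged
assert invariants(form(s)) == invariants(s)
```

So a trajectory of either the rate equation or the master equation stays on the surface where both quantities keep their initial values, which is exactly the stoichiometric compatibility class.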

This is a nice example of the zero deficiency theorem.

Does periodic mean that there are points that cycle back to themselves,

Yes. The time it takes to come back is called the ‘period’.

or is it a broader notion, meaning that there are points whose trajectories are bounded, but do not converge to an attractor?

No, that broader notion would include not just periodic solutions but quasiperiodic and chaotic ones.

Can a system be bistable and periodic, in the sense that there are two attractors, but there are also points that are periodic?

What’s periodic is not the system but a particular solution. A system can have any number of attractors and any number of periodic solutions.

The word ‘bistable’ focuses undue attention on the number *two*, which is interesting just because it’s the first number bigger than one. A light switch is bistable, and people are interested in ‘switches’ in biochemistry, but a general switch could have many stable settings.

Basically I’m asking about the character of the solutions to the rate equation, which I have not yet investigated. Can you summarize the main theorems regarding the solutions?

I summarized the deficiency zero theorem and Anderson-Craciun-Kurtz theorem rather vaguely above. I did it quite precisely in the book. There are also lots of other theorems, like the deficiency one theorem.

3. Given a stochastic Petri net, is there a decision procedure to say whether it is “bistable,” or “periodic”?

There are theorems—most notably the deficiency zero theorem and deficiency one theorem—that give necessary or sufficient conditions for these properties, but I don’t believe a full-fledged decision procedure is known.

Or does one have to resort to a probabilistic test, involving selections of an initial starting vector, and testing to see whether (1) it appears to be headed for an attractor (or infinity), or (2) whether it appears to be a periodic point. Clearly “appears” needs to be defined.

People can already do a lot better than this brute-force method, thanks in part to all the theorems people have shown, but I think there’s a huge unexplored territory here. Mathematical chemists are interested in this problem, and it seems to be tricky.

The pi calculus is meant as a distilled framework for describing ‘concurrency’—roughly, computer networks where nodes can create channels to other nodes and send messages along these channels—much as the lambda calculus is a distilled framework for describing computations done sequentially by a single processor.

Personally I find these distilled frameworks, that seek to get everything done with as few primitives as possible, less clear than a category-theoretic description of *large classes* of frameworks. When you try to distill everything down to a few primitives you wind up making a lot of arbitrary choices, and the resulting framework tends to look cryptic.

Of course, category theory also looks cryptic to those who haven’t studied it… but it has the advantage of being so generally useful that once you learn it and formulate some idea in this way, a bunch of connections to other ideas are instantly visible—ideas in math, logic, physics, and computation, for starters.

Right now I have just one grad student: Mike Stay, who works at Google. He’s working with me on categories and computation. In that post David Tweed mentions, he was endeavoring to explain the pi calculus to me, in response to my plea for help. As a result, he got interested in describing

By now he’s gone a lot further. See:

• Mike Stay, Higher categories for concurrency, *n*-Category Café.

for the state of his thinking a year and a half ago. Since then he’s been working on this subject with Jamie Vicary and making even more progress.

By now it’s clear to me that bigraphs, the pi calculus and other frameworks for concurrency would really profit from a category-theoretic description, and that when this is done it’ll look a bit like a *categorification* of Petri net theory, with symmetric monoidal bicategories replacing symmetric monoidal categories.

In other words: instead of describing a bunch of ‘particles’ interacting in time, these fancier frameworks describe a bunch of *labelled graphs* interacting and changing topology in time. The vertices of the graph are what I called ‘nodes’, and the edges are what I called ‘channels’. The nodes can do things (e.g. compute) and send messages to each other along the edges.

This sort of framework might be useful in biology and ecology, too. It would certainly be relevant to the ‘smart grid’.

There are surely better actual tutorials, but here’s a lightning introduction by Mike Stay.
